Matrix Analysis

for Scientists & Engineers
Matrix Analysis
for Scientists & Engineers
Alan J. Laub
University of California
Davis, California
SIAM.
Copyright © 2005 by the Society for Industrial and Applied Mathematics.
1 0 9 8 7 6 5 4 3 2 1
All rights reserved. Printed in the United States of America. No part of this book
may be reproduced, stored, or transmitted in any manner without the written permission
of the publisher. For information, write to the Society for Industrial and Applied
Mathematics, 3600 University City Science Center, Philadelphia, PA 19104-2688.
MATLAB® is a registered trademark of The MathWorks, Inc. For MATLAB product information,
please contact The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098 USA,
508-647-7000, Fax: 508-647-7101, info@mathworks.com, www.mathworks.com
Mathematica is a registered trademark of Wolfram Research, Inc.
Mathcad is a registered trademark of Mathsoft Engineering & Education, Inc.
Library of Congress Cataloging-in-Publication Data
Laub, Alan J., 1948-
Matrix analysis for scientists and engineers / Alan J. Laub.
p. cm.
Includes bibliographical references and index.
ISBN 0-89871-576-8 (pbk.)
1. Matrices. 2. Mathematical analysis. I. Title.
QA188.L38 2005
512.9'434—dc22
2004059962
About the cover: The original artwork featured on the cover was created by freelance
artist Aaron Tallon of Philadelphia, PA. Used by permission.
SIAM is a registered trademark.
To my wife, Beverley
(who captivated me in the UBC math library
nearly forty years ago)
Contents
Preface xi
1 Introduction and Review 1
1.1 Some Notation and Terminology 1
1.2 Matrix Arithmetic 3
1.3 Inner Products and Orthogonality 4
1.4 Determinants 4
2 Vector Spaces 7
2.1 Definitions and Examples 7
2.2 Subspaces 9
2.3 Linear Independence 10
2.4 Sums and Intersections of Subspaces 13
3 Linear Transformations 17
3.1 Definition and Examples 17
3.2 Matrix Representation of Linear Transformations 18
3.3 Composition of Transformations 19
3.4 Structure of Linear Transformations 20
3.5 Four Fundamental Subspaces 22
4 Introduction to the Moore-Penrose Pseudoinverse 29
4.1 Definitions and Characterizations 29
4.2 Examples 30
4.3 Properties and Applications 31
5 Introduction to the Singular Value Decomposition 35
5.1 The Fundamental Theorem 35
5.2 Some Basic Properties 38
5.3 Row and Column Compressions 40
6 Linear Equations 43
6.1 Vector Linear Equations 43
6.2 Matrix Linear Equations 44
6.3 A More General Matrix Linear Equation 47
6.4 Some Useful and Interesting Inverses 47
7 Projections, Inner Product Spaces, and Norms 51
7.1 Projections 51
7.1.1 The four fundamental orthogonal projections 52
7.2 Inner Product Spaces 54
7.3 Vector Norms 57
7.4 Matrix Norms 59
8 Linear Least Squares Problems 65
8.1 The Linear Least Squares Problem 65
8.2 Geometric Solution 67
8.3 Linear Regression and Other Linear Least Squares Problems 67
8.3.1 Example: Linear regression 67
8.3.2 Other least squares problems 69
8.4 Least Squares and Singular Value Decomposition 70
8.5 Least Squares and QR Factorization 71
9 Eigenvalues and Eigenvectors 75
9.1 Fundamental Definitions and Properties 75
9.2 Jordan Canonical Form 82
9.3 Determination of the JCF 85
9.3.1 Theoretical computation 86
9.3.2 On the +1's in JCF blocks 88
9.4 Geometric Aspects of the JCF 89
9.5 The Matrix Sign Function 91
10 Canonical Forms 95
10.1 Some Basic Canonical Forms 95
10.2 Definite Matrices 99
10.3 Equivalence Transformations and Congruence 102
10.3.1 Block matrices and definiteness 104
10.4 Rational Canonical Form 104
11 Linear Differential and Difference Equations 109
11.1 Differential Equations 109
11.1.1 Properties of the matrix exponential 109
11.1.2 Homogeneous linear differential equations 112
11.1.3 Inhomogeneous linear differential equations 112
11.1.4 Linear matrix differential equations 113
11.1.5 Modal decompositions 114
11.1.6 Computation of the matrix exponential 114
11.2 Difference Equations 118
11.2.1 Homogeneous linear difference equations 118
11.2.2 Inhomogeneous linear difference equations 118
11.2.3 Computation of matrix powers 119
11.3 Higher-Order Equations 120
12 Generalized Eigenvalue Problems 125
12.1 The Generalized Eigenvalue/Eigenvector Problem 125
12.2 Canonical Forms 127
12.3 Application to the Computation of System Zeros 130
12.4 Symmetric Generalized Eigenvalue Problems 131
12.5 Simultaneous Diagonalization 133
12.5.1 Simultaneous diagonalization via SVD 133
12.6 Higher-Order Eigenvalue Problems 135
12.6.1 Conversion to first-order form 135
13 Kronecker Products 139
13.1 Definition and Examples 139
13.2 Properties of the Kronecker Product 140
13.3 Application to Sylvester and Lyapunov Equations 144
Bibliography 151
Index 153
Preface
This book is intended to be used as a text for beginning graduate-level (or even senior-level)
students in engineering, the sciences, mathematics, computer science, or computational
science who wish to be familiar with enough matrix analysis that they are prepared to use its
tools and ideas comfortably in a variety of applications. By matrix analysis I mean linear
algebra and matrix theory together with their intrinsic interaction with and application to
linear dynamical systems (systems of linear differential or difference equations). The text
can be used in a one-quarter or one-semester course to provide a compact overview of
much of the important and useful mathematics that, in many cases, students meant to learn
thoroughly as undergraduates, but somehow didn't quite manage to do. Certain topics
that may have been treated cursorily in undergraduate courses are treated in more depth
and more advanced material is introduced. I have tried throughout to emphasize only the
more important and "useful" tools, methods, and mathematical structures. Instructors are
encouraged to supplement the book with specific application examples from their own
particular subject area.
The choice of topics covered in linear algebra and matrix theory is motivated both by
applications and by computational utility and relevance. The concept of matrix factorization
is emphasized throughout to provide a foundation for a later course in numerical linear
algebra. Matrices are stressed more than abstract vector spaces, although Chapters 2 and 3
do cover some geometric (i.e., basis-free or subspace) aspects of many of the fundamental
notions. The books by Meyer [18], Noble and Daniel [20], Ortega [21], and Strang [24]
are excellent companion texts for this book. Upon completion of a course based on this
text, the student is then well-equipped to pursue, either via formal courses or through self-
study, follow-on topics on the computational side (at the level of [7], [11], [23], or [25], for
example) or on the theoretical side (at the level of [12], [13], or [16], for example).
Prerequisites for using this text are quite modest: essentially just an understanding
of calculus and definitely some previous exposure to matrices and linear algebra. Basic
concepts such as determinants, singularity of matrices, eigenvalues and eigenvectors, and
positive definite matrices should have been covered at least once, even though their recollec-
tion may occasionally be "hazy." However, requiring such material as prerequisite permits
the early (but "out-of-order" by conventional standards) introduction of topics such as pseu-
doinverses and the singular value decomposition (SVD). These powerful and versatile tools
can then be exploited to provide a unifying foundation upon which to base subsequent top-
ics. Because tools such as the SVD are not generally amenable to "hand computation," this
approach necessarily presupposes the availability of appropriate mathematical software on
a digital computer. For this, I highly recommend MATLAB® although other software such as
Mathematica® or Mathcad® is also excellent. Since this text is not intended for a course in
numerical linear algebra per se, the details of most of the numerical aspects of linear algebra
are deferred to such a course.
The presentation of the material in this book is strongly influenced by computa-
tional issues for two principal reasons. First, "real-life" problems seldom yield to simple
closed-form formulas or solutions. They must generally be solved computationally and
it is important to know which types of algorithms can be relied upon and which cannot.
Some of the key algorithms of numerical linear algebra, in particular, form the foundation
upon which rests virtually all of modern scientific and engineering computation. A second
motivation for a computational emphasis is that it provides many of the essential tools for
what I call "qualitative mathematics." For example, in an elementary linear algebra course,
a set of vectors is either linearly independent or it is not. This is an absolutely fundamental
concept. But in most engineering or scientific contexts we want to know more than that.
If a set of vectors is linearly independent, how "nearly dependent" are the vectors? If they
are linearly dependent, are there "best" linearly independent subsets? These turn out to
be much more difficult problems and frequently involve research-level questions when set
in the context of the finite-precision, finite-range floating-point arithmetic environment of
most modern computing platforms.
Some of the applications of matrix analysis mentioned briefly in this book derive
from the modern state-space approach to dynamical systems. State-space methods are
now standard in much of modern engineering where, for example, control systems with
large numbers of interacting inputs, outputs, and states often give rise to models of very
high order that must be analyzed, simulated, and evaluated. The "language" in which such
models are conveniently described involves vectors and matrices. It is thus crucial to acquire
a working knowledge of the vocabulary and grammar of this language. The tools of matrix
analysis are also applied on a daily basis to problems in biology, chemistry, econometrics,
physics, statistics, and a wide variety of other fields, and thus the text can serve a rather
diverse audience. Mastery of the material in this text should enable the student to read and
understand the modern language of matrices used throughout mathematics, science, and
engineering.
While prerequisites for this text are modest, and while most material is developed from
basic ideas in the book, the student does require a certain amount of what is conventionally
referred to as "mathematical maturity." Proofs are given for many theorems. When they are
not given explicitly, they are either obvious or easily found in the literature. This is ideal
material from which to learn a bit about mathematical proofs and the mathematical maturity
and insight gained thereby. It is my firm conviction that such maturity is neither encouraged
nor nurtured by relegating the mathematical aspects of applications (for example, linear
algebra for elementary state-space theory) to an appendix or introducing it "on-the-fly" when
necessary. Rather, one must lay a firm foundation upon which subsequent applications and
perspectives can be built in a logical, consistent, and coherent fashion.
I have taught this material for many years, many times at UCSB and twice at UC
Davis, and the course has proven to be remarkably successful at enabling students from
disparate backgrounds to acquire a quite acceptable level of mathematical maturity and
rigor for subsequent graduate studies in a variety of disciplines. Indeed, many students who
completed the course, especially the first few times it was offered, remarked afterward that
if only they had had this course before they took linear systems, or signal processing,
or estimation theory, etc., they would have been able to concentrate on the new ideas
they wanted to learn, rather than having to spend time making up for deficiencies in their
background in matrices and linear algebra. My fellow instructors, too, realized that by
requiring this course as a prerequisite, they no longer had to provide as much time for
"review" and could focus instead on the subject at hand. The concept seems to work.
— AJL, June 2004
Chapter 1
Introduction and Review
1.1 Some Notation and Terminology
We begin with a brief introduction to some standard notation and terminology to be used
throughout the text. This is followed by a review of some basic notions in matrix analysis
and linear algebra.
The following sets appear frequently throughout subsequent chapters:
1. R^n = the set of n-tuples of real numbers represented as column vectors. Thus, x ∈ R^n means

       x = [x_1  x_2  ···  x_n]^T,

   where x_i ∈ R for i ∈ n. Henceforth, the notation n denotes the set {1, ..., n}.

   Note: Vectors are always column vectors. A row vector is denoted by y^T, where y ∈ R^n and the superscript T is the transpose operation. That a vector is always a column vector rather than a row vector is entirely arbitrary, but this convention makes it easy to recognize immediately throughout the text that, e.g., x^T y is a scalar while x y^T is an n × n matrix.

2. C^n = the set of n-tuples of complex numbers represented as column vectors.

3. R^{m×n} = the set of real (or real-valued) m × n matrices.

4. R_r^{m×n} = the set of real m × n matrices of rank r. Thus, R_n^{n×n} denotes the set of real nonsingular n × n matrices.

5. C^{m×n} = the set of complex (or complex-valued) m × n matrices.

6. C_r^{m×n} = the set of complex m × n matrices of rank r.
We now classify some of the more familiar "shaped" matrices. A matrix A ∈ R^{n×n} (or A ∈ C^{n×n}) is

• diagonal if a_{ij} = 0 for i ≠ j.
• upper triangular if a_{ij} = 0 for i > j.
• lower triangular if a_{ij} = 0 for i < j.
• tridiagonal if a_{ij} = 0 for |i − j| > 1.
• pentadiagonal if a_{ij} = 0 for |i − j| > 2.
• upper Hessenberg if a_{ij} = 0 for i − j > 1.
• lower Hessenberg if a_{ij} = 0 for j − i > 1.

Each of the above also has a "block" analogue obtained by replacing scalar components in the respective definitions by block submatrices. For example, if A ∈ R^{n×n}, B ∈ R^{n×m}, and C ∈ R^{m×m}, then the (m + n) × (m + n) matrix [A  B; 0  C] is block upper triangular.

The transpose of a matrix A is denoted by A^T and is the matrix whose (i, j)th entry is the (j, i)th entry of A, that is, (A^T)_{ij} = a_{ji}. Note that if A ∈ R^{m×n}, then A^T ∈ R^{n×m}. If A ∈ C^{m×n}, then its Hermitian transpose (or conjugate transpose) is denoted by A^H (or sometimes A*) and its (i, j)th entry is (A^H)_{ij} = ā_{ji}, where the bar indicates complex conjugation; i.e., if z = α + jβ (j = i = √−1), then z̄ = α − jβ. A matrix A is symmetric if A = A^T and Hermitian if A = A^H. We henceforth adopt the convention that, unless otherwise noted, an equation like A = A^T implies that A is real-valued while a statement like A = A^H implies that A is complex-valued.

Remark 1.1. While √−1 is most commonly denoted by i in mathematics texts, j is the more common notation in electrical engineering and system theory. There is some advantage to being conversant with both notations. The notation j is used throughout the text but reminders are placed at strategic locations.

Example 1.2.

1. A real matrix with A = A^T, e.g., A = [1  2; 2  3], is symmetric (and Hermitian).

2. A = [5  7 + j; 7 + j  2] is complex-valued symmetric but not Hermitian.

3. A = [5  7 + j; 7 − j  2] is Hermitian (but not symmetric).

Transposes of block matrices can be defined in an obvious way. For example, it is easy to see that if A_{ij} are appropriately dimensioned subblocks, then

    [A_{11}  A_{12}; A_{21}  A_{22}]^T = [A_{11}^T  A_{21}^T; A_{12}^T  A_{22}^T].
1.2 Matrix Arithmetic

It is assumed that the reader is familiar with the fundamental notions of matrix addition, multiplication of a matrix by a scalar, and multiplication of matrices.

A special case of matrix multiplication occurs when the second matrix is a column vector x, i.e., the matrix-vector product Ax. A very important way to view this product is to interpret it as a weighted sum (linear combination) of the columns of A. That is, suppose

    A = [a_1, ..., a_n] ∈ R^{m×n} with a_i ∈ R^m  and  x = [x_1  ···  x_n]^T ∈ R^n.

Then

    Ax = x_1 a_1 + ··· + x_n a_n ∈ R^m.

The importance of this interpretation cannot be overemphasized. As a numerical example, take A = [9  8  7; 6  5  4] and x = [3  2  1]^T. Then we can quickly calculate dot products of the rows of A with the column x to find Ax = [50  32]^T, but this matrix-vector product can also be computed via

    3·[9; 6] + 2·[8; 5] + 1·[7; 4].

For large arrays of numbers, there can be important computer-architecture-related advantages to preferring the latter calculation method.
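The two viewpoints can be checked directly on the numerical example above. This NumPy sketch is illustrative only (it is not from the text, which suggests MATLAB or similar software):

```python
import numpy as np

A = np.array([[9, 8, 7],
              [6, 5, 4]])
x = np.array([3, 2, 1])

# Row-oriented view: dot products of the rows of A with the column x.
by_rows = np.array([A[i, :] @ x for i in range(A.shape[0])])

# Column-oriented view: Ax as a weighted sum (linear combination) of the columns of A.
by_cols = sum(x[j] * A[:, j] for j in range(A.shape[1]))

print(by_rows, by_cols)   # both give [50 32]
```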
For matrix multiplication, suppose A ∈ R^{m×n} and B = [b_1, ..., b_p] ∈ R^{n×p} with b_i ∈ R^n. Then the matrix product AB can be thought of as above, applied p times:

    AB = [Ab_1, ..., Ab_p].

There is also an alternative, but equivalent, formulation of matrix multiplication that appears frequently in the text and is presented below as a theorem. Again, its importance cannot be overemphasized. It is deceptively simple and its full understanding is well rewarded.

Theorem 1.3. Let U = [u_1, ..., u_n] ∈ R^{m×n} with u_i ∈ R^m and V = [v_1, ..., v_n] ∈ R^{p×n} with v_i ∈ R^p. Then

    U V^T = \sum_{i=1}^{n} u_i v_i^T ∈ R^{m×p}.

If matrices C and D are compatible for multiplication, recall that (CD)^T = D^T C^T (or (CD)^H = D^H C^H). This gives a dual to the matrix-vector result above. Namely, if C ∈ R^{m×n} has row vectors c_j^T ∈ R^{1×n}, and is premultiplied by a row vector y^T ∈ R^{1×m}, then the product can be written as a weighted linear sum of the rows of C as follows:

    y^T C = y_1 c_1^T + ··· + y_m c_m^T ∈ R^{1×n}.

Theorem 1.3 can then also be generalized to its "row dual." The details are left to the reader.
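The outer-product form of matrix multiplication in Theorem 1.3 is easy to confirm numerically. The following NumPy sketch (illustrative only, with randomly generated U and V) checks the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 4, 3, 5
U = rng.standard_normal((m, n))   # columns u_1, ..., u_n in R^m
V = rng.standard_normal((p, n))   # columns v_1, ..., v_n in R^p

# Theorem 1.3: U V^T equals the sum of the n outer products u_i v_i^T.
outer_sum = sum(np.outer(U[:, i], V[:, i]) for i in range(n))

print(np.allclose(U @ V.T, outer_sum))   # True
```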
1.3 Inner Products and Orthogonality

For vectors x, y ∈ R^n, the Euclidean inner product (or inner product, for short) of x and y is given by

    (x, y) := x^T y = \sum_{i=1}^{n} x_i y_i.

Note that the inner product is a scalar.

If x, y ∈ C^n, we define their complex Euclidean inner product (or inner product, for short) by

    (x, y)_c := x^H y = \sum_{i=1}^{n} \bar{x}_i y_i.

Note that (x, y)_c = \overline{(y, x)_c}, i.e., the order in which x and y appear in the complex inner product is important. The more conventional definition of the complex inner product is (x, y)_c = y^H x = \sum_{i=1}^{n} x_i \bar{y}_i, but throughout the text we prefer the symmetry with the real case.

Example 1.4. Let x = [1; j] and y = [1; 2]. Then

    (x, y)_c = [1; j]^H [1; 2] = [1  −j] [1; 2] = 1 − 2j,

while

    (y, x)_c = [1; 2]^H [1; j] = [1  2] [1; j] = 1 + 2j,

and we see that, indeed, (x, y)_c = \overline{(y, x)_c}.

Note that x^T x = 0 if and only if x = 0 when x ∈ R^n but that this is not true if x ∈ C^n. What is true in the complex case is that x^H x = 0 if and only if x = 0. To illustrate, consider the nonzero vector x above. Then x^T x = 0 but x^H x = 2.

Two nonzero vectors x, y ∈ R^n are said to be orthogonal if their inner product is zero, i.e., x^T y = 0. Nonzero complex vectors are orthogonal if x^H y = 0. If x and y are orthogonal and x^T x = 1 and y^T y = 1, then we say that x and y are orthonormal. A matrix A ∈ R^{n×n} is an orthogonal matrix if A^T A = A A^T = I, where I is the n × n identity matrix. The notation I_n is sometimes used to denote the identity matrix in R^{n×n} (or C^{n×n}). Similarly, a matrix A ∈ C^{n×n} is said to be unitary if A^H A = A A^H = I. Clearly an orthogonal or unitary matrix has orthonormal rows and orthonormal columns. There is no special name attached to a nonsquare matrix A ∈ R^{m×n} (or ∈ C^{m×n}) with orthonormal rows or columns.
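Example 1.4 and the remark about x^T x versus x^H x can be reproduced numerically. A brief NumPy sketch (not from the text):

```python
import numpy as np

x = np.array([1, 1j])
y = np.array([1, 2])

# Complex Euclidean inner product as defined above: (x, y)_c = x^H y.
# np.vdot conjugates its first argument, so vdot(x, y) computes x^H y.
print(np.vdot(x, y))        # (1-2j)
print(np.vdot(y, x))        # (1+2j), the complex conjugate of (x, y)_c

# x is a nonzero complex vector with x^T x = 0 but x^H x = 2.
print(x @ x)                # 0j
print(np.vdot(x, x).real)   # 2.0
```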
1.4 Determinants

It is assumed that the reader is familiar with the basic theory of determinants. For A ∈ R^{n×n} (or A ∈ C^{n×n}) we use the notation det A for the determinant of A. We list below some of
the more useful properties of determinants. Note that this is not a minimal set, i.e., several
properties are consequences of one or more of the others.
1. If A has a zero row or if any two rows of A are equal, then det A = 0.
2. If A has a zero column or if any two columns of A are equal, then det A = 0.
3. Interchanging two rows of A changes only the sign of the determinant.
4. Interchanging two columns of A changes only the sign of the determinant.
5. Multiplying a row of A by a scalar a results in a new matrix whose determinant is
a det A.
6. Multiplying a column of A by a scalar a results in a new matrix whose determinant
is a det A.
7. Multiplying a row of A by a scalar and then adding it to another row does not change
the determinant.
8. Multiplying a column of A by a scalar and then adding it to another column does not
change the determinant.
9. det A^T = det A (and det A^H = \overline{det A} if A ∈ C^{n×n}).

10. If A is diagonal, then det A = a_{11} a_{22} ··· a_{nn}, i.e., det A is the product of its diagonal elements.

11. If A is upper triangular, then det A = a_{11} a_{22} ··· a_{nn}.

12. If A is lower triangular, then det A = a_{11} a_{22} ··· a_{nn}.

13. If A is block diagonal (or block upper triangular or block lower triangular), with square diagonal blocks A_{11}, A_{22}, ..., A_{nn} (of possibly different sizes), then det A = det A_{11} det A_{22} ··· det A_{nn}.

14. If A, B ∈ R^{n×n}, then det(AB) = det A det B.

15. If A ∈ R^{n×n} is nonsingular, then det(A^{−1}) = 1/det A.

16. If A ∈ R^{n×n} is nonsingular and D ∈ R^{m×m}, then

        det [A  B; C  D] = det A det(D − C A^{−1} B).

    Proof: This follows easily from the block LU factorization

        [A  B; C  D] = [I  0; C A^{−1}  I] [A  B; 0  D − C A^{−1} B].

17. If A ∈ R^{n×n} and D ∈ R^{m×m} is nonsingular, then

        det [A  B; C  D] = det D det(A − B D^{−1} C).

    Proof: This follows easily from the block UL factorization

        [A  B; C  D] = [I  B D^{−1}; 0  I] [A − B D^{−1} C  0; C  D].
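Property 16 (and, by symmetry, property 17) is straightforward to spot-check numerically. The following NumPy sketch is illustrative only, using randomly generated blocks with A shifted to keep it nonsingular:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)   # shift to keep A nonsingular
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m))

M = np.block([[A, B],
              [C, D]])

lhs = np.linalg.det(M)
rhs = np.linalg.det(A) * np.linalg.det(D - C @ np.linalg.inv(A) @ B)   # property 16

print(np.isclose(lhs, rhs))   # True
```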
Remark 1.5. The factorization of a matrix A into the product of a unit lower triangular
matrix L (i.e., lower triangular with all 1's on the diagonal) and an upper triangular matrix
U is called an LU factorization; see, for example, [24]. Another such factorization is UL
where U is unit upper triangular and L is lower triangular. The factorizations used above
are block analogues of these.
Remark 1.6. The matrix D − C A^{−1} B is called the Schur complement of A in [A  B; C  D]. Similarly, A − B D^{−1} C is the Schur complement of D in [A  B; C  D].
EXERCISES
1. If A ∈ R^{n×n} and α is a scalar, what is det(αA)? What is det(−A)?

2. If A is orthogonal, what is det A? If A is unitary, what is det A?

3. Let x, y ∈ R^n. Show that det(I − x y^T) = 1 − y^T x.

4. Let U_1, U_2, ..., U_k ∈ R^{n×n} be orthogonal matrices. Show that the product U = U_1 U_2 ··· U_k is an orthogonal matrix.

5. Let A ∈ R^{n×n}. The trace of A, denoted Tr A, is defined as the sum of its diagonal elements, i.e., Tr A = \sum_{i=1}^{n} a_{ii}.

   (a) Show that the trace is a linear function; i.e., if A, B ∈ R^{n×n} and α, β ∈ R, then Tr(αA + βB) = α Tr A + β Tr B.

   (b) Show that Tr(AB) = Tr(BA), even though in general AB ≠ BA.

   (c) Let S ∈ R^{n×n} be skew-symmetric, i.e., S^T = −S. Show that Tr S = 0. Then either prove the converse or provide a counterexample.

6. A matrix A ∈ R^{n×n} is said to be idempotent if A^2 = A.

   (a) Show that the matrix

           A = (1/2) [2 cos^2 θ   sin 2θ;  sin 2θ   2 sin^2 θ]

       is idempotent for all θ.

   (b) Suppose A ∈ R^{n×n} is idempotent and A ≠ I. Show that A must be singular.
Chapter 2
Vector Spaces

In this chapter we give a brief review of some of the basic concepts of vector spaces. The emphasis is on finite-dimensional vector spaces, including spaces formed by special classes of matrices, but some infinite-dimensional examples are also cited. An excellent reference for this and the next chapter is [10], where some of the proofs that are not given here may be found.

2.1 Definitions and Examples

Definition 2.1. A field is a set F together with two operations +, · : F × F → F such that

(A1) α + (β + γ) = (α + β) + γ for all α, β, γ ∈ F.
(A2) there exists an element 0 ∈ F such that α + 0 = α for all α ∈ F.
(A3) for all α ∈ F, there exists an element (−α) ∈ F such that α + (−α) = 0.
(A4) α + β = β + α for all α, β ∈ F.
(M1) α · (β · γ) = (α · β) · γ for all α, β, γ ∈ F.
(M2) there exists an element 1 ∈ F such that α · 1 = α for all α ∈ F.
(M3) for all α ∈ F, α ≠ 0, there exists an element α^{−1} ∈ F such that α · α^{−1} = 1.
(M4) α · β = β · α for all α, β ∈ F.
(D) α · (β + γ) = α · β + α · γ for all α, β, γ ∈ F.

Axioms (A1)–(A3) state that (F, +) is a group and an abelian group if (A4) also holds. Axioms (M1)–(M4) state that (F \ {0}, ·) is an abelian group.

Generally speaking, when no confusion can arise, the multiplication operator "·" is not written explicitly.
Example 2.2.

1. R with ordinary addition and multiplication is a field.

2. C with ordinary complex addition and multiplication is a field.

3. Ra[x] = the field of rational functions in the indeterminate x

       = { (a_0 + a_1 x + ··· + a_p x^p) / (β_0 + β_1 x + ··· + β_q x^q) : a_i, β_i ∈ R; p, q ∈ Z^+ },

   where Z^+ = {0, 1, 2, ...}, is a field.

4. R_r^{m×n} = {m × n matrices of rank r with real coefficients} is clearly not a field since, for example, (M1) does not hold unless m = n. Moreover, R_n^{n×n} is not a field either since (M4) does not hold in general (although the other 8 axioms hold).
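The failure of axiom (M4) for square matrices noted in item 4 is easy to exhibit. A minimal NumPy illustration (not from the text):

```python
import numpy as np

# Matrix multiplication is not commutative, so (M4) fails for R^{n x n}.
A = np.array([[1, 1],
              [0, 1]])
B = np.array([[1, 0],
              [1, 1]])

print(A @ B)   # [[2 1]
               #  [1 1]]
print(B @ A)   # [[1 1]
               #  [1 2]]
```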
Definition 2.3. A vector space over a field F is a set V together with two operations + : V × V → V and · : F × V → V such that

(V1) (V, +) is an abelian group.
(V2) (α · β) · v = α · (β · v) for all α, β ∈ F and for all v ∈ V.
(V3) (α + β) · v = α · v + β · v for all α, β ∈ F and for all v ∈ V.
(V4) α · (v + w) = α · v + α · w for all α ∈ F and for all v, w ∈ V.
(V5) 1 · v = v for all v ∈ V (1 ∈ F).

A vector space is denoted by (V, F) or, when there is no possibility of confusion as to the underlying field, simply by V.

Remark 2.4. Note that + and · in Definition 2.3 are different from the + and · in Definition 2.1 in the sense of operating on different objects in different sets. In practice, this causes no confusion and the · operator is usually not even written explicitly.

Example 2.5.

1. (R^n, R) with addition defined by

       x + y = [x_1 + y_1  x_2 + y_2  ···  x_n + y_n]^T

   and scalar multiplication defined by

       αx = [αx_1  αx_2  ···  αx_n]^T

   is a vector space. Similar definitions hold for (C^n, C).
2. (R^{m×n}, R) is a vector space with addition defined by

       A + B = [a_{ij} + β_{ij}]  (i.e., the m × n matrix whose (i, j)th entry is a_{ij} + β_{ij})

   and scalar multiplication defined by

       γA = [γ a_{ij}].

3. Let (V, F) be an arbitrary vector space and D be an arbitrary set. Let Φ(D, V) be the set of functions f mapping D to V. Then Φ(D, V) is a vector space with addition defined by

       (f + g)(d) = f(d) + g(d) for all d ∈ D and for all f, g ∈ Φ

   and scalar multiplication defined by

       (αf)(d) = αf(d) for all α ∈ F, for all d ∈ D, and for all f ∈ Φ.

   Special Cases:

   (a) D = [t_0, t_1], (V, F) = (R^n, R), and the functions are piecewise continuous =: (PC[t_0, t_1])^n or continuous =: (C[t_0, t_1])^n.

   (b) D = [t_0, +∞), (V, F) = (R^n, R), etc.

4. Let A ∈ R^{n×n}. Then {x(t) : ẋ(t) = Ax(t)} is a vector space (of dimension n).

2.2 Subspaces

Definition 2.6. Let (V, F) be a vector space and let W ⊆ V, W ≠ ∅. Then (W, F) is a subspace of (V, F) if and only if (W, F) is itself a vector space or, equivalently, if and only if (αw_1 + βw_2) ∈ W for all α, β ∈ F and for all w_1, w_2 ∈ W.

Remark 2.7. The latter characterization of a subspace is often the easiest way to check or prove that something is indeed a subspace (or vector space); i.e., verify that the set in question is closed under addition and scalar multiplication. Note, too, that since 0 ∈ F, this implies that the zero vector must be in any subspace.

Notation: When the underlying field is understood, we write W ⊆ V, and the symbol ⊆, when used with vector spaces, is henceforth understood to mean "is a subspace of." The less restrictive meaning "is a subset of" is specifically flagged as such.
Example 2.8.

1. Consider (V, F) = (R^{n×n}, R) and let W = {A ∈ R^{n×n} : A is symmetric}. Then W ⊆ V.
   Proof: Suppose A_1, A_2 are symmetric. Then it is easily shown that αA_1 + βA_2 is symmetric for all α, β ∈ R.

2. Let W = {A ∈ R^{n×n} : A is orthogonal}. Then W is not a subspace of R^{n×n}.

3. Consider (V, F) = (R^2, R) and for each v ∈ R^2 of the form v = [v_1; v_2] identify v_1 with the x-coordinate in the plane and v_2 with the y-coordinate. For α, β ∈ R, define

       W_{α,β} = { v : v = [c; αc + β], c ∈ R }.

   Then W_{α,β} is a subspace of V if and only if β = 0. As an interesting exercise, sketch W_{2,1}, W_{2,0}, W_{1/2,1}, and W_{1/2,0}. Note, too, that the vertical line through the origin (i.e., α = ∞) is also a subspace.

All lines through the origin are subspaces. Shifted subspaces W_{α,β} with β ≠ 0 are called linear varieties.
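The claim that W_{α,β} is a subspace only when β = 0 amounts to a closure check. The following NumPy sketch (illustrative only; the parameter values are arbitrary) tests closure under addition for the lines y = 2x and y = 2x + 1:

```python
import numpy as np

def on_line(v, alpha, beta):
    """Return True if the point v = (x, y) lies on the line y = alpha*x + beta."""
    return bool(np.isclose(v[1], alpha * v[0] + beta))

for alpha, beta in [(2.0, 0.0), (2.0, 1.0)]:
    v1 = np.array([1.0, alpha * 1.0 + beta])   # two points of W_{alpha,beta}
    v2 = np.array([3.0, alpha * 3.0 + beta])
    print(f"alpha={alpha}, beta={beta}: sum stays on the line ->",
          on_line(v1 + v2, alpha, beta))
```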
Henceforth, we drop the explicit dependence of a vector space on an underlying field.
Thus, V usually denotes a vector space with the underlying field generally being R unless
explicitly stated otherwise.
Definition 2.9. If R and S are vector spaces (or subspaces), then R = S if and only if R ⊆ S and S ⊆ R.

Note: To prove two vector spaces are equal, one usually proves the two inclusions separately: An arbitrary r ∈ R is shown to be an element of S and then an arbitrary s ∈ S is shown to be an element of R.

2.3 Linear Independence

Let X = {v_1, v_2, ...} be a nonempty collection of vectors v_i in some vector space V.

Definition 2.10. X is a linearly dependent set of vectors if and only if there exist k distinct elements v_1, ..., v_k ∈ X and scalars α_1, ..., α_k not all zero such that

    α_1 v_1 + ··· + α_k v_k = 0.
X is a linearly independent set of vectors if and only if for any collection of k distinct elements v_1, ..., v_k of X and for any scalars α_1, ..., α_k,

    α_1 v_1 + ··· + α_k v_k = 0   implies   α_1 = 0, ..., α_k = 0.
Example 2.11.

1. Let V = R^3. Then the first of the two displayed sets of three vectors is a linearly independent set. Why? However, the second displayed set {v1, v2, v3} is a linearly dependent set (since 2 v1 − v2 + v3 = 0).

2. Let A ∈ R^{n×n} and B ∈ R^{n×m}. Then consider the rows of e^{tA} B as vectors in C^m[t0, t1] (recall that e^{tA} denotes the matrix exponential, which is discussed in more detail in Chapter 11). Independence of these vectors turns out to be equivalent to a concept called controllability, to be studied further in what follows.
Let vi ∈ R^n, i ∈ k, and consider the matrix V = [v1, ..., vk] ∈ R^{n×k}. The linear dependence of this set of vectors is equivalent to the existence of a nonzero vector a ∈ R^k such that Va = 0. An equivalent condition for linear dependence is that the k × k matrix V^T V is singular. If the set of vectors is independent, and there exists a ∈ R^k such that Va = 0, then a = 0. An equivalent condition for linear independence is that the matrix V^T V is nonsingular.
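A minimal numerical version of this test (the matrix below is an arbitrary illustration, not taken from the text): stack the vectors as columns of V and check whether rank(V) = k, which is equivalent to nonsingularity of V^T V.

import numpy as np

# Columns are v1, v2, v3; here v3 = v1 + v2, so the set is dependent.
V = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
k = V.shape[1]

gram = V.T @ V                                   # the k x k matrix V^T V
print(np.linalg.matrix_rank(V) == k)             # False: the vectors are dependent
print(abs(np.linalg.det(gram)) > 1e-12)          # False: V^T V is singular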
Definition 2.12. Let X = {v1, v2, ...} be a collection of vectors vi ∈ V. Then the span of X is defined as

Sp(X) = Sp{v1, v2, ...}
      = {v : v = α1 v1 + ··· + αk vk ; αi ∈ F, vi ∈ X, k ∈ N},

where N = {1, 2, ...}.
Example 2.13. Let V = R^n and define

e1 = [1; 0; ...; 0],  e2 = [0; 1; 0; ...; 0],  ...,  en = [0; ...; 0; 1].

Then Sp{e1, e2, ..., en} = R^n.
Definition 2.14. A set of vectors X is a basis for V if and only if
1. X is a linearly independent set (of basis vectors), and
2. Sp(X) = V.
Example 2.15. {e1, ..., en} is a basis for R^n (sometimes called the natural basis).

Now let b1, ..., bn be a basis (with a specific order associated with the basis vectors) for V. Then for all v ∈ V there exists a unique n-tuple {ξ1, ..., ξn} such that

v = ξ1 b1 + ··· + ξn bn = Bx,

where

B = [b1, ..., bn],   x = [ξ1; ...; ξn].
Definition 2.16. The scalars {ξi} are called the components (or sometimes the coordinates) of v with respect to the basis {b1, ..., bn} and are unique. We say that the vector x of components represents the vector v with respect to the basis B.
Example 2.17. In R^n,

[v1; ...; vn] = v1 e1 + v2 e2 + ··· + vn en.

We can also determine components of v with respect to another basis. For example, while

[1; 2] = 1 · e1 + 2 · e2,

with respect to a second basis {b1, b2} of R^2 we have

[1; 2] = 3 · b1 + 4 · b2.

To see this, write

[1; 2] = x1 · b1 + x2 · b2 = [b1 b2] [x1; x2].

Then

[x1; x2] = [b1 b2]^{-1} [1; 2] = [3; 4].
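As a concrete numerical illustration of this procedure (the basis below is an arbitrary invertible choice, not the one used in the text), the components of a vector with respect to a basis are obtained by solving Bx = v:

import numpy as np

# Columns of B form a basis of R^2 (an arbitrary invertible choice).
B = np.array([[3.0, 1.0],
              [1.0, 1.0]])
v = np.array([1.0, 2.0])

# Components x of v with respect to the basis {b1, b2}: solve B x = v.
x = np.linalg.solve(B, v)
print("components:", x)     # x satisfies x[0]*b1 + x[1]*b2 = v
print("check:", B @ x)      # reproduces v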
Theorem 2.18. The number of elements in a basis of a vector space is independent of the particular basis considered.

Definition 2.19. If a basis X for a vector space V (≠ 0) has n elements, V is said to be n-dimensional or have dimension n and we write dim(V) = n or dim V = n. For
consistency, and because the 0 vector is in any vector space, we define dim(0) = 0. A vector space V is finite-dimensional if there exists a basis X with n < +∞ elements; otherwise, V is infinite-dimensional.

Thus, Theorem 2.18 says that dim(V) = the number of elements in a basis.
Example 2.20.

1. dim(R^n) = n.

2. dim(R^{m×n}) = mn.

   Note: Check that a basis for R^{m×n} is given by the mn matrices Eij, i ∈ m, j ∈ n, where Eij is a matrix all of whose elements are 0 except for a 1 in the (i, j)th location. The collection of Eij matrices can be called the "natural basis matrices."

3. dim(C[t0, t1]) = +∞.

4. dim{A ∈ R^{n×n} : A = A^T} = n(n + 1)/2.

   (To see why, determine n(n + 1)/2 symmetric basis matrices; one such construction is sketched after this example.)

5. dim{A ∈ R^{n×n} : A is upper (lower) triangular} = n(n + 1)/2.
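A minimal sketch of the counting argument for item 4 (the particular construction below is one natural choice, not prescribed by the text): take Eii together with Eij + Eji for i < j; there are n + n(n − 1)/2 = n(n + 1)/2 such matrices and they span the symmetric matrices.

import numpy as np

def symmetric_basis(n):
    """Return a list of n(n+1)/2 basis matrices for the symmetric n x n matrices."""
    basis = []
    for i in range(n):
        for j in range(i, n):
            E = np.zeros((n, n))
            E[i, j] = 1.0
            E[j, i] = 1.0      # same entry as E[i, j] when i == j
            basis.append(E)
    return basis

n = 4
B = symmetric_basis(n)
print(len(B), n * (n + 1) // 2)   # both equal 10 when n = 4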
2.4 Sums and Intersections of Subspaces

Definition 2.21. Let (V, F) be a vector space and let R, S ⊆ V. The sum and intersection of R and S are defined respectively by:

1. R + S = {r + s : r ∈ R, s ∈ S}.

2. R ∩ S = {v : v ∈ R and v ∈ S}.

Theorem 2.22.

1. R + S ⊆ V (in general, R1 + ··· + Rk =: Σ_{i=1}^{k} Ri ⊆ V, for finite k).

2. R ∩ S ⊆ V (in general, ∩_{α∈A} Rα ⊆ V for an arbitrary index set A).
Remark 2.23. The union of two subspaces, R ∪ S, is not necessarily a subspace.

Definition 2.24. T = R ⊕ S is the direct sum of R and S if

1. R ∩ S = 0, and

2. R + S = T (in general, Ri ∩ (Σ_{j≠i} Rj) = 0 and Σ_i Ri = T).

The subspaces R and S are said to be complements of each other in T.
Remark 2.25. The complement of R (or S) is not unique. For example, consider V = R^2 and let R be any line through the origin. Then any other distinct line through the origin is a complement of R. Among all the complements there is a unique one orthogonal to R. We discuss more about orthogonal complements elsewhere in the text.

Theorem 2.26. Suppose T = R ⊕ S. Then

1. every t ∈ T can be written uniquely in the form t = r + s with r ∈ R and s ∈ S.

2. dim(T) = dim(R) + dim(S).

Proof: To prove the first part, suppose an arbitrary vector t ∈ T can be written in two ways as t = r1 + s1 = r2 + s2, where r1, r2 ∈ R and s1, s2 ∈ S. Then r1 − r2 = s2 − s1. But r1 − r2 ∈ R and s2 − s1 ∈ S. Since R ∩ S = 0, we must have r1 = r2 and s1 = s2, from which uniqueness follows.

The statement of the second part is a special case of the next theorem. □

Theorem 2.27. For arbitrary subspaces R, S of a vector space V,

dim(R + S) = dim(R) + dim(S) − dim(R ∩ S).
Example 2.28. Let U be the subspace of upper triangular matrices in R^{n×n} and let L be the subspace of lower triangular matrices in R^{n×n}. Then it may be checked that U + L = R^{n×n}, while U ∩ L is the set of diagonal matrices in R^{n×n}. Using the fact that dim{diagonal matrices} = n, together with Examples 2.20.2 and 2.20.5, one can easily verify the validity of the formula given in Theorem 2.27.
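Spelled out, the verification is the arithmetic

dim(U) + dim(L) − dim(U ∩ L) = n(n + 1)/2 + n(n + 1)/2 − n = n^2 = dim(R^{n×n}) = dim(U + L).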
Example 2.29. Let (V, F) = (R^{n×n}, R), let R be the set of skew-symmetric matrices in R^{n×n}, and let S be the set of symmetric matrices in R^{n×n}. Then V = R ⊕ S.

Proof: This follows easily from the fact that any A ∈ R^{n×n} can be written in the form

A = (1/2)(A + A^T) + (1/2)(A − A^T).

The first matrix on the right-hand side above is in S while the second is in R. □
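A brief numerical illustration of this decomposition (the matrix below is an arbitrary example):

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

S_part = 0.5 * (A + A.T)    # symmetric part, lies in S
R_part = 0.5 * (A - A.T)    # skew-symmetric part, lies in R
print(np.allclose(A, S_part + R_part))    # True
print(np.allclose(S_part, S_part.T))      # True
print(np.allclose(R_part, -R_part.T))     # True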
EXERCISES

1. Suppose {v1, ..., vk} is a linearly dependent set. Then show that one of the vectors must be a linear combination of the others.

2. Let x1, x2, ..., xk ∈ R^n be nonzero mutually orthogonal vectors. Show that {x1, ..., xk} must be a linearly independent set.

3. Let v1, ..., vn be orthonormal vectors in R^n. Show that Av1, ..., Avn are also orthonormal if and only if A ∈ R^{n×n} is orthogonal.

4. Consider the vectors v1 = [2 1]^T and v2 = [3 1]^T. Prove that v1 and v2 form a basis for R^2. Find the components of the vector v = [4 1]^T with respect to this basis.
5. Let P denote the set of polynomials of degree less than or equal to two of the form p0 + p1 x + p2 x^2, where p0, p1, p2 ∈ R. Show that P is a vector space over R. Show that the polynomials 1, x, and 2x^2 − 1 are a basis for P. Find the components of the polynomial 2 + 3x + 4x^2 with respect to this basis.

6. Prove Theorem 2.22 (for the case of two subspaces R and S only).

7. Let P^n denote the vector space of polynomials of degree less than or equal to n, and of the form p(x) = p0 + p1 x + ··· + pn x^n, where the coefficients pi are all real. Let P_E denote the subspace of all even polynomials in P^n, i.e., those that satisfy the property p(−x) = p(x). Similarly, let P_O denote the subspace of all odd polynomials, i.e., those satisfying p(−x) = −p(x). Show that P^n = P_E ⊕ P_O.

8. Repeat Example 2.28 using instead the two subspaces T of tridiagonal matrices and U of upper triangular matrices.
Chapter 3

Linear Transformations

3.1 Definition and Examples

We begin with the basic definition of a linear transformation (or linear map, linear function, or linear operator) between two vector spaces.

Definition 3.1. Let (V, F) and (W, F) be vector spaces. Then L : V → W is a linear transformation if and only if

L(α v1 + β v2) = α L v1 + β L v2 for all α, β ∈ F and for all v1, v2 ∈ V.

The vector space V is called the domain of the transformation L while W, the space into which it maps, is called the co-domain.
Example 3.2.

1. Let F = R and take V = W = PC[t0, +∞). Define L : PC[t0, +∞) → PC[t0, +∞) by

   v(t) ↦ w(t) = (Lv)(t) = ∫_{t0}^{t} e^{−(t−τ)} v(τ) dτ.

2. Let F = R and take V = W = R^{m×n}. Fix M ∈ R^{m×m}. Define L : R^{m×n} → R^{m×n} by

   X ↦ Y = LX = MX.
3. Let F = R and take V = P^n = {p(x) = a0 + a1 x + ··· + an x^n : ai ∈ R} and W = P^{n−1}. Define L : V → W by Lp = p′, where ′ denotes differentiation with respect to x.
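As a quick illustration, linearity of the map in item 2 (X ↦ MX) can be checked numerically; M, X1, X2, α, and β below are arbitrary choices:

import numpy as np

M = np.array([[1.0, 2.0],
              [0.0, 1.0]])
X1 = np.array([[1.0, 0.0, 2.0],
               [3.0, 1.0, 0.0]])
X2 = np.array([[0.0, 1.0, 1.0],
               [1.0, 0.0, 2.0]])
a, b = 2.0, -3.0

L = lambda X: M @ X
print(np.allclose(L(a * X1 + b * X2), a * L(X1) + b * L(X2)))   # True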
3.2 Matrix Representation of Linear Transformations

Linear transformations between vector spaces with specific bases can be represented conveniently in matrix form. Specifically, suppose L : (V, F) → (W, F) is linear and further suppose that {vi, i ∈ n} and {wj, j ∈ m} are bases for V and W, respectively. Then the ith column of A = Mat L (the matrix representation of L with respect to the given bases for V and W) is the representation of L vi with respect to {wj, j ∈ m}. In other words,

A = [a11 ··· a1n]
    [ ⋮        ⋮ ]  ∈ R^{m×n}
    [am1 ··· amn]

represents L since

L vi = a1i w1 + ··· + ami wm = W ai,

where W = [w1, ..., wm] and ai = [a1i; ...; ami] is the ith column of A. Note that A = Mat L depends on the particular bases for V and W. This could be reflected by subscripts, say, in the notation, but this is usually not done.

The action of L on an arbitrary vector v ∈ V is uniquely determined (by linearity) by its action on a basis. Thus, if v = ξ1 v1 + ··· + ξn vn = V x (where v, and hence x, is arbitrary), then

L V x = L v = ξ1 L v1 + ··· + ξn L vn = W A x.

Thus, L V = W A since x was arbitrary.

When V = R^n, W = R^m and {vi, i ∈ n}, {wj, j ∈ m} are the usual (natural) bases, the equation L V = W A becomes simply L = A. We thus commonly identify A as a linear transformation with its matrix representation, i.e.,

A : R^n → R^m.

Thinking of A both as a matrix and as a linear transformation from R^n to R^m usually causes no confusion. Change of basis then corresponds naturally to appropriate matrix multiplication.
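As a sketch of the column-by-column construction of Mat L, consider the differentiation operator of Example 3.2.3 with the monomial bases {1, x, ..., x^n} for P^n and {1, x, ..., x^{n−1}} for P^{n−1} (a basis choice made here purely for illustration):

import numpy as np

n = 3   # work in P^3, so V has basis {1, x, x^2, x^3}

# Column i of A holds the coefficients of d/dx (x^i) in the basis {1, x, x^2}.
A = np.zeros((n, n + 1))
for i in range(1, n + 1):
    A[i - 1, i] = i          # d/dx x^i = i * x^(i-1)

print(A)
# Applying A to the coefficient vector of p gives the coefficients of p'.
p = np.array([1.0, 2.0, 3.0, 4.0])   # p(x) = 1 + 2x + 3x^2 + 4x^3
print(A @ p)                          # [2, 6, 12], i.e., p'(x) = 2 + 6x + 12x^2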
3.3 Composition of Transformations

Consider three vector spaces U, V, and W and transformations B from U to V and A from V to W. Then we can define a new transformation C as follows: C = AB maps U directly to W, obtained by following B : U → V with A : V → W.

The diagram just described illustrates the composition of transformations C = AB. Note that in most texts, the arrows in such a diagram are reversed (drawn W ← V ← U). However, it might be useful to prefer the former since the transformations A and B appear in the same order in both the diagram and the equation. If dim U = p, dim V = n, and dim W = m, and if we associate matrices with the transformations in the usual way, then composition of transformations corresponds to standard matrix multiplication. That is, we have

C = A B   (C is m × p, A is m × n, B is n × p).

The above is sometimes expressed componentwise by the formula

c_ij = Σ_{k=1}^{n} a_ik b_kj.

Two Special Cases:

Inner Product: Let x, y ∈ R^n. Then their inner product is the scalar

x^T y = Σ_{i=1}^{n} x_i y_i.

Outer Product: Let x ∈ R^m, y ∈ R^n. Then their outer product is the m × n matrix

x y^T.

Note that any rank-one matrix A ∈ R^{m×n} can be written in the form A = x y^T above (or x y^H if A ∈ C^{m×n}). A rank-one symmetric matrix can be written in the form x x^T (or x x^H).
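A short numerical sketch of these two special cases (the vectors are arbitrary choices): the inner product is a scalar, the outer product an m × n matrix of rank one.

import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

inner = x @ y                 # the scalar x^T y
outer = np.outer(x, y)        # the 3 x 3 matrix x y^T
print(inner)                             # 32.0
print(np.linalg.matrix_rank(outer))      # 1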
3.4 Structure of Linear Transformations

Let A : V → W be a linear transformation.

Definition 3.3. The range of A, denoted R(A), is the set {w ∈ W : w = Av for some v ∈ V}. Equivalently, R(A) = {Av : v ∈ V}. The range of A is also known as the image of A and denoted Im(A).

The nullspace of A, denoted N(A), is the set {v ∈ V : Av = 0}. The nullspace of A is also known as the kernel of A and denoted Ker(A).

Theorem 3.4. Let A : V → W be a linear transformation. Then

1. R(A) ⊆ W.

2. N(A) ⊆ V.

Note that N(A) and R(A) are, in general, subspaces of different spaces.

Theorem 3.5. Let A ∈ R^{m×n}. If A is written in terms of its columns as A = [a1, ..., an], then

R(A) = Sp{a1, ..., an}.

Proof: The proof of this theorem is easy, essentially following immediately from the definition. □

Remark 3.6. Note that in Theorem 3.5 and throughout the text, the same symbol (A) is used to denote both a linear transformation and its matrix representation with respect to the usual (natural) bases. See also the last paragraph of Section 3.2.

Definition 3.7. Let {v1, ..., vk} be a set of nonzero vectors vi ∈ R^n. The set is said to be orthogonal if vi^T vj = 0 for i ≠ j and orthonormal if vi^T vj = δij, where δij is the Kronecker delta defined by

δij = 1 if i = j,  δij = 0 if i ≠ j.
Example 3.8.

1. The first displayed set of vectors is an orthogonal set.

2. The second displayed set of vectors is an orthonormal set.

3. If {v1, ..., vk} with vi ∈ R^n is an orthogonal set, then {v1/√(v1^T v1), ..., vk/√(vk^T vk)} is an orthonormal set.
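Item 3 is easy to carry out numerically; the sketch below (with an arbitrarily chosen orthogonal set) rescales each vector by the square root of its inner product with itself:

import numpy as np

# An orthogonal (but not orthonormal) set in R^3, chosen for illustration.
vs = [np.array([2.0, 0.0, 0.0]),
      np.array([0.0, 0.0, 3.0]),
      np.array([0.0, -5.0, 0.0])]

# Normalize: v_i / sqrt(v_i^T v_i).
us = [v / np.sqrt(v @ v) for v in vs]

# Check orthonormality: u_i^T u_j should equal the Kronecker delta.
G = np.array([[ui @ uj for uj in us] for ui in us])
print(np.allclose(G, np.eye(3)))    # True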
Definition 3.9. Let S ⊆ R^n. Then the orthogonal complement of S is defined as the set

S^⊥ = {v ∈ R^n : v^T s = 0 for all s ∈ S}.

Example 3.10. Let

S = Sp{ [3; 5; 7], [−4; 1; 1] }.

Then it can be shown that

S^⊥ = Sp{ [2; 31; −23] }.

Working from the definition, the computation involved is simply to find all nontrivial (i.e., nonzero) solutions of the system of equations

3x1 + 5x2 + 7x3 = 0,
−4x1 + x2 + x3 = 0.

Note that there is nothing special about the two vectors in the basis defining S being orthogonal. Any set of vectors will do, including dependent spanning vectors (which would, of course, then give rise to redundant equations).
Theorem 3.11. Let R, S ⊆ R^n. Then

1. S^⊥ ⊆ R^n (i.e., S^⊥ is a subspace).

2. S ⊕ S^⊥ = R^n.

3. (S^⊥)^⊥ = S.

4. R ⊆ S if and only if S^⊥ ⊆ R^⊥.

5. (R + S)^⊥ = R^⊥ ∩ S^⊥.

6. (R ∩ S)^⊥ = R^⊥ + S^⊥.

Proof: We prove and discuss only item 2 here. The proofs of the other results are left as exercises. Let {v1, ..., vk} be an orthonormal basis for S and let x ∈ R^n be an arbitrary vector. Set

x1 = Σ_{i=1}^{k} (x^T vi) vi,    x2 = x − x1.
Then x1 ∈ S and, since

x2^T vj = x^T vj − x1^T vj = x^T vj − x^T vj = 0,

we see that x2 is orthogonal to v1, ..., vk and hence to any linear combination of these vectors. In other words, x2 is orthogonal to any vector in S. We have thus shown that S + S^⊥ = R^n. We also have that S ∩ S^⊥ = 0 since the only vector s ∈ S orthogonal to everything in S (i.e., including itself) is 0.

It is also easy to see directly that, when we have such direct sum decompositions, we can write vectors in a unique way with respect to the corresponding subspaces. Suppose, for example, that x = x1 + x2 = x1′ + x2′, where x1, x1′ ∈ S and x2, x2′ ∈ S^⊥. Then (x1′ − x1)^T (x2′ − x2) = 0 by definition of S^⊥. But then (x1′ − x1)^T (x1′ − x1) = 0 since x2′ − x2 = −(x1′ − x1) (which follows by rearranging the equation x1 + x2 = x1′ + x2′). Thus, x1 = x1′ and x2 = x2′. □

Theorem 3.12. Let A : R^n → R^m. Then

1. N(A)^⊥ = R(A^T). (Note: This holds only for finite-dimensional vector spaces.)

2. R(A)^⊥ = N(A^T). (Note: This also holds for infinite-dimensional vector spaces.)

Proof: To prove the first part, take an arbitrary x ∈ N(A). Then Ax = 0 and this is equivalent to y^T A x = 0 for all y. But y^T A x = (A^T y)^T x. Thus, Ax = 0 if and only if x is orthogonal to all vectors of the form A^T y, i.e., x ∈ R(A^T)^⊥. Since x was arbitrary, we have established that N(A)^⊥ = R(A^T).

The proof of the second part is similar and is left as an exercise. □

Definition 3.13. Let A : R^n → R^m. Then {v ∈ R^n : Av = 0} is sometimes called the right nullspace of A. Similarly, {w ∈ R^m : w^T A = 0} is called the left nullspace of A. Clearly, the right nullspace is N(A) while the left nullspace is N(A^T).

Theorem 3.12 and part 2 of Theorem 3.11 can be combined to give two very fundamental and useful decompositions of vectors in the domain and co-domain of a linear transformation A. See also Theorem 2.26.

Theorem 3.14 (Decomposition Theorem). Let A : R^n → R^m. Then

1. every vector v in the domain space R^n can be written in a unique way as v = x + y, where x ∈ N(A) and y ∈ N(A)^⊥ = R(A^T) (i.e., R^n = N(A) ⊕ R(A^T)).

2. every vector w in the co-domain space R^m can be written in a unique way as w = x + y, where x ∈ R(A) and y ∈ R(A)^⊥ = N(A^T) (i.e., R^m = R(A) ⊕ N(A^T)).

This key theorem becomes very easy to remember by carefully studying and understanding Figure 3.1 in the next section.
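The decomposition in part 1 is straightforward to compute: project v onto the row space R(A^T) using any orthonormal basis of that space, and let the remainder be the N(A) component. A small illustrative sketch (matrix and vector chosen arbitrarily):

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])     # rank 1, so N(A) has dimension 2
v = np.array([1.0, 1.0, 1.0])

# Orthonormal basis for the row space R(A^T) from the SVD of A.
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))
row_basis = Vt[:r]                   # rows span R(A^T)

y = row_basis.T @ (row_basis @ v)    # component in R(A^T)
x = v - y                            # component in N(A)
print(np.allclose(A @ x, 0.0))       # True: x lies in N(A)
print(np.allclose(x + y, v))         # True: v = x + y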
3.5 Four Fundamental Subspaces

Consider a general matrix A ∈ R^{m×n}_r (an m × n matrix of rank r). When thought of as a linear transformation from R^n to R^m, many properties of A can be developed in terms of the four fundamental subspaces
R(A), R(A)^⊥, N(A), and N(A)^⊥. Figure 3.1 makes many key properties seem almost obvious and we return to this figure frequently both in the context of linear transformations and in illustrating concepts such as controllability and observability.

Figure 3.1. Four fundamental subspaces.

Definition 3.15. Let V and W be vector spaces and let A : V → W be a linear transformation.

1. A is onto (also called epic or surjective) if R(A) = W.

2. A is one-to-one or 1-1 (also called monic or injective) if N(A) = 0. Two equivalent characterizations of A being 1-1 that are often easier to verify in practice are the following:

   (a) A v1 = A v2 ⟹ v1 = v2.

   (b) v1 ≠ v2 ⟹ A v1 ≠ A v2.

Definition 3.16. Let A : R^n → R^m. Then rank(A) = dim R(A). This is sometimes called the column rank of A (maximum number of independent columns). The row rank of A is
dim R(A^T) (maximum number of independent rows). The dual notion to rank is the nullity of A, sometimes denoted nullity(A) or corank(A), and is defined as dim N(A).

Theorem 3.17. Let A : R^n → R^m. Then dim R(A) = dim N(A)^⊥. (Note: Since N(A)^⊥ = R(A^T), this theorem is sometimes colloquially stated "row rank of A = column rank of A.")

Proof: Define a linear transformation T : N(A)^⊥ → R(A) by

T v = A v for all v ∈ N(A)^⊥.

Clearly T is 1-1 (since N(T) = 0). To see that T is also onto, take any w ∈ R(A). Then by definition there is a vector x ∈ R^n such that Ax = w. Write x = x1 + x2, where x1 ∈ N(A)^⊥ and x2 ∈ N(A). Then A x1 = w = T x1 since x1 ∈ N(A)^⊥. The last equality shows that T is onto. We thus have that dim R(A) = dim N(A)^⊥ since it is easily shown that if {v1, ..., vr} is a basis for N(A)^⊥, then {T v1, ..., T vr} is a basis for R(A). Finally, if we apply this and several previous results, the following string of equalities follows easily: "column rank of A" = rank(A) = dim R(A) = dim N(A)^⊥ = dim R(A^T) = rank(A^T) = "row rank of A." □

The following corollary is immediate. Like the theorem, it is a statement about equality of dimensions; the subspaces themselves are not necessarily in the same vector space.

Corollary 3.18. Let A : R^n → R^m. Then dim N(A) + dim R(A) = n, where n is the dimension of the domain of A.

Proof: From Theorems 3.11 and 3.17 we see immediately that

n = dim N(A) + dim N(A)^⊥
  = dim N(A) + dim R(A). □
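Corollary 3.18 is easy to confirm numerically; the matrix below is an arbitrary example, and the nullspace basis is read off from the SVD:

import numpy as np

A = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0],
              [1.0, 0.0, 1.0, 0.0]])
n = A.shape[1]

U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-12))
null_basis = Vt[rank:]                      # rows form an orthonormal basis of N(A)
print(np.allclose(A @ null_basis.T, 0.0))   # True: these vectors lie in N(A)
print(rank + null_basis.shape[0] == n)      # True: dim R(A) + dim N(A) = n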
For completeness, we include here a few miscellaneous results about ranks of sums and products of matrices.

Theorem 3.19. Let A, B ∈ R^{n×n}. Then

1. 0 ≤ rank(A + B) ≤ rank(A) + rank(B).

2. rank(A) + rank(B) − n ≤ rank(AB) ≤ min{rank(A), rank(B)}.

3. nullity(B) ≤ nullity(AB) ≤ nullity(A) + nullity(B).

4. if B is nonsingular, rank(AB) = rank(BA) = rank(A) and N(BA) = N(A).

Part 4 of Theorem 3.19 suggests looking at the general problem of the four fundamental subspaces of matrix products. The basic results are contained in the following easily proved theorem.
Theorem 3.20. Let A ∈ R^{m×n}, B ∈ R^{n×p}. Then

1. R(AB) ⊆ R(A).

2. N(AB) ⊇ N(B).

3. R((AB)^T) ⊆ R(B^T).

4. N((AB)^T) ⊇ N(A^T).

The next theorem is closely related to Theorem 3.20 and is also easily proved. It is extremely useful in text that follows, especially when dealing with pseudoinverses and linear least squares problems.

Theorem 3.21. Let A ∈ R^{m×n}. Then

1. R(A) = R(AA^T).

2. R(A^T) = R(A^T A).

3. N(A) = N(A^T A).

4. N(A^T) = N(AA^T).
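A quick numerical sanity check of the first two parts of Theorem 3.21 (using an arbitrary rank-deficient matrix): since R(AA^T) ⊆ R(A) and R(A^T A) ⊆ R(A^T) already hold by Theorem 3.20, it suffices to compare dimensions.

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])      # rank 1

print(np.linalg.matrix_rank(A),
      np.linalg.matrix_rank(A @ A.T),    # equals rank(A): R(A) = R(A A^T)
      np.linalg.matrix_rank(A.T @ A))    # equals rank(A): R(A^T) = R(A^T A)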
We now characterize 1-1 and onto transformations and provide characterizations in terms of rank and invertibility.

Theorem 3.22. Let A : R^n → R^m. Then

1. A is onto if and only if rank(A) = m (A has linearly independent rows or is said to have full row rank; equivalently, AA^T is nonsingular).

2. A is 1-1 if and only if rank(A) = n (A has linearly independent columns or is said to have full column rank; equivalently, A^T A is nonsingular).

Proof: Proof of part 1: If A is onto, dim R(A) = m = rank(A). Conversely, let y ∈ R^m be arbitrary. Let x = A^T (AA^T)^{-1} y ∈ R^n. Then y = Ax, i.e., y ∈ R(A), so A is onto.

Proof of part 2: If A is 1-1, then N(A) = 0, which implies that dim N(A)^⊥ = n = dim R(A^T), and hence dim R(A) = n by Theorem 3.17. Conversely, suppose A x1 = A x2. Then A^T A x1 = A^T A x2, which implies x1 = x2 since A^T A is invertible. Thus, A is 1-1. □

Definition 3.23. A : V → W is invertible (or bijective) if and only if it is 1-1 and onto. Note that if A is invertible, then dim V = dim W. Also, A : R^n → R^n is invertible or nonsingular if and only if rank(A) = n.

Note that in the special case when A ∈ R^{n×n}_n (i.e., A is square and nonsingular), the transformations A, A^T, and A^{-1} are all 1-1 and onto between the two spaces N(A)^⊥ and R(A). The transformations A^T and A^{-1} have the same domain and range but are in general different maps unless A is orthogonal. Similar remarks apply to A and A^{-T}.
If a linear transformation is not invertible, it may still be right or left invertible. Definitions of these concepts are followed by a theorem characterizing left and right invertible transformations.

Definition 3.24. Let A : V → W. Then

1. A is said to be right invertible if there exists a right inverse transformation A^{-R} : W → V such that A A^{-R} = I_W, where I_W denotes the identity transformation on W.

2. A is said to be left invertible if there exists a left inverse transformation A^{-L} : W → V such that A^{-L} A = I_V, where I_V denotes the identity transformation on V.

Theorem 3.25. Let A : V → W. Then

1. A is right invertible if and only if it is onto.

2. A is left invertible if and only if it is 1-1.

Moreover, A is invertible if and only if it is both right and left invertible, i.e., both 1-1 and onto, in which case A^{-1} = A^{-R} = A^{-L}.

Note: From Theorem 3.22 we see that if A : R^n → R^m is onto, then a right inverse is given by A^{-R} = A^T (AA^T)^{-1}. Similarly, if A is 1-1, then a left inverse is given by A^{-L} = (A^T A)^{-1} A^T.
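These two formulas are easy to try out numerically; the sketch below uses arbitrary full-rank examples:

import numpy as np

# A wide matrix with full row rank (onto): right inverse A^T (A A^T)^{-1}.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])
A_R = A.T @ np.linalg.inv(A @ A.T)
print(np.allclose(A @ A_R, np.eye(2)))    # True

# A tall matrix with full column rank (1-1): left inverse (A^T A)^{-1} A^T.
B = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [3.0, 1.0]])
B_L = np.linalg.inv(B.T @ B) @ B.T
print(np.allclose(B_L @ B, np.eye(2)))    # True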
Theorem 3.26. Let A : V → V.

1. If there exists a unique right inverse A^{-R} such that A A^{-R} = I, then A is invertible.

2. If there exists a unique left inverse A^{-L} such that A^{-L} A = I, then A is invertible.

Proof: We prove the first part and leave the proof of the second to the reader. Notice the following:

A (A^{-R} + A^{-R} A − I) = A A^{-R} + A A^{-R} A − A
                          = I + I A − A    since A A^{-R} = I
                          = I.

Thus, (A^{-R} + A^{-R} A − I) must be a right inverse and, therefore, by uniqueness it must be the case that A^{-R} + A^{-R} A − I = A^{-R}. But this implies that A^{-R} A = I, i.e., that A^{-R} is a left inverse. It then follows from Theorem 3.25 that A is invertible. □
Example 3.27.

1. Let A = [1 2] : R^2 → R^1. Then A is onto. (Proof: Take any α ∈ R^1; then one can always find v ∈ R^2 such that [1 2][v1; v2] = α.) Obviously A has full row rank (= 1) and A^{-R} = [−1; 1] is a right inverse. Also, it is clear that there are infinitely many right inverses for A. In Chapter 6 we characterize all right inverses of a matrix by characterizing all solutions of the linear matrix equation AR = I.
2. Let A = [1; 2] : R^1 → R^2. Then A is 1-1. (Proof: The only solution to 0 = Av = [1; 2] v is v = 0, whence N(A) = 0, so A is 1-1.) It is now obvious that A has full column rank (= 1) and A^{-L} = [3 −1] is a left inverse. Again, it is clear that there are infinitely many left inverses for A. In Chapter 6 we characterize all left inverses of a matrix by characterizing all solutions of the linear matrix equation LA = I.

3. The matrix

   A = [1 1 ·]
       [2 1 ·]
       [3 1 ·]

   when considered as a linear transformation on R^3, is neither 1-1 nor onto. We give below bases for its four fundamental subspaces.
EXERCISES

1. Let A be the given 2 × 3 matrix and consider A as a linear transformation mapping R^3 to R^2. Find the matrix representation of A with respect to the given bases {b1, b2, b3} of R^3 and {c1, c2} of R^2.

2. Consider the vector space R^{n×n} over R, let S denote the subspace of symmetric matrices, and let R denote the subspace of skew-symmetric matrices. For matrices X, Y ∈ R^{n×n} define their inner product by (X, Y) = Tr(X^T Y). Show that, with respect to this inner product, R = S^⊥.

3. Consider the differentiation operator L defined in Example 3.2.3. Is L 1-1? Is L onto?

4. Prove Theorem 3.4.
5. Prove Theorem 3.11.4.

6. Prove Theorem 3.12.2.
7. Determine bases for the four fundamental subspaces of the given matrix (one of its rows is [2 5 5 3]).
8. Suppose A ∈ R^{m×n} has a left inverse. Show that A^T has a right inverse.
9. Let A be the given matrix. Determine N(A) and R(A). Are they equal? Is this true in general? If this is true in general, prove it; if not, provide a counterexample.

10. Suppose A ∈ R^{9×48}. How many linearly independent solutions can be found to the homogeneous linear system Ax = 0?
11. Modify Figure 3.1 to illustrate the four fundamental subspaces associated with A^T ∈ R^{n×m} thought of as a transformation from R^m to R^n.
Chapter 4

Introduction to the Moore-Penrose Pseudoinverse

In this chapter we give a brief introduction to the Moore-Penrose pseudoinverse, a generalization of the inverse of a matrix. The Moore-Penrose pseudoinverse is defined for any matrix and, as is shown in the following text, brings great notational and conceptual clarity to the study of solutions to arbitrary systems of linear equations and linear least squares problems.

4.1 Definitions and Characterizations

Consider a linear transformation A : X → Y, where X and Y are arbitrary finite-dimensional vector spaces. Define a transformation T : N(A)^⊥ → R(A) by

T x = A x for all x ∈ N(A)^⊥.

Then, as noted in the proof of Theorem 3.17, T is bijective (1-1 and onto), and hence we can define a unique inverse transformation T^{-1} : R(A) → N(A)^⊥. This transformation can be used to give our first definition of A^+, the Moore-Penrose pseudoinverse of A. Unfortunately, the definition neither provides nor suggests a good computational strategy for determining A^+.

Definition 4.1. With A and T as defined above, define a transformation A^+ : Y → X by

A^+ y = T^{-1} y1,

where y = y1 + y2 with y1 ∈ R(A) and y2 ∈ R(A)^⊥. Then A^+ is the Moore-Penrose pseudoinverse of A.

Although X and Y were arbitrary vector spaces above, let us henceforth consider the case X = R^n and Y = R^m. We have thus defined A^+ for all A ∈ R^{m×n}_r. A purely algebraic characterization of A^+ is given in the next theorem, which was proved by Penrose in 1955; see [22].
Theorem 4.2. Let A ∈ R^{m×n}_r. Then G = A^+ if and only if

(P1) AGA = A.

(P2) GAG = G.

(P3) (AG)^T = AG.

(P4) (GA)^T = GA.

Furthermore, A^+ always exists and is unique.

Note that the inverse of a nonsingular matrix satisfies all four Penrose properties. Also, a right or left inverse satisfies no fewer than three of the four properties. Unfortunately, as with Definition 4.1, neither the statement of Theorem 4.2 nor its proof suggests a computational algorithm. However, the Penrose properties do offer the great virtue of providing a checkable criterion in the following sense. Given a matrix G that is a candidate for being the pseudoinverse of A, one need simply verify the four Penrose conditions (P1)-(P4). If G satisfies all four, then by uniqueness, it must be A^+. Such a verification is often relatively straightforward.

Example 4.3. Consider A = [1; 2]. Verify directly that A^+ = [1/5 2/5] satisfies (P1)-(P4). Note that other left inverses (for example, A^{-L} = [3 −1]) satisfy properties (P1), (P2), and (P4) but not (P3).
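The "checkable criterion" is easy to automate; a minimal sketch (the helper function below is illustrative, not from the text) tests a candidate G against (P1)-(P4) for the matrices of Example 4.3:

import numpy as np

def penrose_conditions(A, G, tol=1e-10):
    """Return the truth values of the four Penrose conditions for a candidate G."""
    p1 = np.allclose(A @ G @ A, A, atol=tol)
    p2 = np.allclose(G @ A @ G, G, atol=tol)
    p3 = np.allclose((A @ G).T, A @ G, atol=tol)
    p4 = np.allclose((G @ A).T, G @ A, atol=tol)
    return p1, p2, p3, p4

A = np.array([[1.0], [2.0]])
print(penrose_conditions(A, np.array([[0.2, 0.4]])))    # (True, True, True, True)
print(penrose_conditions(A, np.array([[3.0, -1.0]])))   # (True, True, False, True)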
Still another characterization of A^+ is given in the following theorem, whose proof can be found in [1, p. 19]. While not generally suitable for computer implementation, this characterization can be useful for hand calculation of small examples.

Theorem 4.4. Let A ∈ R^{m×n}_r. Then

A^+ = lim_{δ→0} (A^T A + δ^2 I)^{-1} A^T        (4.1)
    = lim_{δ→0} A^T (A A^T + δ^2 I)^{-1}.       (4.2)

4.2 Examples

Each of the following can be derived or verified by using the above definitions or characterizations.
Example 4.5. A^+ = A^T (A A^T)^{-1} if A is onto (independent rows) (A is right invertible).

Example 4.6. A^+ = (A^T A)^{-1} A^T if A is 1-1 (independent columns) (A is left invertible).
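These two special cases are easily checked against a general-purpose pseudoinverse routine; the matrices below are arbitrary full-rank examples:

import numpy as np

# Full column rank (1-1): A^+ = (A^T A)^{-1} A^T.
A = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [3.0, 1.0]])
print(np.allclose(np.linalg.inv(A.T @ A) @ A.T, np.linalg.pinv(A)))   # True

# Full row rank (onto): B^+ = B^T (B B^T)^{-1}.
B = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])
print(np.allclose(B.T @ np.linalg.inv(B @ B.T), np.linalg.pinv(B)))   # True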
Example 4.7. For any scalar a,

a^+ = 1/a if a ≠ 0,  a^+ = 0 if a = 0.
4.3. Properties and Appl ications 31
Example 4.8. For any vector v e M",
Example 4.9.
Example 4.10.
4.3 Properties and Applications
This section presents some miscellaneous useful results on pseudoinverses. Many of these
are used in the text that follows.
Theorem 4.11. Let A ∈ R^{m×n} and suppose U ∈ R^{m×m}, V ∈ R^{n×n} are orthogonal (M is
orthogonal if M^T = M^{-1}). Then
\[ (U A V)^+ = V^T A^+ U^T . \]
Proof: For the proof, simply verify that the expression above does indeed satisfy each of
the four Penrose conditions. □
Theorem 4.12. Let S ∈ R^{n×n} be symmetric with U^T S U = D, where U is orthogonal and
D is diagonal. Then S^+ = U D^+ U^T, where D^+ is again a diagonal matrix whose diagonal
elements are determined according to Example 4.7.
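A short sketch of Theorem 4.12 for a symmetric S, using np.linalg.eigh for the orthogonal diagonalization (the truncation tolerance and helper name are our choices, not the book's):

```python
import numpy as np

def pinv_symmetric(S, tol=1e-12):
    d, U = np.linalg.eigh(S)                                   # S = U diag(d) U^T
    d_plus = np.array([1.0 / x if abs(x) > tol else 0.0 for x in d])   # Example 4.7 applied entrywise
    return U @ np.diag(d_plus) @ U.T                           # S^+ = U D^+ U^T

S = np.array([[1.0, 1.0], [1.0, 1.0]])
print(pinv_symmetric(S))        # [[0.25, 0.25], [0.25, 0.25]], agreeing with Example 4.10
```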
Theorem 4.13. For all A ∈ R^{m×n},
1. A^+ = (A^T A)^+ A^T = A^T (A A^T)^+.
2. (A^T)^+ = (A^+)^T.
Proof: Both results can be proved using the limit characterization of Theorem 4.4. The
proof of the first result is not particularly easy and does not even have the virtue of being
especially illuminating. The interested reader can consult the proof in [1, p. 27]. The
proof of the second result (which can also be proved easily by verifying the four Penrose
conditions) is as follows:
\[ (A^T)^+ = \lim_{\delta \to 0} (A A^T + \delta^2 I)^{-1} A = \lim_{\delta \to 0} \left[ A^T (A A^T + \delta^2 I)^{-1} \right]^T = \left[ \lim_{\delta \to 0} A^T (A A^T + \delta^2 I)^{-1} \right]^T = (A^+)^T . \qquad \Box \]
Note that by combining Theorems 4.12 and 4.13 we can, in theory at least, compute
the Moore-Penrose pseudoinverse of any matrix (since A A^T and A^T A are symmetric). This
turns out to be a poor approach in finite-precision arithmetic, however (see, e.g., [7], [11],
[23]), and better methods are suggested in text that follows.
Theorem 4.11 is suggestive of a "reverse-order" property for pseudoinverses of products
of matrices such as exists for inverses of products. Unfortunately, in general,
\[ (A B)^+ \neq B^+ A^+ . \]
As an example consider A = [0 \;\; 1] and B = \begin{bmatrix} 1 \\ 1 \end{bmatrix}. Then
\[ (A B)^+ = 1^+ = 1 , \]
while
\[ B^+ A^+ = \left[ \tfrac{1}{2} \;\; \tfrac{1}{2} \right] \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \tfrac{1}{2} . \]
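The failure of the reverse-order law is easy to check directly; a minimal NumPy sketch using the A and B above:

```python
import numpy as np

A = np.array([[0.0, 1.0]])                    # 1 x 2
B = np.array([[1.0], [1.0]])                  # 2 x 1
lhs = np.linalg.pinv(A @ B)                   # (AB)^+ = 1
rhs = np.linalg.pinv(B) @ np.linalg.pinv(A)   # B^+ A^+ = 1/2
print(lhs, rhs)                               # [[1.]] versus [[0.5]]
```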
However, necessary and sufficient conditions under which the reverse-order property does
hold are known and we quote a couple of moderately useful results for reference.
Theorem 4.14. (A B)^+ = B^+ A^+ if and only if
1. R(B B^T A^T) ⊆ R(A^T)
and
2. R(A^T A B) ⊆ R(B).
Proof: For the proof, see [9]. □
Theorem 4.15. (A B)^+ = B_1^+ A_1^+, where B_1 = A^+ A B and A_1 = A B_1 B_1^+.
Proof: For the proof, see [5]. □
Theorem 4.16. If A ∈ R_r^{n×r}, B ∈ R_r^{r×m}, then (A B)^+ = B^+ A^+.
Proof: Since A ∈ R_r^{n×r}, then A^+ = (A^T A)^{-1} A^T, whence A^+ A = I_r. Similarly, since
B ∈ R_r^{r×m}, we have B^+ = B^T (B B^T)^{-1}, whence B B^+ = I_r. The result then follows by
taking B_1 = B, A_1 = A in Theorem 4.15. □
The following theorem gives some additional useful properties of pseudoinverses.
Theorem 4.17. For all A ∈ R^{m×n},
1. (A^+)^+ = A.
2. (A^T A)^+ = A^+ (A^T)^+, (A A^T)^+ = (A^T)^+ A^+.
3. R(A^+) = R(A^T) = R(A^+ A) = R(A^T A).
4. N(A^+) = N(A A^+) = N((A A^T)^+) = N(A A^T) = N(A^T).
5. If A is normal, then A^k A^+ = A^+ A^k and (A^k)^+ = (A^+)^k for all integers k > 0.
Note: Recall that A ∈ R^{n×n} is normal if A A^T = A^T A. For example, if A is symmetric,
skew-symmetric, or orthogonal, then it is normal. However, a matrix can be none of the
preceding but still be normal, such as
\[ A = \begin{bmatrix} a & b \\ -b & a \end{bmatrix} \]
for scalars a, b ∈ R.
The next theorem is fundamental to facilitating a compact and unifying approach
to studying the existence of solutions of (matrix) linear equations and linear least squares
problems.
Theorem 4.18. Suppose A ∈ R^{n×p}, B ∈ R^{n×m}. Then R(B) ⊆ R(A) if and only if
A A^+ B = B.
Proof: Suppose R(B) ⊆ R(A) and take arbitrary x ∈ R^m. Then Bx ∈ R(B) ⊆ R(A), so
there exists a vector y ∈ R^p such that Ay = Bx. Then we have
\[ B x = A y = A A^+ A y = A A^+ B x , \]
where one of the Penrose properties is used above. Since x was arbitrary, we have shown
that B = A A^+ B.
To prove the converse, assume that A A^+ B = B and take arbitrary y ∈ R(B). Then
there exists a vector x ∈ R^m such that Bx = y, whereupon
\[ y = B x = A A^+ B x \in R(A) . \qquad \Box \]
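Theorem 4.18 gives an immediately computable test for range inclusion. A minimal sketch (helper name and tolerance are ours):

```python
import numpy as np

def range_included(B, A, tol=1e-10):
    """True when R(B) is contained in R(A), via the test AA^+B = B of Theorem 4.18."""
    return np.allclose(A @ np.linalg.pinv(A) @ B, B, atol=tol)

A = np.array([[1.0, 0.0], [0.0, 0.0]])
print(range_included(np.array([[2.0], [0.0]]), A))   # True: the column lies in R(A)
print(range_included(np.array([[0.0], [1.0]]), A))   # False
```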
EXERCISES
1. Use Theorem 4.4 to compute the pseudoinverse of \begin{bmatrix} 2 & 2 \\ 1 & 1 \end{bmatrix}.
2. If jc, y e R", show that (xy
T
)
+
= (x
T
x)
+
(y
T
y)
+
yx
T
.
3. For A e R
mxn
, prove that 7£(A) = 7£(AA
r
) using only definitions and elementary
properties of the Moore-Penrose pseudoinverse.
4. For A e R
mxn
, prove that ft(A+) = ft(A
r
).
5. For A e R
pxn
and 5 € R
mx
", show that JV(A) C A/"(S) if and only if fiA+A = B.
6. Let A G M"
xn
, 5 e E
nxm
, and D € E
mxm
and suppose further that D is nonsingular.
(a) Prove or disprove that
(b) Prove or disprove that
Chapter 5
Introduction to the Singular
Value Decomposition
In this chapter we give a brief introduction to the singular value decomposition (SVD). We
show that every matrix has an SVD and describe some useful properties and applications
of this important matrix factorization. The SVD plays a key conceptual and computational
role throughout (numerical) linear algebra and its applications.
5.1 The Fundamental Theorem
Theorem 5.1. Let A ∈ R^{m×n}. Then there exist orthogonal matrices U ∈ R^{m×m} and
V ∈ R^{n×n} such that
\[ A = U \Sigma V^T , \]   (5.1)
where \Sigma = \begin{bmatrix} S & 0 \\ 0 & 0 \end{bmatrix}, S = diag(σ_1, ..., σ_r) ∈ R^{r×r}, and σ_1 ≥ ··· ≥ σ_r > 0. More
specifically, we have
\[ A = [U_1 \;\; U_2] \begin{bmatrix} S & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_1^T \\ V_2^T \end{bmatrix} \]   (5.2)
\[ \;\;\; = U_1 S V_1^T . \]   (5.3)
The submatrix sizes are all determined by r (which must be ≤ min{m, n}), i.e., U_1 ∈ R^{m×r},
U_2 ∈ R^{m×(m−r)}, V_1 ∈ R^{n×r}, V_2 ∈ R^{n×(n−r)}, and the 0-subblocks in Σ are compatibly
dimensioned.
Proof: Since A^T A ≥ 0 (A^T A is symmetric and nonnegative definite; recall, for example,
[24, Ch. 6]), its eigenvalues are all real and nonnegative. (Note: The rest of the proof follows
analogously if we start with the observation that A A^T ≥ 0 and the details are left to the reader
as an exercise.) Denote the set of eigenvalues of A^T A by {σ_i^2, i ∈ n} with σ_1 ≥ ··· ≥ σ_r >
0 = σ_{r+1} = ··· = σ_n. Let {v_i, i ∈ n} be a set of corresponding orthonormal eigenvectors
and let V_1 = [v_1, ..., v_r], V_2 = [v_{r+1}, ..., v_n]. Letting S = diag(σ_1, ..., σ_r), we can
write A^T A V_1 = V_1 S^2. Premultiplying by V_1^T gives V_1^T A^T A V_1 = V_1^T V_1 S^2 = S^2, the latter
equality following from the orthonormality of the v_i vectors. Pre- and postmultiplying by
S^{-1} gives the equation
\[ S^{-1} V_1^T A^T A V_1 S^{-1} = I . \]   (5.4)
Turning now to the eigenvalue equations corresponding to the eigenvalues σ_{r+1}, ..., σ_n we
have that A^T A V_2 = V_2 · 0 = 0, whence V_2^T A^T A V_2 = 0. Thus, A V_2 = 0. Now define the
matrix U_1 ∈ R^{m×r} by U_1 = A V_1 S^{-1}. Then from (5.4) we see that U_1^T U_1 = I; i.e., the
columns of U_1 are orthonormal. Choose any matrix U_2 ∈ R^{m×(m−r)} such that [U_1 U_2] is
orthogonal. Then
\[ U^T A V = \begin{bmatrix} U_1^T A V_1 & U_1^T A V_2 \\ U_2^T A V_1 & U_2^T A V_2 \end{bmatrix} = \begin{bmatrix} U_1^T A V_1 & 0 \\ U_2^T A V_1 & 0 \end{bmatrix} \]
since A V_2 = 0. Referring to the equation U_1 = A V_1 S^{-1} defining U_1, we see that U_1^T A V_1 =
S and U_2^T A V_1 = U_2^T U_1 S = 0. The latter equality follows from the orthogonality of the
columns of U_1 and U_2. Thus, we see that, in fact, U^T A V = \begin{bmatrix} S & 0 \\ 0 & 0 \end{bmatrix}, and defining this matrix
to be Σ completes the proof. □
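The construction in the proof can be retraced numerically for a full-column-rank A (so that r = n and V_2 is empty); np.linalg.eigh returns eigenvalues in ascending order, so they are reordered. This is only an illustration of the argument, not a recommended algorithm (see Remark 5.6 below):

```python
import numpy as np

A = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])   # full column rank, r = n = 2
lam, V1 = np.linalg.eigh(A.T @ A)                    # eigenvalues ascending
lam, V1 = lam[::-1], V1[:, ::-1]                     # reorder to sigma_1 >= sigma_2
S = np.diag(np.sqrt(lam))
U1 = A @ V1 @ np.linalg.inv(S)                       # U1 = A V1 S^{-1}, orthonormal columns by (5.4)
print(np.allclose(U1.T @ U1, np.eye(2)))             # True
print(np.allclose(U1 @ S @ V1.T, A))                 # the compact SVD (5.3) reproduces A
```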
Definition 5.2. Let A = U Σ V^T be an SVD of A as in Theorem 5.1.
1. The set {σ_1, ..., σ_r} is called the set of (nonzero) singular values of the matrix A and
is denoted Σ(A). From the proof of Theorem 5.1 we see that σ_i(A) = λ_i^{1/2}(A^T A) =
λ_i^{1/2}(A A^T). Note that there are also min{m, n} − r zero singular values.
2. The columns of U are called the left singular vectors of A (and are the orthonormal
eigenvectors of A A^T).
3. The columns of V are called the right singular vectors of A (and are the orthonormal
eigenvectors of A^T A).
Remark 5.3. The analogous complex case in which A ∈ C^{m×n} is quite straightforward.
The decomposition is A = U Σ V^H, where U and V are unitary and the proof is essentially
identical, except for Hermitian transposes replacing transposes.
Remark 5.4. Note that U and V can be interpreted as changes of basis in both the domain
and co-domain spaces with respect to which A then has a diagonal matrix representation.
Specifically, let C denote A thought of as a linear transformation mapping R^n to R^m. Then
rewriting A = U Σ V^T as A V = U Σ we see that Mat C is Σ with respect to the bases
{v_1, ..., v_n} for R^n and {u_1, ..., u_m} for R^m (see the discussion in Section 3.2). See also
Remark 5.16.
Remark 5.5. The singular value decomposition is not unique. For example, an examination
of the proof of Theorem 5.1 reveals that
• any orthonormal basis for N(A) can be used for V_2.
• there may be nonuniqueness associated with the columns of V_1 (and hence U_1) corresponding
to multiple σ_i's.
• any U_2 can be used so long as [U_1 U_2] is orthogonal.
• columns of U and V can be changed (in tandem) by sign (or multiplier of the form
e^{jθ} in the complex case).
What is unique, however, is the matrix Σ and the span of the columns of U_1, U_2, V_1, and
V_2 (see Theorem 5.11). Note, too, that a "full SVD" (5.2) can always be constructed from
a "compact SVD" (5.3).
Remark 5.6. Computing an SVD by working directly with the eigenproblem for A^T A or
A A^T is numerically poor in finite-precision arithmetic. Better algorithms exist that work
directly on A via a sequence of orthogonal transformations; see, e.g., [7], [11], [25].
Example 5.7.
\[ A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = U I U^T , \]
where U is an arbitrary 2 × 2 orthogonal matrix, is an SVD.
Example 5.8.
\[ A = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{bmatrix} , \]
where θ is arbitrary, is an SVD.
Example 5.9.
\[ A = \begin{bmatrix} 1 & 1 \\ 2 & 2 \\ 2 & 2 \end{bmatrix} = \begin{bmatrix} \frac{1}{3} & -\frac{2\sqrt{5}}{5} & \frac{2\sqrt{5}}{15} \\ \frac{2}{3} & \frac{\sqrt{5}}{5} & \frac{4\sqrt{5}}{15} \\ \frac{2}{3} & 0 & -\frac{\sqrt{5}}{3} \end{bmatrix} \begin{bmatrix} 3\sqrt{2} & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2} \end{bmatrix} = \begin{bmatrix} \frac{1}{3} \\ \frac{2}{3} \\ \frac{2}{3} \end{bmatrix} 3\sqrt{2} \left[ \frac{\sqrt{2}}{2} \;\; \frac{\sqrt{2}}{2} \right] \]
is an SVD.
Example 5.10. Let A ∈ R^{n×n} be symmetric and positive definite. Let V be an orthogonal
matrix of eigenvectors that diagonalizes A, i.e., V^T A V = Λ > 0. Then A = V Λ V^T is an
SVD of A.
A factorization U Σ V^T of an m × n matrix A qualifies as an SVD if U and V are
orthogonal and Σ is an m × n "diagonal" matrix whose diagonal elements in the upper
left corner are positive (and ordered). For example, if A = U Σ V^T is an SVD of A, then
V Σ^T U^T is an SVD of A^T.
5.2 Some Basic Properties
Theorem 5.11. Let A ∈ R^{m×n} have a singular value decomposition A = U Σ V^T. Using
the notation of Theorem 5.1, the following properties hold:
1. rank(A) = r = the number of nonzero singular values of A.
2. Let U = [u_1, ..., u_m] and V = [v_1, ..., v_n]. Then A has the dyadic (or outer
product) expansion
\[ A = \sum_{i=1}^{r} \sigma_i u_i v_i^T . \]   (5.5)
3. The singular vectors satisfy the relations
\[ A v_i = \sigma_i u_i , \]   (5.6)
\[ A^T u_i = \sigma_i v_i \]   (5.7)
for i ∈ r.
4. Let U_1 = [u_1, ..., u_r], U_2 = [u_{r+1}, ..., u_m], V_1 = [v_1, ..., v_r], and V_2 = [v_{r+1}, ..., v_n].
Then
(a) R(U_1) = R(A) = N(A^T)^⊥.
(b) R(U_2) = R(A)^⊥ = N(A^T).
(c) R(V_1) = N(A)^⊥ = R(A^T).
(d) R(V_2) = N(A) = R(A^T)^⊥.
Remark 5.12. Part 4 of the above theorem provides a numerically superior method for
finding (orthonormal) bases for the four fundamental subspaces compared to methods based
on, for example, reduction to row or column echelon form. Note that each subspace requires
knowledge of the rank r. The relationship to the four fundamental subspaces is summarized
nicely in Figure 5.1.
Remark 5.13. The elegance of the dyadic decomposition (5.5) as a sum of outer products
and the key vector relations (5.6) and (5.7) explain why it is conventional to write the SVD
as A = U Σ V^T rather than, say, A = U Σ V.
Theorem 5.14. Let A ∈ R^{m×n} have a singular value decomposition A = U Σ V^T as in
Theorem 5.1. Then
\[ A^+ = V \Sigma^+ U^T , \]   (5.8)
where
Figure 5.1. SVD and the four fundamental subspaces.
\[ \Sigma^+ = \begin{bmatrix} S^{-1} & 0 \\ 0 & 0 \end{bmatrix} \in R^{n \times m} , \]   (5.9)
with the 0-subblocks appropriately sized. Furthermore, if we let the columns of U and V
be as defined in Theorem 5.11, then
\[ A^+ = \sum_{i=1}^{r} \frac{1}{\sigma_i} v_i u_i^T . \]   (5.10)
Proof: The proof follows easily by verifying the four Penrose conditions. □
Remark 5.15. Note that none of the expressions above quite qualifies as an SVD of A^+
if we insist that the singular values be ordered from largest to smallest. However, a simple
reordering accomplishes the task:
\[ A^+ = \sum_{i=1}^{r} \frac{1}{\sigma_{r+1-i}} v_{r+1-i} u_{r+1-i}^T . \]   (5.11)
This can also be written in matrix terms by using the so-called reverse-order identity matrix
(or exchange matrix) P = [e_r, e_{r−1}, ..., e_2, e_1], which is clearly orthogonal and symmetric.
Then
\[ A^+ = (V_1 P)(P S^{-1} P)(P U_1^T) \]
is the matrix version of (5.11). A "full SVD" can be similarly constructed.
Remark 5.16. Recall the linear transformation T used in the proof of Theorem 3.17 and
in Definition 4.1. Since T is determined by its action on a basis, and since {v_1, ..., v_r} is a
basis for N(A)^⊥, then T can be defined by T v_i = σ_i u_i, i ∈ r. Similarly, since {u_1, ..., u_r}
is a basis for R(A), then T^{-1} can be defined by T^{-1} u_i = (1/σ_i) v_i, i ∈ r. From Section 3.2, the
matrix representation for T with respect to the bases {v_1, ..., v_r} and {u_1, ..., u_r} is clearly
S, while the matrix representation for the inverse linear transformation T^{-1} with respect to
the same bases is S^{-1}.
5.3 Row and Column Compressions
Row compression
Let A ∈ R^{m×n} have an SVD given by (5.1). Then
\[ U^T A = \Sigma V^T = \begin{bmatrix} S & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_1^T \\ V_2^T \end{bmatrix} = \begin{bmatrix} S V_1^T \\ 0 \end{bmatrix} \in R^{m \times n} . \]
Notice that N(A) = N(U^T A) = N(S V_1^T) and the matrix S V_1^T ∈ R^{r×n} has full row
rank. In other words, premultiplication of A by U^T is an orthogonal transformation that
"compresses" A by row transformations. Such a row compression can also be accomplished
by orthogonal row transformations performed directly on A to reduce it to the form \begin{bmatrix} R \\ 0 \end{bmatrix},
where R is upper triangular. Both compressions are analogous to the so-called row-reduced
echelon form which, when derived by a Gaussian elimination algorithm implemented in
finite-precision arithmetic, is not generally as reliable a procedure.
Column compression
Again, let A ∈ R^{m×n} have an SVD given by (5.1). Then
\[ A V = U \Sigma = [U_1 \;\; U_2] \begin{bmatrix} S & 0 \\ 0 & 0 \end{bmatrix} = [U_1 S \;\; 0] \in R^{m \times n} . \]
This time, notice that R(A) = R(A V) = R(U_1 S) and the matrix U_1 S ∈ R^{m×r} has full
column rank. In other words, postmultiplication of A by V is an orthogonal transformation
that "compresses" A by column transformations. Such a compression is analogous to the
so-called column-reduced echelon form, which is not generally a reliable procedure when
performed by Gauss transformations in finite-precision arithmetic. For details, see, for
example, [7], [11], [23], [25].
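Both compressions are directly visible numerically; a minimal sketch of the SVD-based versions (the test matrix and tolerance are ours):

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0], [2.0, 2.0, 0.0]])   # rank 1
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))
row_compressed = U.T @ A      # equals Sigma V^T: rows below r are (numerically) zero
col_compressed = A @ Vt.T     # equals U Sigma: columns beyond r are (numerically) zero
print(np.round(row_compressed, 12))
print(np.round(col_compressed, 12))
```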
EXERCISES
1. Let X ∈ R^{m×n}. If X^T X = 0, show that X = 0.
2. Prove Theorem 5.1 starting from the observation that A A^T ≥ 0.
3. Let A ∈ R^{n×n} be symmetric but indefinite. Determine an SVD of A.
4. Let x ∈ R^m, y ∈ R^n be nonzero vectors. Determine an SVD of the matrix A ∈ R^{m×n}
defined by A = x y^T.
5. Determine SVDs of the matrices
(a) \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}
(b) \begin{bmatrix} 0 \\ 1 \end{bmatrix}
6. Let A ∈ R^{m×n} and suppose W ∈ R^{m×m} and Y ∈ R^{n×n} are orthogonal.
(a) Show that A and W A Y have the same singular values (and hence the same rank).
(b) Suppose that W and Y are nonsingular but not necessarily orthogonal. Do A
and W A Y have the same singular values? Do they have the same rank?
7. Let A ∈ R_n^{n×n}. Use the SVD to determine a polar factorization of A, i.e., A = Q P
where Q is orthogonal and P = P^T > 0. Note: this is analogous to the polar form
z = r e^{iθ} of a complex scalar z (where i = j = √−1).
Chapter 6
Linear Equations
In this chapter we examine existence and uniqueness of solutions of systems of linear
equations. General linear systems of the form
\[ A X = B ; \quad A \in R^{m \times n} , \; B \in R^{m \times k} , \]   (6.1)
are studied and include, as a special case, the familiar vector system
\[ A x = b ; \quad A \in R^{n \times n} , \; b \in R^{n} . \]   (6.2)
6.1 Vector Linear Equations
We begin with a review of some of the principal results associated with vector linear systems.
Theorem 6.1. Consider the system of linear equations
\[ A x = b ; \quad A \in R^{m \times n} , \; b \in R^{m} . \]   (6.3)
1. There exists a solution to (6.3) if and only if b ∈ R(A).
2. There exists a solution to (6.3) for all b ∈ R^m if and only if R(A) = R^m, i.e., A is
onto; equivalently, there exists a solution if and only if rank([A, b]) = rank(A), and
this is possible only if m ≤ n (since m = dim R(A) = rank(A) ≤ min{m, n}).
3. A solution to (6.3) is unique if and only if N(A) = 0, i.e., A is 1-1.
4. There exists a unique solution to (6.3) for all b ∈ R^m if and only if A is nonsingular;
equivalently, A ∈ R^{m×m} and A has neither a 0 singular value nor a 0 eigenvalue.
5. There exists at most one solution to (6.3) for all b ∈ R^m if and only if the columns of
A are linearly independent, i.e., N(A) = 0, and this is possible only if m ≥ n.
6. There exists a nontrivial solution to the homogeneous system Ax = 0 if and only if
rank(A) < n.
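The rank test in part 2 is easy to carry out in floating point; a minimal sketch (helper name and tolerance are ours):

```python
import numpy as np

def solvable(A, b, tol=1e-12):
    """Part 1/2 of Theorem 6.1: b lies in R(A) iff rank([A, b]) == rank(A)."""
    return (np.linalg.matrix_rank(np.column_stack([A, b]), tol)
            == np.linalg.matrix_rank(A, tol))

A = np.array([[1.0, 0.0], [0.0, 0.0]])
print(solvable(A, np.array([1.0, 0.0])))   # True
print(solvable(A, np.array([0.0, 1.0])))   # False
```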
Proof: The proofs are straightforward and can be consulted in standard texts on linear
algebra. Note that some parts of the theorem follow directly from others. For example, to
prove part 6, note that x = 0 is always a solution to the homogeneous system. Therefore, we
must have the case of a nonunique solution, i.e., A is not 1-1, which implies rank(A) < n
by part 3. □
6.2 Matrix Linear Equations
In this section we present some of the principal results concerning existence and uniqueness
of solutions to the general matrix linear system (6.1). Note that the results of Theorem
6.1 follow from those below for the special case k = 1, while results for (6.2) follow by
specializing even further to the case m = n.
Theorem 6.2 (Existence). The matrix linear equation
\[ A X = B ; \quad A \in R^{m \times n} , \; B \in R^{m \times k} , \]   (6.4)
has a solution if and only if R(B) ⊆ R(A); equivalently, a solution exists if and only if
A A^+ B = B.
Proof: The subspace inclusion criterion follows essentially from the definition of the range
of a matrix. The matrix criterion is Theorem 4.18. □
Theorem 6.3. Let A ∈ R^{m×n}, B ∈ R^{m×k} and suppose that A A^+ B = B. Then any matrix
of the form
\[ X = A^+ B + (I - A^+ A) Y , \quad \text{where } Y \in R^{n \times k} \text{ is arbitrary} , \]   (6.5)
is a solution of
\[ A X = B . \]   (6.6)
Furthermore, all solutions of (6.6) are of this form.
Proof: To verify that (6.5) is a solution, premultiply by A:
\[ A X = A A^+ B + A (I - A^+ A) Y = B + (A - A A^+ A) Y = B , \]
where the second equality uses the hypothesis A A^+ B = B and the last follows since
A A^+ A = A by the first Penrose condition.
That all solutions are of this form can be seen as follows. Let Z be an arbitrary solution of
(6.6), i.e., A Z = B. Then we can write
\[ Z = A^+ A Z + (I - A^+ A) Z = A^+ B + (I - A^+ A) Z , \]
and this is clearly of the form (6.5). □
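A minimal sketch of the general solution (6.5) in NumPy (the particular A, B, and random Y are our choices): every choice of Y produces a solution of AX = B.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 1.0, 0.0], [0.0, 0.0, 1.0]])   # onto, so AA^+B = B automatically
B = np.array([[2.0], [3.0]])
Ap = np.linalg.pinv(A)
Y = rng.standard_normal((3, 1))                    # arbitrary Y in (6.5)
X = Ap @ B + (np.eye(3) - Ap @ A) @ Y
print(np.allclose(A @ X, B))                       # True for every Y
```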
Remark 6.4. When A is square and nonsingular, A^+ = A^{-1} and so (I − A^+ A) = 0. Thus,
there is no "arbitrary" component, leaving only the unique solution X = A^{-1} B.
Remark 6.5. It can be shown that the particular solution X = A^+ B is the solution of (6.6)
that minimizes Tr X^T X. (Tr(·) denotes the trace of a matrix; recall that Tr X^T X = Σ_{i,j} x_{ij}^2.)
Theorem 6.6 (Uniqueness). A solution of the matrix linear equation
\[ A X = B ; \quad A \in R^{m \times n} , \; B \in R^{m \times k} \]   (6.7)
is unique if and only if A^+ A = I; equivalently, (6.7) has a unique solution if and only if
N(A) = 0.
Proof: The first equivalence is immediate from Theorem 6.3. The second follows by noting
that A^+ A = I can occur only if r = n, where r = rank(A) (recall r ≤ n). But rank(A) = n
if and only if A is 1-1 or N(A) = 0. □
Example 6.7. Suppose A ∈ R^{n×n}. Find all solutions of the homogeneous system Ax = 0.
Solution:
\[ x = A^+ 0 + (I - A^+ A) y = (I - A^+ A) y , \]
where y ∈ R^n is arbitrary. Hence, there exists a nonzero solution if and only if A^+ A ≠ I.
This is equivalent to either rank(A) = r < n or A being singular. Clearly, if there exists a
nonzero solution, it is not unique.
Computation: Since y is arbitrary, it is easy to see that all solutions are generated
from a basis for R(I − A^+ A). But if A has an SVD given by A = U Σ V^T, then it is easily
checked that I − A^+ A = V_2 V_2^T and R(V_2 V_2^T) = R(V_2) = N(A).
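The computation above can be checked directly: the trailing right singular vectors V_2 span N(A), and V_2 V_2^T reproduces I − A^+ A. A minimal sketch (test matrix and tolerance are ours):

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))
V2 = Vt[r:].T                          # orthonormal basis for N(A)
print(np.allclose(A @ V2, 0))          # every column of V2 solves Ax = 0
print(np.allclose(np.eye(3) - np.linalg.pinv(A) @ A, V2 @ V2.T))   # I - A^+A = V2 V2^T
```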
Example 6.8. Characterize all right inverses of a matrix A ∈ R^{m×n}; equivalently, find all
solutions R of the equation A R = I_m. Here, we write I_m to emphasize the m × m identity
matrix.
Solution: There exists a right inverse if and only if R(I_m) ⊆ R(A) and this is
equivalent to A A^+ I_m = I_m. Clearly, this can occur if and only if rank(A) = r = m (since
r ≤ m) and this is equivalent to A being onto (A^+ is then a right inverse). All right inverses
of A are then of the form
\[ R = A^+ I_m + (I_n - A^+ A) Y = A^+ + (I - A^+ A) Y , \]
where Y ∈ R^{n×m} is arbitrary. There is a unique right inverse if and only if A^+ A = I
(N(A) = 0), in which case A must be invertible and R = A^{-1}.
Example 6.9. Consider the system of linear first-order difference equations
\[ x_{k+1} = A x_k + B u_k \]   (6.8)
with A ∈ R^{n×n} and B ∈ R^{n×m} (n ≥ 1, m ≥ 1). The vector x_k in linear system theory is
known as the state vector at time k while u_k is the input (control) vector. The general
solution of (6.8) is given by
\[ x_k = A^k x_0 + \sum_{j=0}^{k-1} A^{k-1-j} B u_j \]   (6.9)
\[ \;\;\;\; = A^k x_0 + [B, A B, \ldots, A^{k-1} B] \begin{bmatrix} u_{k-1} \\ u_{k-2} \\ \vdots \\ u_0 \end{bmatrix} \]   (6.10)
for k ≥ 1. We might now ask the question: Given x_0 = 0, does there exist an input sequence
{u_j}_{j=0}^{k-1} such that x_k takes an arbitrary value in R^n? In linear system theory, this is a question
of reachability. Since m ≥ 1, from the fundamental Existence Theorem, Theorem 6.2, we
see that (6.8) is reachable if and only if
\[ R([B, A B, \ldots, A^{n-1} B]) = R^n \]
or, equivalently, if and only if
\[ \mathrm{rank}\,[B, A B, \ldots, A^{n-1} B] = n . \]
A related question is the following: Given an arbitrary initial vector x_0, does there exist
an input sequence {u_j}_{j=0}^{n-1} such that x_n = 0? In linear system theory, this is called
controllability. Again from Theorem 6.2, we see that (6.8) is controllable if and only if
\[ R(A^n) \subseteq R([B, A B, \ldots, A^{n-1} B]) . \]
Clearly, reachability always implies controllability and, if A is nonsingular, controllability
and reachability are equivalent. The matrices A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} and B = \begin{bmatrix} 1 \\ 0 \end{bmatrix} provide an
example of a system that is controllable but not reachable.
The above are standard conditions with analogues for continuous-time models (i.e.,
linear differential equations). There are many other algebraically equivalent conditions.
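The reachability test is simply a rank computation on the block matrix [B, AB, ..., A^{n−1}B]. A minimal sketch (helper name is ours; A and B are the controllable-but-not-reachable pair as reconstructed above):

```python
import numpy as np

def reachability_matrix(A, B):
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)              # [B, AB, ..., A^{n-1}B]

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[1.0], [0.0]])
R = reachability_matrix(A, B)
print(np.linalg.matrix_rank(R))           # 1 < n = 2, so the pair is not reachable
```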
Example 6.10. We now introduce an output vector y_k to the system (6.8) of Example 6.9
by appending the equation
\[ y_k = C x_k + D u_k , \]   (6.11)
with C ∈ R^{p×n} and D ∈ R^{p×m} (p ≥ 1). We can then pose some new questions about the
overall system that are dual in the system-theoretic sense to reachability and controllability.
The answers are cast in terms that are dual in the linear algebra sense as well. The condition
dual to reachability is called observability: When does knowledge of {u_j}_{j=0}^{n-1} and {y_j}_{j=0}^{n-1}
suffice to determine (uniquely) x_0? As a dual to controllability, we have the notion of
reconstructibility: When does knowledge of {u_j}_{j=0}^{n-1} and {y_j}_{j=0}^{n-1} suffice to determine
(uniquely) x_n? The fundamental duality result from linear system theory is the following:
(A, B) is reachable [controllable] if and only if (A^T, B^T) is observable [reconstructible].
To derive a condition for observability, notice that
\[ y_k = C A^k x_0 + \sum_{j=0}^{k-1} C A^{k-1-j} B u_j + D u_k . \]   (6.12)
Thus,
\[ \begin{bmatrix} y_0 - D u_0 \\ y_1 - C B u_0 - D u_1 \\ \vdots \\ y_{n-1} - \sum_{j=0}^{n-2} C A^{n-2-j} B u_j - D u_{n-1} \end{bmatrix} = \begin{bmatrix} C \\ C A \\ \vdots \\ C A^{n-1} \end{bmatrix} x_0 . \]   (6.13)
Let v denote the (known) vector on the left-hand side of (6.13) and let R denote the matrix on
the right-hand side. Then, by definition, v ∈ R(R), so a solution exists. By the fundamental
Uniqueness Theorem, Theorem 6.6, the solution is then unique if and only if N(R) = 0,
or, equivalently, if and only if
\[ \mathrm{rank} \begin{bmatrix} C \\ C A \\ \vdots \\ C A^{n-1} \end{bmatrix} = n . \]
6.3 A More General Matrix Linear Equation
Theorem 6.11. Let A ∈ R^{m×n}, B ∈ R^{m×q}, and C ∈ R^{p×q}. Then the equation
\[ A X C = B \]   (6.14)
has a solution if and only if A A^+ B C^+ C = B, in which case the general solution is of the
form
\[ X = A^+ B C^+ + Y - A^+ A Y C C^+ , \]   (6.15)
where Y ∈ R^{n×p} is arbitrary.
A compact matrix criterion for uniqueness of solutions to (6.14) requires the notion
of the Kronecker product of matrices for its statement. Such a criterion (C C^+ ⊗ A^+ A = I)
is stated and proved in Theorem 13.27.
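A minimal sketch of Theorem 6.11 in NumPy (the particular A, B, C are our choices, picked so that the solvability test passes): when A A^+ B C^+ C = B, the choice Y = 0 in (6.15) gives the particular solution X = A^+ B C^+.

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 0.0]])
C = np.array([[1.0, 1.0]])
B = np.array([[2.0, 2.0], [0.0, 0.0]])
Ap, Cp = np.linalg.pinv(A), np.linalg.pinv(C)
if np.allclose(A @ Ap @ B @ Cp @ C, B):     # solvability test of Theorem 6.11
    X = Ap @ B @ Cp                         # particular solution (Y = 0)
    print(np.allclose(A @ X @ C, B))        # True
```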
6.4 Some Useful and Interesting Inverses
In many applications, the coefficient matrices of interest are square and nonsingular. Listed
below is a small collection of useful matrix identities, particularly for block matrices, associated
with matrix inverses. In these identities, A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{m×n},
and D ∈ R^{m×m}. Invertibility is assumed for any component or subblock whose inverse is
indicated. Verification of each identity is recommended as an exercise for the reader.
1. (A + B D C)^{-1} = A^{-1} − A^{-1} B (D^{-1} + C A^{-1} B)^{-1} C A^{-1}.
This result is known as the Sherman-Morrison-Woodbury formula. It has many
applications (and is frequently "rediscovered") including, for example, formulas for
the inverse of a sum of matrices such as (A + D)^{-1} or (A^{-1} + D^{-1})^{-1}. It also
yields very efficient "updating" or "downdating" formulas in expressions such as
(A + x x^T)^{-1} (with symmetric A ∈ R^{n×n} and x ∈ R^n) that arise in optimization
theory.
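The Sherman-Morrison-Woodbury formula is easy to spot-check numerically; a minimal sketch for a generic (invertible) random instance, with fixed seed so the run is reproducible:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 2
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = np.eye(m)
Ai = np.linalg.inv(A)
lhs = np.linalg.inv(A + B @ D @ C)
rhs = Ai - Ai @ B @ np.linalg.inv(np.linalg.inv(D) + C @ Ai @ B) @ C @ Ai
print(np.allclose(lhs, rhs))    # True
```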
2. \begin{bmatrix} I & B \\ 0 & I \end{bmatrix}^{-1} = \begin{bmatrix} I & -B \\ 0 & I \end{bmatrix}.
3. \begin{bmatrix} I & B \\ 0 & -I \end{bmatrix}^{-1} = \begin{bmatrix} I & B \\ 0 & -I \end{bmatrix}, \qquad \begin{bmatrix} -I & B \\ 0 & I \end{bmatrix}^{-1} = \begin{bmatrix} -I & B \\ 0 & I \end{bmatrix}.
Both of these matrices satisfy the matrix equation X^2 = I from which it is obvious
that X^{-1} = X. Note that the positions of the I and −I blocks may be exchanged.
4. \begin{bmatrix} A & B \\ 0 & D \end{bmatrix}^{-1} = \begin{bmatrix} A^{-1} & -A^{-1} B D^{-1} \\ 0 & D^{-1} \end{bmatrix}.
5. \begin{bmatrix} I & 0 \\ C & I \end{bmatrix}^{-1} = \begin{bmatrix} I & 0 \\ -C & I \end{bmatrix}.
6. \begin{bmatrix} I + B C & B \\ C & I \end{bmatrix}^{-1} = \begin{bmatrix} I & -B \\ -C & I + C B \end{bmatrix}.
7. \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} A^{-1} + A^{-1} B E C A^{-1} & -A^{-1} B E \\ -E C A^{-1} & E \end{bmatrix},
where E = (D − C A^{-1} B)^{-1} (E is the inverse of the Schur complement of A). This
result follows easily from the block LU factorization in property 16 of Section 1.4.
8. \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} F & -F B D^{-1} \\ -D^{-1} C F & D^{-1} + D^{-1} C F B D^{-1} \end{bmatrix},
where F = (A − B D^{-1} C)^{-1}. This result follows easily from the block UL factorization
in property 17 of Section 1.4.
EXERCISES
1. As in Example 6.8, characterize all left inverses of a matrix A ∈ R^{m×n}.
2. Let A ∈ R^{m×n}, B ∈ R^{m×k} and suppose A has an SVD as in Theorem 5.1. Assuming
R(B) ⊆ R(A), characterize all solutions of the matrix linear equation
\[ A X = B \]
in terms of the SVD of A.
3. Let x, y ∈ R^n and suppose further that x^T y ≠ 1. Show that
\[ (I - x y^T)^{-1} = I - \frac{1}{x^T y - 1} \, x y^T . \]
4. Let x, y ∈ R^n and suppose further that x^T y ≠ 1. Show that
\[ \begin{bmatrix} I & x \\ y^T & 1 \end{bmatrix}^{-1} = \begin{bmatrix} I + c\, x y^T & -c\, x \\ -c\, y^T & c \end{bmatrix} , \]
where c = 1/(1 − x^T y).
5. Let A ∈ R^{n×n} and let A^{-1} have columns c_1, ..., c_n and individual elements γ_{ij}.
Assume that γ_{ji} ≠ 0 for some i and j. Show that the matrix B = A − (1/γ_{ji}) e_i e_j^T (i.e.,
A with 1/γ_{ji} subtracted from its (ij)th element) is singular.
Hint: Show that c_i ∈ N(B).
6. As in Example 6.10, check directly that the condition for reconstructibility takes the
form
\[ N \begin{bmatrix} C \\ C A \\ \vdots \\ C A^{n-1} \end{bmatrix} \subseteq N(A^n) . \]
Chapter 7
Projections, Inner Product
Spaces, and Norms
7.1 Projections
Definition 7.1. Let V be a vector space with V = X ⊕ Y. By Theorem 2.26, every v ∈ V
has a unique decomposition v = x + y with x ∈ X and y ∈ Y. Define P_{X,Y} : V → X ⊆ V
by
\[ P_{X,Y}\, v = x \quad \text{for all } v \in V . \]
P_{X,Y} is called the (oblique) projection on X along Y.
Figure 7.1 displays the projection of v on both X and Y in the case V = R^2.
Figure 7.1. Oblique projections.
Theorem 7.2. P_{X,Y} is linear and P_{X,Y}^2 = P_{X,Y}.
Theorem 7.3. A linear transformation P is a projection if and only if it is idempotent, i.e.,
P^2 = P. Also, P is a projection if and only if I − P is a projection. In fact, P_{Y,X} = I − P_{X,Y}.
Proof: Suppose P is a projection, say on X along Y (using the notation of Definition 7.1).
Let v ∈ V be arbitrary. Then Pv = P(x + y) = Px = x. Moreover, P^2 v = P P v =
P x = x = P v. Thus, P^2 = P. Conversely, suppose P^2 = P. Let X = {v ∈ V : Pv = v}
and Y = {v ∈ V : Pv = 0}. It is easy to check that X and Y are subspaces. We now prove
that V = X ⊕ Y. First note that if v ∈ X, then Pv = v. If v ∈ Y, then Pv = 0. Hence
if v ∈ X ∩ Y, then v = 0. Now let v ∈ V be arbitrary. Then v = Pv + (I − P)v. Let
x = Pv, y = (I − P)v. Then Px = P^2 v = Pv = x so x ∈ X, while Py = P(I − P)v =
Pv − P^2 v = 0 so y ∈ Y. Thus, V = X ⊕ Y and the projection on X along Y is P.
Essentially the same argument shows that I − P is the projection on Y along X. □
Definition 7.4. In the special case where Y = X^⊥, P_{X,X^⊥} is called an orthogonal projection
and we then use the notation P_X = P_{X,X^⊥}.
Theorem 7.5. P ∈ R^{n×n} is the matrix of an orthogonal projection (onto R(P)) if and only
if P^2 = P = P^T.
Proof: Let P be an orthogonal projection (on X, say, along X^⊥) and let x, y ∈ R^n be
arbitrary. Note that (I − P)x = (I − P_{X,X^⊥})x = P_{X^⊥,X}\, x by Theorem 7.3. Thus,
(I − P)x ∈ X^⊥. Since Py ∈ X, we have (Py)^T (I − P)x = y^T P^T (I − P)x = 0.
Since x and y were arbitrary, we must have P^T (I − P) = 0. Hence P^T = P^T P = P,
with the second equality following since P^T P is symmetric. Conversely, suppose P is a
symmetric projection matrix and let x be arbitrary. Write x = Px + (I − P)x. Then
x^T P^T (I − P)x = x^T P (I − P)x = 0. Thus, since Px ∈ R(P), then (I − P)x ∈ R(P)^⊥
and P must be an orthogonal projection. □
7.1.1 The four fundamental orthogonal projections
Using the notation of Theorems 5.1 and 5.11, let A ∈ R^{m×n} with SVD A = U Σ V^T =
U_1 S V_1^T. Then
\[ P_{R(A)} = A A^+ = U_1 U_1^T = \sum_{i=1}^{r} u_i u_i^T , \]
\[ P_{R(A)^\perp} = I - A A^+ = U_2 U_2^T = \sum_{i=r+1}^{m} u_i u_i^T , \]
\[ P_{N(A)} = I - A^+ A = V_2 V_2^T = \sum_{i=r+1}^{n} v_i v_i^T , \]
\[ P_{N(A)^\perp} = A^+ A = V_1 V_1^T = \sum_{i=1}^{r} v_i v_i^T \]
are easily checked to be (unique) orthogonal projections onto the respective four fundamental
subspaces.
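All four projections fall out of one SVD; a minimal NumPy sketch (test matrix and tolerance are our choices):

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))
U1, U2 = U[:, :r], U[:, r:]
V1, V2 = Vt[:r].T, Vt[r:].T
Ap = np.linalg.pinv(A)
print(np.allclose(A @ Ap, U1 @ U1.T))                 # P_{R(A)}
print(np.allclose(np.eye(2) - A @ Ap, U2 @ U2.T))     # P_{R(A)^perp}
print(np.allclose(Ap @ A, V1 @ V1.T))                 # P_{N(A)^perp}
print(np.allclose(np.eye(3) - Ap @ A, V2 @ V2.T))     # P_{N(A)}
```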
Example 7.6. Determine the orthogonal projection of a vector v ∈ R^n on another nonzero
vector w ∈ R^n.
Solution: Think of the vector w as an element of the one-dimensional subspace R(w).
Then the desired projection is simply
\[ P_{R(w)}\, v = w w^+ v = \frac{w w^T v}{w^T w} \quad \text{(using Example 4.8)} \; = \left( \frac{w^T v}{w^T w} \right) w . \]
Moreover, the vector z that is orthogonal to w and such that v = Pv + z is given by
z = P_{R(w)^⊥}\, v = (I − P_{R(w)}) v = v − (w^T v / w^T w)\, w. See Figure 7.2. A direct calculation shows
that z and w are, in fact, orthogonal:
\[ z^T w = v^T w - \frac{w^T v}{w^T w}\, w^T w = v^T w - v^T w = 0 . \]
Figure 7.2. Orthogonal projection on a "line."
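A minimal sketch of the projection on a "line" (the particular v and w are our choices):

```python
import numpy as np

v = np.array([2.0, 3.0, 4.0])
w = np.array([1.0, 1.0, 0.0])
Pv = (w @ v) / (w @ w) * w      # P_{R(w)} v = (w^T v / w^T w) w
z = v - Pv                      # the component orthogonal to w
print(Pv, z, z @ w)             # z @ w is 0, confirming orthogonality
```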
Example 7.7. Recall the proof of Theorem 3.11. There, {v_1, ..., v_k} was an orthonormal
basis for a subset S of R^n. An arbitrary vector x ∈ R^n was chosen and a formula for x_1
appeared rather mysteriously. The expression for x_1 is simply the orthogonal projection of
x on S. Specifically,
\[ x_1 = P_S\, x = \sum_{i=1}^{k} v_i v_i^T x = \sum_{i=1}^{k} (v_i^T x)\, v_i . \]
Example 7.8. Recall the diagram of the four fundamental subspaces. The indicated direct
sum decompositions of the domain R^n and co-domain R^m are given easily as follows.
Let x ∈ R^n be an arbitrary vector. Then
\[ x = P_{N(A)^\perp}\, x + P_{N(A)}\, x = A^+ A x + (I - A^+ A) x = V_1 V_1^T x + V_2 V_2^T x \quad (\text{recall } V V^T = I) . \]
Similarly, let $y \in \mathbb{R}^m$ be an arbitrary vector. Then
$$y = P_{\mathcal{R}(A)}\, y + P_{\mathcal{R}(A)^\perp}\, y = A A^+ y + (I - A A^+) y = U_1 U_1^T y + U_2 U_2^T y \quad \text{(recall } U U^T = I\text{)}.$$

Example 7.9. Let
$$A = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 1 & 0 \end{bmatrix}. \quad \text{Then} \quad A^+ = \begin{bmatrix} 1/4 & 1/4 \\ 1/4 & 1/4 \\ 0 & 0 \end{bmatrix}$$
and we can decompose the vector $[2 \;\; 3 \;\; 4]^T$ uniquely into the sum of a vector in $\mathcal{N}(A)^\perp$ and a vector in $\mathcal{N}(A)$, respectively, as follows:
$$\begin{bmatrix} 2 \\ 3 \\ 4 \end{bmatrix} = A^+ A x + (I - A^+ A) x
= \begin{bmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 1/2 & 0 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 2 \\ 3 \\ 4 \end{bmatrix}
+ \begin{bmatrix} 1/2 & -1/2 & 0 \\ -1/2 & 1/2 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 2 \\ 3 \\ 4 \end{bmatrix}
= \begin{bmatrix} 5/2 \\ 5/2 \\ 0 \end{bmatrix} + \begin{bmatrix} -1/2 \\ 1/2 \\ 4 \end{bmatrix}.$$

7.2 Inner Product Spaces
Definition 7.10. Let $\mathcal{V}$ be a vector space over $\mathbb{R}$. Then $\langle \cdot, \cdot \rangle : \mathcal{V} \times \mathcal{V} \to \mathbb{R}$ is a real inner product if

1. $\langle x, x \rangle \ge 0$ for all $x \in \mathcal{V}$ and $\langle x, x \rangle = 0$ if and only if $x = 0$.
2. $\langle x, y \rangle = \langle y, x \rangle$ for all $x, y \in \mathcal{V}$.
3. $\langle x, \alpha y_1 + \beta y_2 \rangle = \alpha \langle x, y_1 \rangle + \beta \langle x, y_2 \rangle$ for all $x, y_1, y_2 \in \mathcal{V}$ and for all $\alpha, \beta \in \mathbb{R}$.

Example 7.11. Let $\mathcal{V} = \mathbb{R}^n$. Then $\langle x, y \rangle = x^T y$ is the "usual" Euclidean inner product or dot product.

Example 7.12. Let $\mathcal{V} = \mathbb{R}^n$. Then $\langle x, y \rangle_Q = x^T Q y$, where $Q = Q^T > 0$ is an arbitrary $n \times n$ positive definite matrix, defines a "weighted" inner product.

Definition 7.13. If $A \in \mathbb{R}^{m \times n}$, then $A^T \in \mathbb{R}^{n \times m}$ is the unique linear transformation or map such that $\langle x, Ay \rangle = \langle A^T x, y \rangle$ for all $x \in \mathbb{R}^m$ and for all $y \in \mathbb{R}^n$.
It is easy to check that, with this more "abstract" definition of transpose, and if the $(i, j)$th element of $A$ is $a_{ij}$, then the $(i, j)$th element of $A^T$ is $a_{ji}$. It can also be checked that all the usual properties of the transpose hold, such as $(AB)^T = B^T A^T$. However, the definition above allows us to extend the concept of transpose to the case of weighted inner products in the following way. Suppose $A \in \mathbb{R}^{m \times n}$ and let $\langle \cdot, \cdot \rangle_Q$ and $\langle \cdot, \cdot \rangle_R$, with $Q$ and $R$ positive definite, be weighted inner products on $\mathbb{R}^m$ and $\mathbb{R}^n$, respectively. Then we can define the "weighted transpose" $A^\#$ as the unique map that satisfies
$$\langle x, Ay \rangle_Q = \langle A^\# x, y \rangle_R \quad \text{for all } x \in \mathbb{R}^m \text{ and for all } y \in \mathbb{R}^n.$$
By Example 7.12 above, we must then have $x^T Q A y = x^T (A^\#)^T R y$ for all $x, y$. Hence we must have $Q A = (A^\#)^T R$. Taking transposes (of the usual variety) gives $A^T Q = R A^\#$. Since $R$ is nonsingular, we find
$$A^\# = R^{-1} A^T Q.$$
We can also generalize the notion of orthogonality ($x^T y = 0$) to $Q$-orthogonality ($Q$ is a positive definite matrix). Two vectors $x, y \in \mathbb{R}^n$ are $Q$-orthogonal (or conjugate with respect to $Q$) if $\langle x, y \rangle_Q = x^T Q y = 0$. $Q$-orthogonality is an important tool used in studying conjugate direction methods in optimization theory.
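The formula $A^\# = R^{-1} A^T Q$ is easy to verify numerically. In the sketch below (not from the text), the weight matrices are randomly generated symmetric positive definite matrices; they are named Q and R only to match the discussion above and have nothing to do with a QR factorization:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 4, 3
A = rng.standard_normal((m, n))

def spd(k):                               # a random symmetric positive definite matrix
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

Q, R = spd(m), spd(n)                     # weights on R^m and R^n
A_sharp = np.linalg.solve(R, A.T @ Q)     # A^# = R^{-1} A^T Q

x, y = rng.standard_normal(m), rng.standard_normal(n)
lhs = x @ Q @ (A @ y)                     # <x, Ay>_Q
rhs = (A_sharp @ x) @ R @ y               # <A^# x, y>_R
assert np.isclose(lhs, rhs)
```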
Definition 7.14. Let $\mathcal{V}$ be a vector space over $\mathbb{C}$. Then $\langle \cdot, \cdot \rangle : \mathcal{V} \times \mathcal{V} \to \mathbb{C}$ is a complex inner product if

1. $\langle x, x \rangle \ge 0$ for all $x \in \mathcal{V}$ and $\langle x, x \rangle = 0$ if and only if $x = 0$.
2. $\langle x, y \rangle = \overline{\langle y, x \rangle}$ for all $x, y \in \mathcal{V}$.
3. $\langle x, \alpha y_1 + \beta y_2 \rangle = \alpha \langle x, y_1 \rangle + \beta \langle x, y_2 \rangle$ for all $x, y_1, y_2 \in \mathcal{V}$ and for all $\alpha, \beta \in \mathbb{C}$.

Remark 7.15. We could use the notation $\langle \cdot, \cdot \rangle_{\mathbb{C}}$ to denote a complex inner product, but if the vectors involved are complex-valued, the complex inner product is to be understood. Note, too, from part 2 of the definition, that $\langle x, x \rangle$ must be real for all $x$.

Remark 7.16. Note from parts 2 and 3 of Definition 7.14 that we have
$$\langle \alpha x_1 + \beta x_2, y \rangle = \overline{\alpha} \langle x_1, y \rangle + \overline{\beta} \langle x_2, y \rangle.$$

Remark 7.17. The Euclidean inner product of $x, y \in \mathbb{C}^n$ is given by
$$\langle x, y \rangle = \sum_{i=1}^{n} \overline{x}_i y_i = x^H y.$$
The conventional definition of the complex Euclidean inner product is $\langle x, y \rangle = y^H x$ but we use its complex conjugate $x^H y$ here for symmetry with the real case.

Remark 7.18. A weighted inner product can be defined as in the real case by $\langle x, y \rangle_Q = x^H Q y$, for arbitrary $Q = Q^H > 0$. The notion of $Q$-orthogonality can be similarly generalized to the complex case.
Definition 7.19. A vector space $(\mathcal{V}, \mathbb{F})$ endowed with a specific inner product is called an inner product space. If $\mathbb{F} = \mathbb{C}$, we call $\mathcal{V}$ a complex inner product space. If $\mathbb{F} = \mathbb{R}$, we call $\mathcal{V}$ a real inner product space.

Example 7.20.
1. Check that $\mathcal{V} = \mathbb{R}^{n \times n}$ with the inner product $\langle A, B \rangle = \operatorname{Tr} A^T B$ is a real inner product space. Note that other choices are possible since by properties of the trace function, $\operatorname{Tr} A^T B = \operatorname{Tr} B^T A = \operatorname{Tr} A B^T = \operatorname{Tr} B A^T$.
2. Check that $\mathcal{V} = \mathbb{C}^{n \times n}$ with the inner product $\langle A, B \rangle = \operatorname{Tr} A^H B$ is a complex inner product space. Again, other choices are possible.

Definition 7.21. Let $\mathcal{V}$ be an inner product space. For $v \in \mathcal{V}$, we define the norm (or length) of $v$ by $\|v\| = \sqrt{\langle v, v \rangle}$. This is called the norm induced by $\langle \cdot, \cdot \rangle$.

Example 7.22.
1. If $\mathcal{V} = \mathbb{R}^n$ with the usual inner product, the induced norm is given by $\|v\| = \left(\sum_{i=1}^{n} v_i^2\right)^{1/2}$.
2. If $\mathcal{V} = \mathbb{C}^n$ with the usual inner product, the induced norm is given by $\|v\| = \left(\sum_{i=1}^{n} |v_i|^2\right)^{1/2}$.
Theorem 7.23. Let $P$ be an orthogonal projection on an inner product space $\mathcal{V}$. Then $\|Pv\| \le \|v\|$ for all $v \in \mathcal{V}$.

Proof: Since $P$ is an orthogonal projection, $P^2 = P = P^\#$. (Here, the notation $P^\#$ denotes the unique linear transformation that satisfies $\langle Pu, v \rangle = \langle u, P^\# v \rangle$ for all $u, v \in \mathcal{V}$. If this seems a little too abstract, consider $\mathcal{V} = \mathbb{R}^n$ (or $\mathbb{C}^n$), where $P^\#$ is simply the usual $P^T$ (or $P^H$).) Hence $\langle Pv, v \rangle = \langle P^2 v, v \rangle = \langle Pv, P^\# v \rangle = \langle Pv, Pv \rangle = \|Pv\|^2 \ge 0$. Now $I - P$ is also a projection, so the above result applies and we get
$$0 \le \langle (I - P)v, v \rangle = \langle v, v \rangle - \langle Pv, v \rangle = \|v\|^2 - \|Pv\|^2,$$
from which the theorem follows. $\square$

Definition 7.24. The norm induced on an inner product space by the "usual" inner product is called the natural norm.

In case $\mathcal{V} = \mathbb{C}^n$ or $\mathcal{V} = \mathbb{R}^n$, the natural norm is also called the Euclidean norm. In the next section, other norms on these vector spaces are defined. A converse to the above procedure is also available. That is, given a norm defined by $\|x\| = \sqrt{\langle x, x \rangle}$, an inner product can be defined via the following.
Theorem 7.25 (Polarization Identity).
1. For $x, y \in \mathbb{R}^n$, an inner product is defined by
$$\langle x, y \rangle = x^T y = \frac{\|x + y\|^2 - \|x\|^2 - \|y\|^2}{2}.$$
2. For $x, y \in \mathbb{C}^n$, an inner product is defined by
$$\langle x, y \rangle = x^H y = \frac{\|x + y\|^2 - \|x - y\|^2}{4} + j\,\frac{\|x - jy\|^2 - \|x + jy\|^2}{4},$$
where $j = i = \sqrt{-1}$.
7.3 Vector Norms

Definition 7.26. Let $(\mathcal{V}, \mathbb{F})$ be a vector space. Then $\|\cdot\| : \mathcal{V} \to \mathbb{R}$ is a vector norm if it satisfies the following three properties:
1. $\|x\| \ge 0$ for all $x \in \mathcal{V}$ and $\|x\| = 0$ if and only if $x = 0$.
2. $\|\alpha x\| = |\alpha| \|x\|$ for all $x \in \mathcal{V}$ and for all $\alpha \in \mathbb{F}$.
3. $\|x + y\| \le \|x\| + \|y\|$ for all $x, y \in \mathcal{V}$.
(This is called the triangle inequality, as seen readily from the usual diagram illustrating the sum of two vectors in $\mathbb{R}^2$.)

Remark 7.27. It is convenient in the remainder of this section to state results for complex-valued vectors. The specialization to the real case is obvious.

Definition 7.28. A vector space $(\mathcal{V}, \mathbb{F})$ is said to be a normed linear space if and only if there exists a vector norm $\|\cdot\| : \mathcal{V} \to \mathbb{R}$ satisfying the three conditions of Definition 7.26.

Example 7.29.
1. For $x \in \mathbb{C}^n$, the Hölder norms, or $p$-norms, are defined by
$$\|x\|_p = \left(\sum_{i=1}^{n} |x_i|^p\right)^{1/p}, \quad 1 \le p < +\infty.$$
Special cases:
(a) $\|x\|_1 = \sum_{i=1}^{n} |x_i|$ (the "Manhattan" norm).
(b) $\|x\|_2 = \left(\sum_{i=1}^{n} |x_i|^2\right)^{1/2} = (x^H x)^{1/2}$ (the Euclidean norm).
(c) $\|x\|_\infty = \max_{i \in \underline{n}} |x_i| = \lim_{p \to +\infty} \|x\|_p$.
(The second equality is a theorem that requires proof; a short numerical illustration follows at the end of this example.)
2. Some weighted $p$-norms:
(a) $\|x\|_{1,D} = \sum_{i=1}^{n} d_i |x_i|$, where $d_i > 0$.
(b) $\|x\|_{2,Q} = (x^H Q x)^{1/2}$, where $Q = Q^H > 0$ (this norm is more commonly denoted $\|\cdot\|_Q$).
3. On the vector space $(C[t_0, t_1], \mathbb{R})$, define the vector norm
$$\|f\| = \max_{t_0 \le t \le t_1} |f(t)|.$$
On the vector space $\big((C[t_0, t_1])^n, \mathbb{R}\big)$, define the vector norm
$$\|f\|_\infty = \max_{t_0 \le t \le t_1} \|f(t)\|_\infty.$$
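The Hölder-norm special cases in part 1, the limiting behavior as $p \to +\infty$, and the Hölder and Cauchy–Bunyakovsky–Schwarz inequalities stated next can all be checked numerically. A minimal NumPy sketch with randomly generated complex vectors (illustration only, not from the text):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(6) + 1j * rng.standard_normal(6)
y = rng.standard_normal(6) + 1j * rng.standard_normal(6)

def p_norm(v, p):
    """Hoelder p-norm: (sum |v_i|^p)^(1/p)."""
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

# Special cases agree with the closed forms of Example 7.29
assert np.isclose(p_norm(x, 1), np.sum(np.abs(x)))
assert np.isclose(p_norm(x, 2), np.sqrt((x.conj() @ x).real))
assert np.isclose(p_norm(x, 100), np.max(np.abs(x)), rtol=1e-1)   # ||x||_p -> ||x||_inf

# Hoelder inequality with p = q = 2 (Cauchy-Bunyakovsky-Schwarz)
assert np.abs(x.conj() @ y) <= p_norm(x, 2) * p_norm(y, 2) + 1e-12
```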

Theorem 7.30 (Hölder Inequality). Let $x, y \in \mathbb{C}^n$. Then
$$|x^H y| \le \|x\|_p \|y\|_q, \qquad \frac{1}{p} + \frac{1}{q} = 1.$$
A particular case of the Hölder inequality is of special interest.

Theorem 7.31 (Cauchy–Bunyakovsky–Schwarz Inequality). Let $x, y \in \mathbb{C}^n$. Then
$$|x^H y| \le \|x\|_2 \|y\|_2$$
with equality if and only if $x$ and $y$ are linearly dependent.

Proof: Consider the matrix $[x \;\; y] \in \mathbb{C}^{n \times 2}$. Since
$$[x \;\; y]^H [x \;\; y] = \begin{bmatrix} x^H x & x^H y \\ y^H x & y^H y \end{bmatrix}$$
is a nonnegative definite matrix, its determinant must be nonnegative. In other words, $0 \le (x^H x)(y^H y) - (x^H y)(y^H x)$. Since $y^H x = \overline{x^H y}$, we see immediately that $|x^H y| \le \|x\|_2 \|y\|_2$. $\square$

Note: This is not the classical algebraic proof of the Cauchy–Bunyakovsky–Schwarz (C-B-S) inequality (see, e.g., [20, p. 217]). However, it is particularly easy to remember.

Remark 7.32. The angle $\theta$ between two nonzero vectors $x, y \in \mathbb{C}^n$ may be defined by $\cos \theta = \dfrac{|x^H y|}{\|x\|_2 \|y\|_2}$, $0 \le \theta \le \dfrac{\pi}{2}$. The C-B-S inequality is thus equivalent to the statement $|\cos \theta| \le 1$.

Remark 7.33. Theorem 7.31 and Remark 7.32 are true for general inner product spaces.

Remark 7.34. The norm $\|\cdot\|_2$ is unitarily invariant, i.e., if $U \in \mathbb{C}^{n \times n}$ is unitary, then $\|Ux\|_2 = \|x\|_2$ (Proof: $\|Ux\|_2^2 = x^H U^H U x = x^H x = \|x\|_2^2$). However, $\|\cdot\|_1$ and $\|\cdot\|_\infty$
are not unitarily invariant. Similar remarks apply to the unitary invariance of norms of real vectors under orthogonal transformation.

Remark 7.35. If $x, y \in \mathbb{C}^n$ are orthogonal, then we have the Pythagorean Identity
$$\|x \pm y\|_2^2 = \|x\|_2^2 + \|y\|_2^2,$$
the proof of which follows easily from $\|z\|_2^2 = z^H z$.

Theorem 7.36. All norms on $\mathbb{C}^n$ are equivalent; i.e., there exist constants $c_1, c_2$ (possibly depending on $n$) such that
$$c_1 \|x\|_\alpha \le \|x\|_\beta \le c_2 \|x\|_\alpha \quad \text{for all } x \in \mathbb{C}^n.$$

Example 7.37. For $x \in \mathbb{C}^n$, the following inequalities are all tight bounds; i.e., there exist vectors $x$ for which equality holds:
$$\|x\|_1 \le \sqrt{n}\, \|x\|_2, \qquad \|x\|_1 \le n\, \|x\|_\infty;$$
$$\|x\|_2 \le \|x\|_1, \qquad\quad\; \|x\|_2 \le \sqrt{n}\, \|x\|_\infty;$$
$$\|x\|_\infty \le \|x\|_1, \qquad\quad \|x\|_\infty \le \|x\|_2.$$

Finally, we conclude this section with a theorem about convergence of vectors. Convergence of a sequence of vectors to some limit vector can be converted into a statement about convergence of real numbers, i.e., convergence in terms of vector norms.

Theorem 7.38. Let $\|\cdot\|$ be a vector norm and suppose $v, v^{(1)}, v^{(2)}, \ldots \in \mathbb{C}^n$. Then
$$\lim_{k \to +\infty} v^{(k)} = v \quad \text{if and only if} \quad \lim_{k \to +\infty} \|v^{(k)} - v\| = 0.$$
7.4 Matrix Norms

In this section we introduce the concept of matrix norm. As with vectors, the motivation for using matrix norms is to have a notion of either the size of or the nearness of matrices. The former notion is useful for perturbation analysis, while the latter is needed to make sense of "convergence" of matrices. Attention is confined to the vector space $(\mathbb{R}^{m \times n}, \mathbb{R})$ since that is what arises in the majority of applications. Extension to the complex case is straightforward and essentially obvious.

Definition 7.39. $\|\cdot\| : \mathbb{R}^{m \times n} \to \mathbb{R}$ is a matrix norm if it satisfies the following three properties:
1. $\|A\| \ge 0$ for all $A \in \mathbb{R}^{m \times n}$ and $\|A\| = 0$ if and only if $A = 0$.
2. $\|\alpha A\| = |\alpha| \|A\|$ for all $A \in \mathbb{R}^{m \times n}$ and for all $\alpha \in \mathbb{R}$.
3. $\|A + B\| \le \|A\| + \|B\|$ for all $A, B \in \mathbb{R}^{m \times n}$.
(As with vectors, this is called the triangle inequality.)
Example 7.40. Let $A \in \mathbb{R}^{m \times n}$. Then the Frobenius norm (or matrix Euclidean norm) is defined by
$$\|A\|_F = \left(\sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij}^2\right)^{1/2} = \left(\sum_{k=1}^{r} \sigma_k^2(A)\right)^{1/2} = \big(\operatorname{Tr}(A^T A)\big)^{1/2} = \big(\operatorname{Tr}(A A^T)\big)^{1/2}$$
(where $r = \operatorname{rank}(A)$).

Example 7.41. Let $A \in \mathbb{R}^{m \times n}$. Then the matrix $p$-norms are defined by
$$\|A\|_p = \max_{x \ne 0} \frac{\|Ax\|_p}{\|x\|_p} = \max_{\|x\|_p = 1} \|Ax\|_p.$$
The following three special cases are important because they are "computable." Each is a theorem and requires a proof.
1. The "maximum column sum" norm is
$$\|A\|_1 = \max_{j \in \underline{n}} \left(\sum_{i=1}^{m} |a_{ij}|\right).$$
2. The "maximum row sum" norm is
$$\|A\|_\infty = \max_{i \in \underline{m}} \left(\sum_{j=1}^{n} |a_{ij}|\right).$$
3. The spectral norm is
$$\|A\|_2 = \lambda_{\max}^{1/2}(A^T A) = \lambda_{\max}^{1/2}(A A^T) = \sigma_1(A).$$
Note: $\|A^+\|_2 = 1/\sigma_r(A)$, where $r = \operatorname{rank}(A)$.
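The three "computable" norms and the Frobenius norm can be evaluated directly from their formulas. The sketch below (not from the text) checks them against NumPy's built-in `numpy.linalg.norm`, whose conventions for `ord = 1, inf, 2, 'fro'` match the definitions above:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 3))

col_sum  = np.max(np.sum(np.abs(A), axis=0))        # ||A||_1  : maximum column sum
row_sum  = np.max(np.sum(np.abs(A), axis=1))        # ||A||_inf: maximum row sum
spectral = np.linalg.svd(A, compute_uv=False)[0]    # ||A||_2  : largest singular value
frob     = np.sqrt(np.sum(A * A))                   # ||A||_F

assert np.isclose(col_sum, np.linalg.norm(A, 1))
assert np.isclose(row_sum, np.linalg.norm(A, np.inf))
assert np.isclose(spectral, np.linalg.norm(A, 2))
assert np.isclose(frob, np.linalg.norm(A, 'fro'))
```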
Example 7.42. Let $A \in \mathbb{R}^{m \times n}$. The Schatten $p$-norms are defined by
$$\|A\|_{S,p} = \left(\sigma_1^p + \cdots + \sigma_r^p\right)^{1/p}.$$
Some special cases of Schatten $p$-norms are equal to norms defined previously. For example, $\|\cdot\|_{S,2} = \|\cdot\|_F$ and $\|\cdot\|_{S,\infty} = \|\cdot\|_2$. The norm $\|\cdot\|_{S,1}$ is often called the trace norm.

Example 7.43. Let $A \in \mathbb{R}^{m \times n}$. Then "mixed" norms can also be defined by
$$\|A\|_{p,q} = \max_{x \ne 0} \frac{\|Ax\|_p}{\|x\|_q}.$$

Example 7.44. The "matrix analogue of the vector 1-norm," $\|A\|_S = \sum_{i,j} |a_{ij}|$, is a norm.

The concept of a matrix norm alone is not altogether useful since it does not allow us to estimate the size of a matrix product $AB$ in terms of the sizes of $A$ and $B$ individually.
Notice that this difficulty did not arise for vectors, although there are analogues for, e.g., inner products or outer products of vectors. We thus need the following definition.

Definition 7.45. Let $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{n \times k}$. Then the norms $\|\cdot\|_\alpha$, $\|\cdot\|_\beta$, and $\|\cdot\|_\gamma$ are mutually consistent if $\|AB\|_\alpha \le \|A\|_\beta \|B\|_\gamma$. A matrix norm $\|\cdot\|$ is said to be consistent if $\|AB\| \le \|A\| \|B\|$ whenever the matrix product is defined.

Example 7.46.
1. $\|\cdot\|_F$ and $\|\cdot\|_p$ for all $p$ are consistent matrix norms.
2. The "mixed" norm
$$\|A\|_{\infty,1} = \max_{x \ne 0} \frac{\|Ax\|_\infty}{\|x\|_1} = \max_{i,j} |a_{ij}|$$
is a matrix norm but it is not consistent. For example, take $A = B = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$. Then $\|AB\|_{\infty,1} = 2$ while $\|A\|_{\infty,1} \|B\|_{\infty,1} = 1$.
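A two-line numerical check of this failure of consistency (a sketch; it takes A = B to be the 2-by-2 matrix of all ones, the choice consistent with the computation above):

```python
import numpy as np

def max_abs_entry(M):
    """The mixed norm of Example 7.46(2): the largest entry in absolute value."""
    return np.max(np.abs(M))

A = B = np.ones((2, 2))
print(max_abs_entry(A @ B))                    # 2.0
print(max_abs_entry(A) * max_abs_entry(B))     # 1.0  -> ||AB|| > ||A|| ||B||
```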
The $p$-norms are examples of matrix norms that are subordinate to (or induced by) a vector norm, i.e.,
$$\|A\| = \max_{x \ne 0} \frac{\|Ax\|}{\|x\|} = \max_{\|x\| = 1} \|Ax\|$$
(or, more generally, $\|A\|_{p,q} = \max_{x \ne 0} \frac{\|Ax\|_p}{\|x\|_q}$). For such subordinate norms, also called operator norms, we clearly have $\|Ax\| \le \|A\| \|x\|$. Since $\|ABx\| \le \|A\| \|Bx\| \le \|A\| \|B\| \|x\|$, it follows that all subordinate norms are consistent.

Theorem 7.47. There exists a vector $x^*$ such that $\|Ax^*\| = \|A\| \|x^*\|$ if the matrix norm is subordinate to the vector norm.

Theorem 7.48. If $\|\cdot\|_m$ is a consistent matrix norm, there exists a vector norm $\|\cdot\|_v$ consistent with it, i.e., $\|Ax\|_v \le \|A\|_m \|x\|_v$.

Not every consistent matrix norm is subordinate to a vector norm. For example, consider $\|\cdot\|_F$. Then $\|Ax\|_2 \le \|A\|_F \|x\|_2$, so $\|\cdot\|_2$ is consistent with $\|\cdot\|_F$, but there does not exist a vector norm $\|\cdot\|$ such that $\|A\|_F$ is given by $\max_{x \ne 0} \frac{\|Ax\|}{\|x\|}$.

Useful Results

The following miscellaneous results about matrix norms are collected for future reference. The interested reader is invited to prove each of them as an exercise.
1. $\|I_n\|_p = 1$ for all $p$, while $\|I_n\|_F = \sqrt{n}$.
2. For $A \in \mathbb{R}^{n \times n}$, the following inequalities are all tight, i.e., there exist matrices $A$ for which equality holds:
$$\|A\|_1 \le \sqrt{n}\, \|A\|_2, \qquad \|A\|_1 \le n\, \|A\|_\infty, \qquad \|A\|_1 \le \sqrt{n}\, \|A\|_F;$$
$$\|A\|_2 \le \sqrt{n}\, \|A\|_1, \qquad \|A\|_2 \le \sqrt{n}\, \|A\|_\infty, \qquad \|A\|_2 \le \|A\|_F;$$
$$\|A\|_\infty \le n\, \|A\|_1, \qquad \|A\|_\infty \le \sqrt{n}\, \|A\|_2, \qquad \|A\|_\infty \le \sqrt{n}\, \|A\|_F;$$
$$\|A\|_F \le \sqrt{n}\, \|A\|_1, \qquad \|A\|_F \le \sqrt{n}\, \|A\|_2, \qquad \|A\|_F \le \sqrt{n}\, \|A\|_\infty.$$
3. For $A \in \mathbb{R}^{m \times n}$,
$$\max_{i,j} |a_{ij}| \le \|A\|_2 \le \sqrt{mn}\, \max_{i,j} |a_{ij}|.$$
4. The norms $\|\cdot\|_F$ and $\|\cdot\|_2$ (as well as all the Schatten $p$-norms, but not necessarily other $p$-norms) are unitarily invariant; i.e., for all $A \in \mathbb{R}^{m \times n}$ and for all orthogonal matrices $Q \in \mathbb{R}^{m \times m}$ and $Z \in \mathbb{R}^{n \times n}$, $\|QAZ\|_\alpha = \|A\|_\alpha$ for $\alpha = 2$ or $F$.

Convergence

The following theorem uses matrix norms to convert a statement about convergence of a sequence of matrices into a statement about the convergence of an associated sequence of scalars.

Theorem 7.49. Let $\|\cdot\|$ be a matrix norm and suppose $A, A^{(1)}, A^{(2)}, \ldots \in \mathbb{R}^{m \times n}$. Then
$$\lim_{k \to +\infty} A^{(k)} = A \quad \text{if and only if} \quad \lim_{k \to +\infty} \|A^{(k)} - A\| = 0.$$
EXERCISES

1. If $P$ is an orthogonal projection, prove that $P^+ = P$.

2. Suppose $P$ and $Q$ are orthogonal projections and $P + Q = I$. Prove that $P - Q$ must be an orthogonal matrix.

3. Prove that $I - A^+ A$ is an orthogonal projection. Also, prove directly that $V_2 V_2^T$ is an orthogonal projection, where $V_2$ is defined as in Theorem 5.1.

4. Suppose that a matrix $A \in \mathbb{R}^{m \times n}$ has linearly independent columns. Prove that the orthogonal projection onto the space spanned by these column vectors is given by the matrix $P = A(A^T A)^{-1} A^T$.

5. Find the (orthogonal) projection of the vector $[2 \;\; 3 \;\; 4]^T$ onto the subspace of $\mathbb{R}^3$ spanned by the plane $3x - y + 2z = 0$.

6. Prove that $\mathbb{R}^{n \times n}$ with the inner product $\langle A, B \rangle = \operatorname{Tr} A^T B$ is a real inner product space.

7. Show that the matrix norms $\|\cdot\|_2$ and $\|\cdot\|_F$ are unitarily invariant.

8. Definition: Let $A \in \mathbb{R}^{n \times n}$ and denote its set of eigenvalues (not necessarily distinct) by $\{\lambda_1, \ldots, \lambda_n\}$. The spectral radius of $A$ is the scalar
$$\rho(A) = \max_i |\lambda_i|.$$
Let
A = [ ~ 0 ~ ] .
14 12 5
Determine $\|A\|_F$, $\|A\|_1$, $\|A\|_2$, $\|A\|_\infty$, and $\rho(A)$.

9. Let
$$A = \begin{bmatrix} 8 & 1 & 6 \\ 3 & 5 & 7 \\ 4 & 9 & 2 \end{bmatrix}.$$
Determine $\|A\|_F$, $\|A\|_1$, $\|A\|_2$, $\|A\|_\infty$, and $\rho(A)$. (An $n \times n$ matrix, all of whose columns and rows as well as main diagonal and antidiagonal sum to $s = n(n^2 + 1)/2$, is called a "magic square" matrix. If $M$ is a magic square matrix, it can be proved that $\|M\|_p = s$ for all $p$.)

10. Let $A = x y^T$, where both $x, y \in \mathbb{R}^n$ are nonzero. Determine $\|A\|_F$, $\|A\|_1$, $\|A\|_2$, and $\|A\|_\infty$ in terms of $\|x\|_\alpha$ and/or $\|y\|_\beta$, where $\alpha$ and $\beta$ take the value 1, 2, or $\infty$ as appropriate.
Chapter 8

Linear Least Squares Problems

8.1 The Linear Least Squares Problem

Problem: Suppose $A \in \mathbb{R}^{m \times n}$ with $m \ge n$ and $b \in \mathbb{R}^m$ is a given vector. The linear least squares problem consists of finding an element of the set
$$\mathcal{X} = \{x \in \mathbb{R}^n : \rho(x) = \|Ax - b\|_2 \text{ is minimized}\}.$$

Solution: The set $\mathcal{X}$ has a number of easily verified properties:

1. A vector $x \in \mathcal{X}$ if and only if $A^T r = 0$, where $r = b - Ax$ is the residual associated with $x$. The equations $A^T r = 0$ can be rewritten in the form $A^T A x = A^T b$ and the latter form is commonly known as the normal equations, i.e., $x \in \mathcal{X}$ if and only if $x$ is a solution of the normal equations. For further details, see Section 8.2.

2. A vector $x \in \mathcal{X}$ if and only if $x$ is of the form
$$x = A^+ b + (I - A^+ A) y, \quad \text{where } y \in \mathbb{R}^n \text{ is arbitrary}. \tag{8.1}$$
To see why this must be so, write the residual $r$ in the form
$$r = (b - P_{\mathcal{R}(A)} b) + (P_{\mathcal{R}(A)} b - Ax).$$
Now, $(P_{\mathcal{R}(A)} b - Ax)$ is clearly in $\mathcal{R}(A)$, while
$$(b - P_{\mathcal{R}(A)} b) = (I - P_{\mathcal{R}(A)}) b = P_{\mathcal{R}(A)^\perp} b \in \mathcal{R}(A)^\perp,$$
so these two vectors are orthogonal. Hence,
$$\|r\|_2^2 = \|b - Ax\|_2^2 = \|b - P_{\mathcal{R}(A)} b\|_2^2 + \|P_{\mathcal{R}(A)} b - Ax\|_2^2$$
from the Pythagorean identity (Remark 7.35). Thus, $\|Ax - b\|_2^2$ (and hence $\rho(x) = \|Ax - b\|_2$) assumes its minimum value if and only if
$$Ax = P_{\mathcal{R}(A)} b = A A^+ b, \tag{8.2}$$
and this equation always has a solution since $A A^+ b \in \mathcal{R}(A)$. By Theorem 6.3, all solutions of (8.2) are of the form
$$x = A^+ A A^+ b + (I - A^+ A) y = A^+ b + (I - A^+ A) y,$$
where $y \in \mathbb{R}^n$ is arbitrary. The minimum value of $\rho(x)$ is then clearly equal to
$$\|b - P_{\mathcal{R}(A)} b\|_2 = \|(I - A A^+) b\|_2 \le \|b\|_2,$$
the last inequality following by Theorem 7.23.

3. $\mathcal{X}$ is convex. To see why, consider two arbitrary vectors $x_1 = A^+ b + (I - A^+ A) y$ and $x_2 = A^+ b + (I - A^+ A) z$ in $\mathcal{X}$. Let $\theta \in [0, 1]$. Then the convex combination $\theta x_1 + (1 - \theta) x_2 = A^+ b + (I - A^+ A)(\theta y + (1 - \theta) z)$ is clearly in $\mathcal{X}$.

4. $\mathcal{X}$ has a unique element $x^*$ of minimal 2-norm. In fact, $x^* = A^+ b$ is the unique vector that solves this "double minimization" problem, i.e., $x^*$ minimizes the residual $\rho(x)$ and is the vector of minimum 2-norm that does so. This follows immediately from convexity or directly from the fact that all $x \in \mathcal{X}$ are of the form (8.1) and
$$\|x\|_2^2 = \|A^+ b\|_2^2 + \|(I - A^+ A) y\|_2^2 \ge \|A^+ b\|_2^2,$$
which follows since the two vectors are orthogonal.

5. There is a unique solution to the least squares problem, i.e., $\mathcal{X} = \{x^*\} = \{A^+ b\}$, if and only if $A^+ A = I$ or, equivalently, if and only if $\operatorname{rank}(A) = n$.
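Properties 1, 2, and 4 are easy to confirm numerically. The sketch below (not from the text) uses a randomly generated rank-deficient $A$ and NumPy's `pinv` for $A^+$:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 4))    # 6x4, rank 3
b = rng.standard_normal(6)

Ap = np.linalg.pinv(A)
x_star = Ap @ b                              # minimum 2-norm least squares solution

# Any x of the form (8.1) gives the same (minimal) residual ...
y = rng.standard_normal(4)
x_other = x_star + (np.eye(4) - Ap @ A) @ y
assert np.isclose(np.linalg.norm(A @ x_star - b), np.linalg.norm(A @ x_other - b))

# ... every least squares solution satisfies the normal equations A^T A x = A^T b ...
assert np.allclose(A.T @ A @ x_other, A.T @ b)

# ... and x* = A^+ b is the solution of smallest 2-norm.
assert np.linalg.norm(x_star) <= np.linalg.norm(x_other) + 1e-12
```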
Just as for the solution of linear equations, we can generalize the linear least squares problem to the matrix case.

Theorem 8.1. Let $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{m \times k}$. The general solution to
$$\min_{X \in \mathbb{R}^{n \times k}} \|AX - B\|_2$$
is of the form
$$X = A^+ B + (I - A^+ A) Y,$$
where $Y \in \mathbb{R}^{n \times k}$ is arbitrary. The unique solution of minimum 2-norm or F-norm is $X = A^+ B$.

Remark 8.2. Notice that solutions of the linear least squares problem look exactly the same as solutions of the linear system $AX = B$. The only difference is that in the case of linear least squares solutions, there is no "existence condition" such as $\mathcal{R}(B) \subseteq \mathcal{R}(A)$. If the existence condition happens to be satisfied, then equality holds and the least squares
residual is 0. Of all solutions that give a residual of 0, the unique solution $X = A^+ B$ has minimum 2-norm or F-norm.

Remark 8.3. If we take $B = I_m$ in Theorem 8.1, then $X = A^+$ can be interpreted as saying that the Moore–Penrose pseudoinverse of $A$ is the best (in the matrix 2-norm sense) matrix such that $AX$ approximates the identity.

Remark 8.4. Many other interesting and useful approximation results are available for the matrix 2-norm (and F-norm). One such is the following. Let $A \in \mathbb{R}^{m \times n}_r$ with SVD
$$A = U \Sigma V^T = \sum_{i=1}^{r} \sigma_i u_i v_i^T.$$
Then a best rank $k$ approximation to $A$ for $1 \le k \le r$, i.e., a solution to
$$\min_{M \in \mathbb{R}^{m \times n}_k} \|A - M\|_2,$$
is given by
$$M_k = \sum_{i=1}^{k} \sigma_i u_i v_i^T.$$
The special case in which $m = n$ and $k = n - 1$ gives a nearest singular matrix to $A \in \mathbb{R}^{n \times n}_n$.
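A numerical sketch of this rank-$k$ approximation result (not from the text; random data, with $k$ chosen arbitrarily). The 2-norm error $\|A - M_k\|_2$ equals the first neglected singular value $\sigma_{k+1}$:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 4))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
Mk = (U[:, :k] * s[:k]) @ Vt[:k, :]          # M_k = sum_{i<=k} sigma_i u_i v_i^T

assert np.isclose(np.linalg.norm(A - Mk, 2), s[k])   # error = sigma_{k+1}
```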
8.2 Geometric Solution

Looking at the schematic provided in Figure 8.1, it is apparent that minimizing $\|Ax - b\|_2$ is equivalent to finding the vector $x \in \mathbb{R}^n$ for which $p = Ax$ is closest to $b$ (in the Euclidean norm sense). Clearly, $r = b - Ax$ must be orthogonal to $\mathcal{R}(A)$. Thus, if $Ay$ is an arbitrary vector in $\mathcal{R}(A)$ (i.e., $y$ is arbitrary), we must have
$$0 = (Ay)^T (b - Ax) = y^T A^T (b - Ax) = y^T (A^T b - A^T A x).$$
Since $y$ is arbitrary, we must have $A^T b - A^T A x = 0$ or $A^T A x = A^T b$.

Special case: If $A$ is full (column) rank, then $x = (A^T A)^{-1} A^T b$.

8.3 Linear Regression and Other Linear Least Squares Problems

8.3.1 Example: Linear regression

Suppose we have $m$ measurements $(t_1, y_1), \ldots, (t_m, y_m)$ for which we hypothesize a linear (affine) relationship
$$y = \alpha t + \beta \tag{8.3}$$
Figure 8.1. Projection of b on R(A).

for certain constants $\alpha$ and $\beta$. One way to solve this problem is to find the line that best fits the data in the least squares sense; i.e., with the model (8.3), we have
$$y_1 = \alpha t_1 + \beta + \delta_1, \quad y_2 = \alpha t_2 + \beta + \delta_2, \quad \ldots, \quad y_m = \alpha t_m + \beta + \delta_m,$$
where $\delta_1, \ldots, \delta_m$ are "errors" and we wish to minimize $\delta_1^2 + \cdots + \delta_m^2$. Geometrically, we are trying to find the best line that minimizes the (sum of squares of the) distances from the given data points. See, for example, Figure 8.2.

Figure 8.2. Simple linear regression.

Note that distances are measured in the vertical sense from the points to the line (as indicated, for example, for the point $(t_1, y_1)$). However, other criteria are possible. For example, one could measure the distances in the horizontal sense, or the perpendicular distance from the points to the line could be used. The latter is called total least squares. Instead of 2-norms, one could also use 1-norms or $\infty$-norms. The latter two are computationally
much more difficult to handle, and thus we present only the more tractable 2-norm case in the text that follows.

The $m$ "error equations" can be written in matrix form as
$$y = Ax + \delta,$$
where
$$y = \begin{bmatrix} y_1 \\ \vdots \\ y_m \end{bmatrix}, \quad A = \begin{bmatrix} t_1 & 1 \\ \vdots & \vdots \\ t_m & 1 \end{bmatrix}, \quad x = \begin{bmatrix} \alpha \\ \beta \end{bmatrix}, \quad \delta = \begin{bmatrix} \delta_1 \\ \vdots \\ \delta_m \end{bmatrix}.$$
We then want to solve the problem
$$\min_x \delta^T \delta = \min_x (Ax - y)^T (Ax - y) \tag{8.4}$$
or, equivalently,
$$\min_x \|\delta\|_2 = \min_x \|Ax - y\|_2.$$
Solution: $x = \begin{bmatrix} \alpha \\ \beta \end{bmatrix}$ is a solution of the normal equations $A^T A x = A^T y$ where, for the special form of the matrices above, we have
$$A^T A = \begin{bmatrix} \sum_i t_i^2 & \sum_i t_i \\ \sum_i t_i & m \end{bmatrix} \quad \text{and} \quad A^T y = \begin{bmatrix} \sum_i t_i y_i \\ \sum_i y_i \end{bmatrix}.$$
The solution for the parameters $\alpha$ and $\beta$ can then be written
$$\begin{bmatrix} \alpha \\ \beta \end{bmatrix} = (A^T A)^{-1} A^T y = \frac{1}{m \sum_i t_i^2 - \left(\sum_i t_i\right)^2} \begin{bmatrix} m & -\sum_i t_i \\ -\sum_i t_i & \sum_i t_i^2 \end{bmatrix} \begin{bmatrix} \sum_i t_i y_i \\ \sum_i y_i \end{bmatrix}.$$
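As a concrete sketch of the regression computation above (the data points below are made up for illustration and are not from the text):

```python
import numpy as np

# Hypothetical measurements (t_i, y_i)
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

A = np.column_stack([t, np.ones_like(t)])        # rows [t_i, 1]
alpha, beta = np.linalg.solve(A.T @ A, A.T @ y)  # normal equations A^T A x = A^T y

# Same answer from a numerically preferable routine (cf. Section 8.4):
alpha2, beta2 = np.linalg.lstsq(A, y, rcond=None)[0]
assert np.allclose([alpha, beta], [alpha2, beta2])
```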
8.3.2 Other least squares problems

Suppose the hypothesized model is not the linear equation (8.3) but rather is of the form
$$y = f(t) = c_1 \phi_1(t) + \cdots + c_n \phi_n(t). \tag{8.5}$$
In (8.5) the $\phi_i(t)$ are given (basis) functions and the $c_i$ are constants to be determined to minimize the least squares error. The matrix problem is still (8.4), where we now have
$$A = \begin{bmatrix} \phi_1(t_1) & \cdots & \phi_n(t_1) \\ \vdots & & \vdots \\ \phi_1(t_m) & \cdots & \phi_n(t_m) \end{bmatrix}, \quad x = \begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix}.$$
An important special case of (8.5) is least squares polynomial approximation, which corresponds to choosing $\phi_i(t) = t^{i-1}$, $i \in \underline{n}$, although this choice can lead to computational
difficulties because of numerical ill conditioning for large $n$. Numerically better approaches are based on orthogonal polynomials, piecewise polynomial functions, splines, etc.

The key feature in (8.5) is that the coefficients $c_i$ appear linearly. The basis functions $\phi_i$ can be arbitrarily nonlinear. Sometimes a problem in which the $c_i$'s appear nonlinearly can be converted into a linear problem. For example, if the fitting function is of the form $y = f(t) = c_1 e^{c_2 t}$, then taking logarithms yields the equation $\log y = \log c_1 + c_2 t$. Then defining $\tilde{y} = \log y$, $\tilde{c}_1 = \log c_1$, and $\tilde{c}_2 = c_2$ results in a standard linear least squares problem.

8.4 Least Squares and Singular Value Decomposition

In the numerical linear algebra literature (e.g., [4], [7], [11], [23]), it is shown that solution of linear least squares problems via the normal equations can be a very poor numerical method in finite-precision arithmetic. Since the standard Kalman filter essentially amounts to sequential updating of normal equations, it can be expected to exhibit such poor numerical behavior in practice (and it does). Better numerical methods are based on algorithms that work directly and solely on $A$ itself rather than $A^T A$. Two basic classes of algorithms are based on SVD and QR (orthogonal-upper triangular) factorization, respectively. The former is much more expensive but is generally more reliable and offers considerable theoretical insight.

In this section we investigate solution of the linear least squares problem
$$\min_x \|Ax - b\|_2, \quad A \in \mathbb{R}^{m \times n}, \; b \in \mathbb{R}^m, \tag{8.6}$$
via the SVD. Specifically, we assume that $A$ has an SVD given by $A = U \Sigma V^T = U_1 S V_1^T$ as in Theorem 5.1. We now note that
$$\|Ax - b\|_2^2 = \|U \Sigma V^T x - b\|_2^2$$
$$= \|\Sigma V^T x - U^T b\|_2^2 \quad \text{since } \|\cdot\|_2 \text{ is unitarily invariant}$$
$$= \|\Sigma z - c\|_2^2 \quad \text{where } z = V^T x, \; c = U^T b$$
$$= \left\| \begin{bmatrix} S z_1 \\ 0 \end{bmatrix} - \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} \right\|_2^2 = \left\| \begin{bmatrix} S z_1 - c_1 \\ -c_2 \end{bmatrix} \right\|_2^2 = \|S z_1 - c_1\|_2^2 + \|c_2\|_2^2.$$
The last equality follows from the fact that if $v = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}$, then $\|v\|_2^2 = \|v_1\|_2^2 + \|v_2\|_2^2$ (note that orthogonality is not what is used here; the subvectors can have different lengths). This explains why it is convenient to work above with the square of the norm rather than the norm. As far as the minimization is concerned, the two are equivalent. In fact, the last quantity above is clearly minimized by taking $z_1 = S^{-1} c_1$. The subvector $z_2$ is arbitrary, while the minimum value of $\|Ax - b\|_2^2$ is $\|c_2\|_2^2$.
Now transform back to the original coordinates:
$$x = V z = [V_1 \;\; V_2] \begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = V_1 z_1 + V_2 z_2 = V_1 S^{-1} c_1 + V_2 z_2 = V_1 S^{-1} U_1^T b + V_2 z_2.$$
The last equality follows from
$$c = U^T b = \begin{bmatrix} U_1^T b \\ U_2^T b \end{bmatrix} = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}.$$
Note that since $z_2$ is arbitrary, $V_2 z_2$ is an arbitrary vector in $\mathcal{R}(V_2) = \mathcal{N}(A)$. Thus, $x$ has been written in the form $x = A^+ b + (I - A^+ A) y$, where $y \in \mathbb{R}^n$ is arbitrary. This agrees, of course, with (8.1).

The minimum value of the least squares residual is
$$\|c_2\|_2 = \|U_2^T b\|_2,$$
and we clearly have that

minimum least squares residual is 0
$\iff$ $b$ is orthogonal to all vectors in $U_2$
$\iff$ $b$ is orthogonal to all vectors in $\mathcal{R}(A)^\perp$
$\iff$ $b \in \mathcal{R}(A)$.

Another expression for the minimum residual is $\|(I - A A^+) b\|_2$. This follows easily since
$$\|(I - A A^+) b\|_2^2 = \|U_2 U_2^T b\|_2^2 = b^T U_2 U_2^T U_2 U_2^T b = b^T U_2 U_2^T b = \|U_2^T b\|_2^2.$$

Finally, an important special case of the linear least squares problem is the so-called full-rank problem, i.e., $A \in \mathbb{R}^{m \times n}_n$. In this case the SVD of $A$ is given by $A = U \Sigma V^T = [U_1 \;\; U_2] \begin{bmatrix} S \\ 0 \end{bmatrix} V_1^T$, and there is thus "no $V_2$ part" to the solution.
8.5 Least Squares and QR Factorization

In this section, we again look at the solution of the linear least squares problem (8.6) but this time in terms of the QR factorization. This matrix factorization is much cheaper to compute than an SVD and, with appropriate numerical enhancements, can be quite reliable.

To simplify the exposition, we add the simplifying assumption that $A$ has full column rank, i.e., $A \in \mathbb{R}^{m \times n}_n$. It is then possible, via a sequence of so-called Householder or Givens transformations, to reduce $A$ in the following way. A finite sequence of simple orthogonal row transformations (of Householder or Givens type) can be performed on $A$ to reduce it to triangular form. If we label the product of such orthogonal row transformations as the orthogonal matrix $Q^T \in \mathbb{R}^{m \times m}$, we have
$$Q^T A = \begin{bmatrix} R \\ 0 \end{bmatrix}, \tag{8.7}$$
where $R \in \mathbb{R}^{n \times n}$ is upper triangular. Now write $Q = [Q_1 \;\; Q_2]$, where $Q_1 \in \mathbb{R}^{m \times n}$ and $Q_2 \in \mathbb{R}^{m \times (m-n)}$. Both $Q_1$ and $Q_2$ have orthonormal columns. Multiplying through by $Q$ in (8.7), we see that
$$A = Q \begin{bmatrix} R \\ 0 \end{bmatrix} \tag{8.8}$$
$$= [Q_1 \;\; Q_2] \begin{bmatrix} R \\ 0 \end{bmatrix} = Q_1 R. \tag{8.9}$$
Any of (8.7), (8.8), or (8.9) are variously referred to as QR factorizations of $A$. Note that (8.9) is essentially what is accomplished by the Gram–Schmidt process, i.e., by writing $A R^{-1} = Q_1$ we see that a "triangular" linear combination (given by the coefficients of $R^{-1}$) of the columns of $A$ yields the orthonormal columns of $Q_1$.

Now note that
$$\|Ax - b\|_2^2 = \|Q^T A x - Q^T b\|_2^2 \quad \text{since } \|\cdot\|_2 \text{ is unitarily invariant}$$
$$= \left\| \begin{bmatrix} R \\ 0 \end{bmatrix} x - \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} \right\|_2^2 = \|Rx - c_1\|_2^2 + \|c_2\|_2^2, \quad \text{where } c = Q^T b = \begin{bmatrix} Q_1^T b \\ Q_2^T b \end{bmatrix} = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}.$$
The last quantity above is clearly minimized by taking $x = R^{-1} c_1$ and the minimum residual is $\|c_2\|_2$. Equivalently, we have $x = R^{-1} Q_1^T b = A^+ b$ and the minimum residual is $\|Q_2^T b\|_2$.
EXERCISES

1. For $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, and any $y \in \mathbb{R}^n$, check directly that $(I - A^+ A) y$ and $A^+ b$ are orthogonal vectors.

2. Consider the following set of measurements $(x_i, y_i)$:
$$(1, 2), \quad (2, 1), \quad (3, 3).$$
(a) Find the best (in the 2-norm sense) line of the form $y = \alpha x + \beta$ that fits this data.
(b) Find the best (in the 2-norm sense) line of the form $x = \alpha y + \beta$ that fits this data.

3. Suppose $q_1$ and $q_2$ are two orthonormal vectors and $b$ is a fixed vector, all in $\mathbb{R}^n$.
(a) Find the optimal linear combination $\alpha q_1 + \beta q_2$ that is closest to $b$ (in the 2-norm sense).
(b) Let $r$ denote the "error vector" $b - \alpha q_1 - \beta q_2$. Show that $r$ is orthogonal to both $q_1$ and $q_2$.
4. Find all solutions of the linear least squares problem
$$\min_x \|Ax - b\|_2$$
when $A = \big[\;\cdots\;\big]$.

5. Consider the problem of finding the minimum 2-norm solution of the linear least squares problem
$$\min_x \|Ax - b\|_2$$
when $A = \big[\;\cdots\;\big]$ and $b = \big[\;\cdots\;\big]$. The solution is $x^* = A^+ b$.
(a) Consider a perturbation $E_1$ of $A$, where $\delta$ is a small positive number. Solve the perturbed version of the above problem,
$$\min_y \|A_1 y - b\|_2,$$
where $A_1 = A + E_1$. What happens to $\|x^* - y\|_2$ as $\delta$ approaches 0?
(b) Now consider the perturbation $E_2$ of $A$, where again $\delta$ is a small positive number. Solve the perturbed problem
$$\min_z \|A_2 z - b\|_2,$$
where $A_2 = A + E_2$. What happens to $\|x^* - z\|_2$ as $\delta$ approaches 0?

6. Use the four Penrose conditions and the fact that $Q_1$ has orthonormal columns to verify that if $A \in \mathbb{R}^{m \times n}_n$ can be factored in the form (8.9), then $A^+ = R^{-1} Q_1^T$.

7. Let $A \in \mathbb{R}^{n \times n}$, not necessarily nonsingular, and suppose $A = QR$, where $Q$ is orthogonal. Prove that $A^+ = R^+ Q^T$.
Chapter 9

Eigenvalues and Eigenvectors

9.1 Fundamental Definitions and Properties

Definition 9.1. A nonzero vector x ∈ C^n is a right eigenvector of A ∈ C^{n×n} if there exists
a scalar λ ∈ C, called an eigenvalue, such that

    Ax = λx.                                                              (9.1)

Similarly, a nonzero vector y ∈ C^n is a left eigenvector corresponding to an eigenvalue μ if

    y^H A = μ y^H.                                                        (9.2)

By taking Hermitian transposes in (9.1), we see immediately that x^H is a left eigenvector
of A^H associated with λ̄. Note that if x [y] is a right [left] eigenvector of A, then so is
αx [αy] for any nonzero scalar α ∈ C. One often-used scaling for an eigenvector is
α = 1/||x|| so that the scaled eigenvector has norm 1. The 2-norm is the most common
norm used for such scaling.

Definition 9.2. The polynomial π(λ) = det(A − λI) is called the characteristic polynomial
of A. (Note that the characteristic polynomial can also be defined as det(λI − A). This
results in at most a change of sign and, as a matter of convenience, we use both forms
throughout the text.)

The following classical theorem can be very useful in hand calculation. It can be
proved easily from the Jordan canonical form to be discussed in the text to follow (see, for
example, [21]) or directly using elementary properties of inverses and determinants (see,
for example, [3]).

Theorem 9.3 (Cayley–Hamilton). For any A ∈ C^{n×n}, π(A) = 0.

Example 9.4. Let A be a 2 × 2 matrix with π(λ) = λ^2 + 2λ − 3 (so the eigenvalues of A
are 1 and −3). It is an easy exercise to verify that π(A) = A^2 + 2A − 3I = 0.
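A quick numerical check of the Cayley–Hamilton theorem is easy to set up. The entries of the
matrix in Example 9.4 are not legible in this copy, so the sketch below (not from the text) uses
a stand-in 2 × 2 matrix with the same characteristic polynomial π(λ) = λ^2 + 2λ − 3.

    import numpy as np

    # Stand-in matrix (assumption: same characteristic polynomial as Example 9.4).
    A = np.array([[0.0, 3.0],
                  [1.0, -2.0]])
    print(np.poly(A))                       # [ 1.  2. -3.], i.e. lambda^2 + 2*lambda - 3
    pi_of_A = A @ A + 2 * A - 3 * np.eye(2)
    print(np.allclose(pi_of_A, 0))          # True: pi(A) = 0, as Theorem 9.3 asserts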
It can be proved from elementary properties of determinants that if A ∈ C^{n×n}, then
π(λ) is a polynomial of degree n. Thus, the Fundamental Theorem of Algebra says that
π(λ) has n roots, possibly repeated. These roots, as solutions of the determinant equation

    π(λ) = det(A − λI) = 0,                                               (9.3)

are the eigenvalues of A and imply the singularity of the matrix A − λI, and hence further
guarantee the existence of corresponding nonzero eigenvectors.

Definition 9.5. The spectrum of A ∈ C^{n×n} is the set of all eigenvalues of A, i.e., the set of
all roots of its characteristic polynomial π(λ). The spectrum of A is denoted Λ(A).

Let the eigenvalues of A ∈ C^{n×n} be denoted λ_1, ..., λ_n. Then if we write (9.3) in the form

    π(λ) = det(A − λI) = (λ_1 − λ) ··· (λ_n − λ)                          (9.4)

and set λ = 0 in this identity, we get the interesting fact that det(A) = λ_1 · λ_2 ··· λ_n (see
also Theorem 9.25).
    If A ∈ R^{n×n}, then π(λ) has real coefficients. Hence the roots of π(λ), i.e., the
eigenvalues of A, must occur in complex conjugate pairs.

Example 9.6. Let α, β ∈ R and let A = [α β; −β α]. Then π(λ) = λ^2 − 2αλ + α^2 + β^2 and
A has eigenvalues α ± βj (where j = i = √−1).

If A ∈ R^{n×n}, then there is an easily checked relationship between the left and right
eigenvectors of A and A^T (take Hermitian transposes of both sides of (9.2)). Specifically, if
y is a left eigenvector of A corresponding to λ ∈ Λ(A), then y is a right eigenvector of A^T
corresponding to λ̄ ∈ Λ(A). Note, too, that by elementary properties of the determinant,
we always have Λ(A) = Λ(A^T), but that Λ(A) = Λ(Ā) only if A ∈ R^{n×n}.

Definition 9.7. If λ is a root of multiplicity m of π(λ), we say that λ is an eigenvalue of A
of algebraic multiplicity m. The geometric multiplicity of λ is the number of associated
independent eigenvectors = n − rank(A − λI) = dim N(A − λI).

If λ ∈ Λ(A) has algebraic multiplicity m, then 1 ≤ dim N(A − λI) ≤ m. Thus, if
we denote the geometric multiplicity of λ by g, then we must have 1 ≤ g ≤ m.

Definition 9.8. A matrix A ∈ R^{n×n} is said to be defective if it has an eigenvalue whose
geometric multiplicity is not equal to (i.e., less than) its algebraic multiplicity. Equivalently,
A is said to be defective if it does not have n linearly independent (right or left) eigenvectors.

From the Cayley–Hamilton Theorem, we know that π(A) = 0. However, it is possible
for A to satisfy a lower-order polynomial. For example, if A = [1 0; 0 1], then A satisfies
(λ − 1)^2 = 0. But it also clearly satisfies the smaller degree polynomial equation (λ − 1) = 0.

Definition 9.9. The minimal polynomial of A ∈ R^{n×n} is the polynomial α(λ) of least
degree such that α(A) = 0.

It can be shown that α(λ) is essentially unique (unique if we force the coefficient
of the highest power of λ to be +1, say; such a polynomial is said to be monic and we
generally write α(λ) as a monic polynomial throughout the text). Moreover, it can also be
shown that α(λ) divides every nonzero polynomial β(λ) for which β(A) = 0. In particular,
α(λ) divides π(λ).
    There is an algorithm to determine α(λ) directly (without knowing eigenvalues and
associated eigenvector structure). Unfortunately, this algorithm, called the Bezout algorithm,
is numerically unstable.

Example 9.10. The above definitions are illustrated below for a series of matrices, each
of which has an eigenvalue 2 of algebraic multiplicity 4, i.e., π(λ) = (λ − 2)^4. We denote
the geometric multiplicity by g.

    A = [2 1 0 0; 0 2 1 0; 0 0 2 1; 0 0 0 2]  has α(λ) = (λ − 2)^4 and g = 1.

    A = [2 1 0 0; 0 2 1 0; 0 0 2 0; 0 0 0 2]  has α(λ) = (λ − 2)^3 and g = 2.

    A = [2 1 0 0; 0 2 0 0; 0 0 2 0; 0 0 0 2]  has α(λ) = (λ − 2)^2 and g = 3.

    A = [2 0 0 0; 0 2 0 0; 0 0 2 0; 0 0 0 2]  has α(λ) = (λ − 2) and g = 4.

At this point, one might speculate that g plus the degree of α must always be five.
Unfortunately, such is not the case. The matrix

    A = [2 1 0 0; 0 2 0 0; 0 0 2 1; 0 0 0 2]

has α(λ) = (λ − 2)^2 and g = 2.

Theorem 9.11. Let A ∈ C^{n×n} and let λ_i be an eigenvalue of A with corresponding right
eigenvector x_i. Furthermore, let y_j be a left eigenvector corresponding to any λ_j ∈ Λ(A)
such that λ_j ≠ λ_i. Then y_j^H x_i = 0.

Proof: Since A x_i = λ_i x_i, premultiplication by y_j^H gives

    y_j^H A x_i = λ_i y_j^H x_i.                                          (9.5)
Similarly, since y_j^H A = λ_j y_j^H, postmultiplication by x_i gives

    y_j^H A x_i = λ_j y_j^H x_i.                                          (9.6)

Subtracting (9.6) from (9.5), we find 0 = (λ_i − λ_j) y_j^H x_i. Since λ_i − λ_j ≠ 0, we must have
y_j^H x_i = 0.  □

The proof of Theorem 9.11 is very similar to two other fundamental and important results.

Theorem 9.12. Let A ∈ C^{n×n} be Hermitian, i.e., A = A^H. Then all eigenvalues of A must
be real.

Proof: Suppose (λ, x) is an arbitrary eigenvalue/eigenvector pair such that Ax = λx. Then

    x^H A x = λ x^H x.                                                    (9.7)

Taking Hermitian transposes in (9.7) yields

    x^H A^H x = λ̄ x^H x.

Using the fact that A is Hermitian, we have that λ̄ x^H x = λ x^H x. However, since x is an
eigenvector, we have x^H x ≠ 0, from which we conclude λ̄ = λ, i.e., λ is real.  □

Theorem 9.13. Let A ∈ C^{n×n} be Hermitian and suppose λ and μ are distinct eigenvalues
of A with corresponding right eigenvectors x and z, respectively. Then x and z must be
orthogonal.

Proof: Premultiply the equation Ax = λx by z^H to get z^H A x = λ z^H x. Take the Hermitian
transpose of this equation and use the facts that A is Hermitian and λ is real to get x^H A z =
λ x^H z. Premultiply the equation Az = μz by x^H to get x^H A z = μ x^H z = λ x^H z. Since
λ ≠ μ, we must have that x^H z = 0, i.e., the two vectors must be orthogonal.  □

Let us now return to the general case.

Theorem 9.14. Let A ∈ C^{n×n} have distinct eigenvalues λ_1, ..., λ_n with corresponding
right eigenvectors x_1, ..., x_n. Then {x_1, ..., x_n} is a linearly independent set. The same
result holds for the corresponding left eigenvectors.

Proof: For the proof see, for example, [21, p. 118].  □

If A ∈ C^{n×n} has distinct eigenvalues, and if λ_i ∈ Λ(A), then by Theorem 9.11, x_i is
orthogonal to all y_j's for which j ≠ i. However, it cannot be the case that y_i^H x_i = 0 as
well, or else x_i would be orthogonal to n linearly independent vectors (by Theorem 9.14)
and would thus have to be 0, contradicting the fact that it is an eigenvector. Since y_i^H x_i ≠ 0
for each i, we can choose the normalization of the x_i's, or the y_i's, or both, so that y_i^H x_i = 1
for i ∈ n.
Theorem 9.15. Let A ∈ C^{n×n} have distinct eigenvalues λ_1, ..., λ_n and let the corresponding
right eigenvectors form a matrix X = [x_1, ..., x_n]. Similarly, let Y = [y_1, ..., y_n]
be the matrix of corresponding left eigenvectors. Furthermore, suppose that the left and
right eigenvectors have been normalized so that y_i^H x_i = 1, i ∈ n. Finally, let Λ =
diag(λ_1, ..., λ_n) ∈ C^{n×n}. Then A x_i = λ_i x_i, i ∈ n, can be written in matrix form as

    AX = XΛ,                                                              (9.8)

while y_i^H x_j = δ_ij, i ∈ n, j ∈ n, is expressed by the equation

    Y^H X = I.                                                            (9.9)

These matrix equations can be combined to yield the following matrix factorizations:

    X^{-1} A X = Λ = Y^H A X                                              (9.10)

and

    A = X Λ X^{-1} = X Λ Y^H = Σ_{i=1}^{n} λ_i x_i y_i^H.                 (9.11)

Example 9.16. Let A ∈ R^{3×3} with

    π(λ) = det(A − λI) = −(λ^3 + 4λ^2 + 9λ + 10) = −(λ + 2)(λ^2 + 2λ + 5),

from which we find Λ(A) = {−2, −1 ± 2j}. We can now find the right and left eigenvectors
corresponding to these eigenvalues.
    For λ_1 = −2, solve the 3 × 3 linear system (A − (−2)I)x_1 = 0 to get x_1. Note that one
component of x_1 can be set arbitrarily, and this then determines the other two (since
dim N(A − (−2)I) = 1). To get the corresponding left eigenvector y_1, solve the linear system
y_1^H (A + 2I) = 0 to get y_1; this time we have chosen the arbitrary scale factor for y_1 so that
y_1^H x_1 = 1.
    For λ_2 = −1 + 2j, solve the linear system (A − (−1 + 2j)I)x_2 = 0 to get

    x_2 = [3 + j; 3 − j; −2].
Solve the linear system y_2^H (A − (−1 + 2j)I) = 0 and normalize y_2 so that y_2^H x_2 = 1 to
get y_2.
    For λ_3 = −1 − 2j, we could proceed to solve linear systems as for λ_2. However, we
can also note that x_3 = x̄_2 and y_3 = ȳ_2. To see this, use the fact that λ_3 = λ̄_2 and simply
conjugate the equation A x_2 = λ_2 x_2 to get A x̄_2 = λ̄_2 x̄_2. A similar argument yields the result
for left eigenvectors.
    Now define the matrix X = [x_1  x_2  x_3] of right eigenvectors. It is then easy to verify
that X^{-1} = Y^H, i.e., that the rows of X^{-1} are the normalized left eigenvectors y_i^H. Other
results in Theorem 9.15 can also be verified. For example,

    X^{-1} A X = Λ = [−2  0  0;  0  −1+2j  0;  0  0  −1−2j].

Finally, note that we could have solved directly only for x_1 and x_2 (and x_3 = x̄_2). Then,
instead of determining the y_i's directly, we could have found them instead by computing
X^{-1} and reading off its rows.
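The pattern of Example 9.16, including the shortcut of reading the left eigenvectors off the
rows of X^{-1}, is easy to reproduce numerically. The sketch below is not taken from the text
(the book's matrix entries are not recoverable here); it uses a randomly generated matrix.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3))

    lam, X = np.linalg.eig(A)       # right eigenvectors are the columns of X
    Y_H = np.linalg.inv(X)          # rows of X^{-1} are the normalized left eigenvectors y_i^H

    # Verify X^{-1} A X = Lambda and the dyadic expansion A = sum_i lambda_i x_i y_i^H.
    print(np.allclose(Y_H @ A @ X, np.diag(lam)))
    A_rebuilt = sum(lam[i] * np.outer(X[:, i], Y_H[i, :]) for i in range(3))
    print(np.allclose(A_rebuilt, A))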
Example 9.17. Let A ∈ R^{3×3} with

    π(λ) = det(A − λI) = −(λ^3 + 8λ^2 + 19λ + 12) = −(λ + 1)(λ + 3)(λ + 4),

from which we find Λ(A) = {−1, −3, −4}. Proceeding as in the previous example, it is
straightforward to compute the matrix X of right eigenvectors and the matrix Y^H = X^{-1}
of left eigenvectors.
We also have X^{-1} A X = Λ = diag(−1, −3, −4), which is equivalent to the dyadic expansion

    A = Σ_{i=1}^{3} λ_i x_i y_i^H = (−1) x_1 y_1^H + (−3) x_2 y_2^H + (−4) x_3 y_3^H.

Theorem 9.18. Eigenvalues (but not eigenvectors) are invariant under a similarity transformation T.

Proof: Suppose (λ, x) is an eigenvalue/eigenvector pair such that Ax = λx. Then, since T
is nonsingular, we have the equivalent statement (T^{-1} A T)(T^{-1} x) = λ (T^{-1} x), from which
the theorem statement follows. For left eigenvectors we have a similar statement, namely
y^H A = λ y^H if and only if (T^H y)^H (T^{-1} A T) = λ (T^H y)^H.  □

Remark 9.19. If f is an analytic function (e.g., f(x) is a polynomial, or e^x, or sin x,
or, in general, representable by a power series Σ_{n=0}^{∞} a_n x^n), then it is easy to show that
the eigenvalues of f(A) (defined as Σ_{n=0}^{∞} a_n A^n) are f(λ_i), but f(A) does not necessarily
have all the same eigenvectors (unless, say, A is diagonalizable). For example, A = [0 1; 0 0]
has only one right eigenvector corresponding to the eigenvalue 0, but A^2 = [0 0; 0 0] has two
independent right eigenvectors associated with the eigenvalue 0. What is true is that the
eigenvalue/eigenvector pair (λ, x) maps to (f(λ), x) but not conversely.

The following theorem is useful when solving systems of linear differential equations.
Details of how the matrix exponential e^{tA} is used to solve the system ẋ = Ax are the subject
of Chapter 11.

Theorem 9.20. Let A ∈ R^{n×n} and suppose X^{-1} A X = Λ, where Λ is diagonal. Then

    e^{tA} = Σ_{i=1}^{n} e^{λ_i t} x_i y_i^H.
Proof: Starting from the definition, we have

    e^{tA} = Σ_{k=0}^{∞} (t^k A^k)/k! = X (Σ_{k=0}^{∞} t^k Λ^k / k!) X^{-1}
           = X e^{tΛ} X^{-1} = X e^{tΛ} Y^H = Σ_{i=1}^{n} e^{λ_i t} x_i y_i^H.  □

The following corollary is immediate from the theorem upon setting t = 1.

Corollary 9.21. If A ∈ R^{n×n} is diagonalizable with eigenvalues λ_i, i ∈ n, and right
eigenvectors x_i, i ∈ n, then e^A has eigenvalues e^{λ_i}, i ∈ n, and the same eigenvectors.

There are extensions to Theorem 9.20 and Corollary 9.21 for any function that is
analytic on the spectrum of A, i.e., f(A) = X f(Λ) X^{-1} = X diag(f(λ_1), ..., f(λ_n)) X^{-1}.
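For a diagonalizable A, Theorem 9.20 and Corollary 9.21 can be checked directly against a
general-purpose matrix exponential. The sketch below is not from the text; the example matrix
is chosen for illustration, and it uses NumPy and SciPy.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])      # distinct eigenvalues -1 and -2, hence diagonalizable
    t = 0.7

    lam, X = np.linalg.eig(A)
    Y_H = np.linalg.inv(X)
    etA = X @ np.diag(np.exp(lam * t)) @ Y_H   # sum_i e^{lambda_i t} x_i y_i^H
    print(np.allclose(etA, expm(t * A)))       # True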
It is desirable, of course, to have a version of Theorem 9.20 and its corollary in which
A is not necessarily diagonalizable. It is necessary first to consider the notion of Jordan
canonical form, from which such a result is then available and presented later in this chapter.

9.2 Jordan Canonical Form

Theorem 9.22.

1. Jordan Canonical Form (JCF): For all A ∈ C^{n×n} with eigenvalues λ_1, ..., λ_n ∈ C
   (not necessarily distinct), there exists a nonsingular X ∈ C^{n×n} such that

       X^{-1} A X = J = diag(J_1, ..., J_q),                              (9.12)

   where each of the Jordan block matrices J_1, ..., J_q is of the form

       J_i = [λ_i  1  0  ···  0;  0  λ_i  1  ···  0;  ···;  0  ···  0  λ_i  1;  0  0  ···  0  λ_i] ∈ C^{k_i×k_i}    (9.13)
   and Σ_{i=1}^{q} k_i = n.

2. Real Jordan Canonical Form: For all A ∈ R^{n×n} with eigenvalues λ_1, ..., λ_n (not
   necessarily distinct), there exists a nonsingular X ∈ R^{n×n} such that

       X^{-1} A X = J = diag(J_1, ..., J_q),                              (9.14)

   where each of the Jordan block matrices J_1, ..., J_q is of the form

       J_i = [λ_i  1  0  ···  0;  0  λ_i  1  ···  0;  ···;  0  ···  0  λ_i  1;  0  0  ···  0  λ_i]

   in the case of real eigenvalues λ_i ∈ Λ(A), and

       J_i = [M_i  I_2  0  ···  0;  0  M_i  I_2  ···  0;  ···;  0  ···  0  M_i  I_2;  0  0  ···  0  M_i],

   where M_i = [α_i  β_i;  −β_i  α_i] and I_2 = [1  0;  0  1], in the case of complex conjugate
   eigenvalues α_i ± jβ_i ∈ Λ(A).

Proof: For the proof see, for example, [21, pp. 120–124].  □

Transformations like T = (1/√2) [1  −j;  −j  1] allow us to go back and forth between a real
JCF and its complex counterpart:

    T^{-1} [α + jβ  0;  0  α − jβ] T = [α  β;  −β  α] = M.

For nontrivial Jordan blocks, the situation is only a bit more complicated. With the analogous
4 × 4 transformation T,
it is easily checked that

    T^{-1} [α+jβ  1  0  0;  0  α+jβ  0  0;  0  0  α−jβ  1;  0  0  0  α−jβ] T = [M  I_2;  0  M].

Definition 9.23. The characteristic polynomials of the Jordan blocks defined in Theorem
9.22 are called the elementary divisors or invariant factors of A.

Theorem 9.24. The characteristic polynomial of a matrix is the product of its elementary
divisors. The minimal polynomial of a matrix is the product of the elementary divisors of
highest degree corresponding to distinct eigenvalues.

Theorem 9.25. Let A ∈ C^{n×n} with eigenvalues λ_1, ..., λ_n. Then

1. det(A) = Π_{i=1}^{n} λ_i.

2. Tr(A) = Σ_{i=1}^{n} λ_i.

Proof:

1. From Theorem 9.22 we have that A = X J X^{-1}. Thus,
   det(A) = det(X J X^{-1}) = det(J) = Π_{i=1}^{n} λ_i.

2. Again, from Theorem 9.22 we have that A = X J X^{-1}. Thus,
   Tr(A) = Tr(X J X^{-1}) = Tr(J X^{-1} X) = Tr(J) = Σ_{i=1}^{n} λ_i.  □
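Both parts of Theorem 9.25 are easy to confirm numerically; a minimal sketch (random test
matrix, not from the text):

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((5, 5))
    lam = np.linalg.eigvals(A)
    print(np.isclose(np.prod(lam).real, np.linalg.det(A)))   # det(A) = prod_i lambda_i
    print(np.isclose(np.sum(lam).real, np.trace(A)))         # Tr(A)  = sum_i  lambda_i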
Example 9.26. Suppose A ∈ R^{7×7} is known to have π(λ) = (λ − 1)^4 (λ − 2)^3 and
α(λ) = (λ − 1)^2 (λ − 2)^2. Then A has two possible JCFs (not counting reorderings of the
diagonal blocks):

    J^(1) = diag( [1 1; 0 1],  1,  1,  [2 1; 0 2],  2 )

and

    J^(2) = diag( [1 1; 0 1],  [1 1; 0 1],  [2 1; 0 2],  2 ).

Note that J^(1) has elementary divisors (λ − 1)^2, (λ − 1), (λ − 1), (λ − 2)^2, and (λ − 2),
while J^(2) has elementary divisors (λ − 1)^2, (λ − 1)^2, (λ − 2)^2, and (λ − 2).
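The two candidate JCFs of Example 9.26 can be built from their elementary divisors and
checked against the stated minimal polynomial; a short sketch (block sizes taken from the
example, code organization mine):

    import numpy as np
    from scipy.linalg import block_diag

    def jordan_block(lam, k):
        """k x k Jordan block with eigenvalue lam."""
        return lam * np.eye(k) + np.diag(np.ones(k - 1), k=1)

    J1 = block_diag(jordan_block(1, 2), [[1.0]], [[1.0]], jordan_block(2, 2), [[2.0]])
    J2 = block_diag(jordan_block(1, 2), jordan_block(1, 2), jordan_block(2, 2), [[2.0]])

    # Both satisfy alpha(J) = (J - I)^2 (J - 2I)^2 = 0, i.e. alpha(lambda) = (lambda-1)^2 (lambda-2)^2,
    # yet their Jordan structures differ.
    I = np.eye(7)
    for J in (J1, J2):
        alpha_J = np.linalg.matrix_power(J - I, 2) @ np.linalg.matrix_power(J - 2 * I, 2)
        print(np.allclose(alpha_J, 0))       # True, True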
Example 9.27. Knowing π(λ), α(λ), and rank(A − λ_i I) for distinct λ_i is not sufficient to
determine the JCF of A uniquely. The matrices

    A_1 = diag( [a 1 0; 0 a 1; 0 0 a],  [a 1 0; 0 a 1; 0 0 a],  a )

and

    A_2 = diag( [a 1 0; 0 a 1; 0 0 a],  [a 1; 0 a],  [a 1; 0 a] )

both have π(λ) = (λ − a)^7, α(λ) = (λ − a)^3, and rank(A − aI) = 4, i.e., three eigenvectors.

9.3 Determination of the JCF

The first critical item of information in determining the JCF of a matrix A ∈ R^{n×n} is its
number of eigenvectors. For each distinct eigenvalue λ_i, the associated number of linearly
independent right (or left) eigenvectors is given by dim N(A − λ_i I) = n − rank(A − λ_i I).
The straightforward case is, of course, when λ_i is simple, i.e., of algebraic multiplicity 1; it
then has precisely one eigenvector. The more interesting (and difficult) case occurs when
λ_i is of algebraic multiplicity greater than one. For example, suppose

    A = [3 2 1; 0 3 0; 0 0 3].

Then

    A − 3I = [0 2 1; 0 0 0; 0 0 0]

has rank 1, so the eigenvalue 3 has two eigenvectors associated with it. If we let [ξ_1 ξ_2 ξ_3]^T
denote a solution to the linear system (A − 3I)ξ = 0, we find that 2ξ_2 + ξ_3 = 0. Thus, any
two independent solutions (e.g., [1 0 0]^T and [0 1 −2]^T) are eigenvectors. To get a third
vector x_3 such that X = [x_1  x_2  x_3] reduces A to JCF, we need the notion of principal vector.

Definition 9.28. Let A ∈ C^{n×n} (or R^{n×n}). Then x is a right principal vector of degree k
associated with λ ∈ Λ(A) if and only if (A − λI)^k x = 0 and (A − λI)^{k−1} x ≠ 0.

Remark 9.29.

1. An analogous definition holds for a left principal vector of degree k.
2. The phrase "of grade k" is often used synonymously with "of degree k."

3. Principal vectors are sometimes also called generalized eigenvectors, but the latter
   term will be assigned a much different meaning in Chapter 12.

4. The case k = 1 corresponds to the "usual" eigenvector.

5. A right (or left) principal vector of degree k is associated with a Jordan block J_i of
   dimension k or larger.

9.3.1 Theoretical computation

To motivate the development of a procedure for determining principal vectors, consider a
2 × 2 Jordan block [λ 1; 0 λ]. Denote by x^(1) and x^(2) the two columns of a nonsingular
matrix X ∈ R^{2×2} that reduces a matrix A to this JCF. Then the equation AX = XJ can be
written

    A [x^(1)  x^(2)] = [x^(1)  x^(2)] [λ 1; 0 λ].

The first column yields the equation A x^(1) = λ x^(1), which simply says that x^(1) is a right
eigenvector. The second column yields the following equation for x^(2), the principal vector
of degree 2:

    (A − λI) x^(2) = x^(1).                                               (9.17)

If we premultiply (9.17) by (A − λI), we find (A − λI)^2 x^(2) = (A − λI) x^(1) = 0. Thus,
the definition of principal vector is satisfied.
    This suggests a "general" procedure. First, determine all eigenvalues of A ∈ R^{n×n}
(or C^{n×n}). Then for each distinct λ ∈ Λ(A) perform the following:

1. Solve

       (A − λI) x^(1) = 0.

   This step finds all the eigenvectors (i.e., principal vectors of degree 1) associated with
   λ. The number of eigenvectors depends on the rank of A − λI. For example, if
   rank(A − λI) = n − 1, there is only one eigenvector. If the algebraic multiplicity of
   λ is greater than its geometric multiplicity, principal vectors still need to be computed
   from succeeding steps.

2. For each independent x^(1), solve

       (A − λI) x^(2) = x^(1).

   The number of linearly independent solutions at this step depends on the rank of
   (A − λI)^2. If, for example, this rank is n − 2, there are two linearly independent
   solutions to the homogeneous equation (A − λI)^2 x^(2) = 0. One of these solutions
   is, of course, x^(1) (≠ 0), since (A − λI)^2 x^(1) = (A − λI) 0 = 0. The other solution
   is the desired principal vector of degree 2. (It may be necessary to take a linear
   combination of x^(1) vectors to get a right-hand side that is in R(A − λI). See, for
   example, Exercise 7.)
3. For each independent x^(2) from step 2, solve

       (A − λI) x^(3) = x^(2).

4. Continue in this way until the total number of independent eigenvectors and principal
   vectors is equal to the algebraic multiplicity of λ.

Unfortunately, this natural-looking procedure can fail to find all Jordan vectors. For
more extensive treatments, see, for example, [20] and [21]. Determination of eigenvectors
and principal vectors is obviously very tedious for anything beyond simple problems (n = 2
or 3, say). Attempts to do such calculations in finite-precision floating-point arithmetic
generally prove unreliable. There are significant numerical difficulties inherent in attempting
to compute a JCF, and the interested student is strongly urged to consult the classical and very
readable [8] to learn why. Notice that high-quality mathematical software such as MATLAB
does not offer a jcf command, although a jordan command is available in MATLAB's
Symbolic Toolbox.
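Outside of MATLAB, symbolic computation of a JCF is also available; for instance, SymPy's
Matrix.jordan_form returns a transformation matrix together with the Jordan form. The
sketch below (mine, not from the text) applies it to the 3 × 3 matrix with the repeated
eigenvalue 3 used at the start of Section 9.3; the verification step is added for illustration.

    import sympy as sp

    A = sp.Matrix([[3, 2, 1],
                   [0, 3, 0],
                   [0, 0, 3]])
    P, J = A.jordan_form()                  # A = P * J * P**(-1)
    print(J)                                # one 1x1 and one 2x2 Jordan block for lambda = 3
    print(sp.simplify(P.inv() * A * P - J)) # zero matrix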
Theorem 9.30. Suppose A ∈ C^{k×k} has an eigenvalue λ of algebraic multiplicity k and
suppose further that rank(A − λI) = k − 1. Let X = [x^(1), ..., x^(k)], where the chain of
vectors x^(i) is constructed as above. Then X^{-1} A X is the single k × k Jordan block (9.13)
with eigenvalue λ.

Theorem 9.31. {x^(1), ..., x^(k)} is a linearly independent set.

Theorem 9.32. Principal vectors associated with different Jordan blocks are linearly
independent.

Example 9.33. Let A ∈ R^{3×3} be upper triangular with eigenvalues λ_1 = 1, λ_2 = 1, and
λ_3 = 2. First, find the eigenvectors associated with the distinct eigenvalues 1 and 2:
(A − 2I) x_3^(1) = 0 yields an eigenvector x_3^(1) for the eigenvalue 2, while
(A − 1I) x_1^(1) = 0 yields an eigenvector x_1^(1) for the eigenvalue 1.
    To find a principal vector of degree 2 associated with the multiple eigenvalue 1, solve
(A − 1I) x_1^(2) = x_1^(1) to get the principal vector x_1^(2). Now let

    X = [x_1^(1)  x_1^(2)  x_3^(1)].

Then it is easy to check that

    X^{-1} A X = [1 1 0; 0 1 0; 0 0 2].

9.3.2 On the +1's in JCF blocks

In this subsection we show that the nonzero superdiagonal elements of a JCF need not be
1's but can be arbitrary, so long as they are nonzero. For the sake of definiteness, we
consider below the case of a single Jordan block, but the result clearly holds for any JCF.
Suppose A ∈ R^{n×n} and

    X^{-1} A X = J = [λ  1  0  ···  0;  0  λ  1  ···  0;  ···;  0  ···  0  λ  1;  0  0  ···  0  λ].

Let D = diag(d_1, ..., d_n) be a nonsingular "scaling" matrix. Then

    D^{-1} (X^{-1} A X) D = D^{-1} J D = Ĵ
        = [λ  d_2/d_1  0  ···  0;  0  λ  d_3/d_2  ···  0;  ···;  0  ···  0  λ  d_n/d_{n−1};  0  0  ···  0  λ].
Appropriate choice of the d_i's then yields any desired nonzero superdiagonal elements.
This result can also be interpreted in terms of the matrix X = [x_1, ..., x_n] of eigenvectors
and principal vectors that reduces A to its JCF. Specifically, Ĵ is obtained from A via the
similarity transformation XD = [d_1 x_1, ..., d_n x_n].
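A brief numerical check of this rescaling (the values of λ and the d_i below are chosen for
illustration and are not from the text):

    import numpy as np

    lam, n = 2.0, 4
    J = lam * np.eye(n) + np.diag(np.ones(n - 1), k=1)   # n x n Jordan block
    d = np.array([1.0, 2.0, 6.0, 24.0])                  # D = diag(d_1, ..., d_n)
    D = np.diag(d)
    J_hat = np.linalg.inv(D) @ J @ D
    print(np.diag(J_hat, k=1))   # superdiagonal is now d_{i+1}/d_i = [2., 3., 4.]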
In a similar fashion, the reverse-order identity matrix (or exchange matrix)

    P = P^T = P^{-1} = [0  ···  0  1;  0  ···  1  0;  ···;  1  0  ···  0]         (9.18)

can be used to put the superdiagonal elements in the subdiagonal instead if that is desired:

    P^{-1} [λ  1  ···  0;  0  λ  ···  0;  ···;  0  ···  λ  1;  0  ···  0  λ] P
        = [λ  0  ···  0;  1  λ  ···  0;  ···;  0  ···  λ  0;  0  ···  1  λ].

9.4 Geometric Aspects of the JCF

The matrix X that reduces a matrix A ∈ R^{n×n} (or C^{n×n}) to a JCF provides a change of basis
with respect to which the matrix is diagonal or block diagonal. It is thus natural to expect an
associated direct sum decomposition of R^n. Such a decomposition is given in the following
theorem.

Theorem 9.34. Suppose A ∈ R^{n×n} has characteristic polynomial

    π(λ) = (λ − λ_1)^{n_1} ··· (λ − λ_m)^{n_m}

and minimal polynomial

    α(λ) = (λ − λ_1)^{ν_1} ··· (λ − λ_m)^{ν_m}

with λ_1, ..., λ_m distinct. Then

    R^n = N(A − λ_1 I)^{n_1} ⊕ ··· ⊕ N(A − λ_m I)^{n_m}
        = N(A − λ_1 I)^{ν_1} ⊕ ··· ⊕ N(A − λ_m I)^{ν_m}.

Note that dim N(A − λ_i I)^{ν_i} = n_i.

Definition 9.35. Let V be a vector space over F and suppose A : V → V is a linear
transformation. A subspace S ⊆ V is A-invariant if AS ⊆ S, where AS is defined as the
set {As : s ∈ S}.
If V is taken to be R^n over R, and S ∈ R^{n×k} is a matrix whose columns s_1, ..., s_k
span a k-dimensional subspace 𝒮, i.e., R(S) = 𝒮, then 𝒮 is A-invariant if and only if there
exists M ∈ R^{k×k} such that

    AS = SM.                                                              (9.19)

This follows easily by comparing the ith columns of each side of (9.19):

    A s_i = S m_i,

where m_i denotes the ith column of M.

Example 9.36. The equation Ax = λx = xλ defining a right eigenvector x of an eigenvalue
λ says that x spans an A-invariant subspace (of dimension one).

Example 9.37. Suppose X block diagonalizes A, i.e.,

    X^{-1} A X = [J_1  0;  0  J_2].

Rewriting in the form

    A [X_1  X_2] = [X_1  X_2] [J_1  0;  0  J_2],

we have that A X_i = X_i J_i, i = 1, 2, so the columns of X_i span an A-invariant subspace.
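Relation (9.19) is easy to test numerically: the span of any set of eigenvectors is A-invariant,
and the corresponding M can be recovered by a least squares solve. A minimal sketch (random
matrix and variable names mine, not from the text):

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((4, 4))
    lam, X = np.linalg.eig(A)

    S = X[:, :2]                                    # columns span an A-invariant subspace
    M, *_ = np.linalg.lstsq(S, A @ S, rcond=None)   # solve S M = A S (consistent system)
    print(np.allclose(S @ M, A @ S))                # True: R(S) is A-invariant
    print(np.allclose(M, np.diag(lam[:2])))         # here M is just diag(lambda_1, lambda_2)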
Theorem 9.38. Suppose A ∈ R^{n×n}.

1. Let p(A) = α_0 I + α_1 A + ··· + α_q A^q be a polynomial in A. Then N(p(A)) and
   R(p(A)) are A-invariant.

2. 𝒮 is A-invariant if and only if 𝒮^⊥ is A^T-invariant.

Theorem 9.39. If V is a vector space over F such that V = N_1 ⊕ ··· ⊕ N_m, where each
N_i is A-invariant, then a basis for V can be chosen with respect to which A has a block
diagonal representation.

The Jordan canonical form is a special case of the above theorem. If A has distinct
eigenvalues λ_i as in Theorem 9.34, we could choose bases for N(A − λ_i I)^{n_i} by SVD, for
example (note that the power n_i could be replaced by ν_i). We would then get a block diagonal
representation for A with full blocks rather than the highly structured Jordan blocks. Other
such "canonical" forms are discussed in text that follows.
    Suppose X = [X_1, ..., X_m] ∈ R^{n×n} is nonsingular and such that X^{-1} A X =
diag(J_1, ..., J_m), where each J_i = diag(J_{i1}, ..., J_{ik_i}) and each J_{ik} is a Jordan block
corresponding to λ_i ∈ Λ(A). We could also use other block diagonal decompositions (e.g.,
via SVD), but we restrict our attention here to only the Jordan block case. Note that
A X_i = X_i J_i, so by (9.19) the columns of X_i (i.e., the eigenvectors and principal vectors
associated with λ_i) span an A-invariant subspace of R^n.
    Finally, we return to the problem of developing a formula for e^{tA} in the case that A
is not necessarily diagonalizable. Let Y_i ∈ C^{n×n_i} be a Jordan basis for N(A^T − λ_i I)^{n_i}.
Equivalently, partition

    X^{-1} = Y^H = [Y_1, ..., Y_m]^H
compatibly. Then

    A = X J X^{-1} = X J Y^H = [X_1, ..., X_m] diag(J_1, ..., J_m) [Y_1, ..., Y_m]^H
      = Σ_{i=1}^{m} X_i J_i Y_i^H.

In a similar fashion we can compute

    e^{tA} = Σ_{i=1}^{m} X_i e^{t J_i} Y_i^H,

which is a useful formula when used in conjunction with the result

    exp( t [λ  1  0  ···  0;  0  λ  1  ···  0;  ···;  0  ···  0  λ  1;  0  0  ···  0  λ] )
        = [e^{λt}  t e^{λt}  (t^2/2!) e^{λt}  ···  (t^{k−1}/(k−1)!) e^{λt};
           0  e^{λt}  t e^{λt}  ···  (t^{k−2}/(k−2)!) e^{λt};
           ···;
           0  ···  0  e^{λt}  t e^{λt};
           0  0  ···  0  e^{λt}]

for a k × k Jordan block J_i associated with an eigenvalue λ = λ_i.
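The closed form for exp(tJ_i) displayed above is easily verified against a general-purpose
matrix exponential; a short sketch (parameter values chosen for illustration, not from the text):

    import numpy as np
    from scipy.linalg import expm
    from math import factorial

    lam, k, t = -0.5, 4, 1.3
    J = lam * np.eye(k) + np.diag(np.ones(k - 1), k=1)    # k x k Jordan block

    # Entry (i, i+m) of exp(tJ) is t^m e^{lam t} / m!.
    closed = sum((t ** m) * np.exp(lam * t) / factorial(m) * np.diag(np.ones(k - m), k=m)
                 for m in range(k))
    print(np.allclose(closed, expm(t * J)))               # True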
9.5 The Matrix Sign Function

In this section we give a very brief introduction to an interesting and useful matrix function
called the matrix sign function. It is a generalization of the sign (or signum) of a scalar. A
survey of the matrix sign function and some of its applications can be found in [15].

Definition 9.40. Let z ∈ C with Re(z) ≠ 0. Then the sign of z is defined by

    sgn(z) = Re(z)/|Re(z)| = +1 if Re(z) > 0,  −1 if Re(z) < 0.

Definition 9.41. Suppose A ∈ C^{n×n} has no eigenvalues on the imaginary axis, and let

    X^{-1} A X = [N  0;  0  P]

be a Jordan canonical form for A, with N containing all Jordan blocks corresponding to the
eigenvalues of A in the left half-plane and P containing all Jordan blocks corresponding to
eigenvalues in the right half-plane. Then the sign of A, denoted sgn(A), is given by

    sgn(A) = X [−I  0;  0  I] X^{-1},
where the negative and positive identity matrices are of the same dimensions as N and P,
respectively.
    There are other equivalent definitions of the matrix sign function, but the one given
here is especially useful in deriving many of its key properties. The JCF definition of the
matrix sign function does not generally lend itself to reliable computation on a finite-word-
length digital computer. In fact, its reliable numerical calculation is an interesting topic in
its own right.
    We state some of the more useful properties of the matrix sign function as theorems.
Their straightforward proofs are left to the exercises.

Theorem 9.42. Suppose A ∈ C^{n×n} has no eigenvalues on the imaginary axis, and let
S = sgn(A). Then the following hold:

1. S is diagonalizable with eigenvalues equal to ±1.

2. S^2 = I.

3. AS = SA.

4. sgn(A^H) = (sgn(A))^H.

5. sgn(T^{-1} A T) = T^{-1} sgn(A) T for all nonsingular T ∈ C^{n×n}.

6. sgn(cA) = sgn(c) sgn(A) for all nonzero real scalars c.

Theorem 9.43. Suppose A ∈ C^{n×n} has no eigenvalues on the imaginary axis, and let
S = sgn(A). Then the following hold:

1. R(S − I) is an A-invariant subspace corresponding to the left half-plane eigenvalues
   of A (the negative invariant subspace).

2. R(S + I) is an A-invariant subspace corresponding to the right half-plane eigenvalues
   of A (the positive invariant subspace).

3. negA = (I − S)/2 is a projection onto the negative invariant subspace of A.

4. posA = (I + S)/2 is a projection onto the positive invariant subspace of A.
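For a diagonalizable A with no imaginary-axis eigenvalues, sgn(A) can be formed directly
from an eigendecomposition, and the properties above can be spot-checked numerically. A
minimal sketch (example matrix mine; as noted above, this is not a reliable general-purpose
way to compute sgn(A)):

    import numpy as np

    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, -3.0, 1.0],
                  [0.0, 0.0, 2.0]])           # eigenvalues 1, -3, 2 (none on the imaginary axis)
    lam, X = np.linalg.eig(A)
    S = (X * np.sign(lam.real)) @ np.linalg.inv(X)   # X sgn(Lambda) X^{-1}

    print(np.allclose(S @ S, np.eye(3)))      # S^2 = I          (Theorem 9.42)
    print(np.allclose(A @ S, S @ A))          # AS = SA
    pos = (np.eye(3) + S) / 2                 # projection onto the positive invariant subspace
    print(np.allclose(pos @ pos, pos))        #                  (Theorem 9.43)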
EXERCISES

1. Let A ∈ C^{n×n} have distinct eigenvalues λ_1, ..., λ_n with corresponding right eigen-
   vectors x_1, ..., x_n and left eigenvectors y_1, ..., y_n, respectively. Let v ∈ C^n be an
   arbitrary vector. Show that v can be expressed (uniquely) as a linear combination
   of the right eigenvectors. Find the appropriate expression for v as a linear combination
   of the left eigenvectors as well.
2. Suppose A ∈ C^{n×n} is skew-Hermitian, i.e., A^H = −A. Prove that all eigenvalues of
   a skew-Hermitian matrix must be pure imaginary.

3. Suppose A ∈ C^{n×n} is Hermitian. Let λ be an eigenvalue of A with corresponding
   right eigenvector x. Show that x is also a left eigenvector for λ. Prove the same result
   if A is skew-Hermitian.

4. Suppose a matrix A ∈ R^{5×5} has eigenvalues {2, 2, 2, 2, 3}. Determine all possible
   JCFs for A.

5. Determine the eigenvalues, right eigenvectors and right principal vectors if necessary,
   and (real) JCFs of the following matrices:

   (a) [2  −1;  1  0].

6. Determine the JCFs of the given matrices.

7. For the given 3 × 3 matrix A, find a nonsingular matrix X such that X^{-1} A X = J,
   where J is the JCF

       J = [1 1 0; 0 1 0; 0 0 1].

   Hint: Use [−1 1 −1]^T as an eigenvector. The vectors [0 1 −1]^T and [1 0 0]^T
   are both eigenvectors, but then the equation (A − I) x^(2) = x^(1) can't be solved.

8. Show that all right eigenvectors of the Jordan block matrix in Theorem 9.30 must be
   multiples of e_1 ∈ R^k. Characterize all left eigenvectors.

9. Let A ∈ R^{n×n} be of the form A = x y^T, where x, y ∈ R^n are nonzero vectors with
   x^T y = 0. Determine the JCF of A.

10. Let A ∈ R^{n×n} be of the form A = I + x y^T, where x, y ∈ R^n are nonzero vectors
    with x^T y = 0. Determine the JCF of A.

11. Suppose a matrix A ∈ R^{16×16} has 16 eigenvalues at 0 and its JCF consists of a single
    Jordan block of the form specified in Theorem 9.22. Suppose the small number 10^{−16}
    is added to the (16,1) element of J. What are the eigenvalues of this slightly perturbed
    matrix?
12. Show that every matrix A ∈ ℝ^{n×n} can be factored in the form A = S_1 S_2, where S_1 and S_2 are real symmetric matrices and one of them, say S_1, is nonsingular.
    Hint: Suppose A = X J X^{-1} is a reduction of A to JCF and suppose we can construct the "symmetric factorization" of J. Then A = (X S_1 X^T)(X^{-T} S_2 X^{-1}) would be the required symmetric factorization of A. Thus, it suffices to prove the result for the JCF. The transformation P in (9.18) is useful.

13. Prove that every matrix A ∈ ℝ^{n×n} is similar to its transpose and determine a similarity transformation explicitly.
    Hint: Use the factorization in the previous exercise.

14. Consider the block upper triangular matrix

    A = [ A_11  A_12 ]
        [  0    A_22 ],

    where A ∈ ℝ^{n×n} and A_11 ∈ ℝ^{k×k} with 1 ≤ k ≤ n. Suppose A_12 ≠ 0 and that we want to block diagonalize A via the similarity transformation

    T = [ I  X ]
        [ 0  I ],

    where X ∈ ℝ^{k×(n−k)}, i.e.,

    T^{-1} A T = [ A_11   0   ]
                 [  0    A_22 ].

    Find a matrix equation that X must satisfy for this to be possible. If n = 2 and k = 1, what can you say further, in terms of A_11 and A_22, about when the equation for X is solvable?

15. Prove Theorem 9.42.

16. Prove Theorem 9.43.

17. Suppose A ∈ ℂ^{n×n} has all its eigenvalues in the left half-plane. Prove that sgn(A) = −I.
Chapter 10

Canonical Forms

10.1 Some Basic Canonical Forms

Problem: Let V and W be vector spaces and suppose A : V → W is a linear transformation. Find bases in V and W with respect to which Mat A has a "simple form" or "canonical form." In matrix terms, if A ∈ ℝ^{m×n}, find nonsingular P ∈ ℝ^{m×m} and Q ∈ ℝ^{n×n} such that PAQ has a "canonical form." The transformation A ↦ PAQ is called an equivalence; it is called an orthogonal equivalence if P and Q are orthogonal matrices.

Remark 10.1. We can also consider the case A ∈ ℂ^{m×n} and unitary equivalence if P and Q are unitary.

Two special cases are of interest:

1. If W = V and Q = P^{-1}, the transformation A ↦ PAP^{-1} is called a similarity.

2. If W = V and if Q = P^T is orthogonal, the transformation A ↦ PAP^T is called an orthogonal similarity (or unitary similarity in the complex case).

The following results are typical of what can be achieved under a unitary similarity. If A = A^H ∈ ℂ^{n×n} has eigenvalues λ_1, ..., λ_n, then there exists a unitary matrix U such that U^H A U = D, where D = diag(λ_1, ..., λ_n). This is proved in Theorem 10.2. What other matrices are "diagonalizable" under unitary similarity? The answer is given in Theorem 10.9, where it is proved that a general matrix A ∈ ℂ^{n×n} is unitarily similar to a diagonal matrix if and only if it is normal (i.e., AA^H = A^H A). Normal matrices include Hermitian, skew-Hermitian, and unitary matrices (and their "real" counterparts: symmetric, skew-symmetric, and orthogonal, respectively), as well as other matrices that merely satisfy the definition, such as A = [ a  b ; −b  a ] for real scalars a and b. If a matrix A is not normal, the most "diagonal" we can get is the JCF described in Chapter 9.

Theorem 10.2. Let A = A^H ∈ ℂ^{n×n} have (real) eigenvalues λ_1, ..., λ_n. Then there exists a unitary matrix X such that X^H A X = D = diag(λ_1, ..., λ_n) (the columns of X are orthonormal eigenvectors for A).
Proof: Let x_1 be a right eigenvector corresponding to λ_1, and normalize it such that x_1^H x_1 = 1. Then there exist n − 1 additional vectors x_2, ..., x_n such that X = [x_1, ..., x_n] = [x_1  X_2] is unitary. Now

   X^H A X = [ x_1^H ] A [x_1  X_2] = [ x_1^H A x_1   x_1^H A X_2 ]
             [ X_2^H ]                [ X_2^H A x_1   X_2^H A X_2 ]

           = [ λ_1   x_1^H A X_2 ]                                   (10.1)
             [  0    X_2^H A X_2 ]

           = [ λ_1        0      ]                                   (10.2)
             [  0    X_2^H A X_2 ].

In (10.1) we have used the fact that A x_1 = λ_1 x_1. When combined with the fact that x_1^H x_1 = 1, we get λ_1 remaining in the (1,1)-block. We also get 0 in the (2,1)-block by noting that x_1 is orthogonal to all vectors in X_2. In (10.2), we get 0 in the (1,2)-block by noting that X^H A X is Hermitian. The proof is completed easily by induction upon noting that the (2,2)-block must have eigenvalues λ_2, ..., λ_n.  □
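A quick numerical illustration of Theorem 10.2 (added here, not from the text): for a Hermitian matrix, numpy.linalg.eigh returns real eigenvalues together with a matrix of orthonormal eigenvectors, which plays the role of the unitary X of the theorem. The random symmetric test matrix below is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                       # real symmetric, hence Hermitian

lam, X = np.linalg.eigh(A)              # real eigenvalues (ascending) and orthonormal eigenvectors
print(np.allclose(X.T @ X, np.eye(4)))          # X is orthogonal (unitary in the real case)
print(np.allclose(X.T @ A @ X, np.diag(lam)))   # X^H A X = diag(lambda_1, ..., lambda_n)
```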
Given a unit vector x_1 ∈ ℝ^n, the construction of X_2 ∈ ℝ^{n×(n−1)} such that X = [x_1  X_2] is orthogonal is frequently required. The construction can actually be performed quite easily by means of Householder (or Givens) transformations as in the proof of the following general result.

Theorem 10.3. Let X_1 ∈ ℂ^{n×k} have orthonormal columns and suppose U is a unitary matrix such that

   U X_1 = [ R ]
           [ 0 ],

where R ∈ ℂ^{k×k} is upper triangular. Write U^H = [U_1  U_2] with U_1 ∈ ℂ^{n×k}. Then [X_1  U_2] is unitary.

Proof: Let X_1 = [x_1, ..., x_k]. Construct a sequence of Householder matrices (also known as elementary reflectors) H_1, ..., H_k in the usual way (see below) such that

   H_k ··· H_1 [x_1, ..., x_k] = [ R ]
                                 [ 0 ],

where R is upper triangular (and nonsingular since x_1, ..., x_k are orthonormal). Let U = H_k ··· H_1. Then U^H = H_1 ··· H_k and

   U X_1 = [ U_1^H X_1 ] = [ R ]
           [ U_2^H X_1 ]   [ 0 ].

Then x_i^H U_2 = 0 (i = 1, ..., k) means that x_i is orthogonal to each of the n − k columns of U_2. But the latter are orthonormal since they are the last n − k rows of the unitary matrix U. Thus, [X_1  U_2] is unitary.  □

The construction called for in Theorem 10.2 is then a special case of Theorem 10.3 for k = 1. We illustrate the construction of the necessary Householder matrix for k = 1. For simplicity, we consider the real case. Let the unit vector x_1 be denoted by [ξ_1, ..., ξ_n]^T.
Then the necessary Householder matrix needed for the construction of X_2 is given by U = I − 2uu^+ = I − (2/(u^T u)) uu^T, where u = [ξ_1 ± 1, ξ_2, ..., ξ_n]^T. It can easily be checked that U is symmetric and U^T U = U^2 = I, so U is orthogonal. To see that U effects the necessary compression of x_1, it is easily verified that u^T u = 2 ± 2ξ_1 and u^T x_1 = 1 ± ξ_1. Thus,

   U x_1 = x_1 − (2 u^T x_1 / u^T u) u = x_1 − u = [∓1, 0, ..., 0]^T.
Further details on Householder matrices, including the choice of sign and the complex case,
can be consulted in standard numerical linear algebra texts such as [7], [11], [23], [25].
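The k = 1 construction above translates directly into a few lines of code. The sketch below (an added illustration, not from the text) forms U = I − (2/u^T u) uu^T for a given real unit vector x_1, choosing the sign ξ_1 ± 1 to avoid cancellation, and checks that the last n − 1 columns of U complete x_1 to an orthogonal matrix; the function name is an arbitrary choice.

```python
import numpy as np

def complete_to_orthogonal(x1):
    # Householder matrix U = I - (2/u^T u) u u^T with u = x1 +/- e1; then U x1 = -/+ e1,
    # so the last n-1 columns of U are orthonormal and orthogonal to x1.
    n = x1.size
    u = x1.copy()
    u[0] += np.sign(x1[0]) if x1[0] != 0 else 1.0   # sign chosen to avoid cancellation
    U = np.eye(n) - (2.0 / (u @ u)) * np.outer(u, u)
    return U[:, 1:]

x1 = np.array([3.0, 0.0, 4.0]) / 5.0                # a unit vector in R^3
X = np.column_stack([x1, complete_to_orthogonal(x1)])
print(np.allclose(X.T @ X, np.eye(3)))              # [x1 X2] is orthogonal
```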
The real version of Theorem 10.2 is worth stating separately since it arises frequently in applications.

Theorem 10.4. Let A = A^T ∈ ℝ^{n×n} have eigenvalues λ_1, ..., λ_n. Then there exists an orthogonal matrix X ∈ ℝ^{n×n} (whose columns are orthonormal eigenvectors of A) such that X^T A X = D = diag(λ_1, ..., λ_n).

Note that Theorem 10.4 implies that a symmetric matrix A (with the obvious analogue from Theorem 10.2 for Hermitian matrices) can be written

   A = X D X^T = Σ_{i=1}^{n} λ_i x_i x_i^T,                          (10.3)

which is often called the spectral representation of A. In fact, A in (10.3) is actually a weighted sum of orthogonal projections P_i (onto the one-dimensional eigenspaces corresponding to the λ_i's), i.e.,

   A = Σ_{i=1}^{n} λ_i P_i,

where P_i = P_{R(x_i)} = x_i x_i^+ = x_i x_i^T since x_i^T x_i = 1.
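The spectral representation (10.3) can be verified numerically in a few lines. The following sketch (an illustration added here, not from the text) rebuilds a random symmetric matrix from its eigenpairs and checks that each x_i x_i^T is an orthogonal projection.

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                        # real symmetric test matrix

lam, X = np.linalg.eigh(A)

# (10.3): A = sum_i lambda_i x_i x_i^T, a weighted sum of rank-one orthogonal projections.
A_rebuilt = sum(lam[i] * np.outer(X[:, i], X[:, i]) for i in range(5))
print(np.allclose(A, A_rebuilt))

P0 = np.outer(X[:, 0], X[:, 0])          # P_i = x_i x_i^T
print(np.allclose(P0 @ P0, P0), np.allclose(P0, P0.T))
```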
The following pair of theorems form the theoretical foundation of the double-Francis-
QR algorithm used to compute matrix eigenvalues in a numerically stable and reliable way.
Theorem 10.5 (Schur). Let A ∈ ℂ^{n×n}. Then there exists a unitary matrix U such that U^H A U = T, where T is upper triangular.

Proof: The proof of this theorem is essentially the same as that of Theorem 10.2 except that in this case (using the notation U rather than X) the (1,2)-block u_1^H A U_2 is not 0.  □

In the case of A ∈ ℝ^{n×n}, it is thus unitarily similar to an upper triangular matrix, but if A has a complex conjugate pair of eigenvalues, then complex arithmetic is clearly needed to place such eigenvalues on the diagonal of T. However, the next theorem shows that every A ∈ ℝ^{n×n} is also orthogonally similar (i.e., real arithmetic) to a quasi-upper-triangular matrix. A quasi-upper-triangular matrix is block upper triangular with 1 × 1 diagonal blocks corresponding to its real eigenvalues and 2 × 2 diagonal blocks corresponding to its complex conjugate pairs of eigenvalues.

Theorem 10.6 (Murnaghan–Wintner). Let A ∈ ℝ^{n×n}. Then there exists an orthogonal matrix U such that U^T A U = S, where S is quasi-upper-triangular.

Definition 10.7. The triangular matrix T in Theorem 10.5 is called a Schur canonical form or Schur form. The quasi-upper-triangular matrix S in Theorem 10.6 is called a real Schur canonical form or real Schur form (RSF). The columns of a unitary [orthogonal] matrix U that reduces a matrix to [real] Schur form are called Schur vectors.
Example 10.8. The matrix

   S = [ −2  5  · ]
       [ −2  4  · ]
       [  0  0  · ]

is in RSF; its leading 2 × 2 block carries the complex conjugate eigenvalue pair 1 ± j (the entries of the third column are not legible in this copy). Its real JCF is

   J = [  1  1  · ]
       [ −1  1  · ]
       [  0  0  · ].
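Both forms are available in standard software. The following SciPy sketch (added for illustration; the random test matrix is an arbitrary choice and this is not the book's example) computes a complex Schur form and a real Schur form of the same matrix; any 2 × 2 diagonal blocks of the latter carry complex conjugate eigenvalue pairs.

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))

T, U = schur(A, output='complex')   # U^H A U = T, with T upper triangular
S, Q = schur(A, output='real')      # Q^T A Q = S, with S quasi-upper-triangular (RSF)

print(np.allclose(U @ T @ U.conj().T, A))
print(np.allclose(Q @ S @ Q.T, A))
print(np.round(S, 3))               # inspect the 1x1 and 2x2 diagonal blocks
```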
Note that only the first Schur vector (and then only if the corresponding first eigenvalue
is real if U is orthogonal) is an eigenvector. However, what is true, and sufficient for virtually
all applications (see, for example, [17]), is that the first k Schur vectors span the same A-
invariant subspace as the eigenvectors corresponding to the first k eigenvalues along the
diagonal of T (or S).
While every matrix can be reduced to Schur form (or RSF), it is of interest to know
when we can go further and reduce a matrix via unitary similarity to diagonal form. The
following theorem answers this question.
Theorem 10.9. A matrix A ∈ ℂ^{n×n} is unitarily similar to a diagonal matrix if and only if A is normal (i.e., A^H A = AA^H).

Proof: Suppose U is a unitary matrix such that U^H A U = D, where D is diagonal. Then

   A A^H = (U D U^H)(U D U^H)^H = U D D^H U^H = U D^H D U^H = A^H A

so A is normal.
Conversely, suppose A is normal and let U be a unitary matrix such that U^H A U = T, where T is an upper triangular matrix (Theorem 10.5). Then

   T^H T = U^H A^H U U^H A U = U^H A^H A U = U^H A A^H U = T T^H.

It is then a routine exercise to show that T must, in fact, be diagonal.  □

10.2 Definite Matrices

Definition 10.10. A symmetric matrix A ∈ ℝ^{n×n} is

1. positive definite if and only if x^T A x > 0 for all nonzero x ∈ ℝ^n. We write A > 0.

2. nonnegative definite (or positive semidefinite) if and only if x^T A x ≥ 0 for all nonzero x ∈ ℝ^n. We write A ≥ 0.

3. negative definite if −A is positive definite. We write A < 0.

4. nonpositive definite (or negative semidefinite) if −A is nonnegative definite. We write A ≤ 0.

Also, if A and B are symmetric matrices, we write A > B if and only if A − B > 0 or B − A < 0. Similarly, we write A ≥ B if and only if A − B ≥ 0 or B − A ≤ 0.

Remark 10.11. If A ∈ ℂ^{n×n} is Hermitian, all the above definitions hold except that superscript H's replace T's. Indeed, this is generally true for all results in the remainder of this section that may be stated in the real case for simplicity.

Remark 10.12. If a matrix is neither definite nor semidefinite, it is said to be indefinite.

Theorem 10.13. Let A = A^H ∈ ℂ^{n×n} with eigenvalues λ_1 ≥ λ_2 ≥ ··· ≥ λ_n. Then for all x ∈ ℂ^n,

   λ_n x^H x ≤ x^H A x ≤ λ_1 x^H x.

Proof: Let U be a unitary matrix that diagonalizes A as in Theorem 10.2. Furthermore, let y = U^H x, where x is an arbitrary vector in ℂ^n, and denote the components of y by η_i, i = 1, ..., n. Then

   x^H A x = (U^H x)^H U^H A U (U^H x) = y^H D y = Σ_{i=1}^{n} λ_i |η_i|^2.

But clearly

   Σ_{i=1}^{n} λ_i |η_i|^2 ≤ λ_1 y^H y = λ_1 x^H x
and

   Σ_{i=1}^{n} λ_i |η_i|^2 ≥ λ_n y^H y = λ_n x^H x,

from which the theorem follows.  □

Remark 10.14. The ratio x^H A x / x^H x for A = A^H ∈ ℂ^{n×n} and nonzero x ∈ ℂ^n is called the Rayleigh quotient of x. Theorem 10.13 provides upper (λ_1) and lower (λ_n) bounds for the Rayleigh quotient. If A = A^H ∈ ℂ^{n×n} is positive definite, x^H A x > 0 for all nonzero x ∈ ℂ^n, so 0 < λ_n ≤ ··· ≤ λ_1.
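The bounds of Theorem 10.13 are easy to check numerically. The sketch below (added here as an illustration, not from the text) evaluates the Rayleigh quotient of a random vector for a random Hermitian matrix and confirms that it lies between the extreme eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
A = (B + B.conj().T) / 2                     # Hermitian test matrix

lam = np.linalg.eigvalsh(A)                  # real eigenvalues, ascending

x = rng.standard_normal(6) + 1j * rng.standard_normal(6)
rayleigh = (x.conj() @ A @ x).real / (x.conj() @ x).real

print(lam[0] <= rayleigh <= lam[-1])         # lambda_n <= x^H A x / x^H x <= lambda_1
```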
Corollary 10.15. Let A ∈ ℂ^{n×n}. Then ‖A‖_2 = λ_max^{1/2}(A^H A).

Proof: For all x ∈ ℂ^n we have

   ‖Ax‖_2^2 = x^H A^H A x ≤ λ_max(A^H A) x^H x = λ_max(A^H A) ‖x‖_2^2.

Let x be an eigenvector corresponding to λ_max(A^H A). Then ‖Ax‖_2^2 / ‖x‖_2^2 = λ_max(A^H A), whence

   ‖A‖_2 = max_{x≠0} ‖Ax‖_2 / ‖x‖_2 = λ_max^{1/2}(A^H A).  □

Definition 10.16. A principal submatrix of an n × n matrix A is the (n − k) × (n − k) matrix that remains by deleting k rows and the corresponding k columns. A leading principal submatrix of order n − k is obtained by deleting the last k rows and columns.

Theorem 10.17. A symmetric matrix A ∈ ℝ^{n×n} is positive definite if and only if any of the following three equivalent conditions hold:

1. The determinants of all leading principal submatrices of A are positive.

2. All eigenvalues of A are positive.

3. A can be written in the form M^T M, where M ∈ ℝ^{n×n} is nonsingular.

Theorem 10.18. A symmetric matrix A ∈ ℝ^{n×n} is nonnegative definite if and only if any of the following three equivalent conditions hold:

1. The determinants of all principal submatrices of A are nonnegative.

2. All eigenvalues of A are nonnegative.

3. A can be written in the form M^T M, where M ∈ ℝ^{k×n} and k ≥ rank(A) = rank(M).

Remark 10.19. Note that the determinants of all principal submatrices must be nonnegative in Theorem 10.18.1, not just those of the leading principal submatrices. For example, consider the matrix A = [ 0  0 ; 0  −1 ]. The determinant of the 1 × 1 leading submatrix is 0 and the determinant of the 2 × 2 leading submatrix is also 0 (cf. Theorem 10.17). However, the
principal submatrix consisting of the (2,2) element is, in fact, negative and A is nonpositive
definite.
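The three characterizations in Theorems 10.17 and 10.18 suggest simple numerical tests. The sketch below (an added illustration; the particular matrix M is an arbitrary choice) builds A = M^T M with M nonsingular and checks positive definiteness three ways; using a Cholesky factor for the third test anticipates Theorem 10.23 below.

```python
import numpy as np

M = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 3.0]])            # nonsingular
A = M.T @ M                                 # hence A is symmetric positive definite

# 1. All leading principal minors are positive.
print(all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, 4)))

# 2. All eigenvalues are positive.
print(np.all(np.linalg.eigvalsh(A) > 0))

# 3. A = M^T M with M nonsingular; a Cholesky factor provides such an M
#    (numpy raises LinAlgError if A is not positive definite).
L = np.linalg.cholesky(A)
print(np.allclose(L @ L.T, A))
```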
Remark 10.20. The factor M in Theorem 10.18.3 is not unique. For example, if

   A = [ 1  0 ]
       [ 0  0 ],

then M can be [ 1  0 ], or [ 1/√2  0 ; 1/√2  0 ], or [ 1/√3  0 ; 1/√3  0 ; 1/√3  0 ], and so on.

Recall that A ≥ B if the matrix A − B is nonnegative definite. The following theorem is useful in "comparing" symmetric matrices. Its proof is straightforward from basic definitions.

Theorem 10.21. Let A, B ∈ ℝ^{n×n} be symmetric.

1. If A ≥ B and M ∈ ℝ^{n×m}, then M^T A M ≥ M^T B M.

2. If A > B and M ∈ ℝ^{n×m} with rank(M) = m, then M^T A M > M^T B M.

The following standard theorem is stated without proof (see, for example, [16, p. 181]). It concerns the notion of the "square root" of a matrix. That is, if A ∈ ℝ^{n×n}, we say that S ∈ ℝ^{n×n} is a square root of A if S^2 = A. In general, matrices (both symmetric and nonsymmetric) have infinitely many square roots. For example, if A = I_2, any matrix S of the form [ cos θ  sin θ ; sin θ  −cos θ ] is a square root.

Theorem 10.22. Let A ∈ ℝ^{n×n} be nonnegative definite. Then A has a unique nonnegative definite square root S. Moreover, SA = AS and rank S = rank A (and hence S is positive definite if A is positive definite).

A stronger form of the third characterization in Theorem 10.17 is available and is known as the Cholesky factorization. It is stated and proved below for the more general Hermitian case.

Theorem 10.23. Let A ∈ ℂ^{n×n} be Hermitian and positive definite. Then there exists a unique nonsingular lower triangular matrix L with positive diagonal elements such that A = LL^H.

Proof: The proof is by induction. The case n = 1 is trivially true. Write the matrix A in the form

   A = [ B     b    ]
       [ b^H   a_nn ].

By our induction hypothesis, assume the result is true for matrices of order n − 1 so that B may be written as B = L_1 L_1^H, where L_1 ∈ ℂ^{(n−1)×(n−1)} is nonsingular and lower triangular
with positive diagonal elements. It remains to prove that we can write the n × n matrix A in the form

   [ B     b    ]   [ L_1   0 ] [ L_1^H  c ]
   [ b^H   a_nn ] = [ c^H   α ] [  0     α ],

where α is positive. Performing the indicated matrix multiplication and equating the corresponding submatrices, we see that we must have L_1 c = b and a_nn = c^H c + α^2. Clearly c is given simply by c = L_1^{-1} b. Substituting in the expression involving α, we find α^2 = a_nn − b^H L_1^{-H} L_1^{-1} b = a_nn − b^H B^{-1} b (= the Schur complement of B in A). But we know that

   0 < det(A) = det [ B     b    ] = det(B) det(a_nn − b^H B^{-1} b).
                    [ b^H   a_nn ]

Since det(B) > 0, we must have a_nn − b^H B^{-1} b > 0. Choosing α to be the positive square root of a_nn − b^H B^{-1} b completes the proof.  □
10.3 Equivalence Transformations and Congruence
Theorem 10.24. Let A ∈ ℂ^{m×n} have rank r. Then there exist nonsingular matrices P ∈ ℂ^{m×m} and Q ∈ ℂ^{n×n} such that

   P A Q = [ I_r  0 ]                                                (10.4)
           [  0   0 ].

Proof: A classical proof can be consulted in, for example, [21, p. 131]. Alternatively, suppose A has an SVD of the form (5.2) in its complex version. Then

   [ S^{-1}  0 ] [ U_1^H ] A V = [ I_r  0 ]
   [   0     I ] [ U_2^H ]       [  0   0 ].

Take

   P = [ S^{-1}  0 ] [ U_1^H ]   and   Q = V
       [   0     I ] [ U_2^H ]

to complete the proof.  □
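The constructive proof above is easy to mimic numerically. The sketch below (an added illustration, not from the text) builds P and Q from an SVD of a rank-deficient real matrix and checks that PAQ has the canonical form (10.4); the threshold used to decide the numerical rank is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # 5 x 4, rank 3

U, s, Vh = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))                  # numerical rank

# P = blkdiag(S^{-1}, I) U^H and Q = V, as in the proof of Theorem 10.24.
P = np.block([[np.diag(1.0 / s[:r]), np.zeros((r, 5 - r))],
              [np.zeros((5 - r, r)), np.eye(5 - r)]]) @ U.T
Q = Vh.T

target = np.zeros((5, 4))
target[:r, :r] = np.eye(r)
print(np.allclose(P @ A @ Q, target))       # P A Q = [[I_r, 0], [0, 0]]
```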
Note that the greater freedom afforded by the equivalence transformation of Theorem 10.24, as opposed to the more restrictive situation of a similarity transformation, yields a far "simpler" canonical form (10.4). However, numerical procedures for computing such an equivalence directly via, say, Gaussian or elementary row and column operations, are generally unreliable. The numerically preferred equivalence is, of course, the unitary equivalence known as the SVD. However, the SVD is relatively expensive to compute and other canonical forms exist that are intermediate between (10.4) and the SVD; see, for example, [7, Ch. 5], [4, Ch. 2]. Two such forms are stated here. They are more stably computable than (10.4) and more efficiently computable than a full SVD. Many similar results are also available.
Theorem 10.25 (Complete Orthogonal Decomposition). Let A ∈ ℂ^{m×n} have rank r. Then there exist unitary matrices U ∈ ℂ^{m×m} and V ∈ ℂ^{n×n} such that

   U A V = [ R  0 ]                                                  (10.5)
           [ 0  0 ],

where R ∈ ℂ^{r×r} is upper (or lower) triangular with positive diagonal elements.

Proof: For the proof, see [4].  □

Theorem 10.26. Let A ∈ ℂ^{m×n} have rank r. Then there exists a unitary matrix Q ∈ ℂ^{m×m} and a permutation matrix Π ∈ ℂ^{n×n} such that

   Q A Π = [ R  S ]                                                  (10.6)
           [ 0  0 ],

where R ∈ ℂ^{r×r} is upper triangular and S ∈ ℂ^{r×(n−r)} is arbitrary but in general nonzero.

Proof: For the proof, see [4].  □

Remark 10.27. When A has full column rank but is "near" a rank deficient matrix, various rank revealing QR decompositions are available that can sometimes detect such phenomena at a cost considerably less than a full SVD. Again, see [4] for details.

Definition 10.28. Let A ∈ ℂ^{n×n} and let X ∈ ℂ^{n×n} be nonsingular. The transformation A ↦ X^H A X is called a congruence. Note that a congruence is a similarity if and only if X is unitary.

Note that congruence preserves the property of being Hermitian; i.e., if A is Hermitian, then X^H A X is also Hermitian. It is of interest to ask what other properties of a matrix are preserved under congruence. It turns out that the principal property so preserved is the sign of each eigenvalue.

Definition 10.29. Let A = A^H ∈ ℂ^{n×n} and let π, ν, and ζ denote the numbers of positive, negative, and zero eigenvalues, respectively, of A. Then the inertia of A is the triple of numbers In(A) = (π, ν, ζ). The signature of A is given by sig(A) = π − ν.

Example 10.30.

1. In diag(1, 1, −1, 0) = (2, 1, 1).

2. If A = A^H ∈ ℂ^{n×n}, then A > 0 if and only if In(A) = (n, 0, 0).

3. If In(A) = (π, ν, ζ), then rank(A) = π + ν.

Theorem 10.31 (Sylvester's Law of Inertia). Let A = A^H ∈ ℂ^{n×n} and let X ∈ ℂ^{n×n} be nonsingular. Then In(A) = In(X^H A X).

Proof: For the proof, see, for example, [21, p. 134].  □
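Inertia and Sylvester's law are easy to explore numerically. The sketch below (added here; the tolerance and the random congruence are arbitrary choices) computes In(A) from the eigenvalues of a Hermitian matrix and checks that a congruence with a random nonsingular X leaves the inertia unchanged.

```python
import numpy as np

def inertia(A, tol=1e-10):
    # (pi, nu, zeta): numbers of positive, negative, and zero eigenvalues of Hermitian A.
    lam = np.linalg.eigvalsh(A)
    return int(np.sum(lam > tol)), int(np.sum(lam < -tol)), int(np.sum(np.abs(lam) <= tol))

A = np.diag([1.0, 1.0, -1.0, 0.0])          # In(A) = (2, 1, 1), as in Example 10.30
print(inertia(A))

rng = np.random.default_rng(7)
X = rng.standard_normal((4, 4))             # generically nonsingular
print(inertia(X.T @ A @ X))                 # Sylvester's law: same inertia
```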
Theorem 10.31 guarantees that rank and signature of a matrix are preserved under
congruence. We then have the following.
Theorem 10.32. Let A = A^H ∈ ℂ^{n×n} with In(A) = (π, ν, ζ). Then there exists a nonsingular matrix X ∈ ℂ^{n×n} such that X^H A X = diag(1, ..., 1, −1, ..., −1, 0, ..., 0), where the number of 1's is π, the number of −1's is ν, and the number of 0's is ζ.

Proof: Let λ_1, ..., λ_n denote the eigenvalues of A and order them such that the first π are positive, the next ν are negative, and the final ζ are 0. By Theorem 10.2 there exists a unitary matrix U such that U^H A U = diag(λ_1, ..., λ_n). Define the n × n matrix

   W = diag(1/√λ_1, ..., 1/√λ_π, 1/√(−λ_{π+1}), ..., 1/√(−λ_{π+ν}), 1, ..., 1).

Then it is easy to check that X = U W yields the desired result.  □

10.3.1 Block matrices and definiteness

Theorem 10.33. Suppose A = A^T and D = D^T. Then

   [ A    B ]  >  0
   [ B^T  D ]

if and only if either A > 0 and D − B^T A^{-1} B > 0, or D > 0 and A − B D^{-1} B^T > 0.

Proof: The proof follows by considering, for example, the congruence

   [ A    B ]  ↦  [ I  −A^{-1}B ]^T [ A    B ] [ I  −A^{-1}B ]
   [ B^T  D ]     [ 0      I    ]   [ B^T  D ] [ 0      I    ].

The details are straightforward and are left to the reader.  □

Remark 10.34. Note the symmetric Schur complements of A (or D) in the theorem.

Theorem 10.35. Suppose A = A^T and D = D^T. Then

   [ A    B ]  ≥  0
   [ B^T  D ]

if and only if A ≥ 0, AA^+ B = B, and D − B^T A^+ B ≥ 0.

Proof: Consider the congruence with

   [ I  −A^+ B ]
   [ 0     I   ]

and proceed as in the proof of Theorem 10.33.  □
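The Schur complement test of Theorem 10.33 is convenient in computations. The sketch below (an added illustration with arbitrarily chosen blocks A, B, D) checks positive definiteness of the 2 × 2 block matrix both directly and via A > 0 together with D − B^T A^{-1} B > 0.

```python
import numpy as np

def is_pd(M):
    return bool(np.all(np.linalg.eigvalsh(M) > 0))

A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 0.0], [2.0, 1.0]])
D = np.array([[5.0, 1.0], [1.0, 4.0]])

block = np.block([[A, B], [B.T, D]])
schur_complement = D - B.T @ np.linalg.solve(A, B)

print(is_pd(block))                                  # direct test
print(is_pd(A) and is_pd(schur_complement))          # Theorem 10.33 test
```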
10.4 Rational Canonical Form
One final canonical form to be mentioned is the rational canonical form.
Definition 10.36. A matrix A ∈ ℝ^{n×n} is said to be nonderogatory if its minimal polynomial and characteristic polynomial are the same or, equivalently, if its Jordan canonical form has only one block associated with each distinct eigenvalue.

Suppose A ∈ ℝ^{n×n} is a nonderogatory matrix and suppose its characteristic polynomial is π(λ) = λ^n − (a_0 + a_1 λ + ··· + a_{n−1} λ^{n−1}). Then it can be shown (see [12]) that A is similar to a matrix of the form

   [  0    1    0   ···    0      ]
   [  0    0    1   ···    0      ]
   [  ⋮               ⋱    ⋮      ]                                  (10.7)
   [  0    0    0   ···    1      ]
   [ a_0  a_1  a_2  ···  a_{n−1}  ].

Definition 10.37. A matrix A ∈ ℝ^{n×n} of the form (10.7) is called a companion matrix or is said to be in companion form.
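A companion matrix of the form (10.7) is trivial to assemble, and its eigenvalues are precisely the roots of π(λ). The sketch below (added for illustration; the coefficient vector is an arbitrary choice) builds the 3 × 3 companion matrix of π(λ) = λ^3 − 6λ^2 + 11λ − 6, i.e., a_0 = 6, a_1 = −11, a_2 = 6, and recovers the roots 1, 2, 3.

```python
import numpy as np

def companion(a):
    # Form (10.7): ones on the superdiagonal and a_0, ..., a_{n-1} along the bottom row,
    # for pi(lam) = lam^n - (a_0 + a_1*lam + ... + a_{n-1}*lam^{n-1}).
    n = len(a)
    C = np.zeros((n, n))
    C[:-1, 1:] = np.eye(n - 1)
    C[-1, :] = a
    return C

C = companion(np.array([6.0, -11.0, 6.0]))
print(np.sort(np.linalg.eigvals(C).real))            # approximately [1. 2. 3.]
```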
Companion matrices also appear in the literature in several equivalent forms. To illustrate, consider the companion matrix

   [  0    1    0    0  ]
   [  0    0    1    0  ]                                            (10.8)
   [  0    0    0    1  ]
   [ a_0  a_1  a_2  a_3 ].

This matrix is a special case of a matrix in lower Hessenberg form. Using the reverse-order identity similarity P given by (9.18), A is easily seen to be similar to the following matrix in upper Hessenberg form:

   [ a_3  a_2  a_1  a_0 ]
   [  1    0    0    0  ]                                            (10.9)
   [  0    1    0    0  ]
   [  0    0    1    0  ].

Moreover, since a matrix is similar to its transpose (see exercise 13 in Chapter 9), the following are also companion matrices similar to the above:

   [ a_3  1  0  0 ]      [ 0  0  0  a_0 ]
   [ a_2  0  1  0 ]      [ 1  0  0  a_1 ]                            (10.10)
   [ a_1  0  0  1 ]      [ 0  1  0  a_2 ]
   [ a_0  0  0  0 ],     [ 0  0  1  a_3 ].

Notice that in all cases a companion matrix is nonsingular if and only if a_0 ≠ 0. In fact, the inverse of a nonsingular companion matrix is again in companion form. For example,

   [  0    1    0    0  ]^{-1}    [ −a_1/a_0  −a_2/a_0  −a_3/a_0  1/a_0 ]
   [  0    0    1    0  ]       = [    1          0          0       0   ]        (10.11)
   [  0    0    0    1  ]         [    0          1          0       0   ]
   [ a_0  a_1  a_2  a_3 ]         [    0          0          1       0   ],
with a similar result for companion matrices of the form (10.10).

If a companion matrix of the form (10.7) is singular, i.e., if a_0 = 0, then its pseudo-inverse can still be computed. Let a ∈ ℝ^{n−1} denote the vector [a_1, a_2, ..., a_{n−1}]^T and let c = 1/(1 + a^T a). Then it is easily verified that

   [ 0   1    0   ···    0      ] +
   [ 0   0    1   ···    0      ]       [     0          0  ]
   [ ⋮              ⋱    ⋮      ]     = [ I − caa^T     ca ].
   [ 0   0    0   ···    1      ]
   [ 0  a_1  a_2  ···  a_{n−1}  ]

Note that I − caa^T = (I + aa^T)^{-1}, and hence the pseudoinverse of a singular companion matrix is not a companion matrix unless a = 0.

Companion matrices have many other interesting properties, among which, and perhaps surprisingly, is the fact that their singular values can be found in closed form; see [14].

Theorem 10.38. Let σ_1 ≥ σ_2 ≥ ··· ≥ σ_n be the singular values of the companion matrix A in (10.7). Let α = a_1^2 + a_2^2 + ··· + a_{n−1}^2 and γ = 1 + a_0^2 + α. Then

   σ_1^2 = (γ + √(γ^2 − 4a_0^2)) / 2,

   σ_i = 1   for i = 2, 3, ..., n − 1,

   σ_n^2 = (γ − √(γ^2 − 4a_0^2)) / 2.

If a_0 ≠ 0, the largest and smallest singular values can also be written in the equivalent form

   σ_1 = (√(γ + 2|a_0|) + √(γ − 2|a_0|)) / 2,   σ_n = (√(γ + 2|a_0|) − √(γ − 2|a_0|)) / 2.

Remark 10.39. Explicit formulas for all the associated right and left singular vectors can also be derived easily.
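The closed-form singular values of Theorem 10.38, and the condition number formula discussed in Remark 10.40 below, are easy to confirm numerically. The sketch below (added here; the coefficient vector is an arbitrary choice) compares the formulas with the singular values returned by an SVD for a companion matrix of the form (10.7).

```python
import numpy as np

a = np.array([0.5, -2.0, 3.0, 1.0])          # [a_0, a_1, ..., a_{n-1}]
n = len(a)
C = np.zeros((n, n))
C[:-1, 1:] = np.eye(n - 1)
C[-1, :] = a                                  # companion matrix in the form (10.7)

a0 = a[0]
alpha = np.sum(a[1:] ** 2)
gamma = 1.0 + a0 ** 2 + alpha
disc = np.sqrt(gamma ** 2 - 4.0 * a0 ** 2)
sigma = np.array([np.sqrt((gamma + disc) / 2)] + [1.0] * (n - 2) + [np.sqrt((gamma - disc) / 2)])

print(np.allclose(np.linalg.svd(C, compute_uv=False), sigma))   # Theorem 10.38
print(np.isclose(sigma[0] / sigma[-1], np.linalg.cond(C, 2)))   # kappa_2 = sigma_1 / sigma_n
```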
If A ∈ ℝ^{n×n} is derogatory, i.e., has more than one Jordan block associated with
at least one eigenvalue, then it is not similar to a companion matrix of the form (10.7).
However, it can be shown that a derogatory matrix is similar to a block diagonal matrix,
each of whose diagonal blocks is a companion matrix. Such matrices are said to be in
rational canonical form (or Frobenius canonical form). For details, see, for example, [12].
Companion matrices appear frequently in the control and signal processing literature
but unfortunately they are often very difficult to work with numerically. Algorithms to reduce
an arbitrary matrix to companion form are numerically unstable. Moreover, companion
matrices are known to possess many undesirable numerical properties. For example, in
general and especially as n increases, their eigenstructure is extremely ill conditioned,
nonsingular ones are nearly singular, stable ones are nearly unstable, and so forth [14].
Companion matrices and rational canonical forms are generally to be avoided in floating-point computation.

Remark 10.40. Theorem 10.38 yields some understanding of why difficult numerical behavior might be expected for companion matrices. For example, when solving linear systems of equations of the form (6.2), one measure of numerical sensitivity is κ_p(A) = ‖A‖_p ‖A^{-1}‖_p, the so-called condition number of A with respect to inversion and with respect to the matrix p-norm. If this number is large, say O(10^k), one may lose up to k digits of precision. In the 2-norm, this condition number is the ratio of largest to smallest singular values which, by the theorem, can be determined explicitly as

   κ_2(A) = (γ + √(γ^2 − 4a_0^2)) / (2|a_0|).

It is easy to show that γ/(2|a_0|) ≤ κ_2(A) ≤ γ/|a_0|, and when a_0 is small or γ is large (or both), then κ_2(A) ≈ γ/|a_0|. It is not unusual for γ to be large for large n. Note that explicit formulas for κ_1(A) and κ_∞(A) can also be determined easily by using (10.11).
EXERCISES
1. Show that if a triangular matrix is normal, then it must be diagonal.

2. Prove that if A ∈ ℝ^{n×n} is normal, then N(A) = N(A^T).

3. Let A ∈ ℂ^{n×n} and define ρ(A) = max_{λ∈Λ(A)} |λ|. Then ρ(A) is called the spectral radius of A. Show that if A is normal, then ρ(A) = ‖A‖_2. Show that the converse is true if n = 2.

4. Let A ∈ ℂ^{n×n} be normal with eigenvalues λ_1, ..., λ_n and singular values σ_1 ≥ σ_2 ≥ ··· ≥ σ_n ≥ 0. Show that σ_i(A) = |λ_i(A)| for i = 1, ..., n.

5. Use the reverse-order identity matrix P introduced in (9.18) and the matrix U in Theorem 10.5 to find a unitary matrix Q that reduces A ∈ ℂ^{n×n} to lower triangular form.

6. Let A = [ · ] ∈ ℂ^{2×2}. Find a unitary matrix U such that [ · ]. (The displayed matrices are not legible in this copy.)

7. If A ∈ ℝ^{n×n} is positive definite, show that A^{-1} must also be positive definite.

8. Suppose A ∈ ℝ^{n×n} is positive definite. Is [ · ] ≥ 0? (The displayed block matrix is not legible in this copy.)

9. Let R, S ∈ ℝ^{n×n} be symmetric. Show that [ R  I ; I  S ] > 0 if and only if S > 0 and R > S^{-1}.
10. Find the inertia of the following matrices:

    (a) [ · ],      (b) [ −2      1 + j ]
                        [ 1 − j    −2   ],

    (c) [ · ],      (d) [ −1      1 + j ]
                        [ 1 − j    −1   ].

    (The matrices in parts (a) and (c) are not legible in this copy.)
Chapter 11
Linear Differential and
Difference Equations
11.1 Differential Equations
In this section we study solutions of the linear homogeneous system of differential equations
   ẋ(t) = A x(t);   x(t_0) = x_0 ∈ ℝ^n                               (11.1)

for t ≥ t_0. This is known as an initial-value problem. We restrict our attention in this chapter only to the so-called time-invariant case, where the matrix A ∈ ℝ^{n×n} is constant and does not depend on t. The solution of (11.1) is then known always to exist and be unique. It can be described conveniently in terms of the matrix exponential.

Definition 11.1. For all A ∈ ℝ^{n×n}, the matrix exponential e^A ∈ ℝ^{n×n} is defined by the power series

   e^A = Σ_{k=0}^{+∞} (1/k!) A^k.                                    (11.2)

The series (11.2) can be shown to converge for all A (it has radius of convergence equal to +∞). The solution of (11.1) involves the matrix

   e^{tA} = Σ_{k=0}^{+∞} (t^k/k!) A^k,                               (11.3)

which thus also converges for all A and uniformly in t.
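The power series (11.2) can be evaluated directly for small examples, although this is not how the matrix exponential should be computed in practice. The sketch below (added here as an illustration) compares truncated partial sums of the series with scipy.linalg.expm for a 2 × 2 matrix; the number of terms retained is an arbitrary choice.

```python
import numpy as np
from scipy.linalg import expm

def expm_series(A, terms=30):
    # Partial sum of (11.2): sum_{k=0}^{terms-1} A^k / k!.
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    return E

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(expm_series(A))        # e^A is a rotation through 1 radian for this A
print(expm(A))               # reference value
```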
11.1.1 Properties of the matrix exponential
1. e^0 = I.
   Proof: This follows immediately from Definition 11.1 by setting A = 0.

2. For all A ∈ ℝ^{n×n}, (e^A)^T = e^{A^T}.
   Proof: This follows immediately from Definition 11.1 and linearity of the transpose.
3. For all A ∈ ℝ^{n×n} and for all t, τ ∈ ℝ, e^{(t+τ)A} = e^{tA} e^{τA} = e^{τA} e^{tA}.
   Proof: Note that

      e^{(t+τ)A} = I + (t + τ)A + ((t + τ)^2/2!) A^2 + ···

   and

      e^{tA} e^{τA} = (I + tA + (t^2/2!) A^2 + ···)(I + τA + (τ^2/2!) A^2 + ···).

   Compare like powers of A in the above two equations and use the binomial theorem on (t + τ)^k.

4. For all A, B ∈ ℝ^{n×n} and for all t ∈ ℝ, e^{t(A+B)} = e^{tA} e^{tB} = e^{tB} e^{tA} if and only if A and B commute, i.e., AB = BA.
   Proof: Note that

      e^{t(A+B)} = I + t(A + B) + (t^2/2!)(A + B)^2 + ···

   and

      e^{tA} e^{tB} = (I + tA + (t^2/2!) A^2 + ···)(I + tB + (t^2/2!) B^2 + ···),

   while

      e^{tB} e^{tA} = (I + tB + (t^2/2!) B^2 + ···)(I + tA + (t^2/2!) A^2 + ···).

   Compare like powers of t in the first equation and the second or third and use the binomial theorem on (A + B)^k and the commutativity of A and B.

5. For all A ∈ ℝ^{n×n} and for all t ∈ ℝ, (e^{tA})^{-1} = e^{-tA}.
   Proof: Simply take τ = −t in property 3.
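Properties 3–5 can be checked numerically with scipy.linalg.expm, and property 4's commutativity requirement shows up clearly in such experiments. The sketch below is an added illustration; the random matrices and the choice C = A^2 (which commutes with A) are assumptions of the example.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(8)
A = rng.standard_normal((3, 3))
t, tau = 0.7, -1.3

print(np.allclose(expm((t + tau) * A), expm(t * A) @ expm(tau * A)))   # property 3
print(np.allclose(np.linalg.inv(expm(t * A)), expm(-t * A)))           # property 5

B = rng.standard_normal((3, 3))                                        # generically AB != BA
print(np.allclose(expm(t * (A + B)), expm(t * A) @ expm(t * B)))       # typically False
C = A @ A                                                              # commutes with A
print(np.allclose(expm(t * (A + C)), expm(t * A) @ expm(t * C)))       # True (property 4)
```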
6. Let L denote the Laplace transform and L^{-1} the inverse Laplace transform. Then for all A ∈ ℝ^{n×n} and for all t ∈ ℝ,

   (a) L{e^{tA}} = (sI − A)^{-1}.

   (b) L^{-1}{(sI − A)^{-1}} = e^{tA}.

   Proof: We prove only (a). Part (b) follows similarly.

      L{e^{tA}} = ∫_0^{+∞} e^{-st} e^{tA} dt

                = ∫_0^{+∞} e^{t(-sI)} e^{tA} dt

                = ∫_0^{+∞} e^{t(A-sI)} dt     since A and (−sI) commute
                = ∫_0^{+∞} Σ_{i=1}^{n} e^{(λ_i − s)t} x_i y_i^H dt     assuming A is diagonalizable

                = Σ_{i=1}^{n} [ ∫_0^{+∞} e^{(λ_i − s)t} dt ] x_i y_i^H

                = Σ_{i=1}^{n} (1/(s − λ_i)) x_i y_i^H     assuming Re s > Re λ_i for i = 1, ..., n

                = (sI − A)^{-1}.

   The matrix (sI − A)^{-1} is called the resolvent of A and is defined for all s not in Λ(A). Notice in the proof that we have assumed, for convenience, that A is diagonalizable. If this is not the case, the scalar dyadic decomposition can be replaced by

      e^{t(A-sI)} = Σ_{i=1}^{m} X_i e^{t(J_i − sI)} Y_i^H

   using the JCF. All succeeding steps in the proof then follow in a straightforward way.

7. For all A ∈ ℝ^{n×n} and for all t ∈ ℝ, (d/dt)(e^{tA}) = A e^{tA} = e^{tA} A.
   Proof: Since the series (11.3) is uniformly convergent, it can be differentiated term-by-term, from which the result follows immediately. Alternatively, the formal definition

      (d/dt)(e^{tA}) = lim_{Δt→0} (e^{(t+Δt)A} − e^{tA}) / Δt

   can be employed as follows. For any consistent matrix norm,

      ‖ (e^{(t+Δt)A} − e^{tA}) / Δt − A e^{tA} ‖ = ‖ (1/Δt)(e^{(t+Δt)A} − e^{tA}) − A e^{tA} ‖

         = ‖ (1/Δt)(e^{ΔtA} − I) e^{tA} − A e^{tA} ‖

         = ‖ (1/Δt)(ΔtA + ((Δt)^2/2!) A^2 + ···) e^{tA} − A e^{tA} ‖

         = ‖ (A e^{tA} + (Δt/2!) A^2 e^{tA} + ···) − A e^{tA} ‖

         = ‖ ((Δt/2!) A^2 + ((Δt)^2/3!) A^3 + ···) e^{tA} ‖

         ≤ Δt ‖A^2‖ ‖e^{tA}‖ (1/2! + (Δt/3!)‖A‖ + ((Δt)^2/4!)‖A‖^2 + ···)

         ≤ Δt ‖A^2‖ ‖e^{tA}‖ (1 + Δt‖A‖ + ((Δt)^2/2!)‖A‖^2 + ···)

         = Δt ‖A^2‖ ‖e^{tA}‖ e^{Δt‖A‖}.
For fixed t, the right-hand side above clearly goes to 0 as Δt goes to 0. Thus, the limit exists and equals Ae^{tA}. A similar proof yields the limit e^{tA}A, or one can use the fact that A commutes with any polynomial of A of finite degree and hence with e^{tA}.
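As a quick numerical cross-check of properties 3 and 7 (a sketch added for illustration, not part of the original text), the following Python fragment compares the closed-form identities against scipy.linalg.expm; the test matrix, the values of t and τ, and the tolerances are arbitrary assumptions.

    # Hedged numerical check of properties 3 and 7 for a small random A.
    # scipy.linalg.expm computes the matrix exponential; the test matrix,
    # step sizes, and tolerances below are illustrative choices only.
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    t, tau = 0.7, 0.3

    # Property 3: e^{(t+tau)A} = e^{tA} e^{tau A}
    lhs = expm((t + tau) * A)
    rhs = expm(t * A) @ expm(tau * A)
    print(np.allclose(lhs, rhs))                        # expected: True

    # Property 7: d/dt e^{tA} = A e^{tA}, checked by a central difference
    h = 1e-6
    deriv = (expm((t + h) * A) - expm((t - h) * A)) / (2 * h)
    print(np.allclose(deriv, A @ expm(t * A), atol=1e-5))   # expected: True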
11.1.2 Homogeneous linear differential equations

Theorem 11.2. Let A ∈ ℝ^{n×n}. The solution of the linear homogeneous initial-value problem

    ẋ(t) = Ax(t);  x(t₀) = x₀ ∈ ℝⁿ                                  (11.4)

for t ≥ t₀ is given by

    x(t) = e^{(t−t₀)A} x₀.                                           (11.5)

Proof: Differentiate (11.5) and use property 7 of the matrix exponential to get ẋ(t) = Ae^{(t−t₀)A} x₀ = Ax(t). Also, x(t₀) = e^{(t₀−t₀)A} x₀ = x₀ so, by the fundamental existence and uniqueness theorem for ordinary differential equations, (11.5) is the solution of (11.4). □

11.1.3 Inhomogeneous linear differential equations

Theorem 11.3. Let A ∈ ℝ^{n×n}, B ∈ ℝ^{n×m} and let the vector-valued function u be given and, say, continuous. Then the solution of the linear inhomogeneous initial-value problem

    ẋ(t) = Ax(t) + Bu(t);  x(t₀) = x₀ ∈ ℝⁿ                           (11.6)

for t ≥ t₀ is given by the variation of parameters formula

    x(t) = e^{(t−t₀)A} x₀ + ∫_{t₀}^{t} e^{(t−s)A} Bu(s) ds.           (11.7)

Proof: Differentiate (11.7) and again use property 7 of the matrix exponential. The general formula

    (d/dt) ∫_{p(t)}^{q(t)} f(x, t) dx = ∫_{p(t)}^{q(t)} (∂f(x, t)/∂t) dx + f(q(t), t)(dq(t)/dt) − f(p(t), t)(dp(t)/dt)

is used to get ẋ(t) = Ae^{(t−t₀)A} x₀ + ∫_{t₀}^{t} Ae^{(t−s)A} Bu(s) ds + Bu(t) = Ax(t) + Bu(t). Also, x(t₀) = e^{(t₀−t₀)A} x₀ + 0 = x₀ so, by the fundamental existence and uniqueness theorem for ordinary differential equations, (11.7) is the solution of (11.6). □

Remark 11.4. The proof above simply verifies the variation of parameters formula by direct differentiation. The formula can be derived by means of an integrating factor "trick" as follows. Premultiply the equation ẋ − Ax = Bu by e^{−tA} to get

    (d/dt)(e^{−tA} x(t)) = e^{−tA} Bu(t).                            (11.8)
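The sketch below is an illustrative numerical check (not from the original text) of the variation of parameters formula (11.7) against a general-purpose ODE integrator; the particular A, B, u, x₀, and time interval are assumed values chosen only for the demonstration.

    # Hedged check of (11.7): x(t) = e^{(t-t0)A} x0 + \int_{t0}^{t} e^{(t-s)A} B u(s) ds,
    # compared against scipy's ODE solver.  All data below are illustrative.
    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import solve_ivp, quad_vec

    A = np.array([[-1.0, 2.0], [0.0, -3.0]])
    B = np.array([[1.0], [1.0]])
    x0 = np.array([1.0, -1.0])
    u = lambda s: np.array([np.sin(s)])        # scalar input, chosen arbitrarily
    t0, tf = 0.0, 2.0

    # Variation of parameters: quadrature of the integrand e^{(t-s)A} B u(s)
    integrand = lambda s: expm((tf - s) * A) @ B @ u(s)
    x_vp = expm((tf - t0) * A) @ x0 + quad_vec(integrand, t0, tf)[0]

    # Direct numerical integration of x' = Ax + Bu
    sol = solve_ivp(lambda t, x: A @ x + B @ u(t), (t0, tf), x0, rtol=1e-9, atol=1e-12)
    print(np.allclose(x_vp, sol.y[:, -1], atol=1e-6))   # expected: True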
Now integrate (11.8) over the interval [t₀, t]:

    ∫_{t₀}^{t} (d/ds)(e^{−sA} x(s)) ds = ∫_{t₀}^{t} e^{−sA} Bu(s) ds.

Thus,

    e^{−tA} x(t) − e^{−t₀A} x(t₀) = ∫_{t₀}^{t} e^{−sA} Bu(s) ds

and hence

    x(t) = e^{(t−t₀)A} x₀ + ∫_{t₀}^{t} e^{(t−s)A} Bu(s) ds.
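For readers who want a concrete feel for the two derivations, nothing beyond (11.7) is needed; the short check inserted after Remark 11.4 above already exercises the formula numerically, so no further verification is given here.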
11.1.4 Linear matrix differential equations

Matrix-valued initial-value problems also occur frequently. The first is an obvious generalization of Theorem 11.2, and the proof is essentially the same.

Theorem 11.5. Let A ∈ ℝ^{n×n}. The solution of the matrix linear homogeneous initial-value problem

    Ẋ(t) = AX(t);  X(t₀) = C ∈ ℝ^{n×n}                               (11.9)

for t ≥ t₀ is given by

    X(t) = e^{(t−t₀)A} C.                                            (11.10)

In the matrix case, we can have coefficient matrices on both the right and left. For convenience, the following theorem is stated with initial time t₀ = 0.

Theorem 11.6. Let A ∈ ℝ^{n×n}, B ∈ ℝ^{m×m}, and C ∈ ℝ^{n×m}. Then the matrix initial-value problem

    Ẋ(t) = AX(t) + X(t)B;  X(0) = C                                  (11.11)

has the solution X(t) = e^{tA} C e^{tB}.

Proof: Differentiate e^{tA} C e^{tB} with respect to t and use property 7 of the matrix exponential. The fact that X(t) satisfies the initial condition is trivial. □

Corollary 11.7. Let A, C ∈ ℝ^{n×n}. Then the matrix initial-value problem

    Ẋ(t) = AX(t) + X(t)Aᵀ;  X(0) = C                                 (11.12)

has the solution X(t) = e^{tA} C e^{tAᵀ}.

When C is symmetric in (11.12), X(t) is symmetric and (11.12) is known as a Lyapunov differential equation. The initial-value problem (11.11) is known as a Sylvester differential equation.
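A minimal numerical sanity check of Theorem 11.6 is sketched below (an addition for illustration only); the matrices are random assumptions, and the derivative is approximated by a central difference.

    # Hedged check of Theorem 11.6: X(t) = e^{tA} C e^{tB} solves
    # X'(t) = A X(t) + X(t) B with X(0) = C.  The matrices below are arbitrary.
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((2, 2))
    C = rng.standard_normal((3, 2))

    X = lambda t: expm(t * A) @ C @ expm(t * B)

    t, h = 0.8, 1e-6
    Xdot = (X(t + h) - X(t - h)) / (2 * h)          # central-difference derivative
    print(np.allclose(Xdot, A @ X(t) + X(t) @ B, atol=1e-5))   # expected: True
    print(np.allclose(X(0.0), C))                               # initial condition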
11.1.5 Modal decompositions

Let A ∈ ℝ^{n×n} and suppose, for convenience, that it is diagonalizable (if A is not diagonalizable, the rest of this subsection is easily generalized by using the JCF and the decomposition A = Σ_{i=1}^{m} X_i J_i Y_i^H as discussed in Chapter 9). Then the solution x(t) of (11.4) can be written

    x(t) = e^{(t−t₀)A} x₀
         = (Σ_{i=1}^{n} e^{λ_i(t−t₀)} x_i y_i^H) x₀
         = Σ_{i=1}^{n} (y_i^H x₀ e^{λ_i(t−t₀)}) x_i.

The λ_i s are called the modal velocities and the right eigenvectors x_i are called the modal directions. The decomposition above expresses the solution x(t) as a weighted sum of its modal velocities and directions.

This modal decomposition can be expressed in a different looking but identical form if we write the initial condition x₀ as a weighted sum of the right eigenvectors, x₀ = Σ_{i=1}^{n} α_i x_i. Then

    x(t) = Σ_{i=1}^{n} (α_i e^{λ_i(t−t₀)}) x_i.

In the last equality we have used the fact that y_i^H x_j = δ_{ij}.

Similarly, in the inhomogeneous case we can write

    ∫_{t₀}^{t} e^{(t−s)A} Bu(s) ds = Σ_{i=1}^{n} (∫_{t₀}^{t} e^{λ_i(t−s)} y_i^H Bu(s) ds) x_i.
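The following sketch (illustrative, not from the text) computes the modal decomposition for a small diagonalizable A; it assumes the right eigenvectors are taken as the columns returned by numpy's eig, with the rows of the inverse eigenvector matrix playing the role of the normalized left eigenvectors y_iᴴ.

    # Hedged sketch of the modal decomposition for a diagonalizable A:
    # x(t) = sum_i (y_i^H x0) e^{lambda_i (t - t0)} x_i.  Data are illustrative.
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])     # eigenvalues -1 and -2
    x0 = np.array([1.0, 0.0])
    t, t0 = 1.5, 0.0

    lam, X = np.linalg.eig(A)                    # A X = X diag(lam)
    Y_H = np.linalg.inv(X)                       # rows are y_i^H with y_i^H x_j = delta_ij

    x_modal = sum(Y_H[i] @ x0 * np.exp(lam[i] * (t - t0)) * X[:, i]
                  for i in range(len(lam)))
    print(np.allclose(x_modal.real, expm((t - t0) * A) @ x0))   # expected: True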
11.1.6 Computation of the matrix exponential

JCF method

Let A ∈ ℝ^{n×n} and suppose X ∈ ℝ^{n×n} is nonsingular and such that X^{−1}AX = J, where J is a JCF for A. Then

    e^{tA} = e^{tXJX^{−1}} = X e^{tJ} X^{−1}
           = Σ_{i=1}^{n} e^{λ_i t} x_i y_i^H    if A is diagonalizable,
           = Σ_{i=1}^{m} X_i e^{tJ_i} Y_i^H     in general.
If A is diagonalizable, it is then easy to compute e^{tA} via the formula e^{tA} = X e^{tJ} X^{−1} since e^{tJ} is simply a diagonal matrix.

In the more general case, the problem clearly reduces simply to the computation of the exponential of a Jordan block. To be specific, let J_i ∈ ℂ^{k×k} be a Jordan block of the form

    J_i = [λ 1 0 ⋯ 0;  0 λ 1 ⋯ 0;  ⋮ ⋱ ⋱;  0 ⋯ 0 λ 1;  0 ⋯ 0 0 λ] = λI + N.

Clearly λI and N commute. Thus, e^{tJ_i} = e^{tλI} e^{tN} by property 4 of the matrix exponential. The diagonal part is easy: e^{tλI} = diag(e^{λt}, …, e^{λt}). But e^{tN} is almost as easy since N is nilpotent of degree k.

Definition 11.8. A matrix M ∈ ℝ^{n×n} is nilpotent of degree (or index, or grade) p if M^p = 0, while M^{p−1} ≠ 0.

For the matrix N defined above, it is easy to check that while N has 1's along only its first superdiagonal (and 0's elsewhere), N² has 1's along only its second superdiagonal, and so forth. Finally, N^{k−1} has a 1 in its (1, k) element and has 0's everywhere else, and N^k = 0. Thus, the series expansion of e^{tN} is finite, i.e.,

    e^{tN} = I + tN + (t²/2!)N² + ⋯ + (t^{k−1}/(k−1)!)N^{k−1}.

Thus,

    e^{tJ_i} = [e^{λt}  te^{λt}  (t²/2!)e^{λt}  ⋯  (t^{k−1}/(k−1)!)e^{λt};
                0  e^{λt}  te^{λt}  ⋯  (t^{k−2}/(k−2)!)e^{λt};
                ⋮        ⋱        ⋱;
                0  ⋯  0  e^{λt}  te^{λt};
                0  ⋯  0  0  e^{λt}].

In the case when λ is complex, a real version of the above can be worked out.
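The finite series for e^{tN} above translates directly into a short computation; the sketch below (an illustration, with arbitrary choices of λ, k, and t) builds e^{tJ} for a single Jordan block this way and compares it with scipy.linalg.expm.

    # Hedged sketch: exponential of a k x k Jordan block J = lambda*I + N,
    # using the finite series for e^{tN} (N is nilpotent of degree k).
    # lambda, k, and t are arbitrary illustrative choices.
    import numpy as np
    from math import factorial
    from scipy.linalg import expm

    lam, k, t = -2.0, 4, 0.5
    N = np.diag(np.ones(k - 1), 1)                       # nilpotent superdiagonal part
    J = lam * np.eye(k) + N                              # Jordan block

    # e^{tJ} = e^{t*lam} * (I + tN + ... + t^{k-1}/(k-1)! N^{k-1})
    etN = sum((t ** j / factorial(j)) * np.linalg.matrix_power(N, j) for j in range(k))
    etJ = np.exp(t * lam) * etN
    print(np.allclose(etJ, expm(t * J)))                 # expected: True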
Example 11.9. Let A = [−4 4; −1 0]. Then Λ(A) = {−2, −2} and

    e^{tA} = X e^{tJ} X^{−1}
           = [2 1; 1 1] exp(t [−2 1; 0 −2]) [1 −1; −1 2]
           = [2 1; 1 1] [e^{−2t}  te^{−2t}; 0  e^{−2t}] [1 −1; −1 2].

Interpolation method

This method is numerically unstable in finite-precision arithmetic but is quite effective for hand calculation in small-order problems. The method is stated and illustrated for the exponential function but applies equally well to other functions.

Given A ∈ ℝ^{n×n} and f(λ) = e^{tλ}, compute f(A) = e^{tA}, where t is a fixed scalar. Suppose the characteristic polynomial of A can be written as π(λ) = Π_{i=1}^{m} (λ − λ_i)^{n_i}, where the λ_i s are distinct. Define

    g(λ) = α₀ + α₁λ + ⋯ + α_{n−1}λ^{n−1},

where α₀, …, α_{n−1} are n constants that are to be determined. They are, in fact, the unique solution of the n equations:

    g^{(k)}(λ_i) = f^{(k)}(λ_i);   k = 0, 1, …, n_i − 1,  i = 1, …, m.

Here, the superscript (k) denotes the kth derivative with respect to λ. With the α_i s then known, the function g is known and f(A) = g(A). The motivation for this method is the Cayley–Hamilton Theorem, Theorem 9.3, which says that all powers of A greater than n − 1 can be expressed as linear combinations of A^k for k = 0, 1, …, n − 1. Thus, all the terms of order greater than n − 1 in the power series for e^{tA} can be written in terms of these lower-order powers as well. The polynomial g gives the appropriate linear combination.

Example 11.10. Let A ∈ ℝ^{3×3} with characteristic polynomial π(λ) = −(λ + 1)³ and let f(λ) = e^{tλ}; thus m = 1 and n₁ = 3.

Let g(λ) = α₀ + α₁λ + α₂λ². Then the three equations for the α_i s are given by

    g(−1) = f(−1)     ⟹  α₀ − α₁ + α₂ = e^{−t},
    g'(−1) = f'(−1)    ⟹  α₁ − 2α₂ = te^{−t},
    g''(−1) = f''(−1)  ⟹  2α₂ = t²e^{−t}.
Solving for the α_i s, we find

    α₂ = (t²/2) e^{−t},
    α₁ = te^{−t} + t²e^{−t},
    α₀ = e^{−t} + te^{−t} + (t²/2) e^{−t}.

Thus,

    f(A) = e^{tA} = g(A) = α₀I + α₁A + α₂A².

Example 11.11. Let A = [−4 4; −1 0] and f(λ) = e^{tλ}. Then π(λ) = (λ + 2)², so m = 1 and n₁ = 2.

Let g(λ) = α₀ + α₁λ. Then the defining equations for the α_i s are given by

    g(−2) = f(−2)    ⟹  α₀ − 2α₁ = e^{−2t},
    g'(−2) = f'(−2)   ⟹  α₁ = te^{−2t}.

Solving for the α_i s, we find

    α₀ = e^{−2t} + 2te^{−2t},
    α₁ = te^{−2t}.

Thus,

    f(A) = e^{tA} = g(A) = α₀I + α₁A
         = (e^{−2t} + 2te^{−2t}) [1 0; 0 1] + te^{−2t} [−4 4; −1 0]
         = [e^{−2t} − 2te^{−2t}   4te^{−2t};  −te^{−2t}   e^{−2t} + 2te^{−2t}].
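As a hedged numerical check of the closed form just obtained (an addition, not in the original), the sketch below compares it with scipy.linalg.expm for the matrix A of Example 11.11 at one arbitrary value of t.

    # Hedged check of Example 11.11's closed form for e^{tA},
    # with A = [[-4, 4], [-1, 0]]; the value of t is an arbitrary test point.
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[-4.0, 4.0], [-1.0, 0.0]])
    t = 0.37
    e = np.exp(-2 * t)
    closed_form = np.array([[e - 2 * t * e, 4 * t * e],
                            [-t * e,        e + 2 * t * e]])
    print(np.allclose(closed_form, expm(t * A)))    # expected: True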
Other methods

1. Use e^{tA} = 𝓛^{−1}{(sI − A)^{−1}} and techniques for inverse Laplace transforms. This is quite effective for small-order problems, but general nonsymbolic computational techniques are numerically unstable since the problem is theoretically equivalent to knowing precisely a JCF.

2. Use Padé approximation. There is an extensive literature on approximating certain nonlinear functions by rational functions. The matrix analogue yields e^A ≈
D^{−1}(A) N(A), where D(A) = δ₀I + δ₁A + ⋯ + δ_p A^p and N(A) = ν₀I + ν₁A + ⋯ + ν_q A^q. Explicit formulas are known for the coefficients of the numerator and denominator polynomials of various orders. Unfortunately, a Padé approximation for the exponential is accurate only in a neighborhood of the origin; in the matrix case this means when ‖A‖ is sufficiently small. This can be arranged by scaling A, say, by multiplying it by 1/2^k for sufficiently large k and using the fact that e^A = (e^{(1/2^k)A})^{2^k}. Numerical loss of accuracy can occur in this procedure from the successive squarings (see the sketch following this list).

3. Reduce A to (real) Schur form S via the unitary similarity U and use e^A = U e^S U^H and successive recursions up the superdiagonals of the (quasi) upper triangular matrix e^S.

4. Many methods are outlined in, for example, [19]. Reliable and efficient computation of matrix functions such as e^A and log(A) remains a fertile area for research.
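In practice one usually calls a library routine rather than coding these methods directly. The sketch below (illustrative only) contrasts a naive truncated Taylor series with scipy.linalg.expm, which implements a Padé-based scaling-and-squaring algorithm of the kind described in item 2; the test matrix and truncation order are arbitrary assumptions.

    # Hedged illustration: a naive truncated Taylor series for e^A versus
    # scipy.linalg.expm (Pade-based scaling and squaring).  The test matrix
    # and truncation order are arbitrary.
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(2)
    A = rng.standard_normal((5, 5))

    def expm_taylor(A, terms=30):
        """Truncated Taylor series I + A + A^2/2! + ... (for illustration only)."""
        out = np.zeros_like(A)
        term = np.eye(A.shape[0])
        for j in range(terms):
            out += term
            term = term @ A / (j + 1)
        return out

    print(np.linalg.norm(expm_taylor(A) - expm(A)) / np.linalg.norm(expm(A)))
    # expected: a small relative error; the Taylor series converges for any A
    # but can lose accuracy badly for large ||A||, which is why scaling and
    # squaring is preferred in practice.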
11.2 Difference Equations

In this section we outline solutions of discrete-time analogues of the linear differential equations of the previous section. Linear discrete-time systems, modeled by systems of difference equations, exhibit many parallels to the continuous-time differential equation case, and this observation is exploited frequently.

11.2.1 Homogeneous linear difference equations

Theorem 11.12. Let A ∈ ℝ^{n×n}. The solution of the linear homogeneous system of difference equations

    x_{k+1} = A x_k;  x₀ ∈ ℝⁿ                                        (11.13)

for k ≥ 0 is given by

    x_k = A^k x₀.                                                    (11.14)

Proof: The proof is almost immediate upon substitution of (11.14) into (11.13). □

Remark 11.13. Again, we restrict our attention only to the so-called time-invariant case, where the matrix A in (11.13) is constant and does not depend on k. We could also consider an arbitrary "initial time" k₀, but since the system is time-invariant, and since we want to keep the formulas "clean" (i.e., no double subscripts), we have chosen k₀ = 0 for convenience.

11.2.2 Inhomogeneous linear difference equations

Theorem 11.14. Let A ∈ ℝ^{n×n}, B ∈ ℝ^{n×m} and suppose {u_k}_{k=0}^{+∞} is a given sequence of m-vectors. Then the solution of the inhomogeneous initial-value problem

    x_{k+1} = A x_k + B u_k;  x₀ ∈ ℝⁿ                                (11.15)
is given by

    x_k = A^k x₀ + Σ_{j=0}^{k−1} A^{k−j−1} B u_j,   k ≥ 0.           (11.16)

Proof: The proof is again almost immediate upon substitution of (11.16) into (11.15). □

11.2.3 Computation of matrix powers

It is clear that solution of linear systems of difference equations involves computation of A^k. One solution method, which is numerically unstable but sometimes useful for hand calculation, is to use z-transforms, by analogy with the use of Laplace transforms to compute a matrix exponential. One definition of the z-transform of a sequence {g_k} is

    Z({g_k}) = Σ_{k=0}^{+∞} g_k z^{−k}.

Assuming |z| > max_{λ∈Λ(A)} |λ|, the z-transform of the sequence {A^k} is then given by

    Z({A^k}) = Σ_{k=0}^{+∞} z^{−k} A^k = I + (1/z)A + (1/z²)A² + ⋯
             = (I − z^{−1}A)^{−1}
             = z(zI − A)^{−1}.

Methods based on the JCF are sometimes useful, again mostly for small-order problems. Assume that A ∈ ℝ^{n×n} and let X ∈ ℝ^{n×n} be nonsingular and such that X^{−1}AX = J, where J is a JCF for A. Then

    A^k = (X J X^{−1})^k = X J^k X^{−1}
        = Σ_{i=1}^{n} λ_i^k x_i y_i^H     if A is diagonalizable,
        = Σ_{i=1}^{m} X_i J_i^k Y_i^H     in general.

If A is diagonalizable, it is then easy to compute A^k via the formula A^k = X J^k X^{−1} since J^k is simply a diagonal matrix.
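The solution formula (11.16) is easy to exercise numerically; the sketch below (an illustration with assumed data) computes x_k both by direct recursion and by the explicit formula.

    # Hedged sketch: the solution (11.16) of x_{k+1} = A x_k + B u_k computed
    # by direct recursion and by x_k = A^k x_0 + sum_j A^{k-j-1} B u_j.
    # All data below are illustrative assumptions.
    import numpy as np

    A = np.array([[0.5, 1.0], [0.0, 0.3]])
    B = np.array([[0.0], [1.0]])
    x0 = np.array([1.0, 2.0])
    u = [np.array([np.cos(j)]) for j in range(10)]   # arbitrary input sequence
    k = 10

    # direct recursion
    x = x0.copy()
    for j in range(k):
        x = A @ x + B @ u[j]

    # explicit formula
    x_formula = np.linalg.matrix_power(A, k) @ x0 + sum(
        np.linalg.matrix_power(A, k - j - 1) @ B @ u[j] for j in range(k))

    print(np.allclose(x, x_formula))     # expected: True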
In the general case, the problem again reduces to the computation of the power of a Jordan block. To be specific, let J_i ∈ ℂ^{p×p} be a Jordan block of the form

    J_i = [λ 1 0 ⋯ 0;  0 λ 1 ⋯ 0;  ⋮ ⋱ ⋱;  0 ⋯ 0 λ 1;  0 ⋯ 0 0 λ].

Writing J_i = λI + N and noting that λI and the nilpotent matrix N commute, it is then straightforward to apply the binomial theorem to (λI + N)^k and verify that

    J_i^k = [λ^k  kλ^{k−1}  (k choose 2)λ^{k−2}  ⋯  (k choose p−1)λ^{k−p+1};
             0  λ^k  kλ^{k−1}  ⋯  (k choose p−2)λ^{k−p+2};
             ⋮        ⋱        ⋱;
             0  ⋯  0  λ^k  kλ^{k−1};
             0  ⋯  0  0  λ^k].

The symbol (k choose q) has the usual definition k!/(q!(k−q)!) and is to be interpreted as 0 if k < q.

In the case when λ is complex, a real version of the above can be worked out.

Example 11.15. Let A = [−4 4; −1 0]. Then

    A^k = X J^k X^{−1} = [2 1; 1 1] [(−2)^k  k(−2)^{k−1}; 0  (−2)^k] [1 −1; −1 2]
        = [(−2)^{k−1}(−2 − 2k)   k(−2)^{k+1};  −k(−2)^{k−1}   (−2)^{k−1}(2k − 2)].

Basic analogues of other methods such as those mentioned in Section 11.1.6 can also be derived for the computation of matrix powers, but again no universally "best" method exists. For an erudite discussion of the state of the art, see [11, Ch. 18].

11.3 Higher-Order Equations

It is well known that a higher-order (scalar) linear differential equation can be converted to a first-order linear system. Consider, for example, the initial-value problem

    y^{(n)}(t) + a_{n−1} y^{(n−1)}(t) + ⋯ + a₁ ẏ(t) + a₀ y(t) = φ(t)          (11.17)

with φ(t) a given function and n initial conditions

    y(0) = c₀,  ẏ(0) = c₁,  …,  y^{(n−1)}(0) = c_{n−1}.                        (11.18)
Here, y^{(m)} denotes the mth derivative of y with respect to t. Define a vector x(t) ∈ ℝⁿ with components x₁(t) = y(t), x₂(t) = ẏ(t), …, x_n(t) = y^{(n−1)}(t). Then

    ẋ₁(t) = x₂(t) = ẏ(t),
    ẋ₂(t) = x₃(t) = ÿ(t),
    ⋮
    ẋ_{n−1}(t) = x_n(t) = y^{(n−1)}(t),
    ẋ_n(t) = y^{(n)}(t) = −a₀ y(t) − a₁ ẏ(t) − ⋯ − a_{n−1} y^{(n−1)}(t) + φ(t)
            = −a₀ x₁(t) − a₁ x₂(t) − ⋯ − a_{n−1} x_n(t) + φ(t).

These equations can then be rewritten as the first-order linear system

    ẋ(t) = [0 1 0 ⋯ 0;  0 0 1 ⋯ 0;  ⋮ ⋱ ⋱;  0 ⋯ 0 0 1;  −a₀ −a₁ ⋯ −a_{n−1}] x(t) + [0; 0; ⋮; 0; 1] φ(t).      (11.19)

The initial conditions take the form x(0) = c = [c₀, c₁, …, c_{n−1}]ᵀ.

Note that det(λI − A) = λⁿ + a_{n−1}λ^{n−1} + ⋯ + a₁λ + a₀. However, the companion matrix A in (11.19) possesses many nasty numerical properties for even moderately sized n and, as mentioned before, is often well worth avoiding, at least for computational purposes.

A similar procedure converts a higher-order difference equation, with n initial conditions, into a linear first-order difference equation with (vector) initial condition.
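The conversion above is mechanical to implement; the following sketch (illustrative, with assumed coefficients, forcing function, and initial conditions) builds the companion matrix of (11.19) for n = 3 and integrates the resulting first-order system.

    # Hedged sketch of the conversion (11.19): build the companion matrix for
    # y''' + a2 y'' + a1 y' + a0 y = phi(t) and integrate the first-order system.
    # Coefficients, forcing, and initial conditions are arbitrary assumptions.
    import numpy as np
    from scipy.integrate import solve_ivp

    a = np.array([2.0, 3.0, 1.0])        # a0, a1, a2 (assumed values)
    phi = lambda t: np.sin(t)            # forcing function (assumed)
    c = np.array([1.0, 0.0, 0.0])        # y(0), y'(0), y''(0)

    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)           # superdiagonal of ones
    A[-1, :] = -a                        # last row: -a0, -a1, ..., -a_{n-1}
    b = np.zeros(n); b[-1] = 1.0

    sol = solve_ivp(lambda t, x: A @ x + b * phi(t), (0.0, 5.0), c, rtol=1e-8)
    y_at_5 = sol.y[0, -1]                # x1(t) = y(t)
    print(np.poly(A))                    # char. poly coefficients: approx. [1, a2, a1, a0]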
EXERCISES

1. Let P ∈ ℝ^{n×n} be a projection. Show that e^P ≈ I + 1.718P.

2. Suppose x, y ∈ ℝⁿ and let A = xyᵀ. Further, let α = xᵀy. Show that e^{tA} = I + g(t, α) xyᵀ, where

    g(t, α) = (e^{αt} − 1)/α   if α ≠ 0,
            = t                if α = 0.

3. Let
    A = [I  X;  0  −I],

where X ∈ ℝ^{m×n} is arbitrary. Show that

    e^A = [eI  (sinh 1)X;  0  e^{−1}I].

4. Let K denote the skew-symmetric matrix

    [0  I_n;  −I_n  0],

where I_n denotes the n × n identity matrix. A matrix A ∈ ℝ^{2n×2n} is said to be Hamiltonian if K^{−1}AᵀK = −A and to be symplectic if K^{−1}AᵀK = A^{−1}.

(a) Suppose H is Hamiltonian and let λ be an eigenvalue of H. Show that −λ must also be an eigenvalue of H.

(b) Suppose S is symplectic and let λ be an eigenvalue of S. Show that 1/λ must also be an eigenvalue of S.

(c) Suppose that H is Hamiltonian and S is symplectic. Show that S^{−1}HS must be Hamiltonian.

(d) Suppose H is Hamiltonian. Show that e^H must be symplectic.

5. Let α, β ∈ ℝ and

    A = [α  β;  −β  α].

Then show that

    e^{tA} = [e^{αt} cos βt   e^{αt} sin βt;  −e^{αt} sin βt   e^{αt} cos βt].

6. Find a general expression for

7. Find e^{tA} when A =

8. Let

(a) Solve the differential equation

    ẋ = Ax;  x(0) =
(b) Solve the differential equation

    ẋ = Ax + b;  x(0) =

9. Consider the initial-value problem

    ẋ(t) = Ax(t);  x(0) = x₀

for t ≥ 0. Suppose that A ∈ ℝ^{n×n} is skew-symmetric and let α = ‖x₀‖₂. Show that ‖x(t)‖₂ = α for all t > 0.

10. Consider the n × n matrix initial-value problem

    Ẋ(t) = AX(t) − X(t)A;  X(0) = C.

Show that the eigenvalues of the solution X(t) of this problem are the same as those of C for all t.

11. The year is 2004 and there are three large "free trade zones" in the world: Asia (A), Europe (E), and the Americas (R). Suppose certain multinational companies have total assets of $40 trillion, of which $20 trillion is in E and $20 trillion is in R. Each year half of the Americas' money stays home, a quarter goes to Europe, and a quarter goes to Asia. For Europe and Asia, half stays home and half goes to the Americas.

(a) Find the matrix M that gives

    [A; E; R]_{year k+1} = M [A; E; R]_{year k}.

(b) Find the eigenvalues and right eigenvectors of M.

(c) Find the distribution of the companies' assets at year k.

(d) Find the limiting distribution of the $40 trillion as the universe ends, i.e., as k → +∞ (i.e., around the time the Cubs win a World Series).

(Exercise adapted from Problem 5.3.11 in [24].)

12. (a) Find the solution of the initial-value problem

    ÿ(t) + 2ẏ(t) + y(t) = 0;  y(0) = 1,  ẏ(0) = 0.

(b) Consider the difference equation

    z_{k+2} + 2z_{k+1} + z_k = 0.

If z₀ = 1 and z₁ = 2, what is the value of z₁₀₀₀? What is the value of z_k in general?
Chapter 12

Generalized Eigenvalue Problems

12.1 The Generalized Eigenvalue/Eigenvector Problem

In this chapter we consider the generalized eigenvalue problem

    Ax = λBx,

where A, B ∈ ℂ^{n×n}. The standard eigenvalue problem considered in Chapter 9 obviously corresponds to the special case that B = I.

Definition 12.1. A nonzero vector x ∈ ℂⁿ is a right generalized eigenvector of the pair (A, B) with A, B ∈ ℂ^{n×n} if there exists a scalar λ ∈ ℂ, called a generalized eigenvalue, such that

    Ax = λBx.                                                        (12.1)

Similarly, a nonzero vector y ∈ ℂⁿ is a left generalized eigenvector corresponding to an eigenvalue λ if

    y^H A = λ y^H B.                                                 (12.2)

When the context is such that no confusion can arise, the adjective "generalized" is usually dropped. As with the standard eigenvalue problem, if x [y] is a right [left] eigenvector, then so is αx [αy] for any nonzero scalar α ∈ ℂ.

Definition 12.2. The matrix A − λB is called a matrix pencil (or pencil of the matrices A and B).

As with the standard eigenvalue problem, eigenvalues for the generalized eigenvalue problem occur where the matrix pencil A − λB is singular.

Definition 12.3. The polynomial π(λ) = det(A − λB) is called the characteristic polynomial of the matrix pair (A, B). The roots of π(λ) are the eigenvalues of the associated generalized eigenvalue problem.

Remark 12.4. When A, B ∈ ℝ^{n×n}, the characteristic polynomial is obviously real, and hence nonreal eigenvalues must occur in complex conjugate pairs.
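Computationally, generalized eigenvalues and right generalized eigenvectors can be obtained with scipy.linalg.eig applied to the pair (A, B); the sketch below (an illustration with random matrices, not part of the original text) verifies the defining relation (12.1) column by column.

    # Hedged sketch: right generalized eigenpairs of (A, B) via scipy.linalg.eig.
    # The matrices are arbitrary illustrative choices.
    import numpy as np
    from scipy.linalg import eig

    rng = np.random.default_rng(3)
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))

    lam, X = eig(A, B)          # generalized eigenvalues and right eigenvectors
    # each column x of X satisfies A x = lambda B x (up to roundoff)
    for j in range(4):
        print(np.allclose(A @ X[:, j], lam[j] * (B @ X[:, j])))   # expected: True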
Remark 12.5. If B = I (or in general when B is nonsingular), then π(λ) is a polynomial of degree n, and hence there are n eigenvalues associated with the pencil A − λB. However, when B ≠ I, in particular, when B is singular, there may be 0, k ≤ n, or infinitely many eigenvalues associated with the pencil A − λB. For example, suppose A, B ∈ ℂ^{2×2} are such that the characteristic polynomial is

    det(A − λB) = (1 − λ)(α − βλ),                                   (12.3)

where α and β are scalars. Then there are several cases to consider.

Case 1: α ≠ 0, β ≠ 0. There are two eigenvalues, 1 and α/β.

Case 2: α = 0, β ≠ 0. There are two eigenvalues, 1 and 0.

Case 3: α ≠ 0, β = 0. There is only one eigenvalue, 1 (of multiplicity 1).

Case 4: α = 0, β = 0. All λ ∈ ℂ are eigenvalues since det(A − λB) ≡ 0.

Definition 12.6. If det(A − λB) is not identically zero, the pencil A − λB is said to be regular; otherwise, it is said to be singular.

Note that if N(A) ∩ N(B) ≠ 0, the associated matrix pencil is singular (as in Case 4 above).

Associated with any matrix pencil A − λB is a reciprocal pencil B − μA and corresponding generalized eigenvalue problem. Clearly the reciprocal pencil has eigenvalues μ = 1/λ. It is instructive to consider the reciprocal pencil associated with the example in Remark 12.5. With A and B as in (12.3), the characteristic polynomial is

    det(B − μA) = (1 − μ)(β − αμ)

and there are again four cases to consider.

Case 1: α ≠ 0, β ≠ 0. There are two eigenvalues, 1 and β/α.

Case 2: α = 0, β ≠ 0. There is only one eigenvalue, 1 (of multiplicity 1).

Case 3: α ≠ 0, β = 0. There are two eigenvalues, 1 and 0.

Case 4: α = 0, β = 0. All μ ∈ ℂ are eigenvalues since det(B − μA) ≡ 0.

At least for the case of regular pencils, it is apparent where the "missing" eigenvalues have gone in Cases 2 and 3. That is to say, there is a second eigenvalue "at infinity" for Case 3 of A − λB, with its reciprocal eigenvalue being 0 in Case 3 of the reciprocal pencil B − μA. A similar reciprocal symmetry holds for Case 2.
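To make the infinite-eigenvalue phenomenon concrete, the sketch below is an illustration; the diagonal matrices A = diag(1, α) and B = diag(1, β) are an assumed choice consistent with (12.3), not matrices given in the text. It examines Case 3 and its reciprocal pencil numerically.

    # Hedged illustration of Case 3 (an eigenvalue "at infinity") and the
    # reciprocal pencil, using an assumed diagonal pair consistent with (12.3).
    import numpy as np
    from scipy.linalg import eig

    alpha, beta = 2.0, 0.0                  # Case 3: alpha != 0, beta = 0
    A = np.diag([1.0, alpha])
    B = np.diag([1.0, beta])

    w, _ = eig(A, B, homogeneous_eigvals=True)   # w[0]: numerators, w[1]: denominators
    for a_i, b_i in zip(w[0], w[1]):
        print("infinite eigenvalue" if abs(b_i) < 1e-12 else f"eigenvalue {a_i / b_i}")

    # The reciprocal pencil B - mu*A has the finite eigenvalues 1 and 0.
    mu, _ = eig(B, A)
    print(np.sort(mu.real))                 # expected: approximately [0., 1.]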
While there are applications in system theory and control where singular pencils
appear, only the case of regular pencils is considered in the remainder of this chapter. Note
that A and/or B may still be singular. If B is singular, the pencil A − λB always has
fewer than n eigenvalues. If B is nonsingular, the pencil A − λB always has precisely n eigenvalues, since the generalized eigenvalue problem is then easily seen to be equivalent to the standard eigenvalue problem B^{−1}Ax = λx (or AB^{−1}w = λw). However, this turns out to be a very poor numerical procedure for handling the generalized eigenvalue problem if B is even moderately ill conditioned with respect to inversion. Numerical methods that work directly on A and B are discussed in standard textbooks on numerical linear algebra; see, for example, [7, Sec. 7.7] or [25, Sec. 6.7].

12.2 Canonical Forms

Just as for the standard eigenvalue problem, canonical forms are available for the generalized eigenvalue problem. Since the latter involves a pair of matrices, we now deal with equivalencies rather than similarities, and the first theorem deals with what happens to eigenvalues and eigenvectors under equivalence.

Theorem 12.7. Let A, B, Q, Z ∈ ℂ^{n×n} with Q and Z nonsingular. Then

1. the eigenvalues of the problems A − λB and QAZ − λQBZ are the same (the two problems are said to be equivalent).

2. if x is a right eigenvector of A − λB, then Z^{−1}x is a right eigenvector of QAZ − λQBZ.

3. if y is a left eigenvector of A − λB, then Q^{−H}y is a left eigenvector of QAZ − λQBZ.

Proof:

1. det(QAZ − λQBZ) = det[Q(A − λB)Z] = det Q det Z det(A − λB). Since det Q and det Z are nonzero, the result follows.

2. The result follows by noting that (A − λB)x = 0 if and only if Q(A − λB)Z(Z^{−1}x) = 0.

3. Again, the result follows easily by noting that y^H(A − λB) = 0 if and only if (Q^{−H}y)^H Q(A − λB)Z = 0. □

The first canonical form is an analogue of Schur's Theorem and forms, in fact, the theoretical foundation for the QZ algorithm, which is the generally preferred method for solving the generalized eigenvalue problem; see, for example, [7, Sec. 7.7] or [25, Sec. 6.7].

Theorem 12.8. Let A, B ∈ ℂ^{n×n}. Then there exist unitary matrices Q, Z ∈ ℂ^{n×n} such that

    QAZ = T_α,   QBZ = T_β,

where T_α and T_β are upper triangular.

By Theorem 12.7, the eigenvalues of the pencil A − λB are then the ratios of the diagonal elements of T_α to the corresponding diagonal elements of T_β, with the understanding that a zero diagonal element of T_β corresponds to an infinite generalized eigenvalue.
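Computationally, a factorization of this type is delivered by the QZ algorithm; the sketch below (illustrative, with random matrices) calls scipy.linalg.qz and recovers the generalized eigenvalues as ratios of diagonal entries.

    # Hedged sketch: triangular factors of the pair (A, B) via scipy.linalg.qz,
    # with the generalized eigenvalues read off as ratios of diagonal entries
    # (a zero diagonal entry of BB would signal an infinite eigenvalue).
    # Matrices are arbitrary; output='complex' requests triangular (not
    # quasi-triangular) factors.
    import numpy as np
    from scipy.linalg import qz, eig

    rng = np.random.default_rng(4)
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))

    AA, BB, Q, Z = qz(A, B, output='complex')
    ratios = np.diag(AA) / np.diag(BB)
    print(np.sort_complex(ratios))
    print(np.sort_complex(eig(A, B)[0]))     # should agree with the ratios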
There is also an analogue of the Murnaghan-Wintner Theorem for real matrices.
Theorem 12.9. Let A, B ∈ ℝ^{n×n}. Then there exist orthogonal matrices Q, Z ∈ ℝ^{n×n} such that

    QAZ = S,   QBZ = T,

where T is upper triangular and S is quasi-upper-triangular.

When S has a 2 × 2 diagonal block, the 2 × 2 subpencil formed with the corresponding 2 × 2 diagonal subblock of T has a pair of complex conjugate eigenvalues. Otherwise, real eigenvalues are given as above by the ratios of diagonal elements of S to corresponding elements of T.

There is also an analogue of the Jordan canonical form called the Kronecker canonical form (KCF). A full description of the KCF, including analogues of principal vectors and so forth, is beyond the scope of this book. In this chapter, we present only statements of the basic theorems and some examples. The first theorem pertains only to "square" regular pencils, while the full KCF in all its generality applies also to "rectangular" and singular pencils.

Theorem 12.10. Let A, B ∈ ℂ^{n×n} and suppose the pencil A − λB is regular. Then there exist nonsingular matrices P, Q ∈ ℂ^{n×n} such that

    P(A − λB)Q = [J 0; 0 I] − λ [I 0; 0 N],

where J is a Jordan canonical form corresponding to the finite eigenvalues of A − λB and N is a nilpotent matrix of Jordan blocks associated with 0 and corresponding to the infinite eigenvalues of A − λB.

Example 12.11. The matrix pencil

    [2 1 0 0 0;  0 2 0 0 0;  0 0 1 0 0;  0 0 0 1 0;  0 0 0 0 1] − λ [1 0 0 0 0;  0 1 0 0 0;  0 0 0 1 0;  0 0 0 0 1;  0 0 0 0 0]

with characteristic polynomial (λ − 2)² has a finite eigenvalue 2 of multiplicity 2 and three infinite eigenvalues.
Theorem 12.12 (Kronecker Canonical Form). Let A, B ∈ ℂ^{m×n}. Then there exist nonsingular matrices P ∈ ℂ^{m×m} and Q ∈ ℂ^{n×n} such that

    P(A − λB)Q = diag(L_{l₁}, …, L_{l_s}, L_{r₁}ᵀ, …, L_{r_t}ᵀ, J − λI, I − λN),
where N is nilpotent, both N and J are in Jordan canonical form, and L_k is the (k + 1) × k bidiagonal pencil

    L_k = [−λ  0  ⋯  0;
           1  −λ  ⋯  0;
           0  1  ⋱  ⋮;
           ⋮  ⋱  ⋱  −λ;
           0  ⋯  0  1].

The l_i are called the left minimal indices while the r_i are called the right minimal indices. Left or right minimal indices can take the value 0.
Example 12.13. Consider a 13 × 12 block diagonal matrix in KCF whose diagonal blocks are a block of zeros, a bidiagonal block of the form L_k, a block of the form L_kᵀ, a block J − λI, and a block I − λN. The first block of zeros actually corresponds to L₀, L₀, L₀, L₀ᵀ, L₀ᵀ, where each L₀ has "zero columns" and one row, while each L₀ᵀ has "zero rows" and one column. The second block is L₁ while the third block is L₁ᵀ. The next two blocks correspond to J − λI, where J is a Jordan block associated with the finite eigenvalue 2, and to I − λN, where N is nilpotent.
Just as sets of eigenvectors span A-invariant subspaces in the case of the standard eigenproblem (recall Definition 9.35), there is an analogous geometric concept for the generalized eigenproblem.

Definition 12.14. Let A, B ∈ ℝ^{n×n} and suppose the pencil A − λB is regular. Then 𝒱 is a deflating subspace if

    dim(A𝒱 + B𝒱) = dim 𝒱.                                           (12.4)

Just as in the standard eigenvalue case, there is a matrix characterization of deflating subspace. Specifically, suppose S ∈ ℝ^{n×k} is a matrix whose columns span a k-dimensional subspace 𝒮 of ℝⁿ, i.e., R(S) = 𝒮. Then 𝒮 is a deflating subspace for the pencil A − λB if and only if there exists M ∈ ℝ^{k×k} such that

    AS = BSM.                                                        (12.5)
If B = I, then (12.4) becomes dim(A𝒱 + 𝒱) = dim 𝒱, which is clearly equivalent to A𝒱 ⊆ 𝒱. Similarly, (12.5) becomes AS = SM as before. If the pencil is not regular, there is a concept analogous to deflating subspace called a reducing subspace.

12.3 Application to the Computation of System Zeros

Consider the linear system

    ẋ = Ax + Bu,
    y = Cx + Du

with A ∈ ℝ^{n×n}, B ∈ ℝ^{n×m}, C ∈ ℝ^{p×n}, and D ∈ ℝ^{p×m}. This linear time-invariant state-space model is often used in multivariable control theory, where x (= x(t)) is called the state vector, u is the vector of inputs or controls, and y is the vector of outputs or observables. For details, see, for example, [26].

In general, the (finite) zeros of this system are given by the (finite) complex numbers z, where the "system pencil"

    [A − zI  B;  C  D]                                               (12.6)

drops rank. In the special case p = m, these values are the generalized eigenvalues of the (n + m) × (n + m) pencil.
Example 12.15. For a system of this form with n = 2, m = p = 1, C = [1 2], and D = 0, the transfer matrix (see [26]) is

    g(s) = C(sI − A)^{−1}B + D = (5s + 14)/(s² + 3s + 2),

which clearly has a zero at −2.8. Checking the finite eigenvalues of the pencil (12.6), we find the characteristic polynomial to be

    det [A − λI  B;  C  D] = 5λ + 14,

which has a root at −2.8.
The method of finding system zeros via a generalized eigenvalue problem also works well for general multi-input, multi-output systems. Numerically, however, one must be careful first to "deflate out" the infinite zeros (infinite eigenvalues of (12.6)). This is accomplished by computing a certain unitary equivalence on the system pencil that then yields a smaller generalized eigenvalue problem with only finite generalized eigenvalues (the finite zeros).
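The zero computation just described is easy to set up numerically; the sketch below (an illustration using an assumed single-input, single-output system, not the data of Example 12.15) forms the pencil (12.6) and extracts its finite generalized eigenvalues.

    # Hedged sketch: finite system zeros as finite generalized eigenvalues of
    # the (n+m) x (n+m) system pencil (12.6).  The state-space data below are
    # assumed for illustration; its transfer function is (s+5)/(s^2+3s+2).
    import numpy as np
    from scipy.linalg import eig

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[5.0, 1.0]])
    D = np.array([[0.0]])

    n, m = A.shape[0], B.shape[1]
    pencil_A = np.block([[A, B], [C, D]])
    pencil_B = np.block([[np.eye(n), np.zeros((n, m))],
                         [np.zeros((m, n)), np.zeros((m, m))]])

    w, _ = eig(pencil_A, pencil_B, homogeneous_eigvals=True)
    finite_zeros = [a / b for a, b in zip(w[0], w[1]) if abs(b) > 1e-12]
    print(finite_zeros)      # expected: approximately [-5], the zero of C(sI-A)^{-1}B + D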
The connection between system zeros and the corresponding system pencil is nontrivial. However, we offer some insight below into the special case of a single-input,
single-output system. Specifically, let B = b ∈ ℝⁿ, C = cᵀ ∈ ℝ^{1×n}, and D = d ∈ ℝ. Furthermore, let g(s) = cᵀ(sI − A)^{−1}b + d denote the system transfer function (matrix), and assume that g(s) can be written in the form

    g(s) = ν(s)/π(s),

where π(s) is the characteristic polynomial of A, and ν(s) and π(s) are relatively prime (i.e., there are no "pole/zero cancellations").

Suppose z ∈ ℂ is such that

    [A − zI  b;  cᵀ  d]

is singular. Then there exists a nonzero solution to

    [A − zI  b;  cᵀ  d] [x; y] = [0; 0],

or

    (A − zI)x + by = 0,                                              (12.7)
    cᵀx + dy = 0.                                                    (12.8)

Assuming z is not an eigenvalue of A (i.e., no pole/zero cancellations), then from (12.7) we get

    x = −(A − zI)^{−1}by.                                            (12.9)

Substituting this in (12.8), we have

    −cᵀ(A − zI)^{−1}by + dy = 0,

or g(z)y = 0 by the definition of g. Now y ≠ 0 (else x = 0 from (12.9)). Hence g(z) = 0, i.e., z is a zero of g.

12.4 Symmetric Generalized Eigenvalue Problems

A very important special case of the generalized eigenvalue problem

    Ax = λBx                                                         (12.10)

for A, B ∈ ℝ^{n×n} arises when A = Aᵀ and B = Bᵀ > 0. For example, the second-order system of differential equations

    Mẍ + Kx = 0,

where M is a symmetric positive definite "mass matrix" and K is a symmetric "stiffness matrix," is a frequently employed model of structures or vibrating systems and yields a generalized eigenvalue problem of the form (12.10).

Since B is positive definite it is nonsingular. Thus, the problem (12.10) is equivalent to the standard eigenvalue problem B^{−1}Ax = λx. However, B^{−1}A is not necessarily symmetric.
132 Chapter 12. Generalized Eigenvalue Problems
Example 12.16. Let $A = \begin{bmatrix} 1 & 3 \\ 3 & 2 \end{bmatrix}$, $B = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}$. Then $B^{-1}A = \begin{bmatrix} -2 & 1 \\ 5 & 1 \end{bmatrix}$, which is not symmetric.

Nevertheless, the eigenvalues of $B^{-1}A$ are always real (and are approximately 2.1926 and $-3.1926$ in Example 12.16).
Theorem 12.17. Let $A, B \in \mathbb{R}^{n \times n}$ with $A = A^T$ and $B = B^T > 0$. Then the generalized eigenvalue problem
$$Ax = \lambda Bx$$
has $n$ real eigenvalues, and the $n$ corresponding right eigenvectors can be chosen to be orthogonal with respect to the inner product $(x, y)_B = x^T B y$. Moreover, if $A > 0$, then the eigenvalues are also all positive.

Proof: Since $B > 0$, it has a Cholesky factorization $B = LL^T$, where $L$ is nonsingular (Theorem 10.23). Then the eigenvalue problem
$$Ax = \lambda Bx = \lambda LL^T x$$
can be rewritten as the equivalent problem
$$L^{-1}AL^{-T}(L^T x) = \lambda(L^T x). \qquad (12.11)$$
Letting $C = L^{-1}AL^{-T}$ and $z = L^T x$, (12.11) can then be rewritten as
$$Cz = \lambda z. \qquad (12.12)$$
Since $C = C^T$, the eigenproblem (12.12) has $n$ real eigenvalues, with corresponding eigenvectors $z_1, \ldots, z_n$ satisfying
$$z_i^T z_j = \delta_{ij}.$$
Then $x_i = L^{-T}z_i$, $i \in \underline{n}$, are eigenvectors of the original generalized eigenvalue problem and satisfy
$$(x_i, x_j)_B = x_i^T B x_j = (z_i^T L^{-1})(LL^T)(L^{-T}z_j) = z_i^T z_j = \delta_{ij}.$$
Finally, if $A = A^T > 0$, then $C = C^T > 0$, so the eigenvalues are positive. □

Example 12.18. The Cholesky factor for the matrix $B$ in Example 12.16 is
$$L = \begin{bmatrix} \sqrt{2} & 0 \\ \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \end{bmatrix}.$$
Then it is easily checked that
$$C = L^{-1}AL^{-T} = \begin{bmatrix} 0.5 & 2.5 \\ 2.5 & -1.5 \end{bmatrix},$$
whose eigenvalues are approximately 2.1926 and -3.1926 as expected.
The material of this section can, of course, be generalized easily to the case where A
and B are Hermitian, but since real-valued matrices are commonly used in most applications,
we have restricted our attention to that case only.
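As a numerical illustration of Theorem 12.17 and Example 12.18, the following is a minimal sketch using NumPy/SciPy (not part of the text), with the matrices of Example 12.16 as reconstructed above. It performs the Cholesky reduction from the proof explicitly and also calls the library routine for the symmetric-definite problem.

```python
import numpy as np
from scipy.linalg import eigh, cholesky

A = np.array([[1.0, 3.0], [3.0, 2.0]])   # A = A^T
B = np.array([[2.0, 1.0], [1.0, 1.0]])   # B = B^T > 0

# Explicit reduction from the proof of Theorem 12.17: B = L L^T, C = L^{-1} A L^{-T}.
L = cholesky(B, lower=True)
C = np.linalg.solve(L, np.linalg.solve(L, A).T).T
print(C)                          # approximately [[0.5, 2.5], [2.5, -1.5]]
print(np.linalg.eigvalsh(C))      # approximately [-3.1926, 2.1926]

# Library route: eigh(A, B) solves the symmetric-definite problem A x = lambda B x
# and returns B-orthonormal eigenvectors (x_i^T B x_j = delta_ij).
w, X = eigh(A, B)
print(w)                          # same eigenvalues
print(X.T @ B @ X)                # approximately the identity
```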
12.5 Simultaneous Diagonalization

Recall that many matrices can be diagonalized by a similarity. In particular, normal matrices can be diagonalized by a unitary similarity. It turns out that in some cases a pair of matrices (A, B) can be simultaneously diagonalized by the same matrix. There are many such results and we present only a representative (but important and useful) theorem here. Again, we restrict our attention only to the real case, with the complex case following in a straightforward way.

Theorem 12.19 (Simultaneous Reduction to Diagonal Form). Let $A, B \in \mathbb{R}^{n \times n}$ with $A = A^T$ and $B = B^T > 0$. Then there exists a nonsingular matrix $Q$ such that
$$Q^T A Q = D, \qquad Q^T B Q = I,$$
where $D$ is diagonal. In fact, the diagonal elements of $D$ are the eigenvalues of $B^{-1}A$.

Proof: Let $B = LL^T$ be the Cholesky factorization of $B$ and set $C = L^{-1}AL^{-T}$. Since $C$ is symmetric, there exists an orthogonal matrix $P$ such that $P^T C P = D$, where $D$ is diagonal. Let $Q = L^{-T}P$. Then
$$Q^T A Q = P^T L^{-1} A L^{-T} P = P^T C P = D$$
and
$$Q^T B Q = P^T L^{-1}(LL^T)L^{-T} P = P^T P = I.$$
Finally, since $QDQ^{-1} = QQ^T A QQ^{-1} = L^{-T}PP^T L^{-1}A = L^{-T}L^{-1}A = B^{-1}A$, we have $\Lambda(D) = \Lambda(B^{-1}A)$. □

Note that $Q$ is not in general orthogonal, so it does not preserve eigenvalues of $A$ and $B$ individually. However, it does preserve the eigenvalues of $A - \lambda B$. This can be seen directly. Let $\tilde{A} = Q^T A Q$ and $\tilde{B} = Q^T B Q$. Then $\tilde{B}^{-1}\tilde{A} = Q^{-1}B^{-1}Q^{-T}Q^T A Q = Q^{-1}B^{-1}AQ$.

Theorem 12.19 is very useful for reducing many statements about pairs of symmetric matrices to "the diagonal case." The following is typical.

Theorem 12.20. Let $A, B \in \mathbb{R}^{n \times n}$ be positive definite. Then $A \geq B$ if and only if $B^{-1} \geq A^{-1}$.

Proof: By Theorem 12.19, there exists a nonsingular $Q \in \mathbb{R}^{n \times n}$ such that $Q^T A Q = D$ and $Q^T B Q = I$, where $D$ is diagonal. Now $D > 0$ by Theorem 10.31. Also, since $A \geq B$, by Theorem 10.21 we have that $Q^T A Q \geq Q^T B Q$, i.e., $D \geq I$. But then $D^{-1} \leq I$ (this is trivially true since the two matrices are diagonal). Thus, $QD^{-1}Q^T \leq QQ^T$, i.e., $A^{-1} \leq B^{-1}$. □
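The construction in the proof of Theorem 12.19 is easy to carry out numerically. A minimal sketch, assuming NumPy (not part of the text) and the matrices of Example 12.16 as reconstructed above:

```python
import numpy as np

A = np.array([[1.0, 3.0], [3.0, 2.0]])
B = np.array([[2.0, 1.0], [1.0, 1.0]])

L = np.linalg.cholesky(B)                         # B = L L^T, L lower triangular
C = np.linalg.solve(L, np.linalg.solve(L, A).T).T # C = L^{-1} A L^{-T}, symmetric
D, P = np.linalg.eigh(C)                          # P^T C P = diag(D), P orthogonal
Q = np.linalg.solve(L.T, P)                       # Q = L^{-T} P

print(np.round(Q.T @ A @ Q, 10))   # diag(D)
print(np.round(Q.T @ B @ Q, 10))   # identity
# Diagonal of D matches the eigenvalues of B^{-1} A, as the theorem asserts.
print(np.sort(D), np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real))
```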
12.5.1 Simultaneous diagonalization via SVD

There are situations in which forming $C = L^{-1}AL^{-T}$ as in the proof of Theorem 12.19 is numerically problematic, e.g., when $L$ is highly ill conditioned with respect to inversion. In such cases, simultaneous reduction can also be accomplished via an SVD. To illustrate, let
us assume that both $A$ and $B$ are positive definite. Further, let $A = L_A L_A^T$ and $B = L_B L_B^T$ be Cholesky factorizations of $A$ and $B$, respectively. Compute the SVD
$$L_B^{-1}L_A = U\Sigma V^T, \qquad (12.13)$$
where $\Sigma \in \mathbb{R}^{n \times n}$ is diagonal and nonsingular. Then the matrix $Q = L_B^{-T}U$ performs the simultaneous diagonalization. To check this, note that
$$Q^T A Q = U^T L_B^{-1} L_A L_A^T L_B^{-T} U = U^T U\Sigma V^T V\Sigma U^T U = \Sigma^2,$$
while
$$Q^T B Q = U^T L_B^{-1} L_B L_B^T L_B^{-T} U = U^T U = I.$$

Remark 12.21. The SVD in (12.13) can be computed without explicitly forming the indicated matrix product or the inverse by using the so-called generalized singular value decomposition (GSVD). Note that
$$(L_B^{-1}L_A)(L_B^{-1}L_A)^T = L_B^{-1}L_A L_A^T L_B^{-T},$$
and thus the singular values of $L_B^{-1}L_A$ can be found from the eigenvalue problem
$$L_B^{-1}L_A L_A^T L_B^{-T} z = \lambda z. \qquad (12.14)$$
Letting $x = L_B^{-T}z$ we see that (12.14) can be rewritten in the form $L_A L_A^T x = \lambda L_B z = \lambda L_B L_B^T L_B^{-T}z$, which is thus equivalent to the generalized eigenvalue problem
$$L_A L_A^T x = \lambda L_B L_B^T x. \qquad (12.15)$$
The problem (12.15) is called a generalized singular value problem and algorithms exist to solve it (and hence equivalently (12.13)) via arithmetic operations performed only on $L_A$ and $L_B$ separately, i.e., without forming the products $L_A L_A^T$ or $L_B L_B^T$ explicitly; see, for example, [7, Sec. 8.7.3]. This is analogous to finding the singular values of a matrix $M$ by operations performed directly on $M$ rather than by forming the matrix $M^T M$ and solving the eigenproblem $M^T M x = \lambda x$.

Remark 12.22. Various generalizations of the results in Remark 12.21 are possible, for example, when $A = A^T \geq 0$. The case when $A$ is symmetric but indefinite is not so straightforward, at least in real arithmetic. For example, $A$ can be written as $A = PDP^T$, where $D$ is diagonal and $P$ is orthogonal, but in writing $A = P\tilde{D}\tilde{D}P^T = P\tilde{D}(P\tilde{D})^T$ with $\tilde{D}$ diagonal, $\tilde{D}$ may have pure imaginary elements.
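The SVD-based reduction just described is straightforward to prototype. A minimal sketch, assuming NumPy (not part of the text); it forms $L_B^{-1}L_A$ explicitly purely for illustration, whereas, as Remark 12.21 notes, production GSVD codes avoid forming this product.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)          # positive definite
M = rng.standard_normal((4, 4))
B = M @ M.T + 4 * np.eye(4)          # positive definite

LA = np.linalg.cholesky(A)           # A = LA LA^T
LB = np.linalg.cholesky(B)           # B = LB LB^T

U, s, Vt = np.linalg.svd(np.linalg.solve(LB, LA))   # LB^{-1} LA = U diag(s) V^T
Q = np.linalg.solve(LB.T, U)                        # Q = LB^{-T} U

print(np.round(Q.T @ A @ Q, 8))      # diag(s**2)
print(np.round(Q.T @ B @ Q, 8))      # identity
```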
12.6 Higher-Order Eigenvalue Problems

Consider the second-order system of differential equations
$$M\ddot{q} + C\dot{q} + Kq = 0, \qquad (12.16)$$
where $q(t) \in \mathbb{R}^n$ and $M, C, K \in \mathbb{R}^{n \times n}$. Assume for simplicity that $M$ is nonsingular. Suppose, by analogy with the first-order case, that we try to find a solution of (12.16) of the form $q(t) = e^{\lambda t}p$, where the $n$-vector $p$ and scalar $\lambda$ are to be determined. Substituting in (12.16) we get
$$\lambda^2 e^{\lambda t}Mp + \lambda e^{\lambda t}Cp + e^{\lambda t}Kp = 0$$
or, since $e^{\lambda t} \neq 0$,
$$(\lambda^2 M + \lambda C + K)p = 0.$$
To get a nonzero solution $p$, we thus seek values of $\lambda$ for which the matrix $\lambda^2 M + \lambda C + K$ is singular. Since the determinantal equation
$$0 = \det(\lambda^2 M + \lambda C + K) = \lambda^{2n} + \cdots$$
yields a polynomial of degree $2n$, there are $2n$ eigenvalues for the second-order (or quadratic) eigenvalue problem $\lambda^2 M + \lambda C + K$.

A special case of (12.16) arises frequently in applications: $M = I$, $C = 0$, and $K = K^T$. Suppose $K$ has eigenvalues
$$\mu_1 \geq \cdots \geq \mu_r \geq 0 > \mu_{r+1} \geq \cdots \geq \mu_n.$$
Let $\omega_k = |\mu_k|^{1/2}$. Then the $2n$ eigenvalues of the second-order eigenvalue problem $\lambda^2 I + K$ are
$$\pm j\omega_k, \quad k = 1, \ldots, r; \qquad \pm \omega_k, \quad k = r+1, \ldots, n.$$
If $r = n$ (i.e., $K = K^T \geq 0$), then all solutions of $\ddot{q} + Kq = 0$ are oscillatory.

12.6.1 Conversion to first-order form

Let $x_1 = q$ and $x_2 = \dot{q}$. Then (12.16) can be written as a first-order system (with block companion matrix)
$$\dot{x} = \begin{bmatrix} 0 & I \\ -M^{-1}K & -M^{-1}C \end{bmatrix} x,$$
where $x(t) \in \mathbb{R}^{2n}$. If $M$ is singular, or if it is desired to avoid the calculation of $M^{-1}$ because $M$ is too ill conditioned with respect to inversion, the second-order problem (12.16) can still be converted to the first-order generalized linear system
$$\begin{bmatrix} I & 0 \\ 0 & M \end{bmatrix}\dot{x} = \begin{bmatrix} 0 & I \\ -K & -C \end{bmatrix} x.$$
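The first-order generalized form above leads directly to a numerical method for the quadratic eigenvalue problem: solve the $2n \times 2n$ pencil with a generalized eigensolver. A minimal sketch, assuming NumPy/SciPy (not part of the text) and small illustrative matrices M, C, K chosen arbitrarily:

```python
import numpy as np
from scipy.linalg import eig

n = 3
rng = np.random.default_rng(1)
M = np.eye(n) + 0.1 * rng.standard_normal((n, n))
C = 0.5 * rng.standard_normal((n, n))
K = rng.standard_normal((n, n))

# Linearization of lambda^2 M + lambda C + K from Section 12.6.1:
#   [0 I; -K -C] v = lambda [I 0; 0 M] v,   v = [p; lambda p].
A_blk = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -C]])
B_blk = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), M]])

lam, V = eig(A_blk, B_blk)
print(np.sort_complex(lam))          # the 2n quadratic eigenvalues

# Residual check for one eigenvalue/eigenvector pair: p is the top block of v.
p, z = V[:n, 0], lam[0]
print(np.linalg.norm((z**2 * M + z * C + K) @ p))   # near zero
```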
Many other first-order realizations are possible. Some can be useful when $M$, $C$, and/or $K$ have special symmetry or skew-symmetry properties that can be exploited.

Higher-order analogues of (12.16) involving, say, the $k$th derivative of $q$, lead naturally to higher-order eigenvalue problems that can be converted to first-order form using a $kn \times kn$ block companion matrix analogue of (11.19). Similar procedures hold for the general $k$th-order difference equation, which can be converted to various first-order systems of dimension $kn$.

EXERCISES

1. Suppose $A \in \mathbb{R}^{n \times n}$ and $D \in \mathbb{R}^{m \times m}$ is nonsingular. Show that the finite generalized eigenvalues of the pencil
$$\begin{bmatrix} A & B \\ C & D \end{bmatrix} - \lambda \begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix}$$
are the eigenvalues of the matrix $A - BD^{-1}C$.

2. Let $F, G \in \mathbb{C}^{n \times n}$. Show that the nonzero eigenvalues of $FG$ and $GF$ are the same.
Hint: An easy "trick proof" is to verify that the matrices
$$\begin{bmatrix} FG & 0 \\ G & 0 \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} 0 & 0 \\ G & GF \end{bmatrix}$$
are similar via the similarity transformation
$$\begin{bmatrix} I & F \\ 0 & I \end{bmatrix}.$$

3. Let $F \in \mathbb{C}^{n \times m}$, $G \in \mathbb{C}^{m \times n}$. Are the nonzero singular values of $FG$ and $GF$ the same?

4. Suppose $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, and $C \in \mathbb{R}^{m \times n}$. Show that the generalized eigenvalues of the pencils
$$\begin{bmatrix} A & B \\ C & 0 \end{bmatrix} - \lambda \begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix}$$
and
$$\begin{bmatrix} A + BF + GC & B \\ C & 0 \end{bmatrix} - \lambda \begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix}$$
are identical for all $F \in \mathbb{R}^{m \times n}$ and all $G \in \mathbb{R}^{n \times m}$.
Hint: Consider the equivalence
$$\begin{bmatrix} I & G \\ 0 & I \end{bmatrix}\begin{bmatrix} A - \lambda I & B \\ C & 0 \end{bmatrix}\begin{bmatrix} I & 0 \\ F & I \end{bmatrix}.$$
(A similar result is also true for "nonsquare" pencils. In the parlance of control theory, such results show that zeros are invariant under state feedback or output injection.)
5. Another family of simultaneous diagonalization problems arises when it is desired that the simultaneous diagonalizing transformation $Q$ operates on matrices $A, B \in \mathbb{R}^{n \times n}$ in such a way that $Q^{-1}AQ^{-T}$ and $Q^T B Q$ are simultaneously diagonal. Such a transformation is called contragredient. Consider the case where both $A$ and $B$ are positive definite with Cholesky factorizations $A = L_A L_A^T$ and $B = L_B L_B^T$, respectively, and let $U\Sigma V^T$ be an SVD of $L_B^T L_A$.
(a) Show that $Q = L_A V \Sigma^{-1/2}$ is a contragredient transformation that reduces both $A$ and $B$ to the same diagonal matrix.
(b) Show that $Q^{-1} = \Sigma^{-1/2} U^T L_B^T$.
(c) Show that the eigenvalues of $AB$ are the same as those of $\Sigma^2$ and hence are positive.
Chapter 13

Kronecker Products

13.1 Definition and Examples

Definition 13.1. Let $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{p \times q}$. Then the Kronecker product (or tensor product) of $A$ and $B$ is defined as the matrix
$$A \otimes B = \begin{bmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & & \vdots \\ a_{m1}B & \cdots & a_{mn}B \end{bmatrix} \in \mathbb{R}^{mp \times nq}. \qquad (13.1)$$

Obviously, the same definition holds if $A$ and $B$ are complex-valued matrices. We restrict our attention in this chapter primarily to real-valued matrices, pointing out the extension to the complex case only where it is not obvious.

Example 13.2.

1. Let $A = \begin{bmatrix} 1 & 2 & 3 \\ 3 & 2 & 1 \end{bmatrix}$ and $B = \begin{bmatrix} 2 & 1 \\ 2 & 3 \end{bmatrix}$. Then
$$A \otimes B = \begin{bmatrix} B & 2B & 3B \\ 3B & 2B & B \end{bmatrix} = \begin{bmatrix} 2 & 1 & 4 & 2 & 6 & 3 \\ 2 & 3 & 4 & 6 & 6 & 9 \\ 6 & 3 & 4 & 2 & 2 & 1 \\ 6 & 9 & 4 & 6 & 2 & 3 \end{bmatrix}.$$
Note that $B \otimes A \neq A \otimes B$.

2. For any $B \in \mathbb{R}^{p \times q}$, $I_2 \otimes B = \begin{bmatrix} B & 0 \\ 0 & B \end{bmatrix}$.
Replacing $I_2$ by $I_n$ yields a block diagonal matrix with $n$ copies of $B$ along the diagonal.

3. Let $B$ be an arbitrary $2 \times 2$ matrix. Then
$$B \otimes I_2 = \begin{bmatrix} b_{11} & 0 & b_{12} & 0 \\ 0 & b_{11} & 0 & b_{12} \\ b_{21} & 0 & b_{22} & 0 \\ 0 & b_{21} & 0 & b_{22} \end{bmatrix}.$$
The extension to arbitrary $B$ and $I_n$ is obvious.

4. Let $x \in \mathbb{R}^m$, $y \in \mathbb{R}^n$. Then
$$x \otimes y = [x_1 y^T, \ldots, x_m y^T]^T = [x_1y_1, \ldots, x_1y_n, x_2y_1, \ldots, x_my_n]^T \in \mathbb{R}^{mn}.$$
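Definition 13.1 corresponds exactly to the Kronecker product routine available in standard numerical libraries. A small sketch, assuming NumPy (not part of the text) and the matrices of Example 13.2 as reconstructed above:

```python
import numpy as np

A = np.array([[1, 2, 3], [3, 2, 1]])
B = np.array([[2, 1], [2, 3]])

print(np.kron(A, B))                                  # the 4 x 6 matrix of Example 13.2, item 1
print(np.array_equal(np.kron(A, B), np.kron(B, A)))   # False: B (x) A != A (x) B

print(np.kron(np.eye(2, dtype=int), B))   # block diag(B, B), as in item 2
print(np.kron(B, np.eye(2, dtype=int)))   # the 4 x 4 matrix of item 3
```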
13.2 Properties of the Kronecker Product

Theorem 13.3. Let $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{r \times s}$, $C \in \mathbb{R}^{n \times p}$, and $D \in \mathbb{R}^{s \times t}$. Then
$$(A \otimes B)(C \otimes D) = AC \otimes BD \quad (\in \mathbb{R}^{mr \times pt}). \qquad (13.2)$$
Proof: Simply verify that
$$(A \otimes B)(C \otimes D) = \begin{bmatrix} \sum_k a_{1k}c_{k1}BD & \cdots & \sum_k a_{1k}c_{kp}BD \\ \vdots & & \vdots \\ \sum_k a_{mk}c_{k1}BD & \cdots & \sum_k a_{mk}c_{kp}BD \end{bmatrix} = AC \otimes BD. \; \square$$

Theorem 13.4. For all $A$ and $B$, $(A \otimes B)^T = A^T \otimes B^T$.
Proof: For the proof, simply verify using the definitions of transpose and Kronecker product. □

Corollary 13.5. If $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{m \times m}$ are symmetric, then $A \otimes B$ is symmetric.

Theorem 13.6. If $A$ and $B$ are nonsingular, $(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}$.
Proof: Using Theorem 13.3, simply note that $(A \otimes B)(A^{-1} \otimes B^{-1}) = I \otimes I = I$. □
Theorem 13.7. If $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{m \times m}$ are normal, then $A \otimes B$ is normal.
Proof:
$$(A \otimes B)^T(A \otimes B) = (A^T \otimes B^T)(A \otimes B) \quad \text{by Theorem 13.4}$$
$$= A^T A \otimes B^T B \quad \text{by Theorem 13.3}$$
$$= AA^T \otimes BB^T \quad \text{since $A$ and $B$ are normal}$$
$$= (A \otimes B)(A \otimes B)^T \quad \text{by Theorem 13.3.} \; \square$$

Corollary 13.8. If $A \in \mathbb{R}^{n \times n}$ is orthogonal and $B \in \mathbb{R}^{m \times m}$ is orthogonal, then $A \otimes B$ is orthogonal.

Example 13.9. Let $A = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}$ and $B = \begin{bmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{bmatrix}$. Then it is easily seen that $A$ is orthogonal with eigenvalues $e^{\pm j\theta}$ and $B$ is orthogonal with eigenvalues $e^{\pm j\phi}$. The $4 \times 4$ matrix $A \otimes B$ is then also orthogonal with eigenvalues $e^{\pm j(\theta+\phi)}$ and $e^{\pm j(\theta-\phi)}$.

Theorem 13.10. Let $A \in \mathbb{R}^{m \times n}$ have a singular value decomposition $U_A \Sigma_A V_A^T$ and let $B \in \mathbb{R}^{p \times q}$ have a singular value decomposition $U_B \Sigma_B V_B^T$. Then
$$(U_A \otimes U_B)(\Sigma_A \otimes \Sigma_B)(V_A \otimes V_B)^T$$
yields a singular value decomposition of $A \otimes B$ (after a simple reordering of the diagonal elements of $\Sigma_A \otimes \Sigma_B$ and the corresponding right and left singular vectors).

Corollary 13.11. Let $A \in \mathbb{R}^{m \times n}$ have singular values $\sigma_1 \geq \cdots \geq \sigma_r > 0$ and let $B \in \mathbb{R}^{p \times q}$ have singular values $\tau_1 \geq \cdots \geq \tau_s > 0$. Then $A \otimes B$ (or $B \otimes A$) has $rs$ singular values $\sigma_1\tau_1 \geq \cdots \geq \sigma_r\tau_s > 0$ and
$$\operatorname{rank}(A \otimes B) = (\operatorname{rank} A)(\operatorname{rank} B) = \operatorname{rank}(B \otimes A).$$

Theorem 13.12. Let $A \in \mathbb{R}^{n \times n}$ have eigenvalues $\lambda_i$, $i \in \underline{n}$, and let $B \in \mathbb{R}^{m \times m}$ have eigenvalues $\mu_j$, $j \in \underline{m}$. Then the $mn$ eigenvalues of $A \otimes B$ are
$$\lambda_1\mu_1, \ldots, \lambda_1\mu_m, \lambda_2\mu_1, \ldots, \lambda_2\mu_m, \ldots, \lambda_n\mu_m.$$
Moreover, if $x_1, \ldots, x_p$ are linearly independent right eigenvectors of $A$ corresponding to $\lambda_1, \ldots, \lambda_p$ ($p \leq n$), and $z_1, \ldots, z_q$ are linearly independent right eigenvectors of $B$ corresponding to $\mu_1, \ldots, \mu_q$ ($q \leq m$), then $x_i \otimes z_j \in \mathbb{R}^{mn}$ are linearly independent right eigenvectors of $A \otimes B$ corresponding to $\lambda_i\mu_j$, $i \in \underline{p}$, $j \in \underline{q}$.
Proof: The basic idea of the proof is as follows:
$$(A \otimes B)(x \otimes z) = Ax \otimes Bz = \lambda x \otimes \mu z = \lambda\mu(x \otimes z). \; \square$$
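Theorem 13.12 can be verified numerically as follows, assuming NumPy (not part of the text) and arbitrary random matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((3, 3))

lam = np.linalg.eigvals(A)
mu = np.linalg.eigvals(B)

eig_kron = np.sort_complex(np.linalg.eigvals(np.kron(A, B)))
eig_pred = np.sort_complex(np.outer(lam, mu).ravel())   # all products lambda_i * mu_j
print(np.allclose(eig_kron, eig_pred))                  # True (up to roundoff)
```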
If $A$ and $B$ are diagonalizable in Theorem 13.12, we can take $p = n$ and $q = m$ and thus get the complete eigenstructure of $A \otimes B$. In general, if $A$ and $B$ have Jordan form
decompositions given by $P^{-1}AP = J_A$ and $Q^{-1}BQ = J_B$, respectively, then we get the following Jordan-like structure:
$$(P \otimes Q)^{-1}(A \otimes B)(P \otimes Q) = (P^{-1} \otimes Q^{-1})(A \otimes B)(P \otimes Q) = (P^{-1}AP) \otimes (Q^{-1}BQ) = J_A \otimes J_B.$$
Note that $J_A \otimes J_B$, while upper triangular, is generally not quite in Jordan form and needs further reduction (to an ultimate Jordan form that also depends on whether or not certain eigenvalues are zero or nonzero).

A Schur form for $A \otimes B$ can be derived similarly. For example, suppose $P$ and $Q$ are unitary matrices that reduce $A$ and $B$, respectively, to Schur (triangular) form, i.e., $P^H AP = T_A$ and $Q^H BQ = T_B$ (and similarly if $P$ and $Q$ are orthogonal similarities reducing $A$ and $B$ to real Schur form). Then
$$(P \otimes Q)^H(A \otimes B)(P \otimes Q) = (P^H \otimes Q^H)(A \otimes B)(P \otimes Q) = (P^H AP) \otimes (Q^H BQ) = T_A \otimes T_B.$$

Corollary 13.13. Let $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{m \times m}$. Then
1. $\operatorname{Tr}(A \otimes B) = (\operatorname{Tr} A)(\operatorname{Tr} B) = \operatorname{Tr}(B \otimes A)$.
2. $\det(A \otimes B) = (\det A)^m(\det B)^n = \det(B \otimes A)$.

Definition 13.14. Let $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{m \times m}$. Then the Kronecker sum (or tensor sum) of $A$ and $B$, denoted $A \oplus B$, is the $mn \times mn$ matrix $(I_m \otimes A) + (B \otimes I_n)$. Note that, in general, $A \oplus B \neq B \oplus A$.
Example 13.15.

1. Let
$$A = \begin{bmatrix} 1 & 2 & 3 \\ 3 & 2 & 1 \\ 1 & 1 & 4 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 2 & 1 \\ 2 & 3 \end{bmatrix}.$$
Then
$$A \oplus B = (I_2 \otimes A) + (B \otimes I_3) = \begin{bmatrix} 1 & 2 & 3 & 0 & 0 & 0 \\ 3 & 2 & 1 & 0 & 0 & 0 \\ 1 & 1 & 4 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 2 & 3 \\ 0 & 0 & 0 & 3 & 2 & 1 \\ 0 & 0 & 0 & 1 & 1 & 4 \end{bmatrix} + \begin{bmatrix} 2 & 0 & 0 & 1 & 0 & 0 \\ 0 & 2 & 0 & 0 & 1 & 0 \\ 0 & 0 & 2 & 0 & 0 & 1 \\ 2 & 0 & 0 & 3 & 0 & 0 \\ 0 & 2 & 0 & 0 & 3 & 0 \\ 0 & 0 & 2 & 0 & 0 & 3 \end{bmatrix}.$$
The reader is invited to compute $B \oplus A = (I_3 \otimes B) + (A \otimes I_2)$ and note the difference with $A \oplus B$.
2. Recall the real JCF
$$J = \begin{bmatrix} M & I & & & \\ & M & I & & \\ & & \ddots & \ddots & \\ & & & M & I \\ & & & & M \end{bmatrix} \in \mathbb{R}^{2k \times 2k}, \quad \text{where } M = \begin{bmatrix} \alpha & \beta \\ -\beta & \alpha \end{bmatrix}.$$
Define
$$E_k = \begin{bmatrix} 0 & 1 & & \\ & 0 & \ddots & \\ & & \ddots & 1 \\ & & & 0 \end{bmatrix} \in \mathbb{R}^{k \times k}.$$
Then $J$ can be written in the very compact form $J = (I_k \otimes M) + (E_k \otimes I_2) = M \oplus E_k$.
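Definition 13.14 translates directly into code. A minimal sketch, assuming NumPy (not part of the text), with $A$ and $B$ as in Example 13.15 as reconstructed above; the same routine also lets the reader confirm that $B \oplus A \neq A \oplus B$.

```python
import numpy as np

A = np.array([[1, 2, 3], [3, 2, 1], [1, 1, 4]])   # n = 3
B = np.array([[2, 1], [2, 3]])                    # m = 2

def kron_sum(A, B):
    """Kronecker sum A (+) B = (I_m (x) A) + (B (x) I_n), as in Definition 13.14."""
    n, m = A.shape[0], B.shape[0]
    return np.kron(np.eye(m, dtype=A.dtype), A) + np.kron(B, np.eye(n, dtype=A.dtype))

print(kron_sum(A, B))                                   # the 6 x 6 matrix of Example 13.15
print(np.array_equal(kron_sum(A, B), kron_sum(B, A)))   # False: A (+) B != B (+) A in general
```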
Theorem 13.16. Let $A \in \mathbb{R}^{n \times n}$ have eigenvalues $\lambda_i$, $i \in \underline{n}$, and let $B \in \mathbb{R}^{m \times m}$ have eigenvalues $\mu_j$, $j \in \underline{m}$. Then the Kronecker sum $A \oplus B = (I_m \otimes A) + (B \otimes I_n)$ has $mn$ eigenvalues
$$\lambda_1 + \mu_1, \ldots, \lambda_1 + \mu_m, \lambda_2 + \mu_1, \ldots, \lambda_2 + \mu_m, \ldots, \lambda_n + \mu_m.$$
Moreover, if $x_1, \ldots, x_p$ are linearly independent right eigenvectors of $A$ corresponding to $\lambda_1, \ldots, \lambda_p$ ($p \leq n$), and $z_1, \ldots, z_q$ are linearly independent right eigenvectors of $B$ corresponding to $\mu_1, \ldots, \mu_q$ ($q \leq m$), then $z_j \otimes x_i \in \mathbb{R}^{mn}$ are linearly independent right eigenvectors of $A \oplus B$ corresponding to $\lambda_i + \mu_j$, $i \in \underline{p}$, $j \in \underline{q}$.
Proof: The basic idea of the proof is as follows:
$$[(I_m \otimes A) + (B \otimes I_n)](z \otimes x) = (z \otimes Ax) + (Bz \otimes x) = (z \otimes \lambda x) + (\mu z \otimes x) = (\lambda + \mu)(z \otimes x). \; \square$$
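Theorem 13.16 can be checked numerically in the same way as Theorem 13.12, assuming NumPy (not part of the text):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((3, 3))
n, m = A.shape[0], B.shape[0]

ksum = np.kron(np.eye(m), A) + np.kron(B, np.eye(n))
eig_sum = np.sort_complex(np.linalg.eigvals(ksum))
eig_pred = np.sort_complex(np.add.outer(np.linalg.eigvals(B),
                                        np.linalg.eigvals(A)).ravel())  # all lambda_i + mu_j
print(np.allclose(eig_sum, eig_pred))   # True (up to roundoff)
```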
If $A$ and $B$ are diagonalizable in Theorem 13.16, we can take $p = n$ and $q = m$ and thus get the complete eigenstructure of $A \oplus B$. In general, if $A$ and $B$ have Jordan form decompositions given by $P^{-1}AP = J_A$ and $Q^{-1}BQ = J_B$, respectively, then
$$[(Q \otimes I_n)(I_m \otimes P)]^{-1}[(I_m \otimes A) + (B \otimes I_n)][(Q \otimes I_n)(I_m \otimes P)]$$
$$= [(I_m \otimes P)^{-1}(Q \otimes I_n)^{-1}][(I_m \otimes A) + (B \otimes I_n)][(Q \otimes I_n)(I_m \otimes P)]$$
$$= [(I_m \otimes P^{-1})(Q^{-1} \otimes I_n)][(I_m \otimes A) + (B \otimes I_n)][(Q \otimes I_n)(I_m \otimes P)]$$
$$= (I_m \otimes J_A) + (J_B \otimes I_n)$$
is a Jordan-like structure for $A \oplus B$.
A Schur form for $A \oplus B$ can be derived similarly. Again, suppose $P$ and $Q$ are unitary matrices that reduce $A$ and $B$, respectively, to Schur (triangular) form, i.e., $P^H AP = T_A$ and $Q^H BQ = T_B$ (and similarly if $P$ and $Q$ are orthogonal similarities reducing $A$ and $B$ to real Schur form). Then
$$[(Q \otimes I_n)(I_m \otimes P)]^H[(I_m \otimes A) + (B \otimes I_n)][(Q \otimes I_n)(I_m \otimes P)] = (I_m \otimes T_A) + (T_B \otimes I_n),$$
where $[(Q \otimes I_n)(I_m \otimes P)] = (Q \otimes P)$ is unitary by Theorem 13.3 and Corollary 13.8.

13.3 Application to Sylvester and Lyapunov Equations

In this section we study the linear matrix equation
$$AX + XB = C, \qquad (13.3)$$
where $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{m \times m}$, and $C \in \mathbb{R}^{n \times m}$. This equation is now often called a Sylvester equation in honor of J.J. Sylvester who studied general linear matrix equations of the form
$$\sum_{i=1}^{k} A_i X B_i = C.$$
A special case of (13.3) is the symmetric equation
$$AX + XA^T = C \qquad (13.4)$$
obtained by taking $B = A^T$. When $C$ is symmetric, the solution $X \in \mathbb{R}^{n \times n}$ is easily shown also to be symmetric and (13.4) is known as a Lyapunov equation. Lyapunov equations arise naturally in stability theory.

The first important question to ask regarding (13.3) is, When does a solution exist? By writing the matrices in (13.3) in terms of their columns, it is easily seen by equating the $i$th columns that
$$Ax_i + Xb_i = c_i = Ax_i + \sum_{j=1}^{m} b_{ji}x_j.$$
These equations can then be rewritten as the $mn \times mn$ linear system
$$\begin{bmatrix} A + b_{11}I & b_{21}I & \cdots & b_{m1}I \\ b_{12}I & A + b_{22}I & \cdots & b_{m2}I \\ \vdots & & \ddots & \vdots \\ b_{1m}I & b_{2m}I & \cdots & A + b_{mm}I \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix} = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_m \end{bmatrix}. \qquad (13.5)$$
The coefficient matrix in (13.5) clearly can be written as the Kronecker sum $(I_m \otimes A) + (B^T \otimes I_n)$. The following definition is very helpful in completing the writing of (13.5) as an "ordinary" linear system.
Definition 13.17. Let $c_i \in \mathbb{R}^n$ denote the columns of $C \in \mathbb{R}^{n \times m}$ so that $C = [c_1, \ldots, c_m]$. Then $\operatorname{vec}(C)$ is defined to be the $mn$-vector formed by stacking the columns of $C$ on top of one another, i.e.,
$$\operatorname{vec}(C) = \begin{bmatrix} c_1 \\ \vdots \\ c_m \end{bmatrix} \in \mathbb{R}^{mn}.$$

Using Definition 13.17, the linear system (13.5) can be rewritten in the form
$$[(I_m \otimes A) + (B^T \otimes I_n)]\operatorname{vec}(X) = \operatorname{vec}(C). \qquad (13.6)$$
There exists a unique solution to (13.6) if and only if $[(I_m \otimes A) + (B^T \otimes I_n)]$ is nonsingular. But $[(I_m \otimes A) + (B^T \otimes I_n)]$ is nonsingular if and only if it has no zero eigenvalues. From Theorem 13.16, the eigenvalues of $[(I_m \otimes A) + (B^T \otimes I_n)]$ are $\lambda_i + \mu_j$, where $\lambda_i \in \Lambda(A)$, $i \in \underline{n}$, and $\mu_j \in \Lambda(B)$, $j \in \underline{m}$. We thus have the following theorem.

Theorem 13.18. Let $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{m \times m}$, and $C \in \mathbb{R}^{n \times m}$. Then the Sylvester equation
$$AX + XB = C \qquad (13.7)$$
has a unique solution if and only if $A$ and $-B$ have no eigenvalues in common.

Sylvester equations of the form (13.3) (or symmetric Lyapunov equations of the form (13.4)) are generally not solved using the $mn \times mn$ "vec" formulation (13.6). The most commonly preferred numerical algorithm is described in [2]. First $A$ and $B$ are reduced to (real) Schur form. An equivalent linear system is then solved in which the triangular form of the reduced $A$ and $B$ can be exploited to solve successively for the columns of a suitably transformed solution matrix $X$. Assuming that, say, $n \geq m$, this algorithm takes only $O(n^3)$ operations rather than the $O(n^6)$ that would be required by solving (13.6) directly with Gaussian elimination. A further enhancement to this algorithm is available in [6] whereby the larger of $A$ or $B$ is initially reduced only to upper Hessenberg rather than triangular Schur form.
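Both solution routes mentioned above are easy to compare on a small example: the $mn \times mn$ "vec" system (13.6) solved by Gaussian elimination, and a Schur-based library solver. A minimal sketch, assuming NumPy/SciPy (not part of the text); note that column-stacking vec corresponds to Fortran ('F') ordering in NumPy.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(5)
n, m = 5, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
C = rng.standard_normal((n, m))

# Route 1: the Kronecker/vec formulation (13.6).
K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
X1 = np.linalg.solve(K, C.flatten(order='F')).reshape((n, m), order='F')

# Route 2: a Schur-based solver for AX + XB = C.
X2 = solve_sylvester(A, B, C)

print(np.allclose(X1, X2))                     # True
print(np.linalg.norm(A @ X1 + X1 @ B - C))     # residual near zero
```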
The next few theorems are classical. They culminate in Theorem 13.24, one of many
elegant connections between matrix theory and stability theory for differential equations.
Theorem 13.19. Let $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{m \times m}$, and $C \in \mathbb{R}^{n \times m}$. Suppose further that $A$ and $B$ are asymptotically stable (a matrix is asymptotically stable if all its eigenvalues have real parts in the open left half-plane). Then the (unique) solution of the Sylvester equation
$$AX + XB = C \qquad (13.8)$$
can be written as
$$X = -\int_0^{+\infty} e^{tA}Ce^{tB}\,dt. \qquad (13.9)$$
Proof: Since $A$ and $B$ are stable, $\lambda_i(A) + \lambda_j(B) \neq 0$ for all $i, j$ so there exists a unique solution to (13.8) by Theorem 13.18. Now integrate the differential equation $\dot{X} = AX + XB$ (with $X(0) = C$) on $[0, +\infty)$:
$$\lim_{t \to +\infty} X(t) - X(0) = A\int_0^{+\infty} X(t)\,dt + \left(\int_0^{+\infty} X(t)\,dt\right)B. \qquad (13.10)$$
Using the results of Section 11.1.6, it can be shown easily that $\lim_{t \to +\infty} e^{tA} = \lim_{t \to +\infty} e^{tB} = 0$. Hence, using the solution $X(t) = e^{tA}Ce^{tB}$ from Theorem 11.6, we have that $\lim_{t \to +\infty} X(t) = 0$. Substituting in (13.10) we have
$$-C = A\left(\int_0^{+\infty} e^{tA}Ce^{tB}\,dt\right) + \left(\int_0^{+\infty} e^{tA}Ce^{tB}\,dt\right)B$$
and so $X = -\int_0^{+\infty} e^{tA}Ce^{tB}\,dt$ satisfies (13.8). □
Remark 13.20. An equivalent condition for the existence of a unique solution to $AX + XB = C$ is that $\begin{bmatrix} A & C \\ 0 & -B \end{bmatrix}$ be similar to $\begin{bmatrix} A & 0 \\ 0 & -B \end{bmatrix}$ (via the similarity $\begin{bmatrix} I & X \\ 0 & -I \end{bmatrix}$).
Theorem 13.21. Let $A, C \in \mathbb{R}^{n \times n}$. Then the Lyapunov equation
$$AX + XA^T = C \qquad (13.11)$$
has a unique solution if and only if $A$ and $-A^T$ have no eigenvalues in common. If $C$ is symmetric and (13.11) has a unique solution, then that solution is symmetric.

Remark 13.22. If the matrix $A \in \mathbb{R}^{n \times n}$ has eigenvalues $\lambda_1, \ldots, \lambda_n$, then $-A^T$ has eigenvalues $-\lambda_1, \ldots, -\lambda_n$. Thus, a sufficient condition that guarantees that $A$ and $-A^T$ have no common eigenvalues is that $A$ be asymptotically stable. Many useful results exist concerning the relationship between stability and Lyapunov equations. Two basic results due to Lyapunov are the following, the first of which follows immediately from Theorem 13.19.

Theorem 13.23. Let $A, C \in \mathbb{R}^{n \times n}$ and suppose further that $A$ is asymptotically stable. Then the (unique) solution of the Lyapunov equation
$$AX + XA^T = C$$
can be written as
$$X = -\int_0^{+\infty} e^{tA}Ce^{tA^T}\,dt. \qquad (13.12)$$

Theorem 13.24. A matrix $A \in \mathbb{R}^{n \times n}$ is asymptotically stable if and only if there exists a positive definite solution to the Lyapunov equation
$$AX + XA^T = C, \qquad (13.13)$$
where $C = C^T < 0$.
Proof: Suppose $A$ is asymptotically stable. By Theorems 13.21 and 13.23 a solution to (13.13) exists and takes the form (13.12). Now let $v$ be an arbitrary nonzero vector in $\mathbb{R}^n$. Then
$$v^T X v = -\int_0^{+\infty} \left(e^{tA^T}v\right)^T C \left(e^{tA^T}v\right)dt.$$
Since $-C > 0$ and $e^{tA}$ is nonsingular for all $t$, the integrand above is positive. Hence $v^T X v > 0$ and thus $X$ is positive definite.

Conversely, suppose $X = X^T > 0$ and let $\lambda \in \Lambda(A)$ with corresponding left eigenvector $y$. Then
$$0 > y^H C y = y^H A X y + y^H X A^T y = (\lambda + \bar{\lambda})\,y^H X y.$$
Since $y^H X y > 0$, we must have $\lambda + \bar{\lambda} = 2\,\mathrm{Re}\,\lambda < 0$. Since $\lambda$ was arbitrary, $A$ must be asymptotically stable. □
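Theorem 13.24 suggests a simple numerical stability test: pick $C = C^T < 0$ (say $C = -I$), solve the Lyapunov equation, and check that the solution is positive definite. A minimal sketch, assuming NumPy/SciPy (not part of the text) and an arbitrary stable test matrix:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # eigenvalues -1, -3: asymptotically stable

# Solve A X + X A^T = C with C = -I < 0.
C = -np.eye(2)
X = solve_continuous_lyapunov(A, C)

print(np.allclose(A @ X + X @ A.T, C))     # equation satisfied
print(np.linalg.eigvalsh((X + X.T) / 2))   # all positive => X > 0, so A is stable
```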
Remark 13.25. The Lyapunov equation $AX + XA^T = C$ can also be written using the vec notation in the equivalent form
$$[(I \otimes A) + (A \otimes I)]\operatorname{vec}(X) = \operatorname{vec}(C).$$
A subtle point arises when dealing with the "dual" Lyapunov equation $A^T X + XA = C$. The equivalent "vec form" of this equation is
$$[(I \otimes A^T) + (A^T \otimes I)]\operatorname{vec}(X) = \operatorname{vec}(C).$$
However, the complex-valued equation $A^H X + XA = C$ is equivalent to
$$[(I \otimes A^H) + (A^T \otimes I)]\operatorname{vec}(X) = \operatorname{vec}(C).$$
The vec operator has many useful properties, most of which derive from one key result.

Theorem 13.26. For any three matrices $A$, $B$, and $C$ for which the matrix product $ABC$ is defined,
$$\operatorname{vec}(ABC) = (C^T \otimes A)\operatorname{vec}(B).$$
Proof: The proof follows in a fairly straightforward fashion either directly from the definitions or from the fact that $\operatorname{vec}(xy^T) = y \otimes x$. □
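Theorem 13.26 is worth verifying once in code, since it underlies everything that follows; recall that vec here is column stacking, i.e., Fortran ('F') ordering in NumPy. A minimal sketch, assuming NumPy (not part of the text):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 5))
C = rng.standard_normal((5, 2))

vec = lambda M: M.flatten(order='F')   # stack the columns

# vec(A B C) = (C^T (x) A) vec(B)
print(np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B)))   # True
```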
An immediate application is to the derivation of existence and uniqueness conditions for the solution of the simple Sylvester-like equation introduced in Theorem 6.11.

Theorem 13.27. Let $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{p \times q}$, and $C \in \mathbb{R}^{m \times q}$. Then the equation
$$AXB = C \qquad (13.14)$$
has a solution $X \in \mathbb{R}^{n \times p}$ if and only if $AA^+CB^+B = C$, in which case the general solution is of the form
$$X = A^+CB^+ + Y - A^+AYBB^+, \qquad (13.15)$$
where $Y \in \mathbb{R}^{n \times p}$ is arbitrary. The solution of (13.14) is unique if $BB^+ \otimes A^+A = I$.
Proof: Write (13.14) as
$$(B^T \otimes A)\operatorname{vec}(X) = \operatorname{vec}(C) \qquad (13.16)$$
by Theorem 13.26. This "vector equation" has a solution if and only if
$$(B^T \otimes A)(B^T \otimes A)^+\operatorname{vec}(C) = \operatorname{vec}(C).$$
It is a straightforward exercise to show that $(M \otimes N)^+ = M^+ \otimes N^+$. Thus, (13.16) has a solution if and only if
$$\operatorname{vec}(C) = (B^T \otimes A)\left((B^+)^T \otimes A^+\right)\operatorname{vec}(C) = \left[(B^+B)^T \otimes AA^+\right]\operatorname{vec}(C) = \operatorname{vec}(AA^+CB^+B)$$
and hence if and only if $AA^+CB^+B = C$.

The general solution of (13.16) is then given by
$$\operatorname{vec}(X) = (B^T \otimes A)^+\operatorname{vec}(C) + \left[I - (B^T \otimes A)^+(B^T \otimes A)\right]\operatorname{vec}(Y),$$
where $Y$ is arbitrary. This equation can then be rewritten in the form
$$\operatorname{vec}(X) = \left((B^+)^T \otimes A^+\right)\operatorname{vec}(C) + \left[I - (BB^+)^T \otimes A^+A\right]\operatorname{vec}(Y)$$
or, using Theorem 13.26,
$$X = A^+CB^+ + Y - A^+AYBB^+.$$
The solution is clearly unique if $BB^+ \otimes A^+A = I$. □
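Theorem 13.27 can be exercised directly with pseudoinverses. A minimal sketch, assuming NumPy (not part of the text); it builds a consistent right-hand side, checks the solvability condition, and verifies that the general solution formula (13.15) solves $AXB = C$ for any choice of $Y$.

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((4, 3))      # m x n
B = rng.standard_normal((5, 6))      # p x q
X0 = rng.standard_normal((3, 5))     # n x p
C = A @ X0 @ B                       # consistent by construction

Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)

# Existence test: A A^+ C B^+ B = C.
print(np.allclose(A @ Ap @ C @ Bp @ B, C))          # True

# General solution (13.15): X = A^+ C B^+ + Y - A^+ A Y B B^+, Y arbitrary.
Y = rng.standard_normal((3, 5))
X = Ap @ C @ Bp + Y - Ap @ A @ Y @ B @ Bp
print(np.linalg.norm(A @ X @ B - C))                # near zero for any Y
```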
EXERCISES

1. For any two matrices $A$ and $B$ for which the indicated matrix product is defined, show that $(\operatorname{vec}(A))^T(\operatorname{vec}(B)) = \operatorname{Tr}(A^T B)$. In particular, if $B \in \mathbb{R}^{n \times n}$, then $\operatorname{Tr}(B) = \operatorname{vec}(I_n)^T\operatorname{vec}(B)$.

2. Prove that for all matrices $A$ and $B$, $(A \otimes B)^+ = A^+ \otimes B^+$.

3. Show that the equation $AXB = C$ has a solution for all $C$ if $A$ has full row rank and $B$ has full column rank. Also, show that a solution, if it exists, is unique if $A$ has full column rank and $B$ has full row rank. What is the solution in this case?

4. Show that the general linear equation
$$\sum_{i=1}^{k} A_i X B_i = C$$
can be written in the form
$$[B_1^T \otimes A_1 + \cdots + B_k^T \otimes A_k]\operatorname{vec}(X) = \operatorname{vec}(C).$$
5. Let $x \in \mathbb{R}^m$ and $y \in \mathbb{R}^n$. Show that $x^T \otimes y = yx^T$.

6. Let $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{m \times m}$.
(a) Show that $\|A \otimes B\|_2 = \|A\|_2\|B\|_2$.
(b) What is $\|A \otimes B\|_F$ in terms of the Frobenius norms of $A$ and $B$? Justify your answer carefully.
(c) What is the spectral radius of $A \otimes B$ in terms of the spectral radii of $A$ and $B$? Justify your answer carefully.

7. Let $A, B \in \mathbb{R}^{n \times n}$.
(a) Show that $(I \otimes A)^k = I \otimes A^k$ and $(B \otimes I)^k = B^k \otimes I$ for all integers $k$.
(b) Show that $e^{I \otimes A} = I \otimes e^A$ and $e^{B \otimes I} = e^B \otimes I$.
(c) Show that the matrices $I \otimes A$ and $B \otimes I$ commute.
(d) Show that
$$e^{A \oplus B} = e^{(I \otimes A) + (B \otimes I)} = e^B \otimes e^A.$$
(Note: This result would look a little "nicer" had we defined our Kronecker sum the other way around. However, Definition 13.14 is conventional in the literature.)
8. Consider the Lyapunov matrix equation (13.11) with
$$A = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$$
and $C$ the symmetric matrix
$$\begin{bmatrix} 2 & 0 \\ 0 & -2 \end{bmatrix}.$$
Clearly
$$X_s = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$
is a symmetric solution of the equation. Verify that
$$X_{ns} = \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}$$
is also a solution and is nonsymmetric. Explain in light of Theorem 13.21.
9. Block Triangularization: Let
$$S = \begin{bmatrix} A & B \\ C & D \end{bmatrix},$$
where $A \in \mathbb{R}^{n \times n}$ and $D \in \mathbb{R}^{m \times m}$. It is desired to find a similarity transformation of the form
$$T = \begin{bmatrix} I & 0 \\ X & I \end{bmatrix}$$
such that $T^{-1}ST$ is block upper triangular.
(a) Show that $S$ is similar to
$$\begin{bmatrix} A + BX & B \\ 0 & D - XB \end{bmatrix}$$
if $X$ satisfies the so-called matrix Riccati equation
$$C - XA + DX - XBX = 0.$$
(b) Formulate a similar result for block lower triangularization of $S$.

10. Block Diagonalization: Let
$$S = \begin{bmatrix} A & B \\ 0 & D \end{bmatrix},$$
where $A \in \mathbb{R}^{n \times n}$ and $D \in \mathbb{R}^{m \times m}$. It is desired to find a similarity transformation of the form
$$T = \begin{bmatrix} I & Y \\ 0 & I \end{bmatrix}$$
such that $T^{-1}ST$ is block diagonal.
(a) Show that $S$ is similar to
$$\begin{bmatrix} A & 0 \\ 0 & D \end{bmatrix}$$
if $Y$ satisfies the Sylvester equation
$$AY - YD = -B.$$
(b) Formulate a similar result for block diagonalization of
$$S = \begin{bmatrix} A & 0 \\ C & D \end{bmatrix}.$$
Bibliography

[1] Albert, A., Regression and the Moore-Penrose Pseudoinverse, Academic Press, New York, NY, 1972.
[2] Bartels, R.H., and G.W. Stewart, "Algorithm 432. Solution of the Matrix Equation AX + XB = C," Comm. ACM, 15(1972), 820-826.
[3] Bellman, R., Introduction to Matrix Analysis, Second Edition, McGraw-Hill, New York, NY, 1970.
[4] Björck, Å., Numerical Methods for Least Squares Problems, SIAM, Philadelphia, PA, 1996.
[5] Cline, R.E., "Note on the Generalized Inverse of the Product of Matrices," SIAM Rev., 6(1964), 57-58.
[6] Golub, G.H., S. Nash, and C. Van Loan, "A Hessenberg-Schur Method for the Problem AX + XB = C," IEEE Trans. Autom. Control, AC-24(1979), 909-913.
[7] Golub, G.H., and C.F. Van Loan, Matrix Computations, Third Edition, Johns Hopkins Univ. Press, Baltimore, MD, 1996.
[8] Golub, G.H., and J.H. Wilkinson, "Ill-Conditioned Eigensystems and the Computation of the Jordan Canonical Form," SIAM Rev., 18(1976), 578-619.
[9] Greville, T.N.E., "Note on the Generalized Inverse of a Matrix Product," SIAM Rev., 8(1966), 518-521 [Erratum, SIAM Rev., 9(1967), 249].
[10] Halmos, P.R., Finite-Dimensional Vector Spaces, Second Edition, Van Nostrand, Princeton, NJ, 1958.
[11] Higham, N.J., Accuracy and Stability of Numerical Algorithms, Second Edition, SIAM, Philadelphia, PA, 2002.
[12] Horn, R.A., and C.R. Johnson, Matrix Analysis, Cambridge Univ. Press, Cambridge, UK, 1985.
[13] Horn, R.A., and C.R. Johnson, Topics in Matrix Analysis, Cambridge Univ. Press, Cambridge, UK, 1991.
[14] Kenney, C., and A.J. Laub, "Controllability and Stability Radii for Companion Form Systems," Math. of Control, Signals, and Systems, 1(1988), 361-390.
[15] Kenney, C.S., and A.J. Laub, "The Matrix Sign Function," IEEE Trans. Autom. Control, 40(1995), 1330-1348.
[16] Lancaster, P., and M. Tismenetsky, The Theory of Matrices, Second Edition with Applications, Academic Press, Orlando, FL, 1985.
[17] Laub, A.J., "A Schur Method for Solving Algebraic Riccati Equations," IEEE Trans. Autom. Control, AC-24(1979), 913-921.
[18] Meyer, C.D., Matrix Analysis and Applied Linear Algebra, SIAM, Philadelphia, PA, 2000.
[19] Moler, C.B., and C.F. Van Loan, "Nineteen Dubious Ways to Compute the Exponential of a Matrix," SIAM Rev., 20(1978), 801-836.
[20] Noble, B., and J.W. Daniel, Applied Linear Algebra, Third Edition, Prentice-Hall, Englewood Cliffs, NJ, 1988.
[21] Ortega, J., Matrix Theory. A Second Course, Plenum, New York, NY, 1987.
[22] Penrose, R., "A Generalized Inverse for Matrices," Proc. Cambridge Philos. Soc., 51(1955), 406-413.
[23] Stewart, G.W., Introduction to Matrix Computations, Academic Press, New York, NY, 1973.
[24] Strang, G., Linear Algebra and Its Applications, Third Edition, Harcourt Brace Jovanovich, San Diego, CA, 1988.
[25] Watkins, D.S., Fundamentals of Matrix Computations, Second Edition, Wiley-Interscience, New York, 2002.
[26] Wonham, W.M., Linear Multivariable Control. A Geometric Approach, Third Edition, Springer-Verlag, New York, NY, 1985.
Index
A–invariant subspace, 89
matrix characterization of, 90
algebraic multiplicity, 76
angle between vectors, 58
basis, 11
natural, 12
block matrix, 2
definiteness of, 104
diagonalization, 150
inverse of, 48
LU factorization, 5
triangularization, 149
Cⁿ, 1
C^{m×n}, 1
C_r^{m×n}, 1
Cauchy–Bunyakovsky–Schwarz Inequality, 58
Cayley–Hamilton Theorem, 75
chain
of eigenvectors, 87
characteristic polynomial
of a matrix, 75
of a matrix pencil, 125
Cholesky factorization, 101
co–domain, 17
column
rank, 23
vector, 1
companion matrix
inverse of, 105
pseudoinverse of, 106
singular values of, 106
singular vectors of, 106
complement
of a subspace, 13
orthogonal, 21
congruence, 103
conjugate transpose, 2
contragredient transformation, 137
controllability, 46
defective, 76
degree
of a principal vector, 85
determinant, 4
of a block matrix, 5
properties of, 4–6
dimension, 12
direct sum
of subspaces, 13
domain, 17
eigenvalue, 75
invariance under similarity transformation, 81
elementary divisors, 84
equivalence transformation, 95
orthogonal, 95
unitary, 95
equivalent generalized eigenvalue problems, 127
equivalent matrix pencils, 127
exchange matrix, 39, 89
exponential of a Jordan block, 91, 115
exponential of a matrix, 81, 109
computation of, 114–118
inverse of, 110
properties of, 109–112
field, 7
four fundamental subspaces, 23
function of a matrix, 81
generalized eigenvalue, 125
generalized real Schur form, 128
generalized Schur form, 127
generalized singular value decomposition,
134
geometric multiplicity, 76
Hölder Inequality, 58
Hermitian transpose, 2
higher–order difference equations
conversion to first–order form, 121
higher–order differential equations
conversion to first–order form, 120
higher–order eigenvalue problems
conversion to first–order form, 136
i, 2
idempotent, 6, 51
identity matrix, 4
inertia, 103
initial–value problem, 109
for higher–order equations, 120
for homogeneous linear difference
equations, 118
for homogeneous linear differential
equations, 112
for inhomogeneous linear difference
equations, 119
for inhomogeneous linear differen-
tial equations, 112
inner product
complex, 55
complex Euclidean, 4
Euclidean, 4, 54
real, 54
usual, 54
weighted, 54
invariant factors, 84
inverses
of block matrices, 47
j, 2
Jordan block, 82
Jordan canonical form (JCF), 82
Kronecker canonical form (KCF), 129
Kronecker delta, 20
Kronecker product, 139
determinant of, 142
eigenvalues of, 141
eigenvectors of, 141
products of, 140
pseudoinverse of, 148
singular values of, 141
trace of, 142
transpose of, 140
Kronecker sum, 142
eigenvalues of, 143
eigenvectors of, 143
exponential of, 149
leading principal submatrix, 100
left eigenvector, 75
left generalized eigenvector, 125
left invertible, 26
left nullspace, 22
left principal vector, 85
linear dependence, 10
linear equations
characterization of all solutions, 44
existence of solutions, 44
uniqueness of solutions, 45
linear independence, 10
linear least squares problem, 65
general solution of, 66
geometric solution of, 67
residual of, 65
solution via QR factorization, 71
solution via singular value decom-
position, 70
statement of, 65
uniqueness of solution, 66
linear regression, 67
linear transformation, 17
co–domain of, 17
composition of, 19
domain of, 17
invertible, 25
left invertible, 26
matrix representation of, 18
nonsingular, 25
nullspace of, 20
range of, 20
right invertible, 26
LU factorization, 6
block, 5
Lyapunov differential equation, 113
Lyapunov equation, 144
and asymptotic stability, 146
integral form of solution, 146
symmetry of solution, 146
uniqueness of solution, 146
matrix
asymptotically stable, 145
best rank k approximation to, 67
companion, 105
defective, 76
definite, 99
derogatory, 106
diagonal, 2
exponential, 109
Hamiltonian, 122
Hermitian, 2
Householder, 97
indefinite, 99
lower Hessenberg, 2
lower triangular, 2
nearest singular matrix to, 67
nilpotent, 115
nonderogatory, 105
normal, 33, 95
orthogonal, 4
pentadiagonal, 2
quasi–upper–triangular, 98
sign of a, 91
square root of a, 101
symmetric, 2
symplectic, 122
tridiagonal, 2
unitary, 4
upper Hessenberg, 2
upper triangular, 2
matrix exponential, 81, 91, 109
matrix norm, 59
1–, 60
2–, 60
∞–, 60
p–, 60
consistent, 61
Frobenius, 60
induced by a vector norm, 61
mixed, 60
mutually consistent, 61
relations among, 61
Schatten, 60
spectral, 60
subordinate to a vector norm, 61
unitarily invariant, 62
matrix pencil, 125
equivalent, 127
reciprocal, 126
regular, 126
singular, 126
matrix sign function, 91
minimal polynomial, 76
monic polynomial, 76
Moore–Penrose pseudoinverse, 29
multiplication
matrix–matrix, 3
matrix–vector, 3
Murnaghan–Wintner Theorem, 98
negative definite, 99
negative invariant subspace, 92
nonnegative definite, 99
criteria for, 100
nonpositive definite, 99
norm
induced, 56
natural, 56
normal equations, 65
normed linear space, 57
nullity, 24
nullspace, 20
left, 22
right, 22
observability, 46
one–to–one (1–1), 23
conditions for, 25
onto, 23
conditions for, 25
orthogonal
complement, 21
matrix, 4
projection, 52
subspaces, 14
vectors, 4, 20
orthonormal
vectors, 4, 20
outer product, 19
and Kronecker product, 140
exponential of, 121
pseudoinverse of, 33
singular value decomposition of, 41
various matrix norms of, 63
pencil
equivalent, 127
of matrices, 125
reciprocal, 126
regular, 126
singular, 126
Penrose theorem, 30
polar factorization, 41
polarization identity, 57
positive definite, 99
criteria for, 100
positive invariant subspace, 92
power (kth) of a Jordan block, 120
powers of a matrix
computation of, 119–120
principal submatrix, 100
projection
oblique, 51
on four fundamental subspaces, 52
orthogonal, 52
pseudoinverse, 29
four Penrose conditions for, 30
of a full–column–rank matrix, 30
of a full–row–rank matrix, 30
of a matrix product, 32
of a scalar, 31
of a vector, 31
uniqueness, 30
via singular value decomposition, 38
Pythagorean Identity, 59
Q–orthogonality, 55
QR factorization, 72
R^n, 1
R^{m x n}, 1
R_r^{m x n}, 1
R_n^{n x n}, 1
range, 20
range inclusion
characterized by pseudoinverses, 33
rank, 23
column, 23
row, 23
rank–one matrix, 19
rational canonical form, 104
Rayleigh quotient, 100
reachability, 46
real Schur canonical form, 98
real Schur form, 98
reciprocal matrix pencil, 126
reconstructibility, 46
regular matrix pencil, 126
residual, 65
resolvent, 111
reverse–order identity matrix, 39, 89
right eigenvector, 75
right generalized eigenvector, 125
right invertible, 26
right nullspace, 22
right principal vector, 85
row
rank, 23
vector, 1
Schur canonical form, 98
generalized, 127
Schur complement, 6, 48, 102, 104
Schur Theorem, 98
Schur vectors, 98
second–order eigenvalue problem, 135
conversion to first–order form, 135
Sherman–Morrison–Woodbury formula,
48
signature, 103
similarity transformation, 95
and invariance of eigenvalues, 81
orthogonal, 95
unitary, 95
simple eigenvalue, 85
simultaneous diagonalization, 133
via singular value decomposition, 134
singular matrix pencil, 126
singular value decomposition (SVD), 35
and bases for four fundamental
subspaces, 38
and pseudoinverse, 38
and rank, 38
characterization of a matrix factor-
ization as, 37
dyadic expansion, 38
examples, 37
full vs. compact, 37
fundamental theorem, 35
nonuniqueness, 36
singular values, 36
singular vectors
left, 36
right, 36
span, 11
spectral radius, 62, 107
spectral representation, 97
spectrum, 76
subordinate norm, 61
subspace, 9
A–invariant, 89
deflating, 129
reducing, 130
subspaces
complements of, 13
direct sum of, 13
equality of, 10
four fundamental, 23
intersection of, 13
orthogonal, 14
sum of, 13
Sylvester differential equation, 113
Sylvester equation, 144
integral form of solution, 145
uniqueness of solution, 145
Sylvester's Law of Inertia, 103
symmetric generalized eigenvalue prob-
lem, 131
total least squares, 68
trace, 6
transpose, 2
characterization by inner product, 54
of a block matrix, 2
triangle inequality
for matrix norms, 59
for vector norms, 57
unitarily invariant
matrix norm, 62
vector norm, 58
variation of parameters, 112
vec
of a matrix, 145
of a matrix product, 147
vector norm, 57
1–, 57
2–, 57
∞–, 57
p–, 57
equivalent, 59
Euclidean, 57
Manhattan, 57
relations among, 59
unitarily invariant, 58
weighted, 58
weighted p–, 58
vector space, 8
dimension of, 12
vectors, 1
column, 1
linearly dependent, 10
linearly independent, 10
orthogonal, 4, 20
orthonormal, 4, 20
row, 1
span of a set of, 11
zeros
of a linear dynamical system, 130

and there exists a e R* such that Va = 0.14. A set of vectors X is a basis for V if and only ij Definition 2. . }.en} = ]Rn..2.12. A e R xn B E ]Rnxm.'" .. and there exists a E ]Rk such that VT V is singular. and consider the matrix V = [VI. An equivalent condition for linear dependence is that the k x k matrix condition VT V is singular.. The dependence of this set of vectors is equivalent to the existence of a nonzero vector E Rk dependence of this set of vectors is equivalent to the existence of a nonzero vector a e ]Rk O.. tIl 2. to be studied further in what follows.. If the set of vectors is independent. = [ v 1 .. linear dependence x such that Va = 0. o Definition 2. (Xi ElF.3.en = 0 0 0 o SpIel. Let A E ]Rnxn and 5 e R"xm. 2. Then the span of of X is defined as X is defined as Sp(X) = Sp{VI. 2. Independence of these vectors turns out to be equivalent to a concept Chapter 11).3.14.. LetV = 11 11 ~. .v2 + V3 = 0). Then Sp{e1. E V. en} = Rn.Vk] e ]Rnxk. . ~ HHi] } Ime~ly i is a i" linearly independent set. Why? However.. e k. ..13. Then consider the rows of etA B as vectors in em [to. V2. Definition 2. then = O. Vi EX. ii E If.. t1] (recall that etA denotes the matrix exponential. e2 . e2. . Example 2. . Howe.13. Linear Independence Example 2.. X is a linearly independent set (of basis vectors). to be studied further in what follows.. Example 2. {1..."I [ i1i1l ]} [[ s a linearly is a Iin=ly dependent set de~ndent ~t (since 2v\ — V2 + v3 = 0). X = [v1 v2 .. Then {[ Then I. (since 2vI . Sp(X) = V. The linear v E ]Rn. Let X = {VI.11. Then consider the rows of etA B as vectors in Cm [t0. T V is nonsingular. V V2. An equivalent condition for linear independence is that the matrix Va = 0.}. .. which is discussed in more detail in efA Chapter 11). Vk] E Rnxk. and X (of and 2. If the set of vectors is independent.11. . } = (Xl VI + ... Vi e span of Definition 2. Let V = Rn and define = ]Rn and el = 0 0 .. called consider Let Vif e R".•}} be a collection of vectors vi.. Sp(X) = V. Why? independent.. e2 = 0 1 0 . then a = 0. . kEN}. 1.. . 1£t V = R3. A set of vectors X is a basis for V if and only if 1. 2. Independence of these vectors turns out to be equivalent to a concept called controllability. .. = {v : where N = {I.. Linear Independence 2. + (XkVk . . An equivalent condition for linear independence is that the matrix V TV is nonsingular.12.

n unique. . If V= 0) V is Definition 2. Vector Spaces Example 2... . We say that the vector x of of of (b1. For example.16... el + 2 . while [ ~ ] = I . with respect to the basis with respect to the basis {[-~l[-!J} we have we have [ ~ ~ ] = 3. ... Then for all v E V there exists a unique n-tuple {~I'.l. In]Rn.18.15..... en} is a basis for IR" (sometimes called the natural basis). . bn be a basis (with a specific order associated with the basis vectors) b1. .b.. . n } such that for V.dimensional or have dimension n and we write dim (V) = n or dim V — n. . . .[ -~ - ] + 4· [ -~ l To see this. VI ] : = vlel + V2e2 + . x ~ D J Definition 2. V is said to X for be n-dimensional or have dimension n and we write dim(V) n or dim V n..19. In Rn. We represents B..E~n} such that v= where ~Ibl + . write [ ] = XI • [ ~ + ] X2 • [ _! ] =[ ~ = [ -~ Then Then -! ][ ~~ l -1 [ ~~ ] = [ -.18. . .. r I [ . {~i } of v with respect to the basis {b l . components represents the vector v with respect to the basis B. n for Then for all e there exists a unique n-tuple {E1 . for]Rn [e\. ] l = = Theorem 2. + vne n · Vn We can also determine components of v with respect to another basis. For be n. The scalars {Ei}are called the components (or sometimes the coordinates) components coordinates) Definition 2..12 12 Chapter 2.19. en} natural Now let b l . B ~ [b".16. bn]} and are unique.. {el... Example 2. Definition 2.. . If a basis X for a vector space V(Jf 0) has n elements. + ~nbn = Bx.. For .. e2. The number of elements in a basis of a vector space is independent of the particular basis considered. For example...17. The number of elements in a basis of a vector space is independent of the Theorem 2. particular basis considered.. while We can also determine components of v with respect to another basis.

and 2.18 says that dim (V) the number of elements in a basis. 1. R H S = {v : v e R and v e S}. U\ + 1. The union of two subspaces.4. F) be a vector space and let R. and because the 0 vector is in any vector space. Remark 2.+00. 2. ." The collection of E. Let (V. R D S C V (in general. j)th location. A consistency. s e S}. we define dim(O) = O. j E ~. 1. is not necessarily a subspace. dim(Rn)=n. Let (V. The union of two subspaces. Thus. t1]) . R j ) = 0 am/ Ri = T). S. dim{A E ~nxn :: A is upper (lower) triangular} = 1/2n(n+ 1). S S. 2. s E 5}. Sums and Intersections of Subspaces 2. The collection of Eij matrices can be called the "natural basis matrices. n S {r s : r E U. The sum and intersection ofR and S are defined respectively by: of R. R + S = {r + s : r e R. 2. for finite k).. U + S = T (in general ft. V. 1. 4. vector space V is finite-dimensional if there exists a basis X with n < +00 elements.4 Sums and Intersections of Subspaces Subspaces Definition 2. n 5 S.24.4. The sum and intersection Definition 2.21. Note: Check that a basis for Rmxn is given by the mn matrices Eij. R S = (in general. dim(~mXn) = mn. Ra C V/or an arbitrary index set A). we define dim(O) = 0.20. dim(C[to.. Definition 2. 2. where Eij is a matrix all of whose elements are 0 except for a 1 in the (i. for finite k). V for an arbitrary index set A). dim{A e Rnxn A is upper (lower) triangular} = !n(n 1). dim{A E ~nxn :: A = AT} = !n(n + 1). V (in general.18 says that dim(V) = the number of elements in a basis. J)th location.= T). tJJ) = +00. 5. V.-) = 0 and ]P ft. Example 2. dim{A € Rnxn A AT} = {1/2(n 1 (To see why.j matrices can be called the "natural basis matrices. n n S = 0. and S are defined respectively by: 1. K + S S. V (in general.. y>f (L L . Remark 2. The subspaces Rand S are said to be complements of each other in T. U S. S c V. a eA CiEA f] n *R. is not necessarily a subspace.2. Theorem 2. otherwise. and 1. + 7^ =: L R. 2. where Efj is a matrix all of whose elements are 0 except for a 1 in the (i.22. 2. dim(R mxn ) mn. T = R 0 S is the direct sum of R and S if = REB S is the direct sum ofR and S if Definition 2. Example 2.) 2 5. 72. and S are said to be complements of each other in T. Note: Check that a basis for ~mxn is given by the mn matrices Eij. i E m. RI -\ h Rk =: ]T ft/ C V.a S. JF') be a vector space and let 71.23.23. Sums and Intersections of Subspaces 13 13 consistency. V is infinite-dimensional. n (^ ft.24. Theorem 2." 3. V is infinite-dimensional. .=1 K k 1=1 2. R C S. R S C V (in general.4 2. R. « The subspaces R. i e m.21. and because the 0 vector is in any vector space.) (To see why. Theorem 2. R = 0. determine !n(n 1) symmetric basis matrices. j e n. dim(~n) = n.20. H. 1. A vector space V is finite-dimensional if there exists a basis X with n < +00 elements. otherwise. Thus. Theorem 2. ft n 5 = {v : v E 7^ and v E 5}.22. determine 1/2n(n + 1) symmetric basis matrices.

Avn are also orjRn. For arbitrary subspaces ft. .... Show that Av\. r2 e R. Then r1 — r2 = s2— SI. Theorem 2.. where r1. S2 E S. Suppose {VI. Xk} must be a linearly independent set. We discuss more about orthogonal complements elsewhere in the text. Prove that viand V2 form a a basis Consider v\ = [2 l]r 1*2 = [3 l] Prove that VI and V2 form basis 2 for R .. of the formula given in Theorem 2. e jRnxn 4. For example. Vector Spaces Chapter 2. R). Av" •.. which uniqueness follows. . .. D Theorem 2.c = jRnnxn jRn xn.26. ft be the set of symmetric matrices in R" x ". and let S Let (V... 2. For arbitrary subspaces R. consider V = R2 unique. jR). Since ft fl 0. the set in jRnxn. while U n £ is the set of diagonal matrices in Rnxn. Then any other distinct line through the origin is a complement of R. ... . ft. 0 The statement of the second part is a special case of the next theorem. we must have r\ r-i and SI rl . XI.s\. jRnxn.. Let x\. Vn thonormal if and only if A E R"x" is orthogonal.29.r2 S2 .. 3. *2.c the Example 2.. {vi. Then show that one of the vectors 1. . Since R n S = 0.28. But as t = r1 + s1 = r2 + S2. v = [4 l]r jR2. triangular + L = R xn un.. Xk} must be a linearly independent set. .27. ft.14 14 Chapter 2. Then any other distinct line through the origin is and let R be any line through the origin.. . Among all the complements there is a unique one orthogonal to R. dim(T) = dim(R) + dim(S). Suppose T = R O S..2 and 2. let R be the set of skew-symmetric matrices in (V. S2 e rl Sl r2 Then r. suppose an arbitrary vector t E T can be written in two ways t e as t S2. Let VI. Show that {XI. every t E T can be written uniquely in the form tt = r + s with r E Rand s E S. X2. Example 2. But r1 –r2 E ft and S2 — SI E S.. x/c E R" be nonzero mutually orthogonal vectors. n Proof: A e jRnxn written Proof: This follows easily from the fact that any A E R"x" can be written in the form A=2:(A+A )+2:(A-A). Let U be the subspace of upper triangular matrices in E" x" and let £ be the subspace of lower triangUlar matrices in Rnxn. and SI. Then V = U $ S.. we must have rl = r2 and s\ = si from S2 from which uniqueness follows...r .27..20. . Example 2. 2. together with Examples 2.27. one can easily verify the validity = n. = dim(ft) + Proof: To Proof: To prove the first part.25. Find the components of the vector v = [4 If with respect to this basis. The complement of R (or S) is not unique. mutually [x\. 1 TIT The first matrix on the right-hand side above is in S while the second is in R. jRn xn .si e S. Example 2. dim(R + S) = dim(R) + dim(S) - dim(R n S). . vd must be a linear combination of the others.25. . S of a vector space V.5.29. . 2. 0 S. ft S) = jR2 and let ft be any line through the origin. Using the fact that dim {diagonal (diagonal matrices} = n.c jRnxn. Theorem 2.28. S of a vector space V. EXERCISES EXERCISES 1. and let R"x". Then it may be checked that U + .26. Suppose =R EB Then 1.r2 £ Rand 52 .. every t € can be written uniquely in the form r s with r e R and s e S. . Vector Spaces Remark 2.vn be orthonormal vectors in R". r2 E Rand s1. unique ft... Xk E jRn 2.27..20. validity of the formula given in Theorem 2. Then Theorem 2. . AVn are orv\. F) (R n x n . Consider the vectors VI — [2 1f and V2 = [3 1f. IF) = (jRnxn. Vk} is a linearly dependent set. where rl.

Exercises Exercises

15

5. Let denote the set of polynomials of degree less than or equal to two of the form 5. Let P denote the set of polynomials of degree less than or equal to two of the form Po + PI X + pix2, where Po, PI, p2 E R. Show that P is a vector space over R Show Po p\x P2x2, where po, p\, P2 e R Show that is a vector space over E. Show Find the components of the that the polynomials 1, *, and 2x2 — 1 are a basis for P. Find the components of the that the polynomials 1, x, and 2x2 - 1 are a basis for 2 2 with respect to this basis. polynomial 2 + 3x 4x polynomial 2 + 3x + 4x with respect to this basis.
6. Prove Theorem 2.22 (for the case of two subspaces Rand S only). 6. Prove Theorem 2.22 (for the case of two subspaces R and only).

7. Let n denote the vector space of polynomials of degree less than or equal to n, and of 7. Let Pn denote the vector space of polynomials of degree less than or equal to n, and of the form p ( x ) = Po + PIX + ...•+ Pnxn,, where the coefficients Pi are all real. Let PE po + p\x + • • + pnxn where the coefficients /?, are all real. Let PE the form p(x) denote the subspace of all even polynomials in Pn,, i.e., those that satisfy the property denote the subspace of all even polynomials in n i.e., those that satisfy the property p(—x} = p(x). Similarly, let PQ denote the subspace of all odd polynomials, i.e., p( -x) = p(x). Similarly, let Po denote the subspace of all odd polynomials, i.e., those satisfying p(—x} = -p(x). Show that Pn = PE EB Po· those satisfying p(-x) = – p ( x ) . Show that n = PE © PO8. Repeat Example 2.28 using instead the two subspaces 7" of tridiagonal matrices and 8. Repeat Example 2.28 using instead the two subspaces T of tridiagonal matrices and U of upper triangular matrices. U of upper triangular matrices.

This page intentionally left blank This page intentionally left blank

Chapter 3 Chapter 3

Linear Transformations Linear Transformations

3.1 3.1

Definition and Examples Definition and Examples

definition of a linear (or function, We begin with the basic definition of a linear transformation (or linear map, linear function, or linear operator) between two vector spaces. or linear operator) between two vector spaces.
Let IF) and (W, IF) be vector spaces. Then I:- : -> a Definition 3.1. Let (V, F) and (W, F) be vector spaces. Then C : V -+ W is a linear transformation if and only if transformation if and only if I:-(avi £(avi + {3V2) = aCv\ + {3I:-V2 far all a, {3 e F andfor all v},v2e V. pv2) = al:-vi fi£v2 for all a, £ ElF and far all VI, V2 E V. The vector space V is called the domain of the transformation C while VV, the space into called the of the transformation I:- while W, the space into The vector space which it maps, is called the which it maps, is called the co-domain.

Example 3.2. Example 3.2.
1. Let F = R and take V = W = PC[f0, +00). 1. Let IF JR and take V W PC[to, +00). Define I:- : PC[to, +00) -> PC[to, +00) by Define £ : PC[t0, +00) -+ PC[t0, +00) by
vet)
f--+

wet) = (I:-v)(t) =

11
to

e-(t-r)v(r) dr.

2. Let F = R and take V = W = JRmxn. Fix M e R m x m . Let IF JR and V W R mx ". Fix ME JRmxm. Define £ : JRmxn -+ M mxn by I:- : R mx " -> JRmxn by
X
f--+

Y

= I:-X = MX.

3. Let F = R and take V = P" = {p(x) = a0 + ct}x H ... + anx"n : a, E R} and ao alx + ai E JR} and 3. Let IF = JR and take V = pn (p(x) h anx W = pn-l. w = -pn-1. I:- : —> Define C.: V -+ W by I:- p = p', where'I denotes differentiation with respect to x. Lp — p', where denotes differentiation x.

17

e. Thus. Thinking of both as a matrix and as a linear transformation from JR. £V = W A since x was arbitrary. Thus. j e m}. Specifically. transformation with its matrix representation. i.2 3.. w m] and where W = [WI.. In other words. j E !!!. + ~nLvn =~IWal+"'+~nWan = WAx. L IF) ~ (W. for V and W) is the representation of £i>. i E n}. with respect to {w }•. E ~} e m] V ith column of A = Mat £ (the matrix representation of £ with respect to the given bases = L L for V and W) is the representation of LVi with respect to {w j. i e n} and {Wj.mxn a mn represents L since represents £ since LVi = aliwl =Wai. is arbitrary). F) is linear and further suppose that {Vi. + . Change of basis then corresponds naturally to appropriate matrix multiplication. {u. F) —>• (W.. We thus commonly identify A as a linear transformation with its matrix representation.. LV WA since x was arbitrary. if v = ~I VI + • • + ~n v = Vx (where v.e. usually L The action of £ on an arbitrary vector V e V is uniquely determined (by linearity) v E V uniquely determined by its action on a basis. w ] and L is the ith column of A.n. . We identify A the equation £V = W A becomes simply £ = A..} are the usual (natural) bases. in the notation. i e ~}. Li near Transformations Chapters. but this is usually not done. Thus. n A= al : ] E JR. {W jj' j e !!!. say. i. W = lR. respectively... When V = R". IF) veniently in matrix form.m usually causes no naturally confusion. then LVx = Lv = ~ILvI + . In other words. and hence x.2 Matrix Representation of Linear Transformations Matrix Representation of Linear Transformations Linear transformations between vector spaces with specific bases can be represented conLinear transformations between vector spaces with specific bases can be represented conSpecifically.. Note that A = Mat £ depends on the particular bases for V and W." to Rm usually causes no Thinking of A both as a matrix and as a linear transformation from Rn to lR. Thus..18 Chapter 3.... . if V = E1v1 + .. and hence jc.. Then the {w j. then arbitrary). is by its action on a basis.• + E nVnn = V x (where u.. + amiWm where W = [w\. [ w . Linear Transformations 3.. j E raj. W = R m and [ v i .m and {Vi. z'th V This could be reflected by subscripts.} are bases for V and W. When V = JR. suppose £ : (V. j E m} are the usual (natural) bases WA linea LV L = A.

3.3. y e Rn. then composition of transformations corresponds to standard matrix multiplication. and W and transformations B from U to V and A from Wand V to W. Then their inner product is the scalar E ~n. then composition of transformations corresponds to standard matrix mUltiplication.3. y E Rn.. expressed mxp nxp formula cij = L k=1 n aikbkj. V.=1 Outer Product: Let x e Rm. and dim W m. it might be useful to prefer the former since the transformations A and B appear in the same order in both the diagram and the equation. That is. The above is sometimes expressed componentwise by the C — A B . Then we can define a new transformation C as follows: to W. y e ~n. xx T XX ). Composition ofTransformations 3. That is.y. Inner Product: n xTy = Lx. A rank-one symmetric matrix can be written in above (or xy if A E C xyH e c ). the arrows above are reversed as follows: C However. and if we associate matrices with the transformations in the usual way. Note that in The above diagram illustrates the composition of transformations C = AB. Two Special Cases: Two Special Inner Product: Let x. Note that in most texts.3 Composition of Transformations Composition Consider three vector spaces U. If dimZ// = p. we have C A B . . and if we associate matrices with the transformations in the usual way. Then we can define a new transformation C as follows: C The above diagram illustrates the composition of transformations C = AB. in the same order in both the diagram and the equation. dimV = n. and dim W = m. the form XXT (or xx HH). If dimU = p. dim V = n. . Outer Product: matrix matrix mxn E R Note that any rank-one matrix A e ~mxn can be written in the form A = xyT = xyT H mxn mxn). Then their outer product is the m x n E ~m. Composition of Transformations 19 19 3.

~. N(A) c V. vk] be a set of nonzero vectors Vi E ~n. D Remark 3. . 2. is an orthogonal set. then {I —/==. N(A) S. then Proof: The proof of this theorem is easy.-. € 1Tlln is an orthogonal set.i •. The nullspace of A. vi ^/v'k vk ~~~ ] . 1. is an orthonormal set. R(A) C W.4 Structure of Linear Transformations Structure of Linear Transformations Let A : V —> W be a linear transformation. The range of A is also known as the image of A and — {Av e V}. Equivalently. [-: J} is an orthogonal set. is the set {w E W : w = Av for some v E V}. 0 nition. of of denoted Im(A).IN.4 3. essentially following immediately from the defiProof: The proof of this theorem is easy.. If in of = [a\. Example 3. be orthogonal if' vjvj 0 for i ^ j and orthonormal if vf vj 8ij' where 8tj is the be orthogonal if vr v j = 0 for i f= j and orthonormal if vr v j = 8ij.. . . subspaces of different spaces. Let A E Rmxn.. orthonormal set. is the {v e V Av = 0}. The range of A..3.. essentially following immediately from the definition. e Rn. . Theorem 3.. Note that N(A) and R(A) are. IS an orthogonaI set. 2. denoted Im(A). W.Vk } With Vi E. an M. LinearTransformations Chapter 3. Definition3. R(A) = {Av : v E V}. {[ ~~i ]. V.8. . then then R(A) = Sp{al. the same symbol (A) is Note that in Theorem and throughout the text.8. denotedR(A). e ~mxn. . See also the of Section 3. R ( A ) S.4. the same symbol (A) is used to denote both a linear transformation and its matrix representation with respect to the used to denote both a linear transformation and its matrix representation with respect to the usual (natural) bases. See also the last paragraph of Section 3. Li near Transformations 3.2. 2. The range of A. denotedlZ( A). Note that N(A) and R(A) are.an]. Let {VI.6. The nullspace of kernel of and A is also known as the kernel of A and denoted Ker (A).7...•. an]. subspaces of different spaces. I ~VI VI ^/v.[ -:~~ J} . Vk} with u.3. usual (natural) bases. {[ ~ J.5 and throughout the text. —/=== . The nullspace of The of denoted N(A)..7. if i f= j. Note that in Theorem 3. where 8ij is the Kronecker delta defined by Kronecker delta defined by 8 = {I0 ij ifi=j. Theorem 3. If {VI. Then 1. . . 1. Then Let A : V —>• be a linear transformation. Let A : V --+ be a linear transformation. an} . . {v1.. then ~ . vd of u. in general..5. If A is written in terms of its columns as A = [ai. Definition 3. is the set {v E V : Av = O}. . is an orthonormal set..." • orthonormal set. ~ 3 ... . is the set {w e w = Av for some v e V}. Let A : V --+ W be a linear transformation.20 20 Chapter 3. ~} | ISisan 3. Definition 3..2. denoted N(A). { t > . in general. (A). 3... The set is said to 3..

then give rise to redundant equations). (n n S)~ = n~ + S~. + X2 + X3 = 0.4. of course.=1 -XI. Any set of vectors will do. Theorem 311 Let Theorem 3. Then it can be shown that Working from the definition. Example 3.. Then the of defined T S~={VE]Rn: V S = 0 for all s e S}.. Let 3. n~. Vk} be an orthonormal basis for S and let x E Rn be an arbitrary {v1. Then n. k =X . Let S <.. Let {VI. S 5. including dependent spanning vectors (which would.10. if and only if S~ <.4. = S. S \B S~ = ]Rn. Structure of Linear Transformations 21 21 Definition 3. 6.3. . (n + S)~ = nl. 3.10. nonzero) solutions of the system of equations 3xI -4xI + 5X2 + 7X3 = 0. Set XI X2 = L (xT Vi)Vi. n <.= {v e Rn : vTs=OforallsES}. the computation involved is simply to find all nontrivial (i.11.. (S~)l. vk} e ]Rn vector. n S~. 4.. Then the orthogonal complement of S is defined as the set c ]Rn. . 2.9.. Set vector. Let R S C Rn The S <. Structure of Li near Transformations 3. ]Rn. . Rn. S1. Note that there is nothing special about the two vectors in the basis defining S being orthogonal. The proofs of the other results are left as Proof: left exercises. .e. Proof: We prove and discuss only item 2 here.

) N(A)1" spaces. we see that x2 is orthogonal to v1.e. .1 in the next section. But yT Ax = ( A T ) x. Ax = Proof: To prove the first part.) 2. In other words. the right nullspace is A/"(A) while the left nullspace is N(A T ). Let A : IRn -> Rm.= Af(AT) ) (i. . Suppose. Since x was arbitrary.) 2. and x2 = x~.. Then Theorem 3. since T x 2 Vj = XTVj .. (Note: This also holds for infinite-dimensional vector spaces. But then (x'1 —XI)TT (x. every vector v in the domain space R" can be written in a unique way as v = x + y. 2. 3. since Then XI e S and. Theorem 3.l = R(A T}.. R(A). Then {v e IRn : A v = O} is sometimes called the Definition 3. Li near Transformations Chapters.13. everything in S (i. Then R(A r ).. every vector w in the co-domain space IRm can be written in a unique way as w = x+y. (Note: This for finite-dimensional 1.X2) 0 since (x'1 — x1) (x' 2 — x2) = 0 by definition of ST. x 1 E Sand x2. Let A : Rn -+ Rm.. = x'1+ x'2.e. Similarly. E R(A) and E R(A). It is also easy to see directly that. where XI. {w E IR m : WT A = O} is called the left nullspace of A. Vk and hence to any linear combination of these vectors.. Suppose. . ft(Ar) (i.x1) (x'1 xd x2 — X2 = — (x'1 — x1) (which follows by rearranging the equation x1+x2 = x'1 + x'2). We also have that S U S. Let A : IRn -> IRm. Then T (x.12. See also Theorem 2. .14 (Decomposition Theorem).. i. Let A : R" -+ IRm. Thus.26.. We have thus shown that S + S. (Note: This also holds for infinite-dimensional vector spaces.11 can be combined to give two very funTheorem 3.. X2 is orthogonal to any vector in S. Then X2 = x. established that N(A) U(AT ).e.11 can be combined to give two very fundamental decompositions damental and useful decompositions of vectors in the domain and co-domain of a linear and transformation A. many properties of A can be developed in terms of the four fundamental subspaces . where x\. E N(A) and E N(A). Theorem 3. Thus. When thought of as a linear transformation from IR n to Rm. can write vectors in a unique way with respect to the corresponding subspaces. right nullspace of A. that x = XI for example. But yT Ax = (ATyy{ x. Let A : IRn —> Rm. .•... including itself) is O.. -XI) (which follows by rearranging the equation XI +X2 = x.e.26. R" N(A) 0 ft(Ar ». including itself) is 0. transformation A. D Theorem 3. + x~. x~ e S. IRn = M(A) EB R(A T)). x. See also Theorem 2. It can write vectors in a unique way with respect to the corresponding subspaces.e. E S and X2. In other words.e.e. Clearly.5 3. every vector v in the domain space IRn can be written in a unique way as v = x 7.l. We have thus shown that vectors. 0 The proof of the second part is similar and is left as an exercise. 'R.l N(A T (i. IRm = R(A) EBN(A T». But then (x. x E R(A r ) Since x 1 have established thatN(A). Thus. (Note: This holds only for finite-dimensional vector spaces. i. standing Figure 3. Let A : Rn -+ IRm.12 and part 2 of Theorem 3. Then Ax = 0 and this is an and equivalent to yT Ax = 0 for all v. the right nullspace is N(A) while the left nullspace is J\f(AT).(A)1~ — J\f(ATT ). This key theorem becomes very easy to remember by carefully studying and underThis key theorem becomes very easy to remember by carefully studying and understanding Figure 3. XI = x. Vk and hence to any linear combination of these we see that X2 is orthogonal to VI.l = 0 since the only vector s E S orthogonal to S1 = IRn. that x = x1 + x2.l = 7£(AT). Clearly. Then Theorem 3. x~ -X2 = -(x. Similarly. X2 is orthogonal to any vector in S.l = N(A ). we form AT v. 
Ax = 0 if and only if x orthogonal is orthogonal to all vectors of the form AT y. Ax = 0 if and only if x equivalent to yT Ax = 0 for all y.1 in the next section.l. Linear Transformations Then x\ E <S and.13.) Proof: To x E N(A). (w e Rm : w T A = 0} is called the left nullspace of A. when we have such direct sum decompositions. We S n S1 =0 the e orthogonal everything in (i. -– x1) = 0 since 0 by definition of S.l.xn. Thus.l where x € M(A) and y € J\f(A)± = R(AT) (i.12.XITVj =XTVj-XTVj=O. Then 1. Rm = 7l(A) 0 M(AT)).5 Four Fundamental Subspaces Four Fundamental Subspaces Consider a general matrix A E lR.l = Rn.XI/ (x~ . x'2 E S1. every vector in the co-domain space R m can be written ina unique way asw = x+y..14 (Decomposition Theorem).e. N(A). where x e U(A) and y e ft(A)1. many properties of A can be developed in terms of the four fundamental subspaces to IRm. 0 x1 — x'1 andx2 = x2. for example. D Definition 3. Then {v E R" : Av = 0} is sometimes called the right nullspace of A.12 and part 2 of Theorem 3. +x~). When thought of as a linear transformation from E" Consider a general matrix A € E^ x ". y.22 22 Chapter 3. x e R(AT). we decompositions. take an arbitrary x e A/"(A). The proof of the second part is similar and is left as an exercise.

A is one-to-one or 1-1 (also called monic or injective) if N(A) = O. IR n -> IRm. A f ( A ) . N(A).1. 1.1. properties 7£(A). 1. R(A). Figure 3. 'R. be a linear transforDefinition 3. the column rank of A (maximum number of independent columns).1 obvious and we return to this figure frequently both in the context of linear transformations obvious and we return to this figure frequently both in the context of linear transformations and in illustrating concepts such as controllability and observability. The row rank of A is column rank of of independent row rank of . Let A : E" -+ Rm.15. rank(A) dimftCA).3. This is sometimes called 3. and in illustrating concepts such as controllability and observability. Four Fundamental Subspaces 3. mation. fundamental subspaces.5. Four Fundamental Subspaces 23 23 A r N(A)1- r EB {OJ X {O}Gl n-r m -r Figure 3. Two equivalent characterizations of A being 1-1 that are often easier to verify in practice are the characterizations of A being 1-1 that are often easier to verify in practice are the following: following: (a) AVI = AV2 (b) VI ===} VI = V2 . Figure 3.(A)^. Definition 3. A is onto (also called epic or surjective) ifR(A) = W. A is onto (also called epic or surjective) ifR. Let and W be vector spaces and let A : motion. and N(A)1-. Two equivalent 2. Let V and W be vector spaces and let A : V -+ W be a linear transforDefinition 3.16. t= V2 ===} AVI t= AV2 . R(A)1-. Four fundamental subspaces.16.15.(A) = W. Then rank(A) = dim R(A).5. 3. 2. A is one-to-one or 1-1 (also called monic or infective) ifJ\f(A) = 0.1 makes many key properties seem almost N(A)T.

") of A. Then N(T) = To w E 7£(A). . u.11 and 3. Let A. then {TVI. + rank(B) - n :s rank(AB) :s min{rank(A). 1 1 Xl E A/^A) . Let A : R" ~ Rm. dimensions. Then 3. + nullity(B). . Theorem 3. following follows we apply this and several previous results. of A. dimension of the domain of A.24 24 Chapter 3. (Note: Since 3. . Linear Transformations dim 7£(A r ) (maximum number of independent rows).17. rank(A) + B) :s rank(A) + rank(B). We thus have that dim R(A) = dimN(A)-L since it is easily shown T dim7?. Tv abasis 7?. and products of matrices.. of A.19.. by definition there is a vector x E ]Rn such that Ax = w. .17. rank(AB) = rank(BA) = rank(A) and N(BA) = N(A). the following string of equalities follows easily: "column rank of A" = rank(A) = dim R(A) = dimN(A)-L1 = dim R(AT) = rank(AT)) = A" rank(A) = dim7e(A) = dim A/^A) = dim7l(AT) = rank(A r = "column "row rank of A. . Let A : Rn -> Rm. The basic results are contained in the following easily proved following theorem.11 and 3.17 we see immediately that n = dimN(A) = dimN(A) + dimN(A)-L + dim R(A) .(A). Like the theorem. The dual notion to rank is the nullity R(AT) of independent rows).17 we see immediately that Proof: From Theorems 3. . x x e R" x\ X2. Tvrr]} is a basis for R(A). where n is the ]Rn -> ]Rm. if B is nonsingular.19 suggests looking atthe general problem of the four fundamental subspaces of matrix products. Theorem 3. 0 For completeness. the subspaces themselves are not necessarily in the same vector space. {Tv\. sometimes denoted nullity(A) or corank(A). 1.19 suggests looking at the general problem of the four fundamental Part 4 of Theorem 3. it is a statement about equality of dimensions. The last equality AXI x\ e N(A)-L and jc E N(A). . e ]Rnxn.(A) = dimA/^A^ 1 if that if {VI. Then dimN(A) + dim R(A) = n. of Corollary 3. Then dim K(A) = dimNCA)-L.. this theorem is sometimes colloquially stated "row rank of A = column N(A)-L = R(A A/^A) " = 7l(A ). 3. shows that T is onto. . rank(B)}.19. Clearly T is 1-1 (since A/"(T) = 0).. Finally. and is defined as dim A/"(A). take any W e R(A). if {ui. where Ax — w. nullity(B) :s nullity(AB) :s nullity(A) 4. and is defined as dimN(A). To see that T is also onto. . B E R" xn . dimA/"(A) + dimft(A) = dimension of the domain of A. the subspaces themselves are not necessarily in the same vector space.. dimA/'(A) ± (Note: 1 T T ).andx22 e A/"(A). colloquially of = rank of A. v r } is a basis for N(A)-L. R(A) : ]Rn ~ ]Rm.18.18. Part 4 of Theorem 3. . Proof: From Theorems 3. LinearTransformations Chapter3. . we include here a few miscellaneous results about ranks of sums completeness. Write x = Xl + X2. O:s rank(A 2. iv} abasis forA/'CA) . denoted nullity(A) or corank(A).... Then Ajti = W = TXI since Xl e A/^A)." 0 of D The following corollary is immediate.") Proof: Proof: Define a linear transformation T : N(A)-L ~ R(A) by J\f(A)~L —>• 7£(A) by Tv = Av for all v E N(A)-L. r*i *i E N(A)-L. 3.

A is onto if and only if rank (A) = m (A has linearly independent rows or is said to have full row rank. AT A nonsingular). dim7?. A A -T. Then y = Ax. the transformations A.3.(A).20 and is also easily proved.20. have full row rank.5. Proof' Proof of part 1: If A is onto. A is 1-1 if and only z/rank(A) = n (A has linearly independent columns or is said to have full column rank. suppose AXI = Ax^. A is onto if and only //"rank(A) — m (A has linearly independent rows or is said to 1. Definition 3. Thus. suppose Ax\ dim R(A T). then A/"(A) = 0. e 7?. 1. let y e Rm Proof: Proof of part 1: If A is onto.22. equivalently. Conversely. A : V —» W is invertible (or bijective) if and only if it is 1-1 and onto. y E R(A). : R n -» Rm. AT A is nonsingular). We now characterize 1-1 and onto transformations and provide characterizations in We now characterize I-I and onto transformations and provide characterizations in terms of rank and invertibility. equivalently. dim R(A) = m = rank(A). AATT is nonsingular). Let jc = AT(AAT)~]y Y E Rn.ti = AT Ax2. Let A : IRn -+ IRm.e.2 N(A T ). B E Rnxp. Conversely. N«AB)T) . N(A) = N(A T A). 2. D D 1-1. i. Theorem 3. Then 3. 2. N(AB) . R(B T ). XI = X2 AT A A 1-1. Let A E Rmxn.17. A : IRn -»• IR n is invertible or Note that if A is invertible. It is extremely useful in text that follows. R(A) = R(AA T ). Four Fundamental Subspaces Theorem 3. so A is onto. especially when dealing with pseudoinverses and is extremely useful in text that follows.21. Note that if A is invertible.23. then dim V = dim W.22. Ar. The transformations AT are all 1-1 and onto between the two spaces N(A)1. A is AT AXI AT AX2. Similar remarks apply to A and A~T. 4. N(A T ) = N(AA T ). AX2. 1. A"1 ± are all 1-1 and onto between the two spaces M(A) and 7£(A).(A) — m — rank (A).and R(A). 2. linear least squares problems. 25 25 The next theorem is closely related to Theorem 3. and hence dim R(A) n by Theorem 3. . which implies that dim A/^A). A Proof of part 2: If A is 1-1.. then N(A) = 0. R(AT) 3. Let e IRmxn. A is 1-1 if and only ifrank(A) = n (A has linearly independent columns or is said 2. equivalently. Let A E Rmxn.—n = dim 7£(A r ). Then A r A. RCAB) S. Then Theorem 3. RCA). then dim V — dim W. 3. Conversely.20. which implies x\ = x^. Then 3. Conversely.2 N(B). A : W1 -+ E" is invertible or nonsingular if and only z/rank(A) = n. e IRnxp. and A-I A. since ArA is invertible. and hence dim 7£(A) = n by Theorem 3. nonsingular ifand only ifrank(A) = n. AT.20 and is also easily proved. Definition 3. It The next theorem is closely related to Theorem 3. e IRmxn. A : V -+ W is invertible (or bijective) if and only if it is 1-1 and onto. terms of rank and invertibility. to have full column rank. Also. let y E IRm be arbitrary. AA is nonsingular). which implies that dimN(A)1-1 = n — Proof of part 2: If A is 1-1.17.21. 1.5. x AT (AAT)-I e IRn. The transformations AT and A -I have the same domain and range but are in general different maps unless A is and A~! have the same domain and range but are in general different maps unless A is orthogonal. Four Fundamental Subspaces 3. Note that in the special case when A E R"x". especially when dealing with pseudoinverses and linear least squares problems.23. equivalently. A € IR~xn. 4. R«AB)T) S. = R(A T A). Theorem 3. Also.

A~ (A A)~ A . i. where Iw denotes the identity transfonnation on W. A is invertible if and only if it is both right and left invertible. If there exists a unique right inverse A~R such that AA~R = I. 1..e. Then Definition 3. Let Theorem 3. A is right invertible if and only if it is onto. Obviously A has full row rank (= 1) and A ..22 we see that if A : E" -+ Em is onto. by uniqueness it must be Thus. If there exists a unique left inverse A~L such that A~LA = I. are infinitely A. then a right inverse is given by A~R = AT(AAT) -I. Definition 3.24. A is said to be left invertible if there exists a left inverse transformation A~L : W —> to transformation A -L : V such that A -L A = Iv. Then -> 1. is left invertible if and if it and left invertible. Theorem 3. right inverses for A. can always find v e E2 such that [1 2][^] = a).25 that A is invertible.R = _~] (=1) and A~R = [ _j j is a right inverse. by uniqueness it must be A -R + A -R A — = A -R. it may still be right or left invertible. such that A~LA = Iv where Iv denotes the identity transformation on V. 0 a left inverse. then a left inverse is given by A -R = AT (AAT) left T L = A. then A is invertible. (A -R + A -R A — /) must be a right inverse and.. if A is 1-1. Notice the and leave the following: following: A(A. A is left invertible if and only ifit is 1-1. D Example 3. in A -I = A -R = A -L. Defileft If linear concepts left nitions of these concepts are followed by a theorem characterizing left and right invertible transformations. both 1-1 and Moreover. —> transformation if left -+ 2. 1.L = (ATTA)-I1AT. then one (Proo!' = [1 2]:]R2 -+ ]R . Let A = [1 2] : E2 -»• E1I. € ]R . it is clear that there are infinitely many right inverse. therefore.R = If left A -L A -L A = 2. But this implies that A~RA = /. where Iv denotes the identity transfonnation on V.25.R + AA-RA = I A +IA - A since AA -R = I = I. A -R A = I.24.27.26.R + A-RA -I) = AA. It then follows from Theorem 3.both 1-1 and is if and if onto. Also.25 that A is invertible.e.. in which case A~l = A~R = A~L. It then follows from Theorem 3.26. therefore.: AA -R = w Iw W -+ V such that AA~R = Iw. . (Proof: Take any a E E1I. Let A : V -+ W. characterizing all solutions of the linear matrix equation AR = I. Let A : V -> W. that A~R is a left inverse. Theorem 3. In Chapter 6 we characterize all right inverses of a matrix by Chapter characterize characterizing all solutions of the linear matrix equation A R = I.e.e. A. If Proof: proof of second Proof: We prove the first part and leave the proof of the second to the reader. A -R the case that A~R + A~RA ..22 ]Rn ->• ]Rm Note: From Theorem 3. Let A : V -» V.26 Chapter 3. Let A : V -+ Then 1. then A is invertible. A is said to be right invertible if there exists a right inverse transformation A~RR : if A. i. Li near Transformations Chapters. 1.. Let -+ V.R AA. linear Transformations If a linear transformation is not invertible. 3. i. A right invertible if and only if it onto. 2. Then A is onto. i. Obviously A has full row rank can always find v E ]R2 such that [1 2][ ~~] = a).I) must be a right inverse and.I = A~R.. Similarly. (A R + A RA .

It is now obvious that A has full column rank (=1) and A~L = [3 . and let 7£ denote the subspace of skew-symmetric matrices. We give below bases for its four fundamental subspaces. (Proof: The only solution to 0 = Av = [I2]v is v = 0. J E2. is neither 1-1 nor onto. In Chapter 6 we characterize all left inverses of a infinitely many left inverses for A. consider A linear transformation ]R3 1. Prove Theorem 3. with Y e ]Rnxn (X. let S denote the subspace of symmetric matrices. Consider the vector space R nx " over E. respect to this inner product. — S^. We give when considered as a linear on ]R3. I-I? Is £. Y E Enx" define their inner product by (X. 3. It is now obvious that A has full column is v 0. 3. Find the matrix representation of A with respect to the bases Find the matrix representation of A to bases {[lHHU]} of R3 and {[il[~J} of E . let denote the subspace of symmetric 2. Let A = [i]:]Rl -> ]R2. In Chapter 6 we characterize all left inverses of a matrix by characterizing all solutions of the linear matrix equation LA = I. £.1] is a left inverse. Consider differentiation £ 1-1? Is£ onto? onto? 4. 2. Let A = [8 5 i) and consider A as a linear transformation mapping E3 to ]R2. whence A/"(A) = 0 so A is 1-1).2. Then A is 1-1. matrix characterizing all solutions of the linear matrix equation LA = I. 4. Show that. The matrix 3. 2 .Exercises 27 2.4. below bases for its four fundamental subspaces. . Consider the differentiation operator C defined in Example 3.3. whence N(A) = 0 so A is 1-1). it is clear that there are A -L = [3 — 1] infinitely many left inverses for A. ThenAis 1-1. For matrices X. Let A = [~ . Is £. R = S J. respect to this inner product. and let R denote the subspace of skew-symmetric matrices. For matrices matrices. Consider the vector space ]Rnxn over ]R. (Proof Theonly solution toO = Av = [i]v 2. Y) = Tr(X Tr F). y) = Tr(X Y). 'R. LetA [J] : E1 ~ E2. Again. EXERCISES EXERCISES 3 4 1. The matrix A = 1 1 2 1 [ 3 1 when considered as a linear transformation onIE \ is neither 1-1 nor onto.4. Prove Theorem 3.

If E 1R~9X48. ~ ~ 3 8. Linear Transformations Chapters. Rnxm thought of as a transformation from Rm to IRn. Let = [~ 9.11. Linear Transformations 7. Suppose A € Mg 9x48 . Modify Figure 3.2. Are they equal? Is this true in general? DetennineN(A) and R(A). prove it. if not. Prove Theorem 3.28 5. Show that AT has a right inverse. Are they equal? Is this true in general? If this is true in general.4. homogeneous linear system Ax = 0? homogeneous linear system Ax = O? n 3. Chapter 3. Suppose A E IR m xn has a left inverse. Theorem 6. provide a counterexample.12. 3. .1 11. How many linearly independent solutions can be found to the 10. linearly independent solutions 10. Let A = [ J o].4.Il. left T Suppose e Rmxn 9. Determine A/"(A) and 7£(A).1 to illustrate the four fundamental subspaces associated with AT e associated ATE nxm IR from IR m R". Determine bases for the four fundamental subspaces of the matrix Detennine fundamental A=[~2 5 5 ~]. Prove Theorem 3.

l.1. Define a transformation T : Af(A)1. neither provides Unfortunately.1 4.---+ R(A) by dimensional Define transformation T : N(A)..Chapter 4 Chapter 4 Introduction to the Introduction to the Moore-Penrose Moore-Pen rose Pseudoinverse Pseudoinverse In this chapter we give a brief introduction to the Moore-Penrose pseudoinverse. where X Xand Y y are arbitrary finitedimensional vector spaces. as noted in the proof of Theorem 3. Then A+ is the Moore-Penrose where y = y\ pseudoinverse of A. 4. brings great notational and conceptual clarity to the study of solutions to arbitrary systems of linear equations and linear least squares to the study of solutions to arbitrary systems of linear equations and linear least squares problems. which was proved by Penrose in 1955.17. problems.. Definition 4. where and are arbitrary finiteConsider a linear transformation A : X ---+ y. see [22]." X ".1 Definitions and Characterizations Definitions and Characterizations Consider a linear transformation A : X —>• y. and hence we can RCA) —>• J\f(A}~L This transformation can define a unique inverse transformation T-l 1 :: 7£(A) ---+ NCA). the Moore-Penrose pseudoinverse of A.. and hence we Then. Then. the Moore-Penrose pseudoinverse of A. Although X and y were arbitrary vector spaces above. as is shown in the following text.1. define a transformation A + y ---+ X by where Y = YI + Yz with Yl e 7£(A) and Yz e Tl(A}L. see [22]. We have thus defined A+ for all A E IR™xn. a generIn this chapter we give a brief introduction to the Moore-Penrose pseudoinverse. define a transformation A+ : Y —»• X by Definition 4. the definition neither provides nor suggests a good computational strategy good computational strategy for determining A +. 29 . can be used to give our first definition of A . The Moore-Penrose pseudoinverse is defined for any matrix and. T is bijective (1-1 and onto).l. which was proved by Penrose in 1955. This transformation T~ + can be used to give our first definition of A+. as is shown in the following text. Then A+ is the Moore-Penrose j2 with y\ E RCA) and yi E RCA). T is bijective Cl-l and onto). With A and T as defined above. A purely algebraic y + characterization of A+ is given in the next theorem.17.l —>• Tl(A) by Tx = Ax for all x E NCA). as noted in the proof of Theorem 3. let us henceforth consider the Although X and Y were arbitrary vector spaces above. let us henceforth consider the X ~n lP1.m We A+ A e lP1. for determining A+ . characterization of A is given in the next theorem. pseudoinverse of A. brings great notational and conceptual clarity matrix and. case X = W1 and Y = Rm. With A and T as defined above.l. a generalization of the inverse of a matrix.

Such a verification is often relatively satisfies all four. p. characterization can be useful for hand calculation of small examples. Let A e R?xn Then G = A + if and only if (Pl) AGA = A.7. then by uniqueness. one need simply verify the four Penrose conditions (P1)-(P4). Introduction to the Moore-Penrose Pseudoinverse Chapter 4. . Let A E lR. the Penrose properties do offer the great virtue of providing a checkable criterion in the following sense. If G the pseudoinverse of A. terizations. if a =0. Such a verification is often relatively straightforward. Consider A = f ] satisfies (P1)-(P4). Example 4. Example 4. whose proof Still another characterization of A + is given in the following theorem.4.. Example 4. Still another characterization of A+ is given in the following theorem.2. Then Theorem 4. A+ = (AT A)~ AT if A is 1-1 (independent columns) (A is left invertible). Furthermore.2 Examples Examples Each of the following can be derived or verified by using the above definitions or characEach of the following can be derived or verified by using the above definitions or characterizations. If G satisfies all four. Let A e R™xn. (PI) AGA = A. A -L = [3 — 1]) satisfy properties (PI). (4. However. Unfortunately. it must be A+. Then G = A+ if and only if Theorem 4. However." xn.4. and (P4) but not (P3). this can be found in [1. (P4) (GA)T = GA. whose proof can be found in [1. For any scalar a. AG. Let A E lR. (P2) GAG = G. Given a matrix G that is a candidate for being the pseudoinverse of A.2 4. Note that the inverse of a nonsingular matrix satisfies all four Penrose properties. Then A+ [a [! = lim (AT A + 82 1) -I AT 6--+0 6--+0 (4.30 Chapter 4. Given a matrix G that is a candidate for being checkable criterion in the following sense. the Penrose properties do offer the great virtue of providing a tional algorithm. Consider A = ['].1) = limAT(AAT +8 2 1)-1. A+ always exists and is unique. Unfortunately.7. neither the statement of Theorem 4. 19]. L Note that other left inverses (for example.2 nor its proof suggests a computawith Definition 4. A + always exists and is unique. then by uniqueness. Example 4. straightforward." xn.1.1. Verify directly that A+ = [| ~] satisfies (PI)-(P4). this characterization can be useful for hand calculation of small examples. A+ = (AT A)-I AT if A is 1-1 (independent columns) (A is left invertible). Also. Theorem 4. Example 4. as a right or left inverse satisfies no fewer than three of the four properties. Introduction to the Moore-Penrose Pseudoinverse Theorem 4. Example 4.3. 19].3. (P4) (GA)T = GA. For any scalar a. as with Definition 4.2 nor its proof suggests a computational algorithm. Also. one need simply verify the four Penrose conditions (P1)-(P4). A~ = [3 . if a t= 0. (P2). Note that the inverse of a nonsingular matrix satisfies all four Penrose properties.6. Note that other left inverses (for example.2) 4. = Furthermore. (P3) (AGf (P3) (AG)T = AG.1]) satisfy properties (PI). it must be A +.5. While not generally suitable for computer implementation.2. a right or left inverse satisfies no fewer than three of the four properties.5. (P2). X+ = AT(AATT) -I if A is onto (independent rows) (A is right invertible). While not generally suitable for computer implementation. (P2) GAG G. neither the statement of Theorem 4. Verify directly that A+ = Example 4. p. and (P4) but not (P3).6. A t = AT (AA )~ if A is onto (independent rows) (A is Example 4.

3. (A T )+ = (A+{. e jRmxn and suppose Rmxm R n are orthogonal (M is T -1 1 orthogonal if M M ).3 Properties and Applications Properties and Applications This section presents some miscellaneous useful results on pseudoinverses.VVEejRnxnx " are orthogonal (M is 4.3.7. p.10. Then S+ UD+U T where D+ is again a diagonal matrix whose diagonal elements are determined according to Example 4.13. For any vector v E M". The proof of the second result (which can also be proved easily by verifying the four Penrose proof of the second result (which can also be proved easily by verifying the four Penrose conditions) is as follows: conditions) is as follows: (A T )+ = lim (AA T ~--+O + 82 l)-IA = lim [AT(AAT ~--+O + 82 l)-1{ + 82 l)-1{ 0 = [limAT(AAT ~--+O = (A+{.8.4. A+ = (AT A)+ AT = AT (AA T)+.. where U is orthogonal and D is diagonal. 31 31 Example 4. Example 4. For A e Rmxn 1. The Proof: Both results can be proved using the limit characterization of Theorem 4. Then orthogonal if MT = M. p. The interested reader can consult the proof in [1. The especially illuminating. where U is orthogonal an Theorem 4. Theorem 4.12. if v i= 0.3 4. Many of these are used in the text that follows. simply verify that the expression above does indeed satisfy each c the four Penrose conditions. . D Theorem 4. Proof: Both results can be proved using the limit characterization of Theorem 4. Let S e jRnxn be symmetric with UT SU = D. The proof of the first result is not particularly easy and does not even have the virtue of being proof of the first result is not particularly easy and does not even have the virtue of being especially illuminating.7.4. Example 4.9.10.9. 0 the four Penrose conditions. For all A E jRmxn.).8. Properties and Applications 4. [~ ~ r ~ =[ 0 Example 4.11. where D+ is again a diagonal matrix whose diagonc D is diagonal. Then Proof: For the proof. 27]. Example 4. [~ r 1 =[ 4 4 I I ~l 4 I I 4 4. 2.4. are used in the text that follows. elements are determined according to Example 4. Theorem 4. . simply verify that the expression above does indeed satisfy each of Proof: For the proof. Let A E R m x "and suppose UUEejRmxm. Many of these This section presents some miscellaneous useful results on pseudoinverses. The interested reader can consult the proof in [1. Let S E Rnxn be symmetric with U TSU = D.13. Properties and Applications Example 4.11. 4. For any vector e jRn. Then S+ = U D+UT. .12. 27]. if v = O.

however (see.16. then A+ = (ATA)~lAT. e. in theory at least. then AkA+ = A+ Ak and (Ak)+ = (A+)kforall integers k > O. 1.13 can. 2. see [5]. 0 D Theorem 4. since e lR~xr. 0 D E lR~xr. n(A T AB) ~ nCB) . Proof: Proof: For the proof.15. For all A E lR mxn . Then As an example consider A = [0 I] and B = LI. 3.• Similarly.14. = n(A T) = n(A+ A) = n(A TA). see [9]. [7]. in general..g. For e Rmxn . properties Theorem 4.At = A in Theorem 4. 4. in peneraK ucts of matrices such as exists for inverses of products. [9].15. 4. A\ = A in Theorem 4. Introduction to the Moore-Penrose Pseudoinverse Chapter 4. we B+ BT(BBT)-I. n(BB T AT) ~ n(AT) and 2. Then (AB)+ = 1+ = I while while B+ A+ = [~ ~J ~ = ~. Introduction to the Moore-Penrose Pseudo inverse 4. . [7].g. 0 The following theorem gives some additional useful properties of pseudoinverses. TTnfortnnatelv.32 Chapter 4.. (AB)+ = B+ A + if and only if 1. (A+)+ = A. e. and better methods are suggested in text that follows.16. (AB)+ = B{ Ai. If e lR~xm. 4. As an example consider A = [0 1J and B = [ : J.12 and 4. BB+ f r The by taking BI = B. where BI = A+ AB and AI = AB\B+. This A AT AT turns out to be a poor approach in finite-precision arithmetic. (AB)+ = B?A+. Proof: Proof: For the proof.11 is suggestive of a "reverse-order" property for pseudoinverses of prodTheorem 4. whence A+A = f r.14. (AT A)+ = A+(A T)+.15. The result then follows by E lR. If A e Rnrxr. Ir Similarly. (AA T )+ = (A T)+ A+.17. then (AB)+ = B+ A+. n(A+) 4. (AB)+ = B+A+. [11].15. (AB)+ = B+A+ if and only if 4. A+ = (AT A)-I AT. poor (see. [23]). N(A+) 5.xm. Proof' A+ A Proof: Since A E Rnrxr. Theorem 4. compute 4. D takingB t = B. B E Rrrxm. Theorem 4. = N(AA+) = N«AA T)+) = N(AA T) = N(A T). If A is normal. [5]. xm + T B e Wr . the Moore-Penrose pseudoinverse of any matrix (since AAT and AT A are symmetric). where BI = A+AB and A) = ABIB{. [II]..11 nets of matrices such as exists for inverses of nroducts Unfortunately.13 we can. necessary and sufficient conditions under which the reverse-order property does hold are known and we quote a couple of moderately useful results for reference. [] sufficient reverse-order However.17. Theorem 4. we have B = B (BBT)~\ whence BB+ = Ir.12 Note that by combining Theorems 4.

Note: Recall that A ∈ R^{n×n} is normal if A A^T = A^T A. For example, if A is symmetric, skew-symmetric, or orthogonal, then it is normal. However, a matrix can be none of the preceding but still be normal, such as

    A = [ a  b ]
        [-b  a ]

for scalars a, b ∈ R.

The next theorem is fundamental to facilitating a compact and unifying approach to studying the existence of solutions of (matrix) linear equations and linear least squares problems.

Theorem 4.18. Suppose A ∈ R^{n×p}, B ∈ R^{n×m}. Then R(B) ⊆ R(A) if and only if A A^+ B = B.

Proof: Suppose R(B) ⊆ R(A) and take arbitrary x ∈ R^m. Then Bx ∈ R(B) ⊆ R(A), so there exists a vector y ∈ R^p such that Ay = Bx. Then we have

    Bx = Ay = A A^+ A y = A A^+ B x,

where one of the Penrose properties is used above. Since x was arbitrary, we have shown that B = A A^+ B.

To prove the converse, assume that A A^+ B = B and take arbitrary y ∈ R(B). Then there exists a vector x ∈ R^m such that Bx = y, whereupon

    y = Bx = A A^+ B x ∈ R(A).

EXERCISES

1. Use Theorem 4.4 to compute the pseudoinverse of [1 2; 1 2].

2. If x, y ∈ R^n, show that (x y^T)^+ = (x^T x)^+ (y^T y)^+ y x^T.

3. For A ∈ R^{m×n}, prove that R(A) = R(A A^T) using only definitions and elementary properties of the Moore-Penrose pseudoinverse.

4. For A ∈ R^{m×n}, prove that R(A^+) = R(A^T).

5. For A ∈ R^{p×n} and B ∈ R^{m×n}, show that N(A) ⊆ N(B) if and only if B A^+ A = B.

6. Let A ∈ R^{n×n}, B ∈ R^{n×m}, and D ∈ R^{m×m} and suppose further that D is nonsingular.

   (a) Prove or disprove that

       [ A  B ]+   [ A^+  -A^+ B D^{-1} ]
       [ 0  D ]  = [  0        D^{-1}   ].

   (b) Prove or disprove that

       [ A  B ]+   [ A^+  -A^+ A B D^{-1} ]
       [ 0  D ]  = [  0         D^{-1}    ].
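Theorem 4.18 gives a directly computable test for the subspace inclusion R(B) ⊆ R(A). The following hedged NumPy sketch is mine, not the book's; the random matrices and seed are arbitrary. It checks that A A^+ B reproduces B exactly when the columns of B are built from columns of A, and fails to do so for a generic B.

```python
# Sketch only: numerical check of Theorem 4.18, R(B) in R(A)  <=>  A A^+ B = B.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # 5x4, rank 3
B_in = A @ rng.standard_normal((4, 2))     # columns lie in R(A)
B_out = rng.standard_normal((5, 2))        # generic columns, not in R(A)

P = A @ np.linalg.pinv(A)                  # A A^+, the projection onto R(A)
print(np.allclose(P @ B_in, B_in))         # True
print(np.allclose(P @ B_out, B_out))       # False
```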


Chapter 5

Introduction to the Singular Value Decomposition

In this chapter we give a brief introduction to the singular value decomposition (SVD). We show that every matrix has an SVD and describe some useful properties and applications of this important matrix factorization. The SVD plays a key conceptual and computational role throughout (numerical) linear algebra and its applications.

5.1 The Fundamental Theorem

Theorem 5.1. Let A ∈ R_r^{m×n}. Then there exist orthogonal matrices U ∈ R^{m×m} and V ∈ R^{n×n} such that

    A = U Σ V^T,                                   (5.1)

where Σ = [S 0; 0 0], S = diag(σ_1, ..., σ_r) ∈ R^{r×r}, and σ_1 ≥ ... ≥ σ_r > 0. More specifically, we have

    A = [U_1  U_2] [ S  0 ] [ V_1^T ]              (5.2)
                   [ 0  0 ] [ V_2^T ]

      = U_1 S V_1^T.                               (5.3)

The submatrix sizes are all determined by r (which must be ≤ min{m, n}), i.e., U_1 ∈ R^{m×r}, U_2 ∈ R^{m×(m-r)}, V_1 ∈ R^{n×r}, V_2 ∈ R^{n×(n-r)}, and the 0-subblocks in Σ are compatibly dimensioned.

Proof: Since A^T A ≥ 0 (A^T A is symmetric and nonnegative definite; recall, for example, [24, Ch. 6]), its eigenvalues are all real and nonnegative. (Note: The rest of the proof follows analogously if we start with the observation that A A^T ≥ 0, and the details are left to the reader as an exercise.) Denote the set of eigenvalues of A^T A by {σ_i^2, i ∈ n} with σ_1 ≥ ... ≥ σ_r > 0 = σ_{r+1} = ... = σ_n. Let {v_i, i ∈ n} be a set of corresponding orthonormal eigenvectors and let V_1 = [v_1, ..., v_r], V_2 = [v_{r+1}, ..., v_n]. Letting S = diag(σ_1, ..., σ_r), we can write A^T A V_1 = V_1 S^2. Premultiplying by V_1^T gives V_1^T A^T A V_1 = V_1^T V_1 S^2 = S^2, the latter equality following from the orthonormality of the v_i vectors. Pre- and postmultiplying by S^{-1} gives the equation

    (A V_1 S^{-1})^T (A V_1 S^{-1}) = I.           (5.4)
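Theorem 5.1 can be illustrated directly with a library SVD. The sketch below is my own and assumes NumPy's numpy.linalg.svd; it checks orthogonality of the factors, the ordering and nonnegativity of the singular values, and the reconstruction A = U Σ V^T for a random 4 × 3 test matrix.

```python
# Sketch only: verify the full SVD of Theorem 5.1 for a random 4x3 matrix.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))

U, s, Vt = np.linalg.svd(A)                   # U: 4x4, s: length 3, Vt: 3x3
Sigma = np.zeros_like(A)
Sigma[:len(s), :len(s)] = np.diag(s)          # embed diag(s) in a 4x3 "diagonal"

print(np.allclose(U @ Sigma @ Vt, A))         # A = U Sigma V^T
print(np.allclose(U.T @ U, np.eye(4)))        # U orthogonal
print(np.allclose(Vt @ Vt.T, np.eye(3)))      # V orthogonal
print(np.all(s[:-1] >= s[1:]) and np.all(s >= 0))  # ordered, nonnegative
```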

Turning now to the eigenvalue equations corresponding to the eigenvalues σ_{r+1}, ..., σ_n, we have that A^T A V_2 = V_2 0 = 0, whence V_2^T A^T A V_2 = 0. Thus, A V_2 = 0. Now define the matrix U_1 ∈ R^{m×r} by U_1 = A V_1 S^{-1}. Then from (5.4) we see that U_1^T U_1 = I; i.e., the columns of U_1 are orthonormal. Choose any matrix U_2 ∈ R^{m×(m-r)} such that [U_1 U_2] is orthogonal. Then

    U^T A V = [ U_1^T A V_1   U_1^T A V_2 ]
              [ U_2^T A V_1   U_2^T A V_2 ]

            = [ U_1^T A V_1   0 ]
              [ U_2^T A V_1   0 ]

since A V_2 = 0. Referring to the equation U_1 = A V_1 S^{-1} defining U_1, we see that U_1^T A V_1 = S and U_2^T A V_1 = U_2^T U_1 S = 0. The latter equality follows from the orthogonality of the columns of U_1 and U_2. Thus, we see that U^T A V has the form [S 0; 0 0], and defining this matrix to be Σ completes the proof.

Definition 5.2. Let A = U Σ V^T be an SVD of A as in Theorem 5.1.

1. The set {σ_1, ..., σ_r} is called the set of (nonzero) singular values of the matrix A and is denoted Σ(A). From the proof of Theorem 5.1 we see that σ_i(A) = λ_i^{1/2}(A^T A) = λ_i^{1/2}(A A^T). Note that there are also min{m, n} - r zero singular values.

2. The columns of U are called the left singular vectors of A (and are the orthonormal eigenvectors of A A^T).

3. The columns of V are called the right singular vectors of A (and are the orthonormal eigenvectors of A^T A).

Remark 5.3. The analogous complex case in which A ∈ C_r^{m×n} is quite straightforward. The decomposition is A = U Σ V^H, where U and V are unitary and the proof is essentially identical, except for Hermitian transposes replacing transposes.

Remark 5.4. Note that U and V can be interpreted as changes of basis in both the domain and co-domain spaces with respect to which A then has a diagonal matrix representation. Specifically, let C denote A thought of as a linear transformation mapping R^n to R^m. Then rewriting A = U Σ V^T as A V = U Σ, we see that Mat C is Σ with respect to the bases {v_1, ..., v_n} for R^n and {u_1, ..., u_m} for R^m (see the discussion in Section 3.2). See also Remark 5.16.

Remark 5.5. The singular value decomposition is not unique. For example, an examination of the proof of Theorem 5.1 reveals that

  • any orthonormal basis for N(A) can be used for V_2;
  • there may be nonuniqueness associated with the columns of V_1 (and hence U_1) corresponding to multiple σ_i's;
  • any U_2 can be used so long as [U_1 U_2] is orthogonal;
  • columns of U and V can be changed (in tandem) by sign (or by a multiplier of the form e^{jθ} in the complex case).

What is unique, however, is the matrix Σ and the span of the columns of U_1, V_1, U_2, V_2 (see Theorem 5.11). Note, however, that a "full SVD" (5.2) can always be constructed from a "compact SVD" (5.3).
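The relation σ_i(A) = λ_i^{1/2}(A^T A) in Definition 5.2 is easy to confirm numerically. A minimal sketch, not from the text, with an arbitrary random test matrix:

```python
# Sketch only: singular values are the square roots of the eigenvalues of A^T A.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))

s = np.linalg.svd(A, compute_uv=False)              # descending
lam = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]    # eigenvalues, descending

print(np.allclose(s, np.sqrt(lam)))                 # True
```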

Remark 5.6. A factorization U Σ V^T of an m × n matrix A qualifies as an SVD if U and V are orthogonal and Σ is an m × n "diagonal" matrix whose diagonal elements in the upper left corner are positive (and ordered). For example, if A = U Σ V^T is an SVD of A, then V Σ^T U^T is an SVD of A^T.

Remark 5.7. Computing an SVD by working directly with the eigenproblem for A^T A or A A^T is numerically poor in finite-precision arithmetic. Better algorithms exist that work directly on A via a sequence of orthogonal transformations; see, e.g., [7], [11], [25].

Example 5.8.

    A = [ 1  1 ]   [ 1/3   -2√5/5   -2√5/15 ] [ 3√2  0 ] [ √2/2   √2/2 ]
        [ 2  2 ] = [ 2/3    √5/5    -4√5/15 ] [  0   0 ] [ √2/2  -√2/2 ]
        [ 2  2 ]   [ 2/3     0        √5/3  ] [  0   0 ]

is an SVD, with the single nonzero singular value σ_1 = 3√2.

Example 5.9.

    A = [ 1   0 ]   [ cos θ   sin θ ] [ 1  0 ] [ cos θ   sin θ ]
        [ 0  -1 ] = [-sin θ   cos θ ] [ 0  1 ] [ sin θ  -cos θ ],

where θ is arbitrary, is an SVD. Similarly,

    A = [ 1  0 ] = U I U^T
        [ 0  1 ]

is an SVD, where U is an arbitrary 2 × 2 orthogonal matrix. Both illustrate the nonuniqueness noted in Remark 5.5.

Example 5.10. Let A ∈ R^{n×n} be symmetric and positive definite. Let V be an orthogonal matrix of eigenvectors that diagonalizes A, V^T A V = Λ > 0. Then A = V Λ V^T is an SVD of A.
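Example 5.10 implies that for a symmetric positive definite matrix the eigenvalues and singular values coincide, since an orthogonal eigendecomposition already is an SVD. A short NumPy check of my own, with an arbitrarily constructed positive definite test matrix:

```python
# Sketch only: for symmetric positive definite A, eigenvalues = singular values,
# so an orthogonal eigendecomposition of A is already an SVD (Example 5.10).
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4.0 * np.eye(4)            # symmetric positive definite

lam = np.sort(np.linalg.eigvalsh(A))[::-1]
s = np.linalg.svd(A, compute_uv=False)

print(np.allclose(lam, s))               # True
```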

... as A = UZV rather than. (5. reduction to row or column echelon form. Note that each subspace requires knowledge of the rank r. [HI..13. . nicely in Figure 5. A = UZV. rank(A) = r = the number of nonzero singular values of A.12.6) (5. Then A has the dyadic (or outer 2. . .14. 1. . (d) R(V2) = N(A) = R(AT)1-. Then A has the dyadic (or outer product) expansion product) expansion r A = Laiuiv.= R(A T ). LetUI = [UI. say.. (c) R(VI) = N(A)1.12.£V. Using the notation of Theorem 5. Let A E Rmxn have a singular value decomposition A = U'£ VT. urn].7) explain why it is conventional to write the SVD as A = U'£VTT rather than. . The relationship to the four fundamental subspaces is summarized knowledge of the rank r. .. reduction to row or column echelon form.5) 3.8) where where .13. um] and V = [VI. 4. Remark 5.. . .1. 2.1. vn]. . Then TheoremS.7) AT Uj = aivi for i E r.1.£VTT as in Let A e jRmxn have a singular value decomposition A = UHV as in Theorem Theorem 5.5) as a sum of outer products Remark 5. the following properties hold: the notation of Theorem 5.11. Let A e jRrnxn have a singular value decomposition A = VLV T Using Theorem 5.. . The elegance of the dyadic decomposition (5. vn].. Let V = [UI.. Then (a) R(VI) = R(A) = N(A T / .6) and (5. Vn]. Let A E E mx " have a singular value decomposition A = U. .11. Let U =.2 5. rank(A) = r = the number of nonzero singular values of A..5) as a sum of outer products and the key vector relations (5.38 38 Chapter 5. The relationship to the four fundamental subspaces is summarized nicely in Figure 5. The elegance of the dyadic decomposition (5.6) and (5.1.2 Some Basic Properties Some Basic Properties Theorem 5..= N(A T ). . Introduction to the Singular Value Decomposition 5. say. urn] and V = [v\.. Then (5. Part 4 of the above theorem provides a numerically superior method for Remark 5. u r ].. vr ]. .7) explain why it is conventional to write the SVD and the key vector relations (5. Theorem 5.. i=1 (5. . Introduction to the Singular Value Decomposition Chapter 5.. Remark 5. The singular vectors satisfy the relations 3. The singular vectors satisfy the relations AVi = ajui.]. Note that each subspace requires on. VI = [VI. andV2 = [Vr+I. .1. A = U. . . (b) R(U2) = R(A)1.. U2 = [Ur+I. Part 4 of the above theorem provides a numerically superior method for finding (orthonormal) bases for the four fundamental subspaces compared to methods based finding (orthonormal) bases for the four fundamental subspaces compared to methods based on. for example. the following properties hold: 1. for example.

. Proof' The proof follows easily by verifying the four Penrose conditions. However. if we let the columns of U and V be as defined in Theorem 5. . which is clearly orthogonal and symmetric. a simple reordering accomplishes the task: reordering accomplishes the task: (5. Note that none of the expressions above quite qualifies as an SVD of A + Remark 5. ed. then = L r 1 -v. Remark 5. Note that none of the expressions above quite qualifies as an SVD of A+ if we insist that the singular values be ordered from largest to smallest. with the O-subblocks appropriately sized.11. Furthermore.. .10) .. 0 D (5. Figure 5. e\\.15. Some Basic Properties 39 39 A r r E9 {O} / {O)<!l n-r m-r Figure 5. SVD and the four fundamental subspaces.15. Proof: The proof follows easily by verifying the four Penrose conditions. . e^.er^\.11) This can also be written in matrix terms by using the so-called reverse-order identity matrix This can also be written in matrix terms by using the so-called reverse-order identity matrix (or exchange matrix) P = \err. a simple if we insist that the singular values be ordered from largest to smallest. e2. SVD and the four fundamental subspaces..=1 U.2.. if we let the columns of U and V with the Q-subblocks appropriately sized..11. (or exchange matrix) P = [e er-I.1. which is clearly orthogonal and symmetric...1.u. Some Basic Properties 5.5. However. Furthermore. then be as defined in Theorem 5.2.

mxn have an SVD given by (5. Then Let A E lR. then T-lI can be defined by T^'M.. u is clearly matrix representation for T with respect to the bases { v \ . Such a compression is analogous to the . . w.vvr}}is aa is r basisforN(A). notice that H(A) = K(AV) = R(UI S) and the matrix UiS e Rm xr has full K(UiS) and the matrix VI S E lR. the matrix representation for T with respect to the bases {VI.1). . v } and {MI . finite-precision arithmetic. Such a compression is analogous to the "compresses" A by I. u is a basis forR(A). since [u\.. . In other words. Since T is determined by its action on a basis. and since ( v \ .. the same bases is 5""1. Recall the linear transformation T used in the proof of Theorem 3.16. In other words.4). is the matrix version of (5. is not generally as reliable a procedure. . let A e Rmxn have an SVD given by (5. From Section 3.2.. and since {VI. Notice that N(A) .1). while the matrix representation for the inverse linear transformation T~ with respect to the same bases is S-I. Both compressions are analogous to the so-called row-reduced where R is upper triangular. Both compressions are analogous to the so-called row-reduced echelon form which.1.2. since {UI. Then Again..i e r. . Then VT A = :EVT = [~ ~ ] [ ~i ] D _ ... Then AV = V:E = [VI U2] [~ ~ ] =[VIS 0] ElR..M(UT Notice that M(A) = N(V T A) = N(svr> and the matrix SVf E Rrxll" has full row A/"(SV. let A E lR. Similarly. Introduction to the Singular Value Decomposition Chapter 5. / E~. .1).17 and in Definition 4..1..11). Recall the linear transformation T used in the proof of Theorem 3.mxn. notice that R(A) R(A V) This time.. Remark 5. . then T~ canbedefinedbyT-Iu.l. where R is upper triangular.3 5. the isabasisfor7£(. is not generally as reliable a procedure.r) and the matrix SVr e lR.40 40 Then Then Chapters. premultiplication of A by UT is an orthogonal transformation that rank. in Definition 4.urr}} e r. .. .17 and Remark 5. . A "full SVD" can be similarly constructed.16. .11). by orthogonal row transformations performed directly on A to reduce it to the form [~].. . . have an SVD given by (5..3 Rowand Column Compressions Row and Column Compressions Row compression Let A E R have an SVD given by (5. = ^-u. Introduction to the Singular Value Decomposition A+ = (VI p)(PS-1 p)(PVr) is the matrix version of (5. In other words... In other words. postmultiplication of A by V is an orthogonal transformation that "compresses" A by column transformations.. then T can be defined by TV. when derived by a Gaussian elimination algorithm implemented in echelon form which. then T can be defined by TVj = OjUj . r x has full row rank. postmultiplication of A by V is an orthogonal transformation column rank. . Such a row compression can also be accomplished "compresses" A by row transformations. vrr} and {u I. . mxr has full column rank. when derived by a Gaussian elimination algorithm implemented in finite-precision arithmetic. This time.1).olumn transformations. = cr.. Similarly. = tv.. Such a row compression can also be accomplished by orthogonal row transformations performed directly on A to reduce it to the form 0 . . From Section 3. 5. . premultiplication of A by VT is an orthogonal transformation that "compresses" A by row transformations. A "full SVD" can be similarly constructed. Column compression Column compression Again. urr]} is clearly S. while the matrix representation for the inverse linear transformation T-lI with respect to S.i / E~. basis forJ\f(A)±.[ SVr ] 0 mxn E lR. 
Since T is determined by its action on a basis.

Let A E ~mxn and W E IR mxm and 7 E ~nxn are (a) Show that A and WAY have the same singular values (and hence the same rank). Prove Theorem 5. Note: this is analogous to the polar form where Q is orthogonal and P = PT > 0. Note: this is analogous to the polar form iO z = rel&ofaa complex scalar z (where i = j = V^T). Do A Wand Yare A and WAY have the same singular values? Do they have the same rank? and WAY have the same singular values? Do they have the same rank? factorization of i.. [25]. Use the SVD to determine a polar factorization of A. Determine SVDs of the matrices 5. see. (b) Suppose that W and Y are nonsingular but not necessarily orthogonal. Let x e Rm. [23].[11]. [7]. 4. (a) Show that and W A F have the same singular values (and hence the same rank). [25]. y e Rn be nonzero vectors. If XTX = 0. EXERCISES EXERCISES 1. z of complex scalar z (where i j J=I).e. for performed by Gauss transformations in finite-precision arithmetic. 2. € IRmxn.e. For details. which is not generally a reliable procedure when performed by Gauss transformations in finite-precision arithmetic. Let A e Rmxn and suppose W eRmxm and Y e Rnxn are orthogonal. y E ~n Determine A e ~~ 4. of defined by A defined by A = xyT. If XT X = 2. Let A e E"xn be symmetric but indefinite. Determine an SVD of A. Determine an SVD of the matrix A E R™ xn E IRm. an SVD A.Exercises Exercises 41 41 so-called column-reduced echelon form.1 starting from the observation that AAT ~ O. Determine SVDs of the matrices (a) (b) [ ] [ ~l -1 0 -1 6. which is not generally a reliable procedure when so-called column-reduced echelon form. Let E ~~xn. For details. xyT 5. see. . Let A € R" X M .1 starting from the observation that AAT > 0. i. [7]. for example. [11].. [23]. 3. show that X = 0.. Prove Theorem 5. A = Q P 7. Let X E M mx ". A = QP 7. A E IRnxn indefinite. = o. Use the SVD to determine a where Q is orthogonal and P p T > O.

This page intentionally left blank This page intentionally left blank .

b E lRm. b]) = rank(A). A is 2. and this is possible only ifm ::: n.3) for all b E lRm if and only ifR(A) = lRm. Consider the system of linear equations Theorem 6. 1. equivalently..1. only if rank(A) < n.e. A E lRmxn.. There exists a solution to (6. A E lR mxm and A has neither a 0 singular value nor a 0 eigenvalue. and onto.3) 1. (6. There exists a unique to (6. equivalently.e. 3.3) for all b e W1 if and only if the columns of A are linearly independent.1. 2. i. There exists a unique solution to (6. A solution to (6. N(A) 0. We begin with a review of some of the principal results associated with vector linear systems. n.1 6. A G M m x m and A has neither a singular value nor a eigenvalue. There exists at most one solution to (6. 43 . (6. 3. there exists a solution if and only ifrank([A.3) for all b E lRm if and only if the columns of 5.e. rank(A) < n. n}). 6. A are linearly independent. and this is possible only ifm :::: n (since m = dim R(A) = rank(A) :::: min{m.3) is unique if and only ifJ\f(A) = 0. There exists a nontrivial solution to the homogeneous system Ax = 0 if and only if Ax = 0 if 6. A is 1-1. A is 1-1. n this is possible only ifm < n (since m dimT^(A) = rank(A) < min{m. as a special case. i.3) is unique if and only if N(A) = 0. onto.e. 4.e.1 Vector Linear Equations Vector Linear Equations We begin with a review of some of the principal results associated with vector linear systems. there exists a solution if and only j/"rank([A. There exists a solution to (6. i...3} for e R m if only ifU(A) = W". 5. General linear systems of the form equations.2) 6. i. A solution to (6. (6.3) if and only ififbeH(A).3) for all b e W" if and only if is nonsingular. and this is possible only ifm > n.3) if and only b E R(A). equivalently. There exists a solution to (6. 4. General linear systems of the form (6. b E ]Rn. A E ]Rn xn.. Theorem 6. the familiar vector system Ax = b. equivalently. i.e. A/"(A) = 0. There exists at most one solution to (6.Chapter 6 Chapter 6 Linear Equations Linear Equations In this chapter we examine existence and uniqueness of solutions of systems of linear In this chapter we examine existence and uniqueness of solutions of systems of linear equations. b]) = rank(A). Consider the system of linear equations Ax = b. as a special case. i.3) for all b E ]Rm if and only if A is nonsingular. the familiar vector system are studied and include.1) are studied and include..

i. R(A). i. equivalently.2)follow by 6. BE JR.18.2) follow by specializing even further to the case m = n. Therefore. all solutions of (6. The matrix linear equation AX = B.5). note that x = 0 is always a solution to the homogeneous system. Then any matrix eRmxk of the form of the form X = A+ B + (/ . a solution exists if and only if has a solution if and only ifR(B) S. Then we can write (6. specializing even further to the case m = n.mxk and suppose that AA+B = B. Linear Equations Chapter 6. (6. A E JR.6) Furthermore.e. to algebra. Proof: To verify that (6.6). which implies rank(A) < n by part 0 by part 3.3. Let Z be an arbitrary solution of That all solutions arc of this form can be seen as follows. where Y E JR. A is not 1-1. AZ — B.A+ A)Y..2 (Existence).5) is a solution.2 6.18. we must have the case of a nonunique solution. The matrix criterion is Theorem 4. Note that the results of Theorem 6. Therefore.6) are of this form.6) are of this form. AA+B B.5) is a solution. while results for (6. A is not I-I. Let A e Rmxn. Proof: The subspace inclusion criterion follows essentially from the definition of the range Proof: The subspace inclusion criterion follows essentially from the definition of the range of a matrix. (6.4) has a solution if and only ifl^(B) C 7£(A). Linear Equations Proof: The proofs are straightforward and can be consulted in standard texts on linear Proof: The proofs are straightforward and can be consulted in standard texts on linear algebra. D 6. E JR.1 follow from those below for the special case = 1. Note that some parts of the theorem follow directly from others. while results for (6. to prove part 6.mxk.1).2 Matrix Linear Equations In this section we present some of the principal results concerning existence and uniqueness In this section we present some of the principal results concerning existence and uniqueness of solutions to the general matrix linear system (6. premultiply by A: Proof: To verify that (6.44 Chapter 6.. equivalently.5).3. 0 . B E JR.6). Theorem 6.. i.nxk is arbitrary. all solutions of (6. premultiply by A: AX = AA+ B + A(I = B A+ A)Y + (A - AA+ A)Y by hypothesis = B since AA + A = A by the first Penrose condition.. 0 Theorem 6.2 (Existence).e. AZ :::: B. we prove part 6. Furthermore. For example. That all solutions are of this form can be seen as follows.e . note that x 0 is always a solution to the homogeneous system. The matrix linear equation Theorem 6.mxn. For example.1). mxn . and this is clearly of the form (6. Note that some parts of the theorem follow directly from others.e. Then we can write Z=A+AZ+(I-A+A)Z =A+B+(I-A+A)Z and this is clearly of the form (6. of a matrix.1 follow from those below for the special case k = 1. Let Z be an arbitrary solution of (6.5) is a solution of is a solution of AX=B. i. The matrix criterion is Theorem 4. a solution exists if and only if AA+B = B. which implies rank(A) < n must have the case of a nonunique solution. (6. +B = Theorem 6. Note that the results of Theorem of solutions to the general matrix linear system (6.

this can occur if and only if rank(A) = r = m (since r ::: m) and this is equivalent to A being onto (A + is then a right inverse).nxn. A A = f/E VT. It can be shown that the particular solution X = A+B is the solution of (6.7. A e E"x". Solution: x=A+O+(I-A+A)y = (I-A+A)y.2.mxn. equivalently.2. BE lR. But rank(A) = n that A+ A = / if r — n. D 0 Example 6.6 (Uniqueness). X• = A~ B. y E R" A + A t= I.mxn. Hence. wherer = rank(A) (recallr ::: h). there exists a nonzero solution if and only if A+A /= I.6) that minimizes TrX7 (Tr(-) denotes the trace of a matrix. Remark 6.8) . Example 6. (6. recall that TrXT X = £\ •xlj.n is arbitrary. A E lR. (TrO denotes the trace of a matrix. then it is easily R(I — A + A). Find all solutions of the homogeneous system Ax = 0. and (N(A) = 0).) Theorem 6. Clearly. Matrix Linear Equations 6. matrix.6) +B Remark 6. vD Example 6. Example 6.A+A = Vz V2 and R(Vz2V^) = R(Vz) = N(A). The second follows by noting thatA+A = I can occur only ifr = n. if there exists a unique.j jcj.mxk (6.7) has a unique solution if and only if unique if and only if A + A = I. this can occur if and only if rank(A) = r m (since equivalent to AA+Im = 1m. N(A) = O. it is easy to see that all solutions are generated y from a basis for 7£(7 . nonzero solution. it is not unique.A+A) = O. r checked that 1. if and only if A is I-lor _/V(A) = O.7) has a unique solution if and only if M(A) = 0. equivalently. recall that TrX r = Li. Here. All right inverses r < m) A (A+ of A are then of the form of A R = A+ 1m + (In .S. A R (AA(A) = A"1. Characterize AR = Im solutions R of the equation AR = 1m. Proof: Proof: The first equivalence is immediate from Theorem 6.9. Consider Example 6. rank(A) = < A This is equivalent to either rank (A) = r < n or A being singular.5. A A+ A-I Remark (/ — A + A) 0. A+ = A"1 and so (I . 7£(A) and this is 7£(/m) c R(A) equivalent to AA + 1m Im. But if A has an SVD given by A = U h VT. Clearly. (6.4.A+A). equivalently. A solution of the matrix linear equation Theorem 6. There is a unique right inverse if and only if A+A = I/ e E"xm arbitrary. Matrix Linear Equations 45 Remark 6.5.7. When A is square and nonsingular. Ax — 0. It particular (6. A+ A where Y E lR. we write 1m to emphasize the m x m identity Im matrix. Solution: There exists a right inverse if and only if R(Im) S.A+ A V2 V[ and U(V = K(V2) = N(A).6 (Uniqueness).) that minimizes TrXT X.. Computation: Since y is arbitrary.9. Characterize all right inverses of a matrix A E lR. there is no "arbitrary" component. A solution of the matrix linear equation AX = B.nxm is arbitrary. Suppose A E lR.8. in which case A must be invertible and R = A-I.7) is unique if and only if A+A = /. Consider the system of linear first-order difference equations (6. where y e lR.6. leaving only the unique solution X = A-I1B. where r rank(A) (recall r < n). / . Example 6. Thus. find all A e ]Rmx".3. Butrank(A) = n if and only if A is 1-1 or N(A) = 0. Clearly.A+ A)Y =A++(I-A+A)Y.

A n . does there exist an input sequence {u j an input sequence {"y}"~o such that xn = O? In linear system theory. We now introduce an output vector Yk to the system (6.11) with and D (p ~ 1).9 Example 6.J B] = n. We might now ask the question: Given Xo 0. Again from Theorem 6. if and only if or. overall system that are dual in the system-theoretic sense to reachability and controllability. The matrices A = [ ° Q and f ^ provide an lability and reachability are equivalent..8) is controllable if and only if if controllability.8) is given by k-J Xk = Akxo + LAk-J-j BUj j=O Uk-J ] Uk-2 (6. Linear Equations Xk with A E R"xn and B E IR nxmxm(rc>l. equivalently. B) is if(AT .10) for k ~ 1. There are many other algebraically equivalent conditions. Theorem l'/:b Clearly. this is called controllability..10.8) is reachable if and only if if R([ B. if A is nonsingular.:b dual to reachability is called observability: When does knowledge of {" j }"!Q and {y_/}"~o suffice to determine (uniquely) Jt0? As a dual to controllability. equivalently. this is a question va [Uj }k~:b such that x^ takes an arbitrary value in W ? In linear system theory. B T] is observable [reconsrrucrible] [controllablcl if and T) observable [reconstructive]. The vector Jt* in linear system theory is e IR nx " fieR" (n ~ I. We can then pose some new questions about the overall system that are dual in the system-theoretic sense to reachability and controllability. Since m ~ I.8) is given by solution of (6. The matrices A = [~ ~]1and B5 == [~] 1 providean example of a system that is controllable but not reachable. we of reachability. we have the notion of reconstructibility: When does knowledge of {u jy }"~Q and {. this is a question {u }y~Q Xk of reacbability..• A k k-J B] [ ~o (6.9) ~Axo+[B. AB. AB.AB •. Theorem 6. The linear differential equations).2. Example 6.y/}"Io suffice to determine reconstructibility: When does knowledge of {w r/:b and {YJ lj:b suffice to determine (uniquely) xn? The fundamental duality result from linear system theory is the following: (uniquely) xnl The fundamental duality result from linear system theory is the following: E RPxn e IR pxn E RPxm € IR pxm (A. does there exist an input sequence {ujj 1jj^ such that Xk takes an arbitrary value in 1R"? In linear system theory. The condition dual to reachability is called observability: When does knowledge of {u 7 r/:b and {Yj l'. standard conditions with analogues for continuous-time models (i. .J B]) = 1R" or. The answers are cast in terms that are dual in the linear algebra sense as well. The general solution of (6. We might now ask the question: Given XQ = 0.9 by appending the equation by appending the equation (6. linear differential equations). see that (6. We can then pose some new questions about the with C and (p > 1).e. does there exist an input sequence for k > 1. reachability always implies controllability and. example of a system that is controllable but not reachable. this is called such that Xn = 0? linear system theory. B) iJ reachable [controllable] ifand only if (A . (A. m known as the state vector at time while Uk is the input (control) vector.T. We now introduce an output vector yk to the system (6. The above are standard conditions with analogues for continuous-time models (i.. The general known as the state vector at time k while Uk is the input (control) vector. . from the fundamental Existence Theorem. .8) of Example 6..ra>l). A related question is the following: Given an arbitrary initial vector Xo. 
we have the notion of suffice to determine (uniquely) xo? As a dual to controllability..46 46 Equations Chapter 6.8) of Example 6. The condition The answers are cast in terms that are dual in the linear algebra sense as well. . we see that (6. There are many other algebraically equivalent conditions. from the fundamental Existence Theorem. A n . if and only if rank [B..e..10. controlA 1 lability and reachability are equivalent. does there exA related question is the following: Given an arbitrary initial vector XQ. Since > 1.~ I).2.

v E R(R). e Rmxn. A compact matrix criterion for uniqueness of solutions to (6.27. C E jRmxn. particularly for block matrices. by definition. Let A E Rmxn. and C E Rpxti. sociated e jRnxn. Invertibility is assumed for any component or subblock whose inverse is indicated.CBuo . By the fundamental Uniqueness Theorem.3 6. +L k-l CAk-1-j BUj + DUk. if and only if r Yn-] - Lj:~ CA n . by definition.4 Some Useful and Interesting Inverses Some Useful and Interesting Inverses In many applications.4 Some Useful and Interesting Inverses 6. in which case the general solution is of the form (6. By the fundamental the right-hand side.14) requires the notion A compact matrix criterion for uniqueness of solutions to (6. mxm and D E jRm Invertibility is assumed for any component or subblock whose inverse is and D € E xm. 6. Theorem 6.6. E jRnxm.13) Let v denote the (known) vector on the left-hand side of (6. In these identities. Theorem 6. equivalently.. and C e jRpxq.4 Some Useful and Interesting Inverses 47 To derive a condition for observability.Du] (6. the has a solution if and only if AA+BC+C = B.15) E jRnxp where Y € Rn*p is arbitrary. asbelow is a small collection of useful matrix identities. so a solution exists. Then.2 -j BUj . the coefficient matrices of interest are square and nonsingular. particularly for block matrices. B E Rnxm. Such a criterion (CC+ <g) A+ A — I) is stated and proved in Theorem 13. notice that To derive a condition for observability.6.DUn-l 6. the coefficient matrices of interest are square and nonsingular. . A E Rnxn. or. Verification of each identity is recommended as an exercise for the reader. Then the equation e jRmxn. Such a criterion (C C+ ® A +A = I) of the Kronecker product of matrices for its statement. (6. the solution is then unique if and only if N(R) ==0. so a solution exists. Then. equivalently.Duo Yl . Verification of each identity is recommended as an exercise for the reader.11. is stated and proved in Theorem 13.3 A More General Matrix Linear Equation A More General Matrix Linear Equation AXC=B (6. notice that Yk = CAkxo Thus. 0. Listed below is a small collection of useful matrix identities.6. B E Rmxq.13) and let denote the matrix on the right-hand side. in which case the general solution is of the has a solution if and only if AA + BC+C = B.14) Theorem 6. Listed In many applications.13) and let R denote the matrix on Let denote the (known) vector on the left-hand side of (6.12) j=O Yo .14) requires the notion of the Kronecker product of matrices for its statement.4 6. associated with matrix inverses. the solution is then unique if and only if N(R) Uniqueness Theorem. indicated. B e jRmx q . arbitrary. Thus. e Tl(R). if and only if or.27.

This result follows easily from the block UL factorwhere F = (A — ED C) This result follows easily from the block UL factorization in property 17 of Section 1. This result follows easily from the block LU factorization in property 16 of Section 1. r A~I [~ ~ r [-D~I~A-I D~I 1 ~r ~~B 1 r l [~ ~ r [-D~CF +-~~I~.. for example. that X~l [~ !/ [~ ~ r [~ -~ l [~ ~/ r [~ -~ 1 l l l = [ ~ 4.8. theory. where F = (A . As in Example 6./ blocks may be exchanged.4. l 8.I . Note that the positions of the / and .A~lB(D~ CA~lB)~[CA~l This result is known as the Sherman-Morrison-Woodbury formula.B D. Assuming 2. 5. [ / +c 7. Linear Equations 1. Let A € E mx ".I C) -I.c E E") that arise in optimization (A + xx T ) — (with symmetric A e lRnxn and x e lRn) that arise in optimization theory. characterize all left inverses of a matrix A e lR ". BB EelR fflxk and suppose AAhas an SVD as in Theorem 5.48 Chapter 6. formulas for the inverse of a sum of matrices such as (A + D)-lor (A-I1 + D-I)-I. Linear Equations Chapter 6. X.I ] D.CA. It also the inverse of a sum of matrices such as (A + D)"1 or (A" + D"1) It also yields very efficient "updating" or "downdating" formulas in expressions such as yields very efficient "updating" or "downdating" formulas in expressions such as T (A + JUT ) -I1 (with symmetric A E R"x" and .I B)-I (E is the inverse of the Schur complement of A). BC 6.1.4. for example. 1. (A BDCr1 = A-I .A-IB(D-lI + CA-IB)-ICA-I. ization in property 17 of Section 1.4.8. It has many This result is known as the Sherman-Morrison-Woodbury formula. l = l = [!C / [~ ~ l = [ A-I +_~~!~CA-I -A~BE = D. characterize all solutions of the matrix linear equation 7Z(B) c 7£(A). mx . characterize all left inverses of a matrix A E Mm xn . = = Both of these matrices satisfy the matrix equation X2 = / from which it is obvious these matrices satisfy the matrix equation X^ = I from which it is obvious Both of that X-I = X. Let A E lRmxn. This where E = (D — CA B) (E is the inverse of the Schur complement of A). [~ ~ r l 3. (A + BDC)-I = A~l . 2.I EXERCISES EXERCISES 1. 2.. characterize all solutions of the matrix linear equation AX=B in terms of the SVD of A in terms of the SVD of A. Rmxk and suppose has an SVD as in Theorem 5. Note that the positions of the / and — / blocks may be exchanged. As in Example 6. Assuming R(B) ~ R(A). result follows easily from the block LU factorization in property 16 of Section 1..BD-I l = [ -A-I BD. It has many applications (and is frequently "rediscovered") including.1. where E = (D .4. formulas for applications (and is frequently "rediscovered") including. 1.

€ IRn and suppose that x T y i= 1. ..y Assume that Yji i= 0 for some i/ and j. y E E" and suppose further that XTy i= 1.10.Exercises Exercises 3. Show that the matrix B — A — —eie T : (i. Show that the matrix B = A . A with yl subtracted from its (ij)th element) is singular. As in Example 6. Let x. c and individual elements y. Let A e R"xxn and let A"1 have columns c\. Show that -cxJ C ' where c 1/(1 — T y).e. 6. y e IRn and suppose further that x T y ^ 1.e. Show that (/ . in Example 6.10.e.. check directly that the condition for reconstructibility takes the 6.Cn and individual elements Yij. l' Hint: Show that Ci E N(B). Hint: Show that ct <= M(B). y E E" and suppose further that XTy ^ 1.. 5. (i.. A with — subtracted from its (zy)th element) is singular..l ~i e. T 4. Let jc... . Assume that x/( 7^ 0 for some and j.. Let A E 1R~ " and let A -1 have columns Cl. . Show that 4. Show that 3. where C = 1/(1 . check directly condition for reconstructibility the form form N[ fA J CA n 1 ~ N(A n ). Let x.xy) T -1 49 = I - 1 xTy -1 xy . .x xTy).

This page intentionally left blank This page intentionally left blank .

yp2 = P. Let V be a vector space with V = X EEl Y. say on X along Y (using the notation of Definition 7.1.y • V —>• c V has a unique decomposition v = x + y with x e X and y E y. px.y- Theorem 7.26. Oblique projections. Proof: Suppose P is a projection. Px. Oblique projections.x — I — Px. i. Figure 7.y is called the (oblique) projection on X along 3^.3.x = I -px.y is linear and P# y — px. Also.3. say on X along y (using the notation of Definition 7. P isaprojectionifandonlyifl -P isaprojection.y is linear and pl. and Norms 7. Inner Product Spaces.1. Infact. Px.1 7. Infact. every v e V has a unique decomposition v x y with x E and y e y. V by by PX. y = Px. Py.yV = x for all v E V.y.e.y. 51 51 . P2 = P.1). every v E V Definition 7..e. Inner Product Projections.1. Theorem 7. Proof: Suppose P is a projection. Define PX y : V ---+ X <.1 displays the projection of v on both X and Y in the case V = ]R2.2.1 Projections Definition 7.1. Also. By Theorem 2.2. i. Define pX.1 displays the projection of von both and 3^ in the case = Figure 7.y is called the (oblique) projection on X along y. y x Figure 7.Chapter 7 Chapter 7 Projections. Theorem 7. P is a projection if and only if I —P is a projection.26. A linear transformation P is a projection if and only if it is idempotent. Py. By Theorem 2. Theorem 7.1). Figure 7. A linear transformation P is a projection if and only if it is idempotent. PX. and Norms Spaces.. Let V be a vector space with V X 0 y.

1 and 5. Write x = Px (I — P)x.1 7.P)x E XL. while Py = P(I P}v = x Pv .3. V = X $ Y and the projection on X along Y is P.P)x = xTP(I . If v e y.P)x = x T P(l . p2 = P.V Theorems 5. let A E Rmxn with SVD A = U!:VTT = A = UT.X^X = Px±. and P must be an orthogonal projection. then Pv = O. (I . say. p 2v = PPv = Let u E V be arbitrary.. PX. Note that (I .PX. Let X n y.P)v = = Pv. Then Px = p 2v = Pv = x so x e X. We now prove that V = X $ y.P)v. then v = 0.xx by Theorem 7. suppose symmetric projection matrix and let x be arbitrary. we must have pT (I .P)v.4. Projections. Now let v E V be arbitrary. Hence PT = PTP = P. If v E Y. R" be Proof: Let P be an orthogonal projection (on X. .=1 m PR(A). mental subspaces. then Pv v.)x = PXJ. We now prove and Y = {v E V : Pv = OJ. Then Pv = P(x + y) = Px = x. i=r+l PN(A) 1. A 6 jRmxII UtSVf. and Norms Let v e V be arbitrary. Thus. X 0 y and the projection on X along y is P. Essentially the same argument shows that / .P)x = O. Thus.P is the projection on Y along X. Thus.52 52 Chapter 7. Then Px = P2v = Pv = x so x E X.P)x = (I .xx by Theorem 7.XLtion and we then use the notation P x = PX. P = P. we have ( P y f I (/ .P)x = (I .. Hence if v E X ny.P}x = 0. It is easy to check that X and 3^ are subspaces.3. then v = O. Projections. (I . P e jRnxn is the matrix of an orthogonal projection (onto R(P)) if and only ifP2 PT if p2 = p = pT.xl. Since x and y were arbitrary.1 .XL iss called an orthogonal projection and we then use the notation PX = PX. Then symmetric projection matrix and let x be arbitrary. we must have P (I — P) = O. with the second equality following since PTP is symmetric.P)v.. arbitrary.P)x = O. In the special case where Y = X^.1.P)x 6 R(P)1and P must be an orthogonal projection.P)v. Let x = Pv. P Proof: Let P be an orthogonal projection (on X. Conversely.1 The four fundamental orthogonal projections The four fundamental orthogonal projections Using the notation of Theorems 5.P)x. P2v = P Pv — 2 2 Px = x = Pv. suppose P = P. Inner Product Spaces. Then x T pT (I . Thus.uT. D Essentially the same argument shows that I — P is the projection on y along X. Hence pT = pT P = P. suppose p2 = P. Note that (/ . Then v = Pv + (I .xJ. D 0 7.P)x E ft(P)1 xTPT(I . Moreover. y = (I .L 1.1 5. Thus. Inner Product Spaces. Thus. Conversely. First note that v E X.P)x = yTTpT (I .4. Moreover. Since Py E X.5. . T Since x and y were arbitrary. then (/ . say.P)x e X1-. yy Ee jR" be arbitrary. and Norms Chapter 7. while Py = P(l -. Now let u e V be arbitrary. along XXL} and let x. In the special case where y X1-. A+A VIV{ r LViVT are easily checked to be (unique) orthogonal projections onto the respective four fundaare easily checked to be (unique) orthogonal projections onto the respective four fundamental subspaces.A+A V2V{ L i=r+l i=l 11 ViVf. Px E R(P). Then U\SVr Then r PR(A) AA+ U\U[ Lu. Py e X. y = (I . It is easy to check that X and Y are subspaces. Hence that V X 0 y. suppose P is a is a with the second equality following since pT P is symmetric. Conversely. 0 Definition 7. px. then Pv = v. Then v if v € Pv (I . P)x = y PT(I P)x = 0. PN(A)J..XL Theorem 7. Write x = P x + (I . * called an orthogonal projecDefinition 7. since Px e U(P).11.5. then Pv = 0. along 1-) and let jc.11. Let X = {v E V : Pv = v} Px = x = Pv.P) = 0. First note that iftfveX. we have (py)T ((I . V Pv .P2v = 0 so Y E y. Let X = {v e V : Pv = v} and y {v € V : Pv 0}. Conversely. P E E"xn is the matrix of an orthogonal projection (onto K(P)} if and only 7.px. 
Then Pv = P(x + y) = Px = x.p 2 v 0 so y e Thus.AA+ U2 U ! LUiUT.

. Specifically.2. Recall the proof of Theorem 3. Orthogonal projection on a "line. IR n Rm 1 n Let X E IR be an arbitrary vector. The expression for x\ is simply the orthogonal projection of XI projection of rather x on S. T W W Moreover. ..11.A+ A)x 2 = A+ Ax + (I = VI vt x + V Vi x (recall VVT = I).6. Recall the diagram of the four fundamental subspaces.8) = (WTV) W.8) (using Example 4. in fact.7.7. . Recall the proof of Theorem 3. The indicated direct sum decompositions of the domain E" and co-domain IRm are given easily as follows. the vector z that is orthogonal to wand such that v = P v + z is given by z is given by z = PK(W)±Vv = (/ — PK(W))V = v — (^-^ j w.11. Recall the diagram of the four fundamental subspaces. An arbitrary vector x e IRn was chosen and a formula for x\ appeared rather mysteriously.1.8. orthogonal: that z and u.2. See Figure 7. Vk} was an orthomormal basis for a subset S of W1..2.1.~) w...(:. { v \ . Then X = PN(A)u + PN(A)X . Projections 7.Pn(w»v = v . the vector z that is orthogonal to w and such that Pv Moreover. Then Let x e W be an arbitrary vector. {VI.. There.7. Example 7. in fact. There. are. Determine the orthogonal projection of a vector v E IR n on another nonzero vector w E IRn. A direct calculation shows that and ware.2. e Rn Solution: Think of the vector w as an element of the one-dimensional subspace IZ(w). Projections 53 Example 7.8.. Vk} was an orthornormal Example 7. An arbitrary vector x E R" was chosen and a formula for XI basis for a subset of IRn. See Figure 7. Solution: Think of the vector w as an element of the one-dimensional subspace R( w). Orthogonal projection on a "line. A direct calculation shows z = Pn(w)"' = (l ." Example 7. X on Specifically. The indicated direct Example 7.6." Figure 7. orthogonal: v z Pv w Figure 7. Determine the orthogonal projection of a vector e M" on another nonzero Example 7. Then the desired projection is simply Then the desired projection is simply Pn(w)v = ww+v wwTv (using Example 4. .

Then ('. Let Example 7. and Norms Chapter 7. (jc. y) Q = X T Qy. Then { • • V x V if product if 1. y e V. Yl. f3ftE IR.54 Chapter 7. Example 7.y E V. Let V = E".11. 3. Yl) + f3(x. Let V = R". x) for all x. Y2 E V and/or all a. j2 ^ V and for alia. e R. V = IRn.9. Example 7. (x. .12. 3.2 Inner Product Inner Product Spaces Definition 7.10. as follows: o o 4] uniquely into the sum of a vector in N(A)-L 4V uniquely into the sum of a vector in A/'CA)-1 r 1/4 1/4 ] 1/4 1/4 [!]~ = = A' Ax + (l - A' A)x 1/2 -1/2 1/2 1/2 0] [ 2] [ -1/2 1/2 + [ 1~2 1~2 ~ o o ! 5/2] [-1/2] 1~2 . Projections.10. defines a "weighted" inner product.AA+)y = U1Ur y + U2U[ Y (recall UU T = I). let Y E ]Rm be an arbitrary vector. Inner Product Spaces. y) = (y. Example 7.) ) :: V x V -+ IR is a real inner is a real inner Definition 7. Let Then Then and we can decompose the vector [2 3 and we can decompose the vector [2 3 and a vector in N(A). .11. > Ofor all E V ( x x) =0 if 2. .9. respectively. Inner Product Spaces. n x n positive definite matrix. aYI + PY2) = a(x. y\) + /3(jt.13. {*. such that {x. y) for all x € Rm and for all y e R". If e IR mx ". respectively. ATE IR nxm transformation Definition 7. Projections. y} = XTy is the "usual" Euclidean inner product or dot product. If A E Rm xn.13. Then Y = PR(A)Y + PR(A)~Y = AA+y + ( l . Ay) = {AT x.(A . let y e IR m be an arbitrary vector. x } = 0 if and only ifx = 0. Y2) for all x. yi. only ifx = O.12. and Norms Similarly. (x. (x. [ 5~2 + 7. where Q = QT > 0 is an arbitrary Q = Q T > is an Example 7.x)forallx. Let V be a vector space over R.. (x. Then {^. as follows: and a vector in J\f(A). Let V = IRn. Then (x. Let V be a vector space over IR. y) x T Y is the "usual" Euclidean inner product or Example 7. then AT e Rn xm is the unique linear transformation or map T E IRm andfor IRn. Then Similarly. y)Q = XT Qy. (x. x) ::: Qfor aU x 6V and (x. Then (x. 2. definite defines Definition 7. y^} for all jc. y) = (y. cryi + ^2) = a(x.

7.2. Inner product Spaces 7.2. Inner Product Spaces

55 55

It is easy to check that, with this more "abstract" definition of transpose, and if the It is easy to check that, with this more "abstract" definition of transpose, and if the (i, j)th element of A is aij, then the (i, j)th element of AT is ap. It can also be checked (/, y)th element of A is a(;, then the (i, y)th element of AT is a/,. It can also be checked that all the usual properties of the transpose hold, such as (Afl) = BT AT. However, the that all the usual properties of the transpose hold, such as (AB) = BT AT. However, the

definition above allows us to extend the concept of transpose to the case of weighted inner definition above allows us to extend the concept of transpose to the case of weighted inner products in the following way. Suppose A e Rmxn and let (., .) Q and (•, .) R, , with Q and A E ]Rm xn (., -}R with Q and {-, -}g R positive definite, be weighted inner products on Rm and W, respectively. Then we can positive definite, be weighted inner products on IR m and IRn, respectively. Then we can define the "weighted transpose" A # as the unique map that satisfies define the "weighted transpose" A# as the unique map that satisfies
(x, AY)Q = (A#x, y)R all x e IRm (x, Ay)Q = (A#x, Y)R for all x E Rm and for all Y E W1. y e IRn.

By Example 7.l2 above, we must then have x T QAy x T (A#{ Ry for all x, y. Hence we By Example 7.12 above, we must then have XT QAy = xT(A#) Ry for all x, y. Hence we transposes (of AT Q = RA#. must have QA = (A#{ R. Taking transposes (of the usual variety) gives AT Q = RA#. QA = (A#) R. Since R is nonsingular, we find Since R is nonsingular, we find
A# = R-1A Q. A* = /r'A' TQ.

We can also generalize the notion of orthogonality (x T = 0) to Q -orthogonality (Q is We can also generalize the notion of orthogonality (xTyy = 0) to Q-orthogonality (Q is a positive definite matrix). Two vectors x, y E IRn are Q-orthogonal (or conjugate with a positive definite matrix). Two vectors x, y e W are <2-orthogonal (or conjugate with T X Qy O. Q-orthogonality is an important tool used in respect to Q) if ( x y) Q respect to Q) if (x,, y } Q = XT Qy = 0. Q -orthogonality is an important tool used in studying conjugate direction methods in optimization theory. studying conjugate direction methods in optimization theory. Definition 7.14. Let V be a vector space over C. Then (., •} : V V -> Definition 7.14. Let V be a vector space over <C. Then {-, .) : V x V -+ C is a complex is a complex inner product if inner product if

1. (x,, x ) :::: Qfor all x e V and ( x , x ) = 0 if and only if x = 0. 1. ( x x) > 0 for all x E V and (x, x) =0 if and only ifx = O.

2. (x, y) = (y, x) for all x, y E V. (y, x) for all x, y e V. 2. (x, y)
3. (x, aYI + fiy2) = a(x, y\) + fi(x, Y2) for all x, YI, y2 E V and for alia, f3 6 C. 3. (x,ayi f3Y2) = a(x, yll f3(x, y2}forallx, y\, Y2 e V andfor all a, ft E c. Remark 7.15. We could use the notation (., ·)e to denote a complex inner product, but Remark 7.15. We could use the notation {•, -}c to denote a complex inner product, but if the vectors involved are complex-valued, the complex inner product is to be understood. if the vectors involved are complex-valued, the complex inner product is to be understood. Note, too, from part 2 of the definition, that (x, x) must be real for all x. Note, too, from part 2 of the definition, that ( x , x ) must be real for all x.
Remark 7.16. Note from parts 2 and 3 of Definition 7.14 that we have Remark 7.16. Note from parts 2 and 3 of Definition 7.14 that we have

(ax\ + fix2, y) = a(x\, y) + P(x2, y}.
Remark 7.17. The Euclidean inner product of x, e C" is given by Remark 7.17. The Euclidean inner product of x, y E C n is given by
n

(x, y)

= LXiYi = xHy.
i=1

The conventional definition of the complex Euclidean inner product is (x, y) yH but we The conventional definition of the complex Euclidean inner product is (x, y} = yHxx but we use its complex conjugate H here for symmetry with the real case. use its complex conjugate xHyy here for symmetry with the real case.

Remark 7.18. A weighted inner product can be defined as in the real case by (x, y)Q = Remark 7.1S. A weighted inner product can be defined as in the real case by (x, y}Q — x H Qy, arbitrary Q QH > o. notion Q-orthogonality can be similarly XH Qy, for arbitrary Q = QH > 0. The notion of Q -orthogonality can be similarly generalized to the complex case. generalized to the complex case.

56 56

Chapter 7. Projections, Inner Product Spaces, and Norms Chapter 7. Projections, Inner Product Spaces, and Norms

Definition 7.19. A vector space (V, F) endowed with a specific inner product is called an Definition 7.19. A vector space (V, IF) endowed with a specific inner product is called an inner product space. If F = C, we call V a complex inner product space. If F = R, we inner product space. If IF = e, we call V a complex inner product space. If IF = R we call V a real inner product space. call V a real inner product space.
Example 7.20. Example 7.20. 1. Check that V = IRn x" with the inner product (A, B) = Tr AT B is a real inner product 1. Check that = R" xn with the inner product (A, B) = Tr AT B is a real inner product space. Note that other choices are possible since by properties of the trace function, space. Note that other choices are possible since by properties of the trace function, Tr AT B = TrB TA = Tr A B = TrBAT TrATB = Tr BTA = TrABTT = Tr BAT..

2. Check that V = e nxn with the inner product (A, B) = Tr AHB is a complex inner Tr AH B is a complex inner 2. Check that V = Cnx" with the inner product (A, B) product space. Again, other choices are possible. product space. Again, other choices are possible. Definition 7.21. Let V be an inner product space. For v e V, we define the norm (or Definition 7.21. Let V be an inner product space. For v E V, we define the norm (or length) ofv by IIvll = */(v, v). This is called the norm induced by (',, -.).. length) ofv by \\v\\ = -J(V,V). This is called the norm induced by ( - ) Example 7.22. Example 7.22. 1. If V = E." with the usual inner product, the induced norm is given by ||i>|| 1. If V = IRn with the usual inner product, the induced norm is given by II v II = n 2 2 1
(Li=l V i (E,=i<Y))2.xV—*« 9\ 7

2. If V = en with the usual inner product, the induced norm is given by II v II = 2. If V = C" with the usual inner product, the induced norm is given by \\v\\ "n (L...i=l IVi ) ! (£? = ,l»,-lI22)*.. Theorem 7.23. Let P be an orthogonal projection on an inner product space V. Then Then Theorem 7.23. Let P be an orthogonal projection on an inner product space \\Pv\\ ::::: Ilvll for all v e V. IIPvll < \\v\\forallv E V.
Proof: Since P is an orthogonal projection, p2 = P = pH. (Here, the notation p# denotes Proof: Since P is an orthogonal projection, P2 = P = P#. (Here, the notation P# denotes the unique linear transformation that satisfies ( P u , } = (u, p#v) for all u, v E If this the unique linear transformation that satisfies (Pu, vv) = (u, P#v) for all u, v e V. If this seems a little too abstract, consider V = R" (or en), where P# is simply the usual PT (or seems a little too abstract, consider = IRn (or C"), where p# is simply the usual pT (or pH)). Hence (Pv, v) = (P 2v, v) = (Pv, p#v) = (Pv, Pv) = IIPvll 2 > O. Now / - P is PH)). Hence ( P v , v) = (P2v, v) = (Pv, P#v) = ( P v , Pv) = \\Pv\\2 ::: 0. Now / - P is also a projection, so the above result applies and we get also a projection, so the above result applies and we get

0::::: ((I - P)v. v) = (v. v) - (Pv, v)
=

IIvll2 - IIPvll 2

from which the theorem follows. from which the theorem follows.

0

Definition 7.24. The norm induced on an inner product space by the "usual" inner product Definition 7.24. The norm induced on an inner product space by the "usual" inner product is called the natural norm. is called the natural norm.
In case V = C" or V = R",, the natural norm is also called the Euclidean norm. In In case = en or = IR n the natural norm is also called the Euclidean norm. In the next section, other norms on these vector spaces are defined. A converse to the above the next section, other norms on these vector spaces are defined. A converse to the above procedure is also available. That is, given a norm defined by IIx II = •>/(•*> x), an inner procedure is also available. That is, given a norm defined by \\x\\ — .j(X,X}, an inner product can be defined via the following. product can be defined via the following.

7.3. Vector Norms 7.3. Vector Norms Theorem 7.25 (Polarization Identity). Theorem 7.25 (Polarization Identity).
1. For x, y E m~n, an inner product is defined by 1. For x, y € R", an inner product is defined by (x,y)=xTy=

57 57

IIx+YIl2~IIX_YI12_

IIx + yll2 _ IIxll2 _ lIyll2 2

2. For x, y E en, an inner product is defined by 2. For x, y e C", an inner product is defined by

where j = i = \/—T. where j = i = .J=I.

7.3 7.3

Vector Norms Vector Norms

Definition 7.26. Let (V, F) be a vector space. Then \ Definition 7.26. Let (V, IF) be a vector space. Then II \ -. \ II\ : V ---+ R is a vector norm ifit V ->• IR is a vector norm if it satisfies the following three properties: satisfies the following three properties:
1. Ilxll::: Ofor all x E V and IIxll = 0 ifand only ifx

= O.

2. Ilaxll = lalllxllforallx

E

Vandforalla

E

IF.

3. IIx + yll :::: IIxll + IIYliforall x, y E V. (This is called the triangle inequality, as seen readily from the usual diagram illus (This is called the triangle inequality, as seen readily from the usual diagram illustrating the sum of two vectors in ]R2 .) trating the sum of two vectors in R2 .) Remark 7.27. It is convenient in the remainder of this section to state results for complexRemark 7.27. It is convenient in the remainder of this section to state results for complexvalued vectors. The specialization to the real case is obvious. valued vectors. The specialization to the real case is obvious. Definition 7.28. A vector space (V, F) is said to be a normed linear space if and only if Definition 7.28. A vector space (V, IF) is said to be a normed linear space if and only if there exists a vector norm || • || : V -> R satisfying the three conditions of Definition 7.26. there exists a vector norm II . II : V ---+ ]R satisfying the three conditions of Definition 7.26. Example 7.29. Example 7.29.

1. For x E en, the Holder norms, or p-norms, are defined by 1. For e C", the HOlder norms, or p-norms, are defined by

Special cases: Special cases: (a) Ilx III = L:7=1
IXi

I (the "Manhattan" norm).
1

(b) Ilxllz = (L:7=1Ix;l2)2 = (c) Ilxlioo

(X

H

1

X)2

(the Euclidean norm).

= maxlx;l
IE!!

=

(The second equality is a theorem that requires proof.) (The second equality is a theorem that requires proof.)

p---++oo

lim IIxllp-

31 and Remark 7. i.e. Then with equality if and only if x and yare linearly dependent. The norm || • ||2 is unitarily invariant. Ther. Then Theorem 7. Projections. Since yHxx = x Hy.g. 217]). then Remark 7. > 0. Since is a nonnegative definite matrix. Remark 7. 217]). 11·111 and 1I·IIClO XHUHUx . However. Let x. In other words. On the vector space (C[to. it is particularly easy to remember. y e C" may be defined by Remark 7. Let x. with equality if and only if x and y are linearly dependent. denoted II ..33. Theorem 7. The norm II . However. and Norms 2. In other words.32.32 are true for general inner product spaces.. R). its determinant must be nonnegative. its determinant must be nonnegative.. Inner Product Spaces. e.31 and Remark 7.31 (Cauchy-Bunyakovsky-Schwarz Inequality).30 (Holder Inequality).e.1^ IIUxll2 = IIxll2 (Proof IIUxili = x U Ux = xHx = IIxlli)· However. || . is a nonnegative definite matrix. On the vector space (C[to. we see immediately that \XH yl < IIxll2l1yllz. Projections. Let x.32.30 (HOlder Inequality). tO~t:5.t~JI On the vector space ((C[to. [20.D = E^rf/l*/!. p... define the vector norm 3. The angle e between two nonzero vectors x. However.(x Hyy)(yH x). p q I I A particular case of the HOlder inequality is of special interest.g. Then Fhcorem 7. Let x.D = L~=ld. Proof' Consider the matrix [x y] E C" x2 . Some weighted p-norms: (a) IIxll1.||. where 4 > O. p. 1Ft).l. Theorem 7.34.32 are true for general inner product spaces. y E C".31 (Cauchy-Bunyakovsky-Schwarz Inequality). \\Ux\\l XHX = \\x\\\). The C-B-S inequality is thus equivalent to the statement I.^|| cos e = Il-Mmlylb 0 ~ 0 < I' The C-B-S inequality is thus equivalent to the statement ~ ^ | COS 0 < 1. define the vector norm 1111100 = max II/(t) 11 00 . ttl. if U € C"x" is unitary.tl Theorem 7. Remark 7..58 58 Chapter 7.> where Q = QH > 0 (this norm is more commonly = QH > Ikllz. we see immediately that IXH y\ ~ 0 < ( x H ) ( y H y ) — ( x H ) ( y H x ) . ttlr.Q — (xhH Qx) 2. and || .33. D 0 \\X\\2\\y\\2Note: This is not the classical algebraic proof of the Cauchy-Bunyakovsky-Schwarz Note: This is not the classical algebraic proof of the Cauchy-Bunyakovsky-Schwarz (C-B-S) inequality (see. ||JC||. 1cose 1|~ 1.34. Theorem 7. o ~ (x Hxx)(yH y) . A particular case of the Holder inequality is of special interest. Inner Product Spaces. IIQ)' 1 3. define the vector norm 11111 = max 1/(t)I· to:::. The angle 0 between two nonzero vectors x. Since yH = x H y. (b) IIx IIz. (C-B-S) inequality (see. if U E enxn is unitary. -+-=1. whered.g = (x QXY denoted || • ||c). y e en.lx. e. y E en may be defined by cos# = 1I. Some weighted p-norms: 2. then H H \\Ux\\2 \\x\\2 (Proof. „ |. 1Ft). y e C". [20. 112 is unitarily invariant. define the vector norm On the vector space «e[to. it is particularly easy to remember. 0 < e — 5-. Remark 7. Remark 7. Since Proof: Consider the matrix [x y] e en x2 . y E en.~~1~1112'. t\])n. and Norms Chapter 7. i. t \ ] R).

2 the proof of which follows easily from ||z||2 _ z_//. y E en are orthogonal. Definition 7. IIA + BII :::: IIAII + IIBII for all A. For x E en.36.. || || R mx " ~ E is a matrix norm if it satisfies the following three Definition 7.. i. Theorem 7. i. i. this is called the triangle inequality... the motivation for In this section we introduce the concept of matrix norm.4 7. the following inequalities are all tight bounds. Matrix Norms 7. 2. Let \\ \\ be a vector norm and suppose v. ConFinally. All norms on en are equivalent. Finally. e C". 7. we conclude this section with a theorem about convergence of vectors. vectors under orthogonal transformation. IR) since that is "convergence" of matrices. Extension to the complex case is straightforward and essentially obvious. 3. i. The using matrix norms is to have a notion of either the size of or the nearness of matrices.7. there exist vectors x for which equality holds: vectors x for which equality holds: Ilxlll :::: Jn Ilxlb Ilxll2:::: IIxll» IIxlloo :::: IIxll» Ilxlll :::: n IIxlloo.35.e. there exist constants c\. If y € C" are orthogonal.4 Matrix Norms Matrix Norms In this section we introduce the concept of matrix norm. lIaAl1 = lalliAliforall A E mxn andfor all a E IR..37. IIxlioo :::: IIxllz. Theorem 7. II·• II : IR mxn -> IR is a matrix norm if it satisfies the following three properties: properties: 1.) . while the latter is needed to make sense of "convergence" of matrices. there exist constants CI. the motivation for using matrix norms is to have a notion of either the size of or the nearness of matrices. i. Similar remarks apply to the unitary invariance of norms of real vectors under orthogonal transformation.39. E en.38. IIAII ~ Ofor all A E IR mxn and IR IIAII = 0 if and only if A = O. Attention is confined to the vector space (IRmnxn. Convergence of a sequence of vectors to some limit vector can be converted into a statement vergence of a sequence of vectors to some limit vector can be converted into a statement about convergence of real numbers. c-i (possibly depending onn) such that depending on n) such that Example 7. we conclude this section with a theorem about convergence of vectors.4.. Remark 7. v(2).38. i. Matrix Norms 59 59 are not unitarily invariant. about convergence of real numbers.e. convergence in terms of vector norms. Then lim k4+00 V(k) = v if and only if lim k~+oo II v(k) - v II = O. The former notion is useful for perturbation analysis. Similar remarks apply to the unitary invariance of norms of real are not unitarily invariant.e. then we have the Pythagorean Identity Ilx ± YII~ = IIxll~ + IIYII~. BE IRmxn. . while the latter is needed to make sense of former notion is useful for perturbation analysis. Attention is confined to the vector space (W xn R) since that is what arises in the majority of applications. As with vectors. the proof of which follows easily from liz II~ = ZH z. v(l). If x. convergence in terms of vector norms.e.e.e. Extension to the complex case is straightforward what arises in the majority of applications.35.. (As with vectors. Then 7. IIxl12 :::: Jn Ilxll oo .4. there exist Example 7... For x G C". C2 (possibly 7. then we have the Pythagorean Identity Remark 7.... Let II·• II be a vector norm and suppose v.39.36. and essentially obvious. i» (1) v(2\ . All norms on C" are equivalent. the following inequalities are all tight bounds. As with vectors.37.

The "matrix analogue of the vector I-norm. tTL T Note: IIA+llz = l/ar(A). The Schattenp-norms are defined by E lR. For example.00 = || • ||2. The "maximum column sum" norm is 2.43.1 is often called the trace norm. matrix = Ilxllp. Example 7. IIAII P t altA)) 1 ~ (T..jj laij.p = (at' + . (AA ')). and Norms Chapter 7. to estimate the size of a matrix product A B in terms of the sizes of A and B individually. || 5 2 = II IIF and || • ||5i00 = II . Projections.44. Let A E lR." IIAliss = Li._ Then "mixed" norms can also be defined by e lR. Schatten/7-norms IIAlls." theorem and requires a proof.mxn. 11·115.mxn. The concept of a matrix norm alone is not altogether useful since it does not allow us to estimate the size of a matrix product AB in terms of the sizes of A and B individually. (A' A)) 1 ~ (T. IIAlioo = max rE!!l." Each is a "computable.42.60 Chapter 7.41. Let A E K m x ". + a!)"". (t laUI). \\F and 11'115.) I ~ (t. Let A E lR.43. ^wncic = rank(A)). (where r = laiiK^/i.60 max -_P IIAxll = max Ilxli p IIxllp=1 IIAxll p . ||5>1 is often called the trace norm. e R mx ". Example 7. The "matrix analogue of the vector 1-norm.. defined by IIAIIF ~ (t. IIAII2 = Amax(A A) = A~ax(AA ) = a1(A).44. I. and Norms Example 7.40. is a norm. p-norms previously. Then the Frobenius norm (or matrix Euclidean norm) is 7.mxn. where r mxn = rank(A). J=1 3.q = max IIAxil p 11. The "maximum row sum" norm is 2.42. The spectral norm is 3. I. 1. Example 7. 112' The norm II • 115. is a norm.40..mxn IIAII p.2 = || .<110#0 IIxllq Example 7. || . Inner Product Spaces.. Inner Product Spaces." || A\\ = ^ \ai} |. The norm || . Example 7. Example 7. Let A E R . . Projections. I Some special cases of Schatten /?-norms are equal to norms defined previously. Example 7. The following three special cases are important because they are "computable. ai. Then the matrix p-norms are defined by A e Rmxn.

jii IIAII2. p for all p are consistent matrix norms. )).jii IIAIIF. more generally.ooIlBIII. IIAxliv < IIAlim \\x\\v' Not every consistent matrix norm is subordinate to a vector norm. •II ||F. The following miscellaneous results about matrix norms are collected for future reference. i. The "mixed" norm "mixed" norm II· 11 100 . Definition 7..e.jii IIAlb IIAIIF ::s . IIABIII. For example. inner products or outer products of vectors.45. atornorms.e. reader The interested reader is invited to prove each of them as an exercise. IIAII2 ::s. II". IIAII2::S IIAIIF. IIAxll1 = max .jii IIAlloo.oo J1. i. e.~~i'. take A = B = [: is a matrix norm but it is not consistent. not exist a vector norm II • || such that IIAIIF is given by max x . || • ||/7and II ||. if II A B II < II A 1111 fi|| whenever the matrix product is defined. IIAlioo ::s . Let A E ]Rmxn. Matrix Norms 7. i.oo 2 while IIAIII.47... Then :]. it follows that all subordinate norms are consistent. If II . B e Rnxk.Then II Ax 1122 ::s II ||A||F||jc||2..1100 = max laijl x. inner products or outer products of vectors. although there are analogues for. there exists a vector norm \\ .48. .45. which equality holds: which equality holds: IIAIII ::s . II In II p = 1 for all p. IIAllp.jii IIAlioo' . \\v consistent with it. while E ]Rnxn..jii IIAlb IIAIII ::s n IIAlloo. i.jii IIAII I . A A 2.. also called oper(or. it follows that all subordinate norms are consistent. 2. •1122is consistent with II ||.48.g.. a vector norm. \\m is a consistent matrix norm.jii.60 \^ • Useful Results The following miscellaneous results about matrix norms are collected for future reference. Matrix Norms 61 61 Notice that this difficulty did not arise for vectors. We thus need the following definition. For example.60 Ilx i.. there exist matrices A for i. also caUedoperator norms. i..60 . II· II F and 1. B E ]Rnxk..46.e. e. II A 1100 ::s n IIAII I . 11^4^11 P (or. exercise.60 IIx II Ilxll=1 IIAxll p . Since IIABxl1 ::s ||A||||fljc|| ::s IIAIIIIBllllxll. there exists a vector norm II • IIv Theorem 7.47. IIAlioo ::s .7.e. For example. Then The p -norms are examples of matrix norms that are subordinate to (or induced by) The p-norms are examples of matrix norms that are subordinate to (or induced by) a vector norm.4. If \\ • 11m is a consistent matrix norm.jii II A IIF.g.but there does || is consistent with F.46.. the IIIn II F = . ||A|| = max^o IIxll.. more generally. For example. Notice that this difficulty did not arise for vectors. Example 7. IIAIIF ::s . IIAII2 ::s . although there are analogues for. q For such subordmate norms. l. Example 7. Theorem 7.jii IIAII I. 1.4. Then the norms \\ .e. we clearly have ||Ajc|| ::s ||A||1|jt||. II such that ||A||F is given by max^o ". \\ • \\p. 2.e.j is a matrix norm but it is not consistent. Theorem 7. take A = B = \ \ ||Afl|| li00 = 2while||A|| li00 ||B|| 1>00 = 1.and II \\ •lIy y are mutually consistent if \\ A B \\ a < IIAllfllIBlly. e R" x ". but there does consider II . II F' Then||A^|| < A II Filx 112. \\ are Definition 7.so II ||. There exists a vector x* such that IIAx*11 = IIAllllx*11 if the matrix norm is subordinate to the vector norm. wec1earlyhave IIAxll < IIAllllxll· Since ||Afijc|| < IIAlIllBxll < ||A||||fl||||jt||. For A following inequalities are all tight. •II ||p for all p are consistent matrix norms. We thus need the following definition. subordinate to the vector norm.q = maxx. 2. IIAxl1 IIAII = max . consistent with it. Not every consistent matrix norm is subordinate to a vector norm. Let A e Rmxn. 
Then the norms II • \\a II· Ilfl' and . IIAIII ::s . IIAIIF ::s. A matrix norm 11·11\\is said to be consistent mutuallyconsistentifIlABII. HAjcJI^ ::s \\A\\m Ilxli v. There exists a vector x* such that ||Ajt*|| = ||A|| ||jc*|| if the matrix norm is Theorem 7. .::S \\A\\p\\B\\y A matrix norm \\ • is said to be consistent if \\AB\\ ::s || A || || B II whenever the matrix product is defined.= max IIAxl1 x.. 1. so not exist a vector norm || . For such subordinate norms. consider || • \\F.

. || • ||2 and || • \\F 8. matrices Q zR and e M" ". where ¥2 is defined as in Theorem 5.. 112 and II . 3. and Norms Chapter 7.. Show that the matrix norms II . Prove that / . Projections. scalars. A (1) . Suppose that a matrix A E IR mxn has linearly independent columns. 112 (as well as all the Schatten /?-norms. for all A E IRmxn and for all orthogonal unitarily invariant. ||(MZ||a or F. Let II ·11 be a matrix norm and suppose A. > . i. l. 3. A (2) . and Norms max laijl :::: IIAII2 :::: ~ max laijl. orthogonal projection.Q — Q must be an orthogonal matrix. Then 7. \\ -\\bea Rmx".e. Prove that the A e Wnxn orthogonal projection onto the space spanned by these column vectors is given by the P matrix P = A(ATTA)~}AT. but not necessarily other p-norms) are unitarily invariant. EeIRmxn. ...49. Prove that E"xn with the inner product (A. p+ = P.62 62 3. Definition: Let A E IRnxn and denote its set of eigenvalues (not necessarily distinct) by P. Projections... prove that P+ = P. Prove that P . .] l. where V2 is defined as in Theorem 5. space. EXERCISES EXERCISES 1. Suppose P and Q are orthogonal projections and P + Q = I. B) = Tr ATB is a real inner product IR n x" AT B (A. Find the (orthogonal) projection of the vector [2 3 4f onto the subspace of 1R3 5. Theorem 7. Inner Product Spaces.e. B) = space.1. IIQAZlia = ||A|| fora = 2 or F. Also.y + 2z = O.c — v + = 0. must be an orthogonal matrix. Definition: Let A e Rnxn and denote its set of eigenvalues (not necessarily distinct) 8. If P projection. A(2)..1.An}. spanned by the plane 3x .. prove directly that V22Vl is an I — +A V V/ is an orthogonal projection.49. A(I).. 2. For A E IR mxn . [2 3 4]r R3 spanned by the plane 3.I.l. The norms II . 4. . 7. . but not necessarily The norms || • \\F and || • ||2 (as well as all the Schatten p-norms. If P is an orthogonal projection. 6. 1. A(A A) -1 AT 5. i. Inner Product Spaces.] 4. e Rmx" mxm x mxm and Z E IRnxn .. The spectral radius of A is the scalar by {A-i . i . Then k~+oo lim A (k) = A if and only if k~+oo lim IIA (k) - A II = o. IIF are unitarily invariant.A+A is an orthogonal projection. Chapter 7. . For A eRmxa .. IIF and II .. IIAllaa fora matrices Q E IR Convergence Convergence The following theorem uses matrix norms to convert a statement about convergence of a sequence of matrices into a statement about the convergence of an associated sequence of of scalars. „ } The spectral radius of A is the scalar p(A) = max IA.

all of whose columns and rows as well as main diagonal and antidiagonal sum to s = n(n2 + 1) /2.. IIAlb IIAlloo. Determine IIAIIF' IIAII Ilt.Exercises Exercises 63 63 Let Let A=[~ 14 0 12 5 ~]. 9. appropriate. all of whose Determine ||A||F. Let A=[~4 9 2 ~ ~]. or (Xl as appropriate. Let A = xy . where both x. and p(A). Determine ||A||F. H A I I ||A||2. or oo as and II A 1100 in terms of IIxlla and/or IlylljJ. is called a "magic square" matrix. ||A||2. Determine IIAIIF' IIAII d . and peA). (An n x n matrix.) that || M Up = for all/?. \\A\\ ||A||2. it can be proved that IIMllp = ss for all p.2. IIAlb IIAlloo. H A H ^ and peA). If M is a magic square matrix. 10. HA^. where both x. columns and rows as well as main diagonal and antidiagonal sum to s = n (n 2 l)/2. y e R" are nonzero. Determine IIAIIF' IIAIII> IIAlb and ||A||oo in terms of \\x\\a and/or \\y\\p. where ex and {3 take the value 1. 2. (An n x n matrix. and p(A). where a and ft take the value 1. ||A||j. y E IR n are nonzero.. Let 9. .) T 10. Determine ||A||F. Let A = xyT.

This page intentionally left blank This page intentionally left blank .

. vector x e X if and only if AT where b — Ax is the residual associated with x.bll~ (and hence p(x) = \\Ax . while (b . ||A. whereyEjRnisarbitrary..Ax) is clearly in 'R(A).x — b\\\ (and hence p ( x ) = from the Pythagorean identity (Remark 7. Solution: The set X has a number of easily verified properties: The set X has a number of easily verified properties: 1. 2.35). Thus.PR(A)b) = (I . For further details. see Section 8.e.e. where r = b .1 The Linear Least Squares Problem The Linear Least Squares Problem Problem: Suppose A E Rmx" with m 2: nand b E jRm is aagiven vector.PR(A)bll~ + IIPR(A)b - Axll~ from the Pythagorean identity (Remark 7. (8.2) 65 .1 8.-b E 'R(A)-L so these two vectors are orthogonal.PR(A))b = PR(A). For further details. Hence. so these two vectors are orthogonal. AT — A T Ax = AT b latter form is commonly known as the normal equations. The linear least Problem: Suppose A e jRmxn with > n and b <= Rm is given vector. IIAx .PR(A)b) + (PR(A)b - Ax). i. Thus. is a solution of the normal equations. x E X if and only if x is a solution of the normal equations. Now.Ax is the residual associated 1.2.Axll~ = lib . A vector x E X if and only if ATrr = 0..1) To see why this must be so. (Pn(A)b — AJC) is clearly in 7£(A). write the residual r in the form To see why this must be so. i. while Now. write the residual in the form r = (b . The linear least squares problem consists of finding an element of the set squares problem consists of finding an element of the set x = {x E jRn : p(x) = IIAx . The equations ATrr = 0 can be rewritten in the form A TAx = ATb and the x.b 112) assumes its minimum value if and only if II Ax —b\\2) assumes its minimum value if and only if (8. A vector x X if and onlv if x is of the x=A+b+(I-A+A)y.bll 2 is minimized}. Hence. IIrll~ = lib .2. x e X if and only if latter form is commonly known as the normal equations.35).Chapter 8 Chapter 8 Linear Least Squares Linear Least Squares Problems Problems 8. A vector x E X if and only if x is of the form 2. see Section 8. (PR(A)b .

66 Chapter 8. then equality holds and the least squares .23. if 5. all solutions of (8.1) and which follows since the two vectors are orthogonal. there is no "existence condition" such as R(B) S.A+A)y and *2 = A+b + (I — A+A)z in X. all and this equation always has a solution since AA+b E R(A). The only difference is that in the case of linear least squares solutions. if and only if A + A lor.e. 0*i (1 #)* = A+b (I . By Theorem 6. equivalently. 7£(A). Just as for the solution of linear equations. X = A+B..n is arbitrary. x* = A+b is the unique vector that solves this "double minimization" problem. To see why. The general solution to e ]R. i. and only if A+A = I or. BE ]R. The unique solution of minimum 2-norm or F-norm is X = A+B. which follows since the two vectors are orthogonal. + (1 . problem to the matrix case. The minimum value of p ((x) is then clearly equal to where y E ]R.2.. has a unique element x" of minimal2-norm. Linear Least Squares Problems Chapter 8. consider two arbitrary vectors Xl = A + b 3. Then the convex combination 8x. then equality holds and the least squares If the existence condition happens to be satisfied. Notice that solutions of the linear least squares problem look exactly the Remark 8. The only difference is that in the case same as solutions of the linear system AX = B. the last inequality following by Theorem 7.1. The minimum value of p x ) is then clearly equal to lib .A+A)(Oy (1 . 5. consider two arbitrary vectors jci = A+b + (I — A + A) y (I .e.0)z) is clearly in 4. if and only if rank (A) = n. Notice that solutions of the linear least squares problem look exactly the same as solutions of the linear system AX = B. where y e W is arbitrary.nxk is arbitrary. There is a unique solution to the least squares problem. of linear least squares solutions. x* minimizes the residual p(x) that solves this "double minimization" problem.e. equivalently. In fact. There is a unique solution to the least squares problem.3.8)xz2 = A+b ++ (I -A+ A)(8y ++ (1 8)z) is clearly in X. we can generalize the linear least squares problem to the matrix case. X has a unique element x* of minimal 2-norm.PR(A)bll z = ~ 11(1 Ilbll z.mxk. X. Then the convex combination and Xz = A+b (I .A+ A)z in X..1) and convexity or directly from the fact that all x E X are of the form (8. The unique solution of minimum 2-norm or F-norm is where Y € ]R.3. In fact. X = {x*} = {A+b}. we can generalize the linear least squares Just as for the solution of linear equations. i. The Theorem 8. AA+)bI1 2 the last inequality following by Theorem 7.1]. Linear Least Squares Problems and this equation always has a solution since AA+b e 7£(A).23. Let 8 e [0.2) are of the form solutions of (8. X = {x"} = {A+b}. X is convex. X is convex. if and only if rank(A) n. Let 6 E [0.. 1]. Let A E E mx " and B € Rmxk. where Y E R" xfc is arbitrary. there is no "existence condition" such as K(B) c R(A).2. x* minimizes the residual p ( x ) and is the vector of minimum 2-norm that does so.e. Remark 8. 3.. By Theorem 6. If the existence condition happens to be satisfied.. i.mxn XElR Plxk min IIAX - Bib is of the form is of the form X=A+B+(I-A+A)Y. To see why. This follows immediately from convexity or directly from the fact that all x e X are of the form (8. i.2) are of the form x = A+ AA+b + (I - A+ A)y =A+b+(I-A+A)y. x" = + b is the unique vector 4. This follows immediately from and is the vector of minimum 2-norm that does so.

8.3 Linear Regression and Other Linear Least Squares Problems 8.3 Linear Regression and Other Linear Least Squares Problems

67

O. X = +B residual is 0. Of all solutions that give a residual of 0, the unique solution X = A+B has minimum 2-norm or F -norm. F-norm. Remark 8.3. If we take B = 1m in Theorem 8.1, then X = A+ can be interpreted as Im in Theorem 8.1, then Remark 8.3. If we take B A+ can be interpreted as saying that the Moore-Penrose pseudoinverse of A is the best (in the matrix 2-norm sense) A AX matrix such that AX approximates the identity. Remark 8.4. Many other interesting and useful approximation results are available for the F -norm). matrix 2-norm (and F-norm). One such is the following. Let A E M™ x " with SVD following. e lR~xn
A

= U~VT = LOiUiV!.
i=l

Then a best rank k approximation to A for 1< f c < r r,i . e . , a solution to A k l :s k :s , i.e., a
MEJRZ'xn

min IIA - MIi2,

is given by is given by
k

Mk =

LOiUiV!.
i=1

The special case in which m = n and k = n - 1 gives a nearest singular matrix to A E A e = nand = —

lR~ xn .

8.2 8.2

Geometric Solution Geometric Solution

Looking at the schematic provided in Figure 8.1, it is apparent that minimizing IIAx -—bll 2 2 || Ax b\\ x e W1 p — Ax is equivalent to finding the vector x E lRn for which p = Ax is closest to b (in the Euclidean b Ay norm sense). Clearly, r = b - Ax must be orthogonal to R(A). Thus, if Ay is an arbitrary r b — Ax 7£(A). R(A) vector in 7£(A) (i.e., y is arbitrary), we must have y
0= (Ay)T (b - Ax) =yTAT(b-Ax) = yT (ATb _ AT Ax).

Since y is arbitrary, we must have ATb — ATAx = 0 or A r A;c = AT b. AT b - AT Ax AT Ax = ATb. T Special case: If A is full (column) rank, then x = (AT A) ATb. A = (A A)-l ATb.

8.3 8.3
8.3.1 8.3.1

Linear Regression and Other Linear Least Squares Linear Regression and Other Linear Least Squares Problems Problems
Example: Linear regression

Suppose we have m measurements (ll, YI), ... ,, (trn,,ym) for which we hypothesize a linear (t\,y\), . . . (tm Ym) (affine) relationship (8.3) y = at + f3

68

Chapter 8. Linear Least Squares Problems Chapter 8. Linear Least Squares Problems

b

r

p=Ax

Ay E R(A)

Figure S.l. Projection of b on K(A). Figure 8.1. Projection of b on R(A).
for certain constants a. and {3. One way to solve this problem is to find the line that best fits for certain constants a and ft. One way to solve this problem is to find the line that best fits the data in the least squares sense; i.e., with the model (8.3), we have the data in the least squares sense; i.e., with the model (8.3), we have

YI
Y2

= all + {3 + 81 ,
= al2 + {3 + 82

where &\,..., 8m are "errors" and we wish to minimize 8\ + • • 8;. Geometrically, we where 81 , ... , 8m are "errors" and we wish to minimize 8? + ...• + 8^- Geometrically, we are trying to find the best line that minimizes the (sum of squares of the) distances from the are trying to find the best line that minimizes the (sum of squares of the) distances from the given data points. See, for example, Figure 8.2. given data points. See, for example, Figure 8.2.
y

Figure 8.2. Simple linear regression. Figure 8.2. Simple linear regression.

Note that distances are measured in the vertical sense from the points to [he line (as Note that distances are measured in the venical sense from the point!; to the line (a!; indicated, for example, for the point (t\, y\}}. However, other criteria arc possible. For exindicated. for example. for the point (tl. YIn. However. other criteria nrc po~~iblc. For cxample, one could measure the distances in the horizontal sense, or the perpendicular distance ample, one could measure the distances in the horizontal sense, or the perpendiculnr distance from the points to the line could be used. The latter is called from the points to the line could be used. The latter is called total least squares. Instead squares. Instead of 2-norms, one could also use 1-norms or oo-norms. The latter two are computationally of 2-norms, one could also use I-norms or oo-norms. The latter two are computationally

8.3. Linear Regression and Other Linear Least Squares Problems 8.3. Linear Regression and Other Linear Least Squares Problems

69

much more difficult to handle, and thus we present only the more tractable 2-norm case in difficult text that follows. follows. The m "error equations" can be written in matrix form as ra
Y = Ax +0,

where

We then want to solve the problem
minoT 0 = min (Ax - y)T (Ax - y)
x

or, equivalently, min lIoll~ = min II Ax - YII~.
x

(8.4)

AT Solution: x = [~] is a solution of the normal equations AT Ax Solution: x — [^1 is a solution of the normal equations ATAx = ATyy where, for the special form of the matrices above, we have special form of the matrices above, we have

and and
AT Y = [ Li ti Yi

LiYi

J.

The solution for the parameters a and f3 can then be written ft

8.3.2

Other least squares problems
y = f(t) =
Cl0!(0

(8.3) of the form Suppose the hypothesized model is not the linear equation (S.3) but rather is of the form + • • • 4- cn<t>n(t). (8.5) (8.5)

In (8.5) the ¢i(t) are given (basis) functions and the Ci; are constants to be determined to </>,(0 functions c minimize the least squares error. The matrix problem is still (S.4), where we now have minimize the least squares error. The matrix problem is still (8.4), where we now have

An important special case of (8.5) is least squares polynomial approximation, which corresponds to choosing ¢i (t) = t t'~1,, i i;Ee!!, although this choice can lead to computational 0,• (?) = i - l n, although this choice can lead to computational

Since the standard Kalman filter essentially amounts method in finite-precision arithmetic.70 70 Chapter 8.1. arbitrary. c. Since the standard Kalman filter essentially amounts to sequential updating of normal equations. quantity above is clearly minimized by taking z\ = S-'c. Then GI defining y = logy. = log c" and C2 = cj_ results in a standard linear least squares y — log y. [7]. 8. we assume that A has an SVD given by A U\SVf via the SVD. z. respectively. it can be expected to exhibit such poor numerical behavior in practice (and it does). since II . if the fitting function is of the form y t) Y = ff( (t) = c\eC2i. [23]). are based on orthogonal polynomials. the subvectors can have different lengths).5) is that the coefficients Ci appear linearly. [11]. Two basic classes of algorithms are A itself S VD and QR (orthogonal-upper triangular) factorization. then II v II ~ = II viii ~ + II v211 ~ (note that orthogonality is not what is used here. c. This that orthogonality is not what is used here. Ib is unitarily invariant =11~z-cll~ wherez=VTx. piecewise polynomial functions. insight. C2 problem. norm. Specifically. respectively. Better numerical methods are based on algorithms that AT work directly and solely on A itself rather than AT A.4 Least Squares and Singular Value Decomposition Least Squares and Singular Value Decomposition In the numerical linear algebra literature (e. as in Theorem 5. We now note that IIAx - bll~ = IIU~VT x = - bll~ II ~ VT X - U T bll. Better numerical methods are based on algorithms that behavior in practice (and it does). bE IR m . In this section we investigate solution of the linear least squares problem min II Ax x b11 2 . (8. This explains why it is convenient to work above with the square of the norm rather than the concerned. appear functions </>.4 8. For example.. the subvectors can have different lengths). [4]. Z2 while the minimum value of \\Ax — b II ~ is l^llr while the minimum value of II Ax . etc.SVr U~VT Theorem 5. problem.. For example. 's ¢i. [7]. As far as the minimization is concerned.[ ~~ ] II: sz~~ c. c\ logci.. of linear least squares problems via the normal equations can be a very poor numerical method in finite-precision arithmetic. ] II: = II [ The last equality follows from the fact that if v [~~]. In fact.6) via the SVD. the last equivalent. then taking logarithms yields the equation logy = logci + cjt. then ||u||^ = ||i>i \\\ \\vi\\\ (note The last equality follows from the fact that if v = [£ ]. Then c. etc. The former based on SVD and QR (orthogonal-upper triangular) factorization.g.b\\^ is II czll ~.can be arbitrarily nonlinear. Sometimes a problem in which the Ci'S appear nonlinearly nonlinearly can be converted into a linear problem. the two are equivalent.c=UTb = II [~ ~] [ ~~ ] . . Linear Least Squares Problems Chapter 8. VT = U. S~lc\. Linear Least Squares Problems difficulties because of numerical ill conditioning for large n. it is shown that solution [4]. Specifically. fact. The key feature in (8.1. The former is much more expensive but is generally more reliable and offers considerable theoretical offers insight. The subvector z2 is arbitrary. if the fitting function is of the form can be converted into a linear problem. Numerically better approaches ill difficulties n. A E IRmxn . + c2f. e C2 / then taking logarithms yields the equation log y = log c. splines. we assume that A has an SVD given by A = UT. The basis functions coefficients c.

. to reduce A in the following way. x has been written in the form x = A+b + (I .e.. It is then possible. A finite sequence of simple orthogonal row transformations (of Householder or Givens type) can be performed on A to reduce it row transformations (of Householder or Givens type) can be performed on A to reduce it to triangular form. of course.e. an important special case of the linear least squares problem is the Finally. This matrix factorization is much cheaper to compute than an SVD and.5 Least Squares and QR Factorization Least Squares and QR Factorization In this section. than an SVD and. A e R™ X M . with (8. This agrees. The minimum value of the least squares residual is The minimum value of the least squares residual is and we clearly have that and we clearly have that minimum least squares residual is 0 -4=> b is orthogonal to all vectors in U2 minimum least squares residual is 0 {::=:} b is orthogonal to all vectors in U2 {::=:} •<=^ {::=:} b is orthogonal to all vectors in R(A)l.6) but this time in terms of the QR factorization. If we label the product of such orthogonal row transformations as the orthogonal matrix QT E R m x m . Least Squares and QR Factorization Now transform back to the original coordinates: Now transform back to the original coordinates: x = Vz 71 71 = [VI V2 1[ ~~ ] = VIZ I + V2Z2 = = + V2Z2 vls-Iufb + V2 2.AA+)bllz. an important special case of the linear least squares problem is the so-called full-rank problem. with (8. i. This follows easily since Another expression for the minimum residual is II (I . This agrees. i.5 8. of course. This matrix factorization is much cheaper to compute time in terms of the QR factorization. = +b + (/ — A + A) y. 8.8..~xn. we have QT € ffi.1). is orthogonal to all vectors in 7l(A}L b E R(A).. Z VIS-ici The last equality follows from The last equality follows from c = UTb = [ ~ f: ]= [ ~~ l Note that since Z2 is arbitrary. This follows easily since ||(7 . we again look at the solution of the linear least squares problem (8. (8. can be quite reliable. x has Note that since 12 is arbitrary. In this case the SVD of A is given by A A = V:EVTT = [VI{ Vzl[g]Vr.S. we add the simplifying assumption that A has full column To simplify the exposition.7) . and there is thus "no V2 part" to the solution. can be quite reliable. A E ffi. i.mxm. To simplify the exposition. A finite sequence of simple orthogonal transformations. via a sequence of so-called Householder or Givens transformations.1). Least Squares and QR Factorization B.11U2U!b"~ = bTUZV!V UJb = bTVZV!b = IIV!bll~. to reduce A in the following way.AA+)b\\22 = \\U2Ufb\\l = bTU2U^U22V!b = bTU2U*b = \\U?b\\22.m is arbitrary. In this case the SVD of A is given by so-called full-rank problem. A e 1R™ X ". 11(1. A E ffi.6) but this In this section.~xn. Thus.e. we add the simplifying assumption that A has full column rank. we again look at the solution of the linear least squares problem (8.5. If we label the product of such orthogonal row transformations as the to triangular form. with appropriate numerical enhancements. where y e ffi. UZV = [U t/2][o]^i r > and there is thus "no V2 part" to the solution. i. via a sequence of so-called Householder or Givens rank.e.AA+)bll~ . Another expression for the minimum residual is || (/ — AA + )b|| 2 . Thus. where y E Rm is arbitrary.A + A)_y. Finally. V2 Z 2 is an arbitrary vector in 7Z(V2)) = A/"(A). It is then possible. with appropriate numerical enhancements. V2z is an arbitrary vector in R(V2 = N(A).

8). and qz are two orthonormal vectors and b is a fixed vector. check directly that (I .I of the columns of yields the orthonormal columns of Q\. b E Em.~xn is upper triangular. Linear Least Squares Problems Chapter 8. where QI E ffi. we see that in (8. 112 is unitarily invariant ~ ] x .Cl and the minimum residual The last quantity above is clearly minimized by taking x = R lIc\ and the minimum residual is Ilczllz.Show that r is orthogonal to both^i and q2.mxn and where R e M£ x " is upper triangular.9) is essentially what is accomplished by the Gram-Schmidt process. where Q\ e R mx " and Qz € K" x(m-n). all in R".8) ~ ] (8. n • (a) Find the optimal linear combination aq^ + (3q2 that is closest to b (in the 2-norm (a) Find the optimallinear combination aql + fiq2 that is closest to b (in the 2-norm sense).7).72 Chapter 8.2). For A E Wmxn . i. both ql and q2 . m and any e ffi. are orthogonal vectors. Multiplying through by Q in (8. Qz] [ (8.. b e ffi. Yi): 2. by writing (8.7).[ ~~ ] If:. (b) Let r denote the "error vector" b . Both Q\ and <2 have orthonormal columns. xn.9) are variously referred to as QR factorizations of A. all in ffi. we have x = R.aql .. data. R~l) ) of the columns of A yields the orthonormal columns of QI. (b) Find the best (in the 2-norm sense) line of the form jc = ay + (3 that fits this (b) Find the best (in the 2-norm sense) line of the form x = ay fJ that fits this data. Note that (8. Now write Q = [Q\ Qz].A+A)y and A +b 1. check directly that (I . data.m IX(m ~" ) . 3. and any y E R". Suppose qi and q2 are two orthonormal vectors and b is a fixed vector. (2. (8..8). Consider the following set of measurements (*. sense).9) are variously referred to as QR factorizations of A.. i. or (8. Now write Q = [QI Q2]. we have = R~l Qf b = +b and the minimum residual is IIC?^!^- EXERCISES EXERCISES 1. Linear Least Squares Problems where E ffi. The last quantity above is clearly minimized by taking x = R.flq2 Show that r is orthogonal to (b) Let r denote the "error vector" b — ctq\ — {3qz. Suppose q.1).1Q\b = A+b and the minimum residual is II Qr bllz' is \\C2\\2.. Equivalently. or (8. (8. by writing AR~l1 = Q\ we see that a "triangular" linear combination (given by the coefficients of ARQI we see that a "triangular" linear combination (given by the coefficients of R.9) Any of (8.9) is essentially what is accomplished by the Gram-Schmidt process.e. Multiplying through by Q Q2 E ffi.3). Note that Any of (8. Consider the following set of measurements (Xi. Now note that Now note that IIAx - bll~ = IIQ T Ax = II [ QTbll~ since II .Equivalently. 2. (a) Find the best (in the 2-norm sense) line of the form y = ax + fJ that fits this (a) Find the best (in the 2-norm sense) line of the form y = ax + ft that fits this data. n .e. For € ffi.7).7). yt): (1.+ A)y and A+b are orthogonal vectors. we see that A=Q[~J = [QI = QIR. 3. (3.. Both Q I and Qz2 have orthonormal columns.

What happens to ||jt* . where 8 is a small positive number. not necessarily nonsingular. then A+ R~ Q\.9).:.xn can be factored in the form (8. (a) Consider a perturbation E\ = [0 ~] of A.IlQf. 7. where 8 is a small positive number. and suppose A where is 1. then A+ == R. of 2-norm solution of least «rmarp« problem squares nrr»h1<=>m min II Ax . Prove that A+ = R+ QT.z||2 as 8 approaches O? where A2 — A E 2 What happens to \\x* — zll2 as 8 approaches 0? 6.bl1 2 when A = [~ ~ ] and b = [ !1 x The solution is (a) Consider a perturbation EI = [~ pi of A. Find all solutions of the linear least squares problem 4.• What happens to IIx* . What happens to IIx* — y ||2 as 8 approaches 0? where AI = A + E\.9). verify that if A E ~. A+ R+QT .bll 2 x when A = [ ~ 5. Let A e R"x". Solve the perturbed problem min II A 2 z . where Q is orthogonal. where AI = A + E I . where again 8 is a small of A. Consider the problem of finding the minimum 2-norm solution of the linear least 5. and suppose A = QR. Solve the perturbed problem positive number. Find all solutions of the linear least squares problem min II Ax . where again 8 is a small positive number. Let A E ~nxn. Solve the perturbed version of the above problem.yII2 as 8 approaches O? (b) Now consider the perturbation E2 = [~ (b) Now consider the perturbation EI = \0 s~\ of A.Exercises Exercises 73 4. Use the four Penrose conditions and the fact that QI has orthonormal columns to verify that if A e R™ x "can be factored in the form (8. not necessarily nonsingular. Solve the perturbed version of the above problem. Use the four Penrose conditions and the fact that Q\ has orthonormal columns to 6.bib z n where A2 = A + E2.

This page intentionally left blank This page intentionally left blank .

1) Similarly. called an eigenvalue. A nonzero vector x E en is a right eigenvector of A E e nxn if there exists Definition 9. One often-used scaling for an eigenvector is One often-used scaling for an eigenvector is so is ax [ay] for any nonzero scalar a E a = 1/ IIx II so that the scaled eigenvector has nonn 1. then n(A) is a polynomial of degree n. for example. a nonzero vector y e C" is a left eigenvector corresponding to an eigenvalue a if Mif (9. such that a scalar A E e. [21D or directly using elementary properties of inverses and determinants (see. Thus. (Note that the characteristic polynomial can also be defined as det(A.3 (Cayley-Hamilton). the Fundamental Theorem of Algebra says that x 75 . e C. we see immediately that XH is a left eigenvector of A H associated with A. It is an easy exercise to 2 verify that n(A) = A + 2A . n(A) = 0. It is an easy exercise to Example 9.1 Fundamental Definitions and Properties Fundamental Definitions and Properties Definition 9.1. example. (Note that the characteristic polynomial can also be defined as det(Al . Then n(A) A2 + 2A 3. Thus. then vector of AH associated with I. The polynomialn (A.1.t|| so that the scaled eigenvector has norm 1.} The following classical theorem can be very useful in hand calculation.Chapter 9 Chapter 9 Eigenvalues and Eigenvalues and Eigenvectors Eigenvectors 9. Theorem 9..2. n(A) = O. as a matter of convenience. Definition 9. Theorem 9. (9. It can be The following classical theorem can be very useful in hand calculation. then so is ax [ay] for any nonzero scalar a E C.31 O. the Fundamental Theorem of Algebra says that 7t (X) is a polynomial of degree n. we use both forms results in at most a change of sign and.1 9. called an eigenvalue. The 2-norm is the most common nonn used for such scaling. then It can be proved from elementary properties of detenninants that if A e enxn . For any A e Cnxn .Al) is called the characteristic polynomial Definition 9. It can be proved easily from the Jordan canonical form to be discussed in the text to follow (see. Let A = [~g ~g]. norm used for such scaling. [21]) or directly using elementary properties of inverses and determinants (see. for proved easily from the Jordan canonical fonn to be discussed in the text to follow (see.) = det (A . [3]). For any A E e nxn . Then n(k) = X2 + 2A./ — A). A nonzero vector x e C" is a right eigenvector of A e Cnxn if there exists a scalar A.4. [3]).3 (Cayley-Hamilton). a nonzero vector y E en is a left eigenvector corresponding to an eigenvalue Similarly. The 2-nonn is the most common a — \j'||. we see immediately that x H is a left eigenBy taking Hermitian transposes in (9. This results in at most a change of sign and.A).) throughout the text. such that Ax = AX. Example 9. as a matter of convenience. verify that n(A) = A2 2A .1).4.— 3. for example. we use both forms throughout the text. Let A [-~ -~].31 = 0. The polynomial n (A) = det(A—A. for example.2) By taking Hennitian transposes in (9. Note that if x [y] is a right [left] eigenvector of A.1). It can be proved from elementary properties of determinants that if A E C" ". Note that if x [y] is a right [left] eigenvector of A. This of A./) is called the characteristic polynomial of A.2.

8. say.2.AI) = (A] .25). such a polynomial is said to be monic and we of the highest power of A to be +1. if If A € A(A) has algebraic multiplicity m. then A satisfies (Je — I)2 = 0. The spectrum of A e nxn is the set of all eigenvalues of A. The geometric multiplicity ofX is the number of associated of algebraic multiplicity m.. it is posn(A) = O. A is said to be defective if it does not have n linearly independent (right or left) eigenvectors. and hence further guarantee the existence of corresponding nonzero eigenvectors.e. Definition 9..e.6.XI. The minimal polynomial Of A l::: K""" ix (hI' polynomilll a(A) of least degree such that a(A) ~ O..2aA + aa2+ f322 and A has eigenvalues a f3j (where A has eigenvalues a ± fij (where j = i = R). but that A(A) = A(A) only if A e R"x". But it also clearly satisfies the smaller degree polynomial equation isfies (1 . eigenvalues of A.A) (9.. Xn. possibly repeated. Equivalently. checked eigenvectors of A and AT (take Hermitian transposes of both sides of (9. we say that A is an eigenvalue of A Definition 9. For example. Hence the roots of 7r(A). If e Wxn. Then n(A) = A22. the set of all roots of its characteristic polynomial n(X). then we must have I < g < m. Eigenvalues and Eigenvectors Chapter 9. then we must have 1 :::: g :::: m.~. then n(X) has real coefficients. Note.2)).AI). 2aA + 2 + ft and Example 9.2». neftnhion ~. then I < dimA/"(A — A/) < m. The geometric multiplicity of A is the number of associated independent eigenvectors = n — rank(A — A/) = dim J\f(A — XI). we know that n(A) = 0.A) . degree such that a (A) =0.AI) = dimN(A . . Eigenvalues and Eigenvectors n(A) has n roots. Specifically.. it can also be generally write a(A) as a monic polynomial throughout the text).rank(A .6. Moreover. less than) its algebraic multiplicity.. if left of A A E A(A). i. ft E 1Ft and let A = [ _^ !].. (9. geometric multiplicity is not equal to (i. less than) its algebraic multiplicity..5.nxn is the polynomial o/(X) oJ IPll. Let a. The spectrum of A E C"x" is the set of all eigenvalues of A. n(A). These roots. and hence further are the eigenvalues of A and imply the singularity of the matrix A — AI.5. too. . then A satsible for A to satisfy a lower-order polynomial. we always have A(A) = A(AT).. Let a. The spectrum of A is denoted A (A). If is a root of multiplicity m of n(A). A matrix A e 1Ft x" is said to be defective if it has an eigenvalue whose geometric multiplicity is not equal to (i.e. then 1 :::: dimN(A . the n(A) coefficients.e.. Moreover. i..3) in the n(A) = det(A . A is said to be defective if it does not have n linearly independent (right or left) eigenvectors.. These roots. as solutions of the determinant equation 7r(A) has n roots. the set of Definition 9.n =0o. Thus. IfXA is a root of multiplicity m ofjr(X). guarantee the existence of corresponding nonzero eigenvectors. we say that X is an eigenvalue of A of algebraic multiplicity m.7. An. a . (An . but that A(A) = A(A) only if A E 1Ftnxn. then y is a right eigenvector of AT corresponding to I € A (A). However. if we denote the geometric multiplicity of A by g. Thus.1) . Then jr(A. as solutions of the determinant equation n(A) = det(A - AI) = 0. y of AT y is a left eigenvector of A corresponding to A e A(A). if A = [~ ~].. Definition 9. A. we get the interesting fact that del (A) = A] • A2 • • An (see also Theorem 9. E A(A). 
such a polynomial is said to be monic and we generally write et (A) as a monic polynomial throughout the text)..1)2 = O.3) are the eigenvalues of A and imply the singularity of the matrix A .4) and set A = 0 in this identity.e. we get the interesting fact that det(A) = AI .ft Definition 5. if eigenvectors of A and AT (take Hermitian transposes of both sides of (9. must occur in complex conjugate pairs. If A E A(A) has algebraic multiplicity m. i. possibly repeated. must occur in complex conjugate pairs.. A matrix A E Wnxn is said to be defective if it has an eigenvalue whose Definition 9. Let the eigenvalues of A E en xxn be denoted X\. But it also clearly satisfies the smaller degree polynomial equation (it. The spectrum of A is denoted A(A).. of we always have A(A) = A(A r ).7. Thll minimal polynomial of A G l!if. say. it can also be . However. For example.) A. Specifically. c form form A e C" " A]. Example 9. It can be shown that or(l) is essentially unique (unique if we force the coefficient It can be shown that a(Je) is essentially unique (unique if we force the coefficient of the highest power of A to be + 1. then there is an easily checked relationship between the left and right If A € R"x".:.8. we denote the geometric multiplicity of A by g. A... sible for A to satisfy a lower-order polynomial. all roots of its characteristic polynomialn(A). independent eigenvectors = n . From the Cayley-Hamilton Theorem.. Equivalently. eigenvalues of A. • AM(see and set X = 0 in this identity.AI) :::: m.76 Chapter 9. Definition 9. i •>/—!)• If A E 1Ftnxn. Theorem If A E 1Ftnxn.5. if A = \1Q ®]. f3 e R and let A = [~f3 £ ]. Then if we write (9. that by elementary properties of the determinant.

0 g At this point. each 4. Then Xi = 0. shown that a (A. Let A e C« x " ana [et A.) directly (without knowing eigenvalues and as- Unfortunately.. called the Bezout algorithm. g.. algorithm. Let A E cc nxn and let Ai be an eigenvalue of A with corresponding right eigenvector jc. The above definitions are illustrated below for a series of matrices. A~[~ A~U 2 0 0 I 2 2 ] ha< a(A) (A . such is not the case. 0 0 0 2 0 0 0 2 ] h'" a(A) (A . Example 9. sociated eigenvector structure). YY Proof: Since Axt = A. i.2)4. We denote 7r(A) (A . n(A) = (A — 2)4. every nonzero polynomial f3(A) particular.2)2 ""d g 3.2)' ""d g ~ ~ ~ 1. The matrix A~U has a(A) I 2 0 0 2 0 0 0 !] (9. the geometric multiplicity by g.. of which has an eigenvalue 2 of algebraic multiplicity 4. Then yfx{ = O. let Yj be a left eigenvector corresponding to any A.10.2) andg ~ 4.9.2)' ""d g 2. 0 0 0 2 A~U 0 0 ] ha<a(A) (A . Unfortunately.10.. Proof' Since AXi AiXi. Fundamental Definitions and Properties 77 77 a(A) f3(A) O. Theorem 9.5) = (A - 2)2 and g = 2. e l\(A) yj Xi. A-[~ - 2 0 I 2 0 0 0 0 0 0 !] ~ ~ ~ ha. a(X) n(X). one might speculate that g plus the degree of a must always be five. Example 9. Furthermore."(A) ~ ~ ~ ~ (A . be an eigenvalue of A with corresponding right Theorem 9.*. eigenvector is numerically unstable.) divides every nonzero polynomial fi(k} for which ft (A) = 0.1. a(A) directly (without knowing eigenvalues and asThere is an algorithm to determine or (A. Bezout algorithm. Fundamental Definitions and Properties 9. this algorithm. In particular.e. a(A) divides n(A).11.1. left Aj E A (A) such that Xj 1= A. such that Aj =£ Ai.-.11.. . Unfortunately.

the two vectors must be orthogonal.Aj ^ 0.. Take the Hermitian A transpose of this equation and use the facts that A is Hermitian and A is real to get xXHAz == of equation facts A.13. i. However. xn}} is a linearly independent set. 0 If A E cnx " has distinct eigenvalues. then by Theorem 9. Then all eigenvalues of A must Theorem 9... respectively. it cannot be the case that yf*xt = 0 as orthogonal to all yj's for which j ^ i.. x) is an arbitrary eigenvalue/eigenvector pair such that Ax = A.. Since equation Az i^z XH get X H Az = iJ-XH A. 's. Then all eigenvalues of A must be real. and if Ai E A(A).. Cnxn have distinct eigenvalues AI. c Proof: Suppose (A. we have that XxH = AXH x. i.5). result holds for the corresponding left eigenvectors..12. since x is an Using the fact that A is Hermitian.e.14. or both. jc. Eigenvalues and Eigenvectors yy. Take the Hermitian Proof: Premultiply the equation Ax = AX by ZH to get ZH Ax = AZ H x. or the Yi 's. and if A. x) is an arbitrary eigenvalue/eigenvector pair such that Ax = AX. An with corresponding Theorem 9.. Since XxHz. Let us now return to the general case. since x is an eigenvector..Aj)YY xi. XH x /= 0. for [21.JC by ZH to get ZH Ax = X z Hx . However. Eigenvalues and Eigenvectors Chapter 9. Let A e nxn be Hermitian. we must have that x H = 0.7) yields Taking Hermitian transposes in (9.78 Similarly. we have that IXHxx = XxHx. i. . since y" A = Similarly. from which we conclude I = A. .. The same right eigenvectors XI. AXH z. Proof: Proof: For the proof see.11 is very similar to two other fundamental and important The proof of Theorem 9. = A. Since Xi ^ 0 and would thus have to be 0. Let A E cnxn have distinct eigenvalues A .14. from which conclude A. Since yf*Xi =1= 0 for each i. Let A e C"x" be Hermitian and suppose A and /JL are distinct eigenvalues Theorem 9. the proof see. then by Theorem 9. . e A(A).14) and would thus have to be 0. A. [21. A = AH.. Let A €. i. we find 0 = (Ai ..7) Taking Hermitian transposes in (9. Premultiply the equation Az = iJ-Z by x H to get x HAz = /^XHZZ = XxHz. 118]. = 1 for/ E n. (9.14) well. the two vectors must be orthogonal. p.11. ^ /z. However. orthogonal.6) from (9. Then and z must be orthogonal. i.e.. or both. . 0 Let us now return to the general case. for example. A is real.13. However..n with corresponding right eigenvectors x\. Then (9.e. or the y. we find 0 = (A. c Proof: Premultiply the equation Ax = A.. Xi is orthogonal to all y/s for which j =1= i. we have xHX =1= 0... A. Since A. . Then x and z must be of A with corresponding right eigenvectors and respectively.. yr .. or else Xi would be orthogonal to n linearly independent vectors (by Theorem 9. A = AH..e. 's. so that y H Xi = 1 for each i.. The same result holds for the corresponding left eigenvectors. be real. x . Then [x\. Then Proof: Suppose (A. .. x is a linearly independent set.7) yields Using the fact that A is Hermitian.11 is very similar to two other fundamental and important results. D A =1= iJ-. 0 The proof of Theorem 9. we must have that XHzz = 0.5).is real. Chapter 9.. eigenvector. or else xf would be orthogonal to n linearly independent vectors (by Theorem 9. is real to get H Az AxH z.11.e.7.— A y )j^jc. . we can choose the normalization of the *. p. 1 ?. Theorem 9. Let A E nxn be Hermitian and suppose X and iJ. we must have Subtracting (9. for i € !1.12. contradicting the fact that it is an eigenvector. results. it cannot be the case that YiH Xi = 0 as well. is If A e C nxn has distinct eigenvalues. so that YitH x. 
we can choose the normalization of the Xi'S. we must have yfxt = O. contradicting the fact that it is an eigenvector.. Since Ai .=1= 0. since YY A = AjXjyf..JC.. Let A E C"x" be Hermitian. i... 118]. Theorem 9.— A.are distinct eigenvalues of A with corresponding right eigenvectors x and z.6) Subtracting (9.. 0 D Theorem 9. YyXi =0.6) from (9.e.xnn • Then {XI.

. Then AJC. . For A2 = -1 + 2j. solve the linear system (A — (—1 + 2j)I)x2 = 0 to get yi X2 =[ 3+ j ] 3 ~/ . .. . A...3 4A2 9 A. . This time we have chosen the arbitrary scale factor for y\ so that \ = 1.15.1. Then AXi = AiXi. suppose that the left and right eigenvectors have been normalized so that yf1 Xi = 1. solve the (since dimN(A .. An and let the corresponding right eigenvectors form a matrix X [x\. To get the corresponding left eigenvector y\. . Let A e en xn have distinct eigenvalues AI. / e n. solve the linear system (A . . For Al = —2. from which we find A(A) = {—2. To get the corresponding left eigenvector YI. j e n.AI) -(A.22 + 2)" + 5). . — 1 2 j } .A.ci can be set arbitrarily. Furthermore. from Then n(X) det(A . 2)(A. is expressed by the equation yHX = I.. Let A E C"x" have distinct eigenvalues A. . Let Example 9. Fundamental Definitions and Properties 9.16. solve the 3 x 3 linear system (A .9) =A = XAyH yRAX n (9.. can be written in matrixform as diag(A. is expressed by the equation while YiH Xj = oij.1./) = -(A 3 + 4A. 10) -(A. / en..15... corresponding to these eigenvalues.11) Example 9. -1 ± 2j}. / en.. We can now find the right and left eigenvectors which we find A (A) = {-2. let Y — [y\. Xn) E Wtxn.10) = XAX.I = = LAixiyr i=1 (9.-*/.. An) e ]Rnxn. let A = diag(AJ. Fundamental Definitions and 79 Theorem 9.2 + 9)" + 10) = -()" + 2)().(-2)l)xI = 0 to get Note that one component of .. let Y = [YI. .8) while y^Xj = 5.nand let the correspondTheorem 9. solve the linear system (A 21) = 0 to get linear system y\(A + 21) = 0 to get yi This time we have chosen the arbitrary scale factor for YJ so that y f xXI = 1. + 5). suppose that the left and be the matrix of corresponding left eigenvectors. Similarly. Similarly.. Yn] ing right eigenvectors form a matrix X = [XI. .. y' E !!.I . ..(-1 + 2j) I)x2 = 0 to get For A2 — 1 + 2j.. let A = right eigenvectors have been normalized so that YiH Xi = 1.j. and this then determines the other two Note that one component of XI can be set arbitrarily. solve the 3 x 3 linear system (A — (—2}I)x\ = 0 to get For A-i = -2.9. 2A. xn]."" yn] be the matrix of corresponding left eigenvectors.. Furthermore.16. and this then determines the other two (since dimA/XA — (—2)7) = 1). xn]. = A. i E !!. Let 2 5 -3 -3 -2 -4 ~ ] . i E!!.. These matrix equations can be combined to yield the following matrix factorizations: These matrix equations can be combined to yield the following matrix factorizations: X-lAX and and A (9. Finally.. . Then rr(A) = det(A . can be written in matrix form as AX=XA (9. We can now find the right and left eigenvectors corresponding to these eigenvalues. i E !!:: Finally. .(-2)1) = 1).

instead of determining the j. Eigenvalues and Eigenvectors Solve the linear system yf (A — (-1 + 27')/) = 0 and normalize y> so that yf 2 1 to get Solve the linear system y" (A .'s directly. Other results in Theorem 9. For example.) 19X + 12) = -(A. Let A = [-~ -~ ~] . —4}. we can also note that X3 =x2' and yi jj. X~l Example 9. For example.!.L Other results in Theorem 9.A. Proceeding as in the previous example.2 and simply can also note that x$ = X2 and Y3 = Y2.2*2 to get Ax^ = ^2X2. —3. we could proceed to solve linear systems as for A2. Proceeding as in the previous example. o -3 Then 7r(A.2j.AI) = -(A33 + 8A 22+ 19A + 12) = -(A + I)(A + 3)(A + 4). Let Example 9. -2 It is then easy to verify that It is then easy to verify that -2 . note that we could have solved directly only for XI and X2 (and X3 = X2). det(A . Then Jl"(A) = det(A . -3. we could proceed to solve linear systems as for A. -4}. Eigenvalues and Eigenvectors Chapter 9.j ] 3+j .!. = x2).80 Chapter 9.15 can also be verified. However. + 4).c2 = ^. use the fact that A.2 However. use the fact that A33 = A2 and simply conjugate the equation A.~q 1 3 2 2 0 -2 -3 ] 2 ~ y' .L 4 !. we could have found them instead by computing X-I and reading off its rows. To see this. + 3)(A. for left eigenvectors.15 can also be verified.. is from which we find A (A) = {-I. Then. we For XT. similar argument yields the result for left eigenvectors. itit is gtruightforw!U"d to compute straightforward to comput~ X~[~ and and I 0 -I -i ] 1 x-.17. note that we could have solved directly only for *i and x2 (and XT. A.=. Finally.17. + 1)(A./) _(A + 8A from which we find A(A) = {—1. X-IAX=A= [ -2 0 0 -1+2j o 0 Finally. Then.±1 4 4 4 l+j . Now define the matrix X of right eigenvectors: Now define the matrix of right eigenvectors: 3+j 3-j 3. = -I .=.A similar argument yields the result conjugate the equation AX2 — A2X2 to get AX2 A2X2. we could have found them instead by computing instead of detennining the Yi'S directly. To see this.( -I + 2 j) I) = 0 and nonnalize Y22 so that y"xX2 = 1 to get For A3 = — 1 — 2j.

x) but not conversely. but A = [~ has two independent right eigenvectors associated with the eigenvalue o.. i=1 . say. ( x ) is a polynomial. A is diagonalizable). —4). of Chapter 11.20. —3. The following theorem is useful when solving systems of linear differential equations. I I ~J I 2 0 0 0 3 3 3 I I (. or sin*. For example. For left eigenvectors we have a similar statement. X) is an eigenvalue/eigenvector pair such that Ax = AX.g. The following theorem is useful when solving systems of linear differential equations. since T is nonsingular. What is true is that the eigenvalue/eigenvector pair (A. formation T. jc) is an eigenvalue/eigenvector pair such that Ax = Xx. A = [~ Oj ] have all the same eigenvectors (unless.9.1. -3. which is equivalent to the dyadic expanWe also have X~l AX = A = diag( -1. namely the theorem statement follows. x) but not conversely.txiYiH. but /(A) does not necessarily the eigenvalues of /(A) (defined as X^o^-A") are /(A). A = T0 6 2 has only one right eigenvector corresponding to the eigenvalue 0. ] [~ ~ (-I) [ I (. in general. namely yH A AyH if and only if Hy)H(T~1AT) = A(T Hy)H. from which equivalent statement (T~ AT)(T. from the theorem statement follows. Let A E R" xn and suppose X~~1AX — A. representable by a power series X^^o fln*n)> then it is easy to show that representable L~:O anxn). or sinx. etA Ax are Details of how the matrix exponential e'A is used to solve the system x = Ax are the subject solve system i of Chapter 11.lIAT)(T~lx)x) = X ( T ~ lIxx). Fundamental Definitions and Properties 81 81 We also have X-I AX = A = diag(—1. A Theorem 9. Then suppose X-I AX = A. -4). Then. For left eigenvectors we have a similar statement. 2 3 I (..1. since T Proof: Suppose (A. ff(x) is a polynomial. If / is an analytic function (e.but f(A) does not necessarily have all the same eigenvectors (unless.I = A(T.3 0 -~l +(-4) [ -. say. J+ (-3) [ -2 0 2 I I I -2 I ]+ 3 -3 I I -3 I I I 3 -3 I (-4) [ 3 -3 I I 0 3 3 l Theorem 9.19. For example. D D yHA = XyH ifandon\yif(T(T Hy)H (T. A is diagonalizable).20.1 AT) =X(THyf. Proof: Suppose (A.g. I 3 I (.18. e jRnxn n = LeA. Then. then easy to show that the eigenvalues of f(A) (defined as L~:OanAn) are f(A). What is true is that the independent right eigenvectors associated with the eigenvalue 0. Remark 9. where A is diagonal. Remark 9.18. eigenvalue/eigenvector pair (A. If f is an analytic function (e. Fundamental Definitions and Properties 9. Eigenvalues (but not eigenvectors) are invariant under a similarity transTheorem 9. Theorem 9. or eX. Eigenvalues (but not eigenvectors) are invariant under a similarity transformation T. x) maps to (/(A). we have the equivalent statement (T. which is equivalent to the dyadic expansion sion 3 A = LAixiyr i=1 ~(-I)[ ~ W~l+(-3)[ j ][~ ~ 1 . or. but A2 = f0 0~1]has two has only one right eigenvector corresponding to the eigenvalue 0.) . x) maps to (f(A).19. or ex.

. of course. then e A has eigenvalues e A There are extensions to Theorem 9.82 Proof: Starting from the definition.12) where each of the lordan block matrices 1i .. 9. Corollary 9. to have a version of Theorem 9.e.. there exists X € c~xn such that (not necessarily distinct).. and right Corollary 9. kn E C (not necessarily distinct)....-.e.20 and Corollary 9. from which such a result is then available and presented later in this chapter. . ..21.. . i. i E ~.20 and Corollary 9... / € n_.= Xdiag(/(A. € n_... ii E ~. i. If A E Rnx" is diagonalizable with eigenvalues A. .20 and its corollary in which A is not necessarily diagonalizable. there exists X E C^x" such that X-I AX = 1 = diag(ll... .22.22. ( A ) = Xf(A)X.. Theorem 9. we have Proof' Starting from the definition.. and right eigenvectors xt•. It is necessary first to consider the notion of Jordan canonical form. f ( X t t ) ) X ~ It is desirable.13) 1i = o o Ai o Ai . i E ~. i=1 0 The following corollary is immediate from the theorem upon setting t = I. I.i).21 for any function that analytic on the spectrum of A. 0 (9. /' en. ff(A) = X f ( A ) X ~ l I = Xdiag(J(AI). analytic on the spectrum of A.21. An e C 1. Jordan Canonical Form (/CF): For all A e c nxn with eigenvalues AI. Eigenvalues and Eigenvectors Chapter 9.. and the same eigenvectors. eigenvectors Xi. . to have a version of Theorem 9.. of course.2 Jordan Canonical Form Jordan Canonical Form Theorem 9. Eigenvalues and Eigenvectors n = LeA. It is necessary first to consider the notion of Jordan A is not necessarily diagonalizable.. If A e ]Rn xn is diagonalizable with eigenvalues Ai.Il . from which such a result is then available and presented later in this chapter.. 1q). 1q is of the form where each of the Jordan block matrices / 1 ••• Jq is of the form Ai 0 Ai Ai o 0 (9. . we have Chapter 9. and the same eigenvectors. lordan Canonical Form (JCF): For all A E C"x" with eigenvalues X\.21 for any function that isis There are extensions to Theorem 9. canonical form. . The following corollary is immediate from the theorem upon setting t = I. f(An))X. .2 9. then eA has eigenvalues e X"i .IXiYiH.20 and its corollary in which It is desirable.

. pp.JfJ =[ (X -fJ fJ ] (X = M... there exists X € R" xn such that (9. .=1 ki = n. Proof: proof D 0 Transformations like T = [ _~ -"•{"]allow us to go back and forth between aareal JCF Transformations like T = I" _.. ] T (X .9. ~: ] and I = \0 A in the case of complex conjugate eigenvalues a ± jfJi E A(A). .. complicated. and where M.. Real Jordan Canonical Form: For all A E R n x " with eigenvalues AI. . With 1 -j o -j o 1 o o o -j ~ -~] 0 1 ' .. 120-124].An n (not € jRnxn Xi. the situation is only a bit more complicated.. 1q is of form where each of the Jordan block matrices 11. Proof: For the proof see. .. 83 83 Form: 2.. for example. [21. = [ _»' ^ 1 and h2 = [6 ~] in the case of complex conjugate eigenvalues where Mi = [ _~. Jordan Canonical Form and L. . e A (A). X (not necessarily X E lR. Jordan Canonical Form 9.2.2. Jq is of the form of in the case of real eigenvalues A.. { ] allow us to go back and forth between real JCF and its complex counterpart: T-I [ (X + jfJ o O.~xn necessarily distinct). .14) J\. For nontrivial Jordan blocks. (Xii±jpieA(A>).

.22 we have that A = X J X ~ .) = det(7) = ]~["=l A. X X-I. I) .jf3 0 0 et . .22 l Tr(A) = Tr(X J X-I) Tr(JX. 2)2.7x7 is known to have 7r(A) = (A . 1)4(A 2) and 2 2 et(A) = (A ..-1)z.— I) (A. from Theorem 9. Let A e C" " with eigenvalues AI.1*) = Tr(J) = L7=1 Ai.22 we have that A = X JJX ~ l ..(A 1). 1).2).(A. 1 det(A) = det(X J X-I) det(J) = n7=1 Ai.2(A(A .-. . The characteristic polynomials of the Jordan blocks defined in Theorem 9.26. The characteristic polynomials of the Jordan blocks defined in Theorem Definition 9. I) .25. x Theorem 9.. i=1 n 2..2).24. Then Theorem 9. . and 2).I [ "+ jfi 0 0 0 et + jf3 0 0 0 0 et ..84 it is easily checked that it is easily checked that Chapter 9. Eigenvalues and Eigenvectors Chapter 9.22 we have that A X J X-I. and (A . The minimal polynomial of a matrix is the product of the elementary divisors of highest degree corresponding to distinct eigenvalues.22 are called the elementary divisors or invariant factors of A.)i. Tr(A) = Tr(XJX~ ) = TrC/X"1 X) = Tr(/) = £"=1 A. 1). " Xn. 2.23. The characteristic polynomial of a matrix is the product of its elementary divisors.-1)2. 1)2(A . From Theorem 9. D 0 Example 9. from Theorem 9.(A. Suppose A e lR.I)(A (A2)2. J(2) has elementary divisors (A while /( 2) haselementary divisors (A . 9.1)2.-.. . 2 . — 2) . Eigenvalues and Eigenvectors T. . Then AAhas two possible JCFs (not counting reorderings of the a (A. .23. highest degree corresponding to distinct eigenvalues. .2)2.22 are called the elementary divisors or invariant factors of A.2)2.2)3 3and is known to have :rr(A) Example 9.26.2)2. Thus. 2(A(A. and (A -(A .2).jf3 0 ]T~[~ l h M Definition 9. det(A) = nAi. Again. i=1 l Proof: Proof: 1.. Then has two possible JCFs (not counting reorderings of the diagonal blocks): diagonal blocks): 1 J(l) = 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 2 0 0 0 0 0 0 0 0 0 0 0 1 2 0 0 0 0 1 0 0 0 and f2) 0 0 0 0 0 1 0 0 0 0 0 2 = 0 0 0 0 0 0 I 1 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 1 0 2 0 J(l) has elementary divisors (A Note that 7(1) haselementary divisors (A . Thus. Thus. Theorem 9. Tr(A) = 2. det(A) = det(XJX.25.) = (A. Suppose A E E (A.. Then c n 1.. 1. Let A E nxn with eigenvalues AI. The minimal polynomial of a matrix is the product of the elementary divisors of divisors. An.1)4(A . -(A1). . and (A .24. The characteristic polynomial of a matrix is the product of its elementary Theorem 9. From Theorem 9.

7) = n . is not sufficient to Example 9.. Knowing TT (A.e. of algebraic multiplicity 1... 9. Thus.— a) .e. a)7..).] are eigenvectors (and are independent). three eigen7r(A. X e A(A) if and only if (A XI)kx = 0 and (A U}k~l x ^ 0..28. Remark 9.l). X principal Definition 9.3/)£ = 0.A. c . both denote a solution to the linear system (A . Then x is a right principal vector of degree k degree associated with A E A (A) ifand only if(A -. eigenvectors dimN(A — A.28. of course. associated independent right (or left) eigenvectors is given by dim A^(A . i.9.27.3..AI)k-lx i= o.27. To get a third vector JC3 such that X [x\ X2 XT.) = (A. a)\ .29.e.7) for distinct A. Definition 9. For each distinct eigenvalue Ai. i...3 Determination of the JCF Determination of the JCF The first critical item of information in determining the JCF of a matrix A E Wlxn is its A e ]R. we find that 2~2 + £ 3= 0 . left k.— a) and rank(A ..l) independent right — — A. To get a third vector X3 such that X = [Xl KJ_ X3] reduces A to JCF. a (A). we find that 2£2 + ~3 = O. Thus. Determination of the JCF 85 &5 Example 9.) = (A. is of algebraic multiplicity greater than one. it The straightforward case is. the associated number of linearly A. a(A). For example. it then has precisely one eigenvector.3. determine the JCF of A uniquely. An analogous definition holds for a left principal vector of degree k. i. of course. suppose suppose A = [3 2 0 o Then Then 3 0 A-3I= U2 I] o o 0 0 n has rank 1. If we let [~l ~2 ~3]T associated If [^i £2 &]T denote a solution to the linear system (A — 3l)~ = 0. A e nxn ]R. and rank(A -—Ai l) for distinct Ai is not sufficient to rr(A). determine a 0 0 0 0 0 0 0 a 0 0 0 0 0 a 0 0 0 0 Al= 0 0 0 a 0 0 0 0 0 0 a 0 0 0 0 0 0 0 a 0 0 0 0 0 0 1 a A2 = a 0 0 0 0 0 0 0 a 0 0 0 0 0 a 0 0 0 0 0 0 0 a 0 0 0 0 0 0 a 0 0 0 0 0 0 a 0 0 0 0 0 0 0 a 4. The straightforward case is.is simple. The more interesting (and difficult) case occurs when Ai multiplicity A. Determination of the JCF 9.e. when Ai is simple..ulx = 0 and (A -.nxn number of eigenvectors. so the eigenvalue 3 has two eigenvectors associated with it.nxn).a(A) = (A . three eigenboth have rr(A) = (A . The matrices A uniquely.— al) == 4. when X.29.(7).rank(A . and rank (A A. of algebraic multiplicity 1. i. Let A E C"xn (or R"x"). a(A.A. both are eigenvectors (and are independent). Remark 9. we need the notion of principal vector. and rank(A al) vectors. 1.

I) associated This step finds all the eigenvectors (i. For example. The case k = 1 corresponds to the "usual" eigenvector. principal vectors of degree 1) associated with A. The other solution necessary is the desired principal vector of degree 2. solutions solutions to the homogeneous equation (A ." "of often 3. If the algebraic multiplicity of If A principal need X is greater than its geometric multiplicity.1.3. If we premultiply (9. if X. of — AI. Solve (A . for example. wefind(A-.XI)2x^ = 0.3. Thus. ji of dimension k or larger.AI). for get right-hand example. The second column yields the following equation for x .A1)X(l) = O. The number of eigenvectors depends on the rank of A . If. of course. A right (or left) principal vector of degree k is associated with a Jordan block J. 4. S. x(l). different term will be assigned a much different meaning in Chapter 12. but the latter generalized eigenvectors. The phrase "of grade k" is often used synonymously with "of degree k. if of . See.e.X I ) ( l ) = (A AI)O o. 2. combination of jc(1) vectors to get a right-hand side that is in 7£(A — XI). eigenvector.AI). of k 5. Denote by x(1) and x(2) the two columns of a matrix X e R2. the principal vector second of degree 2: of degree (A . the definition of principal vector is satisfied. A E A(A) following: (or C ). Then for each distinct X e A (A) perform the following: z (2) w c 1. For each independent jc (1) .1 Theoretical computation Theoretical computation To motivate the development of a procedure for determining principal vectors. of The number of linearly independent solutions at this step depends on the rank of 2 (A . determine all eigenvalues of A e R" x " nxn ). E lR nxn This suggests a "general" procedure.x2 A JCF. is. Eigenvalues and Eigenvectors synonymously "of 2. 9.2. Theother solution (A . (A — uf. by (A .'A1)22xx(l) = (A .XI). this rank is n . we find (A If we premultiply XI) x = (A XI)x = 0. k = eigenvector.) . See.17) by (A . there are two linearly independent n — o. Denote by x(l) and x(2) the two columns of a matrix X E lR~X2 2x2 2 Jordan block{h0 h1. Then the equation AX = X J can be written A [x(l) x(2)] = [x(l) X(2)] [~ ~ J.A1)2 x(2) = (A -.86 Chapter 9. Exercise 7. x(l) (^ 0).. (9.X I ) .A/)x(2) = x(l).1 9. Principal vectors are sometimes also called generalized eigenvectors. there is only one eigenvector.A1)X(l) = O. which simply says that x(!) is a right Ax(1) = hx(1) x (1) (2) x(2).A1)x(2) = x(l). Thus. One of these solutions (A — AI)2 x (2) x(l) (1= 0). Then the equation AX = XJ can be written that reduces a matrix A to this JCF.A/) = — multiplicity of rank(A — XI) = n . principal vectors still need to be computed from succeeding steps. (It may be necessary to take a linear of x(l) R(A . since (A . solve (A . Eigenvalues and Eigenvectors Chapter 9. consider a determining 2 x Jordan [~ i].XI.XI)0 = 0.17) The first column yields the equation Ax(!) = AX(!). First.

Determination of eigenvectors more extensive treatments. solve 3. Continue in this way until the total number of independent eigenvectors and principal vectors is equal to the algebraic multiplicity of A.3.32. this natural-looking procedure can fail to find all Jordan vectors. For Unfortunately.2I)x~1) = 0 yields . Theorem 9.9. Principal vectors associated with different Jordan blocks are linearly indeTheorem 9.33. For each independent X(2) from step 2.. Principal vectors associated with different Jordan blocks are linearly independent. and A3 = 2. for example. Determination of the JCF 3.. say). Let A=[~ 0 2 ] . Continue in this way until the total number of independent eigenvectors and principal 4.30. First. . with the distinct eigenvalues 1 and 2..(1) (A . Unfortunately.. [20] and [21]. although a j ardan command is available in MATLAB'S does not offer a jcf command.. (x (1) . . . . find the eigenvectors associated The eigenvalues of A are A1 = I.3. x(k)]. see.2/)x3(1)= 0 yields (A . Let Example 9. There are significant numerical difficulties inherent in attempting generally prove unreliable. Symbolic Toolbox. Let X = [[x(l). A2 = 1.. There are significant numerical difficulties inherent in attempting to compute a JCF. this natural-looking procedure can fail to find all Jordan vectors.AI) = k . Notice that high-quality mathematical software such as MATLAB readable [8] to learn why. see.32.33. X(k)]. where the chain of suppose further that rank(A . For more extensive treatments.31. First. 1 . and h3 = 2. Theorem 9.30. . For each independent x(2) from step 2. Attempts to do such calculations in finite-precision floating-point arithmetic generally prove unreliable. Suppose A E C kxk has an eigenvalue A of algebraic multiplicity kkand suppose further that rank(A — AI) = k — 1. where the chain of vectors x(i) is constructed as above. Attempts to do such calculations in finite-precision floating-point arithmetic or 3. . Then Theorem 9. of algebraic multiplicity and Theorem 9. X(k)} is a linearly independent set. say). Theorem 9. vectors is equal to the algebraic multiplicity of A. {x(l). Example 9. Suppose A e Ckxk has an eigenvalue A. . Let X = x ( l ) .31. x (k) } is a linearly independent set. pendent. for example. h2 = 1. Then vectors x(i) is constructed as above.. Notice that high-quality mathematical software such as MATLAB does not offer a j cf command. and the interested student is strongly urged to consult the classical and very readable [8] to learn why. Determination of the JCF 9. .1. 4. and the interested student is strongly urged to consult the classical and very to compute a JCF. . although a jordan command is available in MATLAB's Symbolic Toolbox. [20] and [21]. solve (A AI)x(3) 87 = x(2). find the eigenvectors associated with the distinct eigenvalues 1 and 2. 0 The eigenvalues of A are AI = 1. Determination of eigenvectors and principal vectors is obviously very tedious for anything beyond simple problems (n = 2 and principal vectors is obviously very tedious for anything beyond simple problems (n = 2 or 3. .

but the result clearly holds for any JCF. we consider below the case of a single Jordan block.. .l/)x. For the sake of defmiteness. consider below the case of a single Jordan block.(2) = x. solve 2 (A ..so long as they are nonzero. Suppose A € Rnxn and SupposedA E jRnxn and Let D diag(d1.. but the result clearly holds for any JCF. 0 !b. Then A 4l. Eigenvalues and Eigenvectors To find a principal vector of degree 2 associated with the multiple eigenvalue 1.11)x?J = 0 yields (A.. d. we 1 's but can be arbitrary — so long as they are nonzero. 0 0 D-'(X-' AX)D = D-' J D = j ).88 (A .. . 0 1 = [xiI) 0 xl" xl"] ~ [ ~ -5 ] and X-lAX 5 3 0 Then it is easy to check that Then it is easy to check that l 1 X-'~U -i 1 =[ ~ I 0 0 n 9. (1) toeet x. dn be a nonsingular "scaling" matrix. Now let Now let X (2) =[ 0 ] ~ . . 0 = 0 A dn dn I 2 0 dn dn I A- 0 ).1I)xl ) = xiI) to get (A – l/)x.. solve To find a principal vector of degree 2 associated with the multiple eigenvalue 1. .2 On the +1 's in JCF blocks 's JCF In this subsection we show that the nonzero superdiagonal elements of a JCF need not be In this subsection we show that the nonzero superdiagonal elements of a JCF need not be 1's but can be arbitrary . =0 yields (1) Chapter 9. For the sake of definiteness. Then Let D = diag(d" .. .3. d n)) be a nonsingular "scaling" matrix.2 9. d.3.

34..Am)Vm with A-i. . A subspace S ~ V is A-invariant if AS ~ S. Such a decomposition is given in the following associated direct sum decomposition of jH..4 9. dnxn]...AlIt) E6 .A[)n) .Amtm c and minimal polynomial a(A) = (A . Let IF and suppose --+ transformation...n. . set {As s E S}. ..35.. mdistinct.xn]] of eigenvectors = [x[.A[)V) '" (A .4. Note that dimM(A .. It is thus natural to expect an with respect to which the matrix is diagonal or block diagonal... E6 N(A ..AmItm ...nxn (or nxn to JCF provides change of basis with respect to which the matrix is diagonal or block diagonal. dnxn}.. Let V be a vector space over F and suppose A : V —>• V is a linear Definition 9.A1I) v) E6 . A subspace S c V is A -invariant if AS c S. dimN(A — AJ)Vi = ni. the reverse-order identity matrix (or exchange matrix) 0 p = pT = p-[ = 0 I 0 (9. interpreted This result can also be interpreted in terms of the matrix X = [x\.. A.35.nxn n(A) = (A . Suppose e jH.4. Geometric Aspects of the JCF 9. In a similar fashion.34. the reverse-order identity matrix (or exchange matrix) In a similar fashion. .. E6 N (A .-. . .. x n eigenvectors and principal vectors that reduces A to its lCF. Then jH.18) 0 I 1 0 0 can be used to put the superdiagonal elements in the subdiagonal instead if that is desired: to superdiagonal elements in instead desired: A I 0 0 A 0 A 0 A 0 0 A 0 p-[ A p= 0 1 0 0 A 0 I A A 0 0 0 A 9..9./) w = «. where AS is defined as the set {As:: s e S}..n = N(A = N (A . Such a decomposition is given in the following theorem. where AS is defined as the transformation. Suppose A E R"x" has characteristic polynomial 9. similarity transformation XD [d[x[. It is thus natural to expect an associated direct sum decomposition of R.. Definition 9. . Specifically.. Theorem 9. Specifically.. J is obtained from A via the similarity transformation XD = \d\x\.Am I) Vm .4 Geometric Aspects of the JCF Geometric Aspects of the JCF The matrix X that reduces a matrix A E IR"X"(or C nxn)) to aalCF provides aachange of basis X e jH. Geometric Aspects of the JCF 89 di's Appropriate choice of the di 's then yields any desired nonzero superdiagonal elements. Am distinct. (A . .. Then AI.A. j is obtained from A via the and principal vectors that reduces A to its JCF.

S is A-invariant if and only if S . Example 9.. such "canonical" forms are discussed in text that follows. i.38. where X [ X i .. A-invariant.e.e. Then N(p(A)) and R(p(A)) 7£(p(A)) are A-invariant.39.Ai/)n. Jm). so the columns of Xi span an A-mvanant subspace.). Let Yi E <enxn . e E"x".. i /= 1. Equivalently.. If F = NI ® • • 0 m A// is A-invariant... for N(A . each Ji = diag(JiI.. The equation Ax Example 9. Rewriting in the form ~ J.2. be a Jordan basis for N (AT . Xm R"n such that X-I AX diag(7i.34. so by (9. where each Ji = diag(/. R(S) == S.. then S <S is A-invariant if and only if there exists M E ]Rkxk such that eRkxk (9. The Jordan canonical form is a special case of the above theorem.as in Theorem 9.) span an A-invariant of A". The equation Ax = A* = x A defining a right eigenvector x of an eigenvalue AX = x A defining a right eigenvector x of an eigenvalue A x X says that * spans an A-invariant subspace (of dimension one). A". Other such "canonical" forms are discussed in text that follows. we could choose bases for N(A — A.is A T -invariant. 7. .. partition .19) AS = SM.) and each Jik is a Jordan block corresponding to A. . Note that AXi = A*./)"'. € C"x"' be a Jordan basis for N(AT — A.i. i.38. = 1.36. Note that A A". we have that A A. but we restrict our attention here to only the Jordan block case.. .. is A-invariant... i. could be replaced by v. so the columns of A. //*. = Xi. example (note that the power n..-/)"' by SVD.19): /th Example 9.34... If V is a vector space over IF such that V = N\ EB .36. A -invariant if only ifS1 1. Jm). Eigenvalues and Eigenvectors Chapter 9.19) the columns of Xi (i.li.39.. 9. Suppose X block diagonalizes A..e. s/t span a /^-dimensional subspace <S. K(S) <S. . the eigenvectors and principal vectors associated with A. Eigenvalues and Eigenvectors If V is taken to be ]Rn over Rand S E ]Rn x* is a matrix whose columns SI. X-I AX = [~ J 2 ]... Suppose A E ]Rnxn.. Sk If R" R. we return to the problem of developing a formula for e l A in the case that A A formula e' A is not necessarily diagonalizable..."" Jik. We could also use other block diagonal decompositions (e.90 Chapter 9. we have that AXi Theorem 9.. AT Theorem 9.span an A-invariant subspace.A.) and each /. of W.. 2.. then a basis for V can be chosen with respect to which A has a block N. is not necessarily diagonalizable. eigenvalues Ai 9.• EB Nm..2.* is a Jordan block corresponding to Ai E A(A). partition Equivalently.g.. then is A-invariant if and only if there span a k-dimensional subspace S. Other representation for A with full blocks rather than the highly structured Jordan blocks.. so by (9.Ji . then a basis for V can be chosen with respect to which A has a block diagonal representation. 9. Let 7. diagonal representation. Let p(A) = CloI + ClIA + '"• •+ ClqAqq be a polynomial in A. We would then get a block diagonal representation for A with full blocks rather than the highly structured Jordan blocks. where each Theorem 9.37.. .e.. .. Suppose A"== [Xl .. Let peA) = «o/ + o?i A + • + <xqA be a polynomial in A.. This follows easily by comparing the ith columns of each side of (9.. = X. (i. /. . If A has distinct eigenvalues A. the eigenvectors and principal vectors associated with Ai) span an A-invariant subspace of]Rn. via SVD). . If A has distinct The Jordan canonical form is a special case of the above theorem. We would then get a block diagonal example (note that the power ni could be replaced by Vi). 
e A(A).Xm] ] Ee]R~xnxnisis such that X^AX ==diag(J1.19) the columns attention here to only the Jordan block case. Then N(p(A)) and 1.e.. . /. Finally.lt. and S e R" xk s\.37.

5 9. is given by eigenvalues in the right half-plane. Jm) [YI . Then A = XJX. Definition 9. Then the sign of z is defined by Re(z) {+1 sgn(z) = IRe(z) I = -1 ifRe(z) > 0. It is a generalization of the sign (or signum) of a scalar. Definition 9. associated with an eigenvalue A. for a k x k Jordan block 7. Then the sign of A. Let z E C with Re(z) ^ O.. It is a generalization of the sign (or signum) of a scalar. Suppose A E C"x" has no eigenvalues on the imaginary axis. A survey of the matrix sign function and some of its applications can be found in [15].9.I = XJy H = [XI.lt 2 e At 2! 0 exp t 0 0 0 1 A teAt eAt 0 0 0 0 0 block Ji associated A = A. Ym]H = LX. Then compatibly. A called the matrix sign function. The Matrix Sign Function 9. 9.5 The Matrix Sign Function The Matrix Sign Function section brief interesting useful In this section we give a very brief introduction to an interesting and useful matrix function function called the matrix sign function. and let e cnxn be a Jordan canonical form for A.. .. . . with N containing all Jordan blocks corresponding to the be a Jordan canonical form for with N containing all Jordan blocks corresponding to the eigenvalues of in the left half-plane and P containing all Jordan blocks corresponding to eigenvalues of A in the left half-plane and P containing all Jordan blocks corresponding to eigenvalues in the right half-plane.41. denoted sgn(A). .S. .5.. of defined Definition 9. m ••• .. i=1 which is a useful formula when used in conjunction with the result which is a useful formula when used in conjunction with the result A 0 A A 0 eAt teAt eAt .YiH. Then the sign of A. The Matrix Sign Function 91 91 compatibly. ifRe(z) < O. denoted sgn(A). Definition 9.41. i=1 H In a similar fashion we can compute m etA = LXietJ. E f= 0.40.40. Xm] diag(JI. is given by sgn(A) = X [ -/ 0] 0 / X -I ..= Ai.JiYi .

We state some of the more useful properties of the matrix sign function as theorems. Show that v can be expressed (uniquely) as a linear combination arbitrary vector.43. respectively. EXERCISES EXERCISES 1.42. distinct right eigenvectors Xi. 2. 4. Find the appropriate expression for v as a linear combination expression of the left eigenvectors as well... its reliable numerical calculation is an interesting topic in calculation its own right. ••.1> . 3.. Suppose A E C"x" has no eigenvalues on the imaginary axis. Eigenvalues and Eigenvectors where the negative and positive identity matrices are of the same dimensions as N and p. The JCF definition of the matrix sign function does not generally lend itself to reliable computation on a finite-wordgenerally itself length digital computer. 5. Suppose A E enxn has no eigenvalues on the imaginary axis. 3. Show that v can be expressed (uniquely) as a linear combination e of the right eigenvectors. Xn and left eigenvectors y\. AS = SA.. Let e C" be an arbitrary vector. In fact. 7l(S -l) is an A-invariant subspace corresponding to the left half-plane eigenvalues left half-plane I. posA == (l + S)/2 is a projection onto the positive invariant subspace of A. .. and let = sgn(A).. ••• . but the one given here is especially useful in deriving many of its key properties. of A (the negative invariant subspace). sgn(cA) = sgn(c) sgn(A) for all nonzero real scalars c. sgn(cA) = sgn(c) sgn(A)/or c.. e nxn Theorem 9. ).43. negA == (l . Theorem 9.. AS = SA. sgn(A") = (sgn(A»H. respectively. S2 = I. Let A E Cnxn have distinct eigenvalues AI. S2 = I.. 2. There are other equivalent definitions of the matrix sign function.42. S is diagonalizable with eigenvalues equal to del. S = sgn(A). The JCF definition of the here is especially useful in deriving many of its key properties. Eigenvalues and Eigenvectors Chapter 9. . R(S+/) is an A-invariant subspace corresponding to the right half-plane eigenvalues R(S + l) A -invariant half-plane of (the positive invariant of A (the positive invariant subspace)..S) /2 is a projection onto the negative invariant subspace of A. negA = (/ — S)/2 3. respectively. .. 5. 6. Then the following hold: following 1. sgn(T-1AT) T-lsgn(A)TforallnonsingularT e enxn 6. 4.. and let — sgn(A). .92 92 Chapter 9. Let v E en be an vectors Xl. ± 1. 2. 3. e C"x" Theorem 9. We state some of the more useful properties of the matrix sign function as theorems. positive of P. Theorem 9. Xn with corresponding right eigenA e nxn )..n 1. Then the following hold: following e 1. but the one given There are other equivalent definitions of the matrix sign function. sgn(T-lAT) = T-1sgn(A)T foralinonsingularT E C"x". Yn. . positive = (/ + of A. R(S — /) A-invariant of (the negative invariant subspace). S = sgn(A). yn. Their left exercises. sgn(AH) = (sgn(A))". .. Their straightforward proofs are left to the exercises. projection subspace of 4.xn and left eigenvectors Yl.

a skew-Hermitian matrix must be pure imaginary. Prove the same result right eigenvector x. y E lR. Determine all possible € R 5x5 {2. AH = -A. Characterize all left eigenvectors. The vectors [0 1 -Ifand[l 0 of [0 — l] r and[1 0]r (2) (1) are both eigenvectors. 4. Let A e R"x" be of the form A = xyT. right eigenvectors and right principal vectors if necessary. 2.22. Suppose A € rc nxn is skew-Hermitian.1) element of J. Suppose the small number 10. Determine the JCFs of the following matrices: <a) Uj n -2 -1 2 =n 7.Exercises 93 93 2. 5. Characterize all left eigenvectors. Let A = [H -1]· 2 2" Find a nonsingular matrix X such that X-I AX = J. Suppose a matrix A E R 16x 16 has 16 eigenvalues at 0 and its JCF consists of a single A e lR. Prove that all eigenvalues of 2.e.1) element of J. 16x 16 has eigenvalues at 0 its JCF consists of single Jordan block of the form specified in Theorem 9. 3}. Prove the same result if A is skew-Hermitian. multiples of e\ E lR. where x. where J is the JCF 1 J=[~ 0 1~].nxn A = xyT. but then the equation (A . Suppose 10~16 is added to the (16. x.5x5 has eigenvalues {2. Determine the JCF of A. Suppose A E C"x" is Hermitian. i. 3.30 must be 8. y e R" are nonzero vectors 10. Show that all right eigenvectors of the Jordan block matrix in Theorem 9. Let A E lR. y E lR. where x. eigenvalues. JCFs for A. = O. Let A e R" xn be of the form A = 1+ xyT. 2. 9.30 must be multiples of el e R*. Let 7. 5. x O. eigenvectors and if and (real) JCFs of the following matrices: (a) 2 -1 ] 0 ' [ 1 6. Determine the eigenvalues. Suppose a matrix A E lR. Show that x is also a left eigenvector for A. 10.l]r as an eigenvector. 11. if A is skew-Hermitian. Let A be an eigenvalue of A with corresponding 3. Determine the JCF of A.e./)jc = x can't be solved.n T T x yy = 0. Determine the JCF of A. ~ 0 Hint: Use[-1 1 — I]T an Hint: Use[— 1 1 . i. n are nonzero vectors with with xTTyy = 0. nxn be of the form A = / + xyT. Let A be an eigenvalue of A with corresponding right eigenvector x. Prove that all eigenvalues of a skew-Hermitian matrix must be pure imaginary. 3}. What are the eigenvalues of this slightly perturbed is added to the (16.. 2. Show that all right eigenvectors of the Jordan block matrix in Theorem 9. where J is the JCF Find a nonsingular matrix X such that X AX = J. JCFs for A. where x.16 Jordan form specified 9. Show that x is also a left eigenvector for A. Determine the JCFs of the following matrices: 6. (A — I)x(2) x(1) 8. y e R" are nonzero vectors with A E lR.22. What are the eigenvalues of this slightly perturbed matrix? matrix? . AH = —A.. Suppose A e rc nxn is Hermitian. Determine the JCF of A. k . Suppose A E C"x" is skew-Hermitian.

Suppose A E C"xn has all its eigenvalues in the left half-plane. Hint: Suppose A = Xl X-I is a reduction of A to JCF and suppose we can construct Hint: Suppose A = X J X ~ l is a reduction of A to JCF and suppose we can construct the "symmetric factorization" of 1. where Si 12. If n = 2 and k = 1. is nonsingular. Suppose A e sgn(A) = -1. Thus. JCF.. 16.18) is useful. Show that every matrix A E R"x" can be factored in the form A = SIS2. Prove that every matrix e jRn xn is similar to its transpose and determine a similarity transformation explicitly. xn has all its eigenvalues in the left half-plane. sgn(A) = -/. i.e. Prove Theorem 9. Prove Theorem 9. Then A = (XS i X T ) ( X ~ T T S2X-I) would be the the "symmetric factorization" of J. en .42. Prove Theorem 9. about when the equation for X is what can you say further. transformation explicitly.43.42. what can you say further. The transformation P in (9. 14. Prove Theorem 9. say Si. in terms of All and A22. Hint: Use the factorization in the previous exercise. Consider the block upper triangular matrix A _ [ All - 0 Al2 ] A22 ' where A E M"xn and An E Rkxk with 1 ::s: k < n. X e R*x <«-*). and S2 are real symmetric matrices and one of them. Suppose Al2 ^ and want to block diagonalize A via the similarity transformation want to block diagonalize A via the similarity transformation where X E IRkx(n-k). Then = ( X SIXT)(X. T-IAT = [A011 A22 0 ] . where SI and £2 are real symmetric matrices and one of them. Prove that every matrix A E W x" is similar to its transpose and determine a similarity 13. in terms of AU and A 22.43. it suffices to prove the result for the JCF. If n = 2 and k = 1. Thus. Prove that 17. Prove that 17. Eigenvalues and Eigenvectors Chapter 9. Find a matrix equation that X must satisfy for this to be possible. Consider the block upper triangular matrix 14. 16. say S1. is nonsingular.18) is useful.S2X~l) would required symmetric factorization of A. Hint: Use the factorization in the previous exercise. 15. about when the equation for is solvable? solvable? 15. Show that every matrix A e jRnxn can be factored in the form A Si$2. 13. The transformation P in (9. Eigenvalues and Eigenvectors 12. Suppose Au =1= 0 and that we we e jRnxn and All e jRkxk 1 < ::s: n. it suffices to prove the result for the required symmetric factorization of A.94 Chapter 9. Find a matrix equation that X must satisfy for this to be possible.

as well as other matrices that merely satisfy the definition. and orthogonal. if A e Rmxn . Let A = AH e C"x" have (real) eigenvalues A. skewskew-Hermitian. What other matrices are "diagonalizable" under unitary similarity? The answer is given in Theorem matrices are "diagonalizable" under unitary similarity? The answer is given in Theorem 10.. if A E IR mxn find E R™ and Q E lR~xn such that P AQ has a form. An. Two special cases are of interest: Two special cases are of interest: 1. = V and if pT is orthogonal.. orthogonal equivalence if P and are orthogonal matrices. . A. An.9.j.Chapter 10 Chapter 10 Canonical Forms Canonical Forms 10..e.. it is called an "canonical form. This is proved in Theorem 10. as well as other matrices that merely satisfy the symmetric.2. the definition. the for real scalars a and h. and unitary matrices (and their "real" counterparts: symmetric. an orthogonal similarity (or unitary similarity in the complex case).1 Some Basic Canonical Forms Some Basic Canonical Forms Problem: Let V and W be vector spaces and suppose A : V ---+ W is a linear transformation." In matrix terms. matrix if and only if it is normal (i.. Normal matrices include Hermitian.. .. If a matrix A is not normal.. If The following results are typical of what can be achieved under a unitary similarity. the transformation A f--+ P ApT is called 2. If P"1 .I. .2. This is proved in Theorem 10. where it is proved that a general matrix A E C"x" is unitarily similar to a diagonal 10. it is called an orthogonal equivalence if P and Q are orthogonal matrices.. then there exists a unitary matrix £7 such that A = AH E en xxn has eigenvalues AI. AAHH = AH A).. . n ). V and Q 1. the transformation A f--+ PAP-I is called aasimilarity. Find bases in V and W with respect to which Mat A has a "simple form" or "canonical Find bases in V and W with respect to which Mat A has a "simple form" or "canonical xm form. If W = V and <2== p.. If a matrix A is not normal. respectively). Xn) (the columns ofX are orthonormal eigenvectors for A). Then there AI.e. find P e lR. The following results are typical of what can be achieved under a unitary similarity. . . and orthogonal. Normal matrices include Hermitian." The transformation A f--+ P AQ is called an equivalence. . skew-Hermitian. Q are unitary.j.2. most "diagonal" we can get is the JCF described in Chapter 9. Xn Theorem 10. . !] Theorem 10. If A = A H 6 C" " has eigenvalues AI. skewsymmetric. An).1. An) (the columns of X are exists a unitary matrix X such that XHAX = D = diag(A. such as A = [ _ab ^1 for real scalars a and b. . where D = diag(AJ. What other U HAU = D. . .1 10.1..." In matrix terms. 95 95 .1. If W = V and if Q = PT is orthogonal.An. and unitary matrices (and their "real" counterparts: symmetric. . where D = diag(A. . respectively). .. . the transformation A H> PAP" 1 is called similarity." The transformation A M» PAQ is called an equivalence. such as A = [_~ most "diagonal" we can get is the JCF described in Chapter 9. where it is proved that a general matrix A e enxn is unitarily similar to a diagonal matrix if and only if it is normal (i. AA = AHA). Problem: Let V and W be vector spaces and suppose A : V —>• W is a linear transformation.2. the transformation A i-» PAPT is called If an orthogonal similarity (or unitary similarity in the complex case). We can also consider the case A e Cm xn and unitary equivalence if P and Remark 10....9.. . orthonormal eigenvectors for A). then there exists a unitary matrix U such that UH AU — D. 
= H E en xn exists a unitary matrix X such that X H AX = D = diag(Al. Remark 10.:xm and Q e Rnnxn such that PAQ has a "canonical form... We can also consider the case A E emxn and unitary equivalence if P and <2 are unitary. .

Then VH = / / .96 96 Chapter 10. D 0 (2... 10. .. . Xk]. . the construction of X2 E JRnx(n-l) such that X — z e ]R" (".. The proof is completed easily by induction upon noting proof that the (2.. . —k U. we get 0 in the (l.. (l. . k = For simplicity.k But the latter are orthonormal since they are the last n .1) we have used the fact that AXI = k\x\. where R € Ckxk is upper triangular..2)-block X2 . Hk as elementary reflectors) H\. 0 Thus... Then there exist n — 1 additional vectors X2... n .I)-block.. Let X\ e Cnxk have orthonormal columns and suppose U is a unitary Theorem 10. [£i. ..xd...2).2) Al X~AX2 XfAX 2 0 Al ] 0 XfAX z 0 l In (10.1) (10. X = Given a unit vector x\ E JRn. . VI € Cnxk [Xi U ] Proof: Let X\I = [x\. ~nf. . Let the unit vector x\ be denoted by [~I. and normalize it such that x~ XI = XI 1. Let XI E Cnxk have orthonormal columns and suppose V is a unitary matrix such that UX\ = \ 1. xn] = 1.. . Write UH = [U\ U ] [VI Vz] 0 2 with Ui E Cnxk ... X 1 XI e E". k 1. . .. [Xi f/2] unitary. . Construct a sequence of Householder matrices (also known Proof: Let X [XI.2)-block by XI Xz.. .k rows of the unitary matrix U. Thus. . Canonical Forms Proof: eigenvector corresponding AI. .. . where R E kxk is upper triangular.l)-block.. Xk are orthonormal).2)-block noting that x\ is orthogonal to all vectors in X2..) [XI X2]] is orthogonal is frequently required. .T...1) we have used the fact that Ax\ = AIXI. . XH AX induction noting that XH AX is Hermitian. Let V = XI. We illustrate the construction of the necessary Householder matrix for k — 1. We also get 0 in the (2.1 additional vectors x2... Then there exist n .3 for k = 1.Hv. When combined with the fact that In (l0. simplicity.. A.. ...l)-block by Al (2. . .3. The construction can actually be performed orthogonal frequently [x\ 2 quite easily by means of Householder (or Givens) transformations as in the proof of the Householder transformations proof following general result. [XI U2] is unitary. Hk in the usual way (see below) such that Hk . Now XHAX =[ xH I XH ] A [XI 2 X 2] =[ =[ =[ x~Axl XfAxl X~AX2 XfAX 2 ] (10. An.. xn] = [x\ ] [XI X22] is unitary. Canonical Forms Chapter 10.. xf*x\ = Proof' Let x\ be a right eigenvector corresponding to X\. Construct a sequence of Householder matrices (also known HI.. Xn such that [x\.. %n] XI . Write V H matrix such that V X I = [ ~]. When combined with the fact that x~ XI = 1. .3 called Theorem 10.• • Hk and Hk'" HI. following general result. xn such that X = (XI. orthogonal (l. I)-block x"xi = 1. . we get A-i remaining in the (l.2)-block must have eigenvalues A2. HdxI.. Then [XI V 2] is unitary... D The construction called for in Theorem 10.3. .. (/ € k) U2 X i U2 = Xi . In (10. we consider the real case. Xk H Hk... Then U = HI'" Hk and H Then x^U2 = 0 (i E ~) means that xf is orthogonal to each of the n — k columns of V2. xd = [ ~ l U = where R is upper triangular (and nonsingular since x\.2 is then a special case of Theorem 10..

.2uu+ = I . . To see that U effects the U symmetric U U = U = I. Let A E jRn xn (whose orthogonal matrix X e Wlxn (whose columns are orthonormal eigenvectors of A) such that of XT AX = D = diag(Al.. .4. . Some Basic Canonical Forms 10..4.i. it is easily verified that UT U = 2 ± 2'. consulted standard numerical linear algebra can be consulted in standard numerical linear algebra texts such as [7].+uu T . £«] r It can checked T 2 that U is symmetric and U TU = U 2 = I.2 is worth stating separately since it is applied frequently in applications.1. Thus. Some Basic Canonical Forms 97 Then the necessary Householder matrix needed for the construction of X 2 is given by Then the necessary Householder matrix needed for the construction of X^ is given by U = I . where u = ['. The real version of Theorem 10.•» '.e.3) is actually a often weighted sum of orthogonal projections P. 's).. A in (10..3) spectral which is often called the spectral representation of A..1 ± 1. . Theorem 10. U effects necessary compression of jci. '. Then there exists an 10. . [23]. where u -^UU [t-\ 1. (onto the one-dimensional eigenspaces correPi one-dimensional eigenspaces sponding to the A. where Pi = PR(x. x where P. it is easily verified that u T u = ± 2£i and u T Xl = 1 ± '. [25]. quently in applications. Then there exists an AT E jRnxn have eigenvalues AI. A Note that Theorem 10.. n A = LAiPi.2 is worth stating separately since it is applied fre10. [11].) — xiXt = i j since xT Xi = 1.2. so U is orthogonal. In fact. An.1.e. sponding to the Ai'S). i=1 (10. including the choice of sign and the complex case. i.2 for Hermitian matrices) can be written from Theorem 10. [23].. An).1 and UT X\ = 1 ± £1.4 implies that a symmetric matrix A (with the obvious analogue from Theorem 10. [7].10... Let A = AT e E nxn have eigenvalues k\.. = PUM = xixf = xxixT since xj xi — 1. [11]. i=l theoretical The following pair of theorems form the theoretical foundation of the double-Francisdouble-FrancisQR algorithm used to compute matrix eigenvalues in a numerically stable and reliable way.2 for Hermitian matrices) can be written n A = XDX T = LAiXiXT. . [25]. U orthogonal..It can easily be checked — 2uu+ — u u T . £2.Xn. • • . Further details on Householder matrices.nf. . .. . X n ).1. XTAX = D = diag(Xi. . necessary compression of Xl.

is that the first k Schur vectors span the same Ainvariant subspace as the eigenvectors corresponding to the first eigenvalues along the invariant subspace as the eigenvectors corresponding to the first k eigenvalues along the diagonal of T (or S). A is normal (i. but In the case of A E R"xxn . . matrix U that reduces a matrix to [real] Schur form are called Schur vectors.2)-block wf AU2 is not 0.7.. A matrix A E C"x" is unitarily similar to a diagonal matrix if and only if Theorem 10. Let A e C"x". where S is quasi-upper-triangular. UH AU = T. Let A E R"xxn. The triangular matrix T in Theorem 10. Then there exists an orthogonal 10. Then AAH = U VUHU VHU H = U DDHU H == U DH DU H == AH A so A is normal.e. [17]). The quasi-upper-triangular matrix S in Theorem 10. real arithmetic) to a quasi-upper-triangular matrix. it is thus unitarily similar to an upper triangular matrix. [17]).e. Then there exists an orthogonal Let A e IR n ". for example. D ur In the case of A e IRn ". Canonical Forms Theorem 10. Theorem 10. what is true. Its real JCF is is in RSF.e.2 except that in this case (using the notation U rather than X) the (l. where D is diagonal. it is of interest to know While every matrix can be reduced to Schur form (or RSF). then complex arithmetic is clearly needed if A has a complex conjugate pair of eigenvalues. it is of interest to know when we can go further and reduce a matrix via unitary similarity to diagonal form. 0 in this case (using the notation U rather than X) the (l.7. where D is diagonal.6 T T matrix U such that U AU = S. The following theorem answers this question. Then Proof: Suppose U is a unitary matrix such that U H AU = D. where T is upper triangular. diagonal of T (or S).9. AHA = AA H). A quasi-upper-triangular matrix is block upper triangular with 1 x 1 diagonal blocks corresponding to its real eigenvalues and 2x2 2 diagonal blocks corresponding to its blocks corresponding to its real eigenvalues and 2 x diagonal blocks corresponding to its complex conjugate pairs of eigenvalues. so A is normal. A matrix A e c nxn is unitarily similar to a diagonal matrix if and only if A is normal (i. following theorem answers this question... The columns of a unitary [orthogonal} Schur canonical form or real Schur fonn (RSF). AH A = AAH ). but if A has a complex conjugate pair of eigenvalues. Example 10.2 except that Proof: The proof of this theorem is essentially the same as that of Theorem 10. The matrix 10. Definition 10. However. complex conjugate pairs of eigenvalues. The triangular matrix T in Theorem 10. However. Let A E cnxn Then there exists a unitary matrix U such that Theorem 10. Canonical Forms Chapter 10. Proof: The proof of this theorem is essentially the same as that of Theorem lO.6 is called a real Schur canonical form or real Schur form (RSF). A quasi-upper-triangular matrix is block upper triangular with 1 x 1 diagonal matrix. Proof: Suppose U is a unitary matrix such that U H AU = D. . for example. where T is upper triangular. Its real JCF is h[ 1 -1 1 0 0 n n Note that only the first Schur vector (and then only if the corresponding first eigenvalue Note that only the first Schur vector (and then only if the corresponding first eigenvalue is real if U is orthogonal) is an eigenvector. The columns of a unitary [orthogonal] matrix U that reduces a matrix to [real} Schur fonn are called Schur vectors. The when we can go further and reduce a matrix via unitary similarity to diagonal form. 
then complex arithmetic is clearly needed to place such eigenValues on the diagonal of T. However. However. matrix U such that U AU = S. the next theorem shows that every to place such eigenvalues on the diagonal of T. real arithmetic) to a quasi-upper-triangular A e Wnxn is also orthogonally similar (i.5 is called a Schur canonical Definition 10.2)-block AU2 is not O..98 98 Chapter 10. Then there exists a unitary matrix U such that U H AU = T.9.e. where S is quasi-upper-triangular.5 (Schur).6 (Murnaghan-Wintner).6 is called a real form or Schur fonn. the next theorem shows that every A E IR xn is also orthogonally similar (i. what is true. The matrix s~ [ -2 0 -2 5 4 0 is in RSF. is that the first Schur vectors span the same all applications (see. While every matrix can be reduced to Schur form (or RSF).8. it is thus unitarily similar to an upper triangular matrix. and sufficient for virtually is real if U is orthogonal) is an eigenvector.5 (Schur). The quasi-upper-triangular matrix S in Theorem 10.8. Theorem 10. and sufficient for virtually all applications (see.5 is called a Schur canonical form or Schur form.

superscript H s replace T s. let y = U H x. where x is an arbitrary vector in en. nonpositive definite (or negative semidefinite) if -A is nonnegative definite. A U U HA U T.B > 0 or or Also. all the above definitions hold except that A e nxn Remark 10. nonnegative definite (or positive semidefinite) if and only if XT Ax :::: 0 for all (or positive if and only if x T Ax > for all nonzero x e W.2. i € n. U diagonalizes A 10. Thenfor all Let A = AH E Cnxn with eigenvalues AI > A2 > • • > An. We write A ~ 0.n • We write A :::: 0.. Remark 10. suppose A is normal and let U be a unitary matrix such that U H AU = T. We write A > 0.2. Similarly. If neither semidefinite. Proof: Proof: Let U be a unitary matrix that diagonalizes A as in Theorem 10.nxn is Definition 10.10. . negative positive definite.5). A symmetric matrix A E lR. if—A 4. If A E C"x" is Hermitian.10. A symmetric matrix A e Wxn 1. 3. Indeed.=1 But clearly n LA. nonzero x E lR.=1 .A ~ O.. positive definite if and only if xTT Ax > 0 for all nonzero x G lR. this section that may be stated in the real case for simplicity. If a matrix is neither definite nor semidefinite. We write A < 0. Furthermore.n. write A < O. be diagonal.10. We write A < O. we write A :::: B if and only ifA — B>QorB — A < 0.. 11'/. this is generally true for all results in the remainder of of superscript //s Ts. B — A < 0.A < O. We (or negative if— A nonnegative definite. Then for all E en. it is said to be indefinite. we write A > B if and only if A .• :::: An. Also. we write A > B if and only if A — B > B . Similarly.A is positive definite.B :::: 0 or B . in fact. x eC". Remark 10. we write A > B if and only if A . positive definite if and only ifx Ax > Qfor all nonzero x E W1 We write A > O.2.12. Then 11.2. if A and B are symmetric matrices. Then T (Theorem It is then a routine exercise to show that T must. Then n x HAx = (U HX)H U H AU(U Hx) = yH Dy = LA. Definite Matrices 99 Conversely. We write A > O.2 Definite Matrices Definite Matrices Definition 10. i En. if A and B are symmetric matrices. negative definite if . and denote the components of y by v UHx.12 ~ AlyH Y = AIX HX . It T 0 D 10.2 10.11. e Theorem 10.13. Furthermore.. 111. indefinite. Definite Matrices 10. Let A = AH e enxn with eigenvalues X{ :::: A2 :::: .11. CM j]i. 2. where T is an upper triangular matrix (Theorem 10.12..

whence Ar1ax (A A). where M E IRb<n and k ~ ranlc(A) — ranlc(M). where M e R"x" is nonsingular. A leading principal submatrix of order n — k is obtained by deleting the last k rows and columns. Canonical Forms and and n LAillJilZ::: i=l AnyHy = An xHx . For example.17. Let A E enxn Then \\A\\2 = Ar1ax(AH A).19. form MT E ~n xn E ~n xn definite if and only if Theorem 10.13 provides (A 1) Rayleigh quotient of jc.16. Canonical Forms Chapter 10. 0 D Remark XHHAx Remark 10. Not@th!ltthl!dl!termin!lntl:ofnllprincip!ll eubmatrioes muet bQ nonnogativo in Theorem 10. However.. Theorem 10. Then 111~~1~2 Let jc be an eigenvector corresponding to Xmax(AHA). 3. The determinant of the 1x1 1 leading submatrix is 0 and 1. A can be written in the form MT M. Theorem 10. A can be written in the form MT M.= Amax{A A). The determinant of the I x leading submatrix is 0 and consider the matrix A = [~ 2x 0 (cf.17). Note that the determinants of all principal "ubm!ltriC[!!l mu"t bB nonnBgmivB R. Then IIAII2 = ^m(AH A}. I Proof: E C" Proof: For all x € en we have Let x be an eigenvector corresponding to Amax (A HA).18.19.I. All eigenvalues of A are nonnegative. of obtained and E ~nxn positive definite if and only if any of the Theorem 10. determinant the determinant of the 2x2 2 leading submatrix is also 0 (cf. XHAx > 0 for all nonzero = AH E enxn E en.1. A symmetric matrix A e E" x" is positive definite if and only if any of the following equivalent following three equivalent conditions hold: determinants of principal 1. ::::: AI.. A can be wrirren in [he/orm MT M. All eigenvalues of A are nonnegaTive. Theorem 10. 3.@mllrk 10. 2. A principal submatrix of an nxn n matrix A is the (n — k)x(n(n — k) matrix that remains by deleting k rows and the corresponding k columns. The determinants of all principal submatrices of A are nonnegative. so 0 An ::::: . xfO IIxll2 I 0 Definition submatrixofan n x -k) x -k) Definition 10.15. of positive. not just those of the leading principal submatrices. All eigenvalues of A are positive. Let A e C"x". 3. A symmetric matrix A € R"x" is nonnegative definite if and only if any of following equivalent of the following three equivalent conditions hold: 1.1. Corollary Corollary 10. consider the matrix A — [0 _l~]. from which the theorem follows. The determinants of all leading principal submatrices of A are positive. Then ^pjp2 = ^^(A" HA).18. 2.. Theorem 10.soO < X n < ••• < A. x E C". the . . where M 6 R ix " and k > rank(A) "" rank(M).14. of all principal submatrices of 2. Theorem 1O.17).100 100 Chapter 10. The ratio ^^ x for A = AH <=enxn and nonzero x jc een isis calledthe = AH E Cnxn and nonzero E C" called the x of x.w) x HAx > the Rayleigh quotient. If A = AH e C"x" is positive definite. whence IIAxll2 ! H IIAliz = max . All eigenvalues of A are positive. Remark 10.l3 provides upper (AO and lower (An) bounds for (A.18.

negative and is nonpositive definite. The following standard theorem is stated without proof (see. The case = is trivially true. [16. 0 Recall that A :::: B if the matrix A . 2. matrices (both symmetric and square root of if S A.18.2) element is. If A > B and M e Rm . Definite Matrices 10. The case n = 1 is trivially true..2) element is. any matrix of nonsymmetric) have infinitely many square roots. then MT AM > MT TBM. In general.2. for example. . Write the matrix A in Proof: The proof is by induction. A e R"x be nonnegative definite. A stronger form of the third characterization in Theorem 10. Theorem 10. in fact. Ll E C1-""1^""^ and . p. 10rm [COSO _ Sino] . if A = lz. basic definitions.20.20.we say 181]). [16. if A E lR.nxn"be nonnegative definite. B e Rnxn be symmetric. Let A e c nxn be Hermitian unique nonsingular lower triangular matrix L nonsingular A = LLH. Moreover. SA = AS and rankS = rank A (and hence S is positive = AS S S.3 is not unique. p. j proof (see. If >BandMe jRnxm.23. and positive definite. Theorem 10. if then M can be then M can be [1 0]. 1. Let A E lR.23.22.1 so that B By our induction hypothesis. nxm 2. = LLH..22. Definite Matrices 101 101 principal submatrix consisting of the (2. For example.18. Write the matrix A in the form the form By our induction hypothesis. in fact.17 is available and is known as the Cholesky factorization. Its proof is straightforward from theorem is useful in "comparing" symmetric matrices. Hermitian case. It is stated and proved below for the more general known as the Cholesky factorization. It is stated and proved below for the more general Hermitian case. In general. then MT AM :::: MTTBM. [ fz -ti o o l [~ 0] ~ 0 v'3 . nxn Theorem 10.B is nonnegative definite.3 is not unique. BM. standard theorem stated 181]). The factor M in Theorem 10. The factor M in Theorem 10. any matrix S of c e s 9 the " °* ™ the form [ ssinOe _ ccosOe ] IS a square root. Remark 10.17 is available and is A stronger form of the third characterization in Theorem 10..2. negative and A is nonpositive principal submatrix consisting of the (2. for example.10. with positive diagonal elements such that positive Proof: The proof is by induction. MT AM> M. E jRnxn MT AM > M BM. Then A has aaunique nonnegative definite square root S. assume the result is true for matrices of order n . For example. assume the result is true for matrices of order — 1 so that B may be written as B = L\L^. matrices (both symmetric and nonsymmetric) have infinitely many square roots. if € E" xn we say that e jRn x that S E R nxn"isisa asquare root of AA ifS2 2 =— A. rankS = rankA definite definite if positive definite). E <C Theorem 10. That is. Then A has unique nonnegative Theorem 10. The following Recall that A > B if the matrix A — B is nonnegative definite. That is. For example. It concerns the notion of the "square root" of a matrix. For example. The following theorem is useful in "comparing" symmetric matrices.nxn . Then there exists a positive definite. if = /2. if Remark 10. 1f A :::: Band M E Rnxm.is a square root. concerns the notion of the "square root" of a matrix. If A> Band E jR~xm.21. Let A. Its proof is straightforward from basic definitions. where L\ e c(n-l)x(n-l) is nonsingular and lower triangular as = L1Lf. definite if A is positive definite).

Many similar results are also (10. But we = ann — b LIH L\lb = ann — bH B~lb B A). ann Since det(B) > 0. Substituting in the involving we find 2 a2 = ann .24.lb.xn.24.4) and the SVD. Let A € C™*71. numerical procedures for computing such procedures an equivalence directly via. Then [ S-l o 0 ] [ I Uf U H ] AV = [I 0 0 ] 0 . [4. Canonical Forms with positive diagonal elements. we must have ann —bHB lb > 0. Gaussian or elementary row and column operations. Then there exist matrices P E C: xm and Q e C"nx" such E c. suppose A has an SVD of the form (5. They are more stably computable than (lOA) and more efficiently computable than a full SVD. 0 Note that the greater freedom afforded by the equivalence transformation of Theorem afforded 10.4) [7. It remains to prove that we can write the n x n matrix A It in the form in the form ann b ] = [LJ c a 0 ] [Lf 0 c a J. Substituting in the expression involving a. However. Canonical Forms Chapter 10.b B-1b completes D 10.3 Equivalence Transformations and Congruence Equivalence Transformations and Congruence Theorem 10.2) in its complex version. However. available. Two such forms are stated here. for example. of ann — b 0 root of «„„ . The numerically preferred equivalence is.b HL\H L11b = ann .4) efficiently available. 5].102 102 Chapter 10. Alternatively. are generally unreliable.b H B-1b > O. Ch. Alternatively. .p. multiplication where a is positive. Choosing a to be the positive square ann . p. say. Choosing a be det(fi) > HB~lb completes the proof. 131]. we find by L^b. Then E c~xn such exist e C™ x m that that PAQ=[~ ~l (l0.4) Proof: proof Proof: A classical proof can be consulted in. Ch.b H B-1b (= the Schur complement of B in A). of course.• Clearly we see we L I C = b and ann = c HC a 2 c is given simply by c = C. for example (10.4).3 10.131]. yields a far "simpler" canonical form (10. the SVD is relatively expensive to compute and other canonical forms exist that are intermediate between (l0. the unitary equivunitary alence known as the SVD. 2]. [21. we see that we must have L\c = b and ann = CHc + a 2. Performing the indicated matrix multiplication and equating the corresponding submatrices. Take P =[ S~ 'f [I ] and Q = V to complete the proof. [21. as opposed to the more restrictive situation of a similarity transformation. But know that o < det(A) = det [ ~ b ] = det(B) det(a nn _ b H B-1b).. see.

0. Let A e C™ ".1). In(A) = ln(X Proof: For the proof. v. then rank(A) rr v. Note that congruence preserves the property of being Hermitian.3. respectively. if A is Hermitian. .30. see [4].28. £). Let A = AH E e nxn and let rr. Theorem 10.31 (Sylvester's Law of Inertia). 0 D x Theorem 10. n. When A has full column rank but is "near" a rank deficient matrix. v. Definition 10. We then have the following.3. It turns out that the principal property so preserved is the sign of each eigenvalue. and zero eigenvalues. Note that a congruence is a similarity if and only if X is unitary. Proof: For the proof. numbers In(A) (n. see [4]. and ~ denote the numbers of positive.25 (Complete Orthogonal Decomposition). upper Proof: For the proof. for example. Then there exists a unitary matrix Q e e mxm and a Theorem 10. see. Let A = AH e C"x" and let 7t.27. HE C xn E e~ xn. It is of interest to ask what other properties of a matrix are then X H AX is also Hermitian.XH AX Definition 10. a congruence.25 (Complete Orthogonal Decomposition).31 guarantees that rank and signature of matrix are preserved under congruence. Proof: For the proof. Let A E C™ x ". The transformation A i-> XH AX is called a congruence. Let A = A He ennxn and X e Cnnxn.v. Then there exists a unitary matrix Q E Cmxm and a permutation permutation matrix IT e en xn" such that Fl E C"x QAIT = [~ ~ l (10. Example 10. The H. Let A E e~xn. 134]. phenomena at a cost considerably less than a full SVD. p. and £ denote the numbers of positive. Again. sig(A) = rr — v. Then H HAX). where R E Crrxr is upper triangular and S e C rx( " r) is arbitrary but in general nonzero.27.rrxr is upper (or lower) triangular with positive diagonal elements. Let A e e~xn.xr E erx(n-r) arbitrary general nonzero. Let A e Cnxn and X e Cnnxn. It turns out that the principal property so preserved is the sign preserved under congruence. (TT. [21.t h e n A > 0 if and only if In (A) = (n. of A. p.xr is upper (or lower) triangular with positive diagonal elements.5) where R E e. Remark 10. [21. then XH AX is also Hermitian.e.30.. 0. n The signature of A is given by sig(A) = n . 2. D Proof: For the proof.29. If A = A" E C nnxn. v. D Theorem 10. Then there exist Theorem 10. Equivalence Transformations and Congruence 103 103 Theorem 10. If In(A) = (rr.26. if A is Hermitian.6) E e. see [4] for details. It is of interest to ask what other properties of a matrix are preserved under congruence. see [4]. D 0 Remark 10. If A AH e e x " then A> 0 if and only if In(A) = (n. and eigenvalues. 0). v. for example.31 guarantees that rank and signature of a a matrixare preserved under Theorem 10. see [4] for details. Equivalence Transformations and Congruence 10. where R e €. Again. Note that congruence preserves the property of being Hermitian. i. negative. Then is the numbers In(A) = (rr. Then the inertia of A is the triple of inertia of of negative. nxn E e X E e~xn. Theorem 10.In[! 1o o o 0 0 00] -10 =(2.26. We then have the following. v. Proof: For the proof. of A.0). l.29. In(A) = In(X AX). of each eigenvalue.1. The signature of is Example 10. 0 2.. £). see [4]. congruence. respectively. then rank(A) = n + v. When A has full column rank but is "near" a rank deficient matrix. v. i. various rank revealing QR decompositions are available that can sometimes detect such various rank revealing QR decompositions are available that can sometimes detect such phenomena at a cost considerably less than a full SVD.10.28. Note that a congruence is a similarity if and only ifX is unitary. In(A) 3. 
.e.31 (Sylvester's Law of Inertia). Then there exist unitary matrices U e Cmxm and V E Cnxn such that unitary matrices U E e mxm and V e e nxn such that (10. Definition 10. 134]. Definition 10. see.

.4 10. Define the x n matrix vv = diag(I/~. the next v are negative. -1. . and the final £ are 0.. . 0. D > and . Suppose A = AT and D = DT. v. Theorem 10.0).3.. I/~. . .33. . I.. -1. .. Suppose A = AT and D = DT.. if and if either A> and D ... I/. Theorem positive. .. for example.3. . AA+B = B. Canonical Forms Theorem 10. ifand only ifeither A > 0 and D . . and D . where the number of E c~xn XH AX = diag(1. the number of — 's is v. £).. Let A = AHeE cnxn with In(A) = (jt. -I. 0). . . if ifA>0.35.104 104 Chapter 10. -1.BD^BT > 0. v... . Note the symmetric Schur complements of A (or D) in the theorem. or D > 0 and A . ..f-Arr+I' .33.1 10. . By Theorem 10.BT A-I > 0.BT A+B > 0.0. and the numberofO's is~. . X e C"nxn such that XHAX = diag(l.. left AT D DT. . AA+B = B.. .1). . Then Remark Remark 10.. X UW desired 10.BD.34.. and D .. . Proof: proof Proof: The proof follows by considering. where the number of X 1's is Jr.. 's is 7i. .BT A+B:::: o. An). Then there exists a matrix AH C"xn In(A) = (Jr..32. ..f-Arr+v. the congruence B ] [I D ~ 0 _A-I B I ° JT [ A BT ~ ][ ~ 0 D The details are straightforward and are left to the reader. Canonical Forms Chapter 10. A w ). 1/.4 Rational Canonical Form Rational Canonical Form rational One final canonical form to be mentioned is the rational canonical form. 1.. An of Jr Proof: Let AI . . 0 D 10.1 Block matrices and definiteness Theorem 10.... 0 D Then it is easy to check that X = V VV yields the desired result. . O. the number of -Il's is v.BT A~l B > 0. . Proof: AI. B D ] > - ° if and only if A:::: 0. .I BT > O. . the number 0/0 's is (.2 there exists a unitary matrix V such that VHAU = diag(AI. Xw denote the eigenvalues of A and order them such that the first TTare ~ O. Define the nn x n matrix U UH AV = diag(Ai. Proof: Consider the congruence with Proof: Consider proof Theorem and proceed as in the proof of Theorem 10. Then = AT D = DT.. I..

A matrix A E lRn Xn is said to be nonderogatory ifits minimal polynomial if its minimal polynomial and characteristic polynomial are the same or. Then it can be shown (see [12]) that A is similar to a matrix of the form is similar to a matrix of the form o o o o 0 o o o (10. A is easily seen to be similar to the following matrix identity similarity P given by (9. In fact. the Moreover. if its Jordan canonical form and characteristic polynomial are the same or.4.(ao + «A + . equivalently. To illustrate. if its Jordan canonical form has only one block associated with each distinct eigenvalue. consider the companion matrix illustrate. Suppose A E lRnxn is a nonderogatory matrix and suppose its characteristic polynoSuppose A E Wxn is a nonderogatory matrix and suppose its characteristic polynon(A) An — (a0 + alA + a n _iA n ~'). the inverse of a nonsingular companion matrix is again in companion form. + an_IAn-I). Rational Canonical Form 10. For £*Yamr\1j=» example. A matrix A e E nx " of the form (10. To Companion matrices also appear in the literature in several equivalent forms..9) Moreover.7) Definition 10. l 0 0 -~ ao -~ ao _!!l (10. : ~ ! ~01].11) . o (10.7) is called a cornpanion rnatrix or Definition 10.10) o 1 o 1 o o o o o o (10. consider the companion matrix (l0. has only one block associated with each distinct eigenvalue. Using the reverse-order This matrix is a special case of a matrix in lower Hessenberg form..8) This matrix is a special case of a matrix in lower Hessenberg form. equivalently. the inverse of a nonsingular companion matrix is again in companion form.37.37. Rational Canonical Form 105 105 Definition A matrix A e M"x" is said to be Definition 10. For In fact. A matrix A E lRnxn of the form (10. Companion matrices also appear in the literature in several equivalent forms. Using the reverse-order identity similarity P given by (9. is said to be in cornpanion form.7) is called a companion matrix or is said to be in companion forrn. since a matrix is similar to its transpose (see exercise 13 in Chapter 9). A is easily seen to be similar to the following matrix in upper Hessenberg form: in upper Hessenberg form: a2 al o 0 0 1 o 1 6] ao o . Notice that in all cases a companion matrix is nonsingular if and only if aO i= O.Then it can be shown (see [12]) that A mial is 7r(A) = A" . since a matrix is similar to its transpose (see exercise 13 in Chapter 9).18). the following are also companion matrices similar to the above: following are also companion matrices similar to the above: Notice that in all cases a companion matrix is nonsingular if and only if ao /= 0.18).4.36.10.

see. in matrices are known to possess many undesirable numerical properties. also be derived easily.4aJ) . If a companion matrix of the form (10.38. nonsingular ones are nearly singular. Let a = a\ + a\ + • • • + a%_{ and y = 1 + «. an-If and let e M"" \a\. associated at least one eigenvalue.. = ~ (y . with a similar result for companion matrices of the form (10. Such matrices are said to be in rational canonical form Frobenius rational canonical form (or Frobenius canonical form). stable ones are nearly unstable. Companion matrices have many other interesting properties. see [14].e. Canonical Forms with a similar result for companion matrices of the form (10. at least one eigenvalue.Jy2 . and so forth [14]..4ao ' 1 2) - a? = 1 for i = 2. Theorem 10.caa T = (I + aaT) -I . then it is not similar to a companion matrix of the form (10.Q + a. Algorithms to reduce but unfortunately they are often very difficult to work with numerically.7). [12]. Explicit formulas for all the associated right and left singular vectors can also be derived easily. the largest and smallest singular values can also be written in the equivalent form If ao =1= 0. among which.. Canonical Forms Chapter 10. and so forth [14]. 02.7) is singular. has more than one Jordan block associated with If A € JRnxn derogatory.39. Then + ai + . Moreover. and hence the pseudoinverse of a singular companion + matrix is not a companion matrix unless a = 0. Then A in (10. if ao = 0. Let a E JRn-1 denote the vector [ai.10). especially nonsingular ones are nearly singular. their eigenstructure is extremely ill conditioned... . Moreover.. among which..39.1.7). is the fact that their singular values can be found in closed form. Such matrices are said to be in each of whose diagonal blocks is a companion matrix. If A E R nx " is derogatory. it can be shown that a derogatory matrix is similar to a block diagonal matrix. For details. Companion matrices appear frequently in the control and signal processing literature Companion matrices appear frequently in the control and signal processing literature but unfortunately they are often very difficult to work with numerically.7). a. I — T = T) Note that / . . form). For example..e. Leta = ar aJ al 2_ 2 ( y + Jy 2. n .. for example. a n -i] and l c I+~T a. if ao = 1 inverse can still be computed. a2. and perhaps surprisingly. and perCompanion matrices have many other interesting properties. each of whose diagonal blocks is a companion matrix. is the fact that their singular values can be found in closed form..10). Ifao ^ 0. For example. then its pseudoIf singular. . . . Then it is easily verified that c = l+ ara' Then it is easily verified that o o o + o o o o o o 1.. i. Explicit formulas for all the associated right and left singular vectors can Remark 10. However. Let a\ > a2 > . then it is not similar to a companion matrix of the form (10. + a. in n general and especially as n increases.106 Chapter 10. i. 3..._1 and y = 1 + + a.38. stable ones are nearly unstable.• > an be the singular values of the companion matrix A in (10.7). see haps surprisingly. companion matrices are known to possess many undesirable numerical properties. matrix is not a companion matrix unless a = O. companion an arbitrary matrix to companion form are numerically unstable.. Algorithms to reduce an arbitrary matrix to companion form are numerically unstable.caa T ca o J. Let al ~ GI ~ • • ~ an be the singular values of the companion matrix Theorem 10. 
the largest and smallest singular values can also be written in the equivalent form Remark 10..

• • > on > 0. Prove that if A e M"x" is normal. one may lose up to k digits of to the matrix P-norm. If this number is large. Let A = I J : ]eEC 22x2. Show that a.38 yields some understanding of why difficult numerical Remark 10.5 to find a unitary matrix Q that reduces A e C"x" to lower triangular form. Theorem 10.. is true if n = 2. If A e jRn xn 8. EXERCISES EXERCISES 1. Let R.EA(A) I'MpeA) 3.(A)I for ii E!l. 3.. It is easy to show that y/2/ao < K2(A) < -£-. then peA) = ||A||2. Let A 7... Show that the converse radius of A. In the 2-norm. K\ (A) (10.. .4a5 21 a ol It is easy to show that 21~01 :::: k2(A) :::: 1:01' and when ao is small or y is large (or both). Suppose A e E"x" is positive definite. If A E Wxn is positive definite.. then it must be diagonal. say 0(10*). show that A-I must also be positive definite.. by the theorem.. A E jRnxn N(A) = A/"(A ). Show that if a triangular matrix is normal. when solving linear behavior might be expected for companion matrices. Show that if A is normal. Is [ ^ A E jRnxn is definite. one measure of numerical sensitivity is KP(A) = A A -] > the so-called condition number of A with respect to inversion and with respect II ^ IIpp II A~l IIpp'me so-called condition number of A with respect to inversion and with respect to the matrix p-norm. Show that if A is normal. 6. A E cc nxn peA) = max). S 6 E nxn be symmetric. and when GO is small or y is large (or both). Find a unitary matrix U such that [~ M CC x 2 Find a unitary matrix U such that 6.. Let A G Cnx" and define p(A) = maxx€A(A) IAI. then p(A) = IIAII2' Show that the converse is true if n = 2. A [ must also be positive 7. Then p(A) is called the spectral radius of A. Let A € C n xn be normal with eigenvalues y1 . one may lose up to k digits of precision.-(A)| for e n. In the 2-norm. can be determined explicitly as determined explicitly y+J y 2 .11). (A) = IA. Use the reverse-order identity matrix P introduced in (9. If this number is large. An and singular 0'1 > 0'2 ~ 4. Show that a.. say O(lO k ). 9. yn and singular values a\ ~ a2 > . (A) |A. Remark 10. For example. Let R.18) and the matrix U in identity in (9.Exercises Exercises 107 Companion matrices and rational canonical forms are generally to be avoided in floatingCompanion matrices and rational canonical forms are generally to be avoided in fioatingpoint computation. R> S [1 A~I] ~ O? /i 1 > 0? ~] > 0 if and only if > 0 and J 1 > 0 if and only if S > 0 and . For example. E jRnxn be symmetric. .40. when solving linear equations numerical sensitivity Kp(A) = systems of equations of the form (6. Show that [ * }. then K2(A) ^ T~I..11).2).18) U A E cc nxn Theorem 10. Note that explicit formulas then K2(A) ~ I~I' It is not unusualfor y to be large forlarge Note that explicit formulas Koo(A) for K] (A) and Koo(A) can also be determined easily by using (l0. 1.38 yields some understanding of why difficult numerical behavior might be expected for companion matrices. 5. this condition number is the ratio of largest to smallest singular precision. this condition number is the ratio of largest to smallest singular values which. . Show that [~ R > S-I. Theorem 10. A E en x n eigenvalues A].• ~ an ~ O.40. It is not unusual for y to be large for large n. then Af(A) = N(A Tr ). 2.

Canonical Forms Chapter 10.j 1+ j ] -1 . Canonical Forms [~ ~ l (b) [ -2 1. Find the inertia of the following matrices: following 10.1 1. (a) Chapter 10.j 1+ j ] -2 ' (d) [ .108 108 10. .

Ak.nxn is defined by Definition 11. where the matrix A E Rnxn is constant chapter only to the so-called time-invariant case. e° = I.1. eO = I. Proof: This follows immediately from Definition 11. the matrix exponential e A e JR.1 and linearity of the transpose.1 by setting AA =O.2) can be shown to converge for all A (has radius of convergence equal The series (11.2) can be shown to converge for all A (has radius of convergence equal to +00). This is known as an initial-value problem.nxn. where the matrix A e JR.1 Differential Equations Differential Equations = Ax(t).1 Properties of the matrix exponential Properties of the matrix exponential 1. The solution of (11.1) is then known always to exist and be unique. T T 109 109 . the matrix exponential e A E Rnxn is defined by the power series power series e = A L +00 1 .1. Definition 11. For all A JR.1) for t 2: to.1 11.1 and linearity of the transpose.2) k=O The series (11. For all A e Rnxn. Proof This follows immediately from Definition 11. A) • 2. Proof: This follows immediately from Definition 11.1) involves the matrix to +(0). For all A E JR. 11. (11. which thus also converges for all A and uniformly in t.1) is then known always to exist and be and does not depend on t. = Xo In this section we study solutions of the linear homogeneous system of differential equations In this section we study solutions of the linear homogeneous system of differential equations x(t) x(to) E JR.n (11. The solution of (11. (e(eAf = -e A e^. It can be described conveniently in terms of the matrix exponential. unique.1 by setting = 0.nxn. We restrict our attention in this for t > IQ. We restrict our attention in this chapter only to the so-called time-invariant case.1) involves the matrix (11.1 11.1. k. It can be described conveniently in terms of the matrix exponential.3) which thus also converges for all A and uniformly in t. The solution of (11.1. Forall A EG R" XM .nxn is constant and does not depend on t. The solution of (11. This is known as an initial-value problem. Proof This follows immediately from Definition 11.Chapter 11 Chapter 11 Linear Differential and Linear Differential and Difference Equations Difference Equations 11.

) ( 1+ tA + t2!A 2 +. B E R" xn and for all t E JR.A)-I} = «M.. all A € R"x" and for all t € lR. Proof: We prove only (a). For all A. For all e JRnxn and for all E R. T E JR. 6.. {+oo = io et(-sl)e tA dt since A and (-sf) commute =io (+oo ef(A-sl) dt . (e'A)~l e~'A. Proof" We prove only (a). Compare like powers of A in the above two equations and use the binomial theorem Compare like powers of A in the above two equations and use the binomial theorem on(t+T)k.A)-I..110 110 Chapter 11. Then for E JRnxn t E R..l-I{(sl. ForaH A E R" x " and for all t e JR. AB = BA. For all A E JRnxn and for all t. and B commute. Linear Differential and Difference Equations e(t+r)A e(t+T)A 3.l{e tA}} = (sI . 2! and and e e tA rA 2 = ( I + t A + t2! A 2 +. ) . (a) C{etA = (sI-Arl. Part (b) follows similarly. = e'A erA = elAe'A . (b) .. 5. (a) . 2 2! and and while while e e tB tA = ( 1+ tB t2 2 + 2iB 2 +. 4.1 {(j/-A). (etA)-1 = e. Compare like powers of t in the first equation and the second or third and use the Compare like powers of t in the first equation and the second or third and use the k binomial theorem on (A + B/ and the commutativity of A and B.. Then for 6. Proof" Simply take T = — t in property 3... Proof" Note that Proof: Note that e(t+r)A = etA erA = erAe tA . AB = B A..e. Part (b) follows similarly. et(A+B) =^e'Ae'B = e'Be'A and and B commute. r e R.tA .. ) ( I + T A + T2!2 A 2 +. Proof' Note that Proof: Note that et(A+B) = I t + teA + B) + -(A + B)2 + ... Let denote the Laplace transform and £~! the inverse Laplace transform.1 } = erA... (b) £. et(A+B) =-etAe tB = etBe tA if and only if A all e JRnxn and all e R. Proof: Simply take T = -t in property 3. For all e R"x" and for all t. ) . Linear Differential and Difference Equations Chapter 11. on (t + T)*. binomial theorem on (A B) and the commutativity of A and B. i. i. Let £ denote the Laplace transform and £-1 the inverse Laplace transform.e. = I + (t + T)A + (t + T)2 A 2 + .

If this is not the case.l)e . . ) = L'lt IIA 21111e tA IIe~tIIAII. For all A E JRnxn and for all E JR. s . ) 3! 4! L'ltiIAIl < L'lt1lA21111e (1 + + (~t IIAII2 + .1. ) = I ( Ae + = tA ~.. ) 3 II I ( ~..H using the JCF.. the formal definition d dt _(/A) = lim ~t-+O e(t+M)A _ etA L'lt can be employed as follows. Alternatively.H dt assuming A is diagonalizable . ..A) ~' is called the resolvent of A and is defined for all s not in A (A)...=1 m Xiet(Ji-sl)y.. employed I e(t+~t)AAt.u . .A"I i=1 .Ae tA tA tA I I e tA . that A is diagonalizable.etA .A)-I. Differential Equations 11. for convenience. Notice in the proof that we have assumed. using the JCF. it can be differentiated term-byProof: Since the series (11.. For all A e R"x" and for all t e R.1.3) is uniformly convergent.. All succeeding steps in the proof then follow in straightforward way...1 The matrix (s I — A) -I is called the resolvent of A and is defined for all s not in A (A). = (sl -A).H L... Differential Equations 111 111 = {+oo 10 n t 1 e(Ai-S)t x. The matrix (s I .Ae tA I = III (etAe~tA L'lt = = /A) . A 2etA + ..11. Notice in the proof that we have assumed.H = '"' assuming Re s > Re Ai for i E !! = (sI .. it can be differentiated term-byterm from which the result follows immediately.AetAil Ae tA I ~t (e~tAetA I (M A I ~t (e~tA .. e'A Proof: Since the series (11..=1 = ~[fo+oo e(Ai-S)t dt]x.y... for convenience. the scalar dyadic decomposition can be replaced by If this is not the case. that A is diagonalizable. the scalar dyadic decomposition can be replaced by et(A-sl) =L . For any consistent matrix norm. £(e'A) 7. A2 + (~~)2 A tA II tA Il 1 (_ 2! + .X i y. 1h(e tA ) = AetA = etA A.All succeeding steps in the proof then follow in aastraightforward way.Ae tA etA) . ) etA I < MIIA21111e - L'lt (L'lt)2 + -IIAII + --IIAI12 + .y.3) is uniformly convergent..Ae II = I L'lt (M)2 + ~ A 2 +.

i~t()Oc() nnd uniqu()Oc:s:s theorem for *('o)} = <?(f°~fo)/1. D Ir: Remark 11. The solution of the linear homogeneous initial-value problem = Ax(l).5) and use property 7 of the matrix exponential to get x t ) = Ae(t-to)A xo fundamental Ae(t~to)Axo = Ax(t). The general formula formula d dt l q (t) pet) f(x.. The formula can be derived by means of an integrating factor "trick" direct differentiation.8) .6). say. Premultiply the equation x . by the fundamental existence and uniqueness theorem for ordinary differential equations. D 11. the limit exists and equals Ae'A •. B e IR xm and let the vector-valued function u be given Theorem and. the right-hand side above clearly goes to 0 as t:. Then the solution of the linear inhomogeneous initial-value problem x(t) = Ax(t) + Bu(t).1. Thus. (11.2.5) is the solution of (11. 11. the For fixed t. continuous. fact that A commutes with any polynomial of A of finite degree and hence with etA.Ax = Bu by e. The proof above simply verifies the variation of parameters formula by direct differentiation.¥o + 0 = XQ so. Also. Let A E Rnxn . by the fundamental existence and x(t0) — e(fo~t°')AXQ — Xo uniqueness theorem for ordinary differential equations. 0 ordinary differential equations. Premultiply the equation x — Ax = Bu by e~ to get (11.1..5) is the solution of (11. (11. x(to) = Xo E IR n (11. B E Wnxm and let the vector-valued function u be given Let A e IR nxn . Thus. t ) . The formula can be derived by means of an integrating factor "trick" as follows. Linear Differential and Difference Equations For fixed t. x(to) = e(to-to)A Xo = XQ so. continuous. or one can use the fact that A commutes with any polynomial of A of finite degree and hence with e'A.4). (11. Also.5) and use property 7 of the matrix exponential to get x ((t) = Proof: Differentiate (11. t ) .tA to get as follows. x(to) = Xo E IRn (11.7) is the solution of (1l.7) is the solution of (11.f(p(t).6). say.2 Homogeneous linear differential equations Homogeneous equations x(t) Theorem 11. Ae(t-s)A Bu(s) to get x(t) = Ae{'-to)A Xo + Bu(t) = Ax(t) = x(to e(to-tolA Xo + = Xo fundilm()ntill ()lI.t goes to O. 0 uniqueness theorem for ordinary differential equations. Then the solution of the linear inhomogeneous initial-value problem and.7) Proof: Differentiate (11. Linear Differential and Difference Equations Chapter 11.3.4).4. The proof above simply verifies the variation of parameters formula by Remark 11.6) for t ::: to is given by the variation of parameters formula for t > IQ is given by the variation of parameters formula x(t) = e(t-to)A xo + t e(t-s)A Bu(s) ds. the right-hand side above clearly goes to 0 as At goes to 0.4) for t ::: to is given by (11.7) and again use property 7 of the matrix exponential.dt dt is used to get x ( t ) = Ae(t-to)Ax0 + f'o Ae('-s)ABu(s) ds + Bu(t) = Ax(t) + Bu(t). lo t (11.5) Proof: Differentiate (11. or one can use the limit exists and equals Ae t A A similar proof yields the limit et A A.3 Inhomogeneous linear differential equations Inhomogeneous equations Theorem 11.112 112 Chapter 11.4. Let A E IR n xn. The solution ofthe linear homogeneous initial-value problem Let A e Rnxn. The general Proof: Differentiate (11. t) dx = l af(x t) ' dx pet) at (t) q + dq(t) dp(t) f(q(t).7) and again use property 7 of the matrix exponential. (11. A similar proof yields the limit e'A A.

X t) X 0 D Corollary 11. For convenience.12). X((t) is symmetric and (11.sA Bu(s) ds x(t) = e(t-tolA xo + lto t e(t-s)A Bu(s) ds.e-toAx(to) = lto t e. Theorem 11.11. problem problem X(t) = AX(t) + X(t)B. and hence t d -e-sAx(s) ds = to ds 1t to e-SABu(s) ds.6.2.4 11. Let A E Rnxn. Then the matrix initial-value problem X(t) = AX(t) + X(t)AT. we can have coefficient matrices on both the right and left. Theorem 11.. e-tAx(t) .11) X(t) = etACe = e ratB has the solution X ( t ) — atACe tB .1.1. the following theorem is stated with initial time to = 0. following to = O. e jRnxn.6.5. C e IR" ".1.7. the Proof: Differentiate etACe tB property Proof: Differentiate etACetB with respect to t and use property 7 of the matrix exponential. t exponential. and C e Rnxm. The first is an obvious generalization of Theorem 11. X(to) =C E jRnxn (11. E ]R. The initial-value problem (11.7.8) over the interval [to. E ]R. t]: Now integrate (11. and the proof is essentially the same. differential equation. Then the matrix initial-value E jRmxm. punov differential equation. . The of nrohlcm problem X(t) = AX(t). Differential Equations [to. The fact that X((t) satisfies the initial condition is trivial. t]: 113 1 Thus. the When C is symmetric in (11.nxm.12) X(t) = etACetAT has the solution X(t} = etACetAT.nxn. The solution of the matrix linear homogeneous initial-value e jRnxn.1. B e R m x m .9) for t ::: to is given by for t > to is given by X(t) = e(t-to)Ac.11) is known as a Sylvester Sylvester differential equation. X(O) = C (11. Let A E Wlxn. X(O) =C (11.4 Linear matrix differential equations Linear matrix differential equations Matrix-valued initial-value problems also occur frequently.10) coefficient In the matrix case. Differential Equations 11.2. (11. Corollary 11. Theorem 11. the Theorem 11. Let A.12) is known as a LyaX t) punov differential equation. 11. and the proof is essentially the same.

5 Modal decompositions Let A and suppose. Linear Differential and Difference Equations Chapter 11. Similarly. for convenience.e'J. Linear Differential and Difference Equations 11.5 11.H . in the inhomogeneous case we can write Similarly. in the inhomogeneous case we can write t e(t-s)A Bu(s) ds i~ = t i=1 (it eAiU-S)YiH Bu(s) dS) Xi. ~ 11. This modal decomposition can be expressed in a different looking but identical form This modal decomposition can be expressed in a different looking but identical form n if we write the initial condition Xo as a weighted sum of the right eigenvectors if we write the initial condition XQ as a weighted sum of the right eigenvectors Xo = L ai Xi.114 114 Chapter 11. ~ 1=1 I t.iU-tO)Xiyr) Xo 1=1 n = L(YiHxoeAi(t-tO»Xi. i=1 The ki s are called the modal velocities and the right eigenvectors Xi are called the modal The Ai s are called the modal velocities and the right eigenvectors *. that it is diagonalizable (if A is not diagonalizable. where J is a JCF for A. The decomposition above expresses the solution x(t) as a weighted sum of its modal velocities and directions. Then Then i=1 n = L(aieAiU-tO»Xi. for convenience.6 Computation of the matrix exponential Computation exponential JCF method JCF method Let A e R"x" and suppose X E Rnxn is such that X"1 AX = J. In the last equality we have used the fact that YiHXj = flij.1.x. i=1 In the last equality we have used the fact that yf*Xj = Sfj. Then Then etA = etXJX-1 = XetJX. modal velocities and directions.4) can be written A = L X.1 .4) can be written x(t) = e(t-to)A Xo E jRnxn E Wxn = (ti. the rest of this subsection is easily generalized by using the JCF and the decomposition H A — ^ Xf Ji YiH as discussed in Chapter 9). Then the solution x(t) of (11. are called the modal directions. that it is diagonalizable (if A is not diagonalizLet A and suppose. Let A E jRnxn and suppose X e jR~xn is such that X-I AX = J.li y t as discussed in Chapter 9).y.1. if A is diagonalizable in geneml. the rest of this subsection is easily generalized by using the JCF and the decomposition able. Then the solution x(t) of (11.1 n Le A• X'YiH . where J is a JCF for A. The decomposition above expresses the solution x (t) as a weighted sum of its directions.

. Differential Equations 11.. l's For the matrix N defined above.eAt).. Thus. ••• .1. degree k. of In the more general case.7. k) element and has O's everywhere else. let .EeCkxk be aaJordan block of the form Ji <Ckxk be Jordan block of the form A Ji = 1 o o o =U+N.1. is complex. Mp~l ^ O. and N kforth.. it is easy to check that while N has 1's along only its first superdiagonal (and O's elsewhere). t2 t k. Thus. A.. or grade) MP = 0. A matrix M E M nx " is nilpotent of degree (or index.I e IN =I+tN+-N 2 + . Nk~lI has a 1 in its (1.0. teAl eAt = 0 0 0 2I e 12 At teAl 0 eAt In the case when A is complex. i. O. e lN finite. or grade) p if if matrix M e jRnxn is nilpotent of degree (or index. eAt teAt eAt o 2I e 12 At Ik-I At e (k-I)! 0 ell. AI e I.8. k) O's k k N = 0. N22 has l's along only its second superdiagonal. o A o o A Clearly A/ and N commute.!etN by property 4 of the matrix exponential. ext}.11. aareal version of the above can be worked out. and so forth. it is then easy to compute etA via the formula etA = XetJ X-I' Xe tl X If is etA etA tj since et I is simply a diagonal matrix. Mp = 0. + N k2! (k . the series expansion of e'N is finite..I)! I o t 1 o Thus. Differential Equations 115 If A is diagonalizable. But e tN is almost as easy since N The diagonal part is easy: e e = diag(e '. the problem clearly reduces simply to the computation of problem clearly reduces the exponential of a Jordan block. real version of the above can be worked out. elN is is nilpotent of degree k. N has 1's along only its second superdiagonal..e. nilpotent Definition 11. To be specific. e'u e l N tu x lH = diag(e At . . e ttJi = eO. its first superdiagonal (and O's elsewhere). while MP-I t=. (1. Finally.

n .t • g'(-1) = f'(-1) g"(-I) = 1"(-1) . . in fact.. Thus. Here. Then jr(A. -(A.10. Let A = [-~ o -~0-1~ ] t .2t ][ -1 ] Interpolation method Interpolation method This method is numerically unstable in finite-precision arithmetic but is quite effective for effective hand calculation in small-order problems.I. Let Example 11. + I) 3 . so m = 1 and nl Let g(X) = UQ + alA + a2A2. . Then the three equations for the a.) = -(A + 1)3. the function g is known and f(A) = g(A). -2} and Example 11.. Linear Differential and Difference Equations Example 11. With the aiS then kth superscript (&) X. k = 0. Suppose the characteristic polynomial of A can be written as n(A)) = Yi?=i (A . -2} and etA Xe tJ =[=i a = x-I =[ =[ 2 1 ] exp t ] [ [ -2 0 -~ ] [ -1 -1 2 -1 2 ] 2 1 e~2t te. characteristic of n(X (^ ~~ ^i)"'» where the A. .nxn and /(A) = etx.s are distinct. Linear Differential and Difference Equations Chapter 11. ==> 2a2 = t 2 e. The method is stated and illustrated for the hand calculation in small-order problems. lower-order g Example 11.2. an-i solution of the n equations: g(k)(Ai) = f(k)(Ai). Then A(A) = {-2. . an-l are n constants that are to be determined. The polynomial g gives the appropriate linear combination.a l +a2 = e==> at .Ai t'. f(A) n(A) etK. g(-I) = f(-1) ==> ao . where t is a fixed scalar.t .. terms of order greater than n .9. .. functions.. I... the unique OTQ. Let A = [ ~_\ J]. ni . the function g is known and /(A) = g(A).. so m = 1 and n{ = 3. Theorem 9.116 Chapter 11. The motivation for this method is known. They are. The motivation for this method is the Cayley-Hamilton Theorem. . the superscript (k) denotes the fcth derivative with respect to A. and /(A) = etA. i Em. . compute f(A) = e'A.1. which says that all powers of A greater than A n . Given A E jRnxn and f(A) = etA. . . t fixed Given A € E. Define the Ai nr=1 n where ao.1 in the power series for et A can be written in terms of these greater n— e' A lower-order powers as well.2t e. Let A Then A (A) = {-2.2a2 = te. I. The method is stated and illustrated for the exponential function but applies equally well to other functions.1 can be expressed as linear combinations of Ak for k = 0. a.10. — 1.3.9. . . compute f(A) = etA.s are given by g(A) — ao aiS a\X o^A. ... all the Ak — expressed k 1.s known.

Then the defining equations for the a.11. Differential Equations 11 . Then the defining equations for the aiS are given by 6] g(-2) = f(-2) ==> ao ==> al 2al = e.A)^ 1 } and techniques for inverse Laplace transforms. g'(-2) = f'(-2) = te- Solving for the a. Let A = [ ~4 J] and /(A) = eO-. te.2t _ Other methods Other methods 1. Use etA = £~l{(sl . 2.1. Differential Equations Solving for the ai s. Let g(A.2t -te. Let A _* Example 11.2t .2t aL = + 2te.2t .c-I{(sI — A)-I} is quite effective for small-order problems.2t ) 2te. 2. but general nonsymbolic computational effective small-order techniques numerically problem equivalent techniques are numerically unstable since the problem is theoretically equivalent to knowing precisely a JCE JCF.11. Thus.2t . Then 7r(X) = (A+ 2)22 so m = 11and [::::~ 4i and f(A) = ea.s. 1.) = «o + ofiA. Use Pade approximation. we find Solving for the aiS. we find ao = e. Then rr(A) = f\ + o\2 so m = and (A i 2) «i nL = 2. t ff>\ tk TU^^ _/"i\ Example 11.-s are given by Let g(A) ao + aLA. f(A) = etA = g(A) = aoI + al A = (e. The matrix analogue yields e A ~ functions rational eA = .2t I [-4 4] -I 0 _ [ - e.2t [ ~ o ] + te. There is an extensive literature on approximating certain nonlinear functions by rational functions. s.2t + 2te.11. we find Solving for the a. This etA = .1. we find 117 Thus.. 2t .

118 118 l Chapter 11. Linear Differential and Difference Equations Chapter 11. Let A E Rnxn.15) . modeled by systems of difference equations. in the matrix case the exponential is accurate only in a neighborhood of the origin. This can be arranged by scaling A. case. Linear discrete-time systems.14) into (11. eS .e. exhibit many parallels to the continuous-time differential equation difference equations. we restrict our attention only to the so-called time-invariant case. Unfortunately. for example. and since we consider an arbitrary "initial time" ko. Many methods are outlined in. [19].2 11. exhibit many parallels to the continuous-time differential equation case. Unfortunately. The solution of the linear homogeneous system ofdifference Let A e jRn xn. 11. where the matrix A in (11.12. of matrix functions such as e A and 10g(A) remains a fertile area for research. in the matrix case this means when || A|| is sufficiently small... Let A e Rnxn.14.13). say. [19]. and since we want to keep the formulas "clean" (i. 0 D Remark 11. for example. we have chosen ko = 0 for convenience. a Fade approximation for polynomials of various orders. + opAP and N(A) = vol + vIA + D~ (A)N(A). 11. no double subscripts). 11. Numerical loss of accuracy can occur in this procedure from the successive squarings.14) into (11.. by this means when IIAII is sufficiently small. e (e( 3. Reduce A to (real) Schur form S via the unitary similarity U and use e A 3.2. We could also consider an arbitrary "initial time" ko.2. Then the solution of the inhomogeneous initial-value problem m-vectors. Proof: The proof is almost immediate upon substitution of (11.13) is constant and does not depend on k.13). modeled by systems of equations of the previous section. where D(A) = 001 + olA + .e. convenience.2 Inhomogeneous linear difference equations Inhomogeneous linear difference equations E jRnxn.1 Homogeneous linear difference equations Homogeneous linear difference equations Theorem 11. Numerical loss of accuracy can occur in this procedure from the successive squarings.. Reliable and efficient computation 4. Linear discrete-time systems.2 Difference Equations Difference Equations In this section we outline solutions of discrete-time analogues of the linear differential In this section we outline solutions of discrete-time analogues of the linear differential equations of the previous section..13) for k > 0 is given by for k 2:: 0 is given by Proof: The proof is almost immediate upon substitution of (11.. where D(A) 80I Si A H h SPA and N(A) v0I + vlA + q Explicit formulas are known for the coefficients of the numerator and Explicit formulas are known for the coefficients of the numerator and denominator polynomials of various orders. we have chosen ko = 0 for want to keep the formulas "clean" (i. say. by 22' 2* )A A multiplying it by 1/2k for sufficiently large k and using the fact that A = / { ]I //2')A )\ * . Linear Differential and Difference Equations D-I(A)N(A). The solution ofthe linear homogeneous system of difference equations equations (11. 4. but since the system is time-invariant. This can be arranged by scaling A. Many methods are outlined in. E jRnxm {udt~ is of Theorem 11.13. Reduce A to (real) Schur form S via the unitary similarity U and use eA = U e SsUH Ue U H and successive recursions up the superdiagonals of the (quasi) upper triangular matrix and successive recursions up the superdiagonals of the (quasi) upper triangular matrix e s. Again.1 11.13) is constant and does not depend on k. 
and this observation is exploited frequently. and this observation is exploited frequently. but since the system is time-invariant. where the matrix A in (11. Again. •• vq A . a Pad6 approximation for denominator the exponential is accurate only in a neighborhood of the origin. we restrict our attention only to the so-called time-invariant Remark 11. Then the solution of the inhomogeneous initial-value problem (11. B e Rnxm and suppose {«*}£§ « a given sequence of m-vectors. We could also case.• + Vq A q.13. no double subscripts). = P = multiplying it by 1/2* for sufficiently large k and using the fact that e = ( e j .2. Reliable and efficient computation of matrix functions such as e A and log(A) remains a fertile area for research.

0 D 11.2.3 Computation of matrix powers Computation of matrix powers It is clear that solution of linear systems of difference equations involves computation of It is clear that solution of linear systems of difference equations involves computation of k. which is numerically unstable but sometimes useful for hand calculation.-z-A =I+-A+"2 A + . Then JCF for A. Then Ak = (XJX-I)k = XJkX.. LXi Jtyi . k=O Assuming Izl > max |A|. Jk .2.3 11. the z-transform of the sequence {Ak}} is then given by AEA(A) X€A(A) k "'kk 1 12 Z({A})=L. in general.. by analogy with the use of Laplace transforms to compute z-transforms.2. a matrix exponential.11. One solution method. k:::.O.H m if A is diagonalizable. the z-transform of the sequence {Ak is then given by Assuming |z| > max IAI.=1 H l If A is diagonalizable. again mostly for small-order probsmall-order lems.16) Proof: The proof is again almost immediate Proof: The proof is again almost immediate upon substitution of (11. Assume that A e M" xn and let X e jR~xn be such that X-I AX = /..16) into (11. sometimes useful Ak. +00 k=O z z = (l-z-IA)-I = z(zI . j=O (11. One definition of the z-transform of a sequence {gk} is a matrix exponential.16) into (11. X~1 AX JCF for A.15). One definition of the z-transform of a sequence is +00 Z({gk}t~) = LgkZ-k. is to use z-transforms.1 _I tA~X.y.15). based Methods based on the JCF are sometimes useful. substitution of (11.2. since /* is simply a diagonal matrix..A)-I. Difference Equations 11. it is then easy to compute Ak via the formula Ak = XJkXX-I Ak Ak — X Jk If diagonalizable. Difference Equations 119 119 is given by k-I xk=AkXO+LAk-j-IBUj. where J is a E jRnxn and X E R^n J.

1. The symbol ( ) has the usual definition of . let 7. 0 A Writing /. .2) .15. [11. in-I)(O) = Cn-I' (1l.1(2k . Let A = [_J Example 11.3 11. it is commute. y(O) = CI.)A - ( k ) Ak-P+I p-l 0 J/ = kA k. it is then straightforward to apply the binomial theorem to (AI + N)k and verify that straightforward N)k (XI verify Ak kA k-I Ak k 2 (. 1 -1 1 -2 1 ] Basic analogues of other methods such as those mentioned in Section 11.1 Ak ( .. Ch. For an erudite discussion of the state of the art.. Then Then 1 ] [(_2)k 1 0 k(-2)kk(-2) 1 ] [ _ [ (_2/. 11.17) with ¢J(t) a given function and n initial conditions 4>(t} y(O) = Co.2 0 0 0 0 kA k . Linear Differential and Difference Equations In the general case.3 Higher-Order Equations Higher-Order Equations differential It is well known that a higher-order (scalar) linear differential equation can be converted to higher-order a first-order linear system.1 Ak The symbol (: ) has the usual definition of q!(kk~q)! and is to be interpreted as 0 if k < q. the problem again reduces to the computation of the power of a In the general case.is complex. A is complex. but again no universally "best" method be derived for the computation of matrix powers.. aareal version of the above can be worked out. for example. In the case when A. the problem again reduces to the computation of the power of a To Ji E Cpxp Jordan block. but again no universally "best" method exists.1 (-2 .120 Chapter 11.2k) -k( _2)k-1 ] k( -2l+ (-2l.15. Let A Ak = XJkX-1 = [=i -4 a [2 1 J].(^ . 18]. Consider. . Linear Differential and Difference Equations Chapter 11.l8) .• = AI and noting that AI and the nilpotent matrix Writing Ji = XI + N and noting that XI and the nilpotent matrix N commute.. ) Ak.6 can also methods 11. see [11. To be specific. and is to be interpreted as 0 if k < q..6 be derived for the computation of matrix powers. Example 11. e Cpxp be a Jordan block of the form o . real version of the above can be worked out.1. the initial-value problem initial-value (11.

.a\X2(t) . . Note that det(X7 — A) = An + an-\Xn 1l H alA + ao. is often well worth avoiding. where !(eat . is often well worth avoiding.. Further.Exercises 121 121 Here.. the companion matrix A in (11. as mentioned before. where I + get. at least for computational purposes. y € R" and let A = xyT. the companion Note that det(A! . be a projection. . Show that e P ~ ! + 1. condition. Let P E lR nxn be a projection. let a = XT y. into a linear first-order difference equation with (vector) initial with n initial conditions. and.19) possesses many nasty numerical properties for even moderately sized n and. .an-lXn(t) + ¢(t).an_lln-l)(t) Xn-l (t) Xn(t) = y(n)(t) = -aoy(t) - + ¢(t) = -aOx\ (t) . (11. •. y E lRn and let A = xyT. a)xyT... x2(t) = y ( t ) . Xn(t) y { n ~ l ) ( t ) . EXERCISES EXERCISES 1. Show that etA 2... at least for computational purposes. xn(t) = In-l)(t). 3..19) possesses many nasty numerical properties for even moderately sized n matrix A in (11.19) The initial conditions take the form ^(0) = c [CQ. = Xn(t) = y(n-l)(t)....A) = A. y(m) denotes the mth derivative of y with respect to t. A similar procedure holds for the conversion of a higher-order difference equation A similar procedure holds for the conversion of a higher-order difference equation with n initial conditions."+ an_1A n-~ + . . Define a vector x (?) e R" with components *i(0 = y ( t ) . aly(t) . . into a linear first-order difference equation with (vector) initial condition. Suppose x. Cn -\ . Then Xl (I) X2(t) = X2(t) = y(t). However.718P. . c\.I) g(t.718P. v (m) denotes the mth derivative of y with respect to t. .. C M _I] The initial conditions take the form X (0) = C = [co. let a = xTy. = X3(t) = yet). X2(t) yet). Suppose x. . as mentioned before. Let P € R 1. Let 3. Let . Define a vector x (t) E ]Rn with Here. These equations can then be rewritten as the first-order linear system These equations can then be rewritten as the first-order linear system 0 0 x(t) = 0 0 1 0 0 0 -ao -a\ x(t)+ [ 0 1 -a n-\ n ~(t) r. 2. However. Cl. a)xyT.. Show that e'A 1+ g ( t . +h a\X+ ao. = O. Further.. Show that e % / + 1..a)= { a t nxn p if a if a 1= 0. Then components Xl (t) yet)..

... Show S~1 H S Hamiltonian. (b) Suppose S is symplectic and let A. Show that -).be an eigenvalue of H. Let (a) Solve the differential equation (a) Solve the differential equation i = Ax . Show that S-I HS must be Suppose and symplectic. Hamiltonian. must also be an eigenvalue of H. x(O) =[ ~ J. Hamiltonian if K~1ATK = -A and to be symplectic if K -I ATK = A --I. ft € lR and Let a. A matrix A e R 2nx2n is said to be K -I AT K = . Show that 1 /A.A 1 . 6. Show that —A. be an eigenvalue of H. must (a) Suppose E is Hamiltonian and let A. Show that eH must be symplectic.be an eigenvalue of S. Let a. must (b) Suppose S is symplectic and let). also be an eigenvalue of H. Let denote the skew-symmetric matrix 4. H (d) Suppose H is Hamiltonian. Show that E jRmxn e = [eoI A sinh 1 X ] ~I . Find a general expression for Find a general expression for 7.122 122 Chapter 11. (d) Suppose 5. must also be an eigenValue of S. Let 5. Let K denote the skew-symmetric matrix 0 [ -In In ] 0 ' In A E jR2nx2n where /„ denotes the n x n identity matrix. Linear Differential and Difference where X e M'nx" is arbitrary. . also eigenvalue of (c) Suppose that H is Hamiltonian and S is symplectic. f3 E R and Then show that Then show that ectt _eut cos f3t sin f3t ectctrt e sin ~t cos/A J. Linear Differential and Difference Equations Chapter 11. be an eigenvalue of S. (a) Suppose H is Hamiltonian and let). Find eM when A = Find etA = 8. Show that 1/).. 4.A and to be symplectic K~l AT K .

(c) Find the distribution of the companies' assets at year k. Consider the n x n matrix initial-value problem 10. (b) Find the eigenvalues and right eigenvectors of M. a quarter goes to Europe. what is the value of ZIQOO? What is the value of Zk in general? general? . (c) Find the distribution of the companies' assets at year k. 12.. Consider the initial-value problem 9. a quarter goes to Europe.YeO) = O. The year is 2004 and there are three large "free trade zones" in the world: Asia (A). For Europe and Asia. of Cf or all t. as k —»• +00 (i.e.e. 11. (d) Find the limiting distribution of the $40 trillion as the universe ends. Consider the n x n matrix initial-value problem X(t) = AX(t) . Suppose certain multinational companies have total assets of $40 trillion of which $20 trillion is in E and $20 trillion is in R.. what is the value of ZIOOO? What is the value of Zk in 2. (Exercise adapted from Problem 5.e.e.11 in [24]. I/X(t)1/2 = ex for all t > 0. and the Americas (R). x(O) =[ ~ l 9. Consider the initial-value problem i(t) = Ax(t).) (Exercise adapted from Problem 5. k ---* +00 (i.) 12. i. half stays home and half goes to the Americas. X(O) = c. 11..11 in [24]. Suppose certain multinational companies have Europe (E). yeO) = 1.3. Europe (E). If £0 = 1 and z\ If Zo = 1 and ZI = 2. around the time the Cubs win a World Series). Show that for t > 0. (b) Consider the difference equation (b) Consider the difference equation Zk+2 + 2Zk+1 + Zk = O. and the Americas (R). Suppose that e E"x" is skew-symmetric and let a = \\XQ\\2.Yet) + 2y(t) + yet) = 0. 10. as (d) Find the limiting distribution of the $40 trillion as the universe ends.3. (a) Find the solution of the initial-value problem (a) Find the solution of the initial-value problem . Each year half of the Americas' money stays home.. x(O) = Xo for t ~ O.Exercises Exercises (b) Solve the differential equation (b) Solve the differential equation i 123 = Ax + b. and a quarter goes to Asia. Show that the eigenvalues of the solution X t ) of this problem are the same as those Show that the eigenvalues of the solution X ((t) of this problem are the same as those of C for all?. i. Show that ||*(OII2 = aforallf > O. half stays home and half goes to the Americas. For Europe and Asia. Each total assets of $40 trillion of which $20 trillion is in E and $20 trillion is in R. . (a) Find the matrix M that gives (a) Find the matrix M that gives [ A] E R =M year k+1 [A] E R year k (b) Find the eigenvalues and right eigenvectors of M. The year is 2004 and there are three large "free trade zones" in the world: Asia (A).X(t)A. around the time the Cubs win a World Series). and a quarter year half of the Americas' money stays home. goes to Asia. Suppose that A E ~nxn is skew-symmetric and let ex = Ilxol12.

This page intentionally left blank This page intentionally left blank .

and A. The matrix A . As with the standard eigenvalue problem.2. e C. B).3.) = det(A . Definition 12. and Remark 12.'AB) is called the characteristic polyDefinition 12. 125 125 . B). hence nonreal eigenvalues must occur in complex conjugate pairs. the characteristic polynomial is obviously real. generalized eigenvalue problem. eigenvalues for the generalized eigenvalue problem occur pencil — XB problem occur where the matrix pencil A .1. called a generalized eigenvalue. then so is ax [ay] for any nonzero scalar a E <C. B E E" xn . Definition 12. (A. eigenvector.1 12.4. The roots ofn(X. As with the standard eigenvalue problem. e e.2) When the context is such that no confusion can arise. a nonzero vector y e C" is a left generalized eigenvector corresponding to an E en generalized eigenvector eigenvalue 'X if eigenvalue A if (12. The matrix A — 'AB is called a matrix pencil (or pencil of the matrices A Definition 12. characteristic hence nonreal eigenvalues must occur in complex conjugate pairs. B) with A. B e jRnxn. the adjective "generalized" "generalized" standard eigenvalue [y] is usually dropped. B).'AB is singular.1 The Generalized Eigenvalue/Eigenvector Problem The Generalized Eigenvalue/Eigenvector Problem Ax = 'ABx.3. The polynomial n('A) = det(A — A.4. such that that (12. When A. if x [y] is a right [left] ax [ay] for any eigenvector. . B E C MX if there exists a scalar A E C.Chapter 12 Chapter 12 Generalized Eigenvalue Generalized Eigenvalue Problems 12. The standard eigenvalue problem considered in Chapter 9 obviously where A. corresponds to the special case that B = I. B e enxn" if there exists a scalar 'A. B) with A.XB is called a matrix pencil (or pencil of the matrices A and B). called a generalized eigenvalue. Similarly. a. B e C" xn The standard eigenvalue problem considered in Chapter 9 obviously corresponds to the special case that B = I.2. Remark 12. A E en Definition 12.1) Ax = 'ABx.5) is called the characteristic polynomial of the matrix pair (A. In this chapter we consider the generalized eigenvalue problem In we the generalized eigenvalue problem where A. The polynomial 7r(A.) are the eigenvalues of the associated generalized eigenvalue problem. B E enxn.1. The roots ofn('A) are the eigenvalues of the associated nomial of the matrix pair (A. A nonzero vector x e C" is a right generalized eigenvector of the pair generalized eigenvector of (A. Definition 12.

(3 = O. the pencil A — XB is said to be 12. and ~. There are two eigenvalues.XB.AB Definition 12. otherwise. A — A. Then the characteristic polynomial is ft det(A . ft =I./. All A E C are eigenvalues since det(A . (3 = O. Generalized Eigenvalue Problems Remark 12. is singular. I and ^. There are two eigenvalues. I (of multiplicity 1).AB) and there are several cases to consider.LA. There are two eigenvalues. then rr(A) is a polynomial nonsingular). zero. {3 =I.6. If det(A — AB) not regular. At least for the case of regular pencils. the pencil A . Case Case 3: a = 0.6. 1 Case 3: a =I. Associated with any matrix pencil A . eigenvalues associated with the pencil A . Case 1: a =I./. For example. the characteristic polynomial is = (I .3).X B is a reciprocal pencil B — n. it is apparent where the "missing" eigenvalues have "missing" gone in Cases 2 and 3. it is said to be singular. reciprocal Case of reciprocal . However. ft ^ O. Case 2: a = 0. Note appear. ^ 0. Case 3: Case 4: = 0.AB.5. only the case of regular pencils is considered in the remainder of this chapter. k E !!. I1 and |.O. I and O.O. det(B . regular.{3 = 0.O. If B = I (or in general when B is nonsingular).L = (JL = £. Case 4: a = 0. f3 = O. when B =I.I. There are two eigenvalues. suppose associated — AB. That is to say. 1 and O.0. Clearly the reciprocal pencil has eigenvalues responding generalized /.B) == O. f3 = O. There are two eigenvalues. only the case of regular pencils is considered in the remainder of this chapter. All A 6 C are eigenvalues since det(B — uA) = O.3) where a and (3 are scalars.A.LA) == 0.a/. =I. B k e n. Case 2: a = 0. Note that if AA(A) n J\f(B) ^ 0.A and corAssociated with any matrix pencil — AB is a reciprocal pencil . Case 1: ^ 0. eigenvalues — AB. or infinitely many B = I. All A E C are eigenvalues since det(B . there may be 0. Case 1: a =I.LA and corresponding generalized eigenvalue problem.nA. Case 1: a ^ 0. Case 4: a = 0. f3 = 0. there is a second eigenvalue "at infinity" for Case 3 of of . the associated matrix pencil is singular (as in Case N(A) n N(B) =Isingular 4 above).AHa . 1 (of multiplicity 1). 1 and ~.126 126 Chapter 12. There are two eigenvalues. {3 =I.L) and there are again four cases to consider. in particular. While While there are applications in system theory and control where singular pencils appear. With A and B as in (12.0. 1 and 0. when B is singular. A similar reciprocal symmetry holds for Case 2.(3A) ±. with its reciprocal eigenvalue being 0 in Case 3 of the reciprocal pencil B — /.LA) = (1 .B. Case 4: a = 0.XB. {3 =I. Case 2: = 0. I multiplicity 1). It is instructive to consider the reciprocal pencil associated with the example in It reciprocal Remark 12. Case = ft ^ 0. n(X) Remark 12. However. (12. (3 = 0.5./. {3 = 0. {3 ^ 0. There are two eigenvalues.XB) is not identically zero. All A e C are eigenvalues since det(A — AB) =0. If del (A .L)({3 ./. If = of degree n. If B is singular. 1 and 0. A similar reciprocal symmetry holds for Case 2. I).KB always has pencil — AB . pencil . There is only one eigenvalue. Note that A and/or B may still be singular.5.0. There are two eigenvalues. Generalized Eigenvalue Problems Chapter 12.0. and hence there are n eigenvalues associated with the pencil A . f3 / 0. There is only one eigenvalue.

l Ax Ax (or AB. B e cnxn . Since the latter involves a pair of matrices.7.Oif andonly if Q(A-XB)Z(Z~lx) = 0.2. det(QAZ . then Q-Hy isa left eigenvector ofQAZ -AQBZ. ifx isa right eigenvector of A—XB. Theorem 12. [7.7] or [25. Proof: Proof: 1. in fact. D The first canonical form is an analogue of Schur's Theorem and forms. with the understanding that a zero diagonal element of Tp corresponds to an infinite generalized eigenvalue. 12. the eigenvalues of the pencil A .2. 7. then Z~lx isa right eigenvector of QAZ—XQ B Z.XB)Z] = detQ det Z det(A -. the eigenvalues ofthe pencil A — XB are then the ratios of the diagonal elements of Ta to the corresponding diagonal elements of TfJ . where Ta and Tp are upper triangular. and eigenvectors under equivalence. Then there exist unitary matrices Q. Then 12. [7. 2.AB are then the ratios of the diagBy Theorem 12. 6. Let A. and the first theorem deals with what happens to eigenvalues lencies rather than similarities. canonical forms are available for the generalized eigenvalue problem. Then 1. ify isa left eigenvector of A —KB. 3. Let A. Q. 6. Canonical Forms 12. Numerical methods that work directly on A and are discussed in standard textbooks on numerical linear algebra. for example. Since det 0 and det Z are nonzero.2 Canonical Forms Canonical Forms Just as for the standard eigenvalue problem.l W AW). Z e Cnxn such that 12. which is the generally preferred method for theoretical foundation for the QZ algorithm. Sec. c 3.7].7].AB). for example.7]. the result follows easily by noting that yH(A — XB) — 0 if and only if yH (A . and the first theorem deals with what happens to eigenvalues and eigenvectors under equivalence. o. in fact. However.. Sec. Since det Q XB).7]. of A-AB. fl. lencies rather than similarities. since the generalized eigenvalue problem is then easily seen to be equivalent to the standard eigenvalue problem B. Sec. solving the generalized eigenvalue problem. we now deal with equivaa matrices. . Let A. with the understanding onal elements of Ta to the corresponding diagonal elements of Tp. However.AB and QAZ .2 12. [7.7. which is the generally preferred method for solving the generalized eigenvalue problem. Sec. [7. 6. Numerical methods that if B is even moderately ill conditioned with respect to inversion. Let A. B E Cnxn Then there exist unitary matrices Q. the result follows. Z e Cnxn with Q and Z nonsingular. Sec. where Ta and TfJ are upper triangular. the result follows.8. to ifx is a Z-l x is a righteigenvectorofQAZ-AQB Z. If B is nonsingular.7] or [25. since the generalized eigenvalue problem is then easily seen to be equivalent eigenvalues. see. 7. the pencil A fewer than eigenvalues. canonical forms are available for the generalized Just as for the standard eigenvalue problem. B. Again. fewer than n eigenvalues. the eigenvalues of the problems A — XB and QAZ — XQBZ are the same (the two 1. There is also an analogue of the Murnaghan-Wintner Theorem for real matrices.8. the The first canonical form is an analogue of Schur's Theorem and forms. 6.AQBZ are the same (the two problems problems are said to be equivalent). 0 ( Q ~ H y)H Q(A X AB)Z = O. the eigenvalues of the problems A . the theoretical foundation for the QZ algorithm. this turns to the standard eigenvalue problem B~1Ax = Xx (or AB~1w = Xw). E nxn with Q and nonsingular. and det Z are nonzero. There is also an analogue of the Murnaghan-Wintner Theorem for real matrices.AB)Z] = det gdet Zdet(A 1.7.12. f i always has precisely eigenvalues. see. ify is a left of -AB. 
the pencil A --AAB always has precisely n . det(QAZ-XQBZ) = det[0(A . QBZ = TfJ . this turns out to be a very poor numerical procedure for handling the generalized eigenvalue problem out to be a very poor numerical procedure for handling the generalized eigenvalue problem if is even moderately ill conditioned with respect to inversion.AQBZ) = det[Q(A . Theorem 12.7] [25.7] or [25.AB) o if and only if (Q-H y ) H Q ( A –_ B ) Z = Q. for example. Q~H y isa lefteigenvectorofQAZ — XQBZ. Canonical Forms 127 B is nonsingular. that a zero diagonal element of TfJ corresponds to an infinite generalized eigenvalue. for example. The result follows by noting that (A -AB)x = 0 if and only if Q(A -AB)Z(Z-l x) = The result follows by noting that (A –yB)x . By Theorem 12. see. Sec.7. work directly on A and B are discussed in standard textbooks on numerical linear algebra. see. Q. 7. E c nxn such that QAZ = Ta . Sec. 7. 2.

• L. Example 12. Let A. . . Let A.11. Generalized Eigenvalue Problems Theorem 12.9. Otherwise.9. including analogues of principal vectors and description of of so forth. The matrix pencil 12. I ..12 (Kronecker Canonical Form). is beyond the scope of this book. Z e R"xn such B E jRnxn.AB)Q = [~ ~ ] ..XB is regular. The first theorem pertains only to "square" regular pencils. form (KCF). In this chapter.. quasi-upper-triangular. L l" L~. Q € c nxn"such that nonsingular E C" such that peA . Let A. [2o I o o o 0 0 0 0 0 2 0 0 1 0 0 1 0 0 ~ ]-> [~ 0 I 0 0 0 0 0 0 0 0 o o 0 I 0] 0 0 0 0 (X . There is also an analogue of the Jordan canonical form called the Kronecker canonical fonn Kronecker form (KeF).fi and canonical form nilpotent matrix of associated and N is a nilpotent matrix of Jordan blocks associated with 0 and corresponding to the infinite infinite eigenvalues of A .128 Chapter 12. J ..AB where J is a Jordan canonical form corresponding to the finite eigenvalues of A -A.I. of eigenvalues are given as above by the ratios of diagonal elements of S to corresponding elements of T.)"N).10. B e c mxn . mxn E C • Theorem 12.2)2 with characteristic polynomial (A — 2)2 has a finite eigenvalue 2 of multiplicty 2 and three 2 2 infinite eigenvalues. Then there exist orthogonal matrices Q. When S has a 2 x 2 diagonal block. E jRnxn 12. QBZ = T. B e Cnxn and suppose the pencil A .A.A [~ ~ l of . Then there exist 12.11.'.12 mxm nxn mxm nxn E C nonsingular nonsingular matrices P e c and Q e c QE C such that peA . real eigenvalues. Generalized Eigenvalue Problems Chapter 12. B e Rnxn. A full description of the KeF.AB)Q = diag(LII' . where T is upper triangular and S is quasi-upper-triangular. we present only statements of the basic theorems and some examples.AB. T. B E c nxn pencil — AB Theorem 12. thnt that QAZ = S. the 2 x 2 subpencil formed with the corresponding fonned 2 x diagonal subblock 2x2 2 diagonal subblock of T has a pair of complex conjugate eigenvalues. Then there x exist nonsingular matrices P. of — XB. KCF. while the full KeF in all its generality applies also to "rectangular" and singular KCF "rectangular" pencils. .

e. both N and J are in Jordan canonical form.— XBif S Rn. B e Wlxn and suppose the pencil A . LQ . are called the right minimal indices. generalized eigenproblem. 000 Just as sets of eigenvectors span A-invariant subspaces in the case of the standard eigenvectors eigenproblem (recall Definition 9. 0. LQ. Then V is a E ~nxn suppose pencil — AB deflating subspace if deflating subspace if dim(AV + BV) = dimV. Canonical Forms 12. (12. Lo L6 one column. Then is deflating subspace for the pencil A AB if and only if there exists M E Rkxk such that e ~kxk AS = BSM. there is a matrix characterization of deflating subspace. Specifically.12. Lo.2. Definition 12.14. and Lk is the (k + 1) x k bidiagonal pencil bidiagonal pencil -A 0 0 -A Lk = 0 0 0 0 -A I The Ii are called the left minimal indices while the ri are called the right minimal indices. there is an analogous geometric concept for the eigenproblem generalized eigenproblem.4) eigenvalue characterization Just as in the standard eigenvalue case. Such a matrix is in KCF. The second block is L\ while the third block is LI. both Nand J are in Jordan canonical form. i. Example 12.5) .. Canonical Forms 129 where N is nilpotent.The next two blocks second block L\ one the block is L\. Lo.2. Left Left or right minimal indices can take the value O.. The first block of zeros actually corresponds to LQ. next two correspond to correspond J = 21 0 2 [ o 0 while the nilpotent matrix N in this example is N [ ~6~]. Lo. LQ. The /( are called the left minimal indices while the r.35). where each LQ has "zero columns" and one row. while each LQ has "zero rows" and L6. Then SS is aadeflating subspace for the pencil A .13. Consider a 13 x 12 block diagonal matrix whose diagonal blocks are -A 0] I o -A I . R ( S <S. i.XB is regular. Let A. corresponds LQ. L6. suppose S e Rn* xk is a matrix whose columns span a k-dimensional E ~nxk ^-dimensional subspace S of ~n. n(S)) = S. and L^ is the (k + I) x k where N is nilpotent. (12.e.

15. vector. Numerically. we find the characteristic polynomial to be find the characteristic polynomial to be det [ which has a root at -2.15. In the special case p = m. see. Example 12. then (12. multi-output systems. In the special case p = m. these values are the generalized eigenvalues of the drops rank. and D € Rpxm.5) becomes AS = SM as before.4) becomes dim(AV + V) = dimV. E jRnxm. for example. see. (12. where x(= x(t)) is called the state space model is often used in multivariable control theory. and y is the vector of outputs or observables.8. This linear time-invariant statespace model is often used in multivariable control theory.6». which is clearly equivalent to AV c V. one must be careful first to "deflate out" the infinite zeros (infinite eigenvalues of (12. (n + m) x (n + m) pencil.6). the (finite) zeros of this system are given by the (finite) complex numbers In general.130 Chapter 12. there is a concept analogous to deflating subspace called a reducing subspace. multi-output systems. D=O. we which clearly has a zero at -2. However. However. and y is the vector of outputs or observables. u is the vector of inputs or controls. Generalized Eigenvalue Problems If B = /. trivial. = Cx + Du E jRnxn. there AV ~ V. we offer some insight below into the special case of a single-input. Numerically.5) becomes AS = SM as before. the (finite) zeros of this system are given by the (finite) complex numbers where the "system pencil" z. For details.6).8. E jRPxn. however. these values are the generalized eigenvalues of the (n + m) x (n m) pencil. Then the transfer matrix (see [26]) of this system is Then the transfer matrix (see [26)) of this system is g(5)=C(sI-A)-'B+D= 5 55 2 + 14 ' + 3s + 2 which clearly has a zero at —2. In general. For details. B] . Let Example 12. [26].6)). Similarly. which is clearly equivalent to If B = I.3 12. C e Rpxn. The connection between system zeros and the corresponding system pencil is nonThe connection between system zeros and the corresponding system pencil is nontrivial. one must be well for general mUlti-input. and E jRPxm. zeros). A-c M D "'" 5A + 14. for example. This is accomcareful first to "deflate out" the infinite zeros (infinite eigenvalues of (12. B € R" xm .6) drops rank. we offer some insight below into the special case of a single-input. which has a root at —2. This is accomplished by computing a certain unitary equivalence on the system pencil that then yields a plished by computing a certain unitary equivalence on the system pencil that then yields a smaller generalized eigenvalue problem with only finite generalized eigenvalues (the finite smaller generalized eigenvalue problem with only finite generalized eigenvalues (the finite zeros). where x(= x(t)) is called the state vector. however. (12. Checking the finite eigenvalues of the pencil (12. lEthe pencil is not regular. Similarly. This linear with A € M n x n . Checking the finite eigenvalues of the pencil (12. where the "system pencil" (12. u is the vector of inputs or controls. is a concept analogous to deflating subspace called a reducing subspace. Let A=[ -4 2 C = [I 2].8.8. then (12. The method of finding system zeros via a generalized eigenvalue problem also works The method of finding system zeros via a generalized eigenvalue problem also works well for general multi-input. If the pencil is not regular.4) becomes dim (A V + V) = dim V. 
[26].3 Application to the Computation of System Zeros Application to the Computation of System Zeros i y Consider the linear system Consider the linear svstem = Ax + Bu. 12.

C = c T E R l x n . 12. "pole/zero cancellations"). Symmetric Generalized Eigenvalue Problems 12.8). z is a zero of g. Specifically.n. g(s) Furthermore. no pole/zero cancellations).zI cT b ] d is singular. and v(s) and n(s) are relatively prime TT(S) v(s) TT(S) (i.A)~ ! Z? + d denote the system transfer function (matrix). Thus. the second-order A. A pole/zero Assuming z is not an eigenvalue of A (i.10).l xn. . Hence g(z) = 0.8) c T x +dy = O. and D e R r T(s7 . e ffi. the problem (12. relatively where n(s) is the characteristic polynomial of A.zl)x + by = 0.. B E Rnxn arises when A = A and B = BT > O.e. b e ffi.9». Then there exists a nonzero solution to or or (A .9) Substituting this in (12. let g(. However. B~11A is not necessarily B~ Ax = AX. then from (12..e.e. and D = d E R.zl)-lby.s) = c (s I — A) -1 b + d c function and assume that g(s) can be written in the form and assume that g ( s ) can be written in the form v(s) g(s) = n(s)' polynomial A. Thus. system of differential equations differential Mx+Kx=O.8).4 12. of the Since B is positive definite it is nonsingular. we have Substituting this in (12.zl)-lby + dy = 0. we have _c T (A .. g. M K where M is a symmetric positive definite "mass matrix" and K is a symmetric "stiffness definite "stiffness matrix. Hence g(z) 0.nxn A AT and B the B1 0. Symmetric Generalized Eigenvalue Problems 131 131 1 single-output system. let B = b E Rn. For example. there are no "pole/zero cancellations").A to the standard eigenvalue problem B-l1Ax = AJC. symmetric.7) we get get x = -(A .4 Symmetric Generalized Eigenvalue Problems Symmetric Generalized Eigenvalue Problems Ax = ABx A very important special case of the generalized eigenvalue problem (12.4. 0 from (12. B e ffi. or g ( z ) y = 0 by the definition of g. Suppose z € is such that Suppose Z E C is such that [ A . Now y ^ 0 (else x z i.9)). the problem (12.7) (12.4." is a frequently employed model of structures or vibrating systems and yields a frequently generalized eigenvalue problem ofthe form (12.10) is equivalent B. (12.12. (12.10) for A. Now _y 1= 0 (else x = 0 from (12.10) is equivalent Since B is positive definite it is nonsingular. or g(z)y 0 by the definition of g.

Zn Zj = Dij. and are Hermitian.. we have restricted our attention to that case only. it has a Cholesky factorization B = LLT. y)BB = XT By. zn satisfying vectors Z I. .1926 as expected. Finally.16. with corresponding eigenSince C = C T. where L is nonsingular (Theorem 10.fi 1] . B e jRnxn A AT and B BT > O.5 2.1926 whose eigenvalues are approximately 2. Let A.5 ] -1.23).11) can then be rewritten as = Cz = AZ. Then the generalized A. Then the eigenvalue problem (Theorem 10. generalized case A and B are Hermitian. Generalized Eigenvalue Problems Chapter 12.1926 and -3. .. the eigenproblem (12. = L ~Tzi. if A > 0. it has a Cholesky factorization B = LL T. and the n corresponding right eigenvectors can be chosen to be orthogonal with respect to the inner product (x. of course.1926 in Example 12.5 ' -3. are eigenvectors of the original generalized eigenvalue problem Xi Zi. the eigenvalues are also all positive. with corresponding eigenvectors zi. •. .132 132 Chapter 12. Finally.1926 and —3. Proof: Since B > 0.18..16 is D 0 L=[~ . be generalized easily to the case where A material of can. the eigenvalue problem eigenvalue problem Ax = ABx has n real eigenvalues. Moreover. are eigenvectors of the original generalized eigenvalue problem and satisfy and satisfy (Xi.17. then C = C T > 0. The Cholesky factor for the matrix B in Example 12. if orthogonal > 0. Xj)B T T = xr BXj = (zi L ~l)(LLT)(L ~T Zj) = Dij.12) Since C = C T the eigenproblem (12. E !!.18. Let A Example 12.. positive. Then the eigenvalue problem Ax = ABx = ALL Tx (12. The Cholesky factor for the matrix B in Example 12. (12. we have restricted our attention to that case only. The material of this section can.11) can then be rewritten as AL J and Z = LT x.. then product y) x T By. Theorem 12. Generalized Eigenvalue Problems Example 12.16). if A = A > 0... (12.. Example 12. where L is nonsingular Proof: Since B > 0.. so the eigenvalues are positive.fi Then it is easily checked that Then it is easily checked thai c = L~lAL~T = [ 0. if A = AT> 0. but since real-valued matrices are commonly used in most applications.12) has n real eigenvalues. Moreover.23). so the eigenvalues are positive. but since real-valued matrices are commonly used in most applications. the eigenvalues of B l A are always real (and are approximately 2.. ii € n..5 2. l = [i ~ J B ThenB~ A Then A B~Il = [-~ ~ J B~I A approximately Nevertheless. then = C T > 0. B E Rnxn with A = AT and B = BT > 0.16.12) has n real eigenvalues. Let A = [~ . zi Then x.11) can be rewritten as the equivalent problem 1 Letting C = L ~I AL ~T and z = L1 x.16 is Example 12. (12.

5. There are many such results and we present only a representative (but important and useful) theorem here. Proof: By Theorem 12. This can be seen directly. LetA QT AQ and B QT Then/HA Q~ B. it does preserve the eigenvalues of A — XB.< / (this is trivially true 0 since the two matrices are diagonal). D > I. Theorem 12.20. since A 2: B. normal matrices can be diagonalized by a unitary similarity. so it does not preserve eigenvalues of and B Note that Q is not in general orthogonal. = pT L -I(LLT)L -T P = pT P = [.5 12. Since Proof: Let T C is symmetric. Let A.12. = QQT AQQ~l = L-TPPTL~-IA = L~TL~1A L -T P pT L 1 A L -T L -I A QQT AQQ-I 0 D = B-1A. Let A. D since the two matrices are diagonal).e. when L is highly ill conditioned with respect to inversion. the diagonal elements of D are the eigenvalues of B 1A. normal maRecall that many matrices can be diagonalized by a similarity. Also. In particular. it does preserve the eigenvalues of A . But then D.l Q~T QT Q~ B~ AQ. i. A-1. Also. However. Now D > 0 by Theorem 10. Thus. we restrict our attention only to the real case. B) can be simultaneously diagonalized by the same matrix. Then diagonal. let . so it does not preserve eigenvalues of A and B individually.e.1AQ. It turns out that in some cases a pair of matrices (A.T P. Then and and QT BQ Finally. where D is diagonal. B e M" xn be positive definite. To illustrate. where D is diagonal. Theorem 12. the diagonal elements of D are the eigenvalues of B. A~l :::: B-l1.31.19 is There are situations in which forming C = L~1AL~T as in the proof of Theorem 12.1A = Q-1l B~1Q-T QT AQ = Q-11B. D 2: [.19.21 we have that QT AQ > QT BQ. there exists an orthogonal matrix P such that P CP = D.. simultaneous reduction can also be accomplished via an SVD. such results and we present only a representative (but important and useful) theorem here. B) can be simultaneously diagonalized by the same matrix.. In numerically problematic.5. Then A 2: B if and only if B~ 2: A-I. There are many matrices (A.21 we have that QT AQ 2: QT BQ.1A. Let Q = L .19. It turns out that in some cases a pair of trices can be diagonalized by a unitary similarity. e.e. by Theorem 10.19 (Simultaneous Reduction to Diagonal Form).g. Let A = QT AQandB = QT BQ.19 is numerically problematic. However. Now D > 0 by Theorem 10.lI QT :::: Q QT." The following is typical. B E lRnxn be positive definite.5.g.1A). haveA(D) = A(B.5. But then D"1I :::: [(this is trivially true 10.e. Then there exists a nonsingular matrix Q such that A = AT and B = BT > 0. we Note that Q is not in general orthogonal. i. Simultaneous Diagonalization 12. with the complex case following in a Again. matrices to "the diagonal case. individually..31. In particular.19 is very useful for reducing many statements about pairs of symmetric Theorem 12. B E E"x" with 12. Since LLT be the Cholesky factorization of and setC L -I AL~T. \ 2." The following is typical. simultaneous reduction can also be accomplished via an SVD. Again. Proof: Let B = LLT be the Cholesky factorization of B and set C = L~1AL -T. This can be seen directly. Thus. where D is C is symmetric.. since QDQ-I Finally. with the complex case following in a straightforward way. when L is highly iII conditioned with respect to inversion.5 Simultaneous Diagonalization Simultaneous Diagonalization Recall that many matrices can be diagonalized by a similarity. A -I < B~ . by Theorem where D is diagonal. i.1 12. 
Let Q = L~T P.19 is very useful for reducing many statements about pairs of symmetric matrices to "the diagonal case. we restrict our attention only to the real case.'AB. since QDQ~l have A(D) = A(B~1A).. straightforward way. Theorem 12. In fact.. QD~ QT < QQT. Let A. Q D. since A > B. In such cases. Proof: By Theorem 12. there exists Q E lR~xn such that QT AQ = D and QT BQ = [. Then A > B if and only if B-l1 > Theorem 12.20. Infact. Simultaneous Diagonalization 133 12. e.1 Simultaneous diagonalization via SVD Simultaneous diagonalization via SVD There are situations in which forming C L -I AL -T as in the proof of Theorem 12.19 e ][~nxn A AT and B BT > O. i. we B~ 1 A. where D is diagonal. Then B. To illustrate. there exists an orthogonal matrix P such that pTe p = D. let such cases. there exists Q e E"x" such that QT AQ = D and QT BQ = I. Then there exists a nonsingular matrix Q such that where D is diagonal.

e. let A = LAL~ and B = LsLTB be Cholesky factorizations of A and B.134 134 Chapter 12. i. D may have pure imaginary elements.14) Letting x = LB z we see that (12. respectively. see.butin writing A — PDDP T = PD(PD) with D is diagonal and P orthogonal. PDPT ~ ~ ~ ~ T PD(PD{ with where Disdiagonaland P is orthogonal. Compute the SVD Cholesky factorizations A B. respectively. For example. Generalized Eigenvalue Problems us assume that both A and B are positive definite. operations performed directly on M rather than by forming the matrix MT M and solving performed MT forming the eigenproblem MT MX = AX.22. A straightforward.13) where E E R£ x " isisdiagonal. which is thus to the generalized eigenvalue problem 02. Further. The case when A is symmetric but indefinite is not so A = AT::: O. for generalizations results 12. A can be written as A = PDP T. The SVD in (12.21. but in writing = PDDp D diagonal. for LB i. when A = AT > 0. products LA L ~ LBL~ see. Remark 12. note that T QT AQ = U Li/(LAL~)Li/U = UTULVTVLTUTU i/ = while L2 QT BQ = U T LB1(LBL~)Li/U = UTU = I. Remark 12. D b .21 are possible. This is analogous to finding the singular values of a matrix M by Sec.13)) and LB separately. Then the matrix Q == LLBTu performs the simultaneous diagonal.15) The problem (12. To check this.3].15) is called a generalized singular value problem and algorithms exist to problem generalized solve it (and hence equivalently (12. Note that LB A and thus the singular values of L B 1 LA can be found from the eigenvalue problem 02.13» via arithmetic operations performed only on LA LA (12. Generalized Eigenvalue Problems Chapter 12. Various generalizations of the results in Remark 12. Then the matrix Q U performs the simultaneous L e 1R~ xn diagonalization.14) rewritten the LAL~x = ALBz = A L g L ^ L g 7 z . Sec. without forming the products LALTA or LBLTB explicitly.e.21 example.13) can be computed without explicitly forming the without Remark product indicated matrix product or the inverse by using the so-called generalized singular value decomposition (GSVD). [7... eigenproblem MT M x Xx. 8.7. example. at least in real arithmetic. (12.. which is thus equivalent to the generalized eigenvalue problem ALBL~LBT z. let A = LALTA and B — LBL~ us assume that both A and B are positive definite.14) can be rewritten in the form LALAx = XLBz = Letting x = LBT Z we see 02. Further.

16) arises frequently in applications: M = I. Suppose K has eigenvalues eigenvalues IL I ::: .. C.6. Assume for simplicity that M is nonsingular.6. polynomial 2n. M Mwhere x(t) €. K = KT ::: 0).• Then the 2n eigenvalues of the second-order eigenvalue problem A2 I /+ K Let Wk = | fjik 12 Then the 2n eigenvalues of the second-order eigenvalue problem A. Since the determinantal equation o = det(A 2 M + AC + K) = A2n + . = [ -M-1K 0 x (t) E ~2n. yields a polynomial of degree 2rc.C + K.1 12. E2".16) we get (12.16) Consider the second-order system of differential equations Consider the second-order system of differential equations q(t) E ~n E ~nxn. . ± Wk. C = 0. If r = n (i. Suppose K = KT. then all solutions of q + Kq = 0 are oscillatory.C + K is singular.16) can be written as a first-order system (with block companion matrix) X .. k = r + 1... or if it is desired to avoid the calculation of M lI because M is too ill conditioned with respect to inversion.16) arises frequently in applications: 0.6. . where the n-vector p and scalar A. .16) of the p A are to be determined.6 12.. (12.e. we thus seek values of A. quadratic) eigenvalue problem A. If M is singular. A special case of (12. Suppose. the second-order problem (12. If r n (i. by analogy with the first-order case. Higher-Order Eigenvalue Problems 12. where q(t} e W1 and M.6 Higher-Order Eigenvalue Problems Higher-Order Eigenvalue Problems Mq+Cq+Kq=O..2 K are are ± jWk. ::: ILr ::: 0 > ILr+ I ::: . .e.6. Substituting in q(t) = eAt p. then all solutions of q K q 0 are oscillatory. Higher-Order Eigenvalue Problems 135 12. KT > 0). ::: ILn· Let a>k = IILk I!. and = = KT. since eAt :F 0.2M + A. n. seek A A2 M + AC + To get a nonzero solution /?. Then (12.... (12. k = 1. r... Since the determinantal equation is singular. .16) can still M second-order generalized linear be converted to the first-order generalized linear system converted I [ o M OJ'x = [0 -K I -C Jx. for which the matrix A.16) can be written as a first-order system (with block Let XI q and X2 Then (12.16) or. (A 2 M + AC + K) p = O. there are 2n eigenvalues for the second-order (or A2 M + AC + K.1 Conversion to first-order form Conversion to first-order form Let x\ = q and \i = q. are to be determined. Substituting in form q(t) = ext p. that we try to find a solution of (12. and A special case of (12. p. K e Rnxn.12..2M + A. 12.

and C e lRmxn. Suppose A e Rnxn and D E lR::! xm.) . Let F e Cnxm . In the parlance of control theory. Some can be useful when M. such results show that zeros are invariant under state feedback or output injection. Show that the generalized eigenval". to higher-order eigenvalue problems that can be converted to first-order form using a kn x kn to higher-order eigenvalue problems that can be converted to first-order form using aknxkn block companion matrix analogue of (11. C. Similar procedures hold for the general k\horder difference equation order difference equation which can be converted to various first-order systems of dimension kn.1 2. Suppose A € Rnxn. EXERCISES EXERCISES nx 1. Generalized Eigenvalue Problems Many other first-order realizations are possible. Are the FG and GF the 3. Similar procedures hold for the general kthblock companion matrix analogue of (11. Let F. and/or K have special symmetry or skew-symmetry properties that can exploited. In the parlance of control theory. andlor K Many other first-order realizations are possible.B D. . E Rnxm and E E 4..16) involving.19). the kth derivative of q. derivative q. Hint: Consider the equivalence I G][A-UO F0]' B][I l [01 C (A similar result is also true for "nonsquare" pencils. verify Hint: An easy "trick proof is to verify that the matrices "trick proof' [Fg ~] and [~ GOF ] are similar via the similarity transformation are similar via the similarity transformation Let F E nxm G E mx ". C.19). which can be converted to various first-order systems of dimension kn. properties Higher-order analogues of (12. Some can be useful when M.136 136 Chapter 12. (A similar result is also true for "nonsquare" pencils. Show that the finite generalized eigenvalues of E lR " finite eigenvalues of e R™ x m the pencil [~ ~J-A[~ ~J are the eigenvalues of the matrix A — BD 1 C. lead naturally naturally involving. Let € C M X • Show that the nonzero eigenvalues of and G F are the same. say. Show that the generalized eigenvalues of the pencils ues of the pencils e e [~ ~J-A[~ ~J and and [ A + B~ + GC ~] _ A [~ ~] are identical for all F E E"1xn and all G E R" xmm . G e Cmxn • Are the nonzero singular values of FG and GF the same? same? wx E ]Rnxn. Show that the nonzero eigenvalues of FG and GF are the same. B e lRn*m. G E enxn". Generalized Eigenvalue Problems Chapter 12.. F 6 Rm *" G R" x .

Such QT BQ a transformation is called contragredient. and let UWT be an SVD of L~LA'. . (c) Show that the eigenvalues of A B are the same as those of 1. A B B are positive definite with Cholesky factorizations A = L&LTA and B = L#Lg. positive Cholesky = LA L ~ = L B L ~. positive. (b) Show that Q~l = ^~^UT LTB. Q-l = ~-!UTL~. respectively. A and B to the same diagonal matrix. respectively. B E e jRnxn Q-l AQ-T ]Rnx" in such a way that Q~l AQ~T and QT BQ are simultaneously diagonal.2 and hence are AB E2 positive. and let U~VT be an SVD of LTBLA (a) Show that Q = LA V £ ~ 5 is a contragredient transformation that reduces both contragredient = LA V~-! A and B to the same diagonal matrix.Exercises Exercises 137 137 desired 5. Consider the case where both A and transformation contragredient. Another family of simultaneous diagonalization problems arises when it is desired Another simultaneous diagonalization problems operates that the simultaneous diagonalizing transformation Q operates on matrices A.

This page intentionally left blank This page intentionally left blank .

1. We Obviously.1. Let A = [~ 2 2 nand B = [..1) amnB Obviously.. 1. Forany B e!F pxq /z @ B = [~ In Replacing 12 by /„ yields a block diagonal matrix with n copies of B along the I2 diagonal with n copies of along the diagonal. Then 0 b ll b12 B @/z = l b" b~l 139 0 b2 2 0 b21 0 0 b12 0 b 22 l . Example 13. Then A@B =[ 3~ ~]~U J. B e lR pxq. Let B be an arbitrary 2x2 matrix. the same definition holds if A and B are complex-valued matrices. extension to the complex case only where it is not obvious. Then the Kronecker product (or tensor Then the Kronecker product (or tensor product) of A and B is defined as the matrix product) of A and B is defined as the matrix allB A@B= [ : amlB alnB ] : E lRmpxnq. Example 13. 2B 2B ~J. Let A e R mx ". Let B be an arbitrary 2 x 2 matrix. the same definition holds if A and B are complex-valued matrices.Chapter 13 Chapter 13 Kronecker Products Kronecker Products 13.1 13.2. / 2 <8>fl = [o ~ l\ 2. We restrict our attention in this chapter primarily to real-valued matrices. Foranyfl E lRX(7. Note that B <g> A / A <g> B. (13.A @ B. n 2.. pointing out the extension to the complex case only where it is not obvious.1 Definition and Examples Definition and Examples Definition 13. Then 3. Let A E lRmxn B E R Definition 13. 4 3 4 3 4 9 4 2 6 2 6 6 6 2 2 Note that B @ A i. pointing out the restrict our attention in this chapter primarily to real-valued matrices.2.

2 Properties of the Kronecker Product Properties of the Kronecker Product (A 0 B)(C 0 D) = AC 0 BD (E ~mrxpt).6. xmYnf E !R. Theorem 13. Then 13.. XmY T]T = [XIYJ.. A® 13.4. (A ® B)-I = Bare 13. (13. Let* eR m . XIYn. = 1 ® 1 = I. (A ® Bl = AT ® BT. L~=l al. Foral! Proof' Proof: For the proof. . If A e R"xn and B E !R. Proof: Proof: Using Theorem 13. If E ]Rn xn e Rmxm are Theorem 13.3. then A® B is symmetric.m xm are symmetric.1 ) Theorem 13. .n. B e ~rxs. simply verify using the definitions of transpose and Kronecker verify transpose Kronecker 0 product. 5..5.2 13.. Simply verify that ~[ =AC0BD. Let E ~mxn. B In x E ~m.1. Kronecker Products Kronecker Products The extension to arbitrary B and /„ is obvious. E R".5.6. C E R" x ^ and D E ~sxt. Let A e R mx ". .kCkPBD L~=1 amkckpBD ] 0 Theorem 13.140 Chapter 13. simply note that (A ® B)(A -1 ® B. and D e Rsxt. y e !R. . 5 E R r x i . X2Yl. If A-I ® B.. Let Jt € Rm.3.. 4. . D Corollary 13. For all A and B.3. y eR". Then X ® Y = [ XIY T . .2) Proof: Simply verify that Proof. mn .3. If A and B are nonsingular. Then 13. C e ~nxp. 0 .

and let BB E e IRR mxwhave e IR nxn have eigenvalues A. . and let eigenvalues jJij..10.[Cos</> cos</>O Then It IS easl'1y seen that . In general. Let A E R nx "have eigenvalues Ai.c. . L et A E xamp Ie 139 Let A = [ _eose cose andB . 7 E m. then A® B is normal.7. matrix A ® 5 is then also orthogonal with eigenvalues e^'^+'W and e ± ^ (6> ~^ > \ Theorem 13.-. and zi.4 by Theorem 13.• :::: TS > O.i . if A and B have Jordan form thus get the complete eigenstructure of A <8> B. i / E e!!.j.. 141 141 Proof: Proof: (A 0 B{ (A 0 B) = (AT 0 BT)(A 0 B) = AT A 0 BT B = AAT 0 B BT by Theorem 13. = (A 0 B)(A 0 B)T 0 Corollary 13. :::: U rTs > 0 and ^iT\ > • • • > ffr <s Qand rank(A 0 B) = (rankA)(rankB) = rank(B 0 A) .. we can take p = nand q = m and n and q —m and If A and B are diagonalizable in Theorem 13.. Theorem 13. If Corollary 13.Zq are linearly independent right eigenvectors of B corresponding to JLI. Example 13. If A e IR nxn am/ B E IR mxm are normal. if A and fi have Jordan form . Then the mn eigenvalues of A® B are eigenvalues JL j. The 4 x 4 orthogonal e±j9 orthogonal eigenvalues e±j(i>.. we can take p thus get the complete eigenstructure of A 0 B.• :::: U rr > 0 and let B E IRfx Corollary e R™x" singular a\ > • • > a > e have singular values T\ > • • > <s > 0.m are linearly independent right corresponding to JJL\ . 0 Zj E€ IR mn "are linearly independent right eigenvectors of A 0 B corresponding to Ai JL 7 i e /?. Ap (p ::::: and ZI. if Xl.3. then A <g> B is € IR nxn orthogonal and e IR m x m 15 then 0 is orthogonal. j e q.p (p < n).. if x\.. Then A 0 B (or B A) has rs singular values have singular values <I :::: .13... xp are linearly independent right eigenvectors of A corresponding Moreover."xn have singular values UI :::: . .12.. A0 B e±jeH</» e±jefJ -</». Sine] and B . TTzen ?/ze mn eigenvalues of A 0 Bare Moreover. A. xp are linearly independent right eigenvectors of A corresponding AI.. Properties of the Kronecker Product Theorem 13.10.. eigenvectors of A® B corresponding to A. .n.... then A 0 B is normal. .7. If A E E"xn is orthogonal and B E Mmxm is orthogonal.• sin e = _ sin</> Sin</>] Then it is easily seen that A is orthogonal with eigenvalues e±jO and B is orthogonal with eigenvalues e±j</J. i E l!! 7 E 1· Proof: proof Proof: The basic idea of the proof is as follows: follows: (A 0 B)(x 0 z) = Ax 0 Bz =AX 0 JLZ = AJL(X 0 z). mxm /zave Theorem 13. Then vI yields a singular value decomposition of A <8>B (after aasimple reordering of the diagonal yields a singular value decomposition of A 0 B (after simple reordering of the diagonal elements O/£A <8> £5 and the corresponding right and left singular vectors). j € m. then ... q Corollary 13. Lgf A E E mxn have a singular value decomposition VA ~A Theorem 13.9. If A E IR"xn and B eRmxm are normal.8. 0 If A and Bare diagonalizable in Theorem 13. .... <I :::: . then Xi <8> Zj ffi.12. Then A <g)B (or B 0<8> A) has rs singular values U.12. \Ju (q ::::: m). .. Let A G IR mx " have a singular value decomposition l/^E^Vj an^ let and /ef singular decomposition UB^B^BB e IR pxq fi E ^pxq have a singular value decomposition V B ~B VI. ••./u.. .2....3 since A and B are normal by Theorem 13. • • zq independent of to A . . elements of ~A 0 ~B and the corresponding right and left singular vectors).2. .•.JLqq (q < m). Properties of the Kronecker Product 13.11. Let A E lR. In general.8.

Then 13. i. Corollary 13.13. pH AP = TA and QH BQ = TB (and similarly if and are orthogonal similarities PHAP = TA and QHBQ = TB (and similarly if P and Q are orthogonal similarities reducing A and B to real Schur form). to Schur (triangular) form. Let 1. Tr(A ® B) = (TrA)(TrB) = Tr(B ® A). denoted A EEl B. is generally not quite in Jordan form and needs further reduction (to an ultimate Jordan form that also depends on whether or not certain further reduction (to an ultimate Jordan form that also depends on whether or not certain eigenvalues are zero or nonzero). suppose P and Schur form for A ® B can be derived similarly. Then (P ® Q)H (A ® B)(P ® Q) = (pH ® QH)(A ® B)(P ® Q) = (pH AP) ® (QH BQ) = TA ® TR . nxn mxm Definition 13. For example.142 142 Chapter 13. Let A e Rn Xn and B e Rm xrn.14. 2. suppose P and Q are unitary matrices that reduce A and B. Note that. Example 13. 1.e. 1. in of A and B. eigenvalues are zero or nonzero).e. ~l 2 2 1 3 AfflB = (h®A)+(B®h) = 1 3 0 1 0 4 0 3 0 0 0 0 0 0 0 0 0 0 0 0 2 2 0 0 0 3 4 2 0 0 2 0 0 2 0 0 2 0 0 0 1 0 0 + 0 2 0 0 2 0 0 0 0 3 0 0 0 3 0 0 0 3 The reader is invited to compute B 0 A = (/3 ® B) + (A 0 h) and note the difference The reader is invited to compute B EEl A = (h ® B) (A <g> /2) and note the difference with A © B. Then reducing A and B to real Schur form). with A EEl B. while upper triangular. E IR E IR Kronecker Definition 13. of A and B.14.15. Then the Kronecker sum (or tensor sum) .I ® Q-l)(A ® B)(P ® Q) = (P. respectively. A ® B i= B © A. A Schur form for A ® B can be derived similarly. general. Let A~U Then Then 2 2 !]andB~[ . then we get the decompositions given by P~lI AP = J A and Q-l BQ = JB. . E IR nxn E IR mxm. A EEl B ^ B EEl A. while upper triangular. is generally not quite in Jordan form and needs Note that JA® JB.13. det(A ® B) = (det A)m(det Bt = det(B ® A). is the mn mn matrix (Im ® A) + (B ® /„).15.AP J B . For example. then we get the JA and Q~] BQ following Jordan-like structure: following Jordan-like structure: (P ® Q)-I(A ® B)(P ® Q) = (P. Kronecker Products Chapter 13. Example 13. respectively. are unitary matrices that reduce A and 5. Kronecker Products decompositions given by p... in general. is the mn x mn matrix Urn <g> A) + (B ® In). denoted A © B. Let A e Rn xn and B e Rrn xm. i. Note that. respectively. to Schur (triangular) form.1 AP) ® (Q-l BQ) = JA ® JB · Note that h ® JR. respectively.

then decompositions given JA and Q-t BQ [(Q ® In)(lm ® p)rt[(lm ® A) = [(1m ® p)-I(Q ® In)-I][(lm ® A) = (1m ® lA) + (B ® In)][CQ ® In)(lm ® P)] + (B ® In)][(Q ® In)(/m ® + (B ® P)] = [(1m ® p-I)(Q-I ® In)][(lm ® A) In)][CQ ® In)(/m <:9 P)] + (JB ® In) is a Jordan-like structure for A $ B. and z\. Recall the real JCF 2. zq are linearly independent eigenvectors of corresponding to fJ-t. An + fJ-m' Moreover. if A and have Jordan form thus get the complete eigenstructure of A 0 B. . j e q.. Zq are linearly independent right eigenvectors of B AI. TTzen r/ze Kronecker sum A $ B eigenvalues e/genva/wes Al + fJ-t. is a Jordan-like structure for A © B. if A and B have Jordan form p-I l B ... . then Zj ® Xi E€ jRmn" are linearly independent right Zj <8> Xi W1 are linearly independent right corresponding f j i . . . if x\. . A2 + fJ-t. xp are linearly independent right eigenvectors of A corresponding to AI. respectively. AI + fJ-m.. 0 If A and Bare diagonalizable in Theorem 13. i E !!. . Then J can be written in the very compact form J Theorem 13. if XI.. ..16. e jRmxm eigenvalues /z.. Let A E E"x" have eigenvalues Ai. . Properties of the Kronecker Product 2. j E fl· eigenvectors of A $ B corresponding to A. j E ra.2. .. Ap (p < and ZI. Proof: The basic idea of the proof is as follows: Proof: The basic idea of the proof is as follows: [(1m ® A) + (B ® In)](Z ® X) = (Z ® Ax) = (Z + (Bz ® X) ® Ax) + (fJ-Z ® X) = (A + fJ-)(Z ® X)..2. and let B E Rmx'" have e jRnxn eigenvalues A.xp are linearly independent right eigenvectors of A corresponding Moreover. we can take p = n and q = m and thus get the complete eigenstructure of A $ In general... .···. respectively. Then the Kronecker sum A® B = (1m (g>A) + (B ® In) has mn (Im ® A) + (B <g> /„) /za^ ran eigenvalues fJ-j. . + fJ-j' € p. Properties of the Kronecker Product 13. Xp (p ::s: n). ii E E. ..13.. fJ-q (q < m).•• . In general. A2 + fJ-m... .. (I} ® M) + (E^®l2) = M 0 Ek. . we can take p nand q and If A and B are diagonalizable in Theorem 13.-. 7 e I!!. 0 I M 0 where M = [ where M = o M a f3 -f3 a J.16.\ . . .i e n.. Recall the real JCF M I M 143 143 0 I M I 0 o 1= 0 E jR2kx2k. then decompositions given by P~1AP = lA and Q"1 BQ = JB. Define 0 0 0 0 o o Ek = 0 o Then 1 can be written in the very compact form 1 = (4 <8>M) + (Ek ® h) = M $ E k . .. . f^q (q ::s: ra).16. eigenvectors of A® B corresponding to Ai + [ij.. ..

and C e M" xm . .3 13. Lyapunov equations also to be symmetric and (13. respectively.4) is known as a Lyapunov equation. i. j=1 These equations can then be rewritten as the These equations can then be rewritten as the mn x mn linear system x linear system A+blll bl21 A + b 2Z 1 b2ml b 21 1 (13.5) clearly can be written as the Kronecker sum (Im * A) + (BT ® In). The following definition is very helpful in completing the writing of (13. suppose P and are unitary A Schur form for A © B can be derived similarly.3) in terms of their easily seen z'th columns that ith columns that m AXi + Xb.3) is the symmetric equation AX +XAT = C (13. i. Sylvester who studied general linear matrix equations of the form equation in honor of J. solution e IR xn also to be symmetric and (13.3) in tenns of their columns. When C is symmetric.5) as an "ordinary" linear system.5) [ blml The coefficient matrix in (13. = C.5) as (B T 0 /„).5) clearly can be written as the Kronecker sum (1m 0 A) + The coefficient matrix in (13. it is easily seen by equating the writing (13. The first important question to ask regarding (13. Sylvester where A e R"x". (13. the solution X E Wnx" is easily shown taking B = AT. When does a solution exist? The first important question to ask regarding (13.3 and Corollary 13. where [(Q <8>In)(lm ® P)] = (Q ® P) is unitary by Theorem 13. ® P)] = (/m <8> rA) + (7* (g) /„). 13.3) is. The following definition is very helpful in completing the writing of (13. B e Rmxm . PHAP = TA that reduce to Schur and QH BQ = TB (and similarly if P and Q are orthogonal similarities reducing A and B and QHBQ = TB (and similarly if P and Q are orthogonal similarities reducing A and B to real Schur form). = AXi + l:~>j.3) is.1. Again..8. . an "ordinary" linear system.3) mxm E IRnxn E IR E IRnxm. =C. Sylvester who studied general linear matrix equations of the fonn k LA.4) obtained by taking B = AT. [(Q ® /„)(/« ® P)] = (<2 ® P) is unitary by Theorem 13. When symmetric. Then to real Schur fonn).J.3 Application to Sylvester and Lyapunov Equations Application to Sylvester and Lyapunov Equations In this section we study the linear matrix equation In this section we study the linear matrix equation AX+XB=C. When does a solution exist? By writing the matrices in (13.. suppose P and Q are unitary fonn. This equation is now often called a Sylvester equation is now often equation in honor of 1. Again. Lyapunovequations arise naturally in stability theory. pH AP = TA matrices that reduce A and B.8.144 Chapter 13.4) is known as a Lyapunov equation. .e. Then ((Q ® /„)(/« ® P)]"[(/m <8> A) + (B ® /B)][(e (g) /„)(/„.XB.e. Kronecker Products A Schur fonn for A EB B can be derived similarly. arise naturally in stability theory.Xj. to Schur (triangular) form. Kronecker Products Chapter 13..=1 A special case of (13.3 and Corollary 13.

..and Mj Ee A(B).. this algorithm takes only 0 (n 3) transformed solution matrix X. E jRmxm.17. An equivalent linear system is then solved in which the triangular form equivalent linear system is then solved in which the triangular form of the reduced and can be exploited to solve successively for the columns of a suitably of the reduced A and B can be exploited to solve successively for the columns of a suitably transformed solution matrix X. where A. +00): I-Hoo lim XU) .3) (or symmetric Lyapunov equations of the form Sylvester equations of the form (13. . c ].3) (or symmetric Lyapunov equations of the form (13. AX+XB=C (13. j j E!!!. (13. the eigenvalues of [(/m <g> A) + (BT ® In)] are Ai A.8)by Theorem 13.6). But [(1m ® A) + (B (g) /„)] nonsingular and only has no zero eigenvalues. j j so there exists aaunique for all i.. . They culminate in Theorem 13.5) can be rewritten in the form Using Definition 13. xn Theorem 13.6).18. The most commonly preferred numerical algorithm is described in [2]. But [(Im <8>A) + (B TT ® In)] isisnonsingular ififand only ififitithas no zero eigenvalues. Let Ci( € E. Theorem C E jRnxm. n > m... A further enhancement to this algorithm is available in [6] whereby Gaussian elimination. Let A e jRnxn.4» are generally not solved using the mn x mn "vec" formulation (13.6) There exists a unique solution to (13. A(fi). A further enhancement to this algorithm is available in [6] whereby the larger of A or B is initially reduced only to upper Hessenberg rather than triangular the larger of A or B is initially reduced only to upper Hessenberg rather than triangular Schur form. +00): (with X(0) = C) on [0.X(O) = A 10 roo X(t)dt + ([+00 X(t)dt) 10 B. say. where From Theorem 13.6) directly with Gaussian elimination. The most (13.e A (A). B e Rmxm. . the linear system (13. Suppose further are asymptotically stable (a matrix is asymptotically stable if all its eigenvalues have real are asymptotically stable (a matrix is asymptotically stable if all its eigenvalues have real parts in the open left half-plane). c E jRn the Then vec(C) is defined to be the mn-vector formed by stacking the columns ofC on top of by C ::~~::~: ::d~~:::O:[]::::fonned "ocking the colunuu of on top of one another.. We thus have the following theorem.. and C e R" xm . this algorithm takes only O(n3 ) operations rather than the O(n6)) that would be required by solving (13. Then the (unique) solution of the Sylvester equation parts in the open left half-plane).3. Then the (unique) solution of the Sylvester equation AX+XB=C (13. From Theorem 13.18. First A and B are reduced to commonly preferred numerical algorithm is described in [2]. i.8) can be written as can be written as (13. Now integrate the differential equation X = AX + X B solution to (13. Ai E A(A).24. Suppose further that A and B E Rn .5) can be rewritten in the form [(1m ® A) + (B T ® In)]vec(X) = vec(C).17. one of many The next few theorems are classical. E!!. so there exists unique Proof: Since A and B are stable.-(B) ^ solution to(13. Application to Sylvester and Lyapunov Equations 145 145 Definition 13. A.6) if and only if [(1m ® A) + (B T ® In)] is nonsingular. 77ie/i Theorem 13. . Cm}. (A)+ Aj(B) =I 00 for all i. elegant connections between matrix theory and stability theory for differential equations.13. vec(C) = Using Definition 13. B E Rmxm.6) if and only if [(Im ® A) + (BT ® /„)] is nonsingular. has a unique solution if and only if A and —B have no eigenvalues in common. Assuming that. 
First A and B are reduced to (real) Schur form.B have no eigenvalues in common. We thus have the following theorem..e.1S. The next few theorems are classical...18. Sylvester equations of the form (13. There exists a unique solution to (13.n denote the columns ofC E Rnxm so that C = [ n . Application to Sylvester and Lyapunov Equations 13. Then the Sylvester equation G jRmxm. the eigenvalues of [(1m ® A) + (BT <8> /„)] are + Mj.17.10) . Assuming that.9) Proof: Since A and B are stable. (real) Schur form. Aj(A) + A. one of many elegant connections between matrix theory and stability theory for differential equations. E R E jRnxm.24. ofC e jRnxm [CI. Definition 13.17.19. e m. They culminate in Theorem 13. (13.7) has a unique solution if and only if A and . the linear system (13.8) by Theorem 13. Schur form. and C e Rnxm. say. + IJLJ.16. n :::: m.3. Let A e lRnxn. and ^j Theorem 13.16. Now integrate the differential equation X AX XB (with X(O) C) on [0.6) directly with operations rather than the O(n 6 that would be required by solving (13. ii e n_.4)) are generally not solved using the mn x mn "vee" formulation (13.

1. C e jRnxn further asymptotically stable. v E". Remark 13.23 solution Proof: Suppose A is asymptotically stable.!„.21 and 13. Let A.C E R"x" and suppose further that A is asymptotically stable.. +00 r—>+oo t—v+oo X t ) = etACelB X t ) — O.. .11) has a unique solution if and only if A and . —kn... . it can be shown easily that lim elA = lim elB = O.19.13) where C -= C T < O. Now let v be an arbitrary nonzero vector in jRn. sufficient —A common eigenvalues A asymptotically no common eigenvalues is that A be asymptotically stable.. Theorem Substituting in (13.ATT have A —A.. If the matrix A E Wxn has eigenvalues A.10) we have -C t~+x /—<-+3C = A (1+ 00 elACe lB dt) + (1+ o 00 elACe lB dt) B and so X and so X = -1o {+oo elACe lB dt satisfies (13. then that solution is symmetric. If C is has unique if and only if and —A T eigenvalues in common. Hence.20.13) exists and takes the form (13.. C E R"x". If symmetric and (13.6. Two basic results due to Lyapunov are the following.22. Remark 13.6. X B = is that [ J _Cfi ] be similar to [~ _OB] (via the similarity [ Let Theorem 13.I .AT has eigen— AT eigenvalues -AI. using the solution X ((t) = elACe tB from Theorem 11.12) Theorem 13. Theorem 13. where C Proof: asymptotically l3. Lef A.8).21 l3.146 146 Chapter 13.23. By Theorems 13..11) has a unique solution.An. A.24.. (13. Theorem 13. A matrix A E R"x" is asymptotically stable if and only if there exists a only if e jRnxn asymptotically if positive definite solution to the Lyapunov equation positive definite solution to the Lyapunov equation AX +XAT = C. Then Then .19. Kronecker Products Using the results of Section 11. .]. a sufficient condition that guarantees that A and . symmetric and ( 13...23 a solution to (13. then that solution is symmetric. Then the (unique) solution o/the Lyapunov equation of the AX+XAT=C can be written as can be written as (13. TTzen r/ze AX+XAT =C (13. . Kronecker Products Chapter 13. An equivalent condition for the existence of a unique solution to AX + AX + Remark XB = C is that [~ _cB ] be similar to [ J _°B ](via the similarity [~J _~ ]). . An. then . the first of which follows immediately from Theorem 13. If matrix A e jRn xn eigenvalues )"" .. . Then the Lyapunov equation e jRnxn. Thus. Many useful results exist concerning the relationship between stability and Lyapunov equations.21. _* ]). . we have that lim X ((t) = 0.12).11) has a unique solution. 1-->+00 1 .A T have no eigenvalues in common. results = 0.

Application to Sylvester and Lyapunov Equations 147 147 Since — C > 0 and etA is nonsingular for all the integrand above is positive. D An immediate application is to the derivation of existence and uniqueness conditions An immediate application is to the derivation of existence and uniqueness conditions for the solution of the simple Sylvester-like equation introduced in Theorem 6.14) xp E jRn has a solution X e R. D asymptotically stable. e jRrnxq. Since yH Xy > 0. Proof: The proof follows in a fairly straightforward fashion either directly from the definiProof: The proof follows in a fairly straightforward fashion either directly from the definitions or from the fact that vec(.15) of (13. The Lyapunov equation AX X A = C can also be written using the Remark 13.25. and C E Rmxq. most of which derive from one key result. Let A E Rmxn. Theorem 13. most of which derive from one key The vec operator has many useful properties. and C for which the matrix product ABC is Theorem 13. D tions or from the fact that vec(xyT) = y ® x. Hence Since -C > 0 and etA is nonsingular for all t. Since A was arbitrary. where Y e Rnxp is arbitrary. result. B e jRPxq. For any three matrices A. defined. The solution of (13. However. The equivalent "vec form" of this equation is The equivalent "vec form" of this equation is [(/ ® AT) + (AT ® l)]vec(X) = + (AT ® l)]vec(X) = vec(C).16) . Then 0> yHCy = yH AXy + yHXAT Y = (A + I)yH Xy. Then vector y. where Y E jRnxp is arbitrary. e A(A) with corresponding left eigenvector y.14) as (B T ® A)vec(X) = vec(C) (13.27. suppose X = XT > 0 and let A. The vec operator has many useful properties. Application to Sylvester and Lyapunov Equations 13. and C for which the matrix product ABC is defined. Conversely. the integrand above is positive. A must be asymptotically stable.25.11. Theorem 13. B. we must have A + I = 2 Re A < 0 . nx p if and only if A A+CB+BB = C. in which the solution is of the form is of the form (13. B. in which case the general solution has a if only ifAA + C B+ C. Hence vT Xv > 0 and thus X is positive definite. we must have A + A = 2 R e A < O.27. Then the equation 13. The Proof: Write (13.14) is unique if BB+ ® A+ A = I. C. A must be Since yHXy > 0.14) as Proof: Write (13.yr) = <8> x. the complex-valued equation H X X A = C is equivalent to However.11.t.3.26. suppose X = XT > 0 and let A E A (A) with corresponding left eigenConversely. The Lyapunov equation AX + XATT = C can also be written using the vec notation in the equivalent form vec notation in the equivalent form [(/ ® A) + (A ® l)]vec(X) = vec(C). the complex-valued equation AHX + XA = C is equivalent to [(/ ® AH) vec(C). e jRrnxn. B E Rpx(}. 14) is unique if BB+ ® A+A = [.26. A subtle point arises when dealing with the "dual" Lyapunov equation A T X X A A subtle point arises when dealing with the "dual" Lyapunov equation ATX + XA = C.3. For any three matrices A. vec(ABC) = (C T ® A)vec(B).13. v TXv > 0 and thus X is positive definite. for the solution of the simple Sylvester-like equation introduced in Theorem 6. D Remark 13. Since A was arbitrary. the AXB =C (13.

148 148

Chapter 1 3. Kronecker Products Chapter 13. Kronecker Products

by Theorem 13.26. This "vector equation" has a solution if and only if by Theorem 13.26. This "vector equation" has a solution if and only if
(B T ® A)(B T ® A)+ vec(C)
+

= vec(C).
+ +

It is a straightforward exercise to show that (M ® N) + = M+ ® N+.. Thus, (13.16) has aa It is a straightforward exercise to show that (M ® N) = M <8> N Thus, (13.16) has

solution if and only if solution if and only if vec(C)

=

(B T ® A)«B+{ ® A+)vec(C)

= [(B+ B{ ® AA+]vec(C)
= vec(AA +C B+ B)

and hence if and only if AA +CB+B = C. and hence if and only if AA+ C B+ B C. The general solution of (13 .16) is then given by The general solution of (13.16) is then given by vec(X) = (B T ® A) + vec(C)

+ [I -

(B T ® A) + (B T ® A)]vec(Y),

where Y is arbitrary. This equation can then be rewritten in the form where Y is arbitrary. This equation can th