
PEARSON

ALWAYS LEARNING

Daniel Norman • Dan Wolczuk

Introduction to Linear
Algebra for Science and
Engineering

Student Edition

Taken from:
Introduction to Linear Algebra for Science and Engineering, Second Edition
by Daniel Norman and Dan Wolczuk
Cover Art: Courtesy of Pearson Learning Solutions.

Taken from:

Introduction to Linear Algebra for Science and Engineering, Second Edition


by Daniel Norman and Dan Wolczuk
Copyright © 2012, 1995 by Pearson Education, Inc.
Published by Pearson
Upper Saddle River, New Jersey 07458

All rights reserved. No part of this book may be reproduced, in any form or by any means, without
permission in writing from the publisher.

This special edition published in cooperation with Pearson Learning Solutions.

All trademarks, service marks, registered trademarks, and registered service marks are the
property of their respective owners and are used herein for identification purposes only.

Pearson Learning Solutions, 501 Boylston Street, Suite 900, Boston, MA 02116
A Pearson Education Company
www.pearsoned.com

Contents

A Note to Students  vi
A Note to Instructors  viii

Chapter 1  Euclidean Vector Spaces  1
1.1 Vectors in ℝ² and ℝ³  1
    The Vector Equation of a Line in ℝ²  5
    Vectors and Lines in ℝ³  9
1.2 Vectors in ℝⁿ  14
    Addition and Scalar Multiplication of Vectors in ℝⁿ  15
    Subspaces  16
    Spanning Sets and Linear Independence  18
    Surfaces in Higher Dimensions  24
1.3 Length and Dot Products  28
    Length and Dot Products in ℝ² and ℝ³  28
    Length and Dot Product in ℝⁿ  31
    The Scalar Equation of Planes and Hyperplanes  34
1.4 Projections and Minimum Distance  40
    Projections  40
    The Perpendicular Part  43
    Some Properties of Projections  44
    Minimum Distance  44
1.5 Cross-Products and Volumes  50
    Cross-Products  50
    The Length of the Cross-Product  52
    Some Problems on Lines, Planes, and Distances  54

Chapter 2  Systems of Linear Equations  63
2.1 Systems of Linear Equations and Elimination  63
    The Matrix Representation of a System of Linear Equations  69
    Row Echelon Form  73
    Consistent Systems and Unique Solutions  75
    Some Shortcuts and Some Bad Moves  76
    A Word Problem  77
    A Remark on Computer Calculations  78
2.2 Reduced Row Echelon Form, Rank, and Homogeneous Systems  83
    Rank of a Matrix  85
    Homogeneous Linear Equations  86
2.3 Application to Spanning and Linear Independence  91
    Spanning Problems  91
    Linear Independence Problems  95
    Bases of Subspaces  97
2.4 Applications of Systems of Linear Equations  102
    Resistor Circuits in Electricity  102
    Planar Trusses  105
    Linear Programming  107

Chapter 3  Matrices, Linear Mappings, and Inverses  115
3.1 Operations on Matrices  115
    Equality, Addition, and Scalar Multiplication of Matrices  115
    The Transpose of a Matrix  120
    An Introduction to Matrix Multiplication  121
    Identity Matrix  126
    Block Multiplication  127
3.2 Matrix Mappings and Linear Mappings  131
    Matrix Mappings  131
    Linear Mappings  134
    Is Every Linear Mapping a Matrix Mapping?  136
    Compositions and Linear Combinations of Linear Mappings  139
3.3 Geometrical Transformations  143
    Rotations in the Plane  143
    Rotation Through Angle θ About the x₃-axis in ℝ³  145
3.4 Special Subspaces for Systems and Mappings: Rank Theorem  150
    Solution Space and Nullspace  150
    Solution Set of Ax = b  152
    Range of L and Columnspace of A  153
    Rowspace of A  156
    Bases for Row(A), Col(A), and Null(A)  157
    A Summary of Facts About Rank  162
3.5 Inverse Matrices and Inverse Mappings  165
    A Procedure for Finding the Inverse of a Matrix  167
    Some Facts About Square Matrices and Solutions of Linear Systems  168
    Inverse Linear Mappings  170
3.6 Elementary Matrices  175
3.7 LU-Decomposition  181
    Solving Systems with the LU-Decomposition  185
    A Comment About Swapping Rows  187

Chapter 4  Vector Spaces  193
4.1 Spaces of Polynomials  193
    Addition and Scalar Multiplication of Polynomials  193
4.2 Vector Spaces  197
    Vector Spaces  197
    Subspaces  201
4.3 Bases and Dimensions  206
    Bases  206
    Obtaining a Basis from an Arbitrary Finite Spanning Set  209
    Dimension  211
    Extending a Linearly Independent Subset to a Basis  213
4.4 Coordinates with Respect to a Basis  218
4.5 General Linear Mappings  226
4.6 Matrix of a Linear Mapping  235
    The Matrix of L with Respect to the Basis B  235
    Change of Coordinates and Linear Mappings  240
4.7 Isomorphisms of Vector Spaces  246

Chapter 5  Determinants  255
5.1 Determinants in Terms of Cofactors  255
    The 3 × 3 Case  256
5.2 Elementary Row Operations and the Determinant  264
    The Determinant and Invertibility  270
    Determinant of a Product  270
5.3 Matrix Inverse by Cofactors and Cramer's Rule  274
    Cramer's Rule  276
5.4 Area, Volume, and the Determinant  280
    Area and the Determinant  280
    The Determinant and Volume  283

Chapter 6  Eigenvectors and Diagonalization  289
6.1 Eigenvalues and Eigenvectors  289
    Eigenvalues and Eigenvectors of a Mapping  289
    Eigenvalues and Eigenvectors of a Matrix  291
    Finding Eigenvectors and Eigenvalues  291
6.2 Diagonalization  299
    Some Applications of Diagonalization  303
6.3 Powers of Matrices and the Markov Process  307
    Systems of Linear Difference Equations  312
    The Power Method of Determining Eigenvalues  312
6.4 Diagonalization and Differential Equations  315
    A Practical Solution Procedure  317
    General Discussion  317

Chapter 7  Orthonormal Bases  321
7.1 Orthonormal Bases and Orthogonal Matrices  321
    Orthonormal Bases  321
    Coordinates with Respect to an Orthonormal Basis  323
    Change of Coordinates and Orthogonal Matrices  325
    A Note on Rotation Transformations and Rotation of Axes in ℝ²  329
7.2 Projections and the Gram-Schmidt Procedure  333
    Projections onto a Subspace  333
    The Gram-Schmidt Procedure  337
7.3 Method of Least Squares  342
    Overdetermined Systems  345
7.4 Inner Product Spaces  348
    Inner Product Spaces  348
7.5 Fourier Series  354
    The Inner Product ∫ₐᵇ f(x)g(x) dx  354
    Fourier Series  355

Chapter 8  Symmetric Matrices and Quadratic Forms  363
8.1 Diagonalization of Symmetric Matrices  363
    The Principal Axis Theorem  366
8.2 Quadratic Forms  372
    Quadratic Forms  372
    Classifications of Quadratic Forms  376
8.3 Graphs of Quadratic Forms  380
    Graphs of Q(x) = k in ℝ³  385
8.4 Applications of Quadratic Forms  388
    Small Deformations  388
    The Inertia Tensor  390

Chapter 9  Complex Vector Spaces  395
9.1 Complex Numbers  395
    The Arithmetic of Complex Numbers  395
    The Complex Conjugate and Division  397
    Roots of Polynomial Equations  398
    The Complex Plane  399
    Polar Form  399
    Powers and the Complex Exponential  402
    n-th Roots  404
9.2 Systems with Complex Numbers  407
    Complex Numbers in Electrical Circuit Equations  408
9.3 Vector Spaces over ℂ  411
    Linear Mappings and Subspaces  413
    Complex Multiplication as a Matrix Mapping  415
9.4 Eigenvectors in Complex Vector Spaces  417
    Complex Characteristic Roots of a Real Matrix and a Real Canonical Form  418
    The Case of a 2 × 2 Matrix  420
    The Case of a 3 × 3 Matrix  422
9.5 Inner Products in Complex Vector Spaces  425
    Properties of Complex Inner Products  426
    The Cauchy-Schwarz and Triangle Inequalities  426
    Orthogonality in ℂⁿ and Unitary Matrices  429
9.6 Hermitian Matrices and Unitary Diagonalization  432

Appendix A  Answers to Mid-Section Exercises  439
Appendix B  Answers to Practice Problems and Chapter Quizzes  465

Index  529
A Note to Students

Linear Algebra-What Is It?


Linear algebra is essentially the study of vectors, matrices, and linear mappings. Although many pieces of linear algebra have been studied for many centuries, it did not take its current form until the mid-twentieth century. It is now an extremely important topic in mathematics because of its application to many different areas.

Most people who have learned linear algebra and calculus believe that the ideas of elementary calculus (such as limit and integral) are more difficult than those of introductory linear algebra and that most problems in calculus courses are harder than those in linear algebra courses. So, at least by this comparison, linear algebra is not hard. Still, some students find learning linear algebra difficult. I think two factors contribute to the difficulty students have.

First, students do not see what linear algebra is good for. This is why it is important to read the applications in the text; even if you do not understand them completely, they will give you some sense of where linear algebra fits into the broader picture.

Second, some students mistakenly see mathematics as a collection of recipes for solving standard problems and are uncomfortable with the fact that linear algebra is "abstract" and includes a lot of "theory." There will be no long-term payoff in simply memorizing these recipes, however; computers carry them out far faster and more accurately than any human can. That being said, practising the procedures on specific examples is often an important step toward much more important goals: understanding the concepts used in linear algebra to formulate and solve problems and learning to interpret the results of calculations. Such understanding requires us to come to terms with some theory. In this text, many of our examples will be small. However, as you work through these examples, keep in mind that when you apply these ideas later, you may very well have a million variables and a million equations. For instance, Google's PageRank system uses a matrix that has 25 billion columns and 25 billion rows; you don't want to do that by hand! When you are solving computational problems, always try to observe how your work relates to the theory you have learned.

Mathematics is useful in so many areas because it is abstract: the same good idea can unlock the problems of control engineers, civil engineers, physicists, social scientists, and mathematicians only because the idea has been abstracted from a particular setting. One technique solves many problems only because someone has established a theory of how to deal with these kinds of problems. We use definitions to try to capture important ideas, and we use theorems to summarize useful general facts about the kind of problems we are studying. Proofs not only show us that a statement is true; they can help us understand the statement, give us practice using important ideas, and make it easier to learn a given subject. In particular, proofs show us how ideas are tied together so we do not have to memorize too many disconnected facts.

Many of the concepts introduced in linear algebra are natural and easy, but some may seem unnatural and "technical" to beginners. Do not avoid these apparently more difficult ideas; use examples and theorems to see how these ideas are an essential part of the story of linear algebra. By learning the "vocabulary" and "grammar" of linear algebra, you will be equipping yourself with concepts and techniques that mathematicians, engineers, and scientists find invaluable for tackling an extraordinarily rich variety of problems.


Linear Algebra-Who Needs It?


Mathematicians

Linear algebra and its applications are a subject of continuing research. Linear algebra
is vital to mathematics because it provides essential ideas and tools in areas as diverse
as abstract algebra, differential equations, calculus of functions of several variables,
differential geometry, functional analysis, and numerical analysis.

Engineers

Suppose you become a control engineer and have to design or upgrade an automatic control system. The system may be controlling a manufacturing process or perhaps an airplane landing system. You will probably start with a linear model of the system, requiring linear algebra for its solution. To include feedback control, your system must take account of many measurements (for the example of the airplane: position, velocity, pitch, etc.), and it will have to assess this information very rapidly in order to determine the correct control responses. A standard part of such a control system is a Kalman-Bucy filter, which is not so much a piece of hardware as a piece of mathematical machinery for doing the required calculations. Linear algebra is an essential part of the Kalman-Bucy filter.

If you become a structural engineer or a mechanical engineer, you may be concerned with the problem of vibrations in structures or machinery. To understand the problem, you will have to know about eigenvalues and eigenvectors and how they determine the normal modes of oscillation. Eigenvalues and eigenvectors are some of the central topics in linear algebra.

An electrical engineer will need linear algebra to analyze circuits and systems; a civil engineer will need linear algebra to determine internal forces in static structures and to understand principal axes of strain.

In addition to these fairly specific uses, engineers will also find that they need to know linear algebra to understand systems of differential equations and some aspects of the calculus of functions of two or more variables. Moreover, the ideas and techniques of linear algebra are central to numerical techniques for solving problems of heat and fluid flow, which are major concerns in mechanical engineering. And the ideas of linear algebra underlie advanced techniques such as Laplace transforms and Fourier analysis.

Physicists

Linear algebra is important in physics, partly for the reasons described above. In addition, it is essential in applications such as the inertia tensor in general rotating motion. Linear algebra is an absolutely essential tool in quantum physics (where, for example, energy levels may be determined as eigenvalues of linear operators) and relativity (where understanding change of coordinates is one of the central issues).

Life and Social Scientists

Input/output models, described by matrices, are often used in economics, and similar ideas can be used in modelling populations where one needs to keep track of subpopulations (generations, for example, or genotypes). In all sciences, statistical analysis of data is of great importance, and much of this analysis uses linear algebra; for example, the method of least squares (for regression) can be understood in terms of projections in linear algebra.

Managers

A manager in industry will have to make decisions about the best allocation of resources: enormous amounts of computer time around the world are devoted to linear programming algorithms that solve such allocation problems. The same sorts of techniques used in these algorithms play a role in some areas of mine management. Linear algebra is essential here as well.

So who needs linear algebra? Almost every mathematician, engineer, or scientist will find linear algebra an important and useful tool.

Will these applications be explained in this book?

Unfortunately, most of these applications require too much specialized background to be included in a first-year linear algebra book. To give you an idea of how some of these concepts are applied, a few interesting applications are briefly covered in Sections 1.4, 1.5, 2.4, 5.4, 6.3, 6.4, 7.3, 7.5, 8.3, 8.4, and 9.2. You will get to see many more applications of linear algebra in your future courses.

A Note to Instructors
Welcome to the second edition of Introduction to Linear Algebra for Science and Engineering. It has been a pleasure to revise Daniel Norman's first edition for a new generation of students and teachers. Over the past several years, I have read many articles and spoken to many colleagues and students about the difficulties faced by teachers and learners of linear algebra. In particular, it is well known that students typically find the computational problems easy but have great difficulty in understanding the abstract concepts and the theory. Inspired by this research, I developed a pedagogical approach that addresses the most common problems encountered when teaching and learning linear algebra. I hope that you will find this approach to teaching linear algebra as successful as I have.

Changes to the Second Edition


• Several worked-out examples have been added, as well as a variety of mid-section exercises (discussed below).

• Vectors in ℝⁿ are now always represented as column vectors and are denoted with the usual vector arrow (for example, x⃗). Vectors in general vector spaces are still denoted in boldface.

• Some material has been reorganized to allow students to see important concepts early and often, while also giving greater flexibility to instructors. For example, the concepts of linear independence, spanning, and bases are now introduced in Chapter 1 in ℝⁿ, and students use these concepts in Chapters 2 and 3 so that they are very comfortable with them before being taught general vector spaces.
A Note to Instructors ix

• The material on complex numbers has been collected and placed in Chapter 9, at the end of the text. However, if one desires, it can be distributed throughout the text appropriately.

• There is a greater emphasis on teaching the mathematical language and using mathematical notation.

• All-new figures clearly illustrate important concepts, examples, and applications.

• The text has been redesigned to improve readability.

Approach and Organization


Students typically have little trouble with computational questions, but they often struggle with abstract concepts and proofs. This is problematic because computers perform the computations in the vast majority of real-world applications of linear algebra. Human users, meanwhile, must apply the theory to transform a given problem into a linear algebra context, input the data properly, and interpret the result correctly.

The main goal of this book is to mix theory and computations throughout the course. The benefits of this approach are as follows:

• It prevents students from mistaking linear algebra as very easy and very computational early in the course and then becoming overwhelmed by abstract concepts and theories later.

• It allows important linear algebra concepts to be developed and extended more slowly.

• It encourages students to use computational problems to help understand the theory of linear algebra rather than blindly memorize algorithms.

One example of this approach is our treatment of the concepts of spanning and linear independence. They are both introduced in Section 1.2 in ℝⁿ, where they can be motivated in a geometrical context. They are then used again for matrices in Section 3.1 and polynomials in Section 4.1, before they are finally extended to general vector spaces in Section 4.2.

The following are some other features of the text's organization:

• The idea of linear mappings is introduced early in a geometrical context and is used to explain aspects of matrix multiplication, matrix inversion, and features of systems of linear equations. Geometrical transformations provide intuitively satisfying illustrations of important concepts.

• Topics are ordered to give students a chance to work with concepts in a simpler setting before using them in a much more involved or abstract setting. For example, before reaching the definition of a vector space in Section 4.2, students will have seen the 10 vector space axioms and the concepts of linear independence and spanning for three different vector spaces, and they will have had some experience in working with bases and dimensions. Thus, instead of being bombarded with new concepts at the introduction of general vector spaces, students will just be generalizing concepts with which they are already familiar.

Pedagogical Features
Since mathematics is best learned by doing, the following pedagogical elements are included in the book.

• A selection of routine mid-section exercises is provided, with solutions included in the back of the text. These allow students to use and test their understanding of one concept before moving on to other concepts in the section.

• Practice problems are provided for students at the end of each section. See "A Note on the Exercises and Problems" below.

• Examples, theorems, and definitions are called out in the margins for easy reference.

Applications
One of the difficulties in any linear algebra course is that the applications of linear algebra are not so immediate or so intuitively appealing as those of elementary calculus. Most convincing applications of linear algebra require a fairly lengthy buildup of background that would be inappropriate in a linear algebra text. However, without some of these applications, many students would find it difficult to remain motivated to learn linear algebra. An additional difficulty is that the applications of linear algebra are so varied that there is very little agreement on which applications should be covered.

In this text we briefly discuss a few applications to give students some easy samples. Additional applications are provided on the Companion Website so that instructors who wish to cover some of them can pick and choose at their leisure without increasing the size (and hence the cost) of the book.

List of Applications

• Minimum distance from a point to a plane (Section 1.4)

• Area and volume (Section 1.5, Section 5.4)

• Electrical circuits (Section 2.4, Section 9.2)

• Planar trusses (Section 2.4)

• Linear programming (Section 2.4)

• Magic squares (Chapter 4 Review)

• Markov processes (Section 6.3)

• Differential equations (Section 6.4)

• Curve of best fit (Section 7.3)

• Overdetermined systems (Section 7.3)

• Graphing quadratic forms (Section 8.3)

• Small deformations (Section 8.4)

• The inertia tensor (Section 8.4)



Computers
As explained in "A Note on the Exercises and Problems," which follows, some problems in the book require access to appropriate computer software. Students should realize that the theory of linear algebra does not apply only to matrices of small size with integer entries. However, since there are many ideas to be learned in linear algebra, numerical methods are not discussed. Some numerical issues, such as accuracy and efficiency, are addressed in notes and problems.

A Note on the Exercises and Problems


Most sections contain mid-section exercises. These mid-section exercises have been created to allow students to check their understanding of key concepts before continuing on to new concepts in the section. Thus, when reading through a chapter, a student should always complete each exercise before continuing to read the rest of the chapter.

At the end of each section, problems are divided into A, B, C, and D problems.

The A Problems are practice problems and are intended to provide a sufficient variety and number of standard computational problems, as well as the odd theoretical problem, for students to master the techniques of the course; answers are provided at the back of the text. Full solutions are available in the Student Solutions Manual (sold separately).

The B Problems are homework problems, essentially duplicates of the A problems with no answers provided, for instructors who want such exercises for homework. In a few cases, the B problems are not exactly parallel to the A problems.

The C Problems require the use of a suitable computer program. These problems are designed not only to help students familiarize themselves with using computer software to solve linear algebra problems, but also to remind students that linear algebra uses real numbers, not only integers or simple fractions.

The D Problems usually require students to work with general cases, to write simple arguments, or to invent examples. These are important aspects of mastering mathematical ideas, and all students should attempt at least some of these, and not get discouraged if they make slow progress. With effort, most students will be able to solve many of these problems and will benefit greatly in their understanding of the concepts and connections in doing so.

In addition to the mid-section exercises and end-of-section problems, there is a sample Chapter Quiz in the Chapter Review at the end of each chapter. Students should be aware that their instructors may have a different idea of what constitutes an appropriate test on this material.
At the end of each chapter, there are some Further Problems; these are similar to
the D Problems and provide an extended investigation of certain ideas or applications
of linear algebra. Further Problems are intended for advanced students who wish to
challenge themselves and explore additional concepts.

Using This Text to Teach Linear Algebra


There are many different approaches to teaching linear algebra. Although we suggest
covering the chapters in order, the text has been written to try to accommodate two
main strategies.

Early Vector Spaces


We believe that it is very beneficial to introduce general vector spaces immediately after students have gained some experience in working with a few specific examples of vector spaces. Students find it easier to generalize the concepts of spanning, linear independence, bases, dimension, and linear mappings while the earlier specific cases are still fresh in their minds. In addition, we feel that it can be unhelpful to students to have determinants available too soon. Some students are far too eager to latch onto mindless algorithms involving determinants (for example, to check linear independence of three vectors in three-dimensional space) rather than actually come to terms with the defining ideas. Finally, this allows eigenvalues, eigenvectors, and diagonalization to be highlighted near the end of the first course. If diagonalization is taught too soon, its importance can be lost on students.

Early Determinants and Diagonalization


Some reviewers have commented that they want to be able to cover determinants and diagonalization before abstract vector spaces and that in some introductory courses, abstract vector spaces may not be covered at all. Thus, this text has been written so that Chapters 5 and 6 may be taught prior to Chapter 4. (Note that all required information about subspaces, bases, and dimension for diagonalization of matrices over ℝ is covered in Chapters 1, 2, and 3.) Moreover, there is a natural flow from matrix inverses and elementary matrices at the end of Chapter 3 to determinants in Chapter 5.

A Course Outline
The following table indicates the sections in each chapter that we consider to be "central material":

Chapter   Central Material        Optional Material
   1      1, 2, 3, 4, 5
   2      1, 2, 3                 4
   3      1, 2, 3, 4, 5, 6        7
   4      1, 2, 3, 4, 5, 6, 7
   5      1, 2, 3                 4
   6      1, 2                    3, 4
   7      1, 2                    3, 4, 5
   8      1, 2                    3, 4
   9      1, 2, 3, 4, 5, 6

Supplements
We are pleased to offer a variety of excellent supplements to students and instructors
using the Second Edition.
The new Student Solutions Manual (ISBN: 978-0-321-80762-5), prepared by the author of the second edition, contains full solutions to the Practice Problems and Chapter Quizzes. It is available to students at low cost.

MyMathLab® Online Course (access code required) delivers proven results in helping individual students succeed. It provides engaging experiences that personalize, stimulate, and measure learning for each student. And, it comes from a trusted partner with educational expertise and an eye on the future. To learn more about how MyMathLab combines proven learning applications with powerful assessment, visit www.mymathlab.com or contact your Pearson representative.
The new Instructor's Resource CD-ROM (ISBN: 978-0-321-80759-5) includes the following valuable teaching tools:

• An Instructor's Solutions Manual for all exercises in the text: Practice Problems, Homework Problems, Computer Problems, Conceptual Problems, Chapter Quizzes, and Further Problems.

• A Test Bank with a large selection of questions for every chapter of the text.

• Customizable Beamer Presentations for each chapter.

• An Image Library that includes high-quality versions of the Figures, Theorems, Corollaries, Lemmas, and Algorithms in the text.

Finally, the second edition is available as a CourseSmart eTextbook (ISBN: 978-0-321-75005-1). CourseSmart goes beyond traditional expectations, providing instant, online access to the textbook and course materials at a lower cost for students (average savings of 60%). With instant access from any computer and the ability to search the text, students will find the content they need quickly, no matter where they are. And with online tools like highlighting and note taking, students can save time and study efficiently.

Instructors can save time and hassle with a digital eTextbook that allows them to search for the most relevant content at the very moment they need it. Whether it's evaluating textbooks or creating lecture notes to help students with difficult concepts, CourseSmart can make life a little easier. See all the benefits at www.coursesmart.com/instructors or www.coursesmart.com/students.

Pearson's technology specialists work with faculty and campus course designers to ensure that Pearson technology products, assessment tools, and online course materials are tailored to meet your specific needs. This highly qualified team is dedicated to helping schools take full advantage of a wide range of educational resources by assisting in the integration of a variety of instructional materials and media formats. Your local Pearson Canada sales representative can provide you with more details about this service program.

Acknowledgments
Thanks are expressed to:

Agnieszka Wolczuk: for her support, encouragement, help with editing, and tasty snacks.

Mike La Croix: for all of the amazing figures in the text and for his assistance on editing, formatting, and LaTeX'ing.

Stephen New, Martin Pei, Barbara Csima, Emilio Paredes: for proofreading and their many valuable comments and suggestions.

Conrad Hewitt, Robert Andre, Uldis Celmins, C. T. Ng, and many of my other colleagues who have taught me things about linear algebra and how to teach it, as well as providing many helpful suggestions for the text.

To all of the reviewers of the text, whose comments, corrections, and recommendations have resulted in many positive improvements:

Robert Andre, University of Waterloo
Luigi Bilotto, Vanier College
Dietrich Burbulla, University of Toronto
Dr. Alistair Carr, Monash University
Gerald Cliff, University of Alberta
Antoine Khalil, CEGEP Vanier
Hadi Kharaghani, University of Lethbridge
Gregory Lewis, University of Ontario Institute of Technology
Eduardo Martinez-Pedroza, McMaster University
Dorette Pronk, Dalhousie University
Dr. Alyssa Sankey, University of New Brunswick
Manuele Santoprete, Wilfrid Laurier University
Alistair Savage, University of Ottawa
Denis Sevee, John Abbott College
Mark Solomonovich, Grant MacEwan University
Dr. Pamini Thangarajah, Mount Royal University
Dr. Chris Tisdell, The University of New South Wales
Murat Tuncali, Nipissing University
Brian Wetton, University of British Columbia
Thanks also to the many anonymous reviewers of the manuscript.


Cathleen Sullivan, John Lewis, Patricia Ciardullo, and Sarah Lukaweski: for all of their hard work in making the second edition of this text possible and for their suggestions and editing.

In addition, I thank the team at Pearson Canada for their support during the writing and production of this text.

Finally, a very special thank you to Daniel Norman and all those who contributed to the first edition.

Dan Wolczuk
University of Waterloo
CHAPTER 1

Euclidean Vector Spaces


CHAPTER OUTLINE
"
1.1 Vectors in IR and JR?.3
1.2 Vectors in IR
11

1.3 Length and Dot Products


1.4 Projections and Minimum Distance
1.5 Cross-Products and Volumes

Some of the material in this chapter will be familiar to many students, but some ideas
that are introduced here will be new to most. In this chapter we will look at operations
on and important concepts related to vectors. We will also look at some applications
of vectors in the familiar setting of Euclidean space. Most of these concepts will later
be extended to more general settings. A firm understanding of the material from this
chapter will help greatly in understanding the topics in the rest of this book.

1.1 Vectors in R^2 and R^3


We begin by considering the two-dimensional plane in Cartesian coordinates. Choose an origin O and two mutually perpendicular axes, called the x1-axis and the x2-axis, as shown in Figure 1.1.1. Then a point P in the plane is identified by the 2-tuple (p1, p2), called the coordinates of P, where p1 is the distance from P to the x2-axis, with p1 positive if P is to the right of this axis and negative if P is to the left. Similarly, p2 is the distance from P to the x1-axis, with p2 positive if P is above this axis and negative if P is below. You have already learned how to plot graphs of equations in this plane.

[Figure: the point P = (p1, p2) plotted in the plane, with dashed lines from P to each axis.]

Figure 1.1.1 Coordinates in the plane.

For applications in many areas of mathematics, and in many subjects such as


physics and economics, it is useful to view points more abstractly. In particular, we
will view them as vectors and provide rules for adding them and multiplying them by
constants.

Definition
R^2 is the set of all vectors of the form [x1; x2], where x1 and x2 are real numbers called the components of the vector. Mathematically, we write

R^2 = { [x1; x2] | x1, x2 ∈ R }

Remark
We shall use the notation x = [x1; x2] to denote vectors in R^2, where [x1; x2] is the column vector with entries x1 and x2.


Although we are viewing the elements of R^2 as vectors, we can still interpret these geometrically as points. That is, the vector p = [p1; p2] can be interpreted as the point P(p1, p2). Graphically, this is often represented by drawing an arrow from (0, 0) to (p1, p2), as shown in Figure 1.1.2. Note, however, that the points between (0, 0) and (p1, p2) should not be thought of as points "on the vector." The representation of a vector as an arrow is particularly common in physics; force and acceleration are vector quantities that can conveniently be represented by an arrow of suitable magnitude and direction.

[Figure: the vector p drawn as an arrow from O = (0, 0) to the point (p1, p2).]

Figure 1.1.2 Graphical representation of a vector.

Definition (Addition and Scalar Multiplication in R^2)
If x = [x1; x2], y = [y1; y2], and t ∈ R, then we define addition of vectors by

x + y = [x1; x2] + [y1; y2] = [x1 + y1; x2 + y2]

and the scalar multiplication of a vector by a factor of t, called a scalar, is defined by

tx = t[x1; x2] = [tx1; tx2]

The addition of two vectors is illustrated in Figure 1.1.3: construct a parallelogram with vectors x and y as adjacent sides; then x + y is the vector corresponding to the vertex of the parallelogram opposite to the origin. Observe that the components really are added according to the definition. This is often called the "parallelogram rule for addition."

Figure 1.1.3 Addition of vectors p and q.

EXAMPLE 1
Let x = [-2; 3] and y = [5; 1]. Then

x + y = [-2 + 5; 3 + 1] = [3; 4]

[Figure: x, y, and x + y shown by the parallelogram rule, with the points (-2, 3) and (3, 4) marked.]

Similarly, scalar multiplication is illustrated in Figure 1.1.4. Observe that multiplication by a negative scalar reverses the direction of the vector. It is important to note that x - y is to be interpreted as x + (-1)y.

[Figure: the vector d together with the scalar multiples (1.5)d and (-1)d.]

Figure 1.1.4 Scalar multiplication of the vector d.

EXAMPLE 2
Let u = …, v = …, and w = … be vectors in R^2. Calculate u + v, 3w, and 2v - w.

Solution: Each quantity is computed componentwise; in particular, 2v - w is computed as 2v + (-1)w.
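The componentwise rules above are easy to mirror in code. Here is a minimal Python sketch; the sample vectors are illustrative choices, not the (lost) entries from Example 2:

```python
def add(x, y):
    """Componentwise vector addition in R^n."""
    return [xi + yi for xi, yi in zip(x, y)]

def scalar_mult(t, x):
    """Scalar multiplication: multiply every component by t."""
    return [t * xi for xi in x]

# Sample vectors (illustrative only; not the entries from Example 2).
v = [-2, 3]
w = [4, 1]

print(add(v, w))                                    # [2, 4]
print(scalar_mult(3, w))                            # [12, 3]
print(add(scalar_mult(2, v), scalar_mult(-1, w)))   # 2v - w = [-8, 5]
```

Note that 2v - w is computed as 2v + (-1)w, exactly as the text defines subtraction.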

EXERCISE 1
Let u = …, v = …, and w = … be vectors in R^2. Calculate each of the following and illustrate with a sketch.

(a) u + w   (b) -v   (c) (u + w) - v

The vectors e1 = [1; 0] and e2 = [0; 1] play a special role in our discussion of R^2. We will call the set {e1, e2} the standard basis for R^2. (We shall discuss the concept of a basis further in Section 1.2.) The basis vectors e1 and e2 are important because any vector v = [v1; v2] can be written as a sum of scalar multiples of e1 and e2 in exactly one way:

v = v1 e1 + v2 e2

Remark
In physics and engineering, it is common to use the notation i = [1; 0] and j = [0; 1] instead.

We will use the phrase linear combination to mean "sum of scalar multiples." So, we have shown above that any vector x ∈ R^2 can be written as a unique linear combination of the standard basis vectors.

One other vector in R^2 deserves special mention: the zero vector, 0 = [0; 0]. Some important properties of the zero vector, which are easy to verify, are that for any x ∈ R^2,

(1) 0 + x = x
(2) x + (-1)x = 0
(3) 0x = 0

The Vector Equation of a Line in R^2

In Figure 1.1.4, it is apparent that the set of all multiples of a vector d creates a line through the origin. We make this our definition of a line in R^2: a line through the origin in R^2 is a set of the form

{ td | t ∈ R }

Often we do not use formal set notation but simply write the vector equation of the line:

x = td, t ∈ R

The vector d is called the direction vector of the line.

Similarly, we define the line through p with direction vector d to be the set

{ p + td | t ∈ R }

which has the vector equation

x = p + td, t ∈ R

This line is parallel to the line with equation x = td, t ∈ R, because of the parallelogram rule for addition. As shown in Figure 1.1.5, each point on the line through p can be obtained from a corresponding point on the line x = td by adding the vector p. We say that the line has been translated by p. More generally, two lines are parallel if the direction vector of one line is a non-zero multiple of the direction vector of the other line.

[Figure: the line x = td through the origin and its translate through p.]

Figure 1.1.5 The line with vector equation x = td + p.

EXAMPLE 3
A vector equation of the line through the point P(2, -3) with direction vector d = … is

x = [2; -3] + td, t ∈ R

EXAMPLE 4
Write the vector equation of a line through P(1, 2) parallel to the line with vector equation

x = t(…), t ∈ R

Solution: Since the lines are parallel, we can choose the same direction vector. Hence, the vector equation of the line is

x = [1; 2] + t(…), t ∈ R

EXERCISE 2
Write the vector equation of a line through P(0, 0) parallel to the line with vector equation

x = …, t ∈ R

Sometimes the components of a vector equation are written separately:

x = p + td  becomes  x1 = p1 + t d1, x2 = p2 + t d2, t ∈ R

This is referred to as the parametric equation of the line. The familiar scalar form of the equation of the line is obtained by eliminating the parameter t. Provided that d1 ≠ 0 and d2 ≠ 0,

t = (x1 - p1)/d1 = (x2 - p2)/d2

or

x2 = p2 + (d2/d1)(x1 - p1)

What can you say about the line if d1 = 0 or d2 = 0?

EXAMPLE 5
Write the vector, parametric, and scalar equations of the line passing through the point P(3, 4) with direction vector [-5; 1].

Solution: The vector equation is

x = [3; 4] + t[-5; 1], t ∈ R

So, the parametric equations are x1 = 3 - 5t, x2 = 4 + t, t ∈ R.

The scalar equation is x2 = 4 - (1/5)(x1 - 3).
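As a quick sanity check of Example 5, the Python sketch below (using the point and direction from the example) confirms that every point generated by the parametric form satisfies the scalar equation:

```python
p = (3, 4)    # point on the line (from Example 5)
d = (-5, 1)   # direction vector (from Example 5)

def point_on_line(t):
    """Parametric form: x1 = 3 - 5t, x2 = 4 + t."""
    return (p[0] + t * d[0], p[1] + t * d[1])

def scalar_form(x1):
    """Scalar form from the example: x2 = 4 - (1/5)(x1 - 3)."""
    return 4 - (x1 - 3) / 5

# Every point produced by the parametric form satisfies the scalar equation.
for t in (-2, 0, 1, 3.5):
    x1, x2 = point_on_line(t)
    assert abs(x2 - scalar_form(x1)) < 1e-12
print("parametric and scalar forms agree")
```

Checking a few parameter values this way is a useful habit whenever you eliminate a parameter by hand.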

Directed Line Segments  For dealing with certain geometrical problems, it is useful to introduce directed line segments. We denote the directed line segment from point P to point Q by PQ. We think of it as an "arrow" starting at P and pointing towards Q. We shall identify directed line segments from the origin O with the corresponding vectors; we write OP = p, OQ = q, and so on. A directed line segment that starts at the origin is called the position vector of the point.

For many problems, we are interested only in the direction and length of the directed line segment; we are not interested in the point where it is located. For example, in Figure 1.1.3, we may wish to treat the line segment QR as if it were the same as OP. Taking our cue from this example, for arbitrary points P, Q, R in R^2, we define QR to be equivalent to OP if r - q = p. In this case, we have used one directed line segment OP starting from the origin in our definition.

Figure 1.1.6 A directed line segment from P to Q.

More generally, for arbitrary points Q, R, S, and T in R^2, we define QR to be equivalent to ST if they are both equivalent to the same OP for some P. That is, if

r - q = p and t - s = p for the same p

We can abbreviate this by simply requiring that

r - q = t - s

EXAMPLE 6
For points Q(1, 3), R(6, -1), S(-2, 4), and T(3, 0), we have that QR is equivalent to ST because

r - q = [6; -1] - [1; 3] = [5; -4] = [3; 0] - [-2; 4] = t - s

[Figure: the equivalent directed line segments QR and ST, with S(-2, 4) labelled.]

In some problems, where it is not necessary to distinguish between equivalent directed line segments, we "identify" them (that is, we treat them as the same object) and write PQ = RS. Indeed, we identify them with the corresponding line segment starting at the origin, so in Example 6 we write QR = ST = [5; -4].

Remark
Writing QR = ST is a bit sloppy (an abuse of notation) because QR is not really the same object as ST. However, introducing the precise language of "equivalence classes" and more careful notation with directed line segments is not helpful at this stage. By introducing directed line segments, we are encouraged to think about vectors that are located at arbitrary points in space. This is helpful in solving some geometrical problems, as we shall see below.

EXAMPLE 7
Find a vector equation of the line through P(1, 2) and Q(3, -1).

Solution: The direction of the line is

PQ = q - p = [3; -1] - [1; 2] = [2; -3]

Hence, a vector equation of the line with direction PQ that passes through P(1, 2) is

x = p + t PQ = [1; 2] + t[2; -3], t ∈ R

Observe in the example above that we would have the same line if we started at the second point and "moved" toward the first point, or even if we took a direction vector in the opposite direction. Thus, the same line is described by the vector equations

x = [3; -1] + r[-2; 3], r ∈ R
x = [3; -1] + s[2; -3], s ∈ R
x = [1; 2] + t[-2; 3], t ∈ R

In fact, there are infinitely many descriptions of a line: we may choose any point on the line, and we may use any non-zero multiple of the direction vector.

EXERCISE 3 Find a vector equation of the line through P(l, 1) and Q(-2, 2).

Vectors and Lines in R^3


Everything we have done so far works perfectly well in three dimensions. We choose
an origin 0 and three mutually perpendicular axes, as shown in Figure 1.1.7. The
x1 -axis is usually pictured coming out of the page (or blackboard), the x2-axis to
the right, and the x3-axis towards the top of the picture.

Figure 1.1.7 The positive coordinate axes in R^3.

It should be noted that we are adopting the convention that the coordinate axes
form a right-handed system. One way to visualize a right-handed system is to spread
out the thumb, index finger, and middle finger of your right hand. The thumb is
the x1 -axis, the index finger is the x2-axis, and the middle finger is the x3-axis. See
Figure 1.1.8.

Figure 1.1.8 Identifying a right-handed system.

We now define R^3 to be the three-dimensional analog of R^2.

Definition
R^3 is the set of all vectors of the form [x1; x2; x3], where x1, x2, and x3 are real numbers. Mathematically, we write

R^3 = { [x1; x2; x3] | x1, x2, x3 ∈ R }

Definition (Addition and Scalar Multiplication in R^3)
If x = [x1; x2; x3], y = [y1; y2; y3], and t ∈ R, then we define addition of vectors by

x + y = [x1 + y1; x2 + y2; x3 + y3]

and the scalar multiplication of a vector by a factor of t by

tx = [tx1; tx2; tx3]

Addition still follows the parallelogram rule. It may help you to visualize this if you realize that two vectors in R^3 must lie within a plane in R^3, so that the two-dimensional picture is still valid. See Figure 1.1.9.

Figure 1.1.9 Two-dimensional parallelogram rule in R^3.

EXAMPLE 8
Let u = …, v = …, and w = … be vectors in R^3. Calculate v + u, -w, and -v + 2w - u.

Solution: Each quantity is computed componentwise, exactly as in R^2; for instance, -v + 2w - u = (-1)v + 2w + (-1)u.

It is useful to introduce the standard basis for R^3 just as we did for R^2. Define

e1 = [1; 0; 0], e2 = [0; 1; 0], e3 = [0; 0; 1]

Then any vector v = [v1; v2; v3] can be written as the linear combination

v = v1 e1 + v2 e2 + v3 e3

Remark
In physics and engineering, it is common to use the notation i = e1, j = e2, and k = e3 instead.

The zero vector 0 = [0; 0; 0] in R^3 has the same properties as the zero vector in R^2.

Directed line segments are the same in three-dimensional space as in the two-dimensional case.

A line through the point P in R^3 (corresponding to a vector p) with direction vector d ≠ 0 can be described by a vector equation:

x = p + td, t ∈ R

It is important to realize that a line in R^3 cannot be described by a single scalar linear equation, as in R^2. We shall see in Section 1.3 that such an equation describes a plane in R^3.

EXAMPLE 9
Find a vector equation of the line that passes through the points P(1, 5, -2) and Q(4, -1, 3).

Solution: A direction vector is d = q - p = [3; -6; 5]. Hence a vector equation of the line is

x = [1; 5; -2] + t[3; -6; 5], t ∈ R

Note that the corresponding parametric equations are x1 = 1 + 3t, x2 = 5 - 6t, and x3 = -2 + 5t.
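The computation in Example 9 can be mirrored in a few lines of Python. Checking that t = 0 and t = 1 recover P and Q is a useful habit when finding a line through two points:

```python
P = (1, 5, -2)
Q = (4, -1, 3)

# Direction vector d = q - p, computed componentwise.
d = tuple(qi - pi for pi, qi in zip(P, Q))
print(d)   # (3, -6, 5)

def line_point(t):
    """Vector equation x = p + t d, evaluated componentwise."""
    return tuple(pi + t * di for pi, di in zip(P, d))

assert line_point(0) == P   # t = 0 recovers P
assert line_point(1) == Q   # t = 1 recovers Q
```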

EXERCISE4 Find a vector equation of the line that passes through the points P(l, 2, 2) and
Q(l,-2,3).

PROBLEMS 1.1
Practice Problems

A1 Compute each of the following linear combinations and illustrate with a sketch.
(a) …  (b) …  (c) …

A2 Compute each of the following linear combinations.
(a) …  (b) …  (c) …  (d) …  (e) …

A3 Compute each of the following linear combinations.
(a) …  (b) …  (c) …

A4 Let v = … and w = …. Determine
(a) 2v - 3w
(b) -3(v + 2w) + 5v
(c) u such that w - 2u = 3v
(d) u such that u - 3v = 2u

A5 Let v = … and w = …. Determine
(a) …v + …w
(b) 2(v + w) - (2v - 3w)
(c) u such that w - u = 2v
(d) u such that …u + …v = w

A6 Consider the points P(2, 3, 1), Q(3, 1, -2), R(1, 4, 0), and S(-5, 1, 5). Determine PQ, PR, PS, QR, and SR, and verify that PQ + QR = PR = PS + SR.

A7 Write a vector equation of the line passing through the given point with the given direction vector.
(a) P(3, 4), d = …
(b) P(2, 3), d = …
(c) P(2, 0, 5), d = …
(d) P(4, 1, 5), d = …

A8 Write a vector equation for the line that passes through the given points.
(a) P(-1, 2), Q(2, -3)
(b) P(4, 1), Q(-2, -1)
(c) P(1, 3, -5), Q(-2, 1, 0)
(d) P(-2, 1, 1), Q(4, 2, 2)
(e) P(…), Q(…)

A9 For each of the following lines in R^2, determine a vector equation and parametric equations.
(a) x2 = 3x1 + 2
(b) 2x1 + 3x2 = 5

A10 (a) A set of points in R^n is collinear if all the points lie on the same line. By considering directed line segments, give a general method for determining whether a given set of three points is collinear.
(b) Determine whether the points P(1, 2), Q(4, 1), and R(-5, 4) are collinear. Show how you decide.
(c) Determine whether the points S(1, 0, 1), T(3, -2, 3), and U(-3, 4, -1) are collinear. Show how you decide.

Homework Problems

B1 Compute each of the following linear combinations and illustrate with a sketch.
(a) …  (b) …  (c) …  (d) …

B2 Compute each of the following linear combinations.
(a) …  (b) …  (c) …  (d) …  (e) …

B3 Compute each of the following linear combinations.
(a) …  (b) …  (c) …  (d) …  (e) …  (f) …

B4 Let v = … and w = …. Determine
(a) 2v - 3w
(b) -2(v - w) - 3w
(c) u such that w - 2u = 3v
(d) u such that 2u + 3w = v

B5 Let v = … and w = …. Determine
(a) 3v - 2w
(b) …v + …w
(c) u such that v + u = v
(d) u such that 2u - w = 2v

B6 (a) Consider the points P(1, 4, 1), Q(4, 3, -1), R(-1, 4, 2), and S(8, 6, -5). Determine PQ, PR, PS, QR, and SR, and verify that PQ + QR = PR = PS + SR.
(b) Consider the points P(3, -2, 1), Q(2, 7, -3), R(3, 1, 5), and S(-2, 4, -1). Determine PQ, PR, PS, QR, and SR, and verify that PQ + QR = PR = PS + SR.

B7 Write a vector equation of the line passing through the given point with the given direction vector.
(a) P(-3, 4), d = …
(b) P(0, 0), d = …
(c) P(2, 3, -1), d = …
(d) P(3, 1, 2), d = …

B8 Write a vector equation for the line that passes through the given points.
(a) P(3, 1), Q(1, 2)
(b) P(1, -2, 1), Q(0, 0, 0)
(c) P(2, -6, 3), Q(-1, 5, 2)
(d) P(…), Q(…)

B9 For each of the following lines in R^2, determine a vector equation and parametric equations.
(a) x2 = -2x1 + 3
(b) x1 + 2x2 = 3

B10 (You will need the solution from Problem A10 (a) to answer this.)
(a) Determine whether the points P(2, 1, 1), Q(1, 2, 3), and R(4, -1, -3) are collinear. Show how you decide.
(b) Determine whether the points S(1, 1, 0), T(6, 2, 1), and U(-4, 0, -1) are collinear. Show how you decide.

Computer Problems

C1 Let v1 = …, v2 = …, v3 = …, and v4 = … be the given vectors. Use computer software to evaluate each of the following.
(a) 17v1 + …v2 - 3v3 + 42v4
(b) -1440v1 - 2341v2 - 919v3 + 669v4

Conceptual Problems

D1 Let v = … and w = ….
(a) Find real numbers t1 and t2 such that t1 v + t2 w = …. Illustrate with a sketch.
(b) Find real numbers t1 and t2 such that t1 v + t2 w = [x1; x2] for any x1, x2 ∈ R.
(c) Use your result in part (b) to find real numbers t1 and t2 such that t1 v + t2 w = ….

D2 Let P, Q, and R be points in R^2 corresponding to vectors p, q, and r respectively.
(a) Explain in terms of directed line segments why PQ + QR + RP = 0.
(b) Verify the equation of part (a) by expressing PQ, QR, and RP in terms of p, q, and r.

D3 Let p and d ≠ 0 be vectors in R^2. Prove that x = p + td, t ∈ R, is a line in R^2 passing through the origin if and only if p is a scalar multiple of d.

D4 Let x and y be vectors in R^3 and t ∈ R be a scalar. Prove that t(x + y) = tx + ty.

1.2 Vectors in R^n

We now extend the ideas from the previous section to n-dimensional Euclidean space R^n.

Students sometimes do not see the point in discussing n-dimensional space because it does not seem to correspond to any physically realistic geometry. But, in a number of instances, more than three dimensions are important. For example, to discuss the motion of a particle, an engineer needs to specify its position (3 variables) and its velocity (3 more variables); the engineer therefore has 6 variables. A scientist working in string theory works with 11-dimensional space-time variables. An economist seeking to model the Canadian economy uses many variables: one standard model has more than 1500 variables. Of course, calculations in such huge models are carried out by computer. Even so, understanding the ideas of geometry and linear algebra is necessary to decide which calculations are required and what the results mean.

Addition and Scalar Multiplication of Vectors in R^n

Definition
R^n is the set of all vectors of the form [x1; …; xn], where each x_i ∈ R. Mathematically,

R^n = { [x1; …; xn] | x1, …, xn ∈ R }

Definition (Addition and Scalar Multiplication in R^n)
If x = [x1; …; xn], y = [y1; …; yn], and t ∈ R, then we define addition of vectors by

x + y = [x1 + y1; …; xn + yn]

and the scalar multiplication of a vector by a factor of t by

tx = [tx1; …; txn]

Theorem 1
For all w, x, y ∈ R^n and s, t ∈ R we have

(1) x + y ∈ R^n (closed under addition)
(2) x + y = y + x (addition is commutative)
(3) (x + y) + w = x + (y + w) (addition is associative)
(4) There exists a vector 0 ∈ R^n such that z + 0 = z for all z ∈ R^n (zero vector)
(5) For each x ∈ R^n there exists a vector -x ∈ R^n such that x + (-x) = 0 (additive inverses)
(6) tx ∈ R^n (closed under scalar multiplication)
(7) s(tx) = (st)x (scalar multiplication is associative)
(8) (s + t)x = sx + tx (a distributive law)
(9) t(x + y) = tx + ty (another distributive law)
(10) 1x = x (scalar multiplicative identity)

Proof: We will prove properties (1) and (2) from Theorem 1 and leave the other proofs to the reader.

For (1), by definition,

x + y = [x1 + y1; …; xn + yn] ∈ R^n

since x_i + y_i ∈ R for 1 ≤ i ≤ n.

For (2),

x + y = [x1 + y1; …; xn + yn] = [y1 + x1; …; yn + xn] = y + x  ∎

EXERCISE 1 Prove properties (5), (6), and (7) from Theorem 1.

Observe that properties (2), (3), (7), (8), (9), and (10) from Theorem 1 refer only to the operations of addition and scalar multiplication, while the other properties, (1), (4), (5), and (6), are about the relationship between the operations and the set R^n. These facts should be clear in the proof of Theorem 1. Moreover, we see that the zero vector of R^n is the vector 0 = [0; …; 0], and the additive inverse of x is -x = (-1)x. Note that the zero vector satisfies the same properties as the zero vector in R^2 and R^3.

Students often find properties (1) and (6) a little strange. At first glance, it seems obvious that the sum of two vectors in R^n or the scalar multiple of a vector in R^n is another vector in R^n. However, these properties are in fact extremely important. We now look at subsets of R^n that have both of these properties.

Subspaces

Definition (Subspace)
A non-empty subset S of R^n is called a subspace of R^n if for all vectors x, y ∈ S and t ∈ R:

(1) x + y ∈ S (closed under addition)
(2) tx ∈ S (closed under scalar multiplication)

The definition requires that a subspace be non-empty. A subspace always contains at least one vector. In particular, it follows from (2) that if we let t = 0, then every subspace of R^n contains the zero vector. This fact provides an easy method for disqualifying any subsets that do not contain the zero vector as subspaces. For example, a line in R^3 cannot be a subspace if it does not pass through the origin. Thus, when checking to determine if a set S is non-empty, it makes sense to first check whether 0 ∈ S.

It is easy to see that the set {0} consisting of only the zero vector in R^n is a subspace of R^n; this is called the trivial subspace. Additionally, R^n is a subspace of itself. We will see throughout the text that other subspaces arise naturally in linear algebra.

EXAMPLE 1
Show that S = { [x1; x2; x3] | x1 - x2 + x3 = 0 } is a subspace of R^3.

Solution: We observe that, by definition, S is a subset of R^3 and that 0 = [0; 0; 0] ∈ S, since taking x1 = x2 = x3 = 0 satisfies x1 - x2 + x3 = 0.

Let x = [x1; x2; x3], y = [y1; y2; y3] ∈ S. Then they must satisfy the condition of the set, so x1 - x2 + x3 = 0 and y1 - y2 + y3 = 0.

To show that S is closed under addition, we must show that x + y satisfies the condition of S. We have

x + y = [x1 + y1; x2 + y2; x3 + y3]

and

(x1 + y1) - (x2 + y2) + (x3 + y3) = (x1 - x2 + x3) + (y1 - y2 + y3) = 0 + 0 = 0

Hence, x + y ∈ S.

Similarly, for any t ∈ R, we have tx = [tx1; tx2; tx3] and

tx1 - tx2 + tx3 = t(x1 - x2 + x3) = t(0) = 0

So, S is closed under scalar multiplication. Therefore, S is a subspace of R^3.
EXAMPLE 2
Show that T = { [x1; x2] | x1 x2 = 0 } is not a subspace of R^2.

Solution: To show that T is not a subspace, we just need to give one example showing that T does not satisfy the definition of a subspace. We will show that T is not closed under addition.

Observe that x = [1; 0] and y = [0; 1] are both in T, but x + y = [1; 1] ∉ T, since 1(1) ≠ 0. Thus, T is not a subspace of R^2.
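Membership conditions like those defining S and T are easy to spot-check in code. The Python sketch below tests closure for a few sample vectors only, so it illustrates (but does not prove) that S is closed under the two operations, and it exhibits the same counterexample for T as Example 2:

```python
def in_S(v):
    """Membership in S = {x in R^3 : x1 - x2 + x3 = 0} (Example 1)."""
    x1, x2, x3 = v
    return x1 - x2 + x3 == 0

def in_T(v):
    """Membership in T = {x in R^2 : x1 * x2 = 0} (Example 2)."""
    x1, x2 = v
    return x1 * x2 == 0

# Spot-check closure of S on two sample vectors.
u, v = (1, 3, 2), (5, 5, 0)
assert in_S(u) and in_S(v)
assert in_S(tuple(a + b for a, b in zip(u, v)))   # closed under addition
assert in_S(tuple(7 * a for a in u))              # closed under scaling

# T fails closure under addition, exactly as in Example 2.
x, y = (1, 0), (0, 1)
assert in_T(x) and in_T(y)
assert not in_T((x[0] + y[0], x[1] + y[1]))
print("checks passed")
```

A single failing sum, like the one for T, disproves closure; passing spot-checks for S only suggest it, which is why the algebraic argument in Example 1 is still needed.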

EXERCISE 2
Show that S = { [x1; x2] | 2x1 = x2 } is a subspace of R^2 and that T = { [x1; x2] | x1 + x2 = 2 } is not a subspace of R^2.

Spanning Sets and Linear Independence

One of the main ways that subspaces arise is as the set of all linear combinations of some spanning set. We next present an easy theorem and a bit more vocabulary.

Theorem 2
If {v1, …, vk} is a set of vectors in R^n and S is the set of all possible linear combinations of these vectors,

S = { t1 v1 + … + tk vk | t1, …, tk ∈ R }

then S is a subspace of R^n.

Proof: By properties (1) and (6) of Theorem 1, t1 v1 + … + tk vk ∈ R^n, so S is a subset of R^n. Taking t_i = 0 for 1 ≤ i ≤ k, we get 0 = 0v1 + … + 0vk ∈ S, so S is non-empty.

Let x, y ∈ S. Then, for some real numbers s_i and t_i, 1 ≤ i ≤ k, x = s1 v1 + … + sk vk and y = t1 v1 + … + tk vk. It follows that

x + y = (s1 + t1)v1 + … + (sk + tk)vk

so x + y ∈ S since (s_i + t_i) ∈ R. Hence, S is closed under addition.

Similarly, for all t ∈ R,

tx = (ts1)v1 + … + (tsk)vk

So, S is closed under scalar multiplication. Therefore, S is a subspace of R^n. ∎

Definition (Span, Spanning Set)
If S is the subspace of R^n consisting of all linear combinations of the vectors v1, …, vk ∈ R^n, then S is called the subspace spanned by the set of vectors B = {v1, …, vk}, and we say that the set B spans S. The set B is called a spanning set for the subspace S. We denote S by

S = Span{v1, …, vk} = Span B

EXAMPLE 3
Let v ∈ R^2 with v ≠ 0 and consider the line L with vector equation x = tv, t ∈ R. Then L is the subspace spanned by {v}, and {v} is a spanning set for L. We write L = Span{v}.

Similarly, for v1, v2 ∈ R^2, the set M with vector equation x = t1 v1 + t2 v2 is a subspace of R^2 with spanning set {v1, v2}. That is, M = Span{v1, v2}.

If v ∈ R^2 with v ≠ 0, then we can guarantee that Span{v} represents a line in R^2 that passes through the origin. However, we see that the geometrical interpretation of Span{v1, v2} depends on the choices of v1 and v2. We demonstrate this with some examples.

EXAMPLE 4
The set S1 = Span{…, …} has vector equation

x = t1(…) + t2(…), t1, t2 ∈ R

Hence, S1 is a line in R^2 that passes through the origin.

The set S2 = Span{…, …}, whose second vector is -2 times its first, has vector equation

x = t1(…) + t2(…) = (t1 - 2t2)(…) = t(…)

where t = t1 - 2t2 ∈ R. Hence, S2 represents the same line as S1. That is, S2 = S1.

The set S3 = Span{…, …} has vector equation

x = t1(…) + t2(…)

Hence, S3 = R^2. That is, S3 spans the entire two-dimensional plane.

From these examples, we observe that {v1, v2} is a spanning set for R^2 if and only if neither v1 nor v2 is a scalar multiple of the other. This also means that neither vector can be the zero vector. We now look at this in R^3.
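The two-vector test just stated (is one vector a scalar multiple of the other?) has a convenient computational form in R^2: the vectors are parallel exactly when the 2-by-2 determinant v1[0]·v2[1] - v1[1]·v2[0] is zero. Determinants are treated later in the book; here the formula is used only as a shortcut, in a short Python sketch:

```python
def spans_R2(v1, v2):
    """{v1, v2} spans R^2 iff neither vector is a scalar multiple of
    the other, i.e. iff the 2x2 determinant is non-zero."""
    return v1[0] * v2[1] - v1[1] * v2[0] != 0

print(spans_R2((1, 0), (0, 1)))     # True: the standard basis spans R^2
print(spans_R2((1, 2), (-2, -4)))   # False: parallel vectors give only a line
print(spans_R2((3, 1), (0, 0)))     # False: the zero vector never helps
```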

EXAMPLE 5
The set S1 = Span{…, …, …} has vector equation

x = t1(…) + t2(…) + t3(…), t1, t2, t3 ∈ R

Hence, …

The set S2 = Span{v1, v2, v3}, writing v1, v2, v3 for its three given vectors, where v2 = v1 + v3, has vector equation

x = t1 v1 + t2 v2 + t3 v3, t1, t2, t3 ∈ R

which can be written as

x = t1 v1 + t2(v1 + v3) + t3 v3 = (t1 + t2)v1 + (t2 + t3)v3

So, S2 = Span{v1, v3}.
We extend this to the general case in R^n.

Theorem 3
Let v1, …, vk be vectors in R^n. If vk can be written as a linear combination of v1, …, v_{k-1}, then

Span{v1, …, vk} = Span{v1, …, v_{k-1}}

Proof: We are assuming that there exist t1, …, t_{k-1} ∈ R such that

vk = t1 v1 + … + t_{k-1} v_{k-1}

Let x ∈ Span{v1, …, vk}. Then, there exist s1, …, sk ∈ R such that

x = s1 v1 + … + s_{k-1} v_{k-1} + sk vk
  = s1 v1 + … + s_{k-1} v_{k-1} + sk(t1 v1 + … + t_{k-1} v_{k-1})
  = (s1 + sk t1)v1 + … + (s_{k-1} + sk t_{k-1})v_{k-1}

Thus, x ∈ Span{v1, …, v_{k-1}}. Hence, Span{v1, …, vk} ⊆ Span{v1, …, v_{k-1}}. Clearly, we have Span{v1, …, v_{k-1}} ⊆ Span{v1, …, vk}, and so

Span{v1, …, vk} = Span{v1, …, v_{k-1}}

as required. ∎

In fact, any vector that can be written as a linear combination of the other vectors in the set can be removed without changing the spanned set. It is important in linear algebra to identify when a spanning set can be simplified by removing a vector that can be written as a linear combination of the other vectors. We will call such sets linearly dependent. If a spanning set is as simple as possible, then we will call the set linearly independent. To identify whether a set is linearly dependent or linearly independent, we require a mathematical definition.

Assume that the set {v1, …, vk} is linearly dependent. Then one of the vectors, say v_i, is equal to a linear combination of some (or all) of the other vectors. Hence, we can find scalars t1, …, tk ∈ R such that

t1 v1 + … + tk vk = 0

where t_i ≠ 0. Thus, a set is linearly dependent if the equation

t1 v1 + … + tk vk = 0

has a solution where at least one of the coefficients is non-zero. On the other hand, if the set is linearly independent, then the only solution to this equation must be when all the coefficients are 0. For example, if any coefficient is non-zero, say t_i ≠ 0, then we can write

v_i = -(t1/t_i)v1 - … - (t_{i-1}/t_i)v_{i-1} - (t_{i+1}/t_i)v_{i+1} - … - (tk/t_i)vk

Thus, v_i ∈ Span{v1, …, v_{i-1}, v_{i+1}, …, vk}, and so the set can be simplified by using Theorem 3.

We make this our mathematical definition.

Definition (Linearly Dependent, Linearly Independent)
A set of vectors {v1, …, vk} is said to be linearly dependent if there exist coefficients t1, …, tk, not all zero, such that

t1 v1 + … + tk vk = 0

A set of vectors {v1, …, vk} is said to be linearly independent if the only solution to

t1 v1 + … + tk vk = 0

is t1 = t2 = … = tk = 0. This is called the trivial solution.

Theorem 4
If a set of vectors {v1, …, vk} contains the zero vector, then it is linearly dependent.

Proof: Assume v_i = 0. Then we have

0v1 + … + 0v_{i-1} + 1v_i + 0v_{i+1} + … + 0vk = 1·0 = 0

Hence, the equation 0 = t1 v1 + … + tk vk has a solution with one coefficient, t_i, that is non-zero. So, by definition, the set is linearly dependent. ∎

EXAMPLE 6
Show that the set { [7; -14; 6], [-10; 15; 15/14], [-1; 0; 3] } is linearly dependent.

Solution: We consider

t1[7; -14; 6] + t2[-10; 15; 15/14] + t3[-1; 0; 3] = [0; 0; 0]

Using operations on vectors, we get

[7t1 - 10t2 - t3; -14t1 + 15t2; 6t1 + (15/14)t2 + 3t3] = [0; 0; 0]

Since vectors are equal only if their corresponding entries are equal, this gives us three equations in three unknowns:

7t1 - 10t2 - t3 = 0
-14t1 + 15t2 = 0
6t1 + (15/14)t2 + 3t3 = 0

Solving using substitution and elimination, we find that there are in fact infinitely many possible solutions. One is t1 = 3/7, t2 = 2/5, t3 = -1. Hence, the set is linearly dependent.
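The non-trivial solution can be verified with exact rational arithmetic. In the Python sketch below, the vector entries are taken as reconstructed from the system of equations in Example 6:

```python
from fractions import Fraction as F

# The vectors of Example 6, as reconstructed from its system of equations.
v1 = (F(7), F(-14), F(6))
v2 = (F(-10), F(15), F(15, 14))
v3 = (F(-1), F(0), F(3))

t1, t2, t3 = F(3, 7), F(2, 5), F(-1)

# t1*v1 + t2*v2 + t3*v3 should be the zero vector.
combo = tuple(t1 * a + t2 * b + t3 * c for a, b, c in zip(v1, v2, v3))
assert combo == (0, 0, 0)
print("non-trivial solution:", t1, t2, t3)
```

Using `Fraction` avoids the floating-point round-off that could otherwise make an exact zero look merely "small."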

EXERCISE 3
Determine whether the set {…, …, …} is linearly dependent or linearly independent.

Remark
Observe that determining whether a set {v1, …, vk} in R^n is linearly dependent or linearly independent requires determining the solutions of the equation t1 v1 + … + tk vk = 0. However, this equation actually represents n equations (one for each entry of the vectors) in k unknowns t1, …, tk. In the next chapter, we will look at how to efficiently solve such systems of equations.

What we have derived above is that the simplest spanning set for a subspace S is one that is linearly independent. Hence, we make the following definition.

Definition (Basis)
If {v1, …, vk} is a spanning set for a subspace S of R^n and {v1, …, vk} is linearly independent, then {v1, …, vk} is called a basis for S.

EXAMPLE 7
Let v1, v2, and v3 be the three given vectors in R^3, and let S be the subspace of R^3 given by S = Span{v1, v2, v3}. Find a basis for S.

Solution: Observe that {v1, v2, v3} is linearly dependent, since the given vectors satisfy a non-trivial linear relation. In particular, we can write v3 as a linear combination of v1 and v2. Hence, by Theorem 3,

S = Span{v1, v2, v3} = Span{v1, v2}

Moreover, observe that the only solution to

t1 v1 + t2 v2 = 0

is t1 = t2 = 0, since neither v1 nor v2 is a scalar multiple of the other. Hence, {v1, v2} is linearly independent.

Therefore, {v1, v2} is linearly independent and spans S, and so it is a basis for S.

Bases (the plural of basis) will be extremely important throughout the remainder of the book. At this point, however, we just define the following very important basis.

Definition (Standard Basis for R^n)
In R^n, let e_i represent the vector whose i-th component is 1 and all other components are 0. The set {e1, …, en} is called the standard basis for R^n.

Observe that this definition matches that of the standard basis for R^2 and R^3 given in Section 1.1.

EXAMPLE 8
The standard basis for R^3 is {e1, e2, e3} = { [1; 0; 0], [0; 1; 0], [0; 0; 1] }.

It is linearly independent since the only solution to

t1 e1 + t2 e2 + t3 e3 = 0

is t1 = t2 = t3 = 0. Moreover, it is a spanning set for R^3 since every vector x = [x1; x2; x3] ∈ R^3 can be written as a linear combination of the basis vectors. In particular,

[x1; x2; x3] = x1[1; 0; 0] + x2[0; 1; 0] + x3[0; 0; 1]

Remark
Compare the result of Example 8 with the meaning of point notation P(a, b, c). When we say P(a, b, c), we mean the point P having amount a in the x-direction, amount b in the y-direction, and amount c in the z-direction. So, observe that the standard basis vectors represent our usual coordinate axes.

EXERCISE 4
State the standard basis for R^5. Prove that it is linearly independent and show that it is a spanning set for R^5.

Surfaces in Higher Dimensions


We can now extend our geometrical concepts of lines and planes to JR" for n > 3. To
2
match what we did in JR and JR3, we make the following definition.

Definition
Line in ℝⁿ

Let p, v ∈ ℝⁿ with v ≠ 0. Then we call the set with vector equation x = p + t1·v,
t1 ∈ ℝ, a line in ℝⁿ that passes through p.

Definition
Plane in ℝⁿ

Let v1, v2, p ∈ ℝⁿ, with {v1, v2} being a linearly independent set. Then the set with
vector equation x = p + t1·v1 + t2·v2, t1, t2 ∈ ℝ, is called a plane in ℝⁿ that passes
through p.

Definition
Hyperplane in ℝⁿ

Let v1, ..., v_(n-1), p ∈ ℝⁿ, with {v1, ..., v_(n-1)} being linearly independent. Then the set
with vector equation x = p + t1·v1 + ··· + t_(n-1)·v_(n-1), tᵢ ∈ ℝ, is called a hyperplane in ℝⁿ
that passes through p.

EXAMPLE 9

In ℝ⁴, the span of three vectors, Span{v1, v2, v3}, is a hyperplane provided that
{v1, v2, v3} is linearly independent in ℝ⁴.

EXAMPLE 10

Show that the span of the three given vectors, Span{v1, v2, v3}, defines a plane in ℝ³.

Solution: Observe that the set is linearly dependent, since v3 can be written as a linear
combination of v1 and v2. Hence, the simplified vector equation of the set spanned by
these vectors is

    x = t1·v1 + t2·v2,  t1, t2 ∈ ℝ

Since the set {v1, v2} is linearly independent, this is the vector equation of a plane
passing through the origin in ℝ³.
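The classification used here, counting how many of the spanning vectors are linearly independent, can be automated by computing a rank. A minimal Python sketch follows; the sample vectors are illustrative assumptions, not the entries from Example 10.

```python
def rank(rows, eps=1e-12):
    """Rank of a list of row vectors, via Gaussian elimination."""
    m = [row[:] for row in rows]
    r = 0
    for col in range(len(m[0])):
        # find a pivot row for this column at or below row r
        pivot = next((i for i in range(r, len(m)) if abs(m[i][col]) > eps), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Sample dependent set in R^3: v3 = v1 + v2.
v1, v2, v3 = [1, 0, 1], [0, 1, 1], [1, 1, 2]
assert rank([v1, v2, v3]) == 2   # two independent directions: a plane through the origin
```

Rank 1 would mean a line, and rank n − 1 in ℝⁿ would mean a hyperplane, matching the definitions above.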

PROBLEMS 1.2
Practice Problems

A1 Compute each of the following linear combinations of the given vectors in ℝ⁴.

A2 For each of the following sets, show that the set is or is not a subspace of the
appropriate ℝⁿ.
(a) {[x1, x2, x3]^T | x1 − x2 = x3}
(b) {[x1, x2, x3]^T | x1 = x3}
(c) {[x1, x2]^T | x1 + x2 = 0}
(d) {[x1, x2, x3]^T | x1·x2 = x3}

A3 Determine whether the following sets are subspaces of ℝ⁴. Explain.
(a) {x ∈ ℝ⁴ | x1 + x2 + x3 + x4 = 0}
(b) {v1}, where v1 ≠ 0
(c) {x ∈ ℝ⁴ | x1 + 2x3 = 5, x1 − 3x4 = 0}
(d) {x ∈ ℝ⁴ | x1 = x3·x4, x2 − x4 = 0}
(e) {x ∈ ℝ⁴ | 2x1 = 3x4, x2 − 5x3 = 0}
(f) {x ∈ ℝ⁴ | x1 + x2 = −x4, x3 = 2}

A4 Show that each of the following sets is linearly dependent. Do so by writing a
non-trivial linear combination of the vectors that equals the zero vector.

A5 Determine whether the following sets represent lines, planes, or hyperplanes
in ℝ⁴. Give a basis for each.

A6 Let p, d ∈ ℝⁿ. Prove that the set {p + t·d | t ∈ ℝ} is a subspace of ℝⁿ if and only
if p is a scalar multiple of d.

A7 Suppose that B = {v1, ..., vk} is a linearly independent set in ℝⁿ. Prove that any
non-empty subset of B is linearly independent.

Homework Problems

B1 Compute each of the following linear combinations of the given vectors in ℝ⁴.

B2 For each of the following sets, show that the set is or is not a subspace of the
appropriate ℝⁿ.
(a) {[x1, x2]^T | x1 + x2 = 1}
(b) {[x1, x2]^T | x1 + x2 ≥ 0}

B3 Determine whether the following sets are subspaces of ℝ⁴. Explain.
(a) {x ∈ ℝ⁴ | 2x1 − 5x4 = 7, 3x2 = 2x4}
(b) {x ∈ ℝ⁴ | 2x1² = x2² + x3²}
(c) {x ∈ ℝ⁴ | x1 + x2 + x4 = 0, 3x3 = −x4}
(d) {x ∈ ℝ⁴ | x1 + 2x3 = 0, x1 − 3x4 = 0}

B4 Show that each of the following sets is linearly dependent by writing a non-trivial
linear combination of the vectors that equals the zero vector.
B5 Determine whether the following sets represent lines, planes, or hyperplanes
in ℝ⁴. Give a basis for each.

Computer Problems

C1 Let v1 = [1.23, 4.16, −2.21, 0.34]^T, v2 = [4.21, −3.14, 0, 2.71]^T,
v3 = [−9.6, 1.01, 2.02, 1.99]^T, and v4 = [0.33, 2.12, −3.23, 0.89]^T.
Use computer software to evaluate each of the following.
(a) 3v1 − 2v2 + 5v3 − 3v4
(b) 2.4·v1 − 1.3·v2 + √2·v3 − √3·v4

Conceptual Problems

D1 Prove property (8) from Theorem 1.

D2 Prove property (9) from Theorem 1.

D3 Let U and V be subspaces of ℝⁿ.
(a) Prove that the intersection of U and V is a subspace of ℝⁿ.
(b) Give an example to show that the union of two subspaces of ℝⁿ does not have
to be a subspace of ℝⁿ.
(c) Define U + V = {u + v | u ∈ U, v ∈ V}. Prove that U + V is a subspace of ℝⁿ.

D4 Pick vectors p, v1, v2, and v3 in ℝ⁴ such that the vector equation
x = p + t1·v1 + t2·v2 + t3·v3
(a) Is a hyperplane not passing through the origin
(b) Is a plane passing through the origin
(c) Is the point (1, 3, 1, 1)
(d) Is a line passing through the origin

D5 Let B = {v1, ..., vk} be a linearly independent set of vectors in ℝⁿ. Prove that
every vector in Span B can be written as a unique linear combination of the
vectors in B.

D6 Let B = {v1, ..., vk} be a linearly independent set of vectors in ℝⁿ and let
x ∉ Span B. Prove that {v1, ..., vk, x} is linearly independent.

D7 Let v1, v2 ∈ ℝⁿ and let s and t be fixed real numbers with t ≠ 0. Prove that

    Span{v1, v2} = Span{v1, s·v1 + t·v2}

D8 Let v1, v2, v3 ∈ ℝⁿ. State whether each of the following statements is true or
false. If the statement is true, explain briefly. If the statement is false, give a
counterexample.
(a) If v2 = t·v1 for some real number t, then {v1, v2} is linearly dependent.
(b) If v1 is not a scalar multiple of v2, then {v1, v2} is linearly independent.
(c) If {v1, v2, v3} is linearly dependent, then v1 can be written as a linear
combination of v2 and v3.
(d) If v1 can be written as a linear combination of v2 and v3, then {v1, v2, v3} is
linearly dependent.
(e) {v1} is not a subspace of ℝⁿ.
(f) Span{v1} is a subspace of ℝⁿ.

1.3 Length and Dot Products


In many physical applications, we are given measurements in terms of angles and
magnitudes. We must convert this data into vectors so that we can apply the tools of
linear algebra to solve problems. For example, we may need to find a vector represent­
ing the path (and speed) of a plane flying northwest at 1300 km/h. To do this, we need
to identify the length of a vector and the angle between two vectors. In this section, we
see how we can calculate both of these quantities with the dot product operator.

Length and Dot Products in ℝ² and ℝ³
The length of a vector in ℝ² is defined by the usual distance formula (that is, the
Pythagorean Theorem), as in Figure 1.3.10.

Definition
Length in ℝ²

If x = [x1, x2]^T ∈ ℝ², its length is defined to be

    ‖x‖ = √(x1² + x2²)

Figure 1.3.10  Length in ℝ².

For vectors in ℝ³, the formula for the length can be obtained from a two-step
calculation using the formula for ℝ², as shown in Figure 1.3.11. Consider the point
X(x1, x2, x3) and let P be the point P(x1, x2, 0). Observe that OPX is a right triangle, so
that ‖OX‖² = ‖OP‖² + ‖PX‖².

Definition
Length in ℝ³

If x = [x1, x2, x3]^T ∈ ℝ³, its length is defined to be

    ‖x‖ = √(x1² + x2² + x3²)

One immediate application of this formula is to calculate the distance between two
points. In particular, if we have points P and Q, then the distance between them is the
length of the directed line segment PQ.
Figure 1.3.11  Length in ℝ³.

EXAMPLE 1

Find the distance between the points P(−1, 3, 4) and Q(2, −5, 1) in ℝ³.

Solution: We have PQ = [2 − (−1), −5 − 3, 1 − 4]^T = [3, −8, −3]^T. Hence, the distance
between the two points is

    ‖PQ‖ = √(3² + (−8)² + (−3)²) = √82
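This computation is easy to carry out with software. A short Python sketch reproducing Example 1 (the function name `distance` is ours):

```python
import math

def distance(P, Q):
    """Length of the directed line segment PQ = Q - P."""
    return math.sqrt(sum((q - p) ** 2 for p, q in zip(P, Q)))

# Example 1: P(-1, 3, 4) and Q(2, -5, 1)
d = distance([-1, 3, 4], [2, -5, 1])
assert abs(d - math.sqrt(82)) < 1e-12
```

Because the formula only sums squared coordinate differences, the same function works unchanged in ℝⁿ for any n.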

Angles and the Dot Product  Determining the angle between two vectors in
ℝ² leads to the important idea of the dot product of two vectors. Consider
Figure 1.3.12. The Law of Cosines gives

    ‖PQ‖² = ‖OP‖² + ‖OQ‖² − 2‖OP‖ ‖OQ‖ cos θ    (1.1)

Substituting OP = p = [p1, p2]^T, OQ = q = [q1, q2]^T, PQ = p − q = [p1 − q1, p2 − q2]^T
into (1.1) and simplifying gives

    p1q1 + p2q2 = ‖p‖ ‖q‖ cos θ

For vectors in ℝ³, a similar calculation gives

    p1q1 + p2q2 + p3q3 = ‖p‖ ‖q‖ cos θ

Observe that if p = q, then θ = 0 radians, and we get

    p1² + p2² + p3² = ‖p‖²
Figure 1.3.12  ‖PQ‖² = ‖OP‖² + ‖OQ‖² − 2‖OP‖ ‖OQ‖ cos θ.

This matches our definition of length in ℝ³ above. Thus, we see that the expression on
the left-hand side of these equations determines both the angle between vectors and
the length of a vector.

Thus, we define the dot product of two vectors x = [x1, x2]^T and y = [y1, y2]^T in ℝ² by

    x · y = x1y1 + x2y2

Similarly, the dot product of vectors x = [x1, x2, x3]^T and y = [y1, y2, y3]^T in ℝ³ is
defined by

    x · y = x1y1 + x2y2 + x3y3

Thus, in ℝ² and ℝ³, the cosine of the angle between vectors x and y can be calculated
by means of the formula

    cos θ = (x · y) / (‖x‖ ‖y‖)    (1.2)

where θ is always chosen to satisfy 0 ≤ θ ≤ π.

EXAMPLE 2

Find the angle in ℝ³ between v = [1, 4, −2]^T and w = [3, −1, 4]^T.

Solution: We have

    v · w = 1(3) + 4(−1) + (−2)(4) = −9
    ‖v‖ = √(1 + 16 + 4) = √21
    ‖w‖ = √(9 + 1 + 16) = √26

Hence,

    cos θ = −9 / (√21 √26) ≈ −0.38516

So θ ≈ 1.966 radians. (Note that since cos θ is negative, θ is between π/2 and π.)
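Formula (1.2) translates directly into code. A Python sketch reproducing Example 2; the helper names are our own:

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

def angle(x, y):
    """Angle between x and y in radians, using cos(theta) = (x.y)/(|x| |y|)."""
    return math.acos(dot(x, y) / (norm(x) * norm(y)))

v, w = [1, 4, -2], [3, -1, 4]
assert dot(v, w) == -9
theta = angle(v, w)
assert abs(theta - 1.966) < 1e-3   # matches the value found in Example 2
```

Since `math.acos` returns a value in [0, π], the convention 0 ≤ θ ≤ π stated after formula (1.2) is satisfied automatically.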

EXAMPLE 3

Find the angle in ℝ² between v = [1, −2]^T and w = [2, 1]^T.

Solution: We have v · w = 1(2) + (−2)(1) = 0. Hence,

    cos θ = 0 / (‖v‖ ‖w‖) = 0

Thus, θ = π/2 radians. That is, v and w are perpendicular to each other.

EXERCISE 1  Find the angle in ℝ³ between the given vectors v and w.

Length and Dot Product in ℝⁿ

We now extend everything we did in ℝ² and ℝ³ to ℝⁿ. We begin by defining the dot
product.

Definition
Dot Product

Let x = [x1, ..., xn]^T and y = [y1, ..., yn]^T be vectors in ℝⁿ. Then the dot product of
x and y is

    x · y = x1y1 + x2y2 + ··· + xnyn

Remark

The dot product is also sometimes called the scalar product or the standard inner
product.

From this definition, some important properties follow.

Theorem 1  Let x, y, z ∈ ℝⁿ and t ∈ ℝ. Then,

(1) x · x ≥ 0, and x · x = 0 if and only if x = 0
(2) x · y = y · x
(3) x · (y + z) = x · y + x · z
(4) (t·x) · y = t(x · y) = x · (t·y)

Proof: We leave the proof of these properties to the reader.


Because of property (1), we can now define the length of a vector in ℝⁿ. The word
norm is often used as a synonym for length when we are speaking of vectors.

Definition
Norm

Let x = [x1, ..., xn]^T ∈ ℝⁿ. We define the norm or length of x by

    ‖x‖ = √(x · x) = √(x1² + ··· + xn²)

EXAMPLE 4

Let x = [2, 1, 3, −1]^T and y = [1/3, −2/3, 0, −2/3]^T. Find ‖x‖ and ‖y‖.

Solution: We have

    ‖x‖ = √(2² + 1² + 3² + (−1)²) = √15

    ‖y‖ = √((1/3)² + (−2/3)² + 0² + (−2/3)²) = √(1/9 + 4/9 + 0 + 4/9) = 1
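The norms in Example 4 can be confirmed with a few lines of Python:

```python
import math

def norm(x):
    """Euclidean norm of a vector in R^n."""
    return math.sqrt(sum(xi ** 2 for xi in x))

# Example 4
assert abs(norm([2, 1, 3, -1]) - math.sqrt(15)) < 1e-12
assert abs(norm([1/3, -2/3, 0, -2/3]) - 1.0) < 1e-12   # a unit vector
```

Note that the second assertion uses a small tolerance rather than exact equality, since 1/3 is not exactly representable in floating point.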

EXERCISE 2  Let x be the given vector and let y = (1/‖x‖)·x. Determine ‖x‖ and ‖y‖.

We now give some important properties of the norm in ℝⁿ.

Theorem 2  Let x, y ∈ ℝⁿ and t ∈ ℝ. Then
(1) ‖x‖ ≥ 0, and ‖x‖ = 0 if and only if x = 0
(2) ‖t·x‖ = |t| ‖x‖
(3) |x · y| ≤ ‖x‖ ‖y‖, with equality if and only if {x, y} is linearly dependent
(4) ‖x + y‖ ≤ ‖x‖ + ‖y‖

Proof: Property (1) of Theorem 2 follows immediately from property (1) of
Theorem 1.

(2) ‖t·x‖ = √((t·x1)² + ··· + (t·xn)²) = √(t²(x1² + ··· + xn²)) = |t| ‖x‖.

(3) Suppose that {x, y} is linearly dependent. Then either y = 0 or x = t·y. If y is
the zero vector, then both sides are equal to 0. If x = t·y, then

    |x · y| = |(t·y) · y| = |t(y · y)| = |t| ‖y‖² = ‖t·y‖ ‖y‖ = ‖x‖ ‖y‖

Suppose that {x, y} is linearly independent. Then t·x + y ≠ 0 for any t ∈ ℝ.
Therefore, by property (1), we have (t·x + y) · (t·x + y) > 0 for any t ∈ ℝ. Use property
(3) of Theorem 1 to expand, and we obtain

    (x · x)t² + (2x · y)t + (y · y) > 0,  for all t ∈ ℝ    (1.3)

Note that x · x > 0 since x ≠ 0. Now, a quadratic expression At² + Bt + C with A > 0
is always positive if and only if the corresponding equation has no real roots. From
the quadratic formula, this is true if and only if B² − 4AC < 0. Thus, inequality (1.3)
implies that

    4(x · y)² − 4(x · x)(y · y) < 0

which can be simplified to the required inequality.

(4) Observe that the required statement is equivalent to

    ‖x + y‖² ≤ (‖x‖ + ‖y‖)²

The squared form will allow us to use the dot product conveniently. Thus, we consider

    ‖x + y‖² − (‖x‖ + ‖y‖)² = (x + y) · (x + y) − (‖x‖² + 2‖x‖ ‖y‖ + ‖y‖²)
                            = x · x + x · y + y · x + y · y − (x · x + 2‖x‖ ‖y‖ + y · y)
                            = 2x · y − 2‖x‖ ‖y‖
                            ≤ 0 by (3)  ∎

Remark
Property (3) is called the Cauchy-Schwarz Inequality (or the Cauchy-Schwarz-Buniakowski
Inequality). Property (4) is the Triangle Inequality.

EXERCISE 3  Prove that the vector x̂ = (1/‖x‖)·x is parallel to x and satisfies ‖x̂‖ = 1.

Definition
Unit Vector

A vector x ∈ ℝⁿ such that ‖x‖ = 1 is called a unit vector.

We will see that unit vectors can be very useful. We often want to find a unit vector
that has the same direction as a given vector x. Using the result in Exercise 3, we see
that this is the vector

    x̂ = (1/‖x‖)·x

We could now define the angle between vectors x and y in ℝⁿ by matching equation
(1.2). However, in linear algebra we are generally interested only in whether two
vectors are perpendicular. To agree with the result of Example 3, we make the
following definition.

Definition
Orthogonal

Two vectors x and y in ℝⁿ are orthogonal to each other if and only if x · y = 0.

Notice that this definition implies that 0 is orthogonal to every vector in ℝⁿ.

EXAMPLE 5

Let v = [1, 0, 3, −2]^T, w = [2, 3, 0, 1]^T, and z = [−1, −1, 1, 2]^T. Show that v is
orthogonal to w but v is not orthogonal to z.

Solution: We have v · w = 1(2) + 0(3) + 3(0) + (−2)(1) = 0, so they are orthogonal.

v · z = 1(−1) + 0(−1) + 3(1) + (−2)(2) = −2, so they are not orthogonal.
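Orthogonality is a one-line test with the dot product. A Python check of Example 5:

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

v = [1, 0, 3, -2]
w = [2, 3, 0, 1]
z = [-1, -1, 1, 2]

assert dot(v, w) == 0    # v and w are orthogonal
assert dot(v, z) == -2   # v and z are not orthogonal
```

With integer entries the test is exact; with floating-point entries one would instead check that the dot product is smaller than a tolerance.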

The Scalar Equation of Planes and Hyperplanes

We saw in Section 1.2 that a plane can be described by the vector equation
x = p + t1·v1 + t2·v2, t1, t2 ∈ ℝ, where {v1, v2} is linearly independent. In many
problems, it is more useful to have a scalar equation that represents the plane. We now
look at how to use the dot product to find such an equation.

Suppose that we want to find the equation of a plane that passes through the point
P(p1, p2, p3). Suppose that we can find a vector n = [n1, n2, n3]^T, called the normal
vector of the plane, that is orthogonal to any directed line segment PQ lying in the
plane. (That is, n is orthogonal to PQ for any point Q in the plane; see Figure 1.3.13.)
To find the equation of this plane, let X(x1, x2, x3) be any point on the plane. Then n is
orthogonal to PX, so

    n · (x − p) = 0

This equation, which must be satisfied by the coordinates of a point X in the plane, can
be written in the form

    n1x1 + n2x2 + n3x3 = n · p

This is the standard equation of this plane. For computational purposes, the form
n · (x − p) = 0 is often easiest to use.

Figure 1.3.13  The normal n is orthogonal to every directed line segment lying in the plane.

EXAMPLE 6

Find the scalar equation of the plane that passes through the point P(2, 3, −1) and has
normal vector n = [1, −4, 1]^T.

Solution: The equation is

    n · (x − p) = [1, −4, 1]^T · [x1 − 2, x2 − 3, x3 + 1]^T = 0

or

    x1 − 4x2 + x3 = 1(2) + (−4)(3) + 1(−1) = −11

It is important to note that we can reverse the reasoning that leads to the equation
of the plane in order to identify the set of points that satisfies an equation of the form
n1x1 + n2x2 + n3x3 = d. If n1 ≠ 0, this can be written as

    n1(x1 − d/n1) + n2(x2 − 0) + n3(x3 − 0) = 0

where n = [n1, n2, n3]^T. This equation describes a plane through the point P(d/n1, 0, 0)
with normal vector n. If n2 ≠ 0, we could combine the d with the x2 term and find that
the plane passes through the point P(0, d/n2, 0). In fact, we could find a point in the
plane in many ways, but the normal vector will always be a non-zero scalar multiple
of n.

EXAMPLE 7

Describe the set of points in ℝ³ that satisfies 5x1 − 6x2 + 7x3 = 11.

Solution: We wish to rewrite the equation in the form n · (x − p) = 0. Using our work
above, we get

    5(x1 − 11/5) − 6(x2 − 0) + 7(x3 − 0) = 0

Thus, we identify the set as a plane with normal vector n = [5, −6, 7]^T passing through
the point (11/5, 0, 0). Alternatively, if x1 = x3 = 0, we find that x2 = −11/6, so the plane
passes through (0, −11/6, 0).

Two planes are defined to be parallel if the normal vector to one plane is a non-zero
scalar multiple of the normal vector of the other plane. Thus, for example, the
plane x1 + 2x2 − x3 = 1 is parallel to the plane 2x1 + 4x2 − 2x3 = 7.

Two planes are orthogonal to each other if their normal vectors are orthogonal.
For example, the plane x1 + x2 + x3 = 0 is orthogonal to the plane x1 + x2 − 2x3 = 0
since

    [1, 1, 1]^T · [1, 1, −2]^T = 1(1) + 1(1) + 1(−2) = 0

EXAMPLE 8  Find a scalar equation of the plane that contains the point P(2, 4, −1) and is parallel to
the plane 2x1 + 3x2 - 5x3 = 6.
Solution: An equation of the plane must be of the form 2x1 + 3x2 - 5x3 = d since the
planes are parallel. The plane must pass through P, so we find that a scalar equation of
the plane is
2x1 + 3x2 - 5x3 = 2(2) + 3(4) + (-5)(-1) = 21

EXAMPLE 9

Find a scalar equation of a plane that contains the point P(3, −1, 3) and is orthogonal
to the plane x1 − 2x2 + 4x3 = 2.

Solution: The normal vector of our plane must be orthogonal to [1, −2, 4]^T, so we pick
[0, 2, 1]^T. Thus, an equation of this plane must be of the form 2x2 + x3 = d. Since the
plane must pass through P, we find that a scalar equation of this plane is

    2x2 + x3 = 0(3) + 2(−1) + (1)(3) = 1

EXERCISE 4  Find a scalar equation of the plane that contains the point P(1, 2, 3) and is parallel to
the plane x1 − 3x2 − 2x3 = −5.

The Scalar Equation of a Hyperplane  Repeating what we did above, we
can find the scalar equation of a hyperplane in ℝⁿ. In particular, if we have a vector
m that is orthogonal to any directed line segment PQ lying in the hyperplane, then for
any point X(x1, ..., xn) in the hyperplane, we have

    0 = m · PX

As before, we can rearrange this as

    0 = m · (x − p)
    0 = m · x − m · p
    m · x = m · p
    m1x1 + ··· + mnxn = m · p

Thus, we see that a single scalar equation in ℝⁿ represents a hyperplane in ℝⁿ.

EXAMPLE 10

Find the scalar equation of the hyperplane in ℝ⁴ that has normal vector
m = [2, 3, −2, 1]^T and passes through the point P(1, 0, 2, −1).

Solution: The equation is

    2x1 + 3x2 − 2x3 + x4 = 2(1) + 3(0) + (−2)(2) + 1(−1) = −3
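The scalar equation of a plane or hyperplane through p with normal n only requires computing the constant d = n · p. A Python sketch checking Examples 6 and 10; the function name is ours:

```python
def hyperplane_constant(n, p):
    """Right-hand side d of n1*x1 + ... + nk*xk = d for the hyperplane
    through point p with normal vector n, namely d = n . p."""
    return sum(ni * pi for ni, pi in zip(n, p))

# Example 10: normal [2, 3, -2, 1], point P(1, 0, 2, -1)
assert hyperplane_constant([2, 3, -2, 1], [1, 0, 2, -1]) == -3

# Example 6: normal [1, -4, 1], point P(2, 3, -1)
assert hyperplane_constant([1, -4, 1], [2, 3, -1]) == -11
```

The left-hand side coefficients are just the components of n, so the full equation can be read off once d is known.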

PROBLEMS 1.3
Practice Problems

A1 Calculate the lengths of the given vectors.

A2 Find a unit vector in the direction of each of the given vectors.

A3 Determine the distance from P to Q for
(a) P(2, 3) and Q(−4, 1)
(b) P(1, 1, −2) and Q(−3, 1, 1)
(c) P(4, −6, 1) and Q(−3, 5, 1)
(d) P(2, 1, 1, 5) and Q(4, 6, −2, 1)

A4 Verify the triangle inequality and the Cauchy-Schwarz inequality for each of the
given pairs of vectors x and y.

A5 Determine whether each pair of given vectors is orthogonal.

A6 Determine all values of k for which each pair of given vectors is orthogonal.

A7 Find a scalar equation of the plane that contains the given point with the given
normal vector, for
(a) P(−1, 2, −3)
(b) P(2, 5, 4)
(c) P(1, −1, 1)
(d) P(2, 1, 1)

A8 Determine a scalar equation of the hyperplane that passes through the given point
with the given normal vector, for
(a) P(1, 1, −1)
(b) P(2, −2, 0, 1)
(c) P(0, 0, 0, 0)
(d) P(1, 0, 1, 2, 1)

A9 Determine a normal vector for the hyperplane.
(a) 2x1 + x2 = 3 in ℝ²
(b) 3x1 − 2x2 + 3x3 = 7 in ℝ³
(c) −4x1 + 3x2 − 5x3 − 6 = 0 in ℝ³
(d) x1 − x2 + 2x3 − 3x4 = 5 in ℝ⁴
(e) x1 + x2 − x3 + 2x4 − x5 = 0 in ℝ⁵

A10 Find an equation for the plane through the given point and parallel to the given
plane.
(a) P(1, −3, −1), 2x1 − 3x2 + 5x3 = 17
(b) P(0, −2, 4), x2 = 0
(c) P(1, 2, 1), x1 − x2 + 3x3 = 5

A11 Consider the statement "If u · v = u · w, then v = w."
(a) If the statement is true, prove it. If it is false, provide a counterexample.
(b) If we specify u ≠ 0, does this change the result?
Homework Problems

B1 Calculate the lengths of the given vectors.

B2 Find a unit vector in the direction of each of the given vectors.

B3 Determine the distance from P to Q for
(a) P(1, −3) and Q(2, 3)
(b) P(−2, −2, 5) and Q(−4, 1, 4)
(c) P(3, 1, −3) and Q(−1, 4, 5)
(d) P(5, −2, −3, 6) and Q(2, 5, −4, 3)

B4 Verify the triangle inequality and the Cauchy-Schwarz inequality for each of the
given pairs of vectors x and y.

B5 Determine whether each pair of given vectors is orthogonal.

B6 Determine all values of k for which each pair of given vectors is orthogonal.

B7 Find a scalar equation of the plane that contains the given point with the given
normal vector, for
(a) P(−3, −3, 1)
(b) P(6, −2, 5)
(c) P(0, 0, 0)
(d) P(1, 1, 1)

B8 Determine a scalar equation of the hyperplane that passes through the given point
with the given normal vector, for
(a) P(2, 1, 1, 5)
(b) P(3, 1, 0, 7)
(c) P(0, 0, 0, 0)
(d) P(1, 2, 0, 1, 1)

B9 Determine a normal vector for the given hyperplane.
(a) 2x1 + 3x2 = 0 in ℝ²
(b) −x1 − 2x2 + 5x3 = 7 in ℝ³
(c) x1 + 4x2 − x4 = 2 in ℝ⁴
(d) x1 + x2 + x3 − 2x4 = 5 in ℝ⁴
(e) x1 − x5 = 0 in ℝ⁵
(f) x1 + x2 − 2x3 + x4 − x5 = 1 in ℝ⁵

B10 Find an equation for the plane through the given point and parallel to the given
plane.
(a) P(3, −1, 7), 5x1 − x2 − 2x3 = 6
(b) P(−1, 2, −5), 2x2 + 3x3 = 7
(c) P(2, 1, 1), 3x1 − 2x2 + 3x3 = 7
(d) P(1, 0, 1), x1 − 5x2 + 3x3 = 0

Computer Problems

C1 Let v1 = [1.12, 2.10, 7.03, 4.15, 6.13]^T, v2 = [1.00, 3.12, −0.45, −2.21, 2.00]^T,
v3 = [−3.13, 1.21, 3.31, 1.14, −0.01]^T, and v4 = [1.12, 0, 2.13, 3.15, 3.40]^T.
Use computer software to evaluate each of the following.
(a) v1 · v2
(b) ‖v3‖
(c) v2 · v4
(d) ‖v2 + v4‖²
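A Python sketch for C1, assuming the entries of v1 through v4 as recovered from the column layout above (check them against your copy of the text):

```python
import math

v1 = [1.12, 2.10, 7.03, 4.15, 6.13]
v2 = [1.00, 3.12, -0.45, -2.21, 2.00]
v3 = [-3.13, 1.21, 3.31, 1.14, -0.01]
v4 = [1.12, 0.0, 2.13, 3.15, 3.40]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

ans_a = dot(v1, v2)                                  # (a)
ans_b = norm(v3)                                     # (b)
ans_c = dot(v2, v4)                                  # (c)
ans_d = norm([a + b for a, b in zip(v2, v4)]) ** 2   # (d)
```

A useful sanity check on the recovered entries: v2 · v4 comes out to 0 (up to rounding), i.e. v2 and v4 are orthogonal.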
Conceptual Problems

D1 (a) Using geometrical arguments, what can you say about the vectors p, n, and d
if the line with vector equation x = p + t·d and the plane with scalar equation
n · x = k have no point of intersection?
(b) Confirm your answer in part (a) by determining when it is possible to find a
value of the parameter t that gives a point of intersection.

D2 Prove that, as a consequence of the triangle inequality, | ‖x‖ − ‖y‖ | ≤ ‖x − y‖.
(Hint: ‖x‖ = ‖x − y + y‖.)

D3 Let v1 and v2 be orthogonal vectors in ℝⁿ. Prove that ‖v1 + v2‖² = ‖v1‖² + ‖v2‖².

D4 Determine the equation of the set of points in ℝ³ that are equidistant from points
P and Q. Explain why the set is a plane and determine its normal vector.

D5 Find the scalar equation of the plane such that each point of the plane is
equidistant from the points P(2, 2, 5) and Q(−3, 4, 1) in two ways.
(a) Write and simplify the equation ‖PX‖ = ‖QX‖.
(b) Determine a point on the plane and the normal vector by geometrical
arguments.

D6 Let n ∈ ℝⁿ. Prove that the set of all vectors orthogonal to n is a subspace of ℝⁿ.

D7 Let S be any set of vectors in ℝⁿ. Let S⊥ be the set of all vectors that are
orthogonal to every vector in S. That is,

    S⊥ = {w ∈ ℝⁿ | v · w = 0 for all v ∈ S}

Show that S⊥ is a subspace of ℝⁿ.

D8 Let {v1, ..., vk} be a set of non-zero vectors in ℝⁿ such that all of the vectors are
mutually orthogonal. That is, vᵢ · vⱼ = 0 for all i ≠ j. Prove that {v1, ..., vk} is
linearly independent.

D9 (a) Let n be a unit vector in ℝ³. Let α be the angle between n and the x1-axis, let
β be the angle between n and the x2-axis, and let γ be the angle between n and
the x3-axis. Explain why

    n = [cos α, cos β, cos γ]^T

(Hint: Take the dot product of n with the standard basis vectors.) Because of
this equation, the components n1, n2, n3 are sometimes called the direction
cosines.
(b) Explain why cos²α + cos²β + cos²γ = 1.
(c) Give a two-dimensional version of the direction cosines and explain the
connection to the identity cos²θ + sin²θ = 1.

1.4 Projections and Minimum Distance


The idea of a projection is one of the most important applications of the dot product.
Suppose that we want to know how much of a given vector y is in the direction of some
other given vector x (see Figure 1.4.14). In elementary physics, this is exactly what is
required when a force is "resolved" into its components along certain directions (for
example, into its vertical and horizontal components). When we define projections, it
is helpful to think of examples in two or three dimensions, but the ideas do not really
depend on whether the vectors are in ℝ², ℝ³, or ℝⁿ.

Projections
First, let us consider the case where x = e1 in ℝ². How much of an arbitrary vector
y = [y1, y2]^T points along x? Clearly, the part of y that is in the direction of x is
[y1, 0]^T = y1·e1 = (y · x)·x. This will be called the projection of y onto x and is
denoted projₓ y.
Figure 1.4.14  projₓ y is a vector in the direction of x.

Next, consider the case where x ∈ ℝ² has arbitrary direction and is a unit vector.
First, draw the line through the origin with direction vector x. Now, draw the line
perpendicular to this line that passes through the point (y1, y2). This forms a right
triangle, as in Figure 1.4.14. The projection of y onto x is the portion of the triangle
that lies on the line with direction x. Thus, the resulting projection is a scalar multiple
of x; that is, projₓ y = k·x. We need to determine the value of k. To do this, let z denote
the vector from projₓ y to y. Then, by definition, z is orthogonal to x, and we can write

    y = z + projₓ y = z + k·x

We now employ a very useful and common trick: take the dot product of y with x. This
gives

    y · x = (z + k·x) · x = z · x + (k·x) · x = 0 + k(x · x) = k‖x‖² = k

since x is a unit vector. Thus,

    projₓ y = (y · x)·x

EXAMPLE 1

Find the projection of u = [−3, 1]^T onto the unit vector v = [1/√2, 1/√2]^T.

Solution: We have

    proj_v u = (u · v)·v = (−3/√2 + 1/√2)·[1/√2, 1/√2]^T
             = (−2/√2)·[1/√2, 1/√2]^T = [−1, −1]^T

If x ∈ ℝ² is an arbitrary non-zero vector, then the unit vector in the direction of x
is x̂ = (1/‖x‖)·x. Hence, we find that the projection of y onto x is

    projₓ y = proj_x̂ y = (y · x̂)·x̂ = (y · (x/‖x‖))·(x/‖x‖) = ((y · x)/‖x‖²)·x

To match this result, we make the following definition for vectors in ℝⁿ.

Definition
Projection

For any vectors y, x in ℝⁿ, with x ≠ 0, we define the projection of y onto x by

    projₓ y = ((y · x)/‖x‖²)·x

EXAMPLE 2

Let v = [4, 3, −1]^T and u = [−2, 5, 3]^T. Determine proj_v u and proj_u v.

Solution: We have

    proj_v u = ((u · v)/‖v‖²)·v = (((−2)(4) + 5(3) + 3(−1))/(4² + 3² + (−1)²))·[4, 3, −1]^T
             = (4/26)·[4, 3, −1]^T = [8/13, 6/13, −2/13]^T

    proj_u v = ((v · u)/‖u‖²)·u = (((4)(−2) + 3(5) + (−1)(3))/((−2)² + 5² + 3²))·[−2, 5, 3]^T
             = (4/38)·[−2, 5, 3]^T = [−4/19, 10/19, 6/19]^T
Remarks

1. This example illustrates that, in general, projₓ y ≠ proj_y x. Of course, we
should not expect equality because projₓ y is in the direction of x, whereas
proj_y x is in the direction of y.

2. Observe that for any x ∈ ℝⁿ, we can consider projₓ a function whose domain
and codomain are ℝⁿ. To indicate this, we can write projₓ : ℝⁿ → ℝⁿ. Since the
output of this function is a vector, we call it a vector-valued function.

EXAMPLE 3

Let v = [1/√3, 1/√3, 1/√3]^T and let u be the given vector, for which u · v = 6/√3.
Find proj_v u.

Solution: Since ‖v‖ = 1, we get

    proj_v u = ((u · v)/‖v‖²)·v = (u · v)·v = (6/√3)·[1/√3, 1/√3, 1/√3]^T = [2, 2, 2]^T

EXERCISE 1  Let v and u be the given vectors. Determine proj_v u and proj_u v.

The Perpendicular Part

When you resolve a force in physics, you often want not only the component of the
force in the direction of a given vector x, but also the component of the force
perpendicular to x.

We begin by restating the problem. In ℝⁿ, given a fixed vector x and any other y,
express y as the sum of a vector parallel to x and a vector orthogonal to x. That is,
write y = w + z, where w = c·x for some c ∈ ℝ and z · x = 0.

We use the same trick we did in ℝ². Taking the dot product of x and y gives

    y · x = (z + w) · x = z · x + (c·x) · x = 0 + c(x · x) = c‖x‖²

Therefore, c = (y · x)/‖x‖², so in fact, w = c·x = projₓ y, as we might have expected.
One bonus of approaching the problem this way is that it is now clear that this is the
only way to choose w to satisfy the problem.

Next, since y = projₓ y + z, it follows that z = y − projₓ y. Is this z really orthogonal
to x? To check, calculate

    z · x = (y − projₓ y) · x
          = y · x − ((y · x)/‖x‖²)(x · x)
          = y · x − ((y · x)/‖x‖²)‖x‖²
          = y · x − y · x = 0

So, z is orthogonal to x, as required. Since it is often useful to construct a vector z in
this way, we introduce a name for it.

Definition
Perpendicular of a Projection

For any vectors x, y ∈ ℝⁿ, with x ≠ 0, define the projection of y perpendicular to x
to be

    perpₓ y = y − projₓ y

Notice that perpₓ y is again a vector-valued function on ℝⁿ. Also observe that
y = projₓ y + perpₓ y. See Figure 1.4.15.

Figure 1.4.15  perpₓ y is perpendicular to x, and projₓ y + perpₓ y = y.

EXAMPLE 4

Let v = [2, 1, 1]^T and u = [1, 5, 1]^T. Determine proj_v u and perp_v u.

Solution: We have

    proj_v u = ((u · v)/‖v‖²)·v = (8/6)·[2, 1, 1]^T = [8/3, 4/3, 4/3]^T

    perp_v u = u − proj_v u = [1, 5, 1]^T − [8/3, 4/3, 4/3]^T = [−5/3, 11/3, −1/3]^T

EXERCISE 2  Let v and u be the given vectors in ℝ³. Determine proj_v u and perp_v u.
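Both proj and perp are easy to implement directly from their definitions. A Python sketch, using sample vectors v = [2, 1, 1]^T and u = [1, 5, 1]^T as in Example 4:

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def proj(y, x):
    """Projection of y onto x: ((y . x)/||x||^2) x."""
    c = dot(y, x) / dot(x, x)
    return [c * xi for xi in x]

def perp(y, x):
    """Part of y perpendicular to x: y - proj_x y."""
    return [yi - pi for yi, pi in zip(y, proj(y, x))]

v, u = [2, 1, 1], [1, 5, 1]
p, q = proj(u, v), perp(u, v)
assert all(abs(a - b) < 1e-12 for a, b in zip(p, [8/3, 4/3, 4/3]))
assert all(abs(a - b) < 1e-12 for a, b in zip(q, [-5/3, 11/3, -1/3]))
assert abs(dot(q, v)) < 1e-12   # perp_v u is orthogonal to v
```

The final assertion is exactly the check recommended in the exercises: the perpendicular part must have zero dot product with v.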

Some Properties of Projections

Projections will appear several times in this book, and some of their special properties
are important. Two of them are called the linearity properties:

(L1) projₓ(y + z) = projₓ y + projₓ z for all y, z ∈ ℝⁿ

(L2) projₓ(t·y) = t·projₓ y for all y ∈ ℝⁿ and all t ∈ ℝ

EXERCISE 3  Verify that properties (L1) and (L2) are true.

It follows that perpₓ also satisfies the corresponding equations. We shall see that
projₓ and perpₓ are just two cases amongst the many functions that satisfy the linearity
properties.

projₓ and perpₓ also have a special property called the projection property. We
write it here for projₓ, but it is also true for perpₓ:

    projₓ(projₓ y) = projₓ y, for all y in ℝⁿ
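The linearity properties and the projection property can be spot-checked numerically on random vectors. A minimal Python sketch (not a proof, of course, which is what Exercise 3 asks for):

```python
import random

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def proj(y, x):
    c = dot(y, x) / dot(x, x)
    return [c * xi for xi in x]

def close(a, b, eps=1e-9):
    return all(abs(p - q) < eps for p, q in zip(a, b))

random.seed(0)
x = [random.uniform(-5, 5) for _ in range(4)]
y = [random.uniform(-5, 5) for _ in range(4)]
z = [random.uniform(-5, 5) for _ in range(4)]
t = 2.7

# (L1) additivity
assert close(proj([a + b for a, b in zip(y, z)], x),
             [a + b for a, b in zip(proj(y, x), proj(z, x))])
# (L2) homogeneity
assert close(proj([t * a for a in y], x), [t * a for a in proj(y, x)])
# projection property: proj_x is idempotent
assert close(proj(proj(y, x), x), proj(y, x))
```

Idempotence makes geometric sense: projₓ y already lies on the line through x, so projecting it again changes nothing.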

Minimum Distance
What is the distance from a point Q(q1, q2) to the line with vector equation x = p + t·d?
In this and similar problems, distance always means the minimum distance.
Geometrically, we see that the minimum distance is found along a line segment
perpendicular to the given line, through a point P on the line. A formal proof that
minimum distance requires perpendicularity can be given by using Pythagoras'
Theorem. (See Problem D3.)

To answer the question, take any point on the line x = p + t·d. The obvious choice
is P(p1, p2) corresponding to p. From Figure 1.4.16, we see that the required distance
is the length of perp_d PQ.

Figure 1.4.16  The distance from Q to the line x = p + t·d is ‖perp_d PQ‖. (The line
shown has equation x = (1, 2) + t(−1, 1), with direction d = (−1, 1).)

EXAMPLE 5

Find the distance from Q(4, 3) to the line x = [1, 2]^T + t·[−1, 1]^T, t ∈ ℝ.

Solution: We pick the point P(1, 2) on the line. Then

    PQ = [4 − 1, 3 − 2]^T = [3, 1]^T

so the distance is

    ‖perp_d PQ‖ = ‖PQ − proj_d PQ‖
                = ‖[3, 1]^T − ((−3 + 1)/2)·[−1, 1]^T‖
                = ‖[3, 1]^T − [1, −1]^T‖
                = ‖[2, 2]^T‖ = 2√2
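The computation of Example 5 generalizes directly to a small function. A Python sketch; the function name is ours:

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def dist_point_line(Q, P, d):
    """Distance from Q to the line x = P + t d, computed as ||perp_d PQ||."""
    PQ = [q - p for p, q in zip(P, Q)]
    c = dot(PQ, d) / dot(d, d)                      # coefficient of proj_d PQ
    perp = [w - c * di for w, di in zip(PQ, d)]     # PQ - proj_d PQ
    return math.sqrt(dot(perp, perp))

# Example 5: Q(4, 3), line x = (1, 2) + t(-1, 1)
assert abs(dist_point_line([4, 3], [1, 2], [-1, 1]) - 2 * math.sqrt(2)) < 1e-12
```

Nothing in the function is specific to ℝ², so it also computes point-to-line distances in ℝ³ or ℝⁿ.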

Notice that in this problem and similar problems, we take advantage of the fact
that the direction vector d can be thought of as "starting" at any point. When perp_d PQ
is calculated, both vectors are "located" at point P. When projections were originally
defined, it was implicitly assumed that all vectors were located at the origin. Now, it
is apparent that the definitions make sense as long as all vectors in the calculation are
located at the same point.

We now want to look at the similar problem of finding the distance from a point
Q(q1, q2, q3) to a plane in ℝ³ with normal vector n. If P is any point in the plane, then
proj_n PQ is the directed line segment from the plane to the point Q that is perpendicular
to the plane. Hence, ‖proj_n PQ‖ is the distance from Q to the plane. Moreover, perp_n PQ
is a directed line segment lying in the plane. In particular, it is the projection of
PQ onto the plane. See Figure 1.4.17.

Figure 1.4.17  proj_n PQ and perp_n PQ, where n is normal to the plane.

EXAMPLE 6
What is the distance from Q(q1, q2, q3) to a plane in ℝ³ with equation n1x1 + n2x2 + n3x3 = d?
Solution: Assuming that n1 ≠ 0, we pick P(d/n1, 0, 0) to be our point in the plane. Then PQ = [q1 − d/n1, q2, q3], and the distance is
‖proj_n PQ‖ = ‖((PQ · n)/‖n‖²) n‖
= (|PQ · n|/‖n‖²) ‖n‖
= |PQ · n|/‖n‖
= |(q1 − d/n1)n1 + q2n2 + q3n3| / √(n1² + n2² + n3²)
= |q1n1 + q2n2 + q3n3 − d| / √(n1² + n2² + n3²)

This is a standard formula for this distance problem. However, the lengths of projections along or perpendicular to a suitable vector can be used for all of these problems. It is better to learn to use this powerful and versatile idea, as illustrated in the problems above, than to memorize complicated formulas.
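To illustrate the point, the projection idea condenses to a few lines of code. This Python sketch is our own (the function name is invented); it computes the distance from a point q to the plane n · x = d:

```python
def plane_distance(q, n, d):
    # Length of the projection of PQ onto the normal n; for any choice of
    # P on the plane this simplifies to |n . q - d| / ||n||.
    num = abs(sum(ni * qi for ni, qi in zip(n, q)) - d)
    return num / sum(ni * ni for ni in n) ** 0.5

# Point Q(2, 1, 1) and plane x1 - 2x2 + 2x3 = 5: |2 - 2 + 2 - 5| / 3 = 1.
dist = plane_distance((2, 1, 1), (1, -2, 2), 5)
```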

Finding the Nearest Point  In some applications, we need to determine the point in the plane that is nearest to the point Q. Let us call this point R, as in Figure 1.4.17. Then we can determine R by observing that
OR = OP + PR = OP + perp_n PQ
However, we get an easier calculation if we observe from the figure that
OR = OQ + QR = OQ + proj_n QP
Notice that we need QP here instead of PQ. Problem D4 asks you to check that these two calculations of OR are consistent.
If the plane in this problem passes through the origin, then we may take P = O, and the point in the plane that is closest to Q is given by perp_n q.

EXAMPLE 7
Find the point on the plane x1 − 2x2 + 2x3 = 5 that is closest to the point Q(2, 1, 1).
Solution: We pick P(1, −1, 1) to be the point on the plane. Then QP = [−1, −2, 0], and we find that the point on the plane closest to Q is
OR = OQ + proj_n QP = [2, 1, 1] + ((QP · n)/‖n‖²) n = [2, 1, 1] + (3/9)[1, −2, 2] = [7/3, 1/3, 5/3]
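The nearest-point calculation OR = OQ + proj_n QP can be sketched in code. The following Python routine is our own illustration (the function name is invented, and exact rational arithmetic is used so the answer is not clouded by rounding):

```python
from fractions import Fraction

def nearest_point_on_plane(q, n, d):
    # R = Q + proj_n(QP) for any point P on the plane; since n . p = d,
    # the projection coefficient is (d - n . q) / ||n||^2.
    t = Fraction(d - sum(ni * qi for ni, qi in zip(n, q)),
                 sum(ni * ni for ni in n))
    return [qi + t * ni for qi, ni in zip(q, n)]

# Example 7: plane x1 - 2x2 + 2x3 = 5 and Q(2, 1, 1).
R = nearest_point_on_plane((2, 1, 1), (1, -2, 2), 5)   # [7/3, 1/3, 5/3]
```

A quick substitution confirms that the returned point satisfies the plane equation exactly.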

PROBLEMS 1.4
Practice Problems
A1 For each given pair of vectors v and u, determine proj_v u and perp_v u. Check your results by verifying that proj_v u + perp_v u = u and v · perp_v u = 0.
(a) v = [...], u = [...]
(b) v = [...], u = [...]
(c) v = [...], u = [...]
(d) v = [...], u = [...]
(e) v = [...], u = [...]
(f) v = [...], u = [...]
A2 Determine proj_v u and perp_v u where
(a) v = [...], u = [...]
(b) v = [...], u = [...]
(c) v = [...], u = [...]
(d) v = [...], u = [...]
(e) v = [...], u = [...]

A3 Consider the force represented by the vector F = [...] and let u = [...].
(a) Determine a unit vector in the direction of u.
(b) Find the projection of F onto u.
(c) Determine the projection of F perpendicular to u.
A4 Follow the same instructions as in Problem A3 but with F = [...] and u = [...].
A5 For the given point and line, find by projection the point on the line that is closest to the given point, and use perp to find the distance from the point to the line.
(a) Q(0, 0), line x = [...] + t[...], t ∈ ℝ
(b) Q(2, 5), line x = [...] + t[...], t ∈ ℝ
(c) Q(1, 0, 1), line x = [...] + t[...], t ∈ ℝ
(d) Q(2, 3, 2), line x = [...] + t[...], t ∈ ℝ
A6 Use a projection to find the distance from the point to the plane.
(a) Q(2, 3, 1), plane 3x1 − x2 + 4x3 = 5
(b) Q(−2, 3, −1), plane 2x1 − 3x2 − 5x3 = 5
(c) Q(0, 2, −1), plane 2x1 − x3 = 5
(d) Q(−1, −1, 1), plane 2x1 − x2 − x3 = 4
A7 For the given point and hyperplane in ℝ⁴, determine by a projection the point in the hyperplane that is closest to the given point.
(a) Q(1, 0, 0, 1), hyperplane 2x1 − x2 + x3 + x4 = 0
(b) Q(1, 2, 1, 3), hyperplane x1 − 2x2 + 3x3 = 1
(c) Q(2, 4, 3, 4), hyperplane 3x1 − x2 + 4x3 + x4 = 0
(d) Q(−1, 3, 2, −2), hyperplane x1 + 2x2 + x3 − x4 = 4

Homework Problems

B1 For each given pair of vectors v and u, determine proj_v u and perp_v u. Check your results by verifying that proj_v u + perp_v u = u and v · perp_v u = 0.
(a) v = [...], u = [...]
(b) v = [...], u = [...]
(c) v = [...], u = [...]
(d) v = [...], u = [...]
(e) v = [...], u = [...]
(f) v = [...], u = [...]
B2 Determine proj_v u and perp_v u where
(a) v = [...], u = [...]
(b) v = [...], u = [...]
(c) v = [...], u = [...]
(d) v = [...], u = [...]
(e) v = [...], u = [...]

B3 Consider the force represented by the vector F = [...] and let u = [...].
(a) Determine a unit vector in the direction of u.
(b) Find the projection of F onto u.
(c) Determine the projection of F perpendicular to u.
B4 Follow the same instructions as in Problem B3 but with F = [...] and u = [...].
B5 For the given point and line, find by projection the point on the line that is closest to the given point, and use perp to find the distance from the point to the line.
(a) Q(3, −5), line x = [...] + t[...], t ∈ ℝ
(b) Q(0, 0, 2), line x = [...] + t[...], t ∈ ℝ
(c) Q(1, −1, 0), line x = [...] + t[...], t ∈ ℝ
(d) Q(−1, 2, 3), line x = [...] + t[...], t ∈ ℝ
B6 Use a projection to find the distance from the point to the plane.
(a) Q(...), plane [...]
(b) Q(...), plane [...]
(c) Q(...), plane [...]
(d) Q(...), plane [...]
B7 For the given point and hyperplane in ℝ⁴, determine by a projection the point in the hyperplane that is closest to the given point.
(a) Q(...), hyperplane [...]
(b) Q(...), hyperplane [...]
(c) Q(...), hyperplane [...]
(d) Q(...), hyperplane [...]
Computer Problems

C1 Let v1 = [...], v2 = [...], v3 = [...], and v4 = [...].
Use computer software to evaluate each of the following.
(a) proj_v1 v3
(b) perp_v1 v3
(c) proj_v3 v1
(d) proj_v4 v2

Conceptual Problems

D1 (a) Given u and v in ℝ³ with u ≠ 0 and v ≠ 0, verify that the composite map C : ℝ³ → ℝ³ defined by C(x) = proj_u(proj_v x) also has the linearity properties (L1) and (L2).
(b) Suppose that C(x) = 0 for all x ∈ ℝ³, where C is defined as in part (a). What can you say about u and v? Explain.
D2 By linearity property (L2), we know that proj_u(−x) = −proj_u x. Check, and explain geometrically, that proj_(−u) x = proj_u x.
D3 (a) (Pythagoras' Theorem) Use the fact that ‖x‖² = x · x to prove that ‖x + y‖² = ‖x‖² + ‖y‖² if and only if x · y = 0.
(b) Let ℓ be the line in ℝⁿ with vector equation x = td and let P be any point that is not on ℓ. Prove that for any point Q on the line, the smallest value of ‖p − q‖² is obtained when q = proj_d(p) (that is, when p − q is perpendicular to d). (Hint: Consider ‖p − q‖ = ‖p − proj_d(p) + proj_d(p) − q‖.)
D4 By using the definition of perp_n and the fact that PQ = −QP, show that
OP + perp_n PQ = OQ + proj_n QP
D5 (a) Let u = [...] and x = [...]. Show that proj_u(perp_u(x)) = 0.
(b) For any u ∈ ℝ³, prove algebraically that for any x ∈ ℝ³, proj_u(perp_u(x)) = 0.
(c) Explain geometrically why proj_u(perp_u(x)) = 0 for every x.

1.5 Cross-Products and Volumes

Given a pair of vectors u and v in ℝ³, how can we find a third vector w that is orthogonal to both u and v? This problem arises naturally in many ways. For example, to find the scalar equation of a plane whose vector equation is x = ru + sv, we must find the normal vector n orthogonal to u and v. In physics, it is observed that the force on an electrically charged particle moving in a magnetic field is in the direction orthogonal to the velocity of the particle and to the vector describing the magnetic field.

Cross-Products
Let u, v ∈ ℝ³. If w is orthogonal to both u and v, it must satisfy the equations
u · w = u1w1 + u2w2 + u3w3 = 0
v · w = v1w1 + v2w2 + v3w3 = 0
In Chapter 2, we shall develop systematic methods for solving such equations for w1, w2, w3. For the present, we simply give a solution:
w = [u2v3 − u3v2, u3v1 − u1v3, u1v2 − u2v1]

EXERCISE 1 Verify that w · u = 0 and w · v = 0.



Also notice from the form of the equations that any multiple of w would also be orthogonal to both u and v.

Definition
Cross-Product
The cross-product of vectors u = [u1, u2, u3] and v = [v1, v2, v3] is defined by
u × v = [u2v3 − u3v2, u3v1 − u1v3, u1v2 − u2v1]

EXAMPLE 1
Calculate the cross-product of [...] and [...].
Solution: [...] × [...] = [...]
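The component formula, and the orthogonality property of Exercise 1, can be confirmed numerically. This Python sketch is our own illustration; the sample vectors are chosen here and are not taken from the text:

```python
def cross(u, v):
    # Component formula for the cross-product in R^3.
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v = [1, 2, 5], [3, -1, 2]   # sample vectors
w = cross(u, v)                # [9, 13, -7]
```

Both dot products w · u and w · v come out to 0, as they must.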

Remarks

1. Unlike the dot product of two vectors, which is a scalar, the cross-product of two vectors is itself a new vector.

2. The cross-product is a construction that is defined only in ℝ³. (There is a generalization to higher dimensions, but it is considerably more complicated, and it will not be considered in this book.)

The formula for the cross-product is a little awkward to remember, but there are many tricks for remembering it. One way is to write the components of u in a row above the components of v:
u1 u2 u3
v1 v2 v3
Then, for the first entry in u × v, we cover the first column and calculate the difference of the products of the cross-terms:
u2 u3
v2 v3   →   u2v3 − u3v2
For the second entry in u × v, we cover the second column and take the negative of the difference of the products of the cross-terms:
u1 u3
v1 v3   →   −(u1v3 − u3v1)
Similarly, for the third entry, we cover the third column and calculate the difference of the products of the cross-terms:
u1 u2
v1 v2   →   u1v2 − u2v1

Note carefully that the second term must be given a minus sign in order for this procedure to provide the correct answer. Since the formula can be difficult to remember, we recommend checking the answer by verifying that it is orthogonal to both u and v.

EXERCISE 2
Calculate the cross-product of [...] and [...].

By construction, u × v is orthogonal to u and v, so the direction of u × v is known except for sign: does it point "up" or "down"? The general rule is as follows: the three vectors u, v, and u × v, taken in this order, form a right-handed system. Let us see how this works for simple cases.

EXERCISE 3
Let e1, e2, and e3 be the standard basis vectors in ℝ³. Verify that
e1 × e2 = e3,   e2 × e3 = e1,   e3 × e1 = e2
but
e2 × e1 = −e3,   e3 × e2 = −e1,   e1 × e3 = −e2
Check that in every case, the three vectors taken in order form a right-handed system.

These simple examples also suggest some of the general properties of the cross-product.

Theorem 1
For x, y, z ∈ ℝ³ and t ∈ ℝ, we have
(1) x × y = −y × x
(2) x × x = 0
(3) x × (y + z) = x × y + x × z
(4) (tx) × y = t(x × y)

Proof: These properties follow easily from the definition of the cross-product and are left to the reader.

One rule we might expect does not in fact hold. In general,
x × (y × z) ≠ (x × y) × z
This means that the parentheses cannot be omitted in a cross-product. (There are formulas available for these triple-vector products, but we shall not need them. See Problem F3 in Further Problems at the end of this chapter.)
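The failure of associativity is easy to exhibit concretely with the standard basis vectors; the following Python check is our own illustration:

```python
def cross(x, y):
    return [x[1] * y[2] - x[2] * y[1],
            x[2] * y[0] - x[0] * y[2],
            x[0] * y[1] - x[1] * y[0]]

e1, e2 = [1, 0, 0], [0, 1, 0]
lhs = cross(e1, cross(e1, e2))   # e1 x (e1 x e2) = e1 x e3 = -e2
rhs = cross(cross(e1, e1), e2)   # (e1 x e1) x e2 = 0 x e2 = 0
```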

The Length of the Cross-Product

Given u and v, the direction of their cross-product is known. What is the length of the cross-product of u and v? We give the answer in the following theorem.

Theorem 2
Let u, v ∈ ℝ³ and let θ be the angle between u and v. Then ‖u × v‖ = ‖u‖ ‖v‖ sin θ.

Proof: We give an outline of the proof. We have
‖u × v‖² = (u2v3 − u3v2)² + (u3v1 − u1v3)² + (u1v2 − u2v1)²
Expand by the binomial theorem and then add and subtract the term u1²v1² + u2²v2² + u3²v3². The resulting terms can be arranged so as to be seen to be equal to
(u1² + u2² + u3²)(v1² + v2² + v3²) − (u1v1 + u2v2 + u3v3)²
Thus,
‖u × v‖² = ‖u‖²‖v‖² − (u · v)² = ‖u‖²‖v‖² − ‖u‖²‖v‖² cos²θ = ‖u‖²‖v‖²(1 − cos²θ) = ‖u‖²‖v‖² sin²θ
and the result follows. ∎

To interpret this formula, consider Figure 1.5.18. Assuming that u × v ≠ 0, the vectors u and v determine a parallelogram. Take the length of u to be the base of the parallelogram; the altitude is the length of perp_u v. From trigonometry, we know that this length is ‖v‖ sin θ, so that the area of the parallelogram is
(base) × (altitude) = ‖u‖ ‖v‖ sin θ = ‖u × v‖

Figure 1.5.18 The area of the parallelogram is ‖u‖ ‖v‖ sin θ.
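Theorem 2 turns area computations into a norm calculation. A small Python sketch (our own illustration, with sample vectors chosen here):

```python
def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def parallelogram_area(u, v):
    # Theorem 2: area = ||u x v||.
    w = cross(u, v)
    return sum(c * c for c in w) ** 0.5

area = parallelogram_area([1, 0, 0], [0, 1, 0])   # unit square, area 1
```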

EXAMPLE 2
Find the area of the parallelogram determined by u = [...] and v = [...].
Solution: By Theorem 2, we find that the area is ‖u × v‖ = [...].

EXAMPLE 3
Find the area of the parallelogram determined by u = [...] and v = [...].
Solution: By Theorem 2, the area is [...].

EXERCISE 4
Find the area of the parallelogram determined by u = [...] and v = [...].

Some Problems on Lines, Planes, and Distances

The cross-product allows us to answer many questions about lines, planes, and distances in ℝ³.

Finding the Normal to a Plane  In Section 1.2, the vector equation of a plane was given in the form x = p + su + tv, where {u, v} is linearly independent. By definition, the normal vector n must be perpendicular to u and v. Therefore, it will be given by n = u × v.

EXAMPLE 4
The lines x = [...] + s[...] and x = [...] + t[...] must lie in a common plane since they have the point (1, 3, 2) in common. Find a scalar equation of the plane that contains these lines.
Solution: The normal to the plane is n = [−4, −3, 2].
Therefore, since the plane passes through P(1, 3, 2), we find that an equation of the plane is
−4x1 − 3x2 + 2x3 = (−4)(1) + (−3)(3) + (2)(2) = −9

EXAMPLE 5
Find a scalar equation of the plane that contains the three points P(1, −2, 1), Q(2, −2, −1), and R(4, 1, 1).
Solution: Since P, Q, and R lie in the plane, then so do the directed line segments PQ and PR. Hence, the normal to the plane is given by
n = PQ × PR = [1, 0, −2] × [3, 3, 0] = [6, −6, 3]
Since the plane passes through P, we find that an equation of the plane is
6x1 − 6x2 + 3x3 = (6)(1) + (−6)(−2) + (3)(1) = 21, or 2x1 − 2x2 + x3 = 7
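The three-point method of Example 5 can be packaged as a small routine. This Python sketch is our own (the name plane_through is invented); it reproduces Example 5's normal vector and right-hand side:

```python
def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def plane_through(P, Q, R):
    # Normal n = PQ x PR; the scalar equation is then n . x = n . P.
    PQ = [q - p for p, q in zip(P, Q)]
    PR = [r - p for p, r in zip(P, R)]
    n = cross(PQ, PR)
    d = sum(ni * pi for ni, pi in zip(n, P))
    return n, d

n, d = plane_through((1, -2, 1), (2, -2, -1), (4, 1, 1))   # ([6, -6, 3], 21)
```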

Finding the Line of Intersection of Two Planes  Unless two planes in ℝ³ are parallel, their intersection will be a line. The direction vector of this line lies in both planes, so it is perpendicular to both of the normals. It can therefore be obtained as the cross-product of the two normals. Once we find a point that lies on this line, we can write the vector equation of the line.

EXAMPLE 6
Find a vector equation of the line of intersection of the two planes x1 + x2 − 2x3 = 3 and 2x1 − x2 + 3x3 = 6.
Solution: The normal vectors of the planes are [1, 1, −2] and [2, −1, 3]. Hence, the direction vector of the line of intersection is
d = [1, 1, −2] × [2, −1, 3] = [1, −7, −3]
One easy way to find a point on the line is to let x3 = 0 and then solve the remaining equations x1 + x2 = 3 and 2x1 − x2 = 6. The solution is x1 = 3 and x2 = 0. Hence, a vector equation of the line of intersection is
x = [3, 0, 0] + t[1, −7, −3], t ∈ ℝ
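Example 6's direction vector is just a cross-product of the two normals, which is quick to check in code (a Python sketch of our own, not part of the text):

```python
def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

# Planes x1 + x2 - 2x3 = 3 and 2x1 - x2 + 3x3 = 6.
n1, n2 = [1, 1, -2], [2, -1, 3]
direction = cross(n1, n2)   # [1, -7, -3], perpendicular to both normals
```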

EXERCISE 5 Find a vector equation of the line of intersection of the two planes -x1 - 2x2 + x3 = -2
and 2x1 + x2 - 2x3 = 1.

The Scalar Triple Product and Volumes in ℝ³  The three vectors u, v, and w in ℝ³ may be taken to be the three adjacent edges of a parallelepiped (see Figure 1.5.19). Is there an expression for the volume of the parallelepiped in terms of the three vectors? To obtain such a formula, observe that the parallelogram determined by u and v can be regarded as the base of the solid of the parallelepiped. This base has area ‖u × v‖. With respect to this base, the altitude of the solid is the length of the amount of w in the direction of the normal vector n = u × v to the base. That is,
altitude = ‖proj_n w‖ = |w · n|/‖n‖ = |w · (u × v)|/‖u × v‖
Thus, to get the volume of the parallelepiped, multiply this altitude by the area of the base to get
volume of the parallelepiped = (|w · (u × v)|/‖u × v‖) × ‖u × v‖ = |w · (u × v)|
The product w · (u × v) is called the scalar triple product of w, u, and v. Notice that the result is a real number (a scalar).

Figure 1.5.19 The parallelepiped with adjacent edges u, v, w has volume given by |w · (u × v)|.

The sign of the scalar triple product also has an interpretation. Recall that the ordered triple of vectors {u, v, u × v} is right-handed; we can think of u × v as the "upwards" normal vector to the plane with vector equation x = su + tv. Some other vector w is then "upwards," and {u, v, w} (in that order) is right-handed, if and only if the scalar triple product is positive. If the scalar triple product is negative, then {u, v, w} is a left-handed system.
It is often useful to note that
w · (u × v) = u · (v × w) = v · (w × u)
This is straightforward but tedious to verify.

EXAMPLE 7
Find the volume of the parallelepiped determined by the vectors [...], [...], and [...].
Solution: The volume V is
V = |w · (u × v)| = 2
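The volume formula |w · (u × v)| is a one-liner once the cross-product is available. The Python sketch below is our own illustration, with sample edge vectors chosen here:

```python
def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def parallelepiped_volume(u, v, w):
    # Volume = |w . (u x v)|, the absolute value of the scalar triple product.
    return abs(dot(w, cross(u, v)))

vol = parallelepiped_volume([1, 0, 0], [0, 1, 0], [1, 1, 2])   # 2
```

The cyclic identity w · (u × v) = u · (v × w) can be spot-checked with the same helpers.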

PROBLEMS 1.5
Practice Problems

A1 Calculate the following cross-products.
(a) [...] × [...]
(b) [...] × [...]
(c) [...] × [...]
(d) [...] × [...]
(e) [...] × [...]
(f) [...] × [...]
A2 Let u = [...], v = [...], and w = [...]. Check by calculation that the following general properties hold.
(a) u × u = 0
(b) u × v = −v × u
(c) u × 3w = 3(u × w)
(d) u × (v + w) = u × v + u × w
(e) u · (v × w) = w · (u × v)
(f) u · (v × w) = −v · (u × w)
A3 Calculate the area of the parallelogram determined by each pair of vectors.
(a) [...] and [...]
(b) [...] and [...]
(c) [...] and [...]
(d) [...] and [...] (Hint: For (d), think of these vectors as vectors in ℝ³.)
A4 Determine the scalar equation of the plane with vector equation
(a) x = [...] + s[...] + t[...]
(b) x = [...] + s[...] + t[...]
(c) x = [...] + s[...] + t[...]
(d) x = [...] + s[...] + t[...]
A5 Determine the scalar equation of the plane that contains each set of points.
(a) P(2, 1, 5), Q(4, −3, 2), R(2, 6, −1)
(b) P(3, 1, 4), Q(−2, 0, 2), R(1, 4, −1)
(c) P(−1, 4, 2), Q(3, 1, −1), R(2, −3, −1)
(d) P(1, 0, 1), Q(−1, 0, 1), R(0, 0, 0)
A6 Determine a vector equation of the line of intersection of the given planes.
(a) x1 + 3x2 − x3 = 5 and 2x1 − 5x2 + x3 = 7
(b) 2x1 − 3x3 = 7 and x2 + 2x3 = 4
A7 Find the volume of the parallelepiped determined by each set of vectors.
(a) [...]
(b) [...]
(c) [...]
(d) [...]
(e) [...]
A8 What does it mean, geometrically, if u · (v × w) = 0?
A9 Show that (u − v) × (u + v) = 2(u × v).

Homework Problems

B1 Calculate the following cross-products.
(a) [...] × [...]
(b) [...] × [...]
(c) [...] × [...]
(d) [...] × [...]
(e) [...] × [...]
(f) [...] × [...]
B2 Let u = [...], v = [...], and w = [...]. Check by calculation that the following general properties hold.
(a) u × u = 0
(b) u × v = −v × u
(c) u × 2w = 2(u × w)
(d) u × (v + w) = u × v + u × w
(e) u · (v × w) = w · (u × v)
(f) u · (v × w) = −v · (u × w)
B3 Calculate the area of the parallelogram determined by each pair of vectors.
(a) [...] and [...]
(b) [...] and [...]
(c) [...] and [...]
(d) [...] and [...] (Hint: For (d), think of these vectors as vectors in ℝ³.)
B4 Determine the scalar equation of the plane with vector equation
(a) x = [...] + s[...] + t[...]
(b) x = [...] + s[...] + t[...]
(c) x = [...] + s[...] + t[...]
(d) x = [...] + s[...] + t[...]
B5 Determine the scalar equation of the plane that contains each set of points.
(a) P(5, 2, 1), Q(−3, 2, 4), R(8, 1, 6)
(b) P(5, −1, 2), Q(−1, 3, 4), R(3, 1, 1)
(c) P(0, 2, 1), Q(3, −1, 1), R(1, 3, 0)
(d) P(1, 5, −3), Q(2, 6, −1), R(1, 0, 1)
B6 Determine a vector equation of the line of intersection of the given planes.
(a) x1 + 4x2 − x3 = 5 and 3x1 − 7x2 + x3 = 6
(b) x1 − 3x2 − 2x3 = 4 and 3x1 + 2x2 + x3 = 2
B7 Find the volume of the parallelepiped determined by each set of vectors.
(a) [...]
(b) [...]
(c) [...]
(d) [...]
Conceptual Problems

D1 Show that if X is a point on the line through P and Q, then x × (q − p) = p × q, where x = OX, p = OP, and q = OQ.
D2 Consider the following statement: "If u ≠ 0 and u × v = u × w, then v = w." If the statement is true, prove it. If it is false, give a counterexample.

D3 Explain why u × (v × w) must be a vector that satisfies the vector equation x = sv + tw.
D4 Give an example of distinct vectors u, v, and w in ℝ³ such that
(a) u × (v × w) = (u × v) × w
(b) u × (v × w) ≠ (u × v) × w

CHAPTER REVIEW
Suggestions for Student Review

Organizing your own review is an important step towards mastering new material. It is much more valuable than memorizing someone else's list of key ideas. To retain new concepts as useful tools, you must be able to state definitions and make connections between various ideas and techniques. You should also be able to give (or, even better, create) instructive examples. The suggestions below are not intended to be an exhaustive checklist; instead, they suggest the kinds of activities and questioning that will help you gain a confident grasp of the material.

1 Find some person or persons to talk with about mathematics. There's lots of evidence that this is the best way to learn. Be sure you do your share of asking and answering. Note that a little bit of embarrassment is a small price for learning. Also, be sure to get lots of practice in writing answers independently.
2 Draw pictures to illustrate addition of vectors, subtraction of vectors, and multiplication of a vector by a scalar (general case). (Section 1.1)
3 Explain how you find a vector equation for a line and make up examples to show why the vector equation of a line is not unique. (Albert Einstein once said, "If you can't explain it simply, you don't understand it well enough.") (Section 1.1)
4 State the definition of a subspace of ℝⁿ. Give examples of subspaces in ℝ³ that are lines, planes, and all of ℝ³. Show that there is only one subspace in ℝ³ that does not have infinitely many vectors in it. (Section 1.2)
5 Show that the subspace spanned by three vectors in ℝ³ can either be a point, a line, a plane, or all of ℝ³, by giving examples. Explain how this relates with the concept of linear independence. (Section 1.2)
6 Let {v1, v2} be a linearly independent spanning set for a subspace S of ℝ³. Explain how you could construct other spanning sets and other linearly independent spanning sets for S. (Section 1.2)
7 State the formal definition of linear independence. Explain the connection between the formal definition of linear dependence and an intuitive geometric understanding of linear dependence. Why is linear independence important when looking at spanning sets? (Section 1.2)
8 State the relation (or relations) between the length in ℝ³ and the dot product in ℝ³. Use examples to illustrate. (Section 1.3)
9 Explain how projection onto a vector v is defined in terms of the dot product. Illustrate with a picture. Define the part of a vector x perpendicular to v and verify (in the general case) that it is perpendicular to v. (Section 1.4)
10 Explain with a picture how projections help us to solve the minimum distance problem. (Section 1.4)
11 Discuss the role of the normal vector to a plane in determining the scalar equation of the plane. Explain how you can get from a scalar equation of a plane to a vector equation for the plane and from a vector equation of the plane to the scalar equation. (Sections 1.3 and 1.5)
12 State the important algebraic and geometric properties of the cross-product. (Section 1.5)

Chapter Quiz

Note: Your instructor may have different ideas of an appropriate level of difficulty for a test on this material.

E1 Determine a vector equation of the line passing through points P(−2, 1, −4) and Q(5, −2, 1).
E2 Determine the scalar equation of the plane that contains the points P(1, −1, 0), Q(3, 1, −2), and R(−4, 1, 6).
E3 Show that {[...], [...]} is a basis for ℝ².
E4 Prove that S = {x ∈ ℝ³ | a1x1 + a2x2 + a3x3 = d} is a subspace of ℝ³ for any real numbers a1, a2, a3 if and only if d = 0.
E5 Determine the cosine of the angle between u = [...] and each of the coordinate axes.
E6 Find the point on the line x = t[...], t ∈ ℝ, that is closest to the point P(2, 3, 4). Illustrate your method of calculation with a sketch.
E7 Find the point on the hyperplane x1 + x2 + x3 + x4 = 1 that is closest to the point P(3, −2, 0, 2) and determine the distance from the point to the plane.
E8 Determine a non-zero vector that is orthogonal to both [...] and [...].
E9 Prove that the parallelepiped determined by u, v, and w has the same volume as the parallelepiped determined by (u + kv), v, and w.
E10 Each of the following statements is to be interpreted in ℝ³. Determine whether each statement is true, and if so, explain briefly. If false, give a counterexample.
(i) Any three distinct points lie in exactly one plane.
(ii) The subspace spanned by a single non-zero vector is a line passing through the origin.
(iii) The set Span{v1, ..., vk} is linearly dependent.
(iv) The dot product of a vector with itself cannot be zero.
(v) For any vectors x and y, proj_x y = proj_y x.
(vi) For any vectors x and y, the set {proj_x y, perp_x y} is linearly independent.
(vii) The area of the parallelogram determined by u and v is the same as the area of the parallelogram determined by u and (v + 3u).

Further Problems

These problems are intended to be a little more challenging than the problems at the end of each section. Some explore topics beyond the material discussed in the text.

F1 Consider the statement "If u ≠ 0, and both u · v = u · w and u × v = u × w, then v = w." Either prove the statement or give a counterexample.
F2 Suppose that u and v are orthogonal unit vectors in ℝ³. Prove that for every x ∈ ℝ³, ...
F3 In Problem 1.5.D3, you were asked to show that u × (v × w) = sv + tw for some s, t ∈ ℝ.
(a) By direct calculation, prove that u × (v × w) = (u · w)v − (u · v)w.
(b) Prove that u × (v × w) + v × (w × u) + w × (u × v) = 0.
F4 Prove that
(a) u · v = (1/4)‖u + v‖² − (1/4)‖u − v‖²
(b) ‖u + v‖² + ‖u − v‖² = 2‖u‖² + 2‖v‖²
(c) Interpret (a) and (b) in terms of a parallelogram determined by vectors u and v.
F5 Show that if P, Q, and R are collinear points and OP = p, OQ = q, and OR = r, then
(p × q) + (q × r) + (r × p) = 0
F6 In ℝ², two lines fail to have a point of intersection only if they are parallel. However, in ℝ³, a pair of lines can fail to have a point of intersection even if they are not parallel. Two such lines in ℝ³ are called skew.
(a) Observe that if two lines are skew, then they do not lie in a common plane. Show that two skew lines do lie in parallel planes.
(b) Find the distance between the skew lines x = [...] + s[...], s ∈ ℝ, and x = [...] + t[...], t ∈ ℝ.

MyMathlab Go to MyMathLab at www.mymathlab.com. You can practise many of this chapter's exercises as often as you
want. The guided solutions help you find an answer step by step. You'll find a personalized study plan available to
you, too!
CHAPTER 2

Systems of Linear
Equations
CHAPTER OUTLINE

2.1 Systems of Linear Equations and Elimination


2.2 Reduced Row Echelon Form, Rank, and Homogeneous Systems
2.3 Application to Spanning and Linear Independence
2.4 Applications of Systems of Linear Equations

In a few places in Chapter 1, we needed to find a vector x in ℝⁿ that simultaneously


satisfied several linear equations. In such cases, we used a system of linear equations.
Such systems arise frequently in almost every conceivable area where mathematics is
applied: in analyzing stresses in complicated structures; in allocating resources or
managing inventory; in determining appropriate controls to guide aircraft or robots;
and as a fundamental tool in the numerical analysis of the flow of fluids or heat.
The standard method of solving systems of linear equations is elimination. Elim­
ination can be represented by row reduction of a matrix to its reduced row echelon
form. This is a fundamental procedure in linear algebra. Obtaining and interpreting
the reduced row echelon form of a matrix will play an important role in almost every­
thing we do in the rest of this book.

2.1 Systems of Linear Equations


and Elimination

Definition
Linear Equation
A linear equation in n variables x1, ..., xn is an equation that can be written in the form
a1x1 + a2x2 + · · · + anxn = b    (2.1)
The numbers a1, ..., an are called the coefficients of the equation, and b is usually referred to as "the right-hand side," or "the constant term." The xi are the unknowns or variables to be solved for.

EXAMPLE 1
The equations
x1 + 2x2 = 4    (2.2)
x1 − 3x2 + √3 x3 = πx4    (2.3)
are both linear equations since they both can be written in the form of equation (2.1). The equation x1² − x2 = 1 is not a linear equation.

Definition
Solution
A vector s = [s1, ..., sn] in ℝⁿ is called a solution of equation (2.1) if the equation is satisfied when we make the substitution x1 = s1, x2 = s2, ..., xn = sn.

EXAMPLE 2
A few solutions of equation (2.2) are [2, 1], [3, 0.5], and [6, −1] since
2 + 2(1) = 4
3 + 2(0.5) = 4
6 + 2(−1) = 4
The vector [0, 0, 0, 0] is clearly a solution of x1 − 3x2 + √3 x3 = πx4.

A general system of m linear equations in n variables is written in the form
a11x1 + a12x2 + · · · + a1nxn = b1
a21x1 + a22x2 + · · · + a2nxn = b2
⋮
am1x1 + am2x2 + · · · + amnxn = bm
Note that for each coefficient, the first index indicates in which equation the coefficient appears. The second index indicates which variable the coefficient multiplies. That is, aij is the coefficient of xj in the i-th equation. The indices on the right-hand side indicate which equation the constant appears in.
We want to establish a standard procedure for determining all the solutions of such
a system-if there are any solutions! It will be convenient to speak of the solution set
of a system to mean the set of all solutions of the system.
The standard procedure for solving a system of linear equations is elimination. By
multiplying and adding some of the original equations, we can eliminate some of the

variables from some of the equations. The result will be a simpler system of equations
that has the same solution set as the original system, but is easier to solve.
We say that two systems of equations are equivalent if they have the same solution
set. In elimination, each elimination step must be reversible and must leave the solution
set unchanged. Every system produced during an elimination procedure is equivalent
to the original system. We begin with an example and explain the general rules as we
proceed.

EXAMPLE 3 Find all solutions of the system of linear equations

x1 + x2 − 2x3 = 4
x1 + 3x2 − x3 = 7
2x1 + x2 − 5x3 = 7

Solution: To solve this system by elimination, we begin by eliminating x1 from all


equations except the first one.
Add (-1) times the first equation to the second equation. The first and third
equations are unchanged, so the system is now

x1 + x2 − 2x3 = 4
2x2 + x3 = 3
2x1 + x2 − 5x3 = 7

Note two important things about this step. First, if x1, x2, x3 satisfy the original system,
then they certainly satisfy the revised system after the step. This follows from the rule
of arithmetic that if P = Q and R = S, then P + R = Q + S. So, when we add
two equations and both are satisfied, the resulting sum equation is satisfied. Thus, the
revised system is equivalent to the original system.
Second, the step is reversible: to get back to the original system, we just add (1)
times the first equation to the revised second equation.
Add (−2) times the first equation to the third equation.

x1 + x2 − 2x3 = 4
2x2 + x3 = 3
−x2 − x3 = −1

Again, note that this step is reversible and does not change the solution set. Also note
that x1 has been eliminated from all equations except the first one, so now we leave
the first equation and turn our attention to x2.
Although we will not modify or use the first equation in the next several steps, we
keep writing the entire system after each step. This is important because it leads to a
good general procedure for dealing with large systems.
It is convenient, but not necessary, to work with an equation in which x2 has the
coefficient 1. We could multiply the second equation by 1 /2, but to avoid fractions,
follow the steps on the next page.

EXAMPLE3 Interchange the second and third equations. This is another reversible step that

(continued) does not change the solution set:

x1 + x2 − 2x3 = 4
−x2 − x3 = −1
2x2 + x3 = 3

Multiply the second equation by (-1). This step is reversible and does not
change the solution set:

x1 + x2 − 2x3 = 4
x2 + x3 = 1
2x2 + x3 = 3

Add (-2) times the second equation to the third equation.

x1 + x2 − 2x3 = 4
x2 + x3 = 1
−x3 = 1

In the third equation, all variables except x3 have been eliminated; by elimination,
we have solved for x3. Using similar steps, we could continue and eliminate x3 from
the second and first equations and x2 from the first equation. However, it is often a
much simpler task to complete the solution process by back-substitution.
First, observe that x3 = -1. Substitute this value into the second equation and find
that
x2 = 1 - x3 = 1 - (-1) = 2

Next, substitute these values back into the first equation to obtain

x1 = 4 - x2 + 2x3 = 4 - 2 + 2(-1) = 0

Thus, the only solution of this system is

[x1]   [ 0]
[x2] = [ 2]
[x3]   [-1]

Since the final system is equivalent to the original system, this solution is also the unique solution of the problem.

Observe that we can easily check that (0, 2, -1) satisfies the original system of equations:

0 + 2 - 2(-1) = 4
0 + 3(2) - (-1) = 7
2(0) + 2 - 5(-1) = 7
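The text performs this check by hand; the same verification can be done mechanically. The following Python sketch (using the NumPy library, which is our own supplement and not part of the text) encodes the original system of Example 3 and confirms that the solution found by back-substitution satisfies it.

```python
import numpy as np

# Coefficient matrix and right-hand side of the original system in Example 3:
# x1 + x2 - 2x3 = 4,  x1 + 3x2 - x3 = 7,  2x1 + x2 - 5x3 = 7.
A = np.array([[1.0, 1.0, -2.0],
              [1.0, 3.0, -1.0],
              [2.0, 1.0, -5.0]])
b = np.array([4.0, 7.0, 7.0])

x = np.array([0.0, 2.0, -1.0])   # the solution found by back-substitution
print(np.allclose(A @ x, b))     # True: x satisfies all three equations
```

Checking a candidate solution this way only requires a matrix-vector product, which is much cheaper than re-running the elimination.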

It is important to observe the form of the equations in our final system. The first
variable with a non-zero coefficient in each equation, called a leading variable, does
not appear in any equation below it. Also, the leading variable in the second equation
Section 2.1 Systems of Linear Equations and Elimination 67

is to the right of the leading variable in the first, and the leading variable in the third is
to the right of the leading variable in the second.
The system solved in Example 3 is a particularly simple one. However, the solution
procedure introduces all the steps that are needed in the process of elimination. They
are worth reviewing.

Types of Steps in Elimination


(1) Multiply one equation by a non-zero constant.
(2) Interchange two equations.
(3) Add a multiple of one equation to another equation.
Warning! Do not combine steps of type (1) and type (3) into one step of the form
"Add a multiple of one equation to a multiple of another equation." Although such a
combination would not lead to errors in this chapter, it would lead to errors when we
apply these ideas in Chapter 5.

EXAMPLE 4 Determine the solution set of the system of linear equations

x1 + 2x3 + x4 = 14
x1 + 3x3 + 3x4 = 19

Remark

Notice that neither equation contains x2. This may seem peculiar, but it happens in
some applications that one of the variables of interest does not appear in the linear
equations. If it truly is one of the variables of the problem, ignoring it is incorrect.
Rewrite the equations to make it explicit:

x1 + 0x2 + 2x3 + x4 = 14
x1 + 0x2 + 3x3 + 3x4 = 19

Solution: As in Example 3, we want our leading variable in the first equation to be to


the left of the leading variable in the second equation, and we want the leading variable
to be eliminated from the second equation. Thus, we use a type (3) step to eliminate
x1 from the second equation.
Add (-1) times the first equation to the second equation:

x1 + 0x2 + 2x3 + x4 = 14
x3 + 2x4 = 5

Observe that x2 is not shown in the second equation: its coefficient is zero, and a
leading variable must have a non-zero coefficient. Moreover, we have already finished
our elimination procedure, as we have our desired form. The solution can now be
completed by back-substitution.
Note that the equations do not completely determine both x3 and x4: one of them
can be chosen arbitrarily, and the equations can still be satisfied. For consistency, we
always choose the variables that do not appear as a leading variable in any equation to
be the ones that will be chosen arbitrarily. We will call these free variables.
68 Chapter 2 Systems of Linear Equations

EXAMPLE 4 (continued) Thus, in the revised system, we see that neither x2 nor x4 appears as a leading
variable in any equation. Therefore, x2 and x4 are the free variables and may be chosen
arbitrarily (for example, x4 = t ∈ ℝ and x2 = s ∈ ℝ). Then the second equation can be
solved for the leading variable x3:

x3 = 5 - 2x4 = 5 - 2t

Now, solve the first equation for its leading variable x1:

x1 = 14 - 2x3 - x4 = 14 - 2(5 - 2t) - t = 4 + 3t

Thus, the solution set of the system is

[x1]   [4 + 3t]
[x2] = [  s   ]
[x3]   [5 - 2t]    s, t ∈ ℝ
[x4]   [  t   ]

In this case, there are infinitely many solutions because for each value of s and each
value of t that we choose, we get a different solution. We say that this equation is the
general solution of the system, and we call s and t the parameters of the general
solution. For many purposes, it is useful to recognize that this solution can be split into
a constant part, a part in t, and a part in s:

[x1]   [4]     [0]     [ 3]
[x2] = [0] + s [1] + t [ 0]
[x3]   [5]     [0]     [-2]
[x4]   [0]     [0]     [ 1]

This will be the standard format for displaying general solutions. It is acceptable to
leave x2 in the place of s and x4 in the place of t, but then you must say x2, x4 ∈ ℝ.
Observe that one immediate advantage of this form is that we can instantly see
the geometric interpretation of the solution. The intersection of the two hyperplanes
x1 + 2x3 + x4 = 14 and x1 + 3x3 + 3x4 = 19 in ℝ⁴ is a plane in ℝ⁴ that passes through
P(4, 0, 5, 0).
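A general solution with parameters can also be checked mechanically: every choice of s and t must produce a vector satisfying both equations. The sketch below (Python with NumPy; our own supplement, not from the text) substitutes several sample parameter values into the general solution of Example 4.

```python
import numpy as np

# System of Example 4: x1 + 2x3 + x4 = 14 and x1 + 3x3 + 3x4 = 19,
# with general solution x = (4 + 3t, s, 5 - 2t, t).
A = np.array([[1.0, 0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0, 3.0]])
b = np.array([14.0, 19.0])

for s, t in [(0.0, 0.0), (1.0, -2.0), (3.5, 7.0)]:
    x = np.array([4 + 3*t, s, 5 - 2*t, t])
    assert np.allclose(A @ x, b)   # every (s, t) gives a genuine solution
print("all sample parameter choices satisfy the system")
```

Note that the converse (that every solution arises this way) is what the back-substitution argument in the text establishes.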

The solution procedure we have introduced is known as Gaussian elimination


with back-substitution. A slight variation of this procedure is introduced in the next
section.

EXERCISE 1 Find the general solution to the system of linear equations

2x1 + 4x2 + 0x3 = 12
x1 + 2x2 - x3 = 4

Use the general solution to find three different solutions of the system.

The Matrix Representation of a System


of Linear Equations
After you have solved a few systems of equations using elimination, you may realize
that you could write the solution faster if you could omit the letters x1, x2, and so on,
as long as you could keep the coefficients lined up properly. To do this, we write out
the coefficients in a rectangular array called a matrix. A general linear system of m
equations in n unknowns will be represented by the matrix

[ a11  a12  ...  a1j  ...  a1n | b1 ]
[ a21  a22  ...  a2j  ...  a2n | b2 ]
[  .    .         .         .  |  . ]
[ ai1  ai2  ...  aij  ...  ain | bi ]
[  .    .         .         .  |  . ]
[ am1  am2  ...  amj  ...  amn | bm ]

where the coefficient aij appears in the i-th row and j-th column of the coefficient
matrix. This is called the augmented matrix of the system; it is augmented because
it includes as its last column the right-hand side of the equations. The matrix without
this last column is called the coefficient matrix of the system:

[ a11  a12  ...  a1j  ...  a1n ]
[ a21  a22  ...  a2j  ...  a2n ]
[  .    .         .         .  ]
[ ai1  ai2  ...  aij  ...  ain ]
[  .    .         .         .  ]
[ am1  am2  ...  amj  ...  amn ]

For convenience, we sometimes denote the augmented matrix of a system with coefficient
matrix A and right-hand side b, with entries b1, ..., bm, by [A | b]. In Chapter 3, we will develop
another way of representing a system of linear equations.

EXAMPLE 5 Write the coefficient matrix and augmented matrix for the following system:

3x1 + 8x2 - 18x3 + x4 = 35
x1 + 2x2 - 4x3 = 11
x1 + 3x2 - 7x3 + x4 = 10

Solution: The coefficient matrix is formed by writing the coefficients of each equation
as the rows of the matrix. Thus, we get the matrix

    [ 3  8  -18  1 ]
A = [ 1  2   -4  0 ]
    [ 1  3   -7  1 ]

EXAMPLE 5 (continued) For the augmented matrix, we just add the right-hand side as the last column. We get

[ 3  8  -18  1 | 35 ]
[ 1  2   -4  0 | 11 ]
[ 1  3   -7  1 | 10 ]

EXAMPLE 6 Write the system of linear equations that has the augmented matrix

[ 1   0  2 |  3 ]
[ 0  -1  1 |  1 ]
[ 0   0  1 | -2 ]

Solution: The rows of the matrix tell us the coefficients and right-hand side of each
equation. We get the system

x1 + 2x3 = 3
-x2 + x3 = 1
x3 = -2

Remark

Another way to view the coefficient matrix is to see that the j-th column of the matrix
is the vector containing all the coefficients of xj. This view will become very important
in Chapter 3 and beyond.

Since each row in the augmented matrix corresponds to an equation in the system
of linear equations, performing operations on the equations of the system corresponds
to performing the same operations on the rows of the matrix. Thus, the steps in elimi­
nation correspond to the following elementary row operations.
Types of Elementary Row Operations
(1) Multiply one row by a non-zero constant.
(2) Interchange two rows.
(3) Add a multiple of one row to another.
As with the steps in elimination, we do not combine operations of type (1) and
type (3) into one operation.
The process of performing elementary row operations on a matrix to bring it into
some simpler form is called row reduction.
Recall that if a system of equations is obtained from another system by one or
more of the elimination steps, the systems are said to be equivalent. For matrices,
if the matrix M is row reduced into a matrix N by a sequence of elementary row
operations, then we say that M is row equivalent to N. Just as elimination steps are
reversible, so are elementary row operations. It follows that if M is row equivalent to
N, then N is row equivalent to M, so we may say that M and N are row equivalent. It
also follows that if A is row equivalent to B and B is row equivalent to C, then A is row
equivalent to C.
Let us see how the elimination in Example 3 appears in matrix notation. To do
this, we introduce notation to indicate the elementary row operations. We write Ri + cRj

to indicate adding c times row j to row i, and Ri ↔ Rj to indicate interchanging row i and
row j. We write cRi to indicate multiplying row i by a non-zero scalar c. Additionally,
at each step we will use ~ to indicate that the matrices are row equivalent. Note that
it would be incorrect to use = or ⇒ instead of ~. As one becomes confident with
elementary row operations, one may omit these indicators of which elementary row
operations were used. However, including them can make checking the steps easier,
and instructors may require them in work submitted for grading.

EXAMPLE 7 The augmented matrix for the system in Example 3 is

[ 1  1  -2 | 4 ]
[ 1  3  -1 | 7 ]
[ 2  1  -5 | 7 ]

The first step in the elimination was to add (-1) times the first equation to the second;
here we add (-1) times the first row to the second. We write

[ 1  1  -2 | 4 ]                 [ 1  1  -2 | 4 ]
[ 1  3  -1 | 7 ]  R2 + (-1)R1  ~ [ 0  2   1 | 3 ]
[ 2  1  -5 | 7 ]                 [ 2  1  -5 | 7 ]

The remaining steps are

[ 1  1  -2 | 4 ]                 [ 1  1  -2 |  4 ]
[ 0  2   1 | 3 ]  R3 + (-2)R1  ~ [ 0  2   1 |  3 ]
[ 2  1  -5 | 7 ]                 [ 0 -1  -1 | -1 ]

              [ 1  1  -2 |  4 ]            [ 1  1  -2 | 4 ]
  R2 ↔ R3   ~ [ 0 -1  -1 | -1 ]  (-1)R2  ~ [ 0  1   1 | 1 ]
              [ 0  2   1 |  3 ]            [ 0  2   1 | 3 ]

                 [ 1  1  -2 | 4 ]
  R3 + (-2)R2  ~ [ 0  1   1 | 1 ]
                 [ 0  0  -1 | 1 ]

All the elementary row operations corresponding to the elimination in Example 3 have
been performed. Observe that the final matrix is the augmented matrix for the final
system of linear equations that we obtained in Example 3.
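For readers who want to experiment, the row operations of Example 7 translate directly into array updates. The following Python/NumPy sketch is our own illustration (the text itself uses no software); note that rows are indexed from 0, so R2 in the text is M[1] here.

```python
import numpy as np

# Augmented matrix of Example 3, reduced by the operations of Example 7.
M = np.array([[1.0, 1.0, -2.0, 4.0],
              [1.0, 3.0, -1.0, 7.0],
              [2.0, 1.0, -5.0, 7.0]])

M[1] += -1 * M[0]        # R2 + (-1)R1
M[2] += -2 * M[0]        # R3 + (-2)R1
M[[1, 2]] = M[[2, 1]]    # R2 <-> R3 (fancy indexing swaps the rows)
M[1] *= -1               # (-1)R2
M[2] += -2 * M[1]        # R3 + (-2)R2
print(M)                 # the row echelon form obtained in the text
```

Each assignment overwrites a row in place, which mirrors the warning in the text: once a row has been changed, every later operation sees the new row, not the old one.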

EXAMPLE 8 The matrix representation of the elimination in Example 4 is

[ 1  0  2  1 | 14 ]                 [ 1  0  2  1 | 14 ]
[ 1  0  3  3 | 19 ]  R2 + (-1)R1  ~ [ 0  0  1  2 |  5 ]

EXERCISE 2 Write out the matrix representation of the elimination used in Exercise 1.
In the next example, we will solve a system of linear equations using Gaussian
elimination with back-substitution entirely in matrix form.

EXAMPLE 9 Find the general solution of the system

3x1 + 8x2 - 18x3 + x4 = 35
x1 + 2x2 - 4x3 = 11
x1 + 3x2 - 7x3 + x4 = 10

Solution: Write the augmented matrix of the system and row reduce:

[ 3  8  -18  1 | 35 ]             [ 1  2   -4  0 | 11 ]
[ 1  2   -4  0 | 11 ]  R1 ↔ R2  ~ [ 3  8  -18  1 | 35 ]  R2 + (-3)R1
[ 1  3   -7  1 | 10 ]             [ 1  3   -7  1 | 10 ]  R3 + (-1)R1

  [ 1  2  -4  0 | 11 ]             [ 1  2  -4  0 | 11 ]
~ [ 0  2  -6  1 |  2 ]  R2 ↔ R3  ~ [ 0  1  -3  1 | -1 ]  R3 + (-2)R2
  [ 0  1  -3  1 | -1 ]             [ 0  2  -6  1 |  2 ]

  [ 1  2  -4   0 | 11 ]
~ [ 0  1  -3   1 | -1 ]
  [ 0  0   0  -1 |  4 ]
To find the general solution, we now interpret the final matrix as the augmented matrix
of the equivalent system. We get the system

x1 + 2x2 - 4x3 = 11
x2 - 3x3 + x4 = -1
-x4 = 4

We see that x3 is a free variable, so we let x3 = t ∈ ℝ. Then we use back-substitution
to get

x4 = -4
x2 = -1 + 3x3 - x4 = 3 + 3t
x1 = 11 - 2x2 + 4x3 = 5 - 2t

Thus, the general solution is

[x1]   [5 - 2t]   [ 5]     [-2]
[x2] = [3 + 3t] = [ 3] + t [ 3]    t ∈ ℝ
[x3]   [  t   ]   [ 0]     [ 1]
[x4]   [ -4   ]   [-4]     [ 0]

Check this solution by substituting these values for x1, x2, x3, x4 into the original
equations.

Observe that there are many different ways that we could choose to row reduce
the augmented matrix in any of these examples. For instance, in Example 9 we could
interchange row 1 and row 3 instead of interchanging row 1 and row 2. Alternatively,
we could use the elementary row operations R2 + (-1/3)R1 and R3 + (-1/3)R1 to eliminate the

non-zero entries beneath the first leading variable. It is natural to ask if there is a way of
determining which elementary row operations will work the best. Unfortunately, there
is no such algorithm for doing these by hand. However, we will give a basic algorithm
for row reducing a matrix into the "proper" form. We start by defining this form.

Row Echelon Form


Based on how we used elimination to solve the sy stem of equations, we define the
following form of a matrix.

Definition A matrix is in row echelon form (REF) if


Row Echelon Form (REF) (1) When all entries in a row are zeros, this row appears below all rows that contain
a non-zero entry.
(2) When two non-zero rows are compared, the first non-zero entry, called the leading
entry, in the upper row is to the left of the leading entry in the lower row.

Remark
It follows from these properties that all entries in a column beneath a leading entry
must be 0. For otherwise, (1) or (2) would be violated.

EXAMPLE 10 Determine which of the following matrices are in row echelon form. For each matrix
that is not in row echelon form, explain why it is not in row echelon form.

(b) [ 0  0  1   1  2 ]
    [ 0  0  0  -3  1 ]
    [ 0  0  0   0  0 ]

(d) [� 1 2 -1
-�]
�2 � �
3 -
3 ] 3 4

Solution: The matrices in (a) and (b) are both in REF. The matrix in (c) is not in REF
since the leading entry in the second row is to the right of the leading entry in the
third row. The matrix in (d) is not in REF since the leading entry in the second row is
beneath the leading entry in the first row.

Any matrix can be row reduced to row echelon form by using the following steps.
First, consider the first column of the matrix; if it consists entirely of zero entries,
move to the next column. If it contains some non-zero entry,
entirely of zero entries, move to the next column. If it contains some non-zero entry,
interchange rows (if necessary) so that the top entry in the column is non-zero. Of
course, the column may contain multiple non-zero entries. You can use any of these
non-zero entries, but some choices will make your calculations considerably easier
than others; see the Remarks below. We will call this entry a pivot. Use elementary row
operations of type (3) to make all entries beneath the pivot into zeros. Next, consider
the submatrix consisting of all columns to the right of the column we have just worked
on and all rows below the row with the most recently obtained leading entry. Repeat
74 Chapter 2 Systems of Linear Equations

the procedure described for this submatrix to obtain the next pivot with zeros below
it. Keep repeating the procedure until we have "used up" all rows and columns of the
original matrix. The resulting matrix is in row echelon form.
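The procedure just described can be sketched as a short program. The following Python function is our own illustration, not part of the text; the name row_echelon_form is our choice, and it uses exact Fraction arithmetic so that, unlike the floating-point computations discussed at the end of this section, no round-off occurs.

```python
from fractions import Fraction

def row_echelon_form(rows):
    """Row reduce a matrix (a list of lists of numbers) to a row echelon form,
    following the algorithm in the text: pick a pivot column, swap a non-zero
    entry to the top, zero out everything below it, then recurse on the
    remaining submatrix."""
    M = [[Fraction(x) for x in row] for row in rows]
    m, n = len(M), len(M[0])
    pivot_row = 0
    for col in range(n):
        # find a row at or below pivot_row with a non-zero entry in this column
        pivot = next((r for r in range(pivot_row, m) if M[r][col] != 0), None)
        if pivot is None:
            continue                                    # column of zeros: move right
        M[pivot_row], M[pivot] = M[pivot], M[pivot_row] # type (2) step
        for r in range(pivot_row + 1, m):               # type (3) steps below the pivot
            c = M[r][col] / M[pivot_row][col]
            M[r] = [a - c * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
        if pivot_row == m:
            break
    return M

# The augmented matrix from Example 11 of this section:
ref = row_echelon_form([[1, 1, 0, 1], [0, 1, 1, 2], [1, 2, 1, -2]])
for row in ref:
    print(row)
```

On this input the function reproduces the row echelon form found by hand in Example 11, including the row that signals inconsistency.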

EXAMPLE 11 Row reduce the augmented matrix of the following system to row echelon form and
use it to determine all solutions of the system:

x1 + x2 = 1
x2 + x3 = 2
x1 + 2x2 + x3 = -2

Solution: Write the augmented matrix of the system and row reduce:

[ 1  1  0 |  1 ]                 [ 1  1  0 |  1 ]                 [ 1  1  0 |  1 ]
[ 0  1  1 |  2 ]               ~ [ 0  1  1 |  2 ]               ~ [ 0  1  1 |  2 ]
[ 1  2  1 | -2 ]  R3 + (-1)R1    [ 0  1  1 | -3 ]  R3 + (-1)R2    [ 0  0  0 | -5 ]

Observe that when we write the system of linear equations represented by the aug­
mented matrix, we get

x1 + x2 = 1
x2 + x3 = 2
0 = -5

Clearly, the last equation is impossible. This means we cannot find values of x1, x2, x3
that satisfy all three equations. Hence, this system has no solution.

Remarks

1. Although the previous algorithm will always work, it is not necessarily the
fastest or easiest method to use for any particular matrix. In principle, it does
not matter which non-zero entry is chosen as the pivot in the procedure just
described. In practice, it can have considerable impact on the amount of work
required and on the accuracy of the result. The ability to row reduce a general
matrix to REF by hand quickly and efficiently comes only with a lot of practice.
Note that for hand calculations on simple integer examples, it is sensible to go
to some trouble to postpone fractions because avoiding fractions may reduce
both the effort required and the chance of making errors.

2. Observe that the row echelon form for a given matrix A is not unique. In partic­
ular, every non-zero matrix has infinitely many row echelon forms that are all
row equivalent. However, it can be shown that any two row echelon forms for
the same matrix A must agree on the position of the leading entries. (This fact
may seem obvious, but it is not easy to prove. It follows from Problem F6 in
the Chapter 4 Further Problems.)

We have now seen that a system of linear equations may have exactly one solution,
infinitely many solutions, or no solution. We will now discuss this in greater detail.

Consistent Systems and Unique Solutions


We shall see that it is often important to be able to recognize whether a given system
has a solution and, if so, how many solutions. A system that has at least one solution is
called consistent, and a system that does not have any solutions is called inconsistent.
To illustrate the possibilities, consider a system of three linear equations in three
unknowns. Each equation can be considered as the equation of a plane in ℝ³; call the
three planes P1, P2, and P3. A solution of the system determines a point common to all
three planes. Figure 2.1.1 illustrates an inconsistent system: there is no point common to all three
planes. Figure 2.1.2 illustrates a unique solution: all three planes intersect in exactly
one point. Figure 2.1.3 demonstrates a case where there are infinitely many solutions.

Figure 2.1.1 Two cases where three planes have no common point of intersection: the
corresponding system is inconsistent.

Figure 2.1.2 Three planes with one intersection point: the corresponding system of
equations has a unique solution.

Figure 2.1.3 Three planes that meet in a common line: the corresponding system has
infinitely many solutions.

Row echelon form allows us to answer questions about consistency and uniqueness.
In particular, we have the following theorem.

Theorem 1 Suppose that the augmented matrix [A | b] of a system of linear equations is row
equivalent to [S | c], which is in row echelon form.
(1) The given system is inconsistent if and only if some row of [S | c] is of the
form [ 0 0 ··· 0 | c ], with c ≠ 0.
(2) If the system is consistent, there are two possibilities. Either the number of
pivots in S is equal to the number of variables in the system and the system has
a unique solution, or the number of pivots is less than the number of variables
and the system has infinitely many solutions.

Proof: (1) If [S | c] contains a row of the form [ 0 0 ··· 0 | c ], where c ≠ 0,
then this corresponds to the equation 0 = c, which clearly has no solution. Hence, the
system is inconsistent. On the other hand, if it contains no such row, then each row
must either be of the form [ 0 0 ··· 0 | 0 ], which corresponds to an equation
satisfied by any values of x1, ..., xn, or else contains a pivot. We may ignore the rows
that consist entirely of zeros, leaving only rows with pivots. In the latter case, the
corresponding system can be solved by assigning arbitrary values to the free variables
and then determining the remaining variables by back-substitution. Thus, if there is no
row of the form [ 0 0 ··· 0 | c ], the system cannot be inconsistent.
(2) Now consider the case of a consistent system. The number of leading variables
cannot be greater than the number of columns in the coefficient matrix; if it is equal,
then each variable is a leading variable and thus is determined uniquely by the system
corresponding to [S | c]. If some variables are not leading variables, then they are
free variables, and they may be chosen arbitrarily. Hence, there are infinitely many
solutions. ■

Remark

As we will see later in the text, sometimes we are only interested in whether a system
is consistent or inconsistent or in how many solutions a system has. We may not nec­
essarily be interested in finding a particular solution. In these cases, Theorem 1 or the
related theorem in Section 2.2 is very useful.
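Theorem 1 can be applied mechanically once a matrix is in REF. The following Python sketch is our own illustration (the function name classify is hypothetical): it looks for a row [ 0 0 ··· 0 | c ] with c ≠ 0, and otherwise compares the number of pivots with the number of variables, exactly as in the theorem.

```python
def classify(ref_augmented):
    """Classify a system whose augmented matrix [S | c] is already in row
    echelon form. Returns 'inconsistent', 'unique', or 'infinite'."""
    n_vars = len(ref_augmented[0]) - 1      # last column is the right-hand side
    pivots = 0
    for row in ref_augmented:
        lead = next((j for j, a in enumerate(row) if a != 0), None)
        if lead is None:
            continue                        # a zero row gives no information
        if lead == n_vars:
            return "inconsistent"           # row [0 ... 0 | c] with c != 0
        pivots += 1                         # this row contributes a pivot
    return "unique" if pivots == n_vars else "infinite"

print(classify([[1, 1, 0, 1], [0, 1, 1, 2], [0, 0, 0, -5]]))   # inconsistent
print(classify([[1, 1, -2, 4], [0, 1, 1, 1], [0, 0, -1, 1]]))  # unique
print(classify([[1, 0, 2, 1, 14], [0, 0, 1, 2, 5]]))           # infinite
```

The three sample inputs are the final matrices of Examples 11, 7, and 8 respectively, so the answers agree with what was found by hand in this section.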

Some Shortcuts and Some Bad Moves


When carrying out elementary row operations, you may get weary of rewriting the
matrix every time. Fortunately, we can combine some elementary row operations in
one rewriting. For example,

[ 1  1  -2 ]                 [ 1  1  -2 ]
[ 1  3  -1 ]  R2 + (-1)R1  ~ [ 0  2   1 ]
[ 2  1  -5 ]  R3 + (-2)R1    [ 0 -1  -1 ]

Choosing one particular row (in this case, the first row) and adding multiples of it to
several other rows is perfectly acceptable. There are other elementary row operations
that can be combined, but these should not be used until one is extremely comfortable

with row reducing. This is because some combinations of steps do cause errors. For
example,

[ 1  1  3 ]  R1 + (-1)R2  ~ [ 0  -1  -1 ]   (WRONG!)
[ 1  2  4 ]  R2 + (-1)R1    [ 0   1   1 ]

This is nonsense because the final matrix should have a leading 1 in the first column.
By performing one elementary row operation, we change one row; thereafter we must
use that row in its new changed form. Thus, when performing multiple elementary row
operations in one step, make sure that you are not modifying a row that you are using
in another elementary row operation.

A Word Problem
To illustrate the application of systems of equations to word problems, we give a simple
example. More interesting applications usually require a fair bit of background. In
Section 2.4 we discuss two applications from physics/engineering.

EXAMPLE 12 A boy has a jar full of coins. Altogether there are 180 nickels, dimes, and quarters. The
number of dimes is one-half of the total number of nickels and quarters. The value of
the coins is $16.00. How many of each kind of coin does he have?
Solution: Let n be the number of nickels, d the number of dimes, and q the number
of quarters. Then
n + d + q = 180
The second piece of information we are given is that

1
d= 2cn+q)

We rewrite this into standard form for a linear equation:

n - 2d + q = 0

Finally, we have the value of the coins, in cents:

5n + 10d + 25q = 1600

Thus, n, d, and q satisfy the system of linear equations:

n + d + q = 180
n - 2d + q = 0
5n + 10d + 25q = 1600

Write the augmented matrix and row reduce:

[ 1   1   1 |  180 ]                 [ 1   1   1 |  180 ]
[ 1  -2   1 |    0 ]  R2 + (-1)R1  ~ [ 0  -3   0 | -180 ]  (-1/3)R2
[ 5  10  25 | 1600 ]  R3 + (-5)R1    [ 0   5  20 |  700 ]  (1/5)R3

  [ 1  1  1 | 180 ]                 [ 1  1  1 | 180 ]
~ [ 0  1  0 |  60 ]               ~ [ 0  1  0 |  60 ]
  [ 0  1  4 | 140 ]  R3 + (-1)R2    [ 0  0  4 |  80 ]

EXAMPLE 12 According to Theorem 1, the system is consistent with a unique solution. In particular,

(continued) writing the final augmented matrix as a system of equations, we get

n + d + q = 180

d = 60

4q = 80

So, by back-substitution, we get q = 20, d = 60, n = 180 - d - q = 100. Hence, the


boy has 100 nickels, 60 dimes, and 20 quarters.
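Since the coin system is consistent with a unique solution, it can also be handed to a linear solver. The sketch below (Python with NumPy; our own supplement to the text) solves the same three equations, with the coin values in cents as in the example.

```python
import numpy as np

# The coin system from Example 12:
#   n +  d +  q = 180      (180 coins in total)
#   n - 2d +  q = 0        (dimes are half of nickels plus quarters)
#  5n + 10d + 25q = 1600   (total value, in cents)
A = np.array([[1.0,  1.0,  1.0],
              [1.0, -2.0,  1.0],
              [5.0, 10.0, 25.0]])
b = np.array([180.0, 0.0, 1600.0])

n, d, q = np.linalg.solve(A, b)
print(f"nickels = {n:.0f}, dimes = {d:.0f}, quarters = {q:.0f}")
```

Note that np.linalg.solve requires a square coefficient matrix with a unique solution; for a system with free variables, as in Example 9, a row-reduction routine would be needed instead.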

A Remark on Computer Calculations


In computer calculations, the choice of the pivot may affect accuracy. The problem
is that real numbers are represented to only a finite number of digits in a computer,
so inevitably some round-off or truncation errors occur. When you are doing a large
number of arithmetic operations, these errors can accumulate, and they can be particularly
serious if at some stage you subtract two nearly equal numbers. The following
example gives some idea of the difficulties that might be encountered.
The system

0.1000x1 + 0.9990x2 = 1.000
0.1000x1 + 1.000x2 = 1.006

is easily found to have solution x2 = 6.000, x1 = -49.94. Notice that the coefficients
were given to four digits. Suppose all entries are rounded to three digits. The system
becomes

0.100x1 + 0.999x2 = 1.00
0.100x1 + 1.00x2 = 1.01

The solution is now x2 = 10, x1 = -89.9. Notice that despite the fact that there was
only a small change in one term on the right-hand side, the resulting solution is not
close to the solution of the original problem. Geometrically, this can be understood
by observing that the solution is the intersection point of two nearly parallel lines;
therefore, a small displacement of one line causes a major shift of the intersection
point. Difficulties of this kind may arise in higher-dimensional systems of equations in
real applications.
Carefully choosing pivots in computer programs can reduce the error caused by
these sorts of problems. However, some matrices are ill conditioned; even with high­
precision calculations, the solutions produced by computers with such matrices may
be unreliable. In applications, the entries in the matrices may be experimentally deter­
mined, and small errors in the entries may result in large errors in calculated solutions,
no matter how much precision is used in computation. To understand this problem bet­
ter, you need to know something about sources of error in numerical computation­
and more linear algebra. We shall not discuss it further in this book, but you should be
aware of the difficulty if you use computers to solve systems of linear equations.
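The sensitivity described above is easy to reproduce. The following Python/NumPy sketch (our own supplement) solves the four-digit system and the rounded system from the text, and also reports the condition number of the coefficient matrix, a standard numerical measure of how strongly small input changes are amplified.

```python
import numpy as np

# The two nearly parallel systems from the text: rounding the right-hand
# side to three digits shifts the intersection point dramatically.
A1 = np.array([[0.1000, 0.9990],
               [0.1000, 1.0000]])
b1 = np.array([1.000, 1.006])
b2 = np.array([1.00, 1.01])          # same entries rounded to three digits

print(np.linalg.solve(A1, b1))       # approximately [-49.94,  6.0]
print(np.linalg.solve(A1, b2))       # approximately [-89.9,  10.0]
print(np.linalg.cond(A1))            # large: the lines are nearly parallel
```

A large condition number warns that solutions computed from slightly perturbed data may be far apart, which is exactly the geometric picture of two nearly parallel lines given in the text.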
Section 2.1 Exercises 79

PROBLEMS 2.1
Practice Problems
Al Solve each of the following systems by back­ 2 0 2 0
substitution and write the general solution in stan­ 2 3 4
(d)
dard form. 4 9 16
(a) x1 -3x2 = 5 3 6 13 20
0 2
X2 = 4
1 2 1
(e)
3 -1 -4 1
(b) X1 + 2X2 -X3 7
2 3 6
=

X3 = 6 3 1 8 2 4
0 3 0 1
(c) x1 + 2 3
3x2 - x 4 (f)
= 0 2 -2 4 3
X2 + Sx3 = 2 -4 11 3 8
X3 = 2 A4 Each of the following is an augmented matrix of a
system of linear equations. Determine whether the

l
(d) x1 - 2x2 +X3 + 4 4
x = 7 system is consistent. If it is, determine the general
solution.
X2 X4 -3

� ; -�
- =

2 -1 2
X3 +X4 = 2

A2 Which of the following matrices are in row eche­

n �l
lon form? For each matrix not in row echelon form,

[� i _; �1
explain why it is not.

� �
l
0
(a) A = -
1

[� � i �1
0 0 0 3

� -� � � l
0 0 1 3

(b) B =

[� � -� -�1
0 0 0 0
0 0 -2
-
1 0 1 -1 0
(c) C =

0 1 0 0 0
0 1 0 3 (e)
0 0 0 0 0

[ H � :1
0 0 0 0 0
(d)D=
AS For each of the following systems of linear equa-
tions:
A3 Row reduce the following matrices to obtain a row
(i) Write the augmented matrix.
equivalent matrix in row echelon form. Show your
(ii) Obtain a row equivalent matrix in row echelon
steps.

(a) [� -� �] form.
(iii) Determine whether the system is consistent or

[i 1
inconsistent. If it is consistent, determine the
8
=i � number of parameters in the general solution.
(b) 3
(iv) If the system is consistent, write its general so­
-1 1 0 2
lution in standard form.
1 -1 -1
(a) 3x1 -Sx2 2
2 -1 -2
=

(c)
5 0 0 Xt + 2X 2 = 4
3 4 5

(b ) Xi +2X2 +X3 5

[� !l
=
4 -3
2xi - 3x2 + 2x3 = 6 ( a) b 7
0 a

(c ) xi +2x2- 3x3 = 8 1 -1 4 -2 5
0 1 2 3 4
xi +3x2 - 5x3 = 11 (b)
0 0 d 5 7
2xi +5x2 - 8x3 19
0 0 0 cd c
=

(d)
A 7 A fruit seller has apples, bananas, and oranges. Al­
-3xi +6x2 +l6x3 = 36
together he has 1500 pieces of fruit. On average,
xi - 2x2 - 5x3 = -11
each apple weighs 120 grams, each banana weighs
2xi - 3x2 - 8x3 = -17 1 40 grams, and each orange weighs 160 grams.
He can sell apples for 2 5 cents each, bananas for
( e) Xi +2x2-X3 = 4 20 cents each, and oranges for 30 cents each. If
the fruit weighs 20 8 kilograms, and the total sell­
2xi +5x2 +x3 = 10
ing price is $3 80, how many of each kind of fruit
4xi +9x2-x3 = 19
does the fruit seller have?

(f) A8 A student is taking courses in algebra, calculus, and


xi +2x2 - 3x3 = -5
physics at a college where grades are given in per­
2x1 + 4x2 -6x3 +X4 = -8
centages. To determine her standing for a physics
6x1 +13x2- 17x3 +4x4 = - 21 prize, a weighted average is calculated based on
50% of the student's physics grades, 30% of her
( g) +X5 = 2 calculus grade, and 20% of her algebra grade; the

x, +2x2- 3x3 +X4 +4xs = 1 weighted average is 84. For an applied mathemat­
ics prize, a weighted average based on one-third
2x1 +4x2- 5x3 +3x4 +8xs = 3
of each of the three grades is calculated to be 8 3.
2x, +5x2 - 7x3 +3x4 +lOxs 5
For a pure mathematics prize, her average based on
=

50% of her calculus grade and 50% of her algebra


A6 Each of the following matrices is an augmented
grade is 8 2.5. What are her grades in the individual
matrix of a system of linear equations. Determine
courses?
the values of a, b, c, d for which the systems
are consistent. If a system is consistent, determine
whether the system has a unique solution.

Homework Problems

Bl Solve each of the following systems by back­ (c) xi +3x2-x3 +2x4 = -1


substitution and write the general solution in stan­
X2-X3 +2X4 = -1
dard form.
X 3 + 3X4 3
(a )
=

x1 - 2x2 -x3 = 5

X2 +3x3 4 (d) xi +3x2- 2x3 - 2x4 +2xs -2


=
=

X3 -2
x2- 2x3 - 2x4 +3xs
=
= 4

X3 +X4 - 2xs -3
(b )
=

X1 - 3X2 +X3 = 1

X2 +2X3 = -1

[� � � � �l
B2 Which of the following matrices are in row eche­
lon form? For each matrix not in row echelon form, (c)
1 2

[� � -� �1
explain why it is not.
1 0 0 -1
(a) A 0 0 1 -1 0
(d)
=

0 0 0 1 0 0 0 0 0

[� � i � l
0 0 0 0 0
(b) = BS For each of the following systems of linear equa­
B
tions:

[l � � �l
(i) Write the augmented matrix.
(c) C = (ii) Obtain a row equivalent matrix in row echelon

[� � � �1
form.
(iii) Determine whether the system is consistent
(d) D = or inconsistent. If it is consistent, then deter­
0 0 -3 mine the number of parameters in the general
solution.
B3 Row reduce the following matrices to obtain a row
(iv) If the system is consistent, write its general
equivalent matrix in row echelon form. Show your
solution in standard form.

[� H ��l
steps.
(a) 2x1+X2 +Sx3 = -4
X1+X2 +X3 -2
(a)
=

+X2 - X3 6

[! r Lr �l
(b) 2X1 =

x1 - 2x2 - 2x3 = 1
�) -
-xi +12x2 +8x3 = 7
2 0 7
1 -1 2 (c) X2 +X3 = 2
(c)
3 2 0 12
Xt +X2 +X3 3
6 4 -1 25
=

] 2 1 3 0 2x1+3x2 + 3x3 = 9
2 5 2 6 1
(d)
3 7 4 9 3
(d) = -7
2 6 2 6 5 2x1+4x2 +X3 = -16

B4 Each of the following is an augmented matrix of a X1+2X2 +X3 = 9


system of linear equations. Determine whether the
(e) X1 +X2 +2X3 +X4 3

l
system is consistent. If it is, determine the general =

x, + 2x2 +4x3 +X4 7

[
solution. =

1 -1 0 1
X1 +X4 -21
0
=

� � -! -�
(a)
0
(f) X1 +X2 +2X3 +X4 = 1
1 0 1

l
1 2 3 x1 +2x2 +4x3 +X4 = -1
0 0 X1 +X4 = 3

B6 Each of the following matrices is an augmented matrix of a system of linear equations. Determine the values of a, b, c, d for which the systems are consistent. In cases where the system is consistent, determine whether the system has a unique solution.

(a) (matrix garbled in the source)

(b) [ 1  0  2   5 |   2 ]
    [ 0  c  c   0 |     ]
    [ 0  0  c   0 |   c ]
    [ 0  0  0  cd | c+d ]

B7 A bookkeeper is trying to determine the prices that a manufacturer was charging. He examines old sales slips that show the number of various items shipped and the total price. He finds that 20 armchairs, 10 sofa beds, and 8 double beds cost $15,200; 15 armchairs, 12 sofa beds, and 10 double beds cost $15,700; and 12 armchairs, 20 sofa beds, and 10 double beds cost $19,600. Determine the cost for each item or explain why the sales slips must be in error.

B8 (Requires knowledge of forces and moments.) A rod 10 m long is pivoted at its centre; it swings in the horizontal plane. Forces of magnitude F1, F2, F3 are applied perpendicular to the rod in the directions indicated by the arrows in the diagram below; F1 is applied to the left end of the rod, F2 is applied at a point 2 m to the right of centre and F3 at a point 4 m to the right. The total force on the pivot is zero, the moment about the centre is zero, and the sum of the magnitudes of forces is 80 newtons. Write a system of three equations for F1, F2, and F3; write the corresponding augmented matrix; and use the standard procedure to find F1, F2, and F3.

(Diagram: the rod with the positions -5, 0, 2, and 4 marked along its length.)

B9 Students at Linear University write a linear algebra examination. An average mark is computed for 100 students in business, an average is computed for 300 students in liberal arts, and an average is computed for 200 students in science. The average of these three averages is 85%. However, the overall average for the 600 students is 86%. Also, the average for the 300 students in business and science is 4 marks higher than the average for the students in liberal arts. Determine the average for each group of students by solving a system of linear equations.
of students by solving a system of linear equations.
the horizontal plane. Forces of magnitude F1, F2,

Computer Problems

C1 Use computer software to determine a matrix in row echelon form that is row equivalent to each of the following matrices.

(a) A = [ 35  17  45  65 ]
        [ 18 -61  13   7 ]
        [ 23  19   6  41 ]

(b) B = [ -25 -36  37  41  22 ]
        [  50 -38  49  13  45 ]
        [  27 -23   6 -21  27 ]

C2 Redo Problems A3, A5, B3, and B5 using a computer.

C3 Suppose that a system of linear equations has the augmented matrix

[  1.121   -2.015   2.131 | 4.612 ]
[  2.501    3.214   4.130 | 3.115 ]
[ -1.639  -12.473  -1.827 | 8.430 ]

(a) Determine the solution of this system.
(b) Change the entry in the second row and third column from 4.130 to 4.080 and find the solution.
Section 2.2 Reduced Row Echelon Form, Rank, and Homogeneous Systems 83

Conceptual Problems

D1 Consider the linear system in x, y, z, and w:

     x +  y      +  w = b
    2x + 3y +  z + 5w = 6
               z +  w = 4
         2y + 2z + aw = 1

For what values of the constants a and b is the system
(a) Inconsistent?
(b) Consistent with a unique solution?
(c) Consistent with infinitely many solutions?

D2 Recall that in ℝ³, two planes n · x = c and m · x = d are parallel if and only if
the normal vector m is a non-zero multiple of the normal vector n. Row reduce a
suitable augmented matrix to explain why two parallel planes must either coincide or
else have no points in common.

2.2 Reduced Row Echelon Form, Rank, and


Homogeneous Systems
To determine the solution of a system of linear equations, elimination with back­
substitution, as described in Section 2.1, is the standard basic procedure. In some
situations and applications, however, it is advantageous to carry the elimination steps
(elementary row operations) as far as possible to avoid the need for back-substitution.
To see what further elementary row operations might be worthwhile, recall that the
Gaussian elimination procedure proceeds by selecting a pivot and using elementary
row operations to create zeros beneath the pivot. T he only further elimination steps
that simplify the system are steps that create zeros above the pivot.

EXAMPLE 1 In Example 2.1.7, we row reduced the augmented matrix for the original system to a
row equivalent matrix in row echelon form. That is, we found that

    [ 1 1 -2 | 4 ]   [ 1 1 -2 | 4 ]
    [ 1 3 -1 | 7 ] ~ [ 0 1  1 | 1 ]
    [ 2 1 -5 | 7 ]   [ 0 0 -1 | 1 ]

Instead of using back-substitution to solve the system as we did in Example 2.1.7, we
instead perform the following elementary row operations:

    [ 1 1 -2 | 4 ]            [ 1 1 -2 |  4 ]  R1 + 2R3   [ 1 1 0 |  2 ]  R1 - R2   [ 1 0 0 |  0 ]
    [ 0 1  1 | 1 ]          ~ [ 0 1  1 |  1 ]  R2 - R3  ~ [ 0 1 0 |  2 ]          ~ [ 0 1 0 |  2 ]
    [ 0 0 -1 | 1 ]  (-1)R3    [ 0 0  1 | -1 ]             [ 0 0 1 | -1 ]            [ 0 0 1 | -1 ]

This is the augmented matrix for the system x1 = 0, x2 = 2, and x3 = -1, which gives
us the solution we found in Example 2.1.7.

This system has been solved by complete elimination. The leading variable in
the j-th equation has been eliminated from every other equation. This procedure is
often called Gauss-Jordan elimination to distinguish it from Gaussian elimination
with back-substitution. Observe that the elementary row operations used in
Example 1 are exactly the same as the operations performed in the back-substitution in
Example 2.1.7.
A matrix corresponding to a system on which Gauss-Jordan elimination has been
carried out is in a special kind of row echelon form.

Definition A matrix R is said to be in reduced row echelon form (RREF) if


Reduced Row (1) It is in row echelon form.
Echelon Form (RREF) (2) All leading entries are 1, called a leading 1.
(3) In a column with a leading 1, all the other entries are zeros.

As in the case of row echelon form, it is easy to see that every matrix is row equivalent
to a matrix in reduced row echelon form via Gauss-Jordan elimination. However, in
this case we get a stronger result.

Theorem 1 For any given matrix A there is a unique matrix in reduced row echelon form that is
row equivalent to A.

Proof: You are asked to prove that there is only one matrix in reduced row echelon
form that is row equivalent to A in Problem F6 of the Chapter 4 Further Problems. •

EXAMPLE 2   Obtain the matrix in reduced row echelon form that is row equivalent to the matrix

    A = [ 1 1 2 -2 2 ]
        [ 3 3 5  0 2 ]

Solution: Row reducing the matrix, we get

    [ 1 1 2 -2 2 ]            [ 1 1  2 -2  2 ]  R1 + 2R2   [ 1 1  0 10 -6 ]           [ 1 1 0 10 -6 ]
    [ 3 3 5  0 2 ]  R2 - 3R1  [ 0 0 -1  6 -4 ]           ~ [ 0 0 -1  6 -4 ]  (-1)R2 ~ [ 0 0 1 -6  4 ]

This final matrix is in reduced row echelon form.

EXERCISE 1 Row reduce the matrices in Examples 2.1.9 and 2.1.12 into reduced row echelon form.

Because of the uniqueness of the reduced row echelon form, we often speak of
the reduced row echelon form of a matrix or of reducing a matrix to its reduced row
echelon form.
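Gauss-Jordan elimination is mechanical enough to sketch in a few lines of Python. The routine below is our own illustrative helper (not from the text), using exact fractions to avoid round-off; applied to the matrix of Example 2 it reproduces the reduced row echelon form found there.

```python
from fractions import Fraction

def rref(rows):
    """Return the reduced row echelon form of a matrix given as a list of rows."""
    m = [[Fraction(x) for x in row] for row in rows]
    nrows, ncols = len(m), len(m[0])
    lead = 0
    for r in range(nrows):
        if lead >= ncols:
            break
        i = r
        while m[i][lead] == 0:            # look for a pivot at or below row r
            i += 1
            if i == nrows:                # no pivot in this column; move right
                i = r
                lead += 1
                if lead == ncols:
                    return m
        m[i], m[r] = m[r], m[i]           # swap the pivot row into place
        pivot = m[r][lead]
        m[r] = [x / pivot for x in m[r]]  # scale so the leading entry is 1
        for j in range(nrows):            # clear the pivot column everywhere else
            if j != r and m[j][lead] != 0:
                f = m[j][lead]
                m[j] = [a - f * b for a, b in zip(m[j], m[r])]
        lead += 1
    return m

# the matrix of Example 2
R = rref([[1, 1, 2, -2, 2],
          [3, 3, 5, 0, 2]])
```

By uniqueness of the RREF (Theorem 1), any correct sequence of elementary row operations must produce this same matrix.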

Remarks

1. In general, reducing an augmented matrix to reduced row echelon form to solve


a system is not more efficient than the method used in Section 2.1. As previously
mentioned, both methods are essentially equivalent for solving small systems
by hand.

2. When row reducing to reduced row echelon form by hand, it seems more natural
not to obtain a row echelon form first. Instead, you might obtain zeros below
and above any leading 1 before moving on to the next leading 1. However, for
programming a computer to row reduce a matrix, this is a poor strategy because
it requires more multiplications and additions than the previous strategy. See
Problem F2 at the end of the chapter.

Rank of a Matrix
We saw in Theorem 2.1.1 that the number of leading entries in a row echelon form
of the augmented matrix of a system determines whether the system is consistent or
inconsistent. It also determines how many solutions (one or infinitely many) the system
has if it is consistent. Thus, we make the following definition.

Definition The rank of a matrix A is the number of leading 1s in its reduced row echelon form
Rank and is denoted by rank(A).

The rank of A is also equal to the number of leading entries in any row echelon
form of A. However, since the row echelon form is not unique, it is more tiresome to
give clear arguments in terms of row echelon form. In Section 3.4 we shall see a more
conceptual way of describing rank.

EXAMPLE3 The rank of the matrix in Example 1 is 3 since the RREF of the matrix has three
leading 1s. The rank of the matrix in Example 2 is 2 as the RREF of the matrix has
two leading 1s.

EXERCISE 2   Determine the rank of each of the following matrices:

    (a) A = [ · · ]       (b) B = [ 1 1 0 1 ]
            [ 0 1 ]               [ 0 0 0 0 ]
            [ 0 1 ]               [ 0 0 0 3 ]
            [ 0 0 ]               [ 0 0 0 2 ]

Theorem 2   Let [A | b] be a system of m linear equations in n variables.
    (1) The system is consistent if and only if the rank of the coefficient matrix A is
        equal to the rank of the augmented matrix [A | b].

Theorem 2 (2) If the system is consistent, then the number of parameters in the general solu­
(continued) tion is the number of variables minus the rank of the matrix:

#of parameters = n - rank(A)

Proof: Notice that the first n columns of the reduced row echelon form of [A | b]
consist of the reduced row echelon form of A. By Theorem 2.1.1, the system is in-
consistent if and only if the reduced row echelon form of [A | b] contains a row of
the form [ 0 · · · 0 | 1 ]. But this is true if and only if the rank of [A | b] is greater
than the rank of A.
If the system is consistent, then the free variables are the variables that are not
leading variables of any equation in a row echelon form of the matrix. Thus, by def-
inition, there are n - rank(A) free variables and hence n - rank(A) parameters in the
general solution. •

Corollary 3   Let [A | b] be a system of m linear equations in n variables. Then [A | b] is consistent
for all b ∈ ℝ^m if and only if rank(A) = m.

Proof: This follows immediately from Theorem 2.

Homogeneous Linear Equations


Frequently systems of linear equations appear where all of the terms on the right-hand
side are zero.

Definition A linear equation is homogeneous if the right-hand side is zero. A system of linear
Homogeneous equations is homogeneous if all of the equations of the system are homogeneous.

Since a homogeneous system is a special case of the systems already discussed,


no new tools or techniques are needed to solve them. However, we normally work
only with the coefficient matrix of a homogeneous system since the last column of the
augmented matrix consists entirely of zeros.

EXAMPLE 4   Find the general solution of the homogeneous system

    2x1 +  x2        = 0
     x1 +  x2 -  x3  = 0
         -  x2 + 2x3 = 0

EXAMPLE 4    Solution: We row reduce the coefficient matrix of the system to RREF:
(continued)

    [ 2  1  0 ]  R1 ↔ R2   [ 1  1 -1 ]             [ 1  1 -1 ]  R1 + R2    [ 1 0  1 ]
    [ 1  1 -1 ]          ~ [ 2  1  0 ]  R2 - 2R1 ~ [ 0 -1  2 ]  (-1)R2   ~ [ 0 1 -2 ]
    [ 0 -1  2 ]            [ 0 -1  2 ]             [ 0 -1  2 ]  R3 - R2    [ 0 0  0 ]

This corresponds to the homogeneous system

    x1      +  x3 = 0
         x2 - 2x3 = 0

Hence, x3 is a free variable, so we let x3 = t ∈ ℝ. Then x1 = -x3 = -t, x2 = 2x3 = 2t,
and the general solution is

        [ -1 ]
    x = [  2 ] t,   t ∈ ℝ
        [  1 ]

Observe that every homogeneous system is consistent as the zero vector 0 will
certainly be a solution. We call 0 the trivial solution. Thus, as we will see frequently
throughout the text, when dealing with homogeneous systems, we are often mostly
interested in how many parameters are in the general solution. Of course, for this we
can apply Theorem 2.
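A homogeneous solution is also easy to verify numerically. The sketch below (our own check, in plain Python) confirms that every multiple of the basic solution (-1, 2, 1) from Example 4 satisfies all three equations:

```python
def lhs(x1, x2, x3):
    """Left-hand sides of the homogeneous system in Example 4."""
    return (2 * x1 + x2,
            x1 + x2 - x3,
            -x2 + 2 * x3)

# the general solution is x = t(-1, 2, 1); check several values of t
checks = [lhs(-t, 2 * t, t) for t in range(-3, 4)]
```

Each tuple in `checks` should be (0, 0, 0), including t = 0, the trivial solution.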

EXAMPLE 5   Determine the number of parameters in the general solution of the homogeneous
system

     x1 + 2x2 + 2x3 +  x4 +  4x5 = 0
    3x1 + 7x2 + 7x3 + 3x4 + 13x5 = 0
    2x1 + 5x2 + 5x3 + 2x4 +  9x5 = 0

Solution: We row reduce the coefficient matrix:

    [ 1 2 2 1  4 ]            [ 1 2 2 1 4 ]  R1 - 2R2   [ 1 0 0 1 2 ]
    [ 3 7 7 3 13 ]  R2 - 3R1  [ 0 1 1 0 1 ]           ~ [ 0 1 1 0 1 ]
    [ 2 5 5 2  9 ]  R3 - 2R1  [ 0 1 1 0 1 ]  R3 - R2    [ 0 0 0 0 0 ]

The rank of the coefficient matrix is 2 and the number of variables is 5. Thus, by
Theorem 2, there are 5 - 2 = 3 parameters in the general solution.
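Theorem 2's parameter count can be sketched in code. The `rank` helper below is our own (forward elimination with exact fractions, not code from the text); applied to Example 5's coefficient matrix it confirms n - rank(A) = 5 - 2 = 3 parameters:

```python
from fractions import Fraction

def rank(rows):
    """Count the pivots produced by forward elimination (Gaussian elimination)."""
    m = [[Fraction(x) for x in row] for row in rows]
    nrows, ncols = len(m), len(m[0])
    r = 0
    for col in range(ncols):
        # find a row at or below r with a nonzero entry in this column
        pivot = next((i for i in range(r, nrows) if m[i][col] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(r + 1, nrows):     # create zeros below the pivot
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
        if r == nrows:
            break
    return r

A = [[1, 2, 2, 1, 4],
     [3, 7, 7, 3, 13],
     [2, 5, 5, 2, 9]]
n_vars = 5
params = n_vars - rank(A)
```

The same count applies to any row echelon form of A, since every one has the same number of leading entries.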

EXERCISE 3 Find the general solution of the system in Example 5.



PROBLEMS 2.2
Practice Problems
Al Determine the RREF and rank of the following 1 2
matrices.

(a)
[! -i]
(b)
[� -3 2 !I
1
0
0
0
0
0
0 0 0
0

[� �I
0 0 0 0
(c)

(b) 0

1
0
- 21
0 0 0
0
-�]
(c)
[i � !] (d)
[� 40
0 0
0
0

[� ! -�1 (e)
[1� 0
0
1
0
-5
0 �I
1-1 -222 33
(d)

0 0 0 0
1 0 1 1 0 0
(f)
0 0 0 1 0

2 4 3 0 0 0 0
(e)

A3 For each homogeneous system, write the coefficient

(D [2: di matrix and determine the rank and number of pa-

x
2 2 -Sx 3
rameters in the general solution. Then determine

[31 -�2 �3 [�]


l
the general solution.

(g ) =
(a)

x1x1 4x22x2 -3xx3 33 = 0


0

21 312 13 442
+ + =

1 0 1 0

x3x11 x2x2 -9x3


+

0
=

2x1 x2 -S-7x3x3
0
(h )

31 83 22 3
(b) + =

0
0 0 5
+ =

0
1 5

xx3 11 -3-X2x2 8x2x33 -Sx4-3x4


+

2 7
=

0
(i)
0
6
(c) +

23xx11 -3- x2x22 7Sx3X3 -4x4


=

0
A2 Suppose that each of the following is the coefficient
+

-7x4
=

matrix of a homogeneous system already in RREF. + = 0


0

x2 2
x 3 2 x 4
For each matrix, determine the number of parame­ + =

ters in the general solution and write out the general

[� � -� �11
(d)

x12x1 xx22 2 Sx3Sx3 x3 X44 -3-xsxs 0


solution in standard form. + + =

+ + + 0

xi x2 4x3 2x4 - 2xs


( a)
=

0 0 0 + + + = 0
+ + + = 0

A4 Solve the following systems of linear equations


by row reducing the coefficient matrix to RREF.
(g)

x1 + 2 x 2 - 3x3 + X4 +
+

4xs
X5 == 2

=
Compare your steps with your solutions from the

==
2x1 + 4x2 - x
S 3 + 3x4 + 8xs = 3
Problem 2.1.AS.
5

[A b]
2x1 + Sx2 - x
7 3 + 3x4 + lOxs
(a) 3x1 - x
S 2 2

==
X1 2x2
+
AS Solve the system I by row reducing the aug­

[A ].
mented matrix to RREF. Then, without any further
(b) X1 + 2x2 + X3 5 operations, find the general solution to the homo­
geneous I0

A=[: ; nb=[�]
2x1 - 3x2 2x3

==
+ 6
-
(c) x1 + 2x2 - 3x3 8 (a)

x1 + 3x2 - x
S 3

= 11

A = u � J. b= HI
== -11
2x1 + Sx2 - 8x3 19
(b)

(d) -3x1 + 6x2 + 16x3 3 6


A= [-10 -1-1 ] -- [-14 ]
=
5 -2
x1 - 2x2 -x
S 3 (c) 'b

A= u _! �i =H b= m
...

-4 -1

==10
2x1 - 3x2 - 8x3 -17
(d)
( e) Xi + 2x2 - X3 4
2x1

4x1
+

+
Sx2 +x3

9x2 - X3 = 19

== -21
(f) x, + 2x2 - 3x3 -5
2x1 + 4x2 - 6x3 + X4 -8

6x1 + 13x2 - l7x3 + 4x4

Homework Problems

Bl Determine the RREF and rank of the following 1 -1 2


matrices.
(d)
-1
4
1 4
[� -ll
-3 3
(a) 3 -2 3

(b)
[f
(e )
[i d ;]
[:
2
(f) 3 2 1
(c) 4 3 2 1
0 -1 0

2 1 -2
(c)
2 2 4 3 -6
X1 + X2 + X3 - 2X4 = 0
(g) - 14x4=0
0 2 2 -4 2x1+7x2

3 2 4 2 -4 X1 +3X2

B2 Suppose that each of the following is the coefficient X1+4x2


matrix of a homogeneous system already in RREF.
F or each matrix, determine the number of parame- (d)
x1 +3x2 +X3 +X4 +2xs = 0

-�
ters in the general solution and write out the general
2x2 +X3 - X5 = 0
solution in standard form.

]
3 0 x1+2x2 +2x3 +X4 =0

(a)
[� 0
0
1
0
X1 +2x2 + X3 +X4 +X5 =0

[A I b J by row reducing the coef­


�]
0 -2 B4 Solve the system
(b)
[�
1
0
0
2
1
0
0
1
0
0 0 -3
ficient matrix to RREF. Then, without any further
operations, find the general solution to the homo­
geneous system [A I 0].
(c)
0
0
0
0
0
0
1
0
0
-5
0
0
0
1
0
4
1
0
(a) A= [ : � �]. [-�]
-2 -4
b=
-1

B3 F or each homogeneous system, write the coefficient 1 2 -4 10


matrix and determine the rank and number of pa­ -1 -5 -5 -1
rameters in the general solution. Then determine
(b) A= -4 9
'
_,

b
=
1
the general solution. -5 -4 0 8
(a)
x1 +5x2 - 3x3 =0 -1 4 -1 4
-1 -2 5 -2 5
3x1 +5x2 - 9x3 = 0 (c) A= -4 -1 2 2 'b
_,

=
-4
X1 +X2 - 3X3 = 0
5 4 1 8 5
3 1 4 2
6
(b) x1 +4x2 - 2x3 =0 4 4 -8 4 -4
(d) A= _,

1 -2 1 'b
=
2x1 - 3x3 =0 1 4 -6
4x1+ 8x2 - 7X3 = 0 3 3 2 -4 5 6

Computer Problems

C1 Determine the RREF of the augmented matrix of the system of linear equations

    2.01x + 3.45y + 2.23z = 4.13
    1.57x + 2.03y - 3.11z = 6.11
    2.23x + 7.10y - 4.28z = 0.47

If the system is consistent, determine the general solution.

C2 Determine the RREF of the matrices in Problem 2.1.C1.



Conceptual Problems

D1 We want to find a vector x ≠ 0 in ℝ³ that is simultaneously orthogonal to given
vectors a, b, c ∈ ℝ³.
(a) Write equations that must be satisfied by x.
(b) What condition must be satisfied by the rank of the matrix

        [ a1 a2 a3 ]
    A = [ b1 b2 b3 ]
        [ c1 c2 c3 ]

if there are to be non-trivial solutions? Explain.

D2 (a) Suppose that [coefficient matrix garbled in the source] is the coefficient
matrix of a homogeneous system of linear equations. Find the general solution of the
system and indicate why it describes a line through the origin.
(b) Suppose that a matrix A with two rows and three columns is the coefficient
matrix of a homogeneous system. If A has rank 2, then explain why the solution set
of the homogeneous system is a line through the origin. What could you say if
rank(A) = 1?
(c) Let u, v, and w be three vectors in ℝ⁴. Write conditions on a vector x ∈ ℝ⁴ such
that x is orthogonal to u, v, and w. (This should lead to a homogeneous system with
coefficient matrix C, whose rows are u, v, and w.) What does the rank of C tell us
about the set of vectors x that are orthogonal to u, v, and w?

D3 What can you say about the consistency of a system of m linear equations in
n variables and the number of parameters in the general solution if:
(a) m = 5, n = 7, the rank of the coefficient matrix is 4?
(b) m = 3, n = 6, the rank of the coefficient matrix is 3?
(c) m = 5, n = 4, the rank of the augmented matrix is 4?

D4 A system of linear equations has augmented matrix [garbled in the source; its
entries involve constants a and b]. For which values of a and b is the system
consistent? Are there values for which there is a unique solution? Determine the
general solution.
conditions on a vector 1 E JR4 such that 1 is

2.3 Application to Spanning and Linear


Independence
As discussed at the beginning of this chapter, solving systems of linear equations will
play an important role in much of what we do in the rest of the text. Here we will show
how to use the methods described in this chapter to solve some of the problems we
encountered in Chapter 1.

Spanning Problems
Recall that a vector v ∈ ℝⁿ is in the span of a set {v1, ..., vk} of vectors in ℝⁿ if and
only if there exist scalars t1, ..., tk ∈ ℝ such that

    v = t1 v1 + · · · + tk vk

Observe that this vector equation actually represents n equations (one for each compo-
nent of the vectors) in the k unknowns t1, ..., tk. Thus, it is easy to establish whether
a vector is in the span of a set; we just need to determine whether the corresponding
system of linear equations is consistent or not.

EXAMPLE 1
Determine whether the vector

        [ -2 ]                            { [ 1 ]   [  1 ]   [ 2 ]   [ -1 ] }
    v = [ -3 ]   is in the set    Span    { [ 1 ] , [ -1 ] , [ 1 ] , [ -3 ] }
        [  1 ]                            { [ 1 ]   [  5 ]   [ 4 ]   [  3 ] }

Solution: Consider the vector equation

         [ 1 ]        [  1 ]        [ 2 ]        [ -1 ]   [ -2 ]
    t1   [ 1 ]  + t2  [ -1 ]  + t3  [ 1 ]  + t4  [ -3 ] = [ -3 ]
         [ 1 ]        [  5 ]        [ 4 ]        [  3 ]   [  1 ]

Simplifying the left-hand side using vector operations and comparing corresponding
entries gives the system of linear equations

    t1 +  t2 + 2t3 -  t4 = -2
    t1 -  t2 +  t3 - 3t4 = -3
    t1 + 5t2 + 4t3 + 3t4 =  1

We row reduce the augmented matrix:

    [ 1  1 2 -1 | -2 ]           [ 1  1  2 -1 | -2 ]            [ 1  1  2 -1 | -2 ]
    [ 1 -1 1 -3 | -3 ] R2 - R1 ~ [ 0 -2 -1 -2 | -1 ]          ~ [ 0 -2 -1 -2 | -1 ]
    [ 1  5 4  3 |  1 ] R3 - R1   [ 0  4  2  4 |  3 ] R3 + 2R2   [ 0  0  0  0 |  1 ]

By Theorem 2.1.1, the system is inconsistent. Hence, there do not exist values of
t1, t2, t3, and t4 that satisfy the system of equations, so v is not in the span of the
vectors.
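This consistency test is easy to automate. As a sketch (the `is_consistent` helper is ours, not from the text), forward elimination on the augmented matrix of Example 1 exposes the row [ 0 0 0 0 | 1 ] that signals inconsistency:

```python
from fractions import Fraction

def is_consistent(aug):
    """Forward-eliminate an augmented matrix; inconsistent iff a row [0 ... 0 | nonzero] appears."""
    m = [[Fraction(x) for x in row] for row in aug]
    nrows, ncols = len(m), len(m[0])
    row = 0
    for col in range(ncols - 1):          # last column is the right-hand side
        pivot = next((i for i in range(row, nrows) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[row], m[pivot] = m[pivot], m[row]
        for i in range(row + 1, nrows):
            f = m[i][col] / m[row][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[row])]
        row += 1
    # consistent iff no row is all zeros except its last entry
    return all(any(x != 0 for x in r[:-1]) or r[-1] == 0 for r in m)

# the augmented matrix of Example 1: v = (-2, -3, 1) against the four vectors
aug = [[1,  1, 2, -1, -2],
       [1, -1, 1, -3, -3],
       [1,  5, 4,  3,  1]]
in_span = is_consistent(aug)
```

Here `in_span` is False, matching the conclusion above that v is not in the span.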

EXAMPLE 2
          [ 1 ]        [ -2 ]             [ 1 ]               [ -1 ]
Let v1 =  [ 2 ] , v2 = [  1 ] , and v3 =  [ 1 ] .  Write v =  [  1 ]  as a linear
          [ 1 ]        [  0 ]             [ 1 ]               [ -1 ]
combination of v1, v2, and v3.

Solution: We need to find scalars t1, t2, and t3 such that t1 v1 + t2 v2 + t3 v3 = v.
Simplifying the left-hand side using vector operations gives the system of linear
equations:

     t1 - 2t2 + t3 = -1
    2t1 +  t2 + t3 =  1
     t1       + t3 = -1

Row reducing the augmented matrix to RREF gives

    [ 1 -2 1 | -1 ]   [ 1 0 0 |  2 ]
    [ 2  1 1 |  1 ] ~ [ 0 1 0 |  0 ]
    [ 1  0 1 | -1 ]   [ 0 0 1 | -3 ]

The solution is t1 = 2, t2 = 0, and t3 = -3. Hence, we get 2v1 + 0v2 - 3v3 = v.

EXERCISE 1
Determine whether v = [·] is in the set Span { [·], [·], [·] }.

EXAMPLE 3
                 { [ 1 ]   [ 1 ]   [ 3 ] }
Consider  Span   { [ 2 ] , [ 1 ] , [ 5 ] } .   Find a homogeneous system of linear
                 { [ 1 ]   [ 3 ]   [ 5 ] }
                 { [ 1 ]   [ 1 ]   [ 3 ] }
equations that defines this set.

Solution: A vector x = [x1, x2, x3, x4]^T is in this set if and only if for some
t1, t2, and t3,

         [ 1 ]        [ 1 ]        [ 3 ]   [ x1 ]
    t1   [ 2 ]  + t2  [ 1 ]  + t3  [ 5 ] = [ x2 ]
         [ 1 ]        [ 3 ]        [ 5 ]   [ x3 ]
         [ 1 ]        [ 1 ]        [ 3 ]   [ x4 ]

Simplifying the left-hand side gives us a system of equations with augmented matrix

    [ 1 1 3 | x1 ]
    [ 2 1 5 | x2 ]
    [ 1 3 5 | x3 ]
    [ 1 1 3 | x4 ]

Row reducing this matrix gives

    [ 1 1 3 | x1 ]   [ 1 1 3 | x1               ]
    [ 2 1 5 | x2 ] ~ [ 0 1 1 | 2x1 - x2         ]
    [ 1 3 5 | x3 ]   [ 0 0 0 | -5x1 + 2x2 + x3  ]
    [ 1 1 3 | x4 ]   [ 0 0 0 | -x1 + x4         ]

The system is consistent if and only if -5x1 + 2x2 + x3 = 0 and -x1 + x4 = 0. Thus,
this system of linear equations defines the set.

EXAMPLE4 Show that every vector v E JR3 can be written as a linear combination of the vectors

V, =
[!]· Ul
Y2 =
andV3
= Hl
Solution: To show that every vector v ∈ ℝ³ can be written as a linear combination of
the vectors v1, v2, and v3, we need to show that the system

    t1 v1 + t2 v2 + t3 v3 = v

is consistent for all v ∈ ℝ³.
Simplifying the left-hand side gives us a system of equations whose coefficient
matrix has v1, v2, and v3 as its columns. Row reducing this coefficient matrix to
RREF gives the identity matrix

    [ 1 0 0 ]
    [ 0 1 0 ]
    [ 0 0 1 ]

Hence, the rank of the matrix is 3, which equals the number of rows (equations).
Hence, by Theorem 2.2.2, the system is consistent for all v ∈ ℝ³, as required.

We generalize the method used in Example 4 to get the following important results.

Lemma 1    A set of k vectors {v1, ..., vk} in ℝⁿ spans ℝⁿ if and only if the rank of the
coefficient matrix of the system t1 v1 + · · · + tk vk = v is n.

Proof: If Span{v1, ..., vk} = ℝⁿ, then every vector b ∈ ℝⁿ can be written as a linear
combination of the vectors {v1, ..., vk}. That is, the system of linear equations

    t1 v1 + · · · + tk vk = b

has a solution for every b ∈ ℝⁿ. By Theorem 2.2.2, this means that the rank of the
coefficient matrix of the system equals n (the number of equations). On the other hand,
if the rank of the coefficient matrix of the system t1 v1 + · · · + tk vk = v is n, then the
system is consistent for all v ∈ ℝⁿ by Theorem 2.2.2. Hence, Span{v1, ..., vk} = ℝⁿ. •

Theorem 2    Let {v1, ..., vk} be a set of k vectors in ℝⁿ. If Span{v1, ..., vk} = ℝⁿ, then k ≥ n.

Proof: By Lemma 1, if Span{v1, ..., vk} = ℝⁿ, then the rank of the coefficient matrix
is n. But, if we have n leading 1s, then there must be at least n columns in the matrix
to contain the leading 1s. Hence, the number of columns, k, must be greater than or
equal to n. •

Linear Independence Problems

Recall that a set of vectors {v1, ..., vk} in ℝⁿ is said to be linearly independent if and
only if the only solution to the vector equation

    t1 v1 + · · · + tk vk = 0

is the solution ti = 0 for 1 ≤ i ≤ k. From our work above, we see that this is true when
the corresponding homogeneous system of n equations in k unknowns has a unique
solution (the trivial solution).

EXAMPLE 5
                           { [ 1 ]   [  1 ]   [ 2 ]   [ -1 ] }
Determine whether the set  { [ 1 ] , [ -1 ] , [ 1 ] , [ -3 ] }  is linearly independent
                           { [ 1 ]   [  5 ]   [ 4 ]   [  3 ] }
or dependent.

Solution: Consider

         [ 1 ]        [  1 ]        [ 2 ]        [ -1 ]   [ 0 ]
    t1   [ 1 ]  + t2  [ -1 ]  + t3  [ 1 ]  + t4  [ -3 ] = [ 0 ]
         [ 1 ]        [  5 ]        [ 4 ]        [  3 ]   [ 0 ]

Simplifying as above, this gives the homogeneous system with coefficient matrix

    [ 1  1 2 -1 ]
    [ 1 -1 1 -3 ]
    [ 1  5 4  3 ]

Notice that we do not even need to row reduce this matrix. By Theorem 2.2.2, the
number of parameters in the general solution equals the number of variables minus
the rank of the matrix. There are four variables, but the maximum the rank can be is 3
since there are only three rows. Hence, the number of parameters is at least one, so the
system has infinitely many solutions. Therefore, the set is linearly dependent.

EXAMPLE 6
           [ 1 ]        [ -2 ]             [ 1 ]
Let v1 =   [ 2 ] , v2 = [  1 ] , and v3 =  [ 1 ] .  Determine whether the set {v1, v2, v3}
           [ 1 ]        [  0 ]             [ 1 ]
is linearly independent or dependent.

EXAMPLE 6    Solution: Consider t1 v1 + t2 v2 + t3 v3 = 0. As above, we find that the coefficient
(continued)  matrix of the corresponding system is

    [ 1 -2 1 ]
    [ 2  1 1 ]
    [ 1  0 1 ]

Using the same elementary row operations as in Example 2, we get

    [ 1 0 0 ]
    [ 0 1 0 ]
    [ 0 0 1 ]

Therefore, the set is linearly independent since the system has a unique solution (the
trivial solution).

EXERCISE 2
Determine whether the set { [·], [·], [·] } is linearly independent or dependent.

Again, we can generalize the method used in Examples 5 and 6 to prove some
important results.

Lemma 3    A set of vectors {v1, ..., vk} in ℝⁿ is linearly independent if and only if the rank of
the coefficient matrix of the homogeneous system t1 v1 + · · · + tk vk = 0 is k.

Proof: If {v1, ..., vk} is linearly independent, then the system of linear equations

    t1 v1 + · · · + tk vk = 0

has a unique solution. Thus, the rank of the coefficient matrix equals the number of
unknowns k by Theorem 2.2.2.
On the other hand, if the rank of the coefficient matrix equals k, then the ho-
mogeneous system has k - k = 0 parameters. Therefore, it has the unique solution
t1 = · · · = tk = 0, and so the set is linearly independent. •

Theorem 4    If {v1, ..., vk} is a linearly independent set of vectors in ℝⁿ, then k ≤ n.

Proof: By Lemma 3, if {v1, ..., vk} is linearly independent, then the rank of the co-
efficient matrix is k. Hence, there must be at least k rows in the matrix to contain the
leading 1s. Therefore, the number of rows n must be greater than or equal to k. •
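Lemma 3 translates directly into a computation: a set is linearly independent exactly when the matrix with the vectors as columns has rank k. The sketch below uses our own helpers (exact arithmetic with fractions, not code from the text), applied to the three vectors of Example 6 (independent) and the four vectors of Example 5 (dependent, as Theorem 4 forces since k = 4 > n = 3):

```python
from fractions import Fraction

def rank(rows):
    """Count the pivots found by forward elimination, using exact fractions."""
    m = [[Fraction(x) for x in row] for row in rows]
    nrows, ncols = len(m), len(m[0])
    r = 0
    for col in range(ncols):
        pivot = next((i for i in range(r, nrows) if m[i][col] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(r + 1, nrows):     # zero out entries below the pivot
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def is_independent(vectors):
    """Lemma 3: {v1, ..., vk} is independent iff the column matrix has rank k."""
    matrix = [list(row) for row in zip(*vectors)]   # the vectors become columns
    return rank(matrix) == len(vectors)
```

For instance, `is_independent([[1, 2, 1], [-2, 1, 0], [1, 1, 1]])` is True, while the four vectors of Example 5 give False.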

Bases of Subspaces
Recall from Section 1.2 that we defined a basis of a subspace S of ℝⁿ to be a linearly
independent set that spans S. Thus, with our previous tools, we can now easily identify
a basis for a subspace. In particular, to show that a set B of vectors in ℝⁿ is a basis for
a subspace S, we just need to show that Span B = S and B is linearly independent. We
demonstrate this with a couple of examples.

EXAMPLE 7
Let B = { [·], [·], [·] } be a set of three vectors in ℝ³ [entries garbled in the source].
Prove that B is a basis for ℝ³.

Solution: To show that every vector v ∈ ℝ³ can be written as a linear combination of
the vectors in B, we just need to show that the system

    t1 v1 + t2 v2 + t3 v3 = v

is consistent for all v ∈ ℝ³.
Row reducing the coefficient matrix that corresponds to this system to RREF gives
the 3×3 identity matrix.
The rank of the matrix is 3, which equals the number of rows (equations). Hence, by
Theorem 2.2.2, the system is consistent for all v ∈ ℝ³, as required.
Moreover, to determine whether B is linearly independent, we would perform the
same elementary row operations on the same coefficient matrix. So, we see that the
rank of the matrix also equals the number of columns (variables). By Theorem 2.2.2,
the system has a unique solution, and hence B is also linearly independent. Thus, B is
a basis for ℝ³.

EXAMPLE 8
Show that B = { [·], [·] } [entries garbled in the source] is a basis for the plane
-3x1 + 2x2 + x3 = 0.

Solution: We first observe that 13 is clearly linearly independent since neither vector
is a scalar multiple of the other. Thus, we need to show that every vector in the plane
can be written as a linear combination of the vectors in 13. To do this, observe that any
vector x in the plane must satisfy the condition of the plane. Hence, every vector in
the plane has the form

        [     x1     ]
    x = [     x2     ]
        [ 3x1 - 2x2  ]

since x3 = 3x1 - 2x2. Therefore, we now just need to show that the equation
- 2x2. Therefore, we now just need to show that the equation

EXAMPLE 8 is always consistent. Row reducing the corresponding augmented matrix gives
(continued)

So, the system is consistent and hence B is a basis for the plane.

Theorem 5    A set of vectors {v1, ..., vn} is a basis for ℝⁿ if and only if the rank of the
coefficient matrix of t1 v1 + · · · + tn vn = v is n.

Proof: If {v1, ..., vn} is a basis for ℝⁿ, then it is linearly independent. Hence, by
Lemma 3, the rank of the coefficient matrix is n.
If the rank of the coefficient matrix is n, then the set is linearly independent and
spans ℝⁿ by Lemma 1 and Lemma 3. •

Theorem 5 gives us a condition to test whether a set of n vectors in ℝⁿ is a basis
for ℝⁿ. Moreover, Lemma 1 and Lemma 3 tell us that a basis of ℝⁿ must contain n
vectors. We now want to prove that every basis of a subspace S of ℝⁿ must contain the
same number of vectors.
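For n vectors in ℝⁿ, Theorem 5's rank condition holds exactly when the square coefficient matrix is invertible, which a determinant detects. Determinants are treated later in the text, so this is only a preview sketch (the `det3` helper is ours). With the vectors of Example 6 as columns, the determinant is nonzero, so they form a basis of ℝ³:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# columns are v1 = (1, 2, 1), v2 = (-2, 1, 0), v3 = (1, 1, 1) from Example 6
A = [[1, -2, 1],
     [2,  1, 1],
     [1,  0, 1]]
is_basis = det3(A) != 0
```

A zero determinant would mean the rank is less than 3, so the set would neither span ℝ³ nor be independent enough to be a basis.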

Lemma 6    Suppose that S is a non-trivial subspace of ℝⁿ and Span{v1, ..., vℓ} = S. If
{u1, ..., uk} is a linearly independent set of vectors in S, then k ≤ ℓ.

Proof: Since each ui, 1 ≤ i ≤ k, is a vector in S, by definition of spanning, it can be
written as a linear combination of the vectors v1, ..., vℓ. We get

    u1 = a11 v1 + a21 v2 + · · · + aℓ1 vℓ
    u2 = a12 v1 + a22 v2 + · · · + aℓ2 vℓ
    ⋮
    uk = a1k v1 + a2k v2 + · · · + aℓk vℓ

Consider the equation

    0 = t1 u1 + · · · + tk uk
      = t1 (a11 v1 + a21 v2 + · · · + aℓ1 vℓ) + · · · + tk (a1k v1 + a2k v2 + · · · + aℓk vℓ)
      = (a11 t1 + · · · + a1k tk) v1 + · · · + (aℓ1 t1 + · · · + aℓk tk) vℓ

Since {v1, ..., vℓ} is linearly independent, the only solution to this equation is

    a11 t1 + · · · + a1k tk = 0,   ...,   aℓ1 t1 + · · · + aℓk tk = 0


This gives a homogeneous system of ℓ equations in the k unknowns t1, ..., tk. If k > ℓ,
then this system would have a non-trivial solution, which would imply that {u1, ..., uk}
is linearly dependent. But we assumed that {u1, ..., uk} is linearly independent, so we
must have k ≤ ℓ. •

Theorem 7    If {v1, ..., vℓ} and {u1, ..., uk} are both bases of a non-trivial subspace S of ℝⁿ,
then k = ℓ.

Proof: We know that {v1, ..., vℓ} is a basis for S, so it is linearly independent. Also,
{u1, ..., uk} is a basis for S, so Span{u1, ..., uk} = S. Thus, by Lemma 6, we get ℓ ≤ k.
Similarly, {u1, ..., uk} is linearly independent as it is a basis for S, and
Span{v1, ..., vℓ} = S, so k ≤ ℓ. Therefore, k = ℓ, as required. •

This theorem justifies the following definition.

Definition    If S is a non-trivial subspace of ℝⁿ with a basis containing k vectors, then we say that
Dimension     the dimension of S is k and write

    dim S = k

So that we can talk about the basis of any subspace of ℝⁿ, we define the empty set
to be a basis for the trivial subspace {0} of ℝⁿ and thus say that the dimension of the
trivial vector space is 0.

EXAMPLE 9   By definition, a plane in ℝⁿ that passes through the origin is spanned by a set of two
linearly independent vectors. Thus, such a plane is 2-dimensional since every basis of
the plane will have two vectors. This result fits with our geometric understanding of a
plane.

PROBLEMS 2.3
Practice Problems

Al Let B =
{ 1 ! , -� }
·
For each of the following
A4 Using the procedure in Example 8, determine
whether the given set is a basis for the given plane
or hyperplane.

vectors, either express it as a linear combination of (a)


{[i], [�]} fo
r xi + x,- x,=
0

{[:] .[j]} -
the vectors of B or show that it is not a vector in
-3 5 2

� 2 3x2 + x, = 0
Span B. (a)
(b
) : (c) - (b
)
for 2x,

A2 Let B =
4

{ -: , -i , _; }
7

·For each of the fol-


(c) {i . � , !} -
0 1 1
for x1 x2 + 2x3 - 2x4 = 0

0 2 -1 AS Determine whether the following sets are linearly


lowing vectors, either express it as a linear combi­ independent. If the set is linearly dependent, find
nation of the vectors of B or show that it is not a all linear combinations of the vectors that equal the
vector in Span B. zero vector.

(a) _1
-1
3
2
(b)
-7
3
0

8
(c) (a) { � ' ' -� }
-1
;
1 1
A3 Using the procedure in Example 3, find a homoge­
neous system that defines the given set.
( b) { I : 1 �}
{[�] [!]}
·
·
·

(a) Span
· 0
1 2

{[�]}
1 3
0
(b) Span (c)
3
1

{[i].[-!]}
1

(c) Span A6 Determine all values of k such that the given set is

u,:, =!}
linearly independent.

(d) Span
{[J. [-m (a)

-�}
(b)
{ 1 -� . -n
' 4
-3
Exercises 101

A 7 Determine whether the given set is a basis for JR'.3.

(a)
{[iH=:J.l:J} (c)
{[�Jnrni1n
(b)
W�H-m
(d)
{[-: ].[J [�]}
Homework Problems

Bl Let B = { l. �, j }
1 2 1
·For each of the following (e) Span {: -1
-1
3
3 ' -5
1 2
-� }
vectors, either express it as a linear combination of 1 0 0 0
the vectors of B or show that it is not a vector in 0 1 0 0
SpanB. (f) Span 2 1 ' 0
-4 6 3 -1 -5 -1 0
-2 0 -1 0 0 3
(a) (b) (c)
2 0 2
B4 Using the procedure in Example 8, determine
-6 3 1
whether the given set is a basis for the given plane

B2 Let B =
{ � , -: }·
,
3
For each of the following
or hyperplane.

( a)
{r-;] li]} fo
'2X>
+ X2 + X3 = Q

{HrnJ}
0 -1
vectors, either express it as a linear combination of
the vectors of B or show that it is not a vector in (b) for 4x1+2,, - x, = a
SpanB.
0 6 2

(a)
1
4
2
(b)
10
3
4 (c)
3

-1
(c ) p 1n , - . for 2x1+3x, - 54 = 0

B3 Using the procedure in Example 3, find a homoge­


neous system that defines the given set.
(d) {-� � -q for X1 + 2x,

{ [ i J lm
• • = a

(a) span - .
BS Determine whether the following sets are linearly

{[ i] }
independent. If the set is linearly dependent, find
(b) Span - all linear combinations of the vectors that are 0.

cci span
{ [J nJ .U] } (a) u, �, n
j i . :)
1 1 0 1
- 0 2 -3
(d) Sp 1 1 1

l
(b) ' 0
-2 4 1 0 2 0
0 -2

3
B6 Determine all values of k such that the given set is B7 Determine whether the given set is a basis for JR. .

{[-illlUl}
linearly independent.

(a) p , �� , ; } (a)

(b)
mini·m·[�J}
(b)
h -1. ; } -
(c)

WlHHm
(d)

rnHm
Computer Problems

3 0 (a) Determine whether B is linearly independent or


2 1 1
-1 -2 0 -1 0 1 dependent.

I 1 -1 0 1 0 (b) Determine whether vis in SpanB.


Cl Let B = 2 0 5 -4 -2 and v = 0.
4 6 -1 3 0
-2 2 8 6 3 0
3 2 3 5 0

Conceptual Problems

D1 Let B = {e1, ..., en} be the standard basis for ℝⁿ. Prove that Span B = ℝⁿ and
that B is linearly independent.

D2 Let B = {v1, ..., vk} be vectors in ℝⁿ.
(a) Prove that if k < n, then there exists a vector v ∈ ℝⁿ such that v ∉ Span B.
(b) Prove that if k > n, then B must be linearly dependent.
(c) Prove that if k = n, then Span B = ℝⁿ if and only if B is linearly independent.

2.4 Applications of Systems of Linear


Equations

Resistor Circuits in Electricity


The flow of electrical current in simple electrical circuits is described by simple linear
laws. In an electrical circuit, the current has a direction and therefore has a sign at­
tached to it; voltage is also a signed quantity; resistance is a positive scalar. The laws
for electrical circuits are discussed next.

Ohm's Law If an electrical current of magnitude I amperes is flowing through


a resistor with resistance R ohms, then the drop in the voltage across the resistor is
V = IR, measured in volts. The filament in a light bulb and the heating element of an
electrical heater are familiar examples of electrical resistors. (See Figure 2.4.4.)

Figure 2.4.4 Ohm's law: the voltage across the resistor is V = IR.

Kirchhoff's Laws (1) At a node or junction where several currents enter, the
signed sum of the currents entering the node is zero. (See Figure 2.4.5.)

I1 - I2 + I3 - I4 = 0

Figure 2.4.5 One of Kirchhoff's laws: I1 - I2 + I3 - I4 = 0.

(2) In a closed loop consisting of only resistors and an electromotive force E (for
example, E might be due to a battery), the sum of the voltage drops across resistors is
equal to E. (See Figure 2.4.6.)

Figure 2.4.6 Kirchhoff's other law: E = R1I + R2I.

Note that we adopt the convention of drawing an arrow to show the direction of I or
of E. These arrows can be assigned arbitrarily, and then the circuit laws will determine
whether the quantity has a positive or negative sign. It is important to be consistent in
using these assigned directions when you write down Kirchhoff's law for loops.
Sometimes it is necessary to determine the current flowing in each of the loops
of a network of loops, as shown in Figure 2.4.7. (If the sources of electromotive force
are distributed in various places, it will not be sufficient to deal with the problems as a
collection of resistors "in parallel and/or in series.") In such problems, it is convenient
to introduce the idea of the "current in the loop," which will be denoted i. The true
current across any circuit element is given as the algebraic (signed) sum of the "loop
currents" flowing through that circuit element. For example, in Figure 2.4.7, the circuit
consists of four loops, and a loop current has been indicated in each loop. Across the
resistor R1 in the figure, the true current is simply the loop current i1; however, across
the resistor R2, the true current (directed from top to bottom) is i1 -i2. Similarly, across
R4, the true current (from right to left) is i1 - i3.

Figure 2.4.7 A resistor circuit.

The reason for introducing these loop currents for our present problem is that there
are fewer loop currents than there are currents through individual elements. Moreover,
Kirchhoff's law at the nodes is automatically satisfied, so we do not have to write
nearly so many equations.
To determine the currents in the loops, it is necessary to use Kirchhoff's second
law with Ohm's law describing the voltage drops across the resistors. For Figure 2.4.7,
the resulting equations for each loop are:

• The top-left loop: R1i1 + R2(i1 - i2) + R4(i1 - i3) = E1

• The top-right loop: R3i2 + R5(i2 - i4) + R2(i2 - i1) = E2

• The bottom-left loop: R6i3 + R4(i3 - i1) + R7(i3 - i4) = 0

• The bottom-right loop: R8i4 + R7(i4 - i3) + R5(i4 - i2) = -E2

Multiply out and collect terms to display the equations as a system in the variables
i1, i2, i3, and i4. The augmented matrix of the system is

[ R1+R2+R4   -R2          -R4          0          | E1  ]
[ -R2         R2+R3+R5     0          -R5         | E2  ]
[ -R4         0            R4+R6+R7   -R7         | 0   ]
[ 0          -R5          -R7          R5+R7+R8   | -E2 ]

To determine the loop currents, this augmented matrix must be reduced to row echelon
form. There is no particular purpose in finding an explicit solution for this general
problem, and in a linear algebra course, there is no particular value in plugging in
particular values for E1, E2, and the resistors. Instead, the point of this example
is to show that even for a fairly simple electrical circuit with the most basic elements
(resistors), the analysis requires you to be competent in dealing with large systems of
linear equations. Systematic, efficient methods of solution are essential.
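To make the point concrete, a system like the one above can be solved numerically. The resistance and EMF values below are made-up illustrative numbers (the text deliberately leaves E1, E2, and the resistors general), and numpy is assumed to be available; this is a sketch, not part of the text's derivation.

```python
import numpy as np

# Illustrative values only; the text leaves E1, E2 and R1..R8 general.
R1, R2, R3, R4, R5, R6, R7, R8 = 1.0, 2.0, 1.0, 3.0, 2.0, 1.0, 2.0, 1.0
E1, E2 = 10.0, 5.0

# Coefficient matrix of the four loop equations, one row per loop,
# in the unknowns i1, i2, i3, i4.
A = np.array([
    [R1 + R2 + R4, -R2,           -R4,           0.0          ],
    [-R2,           R2 + R3 + R5,  0.0,          -R5          ],
    [-R4,           0.0,           R4 + R6 + R7, -R7          ],
    [0.0,          -R5,           -R7,            R5 + R7 + R8],
])
b = np.array([E1, E2, 0.0, -E2])

i = np.linalg.solve(A, b)   # loop currents i1..i4
print("loop currents:", i)
print("current through R2 (top to bottom):", i[0] - i[1])
```

For larger networks the same matrix is built mechanically, one row per loop, which is exactly why systematic elimination methods matter.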
Obviously, as the number of loops in the network grows, so does the number of
variables and so does the number of equations. For larger systems, it is important to
know whether you have the correct number of equations to determine the unknowns.
Thus, the theorems in Sections 2.1 and 2.2, the idea of rank, and the idea of linear
independence are all important.

The Moral of This Example Linear algebra is an essential tool for dealing
with large systems of linear equations that may arise in dealing with circuits; really
interesting examples cannot be given without assuming greater knowledge of electrical
circuits and their components.

Planar Trusses
It is common to use trusses, such as the one shown in Figure 2.4.8, in construction.
For example, many bridges employ some variation of this design. When designing
such structures, it is necessary to determine the axial forces in each member of the
structure (that is, the force along the long axis of the member). To keep this simple,
only two-dimensional trusses with hinged joints will be considered; it will be assumed
that any displacements of the joints under loading are small enough to be negligible.

Figure 2.4.8 A planar truss. All triangles are equilateral, with sides of length s.

The external loads (such as vehicles on a bridge, or wind or waves) are assumed
to be given. The reaction forces at the supports (shown as R1, R2, and RH in the figure)
are also external forces; these forces must have values such that the total external force
on the structure is zero. To get enough information to design a truss for a particular
application, we must determine the forces in the members under various loadings. To
illustrate the kinds of equations that arise, we shall consider only the very simple case
of a vertical force Fv acting at C and a horizontal force FH acting at E. Notice that
in this figure, the right-hand end of the truss is allowed to undergo small horizontal
displacements; it turns out that if a reaction force were applied here as well, the
equations would not uniquely determine all the unknown forces (the structure would be
"statically indeterminate"), and other considerations would have to be introduced.
The geometry of the truss is assumed given: here it will be assumed that the
triangles are equilateral, with all sides equal to s metres.
First consider the equations that indicate that the total force on the structure is zero
and that the total moment about some convenient point due to those forces is zero. Note
that the axial force along the members does not appear in this first set of equations.

• Total horizontal force: RH + FH = 0

• Total vertical force: R1 + R2 - Fv = 0

• Moment about A: -Fv(s) + R2(2s) = 0, so R2 = (1/2)Fv = R1

Next, we consider the system of equations obtained from the fact that the sum of
the forces at each joint must be zero. The moments are automatically zero because the
forces along the members act through the joints.
At a joint, each member at that joint exerts a force in the direction of the axis of the
member. It will be assumed that each member is in tension, so it is "pulling" away from
the joint; if it were compressed, it would be "pushing" at the joint. As indicated in the
figure, the force exerted on this joint A by the upper-left-hand member has magnitude
N1; with the conventions that forces to the right are positive and forces up are positive,
the force vector exerted by this member on the joint A is

[  N1/2    ]
[  √3 N1/2 ]

On the joint B, the same member will exert a force

[ -N1/2    ]
[ -√3 N1/2 ]

If N1 is positive, the force is a tension force; if N1 is negative, there is compression.


For each of the joints A, B, C, D, and E, there are two equations-the first for the
sum of horizontal forces and the second for the sum of the vertical forces:

A1: N1/2 + N2 + RH = 0
A2: √3 N1/2 + R1 = 0
B1: -N1/2 + N3/2 + N4 = 0
B2: -√3 N1/2 - √3 N3/2 = 0
C1: -N2 - N3/2 + N5/2 + N6 = 0
C2: √3 N3/2 + √3 N5/2 = Fv
D1: -N4 - N5/2 + N7/2 = 0
D2: -√3 N5/2 - √3 N7/2 = 0
E1: -N6 - N7/2 = -FH
E2: √3 N7/2 + R2 = 0

Notice that if the reaction forces are treated as unknowns, this is a system of 10
equations in 10 unknowns. The geometry of the truss and its supports determines the
coefficient matrix of this system, and it could be shown that the system is necessarily
consistent with a unique solution. Notice also that if the horizontal force equations (A1,
B1, C1, D1, and E1) are added together, the sum is the total horizontal force equation,
and similarly the sum of the vertical force equations is the total vertical force equation.
A suitable combination of the equations would also produce the moment equation, so
if those three equations are solved as above, then the 10 joint equations will still be a
consistent system for the remaining 7 axial force variables.
For this particular truss, the system of equations is quite easy to solve, since some
of the variables are already leading variables. For example, if FH = 0, from A2 and
E2 it follows that N1 = N7 = -(1/√3)Fv; then B2, C2, and D2 give N3 = N5 = (1/√3)Fv;
then A1 and E1 imply that N2 = N6 = (1/(2√3))Fv, and B1 implies that N4 = -(1/√3)Fv.

Note that the members AC, BC, CD, and CE are under tension, and AB, BD, and DE
experience compression, which makes intuitive sense.
This is a particularly simple truss. In the real world, trusses often involve many
more members and use more complicated geometry; trusses may also be
three-dimensional. Therefore, the systems of equations that arise may be considerably larger
and more complicated. It is also sometimes essential to introduce considerations other
than the equations of equilibrium of forces in statics. To study these questions, you
need to know the basic facts of linear algebra.
It is worth noting that in the system of equations above, each of the quantities N1,
N2, ..., N7 appears with a non-zero coefficient in only some of the equations. Since
each member touches only two joints, this sort of special structure will often occur
in the equations that arise in the analysis of trusses. A deeper knowledge of linear
algebra is important in understanding how such special features of linear equations
may be exploited to produce efficient solution methods.
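As a check on the joint equations A1-E2, the 10 x 10 system can be solved numerically. The sketch below (numpy assumed; not part of the text) uses the loading case worked out above, Fv = 1 and FH = 0, with the unknowns ordered (N1, ..., N7, R1, R2, RH).

```python
import numpy as np

s3 = np.sqrt(3.0)
Fv, FH = 1.0, 0.0   # the loading case discussed in the text

# One row per joint equation A1, A2, B1, B2, C1, C2, D1, D2, E1, E2;
# columns are N1..N7, R1, R2, RH.
A = np.array([
    [ 0.5,   1,  0,     0,  0,     0,  0,     0, 0, 1],   # A1
    [ s3/2,  0,  0,     0,  0,     0,  0,     1, 0, 0],   # A2
    [-0.5,   0,  0.5,   1,  0,     0,  0,     0, 0, 0],   # B1
    [-s3/2,  0, -s3/2,  0,  0,     0,  0,     0, 0, 0],   # B2
    [ 0,    -1, -0.5,   0,  0.5,   1,  0,     0, 0, 0],   # C1
    [ 0,     0,  s3/2,  0,  s3/2,  0,  0,     0, 0, 0],   # C2
    [ 0,     0,  0,    -1, -0.5,   0,  0.5,   0, 0, 0],   # D1
    [ 0,     0,  0,     0, -s3/2,  0, -s3/2,  0, 0, 0],   # D2
    [ 0,     0,  0,     0,  0,    -1, -0.5,   0, 0, 0],   # E1
    [ 0,     0,  0,     0,  0,     0,  s3/2,  0, 1, 0],   # E2
])
b = np.array([0, 0, 0, 0, 0, Fv, 0, 0, -FH, 0], dtype=float)

x = np.linalg.solve(A, b)
N = x[:7]
print("axial forces N1..N7:", N)   # negative entries indicate compression
```

The computed values reproduce the hand solution: N1 = N4 = N7 = -Fv/√3 (compression) and N3 = N5 = Fv/√3, N2 = N6 = Fv/(2√3) (tension).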

Linear Programming
Linear programming is a procedure for deciding the best way to allocate resources.
"Best" may mean fastest, most profitable, least expensive, or best by whatever criterion
is appropriate. For linear programming to be applicable, the problem must have some
special features. These will be illustrated by an example.
In a primitive economy, a man decides to earn a living by making hinges and gate
latches. He is able to obtain a supply of 25 kilograms a week of suitable metal at a
price of 2 cowrie shells per kilogram. His design requires 500 grams to make a hinge
and 250 grams to make a gate latch. With his primitive tools, he finds that he can make
a hinge in 1 hour, and it takes 3/4 hour to make a gate latch. He is willing to work
60 hours a week. The going price is 3 cowrie shells for a hinge and 2 cowrie shells
for a gate latch. How many hinges and how many gate latches should he produce each
week in order to maximize his net income?
To analyze the problem, let x be the number of hinges produced per week and
let y be the number of gate latches. Then the amount of metal used is (0.5x + 0.25y)
kilograms. Clearly, this must be less than or equal to 25 kilograms:

0.5x + 0.25y ≤ 25

or

2x + y ≤ 100

Such an inequality is called a constraint on x and y; it is a linear constraint because


the corresponding equation is linear.
Our producer also has a time constraint: the time taken making hinges plus the
time taken making gate latches cannot exceed 60 hours. Therefore,

1x + 0.75y ≤ 60

or

4x + 3y ≤ 240

Obviously, also x ≥ 0 and y ≥ 0.


The producer's net revenue for selling x hinges and y gate latches is R(x, y) =
3x + 2y - 2(25) cowrie shells. This is called the objective function for the problem.
The mathematical problem can now be stated as follows: find the point (x, y) that
maximizes the objective function R(x, y) = 3x + 2y - 50, subject to the linear constraints
x ≥ 0, y ≥ 0, 2x + y ≤ 100, and 4x + 3y ≤ 240.
This is a linear programming problem because it asks for the maximum (or
minimum) of a linear objective function, subject to linear constraints. It is useful to
introduce one piece of special vocabulary: the feasible set for the problem is the set
of (x, y) satisfying all of the constraints. The solution procedure relies on the fact that
the feasible set for a linear programming problem has a special kind of shape. (See
Figure 2.4.9 for the feasible set for this particular problem.) Any line that meets the
feasible set either meets the set in a single line segment or only touches the set on its
boundary. In particular, because of the way the feasible set is defined in terms of linear
inequalities, it turns out that it is impossible for one line to meet the feasible set in two
separate pieces.
For example, the shaded region in Figure 2.4.10 cannot possibly be the feasible
set for a linear programming problem, because some lines meet the region in two line
segments. (This property of feasible sets is not difficult to prove, but since this is only
a brief illustration, the proof is omitted.)

Figure 2.4.9 The feasible region for the linear programming example. The grey lines
are level sets of the objective function R.

Now consider sets of the form R(x, y) = k, where k is a constant; these are called
the level sets of R. These sets obviously form a family of parallel lines, and some
of them are shown in Figure 2.4.9. Choose some point in the feasible set: check that
(20, 20) is such a point. Then the line

R(x,y) =
R(20,20) = 50

Figure 2.4.10 The shaded region cannot be the feasible region for a linear programming
problem because it meets a line in two segments.

meets the feasible set in a line segment. (30,30) is also a feasible point (check),
and
R(x,y) = R(30,30) = 100

also meets the feasible set in a line segment. You can tell that (30,30) is not a boundary
point of the feasible set because it satisfies all the constraints with strict inequality;
boundary points must satisfy one of the constraints with equality.
As we move further from the origin into the first quadrant, R(x,y) increases. The
biggest possible value for R(x,y) will occur at a point where the set R(x, y) = k (for
some constant k to be determined) just touches the feasible set. For larger values of
R(x,y), the set R(x,y) = k does not meet the feasible set at all, so there are no feasible
points that give such bigger values of R. The touching must occur at a vertex-that is,
at an intersection point of two of the boundary lines. (In general, the line R(x,y) = k
for the largest possible constant could touch the feasible set along a line segment that
makes up part of the boundary. But such a line segment has two vertices as endpoints,
so it is correct to say that the touching occurs at a vertex.)
For this particular problem, three of the vertices of the feasible set are easily found:
(0, 0), (50, 0), and (0, 80). The fourth vertex is the solution of the system of equations

2x + y = 100

4x + 3y = 240

which is (30, 40). Thus the vertices of the feasible set are (0, 0), (50, 0), (0, 80),
and (30, 40). Now compare the values of R(x, y) at all of these vertices: R(0, 0) = -50,
R(50, 0) = 100, R(0, 80) = 110, and R(30, 40) = 120. The vertex (30, 40) gives the best net
revenue, so the producer should make 30 hinges and 40 gate latches each week.
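The vertex search in this example can be automated: enumerate intersections of pairs of boundary lines, discard infeasible points, and evaluate R at the rest. This sketch (numpy assumed; not part of the text) reproduces the computation above.

```python
import itertools
import numpy as np

# Constraints written as a_i . (x, y) <= c_i; the first two encode x >= 0, y >= 0.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [2.0, 1.0], [4.0, 3.0]])
c = np.array([0.0, 0.0, 100.0, 240.0])

def revenue(p):
    return 3 * p[0] + 2 * p[1] - 50        # R(x, y) = 3x + 2y - 50

# A vertex is a feasible intersection of two boundary lines.
best = None
for i, j in itertools.combinations(range(len(A)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                            # parallel boundaries: no intersection
    p = np.linalg.solve(M, c[[i, j]])
    if np.all(A @ p <= c + 1e-9):           # keep only feasible intersection points
        if best is None or revenue(p) > revenue(best):
            best = p

print("best vertex:", best, "revenue:", revenue(best))
```

The point (60, 0) mentioned below is produced by this enumeration but discarded by the feasibility check, which is exactly why "solve all pairs of boundary equations" alone is not good enough.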

General Remarks Problems involving allocation of resources are common in
business and government. Problems such as scheduling ship transits through a canal
can be analyzed this way. Oil companies must make choices about the grades of crude
oil to use in their refineries, and about the amounts of various refined products to
produce. Such problems often involve tens or even hundreds of variables, and similar
numbers of constraints. The boundaries of the feasible set are hyperplanes in some
ℝ^n, where n is large. Although the basic principles of the solution method remain the
same as in this example (look for the best vertex), the problem is much more
complicated because there are so many vertices. In fact, it is a challenge to find vertices;
simply solving all possible combinations of systems of boundary equations is not good
enough. Note in the simple two-dimensional example that the point (60, 0) is the
intersection point of two of the lines (y = 0 and 4x + 3y = 240) that make up the boundary,
but it is not a vertex of the feasible region because it fails to satisfy the constraint
2x + y ≤ 100. For higher-dimensional problems, drawing pictures is not good enough,
and an organized approach is called for.
The standard method for solving linear programming problems has been the
simplex method, which finds an initial vertex and then prescribes a method (very similar
to row reduction) for moving to another vertex, improving the value of the objective
function with each step.
Again, it has been possible to hint at major application areas for linear algebra,
but to pursue one of these would require the development of specialized mathematical
tools and information from specialized disciplines.

PROBLEMS 2.4
Practice Problems

A1 Determine the system of equations for the reaction forces and axial forces in
members for the truss shown in the diagram.

A2 Determine the augmented matrix of the system of linear equations, and determine
the loop currents indicated in the diagram.

[Circuit diagram: EMFs E1 and E2 and several resistors, with a loop current indicated
in each loop.]

A3 Find the maximum value of the objective function x + y subject to the constraints
0 ≤ x ≤ 100, 0 ≤ y ≤ 80, and 4x + 5y ≤ 600. Sketch the feasible region.

CHAPTER REVIEW
Suggestions for Student Review

1 Explain why elimination works as a method for solving systems of linear
equations. (Section 2.1)

2 When you row reduce an augmented matrix [A | b] to solve a system of linear
equations, why can you stop when the matrix is in row echelon form? How do you
use this form to decide if the system is consistent and if it has a unique solution?
(Section 2.1)

3 How is reduced row echelon form different from row echelon form? (Section 2.2)

4 (a) Write the augmented matrix of a consistent non-homogeneous system of three
linear equations in four variables, such that the coefficient matrix is in row echelon
form (but not reduced row echelon form) and of rank 3.
(b) Determine the general solution of your system.
(c) Perform the following sequence of elementary row operations on your augmented
matrix:
(i) Interchange the first and second rows.
(ii) Add the (new) second row to the first row.
(iii) Add twice the second row to the third row.
(iv) Add the third row to the second.
(d) Regard the result of (c) as the augmented matrix of a system and solve that
system directly. (Don't just use the reverse operation in (c).) Check that your general
solution agrees with (b).
Chapter Review 111

5 For homogeneous systems, how can you use the row echelon form to determine
whether there are non-trivial solutions and, if there are, how many parameters there
are in the general solution? Is there any case where we know (by inspection) that a
homogeneous system has non-trivial solutions? (Section 2.2)

6 Write a short explanation of how you use information about consistency of systems
and uniqueness of solutions in testing for linear independence and in determining
whether a vector x belongs to a given subspace of ℝ^n. (Section 2.3)

7 Explain how to determine whether a set of vectors {v1, ..., vk} in ℝ^n is both
linearly independent and a spanning set for a subspace S of ℝ^n. What form must the
reduced row echelon form of the coefficient matrix of the vector equation
t1v1 + · · · + tkvk = b have if the set is a linearly independent spanning set?
(Section 2.3)

Chapter Quiz

E1 Determine whether the following system is consistent by row reducing its
augmented matrix:

x2 - 2x3 + x4 = 2
2x1 - 2x2 + 4x3 - x4 = 10
x1 - x2 + x3 = 2
[fourth equation illegible; its right-hand side is 9]

If it is consistent, determine the general solution. Show your steps clearly.

E2 Find a matrix in reduced row echelon form that is row equivalent to

A = [a 4 × 5 matrix; entries only partly legible]

Show your steps clearly.

E3 The matrix A = [an augmented matrix whose entries involve the parameters
a, b, and c; only partly legible] is the augmented matrix of a system of linear
equations.
(a) Determine all values of (a, b, c) such that the system is consistent and all values
of (a, b, c) such that the system is inconsistent.
(b) Determine all values of (a, b, c) such that the system has a unique solution.

E4 (a) Determine all vectors in ℝ^5 that are orthogonal to the three given vectors.
[The vector entries are not legible.]
(b) Let u, v, and w be three vectors in ℝ^5. Explain why there must be non-zero
vectors orthogonal to all of u, v, and w.

E5 Determine whether the given set of three vectors [entries not legible] is a basis
for ℝ^3.

E6 Indicate whether the following statements are true or false. In each case, justify
your answer with a brief explanation or counterexample.
(a) A consistent system must have a unique solution.
(b) If there are more equations than variables in a non-homogeneous system of
linear equations, then the system must be inconsistent.
(c) Some homogeneous systems of linear equations have unique solutions.
(d) If there are more variables than equations in a system of linear equations, then
the system cannot have a unique solution.

Further Problems

These problems are intended to be challenging. They may not be of interest to all
students.

F1 The purpose of this exercise is to explore the relationship between the general
solution of the system [A | b] and the general solution of the corresponding
homogeneous system [A | 0]. This relation will be studied with different tools in
Section 3.4. We begin by considering some examples where the coefficient matrix
is in reduced row echelon form.

(a) Let R = [ 1 0 r13 ]
            [ 0 1 r23 ]

Show that the general solution of the homogeneous system [R | 0] is x = t v, t ∈ ℝ,
where v is expressed in terms of r13 and r23. Show that the general solution of the
non-homogeneous system [R | c] is x = p + t v, where p is expressed in terms of c,
and x_H = t v is as above.

(b) Let R be the given 3 × 5 matrix in reduced row echelon form, whose third row
is [0 0 0 1 r35]. [The remaining entries are only partly legible.] Show that the
general solution of the homogeneous system [R | 0] is x = t1 v1 + t2 v2, where each
of v1 and v2 can be expressed in terms of the entries rij. Express each vi explicitly.
Then show that the general solution of [R | c] can be written as
x = p + t1 v1 + t2 v2, where p is expressed in terms of the components of c, and
x_H = t1 v1 + t2 v2 is the solution of the corresponding homogeneous system.

The pattern should now be apparent; if it is not, try again with another special case
of R. In the next part of this exercise, create an effective labelling system so that
you can clearly indicate what you want to say.

(c) Let R be a matrix in reduced row echelon form, with m rows, n columns, and
rank k. Show that the general solution of the homogeneous system [R | 0] is
x = t1 v1 + · · · + t_{n-k} v_{n-k}, where each vi is expressed in terms of the entries
in R. Suppose that the system [R | c] is consistent and show that the general
solution is x = p + x_H, where p is expressed in terms of the components of c, and
x_H is the solution of the corresponding homogeneous system.

(d) Use the result of (c) to discuss the relationship between the general solution of
the consistent system [A | b] and the corresponding homogeneous system [A | 0].

F2 This problem involves comparing the efficiency of row reduction procedures.

When we use a computer to solve large systems of linear equations, we want to
keep the number of arithmetic operations as small as possible. This reduces the time
taken for calculations, which is important in many industrial and commercial
applications. It also tends to improve accuracy: every arithmetic operation is an
opportunity to lose accuracy through truncation or round-off, subtraction of two
nearly equal numbers, and so on.

We want to count the number of multiplications and/or divisions in solving a
system by elimination. We focus on these operations because they are more
time-consuming than addition or subtraction, and the number of additions is
approximately the same as the number of multiplications. We make certain
assumptions: the system [A | b] has n equations and n variables, and it is consistent
with a unique solution. (Equivalently, A has n rows, n columns, and rank n.) We
assume for simplicity that no row interchanges are required. (If row interchanges
are required, they can be handled by renaming "addresses" in the computer.)


(a) How many multiplications and divisions are required to reduce [A | b] to a form
[C | d] such that C is in row echelon form?

Hints

(1) To carry out the obvious first elementary row operation, compute a21/a11: one
division. Since we know what will happen in the first column, we do not multiply
a11 by a21/a11, but we must multiply every other element of the first row of [A | b]
by this factor and subtract the product from the corresponding element of the second
row: n multiplications.

(2) Obtain zeros in the remaining entries in the first column, then move to the
(n - 1) by n block consisting of the reduced version of [A | b] with the first row and
first column deleted.

(3) Note that Σ_{i=1}^{n} i = n(n + 1)/2 and Σ_{i=1}^{n} i² = n(n + 1)(2n + 1)/6.

(4) The biggest term in your answer should be n³/3. Note that n³ is much greater
than n² when n is large.

(b) Determine how many multiplications and divisions are required to solve the
system with the augmented matrix [C | d] of part (a) by back-substitution.

(c) Show that the number of multiplications and divisions required to row reduce
[C | d] to reduced row echelon form is the same as the number used in solving the
system by back-substitution. Conclude that the Gauss-Jordan procedure is as
efficient as Gaussian elimination with back-substitution. For large n, the number of
multiplications and divisions is roughly n³/3.

(d) Suppose that we do a "clumsy" Gauss-Jordan procedure. We do not first obtain
row echelon form; instead we obtain zeros in all entries above and below a pivot
before moving on to the next column. Show that the number of multiplications and
divisions required in this procedure is roughly n³/2, so that this procedure requires
approximately 50% more operations than the more efficient procedures.
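The count in F2(a) can be checked empirically. The sketch below (an illustration under the problem's assumptions, not a full solution to the exercise) tallies the multiplications and divisions performed by the elimination scheme described in the hints and compares the total with n³/3.

```python
# Count multiplications/divisions in reducing an n x n system [A | b]
# to row echelon form, following the scheme in the hints.
def elimination_ops(n):
    ops = 0
    for col in range(n):                 # current pivot column
        for row in range(col + 1, n):    # each row below the pivot
            ops += 1                     # one division for the factor a_rc / a_cc
            ops += (n - col)             # multiply remaining row entries plus the RHS
    return ops

for n in [10, 100, 1000]:
    print(n, elimination_ops(n), round(n**3 / 3))
```

The exact total is n(n + 1)(2n + 1)/6 - n, whose leading term is n³/3, matching hint (4).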

MyMathLab Go to MyMathLab at www.mymathlab.com. You can practise many of this chapter's exercises as often as you
want. The guided solutions help you find an answer step by step. You'll find a personalized study plan available to
you, too!
CHAPTER 3

Matrices, Linear
Mappings, and Inverses
CHAPTER OUTLINE

3.1 Operations on Matrices


3.2 Matrix Mappings and Linear Mappings
3.3 Geometrical Transformations
3.4 Special Subspaces for Systems and Mappings: Rank Theorem
3.5 Inverse Matrices and Inverse Mappings
3.6 Elementary Matrices
3.7 LU-Decomposition

In many applications of linear algebra, we use vectors in ℝ^n to represent quantities,
such as forces, and then use the tools of Chapters 1 and 2 to solve various problems.
However, there are many times when it is useful to translate a problem into other linear
algebra objects. In this chapter, we look at two of these fundamental objects: matrices
and linear mappings. We now explore the properties of these objects and show how
they are tied together with the material from Chapters 1 and 2.

3.1 Operations on Matrices


We used matrices essentially as bookkeeping devices in Chapter 2. Matrices also
possess interesting algebraic properties, so they have wider and more powerful
applications than is suggested by their use in solving systems of equations. We now look at
some of these algebraic properties.

Equality, Addition, and Scalar Multiplication


of Matrices
We have seen that matrices are useful in solving systems of linear equations. However,
we shall see that matrices show up in different kinds of problems, and it is important
to be able to think of matrices as "things" that are worth studying and playing with,
and these things may have no connection with a system of equations. A matrix is a
rectangular array of numbers. A typical matrix has the form

A = [ a11  a12  ...  a1j  ...  a1n ]
    [ a21  a22  ...  a2j  ...  a2n ]
    [  .    .         .         .  ]
    [ ai1  ai2  ...  aij  ...  ain ]
    [  .    .         .         .  ]
    [ am1  am2  ...  amj  ...  amn ]


116 Chapter 3 Matrices, Linear Mappings, and Inverses

We say that A is an m × n matrix when A has m rows and n columns. Two matrices A
and B are equal if and only if they have the same size and their corresponding entries
are equal; that is, aij = bij for 1 ≤ i ≤ m and 1 ≤ j ≤ n.
For now, we will consider only matrices whose entries aij are real numbers. We
will look at matrices whose entries are complex numbers in Chapter 9.

Remark

We sometimes denote the ij-th entry of a matrix A by

(A)ij = aij

This may seem pointless for a single matrix, but it is useful when dealing with multiple
matrices.

Several special types of matrices arise frequently in linear algebra.

Definition An n x n matrix (where the number of rows of the matrix is equal to the number of
Square Matrix columns) is called a square matrix.

Definition A square matrix U is said to be upper triangular if the entries beneath the main
Upper Triangular diagonal are all zero; that is, uij = 0 whenever i > j. A square matrix L is said to
Lower Triangular be lower triangular if the entries above the main diagonal are all zero; that is,
lij = 0 whenever i < j.

EXAMPLE 1
[This example displays five small matrices whose entries are not legible: two upper
triangular matrices, two lower triangular matrices, and one matrix that is both upper
and lower triangular.]

Definition A matrix D that is both upper and lower triangular is called a diagonal matrix; that
Diagonal Matrix is, dij = 0 for all i ≠ j. We denote an n × n diagonal matrix by

D = diag(d11, d22, ..., dnn)

EXAMPLE 2
We denote the diagonal matrix D = [ √3  0 ] by D = diag(√3, -2), while
                                  [  0 -2 ]

diag(0, 3, 1) is the diagonal matrix [ 0 0 0 ]
                                     [ 0 3 0 ]
                                     [ 0 0 1 ]
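For readers following along in software, numpy's diag builds exactly this kind of matrix from a list of diagonal entries (numpy is an assumption here, not part of the text):

```python
import numpy as np

D = np.diag([0, 3, 1])     # diag(0, 3, 1) as a 3 x 3 matrix
print(D)

# A diagonal matrix is both upper and lower triangular:
print(np.array_equal(D, np.triu(D)), np.array_equal(D, np.tril(D)))
```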
Section 3.1 Operations on Matrices 117

Also, we can think of a vector in ℝ^n as an n × 1 matrix, called a column matrix.
For this reason, it makes sense to define operations on matrices to match those on
vectors in ℝ^n.

Definition
Addition and Scalar
Multiplication of Matrices
Let A and B be m × n matrices and t ∈ ℝ a scalar. We define addition of matrices by

(A + B)ij = (A)ij + (B)ij

and the scalar multiplication of matrices by

(tA)ij = t(A)ij

EXAMPLE 3 Perform the following operations.

(a) [the sum of two 2 × 2 matrices; entries not legible]

(b) [the difference of two 2 × 2 matrices; entries not legible]

(c) [5 times a 2 × 2 matrix; entries not legible]

Solution: In each case the definition is applied entry by entry: each entry of the sum
in (a) is the sum of the corresponding entries, and each entry of the product in (c) is
5 times the corresponding entry.

Note that matrix addition is defined only if the matrices are the same size.
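Entrywise addition and scalar multiplication are easy to experiment with in numpy (the matrices below are sample data of our own choosing; numpy is an assumption, not part of the text). numpy also refuses to add matrices of different sizes, matching the remark above.

```python
import numpy as np

A = np.array([[2, 1], [4, 3]])
B = np.array([[1, 0], [-2, 5]])

print(A + B)      # entrywise sum
print(5 * A)      # entrywise scalar multiple

# Adding matrices of different sizes is undefined:
C = np.array([[1, 2, 3], [4, 5, 6]])   # a 2 x 3 matrix
try:
    A + C
except ValueError:
    print("A + C is undefined: the sizes differ")
```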

Properties of Matrix Addition and Multiplication by Scalars We


now look at the properties of addition and scalar multiplication of matrices. It is very
important to notice that these are the exact same ten properties discussed in Section
1.2 for addition and scalar multiplication of vectors in ℝ^n.

Theorem 1 Let A, B, C be m x n matrices and let s, t be real scalars. Then


(1) A+ B is an m x n matrix (closed under addition)
(2) A+ B = B + A (addition is commutative)
(3) (A+ B) + C =A + (B + C) (addition is associative)
(4) There exists a matrix, denoted by Om,n , such that A+ Om,n = A; in particular,
Om,n is the m x n matrix with all entries as zero and is called the zero matrix
(zero vector)
(5) For each matrix A, there exists an m x n matrix (-A), with the property that
A +(-A) = Om,n; in particular, (-A) is defined by (-A)iJ = -(A)iJ (additive
inverses)
(6) sA is an m x n matrix (closed under scalar multiplication)
(7) s(tA) = (st)A (scalar multiplication is associative)
(8) (s+ t)A =sA + tA (a distributive law)
(9) s(A + B) =sA + sB (another distributive law)
(10) lA =A (scalar multiplicative identity)

These properties follow easily from the definitions of addition and scalar multiplication. The proofs are left to the reader.
Since we can now compute linear combinations of matrices, it makes sense to look at the set of all possible linear combinations of a set of matrices. And, as with vectors in R^n, this goes hand in hand with the concept of linear independence. We mimic the definitions we had for vectors in R^n.

Definition
Span
Let B = {A1, ..., Ak} be a set of m x n matrices. Then the span of B is defined as

Span B = {t1A1 + · · · + tkAk | t1, ..., tk ∈ R}

Definition
Linearly Independent
Linearly Dependent
Let B = {A1, ..., Ak} be a set of m x n matrices. Then B is said to be linearly independent if the only solution to the equation

t1A1 + · · · + tkAk = Om,n

is t1 = · · · = tk = 0; otherwise, B is said to be linearly dependent.

EXAMPLE 4
Determine if [1 2; 3 4] is in the span of B = {[1 1; 0 0], [1 0; 0 1], [0 1; 1 0], [0 1; 0 1]}.

Solution: We want to find t1, t2, t3, and t4 such that

t1[1 1; 0 0] + t2[1 0; 0 1] + t3[0 1; 1 0] + t4[0 1; 0 1] = [1 2; 3 4]
Section 3.1 Operations on Matrices 119

EXAMPLE 4 (continued)
Since two matrices are equal if and only if their corresponding entries are equal, this gives the system of linear equations

t1 + t2 = 1
t1 + t3 + t4 = 2
t3 = 3
t2 + t4 = 4

Row reducing the augmented matrix gives

[1 1 0 0 | 1]       [1 0 0 0 | -2]
[1 0 1 1 | 2]   →   [0 1 0 0 |  3]
[0 0 1 0 | 3]       [0 0 1 0 |  3]
[0 1 0 1 | 4]       [0 0 0 1 |  1]

We see that the system is consistent. Therefore, [1 2; 3 4] is in the span of B. In particular, we have t1 = -2, t2 = 3, t3 = 3, and t4 = 1.

EXAMPLE 5
Determine if the set B = {[3 2; 2 -1], [0 2; 2 2], [1 0; 0 0]} is linearly dependent or linearly independent.

Solution: We consider the equation

t1[3 2; 2 -1] + t2[0 2; 2 2] + t3[1 0; 0 0] = O2,2

Row reducing the coefficient matrix of the corresponding homogeneous system gives

[ 3 0 1]       [1 0 0]
[ 2 2 0]   →   [0 1 0]
[ 2 2 0]       [0 0 1]
[-1 2 0]       [0 0 0]

The only solution is t1 = t2 = t3 = 0, so B is linearly independent.
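Both of the computations above reduce to row reducing a matrix whose columns are the given matrices flattened into vectors. The following sketch (our own helper functions, using exact `Fraction` arithmetic; the matrices are illustrative and not necessarily the ones in the examples) shows how spanning and independence questions can be checked by rank:

```python
from fractions import Fraction

def flatten(M):
    # Read the entries of a matrix row by row into a single column vector.
    return [Fraction(x) for row in M for x in row]

def rank(columns):
    # Rank of the matrix whose columns are the given vectors, by Gaussian elimination.
    rows = [list(r) for r in zip(*columns)]   # transpose: work row by row
    r = 0
    for c in range(len(columns)):
        pivot = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

# A set of matrices is linearly independent iff the flattened columns have full rank.
B = [[[1, 1], [0, 0]], [[1, 0], [0, 1]], [[0, 1], [1, 0]], [[0, 1], [0, 1]]]
cols = [flatten(M) for M in B]
print(rank(cols) == len(B))                      # True: the set is linearly independent

# X is in Span B iff appending flatten(X) as an extra column does not raise the rank.
X = [[1, 2], [3, 4]]
print(rank(cols + [flatten(X)]) == rank(cols))   # True: X is in the span
```

The flattening step is harmless because addition and scalar multiplication of matrices are entrywise, exactly as for vectors in R^(mn).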

EXERCISE 1
Determine if the given set B of four 2 x 2 matrices is linearly dependent or linearly independent. Is the given matrix X in the span of B?
120 Chapter 3 Matrices, Linear Mappings, and Inverses

EXERCISE 2
Consider B = {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]}. Prove that B is linearly independent and show that Span B is the set of all 2 x 2 matrices. Compare B with the standard basis for R^4.

The Transpose of a Matrix

Definition
Transpose
Let A be an m x n matrix. Then the transpose of A is the n x m matrix, denoted A^T, whose ij-th entry is the ji-th entry of A. That is,

(A^T)_ij = (A)_ji

EXAMPLE 6
Determine the transpose of A = [-1 6 -4; 3 5 2] and B = [1; 0; -1].

Solution: A^T = [-1 3; 6 5; -4 2] and B^T = [1 0 -1].

Observe that taking the transpose of a matrix turns its rows into columns and its
columns into rows.
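In code, that row/column swap is one line. A minimal sketch (the helper name `transpose` is ours):

```python
def transpose(A):
    # (A^T)_ij = (A)_ji: rows become columns and columns become rows.
    return [list(col) for col in zip(*A)]

A = [[-1, 6, -4], [3, 5, 2]]
print(transpose(A))   # [[-1, 3], [6, 5], [-4, 2]]
```

Note that transposing twice recovers the original matrix, which is property (1) of the theorem below.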

Some Properties of the Transpose

How does the operation of transposition combine with addition and scalar multiplication?

Theorem 2
For any m x n matrices A and B and scalar s ∈ R, we have
(1) (A^T)^T = A
(2) (A + B)^T = A^T + B^T
(3) (sA)^T = sA^T

Proof:

2. ((A + B)^T)_ij = (A + B)_ji = (A)_ji + (B)_ji = (A^T)_ij + (B^T)_ij = (A^T + B^T)_ij.

3. ((sA)^T)_ij = (sA)_ji = s(A)_ji = s(A^T)_ij = (sA^T)_ij.

Section 3.1 Operations on Matrices 121

EXERCISE 3
For a 2 x 3 matrix A of your choice, verify that (A^T)^T = A and (3A)^T = 3A^T.

Remark

Since we always represent a vector in R^n as a column matrix, to represent a row of a matrix as a vector, we write the transpose of a column vector, v^T. For now, this will be our main use of the transpose; however, it will become much more important later in the book.

An Introduction to Matrix Multiplication


The operations are so far very natural and easy. Multiplication of two matrices is more
complicated, but a simple example illustrates that there is one useful natural definition.
Suppose that we wish to change variables; that is, we wish to work with new
variables y1 and Y2 that are defined in terms of the original variables x1 and x2 by the
equations

Y1 = a11x1 + a12X2
Y2 = a21X1 + a22x2

This is a system of linear equations like those in Chapter 2.

It is convenient to write these equations in matrix form. Let x = [x1; x2], y = [y1; y2], and A = [a11 a12; a21 a22] be the coefficient matrix. Then the change of variables equations can be written in the form y = Ax, provided that we define the product of A and x according to the following rule:

Ax = [a11 a12; a21 a22][x1; x2] = [a11x1 + a12x2; a21x1 + a22x2]    (3.1)

It is instructive to rewrite the entries in the right-hand matrix as dot products. Let a1 = [a11; a12] and a2 = [a21; a22], so that a1^T = [a11 a12] and a2^T = [a21 a22] are the rows of A. Then the equation becomes

Ax = [a1 · x; a2 · x]

Thus, in order for the right-hand side of the original equations to be represented correctly by the matrix product Ax, the entry in the first row of Ax must be the dot product of the first row of A (as a column vector) with the vector x; the entry in the second row must be the dot product of the second row of A (as a column vector) with x.
Suppose there is a second change of variables from y to z:

z1 = b11y1 + b12y2
z2 = b21y1 + b22y2
122 Chapter 3 Matrices, Linear Mappings, and Inverses

In matrix form, this is written z = By. Now suppose that these changes are performed one after the other. The values for y1 and y2 from the first change of variables are substituted into the second pair of equations:

z1 = b11(a11x1 + a12x2) + b12(a21x1 + a22x2)
z2 = b21(a11x1 + a12x2) + b22(a21x1 + a22x2)

After simplification, this can be written as

z = [b11a11 + b12a21   b11a12 + b12a22; b21a11 + b22a21   b21a12 + b22a22] x

We want this to be equivalent to z = By = BAx. Therefore, the product BA must be

BA = [b11a11 + b12a21   b11a12 + b12a22; b21a11 + b22a21   b21a12 + b22a22]    (3.2)

Thus, the product BA must be defined by the following rules:

• (BA)11 is the dot product of the first row of B and the first column of A.
• (BA)12 is the dot product of the first row of B and the second column of A.
• (BA)21 is the dot product of the second row of B and the first column of A.
• (BA)22 is the dot product of the second row of B and the second column of A.

EXAMPLE 7

[2 3; 4 1][5 1; -2 7] = [2(5) + 3(-2)   2(1) + 3(7); 4(5) + 1(-2)   4(1) + 1(7)] = [4 23; 18 11]

We now want to generalize matrix multiplication. It will be convenient to use b_i^T to represent the i-th row of B and a_j to represent the j-th column of A. Observe from our work above that we want the ij-th entry of BA to be the dot product of the i-th row of B and the j-th column of A. However, for this to be defined, b_i must have the same number of entries as a_j. Hence, the number of entries in the rows of the matrix B (that is, the number of columns of B) must be equal to the number of entries in the columns of A (that is, the number of rows of A). We can now make a precise definition.

Definition
Matrix Multiplication
Let B be an m x n matrix with rows b1^T, ..., bm^T and let A be an n x p matrix with columns a1, ..., ap. Then, we define BA to be the matrix whose ij-th entry is

(BA)_ij = b_i · a_j

Remark
If B is an m x n matrix and A is a p x q matrix, then BA is defined only if n = p. Moreover, if n = p, then the resulting matrix is m x q.

More simply stated, multiplication of two matrices can be performed only if the number of columns in the first matrix is equal to the number of rows in the second.
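The row-times-column rule, together with the size restriction in the remark, can be sketched in a few lines of plain Python (the helper names `dot` and `mat_mult` are ours):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def mat_mult(B, A):
    # (BA)_ij is the dot product of the i-th row of B with the j-th column of A;
    # defined only when the number of columns of B equals the number of rows of A.
    if len(B[0]) != len(A):
        raise ValueError("columns of B must equal rows of A")
    cols_of_A = list(zip(*A))
    return [[dot(row, col) for col in cols_of_A] for row in B]

print(mat_mult([[2, 3], [4, 1]], [[5, 1], [-2, 7]]))   # [[4, 23], [18, 11]]
```

The printed product agrees with the worked 2 x 2 computation above.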
Section 3.1 Operations on Matrices 123

EXAMPLE 8
Perform the following operations.

(a) [2 3 0 1; 4 -1 2 -1][3 1; 1 2; 2 3; 0 5]

Solution: [2 3 0 1; 4 -1 2 -1][3 1; 1 2; 2 3; 0 5] = [9 13; 15 3]

(b) [-1 3; 2 4; 0 1][1 2; 2 5]

Solution: [-1 3; 2 4; 0 1][1 2; 2 5] = [5 13; 10 24; 2 5]

EXAMPLE 9
The product [2 1; 2 -3][3 1; -3 2; 3 3] is not defined because the first matrix has two columns and the second matrix has three rows.

EXERCISE 4
Let A be the given 2 x 3 matrix and B the given 2 x 2 matrix. Calculate the following or explain why they are not defined.

(a) AB (b) BA (c) A^T A (d) BB^T

EXAMPLE 10
Let A = [1; 2; 3] and B = [6; 5; 4]. Compute A^T B.

Solution: A^T B = [1 2 3][6; 5; 4] = [1(6) + 2(5) + 3(4)] = [28]

Observe that if we let x = [1; 2; 3] and y = [6; 5; 4] be vectors in R^3, then x · y = 28. This matches the result in Example 10. This result should not be surprising since we have defined matrix multiplication in terms of the dot product. More generally, for any x, y ∈ R^n, we have

x^T y = x · y

where we interpret the 1 x 1 matrix on the left-hand side as a scalar. This formula will be used frequently later in the book.
Defining matrix multiplication with the dot product fits our view that the rows of
the coefficient matrix of a system of linear equations are the coefficients from each
equation. We now look at how we could define matrix multiplication by using our
alternate view of the coefficient matrix; in that case, the columns of the coefficient
matrix are the coefficients of each variable.
Observe that we can write equations (3.1) as

Ax = [a11x1 + a12x2; a21x1 + a22x2] = x1[a11; a21] + x2[a12; a22]

That is, we can view Ax as giving a linear combination of the columns of A. So, for an m x n matrix A with columns a1, ..., an and a vector x ∈ R^n, we have

Ax = [a1 · · · an][x1; ... ; xn] = x1a1 + · · · + xnan

Using this, observe that (3.2) can be written as

BA = [Ba1 Ba2]

Hence, in general, if A is an m x n matrix and B is a p x m matrix, then BA is the p x n matrix given by

BA = [Ba1 Ba2 · · · Ban]    (3.3)

Both interpretations of matrix multiplication will be very useful, so it is important to know and understand both of them.
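The two interpretations can be checked against each other numerically: computing Ax entry by entry with dot products and computing x1a1 + · · · + xnan as a combination of the columns must give the same vector. A sketch (our own helper names; the matrix and vector are illustrative):

```python
def mat_vec(A, x):
    # Row interpretation: each entry of Ax is a dot product of a row of A with x.
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def column_combo(A, x):
    # Column interpretation: Ax = x1*a1 + ... + xn*an, a combination of the columns.
    cols = list(zip(*A))
    return [sum(x[j] * cols[j][i] for j in range(len(x))) for i in range(len(A))]

A = [[1, 2, 0], [3, -1, 4]]
x = [2, 1, -1]
print(mat_vec(A, x), column_combo(A, x))   # both [4, 1]
```

Whichever interpretation is more convenient can be used; they always agree.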

Remark

We now see that linear combinations of vectors (and hence concepts such as spanning and linear independence), solving systems of linear equations, and matrix multiplication are all closely tied together. We will continue to see these connections later in this chapter and throughout the book.

Summation Notation and Matrix Multiplication

Some calculations with matrix products are better described in terms of summation notation:

Σ_{k=1}^{n} a_k = a_1 + a_2 + · · · + a_n

Let A be an m x n matrix with i-th row a_i^T = [a_i1 · · · a_in], and let B be an n x p matrix with j-th column b_j = [b_1j; ... ; b_nj]. Then the ij-th entry of AB is

(AB)_ij = a_i · b_j = a_i^T b_j = a_i1 b_1j + a_i2 b_2j + · · · + a_in b_nj = Σ_{k=1}^{n} (A)_ik (B)_kj

We can use this notation to help prove some properties of matrix multiplication.
Section 3.1 Operations on Matrices 125

Theorem 3
If A, B, and C are matrices of the correct size so that the required products are defined, and t ∈ R, then
(1) A(B + C) = AB + AC
(2) (A + B)C = AC + BC
(3) t(AB) = (tA)B = A(tB)
(4) A(BC) = (AB)C
(5) (AB)^T = B^T A^T

Each of these properties follows easily from the definition of matrix multiplication and properties of summation notation. However, the proofs are not particularly illuminating and so are omitted.

Important Facts The matrix product is not commutative: that is, in general, AB ≠ BA. In fact, if BA is defined, it is not necessarily true that AB is even defined. For example, if B is 2 x 2 and A is 2 x 3, then BA is defined, but AB is not. However, even if both AB and BA are defined, they are usually not equal. AB = BA is true only in very special circumstances.

EXAMPLE 11
Show that if A = [2 3; -1 5] and B = [-2 1; 3 7], then AB ≠ BA.

Solution: AB = [2 3; -1 5][-2 1; 3 7] = [5 23; 17 34],

but

BA = [-2 1; 3 7][2 3; -1 5] = [-5 -1; -1 44]

Therefore, AB ≠ BA.

The cancellation law is almost never valid for matrix multiplication: thus, if AB = AC, then we cannot guarantee that B = C.

EXAMPLE 12
Let A = [1 1; 1 1], B = [2 0; 1 1], and C = [1 1; 2 0]. Then,

AB = [1 1; 1 1][2 0; 1 1] = [3 1; 3 1] = [1 1; 1 1][1 1; 2 0] = AC

so AB = AC but B ≠ C.
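Both failures are easy to exhibit numerically with any matrix-product routine (the helper below is our own; the matrices are illustrative, chosen so that the checks come out as claimed):

```python
def mat_mult(B, A):
    cols = list(zip(*A))
    return [[sum(b * a for b, a in zip(row, col)) for col in cols] for row in B]

A = [[1, 1], [1, 1]]
B = [[2, 0], [1, 1]]
C = [[1, 1], [2, 0]]
print(mat_mult(A, B) == mat_mult(B, A))   # False: AB != BA in general
print(mat_mult(A, B) == mat_mult(A, C))   # True:  AB = AC even though B != C
```

One such computation is enough to show that neither commutativity nor cancellation can be assumed.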

Remark

The fact that we do not have the cancellation law for matrix multiplication comes from
the fact that we do not have division for matrices.

We must distinguish carefully between a general cancellation law and the following theorem, which we will use many times.
126 Chapter 3 Matrices, Linear Mappings, and Inverses

Theorem 4
If A and B are m x n matrices such that Ax = Bx for every x ∈ R^n, then A = B.

Note that it is the assumption that equality holds for every x that distinguishes this
from a cancellation law.

Proof: You are asked to prove Theorem 4, with hints, in Problem Dl.

Identity Matrix
We have seen that the zero matrix Om,n is the additive identity for addition of m x n matrices. However, since we also have multiplication of matrices, it is important to determine if we have a multiplicative identity and, if we do, what the multiplicative identity is. First, we observe that for there to exist a matrix A and a matrix I such that AI = A = IA, both A and I must be n x n matrices. The multiplicative identity I is the n x n matrix that has this property for all n x n matrices A.

To find how to define I, we begin with a simple case. Let A = [a b; c d]. We want to find a matrix I = [e f; g h] such that AI = A. By matrix multiplication, we get

[a b; c d] = [a b; c d][e f; g h] = [ae + bg  af + bh; ce + dg  cf + dh]

Thus, we must have a = ae + bg, b = af + bh, c = ce + dg, and d = cf + dh. Although this system of equations is not linear, it is still easy to solve. We find that we must have e = 1 = h and f = g = 0. Thus,

I = [1 0; 0 1] = diag(1, 1)

It is easy to verify that I also satisfies IA = A. Hence, I is the multiplicative identity for 2 x 2 matrices. We now extend this definition to the n x n case.

Definition
Identity Matrix
The n x n matrix I = diag(1, 1, ..., 1) is called the identity matrix.

EXAMPLE 13
The 3 x 3 identity matrix is I = diag(1, 1, 1) = [1 0 0; 0 1 0; 0 0 1].

The 4 x 4 identity matrix is I = diag(1, 1, 1, 1) = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1].
Section 3.1 Operations on Matrices 127

Remarks

1. In general, the size of I (the value of n) is clear from context. However, in some cases, we stress the size of the identity matrix by denoting it by In. For example, I2 is the 2 x 2 identity matrix, and Im is the m x m identity matrix.

2. The columns of the identity matrix should seem familiar. If {e1, ..., en} is the standard basis for R^n, then

I = [e1 e2 · · · en]

Theorem 5
If A is any m x n matrix, then Im A = A = A In.

You are asked to prove this theorem in Problem D2. Note that it immediately implies that In is the multiplicative identity for the set of n x n matrices.
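Theorem 5 is easy to spot-check numerically. A sketch (our own helpers; the 2 x 3 matrix is illustrative):

```python
def identity(n):
    # I = diag(1, ..., 1): the columns are the standard basis vectors e1, ..., en.
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def mat_mult(B, A):
    cols = list(zip(*A))
    return [[sum(b * a for b, a in zip(row, col)) for col in cols] for row in B]

A = [[2, -1, 3], [0, 4, 5]]            # a 2 x 3 example
print(mat_mult(identity(2), A) == A)   # True: Im A = A
print(mat_mult(A, identity(3)) == A)   # True: A In = A
```

Note that the two identity factors have different sizes (I2 on the left, I3 on the right), matching the statement of the theorem.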

Block Multiplication
Observe that in our second interpretation of matrix multiplication, equation (3.3), we calculated the product BA in blocks. That is, we computed the smaller matrix products Ba1, Ba2, ..., Ban and put these in the appropriate positions to create BA. This is a very simple example of block multiplication. Observe that we could also regard the rows of B as blocks and write

BA = [b1^T A; ... ; bp^T A]

There are more general statements about the products of two matrices, each of which has been partitioned into blocks. In addition to clarifying the meaning of some calculations, block multiplication is used in organizing calculations with very large matrices.
Roughly speaking, as long as the sizes of the blocks are chosen so that the products of the blocks are defined and fit together as required, block multiplication is defined by an extension of the usual rules of matrix multiplication. We will demonstrate this with an example.

EXAMPLE 14
Suppose that A is an m x n matrix, B is an n x p matrix, and A and B are partitioned into blocks as indicated:

A = [A1; A2],   B = [B1 B2]

Say that A1 is r x n so that A2 is (m - r) x n, while B1 is n x q and B2 is n x (p - q). Now, the product of a 2 x 1 matrix and a 1 x 2 matrix is given by

[x1; x2][y1 y2] = [x1y1 x1y2; x2y1 x2y2]

EXAMPLE 14 (continued)
So, for the partitioned block matrices, we have

AB = [A1; A2][B1 B2] = [A1B1 A1B2; A2B1 A2B2]

Observe that all the products are defined and the size of the resulting matrix is m x p, as desired.
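The block identity of Example 14 can be spot-checked numerically: multiply the full matrices, then assemble the four block products and compare. A sketch with a 3 x 2 matrix A split after its first row and a 2 x 3 matrix B split after its first column (helper names and matrices are ours, chosen only for illustration):

```python
def mat_mult(X, Y):
    cols = list(zip(*Y))
    return [[sum(a * b for a, b in zip(row, col)) for col in cols] for row in X]

A = [[1, 2], [3, 4], [5, 6]]                 # 3 x 2
B = [[1, 0, 2], [-1, 3, 1]]                  # 2 x 3
A1, A2 = [A[0]], A[1:]                       # A = [A1; A2], split after row 1
B1 = [row[:1] for row in B]                  # B = [B1 B2], split after column 1
B2 = [row[1:] for row in B]

# Assemble [A1B1 A1B2; A2B1 A2B2] block by block.
top = [r + s for r, s in zip(mat_mult(A1, B1), mat_mult(A1, B2))]
bottom = [r + s for r, s in zip(mat_mult(A2, B1), mat_mult(A2, B2))]
print((top + bottom) == mat_mult(A, B))      # True: the blocks assemble to AB
```

The same check works for any split positions, as long as the inner dimensions of each block product match.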

PROBLEMS 3.1
Practice Problems

A1 Perform the indicated operation (a sum, a scalar multiple, and a difference involving a scalar multiple) for the given matrices.

A2 Calculate each of the following products of the given matrices or explain why a product is not defined.

A3 Check that A(B + C) = AB + AC and that A(3B) = 3(AB) for the given matrices A, B, and C.

A4 For the given matrices A and B, check whether A + B and AB are defined. If so, check that (A + B)^T = A^T + B^T and (AB)^T = B^T A^T.

A5 Let A, B, C, and D be the given matrices. Determine the following products or state why they do not exist.
(a) AB (b) BA (c) AC (d) DC (e) CD (f) C^T D
(g) Verify that A(BC) = (AB)C
(h) Verify that (AB)^T = B^T A^T
(i) Without doing any arithmetic, determine D^T C

A6 Let A be the given matrix and x, y, and z the given vectors.
(a) Determine Ax, Ay, and Az.
(b) Use the result of (a) to determine the product of A and the matrix whose columns are x, y, and z.

A7 Let A be the given matrix and consider Ax as a linear combination of the columns of A to determine x if Ax = b, where b is each of the given vectors (a)-(d).

A8 Calculate the following products, where u is the given 2 x 1 column matrix and w is the given 3 x 1 column matrix.
(a) u[2 6] (b) [2 6]u (c) w[5 4 -3] (d) [5 4 -3]w

A9 Verify the given case of block multiplication by calculating both sides of the equation and comparing.

A10 Determine if the given 2 x 2 matrix is in the span of the given set of matrices.

A11 Determine if the given set of 2 x 2 matrices is linearly dependent or linearly independent.

A12 Let A = [a1 · · · an] and let {e1, ..., en} denote the standard basis for R^n. Prove that Ae_i = a_i.
Homework Problems
B1 Perform the indicated operation for the given matrices.

B2 Calculate each of the following products of the given matrices or explain why a product is not defined.

B3 Check that A(B + C) = AB + AC and that A(3B) = 3(AB) for the given matrices A, B, and C.

B4 For the given matrices A and B, check whether A + B and AB are defined. If so, check that (A + B)^T = A^T + B^T and (AB)^T = B^T A^T.

B5 Let A, B, C, and D be the given matrices. Determine the following products or state why they do not exist.
(a) AB (b) BA (c) AC (d) DC (e) DA (f) CD^T
(g) Verify that B(CA) = (BC)A
(h) Verify that (AB)^T = B^T A^T
(i) Without doing any arithmetic, determine D^T C

B6 Let A be the given matrix and x, y, and z the given vectors.
(a) Determine Ax, Ay, and Az.
(b) Use the result of (a) to determine the product of A and the matrix whose columns are x, y, and z.

B7 Let A be the given matrix and consider Ax as a linear combination of the columns of A to determine x if Ax = b, where b is each of the given vectors (a)-(c).

B8 Calculate the following products, where u and v are the given 2 x 1 column matrices.
(a) u[-2 4] (b) [-2 4]u (c) v[-3 2] (d) [-3 2]v

B9 Verify the given case of block multiplication by calculating both sides of the equation and comparing.

B10 Determine if the given 2 x 2 matrix is in the span of the given set of matrices.

B11 Determine if the given set of 2 x 2 matrices is linearly dependent or linearly independent.

Computer Problems
C1 Use a computer to check your results for Problems A2 and A5.

C2 Use a computer to determine the given matrix products and the transpose of the products. Note that the matrix [-1.02 3.47 -4.94; 3.33 5.83 2.29] appears in both products; you should not have to enter it twice.

Conceptual Problems
D1 Prove Theorem 4, using the following hints. To prove A = B, prove that A - B = Om,n; note that Ax = Bx for every x ∈ R^n if and only if (A - B)x = 0 for every x ∈ R^n. Now, suppose that Cx = 0 for every x ∈ R^n. Consider the case where x = e_i and conclude that each column of C must be the zero vector.

D2 Let A be an m x n matrix. Show that Im A = A and A In = A.

D3 Find a formula to calculate the ij-th entry of AA^T and of A^T A. Explain why it follows that if AA^T or A^T A is the zero matrix, then A is the zero matrix.

D4 (a) Construct a 2 x 2 matrix A that is not the zero matrix yet satisfies A^2 = O2,2.
(b) Find 2 x 2 matrices A and B with A ≠ B and neither A^2 = O2,2 nor B^2 = O2,2, such that A^2 - AB - BA + B^2 = O2,2.

D5 Find as many 2 x 2 matrices as you can that satisfy A^2 = I.

3.2 Matrix Mappings and Linear Mappings
Functions are a fundamental concept in mathematics. Recall that a function f is a rule that assigns to every element x of an initial set, called the domain of the function, a unique value y in another set, called the codomain of f. We say that f maps x to y or that y is the image of x under f, and we write f(x) = y. If f is a function with domain U and codomain V, then we say that f maps U to V and denote this by f : U → V. In your earlier mathematics, you met functions f : R → R such as f(x) = x^2 and looked at their various properties. In this section, we will start by looking at more general functions f : R^n → R^m, commonly called mappings or transformations. We will also look at a class of functions called linear mappings that are very important in linear algebra.

Matrix Mappings
Using the rule for matrix multiplication introduced in the preceding section, observe that for any m x n matrix A and vector x ∈ R^n, the product Ax is a vector in R^m. We see that this is behaving like a function whose domain is R^n and whose codomain is R^m.

Definition
Matrix Mapping
For any m x n matrix A, we define a function f_A : R^n → R^m, called the matrix mapping corresponding to A, by

f_A(x) = Ax

for any x ∈ R^n.
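A matrix mapping is literally a function built from a matrix. A minimal sketch (the helper name `matrix_mapping` and the matrix are ours, chosen for illustration):

```python
def matrix_mapping(A):
    # Return the function f_A with f_A(x) = Ax.
    def f(x):
        return [sum(a * b for a, b in zip(row, x)) for row in A]
    return f

f = matrix_mapping([[1, 2], [0, 3]])
print(f([1, 1]))   # [3, 3]
```

Note that the returned object is an ordinary function: it can be evaluated, composed, and compared like any other mapping from R^n to R^m.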

Remark

Although a matrix mapping sends vectors to vectors, it is much more common to view functions as mapping points to points. Thus, when dealing with mappings in this text, we will often write

f(x1, ..., xn) = (y1, ..., ym)

when, technically, it would be more correct to write

f([x1; ... ; xn]) = [y1; ... ; ym]

EXAMPLE 1
Let A = [-1 2; 3 1]. Find f_A(1, 2) and f_A(-1, 4).

Solution: We have

f_A(1, 2) = [-1 2; 3 1][1; 2] = [3; 5]   and   f_A(-1, 4) = [-1 2; 3 1][-1; 4] = [9; 1]

EXERCISE 1
For a 2 x 2 matrix A of your choice, find f_A(1, 0), f_A(0, 1), and f_A(2, 3).

What is the relationship between the value of f_A(2, 3) and the values of f_A(1, 0) and f_A(0, 1)?

EXERCISE 2
For a 2 x 4 matrix A of your choice, find f_A(-1, 1, 1, 0) and f_A(-3, 1, 0, 1).

Based on our results in Exercise 1, is such a relationship always true? A good way to explore this is to look at a more general example.

EXAMPLE 2
Let A = [a11 a12; a21 a22] and find the values of f_A(1, 0), f_A(0, 1), and f_A(x1, x2).

Solution: We have

f_A(1, 0) = A[1; 0] = [a11; a21]   and   f_A(0, 1) = A[0; 1] = [a12; a22]

Then we get

f_A(x1, x2) = A[x1; x2] = [a11x1 + a12x2; a21x1 + a22x2] = x1[a11; a21] + x2[a12; a22] = x1 f_A(1, 0) + x2 f_A(0, 1)

We can now clearly see the relationship between the image of the standard basis vectors in R^2 and the image of any vector x. We suspect that this works for any m x n matrix A.

Theorem 1
Let e1, e2, ..., en be the standard basis vectors of R^n, let A be an m x n matrix, and let f_A : R^n → R^m be the corresponding matrix mapping. Then, for any vector x ∈ R^n, we have

f_A(x) = x1 f_A(e1) + x2 f_A(e2) + · · · + xn f_A(en)

Proof: Let A = [a1 a2 · · · an]. Since e_i has 1 as its i-th entry and 0s elsewhere, we get f_A(e_i) = Ae_i = a_i. Thus, we have

f_A(x) = Ax = x1a1 + · · · + xnan = x1 f_A(e1) + · · · + xn f_A(en)

as required. •

Since the images of the standard basis vectors are just the columns of A, we see that the image of any vector x ∈ R^n is a linear combination of the columns of A. This should not be surprising as this is one of our interpretations of matrix multiplication. However, it does make us think about how a matrix mapping will affect a linear combination of vectors in R^n. For simplicity, we look at how it affects sums and scalar multiples separately.

Theorem 2
Let A be an m x n matrix with corresponding matrix mapping f_A : R^n → R^m. Then, for any x, y ∈ R^n and any t ∈ R, we have
(L1) f_A(x + y) = f_A(x) + f_A(y)
(L2) f_A(tx) = t f_A(x)

Proof: Let x, y ∈ R^n and t ∈ R. Using properties of matrix multiplication, we get

f_A(x + y) = A(x + y) = Ax + Ay = f_A(x) + f_A(y)

and

f_A(tx) = A(tx) = tAx = t f_A(x)  •

A function that satisfies property (L1) of Theorem 2 is said to preserve addition. Similarly, a function satisfying property (L2) of Theorem 2 is said to preserve scalar multiplication. Notice that a function that satisfies both properties will in fact preserve linear combinations; that is,

f(t1v1 + · · · + tkvk) = t1 f(v1) + · · · + tk f(vk)

We call such functions linear mappings.

Linear Mappings

Definition
Linear Mapping
A function L : R^n → R^m is called a linear mapping (or linear transformation) if for every x, y ∈ R^n and t ∈ R it satisfies the following properties:

(L1) L(x + y) = L(x) + L(y)
(L2) L(tx) = tL(x)

Definition
Linear Operator
A linear operator is a linear mapping whose domain and codomain are the same.

Theorem 2 shows that every matrix mapping is a linear mapping. In Section 1.4, we showed that proj_v and perp_v are linear operators from R^n to R^n.

Remarks

1. Linear transformation and linear mapping mean exactly the same thing. Some people prefer one or the other, but we shall use both.

2. Since a linear operator L has the same domain and codomain, we often speak of a linear operator L on R^n to indicate that L is a linear mapping from R^n to R^n.

3. For the time being, we have defined only linear mappings whose domain is R^n and whose codomain is R^m. In Chapter 4, we will look at other sets that can be the domain and/or codomain of linear mappings.

EXAMPLE 3
Show that the mapping f : R^2 → R^2 defined by f(x1, x2) = (2x1 + x2, -3x1 + 5x2) is linear.

Solution: To prove that it is linear, we must show that it preserves both addition and scalar multiplication. For any y, z ∈ R^2, we have

f(y + z) = f(y1 + z1, y2 + z2)
= [2(y1 + z1) + (y2 + z2); -3(y1 + z1) + 5(y2 + z2)]
= [2y1 + y2; -3y1 + 5y2] + [2z1 + z2; -3z1 + 5z2]
= f(y) + f(z)

EXAMPLE 3 (continued)
Thus, f preserves addition. Let y ∈ R^2 and t ∈ R be a scalar. Then we have

f(ty) = f(ty1, ty2)
= [2(ty1) + (ty2); -3(ty1) + 5(ty2)]
= t[2y1 + y2; -3y1 + 5y2]
= t f(y)

Hence, f also preserves scalar multiplication, and therefore f is a linear operator.

In the previous solution, notice that the proofs for addition and scalar multiplication are very similar. A natural question to ask is whether we can combine these into one step. The answer is yes! We can combine the two linearity properties into one statement:
L : R^n → R^m is a linear mapping if for every x and y in the domain and for any scalar t ∈ R,

L(tx + y) = tL(x) + L(y)

The proof that the definitions are equivalent is left for Problem D1.
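The combined condition also suggests a quick numerical screen: test L(tx + y) = tL(x) + L(y) on a few vectors and scalars. Passing such checks does not prove linearity, but any single failure disproves it. A sketch (our own code; the first mapping is the one from Example 3, the second is a deliberately nonlinear mapping):

```python
def looks_linear(L, samples):
    # Check L(t*x + y) == t*L(x) + L(y) on the given (t, x, y) triples.
    for t, x, y in samples:
        lhs = L([t * a + b for a, b in zip(x, y)])
        rhs = [t * a + b for a, b in zip(L(x), L(y))]
        if lhs != rhs:
            return False
    return True

samples = [(2, [1, 0], [0, 1]), (-3, [1, 2], [4, -1]), (5, [2, 2], [1, 1])]
f = lambda x: [2 * x[0] + x[1], -3 * x[0] + 5 * x[1]]   # linear (Example 3)
g = lambda x: [x[0] * x[1], 0]                          # not linear
print(looks_linear(f, samples), looks_linear(g, samples))   # True False
```

With exact integer arithmetic the linear map passes every sample, while the product map fails already on the first one.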

EXAMPLE 4
Determine if the mapping f : R^3 → R defined by f(x) = ||x|| is linear.

Solution: We must test whether f preserves addition and scalar multiplication. Let x, y ∈ R^3 and consider

f(x + y) = ||x + y||   and   f(x) + f(y) = ||x|| + ||y||

Are these equal? By the triangle inequality, we get

||x + y|| ≤ ||x|| + ||y||

and we expect equality only when one of x, y is a multiple of the other. Therefore, we believe that these are not always equal, and consequently f does not preserve addition.

To demonstrate this, we give a counterexample: if x = [1; 0; 0] and y = [0; 2; 0], then

f(x + y) = f(1, 2, 0) = ||[1; 2; 0]|| = √5


EXAMPLE 4 (continued)
but

f(x) + f(y) = ||[1; 0; 0]|| + ||[0; 2; 0]|| = 1 + 2 = 3

Thus, f(x + y) ≠ f(x) + f(y) for this pair of vectors x, y in R^3; hence f is not linear.

Note that one counterexample is enough to disqualify a mapping from being linear.

EXERCISE 3
Determine whether the following mappings are linear.

(a) f : R^2 → R^2 defined by f(x1, x2) = (x1^2, x1 + x2)

(b) g : R^2 → R^2 defined by g(x1, x2) = (x2, x1 - x2)

Is Every Linear Mapping a Matrix Mapping?

We saw that every matrix determines a corresponding linear mapping. It is natural to ask whether every linear mapping can be represented as a matrix mapping.
To see if this might be true, let's look at the linear mapping f from Example 3, defined by

f(x1, x2) = (2x1 + x2, -3x1 + 5x2)

If this is to be a matrix mapping, then from our work with matrix mappings, we know that the columns of the matrix must be the images of the standard basis vectors. We have f(1, 0) = [2; -3] and f(0, 1) = [1; 5]. So, taking these as columns, we get the matrix A = [2 1; -3 5]. Now observe that

Ax = [2 1; -3 5][x1; x2] = [2x1 + x2; -3x1 + 5x2] = f(x1, x2)

Hence, f can be represented as a matrix mapping.


T his example not only gives us a good reason to believe it is always true but
indicates how we can find the matrix for a given linear mapping L.

Theorem 3 If L : JR.11 ---7 JR.m is a linear mapping, then L can be represented as a matrix mapping,
with the corresponding m x n matrix [L] given by
Section 3.2 Matrix Mappings and Linear Mappings 137

Proof: Since L is linear, for any 1 E JR.11 we have

L(1) = L(x1e1 + + ..
x2e2 . + Xne11)
= X1L(e1) + + ... +
x2L(e2) X11L(e11)

= [LCe1) L(e2) ... L(e11)] [�1]


Xn
= [L]1

as required. •

Remarks

1. Combining this with Theorem 2, we have proven that a mapping is linear if and only if it is a matrix mapping.

2. The matrix [L] determined in this theorem depends on the standard basis. Consequently, the resulting matrix is often called the standard matrix of the linear mapping. Later, we shall see that we can use other bases and get different matrices corresponding to the same linear mapping.
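Theorem 3 is constructive: apply L to each standard basis vector and take the results as the columns of [L]. A sketch (the helper name `standard_matrix` is ours; as the theorem requires, L must be known to be linear for the resulting matrix to represent it):

```python
def standard_matrix(L, n):
    # Columns of [L] are L(e1), ..., L(en).
    cols = [L([1 if i == j else 0 for i in range(n)]) for j in range(n)]
    return [list(row) for row in zip(*cols)]   # assemble the columns into a matrix

L = lambda x: [2 * x[0] + x[1], -3 * x[0] + 5 * x[1]]   # the linear map of Example 3
print(standard_matrix(L, 2))   # [[2, 1], [-3, 5]]
```

This reproduces the matrix A = [2 1; -3 5] found above for the mapping of Example 3.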

EXAMPLE 5
Let v = [3; 4] and u = [1; 2]. Find the standard matrix of the mapping proj_v : R^2 → R^2 and use it to find proj_v(u).

Solution: Since proj_v is linear, we can apply Theorem 3 to find [proj_v]. The first column of the matrix is the image of the first standard basis vector under proj_v. Using the methods of Section 1.4, we get

proj_v(e1) = ((e1 · v) / ||v||^2) v = ((1(3) + 0(4)) / (3^2 + 4^2)) [3; 4] = [9/25; 12/25]

Similarly, the second column is the image of the second standard basis vector:

proj_v(e2) = ((e2 · v) / ||v||^2) v = ((0(3) + 1(4)) / 25) [3; 4] = [12/25; 16/25]

Hence, the standard matrix of the linear mapping is

[proj_v] = [proj_v(e1)  proj_v(e2)] = [9/25 12/25; 12/25 16/25]

Therefore, we have

proj_v(u) = [proj_v] u = [9/25 12/25; 12/25 16/25][1; 2] = [33/25; 44/25]
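The projection computation can be reproduced with exact fractions. A sketch (our own helper; the formula for column j follows from proj_v(e_j) = (v_j / ||v||^2) v):

```python
from fractions import Fraction

def proj_matrix(v):
    # Standard matrix of proj_v: column j is proj_v(e_j) = (v_j / ||v||^2) v.
    norm_sq = sum(a * a for a in v)
    cols = [[Fraction(vj * vi, norm_sq) for vi in v] for vj in v]
    return [list(row) for row in zip(*cols)]

P = proj_matrix([3, 4])
print(P)                                                   # entries 9/25, 12/25, 12/25, 16/25
u = [1, 2]
print([sum(p * x for p, x in zip(row, u)) for row in P])   # [Fraction(33, 25), Fraction(44, 25)]
```

Note that the resulting matrix is symmetric, which is a general feature of projection matrices onto a vector.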

EXAMPLE 6
Let G : R^3 → R^2 be defined by G(x1, x2, x3) = (x1, x2). Prove that G is linear and find the standard matrix of G.

Solution: We first prove that G is linear. For any y, z ∈ R^3 and t ∈ R, we have

G(ty + z) = G(ty1 + z1, ty2 + z2, ty3 + z3)
= [ty1 + z1; ty2 + z2]
= t[y1; y2] + [z1; z2]
= tG(y) + G(z)

Hence, G is linear. Thus, we can apply Theorem 3 to find its standard matrix. The images of the standard basis vectors are

G(1, 0, 0) = [1; 0],   G(0, 1, 0) = [0; 1],   G(0, 0, 1) = [0; 0]

Thus, we have

[G] = [G(1, 0, 0)  G(0, 1, 0)  G(0, 0, 1)] = [1 0 0; 0 1 0]

Did we really need to prove that G was linear first? Couldn't we have just constructed [G] using the images of the standard basis vectors and then said that because G is a matrix mapping, it is linear? No! You must always check the conditions of the theorem before using it. Theorem 3 says that if f is linear, then [f] can be constructed from the images of the standard basis vectors. The converse is not true!
For example, consider the mapping f(x1, x2) = (x1x2, 0). The images of the standard basis vectors are f(1, 0) = [0; 0] and f(0, 1) = [0; 0], so we can construct the matrix [0 0; 0 0]. But this matrix does not represent the mapping! In particular, observe that

f(1, 1) = [1; 0]   but   [0 0; 0 0][1; 1] = [0; 0]

Hence, even though we can create a matrix using the images of the standard basis vectors, it does not imply that the matrix will represent that mapping, unless we already know the mapping is linear.

EXERCISE 4
Let H : R^4 → R^2 be defined by H(x1, x2, x3, x4) = (x3 + x4, x1). Prove that H is linear and find the standard matrix of H.

Compositions and Linear Combinations of Linear Mappings

We now look at the usual operations on functions and how these affect linear mappings.

"
Definition Let L and M be linear mappings from JR to JRm and let t E JR be a scalar. We define
Operations on Linear (L + M) to be the mapping from JR" to JRm such that
Mappings

(L + M)(x) = L(x) + M(x)


1
We define (tL) to be the mapping from JR 1 to JRm such that

(tL)(x) = tL(x)

"
Theorem 4 If Land M are linear mappings from JR to JRm and t E JR, then (L + M) and (tL) are
linear mappings.

Proof: Let 1, y E JR11 and s E R Then,

(tL)(sx + y) = tL(sx + y) = t[sL(x) + L(J)] = stL(x) + tL(J) = s(tL)(x) + (tL)(J)

Hence, (tL) is linear. Determining that L + M is linear is left for Problem D2. •

The properties we proved in Theorem 4 should seem familiar. They are properties (1) and (6) of Theorem 1.2.1 and Theorem 3.1.1. That is, the set of all possible linear mappings from R^n to R^m is closed under scalar multiplication and addition. It can be shown that the other eight properties in these theorems also hold.

Definition
Composition of Linear Mappings

Let L : R^n → R^m and M : R^m → R^p be linear mappings. The composition M ∘ L : R^n → R^p is defined by

    (M ∘ L)(x) = M(L(x))

for all x ∈ R^n.

Note that the definition makes sense only if the domain of the second map M
contains the codomain of the first map L, as we are evaluating M at L(x). Moreover,
observe that the order of the mappings in the definition is important.

EXERCISE 5  Prove that if L : R^n → R^m and M : R^m → R^p are linear mappings, then M ∘ L is a linear mapping.

Since compositions and linear combinations of linear mappings are linear mappings, it is natural to ask about the standard matrix of these new linear mappings.
140 Chapter 3 Matrices, Linear Mappings, and Inverses

Theorem 5  Let L : R^n → R^m, M : R^n → R^m, and N : R^m → R^p be linear mappings and t ∈ R. Then,

    [L + M] = [L] + [M],    [tL] = t[L],    [N ∘ L] = [N][L]

Proof: We will prove [tL] = t[L] and [N ∘ L] = [N][L]. You are asked to prove that [L + M] = [L] + [M] in Problem D2.

Observe that for any x ∈ R^n, we have

    [tL]x = (tL)(x) = tL(x) = t[L]x

Hence, [tL] = t[L] by Theorem 3.1.4.

We first observe that since [N] is p × m and [L] is m × n, the matrix product [N][L] is defined. For any x ∈ R^n,

    [N ∘ L]x = (N ∘ L)(x) = N(L(x)) = N([L]x) = [N][L]x

Thus, [N ∘ L] = [N][L] by Theorem 3.1.4. •
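Theorem 5 is easy to sanity-check numerically. In the following Python sketch, the matrices [L] and [N] are arbitrary examples of our own choosing; composing the mappings gives the same result as multiplying by the product matrix:

```python
# Numerical check of Theorem 5: [N o L] = [N][L].
def mat_vec(A, x):
    """Multiply matrix A by vector x."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def mat_mul(A, B):
    """Multiply matrices A and B."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

L = [[1, 2], [0, 1], [3, -1]]   # [L] : R^2 -> R^3
N = [[2, 0, 1], [1, 1, 0]]      # [N] : R^3 -> R^2

NL = mat_mul(N, L)              # candidate for [N o L]

x = [4, -5]
# (N o L)(x) computed two ways must agree:
assert mat_vec(N, mat_vec(L, x)) == mat_vec(NL, x)
print(NL)   # [[5, 3], [1, 3]]
```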

EXAMPLE 7  Let L and M be the linear operators on R^2 whose standard matrices [L] and [M] are given. Determine [M ∘ L].

Solution: We find the images of the standard basis vectors L(1, 0), L(0, 1), M(1, 0), and M(0, 1), which give the columns of [L] and [M]. Thus,

    [M ∘ L] = [M][L] = [1 0]
                       [0 1]

In Example 7, M ∘ L is the mapping such that (M ∘ L)(x) = x for all x ∈ R^2. Moreover, the standard matrix of the mapping is the identity matrix. We thus make the following definition.

Definition
Identity Mapping

The linear mapping Id : R^n → R^n defined by

    Id(x) = x

is called the identity mapping.



Remark

Since the mappings L and M in Example 7 satisfy M ∘ L = Id, they are called inverses, as are the matrices [M] and [L]. We will look at inverse mappings and matrices in Section 3.5.

PROBLEMS 3.2
Practice Problems

A1 Let A = [-2 3; 3 0; 4 -6] and let fA be the corresponding matrix mapping.
(a) Determine the domain and codomain of fA.
(b) Determine fA(2, -5) and fA(-3, 4).
(c) Find the images of the standard basis vectors for the domain under fA.
(d) Determine fA(x).
(e) Check your answers in (c) and (d) by calculating [fA]x using Theorem 3.

A2 Let A = […] and let fA be the corresponding matrix mapping.
(a) Determine the domain and codomain of fA.
(b) Determine fA(2, -2, 3, 1) and fA(-3, 1, 4, 2).
(c) Find the images of the standard basis vectors for the domain under fA.
(d) Determine fA(x).
(e) Check your answers in (c) and (d) by calculating [fA]x using Theorem 3.

A3 For each of the following mappings, state the domain and codomain. Determine whether the mapping is linear and either prove that it is linear or give a counterexample to show why it cannot be linear.
(a) f(x1, x2) = (sin x1, e^(x2))
(b) g(x1, x2) = (2x1 + 3x2, x1 - x2)
(c) h(x1, x2) = (2x1 + 3x2, x1 - x2, x1x2)
(d) k(x1, x2, x3) = (x1 + x2, 0, x2 - x3)
(e) ℓ(x1, x2, x3) = (x2, |x1|)
(f) m(x1) = (x1, 1, 0)

A4 For each of the following linear mappings, determine the domain, the codomain, and the standard matrix of the mapping.
(a) L(x1, x2, x3) = (2x1 - 3x2 + x3, x2 - 5x3)
(b) K(x1, x2, x3, x4) = (5x1 + 3x3 - x4, x2 - 7x3 + 3x4)
(c) M(x1, x2, x3, x4) = (x1 - x3 + x4, x1 + 2x2 - x3 - 3x4, x2 + x3, x1 - x2 + x3 - x4)

A5 Suppose that S and T are linear mappings with matrices [S] = […] and [T] = […].
(a) Determine the domain and codomain of each mapping.
(b) Determine the matrices that represent (S + T) and (2S - 3T).

A6 Suppose that S and T are linear mappings with matrices [S] = […] and [T] = […].
(a) Determine the domain and codomain of each mapping.
(b) Determine the matrices that represent S ∘ T and T ∘ S.

A7 Let L, M, and N be linear mappings with matrices [L] = […], [M] = […], and [N] = […]. Determine which of the following compositions are defined, and determine the domain and codomain of those that are defined.
(a) L ∘ M  (b) M ∘ L  (c) L ∘ N  (d) N ∘ L  (e) M ∘ N  (f) N ∘ M

A8 Let v = […]. Determine the matrix [proj_v].

A9 Let v = […]. Determine the matrix [perp_v].

A10 Let v = […]. Determine the matrix [proj_v].

Homework Problems
B1 Let A = […] and let fA be the corresponding matrix mapping.
(a) Determine the domain and codomain of fA.
(b) Determine fA(3, 4, -5) and fA(-2, 1, -4).
(c) Find the images of the standard basis vectors for the domain under fA.
(d) Determine fA(x).
(e) Check your answers in (c) and (d) by calculating [fA]x using Theorem 3.

B2 Let A = […] and let fA be the corresponding matrix mapping.
(a) Determine the domain and codomain of fA.
(b) Determine fA(-4, 2, 1) and fA(3, -3, 2).
(c) Find the images of the standard basis vectors for the domain under fA.
(d) Determine fA(x).
(e) Check your answers in (c) and (d) by calculating [fA]x using Theorem 3.

B3 For each of the following mappings, state the domain and codomain. Determine whether the mapping is linear and either prove that it is linear or give a counterexample to show why it cannot be linear.
(a) f(x1, x2) = (2x2, x1 - x2)
(b) g(x1, x2) = (cos x2, x1 x2^2)
(c) h(x1, x2, x3) = (0, 0, x1 + x2 + x3)
(d) k(x1, x2, x3) = (0, 0, 0)
(e) f(x1, x2, x3, x4) = (x1, 1, x4)
(f) m(x1, x2, x3, x4) = (x1 + x2 - x3)

B4 For each of the following linear mappings, determine the domain, the codomain, and the standard matrix of the mapping.
(a) L(x1, x2, x3) = (2x1 - 3x2, x2, 4x1 - 5x2)
(b) K(x1, x2, x3, x4) = (2x1 - x3 + 3x4, -x1 - 2x2 + 2x3 + x4, 3x2 + x3)
(c) M(x1, x2, x3) = (x3 - x1, 0, 5x1 + x2)

B5 Suppose that S and T are linear mappings with matrices [S] = […] and [T] = […].
(a) Determine the domain and codomain of each mapping.
(b) Determine the matrices that represent (T + S) and (-S + 2T).

B6 Suppose that S and T are linear mappings with matrices [S] = […] and [T] = […].
(a) Determine the domain and codomain of each mapping.
(b) Determine the matrices that represent S ∘ T and T ∘ S.

B7 Let L, M, and N be linear mappings with matrices [L] = […], [M] = […], and [N] = […]. Determine which of the following compositions are defined, and determine the domain and codomain of those that are defined.
(a) L ∘ M  (b) M ∘ L  (c) L ∘ N  (d) N ∘ L  (e) M ∘ N  (f) N ∘ M

B8 Let v = […]. Determine the matrix [proj_v].

B9 Let v = […]. Determine the matrix [perp_v].

B10 Let v = […]. Determine the matrix [proj_v].

B11 Let v = […]. Determine the matrix [perp_v].

Conceptual Problems

D1 Let L : R^n → R^m. Show that for any x, y ∈ R^n and t ∈ R, L satisfies L(x + y) = L(x) + L(y) and L(tx) = tL(x) if and only if L(tx + y) = tL(x) + L(y).

D2 Let L : R^n → R^m and M : R^n → R^m be linear mappings. Prove that (L + M) is linear and that [L + M] = [L] + [M].

D3 Let v ∈ R^n be a fixed vector and define a mapping DOT_v by DOT_v(x) = v · x. Verify that DOT_v is a linear mapping. What is its codomain? Verify that the matrix of this linear mapping can be written as v^T.

D4 If u is a unit vector, show that [proj_u] = u u^T.

D5 Let v ∈ R^3 be a fixed vector and define a mapping CROSS_v by CROSS_v(x) = v × x. Verify that CROSS_v is a linear mapping and determine its codomain and standard matrix.

3.3 Geometrical Transformations


Geometrical transformations have long been of great interest to mathematicians. They also have many important applications. Physicists and engineers often rely on simple geometrical transformations to gain understanding of the properties of materials or structures they wish to examine. For example, structural engineers use stretches, shears, and rotations to understand the deformation of materials. Material scientists use rotations and reflections to analyse crystals and other fine structures. Many of these simple geometrical transformations in R^2 and R^3 are linear. The following is a brief partial catalogue of some of these transformations and their matrix representations. (proj_v and perp_v belong to the list of geometrical transformations too, but they were discussed in Chapter 1 and so are not included here.)

Rotations in the Plane


R_θ : R^2 → R^2 is defined to be the transformation that rotates x counterclockwise through angle θ to the image R_θ(x). See Figure 3.3.1. Is R_θ linear? Since a rotation does not change lengths, we get the same result if we first multiply a vector x by a scalar t and then rotate through angle θ, or if we first rotate through θ and then multiply by t. Thus, R_θ(tx) = tR_θ(x) for any t ∈ R and any x ∈ R^2. Since the shape of a parallelogram is not altered by a rotation, the picture for the parallelogram rule of addition should be unchanged under a rotation, so R_θ(x + y) = R_θ(x) + R_θ(y). Thus,

Figure 3.3.1  Counterclockwise rotation through angle θ in the plane.

R_θ is linear. A more formal proof is obtained by showing that R_θ can be represented as a matrix mapping.

Assuming that R_θ is linear, we can use the definition of the standard matrix to calculate [R_θ]. From Figure 3.3.2, we see that

    R_θ(1, 0) = [cos θ; sin θ],    R_θ(0, 1) = [-sin θ; cos θ]

Hence,

    [R_θ] = [cos θ  -sin θ]
            [sin θ   cos θ]

It follows from the calculation of the product [R_θ]x that

    R_θ(x) = [x1 cos θ - x2 sin θ]
             [x1 sin θ + x2 cos θ]

Figure 3.3.2  Images of the standard basis vectors under R_θ: R_θ(1, 0) = (cos θ, sin θ) and R_θ(0, 1) = (-sin θ, cos θ).

EXAMPLE 1  What is the matrix of the rotation of R^2 through angle 2π/3?

Solution: Since cos(2π/3) = -1/2 and sin(2π/3) = √3/2,

    [R_{2π/3}] = [-1/2  -√3/2]
                 [√3/2  -1/2 ]
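A quick numerical check of the rotation matrix formula, in Python (the helper functions are our own, not from the text):

```python
import math

# Rotation in the plane: [R_theta] = [[cos t, -sin t], [sin t, cos t]].
def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def mat_vec(A, x):
    """Multiply a 2x2 matrix by a vector."""
    return [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

R = rotation(2 * math.pi / 3)        # the matrix of Example 1
assert math.isclose(R[0][0], -0.5)
assert math.isclose(R[1][0], math.sqrt(3) / 2)

y = mat_vec(R, [1, 0])               # rotate (1, 0) through 2*pi/3
assert math.isclose(math.hypot(y[0], y[1]), 1.0)   # rotations preserve length
print(y)
```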

EXERCISE 1  Determine [R_{π/4}] and use it to calculate R_{π/4}(1, 1). Illustrate with a sketch.

Rotation Through Angle θ About the x3-axis in R^3

Figure 3.3.3 demonstrates a counterclockwise rotation with respect to the right-handed standard basis. This rotation leaves x3 unchanged, so that if the transformation is denoted R, then R(0, 0, 1) = (0, 0, 1). Together with the previous case, this tells us that the matrix of this rotation is

    [R] = [cos θ  -sin θ  0]
          [sin θ   cos θ  0]
          [0       0      1]

These ideas can be adapted to give rotations about the other coordinate axes. Is it possible to determine the matrix of a rotation about an arbitrary axis in R^3? We shall see how to do this in Chapter 7.

Figure 3.3.3  A right-handed counterclockwise rotation about the x3-axis in R^3.

Stretches  Imagine that all lengths in the x1-direction in the plane are stretched by a scalar factor t > 0, while lengths in the x2-direction are left unchanged (Figure 3.3.4). This linear transformation, called a "stretch by factor t in the x1-direction," has matrix

    [t  0]
    [0  1]

(If t < 1, you might prefer to call this a shrink.) It should be obvious that stretches can also be defined in the x2-direction and in higher dimensions. Stretches are important in understanding the deformation of solids.

Contractions and Dilations  If a linear operator T : R^2 → R^2 has matrix

    [t  0]
    [0  t]

with t > 0, then for any x, T(x) = tx, so that this transformation stretches vectors in all directions by the same factor. Thus, for example, a circle of radius 1 centred at the origin is mapped to a circle of radius t centred at the origin. If 0 < t < 1, such a transformation is called a contraction; if t > 1, it is a dilation.

Figure 3.3.4 A stretch by a factor t in the x1 -direction.

Shears  Sometimes a force applied to a rectangle will cause it to deform into a parallelogram, as shown in Figure 3.3.5. The change can be described by the transformation S : R^2 → R^2 such that S(2, 0) = (2, 0) and S(0, 1) = (s, 1). Although the deformation of a real solid may be more complicated, it is usual to assume that the transformation S is linear. Such a linear transformation is called a shear in the direction of x1 by amount s. Since the action of S on the standard basis vectors is known, we find that its matrix is

    [1  s]
    [0  1]

Figure 3.3.5  A shear in the direction of x1 by amount s: the rectangle with corners (0, 0), (2, 0), (0, 1), (2, 1) is mapped to the parallelogram with corners (0, 0), (2, 0), (s, 1), (2 + s, 1).
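The effect of a shear on the rectangle of Figure 3.3.5 can be sketched in Python; the value s = 3 below is an arbitrary choice of ours:

```python
# Apply the shear with matrix [[1, s], [0, 1]] to the corners of the
# rectangle with vertices (0,0), (2,0), (0,1), (2,1).
s = 3
S = [[1, s], [0, 1]]

def mat_vec(A, x):
    """Multiply a 2x2 matrix by a vector."""
    return tuple(sum(A[i][j] * x[j] for j in range(2)) for i in range(2))

corners = [(0, 0), (2, 0), (0, 1), (2, 1)]
images = [mat_vec(S, c) for c in corners]
print(images)   # [(0, 0), (2, 0), (3, 1), (5, 1)] -- a parallelogram
```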

Reflections in Coordinate Axes in R^2 or Coordinate Planes in R^3

Let R : R^2 → R^2 be a reflection in the x1-axis (see Figure 3.3.6). Then each vector corresponding to a point above the axis is mapped by R to the mirror image vector below. Hence, R(x1, x2) = (x1, -x2), and it follows that this transformation is linear with matrix

    [1   0]
    [0  -1]

Similarly, a reflection in the x2-axis has the matrix

    [-1  0]
    [ 0  1]

Next, consider the reflection T : R^3 → R^3 that reflects in the x1x2-plane (that is, the plane x3 = 0). Points above the plane are reflected to points below the plane. The matrix of this reflection is

    [1  0   0]
    [0  1   0]
    [0  0  -1]

Figure 3.3.6  A reflection in R^2 over the x1-axis.

EXERCISE 2  Write the matrices for the reflections in the other two coordinate planes in R^3.

General Reflections  We consider only reflections in (or "across") lines in R^2 or planes in R^3 that pass through the origin. Reflections in lines or planes not containing the origin involve translations (which are not linear) as well as linear mappings.

Consider the plane in R^3 with equation n · x = 0. Since a reflection is related to proj_n, a reflection in the plane with normal vector n will be denoted refl_n. If a vector p corresponds to a point P that does not lie in the plane, its image under refl_n is the vector that corresponds to the point on the opposite side of the plane, lying on a line through P perpendicular to the plane of reflection, at the same distance from the plane as P. Figure 3.3.7 shows reflection in a line. From the figure, we see that

    refl_n(p) = p - 2 proj_n(p)

Since proj_n is linear, it is easy to see that refl_n is also linear.

Figure 3.3.7  A reflection in R^2 over the line with normal vector n.



It is important to note that refl_n is a reflection with normal vector n. The calculations for reflection in a line in R^2 are similar to those for a plane, provided that the equation of the line is given in scalar form n · x = 0. If the vector equation of the line is given as x = td, then either we must find a normal vector n and proceed as above, or, in terms of the direction vector d, the reflection will map p to (p - 2 perp_d(p)).
EXAMPLE 2  Consider a reflection refl_n : R^3 → R^3 over the plane with normal vector n. Determine the matrix [refl_n].

Solution: The equation for refl_n(p) can be written as

    refl_n(p) = Id(p) - 2 proj_n(p) = (Id + (-2) proj_n)(p)

Thus,

    [refl_n] = [Id] + (-2)[proj_n] = I - 2[proj_n]
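The formula [refl_n] = I - 2[proj_n], with [proj_n] = (n n^T)/(n · n), can be turned into a short computation. In the Python sketch below, the normal vector n is an arbitrary example of ours, and exact fractions are used to avoid rounding:

```python
from fractions import Fraction

# Standard matrix of the reflection in the plane through the origin with
# normal vector n, via [refl_n] = I - 2 (n n^T)/(n . n).
def reflection_matrix(n):
    d = sum(c * c for c in n)          # n . n
    size = len(n)
    return [[(1 if i == j else 0) - Fraction(2 * n[i] * n[j], d)
             for j in range(size)] for i in range(size)]

def mat_vec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

n = [1, -2, 2]
F = reflection_matrix(n)

# The normal vector is reversed, and vectors in the plane are fixed:
assert mat_vec(F, n) == [-c for c in n]
assert mat_vec(F, [2, 1, 0]) == [2, 1, 0]   # (2, 1, 0) . n = 0
print(F)
```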

PROBLEMS 3.3
Practice Problems

A1 Determine the matrices of the rotations in the plane through the following angles.
(a) […]  (b) π  (c) -[…]  (d) […]

A2 (a) In the plane, what is the matrix of a stretch S by a factor 5 in the x2-direction?
(b) Calculate the composition of S followed by a rotation through angle θ.
(c) Calculate the composition of S following a rotation through angle θ.

A3 Determine the matrices of the following reflections in R^2.
(a) R is a reflection in the line x1 + 3x2 = 0.
(b) S is a reflection in the line 2x1 = x2.

A4 Determine the matrix of the reflections in the following planes in R^3.
(a) x1 + x2 + x3 = 0
(b) 2x1 - 2x2 - x3 = 0

A5 (a) Let D : R^3 → R^3 be the dilation with factor t = 5 and let inj : R^3 → R^4 be defined by inj(x1, x2, x3) = (x1, x2, 0, x3). Determine the matrix of inj ∘ D.
(b) Let P : R^3 → R^2 be defined by P(x1, x2, x3) = (x2, x3) and let S be the shear in R^3 such that S(x1, x2, x3) = (x1, x2, x3 + 2x1). Determine the matrix of P ∘ S.
(c) Can you define a shear T : R^2 → R^2 such that T ∘ P = P ∘ S, where P and S are as in part (b)?
(d) Let Q : R^3 → R^2 be defined by Q(x1, x2, x3) = (x1, x2). Determine the matrix of Q ∘ S, where S is the mapping in part (b).

Homework Problems

B1 Determine the matrices of the rotations in the plane through the following angles.
(a) -[…]  (b) -π  (c) […]  (d) -[…]

B2 (a) In the plane, what is the matrix of a stretch S by a factor 0.6 in the x2-direction?
(b) Calculate the composition of S followed by a rotation through angle θ.
(c) Calculate the composition of S following a rotation through angle […].

B3 Determine the matrices of the following reflections in R^2.
(a) R is a reflection in the line x1 - 5x2 = 0.
(b) S is a reflection in the line 3x1 + 4x2 = 0.

B4 Determine the matrix of the reflections in the following planes in R^3.
(a) x1 - 3x2 - x3 = 0
(b) 2x1 + x2 - x3 = 0

B5 (a) Let C : R^3 → R^3 be the contraction with factor 1/3 and let inj : R^3 → R^5 be defined by inj(x1, x2, x3) = (0, x1, 0, x2, x3). Determine the matrix of inj ∘ C.
(b) Let S : R^3 → R^3 be the shear defined by S(x1, x2, x3) = (x1, x2 - 2x3, x3). Determine the matrices C ∘ S and S ∘ C, where C is the contraction in part (a).
(c) Let T : R^3 → R^3 be the shear defined by T(x1, x2, x3) = (x1 + 3x2, x2, x3). Determine the matrices S ∘ T and T ∘ S, where S is the mapping in part (b).

Conceptual Problems

D1 Verify that for rotations in the plane, [R_α ∘ R_θ] = [R_α][R_θ] = [R_{α+θ}].

D2 In Problem A3, [R] = [4/5 -3/5; -3/5 -4/5] and [S] = [-3/5 4/5; 4/5 3/5] are reflection matrices. Calculate [R ∘ S] and verify that it can be identified as the matrix of a rotation. Determine the angle of the rotation. Draw a picture illustrating how the composition of these reflections is a rotation.

D3 In R^3, calculate the matrix of the composition of a reflection in the x2x3-plane followed by a reflection in the x1x2-plane and identify it as a rotation about some coordinate axis. What is the angle of the rotation?

D4 (a) Construct a 2 × 2 matrix A ≠ I such that A^3 = I. (Hint: Think geometrically.)
(b) Construct a 2 × 2 matrix A ≠ I such that A^5 = I.

D5 From geometrical considerations, we know that refl_n ∘ refl_n = Id. Verify the corresponding matrix equation. (Hint: [refl_n] = I - 2[proj_n], and proj_n satisfies the projection property from Section 1.4.)

3.4 Special Subspaces for Systems and Mappings: Rank Theorem
In the preceding two sections, we have seen how to represent any linear mapping as a matrix mapping. We now use subspaces of R^n to explore this further by examining the connection between the properties of L and its standard matrix [L] = A. This will also allow us to show how the solutions of the systems of equations Ax = b and Ax = 0 are related.

Recall from Section 1.2 that a subspace of R^n is a non-empty subset of R^n that is closed under addition and closed under scalar multiplication. Moreover, we proved that Span{v1, ..., vk} is a subspace of R^n. Throughout this section, L will always denote a linear mapping from R^n to R^m, and A will denote the standard matrix of L.

Solution Space and Nullspace

Theorem 1  Let A be an m × n matrix. The set S = {x ∈ R^n | Ax = 0} of all solutions to a homogeneous system Ax = 0 is a subspace of R^n.

Proof: We have 0 ∈ S since A0 = 0.
Let x, y ∈ S. Then A(x + y) = Ax + Ay = 0 + 0 = 0, so we have x + y ∈ S.
Let x ∈ S and t ∈ R. Then A(tx) = tAx = t0 = 0. Therefore, tx ∈ S.
So, by definition, S is a subspace of R^n. •

From this result, we make the following definition.

Definition
Solution Space

The set S = {x ∈ R^n | Ax = 0} of all solutions to a homogeneous system Ax = 0 is called the solution space of the system Ax = 0.

EXAMPLE 1  Find the solution space of the homogeneous system x1 + 2x2 - 3x3 = 0.

Solution: We can solve this very easily by using the methods of Chapter 2. In particular, since x1 = -2x2 + 3x3, setting x2 = s and x3 = t gives the general solution

    x = s[-2; 1; 0] + t[3; 0; 1],   s, t ∈ R

or

    Span{(-2, 1, 0), (3, 0, 1)}
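With x2 = s and x3 = t free, one choice of spanning vectors for this solution space is (-2, 1, 0) and (3, 0, 1). A quick Python check that these, and any linear combination of them, satisfy the equation:

```python
# Verify the spanning vectors for the solution space of x1 + 2x2 - 3x3 = 0.
def lhs(x):
    return x[0] + 2 * x[1] - 3 * x[2]

v1 = (-2, 1, 0)
v2 = (3, 0, 1)
assert lhs(v1) == 0 and lhs(v2) == 0

# The solution space is closed under linear combinations (s = 4, t = -7 here):
s, t = 4, -7
combo = tuple(s * a + t * b for a, b in zip(v1, v2))
assert lhs(combo) == 0
print(combo)   # (-29, 4, -7)
```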

EXERCISE 1  Let A = […]. Find the solution space of Ax = 0.

Notice that in both of these problems, the solution space is displayed automatically as the span of a set of linearly independent vectors.

For the linear mapping L with standard matrix A = [L], we see that L(x) = Ax, by definition of the standard matrix. Hence, the vectors x such that L(x) = 0 are exactly the same as the vectors satisfying Ax = 0. Thus, the set of all vectors x such that L(x) = 0 also forms a subspace of R^n.

Definition
Nullspace

The nullspace of a linear mapping L is the set of all vectors whose image under L is the zero vector 0. We write

    Null(L) = {x ∈ R^n | L(x) = 0}

Remark

The word kernel (and the notation ker(L) = {x ∈ R^n | L(x) = 0}) is often used in place of nullspace.

EXAMPLE 2  Let v = [2; -1; 3]. Find the nullspace of proj_v : R^3 → R^3.

Solution: Since vectors orthogonal to v are mapped to 0 by proj_v, the nullspace of proj_v is the set of all vectors orthogonal to v. That is, it is the plane passing through the origin with normal vector v. Hence, we get

    Null(proj_v) = {[x1; x2; x3] ∈ R^3 | 2x1 - x2 + 3x3 = 0}



EXAMPLE 3  Let L : R^2 → R^3 be defined by L(x1, x2) = (2x1 - x2, 0, x1 + x2). Find Null(L).

Solution: We have [x1; x2] ∈ Null(L) if L(x1, x2) = (0, 0, 0). For this to be true, we must have

    2x1 - x2 = 0
    x1 + x2 = 0

We see that the only solution to this homogeneous system is x = 0. Thus, Null(L) = {0}.
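A small brute-force check of Example 3 in Python (scanning a grid of integer candidates; this illustrates the conclusion rather than proving it):

```python
# Null(L) for L(x1, x2) = (2*x1 - x2, 0, x1 + x2): the system
# 2*x1 - x2 = 0, x1 + x2 = 0 forces x1 = x2 = 0 (add the equations: 3*x1 = 0).
def L(x1, x2):
    return (2 * x1 - x2, 0, x1 + x2)

# Scan a small integer grid: only (0, 0) maps to the zero vector.
kernel = [(a, b) for a in range(-3, 4) for b in range(-3, 4)
          if L(a, b) == (0, 0, 0)]
print(kernel)   # [(0, 0)]
```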

EXERCISE 2

To match our work with linear mappings, we make the following definition for
matrices.

Definition
Nullspace

Let A be an m × n matrix. Then the nullspace of A is

    Null(A) = {x ∈ R^n | Ax = 0}

It should be clear that for any linear mapping L : R^n → R^m, Null(L) = Null([L]).

Solution Set of Ax = b

Next, we want to consider solutions of a non-homogeneous system Ax = b, b ≠ 0, and compare this solution set with the solution space of the corresponding homogeneous system Ax = 0 (that is, the system with the same coefficient matrix A).

EXAMPLE 4  Find the general solution of the system x1 + 2x2 - 3x3 = 5.

Solution: Setting x2 = s and x3 = t, the general solution of x1 + 2x2 - 3x3 = 5 is

    x = [5; 0; 0] + s[-2; 1; 0] + t[3; 0; 1],   s, t ∈ R

EXERCISE 3  Let A = […] and b = […]. Find the general solution of Ax = b.

Observe that in Example 4 and Exercise 3, the general solution is obtained from the solution space of the corresponding homogeneous problem (Example 1 and Exercise 1, respectively) by a translation. We prove this result in the following theorem.

Theorem 2  Let p be a solution of the system of linear equations Ax = b, b ≠ 0.
(1) If v is any other solution of the same system, then A(p - v) = 0, so that p - v is a solution of the corresponding homogeneous system Ax = 0.
(2) If h is any solution of the corresponding system Ax = 0, then p + h is a solution of the system Ax = b.

Proof: (1) Suppose that Av = b. Then A(p - v) = Ap - Av = b - b = 0.

(2) Suppose that Ah = 0. Then A(p + h) = Ap + Ah = b + 0 = b. •


The solution p of the non-homogeneous system is sometimes called a particular solution of the system. Theorem 2 can thus be restated as follows: any solution of the non-homogeneous system can be obtained by adding a solution of the corresponding homogeneous system to a particular solution.
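This structure is easy to verify numerically for the system of Example 4, x1 + 2x2 - 3x3 = 5 (the particular choices of p and h below are our own):

```python
# Theorem 2 for x1 + 2x2 - 3x3 = 5: (particular solution) + (homogeneous
# solution) is again a solution of the non-homogeneous system.
def lhs(x):
    return x[0] + 2 * x[1] - 3 * x[2]

p = (5, 0, 0)                          # particular solution: lhs(p) == 5
h = (-2, 4, 2)                         # homogeneous solution: lhs(h) == 0
assert lhs(p) == 5 and lhs(h) == 0

total = tuple(a + b for a, b in zip(p, h))
assert lhs(total) == 5                 # p + h also solves the system
print(total)   # (3, 4, 2)
```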

Range of L and Columnspace of A

Definition
Range

The range of a linear mapping L : R^n → R^m is defined to be the set

    Range(L) = {L(x) ∈ R^m | x ∈ R^n}

EXAMPLE 5  Let v ∈ R^3 be a non-zero vector and consider the linear mapping proj_v : R^3 → R^3. By definition, every image of this mapping is a multiple of v, so the range of the mapping is the set of all multiples of v. On the other hand, the range of perp_v is the set of all vectors orthogonal to v. Note that in each of these cases, the range is a subset of the codomain.

EXAMPLE 6  If L is a rotation, reflection, contraction, or dilation in R^3, then, because of the geometry of the mapping, it is easy to see that the range of L is all of R^3.

EXAMPLE 7  Let L : R^2 → R^3 be defined by L(x1, x2) = (2x1 - x2, 0, x1 + x2). Find Range(L).

Solution: By definition of the range, if L(x) is any vector in the range, then

    L(x) = [2x1 - x2; 0; x1 + x2]

Using vector operations, we can write this as

    L(x) = x1[2; 0; 1] + x2[-1; 0; 1]

This is true for any x1, x2 ∈ R, and so Range(L) = Span{[2; 0; 1], [-1; 0; 1]}.

EXERCISE 4  Let L : R^3 → R^2 be defined by L(x1, x2, x3) = (x1 - x2, -2x1 + 2x2 + x3). Find Range(L).

It is natural to ask whether the range of L can easily be described in terms of the matrix A of L. Observe that

    L(x) = Ax = [a1 ··· an][x1; ...; xn] = x1 a1 + ··· + xn an

Thus, the image of x under L is a linear combination of the columns of the matrix A.

Definition
Columnspace

The columnspace of an m × n matrix A is the set Col(A) defined by

    Col(A) = {Ax ∈ R^m | x ∈ R^n}

Notice that our second interpretation of matrix-vector multiplication tells us that Ax is a linear combination of the columns of A. Thus, the columnspace of A is the set of all possible linear combinations of the columns of A. In particular, it is the subspace of R^m spanned by the columns of A. Moreover, if L : R^n → R^m is a linear mapping, then Range(L) = Col(A).

EXAMPLE 8  If A = [a1 a2 a3] is a 2 × 3 matrix with columns a1, a2, a3, and B is a 3 × 2 matrix with columns b1, b2, then

    Col(A) = Span{a1, a2, a3}   and   Col(B) = Span{b1, b2}

EXAMPLE 9  If L is the mapping with standard matrix A = [a1 a2], then

    Range(L) = Col(A) = Span{a1, a2}

EXERCISE 5  Find the standard matrix A of L(x1, x2, x3) = (x1 - x2, -2x1 + 2x2 + x3) and show that Range(L) = Col(A).

The range of a linear mapping L with standard matrix A is also related to the system of equations Ax = b.

Theorem 3  The system of equations Ax = b is consistent if and only if b is in the range of the linear mapping L with standard matrix A (or, equivalently, if and only if b is in the columnspace of A).

Proof: If there exists a vector x such that Ax = b, then b = Ax = L(x), and hence b is in the range of L. Similarly, if b is in the range of L, then there exists a vector x such that b = L(x) = Ax. •

EXAMPLE 10  Suppose that L is a linear mapping with standard matrix A. Determine whether given vectors c and d are in the range of L.

Solution: c is in the range of L if and only if the system Ax = c is consistent. Similarly, d is in the range of L if and only if Ax = d is consistent. Since the coefficient matrix is the same for the two systems, we can answer both questions simultaneously by row reducing the doubly augmented matrix [A | c | d].

By considering the reduced matrix corresponding to [A | c] (ignore the last column), we see that the system Ax = c is consistent, so c is in the range of L. The reduced matrix corresponding to [A | d] shows that Ax = d is inconsistent and hence d is not in the range of L.

Rowspace of A

The idea of the rowspace of a matrix A is similar to the idea of the columnspace.

Definition
Rowspace

Given an m × n matrix A, the rowspace of A is the subspace spanned by the rows of A (regarded as vectors) and is denoted Row(A).

EXAMPLE 11  If A is a 2 × 3 matrix with rows a1^T, a2^T, and B is a 3 × 2 matrix with rows b1^T, b2^T, b3^T, then

    Row(A) = Span{a1, a2}   and   Row(B) = Span{b1, b2, b3}

To write a mathematical definition of the rowspace of A, we require linear combinations of the rows of A. But matrix-vector multiplication only gives us a linear combination of the columns of a matrix. However, we recall that the transpose of a matrix turns rows into columns. Thus, we can precisely define the rowspace of A by

    Row(A) = {A^T x ∈ R^n | x ∈ R^m}

We now prove an important result about the rowspaces of row equivalent matrices.

Theorem 4  If the m × n matrix A is row equivalent to the matrix B, then Row(A) = Row(B).

Proof: We will show that applying each of the three elementary row operations does not change the rowspace. Let the rows of A be denoted a1^T, ..., am^T and the rows of B be denoted b1^T, ..., bm^T.

Suppose that B is obtained from A by interchanging two rows of A. Then, except for the order, the rows of A are the same as the rows of B; hence Row(A) = Row(B).

Suppose that B is obtained from A by multiplying the i-th row of A by a non-zero constant t. Then,

    Row(B) = Span{b1, ..., bm} = Span{a1, ..., t ai, ..., am}
           = {c1 a1 + ··· + ci(t ai) + ··· + cm am | ci ∈ R}
           = {c1 a1 + ··· + (ci t) ai + ··· + cm am | ci ∈ R}
           = Span{a1, ..., am} = Row(A)

Now, suppose that B is obtained from A by adding t times the i-th row to the j-th row. Then,

    Row(B) = Span{b1, ..., bm} = Span{a1, ..., aj + t ai, ..., am}
           = {c1 a1 + ··· + ci ai + ··· + cj(aj + t ai) + ··· + cm am | ci ∈ R}
           = {c1 a1 + ··· + (ci + cj t) ai + ··· + cj aj + ··· + cm am | ci ∈ R}
           = Span{a1, ..., am} = Row(A)

By considering a sequence of elementary row operations, we see that row equivalent matrices must have the same rowspace. •

Bases for Row(A), Col(A), and Null(A)

Recall from Section 1.2 that we always want to write a spanning set with as few vectors in the set as possible. We saw in Section 1.2 that this occurs when the set is linearly independent. Thus, we defined a basis for a subspace S to be a linearly independent spanning set. Moreover, in Section 2.3, we defined the dimension of the subspace to be the number of vectors in any basis.

Basis of the Rowspace of a Matrix  We now determine how to find bases for the rowspace, columnspace, and nullspace of a matrix.

Theorem 5  Let B be the reduced row echelon form of an m × n matrix A. Then the non-zero rows of B form a basis for Row(A), and hence the dimension of Row(A) equals the rank of A.

Proof: By Theorem 4, we have Row(B) = Row(A). Hence, the non-zero rows of B form a spanning set for the rowspace of A. Thus, we just need to prove that these non-zero rows are linearly independent. Suppose that B has r non-zero rows b1^T, ..., br^T and consider

    t1 b1 + ··· + tr br = 0    (3.4)

Observe that if B has a leading 1 in the j-th row, then by definition of reduced row echelon form, all other rows must have a zero in the column of that leading 1. Hence, we must have tj = 0 in (3.4). It follows that the rows are linearly independent and thus form a basis for Row(B) = Row(A). Moreover, since there are r non-zero rows in B, the rank of A is r, and so the dimension of the rowspace equals the rank of A. •

EXAMPLE 12  To find a basis for Row(A) for a given matrix A, row reduce A to its reduced row echelon form. Hence, by Theorem 5, the non-zero rows of the reduced row echelon form give a basis for Row(A).
EXERCISE 6  Let A = […]. Find a basis for Row(A).

Basis of the Columnspace of a Matrix  What about a basis for the columnspace of a matrix A? It is remarkable that the same row reduction that gives the basis for the rowspace of A also indicates how to find a basis for the columnspace of A. However, the method is more subtle and requires a little more attention.

Again, let B be the reduced row echelon form of A. Recall that the whole point of the method of row reduction is that a vector x satisfies Ax = 0 if and only if it satisfies Bx = 0. That is, if we let a1, ..., an denote the columns of A and b1, ..., bn denote the columns of B, then

    x1 a1 + ··· + xn an = 0   if and only if   x1 b1 + ··· + xn bn = 0

So, any statement about the linear dependence of the columns of A is true if and only if the same statement is true for the corresponding columns of B.

Theorem 6  Suppose that B is the reduced row echelon form of A. Then the columns of A that correspond to the columns of B with leading 1s form a basis of the columnspace of A. Hence, the dimension of the columnspace equals the rank of A.

Proof: For any m × n matrix B = [b1 ··· bn] in reduced row echelon form, the set of columns containing leading 1s is linearly independent, as they are standard basis vectors from R^m. Moreover, if bi is a column of B that does not contain a leading 1, then, by definition of the reduced row echelon form, it can be written as a linear combination of the columns containing leading 1s. Therefore, from our argument above, the corresponding column ai in A can be written as a linear combination of the columns in A that correspond to the columns containing leading 1s in B. Thus, we can remove this column from the spanning set without changing the set it spans, by Theorem 1.2.3. We continue to do this until we have removed all the columns of A that correspond to columns of B that do not have leading 1s, and we get a basis for the columnspace of A. •

EXAMPLE 13

Let A =
[1 2 1 2]
[1 2 2 1]
[2 4 2 3]
[3 6 4 3]
. Find a basis for Col(A).

Solution: By row reducing A, we get

[1 2 1 2]     [1 2 0 0]
[1 2 2 1]  →  [0 0 1 0]
[2 4 2 3]     [0 0 0 1]
[3 6 4 3]     [0 0 0 0]

The first, third, and fourth columns of the reduced row echelon form of A are linearly
independent. Therefore, by Theorem 6, the first, third, and fourth columns of matrix A
form a basis for Col(A). Thus, a basis for Col(A) is

{ [1]   [1]   [2] }
{ [1] , [2] , [1] }
{ [2]   [2]   [3] }
{ [3]   [4]   [3] }

Notice in Example 13 that every vector in the columnspace of the reduced row
echelon form of A has a last coordinate 0, which is not true in the columnspace of A,
so the two columnspaces are not equal. Thus, the first, third, and fourth columns of the
reduced row echelon form of A do not form a basis for Col(A).
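Theorem 6's recipe (row reduce, note which columns get leading 1s, then take those columns from A itself rather than from the RREF) can be sketched in Python as follows; the sample matrix here is our own illustration, not necessarily one from the text:

```python
from fractions import Fraction

def rref_with_pivots(rows):
    """Return (RREF, pivot column indices) for a matrix given as a list of rows."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return m, pivots

A = [[1, 2, 1, 2],
     [1, 2, 2, 1],
     [2, 4, 2, 3],
     [3, 6, 4, 3]]                                  # hypothetical sample matrix
_, pivots = rref_with_pivots(A)
col_basis = [[row[c] for row in A] for c in pivots]  # columns of A itself, not of the RREF
print(pivots, col_basis)
```

Note that the basis columns are pulled from A, not from its reduced row echelon form, exactly because the two columnspaces are generally different.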

EXERCISE 7
Let A be the given matrix. Find a basis for Col(A).

There is an alternative procedure for finding a basis for the columnspace of a matrix
A, which uses the fact that the columnspace of a matrix A is equal to the rowspace of
A^T. However, the basis obtained in this manner is sometimes not as useful, as it may
not consist of the columns of A.

EXERCISE 8   Find a basis for the columnspace of the matrix A from Example 13 by finding a basis
for the rowspace of A^T.

Basis of the Nullspace of a Matrix   In Section 2.2 we saw that if the
rank of A was r, then the general solution of the homogeneous system Ax = 0 was
automatically expressed as a spanning set of n − r vectors. We can now show that these
spanning vectors are linearly independent, so that the dimension of the nullspace of A
is n − r. Since this quantity is important, we make the following definition.

Definition   Let A be an m × n matrix. We call the dimension of the nullspace of A the nullity of A
Nullity      and denote it by nullity(A).

EXAMPLE 14   Consider the homogeneous system Ax = 0, where the coefficient matrix

A = [1 2 0 3 4]
    [0 0 1 5 6]

is already in reduced row echelon form. By finding a basis for Null(A), determine the
nullity of A and relate it to rank(A).

Solution: The general solution is

x = t1 [-4]   + t2 [-3]   + t3 [-2]
       [ 0]        [ 0]        [ 1]
       [-6]        [-5]        [ 0]
       [ 0]        [ 1]        [ 0]
       [ 1]        [ 0]        [ 0]

Thus, these three vectors form a spanning set for the nullspace of A. We now check for
linear independence.
Let us look closely at the coordinates of the general solution x corresponding to
the free variables (x2, x4, and x5 in this example):

x = t1 [*]   + t2 [*]   + t3 [*]   =  [ *]
       [0]        [0]        [1]      [t3]
       [*]        [*]        [*]      [ *]
       [0]        [1]        [0]      [t2]
       [1]        [0]        [0]      [t1]

Clearly, this linear combination is the zero vector only if t1 = t2 = t3 = 0, so the
spanning vectors are linearly independent and hence form a basis for the nullspace of
A. It follows that nullity(A) = 3 = (# of columns) − rank(A). In particular, it is the
number of free variables in the system.

Following the method in Example 14, we prove the following theorem.

Theorem 7 Let A be an m x n matrix with rank(A) = r. Then the spanning set for the general
solution of the homogeneous system Ax = 0 obtained by the method in Chapter 2
is a basis for Null(A) and the nullity of A is n - r.

Proof: Let {v1, ..., vn−r} be a spanning set for the general solution of Ax = 0 obtained
in the usual manner and consider

t1v1 + · · · + tn−rvn−r = 0    (3.5)

Then, the coefficients ti are just the parameters of the free variables. Thus, the coordinate
associated with the i-th parameter is non-zero only in the i-th vector. Hence, the
only possible solution of (3.5) is t1 = · · · = tn−r = 0. Therefore, the set is a linearly
independent spanning set for Null(A) and thus forms a basis for Null(A). Thus, the
nullity of A is n − r, as required. •

Putting Theorem 6 and Theorem 7 together gives the following important result.

Theorem 8 [Rank Theorem]


If A is any m x n matrix, then

rank(A) + nullity(A) = n
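The Rank Theorem can be checked mechanically: read a nullspace basis off the RREF (one vector per free column, as in the proof of Theorem 7) and compare rank + nullity with the number of columns. A rough Python sketch, using a hypothetical 2 × 5 matrix that is already in reduced row echelon form:

```python
from fractions import Fraction

def rref_with_pivots(rows):
    """Return (RREF, pivot column indices) for a matrix given as a list of rows."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return m, pivots

def nullspace_basis(rows):
    """One basis vector per free column, read directly off the RREF."""
    R, pivots = rref_with_pivots(rows)
    n = len(rows[0])
    basis = []
    for f in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[f] = Fraction(1)                  # set this free variable to 1 ...
        for r, p in enumerate(pivots):
            v[p] = -R[r][f]                 # ... and back-substitute the pivot variables
        basis.append(v)
    return basis

A = [[1, 2, 0, 3, 4],
     [0, 0, 1, 5, 6]]                       # hypothetical matrix, already in RREF
basis = nullspace_basis(A)
rank = len(A[0]) - len(basis)
assert rank + len(basis) == len(A[0])       # the Rank Theorem: rank + nullity = n
```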

EXAMPLE 15
Find a basis for the rowspace, columnspace, and nullspace of the given 3 × 3 matrix A and
verify the Rank Theorem in this case.

Solution: Row reducing A to its reduced row echelon form leaves two non-zero rows.
Thus, those two non-zero rows form a basis for the rowspace of A. Also, the first and
second columns of the reduced row echelon form of A have leading 1s, so the corresponding
columns from A form a basis for the columnspace of A. Thus, since the rank of A is equal
to the dimension of the columnspace (or rowspace), the rank of A is 2.

By back-substitution, we find that the general solution of Ax = 0 is x = tv, t ∈ R,
for a single non-zero vector v. Hence, {v} is a basis for the nullspace of A. Thus, we have
nullity(A) = 1 and

rank(A) + nullity(A) = 2 + 1 = 3

as predicted by the Rank Theorem.



EXERCISE 9
Find a basis for the rowspace, columnspace, and nullspace of the given matrix A and
verify the Rank Theorem in this case.

A Summary of Facts About Rank


For an m × n matrix A:
the rank of A = the number of leading 1s in the reduced row echelon form of A
             = the number of non-zero rows in any row echelon form of A
             = dim Row(A)
             = dim Col(A)
             = n − dim Null(A)

PROBLEMS 3.4
Practice Problems

A1 Let L be the linear mapping whose matrix A and vectors y1 and y2 are given in the text.
(a) Is y1 in the range of L? If so, find x such that L(x) = y1.
(b) Is y2 in the range of L? If so, find x such that L(x) = y2.

A2 Find a basis for the range and a basis for the nullspace of each of the following linear mappings.
(a) L(x1, x2, x3) = (2x1, -x2 + 2x3)
(b) M(x1, x2, x3, x4) = (x4, x3, 0, x2, x1 + x2 - x3)

A3 Determine a matrix of a linear mapping L : R^2 → R^3 whose nullspace is the span of a given vector and whose range is the span of a given vector.

A4 Determine a matrix of a linear mapping L : R^2 → R^3 whose nullspace is the span of a given vector and whose range is the span of a given vector.

A5 Suppose that each of the matrices (a)-(d) given in the text is the coefficient matrix of a homogeneous system of equations. State the following.
(i) The number of variables in the system
(ii) The rank of the matrix
(iii) The dimension of the solution space

A6 For each of the matrices given in the text, determine a basis for the rowspace, a subset of the columns that form a basis for the columnspace, and a basis for the nullspace. Verify the Rank Theorem.

A7 By geometrical arguments, give a basis for the nullspace and a basis for the range of the following linear mappings, where v is a given vector in R^3.
(a) proj_v : R^3 → R^3
(b) perp_v : R^3 → R^3
(c) refl_v : R^3 → R^3

A8 The matrix A given in the text has reduced row echelon form R (also given).
(a) If the rowspace of A is a subspace of some R^n, what is n?
(b) Without calculation, give a basis for Row(A); outline the theory that explains why this is a basis.
(c) If the columnspace of A is a subspace of some R^m, what is m?
(d) Without calculation, give a basis for the columnspace of A. Why is this the required basis?
(e) Determine the general solution of the system Ax = 0 and, hence, obtain a spanning set for the solution space.
(f) Explain why the spanning set in (e) is in fact a basis for the solution space.
(g) Verify that the rank of A plus the dimension of the solution space of the system Ax = 0 is equal to the number of variables in the system.
Homework Problems

B1 Let L be the linear mapping whose matrix and vectors y1 and y2 are given in the text.
(a) Is y1 in the range of L? If so, find x such that L(x) = y1.
(b) Is y2 in the range of L? If so, find x such that L(x) = y2.

B2 Find a basis for the range and a basis for the nullspace of each of the following linear mappings.
(a) L(x1, x2) = (x1, x1 + 2x2, 3x2)
(b) M(x1, x2, x3, x4) = (x1 + x4, x2 - 2x3, x1 - 2x2 + 3x3)

B3 Determine a matrix of a linear mapping L : R^2 → R^2 whose nullspace is the span of a given vector and whose range is the span of a given vector.

B4 Determine a matrix of a linear mapping L : R^2 → R^3 whose nullspace is the span of a given vector and whose range is the span of a given vector.

B5 Suppose that each of the matrices (a)-(d) given in the text is the coefficient matrix of a homogeneous system of equations. State the following.
(i) The number of variables in the system
(ii) The rank of the matrix
(iii) The dimension of the solution space

B6 For each of the matrices given in the text, determine a basis for the rowspace, a subset of the columns that form a basis for the columnspace, and a basis for the nullspace. Verify the Rank Theorem.

B7 By geometrical arguments, give a basis for the nullspace and a basis for the range of the following linear mappings, where v is a given vector in R^3.
(a) proj_v : R^3 → R^3
(b) perp_v : R^3 → R^3
(c) refl_v : R^3 → R^3

B8 The matrix A given in the text has reduced row echelon form R (also given).
(a) If the rowspace of A is a subspace of some R^n, what is n?
(b) Without calculation, give a basis for Row(A); outline the theory that explains why this is a basis.
(c) If the columnspace of A is a subspace of some R^m, what is m?
(d) Without calculation, give a basis for the columnspace of A. Why is this the required basis?
(e) Determine the general solution of the system Ax = 0 and, hence, obtain a spanning set for the solution space.
(f) Explain why the spanning set in (e) is in fact a basis for the solution space.
(g) Verify that the rank of A plus the dimension of the solution space of the system Ax = 0 is equal to the number of variables in the system.

Conceptual Problems

D1 Let L : R^n → R^m be a linear mapping. Prove that

dim(Range(L)) + dim(Null(L)) = n

D2 Suppose that L : R^n → R^m and M : R^m → R^p are linear mappings.
(a) Show that the range of M ∘ L is a subspace of the range of M.
(b) Give an example such that the range of M ∘ L is not equal to the range of M.
(c) Show that the nullspace of L is a subspace of the nullspace of M ∘ L.

D3 (a) If A is a 5 × 7 matrix and rank(A) = 4, then what is the nullity of A, and what is the dimension of the columnspace of A?
(b) If A is a 5 × 4 matrix, then what is the largest possible dimension of the nullspace of A? What is the largest possible rank of A?
(c) If A is a 4 × 5 matrix and nullity(A) = 3, then what is the dimension of the rowspace of A?

D4 Let A be an n × n matrix such that A^2 = O (the n × n zero matrix). Prove that the columnspace of A is a subset of the nullspace of A.

3.5 Inverse Matrices and Inverse Mappings


In this section we will look at inverses of matrices and linear mappings. We will make
many connections with the material we have covered so far and provide useful tools
for the material contained in the rest of the book.

Definition   Let A be an n × n matrix. If there exists an n × n matrix B such that AB = I = BA, then
Inverse      A is said to be invertible, and B is called the inverse of A (and A is the inverse of B).
             The inverse of A is denoted A^(-1).

EXAMPLE 1
For the pair of 2 × 2 matrices A and B given in the text, one verifies that B is the
inverse of A by checking that both products AB and BA equal I.

Notice that in the definition, B is the inverse of A. This depends on the easily
proven fact that the inverse is unique.

Theorem 1 Let A be a square matrix and suppose that BA = AB = I and CA = AC = I. Then


B=C.

Proof: We have B = BI = B(AC) = (BA)C = IC = C. •

Remark

Note that the proof uses less than the full assumptions of the theorem: we have proven
that if BA = I = AC, then B = C. Sometimes we say that if BA = I, then B is a
"left inverse" of A. Similarly, if AC = I, then C is a "right inverse" of A. The proof
shows that for a square matrix, any left inverse must be equal to any right inverse.
However, non-square matrices may have only a right inverse or a left inverse, but not
both (see Problem D4). We will now show that for square matrices, a right inverse is
automatically a left inverse.

Theorem 2   Suppose that A and B are n × n matrices such that AB = I. Then BA = I, so that
B = A^(-1). Moreover, B and A have rank n.

Proof: We first show, by contradiction, that rank(B) = n. Suppose that B has rank
less than n. Then, by Theorem 2.2.2, the homogeneous system Bx = 0 has non-trivial
solutions. But this means that for some non-zero x, ABx = A(Bx) = A0 = 0. So,
AB is certainly not equal to I, which contradicts our assumption. Hence, B must have
rank n.

Since B has rank n, the non-homogeneous system y = Bx is consistent for every
y ∈ R^n by Theorem 2.2.2. Now consider

BAy = BA(Bx) = B(AB)x = BIx = Bx = y

Thus, BAy = y for every y ∈ R^n, so BA = I by Theorem 3.1.4. Therefore, AB = I and
BA = I, so that B = A^(-1).
Since we have BA = I, we see that rank(A) = n, by the same argument we used to
prove rank(B) = n. •

Theorem 2 makes it very easy to prove some useful properties of the matrix inverse.
In particular, to show that A^(-1) = B, we only need to show that AB = I.

Theorem 3   Suppose that A and B are invertible matrices and that t is a non-zero real number.
(1) (tA)^(-1) = (1/t) A^(-1)
(2) (AB)^(-1) = B^(-1) A^(-1)
(3) (A^T)^(-1) = (A^(-1))^T

Proof: We have

(tA)((1/t) A^(-1)) = (t · (1/t)) AA^(-1) = 1 · I = I

(AB)(B^(-1) A^(-1)) = A(BB^(-1))A^(-1) = AIA^(-1) = AA^(-1) = I

(A^T)(A^(-1))^T = ((A^(-1))A)^T = I^T = I •
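The three identities in Theorem 3 are easy to spot-check numerically. A sketch using exact 2 × 2 arithmetic via the adjugate formula (the matrices A and B below are arbitrary invertible examples of our own choosing, not from the text):

```python
from fractions import Fraction

def inv2(M):
    """Inverse of a 2 x 2 matrix via the adjugate formula (assumes det != 0)."""
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

def mul2(X, Y):
    return [[sum(Fraction(x) * Fraction(y) for x, y in zip(rx, cy)) for cy in zip(*Y)]
            for rx in X]

def transpose(M):
    return [list(r) for r in zip(*M)]

A, B, t = [[1, 2], [3, 4]], [[2, 1], [1, 1]], 3     # arbitrary invertible examples
assert inv2(mul2(A, B)) == mul2(inv2(B), inv2(A))   # (AB)^(-1) = B^(-1) A^(-1)
assert inv2([[t * x for x in row] for row in A]) == \
       [[x / t for x in row] for row in inv2(A)]    # (tA)^(-1) = (1/t) A^(-1)
assert inv2(transpose(A)) == transpose(inv2(A))     # (A^T)^(-1) = (A^(-1))^T
```

Note the order reversal in the product rule: the inverse of AB is B^(-1)A^(-1), not A^(-1)B^(-1).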

A Procedure for Finding the Inverse of a Matrix


For any given square matrix A, we would like to determine whether it has an inverse
and, if so, what the inverse is. Fortunately, one procedure answers both questions. We
begin by trying to solve the matrix equation AX = I for the unknown square matrix
X. If a solution X can be found, then X = A^(-1) by Theorem 2. On the other hand, if no
such matrix X can be found, then A is not invertible.


To keep it simple, the procedure is examined in the case where A is 3 × 3, but it
should be clear that it can be applied to any square matrix. Write the matrix equation
AX = I in the form

A[x1 x2 x3] = [e1 e2 e3]

Hence, we have

Ax1 = e1,   Ax2 = e2,   Ax3 = e3

So, it is necessary to solve three systems of equations, one for each column of X. Note
that each system has a different standard basis vector as its right-hand side, but all
have the same coefficient matrix. Since the solution procedure for systems of equations
requires that we row reduce the coefficient matrix, we might as well write out a "triple-augmented
matrix" and solve all three systems at once. Therefore, write

[A | e1 e2 e3] = [A | I]

and row reduce to reduced row echelon form to solve.


Suppose that A is row equivalent to I, and call the resulting block on the right B,
so that the reduction gives

[A | I] → [I | B]

Now, we must interpret the final matrix by breaking it up into three systems:

[I | b1],   [I | b2],   [I | b3]

In particular, Ab1 = e1, Ab2 = e2, and Ab3 = e3. It follows that the first column of the
desired matrix X is b1, the second column of X is b2, and so on. Thus, A is invertible
and B = A^(-1).
If the reduced row echelon form of A is not I, then rank(A) < n. Hence, A is not
invertible, since Theorem 2 tells us that if A is invertible, then rank(A) = n.

First, we summarize the procedure and give an example.

Algorithm 1      To find the inverse of a square matrix A,
Finding A^(-1)
(1) Row reduce the multi-augmented matrix [A | I] so that the left block is in
reduced row echelon form.
(2) If the reduced row echelon form is [I | B], then A^(-1) = B.
(3) If the reduced row echelon form of A is not I, then A is not invertible.
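Algorithm 1 translates directly into code: augment A with I, row reduce, and either read off the inverse or detect a missing pivot. A minimal sketch with exact Fraction arithmetic (the 3 × 3 sample matrix is hypothetical, not one from the text):

```python
from fractions import Fraction

def inverse(A):
    """Row reduce [A | I]; return A^(-1), or None when a pivot is missing."""
    n = len(A)
    m = [[Fraction(x) for x in row] + [Fraction(1 if i == j else 0) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        pivot = next((i for i in range(c, n) if m[i][c] != 0), None)
        if pivot is None:
            return None                     # rank(A) < n, so A is not invertible
        m[c], m[pivot] = m[pivot], m[c]
        m[c] = [x / m[c][c] for x in m[c]]
        for i in range(n):
            if i != c and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[c])]
    return [row[n:] for row in m]           # the right-hand block is A^(-1)

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(rx, cy)) for cy in zip(*Y)] for rx in X]

A = [[1, 1, 1], [1, 2, 2], [2, 4, 3]]       # hypothetical 3 x 3 sample matrix
Ainv = inverse(A)
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert matmul(A, Ainv) == I                 # AA^(-1) = I, as Algorithm 1 promises
```

A singular input such as [[1, 2], [2, 4]] makes the function return None, which is step (3) of the algorithm.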

EXAMPLE 2
Determine whether the given 3 × 3 matrix A is invertible, and if it is, determine its inverse.

Solution: Write the matrix [A | I] and row reduce. In this case the left block row
reduces to I, so A is invertible and A^(-1) is the block that results on the right.

You should check that the inverse has been correctly calculated by verifying that
AA^(-1) = I.

EXAMPLE 3
Determine whether the given 2 × 2 matrix A is invertible, and if it is, determine its inverse.

Solution: Write the matrix [A | I] and row reduce. Here the reduced row echelon form
of the left block is not I (it contains a row of zeros), so A is not invertible.

EXERCISE 1
Determine whether the given 2 × 2 matrix A is invertible, and if it is, determine its inverse.

Some Facts About Square Matrices and Solutions of Linear Systems

In Theorem 2 and in the description of the procedure for finding the inverse matrix, we
used some facts about systems of equations with square matrices. It is worth stating
them clearly as a theorem. Most of the conclusions are simply special cases of previous
results.

Theorem 4   [Invertible Matrix Theorem]

Suppose that A is an n × n matrix. Then the following statements are equivalent (that
is, one is true if and only if each of the others is true).
(1) A is invertible.
(2) rank(A) = n.
(3) The reduced row echelon form of A is I.
(4) For all b ∈ R^n, the system Ax = b is consistent and has a unique solution.
(5) The columns of A are linearly independent.
(6) The columnspace of A is R^n.

Proof: (We use the "implication arrow": P ⇒ Q means "if P, then Q." It is common
in proving a theorem such as this to prove (1) ⇒ (2) ⇒ (3) ⇒ (4) ⇒ (5) ⇒ (6) ⇒ (1),
so that any statement implies any other.)
(1) ⇒ (2): This is the second part of Theorem 2.
(2) ⇒ (3): This follows immediately from the definition of rank and the fact that A is
n × n.
(3) ⇒ (4): This follows immediately from Theorem 2.2.2.
(4) ⇒ (5): Assume that the only solution to Ax = 0 is the trivial solution. Hence, if
A = [a1 · · · an], then 0 = Ax = x1a1 + · · · + xnan has only the solution x1 = · · · =
xn = 0. Thus, the columns a1, ..., an of A are linearly independent.
(5) ⇒ (6): If the columns of A are linearly independent, then Ax = 0 has a unique
solution, so the rank of A is n. Thus, Ax = b is consistent for all b ∈ R^n, and so
Col(A) = R^n.
(6) ⇒ (1): If Col(A) = R^n, then Axi = ei is consistent for 1 ≤ i ≤ n. Thus, A is
invertible. •

Amongst other things, this theorem tells us that if a matrix A is invertible, then
the system Ax = b is consistent with a unique solution. However, the way we proved
the theorem does not immediately tell us how to find the unique solution. We now
demonstrate this.
Let A be an invertible square matrix and consider the system Ax = b. Multiplying
both sides of the equation by A^(-1) gives A^(-1)Ax = A^(-1)b and hence x = A^(-1)b.

EXAMPLE 4
Let A be the matrix of Example 1 and let b be the given vector in R^2. Find the solution
of Ax = b.

Solution: By Example 1, we know A^(-1). Thus, the solution of Ax = b is

x = A^(-1)b

EXERCISE 2
Let A be the given 2 × 2 matrix and b the given vector. Determine A^(-1) and use it to
find the solution of Ax = b.

It likely seems very inefficient to solve Exercise 2 by the method described. One
would think that simply row reducing the augmented matrix of the system would make
more sense. However, observe that if we wanted to solve many systems of equations
with the same coefficient matrix A, we would need to compute A^(-1) only once; each
system is then reduced to a simple matrix-vector multiplication.
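The point of the preceding paragraph, computing A^(-1) once and reusing it for every right-hand side, can be sketched as follows (the coefficient matrix and right-hand sides below are hypothetical examples, not from the text):

```python
from fractions import Fraction

def inv2(M):
    """Inverse of a 2 x 2 matrix via the adjugate formula (assumes det != 0)."""
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

A = [[1, 3], [-2, 1]]                       # hypothetical coefficient matrix
rhs = [[14, 7], [1, 0], [0, 1]]             # several right-hand sides b
Ainv = inv2(A)                              # one inversion's worth of work ...
solutions = [matvec(Ainv, b) for b in rhs]  # ... then each solve is a multiplication
for b, x in zip(rhs, solutions):
    assert matvec(A, x) == b                # check: Ax = b for every system
```

Each additional system costs only a matrix-vector product, while row reducing from scratch would repeat the elimination for every b.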

Remark

It might seem surprising at first that we can solve a system of linear equations without
performing any elementary row operations and instead just using matrix multiplica­
tion. Of course, with some thought, one realizes that the elementary row operations
are "contained" inside the inverse of the matrix (which we obtained by row reducing).
In the next section, we will see more of the connection between matrix multiplication
and row reducing.

Inverse Linear Mappings


It is useful to introduce the inverse of a linear mapping here because many geometrical
transformations provide nice examples of inverses. Note that just as the inverse matrix
is defined only for square matrices, the inverse of a linear mapping is defined only
for linear operators. Recall that the identity transformation Id is the linear mapping
defined by Id(x) = x for all x.

Definition        If L : R^n → R^n is a linear mapping and there exists another linear mapping
Inverse Mapping   M : R^n → R^n such that M ∘ L = Id = L ∘ M, then L is said to be invertible, and
                  M is called the inverse of L, usually denoted L^(-1).

Observe that if M is the inverse of L and L(v) = w, then

M(w) = M(L(v)) = (M ∘ L)(v) = Id(v) = v

Similarly, if M(w) = v, then

L(v) = L(M(w)) = (L ∘ M)(w) = Id(w) = w

So we have L(v) = w if and only if M(w) = v.

Theorem 5   Suppose that L : R^n → R^n is a linear mapping with standard matrix [L] = A and
that M : R^n → R^n is a linear mapping with standard matrix [M] = B. Then M is the
inverse of L if and only if B is the inverse of A.

Proof: By Theorem 3.2.5, [M ∘ L] = [M][L]. Hence, L ∘ M = Id = M ∘ L if and only
if AB = I = BA. •

For many of the geometrical transformations of Section 3.3, an inverse transformation
is easily found by geometrical arguments, and these provide many examples of
inverse matrices.

EXAMPLE 5   For each of the following geometrical transformations, determine the inverse transformation.
Verify that the product of the standard matrix of the transformation and its
inverse is the identity matrix.

(a) The rotation Rθ of the plane

(b) In the plane, a stretch T by a factor of t in the x1-direction

Solution: (a) The inverse transformation is to just rotate by -θ. That is, (Rθ)^(-1) = R(-θ).
We have

[R(-θ)] = [cos(-θ)  -sin(-θ)]  =  [ cos θ   sin θ]
          [sin(-θ)   cos(-θ)]     [-sin θ   cos θ]

since sin(-θ) = -sin θ and cos(-θ) = cos θ. Hence,

[Rθ][R(-θ)] = [cos θ  -sin θ][ cos θ  sin θ]
              [sin θ   cos θ][-sin θ  cos θ]

            = [cos²θ + sin²θ                  cos θ sin θ - cos θ sin θ]  =  [1 0]
              [-sin θ cos θ + sin θ cos θ     cos²θ + sin²θ            ]     [0 1]

(b) The inverse transformation T^(-1) is a stretch by a factor of 1/t in the x1-direction:

[T][T^(-1)] = [t 0][1/t 0]  =  [1 0]
              [0 1][0   1]     [0 1]
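Part (a) of Example 5 can also be verified numerically: build the standard matrices of Rθ and R(-θ) and check that their product is the identity, up to floating-point round-off. A short sketch using only the standard library (the angle 0.7 is an arbitrary choice):

```python
import math

def rotation(theta):
    """Standard matrix of the rotation R_theta of the plane."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(rx, cy)) for cy in zip(*Y)] for rx in X]

theta = 0.7                                   # an arbitrary angle
product = matmul(rotation(theta), rotation(-theta))
# R_theta R_(-theta) should be the identity, up to floating-point round-off
assert all(abs(product[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))
```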

EXERCISE 3   For each of the following geometrical transformations, determine the inverse transformation.
Verify that the product of the standard matrix of the transformation and its
inverse is the identity matrix.

(a) A reflection over the line x2 = x1 in the plane

(b) A shear in the plane by a factor of t in the x1-direction

Observe that if y ∈ R^n is in the domain of the inverse M, then it must be in the
range of the original L. Therefore, it follows that if L has an inverse, the range of L
must be all of the codomain R^n. Moreover, if L(x1) = y = L(x2), then by applying M
to both sides, we have

x1 = M(y) = x2

Hence, we have shown that for any y ∈ R^n, there exists a unique x ∈ R^n such that
L(x) = y. This property is the linear mapping version of statement (4) of Theorem 4
about square matrices.

Theorem 6   [Invertible Matrix Theorem, cont.]

Suppose that L : R^n → R^n is a linear mapping with standard matrix A = [L].
Then, the following statements are equivalent to each other and to the statements of
Theorem 4.
(7) L is invertible.
(8) Range(L) = R^n.
(9) Null(L) = {0}.

Proof: (We use P ⇔ Q to mean "P if and only if Q.")
(1) ⇔ (7): This is Theorem 5.
(4) ⇔ (8): Since L(x) = Ax, we know that for every b ∈ R^n there exists an x ∈ R^n such
that L(x) = b if and only if Ax = b is consistent for every b ∈ R^n.
(7) ⇒ (9): Assume that L is invertible. Hence, there is a linear mapping L^(-1) such that
L^(-1)(L(x)) = x. If x ∈ Null(L), then L(x) = 0 and then L^(-1)(L(x)) = L^(-1)(0). But, this
gives

x = L^(-1)(0) = 0

since L^(-1) is linear. Thus, Null(L) = {0}.
(9) ⇒ (3): Assume that Null(L) = {0}. Then L(x) = Ax = 0 has a unique solution.
Thus, rank(A) = n, by Theorem 2.2.2. •

Remark

It is possible to give many alternate proofs of the equivalences in Theorem 4 and
Theorem 6. You should be able to start with any one of the nine statements and show
that it implies any of the other eight.

EXAMPLE6 Prove that the linear mapping projil is not invertible for any v E JR11, n � 2.
Solution: By definition, projil(x) = tV, for some t E R Hence, any vector y E ]Rn
* IR11,
that is not a scalar multiple of v is not in the range of prok Thus, Range(projil)
hence projil is not invertible, by Theorem 6.

EXAMPLE 7   Prove that the linear mapping L : R^3 → R^3 defined by

L(x1, x2, x3) = (2x1 + x2, x3, x2 - 2x3)

is invertible.
Solution: Assume that x is in the nullspace of L. Then L(x) = 0, so by definition of
L, we have

2x1 + x2 = 0
      x3 = 0
x2 - 2x3 = 0

The only solution to this system is x1 = x2 = x3 = 0. Thus, Null(L) = {0} and hence L
is invertible, by Theorem 6.
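The argument in Example 7, showing Null(L) = {0} by solving the homogeneous system, amounts to checking that the standard matrix of L has rank n. A sketch (the matrix below is the standard matrix of L as we read Example 7's system; treat that reading as an assumption):

```python
from fractions import Fraction

def rank(rows):
    """Rank via row reduction (forward elimination is enough to count pivots)."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Standard matrix of L(x1, x2, x3) = (2x1 + x2, x3, x2 - 2x3), per Example 7
L = [[2, 1, 0], [0, 0, 1], [0, 1, -2]]
assert rank(L) == 3    # rank n means Null(L) = {0}, so L is invertible (Theorem 6)
```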

Finally, recall that the matrix condition AB = BA = I implies that the matrix
inverse can be defined only for square matrices. Here is an example that illustrates for
linear mappings that the domain and codomain of L must be the same if it is to have
an inverse.

EXAMPLE 8   Consider the linear mappings P : R^4 → R^3 defined by P(x1, x2, x3, x4) = (x1, x2, x3)
and inj : R^3 → R^4 defined by inj(x1, x2, x3) = (x1, x2, x3, 0).
It is easy to see that P ∘ inj = Id but that inj ∘ P ≠ Id. Thus, P is not an inverse for
inj. Notice that P satisfies the condition that its range is all of its codomain, but it fails
the condition that its nullspace is trivial. On the other hand, inj satisfies the condition
that its nullspace is trivial but fails the condition that its range is all of its codomain.

PROBLEMS 3.5
Practice Problems

A1 For each of the matrices given in the text, either show that the matrix is not invertible or find its inverse. Check by multiplication.

A2 Let B be the given 3 × 3 matrix. Use B^(-1) to find the solution of Bx = b for each of the given right-hand sides b in (a)-(c).

A3 Let A and B be the given 2 × 2 matrices.
(a) Find A^(-1) and B^(-1).
(b) Calculate AB and (AB)^(-1) and check that (AB)^(-1) = B^(-1)A^(-1).
(c) Calculate (3A)^(-1) and check that it equals (1/3)A^(-1).
(d) Calculate (A^T)^(-1) and check that A^T(A^T)^(-1) = I.

A4 By geometrical arguments, determine the inverse of each of the following matrices.
(a) The matrix of the rotation R_{π/6} in the plane.
(b)-(d) The matrices given in the text.

A5 The mappings in this problem are from R^2 to R^2.
(a) Determine the matrix of the shear S by a factor of 2 in the x2-direction and the matrix of S^(-1).
(b) Determine the matrix of the reflection R in the line x1 - x2 = 0 and the matrix of R^(-1).
(c) Determine the matrix of (R ∘ S)^(-1) and the matrix of (S ∘ R)^(-1) (without determining the matrices of R ∘ S and S ∘ R).

A6 Suppose that L : R^n → R^n is a linear mapping and that M : R^n → R^n is a function (not assumed to be linear) such that x = M(y) if and only if y = L(x). Show that M is also linear.

Homework Problems

B1 For each of the matrices given in the text, either show that the matrix is not invertible or find its inverse. Check by multiplication.

B2 Let A be the given 3 × 3 matrix.
(a) Find A^(-1).
(b) Use (a) to solve Ax = b for each of the given right-hand sides b in (i)-(iii).

B3 Let A and B be the given 2 × 2 matrices.
(a) Find A^(-1) and B^(-1).
(b) Calculate AB and (AB)^(-1) and check that (AB)^(-1) = B^(-1)A^(-1).
(c) Calculate (5A)^(-1) and check that it equals (1/5)A^(-1).
(d) Calculate (A^T)^(-1) and check that A^T(A^T)^(-1) = I.

B4 By geometrical arguments, determine the inverse of each of the following matrices.
(a) The matrix of the rotation R_{π/4} in the plane.
(b)-(d) The matrices given in the text.

B5 The mappings in this problem are from R^2 to R^2.
(a) Determine the matrix of the stretch S by a factor of 3 in the x2-direction and the matrix of S^(-1).
(b) Determine the matrix of the reflection R in the line x1 + x2 = 0 and the matrix of R^(-1).
(c) Determine the matrix of (R ∘ S)^(-1) and the matrix of (S ∘ R)^(-1) (without determining the matrices of R ∘ S and S ∘ R).

B6 For each of the following pairs of linear mappings from R^3 to R^3, determine the matrices [R^(-1)], [S^(-1)], and [(R ∘ S)^(-1)].
(a) R is the rotation about the x1-axis through angle π/2, and S is the stretch by a factor of 0.5 in the x3-direction.
(b) R is the reflection in the plane x1 - x3 = 0, and S is a shear by a factor of 0.4 in the x3-direction in the x1x3-plane.

Computer Problems

C1 Let A =
[1.23  3.11  1.01 0.00]
[2.01 -2.56  3.03 0.04]
[1.11  0.03 -5.11 2.56]
[2.14 -1.90  4.05 1.88]
(a) Use computer software to calculate A^(-1).
(b) Use computer software to calculate the inverse of A^(-1). Explain your answer.

Conceptual Problems

D1 Determine an expression in terms of A^(-1) and B^(-1) for ((AB)^T)^(-1).

D2 (a) Suppose that A is an n × n matrix such that A^3 = I. Find an expression for A^(-1) in terms of A. (Hint: Find X such that AX = I.)
(b) Suppose that B satisfies B^5 + B^3 + B = I. Find an expression for B^(-1) in terms of B.

D3 Prove that if A and B are square matrices such that AB is invertible, then A and B are invertible.

D4 Let A be the given non-square matrix.
(a) Show that A has a right inverse by finding a matrix B such that AB = I.
(b) Show that there cannot exist a matrix C such that CA = I. Hence, A cannot have a left inverse.

D5 Prove that the following are equivalent for an n × n matrix A.
(1) A is invertible.
(2) Null(A) = {0}.
(3) The rows of A are linearly independent.
(4) A^T is invertible.


3.6 Elementary Matrices


In Sections 3.1 and 3.5, we saw some connections between matrix-vector multiplica­
tion and systems of linear equations. In Section 3.2, we observed the connection be­
tween linear mappings and matrix-vector multiplication. Since matrix multiplication
is a direct extension of matrix-vector multiplication, it should not be surprising that
there is a connection between matrix multiplication, systems of linear equations, and
linear mappings. We examine this connection through the use of elementary matrices.

Definition A matrix that can be obtained from the identity matrix by a single elementary row
Elementary Matrix operation is called an elementary matrix.

Note that it follows from the definition that an elementary matrix must be square.

EXAMPLE 1
E1 = [1 t]
     [0 1]
is the elementary matrix obtained from I2 by adding the product of t times
the second row to the first (a single elementary row operation). Observe that E1 is the
matrix of a shear in the x1-direction by a factor of t.

E2 = [1 0]
     [0 t]
is the elementary matrix obtained from I2 by multiplying row 2 by the
non-zero scalar t. E2 is the matrix of a stretch in the x2-direction by a factor of t.

E3 = [0 1]
     [1 0]
is the elementary matrix obtained from I2 by swapping row 1 and row 2,
and it is the matrix of a reflection over x2 = x1 in the plane.

As in Example 1, it can be shown that every n x n elementary matrix is the standard matrix of a shear, a stretch, or a reflection. The following theorem tells us that elementary matrices also represent elementary row operations. Hence, performing shears, stretches, and reflections; multiplying by elementary matrices; and using elementary row operations all accomplish the same thing.

Theorem 1 If A is an n x n matrix and E is the elementary matrix obtained from In by a certain elementary row operation, then the product EA is the matrix obtained from A by performing the same elementary row operation.

It would be tedious to write the proof in the general n x n case. Instead, we illustrate why this works by verifying the conclusion for some simple cases for 3 x 3 matrices.

Case 1. Consider the elementary row operation of adding k times row 3 to row 1. The corresponding elementary matrix is E = [1 0 k; 0 1 0; 0 0 1]. Then,

[a11 a12 a13; a21 a22 a23; a31 a32 a33]  --R1 + kR3-->  [a11 + ka31, a12 + ka32, a13 + ka33; a21, a22, a23; a31, a32, a33]

while

EA = [1 0 k; 0 1 0; 0 0 1][a11 a12 a13; a21 a22 a23; a31 a32 a33] = [a11 + ka31, a12 + ka32, a13 + ka33; a21, a22, a23; a31, a32, a33]

Case 2. Consider the elementary row operation of swapping row 2 with row 3, which has the corresponding elementary matrix E = [1 0 0; 0 0 1; 0 1 0]:

[a11 a12 a13; a21 a22 a23; a31 a32 a33]  --R2 <-> R3-->  [a11 a12 a13; a31 a32 a33; a21 a22 a23]

while

EA = [1 0 0; 0 0 1; 0 1 0][a11 a12 a13; a21 a22 a23; a31 a32 a33] = [a11 a12 a13; a31 a32 a33; a21 a22 a23]

Again, the conclusion of Theorem 1 is verified.
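Both verifications can also be checked numerically. Below is a small Python sketch; the sample matrix A and the value of k are arbitrary choices, not taken from the text:

```python
# Check Theorem 1 numerically: multiplying by an elementary matrix E
# performs the corresponding elementary row operation on A.

def mat_mul(X, Y):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]          # arbitrary sample matrix

# Case 1: add k times row 3 to row 1 (here k = 5).
k = 5
E1 = [[1, 0, k], [0, 1, 0], [0, 0, 1]]
direct1 = [[A[0][j] + k * A[2][j] for j in range(3)], A[1], A[2]]
assert mat_mul(E1, A) == direct1

# Case 2: swap row 2 with row 3.
E2 = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
direct2 = [A[0], A[2], A[1]]
assert mat_mul(E2, A) == direct2
```

The same comparison works for any 3 x 3 matrix A and any scalar k.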

EXERCISE 1 Verify that Theorem 1 also holds for the elementary row operation of multiplying the
second row by a non-zero constant for 3 x 3 matrices.

Theorem 2 For any m x n matrix A, there exists a sequence of elementary matrices, E1, E2, ..., Ek, such that Ek ··· E2E1A is equal to the reduced row echelon form of A.

Proof: From our work in Chapter 2, we know that there is a sequence of elementary row operations to bring A into its reduced row echelon form. Call the elementary matrix corresponding to the first operation E1, the elementary matrix corresponding to the second operation E2, and so on, until the final elementary row operation corresponds to Ek. Then, by Theorem 1, E1A is the matrix obtained by performing the first elementary row operation on A, E2E1A is the matrix obtained by performing the second elementary row operation on E1A (that is, performing the first two elementary row operations on A), and Ek ··· E2E1A is the matrix obtained after performing all of the elementary row operations on A, in the specified order. •

EXAMPLE 2
Let A = [1 2 1; 2 4 4]. Find a sequence of elementary matrices E1, ..., Ek such that Ek ··· E1A is the reduced row echelon form of A.

Solution: We row reduce A, keeping track of our elementary row operations:

[1 2 1; 2 4 4]  R2 - 2R1 -> [1 2 1; 0 0 2]  (1/2)R2 -> [1 2 1; 0 0 1]  R1 - R2 -> [1 2 0; 0 0 1]

The first elementary row operation is R2 - 2R1, so E1 = [1 0; -2 1].

The second elementary row operation is (1/2)R2, so E2 = [1 0; 0 1/2].



EXAMPLE 2
(continued)
The third elementary row operation is R1 - R2, so E3 = [1 -1; 0 1].

Thus, E3E2E1A = [1 -1; 0 1][1 0; 0 1/2][1 0; -2 1][1 2 1; 2 4 4] = [1 2 0; 0 0 1]
Remark

We know that the elementary matrices in Example 2 must be 2 x 2 for two reasons. First, we had only two rows in A to perform elementary row operations on, so this must be the same with the corresponding elementary matrices. Second, for the matrix multiplication E1A to be defined, we know that the number of columns in E1 must be equal to the number of rows in A. Also, E1 is square since it is elementary.
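The product found in Example 2 can be double-checked by multiplying everything out. A quick Python sketch, using exact fractions so that the 1/2 is not rounded:

```python
from fractions import Fraction as F

def mat_mul(X, Y):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A  = [[F(1), F(2), F(1)], [F(2), F(4), F(4)]]
E1 = [[F(1), F(0)], [F(-2), F(1)]]      # R2 - 2R1
E2 = [[F(1), F(0)], [F(0), F(1, 2)]]    # (1/2)R2
E3 = [[F(1), F(-1)], [F(0), F(1)]]      # R1 - R2

rref = mat_mul(E3, mat_mul(E2, mat_mul(E1, A)))
assert rref == [[1, 2, 0], [0, 0, 1]]   # the reduced row echelon form of A
```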

EXERCISE 2
Let A = [...]. Find a sequence of elementary matrices E1, ..., Ek such that Ek ··· E1A is the reduced row echelon form of A.

In the special case where A is an invertible square matrix, the reduced row echelon form of A is I. Hence, by Theorem 2, there exists a sequence of elementary row operations such that Ek ··· E1A = I. Thus, the matrix B = Ek ··· E1 satisfies BA = I, so B is the inverse of A. Observe that this result corresponds exactly to two facts we observed in Section 3.5. First, it demonstrates our procedure for finding the inverse of a matrix by row reducing [ A | I ]. Second, it shows us that solving a system Ax = b by row reducing or by computing x = A^{-1}b yields the same result.
Finally, observe that elementary row operations are invertible since they are reversible, and thus reflections, shears, and stretches are invertible. Moreover, since the reverse operation of an elementary row operation is another elementary row operation, the inverse of an elementary matrix is another elementary matrix. We use this to prove the following important result.

Theorem 3 If an n x n matrix A has rank n, then it may be represented as a product of elementary


matrices.

Proof: By Theorem 2, there exists a sequence of elementary row operations such that Ek ··· E1A = I. Since Ek is invertible, we can multiply both sides on the left by (Ek)^{-1} to get

(Ek)^{-1}Ek Ek-1 ··· E1A = (Ek)^{-1}I    or    Ek-1 ··· E1A = (Ek)^{-1}

Next, we multiply both sides by (Ek-1)^{-1} to get

Ek-2 ··· E1A = (Ek-1)^{-1}(Ek)^{-1}

We continue to multiply by the inverse of the elementary matrix on the left until the equation becomes

A = (E1)^{-1}(E2)^{-1} ··· (Ek)^{-1}

Thus, since the inverse of an elementary matrix is elementary, we have written A as a product of elementary matrices. •

Remark

Observe that writing A as a product of simpler matrices is kind of like factoring a polynomial (although it is definitely not the same). This is an example of a matrix decomposition. There are many very important matrix decompositions in linear algebra. We will look at a useful matrix decomposition in the next section and a couple more of them later in the book.

EXAMPLE 3
Let A = [0 2; 1 1]. Write A and A^{-1} as a product of elementary matrices.

Solution: We row reduce A to I, keeping track of the elementary row operations used:

[0 2; 1 1]  R1 <-> R2 -> [1 1; 0 2]  (1/2)R2 -> [1 1; 0 1]  R1 - R2 -> [1 0; 0 1]

Hence, we have E1 = [0 1; 1 0], E2 = [1 0; 0 1/2], and E3 = [1 -1; 0 1].

Thus,

A^{-1} = E3E2E1 = [1 -1; 0 1][1 0; 0 1/2][0 1; 1 0]

and

A = (E1)^{-1}(E2)^{-1}(E3)^{-1} = [0 1; 1 0][1 0; 0 2][1 1; 0 1]
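The factorizations of Example 3 (with A = [0 2; 1 1]) can be verified by direct multiplication. A small Python sketch; the helper mat_mul is not part of the text:

```python
from fractions import Fraction as F

def mat_mul(X, Y):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[F(0), F(2)], [F(1), F(1)]]

# The three elementary matrices from the row reduction of A.
E1 = [[F(0), F(1)], [F(1), F(0)]]       # swap rows 1 and 2
E2 = [[F(1), F(0)], [F(0), F(1, 2)]]    # (1/2)R2
E3 = [[F(1), F(-1)], [F(0), F(1)]]      # R1 - R2

# The inverse of A is E3 E2 E1.
A_inv = mat_mul(E3, mat_mul(E2, E1))
assert mat_mul(A_inv, A) == [[1, 0], [0, 1]]

# A itself is the product of the inverse elementary matrices.
E1_inv = [[F(0), F(1)], [F(1), F(0)]]   # a row swap is its own inverse
E2_inv = [[F(1), F(0)], [F(0), F(2)]]   # undo (1/2)R2
E3_inv = [[F(1), F(1)], [F(0), F(1)]]   # undo R1 - R2
assert mat_mul(E1_inv, mat_mul(E2_inv, E3_inv)) == A
```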

PROBLEMS 3.6
Practice Problems

A1 Write a 3 x 3 elementary matrix that corresponds to each of the following elementary row operations. Multiply each of the elementary matrices by A = [...] and verify that the product EA is the matrix obtained from A by the elementary row operation.
(a) Add (-5) times the second row to the first row.
(b) Swap the second and third rows.
(c) Multiply the third row by (-1).
(d) Multiply the second row by 6.
(e) Add 4 times the first row to the third.

A2 Write a 4 x 4 elementary matrix that corresponds to each of the following elementary row operations.
(a) Add (-3) times the third row to the fourth row.
(b) Swap the second and fourth rows.
(c) Multiply the third row by (-3).
(d) Add 2 times the first row to the third row.
(e) Multiply the first row by 3.
(f) Swap the first and third rows.

A3 For each of the following matrices, either state that it is an elementary matrix and state the corresponding elementary row operation or explain why it is not elementary.
(a) [...]  (b) [...]  (c) [...]  (d) [...]  (e) [...]

A4 For each of the following matrices:
(i) Find a sequence of elementary matrices E1, ..., Ek such that Ek ··· E1A = I.
(ii) Determine A^{-1} by computing Ek ··· E1.
(iii) Write A as a product of elementary matrices.
(a) A = [...]  (b) A = [...]  (c) A = [...]  (d) A = [...]

Homework Problems

B1 Write a 3 x 3 elementary matrix that corresponds to each of the following elementary row operations. Multiply each of the elementary matrices by A = [...] and verify that the product EA is the matrix obtained from A by the elementary row operation.
(a) Add 4 times the second row to the first row.
(b) Swap the first and third rows.
(c) Multiply the second row by (-3).
(d) Multiply the first row by 2.
(e) Add (-2) times the first row to the third.
(f) Swap the first and second rows.

B2 Write a 4 x 4 elementary matrix that corresponds to each of the following elementary row operations.
(a) Add 6 times the fourth row to the second row.
(b) Multiply the second row by 5.
(c) Swap the first and fourth rows.
(d) Swap the third and fourth rows.
(e) Add (-2) times the third row to the first row.
(f) Multiply the fourth row by (-2).

B3 For each of the following matrices, either state that it is an elementary matrix and state the corresponding elementary row operation or explain why it is not elementary.
(a) [...]  (b) [...]  (c) [...]  (d) [...]  (e) [...]  (f) [...]

B4 For each of the following matrices:
(i) Find a sequence of elementary matrices E1, ..., Ek such that Ek ··· E1A = I.
(ii) Determine A^{-1} by computing Ek ··· E1.
(iii) Write A as a product of elementary matrices.
(a) A = [...]  (b) A = [...]  (c) A = [...]  (d) A = [...]

Conceptual Problems

D1 (a) Let L : R^2 -> R^2 be the invertible linear operator with standard matrix A = [...]. By writing A as a product of elementary matrices, show that L can be written as a composition of shears, stretches, and reflections.
(b) Explain how we know that every invertible linear operator L : R^n -> R^n can be written as a composition of shears, stretches, and reflections.

D2 For 2 x 2 matrices, verify that Theorem 1 holds for the elementary row operations "add t times row 1 to row 2" and "multiply row 2 by a factor of t, t != 0".

D3 Let A = [...] and b = [...].
(a) Determine elementary matrices E1 and E2 such that E2E1A = I.
(b) Since A is invertible, we know that the system Ax = b has the unique solution x = A^{-1}b = E2E1b. Instead of using matrix multiplication, calculate the solution in the following way. First, compute E1b by performing the elementary row operation associated with E1 on the matrix b. Then compute x = E2E1b by performing the elementary row operation associated with E2 on the result for E1b.
(c) Solve the system Ax = b by row reducing [ A | b ]. Observe the operations that you use on the augmented part of the system and compare with part (b).

3.7 LU-Decomposition
One of the most basic and useful ideas in mathematics is the concept of a factorization
of an object. You have already seen that it can be very useful to factor a number into
primes or to factor a polynomial. Similarly, in many applications of linear algebra, we
may want to decompose a matrix into factors that have certain properties.
In applied linear algebra, we often need to quickly solve multiple systems Ax = b,
where the coefficient matrix A remains the same but the vector b changes. The goal of
this section is to derive a matrix factorization called the LU-decomposition, which is
commonly used in computer algorithms to solve such problems.
We now start our look at the LU-decomposition by recalling the definition of
upper-triangular and lower-triangular matrices.

Definition An n x n matrix U is said to be upper triangular if the entries beneath the main
Upper Triangular diagonal are all zero; that is, (U)ij = 0 whenever i > j. An n x n matrix L is said to
Lower Triangular be lower triangular if the entries above the main diagonal are all zero; in particular,
(L)ij = 0 whenever i < j.

Remark

By definition, a matrix in row echelon form is upper triangular. This fact is very important for the LU-decomposition.

Observe that for each such system Ax = b, we can use the same row operations to row reduce [ A | b ] to row echelon form and then solve the system using back-substitution. The only difference between the systems will then be the effect of the row operations on b. In particular, we see that the two important pieces of information we require are the row echelon form of A and the elementary row operations used.
For our purposes, we will assume that our n x n coefficient matrix A can be brought into row echelon form using only elementary row operations of the form add a multiple of one row to another. Since we can row reduce a matrix to a row echelon form without multiplying a row by a non-zero constant, omitting this row operation is not a problem. However, omitting row interchanges may seem rather serious: without row interchanges, we cannot bring a matrix such as [0 1; 1 0] into row echelon form. However, we only omit row interchanges because it is difficult to keep track of them by hand. A computer can keep track of row interchanges without physically moving entries from one location to another. At the end of the section, we will comment on the case where swapping rows is required.
Thus, for such a matrix A, to row reduce A to a row echelon form, we will only use row operations of the form Ri + sRj, where i > j. Each such row operation will have a corresponding elementary matrix that is lower triangular and has 1s along the main diagonal. So, under our assumption, there are elementary matrices E1, ..., Ek that are all lower triangular such that

Ek ··· E1A = U

where U is a row echelon form of A. Since Ek ··· E1 is invertible, we can write A = (Ek ··· E1)^{-1}U and define

L = (Ek ··· E1)^{-1} = (E1)^{-1} ··· (Ek)^{-1}

Since the inverse of a lower-triangular elementary matrix is lower triangular, and a product of lower-triangular matrices is lower triangular, L is lower triangular. (You are asked to prove this in Problem D1.) Therefore, this gives us the matrix decomposition A = LU, where U is upper triangular and L is lower triangular. Moreover, L contains the information about the row operations used to bring A to U.

Theorem 1 If A is an n x n matrix that can be row reduced to row echelon form without swap­
ping rows, then there exists an upper triangular matrix U and lower triangular matrix
L such that A =LU.

Definition Writing an n x n matrix A as a product LU, where L is lower triangular and U is upper
LU-Decomposition triangular, is called an LU-decomposition of A.

Our derivation has given an algorithm for finding the LU-decomposition of such
a matrix.

EXAMPLE 1
Find an LU-decomposition of A = [2 -1 4; 4 -1 6; -1 -1 3].

Solution: Row reducing and keeping track of our row operations gives

[2 -1 4; 4 -1 6; -1 -1 3]  R2 - 2R1, R3 + (1/2)R1 -> [2 -1 4; 0 1 -2; 0 -3/2 5]  R3 + (3/2)R2 -> [2 -1 4; 0 1 -2; 0 0 2] = U

with corresponding elementary matrices

E1 = [1 0 0; -2 1 0; 0 0 1],  E2 = [1 0 0; 0 1 0; 1/2 0 1],  E3 = [1 0 0; 0 1 0; 0 3/2 1]

Hence, we let

L = (E1)^{-1}(E2)^{-1}(E3)^{-1} = [1 0 0; 2 1 0; 0 0 1][1 0 0; 0 1 0; -1/2 0 1][1 0 0; 0 1 0; 0 -3/2 1] = [1 0 0; 2 1 0; -1/2 -3/2 1]

And we get A = LU.
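The procedure in Example 1 is easy to automate: eliminate below each pivot and record each multiplier in L. A Python sketch follows; the function name lu_no_pivot is an illustrative choice, and the code assumes, as in this section, that no zero pivot turns up, so no row swaps are needed:

```python
from fractions import Fraction as F

def lu_no_pivot(A):
    """Return (L, U) with A = LU, by tracking multipliers; no row swaps."""
    n = len(A)
    U = [[F(x) for x in row] for row in A]
    L = [[F(1) if i == j else F(0) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j + 1, n):
            m = U[i][j] / U[j][j]     # the row operation performed is Ri - m*Rj
            L[i][j] = m               # L stores the negative of s in Ri + sRj
            U[i] = [U[i][k] - m * U[j][k] for k in range(n)]
    return L, U

A = [[2, -1, 4], [4, -1, 6], [-1, -1, 3]]    # the matrix of Example 1
L, U = lu_no_pivot(A)
assert L == [[1, 0, 0], [2, 1, 0], [F(-1, 2), F(-3, 2), 1]]
assert U == [[2, -1, 4], [0, 1, -2], [0, 0, 2]]
# Check the factorization: LU reproduces A entry by entry.
assert all(sum(L[i][k] * U[k][j] for k in range(3)) == A[i][j]
           for i in range(3) for j in range(3))
```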

Observe from this example that the entries in L are just the negatives of the multipliers used to put a zero in the corresponding entry. To see why this is the case, observe that if Ek ··· E1A = U, then

Ek ··· E1L = Ek ··· E1(E1)^{-1} ··· (Ek)^{-1} = I

Hence, the same row operations that reduce A to U will reduce L to I. This makes the LU-decomposition extremely easy to find.

EXAMPLE 2
Find an LU-decomposition of B = [2 1 -1; -4 3 3; 6 8 -3].

Solution: By row reducing, we get

[2 1 -1; -4 3 3; 6 8 -3]  R2 + 2R1, R3 - 3R1 -> [2 1 -1; 0 5 1; 0 5 0],  so far L = [1 0 0; -2 1 0; 3 * 1]

then

R3 - R2 -> [2 1 -1; 0 5 1; 0 0 -1] = U,  with L = [1 0 0; -2 1 0; 3 1 1]

Therefore, we have

B = LU = [1 0 0; -2 1 0; 3 1 1][2 1 -1; 0 5 1; 0 0 -1]

EXAMPLE 3
Find an LU-decomposition of C = [1 2 -3; 2 2 3; -4 -2 1].

Solution: By row reducing, we get

[1 2 -3; 2 2 3; -4 -2 1]  R2 - 2R1, R3 + 4R1 -> [1 2 -3; 0 -2 9; 0 6 -11],  so far L = [1 0 0; 2 1 0; -4 * 1]

then

R3 + 3R2 -> [1 2 -3; 0 -2 9; 0 0 16] = U,  with L = [1 0 0; 2 1 0; -4 -3 1]

Therefore, we have

C = LU = [1 0 0; 2 1 0; -4 -3 1][1 2 -3; 0 -2 9; 0 0 16]

EXERCISE 1
Find an LU-decomposition of A = [...].

Solving Systems with the LU-Decomposition

We now look at how to use the LU-decomposition to solve the system Ax = b. If A = LU, the system can be written as

LUx = b

Letting y = Ux, we can write LUx = b as two systems:

Ly = b    and    Ux = y

which both have triangular coefficient matrices. This allows us to solve both systems immediately, using substitution. In particular, since L is lower triangular, we use forward-substitution to solve Ly = b for y and then solve Ux = y for x using back-substitution.

Remark

Observe that the first system is really calculating how performing the row operations on A would have affected b.

EXAMPLE 4
Let B = [2 1 -1; -4 3 3; 6 8 -3] and b = [3; -13; 4]. Use an LU-decomposition of B to solve Bx = b.

Solution: In Example 2 we found an LU-decomposition of B. We write Bx = b as LUx = b and take y = Ux. Writing out the system Ly = b, we get

y1 = 3
-2y1 + y2 = -13
3y1 + y2 + y3 = 4

Using forward-substitution, we find that y1 = 3, so y2 = -13 + 2(3) = -7 and y3 = 4 - 3(3) - (-7) = 2. Hence, y = [3; -7; 2].

Thus, our system Ux = y is

2x1 + x2 - x3 = 3
5x2 + x3 = -7
-x3 = 2

Using back-substitution, we get x3 = -2, then 5x2 = -7 - (-2), so x2 = -1, and 2x1 = 3 - (-1) + (-2), so x1 = 1. Thus, the solution is x = [1; -1; -2].
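The two substitution steps used in Example 4 can be written as short routines. A Python sketch; the function names are illustrative, and exact fractions keep the arithmetic exact:

```python
from fractions import Fraction as F

def forward_sub(L, b):
    """Solve Ly = b for lower-triangular L with non-zero diagonal."""
    n = len(b)
    y = []
    for i in range(n):
        y.append((F(b[i]) - sum(L[i][j] * y[j] for j in range(i))) / L[i][i])
    return y

def back_sub(U, y):
    """Solve Ux = y for upper-triangular U with non-zero diagonal."""
    n = len(y)
    x = [F(0)] * n
    for i in range(n - 1, -1, -1):
        x[i] = (F(y[i]) - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# L and U from Example 2; right-hand side from Example 4.
L = [[1, 0, 0], [-2, 1, 0], [3, 1, 1]]
U = [[2, 1, -1], [0, 5, 1], [0, 0, -1]]
b = [3, -13, 4]

y = forward_sub(L, b)
x = back_sub(U, y)
assert y == [3, -7, 2]
assert x == [1, -1, -2]
```

Once L and U are known, each new right-hand side b costs only these two cheap triangular solves.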

EXAMPLE 5
Use an LU-decomposition to solve the system

x1 - x2 + 4x3 = 1
-x1 + 2x2 - 3x3 = 6
-2x1 + 4x2 - 6x3 = 12

Solution: We first find an LU-decomposition for A = [1 -1 4; -1 2 -3; -2 4 -6]. Row reducing gives

[1 -1 4; -1 2 -3; -2 4 -6]  R2 + R1, R3 + 2R1 -> [1 -1 4; 0 1 1; 0 2 2],  so far L = [1 0 0; -1 1 0; -2 * 1]

then

R3 - 2R2 -> [1 -1 4; 0 1 1; 0 0 0] = U,  with L = [1 0 0; -1 1 0; -2 2 1]

Thus, U = [1 -1 4; 0 1 1; 0 0 0]. We let y = Ux and solve Ly = b. This gives

y1 = 1
-y1 + y2 = 6
-2y1 + 2y2 + y3 = 12

So, y1 = 1, y2 = 6 + 1 = 7, and y3 = 12 + 2 - 14 = 0. Then we solve Ux = y, which is

x1 - x2 + 4x3 = 1
x2 + x3 = 7
0x3 = 0

This gives x3 = t, t in R, x2 = 7 - t, and x1 = 1 + (7 - t) - 4t = 8 - 5t. Hence,

x = [8 - 5t; 7 - t; t] = [8; 7; 0] + t[-5; -1; 1]

EXERCISE 2
Let A = [...]. Use the LU-decomposition of A that you found in Exercise 1 to solve the system Ax = b, where:

(a) b = [...]   (b) b = [...]

A Comment About Swapping Rows


It can be shown that for any n x n matrix A, we can first rearrange the rows of A to
get a matrix that has an LU-factorization. In particular, for every matrix A, there exists
a matrix P, called a permutation matrix, that can be obtained by only performing row
swaps on the identity matrix such that

PA= LU

Then, to solve Ax= b, we use the LU-decomposition as before to solve the equivalent
system
PAx=Pb
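The bookkeeping can be sketched in code: rather than physically restricting to swap-free matrices, record the interchanges in a permutation and factor PA = LU. A minimal Python illustration; taking the first non-zero entry as the pivot is a simplification, and the sketch assumes A is invertible:

```python
from fractions import Fraction as F

def plu(A):
    """Factor PA = LU; perm[i] names the row of A that ends up in row i."""
    n = len(A)
    U = [[F(x) for x in row] for row in A]
    L = [[F(0)] * n for _ in range(n)]
    perm = list(range(n))
    for j in range(n):
        p = next(i for i in range(j, n) if U[i][j] != 0)  # first usable pivot
        U[j], U[p] = U[p], U[j]          # record the swap in the permutation
        L[j], L[p] = L[p], L[j]          # instead of forbidding it
        perm[j], perm[p] = perm[p], perm[j]
        for i in range(j + 1, n):
            m = U[i][j] / U[j][j]
            L[i][j] = m
            U[i] = [U[i][k] - m * U[j][k] for k in range(n)]
    for i in range(n):
        L[i][i] = F(1)
    return perm, L, U

A = [[0, 1], [1, 0]]        # cannot be reduced without a row interchange
perm, L, U = plu(A)
assert perm == [1, 0]
assert L == [[1, 0], [0, 1]] and U == [[1, 0], [0, 1]]
```

With perm in hand, Ax = b is solved by permuting b the same way and then doing the usual forward- and back-substitution.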

PROBLEMS 3.7
Practice Problems

A1 Find an LU-decomposition of each of the following matrices.
(a) [...]  (b) [...]  (c) [...]  (d) [...]  (e) [...]  (f) [...]

A2 For each matrix, find an LU-decomposition and use it to solve Ax = bi, for i = 1, 2.
(a) A = [...], b1 = [...], b2 = [...]
(b) A = [...], b1 = [...], b2 = [...]
(c) A = [...], b1 = [...], b2 = [...]
(d) A = [...], b1 = [...], b2 = [...]

Homework Problems

B1 Find an LU-decomposition of each of the following matrices.
(a) [...]  (b) [...]  (c) [...]  (d) [...]  (e) [...]  (f) [...]

B2 For each matrix, find an LU-decomposition and use it to solve Ax = bi, for i = 1, 2.
(a) A = [...], b1 = [...], b2 = [...]
(b) A = [...], b1 = [...], b2 = [...]
(c) A = [...], b1 = [...], b2 = [...]
(d) A = [...], b1 = [...], b2 = [...]

Conceptual Problems

D1 (a) Prove that the inverse of a lower-triangular elementary matrix is lower triangular.
(b) Prove that a product of lower-triangular matrices is lower triangular.

CHAPTER REVIEW
Suggestions for Student Review

Try to answer all of these questions before checking answers at the suggested locations. In particular, try to invent your own examples. These review suggestions are intended to help you carry out your review. They may not cover every idea you need to master. Working in small groups may improve your efficiency.

1 State the rules for determining the product of two matrices A and B. What condition(s) must be satisfied by the sizes of A and B for the product to be defined? How do each of these rules correspond to writing a system of linear equations? (Section 3.1)

2 Explain clearly the relationship between a matrix and the corresponding matrix mapping. Explain how you determine the matrix of a given linear mapping. Pick some vector v in R^2 and determine the standard matrices of the linear mappings proj_v, perp_v, and refl_v. Check that your answers are correct by using each matrix to determine the image of v and a vector orthogonal to v under the mapping. (Section 3.2)

3 Determine the image of the vector [...] under the rotation of the plane through the angle [...]. Check that the image has the same length as the original vector. (Section 3.3)

4 Outline the relationships between the solution set for the homogeneous system Ax = 0 and solutions for the corresponding non-homogeneous system Ax = b. Illustrate by giving some specific examples where A is 2 x 3 and is in reduced row echelon form. Also discuss connections between these sets and special subspaces for linear mappings. (Section 3.4)

5 (a) How many ways can you recognize the rank of a matrix? State them all. (Section 3.4)
(b) State the connection between the rank of a matrix A and the dimension of the solution space of Ax = 0. (Section 3.4)
(c) Illustrate your answers to (a) and (b) by constructing examples of 4 x 5 matrices in row echelon form of (i) rank 4; (ii) rank 3; and (iii) rank 2. In each case, actually determine the general solution of the system Ax = 0 and check that the solution space has the correct dimension. (Section 3.4)

6 (a) Outline the procedure for determining the inverse of a matrix. Indicate why it might not produce an inverse for some matrix A. Use the matrices of some geometric linear mappings to give two or three examples of matrices that have inverses and two examples of square matrices that do not have inverses. (Section 3.5)
(b) Pick a fairly simple 3 x 3 matrix (that has not too many zeros) and try to find its inverse. If it is not invertible, try another. When you have an inverse, check its correctness by multiplication. (Section 3.5)

7 For 3 x 3 matrices, choose one elementary row operation of each of the three types; call these E1, E2, E3. Choose an arbitrary 3 x 3 matrix A and check that EiA is the matrix obtained from A by the appropriate elementary row operation. (Section 3.6)

Chapter Quiz

E1 Let A = [...] and B = [...]. Either determine the following products or explain why they are not defined.
(a) AB  (b) BA  (c) BA^T

E2 (a) Let A = [...], let fA be the matrix mapping with matrix A, and let u = [...] and v = [...]. Determine fA(u) and fA(v).
(b) Use the result of part (a) to calculate A[...].

E3 Let R be the rotation through the angle [...] about the x3-axis in R^3 and let M be a reflection in R^3 in the plane with equation -x1 - x2 + 2x3 = 0. Determine:
(a) The matrix of R
(b) The matrix of M
(c) The matrix of R o M

E4 Let A = [...] and b = [...]. Determine the solution set of Ax = b and the solution space of Ax = 0 and discuss the relationship between the two sets.

E5 Let B = [...], u = [...], and v = [...].
(a) Using only one sequence of elementary row operations, determine whether u is in the columnspace of B and whether v is in the range of the linear mapping fB with matrix B.
(b) Determine from your calculation in part (a) a vector x such that fB(x) = v.
(c) Determine a vector y such that fB(y) equals the second column of B.

E6 You are given the matrix A below and a row echelon form of A. Determine a basis for the rowspace, columnspace, and nullspace of A.

A = [...] ~ [...]

E7 Determine the inverse of the matrix A = [...].

E8 Determine all values of p such that the matrix [...] is invertible and determine its inverse.

E9 Prove that the range of a linear mapping L : R^n -> R^m is a subspace of the codomain.

E10 Let {v1, ..., vk} be a linearly independent set in R^n and let L : R^n -> R^m be a linear mapping. Prove that if Null(L) = {0}, then {L(v1), ..., L(vk)} is a linearly independent set in R^m.

E11 Let A = [...].
(a) Determine a sequence of elementary matrices E1, ..., Ek such that Ek ··· E1A = I.
(b) By inverting the elementary matrices in part (a), write A as a product of elementary matrices.

E12 For each of the following, either give an example or explain (in terms of theorems or definitions) why no such example can exist.
(a) A matrix K such that KM = MK for all 3 x 3 matrices M.
(b) A matrix K such that KM = MK for all 3 x 4 matrices M.
(c) The matrix of a linear mapping L : R^2 -> R^3 whose range is Span{[...]} and whose nullspace is Span{[...]}.
(d) The matrix of a linear mapping L : R^2 -> R^3 whose range is Span{[...]} and whose nullspace is Span{[...]}.
(e) A linear mapping L : R^3 -> R^3 such that the range of L is all of R^3 and the nullspace of L is Span{[...]}.
(f) An invertible 4 x 4 matrix of rank 3.

Further Problems

These problems are intended to be challenging. They may not be of interest to all students.

F1 We say that matrix C commutes with matrix D if CD = DC. Show that the set of matrices that commute with A = [...] is the set of matrices of the form pI + qA, where p and q are arbitrary scalars.

F2 Let A be some fixed n x n matrix. Show that the set C(A) of matrices that commute with A is closed under addition, scalar multiplication, and matrix multiplication.

F3 A square matrix A is said to be nilpotent if some power of A is equal to the zero matrix. Show that the matrix [0 a12 a13; 0 0 a23; 0 0 0] is nilpotent. Generalize.

F4 (a) Suppose that ℓ is a line in R^2 passing through the origin and making an angle θ with the positive x1-axis. Let refl_θ denote a reflection in this line. Determine the matrix [refl_θ] in terms of functions of θ.
(b) Let refl_α denote a reflection in a second line, and by considering the matrix [refl_α o refl_θ], show that the composition of two reflections in the plane is a rotation. Express the angle of the rotation in terms of α and θ.

F5 (Isometries of R^2) A linear transformation L : R^2 -> R^2 is an isometry of R^2 if L preserves lengths (that is, if ||L(x)|| = ||x|| for every x in R^2).
(a) Show that an isometry preserves the dot product (that is, L(x) · L(y) = x · y for every x, y in R^2). (Hint: Consider L(x + y).)
(b) Show that the columns of the matrix [L] must be orthogonal to each other and of length 1. Deduce that any isometry of R^2 must be the composition of a reflection and a rotation. (Hint: You may find it helpful to use the result of Problem F4 (a).)

F6 (a) Suppose that A and B are n x n matrices such that A + B and A - B are invertible and that C and D are arbitrary n x n matrices. Show that there are n x n matrices X and Y satisfying the system

AX + BY = C
BX + AY = D

(b) With the same assumptions as in part (a), give a careful explanation of why the matrix [A B; B A] must be invertible and obtain an expression for its inverse in terms of (A + B)^{-1} and (A - B)^{-1}.

MyMathLab Go to MyMathLab at www.mymathlab.com. You can practise many of this chapter's exercises as often as you want. The guided solutions help you find an answer step by step. You'll find a personalized study plan available to you, too!
CHAPTER 4

Vector Spaces
CHAPTER OUTLINE

4.1 Spaces of Polynomials


4.2 Vector Spaces
4.3 Bases and Dimensions
4.4 Coordinates with Respect to a Basis
4.5 General Linear Mappings
4.6 Matrix of a Linear Mapping
4.7 Isomorphisms of Vector Spaces

This chapter explores some of the most important ideas in linear algebra. Some of
these ideas have appeared as special cases before, but here we give definitions and
examine them in more general settings.

4.1 Spaces of Polynomials


We now compare sets of polynomials under standard addition and scalar multiplication to sets of vectors in R^n and sets of matrices.

Addition and Scalar Multiplication of Polynomials


Recall that if p(x) = a0 + a1x + ··· + anx^n, q(x) = b0 + b1x + ··· + bnx^n, and t in R, then

(p + q)(x) = (a0 + b0) + (a1 + b1)x + ··· + (an + bn)x^n

and

(tp)(x) = ta0 + (ta1)x + ··· + (tan)x^n

Moreover, two polynomials p and q are equal if and only if ai = bi for 0 <= i <= n.

EXAMPLE 1 Perform the following operations.

(a) (2 + 3x + 4x2 + x3) + (5 + x - 2x2 + 7x3)

Solution: (2+3x+4x2+x3)+(5+x-2x2+7x3) =2+5+(3+l)x+(4-2)x2+(1+7)x3


=7 + 4x + 2x2 + 8x3

(b) (3 + x2 - 5x3) - (1 - x - 2x2)

Solution: (3 + x2 - 5x3) - (1 - x - 2x2) = 3 - 1 + [0 - (-1)]x + [1 - (-2)]x2 + [-5 - 0]x3
= 2 + x + 3x2 - 5x3

EXAMPLE 1
(continued)
(c) 5(2 + 3x + 4x2 + x3)

Solution: 5(2 + 3x + 4x2 + x3) = 5(2) + 5(3)x + 5(4)x2 + 5(1)x3 = 10 + 15x + 20x2 + 5x3

(d) 2(1 + 3x - x3) + 3(4 + x2 + 2x3)

Solution: 2(1 + 3x - x3) + 3(4 + x2 + 2x3) = 2(1) + 2(3)x + 2(0)x2 + 2(-1)x3 + 3(4) + 3(0)x + 3(1)x2 + 3(2)x3
= 2 + 12 + (6 + 0)x + (0 + 3)x2 + (-2 + 6)x3
= 14 + 6x + 3x2 + 4x3
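Since the operations act coefficient by coefficient, a polynomial of degree at most n can be handled exactly like its list of coefficients. A small Python sketch redoing Example 1(d) on coefficient lists; the helper names are illustrative:

```python
def poly_add(p, q):
    """Add two polynomials stored as coefficient lists [a0, a1, ..., an]."""
    return [a + b for a, b in zip(p, q)]

def poly_scale(t, p):
    """Multiply a polynomial by the scalar t."""
    return [t * a for a in p]

# Example 1(d): 2(1 + 3x - x^3) + 3(4 + x^2 + 2x^3)
p = [1, 3, 0, -1]
q = [4, 0, 1, 2]
result = poly_add(poly_scale(2, p), poly_scale(3, q))
assert result == [14, 6, 3, 4]      # that is, 14 + 6x + 3x^2 + 4x^3
```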

Properties of Polynomial Addition and Scalar Multiplication

Theorem 1 Let p(x), q(x), and r(x) be polynomials of degree at most n and let s, t in R. Then
(1) p(x) + q(x) is a polynomial of degree at most n
(2) p(x) + q(x) = q(x) + p(x)
(3) (p(x) + q(x)) + r(x) = p(x) + (q(x) + r(x))
(4) The polynomial 0 = 0 + 0x + ··· + 0x^n, called the zero polynomial, satisfies p(x) + 0 = p(x) = 0 + p(x) for any polynomial p(x)
(5) For each polynomial p(x), there exists an additive inverse, denoted (-p)(x), with the property that p(x) + (-p)(x) = 0; in particular, (-p)(x) = -p(x)
(6) tp(x) is a polynomial of degree at most n
(7) s(tp(x)) = (st)p(x)
(8) (s + t)p(x) = sp(x) + tp(x)
(9) t(p(x) + q(x)) = tp(x) + tq(x)
(10) 1p(x) = p(x)

Remarks

1. These properties follow easily from the definitions of addition and scalar multiplication and are very similar to those for vectors in R^n. Thus, the proofs are left to the reader.

2. Observe that these are the same 10 properties we had for addition and scalar multiplication of vectors in R^n (Theorem 1.2.1) and of matrices (Theorem 3.1.1).

3. When we look at polynomials in this way, it is the coefficients of the polynomials that are important.

As with vectors in R^n and matrices, we can also consider linear combinations of polynomials. We make the following definition.

Definition Let B = {p1(x), ..., pk(x)} be a set of polynomials of degree at most n. Then the span
Span of B is defined as

Span B = {t1 p1(x) + ··· + tk pk(x) | t1, ..., tk in R}

Definition The set B = {p1(x), ..., pk(x)} is said to be linearly independent if the only solution
Linearly Independent to the equation

t1 p1(x) + ··· + tk pk(x) = 0

is t1 = ··· = tk = 0; otherwise, B is said to be linearly dependent.

EXAMPLE 2 Determine if 1 + 2x + 3x2 + 4x3 is in the span of B = {1 + x, 1 + x3, x + x2, x + x3}.

Solution: We want to determine if there are t1, t2, t3, t4 such that

1 + 2x + 3x2 + 4x3 = t1(1 + x) + t2(1 + x3) + t3(x + x2) + t4(x + x3)
= (t1 + t2) + (t1 + t3 + t4)x + t3x2 + (t2 + t4)x3

By comparing the coefficients of the different powers of x on both sides of the equation, we get the system of linear equations

t1 + t2 = 1
t1 + t3 + t4 = 2
t3 = 3
t2 + t4 = 4

Row reducing the augmented matrix gives

[1 1 0 0 | 1; 1 0 1 1 | 2; 0 0 1 0 | 3; 0 1 0 1 | 4] ~ [1 0 0 0 | -2; 0 1 0 0 | 3; 0 0 1 0 | 3; 0 0 0 1 | 1]

We see that the system is consistent; therefore, 1 + 2x + 3x2 + 4x3 is in the span of B. In particular, we have t1 = -2, t2 = 3, t3 = 3, and t4 = 1.
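The coefficients found in Example 2 can be checked by forming the linear combination directly on coefficient vectors. A short Python sketch, with each polynomial stored as [a0, a1, a2, a3]:

```python
# The polynomials of B in Example 2, as coefficient vectors.
basis = [
    [1, 1, 0, 0],   # 1 + x
    [1, 0, 0, 1],   # 1 + x^3
    [0, 1, 1, 0],   # x + x^2
    [0, 1, 0, 1],   # x + x^3
]
coeffs = [-2, 3, 3, 1]   # t1, t2, t3, t4 from the row reduction

combo = [sum(t * p[i] for t, p in zip(coeffs, basis)) for i in range(4)]
assert combo == [1, 2, 3, 4]   # i.e., 1 + 2x + 3x^2 + 4x^3
```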

EXAMPLE 3	Determine if the set B = {1 + 2x + 2x² - x³, 3 + 2x + x² + x³, 2x² + 2x³} is linearly
dependent or linearly independent.

Solution: Consider

    0 = t₁(1 + 2x + 2x² - x³) + t₂(3 + 2x + x² + x³) + t₃(2x² + 2x³)
      = (t₁ + 3t₂) + (2t₁ + 2t₂)x + (2t₁ + t₂ + 2t₃)x² + (-t₁ + t₂ + 2t₃)x³

Comparing coefficients of the powers of x, we get a homogeneous system of linear
equations. Row reducing the associated coefficient matrix gives

    [ 1  3  0]   [1 0 0]
    [ 2  2  0] ~ [0 1 0]
    [ 2  1  2]   [0 0 1]
    [-1  1  2]   [0 0 0]

The only solution is t₁ = t₂ = t₃ = 0. Hence B is linearly independent.
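The independence test in Example 3 amounts to checking that the homogeneous system has only the trivial solution, i.e. that the 4 × 3 coefficient matrix has rank 3. A minimal stdlib-only sketch (the `rref` helper is our own, not the text's):

```python
from fractions import Fraction

def rref(rows):
    """Reduce a matrix (given as a list of rows) to reduced row echelon form."""
    m = [[Fraction(x) for x in row] for row in rows]
    lead = 0
    for r in range(len(m)):
        if lead >= len(m[0]):
            break
        i = r
        while m[i][lead] == 0:
            i += 1
            if i == len(m):      # no pivot in this column; move right
                i = r
                lead += 1
                if lead == len(m[0]):
                    return m
        m[i], m[r] = m[r], m[i]
        m[r] = [x / m[r][lead] for x in m[r]]
        for j in range(len(m)):
            if j != r and m[j][lead] != 0:
                m[j] = [a - m[j][lead] * b for a, b in zip(m[j], m[r])]
        lead += 1
    return m

# Columns are the coefficient vectors of the three polynomials
A = [[1, 3, 0],
     [2, 2, 0],
     [2, 1, 2],
     [-1, 1, 2]]
R = rref(A)
rank = sum(1 for row in R if any(x != 0 for x in row))
assert rank == 3   # rank = number of unknowns, so only the trivial solution
```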

EXERCISE 1	Determine if B = {1 + 2x + x² + x³, 1 + x + 3x² + x³, 3 + 5x + 5x² - 3x³, -x - 2x²}
is linearly dependent or linearly independent. Is p(x) = 1 + 5x - 5x² + x³ in the span
of B?

EXERCISE 2	Consider B = {1, x, x², x³}. Prove that B is linearly independent and show that Span B
is the set of all polynomials of degree less than or equal to 3.

PROBLEMS 4.1
Practice Problems

A1 Calculate the following.
(a) (2 - 2x + 3x² + 4x³) + (-3 - 4x + x² + 2x³)
(b) (-3)(1 - 2x + 2x² + x³ + 4x⁴)
(c) (2 + 3x + x² - 2x³) - 3(1 - 2x + 4x² + 5x³)
(d) (2 + 3x + 4x²) - (5 + x - 2x²)
(e) -2(-5 + x + x²) + 3(-1 - x²)
(f) 2(½ - ½x + 2x²) + ½(3 - 2x + x²)
(g) √2(1 + x + x²) + π(-1 + x²)

A2 Let B = {1 + x² + x³, 2 + x + x³, -1 + x + 2x² + x³}. For each of the following
polynomials, either express it as a linear combination of the polynomials in B or
show that it is not in Span B.
(a) 0
(b) 2 + 4x + 3x² + 4x³
(c) -x + 2x² + x³
(d) -4 - x + 3x²
(e) -1 + 7x + 5x² + 4x³
(f) 2 + x + 5x³

A3 Determine which of the following sets are linearly independent. If a set is linearly
dependent, find all linear combinations of the polynomials that equal the zero
polynomial.
(a) {1 + 2x + x² - x³, 5x + x², 1 - 3x + 2x² + x³}
(b) {1 + x + x², x, x² + x³, 3 + 2x + 2x² - x³}
(c) {3 + x + x², 4 + x - x², 1 + 2x + x² + 2x³, -1 + 5x² + x³}
(d) {1 + x + x³ + x⁴, 2 + x - x² + x³ + x⁴, x + x² + x³ + x⁴}

A4 Prove that the set B = {1, x - 1, (x - 1)²} is linearly independent and show that
Span B is the set of all polynomials of degree less than or equal to 2.

Homework Problems

B1 Calculate the following.
(a) (3 + 4x - 2x² + 5x³) - (1 - 2x + 5x³)
(b) (-2)(2 + x + x² + 3x³ - x⁴)
(c) (-1)(2 + x + 4x² + 2x³) - 2(-1 - 2x - 2x² - x³)
(d) 3(1 + x + x³) + 2(x - x² + x³)
(e) 0(1 + 3x³ - 4x⁴)
(f) ½(3 - 2x + x²) + ⅓(2 + 4x + x²)
(g) (1 + √2)(1 - √2 + (√2 - 1)x²) - ½(-2 + 2x²)

B2 Let B = {1 + x, x + x², 1 - x³}. For each of the following polynomials, either
express it as a linear combination of the polynomials in B or show that it is not in
Span B.
(a) p(x) = 1
(b) p(x) = 5x + 2x² + 3x³
(c) q(x) = 3 + x² - 4x³
(d) q(x) = 1 + x³

B3 Determine which of the following sets are linearly independent. If a set is linearly
dependent, find all linear combinations of the polynomials that equal the zero
polynomial.
(a) {x², x³, x² + x³ + x⁴}
(b) {1 + 2x, x - x², x + x⁶, x - x⁶}
(c) {1 + x + x³, x + x³ + x⁵, 1 - x⁵}
(d) {1 - 2x + x⁴, x - 2x² + x⁵, 1 - 3x + x³}
(e) {1 + 2x + x² - x³, 2 + 3x - x² + x³ + x⁴, 1 + x - 2x² + 2x³ + x⁴,
    1 + 2x + x² + x³ - 3x⁴, 4 + 6x - 2x² + 5x⁴}

B4 Prove that the set B = {1, x - 2, (x - 2)², (x - 2)³} is linearly independent and
show that Span B is the set of all polynomials of degree less than or equal to 3.

Conceptual Problems

D1 Let B = {p₁(x), ..., pₖ(x)} be a set of polynomials of degree at most n.
(a) Prove that if k < n + 1, then there exists a polynomial q(x) of degree at most n
such that q(x) ∉ Span B.
(b) Prove that if k > n + 1, then B must be linearly dependent.

4.2 Vector Spaces

We have now seen that addition and scalar multiplication of matrices and polynomials
satisfy the same 10 properties as vectors in ℝⁿ. Moreover, we commented in Section
3.2 that addition and scalar multiplication of linear mappings also satisfy these
same properties. In fact, many other mathematical objects also have these important
properties. Instead of analysing each of these objects separately, it is useful to define
one abstract concept that encompasses them all.

Vector Spaces

Definition	A vector space over ℝ is a set V together with an operation of addition, usually
denoted x + y for any x, y ∈ V, and an operation of scalar multiplication, usually
denoted sx for any x ∈ V and s ∈ ℝ, such that for any x, y, z ∈ V and s, t ∈ ℝ we have
all of the following properties:

V1  x + y ∈ V (closed under addition)
V2  x + y = y + x (addition is commutative)
V3  (x + y) + z = x + (y + z) (addition is associative)
V4  There is an element 0 ∈ V, called the zero vector, such that
    x + 0 = x = 0 + x (additive identity)
V5  For each x ∈ V there exists an element -x ∈ V such that
    x + (-x) = 0 (additive inverse)
V6  tx ∈ V (closed under scalar multiplication)
V7  s(tx) = (st)x (scalar multiplication is associative)
V8  (s + t)x = sx + tx (scalar addition is distributive)
V9  t(x + y) = tx + ty (scalar multiplication is distributive)
V10 1x = x (1 is the scalar multiplicative identity)

Remarks

1. We will call the elements of a vector space vectors. Note that these can be
very different objects than vectors in ℝⁿ. Thus, we will always denote, as in the
definition above, a vector in a general vector space in boldface (for example, x).
However, in vector spaces such as ℝⁿ, matrix spaces, or polynomial spaces, we
will often use the notation we introduced earlier.

2. Some people prefer to denote the operations of addition and scalar multiplication
in general vector spaces by ⊕ and ⊙, respectively, to stress the fact that
these do not need to be "standard" addition and scalar multiplication.

3. Since every vector space contains a zero vector by V4, the empty set cannot be
a vector space.

4. When working with multiple vector spaces, we sometimes use a subscript to
denote the vector space to which the zero vector belongs. For example, 0_V
would represent the zero vector in the vector space V.

5. Vector spaces can be defined using other number systems as the scalars. For
example, note that the definition makes perfect sense if rational numbers are
used instead of the real numbers. Vector spaces over the complex numbers are
discussed in Chapter 9. Until Chapter 9, "vector space" means "vector space
over ℝ."

6. We define vector spaces to have the same structure as ℝⁿ. The study of vector
spaces is the study of this common structure. However, it is possible that vectors
in individual vector spaces have other aspects not common to all vector spaces,
such as matrix multiplication or factorization of polynomials.

EXAMPLE 1	ℝⁿ is a vector space with addition and scalar multiplication defined in the usual way.
We call these standard addition and scalar multiplication of vectors in ℝⁿ.

EXAMPLE 2	Pₙ, the set of all polynomials of degree at most n, is a vector space with standard
addition and scalar multiplication of polynomials.

EXAMPLE 3	M(m, n), the set of all m × n matrices, is a vector space with standard addition and
scalar multiplication of matrices.

EXAMPLE 4	Consider the set of polynomials of degree n. Is this a vector space with standard
addition and scalar multiplication? No, since it does not contain the zero polynomial.
Note also that the sum of two polynomials of degree n may be of degree lower than n.
For example, (1 + xⁿ) + (1 - xⁿ) = 2, which is of degree 0. Thus, the set is also not
closed under addition.

EXAMPLE 5	Let F(a, b) denote the set of all functions f : (a, b) → ℝ. If f, g ∈ F(a, b), then the
sum is defined by (f + g)(x) = f(x) + g(x), and multiplication by a scalar t ∈ ℝ is
defined by (tf)(x) = tf(x). With these definitions, F(a, b) is a vector space.

EXAMPLE 6	Let C(a, b) denote the set of all functions that are continuous on the interval (a, b).
Since the sum of continuous functions is continuous and a scalar multiple of a
continuous function is continuous, C(a, b) is a vector space. See Figure 4.2.1.
Figure 4.2.1	The sum of continuous functions f and g is a continuous function.

EXAMPLE 7	Let T be the set of all solutions to x₁ + 2x₂ = 1, 2x₁ + 3x₂ = 0. Is T a vector space
with standard addition and scalar multiplication? No. This set with these operations
does not satisfy many of the vector space axioms. For example, V1 does not hold:
[-3, 2]ᵀ is in T, since it is a solution of this system of linear equations, but
[-3, 2]ᵀ + [-3, 2]ᵀ = [-6, 4]ᵀ is not a solution of the system and hence not in T.

EXAMPLE 8	Consider V = {(x, y) | x, y ∈ ℝ} with addition defined by (x₁, y₁) + (x₂, y₂) =
(x₁ + x₂, y₁ + y₂) and scalar multiplication defined by k(x, y) = (ky, kx). Is V a
vector space? No, since 1(2, 3) = (3, 2) ≠ (2, 3), it does not satisfy V10. Note that it
also does not satisfy V7.
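The failure of V10 (and V7) in Example 8 is easy to confirm by direct computation; the sketch below assumes a Python environment, and the helper name `smul` is our own, not from the text.

```python
def smul(k, v):
    """The proposed scalar multiplication k(x, y) = (ky, kx) from Example 8."""
    x, y = v
    return (k * y, k * x)

# V10 requires 1*v = v, but the components get swapped:
assert smul(1, (2, 3)) == (3, 2)
assert smul(1, (2, 3)) != (2, 3)

# V7 requires s(t v) = (st)v; it fails as well:
s, t, v = 2, 3, (4, 5)
assert smul(s, smul(t, v)) != smul(s * t, v)
```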

EXAMPLE 9	Let S = {[x₁, x₂, x₁ + x₂]ᵀ | x₁, x₂ ∈ ℝ}. Is S a vector space with standard addition
and scalar multiplication in ℝ³? Yes. Let's verify the axioms.

First, observe that axioms V2, V3, V7, V8, V9, and V10 refer only to the operations
of addition and scalar multiplication. Thus, we know that these operations must
satisfy all of those axioms as they are the operations of the vector space ℝ³.

Let x = [x₁, x₂, x₁ + x₂]ᵀ and y = [y₁, y₂, y₁ + y₂]ᵀ be vectors in S.

V1 We have

    x + y = [x₁ + y₁, x₂ + y₂, x₁ + x₂ + y₁ + y₂]ᵀ

Observe that if we let z₁ = x₁ + y₁ and z₂ = x₂ + y₂, then z₁ + z₂ = x₁ + y₁ + x₂ + y₂
and hence

    x + y = [z₁, z₂, z₁ + z₂]ᵀ ∈ S

since it satisfies the conditions of the set. Therefore, S is closed under addition.

V4 The vector 0 = [0, 0, 0]ᵀ satisfies x + 0 = x = 0 + x and is in S since it satisfies the
conditions of S.

EXAMPLE 9	V5 The additive inverse of x = [x₁, x₂, x₁ + x₂]ᵀ is (-x) = [-x₁, -x₂, -x₁ - x₂]ᵀ,
(continued)	which is in S since it satisfies the conditions of S.

V6 tx = t[x₁, x₂, x₁ + x₂]ᵀ = [tx₁, tx₂, tx₁ + tx₂]ᵀ ∈ S. Therefore, S is closed under
scalar multiplication.

Thus, S with these operations is a vector space as it satisfies all 10 axioms.
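The closure checks for Example 9's set can be spot-checked numerically; the membership helper `in_S` below is our own naming, and integer arithmetic keeps the comparisons exact.

```python
import random

def in_S(v):
    """Membership test for S = {(x1, x2, x1 + x2)} as a subset of R^3."""
    return v[2] == v[0] + v[1]

random.seed(0)
for _ in range(100):
    x1, x2, y1, y2, t = (random.randint(-10, 10) for _ in range(5))
    x = (x1, x2, x1 + x2)
    y = (y1, y2, y1 + y2)
    sum_xy = tuple(a + b for a, b in zip(x, y))   # closure under addition (V1)
    scaled = tuple(t * a for a in x)              # closure under scalar mult (V6)
    assert in_S(sum_xy) and in_S(scaled)
```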

EXERCISE 1	Prove that the set S = {[x₁, x₂]ᵀ | x₁, x₂ ∈ ℤ} is not a vector space using standard
addition and scalar multiplication of vectors in ℝ².

EXERCISE 2	Let S = {[a₁ 0; 0 a₂] | a₁, a₂ ∈ ℝ}. Prove that S is a vector space using standard
addition and scalar multiplication of matrices. This is the vector space of 2 × 2
diagonal matrices.

Again, one advantage of having the abstract concept of a vector space is that when
we prove a result about a general vector space, it instantly applies to all of the examples
of vector spaces. To demonstrate this, we give three additional properties that follow
easily from the vector space axioms.

Theorem 1	Let V be a vector space. Then
(1) 0x = 0 for all x ∈ V
(2) (-1)x = -x for all x ∈ V
(3) t0 = 0 for all t ∈ ℝ

Proof: We will prove (1). You are asked to prove (2) and (3) in Problem D1. For any
x ∈ V we have

    0x = 0x + 0              by V4
       = 0x + [x + (-x)]     by V5
       = 0x + [1x + (-x)]    by V10
       = [0x + 1x] + (-x)    by V3
       = (0 + 1)x + (-x)     by V8
       = 1x + (-x)           operation of numbers in ℝ
       = x + (-x)            by V10
       = 0                   by V5	∎

Thus, if we know that V is a vector space, we can determine the zero vector of V
by finding 0x for any x ∈ V. Similarly, we can determine the additive inverse of any
vector x ∈ V by computing (-1)x.

EXAMPLE 10	Let V = {(a, b) | a, b ∈ ℝ, b > 0} and define addition by (a, b) ⊕ (c, d) = (ad + bc, bd)
and define scalar multiplication by t ⊙ (a, b) = (tab^(t-1), bᵗ). Use Theorem 1 to show
that axioms V4 and V5 hold for V with these operations. (Note that we are using ⊕
and ⊙ to represent the operations of addition and scalar multiplication in the vector
space to help distinguish the difference between these and the operations of addition
and multiplication of real numbers.)

Solution: We do not know if V is a vector space. If it is, then by Theorem 1 we must
have
    0 = 0 ⊙ (a, b) = (0ab⁻¹, b⁰) = (0, 1)

Observe that (0, 1) ∈ V and for any (a, b) ∈ V we have

    (a, b) ⊕ (0, 1) = (a(1) + b(0), b(1)) = (a, b) = (0(b) + 1(a), 1(b)) = (0, 1) ⊕ (a, b)

So, V satisfies V4 using 0 = (0, 1).

Similarly, if V is a vector space, then by Theorem 1 for any x = (a, b) ∈ V we
must have
    (-x) = (-1) ⊙ (a, b) = (-ab⁻², b⁻¹)

Observe that for any (a, b) ∈ V we have (-ab⁻², b⁻¹) ∈ V since b⁻¹ > 0 whenever
b > 0. Also,

    (a, b) ⊕ (-ab⁻², b⁻¹) = (ab⁻¹ + b(-ab⁻²), bb⁻¹) = (ab⁻¹ - ab⁻¹, 1) = (0, 1)

So, V satisfies V5 using -(a, b) = (-ab⁻², b⁻¹).

You are asked to complete the proof that V is indeed a vector space in Problem
D2.
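The exotic operations of Example 10 can be exercised directly. The sketch below uses powers of 2 so the floating-point arithmetic is exact; `add` and `smul` are our own names for ⊕ and ⊙.

```python
def add(u, v):
    """(a, b) + (c, d) = (ad + bc, bd), the addition from Example 10."""
    a, b = u
    c, d = v
    return (a * d + b * c, b * d)

def smul(t, u):
    """t * (a, b) = (t a b^(t-1), b^t), the scalar multiplication from Example 10."""
    a, b = u
    return (t * a * b ** (t - 1), b ** t)

# Theorem 1: 0 * x must be the zero vector, which here is (0, 1).
zero = smul(0, (5.0, 2.0))
assert zero == (0.0, 1.0)
assert add((5.0, 2.0), zero) == (5.0, 2.0)      # (0, 1) acts as identity

# Theorem 1: (-1) * x must be the additive inverse.
neg = smul(-1, (5.0, 2.0))
assert neg == (-1.25, 0.5)
assert add((5.0, 2.0), neg) == (0.0, 1.0)       # x + (-x) = zero vector
```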

Subspaces

In Example 9 we showed that S is a vector space that is contained inside the vector
space ℝ³. Observe that, by the definition of a subspace of ℝⁿ in Section 1.2, S is
actually a subspace of ℝ³. We now generalize these ideas to general vector spaces.

Definition	Suppose that V is a vector space. A non-empty subset U of V is a subspace of V if it
Subspace	satisfies the following two properties:
S1  x + y ∈ U for all x, y ∈ U (U is closed under addition)
S2  tx ∈ U for all x ∈ U and t ∈ ℝ (U is closed under scalar multiplication)

Equivalent Definition. If U is a subset of a vector space V and U is also a vector
space using the same operations as V, then U is a subspace of V.

To prove that both of these definitions are equivalent, we first observe that if U is a
vector space, then it satisfies properties S1 and S2 as these are vector space axioms V1
and V6. On the other hand, as in Example 4.2.9, we know the operations must satisfy
axioms V2, V3, V7, V8, V9, and V10 since V is a vector space. For the remaining
axioms we have

V1 Follows from property S1

V4 Follows from Theorem 1 and property S2 because for any u ∈ U we have
0 = 0u ∈ U

V5 Follows from Theorem 1 and property S2 because for any u ∈ U, the additive
inverse of u is (-u) = (-1)u ∈ U

V6 Follows from property S2

Hence, all 10 axioms are satisfied. Therefore, U is also a vector space under the
operations of V.

Remarks

1. When proving that a set U is a subspace of a vector space V, it is important not
to forget to show that U is actually a subset of V.

2. As with subspaces of ℝⁿ in Section 1.2, we typically show that the subset is
non-empty by showing that it contains the zero vector of V.

EXAMPLE 11	In Exercise 2 you proved that S = {[a₁ 0; 0 a₂] | a₁, a₂ ∈ ℝ} is a vector space. Thus,
since S is a subset of M(2, 2), it is a subspace of M(2, 2).

EXAMPLE 12	Let U = {p(x) ∈ P₃ | p(3) = 0}. Show that U is a subspace of P₃.

Solution: By definition, U is a subset of P₃. The zero vector in P₃ maps x to 0 for
all x; hence it maps 3 to 0. Therefore, the zero vector of P₃ is in U, and hence U is
non-empty.

Let p(x), q(x) ∈ U and s ∈ ℝ. Then p(3) = 0 and q(3) = 0.

S1 (p + q)(3) = p(3) + q(3) = 0 + 0 = 0, so p(x) + q(x) ∈ U

S2 (sp)(3) = sp(3) = s0 = 0, so sp(x) ∈ U

Hence, U is a subspace of P₃. Note that this also implies that U is itself a vector space.

EXAMPLE 13	Define the trace of a 2 × 2 matrix by tr([a₁₁ a₁₂; a₂₁ a₂₂]) = a₁₁ + a₂₂. Prove that
S = {A ∈ M(2, 2) | tr(A) = 0} is a subspace of M(2, 2).

Solution: By definition, S is a subset of M(2, 2). The zero vector of M(2, 2) is
O₂,₂ = [0 0; 0 0]. Clearly, tr(O₂,₂) = 0, so O₂,₂ ∈ S.

Let A, B ∈ S and s ∈ ℝ. Then a₁₁ + a₂₂ = tr(A) = 0 and b₁₁ + b₂₂ = tr(B) = 0.

S1 tr(A + B) = tr([a₁₁+b₁₁ a₁₂+b₁₂; a₂₁+b₂₁ a₂₂+b₂₂]) = a₁₁ + a₂₂ + b₁₁ + b₂₂
   = 0 + 0 = 0, so A + B ∈ S

S2 tr(sA) = tr([sa₁₁ sa₁₂; sa₂₁ sa₂₂]) = sa₁₁ + sa₂₂ = s(a₁₁ + a₂₂) = s(0) = 0,
   so sA ∈ S

Hence, S is a subspace of M(2, 2).
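Both subspace properties in Example 13 rest on the linearity of the trace, which a quick randomized check confirms (the helper names below are our own):

```python
import random

def tr(A):
    """Trace of a 2x2 matrix given as a nested list."""
    return A[0][0] + A[1][1]

random.seed(1)
for _ in range(100):
    A = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    B = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    s = random.randint(-9, 9)
    ApB = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
    sA = [[s * A[i][j] for j in range(2)] for i in range(2)]
    assert tr(ApB) == tr(A) + tr(B)   # gives property S1 when tr(A) = tr(B) = 0
    assert tr(sA) == s * tr(A)        # gives property S2 when tr(A) = 0
```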

EXAMPLE 14	The vector space ℝ² is not a subspace of ℝ³, since ℝ² is not a subset of ℝ³. That is, if
we take any vector x = [x₁, x₂]ᵀ ∈ ℝ², this is not a vector in ℝ³, since a vector in ℝ³
has three components.

EXERCISE 3	Prove that U = {a + bx + cx² ∈ P₂ | b + c = a} is a subspace of P₂.

EXERCISE 4	Let V be a vector space. Prove that {0} is also a vector space, called the trivial vector
space, under the same operations as V by proving it is a subspace of V.

In the previous exercise you proved that {0} is a subspace of any vector space V.
Furthermore, by definition, V is a subspace of itself. We now prove that the set of all
possible linear combinations of a set of vectors in a vector space V is also a subspace.

Theorem 2	If {v₁, ..., vₖ} is a set of vectors in a vector space V and S is the set of all possible
linear combinations of these vectors,

    S = {t₁v₁ + ··· + tₖvₖ | t₁, ..., tₖ ∈ ℝ}

then S is a subspace of V.

Proof: By V1 and V6, t₁v₁ + ··· + tₖvₖ ∈ V. Hence, S is a subset of V. Also, by taking
tᵢ = 0 for 1 ≤ i ≤ k, we get

    0v₁ + ··· + 0vₖ = 0

by V9 and Theorem 1, so 0_V ∈ S.

Let x, y ∈ S. Then for some real numbers sᵢ and tᵢ, 1 ≤ i ≤ k, x = s₁v₁ + ··· + sₖvₖ
and y = t₁v₁ + ··· + tₖvₖ. It follows that

    x + y = (s₁ + t₁)v₁ + ··· + (sₖ + tₖ)vₖ

using V8. So, x + y ∈ S since (sᵢ + tᵢ) ∈ ℝ. Hence, S is closed under addition.
Similarly, for all r ∈ ℝ,

    rx = (rs₁)v₁ + ··· + (rsₖ)vₖ

by V7. Thus, S is also closed under scalar multiplication. Therefore, S is a subspace
of V. ∎

To match what we did in Sections 1.2, 3.1, and 4.1, we make the following
definition.

Definition	If S is the subspace of the vector space V consisting of all linear combinations of the
Span	vectors v₁, ..., vₖ ∈ V, then S is called the subspace spanned by B = {v₁, ..., vₖ}, and
Spanning Set	we say that the set B spans S. The set B is called a spanning set for the subspace S.
We denote S by
    S = Span{v₁, ..., vₖ} = Span B

In Sections 1.2, 3.1, and 4.1, we saw that the concept of spanning is closely related
to the concept of linear independence.

Definition	If B = {v₁, ..., vₖ} is a set of vectors in a vector space V, then B is said to be linearly
Linearly Independent	independent if the only solution to the equation
Linearly Dependent
    t₁v₁ + ··· + tₖvₖ = 0

is t₁ = ··· = tₖ = 0; otherwise, B is said to be linearly dependent.

Remark

The procedure for determining if a vector is in a span or if a set is linearly independent
in a general vector space is exactly the same as we saw for ℝⁿ, M(m, n), and Pₙ in
Sections 2.3, 3.1, and 4.1. We will see further examples of this in the next section.

PROBLEMS 4.2
Practice Problems

A1 Determine, with proof, which of the following sets are subspaces of the given
vector space.
(a) {[x₁, x₂, x₃, x₄]ᵀ | x₁ + 2x₂ = 0, x₁, x₂, x₃, x₄ ∈ ℝ} of ℝ⁴
(b) {[a₁ a₂; a₃ a₄] | a₁ + 2a₂ = 0, a₁, a₂, a₃, a₄ ∈ ℝ} of M(2, 2)
(c) {a₀ + a₁x + a₂x² + a₃x³ | a₀ + 2a₁ = 0, a₀, a₁, a₂, a₃ ∈ ℝ} of P₃
(d) {[a₁ a₂; a₃ a₄] | a₁, a₂, a₃, a₄ ∈ ℤ} of M(2, 2)
(e) {[a₁ a₂; a₃ a₄] | a₁a₄ - a₂a₃ = 0, a₁, a₂, a₃, a₄ ∈ ℝ} of M(2, 2)
(f) {[a₁ 0; 0 a₂] | a₁ = a₂, a₁, a₂ ∈ ℝ} of M(2, 2)

A2 Determine, with proof, whether the following subsets of M(n, n) are subspaces.
(a) The subset of diagonal matrices
(b) The subset of matrices that are in row echelon form
(c) The subset of symmetric matrices (A matrix A is symmetric if Aᵀ = A or,
equivalently, if aᵢⱼ = aⱼᵢ for all i and j.)
(d) The subset of upper triangular matrices

A3 Determine, with proof, whether the following subsets of P₅ are subspaces.
(a) {p(x) ∈ P₅ | p(-x) = p(x) for all x ∈ ℝ} (the subset of even polynomials)
(b) {(1 + x²)p(x) | p(x) ∈ P₃}
(c) {a₀ + a₁x + ··· + a₄x⁴ | a₀ = a₄, a₁ = a₃, aᵢ ∈ ℝ}
(d) {p(x) ∈ P₅ | p(0) = 1}
(e) {a₀ + a₁x + a₂x² | a₀, a₁, a₂ ∈ ℝ}

A4 Let F be the vector space of all real-valued functions of a real variable. Determine,
with proof, which of the following subsets of F are subspaces.
(a) {f ∈ F | f(3) = 0}
(b) {f ∈ F | f(3) = 1}
(c) {f ∈ F | f(-x) = f(x) for all x ∈ ℝ}
(d) {f ∈ F | f(x) ≥ 0 for all x ∈ ℝ}

A5 Show that any set of vectors in a vector space V that contains the zero vector is
linearly dependent.

Homework Problems

B1 Determine, with proof, which of the following sets are subspaces of the given
vector space.
(a) {[x₁, x₂, x₃, x₄]ᵀ | x₁ + x₃ = x₄, x₁, x₂, x₃, x₄ ∈ ℝ} of ℝ⁴
(b) {[a₁ a₂; a₃ a₄] | a₁ + a₃ = a₄, a₁, a₂, a₃, a₄ ∈ ℝ} of M(2, 2)
(c) {a₀ + a₁x + a₂x² + a₃x³ | a₀ + a₂ = a₃, a₀, a₁, a₂, a₃ ∈ ℝ} of P₃
(d) {a₀ + a₁x | a₀, a₁ ∈ ℝ} of P₄
(e) {[a₁ 0; 0 a₄] | a₁a₄ = 1, a₁, a₄ ∈ ℝ} of M(2, 2)
(f) {[a₁ a₂; a₃ a₁] | a₁ - a₃ = 1, a₁, a₂, a₃ ∈ ℝ} of M(2, 2)

B2 Determine, with proof, whether the following subsets of M(3, 3) are subspaces.
(a) {A ∈ M(3, 3) | tr(A) = 0}
(b) The subset of invertible 3 × 3 matrices
(c) The subset of 3 × 3 matrices A such that A[1, 1, 1]ᵀ = [0, 0, 0]ᵀ
(d) The subset of 3 × 3 matrices A such that A[1, 1, 1]ᵀ = [1, 1, 1]ᵀ
(e) The subset of skew-symmetric matrices (A matrix A is skew-symmetric if
Aᵀ = -A; that is, aᵢⱼ = -aⱼᵢ for all i and j.)

B3 Determine, with proof, whether the following subsets of P₅ are subspaces.
(a) {p(x) ∈ P₅ | p(-x) = -p(x) for all x ∈ ℝ} (the subset of odd polynomials)
(b) {(p(x))² | p(x) ∈ P₂}
(c) {x³ + a₁x + a₀ | a₀, a₁ ∈ ℝ}
(d) {x·p(x) | p(x) ∈ P₂}
(e) {p(x) ∈ P₅ | p(1) = 0}

B4 Let F be the vector space of all real-valued functions of a real variable. Determine,
with proof, which of the following subsets of F are subspaces.
(a) {f ∈ F | f(3) + f(5) = 0}
(b) {f ∈ F | f(1) + f(2) = 1}
(c) {f ∈ F | |f(x)| ≤ 1}
(d) {f ∈ F | f is increasing on ℝ}

Conceptual Problems

D1 Let V be a vector space.
(a) Prove that -x = (-1)x for every x ∈ V.
(b) Prove that the zero vector in V is unique.
(c) Prove that t0 = 0 for every t ∈ ℝ.

D2 Let V = {(a, b) | a, b ∈ ℝ, b > 0} and define addition by (a, b) ⊕ (c, d) =
(ad + bc, bd) and scalar multiplication by t ⊙ (a, b) = (tab^(t-1), bᵗ) for any t ∈ ℝ.
Prove that V is a vector space with these operations.

D3 Let V = {x ∈ ℝ | x > 0} and define addition by x ⊕ y = xy and scalar
multiplication by t ⊙ x = xᵗ for any t ∈ ℝ. Prove that V is a vector space with these
operations.

D4 Let 𝕃 denote the set of all linear operators L : ℝⁿ → ℝⁿ with standard addition
and scalar multiplication of linear mappings. Prove that 𝕃 is a vector space under
these operations.

D5 Suppose that U and V are vector spaces over ℝ. The Cartesian product of U and V
is defined to be
    U × V = {(u, v) | u ∈ U, v ∈ V}
(a) In U × V define addition and scalar multiplication by
    (u₁, v₁) ⊕ (u₂, v₂) = (u₁ + u₂, v₁ + v₂)
    t ⊙ (u₁, v₁) = (tu₁, tv₁)
Verify that with these operations U × V is a vector space.
(b) Verify that U × {0_V} is a subspace of U × V.
(c) Suppose instead that scalar multiplication is defined by t ⊙ (u, v) = (tu, v), while
addition is defined as in part (a). Is U × V a vector space with these operations?

4.3 Bases and Dimensions

In Chapters 1 and 3, much of the discussion was dependent on the use of the standard
basis in ℝⁿ. For example, the dot product of two vectors a and b was defined in terms of
the standard components of the vectors. As another example, the standard matrix [L]
of a linear mapping was determined by calculating the images of the standard basis
vectors. Therefore, it would be useful to define the same concept for any vector space.

Bases

Recall from Section 1.2 that the two important properties of the standard basis in ℝⁿ
were that it spanned ℝⁿ and it was linearly independent. It is clear that we should want
a basis B for a vector space V to be a spanning set so that every vector in V can be
written as a linear combination of the vectors in B. Why would it be important that the
set B be linearly independent? The following theorem answers this question.

Theorem 1	Unique Representation Theorem
Let B = {v₁, ..., vₙ} be a spanning set for a vector space V. Then every vector in V
can be expressed in a unique way as a linear combination of the vectors of B if and
only if the set B is linearly independent.

Proof: Let x be any vector in V. Since Span B = V, we have that x can be written as
a linear combination of the vectors in B. Assume that there are linear combinations

    x = a₁v₁ + ··· + aₙvₙ  and  x = b₁v₁ + ··· + bₙvₙ

This gives
    a₁v₁ + ··· + aₙvₙ = b₁v₁ + ··· + bₙvₙ

which implies
    (a₁ - b₁)v₁ + ··· + (aₙ - bₙ)vₙ = 0

If B is linearly independent, then we must have aᵢ - bᵢ = 0, so aᵢ = bᵢ for 1 ≤ i ≤ n.
Hence, x has a unique representation.

On the other hand, if B is linearly dependent, then

    t₁v₁ + ··· + tₙvₙ = 0

has a solution where at least one of the coefficients is non-zero. But

    0 = 0v₁ + ··· + 0vₙ

Hence, 0 can be expressed as a linear combination of the vectors in B in multiple
ways. ∎

Thus, if B is a linearly independent spanning set for a vector space V, then every
vector in V can be written as a unique linear combination of the vectors in B.

Definition	A set B of vectors in a vector space V is a basis if it is a linearly independent spanning
Basis	set for V.

Remark

According to this definition, the trivial vector space {0} does not have a basis, since
any set of vectors containing the zero vector in a vector space V is linearly dependent.
However, we would like every vector space to have a basis, so we define the empty set
to be a basis for the trivial vector space.

EXAMPLE 1	The set of vectors {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]} in Exercise 3.1.2 is a
basis for M(2, 2). It is called the standard basis for M(2, 2).
The set of vectors {1, x, x², x³} in Exercise 4.1.2 is a basis for P₃. In general, the
set {1, x, ..., xⁿ} is called the standard basis for Pₙ.

EXAMPLE 2	Prove that the set C = {[1, 1, 1]ᵀ, [1, 1, 0]ᵀ, [1, 0, 0]ᵀ} is a basis for ℝ³.

Solution: We need to show that Span C = ℝ³ and that C is linearly independent. To
prove that Span C = ℝ³, we need to show that every vector x ∈ ℝ³ can be written as a
linear combination of the vectors in C. Consider

    [x₁, x₂, x₃]ᵀ = t₁[1, 1, 1]ᵀ + t₂[1, 1, 0]ᵀ + t₃[1, 0, 0]ᵀ = [t₁ + t₂ + t₃, t₁ + t₂, t₁]ᵀ

Row reducing the corresponding coefficient matrix gives

    [1 1 1]   [1 0 0]
    [1 1 0] ~ [0 1 0]
    [1 0 0]   [0 0 1]

Observe that the rank of the coefficient matrix equals the number of rows, so by
Theorem 2.2.2, the system is consistent for every x ∈ ℝ³. Hence, Span C = ℝ³.
Moreover, since the rank of the coefficient matrix equals the number of columns, there
are no parameters in the general solution. Therefore, we have a unique solution when
we take x = 0, so C is also linearly independent. Hence, it is a basis for ℝ³.

EXAMPLE 3	Is the set B = {[1 2; -1 1], [0 1; 3 1], [2 5; 1 3]} a basis for the subspace Span B of
M(2, 2)?

Solution: Since B is a spanning set for Span B, we just need to check if the vectors in
B are linearly independent. Consider the equation

    [0 0; 0 0] = t₁[1 2; -1 1] + t₂[0 1; 3 1] + t₃[2 5; 1 3]
               = [t₁ + 2t₃   2t₁ + t₂ + 5t₃; -t₁ + 3t₂ + t₃   t₁ + t₂ + 3t₃]

Row reducing the coefficient matrix of the corresponding system gives

    [ 1 0 2]   [1 0 2]
    [ 2 1 5] ~ [0 1 1]
    [-1 3 1]   [0 0 0]
    [ 1 1 3]   [0 0 0]

Observe that this implies that there are non-trivial solutions to the system. For example,
one non-trivial solution is given by t₁ = -2, t₂ = -1, and t₃ = 1, and you can verify
that

    (-2)[1 2; -1 1] + (-1)[0 1; 3 1] + (1)[2 5; 1 3] = [0 0; 0 0]

Therefore, the given vectors are linearly dependent and do not form a basis for Span B.
EXAMPLE 4	Is the set C = {3 + 2x + 2x², 1 + x², 1 + x + x²} a basis for P₂?

Solution: Consider the equation

    a₀ + a₁x + a₂x² = t₁(3 + 2x + 2x²) + t₂(1 + x²) + t₃(1 + x + x²)
                    = (3t₁ + t₂ + t₃) + (2t₁ + t₃)x + (2t₁ + t₂ + t₃)x²

Row reducing the coefficient matrix of the corresponding system gives

    [3 1 1]   [1 0 0]
    [2 0 1] ~ [0 1 0]
    [2 1 1]   [0 0 1]

Observe that this implies that the system is consistent and has a unique solution for all
a₀ + a₁x + a₂x² ∈ P₂. Thus, C is a basis for P₂.
EXERCISE 1	Prove that the set B = {1 + 2x + x², 1 + x², 1 + x} is a basis for P₂.

EXAMPLE 5	Determine a basis for the subspace S = {p(x) ∈ P₂ | p(1) = 0} of P₂.

Solution: We first find a spanning set for S and then show that it is linearly independent.
By the Factor Theorem, if p(1) = 0, then (x - 1) is a factor of p(x). That is, every
polynomial p(x) ∈ S can be written in the form

    p(x) = (x - 1)(ax + b) = a(x² - x) + b(x - 1)

Thus, we see that S = Span{x² - x, x - 1}. Consider

    0 = t₁(x² - x) + t₂(x - 1) = -t₂ + (t₂ - t₁)x + t₁x²

The only solution is t₁ = t₂ = 0. Hence, {x² - x, x - 1} is linearly independent. Thus,
{x² - x, x - 1} is a linearly independent spanning set of S and hence a basis.
Obtaining a Basis from an Arbitrary Finite Spanning


Set
Many times throughout the rest of this book, we will need to determine a basis for a
vector space. One standard way of doing this is to first determine a spanning set for the
vector space and then to remove vectors from the spanning set until we have a basis.
We now outline this procedure.
Suppose that T = {v1, • • • , vk} is a spanning set for a non-trivial vector space V.
We want to choose a subset of'T that is a basis for V. If'T is linearly independent, then
Tis a basis for V, and we are done. If Tis linearly dependent, then t1 v1 + · · +tkvk 0
· =

has a solution where at least one of the coefficients is non-zero, say t; * 0. Then, we
can solve the equation for v; to get

So, for any x E V we have

x = a1v1 + · · · +a;-1 V;-1 +a;v; +a;+1V;+1 + · · · +akvk

= l�
a1V1 + ... +a;-1V;-1 +a; - (t1V1 +... +t;-1Vi-J +t;+1V;+1 +... +tkvk) ]
+a;+1V;+1 + · · · + akvk

Thus, any x E V can be expressed as a linear combination of the set T\{v;}. This
shows that T\{v;} is a spanning set for V. If T\{vd is linearly independent, it is a
basis for V, and the procedure is finished. Otherwise, we repeat the procedure to omit
a second vector, say v1, and get T\{v;, v1}, which still spans V. In this fashion, we
must eventually get a linearly independent set. (Certainly, if there is only one non-zero
vector left, it forms a linearly independent set.) Thus, we obtain a subset of Tthat is a
basis for V.

EXAMPLE 6	If T = {[1, 1, -2]ᵀ, [2, -1, 1]ᵀ, [1, -2, 3]ᵀ, [1, 5, 3]ᵀ}, determine a subset of T that
is a basis for Span T.

Solution: Consider

    [0, 0, 0]ᵀ = t₁[1, 1, -2]ᵀ + t₂[2, -1, 1]ᵀ + t₃[1, -2, 3]ᵀ + t₄[1, 5, 3]ᵀ
               = [t₁ + 2t₂ + t₃ + t₄, t₁ - t₂ - 2t₃ + 5t₄, -2t₁ + t₂ + 3t₃ + 3t₄]ᵀ

We row reduce the corresponding coefficient matrix:

    [ 1  2  1 1]   [1 0 -1 0]
    [ 1 -1 -2 5] ~ [0 1  1 0]
    [-2  1  3 3]   [0 0  0 1]

The general solution is [t₁, t₂, t₃, t₄]ᵀ = s[1, -1, 1, 0]ᵀ, s ∈ ℝ. Taking s = 1, we get
t₁ = 1, t₂ = -1, t₃ = 1, t₄ = 0, which gives

    [1, 1, -2]ᵀ - [2, -1, 1]ᵀ + [1, -2, 3]ᵀ = [0, 0, 0]ᵀ
    or  [1, -2, 3]ᵀ = -[1, 1, -2]ᵀ + [2, -1, 1]ᵀ

Thus, we can omit [1, -2, 3]ᵀ from T and consider

    T\{[1, -2, 3]ᵀ} = {[1, 1, -2]ᵀ, [2, -1, 1]ᵀ, [1, 5, 3]ᵀ}

Now consider

    [0, 0, 0]ᵀ = t₁[1, 1, -2]ᵀ + t₂[2, -1, 1]ᵀ + t₃[1, 5, 3]ᵀ

The matrix is the same as above except that the third column is omitted, so the same
row operations give

    [ 1  2 1]   [1 0 0]
    [ 1 -1 5] ~ [0 1 0]
    [-2  1 3]   [0 0 1]

Hence, the only solution is t₁ = t₂ = t₃ = 0 and we conclude that
{[1, 1, -2]ᵀ, [2, -1, 1]ᵀ, [1, 5, 3]ᵀ} is linearly independent and thus a basis for Span T.
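The procedure in Example 6 is, in matrix terms, keeping the vectors that correspond to pivot columns of the row-reduced coefficient matrix. A stdlib-only sketch (the `rref` helper is our own, not the text's):

```python
from fractions import Fraction

def rref(rows):
    """Reduce a matrix (given as a list of rows) to reduced row echelon form."""
    m = [[Fraction(x) for x in row] for row in rows]
    lead = 0
    for r in range(len(m)):
        if lead >= len(m[0]):
            break
        i = r
        while m[i][lead] == 0:
            i += 1
            if i == len(m):      # no pivot in this column; move right
                i = r
                lead += 1
                if lead == len(m[0]):
                    return m
        m[i], m[r] = m[r], m[i]
        m[r] = [x / m[r][lead] for x in m[r]]
        for j in range(len(m)):
            if j != r and m[j][lead] != 0:
                m[j] = [a - m[j][lead] * b for a, b in zip(m[j], m[r])]
        lead += 1
    return m

# Columns are the four vectors of T
A = [[1, 2, 1, 1],
     [1, -1, -2, 5],
     [-2, 1, 3, 3]]
R = rref(A)
pivots = []
for row in R:
    for j, x in enumerate(row):
        if x != 0:
            pivots.append(j)
            break
# Columns 1, 2, and 4 are pivot columns, so the first, second, and fourth
# vectors of T form a basis for Span T -- matching the example.
assert pivots == [0, 1, 3]
```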



EXERCISE 2	Let B = {1 - x, 2 + 2x + x², x + x², 1 + x²}. Determine a subset of B that is a basis
for Span B.

Dimension

We saw in Section 2.3 that every basis of a subspace S of ℝⁿ contains the same number
of vectors. We now prove that this result holds for general vector spaces. Observe that
the proof of this result is essentially identical to that in Section 2.3.

Lemma 2	Suppose that V is a vector space and Span{v₁, ..., vₙ} = V. If {u₁, ..., uₖ} is a
linearly independent set in V, then k ≤ n.

Proof: Since each uᵢ, 1 ≤ i ≤ k, is a vector in V, it can be written as a linear
combination of the vᵢ's. We get

    u₁ = a₁₁v₁ + a₂₁v₂ + ··· + aₙ₁vₙ
    u₂ = a₁₂v₁ + a₂₂v₂ + ··· + aₙ₂vₙ
    ⋮
    uₖ = a₁ₖv₁ + a₂ₖv₂ + ··· + aₙₖvₙ

Consider the equation

    0 = t₁u₁ + ··· + tₖuₖ
      = t₁(a₁₁v₁ + a₂₁v₂ + ··· + aₙ₁vₙ) + ··· + tₖ(a₁ₖv₁ + a₂ₖv₂ + ··· + aₙₖvₙ)
      = (a₁₁t₁ + ··· + a₁ₖtₖ)v₁ + ··· + (aₙ₁t₁ + ··· + aₙₖtₖ)vₙ

Since 0 = 0v₁ + ··· + 0vₙ, we can write this as a condition on the coefficients.
Comparing coefficients of vᵢ we get the homogeneous system

    a₁₁t₁ + ··· + a₁ₖtₖ = 0
    ⋮
    aₙ₁t₁ + ··· + aₙₖtₖ = 0

of n equations in k unknowns t₁, ..., tₖ. If k > n, then this system would have a
non-trivial solution, which would imply that {u₁, ..., uₖ} is linearly dependent. But, we
assumed that {u₁, ..., uₖ} is linearly independent, so we must have k ≤ n. ∎

Theorem 3	If B = {v₁, ..., vₙ} and C = {u₁, ..., uₖ} are both bases of a vector space V, then
k = n.

Proof: On one hand, B is a basis for V, so it is linearly independent. Also, C is a basis
for V, so Span C = V. Thus, by Lemma 2, we get that n ≤ k. Similarly, C is linearly
independent as it is a basis for V, and Span B = V, since B is a basis for V. So Lemma 2
gives k ≤ n. Therefore, n = k, as required. ∎

As in Section 2.3, this theorem justifies the following definition of the dimension
of a vector space.

Definition	If a vector space V has a basis with n vectors, then we say that the dimension of V is
Dimension	n and write
    dim V = n
If a vector space V does not have a basis with finitely many elements, then V is called
infinite-dimensional. The dimension of the trivial vector space is defined to be 0.

Remark
Properties of infinite-dimensional spaces are beyond the scope of this book.

EXAMPLE 7
(a) ℝⁿ is n-dimensional because the standard basis contains n vectors.

(b) The vector space M(m, n) is (m × n)-dimensional since the standard basis has
m × n vectors.

(c) The vector space Pₙ is (n + 1)-dimensional as it has the standard basis
{1, x, x², ..., xⁿ}.

(d) The vector space C(a, b) is infinite-dimensional as it contains all polynomials
(along with many other types of functions). Most function spaces are
infinite-dimensional.

EXAMPLE 8
Let S = Span B, where B is the given set of four vectors. Show that dim S = 2.

Solution: We row reduce the matrix whose columns are the vectors of B. The reduced form shows that the third and fourth vectors of B can be written as linear combinations of the first two vectors, v1 and v2. Thus, S = Span{v1, v2}. Moreover, {v1, v2} is clearly linearly independent, since neither vector is a scalar multiple of the other; hence it is a basis for S. Thus, dim S = 2.
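The row-reduction computation in this kind of example can be sketched in code. This is an illustration only, not the textbook's method or data: the vectors below are hypothetical stand-ins chosen so that the last two are combinations of the first two, and `rank` is a small fraction-exact Gaussian elimination.

```python
# Hypothetical illustration of computing dim Span B by row reduction.
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of rows) via exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        # Find a pivot at or below row r in column c.
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Four vectors in R^4; v3 = 2*v1 + v2 and v4 = v1 - v2, so dim Span B = 2.
v1, v2 = [1, -1, 3, 2], [-1, 0, -2, 3]
v3 = [2 * a + b for a, b in zip(v1, v2)]
v4 = [a - b for a, b in zip(v1, v2)]
dim_S = rank([v1, v2, v3, v4])
```

Since the rank of a matrix equals the dimension of the span of its rows (or columns), `dim_S` here is exactly the dimension of the subspace spanned by the four vectors.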

EXAMPLE 9
Let S = {[a b; c d] ∈ M(2, 2) | a + b = d}. Determine the dimension of S.

Solution: Since d = a + b, observe that every matrix in S has the form

[a b; c a + b] = a[1 0; 0 1] + b[0 1; 0 1] + c[0 0; 1 0]

Thus, S = Span{[1 0; 0 1], [0 1; 0 1], [0 0; 1 0]}. It is easy to show that this spanning set for S is also linearly independent and hence is a basis for S. Thus, dim S = 3.

EXERCISE 3 Find the dimension of S = {a + bx + cx^2 + dx^3 ∈ P3 | a + b + c + d = 0}.

Extending a Linearly Independent Subset to a Basis

Sometimes a linearly independent subset T = {v1, ..., vk} is given in an n-dimensional vector space V, and it is necessary to include these in a basis for V. If Span T ≠ V, then there exists some vector w_{k+1} that is in V but not in Span T. Now consider

t1v1 + ··· + tkvk + t_{k+1}w_{k+1} = 0    (4.1)

If t_{k+1} ≠ 0, then we have

w_{k+1} = -(t1/t_{k+1})v1 - ··· - (tk/t_{k+1})vk

and so w_{k+1} can be written as a linear combination of the vectors in T, which cannot be since w_{k+1} ∉ Span T. Therefore, we must have t_{k+1} = 0. In this case, (4.1) becomes

t1v1 + ··· + tkvk = 0

But T is linearly independent, which implies that t1 = ··· = tk = 0. Thus, equation (4.1) implies that t1 = ··· = t_{k+1} = 0, and hence {v1, ..., vk, w_{k+1}} is linearly independent. Now, if Span{v1, ..., vk, w_{k+1}} = V, then it is a basis for V. If not, we repeat the procedure to add another vector w_{k+2} to get {v1, ..., vk, w_{k+1}, w_{k+2}}, which is linearly independent. In this fashion, we must eventually get a basis, since by Lemma 2 there cannot be more than n linearly independent vectors in an n-dimensional vector space.
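The extension procedure lends itself to a direct sketch: repeatedly test candidate vectors and keep any that enlarge the span. The implementation below is illustrative (the helper names and the sample set `T` are my own, not from the text); it tries the standard basis vectors as candidates.

```python
# Illustrative sketch of extending an independent set to a basis of R^n.
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of rows) via exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def extend_to_basis(vecs, n):
    """Extend a linearly independent list of vectors in R^n to a basis."""
    basis = [list(v) for v in vecs]
    for j in range(n):
        e = [1 if i == j else 0 for i in range(n)]  # candidate w: standard basis vector
        if rank(basis + [e]) > len(basis):          # e lies outside Span(basis)
            basis.append(e)
        if len(basis) == n:
            break
    return basis

T = [[1, 2, 3]]          # a linearly independent set in R^3
B = extend_to_basis(T, 3)
```

Each appended candidate strictly increases the rank, so the loop terminates with n independent vectors, mirroring the argument via Lemma 2 above.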
214 Chapter 4 Vector Spaces

EXAMPLE 10
Let B = {[1 1; 0 1], [-2 -1; 1 1]}. Extend B to a basis for M(2, 2).

Solution: We first want to determine whether B is a spanning set for M(2, 2). Consider

t1[1 1; 0 1] + t2[-2 -1; 1 1] = [b1 b2; b3 b4]

Row reducing the augmented matrix of the associated system gives

[ 1 -2 | b1 ]     [ 1 -2 | b1              ]
[ 1 -1 | b2 ]  →  [ 0  1 | b2 - b1         ]
[ 0  1 | b3 ]     [ 0  0 | b1 - b2 + b3    ]
[ 1  1 | b4 ]     [ 0  0 | 2b1 - 3b2 + b4  ]

Hence, B is not a spanning set of M(2, 2), since any matrix [b1 b2; b3 b4] with b1 - b2 + b3 ≠ 0 (or 2b1 - 3b2 + b4 ≠ 0) is not in Span B. In particular, [0 0; 1 0] is not in the span of B. Hence, by the procedure above, we should add this matrix to the set. We let

B1 = {[1 1; 0 1], [-2 -1; 1 1], [0 0; 1 0]}

and repeat the procedure. Row reducing the augmented matrix of the associated system gives

[ 1 -2 0 | b1 ]     [ 1 -2 0 | b1              ]
[ 1 -1 0 | b2 ]  →  [ 0  1 0 | b2 - b1         ]
[ 0  1 1 | b3 ]     [ 0  0 1 | b1 - b2 + b3    ]
[ 1  1 0 | b4 ]     [ 0  0 0 | 2b1 - 3b2 + b4  ]

So, any matrix [b1 b2; b3 b4] with 2b1 - 3b2 + b4 ≠ 0 is not in Span B1. For example, [0 0; 0 1] is not in the span of B1, and thus B1 is not a basis for M(2, 2). Adding [0 0; 0 1] to B1 we get

B2 = {[1 1; 0 1], [-2 -1; 1 1], [0 0; 1 0], [0 0; 0 1]}

By construction B2 is linearly independent. Moreover, we can show that it spans M(2, 2). Thus, it is a basis for M(2, 2).

EXERCISE 4
Extend the set 'T = {[; ]} to a basis for R3.

Knowing the dimension of a finite-dimensional vector space V is very useful when trying to find a basis for V, as the next theorem demonstrates.

Theorem 4 Let V be an n-dimensional vector space. Then

(1) A set of more than n vectors in V must be linearly dependent.
(2) A set of fewer than n vectors cannot span V.
(3) A set with n elements of V is a spanning set for V if and only if it is linearly independent.

Proof: (1) This is Lemma 2 above.

(2) Suppose that V can be spanned by a set of k < n vectors. Since V is n-dimensional, it has a basis containing n vectors. This means we have a set of n linearly independent vectors in a space spanned by k < n vectors, which contradicts Lemma 2.

(3) If B is a linearly independent set of n vectors that does not span V, then it can be extended to a basis for V by the procedure above. But this would give a linearly independent set of more than n vectors in V, which contradicts (1).

Similarly, if B is a spanning set for V that is not linearly independent, then, by the procedure above, there is a linearly independent proper subset of B that spans V. But then V would be spanned by a set of fewer than n vectors, which contradicts (2). ∎

EXAMPLE 11
(a) Produce a basis B for the plane P in R^3 with equation x1 + 2x2 - x3 = 0.

(b) Extend the basis B to obtain a basis C for R^3.

Solution: (a) We know that a plane in R^3 has dimension 2. By Theorem 4, we just need to pick two linearly independent vectors that lie in the plane. Observe that v1 = (1, 0, 1) and v2 = (0, 1, 2) both satisfy the equation of the plane and neither is a scalar multiple of the other. Thus they are linearly independent, and B = {v1, v2} is a basis for the plane P.

(b) From the procedure above, we need to add a vector that is not in the span of {v1, v2}. But Span{v1, v2} is the plane, so we need to pick any vector not in the plane. Observe that v3 = (1, 0, 0) does not satisfy the equation of the plane and hence is not in the plane. Thus, {v1, v2, v3} is a linearly independent set of three vectors in R^3 and therefore is a basis C for R^3, according to Theorem 4.
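A construction like this is easy to check computationally. The sketch below is illustrative only (the plane and vectors are my own sample choices): verify that two chosen vectors satisfy the plane's equation and are not parallel, and that a third vector off the plane completes a basis of R^3.

```python
# Illustrative check: two in-plane vectors plus one off-plane vector form a basis.
def on_plane(v):
    """Check x1 + 2*x2 - x3 = 0 for a sample plane in R^3."""
    x1, x2, x3 = v
    return x1 + 2 * x2 - x3 == 0

def det3(a, b, c):
    """Determinant of the 3x3 matrix with rows a, b, c."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

v1, v2 = (1, 0, 1), (0, 1, 2)   # both lie in the plane, and are not parallel
v3 = (1, 0, 0)                  # does not lie in the plane
is_basis = (on_plane(v1) and on_plane(v2) and not on_plane(v3)
            and det3(v1, v2, v3) != 0)
```

A nonzero determinant confirms the three vectors are linearly independent, so by Theorem 4 they form a basis of R^3.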

EXERCISE 5 Produce a basis for the hyperplane in R^4 with equation x1 - x2 + x3 - 2x4 = 0 and extend the basis to obtain a basis for R^4.

PROBLEMS 4.3
Practice Problems

A1 Determine whether each set is a basis for R^3.

(a)
{[il-l=: l [;]}
·

(b)

W�lHl} AS Determine the dimension of the vector space of

minrnlHl}
polynomials spanned by B.
(c)
(a) B = {1 + x, 1 + x + x^2, 1 + x^3}
(b) B = {1 + x, 1 - x, 1 + x^3, 1 - x^3}
(c) B = {1 + x + x^2, 1 - x^3, 1 - 2x + 2x^2 - x^3,

wirnrnJ}
(c)
1 - x^2 + 2x^3, x^2 + x^3}
(d)

{[-:].[J[�]}
A6 (a) Using the method in Example 11, determine a basis for the plane 2x1 - x2 - x3 = 0 in R^3.
(e)
(b) Extend the basis of part (a) to obtain a basis

{l � 1}
for JR3.
2
A2 Let B = { ... }. Prove that B is a basis for R^4.

A7 (a) Using the method in Example 11, determine a basis for the hyperplane x1 - x2 + x3 - x4 = 0 in R^4.
(b) Extend the basis of part (a) to obtain a basis for R^4.
A3 Select a basis for Span B from each of the following
AS Obtain a basis for each of the following vector
sets B and determine the dimension of Span B.
spaces and determine the dimension.

{HJ.r!HH lm
(a) S ={a+bx+cx2 P2 a= -c}
I
E
(a) B=
(b) S = {[� �] E M(2,2) a,b,c
I
E IR}
(b) B=
{[�l r=�i flrnHm
·
(c) s =
[{ �:] [�:l [i] o}
E R3
I
=

A4 Select a basis for Span B from each of the following (d) S ={p(x) P2 p(2) = O}
I
E

sets

(a)
B and determine the dimension of Span B.
B={[-� �]·[� -�]·[� =�J.
(e) S = {[� !] E M(2,2) a= -c}
I

[� -�]}

Homework Problems

Bl Determine whether each set is a basis forJR3. BS Select a basis for Span3 from each of the following

{[-il Ul Ul}
sets3 and determine the dimension of Span3.
(a)
(a) 3= {[� �]·[=� =�]·[� -�]·[� �]}
{[ � �] ,[� �] [� �] ,
·

{[�J .nrnrnJ}
(b) 3=
(b)
_ ,

rn _;J.r; -�n
(c)
{[�] HJ.lm B6 Determine the dimension of the vector space of
polynomials spanned by3.

mHm
(a) B = {1 + x + x^2, x + x^2 + x^3, 1 - x^3}
(d)
(b) 3= {l+ +
x+ x x2+x3, 1 x2+x3}
x2, +

{[J[J.Ul
B7 (a) Using the method in Example 11, determine a

}
basis for the plane
x1 + 3x2 + 4x3 = 0 in R^3.
(e)
(b) Extend the basis of part (a) to obtain a basis
for JR3.

B2 Let3= {[� �]·[� !] [! -�J.


- BS (a) Using the method in Example 11, determine a

[! -�n.
basis for the hyperplane +
x1 x2+2x3+ = X
- .

0
4
Prove that3 is a basis for
M(2, 2). inJR4.
(b) Extend the basis of part (a) to obtain a basis
B3 Determine whether each set is a basis for P2. for JR4.
(a) {1
+x+x2, 1 -x2} B9 Obtain a basis for each of the following vector

S {[� �]EM(2,2) Ia,bE }


(b) { + +
11++x2,x2,21-xx x2,2x2,-x-2-x2,2x}1+2x2}
(c) { + +
spaces and determine the dimension.

(d) {
3+2x+2x2,1+x+x2,1 -x - x2} (a) = JR

B4 Select a basis for Span3 from each of the following


S { x2)p(x) p(x)EP2}
= Cl+

{[;;] R3 I[;;] Ul o}
(b) I
sets3 and determine the dimension of Span3.

{ [�l .UJ rm
(c) s = =
(a) =
e
B

{[ :i [-irnl .ui Hll


(d)

S {p(b -x) EPs I-p(a-x) Ep(x) x}


(b) =
S={[; :]EM(2,2)1a-b=O,
B _ = = JR}
c 0, c 0
(e)
.

= = for all

Conceptual Problems

D1 (a) It may be said that "a basis for a finite dimensional vector space V is a maximal (largest possible) linearly independent set in V." Explain why this makes sense in terms of statements in this section.
(b) It may be said that "a basis for a finite dimensional vector space V is a minimal spanning set for V." Explain why this makes sense in terms of statements in this section.

D2 Let V be an n-dimensional vector space. Prove that if S is a subspace of V and dim S = n, then S = V.

D3 (a) Show that if {v1, v2} is a basis for a vector space V, then for any real number t, {v1, v2 + tv1} is also a basis for V.
(b) Show that if {v1, v2, v3} is a basis for a vector space V, then for any s, t ∈ R, {v1, v2, v3 + tv1 + sv2} is also a basis for V.

D4 In Problem 4.2.D2 you proved that V = {(a, b) | a, b ∈ R, b > 0}, with addition defined by (a, b) ⊕ (c, d) = (ad + bc, bd) and scalar multiplication defined by t ⊙ (a, b) = (tab^{t-1}, b^t), was a vector space over R. Find, with justification, a basis for V and hence determine the dimension of V.

4.4 Coordinates with Respect to a Basis


In Section 4.3, we saw that a vector space may have many different bases. Why might we want a basis other than the standard basis? In some problems, it is much more convenient to use a different basis. For example, a stretch by a factor of 3 in R^2 in the direction of a given vector is geometrically easy to understand. However, it would be awkward to determine the standard matrix of this stretch and then determine its effect on any other vector. It would be much better to have a basis that takes advantage of the direction of the stretch and of a second direction that remains unchanged under the stretch.


Alternatively, consider the reflection in the plane x1 + 2x2 - 3x3 = 0 in R^3. It is easy to describe this by saying that it reverses the normal vector n = (1, 2, -3), sending n to -n, and leaves unchanged any vectors lying in the plane (for example, (-2, 1, 0) and (3, 0, 1)). See Figure 4.4.2. Describing this reflection in terms of these vectors gives more geometrical information than describing it in terms of the standard basis vectors.

Figure 4.4.2 Reflection in the plane x1 + 2x2 - 3x3 = 0.

Notice that in these examples, the geometry itself provides us with a preferred
basis for the appropriate space. However, to make use of these preferred bases, we

need to know how to represent an arbitrary vector in a vector space V in terms of a


basis <J3 for V.

Definition
Coordinate Vector

Suppose that B = {v1, ..., vn} is a basis for the vector space V. If x ∈ V with x = x1v1 + x2v2 + ··· + xnvn, then the coordinate vector of x with respect to the basis B is

[x]_B = [x1, x2, ..., xn]  (written as a column vector)

Remarks

1. This definition makes sense because of the Unique Representation Theorem


(Theorem 4.3.1).

2. Observe that the coordinate vector [x]!B depends on the order in which the basis
vectors appear. In this book, "basis" always means ordered basis; that is, it is
always assumed that a basis is specified in the order in which the basis vectors
are listed.

3. We often say "the coordinates of x with respect to <J3" or "the <J3-coordinates


of x" instead of the coordinate vector.

EXAMPLE 1
The set B = {(1, 0), (1, 1)} is a basis for R^2. Find the coordinates of a = (3, 2) and b = (1, -2) with respect to the basis B.

Solution: For a, we must find a1 and a2 such that a1(1, 0) + a2(1, 1) = (3, 2). By inspection, we see that a solution is a1 = 1 and a2 = 2. Thus,

[a]_B = [1, 2]

Similarly, we see that a solution of b1(1, 0) + b2(1, 1) = (1, -2) is b1 = 3, b2 = -2. Hence

[b]_B = [3, -2]

Figure 4.4.3 shows R^2 with this basis and a. Notice that the use of the basis B means that the space is covered with two families of parallel coordinate lines, one with direction vector (1, 0) and the other with direction vector (1, 1). Coordinates are established relative to these two families. The axes of this new coordinate system are obviously not orthogonal to each other. Such non-orthogonal coordinate systems arise naturally in the study of some crystalline structures in materials science.
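Finding B-coordinates amounts to solving a small linear system whose columns are the basis vectors. A minimal sketch (the basis and vector below are illustrative choices of mine, solved exactly by Cramer's rule):

```python
# Illustrative sketch: B-coordinates in R^2 via Cramer's rule.
from fractions import Fraction

def coords_2d(v1, v2, x):
    """Solve a1*v1 + a2*v2 = x in R^2 exactly; v1, v2 are the basis vectors."""
    det = Fraction(v1[0] * v2[1] - v1[1] * v2[0])
    if det == 0:
        raise ValueError("v1, v2 do not form a basis")
    a1 = Fraction(x[0] * v2[1] - x[1] * v2[0]) / det
    a2 = Fraction(v1[0] * x[1] - v1[1] * x[0]) / det
    return a1, a2

# Coordinates of x = (3, 2) with respect to the basis {(1, 0), (1, 1)}.
a1, a2 = coords_2d((1, 0), (1, 1), (3, 2))
```

As a check, a1·(1, 0) + a2·(1, 1) reproduces x, which is exactly the defining equation of the coordinate vector.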

Figure 4.4.3 The basis B in R^2; the B-coordinate vector of a.

EXAMPLE2
The set '13 = {[; �], [ � �], [ � �]} is a basis for the subspace Span '13. Determine

whether the matrices A [� �]


=
-
[� �] and B = are in Span '13. If they are, determine

their '13-coordinate vector.


Solution: We are required to determine whether there are numbers u1, u2, u3 and/or v1, v2, v3 such that

UI [; �] [ � OJ [ ] [ 1]
+ Uz
1
+ U3
1
}
1
0
=
1
0
-

VI [; �] [ � �] [ � �] [� �]
+ Vz + V3 =

Since the two systems have the same coefficient matrix, augment the coefficient matrix
twice by adjoining both right-hand sides and row reduce

3 1 1 1 0 0 6
2 0 -1 4 0 1 0 -9
2 1 0 0 0 0 1 -3 -8
2 0 3 3 0 0 0 0 5

It is now clear that the system with the second matrix B is not consistent, so that B is not in the subspace spanned by the given set. On the other hand, we see that the system with the first matrix A has the unique solution u1 = 1, u2 = 1, and u3 = -3. Thus,

Note that there are only three B-coordinates because the basis has only three vectors. Also note that an immediate check is available: substituting u1 = 1, u2 = 1, and u3 = -3 into the linear combination of the basis matrices reproduces A.

EXERCISE 1
Determine the coordinate vector of the given vector with respect to the given basis B of Span B.

EXAMPLE 3 Suppose that you have written a computer program to perform certain operations with polynomials in P2. You need to include some method of inputting the polynomial you are considering. If you use the standard basis {1, x, x^2} for P2, then to input the polynomial 3 - 5x + 2x^2 you would surely write your program in such a way that you would type "3, -5, 2", the standard coordinate vector, as the input.

On the other hand, for some problems in differential equations, you might prefer the basis B = {1 - x^2, x, 1 + x^2}. To find the B-coordinates of 3 - 5x + 2x^2, we must find t1, t2, and t3 such that

t1(1 - x^2) + t2(x) + t3(1 + x^2) = 3 - 5x + 2x^2

Row reducing the corresponding augmented matrix gives

[ 1 0 1 |  3 ]     [ 1 0 0 | 1/2 ]
[ 0 1 0 | -5 ]  →  [ 0 1 0 | -5  ]
[-1 0 1 |  2 ]     [ 0 0 1 | 5/2 ]

It follows that the coordinates of 3 - 5x + 2x^2 with respect to B are

[3 - 5x + 2x^2]_B = [1/2, -5, 5/2]

Thus, if your computer program is written to work in the basis B, then you would input "0.5, -5, 2.5".

We might need to input several polynomials into the computer program written
to work in the basis 'B. In this case, we would want a much faster way of converting
standard coordinates to coordinates with respect to the basis 'B. We now develop a
method for doing this.

Theorem 1 Let B be a basis for a finite-dimensional vector space V. Then, for any x, y ∈ V and t ∈ R, we have

[tx + y]_B = t[x]_B + [y]_B

Proof: Let B = {v1, ..., vn}. Then, for any vectors x = x1v1 + ··· + xnvn and y = y1v1 + ··· + ynvn in V and any t ∈ R, we have tx + y = (tx1 + y1)v1 + ··· + (txn + yn)vn, so

[tx + y]_B = [tx1 + y1, ..., txn + yn] = t[x1, ..., xn] + [y1, ..., yn] = t[x]_B + [y]_B  ∎

Let B be a basis for an n-dimensional vector space V and let C = {w1, ..., wn} be another basis for V. Consider x ∈ V. Writing x as a linear combination of the vectors in C gives

x = x1w1 + ··· + xnwn

Taking B-coordinates gives

[x]_B = [x1w1 + ··· + xnwn]_B = x1[w1]_B + ··· + xn[wn]_B = [ [w1]_B ··· [wn]_B ] [x1, ..., xn]

Since [x1, ..., xn] = [x]_C, we see that this equation gives a formula for calculating the B-coordinates of x from the C-coordinates of x using simple matrix-vector multiplication. We call this equation the change of coordinates equation and make the following definition.

Definition
Change of Coordinates Matrix

Let B and C = {w1, ..., wn} both be bases for a vector space V. The matrix P = [ [w1]_B ··· [wn]_B ] is called the change of coordinates matrix from C-coordinates to B-coordinates and satisfies

[x]_B = P[x]_C

Of course, we could exchange the roles of B and C to find the change of coordinates matrix Q from B-coordinates to C-coordinates.

Theorem 2 Let B and C both be bases for a finite-dimensional vector space V. Let P be the change of coordinates matrix from C-coordinates to B-coordinates. Then P is invertible and P^{-1} is the change of coordinates matrix from B-coordinates to C-coordinates.

The proof is left to Problem D4.

EXAMPLE 4
Let S = {e1, e2, e3} be the standard basis for R^3 and let B = {(1, 3, -1), (2, 1, 1), (3, 4, 1)}. Find the change of coordinates matrix Q from B-coordinates to S-coordinates. Find the change of coordinates matrix P from S-coordinates to B-coordinates. Verify that PQ = I.

Solution: To find the change of coordinates matrix Q, we need to find the coordinates of the vectors in B with respect to the standard basis S. We get

Q = [ 1 2 3 ]
    [ 3 1 4 ]
    [-1 1 1 ]

To find the change of coordinates matrix P, we need to find the coordinates of the standard basis vectors with respect to the basis B. To do this, we solve the three augmented systems with coefficient matrix Q and right-hand sides e1, e2, e3. To make this easier, we row reduce the triple-augmented matrix for the system:

[ 1 2 3 | 1 0 0 ]     [ 1 0 0 |  3/5 -1/5 -1 ]
[ 3 1 4 | 0 1 0 ]  →  [ 0 1 0 |  7/5 -4/5 -1 ]
[-1 1 1 | 0 0 1 ]     [ 0 0 1 | -4/5  3/5  1 ]

Thus,

P = [  3/5 -1/5 -1 ]
    [  7/5 -4/5 -1 ]
    [ -4/5  3/5  1 ]

We now see that

PQ = [  3/5 -1/5 -1 ][ 1 2 3 ]   [ 1 0 0 ]
     [  7/5 -4/5 -1 ][ 3 1 4 ] = [ 0 1 0 ]
     [ -4/5  3/5  1 ][-1 1 1 ]   [ 0 0 1 ]
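By Theorem 2, P is the inverse of Q, so a fraction-exact Gauss-Jordan inversion of Q should reproduce P. This is a sketch in pure Python (the helper name is mine); the matrix Q is the one whose columns are the basis vectors above.

```python
# Sketch: compute P = Q^{-1} by Gauss-Jordan elimination with exact fractions.
from fractions import Fraction

def inverse(A):
    """Invert a square matrix by Gauss-Jordan elimination (exact arithmetic)."""
    n = len(A)
    # Augment A with the identity matrix.
    m = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        pivot = next(i for i in range(c, n) if m[i][c] != 0)
        m[c], m[pivot] = m[pivot], m[c]
        m[c] = [a / m[c][c] for a in m[c]]          # scale pivot row to leading 1
        for i in range(n):
            if i != c:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[c])]
    return [row[n:] for row in m]

# Q has the basis vectors as its columns.
Q = [[1, 2, 3], [3, 1, 4], [-1, 1, 1]]
P = inverse(Q)
```

Multiplying P and Q back together gives the identity, which is the verification PQ = I asked for in the example.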

EXAMPLE 5 Let B = {1 - x^2, x, 1 + x^2}. Find [a + bx + cx^2]_B.

Solution: If we find the change of coordinates matrix P from the standard basis S = {1, x, x^2} of P2 to B, then we will have

[a + bx + cx^2]_B = P[a + bx + cx^2]_S = P[a, b, c]

Hence, we just need to find the coordinates of the standard basis vectors with respect to the basis B. To do this, we solve the three systems of linear equations given by

t11(1 - x^2) + t21(x) + t31(1 + x^2) = 1
t12(1 - x^2) + t22(x) + t32(1 + x^2) = x
t13(1 - x^2) + t23(x) + t33(1 + x^2) = x^2

We row reduce the triple-augmented matrix to get

[ 1 0 1 | 1 0 0 ]     [ 1 0 0 | 1/2 0 -1/2 ]
[ 0 1 0 | 0 1 0 ]  →  [ 0 1 0 |  0  1   0  ]
[-1 0 1 | 0 0 1 ]     [ 0 0 1 | 1/2 0  1/2 ]

Hence,

[a + bx + cx^2]_B = [ 1/2 0 -1/2 ] [a]   [ (a - c)/2 ]
                    [  0  1   0  ] [b] = [     b     ]
                    [ 1/2 0  1/2 ] [c]   [ (a + c)/2 ]

EXERCISE 2
Let S = {e1, e2, e3} be the standard basis for R^3 and let B be the given basis. Find the change of coordinates matrix Q from B-coordinates to S-coordinates. Find the change of coordinates matrix P from S-coordinates to B-coordinates. Verify that PQ = I.

PROBLEMS 4.4
Practice Problems
=1
{ 2x4}, x1=2
y=1
Al In each case, check that the given vectors in B are (e) B + x2 + x4, + x + 2x2 + x3 + x4,
linearly independent (and therefore form a basis for x - x2+ x3 - + x - 5x2+ x3 - 6x4,
+ x+ 4x2+ x3 + 3x4
x y
the subspace they span). Then determine the coor­

= ml· [J}
{ }·
dinates of and with respect to B.

= i �: x= � = �i =0.
A2 (a) Verify that 2l is a basis for the

1 0 2
(a) B -
Y

=1{ 1y=

plane 2x1 - x - 2x
2 3

x= 2
3 (b) For each of the following vectors, determine
(b) B + x+ x2, + 3 x+ 2x2, 4+ x2}, whether it lies in the plane of part (a). If it does,
+ 8x+ sx2' -4+ 8x+ 4x2

[� l [�] [ � ]
find the vector's B-coordinates.

= {[� �] , [� �], [� �] } x = [� �l
-

(c) B .

(i) (ii) (iii)

=1
{ 1
-

y= [-� !]
B={[� 1 �],[� � =�]}
A3 (a) Verify that B + x2, - x+ 2x2, -1 - x+ x2}
(d) is a basis for P .
2

x=[� ! -�ly=[-� =� �]

(b) Determine the coordinates relative to


following polynomials.
:B of the
(b) :B={ [ � ;] ' [ � -�] ' [ � _ �]}'
(ii)
(i) p(x)=1
q(x)=4 - 2x+7x2 A= [� -�]
(iii) r(x)=-2 - 2x+ 3x2 AS For the given basis :B, find the change of coordi­
nates matrix to and from the standard basis of each
(c) Determine [2 -4x+ l0x2}8 and use your an­
vector space.
swers to part (b) to check that

[4 - 2x+7x2]�+[-2 - 2x+ 3x2]� (a) B = {[�] [!] .[=:]}



for
R3

for P
=[(4 - 2)+(-2 - 2)x+(7+ 3)x2]� (b) :B={ 1 - 2x+5x2, 1 - 2x2,x+x2}
= {[� =n, [� =�J, [� � ]}
2
A4 In each case, do the following.
:s
Cc) for the vec -

2 upper-triangular matrices
(i) Determine whether the given matrix A belongs
to Span :B. 2
tor space of x

(ii) Use your row reduction from (i) to explain


whether :Bis a basis for Span :B.
(iii) If :B is a basis for Span :B and A E Span :B,
determine [A]�.

(a) :B={[ � �]·[-� [


- 2
�]. � 1 0]}'
A=
[� -
�]
Homework Problems

Bl In each case, check that the given vectors in :B are (b) For each of the following vectors, determine
linearly independent (and therefore form a basis for whether it lies in the plane of part (a). If it does,
the subspace they span). Then determine the coor­ find the vector's :B-coordinates.

{ 1i :1 }· = j1 = !1 m [i1 [�1
dinates of x and y with respect to :B.
(i) (ii) (iii)

(a) :B= x • Y

B3 (a) Verify that = {UrnHm


:B = {[� -� ] ' [ � �] [� �]}' x=[� -; B
is a basis
(b)
' l for JR3.
y=[ � �]
- (b) Determine the coordinates relative to :B of the
following vectors.

m [J
(c) :B={x+x2,-x+ 3x2,1 +x-x2},x=-1 +
3x - x2, y = 3+ 2x2 (i) (ii)
(d) :B = (1 + 2x + 2x2, -3x - 3x2,- 3 - 3x},
x= 3 - 3x2, y=1+x2
(e) :B= (1+x+x3, 1+2x2+x3+x4, x2+x3+3x4},
=
x 2 - 2x+5x2 -x3 -5x4, y=-1-3x+3x2 -
2x3 -x4
B2 (a) Verify that B

plane
=
2x1 - 3x2+ 2x3= 0.
{[�] [W
• is a basis for the

(c) Determine [jJ and use your answers to part


BS In each case, do the following.
(b) to check that.
(i) Determine whether the given matrix belongs
A
to Span 13.
(ii) Use your row reduction from (i) to explain
whether 13 is a basis for Span 13.
(iii) If 13 is a basis for Span 13 and Span 13,
Ul. m. Ul.
A E
= + determine [A]2:1.
(a) 13={[� �].[-� -�]·[� -�]}.
B4 (a) Verify that 13=
for P .
2
is a basis
{1+x2,1 +x,x+x2}
A =[�� -�]
(b) Determine the coordinates relative to 13 of the (b) 13={[� �] , [� �] , [� �]}, =[ � !]
A
following polynomjals. _ _

(i) p(x) = 3 +4x+ Sx2 B6 For the given basis 13, find the change of coordi­
(ii) = q(x) 4+ Sx - 7x2 nates matrix to and from the standard basis of each
vector space.
(a) = {[i] .[_:] .[_l]} for
(iii) = r(x) 1+x+x2
(c) Determine [4 + and use your an­
Sx + 6x2]2:1
� �l
swers to part (b) to check that
[3+4x+5x2]2:1+ [1+x+ x2]2:1 (b) 13= (-1+2x2,1+x+x2,l-x-3x2)forP
2
= [(3 + 1)+ (4+ l)x+(5+ l)x2]2:1 (c) 13= {[� -�], [� �]}for the vector space of
2 x diagonal matrices
2

Conceptual Problems

D1 Suppose that B = {v1, ..., vk} is a basis for a vector space V, that C = {w1, ..., wk} is another basis for V, and that for every x ∈ V, [x]_B = [x]_C. Must it be true that vi = wi for each 1 ≤ i ≤ k? Explain or prove your conclusion.

D2 Suppose V is a vector space with basis B = {v1, v2, v3, v4}. Then C = {v3, v2, v4, v1} is also a basis of V. Find a matrix P such that P[x]_B = [x]_C.

D3 Let B = { ... } and C = { ... } be the given bases of R^2, and let L : R^2 → R^2 be the linear mapping such that [x]_B = [L(x)]_C.
(a), (b) Find L(x) for each of the given vectors x.

D4 Prove Theorem 2.

4.5 General Linear Mappings


In Chapter 3 we looked at linear mappings L : R^n → R^m and found that they can be useful in solving some problems. Since vector spaces encompass the essential properties of R^n, it makes sense that we can also define linear mappings whose domain and codomain are other vector spaces. This also turns out to be extremely useful and important in many real-world applications.

Definition
Linear Mapping

If V and W are vector spaces over R, a function L : V → W is a linear mapping if it satisfies the linearity properties

L1  L(x + y) = L(x) + L(y)
L2  L(tx) = tL(x)

for all x, y ∈ V and t ∈ R. If W = V, then L may be called a linear operator.

As before, the two properties can be combined into one statement:

L(tx + y) = tL(x) + L(y) for all x, y ∈ V, t ∈ R

Moreover, we have that

L(t1v1 + ··· + tkvk) = t1L(v1) + ··· + tkL(vk)

EXAMPLE 1
Let L : M(2, 2) → P2 be defined by L([a b; c d]) = c + (b + d)x + ax^2. Prove that L is a linear mapping.

Solution: For any [a1 b1; c1 d1], [a2 b2; c2 d2] ∈ M(2, 2) and t ∈ R, we have

L(t[a1 b1; c1 d1] + [a2 b2; c2 d2]) = L([ta1 + a2  tb1 + b2; tc1 + c2  td1 + d2])
= (tc1 + c2) + (tb1 + b2 + td1 + d2)x + (ta1 + a2)x^2
= t(c1 + (b1 + d1)x + a1x^2) + (c2 + (b2 + d2)x + a2x^2)
= tL([a1 b1; c1 d1]) + L([a2 b2; c2 d2])

So, L is linear.

EXAMPLE 2 Let M : P2 → P2 be defined by M(a0 + a1x + a2x^2) = a1 + 2a2x. Prove that M is a linear operator.

Solution: Let p(x) = a0 + a1x + a2x^2, q(x) = b0 + b1x + b2x^2, and t ∈ R. Then,

M(tp(x) + q(x)) = M((ta0 + b0) + (ta1 + b1)x + (ta2 + b2)x^2)
= (ta1 + b1) + 2(ta2 + b2)x
= t(a1 + 2a2x) + (b1 + 2b2x)
= tM(p(x)) + M(q(x))

Hence, M is linear.
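Working with coefficient lists makes a linearity check like this mechanical. A small sketch (the helper names are mine, not the textbook's): represent a0 + a1x + a2x^2 as [a0, a1, a2] and implement M as differentiation on coefficients.

```python
# Illustrative coefficient-list check that M(p) = p' is linear on quadratics.
def M(p):
    """Differentiate a polynomial given as [a0, a1, a2]: returns a1 + 2*a2*x."""
    return [p[1], 2 * p[2], 0]

def add(p, q):
    return [a + b for a, b in zip(p, q)]

def scale(t, p):
    return [t * a for a in p]

p, q, t = [3, -5, 2], [1, 4, -1], 7
# Linearity: M(t*p + q) should equal t*M(p) + M(q).
lhs = M(add(scale(t, p), q))
rhs = add(scale(t, M(p)), M(q))
```

This checks the combined linearity condition L(tx + y) = tL(x) + L(y) on one concrete choice of p, q, and t; the algebraic proof above shows it holds for all choices.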

EXERCISE 1
Let L : R^3 → M(2, 2) be defined as given. Prove that L is linear.

Remark

Since a linear mapping L : V → W is just a function, we define operations (addition, scalar multiplication, and composition) and the concept of invertibility on these more general linear mappings in the same way as we did for linear mappings L : R^n → R^m.

As we did with linear mappings in Chapter 3, it is important to consider the range


and nullspace of general linear mappings.

Definition
Range
Nullspace

Let V and W be vector spaces over R. The range of a linear mapping L : V → W is defined to be the set

Range(L) = {L(x) ∈ W | x ∈ V}

The nullspace of L is the set of all vectors in V whose image under L is the zero vector 0_W. We write

Null(L) = {x ∈ V | L(x) = 0_W}
EXAMPLE 3
Let L : M(2, 2) → P2 be the linear mapping defined by L([a b; c d]) = c + (b + d)x + ax^2. Determine whether 1 + x + x^2 is in the range of L, and if it is, determine a matrix A such that L(A) = 1 + x + x^2.

Solution: We want to find a, b, c, and d such that

1 + x + x^2 = L([a b; c d]) = c + (b + d)x + ax^2

Hence, we must have c = 1, b + d = 1, and a = 1. Observe that we get a system of linear equations that is consistent with infinitely many solutions. Thus, 1 + x + x^2 ∈ Range(L), and one choice of A such that L(A) = 1 + x + x^2 is A = [1 1; 1 0].

EXAMPLE 4
Let L : P2 → R^3 be the linear mapping defined by

L(a + bx + cx^2) = (a - b, b - c, c - a)

Determine whether (1, 1, 1) is in the range of L.

Solution: We want to find a, b, and c such that

L(a + bx + cx^2) = (1, 1, 1)

This gives us the system of linear equations a - b = 1, b - c = 1, and c - a = 1. Row reducing the corresponding augmented matrix gives

[ 1 -1  0 | 1 ]     [ 1 -1  0 | 1 ]
[ 0  1 -1 | 1 ]  →  [ 0  1 -1 | 1 ]
[-1  0  1 | 1 ]     [ 0  0  0 | 3 ]

Hence, the system is inconsistent, so (1, 1, 1) is not in the range of L.

Theorem 1 Let V and W be vector spaces and let L : V → W be a linear mapping. Then
(1) L(0_V) = 0_W
(2) Null(L) is a subspace of V
(3) Range(L) is a subspace of W

The proof is left as Problem D1.


As with any other subspace, it is often useful to find a basis for the nullspace and
range of a linear mapping.

EXAMPLE 5
Determine a basis for the range and nullspace of the linear mapping L : P1 → R^3 defined by L(a + bx) = (a, a - 2b, 0).

Solution: If a + bx ∈ Null(L), then we have

(a, a - 2b, 0) = L(a + bx) = (0, 0, 0)

Hence, a = 0 and a - 2b = 0, which implies that b = 0. Thus, the only polynomial in the nullspace of L is the zero polynomial. That is, Null(L) = {0}, and so a basis for Null(L) is the empty set.

Any vector y in the range of L has the form

y = (a, a - 2b, 0) = a(1, 1, 0) + b(0, -2, 0)

Thus, Range(L) = Span C, where C = {(1, 1, 0), (0, -2, 0)}. Moreover, C is clearly linearly independent. Hence C is a basis for the range of L.



EXAMPLE 6 Determine a basis for the range and nullspace of the linear mapping L : M(2, 2) → P2 defined by L([a b; c d]) = (b + c) + (c - d)x^2.

Solution: If [a b; c d] ∈ Null(L), then 0 = L([a b; c d]) = (b + c) + (c - d)x^2, so b + c = 0 and c - d = 0. Thus, b = -c and d = c, so every matrix in the nullspace of L has the form

[a  -c; c  c] = a[1 0; 0 0] + c[0 -1; 1 1]

Thus, B = {[1 0; 0 0], [0 -1; 1 1]} is a linearly independent spanning set for Null(L) and hence is a basis for Null(L).

Any polynomial in the range of L has the form (b + c) + (c - d)x^2. Hence Range(L) = Span{1, x^2}, since we can get any polynomial of the form a0 + a2x^2 by taking b = a0, c = 0, and d = -a2. Also, {1, x^2} is clearly linearly independent and hence a basis for Range(L).

EXERCISE 2 Determine a basis for the range and nullspace of the given linear mapping L : R^3 → M(2, 2).

Observe that in each of these examples, the dimension of the range of L plus the
dimension of the nullspace of L equals the dimension of the domain of L. This result
reminds us of the Rank Theorem (Theorem 3.4.8). Before we prove the similar result
for general linear mappings, we make some definitions to make this look even more
like the Rank Theorem.
Definition
Rank of a Linear Mapping

Let V and W be vector spaces over R. The rank of a linear mapping L : V → W is the dimension of the range of L:

rank(L) = dim(Range(L))

Definition
Nullity of a Linear Mapping

Let V and W be vector spaces over R. The nullity of a linear mapping L : V → W is the dimension of the nullspace of L:

nullity(L) = dim(Null(L))



Theorem 2 [Rank-Nullity Theorem]

Let V and W be vector spaces over R with dim V = n, and let L : V → W be a linear mapping. Then,

rank(L) + nullity(L) = n

Proof: The idea of the proof is to assume that a basis for the nullspace of L contains k vectors and show that we can then construct a basis for the range of L that contains n - k vectors.

Let B = {v1, ..., vk} be a basis for Null(L), so that nullity(L) = k. Since V is n-dimensional, we can use the procedure in Section 4.3: there exist vectors u_{k+1}, ..., u_n such that {v1, ..., vk, u_{k+1}, ..., u_n} is a basis for V.

Now consider any vector w in the range of L. Then w = L(x) for some x ∈ V. But any x ∈ V can be written as a linear combination of the vectors in the basis {v1, ..., vk, u_{k+1}, ..., u_n}, so there exist t1, ..., tn such that x = t1v1 + ··· + tkvk + t_{k+1}u_{k+1} + ··· + t_nu_n. Then,

w = L(t1v1 + ··· + tkvk + t_{k+1}u_{k+1} + ··· + t_nu_n)
  = t1L(v1) + ··· + tkL(vk) + t_{k+1}L(u_{k+1}) + ··· + t_nL(u_n)

But each vi is in the nullspace of L, so L(vi) = 0, and so we have

w = t_{k+1}L(u_{k+1}) + ··· + t_nL(u_n)

Therefore, any w ∈ Range(L) can be expressed as a linear combination of the vectors in the set C = {L(u_{k+1}), ..., L(u_n)}. Thus, C is a spanning set for Range(L). Is it linearly independent? We consider

t_{k+1}L(u_{k+1}) + ··· + t_nL(u_n) = 0

By the linearity of L, this is equivalent to

L(t_{k+1}u_{k+1} + ··· + t_nu_n) = 0

If this is true, then t_{k+1}u_{k+1} + ··· + t_nu_n is a vector in the nullspace of L. Hence, for some d1, ..., dk, we have

t_{k+1}u_{k+1} + ··· + t_nu_n = d1v1 + ··· + dkvk

But this is impossible unless all ti and di are zero, because {v1, ..., vk, u_{k+1}, ..., u_n} is a basis for V and hence linearly independent.

It follows that C is a linearly independent spanning set for Range(L). Hence it is a basis for Range(L) containing n - k vectors. Thus, rank(L) = n - k and

rank(L) + nullity(L) = (n - k) + k = n

as required. ∎

EXAMPLE 7 Determine the rank and nullity of L : M(2, 2) → P3 defined by

L([a b; c d]) = cx + (a + b)x^3

and verify the Rank-Nullity Theorem.

Solution: If [a b; c d] ∈ Null(L), then

0 = L([a b; c d]) = cx + (a + b)x^3

Hence, we have a + b = 0 and c = 0, and so every matrix in Null(L) has the form

[a  -a; 0  d] = a[1 -1; 0 0] + d[0 0; 0 1]

Therefore, Null(L) = Span{[1 -1; 0 0], [0 0; 0 1]}. Moreover, we see that the set B = {[1 -1; 0 0], [0 0; 0 1]} is linearly independent, so B is a basis for Null(L) and hence nullity(L) = 2.

Clearly a basis for the range of L is {x, x^3}, since this set is linearly independent and spans the range. Thus rank(L) = 2.

Then, as predicted by the Rank-Nullity Theorem, we have

rank(L) + nullity(L) = 2 + 2 = 4 = dim M(2, 2)
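This computation can be double-checked by encoding L as a matrix with respect to the standard bases. The sketch below is illustrative; the row-wise encoding of a matrix in M(2, 2) as the 4-tuple (a, b, c, d) is my own choice of coordinates.

```python
# Sketch: verify rank + nullity = dim of the domain for L([a b; c d]) = c*x + (a+b)*x^3.
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of rows) via exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Matrix of L acting on (a, b, c, d); rows give the coefficients of 1, x, x^2, x^3.
L = [[0, 0, 0, 0],   # constant term: 0
     [0, 0, 1, 0],   # x:   c
     [0, 0, 0, 0],   # x^2: 0
     [1, 1, 0, 0]]   # x^3: a + b
n = 4                # dim M(2, 2)
rank_L = rank(L)
nullity_L = n - rank_L
```

The matrix has exactly two independent rows, matching rank(L) = 2 and nullity(L) = 2 found above.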

PROBLEMS 4.5
Practice Problems

A1 Prove that the following mappings are linear.
(a) L : ℝ³ → ℝ² defined by L(x1, x2, x3) = (x1 + x2, x1 + x2 + x3)
(b) L : ℝ³ → P1 defined by L([a; b; c]) = (a + b) + (a + b + c)x
(c) tr : M(2,2) → ℝ defined by tr([a b; c d]) = a + d (Taking the trace of a matrix is a linear operation.)
(d) T : P3 → M(2,2) defined by T(a + bx + cx² + dx³) = [a b; c d]

A2 Determine which of the following mappings are linear.
(a) det : M(2,2) → ℝ defined by det([a b; c d]) = ad − bc
(b) L : P2 → P2 defined by L(a + bx + cx²) = (a − b) + (b + c)x + (a + b + c)x²
(c) T : ℝ² → M(2,2) defined by T([x1; x2]) = [x1 x2; x1 x2²]
(d) M : M(2,2) → M(2,2) defined by M([a b; c d]) = [d c; b a]

A3 For each of the following, determine whether the given vector y is in the range of the given linear mapping L : V → W. If it is, find a vector x ∈ V such that L(x) = y.
(a) L : ℝ³ → ℝ⁴ defined by L([x1; x2; x3]) = [x1 + x3; 0; 0; x2], y = [2; 0; 0; 3]
(b) L : P2 → M(2,2) defined by L(a + bx + cx²) = [a b; c c], y = [1 2; 2 2]
(c) L : P2 → P1 defined by L(a + bx + cx²) = (b + c) + (−b − c)x, y = 1 + x
(d) L : ℝ⁴ → M(2,2) defined by L([x1; x2; x3; x4]) = [x2 − 2x3 − 2x4  −2x1; x2 − x4  x1], y = [−2 −1; 1 −1]

A4 Find a basis for the range and nullspace of the following linear mappings and verify the Rank-Nullity Theorem.
(a) L : ℝ³ → ℝ² defined by L(x1, x2, x3) = (x1 + x2, x1 + x2 + x3)
(b) L : ℝ³ → P1 defined by L([a; b; c]) = (a + b) + (a + b + c)x
(c) tr : M(2,2) → ℝ defined by tr([a b; c d]) = a + d
(d) T : P3 → M(2,2) defined by T(a + bx + cx² + dx³) = [a b; c d]

Homework Problems

B1 Prove that the following mappings are linear.
(a) L : P2 → ℝ³ defined by L(a + bx + cx²) = [a; b; c]
(b) L : P2 → P1 defined by L(a + bx + cx²) = b + 2cx
(c) L : M(2,2) → ℝ defined by L([a b; c d]) = 0
(d) Let D be the subspace of M(2,2) of diagonal matrices; L : D → P2 defined by L([a 0; 0 b]) = a + (a + b)x + bx²
(e) L : M(2,2) → M(2,2) defined by L([a b; c d]) = [a − b  b − c; c − d  d − a]
(f) L : M(2,2) → P2 defined by L([a b; c d]) = (a + b + c + d)x²
(g) T : M(2,2) → M(2,2) defined by T(A) = Aᵀ

B2 Determine which of the following mappings are linear.
(a) L : P3 → ℝ defined by L(a + bx + cx² + dx³) = abcd
(b) M : P2 → ℝ defined by M(a + bx + cx²) = b² − 4ac
(c) N : ℝ³ → M(2,2) defined by N([x1; x2; x3]) = [x1 x2; x3 1]
(d) L : M(2,2) → M(2,2) defined by L(A) = [1 2; 2 1]A
(e) T : M(2,2) → P2 defined by T([a b; c d]) = a + (ab)x + (abc)x²

B3 For each of the following, determine whether the given vector y is in the range of the given linear mapping L : V → W. If it is, find a vector x ∈ V such that L(x) = y.
(a) L : ℝ³ → ℝ³ defined by L([x1; x2; x3]) = [x1 + x2 + x3; x1 − x2 + 2x3; −2x1 + x2 − 2x3], y = [2; 1; −1]
(b) L : P2 → P2 defined by L(a + bx + cx²) = (−a − 2b) + (2a + c)x + (−2a + b − 2c)x², y = 1 + x − x²
(c) L : P2 → ℝ³ defined by L(a + bx + cx²) = [a + b; b + c; a + c], y = [1; 3; −1]
(d) L : ℝ² → M(2,2) defined by L([x1; x2]) = [x1  x1 + x2; x2  x1], y = [1 2; 1 1]
(e) Let T denote the subspace of 2 × 2 upper-triangular matrices; L : T → P2 defined by L([a b; 0 c]) = (a − c) + (a − 2b)x + (−2a + b + c)x², y = 2 + x − x²
(f) L : P2 → M(2,2) defined by L(a + bx + cx²) = [−a − 2c  2b − c; −2a + 2c  −2b − c], y = [−2 2; 0 −2]

B4 Find a basis for the range and nullspace of the following linear mappings and verify the Rank-Nullity Theorem.
(a) L : P2 → ℝ³ defined by L(a + bx + cx²) = [a; b; c]
(b) L : P2 → P1 defined by L(a + bx + cx²) = b + 2cx
(c) L : M(2,2) → ℝ defined by L([a b; c d]) = 0
(d) Let D be the subspace of M(2,2) of diagonal matrices; L : D → P2 defined by L([a 0; 0 b]) = a + (a + b)x + bx²
(e) L : M(2,2) → M(2,2) defined by L([a b; c d]) = [a − b  b − c; c − d  d − a]
(f) L : M(2,2) → P2 defined by L([a b; c d]) = (a + b + c + d)x²
(g) T : M(2,2) → M(2,2) defined by T(A) = Aᵀ

Conceptual Problems

D1 Prove Theorem 1.

D2 For the given vector spaces V and W, invent a linear mapping L : V → W that satisfies the given properties.
(a) V = ℝ³, W = P2; L([1; 0; 0]) = x², L([0; 1; 0]) = 2x², L([0; 0; 1]) = 1 + x + x²
(b) V = P2, W = M(2,2); Null(L) = {0} and Range(L) = Span{[1 0; 0 0], [0 1; 0 0], [0 0; 1 0]}
(c) V = M(2,2), W = ℝ⁴; nullity(L) = 2, rank(L) = 2, and L([1 0; 0 1]) = [1; 0; 0; 0]

D3 (a) Let V and W be vector spaces and L : V → W be a linear mapping. Prove that if {L(v1), ..., L(vk)} is a linearly independent set in W, then {v1, ..., vk} is a linearly independent set in V.
(b) Give an example of a linear mapping L : V → W, where {v1, ..., vk} is linearly independent in V but {L(v1), ..., L(vk)} is linearly dependent in W.

D4 Let V and W be n-dimensional vector spaces and let L : V → W be a linear mapping. Prove that Range(L) = W if and only if Null(L) = {0}.

D5 Let U, V, and W be finite-dimensional vector spaces over ℝ and let L : V → U and M : U → W be linear mappings.
(a) Prove that rank(M ∘ L) ≤ rank(M).
(b) Prove that rank(M ∘ L) ≤ rank(L).
(c) Construct an example such that the rank of the composition is strictly less than the maximum of the ranks.

D6 Let U and V be finite-dimensional vector spaces, let L : V → U be a linear mapping, and let M : U → U be a linear operator such that Null(M) = {0_U}. Prove that rank(M ∘ L) = rank(L).

D7 Let S denote the set of all infinite sequences of real numbers. A typical element of S is x = (x1, x2, ..., xn, ...). Define addition x + y and scalar multiplication tx in the obvious way. Then S is a vector space. Define the left shift L : S → S by L(x1, x2, x3, ...) = (x2, x3, x4, ...) and the right shift R : S → S by R(x1, x2, x3, ...) = (0, x1, x2, x3, ...). Then it is easy to verify that L and R are linear. Check that (L ∘ R)(x) = x but that (R ∘ L)(x) ≠ x. L has a right inverse, but it does not have a left inverse. It is important in this example that S is infinite-dimensional.

4.6 Matrix of a Linear Mapping

The Matrix of L with Respect to the Basis B

In Section 3.2, we defined the standard matrix of a linear transformation L : ℝⁿ → ℝᵐ to be the matrix whose columns are the images of the standard basis vectors S = {e1, ..., en} of ℝⁿ under L. We now look at the matrix of L with respect to other bases. We shall use the subscript S to distinguish the standard matrix of L:

[L]_S = [ L(e1) · · · L(en) ]

The standard coordinates of the image under L of a vector x ∈ ℝⁿ are given by the equation

[L(x)]_S = [L]_S [x]_S    (4.2)

These equations are exactly the same as those in Theorem 3.2.3 except that the notation is fancier so that we can compare this standard description with a description with respect to some other basis. We follow the same method as in the proof of Theorem 3.2.3 in defining the matrix of L with respect to another basis B of ℝⁿ.
Let B = {v1, ..., vn} be a basis for ℝⁿ and let L : ℝⁿ → ℝⁿ be a linear operator. Then, for any x ∈ ℝⁿ, we can write x = b1v1 + · · · + bnvn. Therefore,

L(x) = b1L(v1) + · · · + bnL(vn)

Taking B-coordinates of both sides gives

[L(x)]_B = [b1L(v1) + · · · + bnL(vn)]_B
         = b1[L(v1)]_B + · · · + bn[L(vn)]_B
         = [ [L(v1)]_B · · · [L(vn)]_B ] [b1; ...; bn]

Observe that [b1; ...; bn] = [x]_B, so this equation is the B-coordinates version of equation (4.2), where the matrix [ [L(v1)]_B · · · [L(vn)]_B ] is taking the place of the standard matrix of L.

Definition (Matrix of a Linear Operator)
Suppose that B = {v1, ..., vn} is any basis for ℝⁿ and that L : ℝⁿ → ℝⁿ is a linear operator. Define the matrix of the linear operator L with respect to the basis B to be the matrix

[L]_B = [ [L(v1)]_B · · · [L(vn)]_B ]

We then have that for any x ∈ ℝⁿ,

[L(x)]_B = [L]_B [x]_B

Note that the columns of [L]_B are the B-coordinate vectors of the images of the B-basis vectors under L. The pattern is exactly the same as before, except that everything is done in terms of the basis B. It is important to emphasize again that by "basis," we always mean ordered basis; the order of the basis elements determines the order of the columns of the matrix [L]_B.

EXAMPLE 1
Let L : ℝ³ → ℝ³ be defined by L(x1, x2, x3) = (x1 + 2x2 − 2x3, −x2 + 2x3, x1 + 2x2) and let B = {[2; −1; −1], [1; 1; 1], [0; 0; −1]}. Find the B-matrix of L and use it to determine [L(x)]_B, where [x]_B = [1; 2; 3].
Solution: By definition, the columns of [L]_B are the B-coordinates of the images of the vectors in B under L. So, we find these images and write them as a linear combination of the vectors in B:

L(2, −1, −1) = [2; −1; 0] = (1)[2; −1; −1] + (0)[1; 1; 1] + (−1)[0; 0; −1]
L(1, 1, 1) = [1; 1; 3] = (0)[2; −1; −1] + (1)[1; 1; 1] + (−2)[0; 0; −1]
L(0, 0, −1) = [2; −2; 0] = (4/3)[2; −1; −1] + (−2/3)[1; 1; 1] + (−2)[0; 0; −1]

Hence,

[L]_B = [ [L(2,−1,−1)]_B  [L(1,1,1)]_B  [L(0,0,−1)]_B ] = [1 0 4/3; 0 1 −2/3; −1 −2 −2]

Thus,

[L(x)]_B = [1 0 4/3; 0 1 −2/3; −1 −2 −2][1; 2; 3] = [5; 0; −11]

We can verify that this answer is correct by calculating L(x) in two ways. First, if [x]_B = [1; 2; 3], then

x = 1[2; −1; −1] + 2[1; 1; 1] + 3[0; 0; −1] = [4; 1; −2]

EXAMPLE 1 (continued)
and by definition of the mapping, we have L(x) = L(4, 1, −2) = (10, −5, 6). Second, if [L(x)]_B = [5; 0; −11], then

L(x) = 5[2; −1; −1] + 0[1; 1; 1] − 11[0; 0; −1] = [10; −5; 6]
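Each column of [L]_B in Example 1 is the solution of a small linear system: write L(vi) as a combination of v1, v2, v3. The following sketch is our own illustration (the helper `solve` is an assumption, not from the text) and uses exact fractions so the entry 4/3 comes out exactly.

```python
from fractions import Fraction

def solve(A, b):
    """Solve A c = b for an invertible A by Gauss-Jordan elimination."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for c in range(n):
        piv = next(i for i in range(c, n) if M[i][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for i in range(n):
            if i != c and M[i][c] != 0:
                M[i] = [a - M[i][c] * t for a, t in zip(M[i], M[c])]
    return [row[n] for row in M]

def L(v):  # the mapping from Example 1
    x1, x2, x3 = v
    return [x1 + 2*x2 - 2*x3, -x2 + 2*x3, x1 + 2*x2]

B = [[2, -1, -1], [1, 1, 1], [0, 0, -1]]            # v1, v2, v3
P = [[B[j][i] for j in range(3)] for i in range(3)]  # columns are v1, v2, v3

cols = [solve(P, L(v)) for v in B]                   # B-coordinates of L(v1), L(v2), L(v3)
LB = [[cols[j][i] for j in range(3)] for i in range(3)]

xB = [1, 2, 3]
LxB = [sum(LB[i][j] * xB[j] for j in range(3)) for i in range(3)]
print(LB)   # rows [1, 0, 4/3], [0, 1, -2/3], [-1, -2, -2]
print(LxB)  # [5, 0, -11]
```

Running the sketch reproduces both the B-matrix and the coordinate vector [5; 0; −11] computed by hand above.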

EXAMPLE 2
Let v = [3; 4]. In Example 3.2.5, the standard matrix of the linear operator proj_v : ℝ² → ℝ² was found to be

[proj_v]_S = [9/25 12/25; 12/25 16/25]

Find the matrix of proj_v with respect to a basis that shows the geometry of the transformation more clearly.
Solution: For this linear transformation, it is natural to use a basis for ℝ² consisting of the vector v, which is the direction vector for the projection, and a second vector orthogonal to v, say w = [−4; 3]. Then, with B = {v, w}, by geometry,

proj_v v = proj_v [3; 4] = [3; 4] = 1v + 0w
proj_v w = proj_v [−4; 3] = [0; 0] = 0v + 0w

Hence, [proj_v v]_B = [1; 0] and [proj_v w]_B = [0; 0]. Thus,

[proj_v]_B = [1 0; 0 0]

We now consider proj_v x for any x ∈ ℝ². We have

[proj_v x]_B = [proj_v]_B [x]_B = [1 0; 0 0][b1; b2] = [b1; 0]

In terms of B-coordinates, proj_v is described as the linear mapping that sends [b1; b2] to [b1; 0]. This simple geometrical description is obtained when we use a basis B that is adapted to the geometry of the transformation (see Figure 4.6.4). This example will be discussed further below.

Figure 4.6.4 The geometry of proj_v.

Of course, we want to generalize this to any linear operator L : V → V on a vector space V with respect to any basis B for V. To do this, we repeat our argument above exactly.
Let B = {v1, ..., vn} be a basis for a vector space V and let L : V → V be a linear operator. Then, for any x ∈ V, we can write x = b1v1 + · · · + bnvn. Therefore,

L(x) = b1L(v1) + · · · + bnL(vn)

Taking B-coordinates of both sides gives

[L(x)]_B = [b1L(v1) + · · · + bnL(vn)]_B
         = b1[L(v1)]_B + · · · + bn[L(vn)]_B
         = [ [L(v1)]_B · · · [L(vn)]_B ] [b1; ...; bn]

We make the following definition.

Definition (Matrix of a Linear Operator)
Suppose that B = {v1, ..., vn} is any basis for a vector space V and that L : V → V is a linear operator. Define the matrix of the linear operator L with respect to the basis B to be the matrix

[L]_B = [ [L(v1)]_B · · · [L(vn)]_B ]

Then for any x ∈ V,

[L(x)]_B = [L]_B [x]_B

EXAMPLE 3
Let L : P2 → P2 be defined by L(a + bx + cx²) = (a + b) + bx + (a + b + c)x². Find the matrix of L with respect to the basis B = {1, x, x²}.
Solution: We have

L(1) = 1 + x²
L(x) = 1 + x + x²
L(x²) = x²

Hence,

[L]_B = [1 1 0; 0 1 0; 1 1 1]

We can check our answer by observing that [L(a + bx + cx²)]_B = [a + b; b; a + b + c] and

[L]_B [a + bx + cx²]_B = [1 1 0; 0 1 0; 1 1 1][a; b; c] = [a + b; b; a + b + c]

EXAMPLE 4
Let L : P2 → P2 be defined by L(a + bx + cx²) = (a + b) + bx + (a + b + c)x². Find the matrix of L with respect to the basis B = {1 + x + x², 1 − x, 2}.
Solution: We have

L(1 + x + x²) = 2 + x + 3x² = (3)(1 + x + x²) + (2)(1 − x) + (−3/2)(2)
L(1 − x) = 0 + (−1)x + 0x² = (0)(1 + x + x²) + (1)(1 − x) + (−1/2)(2)
L(2) = 2 + 2x² = (2)(1 + x + x²) + (2)(1 − x) + (−1)(2)

Hence,

[L]_B = [3 0 2; 2 1 2; −3/2 −1/2 −1]
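The same system-solving idea works in P2 once each polynomial a + bx + cx² is stored by its coefficient vector (a, b, c). A sketch of Example 4 in that encoding (our own illustration; the helper `solve` is an assumption):

```python
from fractions import Fraction

def solve(A, b):
    """Solve A c = b for an invertible A by Gauss-Jordan elimination."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for c in range(n):
        piv = next(i for i in range(c, n) if M[i][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for i in range(n):
            if i != c and M[i][c] != 0:
                M[i] = [a - M[i][c] * t for a, t in zip(M[i], M[c])]
    return [row[n] for row in M]

def L(p):  # L(a + bx + cx^2) = (a+b) + bx + (a+b+c)x^2
    a, b, c = p
    return [a + b, b, a + b + c]

basis = [[1, 1, 1], [1, -1, 0], [2, 0, 0]]           # 1 + x + x^2, 1 - x, 2
P = [[basis[j][i] for j in range(3)] for i in range(3)]

cols = [solve(P, L(u)) for u in basis]
LB = [[cols[j][i] for j in range(3)] for i in range(3)]
print(LB)  # rows [3, 0, 2], [2, 1, 2], [-3/2, -1/2, -1]
```

The printed rows match the B-matrix found by hand, including the fractional last row.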

EXERCISE 1
Let L : M(2,2) → M(2,2) be defined by L([a b; c d]) = [a + b  a − b; c  a + b + d]. Find the matrix of L with respect to the basis B = {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]}.

Observe that in each of the cases above, we have used the same basis for the domain and codomain of the linear mapping L. To make this as general as possible, we would like to define the matrix C[L]_B of a linear mapping L : V → W, where B is a basis for the vector space V and C is a basis for the vector space W. This is left as Problem D3.

Change of Coordinates and Linear Mappings

In Example 2, we used special geometrical properties of the linear transformation and of the chosen basis B to determine the B-coordinate vectors that make up the B-matrix [L]_B of the linear transformation L : ℝ² → ℝ². In some applications of these ideas, the geometry does not provide such a simple way of determining [L]_B. We need a general method for determining [L]_B, given the standard matrix [L]_S of a linear operator L : ℝⁿ → ℝⁿ and a new basis B.
Let L : ℝⁿ → ℝⁿ be a linear operator. Let S denote the standard basis for ℝⁿ and let B be any other basis for ℝⁿ. Denote the change of coordinates matrix from B to S by P, so that we have

P[x]_B = [x]_S and [x]_B = P⁻¹[x]_S

If we apply the change of coordinates equation to the vector L(x) (which is in ℝⁿ), we get

[L(x)]_B = P⁻¹[L(x)]_S

Substitute [L(x)]_B = [L]_B[x]_B and [L(x)]_S = [L]_S[x]_S to get

[L]_B[x]_B = P⁻¹[L]_S[x]_S

But, P[x]_B = [x]_S, so we have

[L]_B[x]_B = P⁻¹[L]_S P[x]_B

Since this is true for every [x]_B, we get, by Theorem 3.1.4, that

[L]_B = P⁻¹[L]_S P

Thus, we now have a method of determining the B-matrix of L, given the standard matrix of L and a new basis B.
We shall first apply this change of basis method to Example 2 to make sure that things work out as we expect.

EXAMPLE 5
Let v = [3; 4]. In Example 2, we determined the matrix of proj_v with respect to a geometrically adapted basis B. Let us verify that the change of basis method just described does transform the standard matrix [proj_v]_S to the B-matrix [proj_v]_B.
The matrix [proj_v]_S = [9/25 12/25; 12/25 16/25]. The basis B = {[3; 4], [−4; 3]}, so the change of coordinates matrix from B to S is P = [3 −4; 4 3]. The inverse is found to be P⁻¹ = [3/25 4/25; −4/25 3/25]. Hence, the B-matrix of proj_v is given by

P⁻¹[proj_v]_S P = [3/25 4/25; −4/25 3/25][9/25 12/25; 12/25 16/25][3 −4; 4 3] = [1 0; 0 0]

Thus, we obtain exactly the same B-matrix [proj_v]_B as we obtained by the earlier geometric argument.

EXAMPLE 5 (continued)
To make sure we understand exactly what this means, let us calculate the B-coordinates of the image of the vector x = [5; 2] under proj_v. We can do this in two ways.
Method 1. Use the fact that [proj_v x]_B = [proj_v]_B[x]_B. We need the B-coordinates of x:

[x]_B = P⁻¹[5; 2] = [3/25 4/25; −4/25 3/25][5; 2] = [23/25; −14/25]

Hence,

[proj_v x]_B = [proj_v]_B[x]_B = [1 0; 0 0][23/25; −14/25] = [23/25; 0]

Method 2. Use the fact that [proj_v x]_B = P⁻¹[proj_v x]_S:

[proj_v x]_S = [proj_v]_S[5; 2] = [9/25 12/25; 12/25 16/25][5; 2] = [69/25; 92/25]

Therefore,

[proj_v x]_B = [3/25 4/25; −4/25 3/25][69/25; 92/25] = [23/25; 0]

The calculation is probably slightly easier if we use the first method, but that really is not the point. What is really important is that it is easy to get a geometrical understanding of what happens to vectors if you multiply by [1 0; 0 0] (the B-matrix); it is much more difficult to understand what happens if you multiply by [9/25 12/25; 12/25 16/25] (the standard matrix). Using a non-standard basis may make it much easier to understand the geometry of a linear transformation.
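The whole change-of-basis check in Example 5 amounts to a few matrix products. A sketch in exact fractions (our own illustration; the helper `matmul` is an assumption):

```python
from fractions import Fraction as F

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

projS = [[F(9, 25), F(12, 25)], [F(12, 25), F(16, 25)]]  # standard matrix of proj_v
P     = [[F(3), F(-4)], [F(4), F(3)]]                     # columns are v and w
Pinv  = [[F(3, 25), F(4, 25)], [F(-4, 25), F(3, 25)]]

projB = matmul(Pinv, matmul(projS, P))   # should equal the geometric B-matrix [1 0; 0 0]
xB = matmul(Pinv, [[F(5)], [F(2)]])      # B-coordinates of x = (5, 2)
print(projB, xB)
```

The product P⁻¹[proj_v]_S P collapses to [1 0; 0 0] and xB comes out as [23/25; −14/25], matching the hand computation.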

EXAMPLE 6
Let L be the linear mapping with standard matrix A = [4 2; 3 5]. Let B = {[1; 1], [−1; 3]} be a basis for ℝ². Find the matrix of L with respect to the basis B.
Solution: The change of coordinates matrix P is P = [1 −1; 1 3], and we have P⁻¹ = (1/4)[3 1; −1 1]. It follows that the B-matrix of L is

[L]_B = P⁻¹AP = (1/4)[3 1; −1 1][4 2; 3 5][1 −1; 1 3] = [13/2 9/2; 1/2 5/2]
EXAMPLE 7
Let L be the linear mapping with standard matrix A = [4 −2 2; 7 −5 2; 7 −7 4]. Let B be the basis B = {[1; 1; 0], [0; 1; 1], [1; 1; 1]}. Determine the matrix of L with respect to the basis B.
Solution: The change of coordinates matrix P is P = [1 0 1; 1 1 1; 0 1 1], and we have P⁻¹ = [0 1 −1; −1 1 0; 1 −1 1]. Thus, the B-matrix of the mapping L is

[L]_B = P⁻¹AP = [2 0 0; 0 −3 0; 0 0 4]

Observe that the resulting matrix is diagonal. What does this mean in terms of the geometry of the linear transformation? From the definition of [L]_B and the definition of B-coordinates of a vector, we see that the linear transformation stretches the first basis vector [1; 1; 0] by a factor of 2, it reflects (because of the minus sign) the second basis vector [0; 1; 1] in the origin and stretches it by a factor of 3, and it stretches the third basis vector [1; 1; 1] by a factor of 4. This gives a very clear geometrical picture of how the linear transformation maps vectors. This picture is not obvious from looking at the standard matrix A.

At this point it is natural to ask whether for any linear mapping L : ℝⁿ → ℝⁿ there exists a basis B of ℝⁿ such that the B-matrix of L is in diagonal form, and how we can find such a basis if it exists.
The answers to these questions are found in Chapter 6. However, in order to deal with these questions, one more computational tool is needed, the determinant, which is discussed in Chapter 5.
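A diagonal B-matrix means exactly that each basis vector is mapped to a multiple of itself. That property is easy to spot-check directly. The sketch below is our own illustration with its own hypothetical 3×3 matrix and basis (the helper `matvec` and all numbers here are assumptions of the sketch, not taken from the text):

```python
def matvec(A, v):
    """Multiply a matrix (list of rows) by a column vector."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

# Hypothetical standard matrix and a basis of vectors that it merely scales.
A = [[4, -2, 2], [7, -5, 2], [7, -7, 4]]
basis  = [[1, 1, 0], [0, 1, 1], [1, 1, 1]]
scales = [2, -3, 4]

for v, lam in zip(basis, scales):
    assert matvec(A, v) == [lam * x for x in v]
print("B-matrix is diagonal with entries", scales)
```

Whenever every basis vector passes a check like this, the B-matrix is diagonal with the scale factors on the diagonal; finding such vectors systematically is the eigenvector problem taken up in Chapter 6.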

PROBLEMS 4.6
Practice Problems

A1 Determine the matrix of the linear mapping L with respect to the basis B in the following cases. Determine [L(x)]_B for the given [x]_B.
(a) In ℝ², B = {v1, v2} and L(v1) = v2, L(v2) = 2v1 − v2; [x]_B = [3; −5]
(b) In ℝ³, B = {v1, v2, v3} and L(v1) = 2v1 − v3, L(v2) = 2v2 − v3, L(v3) = 4v2 + 5v3; [x]_B = [−1; 1; 2]

A2 Consider the basis B = {[1; 1], [−1; 2]} of ℝ². In each of the following cases, assume that L is a linear mapping and determine [L]_B.
(a) L(1, 1) = (−3, −3) and L(−1, 2) = (−4, 8)
(b) L(1, 1) = (−1, 2) and L(−1, 2) = (2, 2)

A3 Consider the basis B = {v1, v2, v3} = {[1; 0; 1], [0; 1; 1], [1; 1; 0]} of ℝ³. In each of the following cases, assume that L is a linear mapping. Determine [L(v1)]_B, [L(v2)]_B, and [L(v3)]_B and hence determine [L]_B.
(a) L(v1) = [2; 0; 2], L(v2) = [0; 0; 0], L(v3) = [1; 1; 0]
(b) L(v1) = [1; 1; 2], L(v2) = [1; 2; 1], L(v3) = [2; 1; 1]
(c) L(v1) = [0; 1; 1], L(v2) = [1; 0; 1], L(v3) = [1; 1; 0]

A4 For each of the following linear transformations, determine a geometrically natural basis B (as in Examples 2 and 3) and determine the B-matrix of the transformation.
(a) refl_(1,−2)
(b) proj_(2,1,−1)
(c) refl_(−1,−1,1)

A5 (a) Find the coordinates of [1; 2; 4] with respect to the basis B = {[1; −1; 0], [1; 0; 1], [0; 1; 2]} in ℝ³.
(b) Suppose that L : ℝ³ → ℝ³ is a linear transformation such that L(1, −1, 0) = (0, 1, 1), L(1, 0, 1) = (2, −2, 0), and L(0, 1, 2) = (1, 1, 2). Determine the B-matrix of L.
(c) Use parts (a) and (b) to determine L(1, 2, 4).

A6 (a) Find the coordinates of [5; 3; −5] with respect to the basis B = {[1; 0; −1], [1; 2; 0], [0; 1; 1]} in ℝ³.
(b) Suppose that L : ℝ³ → ℝ³ is a linear transformation such that L(1, 0, −1) = (0, 1, 1), L(1, 2, 0) = (−2, 2, 0), and L(0, 1, 1) = (5, 3, −5). Determine the B-matrix of L.
(c) Use parts (a) and (b) to determine L(5, 3, −5).

A7 Assume that each of the following matrices is the standard matrix of a linear mapping L : ℝⁿ → ℝⁿ. Determine the matrix of L with respect to the given basis B. You may find it helpful to use a computer to find inverses and to multiply matrices.
(a) [1 2; −1 3], B = {[1; 1], [1; −2]}
(b) [−1 2; 2 −1], B = {[1; 1], [1; −1]}
(c) [2 0 6; 0 2 0; 6 0 2], B = {[1; 0; 1], [0; 1; 0], [1; 0; −1]}
(d) [0 −6; 6 0], B = {[1; 1], [1; −1]}
(e) [1 0 −1; 0 2 0; −1 0 1], B = {[1; 0; 1], [0; 1; 0], [1; 0; −1]}
(f) [1 2 0; 2 1 0; 0 0 3], B = {[1; 1; 0], [1; −1; 0], [0; 0; 1]}

A8 Find the B-matrix of each of the following linear mappings.
(a) L : ℝ³ → ℝ³ defined by L(x1, x2, x3) = (x1 + x3, x2 + x3, x1 + x3), B = {[1; 0; 1], [0; 1; 1], [1; 1; 0]}
(b) L : P2 → P2 defined by L(a + bx + cx²) = a + (b + c)x², B = {1, x, x²}
(c) D : P2 → P2 defined by D(a + bx + cx²) = b + 2cx, B = {1 + x², 1 + x, 1 − x + x²}
(d) T : U → U, where U is the subspace of upper-triangular matrices in M(2,2), defined by T([a b; 0 c]) = [a + b  b; 0  b + c], B = {[1 0; 0 0], [0 1; 0 0], [0 0; 0 1]}

Homework Problems

B1 Determine the matrix of the linear mapping L with respect to the basis B in the following cases. Determine [L(x)]_B for the given [x]_B.
(a) In ℝ², B = {v1, v2} and L(v1) = v1 + 3v2, L(v2) = 5v1 − 7v2; [x]_B = [−2; 5]
(b) In ℝ³, B = {v1, v2, v3} and L(v1) = v1 − 3v2, L(v2) = 3v1 + 4v2 − v3, L(v3) = −v1 + 2v2 + 6v3; [x]_B = [1; −1; 2]

B2 Consider the basis B = {[1; 2], [1; −2]} of ℝ². In each of the following cases, assume that L is a linear mapping and determine [L]_B.
(a) L(1, 2) = (1, −2) and L(1, −2) = (4, 8)
(b) L(1, 2) = (5, 10) and L(1, −2) = (−3, 6)
(c) L(1, 2) = (0, 0) and L(1, −2) = (1, 2)

B3 Consider the basis B = {v1, v2, v3} = {[1; 1; 0], [0; 1; 1], [1; 0; 1]} of ℝ³. In each of the following cases, assume that L is a linear mapping. Determine [L(v1)]_B, [L(v2)]_B, and [L(v3)]_B and hence determine [L]_B.
(a) L(v1) = [2; 2; 0], L(v2) = [0; 0; 0], L(v3) = [1; 0; 1]
(b) L(v1) = [1; 2; 1], L(v2) = [2; 1; 1], L(v3) = [1; 1; 2]
(c) L(v1) = [0; 1; 1], L(v2) = [1; 1; 0], L(v3) = [1; 0; 1]
(d) L(v1) = [1; 1; 0], L(v2) = [0; 1; 1], L(v3) = [1; 0; 1]

B4 For each of the following linear transformations, determine a geometrically natural basis B (as in Examples 2 and 3) and determine the B-matrix of the transformation.
(a) perp_(3,2)
(b) perp_(2,1,−2)
(c) refl_(1,2,3)

B5 (a) Find the coordinates of [5; 2; 1] with respect to the basis B = {[1; 1; 0], [0; 1; 1], [1; 0; 1]} in ℝ³.
(b) Suppose that L : ℝ³ → ℝ³ is a linear transformation such that L(1, 1, 0) = (0, 5, 5), L(0, 1, 1) = (2, 0, 2), and L(1, 0, 1) = (5, 2, 1). Determine the B-matrix of L.
(c) Use parts (a) and (b) to determine L(5, 2, 1).

B6 (a) Find the coordinates of [1; 4; 4] with respect to the basis B = {[2; 1; 0], [−1; 0; 1], [1; 1; 0]} in ℝ³.
(b) Suppose that L : ℝ³ → ℝ³ is a linear transformation such that L(2, 1, 0) = (3, 3, 0), L(−1, 0, 1) = (1, 4, 4), and L(1, 1, 0) = (−2, 0, 2). Determine the B-matrix of L.
(c) Use parts (a) and (b) to determine L(1, 4, 4).

B7 Assume that each of the following matrices is the standard matrix of a linear mapping L : ℝⁿ → ℝⁿ. Determine the matrix of L with respect to the given basis B. You may find it helpful to use a computer to find inverses and to multiply matrices.
(a) [1 3; 3 1], B = {[1; 1], [1; −1]}
(b) [2 6; 1 1], B = {[2; 1], [3; −1]}
(c) [−2 −1 0; −1 −2 0; 0 0 1], B = {[1; 1; 0], [1; −1; 0], [0; 0; 1]}
(d) [−1 6 −6; 1 0 0; 0 1 0], B = {[1; 1; 1], [2; 1; 0], [3; 0; 1]}

B8 Assume that each of the following matrices is the standard matrix of a linear mapping L : ℝⁿ → ℝⁿ. Determine the matrix of L with respect to the given basis B and use it to determine [L(x)]_B for the given vector x. You may find it helpful to use a computer to find inverses and to multiply matrices.
(a) [−2 1; 1 −2], B = {[1; 1], [1; −1]}, [x]_B = [2; 1]
(b) [3 6; 6 −2], B = {[1; 2], [2; −1]}, [x]_B = [−1; 2]
(c) [1 −2 2; −2 1 2; 2 2 1], B = {[1; 0; 1], [−1; 1; 0], [1; 1; 1]}, [x]_B = [1; 2; 1]
(d) [2 −1 1; −1 2 1; 1 1 2], B = {[1; 1; 0], [1; −1; 1], [0; 1; 1]}, [x]_B = [1; 0; 2]

B9 Find the B-matrix of each of the following linear mappings.
(a) L : ℝ³ → ℝ³ defined by L(x1, x2, x3) = (x1 + x2 + x3, x1 + 2x2, x1 + x3), B = {[1; 1; 0], [1; 0; 1], [0; 1; 1]}
(b) L : P2 → P2 defined by L(a + bx + cx²) = (a + b + c) + (a + 2b)x + (a + c)x², B = {1, x, x²}
(c) L : M(2,2) → M(2,2) defined by L([a b; c d]) = [d a; b c], B = {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]}
(d) D : P2 → P2 defined by D(a + bx + cx²) = b + 2ax, B = {1 + 2x + 3x², −2x + x², 1 + x + x²}
(e) T : D → D, where D is the subspace of diagonal matrices in M(2,2), defined by T([a 0; 0 b]) = [a + b  0; 0  2a − b], B = {[1 0; 0 0], [0 0; 0 1]}
Conceptual Problems

D1 Suppose that B and C are bases for ℝⁿ and S is the standard basis of ℝⁿ. Suppose that P is the change of coordinates matrix from B to S and that Q is the change of coordinates matrix from C to S. Let L : ℝⁿ → ℝⁿ be a linear mapping. Express the matrix [L]_C in terms of [L]_B, P, and Q.

D2 Suppose that a 2 × 2 matrix A is the standard matrix of a linear mapping L : ℝ² → ℝ². Let B = {v1, v2} be a basis for ℝ² and let P denote the change of coordinates matrix from B to the standard basis. What conditions will have to be satisfied by the vectors v1 and v2 in order for P⁻¹AP = D = [d1 0; 0 d2] for some d1, d2 ∈ ℝ? (Hint: Consider the equation AP = PD, or A[v1 v2] = [v1 v2]D.)

D3 Let V be a vector space with basis B = {v1, ..., vn}, let W be a vector space with basis C, and let L : V → W be a linear mapping. Prove that the matrix C[L]_B defined by

C[L]_B = [ [L(v1)]_C · · · [L(vn)]_C ]

satisfies [L(x)]_C = C[L]_B [x]_B and hence is the matrix of L with respect to the bases B and C.

D4 Determine the matrix of the following linear mappings with respect to the given bases B and C.
(a) D : P2 → P1 defined by D(a + bx + cx²) = b + 2cx, B = {1, x, x²}, C = {1, x}
(b) L : ℝ² → P2 defined by L(a1, a2) = (a1 + a2) + a1x², B = {[1; 0], [0; 1]}, C = {1 + x², 1 + x, −1 − x + x²}
(c) T : ℝ² → M(2,2) defined by T([a; b]) = [a b; b a], B = {[1; 1], [1; −1]}, C = {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]}
(d) L : P2 → ℝ² defined by L(a + bx + cx²) = [a + b; b + c], B = {1 + x², 1 + x, −1 + x + x²}, C = {[1; 0], [0; 1]}

4.7 Isomorphisms of Vector Spaces


Some of the ideas discussed in Chapters 3 and 4 lead to generalizations that are important in the further development of linear algebra (and also in abstract algebra). Some of these generalizations are outlined in this section. Most of the proofs are easy or simple variations on proofs given earlier, so they will be left as exercises. Throughout this section, it is assumed that U, V, and W are vector spaces over ℝ and that L : U → V and M : V → W are linear mappings.

Definition (One-to-One)
L is said to be one-to-one if L(u1) = L(u2) implies that u1 = u2.

Lemma 1 L is one-to-one if and only if Null(L) = {0}. (Compare this to Theorem 3.5.6.)

You are asked to prove Lemma 1 as Problem D 1.

EXAMPLE 1
Every invertible linear transformation L : ℝⁿ → ℝⁿ is one-to-one. The fact that such a transformation is one-to-one allows the definition of the inverse. The mapping inj : ℝ³ → ℝ⁴ of Example 3.5.8 is a one-to-one mapping that is not invertible. The mapping P : ℝ⁴ → ℝ³ of Example 3.5.8 is not one-to-one. For any vector n, proj_n : ℝ³ → ℝ³ is not one-to-one, since many elements in the domain are mapped to the same vector in the range.

EXAMPLE 2
Prove that L : ℝ² → ℝ³ defined by L(x1, x2) = (x1, x1 + x2, x2) is one-to-one.
Solution: Assume that L(x1, x2) = L(y1, y2). Then we have (x1, x1 + x2, x2) = (y1, y1 + y2, y2), and so x1 = y1 and x2 = y2. Thus, L is one-to-one.


= =

EXAMPLE 3
Determine whether M : P2 → M(2,2) defined by M(a + bx + cx²) = [a b; 0 0] is one-to-one.
Solution: Let p(x) = 1 + x and q(x) = 1 + x + x². Observe that M(p) = [1 1; 0 0] = M(q), but p(x) ≠ q(x), so M is not one-to-one.

EXERCISE 1
Suppose that {u1, ..., uk} is a linearly independent set in U and L is one-to-one. Prove that {L(u1), ..., L(uk)} is linearly independent.

Definition (Onto)
L : U → V is said to be onto if for every v ∈ V there exists some u ∈ U such that L(u) = v. That is, Range(L) = V.

EXAMPLE 4
Invertible linear transformations of ℝⁿ are all onto mappings. The mapping P : ℝ⁴ → ℝ³ of Example 3.5.8 is onto, but the mapping inj : ℝ³ → ℝ⁴ of Example 3.5.8 is not onto.

EXAMPLE 5
Prove that L : ℝ² → P1 defined by L(y1, y2) = y1 + (y1 + y2)x is onto.
Solution: Let a + bx be any polynomial in P1. We need to find a vector y = [y1; y2] ∈ ℝ² such that L(y1, y2) = a + bx. For this to be true, we require that y1 + (y1 + y2)x = a + bx, so y1 = a and b = y1 + y2, which gives y2 = b − y1 = b − a. Therefore, we have L(a, b − a) = a + bx, and so L is onto.

EXAMPLE 6
Determine whether M : ℝ² → ℝ³ defined by M(x1, x2) = (x1, x1 + x2, x2) is onto.
Solution: If M is onto, then for every vector y = [y1; y2; y3] ∈ ℝ³ there exists x = [x1; x2] ∈ ℝ² such that M(x) = y. But if y = [1; 1; 1], then M(x) = y implies that x1 = 1, x2 = 1, and x1 + x2 = 1, which is clearly inconsistent. Hence, M is not onto.

EXERCISE 2
Suppose that {u1, ..., uk} is a spanning set for U and L is onto. Prove that {L(u1), ..., L(uk)} is a spanning set for V.

Theorem 2
The linear mapping L : U → V has an inverse linear mapping L⁻¹ : V → U if and only if L is one-to-one and onto.

You are asked to prove Theorem 2 as Problem D4.

Definition (Isomorphism, Isomorphic)
If U and V are vector spaces over ℝ, and if L : U → V is a linear, one-to-one, and onto mapping, then L is called an isomorphism (or a vector space isomorphism), and U and V are said to be isomorphic.

The word isomorphism comes from Greek words meaning "same form." The concept of an isomorphism is a very powerful and important one. It implies that the essential structure of the isomorphic vector spaces is the same, so that a vector space statement that is true in one space is immediately true in any isomorphic space. Of course, some vector spaces such as M(m, n) or Pn have some features that are not purely vector space properties (such as matrix decomposition and polynomial factorization), and these particular features cannot automatically be transferred from these spaces to spaces that are isomorphic as vector spaces.

EXAMPLE 7
Prove that P2 and ℝ³ are isomorphic by constructing an explicit isomorphism L.
Solution: We define L : P2 → ℝ³ by L(a0 + a1x + a2x²) = [a0; a1; a2].
To prove that it is an isomorphism, we must prove that it is linear, one-to-one, and onto.
Linear: Let any two elements of P2 be p(x) = a0 + a1x + a2x² and q(x) = b0 + b1x + b2x² and let t ∈ ℝ. Then,

L(tp + q) = L(t(a0 + a1x + a2x²) + (b0 + b1x + b2x²))
          = L(ta0 + b0 + (ta1 + b1)x + (ta2 + b2)x²)
          = [ta0 + b0; ta1 + b1; ta2 + b2]
          = t[a0; a1; a2] + [b0; b1; b2]
          = tL(p) + L(q)

Therefore, L is linear.
One-to-one: Let a0 + a1x + a2x² ∈ Null(L). Then, [0; 0; 0] = L(a0 + a1x + a2x²) = [a0; a1; a2]. Hence, a0 = a1 = a2 = 0, so Null(L) = {0} and thus L is one-to-one by Lemma 1.
Onto: For any [a0; a1; a2] ∈ ℝ³, we have L(a0 + a1x + a2x²) = [a0; a1; a2]. Hence, L is onto.
Thus, L is an isomorphism from P2 to ℝ³.
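The coefficient map in Example 7 is easy to experiment with. A small sketch (all names here are our own, used only for illustration) that spot-checks the two ingredients of an isomorphism, linearity and invertibility, on sample polynomials stored as coefficient tuples:

```python
def L(p):
    """Send a0 + a1 x + a2 x^2, stored as a tuple (a0, a1, a2), to a coordinate list in R^3."""
    return list(p)

def Linv(v):
    """The inverse map: a vector in R^3 back to a coefficient tuple."""
    return tuple(v)

p, q, t = (3, -1, 2), (1, 4, 0), 5

# Linearity: L(t*p + q) == t*L(p) + L(q) (computed componentwise on coefficients)
tp_plus_q = tuple(t * a + b for a, b in zip(p, q))
assert L(tp_plus_q) == [t * a + b for a, b in zip(L(p), L(q))]

# One-to-one and onto: L has a two-sided inverse
assert Linv(L(p)) == p and L(Linv([7, 0, -2])) == [7, 0, -2]
print("linearity and invertibility checks passed")
```

Of course, a finite set of samples proves nothing by itself; the point is that the map is literally a relabelling of the coefficients, which is why the proof in Example 7 goes through so smoothly.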

EXERCISE 3
Use Exercise 1 and Exercise 2 to prove that if L : U → V is an isomorphism and {u1, ..., un} is a basis for U, then {L(u1), ..., L(un)} is a basis for V.

Theorem 3
Suppose that U and V are finite-dimensional vector spaces over ℝ. Then U and V are isomorphic if and only if they are of the same dimension.

You are asked to prove Theorem 3 as Problem D5.

EXAMPLE 8
1. The vector space M(m, n) is isomorphic to ℝ^(mn).
2. The vector space Pn is isomorphic to ℝ^(n+1).
3. Every k-dimensional subspace of ℝⁿ is isomorphic to every k-dimensional subspace of M(m, n).

If we know that two vector spaces over ℝ have the same dimension, then Theorem 3 says that they are isomorphic. However, even if we already know that two vector spaces are isomorphic, we may need to construct an explicit isomorphism between the two vector spaces. The following theorem shows that if we have two isomorphic vector spaces U and V, then we only have to check if a linear mapping L : U → V is one-to-one or onto to prove that it is an isomorphism between these two spaces.

Theorem 4
If U and V are n-dimensional vector spaces over ℝ, then a linear mapping L : U → V is one-to-one if and only if it is onto.

You are asked to prove Theorem 4 as Problem D6.
PROBLEMS 4.7
Practice Problems

A1 For each of the following pairs of vector spaces, define an explicit isomorphism to establish that the spaces are isomorphic. Prove that your map is an isomorphism.
(a) P3 and ℝ⁴
(b) M(2,2) and ℝ⁴
(c) P3 and M(2,2)
(d) P = {p(x) ∈ P2 | p(2) = 0} and the vector space U = {[a1 0; 0 a2] | a1, a2 ∈ ℝ} of 2 × 2 diagonal matrices

Homework Problems

B1 For each of the following pairs of vector spaces, define an explicit isomorphism to establish that the spaces are isomorphic. Prove that your map is an isomorphism.
(a) P4 and ℝ⁵
(b) M(2,3) and ℝ⁶
(c) ℝ² and the vector space S = Span{[1; 0; 1], [0; 1; 1]}
(d) P = {p(x) ∈ P3 | p(1) = 0} and the vector space T = {[a1 a2; 0 a3] | a1, a2, a3 ∈ ℝ} of 2 × 2 upper-triangular matrices

Conceptual Problems

D1 Prove Lemma 1. (Hint: Suppose that L is one-to-one. What is the unique u ∈ U such that L(u) = 0? Conversely, suppose that Null(L) = {0}. If L(u1) = L(u2), then what is L(u1 − u2)?)

D2 (a) Prove that if L and M are one-to-one, then M ∘ L is one-to-one.
(b) Give an example where M is not one-to-one but M ∘ L is one-to-one.
(c) Is it possible to give an example where L is not one-to-one but M ∘ L is one-to-one? Explain.

D3 Prove that if L and M are onto, then M ∘ L is onto.

D4 Prove Theorem 2.

D5 Prove Theorem 3. To prove "isomorphic ⇒ same dimension," use Exercise 3. To prove "same dimension ⇒ isomorphic," take a basis {u1, ..., un} for U and a basis {v1, ..., vn} for V. Define an isomorphism by taking L(ui) = vi for 1 ≤ i ≤ n, requiring that L be linear. (You must prove that this is an isomorphism.)

D6 Prove Theorem 4.

D7 Prove that any plane through the origin in ℝ³ is isomorphic to ℝ².

D8 Recall the definition of the Cartesian product from Problem 4.1.D4. Prove that U × {0_V} is a subspace of U × V that is isomorphic to U.

D9 (a) Prove that ℝ² × ℝ is isomorphic to ℝ³.
(b) Prove that ℝⁿ × ℝᵐ is isomorphic to ℝⁿ⁺ᵐ.

D10 Suppose that L : U → V is a vector space isomorphism and that M : V → V is a linear mapping. Prove that L⁻¹ ∘ M ∘ L is a linear mapping from U to U. Describe the nullspace and range of L⁻¹ ∘ M ∘ L in terms of the nullspace and range of M.

CHAPTER REVIEW
Suggestions for Student Review

Remember that if you have understood the ideas of Chapter 4, you should be able to
give answers to these questions without looking them up. Try hard to answer them
from your own understanding.

1 State the essential properties of a vector space over R. Why is the empty set not a
vector space? Describe two or three examples of vector spaces that are not subspaces
of R^n. (Section 4.2)

2 What is a basis? What are the important properties of a basis? (Section 4.3)

3 (a) Explain the concept of dimension. What theorem is required to ensure that the
concept of dimension is well defined? (Section 4.3)
(b) Explain how knowing the dimension of a vector space is helpful when you have to
find a basis for the vector space. (Section 4.3)

4 Why is linear independence of a spanning set important when we define coordinates
with respect to the spanning set? (Section 4.4)

5 Invent and analyze an example as follows.
(a) Give a basis B for a three-dimensional subspace in R^5. (Don't make it too easy
by choosing any standard basis vectors, but don't make it too hard by choosing
completely random components.) (Section 4.3)
(b) Choose a coordinate vector in R^3 and determine the standard coordinates in R^5
of the vector that has that coordinate vector with respect to your basis B.
(Section 4.4)
(c) Take the vector you found in (b) and carry out the standard procedure to
determine its coordinates with respect to B. Did you get the right answer?
(Section 4.4)
(d) Pick any two vectors in R^5 and determine whether they lie in your subspace.
Determine the coordinates of any vector that is in the subspace. (Section 4.4)

6 Write a short explanation of how you use information about consistency of systems
and uniqueness of solutions in testing for linear independence and in determining
whether a vector belongs to a given subspace. (Sections 4.3 and 4.4)

7 Give the definition of a linear mapping L : V → W and show how this implies that L
preserves linear combinations. Explain the procedure for determining if a vector y is
in the range of L. Describe how to find a basis for the nullspace and a basis for the
range of L. (Section 4.5)

8 State how to determine the standard matrix and the B-matrix of a linear mapping
L : V → V. Explain how [L(x)]_B is determined in terms of [L]_B. (Section 4.6)

9 State the definition of an isomorphism of vector spaces and give some examples.
Explain why a finite-dimensional vector space cannot be isomorphic to a proper
subspace of itself. (Section 4.7)
Chapter Review 251

Chapter Quiz

El Determine whether the following sets are vector spaces; explain briefly.
(a) The set of 4 × 3 matrices such that the sum of the entries in the first row is
zero (a11 + a12 + a13 = 0), under standard addition and scalar multiplication of
matrices.
(b) The set of polynomials p(x) such that p(1) = 0 and p(2) = 0, under standard
addition and scalar multiplication of polynomials.
(c) The set of 2 × 2 matrices such that all entries are integers, under standard
addition and scalar multiplication of matrices.
(d) The set of all vectors [x1; x2; x3] such that x1 + x2 + x3 = 0, under standard
addition and scalar multiplication of vectors.

E2 In each of the following cases, determine whether the given set of vectors is a
basis for M(2, 2):
(a) a given set of five 2 × 2 matrices
(b) a given set of four 2 × 2 matrices
(c) a given set of three 2 × 2 matrices

E3 (a) Let S be the subspace of R^5 spanned by four given vectors v1, v2, v3, and v4.
Pick a subset of the given vectors that forms a basis B for S. Determine the
dimension of S.
(b) Determine the coordinates of a given vector x with respect to B.

E4 (a) Find a basis for the plane in R^3 with equation x1 − x3 = 0.
(b) Extend the basis you found in (a) to a basis B for R^3.
(c) Let L : R^3 → R^3 be a reflection in the plane from part (a). Determine [L]_B.
(d) Using your result from part (c), determine the standard matrix [L]_S of the
reflection.

E5 Let L : R^3 → R^3 be a linear mapping with a given standard matrix, and let B be a
given basis of R^3. Determine the matrix [L]_B.

E6 Suppose that L : V → W is a linear mapping with Null(L) = {0}. Suppose that
{v1, ..., vk} is a linearly independent set in V. Prove that {L(v1), ..., L(vk)} is a
linearly independent set in W.

E7 Decide whether each of the following statements is true or false. If it is true,
explain briefly; if it is false, give an example to show that it is false.
(a) A subspace of R^n must have dimension less than n.
(b) A set of four polynomials in P2 cannot be a basis for P2.
(c) If B is a basis for a subspace of R^5, then the B-coordinate vector of some
vector x ∈ R^5 has five components.
(d) For any linear mapping L : R^n → R^n and any basis B of R^n, the rank of the
matrix [L]_B is the same as the rank of the matrix [L]_S.
(e) For any linear mapping L : V → V and any basis B of V, the column space of
[L]_B equals the range of L.
(f) If L : V → W is one-to-one, then dim V = dim W.
252 Chapter 4 Vector Spaces

Further Problems

These problems are intended to be challenging. They may not be of interest to all
students.

Fl Let S be a subspace of an n-dimensional vector space V. Prove that there exists a
linear operator L : V → V such that Null(L) = S.

F2 Use the ideas of this chapter to prove the uniqueness of the reduced row echelon
form for a given matrix A. (Hint: Begin by assuming that there are two reduced row
echelon forms R and S. What can you say about the columns with leading 1s in the two
matrices?)

F3 Magic Squares — An Exploration of Their Vector Space Properties
We say that a matrix A ∈ M(3, 3) is a 3 × 3 magic square if the three row sums (where
each row sum is the sum of the entries in one row of A), the three column sums of A,
and the two diagonal sums of A (a11 + a22 + a33 and a13 + a22 + a31) all have the
same value k. The common sum k is called the weight of the magic square A and is
denoted by wt(A) = k.

For example, if A = [2 2 -1; -2 1 4; 3 0 0], then A is a magic square with
wt(A) = 3.

The aim of this exploration is to find all 3 × 3 magic squares. The subset of
M(3, 3) consisting of magic squares is denoted MS3.
(a) Show that MS3 is a subspace of M(3, 3).
(b) Observe that weight determines a map wt : MS3 → R. Show that wt is linear.
(c) Compute the nullspace of wt. Suppose that K1 = [1 0 a; b c d; e f g] and
K2 = [0 1 h; j k m; n q v] are in the nullspace, where a, b, c, ..., v denote
unknown entries. Determine these unknown entries and prove that K1 and K2 form a
basis for Null(wt). (Hint: If A ∈ Null(wt), consider A − a11K1 − a12K2.)
(d) Let J = [1 1 1; 1 1 1; 1 1 1]. Observe that J is a magic square with wt(J) = 3.
Show that all A in MS3 that have weight k are of the form

(k/3)J + pK1 + qK2

for some p, q ∈ R.
(e) Show that B = {J, K1, K2} is a basis for MS3.
(f) As an example, find all 3 × 3 magic squares of weight 1.
(g) Find the coordinates of the example magic square A above with respect to the
basis B.

Conclusion: MS3 is a three-dimensional subspace of M(3, 3).

Exercises F4–F7 require the following definitions. If S and T are subspaces of the
vector space V, we define

S + T = { ps + qt | p, q ∈ R, s ∈ S, t ∈ T }

If S and T are subspaces of V such that S + T = V and S ∩ T = {0}, we say that S is
the complement of T (and T is the complement of S). In general, given a subspace S of
V, one can choose a complement in many ways; the complement of S is not unique. For
example, in R^2, we may take a complement of Span{[1; 0]} to be Span{[0; 1]} or
Span{[1; 1]}.

F4 In the vector space of continuous real-valued functions of a real variable, show
that the even functions and the odd functions form subspaces such that each is the
complement of the other.
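Problem F4 has a concrete decomposition behind it: f(x) = (f(x) + f(−x))/2 + (f(x) − f(−x))/2 splits any function into an even part plus an odd part, and a function that is both even and odd must be zero. The sketch below is our illustration, not part of the exercise; it checks this split numerically.

```python
# Hypothetical numerical check of the even/odd decomposition of a function.

def even_part(f):
    """The even part: f_even(x) = (f(x) + f(-x)) / 2."""
    return lambda x: (f(x) + f(-x)) / 2

def odd_part(f):
    """The odd part: f_odd(x) = (f(x) - f(-x)) / 2."""
    return lambda x: (f(x) - f(-x)) / 2

f = lambda x: x**3 + 2 * x**2 - x + 7   # an arbitrary test function
fe, fo = even_part(f), odd_part(f)

for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert abs(fe(x) - fe(-x)) < 1e-12          # fe is even
    assert abs(fo(x) + fo(-x)) < 1e-12          # fo is odd
    assert abs(fe(x) + fo(x) - f(x)) < 1e-12    # they sum back to f
```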
Chapter Review 253

FS (a) If S is a k-dimensional subspace of R^n, show that any complement of S must be
of dimension n − k.
(b) Suppose that S is a subspace of R^n that has a unique complement. Must it be true
that S is either {0} or R^n?

F6 Suppose that v and w are vectors in a vector space V. Suppose also that S is a
subspace of V. Let T be the subspace spanned by v and S. Let U be the subspace
spanned by w and S. Prove that if w is in T but not in S, then v is in U.

F7 Show that if S and T are finite-dimensional subspaces of V, then

dim S + dim T = dim(S + T) + dim(S ∩ T)

MyMathlab Go to MyMathLab at www.mymathlab.com. You can practise many of this chapter's exercises as often as you
want. The guided solutions help you find an answer step by step. You'll find a personalized study plan available to
you, too!
CHAPTER 5

Determinants
CHAPTER OUTLINE

5.1 Determinants in Terms of Cofactors


5.2 Elementary Row Operations and the Determinant
5.3 Matrix Inverse by Cofactors and Cramer's Rule
5.4 Area, Volume, and the Determinant

For each square matrix A, we define a number called the determinant of A. Originally,
the determinant was used to "determine" whether a system of n linear equations in
n variables was consistent. Now the determinant also provides a second method for
finding the inverse of a matrix. It also plays an important role in the discussion of
volume. Finally, it is an important tool for finding eigenvalues in Chapter 6.

5.1 Determinants in Terms of Cofactors


Consider the system of two linear equations in two variables:

a11 x1 + a12X2 = b1

a21X1 + a22X2 = b2

By the standard procedure of elimination, the system is found to be consistent if and
only if a11a22 − a12a21 ≠ 0. If it is consistent, then the solution is found to be

x1 = (a22 b1 − a12 b2) / (a11a22 − a12a21),    x2 = (a11 b2 − a21 b1) / (a11a22 − a12a21)

This fact prompts the following definition.
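The elimination formulas above translate directly into code. The sketch below is ours (the function name is made up); it solves a 2 × 2 system exactly when a11a22 − a12a21 ≠ 0.

```python
def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve a11*x1 + a12*x2 = b1 and a21*x1 + a22*x2 = b2 using the
    elimination formulas; returns None when a11*a22 - a12*a21 = 0."""
    det = a11 * a22 - a12 * a21
    if det == 0:
        return None
    x1 = (a22 * b1 - a12 * b2) / det
    x2 = (a11 * b2 - a21 * b1) / det
    return x1, x2

# x1 + 3*x2 = 7 and 2*x1 + 4*x2 = 10: determinant is -2, solution is (1, 2)
assert solve_2x2(1, 3, 2, 4, 7, 10) == (1.0, 2.0)
```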

Definition
Determinant of a 2 x 2 Matrix

The determinant of a 2 x 2 matrix A = [ a11 a12; a21 a22 ] is defined by

det A = det [ a11 a12; a21 a22 ] = a11a22 − a12a21
256 Chapter 5 Determinants

EXAMPLE 1
Find the determinant of [1 3; 2 4], [2 7; 8 -5], and [2 4; 2 4].

Solution: We have

det [1 3; 2 4] = 1(4) − 3(2) = −2

det [2 7; 8 -5] = 2(−5) − 7(8) = −10 − 56 = −66

det [2 4; 2 4] = 2(4) − 2(4) = 0

An Alternate Notation: The determinant is often denoted by vertical straight lines:

det [ a11 a12; a21 a22 ] = | a11 a12; a21 a22 |

One risk with this notation is that one may fail to distinguish between a matrix and the
determinant of the matrix. This is a serious error.

EXERCISE 1 Calculate the following determinants.

(a)
1; �I (b)
I� _;1 (c)
I� �I

The 3 x 3 Case
Let A be a 3 x 3 matrix. We can show through elimination (with some effort) that the
system is consistent with a unique solution if and only if

D = a11a22a33 − a11a23a32 − a12a21a33 + a12a23a31 + a13a21a32 − a13a22a31 ≠ 0

We would like to simplify or reorganize this expression so that we can remember it
more easily and so that we can determine how to generalize it to the n x n case.
Notice that a11 is a common factor in the first pair of terms in D, a12 is a common
factor in the second pair, and a13 is a common factor in the third pair. Thus, D can be
rewritten as

D = a11(a22a33 − a23a32) − a12(a21a33 − a23a31) + a13(a21a32 − a22a31)

Observe that the determinant being multiplied by a11 is the determinant of the 2 x 2
matrix formed by removing the first row and first column of A. Similarly, a12 is being
multiplied by (−1) times the determinant of the matrix formed by removing the first
Section 5.1 Determinants in Terms of Cofactors 257

row and second column of A, and a13 is being multiplied by the determinant of the
matrix formed by removing the first row and third column of A. Hence, we make the
following definitions.

Definition
Cofactors of a 3 x 3 Matrix

Let A be a 3 x 3 matrix. Let A(i, j) denote the 2 x 2 submatrix obtained from A by
deleting the i-th row and j-th column. Define the cofactor Cij of aij to be

Cij = (−1)^(i+j) det A(i, j)

Definition
Determinant of a 3 x 3 Matrix

The determinant of a 3 x 3 matrix A is defined by

det A = a11C11 + a12C12 + a13C13

Remarks

1. This definition of the determinant of a 3 x 3 matrix is called the expansion of


the determinant along the first row. As we shall see below, a determinant can
be expanded along any row or column.

2. The signs attached to cofactors can cause trouble if you are not careful. One
helpful way to remember which sign to attach to which cofactor is to take a
blank matrix, put a + in the top-left corner, and then alternate − and + both
across and down:

[ + − + ]
[ − + − ]
[ + − + ]

This is shown for a 3 x 3 matrix, but the same pattern works for a square matrix
of any size.

3. Cij is called the cofactor of aij because it is the "factor with aij" in the expansion
of the determinant. Note that each Cij is a number, not a matrix.

EXAMPLE 2
Let A = [ a11 a12 a13; 2 3 5; 1 0 6 ]. Calculate the cofactors of the first row of A
and use them to find the determinant of A.

Solution: By definition, the cofactor C11 is (−1)^(1+1) times the determinant of the
matrix obtained from A by deleting the first row and first column. Thus,

C11 = (−1)^(1+1) det [3 5; 0 6] = 3(6) − 5(0) = 18
258 Chapter 5 Determinants

EXAMPLE 2
(continued)
The cofactor C12 is (−1)^(1+2) times the determinant of the matrix obtained from A by
deleting the first row and second column. So,

C12 = (−1)^(1+2) det [2 5; 1 6] = −[2(6) − 5(1)] = −7

Finally, the cofactor C13 is

C13 = (−1)^(1+3) det [2 3; 1 0] = 2(0) − 3(1) = −3

Hence,

det A = a11C11 + a12C12 + a13C13 = 18a11 − 7a12 − 3a13

EXERCISE 2
Calculate the cofactors of the first row of the given 3 x 3 matrix A and use them to
find the determinant of A.

Generally, when expanding a determinant, we write the steps more compactly, as
in the next example.

EXAMPLE 3
Calculate det [ 1 2 3; -2 2 1; 5 0 -1 ].

Solution: By definition, we have

det [ 1 2 3; -2 2 1; 5 0 -1 ] = a11C11 + a12C12 + a13C13

= 1(−1)^(1+1) |2 1; 0 -1| + 2(−1)^(1+2) |-2 1; 5 -1| + 3(−1)^(1+3) |-2 2; 5 0|

= 1[2(−1) − 1(0)] − 2[(−2)(−1) − 1(5)] + 3[(−2)(0) − 2(5)]

= −2 + 6 − 30 = −26
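The first-row expansion used in Example 3 translates directly into code. This sketch is ours, not the text's; it reproduces the definition det A = a11C11 + a12C12 + a13C13 for a 3 × 3 matrix stored as nested lists.

```python
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det3_first_row(a):
    """Expand the determinant of a 3x3 matrix along its first row:
    det A = a11*C11 + a12*C12 + a13*C13."""
    total = 0
    for j in range(3):
        # delete row 1 and column j+1 to form the 2x2 submatrix A(1, j+1)
        minor = [[a[i][k] for k in range(3) if k != j] for i in (1, 2)]
        # (-1)**j is the cofactor sign (-1)**(1 + (j+1)) in 1-based indexing
        total += a[0][j] * (-1) ** j * det2(minor)
    return total

# The 3x3 matrix of Example 3 (as we read its entries):
A = [[1, 2, 3], [-2, 2, 1], [5, 0, -1]]
assert det3_first_row(A) == -26
```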

We now define the determinant of an n x n matrix by following the pattern of the
definition for the 3 x 3 case.

Definition
Cofactors of an n x n Matrix

Let A be an n x n matrix. Let A(i, j) denote the (n − 1) x (n − 1) submatrix obtained
from A by deleting the i-th row and j-th column. The cofactor of aij is defined to be

Cij = (−1)^(i+j) det A(i, j)
Section 5.1 Determinants in Terms of Cofactors 259

Definition
Determinant of an n x n Matrix

The determinant of an n x n matrix A is defined by

det A = a11C11 + a12C12 + ··· + a1nC1n

Remark

This is a recursive definition. The result for the n x n case is defined in terms of the
(n − 1) x (n − 1) case, which in turn must be calculated in terms of the
(n − 2) x (n − 2) case, and so on, until we get back to the 2 x 2 case, for which the
result is given explicitly.

EXAMPLE 4
We calculate the following determinant by using the definition of the determinant.
Note that * and ** represent cofactors whose values are irrelevant because they are
multiplied by 0.

det | 0  2  3  0 |
    | 1  5  6  7 |
    |-2  3  0  4 |
    |-5  1  2  3 |
= a11C11 + a12C12 + a13C13 + a14C14

= 0(*) + 2(−1)^(1+2) | 1 6 7; -2 0 4; -5 2 3 | + 3(−1)^(1+3) | 1 5 7; -2 3 4; -5 1 3 | + 0(**)

= −2( 1(−1)^(1+1) |0 4; 2 3| + 6(−1)^(1+2) |-2 4; -5 3| + 7(−1)^(1+3) |-2 0; -5 2| )
  + 3( 1(−1)^(1+1) |3 4; 1 3| + 5(−1)^(1+2) |-2 4; -5 3| + 7(−1)^(1+3) |-2 3; -5 1| )

= −2( (0 − 8) − 6(−6 + 20) + 7(−4 − 0) ) + 3( (9 − 4) − 5(−6 + 20) + 7(−2 + 15) )

= −2(−8 − 84 − 28) + 3(5 − 70 + 91)

= −2(−120) + 3(26) = 318
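The recursive definition can be implemented in a few lines. The sketch below is ours (matrices are nested lists); it expands along the first row, skipping zero entries just as Example 4 skips the cofactors marked * and **. Note that this brute-force recursion costs on the order of n! operations, which is why the simplifications of the next section matter.

```python
def det(a):
    """Determinant by cofactor expansion along the first row (the recursive
    definition in the text). Fine for small matrices; O(n!) in general."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        if a[0][j] == 0:          # zero entries contribute nothing
            continue
        minor = [row[:j] + row[j + 1:] for row in a[1:]]
        total += a[0][j] * (-1) ** j * det(minor)
    return total

# The 4x4 matrix of Example 4 (as we read its entries):
A = [[0, 2, 3, 0],
     [1, 5, 6, 7],
     [-2, 3, 0, 4],
     [-5, 1, 2, 3]]
assert det(A) == 318
```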

It is apparent that evaluating the determinant of a 4 x 4 matrix is a fairly lengthy
calculation, and things will get worse for larger matrices. In applications it is not un­common
to have a matrix with thousands (or even millions) of columns, so it is very
important to have results that simplify the evaluation of the determinant. We look at a
few useful theorems here and some very helpful theorems in the next section.

Theorem 1  Suppose that A is an n x n matrix. Then the determinant of A may be obtained by
a cofactor expansion along any row or any column. In particular, the expansion of
the determinant along the i-th row of A is

det A = ai1Ci1 + ai2Ci2 + ··· + ainCin

The expansion of the determinant along the j-th column of A is

det A = a1jC1j + a2jC2j + ··· + anjCnj
260 Chapter S Determinants

We omit a proof here since there is no conceptually helpful proof, and it would be
a bit grim to verify the result in the general case.
Theorem 1 is a very practical result. It allows us to choose from A the row or
column along which we are going to expand. If one row or column has many zeros, it
is sensible to expand along it since we shall then not have to evaluate the cofactors of
the zero entries. This was demonstrated in Example 4, where we had to compute only
two cofactors.

EXAMPLE 5
Calculate the determinants of

A = [ 1 2 -1; 3 1 0; -1 5 0 ],   B = [ 3 2 0 -1; 0 6 0 0; 4 1 2 1; 3 -1 0 1 ],   and
C = [ 4 2 1 -1; 0 2 2 2; 0 0 -1 3; 0 0 0 4 ]

Solution: For A, we expand along the third column to get

det A = a13C13 + a23C23 + a33C33

= (−1)(−1)^(1+3) | 3 1; -1 5 | + 0 + 0

= −1(15 − (−1)) = −16

For B, we expand along the second row to get

det B = a21C21 + a22C22 + a23C23 + a24C24

= 0 + (6)(−1)^(2+2) | 3 0 -1; 4 2 1; 3 0 1 | + 0 + 0

= 0 + 6( 2(−1)^(2+2) | 3 -1; 3 1 | ) + 0

= 12(3 − (−3)) = 72

We expanded the 3 x 3 submatrix along its second column. For C, we continuously
expand along the bottom row to get

det C = 0 + 0 + 0 + 4(−1)^(4+4) | 4 2 1; 0 2 2; 0 0 -1 |

= 4( (−1)(−1)^(3+3) | 4 2; 0 2 | )

= 4(−1)(4(2) − 0) = 4(−1)(4)(2) = −32
Section 5.1 Determinants in Terms of Cofactors 261

EXERCISE 3
Calculate the determinant of A = [ 1 3 2 0; 0 0 -1 2; 3 5 -1 0; -2 2 -4 0 ] by

1. Expanding along the first column

2. Expanding along the second row

3. Expanding along the fourth column

Exercise 3 demonstrates the usefulness of expanding along the row or column with
the most zeros. Of course, if one row or column contains only zeros, then this is even
easier.
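Theorem 1 can be spot-checked numerically by expanding along each row in turn and confirming that every choice gives the same number. The sketch below is ours (rows are 0-indexed); it uses the upper-triangular matrix C of Example 5, whose determinant is −32.

```python
def det_along_row(a, i):
    """Expand det(a) along row i (0-indexed); cofactor minors are
    evaluated recursively along their own first rows."""
    def det(m):
        if len(m) == 1:
            return m[0][0]
        return sum(m[0][j] * (-1) ** j * det([r[:j] + r[j+1:] for r in m[1:]])
                   for j in range(len(m)))
    rest = a[:i] + a[i+1:]            # delete row i
    return sum(a[i][j] * (-1) ** (i + j)
               * det([r[:j] + r[j+1:] for r in rest])
               for j in range(len(a)))

# Theorem 1 says every row gives the same answer for this matrix: -32.
C = [[4, 2, 1, -1], [0, 2, 2, 2], [0, 0, -1, 3], [0, 0, 0, 4]]
assert all(det_along_row(C, i) == -32 for i in range(4))
```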

Theorem 2 If one row (or column) of an n x n matrix A contains only zeros, then detA = 0.

Proof: If the i-th row of A contains only zeros, then expanding the determinant along
the i-th row of A gives

detA = ailCil + ai2C;2 + · · · + a;nCin = 0 + 0 + · · · + 0 = 0 •

As we saw with the matrix C in Example 5, another useful special case is when
the matrix is upper or lower triangular.

Theorem 3  If A is an n x n upper- or lower-triangular matrix, then the determinant of A is the
product of the diagonal entries of A. That is,

det A = a11a22 ··· ann

The proof is left as Problem D1.


Finally, recall that taking the transpose of a matrix A turns rows into columns
T
and vice versa. That is, the columns of A are identical to the rows of A. Thus, if we
T
expand the determinant of A along its first column, we will get the same cofactors
and coefficients we would get by expanding the determinant of A along its first row.
So, we get the following theorem.

Theorem 4 If A is an n x n matrix, then detA = detA7.

With the tools we have so far, evaluation of determinants is still a very tedious
business. Properties of the determinant with respect to elementary row operations make
the evaluation much easier. These properties are discussed in the next section.
262 Chapter 5 Determinants

PROBLEMS 5.1
Practice Problems

Al Evaluate the following determinants.

I
(c) 3
1 -2
-3 4 0 -7 -3 4 0 1

I� �I
3 -4 0 2 4 -1 0 -6
(c) (d) (d)
5 0 -5 1 -1 0 -3
2 0 0 4 -2 3 6
1 5 -7 8 0 6 2
1 3 -4
2 -1 3 0 0 5 -1 1
(e) 0 0 2 (f) (e)
-4 2 0 0 3 -5 -3 -5
0 0 3
0 0 0 5 6 -3 -6
A2 Evaluate the determinants of the following matrices 3 4 -5 7
by expanding along the first row. 0 -3 1 2 3

[ � � �i
(f) 0 0 4 1 0
(a) - 0 0 0 -1 8

�i
-4 -1 2 0 0 0 4 3

[-� �
A4 Show that the determinant of each matrix is equal to
(b) the determinant of its transpose by expanding along

[ ; �]
3 2 1 the second column of A and the second row of A^T.
2 1 0 -1 3 -

0 3 2 (a ) A = 1
(c) 1 0
-4 0 2 -2 -
3 -5 2 1 1 2 3 4

0 4 0 -2 0 2 5
(b) A =
2 -3 4 1 3 0 4
(d) 5 -2
-1 3 2 4 4

1 -2 4 AS Calculate the determinant of each of the following

A3 Evaluate the determinants of the following matri­ elementary matrices.

ces by expanding along the row or column of your

[� � !]
H ol
choice. (a ) E 1
=
5

[� ! �]
(•) 6 0
1 0

[-� ]
�) E 2 =
2

[-� � �i
-4
(b ) -4 6
-6 2 -3 (c) £3 =
0 0 1
Section 5.1 Exercises 263

Homework Problems

Bl Evaluate the following determinants.
(c)
0 1 0
3 -5 2 1 2 3 -3
5
(c)
1
1
1 -1
1
1 (d)
0
3
0
-3
0
9
0
1
(d)
-3
1
1
2
0
-2
1 -1 3 0 0
4 2 2 -1 4 2 5 -4
3 0 0
0 3 9 -4 -1 0 2 3
(e ) 2 0 (f) (e)
0 0 2 -2 -2 -1 -4 3
-1 0
0 0 0 3 0 -3 2
B2 Evaluate the determinants of the following matrices 1 3 4 -5 7

�[ � �1
by expanding along the first row. 3 3 1 2 0
(f) F 2 -1 4 1 0
=
(a )
5 3 0 0 0
7 1 7 -2 0 0 0 0

(b)
H -� !J B4 Show that the determinant of each matrix is equal to
the determinant of its transpose by expanding along
the third row of A and the third column of A^T.

(c)
-2
0
4
0
2
0
0
2
0
-2
0
-4
(a )A=
[� -� -�1
3 -1 0
0 -4 4 0 3 1 6 2
1 6 3 5
-1 -1 0 0 (b)A
5 = -5 6 0 0
0 2 -6
(d ) 4 -3 4 0
1 0 1 -1
2 2 0 -3 BS Calculate the determinant of each of the following
B3 Evaluate the determinants of the following matri­ elementary matrices.

[� � �]
ces by expanding along the row or column of your
(a) E
choice. 1 =

H j ll
[� ! �]
(• )
(b) E
2=

(b)
[� -� j] (c) £
3= [� ! �] _1
264 Chapter 5 Determinants

Computer Problems

Cl Use a computer to evaluate the determinants of the 0.5 0.5 0.5 0.5

(a)
[
following matrices.
l.09
2.13
4.83
-3.25
2.95
1.57
] (c)
0.5
0.5
-0.5
-0.5
0.5
0.5
0.5
-0.5
-0.5
-0.5
-0.5
0.5
1.72 2.15 -0.89
1.23 2.35 4.19 -1.28
-2.09 0.17 3.89 22.1
(b)
0.78 2.15 -3.55 4.15
1.58 -2.59 1.01 0.00

Conceptual Problems

Dl Prove Theorem 3.

D2 (a) Consider the points (a1, a2) and (b1, b2) in R^2. Show that the determinantal
equation

det [ x1 x2 1; a1 a2 1; b1 b2 1 ] = 0

is the equation of the line containing the two points.
(b) Write a determinantal equation for a plane in R^3 that contains the points
(a1, a2, a3), (b1, b2, b3), and (c1, c2, c3).

5.2 Elementary Row Operations and


the Determinant
Calculating the determinant of a matrix can be lengthy. This calculation can often be
simplified by applying elementary row operations to the matrix, provided that suitable
adjustments are made to the determinant.
To see what happens to the determinant of a matrix A when we multiply a row of
A by a constant, we first consider a 3 x 3 example. Following the example, we state
and prove the general result.

EXAMPLE 1
Let A = [ a11 a12 a13; a21 a22 a23; a31 a32 a33 ] and let B be the matrix obtained
from A by multiplying the third row of A by the real number r. Show that
det B = r det A.

Solution: We have B = [ a11 a12 a13; a21 a22 a23; ra31 ra32 ra33 ]. We wish to expand
the determinant of B along its third row. The cofactors for this row are

C31 = (−1)^(3+1) | a12 a13; a22 a23 |,   C32 = (−1)^(3+2) | a11 a13; a21 a23 |,
C33 = (−1)^(3+3) | a11 a12; a21 a22 |
Section 5.2 Elementary Row Operations and the Determinant 265

EXAMPLE 1
(continued)
Observe that these are also the cofactors for the third row of A. Hence,

det B = ra31C31 + ra32C32 + ra33C33 = r(a31C31 + a32C32 + a33C33) = r det A

Theorem 1 Let A be an n x n matrix and let B be the matrix obtained from A by multiplying the
i-th row of A by the real number r. Then, det B = r det A.

Proof: As in the example, we expand the determinant of B along the i-th row. Notice
that the cofactors of the elements in this row are exactly the cofactors of the i-th row of
A, since all the other rows of B are identical to the corresponding rows in A. Therefore,

det B = rai1Ci1 + rai2Ci2 + ··· + rainCin = r(ai1Ci1 + ··· + ainCin) = r det A

Remark

It is important to be careful when using this theorem as it is not uncommon to
incorrectly use the reciprocal (1/r) of the factor. One way to counter this error is to
think of "factoring out" the value of r from a row of the matrix. Keep this in mind
when reading the following example.

EXAMPLE 2
By Theorem 1,

| 1  3  4 |       | 1  3  4 |
| 5 10 15 |  =  5 | 1  2  3 |
| 2 -1  0 |       | 2 -1  0 |

and

det [ 1  3  4; -2 -2 -2; 2 -1  0 ] = (−2) det [ 1  3  4; 1  1  1; 2 -1  0 ]
EXERCISE 1 Let A be a 3 x 3 matrix and let r ∈ R. Use Theorem 1 to show that det(rA) = r^3 det A.
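Exercise 1 can be checked numerically before it is proved. The sketch below is ours; it multiplies one row of a 3 × 3 matrix by r and then all three rows, and compares determinants.

```python
def det3(a):
    """3x3 determinant by the cofactor formula."""
    return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
          - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
          + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

A = [[1, 3, 4], [5, 10, 15], [2, -1, 0]]
r = -2

# Multiplying ONE row by r multiplies the determinant by r (Theorem 1) ...
B = [A[0], [r * x for x in A[1]], A[2]]
assert det3(B) == r * det3(A)

# ... so multiplying ALL THREE rows by r multiplies it by r**3 (Exercise 1).
rA = [[r * x for x in row] for row in A]
assert det3(rA) == r ** 3 * det3(A)
```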

Next, we consider the effect of swapping two rows.


266 Chapter 5 Determinants

EXAMPLE 3
The following calculations illustrate that swapping rows causes a change of sign of the
determinant. We have

det [ c d; a b ] = cb − da = −(ad − bc) = −det [ a b; c d ]

Let A = [ a11 a12 a13; a21 a22 a23; a31 a32 a33 ] and let B be the matrix obtained
from A by swapping the first and third rows of A. We expand the determinant of B
along the second row (the row that has not been swapped):

det B = det [ a31 a32 a33; a21 a22 a23; a11 a12 a13 ]

= a21(−1)^(2+1) | a32 a33; a12 a13 | + a22(−1)^(2+2) | a31 a33; a11 a13 |
  + a23(−1)^(2+3) | a31 a32; a11 a12 |

= −( a21(−1)^(2+1) | a12 a13; a32 a33 | + a22(−1)^(2+2) | a11 a13; a31 a33 |
     + a23(−1)^(2+3) | a11 a12; a31 a32 | )

= −det A

since, by the 2 x 2 calculation above, each 2 x 2 determinant changes sign when its
two rows are swapped, and the expression in parentheses is the expansion of det A
along its second row.

Theorem 2  Suppose that A is an n x n matrix and that B is the matrix obtained from A by
swapping two rows. Then det B = −det A.

Proof: We use induction. We verified the case where n = 2 in Example 3.
Assume that the result holds for any (n − 1) x (n − 1) matrix and suppose that B is
an n x n matrix obtained from A by swapping two rows. If the i-th row of A was not
swapped, then the cofactors of the i-th row of B are determinants of (n − 1) x (n − 1)
submatrices. We can obtain these submatrices from those of the i-th row of A by
swapping the same two rows. Hence, by the inductive hypothesis, the cofactors C*ij of
B and Cij of A satisfy C*ij = −Cij. Hence,

det B = ai1C*i1 + ··· + ainC*in = ai1(−Ci1) + ··· + ain(−Cin)
      = −(ai1Ci1 + ··· + ainCin) = −det A
Section 5.2 Elementary Row Operations and the Determinant 267

Corollary 3 If two rows of A are equal, then detA = 0.

Proof: Let B be the matrix obtained from A by interchanging the two equal rows.
Obviously B =A, so detB = detA. But, by Theorem 2, detB= -detA, so detA =
-detA. This implies that detA = 0. •

Finally, we show that the third type of elementary row operation is particularly
useful as it does not change the determinant.

EXAMPLE 4
For any r ∈ R, we have

det [ a b; c + ra  d + rb ] = a(d + rb) − b(c + ra) = ad + arb − bc − arb = ad − bc
                            = det [ a b; c d ]

Let A = [ a11 a12 a13; a21 a22 a23; a31 a32 a33 ] and let B be the matrix obtained
from A by adding r times row 2 to row 3. Expanding the determinant of B along the
first row and using the result above gives

det B = det [ a11 a12 a13; a21 a22 a23; a31 + ra21  a32 + ra22  a33 + ra23 ]

= a11(−1)^(1+1) | a22 a23; a32 + ra22  a33 + ra23 |
  + a12(−1)^(1+2) | a21 a23; a31 + ra21  a33 + ra23 |
  + a13(−1)^(1+3) | a21 a22; a31 + ra21  a32 + ra22 |

= a11 | a22 a23; a32 a33 | − a12 | a21 a23; a31 a33 | + a13 | a21 a22; a31 a32 |

= det A

since, by the 2 x 2 calculation above, adding r times the first row of each of these
2 x 2 determinants to its second row does not change its value.

Theorem 4 Suppose that A is an n x n matrix and that B is obtained from A by adding r times
the i-th row of A to the k-th row. Then detB=detA.

Proof: We use induction. We verified the case where n = 2 in Example 4.
Assume that the result holds for any (n − 1) x (n − 1) matrix and suppose that B is
the n x n matrix obtained from A by adding r times the i-th row of A to the k-th row.
Then the cofactors of any other row, say the l-th row, of B are determinants of
(n − 1) x (n − 1) submatrices. We can obtain these submatrices from those of the
l-th row of A by adding r times the i-th row to the k-th row. Hence, by the inductive
268 Chapter 5 Determinants
hypothesis, the cofactors C*lj of B and Clj of A are equal. Hence,

det B = al1C*l1 + ··· + alnC*ln = al1Cl1 + ··· + alnCln = det A

Theorem 1, Theorem 2, Theorem 4, and Theorem 5.1.3 suggest that an effective
strategy for evaluating the determinant of a matrix is to row reduce the matrix to
upper-triangular form. For n > 3, it can be shown that, in general, this strategy
requires fewer arithmetic operations than straight cofactor expansion. The following
example illustrates this strategy.

EXAMPLE 5
Let A = [ 1 3 1 5; 1 3 -3 -3; 0 3 1 0; 1 6 2 11 ]. Find det A.

Solution: By Theorem 4, performing the row operations R2 − R1 and R4 − R1 does not
change the determinant, so

det A = | 1 3  1  5 |
        | 0 0 -4 -8 |
        | 0 3  1  0 |
        | 0 3  1  6 |

To get the matrix into upper-triangular form, we now swap row 2 and row 3. By
Theorem 2, this gives

det A = (−1) | 1 3  1  5 |
             | 0 3  1  0 |
             | 0 0 -4 -8 |
             | 0 3  1  6 |

By Theorem 1, factoring out the common factor of (−4) from the third row gives

det A = (−1)(−4) | 1 3 1 5 |
                 | 0 3 1 0 |
                 | 0 0 1 2 |
                 | 0 3 1 6 |

Finally, by Theorem 4, we perform the row operation R4 − R2 to get

det A = 4 | 1 3 1 5 |
          | 0 3 1 0 |
          | 0 0 1 2 |
          | 0 0 0 6 |  = 4(1)(3)(1)(6) = 72
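The strategy of Example 5 — row reduce to upper-triangular form while tracking row swaps — is essentially how determinants are computed in practice. The sketch below is ours; it tracks only the swap sign, since row additions leave the determinant unchanged, and it never scales rows, so no factors need to be collected.

```python
def det_by_elimination(a):
    """Determinant via row reduction to upper-triangular form: row additions
    (Theorem 4) preserve the determinant, each swap (Theorem 2) flips its sign,
    and the triangular result is the product of the diagonal (Theorem 5.1.3)."""
    m = [row[:] for row in a]          # work on a copy
    n = len(m)
    sign = 1
    for col in range(n):
        # find a pivot at or below the diagonal
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return 0                   # no pivot in this column -> det is 0
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            sign = -sign
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            m[r] = [x - factor * y for x, y in zip(m[r], m[col])]
    prod = sign
    for i in range(n):
        prod *= m[i][i]
    return prod

# The 4x4 matrix of Example 5 (as we read its entries); det A = 72.
A = [[1, 3, 1, 5], [1, 3, -3, -3], [0, 3, 1, 0], [1, 6, 2, 11]]
assert round(det_by_elimination(A)) == 72
```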
Section 5.2 Elementary Row Operations and the Determinant 269

EXERCISE 2 2 4 -2 6
-6 -6 -2 5
LetA = . Find detA.
1 3 -1
4 6 -2 5

In some cases, it may be appropriate to use some combination of row operations


and cofactor expansion. We demonstrate this in the following example.

EXAMPLE 6
Find the determinant of A = [ 1 5 6 7; 1 8 7 9; 1 5 6 10; 0 1 4 -2 ].

Solution: By Theorem 4,

det A = | 1 5 6  7 |
        | 0 3 1  2 |
        | 0 0 0  3 |
        | 0 1 4 -2 |

Expanding along the first column gives

det A = 1(−1)^(1+1) | 3 1  2; 0 0  3; 1 4 -2 | + 0 + 0 + 0

Expanding along the second row gives

det A = (1)(3)(−1)^(2+3) | 3 1; 1 4 | = (−3)(12 − 1) = −33

EXERCISE 3
Find the determinant of A = [ -6 -2 4 -5; 3 2 -4 3; -6 4 0 0; -3 2 -3 -4 ].
270 Chapter 5 Determinants

The Determinant and Invertibility


It follows from Theorem 1, Theorem 2, and Theorem 4 that there is a connection
between the determinant of a square matrix, its rank, and whether it is invertible.

Theorem 5  If A is an n x n matrix, then the following are equivalent:

(1) det A ≠ 0

(2) rank(A) = n

(3) A is invertible

Proof: In Theorem 3.5.4, we proved that (2) if and only if (3), so it is only necessary
to prove that (1) if and only if (2).
Notice that Theorem 1, Theorem 2, and Theorem 4 indicate that if det A ≠ 0, then
the matrices obtained from A by elementary row operations will also have non-zero
determinants. Every matrix is row equivalent to a matrix in reduced row echelon form;
this reduced row echelon form has a leading 1 in every entry on the main diagonal if
and only if the rank of the matrix is n. Hence, a given matrix A is of rank n if and only
if det A ≠ 0.

Remark

Theorem 5 shows that det A ≠ 0 is equivalent to all of the statements in Theorem 3.5.4
and Theorem 3.5.6.

We shall see how to use the determinant in calculating the inverse in the next
section. It is worth noting that Theorem 5 implies that "almost all" square matrices are
invertible; a square matrix fails to be invertible only if it satisfies the special condition
detA = 0.

Determinant of a Product
Often it is necessary to calculate the determinant of the product of two square matrices
A and B. When you remember that each entry of AB is the dot product of a row from
A and a column from B, and that the rule for calculating determinants is quite compli­
cated, you might expect a very complicated rule. The following theorem should be a
welcome surprise. But first we prove a useful lemma.

Lemma6 If E is an n x n elementary matrix and C is any n x n matrix, then

det(EC) = (det E)(det C)

Proof: Observe that if E is an elementary matrix, then since E is obtained by performing
a single row operation on the identity matrix, we get by Theorem 1, Theorem 2,
and Theorem 4 that det E = a, where a is 1, −1, or r, depending on the elementary
row operation used. Moreover, since EC is the matrix obtained by performing that row
operation on C, we get

det(EC) = a det C = (det E)(det C)
Section 5.2 Elementary Row Operations and the Determinant 271

Theorem 7 If A and B are n x n matrices, then det(AB) = (detA)(detB).

Proof: If det A = 0, then Ay = 0 has infinitely many solutions by Theorem 5 and
Theorem 3.5.4. If B is invertible, then y = Bx has a solution for every y ∈ R^n, and
hence there are infinitely many x such that ABx = 0, and so AB is not invertible. If B is
not invertible, then there are infinitely many x such that Bx = 0. Then for all such x we
get ABx = A0 = 0. So, AB is not invertible. Thus, if det A = 0, then AB is not invertible
and hence det(AB) = 0. Therefore, if det A = 0, then det(AB) = (det A)(det B).
On the other hand, if det A ≠ 0, then by Theorem 3.5.4 the RREF of A is I. Thus,
by Theorem 3.6.3, there exists a sequence of elementary matrices E1, ..., Ek such that
A = E1 ··· Ek. Hence AB = E1 ··· EkB and, by repeated use of Lemma 6, we get

det(AB) = det(E1 ··· EkB) = (det E1)(det E2) ··· (det Ek)(det B) = (det A)(det B)

EXAMPLE 7
Verify Theorem 7 for A = [ 3 0 1; 2 -1 4; 5 2 0 ] and B = [ -1 2 4; 2 11 20; -2 4 15 ].

Solution: By Theorem 4 we get

det A = |  3  0 1 |
        |-10 -1 0 |  = 1(−1)^(1+3) | -10 -1; 5 2 | = −20 − (−5) = −15
        |  5  2 0 |

det B = | -1  2  4 |
        |  0 15 28 |  = (−1)(15)(7) = −105
        |  0  0  7 |

So (det A)(det B) = (−15)(−105) = 1575. On the other hand,

AB = [ -5 10 27; -12 9 48; -1 32 60 ]

and expanding det AB along the first row gives

det AB = −5(540 − 1536) − 10(−720 + 48) + 27(−384 + 9)
       = 4980 + 6720 − 10125 = 1575

PROBLEMS 5.2
Practice Problems

Al Use row operations and triangular form to compute 2 0 -2 -6


the determinants of each of the following matrices. 2 -6 -4 -1
(d)
Show your work clearly. Decide whether each ma­ -3 -4 5 3

[ � � �i
trix is invertible. -2 -1 -3 2

A3 Use row operations to compute the determinant


(a) A=
of each of the following matrices. In each case,

[� � �1
-1 3 2
determine all values of p such that the matrix is

[! : =il
invertible.
(b) A=
1 1 1 (a)A=
5 2 -1
2 -1 2 3 p
(c) A=
3 2 1 4 0 1 2 1
-2 0 3 5 (b)A=
0 1 7 6
1 3 0 1 0
-2 -2 -4 -1 1 1 1
(d)A=
2 2 8 3 2 3 4
1 1 7 (c) A=
3 4 9 16
5 10 5 -5 8 27 p
3 5 7
(e) A= A4 Verify Theorem 7 if
2 6 3
-1 7 (a)A= [� -n B=
[ � �]
[- � � � l
-
A2 Use a combination of row operations and cofactor
expansion to evaluate the following determinants. (b) A = - ·

1 -1 2 3 0 2

H � �]
(a) 1 -2
2 3
B=
2 4 2
(b) 4 2 1
A5 Suppose that A is an n x n matrix.
-2 2 2
(a) Determine det(rA).
1 2 2
(b) If A is invertible, show that det A^{-1} = 1/(det A).
2 4 1 5
(c) If A^3 = I, prove that A is invertible.
3 6 5 9
3 4 3

Homework Problems

Bl Use row operations and triangular form to compute


the determinants of each of the following matri­
ces. Show your work clearly. Decide whether each
(a) A=
[� -l 1]
matrix is invertible.

(b) A= [ ; -� -�1 -4 4
(d)
2
3
1
3
1
2
2
4
4
-2
1
4

[; -� �1
-3
2 -1 5 6

(c) A= B3 Use row operations to compute the determinant


3 5 -6 of each of the following matrices. In each case,
7 1 -1 1 determine all values of p such that the matrix is

-4

[! ��]
invertible.
4
3 3 5
(d) A=

O
3 2
1 1 -1 1 (a) A=

[� i �l
1 10 7 -9
7 -7 7 7
(e) A=
2 -2 6 2 (b) A=

4
4
-3 -3 1
1 3 1 2 5 2

(f) A=
0 1 -1
4
-2 2
(c) A=
1 0 1 2

-4
2 1 -1
5 8 -3 0 p

B2 Use a combination of row operations and cofactor


4
1 1 1
2 8
expansion to evaluate the following determinants. (d) A=

4
3 9 27
1 3 3
16 p
(a)
1 4
1 -2 2
- 5

-4
2
(b) -1 3
-3 5

1 1 -2
1 3 -1 -1
(c)
2 2 -2 7
1 0 2

Conceptual Problems

D1 A square matrix A is skew-symmetric if A^T = -A. If A is an n x n skew-symmetric matrix, with n odd, prove that det A = 0.

D2 Assume that A and B are n x n matrices. If det AB = 0, is it necessarily true that det A = 0 or det B = 0? Prove it or give a counterexample.

D3 Suppose that A is a 3 x 3 matrix and that det A = 0. What can you say about the rows of A? Argue that the range of the matrix mapping L(x) = Ax cannot be all of R^3 and that its nullspace cannot be trivial.

D4 Let A be an n x n matrix. Prove that if P is any n x n invertible matrix, then det P^{-1}AP = det A.

D5 (a) Prove that

det [a+p b+q c+r; d e f; g h k] = det [a b c; d e f; g h k] + det [p q r; d e f; g h k]

(b) Use part (a) to express det [a+p b+q c+r; d+x e+y f+z; g h k] as the sum of determinants of matrices whose entries are not sums.

D6 Prove that

det [a+b p+q u+v; b+c q+r v+w; c+a r+p w+u] = 2 det [a p u; b q v; c r w]

D7 Prove that det [: 1+a (1+a)^2

D8 (a) Prove that

det [a11 a12 r·a13; a21 a22 r·a23; a31 a32 r·a33] = r det [a11 a12 a13; a21 a22 a23; a31 a32 a33]

(b) Let A be an n x n matrix and B be the matrix obtained from A by multiplying the i-th column of A by a factor of r. Prove that det B = r det A. (Hint: How are the matrices A^T and B^T related?)

(c) Let A be an n x n matrix and B be the matrix obtained from A by adding a multiple of one column to another. Prove that det A = det B.

5.3 Matrix Inverse by Cofactors and Cramer's Rule
We will now see that the determinant provides an alternative method of calculating the
inverse of a square matrix. Determinant calculations are generally much longer than
row reduction. Therefore, this method of finding the inverse is not as efficient as the
method based on row reduction. However, it is useful in some theoretical applications
because it provides a formula for A-1 in terms of the entries of A.
This method is based on a simple calculation that makes use of the following
theorem.

Theorem 1 [False Expansion Theorem]	If A is an n x n matrix and i ≠ k, then

a_i1 C_k1 + a_i2 C_k2 + · · · + a_in C_kn = 0

Proof: Let B be the matrix obtained from A by replacing (not swapping) the k-th row of A by the i-th row of A. Then the i-th row of B is identical to the k-th row of B, hence det B = 0 by Corollary 5.2.3. Since the cofactors C_kj of B are equal to the cofactors C_kj of A, and the coefficients b_kj of the k-th row of B are equal to the coefficients a_ij of the i-th row of A, we get

0 = det B = b_k1 C_k1 + · · · + b_kn C_kn = a_i1 C_k1 + · · · + a_in C_kn

as required. ∎
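The theorem is mechanical enough to check directly: pairing the entries of row i with the cofactors of a different row k gives 0, while pairing row i with its own cofactors gives det A. A small Python sketch (the test matrix and helper names are our own):

```python
# False Expansion Theorem check: sum_j A[i][j] * C_kj is det A when
# i == k (ordinary cofactor expansion) and 0 when i != k.

def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

def cofactor(M, i, j):
    """C_ij: signed determinant of M with row i and column j deleted."""
    minor = [row[:j] + row[j+1:] for r, row in enumerate(M) if r != i]
    return (-1) ** (i + j) * det(minor)

A = [[2, 4, -1], [0, 3, 1], [6, -2, 5]]
for i in range(3):
    for k in range(3):
        expansion = sum(A[i][j] * cofactor(A, k, j) for j in range(3))
        assert expansion == (det(A) if i == k else 0)
```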

Definition
Cofactor Matrix	Let A be an n x n matrix. We define the cofactor matrix of A, denoted cof A, by

(cof A)_ij = C_ij

EXAMPLE 1	Let A = [2 4 -1; 0 3 1; 6 -2 5]. Determine cof A.

Solution: The nine cofactors of A are

C11 = (1) det [3 1; -2 5] = 17     C12 = (-1) det [0 1; 6 5] = 6      C13 = (1) det [0 3; 6 -2] = -18
C21 = (-1) det [4 -1; -2 5] = -18  C22 = (1) det [2 -1; 6 5] = 16     C23 = (-1) det [2 4; 6 -2] = 28
C31 = (1) det [4 -1; 3 1] = 7      C32 = (-1) det [2 -1; 0 1] = -2    C33 = (1) det [2 4; 0 3] = 6

Hence,

cof A = [17 6 -18; -18 16 28; 7 -2 6]

EXERCISE 1
Calculate the cofactor matrix of A = [-1 �n
Observe that the cofactors of the i-th row of A form the i-th row of cof A, so they form the i-th column of (cof A)^T. Thus, the dot product of the i-th row of A and the i-th column of (cof A)^T equals the determinant of A. Moreover, by the False Expansion Theorem, the dot product of the i-th row of A and the j-th column of (cof A)^T equals 0 if i ≠ j. Hence, if a_i^T represents the i-th row of A and c_j represents the j-th column of (cof A)^T, then

A(cof A)^T = [a_1^T; ...; a_n^T][c_1 · · · c_n] = [a_i · c_j] = [det A 0 · · · 0; 0 det A · · · 0; ...; 0 0 · · · det A] = (det A) I

where I is the identity matrix.
If det A ≠ 0, it follows that A((1/det A)(cof A)^T) = I, and, therefore,

A^{-1} = (1/det A)(cof A)^T

If det A = 0, then, by Theorem 5.2.4, A is not invertible. We shall refer to this method of finding the inverse as the cofactor method. (Some people refer to the transpose of the cofactor matrix as the adjugate matrix and therefore call this the adjugate method.)

EXAMPLE 2	Find the inverse of the matrix A = [2 4 -1; 0 3 1; 6 -2 5] by the cofactor method.

Solution: We first find that

det A = det [2 4 -1; 0 3 1; 6 -2 5] = det [2 4 -1; 0 3 1; 0 -14 8] = 2(24 + 14) = 76

Thus, A is invertible. Using the result of Example 1 gives

A^{-1} = (1/det A)(cof A)^T = (1/76) [17 -18 7; 6 16 -2; -18 28 6]

EXERCISE 2	Use the cofactor method to find the inverse of A = [-i � n
For 3 x 3 matrices, the cofactor method requires the evaluation of nine 2 x 2
determinants. This is manageable, but it is more work than would be required by the
row reduction method. Finding the inverse of a 4 x 4 matrix by using the cofactor
method would require the evaluation of sixteen 3 x 3 determinants; this method
becomes extremely unattractive.
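The cofactor method translates directly into code. The sketch below uses our own helper names and exact arithmetic with Python's fractions.Fraction; it builds A^{-1} = (1/det A)(cof A)^T and confirms that AA^{-1} = I for the matrix of Examples 1 and 2.

```python
from fractions import Fraction

def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

def inverse_by_cofactors(A):
    """A^{-1} = (1/det A)(cof A)^T; raises if A is not invertible."""
    n = len(A)
    d = Fraction(det(A))
    if d == 0:
        raise ValueError("matrix is not invertible")
    def cofactor(i, j):
        minor = [row[:j] + row[j+1:] for r, row in enumerate(A) if r != i]
        return (-1) ** (i + j) * det(minor)
    # entry (i, j) of (cof A)^T is the cofactor C_ji of A
    return [[cofactor(j, i) / d for j in range(n)] for i in range(n)]

A = [[2, 4, -1], [0, 3, 1], [6, -2, 5]]
Ainv = inverse_by_cofactors(A)
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
assert prod == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

Note the cost: for an n x n matrix this evaluates n^2 cofactors, each an (n-1) x (n-1) determinant, which is exactly why row reduction is preferred beyond small matrices.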

Cramer's Rule
Consider the system of n linear equations in n variables, Ax = b. If det A ≠ 0, so that A is invertible, then the solution may be written in the form

x = A^{-1}b = (1/det A)(cof A)^T b

that is,

[x_1; ...; x_i; ...; x_n] = (1/det A) [C_11 C_21 · · · C_n1; ...; C_1i C_2i · · · C_ni; ...; C_1n C_2n · · · C_nn][b_1; ...; b_i; ...; b_n]



Consider the value of the i-th component of x:

x_i = (1/det A)(b_1 C_1i + b_2 C_2i + · · · + b_n C_ni)

This is 1/(det A) multiplied by the dot product of the vector b with the i-th row of (cof A)^T. But the i-th row of (cof A)^T is the i-th column of the original cofactor matrix cof A. So x_i is the dot product of the vector b with the i-th column of cof A divided by det A.
Now, let N_i be the matrix obtained from A by replacing the i-th column of A by b. Then the cofactors of the i-th column of N_i will equal the cofactors of the i-th column of A, and hence we get

det N_i = b_1 C_1i + b_2 C_2i + · · · + b_n C_ni

Therefore, the i-th component of x in the solution of Ax = b is

x_i = det N_i / det A

This is called Cramer's Rule (or Method). We now demonstrate Cramer's Rule with a couple of examples.

EXAMPLE 3	Use Cramer's Rule to solve the system of equations.

x_1 + x_2 - x_3 = b_1
2x_1 + 4x_2 + 5x_3 = b_2
x_1 + x_2 + 2x_3 = b_3

Solution: The coefficient matrix is A = [1 1 -1; 2 4 5; 1 1 2], so

det A = det [1 1 -1; 2 4 5; 1 1 2] = det [1 1 -1; 0 2 7; 0 0 3] = (1)(2)(3) = 6

Hence, evaluating each det N_i by row operations,

x_1 = det N_1 / det A = (1/6) det [b_1 1 -1; b_2 4 5; b_3 1 2] = (3b_1 - 3b_2 + 9b_3)/6

x_2 = det N_2 / det A = (1/6) det [1 b_1 -1; 2 b_2 5; 1 b_3 2] = (b_1 + 3b_2 - 7b_3)/6

x_3 = det N_3 / det A = (1/6) det [1 1 b_1; 2 4 b_2; 1 1 b_3] = (-2b_1 + 2b_3)/6
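Cramer's Rule also translates directly into code. The sketch below (Python, exact arithmetic, helper names our own) computes x_i = det(N_i)/det(A) and, as a check, reproduces the formulas derived in Example 3 for a sample right-hand side.

```python
from fractions import Fraction

def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

def cramer(A, b):
    """Solve Ax = b by Cramer's Rule; assumes det A != 0."""
    d = Fraction(det(A))
    n = len(A)
    xs = []
    for i in range(n):
        # N_i: A with its i-th column replaced by b
        Ni = [row[:i] + [b[r]] + row[i+1:] for r, row in enumerate(A)]
        xs.append(Fraction(det(Ni)) / d)
    return xs

# The coefficient matrix of Example 3, with a sample right-hand side.
A = [[1, 1, -1], [2, 4, 5], [1, 1, 2]]
b1, b2, b3 = 1, 2, 3
x = cramer(A, [b1, b2, b3])
assert x[0] == Fraction(3*b1 - 3*b2 + 9*b3, 6)
assert x[1] == Fraction(b1 + 3*b2 - 7*b3, 6)
assert x[2] == Fraction(-2*b1 + 2*b3, 6)
```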

EXAMPLE 4	Use Cramer's Rule to solve the system of equations

-(2/3)x_1 + (3/5)x_2 = 1/5
(2/5)x_1 - (1/3)x_2 = 1/2

Solution: The coefficient matrix is A = [-2/3 3/5; 2/5 -1/3], so

det A = (-2/3)(-1/3) - (3/5)(2/5) = 2/9 - 6/25 = -4/225

Hence,

x_1 = (1/det A) det [1/5 3/5; 1/2 -1/3] = (-225/4)(-11/30) = 165/8

x_2 = (1/det A) det [-2/3 1/5; 2/5 1/2] = (-225/4)(-31/75) = 93/4

To solve a system of n equations in n variables by using Cramer's Rule would require the evaluation of the determinant of n + 1 matrices, each of which is n x n. Thus, solving a system by using Cramer's Rule requires far more calculation than elimination, so Cramer's Rule is not considered a computationally useful solution method. However, like the cofactor method above, Cramer's Rule is sometimes used to write a formula for the solution of a problem.

PROBLEMS 5.3
Practice Problems

[� i
Al Determine the inverse of each of the following ma­
trices by using the cofactor method. Verify your an­ (d) -
swer by using multiplication. -2 � �
2 -1

[-; -� �]·
-

(a)
[! l�] A2 Let A =

(b)
[; =�1 3 1
(a) Determine the cofactor matrix cof A.
1

U -� Il
(b) Calculate A(cof Af and determine detA
(c) and A-1•

A3 Use Cramer's Rule to solve the following systems.
(a) 2x_1 - 3x_2 = 6
    3x_1 + 5x_2 = 7
(b) 3x_1 + 3x_2 = 2
    2x_1 - 3x_2 = 5
(c) 7x_1 + x_2 - 4x_3 = 3
    -6x_1 - 4x_2 + x_3 = 0
    4x_1 - x_2 - 2x_3 = 6
(d) 2x_1 + 3x_2 - 5x_3 = 2
    3x_1 - x_2 + 2x_3 = 1
    5x_1 + 4x_2 - 6x_3 = 3

Homework Problems

Bl Determine the inverse of each of the following


matrices by using the cofactor method. Verify your B2 Let A= [ � � �1. -

[-� �]
answer by using multiplication. -2 1 4
(a) Determine the cofactor matrix cof A.
(a)
(b) Calculate A(cof A)7 and determine detA

(b) [-� �] and A-1•

B3 Use Cramer's Rule to solve the following systems.

H � -�1
(a) 2
x1 - x
7 2=3
(c)
x
5 1+x
4 2=-17

(d) -2 1 0
1
2
0
(b) 3x1+5x2=-2

X1 +x
3 2=-3

[� � -�]
3 -1 1
(c) x1 - 5x2 - x
2 3 =-2
(e) 2
x1 +x
3 3 =3
0 -7 2
4X1 +
X2 -X3 =1
-2 -2 -4 -4
3 0 2
(d) X1 +2X3 =-2
(f) -3 0 -3 -3 3x1 -x2+x
3 3 =5
2 -2 0 4
=0

Conceptual Problems

Dl Suppose that A = i11 [ · · · an] is an invertible 2 -1 0

matrix. 0 -1 3 2
n x n
D2 Let A = . Use the cofactor method
(a) Verify by Cramer's Rule that the system of 0 0 0

equations Ax=a1 has the unique solution 0 2 0 3


1 1
x=ej (the }-th standard basis vector). to calculate (A- )23 and (A- )42. (If you calcu­
(b) Explain the result of part (a) in terms of linear late more than these two entries of A-1, you have
transformations and/or matrix multiplication. missed the point.)

5.4 Area, Volume, and the Determinant

So far, we have been using the determinant of a matrix only to determine if an n x n matrix is invertible. In particular, we have only been interested in whether the determinant of a matrix is zero or non-zero. We will now see that we are sometimes interested in the specific value of the determinant, as it can have an important geometric interpretation.

Area and the Determinant

Let u = [u_1; u_2] and v = [v_1; v_2]. In Chapter 1, we saw that we could construct a parallelogram from these two vectors by making the vectors u and v adjacent sides and having u + v as the vertex of the parallelogram opposite the origin, as in Figure 5.4.1. This is called the parallelogram induced by u and v.

Figure 5.4.1	Parallelogram induced by u and v.

We will calculate the area of this parallelogram by calculating the area of the rectangle with sides of length u_1 + v_1 and u_2 + v_2 and subtracting the area inside the rectangle and outside the parallelogram, as indicated in Figure 5.4.2. This gives

Area(u, v) = Area of Rectangle - Area 1 - Area 2 - Area 3 - Area 4 - Area 5 - Area 6
           = (u_1 + v_1)(u_2 + v_2) - (1/2)v_1v_2 - u_2v_1 - (1/2)u_1u_2 - (1/2)v_1v_2 - u_2v_1 - (1/2)u_1u_2
           = u_1u_2 + u_1v_2 + u_2v_1 + v_1v_2 - v_1v_2 - 2u_2v_1 - u_1u_2
           = u_1v_2 - u_2v_1

We immediately recognize this as the determinant of the matrix [u_1 v_1; u_2 v_2] = [u v].

Remark

At this time, you might be tempted to say that the area of the parallelogram induced by any two vectors u and v equals the determinant of [u v]. However, this would be slightly incorrect, as we have made a hidden assumption in our calculation above. In particular, in our diagram we have drawn u and v as a right-handed system.

Figure 5.4.2	Area of the parallelogram induced by u and v.

EXERCISE 1	Show that if u = [u_1; u_2] and v = [v_1; v_2] form a left-handed system, then the area of the parallelogram induced by u and v equals |det [u_1 v_1; u_2 v_2]|.

We have now shown that the area of the parallelogram induced by u and v is

Area(u, v) = |det [u_1 v_1; u_2 v_2]|
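In code, the area formula is a one-liner; the sample vectors below are arbitrary choices of ours.

```python
# Area of the parallelogram induced by u and v in R^2:
# Area(u, v) = |det [u v]| = |u1*v2 - u2*v1|.

def area(u, v):
    """Area of the parallelogram with adjacent sides u and v."""
    return abs(u[0] * v[1] - u[1] * v[0])

assert area((2, 3), (-2, 2)) == 10   # right-handed pair
assert area((1, 1), (1, -2)) == 3    # left-handed pair: same formula
assert area((1, 2), (2, 4)) == 0     # parallel vectors: degenerate
```

The absolute value is what makes one formula cover both the right-handed and left-handed cases.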

EXAMPLE 1	Draw the parallelogram induced by the following vectors and determine its area.

(a) u = [2; 3], v = [-2; 2]      (b) u = [1; 1], v = [1; -2]

Solution: For (a), we have

Area(u, v) = |det [2 -2; 3 2]| = |2(2) - 3(-2)| = 10

For (b), we have

Area(u, v) = |det [1 1; 1 -2]| = |1(-2) - 1(1)| = |-3| = 3

Now suppose that the 2 x 2 matrix A is the standard matrix of a linear transformation L : R^2 → R^2. Then the images of u and v under L are L(u) = Au and L(v) = Av. Moreover, the area of the image parallelogram is

Area(Au, Av) = |det [Au Av]| = |det(A[u v])|

Hence, we get

Area(Au, Av) = |det(A[u v])| = |det A||det [u v]| = |det A| Area(u, v)     (5.1)

In words: the absolute value of the determinant of the standard matrix A of a linear transformation is the factor by which area is changed under the linear transformation L. The result is illustrated in Figure 5.4.3.

Figure 5.4.3	Under a linear transformation with matrix A, the area of a figure is changed by factor |det A|.
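Equation (5.1) can be spot-checked numerically; the matrix and vectors below are arbitrary sample choices.

```python
# Check of (5.1): a linear map with matrix A multiplies every
# parallelogram's area by |det A|.

def area(u, v):
    return abs(u[0] * v[1] - u[1] * v[0])

def apply(A, x):
    """Image of x under the linear map with standard matrix A."""
    return (A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1])

A = [[2, 1], [0, 3]]                       # det A = 6
u, v = (1, 2), (-1, 1)                     # Area(u, v) = 3
det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]
assert area(apply(A, u), apply(A, v)) == abs(det_A) * area(u, v)
```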

EXAMPLE2
Let A = [� �] and L be the linear mapping L(x) = Ax. Determine the image of

i1 = [ �] and v = [-�] under L and compute the area determined by the image vectors

in two ways.

EXAMPLE2 Solution: The image of each vector under L is


(continued)

Hence, the area determined by the image vectors is

Area (L(i1), L(v)) = l [� ;11 = - =


det 18 41 4

Or, using (5.1) gives

Area(L(i1), L(v)) =I Idet A Area (i1, v) = [ [� ;J[ [ [-� �JI= =


det det 2(2) 4

EXERCISE 2	Let A = [t 0; 0 1] be the standard matrix of the stretch S : R^2 → R^2 in the x_1 direction by a factor of t. Determine the image of the standard basis vectors e_1 and e_2 under S and compute the area determined by the image vectors in two ways. Illustrate with a sketch.
sketch.

The Determinant and Volume

Recall from Chapter 1 that if u, v, and w are vectors in R^3, then the volume of the parallelepiped induced by u, v, and w is

Volume(u, v, w) = |w · (u × v)|

Now observe that if u = [u_1; u_2; u_3], v = [v_1; v_2; v_3], and w = [w_1; w_2; w_3], then

|w · (u × v)| = |w_1(u_2v_3 - u_3v_2) - w_2(u_1v_3 - u_3v_1) + w_3(u_1v_2 - u_2v_1)| = |det [u v w]|

Hence, the volume of the parallelepiped induced by u, v, and w is

Volume(u, v, w) = |det [u v w]|
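The agreement between the scalar triple product and the 3 x 3 determinant can be verified directly; the vectors below are arbitrary sample choices.

```python
# Two routes to the volume of the parallelepiped induced by u, v, w:
# the scalar triple product |w . (u x v)| and |det [u v w]|.

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            -(u[0] * v[2] - u[2] * v[0]),
            u[0] * v[1] - u[1] * v[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def det3_cols(u, v, w):
    """Determinant of the 3x3 matrix with columns u, v, w."""
    M = [[u[i], v[i], w[i]] for i in range(3)]
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

u, v, w = (1, 0, 2), (0, 3, 1), (2, 1, 1)
assert abs(dot(w, cross(u, v))) == abs(det3_cols(u, v, w))
```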

EXAMPLE 3	For a 3 x 3 matrix A and vectors u, v, and w in R^3, calculate the volume of the parallelepiped induced by u, v, and w and the volume of the parallelepiped induced by Au, Av, and Aw.

Solution: The volume determined by u, v, and w is

Volume(u, v, w) = |det [u v w]| = |-7| = 7

The volume determined by Au, Av, and Aw is

Volume(Au, Av, Aw) = |det [Au Av Aw]| = |-385| = 385

Moreover, det A = -55, so

Volume(Au, Av, Aw) = |det A| Volume(u, v, w)

which coincides with the result for the 2 x 2 case.

In general, if v_1, ..., v_n are n vectors in R^n, then we say that they induce an n-dimensional parallelotope (the n-dimensional version of a parallelogram or parallelepiped). The n-volume of the parallelotope is

n-Volume(v_1, ..., v_n) = |det [v_1 · · · v_n]|

and if A is the standard matrix of a linear mapping L : R^n → R^n, then

n-Volume(Av_1, ..., Av_n) = |det A| n-Volume(v_1, ..., v_n)

PROBLEMS 5.4
Practice Problems

Al (a) Calculate the area of the parallelogram induced


[� �]
[; ] [�]
A2 Let A = be the standard matrix of the reflec­
by il= and v= in �2.
tion R �2 �2 over the line x2 = x1. Determine

[�] [-�]
: �

(b) Determine the image of il and v under the lin­


the image of il = and v = under R and
ear mapping with standard matrix
compute the area determined by the image vectors.

[� n
-
A=
A3 (a) Compute the volume of the parallelepiped in-
(c) Compute the determinant of A.
(d) Compute the area of the parallelogram induced
by the image vectors in two ways.
duced by ii= lH [=n jl = and w = m

(b) Compute the determinant ofA= [! -� �].


0 2 s
v3 =
0
2
3
,
and V4 =
1
0

(c) What is the volume of the image of the paral­ 0 5
lelepiped of part (a) under the linear mapping (b) Calculate the 4-volume of the image of this
with standard matrixA? parallelotope under the linear mapping with

A4 Repeat Problem A3 with vectors a= [H standard matrixA=


2
5
0
3
4
0
1
3
7
1
0
3
.

V= HJ m .w= .andA= [� -� -H A6 Let {i11, . .. 0 0 0


, V,,} be vectors in IR.n. Prove that the
n-volume of the parallelotope induced by i11, Vn • • • ,
is the same as the volume of the parallelotope in­
AS (a) Calculate the 4-volume of the 4-dimensional
1 0 duced by i11, ... , V,,_1, V n + tV 1.

parallelotope determined by i11 = � , i12 = 1


,
0 3

Homework Problems

Bl (a) Calculate the area of the parallelogram induced (c) What is the volume of the image of the paral­

by il = [�] and v = [� ] in JR.2 .


lelepiped of part (a) under the linear mapping
with standard matrixA?
(b) Determine the image of il and v under the lin­
ear mapping with standard matrix

A= [� n
B4 Repeat Problem B3 with vectors i1 = [_il
Hl· nl·
-
(c) Compute the determinant ofA.
(d) Compute the area of the parallelogram induced
by the image vectors in two ways.
v = w =
andA=
u -r H
[� � l
BS (a) Calculate the 4-volume of the 4-dimensional
B2 LetA = be the standard matrix of the shear 1 1

H : JR.2 � JR.2 in the x1 direction by a factor of t. parallelotope determined by i11 =


1
2
v 2=
1
1
, ,
Determine the image of the standard basis vectors
1 3
e1 and e2 under H and compute the area determined
by the image vectors.
2 0
V3 = , and V 4 = .
B3 (a) Compute the volume of the parallelepiped in- 3 5

dured by U= [-H [ HV= = [H


and W=
0 7
(b) Calculate the 4-volume of the image of this
parallelotope under the linear mapping with

(b) Compute the determinant ofA= [ � -� H


_
standard matrixA=
2
2
-1
3 1
3
3 7
1
0

0 2 1

Conceptual Problems

D1 Let {v_1, ..., v_n} be vectors in R^n. Prove that the n-volume of the parallelotope induced by v_1, ..., v_n is half the volume of the parallelotope induced by 2v_1, v_2, ..., v_n.

D2 Suppose that L, M : R^3 → R^3 are linear mappings with standard matrices A and B, respectively. Prove that the factor by which a volume is multiplied under the composite map M ∘ L is |det BA|.

CHAPTER REVIEW
Suggestions for Student Review

Try to answer all of these questions before checking answers at the suggested locations. In particular, try to invent your own examples. These review suggestions are intended to help you carry out your review. They may not cover every idea you need to master. Working in small groups may improve your efficiency.

1 Define cofactor and explain cofactor expansion. Be especially careful about signs. (Section 5.1)

2 State as many facts as you can that simplify the evaluation of determinants. For each fact, explain why it is true. (Sections 5.1 and 5.2)

3 Explain and justify the cofactor method for finding a matrix inverse. Write down a 3 x 3 matrix A and calculate A(cof A)^T. (Section 5.3)

4 How and why are determinants connected to volumes? (Section 5.4)

Chapter Quiz

[� !]
El By cofactor expansion along some column, evaluate E4 Determine all values of k such that the matrix
-2 4 0 0

det
1 -2 2 9 � is invertible.
-3 6 0 3· 2 -4 1
-1 0 0 ES Suppose that A is a 5 x 5 matrix and det A = 7.
E2 By row reducing to upper-triangular form, evaluate (a) If B is obtained from A by multiplying the
3 2 7 -8 fourth row of A by 3, what is det B?
-6 - 1 -9 20 (b) If C is obtained from A by moving the first row
det
3 8 21 -17 · to the bottom and moving all other rows up,
3 5 12 what is det C? Justify.
(c) What is det(2A)?
0 2 0 0 0
(d) What is det(A-1)?
0 0 0 3 0
(e) What is det(AT A)?

[ � � �]
E3 Evaluate det 0 0 0 0 1 .
0 0 4 0 0
5 0 0 0 6 E6 Let A =
·Determine (A-1) 31 by using
-2 0 2
the cofactor method.

E7 Determine x by using Cramer's Rule if


2

2x1 + 3x + x3 = 1
2
(b) If A = [� � _;],
0 0 -4
what is the volume of the

X1 + Xz - X3 -1 =
parallelepiped induced by Ail, Av, and Aw?
-2x1 + 2x3 = 1

E8 (a) What is the volume of the parallelepiped in-

duced by U = [ J [-H
' = and W= [!]1
Further Problems

These exercises are intended to be challenging. They may not be of interest to all students.

F1 Suppose that A is an n x n matrix with all row sums equal to zero. (That is, Σ_{j=1}^{n} a_ij = 0 for 1 ≤ i ≤ n.) Prove that det A = 0.

F2 Suppose that A and A^{-1} both have all integer entries. Prove that det A = ±1.

F3 Consider a triangle in the plane with side lengths a, b, and c. Let the angles opposite the sides with lengths a, b, and c be denoted by A, B, and C, respectively. By using trigonometry, show that

c = b cos A + a cos B

Write similar equations for the other two sides. Use Cramer's Rule to show that

cos A = (b^2 + c^2 - a^2) / (2bc)

F4 (a) Let V_3(a, b, c) = det [1 a a^2; 1 b b^2; 1 c c^2]. Without expanding, argue that (a - b), (b - c), and (c - a) are all factors of V_3(a, b, c). By considering the cofactor of c^2, argue that

V_3(a, b, c) = (c - a)(c - b)(b - a)

(b) Let V_4(a, b, c, d) = det [1 a a^2 a^3; 1 b b^2 b^3; 1 c c^2 c^3; 1 d d^2 d^3]. By using arguments similar to those in part (a) (and without expanding the determinant), argue that

V_4(a, b, c, d) = (d - a)(d - b)(d - c) V_3(a, b, c)

F5 Suppose that A is a 4 x 4 matrix partitioned into 2 x 2 blocks:

A = [A_1 A_2; A_3 A_4]

(a) If A_3 = O_{2,2} (the 2 x 2 zero matrix), show that det A = det A_1 det A_4.
(b) Give an example to show that, in general, det A ≠ det A_1 det A_4 - det A_2 det A_3.

F6 Suppose that A is a 3 x 3 matrix and B is a 2 x 2 matrix.

(a) Show that det [A O_{3,2}; O_{2,3} B] = det A det B.
(b) What is det [O_{3,2} A; B O_{2,3}]?

MyMathlab Go to MyMathLab at www.mymathlab.com. You can practise many of this chapter's exercises as often as you
want. The guided solutions help you find an answer step by step. You'll find a personalized study plan available to
you, too!
CHAPTER 6

Eigenvectors and
Diagonalization
CHAPTER OUTLINE

6.1 Eigenvalues and Eigenvectors


6.2 Diagonalization
6.3 Powers of Matrices and the Markov Process
6.4 Diagonalization and Differential Equations

An eigenvector is a special or preferred vector of a linear transformation that is mapped by the linear transformation to a multiple of itself. Eigenvectors play an important role in many applications in the natural and physical sciences.

6.1 Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors of a Mapping

Definition
Eigenvector
Eigenvalue	Suppose that L : R^n → R^n is a linear transformation. A non-zero vector v ∈ R^n such that L(v) = λv is called an eigenvector of L; the scalar λ is called an eigenvalue of L. The pair λ, v is called an eigenpair.

Remarks

1. The pairing of eigenvalues and eigenvectors is not one-to-one. In particular, we will see that each eigenvector of L corresponds to exactly one eigenvalue, while each eigenvalue corresponds to infinitely many eigenvectors.

2. We have restricted our definition of eigenvectors (and hence eigenvalues) to be


real. In Chapter 9 we will consider the case where we allow eigenvalues and
eigenvectors to be complex.

The restriction that an eigenvector v be non-zero is natural and important. It is natural because L(0) = 0 for every linear transformation, so it is completely uninteresting to consider 0 as an eigenvector. It is important because many of the applications involving eigenvectors make sense only for non-zero vectors. In particular, we will see that we often want to look for a basis of R^n that contains eigenvectors of a linear transformation.

EXAMPLE 1	Let A = [17 -15; 20 -18] and let L : R^2 → R^2 be the linear transformation defined by L(x) = Ax. Determine which of the following vectors are eigenvectors of L and give the corresponding eigenvalues:

[1; 1],  [-2; -2],  [1; 3],  and  [3; 4].

Solution: To test whether [1; 1] is an eigenvector, we calculate L(1, 1):

L(1, 1) = [17 -15; 20 -18][1; 1] = [2; 2] = 2[1; 1]

So, [1; 1] is an eigenvector of L with eigenvalue 2.
Since [-2; -2] = (-2)[1; 1] and L is linear,

L(-2, -2) = (-2)L(1, 1) = (-2)(2)[1; 1] = 2[-2; -2]

So, [-2; -2] is also an eigenvector of L with eigenvalue 2. In fact, by a similar argument, any non-zero multiple of [1; 1] is an eigenvector of L with eigenvalue 2.
Next,

L(1, 3) = [17 -15; 20 -18][1; 3] = [-28; -34] ≠ λ[1; 3]

for any real number λ, so [1; 3] is not an eigenvector of L.
Finally,

L(3, 4) = [17 -15; 20 -18][3; 4] = [-9; -12] = -3[3; 4]

So, [3; 4] (or any non-zero multiple of it) is an eigenvector of L with eigenvalue -3.
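The membership test used in Example 1 is mechanical: compute Av, read off the candidate ratio from one non-zero component, and then check every component. A Python sketch (the helper name is our own):

```python
# Test whether v is an eigenvector of A, and if so return its eigenvalue.

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def eigenvalue_of(A, v):
    """Return lam with A v = lam v, or None if v is not an eigenvector."""
    Av = matvec(A, v)
    # read the candidate ratio off a non-zero component of v
    i = next(k for k, x in enumerate(v) if x != 0)
    lam = Av[i] / v[i]
    return lam if all(Av[j] == lam * v[j] for j in range(len(v))) else None

A = [[17, -15], [20, -18]]
assert eigenvalue_of(A, [1, 1]) == 2
assert eigenvalue_of(A, [-2, -2]) == 2
assert eigenvalue_of(A, [3, 4]) == -3
assert eigenvalue_of(A, [1, 3]) is None
```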

EXAMPLE 2	Eigenvectors and Eigenvalues of Projections and Reflections in R^3

1. Since proj_n(n) = 1n, n is an eigenvector of proj_n with corresponding eigenvalue 1. If v is orthogonal to n, then proj_n(v) = 0 = 0v, so v is an eigenvector of proj_n with corresponding eigenvalue 0. Observe that this means there is a whole plane of eigenvectors corresponding to the eigenvalue 0, as the set of vectors orthogonal to n is a plane in R^3. For an arbitrary vector u ∈ R^3, proj_n(u) is a multiple of n, so u is definitely not an eigenvector of proj_n unless it is a multiple of n or orthogonal to n.

EXAMPLE 2
(continued)	2. On the other hand, perp_n(n) = 0 = 0n, so n is an eigenvector of perp_n with eigenvalue 0. Since perp_n(v) = 1v for any v orthogonal to n, such a v is an eigenvector of perp_n with eigenvalue 1.

3. We have refl_n(n) = -1n, so n is an eigenvector of refl_n with eigenvalue -1. For v orthogonal to n, refl_n(v) = 1v. Hence, such a v is an eigenvector of refl_n with eigenvalue 1.

EXAMPLE 3	Eigenvectors and Eigenvalues of Rotations in R^2

Consider the rotation R_θ : R^2 → R^2 with matrix

[cos θ  -sin θ; sin θ  cos θ]

where θ is not an integer multiple of π. By geometry, it is clear that there is no non-zero vector v in R^2 such that R_θ(v) = λv for some real number λ. This linear transformation has no real eigenvalues or real eigenvectors. In Chapter 9 we will see that it does have complex eigenvalues and complex eigenvectors.

EXERCISE 1	Let R_θ : R^3 → R^3 denote the rotation in R^3 with matrix

[cos θ  -sin θ  0; sin θ  cos θ  0; 0  0  1]

Determine any real eigenvectors of R_θ and the corresponding eigenvalues.

Eigenvalues and Eigenvectors of a Matrix


The geometric meaning of eigenvectors is much clearer when we think of them as
belonging to linear transformations. However, in many applications of these ideas, it
is a matrix A that is given. Thus, we also speak of the eigenvalues and eigenvectors of
the matrix A.

Definition
Eigenvector
Eigenvalue	Suppose that A is an n x n matrix. A non-zero vector v ∈ R^n such that Av = λv is called an eigenvector of A; the scalar λ is called an eigenvalue of A. The pair λ, v is called an eigenpair.

In Example 1, we saw that [1; 1] and [3; 4] are eigenvectors of the matrix A = [17 -15; 20 -18] with eigenvalues 2 and -3, respectively.

Finding Eigenvectors and Eigenvalues

If eigenvectors and eigenvalues are going to be of any use, we need a systematic method for finding them. Suppose that a square matrix A is given; then a non-zero vector v ∈ R^n is an eigenvector if and only if Av = λv. This condition can be rewritten as

Av - λv = 0

It is tempting to write this as (A - λ)v = 0, but this would be incorrect because A is a matrix and λ is a number, so their difference is not defined. To get around this, we

use the fact that v= IV, where I is the appropriately sized identity matrix. Then the
eigenvector condition can be rewritten

(A - ,U)v= 0
The eigenvector vis thus any non-trivial solution (since it cannot be the zero vector)
of the homogeneous system of linear equations with coefficient matrix (A -tl/). By
the Invertible Matrix Theorem, we know that a homogeneous system of n equations in
n variables has non-trivial solutions if and only if it has a determinant equal to 0.
Hence, for tl to be an eigenvalue, we must have det(A-tl/) = 0. This is the key result in
the procedure for finding the eigenvalues and eigenvectors, so it is worth summarizing
as a theorem.

Theorem 1	Suppose that A is an n x n matrix. A real number λ is an eigenvalue of A if and only if λ satisfies the equation

det(A - λI) = 0

If λ is an eigenvalue of A, then all non-trivial solutions of the homogeneous system

(A - λI)v = 0

are eigenvectors of A that correspond to λ.

Observe that the set of all eigenvectors corresponding to an eigenvalue λ is just the nullspace of A - λI, excluding the zero vector. In particular, the set containing all eigenvectors corresponding to λ and the zero vector is a subspace of R^n. We make the following definition.

Definition
Eigenspace	Let λ be an eigenvalue of an n x n matrix A. Then the set containing the zero vector and all eigenvectors of A corresponding to λ is called the eigenspace of λ.

Remark

From our work preceding the theorem, we see that the eigenspace of any eigenvalue λ must contain at least one non-zero vector. Hence, the dimension of the eigenspace must be at least 1.

EXAMPLE 4	Find the eigenvalues and eigenvectors of the matrix A = [17 -15; 20 -18] of Example 1.

Solution: We have

A - λI = [17 -15; 20 -18] - λ[1 0; 0 1] = [17-λ  -15; 20  -18-λ]

(You should set up your calculations like this: you will need A - λI later when you find the eigenvectors.) Then

det(A - λI) = det [17-λ  -15; 20  -18-λ]
            = (17 - λ)(-18 - λ) - (-15)(20)
            = λ^2 + λ - 6 = (λ + 3)(λ - 2)

so det(A - λI) = 0 when λ = -3 or λ = 2. These are all of the eigenvalues of A.

EXAMPLE 4
(continued)	To find all the eigenvectors of λ = -3, we solve the homogeneous system (A - λI)v = 0. Writing A - λI and row reducing gives

A - (-3)I = [20 -15; 20 -15] → [1 -3/4; 0 0]

so that the general solution of (A - λI)v = 0 is v = t[3/4; 1], t ∈ R. Thus, all eigenvectors of A corresponding to λ = -3 are v = t[3/4; 1] for any non-zero value of t, and the eigenspace for λ = -3 is Span{[3/4; 1]}.
We repeat the process for the eigenvalue λ = 2:

A - 2I = [15 -15; 20 -20] → [1 -1; 0 0]

The general solution of (A - λI)v = 0 is v = t[1; 1], t ∈ R, so the eigenspace for λ = 2 is Span{[1; 1]}. In particular, the eigenvectors of A corresponding to λ = 2 are all non-zero multiples of [1; 1].
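A quick check of Example 4's conclusion: every non-zero multiple of each eigenspace's basis vector satisfies Av = λv. (Python sketch, exact arithmetic via Fraction; names are our own.)

```python
from fractions import Fraction

A = [[17, -15], [20, -18]]

def matvec(M, v):
    return [sum(a * x for a, x in zip(row, v)) for row in M]

# eigenpairs found in Example 4: lambda = -3 with (3/4, 1), lambda = 2 with (1, 1)
for lam, basis in [(-3, (Fraction(3, 4), 1)), (2, (1, 1))]:
    for t in (1, -2, 5):                  # a few non-zero scalar multiples
        v = [t * x for x in basis]
        assert matvec(A, v) == [lam * x for x in v]
```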
Observe in Example 4 that det(A - λI) gave us a degree 2 polynomial. This motivates the following definition.

Definition
Characteristic Polynomial	Let A be an n x n matrix. Then C(λ) = det(A - λI) is called the characteristic polynomial of A.

For an n x n matrix A, the characteristic polynomial C(λ) is of degree n, and the roots of C(λ) are the eigenvalues of A. Note that the term of highest degree, λ^n, has coefficient (-1)^n; some other books prefer to work with the polynomial det(λI - A) so that the coefficient of λ^n is always 1. In our notation, the constant term in the characteristic polynomial is det A (see Problem 6.2.D7).

It is relevant here to recall some facts about the roots of an n-th degree polynomial:

(1) λ₁ is a root of C(λ) if and only if (λ - λ₁) is a factor of C(λ).

(2) The total number of roots (real and complex, counting repetitions) is n.

(3) Complex roots of the equation occur in "conjugate pairs," so that the total number of complex roots must be even.

(4) If n is odd, there must be at least one real root.

(5) If the entries of A are integers, since the leading coefficient of the characteristic polynomial is ±1, any rational root must in fact be an integer.
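These facts can be explored numerically. The sketch below uses NumPy's `np.poly`, which returns the coefficients of the monic polynomial det(λI - A); the relation to the text's C(λ) = det(A - λI) is a sign factor (-1)ⁿ:

```python
import numpy as np

A = np.array([[17.0, -15.0],
              [20.0, -18.0]])
n = A.shape[0]

# np.poly(A) gives the coefficients, highest degree first, of the
# monic polynomial det(lambda*I - A) = (-1)^n C(lambda).
coeffs = np.poly(A)
print(coeffs)                       # close to [1, 1, -6] for this A

# The constant term of C(lambda) is C(0) = det A, as noted above.
C_constant = (-1) ** n * coeffs[-1]
assert np.isclose(C_constant, np.linalg.det(A))
```

For this matrix the roots of λ² + λ - 6 are the integer eigenvalues -3 and 2, consistent with fact (5).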
294 Chapter 6 Eigenvectors and Diagonalization

EXAMPLE 5
Find the eigenvalues and eigenvectors of A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}.

Solution: The characteristic polynomial is

C(λ) = det(A - λI) = (1-λ)(1-λ) = (λ - 1)²

So, λ = 1 is a double root (that is, (λ - 1) appears as a factor of C(λ) twice), so λ = 1 is the only eigenvalue of A.
For λ = 1, we have

A - λI = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}

which has the general solution v = t \begin{bmatrix} 1 \\ 0 \end{bmatrix}, t ∈ ℝ. Thus, the eigenspace for λ = 1 is Span\left\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right\}.
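A quick numerical check of Example 5 (a NumPy sketch, not part of the text): the eigenvalue 1 is repeated, yet every eigenvector the routine returns points along (1, 0), reflecting the one-dimensional eigenspace.

```python
import numpy as np

# Matrix from Example 5: a repeated eigenvalue with a small eigenspace.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

w, V = np.linalg.eig(A)
print(w)                            # the eigenvalue 1 appears twice

# Every eigenvector column is (nearly) parallel to (1, 0): the second
# component of each column is numerically zero.
print(V)
assert abs(V[1, 0]) < 1e-6 and abs(V[1, 1]) < 1e-6
```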

EXERCISE2
Find the eigenvalues and eigenvectors of A= [� �].

EXAMPLE 6
Find the eigenvalues and eigenvectors of A = \begin{bmatrix} -3 & 5 & -5 \\ -7 & 9 & -5 \\ -7 & 7 & -3 \end{bmatrix}.

Solution: We have

C(λ) = det(A - λI) = \begin{vmatrix} -3-\lambda & 5 & -5 \\ -7 & 9-\lambda & -5 \\ -7 & 7 & -3-\lambda \end{vmatrix}

Expanding this determinant along some row or column will involve a fair number of calculations. Also, we will end up with a degree 3 polynomial, which may not be easy to factor. But this is just a determinant, so we can use properties of determinants to make it easier. Since adding a multiple of one row to another does not change the determinant, subtracting row 2 from row 3 gives

C(λ) = \begin{vmatrix} -3-\lambda & 5 & -5 \\ -7 & 9-\lambda & -5 \\ 0 & -2+\lambda & 2-\lambda \end{vmatrix}

Expanding this along the bottom row gives

C(λ) = (-2+\lambda)(-1)\big((-3-\lambda)(-5) - (-5)(-7)\big) + (2-\lambda)\big((-3-\lambda)(9-\lambda) - 5(-7)\big)
     = (2-\lambda)\big((5\lambda + 15 - 35) + (\lambda^2 - 6\lambda - 27 + 35)\big)
     = -(\lambda - 2)(\lambda^2 - \lambda - 12) = -(\lambda - 2)(\lambda - 4)(\lambda + 3)

EXAMPLE 6 (continued)
Hence, the eigenvalues of A are λ₁ = 2, λ₂ = 4, and λ₃ = -3.

For λ₁ = 2,

A - λ₁I = \begin{bmatrix} -5 & 5 & -5 \\ -7 & 7 & -5 \\ -7 & 7 & -5 \end{bmatrix} \sim \begin{bmatrix} 1 & -1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}

Hence, the general solution of (A - λ₁I)v = 0 is v = t \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}. Thus, a basis for the eigenspace of λ₁ is \left\{ \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} \right\}.

For λ₂ = 4,

A - λ₂I = \begin{bmatrix} -7 & 5 & -5 \\ -7 & 5 & -5 \\ -7 & 7 & -7 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}

Hence, the general solution of (A - λ₂I)v = 0 is v = t \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}. Thus, a basis for the eigenspace of λ₂ is \left\{ \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} \right\}.

For λ₃ = -3,

A - λ₃I = \begin{bmatrix} 0 & 5 & -5 \\ -7 & 12 & -5 \\ -7 & 7 & 0 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}

Hence, the general solution of (A - λ₃I)v = 0 is v = t \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}. Thus, a basis for the eigenspace of λ₃ is \left\{ \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \right\}.


EXAMPLE 7
Find the eigenvalues and eigenvectors of A = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}.

Solution: Subtracting row 2 from row 3 (which does not change the determinant), we have

C(λ) = \begin{vmatrix} 1-\lambda & 1 & 1 \\ 1 & 1-\lambda & 1 \\ 1 & 1 & 1-\lambda \end{vmatrix} = \begin{vmatrix} 1-\lambda & 1 & 1 \\ 1 & 1-\lambda & 1 \\ 0 & \lambda & -\lambda \end{vmatrix}

Expanding along the bottom row gives

C(λ) = \lambda(-1)\big((1-\lambda) - 1(1)\big) + (-\lambda)\big((1-\lambda)(1-\lambda) - 1(1)\big)
     = -\lambda(-\lambda + \lambda^2 - 2\lambda) = -\lambda^2(\lambda - 3)

Therefore, the eigenvalues of A are λ₁ = 0 (which occurs twice) and λ₂ = 3.

For λ₁ = 0,

A - λ₁I = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix} \sim \begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}

Hence, a basis for the eigenspace of λ₁ = 0 is \left\{ \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \right\}.

For λ₂ = 3,

A - λ₂I = \begin{bmatrix} -2 & 1 & 1 \\ 1 & -2 & 1 \\ 1 & 1 & -2 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}

Thus, a basis for the eigenspace of λ₂ = 3 is \left\{ \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \right\}.
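Example 7 can likewise be verified numerically (a NumPy sketch, not part of the text):

```python
import numpy as np

# All-ones matrix from Example 7.
A = np.ones((3, 3))

w, V = np.linalg.eig(A)
print(np.sort(w))                   # eigenvalues 0, 0, 3 (up to rounding)

# The eigenvector for lambda = 3 is a multiple of (1, 1, 1).
v3 = V[:, np.argmax(w)]
assert np.allclose(v3 / v3[0], np.ones(3))
```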

These examples motivate the following definitions.

Definition (Algebraic Multiplicity, Geometric Multiplicity)
Let A be an n × n matrix with eigenvalue λ. The algebraic multiplicity of λ is the number of times λ is repeated as a root of the characteristic polynomial. The geometric multiplicity of λ is the dimension of the eigenspace of λ.

EXAMPLE 8
In Example 5, the eigenvalue λ = 1 has algebraic multiplicity 2, since the characteristic polynomial is (λ - 1)(λ - 1), and has geometric multiplicity 1, since a basis for its eigenspace is \left\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right\}.
In Example 6, each eigenvalue has algebraic and geometric multiplicity 1.
In Example 7, the eigenvalue 0 has algebraic and geometric multiplicity 2, and the eigenvalue 3 has algebraic and geometric multiplicity 1.

EXERCISE 3
Let [00 -20 22]. Show that and -2 are both eigenvalues of and
A =
5
-3
-4
/l.1 = 5 /l.2 = A

determine the algebraic and geometric multiplicity of both of these eigenvalues.


These definitions lead to some theorems that will be very important in the next section.

Theorem 2
Let λ be an eigenvalue of an n × n matrix A. Then

1 ≤ geometric multiplicity of λ ≤ algebraic multiplicity of λ
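Both multiplicities can be estimated numerically. The helper below is a hypothetical illustration, not a method from the text: algebraic multiplicity is approximated by counting computed eigenvalues near λ, and geometric multiplicity uses dim Null(A - λI) = n - rank(A - λI).

```python
import numpy as np

def multiplicities(A, lam, tol=1e-6):
    """(algebraic, geometric) multiplicity of the eigenvalue lam of A.
    Hypothetical helper: the tolerance-based root counting is a
    numerical approximation, not an exact computation."""
    alg = int(np.sum(np.abs(np.linalg.eigvals(A) - lam) < tol))
    geo = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(A.shape[0]))
    return alg, int(geo)

# Deficient eigenvalue from Example 5: 1 <= geometric < algebraic.
print(multiplicities(np.array([[1.0, 1.0], [0.0, 1.0]]), 1.0))   # (2, 1)
```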

If the geometric multiplicity of an eigenvalue is less than its algebraic multiplicity, then we say that the eigenvalue is deficient. However, if A is an n × n matrix with distinct eigenvalues λ₁, …, λ_k, which all have the property that their geometric multiplicity equals their algebraic multiplicity, then the sum of the geometric multiplicities of all eigenvalues equals the sum of the algebraic multiplicities, which equals n (since an n-th degree polynomial has exactly n roots). Hence, if we collect the basis vectors from the eigenspaces of all k eigenvalues, we will end up with n vectors in ℝⁿ. We now prove that eigenvectors from eigenspaces of different eigenvalues are necessarily linearly independent, and hence this collection of n eigenvectors will form a basis for ℝⁿ.

Theorem 3
Suppose that λ₁, …, λ_k are distinct (λᵢ ≠ λⱼ for i ≠ j) eigenvalues of an n × n matrix A, with corresponding eigenvectors v₁, …, v_k, respectively. Then {v₁, …, v_k} is linearly independent.

Proof: We will prove this theorem by induction. If k = 1, then the result is trivial, since by definition of an eigenvector, v₁ ≠ 0. Assume that the result is true for some k ≥ 1. To show {v₁, …, v_k, v_{k+1}} is linearly independent, we consider

c₁v₁ + ··· + c_k v_k + c_{k+1} v_{k+1} = 0    (6.1)

Observe that since Avᵢ = λᵢvᵢ, we have

(A - λ_{k+1}I)vᵢ = Avᵢ - λ_{k+1}vᵢ = (λᵢ - λ_{k+1})vᵢ

Thus, multiplying both sides of (6.1) by A - λ_{k+1}I gives

c₁(λ₁ - λ_{k+1})v₁ + ··· + c_k(λ_k - λ_{k+1})v_k = 0

By our induction hypothesis, {v₁, …, v_k} is linearly independent; thus, all the coefficients must be 0. But λᵢ ≠ λ_{k+1}; hence, we must have c₁ = ··· = c_k = 0. Thus (6.1) becomes

c_{k+1}v_{k+1} = 0

But v_{k+1} ≠ 0 since it is an eigenvector; hence, c_{k+1} = 0, and the set is linearly independent. ∎
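Theorem 3 can be illustrated numerically with the matrix of Example 6, whose three eigenvalues are distinct (a NumPy sketch):

```python
import numpy as np

# Matrix of Example 6: three distinct eigenvalues 2, 4, and -3.
A = np.array([[-3.0, 5.0, -5.0],
              [-7.0, 9.0, -5.0],
              [-7.0, 7.0, -3.0]])

w, V = np.linalg.eig(A)
print(np.sort(w))                   # close to -3, 2, 4

# Theorem 3: eigenvectors for distinct eigenvalues are linearly
# independent, so the matrix whose columns are the eigenvectors
# has full rank.
assert np.linalg.matrix_rank(V) == 3
```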

Remark

In this book, most eigenvalues turn out to be integers. This is somewhat unrealistic; in real-world applications, eigenvalues are often not rational numbers. Effective computer methods for finding eigenvalues depend on the theory of eigenvectors and eigenvalues.

PROBLEMS 6.1
Practice Problems

Al Let A = [= ��2 �3 �] .Determine whether the fol-


(e) [! �] (f) [36 --63]
-1 A3 For each of the following matrices, determine the
lowing vectors are eigenvectors of A. If they are, algebraic multiplicity of each eigenvalue and deter­
determine the corresponding eigenvalues. Answer mine the geometric multiplicity of each eigenvalue
without calculating the characteristic polynomial. by writing a basis for its eigenspace.

(a) [� -�J (b) [� -�J


(d) [-2-� =� 63 ]
[222 222 2221
-7 7
A2 Find the eigenvalues and corresponding eigenspaces
of the following matrices.

(a) [ � �] (b) [� ; ]
(e) (f) [; � 3�i
1 1

[-26 29]
-

(c) [� �] (d)
-75
10

Homework Problems
4

23
[ 53 1 [-2 236 61
4

Bl Let A = [-� -� -�1.6


8 -1
Determine whether the (e) 0
0
1

0
7 (f)
-

-1 1

following vectors are eigenvectors of A. If they are, B3 For each of the following matrices, determine the
determine the corresponding eigenvalues. Answer algebraic multiplicity of each eigenvalue and deter­
without calculating the characteristic polynomial. mine the geometric multiplicity of each eigenvalue

(a) (b) [[-; =�J


by writing a basis for its eigenspace.

[; �] 4

B2 Find the eigenvalues and corresponding eigenspaces


(c) [ � �] (d) ��
0

-� �
0
1
[ �2 -�]
of the following matrices.

!] [� � �]
(a)

[=� ;]
(e)

(t) [=� =; j]
(c)
_;]

Computer Problems

Cl Use a computer to determine the eigenvalues 2.89316 -1.28185 2.42918 ]


�] [[ ] [ ] [ ]
and corresponding eigenspaces of the following C2 Let A -0.70562 0.76414 -0.67401 .

matrices. 1.67682 -0.83198 2.34270


9

[�
- -
1.21 1.31 -1.85
(a) -5 Verify that -0.34 , 2.15 , and 0.67 are
-2 0.87 -0.21 2.10
-5

[l -i]
(approximately) eigenvectors of A. Determine the
(b) -2 corresponding eigenvalues.
-2
2 1 0 3
2 3 -4
(c)
4 2 2 4
4 2 2 4

Conceptual Problems

D1 Suppose that v is an eigenvector of both the matrix A and the matrix B, with corresponding eigenvalue λ for A and corresponding eigenvalue µ for B. Show that v is an eigenvector of (A + B) and of AB. Determine the corresponding eigenvalues.

D2 (a) Show that if λ is an eigenvalue of a matrix A, then λⁿ is an eigenvalue of Aⁿ. How are the corresponding eigenvectors related?
(b) Give an example of a 2 × 2 matrix A such that A has no real eigenvalues, but A³ does have real eigenvalues. (Hint: See Problem 3.3.D4.)

D3 Show that if A is invertible and v is an eigenvector of A, then v is also an eigenvector of A⁻¹. How are the corresponding eigenvalues related?

D4 (a) Let A be an n × n matrix with rank(A) = r < n. Prove that 0 is an eigenvalue of A and determine its geometric multiplicity.
(b) Give an example of a 3 × 3 matrix with rank(A) = r < n such that the algebraic multiplicity of the eigenvalue 0 is greater than its geometric multiplicity.

D5 Suppose that A is an n × n matrix such that the sum of the entries in each row is the same. That is, \sum_{k=1}^{n} a_{ik} = c for all 1 ≤ i ≤ n. Show that v = \begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix} is an eigenvector of A. (Such matrices arise in probability theory.)

6.2 Diagonalization

At the end of the last section, we showed that if the k distinct eigenvalues λ₁, …, λ_k of an n × n matrix A all had the property that their geometric multiplicity equalled their algebraic multiplicity, then we could find a basis for ℝⁿ of eigenvectors of A by collecting the basis vectors from the eigenspaces of each of the k eigenvalues. We now see that this basis of eigenvectors is extremely useful.
Suppose that A is an n × n matrix for which there is a basis {v₁, …, v_n} of eigenvectors of A. Let the corresponding eigenvalues be denoted λ₁, …, λ_n, respectively. If we let P = [v₁ ··· v_n], then we get

AP = A[v₁ ··· v_n] = [Av₁ ··· Av_n] = [\lambda_1 v_1 \;\cdots\; \lambda_n v_n] = [v₁ ··· v_n] \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & \lambda_n \end{bmatrix} = PD

Recall that a square matrix D such that d_{ij} = 0 for i ≠ j is said to be diagonal and can be denoted by diag(d₁₁, …, d_{nn}). Thus, using the fact that P is invertible since the columns of P form a basis for ℝⁿ, we can write AP = PD as

P⁻¹AP = D

Definition (Diagonalizable)
If there exists an invertible matrix P and a diagonal matrix D such that P⁻¹AP = D, then we say that A is diagonalizable (some people prefer "diagonable") and that the matrix P diagonalizes A to its diagonal form D.

It may be tempting to think that P⁻¹AP = D implies that A = D, since P and P⁻¹ are inverses. However, this is not true in general, since matrix multiplication is not commutative. Not surprisingly, though, if A and B are matrices such that P⁻¹AP = B for some invertible matrix P, then A and B have many similarities.

Theorem 1
If A and B are n × n matrices such that P⁻¹AP = B for some invertible matrix P, then A and B have

(1) The same determinant
(2) The same eigenvalues
(3) The same rank
(4) The same trace, where the trace of a matrix A is defined by tr A = \sum_{i=1}^{n} a_{ii}

(1) was proved as Problem D4 in Section 5.2. The proofs of (2), (3), and (4) are left as Problems D1, D2, and D3, respectively.
This theorem motivates the following definition.

Definition (Similar Matrices)
If A and B are n × n matrices such that P⁻¹AP = B for some invertible matrix P, then A and B are said to be similar.

Thus, from our work above, if there is a basis of ℝⁿ consisting of eigenvectors of A, then A is similar to a diagonal matrix D, and so A is diagonalizable. On the other hand, if at least one of the eigenvalues of A is deficient, then A will not have n linearly independent eigenvectors. Hence we will not be able to construct an invertible matrix P whose columns are eigenvectors of A. In this case, we say that A is not diagonalizable.

We get the following theorem.



Theorem 2 [Diagonalization Theorem]
An n × n matrix A can be diagonalized if and only if there exists a basis for ℝⁿ of eigenvectors of A. If such a basis {v₁, …, v_n} exists, the matrix P = [v₁ ··· v_n] diagonalizes A to a diagonal matrix D = diag(λ₁, …, λ_n), where λᵢ is an eigenvalue of A corresponding to vᵢ for 1 ≤ i ≤ n.

From the Diagonalization Theorem and our work above, we immediately get the
following two useful corollaries.

Corollary 3
A matrix A is diagonalizable if and only if every eigenvalue of A has its geometric multiplicity equal to its algebraic multiplicity.

Corollary 4 If an n x n matrix A has n distinct eigenvalues, then A is diagonalizable.

Remark

Observe that it is possible for a matrix A with real entries to have non-real eigenvalues, which will lead to non-real eigenvectors. In this case, there cannot exist a basis for ℝⁿ of eigenvectors of A, and so we will say that A is not diagonalizable over ℝ. In Chapter 9, we will examine the case where complex eigenvalues and eigenvectors are allowed.

EXAMPLE 1
Find an invertible matrix P and a diagonal matrix D such that P⁻¹AP = D, where A = \begin{bmatrix} 2 & 3 \\ 3 & 2 \end{bmatrix}.

Solution: We need to find a basis for ℝ² of eigenvectors of A. Hence, we need to find a basis for the eigenspace of each eigenvalue of A. The characteristic polynomial of A is

C(λ) = det(A - λI) = (2-λ)² - 9 = (λ - 5)(λ + 1)

Hence, the eigenvalues of A are λ₁ = 5 and λ₂ = -1.
For λ₁ = 5, we get

A - λ₁I = \begin{bmatrix} -3 & 3 \\ 3 & -3 \end{bmatrix} \sim \begin{bmatrix} 1 & -1 \\ 0 & 0 \end{bmatrix}

So, v₁ = \begin{bmatrix} 1 \\ 1 \end{bmatrix} is an eigenvector for λ₁ = 5 and {v₁} is a basis for its eigenspace.
For λ₂ = -1, we get

A - λ₂I = \begin{bmatrix} 3 & 3 \\ 3 & 3 \end{bmatrix} \sim \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}

So, v₂ = \begin{bmatrix} -1 \\ 1 \end{bmatrix} is an eigenvector for λ₂ = -1 and {v₂} is a basis for its eigenspace.
Thus, {v₁, v₂} is a basis for ℝ², and so if we let P = [v₁ v₂] = \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}, we get

P⁻¹AP = \begin{bmatrix} 5 & 0 \\ 0 & -1 \end{bmatrix} = D
EXAMPLE 1 (continued)
Note that we could instead have taken P = [v₂ v₁] = \begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix}, which would have given

P⁻¹AP = \begin{bmatrix} -1 & 0 \\ 0 & 5 \end{bmatrix}
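The diagonalization above is easy to verify numerically. A NumPy sketch, using a symmetric 2 × 2 matrix with eigenvalues 5 and -1 and the eigenvectors (1, 1) and (-1, 1) found in Example 1:

```python
import numpy as np

# Symmetric matrix with eigenvalues 5 and -1, as in Example 1.
A = np.array([[2.0, 3.0],
              [3.0, 2.0]])
P = np.array([[1.0, -1.0],
              [1.0,  1.0]])        # columns: eigenvectors for 5 and -1

D = np.linalg.inv(P) @ A @ P
print(D.round(10))                  # diag(5, -1)
```

Swapping the columns of P, as noted above, simply swaps the diagonal entries of D.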

EXAMPLE 2
Determine whether A = \begin{bmatrix} 0 & 3 & -2 \\ -2 & 5 & -2 \\ -2 & 3 & 0 \end{bmatrix} is diagonalizable. If it is, find an invertible matrix P and a diagonal matrix D such that P⁻¹AP = D.

Solution: Subtracting row 2 from row 3, the characteristic polynomial of A is

C(λ) = det(A - λI) = \begin{vmatrix} -\lambda & 3 & -2 \\ -2 & 5-\lambda & -2 \\ -2 & 3 & -\lambda \end{vmatrix} = \begin{vmatrix} -\lambda & 3 & -2 \\ -2 & 5-\lambda & -2 \\ 0 & -2+\lambda & 2-\lambda \end{vmatrix}
     = (-2+\lambda)(-1)(2\lambda - 4) + (2-\lambda)(\lambda^2 - 5\lambda + 6)
     = -(\lambda - 2)(\lambda^2 - 3\lambda + 2) = -(\lambda - 2)(\lambda - 2)(\lambda - 1)

Hence, λ₁ = 2 is an eigenvalue with algebraic multiplicity 2, and λ₂ = 1 is an eigenvalue with algebraic multiplicity 1. By Theorem 6.1.2, the geometric multiplicity of λ₂ = 1 must equal 1. Thus, A is diagonalizable if and only if the geometric multiplicity of λ₁ = 2 is 2.

For λ₁ = 2, we get

A - λ₁I = \begin{bmatrix} -2 & 3 & -2 \\ -2 & 3 & -2 \\ -2 & 3 & -2 \end{bmatrix} \sim \begin{bmatrix} 1 & -3/2 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}

Thus, a basis for the eigenspace is \left\{ \begin{bmatrix} 3 \\ 2 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \right\}. Hence, the geometric multiplicity of λ₁ = 2 equals its algebraic multiplicity. By Corollary 3, we see that A is diagonalizable. So, we also need to find a basis for the eigenspace of λ₂ = 1.

For λ₂ = 1, we get

A - λ₂I = \begin{bmatrix} -1 & 3 & -2 \\ -2 & 4 & -2 \\ -2 & 3 & -1 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}

Therefore, \left\{ \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \right\} is a basis for the eigenspace.

So, we can take P = \begin{bmatrix} 3 & -1 & 1 \\ 2 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix} and get

P⁻¹AP = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix}
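The conclusion of Example 2 can be checked numerically; here P is assembled from eigenvectors found by row reduction for a 3 × 3 matrix with eigenvalue 2 of multiplicity 2 and eigenvalue 1 (a NumPy sketch):

```python
import numpy as np

# 3x3 matrix with eigenvalue 2 (multiplicity 2) and eigenvalue 1.
A = np.array([[ 0.0, 3.0, -2.0],
              [-2.0, 5.0, -2.0],
              [-2.0, 3.0,  0.0]])

# Columns of P: two eigenvectors for lambda = 2, one for lambda = 1.
P = np.array([[3.0, -1.0, 1.0],
              [2.0,  0.0, 1.0],
              [0.0,  1.0, 1.0]])

D = np.linalg.inv(P) @ A @ P
print(D.round(10))                  # diag(2, 2, 1)
```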

EXERCISE 1
Diagonalize A=U -� =il

EXAMPLE 3
Is the matrix A = \begin{bmatrix} -1 & 7 & -5 \\ -4 & 11 & -6 \\ -4 & 8 & -3 \end{bmatrix} diagonalizable?

Solution: Subtracting row 2 from row 3, the characteristic polynomial is

C(λ) = det(A - λI) = \begin{vmatrix} -1-\lambda & 7 & -5 \\ -4 & 11-\lambda & -6 \\ -4 & 8 & -3-\lambda \end{vmatrix} = \begin{vmatrix} -1-\lambda & 7 & -5 \\ -4 & 11-\lambda & -6 \\ 0 & -3+\lambda & 3-\lambda \end{vmatrix}
     = (-3+\lambda)(-1)(6\lambda + 6 - 20) + (3-\lambda)(\lambda^2 - 10\lambda - 11 + 28)
     = -(\lambda - 3)(\lambda^2 - 4\lambda + 3) = -(\lambda - 3)(\lambda - 3)(\lambda - 1)

Thus, λ₁ = 3 is an eigenvalue with algebraic multiplicity 2, and λ₂ = 1 is an eigenvalue with algebraic multiplicity 1.

For λ₁ = 3, we get

A - λ₁I = \begin{bmatrix} -4 & 7 & -5 \\ -4 & 8 & -6 \\ -4 & 8 & -6 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -1/2 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}

Thus, a basis for the eigenspace is \left\{ \begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix} \right\}. Hence, the geometric multiplicity of λ₁ = 3 is less than its algebraic multiplicity, and so A is not diagonalizable by Corollary 3.

EXERCISE 2
Show that A = [� �] is not diagonalizable.

EXAMPLE 4
Show that the matrix A = \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} is not diagonalizable over ℝ.

Solution: The characteristic polynomial is

C(λ) = det(A - λI) = \begin{vmatrix} 1-\lambda & -1 \\ 1 & 1-\lambda \end{vmatrix} = (1-\lambda)^2 + 1

Since (1-λ)² + 1 = 0 has no real solutions, the matrix A has no real eigenvalues and hence is not diagonalizable over ℝ.

Some Applications of Diagonalization

A geometrical application of diagonalization occurs when we try to picture the graph of a quadratic equation in two variables, such as ax² + 2bxy + cy² = d. It turns out that we should consider the associated matrix \begin{bmatrix} a & b \\ b & c \end{bmatrix}. By diagonalizing this matrix, we can easily recognize the graph as an ellipse, a hyperbola, or perhaps some degenerate case. This problem will be discussed in Section 8.3.

A physical application related to these geometrical applications is the analysis of the deformation of a solid. Imagine, for example, a small steel block that experiences a small deformation when some forces are applied. The change of shape in the block can be described in terms of a 3 × 3 strain matrix. This matrix can always be diagonalized, so it turns out that we can identify the change of shape as the composition of three stretches along mutually orthogonal directions. This application is discussed in Section 8.4.
Diagonalization is also an important tool for studying systems of linear difference equations, which arise in many settings. Consider, for example, a population that is divided into two groups; we count these two groups at regular intervals (say, once a month) so that at every time n, we have a vector p(n) = \begin{bmatrix} p_1(n) \\ p_2(n) \end{bmatrix} that tells us how many are in each group. For some situations, the change from month to month can be described by saying that the vector p changes according to the rule

p(n + 1) = A p(n)

where A is some known 2 × 2 matrix. It follows that p(n) = Aⁿ p(0). We are often interested in understanding what happens to the population "in the long run." This requires us to calculate Aⁿ for n large. This problem is easy to deal with if we can diagonalize A. Particular examples of this kind are Markov processes, which are discussed in Section 6.3.
One very important application of diagonalization and the related idea of eigenvectors is the solution of systems of linear differential equations. This application is discussed in Section 6.4.
In Section 4.6 we saw that if L : ℝⁿ → ℝⁿ is a linear transformation, then its matrix with respect to the basis ℬ is determined from its standard matrix by the equation

[L]_ℬ = P⁻¹[L]_S P

where P = [v₁ ··· v_n] is the change of basis matrix. Examples 5 and 7 in Section 4.6 show that we can more easily give a geometrical interpretation of a linear mapping L if there is a basis ℬ such that [L]_ℬ is in diagonal form. Hence, our diagonalization process is a method for finding such a geometrically natural basis. In particular, if the standard matrix of L is diagonalizable, then the basis for ℝⁿ of eigenvectors forms the geometrically natural basis.

PROBLEMS 6.2
Practice Problems

Al By checking whether columns of P are eigenvec­


(a)A=
[ ]
11
9 -
6
' P= [� � ]
-

tors ofA, determine whether P diagonalizesA. If 4

diagonal.
1
it does, determine p-t and check that p-AP is
(b)A= [� -�l p=
[� n

[� -87]' [� �] -2-� 72 1] 3
(c) A=
- ] p=
(f) A=
[ 21
[: 2 2 n -� :J -[ 1
4 4
(d) A= 4, P= 6 3
4

A2 For the following matrices, determine the eigenval­


ues and corresponding eigenvectors and determine
(g) A=
-6
3
12 81
-4 -3

A3 Follow the same instructions as for Problem A2.


whether each matrix is diagonalizable over R If it
is diagonalizable, give a matrix P and diagonal ma­
(a) A=
[ ; �]
_

trix D such that p-1 AP = D.

(a) A=
[� �]
(b) A=
[: :]
(b) A= [-2 3] (c) A=
[-; -�]
(c) A=
4

[-� -�]
-3
(d) A= [-2-� 2: -=!]2
(d) A= [�1 1� �11 (e) A= [2� �2 �1 0

(e) A=
[-� -17-� -8-�1
9
(f) A= [-1-� -2� �i 3

(g) A= [�� �� -�:1


Homework Problems

Bl By checking whether columns of P are eigenvec- B2 For the following matrices, determine the eigenval-
tors of A, determine whether P diagonalizes A. If
it does, determine p-1 and check that p-1 AP is whether each matrix is diagonalizable over R If it
ues and corresponding eigenvectors and determine

is diagonalizable, give a matrix P and diagonal ma-

[-� n [-1 n [-2 ;]


trix D such that p-1 AP = D.
diagonal.

(a) A= p=
1 4
(a) A=

(b) A=
[� n [� - �] p=
(b) A=
[� �]
(c) A=
[� n [� - �] p=

7 2
[-188 -1 111 [-!� -12 -2: l
-4
(c) A=
[� ;]
[� 3 -2 �]
(d) A 4, P= _ 6
-6 3 (d) A=
=

(e) A= [- -22 -�]


4
-9
2

<DA= H -�i -1
2 (d) A= [=� � � ] 2

[� � =�1
-3 -4 -3 1 5

H -;1
-2
(g) A= 3 (e) A=

[� ; �1
4 1 0 -1

B3 Follow the same instructions as for Problem B2.


(a) A= [� �1 A=

[�� -� -��]
(f) 0 0 -2

(b) A= [ � -�1
-
�) A=

(c) A= [ ! -�1
_

Conceptual Problems

D1 Prove that if A and B are similar, then A and B have the same eigenvalues.

D2 Prove that if A and B are similar, then A and B have the same rank.

D3 (a) Let A and B be n × n matrices. Prove that tr AB = tr BA.
(b) Use the result of part (a) to prove that if A and B are similar, then tr A = tr B.

D4 (a) Suppose that P diagonalizes A and that the diagonal form is D. Show that A = PDP⁻¹.
(b) Use the result of part (a) and properties of eigenvectors to calculate a matrix that has eigenvalues 2 and 3 with corresponding eigenvectors v₁ and v₂, respectively.
(c) Determine a matrix that has eigenvalues 2, -2, and 3, with corresponding eigenvectors v₁, v₂, and v₃, respectively.

D5 (a) Suppose that P diagonalizes A and that the diagonal form is D. Show that Aᵏ = PDᵏP⁻¹.
(b) Use the result of part (a) to calculate A⁵, where A is the matrix from Problem A2 (g).

D6 (a) Suppose that A is diagonalizable. Prove that tr A is equal to the sum of the eigenvalues of A (including repeated eigenvalues) by using Theorem 1.
(b) Use the result of part (a) to determine, by inspection, the algebraic and geometric multiplicities of all of the eigenvalues of A = [a b; a b; a a a+b].

D7 (a) Suppose that A is diagonalizable. Prove that det A is equal to the product of the eigenvalues of A (repeated according to their multiplicity) by considering P⁻¹AP.
(b) Show that the constant term in the characteristic polynomial is det A. (Hint: How do you find the constant term in any polynomial p(λ)?)
(c) Without assuming that A is diagonalizable, show that det A is equal to the product of the roots of the characteristic equation of A (including any repeated roots and complex roots). (Hint: Consider the constant term in the characteristic equation and the factored version of that equation.)

D8 Let A be an n × n matrix. Prove that A is invertible if and only if A does not have 0 as an eigenvalue. (Hint: See Problem D7.)

D9 Suppose that A is diagonalized by the matrix P and that the eigenvalues of A are λ₁, …, λ_n. Show that the eigenvalues of (A - λ₁I) are 0, λ₂ - λ₁, λ₃ - λ₁, …, λ_n - λ₁. (Hint: A - λ₁I is diagonalized by P.)

6.3 Powers of Matrices and the Markov Process

In some applications of linear algebra, it is necessary to calculate powers of a matrix. If the matrix A is diagonalized by P to the diagonal matrix D, it follows from D = P⁻¹AP that A = PDP⁻¹, and then for any positive integer m we get

Aᵐ = (PDP⁻¹)ᵐ = (PDP⁻¹)(PDP⁻¹) ··· (PDP⁻¹) = PD(P⁻¹P)D(P⁻¹P)D ··· (P⁻¹P)DP⁻¹ = PDᵐP⁻¹

Thus, knowledge of the eigenvalues of A and the theory of diagonalization should be valuable tools in these applications. One such application is the study of Markov processes. After discussing Markov processes, we turn the question around and show how the "power method" uses powers of a matrix A to determine an eigenvalue of A. (This is an important step in the Google PageRank algorithm.) We begin with an example of a Markov process.
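The identity Aᵐ = PDᵐP⁻¹ is easy to test numerically: powering a diagonal matrix just powers its diagonal entries. A sketch using a symmetric 2 × 2 matrix with eigenvalues 5 and -1:

```python
import numpy as np

# Symmetric matrix with eigenvalues 5 and -1, diagonalized by P.
A = np.array([[2.0, 3.0],
              [3.0, 2.0]])
P = np.array([[1.0, -1.0],
              [1.0,  1.0]])
D = np.diag([5.0, -1.0])

m = 6
# A^m = P D^m P^{-1}: powering D just powers its diagonal entries.
Am = P @ np.diag(np.diag(D) ** m) @ np.linalg.inv(P)
assert np.allclose(Am, np.linalg.matrix_power(A, m))
print(Am)                           # [[7813, 7812], [7812, 7813]]
```

For large m this costs one diagonalization plus scalar powers, rather than m - 1 matrix multiplications.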

EXAMPLE 1
Smith and Jones are the only competing suppliers of communication services in their community. At present, they each have a 50% share of the market. However, Smith has recently upgraded his service, and a survey indicates that from one month to the next, 90% of Smith's customers remain loyal, while 10% switch to Jones. On the other hand, 70% of Jones's customers remain loyal and 30% switch to Smith. If this goes on for six months, how large are their market shares? If this goes on for a long time, how big will Smith's share become?

Solution: Let S_m be Smith's market share (as a decimal) at the end of the m-th month and let J_m be Jones's share. Then S_m + J_m = 1, since between them they have 100% of the market. At the end of the (m+1)-st month, Smith has 90% of his previous customers and 30% of Jones's previous customers, so

S_{m+1} = 0.9 S_m + 0.3 J_m

Similarly,

J_{m+1} = 0.1 S_m + 0.7 J_m

We can rewrite these equations in matrix-vector form:

\begin{bmatrix} S_{m+1} \\ J_{m+1} \end{bmatrix} = \begin{bmatrix} 0.9 & 0.3 \\ 0.1 & 0.7 \end{bmatrix} \begin{bmatrix} S_m \\ J_m \end{bmatrix}

The matrix T = \begin{bmatrix} 0.9 & 0.3 \\ 0.1 & 0.7 \end{bmatrix} is called the transition matrix for this problem: it describes the transition (change) from the state \begin{bmatrix} S_m \\ J_m \end{bmatrix} at time m to the state \begin{bmatrix} S_{m+1} \\ J_{m+1} \end{bmatrix} at time m + 1. Then we have answers to the questions if we can determine T⁶\begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix} and Tᵐ\begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix} for m large.

To answer the first question, we might compute T⁶ directly. For the second question, this approach is not reasonable, and instead we diagonalize. We find that λ₁ = 1
308 Chapter 6 Eigenvectors and Diagonalization

EXAMPLE 1 (continued)
is an eigenvalue of T with eigenvector v₁ = \begin{bmatrix} 3 \\ 1 \end{bmatrix}, and λ₂ = 0.6 is the other eigenvalue, with eigenvector v₂ = \begin{bmatrix} -1 \\ 1 \end{bmatrix}. Thus, T = PDP⁻¹ with P = \begin{bmatrix} 3 & -1 \\ 1 & 1 \end{bmatrix} and D = \begin{bmatrix} 1 & 0 \\ 0 & 0.6 \end{bmatrix}.

It follows that

T^m = PD^mP^{-1} = \begin{bmatrix} 3 & -1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 1^m & 0 \\ 0 & (0.6)^m \end{bmatrix} \frac{1}{4}\begin{bmatrix} 1 & 1 \\ -1 & 3 \end{bmatrix}

We could now answer our question directly, but we get a simpler calculation if we observe that the eigenvectors form a basis, so we can write

\begin{bmatrix} S_0 \\ J_0 \end{bmatrix} = c_1 v_1 + c_2 v_2

Then,

\begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = P^{-1}\begin{bmatrix} S_0 \\ J_0 \end{bmatrix} = \frac{1}{4}\begin{bmatrix} S_0 + J_0 \\ -S_0 + 3J_0 \end{bmatrix}

Then, by linearity,

T^m \begin{bmatrix} S_0 \\ J_0 \end{bmatrix} = c_1 T^m v_1 + c_2 T^m v_2 = c_1 \lambda_1^m v_1 + c_2 \lambda_2^m v_2 = \frac{1}{4}(S_0 + J_0)\begin{bmatrix} 3 \\ 1 \end{bmatrix} + \frac{1}{4}(S_0 - 3J_0)(0.6)^m \begin{bmatrix} 1 \\ -1 \end{bmatrix}

Now S₀ = J₀ = 0.5, so S₀ - 3J₀ = -1. When m = 6,

\begin{bmatrix} S_6 \\ J_6 \end{bmatrix} = \frac{1}{4}\begin{bmatrix} 3 \\ 1 \end{bmatrix} - \frac{1}{4}(0.6)^6 \begin{bmatrix} 1 \\ -1 \end{bmatrix} \approx \frac{1}{4}\begin{bmatrix} 3 - 0.0467 \\ 1 + 0.0467 \end{bmatrix} \approx \begin{bmatrix} 0.738 \\ 0.262 \end{bmatrix}

Thus, after six months, Smith has approximately 73.8% of the market.
When m is very large, (0.6)ᵐ is nearly zero, so for m large enough (m → ∞), we have S_∞ = 0.75 and J_∞ = 0.25.
Thus, in this problem, Smith's share approaches 75% as m gets large, but it never gets larger than 75%. Now look carefully: we get the same answer in the long run, no matter what the initial values of S₀ and J₀ are, because (0.6)ᵐ → 0 and S₀ + J₀ = 1.

By emphasizing some features of Example 1, we will be led to an important definition and several general properties:

(1) Each column of T has sum 1. This means that all of Smith's customers show up a month later as customers of Smith or Jones; the same is true for Jones's customers. No customers are lost from the system and none are added after the process begins.

(2) It is natural to interpret the entries t_{ij} as probabilities. For example, t₁₁ = 0.9 is the probability that a Smith customer remains a Smith customer, with t₂₁ = 0.1 as the probability that a Smith customer becomes a Jones customer. If we consider "Smith customer" as "state 1" and "Jones customer" as "state 2," then t_{ij} is the probability of transition from state j to state i between time m and time m + 1.

(3) The "initial state vector" is \begin{bmatrix} S_0 \\ J_0 \end{bmatrix}; Tᵐ\begin{bmatrix} S_0 \\ J_0 \end{bmatrix} is the state vector at time m.

(4) Note that

\begin{bmatrix} S_1 \\ J_1 \end{bmatrix} = T\begin{bmatrix} S_0 \\ J_0 \end{bmatrix} = S_0\begin{bmatrix} t_{11} \\ t_{21} \end{bmatrix} + J_0\begin{bmatrix} t_{12} \\ t_{22} \end{bmatrix}

Since t₁₁ + t₂₁ = 1 and t₁₂ + t₂₂ = 1, it follows that

S₁ + J₁ = S₀ + J₀

Thus, it follows from (1) that each state vector has the same column sum. In our example, S₀ and J₀ are decimal fractions and S₀ + J₀ = 1, but we could consider a process whose states have some other constant column sum.

(5) Note that 1 is an eigenvalue of T with eigenvector \begin{bmatrix} 3 \\ 1 \end{bmatrix}. To get a state vector with the appropriate sum, we take the eigenvector to be \begin{bmatrix} 3/4 \\ 1/4 \end{bmatrix}. Thus,

T\begin{bmatrix} 3/4 \\ 1/4 \end{bmatrix} = \begin{bmatrix} 3/4 \\ 1/4 \end{bmatrix}

and the state vector \begin{bmatrix} 3/4 \\ 1/4 \end{bmatrix} is fixed or invariant under the transformation with matrix T. Moreover, this fixed vector is the limiting state approached by Tᵐ\begin{bmatrix} S_0 \\ J_0 \end{bmatrix} for any \begin{bmatrix} S_0 \\ J_0 \end{bmatrix}.

The following definition captures the essential properties of this example.

Definition (Markov Matrix, Markov Process)
An n × n matrix T is the Markov matrix (or transition matrix) of an n-state Markov process if

(1) t_{ij} ≥ 0, for each i and j.

(2) Each column sum is 1: \sum_{i=1}^{n} t_{ij} = 1 for each j.


We take possible states of the process to be the vectors s = \begin{bmatrix} s_1 \\ \vdots \\ s_n \end{bmatrix} such that sᵢ ≥ 0 for each i, and s₁ + ··· + s_n = 1.

Remark

With minor changes, we could develop the theory with s₁ + ··· + s_n = constant.

EXAMPLE 2
The matrix \begin{bmatrix} 0.1 & 0.3 \\ 0.9 & 0.5 \end{bmatrix} is not a Markov matrix since the sum of the entries in the second column does not equal 1.


EXAMPLE 3
Find the fixed-state vector for the Markov matrix A = \begin{bmatrix} 0.1 & 0.3 \\ 0.9 & 0.7 \end{bmatrix}.

Solution: We know the fixed-state vector is an eigenvector for the eigenvalue λ = 1. We have

A - I = \begin{bmatrix} -0.9 & 0.3 \\ 0.9 & -0.3 \end{bmatrix} \sim \begin{bmatrix} 1 & -1/3 \\ 0 & 0 \end{bmatrix}

Therefore, an eigenvector corresponding to λ = 1 is \begin{bmatrix} 1 \\ 3 \end{bmatrix}. The components in the state vector must sum to 1, so the invariant state is \begin{bmatrix} 1/4 \\ 3/4 \end{bmatrix}. It is easy to verify that

\begin{bmatrix} 0.1 & 0.3 \\ 0.9 & 0.7 \end{bmatrix} \begin{bmatrix} 1/4 \\ 3/4 \end{bmatrix} = \begin{bmatrix} 1/4 \\ 3/4 \end{bmatrix}
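The fixed-state computation of Example 3 can be automated: take an eigenvector for λ = 1 and rescale it so its components sum to 1 (a NumPy sketch):

```python
import numpy as np

A = np.array([[0.1, 0.3],
              [0.9, 0.7]])             # Markov matrix of Example 3

w, V = np.linalg.eig(A)
v = V[:, np.argmin(np.abs(w - 1.0))]   # eigenvector for eigenvalue 1
state = v / v.sum()                    # rescale: components sum to 1
print(state)                           # [0.25, 0.75]
```

Dividing by the component sum works regardless of the sign or length of the eigenvector the library returns.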

EXERCISE 1
Determine which of the following matrices is a Markov matrix. Find the fixed-state vector of the Markov matrix.

(a) A = \begin{bmatrix} 0.5 & 0.6 \\ 0.4 & 0.5 \end{bmatrix}    (b) B = \begin{bmatrix} 0.4 & 0.6 \\ 0.6 & 0.4 \end{bmatrix}
The goal with the Markov process is to establish the behaviour of a sequence of states s, Ts, T²s, …, Tᵐs. If possible, we want to say something about the limit of Tᵐs as m → ∞. As we saw in Example 1, diagonalization of T is a key to solving the problem. It is beyond the scope of this book to establish all the properties of the Markov process, but some of the properties are easy to prove, and others are easy to illustrate if we make extra assumptions.
Section 6.3 Powers of Matrices and the Markov Process 311

PROPERTY 1. One eigenvalue of a Markov matrix is λ₁ = 1.

Proof: Since each column of T has sum 1, each column of (T - I) has sum 0. Hence, the sum of the rows of (T - I) is the zero vector. Thus the rows are linearly dependent, and (T - I) has rank less than n, so det(T - I) = 0. Therefore, 1 is an eigenvalue of T. ∎

PROPERTY 2. The eigenvector s* for λ₁ = 1 has s*ⱼ ≥ 0 for 1 ≤ j ≤ n.

This property is important because it means that the eigenvector s* is a real state of the process. In fact, it is a fixed or invariant state:

PROPERTY 3. All other eigenvalues satisfy |λᵢ| ≤ 1.

To see why we expect this, let us assume that T is diagonalizable, with distinct eigenvalues 1, λ₂, …, λ_n and corresponding eigenvectors s*, s₂, …, s_n. Then any initial state s can be written

s = c₁s* + c₂s₂ + ··· + c_n s_n

It follows that

Tᵐs = c₁s* + c₂λ₂ᵐ s₂ + ··· + c_n λ_nᵐ s_n

If any |λᵢ| > 1, the term |λᵢᵐ| would become much larger than the other terms when m is large; it would follow that Tᵐs has some coordinates with magnitude greater than 1. This is impossible because state coordinates satisfy 0 ≤ sᵢ ≤ 1, so we must have |λᵢ| ≤ 1.

PROPERTY 4. Suppose that for some m all the entries in Tᵐ are non-zero. Then all the eigenvalues of T except for λ₁ = 1 satisfy |λᵢ| < 1. In this case, for any initial state s, Tᵐs → s* as m → ∞: all states tend to the invariant state s* under the process.
Notice that in the diagonalizable case, the fact that Tᵐs → s* follows from the expression for Tᵐs given under Property 3.

EXERCISE 2
The Markov matrix T = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} has eigenvalues 1 and -1; it does not satisfy the conclusion of Property 4. However, it also does not satisfy the extra assumption of Property 4. It is worthwhile to explore this "bad" case.

Let s = \begin{bmatrix} s_1 \\ s_2 \end{bmatrix}. Determine the behaviour of the sequence s, Ts, T²s, …. What is the fixed-state vector for T?



Systems of Linear Difference Equations


If A is an n x n matrix and S(m) is a vector for each positive integer m, then the matrix
vector equation
S(m + 1) = AS(m)

may be regarded as a system of n linear first-order difference equations, describing the coordinates s₁, s₂, ..., sₙ at time m + 1 in terms of those at time m. They are "first-order difference" equations because they involve only one time difference, from m to m + 1; the Fibonacci equation s(m + 1) = s(m) + s(m - 1) is a second-order difference equation.
Markov processes form a special class of this large class of systems of linear
difference equations, but there are applications that do not fit the Markov assumptions.
For example, in population models, we might wish to consider deaths (so that some
column sums of A would be less than 1) or births, or even multiple births (so that some
entries in A would be greater than 1). Similar considerations apply to some economic
models, which are represented by matrix models. A proper discussion of such models
requires more theory than is developed in this book.
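As a concrete illustration of rewriting a higher-order difference equation as a first-order system, the Fibonacci equation mentioned above can be encoded with a 2 × 2 matrix by taking S(m) = [s(m); s(m - 1)]. This is a standard construction; the specific initial values below are an assumption for the example.

```python
import numpy as np

# s(m+1) = s(m) + s(m-1) rewritten as S(m+1) = A S(m),
# where S(m) = [s(m), s(m-1)].
A = np.array([[1, 1],
              [1, 0]])

S = np.array([1, 1])  # s(1) = 1, s(0) = 1
for _ in range(5):
    S = A @ S

print(S[0])  # s(6) = 13
```

Iterating the matrix five times advances the sequence 1, 1, 2, 3, 5, 8, 13 to its sixth term, so powers of A compute Fibonacci numbers.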

The Power Method of Determining Eigenvalues


Practical applications of eigenvalues often involve larger matrices with non-integer entries. Such problems often require efficient computer methods for determining eigenvalues. A thorough discussion of such methods is beyond the scope of this book, but we can indicate how powers of matrices provide one tool for finding eigenvalues.

Let A be an n × n matrix. To simplify the discussion, we suppose that A has n distinct real eigenvalues λ₁, ..., λₙ, with corresponding eigenvectors v₁, ..., vₙ. We suppose that |λ₁| > |λᵢ| for 2 ≤ i ≤ n. We call λ₁ the dominant eigenvalue. Since {v₁, ..., vₙ} will form a basis for ℝⁿ, any vector x ∈ ℝⁿ can be written

x = c₁v₁ + ··· + cₙvₙ

Then

Ax = c₁λ₁v₁ + ··· + cₙλₙvₙ

and

A^m x = c₁λ₁^m v₁ + ··· + cₙλₙ^m vₙ

For m large, |λ₁^m| is much greater than all other terms. If we divide by c₁λ₁^m, then all terms on the right-hand side will be negligibly small except for v₁, so we will be able to identify v₁. By calculating Av₁, we determine λ₁.

To make this into an effective procedure, we must control the size of the vectors: if λ₁ > 1, then λ₁^m → ∞ as m gets large, and the procedure would break down. Similarly, if all eigenvalues are between 0 and 1, then A^m x → 0, and the procedure would fail. To avoid these problems, we normalize the vector at each step (that is, convert it to a vector of length 1).

The procedure is as follows.

Algorithm 1
Guess x₀; normalize y₀ = x₀/‖x₀‖
x₁ = Ay₀; normalize y₁ = x₁/‖x₁‖
x₂ = Ay₁; normalize y₂ = x₂/‖x₂‖
and so on.

We seek convergence of yₘ to some limiting vector; if such a vector exists, it must be v₁, the eigenvector for the largest eigenvalue λ₁.

This procedure is illustrated in the following example, which is simple enough that you can check the calculations.

EXAMPLE 4
Determine the eigenvalue of largest absolute value for the matrix A = [13 6; -12 -5] by using the power method.

Solution: Choose any starting vector and let x₀ = [1; 1]. Then

y₀ = (1/√2)[1; 1] ≈ [0.707; 0.707]

x₁ = Ay₀ ≈ [13.44; -12.02],  y₁ = x₁/‖x₁‖ ≈ [0.745; -0.667]

x₂ = Ay₁ ≈ [5.683; -5.605],  y₂ ≈ [0.712; -0.702]

x₃ = Ay₂ ≈ [5.044; -5.034],  y₃ ≈ [0.7078; -0.7063]

x₄ = Ay₃ ≈ [4.9636; -4.9621],  y₄ ≈ [0.7072; -0.7070]

At this point, we judge that yₘ → [0.707; -0.707], so v₁ = [0.707; -0.707] is an eigenvector of A, and the corresponding dominant eigenvalue is λ₁ = 7. (The answer is easy to check by using standard methods.)
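Algorithm 1 is short enough to implement directly. Below is a sketch in Python with NumPy, applied to the matrix recovered from Example 4; the fixed iteration count is an arbitrary choice (a production version would use a convergence test instead).

```python
import numpy as np

def power_method(A, x0, steps=50):
    """Normalized power iteration: returns (dominant eigenvalue, unit eigenvector)."""
    y = x0 / np.linalg.norm(x0)
    for _ in range(steps):
        x = A @ y
        y = x / np.linalg.norm(x)
    # Recover the eigenvalue from A y (here via y . (A y), valid since ||y|| = 1).
    lam = y @ (A @ y)
    return lam, y

A = np.array([[13.0, 6.0],
              [-12.0, -5.0]])
lam, v = power_method(A, np.array([1.0, 1.0]))
print(round(lam, 4))  # -> 7.0, the dominant eigenvalue found in Example 4
```

Because the other eigenvalue of this A is 1, the error in yₘ decays like (1/7)^m, which is why the hand calculations above already settle down after four steps.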

Many questions arise with the power method. What if we poorly choose the initial vector? If we choose x₀ in the subspace spanned by all eigenvectors of A except v₁, the method will fail to give v₁. How do we decide when to stop repeating the steps of the procedure? For a computer version of the algorithm, it would be important to have tests to decide that the procedure has converged, or that it will never converge.

Once we have determined the dominant eigenvalue of A, how can we determine other eigenvalues? If A is invertible, the dominant eigenvalue of A⁻¹ would give the reciprocal of the eigenvalue of A with the smallest absolute value. Another approach is to observe that if one eigenvalue λᵢ is known, then eigenvalues of A - λᵢI will give us information about eigenvalues of A. (See Problem 6.2.D9.)

PROBLEMS 6.3
Practice Problems

Al Determine which of the following matrices are Markov matrices. For each Markov matrix, determine the invariant or fixed state (corresponding to the eigenvalue λ₁ = 1).
(a) [0.2 0.6; 0.8 0.3]
(b) [0.3 0.6; 0.7 0.4]
(c) [0.7 0.3 0.0; 0.1 0.6 0.1; 0.2 0.2 0.9]
(d) [0.9 0.1 0.0; 0.0 0.9 0.1; 0.1 0.0 0.9]

A2 Suppose that census data show that every decade, 15% of people dwelling in rural areas move into towns and cities, while 5% of urban dwellers move into rural areas.
(a) What would be the eventual steady-state population distribution?
(b) If the population were 50% urban, 50% rural at some census, what would be the distribution after 50 years?

A3 A car rental company serving one city has three locations: the airport, the train station, and the city centre. Of the cars rented at the airport, 8/10 are returned to the airport, 1/10 are left at the train station, and 1/10 are left at the city centre. Of cars rented at the train station, 3/10 are left at the airport, 6/10 are returned to the train station, and 1/10 are left at the city centre. Of cars rented at the city centre, 3/10 go to the airport, 1/10 go to the train station, and 6/10 are returned to the city centre. Model this as a Markov process and determine the steady-state distribution for the cars.

A4 To see how the power method works, use it to determine the largest eigenvalue of the given matrix, starting with the given initial vector. (You will need a calculator or computer.)
(a) [� -�], x₀ = [�]
(b) [�� ��], x₀ = [�]

Homework Problems

Bl Determine which of the following matrices are Markov matrices. For each Markov matrix, determine the invariant or fixed state (corresponding to the eigenvalue λ₁ = 1).
(a) [0.4 0.7; 0.5 0.3]
(b) [0.5 0.6; 0.5 0.4]
(c) [0.8 0.3 0.2; 0.0 0.6 0.2; 0.2 0.1 0.6]
(d) [0.8 0.1 0.2; 0.1 0.9 0.6; 0.1 0.1 0.2]

B2 The town of Markov Centre has only two suppliers of widgets, Johnson and Thomson. All inhabitants buy their supply on the first day of each month. Neither supplier is very successful at keeping customers. 70% of the customers who deal with Johnson decide that they will "try the other guy" next time. Thomson does even worse: only 20% of his customers come back the next month, and the rest go to Johnson.
(a) Model this as a Markov process and determine the steady-state distribution of customers.
(b) Determine a general expression for Johnson and Thomson's shares of the customers, given an initial state where Johnson has 25% and Thomson has 75%.

B3 A student society at a large university campus decides to create a pool of bicycles that can be used by the members of the society. Bicycles can be borrowed or returned at the residence, the library, or the athletic centre. The first day, 200 marked bicycles are left at each location. At the end of the day, at the residence, there are 160 bicycles that started at the residence, 40 that started at the library, and 60 that started at the athletic centre. At the library, there are 20 that started at the residence, 140 that started at the library, and 40 that started at the athletic centre. At the athletic centre, there are 20 that started at the residence, 20 that started at the library, and 100 that started at the athletic centre. If this pattern is repeated every day, what is the steady-state distribution of bicycles?

B4 Use the power method with initial vector [�] to determine the dominant eigenvalue of [3.5 4.5; 4.5 3.5]. Show your calculations clearly.
that started at the library, and 40 that started at the

Computer Problems

Cl Use the power method with initial vector [�] to determine the dominant eigenvalue of the matrix

[2.89316  -0.70562  -1.28185]
[0.76414   2.42918  -0.67401]
[1.67682  -0.83198   2.34270]

You may do this by using software that includes matrix operations or by writing a program to carry out the procedure.

Conceptual Problems

Dl (a) Let T be the transition matrix for a two-state Markov process. Show that the eigenvalue that is not 1 is λ₂ = t₁₁ + t₂₂ - 1.
(b) For a two-state Markov process with t₂₁ = a and t₁₂ = b, show that the fixed state (eigenvector for λ = 1) is (1/(a + b))[b; a].

D2 Suppose that T is a Markov matrix.
(a) Show that for any state x, Σ_{k=1}^n (Tx)ₖ = Σ_{k=1}^n xₖ.
(b) Show that if v is an eigenvector of T with eigenvalue λ ≠ 1, then Σ_{k=1}^n vₖ = 0.

6.4 Diagonalization and Differential


Equations
This section requires knowledge of the exponential function, its derivative, and first­
order linear differential equations. The ideas are not used elsewhere in this book.
Consider two tanks, Y and Z, each containing 1000 litres of a salt solution. At an initial time, t = 0 (in hours), the concentration of salt in tank Y is different from the concentration in tank Z. In each tank the solution is well stirred, so that the concentration is constant throughout the tank. The two tanks are joined by pipes; through one pipe, solution is pumped from Y to Z at a rate of 20 L/h; through the other, solution is pumped from Z to Y at the same rate. The problem is to determine the amount of salt in each tank at time t.

Let y(t) be the amount of salt (in kilograms) in tank Y at time t, and let z(t) be the amount in tank Z at time t. Then the concentration in Y at time t is (y/1000) kg/L. Similarly, (z/1000) kg/L is the concentration in Z. Then for tank Y, salt is flowing out through one pipe at a rate of (20)(y/1000) kg/h and in through the other pipe at a rate of (20)(z/1000) kg/h. Since the rate of change is measured by the derivative, we have

dy/dt = -0.02y + 0.02z

By consideration of Z, we get a second differential equation, so y and z are the solutions of the system of linear ordinary differential equations:

dy/dt = -0.02y + 0.02z
dz/dt = 0.02y - 0.02z

It is convenient to rewrite this system in the form

d/dt [y; z] = [-0.02 0.02; 0.02 -0.02] [y; z]
=

How can we solve this system? Well, it might be easier if we could change variables so that the matrix is diagonalized. By standard methods, one eigenvalue of

A = [-0.02 0.02; 0.02 -0.02]

is λ₁ = 0, with corresponding eigenvector [1; 1]. The other eigenvalue is λ₂ = -0.04, with corresponding eigenvector [1; -1]. Hence, A is diagonalized by

P = [1 1; 1 -1], with P⁻¹ = (1/2)[1 1; 1 -1]

Introduce new coordinates [y*; z*] by the change of coordinates equation, as in Section 4.6:

[y; z] = P [y*; z*]

Substitute this for [y; z] on both sides of the system to obtain

d/dt (P [y*; z*]) = AP [y*; z*]

Since the entries in P are constants, it is easy to check that

d/dt (P [y*; z*]) = P d/dt [y*; z*]

Multiply both sides of the system of equations (on the left) by P⁻¹. Since P diagonalizes A, we get

d/dt [y*; z*] = P⁻¹AP [y*; z*] = [0 0; 0 -0.04] [y*; z*]

Now write the pair of equations:

dy*/dt = 0  and  dz*/dt = -0.04z*

These equations are "decoupled," and we can easily solve each of them by using simple one-variable calculus.

The only functions satisfying dy*/dt = 0 are constants: we write y*(t) = a. The only functions satisfying an equation of the form dx/dt = kx are exponentials of the form x(t) = ce^{kt} for a constant c. So, from dz*/dt = -0.04z*, we obtain z*(t) = be^{-0.04t}, where b is a constant.

Now we need to express the solution in terms of the original variables y and z:

[y; z] = P [y*; z*] = [1 1; 1 -1] [y*; z*] = [y* + z*; y* - z*] = [a + be^{-0.04t}; a - be^{-0.04t}]

For later use, it is helpful to rewrite this as

[y; z] = a[1; 1] + be^{-0.04t}[1; -1]

This is the general solution of the problem. To determine the constants a and b, we would need to know the amounts y(0) and z(0) at the initial time t = 0. Then we would know y and z for all t. Note that as t → ∞, y and z tend to a common value a, as we might expect.
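The general solution just found can be checked numerically. In the sketch below the initial amounts of salt are arbitrary choices; the code verifies that the formula satisfies the system dx/dt = Ax and exhibits the limiting behaviour noted above.

```python
import numpy as np

A = np.array([[-0.02, 0.02],
              [0.02, -0.02]])

y0, z0 = 100.0, 20.0   # arbitrary initial amounts of salt (kg)
a = (y0 + z0) / 2      # coefficient of eigenvector [1, 1]
b = (y0 - z0) / 2      # coefficient of eigenvector [1, -1]

def solution(t):
    return a * np.array([1.0, 1.0]) + b * np.exp(-0.04 * t) * np.array([1.0, -1.0])

# Check that the general solution satisfies dx/dt = Ax at some time t,
# using a small central finite-difference approximation of the derivative.
t, h = 10.0, 1e-6
deriv = (solution(t + h) - solution(t - h)) / (2 * h)
assert np.allclose(deriv, A @ solution(t), atol=1e-6)

print(solution(0))    # y(0) = 100, z(0) = 20
print(solution(1e4))  # both tanks tend to the common value a = 60
```

As t grows, the e^{-0.04t} term dies away and both tanks approach the same amount of salt, which is physically what well-mixed connected tanks must do.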

A Practical Solution Procedure

The usual solution procedure takes advantage of the understanding obtained from this diagonalization argument, but it takes a major shortcut. Now that the expected form of the solution is known, we simply look for a solution of the form [y; z] = ce^{λt}v. Substitute this into the original system and use the fact that

d/dt (ce^{λt}v) = λce^{λt}v

After the common factor ce^{λt} is cancelled, this tells us that v is an eigenvector of A, with eigenvalue λ. We find the two eigenvalues λ₁ and λ₂ and the corresponding eigenvectors v₁ and v₂, as above. Observe that since our problem is a linear homogeneous problem, the general solution will be an arbitrary linear combination of the two solutions e^{λ₁t}v₁ and e^{λ₂t}v₂. This matches the general solution we found above.

General Discussion
There are many other problems that give rise to systems of linear homogeneous or­
dinary differential equations (for example, electrical circuits or a mechanical system
consisting of springs). Many of these systems are much larger than the example we
considered. Methods for solving these systems make extensive use of eigenvectors and
eigenvalues, and they require methods for dealing with cases where the characteristic
equation has complex roots.

PROBLEMS 6.4
Practice Problems
Al Find the general solution of each of the following systems of linear differential equations.
(a) d/dt [y; z] = [� -�; � �] [y; z]
(b) d/dt [y; z] = [0.2 0.7; 0.1 -0.4] [y; z]

Homework Problems
Bl Find the general solution of each of the following systems of linear differential equations.
(a) d/dt [y; z] = [-�.� -�.�; � �] [y; z]
(b) d/dt [y; z] = [-� �; � �] [y; z]
(c) d/dt [x; y; z] = [-�� -� �; � � �; 11 -5 -8] [x; y; z]

CHAPTER REVIEW
Suggestions for Student Review

1 Define eigenvectors and eigenvalues of a matrix A. Explain the connection between the statement that λ is an eigenvalue of A with eigenvector v and the condition det(A - λI) = 0. (Section 6.1)

2 What does it mean to say that matrices A and B are similar? Explain why this is an important question. (Section 6.2)

3 Suppose you are told that the n × n matrix A has eigenvalues λ₁, ..., λₙ (repeated according to multiplicity).
(a) What conditions on these eigenvalues guarantee that A is diagonalizable over ℝ? (Section 6.2)
(b) Is there any case where you can tell from the eigenvalues that A is not diagonalizable over ℝ? (Section 6.2)

4 Use the idea suggested in Problem 6.2.D4 to create matrices for your classmates to diagonalize. (Section 6.2)

5 Suppose that P⁻¹AP = D, where D is a diagonal matrix with distinct diagonal entries λ₁, ..., λₙ. How can we use this information to solve the system of linear differential equations d/dt x = Ax? (Section 6.4)

Chapter Quiz

El Let A = [� -� �; � � �; -2 8 3]. Determine whether the following vectors are eigenvectors of A. If any is an eigenvector of A, state the corresponding eigenvalue.
(a) [�]  (b) [�]  (c) [�]  (d) [�]

E2 Determine whether the matrix A = [� -� -�; � � �; -11 5 4] is diagonalizable. If it is, give an invertible matrix P and a diagonal matrix D such that P⁻¹AP = D.



E3 Determine the algebraic and geometric multiplicity of each eigenvalue of A = [-� � -�; � � �; 1 1 3]. Is A diagonalizable?

E4 If λ is an eigenvalue of the invertible matrix A, prove that λ⁻¹ is an eigenvalue of A⁻¹.

E5 Suppose that A is a 3 × 3 matrix such that

det A = 0,  det(A + 2I) = 0,  det(A - 3I) = 0

Answer the following questions and give a brief explanation in each case.
(a) What is the dimension of the solution space of Ax = 0?
(b) What is the dimension of the nullspace of the matrix B = A - 2I?
(c) What is the rank of A?

E6 Let A = [0.9 0.1 0.0; 0.0 0.8 0.1; 0.1 0.1 0.9]. Verify that A is a Markov matrix and determine its invariant state x such that Σ_{i=1}^3 xᵢ = 1.

E7 Find the general solution of the system of differential equations

d/dt [y; z] = [�.� �.�; �.� �.�] [y; z]

Further Problems

Fl (a) Suppose that A and B are square matrices such that AB = BA. Suppose that the eigenvalues of A all have algebraic multiplicity 1. Prove that any eigenvector of A is also an eigenvector of B.
(b) Give an example to illustrate that the result in part (a) may not be true if A has eigenvalues with algebraic multiplicity greater than 1.

F2 If det B ≠ 0, prove that AB and BA have the same eigenvalues.

F3 Suppose that A is an n × n matrix with n distinct eigenvalues λ₁, ..., λₙ with corresponding eigenvectors v₁, ..., vₙ, respectively. By representing x with respect to the basis of eigenvectors, show that (A - λ₁I)(A - λ₂I)···(A - λₙI)x = 0 for every x ∈ ℝⁿ, and hence conclude that "A is a root of its characteristic polynomial." That is, if the characteristic polynomial is C(λ) = det(A - λI), then C(A) = O, the zero matrix. (Hint: Write the characteristic polynomial in factored form.) This result is called the Cayley-Hamilton Theorem and is true for any square matrix A.

F4 For an invertible n × n matrix, use the Cayley-Hamilton Theorem to show that A⁻¹ can be written as a polynomial of degree less than or equal to n - 1 in A (that is, a linear combination of {A^{n-1}, ..., A², A, I}).

MyMathLab: Go to MyMathLab at www.mymathlab.com. You can practise many of this chapter's exercises as often as you want. The guided solutions help you find an answer step by step. You'll find a personalized study plan available to you, too!
CHAPTER 7

Orthonormal Bases
CHAPTER OUTLINE

7.1 Orthonormal Bases and Orthogonal Matrices


7.2 Projections and the Gram-Schmidt Procedure
7.3 Method of Least Squares
7.4 Inner Product Spaces
7.5 Fourier Series

In Section 1.4 we saw that we can use a projection to find a point P in a plane that is closest to some other point Q that is not in the plane. We can view this as finding the point in the plane that best approximates Q. In many applications, we want to find a best approximation. Thus, it is very useful to generalize our work with projections from Section 1.4 not only to general subspaces of ℝⁿ but also to general vector spaces. You may find it helpful to review Sections 1.3 and 1.4 carefully before proceeding with this chapter.

7.1 Orthonormal Bases and


Orthogonal Matrices
Most of our intuition about coordinate geometry is based on experience with the standard basis for ℝⁿ. It is therefore a little uncomfortable for many beginners to deal with the arbitrary bases that arise in Chapter 4. Fortunately, for many problems, it is possible to work with bases that have the most essential properties of the standard basis: the basis vectors are mutually orthogonal (that is, the dot product of any two vectors is 0), and each basis vector is a unit vector (a vector with length 1).

Orthonormal Bases

Definition
Orthogonal
A set of vectors {v₁, ..., vₖ} in ℝⁿ is orthogonal if vᵢ · vⱼ = 0 whenever i ≠ j.
Orthogonal

EXAMPLE 1
The set {[1; 1; 1; 1], [1; -1; 1; -1], [-1; 0; 1; 0], [0; 1; 0; -1]} is an orthogonal set of vectors in ℝ⁴. (Check the dot products yourself.) The set { … } is also an orthogonal set.

If the zero vector is excluded, orthogonal sets have one very nice property.

Theorem 1
If {v₁, ..., vₖ} is an orthogonal set of non-zero vectors in ℝⁿ, it is linearly independent.

Proof: Consider the equation c₁v₁ + ··· + cₖvₖ = 0. Take the dot product of vᵢ with each side to get

(c₁v₁ + ··· + cₖvₖ) · vᵢ = 0 · vᵢ
c₁(v₁ · vᵢ) + ··· + cᵢ(vᵢ · vᵢ) + ··· + cₖ(vₖ · vᵢ) = 0
0 + ··· + 0 + cᵢ‖vᵢ‖² + 0 + ··· + 0 = 0

since vᵢ · vⱼ = 0 unless i = j. Moreover, vᵢ ≠ 0, so ‖vᵢ‖ ≠ 0 and hence cᵢ = 0. Since this is true for all 1 ≤ i ≤ k, it follows that {v₁, ..., vₖ} is linearly independent. ∎

Remark

The trick used in this proof of taking the dot product of each side with one of the
vectors v; is an amazingly useful trick. Many of the things we do with orthogonal sets
depend on it.

In addition to being mutually orthogonal, we want the vectors to be unit vectors.

Definition
Orthonormal
A set {v₁, ..., vₖ} of vectors in ℝⁿ is orthonormal if it is orthogonal and each vector vᵢ is a unit vector (that is, each vector is normalized).

Notice that an orthonormal set of vectors does not contain the zero vector, since all vectors have length 1. It follows from Theorem 1 that orthonormal sets are necessarily linearly independent.

EXAMPLE 2
Any subset of the standard basis vectors in ℝⁿ is an orthonormal set. For example, in ℝ⁶, {e₁, e₂, e₅, e₆} is an orthonormal set of four vectors (where, as usual, eᵢ is the i-th standard basis vector).

EXAMPLE 3
The set {(1/2)[1; 1; 1; 1], (1/2)[1; -1; 1; -1], (1/√2)[-1; 0; 1; 0], (1/√2)[0; 1; 0; -1]} is an orthonormal set in ℝ⁴. The vectors are multiples of the vectors in Example 1, so they are certainly mutually orthogonal. They have been normalized so that each vector has length 1.

EXERCISE 1
Verify that the set {[1; 1; 0], [1; -1; 1], [-1; 1; 2]} is orthogonal and then normalize the vectors to produce the corresponding orthonormal set.

Many arguments based on orthonormal sets could be given for orthogonal sets of non-zero vectors. However, the general arguments are slightly simpler for orthonormal sets since ‖vᵢ‖ = 1 in this case. In specific examples, it may be simpler to use orthogonal sets and postpone the normalization that often introduces square roots until the end. (Compare Examples 1 and 3.) The arguments here will usually be given for orthonormal sets.

Coordinates with Respect to an Orthonormal Basis


An orthonormal set of n vectors in ℝⁿ is necessarily a basis for ℝⁿ since it is automatically linearly independent and a set of n linearly independent vectors in ℝⁿ is a basis for ℝⁿ by Theorem 4.3.4. We will now see that there are several advantages to an orthonormal basis over an arbitrary basis.

The first of these advantages is that it is very easy to find the coordinates of a vector with respect to an orthonormal basis. Suppose that B = {v₁, ..., vₙ} is an orthonormal basis for ℝⁿ and that x is any vector in ℝⁿ. To find the B-coordinates of x, we must find b₁, ..., bₙ such that

x = b₁v₁ + ··· + bₙvₙ    (7.1)

If B were an arbitrary basis, the procedure would be to solve the resulting system of n equations in n variables. However, since B is an orthonormal basis, we can use our amazingly useful trick: take the dot product of (7.1) with vᵢ to get

x · vᵢ = 0 + ··· + 0 + bᵢ(vᵢ · vᵢ) + 0 + ··· + 0 = bᵢ

because vᵢ · vⱼ = 0 for i ≠ j and vᵢ · vᵢ = 1. The result of this argument is important enough to summarize as a theorem.

Theorem 2
If B = {v₁, ..., vₙ} is an orthonormal basis for ℝⁿ, then the i-th coordinate of a vector x ∈ ℝⁿ with respect to B is

bᵢ = x · vᵢ

It follows that x can be written as

x = (x · v₁)v₁ + ··· + (x · vₙ)vₙ


EXAMPLE 4
Find the coordinates of x = [1; 2; 3; 4] with respect to the orthonormal basis

B = { (1/2)[1; 1; 1; 1], (1/2)[1; -1; 1; -1], (1/√2)[-1; 0; 1; 0], (1/√2)[0; 1; 0; -1] }

Solution: By Theorem 2, the coordinates b₁, b₂, b₃, and b₄ of x are given by

b₁ = x · v₁ = (1/2)(1 + 2 + 3 + 4) = 5
b₂ = x · v₂ = (1/2)(1 - 2 + 3 - 4) = -1
b₃ = x · v₃ = (1/√2)(-1 + 0 + 3 + 0) = √2
b₄ = x · v₄ = (1/√2)(0 + 2 + 0 - 4) = -√2

Thus the B-coordinate vector of x is [x]_B = [5; -1; √2; -√2]. (It is easy to check that x = 5v₁ - v₂ + √2 v₃ - √2 v₄.)
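The computation in Example 4 amounts to four dot products, that is, one matrix-vector product with the transpose of the matrix whose columns are the basis vectors. A sketch (the basis is the one used in Example 4):

```python
import numpy as np

# The orthonormal basis of Example 4, as columns of a matrix.
B = np.column_stack([
    0.5 * np.array([1, 1, 1, 1]),
    0.5 * np.array([1, -1, 1, -1]),
    np.array([-1, 0, 1, 0]) / np.sqrt(2),
    np.array([0, 1, 0, -1]) / np.sqrt(2),
])

x = np.array([1.0, 2.0, 3.0, 4.0])

# Theorem 2: the i-th coordinate is x . v_i, i.e. the vector b = B^T x.
b = B.T @ x
print(b)  # approximately [5, -1, 1.414, -1.414]

# Check: the coordinates reconstruct x.
assert np.allclose(B @ b, x)
```

No linear system needs to be solved; that is precisely the advantage of an orthonormal basis described above.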

EXERCISE 2
Verify that B = { … } is an orthonormal basis for ℝ³ and then find the coordinates of x = [ … ] with respect to B.

Another technical advantage of using orthonormal bases is related to the first one. Often it is necessary to calculate the lengths and dot products of vectors whose coordinates are given with respect to some basis other than the standard basis. If the basis is not orthonormal, the calculations are a little ugly, but they are quite simple when the basis is orthonormal. Let {v₁, ..., vₙ} be an orthonormal basis for ℝⁿ and let x = x₁v₁ + ··· + xₙvₙ and y = y₁v₁ + ··· + yₙvₙ be any vectors in ℝⁿ. Using the fact that vᵢ · vⱼ = 0 for i ≠ j and vᵢ · vᵢ = 1 gives

x · y = (x₁v₁ + ··· + xₙvₙ) · (y₁v₁ + ··· + yₙvₙ)
      = x₁y₁(v₁ · v₁) + x₁y₂(v₁ · v₂) + ··· + x₁yₙ(v₁ · vₙ) + x₂y₁(v₂ · v₁) + ··· + x₂yₙ(v₂ · vₙ) + ··· + xₙyₙ(vₙ · vₙ)
      = x₁y₁ + x₂y₂ + ··· + xₙyₙ

and

‖x‖² = x · x = x₁² + ··· + xₙ²

Thus, the formulas in the new coordinates look exactly like the formulas in standard coordinates. This fact will be used in Section 7.2.

EXAMPLE 5
Let B = {(1/√2)[1; 1; 0], (1/√3)[1; -1; -1], (1/√6)[1; -1; 2]} and let x, y ∈ ℝ³ be such that [x]_B = [1; 2; 1] and [y]_B = [-1; -1; 1]. Determine ‖x‖² and x · y.

Solution: Using our work above, we get

‖x‖² = [x]_B · [x]_B = [1; 2; 1] · [1; 2; 1] = 6

and

x · y = [x]_B · [y]_B = [1; 2; 1] · [-1; -1; 1] = -2
EXERCISE 3
Verify the result of Example 5 by finding x and y explicitly and computing ‖x‖² and x · y directly. This will demonstrate the usefulness of coordinates with respect to an orthonormal basis.

Dot products and orthonormal bases in ℝⁿ have important generalizations to inner products and orthonormal bases in general vector spaces. These will be considered in Section 7.4.

Change of Coordinates and Orthogonal Matrices


A third technical advantage of using orthonormal bases is that it is very easy to invert a change of coordinates matrix between the standard basis and an orthonormal basis. To keep the writing short, we give the argument in ℝ³, but the corresponding argument works in any dimension.



Let B = {v₁, v₂, v₃} be an orthonormal basis for ℝ³ and let P = [v₁ v₂ v₃]. From Section 4.4, P is the change of coordinates matrix from B-coordinates to coordinates with respect to the standard basis S. Now, consider the product of the transpose of P with P: the rows of Pᵀ are v₁ᵀ, v₂ᵀ, v₃ᵀ, so

PᵀP = [v₁ᵀv₁ v₁ᵀv₂ v₁ᵀv₃; v₂ᵀv₁ v₂ᵀv₂ v₂ᵀv₃; v₃ᵀv₁ v₃ᵀv₂ v₃ᵀv₃]
    = [v₁·v₁ v₁·v₂ v₁·v₃; v₂·v₁ v₂·v₂ v₂·v₃; v₃·v₁ v₃·v₂ v₃·v₃]
    = [1 0 0; 0 1 0; 0 0 1] = I

Remark
We have used the fact that the matrix multiplication xᵀy equals the dot product x · y. We will use this very important fact many times throughout the rest of the book.

It follows that if P is a square matrix whose columns are orthonormal, then P is invertible and P⁻¹ = Pᵀ. Matrices with this property are given a special name.
=

Definition
Orthogonal Matrix
An n × n matrix P such that PᵀP = I is called an orthogonal matrix. It follows that P⁻¹ = Pᵀ and that PPᵀ = I = PᵀP.

It is important to observe that the definition of an orthogonal matrix is equivalent to the orthonormality of either the columns or the rows of the matrix.
It is important to observe that the definition of an orthogonal matrix is equivalent
to the orthonormality of either the columns or the rows of the matrix.

Theorem 3 The following are equivalent for an n x n matrix P:

(1) P is orthogonal.
(2) The columns of P form an orthonormal set.
(3) The rows of P form an orthonormal set.

Proof: Let P = [v₁ ··· vₙ]. By the usual rule for matrix multiplication,

(PᵀP)ᵢⱼ = vᵢᵀvⱼ = vᵢ · vⱼ

Hence, PᵀP = I if and only if vᵢ · vᵢ = 1 and vᵢ · vⱼ = 0 for all i ≠ j. But this is true if and only if the columns of P form an orthonormal set.
The result for the rows of P follows from consideration of the product PPᵀ = I. You are asked to show this in Problem D4. ∎

Remark

Observe that such matrices should probably be called orthonormal matrices, but the
name orthogonal matrix is the name everybody uses. Be sure that you remember that
an orthogonal matrix has orthonormal columns and rows.

EXAMPLE 6
The set {[cos θ; sin θ], [-sin θ; cos θ]} is orthonormal for any θ (verify). So, the matrix

P = [cos θ  -sin θ; sin θ  cos θ]

is an orthogonal matrix. Hence,

P⁻¹ = Pᵀ = [cos θ  sin θ; -sin θ  cos θ]

EXAMPLE 7
The set {[1; 1; 0], [1; -1; -1], [1; -1; 2]} is orthogonal. (Verify this.) If the vectors are normalized, the resulting set is orthonormal, so the following matrix P is orthogonal:

P = [1/√2   1/√3   1/√6 ]
    [1/√2  -1/√3  -1/√6 ]
    [0     -1/√3   2/√6 ]

Thus, P is invertible and

P⁻¹ = Pᵀ = [1/√2   1/√2   0    ]
           [1/√3  -1/√3  -1/√3 ]
           [1/√6  -1/√6   2/√6 ]

Moreover, observe that P⁻¹ is also orthogonal.
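Checking that a matrix such as the P of Example 7 is orthogonal is a one-line computation; a sketch:

```python
import numpy as np

s2, s3, s6 = np.sqrt(2), np.sqrt(3), np.sqrt(6)
P = np.array([[1/s2,  1/s3,  1/s6],
              [1/s2, -1/s3, -1/s6],
              [0.0,  -1/s3,  2/s6]])

# P is orthogonal: P^T P = I, so the inverse is just the transpose.
assert np.allclose(P.T @ P, np.eye(3))
assert np.allclose(np.linalg.inv(P), P.T)
```

This numerically confirms both the definition (PᵀP = I) and the practical payoff: inverting an orthogonal change of coordinates matrix costs nothing more than a transpose.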

EXAMPLE 8
The vectors of Example 4 are orthonormal, so the following matrix is orthogonal:

[1/2   1/2  -1/√2   0   ]
[1/2  -1/2   0      1/√2]
[1/2   1/2   1/√2   0   ]
[1/2  -1/2   0     -1/√2]

EXERCISE4
Verify that P =
[ 1/.../2
1 j-{3
1/.../2 0
-1/Y3 1/Y3
l is orthogonal by showing that pp
T
= /.

-1/\/6 1/\/6 2/\/6

The most important application of orthogonal matrices considered in this book is the diagonalization of symmetric matrices in Chapter 8, but there are many other geometrical applications as well. In the next example, an orthogonal change of coordinates
matrix is used to find the standard matrix of a rotation transformation about an axis
that is not a coordinate axis. (This was one question we could not answer in
Chapter 3.)

EXAMPLE 9
Find the standard matrix of the linear transformation L : ℝ³ → ℝ³ that rotates vectors about the axis defined by the vector v = [1; 1; 1] counterclockwise through an angle π/3.

Solution: If the rotation were about the standard x₁-axis (that is, the axis defined by e₁), the matrix of the rotation would be

R₁ = [1  0        0       ]   [1  0     0    ]
     [0  cos π/3  -sin π/3] = [0  1/2   -√3/2]
     [0  sin π/3   cos π/3]   [0  √3/2   1/2 ]

This will also be the B-matrix of the rotation in this problem if there exists a basis B = {f₁, f₂, f₃} such that
(1) f₁ is a unit vector in the direction of the axis v.
(2) B is orthonormal.
(3) B is right-handed (so that we can correctly include the counterclockwise sense of the rotation with respect to a right-handed basis).

Let us find such a basis.

To start, let f₁ = v/‖v‖ = (1/√3)[1; 1; 1]. We must find two vectors that are orthogonal to f₁ and to each other. Solving the equation

0 = f₁ · x = (1/√3)(x₁ + x₂ + x₃)

by inspection, we find that the vector [1; -1; 0] is orthogonal to f₁. (There are infinitely many other choices for this vector; this is just one simple choice.) To form a right-handed system, we can now take the third vector to be the cross product

[1; 1; 1] × [1; -1; 0] = [1; 1; -2]

Normalizing these vectors, we get

f₂ = (1/√2)[1; -1; 0]  and  f₃ = (1/√6)[1; 1; -2]

The required right-handed orthonormal basis is thus B = {f₁, f₂, f₃}, and the orthogonal change of coordinates matrix from this basis to the standard basis is

P = [f₁ f₂ f₃] = [1/√3   1/√2   1/√6 ]
                 [1/√3  -1/√2   1/√6 ]
                 [1/√3   0     -2/√6 ]

Since [L]_B = R₁ (above), the standard matrix is given by

[L]_S = P[L]_B P⁻¹ = [2/3   -1/3   2/3 ]
                     [2/3    2/3  -1/3 ]
                     [-1/3   2/3   2/3 ]

It is easy to check that [1; 1; 1] is an eigenvector of this matrix with eigenvalue 1, and it should be since it defines the axis of the rotation represented by the matrix. Notice also that the matrix [L]_S is itself an orthogonal matrix. Since a rotation transformation always maps the standard basis to a new orthonormal basis, its matrix can always be taken as a change of coordinates matrix, and it must be orthogonal.
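The change-of-basis construction in Example 9 (a rotation about the first basis vector, conjugated by the orthogonal matrix P) can be verified numerically; a sketch using the basis found above:

```python
import numpy as np

s2, s3, s6 = np.sqrt(2), np.sqrt(3), np.sqrt(6)

# Right-handed orthonormal basis with f1 along the rotation axis (1, 1, 1).
P = np.array([[1/s3,  1/s2,  1/s6],
              [1/s3, -1/s2,  1/s6],
              [1/s3,  0.0,  -2/s6]])

# Rotation through pi/3 about the first basis vector.
c, s = np.cos(np.pi/3), np.sin(np.pi/3)
R1 = np.array([[1, 0, 0],
               [0, c, -s],
               [0, s,  c]])

# Standard matrix: [L]_S = P R1 P^{-1} = P R1 P^T, since P is orthogonal.
L = P @ R1 @ P.T
print(np.round(L * 3))  # entries of 3*[L]_S: [[2, -1, 2], [2, 2, -1], [-1, 2, 2]]

# The axis is fixed: [1, 1, 1] is an eigenvector with eigenvalue 1.
assert np.allclose(L @ np.ones(3), np.ones(3))
```

Replacing P⁻¹ by Pᵀ in the conjugation is exactly the "third technical advantage" of orthonormal bases noted earlier in the section.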

A Note on Rotation Transformations and Rotation of Axes in ℝ²
The matrix P = [cos θ  -sin θ; sin θ  cos θ] is the change of coordinates matrix from the basis

B = {[cos θ; sin θ], [-sin θ; cos θ]}

to the standard basis S of ℝ². This change of basis is often described as a "rotation of axes through angle θ" because each of the basis vectors in B is obtained from the corresponding standard basis vector by a rotation through angle θ.

Treatments of rotation of axes often emphasize the change of coordinates equation. Recall Theorem 4.4.2, which tells us that if P is the change of coordinates matrix from B to S, then P⁻¹ is the change of coordinates matrix from S to B. Then, for any x ∈ ℝ² with [x]_B = [b₁; b₂], the change of coordinates equation can be written in the form [x]_B = P⁻¹[x]_S. If the change of basis is a rotation of axes, then P is an orthogonal matrix, so P⁻¹ = Pᵀ. Thus, the change of coordinates equation for this rotation of axes can be written as

[b₁; b₂] = [cos θ  sin θ; -sin θ  cos θ] [x₁; x₂]

Or it can be written as two equations for the "new" coordinates in terms of the "old":

b₁ = x₁ cos θ + x₂ sin θ
b₂ = -x₁ sin θ + x₂ cos θ

These equations could also be derived using a fairly simple trigonometric argument.

The matrix [ cos θ  -sin θ ]
           [ sin θ   cos θ ]

also appeared in Section 3.3 as the standard matrix [R_θ] of the linear transformation of R² that rotates vectors counterclockwise through angle θ. Conceptually, this is quite different from a rotation of axes.
It can be confusing that the matrix for a rotation through θ as a linear transformation is the transpose of the change of coordinates matrix for a rotation of axes

through θ. In fact, what may seem even more confusing is the fact that if you replace θ with (-θ), one matrix turns into the other (because cos(-θ) = cos θ and sin(-θ) = -sin θ). One way to understand this is to imagine what happens to the vector e1 under the two different scenarios. First, consider R to be the transformation that rotates each vector by angle θ, with 0 < θ < π/2; then R(e1) is a vector in the first quadrant that makes an angle θ with the x1-axis. Next, consider a rotation of axes through (-θ); denote the new axes by y1 and y2, respectively. Then e1 has not moved but the axes have, and with respect to the new axes, e1 is in the new first quadrant and makes an angle of θ with respect to the y1-axis. Therefore, the new coordinates of e1 relative to the rotated axes are exactly the same as the standard coordinates of R(e1). Compare Figures 7.1.1 and 7.1.2.
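The comparison in this paragraph can be made concrete with a short NumPy sketch: rotating e1 through θ produces exactly the same numbers as the coordinates of the unmoved e1 relative to axes rotated through -θ.

```python
import numpy as np

theta = 0.7  # any angle with 0 < theta < pi/2 will do

# Standard matrix of the rotation transformation R_theta
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Change of coordinates matrix for a rotation of axes through -theta;
# the new coordinates of a fixed vector x are P^T x, since P is orthogonal
P = np.array([[np.cos(-theta), -np.sin(-theta)],
              [np.sin(-theta),  np.cos(-theta)]])

e1 = np.array([1.0, 0.0])
assert np.allclose(R @ e1, P.T @ e1)   # same numbers, two interpretations
assert np.allclose(R, P.T)             # one matrix is the transpose of the other
```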

Figure 7.1.1 The transformation L: R² → R² rotates vectors by angle θ.

Figure 7.1.2 The standard basis vector e1 is shown relative to axes obtained from the standard axes by rotation through -θ.

PROBLEMS 7.1
Practice Problems
A1 Determine which of the following sets are orthogonal. For each orthogonal set, produce the corresponding orthonormal set and the orthogonal change of coordinates matrix P.

A4 For each of the following matrices, decide whether A is orthogonal by calculating AᵀA. If A is not orthogonal, indicate how the columns of A fail to form an orthonormal set (for example, "the second and third columns are not orthogonal").

[15/13 12/13]
{[-: rni +m
(a) A=

(b)
2/[ 3/513 -5/13
(b) A=
-4/5 -3/54/5]
{[l].[J HJ}
(c)
(c) A=
[1/52/5 -1/5]
1/3[2/3 -2/32/32/5 -2/31/3]
(d) { 1� _-1: ' --�01 ' 01}
,
-21 (d) A=

(e) A=
2/31/3 1/32/3 2/32/3
[2/32/3 -2/31/3 -2/31/3]
{ Hl [�] {!]} JR.3 JR.3

A2 Let B = { ..., ..., ... } be an orthonormal basis. Find the coordinates of each of the following vectors with respect to the orthonormal basis B.
(a) w = ...  (b) x = ...  (c) y = ...  (d) z = ...

A3 Let B = { ..., ..., ..., ... }. Find the coordinates of each of the following vectors with respect to the orthonormal basis B.
(a) x = ...  (b) y = ...  (c) w = ...  (d) z = ...

A5 Let L: R³ → R³ be the rotation through angle ... about the axis determined by g1 = ....
(a) Verify that g2 = ... is orthogonal to g1. Define g3 = g1 × g2 and calculate the components of g3.
(b) Let fᵢ = gᵢ/||gᵢ|| for i = 1, 2, 3, so that B = {f1, f2, f3} is an orthonormal basis. Write the change of coordinates matrix P for the change from B to the standard basis S in the form P = (1/...)[ · ].
(c) Write the B-matrix of L, [L]_B. For part (d), it is probably easiest to write this in the form [L]_B = (1/2)[ · ].
(d) Determine the standard matrix [L]_S.

A6 Given that B = { ..., ..., ... } is an orthonormal basis for R³, determine another orthonormal basis for R³ that includes the vector ..., and briefly explain why your basis is orthonormal.

Homework Problems

Bl Determine which of the following sets are ortho­ 3 2


gonal. For each orthogonal set, produce the cor­
(a) w=
-2 (b) 1=
-4
responding orthonormal set and the orthogonal 6 0
change of coordinates matrix P. 4

(a)
{[�]·[-i]} (c ) y =
5
0
4
2

{[-:J.Ul·[�J}
(d ) z=
-2 -2
(b) 2 3
B4 For each of the following matrices, decide whether

{[ll {�l HJ}


A is orthogonal by calculating AT A. If A is not
( c) orthogonal, indicate how the columns of A fail to

-1 2
0 0
2
1
} form an orthonormal set (for example, "the second
and third columns are not orthogonal").

[2/1;-VS
YS -1/ Ysl
, 1 , -2 (a) A=
-21-VSJ
1 0 2/YS -1/YS]

·
{ [ �l n l }
}, u ] }o
(b) A=
[-11-VS -21-VS
1/2 1/2 ]
[
B2 Ut � = Find the
[ 1/2 1/2
]
·

(c ) A=
}, -
coordinates of each of the following vectors with
1/../3 l/Y2 -1/../6

m nJ
respect to the orthonormal basis '13.
-1/../3 l/-f2. 1/../6
[
(d ) A=
1/../3 0 2/../6
(a) w= (b) x=
1/../3 1/../6 l/Y2
(e) A= 1/../3 1/../6 l/Y2
(c ) =
Y Ul (d)Z=
nl} 1/../3 -1/../6 0

[i] [ �]
-

: � !
- BS (a) Ut W, = and W, = ·Determine a thfrd

B 3 Let '13 = {d 1
. j
-
-1
,�
0
, },
-1
·
Find vector w3 such that {w1, w2, w3} forms a right­
handed orthogonal set.

Ul. ni.
the coordinates of each of the following vectors (b) Let v; = w;/llw;ll so that '13 = {v1, 1!2, 1/3} is an
with respect to the orthonormal basis '13.
orthonormal basis. Find and

Conceptual Problems

D1 Verify that the product of two orthogonal matrices is an orthogonal matrix.

D2 (a) Prove that if P is an orthogonal matrix, then det P = ±1.
(b) Give an example of a 2 × 2 matrix A such that det A = 1, but A is not orthogonal.

D3 (a) Use the fact that x · y = xᵀy to show that if an n × n matrix P is orthogonal, then ||Px|| = ||x|| for every x ∈ Rⁿ.
(b) Show that any real eigenvalue of an orthogonal matrix must be either 1 or -1.

D4 Prove that an n × n matrix P is orthogonal if and only if the rows of P form an orthonormal set.

7.2 Projections and the Gram-Schmidt Procedure
Projections onto a Subspace
The projection of a vector y onto another vector x was defined in Chapter 1 by finding a scalar multiple of x, denoted proj_x y, and a vector perpendicular to x, denoted perp_x y, such that

y = proj_x y + perp_x y

Since we were just trying to find a scalar multiple of x, the projection of y onto x could be viewed as projecting y onto the subspace spanned by x. Similarly, we saw how to find the projection of y onto a plane, which is just a 2-dimensional subspace. It is natural and useful to define the projection of vectors onto more general subspaces.
Let y ∈ Rⁿ and let S be a subspace of Rⁿ. To match what we did in Chapter 1, we want to write y as

y = proj_S y + perp_S y

where proj_S y is a vector in S and perp_S y is a vector orthogonal to S. To do this, we first observe that we need to define precisely what we mean by a vector orthogonal to a subspace.

Definition
Orthogonal
Orthogonal Complement

Let S be a subspace of Rⁿ. We shall say that a vector x is orthogonal to S if

x · s = 0 for all s ∈ S

We call the set of all vectors orthogonal to S the orthogonal complement of S and denote it S⊥. That is,

S⊥ = {x ∈ Rⁿ | x · s = 0 for all s ∈ S}

Remark

Note that if B = {v1, ..., vk} is a basis for S, then

S⊥ = {x ∈ Rⁿ | x · vᵢ = 0 for 1 ≤ i ≤ k}



EXAMPLE 1 If S is a plane in R³ with normal vector n, then by definition n is orthogonal to every vector in the plane, so we say that n is orthogonal to the plane. On the other hand, we saw in Chapter 1 that the plane is the set of all vectors orthogonal to n (or any scalar multiple of it), so the orthogonal complement of the subspace Span{n} is the plane.

EXAMPLE 2
Let W = Span{ [1, 0, 0, 1]ᵀ, [1, 0, 1, 0]ᵀ }. Find W⊥ in R⁴.

Solution: We want to find all v = [v1, v2, v3, v4]ᵀ ∈ R⁴ such that v · [1, 0, 0, 1]ᵀ = 0 and v · [1, 0, 1, 0]ᵀ = 0. This gives the system of equations v1 + v4 = 0 and v1 + v3 = 0, which has solution space

Span{ [0, 1, 0, 0]ᵀ, [-1, 0, 1, 1]ᵀ }

Hence, W⊥ = Span{ [0, 1, 0, 0]ᵀ, [-1, 0, 1, 1]ᵀ }.
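By the remark above, S⊥ consists of the solutions of x · vᵢ = 0, so numerically an orthogonal complement is a null space. A sketch using the spanning vectors of Example 2 (as reconstructed here, so treat the specific entries as an assumption):

```python
import numpy as np

# Rows of A are the vectors spanning W; W-perp is the null space of A
A = np.array([[1.0, 0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0, 0.0]])

# Null space basis via the SVD: right singular vectors beyond the rank
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]            # rows form an orthonormal basis of W-perp

assert null_basis.shape == (2, 4)           # dim(W-perp) = 4 - 2
assert np.allclose(A @ null_basis.T, 0.0)   # every basis vector is orthogonal to W
```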

EXERCISE 1
Lets= Span {i ;} · -� .
-
FindS"

We get the following important facts about S and S⊥.

Theorem 1 Let S be a k-dimensional subspace of Rⁿ. Then S⊥ is a subspace of Rⁿ and:

(1) S ∩ S⊥ = {0}

(2) dim(S⊥) = n - k

(3) If {v1, ..., vk} is an orthonormal basis for S and {v_{k+1}, ..., v_n} is an orthonormal basis for S⊥, then {v1, ..., vk, v_{k+1}, ..., v_n} is an orthonormal basis for Rⁿ.

You are asked to prove these facts in Problems D1, D2, and D4.
We are now able to return to our goal of defining the projection of a vector x onto a subspace S of Rⁿ. Assume that we have an orthonormal basis {v1, ..., vk} for S and an orthonormal basis {v_{k+1}, ..., v_n} for S⊥. Then, by (3) of Theorem 1, we know that {v1, ..., v_n} is an orthonormal basis for Rⁿ. Therefore, from our work in

Section 7.1, we can find the coordinates of x with respect to this orthonormal basis. We get

x = (x · v1)v1 + ··· + (x · vk)vk + (x · v_{k+1})v_{k+1} + ··· + (x · v_n)v_n

Observe that this is exactly what we have been looking for. In particular, we have written x as a sum of (x · v1)v1 + ··· + (x · vk)vk, which is a vector in S, and (x · v_{k+1})v_{k+1} + ··· + (x · v_n)v_n, which is a vector in S⊥. Thus, we make the following definition.

Definition
Projection onto a Subspace

Let S be a k-dimensional subspace of Rⁿ and let B = {v1, ..., vk} be an orthonormal basis of S. If x is any vector in Rⁿ, the projection of x onto S is defined to be

proj_S x = (x · v1)v1 + ··· + (x · vk)vk

The projection of x perpendicular to S is defined to be

perp_S x = x - proj_S x

Remark

Observe that a key component of this definition is that we have an orthonormal basis for the subspace S. We could, of course, make a similar definition for the projection if we have only an orthogonal basis. See Problem D6.

We have defined perp_S x so that we do not require an orthonormal basis for S⊥. However, we have to ensure that this is a valid equation by verifying that perp_S x ∈ S⊥. For any 1 ≤ i ≤ k, we have

vᵢ · perp_S x = vᵢ · [x - ((x · v1)v1 + ··· + (x · vk)vk)]
             = vᵢ · x - vᵢ · ((x · v1)v1 + ··· + (x · vk)vk)
             = vᵢ · x - (0 + ··· + 0 + (x · vᵢ)(vᵢ · vᵢ) + 0 + ··· + 0)
             = vᵢ · x - x · vᵢ
             = 0

since B is an orthonormal basis and the dot product is symmetric. Hence, perp_S x is orthogonal to every vector in the orthonormal basis {v1, ..., vk} of S. Hence, it is orthogonal to every vector in S.

EXAMPLE 3
Let S = Span{ [1, 1, 1, 1]ᵀ, [1, -1, 1, -1]ᵀ } and let x = [2, 5, -7, 3]ᵀ. Determine proj_S x and perp_S x.

Solution: An orthonormal basis for S is B = {v1, v2} = { (1/2)[1, 1, 1, 1]ᵀ, (1/2)[1, -1, 1, -1]ᵀ }. Thus, we get

proj_S x = (x · v1)v1 + (x · v2)v2 = (3/2)v1 + (-13/2)v2 = [-5/2, 4, -5/2, 4]ᵀ

perp_S x = x - proj_S x = [2, 5, -7, 3]ᵀ - [-5/2, 4, -5/2, 4]ᵀ = [9/2, 1, -9/2, -1]ᵀ
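The definition of proj_S and perp_S translates directly into code. The sketch below uses Example 3's orthonormal basis and vector as reconstructed here (an assumption if the printed values differ):

```python
import numpy as np

def proj(x, onb):
    """Projection of x onto Span(onb), where onb is an orthonormal list of vectors."""
    return sum(np.dot(x, v) * v for v in onb)

v1 = np.array([1.0,  1.0, 1.0,  1.0]) / 2
v2 = np.array([1.0, -1.0, 1.0, -1.0]) / 2
x = np.array([2.0, 5.0, -7.0, 3.0])

p = proj(x, [v1, v2])
perp = x - p

assert np.allclose(p, [-5/2, 4, -5/2, 4])
assert np.allclose(perp, [9/2, 1, -9/2, -1])
# perp is orthogonal to the subspace, as the text verifies in general
assert np.isclose(np.dot(perp, v1), 0) and np.isclose(np.dot(perp, v2), 0)
```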

EXERCISE2
Id = mJr m be an orthogonal basis for S and let 1 = m Deterntlne proj,1

and perps 1.

Recall that we showed in Chapter 1 that the projection of a vector x ∈ R³ onto a plane in R³ is the vector in the plane that is closest to x. We now prove that the projection of x ∈ Rⁿ onto a subspace S of Rⁿ is the vector in S that is closest to x.

Theorem 2 Approximation Theorem

Let S be a subspace of Rⁿ. Then, for any x ∈ Rⁿ, the unique vector s ∈ S that minimizes the distance ||x - s|| is s = proj_S x.

Proof: Let {v1, ..., vk} be an orthonormal basis for S and let {v_{k+1}, ..., v_n} be an orthonormal basis for S⊥. Then, for any x ∈ Rⁿ, we can write

x = x1v1 + ··· + x_n v_n,  where xᵢ = x · vᵢ

Any vector s ∈ S can be expressed as s = s1v1 + ··· + sk vk, so that the square of the distance from x to s is given by

||x - s||² = (x1 - s1)² + ··· + (xk - sk)² + x²_{k+1} + ··· + x²_n

To minimize the distance, we must choose sᵢ = xᵢ for 1 ≤ i ≤ k. But this means that s = (x · v1)v1 + ··· + (x · vk)vk = proj_S x.


The Gram-Schmidt Procedure

For many of the calculations in this chapter, we need an orthonormal (or orthogonal) basis for a subspace S of Rⁿ. If S is a k-dimensional subspace of Rⁿ, it is certainly possible to use the methods of Section 4.3 to produce some basis {w1, ..., wk} for S. We will now show that we can convert any such basis for S into an orthonormal basis {v1, ..., vk} for S. To simplify the description (and calculations), we first produce an orthogonal basis and then normalize each of the vectors to get an orthonormal basis.
The construction is inductive. That is, we first take v1 = w1 so that {v1} is an orthogonal basis for Span{w1}. Then, given an orthogonal basis {v1, ..., v_{i-1}} for Span{w1, ..., w_{i-1}}, we want to find a vector vᵢ such that {v1, ..., v_{i-1}, vᵢ} is an orthogonal basis for Span{w1, ..., wᵢ}. We repeat this procedure, called the Gram-Schmidt Procedure, until we have the desired orthogonal basis {v1, ..., vk} for Span{w1, ..., wk}. To do this, we will use the following theorem.

Theorem 3 Suppose that v1, ..., vk ∈ Rⁿ. Then

Span{v1, ..., vk} = Span{v1, ..., v_{k-1}, vk + t1v1 + ··· + t_{k-1}v_{k-1}}

for any t1, ..., t_{k-1} ∈ R.

You are asked to prove Theorem 3 in Problem D5.

Algorithm 1 The Gram-Schmidt Procedure is as follows.

First step: Let v1 = w1. Then the one-dimensional subspace spanned by v1 is obviously the same as the subspace spanned by w1. We will denote this subspace by S1 = Span{v1}.

Second step: We want to find a vector v2 such that it is orthogonal to v1 and Span{v1, v2} = Span{w1, w2}. We know that the perpendicular of a projection onto a subspace is orthogonal to the subspace, so we take

v2 = perp_{S1} w2 = w2 - proj_{S1} w2

Then v2 is orthogonal to v1 and Span{v1, v2} = Span{w1, w2} by Theorem 3. We denote the two-dimensional subspace by S2 = Span{v1, v2}.

i-th step: Suppose that i - 1 steps have been carried out so that {v1, ..., v_{i-1}} is orthogonal, and S_{i-1} = Span{v1, ..., v_{i-1}} = Span{w1, ..., w_{i-1}}. Let

vᵢ = perp_{S_{i-1}} wᵢ = wᵢ - proj_{S_{i-1}} wᵢ

By Theorem 3, {v1, ..., vᵢ} is orthogonal and Span{v1, ..., vᵢ} = Span{w1, ..., wᵢ}.

We continue this procedure until i = k, so that

S_k = Span{v1, ..., vk} = Span{w1, ..., wk} = S

and an orthogonal basis has been produced for the original subspace S.

Remarks

1. It is an important feature of the construction that S_{i-1} is a subspace of the next Sᵢ.

2. Since it is really only the direction of vᵢ that is important in this procedure, we can rescale each vᵢ in any convenient fashion to simplify the calculations.

3. Notice that the order of the vectors in the original basis has an effect on the calculations because each step takes the perpendicular part of the next vector. If the original vectors were given in a different order, the procedure might produce a different orthonormal basis.

4. Observe that the procedure does not actually require that we start with a basis for S; only a spanning set is required. The procedure will actually detect a linearly dependent vector by returning the zero vector when we take the perpendicular part. This is demonstrated in Example 5.
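The procedure and the remarks above can be sketched in a few lines of NumPy. This version normalizes as it goes (so it returns an orthonormal basis directly) and, per Remark 4, discards a zero perpendicular part, which signals a dependent input vector. The test vectors are Example 4's, as transcribed in this copy:

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-10):
    """Orthonormal basis for Span(vectors), built by the Gram-Schmidt Procedure."""
    basis = []
    for w in vectors:
        # perpendicular part of w relative to the span of the vectors found so far
        v = w - sum(np.dot(w, u) * u for u in basis)
        n = np.linalg.norm(v)
        if n > tol:                 # nonzero: w contributes a new direction
            basis.append(v / n)
    return basis

# Example 4's spanning vectors (as transcribed above)
ws = [np.array([ 1.0, 1.0, 0.0, 1.0, 1.0]),
      np.array([-1.0, 2.0, 1.0, 0.0, 1.0]),
      np.array([ 0.0, 1.0, 1.0, 1.0, 2.0])]
onb = gram_schmidt(ws)

assert len(onb) == 3
gram = np.array([[np.dot(u, v) for v in onb] for u in onb])
assert np.allclose(gram, np.eye(3))        # the result is orthonormal
```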

EXAMPLE 4
Use the Gram-Schmidt Procedure to find an orthonormal basis for the subspace of R⁵ defined by

S = Span{ [1, 1, 0, 1, 1]ᵀ, [-1, 2, 1, 0, 1]ᵀ, [0, 1, 1, 1, 2]ᵀ }

Solution: Call the vectors in the basis w1, w2, and w3, respectively.

First step: Let v1 = w1 = [1, 1, 0, 1, 1]ᵀ and S1 = Span{v1}.

Second step: Determine perp_{S1} w2:

perp_{S1} w2 = w2 - ((w2 · v1)/||v1||²) v1 = [-1, 2, 1, 0, 1]ᵀ - (2/4)[1, 1, 0, 1, 1]ᵀ = [-3/2, 3/2, 1, -1/2, 1/2]ᵀ

(It is wise to check your arithmetic by verifying that v1 · perp_{S1} w2 = 0.)

As mentioned above, we can take any scalar multiple of perp_{S1} w2, so we take v2 = [-3, 3, 2, -1, 1]ᵀ and S2 = Span{v1, v2}.

Third step: Determine perp_{S2} w3:

perp_{S2} w3 = w3 - ((w3 · v1)/||v1||²) v1 - ((w3 · v2)/||v2||²) v2 = [-1/4, -3/4, 1/2, 1/4, 3/4]ᵀ

(Again, it is wise to check that perp_{S2} w3 is orthogonal to v1 and v2.)

We take v3 = [-1, -3, 2, 1, 3]ᵀ. We now see that {v1, v2, v3} is an orthogonal basis for S. To obtain an orthonormal basis for S, we divide each vector in this basis by its length. Thus, we find that an orthonormal basis for S is

{ (1/2)[1, 1, 0, 1, 1]ᵀ, (1/(2√6))[-3, 3, 2, -1, 1]ᵀ, (1/(2√6))[-1, -3, 2, 1, 3]ᵀ }

EXAMPLE 5
Use the Gram-Schmidt Procedure to find an orthogonal basis for the subspace S = Span{w1, w2, w3} of R³.

Solution: Call the vectors in the spanning set w1, w2, and w3, respectively.

First step: Let v1 = w1 and S1 = Span{v1}.

Second step: Determine perp_{S1} w2. We take v2 to be a convenient scalar multiple of perp_{S1} w2 and set S2 = Span{v1, v2}.

Third step: Determine perp_{S2} w3:

perp_{S2} w3 = 0

Hence, w3 was in S2 and so w3 ∈ Span{v1, v2} = Span{w1, w2}. Therefore, {v1, v2} is an orthogonal basis for S.

PROBLEMS 7.2
Practice Problems

{[�]·[i] ·[;]}
Al Each of the following sets is orthogonal but not
2 (a
j
3
orthonormal. Determine the projection of x = 5

( b)
{[;].[i]·[�]}
{ =!
6
onto the subspace spanned by each set.

(a ) � =

u ( c)
P -I · - 1 }
{ � � �}
,

Cb ) '13 =
(d ) { � � -� � }
{ : = � -� }
1 ' 0 ' -1
, , ,

0 1 0
1
A4 Use the Gram-Schmidt Procedure to produce an
( c) C = orthonormal basis for the subspace spanned by

: each set.
,

{[i]·[;].[�] }
-

A2 Find a basis for the orthogonal complement of each (a)


of the following subspaces.

(a) S=Span
{U]} ( b)
{[J.[:J.[=:J}
(b ) S=Span
{ [ ; ]fi] } ( c) { � �} 0

{! . -I}
-

0 ' 1 ' -1
1 -1

(c) s =Span
1 1
0 0 1

(d) 1 -1
A3 Use the Gram-Schmidt Procedure to produce an 0 1
orthogonal basis for the subspace spanned by 0
each set.

A5 Let S be a subspace of Rⁿ. Prove that perp_S x = proj_{S⊥} x for any x ∈ Rⁿ.

Homework Problems

Bl Each of the following sets is orthogonal but not

orthonormal. Determine the projection of x = _


4
3
2
5
(a) .'.ll= { j _;}
1 -2

onto the subspace spanned by each set.



(b) � = p -; ) (c) {-! ; ' n


{I -� �)
'

(c) C = ,
, (d)

P. �. -n
{ : ' -� ' _: )
-

(d) v
B4 Use the Gram-Schmidt Procedure to produce an
orthonormal basis for the subspace spanned by

0
=

each set.
-1

{[�J-l=:Hm
l

B2 Find a basis for the orthogonal complement of each (a)

{[�]}
of the following subspaces.

(a) S=Span (b)


{[ll Ul lJ l�l}
{[j] fi]}
·

u : 'n
-

�) S=Span
(c)

(c) s =Span {[J .mrm -1


0

(d) S=Span { I -n
- . 7.
(d) 0

0
l
0
2

B3 Use the Gram-Schmidt Procedure to produce an


orthogonal basis for the subspace spanned by each set.

B5 Let S = Span{ ..., ... } and let x = ....
(a) Find an orthonormal basis B for S.
(b) Calculate perp_S x.
(c) Find an orthonormal basis C for S⊥.
(d) Calculate proj_{S⊥} x.
(b)
{l:HiJ. Ul}
Conceptual Problems
D1 Prove that if S is a k-dimensional subspace of Rⁿ, then S ∩ S⊥ = {0}.

D2 Prove that if S is a k-dimensional subspace of Rⁿ, then S⊥ is an (n - k)-dimensional subspace.

D3 Prove that if S is a k-dimensional subspace of Rⁿ, then (S⊥)⊥ = S.

D4 Prove that if {v1, ..., vk} is an orthonormal basis for S and {v_{k+1}, ..., v_n} is an orthonormal basis for S⊥, then {v1, ..., v_n} is an orthonormal basis for Rⁿ.

D5 Suppose that v1, ..., vk ∈ Rⁿ and t1, ..., t_{k-1} ∈ R. Prove that

Span{v1, ..., vk} = Span{v1, ..., v_{k-1}, vk + t1v1 + ··· + t_{k-1}v_{k-1}}

D6 Suppose that S is a k-dimensional subspace of Rⁿ and let B = {v1, ..., vk} be an orthogonal basis of S. For any x ∈ S, find the coordinates of proj_S x with respect to B.

D7 If {v1, ..., vk} is an orthonormal basis for a subspace S, verify that the standard matrix of proj_S can be written

[proj_S] = v1v1ᵀ + ··· + vk vkᵀ

7.3 Method of Least Squares
Suppose that an experimenter measures a variable y at times t1, t2, ..., tn and obtains the values y1, y2, ..., yn. For example, y might be the position of a particle at time t or the temperature of some body fluid. Suppose that the experimenter believes that the data fit (more or less) a curve of the form y = a + bt + ct². How should she choose a, b, and c to get the best-fitting curve of this form?
Let us consider some particular a, b, and c. If the data fit the curve y = a + bt + ct² perfectly, then for each i, yᵢ = a + btᵢ + ctᵢ². However, for arbitrary a, b, and c, we expect that at each tᵢ, there will be an error, denoted by eᵢ and measured by the vertical distance

eᵢ = yᵢ - (a + btᵢ + ctᵢ²)

as shown in Figure 7.3.3.
as shown in Figure 7.3.3.

Figure 7.3.3 Some data points and a curve y = a + bt + ct². Vertical line segments measure the error in the fit at each tᵢ.

One approach to finding the best-fitting curve might be to try to minimize the total error Σ eᵢ (summing over i = 1, ..., n). This would be unsatisfactory, however, because we might get a small total error by having large positive errors cancelled by large negative errors. Thus, we instead choose to minimize the sum of the squares of the errors

Σ eᵢ² = Σ (yᵢ - (a + btᵢ + ctᵢ²))²

This method is called the method of least squares.


To find the parameters a, b, and c that minimize this expression for given values t1, ..., tn and y1, ..., yn, one could use calculus, but we will proceed by using a projection. This requires us to set up the problem as follows.
Let y = [y1, ..., yn]ᵀ, 1 = [1, ..., 1]ᵀ, t = [t1, ..., tn]ᵀ, and t² = [t1², ..., tn²]ᵀ be vectors in Rⁿ. Now consider the distance from y to (a1 + bt + ct²). Observe that the square of this distance is exactly the sum of the squares of the errors:

||y - (a1 + bt + ct²)||² = Σ (yᵢ - (a + btᵢ + ctᵢ²))²

Next, observe that (a1 + bt + ct²) is a vector in the subspace S of Rⁿ spanned by B = {1, t, t²}. If at least three of the tᵢ are distinct, then B is linearly independent (see Problem D2), so it is a basis for S. Thus, the problem of finding the curve of best fit is reduced to finding a, b, and c such that a1 + bt + ct² is the vector in S that is closest to y. By the Approximation Theorem, this vector is proj_S y, and the required a, b, and c are the B-coordinates of proj_S y.
Given what we know so far, it might seem that we should proceed by transforming B into an orthonormal basis for S so that we can find the projection. However, we can use the theory of orthogonality and projections to simplify the problem. If a, b, and c have been chosen correctly, the error vector e = y - a1 - bt - ct² is equal to perp_S y. In particular, it must be orthogonal to every vector in S, so it is orthogonal to 1, t, and t². Therefore,

1 · e = 1 · (y - a1 - bt - ct²) = 0
t · e = t · (y - a1 - bt - ct²) = 0
t² · e = t² · (y - a1 - bt - ct²) = 0

The required a, b, and c are determined as the solutions of this system of three equations in the three variables a, b, and c.
It is helpful to rewrite these equations by introducing the matrix

X = [1  t  t²]

and the vector a = [a, b, c]ᵀ of parameters. Then the error vector can be written as e = y - Xa. Since the three equations are obtained by taking dot products of e with the columns of X, the system of equations can be written in the form

Xᵀ(y - Xa) = 0

The equations in this form are called the normal equations for the least squares fit. Since the columns of X are linearly independent, the matrix XᵀX is a 3 × 3 invertible matrix (see Problem D2), and the normal equations can be rewritten as

a = (XᵀX)⁻¹Xᵀy

Thus, the normal equations have a unique solution.
For a more general situation, we use a similar construction. The matrix X, called the design matrix, depends on the desired model curve and the way the data are collected. This will be demonstrated in Example 2 below.
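The normal-equations recipe can be carried out in a few lines of NumPy. The sketch below uses the data of Example 1 (values as printed there); it calls np.linalg.solve on (XᵀX)a = Xᵀy rather than forming the inverse explicitly, which is the numerically preferable route:

```python
import numpy as np

t = np.array([1.0, 2.1, 3.1, 4.0, 4.9, 6.0])
y = np.array([6.1, 12.6, 21.1, 30.2, 40.9, 55.5])

# Design matrix X = [1  t  t^2] for the model y = a + b t + c t^2
X = np.column_stack([np.ones_like(t), t, t**2])

# Solve the normal equations (X^T X) a = X^T y
a = np.linalg.solve(X.T @ X, X.T @ y)

# The error vector y - X a is orthogonal to the columns of X
assert np.allclose(X.T @ (y - X @ a), 0.0, atol=1e-8)
# Coefficients agree with those reported in Example 1
assert np.allclose(a, [1.63175, 3.38382, 0.93608], atol=1e-3)
```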

EXAMPLE 1
Suppose that the experimenter's data are as follows:

t | 1.0   2.1   3.1   4.0   4.9   6.0
y | 6.1  12.6  21.1  30.2  40.9  55.5

Solution: As in the earlier discussion, the experimenter wants to find the curve y = a + bt + ct² that best fits the data, in the sense of minimizing the sum of the squares of the errors. We let

Xᵀ = [1  t  t²]ᵀ = [ 1    1     1     1     1      1   ]
                   [ 1.0  2.1   3.1   4.0   4.9    6.0 ]
                   [ 1.0  4.41  9.61  16.0  24.01  36.0]

y = [6.1, 12.6, 21.1, 30.2, 40.9, 55.5]ᵀ

Using a computer, we can find that the solution of the system a = (XᵀX)⁻¹Xᵀy is

a = [1.63175, 3.38382, 0.93608]ᵀ

The data do not justify retaining so many decimal places, so we take the best-fitting quadratic curve to be y = 1.63 + 3.38t + 0.94t². The results are shown in Figure 7.3.4.

Figure 7.3.4 The data points and the best-fitting curve from Example 1.

EXAMPLE 2
Find a and b to obtain the best-fitting equation of the form y = at² + bt for the following data:

t | -1     1
y |  4     1

Solution: Using the method above, we observe that we want the error vector e = y - at² - bt to be equal to perp_S y. In particular, e must be orthogonal to t² and t. Therefore,

t² · e = t² · (y - at² - bt) = 0
t · e = t · (y - at² - bt) = 0

In this case, we want to pick X to be of the form

X = [t²  t]

Taking a = [a, b]ᵀ then gives

a = (XᵀX)⁻¹Xᵀy = [5/2, -3/2]ᵀ

So, y = (5/2)t² - (3/2)t is the equation of best fit for the given data.

Overdetermined Systems
The problem of finding the best-fitting curve can be viewed as a special case of the problem of "solving" an overdetermined system. Suppose that Ax = b is a system of p equations in q variables, where p is greater than q. With more equations than variables, we expect the system to be inconsistent unless b has some special properties. If the system is inconsistent, we say that the system is overdetermined; that is, there are too many equations to be satisfied.
Note that the problem in Example 1 of finding the best-fitting quadratic curve was of this form: we needed to solve Xa = y for the three variables a, b, and c, where there were n equations. Thus, for n > 3, this is an overdetermined system.
If there is no x such that Ax = b, the next best "solution" is to find a vector x that minimizes the "error" ||Ax - b||. However, Ax = x1a1 + ··· + xq aq, which is a vector in the columnspace of A. Therefore, our challenge is to find x such that Ax is the point in the columnspace of A that is closest to b. By the Approximation Theorem, we know that this vector is proj_{Col(A)} b. Thus, to find a vector x that minimizes the "error"

||Ax - b||, we want to solve the consistent system Ax = proj_{Col(A)} b. Using an argument analogous to that in the special case above, it can be shown that this vector x must also satisfy the normal system

AᵀAx = Aᵀb

EXAMPLE 3
Verify that the following system Ax = b is inconsistent and then determine the vector x that minimizes ||Ax - b||:

3x1 - x2 = 4
x1 + 2x2 = 0
2x1 + x2 = 1

Solution: Write the augmented matrix [A | b] and row reduce:

[ 3 -1 | 4 ]     [ 1 2 |  0 ]
[ 1  2 | 0 ] ~   [ 0 1 | -2 ]
[ 2  1 | 1 ]     [ 0 0 | -5 ]

The last row indicates that the system is inconsistent. The x that minimizes ||Ax - b|| must satisfy AᵀAx = Aᵀb. Solving for x in this system gives

x = (AᵀA)⁻¹Aᵀb = [ 14  1 ]⁻¹ [ 14 ] = (1/83) [  6  -1 ] [ 14 ] = [  87/83 ]
                 [  1  6 ]   [ -3 ]          [ -1  14 ] [ -3 ]   [ -56/83 ]

So, x = [87/83, -56/83]ᵀ is the vector that minimizes ||Ax - b||.

PROBLEMS 7.3
Practice Problems

Al Find a and b to obtain the best-fitting equation of 2 t -2 -1 0 1 2


(b) y =a+ bt for the data
the form y = a+ bt for the given data. Make a graph y 1 2 3 -2
showing the data and the best-fitting line.
t 1 2 3 4 5
A3 Verify that the following systems Ax = b are in­
(a) consistent and then determine for each system the
y 9 6 5 3
vector x that minimizes llAx - bll.
t -2 -1 0 1 2
(b) X1 + 2Xz 5
y 2 2 4 4 5
(a) 2x1 - 3x2 = 6
A2 Find the best-fitting equation of the given form for X1 - 12x2 = -4
each set of data.
2 t -1 0 1 2x1 + 3x2 = -4
(a) y = at + bt for the data
y 4 (b) 3x1 - 2x2 = 4
Xj - 6xz 7

Homework Problems

Bl Find a and b to obtain the best-fitting equation of 2 0 2


(a) y = at + bt for the data
the form y =a+ bt for the given data. Make a graph y -1 1 -1
showing the data and the best-fitting line.
t -2 - I 0 1 2
t -2 -] 0 1 2 3
(b) y =a + bt+ ct for the data
(a) y
_ _ _ _

5 2 1 1 2
y 9 8 5 3
t I 2 3 4 5 B4 Verify that the following systems Ax = b are in­
(b)
y 4 3 4 5 5 consistent and then determine for each system the

B2 Find a, b, and c to obtain the best-fitting equation vector x that minimizes llAx - bll.
2 Xj - Xz = 4
of the form y = a+ bt + ct for the given data. Make
(a) 3x1 + 2x2 5
a graph showing the data and the best-fitting curve.
X1 - 6x2 10

t -2 -1 0 2
X1 + Xz = 7
y 3 2 0 2 8
(b) X1 - Xz = 4
Xj + 3x2 14
B3 Find the best-fitting equation of the given form for
each set of data.

Computer Problems

C1 Find a, b, and c to obtain the best-fitting equation of the form y = a + bt + ct² for the following data. Make a graph showing the data and the curve.

t | 0.0  1.1  1.9  3.0  4.1   5.2
y | 4.0  3.6  4.1  5.6  7.9  11.8

Conceptual Problems

D1 Let X = [1  t  t²], where t = [t1, ..., tn]ᵀ and t² = [t1², ..., tn²]ᵀ. Show that

XᵀX = [ n     Σtᵢ   Σtᵢ² ]
      [ Σtᵢ   Σtᵢ²  Σtᵢ³ ]
      [ Σtᵢ²  Σtᵢ³  Σtᵢ⁴ ]

where each sum runs over i = 1, ..., n.

D2 Let X = [1  t  t²  ···  t^m], where t = [t1, ..., tn]ᵀ and t^i = [t1^i, ..., tn^i]ᵀ for 1 ≤ i ≤ m. Assume that at least m + 1 of the numbers t1, ..., tn are distinct.
(a) Prove that the columns of X are linearly independent by showing that the only solution to c0 1 + c1 t + ··· + cm t^m = 0 is c0 = ··· = cm = 0. (Hint: Let p(t) = c0 + c1t + ··· + cm t^m and show that if c0 1 + c1 t + ··· + cm t^m = 0, then p(t) must be the zero polynomial.)
(b) Use the result from part (a) to prove that XᵀX is invertible. (Hint: Show that the only solution to XᵀXv = 0 is v = 0 by considering ||Xv||².)

7.4 Inner Product Spaces

In Sections 1.3, 1.4, and 7.2, we saw that the dot product plays an essential role in the discussion of lengths, distances, and projections in Rⁿ. In Chapter 4, we saw that the ideas of vector spaces and linear mappings apply to more general sets, including some function spaces. If ideas such as projections are going to be used in these more general spaces, it will be necessary to have a generalization of the dot product to general vector spaces.

Inner Product Spaces

Consideration of the most essential properties of the dot product in Section 1.3 leads to the following definition.

Definition
Inner Product
Inner Product Space

Let V be a vector space over R. An inner product on V is a function ( , ): V × V → R such that

(1) (v, v) ≥ 0 for all v ∈ V, and (v, v) = 0 if and only if v = 0. (positive definite)
(2) (v, w) = (w, v) for all v, w ∈ V. (symmetric)
(3) (v, sw + tz) = s(v, w) + t(v, z) for all s, t ∈ R and v, w, z ∈ V. (bilinear)

A vector space V with an inner product is called an inner product space.

Remark

Every non-trivial finite-dimensional vector space V has in fact infinitely many different
inner products. When we talk about an inner product space, we mean the vector space
and one particular inner product.

EXAMPLE 1 The dot product on Rⁿ is an inner product on Rⁿ.



EXAMPLE 2
Show that the function ( , ) defined by

(x, y) = 2x1y1 + 3x2y2

is an inner product on R².

Solution: We verify that ( , ) satisfies the three properties of an inner product:

1. (x, x) = 2x1² + 3x2² ≥ 0, and (x, x) = 0 if and only if x = 0. Thus, it is positive definite.

2. (x, y) = 2x1y1 + 3x2y2 = 2y1x1 + 3y2x2 = (y, x). Thus, it is symmetric.

3. For any x, w, z ∈ R² and s, t ∈ R,

(x, sw + tz) = 2x1(sw1 + tz1) + 3x2(sw2 + tz2)
             = s(2x1w1 + 3x2w2) + t(2x1z1 + 3x2z2)
             = s(x, w) + t(x, z)

So, ( , ) is bilinear. Thus, ( , ) is an inner product on R².

Remark

Although there are infinitely many inner products on Rⁿ, it can be proven that for any inner product on Rⁿ there exists an orthonormal basis such that the inner product is just the dot product on Rⁿ with respect to this basis. See Problem D1.

EXAMPLE 3
Verify that (p, q) = p(0)q(0) + p(1)q(1) + p(2)q(2) defines an inner product on the vector space P2 and determine (1 + x + x², 2 - 3x²).

Solution: We first verify that ( , ) satisfies the three properties of an inner product:

(1) (p, p) = (p(0))² + (p(1))² + (p(2))² ≥ 0 for all p ∈ P2. Moreover, (p, p) = 0 if and only if p(0) = p(1) = p(2) = 0, and the only p ∈ P2 that is zero for three values of x is the zero polynomial, p(x) = 0. Thus, ( , ) is positive definite.

(2) (p, q) = p(0)q(0) + p(1)q(1) + p(2)q(2) = q(0)p(0) + q(1)p(1) + q(2)p(2) = (q, p). So, ( , ) is symmetric.

(3) For any p, q, r ∈ P2 and s, t ∈ R,

(p, sq + tr) = p(0)(sq(0) + tr(0)) + p(1)(sq(1) + tr(1)) + p(2)(sq(2) + tr(2))
             = s(p(0)q(0) + p(1)q(1) + p(2)q(2)) + t(p(0)r(0) + p(1)r(1) + p(2)r(2))
             = s(p, q) + t(p, r)

So, ( , ) is bilinear. Thus, ( , ) is an inner product on P2. That is, P2 is an inner product space under the inner product ( , ).
In this inner product space, we have

(1 + x + x², 2 - 3x²) = (1 + 0 + 0)(2 - 0) + (1 + 1 + 1)(2 - 3) + (1 + 2 + 4)(2 - 12)
                      = 2 - 3 - 70 = -71
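Example 3's inner product on P2 is defined by evaluation at 0, 1, and 2, which makes it easy to experiment with in plain Python (polynomials represented as callables):

```python
def inner(p, q):
    """<p, q> = p(0)q(0) + p(1)q(1) + p(2)q(2) on P2."""
    return sum(p(t) * q(t) for t in (0, 1, 2))

p = lambda x: 1 + x + x**2
q = lambda x: 2 - 3 * x**2

assert inner(p, q) == -71          # the value computed in Example 3
assert inner(p, q) == inner(q, p)  # symmetry
assert inner(p, p) > 0             # positive definiteness for this nonzero p
```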

EXAMPLE4 Let tr(C) represent the trace of a matrix (the sum of the diagonal entries). Then, M(2, 2)
is an inner product space under the inner product defined by (A, B) = tr(Br A). If

A = [� _ � ] and B =[� �]. then under this inner product, we have

(A, B) = tr(BᵀA) = 4 + 4 = 8

EXERCISE 1  Verify that ⟨A, B⟩ = tr(BᵀA) is an inner product for M(2, 2). Do you notice a
relationship between this inner product and the dot product on ℝ⁴?
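As a hint toward Exercise 1, the following NumPy sketch (variable names are ours) checks on random matrices that tr(BᵀA) equals the dot product of the flattened matrices, that is, the dot product on ℝ⁴:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))

# The trace inner product on M(2, 2)
ip_val = np.trace(B.T @ A)

# Flattening each matrix into a vector in R^4 turns the trace inner
# product into the ordinary dot product: tr(B^T A) = sum_ij A_ij B_ij
assert np.isclose(ip_val, A.flatten() @ B.flatten())
```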

Since these properties of the inner product mimic the properties of the dot product,
it makes sense to define the norm or length of a vector and the distance between vectors
in terms of the inner product.

Definition   Let V be an inner product space. Then, for any v ∈ V, we define the norm (or length)
Norm         of v to be
Distance
                 ||v|| = √⟨v, v⟩

             For any vectors v, w ∈ V, the distance between v and w is

                 ||v − w||

Definition   A vector v in an inner product space V is called a unit vector if ||v|| = 1.
Unit Vector

EXAMPLE 5
Find the norm of A = [� �] in M(2, 2) under the inner product (A, B) = tr(Br A).

Solution: We have

    ||A|| = √⟨A, A⟩ = √tr(AᵀA) = √(5 + 1) = √6


Section 7.4 Inner Product Spaces 351

EXAMPLE 6  Find the norm of p(x) = 1 − 2x − x² in P₂ under the inner product ⟨p, q⟩ = p(0)q(0)
+ p(1)q(1) + p(2)q(2).
Solution: We have

    ||1 − 2x − x²|| = √⟨p, p⟩
                    = √((p(0))² + (p(1))² + (p(2))²)
                    = √(1² + (1 − 2 − 1)² + (1 − 4 − 4)²)
                    = √54

EXERCISE 2  Find the norms of p(x) = 1 and q(x) = x in P₂ under the inner product
⟨p, q⟩ = p(0)q(0) + p(1)q(1) + p(2)q(2).

EXERCISE 3  Find the norm of q(x) = x in P₂ under the inner product

    ⟨p, q⟩ = p(−1)q(−1) + p(0)q(0) + p(1)q(1).
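The norms in Example 6 and Exercises 2 and 3 can be checked numerically. A Python sketch follows, using our own helpers `ip` and `norm`:

```python
import numpy as np

def ip(p, q, nodes):
    # Evaluation inner product on P_2: sum of p(x_i)q(x_i) over the nodes.
    # p, q are coefficient lists [c0, c1, c2] for c0 + c1*x + c2*x^2.
    return sum(np.polyval(p[::-1], t) * np.polyval(q[::-1], t) for t in nodes)

def norm(p, nodes):
    return np.sqrt(ip(p, p, nodes))

# Example 6: ||1 - 2x - x^2||^2 = 54 at the nodes 0, 1, 2
print(norm([1, -2, -1], (0, 1, 2)) ** 2)   # 54

# Exercise 2: ||1|| = sqrt(3); Exercise 3: ||x||^2 = 2 at nodes -1, 0, 1
print(norm([1, 0, 0], (0, 1, 2)))          # sqrt(3)
print(norm([0, 1, 0], (-1, 0, 1)) ** 2)    # 2
```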

In Sections 7.1 and 7 .2 we saw that the concept of orthogonality is very useful.
Hence, we extend this concept to general inner product spaces.

Definition    Let V be an inner product space with inner product ⟨ , ⟩. Then two vectors v, w ∈ V
Orthogonal    are said to be orthogonal if ⟨v, w⟩ = 0. The set of vectors {v₁, ..., vₖ} in V is said to
Orthonormal   be orthogonal if ⟨vᵢ, vⱼ⟩ = 0 for all i ≠ j. The set is said to be orthonormal if we also
              have ⟨vᵢ, vᵢ⟩ = 1 for all i.

With this definition, we can now repeat our arguments from Sections 7.1 and 7.2
for coordinates with respect to an orthonormal basis and projections. In particular, we
get that if B = {v₁, ..., vₖ} is an orthonormal basis for a subspace S of an inner product
space V with inner product ⟨ , ⟩, then for any x ∈ V we have

    proj_S x = ⟨x, v₁⟩v₁ + ··· + ⟨x, vₖ⟩vₖ

Additionally, the Gram-Schmidt Procedure is also identical. If we have a basis
{w₁, ..., wₙ} for an inner product space V with inner product ⟨ , ⟩, then the set
{v₁, ..., vₙ} defined by

    v₁ = w₁
    vᵢ = wᵢ − (⟨wᵢ, v₁⟩/||v₁||²)v₁ − ··· − (⟨wᵢ, vᵢ₋₁⟩/||vᵢ₋₁||²)vᵢ₋₁,   i = 2, ..., n

is an orthogonal basis for V.

EXAMPLE 7  Use the Gram-Schmidt Procedure to determine an orthogonal basis for S =
Span{1, x} of P₂ under the inner product ⟨p, q⟩ = p(0)q(0) + p(1)q(1) + p(2)q(2).
Use this basis to determine proj_S x².
Solution: Denote the basis vectors of S by p₁(x) = 1 and p₂(x) = x. We want to
find an orthogonal basis {q₁(x), q₂(x)} for S. By using the Gram-Schmidt Procedure,
we take q₁(x) = p₁(x) = 1 and then let

    q₂(x) = p₂(x) − (⟨p₂, q₁⟩/||q₁||²)q₁(x) = x − ((0 + 1 + 2)/(1² + 1² + 1²))(1) = x − 1

Therefore, our orthogonal basis is {q₁, q₂} = {1, x − 1}.
Hence, we have

    proj_S x² = (⟨x², 1⟩/||1||²)(1) + (⟨x², x − 1⟩/||x − 1||²)(x − 1)
              = ((0(1) + 1(1) + 4(1))/(1² + 1² + 1²))(1) + ((0(−1) + 1(0) + 4(1))/((−1)² + 0² + 1²))(x − 1)
              = (5/3)(1) + 2(x − 1) = 2x − 1/3
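Example 7 can be verified numerically. Under this inner product a polynomial in P₂ is determined by its values at the nodes 0, 1, 2, and the inner product becomes an ordinary dot product of value vectors; the NumPy sketch below uses that representation (our own choice, not the text's):

```python
import numpy as np

# Represent each polynomial by its values at the nodes 0, 1, 2;
# then <p, q> = p(0)q(0) + p(1)q(1) + p(2)q(2) is a plain dot product.
nodes = np.array([0.0, 1.0, 2.0])

one  = np.ones(3)       # values of p1(x) = 1
x    = nodes.copy()     # values of p2(x) = x
x_sq = nodes ** 2       # values of x^2

# Gram-Schmidt on {1, x}
q1 = one
q2 = x - (x @ q1) / (q1 @ q1) * q1          # values of x - 1

# Projection of x^2 onto S = Span{1, x}
proj = (x_sq @ q1) / (q1 @ q1) * q1 + (x_sq @ q2) / (q2 @ q2) * q2

# Compare with the hand computation 2x - 1/3 evaluated at the nodes
assert np.allclose(proj, 2 * nodes - 1 / 3)
```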

PROBLEMS 7.4
Practice Problems

A1 On P₂, define the inner product ⟨p, q⟩ = p(0)q(0) + p(1)q(1) + p(2)q(2).
   Calculate the following.
   (a) ⟨x − 2x², 1 + 3x⟩
   (b) ⟨2 − x + 3x², 4 − 3x²⟩
   (c) ||3 − 2x + x²||
   (d) ||9 + 9x + 9x²||

A2 In each of the following cases, determine whether ⟨ , ⟩ defines an inner product on P₂.
   (a) ⟨p, q⟩ = p(0)q(0) + p(1)q(1)
   (b) ⟨p, q⟩ = |p(0)q(0)| + |p(1)q(1)| + |p(2)q(2)|
   (c) ⟨p, q⟩ = p(−1)q(−1) + 2p(0)q(0) + p(1)q(1)
   (d) ⟨p, q⟩ = p(−1)q(1) + 2p(0)q(0) + p(1)q(−1)

A3 On M(2, 2) define the inner product ⟨A, B⟩ = tr(BᵀA).
   (i) Use the Gram-Schmidt Procedure to determine an orthonormal basis for the
       following subspaces of M(2, 2).
       (a) S = Span{ … }
       (b) S = Span{ … }
   (ii) Use the orthonormal basis you found in part (i) to determine proj_S [4 3; −2 1].

A4 Define the inner product ⟨x, y⟩ = 2x₁y₁ + x₂y₂ + 3x₃y₃ on ℝ³.
   (a) Use the Gram-Schmidt Procedure to determine an orthogonal basis for
       S = Span{ … }
   (b) Determine the coordinates of x = … with respect to the orthogonal basis
       you found in (a).

A5 Let {v₁, ..., vₖ} be an orthogonal set in an inner product space V. Prove that

       ||v₁ + ··· + vₖ||² = ||v₁||² + ··· + ||vₖ||²

Homework Problems

B1 On P₂, define the inner product ⟨p, q⟩ = p(0)q(0) + p(1)q(1) + p(2)q(2).
   Calculate the following.
   (a) ⟨1 − 3x², 1 + x + 2x²⟩
   (b) ⟨3 − x, −2 − x − x²⟩
   (c) ||1 − 5x + 2x²||
   (d) ||73x + 73x²||

B2 In each of the following cases, determine whether ⟨ , ⟩ defines an inner product on P₃.
   (a) ⟨p, q⟩ = p(−1)q(−1) + p(0)q(0) + p(1)q(1)
   (b) ⟨p, q⟩ = p(0)q(0) + p(1)q(1) + p(3)q(3) + p(4)q(4)
   (c) ⟨p, q⟩ = p(−1)q(0) + p(1)q(1) + p(2)q(2) + p(3)q(3)

B3 On P₂ define the inner product ⟨p, q⟩ = p(−1)q(−1) + p(0)q(0) + p(1)q(1).
   (i) Use the Gram-Schmidt Procedure to determine an orthogonal basis for the
       following subspaces of P₂.
       (a) S = Span{1, x − x²}
       (b) S = Span{1 + x², x − x²}
   (ii) Use the orthogonal basis you found in part (i) to determine proj_S(1 + x + x²).

B4 Define the inner product ⟨x, y⟩ = x₁y₁ + 3x₂y₂ + 2x₃y₃ on ℝ³.
   (a) Use the Gram-Schmidt Procedure to determine an orthogonal basis for
       S = Span{ … }
   (b) Determine the coordinates of x = … with respect to the orthogonal basis
       you found in (a).

Conceptual Problems

D1 (a) Let {e₁, e₂} be the standard basis for ℝ² and suppose that ⟨ , ⟩ is an inner
       product on ℝ². Show that if x, y ∈ ℝ²,

           ⟨x, y⟩ = x₁y₁⟨e₁, e₁⟩ + x₁y₂⟨e₁, e₂⟩ + x₂y₁⟨e₂, e₁⟩ + x₂y₂⟨e₂, e₂⟩

   (b) For the inner product in part (a), define a matrix G, called the standard
       matrix of the inner product ⟨ , ⟩, by gᵢⱼ = ⟨eᵢ, eⱼ⟩ for i, j = 1, 2.
       Show that G is symmetric and that

           ⟨x, y⟩ = Σ_{i,j=1}^{2} gᵢⱼ xᵢ yⱼ = xᵀGy

   (c) Apply the Gram-Schmidt Procedure, using the inner product ⟨ , ⟩ and the
       corresponding norm, to produce an orthonormal basis B = {v₁, v₂} for ℝ².
   (d) Define G, the B-matrix of the inner product ⟨ , ⟩, by gᵢⱼ = ⟨vᵢ, vⱼ⟩ for
       i, j = 1, 2. Show that G = I and that for x = x̃₁v₁ + x̃₂v₂ and
       y = ỹ₁v₁ + ỹ₂v₂,

           ⟨x, y⟩ = x̃₁ỹ₁ + x̃₂ỹ₂

   Conclusion: For an arbitrary inner product ⟨ , ⟩ on ℝ², there exists a basis for ℝ²
   that is orthonormal with respect to this inner product. Moreover, when x and y
   are expressed in terms of this basis, ⟨x, y⟩ looks just like the standard inner
   product in ℝ². This argument generalizes in a straightforward way to ℝⁿ; see
   Problem 8.2.D6.

D2 Suppose that {v₁, v₂, v₃} is a basis for an inner product space V with inner
   product ⟨ , ⟩. Define a matrix G by gᵢⱼ = ⟨vᵢ, vⱼ⟩.
   (a) Prove that G is symmetric (Gᵀ = G).
   (b) Show that if [x]_B = [x̃₁; x̃₂; x̃₃] and [y]_B = [ỹ₁; ỹ₂; ỹ₃], then

           ⟨x, y⟩ = [x]_Bᵀ G [y]_B

   (c) Determine the matrix G of the inner product ⟨p, q⟩ = p(0)q(0) + p(1)q(1) +
       p(2)q(2) for P₂ with respect to the basis {1, x, x²}.

7.5 Fourier Series

The Inner Product ∫_a^b f(x)g(x) dx

Let C[a, b] be the space of functions f : ℝ → ℝ that are continuous on the interval
[a, b]. Then, for any f, g ∈ C[a, b] we have that the product fg is also continuous
on [a, b] and hence integrable on [a, b]. Therefore, it makes sense to define an inner
product as follows.
The inner product ⟨ , ⟩ is defined on C[a, b] by

    ⟨f, g⟩ = ∫_a^b f(x)g(x) dx

The three properties of an inner product are satisfied because

(1) ⟨f, f⟩ = ∫_a^b f(x)f(x) dx ≥ 0 for all f ∈ C[a, b], and ⟨f, f⟩ = ∫_a^b f(x)f(x) dx = 0
    if and only if f(x) = 0 for all x ∈ [a, b].

(2) ⟨f, g⟩ = ∫_a^b f(x)g(x) dx = ∫_a^b g(x)f(x) dx = ⟨g, f⟩

(3) ⟨f, sg + th⟩ = ∫_a^b f(x)(sg(x) + th(x)) dx = s ∫_a^b f(x)g(x) dx + t ∫_a^b f(x)h(x) dx
    = s⟨f, g⟩ + t⟨f, h⟩ for any s, t ∈ ℝ.


Since an integral is the limit of sums, this inner product, defined as the integral of
the product of the values of f and g at each x, is a fairly natural generalization of the
dot product in ℝⁿ, defined as a sum of the products of the i-th components of x and y
for each i.
One interesting consequence is that the norm of a function f with respect to this
inner product is

    ||f|| = ( ∫_a^b f²(x) dx )^(1/2)
Section 7.5 Fourier Series 355

Intuitively, this is quite satisfactory as a measure of how far the function is from the
zero function.
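The "integral as a limit of sums" remark can be illustrated numerically; the following sketch (grid size and variable names are ours) approximates ⟨sin x, sin x⟩ on [−π, π] by a scaled dot product of sample values:

```python
import numpy as np

# A Riemann sum is a scaled dot product of sample-value vectors,
# so the integral inner product is a limit of dot products.
n = 100_000
x = np.linspace(-np.pi, np.pi, n, endpoint=False)
dx = 2 * np.pi / n

f = np.sin(x)
g = np.sin(x)

ip_fg = np.sum(f * g) * dx     # approximates the integral of sin^2 x = pi
assert abs(ip_fg - np.pi) < 1e-6
```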
One of the most interesting and important applications of this inner product in­
volves Fourier series.

Fourier Series
Let CP2π denote the space of continuous real-valued functions of a real variable that
are periodic with period 2π. Such functions satisfy f(x + 2π) = f(x) for all x. Examples
of such functions are f(x) = c for any constant c, cos x, sin x, cos 2x, sin 3x, etc. (Note
that the function cos 2x is periodic with period 2π because cos(2(x + 2π)) = cos 2x.
However, its "fundamental (smallest) period" is π.) In some electrical engineering
applications, it is of interest to consider a signal described by functions such as the
function

    f(x) = { −π − x   if −π ≤ x ≤ −π/2
           {  x       if −π/2 < x ≤ π/2
           {  π − x   if π/2 < x ≤ π

This function is shown in Figure 7.5.5.

Figure 7.5.5  A continuous periodic function.

In the early nineteenth century, while studying the problem of the conduction of
heat, Fourier had the brilliant idea of trying to represent an arbitrary function in CP2rr
as a linear combination of the set of functions

{ 1, cos x, sin x, cos 2x, sin 2x, ... , cos nx, sin nx, ...}

This idea developed into Fourier analysis, which is now one of the essential tools in
quantum physics, communication engineering, and many other areas.
We formulate the questions and ideas as follows. (The proofs of the statements are
discussed below.)

(i) For any n, the set of functions {1, cos x, sin x, cos 2x, sin 2x, ..., cos nx, sin nx}
    is an orthogonal set with respect to the inner product

        ⟨f, g⟩ = ∫_{−π}^{π} f(x)g(x) dx

    The set is therefore an orthogonal basis for the subspace of CP2π that it spans.
    This subspace will be denoted CP2π,n.

(ii) Given an arbitrary function f in CP2π, how well can it be approximated by a
     function in CP2π,n? We expect from our experience with distance and subspaces
     that the closest approximation to f in CP2π,n is proj_{CP2π,n} f. The coefficients
     for Fourier's representation of f by a linear combination of {1, cos x, sin x, ...,
     cos nx, sin nx, ...}, called Fourier coefficients, are found by considering this
     projection.

(iii) We hope that the approximation improves as n gets larger. Since the distance
      from f to the n-th approximation proj_{CP2π,n} f is ||perp_{CP2π,n} f||, to test if the
      approximation improves, we must examine whether ||perp_{CP2π,n} f|| → 0 as n → ∞.
Let us consider these statements in more detail.

Let us consider these statements in more detail.

(i) The orthogonality of constants, sines, and cosines with respect to the inner
    product ⟨f, g⟩ = ∫_{−π}^{π} f(x)g(x) dx

These results follow from standard trigonometric integrals and trigonometric
identities:

    ∫_{−π}^{π} sin nx dx = −(1/n) cos nx |_{−π}^{π} = 0

    ∫_{−π}^{π} cos nx dx = (1/n) sin nx |_{−π}^{π} = 0

    ∫_{−π}^{π} cos mx sin nx dx = ∫_{−π}^{π} (1/2)(sin(m + n)x − sin(m − n)x) dx = 0

and for m ≠ n,

    ∫_{−π}^{π} cos mx cos nx dx = ∫_{−π}^{π} (1/2)(cos(m + n)x + cos(m − n)x) dx = 0

    ∫_{−π}^{π} sin mx sin nx dx = ∫_{−π}^{π} (1/2)(cos(m − n)x − cos(m + n)x) dx = 0

Hence, the set {1, cos x, sin x, ..., cos nx, sin nx} is orthogonal. To use this as a basis
for projection arguments, it is necessary to calculate ||1||², ||cos mx||², and ||sin mx||²:

    ||1||² = ∫_{−π}^{π} 1 dx = 2π

    ||cos mx||² = ∫_{−π}^{π} cos² mx dx = ∫_{−π}^{π} (1/2)(1 + cos 2mx) dx = π

    ||sin mx||² = ∫_{−π}^{π} sin² mx dx = ∫_{−π}^{π} (1/2)(1 − cos 2mx) dx = π
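The orthogonality relations and the norms above can be confirmed numerically; the discrete sums below (grid choice is ours) reproduce them essentially to machine precision:

```python
import numpy as np

# Uniform grid over one period; a periodic Riemann sum is exact for
# trigonometric polynomials of modest degree.
n = 4096
x = np.linspace(-np.pi, np.pi, n, endpoint=False)
dx = 2 * np.pi / n

def ip(f, g):
    return np.sum(f * g) * dx

assert abs(ip(np.cos(2 * x), np.sin(3 * x))) < 1e-10          # cos 2x orthogonal to sin 3x
assert abs(ip(np.cos(x), np.cos(4 * x))) < 1e-10              # cos x orthogonal to cos 4x
assert abs(ip(np.sin(2 * x), np.sin(5 * x))) < 1e-10          # sin 2x orthogonal to sin 5x
assert abs(ip(np.cos(3 * x), np.cos(3 * x)) - np.pi) < 1e-10  # ||cos 3x||^2 = pi
assert abs(ip(np.sin(3 * x), np.sin(3 * x)) - np.pi) < 1e-10  # ||sin 3x||^2 = pi
```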

(ii) The Fourier coefficients of f as coordinates of a projection with respect to the
     orthogonal basis for CP2π,n

The procedure for finding the closest approximation proj_{CP2π,n} f in CP2π,n to an
arbitrary function f in CP2π is parallel to the procedure in Sections 7.2 and 7.4.
That is, we use the projection formula, given an orthogonal basis {v₁, ..., vₖ} for a
subspace S:

    proj_S f = (⟨f, v₁⟩/||v₁||²)v₁ + ··· + (⟨f, vₖ⟩/||vₖ||²)vₖ

There is a standard way to label the coefficients of this linear combination:

    proj_{CP2π,n} f = (a₀/2)(1) + a₁ cos x + a₂ cos 2x + ··· + aₙ cos nx
                       + b₁ sin x + b₂ sin 2x + ··· + bₙ sin nx

The factor 1/2 in the coefficient of 1 appears here because ||1||² is equal to 2π, while the
other basis vectors have length squared equal to π. Thus, we have

    a₀ = 2⟨f, 1⟩/||1||² = (1/π) ∫_{−π}^{π} f(x) dx

    aₘ = ⟨f, cos mx⟩/||cos mx||² = (1/π) ∫_{−π}^{π} f(x) cos mx dx

    bₘ = ⟨f, sin mx⟩/||sin mx||² = (1/π) ∫_{−π}^{π} f(x) sin mx dx

(iii) Is proj_{CP2π,n} f equal to f in the limit as n → ∞?

As n → ∞, the sum becomes an infinite series called the Fourier series for f. The
question being asked is a question about the convergence of series, and in fact about
series of functions. Such questions are raised in calculus (or analysis) and are beyond
the scope of this book. (The short answer is "yes, the series converges to f provided
that f is continuous." The problem becomes more complicated if f is allowed to be
piecewise continuous.) Questions about convergence are important in physical and
engineering applications.

EXAMPLE 1  Determine proj_{CP2π,3} f for the function f(x) defined by f(x) = |x| if −π ≤ x ≤ π
and f(x + 2π) = f(x) for all x.
Solution: We have

    a₀ = (1/π) ∫_{−π}^{π} |x| dx = π

    a₁ = (1/π) ∫_{−π}^{π} |x| cos x dx = −4/π

    a₂ = (1/π) ∫_{−π}^{π} |x| cos 2x dx = 0

    a₃ = (1/π) ∫_{−π}^{π} |x| cos 3x dx = −4/(9π)

    b₁ = (1/π) ∫_{−π}^{π} |x| sin x dx = 0

    b₂ = (1/π) ∫_{−π}^{π} |x| sin 2x dx = 0

    b₃ = (1/π) ∫_{−π}^{π} |x| sin 3x dx = 0

Hence, proj_{CP2π,3} f = π/2 − (4/π) cos x − (4/(9π)) cos 3x. The results are shown in
Figure 7.5.6.

Figure 7.5.6  Graphs of proj_{CP2π,1} f and proj_{CP2π,3} f compared to the graph of f(x).
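The coefficients in Example 1 can be reproduced numerically, replacing each integral by a Riemann sum (the grid size is our choice):

```python
import numpy as np

n = 8192
x = np.linspace(-np.pi, np.pi, n, endpoint=False)
dx = 2 * np.pi / n
fx = np.abs(x)   # f(x) = |x| on [-pi, pi]

# a_m = (1/pi) * integral of f(x) cos(mx), b_m likewise with sin(mx)
a = [np.sum(fx * np.cos(m * x)) * dx / np.pi for m in range(4)]
b = [np.sum(fx * np.sin(m * x)) * dx / np.pi for m in range(1, 4)]

assert abs(a[0] - np.pi) < 1e-4             # a0 = pi
assert abs(a[1] + 4 / np.pi) < 1e-4         # a1 = -4/pi
assert abs(a[2]) < 1e-4                     # a2 = 0
assert abs(a[3] + 4 / (9 * np.pi)) < 1e-4   # a3 = -4/(9 pi)
assert max(abs(v) for v in b) < 1e-4        # all b_m vanish since |x| is even
```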

EXAMPLE 2  Determine proj_{CP2π,3} f for the function f(x) defined by

    f(x) = { −π − x   if −π ≤ x ≤ −π/2
           {  x       if −π/2 < x ≤ π/2
           {  π − x   if π/2 < x ≤ π

Solution: We have

    a₀ = (1/π) ∫_{−π}^{π} f dx = 0

    a₁ = (1/π) ∫_{−π}^{π} f cos x dx = 0

    a₂ = (1/π) ∫_{−π}^{π} f cos 2x dx = 0

    a₃ = (1/π) ∫_{−π}^{π} f cos 3x dx = 0

    b₁ = (1/π) ∫_{−π}^{π} f sin x dx = 4/π

    b₂ = (1/π) ∫_{−π}^{π} f sin 2x dx = 0

    b₃ = (1/π) ∫_{−π}^{π} f sin 3x dx = −4/(9π)

Hence, proj_{CP2π,3} f = (4/π) sin x − (4/(9π)) sin 3x. The results are shown in
Figure 7.5.7.

Figure 7.5.7  Graphs of proj_{CP2π,1} f and proj_{CP2π,3} f compared to the graph of f(x).
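Example 2's coefficients can be checked the same way; the piecewise definition below mirrors the one in the example:

```python
import numpy as np

n = 8192
x = np.linspace(-np.pi, np.pi, n, endpoint=False)
dx = 2 * np.pi / n

# The triangle wave of Example 2
f = np.where(x <= -np.pi / 2, -np.pi - x,
             np.where(x <= np.pi / 2, x, np.pi - x))

b = [np.sum(f * np.sin(m * x)) * dx / np.pi for m in (1, 2, 3)]
a1 = np.sum(f * np.cos(x)) * dx / np.pi

assert abs(a1) < 1e-4                       # a_m vanish since f is odd
assert abs(b[0] - 4 / np.pi) < 1e-4         # b1 = 4/pi
assert abs(b[1]) < 1e-4                     # b2 = 0
assert abs(b[2] + 4 / (9 * np.pi)) < 1e-4   # b3 = -4/(9 pi)
```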

PROBLEMS 7.5
Computer Problems

C1 Use a computer to calculate proj_{CP2π,n} f for n = 3, 7, and 11 for each of the
   following functions. Graph the function f and each of the projections on the
   same plot.
   (a) f(x) = x², −π ≤ x ≤ π
   (b) f(x) = eˣ, −π ≤ x ≤ π
   (c) f(x) = { 0 if −π ≤ x ≤ 0
              { 1 if 0 < x ≤ π

Suggestions for Student Review

1 What is meant by an orthogonal set of vectors in ℝⁿ? What is the difference
  between an orthogonal basis and an orthonormal basis? (Section 7.1)
2 Why is it easier to determine coordinates with respect to an orthonormal basis
  than with respect to an arbitrary basis? What are some special features of the
  change of coordinates matrix from an orthonormal basis to the standard basis?
  What is an orthogonal matrix? (Section 7.1)
3 Does every subspace of ℝⁿ have an orthonormal basis? What about the zero
  subspace? How do you find an orthonormal basis? Describe the Gram-Schmidt
  Procedure. (Section 7.2)
4 What are the essential properties of a projection onto a subspace of ℝⁿ? How do
  you calculate a projection onto a subspace? (Section 7.2)
5 Outline how to use the ideas of orthogonality to find the best-fitting line for a
  given set of data points {(tᵢ, yᵢ) | i = 1, ..., n}. (Section 7.3)
6 What are the essential properties of an inner product? Give an example of an
  inner product on P₂. Give an example of an inner product on M(2, 3).
  (Section 7.4)

Chapter Quiz
El Determine whether the following sets are orthog­ a vector in §, use the orthonormality of 'B to deter­
onal, and which are orthonormal. Show how you mine the coordinates of x with respect to 'B.

{ t -> -i}
decide.
E3 (a) Prove that if P is an orthogonal matrix, det
p ±1.=

(a) l , l (b) Prove that if P and R are n x n orthogonal ma­

{ � �}
trices, then so is PR.

{! ·H
E4 Let S be the subspace of lll4 defined by S =

1
Cb) _l_ I
Y3 ' Vs 1

{}, � -�}
Span ,
-2

(a) Apply the Gram-Schmidt Procedure to the


(c)
1 1
, Y3 -1 , Y3 -1 given spanning set to produce an orthonormal

0 1 basis for S.
1
E2 Consider the orthonormal set -2
1 1 -1 (b) Determine the point in S closest to x .
1
= _

0 1 1
1 0
'B =
2 , � O, � 0 . Let S be the sub-
0 1 ES Determine whether each of the following functions

-1 0 ( , ) defines an inner product on M(2, 2). Explain


how you decide in each case.
2
(a) (A, B) det(AB)
5
=

space of JR.11 spanned by 'B. Given that x = 1 is


(b) (A,B) = a11b11+2a12b12+2a21b21 + a22b22
3
-2

Further Problems

F1 (Isometries of ℝ³)
   (a) A linear mapping L is an isometry of ℝ³ if ||L(x)|| = ||x|| for every x ∈ ℝ³.
       Prove that an isometry preserves dot products and angles as well as lengths.
   (b) Show that L is an isometry if and only if the standard matrix of L is
       orthogonal. (Hint: See Problem 3.F5 and Problem 7.1.D3.)
   (c) Explain why an isometry of ℝ³ must have one or three real characteristic
       roots, counting multiplicity. Based on Problem 7.1.D3 (b), these must be ±1.
   (d) Let A be the standard matrix of L. Suppose that 1 is an eigenvalue of A with
       eigenvector u. Let v and w be vectors such that {u, v, w} is an orthonormal
       basis for ℝ³ and let P = [u v w]. Show that

           PᵀAP = [1  O₁₂;  O₂₁  A*]

       where the right-hand side is a partitioned matrix, with Oᵢⱼ being the i × j
       zero matrix, and with A* being a 2 × 2 orthogonal matrix. Moreover, show
       that the characteristic roots of A are 1 and the characteristic roots of A*.
       Note that an analogous form can be obtained for PᵀAP in the case where one
       eigenvalue is −1.
   (e) Use Problem 3.F5 to analyze the A* of part (d) and explain why every
       isometry of ℝ³ is the identity mapping, a reflection, a composition of
       reflections, a rotation, or a composition of a reflection and a rotation.

F2 A linear mapping L : ℝⁿ → ℝⁿ is called an involution if L ∘ L = Id. In terms of
   its standard matrix, this means that A² = I. Prove that any two of the following
   imply the third.
   (a) A is the matrix of an involution.
   (b) A is symmetric.
   (c) A is an isometry.

F3 The sum S + T of subspaces of a finite dimensional vector space V is defined in
   the Chapter 4 Further Problems. Prove that (S + T)⊥ = S⊥ ∩ T⊥.

F4 A problem of finding a sequence of approximations to some vector (or function)
   v in a possibly infinite-dimensional inner product space V can often be
   described by requiring the i-th approximation to be the closest vector to v in
   some finite-dimensional subspace Sᵢ of V, where the subspaces are required to
   satisfy

       S₁ ⊂ S₂ ⊂ ··· ⊂ Sᵢ ⊂ ··· ⊂ V

   The i-th approximation is then proj_{Sᵢ} v. Prove that the approximations
   improve as i increases in the sense that

       ||v − proj_{Sᵢ₊₁} v|| ≤ ||v − proj_{Sᵢ} v||

F5 QR-factorization. Suppose that A is an invertible n × n matrix. Prove that A can
   be written as the product of an orthogonal matrix Q and an upper-triangular
   matrix R: A = QR.
   (Hint: Apply the Gram-Schmidt Procedure to the columns of A, starting at the
   first column.)
   Note that this QR-factorization is important in a numerical procedure for
   determining eigenvalues of symmetric matrices.

MyMathLab  Go to MyMathLab at www.mymathlab.com. You can practise many of this chapter's exercises as often as you
want. The guided solutions help you find an answer step by step. You'll find a personalized study plan available to
you, too!
CHAPTER 8

Symmetric Matrices and


Quadratic Forms
CHAPTER OUTLINE

8.1 Diagonalization of Symmetric Matrices


8.2 Quadratic Forms
8.3 Graphs of Quadratic Forms
8.4 Applications of Quadratic Forms

Symmetric matrices and quadratic forms arise naturally in many physical applica­
tions. For example, the strain matrix describing the deformation of a solid and the
inertia tensor of a rotating body are symmetric (Section 8.4). We have also seen that
the matrix of a projection is symmetric since a real inner product is symmetric. We
now use our work with diagonalization and inner products to explore the theory of
symmetric matrices and quadratic forms.

8.1 Diagonalization of
Symmetric Matrices

Definition        A matrix A is symmetric if Aᵀ = A or, equivalently, if aᵢⱼ = aⱼᵢ for all i and j.
Symmetric Matrix
In Chapter 6, we saw that diagonalization of a square matrix may not be possible
if some of the roots of its characteristic polynomial are complex or if the geometric
multiplicity of an eigenvalue is less than the algebraic multiplicity of that eigenvalue.
As we will see later in this section, a symmetric matrix can always be diagonalized:
all the roots of its characteristic polynomial are real, and we can always find a basis of
eigenvectors. Before considering why this works, we give three examples.

EXAMPLE 1  Determine the eigenvalues and corresponding eigenvectors of the symmetric matrix
A = [0 1; 1 −2]. What is the diagonal matrix corresponding to A, and what is the matrix
that diagonalizes A?
Solution: We have

    C(λ) = det(A − λI) = |−λ  1;  1  −2 − λ| = λ² + 2λ − 1

Using the quadratic formula, we find that the roots of the characteristic polynomial are
λ₁ = −1 + √2 and λ₂ = −1 − √2. Thus, the resulting diagonal matrix is

    D = [−1 + √2  0;  0  −1 − √2]

For λ₁ = −1 + √2, we have

    A − λ₁I = [1 − √2  1;  1  −1 − √2] ~ [1  −1 − √2;  0  0]

Thus, a basis for the eigenspace is {[1 + √2; 1]}.
Similarly, for λ₂ = −1 − √2, we have

    A − λ₂I = [1 + √2  1;  1  −1 + √2] ~ [1  −1 + √2;  0  0]

Thus, a basis for the eigenspace is {[1 − √2; 1]}.

Hence, A is diagonalized by P = [1 + √2  1 − √2;  1  1].

Observe in Example 1 that the columns of P are orthogonal. That is,

    [1 + √2; 1] · [1 − √2; 1] = (1 + √2)(1 − √2) + 1(1) = 1 − 2 + 1 = 0

Hence, if we normalized the columns of P, we would find that A is diagonalized by
an orthogonal matrix. (It is important to remember that an orthogonal matrix has
orthonormal columns.)
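This orthogonal diagonalization can be checked numerically; NumPy's `eigh` routine, which is designed for symmetric matrices, returns real eigenvalues and an orthogonal matrix of eigenvectors:

```python
import numpy as np

A = np.array([[0.0, 1.0], [1.0, -2.0]])   # the matrix of Example 1

evals, P = np.linalg.eigh(A)   # eigh assumes A is symmetric

assert np.allclose(sorted(evals),
                   sorted([-1 + np.sqrt(2), -1 - np.sqrt(2)]))
assert np.allclose(P.T @ P, np.eye(2))           # P is orthogonal
assert np.allclose(P.T @ A @ P, np.diag(evals))  # P^T A P = D
```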

EXAMPLE 2  Diagonalize the symmetric matrix A = [4 0 0; 0 1 −2; 0 −2 1]. Show that A can be
diagonalized by an orthogonal matrix.
Solution: We have

    C(λ) = |4 − λ  0  0;  0  1 − λ  −2;  0  −2  1 − λ| = −(λ − 4)(λ − 3)(λ + 1)

The eigenvalues are λ₁ = 4, λ₂ = 3, and λ₃ = −1, each with algebraic multiplicity 1.
For λ₁ = 4, we get

    A − λ₁I = [0 0 0; 0 −3 −2; 0 −2 −3] ~ [0 1 0; 0 0 1; 0 0 0]

Thus, a basis for the eigenspace is {[1; 0; 0]}.
For λ₂ = 3, we get

    A − λ₂I = [1 0 0; 0 −2 −2; 0 −2 −2] ~ [1 0 0; 0 1 1; 0 0 0]

Thus, a basis for the eigenspace is {[0; −1; 1]}.
For λ₃ = −1, we get

    A − λ₃I = [5 0 0; 0 2 −2; 0 −2 2] ~ [1 0 0; 0 1 −1; 0 0 0]

Thus, a basis for the eigenspace is {[0; 1; 1]}.

Observe that the vectors v₁ = [1; 0; 0], v₂ = [0; −1; 1], and v₃ = [0; 1; 1] form an
orthogonal set. Hence, if we normalize them, we find that A is diagonalized by the
orthogonal matrix

    P = [1  0  0;  0  −1/√2  1/√2;  0  1/√2  1/√2]

to

    D = [4 0 0; 0 3 0; 0 0 −1]

EXAMPLE 3  Diagonalize the symmetric matrix A = [5 −4 −2; −4 5 −2; −2 −2 8]. Show that A
can be diagonalized by an orthogonal matrix.
Solution: We have

    C(λ) = |5 − λ  −4  −2;  −4  5 − λ  −2;  −2  −2  8 − λ| = −λ(λ − 9)²

The eigenvalues are λ₁ = 9 with algebraic multiplicity 2 and λ₂ = 0 with algebraic
multiplicity 1.
For λ₁ = 9, we get

    A − λ₁I = [−4 −4 −2; −4 −4 −2; −2 −2 −1] ~ [2 2 1; 0 0 0; 0 0 0]

Thus, a basis for the eigenspace of λ₁ is {w₁, w₂} = {[−1; 1; 0], [−1; 0; 2]}. However,
observe that these vectors are not orthogonal to each other. Since we require an
orthonormal basis of eigenvectors of A, we need to find an orthonormal basis for the
eigenspace of λ₁. We can do this by applying the Gram-Schmidt Procedure to this set.
Pick v₁ = w₁. Then S₁ = Span{v₁} and

    v₂ = w₂ − (⟨w₂, v₁⟩/||v₁||²)v₁ = [−1; 0; 2] − (1/2)[−1; 1; 0] = [−1/2; −1/2; 2]

Then, {v₁, v₂} is an orthogonal basis for the eigenspace of λ₁.
For λ₂ = 0, we get

    A − λ₂I = [5 −4 −2; −4 5 −2; −2 −2 8] ~ [1 0 −2; 0 1 −2; 0 0 0]

Thus, a basis for the eigenspace of λ₂ is {v₃} = {[2; 2; 1]}.

Observe that v₃ is orthogonal to v₁ and v₂. Hence, normalizing v₁, v₂, and v₃, we
find that A is diagonalized by the orthogonal matrix

    P = [2/3  −1/√2  −1/√18;  2/3  1/√2  −1/√18;  1/3  0  4/√18]

to

    D = [0 0 0; 0 9 0; 0 0 9]
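Example 3 can be verified the same way; note that `eigh` also returns orthonormal eigenvectors within the repeated eigenvalue's eigenspace, so no separate Gram-Schmidt step is needed:

```python
import numpy as np

A = np.array([[5.0, -4.0, -2.0],
              [-4.0, 5.0, -2.0],
              [-2.0, -2.0, 8.0]])

evals, P = np.linalg.eigh(A)   # eigenvalues returned in ascending order

assert np.allclose(evals, [0.0, 9.0, 9.0])       # 0 once, 9 twice
assert np.allclose(P @ P.T, np.eye(3))           # P is orthogonal
assert np.allclose(P.T @ A @ P, np.diag(evals))  # P^T A P = D
```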

The Principal Axis Theorem


We will now proceed to show that not only can every symmetric matrix be diagonal­
ized, but that it can be diagonalized by an orthogonal matrix.

Definition        A matrix A is said to be orthogonally diagonalizable if there exists an orthogonal
Orthogonally      matrix P and a diagonal matrix D such that
Diagonalizable
                      PᵀAP = D

Remark

Two matrices A and B are said to be orthogonally similar if there exists an orthogonal
matrix P such that PᵀAP = B. Observe that orthogonally similar matrices are similar
since an orthogonal matrix P satisfies Pᵀ = P⁻¹. In particular, PᵀAP = D is equivalent
to P⁻¹AP = D for an orthogonal matrix P. Hence, all of our theory of similar matrices
and diagonalization applies to orthogonal diagonalization.

Lemma 1  An n × n matrix A is symmetric if and only if

    x · (Ay) = (Ax) · y

for all x, y ∈ ℝⁿ.



Proof: Suppose that A is symmetric. For any x, y ∈ ℝⁿ,

    x · (Ay) = xᵀAy = xᵀAᵀy = (Ax)ᵀy = (Ax) · y

Conversely, if x · (Ay) = (Ax) · y for all x, y ∈ ℝⁿ, then

    xᵀAy = x · (Ay) = (Ax) · y = (Ax)ᵀy

Since xᵀAy = (Ax)ᵀy for all y ∈ ℝⁿ, based on Theorem 3.1.4, xᵀA = (Ax)ᵀ. Hence,
(xᵀA)ᵀ = Ax or Aᵀx = Ax for all x ∈ ℝⁿ. Applying Theorem 3.1.4 again gives
Aᵀ = A, as required. ∎

In Examples 1-3, we saw that the basis vectors of distinct eigenspaces are
orthogonal. This implies that every eigenvector in the eigenspace of one eigenvalue is
orthogonal to every eigenvector in the eigenspace of a different eigenvalue. We now
use Lemma 1 to prove that this is always true.

Theorem 2  If v₁ and v₂ are eigenvectors of a symmetric matrix A corresponding to distinct
eigenvalues λ₁ and λ₂, then v₁ is orthogonal to v₂.

Proof: By definition of eigenvalues and eigenvectors, Av₁ = λ₁v₁ and Av₂ = λ₂v₂.
Hence,

    λ₁(v₁ · v₂) = (λ₁v₁) · v₂ = (Av₁) · v₂

Using Lemma 1, we get

    (Av₁) · v₂ = v₁ · (Av₂) = v₁ · (λ₂v₂) = λ₂(v₁ · v₂)

Hence,

    λ₁(v₁ · v₂) = λ₂(v₁ · v₂)

It follows that

    (λ₁ − λ₂)(v₁ · v₂) = 0

It was assumed that λ₁ ≠ λ₂, so v₁ · v₂ must be zero, and the eigenvectors corresponding
to distinct eigenvalues are mutually orthogonal, as claimed. ∎

Note that this theorem applies only to eigenvectors that correspond to different
eigenvalues. As we saw in Example 3, eigenvectors that correspond to the same eigen­
value do not need to be orthogonal. Thus, as in Example 3, if an eigenvalue has al­
gebraic multiplicity greater than 1, it may be necessary to apply the Gram-Schmidt
Procedure to find an orthogonal basis for its eigenspace.

Theorem 3 If A is a symmetric matrix, then all eigenvalues of A are real.

The proof of this theorem requires properties of complex numbers and hence is
postponed until Chapter 9. See Problem 9.5.D6.
We can now prove that every symmetric matrix is orthogonally diagonalizable. We
begin with a lemma that will be the main step in the proof by induction in the theorem.

Lemma 4  Suppose that λ₁ is an eigenvalue of the n × n symmetric matrix A, with
corresponding unit eigenvector v₁. Then there is an orthogonal matrix P whose first
column is v₁, such that

    PᵀAP = [λ₁  O₁,ₙ₋₁;  Oₙ₋₁,₁  A₁]

where A₁ is an (n − 1) × (n − 1) symmetric matrix and Oₘ,ₙ is the m × n zero matrix.

Proof: By extending the set {v₁} to a basis for ℝⁿ, applying the Gram-Schmidt
Procedure, and normalizing the vectors, we can produce an orthonormal basis
B = {v₁, w₂, ..., wₙ} for ℝⁿ. Let

    P = [v₁ w₂ ··· wₙ]

Then

    PᵀAP = [v₁ᵀ; w₂ᵀ; ⋮; wₙᵀ] A [v₁ w₂ ··· wₙ]
         = [v₁ᵀ; w₂ᵀ; ⋮; wₙᵀ] [Av₁ Aw₂ ··· Awₙ]
         = [v₁ᵀAv₁  v₁ᵀAw₂  ···  v₁ᵀAwₙ;
            w₂ᵀAv₁  w₂ᵀAw₂  ···  w₂ᵀAwₙ;
              ⋮
            wₙᵀAv₁  wₙᵀAw₂  ···  wₙᵀAwₙ]
         = [v₁ · Av₁  v₁ · Aw₂  ···  v₁ · Awₙ;
            w₂ · Av₁  w₂ · Aw₂  ···  w₂ · Awₙ;
              ⋮
            wₙ · Av₁  wₙ · Aw₂  ···  wₙ · Awₙ]

First observe that PᵀAP is symmetric since

    (PᵀAP)ᵀ = PᵀAᵀ(Pᵀ)ᵀ = PᵀAP

Also, v₁ is a unit eigenvector of λ₁, so we have

    v₁ · Av₁ = v₁ · λ₁v₁ = λ₁||v₁||² = λ₁

Since B is orthonormal, we get

    wᵢ · Av₁ = wᵢ · λ₁v₁ = λ₁(wᵢ · v₁) = 0

So, all other entries in the first column are 0. Hence, all other entries in the first row
are also 0 since PᵀAP is symmetric. Moreover, the (n − 1) × (n − 1) block A₁ is also
symmetric since PᵀAP is symmetric. ∎

Theorem 5  Principal Axis Theorem

Suppose A is an n × n symmetric matrix. Then there exists an orthogonal matrix
P and diagonal matrix D such that PᵀAP = D. That is, every symmetric matrix is
orthogonally diagonalizable.

Proof: The proof is by induction on n. If n = 1, then A is a diagonal matrix and
hence is orthogonally diagonalizable with P = [1]. Now suppose the result is true
for (n − 1) × (n − 1) symmetric matrices, and consider an n × n symmetric matrix
A. Pick an eigenvalue λ₁ of A (note that λ₁ is real by Theorem 3) and find a
corresponding unit eigenvector v₁. Then, by Lemma 4, there exists an orthogonal
matrix R = [v₁ w₂ ··· wₙ] such that

    RᵀAR = [λ₁  O₁,ₙ₋₁;  Oₙ₋₁,₁  A₁]

where A₁ is an (n − 1) × (n − 1) symmetric matrix. Then, by our hypothesis, there is an
(n − 1) × (n − 1) orthogonal matrix P₁ such that

    P₁ᵀA₁P₁ = D₁

where D₁ is an (n − 1) × (n − 1) diagonal matrix. Define

    P₂ = [1  O₁,ₙ₋₁;  Oₙ₋₁,₁  P₁]

The columns of P₂ form an orthonormal basis for ℝⁿ. Hence, P₂ is orthogonal. Since
a product of orthogonal matrices is orthogonal, we get that P = RP₂ is an n × n
orthogonal matrix and, by block multiplication,

    PᵀAP = (RP₂)ᵀA(RP₂) = P₂ᵀRᵀARP₂
         = P₂ᵀ [λ₁  O₁,ₙ₋₁;  Oₙ₋₁,₁  A₁] P₂
         = [λ₁  O₁,ₙ₋₁;  Oₙ₋₁,₁  D₁] = D

This is diagonal, as required. ∎

EXERCISE 1
Orthogonally diagonalize A= [-� -n

EXERCISE2
Orthogonally diagonalize A= [- �
-1 -1
- � =2 �]·

Remarks

1. The eigenvectors in an orthogonal matrix that diagonalizes a symmetric matrix


A are called the principal axes for A. We will see why this definition makes
sense in Section 8.3.

2. The converse of the Principal Axis Theorem is also true. That is, every or­
thogonally diagonalizable matrix is symmetric. You are asked to prove this in
Problem D2. Hence, we can say that a matrix is orthogonally diagonalizable if
and only if it is symmetric.

PROBLEMS 8.1
Practice Problems

Al Determine which of the following matrices are A2 For each of the following symmetric matrices, find
symmetric. an orthogonal matrix P and diagonal matrix D such

(a) A=
[� 7]_
that pT AP = D.

(a) A=
[ � -71
-
(b) B =
[� �] [5 )
[-� �]
3
(b) A=
3 -3

[� �i
2
(c) C = 1
-1 -2 (c) A 0
=

H -ll [ � -� =�]
-1 1 1 0
(d) D 0
=
-1 (d) A=
-2 -2 0

(e) A=
[! : -�]
_

Homework Problems

[� !]
Bl Determine which of the following matrices are
symmetric. (d) D 1
=
(a) A
= [ � �] 0

[ � �] [� J
3
(e) 3
(b) B = E=
2

[� :1
0
(c) c
=

P 0
(0 l-1 11 -ii4
A=
pTAP=
B2 For each of the following symmetric matrices, find

2 -
an orthogonal matrix and diagonal matrix D such

A=[; �] [
A= � -2 11
that D.

A = [� �]
(a)

[ -1
(g) -2

A= 1
-4 -2

A=[� -�]
(b)
-2 2
(h) 2 1 -2

A= [�1 �1 �1 A= H -1-5 -1_;l


(c)
-1 -2 -2
-2

A = [-1� �1 -0�1
(d) (i)
2

(e)

Computer Problems

[2 + l
Cl Use a computer to determine the eigenvalues and a C2 Let S (t) denote the symmetric matrix
basis of orthonormal eigenvectors of the following

4[ .1 1.9 0.51
symmetric matrices.

1
t -2t t

1.0.95 0.6 -2.0.61


(a) The matrix in Problem A2 (e). S (t) = - 2t 2 t
- -t

(-0.05), (0), (0.05), (0.1), (-0.


t -t + t

0.0.0.019555 0.0.0.102555 0.0.0.129555 0.0.0.290555


(b) 1.2
By calculating the eigenvalues of S l ),
S S S and S explore how
the eigenvalues of S (t) change as t varies.

0.25 0.95 0.05 0.15


(c)

Conceptual Problems

D1 Let A and B be n × n symmetric matrices. Determine which of the following
   are symmetric.
   (a) A + B   (b) AᵀA   (c) ABA   (d) A²

D2 Show that if A is orthogonally diagonalizable, then A is symmetric.

D3 Prove that if A is an invertible symmetric matrix, then A⁻¹ is orthogonally
   diagonalizable.

8.2 Quadratic Forms


We saw earlier in the book the relationship between matrix mappings and linear mappings. We now explore the relationship between symmetric matrices and an important class of functions called quadratic forms, which are not linear. Quadratic forms appear in geometry, statistics, calculus, topology, and many other areas. We shall see in Section 8.3 how quadratic forms and our special theory of diagonalization of symmetric matrices can be used to graph conic sections and quadric surfaces.

Quadratic Forms
Consider the symmetric matrix A = [a  b/2; b/2  c]. If x = [x1; x2], then

x^T A x = [x1  x2] [a  b/2; b/2  c] [x1; x2]
        = [x1  x2] [a x1 + b x2/2; b x1/2 + c x2]
        = a x1^2 + b x1x2 + c x2^2

We call the expression a x1^2 + b x1x2 + c x2^2 a quadratic form on R^2 (or in the variables x1 and x2). Thus, corresponding to every symmetric matrix A, there is a quadratic form.

On the other hand, given a quadratic form Q(x) = a x1^2 + b x1x2 + c x2^2, we can reconstruct the symmetric matrix A = [a  b/2; b/2  c] by choosing (A)11 to be the coefficient of x1^2, (A)12 = (A)21 to be half of the coefficient of x1x2, and (A)22 to be the coefficient of x2^2. We deal with the coefficient of x1x2 in this way to ensure that A is symmetric.
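The construction just described is easy to check numerically. The following sketch (the function names are our own, not from the text) builds the symmetric matrix of a two-variable quadratic form and verifies that x^T A x reproduces the original expression:

```python
# Sketch: build A = [[a, b/2], [b/2, c]] for Q = a x1^2 + b x1x2 + c x2^2
# and check that x^T A x gives back the same value as direct evaluation.

def symmetric_matrix(a, b, c):
    """Return the symmetric matrix of Q = a x1^2 + b x1x2 + c x2^2."""
    return [[a, b / 2], [b / 2, c]]

def quadratic_form(A, x):
    """Evaluate x^T A x for a 2x2 matrix A and a vector x = (x1, x2)."""
    x1, x2 = x
    return (A[0][0] * x1 * x1 + (A[0][1] + A[1][0]) * x1 * x2
            + A[1][1] * x2 * x2)

# Q(x) = 3x1^2 + 5x1x2 + 2x2^2, as in Example 2
A = symmetric_matrix(3, 5, 2)
x = (2.0, -1.0)
direct = 3 * x[0]**2 + 5 * x[0] * x[1] + 2 * x[1]**2
assert abs(quadratic_form(A, x) - direct) < 1e-12
```

The off-diagonal coefficient is split in half precisely so that the two cross terms in x^T A x add back up to b x1x2.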

EXAMPLE 1   Determine the symmetric matrix corresponding to the quadratic form.

Solution: The corresponding symmetric matrix A is
   [matrix entries not legible in this scan]

Notice that we could have written a x1^2 + b x1x2 + c x2^2 in terms of other, non-symmetric matrices. Many choices are possible. However, we agree always to choose the symmetric matrix for two reasons. First, it gives us a unique (symmetric) matrix corresponding to a given quadratic form. Second, the choice of the symmetric matrix A allows us to apply the special theory available for symmetric matrices. We now use this to extend the definition of quadratic form to n variables.
Section 8.2 Quadratic Forms 373

Definition        A quadratic form on R^n, with corresponding symmetric matrix A, is a function
Quadratic Form    Q : R^n -> R defined by

    Q(x) = sum_{i,j=1}^{n} a_ij x_i x_j = x^T A x

As above, given a quadratic form Q(x) on R^n, we can easily construct the corresponding symmetric matrix A by taking (A)_ii to be the coefficient of x_i^2 and (A)_ij to be half of the coefficient of x_i x_j for i ≠ j.

EXAMPLE 2   Q(x) = 3x1^2 + 5x1x2 + 2x2^2 is a quadratic form on R^2 with corresponding symmetric matrix A = [3  5/2; 5/2  2].

Q(x) = x1^2 + 4x2^2 + 2x3^2 + 4x1x2 + x1x3 + 2x2x3 is a quadratic form on R^3 with corresponding symmetric matrix A = [1  2  1/2; 2  4  1; 1/2  1  2].

Q(x) = 2x1^2 + 4x1x3 - x2^2 + 2x2x3 + 6x2x4 + x3^2 + x4^2 is a quadratic form on R^4 with corresponding symmetric matrix A = [2  0  2  0; 0  -1  1  3; 2  1  1  0; 0  3  0  1].

EXERCISE 1   Find the quadratic form corresponding to each of the following symmetric matrices.

(a) [4  1/2; 1/2  -√2]

EXERCISE 2   Find the corresponding symmetric matrix for each of the following quadratic forms.

1. Q(x) = x1^2 - 2x1x2 - 3x2^2
2. Q(x) = 2x1^2 + 3x1x2 - x1x3 + 4x2^2 + x3^2
3. Q(x) = x1^2 + 2x2^2 + 3x3^2 + 4x4^2

Observe that the symmetric matrix corresponding to Q(x) = x1^2 + 2x2^2 + 3x3^2 + 4x4^2 is in fact diagonal. This motivates the following definition.

Definition      A quadratic form Q(x) is in diagonal form if all the coefficients a_jk with j ≠ k
Diagonal Form   are equal to 0. Equivalently, Q(x) is in diagonal form if its corresponding symmetric
                matrix is diagonal.

EXAMPLE 3   The quadratic form Q(x) = 3x1^2 - 2x2^2 + 4x3^2 is in diagonal form. The quadratic form Q(x) = 2x1^2 - 4x1x2 + 3x2^2 is not in diagonal form.


Since each quadratic form has an associated symmetric matrix, we should expect
that diagonalizing the symmetric matrix should also diagonalize the quadratic form.
We first demonstrate this with an example and then prove the result in general.

EXAMPLE 4
Let A = [2  1; 1  2] and let Q(x) = x^T A x = 2x1^2 + 2x1x2 + 2x2^2. Let x = Py, where P = [1/√2  1/√2; -1/√2  1/√2]. Express Q(x) in terms of y = [y1; y2].

Solution: We first observe that P diagonalizes A. In particular, we have

P^T AP = [1/√2  -1/√2; 1/√2  1/√2] [2  1; 1  2] [1/√2  1/√2; -1/√2  1/√2] = [1  0; 0  3]

We have

Q(x) = x^T A x
     = (Py)^T A (Py)
     = y^T P^T AP y
     = y^T [1  0; 0  3] y
     = y1^2 + 3y2^2

Recall that if P = [v1  v2] is an orthogonal matrix, it is a change of coordinates matrix from standard coordinates to coordinates with respect to the basis B = {v1, v2}. In particular, in Example 4, we put Q(x) into diagonal form by writing it with respect to the orthonormal basis B = {[1/√2; -1/√2], [1/√2; 1/√2]}. The vector y is just the B-coordinates of x. That is, [x]_B = y. We now prove this in general.
Theorem 1   Let Q(x) = x^T A x be a quadratic form on R^n. Then there is an orthonormal basis B of R^n such that when Q(x) is expressed in terms of B-coordinates, it is in diagonal form.

Proof: Since A is symmetric, we can apply the Principal Axis Theorem to get an orthogonal matrix P such that P^T AP = D = diag(λ1, ..., λn), where λ1, ..., λn are the eigenvalues of A. Recall from Section 4.4 that the change of coordinates matrix P from B-coordinates to standard coordinates satisfies x = P[x]_B. Hence,

Q(x) = x^T A x
     = (P[x]_B)^T A (P[x]_B)
     = [x]_B^T P^T AP [x]_B
     = [x]_B^T D [x]_B

Let [x]_B = [y1; ...; yn]. Then we have

Q(x) = [y1 ... yn] diag(λ1, λ2, ..., λn) [y1; ...; yn]
     = λ1 y1^2 + λ2 y2^2 + ... + λn yn^2

as required. ∎
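Theorem 1 can be checked numerically for the matrix of Example 4. The sketch below (the helper names are ours, not from the text) verifies that P^T AP is the diagonal matrix diag(1, 3):

```python
# Sketch: numerical check that P^T A P = diag(1, 3) for A = [[2, 1], [1, 2]]
# and P with orthonormal eigenvector columns, as in Example 4.
import math

def mat_mul(X, Y):
    """Multiply two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    """Transpose a 2x2 matrix."""
    return [[X[j][i] for j in range(2)] for i in range(2)]

A = [[2, 1], [1, 2]]
s = 1 / math.sqrt(2)
P = [[s, s], [-s, s]]          # columns are the orthonormal eigenvectors
D = mat_mul(transpose(P), mat_mul(A, P))

assert abs(D[0][0] - 1) < 1e-12 and abs(D[1][1] - 3) < 1e-12
assert abs(D[0][1]) < 1e-12 and abs(D[1][0]) < 1e-12
```

In the new coordinates y = P^T x, the cross term has vanished and Q(x) = y1^2 + 3y2^2, exactly as in Example 4.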

EXAMPLE 5   Let Q(x) = x1^2 + 4x1x2 + x2^2. Find a diagonal form of Q(x) and an orthogonal matrix P that brings it into this form.

Solution: The corresponding symmetric matrix is A = [1  2; 2  1]. We have

C(λ) = det(A - λI) = (λ - 3)(λ + 1)

The eigenvalues are λ1 = 3 with algebraic multiplicity 1 and λ2 = -1 with algebraic multiplicity 1.
For λ1 = 3, we get

A - λ1 I = [-2  2; 2  -2] ~ [1  -1; 0  0]

An eigenvector for λ1 is v1 = [1; 1] and a basis for the eigenspace is {v1}.
For λ2 = -1, we get

A - λ2 I = [2  2; 2  2] ~ [1  1; 0  0]

An eigenvector for λ2 is v2 = [-1; 1] and a basis for the eigenspace is {v2}.

Therefore, we see that A is orthogonally diagonalized by P = (1/√2)[1  -1; 1  1] to D = [3  0; 0  -1]. Let x = Py. Then, we get

Q(x) = 3y1^2 - y2^2

Hence, Q(x) is brought into diagonal form by P.

EXERCISE 3   Let Q(x) = 4x1x2 - 3x2^2. Find a diagonal form of Q(x) and an orthogonal matrix P that brings it into this form.

Classifications of Quadratic Forms

Definition            A quadratic form Q(x) on R^n is
Positive Definite     1. Positive definite if Q(x) > 0 for all x ≠ 0
Negative Definite     2. Negative definite if Q(x) < 0 for all x ≠ 0
Indefinite            3. Indefinite if Q(x) > 0 for some x and Q(x) < 0 for some x
Semidefinite          4. Positive semidefinite if Q(x) ≥ 0 for all x
                      5. Negative semidefinite if Q(x) ≤ 0 for all x

These concepts are useful in applications. For example, we shall see in Section 8.3
that the graph of Q(x) = 1 in JR2 is an ellipse if and only if Q(x) is positive definite.

EXAMPLE 6   Classify the quadratic forms Q1(x) = 3x1^2 + 4x2^2, Q2(x) = x1x2, and Q3(x) = 2x1^2 + 4x1x2 + x2^2.
Solution: Q1(x) is positive definite since Q1(x) = 3x1^2 + 4x2^2 > 0 for all x ≠ 0.
Q2(x) is indefinite since Q2(1, 1) = 1 > 0 and Q2(-1, 1) = -1 < 0.
Q3(x) is indefinite since Q3(1, 1) = 7 > 0 and Q3(1, -2) = -2 < 0.

Observe that classifying Q3(x) was a little more difficult than classifying Q1(x) or Q2(x). A general quadratic form Q(x) on R^n would be difficult to classify by inspection. The following theorem gives us an easier way to classify a quadratic form.

Theorem 2   Let Q(x) = x^T A x, where A is a symmetric matrix. Then

1. Q(x) is positive definite if and only if all eigenvalues of A are positive.
2. Q(x) is negative definite if and only if all eigenvalues of A are negative.
3. Q(x) is indefinite if and only if some of the eigenvalues of A are positive and some are negative.

Proof: We prove (1) and leave (2) and (3) as Problems D1 and D2.
By Theorem 1, there exists an orthogonal matrix P such that

Q(x) = λ1 y1^2 + ... + λn yn^2

where x = Py and λ1, ..., λn are the eigenvalues of A. Clearly, Q(x) > 0 for all y ≠ 0 if and only if the eigenvalues are all positive. Moreover, since P is orthogonal, it is invertible. Hence, x = 0 if and only if y = 0 since x = Py. Thus we have shown that Q(x) is positive definite if and only if all eigenvalues of A are positive. ∎
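Theorem 2 suggests a simple computational test. The sketch below (a hypothetical helper, not from the text) classifies a 2x2 symmetric matrix [a  b; b  c] by the signs of its eigenvalues, found with the quadratic formula:

```python
# Sketch: classify [[a, b], [b, c]] by eigenvalue signs (Theorem 2).
import math

def classify_2x2(a, b, c):
    """Eigenvalues of [[a, b], [b, c]] are (tr ± sqrt(tr^2 - 4 det))/2;
    the discriminant is always >= 0 for a symmetric matrix."""
    tr, det = a + c, a * c - b * b
    root = math.sqrt(tr * tr - 4 * det)
    lam1, lam2 = (tr + root) / 2, (tr - root) / 2
    if lam1 > 0 and lam2 > 0:
        return "positive definite"
    if lam1 < 0 and lam2 < 0:
        return "negative definite"
    if lam1 * lam2 < 0:
        return "indefinite"
    return "semidefinite"

# Q1 = 3x1^2 + 4x2^2 and Q2 = x1x2 from Example 6
assert classify_2x2(3, 0, 4) == "positive definite"
assert classify_2x2(0, 0.5, 0) == "indefinite"
```

For larger matrices the same idea applies, but the eigenvalues would be computed numerically rather than by a closed-form formula.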

EXAMPLE 7   Classify the following quadratic forms.

(a) Q(x) = 4x1^2 + 8x1x2 + 3x2^2

Solution: The symmetric matrix corresponding to Q(x) is A = [4  4; 4  3]. The characteristic polynomial of A is C(λ) = λ^2 - 7λ - 4. Using the quadratic formula, we find that the eigenvalues of A are λ = (7 ± √65)/2. One of these is positive and one is negative. Hence, Q(x) is indefinite.
Section 8.2 Exercises 377

EXAMPLE 7   (b) Q(x) = -2x1^2 - 2x1x2 + 2x1x3 - 2x2^2 + 2x2x3 - 2x3^2
(continued)
Solution: The symmetric matrix corresponding to Q(x) is A = [-2  -1  1; -1  -2  1; 1  1  -2]. The characteristic polynomial of A is C(λ) = -(λ + 1)^2 (λ + 4). Thus, the eigenvalues of A are -1, -1, and -4. Therefore, Q(x) is negative definite.

EXERCISE 4   Classify the following quadratic forms.

(a) Q(x) = 4x1^2 + 16x1x2 + 4x2^2
(b) Q(x) = 2x1^2 - 6x1x2 - 6x1x3 + 3x2^2 + 4x2x3 + 3x3^2

Since every symmetric matrix corresponds uniquely to a quadratic form, it makes sense to classify a symmetric matrix by classifying its corresponding quadratic form. That is, for example, we will say a symmetric matrix A is positive definite if and only if the quadratic form Q(x) = x^T A x is positive definite. Observe that this implies that we can use Theorem 2 to classify symmetric matrices as well.

EXAMPLE 8   Classify the following symmetric matrices.

(a) A = [3  2; 2  3]
Solution: We have C(λ) = λ^2 - 6λ + 5. Thus, the eigenvalues of A are 5 and 1, so A is positive definite.

(b) A = [matrix entries not legible in this scan]
Solution: We have C(λ) = (λ + 4)(λ^2 - 28). Thus, the eigenvalues of A are -4, 2√7, and -2√7, so A is indefinite.

PROBLEMS 8.2
Practice Problems

Al Determine the quadratic form corresponding to the given symmetric matrix.
   (a)-(c) [matrix entries not legible in this scan]

A2 For each of the following quadratic forms Q(x),
   (i) Determine the corresponding symmetric matrix A.
   (ii) Express Q(x) in diagonal form and give the orthogonal matrix that brings it into this form.
   (iii) Classify Q(x).
   (a) Q(x) = x1^2 - 3x1x2 + x2^2
   (b) Q(x) = 5x1^2 - 4x1x2 + 2x2^2
   (c) Q(x) = -2x1^2 + 12x1x2 + 7x2^2
   (d) Q(x) = x1^2 - 2x1x2 + 6x1x3 + x2^2 + 6x2x3 - 3x3^2
   (e) Q(x) = -4x1^2 + 2x1x2 - 5x2^2 - 2x2x3 - 4x3^2

A3 Classify each of the following symmetric matrices.
   (a)-(f) [matrix entries not legible in this scan]

Homework Problems

Bl Determine the quadratic form corresponding to the given symmetric matrix.
   (a)-(c) [matrix entries not legible in this scan]

B2 For each of the following quadratic forms Q(x),
   (i) Determine the corresponding symmetric matrix A.
   (ii) Express Q(x) in diagonal form and give the orthogonal matrix that brings it into this form.
   (iii) Classify Q(x).
   (a) Q(x) = 7x1^2 + 4x1x2 + 4x2^2
   (b) Q(x) = 2x1^2 + 6x1x2 + 2x2^2
   (c) Q(x) = x1^2 + 3x1x2 + x2^2
   (d) Q(x) = 3x1^2 - 2x1x2 - 2x1x3 + 5x2^2 + 2x2x3 + 3x3^2
   (e) Q(x) = 2x1^2 - 4x1x2 + 6x1x3 + 2x2^2 + 6x2x3 - 3x3^2

B3 Classify each of the following symmetric matrices.
   (a)-(e) [matrix entries not legible in this scan]

Computer Problems

Cl Classify each of the following quadratic forms with the help of a computer.
   (a) Q(x) = -9x1^2 + 8x1x2 + 8x1x3 - 5x2^2 - 5x3^2
   (b) Q(x) = -0.1x1^2 - 0.8x1x2 + 1.2x1x4 + 2.1x2^2 + 1.6x2x3 + 1.3x3^2 + 4.2x3x4 + 1.1x4^2
   (c) Q(x) = 0.85(x1^2 + x2^2 + x3^2 + x4^2) - 0.1x1x2 + 0.6x1x3 + 0.2x1x4 + 0.2x2x3 + 0.6x2x4 - 0.1x3x4

Conceptual Problems

Dl Let Q(x) = x^T A x, where A is a symmetric matrix. Prove that Q(x) is negative definite if and only if all eigenvalues of A are negative.

D2 Let Q(x) = x^T A x, where A is a symmetric matrix. Prove that Q(x) is indefinite if and only if some of the eigenvalues of A are positive and some are negative.

D3 (a) Let Q(x) = x^T A x, where A is a symmetric matrix. Prove that Q(x) is positive semidefinite if and only if all of the eigenvalues of A are non-negative.
   (b) Let A be an m x n matrix. Prove that A^T A is positive semidefinite.

D4 Let A be a positive definite symmetric matrix. Prove that
   (a) The diagonal entries of A are all positive.
   (b) A is invertible.
   (c) A^{-1} is positive definite.
   (d) P^T AP is positive definite for any orthogonal matrix P.

D5 A matrix B is called skew-symmetric if B^T = -B. Given a square matrix A, define the symmetric part of A to be A+ = (1/2)(A + A^T) and the skew-symmetric part of A to be A- = (1/2)(A - A^T).
   (a) Verify that A+ is symmetric, A- is skew-symmetric, and A = A+ + A-.
   (b) Prove that the diagonal entries of A- are 0.
   (c) Determine expressions for typical entries (A+)_ij and (A-)_ij in terms of the entries of A.
   (d) Prove that for every x in R^n, x^T A x = x^T A+ x. (Hint: Use the fact that A = A+ + A- and prove that x^T A- x = 0.)

D6 In this problem, we show that general inner products on R^n are not different in interesting ways from the standard inner product. Let <,> be an inner product on R^n and let S = {e1, ..., en} be the standard basis.
   (a) Verify that for any x, y in R^n,

       <x, y> = sum_{i=1}^n sum_{j=1}^n x_i y_j <e_i, e_j>

   (b) Let G be the n x n matrix defined by g_ij = <e_i, e_j>. Verify that <x, y> = x^T G y.
   (c) Use the properties of an inner product to verify that G is symmetric and positive definite.
   (d) By adapting the proof of Theorem 1, show that there is a basis B = {v1, ..., vn} such that in B-coordinates,

       <x, y> = λ1 b1 c1 + ... + λn bn cn

       where λ1, ..., λn are the eigenvalues of G, [x]_B = (b1, ..., bn), and [y]_B = (c1, ..., cn). In particular,

       <x, x> = ||x||^2 = λ1 b1^2 + ... + λn bn^2

   (e) Introduce a new basis C = {w1, ..., wn} by defining w_i = v_i / √λi. Use an asterisk to denote C-coordinates, so that x = x1* w1 + ... + xn* wn. Verify that

       <w_i, w_k> = 1 if i = k and <w_i, w_k> = 0 if i ≠ k

       and that

       <x, y> = x1* y1* + ... + xn* yn*

       Thus, with respect to the inner product <,>, C is an orthonormal basis, and in C-coordinates, the inner product of two vectors looks just like the standard dot product.

8.3 Graphs of Quadratic Forms

In R^2, it is often of interest to know the graph of an equation of the form Q(x) = k, where Q(x) is a quadratic form on R^2 and k is a constant. If we were interested in only one or two particular graphs, it might be sensible to simply use a computer to produce these graphs. However, by applying diagonalization to the problem of determining these graphs, we see a very clear interpretation of eigenvectors. We also consider a concrete useful application of a change of coordinates. Moreover, this approach to these graphs leads to a classification of the various possibilities; all of the graphs of the form Q(x) = k in R^2 can be divided into a few standard cases. Classification is a useful process because it allows us to say "I really only need to understand these few standard cases." A classification of these graphs is given later in this section.
Observe that in general it is difficult to identify the shape of the graph of a x1^2 + b x1x2 + c x2^2 = k. It is even more difficult to try to sketch the graph. However, it is easy to sketch the graph of an equation in diagonal form, a x1^2 + c x2^2 = k. Thus, our strategy to sketch the graph of a quadratic form Q(x) = k is to first bring it into diagonal form. Of course, we first need to determine how diagonalizing the quadratic form will affect the graph.

Theorem 1   Let Q(x) = a x1^2 + b x1x2 + c x2^2, where a, b, and c are not all zero. Then an orthogonal matrix P with det P = 1, which diagonalizes Q(x), corresponds to a rotation in R^2.

Proof: Let A = [a  b/2; b/2  c]. Since A is symmetric, by the Principal Axis Theorem, there exists an orthonormal basis {v, w} of R^2 of eigenvectors of A. Let v = [v1; v2] and w = [w1; w2]. Since v is a unit vector, we must have

1 = ||v||^2 = v1^2 + v2^2

Hence, the point (v1, v2) lies on the unit circle. Therefore, there exists an angle θ such that v1 = cos θ and v2 = sin θ. Moreover, since w is a unit vector orthogonal to v, we must have w = ±[-sin θ; cos θ]. We choose w = [-sin θ; cos θ] so that det P = 1. Hence we have

P = [cos θ  -sin θ; sin θ  cos θ]

This corresponds to a rotation by θ. Finally, from our work in Section 8.2, we know that this change of coordinates matrix brings Q into diagonal form. ∎

Remark

If we picked w = [sin θ; -cos θ], we would find that P corresponds to a rotation and a reflection.
Section 8.3 Graphs of Quadratic Forms 381

In practice, we do not need to calculate the angle of rotation. When we orthogonally diagonalize Q(x) with P = [v1  v2], the change of coordinates x = Py causes a rotation to the new y1- and y2-axes. In particular, taking y = [1; 0], we get

x = P[1; 0] = v1

and taking y = [0; 1] gives

x = P[0; 1] = v2

That is, the new y1-axis corresponds to the vector v1 in the x1x2-plane, and the y2-axis corresponds to the vector v2 in the x1x2-plane.
We demonstrate this with two examples.

EXAMPLE 1   Sketch the graph of the equation 3x1^2 + 4x1x2 = 16.
Solution: The quadratic form Q(x) = 3x1^2 + 4x1x2 corresponds to the symmetric matrix A = [3  2; 2  0], so the characteristic polynomial is

C(λ) = det(A - λI) = λ^2 - 3λ - 4 = (λ - 4)(λ + 1)

Thus, the eigenvalues of A are λ1 = 4 and λ2 = -1. Thus, by an orthogonal change of coordinates, the equation can be brought into the diagonal form:

4y1^2 - y2^2 = 16

This is an equation of a hyperbola, and we can sketch the graph in the y1y2-plane. We observe that the y1-intercepts are (2, 0) and (-2, 0), and there are no intercepts on the y2-axis. The asymptotes of the hyperbola are determined by the equation 4y1^2 - y2^2 = 0. By factoring, we determine that the asymptotes are lines with equations 2y1 - y2 = 0 and 2y1 + y2 = 0. With this information, we obtain the graph in Figure 8.3.1.
However, we want a picture of the graph of 3x1^2 + 4x1x2 = 16 relative to the original x1-axis and x2-axis, that is, in the x1x2-plane. Hence, we need to find the eigenvectors of A.
For λ1 = 4,

A - λ1 I = [-1  2; 2  -4] ~ [1  -2; 0  0]

Thus, a basis for the eigenspace is {v1}, where v1 = [2; 1].
For λ2 = -1,

Figure 8.3.1   The graph of 4y1^2 - y2^2 = 16, shown with horizontal y1-axis and asymptotes.

EXAMPLE 1
(continued)

A - λ2 I = [4  2; 2  1] ~ [1  1/2; 0  0]

Thus, a basis for the eigenspace is {v2}, where v2 = [-1; 2]. (We could have chosen [1; -2], but [-1; 2] is better because {v1, v2} is right-handed.)

Now we sketch the graph of 3x1^2 + 4x1x2 = 16. In the x1x2-plane, we draw the new y1-axis in the direction of v1. (For clarity, in Figure 8.3.2 we have shown the vector [2; 1] instead of the unit vector (1/√5)[2; 1].) We also draw the new y2-axis in the direction of v2. Then, relative to these new axes, we sketch the graph of the hyperbola 4y1^2 - y2^2 = 16. The graph in Figure 8.3.2 is also the graph of the original equation 3x1^2 + 4x1x2 = 16.
In order to include the asymptotes in the sketch, we rewrite their equations in standard coordinates. The orthogonal change of coordinates matrix in this case is given by P = (1/√5)[2  -1; 1  2]. (This is a rotation of the axes through angle θ ≈ 0.46 radians.) Thus, the change of coordinates equation can be written

[y1; y2] = (1/√5)[2  1; -1  2] [x1; x2]

since P^T = P^{-1} as P is orthogonal. This gives

y1 = (2x1 + x2)/√5  and  y2 = (-x1 + 2x2)/√5

Then one asymptote is

0 = 2y1 + y2 = (4x1 + 2x2)/√5 + (-x1 + 2x2)/√5 = (3x1 + 4x2)/√5

The other asymptote is

0 = 2y1 - y2 = (4x1 + 2x2)/√5 - (-x1 + 2x2)/√5 = 5x1/√5

Thus, in standard coordinates, the asymptotes are 3x1 + 4x2 = 0 and x1 = 0.

Figure 8.3.2   The graph of 3x1^2 + 4x1x2 = 16, or equivalently 4y1^2 - y2^2 = 16.


EXAMPLE 2   Sketch the graph of the equation 6x1^2 + 4x1x2 + 3x2^2 = 14.

Solution: The corresponding symmetric matrix is A = [6  2; 2  3]. The eigenvalues are λ1 = 2 and λ2 = 7. A basis for the eigenspace of λ1 is {v1}, where v1 = [-1; 2]. A basis for the eigenspace of λ2 is {v2}, where v2 = [2; 1]. If v1 is taken to define the y1-axis and v2 is taken to define the y2-axis, then the original equation is equivalent to

2y1^2 + 7y2^2 = 14

This is the equation of an ellipse with y1-intercepts (√7, 0) and (-√7, 0) and y2-intercepts (0, √2) and (0, -√2).
In Figure 8.3.3, the ellipse is shown relative to the y1- and y2-axes. In Figure 8.3.4, the new y1- and y2-axes determined by the eigenvectors are shown relative to the standard axes, and the ellipse from Figure 8.3.3 is rotated into place. The resulting ellipse is the graph of 6x1^2 + 4x1x2 + 3x2^2 = 14.

Figure 8.3.3   The graph of 2y1^2 + 7y2^2 = 14 in the y1y2-plane.

Figure 8.3.4   The graph of 6x1^2 + 4x1x2 + 3x2^2 = 14, or equivalently 2y1^2 + 7y2^2 = 14.

Since diagonalizing a quadratic form corresponds to a rotation, to classify all the graphs of equations of the form Q(x) = k, we diagonalize and rewrite the equation in the form λ1 y1^2 + λ2 y2^2 = k. Here, λ1 and λ2 are the eigenvalues of the corresponding symmetric matrix. The distinct possibilities are displayed in Table 8.3.1.

Table 8.3.1   Graphs of λ1 x1^2 + λ2 x2^2 = k

                       k > 0            k = 0                k < 0
λ1 > 0, λ2 > 0         ellipse          point (0,0)          empty set
λ1 > 0, λ2 = 0         parallel lines   line x1 = 0          empty set
λ1 > 0, λ2 < 0         hyperbola        intersecting lines   hyperbola
λ1 = 0, λ2 < 0         empty set        line x2 = 0          parallel lines
λ1 < 0, λ2 < 0         empty set        point (0,0)          ellipse
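Table 8.3.1 can be encoded as a small case analysis. The sketch below (the function name is ours, not from the text) assumes the eigenvalues are ordered λ1 ≥ λ2 and are not both zero:

```python
# Sketch: the cases of Table 8.3.1 for lam1*y1^2 + lam2*y2^2 = k,
# assuming lam1 >= lam2 and (lam1, lam2) != (0, 0).

def conic_type(lam1, lam2, k):
    """Classify the graph of lam1*y1^2 + lam2*y2^2 = k."""
    if lam1 > 0 and lam2 > 0:
        return "ellipse" if k > 0 else ("point" if k == 0 else "empty set")
    if lam1 > 0 and lam2 == 0:
        return "parallel lines" if k > 0 else ("line" if k == 0 else "empty set")
    if lam1 > 0 and lam2 < 0:
        return "hyperbola" if k != 0 else "intersecting lines"
    if lam1 == 0 and lam2 < 0:
        return "empty set" if k > 0 else ("line" if k == 0 else "parallel lines")
    # remaining case: both eigenvalues negative
    return "empty set" if k > 0 else ("point" if k == 0 else "ellipse")

# Example 1: 3x1^2 + 4x1x2 = 16 has eigenvalues 4, -1
assert conic_type(4, -1, 16) == "hyperbola"
# Example 2: 6x1^2 + 4x1x2 + 3x2^2 = 14 has eigenvalues 7, 2
assert conic_type(7, 2, 14) == "ellipse"
```

Note that no branch returns "parabola": as explained below the table, Q(x) = k has no first-degree terms, so parabolas cannot occur.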

The cases where k = 0 or one eigenvalue is zero may be regarded as degenerate cases (not general cases). The nondegenerate cases are the ellipses and hyperbolas, which are conic sections. (A conic section is a curve obtained in R^3 as the intersection of a cone and a plane.) Notice that the cases of a single point, a single line, and intersecting lines can also be obtained as the intersection of a cone and a plane passing through the vertex of the cone. However, the cases of parallel lines (in Table 8.3.1) are not obtained as the intersection of a cone and a plane.
It is also important to realize that one class of conic sections, parabolas, does not appear in Table 8.3.1. In R^2, the equation of a parabola is a quadratic equation, but it contains first-degree terms. Since a quadratic form contains only second-degree terms, an equation of the form Q(x) = k cannot be a parabola.
The classification provided by Table 8.3.1 suggests that it might be interesting to consider how degenerate cases arise as limiting cases of nondegenerate cases. For example, Figure 8.3.5 shows that the case of parallel lines (x2 = ±constant) arises from the family of ellipses λx1^2 + x2^2 = 1 as λ tends to 0.

Figure 8.3.5   A family of ellipses λx1^2 + x2^2 = 1. The circle occurs for λ = 1; as λ decreases, the ellipses get "fatter"; for λ = 0, the graph is a pair of lines.

Table 8.3.1 could have been constructed using only the cases k > 0 and k = 0. The graphs obtained for k < 0 are all obtained for k > 0, although they may be oriented differently. For example, the graph of x1^2 - x2^2 = -1 is the same as the graph of -x1^2 + x2^2 = 1, and this hyperbola may be obtained from the hyperbola x1^2 - x2^2 = 1 by reflecting over the line x1 = x2. However, for purposes of illustration, it is convenient to include both k > 0 and k < 0. Figure 8.3.6 shows that the case of intersecting lines (k = 0) separates the hyperbolas with intercepts on the x1-axis (x1^2 - 2x2^2 = k > 0) from the hyperbolas with intercepts on the x2-axis (x1^2 - 2x2^2 = k < 0).

Figure 8.3.6   Graphs of x1^2 - 2x2^2 = k for k ∈ {-1, 0, 1}.

EXERCISE 1   Diagonalize the quadratic form and sketch the graph of the equation x1^2 + 2x1x2 + x2^2 = 2. Show both the original axes and the new axes.

Graphs of Q(x) = k in R^3

For a quadratic equation of the form Q(x) = k in R^3, there are similar results to what we did above. However, because there are three variables instead of two, there are more possibilities. The nondegenerate cases give ellipsoids, hyperboloids of one sheet, and hyperboloids of two sheets. These graphs are called quadric surfaces.
The usual standard form for the equation of an ellipsoid is x1^2/a^2 + x2^2/b^2 + x3^2/c^2 = 1. This is the case obtained by diagonalizing Q(x) = k if the eigenvalues and k are all non-zero and have the same sign. An ellipsoid is shown in Figure 8.3.7. The positive intercepts on the coordinate axes are (a, 0, 0), (0, b, 0), and (0, 0, c).

Figure 8.3.7   An ellipsoid in standard position.

Figure 8.3.8   Graphs of 4x1^2 + 4x2^2 - x3^2 = k. (a) k = 1: a hyperboloid of one sheet. (b) k = 0: a cone. (c) k = -1: a hyperboloid of two sheets.

The standard form of the equation for a hyperboloid of one sheet is x1^2/a^2 + x2^2/b^2 - x3^2/c^2 = 1. This form is obtained when k and two eigenvalues of the matrix of Q are positive and the third eigenvalue is negative. It is also obtained when k and two eigenvalues are negative and the other eigenvalue is positive. Notice that if this is rewritten x1^2/a^2 + x2^2/b^2 = 1 + x3^2/c^2, it is clear that for every x3 there are values of x1 and x2 that satisfy the equation, so that the surface is all one piece (or one sheet). A hyperboloid of one sheet is shown in Figure 8.3.8 (a).
The standard form of the equation for a hyperboloid of two sheets is x1^2/a^2 + x2^2/b^2 - x3^2/c^2 = -1. This form is obtained when k and one eigenvalue are negative and the other eigenvalues are positive, or when k and one eigenvalue are positive and the other eigenvalues are negative. Notice that if this is rewritten x1^2/a^2 + x2^2/b^2 = -1 + x3^2/c^2, it is clear that for every |x3| < c, there are no values of x1 and x2 that satisfy the equation. Therefore, the graph consists of two pieces (or two sheets), one with x3 ≥ c and the other with x3 ≤ -c. A hyperboloid of two sheets is shown in Figure 8.3.8 (c).
It is interesting to consider the family of surfaces obtained by varying k in the equation x1^2/a^2 + x2^2/b^2 - x3^2/c^2 = k, as in Figure 8.3.8. When k = 1, the surface is a hyperboloid of one sheet; as k decreases toward 0, the "waist" of the hyperboloid shrinks until at k = 0 it has "pinched in" to a single point and the hyperboloid of one sheet becomes a cone. As k decreases towards -1, the waist has disappeared, and the graph is now a hyperboloid of two sheets.

Table 8.3.2   Graphs of λ1 x1^2 + λ2 x2^2 + λ3 x3^2 = k

                             k > 0                       k = 0
λ1, λ2, λ3 > 0               ellipsoid                   point (0, 0, 0)
λ1, λ2 > 0, λ3 = 0           elliptic cylinder           x3-axis
λ1, λ2 > 0, λ3 < 0           hyperboloid of one sheet    cone
λ1 > 0, λ2 = 0, λ3 < 0       hyperbolic cylinder         intersecting planes
λ1 > 0, λ2, λ3 < 0           hyperboloid of two sheets   cone
λ1 = 0, λ2, λ3 < 0           empty set                   x1-axis
λ1, λ2, λ3 < 0               empty set                   point (0, 0, 0)
Section 8.3 Exercises 387

Table 8.3.2 displays the possible cases for Q(x) = k in R^3. The nondegenerate cases are the ellipsoids and hyperboloids. Note that the hyperboloid of two sheets appears in the form x1^2/a^2 - x2^2/b^2 - x3^2/c^2 = k, k > 0.
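The nondegenerate rows of Table 8.3.2 reduce to counting eigenvalue signs. A sketch (the function name is ours, not from the text), assuming k > 0 and all eigenvalues nonzero:

```python
# Sketch: nondegenerate rows of Table 8.3.2 for
# lam1*x1^2 + lam2*x2^2 + lam3*x3^2 = k with k > 0 and no zero eigenvalue.

def quadric_type(lams):
    """Classify the surface by counting positive and negative eigenvalues."""
    pos = sum(1 for lam in lams if lam > 0)
    neg = sum(1 for lam in lams if lam < 0)
    if pos == 3:
        return "ellipsoid"
    if pos == 2 and neg == 1:
        return "hyperboloid of one sheet"
    if pos == 1 and neg == 2:
        return "hyperboloid of two sheets"
    return "empty set"   # all three eigenvalues negative, k > 0

assert quadric_type((1, 1, -1)) == "hyperboloid of one sheet"
assert quadric_type((1, -1, -1)) == "hyperboloid of two sheets"
```

The degenerate rows (a zero eigenvalue, giving cylinders, planes, or axes) would need extra branches, just as in the table.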

Figure 8.3.9   Some degenerate quadric surfaces. (a) An elliptic cylinder x1^2 + λx2^2 = 1, parallel to the x3-axis. (b) A hyperbolic cylinder λx1^2 - x3^2 = 1, parallel to the x2-axis. (c) Intersecting planes λx1^2 - x2^2 = 0.

Figure 8.3.9 shows some degenerate quadric surfaces. Note that paraboloidal surfaces do not appear as graphs of the form Q(x) = k in R^3 for the same reason that parabolas do not appear in Table 8.3.1 for R^2: their equations contain first-degree terms.

PROBLEMS 8.3
Practice Problems

Al Sketch the graph of the equation 2x1^2 + 4x1x2 - x2^2 = 6. Show both the original axes and the new axes.

A2 Sketch the graph of the equation 2x1^2 + 6x1x2 + 10x2^2 = 11. Show both the original axes and the new axes.

A3 Sketch the graph of the equation 4x1^2 - 6x1x2 + 4x2^2 = 12. Show both the original axes and the new axes.

A4 Sketch the graph of the equation 5x1^2 + 6x1x2 - 3x2^2 = 15. Show both the original axes and the new axes.

A5 For each of the following symmetric matrices, identify the shape of the graph x^T Ax = 1 and the shape of the graph x^T Ax = -1.
   (a)-(e) [matrix entries not legible in this scan]

Homework Problems

Bl Sketch the graph of the equation 9x1^2 + 4x1x2 + 6x2^2 = 90. Show both the original axes and the new axes.

B2 Sketch the graph of the equation x1^2 + 6x1x2 - 7x2^2 = 32. Show both the original axes and the new axes.

B3 Sketch the graph of the equation x1^2 - 4x1x2 + x2^2 = 8. Show both the original axes and the new axes.

B4 Sketch the graph of the equation x1^2 + 4x1x2 + x2^2 = 8. Show both the original axes and the new axes.

B5 Sketch the graph of the equation 3x1^2 - 4x1x2 + 3x2^2 = 32. Show both the original axes and the new axes.

B6 In each of the following cases, diagonalize the quadratic form. Then determine the shape of the surface Q(x) = k for k = 1, 0, -1. Note that two of the quadratic forms are degenerate.
   (a) Q(x) = x1^2 + 4x1x2 + x2^2
   (b) Q(x) = x1^2 + 6x1x2 + 2x1x3 + x2^2 + 2x2x3 + 5x3^2
   (c) Q(x) = x1^2 + 4x1x2 + 4x1x3 + 5x2^2 + 6x2x3 + 5x3^2
   (d) Q(x) = -x1^2 + 2x1x2 - 6x1x3 + x2^2 - 2x2x3 - x3^2
   (e) Q(x) = 4x1^2 + 2x1x2 + 5x2^2 - 2x2x3 + 4x3^2

Computer Problems

Cl Identify the following surfaces by using a computer to find the eigenvalues of the corresponding symmetric matrix. Plot graphs of the original system and of the diagonalized system.
   (a) x1^2 - 14x1x2 + 6x1x3 - x2^2 + 8x2x3 = 10
   (b) 3x1^2 + 10x1x2 + 4x1x3 + 16x2x3 - 6x3^2 = 37

8.4 Applications of Quadratic Forms


Applying quadratic forms requires some knowledge of calculus and physics. This sec­
tion may be omitted with no loss of continuity.
Some may think of mathematics as only a set of rules for doing calculations.
However, a theorem such as the Principal Axis Theorem is often important because
it provides a simple way of thinking about complicated situations. The Principal Axis
Theorem plays an important role in the two applications described here.

Small Deformations
A small deformation of a solid body may be understood as the composition of three
stretches along the principal axes of a symmetric matrix together with a rigid rotation
of the body.
Consider a body of material that can be deformed when it is subjected to some external forces. This might be, for example, a piece of steel under some load. Fix an origin of coordinates O in the body; to simplify the story, suppose that this origin is left unchanged by the deformation. Suppose that a material point in the body, which is at x before the forces are applied, is moved by the forces to the point f(x) = (f1(x), f2(x), f3(x)); we have assumed that f(0) = 0. The problem is to understand this deformation f so that it can be related to the properties of the body. (Note that f represents the displacement of the point initially at x, not the force at x.)
Section 8.4 Applications of Quadratic Forms 389

For many materials under reasonable forces, the deformation is small; this means that the point f(x) is not far from x. It is convenient to introduce a parameter β to describe how small the deformation is and a function h(x) and write

f(x) = x + βh(x)

This equation is really the definition of h(x) in terms of the point x, the given function f(x), and the parameter β.
For many materials, an arbitrary small deformation is well approximated by its "best linear approximation," the derivative. In this case, the map f : R^3 -> R^3 is approximated near the origin by the linear transformation with matrix [∂fi/∂xj (0)], so that in this approximation, a point originally at v is moved (approximately) to [∂fi/∂xj (0)] v. (This is a standard calculus approximation.)
In terms of the parameter β and the function h, this matrix can be written as

[∂fi/∂xj (0)] = I + βG

where G = [∂hi/∂xj (0)]. In this situation, it is useful to write G as G = E + W, where

E = (1/2)(G + G^T)

is its symmetric part, and

W = (1/2)(G - G^T)

is its skew-symmetric part, as in Problem 8.2 D5.


The next step is to observe that we can write

I+ = I+ + = (I+
f3G f3(E W) {3E)(J + f3W) - /32 EW

Since f3 is assumed to be small, {32 is very small and may be ignored. (Such treatment
of terms like [32 can be justified by careful discussion of the limit at f3 � 0.)
The small deformation we started with is now described as the composition of two linear transformations, one with matrix $I + \beta E$ and the other with matrix $I + \beta W$. It can be shown that $I + \beta W$ describes a small rigid rotation of the body; a rigid rotation does not alter the distance between any two points in the body. (The matrix $\beta W$ is called an infinitesimal rotation.)
Finally, we have the linear transformation with matrix $I + \beta E$. This matrix is symmetric, so there exist principal axes such that the symmetric matrix is diagonalized to

$$\begin{bmatrix} 1+\epsilon_1 & 0 & 0 \\ 0 & 1+\epsilon_2 & 0 \\ 0 & 0 & 1+\epsilon_3 \end{bmatrix}$$

(It is equivalent to diagonalize $\beta E$ and add the result to $I$, because $I$ is transformed to itself under any orthonormal change of coordinates.) Since $\beta$ is small, it follows that the numbers $\epsilon_j$ are small in magnitude, and therefore $1 + \epsilon_j > 0$. This diagonalized matrix can be written as the product of three matrices:

$$\begin{bmatrix} 1+\epsilon_1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1+\epsilon_2 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1+\epsilon_3 \end{bmatrix}$$

It is now apparent that, excluding rotation, the small deformation can be represented as the composition of three stretches along the principal axes of the matrix $\beta E$. The quantities $\epsilon_1$, $\epsilon_2$, and $\epsilon_3$ are related to the external and internal forces in the material by elastic properties of the material. ($\beta E$ is called the infinitesimal strain; this notation is not quite the standard notation. This will be important if you read further about this topic in a book on continuum mechanics.)
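The split of the displacement gradient into a strain and an infinitesimal rotation, and the principal stretches, can be checked numerically. The following sketch uses a made-up gradient matrix $\beta G$ (not a value from the text) and NumPy's `eigh`, which diagonalizes symmetric matrices:

```python
import numpy as np

# Hypothetical small displacement-gradient matrix beta*G (made-up values).
bG = np.array([[0.010,  0.004, 0.000],
               [0.002, -0.005, 0.003],
               [0.001,  0.001, 0.020]])

# Symmetric part beta*E (infinitesimal strain) and skew part beta*W.
bE = 0.5 * (bG + bG.T)
bW = 0.5 * (bG - bG.T)
assert np.allclose(bE + bW, bG)

# eigh diagonalizes a symmetric matrix; columns of P are the principal axes.
eps, P = np.linalg.eigh(bE)

# The stretch factors along the principal axes are 1 + eps_j.
stretches = 1.0 + eps
print(stretches)
```

Because the entries of $\beta E$ are small, each eigenvalue $\epsilon_j$ is small and each stretch factor $1 + \epsilon_j$ is positive, matching the discussion above.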

The Inertia Tensor


For the purpose of discussing the rotational motion of a rigid body, information about the mass distribution within the body is summarized in a symmetric matrix $N$ called the inertia tensor. (1) The tensor is easiest to understand if principal axes are used so that the matrix is diagonal; in this case, the diagonal entries are simply the moments of inertia about the principal axes, and the moment of inertia about any other axis can be calculated in terms of these principal moments of inertia. (2) In general, the angular momentum vector $\vec{l}$ of the rotating body is equal to $N\vec{\omega}$, where $\vec{\omega}$ is the instantaneous angular velocity vector. The vector $\vec{l}$ is a scalar multiple of $\vec{\omega}$ if and only if $\vec{\omega}$ is an eigenvector of $N$; that is, if and only if the axis of rotation is one of the principal axes of the body. This is a beginning to an explanation of how the body wobbles during rotation ($\vec{\omega}$ need not be constant) even though $\vec{l}$ is a conserved quantity (that is, $\vec{l}$ is constant if no external force is applied).
Suppose that a rigid body is rotating about some point in the body that remains fixed in space throughout the rotation. Make this fixed point the origin $(0, 0, 0)$. Suppose that there are coordinate axes fixed in space and also three reference axes that are fixed in the body (so that they rotate with the body). At any time $t$, these body axes make certain angles with respect to the space axes; at a later time $t + \Delta t$, the body axes have moved to a new position. Since $(0, 0, 0)$ is fixed and the body is rigid, the body axes have moved only by a rotation, and it is a fact that any rotation in $\mathbb{R}^3$ is determined by its axis and an angle. Call the unit vector along this axis $\vec{u}(t + \Delta t)$ and denote the angle by $\Delta\theta$. Now let $\Delta t \to 0$; the unit vector $\vec{u}(t + \Delta t)$ must tend to a limit $\vec{u}(t)$, and this determines the instantaneous axis of rotation at time $t$. Also, as $\Delta t \to 0$, $\frac{\Delta\theta}{\Delta t} \to \frac{d\theta}{dt}$, the instantaneous rate of rotation about the axis. The instantaneous angular velocity is defined to be the vector $\vec{\omega} = \left(\frac{d\theta}{dt}\right)\vec{u}(t)$.

(It is a standard exercise to show that the instantaneous linear velocity $\vec{v}(t)$ at some point in the body whose space coordinates are given by $\vec{x}(t)$ is determined by $\vec{v} = \vec{\omega} \times \vec{x}$.)
To use concepts such as energy and momentum in the discussion of rotating motion, it is necessary to introduce moments of inertia.

For a single mass $m$ at the point $(x_1, x_2, x_3)$, the moment of inertia about the $x_3$-axis is defined to be $m(x_1^2 + x_2^2)$; this will be denoted by $n_{33}$. The factor $(x_1^2 + x_2^2)$ is simply the square of the distance of the mass from the $x_3$-axis. There are similar definitions of the moments of inertia about the $x_1$-axis (denoted by $n_{11}$) and about the $x_2$-axis (denoted by $n_{22}$).

For a general axis $\ell$ through the origin with unit direction vector $\vec{u}$, the moment of inertia of the mass about $\ell$ is defined to be $m$ multiplied by the square of the distance of $m$ from $\ell$. Thus, if we let $\vec{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$, the moment of inertia in this case is

$$m\|\operatorname{perp}_{\vec{u}}\vec{x}\|^2 = m\big[\vec{x} - (\vec{x}\cdot\vec{u})\vec{u}\big]^T\big[\vec{x} - (\vec{x}\cdot\vec{u})\vec{u}\big] = m\big(\|\vec{x}\|^2 - (\vec{x}\cdot\vec{u})^2\big)$$


With some manipulation, using $\vec{u}^T\vec{u} = 1$ and $\vec{x}\cdot\vec{u} = \vec{x}^T\vec{u}$, we can verify that this is equal to the expression

$$m\,\vec{u}^T\big(\|\vec{x}\|^2 I - \vec{x}\vec{x}^T\big)\vec{u}$$

Because of this, for the single point mass $m$ at $\vec{x}$, we define the inertia tensor $N$ to be the $3 \times 3$ matrix

$$N = m\big(\|\vec{x}\|^2 I - \vec{x}\vec{x}^T\big)$$

(Vectors and matrices are special kinds of "tensors"; for our present purposes, we simply treat $N$ as a matrix.) With this definition, the moment of inertia about an axis with unit direction $\vec{u}$ is

$$\vec{u}^T N \vec{u}$$

It is easy to check that $N$ is the matrix with components $n_{11}$, $n_{22}$, and $n_{33}$ as given above, and for $i \neq j$, $n_{ij} = -m x_i x_j$. It is clear that this matrix $N$ is symmetric because $\vec{x}\vec{x}^T$ is a symmetric $3 \times 3$ matrix. (The term $m x_i x_j$ is called a product of inertia. This name has no special meaning; the term is simply a product that appears as an entry in the inertia tensor.)
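As a numerical sanity check (the mass and position below are made up for illustration), the point-mass inertia tensor can be built from the compact formula $N = m(\|\vec{x}\|^2 I - \vec{x}\vec{x}^T)$, which has exactly the diagonal entries $n_{ii}$ and off-diagonal entries $-m x_i x_j$ described above:

```python
import numpy as np

m = 2.0                         # hypothetical mass
x = np.array([1.0, 2.0, 3.0])   # hypothetical position (x1, x2, x3)

# Inertia tensor of a single point mass: N = m(||x||^2 I - x x^T).
N = m * (np.dot(x, x) * np.eye(3) - np.outer(x, x))

# Moment of inertia about the x3-axis: m (x1^2 + x2^2).
assert np.isclose(N[2, 2], m * (x[0]**2 + x[1]**2))

# Moment about a general unit axis u is u^T N u = m ||perp_u x||^2.
u = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
perp = x - np.dot(x, u) * u
assert np.isclose(u @ N @ u, m * np.dot(perp, perp))
print(u @ N @ u)
```

The two assertions confirm that the matrix definition agrees with the direct distance-to-axis definition of the moment of inertia.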
It is easy to extend the definition of moments of inertia and the inertia tensor
to bodies that are more complicated than a single point mass. Consider a rigid body
that can be thought of as k masses joined to each other by weightless rigid rods. The
moment of inertia of the body about the x3-axis is determined by taking the moment of
inertia about the x3-axis of each mass and simply adding these moments; the moments
about the $x_1$- and $x_2$-axes, and the products of inertia are defined similarly. The
inertia tensor of this body is just the sum of the inertia tensors of the k masses; since
it is the sum of symmetric matrices, it is also symmetric. If the mass is distributed
continuously, the various moments and products of inertia are determined by definite
integrals. In any case, the inertia tensor N is still defined, and is still a symmetric
matrix.
Since $N$ is a symmetric matrix, it can be brought into diagonal form by the Principal Axis Theorem. The diagonal entries are then the moments of inertia with respect to the principal axes, and these are called the principal moments of inertia. Denote these by $N_1$, $N_2$, and $N_3$. Let $\mathcal{P}$ denote the orthonormal basis consisting of eigenvectors of $N$ (which means these vectors are unit vectors along the principal axes). Suppose an arbitrary axis $\ell$ is determined by the unit vector $\vec{u}$ such that $[\vec{u}]_{\mathcal{P}} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}$. Then, from the discussion of quadratic forms in Section 8.2, the moment of inertia about this axis $\ell$ is simply

$$N_1 u_1^2 + N_2 u_2^2 + N_3 u_3^2$$

This formula is greatly simplified because of the use of the principal axes.
It is important to get equations for rotating motion that correspond to Newton's equation:

The rate of change of momentum equals the applied force.

The appropriate equation is

The rate of change of angular momentum is the applied torque.

It turns out that the right way to define the angular momentum vector $\vec{l}$ for a general body is

$$\vec{l}(t) = N(t)\vec{\omega}(t)$$

Note that in general $N$ is a function of $t$ since it depends on the positions at time $t$ of each of the masses making up the solid body. Understanding the possible motions of

a rotating body depends on determining $\vec{\omega}(t)$, or at least saying something about it. In general, this is a very difficult problem, but there will often be important simplifications if $N$ is diagonalized by the Principal Axis Theorem. Note that $\vec{l}(t)$ is parallel to $\vec{\omega}(t)$ if and only if $\vec{\omega}(t)$ is an eigenvector of $N(t)$.

PROBLEM 8.4
Conceptual Problem

D1 Show that if $P$ is an orthogonal matrix that diagonalizes the symmetric matrix $\beta E$ to a matrix with diagonal entries $\epsilon_1$, $\epsilon_2$, and $\epsilon_3$, then $P$ also diagonalizes $I + \beta E$ to a matrix with diagonal entries $1 + \epsilon_1$, $1 + \epsilon_2$, and $1 + \epsilon_3$.

CHAPTER REVIEW
Suggestions for Student Review

1 How does the theory of diagonalization of symmetric matrices differ from the theory for general square matrices? (Section 8.1)

2 Explain the connection between quadratic forms and symmetric matrices. How do you find the symmetric matrix corresponding to a quadratic form? How does diagonalization of the symmetric matrix enable us to diagonalize the quadratic form? (Section 8.2)

3 List the classifications of a quadratic form. How does diagonalizing the corresponding symmetric matrix help us classify a quadratic form? (Section 8.2)

4 What role do eigenvectors play in helping us understand the graphs of equations $Q(\vec{x}) = k$, where $Q(\vec{x})$ is a quadratic form? (Section 8.3)

5 Define the principal axes of a symmetric matrix $A$. How do the principal axes of $A$ relate to the graph of $Q(\vec{x}) = \vec{x}^T A\vec{x} = k$? (Section 8.3)

6 When diagonalizing a symmetric matrix $A$, we know that we can choose the eigenvalues in any order. How would changing the order in which we pick the eigenvalues change the graph of $Q(\vec{x}) = \vec{x}^T A\vec{x} = k$? Explain. (Section 8.3)

Chapter Quiz

E1 Let $A$ be the given $3 \times 3$ symmetric matrix. Find an orthogonal matrix $P$ such that $P^T A P = D$ is diagonal.

E2 For each of the following quadratic forms $Q(\vec{x})$:
(i) Determine the corresponding symmetric matrix $A$.
(ii) Express $Q(\vec{x})$ in diagonal form and give the orthogonal matrix that brings it into this form.
(iii) Classify $Q(\vec{x})$.
(iv) Describe the shape of $Q(\vec{x}) = 1$ and $Q(\vec{x}) = 0$.
(a) $Q(\vec{x}) = 5x_1^2 + 4x_1x_2 + 5x_2^2$
(b) $Q(\vec{x}) = 2x_1^2 - 6x_1x_2 - 6x_1x_3 - 3x_2^2 + 4x_2x_3 - 3x_3^2$

E3 By diagonalizing the quadratic form, make a sketch of the graph of the given equation in the $x_1x_2$-plane. Show the new and old coordinate axes.

E4 Prove that if $A$ is a positive definite symmetric matrix, then $\langle \vec{x}, \vec{y} \rangle = \vec{x}^T A \vec{y}$ is an inner product on $\mathbb{R}^n$.

E5 Prove that if $A$ is a $4 \times 4$ symmetric matrix with characteristic polynomial $C(\lambda) = (\lambda - 3)^4$, then $A = 3I$.

Further Problems

F1 In Problem 7.F5, we saw the QR-factorization: an invertible $n \times n$ matrix $A$ can be expressed in the form $A = QR$, where $Q$ is orthogonal and $R$ is upper triangular. Let $A_1 = RQ$, and prove that $A_1$ is orthogonally similar to $A$ and hence has the same eigenvalues as $A$. (By repeating this process, $A = Q_1R_1$, $A_1 = R_1Q_1$, $A_1 = Q_2R_2$, $A_2 = R_2Q_2, \ldots$, one obtains an effective numerical procedure for determining eigenvalues of a symmetric matrix.)

F2 Suppose that $A$ is an $n \times n$ positive semidefinite symmetric matrix. Prove that $A$ has a square root. That is, show that there is a positive semidefinite symmetric matrix $B$ such that $B^2 = A$. (Hint: Suppose that $Q$ diagonalizes $A$ to $D$ so that $Q^TAQ = D$. Define $C$ to be a positive square root for $D$ and let $B = QCQ^T$.)

F3 (a) If $A$ is any $n \times n$ matrix, prove that $A^TA$ is symmetric and positive semidefinite. (Hint: Consider $A\vec{x} \cdot A\vec{x}$.)
(b) If $A$ is invertible, prove that $A^TA$ is positive definite.

F4 (a) Suppose that $A$ is an invertible $n \times n$ matrix. Prove that $A$ can be expressed as a product of an orthogonal matrix $Q$ and a positive definite symmetric matrix $U$, $A = QU$. This is known as a polar decomposition of $A$. (Hint: Use Problems F2 and F3, let $U$ be the square root of $A^TA$, and let $Q = AU^{-1}$.)
(b) Let $V = QUQ^T$. Show that $V$ is symmetric and that $A = VQ$. Moreover, show that $V^2 = AA^T$, so that $V$ is a positive definite symmetric square root of $AA^T$.
(c) Suppose that the $3 \times 3$ matrix $A$ is the matrix of an orientation-preserving linear mapping $L$. Show that $L$ is the composition of a rotation following three stretches along mutually orthogonal axes. (This follows from part (a), facts about isometries of $\mathbb{R}^3$, and ideas in Section 8.4. In fact, this is a finite version of the result for infinitesimal strain in Section 8.4.)

MyMathlab Go to MyMathLab at www.mymathlab.com. You can practise many of this chapter's exercises as often as you
want. The guided solutions help you find an answer step by step. You'll find a personalized study plan available to
you, too!
CHAPTER 9

Complex Vector Spaces


CHAPTER OUTLINE

9.1 Complex Numbers


9.2 Systems with Complex Numbers
9.3 Vector Spaces Over $\mathbb{C}$
9.4 Eigenvectors in Complex Vector Spaces
9.5 Inner Products in Complex Vector Spaces
9.6 Hermitian Matrices and Unitary Diagonalization

When they first encounter imaginary numbers, many students wonder why we look at numbers that are not real. In fact, such numbers were not taken seriously until Rafael Bombelli showed in 1572 that numbers involving square roots of negative numbers can be used to solve real-world problems. Currently, complex numbers are used to solve problems in a wide variety of areas. Some examples are electronics, control theory, quantum mechanics, and fluid dynamics. Our goal in this chapter is to extend everything we did in Chapters 1-8 to allow for the use of complex numbers instead of just real numbers.

9.1 Complex Numbers


The first numbers we encounter as children are the natural numbers 1, 2, 3, and so on.
In school, we soon find that in order to perform certain subtractions, we must extend
our concept of number to the integers, which include the natural numbers. Then, so
that division can always be carried out, we extend the concept of number to the rational
numbers, which include the integers. Next we have to extend our understanding to the
real numbers, which include all the rationals and also include irrational numbers.
To solve the equation $x^2 + 1 = 0$, we have to extend our concept of number one more time. We define the number $i$ to be a number such that $i^2 = -1$. The system of numbers of the form $x + yi$, where $x, y \in \mathbb{R}$, is called the complex numbers. Note that the real numbers are included as those complex numbers with $y = 0$. As in the case with all the previous extensions of our understanding of number, some people are initially uncertain about the meaning of the "new" numbers. However, the complex numbers have a consistent set of rules of arithmetic, and the extension to complex numbers is justified by the fact that they allow us to solve important mathematical and physical problems that we could not solve using only real numbers.

The Arithmetic of Complex Numbers


Definition A complex number is a number of the form $z = x + yi$, where $x, y \in \mathbb{R}$, and $i$ is an
Complex Number element such that $i^2 = -1$. The set of all complex numbers is denoted by $\mathbb{C}$.

Addition of complex numbers $z_1 = x_1 + y_1 i$ and $z_2 = x_2 + y_2 i$ is defined by

$$z_1 + z_2 = (x_1 + x_2) + (y_1 + y_2)i$$

Multiplication of complex numbers $z_1 = x_1 + y_1 i$ and $z_2 = x_2 + y_2 i$ is defined by

$$z_1 z_2 = (x_1 + y_1 i)(x_2 + y_2 i) = x_1 x_2 + x_1 y_2 i + x_2 y_1 i + y_1 y_2 i^2 = (x_1 x_2 - y_1 y_2) + (x_1 y_2 + x_2 y_1)i$$

EXAMPLE 1 Perform the following operations.

(a) $(2 + 3i) + (5 - 4i)$

Solution: $(2 + 3i) + (5 - 4i) = (2 + 5) + (3 - 4)i = 7 - i$

(b) $(2 + 3i) - (5 - 4i)$

Solution: $(2 + 3i) - (5 - 4i) = (2 - 5) + (3 - (-4))i = -3 + 7i$

(c) $(3 - 2i)(-2 + 5i)$

Solution: $(3 - 2i)(-2 + 5i) = [3(-2) - (-2)(5)] + [3(5) + (-2)(-2)]i = 4 + 19i$
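These rules match the arithmetic of Python's built-in complex type (which writes $i$ as `j`, the engineering convention mentioned later in this section), so the computations above can be checked directly:

```python
# Python's complex literals use j in place of i.
a = 2 + 3j
b = 5 - 4j

print(a + b)                  # (7-1j)
print(a - b)                  # (-3+7j)
print((3 - 2j) * (-2 + 5j))   # (4+19j)

# Multiplication follows (x1 x2 - y1 y2) + (x1 y2 + x2 y1)i:
z1, z2 = 3 - 2j, -2 + 5j
x1, y1 = z1.real, z1.imag
x2, y2 = z2.real, z2.imag
assert z1 * z2 == complex(x1*x2 - y1*y2, x1*y2 + x2*y1)
```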

EXERCISE 1 Calculate the following:

(a) $(1 - 4i) + (2 + 5i)$
(b) $(2 + 2i)i$
(c) $(1 - 3i)(2 + i)$
(d) $(3 - 2i)(3 + 2i)$

Remarks

1. Notice that for a complex number $z = x + yi$, we have that $z = 0$ if and only if $x = 0$ and $y = 0$.

2. If $z = x + yi$, we say that the real part of $z$ is $x$ and write $\operatorname{Re}(z) = x$. We say that the imaginary part of $z$ is $y$ (not $yi$), and we write $\operatorname{Im}(z) = y$. It is important to remember that the imaginary part of $z$ is a real number.

3. If $x = 0$, $z = yi$ is said to be "purely imaginary." If $y = 0$, $z = x$ is "purely real." Notice that the real numbers are the subset of $\mathbb{C}$ of purely real complex numbers.

4. It is best not to use $\sqrt{-1}$ as a notation for the number $i$; doing so can lead to confusion in some cases. In physics and engineering, it is common to use $j$ in place of $i$ since the letter $i$ is often used to denote electric current.

5. It is sometimes convenient to write $x + iy$ instead of $x + yi$. This is particularly common with the polar form for complex numbers, which is discussed below.

The Complex Conjugate and Division


We have not yet discussed division of complex numbers. From the multiplication in Example 1, we can say that

$$\frac{4 + 19i}{3 - 2i} = -2 + 5i$$

In order to give a systematic method for expressing the quotient of two complex numbers as a complex number in standard form, it is useful to introduce the complex conjugate.

Definition The complex conjugate of the complex number $z = x + yi$ is $x - yi$ and is denoted
Complex Conjugate
$$\bar{z} = x - yi$$

EXAMPLE 2
$$\overline{2 + 5i} = 2 - 5i$$
$$\overline{-3 - 2i} = -3 + 2i$$
$$\bar{x} = x, \text{ for any } x \in \mathbb{R}$$

Theorem 1 Properties of the Complex Conjugate

For complex numbers $z_1 = x_1 + y_1 i$ and $z_2$, with $x_1, y_1 \in \mathbb{R}$, we have

(1) $\overline{\bar{z}_1} = z_1$
(2) $z_1$ is purely real if and only if $\bar{z}_1 = z_1$
(3) $z_1$ is purely imaginary if and only if $\bar{z}_1 = -z_1$
(4) $\overline{z_1 + z_2} = \bar{z}_1 + \bar{z}_2$
(5) $\overline{z_1 z_2} = \bar{z}_1 \bar{z}_2$
(6) $\overline{\left(\dfrac{z_1}{z_2}\right)} = \dfrac{\bar{z}_1}{\bar{z}_2}$, for $z_2 \neq 0$
(7) $z_1 + \bar{z}_1 = 2\operatorname{Re}(z_1) = 2x_1$
(8) $z_1 - \bar{z}_1 = 2i\operatorname{Im}(z_1) = 2y_1 i$
(9) $z_1 \bar{z}_1 = x_1^2 + y_1^2$

EXERCISE 2 Prove properties (1), (2), and (4) in Theorem 1.

The proofs of the remaining properties are left as Problem D1.


The quotient of two complex numbers can now be displayed as a complex number in standard form by multiplying both the numerator and the denominator by the complex conjugate of the denominator and simplifying. If $z_1 = x_1 + y_1 i$ and $z_2 = x_2 + y_2 i \neq 0$, then

$$\frac{z_1}{z_2} = \frac{z_1 \bar{z}_2}{z_2 \bar{z}_2} = \frac{(x_1 x_2 + y_1 y_2) + (x_2 y_1 - x_1 y_2)i}{x_2^2 + y_2^2}$$

Notice that the quotient is defined for every pair of complex numbers z1, z2, provided
that the denominator is not zero.

EXAMPLE 3
$$\frac{2 + 5i}{3 - 4i} = \frac{(2 + 5i)(3 + 4i)}{(3 - 4i)(3 + 4i)} = \frac{(6 - 20) + (8 + 15)i}{9 + 16} = -\frac{14}{25} + \frac{23}{25}i$$
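The conjugate method of Example 3 is exactly what complex division in Python performs; a quick check:

```python
z1 = 2 + 5j
z2 = 3 - 4j

# Multiply numerator and denominator by the conjugate of the denominator;
# the denominator z2 * conj(z2) = |z2|^2 is real.
denom = (z2 * z2.conjugate()).real
quotient = z1 * z2.conjugate() / denom
print(quotient)               # -14/25 + 23/25 i

# Python's built-in division produces the same result.
assert abs(quotient - z1 / z2) < 1e-12
assert abs(quotient - (-14/25 + 23j/25)) < 1e-12
```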

EXERCISE 3 Calculate the following quotients.


1+ 4 -i
(a) �
l-t
(b) �
l+t
(c)
1+Si

Roots of Polynomial Equations


Complex conjugates are not only used to determine quotients of complex numbers, but
occur naturally as roots of polynomials with real coefficients.

Theorem 2 Let $p(x) = a_n x^n + \cdots + a_1 x + a_0$, where $a_i \in \mathbb{R}$ for $0 \le i \le n$. If $z$ is a root of $p(x)$, then $\bar{z}$ is also a root of $p(x)$.

Proof: Suppose that $z$ is a root, so that

$$a_n z^n + \cdots + a_1 z + a_0 = 0$$

Using Theorem 1, we get

$$0 = \bar{0} = \overline{a_n z^n + \cdots + a_1 z + a_0} = a_n \bar{z}^n + \cdots + a_1 \bar{z} + a_0$$

Thus, $\bar{z}$ is a root of $p(x)$. ∎

EXAMPLE 4 Find the roots of $p(x) = x^3 + 1$.

Solution: By the Rational Roots Theorem (or by observation), we see that $x = -1$ is a root of $p(x)$. Therefore, by the Factor Theorem, $(x + 1)$ is a factor of $p(x)$. Thus,

$$x^3 + 1 = (x + 1)(x^2 - x + 1)$$

Using the quadratic formula, we find that the other roots are

$$z = \frac{1 + \sqrt{3}i}{2} \quad \text{and} \quad \bar{z} = \frac{1 - \sqrt{3}i}{2}$$
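Theorem 2 and Example 4 can be verified numerically with NumPy's polynomial root finder (this is a check, not part of the text):

```python
import numpy as np

# Coefficients of p(x) = x^3 + 1, highest degree first.
roots = np.roots([1, 0, 0, 1])

# One real root at x = -1 ...
real_roots = roots[np.isclose(roots.imag, 0)]
assert np.isclose(real_roots[0].real, -1)

# ... and the non-real roots occur as a conjugate pair (1 +/- sqrt(3) i)/2.
pair = roots[~np.isclose(roots.imag, 0)]
assert len(pair) == 2
assert np.isclose(pair[0], np.conj(pair[1]))
assert np.allclose(pair.real, 0.5)
assert np.allclose(np.abs(pair.imag), np.sqrt(3) / 2)
print(roots)
```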

The Complex Plane


For some purposes, it is convenient to represent the complex numbers as ordered pairs of real numbers: instead of $z = x + yi$, we write $z = (x, y)$. Then addition and multiplication appear as follows:

$$z_1 + z_2 = (x_1, y_1) + (x_2, y_2) = (x_1 + x_2, y_1 + y_2)$$
$$z_1 z_2 = (x_1, y_1)(x_2, y_2) = (x_1 x_2 - y_1 y_2, x_1 y_2 + x_2 y_1)$$

In terms of this ordered pair notation, it is natural to represent complex numbers as


points in the plane, with the real part of z being the x-coordinate and the imaginary
part being the y-coordinate. We will speak of the real axis and the imaginary axis in
the complex plane. See Figure 9.1.1. A picture of this kind is sometimes called an
Argand diagram.

Figure 9.1.1 The complex plane, showing the point $z = (x, y)$ and its conjugate $\bar{z} = (x, -y)$ relative to the real and imaginary axes.

If $c$ is purely real, then $cz = c(x, y) = (cx, cy)$. Thus, with respect to addition and scalar multiplication by real scalars, the complex plane is just like the usual plane $\mathbb{R}^2$. However, in the complex plane, we can also multiply by complex scalars. This has a natural geometrical interpretation, which we shall see in the next section.

EXERCISE 4 Plot the following complex numbers in the complex plane.

(a) $2 + i$  (b) $\overline{2 + i}$  (c) $(2 + i)i$  (d) $(2 + i)(1 + i)$

Polar Form

Definition
Modulus
Argument
Polar Form

Given a complex number $z = x + yi$, the real number

$$|z| = r = \sqrt{x^2 + y^2}$$

is called the modulus of $z$. If $|z| \neq 0$, let $\theta$ be the angle measured counterclockwise from the positive $x$-axis such that

$$x = r\cos\theta \quad \text{and} \quad y = r\sin\theta$$



The angle $\theta$ is unique up to a multiple of $2\pi$, and it is called an argument of $z$. As shown in Figure 9.1.2, a polar form of $z$ is

$$z = r(\cos\theta + i\sin\theta)$$

Figure 9.1.2 Polar representation of $z = x + iy = r\cos\theta + ir\sin\theta$.

EXAMPLE 5 Determine the modulus, an argument, and a polar form of $z_1 = 2 - 2i$ and $z_2 = -1 + \sqrt{3}i$.

Solution: We have

$$|z_1| = |2 - 2i| = \sqrt{2^2 + 2^2} = 2\sqrt{2}$$

Any argument $\theta$ satisfies

$$2 = 2\sqrt{2}\cos\theta \quad \text{and} \quad -2 = 2\sqrt{2}\sin\theta$$

so $\cos\theta = \frac{1}{\sqrt{2}}$ and $\sin\theta = -\frac{1}{\sqrt{2}}$, which gives $\theta = -\frac{\pi}{4} + 2\pi k$, $k \in \mathbb{Z}$. Hence, a polar form of $z_1$ is

$$z_1 = 2\sqrt{2}\left(\cos\left(-\frac{\pi}{4}\right) + i\sin\left(-\frac{\pi}{4}\right)\right)$$

For $z_2$, we have

$$|z_2| = |-1 + \sqrt{3}i| = \sqrt{(-1)^2 + (\sqrt{3})^2} = 2$$

Since $-1 = 2\cos\theta$ and $\sqrt{3} = 2\sin\theta$, we get $\theta = \frac{2\pi}{3} + 2\pi k$, $k \in \mathbb{Z}$. Thus, a polar form of $z_2$ is

$$z_2 = 2\left(\cos\left(\frac{2\pi}{3}\right) + i\sin\left(\frac{2\pi}{3}\right)\right)$$
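The standard library's `cmath` module computes the modulus and an argument directly; `cmath.polar` returns the pair $(r, \theta)$ with $\theta \in (-\pi, \pi]$, so it reproduces Example 5:

```python
import cmath
import math

# cmath.polar(z) returns (|z|, theta) with theta in (-pi, pi].
r1, t1 = cmath.polar(2 - 2j)
assert math.isclose(r1, 2 * math.sqrt(2))
assert math.isclose(t1, -math.pi / 4)

r2, t2 = cmath.polar(-1 + math.sqrt(3) * 1j)
assert math.isclose(r2, 2)
assert math.isclose(t2, 2 * math.pi / 3)

# cmath.rect converts a polar form back to standard form.
assert cmath.isclose(cmath.rect(r1, t1), 2 - 2j)
```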

Remarks

1. An important consequence of the definition is

$$|z|^2 = x^2 + y^2 = z\bar{z}$$

2. The angles may be measured in radians or degrees. We will always use radians.

3. Notice that every complex number $z$ has infinitely many arguments and hence infinitely many polar forms.

4. It is tempting but incorrect to write $\theta = \arctan(y/x)$. Remember that you need two trigonometric functions to locate the correct quadrant for $z$. Also note that $y/x$ is not defined if $x = 0$.

EXERCISE 5 Determine the modulus, an argument, and a polar form of $z_1 = -\sqrt{3} + i$ and $z_2 = -1 - i$.

EXERCISE 6 Let $z = r(\cos\theta + i\sin\theta)$. Prove that the modulus of $\bar{z}$ equals the modulus of $z$ and an argument of $\bar{z}$ is $-\theta$.

The polar form is particularly convenient for multiplication and division because of the trigonometric identities

$$\cos(\theta_1 + \theta_2) = \cos\theta_1\cos\theta_2 - \sin\theta_1\sin\theta_2$$
$$\sin(\theta_1 + \theta_2) = \sin\theta_1\cos\theta_2 + \cos\theta_1\sin\theta_2$$

It follows that

$$z_1 z_2 = r_1(\cos\theta_1 + i\sin\theta_1)\,r_2(\cos\theta_2 + i\sin\theta_2)$$
$$= r_1 r_2\big((\cos\theta_1\cos\theta_2 - \sin\theta_1\sin\theta_2) + i(\cos\theta_1\sin\theta_2 + \sin\theta_1\cos\theta_2)\big)$$
$$= r_1 r_2\big(\cos(\theta_1 + \theta_2) + i\sin(\theta_1 + \theta_2)\big)$$

In words, the modulus of a product is the product of the moduli of the factors, while an argument of a product is the sum of the arguments.
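This product rule is easy to confirm numerically, here using the two numbers from Example 5:

```python
import cmath
import math

z1 = 2 - 2j                       # modulus 2*sqrt(2), argument -pi/4
z2 = -1 + math.sqrt(3) * 1j       # modulus 2, argument 2*pi/3

r1, t1 = cmath.polar(z1)
r2, t2 = cmath.polar(z2)
r, t = cmath.polar(z1 * z2)

# Moduli multiply ...
assert math.isclose(r, r1 * r2)

# ... and arguments add (up to a multiple of 2*pi).
diff = (t - (t1 + t2)) % (2 * math.pi)
assert min(diff, 2 * math.pi - diff) < 1e-12
print(r, t)
```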

Theorem 3 For any complex numbers $z_1 = r_1(\cos\theta_1 + i\sin\theta_1)$ and $z_2 = r_2(\cos\theta_2 + i\sin\theta_2)$, with $z_2 \neq 0$, we have

$$\frac{z_1}{z_2} = \frac{r_1}{r_2}\big(\cos(\theta_1 - \theta_2) + i\sin(\theta_1 - \theta_2)\big)$$

The proof is left for you to complete in Problem D4.

Corollary 4 Let $z = r(\cos\theta + i\sin\theta)$ with $r \neq 0$. Then $z^{-1} = \frac{1}{r}\big(\cos(-\theta) + i\sin(-\theta)\big)$.

EXERCISE 7 Describe Theorem 3 in words and use it to prove Corollary 4.



EXAMPLE 6 Calculate $(1 - i)(-\sqrt{3} + i)$ and $\dfrac{2 + 2i}{1 + \sqrt{3}i}$ using polar form.

Solution: We have

$$(1 - i)(-\sqrt{3} + i) = \sqrt{2}\left(\cos\left(-\frac{\pi}{4}\right) + i\sin\left(-\frac{\pi}{4}\right)\right) \cdot 2\left(\cos\left(\frac{5\pi}{6}\right) + i\sin\left(\frac{5\pi}{6}\right)\right)$$
$$= 2\sqrt{2}\left(\cos\left(\frac{7\pi}{12}\right) + i\sin\left(\frac{7\pi}{12}\right)\right) \approx -0.732 + 2.732i$$

$$\frac{2 + 2i}{1 + \sqrt{3}i} = \frac{2\sqrt{2}\left(\cos\left(\frac{\pi}{4}\right) + i\sin\left(\frac{\pi}{4}\right)\right)}{2\left(\cos\left(\frac{\pi}{3}\right) + i\sin\left(\frac{\pi}{3}\right)\right)} = \sqrt{2}\left(\cos\left(-\frac{\pi}{12}\right) + i\sin\left(-\frac{\pi}{12}\right)\right) \approx 1.366 - 0.366i$$

EXERCISE 8 Calculate $(2 - 2i)(-1 + \sqrt{3}i)$ and $\dfrac{2 - 2i}{-1 + \sqrt{3}i}$ using polar form.

Powers and the Complex Exponential


From the rule for products, we find that

$$z^2 = r^2(\cos 2\theta + i\sin 2\theta)$$

Then

$$z^3 = z^2 z = r^2 r\big(\cos(2\theta + \theta) + i\sin(2\theta + \theta)\big) = r^3(\cos 3\theta + i\sin 3\theta)$$

Theorem 5 [de Moivre's Formula]

Let $z = r(\cos\theta + i\sin\theta)$ with $r \neq 0$. Then, for any integer $n$, we have

$$z^n = r^n(\cos n\theta + i\sin n\theta)$$

Proof: For $n = 0$, we have $z^0 = 1 = r^0(\cos 0 + i\sin 0)$. To prove that the theorem holds for positive integers, we proceed by induction. Assume that the result is true for some integer $k \geq 0$. Then

$$z^{k+1} = z^k z = r^k r\big[\cos(k\theta + \theta) + i\sin(k\theta + \theta)\big] = r^{k+1}\big[\cos((k+1)\theta) + i\sin((k+1)\theta)\big]$$

Therefore, the result is true for all non-negative integers $n$. Then, by Corollary 4, for any positive integer $m$, we have

$$z^{-m} = (z^m)^{-1} = \big(r^m(\cos m\theta + i\sin m\theta)\big)^{-1} = r^{-m}\big(\cos(-m\theta) + i\sin(-m\theta)\big)$$

Hence, the result also holds for all negative integers $n = -m$. ∎

EXAMPLE 7 Calculate $(2 + 2i)^3$.

Solution:
$$(2 + 2i)^3 = \left[2\sqrt{2}\left(\cos\frac{\pi}{4} + i\sin\frac{\pi}{4}\right)\right]^3 = (2\sqrt{2})^3\left(\cos\frac{3\pi}{4} + i\sin\frac{3\pi}{4}\right) = 16\sqrt{2}\left(-\frac{1}{\sqrt{2}} + i\frac{1}{\sqrt{2}}\right) = -16 + 16i$$
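de Moivre's Formula can be checked against direct exponentiation; a small sketch:

```python
import cmath
import math

z = 2 + 2j
r, theta = cmath.polar(z)

# de Moivre: z^n = r^n (cos(n*theta) + i sin(n*theta)).
n = 3
w = (r ** n) * complex(math.cos(n * theta), math.sin(n * theta))

assert cmath.isclose(w, z ** 3)
assert abs(z ** 3 - (-16 + 16j)) < 1e-9
print(w)
```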

In the case where $r = 1$, de Moivre's Formula reduces to

$$(\cos\theta + i\sin\theta)^n = \cos n\theta + i\sin n\theta$$

This is formally just like one of the exponential laws, $(e^a)^n = e^{an}$. We use this idea to define $e^z$ for any $z \in \mathbb{C}$, where $e$ is the usual natural base for exponentials ($e \approx 2.71828$). We begin with a useful formula of Euler.

Definition
Euler's Formula
$$e^{i\theta} = \cos\theta + i\sin\theta$$

Definition For any complex number $z = x + iy$, we define

$$e^z = e^{x+iy} = e^x(\cos y + i\sin y)$$

Remarks

1. One interesting consequence of Euler's Formula is that

$$e^{i\pi} + 1 = 0$$

In one formula, we have five of the most important numbers in mathematics: 0, 1, $e$, $i$, and $\pi$.

2. One area where Euler's Formula has important applications is ordinary differential equations. There, one often uses the fact that

$$e^{(a+bi)t} = e^{at}e^{ibt} = e^{at}(\cos bt + i\sin bt)$$
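Both Euler's identity and the formula $e^{i\theta} = \cos\theta + i\sin\theta$ can be verified with `cmath` (up to floating-point rounding):

```python
import cmath
import math

# Euler's identity: e^{i*pi} + 1 = 0 (up to floating-point rounding).
print(cmath.exp(1j * math.pi))
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-15

# e^{i*theta} = cos(theta) + i*sin(theta) for an arbitrary sample angle.
theta = 0.7
assert cmath.isclose(cmath.exp(1j * theta),
                     complex(math.cos(theta), math.sin(theta)))

# e^{(a+bi)t} = e^{at}(cos(bt) + i*sin(bt)), as used in differential equations.
a, b, t = 0.3, 2.0, 1.5
lhs = cmath.exp((a + b * 1j) * t)
rhs = math.exp(a * t) * complex(math.cos(b * t), math.sin(b * t))
assert cmath.isclose(lhs, rhs)
```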

Observe that Euler's Formula allows us to write every complex number $z$ in the form

$$z = re^{i\theta}$$

where $r = |z|$ and $\theta$ is any argument of $z$. Hence, in this form, de Moivre's Formula becomes

$$z^n = r^n e^{in\theta}$$
EXAMPLE 8 Calculate the following using the polar form.

(a) $(2 + 2i)^3$

Solution: $(2 + 2i)^3 = \left(2\sqrt{2}\,e^{i\pi/4}\right)^3 = (2\sqrt{2})^3 e^{i(3\pi/4)} = -16 + 16i$

(b) $(2i)^3$

Solution: $(2i)^3 = \left(2e^{i\pi/2}\right)^3 = 2^3 e^{i(3\pi/2)} = -8i$

(c) $(\sqrt{3} + i)^5$

Solution: $(\sqrt{3} + i)^5 = \left(2e^{i\pi/6}\right)^5 = 2^5 e^{i(5\pi/6)} = -16\sqrt{3} + 16i$

EXERCISE 9 Use polar form to calculate $(1 - i)^5$ and $(-1 - \sqrt{3}i)^5$.

n-th Roots
Using de Moivre's Formula for $n$-th powers is the key to finding $n$-th roots. Suppose that we need to find the $n$-th root of the non-zero complex number $z = re^{i\theta}$. That is, we need a number $w$ such that $w^n = z$. Suppose that $w = Re^{i\phi}$. Then $w^n = z$ implies that

$$R^n e^{in\phi} = re^{i\theta}$$

Then $R$ is the real $n$-th root of the positive real number $r$. However, because arguments of complex numbers are determined only up to the addition of $2\pi k$, all we can say about $\phi$ is that

$$n\phi = \theta + 2\pi k, \quad k \in \mathbb{Z}$$

or

$$\phi = \frac{\theta + 2\pi k}{n}, \quad k \in \mathbb{Z}$$

EXAMPLE 9 Find all the cube roots of 8.

Solution: We have $8 = 8e^{i(0 + 2\pi k)}$, $k \in \mathbb{Z}$. Thus, for any $k \in \mathbb{Z}$,

$$w = 8^{1/3}e^{i(0 + 2\pi k)/3} = 2e^{i2\pi k/3}$$

If $k = 0$, we have the root $w_0 = 2e^{0} = 2$.
If $k = 1$, we have the root $w_1 = 2e^{i2\pi/3} = -1 + \sqrt{3}i$.
If $k = 2$, we have the root $w_2 = 2e^{i4\pi/3} = -1 - \sqrt{3}i$.
If $k = 3$, we have the root $2e^{i2\pi} = 2 = w_0$.

By increasing $k$ further, we simply repeat the roots we have already found. Similarly, consideration of negative $k$ gives us no further roots. The number 8 has three third roots, $w_0$, $w_1$, and $w_2$. In particular, these are the roots of the equation $w^3 - 8 = 0$.

Theorem 6 Let $z$ be a non-zero complex number. Then the $n$ distinct $n$-th roots of $z = re^{i\theta}$ are

$$w_k = r^{1/n} e^{i(\theta + 2\pi k)/n}, \quad k = 0, 1, \ldots, n - 1$$
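Theorem 6 translates directly into a short routine; `nth_roots` below is a hypothetical helper name (not from the text), and it reproduces Example 9:

```python
import cmath
import math

def nth_roots(z, n):
    """Return the n distinct n-th roots of a non-zero complex number z."""
    r, theta = cmath.polar(z)
    return [r ** (1 / n) * cmath.exp(1j * (theta + 2 * math.pi * k) / n)
            for k in range(n)]

# The three cube roots of 8, as in Example 9.
roots = nth_roots(8, 3)
expected = [2, -1 + math.sqrt(3) * 1j, -1 - math.sqrt(3) * 1j]
for w, e in zip(roots, expected):
    assert cmath.isclose(w, e)
    assert cmath.isclose(w ** 3, 8)
print(roots)
```

Each returned root has modulus $r^{1/n}$, so the roots all lie on one circle and are separated by equal angles of $2\pi/n$.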

EXAMPLE 10 Find the fourth roots of $-81$.

Solution: We have $-81 = 81e^{i(\pi + 2\pi k)}$. Thus, the fourth roots are

$$(81)^{1/4} e^{i(\pi + 2\pi k)/4} = 3e^{i(\pi + 2\pi k)/4}, \quad k = 0, 1, 2, 3$$

In Examples 9 and 10, we took roots of numbers that were purely real: we were really solving $x^n - a = 0$, where $a \in \mathbb{R}$. By our earlier theorem, when the coefficients of the equations are real, the roots that are not real occur in complex conjugate pairs. As a contrast, let us consider roots of a number that is not real.

EXAMPLE 11 Find the third roots of $5i$ and illustrate in an Argand diagram.

Solution: $5i = 5e^{i(\pi/2 + 2k\pi)}$, so the cube roots are

$$w_0 = 5^{1/3}e^{i\pi/6} = 5^{1/3}\left(\frac{\sqrt{3}}{2} + i\frac{1}{2}\right)$$
$$w_1 = 5^{1/3}e^{i5\pi/6} = 5^{1/3}\left(-\frac{\sqrt{3}}{2} + i\frac{1}{2}\right)$$
$$w_2 = 5^{1/3}e^{i3\pi/2} = 5^{1/3}(-i)$$

Plotting these roots shows that all three are points on the circle of radius $5^{1/3}$ centred at the origin and that they are separated by equal angles of $\frac{2\pi}{3}$.

Examples 9, 10, and 11 all illustrate a general rule: the $n$-th roots of a complex number $z = re^{i\theta}$ all lie on the circle of radius $r^{1/n}$, and they are separated by equal angles of $2\pi/n$.

PROBLEMS 9.1
Practice Problems

A1 Determine the following sums or differences.
(a) $(2 + 5i) + (3 + 2i)$
(b) $(2 - 7i) + (-5 + 3i)$
(c) $(-3 + 5i) - (4 + 3i)$
(d) $(-5 - 6i) - (9 - 11i)$

A2 Express the following products in standard form.
(a) $(1 + 3i)(3 - 2i)$
(b) $(-2 - 4i)(3 - i)$
(c) $(1 - 6i)(-4 + i)$
(d) $(-1 - i)(1 - i)$

A3 Determine the complex conjugates of the following numbers.
(a) $3 - 5i$  (b) $2 + 7i$  (c) $3$  (d) $-4i$

A4 Determine the real and imaginary parts of the following.
(a) $z = 3 - 6i$  (b) $z = (2 + 5i)(1 - 3i)$
(c) $z = \dfrac{4}{6 - i}$  (d) $z = \dfrac{-1}{i}$

A5 Express the following quotients in standard form.
(a) $\dfrac{1}{2 + 3i}$  (b) $\dfrac{3}{2 - 7i}$
(c) $\dfrac{2 - 5i}{3 + 2i}$  (d) $\dfrac{1 + 6i}{4 - i}$

A6 Use polar form to determine $z_1 z_2$ and $\dfrac{z_1}{z_2}$ if
(a) $z_1 = 1 + i$, $z_2 = 1 + \sqrt{3}i$
(b) $z_1 = -\sqrt{3} - i$, $z_2 = 1 - i$
(c) $z_1 = 1 + 2i$, $z_2 = -2 - 3i$
(d) $z_1 = -3 + i$, $z_2 = 6 - i$

A7 Use polar form to determine the following.
(a) $(1 + i)^4$  (b) $(3 - 3i)^3$
(c) $(-1 - \sqrt{3}i)^4$  (d) $(-2\sqrt{3} + 2i)^5$

A8 Use polar form to determine all the indicated roots.
(a) $(-1)^{1/5}$  (b) $(-16i)^{1/4}$
(c) $(-\sqrt{3} - i)^{1/3}$  (d) $(1 + 4i)^{1/3}$

Homework Problems

B1 Determine the following sums or differences.
(a) $(3 + 4i) + (1 + 5i)$
(b) $(3 - 2i) + (-7 + 6i)$
(c) $(-5 + 7i) - (2 + 6i)$
(d) $(-7 - 2i) - (-8 - 9i)$

B2 Express the following products in standard form.
(a) $(2 + i)(5 - 3i)$
(b) $(-3 - 2i)(5 - 2i)$
(c) $(3 - 5i)(-1 + 6i)$
(d) $(-3 - i)(3 - i)$

B3 Determine the complex conjugates of the following numbers.
(a) $2i$  (b) $17$  (c) $4 - 8i$  (d) $5 + 11i$

B4 Determine the real and imaginary parts of the following.
(a) $4 - 7i$  (b) $(3 + 2i)(2 - 3i)$
(c) $\dfrac{5}{4 - i}$  (d) $\dfrac{1 - 2i}{1 + i}$

B5 Express the following quotients in standard form.
(a) $\dfrac{1}{3 + 4i}$  (b) $\dfrac{2}{3 - 5i}$
(c) $\dfrac{1 - 4i}{3 + 5i}$  (d) $\dfrac{1 + 4i}{4 - 5i}$

B6 Use polar form to determine $z_1 z_2$ and $\dfrac{z_1}{z_2}$ if
(a) $z_1 = 1 - \sqrt{3}i$, $z_2 = -1 + i$
(b) $z_1 = -\sqrt{3} + i$, $z_2 = -3 - 3i$
(c) $z_1 = 1 + 3i$, $z_2 = -1 - 2i$
(d) $z_1 = -2 + i$, $z_2 = 4 - i$

B7 Use polar form to determine the following.
(a) $(1 + \sqrt{3}i)^4$  (b) $(-2 - 2i)^3$
(c) $(-\sqrt{3} - i)^4$  (d) $(-2 + 2\sqrt{3}i)^5$

B8 Use polar form to determine all the indicated roots.
(a) $(32)^{1/5}$  (b) $(81i)^{1/5}$
(c) $(-\sqrt{3} + i)^{1/3}$  (d) $(4 + i)^{1/3}$

Conceptual Problems

D1 Prove properties (3), (5), (6), (7), (8), and (9) of Theorem 1.

D2 If $z = r(\cos\theta + i\sin\theta)$, what is $|z|$? What is an argument of $z$?

D3 Use Euler's Formula to show that
(a) $\overline{e^{i\theta}} = e^{-i\theta}$
(b) $\cos\theta = \frac{1}{2}\left(e^{i\theta} + e^{-i\theta}\right)$
(c) $\sin\theta = \frac{1}{2i}\left(e^{i\theta} - e^{-i\theta}\right)$

D4 Prove Theorem 3.

9.2 Systems with Complex Numbers


In some applications, it is necessary to consider systems of linear equations with com­
plex coefficients and complex right-hand sides. One physical application, discussed
later in this section, is the problem of determining currents in electrical circuits with
capacitors and inductive coils as well as resistance. We can solve systems with com­
plex coefficients by using exactly the same elimination/row reduction procedures as
for systems with real coefficients. Of course, our solutions will be complex, and any
free variables will be allowed to take any complex value.

EXAMPLE 1 Solve the system of linear equations

$$\begin{aligned} z_1 + z_2 + z_3 &= 0 \\ (1-i)z_1 + z_2 &= i \\ (3-i)z_1 + 2z_2 + z_3 &= 1 + 2i \end{aligned}$$

Solution: The solution procedure is, as usual, to write the augmented matrix for the system and row reduce the coefficient matrix to row echelon form:

$$\left[\begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 1-i & 1 & 0 & i \\ 3-i & 2 & 1 & 1+2i \end{array}\right] \begin{matrix} \\ R_2 - (1-i)R_1 \\ R_3 - (3-i)R_1 \end{matrix} \sim \left[\begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 0 & i & -1+i & i \\ 0 & -1+i & -2+i & 1+2i \end{array}\right] \begin{matrix} \\ -iR_2 \\ \\ \end{matrix} \sim$$

$$\left[\begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 0 & 1 & 1+i & 1 \\ 0 & -1+i & -2+i & 1+2i \end{array}\right] \begin{matrix} R_1 - R_2 \\ \\ R_3 + (1-i)R_2 \end{matrix} \sim \left[\begin{array}{ccc|c} 1 & 0 & -i & -1 \\ 0 & 1 & 1+i & 1 \\ 0 & 0 & i & 2+i \end{array}\right] \begin{matrix} \\ \\ -iR_3 \end{matrix} \sim$$

$$\left[\begin{array}{ccc|c} 1 & 0 & -i & -1 \\ 0 & 1 & 1+i & 1 \\ 0 & 0 & 1 & 1-2i \end{array}\right] \begin{matrix} R_1 + iR_3 \\ R_2 - (1+i)R_3 \\ \\ \end{matrix} \sim \left[\begin{array}{ccc|c} 1 & 0 & 0 & 1+i \\ 0 & 1 & 0 & -2+i \\ 0 & 0 & 1 & 1-2i \end{array}\right]$$

Hence, the solution is $\vec{z} = \begin{bmatrix} 1+i \\ -2+i \\ 1-2i \end{bmatrix}$.
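Taking the system's second equation to be $(1-i)z_1 + z_2 = i$ (consistent with the row operations shown), the same solution can be obtained numerically; `numpy.linalg.solve` accepts complex matrices:

```python
import numpy as np

# Coefficient matrix and right-hand side of Example 1 (complex entries).
A = np.array([[1,      1, 1],
              [1 - 1j, 1, 0],
              [3 - 1j, 2, 1]])
b = np.array([0, 1j, 1 + 2j])

z = np.linalg.solve(A, b)
print(z)

# Agrees with the row reduction: z1 = 1+i, z2 = -2+i, z3 = 1-2i.
assert np.allclose(z, [1 + 1j, -2 + 1j, 1 - 2j])
assert np.allclose(A @ z, b)
```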

EXAMPLE 2 Solve the system

$$\begin{aligned} (1+i)z_1 + 2iz_2 &= 1 \\ (1+i)z_2 + z_3 &= \tfrac{1}{2} - \tfrac{1}{2}i \\ z_1 - z_3 &= 0 \end{aligned}$$

Solution: Row reducing the augmented matrix gives

$$\left[\begin{array}{ccc|c} 1+i & 2i & 0 & 1 \\ 0 & 1+i & 1 & \tfrac{1}{2} - \tfrac{1}{2}i \\ 1 & 0 & -1 & 0 \end{array}\right] \sim \left[\begin{array}{ccc|c} 1 & 0 & -1 & 0 \\ 0 & 1 & \tfrac{1-i}{2} & -\tfrac{i}{2} \\ 0 & 0 & 0 & 0 \end{array}\right]$$

Hence, $z_3$ is a free variable, so we let $z_3 = t \in \mathbb{C}$. Then $z_1 = z_3 = t$, $z_2 = -\tfrac{i}{2} - \tfrac{1-i}{2}t$, and the general solution is

$$\vec{z} = \begin{bmatrix} 0 \\ -\tfrac{i}{2} \\ 0 \end{bmatrix} + t\begin{bmatrix} 1 \\ -\tfrac{1-i}{2} \\ 1 \end{bmatrix}, \quad t \in \mathbb{C}$$

EXERCISE 1 Solve the system

iz1 + z2 + 3z3 = -1 - 2i
iz1 + iz2 + (1 + 2i)z3 = 2 + i
2z1 + (1 + i)z2 + 2z3 = 5 -i

Complex Numbers in Electrical Circuit Equations


This application requires some knowledge of calculus and physics. It can be omitted
with no loss of continuity.
For purposes of the following discussion only, we switch to a notation commonly used by engineers and physicists and denote by $j$ the complex number such that $j^2 = -1$, so that we can use $i$ to denote current.
In Section 2.4, we discussed electrical circuits with resistors. We now also con­
sider capacitors and inductors, as well as alternating current. A simple capacitor can be
thought of as two conducting plates separated by a vacuum or some dielectric. Charge
can be stored on these plates, and it is found that the voltage across a capacitor at
time t is proportional to the charge stored at that time:

$$V(t) = \frac{Q(t)}{C}$$
where Q is the charge and the constant C is called the capacitance of the capacitor.

The usual model of an inductor is a coil; because of magnetic effects, it is found


that with time-varying current i(t), the voltage across an inductor is proportional to the
rate of change of current:
$$V(t) = L\,\frac{di(t)}{dt}$$
where the constant of proportionality L is called the inductance.
As in the case of the resistor circuits, Kirchhoff's voltage law applies: the sum of the
voltage drops across the circuit elements must be equal to the applied electromotive
force (voltage). Thus, for a simple loop with inductance L , capacitance C, resistance R,
and applied electromotive force E(t) (Figure 9.2.3), the circuit equation is

$$L\,\frac{di(t)}{dt} + R\,i(t) + \frac{1}{C}Q(t) = E(t)$$


Figure 9.2.3 Kirchhoff's voltage law applied to an alternating current circuit.

For our purposes, it is easier to work with the derivative of this equation and use the fact that dQ/dt = i:
dt

$$L\,\frac{d^2 i(t)}{dt^2} + R\,\frac{di(t)}{dt} + \frac{1}{C}i(t) = \frac{dE(t)}{dt}$$

In general, the solution to such an equation will involve the superposition (sum)
of a steady-state solution and a transient solution. Here we will be looking only for the
steady-state solution, in the special case where the applied electromotive force, and
hence any current, is a single-frequency sinusoidal function. Thus, we can assume that

E(t) = Be^{jωt} and i(t) = Ae^{jωt}

where A and B are complex numbers that determine the amplitudes and phases of
voltage and current, and ω is 2π times the frequency. Then

$$\frac{di}{dt} = j\omega A e^{j\omega t} = j\omega i$$

$$\frac{d^2 i}{dt^2} = (j\omega)^2 i = -\omega^2 i$$

and the circuit equation can be rewritten

$$-\omega^2 L\,i + j\omega R\,i + \frac{1}{C}i = \frac{dE}{dt}$$
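Dividing out the common factor e^{jωt} turns this differential equation into a single algebraic equation for the phasor amplitude: (-ω²L + jωR + 1/C)A = jωB. A small sketch of solving it (the component values below are hypothetical, chosen only for illustration):

```python
import cmath

def steady_state_amplitude(B, omega, L, R, C):
    """Phasor amplitude A of the loop current when E(t) = B e^{j w t}.

    Solves (-w^2 L + j w R + 1/C) A = j w B for A.
    """
    return 1j * omega * B / (-omega**2 * L + 1j * omega * R + 1 / C)

# Hypothetical values: R = 2 ohms, L = 0.5 H, C = 0.25 F, B = 10 V, w = 4 rad/s
A = steady_state_amplitude(10.0, 4.0, 0.5, 2.0, 0.25)
magnitude, phase = abs(A), cmath.phase(A)  # amplitude and phase of i(t)
```

The modulus and argument of the complex amplitude A give the amplitude and phase shift of the steady-state current.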

Now consider a network of circuits with resistors, capacitors, inductors, electromotive


force, and currents, as shown in Figure 9.2.4. As in Section 2.4, the currents are loops,
so that the actual current across some circuit elements is the difference of two loop
currents. (For example, across R1, the actual current is i1 - i2.) From our assumption
that we have only one single frequency source, we may conclude that the steady-state
loop currents must be of the form i_k(t) = A_k e^{jωt}, k = 1, 2, 3.

Figure 9.2.4 An alternating current network.

By applying Kirchhoff's laws to the top-left loop, writing the corresponding equations for the other two loops, reorganizing each equation, and dividing out the non-zero common factor e^{jωt}, we obtain the following system of linear equations for the three variables A1, A2, A3:

$$\left(-\omega^2 L_1 + \frac{1}{C_1} + j\omega R_1\right)A_1 - j\omega R_1 A_2 + \omega^2 L_1 A_3 = -j\omega B$$

$$-j\omega R_1 A_1 + \left(-\omega^2 L_2 + \frac{1}{C_2} + j\omega (R_1 + R_2 + R_3)\right)A_2 - j\omega R_3 A_3 = j\omega B$$

$$\omega^2 L_1 A_1 - j\omega R_3 A_2 + \left(-\omega^2 (L_1 + L_3) + \frac{1}{C_3} + j\omega R_3\right)A_3 = 0$$

Thus, we have a system of three linear equations with complex coefficients for the
three variables A1, A2, A3. We can solve this system by standard elimination. We
emphasize that this example is for illustrative purposes only: we have constructed a
completely arbitrary network and provided the solution method for only part of the
problem, in a special case. A much more extensive discussion is required before a
reader will be ready to start examining realistic circuits to discover what they can do.
But even this limited example illustrates the general point that to analyze some electri­
cal networks, we need to solve systems of linear equations with complex coefficients.
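The three-loop phasor system above is an ordinary complex linear system. As a sketch (the component values are hypothetical — the text's network is illustrative only), it can be assembled and solved numerically:

```python
import numpy as np

# Hypothetical component values for the three-loop network
L1, L2, L3 = 0.1, 0.2, 0.05        # inductances (H)
C1, C2, C3 = 1e-3, 2e-3, 5e-4      # capacitances (F)
R1, R2, R3 = 10.0, 5.0, 20.0       # resistances (ohms)
B, w = 12.0, 100.0                 # source amplitude (V), angular frequency (rad/s)

M = np.array([
    [-w**2*L1 + 1/C1 + 1j*w*R1, -1j*w*R1, w**2*L1],
    [-1j*w*R1, -w**2*L2 + 1/C2 + 1j*w*(R1 + R2 + R3), -1j*w*R3],
    [w**2*L1, -1j*w*R3, -w**2*(L1 + L3) + 1/C3 + 1j*w*R3],
])
rhs = np.array([-1j*w*B, 1j*w*B, 0], dtype=complex)

A1, A2, A3 = np.linalg.solve(M, rhs)  # complex phasor amplitudes
```

The moduli |A_k| are the loop-current amplitudes and the arguments are their phase shifts relative to the source.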

PROBLEMS 9.2
Practice Problems

Al Determine whether each system is consistent, and if it is, determine the general solution.

(a) z1 + iz2 + (1 + i)z3 = 1 + i
    -2z1 + (1 - 2i)z2 - 2z3 = 2i
    2iz1 - 2z2 - (2 + 3i)z3 = -1 - 3i

(b) z1 + (1 + i)z2 + 2z3 + z4 = 1 + i
    2z1 + (2 + i)z2 + 5z3 + (2 + i)z4 = 4 - i
    iz1 + (1 + i)z2 + (1 + 2i)z3 + 2iz4 = -1

Homework Problems

Bl Determine whether each system is consistent, and if it is, determine the general solution.

(a) z2 - iz3 = 1 - 3i
    iz1 + 2z2 - (3 + 3i)z3 = 1 - 2i
    2z1 + 2iz2 + (3 + 2i)z3 = 4

(b) z1 + (2 + i)z2 + iz3 = 1 - i
    iz1 + (-1 + 2i)z2 + 2iz4 = -i
    2z1 + 2z2 + (2 + 2i)z3 + 2z4 = 0
    z1 + (2 + i)z2 + (1 + i)z3 + 2iz4 = 2 - i

(c) iz1 + 2z2 - (3 + i)z3 = 1
    (1 + i)z1 + (2 - 2i)z2 - 4z3 = i
    iz1 - z2 + (-1 + i)z3 = 1 - 2i

(d) z1 + z2 + iz3 + (1 + i)z4 = 0
    iz1 + iz2 + (-1 - i)z3 + (2 + i)z4 = 0

9.3 Vector Spaces over C


The definition of a vector space in Section 4.2 is given in the case where the scalars are
real numbers. In fact, the definition makes sense when the scalars are taken from any
one system of numbers such that addition, subtraction, multiplication, and division
are defined for any pairs of numbers (excluding division by 0) and satisfy the usual
commutative, associative, and distributive rules for doing arithmetic. Thus, the vector
space axioms make sense if we allow the scalars to be the set of complex numbers. In
such cases, we say that we have a vector space over C, or a complex vector space.

EXERCISE 1  Let $\mathbb{C}^2 = \left\{ \begin{bmatrix} z_1 \\ z_2 \end{bmatrix} : z_1, z_2 \in \mathbb{C} \right\}$, with addition defined by

$$\begin{bmatrix} z_1 \\ z_2 \end{bmatrix} + \begin{bmatrix} w_1 \\ w_2 \end{bmatrix} = \begin{bmatrix} z_1 + w_1 \\ z_2 + w_2 \end{bmatrix}$$

and scalar multiplication by α ∈ ℂ defined by

$$\alpha \begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = \begin{bmatrix} \alpha z_1 \\ \alpha z_2 \end{bmatrix}$$

Show that ℂ² is a vector space over ℂ.



It is instructive to look carefully at the ideas of basis and dimension for complex
vector spaces. We begin by considering the set of complex numbers C itself as a vector
space.
As a vector space over the complex numbers, C has a basis consisting of a single
element, {1 }. That is, every complex number can be written in the form a1, where a is a
complex number. Thus, with respect to this basis, the coordinate of the complex num­
ber z is z itself. Alternatively, we could choose to use the basis {i}. Then the coordinate
of z would be -iz since
z=(-iz)i

In either case, we see that C has a basis consisting of one element, so IC is a one­
dimensional complex vector space.
Another way of looking at this is to observe that when we use complex scalars, any
two non-zero elements of the space ℂ are linearly dependent. That is, given z1, z2 ∈ ℂ, there exist complex scalars α1, α2, not both zero, such that

α1 z1 + α2 z2 = 0

For example, we may take α1 = 1 and α2 = -z1/z2, since we have assumed that z2 ≠ 0.

It follows that with respect to complex scalars, a basis for ℂ must contain fewer than two elements.
However, we could also view ℂ as a vector space over ℝ. Addition of complex
numbers is defined as usual, and multiplication of z = x + iy by a real scalar k gives

kz= kx + kyi

Observe that if we use real scalars, then the elements 1 and i in C are linearly in­
dependent. Hence, viewed as a vector space over ℝ, the set of complex numbers is two-dimensional, with "standard" basis {1, i}.
As we saw in Section 9.2, we sometimes write complex numbers in a way that
exhibits the property that ℂ is a two-dimensional real vector space: we write a complex
number z in the form

z = x + iy = (x,y) = x(l,0) + y(O, 1)

Note that (1, 0) denotes the complex number 1 and that (0, 1) denotes the complex
number i. With this notation, we see that the set of complex numbers is isomorphic to ℝ², which justifies our work with the complex plane in Section 9.1. However, notice
that this representation of the complex numbers as a real vector space does not include
multiplication by complex scalars.
Using arguments similar to those above, we see that ℂ² is a two-dimensional complex vector space, but it can be viewed as a real vector space of dimension four.

Definition  The vector space ℂⁿ is defined to be the set

$$\mathbb{C}^n = \left\{ \begin{bmatrix} z_1 \\ \vdots \\ z_n \end{bmatrix} : z_1, \ldots, z_n \in \mathbb{C} \right\}$$

with addition of vectors and scalar multiplication defined as above.



Remark

These vector spaces play an important role in much of modem mathematics.

Since the complex conjugate is so useful in ℂ, we extend the definition of the complex conjugate to vectors in ℂⁿ.

Definition
Complex Conjugate
The complex conjugate of $\vec{z} = \begin{bmatrix} z_1 \\ \vdots \\ z_n \end{bmatrix} \in \mathbb{C}^n$ is defined to be $\bar{\vec{z}} = \begin{bmatrix} \bar{z}_1 \\ \vdots \\ \bar{z}_n \end{bmatrix}$.
EXAMPLE 1  Let $\vec{z} = \begin{bmatrix} 1+i \\ -2i \\ 3 \\ 1-2i \end{bmatrix}$. Then $\bar{\vec{z}} = \begin{bmatrix} 1-i \\ 2i \\ 3 \\ 1+2i \end{bmatrix}$.
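The entrywise conjugate of Example 1 can be reproduced in NumPy (our illustration, not part of the text):

```python
import numpy as np

z = np.array([1 + 1j, -2j, 3, 1 - 2j])
z_bar = np.conj(z)  # entrywise complex conjugate, as in Example 1
```

`np.conj` applies the conjugate to each entry, exactly matching the definition above.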

Linear Mappings and Subspaces


We can now extend the definition of a linear mapping L : V → W to the case where V and W are both vector spaces over the complex numbers. We say that L is linear over the complex numbers if for any α ∈ ℂ and v1, v2 ∈ V we have

L(αv1 + v2) = αL(v1) + L(v2)


We can also define subspaces just as we did for real vector spaces, and the range
and nullspace of a linear mapping will be subspaces of the appropriate vector spaces,
as before.

EXAMPLE 2  Let L : ℂ³ → ℂ² be the linear mapping such that

$$L(1,0,0) = \begin{bmatrix} 1+i \\ 2 \end{bmatrix}, \quad L(0,1,0) = \begin{bmatrix} -2i \\ 1-i \end{bmatrix}, \quad L(0,0,1) = \begin{bmatrix} 1+2i \\ 3+i \end{bmatrix}$$

Then, the standard matrix of L is

$$[L] = \begin{bmatrix} 1+i & -2i & 1+2i \\ 2 & 1-i & 3+i \end{bmatrix}$$

The image of a vector z ∈ ℂ³ under L is calculated by L(z) = [L]z.

The range of L is the subspace of the codomain ℂ² spanned by the columns of [L], and the nullspace is the solution space of the system of linear equations [L]z = 0. (Remember that z is a vector in ℂ³.)
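As a sketch of Example 2 in NumPy (the sample vector z below is our own choice, not taken from the text):

```python
import numpy as np

L = np.array([[1 + 1j, -2j, 1 + 2j],
              [2, 1 - 1j, 3 + 1j]])

z = np.array([1, 1j, 1 - 1j])  # our own sample vector in C^3
image = L @ z                   # L(z) = [L] z
```

The matrix-vector product gives the image in ℂ², and the column space of `L` is the range of the mapping.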

Similarly, the rowspace, columnspace, and nullspace of a matrix are also defined
as in the real case.

EXAMPLE3 [ 1 1 +i -i
Let A = 11.

nullspace of A.
- l
2 +i
-1 + 2i
;
l
- i . Find a basis for the rowspace, columnspace, and

Solution: We row reduce and find that the reduced row echelon form of A is

$$R = \begin{bmatrix} 1 & i & 0 & -1 \\ 0 & 0 & 1 & -i \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

As in the real case, a basis for Row(A) is given by the non-zero rows of the reduced row echelon form of A. That is, a basis for Row(A) is

$$\left\{ \begin{bmatrix} 1 \\ i \\ 0 \\ -1 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \\ -i \end{bmatrix} \right\}$$

We next recall that the columns of A corresponding to the columns of R that contain leading 1s form a basis for Col(A). So, a basis for Col(A) is the set consisting of the first and third columns of A.

To find the nullspace, we solve the homogeneous system Az = 0. Using the reduced row echelon form of A, we find that the homogeneous system is equivalent to

z1 + iz2 - z4 = 0
z3 - iz4 = 0

The free variables are z2 and z4, so we let z2 = s ∈ ℂ and z4 = t ∈ ℂ. Then we get z1 = -is + t and z3 = it. Hence, the general solution to the homogeneous system is

$$\begin{bmatrix} z_1 \\ z_2 \\ z_3 \\ z_4 \end{bmatrix} = \begin{bmatrix} -is+t \\ s \\ it \\ t \end{bmatrix} = s\begin{bmatrix} -i \\ 1 \\ 0 \\ 0 \end{bmatrix} + t\begin{bmatrix} 1 \\ 0 \\ i \\ 1 \end{bmatrix}$$

Thus, a basis for Null(A) is $\left\{ \begin{bmatrix} -i \\ 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ i \\ 1 \end{bmatrix} \right\}$.

EXERCISE 2 -1 1 +i
1 -i
1 +i 1
Let A . Find a basis for the rowspace, columnspace, and

-[ 1 - 2i -2 - 2i
nullspace of A.

Complex Multiplication as a Matrix Mapping


We have seen that Ccan be regarded as a real two-dimensional vector space. We now
want to represent multiplication by a complex number as a matrix mapping. We first
consider a special case.
As before, to regard Cas a real vector space we write

z=x+iy=(x,y)

Let us consider multiplication by i:

iz = i(x + iy) = -y + ix = (-y, x)

It is easy to see that this corresponds to a linear mapping:

M_i : ℝ² → ℝ² defined by M_i(x, y) = (-y, x)

The standard matrix is

$$[M_i] = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$$

Observe that [M_i] = [R_{π/2}]. That is, multiplication by i corresponds to a rotation by an angle of π/2.


Figure 9.3.5 Multiplication by i corresponds to rotation by angle π/2.

More generally, we can consider multiplication of complex numbers by a complex number α = a + bi. In Problem D1 you are asked to prove that multiplication by any complex number α = a + bi can be represented as a linear mapping M_α of ℝ² with standard matrix

$$\begin{bmatrix} a & -b \\ b & a \end{bmatrix}$$
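This correspondence is easy to check numerically. As a sketch (NumPy is our own choice of tool, and the values of α and z below are arbitrary):

```python
import numpy as np

def mult_matrix(alpha):
    """Standard matrix of M_alpha: multiplication of C (viewed as R^2) by alpha = a + bi."""
    a, b = alpha.real, alpha.imag
    return np.array([[a, -b],
                     [b, a]])

alpha, z = 3 + 4j, 2 - 1j
w = mult_matrix(alpha) @ np.array([z.real, z.imag])
# w holds (Re, Im) of the complex product alpha * z
```

The 2-vector `w` agrees with the real and imaginary parts of the ordinary complex product `alpha * z`.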

PROBLEMS 9.3
Practice Problems

Al Calculate the following. (b) Determine L(2 + 3i,1 - 4i).

(a) [-� ] [ �� ]
+i
-
3 i (c) Find a basis for the range and nullspace of L.

[ � : � I [ ! : �� I
1
A3 Find a basis for the rowspace, columnspace, and
nullspace of the following matrices.

[[ ]
(b) +
2 - Si -3 - 4i (a) A=
1
;/ �
_ i 2 � 2i

[� � ;�]
I
(c) 2i 1 i

[ �:�I [
(b) B= 1+i -l+i
.
-1 l

(d) (-1 2i)


1 -l+i -1

I
_

2 - Si
(c) C = 2 1+ 2i -2 + 3i -2
A2 (a) Write the standard matrix of the linear mapping 1+i -2 +i -1 -i
L : C2 ---+ C2 such that

L(l,0)= [� ]1 2i
and L(O, 1) =
[ � � �]
Homework Problems

[ ] [- ]
Bl Calculate the following. B3 Determine which of the following sets is a basis

l ;I
4 - 3i 2 - �i for C3•

mrnl' [�J}
-

(a) .
-l 1- l
(a)

[ [
3 + 2; 2 +
(b) -2 -� - 3 + �

mrnH�m
1+ 3t 1- l

(c) -3i[� � ;�] (b)

(d) (-1 _i)[�I 2


-2 - 7i
3i
[ l
B4 Find a basis for the rowspace, columnspace, and
nullspace of the following matrices.
1 1 i
(a) A= i i 1
B2 (a) Write the standard matrix of the linear mapping

[� l
l+i l+i -1-i
L : C2 ---+ C2 such that

[ ;/]
1 i 2 -i

[ �] 1 .
(b) B = -l
L(l,O)= and L(0,1)=
_; i _

]
-l+i 2i

(b) Determine L(2 -i,-4 +i). (c) C = [� -1 2 i

[I� Tl
i 6 8 -4i
(c) Find a basis for the range and nullspace of L.

(d) D � 4;
o

Conceptual Problems

Dl (a) Prove that multiplication by any complex number α = a + bi can be represented as a linear mapping M_α of ℝ² with standard matrix $\begin{bmatrix} a & -b \\ b & a \end{bmatrix}$.
(b) Interpret multiplication by an arbitrary complex number as a composition of a contraction or dilation, and a rotation in the plane ℝ².
(c) Verify the result by calculating M_α for α = 3 - 4i and interpreting it as in part (b).

D2 Let C(2, 2) denote the set of all 2 × 2 matrices with complex entries with standard addition and scalar multiplication of matrices.
(a) Prove that C(2, 2) is a complex vector space.
(b) Write a basis for C(2, 2) and determine its dimension.

D3 Define isomorphisms for complex vector spaces and check that the arguments and results of Section 4.7 are correct, provided that the scalars are always taken to be complex numbers.

D4 Let {v1, v2, v3} be a basis for ℝ³. Prove that {v1, v2, v3} is also a basis for ℂ³ (taken as a complex vector space).

9.4 Eigenvectors in Complex


Vector Spaces
We now look at eigenvectors and diagonalization for linear mappings L : ℂⁿ → ℂⁿ or, equivalently, in the case of n × n matrices with complex entries.
Eigenvalues and eigenvectors are defined in the same way as before, except that the scalars and the coordinates of the vectors are complex numbers.

Definition
Eigenvalue
Eigenvector
Let L : ℂⁿ → ℂⁿ be a linear mapping. If for some λ ∈ ℂ there exists a non-zero vector z ∈ ℂⁿ such that L(z) = λz, then λ is an eigenvalue of L and z is called an eigenvector of L that corresponds to λ. Similarly, a complex number λ is an eigenvalue of an n × n matrix A with complex entries with corresponding eigenvector z ∈ ℂⁿ, z ≠ 0, if Az = λz.

Since the theory of solving systems of equations, inverting matrices, and finding coordinates with respect to a basis is exactly the same for complex vector spaces as the theory for real vector spaces, the basic results on diagonalization are unchanged except that the vector space is now ℂⁿ. A complex n × n matrix A is diagonalized by a matrix P if and only if the columns of P form a basis for ℂⁿ consisting of eigenvectors of A. Since the Fundamental Theorem of Algebra guarantees that every n-th degree polynomial has exactly n roots over ℂ, the only way a matrix cannot be diagonalizable over ℂ is if it has an eigenvalue with geometric multiplicity less than its algebraic multiplicity.
We do not often have to carry out the diagonalization procedure for complex matrices. However, a simple example is given to illustrate the theory.

EXAMPLE 1  Determine whether the matrix $A = \begin{bmatrix} 3-8i & -11+7i \\ -1-4i & -2+6i \end{bmatrix}$ is diagonalizable. If it is, determine the invertible matrix P that diagonalizes A.

EXAMPLE 1 (continued)
Solution: Consider

$$A - \lambda I = \begin{bmatrix} 3-8i-\lambda & -11+7i \\ -1-4i & -2+6i-\lambda \end{bmatrix}$$

The characteristic equation is

$$\det(A - \lambda I) = \lambda^2 - (1-2i)\lambda + (3-3i) = 0$$

Using the quadratic formula, we find that

$$\lambda = \frac{(1-2i) \pm \left[(1-2i)^2 - 4(1)(3-3i)\right]^{1/2}}{2}$$

Using the methods from Section 9.1, we find that the eigenvalues are λ1 = 1 + i and λ2 = -3i. For λ1 = 1 + i, we get

$$A - \lambda_1 I = \begin{bmatrix} 2-9i & -11+7i \\ -1-4i & -3+5i \end{bmatrix} \sim \begin{bmatrix} 1 & -1-i \\ 0 & 0 \end{bmatrix}$$

Hence, the general solution is $\alpha \begin{bmatrix} 1+i \\ 1 \end{bmatrix}$, α ∈ ℂ. Thus, an eigenvector corresponding to λ1 = 1 + i is $\begin{bmatrix} 1+i \\ 1 \end{bmatrix}$. For λ2 = -3i, we get

$$A - \lambda_2 I = \begin{bmatrix} 3-5i & -11+7i \\ -1-4i & -2+9i \end{bmatrix} \sim \begin{bmatrix} 1 & -2-i \\ 0 & 0 \end{bmatrix}$$

Hence, the general solution is $\alpha \begin{bmatrix} 2+i \\ 1 \end{bmatrix}$, α ∈ ℂ. Thus, an eigenvector corresponding to λ2 = -3i is $\begin{bmatrix} 2+i \\ 1 \end{bmatrix}$.

Hence, $P = \begin{bmatrix} 1+i & 2+i \\ 1 & 1 \end{bmatrix}$. Using the formula for the inverse of a 2 × 2 matrix, we find that

$$P^{-1} = \begin{bmatrix} 1+i & 2+i \\ 1 & 1 \end{bmatrix}^{-1} = \begin{bmatrix} -1 & 2+i \\ 1 & -1-i \end{bmatrix}$$

and

$$P^{-1}AP = \begin{bmatrix} 1+i & 0 \\ 0 & -3i \end{bmatrix}$$
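The diagonalization in Example 1 can be checked numerically. As a sketch (NumPy is our own choice of tool):

```python
import numpy as np

A = np.array([[3 - 8j, -11 + 7j],
              [-1 - 4j, -2 + 6j]])
evals, evecs = np.linalg.eig(A)   # eigenvalues 1+i and -3i, in some order

# Verify P^{-1} A P is diagonal using the P found above
P = np.array([[1 + 1j, 2 + 1j],
              [1, 1]])
D = np.linalg.inv(P) @ A @ P
```

`np.linalg.eig` works for complex matrices exactly as for real ones; only the ordering of the eigenvalues may differ from the text.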

Complex Characteristic Roots of a Real Matrix and


a Real Canonical Form
Does diagonalizing a complex matrix tell us anything useful about diagonalizing a real matrix? First, note that since the real numbers form a subset of the complex numbers, we may regard a matrix A with real entries as being a matrix with complex entries; all of the entries just happen to have zero imaginary part. In this context, if the real matrix A has a complex characteristic root λ, we speak of λ as a complex eigenvalue of A, with a corresponding complex eigenvector. We can then proceed to diagonalize A over ℂ.

EXAMPLE 2  Let $A = \begin{bmatrix} 5 & -6 \\ 3 & -1 \end{bmatrix}$. Find its eigenvectors and diagonalize over ℂ.

Solution: We have

$$A - \lambda I = \begin{bmatrix} 5-\lambda & -6 \\ 3 & -1-\lambda \end{bmatrix}$$

so det(A - λI) = λ² - 4λ + 13, and the roots of the characteristic equation are λ1 = 2 + 3i and λ2 = 2 - 3i = λ̄1.
For λ1 = 2 + 3i,

$$A - \lambda_1 I = \begin{bmatrix} 3-3i & -6 \\ 3 & -3-3i \end{bmatrix} \sim \begin{bmatrix} 1 & -(1+i) \\ 0 & 0 \end{bmatrix}$$

Hence, a complex eigenvector corresponding to λ1 = 2 + 3i is $\vec{z}_1 = \begin{bmatrix} 1+i \\ 1 \end{bmatrix}$.
For λ2 = 2 - 3i,

$$A - \lambda_2 I = \begin{bmatrix} 3+3i & -6 \\ 3 & -3+3i \end{bmatrix} \sim \begin{bmatrix} 1 & -(1-i) \\ 0 & 0 \end{bmatrix}$$

Thus, an eigenvector corresponding to λ2 = 2 - 3i is $\vec{z}_2 = \begin{bmatrix} 1-i \\ 1 \end{bmatrix}$.

It follows that A is diagonalized to $\begin{bmatrix} 2+3i & 0 \\ 0 & 2-3i \end{bmatrix}$ by $P = \begin{bmatrix} 1+i & 1-i \\ 1 & 1 \end{bmatrix}$.
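The conjugate pairing of the eigenvalues of a real matrix is easy to observe numerically. A small NumPy sketch using the matrix of Example 2:

```python
import numpy as np

A = np.array([[5.0, -6.0],
              [3.0, -1.0]])
evals = np.linalg.eigvals(A)
# For this real matrix, the eigenvalues are the conjugate pair 2 +/- 3i
```

Even though `A` has real entries, `eigvals` returns complex values whenever the characteristic polynomial has complex roots.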

Observe in Example 2 that the eigenvalues of A were complex conjugates. This makes sense since we know by Theorem 9.1.2 that complex roots of real polynomials come in pairs of complex conjugates. Before proving this, we first extend the definition of complex conjugates to matrices.

Definition
Complex Conjugate
Let A be an m × n matrix. We define the complex conjugate Ā of A to be the matrix with entries (Ā)_{jk} = \overline{(A)_{jk}}.

Theorem 1  Suppose that A is an n × n matrix with real entries and that λ = a + bi, b ≠ 0, is an eigenvalue of A, with corresponding eigenvector z. Then λ̄ is also an eigenvalue, with corresponding eigenvector z̄.

Proof: Suppose that Az = λz. Taking complex conjugates of both sides gives

$$A\vec{z} = \lambda\vec{z} \implies \bar{A}\bar{\vec{z}} = \bar{\lambda}\bar{\vec{z}}$$

Since A has real entries, Ā = A, so Az̄ = λ̄z̄. Hence, λ̄ is an eigenvalue of A with corresponding eigenvector z̄, as required. •

Now we note that the solution to Example 2 was not completely satisfying. The
point of diagonalization is to simplify the matrix of the linear mapping. However, in
Example 2, we have changed from a real matrix to a complex matrix. Given a square
matrix A with real entries, we would like to determine a similar matrix P⁻¹AP that
also has real entries and that reveals information about the eigenvalues of A, even if
the eigenvalues are complex. The rest of this section is concerned with the problem of
finding such a real matrix.
If the eigenvalues of A are all real, then by diagonalizing A, we have our desired
similar matrix. Thus, we will just consider the case where A is an n x n real matrix
with at least one eigenvalue λ = a + bi, where a, b ∈ ℝ and b ≠ 0.
Since we are splitting the eigenvalue into real and imaginary parts, it makes sense
to also split the corresponding eigenvector z into real and imaginary parts:

$$\vec{z} = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} + i \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix} = \vec{x} + i\vec{y}, \qquad \vec{x}, \vec{y} \in \mathbb{R}^n$$
Thus, we have Az = ;lz, or

A(x + iy) = (a + bi)(x + iy) = (ax - by) + i(bx + ay)

Upon considering real and imaginary parts, we get

Ax= ax - by and Ay=bx+ay (9.1)

Observe that equation (9 .1) shows that the image of any linear combination of
x and y under A will be a linear combination of x and y. That is, if v E Span{x,y},
then Av E Span{x,y}.

Definition
Invariant Subspace
If T : V → V is a linear operator and U is a subspace of V such that T(u) ∈ U for all u ∈ U, then U is called an invariant subspace of T.

Note that x and y must be linearly independent, for if x = ky, equation (9.1) would say that they are real eigenvectors of A with corresponding real eigenvalues, and this is impossible with our assumption that b ≠ 0. (See Problem D2b.) Moreover, it can be shown that no vector in Span{x, y} is a real eigenvector of A. (See Problem D2c.) This discussion together with Problem D2 is summarized in Theorem 2.

Theorem 2  Suppose that λ = a + bi, b ≠ 0, is an eigenvalue of an n × n real matrix A with corresponding eigenvector z = x + iy. Then Span{x, y} is a two-dimensional subspace of ℝⁿ that is invariant under A and contains no real eigenvector of A.

The Case of a 2 x 2 Matrix


If A is a 2 × 2 real matrix, then it follows from Theorem 2 that B = {x, y} is a basis for ℝ². From (9.1) we get that the B-matrix of the linear mapping L associated with A is

$$[L]_B = \begin{bmatrix} a & b \\ -b & a \end{bmatrix}$$

Moreover, from our work in Section 4.6, we know that [L]_B = P⁻¹AP, where P = [x y] is the change of coordinates matrix from B-coordinates to standard coordinates.

Thus, we have a real matrix that is similar to A and gives information about the
eigenvalues of A, as desired.

Definition
Real Canonical Form
Let A be a 2 × 2 real matrix with eigenvalue λ = a + ib, b ≠ 0. The matrix

$$\begin{bmatrix} a & b \\ -b & a \end{bmatrix}$$

is called a real canonical form for A.

EXAMPLE 3  Find a real canonical form C of the matrix $A = \begin{bmatrix} 2 & -5 \\ 1 & -2 \end{bmatrix}$ and find a change of coordinates matrix P such that P⁻¹AP = C.

Solution: We have

$$\det(A - \lambda I) = \begin{vmatrix} 2-\lambda & -5 \\ 1 & -2-\lambda \end{vmatrix} = \lambda^2 + 1$$

So, the eigenvalues of A are λ1 = 0 + i and λ2 = 0 - i = λ̄1. Thus, we have a = 0 and b = 1, and hence a real canonical form of A is $\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$.

For λ1 = i, we have

$$A - \lambda_1 I = \begin{bmatrix} 2-i & -5 \\ 1 & -2-i \end{bmatrix} \sim \begin{bmatrix} 1 & -2-i \\ 0 & 0 \end{bmatrix}$$

so an eigenvector corresponding to λ1 is

$$\vec{z} = \begin{bmatrix} 2+i \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix} + i\begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

Hence, a change of coordinates matrix P is

$$P = \begin{bmatrix} 2 & 1 \\ 1 & 0 \end{bmatrix}$$
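The whole real-canonical-form construction can be automated. As a sketch (NumPy is our own choice; note that `eig` may scale the eigenvector differently from the text, but any eigenvector for λ = +i yields the same canonical form):

```python
import numpy as np

A = np.array([[2.0, -5.0],
              [1.0, -2.0]])
evals, evecs = np.linalg.eig(A)

# Pick the eigenvector for lambda = +i and split it into real and imaginary parts
k = int(np.argmin(np.abs(evals - 1j)))
z = evecs[:, k]
P = np.column_stack([z.real, z.imag])

C = np.linalg.inv(P) @ A @ P   # real canonical form [[a, b], [-b, a]]
```

Since Ax = ax - by and Ay = bx + ay hold for any eigenvector z = x + iy, the computed `C` equals [[0, 1], [-1, 0]] regardless of how `eig` scales z.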

EXAMPLE 4  Find a real canonical form of the matrix $A = \begin{bmatrix} 5 & -6 \\ 3 & -1 \end{bmatrix}$.

Solution: In Example 2, we saw that A has eigenvalues λ = 2 + 3i and λ̄ = 2 - 3i. Thus, a real canonical form of A is $\begin{bmatrix} 2 & 3 \\ -3 & 2 \end{bmatrix}$.

EXERCISE 1  Find a change of coordinates matrix P for Example 4 and verify that P⁻¹AP = $\begin{bmatrix} 2 & 3 \\ -3 & 2 \end{bmatrix}$.

Remarks

1. A matrix of the form $\begin{bmatrix} a & b \\ -b & a \end{bmatrix}$ can be rewritten as

$$\sqrt{a^2+b^2}\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$

where cos θ = a/√(a²+b²) and sin θ = -b/√(a²+b²). But, since the new basis vectors x and y are not necessarily orthogonal, the matrix does not represent a true rotation of Span{x, y}.

2. Observe in Example 3 that we could have taken λ1 = -i and λ2 = i = λ̄1. Thus, $\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$ is also a real canonical form of A.

3. The complex eigenvector z is determined only up to multiplication by an arbitrary non-zero complex number. This means that the vectors x and y are not uniquely determined. For example, in Example 3,

$$\vec{z} = (1+i)\begin{bmatrix} 2+i \\ 1 \end{bmatrix} = \begin{bmatrix} 1+3i \\ 1+i \end{bmatrix}$$

is also an eigenvector corresponding to λ1. Thus, taking $P = \begin{bmatrix} 1 & 3 \\ 1 & 1 \end{bmatrix}$ would also give $P^{-1}AP = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$.

EXERCISE 2
Find a real canonical form of the matrix C of A = [-i �] and find a change of

coordinates matrix P such that P⁻¹AP = C.

The Case of a 3 x 3 Matrix


If A is a 3 × 3 real matrix with one real eigenvalue μ with corresponding eigenvector v and complex eigenvalues λ = a + bi, b ≠ 0, and λ̄ with corresponding eigenvectors z = x + iy and z̄, then we have the equations

Av = μv,  Ax = ax - by,  Ay = bx + ay

Then, the matrix of A with respect to the basis B = {v, x, y} is

$$\begin{bmatrix} \mu & 0 & 0 \\ 0 & a & b \\ 0 & -b & a \end{bmatrix}$$

The matrix A can be brought into this real canonical form by a change of coordinates with matrix P = [v x y].

EXAMPLE 5  Find a real canonical form C of the matrix $A = \begin{bmatrix} -1 & -4 & -4 \\ 4 & 7 & 4 \\ 0 & -2 & -1 \end{bmatrix}$ and find a change of coordinates matrix P such that P⁻¹AP = C.

EXAMPLE 5 (continued)
Solution: We have

$$\det(A - \lambda I) = \begin{vmatrix} -1-\lambda & -4 & -4 \\ 4 & 7-\lambda & 4 \\ 0 & -2 & -1-\lambda \end{vmatrix} = -(\lambda - 3)(\lambda^2 - 2\lambda + 5)$$

Thus, the eigenvalues of A are μ = 3, λ1 = 1 + 2i, and λ2 = 1 - 2i = λ̄1. Thus, a real canonical form of A is

$$\begin{bmatrix} 3 & 0 & 0 \\ 0 & 1 & 2 \\ 0 & -2 & 1 \end{bmatrix}$$

For μ = 3, we have

$$A - \mu I = \begin{bmatrix} -4 & -4 & -4 \\ 4 & 4 & 4 \\ 0 & -2 & -4 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}$$

Thus, an eigenvector corresponding to μ is $\vec{v} = \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix}$.

For λ1 = 1 + 2i, we have

$$A - \lambda_1 I = \begin{bmatrix} -2-2i & -4 & -4 \\ 4 & 6-2i & 4 \\ 0 & -2 & -2-2i \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -1-i \\ 0 & 1 & 1+i \\ 0 & 0 & 0 \end{bmatrix}$$

Thus, an eigenvector corresponding to λ1 is

$$\vec{z} = \begin{bmatrix} 1+i \\ -1-i \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix} + i\begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}$$

Hence, a change of coordinates matrix P is

$$P = \begin{bmatrix} 1 & 1 & 1 \\ -2 & -1 & -1 \\ 1 & 1 & 0 \end{bmatrix}$$
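The 3 × 3 change of coordinates in Example 5 can be verified directly. A NumPy sketch (our own tool choice, not part of the text):

```python
import numpy as np

A = np.array([[-1.0, -4.0, -4.0],
              [4.0, 7.0, 4.0],
              [0.0, -2.0, -1.0]])
P = np.array([[1.0, 1.0, 1.0],
              [-2.0, -1.0, -1.0],
              [1.0, 1.0, 0.0]])

C = np.linalg.inv(P) @ A @ P   # block-diagonal real canonical form
```

The result is the real eigenvalue 3 in the top-left corner and the 2 × 2 block [[1, 2], [-2, 1]] for the conjugate pair 1 ± 2i.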

If A is an n × n real matrix with complex eigenvalue λ = a + bi, b ≠ 0, and corresponding eigenvector z = x + iy, then Span{x, y} is a two-dimensional invariant subspace of ℝⁿ. If we use x and y as consecutive vectors in a basis for ℝⁿ, with the other vectors being eigenvectors for other eigenvalues, then in a matrix similar to A there is a block

$$\begin{bmatrix} a & b \\ -b & a \end{bmatrix}$$

with the a's occurring on the diagonal in positions determined by the position of x and y in the basis.


For repeated complex eigenvalues, the situation is the same as for repeated real eigenvalues. In some cases, it is possible to find a basis of eigenvectors and diagonalize the matrix over ℂ. In other cases, further theory is required, leading to the Jordan normal form.

PROBLEMS 9.4
Practice Problems

Al For each of the following matrices, determine a


(b) [-1 ] 2

-1

diagonal matrix D similar to the given matrix -1 -3

H
over C. Also determine a real canonical form and 2
give a change of coordinates matrix P that brings (c) 1

[� �
the matrix into this form. 2 -1

[-� �]
1
-

(a)
(d) 1
-1

Homework Problems

Bl For each of the following matrices, determine a


diagonal matrix D similar to the given matrix (d) [=� -� =�1
[� � -�1
over C. Also determine a real canonical form and 4 -2 5
give a change of coordinates matrix P that brings

[l ]
the matrix into this form. (e)

[ � � -�1
-5 8 -1 -5
(a)
1 -3

[�o -;J
(f)
(b)

[
-2 -1 2

-11
-2
(c) 2 2
0 -2

Conceptual Problems

Dl Verify that if z is an eigenvector of a matrix A with complex entries, then z̄ is an eigenvector of Ā (the matrix obtained from A by taking complex conjugates of each entry of A).

D2 Suppose that A is an n × n real matrix and that λ = a + bi is a complex eigenvalue of A with b ≠ 0. Let the corresponding eigenvector be x + iy.
(a) Prove that x ≠ 0 and y ≠ 0.
(b) Show that x ≠ ky for any real number k. (Hint: Suppose x = ky for some k, and use equation (9.1) to show that this requires b = 0.)
(c) Prove that Span{x, y} does not contain an eigenvector of A corresponding to a real eigenvalue of A.

9.5 Inner Products in Complex


Vector Spaces
We would like to have an inner product defined for complex vector spaces because the
concepts of length, orthogonality, and projection are powerful tools for solving certain
problems.
Our first thought would be to determine whether we can extend the dot product to ℂⁿ. Does this define an inner product on ℂⁿ? Let z = x + iy; then we have

$$\vec{z} \cdot \vec{z} = z_1^2 + \cdots + z_n^2 = (x_1^2 + \cdots + x_n^2 - y_1^2 - \cdots - y_n^2) + 2i(x_1 y_1 + \cdots + x_n y_n)$$


Observe that z · z need not even be a real number, and so the condition z · z ≥ 0 does not make sense. Thus, we cannot use the dot product as the rule for defining an inner product on ℂⁿ.
As in the real case, we want ⟨z, z⟩ to be a non-negative real number so that we can define the length of a vector by ‖z‖ = √⟨z, z⟩. We recall that if z ∈ ℂ, then zz̄ = |z|² ≥ 0. Hence, it makes sense to choose

$$\langle \vec{z}, \vec{w} \rangle = \vec{z} \cdot \bar{\vec{w}}$$

as this gives us

$$\langle \vec{z}, \vec{z} \rangle = \vec{z} \cdot \bar{\vec{z}} = z_1\bar{z}_1 + \cdots + z_n\bar{z}_n = |z_1|^2 + \cdots + |z_n|^2 \ge 0$$

Definition
Standard Inner Product on ℂⁿ
In ℂⁿ the standard inner product ⟨ , ⟩ is defined by

$$\langle \vec{z}, \vec{w} \rangle = \vec{z} \cdot \bar{\vec{w}} = z_1\bar{w}_1 + \cdots + z_n\bar{w}_n, \quad \text{for } \vec{w}, \vec{z} \in \mathbb{C}^n$$

EXAMPLE 1  Let $\vec{u} = \begin{bmatrix} 1+i \\ 2-i \end{bmatrix}$ and $\vec{v} = \begin{bmatrix} -2+i \\ 3+2i \end{bmatrix}$. Determine ⟨v, u⟩, ⟨u, v⟩, and ⟨v, (2-i)u⟩.

Solution:

$$\langle \vec{v}, \vec{u} \rangle = \vec{v} \cdot \bar{\vec{u}} = \begin{bmatrix} -2+i \\ 3+2i \end{bmatrix} \cdot \begin{bmatrix} 1-i \\ 2+i \end{bmatrix} = (-2+i)(1-i) + (3+2i)(2+i) = (-1+3i) + (4+7i) = 3 + 10i$$

$$\langle \vec{u}, \vec{v} \rangle = \vec{u} \cdot \bar{\vec{v}} = \begin{bmatrix} 1+i \\ 2-i \end{bmatrix} \cdot \begin{bmatrix} -2-i \\ 3-2i \end{bmatrix} = (1+i)(-2-i) + (2-i)(3-2i) = (-1-3i) + (4-7i) = 3 - 10i$$

$$\langle \vec{v}, (2-i)\vec{u} \rangle = \vec{v} \cdot \overline{(2-i)\vec{u}} = (2+i)(\vec{v} \cdot \bar{\vec{u}}) = (2+i)(3+10i) = -4 + 23i$$
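The text's convention ⟨z, w⟩ = z · w̄ is easy to implement directly. A NumPy sketch (our own tool choice) reproducing Example 1:

```python
import numpy as np

def inner(z, w):
    """Standard inner product on C^n with the text's convention <z, w> = z . w-bar."""
    return np.sum(z * np.conj(w))

u = np.array([1 + 1j, 2 - 1j])
v = np.array([-2 + 1j, 3 + 2j])

# <v, u> = 3+10i, <u, v> = 3-10i, <v, (2-i)u> = -4+23i, as in Example 1
```

Equivalently, `np.vdot(w, z)` gives the same value, since `vdot` conjugates its first argument.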

Observe that this does not satisfy the properties of the real inner product. In particular, ⟨u, v⟩ ≠ ⟨v, u⟩ and ⟨v, αu⟩ ≠ α⟨v, u⟩.

EXERCISE 1
Let it= [i � ]2i
and v= [i � ��l Determine (it, v), (2iil, v), and (it, 2iv).

Properties of Complex Inner Products


Example 1 warns us that for complex vector spaces, we must modify the requirements
of symmetry and bilinearity stated for real inner products.

Definition
Complex Inner Product
Let V be a vector space over ℂ. A complex inner product on V is a function ⟨ , ⟩ : V × V → ℂ such that

(1) ⟨z, z⟩ ≥ 0 for all z ∈ V, and ⟨z, z⟩ = 0 if and only if z = 0

(2) ⟨z, w⟩ = \overline{⟨w, z⟩} for all w, z ∈ V

(3) For all u, v, w, z ∈ V and α ∈ ℂ,

(i) ⟨v + z, w⟩ = ⟨v, w⟩ + ⟨z, w⟩
(ii) ⟨z, w + u⟩ = ⟨z, w⟩ + ⟨z, u⟩
(iii) ⟨αz, w⟩ = α⟨z, w⟩
(iv) ⟨z, αw⟩ = ᾱ⟨z, w⟩

EXERCISE 2 Verify that the standard inner product on ℂⁿ is a complex inner product.

Note that property (1) allows us to define the (standard) length by ‖z‖ = ⟨z, z⟩^{1/2}, as desired.
Property (2) is the Hermitian property of the inner product. Notice that if all the
vectors are real, the Hermitian property simplifies to symmetry.
Property (3) says that the complex inner product is not quite bilinear. However,
this property reduces to bilinearity when the scalars are all real.

T he Cauchy-Schwarz and Triangle Inequalities


Complex inner products satisfy the Cauchy-Schwarz and triangle inequalities. How­
ever, new proofs are required.

Theorem 1  Let V be a complex inner product space with inner product ⟨ , ⟩. Then, for all w, z ∈ V,

(4) |⟨z, w⟩| ≤ ‖z‖ ‖w‖

(5) ‖z + w‖ ≤ ‖z‖ + ‖w‖

Proof: We prove (4) and leave the proof of (5) as Problem D1.
If w = 0, then (4) is immediate, so assume that w ≠ 0, and let

$$\alpha = \frac{\langle \vec{z}, \vec{w} \rangle}{\langle \vec{w}, \vec{w} \rangle}$$

Then, we get

$$\begin{aligned}
0 &\le \langle \vec{z} - \alpha\vec{w}, \vec{z} - \alpha\vec{w} \rangle \\
&= \langle \vec{z}, \vec{z} - \alpha\vec{w} \rangle - \alpha\langle \vec{w}, \vec{z} - \alpha\vec{w} \rangle \\
&= \langle \vec{z}, \vec{z} \rangle - \bar{\alpha}\langle \vec{z}, \vec{w} \rangle - \alpha\langle \vec{w}, \vec{z} \rangle + \alpha\bar{\alpha}\langle \vec{w}, \vec{w} \rangle \\
&= \langle \vec{z}, \vec{z} \rangle - \frac{|\langle \vec{z}, \vec{w} \rangle|^2}{\langle \vec{w}, \vec{w} \rangle} - \frac{|\langle \vec{z}, \vec{w} \rangle|^2}{\langle \vec{w}, \vec{w} \rangle} + \frac{|\langle \vec{z}, \vec{w} \rangle|^2}{\langle \vec{w}, \vec{w} \rangle} \\
&= \|\vec{z}\|^2 - \frac{|\langle \vec{z}, \vec{w} \rangle|^2}{\|\vec{w}\|^2}
\end{aligned}$$

and (4) follows. •

Let C(m, n) denote the complex vector space of m × n matrices with complex entries. How should we define the standard complex inner product on this vector space? We saw with real vector spaces that we defined ⟨A, B⟩ = tr(BᵀA) and found that this inner product was equivalent to the dot product on ℝ^{mn}. Of course, we want to define the standard inner product on C(m, n) in a similar way. However, because of the way the complex inner product is defined on ℂⁿ, we see that we also need to take a complex conjugate. Thus, we define the inner product ⟨ , ⟩ on C(m, n) by ⟨A, B⟩ = tr(B̄ᵀA).

In particular, let A and B be m × n matrices with entries a_{jk} and b_{jk}, respectively. Since we want the trace of B̄ᵀA, we just consider the diagonal entries in the product and find that

$$\operatorname{tr}(\bar{B}^T A) = \sum_{i=1}^m a_{i1}\bar{b}_{i1} + \sum_{i=1}^m a_{i2}\bar{b}_{i2} + \cdots + \sum_{i=1}^m a_{in}\bar{b}_{in}$$

which corresponds to the standard inner product of the corresponding vectors under the obvious isomorphism with ℂ^{mn}.

EXAMPLE 2  Let $A = \begin{bmatrix} 2+i & 1 \\ i & 1-i \end{bmatrix}$ and $B = \begin{bmatrix} 3 & 2-3i \\ -2i & 1+2i \end{bmatrix}$. Find ⟨A, B⟩ and show that this corresponds to the standard inner product of $\vec{a} = \begin{bmatrix} 2+i \\ 1 \\ i \\ 1-i \end{bmatrix}$ and $\vec{b} = \begin{bmatrix} 3 \\ 2-3i \\ -2i \\ 1+2i \end{bmatrix}$.

Solution: We have

$$\langle A, B \rangle = \operatorname{tr}(\bar{B}^T A) = \operatorname{tr}\left(\begin{bmatrix} 3 & 2i \\ 2+3i & 1-2i \end{bmatrix}\begin{bmatrix} 2+i & 1 \\ i & 1-i \end{bmatrix}\right)$$

$$= \operatorname{tr}\begin{bmatrix} 3(2+i)+2i(i) & 3(1)+2i(1-i) \\ (2+3i)(2+i)+(1-2i)(i) & (2+3i)(1)+(1-2i)(1-i) \end{bmatrix}$$

$$= 3(2+i) + 2i(i) + (2+3i)(1) + (1-2i)(1-i) = 5 + 3i$$

and

$$\vec{a} \cdot \bar{\vec{b}} = \begin{bmatrix} 2+i \\ 1 \\ i \\ 1-i \end{bmatrix} \cdot \begin{bmatrix} 3 \\ 2+3i \\ 2i \\ 1-2i \end{bmatrix} = 3(2+i) + (2+3i)(1) + 2i(i) + (1-2i)(1-i) = 5 + 3i = \langle \vec{a}, \vec{b} \rangle$$

The matrix B̄ᵀ is very important, so we make the following definition.
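The trace formula and the vector correspondence of Example 2 can be checked in a few lines. A NumPy sketch (our own tool choice):

```python
import numpy as np

A = np.array([[2 + 1j, 1],
              [1j, 1 - 1j]])
B = np.array([[3, 2 - 3j],
              [-2j, 1 + 2j]])

inner_AB = np.trace(np.conj(B).T @ A)

# Same value as the vector inner product under the isomorphism with C^4
vector_form = np.sum(A.flatten() * np.conj(B.flatten()))
```

Flattening the matrices row by row realizes the isomorphism with ℂ⁴ used in the example.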

Definition
Conjugate Transpose
Let A be an n × n matrix with complex entries. We define the conjugate transpose A* of A to be

A* = Āᵀ

EXAMPLE3

Observe that if A is a real matrix, then A* = Aᵀ.

Theorem 2  Let A and B be complex matrices and let α ∈ ℂ. Then

(1) ⟨Az, w⟩ = ⟨z, A*w⟩ for all z, w ∈ ℂⁿ
(2) (A*)* = A
(3) (A + B)* = A* + B*
(4) (αA)* = ᾱA*
(5) (AB)* = B*A*

The proof is left as Problem D2.



Orthogonality in $\mathbb{C}^n$ and Unitary Matrices


With a complex inner product defined, we can proceed to introduce orthogonality, projections, and distance, as we did in the real case. No new approaches or ideas are required, so we omit a detailed discussion. However, we must be very careful that we have the vectors in the correct order when calculating an inner product, since the complex inner product is not symmetric. For example, if $\mathcal{B} = \{v_1, \ldots, v_k\}$ is an orthonormal basis for a subspace $\mathbb{S}$ of $\mathbb{C}^n$, then the projection of $z \in \mathbb{C}^n$ onto $\mathbb{S}$ is given by
$$\operatorname{proj}_{\mathbb{S}} z = \langle z, v_1\rangle v_1 + \cdots + \langle z, v_k\rangle v_k$$
If we were to calculate $\langle v_1, z\rangle$ instead of $\langle z, v_1\rangle$, we would likely get the wrong answer. Notice that since $\langle z, w\rangle$ may be complex, there is no obvious way to define the angle between vectors.
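The order-sensitivity of the complex inner product can be seen in a short computation. This is an illustrative sketch with data of our own choosing; `inner` implements $\langle u, v\rangle = \sum u_k\overline{v_k}$ as used in this chapter.

```python
import math

def inner(u, v):
    # <u, v> = sum u_k * conj(v_k): conjugate-linear in the second argument
    return sum(x * y.conjugate() for x, y in zip(u, v))

s = 1 / math.sqrt(2)
v1 = [s, s * 1j]              # a unit vector in C^2
z = [1 + 2j, 3]

# correct order: the projection coefficient is <z, v1>
proj = [inner(z, v1) * e for e in v1]

# the residual z - proj is orthogonal to v1
residual = [zi - pi for zi, pi in zip(z, proj)]
print(abs(inner(residual, v1)))        # ~0

# swapping the order conjugates the coefficient, which is generally wrong
print(inner(z, v1), inner(v1, z))      # complex conjugates of each other
```

Using $\langle v_1, z\rangle$ instead would leave a residual that is no longer orthogonal to $v_1$.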

EXAMPLE 4
Let $v_1, v_2, v_3 \in \mathbb{C}^4$ and consider the subspace $\mathbb{S} = \operatorname{Span}\{v_1, v_2, v_3\}$ of $\mathbb{C}^4$.

(a) Use the Gram-Schmidt Procedure to find an orthogonal basis for $\mathbb{S}$.

Solution: Let $w_1 = v_1$. Then
$$w_2 = v_2 - \frac{\langle v_2, w_1\rangle}{\|w_1\|^2}\,w_1$$
and
$$w_3 = v_3 - \frac{\langle v_3, w_1\rangle}{\|w_1\|^2}\,w_1 - \frac{\langle v_3, w_2\rangle}{\|w_2\|^2}\,w_2$$
Since we can take any nonzero scalar multiple of this, we instead take $W_3 = 3w_3$ and get that $\{w_1, w_2, W_3\}$ is an orthogonal basis for $\mathbb{S}$.

(b) For a given $z \in \mathbb{C}^4$, find $\operatorname{proj}_{\mathbb{S}} z$.

Solution: Using the orthogonal basis we found in (a), we get
$$\operatorname{proj}_{\mathbb{S}} z = \frac{\langle z, w_1\rangle}{\|w_1\|^2}\,w_1 + \frac{\langle z, w_2\rangle}{\|w_2\|^2}\,w_2 + \frac{\langle z, W_3\rangle}{\|W_3\|^2}\,W_3$$
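The Gram-Schmidt Procedure used in Example 4 can be sketched in a few lines of Python. This is our own minimal implementation (the names and test vectors are ours, not the text's), included only to show where the complex conjugate enters through the inner product.

```python
def inner(u, v):
    return sum(x * y.conjugate() for x, y in zip(u, v))

def gram_schmidt(vectors):
    """Complex Gram-Schmidt: w_j = v_j - sum_{k<j} (<v_j, w_k>/||w_k||^2) w_k.
    Returns an orthogonal (not normalized) set; assumes independent input."""
    ws = []
    for v in vectors:
        w = list(v)
        for u in ws:
            c = inner(v, u) / inner(u, u)     # note the order: <v, u>, not <u, v>
            w = [wi - c * ui for wi, ui in zip(w, u)]
        ws.append(w)
    return ws

# illustrative vectors of our own choosing (not the text's Example 4 data)
ws = gram_schmidt([[1, 1j, 0], [1j, 1, 1], [1, 0, 1 + 1j]])
for i in range(len(ws)):
    for j in range(i):
        print(abs(inner(ws[i], ws[j])))       # all ~0
```

Reversing the order of the arguments in `inner` inside the loop would break orthogonality, which is exactly the pitfall the text warns about.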

When working with real inner products, we saw that orthogonal matrices are very important. So, we now consider a complex version of these matrices.

Definition
Unitary Matrix
An n x n matrix with complex entries is said to be unitary if its columns form an orthonormal basis for $\mathbb{C}^n$.

For an orthogonal matrix P, we saw that the defining property is equivalent to the matrix condition $P^{-1} = P^T$. We get the associated result for unitary matrices.

Theorem 3
If U is an n x n matrix, then the following are equivalent.
(1) The columns of U form an orthonormal basis for $\mathbb{C}^n$
(2) The rows of U form an orthonormal basis for $\mathbb{C}^n$
(3) $U^{-1} = U^*$

Proof: Let $U = [z_1 \ \cdots \ z_n]$. By definition, we have that $\{z_1, \ldots, z_n\}$ is orthonormal if and only if $\langle z_i, z_i\rangle = 1$ and
$$0 = \langle z_i, z_j\rangle = z_i \cdot \overline{z_j} = z_i^T \overline{z_j}, \quad \text{for } i \neq j$$
Thus, since
$$(U^*U)_{ij} = \overline{z_i}^T z_j = \langle z_j, z_i\rangle$$
we get that $U^*U = I$ if and only if $\{z_1, \ldots, z_n\}$ is orthonormal. The proof that (2) is equivalent to (3) is similar. ∎

Observe that if the entries of A are all real, then A is unitary if and only if it is
orthogonal.
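A direct way to test the condition in Theorem 3 is to form $U^*U$ and compare it with the identity matrix. The sketch below is our own illustration with plain Python complex numbers; the sample matrices are of our own choosing, not from the text.

```python
import math

def ctrans(M):
    return [[M[i][j].conjugate() for i in range(len(M))] for j in range(len(M[0]))]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def is_unitary(U, tol=1e-9):
    # U is unitary iff U* U = I
    P = matmul(ctrans(U), U)
    n = len(U)
    return all(abs(P[i][j] - (1 if i == j else 0)) < tol
               for i in range(n) for j in range(n))

r = 1 / math.sqrt(2)
U = [[r, r], [r * 1j, -r * 1j]]          # columns (1, i)/sqrt(2) and (1, -i)/sqrt(2)
print(is_unitary(U))                     # True
print(is_unitary([[1, 0], [1j, 1]]))     # False: first column has length sqrt(2)
```

Note that the real-matrix version of the same test (with `.conjugate()` removed) is exactly the orthogonality check $P^TP = I$.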

EXAMPLE 5
Are the matrices $U = \begin{bmatrix} 1 & i \\ i & 1 \end{bmatrix}$ and $V = \begin{bmatrix} (1+i)/\sqrt{3} & (1+i)/\sqrt{6} \\ i/\sqrt{3} & -2i/\sqrt{6} \end{bmatrix}$ unitary?

Solution: Observe that
$$\left\langle \begin{bmatrix} 1 \\ i \end{bmatrix}, \begin{bmatrix} 1 \\ i \end{bmatrix} \right\rangle = 1(1) + i(-i) = 2$$
Hence, $\begin{bmatrix} 1 \\ i \end{bmatrix}$ is not a unit vector. Thus, U is not unitary since its columns are not orthonormal.
Section 9.5 Exercises 431

EXAMPLE 5 (continued)
For V, we have $V^* = \begin{bmatrix} (1-i)/\sqrt{3} & -i/\sqrt{3} \\ (1-i)/\sqrt{6} & 2i/\sqrt{6} \end{bmatrix}$, so
$$V^*V = \begin{bmatrix} \frac{1}{3}(2+1) & \frac{1}{3\sqrt{2}}(2-2) \\ \frac{1}{3\sqrt{2}}(2-2) & \frac{1}{6}(2+4) \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$
Thus, V is unitary.

EXERCISE 3 Determine if either of the following matrices is unitary.

A= [ 2+i
Y6
1 ] =
[ lY}+i
1 2+1
V6. , B
- Y6 Y6 Y6

PROBLEMS 9.5
Practice Problems

A1 Use the standard inner product in $\mathbb{C}^n$ to calculate $\langle u, v\rangle$, $\langle v, u\rangle$, $\|u\|$, and $\|v\|$.
(d) D = [(-11;V3
+ i)/-f3 (1 -i)/../6]
2/../6
it=[ � � ;�] . = [2-_ �i]
it = [�]
(a) i1
i
_

(b) it=[-��� ].v=[13:;i] A3 (a) Verify that ii is orthogonal to ii if and

it= [1 � il [� � �]
ii= [i
(c) v =

it=[ _\+_ 2�J v= [--�i]


= [� : �1
(d)
(b) Determine the projection of w onto
+
A = Is [1 � i 1 � i] it 3
A2 Are the following matrices unitary? l

(a) the subspace of C3 spanned by and v.

A4 (a) Prove that any unitary matrix U satisfies $|\det U| = 1$. (Hint: See Problem D3.)
(b) Give a 2 x 2 unitary matrix U such that $\det U \neq \pm 1$.

Homework Problems
Bl Use the standard inner product in C" to calculate
(it, v), (v,-it), l itl , l vl .
and

(a) it=[ �:�lv= [::�]


(b) it= [13:Ll v= [� : �]

B2 Are the following matrices unitary?


(a) A= vs -2 1 2
1

1
[ ] (b) Determine the projection of w = [;1 � �] onto
- 2i
(b) B = [� �] the subspace of C3 spanned by a and v.
B4 Consider C3 with its standard inner product. Let
l (1+ i)/--./7 5 /Y35 ]
� � � ] and
[-l+i [-�-1�i].
(c) C=
-

(1+ 2i)/--./7 (3+ i)/Y35 v1 v2 = =


Cd) D= l(1+ i) I Y6 (1+ i) IY3 ] =

.
(a) Find an orthonormal basis for Span
2i/.../6 i/Y3 §
{v1, v2}.
1+ i
B3 (a) Verify that a is orthogonal to v if i1 = [ i
� (b) Determine prnj, i1 where it = m
and v = [1�- l .
i

Conceptual Problems

D1 Prove property (5) of Theorem 1.
D2 Prove Theorem 2.
D3 Prove that for any n x n matrix A, $\det \overline{A} = \overline{\det A}$.
D4 Prove that if A and B are unitary, then AB is unitary.
D5 (a) Show that if U is unitary, then $\|Uz\| = \|z\|$ for all $z \in \mathbb{C}^n$.
(b) Show that if U is unitary, all of its eigenvalues satisfy $|\lambda| = 1$.
(c) Give a 2 x 2 unitary matrix such that none of its eigenvalues are real.
D6 Prove that all eigenvalues of a real symmetric matrix are real.

9.6 Hermitian Matrices and


Unitary Diagonalization
In Section 8.1, we saw that every real symmetric matrix is orthogonally diagonalizable. It is natural to ask if there is a comparable result in the case of matrices with complex entries.
First, we observe that if A is a real matrix, then $\overline{A} = A$, so for real matrices the condition $A^T = A$ is equivalent to $A^* = A$. Hence, the condition $A^* = A$ should take the place of the condition $A^T = A$.

Definition
Hermitian Matrix
An n x n matrix A with complex entries is called Hermitian if $A^* = A$ or, equivalently, if $\overline{A} = A^T$.
Section 9.6 Hermitian Matrices and Unitary Diagonalization 433

EXAMPLE 1
Which of the following matrices are Hermitian?
$$A = \begin{bmatrix} 2 & 3-i \\ 3+i & 4 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 2i \\ -2i & 3-i \end{bmatrix}, \quad C = \begin{bmatrix} 0 & -i & -i \\ i & 0 & -i \\ i & -i & 0 \end{bmatrix}$$
Solution: We have $\overline{A} = \begin{bmatrix} 2 & 3+i \\ 3-i & 4 \end{bmatrix} = A^T$, so A is Hermitian.
$\overline{B} = \begin{bmatrix} 1 & -2i \\ 2i & 3+i \end{bmatrix} \neq B^T$, so B is not Hermitian.
$\overline{C} \neq C^T$, so C is not Hermitian.

Observe that if A is Hermitian, then we have $a_{ii} = \overline{a_{ii}}$, so the diagonal entries of A must be real, and for $i \neq j$ the ij-th entry must be the complex conjugate of the ji-th entry.
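The entrywise characterization above translates directly into a small test. This sketch is our own; the two sample matrices are A and B of Example 1 as we read them from this copy.

```python
def is_hermitian(M, tol=1e-12):
    # M is Hermitian iff every entry equals the conjugate of its mirror entry
    n = len(M)
    return all(abs(M[i][j] - M[j][i].conjugate()) < tol
               for i in range(n) for j in range(n))

print(is_hermitian([[2, 3 - 1j], [3 + 1j, 4]]))  # True
print(is_hermitian([[1, 2j], [-2j, 3 - 1j]]))    # False: 3 - i on the diagonal
```

The second matrix fails precisely because a diagonal entry is not real, matching the observation above.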

Theorem 1
An n x n matrix A is Hermitian if and only if for all $z, w \in \mathbb{C}^n$, we have
$$\langle z, Aw\rangle = \langle Az, w\rangle$$

Proof: If A is Hermitian, then we get
$$\langle z, Aw\rangle = z^T\overline{Aw} = z^T\overline{A}\,\overline{w} = z^TA^T\overline{w} = (Az)^T\overline{w} = \langle Az, w\rangle$$
If $\langle z, Aw\rangle = \langle Az, w\rangle$ for all $z, w \in \mathbb{C}^n$, then we have
$$z^T\overline{A}\,\overline{w} = z^TA^T\overline{w}$$
Since this is valid for all $z, w \in \mathbb{C}^n$, we have that $\overline{A} = A^T$. Thus, A is Hermitian. ∎

Remark

A linear operator $L : V \to V$ is called Hermitian if $\langle x, L(y)\rangle = \langle L(x), y\rangle$ for all $x, y \in V$. A linear operator is Hermitian if and only if its matrix with respect to any orthonormal basis of V is a Hermitian matrix. Hermitian linear operators play an important role in quantum mechanics.

Theorem 2
Suppose that A is an n x n Hermitian matrix. Then
(1) All eigenvalues of A are real.
(2) Eigenvectors corresponding to distinct eigenvalues are orthogonal to each other.

Proof: To prove (1), suppose that $\lambda$ is an eigenvalue of A with corresponding unit eigenvector $z$. Then
$$\langle z, Az\rangle = \langle z, \lambda z\rangle = \overline{\lambda}\langle z, z\rangle = \overline{\lambda} \quad\text{and}\quad \langle Az, z\rangle = \langle \lambda z, z\rangle = \lambda\langle z, z\rangle = \lambda$$
But, since A is Hermitian, we have $\langle z, Az\rangle = \langle Az, z\rangle$, so $\overline{\lambda} = \lambda$. Thus, $\lambda$ must be real.
To prove (2), suppose that $\lambda_1$ and $\lambda_2$ are distinct eigenvalues of A with corresponding eigenvectors $z_1, z_2$. Then
$$\langle z_1, Az_2\rangle = \langle z_1, \lambda_2 z_2\rangle = \overline{\lambda_2}\langle z_1, z_2\rangle = \lambda_2\langle z_1, z_2\rangle \quad\text{and}\quad \langle Az_1, z_2\rangle = \lambda_1\langle z_1, z_2\rangle$$
Since A is Hermitian, we get $\lambda_2\langle z_1, z_2\rangle = \lambda_1\langle z_1, z_2\rangle$. Thus, since $\lambda_1 \neq \lambda_2$, we must have $\langle z_1, z_2\rangle = 0$, as required. ∎
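Both conclusions of Theorem 2 can be observed numerically for a 2 x 2 Hermitian matrix. The sketch below is our own; it uses the matrix of Example 2 of this section as we read it from this copy, and the eigenvector formula $(b, \lambda - a)$ is a standard 2 x 2 shortcut, not the text's method.

```python
import cmath

# A = [[a, b], [c, d]] with c = conj(b): the Hermitian matrix of Example 2
a, b, c, d = 2, 1 + 1j, 1 - 1j, 3

# characteristic polynomial: x^2 - (a+d)x + (ad - bc)
tr, det = a + d, a * d - b * c
disc = cmath.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
print(lam1, lam2)                        # 4 and 1: both real

# eigenvectors (b, lam - a) for each eigenvalue (valid whenever b != 0)
z1, z2 = [b, lam1 - a], [b, lam2 - a]
ip = z1[0] * z2[0].conjugate() + z1[1] * z2[1].conjugate()
print(abs(ip))                           # ~0: the eigenvectors are orthogonal
```

The imaginary parts of both eigenvalues vanish, and the inner product of the two eigenvectors is zero, exactly as the theorem predicts.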

From this result, we expect to get something very similar to the principal axis
theorem for Hermitian matrices. Let's consider an example.

EXAMPLE 2
Let $A = \begin{bmatrix} 2 & 1+i \\ 1-i & 3 \end{bmatrix}$. Verify that A is Hermitian and diagonalize A.

Solution: We have $A^* = A$, so A is Hermitian. Consider $A - \lambda I = \begin{bmatrix} 2-\lambda & 1+i \\ 1-i & 3-\lambda \end{bmatrix}$. Then the characteristic polynomial is $C(\lambda) = \lambda^2 - 5\lambda + 4 = (\lambda - 4)(\lambda - 1)$, so $\lambda = 4$ or $\lambda = 1$.
For $\lambda = 4$,
$$A - \lambda I = \begin{bmatrix} -2 & 1+i \\ 1-i & -1 \end{bmatrix} \sim \begin{bmatrix} 1-i & -1 \\ 0 & 0 \end{bmatrix}$$
Thus, a corresponding eigenvector is $z_1 = \begin{bmatrix} 1+i \\ 2 \end{bmatrix}$.
If $\lambda = 1$,
$$A - \lambda I = \begin{bmatrix} 1 & 1+i \\ 1-i & 2 \end{bmatrix} \sim \begin{bmatrix} 1 & 1+i \\ 0 & 0 \end{bmatrix}$$
Thus, a corresponding eigenvector is $z_2 = \begin{bmatrix} 1+i \\ -1 \end{bmatrix}$. Hence, A is diagonalized to $\begin{bmatrix} 4 & 0 \\ 0 & 1 \end{bmatrix}$ by $Q = \begin{bmatrix} 1+i & 1+i \\ 2 & -1 \end{bmatrix}$.

Observe in Example 2 that since the columns of Q are orthogonal, we can make Q unitary by normalizing the columns. Hence, we have that the Hermitian matrix A is diagonalized by a unitary matrix. We can prove that we can do this for any Hermitian matrix.
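The unitary diagonalization can be verified by computing $U^*AU$ directly. This is our own numerical check, using A and the normalized diagonalizing matrix U as we read them from Examples 2 and 3 of this section.

```python
import math

def ctrans(M):
    return [[M[i][j].conjugate() for i in range(len(M))] for j in range(len(M[0]))]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

A = [[2, 1 + 1j], [1 - 1j, 3]]
s6, s3 = 1 / math.sqrt(6), 1 / math.sqrt(3)
U = [[(1 + 1j) * s6, (1 + 1j) * s3],
     [2 * s6, -s3]]

D = matmul(matmul(ctrans(U), A), U)
print(round(D[0][0].real, 6), round(D[1][1].real, 6))  # 4.0 1.0
print(max(abs(D[0][1]), abs(D[1][0])) < 1e-9)          # True: off-diagonal ~0
```

Up to floating-point rounding, $U^*AU$ is the diagonal matrix with the eigenvalues 4 and 1 on the diagonal.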
Section 9.6 Exercises 435

Theorem 3 (Spectral Theorem for Hermitian Matrices)
Suppose that A is an n x n Hermitian matrix. Then there exist a unitary matrix U and a diagonal matrix D such that $U^*AU = D$.

The proof is essentially the same as the proof of the Principal Axis Theorem, with appropriate changes to allow for complex numbers. You are asked to prove the theorem as Problem D5.

Remarks

1. If A and B are matrices such that $B = U^*AU$ for some unitary matrix U, we say that A and B are unitarily similar. If B is diagonal, then we say that A is unitarily diagonalizable.

2. Unlike the Principal Axis Theorem, the converse of the Spectral Theorem for Hermitian Matrices is not true. That is, there exist matrices that are unitarily diagonalizable but not Hermitian.

EXAMPLE 3
The matrix $A = \begin{bmatrix} 2 & 1+i \\ 1-i & 3 \end{bmatrix}$ from Example 2 is Hermitian and hence unitarily diagonalizable. In particular, by normalizing the columns of Q, we find that A is unitarily diagonalized by
$$U = \begin{bmatrix} (1+i)/\sqrt{6} & (1+i)/\sqrt{3} \\ 2/\sqrt{6} & -1/\sqrt{3} \end{bmatrix}$$

PROBLEMS 9.6
Practice Problems

Al For each of the following matrices,


(c) C [ V:+ i
(i) Determine whether it is Hermitian.

1 +i
=

+i
(ii) If it is Hermitian, unitarily diagonalize it.

(a) A= 1 i 0
l l

4 L
[
0 1 +i
_r,; . ../2 (d) F = -

y 2- 2
i
]
i .../3 + i
]
5
[ ..j2 + ../2
-

(b) B =

Homework Problems

Bl For each of the following matrices, 5


(c) C i i
(i) Determine whether it is Hermitian.
(ii) If it is Hermitian, unitatily diagonalize it.
=

[ ../3 _

../32+ ]
i
(a) A =

[-f2. + i
2 Y2 +
..f3 ] (d) F" H �'. -�]
5 Y2 - ]
+i i
(b) B =
[ -f2.
3

Conceptual Problems

D1 Suppose that A and B are n x n Hermitian matrices and that A is invertible. Determine which of the following are Hermitian.
(a) AB
(b) $A^2$
(c) $A^{-1}$
D2 Prove (without appealing to diagonalization) that if A is Hermitian, then det A is real.
D3 A general 2 x 2 Hermitian matrix can be written as $A = \begin{bmatrix} a & b+ci \\ b-ci & d \end{bmatrix}$, $a, b, c, d \in \mathbb{R}$.
(a) What can you say about a, b, c, and d if A is unitary as well as Hermitian?
(b) What can you say about a, b, c, and d if A is Hermitian, unitary, and diagonal?
(c) What can you say about the form of a 3 x 3 matrix that is Hermitian, unitary, and diagonal?
D4 Let V be a complex inner product space. Prove that a linear operator $L : V \to V$ is Hermitian ($\langle x, L(y)\rangle = \langle L(x), y\rangle$) if and only if its matrix with respect to any orthonormal basis of V is a Hermitian matrix.
D5 Prove the Spectral Theorem for Hermitian Matrices.

CHAPTER REVIEW
Suggestions for Student Review

1 What is the complex conjugate of a complex number? List some properties of the complex conjugate. How does the complex conjugate relate to division of complex numbers? How does it relate to the length of a complex number? (Section 9.1)
2 Define the polar form of a complex number. Explain how to convert a complex number from standard form to polar form. Is the polar form unique? How does the polar form relate to Euler's Formula? (Section 9.1)
3 List some of the similarities and some of the differences between complex vector spaces and real vector spaces. Discuss the differences between viewing $\mathbb{C}$ as a complex vector space and as a real vector space. (Section 9.3)
4 Explain how diagonalization of matrices over $\mathbb{C}$ differs from diagonalization over $\mathbb{R}$. (Section 9.4)
5 Define the real canonical form of a real matrix A. In what situations would we find the real canonical form of A instead of diagonalizing A? (Section 9.4)
6 Discuss the standard inner product in $\mathbb{C}^n$. How are the essential properties of an inner product modified in generalizing from the real case to the complex case? (Section 9.5)
7 Define the conjugate transpose of a matrix. List some similarities between the conjugate transpose of a complex matrix and the transpose of a real matrix. (Section 9.5)
8 What is a Hermitian matrix? State what you can say about diagonalizing a Hermitian matrix. (Section 9.6)

Chapter Quiz

El Let z1 1 - Y3i and z2


= = 2 + 2i. Use polar form to (a) Determine a diagonal matrix similar to A over
. Z1
determme z1z2 and -.
C and give a diagonalizing matrix P.
Z2 (b) Determine a real canonical form of A and the
E2 Use polar form to determine all values of (i)112. corresponding change of coordinates matrix P.

E3 Let ii � [ 3 � ;] and V � [4 � J Calculate the fol­


ES Prove that U -
- ...L
V3
[1 -

1
i -il
-l+i
is a unitary

l
matrix.
lowing.
E6 Let A= [� 3 � ki
(a) 2it + (1 + i)V (b) it i
3
(c) (it, v) Cd) <v,it> (a) Determine k such that A is Hermitian.
(e) llVll (f) proja v (b) With the value of k as determined in part (a),

E4 Let A= [� 1
-
13
4 ·
find the eigenvalues of A and corresponding
eigenvectors. Verify that the eigenvectors are
orthogonal.

Further Problems

F1 Suppose that A is a Hermitian matrix with all non-negative eigenvalues. Prove that A has a square root. That is, show that there is a Hermitian matrix B such that $B^2 = A$. (Hint: Suppose that U diagonalizes A to D so that $U^*AU = D$. Define C to be a square root for D and let $B = UCU^*$.)
F2 (a) If A is any n x n matrix, prove that $A^*A$ is Hermitian and has all non-negative real eigenvalues.
(b) If A is invertible, prove that $A^*A$ has all positive real eigenvalues.
F3 A matrix A is said to be unitarily triangularizable if there exists a unitary matrix U and an upper triangular matrix T such that $U^*AU = T$. By adapting the proof of the Principal Axis Theorem (or the Spectral Theorem for Hermitian Matrices), prove that every n x n matrix A is unitarily triangularizable. (This is called Schur's Theorem.)
F4 A matrix is said to be normal if $A^*A = AA^*$.
(a) Show that every unitarily diagonalizable matrix is normal.
(b) Use (a) to show that if A is normal, then A is unitarily similar to an upper triangular matrix T and that T is normal.
(c) Prove that every upper triangular normal matrix is diagonal and hence conclude that every normal matrix is unitarily diagonalizable. (This is often called the Spectral Theorem for Normal Matrices.)

MyMathlab Go to MyMathLab at www.mymathlab.com. You can practise many of this chapter's exercises as often as you
want. The guided solutions help you find an answer step by step. You'll find a personalized study plan available to
you, too!
APPENDIX A

Answers to Mid-Section
Exercises
CHAPTER 1

Section 1.1

1.(a
) [�] (b) [=�J (c) [ =i J

_,
X2 X2
w
i1 + w
Xt Xt
ca+ w) - v

2. 1 = t [-n t E IR
3. 1 =
[ �] +t [-n t E IR
4. 1= [�]+1[-H IER
Section 1.2

1. (5) Let $x = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}$. Then $-x = (-1)x = \begin{bmatrix} -x_1 \\ \vdots \\ -x_n \end{bmatrix} \in \mathbb{R}^n$ since $x + (-x) = \begin{bmatrix} x_1 + (-x_1) \\ \vdots \\ x_n + (-x_n) \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix} = 0$.
(6) $tx = \begin{bmatrix} tx_1 \\ \vdots \\ tx_n \end{bmatrix} \in \mathbb{R}^n$ since $tx_i \in \mathbb{R}$ for $1 \le i \le n$.
(7) $s(tx) = s\begin{bmatrix} tx_1 \\ \vdots \\ tx_n \end{bmatrix} = \begin{bmatrix} stx_1 \\ \vdots \\ stx_n \end{bmatrix} = (st)x$.

2. By definition, S is a subset of $\mathbb{R}^2$. We have that $\begin{bmatrix} 0 \\ 0 \end{bmatrix} \in S$ since $2(0) = 0$. Let $x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$ and $y = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}$ be vectors in S. Then $2x_1 = x_2$ and $2y_1 = y_2$. We find that $x + y = \begin{bmatrix} x_1 + y_1 \\ x_2 + y_2 \end{bmatrix} \in S$ since $2(x_1 + y_1) = 2x_1 + 2y_1 = x_2 + y_2$. Similarly, $tx = \begin{bmatrix} tx_1 \\ tx_2 \end{bmatrix} \in S$ since $2(tx_1) = t(2x_1) = tx_2$. Hence, S is a subspace.
T is not a subspace since $\begin{bmatrix} 0 \\ 0 \end{bmatrix}$ is not in T.

This gives $c_1 + c_3 = 0$, $c_2 + c_3 = 0$, and $c_1 + c_2 = 0$. The first two equations imply that $c_1 = c_2$. Putting this into the third equation gives $c_1 = c_2 = 0$. Then, from the first equation, we get $c_3 = 0$. Hence, the only solution is $c_1 = c_2 = c_3 = 0$, so the set is linearly independent.


4. The standard basis for $\mathbb{R}^5$ is $\left\{ \begin{bmatrix}1\\0\\0\\0\\0\end{bmatrix}, \begin{bmatrix}0\\1\\0\\0\\0\end{bmatrix}, \begin{bmatrix}0\\0\\1\\0\\0\end{bmatrix}, \begin{bmatrix}0\\0\\0\\1\\0\end{bmatrix}, \begin{bmatrix}0\\0\\0\\0\\1\end{bmatrix} \right\}$.
Consider
$$\begin{bmatrix}x_1\\x_2\\x_3\\x_4\\x_5\end{bmatrix} = t_1\begin{bmatrix}1\\0\\0\\0\\0\end{bmatrix} + t_2\begin{bmatrix}0\\1\\0\\0\\0\end{bmatrix} + t_3\begin{bmatrix}0\\0\\1\\0\\0\end{bmatrix} + t_4\begin{bmatrix}0\\0\\0\\1\\0\end{bmatrix} + t_5\begin{bmatrix}0\\0\\0\\0\\1\end{bmatrix} = \begin{bmatrix}t_1\\t_2\\t_3\\t_4\\t_5\end{bmatrix}$$
Thus, for every $x \in \mathbb{R}^5$ we have a solution $t_i = x_i$ for $1 \le i \le 5$. Hence, the set is a spanning set for $\mathbb{R}^5$. Moreover, if we take $x = 0$, we get that the only solution is $t_i = 0$ for $1 \le i \le 5$, so the set is also linearly independent.

Section 1.3

1. $\cos\theta = \dfrac{v \cdot w}{\|v\|\,\|w\|} = 0$, so $\theta = \dfrac{\pi}{2}$ rads.
2. $\|x\| = \sqrt{1 + 4 + 1} = \sqrt{6}$ and $\|y\| = 1$.
3. By definition, $\hat{x}$ is a scalar multiple of $x$, so it is parallel. Using Theorem 1.3.2 (2), we get
$$\|\hat{x}\| = \left\|\frac{1}{\|x\|}x\right\| = \left|\frac{1}{\|x\|}\right|\|x\| = \frac{\|x\|}{\|x\|} = 1$$
4. $x_1 - 3x_2 - 2x_3 = 1(1) + (-3)(2) + (-2)(3) = -11$

Section 1.4

1 2
1. projil it =
it. v
llvl l2
v =
8
9
[ �] ,
pro
ja
v=
v. it
ll ll2 it
it
=
8 -
17
[ ;1

2. proj il z1= ��v



ll ll
v=
1
1
4
[i].
2
perpil z1= z1- proj il z1= [-�]-[ij�:i [-��j�:i
O 2 /14
=
-1/7

3. $\operatorname{proj}_x(y + z) = \dfrac{(y+z)\cdot x}{\|x\|^2}\,x = \dfrac{y\cdot x}{\|x\|^2}\,x + \dfrac{z\cdot x}{\|x\|^2}\,x = \operatorname{proj}_x y + \operatorname{proj}_x z$
$\operatorname{proj}_x(ty) = \dfrac{(ty)\cdot x}{\|x\|^2}\,x = t\,\dfrac{y\cdot x}{\|x\|^2}\,x = t\operatorname{proj}_x y$

Section 1.5

1. $w \cdot u = (u_2v_3 - u_3v_2)u_1 + (u_3v_1 - u_1v_3)u_2 + (u_1v_2 - u_2v_1)u_3 = 0$
$w \cdot v = (u_2v_3 - u_3v_2)v_1 + (u_3v_1 - u_1v_3)v_2 + (u_1v_2 - u_2v_1)v_3 = 0$

2
• HH�l [=rn =

3. The six cross-products are easily checked.

4. Area = llil x VII= [=�l = "9= 3

5. The direction vector is J= [=�HJ m = Put X3 = 0 and


ili en so Ive

2x1 + x2= 1.

[ ! ] [H
-xi- 2x2= -2 and This gives x1= 0 and x2= 1. So, a vector

equation of the line of intersection is ii= lit


E
+t t

CHAPTER 2

Section 2.1

1. Add (-1/2) times the first equation to the second equation:
$$2x_1 + 4x_2 + 0x_3 = 12$$
$$-x_3 = -2$$
Since $x_2$ does not appear as a leading 1, it is a free variable. Thus, we let $x_2 = t \in \mathbb{R}$. Then we rewrite the first equation as
$$x_1 = \frac{1}{2}(12 - 4x_2) = 6 - 2t$$
Thus, the general solution is
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 6-2t \\ t \\ 2 \end{bmatrix} = \begin{bmatrix} 6 \\ 0 \\ 2 \end{bmatrix} + t\begin{bmatrix} -2 \\ 1 \\ 0 \end{bmatrix}, \quad t \in \mathbb{R}$$

[7 � �1 ] [� � � 12� ]
12
2• - 4 R2 - �R1 � -

Section 2.2

[
1. For Example 2.1.9, we get

1
0
2
1
-

-3
4 0
1
11
-1
l -[ � 2 -4
-3
0

l
11
-1 R2 - R3

[
0

1
0

2
0

-4
-1

0
4

11 l (-l)R3

R1 - 2R2 [- 1
0

0
0

2
1

0
-4

0
0
1
0
-3
0
0 3
-4
0
0 0
-3
0
0
j]
l
For Example 2.1.12, we get

[� 1
1
0
180
60 -[ � 1 0
180
60
l
R1 - R,

0 4 80 � R3
l 0 1 20

l
l
0 160 R1 - Ri 1 0 0 100

[� 1
0
0 60
20
� 0
0 0
0 60
20

2. (a) rank(A) = 2 (b) rank(B) = 2

3. We write the reduced row echelon form of the coefficient matrix as a homogeneous system. We get
$$x_1 + x_4 + 2x_5 = 0$$
$$x_2 + x_3 + x_5 = 0$$
We see that $x_3$, $x_4$, and $x_5$ are free variables, so we let $x_3 = t_1$, $x_4 = t_2$, and $x_5 = t_3$. Then, we rewrite the equations as
$$x_1 = -t_2 - 2t_3$$
$$x_2 = -t_1 - t_3$$
Hence, the general solution is
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{bmatrix} = \begin{bmatrix} -t_2 - 2t_3 \\ -t_1 - t_3 \\ t_1 \\ t_2 \\ t_3 \end{bmatrix} = t_1\begin{bmatrix} 0 \\ -1 \\ 1 \\ 0 \\ 0 \end{bmatrix} + t_2\begin{bmatrix} -1 \\ 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} + t_3\begin{bmatrix} -2 \\ -1 \\ 0 \\ 0 \\ 1 \end{bmatrix}, \quad t_1, t_2, t_3 \in \mathbb{R}$$

Section 2.3

!. Considert1
[=�l Hl {�] m
+
,,

Simplifying and comparing entries gives the system


+
=

t1 + 2t2 - 2t3 = 1

-3ti - 2t2 + 2t3 = 3


-3t1 + t2 - 3t3 = 1

Solving the system by row reducing the augmented matrix, we find that it is
consistent so that v is in the span. In particular, we find that

[ l 2 2
C-2) =;l+ I-�l+ i--�l= I �l
19
4 13
4
l

2. Consider the corresponding vector equation. Simplifying and comparing entries gives the homogeneous system
$$-t_1 - 2t_2 + t_3 = 0$$
$$t_1 - 3t_2 - t_3 = 0$$
$$-3t_1 - 3t_2 + 3t_3 = 0$$
Row reducing the coefficient matrix, we find that the rank of the coefficient matrix is less than the number of variables. Thus, there is at least one parameter. Hence, there are infinitely many solutions. Therefore, the set is linearly dependent.

CHAPTER 3

Section 3.1

1. Setting a linear combination of the four matrices in $\mathcal{B}$ equal to the zero matrix and comparing entries, we get the homogeneous system
$$t_1 + t_2 + 3t_3 = 0$$
$$2t_1 + t_2 + 5t_3 - t_4 = 0$$
$$t_1 + 3t_2 + 5t_3 - 2t_4 = 0$$
$$t_1 + t_2 + 3t_3 = 0$$
Using the methods of Chapter 2, we find that this system has infinitely many solutions, and hence $\mathcal{B}$ is linearly dependent.
Similar to our work above, to determine whether $X \in \operatorname{Span}\mathcal{B}$, we solve the system
$$t_1 + t_2 + 3t_3 = 1$$
$$2t_1 + t_2 + 5t_3 - t_4 = 5$$
$$t_1 + 3t_2 + 5t_3 - 2t_4 = -5$$
$$t_1 + t_2 + 3t_3 = 1$$
Using the methods of Chapter 2, we find that this system is consistent and hence X is in the span of $\mathcal{B}$.

2. For any 2 x 2 matrix $\begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix}$, we have
$$x_1\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + x_2\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} + x_3\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} + x_4\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix}$$
Hence, $\mathcal{B}$ spans all 2 x 2 matrices. Moreover, if we set this combination equal to the zero matrix, then the only solution is the trivial solution $x_1 = x_2 = x_3 = x_4 = 0$, so the set is linearly independent.

3. AT= [� -�1 and (A')T= [-�


3AT= [� ;]
3
-

15
=(3Al

4. (a) AB is not defined since A has three columns and B has two rows.

(b) BA = [4
1
7 -
1
2 -1
]
(c) ATA = [l i] 13
8

(d ) BET=
[;
�]
Section 3.2

1. fA(l,0) = [n fA(O,1) = [-n fA(2,3) = [�]


We have fA(2,3)=2fA(l,0) + 3fA(O,1).

2. f,(-1,1,1,0) = [�] ,f,(-3,1,0,1) = [ �]


3. (a) f is not linear: $f(1,0) + f(2,0) \neq f(3,0)$.
(b) g is linear since for all $x, y \in \mathbb{R}^2$ and $t \in \mathbb{R}$, we have
$$g(tx + y) = g(tx_1 + y_1, tx_2 + y_2) = \begin{bmatrix} tx_2 + y_2 \\ (tx_1 + y_1) - (tx_2 + y_2) \end{bmatrix} = t\begin{bmatrix} x_2 \\ x_1 - x_2 \end{bmatrix} + \begin{bmatrix} y_2 \\ y_1 - y_2 \end{bmatrix} = tg(x) + g(y)$$
4. H is linear since for all $x, y \in \mathbb{R}^4$ and $t \in \mathbb{R}$, we have
$$H(tx + y) = H(tx_1 + y_1, tx_2 + y_2, tx_3 + y_3, tx_4 + y_4) = \begin{bmatrix} tx_3 + y_3 + tx_4 + y_4 \\ tx_1 + y_1 \end{bmatrix} = t\begin{bmatrix} x_3 + x_4 \\ x_1 \end{bmatrix} + \begin{bmatrix} y_3 + y_4 \\ y_1 \end{bmatrix} = tH(x) + H(y)$$

By definition of the standard matrix, we get

5. Let $x, y \in \mathbb{R}^n$ and $t \in \mathbb{R}$. Then, since L and M are linear, we get
$$(M \circ L)(tx + y) = M(L(tx + y)) = M(tL(x) + L(y)) = tM(L(x)) + M(L(y)) = t(M \circ L)(x) + (M \circ L)(y)$$
Hence $(M \circ L)$ is linear.



Section 3.3

2. The matrix of the reflection over the $x_1x_3$-plane is $\begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$.
The matrix of the reflection over the $x_2x_3$-plane is $\begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$.

Section 3.4
-2 -4
-3 -5
1. The solution space is Span 1 0
0 -6
0

2. If $x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \in \operatorname{Null}(L)$, then we must have $x_1 - x_2 = 0$ and $-2x_1 + 2x_2 + x_3 = 0$. Solving this homogeneous system, we find that $\operatorname{Null}(L) = \operatorname{Span}\left\{\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}\right\}$.
7 -2 -4
8 -3 -5
3. The general solution is x = 0 +s 1 +t 0, s,t E R
9 0 -6
0 0

4. We have

Range(L) = X1 [-�J [-�J [�J


+X
2
+ X3

Thus,

Range(L) = Span {[ �J, [-�J, [�]} {[ �J, [�]}


-
=Span
- = �2

-1
5. [L] = [-� 2 �l So Col(L) =Span {[ �] [-�], [�]}
-
, = Range(L).

2 1
3
5 0 1 o·
0 2
1

1
1 3 5 0 0
6. Row reducing A to RREF gives

1 3 0 0 0

Thus, a basis for Row(A) is


{[�l [m ·
446 Appendix A Answers to Mid-Section Exercises

7. Row reducing A to RREF gives [-1� -12 1� -3� -2_;]- [� � 1 -1 1� ]· 0 0 0 0

Thus, a basis for coI(Al is {[_ :J. [-i] [=m ·

-1 1
21 -12 -321 - 1 1
0 0
0 0
8. Row reducing AT to RREF gives 0 0 0

3 -3 -2
0 0
0
0
0
0
0

Thus, a basis for Row(Arl "coI(Al is { [�l [!] [m · ·

9. Row reducing A to RREF gives

[21 113 -2-3 13] [1 1 -2-1 1l


0
-8 4 - 0
0
o

0 0
o
0

Thus, we obtain bases for Row(A), Col(A), and Null(A), respectively. Thus, rank(A) = dim Row(A) = 3 and nullity(A) = 1, so rank(A) + nullity(A) = 3 + 1 = 4, as required.

Section 3.5

1. We can row reduce A to the identity matrix, so it is invertible. In particular, we have
$$\begin{bmatrix} 2 & 3 & 1 & 0 \\ 4 & 5 & 0 & 1 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -5/2 & 3/2 \\ 0 & 1 & 2 & -1 \end{bmatrix}, \quad \text{so } A^{-1} = \begin{bmatrix} -5/2 & 3/2 \\ 2 & -1 \end{bmatrix}$$
2. We have
$$\begin{bmatrix} 1 & 3 & 1 & 0 \\ -2 & 1 & 0 & 1 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 1/7 & -3/7 \\ 0 & 1 & 2/7 & 1/7 \end{bmatrix}$$
Thus, the solution to $Ax = b$ is $x = A^{-1}b$.
3. (a) Let R denote the reflection over the line $x_2 = x_1$. Then R is its own inverse. We have $[R] = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$, and we find that $[R][R] = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$, as required.
(b) Let S denote the shear in the $x_1$-direction by a factor of t. Then $S^{-1}$ is a shear in the $x_1$-direction by a factor of $-t$. We get $[S][S^{-1}] = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & -t \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$, as required.

Section 3.6

1. Let $E = \begin{bmatrix} 1 & 0 & 0 \\ 0 & k & 0 \\ 0 & 0 & 1 \end{bmatrix}$. Then
$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \xrightarrow{\,kR_2\,} \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ ka_{21} & ka_{22} & ka_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$
while
$$EA = \begin{bmatrix} 1 & 0 & 0 \\ 0 & k & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ ka_{21} & ka_{22} & ka_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$
as required.
2. We row reduce A to its reduced row echelon form R and find that $E_5E_4E_3E_2E_1A = R$, where $E_1, \ldots, E_5$ are the elementary matrices corresponding to the row operations performed. (Alternative solutions are possible.)

Section 3.7

1. We row reduce and get
$$\begin{bmatrix} -1 & 1 & 2 \\ 4 & -1 & -3 \\ -3 & -3 & 1 \end{bmatrix} \begin{matrix} \\ R_2 + 4R_1 \\ R_3 - 3R_1 \end{matrix} \sim \begin{bmatrix} -1 & 1 & 2 \\ 0 & 3 & 5 \\ 0 & -6 & -5 \end{bmatrix} \Rightarrow L = \begin{bmatrix} 1 & 0 & 0 \\ -4 & 1 & 0 \\ 3 & * & 1 \end{bmatrix}$$
$$\begin{bmatrix} -1 & 1 & 2 \\ 0 & 3 & 5 \\ 0 & -6 & -5 \end{bmatrix} \begin{matrix} \\ \\ R_3 + 2R_2 \end{matrix} \sim \begin{bmatrix} -1 & 1 & 2 \\ 0 & 3 & 5 \\ 0 & 0 & 5 \end{bmatrix} \Rightarrow L = \begin{bmatrix} 1 & 0 & 0 \\ -4 & 1 & 0 \\ 3 & -2 & 1 \end{bmatrix}$$
Therefore, we have
$$A = LU = \begin{bmatrix} 1 & 0 & 0 \\ -4 & 1 & 0 \\ 3 & -2 & 1 \end{bmatrix}\begin{bmatrix} -1 & 1 & 2 \\ 0 & 3 & 5 \\ 0 & 0 & 5 \end{bmatrix}$$
2. (a) We have $A = LU$ as above. Write $Ax = b$ as $LUx = b$ and take $y = Ux$. Writing out the system $Ly = b$, we get
$$y_1 = 3$$
$$-4y_1 + y_2 = 2$$
$$3y_1 - 2y_2 + y_3 = 6$$
Using forward substitution, we find that $y_1 = 3$, $y_2 = 2 + 4(3) = 14$, and $y_3 = 6 - 3(3) + 2(14) = 25$. Hence $y = \begin{bmatrix} 3 \\ 14 \\ 25 \end{bmatrix}$.
Thus, our system $Ux = y$ is
$$-x_1 + x_2 + 2x_3 = 3$$
$$3x_2 + 5x_3 = 14$$
$$5x_3 = 25$$
Using back-substitution, we get $x_3 = 5$, $3x_2 = 14 - 5(5) \Rightarrow x_2 = -\frac{11}{3}$, and $-x_1 = 3 + \frac{11}{3} - 2(5) \Rightarrow x_1 = \frac{10}{3}$.
Thus, the solution is $x = \begin{bmatrix} 10/3 \\ -11/3 \\ 5 \end{bmatrix}$.
(b) Write $Ax = b$ as $LUx = b$ and take $y = Ux$. Writing out the system $Ly = b$, we get
$$y_1 = 8$$
$$-4y_1 + y_2 = 2$$
$$3y_1 - 2y_2 + y_3 = -9$$
Using forward-substitution, we find that $y_1 = 8$, $y_2 = 2 + 4(8) = 34$, and $y_3 = -9 - 3(8) + 2(34) = 35$. Hence $y = \begin{bmatrix} 8 \\ 34 \\ 35 \end{bmatrix}$.
Thus, our system $Ux = y$ is
$$-x_1 + x_2 + 2x_3 = 8$$
$$3x_2 + 5x_3 = 34$$
$$5x_3 = 35$$
Using back-substitution, we get $x_3 = 7$, $3x_2 = 34 - 5(7) \Rightarrow x_2 = -\frac{1}{3}$, and $-x_1 = 8 + \frac{1}{3} - 2(7) \Rightarrow x_1 = \frac{17}{3}$.
Thus, the solution is $x = \begin{bmatrix} 17/3 \\ -1/3 \\ 7 \end{bmatrix}$.
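The forward- and back-substitution steps in the answer above can be sketched in a few lines of Python. The code is our own illustration, assuming the factors L and U and the right-hand side b reconstructed in part (a).

```python
# L, U, and b as in the worked answer above (part (a))
L = [[1, 0, 0], [-4, 1, 0], [3, -2, 1]]
U = [[-1, 1, 2], [0, 3, 5], [0, 0, 5]]
b = [3, 2, 6]

# forward substitution: solve L y = b (L is unit lower triangular)
y = []
for i in range(3):
    y.append(b[i] - sum(L[i][j] * y[j] for j in range(i)))

# back substitution: solve U x = y
x = [0.0, 0.0, 0.0]
for i in range(2, -1, -1):
    x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, 3))) / U[i][i]

print(y)   # [3, 14, 25]
print(x)   # approximately [10/3, -11/3, 5]
```

The intermediate vector y and the final solution x match the hand computation above.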

CHAPTER 4

Section 4.1

1. Consider

Row reducing the coefficient matrix of the associated homogeneous system of


linear equations gives

3 0 1 0 2 0
2 1 5 -1 0 1 1 0
3 5 -2 0 0 0 1
3 0 0 0 0 0
Appendix A Answers to Mid-Section Exercises 449

Hence, there are infinitely many solutions, so $\mathcal{B}$ is linearly dependent.


To determine if p(x) E span t1, t2, t3, t4
13 we need to find if there exists and
such that

1 +Sx - 5x2+x3 = t1(1+2x+x2+x3)2 +t2(1+x+3x2+2x3)


+ t3(3+Sx+5x +3x3)+t4(-x - 2x )
Row reducing the corresponding augmented matrix gives

3
1 2 -3
0 1 1 0 0 4
21 5 5 -1 0 1 0
3 5 -2 -5 0 0 0 1 0
3 1 0 0 0 0 0 0

p(x)2
Since the system is consistent, we have that E 13.

a+bx+cx +dx3,
2. Clearly, if we pick any polynomial then it is a linear com­
bination of the vectors in 13. Moreover, if we consider

we get t1 = t2 = t3 = t4 = 0. Thus, 13 is also linearly independent.

Section 4.2

1. The set S is not closed under scalar multiplication. For example, [ �] E S, but

�[�]=[��s.
2. Observe that axioms V2, VS, V7, V8, V9, and VlO must hold since we are
using the operations of the vector space M(2, 2).
Let A = [�l �2] and B = [� i2J be vectors in S.

Vl We have

Therefore, S is closed under addition.

V3 The matrix 02,2 = [� �] E S (take al = a2 = 0) satisfies A+02,2 = A =


02,2+A. Hence, it is the zero vector of S.

A = r-;i -a2].
V4 The additive inverse of is ( A)
-
O
which is clearly in S.

V6 tA = t[� �2] = [t�i ta2] O


E S. Therefore, S is closed under scalar

multiplication.

Thus, S with these operators is a vector space as it satisfies all 10 axioms.



3. By definition, $\mathbb{U}$ is a subset of $P_2$. Taking $a = b = c = 0$, we get $0 \in \mathbb{U}$, so $\mathbb{U}$ is non-empty. Let $p = a_1 + b_1x + c_1x^2$, $q = a_2 + b_2x + c_2x^2 \in \mathbb{U}$, and $s \in \mathbb{R}$. Then $b_1 + c_1 = a_1$ and $b_2 + c_2 = a_2$.
S1 $p + q = (a_1 + a_2) + (b_1 + b_2)x + (c_1 + c_2)x^2$ and $(b_1 + b_2) + (c_1 + c_2) = b_1 + c_1 + b_2 + c_2 = a_1 + a_2$, so $p + q \in \mathbb{U}$
S2 $sp = sa_1 + sb_1x + sc_1x^2$ and $sb_1 + sc_1 = s(b_1 + c_1) = sa_1$, so $sp \in \mathbb{U}$
Hence, $\mathbb{U}$ is a subspace of $P_2$.

4. By definition, $\{0\}$ is a non-empty subset of V. Let $x, y \in \{0\}$ and $s \in \mathbb{R}$. Then $x = 0$ and $y = 0$. Thus,
S1 $x + y = 0 + 0 = 0$ by V4. Hence, $x + y \in \{0\}$.
S2 $sx = s0 = 0$ by Theorem 4.2.1. Hence, $sx \in \{0\}$.
Hence, $\{0\}$ is a subspace of V and therefore is a vector space under the same operations as V.

Section 4.3

1. Consider the equation

Row reducing the coefficient matrix of the corresponding system gives

[� � �] - [� � �1
1 0 0 0 1

Observe that this implies that the system is consistent and has a unique solution
2
for all ao + aix + a2x E P2. Thus, 13 is a basis for P2.

2. Consider the equation

Row reducing the coefficient matrix of the corresponding system gives

[
-1
l 2 0
2 1
1 ][
0 - 0
1 0
1
0
0
1
0
]
0 1 1 1 0 0 1 1

2
Thus, 1 + x can be written as a linear combination of the other vectors. More­
2 2
over, {1- x, 2+ 2x + x , x + x } is a linearly independent set and hence is a basis
for Span 13.

3. Observe that every polynomial in S can be written in the form

2 2
a+bx+ cx +(-a - b - c)x3 = a(l - x3) + b(x - x3) + c(x - x3)

2
Hence, 13 = { 1 - x3, x - x3, x - x3} spans S. Verify that 13 is also linearly
independent and hence a basis for S. Thus, dim S = 3.

4. By the procedure, we first add a vector not in Span {[: ]}· We pick lH Thus,

{[: l [�]}
· is linear!y independent. Consider

[1 [
Row reducing the corresponding augmented matrix gives

1 1 1

I I
X1 X1

1 0 X2 � 0 1 Xt - X2

1 0 X3 0 0 X3 - X2

Thus, the vector X = [ !] is not in the span of the first two vectors. Hence, 13 =

{[;]. [�] [!]}


· is linearly independent. Moreover, repeating the row reduction

above with !I = [!]. we find that the augmented matrix has a leading 1 in each

row, and hence 13 also spans JR3. Thus, 13 is a basis for JR3. Note that there are
many possible correct answers.

5. A hyperplane in JR4 is three-dimensional; therefore, we need to pick three lin­

{� , � , �}
early independent vectors that satisfy the equation of the hyperplane. We pick

13 = ·It is easy to '°rify that 13 is Hnearly independent and hence

forms a basis for the hyperplane. To extend this to a basis for JR4, we j ust need
1


}
to add one vector that does not lie in the hyperplane. We observe that does

{�
0

not satisfy the equation of the hyperplane. Thus,


0 2
0
1
0 .
1s a basis
.
, 1 ' 0 ,

0

for JR4.

Section 4.4

J. Considert {�] [=!]=[H+ t2

[
Row reducing the corresponding augmented matrix gives

� =� � l [ � � � l
-1 2 2 �
0 0 0

Thus,
m. [1]
2. To find the change of coordinates matrix Q we need to find the coordinates of
=

the vectors in '13 with respect to the standard basis S. We get

To find the change of coordinates matrix P, we need to find the coordinates of


the standard basis vectors with respect to the basis '13. To do this, we solve the

[ [I -4 I
triple augmented systems

1 1 2 1 0 0 0 0 2
1
2
2
1
2
3
0
0
1
0
0
1 l� 0
0
1
0
0 -1
3
1
-1 -1
0
l
[= --�4 j]
Thus,

p -1

[ -4
We now verify that

1 1 0
-1
3 -1
1
j][i �]=[� �]
2 1
0

Section 4.5

tx1 + Y1 + tx2 + Y2 + tx3 + y3 ]


tx2 + Y2

Y1 + Y2 + Y3 ]=
Y2
tL(x) + L(J)

Hence, L is linear.

2. If 1 [��]
=
XJ
E Null(L), then
[Q ]
0
0
0
= L(x) =
[
x2
X1
+ x3
X2 +
xi
x3
] .

x1 x2 x3

" [ !] {[ :J}
Hence, = 0 and + = 0. Thus, every vector in Null(L) has the form

X x,Hence, a basis fo; Null(L) is _

_
Every vector in the range of L has the form
[ X2 XJ
] [� �] [� �]
x2 � x3 � =
Xt
+ (X2 + X )
3

Since {[� �] [� �]}


, is also clearly linearly independent, it is a basis for the

range of L.

Section 4.6
1. We have

� J + 1 [�
�] +o[� �]
�] [� �] [� �]
+0 + 1

Hence,
2 1 0 0
-2 -2 0 0
[L]23 =
2 1 0
2 0

Section 4.7
1. If $0 = t_1L(u_1) + \cdots + t_kL(u_k) = L(t_1u_1 + \cdots + t_ku_k)$, then $t_1u_1 + \cdots + t_ku_k \in \operatorname{Null}(L)$. But, L is one-to-one, so $\operatorname{Null}(L) = \{0\}$. Thus, $t_1u_1 + \cdots + t_ku_k = 0$ and hence $t_1 = \cdots = t_k = 0$ since $\{u_1, \ldots, u_k\}$ is linearly independent. Thus, $\{L(u_1), \ldots, L(u_k)\}$ is linearly independent.
2. Let v be any vector in V. Since L is onto, there exists a vector $x \in \mathbb{U}$ such that $L(x) = v$. Since $x \in \mathbb{U}$, we can write it as a linear combination of the vectors $\{u_1, \ldots, u_k\}$. Hence, we have
$$v = L(x) = L(t_1u_1 + \cdots + t_ku_k) = t_1L(u_1) + \cdots + t_kL(u_k)$$
Therefore, $\operatorname{Span}\{L(u_1), \ldots, L(u_k)\} = V$.
3. If L is an isomorphism, then it is one-to-one and onto. Since $\{u_1, \ldots, u_n\}$ is a basis, it is a linearly independent spanning set. Thus, $\{L(u_1), \ldots, L(u_n)\}$ is also linearly independent by Exercise 1, and it is a spanning set for V by Exercise 2. Thus, it is a basis for V.

CHAPTER 5

Section 5.1

1. (a) |3 2; 2 1| = 3(1) - 2(2) = -1
(b) |1 3; 0 -2| = 1(-2) - 3(0) = -2
(c) |2 1; 4 2| = 2(2) - 4(1) = 0

2. We have C11 = 3, C12 = -8, and C13 = 4. So, det A = 1C11 + 2C12 + 3C13 = 1(3) + 2(-8) + 3(4) = -1.

3. (a) det A = 1C11 + 0C21 + 3C31 + (-2)C41 = 1(-36) + 3(32) + (-2)(-26) = 112
(b) det A = 0C21 + 0C22 + (-1)C23 + 2C24 = (-1)(0) + 2(56) = 112
(c) det A = 0C14 + 2C24 + 0C34 + 0C44 = 2(56) = 112
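The cofactor expansions used in these answers can be sketched in a few lines of Python; the recursive function below expands along the first row, and the sample matrices are the 2 x 2 determinants from Exercise 1.

```python
# Laplace (cofactor) expansion along the first row.
# A minimal sketch in pure Python; works for any square matrix given as a
# list of rows.

def det(M):
    """Determinant of a square matrix by cofactor expansion along row 0."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, entry in enumerate(M[0]):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * entry * det(minor)
    return total

print(det([[3, 2], [2, 1]]))   # -> -1
print(det([[1, 3], [0, -2]]))  # -> -2
print(det([[2, 1], [4, 2]]))   # -> 0
```

This is the same rule as det A = a11 C11 + a12 C12 + · · ·, applied recursively down to 1 x 1 matrices.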

Section 5.2

1. rA is obtained by multiplying each row of A by r. Thus, by Theorem 5.2.1, det(rA) = (r)(r)(r) det A = r^3 det A.
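The scaling rule det(rA) = r^3 det A for a 3 x 3 matrix can be spot-checked numerically; the matrix A below is an arbitrary example, not one from the text.

```python
# Check det(rA) = r**3 * det(A) for a 3x3 matrix using the closed formula.

def det3(M):
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[1, 2, 3], [0, 1, 4], [5, 6, 0]]  # arbitrary example matrix
r = 2
rA = [[r * x for x in row] for row in A]
print(det3(A), det3(rA))  # -> 1 8
assert det3(rA) == r ** 3 * det3(A)
```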

2.

0 2 -8 8 I 1 3 -1
0 0 16 -1 = 2 0 2 -14 9
detA = l (-1)
3 -1 0 2 -8 8
0 2 -14 9 0 0 16 -1
1 1 3 -] 1 I 3 -1

= 2 0 2 -14 9 2 0 2 -14 9
(-1) =(-1) =20
0 0 6 -1 0 0 6 -1
0 0 16 -1 0 0 0 5/3

3.

-2 4 -5 -6
-5
4
3 1 3 2
detA =(-6)(-1) + 2 -4 3 + 4(-1) + 3 -4 3
2 -3 -4 -3 -3 -4

-2 4 -5 0 -4 1
=-6 0 0 -2 -4 3 -4 3
0 I -9 0 -7 -1
-5
=(-6)(-1)
-2
0
0
4
1
0
-9 - 4(3)(-1) +1
-2
2 -4
-7
1 -�I
=6(4) + 12(11)=156

Section 5.3

1. cofA = [=; � =;]


2 -1 2

2. A^{-1} = (1/det A)(cof A)^T =
3
- i i
3
-

-2
]
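For a 2 x 2 matrix, the adjugate formula A^{-1} = (1/det A)(cof A)^T reduces to swapping the diagonal entries and negating the off-diagonal ones. A sketch with exact rational arithmetic follows; the matrix is a made-up example, not the one from the exercise.

```python
# 2x2 inverse via the adjugate (transposed cofactor matrix), exact with Fraction.
from fractions import Fraction

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c  # assumed nonzero
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

A = [[2, 1], [5, 3]]   # example matrix with det = 1
Ainv = inv2(A)
# verify A * A^{-1} = I
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]
```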
Section 5.4

1. We draw u and v to form a left-handed system and repeat the calculations for the right-handed system:

Area(u, v) = Area of Square - Area 1 - Area 2 - Area 3 - Area 4 - Area 5 - Area 6
= (v1 + u1)(v2 + u2) - (1/2)u1u2 - v2u1 - (1/2)v1v2 - (1/2)u1u2 - v2u1 - (1/2)v1v2
= v1v2 + v1u2 + u1v2 + u1u2 - u1u2 - 2v2u1 - v1v2
= v1u2 - v2u1 = -(u1v2 - u2v1)

So, Area = |det [u1 v1; u2 v2]|.


2. We have L(e1) = [�] and L(e2) = [�]. Hence, the area determined by the image

vectors is

Alternatively,

CHAPTER 6

Section 6.1

1. The only eigenvectors are multiples of e3 with eigenvalue 1.

2. C(λ) = det(A - λI) = λ^2 - 5λ = λ(λ - 5), so the eigenvalues are λ1 = 0 and λ2 = 5. For λ1 = 0, we have

A - 0I = [1 2; 2 4] ~ [1 2; 0 0]

Thus, the eigenspace of λ1 = 0 is Span{[-2; 1]}. For λ2 = 5, we have

A - 5I = [-4 2; 2 -1] ~ [1 -1/2; 0 0]

Thus, the eigenspace of λ2 = 5 is Span{[1; 2]}.
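Eigenvalues of a 2 x 2 matrix can be read off from C(λ) = λ^2 - (tr A)λ + det A. As an illustration, the sketch below uses the matrix A = [[1, 2], [2, 4]], whose characteristic polynomial is λ(λ - 5), and checks the eigenvectors directly.

```python
# Eigenvalues of a 2x2 matrix via the quadratic formula, plus direct
# eigenvector checks A v = lambda v. The matrix is an illustrative example.
import math

A = [[1, 2], [2, 4]]
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr ** 2 - 4 * det)
eigs = sorted([(tr - disc) / 2, (tr + disc) / 2])
print(eigs)  # -> [0.0, 5.0]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

assert matvec(A, [-2, 1]) == [0, 0]   # eigenvector for eigenvalue 0
assert matvec(A, [1, 2]) == [5, 10]   # eigenvector for eigenvalue 5
```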

3. We have

C(λ) = det(A - λI) = |5 - λ  -3  2;  0  -λ  2;  0  -2  -4 - λ|
= (5 - λ)|-λ  2;  -2  -4 - λ| = -(λ - 5)(λ + 2)^2

So λ1 = 5 has algebraic multiplicity 1, and λ2 = -2 has algebraic multiplicity 2. For λ1 = 5, we have

A - 5I = [0 -3 2; 0 -5 2; 0 -2 -9] ~ [0 1 0; 0 0 1; 0 0 0]

Thus, a basis for the eigenspace of λ1 = 5 is {[1; 0; 0]}, so it has geometric multiplicity 1. For λ2 = -2, we have

A basis for the eigenspace of λ2 = -2 is {[-5; -7; 7]}, so it also has geometric multiplicity 1.

Section 6.2

1. We have C(λ) = det(A - λI) = -λ(λ - 2)(λ + 1). Hence, the eigenvalues are λ1 = 0, λ2 = 2, and λ3 = -1. For λ1 = 0, we have

0 0
A - Of =
-7
-4
3 11
-7
4
� 0
0
1
0
-1
0

Thus, a bosis for tbe eigenspace of ,! r is {[:]}


[-111 -11 [1
For -i2 = 2, we have

0 0
A - 2f =
-7
-6
3 �] -7
2
� 0
0
1
0

{[=il}
-1,
Thus, a basis for the eigenspace of ,! r is

For -i3 = we have

A - (-1)/ = 1
-7
[ � -� �1 [� � �;�]
3
=
5

0 0
=
0

Thus, a basis for tbe eigenspace of ,i, is {[il} r: =; il So, we can takeP =

and getP-1AP=D= [� � �]·


0 0 -1

[� � l
2
2. We have C(λ) = det(A - λI) = (λ - 2)^2. Thus, λ = 2 has algebraic multiplicity 2. Row reducing A - 2I shows that a basis for its eigenspace contains only one vector. Hence, the geometric multiplicity is less than the algebraic multiplicity. Thus, A is not diagonalizable.
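A concrete instance of geometric multiplicity falling short of algebraic multiplicity is the hypothetical example matrix A = [[2, 1], [0, 2]]: here C(λ) = (λ - 2)^2, but (A - 2I)v = 0 forces v2 = 0, so the eigenspace is only one-dimensional.

```python
# Geometric vs. algebraic multiplicity on a 2x2 example (not from the text).

A = [[2, 1], [0, 2]]
B = [[A[0][0] - 2, A[0][1]],
     [A[1][0], A[1][1] - 2]]  # A - 2I = [[0, 1], [0, 0]]

def in_null(v):
    """Is v in the null space of A - 2I, i.e. an eigenvector candidate?"""
    return all(row[0] * v[0] + row[1] * v[1] == 0 for row in B)

assert in_null([1, 0])        # e1 is an eigenvector
assert not in_null([1, 1])    # the eigenspace is only Span{e1}
```

Since the eigenspace is 1-dimensional while λ = 2 has algebraic multiplicity 2, no basis of eigenvectors exists and A is not diagonalizable.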

Section 6.3

[-�:� [1 -1]
1. (a) A is not a Markov matrix.

(b) Bis a Markov matrix. We have B- I =


0.6 ] . ThUS,
.
ltS

[ �;�l
-Q.6 � Q Q

fixed-state vector is

[�� l [��l 3
[��].
1, [n
2. We find that T x = T^2 x = T^3 x = · · ·. On the other hand, the fixed-state vector is the eigenvector corresponding to λ = 1.
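Repeatedly applying a Markov matrix drives any state vector toward the fixed-state vector (the eigenvector for λ = 1 whose entries sum to 1). Below is a sketch with a hypothetical 2 x 2 Markov matrix whose fixed-state vector is [0.6, 0.4]; it is not the matrix from the exercise.

```python
# Power iteration toward the fixed-state vector of a Markov matrix.

M = [[0.8, 0.3],
     [0.2, 0.7]]  # columns sum to 1; fixed-state vector is [0.6, 0.4]
s = [1.0, 0.0]
for _ in range(100):
    s = [M[0][0] * s[0] + M[0][1] * s[1],
         M[1][0] * s[0] + M[1][1] * s[1]]
print(s)  # converges to [0.6, 0.4]
assert abs(s[0] - 0.6) < 1e-9 and abs(s[1] - 0.4) < 1e-9
```

Convergence is fast here because the other eigenvalue of M is 0.5, so the error shrinks by half each step.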



CHAPTER 7

Section 7.1

1. We have [i] · nJ [i] · Hl 0, 0 and


nJ · Hl 0. Thus, the set

1· [ f�] [� �6�rl: [�!!h]}


= = =

is orthogonal. Dividing each vector by its length gives the orthonormal set

{ 21-16
·
1/Ys 2 -../36
2. It is easy to verify that B is orthonormal. We have b1 = x · v1 = 8/√3, b2 = x · v2 = -1/√2, and b3 = x · v3 = 5/√6. Hence, [x]_B = [8/√3; -1/√2; 5/√6].
[
c V2 + 2 Y3 - 1);-./6 ]
c-v'lS - V2 + 1);-./6
[ 5/-./6
]
3. We have 1 ( .../2 + 2)/-./6
= and y ( -v'18 - 2)/-./6 . = With a
c V2 - 2 Y3 - 1);-./6
little effort, we find that 11111

4. The result is easily verified.


2
c-v'lS + m + 1);-./6
� 6 and 1 y -2.
= =
· =

Section 7.2
1. We want to find all v ∈ R^4 such that v · x1 = 0, v · x2 = 0, and v · x3 = 0. This gives the homogeneous system of equations

v1 + v2 + v3 = 0
-v1 + v3 + v4 = 0
v2 - v3 + v4 = 0

which has solution space Span{[1; -1; 0; 1]}. Hence, S⊥ = Span{[1; -1; 0; 1]}.
2. We have

perps 1 = 1 -
projs 1 = [] [ ] [ ]
2

3
5/2
1 - 1
5/2
=
-1/2

1/2
0
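The projection computations above follow the pattern perp_v(x) = x - proj_v(x), and the result is always orthogonal to the vector projected onto. A sketch in exact arithmetic follows; the vectors x and v are examples, not necessarily those of the exercise.

```python
# Projection and perpendicular part, exact with Fraction.
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj(x, v):
    c = Fraction(dot(x, v), dot(v, v))
    return [c * vi for vi in v]

x, v = [2, 3, 1, 1], [1, 1, 0, 0]   # example vectors
p = proj(x, v)                       # p == [5/2, 5/2, 0, 0]
perp = [xi - pi for xi, pi in zip(x, p)]
assert dot(perp, v) == 0             # perp is orthogonal to v
```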

Section 7.4

1. We verify that ⟨ , ⟩ satisfies the three properties of an inner product. Let A = [a1 a2; a3 a4], B = [b1 b2; b3 b4], and C = [c1 c2; c3 c4].

(a) ⟨A, A⟩ = tr(A^T A) = a1^2 + a2^2 + a3^2 + a4^2 ≥ 0, and ⟨A, A⟩ = 0 if and only if a1 = a2 = a3 = a4 = 0, so A = O_{2,2}. Thus, it is positive definite.
(b) ⟨A, B⟩ = tr(B^T A) = b1a1 + b3a3 + b2a2 + b4a4 = a1b1 + a3b3 + a2b2 + a4b4 = ⟨B, A⟩, so it is symmetric.

(c) For any A, B, C ∈ M(2, 2) and s, t ∈ R,

⟨A, sB + tC⟩ = tr((sB + tC)^T A) = tr((sB^T + tC^T)A)
= tr(sB^T A + tC^T A) = s tr(B^T A) + t tr(C^T A) = s⟨A, B⟩ + t⟨A, C⟩

So, ⟨ , ⟩ is bilinear. Thus, ⟨ , ⟩ is an inner product on M(2, 2). We observe that ⟨A, B⟩ = a1b1 + a2b2 + a3b3 + a4b4, so it matches the dot product on the isomorphic vectors in R^4.
2. ||f|| = √⟨f, f⟩ = √(1^2 + 1^2 + 1^2) = √3. ||x|| = √⟨x, x⟩ = √(0^2 + 1^2 + 2^2) = √5
3. ||x|| = √⟨x, x⟩ = √((-1)^2 + 0^2 + 1^2) = √2
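The observation that ⟨A, B⟩ = tr(B^T A) matches the entrywise dot product can be spot-checked directly; the two matrices below are arbitrary examples.

```python
# The trace inner product on 2x2 matrices equals the entrywise dot product.

def t(M):  # transpose of a 2x2 matrix
    return [[M[0][0], M[1][0]], [M[0][1], M[1][1]]]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(M):
    return M[0][0] + M[1][1]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
inner = tr(mul(t(B), A))
entrywise = sum(A[i][j] * B[i][j] for i in range(2) for j in range(2))
print(inner, entrywise)  # -> 70 70
assert inner == entrywise
```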

CHAPTER 8

Section 8.1

1. We have C(λ) = (λ - 8)(λ - 2). Thus, the eigenvalues are λ1 = 8 and λ2 = 2. For λ1 = 8, we get

A - 8I = [-3 -3; -3 -3] ~ [1 1; 0 0]

Thus, a basis for its eigenspace is {[-1; 1]}. For λ2 = 2, we get

A - 2I = [3 -3; -3 3] ~ [1 -1; 0 0]

Thus, a basis for its eigenspace is {[1; 1]}. We normalize the vectors and find that A is orthogonally diagonalized to D = [8 0; 0 2] by P = [-1/√2 1/√2; 1/√2 1/√2].

2. We have C(λ) = -λ(λ - 3)^2. Thus, the eigenvalues are λ1 = 0 and λ2 = 3. For λ1 = 0, we get

A - 0I = [2 -1 -1; -1 2 -1; -1 -1 2] ~ [1 0 -1; 0 1 -1; 0 0 0]

Thus, a basis for its eigenspace is {[1; 1; 1]}. For λ2 = 3, we get

A - 3I = [-1 -1 -1; -1 -1 -1; -1 -1 -1] ~ [1 1 1; 0 0 0; 0 0 0]

Thus, a basis for its eigenspace is {[-1; 1; 0], [-1; 0; 1]}. But we need an orthonormal basis for each eigenspace, so we apply the Gram-Schmidt Procedure and normalize the vectors to get {[-1/√2; 1/√2; 0], [-1/√6; -1/√6; 2/√6]}.

Therefore, A is diagonalized to D = [0 0 0; 0 3 0; 0 0 3] by P = [1/√3 -1/√2 -1/√6; 1/√3 1/√2 -1/√6; 1/√3 0 2/√6].
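The Gram-Schmidt step (subtracting from v2 its projection onto v1) can be sketched in exact arithmetic, for instance on the pair v1 = (-1, 1, 0), v2 = (-1, 0, 1):

```python
# One Gram-Schmidt step, exact with Fraction.
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

v1 = [-1, 1, 0]
v2 = [-1, 0, 1]
c = Fraction(dot(v2, v1), dot(v1, v1))        # = 1/2
w2 = [Fraction(b) - c * a for a, b in zip(v1, v2)]
print(w2)                                      # w2 == [-1/2, -1/2, 1]
assert dot(w2, v1) == 0                        # now orthogonal to v1
assert [2 * x for x in w2] == [-1, -1, 2]      # i.e. a multiple of (-1, -1, 2)
```

Normalizing v1 and w2 then gives the orthonormal pair used in the columns of P.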
Section 8.2

1. (a) Q(x) = 4x1^2 + x1x2 + √2 x2^2
(b) Q(x) = x1^2 - 2x1x2 + 2x2^2 + 6x2x3 - x3^2


2. (a) [-1 -3
1 -1] [ 2 3/2 - 1/2
(b) 3/2 4 0
]
(c) 3
1
0
0
0
2
0
0
0
0
0
0
-1 /2 01 4 0 0 0

3. The corresponding symmetric matrix is A = [0 2; 2 -3]. We have C(λ) = (λ + 4)(λ - 1). Thus, the eigenvalues are λ1 = -4 and λ2 = 1. For λ1 = -4, we get

A + 4I = [4 2; 2 1] ~ [2 1; 0 0]

Thus, a basis for its eigenspace is {[-1; 2]}. For λ2 = 1, we get

A - I = [-1 2; 2 -4] ~ [1 -2; 0 0]

Thus, a basis for its eigenspace is {[2; 1]}. Thus, taking P = [-1/√5 2/√5; 2/√5 1/√5] and x = Py, we get that the diagonal form of Q(x) is Q(x) = -4y1^2 + y2^2.

4. (a) The corresponding symmetric matrix is A = [4 8; 8 4]. We have C(λ) = (λ - 12)(λ + 4). Thus, the eigenvalues are λ1 = 12 and λ2 = -4. Thus, Q(x) is

indefinite.
(b) For the corresponding symmetric matrix A, we have C(λ) = -(λ - 1)(λ + 1)(λ - 8). Thus, the eigenvalues are λ1 = 1, λ2 = -1, and λ3 = 8. Since Q(x) has both positive and negative eigenvalues, it is indefinite.

Section 8.3

1. The corresponding symmetric matrix is A = [1 1; 1 1]. We have C(λ) = λ(λ - 2). Thus, the eigenvalues are λ1 = 0 and λ2 = 2. For λ1 = 0, we get

A - 0I = [1 1; 1 1] ~ [1 1; 0 0]

Thus, a basis for its eigenspace is {[-1; 1]}. For λ2 = 2, we get

A - 2I = [-1 1; 1 -1] ~ [1 -1; 0 0]

Thus, a basis for its eigenspace is {[1; 1]}. We take [-1; 1] to define the y1-axis and [1; 1] to define the y2-axis. Then the original equation is equivalent to 0y1^2 + 2y2^2 = 2. We get the pair of parallel lines depicted below.

CHAPTER 9

Section 9.1

1. (a)3+i (b)-2 +2i (c)5 - 5i (d) 13


2. (a) conj(z1) = x - yi, so conj(conj(z1)) = x + yi = z1
(b) x + iy = z1 = conj(z1) = x - iy if and only if y = 0.
(c) Let z2 = a + ib. Then

conj(z1 + z2) = conj(x + iy + a + ib) = conj(x + a + i(y + b)) = x + a - i(y + b) = x - iy + a - ib = conj(z1) + conj(z2)

3. (a) (1 + i)/(1 - i) = ((1 + i)(1 + i))/((1 - i)(1 + i)) = 2i/2 = i
(b) 2i/(1 + i) = (2i(1 - i))/((1 + i)(1 - i)) = (2 + 2i)/2 = 1 + i
(c) (4 - i)/(1 + 5i) = ((4 - i)(1 - 5i))/((1 + 5i)(1 - 5i)) = (-1 - 21i)/26 = -1/26 - (21/26)i
4. The given complex numbers and their sums are plotted as points in the complex plane (sketches omitted).
5. We have |z1| = |√3 + i| = √(3 + 1) = 2. Any argument θ satisfies √3 = 2 cos θ and 1 = 2 sin θ. Thus, θ = π/6 + 2πk, k ∈ Z. Hence,

z1 = 2(cos(π/6) + i sin(π/6))

We have |z2| = |-1 - i| = √(1 + 1) = √2. Any argument θ satisfies -1 = √2 cos θ and -1 = √2 sin θ. Thus, θ = 5π/4 + 2πk, k ∈ Z. Hence,

z2 = √2(cos(5π/4) + i sin(5π/4))
6. We have

|conj(z)| = |r cos θ - ir sin θ| = √(r^2 cos^2 θ + r^2 sin^2 θ) = √(r^2) = r = |z|

Using the trigonometric identities cos θ = cos(-θ) and -sin θ = sin(-θ) gives

conj(z) = r cos θ - ir sin θ = r cos(-θ) + ir sin(-θ) = r(cos(-θ) + i sin(-θ))

Hence, an argument of conj(z) is -θ.

7. Theorem 3 says the modulus of a quotient is the quotient of the moduli of the factors, while the argument of the quotient is the difference of the arguments. Taking z1 = 1 = 1(cos 0 + i sin 0) and z2 = z in Theorem 3 gives

1/z = z1/z2 = (1/r)(cos(0 - θ) + i sin(0 - θ)) = (1/r)(cos(-θ) + i sin(-θ))

8. In Example 5, we found that 2 - 2i = 2√2(cos(-π/4) + i sin(-π/4)) and -1 + √3 i = 2(cos(2π/3) + i sin(2π/3)). Hence,

(2 - 2i)(-1 + √3 i) = 4√2(cos(-π/4 + 2π/3) + i sin(-π/4 + 2π/3)) = 4√2(cos(5π/12) + i sin(5π/12)) = 1.464 + i(5.464)

(2 - 2i)/(-1 + √3 i) = (2√2/2)(cos(-π/4 - 2π/3) + i sin(-π/4 - 2π/3)) = √2(cos(-11π/12) + i sin(-11π/12)) = -1.366 - i(0.366)

9. We have 1 - i = √2(cos(-π/4) + i sin(-π/4)), so

(1 - i)^5 = 2^{5/2}(cos(-5π/4) + i sin(-5π/4)) = -4 + 4i

We have -1 - √3 i = 2(cos(4π/3) + i sin(4π/3)), so

(-1 - √3 i)^5 = 2^5(cos(20π/3) + i sin(20π/3)) = -16 + 16√3 i
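The polar-form rules used in these answers (moduli multiply, arguments add) can be verified with Python's complex numbers; the check below uses the product (2 - 2i)(-1 + √3 i) from Exercise 8.

```python
# Polar form: |z1 z2| = |z1||z2| and arg(z1 z2) = arg(z1) + arg(z2).
import cmath
import math

z1 = 2 - 2j
z2 = -1 + math.sqrt(3) * 1j
p = z1 * z2
assert abs(abs(p) - abs(z1) * abs(z2)) < 1e-12
# arguments add: -pi/4 + 2*pi/3 = 5*pi/12
assert abs(cmath.phase(p) - 5 * math.pi / 12) < 1e-12
print(p)  # approximately 1.464 + 5.464i, matching the rounded value above
```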

Section 9.2

[ [
1. We row reduce the corresponding augmented matrix to get

i 1 3 -1-2i 1 0 0
i 1
2 l+i
1 +2i
2
2+i
S- i l � 0 1 0
0 0 1

Thus, the only solution is z1 = � +i, z2 = 0, and Z3 = - � i.

Section 9.3

1. All 10 vector space axioms are easily verified.


1 0
0 1 -i
2. The reduced row echelon form of A is Hence, a basis
0 0 0
.

0 0 0

for Row(A) is
ml [!J}
·

Section 9.4

1. For ..:l1 = 2 + 3i, =[ 3 � 3i _3-� 3i]- [1� -20 ].


we haveA - ;J.1/
i

[1� i]=[ �] +i [ �] P=[� n


-
An eigenvector corresponding to ;i is .So,
-

2. det(A-;J./) = ;J.2 - 6;J. +11 ; 3 + -fi.i.


So, ..:l1 = Hence, a real

canonical form ofA is C =[ � �. _

We haveA-;J.1/= [ � -�J�[� � ]
- i i
.

An eigenvector correspond'mg to
·
[ --fi.1 i]= [10] + [ -Y2l0 j.
)
/l 1s
·
1
·
So,

P=[� -�.
Section 9.5

1. We have

⟨u, v⟩ = i(2 - 2i) + (1 + 2i)(1 + 3i) = -3 + 7i
⟨2iu, v⟩ = 2i⟨u, v⟩ = 2i(-3 + 7i) = -14 - 6i
⟨u, 2iv⟩ = conj(2i)⟨u, v⟩ = -2i(-3 + 7i) = 14 + 6i

2. We have

⟨z, z⟩ = z1 conj(z1) + · · · + zn conj(zn) = |z1|^2 + · · · + |zn|^2

Thus, ⟨z, z⟩ ≥ 0 for all z, and ⟨z, z⟩ = 0 if and only if z = 0. Moreover,

⟨z, w⟩ = z^T conj(w) = conj(w)^T z = conj(w^T conj(z)) = conj(⟨w, z⟩)
⟨v + z, w⟩ = (v + z)^T conj(w) = v^T conj(w) + z^T conj(w) = ⟨v, w⟩ + ⟨z, w⟩
⟨z, w + u⟩ = z^T conj(w + u) = z^T conj(w) + z^T conj(u) = ⟨z, w⟩ + ⟨z, u⟩
⟨αz, w⟩ = (αz)^T conj(w) = α z^T conj(w) = α⟨z, w⟩
⟨z, αw⟩ = z^T conj(αw) = conj(α) z^T conj(w) = conj(α)⟨z, w⟩

3. The columns of A are not orthogonal under the standard complex inner product, so A is not unitary. We have B*B = I, so B is unitary.
APPENDIX B

Answers to Practice
Problems and Chapter
Quizzes
CHAPTER 1

Section 1.1

A Problems

Al X2
(a) (b)

[;]
[�]
Xi

-[�]
3[-!]
Xi
X2

2[�]-2[-�]
(c) (d) X2

-2
[ �]
-

Xi

[_�]
[3[62] 3 [ 4 �vl]
Xi

[� ] [=�] [-�] 4 [72]


74//3] [ -rrl
A2 (

2
a) (b) (c) (d) (e) (f)

{[ �1 [-
r-10;[-1 ! 1 ;�1 -
[-1/2�]1 13/3
c{
A3 ( (b (c) (e) (f) -Y2

0 -Y2 + 7f

A4 (a)

--13�1 (b)
-22
10 (c) u =

9/2
-7/2
[=�](d) u


AS (a) [�l
-1/2
(b) [ :;]
-10
(d) [:l
- /3
-14/3
A6

pQ = oQ-oP =
Ul-m [ il =
=

PR = oR -oh
m m [tl -
=

PS = oS -oh
ni-m [=�] =

QR=
[�]-[_!] nJ
oR -oQ = =

sR=
m-ni Ul
o R -oS = =

pQ +QR= [=il nl [t] [=!] [j]


+ = = + = PS + sR

AS Note that alternative correct answers are possible.

(a)x= - [ �] [ � J
+t . tE� (b)x= [�J [ =�J
+r . tE�

[j] Hl nl [H
-

(c}X= + · IER (d}X= +l IER

(e)X= [�;�] [-;;�],


1
+t
-2/3
t E�

A9 (a) XI = -1 +t, X2 = -1 + 3t, [= �] [�].


t E�; X = +t t E�

(b) X1 = l + 3t, X2 = l - 2t, t E�; x= UJ [ ;J


+t
_
. t E�

A10 (a) Three points P, Q, and R are collinear if PQ = tPR for some t ∈ R.
(b) Since -2PQ = PR, the points P, Q, and R must be collinear.
(c) The points S, T, and U are not collinear because SU ≠ tST for any t.

Section 1.2

A Problems

0
5 10
0
9 -7
Al (a) (b) (c) 1
0 10
1
1 -5
3

A2 (a) The set is not a subspace of IR.3.

(b) The set is a subspace of IR.3.

(c) The set is a subspace of IR.2•

(d) The set is not a subspace of IR.3.

(e) The set is a subspace of IR.3.

(f) The set is a subspace of IR.4.


A3 (a) The set is a subspace of IR.4 .

(b) The set is not a subspace of IR.4.

(c) The set is not a subspace of IR.4•

(d) The set is not a subspace of IR.4.

(e) The set is a subspace of IR.4.

(f) The set is not a subspace of IR.4.


A4 Alternative correct answers are possible.

(a) l
[�J o[�l o [_:H�l
+ +

(b)
+!l- [�] [�H�l
2
+
l

(c) 1
lil [tl m l�l
+
1
-
l =

(d) 1 [n [�J [�J [�J


-
2 +1
=
AS Alternative correct answers are possible.

(a) The plane in R4 with basis

{! i} ,

(b) The hype'Jllane in R4 with basis U � �} , ,



(c) The line in 11!.4 with basis { -I }


(d) The plane in JR4 with basis {l �}2
.
-1
A6 If x = p + td is a subspace of R^n, then it contains the zero vector. Hence, there exists t1 such that 0 = p + t1d. Thus, p = -t1d, and so p is a scalar multiple of d. On the other hand, if p is a scalar multiple of d, say p = t1d, then we have x = p + td = t1d + td = (t1 + t)d. Hence, the set is Span{d} and thus is a subspace.

A7 Assume that there is a non-empty subset B1 = {v1, . . . , vℓ} of B that is linearly dependent. Then there exist ci not all zero such that

0 = c1v1 + · · · + cℓvℓ = c1v1 + · · · + cℓvℓ + 0vℓ+1 + · · · + 0vn

This contradicts the fact that B is linearly independent. Hence, B1 must be linearly independent.

Section 1.3

A Problems

Al (a) Y29 (b) 1 (c) .../2 (d) Y17


(e) -fiSl/5 (f) 1 (g) -% - (h) 1

1
[ ] [ ;0] 6

3/5 1
A2 (a) - (b) (c)
4/5 1/.../2
2/Ys

c+!J (e)
-2/3
-2/3
1/3
0
(f)
1/.../2
0
0
-1/.../2
A3 (a) 2 VlO (b) 5 (c) vT75 (d) 3-%
A4 (a) ||x|| = √26; ||y|| = √30; ||x + y|| = 2√22; |x · y| = 16; the triangle inequality: 2√22 ≈ 9.38 ≤ √26 + √30 ≈ 10.58; the Cauchy-Schwarz inequality: 16 ≤ √(26(30)) ≈ 27.93.
(b) ||x|| = √6; ||y|| = √29; ||x + y|| = √41; |x · y| = 3; the triangle inequality: √41 ≈ 6.40 ≤ √6 + √29 ≈ 7.83; the Cauchy-Schwarz inequality: 3 ≤ √(6(29)) ≈ 13.19.
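The triangle and Cauchy-Schwarz inequalities can be checked numerically for any pair of vectors; the vectors u and v below are arbitrary examples, not those of the exercise.

```python
# Numerical check of Cauchy-Schwarz and the triangle inequality.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

u = [1, 3, 4]
v = [2, -1, 5]
assert abs(dot(u, v)) <= norm(u) * norm(v)                       # Cauchy-Schwarz
assert norm([a + b for a, b in zip(u, v)]) <= norm(u) + norm(v)  # triangle
```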

AS (a)
m [- � l· = O; these vectors are orthogonal.

(b) [-!] [-:] · = 0; these vectors are orthogonal.



(c) rn [-�l
· = 4 ; O; these vectors are not orthogonal.

4 -1
(d)
0 34 = O; these vectors are orthogonal.

-2 0
0 X1
0 X2
(e)
0 X3 = O; these vectors are orthogonal.

0 X4
1/3 3/2
2/3 0
(f) -1/3 -3/2 = 4; these vectors are not orthogonal.

3 1
A6 (a) k = 6 0 (b) k = or k = 3 (c) k = -3 (d) any k E IR

A7 (a) 2x1 4x2 -X3


+ =
9
(b) 3x1-4x1 -2x2
+ 5x3 -2x3
=26 -12
(c) 3x1 -4x2 x3 8 + = (d) =

AS (a) 3x1
X1 -4x2x2 5x3
+ 4x3 0+ = (b) x2 3x3 3x4 1
+ + =
(c) -2x4 0 = (d) X2 2X3 -X4 X5 = 1

{�]
+ + +

A9 (a) ii= m (b) ii=


Hl1 (c) ii

(d) -12
it= -1 (e)n=

-3 2
-1
AlO (a) 2x1 -3x2 5x3 6 + =

(b)X2 -2=

(c)xi - x2 3x3 2 + =

All (a) False. One possible counterexample is [�] [�] 2 [�] [ �]


· = = · _9 .

(b) Our counterexample in part (a) has i1 * 0, so the result does not change.

Section 1.4

A Problems

Al (a) projil i1 = [-�J. perpil i1 = [�]


r3648/2/25J r-136
102//25J
(b) proJvu =
. _,

5 ,perpvu =
_,

25

-[ 4/98/9] [ 40/91/9 l
(d) projil a=
-8/0 9 -1 -19/9
'perpil a=

00 -12
(e) projil a=

0 1/2 -1 5/2
'perpil a=

-0 3
(f) projil a=

-1/20 -5/22
'perpil a=

2/76/7
A3 (a) u = 11�11
=
[3/7220/49]
(b) proja F = 660/49]
[330/49
[ 270/49
222/49]
(c) p erpa F =
-624/49
3/1/M
A4 (a) u = �
11 11
=
[-224/71Yf4.
Ml
(b) proja F =
[-16/78/7]
-[ 693/7/7]
(c) p erpa F =
30/7

AS (a) R(5/2,5/2), 5/-../2

(b) R(58/17,91/17), 6/Yfl


(c) R(17/6, 1/3,-1/6), y29/6
(d) R(5/3,11/3,-1/3), -../6
A6 (a) 2/../26

(b) 13/../38
(c) 4/Ys
(d) Y6
A7 (a) R(l/7,3/7,-3/7,4/7)

(b) R(l5/14,13/7,17/14,3)
(c) R(O,14/3,1/3,10/3)
(d) R(-12/7,11/7,9/7,-9/7)

Section 1.5

A Problems

Al (a)
[=� ]
-27

(b)
[�]
-
31
- 4

(c)
Ul
(d)
Ul
(e) [�]
(0 [�]
A2 (a) ii x ii =
[�]
(b) i1 xv = [ -�1
-13
= -v x i1

(c) i1x 3w = [ �i
-15
= 3(i1x w)

Cd) ax cv+ w) = [ -:i -18


= axv+ax w

(e) i1. (vx w) = -14 = w . (i1xv)

en a. cvx w) = -14 = -v. cax w)


A3 (a) Y35

(b) -vTl

(c) 9

(d) 13

A4 (a) x1 - 4x2 - l0x3 = -85

(b) 2x1 - 2x2 + 3x3 = -5

(c) -5x1 - 2x2 + 6x3 = 15


(d) -17x1 - x2 +lOx3 = 0
AS (a) 39x1 + 12x2 + lOx3 = 140

(b) llx1 - 21x2 - 17x3 = -56

(c) -12x1 + 3x2 - 19x3 = -14

(d) X2 = 0

A6 (a) 1" [�6�\} t;J tER

(b)
A 7 (a) 1
x{il+H tER

(b) 126

(c) 5

(d) 35

(e) 16

AS i1 (v x w) = 0 means that i1is orthogonal to v x w. Therefore, i1lies in the plane


·

through the origin that contains v and w. We can also see this by observing that
i1 (vxw) = O means that the parallelepiped determined by i1, v, and w has volume
·

zero; this can happen only if the three vectors lie in a common plane.

A9 (u - v) × (u + v) = u × (u + v) - v × (u + v)
= u × u + u × v - v × u - v × v
= 0 + u × v + u × v - 0
= 2(u × v)
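The identity (u - v) × (u + v) = 2(u × v) can be confirmed on sample vectors with the component formula for the cross product:

```python
# Cross product identity check on example vectors.

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

u = [1, 2, 3]
v = [4, 5, 6]
lhs = cross([a - b for a, b in zip(u, v)], [a + b for a, b in zip(u, v)])
rhs = [2 * c for c in cross(u, v)]
print(lhs, rhs)  # -> [-6, 12, -6] [-6, 12, -6]
assert lhs == rhs
```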

Chapter 1 Quiz
E Problems

Et x {�] HJ + . ieR

E2 8x1 - x2 + 7x3 = 9

E3 To show that {[1; 2], [-1; 2]} is a basis, we need to show that it spans R^2 and that it is linearly independent.
Consider

[x1; x2] = t1[1; 2] + t2[-1; 2] = [t1 - t2; 2t1 + 2t2]

This gives x1 = t1 - t2 and x2 = 2t1 + 2t2. Solving using substitution and elimination, we get t1 = (1/4)(2x1 + x2) and t2 = (1/4)(-2x1 + x2). Hence, every vector can be written as

[x1; x2] = (1/4)(2x1 + x2)[1; 2] + (1/4)(-2x1 + x2)[-1; 2]

So, it spans R^2. Moreover, if x1 = x2 = 0, then our calculations above show that t1 = t2 = 0, so the set is also linearly independent. Therefore, it is a basis for R^2.
E4 If d ≠ 0, then a1(0) + a2(0) + a3(0) = 0 ≠ d, so 0 ∉ S. Thus, S is not a subspace of R^3.
On the other hand, assume d = 0. Observe that, by definition, S is a subset of R^3 and that 0 ∈ S since taking x1 = 0, x2 = 0, and x3 = 0 satisfies a1x1 + a2x2 + a3x3 = 0.
Let x = [x1; x2; x3], y = [y1; y2; y3] ∈ S. Then they must satisfy the condition of the set, so a1x1 + a2x2 + a3x3 = 0 and a1y1 + a2y2 + a3y3 = 0.
To show that S is closed under addition, we must show that x + y satisfies the condition of S. We have x + y = [x1 + y1; x2 + y2; x3 + y3] and

a1(x1 + y1) + a2(x2 + y2) + a3(x3 + y3) = a1x1 + a2x2 + a3x3 + a1y1 + a2y2 + a3y3 = 0 + 0 = 0

Hence, x + y ∈ S. Similarly, for any t ∈ R, we have tx = [tx1; tx2; tx3] and

a1(tx1) + a2(tx2) + a3(tx3) = t(a1x1 + a2x2 + a3x3) = t(0) = 0

So, S is closed under scalar multiplication. Therefore, S is a subspace of R^3.

E5 The coordinate axes have direction vectors given by the standard basis vectors. The cosine of the angle between v and e1 is

cos α = (v · e1)/(||v|| ||e1||) = 2/√14

The cosine of the angle between v and e2 is (v · e2)/(||v|| ||e2||). The cosine of the angle between v and e3 is

cos γ = (v · e3)/(||v|| ||e3||) = 1/√14

E6 Since the origin O(0, 0, 0) is on the line, we get that the point Q on the line closest to P is given by OQ = proj_d OP, where d = [-3; 2; -3] is a direction vector of the line. Hence,

OQ = ((OP · d)/||d||^2) d = [18/11; -12/11; 18/11]

and the closest point is Q(18/11, -12/11, 18/11).


E7 Let Q(0, 0, 0, 1) be a point in the hyperplane. A normal vector to the hyperplane is n = [1; 1; 1; 1]. Then, the point R in the hyperplane closest to P satisfies PR = proj_n PQ. Hence,

OR = OP + proj_n PQ = [3; -2; 0; 2] + [-1/2; -1/2; -1/2; -1/2] = [5/2; -5/2; -1/2; 3/2]

Then the distance from the point to the hyperplane is the length of PR:

||PR|| = ||[-1/2; -1/2; -1/2; -1/2]|| = 1

E8 A vector orthogonal to both vectors is the cross product x = u × v.

E9 The volume of the parallelepiped determined by u + kv, v, and w is

|(u + kv) · (v × w)| = |u · (v × w) + k(v · (v × w))| = |u · (v × w) + k(0)|

This equals the volume of the parallelepiped determined by u, v, and w.

E10 (i) False. The points P(0, 0, 0), Q(0, 0, 1), and R(0, 0, 2) lie in every plane of the

(ii) True. This is the definition of a line, reworded in terms of a spanning set.

(iii) True. The set contains the zero vector and hence is linearly dependent.

(iv) False. The dot product of the zero vector with itself is 0.

(v) False. Let x =


[�] and y = Ul Then, proj1 y = [�l while proj51 x = [�;�.
(vi) False. If y = 0, then proj_x y = 0. Thus, {proj_x y, perp_x y} contains the zero vector, so it is linearly dependent.

(vii) True. We have

||u × (v + 3u)|| = ||u × v + 3(u × u)|| = ||u × v + 0|| = ||u × v||

so the parallelograms have the same area.

CHAPTER 2

Section 2.1

A Problems

Al (a) x =
[1;]
(b) X=
[�] fn + t ER

(c) X =
[=2�]
-1 -1

(d) X= -� +t -�, tEJR

0

A2 (a) A is in row echelon form.

(b) Bis in row echelon form.

(c) C is not in row echelon form because the leading 1 in the third row is not
further to the right than the leading 1 in the second row.

(d) D is not in row echelon form because the leading 1 in the third row is to left
of the leading 1 in the second row.

A3 Alternate correct answers are possible.


-3
(a)
[� 13 - �]
-1 2
(b)
[i 0
0
1
0 �]
5 0 0
0 -1 -2
(c)
0 0 1
0 0 0

2 0 2 0
0 2 2 4
(d)
0 0 4 8
0 0 0 0

1 2
0 1 2 1
(e)
0 0 7 5
0 0 0 2

1 0 3 0
0 1 -1 2 1
(f)
0 0 24 1 11
0 0 0 0
A4 (a) Inconsistent.

(b) Consistent The solution is ii = m [!]


+t t ER

-1
-1 -1
(c) Consistent. The solution is x = +t t E JR
0 1'
3 0

19/2 -1
0
(d) Consistent. The solution is x = +t t ER
5/2 O'
-2 0

-1 1
0 0
(e) Consistent. The solution is x = s +t s,t ER
1 O'
0

AS (a) [ 1 ] [
3
1
-5
1 ]
2
2
4 -
1
0 _11
2
_10
4 . . .
. cons1stent with so1ut10n
:;t
-t
=
24
/11
l0
/1 l
.
[ ]
(b) [ � -� �I� H � -� � 1 ;J -
consistent with solutionX = [�fl +

f[ H IE
R

[ � � =� i l
1 2 -3 8
(c) 1
2
3
5
-5
-8
11
19 l Consistent with solution

x=
[ �l f H + IE R.

(d)
[ -r l [
-2
-3
6 16
-5
-8
-11
-17
3 6
-
1
0
0
-2
1
0
-5
2
1
-11
5
3
l Consistent with solution

X=
Hl
(e)

[�
2

l[
5
9
� l-1

-1
1 10
19
4
-
1
0
0
2
1
0
-1
3
0
The system is inconsistent.

1 [� 1
2 -3 0 -5 2 -3 0 -5
(f)
[� 13
4
-17
-6

-7
1
4 -21
-8

5
1
0
1
0
4 ; . Consistent with

-
1 1
solution x = t ' tER
0+ l
2 0

0 2 -2 0 2 1 2 -3 1 4 1
1 2 -3 1 4 1 0 2 -2 0 1 2
(g) 1 . The system is con-
2 4 -5 3 8 3 0 0 1 0
2 5 -7 3 10 5 0 0 0 3/2 2
-4 0
0 1
sistent with solution x = -1 t 3/2 ' t ER
+
2 -3/2
0

A6 (a) If a ≠ 0, b ≠ 0, this system is consistent and the solution is unique. If a = 0, b ≠ 0, the system is consistent, but the solution is not unique. If a ≠ 0, b = 0, the system is inconsistent. If a = 0, b = 0, this system is consistent, but the solution is not unique.

(b) If c ≠ 0, d ≠ 0, the system is consistent, and the solution is unique. If d = 0, the system is consistent only if c = 0. If c = 0, the system is consistent for all values of d, but the solution is not unique.

A7 600 apples, 400 bananas, and 500 oranges.

A8 75% in algebra, 90% in calculus, and 84% in physics.
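The row reductions behind these answers can be reproduced with a short Gauss-Jordan elimination over the rationals; the system solved below (x1 + 2x2 = 5, 3x1 + 4x2 = 11) is a made-up example, not one from the exercises.

```python
# Gauss-Jordan elimination to reduced row echelon form, exact with Fraction.
from fractions import Fraction

def rref(M):
    M = [[Fraction(x) for x in row] for row in M]
    lead = 0
    for r in range(len(M)):
        if lead >= len(M[0]):
            break
        i = r
        while M[i][lead] == 0:          # find a pivot in this column
            i += 1
            if i == len(M):
                i = r
                lead += 1
                if lead == len(M[0]):
                    return M
        M[i], M[r] = M[r], M[i]          # swap pivot row into place
        M[r] = [x / M[r][lead] for x in M[r]]   # scale pivot to 1
        for i2 in range(len(M)):          # eliminate the column elsewhere
            if i2 != r:
                fac = M[i2][lead]
                M[i2] = [a - fac * b for a, b in zip(M[i2], M[r])]
        lead += 1
    return M

# x1 + 2x2 = 5, 3x1 + 4x2 = 11  ->  x1 = 1, x2 = 2
R = rref([[1, 2, 5], [3, 4, 11]])
assert R == [[1, 0, 1], [0, 1, 2]]
```

Using Fraction avoids floating-point round-off, so ranks and parameters read directly off the result.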



Section 2.2

A Problems

Al (a)
[i n the rank is 2

[� H
0
(b) 1 the rank is 3.
0

(c)
[� H
0
1
0
the rank is 3.

[i n
0
(d) the rank is 3.
0

1 2 0

(e)
0
0
0
0
� ; the rank is 2.

0 0 0

(f)
[�
1
0
0
0
1
0
n the rank is 3.

(g)
[�
0
1
0
0
0 -H the rank is 3.

1 0 0 -1/2
0 1 0 3/2
(h) ; the rank is 3.
0 0 1 1/2
0 0 0 0

1 0 0 0 -56
0 1 0 0 17
(i) ; the rank is 4.
0 0 1 0 23
0 0 0 -6

-2
1
A2 (a) There is one parameter. The general solution is x = t t ER
1 ,
0

(b) There are two parameters. The general solution is x = s � + t


-2
1 ,
s,tEJR.

0 0

3 -2
1 0
(c) There are two parameters. The general solution is x = s + t s,tEJR.
0 1 ,
0 0

-2 0

(d) There are two parameters. The general solution is 1 = s 1 -1


0
+t
2
0 , s, t ER

0 -4
0
(e) There are two parameters. The general solution is 1 = s 0
0
0
+ t
1
5,

0
s, t E IR

(f) There is one parameter. The general solution is 1 = t


-1
1
0
, t E JR.

A3 (a) [�1 � -�i [� � �1];


4 -3
-

0 0
the rank is 3; there are zero parameters. The only

solution is 1 = 0.

(b) [� 1� =;] [� � =�];


2 -7
-

0 0 0
the rank is 2; there is one parameter. The

[H
general solution is 1 = t t ER.

-1 21 -1 1 -3 0 -7

(c)
3
2
-3
-2
8
5
-5
-4
0
0
0
0 0
� ; the rank is 2; there are two para-

3 -3 7 -7 0 0 0 0
7

meters. The general solution is 1 = s � +t -� , s, t ER

(d)
0
1
2
2
2
5
5
2
3 -1
-3
1
0
0
11 1 1
0

0
0
0
-2
0
_
0
2
; the rank is 3; there are two pa-

4 2 -2 0 0 0 0 0
2 0

rameters. The general solution is 1 -11 = s


0
+t
-2
1
0
, s, t ER

A4 (a) [ � -� I � ]- [ � � I ���� � l Consistent with solution 1 = [���� � l



(b) [ 12 2-3 21 156 ] [ 1 10 1274//7 ] .O


l O
Consistent with solution

2[ 4b7/71 [-1�'1 7
x ER
=

7 +t t

(c) [ 2� 5� =; 119� I [ � I -i � ]
-8
Consistent with solution

X= [�] fl
+ , IE R

(d) [ -�2 -3 5 -1 l - [ � ! � -� l
- �
16
-
-8
36
-11 Consisrent with solution

7
X=
Hl-
[ i �9 -1� 191� ]- [ �0 �0 -�0 �1 ]
[ � 1324 -1-6-3 014 -21 i [ 1� 00 -510 00 -7� i .
(e) ·The system is inconsistent.

-5
(D -8 Consistent with

7-7 5
solution x
10 -11 '
+t t ER

2 0
=

01 22 -2-3 0 4 21 10 10 00 00 -10 -40


(g)
22 54 --5 33 10 53 00 00 10 0 -3/23/2 -12
8
. The system is con-

7 -40 01
sistent with solution x -12 -3/2
= 3/2 ' +t t ER

-13 04 0 1
01 00 -31 .
AS (a)
[: 1 2 �I [� 0 1 2i The solution to [A I b] is

r-n [n
The solution to the homogeneous sysrem is =

1 0 5 -2 i
, ,
=

� � -� l - [ 0 1 0 1 [A I b]
2 -5 4 0 0 0 0 . The solution to is

x =

nl fH+ t E R The solution to the homogeneous system is

X=
fH IER

-1 5 2 1 -1 ] [ 10 01 -59 2 -51
[ -� -5 -9 1 1 ]
-1
The solution to
[A I b_,]
- �

(c)
-1 -4 -1 4 .

is x
_,

010 50 -2O' + s
1
+t s,t E JR. The solution to the

-95 -2
0 O'1
homogeneous system is x = s +t s,tER
1

0 -12 01 00 -20 � ]·
-1

u 2 50 lH� 0 1 1 -1
3
(d)
3
-4 � The solution to [A I b]
-4 -

is x =
-11 21 +t t E R The solution to the homogeneous system is

0 1
_ _ ,

02
X=t ' tER
-1

Section 2.3

A Problems

2 -1 3

2 01 (-1) 01 12 2
-

Al (a) + +3 =
8
4

5
4
(b) is not in the span.
6
7

01 2 2
-2 -1

(c)
3
+ (-1 )
1 +
0 (-1) 2 1

A2 (a)
-1
2 is not in the span.

-1

(-2) -10
I -1 -7

02 (-2) -1-1
3
(b) +
3 1 + 1
=
08

(c) is not in the span.

A3 (a) X3 = 0

(b) X1 - 2x2 = 0, X3 = 0

(c) x1+3x2-2x3 =0

(d) X1 +3x2 + Sx3 = 0

(e) -X1 - X2 + X3 = 0, X2+ X4 = 0

(f) -4x1 + Sx2 + X3 + 4x4 = 0

A4 (a) It is a basis for the plane.

(b) It is not a basis for the plane.

(c) It is a basis for the hyperplane.

AS (a) Linearly independent.


0 0 3 0

(b) Linearly dependent. -3t � - 2t -t


0
+t
2
6
0
O'
t E IR

0 3 0

02 0
03
(c) Linearly dependent. 2t 0 -t 1 +t = 0 , t E IR
1 3 1 0
1 3 1 0
A6 (a) Linearly independent for all k f. -3.

(b) Linearly independent for all k f. -5/2.

A 7 (a) It is a basis.
3
(b) Only two vectors, so it cannot span IR . T herefore, it is not a basis.
3
(c) It has four vectors in IR , so it is linearly dependent. T herefore, it is not a basis.

(d) It is linearly dependent, so it is not a basis.

Section 2.4

A Problems

Al To simplify writing, let a= �·


Total horizontal force: R1 + R2 = 0.
Total vertical force: Rv - Fv = 0.
Total moment about A: R1s + Fv(2s) = 0.
The horizontal and vertical equations at the joints A, B, C, D, and E are
o:N2+ R2 = 0 and N1 + o:N2 + Rv = O;
N3 + aN4 + R1 = 0 and -Ni + aN4 = 0

-N3 +aN6 = 0 and -aN2 +Ns+aN6 = 0


-aN4 +N1 = 0 and -aN4 - Ns = 0
-N7 - aN6 = 0 and -aN6 - Fv = 0

R1 +R2 -R2 0 0 0 E1
-R2 R2 +R3 -R3 0 0 0
A2 0 -R3 R3 +R4 +Rs 0 -Rs 0
0 0 0 Rs+R6 -R6 0
0 0 -Rs -R6 R6 +R1 +Rs E2

A3

x
x = 100

Chapter 2 Quiz
E Problems

1 -1 1 0 2
0 1 0 0 7
El . Inconsistent.
0 0 -2 1 -5
0 0 0 0

1 0 0 0 1
0 1 0 0 0
E2
0 0 I 0 -1/3
0 0 0 1/3

E3 (a) The system is inconsistent for all (a, b, c) of the form (a, b, 1) or (a, -2, c) and is consistent for all (a, b, c) where b ≠ -2 and c ≠ 1.

(b) The system has a unique solution if and only if b ≠ -2, c ≠ 1, and c ≠ -1.

-2 11/4
-1 -11/2
E4 (a) s 1 +t 0 ' S, t E JR
0 -5/4
0 1

(b) x · u = 0, x · v = 0, and x · w = 0 yields a homogeneous system of three linear equations with five variables. Hence, the rank of the matrix is at most three, and thus there are at least (# of variables - rank) = 5 - 3 = 2 parameters. So, there are non-trivial solutions.

ES Consider 51 =t1 m [1] [1].


+t2 +ti Row reducing the corresponding coefficient

matrix gives
[ ][ 3 1 4 1 o o
1
2
1
6
1 � 0
5 0
1
0

Thus, B is linearly independent and spans R^3. Hence, it is a basis for R^3.
0
1 l
E6 (a) False. The system may have infinitely many solutions.

(b) False. The system X1 = 1, 2x1 = 2 has more equations than variables but is
consistent.

(c) True. The system x1 =0 has a unique solution.


(d) True. If there are more variables than equations and the system is consistent,
then there must be parameters and hence the system cannot have a unique
solution. Of course, the system may be inconsistent.

CHAPTER 3

Section 3.1

A Problems

Al (a) [- 1 -6
-4 �]
[ ]
6

-3 6
(b) -6 -3
-12 6

(c) [ -1
-11 -1 �]
[ -1 � �-1 12
]
[I
A2 (a)
27 -1

f l
l
(b)

]
15

6 l3 -4
(c)
n -3
15
-5
19
1
-1

(d) The product is not defined since the number of columns of the first matrix does
not equal the number of rows of the second matrix.

A3 (a) A(B+C)= [- �! l �] =AB+AC A(3B)= ; ; - �� =3(AB)


[ ]
r6 -
A(3B)= _2; � =3(AB)
-16
(b) A(B+C)= l4 14 =AB+AC
] [ n

A4 (a) A+ Bis defined. ABis not defined. (A+ Bl = [=� ; ;] =Ar+ Br.

A+ = [-�� =��]=Br
12 2lO] .
(b) Bis not defined. ABis defined. (ABl Ar.

AS (a) AB= [ lO 13 31

(b) Does not exist because the matrices are not of the correct size for this product
to be defined.

(c) Does not exist because the matrices are not of the correct size for this product
to be defined.

(d) Does not exist because the matrices are not of the correct size for this product
to be defined.

(e) Does not exist because the matrices are not of the correct size for this product
to be defined.

[l� � � l�]
5[ 2 l]
(f)

139

]
(g)
62 46

[ 21 12
13 10
(h) 3

7 7
10

11

15
9
(i) Dre= (CTDl =
3 11

A6 (a) AX=
[12 l: n {;]. AZ
AY= nl
17[ � 8

�ll 1
-

(b) 4
3 -4

A7 (a)

(b)
[�]
(c)
[�]
(d)
[i l

AS (a) [-� -1 �]

(c)
[
(b) [OJ
10
-5
8
-4
-6
3
1
15 12 -9
(d) [-3]

A9 Both sides give the same matrix.

AlO It is in the span, since it equals the linear combination of the three given matrices with coefficients 3, −2, and −1.

All It is linearly independent.


Al2 Using the second view of matrix-vector multiplication and the fact that the i-th component of eᵢ is 1 and all other components are 0, we get

Aeᵢ = 0a₁ + ··· + 0aᵢ₋₁ + 1aᵢ + 0aᵢ₊₁ + ··· + 0aₙ = aᵢ
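The identity in A12 can be checked directly with the column view of matrix-vector multiplication. A sketch with a hypothetical 2×3 matrix (not one from the text):

```python
def mat_vec(A, x):
    """Compute Ax; by the column view this is x1*a1 + ... + xn*an."""
    m, n = len(A), len(A[0])
    return [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]

A = [[1, 2, 3],
     [4, 5, 6]]
e2 = [0, 1, 0]  # the second standard basis vector
print(mat_vec(A, e2))  # [2, 5], the second column of A
```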

Section 3.2

A Problems

Al (a) Domain ℝ², codomain ℝ⁴

(b) fA(2, −5) = (−19, 6, −23, 38), fA(−3, 4) = (18, −9, 17, −36)

(c) fA(1, 0) = (−2, 3, 1, 4), fA(0, 1) = (3, 0, 5, −6)

(d) fA(x) = (−2x₁ + 3x₂, 3x₁ + 0x₂, x₁ + 5x₂, 4x₁ − 6x₂)

(e) The standard matrix of fA is
[−2 3]
[ 3 0]
[ 1 5]
[ 4 −6]

A2 (a) Domain ℝ⁴, codomain ℝ³

(b) fA(2, -2, 3, 1) = [-111� , fA(-3, 1, 4, 2) =


[ �]
-13
-

(c) fA(ti) = [H fA (€,) = [-H fA(t3) = l-n Ul


fA (€,) =

(d) fA(x) =
l��l+::�: ;::]
3
X1
2X - X4 +

(e) The standard matrix of fA is

A3 (a) f is not linear.

(b) g is linear.

(c) h is not linear.

(d) k is linear.
(e) e is not linear.
(f) m is not linear.

A4 (a) Domain JR.3, codomain IR.2, [L ] = [� -3


1 �]
[�
-


]
0 3
(b) Domain IR.4, codomain IR.2, [K]
-
=

1 -7

0 -1 1
1 2 -1 -3
(c) Domain IR.4, codomain JR.4, [M] =

0 1 0
-1 1 -1

AS (a) The domain is ℝ³ and the codomain is ℝ² for both mappings.

[ [
]
3 3 2
]
1 -4 9
(b) [S+ T] 2 5 [2S- 3T]
-8 -5
= =

1 -6
,

A6 (a) The domain of S is ℝ⁴, and the codomain of S is ℝ². The domain of T is ℝ², and the codomain of T is ℝ⁴.

(b) [So T] = [ 6
10
-19
_10
] , [To S] =
-3

-6
6
-8
5
8
16
4
-4
0
9

0
-9 -1 7 -16 -5

A7 (a) The domain and codomain of L ∘ M are both ℝ³.

(b) The domain and codomain of M ∘ L are both ℝ².


(c) Not defined

(d) Not defined

(e) Not defined

(f) The domain of N ∘ M is ℝ³, and the codomain is ℝ⁴.

AS [projv] =

i l-� -7]
A9 [perpv] =
1 [ ] 16 -4
17 -4 1

AlO [projv] =
§ [ : : =�]
-2 -2 1

Section 3.3

A Problems

Al ca) [� -�] [ � -�] � [ � �] rn:��i -�:;��]


(b)
-
cc)
-
(d)

[S] [� �] [R9oS] [���; � �;:]


S
A2 (a) = (b) =

[
c

( ) [s R
c 0 g ]J
5 sin e
=
cos e -sin e
5 cos e

] [S] [
-
[R] [ ]
4/5 -3/5 -3/5 4/5
A3 (a) = (b) =

-3/5 -4/5 4/5 3/5

-2 2 8 4

f0 0 1 1
1
1
A4 (a) - -2 1 -2 (b) - 8 1 -4
3 9
-2 -2 1 4 -4 7

00 0 0
5
[0 0 0]
00 0
5 1
AS (a) (b)
2 1
5

(c) There is no shear T such that T ∘ P = P ∘ S. (d)


[� �]
0

Section 3.4

A Problems

3 3
1 -5
Al (a) The given vector is not in the range of L. (b) L(1, 1, −2) =
1
1 5

A2 (a) A bas;s fm Range(L)is {[6].[m A basis for Null(L)is {[�]}


0 0 0
0 0
00 ' 0 ' 0 ' 00
1
1
(b) A basis for Range(M) is
0 0 . A basis for Null(M) is the

0
1
1 -1

empty set since Null(M) {0}.

[i 1
=

-1
A3 The matrix of L is any multiple of -2.
-3

A4 The matrix of L is any multiple of [� :1



AS (a) The number of variables is 4. The rank of A is 2. The dimension of the solution
space is 2.

(b) The number of variables is 5. The rank of A is 3. The dimension of the solution
space is 2.

(c) The number of variables is 5. The rank of A is 2. The dimension of the solution
space is 3.

(d) The number of variables is 6. The rank of A is 3. The dimension of the solution
space is 3.

A6 (a) A basis for the rowspace is {…}. A basis for the columnspace is {…}. A basis for the nullspace is the empty set. Then rank(A) + nullity(A) = 3 + 0 = 3, the number of columns of A, as predicted by the Rank Theorem.

(b) A basis for the rowspace is {…}. A basis for the columnspace is {…}. A basis for the nullspace is {…}. Then rank(A) + nullity(A) = 3 + 1 = 4, the number of columns of A, as predicted by the Rank Theorem.

(c) A basis for the rowspace is {…}. A basis for the columnspace is {…}. A basis for the nullspace is {…}. Then rank(A) + nullity(A) = 3 + 2 = 5, the number of columns of A, as predicted by the Rank Theorem.
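The rank(A) + nullity(A) = n bookkeeping in A6 can be verified numerically. A sketch using exact Gaussian elimination on a hypothetical 3×5 matrix (not one of the exercise matrices):

```python
from fractions import Fraction

def rank(mat):
    """Rank of a matrix, computed by Gaussian elimination over the rationals."""
    A = [[Fraction(v) for v in row] for row in mat]
    m, n = len(A), len(A[0])
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, m) if A[i][c] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        A[r], A[piv] = A[piv], A[r]
        for i in range(m):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [A[i][j] - f * A[r][j] for j in range(n)]
        r += 1
    return r

# The third row is the sum of the first two, so the rank is 2,
# and the Rank Theorem then forces nullity = 5 - 2 = 3.
A = [[1, 2, 0, 1, 3],
     [0, 0, 1, 4, 1],
     [1, 2, 1, 5, 4]]
print(rank(A), len(A[0]) - rank(A))  # 2 3
```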

A7 (a) A basis for the nullspace is {…} (or any pair of linearly independent vectors orthogonal to the given vector); a basis for the range is {…}.

(b) For the nullspace, a basis is {…}; for the range, {…}.

(c) A basis for the nullspace is the empty set; the range is ℝ³, so take any basis for ℝ³.

{t ' , �}
1 0 0
1
0 1 0
3
AS (a) n = 5 (b) 2 -1 ' 0 (c) m = 4 (d)
1
0 0 1
2 -1
3 1
-2 -3 -2 -3
-1 -1
(e) x = s 1 + t 0 , s, t E R So, a spanning set is 0
0 -1 0 -1
0 1 0
-2 -3
1 -1
(f) The set is also linearly independent, so it is a basis.
0 -1
0 1

(g) The rank of A is 3 and a basis for the solution space has two vectors in it, so the dimension of the solution space is 2. We have 3 + 2 = 5, the number of variables in the system.

Section 3.5
A Problems

1 [ 5
�]
[
Al (a)
23 -2

i
2 0 -1
(b) -1 1 -1
-1 0 1
(c) It is not invertible.
-1
(d)
H �]6
1
0

10 -5/2 -7/2
2 -1/2 -1/2
(e)
-2 -3 1 1
0 -3 0

0 -1 -2
0 ] 0 -] 2
(f) 0 0 1 -1 1
0 0 0 1 -2
0 0 0 0 1

A2 (a)
Hl
(b)
ni
(c)
[=�]

A3 (a) A-1 =[-32 -�]. s-1 [-� n =

[� {6]. [-l� _;]


-

(b) (AB) = (AB)-1 =

(c) (3At1 =[213 -2/3


1/3]
-1

(d) (ATtl [ 2 �J
=
-1

= [ -YJ/2
l 12 YJ/2/2]
-

I 1
A4 (a) [Rrr;6] - = [R_rr/6]

(b) [� �]
(c) [165 �]
(d) [� -� �i
0 0 1

AS (a) [SJ= [� n [-� �]


[S-1] =

(b) [R] = [� � ] =[WI)

(c) [(RoSt1]= [� �l [ � �]
- [(SoR)-1)=
-

A6 Let y, v ∈ ℝⁿ and t ∈ ℝ. Then there exist x, u ∈ ℝⁿ such that x = M(y) and u = M(v). Then L(x) = y and L(u) = v. Since L is linear, L(tx + u) = tL(x) + L(u) = ty + v. It follows that

M(ty + v) = tx + u = tM(y) + M(v)

So, M is linear.

Section 3.6

[� 1 ol EA= [6-1 23 Tl
A Problems

E -5 -u
Al (a) = 0,

Ol [
0 1 4

2
�) E = [� 0
E A
0 1
0
= 2
-1 3 �]
,
I
4

2
E [� �].EA -:
0
(c)
=
1
0 -1
= [ -23 �I -4
=E [� 00 �].EA= H 1822 2�]

(d) 6

01 A= 2
(e)
E = [� 0 �].E H 10 �] 3

00 001 001 000


I

00
A2 (a)

00 000 00 00
-3

0 00
(b)

10 01 00 00
(c)
00 00 0 0
-3

021 00 001 000


000
l
(d)

00 001 00 000
000
(e)
l

001 001 00 000


000
(f)

1 -1.
A3 (a) It is elementary. The corresponding elementary row operation is R3 + (−4)R2.

(b) It is not elementary. Two rows have been multiplied by constants.

(c) It is not elementary. We have multiplied row 1 by 3 and then added row 1 to row 3.

(d) It is elementary. The corresponding elementary row operation is R1 ↔ R3.

(e) It is not elementary. All three rows have been swapped.

(f) It is elementary. A corresponding elementary row operation is (1)R1.

E1 = 00 01 01 =] [00 01 1/20 = 00 0 �] = l� 01 �]
3
I I -4 l -
A4 (a) E2 '£3 l '£4

-A 1 = [00 1-02 01
,

l -3

[l 0 1
/2
0 0
A= 0 1 0°][0 0 �rn 01 m� 01 �]
0 0 1 0 1 3

E1 = [ 0I 00 n E2 = [� 00 1 J =[00I 001 �] = [� 0 �]

O -2 I -2
(b) 1 1 -3 '£3 '£4 1

A-1 = [-7 0 ]
-2
-2 4
6 1 -3

A= [0 00 m� 00 rn� 00 �m 0 �]
-2 1
I 2
1 1

[I 0
Ei = � 01 HE2= [� 001 HE,=[� 00 H
2

- 1 /4

0 0
(c)

E4 = [� 01 n E, = [� 01 �I
A-1 = [ 0 !] 1 /2
4
- 1 /4

0
A=H 0 �JU 001 �rn 00 �rn 00 001 ][100 00 10i
-1/2

1
- 1 /4

-4 1
-I

0 00 00 01 0 00 00 01 1 00 00
1

E i = 0 0 E2 = 0 0 1 0 = 0 0
2
1 1

0 0 0 0 0 1 0 0 0 0 01 0 0 0 00 0
(d) , £
1 O ' 3 1 O'
2

= 00 0 00 'Es= 00 01 0 00 'E6 = 00 0 0 0
1 1 1 -4

0 00 00 0 0 0 1 0 0 0
1
£4
-1 1 /2 O'
1

= 00 0 01 00
1 -1
1

000
£7

A-1 = 1 1 0 00
3 4 -2 -1

0
1
- /2
0 - 1 /2

0 00 00 0 0 00 00 0 1 00 00 0 0 00 00
2
1 /2
1

-1
A= 0 0 0 0 0 1 0 0 0 0 0 0
-2 1

01 0 0 0 0 0 1 0 0 00 00 0 0 0 0 0 0
1 1
1 1 1
-2

00 01 0 00 00 01 0 00 00 01 01 00 4 1 1

000100010001 2

Section 3.7

A Problems

-:]
-1
-� -
�]
-1201/210001 53 -13/2
01 0000504 -7/2 114
0 001 00 - 0 0
(d)

10 10 000001 -32-2 1
30-4/3-1 17/91 01 00 00-30-37
(e)

(f)
--3/221 3/210000
1
0-20-11 2220
0 0 0 4 0
-1 -221 0 000
A2(a)LU=[=� ! rn� � _;];11=[=H12=Ul
H 0
LU= rn� -� �i 11 = m 1, =r=�i
(b)
- ,

LU=[=� ! rn� � H 11= [�]· 1, {�]


(c )

1 01 0000-10 -12 -33 01 -11 --53


0
LU= -3-1 2001 0 00 00-1201 -31 , 12= -20
(d)
O;xi=
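The LU factorizations in A2 can be checked by multiplying the factors back together. A minimal Doolittle-style sketch without row interchanges (it assumes all pivots are nonzero; the 3×3 matrix below is hypothetical, not one of the exercise matrices):

```python
from fractions import Fraction

def lu(A):
    """Doolittle factorization A = LU with unit diagonal on L (no pivoting)."""
    n = len(A)
    U = [[Fraction(v) for v in row] for row in A]
    L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]      # multiplier stored in L
            U[i] = [U[i][j] - L[i][k] * U[k][j] for j in range(n)]
    return L, U

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 1, 1], [4, 3, 3], [8, 7, 9]]
L, U = lu(A)
print(matmul(L, U) == A)  # True: the factors reproduce A
```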

Chapter 3 Quiz

E Problems

El (a)
[-14-1 10
1 =3179]

(b) Not defined

( c)
1-� =��1
-8 -42

E2 (a) fA(a) = [�] - 1


.1ACv) = [-��]
[-16 ] -11
(b)

ol
17 0

E3 (a) [R] = [R,13] = [ �� 2


-Y312
1/ 2
0
0
1

(b) [M] = [re ftc 1 1 2) ] = � [-7 -� �i


1
- ,- ,

3
2 2 -1
2+ Y3 -l - 2Y3 2- 2
(c) [Ro M] = � 2 \(3 - 1 - '\f3 + 2 2 '\f3+ 2
6
4 4 -2
-2 -1
1 0
E4 The solution space of Ax = 0 is x = s 1 +t 0 , s, t E R The solution set
0 1
0 0
5 -2 -1
6 1 0
of Ax= bis x = 0 + s 1 +t 0 , s, t E R In particular, the solution set is
0 0 1
7 0 0
5
6
obtained from the solution space of Ax = 0 by translating by the vector 0 .
0
7
E5 (a) u is not in the columnspace of B; v is in the columnspace of B.

(b) x =
l=il
(cJY = m
1 0 0
0 0
E6 A basis for the rowspace of A is 1 , -1 , 0 . A basis for the columnspace

{� � !}
0 0
-1 3 2
-1 1
1 0
-3
of A is , , ·A basis for tbe nul\space is 1 0
0 -2
3 3
0

2/3 0 0 1/3
1/6 0 1/2 -1/6
0 1 0 0
-1/3 0 0 1/3

E8 The matrix is invertible only for p ≠ 1. The inverse is


1 p
� [-
-1
p
1 - 2p
-1
-p
p .
1 i
. . . -;; � �

E9 By definition, the range of L is a subset of ℝᵐ. We have L(0) = 0, so 0 ∈ Range(L). If x, y ∈ Range(L), then there exist u, v ∈ ℝⁿ such that L(u) = x and L(v) = y. Hence, L(u + v) = L(u) + L(v) = x + y, so x + y ∈ Range(L). Similarly, L(tu) = tL(u) = tx, so tx ∈ Range(L). Thus, Range(L) is a subspace of ℝᵐ.

ElO Consider c₁L(v₁) + ··· + cₖL(vₖ) = 0. Since L is linear, we get L(c₁v₁ + ··· + cₖvₖ) = 0. Thus, c₁v₁ + ··· + cₖvₖ ∈ Null(L) and so c₁v₁ + ··· + cₖvₖ = 0. This implies that c₁ = ··· = cₖ = 0 since {v₁, …, vₖ} is linearly independent. Therefore, {L(v₁), …, L(vₖ)} is linearly independent.

Ell (a) E1 =[� � �], [� � � l· [� � �1. [�


0
1 2
0 1
E =
2
0 0 1/4
£3 =
0 0 1
£4 =
0 0
o
1 3/2
1
o

I
(b) A= [� � rn� ! m� ! -�i [� ! +1
E12 (a) K = /3
(b) There is no matrix K.

(c) The range cannot be spanned by the given vector because this vector is not in ℝ³.

(d) The matrix of L is any multiple of [ � =���1


2 -4/3
(e) This contradicts the Rank Theorem, so there can be no such mapping L.

(f) This contradicts Theorem 3.5.2, so there can be no such matrix.

CHAPTER4

Section 4.1

A Problems

Al (a) -1 - 6x + 4x2 + 6x3


(b) -3 + 6x - 6x2 - 3x3 - 12x4

(c) - 1 + 9x - 1 l x2 - 17 x3

(d) -3 + 2x + 6x2

(e) 7 - 2x - 5x2
(f) l3 �x + llx2
3 3
_

(g) -fi. - 7r + -fi.x + ( -fi. + 7r)X2



A2 (a) 0 = 0(1 + x2 + x3) + 0(2 + x + x3) + 0(-1 + x + 2x2 + x3)


(b) 2 + 4x + 3x2 + 4x3 is not in the span.
(c) -x + 2x2 + x3 = 2(1 + x2 + x3) + (-1)(2 + x + x3) + 0(-1 + x + 2x2 + x3)
(d) -4 - x + 3x2 = 1(1 + x2 + x3) + (-2)(2 + x + x3) + 1(-1 + x + 2x2 + x3)
(e) -1 - 7 x + 5x2 + 4x3 = (-3)(1+ x2 + x3) + 3(2 + x + x3) + 4(-1+x+2x2 + x3)
(f) 2 + x + 5x3 is not in the span.
A3 (a) The set is linearly independent.

(b) The set is linearly dependent. We have

0 = (−3t)(1 + x + x²) + tx + t(x² + x³) + t(3 + 2x + 2x² − x³), t ∈ ℝ.

(c) The set is linearly independent.

(d) The set is linearly dependent. We have

0 = (−2t)(1 + x + x³ + x⁴) + t(2 + x − x² + x³ + x⁴) + t(x + x² + x³ + x⁴), t ∈ ℝ.

A4 Consider a linear combination of the vectors of B set equal to a₁ + a₂x + a₃x². The corresponding augmented matrix is

[1 −1 1 | a₁]
[0 1 −2 | a₂]
[0 0 1 | a₃]

Since there is a leading 1 in each row, the system is consistent for all polynomials a₁ + a₂x + a₃x². Thus, B spans P₂. Moreover, since there is a leading 1 in each column, there is a unique solution and so B is also linearly independent. Therefore, it is a basis for P₂.

Section 4.2

A Problems

Al (a) It is a subspace.
(b) It is a subspace.

(c) It is a subspace.

(d) It is not a subspace.

(e) It is not a subspace.

(f) It is a subspace.
A2 (a) It is a subspace.
(b) It is not a subspace.

(c) It is a subspace.

(d) It is a subspace.

A3 (a) It is a subspace.

(b) It is a subspace.

(c) It is a subspace.

(d) It is not a subspace.

(e) It is a subspace.

A4 (a) It is a subspace.

(b) It is not a subspace.

(c) It is a subspace.

(d) It is not a subspace.

AS Let the set be {v₁, …, vₖ} and assume that vᵢ is the zero vector. Then we have

0 = 0v₁ + ··· + 0vᵢ₋₁ + 1vᵢ + 0vᵢ₊₁ + ··· + 0vₖ

Hence, by definition, {v₁, …, vₖ} is linearly dependent.

Section 4.3

A Problems

Al (a) It is a basis.

(b) Since it only has two vectors in ℝ³, it cannot span ℝ³ and hence cannot be a basis.

(c) Since it has four vectors in ℝ³, it is linearly dependent and hence cannot be a basis.

(d) It is not a basis.

(e) It is a basis.

A2 Show that it is a linearly independent spanning set.

A3 (a) One possible basis is {…}. Hence, the dimension is 2.

(b) One possible basis is {…}. Hence, the dimension is 3.

A4 (a) One possible basis is {[ - � �] , [� -n, [� =� ]} .Hence, the dimension is

3.

(b) One possible basis is {[ - � =�J · U � ] [� � ] [� � ]}


- · · . Hence, the

dimension is 4.

(c) 'l3 is a basis. Thus, the dimension is 4.

AS (a) The dimension is 3.


(b) The dimension is 3.

(c) The dimension is 4.



A6 Alternate correct answers are possible.

(a)
mrnl}
(b)
mrnJHJ}
A7 Alternate correct answers are possible.

(a)
{ � -! � )
,
,

1 1 1

)
0 1 0
-1 ' 0 ' 0
0 0 0

AS (a) One possible basis is {x, 1 - x2}. Hence, the dimension is 2.


(b) One possible basis is {[� �], [� �], rn �]} .Hence, the dimension is 3.

(c) One possible basis is {[ :l · [ �]}


_ _ . Hence the dimension is 2.
,

(d) One possible basis is {x - 2, x2 - 4}. Hence, the dimension is 2.


(e) One possible basis is {[ � �J.rn �]·[� �] }
- . Hence, the dimension

is 3.

Section 4.4

A Problems

Al (a) [x]3 = [_;J. [y]3 = [;]


�) [xJ. = UJ. Ul [y].=

(c) [x].= nl· {�] [yJ.

(d) [x]3 = [n [y]3 = [-�]


(e) [x]• = Hl· {�] �l•

A2 (a) Show that it is linearly independent and spans the plane.

(b) m m and are not in the plane We have m. = [ ;J


_

A3 (a) Show that it is linearly independent and spans P₂.

(b) [p(x)]" =[: l· [q(x)]" = [H ['(x)].= Hl


(c) [2- 4x + !Ox']•= [H We have

[4- 2x + 7x'J. + [-2 - 2x + 3x'J.= m +!l [�] =

= [2- 4x + l0x2]13 = [(4 - 2) + (-2 - 2)x + (7 + 3)x2]!B

A4 (a) i. A is in the span of B.

ii. B is linearly independent, so it forms a basis for Span B.

iii. [A]!B = [-i j;l 3/4


(b) i. A is not in the span of B.
ii. B is linearly independent, so it forms a basis for Span B.
AS (a) The change of coordinates matrix Q from '13-coordinates to S-coordinates is

[! � =�]·
l 0 3
The change of coordinates matrix P from S-coordinates to 'B-

3I11
coordinates isP= -15/11
[ o
1
2/11l
1/11 .
-1/11 0 3/11
(b) The change of coordinates matrix Q from '13-coordinates to S-coordinates is

[-� 5 -2 1
]
� � .The change of coordinates matrixP from S-coordinates to '13-
2/9 -1/9
coordinates isP= 7/9 1/9
[ 1/9]
-1/9 .
4/9 7/9 2/9
(c) The change of coordinates matrix Q from '13-coordinates to S-coordinates is

[- � -� �l·
-1 -1 1
The change of coordinates matrix P from S-coordinates to '13-

1/3 2/9
coordinates isP= 0 -1/3
[ -8/9]
1/3 .
1/3 -1/9 4/9
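In each part of A5, P is the inverse of Q, so the product PQ must be the identity, which gives a quick check of the answers. A sketch with a hypothetical 2×2 change of coordinates matrix (not one of the 3×3 matrices above):

```python
from fractions import Fraction

def inv2(M):
    """Inverse of a 2x2 matrix over the rationals."""
    a, b = M[0]
    c, d = M[1]
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

# Q takes B-coordinates to standard coordinates; P goes back.
Q = [[1, 3], [2, 1]]
P = inv2(Q)
PQ = [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
print(PQ == [[1, 0], [0, 1]])  # True: P and Q undo each other
```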

Section 4.5

A Problems

Al Show that each mapping preserves addition and scalar multiplication.

A2 (a) det is not linear. (b) L is linear.


(c) T is not linear. (d) M is linear.

A3 (a) y is in the range of L. We have L ([�\] = y.

(b) y is in the range of L. We have L(l + 2x + x2) = y.


(c) y is not in the range of L.

(d) y is not in the range of L.


A4 Alternate correct answers are possible.

(a) A basis for Range(L) is {…}; a basis for Null(L) is {…}. We have rank(L) + nullity(L) = 2 + 1 = 3 = dim ℝ³.

(b) A basis for Range(L) is {1 + x, x²}; a basis for Null(L) is {…}. We have rank(L) + nullity(L) = 2 + 1 = 3 = dim ℝ³.

(c) A basis for Range(L) is {1}; a basis for Null(L) is {…}. We have rank(L) + nullity(L) = 1 + 3 = 4 = dim M(2, 2).

(d) A basis for Range(L) is {…}; a basis for Null(L) is the empty set. We have rank(L) + nullity(L) = 4 + 0 = 4 = dim P₃.

Section 4.6

A Problems

Al (a) [L]23 = [� -n [L(x)]23 =


[�]
(b) [L]13 = [� �
-1 -1
�].
5
[L(x)]23 = [ �� l
-11

A2 (a) [L]13 =
r-� �]
(b) [L]13 = [� � ]

A3 (a) [L(v,)]• = [!]. [L(v 2)], =


[H [L(vi)l• = [H [! � �]
[LJ. =

(b) [L(v,)J. = [H [4v2ll• =


[�]· [4v1)]• =
[!]· [� � !]
[L], =

(c) [4v,)]• =
[�]· [L(v2)J. = [ll [4vi)l• =
l �l [� �l
= ·
[L]• =

1 =

A4 (a) 23 =
{[-�], [�]}. [reflo,-2J]B = [-� �]
(b)
B
=
{[J. [�].[:]}· [proj"' -ol• =
[�

(b) [LJ. =
H � �I
(c) L(l, 2 , 4 m
) =

A6 (a)
Ul. oUl ]
[10 00
=

-2 2
(b) [L]B = 3
-3

(c) 45,3,-5)
[�1] =

A7 (a) [L]B = [�� ��]


(b) [L]B = [� -�]
(c) [L]B = [-1408 -11528]
(d) [L]B =
[� �]

(e) [L], = [� �] 2
4
0

(0 [L]•= [� �] 0
-2
0

AS (a) [L]1l = [ _!]


2
0
-2
1
-3
I

(b) [L], = [� -� - � ]
2 -1

(c) [DJ•= [� � �1
0 0

(d) [T]1l = [-� � =�]


2
-

2 4

Section 4.7

A Problems

Al In each case, verify that the given mapping is linear, one-to-one, and onto.
a
b
(a) DefineL(a+bx+cx2 + dx3) =

c
d

(b) Define L ([; �]) = �


d

(c) DefineL(a +bx+cx2 +dx3)= [; �l


(d) DefineL(a1(x - 2) + a2 (x2 - 2x)) = a i
o
[ O]
a2 ·

Chapter 4 Quiz

E Problems

El (a) The given set is a subset of M(4, 3) and is non-empty since it clearly contains the zero matrix. Let A and B be any two vectors in the set. Then a₁₁ + a₁₂ + a₁₃ = 0 and b₁₁ + b₁₂ + b₁₃ = 0. Then the first row of A + B satisfies

(a₁₁ + b₁₁) + (a₁₂ + b₁₂) + (a₁₃ + b₁₃) = (a₁₁ + a₁₂ + a₁₃) + (b₁₁ + b₁₂ + b₁₃) = 0 + 0 = 0

so the subset is closed under addition. Similarly, for any t E JR., the first row of
tA satisfies
ta11 + ta12 + ta13 = t(au + a12 + a13) = 0

so the subset is also closed under scalar multiplication. Thus, it is a subspace


of M(4, 3) and hence a vector space.

(b) The given set is a subset of the vector space of all polynomials, and it clearly
contains the zero polynomial, so it is non-empty. Let p(x) and q(x) be in the
set. Then p(l) = 0, p(2) = 0, q(l) = 0, and q(2) = 0. Hence, p + q satisfies

(p + q)( l ) = p(l) + q(l) = 0 and (p + q)(2) = p(2) + q(2) = 0

so the subset is closed under addition. Similarly, for any t ∈ ℝ, tp satisfies

(tp)(l) = tp(l) = 0 and (tp)(2) = tp(2) = 0

so the subset is also closed under scalar multiplication. Thus, it is a subspace


and hence a vector space.

(c) The set is not a vector space since it is not closed under scalar multiplication.

For example, � [ � �] is not in the set since it contains rational entries.

(d) The given set is a subset of ℝ³ and is non-empty since it clearly contains 0. Let x and y be in the set. Then x₁ + x₂ + x₃ = 0 and y₁ + y₂ + y₃ = 0. Then x + y satisfies

(x₁ + y₁) + (x₂ + y₂) + (x₃ + y₃) = (x₁ + x₂ + x₃) + (y₁ + y₂ + y₃) = 0 + 0 = 0

so the subset is closed under addition. Similarly, for any t ∈ ℝ, tx satisfies tx₁ + tx₂ + tx₃ = t(x₁ + x₂ + x₃) = 0, so the subset is also closed under scalar multiplication. Thus, it is a subspace of ℝ³ and hence a vector space.

E2 (a) A set of five vectors in M(2, 2) is linearly dependent, so the set cannot be a
basis.

(b) Consider

Row reducing the coefficient matrix of the corresponding system gives

0 0 2 1 0 0 2
1 2 0 1 0
2 4 0 0 -1
- 1 3 -2 0 0 0 0

Thus, the system has infinitely many solutions, so the set is linearly dependent,
and hence it is not a basis.

(c) A set of three vectors in M(2, 2) cannot span M(2, 2), so the set cannot be a
basis.

E3 (a) Consider
1
0 10 31 11 00 3 0
t1
13 + t2
02 -20 00
+ t3 + t4

Row reducing the coefficient matrix of the corresponding system gives

10 33 11 01 01 00 -20
l -
13 l 02 -20 000 000 001 001
0 1
Thus, B = {v₁, v₂, v₃} is a linearly independent set. Moreover, v₄ can be written as a linear combination of v₁, v₂, and v₃, so B also spans S. Hence, it is a basis for S and so dim S = 3.

(b) We need to find constants t₁, t₂, and t₃ such that

10 1 33 02
t1
13 +0 t2 1
02 -3 -1 + t3

-5

11 33 02 01 0 00 -2-1
01 01 -3-1 - 00 00 01 01
3 2 000 0 -5

Thus, [1]• = [=:]


E4 (a) Let v₁ and v₂ denote the two given vectors. Then {v₁, v₂} is a basis for the plane since it is a set of two linearly independent vectors in the plane.

(b) Since v₃ does not lie in the plane, the set B = {v₁, v₂, v₃} is linearly independent and hence a basis for ℝ³.

(c) We have L(v1) = v1, L(v2) = v2, and L(v3) = -v3, so

[01 o1 ol0
[Lhi =

0 0 -1
(d) The change of coordinates matrix from '13-coordinates to S-coordinates (stan­
dard coordinates) is

P
[o 01 01]
I
=

0 1 -1

[� �]
It follows that
0
[L]s = P[L]BP-1 = 1
0

ES The change of coordinates matrix from S to 13 is

[i -\]
0
p =

Hence,

[L]• p -1 [-: �] [�
-1
0 1
2/3
2/3
11/3
-10/3
1
p
=

-2 1/3 1/3
=

E6 If t₁L(v₁) + ··· + tₖL(vₖ) = 0, then L(t₁v₁ + ··· + tₖvₖ) = 0, and hence t₁v₁ + ··· + tₖvₖ ∈ Null(L). Thus, t₁v₁ + ··· + tₖvₖ = 0, and hence t₁ = ··· = tₖ = 0 since {v₁, …, vₖ} is linearly independent. Thus, {L(v₁), …, L(vₖ)} is linearly independent.


E7 (a) False. ℝⁿ is an n-dimensional subspace of ℝⁿ.

(b) True. The dimension of P₂ is 3, so a set of four polynomials in P₂ must be linearly dependent.

(c) False. The number of components in a coordinate vector is the number of vectors in the basis. So, if B is a basis for a 4-dimensional subspace, the B-coordinate vector would have only four components.

(d) True. Both ranks are equal to the dimension of the range of L.

(e) False. Consider a linear mapping L : P₂ → P₂. Then the range of L is a subspace of P₂, while for any basis B, the columnspace of [L]_B is a subspace of ℝ³. Hence, Range(L) cannot equal the columnspace of [L]_B.

(f) False. The mapping L : ℝ → ℝ² given by L(x₁) = (x₁, 0) is one-to-one, but dim ℝ ≠ dim ℝ².

CHAPTERS

Section 5.1

A Problems

Al (a) 38 (b) -5 (c) 0


(d) 0 (e) 0 (f) 48
A2 (a) 3 (b) 0 (c) 196 (d) -136

A3 (a) O (b) 20 (c) 18


(d)-90 (e) 76 (f) 420

A4 (a)-26 (b) 98

AS(a)-1 (b) 1 (c) -3

Section 5.2

A Problems

Al (a) detA= 30, so A is invertible.

(b) detA = 1, so A is invertible.

(c) detA=8,so A is invertible.

(d) detA=0,so A is not invertible.

(e) detA =-1120,so A is invertible.

A2 (a) 14 (b) -12 (c)-5 (d) 716

A3 (a) det A = 3p − 14, so A is invertible for all p ≠ 14/3.

(b) det A = −5p − 20, so A is invertible for all p ≠ −4.

(c) det A = 2p − 116, so A is invertible for all p ≠ 58.

A4 (a) det A = 13, det B = 14, det AB = 182 = (det A)(det B)

(b) det A = −2, det B = 56, det AB = −112 = (det A)(det B)

AS (a) Since rA is the matrix where each of the n rows of A has been multiplied by r, we can use Theorem 5.2.1 n times to get det(rA) = rⁿ det A.

(b) We have AA⁻¹ = I, so

1 = det I = det(AA⁻¹) = (det A)(det A⁻¹)

by Theorem 5.2.7. Since det A ≠ 0, we get det A⁻¹ = 1/det A.

(c) By Theorem 5.2.7, we have 1 = det I = det A³ = (det A)³. Taking cube roots of both sides gives det A = 1.
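The identities in A5 are quick to check on a small example. A sketch for 2×2 matrices (the matrix A below is hypothetical, not one from the exercises):

```python
from fractions import Fraction

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[3, 1], [4, 2]]  # det A = 2
r = 5
rA = [[r * v for v in row] for row in A]
print(det2(rA) == r ** 2 * det2(A))  # True: det(rA) = r^n det A with n = 2

d = Fraction(det2(A))
Ainv = [[A[1][1] / d, -A[0][1] / d],
        [-A[1][0] / d, A[0][0] / d]]
print(det2(Ainv) == 1 / d)  # True: det(A^{-1}) = 1/det A
```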

Section 5.3

A Problems

Al (a) � [ ��
2 -�] (b) � [=� �]
(c)
1
[
-3
-1
21
7
11
5
] (d) � [= � - �� =:1
S 4
3 -13 -7 -2 -8 -4
A2 (a) cofA= \3�t 2�3t �;1]
-3 -2 -2t -6
[
(b) A(cofAl= � -2t�l7 � ]·So detA= -2t-17 and =
-2t-17 0
1

[
A-
-2t-17
-1 3 t -3
-2 -11 -6 l
+

-2t�l7 5 2 -3t -2 - 2t , provided -2t-17 t- 0.

3 51/19] (b) [ 7/5] (c) [-221/1111] (d) [-13/5


(a) [-4/19 2/5
]
A
-11/15 � -8/5
Section 5.4

A Problems

Al (a) 11
Cb) Ail= [�H Av=[��]
(c) -26
(d) ldet u� ��JI = 286 = 1(-26)1(11)
A2 Ail=[;]. Av= [-n Area = Jctet [� -�JI= 1-81= 8.
A3 (a) 63
(b) 42
(c) 2646
A4 (a) 41
(b) 78
(c) 3198
AS (a) 5
(b) 245
A6 The n-volume of the parallelotope induced by v₁, …, vₙ is |det [v₁ ⋯ vₙ]|. Since adding a multiple of one column to another does not change the determinant (see Problem 5.2.D8), this equals |det [v₁ ⋯ vₙ₋₁ (vₙ + tv₁)]|, which is the volume of the parallelotope induced by v₁, …, vₙ₋₁, vₙ + tv₁.
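The invariance used in A6 can be illustrated in ℝ³: adding a multiple of one column to another leaves the determinant, and hence the volume, unchanged. The 3×3 matrix below is a hypothetical example:

```python
def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[1, 2, 0],
     [3, 1, 4],
     [0, 5, 2]]
t = 7
# Replace the third column by (third column) + t * (first column).
B = [[row[0], row[1], row[2] + t * row[0]] for row in A]
print(det3(A), det3(B))  # both are -30
```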

Chapter 5 Quiz

E Problems

-2 4 0 0
-2 4 0
El
-3
1 -2
6
2
0
9
3
=2(-1)2+3 -3
1 -1
6 3 =(-2)(3)(-1)2+3
0
1 -2
1
4
-1
1 = -12

-1 0 0

3 2 7 -8 3 2 7 -8
-6 -1 -9 20 0 3 5 4
E2 = 180
3 8 21 -17 0 0 4 -17
=

3 5 12 0 0 0 5

0 2 0 0 0
0 0 0 3 0
E3 0 0 0 0 1 = 5(2)(4)(3)(1) = 120
0 0 4 0 0
5 0 0 0 6

E4 The matrix is invertible for all k * -* ± �.


ES (a) 3(7) = 21

(b) (−1)⁴(7) = 7

(c) 2⁵(7) = 224

(d) det A⁻¹ = 1/7

(e) 7(7) = 49

E6 (A-1) 1
3
= -�
2
E7 Xz =
de A; 1 -1 -1
-2 2

ES (a) det [ � � �i
-2
-
3 4
=33

(b) 1-241(33) =792

CHAPTER6

Section 6.1

A Problems

A1 The first two vectors are not eigenvectors of A. The third vector is an eigenvector with eigenvalue λ = 0; the fourth is an eigenvector with eigenvalue λ = −6; and the fifth is an eigenvector with eigenvalue λ = 4.

A2 (a) The eigenvalues are λ₁ = 2 and λ₂ = 3. The eigenspace of λ₁ is Span{…}. The eigenspace of λ₂ is Span{…}.

(b) The only eigenvalue is λ = 1. The eigenspace of λ is Span{…}.

(c) The eigenvalues are λ₁ = 2 and λ₂ = 3. The eigenspace of λ₁ is Span{…}. The eigenspace of λ₂ is Span{…}.

(d) The eigenvalues are λ₁ = −1 and λ₂ = 4. The eigenspace of λ₁ is Span{…}. The eigenspace of λ₂ is Span{…}.

(e) The eigenvalues are λ₁ = 5 and λ₂ = −2. The eigenspace of λ₁ is Span{…}. The eigenspace of λ₂ is Span{…}.

(f) The eigenvalues are λ₁ = 0 and λ₂ = −3. The eigenspace of λ₁ is Span{…}. The eigenspace of λ₂ is Span{…}.
A3 (a) λ₁ = 2 has algebraic multiplicity 1. A basis for its eigenspace is {…}, so it has geometric multiplicity 1. λ₂ = 3 has algebraic multiplicity 1. A basis for its eigenspace is {…}, so it has geometric multiplicity 1.

(b) λ₁ = 2 has algebraic multiplicity 2; a basis for its eigenspace is {…}, so it has geometric multiplicity 1.

(c) λ₁ = 2 has algebraic multiplicity 2; a basis for its eigenspace is {…}, so it has geometric multiplicity 1.

(d) λ₁ = 2 has algebraic multiplicity 1; a basis for its eigenspace is {…}, so it has geometric multiplicity 1. λ₂ = 1 has algebraic multiplicity 1; a basis for its eigenspace is {…}, so it has geometric multiplicity 1. λ₃ = −2 has algebraic multiplicity 1; a basis for its eigenspace is {…}, so it has geometric multiplicity 1.

(e) λ₁ = 0 has algebraic multiplicity 2; a basis for its eigenspace is {…, …}, so it has geometric multiplicity 2. λ₂ = 6 has algebraic multiplicity 1; a basis for its eigenspace is {…}, so it has geometric multiplicity 1.

(f) λ₁ = 2 has algebraic multiplicity 2; a basis for its eigenspace is {…, …}, so it has geometric multiplicity 2. λ₂ = 5 has algebraic multiplicity 1; a basis for its eigenspace is {…}, so it has geometric multiplicity 1.
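Each claimed eigenpair above can be verified by checking Av = λv directly. A sketch with a hypothetical 2×2 matrix (not one of the exercise matrices):

```python
def mat_vec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def is_eigenvector(A, v, lam):
    """True when v is nonzero and A v = lam * v."""
    return any(v) and mat_vec(A, v) == [lam * x for x in v]

A = [[2, 1],
     [0, 3]]
print(is_eigenvector(A, [1, 0], 2))  # True
print(is_eigenvector(A, [1, 1], 3))  # True
print(is_eigenvector(A, [1, 2], 2))  # False
```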


Section 6.2

A Problems

Al (a) rn is an eigenvector of A with eigenvalue 14, r- �] and is an eigenvector of A

with eigenvalue -7. P-


1
= � [-� n [104 -�l
1
p- AP =

(b) P does not diagonalize A .

(c) rn is an eigenvector of A with eigenvalue 1, [ �]


and is an eigenvector of A

. 1ue - 3 p-
1 - 11 -1 ] 1 -
p- A p - [1 O]
[ 2 0 _3 .
.
with eigenva .
-
_

[-il [-�l rn
,

(d) and are both eigenvectors of A with eigenvalue -2, and is

an eigenvector of A with eigenvalue 1 0. p-I � [=: -� -n p- I AP

[-� -� �1
= =

0 0 10
A2 Alternate correct answers are possible.

(a) The eigenvalues are λ₁ = 1 and λ₂ = 8, each with algebraic multiplicity 1. A basis for the eigenspace of λ₁ is {…}, so the geometric multiplicity is 1. A basis for the eigenspace of λ₂ is {…}, so the geometric multiplicity is 1. Therefore, by Corollary 6.2.3, A is diagonalizable with P = […] and D = […].

n-;n.
(b) The eigenvalues are '11 = -6 and '12 = 1, each with algebraic multiplicity 1.

A basis for the eigenspace of '11 is so the geometric multiplicity is

1. A basis for the eigenspace of '12 is {[ � ]} . so the geometric multiplicity is

1. Therefore, by Corollary 6.2.3, A is diagonalizable with P = [-; �] and

D =

[-� �l
(c) The eigenvalues of A are not real. Hence, A is not diagonalizable over ℝ.

(d) The eigenvalues are '11 = -1, '12 = 2, and '13 = 0, each with algebraic

multiplicity 1. A basis for the eigenspace of ,!1 is {[-l]}· so the geometric

multiplicity is 1. A basis for the eigenspace of ,i, is {[i]} · so the geometric

multiplicity is 1. A basis for the eigenspace of A3 is {[-�]}, so the geomet·

ric multiplicity is 1. Therefore, by Corollary 6.2.3, A is diagonalizable with

P = [- � � -�1
0 3 1
and D = [-� � �l
0 0 0
·

(e) The eigenvalues are '11 = 1 with algebraic multiplicity 2 and '12 = 5 with alge­

braic multiplicity 1. A basis fonhe eigenspace of ,!1 is {[�]} · so the geometric

mu1tiplicity is 1. A basis for the eigenspace of ,!2 is { [-l ] }


, so the geometric

multiplicity is 1. Therefore, by Corollary 6.2.3, A is not diagonalizable since


the geometric multiplicity of !l1 does not equal its algebraic multiplicity.

(f) The eigenvalues of A are 1, i, and −i. Hence, A is not diagonalizable over ℝ.

(g) The eigenvalues are ;l1 = 2 with algebraic multiplicity 2 and !l2 = -1 with

algebraic multiplicity 1. A basis for the eigenspace of A1 is {[�] [�]}


· · so the

geometric multiplicity is 2. A basis for the eigenspace of ,!2 is {1-i]} · so the

geometric multiplicity is 1. Therefore, by Corollary 6.2.3, A is diagonalizable

with P = 1� � - �i
0 1 2
and D = 1� � �]·
0 0 -1
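Each P and D claimed in A2 can be checked by computing P⁻¹AP. A sketch with a hypothetical 2×2 matrix whose eigenpairs are (2, (1, 0)) and (3, (1, 1)):

```python
from fractions import Fraction

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inv2(M):
    a, b = M[0]
    c, d = M[1]
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2, 1], [0, 3]]
P = [[1, 1], [0, 1]]          # columns are the eigenvectors
D = matmul(matmul(inv2(P), A), P)
print(D == [[2, 0], [0, 3]])  # True: P^{-1} A P is diagonal
```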

A3 Alternate correct answers are possible.

(a) The only eigenvalue is λ₁ = 3 with algebraic multiplicity 2. A basis for the eigenspace of λ₁ is {…}, so the geometric multiplicity is 1. Therefore, by Corollary 6.2.3, A is not diagonalizable since the geometric multiplicity of λ₁ does not equal its algebraic multiplicity.

(b) The eigenvalues are A,1 0 and A,2 8, each with algebraic multiplicity 1.

{[-�]}.
= =

A basis for the eigenspace of A,1 is so the geometric multiplicity is

1. A basis for the eigenspace of -12 is {[ � ]}. so the geometric multiplicity is

1. Therefore, by Corollary 6.2.3, A is diagonalizable with P =


[-� �] and

D =

rn �].
(c) The eigenvalues are -11 3 and A2 -7, each with algebraic multiplicity

{[�]}.
= =

1. A basis for the eigenspace of -11 is so the geometric multiplicity

is 1. A basis for the eigenspace of A,2 is {[-� ]}. so the geometric multiplicity

is 1. Therefore, by Corollary 6.2.3, A is diagonalizable with P =


[ � -�] and

D
=
[� -�l
{I-:]}
(d) The eigenvalues are A,1 = 2 with algebraic multiplicity 2 and A,2 = -2 with

algebraic multiplicity 1. A basis for the eigenspace of A1 is so the ge-

ometric multiplicity is 1. A basis for the eigenspace of 12 is


{I:]},
metric multiplicity is 1. Therefore, by Corollary 6.2.3, A is not diagonalizable
so the geo­

since the geometric multiplicity of A,1 does not equal its algebraic multiplicity.

{[-�l ri]}
(e) The eigenvalues are A,1 = -2 with algebraic multiplicity 2 and A,2 = 4 with

algebraic multiplicity 1. A basis for the eigenspace of A1 is , so

the geometric multiplicity is 2. A basis for the eigenspace of 12 is {I:]}, so the

1-� - � �i 1-� -� �]·


geometric multiplicity is 1. Therefore, by Corollary 6.2.3, A is diagonalizable

with P = and D =

1 0 1 0 0 4

(f) The eigenvalues are λ1 = 2 with algebraic multiplicity 2 and λ2 = 1 with algebraic multiplicity 1. A basis for the eigenspace of λ1 is {[…], […]}, so the geometric multiplicity is 2. A basis for the eigenspace of λ2 is {[…]}, so the geometric multiplicity is 1. Therefore, by Corollary 6.2.3, A is diagonalizable with P = […] and D = [2 0 0; 0 2 0; 0 0 1].


(g) The eigenvalues are 2, 2 + i, and 2 − i. Hence, A is not diagonalizable over R.

Section 6.3

A Problems

A1 (a) A is not a Markov matrix.
(b) B is a Markov matrix. The invariant state is [6/13; 7/13].
(c) C is not a Markov matrix.
(d) D is a Markov matrix. The invariant state is [1/3; …].

A2 (a) In the long run, 25% of the population will be rural dwellers and 75% will be urban dwellers.
(b) After five decades, approximately 33% of the population will be rural dwellers and 67% will be urban dwellers.

A3 T = (1/10)[… ; … ; 1 6 1]. In the long run, 60% of the cars will be at the airport, 20% of the cars will be at the train station, and 20% of the cars will be at the city centre.

A4 (a) The dominant eigenvalue is λ = 5.
(b) The dominant eigenvalue is λ = 6.
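Dominant-eigenvalue answers such as A4 can be spot-checked numerically with the power method (iteratively applying A and rescaling). A minimal sketch in plain Python; the 2 × 2 matrix below is a hypothetical example with known eigenvalues 5 and 2, not one of the matrices from the exercises:

```python
def power_iteration(A, x, steps=100):
    """Estimate the dominant eigenvalue of a small matrix A (a list of rows)
    by repeatedly applying A to x and rescaling (the power method)."""
    for _ in range(steps):
        x = [sum(a * xi for a, xi in zip(row, x)) for row in A]
        m = max(abs(c) for c in x)          # rescale to avoid overflow
        x = [c / m for c in x]
    # one more application of A gives the eigenvalue as a component ratio
    Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
    return Ax[0] / x[0]

# Hypothetical matrix with eigenvalues 5 and 2:
print(round(power_iteration([[4.0, 1.0], [2.0, 3.0]], [1.0, 0.0]), 6))  # prints 5.0
```

The iterate converges toward the eigenvector for the eigenvalue of largest absolute value, which is why the ratio settles at the dominant eigenvalue.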
Section 6.4

A Problems

A1 (a) ae^(−5t)[…] + be^(4t)[…], a, b ∈ R
(b) ae^(−0.5t)[…] + be^(0.3t)[…], a, b ∈ R



Chapter 6 Quiz

E Problems

E1 (a) The given vector is not an eigenvector of A.
(b) The given vector is an eigenvector with eigenvalue 1.
(c) The given vector is an eigenvector with eigenvalue 1.
(d) The given vector is an eigenvector with eigenvalue −1.

E2 The matrix is diagonalizable, with P = […] and D = diag(…, …, −2).
E3 λ1 = 2 has algebraic and geometric multiplicity 2; λ2 = 4 has algebraic and geometric multiplicity 1. Thus, A is diagonalizable.

E4 Since A is invertible, 0 is not an eigenvalue of A (see Problem 6.2.D8). Then, if Ax = λx, we get x = λA^(−1)x, so A^(−1)x = (1/λ)x.
E5 (a) One-dimensional
(b) Zero-dimensional
(c) Rank(A) = 2

E6 The invariant state is x = [… ; 1/2].

E7 ae^(−0.1t)[…] + be^(0.4t)[…]

CHAPTER 7

Section 7.1

A Problems

A1 (a) The set is orthogonal. P = [1/√5 −2/√5; 2/√5 1/√5].

(b) The set is not orthogonal.



(c) The set is orthogonal. P = […] (columns with entries such as 3/√10, −1/√15, and 3/√110).
(d) The set is not orthogonal.

A2 (a) [w]_B = […]  (c) [x]_B = […]

A3 (a) [x]_B = […]  (b) [y]_B = […]  (c) [w]_B = […]  (d) [z]_B = […]

A4 (a) It is orthogonal.

(b) It is not orthogonal. The columns of the matrix are not orthogonal.
(c) It is not orthogonal. The columns are not unit vectors.

(d) It is not orthogonal. The third column is not orthogonal to the first or second
column.

(e) It is orthogonal.
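The tests applied in A4 (are the columns unit vectors, and are they pairwise orthogonal?) amount to checking PᵀP = I, which is easy to automate. A minimal sketch; the two matrices at the bottom are hypothetical examples, not the matrices from the exercise:

```python
def is_orthogonal(P, tol=1e-9):
    """Check P^T P = I column by column: every column must be a unit vector,
    and distinct columns must be orthogonal."""
    n = len(P)
    cols = [[P[i][j] for i in range(n)] for j in range(n)]
    for j in range(n):
        for k in range(n):
            dot = sum(cols[j][i] * cols[k][i] for i in range(n))
            target = 1.0 if j == k else 0.0
            if abs(dot - target) > tol:
                return False
    return True

s = 2 ** -0.5  # 1/sqrt(2)
print(is_orthogonal([[s, -s], [s, s]]))        # rotation matrix: True
print(is_orthogonal([[1.0, 1.0], [0.0, 1.0]])) # shear: False
```

This mirrors the reasons given in parts (b)-(d): a matrix fails to be orthogonal exactly when some column is not a unit vector or two columns are not orthogonal.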

A5 (a) …
(b) P = (1/3)[…]
(c) [L]_B = […]
(d) [L]_S = […]

A6 Since B is orthonormal, the matrix P = [−2/√6 1/√3 0; 1/√6 1/√3 1/√2; 1/√6 1/√3 −1/√2] is orthogonal. Thus, we can pick another suitable vector for the remaining column.

Section 7.2

A Problems

A1 (a) [5/2; 5; 5/2; 5]  (b) [2; 9/2; 5; 9/2]  (c) [2; 3; 5; 6]

A2 (a) […]  (b) […]  (c) […]

A3 (a) {…}  (b) {…}  (c) {…}  (d) {…}

A4 (a) {…}  (b) {…}  (c) {…}  (d) {…} (orthonormal bases with entries such as 1/√2, 1/√3, −1/√15, and 5/√45)

A5 Let {v1, …, vk} be an orthonormal basis for S and let {v_{k+1}, …, vn} be an orthonormal basis for S⊥. Then {v1, …, vn} is an orthonormal basis for R^n. Thus, any x ∈ R^n can be written

x = (x · v1)v1 + ⋯ + (x · vn)vn

Then

perp_S(x) = x − proj_S(x)
= x − [(x · v1)v1 + ⋯ + (x · vk)vk]
= (x · v_{k+1})v_{k+1} + ⋯ + (x · vn)vn
= proj_{S⊥}(x)
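The orthonormal bases in A4, and the orthonormal-basis decomposition used in A5, come from the Gram-Schmidt Procedure: subtract from each new vector its projections onto the vectors already produced, then normalize. A minimal sketch; the input vectors below are hypothetical, not one of the sets from the exercises:

```python
def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors (lists of floats):
    subtract projections onto the earlier basis vectors, then normalize."""
    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:
            coef = sum(wi * ui for wi, ui in zip(w, u))  # w . u (u is unit)
            w = [wi - coef * ui for wi, ui in zip(w, u)]
        norm = sum(wi * wi for wi in w) ** 0.5
        basis.append([wi / norm for wi in w])
    return basis

# Hypothetical input spanning a plane in R^3:
Q = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
# Q[0] and Q[1] are unit vectors with Q[0] . Q[1] = 0
```

Because each u appended to `basis` is already a unit vector, the projection coefficient is just the dot product, exactly as in the expansion used in A5.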

Section 7.3

A Problems

A1 (a) y = 10.5 − 1.9t  (b) y = 3.4 + 0.8t

A2 (a) y = (…)t + (…)t^2  (b) y = (…) − (…)t^2


A3 (a) x = [383/98; 32/49]  (b) x = [167/650; −401/325]
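Fits like A1, y = a + bt, are found by solving the normal equations (XᵀX)[a; b] = Xᵀy; for a straight line this is a 2 × 2 system that can be solved directly. A minimal sketch with made-up data points (not the data behind the answers above):

```python
def fit_line(ts, ys):
    """Least-squares fit of y = a + b t by solving the 2x2 normal equations
    (X^T X)[a, b]^T = X^T y with Cramer's rule."""
    n = len(ts)
    St, Stt = sum(ts), sum(t * t for t in ts)
    Sy, Sty = sum(ys), sum(t * y for t, y in zip(ts, ys))
    det = n * Stt - St * St
    a = (Sy * Stt - St * Sty) / det
    b = (n * Sty - St * Sy) / det
    return a, b

# Hypothetical data lying near a line:
a, b = fit_line([0, 1, 2, 3], [1.1, 2.9, 5.1, 6.9])
```

For this data the fit comes out near y = 1.06 + 1.96t, close to the underlying line y = 1 + 2t, which is the behaviour the method of least squares is designed to produce.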

Section 7.4

A Problems

A1 (a) ⟨x − 2x^2, 1 + 3x⟩ = −46  (b) ⟨2 − x + 3x^2, 4 − 3x^2⟩ = −84
(c) ‖3 − 2x + x^2‖ = …  (d) ‖9 + 9x + 9x^2‖ = 9√59
A2 (a) It does not define an inner product since ⟨x − x^2, x − x^2⟩ = 0.
(b) It does not define an inner product since ⟨−p, q⟩ ≠ −⟨p, q⟩.
(c) It does define an inner product.
(d) It does not define an inner product since ⟨x, x⟩ = −2.

A3 (a) An orthonormal basis {…}; proj_B(A) = […]
(b) An orthonormal basis {…}; proj_B(A) = […]

A4 (a) B = {…}  (b) [x]_B = […]

A5 We have

⟨v1 + ⋯ + vk, v1 + ⋯ + vk⟩ = ⟨v1, v1⟩ + ⋯ + ⟨v1, vk⟩ + ⟨v2, v1⟩ + ⟨v2, v2⟩ + ⋯ + ⟨v2, vk⟩ + ⋯ + ⟨vk, v1⟩ + ⋯ + ⟨vk, vk⟩

Since {v1, …, vk} is orthogonal, we have ⟨vi, vj⟩ = 0 if i ≠ j. Hence,

‖v1 + ⋯ + vk‖^2 = ‖v1‖^2 + ⋯ + ‖vk‖^2

Chapter 7 Quiz
E Problems

El (a) Neither. The vectors are not of unit length, and the first and third vectors are
not orthogonal.

(b) Neither. The vectors are not orthogonal.

(c) The set is orthonormal.

E2 Let v1, v2, and v3 denote the vectors in B. We have

v1 · x = 2,  v2 · x = 9/√3,  v3 · x = 6/√3

Hence, [x]_B = [2; 9/√3; 6/√3].
E3 (a) We have

1 = det I = det(P^T P) = (det P^T)(det P) = (det P)^2

Thus, det P = ±1.
(b) We have P^T P = I and R^T R = I. Hence,

(PR)^T (PR) = R^T P^T P R = R^T R = I

Thus, PR is orthogonal.

E4 (a) Denote the vectors in the spanning set for S by z1, z2, and z3. Let w1 = z1. Then the Gram-Schmidt Procedure gives w2 and w3. Normalizing {w1, w2, w3} gives us the orthonormal basis {…}.

(b) The closest point in S to x is proj_S(x). We find that

proj_S(x) = x

Hence, x is already in S.

E5 (a) Let A = […] be a non-zero matrix with det A = 0. It follows that

⟨A, A⟩ = det(AA) = (det A)(det A) = 0

Hence, it is not an inner product.
(b) We verify that ⟨ , ⟩ satisfies the three properties of the inner product:

⟨A, A⟩ = a11^2 + 2a12^2 + 2a21^2 + a22^2 ≥ 0, and equals zero if and only if A = O_{2,2};

⟨A, B⟩ = a11 b11 + 2a12 b12 + 2a21 b21 + a22 b22
= b11 a11 + 2b12 a12 + 2b21 a21 + b22 a22 = ⟨B, A⟩;

⟨A, sB + tC⟩ = a11(s b11 + t c11) + 2a12(s b12 + t c12) + 2a21(s b21 + t c21) + a22(s b22 + t c22)
= s(a11 b11 + 2a12 b12 + 2a21 b21 + a22 b22) + t(a11 c11 + 2a12 c12 + 2a21 c21 + a22 c22)
= s⟨A, B⟩ + t⟨A, C⟩

Thus, it is an inner product.

CHAPTER 8

Section 8.1

A Problems

A1 (a) A is symmetric. (b) B is symmetric.
(c) C is not symmetric since c12 ≠ c21. (d) D is symmetric.

A2 Alternate correct answers are possible.

(a) P = [−1/√2 1/√2; 1/√2 1/√2], D = [3 0; 0 −1]
(b) P = [−3/√10 1/√10; 1/√10 3/√10], D = [−4 0; 0 6]
(c) P = [1/√3 −1/√2 −1/√6; 1/√3 1/√2 −1/√6; 1/√3 0 2/√6], D = diag(…, −1, −1)
(d) P = [2/3 2/3 1/3; −2/3 1/3 2/3; 1/3 −2/3 2/3], D = diag(9, 9, …)
(e) P = [1/√5 4/√45 −2/3; 0 5/√45 2/3; 2/√5 −2/√45 1/3], D = diag(…, …, −9)

Section 8.2

A Problems

A1 (a) Q(x1, x2) = x1^2 + 6x1x2 − x2^2
(b) Q(x1, x2, x3) = x1^2 − 2x2^2 + 6x2x3 − x3^2
(c) Q(x1, x2, x3) = −2x1^2 + 2x1x2 − 2x1x3 + 2x2x3

A2 (a) A = [1 −3/2; −3/2 1]; Q(x) = (5/2)y1^2 − (1/2)y2^2, P = [1/√2 1/√2; −1/√2 1/√2]; Q(x) is indefinite.
(b) A = [5 2; 2 2]; Q(x) = y1^2 + 6y2^2, P = [1/√5 2/√5; −2/√5 1/√5]; Q(x) is positive definite.
(c) A = [−2 6; 6 7]; Q(x) = 10y1^2 − 5y2^2, P = [1/√5 −2/√5; 2/√5 1/√5]; Q(x) is indefinite.
(d) A = […]; Q(x) = 2y1^2 + 3y2^2 − 6y3^2, P = […]; Q(x) is indefinite.
(e) A = […]; Q(x) = −3y1^2 − 4y2^2 − 6y3^2, P = […]; Q(x) is negative definite.

A3 (a) Positive definite. (b) Positive definite.


(c) Indefinite. (d) Negative definite.
(e) Positive definite. (f) Indefinite.
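Classifications like those in A2 and A3 follow from the signs of the eigenvalues of the symmetric matrix of the form; for a 2 × 2 form Q(x) = a x1^2 + 2b x1x2 + c x2^2 the eigenvalues of [a b; b c] have a closed form, so the classification can be computed directly. A minimal sketch (the example forms are hypothetical, not those of the exercises):

```python
def classify_2x2(a, b, c):
    """Classify Q(x) = a x1^2 + 2b x1 x2 + c x2^2 by the signs of the
    eigenvalues of its symmetric matrix [[a, b], [b, c]]."""
    disc = ((a - c) ** 2 + 4 * b * b) ** 0.5   # always real for symmetric A
    l1 = (a + c + disc) / 2
    l2 = (a + c - disc) / 2
    if l1 > 0 and l2 > 0:
        return "positive definite"
    if l1 < 0 and l2 < 0:
        return "negative definite"
    if l1 * l2 < 0:
        return "indefinite"
    return "semidefinite"

print(classify_2x2(1.0, 3.0, -1.0))  # prints: indefinite
print(classify_2x2(2.0, 0.0, 3.0))   # prints: positive definite
```

A zero eigenvalue lands in the semidefinite case, matching the positive/negative semidefinite categories listed in the index.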

Section 8.3

A Problems

A1-A4 The answers are graphs (figures not reproduced here).
A5 (a) The graph of x^T Ax = 1 is a set of two parallel lines. The graph of x^T Ax = −1 is the empty set.
(b) The graph of x^T Ax = 1 is a hyperbola. The graph of x^T Ax = −1 is a hyperbola.
(c) The graph of x^T Ax = 1 is a hyperboloid of two sheets. The graph of x^T Ax = −1 is a hyperboloid of one sheet.
(d) The graph of x^T Ax = 1 is a hyperbolic cylinder. The graph of x^T Ax = −1 is a hyperbolic cylinder.
(e) The graph of x^T Ax = 1 is a hyperboloid of one sheet. The graph of x^T Ax = −1 is a hyperboloid of two sheets.

Chapter 8 Quiz
E Problems

E1 P = [1/√6 1/√3 1/√2; −2/√6 1/√3 0; 1/√6 1/√3 −1/√2] and D = [6 0 0; 0 −3 0; 0 0 4].

E2 (a) A = [5 2; 2 5], Q(x) = 7y1^2 + 3y2^2, and P = [1/√2 1/√2; 1/√2 −1/√2]. Q(x) is positive definite. Q(x) = 1 is an ellipse, and Q(x) = 0 is the origin.
(b) A = [… ; … ; −3 2 −3], Q(x) = −5y1^2 + 5y2^2 − 4y3^2, and P = […]. Q(x) is indefinite. Q(x) = 1 is a hyperboloid of two sheets, and Q(x) = 0 is a cone.

E3 The answer is a graph (figure not reproduced here).

E4 Since A is positive definite, we have

⟨x, x⟩ = x^T Ax > 0 for all x ≠ 0, and ⟨x, x⟩ = 0 if and only if x = 0.

Since A is symmetric, we have

⟨x, y⟩ = x^T Ay = (x^T Ay)^T = y^T A^T x = y^T Ax = ⟨y, x⟩

For any x, y, z ∈ R^n and s, t ∈ R, we have

⟨x, sy + tz⟩ = x^T A(sy + tz) = x^T A(sy) + x^T A(tz) = s x^T Ay + t x^T Az = s⟨x, y⟩ + t⟨x, z⟩

Thus, ⟨x, y⟩ is an inner product on R^n.

E5 Since A is a 4 × 4 symmetric matrix, there exists an orthogonal matrix P that diagonalizes A. Since the only eigenvalue of A is 3, we must have P^T AP = 3I. Then we multiply on the left by P and on the right by P^T, and we get

A = P(3I)P^T = 3PP^T = 3I

CHAPTER 9

Section 9.1

A Problems

A1 (a) 5 + 7i  (b) −3 − 4i  (c) −7 + 2i  (d) −14 + 5i

A2 (a) 9 + 7i  (b) −10 − 10i  (c) 2 + 25i  (d) −2

A3 (a) 3 + 5i  (b) 2 − 7i  (c) 3  (d) 4i

A4 (a) Re(z) = 3, Im(z) = −6  (b) Re(z) = 17, Im(z) = −1
(c) Re(z) = 24/37, Im(z) = 4/37  (d) Re(z) = 0, Im(z) = 1
A5 (a) …  (b) 6/53 + (21/53)i  (c) …  (d) …

A6 (a) z1z2 = 2√2(cos(…) + i sin(…)), z1/z2 = (1/√2)(cos(…) + i sin(…))
(b) z1z2 = 2√2(cos(11π/12) + i sin(11π/12)), z1/z2 = √2(cos(17π/12) + i sin(17π/12))
(c) z1z2 = 4 − 7i, z1/z2 = … (This answer can be checked using Cartesian form.)
(d) z1z2 = −17 + 9i, z1/z2 = … (This answer can be checked using Cartesian form.)

A7 (a) −4  (b) −54 − 54i  (c) −8 − 8√3 i  (d) …
A8 (a) The roots are cos((π + 2kπ)/5) + i sin((π + 2kπ)/5), 0 ≤ k ≤ 4.
(b) The roots are 2[cos((θ + 2kπ)/4) + i sin((θ + 2kπ)/4)], 0 ≤ k ≤ 3, for the appropriate argument θ.
(c) The roots are 2^{1/3}[cos((θ + 2kπ)/3) + i sin((θ + 2kπ)/3)], 0 ≤ k ≤ 2.
(d) The roots are 17^{1/6}[cos((θ + 2kπ)/3) + i sin((θ + 2kπ)/3)], 0 ≤ k ≤ 2, where θ = arctan 4.
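Root formulas like those in A8 come straight from the polar form: the n-th roots of z = r e^{iθ} are r^{1/n} e^{i(θ + 2kπ)/n} for k = 0, …, n − 1. Python's cmath module works directly with polar form, so the formula can be checked numerically:

```python
import cmath

def nth_roots(z, n):
    """The n distinct n-th roots of a complex number z, from the polar form
    r^(1/n) * exp(i (theta + 2k*pi) / n), k = 0, ..., n - 1."""
    r, theta = cmath.polar(z)
    return [cmath.rect(r ** (1.0 / n), (theta + 2 * cmath.pi * k) / n)
            for k in range(n)]

roots = nth_roots(1j, 2)   # the two square roots of i
# raising each root back to the n-th power recovers z (up to rounding)
```

The same call with `nth_roots(-1, 5)` reproduces the five angles (π + 2kπ)/5 appearing in part (a).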

Section 9.2

A Problems

A1 (a) The general solution is z = […].
(b) The general solution is z = […] + t[…], t ∈ C.
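Systems with complex coefficients are solved exactly as real ones; Python's built-in complex type handles the arithmetic. A minimal sketch using Cramer's rule on a hypothetical 2 × 2 system (not one of the systems from the exercise):

```python
def solve_2x2(a, b, c, d, e, f):
    """Solve the complex system a z1 + b z2 = e, c z1 + d z2 = f
    by Cramer's rule (assumes the determinant is non-zero)."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# Hypothetical system: (1+i) z1 + 2 z2 = 3+i,  i z1 + z2 = 1+2i
z1, z2 = solve_2x2(1 + 1j, 2, 1j, 1, 3 + 1j, 1 + 2j)
# solution: z1 = 2 - i, z2 = 0
```

Substituting back confirms the solution: (1 + i)(2 − i) = 3 + i and i(2 − i) = 1 + 2i.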

Section 9.3

A Problems

A1 (a) […]  (b) […]  (c) […]  (d) […]

A2 (a) [L] = […]
(b) L(2 + 3i, 1 − 4i) = […]
(c) A basis for Range(L) is {…}. A basis for Null(L) is {…}.

A3 (a) A basis for Row(A) is {…}, a basis for Col(A) is {…}, and a basis for Null(A) is {…}.
(b) A basis for Row(B) is {…}, a basis for Col(B) is {…}, and a basis for Null(B) is the empty set.
(c) A basis for Row(C) is {…}, a basis for Col(C) is {…}, and a basis for Null(C) is {…}.

Section 9.4

A Problems

A1 (a) D = […], P = […], P^{-1}AP = D
(b) D = […], P = […], P^{-1}AP = D
(c) D = diag(1 + 2i, 1 − 2i, …), P = […], P^{-1}AP = D
(d) D = diag(2 + i, 2 − i, …), P = […], P^{-1}AP = D

Section 9.5

A Problems

A1 (a) ⟨u, v⟩ = 2 − 5i, ⟨v, u⟩ = 2 + 5i, ‖u‖ = √18, ‖v‖ = √33
(b) ⟨u, v⟩ = 3 + 6i, ⟨v, u⟩ = 3 − 6i, ‖u‖ = …, ‖v‖ = √26
(c) ⟨u, v⟩ = i, ⟨v, u⟩ = −i, ‖u‖ = √11, ‖v‖ = 2
(d) ⟨u, v⟩ = 4 − i, ⟨v, u⟩ = 4 + i, ‖u‖ = √15, ‖v‖ = √8
A2 (a) A is not unitary. (b) B is unitary. (c) C is unitary. (d) D is unitary.

A3 (a) ⟨u, v⟩ = 0  (b) […]

A4 (a) We have 1 = det I = det(U*U) = det(U*) det U = (det U)* (det U) = |det U|^2.
(b) The matrix U = […] is unitary and det U = i.
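Unitarity checks like those in A2 test U*U = I: under the standard complex inner product ⟨u, v⟩ = Σ conj(u_i) v_i, the columns must be orthonormal. A minimal sketch; the example matrix is a hypothetical one, not a matrix from the exercise:

```python
def is_unitary(U, tol=1e-9):
    """Check U* U = I: the columns of U must be orthonormal under the
    standard complex inner product sum(conj(u_i) * v_i)."""
    n = len(U)
    cols = [[U[i][j] for i in range(n)] for j in range(n)]
    for j in range(n):
        for k in range(n):
            dot = sum(cols[j][i].conjugate() * cols[k][i] for i in range(n))
            if abs(dot - (1 if j == k else 0)) > tol:
                return False
    return True

s = 2 ** -0.5
print(is_unitary([[s, s], [1j * s, -1j * s]]))  # prints: True
```

Note the conjugate on the first factor: dropping it would test P^T P = I instead, which is the real (orthogonal-matrix) condition, not the complex one.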



Section 9.6

A Problems

A1 (a) A is Hermitian. U = [(√2 + i)/√12 …; −3/√12 …], D = diag(…, 5).
(b) B is not Hermitian.
(c) C is Hermitian. U = [(√3 − i)/√5 …; …], D = diag(2, …).
(d) F is Hermitian. D = diag(0, √5, −√5), with U = […], where a ≈ 52.361 and b ≈ 7.639.

Chapter 9 Quiz

E Problems

E1 z1z2 = 4√2 e^{−iπ/12}, z1/z2 = (1/√2) e^{−7πi/12}

E2 The square roots of i are e^{iπ/4} and e^{i5π/4}.

E3 (a) […]  (b) […]  (c) 11 + 4i  (d) 11 − 4i  (e) …  (f) […]

E4 (a) P = […] and P^{-1}AP = D = [2 + 3i 0; 0 2 − 3i]
(b) P = […] and C = [2 3; −3 2]

E5 UU* = (1/√2)[…] · (1/√2)[…] = I, so U is unitary.
E6 (a) A is Hermitian if A* = A. Thus, we must have 3 + ki = 3 − i and 3 − ki = 3 + i, which is only true when k = −1. Thus, A is Hermitian if and only if k = −1.

(b) If k = −1, then A = [0 3 − i; 3 + i 3] and

det(A − λI) = |−λ 3 − i; 3 + i 3 − λ| = (λ + 2)(λ − 5)

Thus the eigenvalues are λ1 = −2 and λ2 = 5. For λ1 = −2,

A − λ1 I = [2 3 − i; 3 + i 5] ~ [2 3 − i; 0 0]

Thus, a basis for the eigenspace of λ1 is {[3 − i; −2]}. For λ2 = 5, a basis for the eigenspace of λ2 is {[3 − i; 5]}.

We can easily verify that ⟨[3 − i; −2], [3 − i; 5]⟩ = 0.
Index

A standard for R",321 C",413


addition subspaces, 97-99 eigenvalues, 419
complex numbers, 396 taking advantage of direction, 218 matrices, 419--423
general vector spaces, 198 vector space, 206--209 properties, 397
matrices, 117-120 best-fitting curve, finding, 342-346 quotient of complex numbers, 397-398
parallelogram rule for, 2 block multiplication, 127-128 roots of polynomial equations, 398
polynomials, 193-196 Bombelli, Rafael, 396 complex eigenvalues
real scalars, 399 of A, 418--419
scalars, 117-120 c repeated, 423
vector space, 199-200 C vector space complex eigenvectors, 419
vector space over R, 197-201 basis, 412 complex exponential, 402--404
vectors, 2,10, 15-16 complex multiplication as matrix complex inner products
adjugate matrix, 276 mapping, 415 Cauchy-Schwarz and triangle inequalities,
adjugate method, 276 non-zero elements linearly dependent, 412 426--428
algebraic multiplicity, 302,303 one-dimensional complex vector Hermitian property, 426
eigenvalues, 296--297 space, 412 length, 426
allocating resources, 107-109 cancellation law and matrix multiplication, complex matrices and diagonalization,
angles, 29-31 125-126 417--418
angular momentum vectors, 390 capacitor, capacitance of, 408 complex multiplication as matrix
Approximation Theorem, 336,343,345 Cauchy-Schwarz and triangle inequalities, mappings, 415
arbitrary bases, 321 426--428 complex numbers, 417
area Cauchy-Schwarz Inequality, 33 addition, 396
determinants, 280-283 change of coordinates equation, arguments, 400--401
parallelogram, 280-282 222,240,329 arithmetic of, 396--397
Argand diagram, 399,405 change of coordinates matrix, 222-224 complex conjugates, 397-398
arguments, 400--401 characteristic polynomial, 293 complex exponential, 402--404
augmented matrix, 69-72 classification and graphs, 380 complex plane, 399
axial forces, 105 en complex vector space division, 397-398
complex conjugate, 413 electrical circuit equations, 408--410
B inner products, 425 imaginary part, 396
back-substitution, 66 orthogonality, 429--431 modulus, 399--401
bases (basis) standard inner product, 425 multiplication, 396
arbitrary, 321 vector space, 412 multiplication as matrix mapping, 415
C vector space, 412 codomain, 131 n-th roots, 404--405
complex vector spaces, 412 linear mappings, 134 polar form, 399--402
coordinate vector, 219 coefficient matrix, 69-70, 71 powers, 402--404
coordinates with respect to, 218-224 coefficients, 63 quotient, 397-398
definition, 22 cofactor expansion, 259-261 real part, 396
eigenvectors, 299-300,308 cofactor matrix, 274-275 scalars, 411
extending linearly independent subset cofactor method, 276 two-dimensional real vector space, 412
to, 213-216 cofactors vector spaces over, 413--414
linearly dependent, 206--208 3 x 3 matrix, 257 complex plane, 399
linearly independent, 206--207,211 determinants in terms of, 255-261 complex vector spaces
mutually orthogonal vectors, 321 irrelevant values, 259 basis, 412
nullspace, 229-230 matrix inverse, 274-278 complex numbers, 396--40 5
obtaining from arbitrary wFinite spanning of n x n matrix, 258 dimension, 412
set, 209-211 column matrix, 117 eigenvectors, 417--423
ordered, 219,236 columnspace, 154-155,414 inner products, 425--431
preferred, 218 basis of, 158-159 one-dimensional, 412
range, 229-230 complete elimination, 84 scalars and coordinates of, 417
standard basis properties, 321 complex conjugates subspaces, 413--414


complex vector spaces (continued) symmetric matrices,327, 363-370, forming basis,308


systems with complex numbers,407-410 373-375 linear transformation,289-290
two-dimensional, 412 Diagonalization Theorem,30 I,307 linearly independent, 300
vector spaces over C, 4 I1-4 I5 differential equations and diagonalization, mappings, 289-291
vector spaces over complex numbers, 315-317 matrices,291
413-414 dilations and planes, 145 non-real, 301
components of vectors, 2 dimensions,99 non-trivial solution, 292
compositions, I 39-141 finite dimensional vector space,215 non-zero vectors,289
computer calculations choice of pivot trivial vector space, 212 orthogonality,367
affecting accuracy,78 vector spaces,211-213 principal axes,370
cone,386 directed Iine segments,7-8 of projections and reflections in R3,
conic sections, 384 three-dimensional space, 1 I 290-291
conjugate transpose,428 two-dimensional case, 11 real and imaginary parts, 420

consistent solutions,75-76 direction vector and parallel lines,5-6 restricted definition of,289

consistent systems,75-76 distance,44 rotations in R2, 291

constants, orthogonality of, 356 division symmetric matrices, 363-367

constraint, I07 complex numbers,397-398 electrical circuit equations

contractions and planes, 145 matrices,125 complex numbers, 408-410

convergence, 357 polar forms,401 steady-state solution,409-410

coordinate vector,2 I9 domain, 131 electricity, resistor circuits in, 102-104

coordinates with respect to basis,218-224 linear mappings,134 electromotive force, I 03

orthonormal bases,323-329 dominant eigenvalues,312 elementary matrices, 175-179

corresponding homogeneous system solution dot products, 29-31,323, 326 reflection, I76

space,152-153 defining, 354 shears, 176

cosines,orthogonality of, 356 perpendicular part, 43 stretch, 176


elementary row operations, 70, 72-73
Cramer's Rule,276-278 projections, 40-44
cross-products,51-54 bad moves and shortcuts, 76-78
properties,348
determinant,264--271
formula for, 51-52 R2,28-31
elimination,matrix representation of,71
length,52-54 R",31-34,323,354
ellipsoid,385
properties,52 symmetric, 335
elliptic cylinder,387
current,102
equality and matrices,113-117
E
equations
D eigenpair,289
first-order difference, 3 I1-312
de Moivre's Formula, 402-404 eigenspaces,292-293
free variables, 67-68
deficient, 297 orthogonality, 367
leading variables,66-67
degenerate cases, 384 eigenvalues, 307-310,417
normal,344
degenerate quadric surfaces,387 3 x 3 matrix, 422-423
rotating motion,391
design matrix,344 algebraic multiplicity, 296-297, 302,303
second-order difference, 3 I2
determinants complex conjugates, 419
solution,64
2 x 2 matrix,255-256, 259 complex eigenvalue of A, 418-419
equivalent
3 x 3 matrix, 256-258 deficient, 297
directed line segments,7-8
area, 280-283 dominant,312
systems of equations, 65
cofactors,255-261 eigenspaces and,292-293
systems of linear equations, 70
Cramer's Rule, 276-278 finding,291-297
error vector, 343
elementary row operations,264--2 7 I geometric multiplicity,296-297
Euclidean spaces,1
expansion along first row, 257-258 integers,297
cross-products,50-54
invertibility, 269-270 mappings, 289-291
length and dot products, 28-37
matrix inverse by cofactors, 274--278 matrices, 291
minimum distance,44-47
non-zero,270 matrices,420
n x n
projections, 40-44
products, 270-271 non-rational numbers,297 vectors and lines in R3,9-11
row operations and cofactor non-real,301 vectors in R2 and R3, 1-11
expansion,269 power method of determining,312-3 I 3 vectors in R",14--25
square matrices,269-270 of projections and reflections in R3, Euler's Formula,403-404
swapping rows,265 290-291 explicit isomorphism,248
volume, 283-284 real and imaginary parts,420 exponential function,315
diagonal form, 300, 373 rotations in R2, 291
diagonal matrices,114 symmetric matrices, 363-367 F
diagonalization eigenvectors, 308-310 Factor Theorem,398
applications of,303 3 x 3 matrix,422-423 factors,reciprocal,265
complex matrices,4 I7-4 I 8 basis of,299-300 False Expansion Theorem,275
differential equations,3 I 5-317 complex,419 feasible set linear programming problem,
eigenvectors,299-304 complex vector spaces, 417-423 107-109
quadratic form, 373-375 diagonalization,299-304 finite dimensional vector space
square matrices,300, 363 finding,291-297 dimension, 2 I5

Finite spanning set, basis from arbitrary, Hermitian property, 426 K


209-21! homogeneous linear equations, 86-87 kernel, 151
first-order difference equations, 311-312 hyperbolic cylinder, 387 Kirchhoff's Laws, 103-104, 409, 410
first-order linear differential equations, 315 hyperboloid of one sheet, 386
fixed-state vectors, 309 hyperboloid of two sheets, 386-387 L
Markov matrix, 310 hyperplanes Law of Cosines, 29
Fourier coefficients, 356-357 in R",24 leading variables, 66-67
Fourier Series, 354-359 scalar equation, 36-37 left inverse, 166
free variables, 67-68 length
functions, 131 complex inner products, 426
addition, 134 cross-products, 52-54
identity mapping, 140
domain of, 131 R3,28-31
identity matrix, 126-127
linear combinations, 134 R",31-34
ill conditioned matrices, 78
linear mappings, 134-136 level sets, 108
image parallelogram volume, 282
mapping points to points, 131-132
imaginary axis, 399 line of intersection of two planes, 55
matrix mapping, 131-134
·

imaginary part, 396 linear combinations, 4


scalar multiplication, 134 linear mappings, 139-141
inconsistent solutions, 75-76
Fundamental Theorem of Algebra, 417
indefinite quadratic forms, 376-377 of matrices, 118
polynomials, 194-196
G inertia tensor, 390,391
linear constraint, 107
Gauss-Jordan elimination, 84 infinite-dimensional vector space, 212
linear difference equations, 304
Gaussian elimination with infinitesimal rotation, 389
linear equations
back-substitution, 68 inner product spaces, 348-352
coefficients, 63
general linear mappings, 226-232 unit vector, 350-351
definition, 63
general reflections, 147-148 inner products, 323,348-352,354-355
homogeneous, 86-87
general solutions, 68 C" complex vector space, 425
matrix multiplication, 124
general systems of linear equations, 64 complex vector spaces, 425-431
linear explicit isomorphism, 248
general vector spaces, 198 correct order of vectors, 429
linear independence, 18-24
geometric multiplicity and eigenvalues, defining, 354
problems, 95-96
296-297 Fourier Series, 355-359
linear mappings, 134-136,175-176,
geometrical transformations orthogonality of constants, sines, and
227,247
contractions, 145 cosines, 356
2 x 2 matrix, 420
dialations, 145 properties, 349-350,354
change of coordinates, 240-242
general reflections, 147-148 R",348-349
codomain, 134
inverse transformation, 171 instantaneous angular velocity vectors, 390
compositions, 139-141
reflections in coordinate axes in R2 or instantaneous axis of rotation at time, 390
domain, 134
coordinates planes in R3, 146 instantaneous rate of rotation about the
general, 226-232
rotation through angle (-) about x3-axis in axis, 390
identity mapping, 140
R3,145-148 integers, 396
inverse, 170-173
rotations in plane, 143-145 eigenvalues, 297
linear combinations, 139-141
shears, 146 integral, 354
linear operator, 134
stretches, 145 intersecting planes, 387
matrices, 235-242
Google PageRank algorithm, 307 invariant subspace, 420
matrix mapping, 136-138
Gram-Schmidt Procedure, 337-339, invariant vectors, 309
nullity, 230-231
351-352,365,367-368,429 inverse linear mappings, 170-173
nullspace, 151,228
graphs
inverse mappings, 165-173
classification, 380 one-to-one, 247
procedure for finding, 167-168
cone,386 range, 153-156,228-230
inverse matrices, 165-173
conic sections, 384 rank, 230-231
invertibility
degenerate cases, 384 standard matrix, 137-140,241
determinant, 269-270
degenerate quadric surfaces, 387 vector spaces over complex numbers,
general linear mappings, 228
diagonalizing quadratic form, 380 413-414
square matrices, 269-270 linear operators, 134,227
ellipsoid, 385
invertible linear transformation onto Hermitian, 433
elliptic cylinder, 387
mappings, 246-247 matrix, 235-239
hyperbolic cylinder, 387
Invertible Matrix Theorem, 168,172,292 standard matrix, 237
hyperboloid of one sheet, 386
invertible square matrix and reduced row linear programming, 107-109
hyperboloid of two sheets, 386-387
echelon form (RREF), 178 feasible set, l 07-109
intersecting planes, 387
isomorphic, 247 simplex method, l 09
nondegenerate cases, 384
isomorphisms, 247 linear sets, linearly dependent or linearly
quadratic forms, 379-387
explicit isomorphism, 248 independent, 20-24
quadric surfaces, 385-387
vector space, 245-249 linear systems, solutions of, 168-170
H linear transformation
Hermitian linear operators, 433 J eigenvectors, 289-290
Hermitian matrix, 432-435 Jordan normal form, 423 standard matrix, 235-239,328

linear transformation (continued) inverse cofactors, 274-278 modulus, 399-401


linearity properties, 44 linear combinations of, 118 moment of inertia about an axis, 390, 391
linearly dependent, 20-24 linear mapping, 235-242 multiple vector spaces, 198
basis, 206-208 linearly dependent, 118-120 multiplication
matrices, 118-120 linearly independent, 118-120 complex numbers, 396, 415
polynomials, 195-196 lower triangular, 181-182 matrices, 121-126. See also matrix
subspaces, 204 LU-decomposition, 181-187 multiplication
linearly independent, 20-24 multiplication, 121-126 polar forms, 401
basis, 206-207, 21 l non-real eigenvalues, 301 properties of matrices and scalars,
eigenvectors, 300 nullspace, 414 117-120
matrices, 118-120 operations on, 113-128 mutually orthogonal vectors, 321
polynomials, 195-196 order of elements, 236
procedure for extending to basis, 213-216 orthogonal, 325-329
N
subspaces, 204 orthogonally diagonalizable, 366
n-dimensional parallelotope, 284
vectors, 323 orthogonally similar, 366
11-dimensional space, 14
lines partitioned, 127-128
11-th roots, 404-405
direction vector, 5 powers of, 307-313
11-volume, 284
parametric equation, 6 rank of, 85-86
11 x 11 matrices, 296
in R3, 9 real eigenvalues, 420
characteristic polynomial, 293
in R", 24 reduced row echelon form (RREF), 83-85
cofactor of, 258
translated, 5 reducing to upper-triangular form, 268
complex entries, 428
vector equation in R2, 5-8 representation of elimination, 71
conjugate transpose, 428
lower triangular, 114, 181-182 representation of systems of linear
determinant of, 259
LU-decomposition, 181-187 equations, 69-73
eigenvalues, 420
with respect to basis, 235
solving systems, 185-186 Hermitian, 432-435
row echelon form
similar, 300
(REF), 73-74
M swapping rows, 266-267
row equivalent, 70,71
mappings unitary, 430-431
row reduced to row echelon form (RREF),
eigenvalues, 289-291 natural numbers, 396
73-74
eigenvectors, 289-291 nearest point, finding, 47
row reduction, 70
inverse, 165-173 negative definite quadratic forms,
rowspace, 414
nullspace, 150-152 376-377
scalar multiplication, 117
special subspaces for, 150-162 negative semidefinite quadratic forms,
shortcuts and bad moves in elementary
Markov matrix, 309-313 376-377
row operations, 76-78
fixed-state vector, 310 Newton's equation, 391
skew-symmetric, 379
Markov process, 307-313 non-homogeneous system solution, 153
special types of, 114
properties, 310-311 non-real eigenvalues, 301
square, 114
matrices, 69 non-real eigenvectors, 301
standard, 137-140
addition, 117 non-square matrices, 166
swapping rows, 187
addition and multiplication by scalars non-zero determinants, 270
symmetric, 363-370
properties, 117-120 non-zero vectors, 41
trace of, 202-203
basis of columnspace, 158-159 nondegenerate cases, 384
transition, 307
basis of nullspace, 159-161 norm, 31-32
transpose of, 120-121
basis of rowspace, 157 normal equations, 344
unitarily diagonalizable, 435
calculating determinant, 264-265 normal to planes, 54-55
unitarily similar, 435
cofactor expansion, 259-261 normal vector, 34-35
unitary, 430-43 l
column, 117 normalizing vectors, 312
upper triangular, 181-182
columnspace, 154-155, 414 matrix mappings, 131-134 nullity, 159-161
complex conjugate, 419-423 affecting linear combination of vectors in linear mapping, 230-231
decompositions, 179 R", 133 nullspace, 150-152,414
design, 344 complex multiplication as, 415 basis, 159-161, 229-230
determinant, 268, 280 linear mappings, 134, 136-138 linear mappings, 228
diagonal, 114, 300 matrix multiplication, 126, 175-176,
division, 125 300,326 0
eigenvalues, 291 block multiplication, 127-128 objective function, 107
eigenvectors, 291 cancellation law, 125-126 Ohm's law, 102-103,104
elementary, 175-179 linear equations, 124 one-dimensional complex vector
elementary row operations, 70 product not commutative, 125 spaces, 412
equality, 113-117 properties, 133-134 one-to-one, 246
finding inverse procedure, 167-168 summation notation, 124 explicit isomorphism, 248
Hermitian, 432-435 matrix of the linear operator, 235-239 invertible linear transformation, 246
identity, 126-127 matrix-vector multiplication, 175-176 onto, 246
ill conditioned, 78 method of least squares, 342-346 explicit isomorphism, 248
inverse, 165-173 minimum distance, 44-47 ordered basis, 219,236

orthogonal, 33
    Cn complex vector space, 429-431
    planes, 36
orthogonal basis, 337-339
    subspace S, 335
orthogonal complement, 333
orthogonal matrices, 325-329
    orthonormal columns and rows, 327, 364
    real inner products, 430
orthogonal sets, 322
orthogonal vectors, 321, 333, 351
orthogonally diagonalizable, 366
orthogonally similar, 366
orthonormal bases (basis), 321-325, 323, 336
    change of coordinates, 325-329
    coordinates with respect to, 323-325
    error vector, 343
    Fourier series, 354-359
    general vector spaces, 323
    Gram-Schmidt Procedure, 337-339
    inner product spaces, 348-352
    method of least squares, 342-346
    overdetermined systems, 345-346
    projections onto subspace, 333-336
    Rn, 323, 334
    subspace S, 335
    technical advantage of using, 323-325
orthonormal columns, 364
orthonormal sets, arguments based on, 323
orthonormal vectors, 322, 351
overdetermined systems, 345-346

P
parabola, 384
parallel lines and direction vector, 5-6
parallel planes, 35
parallelepiped, 55-56
    volumes, 56, 283-284
parallelogram
    area, 53, 280-282
    induced by, 280
    rule for addition, 2
parallelotope and n-volume, 284
parametric equation, 6
particular solution, 153
permutation matrix, 187
perpendicular of projection, 43
planar trusses, 105-106
planes
    contractions, 145
    dilations, 145
    finding nearest point, 47
    line of intersection of two, 55
    normal to, 54-55
    normal vector, 34-35
    orthogonal, 36
    parallel, 35
    in Rn, 24
    rotations in, 143-145
    scalar equation, 34-36
    stretches, 145
points
    calculating distance between, 28-29
    position vector, 7
polar forms, 399-402
    division, 401
    multiplication, 401
polynomial equations, roots of, 398
polynomials
    addition, 193-196
    addition and scalar multiplication properties, 194
    linear combinations, 194-196
    linearly dependent, 195-196
    linearly independent, 195-196
    roots of, 293
    scalar multiplication, 193-196
    span, 194-196
    vector spaces, 193-196
position vector, 7
positive definite quadratic forms, 376-377
positive semidefinite quadratic forms, 376-377
power method of determining eigenvalues, 312-313
powers and complex numbers, 402-404
powers of matrices, 307-313
preferred basis, 218
preserve addition, 134
preserve scalar multiplication, 134
principal axes, 370
Principal Axis Theorem, 369-370, 374, 380, 391-392, 435
principal moments of inertia, 390, 391
probabilities, 309
problems
    linear independence, 95-96
    spanning, 91-95
products
    determinants, 270-271
    of inertia, 391
projection of vectors, 335
projection property, 44
projections, 40-44
    eigenvectors and eigenvalues in R3, 290-291
    onto subspaces, 333-336
    properties of, 44
    scalar multiple, 41
Pythagoras' Theorem, 44

Q
quadratic forms
    applications of, 388-392
    diagonalizing, 373-375
    graphs of, 379-387
    indefinite, 376-377
    inertia tensor, 390
    negative definite, 376-377
    negative semidefinite, 376-377
    positive definite, 376-377
    positive semidefinite, 376-377
    small deformations and, 388-389
    symmetric matrices, 371-377
quadric surfaces, 385-387
quotient, 397-398

R
R2
    dot products, 28-31
    eigenvectors and eigenvalues of rotations, 291
    length, 28-31
    line through origin in, 5
    reflections in coordinate axes, 146
    rotation of axes, 329-330
    standard basis for, 4
    vector equation of line in, 5-8
    vectors in, 1-11
    zero vector, 11
R3
    defining, 9
    eigenvectors and eigenvalues of projections and reflections, 290-291
    lines in, 9-11
    reflections in coordinate planes, 146
    rotation through angle θ about x3-axis, 145-148
    scalar triple product and volumes in, 55-56
    vectors in, 1-11
    zero vector, 11
range
    basis, 229-230
    linear mappings, 153-156, 228-230
rank
    linear mapping, 230-231
    matrices, 85-86
    square matrices, 269-270
    summary of, 162
Rank-Nullity Theorem, 231, 232
Rank Theorem, 150-162
rational numbers, 396
Rational Roots Theorem, 398
real axis, 399
real canonical form
    3 x 3 matrix, 422-423
    2 x 2 real matrix, 421-422
    complex characteristic roots of, 418-420
    real eigenvalues, 420
    real eigenvectors, 423
real inner products and orthogonal matrices, 430
real matrix, 420
    complex characteristic roots of, 418-420
real numbers, 396, 418
real part, 396
real polynomials and complex roots, 419
real scalars
    addition, 399
    scalar multiplication, 399
real vector space, 412
reciprocal factor, 265
recursive definition, 259
reduced row echelon form (RREF), 83-85, 178
reflections
    describing in terms of vectors, 218
    eigenvectors and eigenvalues in R3, 290-291
    elementary matrix, 176
    in plane with normal vector, 147-148
repeated complex eigenvalues, 423
repeated real eigenvectors, 423
resistance, 102
resistor circuits in electricity, 102-104
resistors, 103
resources, allocating, 107-109
right-handed system, 9
right inverse, 166
rigid body, rotation motion of, 390-392
Rn
    addition and scalar multiplication of vectors in, 15
    dot product, 31-34, 323, 354
    inner products, 348-349
    length, 31-34
    line in plane in hyperplane in, 24
    orthonormal basis, 323, 334
    standard basis for, 321
    subset of standard basis vectors, 322
    subspace, 16-17
    vectors in, 14-25
    zero vector, 16
roots of polynomial equations, 398
rotations
    of axes in R2, 329-330
    eigenvectors and eigenvalues in R2, 291
    equations for, 391
    in plane, 143-145
    through angle θ about x3-axis in R3, 145-148
row echelon form (REF), 73-74
    consistency and uniqueness, 75-76
    number of leading entries in, 85
row equivalent, 70, 71
row reduction, 70
rowspace, 156-157, 414

S
scalar equation
    hyperplanes, 36-37
    planes, 34-36
scalar form, 6
scalar multiplication
    general vector spaces, 198
    matrices, 117
    polynomials, 193-196
    real scalars, 399
    vector space, 199-200
    vector space over R, 197-201
    vectors, 2, 3, 10, 15-16
scalar product, 31
scalar triple product in R3, 55-56
scalars, 2
    complex numbers, 411
    properties of matrix addition and multiplication by, 117-120
second-order difference equations, 312
shears, 146
    elementary matrix, 176
similar, 300
simplex method, 109
sines, orthogonality of, 356
skew-symmetric matrices, 379
small deformations, 388-389
solid body and small deformations, 388-389
solids
    analysis of deformation of, 304
    deformation of, 145
solution, 64
    back-substitution, 66
solution set, 64
solution space, 150-152
    corresponding homogeneous system, 152-153
    systems, 152-153
spanned, 18
spanning problems, 91-95
spanning sets, 18-24
    definition, 18
    subspaces, 204
    vector spaces, 209
spans
    polynomials, 194-196
    subspaces, 204
special subspaces for mappings, 150-162
special subspaces for systems, 150-162
Spectral Theorem for Hermitian Matrices, 435
square matrices, 114
    calculating inverse, 274-278
    determinant, 269-271
    diagonalization, 300, 363
    facts about, 168-170
    invertible, 269-270
    left inverse, 166
    lower triangular, 114
    rank, 269-270
    right inverse, 166
    upper triangular, 114
standard basis for R2, 4
standard inner product, 31, 425
standard matrix, 137-140
    linear mapping, 241
    linear operator, 237
    linear transformation, 235-239, 328
state, 307
state vector, 309
stretches
    elementary matrix, 176
    planes, 145
subspace S
    orthogonal basis, 335
    orthonormal basis, 335
subspace S of Rn, orthonormal or orthogonal basis for, 337-339
subspaces, 201-204
    bases of, 97-99
    complex vector spaces, 413-414
    definition, 16
    dimensions, 99
    invariant, 420
    linearly dependent, 204
    linear independence, 18-24
    linearly independent, 204
    projections onto, 333-336
    Rn, 16-17
    spanning sets, 18-24, 204
    spans, 204
summation notation and matrix multiplication, 124
surfaces in higher dimensions, 24-25
symmetric matrices
    classifying, 377
    diagonalization, 327, 363-370, 373-375
    eigenvalues, 363-367
    eigenvectors, 363-367
    orthogonally diagonalizable, 367
    quadratic forms, 371-377
systems
    solution space, 150-153
    solving with LU-decomposition, 185-186
    special subspaces for, 150-162
systems of equations
    equivalent, 65
    word problems, 77-78
systems of linear difference equations, 311-312
systems of linear equations, 63-68, 175-176
    applications of, 102-109
    augmented matrix, 69-70, 71
    coefficient matrix, 69-70, 71
    complete elimination, 84
    complex coefficients, 407-410
    complex right-hand sides, 407-410
    consistent solutions, 75-76
    consistent systems, 75-76
    elimination, 64-66
    elimination with back-substitution, 83
    equivalent, 70
    Gauss-Jordan elimination, 84
    Gaussian elimination with back-substitution, 71-72
    general, 64
    general solution, 68
    homogeneous, 86-87
    inconsistent solutions, 75-76
    linear programming, 107-109
    matrix representation of, 69-73
    planar trusses, 105-106
    resistor circuits in electricity, 102-104
    spanning problems, 91-95
    unique solutions, 75-76
systems of linear homogeneous ordinary differential equations, 317
systems with complex numbers, 407-410

T
three-dimensional space and directed line segments, 11
trace of matrices, 202-203
transition, probability of, 309
transition matrix, 307, 309
transposition of matrices, 120-121
Triangle Inequality, 33
trivial solution, 21, 87
trivial subspace, 16-17
trivial vector space, 203
    dimension, 212
two-dimensional case and directed line segments, 11
two-dimensional complex vector spaces, 412
two-dimensional real vector space and complex numbers, 412

U
Unique Representation Theorem, 206, 219
unique solutions in systems of linear equations, 75-76
unit vector, 33, 41, 321-322, 350-351
unitarily diagonalizable matrices, 435
unitarily similar matrices, 435
unitary matrices, 430-431
upper triangular, 114, 181-182

V
vector equation of line in R2, 5-8
vector space over R, 197
    addition, 197-201
    scalar multiplication, 197-201
    zero vector, 197
vector spaces
    abstract concept of, 200-201
    addition, 199-200
    basis, 206-209
    complex, 396-435
    coordinates with respect to basis, 218-224
    defined using number systems as scalars, 198
    dimension, 211-213
    explicit isomorphism, 249
    extending linearly independent subset to basis procedure, 213-216
    general linear mappings, 226-232
    infinite-dimensional, 212
    inner product, 348-352
    inner product spaces, 348-352
    isomorphisms of, 245-249
    matrix of linear mapping, 235-242
    multiple, 198
    over complex numbers, 413-414
    polynomials, 193-196
    scalar multiplication, 199-200
    scalars, 411
    spanning set, 209
    subspaces, 201-204
    trivial, 203
    vector space over R, 197
    vectors, 197
    zero vector, 198
vector spaces over C, 411-415
vector-valued function, 42
vectors, 1
    addition of, 10, 15-16
    angle between, 29-31
    angular momentum, 390
    components of, 2
    controlling size of, 312
    cross-products, 51-54
    distance between, 350-351
    dot products, 29-31, 323
    fixed, 309
    formula for length, 28
    instantaneous angular velocity, 390
    invariant, 309
    length, 323, 350-351
    linearly dependent, 21, 95-96
    linearly independent, 21, 323
    mutually orthogonal, 321
    norm, 31-32, 350-351
    normalizing, 312, 322
    orthogonal, 33, 321, 333, 351
    orthonormal, 322, 351
    projection of, 335
    projection onto subspaces, 333-336
    in R2 and R3, 1-11
    representation as arrow, 2
    reversing direction, 3
    in Rn, 14-25
    scalar multiplication, 2, 3, 10, 15-16
    span of a set, 91-95
    standard basis for R2, 4
    zero vector, 4
vertices, 109
voltage, 102
volumes
    determinants, 283-284
    image parallelogram, 282
    parallelepiped, 56, 283-284
    in R3, 55-56

W
word problems and systems of equations, 77-78

Z
zero vector, 4, 197-198
    R2, 11
    R3, 11
    Rn, 16-17