
Daniel Norman • Dan Wolczuk

Introduction to Linear
Algebra for Science and
Engineering

Student Edition

Taken from:
Introduction to Linear Algebra for Science and Engineering, Second Edition
by Daniel Norman and Dan Wolczuk
Cover Art: Courtesy of Pearson Learning Solutions.

Copyright © 2012, 1995 by Pearson Education, Inc.
Published by Pearson
Upper Saddle River, New Jersey 07458

All rights reserved. No part of this book may be reproduced, in any form or by any means, without
permission in writing from the publisher.

This special edition published in cooperation with Pearson Learning Solutions.

All trademarks, service marks, registered trademarks, and registered service marks are the
property of their respective owners and are used herein for identification purposes only.

Pearson Learning Solutions, 501 Boylston Street, Suite 900, Boston, MA 02116
A Pearson Education Company
www.pearsoned.com

Contents

A Note to Students vi
A Note to Instructors viii

Chapter 1 Euclidean Vector Spaces 1
1.1 Vectors in ℝ² and ℝ³ 1
    The Vector Equation of a Line in ℝ² 5
    Vectors and Lines in ℝ³ 9
1.2 Vectors in ℝⁿ 14
    Addition and Scalar Multiplication of Vectors in ℝⁿ 15
    Subspaces 16
    Spanning Sets and Linear Independence 18
    Surfaces in Higher Dimensions 24
1.3 Length and Dot Products 28
    Length and Dot Products in ℝ² and ℝ³ 28
    Length and Dot Product in ℝⁿ 31
    The Scalar Equation of Planes and Hyperplanes 34
1.4 Projections and Minimum Distance 40
    Projections 40
    The Perpendicular Part 43
    Some Properties of Projections 44
    Minimum Distance 44
1.5 Cross-Products and Volumes 50
    Cross-Products 50
    The Length of the Cross-Product 52
    Some Problems on Lines, Planes, and Distances 54

Chapter 2 Systems of Linear Equations 63
2.1 Systems of Linear Equations and Elimination 63
    The Matrix Representation of a System of Linear Equations 69
    Row Echelon Form 73
    Consistent Systems and Unique Solutions 75
    Some Shortcuts and Some Bad Moves 76
    A Word Problem 77
    A Remark on Computer Calculations 78
2.2 Reduced Row Echelon Form, Rank, and Homogeneous Systems 83
    Rank of a Matrix 85
    Homogeneous Linear Equations 86
2.3 Application to Spanning and Linear Independence 91
    Spanning Problems 91
    Linear Independence Problems 95
    Bases of Subspaces 97
2.4 Applications of Systems of Linear Equations 102
    Resistor Circuits in Electricity 102
    Planar Trusses 105
    Linear Programming 107

Chapter 3 Matrices, Linear Mappings, and Inverses 115
3.1 Operations on Matrices 115
    Equality, Addition, and Scalar Multiplication of Matrices 115
    The Transpose of a Matrix 120
    An Introduction to Matrix Multiplication 121
    Identity Matrix 126
    Block Multiplication 127
3.2 Matrix Mappings and Linear Mappings 131
    Matrix Mappings 131
    Linear Mappings 134
    Is Every Linear Mapping a Matrix Mapping? 136
    Compositions and Linear Combinations of Linear Mappings 139
3.3 Geometrical Transformations 143
    Rotations in the Plane 143
    Rotation Through Angle θ About the x₃-axis in ℝ³ 145
3.4 Special Subspaces for Systems and Mappings: Rank Theorem 150
    Solution Space and Nullspace 150
    Solution Set of Ax = b 152
    Range of L and Columnspace of A 153
    Rowspace of A 156
    Bases for Row(A), Col(A), and Null(A) 157
    A Summary of Facts About Rank 162
3.5 Inverse Matrices and Inverse Mappings 165
    A Procedure for Finding the Inverse of a Matrix 167
    Some Facts About Square Matrices and Solutions of Linear Systems 168
    Inverse Linear Mappings 170
3.6 Elementary Matrices 175
3.7 LU-Decomposition 181
    Solving Systems with the LU-Decomposition 185
    A Comment About Swapping Rows 187

Chapter 4 Vector Spaces 193
4.1 Spaces of Polynomials 193
    Addition and Scalar Multiplication of Polynomials 193
4.2 Vector Spaces 197
    Vector Spaces 197
    Subspaces 201
4.3 Bases and Dimensions 206
    Bases 206
    Obtaining a Basis from an Arbitrary Finite Spanning Set 209
    Dimension 211
    Extending a Linearly Independent Subset to a Basis 213
4.4 Coordinates with Respect to a Basis 218
4.5 General Linear Mappings 226
4.6 Matrix of a Linear Mapping 235
    The Matrix of L with Respect to the Basis B 235
    Change of Coordinates and Linear Mappings 240
4.7 Isomorphisms of Vector Spaces 246

Chapter 5 Determinants 255
5.1 Determinants in Terms of Cofactors 255
    The 3 × 3 Case 256
5.2 Elementary Row Operations and the Determinant 264
    The Determinant and Invertibility 270
    Determinant of a Product 270
5.3 Matrix Inverse by Cofactors and Cramer's Rule 274
    Cramer's Rule 276
5.4 Area, Volume, and the Determinant 280
    Area and the Determinant 280
    The Determinant and Volume 283

Chapter 6 Eigenvectors and Diagonalization 289
6.1 Eigenvalues and Eigenvectors 289
    Eigenvalues and Eigenvectors of a Mapping 289
    Eigenvalues and Eigenvectors of a Matrix 291
    Finding Eigenvectors and Eigenvalues 291
6.2 Diagonalization 299
    Some Applications of Diagonalization 303
6.3 Powers of Matrices and the Markov Process 307
    Systems of Linear Difference Equations 312
    The Power Method of Determining Eigenvalues 312
6.4 Diagonalization and Differential Equations 315
    A Practical Solution Procedure 317
    General Discussion 317

Chapter 7 Orthonormal Bases 321
7.1 Orthonormal Bases and Orthogonal Matrices 321
    Orthonormal Bases 321
    Coordinates with Respect to an Orthonormal Basis 323
    Change of Coordinates and Orthogonal Matrices 325
    A Note on Rotation Transformations and Rotation of Axes in ℝ² 329
7.2 Projections and the Gram-Schmidt Procedure 333
    Projections onto a Subspace 333
    The Gram-Schmidt Procedure 337
7.3 Method of Least Squares 342
    Overdetermined Systems 345
7.4 Inner Product Spaces 348
    Inner Product Spaces 348
7.5 Fourier Series 354
    The Inner Product ∫ₐᵇ f(x)g(x) dx 354
    Fourier Series 355

Chapter 8 Symmetric Matrices and Quadratic Forms 363
8.1 Diagonalization of Symmetric Matrices 363
    The Principal Axis Theorem 366
8.2 Quadratic Forms 372
    Quadratic Forms 372
    Classifications of Quadratic Forms 376
8.3 Graphs of Quadratic Forms 380
    Graphs of Q(x) = k in ℝ³ 385
8.4 Applications of Quadratic Forms 388
    Small Deformations 388
    The Inertia Tensor 390

Chapter 9 Complex Vector Spaces 395
9.1 Complex Numbers 395
    The Arithmetic of Complex Numbers 395
    The Complex Conjugate and Division 397
    Roots of Polynomial Equations 398
    The Complex Plane 399
    Polar Form 399
    Powers and the Complex Exponential 402
    n-th Roots 404
9.2 Systems with Complex Numbers 407
    Complex Numbers in Electrical Circuit Equations 408
9.3 Vector Spaces over ℂ 411
    Linear Mappings and Subspaces 413
    Complex Multiplication as a Matrix Mapping 415
9.4 Eigenvectors in Complex Vector Spaces 417
    Complex Characteristic Roots of a Real Matrix and a Real Canonical Form 418
    The Case of a 2 × 2 Matrix 420
    The Case of a 3 × 3 Matrix 422
9.5 Inner Products in Complex Vector Spaces 425
    Properties of Complex Inner Products 426
    The Cauchy-Schwarz and Triangle Inequalities 426
    Orthogonality in ℂⁿ and Unitary Matrices 429
9.6 Hermitian Matrices and Unitary Diagonalization 432

Appendix A Answers to Mid-Section Exercises 439
Appendix B Answers to Practice Problems and Chapter Quizzes 465

Index 529
A Note to Students

Linear Algebra: What Is It?


Linear algebra is essentially the study of vectors, matrices, and linear mappings. Although many pieces of linear algebra have been studied for many centuries, it did not take its current form until the mid-twentieth century. It is now an extremely important topic in mathematics because of its application to many different areas.

Most people who have learned linear algebra and calculus believe that the ideas of elementary calculus (such as limit and integral) are more difficult than those of introductory linear algebra and that most problems in calculus courses are harder than those in linear algebra courses. So, at least by this comparison, linear algebra is not hard. Still, some students find learning linear algebra difficult. I think two factors contribute to the difficulty students have.
First, students do not see what linear algebra is good for. This is why it is important
to read the applications in the text; even if you do not understand them completely, they
will give you some sense of where linear algebra fits into the broader picture.
Second, some students mistakenly see mathematics as a collection of recipes for solving standard problems and are uncomfortable with the fact that linear algebra is "abstract" and includes a lot of "theory." There will be no long-term payoff in simply memorizing these recipes, however; computers carry them out far faster and more accurately than any human can. That being said, practising the procedures on specific examples is often an important step toward much more important goals: understanding the concepts used in linear algebra to formulate and solve problems and learning to interpret the results of calculations. Such understanding requires us to come to terms with some theory. In this text, many of our examples will be small. However, as you work through these examples, keep in mind that when you apply these ideas later, you may very well have a million variables and a million equations. For instance, Google's PageRank system uses a matrix that has 25 billion columns and 25 billion rows; you don't want to do that by hand! When you are solving computational problems, always try to observe how your work relates to the theory you have learned.
Mathematics is useful in so many areas because it is abstract: the same good idea
can unlock the problems of control engineers, civil engineers, physicists, social scientists, and mathematicians only because the idea has been abstracted from a particular
setting. One technique solves many problems only because someone has established a
theory of how to deal with these kinds of problems. We use definitions to try to capture
important ideas, and we use theorems to summarize useful general facts about the kind
of problems we are studying. Proofs not only show us that a statement is true; they can
help us understand the statement, give us practice using important ideas, and make it
easier to learn a given subject. In particular, proofs show us how ideas are tied together
so we do not have to memorize too many disconnected facts.
Many of the concepts introduced in linear algebra are natural and easy, but some
may seem unnatural and "technical" to beginners. Do not avoid these apparently more
difficult ideas; use examples and theorems to see how these ideas are an essential
part of the story of linear algebra. By learning the "vocabulary" and "grammar" of
linear algebra, you will be equipping yourself with concepts and techniques that mathematicians, engineers, and scientists find invaluable for tackling an extraordinarily rich
variety of problems.


Linear Algebra: Who Needs It?


Mathematicians

Linear algebra and its applications are a subject of continuing research. Linear algebra
is vital to mathematics because it provides essential ideas and tools in areas as diverse
as abstract algebra, differential equations, calculus of functions of several variables,
differential geometry, functional analysis, and numerical analysis.

Engineers

Suppose you become a control engineer and have to design or upgrade an automatic control system. The system may be controlling a manufacturing process or perhaps an airplane landing system. You will probably start with a linear model of the system, requiring linear algebra for its solution. To include feedback control, your system must take account of many measurements (for the example of the airplane, position, velocity, pitch, etc.), and it will have to assess this information very rapidly in order to determine the correct control responses. A standard part of such a control system is a Kalman-Bucy filter, which is not so much a piece of hardware as a piece of mathematical machinery for doing the required calculations. Linear algebra is an essential part of the Kalman-Bucy filter.
If you become a structural engineer or a mechanical engineer, you may be concerned with the problem of vibrations in structures or machinery. To understand the problem, you will have to know about eigenvalues and eigenvectors and how they determine the normal modes of oscillation. Eigenvalues and eigenvectors are some of the central topics in linear algebra.
An electrical engineer will need linear algebra to analyze circuits and systems; a
civil engineer will need linear algebra to determine internal forces in static structures
and to understand principal axes of strain.
In addition to these fairly specific uses, engineers will also find that they need to know linear algebra to understand systems of differential equations and some aspects of the calculus of functions of two or more variables. Moreover, the ideas and techniques of linear algebra are central to numerical techniques for solving problems of heat and fluid flow, which are major concerns in mechanical engineering. And the ideas of linear algebra underlie advanced techniques such as Laplace transforms and Fourier analysis.

Physicists

Linear algebra is important in physics, partly for the reasons described above. In addition, it is essential in applications such as the inertia tensor in general rotating motion. Linear algebra is an absolutely essential tool in quantum physics (where, for example, energy levels may be determined as eigenvalues of linear operators) and relativity (where understanding change of coordinates is one of the central issues).

Life and Social Scientists

Input/output models, described by matrices, are often used in economics, and similar ideas can be used in modelling populations where one needs to keep track of subpopulations (generations, for example, or genotypes). In all sciences, statistical analysis of data is of great importance, and much of this analysis uses linear algebra; for example, the method of least squares (for regression) can be understood in terms of projections in linear algebra.
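For readers who know a little programming, the least-squares idea just mentioned can be sketched in a few lines of Python. This sketch is ours, not the book's development (the book treats least squares in Section 7.3); the helper name `least_squares_line` is invented for illustration. Fitting a line y ≈ a + bx amounts to projecting the data vector (y₁, …, yₙ) onto the plane spanned by (1, …, 1) and (x₁, …, xₙ), which leads to the normal equations solved below:

```python
# Least-squares fit of y ≈ a + b x via the normal equations.
# Geometrically, this projects the data vector y onto the plane
# spanned by (1, ..., 1) and (x1, ..., xn).
def least_squares_line(xs, ys):
    n = len(xs)
    sx = sum(xs)
    sy = sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx            # nonzero when the xs are not all equal
    a = (sxx * sy - sx * sxy) / det    # intercept
    b = (n * sxy - sx * sy) / det      # slope
    return a, b

# Data lying exactly on y = 1 + 2x is recovered exactly.
a, b = least_squares_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # → 1.0 2.0
```

With noisy data the same two formulas return the line of best fit rather than an exact interpolant; nothing in the code changes.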

Managers

A manager in industry will have to make decisions about the best allocation of resources: enormous amounts of computer time around the world are devoted to linear programming algorithms that solve such allocation problems. The same sorts of techniques used in these algorithms play a role in some areas of mine management. Linear algebra is essential here as well.

So who needs linear algebra? Almost every mathematician, engineer, or scientist will find linear algebra an important and useful tool.

Will these applications be explained in this book?

Unfortunately, most of these applications require too much specialized background to be included in a first-year linear algebra book. To give you an idea of how some of these concepts are applied, a few interesting applications are briefly covered in sections 1.4, 1.5, 2.4, 5.4, 6.3, 6.4, 7.3, 7.5, 8.3, 8.4, and 9.2. You will get to see many more applications of linear algebra in your future courses.

A Note to Instructors
Welcome to the second edition of Introduction to Linear Algebra for Science and
Engineering. It has been a pleasure to revise Daniel Norman's first edition for a new
generation of students and teachers. Over the past several years, I have read many
articles and spoken to many colleagues and students about the difficulties faced by
teachers and learners of linear algebra. In particular, it is well known that students typically find the computational problems easy but have great difficulty in understanding the abstract concepts and the theory. Inspired by this research, I developed a pedagogical approach that addresses the most common problems encountered when teaching
and learning linear algebra. I hope that you will find this approach to teaching linear
algebra as successful as I have.

Changes to the Second Edition


• Several worked-out examples have been added, as well as a variety of mid-section exercises (discussed below).

• Vectors in ℝⁿ are now always represented as column vectors and are denoted with the normal vector symbol x⃗. Vectors in general vector spaces are still denoted in boldface.

• Some material has been reorganized to allow students to see important concepts early and often, while also giving greater flexibility to instructors. For example, the concepts of linear independence, spanning, and bases are now introduced in Chapter 1 in ℝⁿ, and students use these concepts in Chapters 2 and 3 so that they are very comfortable with them before being taught general vector spaces.
A Note to Instructors ix

• The material on complex numbers has been collected and placed in Chapter 9,
at the end of the text. However, if one desires, it can be distributed throughout
the text appropriately.

• There is a greater emphasis on teaching the mathematical language and using mathematical notation.

• All-new figures clearly illustrate important concepts, examples, and applications.

• The text has been redesigned to improve readability.

Approach and Organization


Students typically have little trouble with computational questions, but they often
struggle with abstract concepts and proofs. This is problematic because computers
perform the computations in the vast majority of real-world applications of linear
algebra. Human users, meanwhile, must apply the theory to transform a given problem
into a linear algebra context, input the data properly, and interpret the result correctly.
The main goal of this book is to mix theory and computations throughout the
course. The benefits of this approach are as follows:

• It prevents students from mistaking linear algebra as very easy and very computational early in the course and then becoming overwhelmed by abstract concepts and theories later.

• It allows important linear algebra concepts to be developed and extended more slowly.

• It encourages students to use computational problems to help understand the theory of linear algebra rather than blindly memorize algorithms.

One example of this approach is our treatment of the concepts of spanning and linear independence. They are both introduced in Section 1.2 in ℝⁿ, where they can be motivated in a geometrical context. They are then used again for matrices in Section 3.1 and polynomials in Section 4.1, before they are finally extended to general vector spaces in Section 4.2.

The following are some other features of the text's organization:

• The idea of linear mappings is introduced early in a geometrical context and is used to explain aspects of matrix multiplication, matrix inversion, and features of systems of linear equations. Geometrical transformations provide intuitively satisfying illustrations of important concepts.

• Topics are ordered to give students a chance to work with concepts in a simpler setting before using them in a much more involved or abstract setting. For example, before reaching the definition of a vector space in Section 4.2, students will have seen the 10 vector space axioms and the concepts of linear independence and spanning for three different vector spaces, and they will have had some experience in working with bases and dimensions. Thus, instead of being bombarded with new concepts at the introduction of general vector spaces, students will just be generalizing concepts with which they are already familiar.

Pedagogical Features
Since mathematics is best learned by doing, the following pedagogical elements are
included in the book.

• A selection of routine mid-section exercises is provided, with solutions included in the back of the text. These allow students to use and test their understanding of one concept before moving on to other concepts in the section.

• Practice problems are provided for students at the end of each section. See "A
Note on the Exercises and Problems" below.

• Examples, theorems, and definitions are called out in the margins for easy
reference.

Applications
One of the difficulties in any linear algebra course is that the applications of linear algebra are not so immediate or so intuitively appealing as those of elementary calculus. Most convincing applications of linear algebra require a fairly lengthy buildup of background that would be inappropriate in a linear algebra text. However, without some of these applications, many students would find it difficult to remain motivated to learn linear algebra. An additional difficulty is that the applications of linear algebra are so varied that there is very little agreement on which applications should be covered.
In this text we briefly discuss a few applications to give students some easy samples. Additional applications are provided on the Companion Website so that instructors who wish to cover some of them can pick and choose at their leisure without increasing the size (and hence the cost) of the book.

List of Applications

• Minimum distance from a point to a plane (Section 1.4)

• Area and volume (Section 1.5, Section 5.4)

• Electrical circuits (Section 2.4, Section 9.2)

• Planar trusses (Section 2.4)

• Linear programming (Section 2.4)

• Magic squares (Chapter 4 Review)

• Markov processes (Section 6.3)

• Differential equations (Section 6.4)

• Curve of best fit (Section 7.3)

• Overdetermined systems (Section 7.3)

• Graphing quadratic forms (Section 8.3)

• Small deformations (Section 8.4)

• The inertia tensor (Section 8.4)



Computers
As explained in "A Note on the Exercises and Problems," which follows, some problems in the book require access to appropriate computer software. Students should realize that the theory of linear algebra does not apply only to matrices of small size with integer entries. However, since there are many ideas to be learned in linear algebra, numerical methods are not discussed. Some numerical issues, such as accuracy and efficiency, are addressed in notes and problems.
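The point that real-number computation behaves differently from exact integer or fraction arithmetic can be illustrated with a short Python sketch (ours, not from the text; Cramer's rule for a 2 × 2 system is used here only because it fits in a few lines — the book covers it in Section 5.3):

```python
from fractions import Fraction

# Solve the 2x2 system  a11*x + a12*y = b1,  a21*x + a22*y = b2
# by Cramer's rule; works for any numeric type that supports / and *.
def solve2(a11, a12, a21, a22, b1, b2):
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# The same system, once with binary floats (0.1 is not exactly
# representable) and once with exact rational arithmetic.
xf, yf = solve2(0.1, 0.2, 0.3, 0.4, 0.3, 0.7)
F = Fraction
xe, ye = solve2(F(1, 10), F(2, 10), F(3, 10), F(4, 10), F(3, 10), F(7, 10))

print(xe, ye)  # → 1 1   (exact)
print(xf, yf)  # floats that agree with 1 only up to rounding error
```

The exact answer is x = y = 1; the floating-point version is correct only to within rounding error, which is precisely the kind of accuracy issue the notes and C Problems raise.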

A Note on the Exercises and Problems


Most sections contain mid-section exercises. These mid-section exercises have been created to allow students to check their understanding of key concepts before continuing on to new concepts in the section. Thus, when reading through a chapter, a student should always complete each exercise before continuing to read the rest of the chapter.
At the end of each section, problems are divided into A, B, C, and D problems.
The A Problems are practice problems and are intended to provide a sufficient
variety and number of standard computational problems, as well as the odd theoretical
problem for students to master the techniques of the course; answers are provided at
the back of the text. Full solutions are available in the Student Solutions Manual (sold
separately).
The B Problems are homework problems and essentially duplicates of the A problems with no answers provided, for instructors who want such exercises for homework. In a few cases, the B problems are not exactly parallel to the A problems.
The C Problems require the use of a suitable computer program. These problems are designed not only to help students familiarize themselves with using computer software to solve linear algebra problems, but also to remind students that linear algebra uses real numbers, not only integers or simple fractions.
The D Problems usually require students to work with general cases, to write simple arguments, or to invent examples. These are important aspects of mastering mathematical ideas, and all students should attempt at least some of these, and not get discouraged if they make slow progress. With effort, most students will be able to solve many of these problems and will benefit greatly in the understanding of the concepts and connections in doing so.
In addition to the mid-section exercises and end-of-section problems, there is a sample Chapter Quiz in the Chapter Review at the end of each chapter. Students should be aware that their instructors may have a different idea of what constitutes an appropriate test on this material.
At the end of each chapter, there are some Further Problems; these are similar to
the D Problems and provide an extended investigation of certain ideas or applications
of linear algebra. Further Problems are intended for advanced students who wish to
challenge themselves and explore additional concepts.

Using This Text to Teach Linear Algebra


There are many different approaches to teaching linear algebra. Although we suggest
covering the chapters in order, the text has been written to try to accommodate two
main strategies.

Early Vector Spaces


We believe that it is very beneficial to introduce general vector spaces immediately
after students have gained some experience in working with a few specific examples
of vector spaces. Students find it easier to generalize the concepts of spanning, linear
independence, bases, dimension, and linear mappings while the earlier specific cases
are still fresh in their minds. In addition, we feel that it can be unhelpful to students to have determinants available too soon. Some students are far too eager to latch onto mindless algorithms involving determinants (for example, to check linear independence of three vectors in three-dimensional space) rather than actually come to terms with the defining ideas. Finally, this allows eigenvalues, eigenvectors, and diagonalization to be highlighted near the end of the first course. If diagonalization is taught too soon, its importance can be lost on students.

Early Determinants and Diagonalization


Some reviewers have commented that they want to be able to cover determinants and diagonalization before abstract vector spaces and that in some introductory courses, abstract vector spaces may not be covered at all. Thus, this text has been written so that Chapters 5 and 6 may be taught prior to Chapter 4. (Note that all required information about subspaces, bases, and dimension for diagonalization of matrices over ℝ is covered in Chapters 1, 2, and 3.) Moreover, there is a natural flow from matrix inverses and elementary matrices at the end of Chapter 3 to determinants in Chapter 5.

A Course Outline
The following table indicates the sections in each chapter that we consider to be "central material":

Chapter   Central Material       Optional Material
1         1, 2, 3, 4, 5
2         1, 2, 3                4
3         1, 2, 3, 4, 5, 6       7
4         1, 2, 3, 4, 5, 6, 7
5         1, 2, 3                4
6         1, 2                   3, 4
7         1, 2                   3, 4, 5
8         1, 2                   3, 4
9         1, 2, 3, 4, 5, 6

Supplements
We are pleased to offer a variety of excellent supplements to students and instructors
using the Second Edition.
The new Student Solutions Manual (ISBN: 978-0-321-80762-5), prepared by the author of the second edition, contains full solutions to the Practice Problems and Chapter Quizzes. It is available to students at low cost.
MyMathLab® Online Course (access code required) delivers proven results in helping individual students succeed. It provides engaging experiences that personalize, stimulate, and measure learning for each student. And, it comes from a trusted
partner with educational expertise and an eye on the future. To learn more about how
MyMathLab combines proven learning applications with powerful assessment, visit www.mymathlab.com or contact your Pearson representative.
The new Instructor's Resource CD-ROM (ISBN: 978-0-321-80759-5) includes
the following valuable teaching tools:

• An Instructor's Solutions Manual for all exercises in the text: Practice Problems, Homework Problems, Computer Problems, Conceptual Problems, Chapter Quizzes, and Further Problems.

• A Test Bank with a large selection of questions for every chapter of the text.

• Customizable Beamer Presentations for each chapter.

• An Image Library that includes high-quality versions of the Figures, Theorems, Corollaries, Lemmas, and Algorithms in the text.

Finally, the second edition is available as a CourseSmart eTextbook (ISBN: 978-0-321-75005-1). CourseSmart goes beyond traditional expectations, providing instant, online access to the textbook and course materials at a lower cost
for students (average savings of 60%). With instant access from any computer and the
ability to search the text, students will find the content they need quickly, no matter
where they are. And with online tools like highlighting and note taking, students can
save time and study efficiently.
Instructors can save time and hassle with a digital eTextbook that allows them
to search for the most relevant content at the very moment they need it. Whether it's
evaluating textbooks or creating lecture notes to help students with difficult concepts,
CourseSmart can make life a little easier. See all the benefits at www.coursesmart.com/
instructors or www.coursesmart.com/students.

Pearson's technology specialists work with faculty and campus course designers to ensure that Pearson technology products, assessment tools, and online course materials are tailored to meet your specific needs. This highly qualified team is dedicated to helping schools take full advantage of a wide range of educational resources by assisting in the integration of a variety of instructional materials and media formats. Your local Pearson Canada sales representative can provide you with more details about this service program.

Acknowledgments
Thanks are expressed to:

Agnieszka Wolczuk: for her support, encouragement, help with editing, and tasty
snacks.
Mike La Croix: for all of the amazing figures in the text and for his assistance on
editing, formatting, and LaTeX'ing.
Stephen New, Martin Pei, Barbara Csima, Emilio Paredes: for proofreading and
their many valuable comments and suggestions.
Conrad Hewitt, Robert Andre, Uldis Celmins, C. T. Ng, and many of my other colleagues who have taught me things about linear algebra and how to teach it, as well as providing many helpful suggestions for the text.
To all of the reviewers of the text, whose comments, corrections, and recommendations have resulted in many positive improvements:
Robert Andre, University of Waterloo
Luigi Bilotto, Vanier College
Dietrich Burbulla, University of Toronto
Dr. Alistair Carr, Monash University
Gerald Cliff, University of Alberta
Antoine Khalil, CEGEP Vanier
Hadi Kharaghani, University of Lethbridge
Gregory Lewis, University of Ontario Institute of Technology
Eduardo Martinez-Pedroza, McMaster University
Dorette Pronk, Dalhousie University
Dr. Alyssa Sankey, University of New Brunswick
Manuele Santoprete, Wilfrid Laurier University
Alistair Savage, University of Ottawa
Denis Sevee, John Abbott College
Mark Solomonovich, Grant MacEwan University
Dr. Pamini Thangarajah, Mount Royal University
Dr. Chris Tisdell, The University of New South Wales
Murat Tuncali, Nipissing University
Brian Wetton, University of British Columbia

Thanks also to the many anonymous reviewers of the manuscript.


Cathleen Sullivan, John Lewis, Patricia Ciardullo, and Sarah Lukaweski: For all
of their hard work in making the second edition of this text possible and for
their suggestions and editing.
In addition, I thank the team at Pearson Canada for their support during the
writing and production of this text.
Finally, a very special thank you to Daniel Norman and all those who contributed
to the first edition.

Dan Wolczuk
University of Waterloo
CHAPTER 1

Euclidean Vector Spaces


CHAPTER OUTLINE

1.1 Vectors in ℝ² and ℝ³
1.2 Vectors in ℝⁿ
1.3 Length and Dot Products
1.4 Projections and Minimum Distance
1.5 Cross-Products and Volumes

Some of the material in this chapter will be familiar to many students, but some ideas
that are introduced here will be new to most. In this chapter we will look at operations
on and important concepts related to vectors. We will also look at some applications
of vectors in the familiar setting of Euclidean space. Most of these concepts will later
be extended to more general settings. A firm understanding of the material from this
chapter will help greatly in understanding the topics in the rest of this book.

1.1 Vectors in $\mathbb{R}^2$ and $\mathbb{R}^3$


We begin by considering the two-dimensional plane in Cartesian coordinates. Choose
an origin $O$ and two mutually perpendicular axes, called the $x_1$-axis and the $x_2$-axis, as
shown in Figure 1.1.1. Then a point $P$ in the plane is identified by the 2-tuple $(p_1, p_2)$,
called the coordinates of $P$, where $p_1$ is the distance from $P$ to the $x_2$-axis, with $p_1$ positive
if $P$ is to the right of this axis and negative if $P$ is to the left. Similarly, $p_2$ is the distance
from $P$ to the $x_1$-axis, with $p_2$ positive if $P$ is above this axis and negative if $P$ is below.
You have already learned how to plot graphs of equations in this plane.

Figure 1.1.1 Coordinates in the plane: the point $P = (p_1, p_2)$.

For applications in many areas of mathematics, and in many subjects such as


physics and economics, it is useful to view points more abstractly. In particular, we
will view them as vectors and provide rules for adding them and multiplying them by
constants.

Definition $\mathbb{R}^2$ is the set of all vectors of the form $\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$, where $x_1$ and $x_2$ are real numbers called
the components of the vector. Mathematically, we write
$$\mathbb{R}^2 = \left\{ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \;\middle|\; x_1, x_2 \in \mathbb{R} \right\}$$

Remark

We shall use the notation $\vec{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$ to denote vectors in $\mathbb{R}^2$.


Although we are viewing the elements of $\mathbb{R}^2$ as vectors, we can still interpret
these geometrically as points. That is, the vector $\vec{p} = \begin{bmatrix} p_1 \\ p_2 \end{bmatrix}$ can be interpreted as the
point $P(p_1, p_2)$. Graphically, this is often represented by drawing an arrow from $(0, 0)$
to $(p_1, p_2)$, as shown in Figure 1.1.2. Note, however, that the points between $(0, 0)$
and $(p_1, p_2)$ should not be thought of as points "on the vector." The representation of a
vector as an arrow is particularly common in physics; force and acceleration are vector
quantities that can conveniently be represented by an arrow of suitable magnitude and
direction.

Figure 1.1.2 Graphical representation of a vector.

Definition (Addition and Scalar Multiplication in $\mathbb{R}^2$)
If $\vec{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$, $\vec{y} = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}$, and $t \in \mathbb{R}$, then we define addition of vectors by
$$\vec{x} + \vec{y} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} x_1 + y_1 \\ x_2 + y_2 \end{bmatrix}$$
and the scalar multiplication of a vector by a factor of $t$, called a scalar, is defined by
$$t\vec{x} = t\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} tx_1 \\ tx_2 \end{bmatrix}$$

The addition of two vectors is illustrated in Figure 1.1.3: construct a parallelogram
with vectors $\vec{x}$ and $\vec{y}$ as adjacent sides; then $\vec{x} + \vec{y}$ is the vector corresponding to the
vertex of the parallelogram opposite to the origin. Observe that the components really
are added according to the definition. This is often called the "parallelogram rule for
addition."

Figure 1.1.3 Addition of vectors $\vec{p}$ and $\vec{q}$.

EXAMPLE 1
Let $\vec{x} = \begin{bmatrix} -2 \\ 3 \end{bmatrix}$ and $\vec{y} = \begin{bmatrix} 5 \\ 1 \end{bmatrix}$. Then
$$\vec{x} + \vec{y} = \begin{bmatrix} -2 \\ 3 \end{bmatrix} + \begin{bmatrix} 5 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \\ 4 \end{bmatrix}$$

Similarly, scalar multiplication is illustrated in Figure 1.1.4. Observe that multiplication
by a negative scalar reverses the direction of the vector. It is important to note
that $\vec{x} - \vec{y}$ is to be interpreted as $\vec{x} + (-1)\vec{y}$.

Figure 1.1.4 Scalar multiplication of the vector $\vec{d}$, showing $(1.5)\vec{d}$ and $(-1)\vec{d}$.

EXAMPLE 2
Let $\vec{u} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$, $\vec{v} = \begin{bmatrix} -2 \\ 3 \end{bmatrix}$, and $\vec{w} = \begin{bmatrix} -1 \\ 4 \end{bmatrix}$. Calculate $\vec{u} + \vec{v}$, $3\vec{w}$, and $2\vec{v} - \vec{w}$.

Solution: We get
$$\vec{u} + \vec{v} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} + \begin{bmatrix} -2 \\ 3 \end{bmatrix} = \begin{bmatrix} -1 \\ 5 \end{bmatrix}$$
$$3\vec{w} = 3\begin{bmatrix} -1 \\ 4 \end{bmatrix} = \begin{bmatrix} -3 \\ 12 \end{bmatrix}$$
$$2\vec{v} - \vec{w} = 2\begin{bmatrix} -2 \\ 3 \end{bmatrix} + (-1)\begin{bmatrix} -1 \\ 4 \end{bmatrix} = \begin{bmatrix} -4 \\ 6 \end{bmatrix} + \begin{bmatrix} 1 \\ -4 \end{bmatrix} = \begin{bmatrix} -3 \\ 2 \end{bmatrix}$$
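For readers who wish to experiment, componentwise addition and scalar multiplication are easy to sketch in code. The helper names and the sample vectors below are illustrative choices of ours, not anything from the text:

```python
def add(x, y):
    # componentwise addition: (x1 + y1, x2 + y2, ...)
    return [xi + yi for xi, yi in zip(x, y)]

def scalar_mul(t, x):
    # multiply every component by the scalar t
    return [t * xi for xi in x]

v = [-2, 3]
w = [-1, 4]
print(add(v, w))                                 # [-3, 7]
print(scalar_mul(3, w))                          # [-3, 12]
print(add(scalar_mul(2, v), scalar_mul(-1, w)))  # 2v - w = [-3, 2]
```

Note that $2\vec{v} - \vec{w}$ is computed exactly as the text prescribes: as $2\vec{v} + (-1)\vec{w}$.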

EXERCISE 1
Let $\vec{u} = \begin{bmatrix} 3 \\ -1 \end{bmatrix}$, $\vec{v} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$, and $\vec{w} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$. Calculate each of the following and illustrate with
a sketch.

(a) $\vec{u} + \vec{w}$  (b) $-\vec{v}$  (c) $(\vec{u} + \vec{w}) - \vec{v}$

The vectors $\vec{e}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\vec{e}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$ play a special role in our discussion of $\mathbb{R}^2$. We
will call the set $\{\vec{e}_1, \vec{e}_2\}$ the standard basis for $\mathbb{R}^2$. (We shall discuss the concept of
a basis further in Section 1.2.) The basis vectors $\vec{e}_1$ and $\vec{e}_2$ are important because any
vector $\vec{v} = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}$ can be written as a sum of scalar multiples of $\vec{e}_1$ and $\vec{e}_2$ in exactly one
way:
$$\vec{v} = v_1\vec{e}_1 + v_2\vec{e}_2$$

Remark

In physics and engineering, it is common to use the notation $\vec{i} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\vec{j} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$
instead.

We will use the phrase linear combination to mean "sum of scalar multiples."
So, we have shown above that any vector $\vec{x} \in \mathbb{R}^2$ can be written as a unique linear
combination of the standard basis vectors.

One other vector in $\mathbb{R}^2$ deserves special mention: the zero vector, $\vec{0} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$. Some
important properties of the zero vector, which are easy to verify, are that for any
$\vec{x} \in \mathbb{R}^2$,

(1) $\vec{0} + \vec{x} = \vec{x}$
(2) $\vec{x} + (-1)\vec{x} = \vec{0}$
(3) $0\vec{x} = \vec{0}$

The Vector Equation of a Line in $\mathbb{R}^2$

In Figure 1.1.4, it is apparent that the set of all multiples of a vector $\vec{d}$ creates a line
through the origin. We make this our definition of a line in $\mathbb{R}^2$: a line through the
origin in $\mathbb{R}^2$ is a set of the form
$$\{t\vec{d} \mid t \in \mathbb{R}\}$$
Often we do not use formal set notation but simply write the vector equation of the
line:
$$\vec{x} = t\vec{d}, \quad t \in \mathbb{R}$$
The vector $\vec{d}$ is called the direction vector of the line.
Similarly, we define the line through $\vec{p}$ with direction vector $\vec{d}$ to be the set
$$\{\vec{p} + t\vec{d} \mid t \in \mathbb{R}\}$$
which has the vector equation
$$\vec{x} = \vec{p} + t\vec{d}, \quad t \in \mathbb{R}$$
This line is parallel to the line with equation $\vec{x} = t\vec{d}$, $t \in \mathbb{R}$, because of the parallelogram
rule for addition. As shown in Figure 1.1.5, each point on the line through $\vec{p}$ can be
obtained from a corresponding point on the line $\vec{x} = t\vec{d}$ by adding the vector $\vec{p}$. We
say that the line has been translated by $\vec{p}$. More generally, two lines are parallel if the
direction vector of one line is a non-zero multiple of the direction vector of the other
line.

Figure 1.1.5 The line with vector equation $\vec{x} = t\vec{d} + \vec{p}$.

EXAMPLE 3
A vector equation of the line through the point $P(2, -3)$ with direction vector $\begin{bmatrix} -4 \\ 5 \end{bmatrix}$ is
$$\vec{x} = \begin{bmatrix} 2 \\ -3 \end{bmatrix} + t\begin{bmatrix} -4 \\ 5 \end{bmatrix}, \quad t \in \mathbb{R}$$

EXAMPLE 4 Write the vector equation of a line through $P(1, 2)$ parallel to the line with vector
equation
$$\vec{x} = t\begin{bmatrix} 3 \\ 1 \end{bmatrix}, \quad t \in \mathbb{R}$$

Solution: Since the lines are parallel, we can choose the same direction vector. Hence, the
vector equation of the line is
$$\vec{x} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} + t\begin{bmatrix} 3 \\ 1 \end{bmatrix}, \quad t \in \mathbb{R}$$

EXERCISE 2 Write the vector equation of a line through $P(0, 0)$ parallel to the line in Example 4.
Sometimes the components of a vector equation are written separately:
$$\vec{x} = \vec{p} + t\vec{d} \quad\text{becomes}\quad \begin{cases} x_1 = p_1 + td_1 \\ x_2 = p_2 + td_2, \end{cases} \quad t \in \mathbb{R}$$
This is referred to as the parametric equation of the line. The familiar scalar form
of the equation of the line is obtained by eliminating the parameter $t$. Provided that
$d_1 \neq 0$ and $d_2 \neq 0$,
$$\frac{x_1 - p_1}{d_1} = t = \frac{x_2 - p_2}{d_2}$$
or
$$x_2 = p_2 + \frac{d_2}{d_1}(x_1 - p_1)$$
What can you say about the line if $d_1 = 0$ or $d_2 = 0$?

EXAMPLE 5 Write the vector, parametric, and scalar equations of the line passing through the point
$P(3, 4)$ with direction vector $\begin{bmatrix} -5 \\ 1 \end{bmatrix}$.

Solution: The vector equation is $\vec{x} = \begin{bmatrix} 3 \\ 4 \end{bmatrix} + t\begin{bmatrix} -5 \\ 1 \end{bmatrix}$, $t \in \mathbb{R}$.

So, the parametric equations are $x_1 = 3 - 5t$, $x_2 = 4 + t$, $t \in \mathbb{R}$.
The scalar equation is $x_2 = 4 - \frac{1}{5}(x_1 - 3)$.
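Using the line of Example 5 (point $P(3, 4)$, direction $\begin{bmatrix} -5 \\ 1 \end{bmatrix}$), a short script can confirm that points generated from the vector equation all satisfy the scalar equation; the helper name is our own:

```python
def line_point(p, d, t):
    # the point p + t*d on the line with vector equation x = p + t d
    return [pi + t * di for pi, di in zip(p, d)]

p, d = [3, 4], [-5, 1]  # values from Example 5
for t in [-1, 0, 1, 2]:
    x1, x2 = line_point(p, d, t)
    # each point must satisfy the scalar equation x2 = 4 - (x1 - 3)/5
    assert abs(x2 - (4 - (x1 - 3) / 5)) < 1e-12
print("all sampled points satisfy the scalar equation")
```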

Directed Line Segments For dealing with certain geometrical problems, it is
useful to introduce directed line segments. We denote the directed line segment from
point $P$ to point $Q$ by $\overrightarrow{PQ}$. We think of it as an "arrow" starting at $P$ and pointing
towards $Q$. We shall identify directed line segments from the origin $O$ with the corresponding
vectors; we write $\overrightarrow{OP} = \vec{p}$, $\overrightarrow{OQ} = \vec{q}$, and so on. A directed line segment that
starts at the origin is called the position vector of the point.
For many problems, we are interested only in the direction and length of the
directed line segment; we are not interested in the point where it is located. For
example, in Figure 1.1.3, we may wish to treat the line segment $QR$ as if it were the
same as $OP$. Taking our cue from this example, for arbitrary points $P$, $Q$, $R$ in $\mathbb{R}^2$, we
define $\overrightarrow{QR}$ to be equivalent to $\overrightarrow{OP}$ if $\vec{r} - \vec{q} = \vec{p}$. In this case, we have used one directed
line segment $\overrightarrow{OP}$ starting from the origin in our definition.

Figure 1.1.6 A directed line segment from P to Q.

More generally, for arbitrary points $Q$, $R$, $S$, and $T$ in $\mathbb{R}^2$, we define $\overrightarrow{QR}$ to be
equivalent to $\overrightarrow{ST}$ if they are both equivalent to the same $\overrightarrow{OP}$ for some $P$. That is, if
$$\vec{r} - \vec{q} = \vec{p} \quad\text{and}\quad \vec{t} - \vec{s} = \vec{p} \quad\text{for the same } \vec{p}$$
We can abbreviate this by simply requiring that
$$\vec{r} - \vec{q} = \vec{t} - \vec{s}$$

EXAMPLE 6 For points $Q(1, 3)$, $R(6, -1)$, $S(-2, 4)$, and $T(3, 0)$, we have that $\overrightarrow{QR}$ is equivalent to
$\overrightarrow{ST}$ because
$$\vec{r} - \vec{q} = \begin{bmatrix} 6 \\ -1 \end{bmatrix} - \begin{bmatrix} 1 \\ 3 \end{bmatrix} = \begin{bmatrix} 5 \\ -4 \end{bmatrix} = \begin{bmatrix} 3 \\ 0 \end{bmatrix} - \begin{bmatrix} -2 \\ 4 \end{bmatrix} = \vec{t} - \vec{s}$$
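The equivalence test $\vec{r} - \vec{q} = \vec{t} - \vec{s}$ reduces to one subtraction per segment; here are the four points of Example 6 checked in code (the helper name is our own):

```python
def segment(a, b):
    # components of the directed line segment from point a to point b
    return [bi - ai for ai, bi in zip(a, b)]

Q, R, S, T = (1, 3), (6, -1), (-2, 4), (3, 0)  # points from Example 6
assert segment(Q, R) == segment(S, T) == [5, -4]
print("QR is equivalent to ST")
```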

In some problems, where it is not necessary to distinguish between equivalent
directed line segments, we "identify" them (that is, we treat them as the same object)
and write $\overrightarrow{PQ} = \overrightarrow{RS}$. Indeed, we identify them with the corresponding line segment
starting at the origin, so in Example 6 we write $\overrightarrow{QR} = \overrightarrow{ST} = \begin{bmatrix} 5 \\ -4 \end{bmatrix}$.

Remark

Writing $\overrightarrow{QR} = \overrightarrow{ST}$ is a bit sloppy (an abuse of notation) because $\overrightarrow{QR}$ is not really
the same object as $\overrightarrow{ST}$. However, introducing the precise language of "equivalence
classes" and more careful notation with directed line segments is not helpful at this
stage. By introducing directed line segments, we are encouraged to think about vectors
that are located at arbitrary points in space. This is helpful in solving some geometrical
problems, as we shall see below.

EXAMPLE 7 Find a vector equation of the line through $P(1, 2)$ and $Q(3, -1)$.

Solution: The direction of the line is
$$\overrightarrow{PQ} = \vec{q} - \vec{p} = \begin{bmatrix} 3 \\ -1 \end{bmatrix} - \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 2 \\ -3 \end{bmatrix}$$
Hence, a vector equation of the line with direction $\overrightarrow{PQ}$ that passes through $P(1, 2)$ is
$$\vec{x} = \vec{p} + t\overrightarrow{PQ} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} + t\begin{bmatrix} 2 \\ -3 \end{bmatrix}, \quad t \in \mathbb{R}$$

Observe in the example above that we would have the same line if we started at the
second point and "moved" toward the first point, or even if we took a direction vector
in the opposite direction. Thus, the same line is described by the vector equations
$$\vec{x} = \begin{bmatrix} 3 \\ -1 \end{bmatrix} + r\begin{bmatrix} -2 \\ 3 \end{bmatrix}, \quad r \in \mathbb{R}$$
$$\vec{x} = \begin{bmatrix} 3 \\ -1 \end{bmatrix} + s\begin{bmatrix} 2 \\ -3 \end{bmatrix}, \quad s \in \mathbb{R}$$
$$\vec{x} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} + t\begin{bmatrix} -2 \\ 3 \end{bmatrix}, \quad t \in \mathbb{R}$$
In fact, there are infinitely many descriptions of a line: we may choose any point on
the line, and we may use any non-zero multiple of the direction vector.
EXERCISE 3 Find a vector equation of the line through $P(1, 1)$ and $Q(-2, 2)$.
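A line through two points is found exactly as in Example 7: subtract the position vectors to get a direction. A small sketch, with a helper name of our own choosing:

```python
def line_through(p, q):
    # returns (point, direction) for the line x = p + t*(q - p)
    d = [qi - pi for pi, qi in zip(p, q)]
    return p, d

point, d = line_through([1, 2], [3, -1])  # the points of Example 7
assert d == [2, -3]
print("vector equation: x =", point, "+ t *", d)
```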

Vectors and Lines in $\mathbb{R}^3$

Everything we have done so far works perfectly well in three dimensions. We choose
an origin $O$ and three mutually perpendicular axes, as shown in Figure 1.1.7. The
$x_1$-axis is usually pictured coming out of the page (or blackboard), the $x_2$-axis to
the right, and the $x_3$-axis towards the top of the picture.

Figure 1.1.7 The positive coordinate axes in $\mathbb{R}^3$.

It should be noted that we are adopting the convention that the coordinate axes
form a right-handed system. One way to visualize a right-handed system is to spread
out the thumb, index finger, and middle finger of your right hand. The thumb is
the x1 -axis, the index finger is the x2-axis, and the middle finger is the x3-axis. See
Figure 1.1.8.

Figure 1.1.8 Identifying a right-handed system.

We now define $\mathbb{R}^3$ to be the three-dimensional analog of $\mathbb{R}^2$.

Definition $\mathbb{R}^3$ is the set of all vectors of the form $\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$, where $x_1$, $x_2$, and $x_3$ are real numbers.
Mathematically, we write
$$\mathbb{R}^3 = \left\{ \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \;\middle|\; x_1, x_2, x_3 \in \mathbb{R} \right\}$$

Definition (Addition and Scalar Multiplication in $\mathbb{R}^3$)
If $\vec{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$, $\vec{y} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}$, and $t \in \mathbb{R}$, then we define addition of vectors by
$$\vec{x} + \vec{y} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} x_1 + y_1 \\ x_2 + y_2 \\ x_3 + y_3 \end{bmatrix}$$
and the scalar multiplication of a vector by a factor of $t$ by
$$t\vec{x} = t\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} tx_1 \\ tx_2 \\ tx_3 \end{bmatrix}$$

Addition still follows the parallelogram rule. It may help you to visualize this
if you realize that two vectors in $\mathbb{R}^3$ must lie within a plane in $\mathbb{R}^3$ so that the two-dimensional
picture is still valid. See Figure 1.1.9.

Figure 1.1.9 Two-dimensional parallelogram rule in $\mathbb{R}^3$.

EXAMPLE 8
Let $\vec{u} = \begin{bmatrix} 1 \\ -2 \\ 3 \end{bmatrix}$, $\vec{v} = \begin{bmatrix} 2 \\ 1 \\ -1 \end{bmatrix}$, and $\vec{w} = \begin{bmatrix} 3 \\ 0 \\ 1 \end{bmatrix}$. Calculate $\vec{v} + \vec{u}$, $-\vec{w}$, and $-\vec{v} + 2\vec{w} - \vec{u}$.

Solution: We have
$$\vec{v} + \vec{u} = \begin{bmatrix} 2 \\ 1 \\ -1 \end{bmatrix} + \begin{bmatrix} 1 \\ -2 \\ 3 \end{bmatrix} = \begin{bmatrix} 3 \\ -1 \\ 2 \end{bmatrix}$$
$$-\vec{w} = -\begin{bmatrix} 3 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} -3 \\ 0 \\ -1 \end{bmatrix}$$
$$-\vec{v} + 2\vec{w} - \vec{u} = \begin{bmatrix} -2 \\ -1 \\ 1 \end{bmatrix} + \begin{bmatrix} 6 \\ 0 \\ 2 \end{bmatrix} + \begin{bmatrix} -1 \\ 2 \\ -3 \end{bmatrix} = \begin{bmatrix} 3 \\ 1 \\ 0 \end{bmatrix}$$

It is useful to introduce the standard basis for $\mathbb{R}^3$ just as we did for $\mathbb{R}^2$. Define
$$\vec{e}_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad \vec{e}_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad \vec{e}_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$$
Then any vector $\vec{v} = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}$ can be written as the linear combination
$$\vec{v} = v_1\vec{e}_1 + v_2\vec{e}_2 + v_3\vec{e}_3$$

Remark

In physics and engineering, it is common to use the notation $\vec{i} = \vec{e}_1$, $\vec{j} = \vec{e}_2$, and $\vec{k} = \vec{e}_3$
instead.

The zero vector $\vec{0} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$ in $\mathbb{R}^3$ has the same properties as the zero vector in $\mathbb{R}^2$.

Directed line segments are the same in three-dimensional space as in the two-dimensional
case.
A line through the point $P$ in $\mathbb{R}^3$ (corresponding to a vector $\vec{p}$) with direction
vector $\vec{d} \neq \vec{0}$ can be described by a vector equation:
$$\vec{x} = \vec{p} + t\vec{d}, \quad t \in \mathbb{R}$$
It is important to realize that a line in $\mathbb{R}^3$ cannot be described by a single scalar linear
equation, as in $\mathbb{R}^2$. We shall see in Section 1.3 that such an equation describes a plane
in $\mathbb{R}^3$.

EXAMPLE 9 Find a vector equation of the line that passes through the points $P(1, 5, -2)$ and
$Q(4, -1, 3)$.

Solution: A direction vector is $\vec{d} = \vec{q} - \vec{p} = \begin{bmatrix} 3 \\ -6 \\ 5 \end{bmatrix}$. Hence a vector equation of the
line is
$$\vec{x} = \begin{bmatrix} 1 \\ 5 \\ -2 \end{bmatrix} + t\begin{bmatrix} 3 \\ -6 \\ 5 \end{bmatrix}, \quad t \in \mathbb{R}$$
Note that the corresponding parametric equations are $x_1 = 1 + 3t$, $x_2 = 5 - 6t$, and
$x_3 = -2 + 5t$.
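The same two-point computation works in $\mathbb{R}^3$; here it is replayed on the points of Example 9:

```python
P, Q = [1, 5, -2], [4, -1, 3]          # points from Example 9
d = [qi - pi for pi, qi in zip(P, Q)]  # direction vector q - p
assert d == [3, -6, 5]
# the parametric equations x = P + t*d return Q at t = 1
t = 1
assert [pi + t * di for pi, di in zip(P, d)] == Q
print("direction vector:", d)
```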

EXERCISE 4 Find a vector equation of the line that passes through the points $P(1, 2, 2)$ and
$Q(1, -2, 3)$.

PROBLEMS 1.1
Practice Problems

A1 Compute each of the following linear combinations and illustrate with a sketch.
(a) $\begin{bmatrix} 2 \\ 3 \end{bmatrix} + \begin{bmatrix} 3 \\ 1 \end{bmatrix}$
(b) $\begin{bmatrix} 4 \\ 2 \end{bmatrix} - \begin{bmatrix} 1 \\ 3 \end{bmatrix}$
(c) $3\begin{bmatrix} -1 \\ 2 \end{bmatrix}$

A2 Compute each of the following linear combinations.
(a) $\begin{bmatrix} -3 \\ 4 \end{bmatrix} + \begin{bmatrix} -2 \\ 1 \end{bmatrix}$
(b) $2\begin{bmatrix} 1 \\ 5 \end{bmatrix} - 3\begin{bmatrix} 2 \\ 1 \end{bmatrix}$
(c) $-2\begin{bmatrix} 3 \\ -2 \end{bmatrix}$
(d) $\frac{2}{3}\begin{bmatrix} 3 \\ 1 \end{bmatrix} - 2\begin{bmatrix} 1/4 \\ 1/3 \end{bmatrix}$

A3 Compute each of the following linear combinations.
(a) $\begin{bmatrix} 4 \\ 1 \\ 2 \end{bmatrix} - \begin{bmatrix} 1 \\ 3 \\ 1 \end{bmatrix}$
(b) $-3\begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix} + 2\begin{bmatrix} 5 \\ 1 \\ 4 \end{bmatrix}$
(c) $2\begin{bmatrix} 2 \\ -2 \\ 0 \end{bmatrix} + 2\begin{bmatrix} 1 \\ 1 \\ 3 \end{bmatrix}$

A4 Let $\vec{v} = \begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix}$ and $\vec{w} = \begin{bmatrix} 3 \\ 0 \\ 1 \end{bmatrix}$. Determine
(a) $2\vec{v} - 3\vec{w}$
(b) $-3(\vec{v} + 2\vec{w}) + 5\vec{v}$
(c) $\vec{u}$ such that $\vec{w} - 2\vec{u} = 3\vec{v}$
(d) $\vec{u}$ such that $\vec{u} - 3\vec{v} = 2\vec{u}$

A5 Let $\vec{v} = \begin{bmatrix} 2 \\ 1 \\ 1 \end{bmatrix}$ and $\vec{w} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$. Determine
(a) $\frac{1}{2}\vec{v} + \frac{1}{2}\vec{w}$
(b) $2(\vec{v} + \vec{w}) - (2\vec{v} - 3\vec{w})$
(c) $\vec{u}$ such that $\vec{w} - \vec{u} = 2\vec{v}$
(d) $\vec{u}$ such that $\frac{1}{2}\vec{u} + \frac{1}{3}\vec{v} = \vec{w}$

A6 Consider the points $P(2, 3, 1)$, $Q(3, 1, -2)$, $R(1, 4, 0)$, $S(-5, 1, 5)$. Determine $\overrightarrow{PQ}$, $\overrightarrow{PR}$, $\overrightarrow{PS}$, $\overrightarrow{QR}$, and $\overrightarrow{SR}$, and verify that $\overrightarrow{PQ} + \overrightarrow{QR} = \overrightarrow{PR} = \overrightarrow{PS} + \overrightarrow{SR}$.

A7 Write a vector equation of the line passing through the given point with the given direction vector.
(a) $P(3, 4)$, $\vec{d} = \begin{bmatrix} -5 \\ 1 \end{bmatrix}$
(b) $P(2, 3)$, $\vec{d} = \begin{bmatrix} -4 \\ -6 \end{bmatrix}$
(c) $P(2, 0, 5)$, $\vec{d} = \begin{bmatrix} 4 \\ -2 \\ -11 \end{bmatrix}$
(d) $P(4, 1, 5)$, $\vec{d} = \begin{bmatrix} -2 \\ 1 \\ 1 \end{bmatrix}$

A8 Write a vector equation for the line that passes through the given points.
(a) $P(-1, 2)$, $Q(2, -3)$
(b) $P(4, 1)$, $Q(-2, -1)$
(c) $P(1, 3, -5)$, $Q(-2, 1, 0)$
(d) $P(-2, 1, 1)$, $Q(4, 2, 2)$
(e) $P(\frac{1}{2}, \frac{1}{4}, 1)$, $Q(-1, 1, \frac{1}{3})$

A9 For each of the following lines in $\mathbb{R}^2$, determine a vector equation and parametric equations.
(a) $x_2 = 3x_1 + 2$
(b) $2x_1 + 3x_2 = 5$

A10 (a) A set of points in $\mathbb{R}^n$ is collinear if all the points lie on the same line. By considering directed line segments, give a general method for determining whether a given set of three points is collinear.
(b) Determine whether the points $P(1, 2)$, $Q(4, 1)$, and $R(-5, 4)$ are collinear. Show how you decide.
(c) Determine whether the points $S(1, 0, 1)$, $T(3, -2, 3)$, and $U(-3, 4, -1)$ are collinear. Show how you decide.

Homework Problems

B1 Compute each of the following linear combinations and illustrate with a sketch.
(a) $\begin{bmatrix} -2 \\ 1 \end{bmatrix} - \begin{bmatrix} 1 \\ 3 \end{bmatrix}$
(b) $\begin{bmatrix} -1 \\ 2 \end{bmatrix} + \begin{bmatrix} -3 \\ 1 \end{bmatrix}$
(c) $-3\begin{bmatrix} -2 \\ 1 \end{bmatrix}$
(d) $-3\begin{bmatrix} 1 \\ 2 \end{bmatrix} - \begin{bmatrix} 3 \\ 1 \end{bmatrix}$

B2 Compute each of the following linear combinations.
(a) $\begin{bmatrix} 2 \\ 1 \end{bmatrix} + \begin{bmatrix} -3 \\ 2 \end{bmatrix}$
(b) $\begin{bmatrix} 4 \\ 1 \end{bmatrix} - \begin{bmatrix} 2 \\ 3 \end{bmatrix}$
(c) $2\begin{bmatrix} -3 \\ 1 \end{bmatrix}$
(d) $\frac{1}{2}\begin{bmatrix} 10 \\ 4 \end{bmatrix} - \frac{2}{3}\begin{bmatrix} 3 \\ 6 \end{bmatrix}$
(e) $\sqrt{2}\begin{bmatrix} 1 \\ 2 \end{bmatrix} + \sqrt{3}\begin{bmatrix} -1 \\ \sqrt{12} \end{bmatrix} - \begin{bmatrix} 1 \\ 2 \end{bmatrix}$

B3 Compute each of the following linear combinations.
(a) $\begin{bmatrix} 2 \\ 1 \\ -1 \end{bmatrix} - \begin{bmatrix} -1 \\ 2 \\ 1 \end{bmatrix}$
(b) $3\begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix} + 2\begin{bmatrix} -1 \\ 1 \\ 1 \end{bmatrix}$
(c) $4\begin{bmatrix} -2 \\ 1 \\ 3 \end{bmatrix}$
(d) $\frac{1}{2}\begin{bmatrix} 4 \\ -2 \\ 6 \end{bmatrix} + \frac{1}{3}\begin{bmatrix} 3 \\ 6 \\ -3 \end{bmatrix}$

B4 Let $\vec{v} = \begin{bmatrix} 2 \\ 1 \\ -1 \end{bmatrix}$ and $\vec{w} = \begin{bmatrix} 1 \\ 3 \\ 2 \end{bmatrix}$. Determine
(a) $2\vec{v} - 3\vec{w}$
(b) $-2(\vec{v} - \vec{w}) - 3\vec{w}$
(c) $\vec{u}$ such that $\vec{w} - 2\vec{u} = 3\vec{v}$
(d) $\vec{u}$ such that $2\vec{u} + 3\vec{w} = \vec{v}$

B5 Let $\vec{v} = \begin{bmatrix} -1 \\ 2 \\ 1 \end{bmatrix}$ and $\vec{w} = \begin{bmatrix} 2 \\ 0 \\ -3 \end{bmatrix}$. Determine
(a) $3\vec{v} - 2\vec{w}$
(b) $-\frac{1}{2}\vec{v} + \frac{3}{2}\vec{w}$
(c) $\vec{u}$ such that $\vec{v} + \vec{u} = \vec{v}$
(d) $\vec{u}$ such that $2\vec{u} - \vec{w} = 2\vec{v}$

B6 (a) Consider the points $P(1, 4, 1)$, $Q(4, 3, -1)$, $R(-1, 4, 2)$, and $S(8, 6, -5)$. Determine $\overrightarrow{PQ}$, $\overrightarrow{PR}$, $\overrightarrow{PS}$, $\overrightarrow{QR}$, and $\overrightarrow{SR}$, and verify that $\overrightarrow{PQ} + \overrightarrow{QR} = \overrightarrow{PR} = \overrightarrow{PS} + \overrightarrow{SR}$.
(b) Consider the points $P(3, -2, 1)$, $Q(2, 7, -3)$, $R(3, 1, 5)$, and $S(-2, 4, -1)$. Determine $\overrightarrow{PQ}$, $\overrightarrow{PR}$, $\overrightarrow{PS}$, $\overrightarrow{QR}$, and $\overrightarrow{SR}$, and verify that $\overrightarrow{PQ} + \overrightarrow{QR} = \overrightarrow{PR} = \overrightarrow{PS} + \overrightarrow{SR}$.

B7 Write a vector equation of the line passing through the given point with the given direction vector.
(a) $P(-3, 4)$, $\vec{d} = \begin{bmatrix} -1 \\ 2 \end{bmatrix}$
(b) $P(0, 0)$, $\vec{d} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$
(c) $P(2, 3, -1)$, $\vec{d} = \begin{bmatrix} -1 \\ 1 \\ 2 \end{bmatrix}$
(d) $P(3, 1, 2)$, $\vec{d} = \begin{bmatrix} -2 \\ 0 \\ 1 \end{bmatrix}$

B8 Write a vector equation for the line that passes through the given points.
(a) $P(3, 1)$, $Q(1, 2)$
(b) $P(1, -2, 1)$, $Q(0, 0, 0)$
(c) $P(2, -6, 3)$, $Q(-1, 5, 2)$
(d) $P(1, -1, \frac{1}{2})$, $Q(\frac{1}{4}, \frac{1}{3}, 1)$

B9 For each of the following lines in $\mathbb{R}^2$, determine a vector equation and parametric equations.
(a) $x_2 = -2x_1 + 3$
(b) $x_1 + 2x_2 = 3$

B10 (You will need the solution from Problem A10 (a) to answer this.)
(a) Determine whether the points $P(2, 1, 1)$, $Q(1, 2, 3)$, and $R(4, -1, -3)$ are collinear. Show how you decide.
(b) Determine whether the points $S(1, 1, 0)$, $T(6, 2, 1)$, and $U(-4, 0, -1)$ are collinear. Show how you decide.

Computer Problems

C1 Let $\vec{v}_1 = \begin{bmatrix} -3 \\ -36 \\ 2 \\ 7 \end{bmatrix}$, $\vec{v}_2 = \begin{bmatrix} 5 \\ -2 \\ 41 \\ 1 \end{bmatrix}$, $\vec{v}_3 = \begin{bmatrix} 13 \\ 7 \\ -6 \\ 9 \end{bmatrix}$, and $\vec{v}_4 = \begin{bmatrix} -2 \\ 4 \\ 1 \\ -8 \end{bmatrix}$.
Use computer software to evaluate each of the following.
(a) $17\vec{v}_1 + 5\vec{v}_2 - 3\vec{v}_3 + 42\vec{v}_4$
(b) $-1440\vec{v}_1 - 2341\vec{v}_2 - 919\vec{v}_3 + 669\vec{v}_4$
Conceptual Problems

D1 Let $\vec{v} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$ and $\vec{w} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$.
(a) Find real numbers $t_1$ and $t_2$ such that $t_1\vec{v} + t_2\vec{w} = \begin{bmatrix} 4 \\ -1 \end{bmatrix}$. Illustrate with a sketch.
(b) Find real numbers $t_1$ and $t_2$ such that $t_1\vec{v} + t_2\vec{w} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$ for any $x_1, x_2 \in \mathbb{R}$.
(c) Use your result in part (b) to find real numbers $t_1$ and $t_2$ such that $t_1\vec{v} + t_2\vec{w} = \begin{bmatrix} 3 \\ 2 \end{bmatrix}$.

D2 Let $P$, $Q$, and $R$ be points in $\mathbb{R}^2$ corresponding to vectors $\vec{p}$, $\vec{q}$, and $\vec{r}$ respectively.
(a) Explain in terms of directed line segments why
$$\overrightarrow{PQ} + \overrightarrow{QR} + \overrightarrow{RP} = \vec{0}$$
Illustrate with a sketch.
(b) Verify the equation of part (a) by expressing $\overrightarrow{PQ}$, $\overrightarrow{QR}$, and $\overrightarrow{RP}$ in terms of $\vec{p}$, $\vec{q}$, and $\vec{r}$.

D3 Let $\vec{p}$ and $\vec{d} \neq \vec{0}$ be vectors in $\mathbb{R}^2$. Prove that $\vec{x} = \vec{p} + t\vec{d}$, $t \in \mathbb{R}$, is a line in $\mathbb{R}^2$ passing through the origin if and only if $\vec{p}$ is a scalar multiple of $\vec{d}$.

D4 Let $\vec{x}$ and $\vec{y}$ be vectors in $\mathbb{R}^3$ and $t \in \mathbb{R}$ be a scalar. Prove that
$$t(\vec{x} + \vec{y}) = t\vec{x} + t\vec{y}$$
1.2 Vectors in $\mathbb{R}^n$

We now extend the ideas from the previous section to $n$-dimensional Euclidean
space $\mathbb{R}^n$.
Students sometimes do not see the point in discussing $n$-dimensional space because
it does not seem to correspond to any realistic physical geometry. But in a number
of instances, more than three dimensions are important. For example, to discuss
the motion of a particle, an engineer needs to specify its position (3 variables) and its
velocity (3 more variables); the engineer therefore has 6 variables. A scientist working
in string theory works with 11-dimensional space-time variables. An economist seeking
to model the Canadian economy uses many variables: one standard model has
more than 1500 variables. Of course, calculations in such huge models are carried
out by computer. Even so, understanding the ideas of geometry and linear algebra is
necessary to decide which calculations are required and what the results mean.

Addition and Scalar Multiplication of Vectors in $\mathbb{R}^n$

Definition $\mathbb{R}^n$ is the set of all vectors of the form $\begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}$, where $x_i \in \mathbb{R}$. Mathematically, we write
$$\mathbb{R}^n = \left\{ \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} \;\middle|\; x_1, \dots, x_n \in \mathbb{R} \right\}$$

Definition (Addition and Scalar Multiplication in $\mathbb{R}^n$)
If $\vec{x} = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}$, $\vec{y} = \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix}$, and $t \in \mathbb{R}$, then we define addition of vectors by
$$\vec{x} + \vec{y} = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} + \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix} = \begin{bmatrix} x_1 + y_1 \\ \vdots \\ x_n + y_n \end{bmatrix}$$
and the scalar multiplication of a vector by a factor of $t$ by
$$t\vec{x} = t\begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} tx_1 \\ \vdots \\ tx_n \end{bmatrix}$$

Theorem 1 For all $\vec{w}, \vec{x}, \vec{y} \in \mathbb{R}^n$ and $s, t \in \mathbb{R}$ we have

(1) $\vec{x} + \vec{y} \in \mathbb{R}^n$ (closed under addition)
(2) $\vec{x} + \vec{y} = \vec{y} + \vec{x}$ (addition is commutative)
(3) $(\vec{x} + \vec{y}) + \vec{w} = \vec{x} + (\vec{y} + \vec{w})$ (addition is associative)
(4) There exists a vector $\vec{0} \in \mathbb{R}^n$ such that $\vec{z} + \vec{0} = \vec{z}$ for all $\vec{z} \in \mathbb{R}^n$ (zero vector)
(5) For each $\vec{x} \in \mathbb{R}^n$ there exists a vector $-\vec{x} \in \mathbb{R}^n$ such that $\vec{x} + (-\vec{x}) = \vec{0}$ (additive inverses)
(6) $t\vec{x} \in \mathbb{R}^n$ (closed under scalar multiplication)
(7) $s(t\vec{x}) = (st)\vec{x}$ (scalar multiplication is associative)
(8) $(s + t)\vec{x} = s\vec{x} + t\vec{x}$ (a distributive law)
(9) $t(\vec{x} + \vec{y}) = t\vec{x} + t\vec{y}$ (another distributive law)
(10) $1\vec{x} = \vec{x}$ (scalar multiplicative identity)

Proof: We will prove properties (1) and (2) from Theorem 1 and leave the other proofs
to the reader.

For (1), by definition,
$$\vec{x} + \vec{y} = \begin{bmatrix} x_1 + y_1 \\ \vdots \\ x_n + y_n \end{bmatrix} \in \mathbb{R}^n$$
since $x_i + y_i \in \mathbb{R}$ for $1 \leq i \leq n$.
For (2),
$$\vec{x} + \vec{y} = \begin{bmatrix} x_1 + y_1 \\ \vdots \\ x_n + y_n \end{bmatrix} = \begin{bmatrix} y_1 + x_1 \\ \vdots \\ y_n + x_n \end{bmatrix} = \vec{y} + \vec{x} \qquad\blacksquare$$

EXERCISE 1 Prove properties (5), (6), and (7) from Theorem 1.

Observe that properties (2), (3), (7), (8), (9), and (10) from Theorem 1 refer only to
the operations of addition and scalar multiplication, while the other properties, (1), (4),
(5), and (6), are about the relationship between the operations and the set $\mathbb{R}^n$. These
facts should be clear in the proof of Theorem 1. Moreover, we see that the zero vector
of $\mathbb{R}^n$ is the vector $\vec{0} = \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix}$, and the additive inverse of $\vec{x}$ is $-\vec{x} = (-1)\vec{x}$. Note that the
zero vector satisfies the same properties as the zero vector in $\mathbb{R}^2$ and $\mathbb{R}^3$.
Students often find properties (1) and (6) a little strange. At first glance, it seems
obvious that the sum of two vectors in $\mathbb{R}^n$ or the scalar multiple of a vector in $\mathbb{R}^n$ is
another vector in $\mathbb{R}^n$. However, these properties are in fact extremely important. We
now look at subsets of $\mathbb{R}^n$ that have both of these properties.
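The algebraic properties in Theorem 1 can be spot-checked numerically. The helper functions and sample vectors below are arbitrary choices of ours, assuming integer components so the comparisons are exact:

```python
def add(x, y):
    # componentwise vector addition in R^n
    return [xi + yi for xi, yi in zip(x, y)]

def mul(t, x):
    # scalar multiplication in R^n
    return [t * xi for xi in x]

x = [1, -2, 0, 3]   # arbitrary vectors in R^4
y = [4, 0, -1, 2]
s, t = 2, -3
assert add(x, y) == add(y, x)                          # property (2)
assert mul(s, mul(t, x)) == mul(s * t, x)              # property (7)
assert mul(t, add(x, y)) == add(mul(t, x), mul(t, y))  # property (9)
```

A spot check is not a proof, of course; it merely illustrates the identities on particular vectors.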

Subspaces

Definition (Subspace) A non-empty subset $S$ of $\mathbb{R}^n$ is called a subspace of $\mathbb{R}^n$ if for all vectors $\vec{x}, \vec{y} \in S$ and
$t \in \mathbb{R}$:

(1) $\vec{x} + \vec{y} \in S$ (closed under addition)
(2) $t\vec{x} \in S$ (closed under scalar multiplication)

The definition requires that a subspace be non-empty, so a subspace always contains
at least one vector. In particular, it follows from (2) that if we let $t = 0$, then every
subspace of $\mathbb{R}^n$ contains the zero vector. This fact provides an easy method for
disqualifying any subsets that do not contain the zero vector as subspaces. For example,
a line in $\mathbb{R}^3$ cannot be a subspace if it does not pass through the origin. Thus, when
checking to determine if a set $S$ is non-empty, it makes sense to first check if $\vec{0} \in S$.
It is easy to see that the set $\{\vec{0}\}$ consisting of only the zero vector in $\mathbb{R}^n$ is a subspace
of $\mathbb{R}^n$; this is called the trivial subspace. Additionally, $\mathbb{R}^n$ is a subspace of itself. We
will see throughout the text that other subspaces arise naturally in linear algebra.

EXAMPLE 1
Show that $S = \left\{ \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \;\middle|\; x_1 - x_2 + x_3 = 0 \right\}$ is a subspace of $\mathbb{R}^3$.

Solution: We observe that, by definition, $S$ is a subset of $\mathbb{R}^3$ and that $\vec{0} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \in S$
since taking $x_1 = x_2 = 0$ and $x_3 = 0$ satisfies $x_1 - x_2 + x_3 = 0$.

Let $\vec{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}, \vec{y} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} \in S$. Then they must satisfy the condition of the set, so
$x_1 - x_2 + x_3 = 0$ and $y_1 - y_2 + y_3 = 0$.

To show that $S$ is closed under addition, we must show that $\vec{x} + \vec{y}$ satisfies the condition
of $S$. We have
$$\vec{x} + \vec{y} = \begin{bmatrix} x_1 + y_1 \\ x_2 + y_2 \\ x_3 + y_3 \end{bmatrix}$$
and
$$(x_1 + y_1) - (x_2 + y_2) + (x_3 + y_3) = (x_1 - x_2 + x_3) + (y_1 - y_2 + y_3) = 0 + 0 = 0$$
Hence, $\vec{x} + \vec{y} \in S$.
Similarly, for any $t \in \mathbb{R}$, we have $t\vec{x} = \begin{bmatrix} tx_1 \\ tx_2 \\ tx_3 \end{bmatrix}$ and
$$tx_1 - tx_2 + tx_3 = t(x_1 - x_2 + x_3) = t(0) = 0$$
So, $S$ is closed under scalar multiplication. Therefore, $S$ is a subspace of $\mathbb{R}^3$.

EXAMPLE 2
Show that $T = \left\{ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \;\middle|\; x_1x_2 = 0 \right\}$ is not a subspace of $\mathbb{R}^2$.

Solution: To show that $T$ is not a subspace, we just need to give one example showing
that $T$ does not satisfy the definition of a subspace. We will show that $T$ is not closed
under addition.
Observe that $\vec{x} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\vec{y} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$ are both in $T$, but $\vec{x} + \vec{y} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \notin T$, since $1(1) \neq 0$.

Thus, $T$ is not a subspace of $\mathbb{R}^2$.
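The closure checks of Example 1 can be replayed numerically for particular vectors. The sample vectors below are our own; they simply satisfy $x_1 - x_2 + x_3 = 0$:

```python
def in_S(x):
    # membership condition from Example 1: x1 - x2 + x3 = 0
    return x[0] - x[1] + x[2] == 0

x, y = [1, 3, 2], [5, 5, 0]
assert in_S(x) and in_S(y)
assert in_S([xi + yi for xi, yi in zip(x, y)])  # closed under addition here
assert in_S([7 * xi for xi in x])               # and under scalar multiplication
```

Again, checking particular vectors only illustrates the proof; the algebraic argument in Example 1 is what covers every choice of $\vec{x}$, $\vec{y}$, and $t$.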

EXERCISE 2
Show that $S = \left\{ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \;\middle|\; 2x_1 = x_2 \right\}$ is a subspace of $\mathbb{R}^2$ and that $T = \left\{ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \;\middle|\; x_1 + x_2 = 2 \right\}$ is
not a subspace of $\mathbb{R}^2$.

Spanning Sets and Linear Independence

One of the main ways that subspaces arise is as the set of all linear combinations of
some spanning set. We next present an easy theorem and a bit more vocabulary.

Theorem 2 If $\{\vec{v}_1, \dots, \vec{v}_k\}$ is a set of vectors in $\mathbb{R}^n$ and $S$ is the set of all possible linear combinations
of these vectors,
$$S = \{t_1\vec{v}_1 + \dots + t_k\vec{v}_k \mid t_1, \dots, t_k \in \mathbb{R}\}$$
then $S$ is a subspace of $\mathbb{R}^n$.

Proof: By properties (1) and (6) of Theorem 1, $t_1\vec{v}_1 + \dots + t_k\vec{v}_k \in \mathbb{R}^n$, so $S$ is a subset
of $\mathbb{R}^n$. Taking $t_i = 0$ for $1 \leq i \leq k$, we get $\vec{0} = 0\vec{v}_1 + \dots + 0\vec{v}_k \in S$, so $S$ is non-empty.
Let $\vec{x}, \vec{y} \in S$. Then, for some real numbers $s_i$ and $t_i$, $1 \leq i \leq k$, $\vec{x} = s_1\vec{v}_1 + \dots + s_k\vec{v}_k$
and $\vec{y} = t_1\vec{v}_1 + \dots + t_k\vec{v}_k$. It follows that
$$\vec{x} + \vec{y} = (s_1 + t_1)\vec{v}_1 + \dots + (s_k + t_k)\vec{v}_k$$
so $\vec{x} + \vec{y} \in S$ since $(s_i + t_i) \in \mathbb{R}$. Hence, $S$ is closed under addition.
Similarly, for all $t \in \mathbb{R}$,
$$t\vec{x} = (ts_1)\vec{v}_1 + \dots + (ts_k)\vec{v}_k$$
So, $S$ is closed under scalar multiplication. Therefore, $S$ is a subspace of $\mathbb{R}^n$. $\blacksquare$

Definition (Span, Spanning Set) If $S$ is the subspace of $\mathbb{R}^n$ consisting of all linear combinations of the vectors $\vec{v}_1, \dots, \vec{v}_k \in \mathbb{R}^n$, then $S$ is called the subspace spanned by the set of vectors $\mathcal{B} = \{\vec{v}_1, \dots, \vec{v}_k\}$, and
we say that the set $\mathcal{B}$ spans $S$. The set $\mathcal{B}$ is called a spanning set for the subspace $S$.
We denote $S$ by
$$S = \operatorname{Span}\{\vec{v}_1, \dots, \vec{v}_k\} = \operatorname{Span}\mathcal{B}$$

EXAMPLE 3 Let $\vec{v} \in \mathbb{R}^2$ with $\vec{v} \neq \vec{0}$ and consider the line $L$ with vector equation $\vec{x} = t\vec{v}$, $t \in \mathbb{R}$. Then
$L$ is the subspace spanned by $\{\vec{v}\}$, and $\{\vec{v}\}$ is a spanning set for $L$. We write $L = \operatorname{Span}\{\vec{v}\}$.
Similarly, for $\vec{v}_1, \vec{v}_2 \in \mathbb{R}^2$, the set $M$ with vector equation $\vec{x} = t_1\vec{v}_1 + t_2\vec{v}_2$ is a
subspace of $\mathbb{R}^2$ with spanning set $\{\vec{v}_1, \vec{v}_2\}$. That is, $M = \operatorname{Span}\{\vec{v}_1, \vec{v}_2\}$.

If $\vec{v} \in \mathbb{R}^2$ with $\vec{v} \neq \vec{0}$, then we can guarantee that $\operatorname{Span}\{\vec{v}\}$ represents a line in
$\mathbb{R}^2$ that passes through the origin. However, we see that the geometrical interpretation
of $\operatorname{Span}\{\vec{v}_1, \vec{v}_2\}$ depends on the choices of $\vec{v}_1$ and $\vec{v}_2$. We demonstrate this with some
examples.

EXAMPLE 4
The set $S_1 = \operatorname{Span}\left\{ \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 2 \\ 2 \end{bmatrix} \right\}$ has vector equation
$$\vec{x} = t_1\begin{bmatrix} 1 \\ 1 \end{bmatrix} + t_2\begin{bmatrix} 2 \\ 2 \end{bmatrix} = (t_1 + 2t_2)\begin{bmatrix} 1 \\ 1 \end{bmatrix} = t\begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad t \in \mathbb{R}$$
Hence, $S_1$ is a line in $\mathbb{R}^2$ that passes through the origin.

The set $S_2 = \operatorname{Span}\left\{ \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \begin{bmatrix} -2 \\ -2 \end{bmatrix} \right\}$ has vector equation
$$\vec{x} = t_1\begin{bmatrix} 1 \\ 1 \end{bmatrix} + t_2\begin{bmatrix} -2 \\ -2 \end{bmatrix} = (t_1 - 2t_2)\begin{bmatrix} 1 \\ 1 \end{bmatrix} = t\begin{bmatrix} 1 \\ 1 \end{bmatrix}$$
where $t = t_1 - 2t_2 \in \mathbb{R}$. Hence, $S_2$ represents the same line as $S_1$. That is, $S_2 = S_1$.

The set $S_3 = \operatorname{Span}\left\{ \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right\}$ has vector equation
$$\vec{x} = t_1\begin{bmatrix} 1 \\ 1 \end{bmatrix} + t_2\begin{bmatrix} 0 \\ 1 \end{bmatrix}$$
Hence, $S_3 = \mathbb{R}^2$. That is, $S_3$ spans the entire two-dimensional plane.

From these examples, we observe that $\{\vec{v}_1, \vec{v}_2\}$ is a spanning set for $\mathbb{R}^2$ if and only
if neither $\vec{v}_1$ nor $\vec{v}_2$ is a scalar multiple of the other. This also means that neither vector
can be the zero vector. We now look at this in $\mathbb{R}^3$.

EXAMPLE 5
The set $S_1 = \operatorname{Span}\left\{ \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} \right\}$ has vector equation
$$\vec{x} = t_1\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} + t_2\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}, \quad t_1, t_2 \in \mathbb{R}$$
Hence, $S_1$ is a plane through the origin in $\mathbb{R}^3$.

The set $S_2 = \operatorname{Span}\left\{ \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} \right\}$ has vector equation
$$\vec{x} = t_1\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} + t_2\begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix} + t_3\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}, \quad t_1, t_2, t_3 \in \mathbb{R}$$
which can be written as
$$\vec{x} = t_1\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} + t_2\left(\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}\right) + t_3\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} = (t_1 + t_2)\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} + (t_2 + t_3)\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}$$
So, $S_2 = \operatorname{Span}\left\{ \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} \right\}$.

We extend this to the general case in $\mathbb{R}^n$.

Theorem 3 Let $\vec{v}_1, \dots, \vec{v}_k$ be vectors in $\mathbb{R}^n$. If $\vec{v}_k$ can be written as a linear combination of
$\vec{v}_1, \dots, \vec{v}_{k-1}$, then
$$\operatorname{Span}\{\vec{v}_1, \dots, \vec{v}_k\} = \operatorname{Span}\{\vec{v}_1, \dots, \vec{v}_{k-1}\}$$

Proof: We are assuming that there exist $t_1, \dots, t_{k-1} \in \mathbb{R}$ such that
$$\vec{v}_k = t_1\vec{v}_1 + \dots + t_{k-1}\vec{v}_{k-1}$$
Let $\vec{x} \in \operatorname{Span}\{\vec{v}_1, \dots, \vec{v}_k\}$. Then there exist $s_1, \dots, s_k \in \mathbb{R}$ such that
$$\vec{x} = s_1\vec{v}_1 + \dots + s_{k-1}\vec{v}_{k-1} + s_k\vec{v}_k$$
$$= s_1\vec{v}_1 + \dots + s_{k-1}\vec{v}_{k-1} + s_k(t_1\vec{v}_1 + \dots + t_{k-1}\vec{v}_{k-1})$$
$$= (s_1 + s_kt_1)\vec{v}_1 + \dots + (s_{k-1} + s_kt_{k-1})\vec{v}_{k-1}$$
Thus, $\vec{x} \in \operatorname{Span}\{\vec{v}_1, \dots, \vec{v}_{k-1}\}$. Hence, $\operatorname{Span}\{\vec{v}_1, \dots, \vec{v}_k\} \subseteq \operatorname{Span}\{\vec{v}_1, \dots, \vec{v}_{k-1}\}$. Clearly,
we have $\operatorname{Span}\{\vec{v}_1, \dots, \vec{v}_{k-1}\} \subseteq \operatorname{Span}\{\vec{v}_1, \dots, \vec{v}_k\}$, and so
$$\operatorname{Span}\{\vec{v}_1, \dots, \vec{v}_k\} = \operatorname{Span}\{\vec{v}_1, \dots, \vec{v}_{k-1}\}$$
as required. $\blacksquare$

In fact, any vector which can be written as a linear combination of the other vectors
in the set can be removed without changing the spanned set. It is important in linear
algebra to identify when a spanning set can be simplified by removing a vector that can
be written as a linear combination of the other vectors. We will call such sets linearly
dependent. If a spanning set is as simple as possible, then we will call the set linearly
independent. To identify whether a set is linearly dependent or linearly independent,
we require a mathematical definition.

Assume that the set $\{\vec{v}_1, \dots, \vec{v}_k\}$ is linearly dependent. Then one of the vectors, say
$\vec{v}_i$, is equal to a linear combination of some (or all) of the other vectors. Hence, we can
find scalars $t_1, \dots, t_k \in \mathbb{R}$ such that
$$t_1\vec{v}_1 + \dots + t_k\vec{v}_k = \vec{0}$$
where $t_i \neq 0$. Thus, a set is linearly dependent if the equation
$$t_1\vec{v}_1 + \dots + t_k\vec{v}_k = \vec{0}$$
has a solution where at least one of the coefficients is non-zero. On the other hand, if
the set is linearly independent, then the only solution to this equation must be when all
the coefficients are $0$. For example, if any coefficient is non-zero, say $t_i \neq 0$, then we
can write
$$\vec{v}_i = -\frac{t_1}{t_i}\vec{v}_1 - \dots - \frac{t_{i-1}}{t_i}\vec{v}_{i-1} - \frac{t_{i+1}}{t_i}\vec{v}_{i+1} - \dots - \frac{t_k}{t_i}\vec{v}_k$$
Thus, $\vec{v}_i \in \operatorname{Span}\{\vec{v}_1, \dots, \vec{v}_{i-1}, \vec{v}_{i+1}, \dots, \vec{v}_k\}$, and so the set can be simplified by using
Theorem 3.
We make this our mathematical definition.

Definition (Linearly Dependent, Linearly Independent) A set of vectors $\{\vec{v}_1, \dots, \vec{v}_k\}$ is said to be linearly dependent if there exist coefficients
$t_1, \dots, t_k$, not all zero, such that
$$\vec{0} = t_1\vec{v}_1 + \dots + t_k\vec{v}_k$$

A set of vectors $\{\vec{v}_1, \dots, \vec{v}_k\}$ is said to be linearly independent if the only solution to
$$\vec{0} = t_1\vec{v}_1 + \dots + t_k\vec{v}_k$$
is $t_1 = t_2 = \dots = t_k = 0$. This is called the trivial solution.

Theorem 4 If a set of vectors $\{\vec{v}_1, \dots, \vec{v}_k\}$ contains the zero vector, then it is linearly dependent.

Proof: Assume $\vec{v}_i = \vec{0}$. Then we have
$$\vec{0} = 0\vec{v}_1 + \dots + 0\vec{v}_{i-1} + 1\vec{v}_i + 0\vec{v}_{i+1} + \dots + 0\vec{v}_k$$
Hence, the equation $\vec{0} = t_1\vec{v}_1 + \dots + t_k\vec{v}_k$ has a solution with one coefficient, $t_i$, that is
non-zero. So, by definition, the set is linearly dependent. $\blacksquare$

EXAMPLE 6
Show that the set $\left\{ \begin{bmatrix} 7 \\ -14 \\ 6 \end{bmatrix}, \begin{bmatrix} -10 \\ 15 \\ 15/14 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 3 \end{bmatrix} \right\}$ is linearly dependent.

Solution: We consider
$$t_1\begin{bmatrix} 7 \\ -14 \\ 6 \end{bmatrix} + t_2\begin{bmatrix} -10 \\ 15 \\ 15/14 \end{bmatrix} + t_3\begin{bmatrix} -1 \\ 0 \\ 3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$

EXAMPLE 6 (continued) Using operations on vectors, we get
$$\begin{bmatrix} 7t_1 - 10t_2 - t_3 \\ -14t_1 + 15t_2 \\ 6t_1 + \frac{15}{14}t_2 + 3t_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$

Since vectors are equal only if their corresponding entries are equal, this gives us three
equations in three unknowns:
$$7t_1 - 10t_2 - t_3 = 0$$
$$-14t_1 + 15t_2 = 0$$
$$6t_1 + \frac{15}{14}t_2 + 3t_3 = 0$$

Solving using substitution and elimination, we find that there are in fact infinitely many
possible solutions. One is $t_1 = \frac{3}{7}$, $t_2 = \frac{2}{5}$, $t_3 = -1$. Hence, the set is linearly dependent.

EXERCISE 3
Determine whether $\left\{ \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} \right\}$ is linearly dependent or linearly independent.

Remark

Observe that determining whether a set $\{\vec{v}_1, \dots, \vec{v}_k\}$ in $\mathbb{R}^n$ is linearly dependent or
linearly independent requires determining the solutions of the equation $t_1\vec{v}_1 + \dots + t_k\vec{v}_k = \vec{0}$.
However, this equation actually represents $n$ equations (one for each entry of the
vectors) in $k$ unknowns $t_1, \dots, t_k$. In the next chapter, we will look at how to efficiently
solve such systems of equations.
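Reading the coefficient columns off the system in Example 6 gives the three vectors, and exact rational arithmetic confirms the non-trivial solution found there:

```python
from fractions import Fraction as F

v1 = [F(7), F(-14), F(6)]
v2 = [F(-10), F(15), F(15, 14)]
v3 = [F(-1), F(0), F(3)]
t1, t2, t3 = F(3, 7), F(2, 5), F(-1)   # the non-trivial solution from Example 6
combo = [t1 * a + t2 * b + t3 * c for a, b, c in zip(v1, v2, v3)]
assert combo == [0, 0, 0]  # a non-trivial combination gives the zero vector,
                           # so the set is linearly dependent
```

Exact fractions (rather than floats) matter here because the entry $\frac{15}{14}$ would otherwise introduce rounding error into the comparison with zero.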

What we have derived above is that the simplest spanning set for a subspace $S$ is
one that is linearly independent. Hence, we make the following definition.

Definition (Basis) If $\{\vec{v}_1, \dots, \vec{v}_k\}$ is a spanning set for a subspace $S$ of $\mathbb{R}^n$ and $\{\vec{v}_1, \dots, \vec{v}_k\}$ is linearly
independent, then $\{\vec{v}_1, \dots, \vec{v}_k\}$ is called a basis for $S$.

EXAMPLE 7
Let $\vec{v}_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$, $\vec{v}_2 = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}$, $\vec{v}_3 = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}$, and let $S$ be the subspace of $\mathbb{R}^3$ given by
$S = \operatorname{Span}\{\vec{v}_1, \vec{v}_2, \vec{v}_3\}$. Find a basis for $S$.

Solution: Observe that $\{\vec{v}_1, \vec{v}_2, \vec{v}_3\}$ is linearly dependent, since
$$\vec{v}_3 = \vec{v}_1 + \vec{v}_2$$
In particular, we can write $\vec{v}_3$ as a linear combination of $\vec{v}_1$ and $\vec{v}_2$. Hence, by
Theorem 3,
$$S = \operatorname{Span}\{\vec{v}_1, \vec{v}_2, \vec{v}_3\} = \operatorname{Span}\{\vec{v}_1, \vec{v}_2\}$$
Moreover, observe that the only solution to
$$t_1\vec{v}_1 + t_2\vec{v}_2 = \vec{0}$$
is $t_1 = t_2 = 0$, since neither $\vec{v}_1$ nor $\vec{v}_2$ is a scalar multiple of the other. Hence, $\{\vec{v}_1, \vec{v}_2\}$ is
linearly independent.
Therefore, $\{\vec{v}_1, \vec{v}_2\}$ is linearly independent and spans $S$, and so it is a basis for $S$.

Bases (the plural of basis) will be extremely important throughout the remainder
of the book. At this point, however, we just define the following very important basis.

Definition (Standard Basis for $\mathbb{R}^n$) In $\mathbb{R}^n$, let $\vec{e}_i$ represent the vector whose $i$-th component is $1$ and all other components
are $0$. The set $\{\vec{e}_1, \dots, \vec{e}_n\}$ is called the standard basis for $\mathbb{R}^n$.

Observe that this definition matches that of the standard basis for $\mathbb{R}^2$ and $\mathbb{R}^3$ given
in Section 1.1.

EXAMPLE 8
The standard basis for $\mathbb{R}^3$ is $\{\vec{e}_1, \vec{e}_2, \vec{e}_3\} = \left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \right\}$.

It is linearly independent since the only solution to
$$t_1\vec{e}_1 + t_2\vec{e}_2 + t_3\vec{e}_3 = \vec{0}$$
is $t_1 = t_2 = t_3 = 0$. Moreover, it is a spanning set for $\mathbb{R}^3$ since every vector $\vec{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \in \mathbb{R}^3$ can be written as a linear combination of the basis vectors. In particular,
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = x_1\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + x_2\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} + x_3\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$$

Remark

Compare the result of Example 8 with the meaning of point notation $P(a, b, c)$. When
we say $P(a, b, c)$, we mean the point $P$ having $a$ amount in the $x$-direction, $b$ amount
in the $y$-direction, and $c$ amount in the $z$-direction. So observe that the standard basis
vectors represent our usual coordinate axes.
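The standard basis is trivial to build programmatically, and the unique expansion of Example 8 is just a componentwise recombination; the helper name and sample vector are our own:

```python
def standard_basis(n):
    # e_i has a 1 in component i and 0 everywhere else
    return [[1 if j == i else 0 for j in range(n)] for i in range(n)]

x = [4, -2, 7]                 # an arbitrary vector in R^3
e = standard_basis(3)
# x = x1*e1 + x2*e2 + x3*e3, expanded componentwise
recombined = [sum(x[i] * e[i][j] for i in range(3)) for j in range(3)]
assert recombined == x
```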

EXERCISE 4 State the standard basis for $\mathbb{R}^5$. Prove that it is linearly independent and show that it is
a spanning set for $\mathbb{R}^5$.

Surfaces in Higher Dimensions


We can now extend our geometrical concepts of lines and planes to R^n for n > 3. To match what we did in R^2 and R^3, we make the following definitions.

Definition (Line in R^n)
Let p, v ∈ R^n with v ≠ 0. Then we call the set with vector equation x = p + t1 v, t1 ∈ R, a line in R^n that passes through p.

Definition (Plane in R^n)
Let v1, v2, p ∈ R^n, with {v1, v2} being a linearly independent set. Then the set with vector equation x = p + t1 v1 + t2 v2, t1, t2 ∈ R, is called a plane in R^n that passes through p.

Definition (Hyperplane in R^n)
Let v1, ..., v_{n-1}, p ∈ R^n, with {v1, ..., v_{n-1}} being linearly independent. Then the set with vector equation x = p + t1 v1 + ··· + t_{n-1} v_{n-1}, t_i ∈ R, is called a hyperplane in R^n that passes through p.

EXAMPLE 9
A set Span{v1, v2, v3} of three vectors in R^4 is a hyperplane in R^4 provided that {v1, v2, v3} is linearly independent.

EXAMPLE 10
Show that the set Span{v1, v2, v3} defines a plane in R^3.

Solution: Observe that the set is linearly dependent, since v3 can be written as a linear combination of v1 and v2. Hence, the simplified vector equation of the set spanned by these vectors is x = t1 v1 + t2 v2, t1, t2 ∈ R. Since the set {v1, v2} is linearly independent, this is the vector equation of a plane passing through the origin in R^3.
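Classifying a span as a line, plane, or hyperplane amounts to counting the linearly independent vectors in the spanning set, i.e., the rank of the matrix whose columns are those vectors. A sketch with made-up vectors, assuming NumPy:

```python
import numpy as np

def span_type(vectors):
    """Classify Span{vectors} in R^n by the rank of the column matrix."""
    A = np.column_stack(vectors)
    n = A.shape[0]                       # dimension of the ambient space R^n
    r = np.linalg.matrix_rank(A)         # number of independent vectors
    if r == 1:
        return "line"
    if r == 2:
        return "plane"
    if r == n - 1:
        return "hyperplane"
    return f"{r}-dimensional subspace"

# Three independent vectors in R^4 span a hyperplane (rank 3 = 4 - 1).
u = [np.array([1., 0., 0., 1.]), np.array([0., 1., 0., -2.]),
     np.array([0., 0., 1., -1.])]
print(span_type(u))    # hyperplane

# Three dependent vectors in R^3 (w3 = w1 + w2) span only a plane.
w = [np.array([1., 2., 1.]), np.array([0., 1., -1.]),
     np.array([1., 3., 0.])]
print(span_type(w))    # plane
```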

PROBLEMS 1.2
Practice Problems

A1 Compute each of the following linear combinations.

A2 For each of the following sets, show that the set is or is not a subspace of the appropriate R^n.
(a) {[x1, x2, x3]^T ∈ R^3 | x1 − x2 = x3}
(b) {[x1, x2, x3]^T ∈ R^3 | x1 = x3}
(c) {[x1, x2]^T ∈ R^2 | x1 + x2 = 0}
(d) {[x1, x2, x3]^T ∈ R^3 | x1 x2 = x3}

A3 Determine whether the following sets are subspaces of R^4. Explain.
(a) {x ∈ R^4 | x1 + x2 + x3 + x4 = 0}
(b) {v1}, where v1 ≠ 0
(c) {x ∈ R^4 | x1 + 2x3 = 5, x1 − 3x4 = 0}
(d) {x ∈ R^4 | x1 = x3 x4, x2 − x4 = 0}
(e) {x ∈ R^4 | 2x1 = 3x4, x2 − 5x3 = 0}
(f) {x ∈ R^4 | x1 + x2 = −x4, x3 = 2}

A4 Show that each of the following sets is linearly dependent. Do so by writing a non-trivial linear combination of the vectors that equals the zero vector.

A5 Determine if the following sets represent lines, planes, or hyperplanes in R^4. Give a basis for each.

A6 Let p, d ∈ R^n. Prove that x = p + td, t ∈ R, is a subspace of R^n if and only if p is a scalar multiple of d.

A7 Suppose that B = {v1, ..., vk} is a linearly independent set in R^n. Prove that any non-empty subset of B is linearly independent.
Homework Problems

B1 Compute each of the following linear combinations.

B2 For each of the following sets, show that the set is or is not a subspace of the appropriate R^n.
(a) {[x1, x2]^T ∈ R^2 | x1 + x2 = 1}
(b) {[x1, x2]^T ∈ R^2 | x1 + x2 ≥ 0}

B3 Determine whether the following sets are subspaces of R^4. Explain.
(a) {x ∈ R^4 | 2x1 − 5x4 = 7, 3x2 = 2x4}
(b) {x ∈ R^4 | 2x1² = x2² + x3²}
(c) {x ∈ R^4 | x1 + x2 + x4 = 0, 3x3 = −x4}
(d) {x ∈ R^4 | x1 + 2x3 = 0, x1 − 3x4 = 0}

B4 Show that each of the following sets is linearly dependent by writing a non-trivial linear combination of the vectors that equals the zero vector.

B5 Determine if the following sets represent lines, planes, or hyperplanes in R^4. Give a basis for each.

Computer Problems

C1 Let v1, v2, v3, v4 be the four given vectors in R^4. Use computer software to evaluate each of the following.
(a) 3v1 − 2v2 + 5v3 − 3v4
(b) 2.4v1 − 1.3v2 + √2 v3 − √3 v4

Conceptual Problems

D1 Prove property (8) from Theorem 1.

D2 Prove property (9) from Theorem 1.

D3 Let U and V be subspaces of R^n.
(a) Prove that the intersection of U and V is a subspace of R^n.
(b) Give an example to show that the union of two subspaces of R^n does not have to be a subspace of R^n.
(c) Define U + V = {u + v | u ∈ U, v ∈ V}. Prove that U + V is a subspace of R^n.

D4 Pick vectors p, v1, v2, and v3 in R^4 such that the vector equation x = p + t1 v1 + t2 v2 + t3 v3
(a) Is a hyperplane not passing through the origin
(b) Is a plane passing through the origin
(c) Is the point (1, 3, 1, 1)
(d) Is a line passing through the origin

D5 Let B = {v1, ..., vk} be a linearly independent set of vectors in R^n. Prove that every vector in Span B can be written as a unique linear combination of the vectors in B.

D6 Let B = {v1, ..., vk} be a linearly independent set of vectors in R^n and let x ∉ Span B. Prove that {v1, ..., vk, x} is linearly independent.

D7 Let v1, v2 ∈ R^n and let s and t be fixed real numbers with t ≠ 0. Prove that
Span{v1, v2} = Span{v1, sv1 + tv2}

D8 Let v1, v2, v3 ∈ R^n. State whether each of the following statements is true or false. If the statement is true, explain briefly. If the statement is false, give a counterexample.
(a) If v2 = tv1 for some real number t, then {v1, v2} is linearly dependent.
(b) If v1 is not a scalar multiple of v2, then {v1, v2} is linearly independent.
(c) If {v1, v2, v3} is linearly dependent, then v1 can be written as a linear combination of v2 and v3.
(d) If v1 can be written as a linear combination of v2 and v3, then {v1, v2, v3} is linearly dependent.
(e) {v1} is not a subspace of R^n.
(f) Span{v1} is a subspace of R^n.

1.3 Length and Dot Products


In many physical applications, we are given measurements in terms of angles and
magnitudes. We must convert this data into vectors so that we can apply the tools of
linear algebra to solve problems. For example, we may need to find a vector represent­
ing the path (and speed) of a plane flying northwest at 1300 km/h. To do this, we need
to identify the length of a vector and the angle between two vectors. In this section, we
see how we can calculate both of these quantities with the dot product operator.

Length and Dot Products in R^2 and R^3
The length of a vector in R^2 is defined by the usual distance formula (that is, Pythagoras' Theorem), as in Figure 1.3.10.

Definition (Length in R^2)
If x = [x1, x2]^T ∈ R^2, its length is defined to be

||x|| = √(x1² + x2²)

Figure 1.3.10  Length in R^2.

For vectors in R^3, the formula for the length can be obtained from a two-step calculation using the formula for R^2, as shown in Figure 1.3.11. Consider the point X(x1, x2, x3) and let P be the point P(x1, x2, 0). Observe that OPX is a right triangle, so that ||OX||² = ||OP||² + ||PX||² = (x1² + x2²) + x3².

Definition (Length in R^3)
If x = [x1, x2, x3]^T ∈ R^3, its length is defined to be

||x|| = √(x1² + x2² + x3²)

One immediate application of this formula is to calculate the distance between two
points. In particular, if we have points P and Q, then the distance between them is the
length of the directed line segment PQ.

Figure 1.3.11  Length in R^3.

EXAMPLE 1
Find the distance between the points P(−1, 3, 4) and Q(2, −5, 1) in R^3.

Solution: We have PQ = [2 − (−1), −5 − 3, 1 − 4]^T = [3, −8, −3]^T. Hence, the distance between the two points is

||PQ|| = √(3² + (−8)² + (−3)²) = √82
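The computation in Example 1 is easy to reproduce numerically; a brief sketch assuming NumPy:

```python
import numpy as np

P = np.array([-1.0, 3.0, 4.0])
Q = np.array([2.0, -5.0, 1.0])

PQ = Q - P                       # directed line segment from P to Q
dist = np.linalg.norm(PQ)        # length formula sqrt(x1^2 + x2^2 + x3^2)
print(PQ)                        # [ 3. -8. -3.]
print(dist)                      # 9.0554... = sqrt(82)
```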

Angles and the Dot Product  Determining the angle between two vectors in R^2 leads to the important idea of the dot product of two vectors. Consider Figure 1.3.12. The Law of Cosines gives

||PQ||² = ||OP||² + ||OQ||² − 2 ||OP|| ||OQ|| cos θ    (1.1)

Substituting OP = p = [p1, p2]^T, OQ = q = [q1, q2]^T, and PQ = p − q = [p1 − q1, p2 − q2]^T into (1.1) and simplifying gives

p1 q1 + p2 q2 = ||p|| ||q|| cos θ

For vectors in R^3, a similar calculation gives

p1 q1 + p2 q2 + p3 q3 = ||p|| ||q|| cos θ

Observe that if p = q, then θ = 0 radians, and we get

p1² + p2² + p3² = ||p||²

Figure 1.3.12  ||PQ||² = ||OP||² + ||OQ||² − 2 ||OP|| ||OQ|| cos θ.

This matches our definition of length in R^3 above. Thus, we see that the formula on the left-hand side of these equations defines both the angle between vectors and the length of a vector.

Thus, we define the dot product of two vectors x = [x1, x2]^T and y = [y1, y2]^T in R^2 by

x · y = x1 y1 + x2 y2

Similarly, the dot product of vectors x = [x1, x2, x3]^T and y = [y1, y2, y3]^T in R^3 is defined by

x · y = x1 y1 + x2 y2 + x3 y3

Thus, in R^2 and R^3, the cosine of the angle between vectors x and y can be calculated by means of the formula

cos θ = (x · y) / (||x|| ||y||)    (1.2)

where θ is always chosen to satisfy 0 ≤ θ ≤ π.

EXAMPLE 2
Find the angle in R^3 between v = [1, 4, −2]^T and w = [3, −1, 4]^T.

Solution: We have

v · w = 1(3) + 4(−1) + (−2)(4) = −9
||v|| = √(1 + 16 + 4) = √21
||w|| = √(9 + 1 + 16) = √26

Hence,

cos θ = −9 / (√21 √26) ≈ −0.38516

So θ ≈ 1.966 radians. (Note that since cos θ is negative, θ is between π/2 and π.)
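Formula (1.2) translates directly into code. A sketch of Example 2's computation, assuming NumPy:

```python
import numpy as np

v = np.array([1.0, 4.0, -2.0])
w = np.array([3.0, -1.0, 4.0])

# cos(theta) = (v . w) / (||v|| ||w||), formula (1.2)
cos_theta = v @ w / (np.linalg.norm(v) * np.linalg.norm(w))
theta = np.arccos(cos_theta)     # arccos always returns a value in [0, pi]
print(round(cos_theta, 5))       # -0.38516
print(round(theta, 3))           # 1.966 (radians)
```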