
Linear Algebra

2009

Contents

Chapter 1. Systems of Linear Equations and Matrices
§1.1 Introduction to Systems of Linear Equations

§1.2 Gaussian Elimination

§1.3 Matrices and Matrix Operations

§1.4 Inverses; Rules of Matrix Arithmetic

§1.5 Elementary Matrices and a Method for Finding A−1

§1.6 Further Results on Systems of Equations and Invertibility

Chapter 2. Determinants
§2.1 Combinatorial Approach to Determinants

§2.2 Evaluating Determinants by Row Reduction

§2.3 Properties of the Determinant Function

§2.4 Determinants by Cofactor Expansion

Chapter 3. General Vector Spaces
§3.1 Euclidean n-Space

§3.2 General Vector Spaces

§3.3 Subspaces

§3.4 Linear Independence

§3.5 Basis and Dimension

§3.6 Row Space and Column Space and Rank


Chapter 4. Inner Product Spaces
§4.1 Inner Products

§4.2 Length and Angle in Inner Product Spaces

§4.3 Orthonormal Bases; Gram-Schmidt Process

§4.4 Coordinates; Change of Basis

Chapter 5. Linear Transformations
§5.1 Introduction to Linear Transformations

§5.2 Properties of Linear Transformations; Kernel and Range

§5.3 Linear Transformations from Rn to Rm

§5.4 Matrices of Linear Transformations

Chapter 6. Eigenvalues and Eigenvectors
§6.1 Eigenvalues and Eigenvectors

§6.2 Diagonalization

§6.3 Orthogonal Diagonalization; Symmetric Matrices


Chapter One. Systems of Linear Equations and Matrices

§1.1 Introduction to Systems of Linear Equations

In this section we introduce basic terminology and discuss a method for solving systems of linear equations.

Definition 1.1. If a1, a2, ..., an and b are real constants, then an equation of the form

    a1 x1 + a2 x2 + · · · + an xn = b

is called a linear equation in the n unknown variables x1, x2, ..., xn. A solution of this linear equation is a sequence of n numbers s1, s2, ..., sn such that the equation is satisfied when we substitute x1 = s1, x2 = s2, ..., xn = sn, that is,

    a1 s1 + a2 s2 + · · · + an sn = b.

The set of all solutions of the equation is called its solution set or the general solution of the equation.

A finite set of linear equations in the variables x1, x2, ..., xn,

    a11 x1 + a12 x2 + · · · + a1n xn = b1
    a21 x1 + a22 x2 + · · · + a2n xn = b2
    ...
    am1 x1 + am2 x2 + · · · + amn xn = bm,

is called a system of linear equations or a linear system. A sequence of numbers s1, s2, ..., sn such that x1 = s1, x2 = s2, ..., xn = sn satisfies every equation of the system is called a solution of the system. A system of linear equations is said to be inconsistent if it has no solutions; if there is at least one solution, it is called consistent.

For the linear system above, the rectangular array

    [ a11  a12  · · ·  a1n  b1 ]
    [ a21  a22  · · ·  a2n  b2 ]
    [  .    .           .   .  ]
    [ am1  am2  · · ·  amn  bm ]

is called the augmented matrix for the system.

Remark 1.1. The elementary row operations for the augmented matrix of a linear system are

1. multiplying a row (a horizontal line of the matrix) through by a nonzero constant,
2. interchanging two rows,
3. adding a multiple of one row to another row.

The following example illustrates how the elementary row operations can be used to solve systems of linear equations.

Example 1.1. Solve the system of linear equations

    x +  y + 2z = 9
    2x + 4y − 3z = 1
    3x + 6y − 5z = 0.

Solution. Starting with its augmented matrix:

    [ 1  1   2    9 ]
    [ 2  4  −3    1 ]   (−2 × 1st row + 2nd row) ⇒
    [ 3  6  −5    0 ]

    [ 1  1   2    9 ]
    [ 0  2  −7  −17 ]   (−3 × 1st row + 3rd row) ⇒
    [ 3  6  −5    0 ]

    [ 1  1    2    9 ]
    [ 0  2   −7  −17 ]   (1/2 × 2nd row) ⇒
    [ 0  3  −11  −27 ]

    [ 1  1     2      9 ]
    [ 0  1  −7/2  −17/2 ]   (−3 × 2nd row + 3rd row) ⇒
    [ 0  3   −11    −27 ]

    [ 1  1     2      9 ]
    [ 0  1  −7/2  −17/2 ]   (−2 × 3rd row) ⇒
    [ 0  0  −1/2   −3/2 ]

    [ 1  1     2      9 ]
    [ 0  1  −7/2  −17/2 ]   (−1 × 2nd row + 1st row) ⇒
    [ 0  0     1      3 ]

    [ 1  0  11/2   35/2 ]
    [ 0  1  −7/2  −17/2 ]   (−11/2 × 3rd row + 1st row and 7/2 × 3rd row + 2nd row) ⇒
    [ 0  0     1      3 ]

    [ 1  0  0  1 ]
    [ 0  1  0  2 ]
    [ 0  0  1  3 ]

The solution is x = 1, y = 2, z = 3.
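The elimination in Example 1.1 can be replayed mechanically. The following Python sketch is our own illustration (not part of the original notes; the helper names `add_multiple` and `scale` are our own); it applies the same elementary row operations with exact rational arithmetic:

```python
from fractions import Fraction

# Augmented matrix of Example 1.1, with exact rational entries.
M = [[Fraction(v) for v in row] for row in
     [[1, 1,  2, 9],
      [2, 4, -3, 1],
      [3, 6, -5, 0]]]

def add_multiple(M, src, dst, c):
    """Elementary row operation 3: add c times row src to row dst."""
    M[dst] = [a + c * b for a, b in zip(M[dst], M[src])]

def scale(M, i, c):
    """Elementary row operation 1: multiply row i by a nonzero constant c."""
    M[i] = [c * a for a in M[i]]

# Reproduce the steps of Example 1.1 in order.
add_multiple(M, 0, 1, Fraction(-2))      # -2 x 1st row + 2nd row
add_multiple(M, 0, 2, Fraction(-3))      # -3 x 1st row + 3rd row
scale(M, 1, Fraction(1, 2))              # 1/2 x 2nd row
add_multiple(M, 1, 2, Fraction(-3))      # -3 x 2nd row + 3rd row
scale(M, 2, Fraction(-2))                # -2 x 3rd row
add_multiple(M, 1, 0, Fraction(-1))      # -1 x 2nd row + 1st row
add_multiple(M, 2, 0, Fraction(-11, 2))  # -11/2 x 3rd row + 1st row
add_multiple(M, 2, 1, Fraction(7, 2))    # 7/2 x 3rd row + 2nd row

solution = [row[3] for row in M]         # last column: x = 1, y = 2, z = 3
```

Running it leaves the identity matrix on the left and the solution (1, 2, 3) in the last column, exactly as in the worked example.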

♣ Exercises 1.1

1. Which of the following are linear equations in x1, x2 and x3?
(a) x1 + 2x1x2 + x3 = 2. (b) x1 + x2 + x3 = sin k (k is a constant). (c) x1 − 3x2 + 2x3^(1/2) = 4. (d) x1 = 2x3 − x2 + 7. (e) x1 + x2^(−1) − 3x3 = 5. (f) x1 = x3.

2. Find the augmented matrix for each of the following systems of linear equations:

    (a) x1 − 2x2 = 0          (b) x1 + x3 = 0
        3x1 + 4x2 = −1            −x1 + 2x2 − x3 = 3
        2x1 − x2 = 3

    (c) x1 + x3 = 1           (d) x1 = 1
        2x2 − x3 + x5 = 2         x2 = 2
        2x3 + x4 = 3

3. Find the solution set of each of the following:
(a) 6x − 7y = 3. (b) 2x1 + 4x2 − 7x3 = 8. (c) −3x1 + 4x2 − 7x3 + 8x4 = 5. (d) 2v − w + 3x + y − 4z = 0.

4. Find a system of linear equations corresponding to each of the following augmented matrices:

    (a) [ 1  0 −1 2 ]   (b) [ 1  0 0 ]   (c) [ 1 2 3 4 5 ]   (d) [ 1 0 0 0 1 ]
        [ 2  1  1 3 ]       [ 0  1 0 ]       [ 5 4 3 2 1 ]       [ 0 1 0 0 2 ]
        [ 0 −1  2 4 ]       [ 1 −1 1 ]                           [ 0 0 1 0 3 ]
                                                                 [ 0 0 0 1 4 ]

5. For which value(s) of the constant k does the following system of linear equations have no solution? Exactly one solution? Infinitely many solutions?

    x − y = 3
    2x − 2y = k.

6. Consider the system of equations

    ax + by = k
    cx + dy = l
    ex + fy = m.

Discuss the relative positions of the lines ax + by = k, cx + dy = l and ex + fy = m when:
(a) the system has no solution;
(b) the system has exactly one solution;
(c) the system has infinitely many solutions.

§1.2 Gaussian Elimination

In this section we give a systematic procedure for solving systems of linear equations; it is based on the idea of reducing the augmented matrix.

Definition 1.2. A matrix is said to be in reduced row-echelon form if it has the following properties:

1. If a row does not consist entirely of zeros, then its first nonzero number is a 1, called a leading 1.
2. If there are any rows that consist entirely of zeros, then they are grouped together at the bottom of the matrix.
3. In two successive rows that do not consist entirely of zeros, the leading 1 in the lower row occurs farther to the right than the leading 1 in the higher row.
4. Each column that contains a leading 1 has zeros everywhere else.

A matrix having properties 1, 2 and 3 is said to be in row-echelon form.

Example 1.2. The following matrices are in reduced row-echelon form:

    [ 1 0 0  4 ]   [ 1 0 0 ]   [ 0 1 −2 0 1 ]
    [ 0 1 0  7 ]   [ 0 1 0 ]   [ 0 0  0 1 3 ]   [ 0 0 ]
    [ 0 0 1 −1 ]   [ 0 0 1 ]   [ 0 0  0 0 0 ]   [ 0 0 ]
                               [ 0 0  0 0 0 ]

while the following matrices are in row-echelon form (but not reduced):

    [ 1 4 3 7 ]   [ 1 1 0 ]   [ 0 1 2  0 0 ]
    [ 0 1 6 2 ]   [ 0 1 0 ]   [ 0 0 1 −1 0 ]
    [ 0 0 1 5 ]   [ 0 0 0 ]   [ 0 0 0  0 1 ]

Remark 1.2. We illustrate the procedure that reduces a matrix to reduced row-echelon form by applying it to the matrix

    [ 0 4  −2 0   7  12 ]  ←  (given as)  [ 0 0  −2 0   7  12 ]
    [ 2 4 −10 6  12  28 ]
    [ 2 4  −5 6  −5  −1 ]

Step 1. Locate the leftmost column that does not consist entirely of zeros (here, the first column).

Step 2. Interchange the top row with another row, if necessary, to bring a nonzero entry to the top of the column found in Step 1:

    [ 2 4 −10 6  12  28 ]
    [ 0 0  −2 0   7  12 ]
    [ 2 4  −5 6  −5  −1 ]

Step 3. If the entry that is now at the top of the column found in Step 1 is a, multiply the first row by 1/a to introduce a leading 1:

    [ 1 2  −5 3   6  14 ]
    [ 0 0  −2 0   7  12 ]
    [ 2 4  −5 6  −5  −1 ]

Step 4. Add suitable multiples of the top row to the rows below so that all entries below the leading 1 become zeros:

    [ 1 2 −5 3    6   14 ]
    [ 0 0 −2 0    7   12 ]
    [ 0 0  5 0  −17  −29 ]

Step 5. Now cover the top row and begin again with Step 1 applied to the submatrix that remains. Continue in this way until the entire matrix is in row-echelon form:

    [ 1 2 −5 3     6   14 ]
    [ 0 0  1 0  −7/2   −6 ]
    [ 0 0  5 0   −17  −29 ]

    [ 1 2 −5 3     6   14 ]
    [ 0 0  1 0  −7/2   −6 ]
    [ 0 0  0 0   1/2    1 ]

    [ 1 2 −5 3     6   14 ]
    [ 0 0  1 0  −7/2   −6 ]
    [ 0 0  0 0     1    2 ]

which is now in row-echelon form.

To find the reduced row-echelon form we need the following additional step.

Step 6. Beginning with the last nonzero row and working upward, add suitable multiples of each row to the rows above to introduce zeros above the leading 1's:

    [ 1 2 −5 3  6  14 ]
    [ 0 0  1 0  0   1 ]
    [ 0 0  0 0  1   2 ]

    [ 1 2 −5 3  0   2 ]
    [ 0 0  1 0  0   1 ]
    [ 0 0  0 0  1   2 ]

    [ 1 2 0 3 0 7 ]
    [ 0 0 1 0 0 1 ]
    [ 0 0 0 0 1 2 ]

which is now in reduced row-echelon form.

The above procedure for reducing a matrix to reduced row-echelon form is called Gauss-Jordan elimination. If we use only the first five steps, the procedure produces a row-echelon form and is called Gaussian elimination.

Example 1.3. Solve the following linear system by Gauss-Jordan elimination:

    x1 + 3x2 − 2x3 + 2x5 = 0
    2x1 + 6x2 − 5x3 − 2x4 + 4x5 − 3x6 = −1
    5x3 + 10x4 + 15x6 = 5
    2x1 + 6x2 + 8x4 + 4x5 + 18x6 = 6

Solution. Start with the augmented matrix for the system:

    [ 1 3 −2  0 2  0  0 ]
    [ 2 6 −5 −2 4 −3 −1 ]
    [ 0 0  5 10 0 15  5 ]
    [ 2 6  0  8 4 18  6 ]

Adding −2 times the first row to the second and fourth rows:

    [ 1 3 −2  0 2  0  0 ]
    [ 0 0 −1 −2 0 −3 −1 ]
    [ 0 0  5 10 0 15  5 ]
    [ 0 0  4  8 0 18  6 ]

Multiplying the second row by −1 and then adding −5 times the second row to the third row and −4 times the second row to the fourth row:

    [ 1 3 −2 0 2 0 0 ]
    [ 0 0  1 2 0 3 1 ]
    [ 0 0  0 0 0 0 0 ]
    [ 0 0  0 0 0 6 2 ]

Interchanging the third and fourth rows and then multiplying the third row of the resulting matrix by 1/6:

    [ 1 3 −2 0 2 0   0 ]
    [ 0 0  1 2 0 3   1 ]
    [ 0 0  0 0 0 1 1/3 ]
    [ 0 0  0 0 0 0   0 ]

which is in row-echelon form. Adding −3 times the third row to the second row and then adding 2 times the second row of the resulting matrix to the first row:

    [ 1 3 0 4 2 0   0 ]
    [ 0 0 1 2 0 0   0 ]
    [ 0 0 0 0 0 1 1/3 ]
    [ 0 0 0 0 0 0   0 ]

which is in reduced row-echelon form. The corresponding system of equations is

    x1 + 3x2 + 4x4 + 2x5 = 0
    x3 + 2x4 = 0
    x6 = 1/3.

Solving for the leading variables,

    x1 = −3x2 − 4x4 − 2x5
    x3 = −2x4
    x6 = 1/3.

If we set x2 = r, x4 = s, x5 = t for arbitrary values r, s, t (such arbitrary values are called parameters), then the solution set is given by

    x1 = −3r − 4s − 2t, x2 = r, x3 = −2s, x4 = s, x5 = t, x6 = 1/3.

Definition 1.3. A technique that solves a system of linear equations by using Gaussian elimination to bring the augmented matrix into row-echelon form and then solving the equations from the bottom upward is called back-substitution.

Example 1.4. Solve the linear system in Example 1.3 by back-substitution.

Solution. From the solution of Example 1.3, we have the row-echelon form

    [ 1 3 −2 0 2 0   0 ]
    [ 0 0  1 2 0 3   1 ]
    [ 0 0  0 0 0 1 1/3 ]
    [ 0 0  0 0 0 0   0 ]

The corresponding system of equations is

    x1 + 3x2 − 2x3 + 2x5 = 0
    x3 + 2x4 + 3x6 = 1
    x6 = 1/3.

We proceed as follows.

Step 1. Solve each equation for its leading variable:

    x1 = −3x2 + 2x3 − 2x5
    x3 = 1 − 2x4 − 3x6
    x6 = 1/3.

Step 2. Beginning with the bottom equation and working upward, successively substitute each equation into all the equations above it. Substituting x6 = 1/3 into the second equation:

    x1 = −3x2 + 2x3 − 2x5
    x3 = −2x4
    x6 = 1/3.

Substituting x3 = −2x4 into the first equation:

    x1 = −3x2 − 4x4 − 2x5
    x3 = −2x4
    x6 = 1/3.

Step 3. Assign arbitrary values to the nonleading variables. If we assign x2 = r, x4 = s, x5 = t for arbitrary values r, s and t, then the solution set is given by

    x1 = −3r − 4s − 2t, x2 = r, x3 = −2s, x4 = s, x5 = t, x6 = 1/3.

Definition 1.4. A system of linear equations is said to be homogeneous if the constants b1, b2, ..., bm are all zero; that is, the system has the form

    a11 x1 + a12 x2 + · · · + a1n xn = 0
    a21 x1 + a22 x2 + · · · + a2n xn = 0
    ...
    am1 x1 + am2 x2 + · · · + amn xn = 0.

The solution x1 = 0, x2 = 0, ..., xn = 0 of a homogeneous system of linear equations is called the trivial solution; if there are other solutions, they are called nontrivial solutions.

Every system of linear equations has either one solution, infinitely many solutions, or no solutions. If a system has solutions, how many does it have? We consider several cases in which it is possible to determine the number of solutions by inspection.
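As a sanity check on Example 1.3, any choice of the parameters r, s, t should satisfy the original system. The following Python sketch is our own illustration (the function names `solution` and `residuals` are our own, not from the notes):

```python
from fractions import Fraction

def solution(r, s, t):
    """The parametric general solution found in Example 1.3."""
    return [-3*r - 4*s - 2*t, r, -2*s, s, t, Fraction(1, 3)]

def residuals(x):
    """Left side minus right side of each equation of the original system."""
    x1, x2, x3, x4, x5, x6 = x
    return [x1 + 3*x2 - 2*x3 + 2*x5 - 0,
            2*x1 + 6*x2 - 5*x3 - 2*x4 + 4*x5 - 3*x6 - (-1),
            5*x3 + 10*x4 + 15*x6 - 5,
            2*x1 + 6*x2 + 8*x4 + 4*x5 + 18*x6 - 6]

# Every choice of parameters must make all four residuals vanish.
checks = [residuals(solution(r, s, t))
          for r, s, t in [(0, 0, 0), (1, 2, 3), (-5, 7, 11)]]
```

All residuals come out zero, confirming that the parametric family really is a solution set of the system.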

   1 1 −2 0 −1 0    0 0 1 1 1 0 Reducing this matrix to reduced row echelon form.5. exactly one of the following is true: 1.    0 0 0 1 0 0    0 0 0 0 0 0 20 . 2. The system has only the trivial solution. x1 + x2 − 2x3 − x5 = 0 x3 + x4 + x5 = 0 Solution. The system has infinitely many nontrivial solutions.3.Remark 1. The augmented matrix for the system is   2 2 −1 0 1 0      −1 −1 2 −3 1 0     . For a homogeneous system of linear equations.   1 1 0 0 1 0      0 0 1 0 1 0     . Example 1. Solve the homogeneous system of linear equations by Gauss- Jordan elimination 2x1 + 2x2 − x3 + x5 = 0 −x1 − x2 + 2x3 − 3x4 + x5 = 0 .

Proof. if the system is con- sistent. however. x4 = 0 Solving for the leading variables. x3 = −t.1. x2 = s. x5 = t where s and t are arbitrary values. 21 . Theorem 1.The corresponding system of equations is x1 + x2 + x5 = 0 x3 + x5 = 0 . A homogeneous system of linear equations with more un- knowns than equations has infinitely many solutions. . A nonhomogeneous system of linear equations with more un- knowns than equations need not be consistent.4. x4 = 0. Omitted! Remark 1. x1 = −x2 − x5 x3 = −x5 x4 = 0 The solution set is given by x1 = −s − t. it will have many solutions.
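Steps 1-6 of this section can be collected into a single routine. The following Python sketch of Gauss-Jordan elimination is our own illustration (the name `rref` is ours, not from the notes); it reproduces the worked reduction of Remark 1.2:

```python
from fractions import Fraction

def rref(M):
    """Reduce a matrix to reduced row-echelon form (Gauss-Jordan elimination).

    Steps 1-5 of the text produce a row-echelon form; the clearing of entries
    above each leading 1 (Step 6) is folded into the same column pass.
    """
    M = [[Fraction(v) for v in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # Steps 1-2: find a row at or below r with a nonzero entry in column c.
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        # Step 3: scale the pivot row to introduce a leading 1.
        M[r] = [v / M[r][c] for v in M[r]]
        # Steps 4 and 6: clear the rest of the column, below and above.
        for i in range(rows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return M

# The matrix reduced step by step in Remark 1.2.
A = [[0, 0, -2, 0, 7, 12],
     [2, 4, -10, 6, 12, 28],
     [2, 4, -5, 6, -5, -1]]
R = rref(A)
```

The result agrees with the reduced row-echelon form obtained by hand in Remark 1.2.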

     0 0 0 0 1    0 1 2 4   0 1 0 4 0 0 0 0 0 2. Which of the following are in row-echelon form?        1 2 3   1 1 0     1 −7 5 5    (a)   0 0 0  . (e)  0 1 2 . (b)  1 0  0  .        0 0 0 0 1    0 0 3 0 0 0 0 0 0 0 0 3. (e)  0 0 1 3  .2 1. In each part suppose that the augmented matrix for a system of linear equations has been reduced by row operations to the given reduced row-echelon 22 .   0 1 3 2   0 0 1 0 0 0   1 3 0 2 0         1   2 3 4   0 0 0   0 2 2 0       (d)   . ♣Exercises 1.       0 0 1 0 0 0 0 0 0   1 2 0 3 0         0   1 0 0 5   0 1 1 0     1 0 3 1  (d)   . (c)  0 1 0  . Which of the following are in reduced row-echelon form?        1 0 0   0 1 0   1 1 0        (a)   0 0 0  . (b)   . (c)    0 1 0 . (f )   .  (f )   0 0 0  .

     1 2 −4 2   1 0 4 7 10      (a)    0 1 −2 −1  . Solve the system.       0 0 0 1 4 2    0 0 0 1 0 0 0 0 0 0 23 . In each part suppose that the augmented matrix for a system of linear equations has been reduced by row operations to the given row-echelon form. (d)   0 0 1 0 .form. (b)   0 1 0 −1 4 . Solve the system. (b)   0 1 −3 −4 −2  . (d)   0 1 3 3 .      0 0 1 2 0 0 1 1 2   1 5 0 0 5 −1       0 0 1   1 2 0 0   0 3 1     (c)   .      1 0 0 4   1 0 0 3 2      (a)  0 1 0 3  .       0 0 0 1 4 2    0 0 0 1 0 0 0 0 0 0 4.      0 0 1 2 0 0 1 1 2   1 5 −4 0 −7 −5       0 0    1 2 2 2   1 1 7 3    (c)   .

a21 x1 + a22 x2 + a23 x3 = 0 2x1 + 2x2 = 0. Without using pencil and paper.5. x2 + 4x3 = 0 . Show that if ad 6= bc. 7. c d 0 1 (b) the system ax + by = k cx + dy = l has exactly one solution. 24 . 3x1 + 2x2 + 7x3 + 8x4 = 0 5x3 = 0 (c) a11 x1 + a12 x2 + a13 x3 = 0 (d) x1 + x2 = 0 . (a) then the reduced row-echelon form of      a b   1 0    is  . determine which of the following homo- geneous systems have nontrivial solutions: (a) x1 + 3x2 + 5x3 + x4 = 0 (b) x1 + 2x2 + 3x3 = 0 4x1 − 7x2 − 3x3 − x4 = 0 . Solve the linear system by Gaussian-Jordan elimination and back-substitution: 3x1 + 2x2 − x3 = −15 5x1 + 3x2 + 2x3 = 0 3x1 + x2 + 3x3 = 11 11x1 + 7x2 = −30 6.

3x1 + x2 + x3 + x4 = 0 x1 + 2x2 = 0 . ex+f y = m when: (a) the system has only the trivial solution.♣. 9. Discuss the relative positions of the lines ax+by = k. x + 6y − 2z = 0 − 2x2 − 2x3 − x4 = 0 . 5x1 − x2 + x3 − x4 = 0 x2 + x3 = 0 4. Consider the system of equations ax + by = k cx + dy = l ex + f y = m. 2x1 + x2 + 3x3 = 0 3. Solve the given homogeneous systems of linear equations: 2. 25 . (b) the system has nontrivial solutions. 2x − 4y + z = 0. For which value(s) of λ does the following system of equations have non- trivial solutions? (λ − 3)x +y = 0 x +(λ − 3)y = 0. cx+dy = l. x1 + 3x2 + x4 = 0 x1 − 2x2 − x3 + x4 = 0 8. 2x1 − 4x2 + x3 + x4 = 0 x1 − 5x2 + 2x3 = 0 5. .

10. Consider the system of equations

    ax + by = 0
    cx + dy = 0.

(a) Show that if x = x0, y = y0 is any solution and k is any constant, then x = kx0, y = ky0 is also a solution.
(b) Show that if x = x1, y = y1 and x = x2, y = y2 are any two solutions, then x = x1 + x2, y = y1 + y2 is also a solution.

§1.3 Matrices and Matrix Operations

Rectangular arrays of real numbers arise in many contexts other than as augmented matrices for systems of linear equations. In this section we consider such arrays, simply called matrices, as objects in their own right and develop some of their properties.

Definition 1.5. A matrix is a rectangular array of numbers. The numbers in a matrix are called the entries of the matrix. The size of a matrix is described by specifying the number of rows (horizontal lines) and columns (vertical lines) that occur in the matrix; if a matrix has m rows and n columns, its size is m by n (written m × n).

Remark 1.6. If A is an m × n matrix with aij as its entry in row i and column j, then it is denoted by

    A = [ a11  a12  · · ·  a1n ]
        [ a21  a22  · · ·  a2n ]
        [  .    .           .  ]
        [ am1  am2  · · ·  amn ]

or A = [aij]. If m = n, then A is called a square matrix of order n, and the entries a11, a22, ..., ann are said to be on the main diagonal of A. When we discuss matrices, it is common to refer to numerical quantities (real numbers) as scalars.

Definition 1.6. Let A = [aij] and B = [bij] be two matrices of the same size.
(1) A and B are said to be equal if aij = bij for all i, j.
(2) The sum of A and B is the matrix A + B = [aij + bij]; the difference of A and B is the matrix A − B = [aij − bij].
(3) If c is any scalar, the scalar multiple cA is the matrix cA = [c aij].

Definition 1.7. Let A = [aij] be an m × r matrix and B = [bij] be an r × n matrix. Then the matrix product AB is the m × n matrix

    AB = [ Σ_{k=1}^{r} aik bkj ].
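Definition 1.7 translates directly into code. The following minimal Python sketch is our own illustration (not part of the original notes), applied to the matrices of Example 1.6:

```python
def matmul(A, B):
    """Matrix product per Definition 1.7: (AB)_ij = sum over k of a_ik * b_kj.
    A is m x r and B is r x n; the result is m x n."""
    r = len(B)
    assert all(len(row) == r for row in A), "inner sizes must match"
    return [[sum(A[i][k] * B[k][j] for k in range(r))
             for j in range(len(B[0]))]
            for i in range(len(A))]

# The matrices of Example 1.6 (A is 2 x 3, B is 3 x 4, so AB is 2 x 4).
A = [[1, 2, 4],
     [2, 6, 0]]
B = [[4,  1, 4, 3],
     [0, -1, 3, 1],
     [2,  7, 5, 2]]
AB = matmul(A, B)
```

The computed product matches the 2 × 4 matrix worked out entry by entry in Example 1.6.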

Example 1.6. If

    A = [ 1 2 4 ]      B = [ 4  1 4 3 ]
        [ 2 6 0 ]          [ 0 −1 3 1 ]
                           [ 2  7 5 2 ]

then

    AB = [ 1·4+2·0+4·2  1·1−2·1+4·7  1·4+2·3+4·5  1·3+2·1+4·2 ]   =   [ 12 27 30 13 ]
         [ 2·4+6·0+0·2  2·1−6·1+0·7  2·4+6·3+0·5  2·3+6·1+0·2 ]       [  8 −4 26 12 ]

Remark 1.7. Matrix multiplication has an important application to systems of linear equations. Consider a system of m equations in n unknowns:

    a11 x1 + a12 x2 + · · · + a1n xn = b1
    a21 x1 + a22 x2 + · · · + a2n xn = b2
    ...
    am1 x1 + am2 x2 + · · · + amn xn = bm.

Then the system is represented by the single matrix equation

    [ a11  a12  · · ·  a1n ] [ x1 ]   [ b1 ]
    [ a21  a22  · · ·  a2n ] [ x2 ] = [ b2 ]
    [  .    .           .  ] [ .  ]   [ .  ]
    [ am1  am2  · · ·  amn ] [ xn ]   [ bm ]

that is, AX = B, where

    A = [ a11  a12  · · ·  a1n ]
        [ a21  a22  · · ·  a2n ]
        [  .    .           .  ]
        [ am1  am2  · · ·  amn ]

is called the coefficient matrix for the linear system. We denote the augmented matrix by

    [ a11  a12  · · ·  a1n  |  b1 ]
    [ a21  a22  · · ·  a2n  |  b2 ]
    [  .    .           .   |  .  ]
    [ am1  am2  · · ·  amn  |  bm ]

Definition 1.8. If A = [aij] is an m × n matrix, then the n × m matrix At = [aji] that results from interchanging the rows and columns of A is called the transpose of A. If A = [aij] is a square matrix of order n, then tr(A) = a11 + a22 + · · · + ann is called the trace of A.
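Both operations of Definition 1.8 are one-liners in Python. A minimal sketch of our own (the function names are assumptions, not from the notes):

```python
def transpose(A):
    """A^t: the (j, i) entry of the result is the (i, j) entry of A."""
    return [list(col) for col in zip(*A)]

def trace(A):
    """tr(A) = a11 + a22 + ... + ann, defined only for square matrices."""
    assert len(A) == len(A[0]), "trace is defined only for square matrices"
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2, 3],
     [4, 5, 6]]
At = transpose(A)        # a 3 x 2 matrix
B = [[2, 7],
     [0, 5]]             # tr(B) = 2 + 5
```

Note that transposing twice returns the original matrix, which is property (a) of Theorem 1.9 in the next section.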

♣ Exercises 1.3

1. Let A and B be 4 × 5 matrices and let C, D and E be 5 × 2, 4 × 2 and 5 × 4 matrices, respectively. Determine which of the following matrix expressions are defined. For those which are defined, give the size of the resulting matrix.
(a) BA. (b) AC + D. (c) AE + B. (d) AB + B. (e) E(A + B). (f) E(AC). (g) EtA. (h) (At + E)D.

2. (a) Show that if AB and BA are both defined, then AB and BA are square matrices.
(b) Show that if A is an m × n matrix and A(BA) is defined, then B is an n × m matrix.

3. Solve the following matrix equation for a, b, c and d:

    [ a − b    b + c  ]   =   [ 8 1 ]
    [ 3d + c   2a − 4d ]      [ 7 6 ]

4. Consider the matrices

    A = [  3 0 ]    B = [ 4 −1 ]    C = [ 1 4 2 ]
        [ −1 2 ]        [ 0  2 ]        [ 3 1 5 ]
        [  1 1 ]

    D = [  1 5 2 ]    E = [  6 1 3 ]
        [ −1 0 1 ]        [ −1 1 2 ]
        [  3 2 4 ]        [  4 1 3 ]

Compute (a) AB. (b) D + E. (c) D − E. (d) DE. (e) ED. (f) −7B.

5. A square matrix is called a diagonal matrix if all entries off the main diagonal are zero. Show that the product of diagonal matrices is again a diagonal matrix.

§1.4 Inverses; Rules of Matrix Arithmetic

Many of the rules of arithmetic for real numbers also hold for matrices, but there are some exceptions. For example, in general AB ≠ BA: this already fails for the matrices

    A = [ −1 0 ]      B = [ 1 2 ]
        [  2 3 ]          [ 3 0 ]

Theorem 1.2. Assuming that the sizes of the matrices are such that the indicated operations can be performed:
(a) A + B = B + A.
(b) A + (B + C) = (A + B) + C.
(c) A(BC) = (AB)C.
(d) A(B + C) = AB + AC.
(e) (B + C)A = BA + CA.
(f) A(B − C) = AB − AC.
(g) (B − C)A = BA − CA.
(h) a(B + C) = aB + aC.
(i) a(B − C) = aB − aC.
(j) (a + b)C = aC + bC.
(k) (a − b)C = aC − bC.
(l) (ab)C = a(bC).
(m) a(BC) = (aB)C = B(aC).

Proof. Left to the reader as exercises.

Definition 1.9. A matrix whose entries are all zeros is called a zero matrix and is denoted by O.

Theorem 1.3. Assuming that the sizes of the matrices are such that the indicated operations can be performed:
(a) A + O = O + A = A.
(b) A − A = O.
(c) O − A = −A.
(d) AO = O and OA = O.

Proof. Left to the reader as exercises.

Theorem 1.4. Every system of linear equations has either no solutions, exactly one solution or infinitely many solutions.

Proof. Let AX = B be a system of linear equations. Suppose that the system has more than one solution; it is enough to show that it then has infinitely many solutions. Let X1 and X2 be two distinct solutions of AX = B, so AX1 = B and AX2 = B. Set X0 = X1 − X2; then AX0 = AX1 − AX2 = O. For any scalar k,

    A(X1 + kX0) = AX1 + A(kX0) = AX1 + k(AX0) = B + kO = B.

Thus X1 + kX0 is a solution for every k, and since X0 ≠ O these solutions are distinct for distinct k. Hence AX = B has infinitely many solutions.

Definition 1.10. A square matrix A = [aij] such that aij = 1 if i = j and aij = 0 if i ≠ j is called an identity matrix and is denoted by I. If we wish to emphasize the size, we write In for the n × n identity matrix.

Remark 1.8. For any matrix A, AI = IA = A.

Definition 1.11. A square matrix A is said to be invertible if there exists a matrix B, called an inverse of A, such that AB = BA = I.

Theorem 1.5. If B and C are both inverses of a matrix A, then B = C.

Proof. Since B and C are inverses of A,

    B = IB = (CA)B = C(AB) = CI = C.

Remark 1.9. If A is an invertible matrix, its inverse will be denoted by A−1. Thus AA−1 = I = A−1A.

Theorem 1.6. If A and B are invertible matrices of the same size, then
(a) AB is invertible;
(b) (AB)−1 = B−1A−1.

Proof. Since

    (AB)(B−1A−1) = A(BB−1)A−1 = AIA−1 = AA−1 = I

and

    (B−1A−1)(AB) = B−1(A−1A)B = B−1IB = B−1B = I,

AB is invertible and B−1A−1 is the inverse of AB.

Definition 1.12. If A is a square matrix, then we define the nonnegative integer powers of A to be

    A0 = I,    An = AA · · · A  (n factors, n > 0).

In addition, if A is invertible, we define

    A−n = A−1 A−1 · · · A−1  (n factors, n > 0).

Theorem 1.7. If A is a square matrix and r and s are integers, then Ar As = Ar+s and (Ar)s = Ars.

Proof. Left to the reader as exercises.

Theorem 1.8. If A is an invertible matrix, then
(a) A−1 is invertible and (A−1)−1 = A.
(b) An is invertible and (An)−1 = (A−1)n.
(c) For any k ≠ 0, kA is invertible and (kA)−1 = (1/k)A−1.

Proof. (a) Since AA−1 = I = A−1A, A−1 is invertible and (A−1)−1 = A.
(b) Since An(A−1)n = AnA−n = An−n = A0 = I and (A−1)nAn = A−nAn = A0 = I, An is invertible and (An)−1 = (A−1)n.
(c) Since

    (kA)((1/k)A−1) = (k · (1/k))(AA−1) = I   and   ((1/k)A−1)(kA) = ((1/k) · k)(A−1A) = I,

kA is invertible and (kA)−1 = (1/k)A−1.

Theorem 1.9. If the sizes of the matrices are such that the given operations can be performed, then
(a) (At)t = A.
(b) (A + B)t = At + Bt.
(c) (kA)t = kAt for any scalar k.
(d) (AB)t = BtAt.
(e) If A is invertible, then At is also invertible and (At)−1 = (A−1)t.

Proof. (e) Since At(A−1)t = (A−1A)t = It = I and (A−1)tAt = (AA−1)t = It = I, At is invertible and (At)−1 = (A−1)t. The others are left to the reader as exercises.
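Identities such as (AB)−1 = B−1A−1 from Theorem 1.6 can be spot-checked numerically. The following Python sketch is our own illustration: it uses the explicit 2 × 2 inverse formula (swap the diagonal, negate the off-diagonal, divide by ad − bc), which is stated here without proof and is not derived in these notes until determinants are available.

```python
from fractions import Fraction

def inv2(A):
    """Inverse of an invertible 2 x 2 matrix via the explicit formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    assert det != 0, "matrix is not invertible"
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

def matmul(A, B):
    """Matrix product per Definition 1.7."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Two invertible matrices (from Exercise 3 of this section's exercise set).
A = [[3, 1], [5, 2]]
B = [[2, -3], [4, 4]]
I = [[1, 0], [0, 1]]

lhs = inv2(matmul(A, B))        # (AB)^{-1}
rhs = matmul(inv2(B), inv2(A))  # B^{-1} A^{-1}  -- note the reversed order
```

The two sides agree, and multiplying A by `inv2(A)` returns the identity, as Remark 1.9 requires.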

♣ Exercises 1.4

1. Let α = −3, β = 2, and let

    A = [  3 2 ]    B = [ 4 0 ]    C = [ 0 −1 ]
        [ −1 3 ]        [ 1 5 ]        [ 4  6 ]

Show that
(a) A + (B + C) = (A + B) + C.
(b) (AB)C = A(BC).
(c) (α + β)C = αC + βC.
(d) α(B − C) = αB − αC.
(e) α(BC) = (αB)C = B(αC).
(f) A(B − C) = AB − AC.
(g) (At)t = A.
(h) (A + B)t = At + Bt.
(i) (αC)t = αCt.
(j) (AB)t = BtAt.

2. Let A and B be square matrices of the same size. Is (AB)2 = A2B2 a valid matrix identity? Justify your answer.

3. Compute the inverses of the following matrices:

    A = [ 3 1 ]    B = [ 2 −3 ]    C = [ 2 0 ]
        [ 5 2 ]        [ 4  4 ]        [ 0 3 ]

4. If R is a square matrix in reduced row-echelon form and has no zero rows, show that R = I.

§1.5 Elementary Matrices and a Method for Finding A−1

In this section we will develop a simple scheme, or algorithm, for finding the inverse of an invertible matrix.

Definition 1.13. An n × n matrix E is called an elementary matrix if it is obtained from In by performing a single elementary row operation, that is, exactly one of
1. multiplying a row through by a nonzero constant,
2. interchanging two rows,
3. adding a multiple of one row to another row.

Example 1.7. The following matrices are elementary matrices:

    (i) [ 1  0 ]    (ii) [ 1 0 0 0 ]    (iii) [ 1 0 3 ]    (iv) [ 1 0 0 ]
        [ 0 −3 ]         [ 0 0 0 1 ]          [ 0 1 0 ]         [ 0 1 0 ]
                         [ 0 0 1 0 ]          [ 0 0 1 ]         [ 0 0 1 ]
                         [ 0 1 0 0 ]

since they result from (i) multiplying the second row of I2 by −3, (ii) interchanging the second and fourth rows of I4, (iii) adding 3 times the third row of I3 to the first row, and (iv) multiplying the first row of I3 by 1.

Theorem 1.10. If E is the elementary matrix obtained from Im by performing a row operation and A is an m × n matrix, then EA is the matrix that results when the same row operation is performed on A.

Proof. Omitted!

Example 1.8. Let

    A = [ 1  0 2 3 ]        E = [ 1 0 0 ]
        [ 2 −1 3 6 ]            [ 0 1 0 ]
        [ 1  4 4 0 ]            [ 3 0 1 ]

Then E is the elementary matrix obtained from I3 by adding 3 times the first row to the third row. We see that

    EA = [ 1  0  2 3 ]
         [ 2 −1  3 6 ]
         [ 4  4 10 9 ]

which is exactly the matrix that results when we add 3 times the first row of A to the third row.

Definition 1.14. Let E and I be an elementary matrix and the identity matrix of the same size.

    Row operation on I that produces E     Row operation on E that reproduces I
    -----------------------------------    ------------------------------------
    Multiply row i by c ≠ 0                Multiply row i by 1/c
    Interchange rows i and j               Interchange rows i and j
    Add c times row i to row j             Add −c times row i to row j

The operations on the right side of the table are called the inverse operations of the corresponding operations on the left.
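Theorem 1.10 is easy to see in action: multiplying by the elementary matrix E of Example 1.8 performs the row operation on A. A Python sketch of our own (the helper names are ours):

```python
def identity(n):
    """The n x n identity matrix In."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    """Matrix product per Definition 1.7."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# E: the elementary matrix of Example 1.8, obtained from I3 by
# adding 3 times the first row to the third row.
E = identity(3)
E[2][0] = 3

A = [[1,  0, 2, 3],
     [2, -1, 3, 6],
     [1,  4, 4, 0]]

# Multiplying on the left by E performs the same row operation on A.
EA = matmul(E, A)
```

The product EA coincides with the matrix obtained in Example 1.8 by applying the row operation to A directly.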

Theorem 1.11. Every elementary matrix is invertible, and its inverse is also an elementary matrix.

Proof. If E is an elementary matrix, then E results from performing some row operation on I. Let E0 be the matrix that results when the inverse of this operation is performed on I. Then E0 is an elementary matrix and, by Theorem 1.10, E0E = I and EE0 = I. Thus E0 is the inverse of E, so E is invertible.

Definition 1.15. A matrix A is said to be row equivalent to a matrix B, written A ∼ B, if B is obtained from A by performing a finite number of elementary row operations.

Theorem 1.12. If A is an n × n matrix, then the following statements are equivalent:
(a) A is invertible.
(b) AX = 0 has only the trivial solution.
(c) A ∼ In.

Proof. (a) ⇒ (b). Assume A is invertible and let X0 be any solution of AX = 0. Then AX0 = 0, and multiplying by A−1 gives X0 = IX0 = (A−1A)X0 = A−1(AX0) = A−1 0 = 0. Thus AX = 0 has only the trivial solution.

(b) ⇒ (c). Let AX = 0 be the matrix form of the system

    a11 x1 + a12 x2 + · · · + a1n xn = 0
    a21 x1 + a22 x2 + · · · + a2n xn = 0
    ...
    an1 x1 + an2 x2 + · · · + ann xn = 0,

and assume the system has only the trivial solution. If we solve by Gauss-Jordan elimination, then the system of equations corresponding to the reduced row-echelon form of the augmented matrix will be x1 = 0, x2 = 0, ..., xn = 0. Thus the augmented matrix for the system AX = 0,

    [ a11  a12  · · ·  a1n  0 ]
    [ a21  a22  · · ·  a2n  0 ]
    [  .    .           .   . ]
    [ an1  an2  · · ·  ann  0 ]

can be reduced to the augmented matrix

    [ 1  0  · · ·  0  0 ]
    [ 0  1  · · ·  0  0 ]
    [ .  .         .  . ]
    [ 0  0  · · ·  1  0 ]

of x1 = 0, x2 = 0, ..., xn = 0 by a finite number of elementary row operations. Thus A is row equivalent to In.

(c) ⇒ (a). Assume that A is row equivalent to In, so A can be reduced to In by a finite number of elementary row operations. By Theorem 1.10, we can find elementary matrices E1, E2, ..., Ek such that Ek · · · E2E1A = In. By Theorem 1.11, E1, E2, ..., Ek are invertible, so

    A = E1−1 E2−1 · · · Ek−1 In = (Ek · · · E2E1)−1.

Thus A is a product of invertible matrices and is therefore invertible.

Remark 1.11. From A = (Ek · · · E2E1)−1 we obtain A−1 = Ek · · · E2E1. A simple method for finding the inverse Ek · · · E2E1 of A is given in the following example.

   1 2 3 | 1 0 0     0 1 −3 | −2 1 0      0 0 1 | 5 −2 −1 43 . The procedure is as follows: we reduce [A|I] to [I|A−1 ].    1 2 3 | 1 0 0     0 1 −3 | −2 1 0      0 −2 5 | −1 0 1 Adding 2 times the second row to the third. Find the inverse of    1 2 3     2 5 3 .    1 2 3 | 1 0 0     2 5 3 | 0 1 0      1 0 8 | 0 0 1 Adding −2 times the first row to the second and −1 times the first row to the third.Example 1.    1 2 3 | 1 0 0     0 1 −3 | −2 1 0      0 0 −1 | −5 2 1 Multiplying the third row by −1.9.     1 0 8 Solution.

Adding 3 times the third row to the second and −3 times the third row to the first.   0 0 1 | 5 −2 −1 Thus    −40 16 9    A−1 =  13 −5 −3 .    5 −2 −1 44 .    1 0 0 | −40 16 9     0 1 0 13 −5 −3   | .    1 2 0 | −14 6 3     0 1 0 | 13 −5 −3      0 0 1 | 5 −2 −1 Adding −2 times the second row to the first.
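The [A | I] → [I | A−1] procedure of Example 1.9 packages neatly into a routine. The following Python sketch is our own illustration (the name `inverse` is ours); it is checked against the A−1 just computed:

```python
from fractions import Fraction

def inverse(A):
    """Invert A by reducing [A | I] to [I | A^{-1}] with Gauss-Jordan
    elimination. Raises ValueError if no pivot can be found, i.e. A is
    not invertible."""
    n = len(A)
    # Build the n x 2n matrix [A | I] with exact rational entries.
    M = [[Fraction(v) for v in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        pivot = next((i for i in range(c, n) if M[i][c] != 0), None)
        if pivot is None:
            raise ValueError("matrix is not invertible")
        M[c], M[pivot] = M[pivot], M[c]
        M[c] = [v / M[c][c] for v in M[c]]      # leading 1 in column c
        for i in range(n):
            if i != c:                           # clear the rest of column c
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[c])]
    return [row[n:] for row in M]                # the right half is A^{-1}

A = [[1, 2, 3],
     [2, 5, 3],
     [1, 0, 8]]
Ainv = inverse(A)
```

The routine returns the same inverse as the hand computation in Example 1.9.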

♣ Exercises 1.5

1. Which of the following are elementary matrices?

    (a) [ 2 0 ]   (b) [ 1 0 ]   (c) [ 2 0 ]   (d) [ 0 1 0 ]
        [ 0 1 ]       [ 3 1 ]       [ 0 2 ]       [ 1 0 0 ]
                                                  [ 0 0 1 ]

    (e) [ 1 0 0 ]   (f) [ 1 0  0 ]   (g) [ 1 0 0 0 ]
        [ 0 1 0 ]       [ 0 1 −3 ]       [ 0 1 1 0 ]
        [ 0 0 1 ]       [ 0 0  1 ]       [ 0 0 1 0 ]
                                         [ 0 0 0 1 ]

2. Consider the matrices

    A = [ 1 2 3 ]    B = [ 7 8 9 ]    C = [ 1  2  3 ]
        [ 4 5 6 ]        [ 4 5 6 ]        [ 4  5  6 ]
        [ 7 8 9 ]        [ 1 2 3 ]        [ 9 12 15 ]

Find elementary matrices E1, E2, E3 and E4 such that
(a) E1A = B. (b) E2B = A. (c) E3A = C. (d) E4C = A.

3. Express the matrix

    A = [  1  3 3  8 ]
        [ −2 −5 1 −8 ]
        [  0  1 7  8 ]

in the form EFA = R, where E and F are elementary matrices and R is in row-echelon form.

4. Show that if

    A = [ 1 0 0 ]
        [ 0 1 0 ]
        [ a b c ]

is an elementary matrix, then at least one of a, b, c must be zero.

5. Find the inverse of each of the following matrices, where k1, k2, k3, k4 and k are all nonzero:

    (a) [ k1  0  0  0 ]   (b) [  0  0  0 k1 ]   (c) [ k 0 0 0 ]
        [  0 k2  0  0 ]       [  0  0 k2  0 ]       [ 1 k 0 0 ]
        [  0  0 k3  0 ]       [  0 k3  0  0 ]       [ 0 1 k 0 ]
        [  0  0  0 k4 ]       [ k4  0  0  0 ]       [ 0 0 1 k ]

§1.6 Further Results on Systems of Equations and Invertibility

In this section we will establish more results about systems of linear equations and invertibility of matrices.

Theorem 1.11. If A is an invertible n × n matrix, then for each n × 1 matrix B, the system of equations AX = B has exactly one solution, X = A^(-1)B.

Proof. Since A(A^(-1)B) = B, A^(-1)B is a solution of AX = B. If X0 is any solution of AX = B, then AX0 = B and hence X0 = A^(-1)B.

Remark 1.12. To solve the systems of equations AX = B1, AX = B2, ..., AX = Bk, all with the same coefficient matrix A, reduce [A|B1|B2|...|Bk] to [I|B1'|B2'|...|Bk'].

Example 1.13. Solve the systems

(a) x1 + 2x2 + 3x3 = 4        (b) x1 + 2x2 + 3x3 = 1
    2x1 + 5x2 + 3x3 = 5           2x1 + 5x2 + 3x3 = 6
    x1        + 8x3 = 9           x1        + 8x3 = −6

Solution. Reducing

[ 1  2  3 | 4 |  1 ]
[ 2  5  3 | 5 |  6 ]
[ 1  0  8 | 9 | −6 ]

we will have

[ 1  0  0 | 1 |  2 ]
[ 0  1  0 | 0 |  1 ]
[ 0  0  1 | 1 | −1 ]

Thus the solution of (a) is x1 = 1, x2 = 0, x3 = 1 and of (b) is x1 = 2, x2 = 1, x3 = −1.

Theorem 1.14. Let A be a square matrix.
(a) If B is a square matrix such that BA = I, then B = A^(-1).
(b) If B is a square matrix such that AB = I, then B = A^(-1).

Proof. (a) It is enough to show that A is invertible, since then BAA^(-1) = IA^(-1) implies B = A^(-1); for this it suffices to prove that the system AX = 0 has only the trivial solution. Let AX = 0. Then B(AX) = B0 ⇒ (BA)X = 0 ⇒ IX = 0 ⇒ X = 0. Thus AX = 0 has only the trivial solution, so A is invertible.
(b) AB = I ⇒ B^t A^t = I^t = I, so, by (a), A^t is invertible and so is A by Theorem 1.9(e).

Theorem 1.15. If A is an n × n matrix, then the following statements are equivalent.
(a) A is invertible.
(b) AX = 0 has only the trivial solution.
(c) A is row equivalent to In.
(d) AX = B is consistent for every n × 1 matrix B.

Proof. (a) ⇒ (b) ⇒ (c) ⇒ (a). It remains to show that (a) ⇔ (d).

If A is invertible and B is any n × 1 matrix, then X = A^(-1)B is the solution of AX = B by Theorem 1.11, so AX = B is consistent.

Conversely, assume the system AX = B is consistent for every n × 1 matrix B. In particular, the systems

AX = (1, 0, ..., 0)^t,  AX = (0, 1, ..., 0)^t,  ...,  AX = (0, 0, ..., 1)^t

will be consistent. Let X1 be a solution of the first system, X2 a solution of the second system, ..., Xn a solution of the last system, and let C be the matrix with X1, X2, ..., Xn as the first, the second, ..., the last columns, respectively, that is, C = [X1 X2 ... Xn]. Then AC = [AX1 AX2 ... AXn] = I, so, by Theorem 1.14, A is invertible.
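The batch-solving idea — one reduction, several right-hand sides — corresponds directly to numpy's ability to solve AX = B with a matrix B. A small sketch (numpy assumed) using the two systems solved in the worked example above:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 5., 3.],
              [1., 0., 8.]])

# Each column of B is the right-hand side of one system.
B = np.array([[4.,  1.],
              [5.,  6.],
              [9., -6.]])

X = np.linalg.solve(A, B)   # one factorization, both systems solved
print(X[:, 0])   # solution of system (a)
print(X[:, 1])   # solution of system (b)
```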

A Fundamental Problem. Let A be an m × n matrix. Find all m × 1 matrices B such that the system AX = B is consistent.

The following example illustrates how Gaussian elimination can be used to determine such conditions.

Example 1.16. What conditions must b1, b2, b3 satisfy for the system

x1 + x2 + 2x3 = b1
x1      +  x3 = b2
2x1 + x2 + 3x3 = b3

to be consistent?

Solution. Start with the augmented matrix

[ 1  1  2  b1 ]
[ 1  0  1  b2 ]
[ 2  1  3  b3 ]

Adding −1 times the first row to the second and −2 times the first row to the third,

[ 1   1   2  b1       ]
[ 0  −1  −1  b2 − b1  ]
[ 0  −1  −1  b3 − 2b1 ]

Multiplying the second row by −1,

[ 1   1   2  b1       ]
[ 0   1   1  b1 − b2  ]
[ 0  −1  −1  b3 − 2b1 ]

Adding the second row to the third,

[ 1  1  2  b1            ]
[ 0  1  1  b1 − b2       ]
[ 0  0  0  b3 − b2 − b1  ]

From the third row, it is evident that the system has a solution if and only if b3 − b2 − b1 = 0, or b3 = b1 + b2. Thus the system AX = B is consistent if and only if

    [ b1      ]
B = [ b2      ].
    [ b1 + b2 ]

♣Exercises 1.6

1. Find the conditions that the b's must satisfy for the systems to be consistent.

(a) x1 − x2 + 3x3 = b1        (b) 4x1 − 2x2       = b1
    3x1 − 3x2 + 9x3 = b2          2x1 −  x2       = b2
                                 −2x1 +  x2 − 6x3 = b3

2. Consider the matrices

    [ 2   2  3 ]           [ x1 ]
A = [ 1   2  1 ]   and X = [ x2 ].
    [ 2  −2  1 ]           [ x3 ]

Show that the equation AX = X can be written as (A − I)X = 0 and use this result to solve AX = X for X.

3. Let AX = 0 be a homogeneous system of n linear equations in n unknowns that has only the trivial solution. Show that if k is any positive integer, then the system A^k X = 0 also has only the trivial solution.

4. Let AX = 0 be a homogeneous system of n linear equations in n unknowns, and let Q be an invertible matrix. Show that AX = 0 has just the trivial solution if and only if (QA)X = 0 has just the trivial solution.

5. Show that an n × n matrix A is invertible if and only if it can be written as a product of elementary matrices.
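The consistency condition derived in the example of the Fundamental Problem above (b3 = b1 + b2) can be checked numerically: AX = b is consistent exactly when A and the augmented matrix [A|b] have the same rank. A numpy sketch (numpy assumed):

```python
import numpy as np

A = np.array([[1, 1, 2],
              [1, 0, 1],
              [2, 1, 3]])

def is_consistent(A, b):
    # AX = b is consistent iff rank(A) == rank([A | b])
    aug = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug)

print(is_consistent(A, np.array([1, 2, 3])))   # b3 = b1 + b2 holds
print(is_consistent(A, np.array([1, 2, 4])))   # b3 != b1 + b2
```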

Chapter Two
Determinants

§2.1 Combinatorial Approach To Determinants

A "determinant" is a certain kind of function that associates a real number with a square matrix. In this section, we will define this function. Our work on the determinant function will have important applications to the theory of systems of linear equations and will also lead us to an explicit formula for the inverse of an invertible matrix.

Definition 2.1. A permutation of n integers is an ordered arrangement of the n integers 1, 2, ..., n.

Definition 2.2. An inversion of a permutation α = (i1, i2, ..., in) of n integers is an ordered pair (ij, ik) of integers of {1, 2, ..., n} such that ij > ik and ij precedes ik in α.

Remark 2.1. The number of permutations of n integers equals n!.

Remark 2.2. For a permutation α = (i1, i2, ..., in) of n integers, the number of inversions of α equals j1 + j2 + · · · + jn−1, where j1 is the number of inversions of α whose first coordinate is i1, j2 is the number of inversions of α whose first coordinate is i2,

and jn−1 is the number of inversions of α whose first coordinate is in−1.

Example 2.1. The number of inversions of (6, 1, 3, 4, 5, 2) equals 5 + 0 + 1 + 1 + 1 = 8.

Definition 2.3. A permutation α of n integers is said to be even if its total number of inversions is even, and odd if its total number of inversions is odd. If α is a permutation of n integers, we define the sign of α by

sgn(α) = +1 if α is even,  −1 if α is odd.

Definition 2.4. For an n × n matrix

    [ a11  a12  ...  a1n ]
A = [ a21  a22  ...  a2n ]
    [ ...  ...  ...  ... ]
    [ an1  an2  ...  ann ]

an elementary product from A is any product of n entries of A, no two of which come from the same row or the same column. A signed elementary product from A is sgn(α) a1i1 a2i2 · · · anin, where α = (i1, i2, ..., in) is a permutation of n integers.
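The counting in Example 2.1 and the sign of a permutation are easy to express in code. A small Python sketch (not part of the notes):

```python
from itertools import permutations

def inversions(p):
    """Number of pairs that stand out of natural order (Definition 2.2)."""
    return sum(1 for i in range(len(p))
                 for j in range(i + 1, len(p)) if p[i] > p[j])

def sgn(p):
    """+1 for an even permutation, -1 for an odd one (Definition 2.3)."""
    return 1 if inversions(p) % 2 == 0 else -1

print(inversions((6, 1, 3, 4, 5, 2)))          # 5 + 0 + 1 + 1 + 1 = 8
print(sgn((6, 1, 3, 4, 5, 2)))                 # even, so +1
print(sum(1 for _ in permutations(range(4))))  # 4! = 24 permutations (Remark 2.1)
```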

Remark 2.3. If A is an n × n matrix, then there are n! elementary products from A.

Remark 2.4. For any permutation α of n integers, α is considered as a

bijective function from the set {1, 2, . . . , n} onto itself. So if α = (i1 , i2 , . . . , in )

is a permutation, it means that

α(1) = i1 , α(2) = i2 , . . . , α(n) = in .

Definition 2.5. Let A = [aij ] be a n × n matrix and let Sn be the set of

all permutations of n integers. Then the determinant function of square

matrices is denoted by det and we define the value of det(A) by

det(A) = Σ_{α∈Sn} sgn(α) a1α(1) a2α(2) · · · anα(n).

Example 2.2.

(i) det [ a11  a12 ]  =  a11 a22 − a12 a21.
        [ a21  a22 ]

(ii) det [ a11  a12  a13 ]  =  a11 a22 a33 + a12 a23 a31 + a13 a21 a32
         [ a21  a22  a23 ]     − a13 a22 a31 − a12 a21 a33 − a11 a23 a32.
         [ a31  a32  a33 ]
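Definition 2.5 can be executed directly for small matrices by summing signed elementary products over all permutations — hopelessly slow for large n (n! terms), but a useful cross-check of the 2 × 2 and 3 × 3 formulas above. A Python sketch (not from the notes):

```python
from itertools import permutations

def sgn(p):
    inv = sum(1 for i in range(len(p))
                for j in range(i + 1, len(p)) if p[i] > p[j])
    return 1 if inv % 2 == 0 else -1

def det(a):
    """det(A) = sum over all permutations p of sgn(p)*a[0][p(0)]*...*a[n-1][p(n-1)]."""
    n = len(a)
    total = 0
    for p in permutations(range(n)):
        term = sgn(p)
        for i in range(n):
            term *= a[i][p[i]]
        total += term
    return total

print(det([[1, 2], [-1, 3]]))                  # matches a11*a22 - a12*a21 = 5
print(det([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # 0
```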


♣Exercises 2.1

1. Find the number of inversions in each of the following permutations of

{1, 2, 3, 4, 5}

(a) (3, 4, 1, 5, 2), (b) (4, 2, 5, 3, 1), (c) (5, 4, 3, 2, 1),

(d) (1, 2, 3, 4, 5), (e) (1, 3, 5, 4, 2), (f ) (2, 3, 5, 4, 1).

2. Evaluate the determinant:

(a) |  1  2 |    (b) | 6  4 |    (c) | −1   7 |
    | −1  3 |        | 3  2 |        | −8  −3 |

(d) | 1  −2  7 |    (e) |  8  2  −1 |    (f) | 1  0   3 |
    | 3   5  1 |        | −3  4  −6 |        | 4  0  −1 |
    | 4   3  8 |        |  1  7   2 |        | 2  8   6 |

3. Find all values of λ for which det(A) = 0.

        [ λ − 1  −2    ]           [ λ − 6  0    0    ]
(a) A = [ 1      λ − 4 ]   (b) A = [ 0      λ   −1    ]
                                   [ 0      4   λ − 4 ]


§2.2 Evaluating Determinants by Row Reduction

In this section we show that the determinant of a matrix can be evaluated by reducing the matrix to row-echelon form.

Theorem 2.1. If A is a square matrix that contains a row of zeros, then

det(A) = 0.

Proof. Let A = [aij] be an n × n matrix whose i-th row consists of zeros, so that aij = 0 for each j = 1, 2, ..., n. Then

det(A) = Σ_{α∈Sn} sgn(α) a1α(1) a2α(2) · · · aiα(i) · · · anα(n) = 0,

since every signed elementary product contains the factor aiα(i) = 0 coming from the zero row.

Definition 2.6. A square matrix A is said to be

(a) upper triangular if all the entries below the main diagonal are zeros,

(b) lower triangular if all the entries above the main diagonal are zeros,

(c) triangular if it is either upper or lower triangular.

Example 2.3. A 4 × 4 upper triangular and a 4 × 4 lower triangular matrix, respectively, are

[ a11  a12  a13  a14 ]      [ a11   0    0    0  ]
[  0   a22  a23  a24 ]      [ a21  a22   0    0  ]
[  0    0   a33  a34 ]      [ a31  a32  a33   0  ]
[  0    0    0   a44 ]      [ a41  a42  a43  a44 ]


3. then det(A0 ) = det(A). then det(A0 ) = k det(A). Evaluate det(A) where    0 1 5     3 −6 9  . Proof. (a) If A0 is the matrix obtained from A by multiplying a scalar k to a row. Example 2.4. Let A be a n × n matrix. det(A) = a11 a22 a33 a44 is the product of entries on the main diagonal. so det(kA) = k n det(A). then det(A0 ) = −det(A).We see that in either case. Theorem 2. Left to reader as exercises. (c) If A0 is the matrix obtained from A by adding a multiple of a row to another row. Theorem 2.2. then det(A) is the product of the entries on the main diagonal. we have the following theorem. In general.     2 6 1 58 . (b) If A0 is the matrix obtained from A by interchanging two rows. If A which is a n × n triangular matrix.

Solution. ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ 0 1 5 ¯ ¯ 3 −6 9 ¯ ¯ 1 −2 3 ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ det(A) = ¯¯ 3 −6 9 ¯ = −¯ 0 ¯ ¯ 1 5 ¯ = −3 ¯ 0 ¯ ¯ 1 5 ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ 2 6 1 ¯ ¯ 2 6 1 ¯ ¯ 2 6 1 ¯ ¯ ¯ ¯ ¯ ¯ 1 −2 3 ¯¯ ¯ ¯ ¯ = −3 ¯¯ 0 1 5 ¯¯ = (−3)(1)(1)(−55) = 165. ¯ ¯ ¯ ¯ ¯ 0 0 −55 ¯ Remark 2. then det(A) = 0. Example 2.1 and (c) of Theorem 2.3. By Theorem 2. The determinant of    2 7 8     3 2 4      4 14 16 is zero.5. if A is a square matrix which has two proportional rows.5. 59 .

     4 −3 2 0 1 5 60 .  4 2 3  .        1 −2 7 1 3 0      1 −2 0   2 −4 8      4. Evaluate the determinants of the given matrices by reducing the matrix to row-echelon form:      2 3 7   2 1 1      2. 3. 5. ¯ (d) ¯¯ 6 −2 4 ¯. Evaluate the following by inspection: ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ 1 0 0 0 ¯ ¯ 2 −40 17 ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ −9 −1 0 0 ¯ ¯ ¯ ¯ ¯ (a) ¯ 0 1 11 ¯ . ♣Exercises 2.  −2   7 −2 . ¯ ¯ ¯ ¯ ¯ ¯ ¯ 12 7 8 0 ¯ ¯ 0 0 3 ¯ ¯ ¯ ¯ ¯ ¯ 4 5 7 2 ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ 1 2 3 ¯ ¯ 3 −1 2 ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ (c) ¯¯ 3 7 6 ¯.   0 0 −3  . ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ 1 2 3 ¯ ¯ 1 7 3 ¯ ♣.2 1.   −3 5 1  . (b) ¯ ¯.

 0 0 1 0 1 . Assume det   d e f  = 5.     3 6 9 3 2 1 3 1          −1 0 1 0   1 0 1 1      6.   .  (b) det   2d 2e 2f .      a b c −g −h −i      a+d b+e c+f   a b c      (c) det   d e f  . (d) det  d − 3a e − 3b f − 3c         g h i 2g 2h 2i 61 .      1 3 2 −1   0 2 1 0      −1 −2 −2 1 0 1 2 3     1 1 1  1 3 1 5 3  1    2 2  2  −2 −7 0 −4 2       −1 1 0  1    2 2  2   8.  .  . 7. 9. Find    g h i      d e f   −a −b −c      (a) det   g h i .  2 1 1     3 3 3 0       0 0 2 1 1  1 1   3 1 3 0   0 0 0 1 1    a b c    10.

then (a) If A0 is the matrix obtained from A by multiplying a scalar k to a column. By Theorem 2. then det(A0 ) = det(A). then det(A0 ) = k det(A). It follows from the fact that A and At actually have the same signed elementary products. Proof.4. Use row reduction to show that ¯ ¯ ¯ ¯ ¯ 1 1 1 ¯ ¯ ¯ ¯ ¯ ¯ a b c ¯ = (b − a)(c − a)(c − b). Theorem 2. (b) If A0 is the matrix obtained from A by interchanging two columns.6.4. then det(At ) = det(A). Remark 2. ¯ ¯ ¯ ¯ ¯ 2 2 2 ¯ ¯ a b c ¯ §2. If A is a square matrix.3 Properties of the Determinant Function In this section we develop some of the fundamental properties of the the determinant function. Theorem 2. 62 .11. (c) If A0 is the matrix obtained from A by adding a multiple of a column to another column.3 can be written as: if A is a n × n matrix. then det(A0 ) = −det(A).

. ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ 1 0 0 3 ¯ ¯ 1 0 0 0 ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ 2 7 0 6 ¯ ¯ 2 7 0 0 ¯ ¯ ¯ ¯ ¯ det(A) = ¯ ¯=¯ ¯ = (1)(7)(3)(−26) = −546.        1 + 0 4 + 1 7 + (−1) 1 4 7 0 1 −1 63 . Proof. 2. .5. n and aij = a0ij = a00ij for all i 6= r. Example 2. Let A = [aij ].Example 2. ¯ ¯ ¯ ¯ ¯ 0 6 3 0 ¯ ¯ 0 6 3 0 ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ 7 3 1 −5 ¯ ¯ 7 3 1 −26 ¯ Theorem 2.        1 7 5   1 7 5   1 7 5        det   2 0 3  = det  2 0 3  + det  2 0 3     . . Then det(A00 ) = det(A) + det(A0 ). Compute the determinant   1 0 0 3      2 7 0 6    A=     0 6 3 0    7 3 1 −5 Solution. A0 = [a0ij ] and A00 = [a00ij ] be n × n matrices such that a00rj = arj + a0rj for j = 1. .6. Left to the reader as an exercise.7.

so we can find elementary matrices E1 . For example. so det(A) 6= 0.7. Proof. Then R was obtained by a finite number of elementary row operations on A. Ek such that Ek · · · E2 E1 A = R ⇒ A = E1−1 E2−1 · · · Ek−1 R. Let R be the row echelon form of A. Proof. det(A + B) 6= det(A) + det(B). then det(AB) = det(A)det(B). then I = AA−1 and hence 1 = det(I) = det(AA−1 ) = det(A)det(A−1 ).6. In general.        3 1   −1 3   2 4  det   + det   6= det  . by Theorem 1. E2 . Omitted! Remark 2. . We will show that A is row equivalent to I and then.7. Assume that det(A) 6= 0. A square matrix A is invertible if and only if det(A) 6= 0. . . If A and B are square matrices of the same size. 64 .Theorem 2. . A is invertible.12. 2 1 5 8 7 9 Theorem 2. If A is invertible.

3 1. 1 = det(I) = det(AA−1 ) = det(A)det(A−1 ). A=  −1 0 6 . Verify that det(AB) = det(A)det(B) when      2 1 0   1 −1 3      A= 3 4 0 .      0 0 2 5 0 1 65 . Therefore.  2 5   3 2 8 2. that is. R = I. det(R) 6= 0. det(A) Proof. A ∼ I. If A is invertible. det(A) ♣Exercises 2. B =  7   1 2 . Hence R must be I. 1 so det(A−1 ) = .Thus det(A) = det(E1−1 ) det(E2−1 ) · · · det(Ek−1 )det(R). Verify that det(A) = det(At ) for      1 2 7   1 −3    A= . Corollary. then 1 det(A−1 ) = . Since det(A) 6= 0. Since AA−1 = I. so each row of R does not entirely consist of zeros.

   1 0 0 0    2 −8 3 4 4. 66 . (c) det((2A)−1 ). explain why det(A) = 0 where   −3 4 7 −2      2 6 1 −3    A= .   g h i Find (a) det(3A). (b) det(2A−1 ). −2 k − 2   k 3 2 §2.3.4 Determinants By Cofactor Expansion In this section we consider a method for evaluating determinants that is useful for hand computations and important theoretically. (b) A =    3 1 6 . 5. Assume that det(A) = 5. where    a b c    A=   d e f . By inspection. For which value(s) of k does A fail to be invertible?      1 2 4   k − 3 −2    (a) A =   .

If A = [aij ] is a n × n square matrix. Example 2.8. which is called the cofactor expansion along the j-th column. so C11 = (−1) M11 = 16. then the minor of entry aij is denoted by Mij and is defined by the determinant of the (n − 1) × (n − 1) submatrix Aij which is obtained from A by deleting the i-th row and the j-th column. 4 8 Theorem 2. Let    3 1 −4    A=  2 5 6  .   1 4 8 Then    5 6  1+1 M11 =   = 40 − 24 = 16. which is called the cofactor expansion along the i-th row . then (a) det(A) = a1j C1j + a2j C2j + · · · + anj Cnj for each j. (b) det(A) = ai1 Ci1 + ai2 Ci2 + · · · + ain Cin for each i.7.8. If A = [aij ] is a n × n matrix. Omitted! 67 . The number (−1)i+j Mij is denoted by Cij and is called the cofactor of aij .Definition 2. Proof.

   2 4 1 5    3 7 5 3 Solution. From Theorem 2. Evaluate det(A) where   3 5 −2 6      1 2 −1 1    A= .8.8. since it is the determinant of the matrix obtained from A by replacing the i-th row by the j-th row. 68 . then (a) a1i C1j + a2i C2j + · · · + ani Cnj = 0. (b) aj1 Ci1 + aj2 Ci2 + · · · + ajn Cin = 0.9. provided i 6= j. ¯ ¯ ¯ ¯ ¯ ¯ ¯ 9 3 ¯ ¯ 0 9 3 ¯ Remark 2.Example 2. ¯ ¯ ¯ ¯ ¯ ¯ ¯ 3 5 −2 6 ¯ ¯ ¯ ¯ ¯ ¯ −1 1 3 ¯ ¯ ¯ ¯ ¯ ¯ 1 2 −1 1 ¯ ¯ ¯ ¯ ¯ det(A) = ¯ ¯ = − ¯¯ 0 3 3 ¯ ¯ ¯ ¯ ¯ ¯ ¯ 2 4 1 5 ¯ ¯ ¯ ¯ ¯ ¯ 1 8 0 ¯ ¯ ¯ ¯ 3 7 5 3 ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ −1 1 3 ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ 3 3 ¯¯ = − ¯¯ 0 3 3 ¯¯ = −(−1) ¯ ¯ = 9 − 27 = −18. since it is the determinant of the matrix obtained from A by replacing the j-th column by the i-th column. provided i 6= j. if A = [aij ] is a n × n matrix. which has the same two columns. which has the same two rows.

10.. . The transpose of this matrix   C C21 · · · Cn1  11     C12 C22 · · · Cn2     . then the matrix   C C12 · · · C1n  11     C21 C22 · · · C2n     . .   . ..   . Example 2. .    2 −4 0 then the matrix of cofactors from A is      C11 C12 C13   12 6 −16       C C C = 4 2 16   21 22 23        C31 C32 C33 12 −10 16 69 .. .. . If A = [aij ] is a n × n matrix and Cij is the cofactor of aij . .   .    Cn1 Cn2 · · · Cnn is called the matrix of cofactors from A. If    3 2 1    A=  1 6 3 .8.Definition 2.   .    C1n C2n · · · Cnn is called the adjoint of A and is denoted by adj(A). .

  −16 16 16 Theorem 2.8 and Remark 2. ... . det(A) det(A) 70 .8.and the adjoint of A is    12 4 12     6 2 −10   . det(A) Proof... If A is an invertible matrix. ..  C1n C2n · · · Cjn · · · Cnn    an1 an2 · · · ann   det(A) 0 ··· 0      0 det(A) · · · 0    =  .. .9.. . . ...    . .. Let A = [aij ] be a n × n invertible matrix. . . Then    a11 a12 · · · a1n       a   21 a22 · · · a2n   C11 C21 · · · Cj1 · · · Cn1     ..     ..    . .       . ..  C12 C22 · · · Cj2 · · · Cn2  A adj(A) =      .   ai1 ai2 · · · ain   . . . . . Thus A adj(A) = det(A)I. so · ¸ 1 1 A adj(A) = I so A−1 = adj(A). .  = det(A)I.    0 0 · · · det(A) by Theorem 2. . then 1 A−1 = adj(A).

.. 2. n. .Theorem 2. . . Note that det(Aj ) = b1 C1j + b2 C2j + · · · + bn Cnj by Theorem 2. . Proof. so AX = B implies X = A−1 B which is the unique solution of AX = B. .. . 71 . ..10 (Cramer’s Rule).. If AX = B is a system of linear equations in n unknowns such that det(A) 6= 0.8. . . 2. . .    an1 an2 · · · anj−1 bn anj+1 · · · ann for each j = 1. then the system has a unique solution det(Aj ) xj = for j = 1. Since det(A) 6= 0. . . .     . .. A−1 exists. . . . n det(A) where   a11 a12 · · · a1j−1 b1 a1j+1 · · · a1n      a21 a22 · · · a2j−1 b2 a2j+1 · · · a2n    Aj =  .

... . . . . . ... (b) Find all the cofactors. . .    3 −1 4 (a) Find all the minors..      b1 C1n + b2 C2n + · · · + bn Cnn So det(Aj ) xj = b1 C1j + b2 C2j + · · · + bn Cnj det(A) = det(A) for each j = 1.       . 2. 72 ... .  det(A)  . Let    1 6 −3    A=  −2 7 1 .9. ♣Exercises 2. n.4 1. .     C1n C2n · · · Cnn bn   bC + b2 C21 + · · · + bn Cn1  1 11    1   b1 C12 + b2 C22 + · · · + bn Cn2   =  .  . By Theorem 2.. . det(A)  . 1 X = A−1 B = adj(A)B  det(A)   C11 C21 · · · Cn1 b1       1  C12 C22 · · · Cn2   b2      =  .

y3 ) are collinear if and only if ¯ ¯ ¯ ¯ ¯ x1 y1 1 ¯ ¯ ¯ ¯ ¯ ¯ x y 1 ¯ = 0. y1 ). y2 ) and (x3 . Prove that the equation of the line through two distinct points (a1 . (c) Evaluate the determinant of A by a cofactor expansion along (i) the first row. (iv) the second column. ¯ 2 2 ¯ ¯ ¯ ¯ ¯ ¯ x3 y3 1 ¯ 73 . 2. 3. (iii) the second row. ¯ 1 1 ¯ ¯ ¯ ¯ ¯ ¯ a2 b2 1 ¯ 4. b1 ) and (a2 . Prove that three points (x1 . (x2 . (vi) the third column. Use Cramer’s Rule to solve 4x + 5y = 2 11x + y + 2z = 3 x + 5y + 2z = 1. b2 ) can be written ¯ ¯ ¯ ¯ ¯ x y 1 ¯ ¯ ¯ ¯ ¯ ¯ a b 1 ¯ = 0. (v) the third row. (d) Find (i) adj(A) and (ii) A−1 . (ii) the first column.

then all the entries in A−1 are integers. then A−1 is upper triangular. 74 . (b) solve by Gauss-Jordan elimination. 7x + 3y − 5z + 8w = −3 x + y + z + 2w = 3 (a) solve by Cramer’s Rule. 6.5. For the system 4x + y + z + w = 6 3x + 7y − z + w = 1 . 7. Prove that if A is an invertible upper triangular matrix. Prove that if det(A) = 1 and all the entries in A are integers.

then an ordered n-tuple is a sequence of n real numbers (a1 . . . a2 . The set of all ordered n-tuples is called n-space and is denoted by Rn . 75 .1. . . un ) and v=(v1 .1 Euclidean n-Space In this section we will make the idea of using pairs of numbers to locate points in the plane and triples of numbers to locate points in the sphere to extend beyond 3-space.2. v2 . Let u=(u1 . . 2. it is usual to write R rather than R1 . When n = 2 or 3. . . . If n is a positive integer. Definition 3. . Chapter Three General Vector Spaces §3. . which is the set of all real numbers. . . un + vn ). . u2 + v2 . The elements of Rn are called vectors and the real numbers are called scalars. . it is usual use the term ordered pair and ordered triple. . an ). . Then (a) u and v are called equal if ui = vi for each i = 1. . When n = 1. . . u2 . n. vn ) be two vectors in Rn . . Definition 3. (b) The sum u+v is defined by u+v = (u1 + v1 .

(d) The zero vector in Rn is the vector 0 = (0. . Left to reader as exercises. . un ). 0). αu2 . . 0. (c) If α is a scalar. (e) The negative (or additive inverse) of u is defined by −u = (−u1 . αun ). . Proof. . w2 . (h) 1u=u. Theorem 3. 76 . −u2 . (f) α(u+v) = αu + αv. v=(v1 . . . . . . . wn ) be vectors in Rn and α and β be scalars. . . . then scalar multiple αu is defined by αu = (αu1 . . (b) u+(v+w)= (u+v)+w. . that is. u − u = 0. −un ). . . . (c) u+0=u=0+u. Let u=(u1 . Then (a) u+v=v+u. . v2 . . vn ) and w=(w1 . . (g) (α + β)u = αu + βu. (d) u + (−u) = 0. .1. . (e) α(βu) = (αβ)u. u2 .

un ). . . . v·v=0 if and only if v=0.Definition 3. . vn ) and w=(w1 . . Then (a) u· v=v· u. w2 . If u=(u1 . v2 .1. v2 . . (d) v·v≥0. . (b) (u+v)· w=u· w+v· w.3. un ) in Rn is defined by q 1 kuk = (u · u) 2 = u21 + u22 + · · · + u2n . v) = ku − vk = (u1 − v1 )2 + (u2 − v2 )2 + · · · + (u2n − vn ). . Proof. .2. The Euclidean distance between u=(u1 . wn ) be vectors in Rn and α be a scalar. . . . . u2 . v=(v1 . Definition 3. The Euclidean norm (or Euclidean length) of a vector u=(u1 . . . Remark 3. un ) and v=(v1 . . . un ) and v=(v1 . . . . . . Theorem 3. (c) (αu) · v = α(u · v). 77 . . then the Euclidean inner product u·v is defined by u · v = u1 v 1 + u2 v 2 + · · · + u n v n . . . It is common to refer to the n-space Rn with the operations of addition. . . vn ) are two vectors in Rn . v2 . u2 . . u2 . u2 . Left to reader as exercises. . Let u=(u1 . . vn ) is defined by p d(u.4. . scalar multiplication and inner product as Euclidean n-space. .

3. −1. −1). (g) find the vector x that satisfies u − v + x = 7x + w. Find scalars α1 . (b) u = (3. 0. 1. 2.1 1. α2 . 6. 3). 3. 4) and u4 = (6. (c) − w + v. (b) v = (1. 7. Let u = (2. 1. 4. (b) u · v = 14 ku + vk2 − 14 ku − vk2 . 3). (b) 7v + 3w. u3 = (7. 0. 2). For vectors in Rn . 9). α3 and α4 such that α1 u1 + α2 u2 + α3 u3 + α4 u4 = (0. 0). 3). Find the Euclidean inner product u · v when (a) u = (−1. −1. 4. Compute the Euclidean norm of v when (a) v = (4. 1. establish the identities: (a) ku + vk2 + ku − vk2 = 2kuk2 + 2kvk2 . 5. −1). 1). 2. (c) v = (2. v = (−1. v = (7. 0. 2). −1) and w = (6. 7. 0. v = (5. (f ) 2v − (u + w). 3. 3). 0. 4. Find (a) u − v. ♣Exercises 3. 3. (d) 3(u − 7v). 78 . −3). 5. Let u1 = (−1. 2). 2. u2 = (2. (e) − 3v − 8w.

A vector space over the field K is a set V whose elements are called vectors. respectively. (7) α(u+v) = αu + αv. x (c) 0. called the zero vector . y ∈ K.5.2 General Vector Spaces In this section we generalize the concept of a vector still further. We will say that a set of real (or complex) numbers K is a field if for any x. provided x 6= 0. (6) αu ∈ V . (5) There exists −u such that u+(−u)=0=(−u) + u.§3. together with two operations. (4) There exists 0 ∈ V . (2) u+v=v+u. called addition and scalar multiplication. 1 (b) x−1 = ∈ K. (8) (α + β)u = αu + βu. w ∈ V and α. 1 ∈ K. which satisfies the following axioms: for all u. xy ∈ K. (3) u+(v+w)= (u+v)+w. (a) x ± y. such that u+0=u=0+u. (10) 1u=u. 79 . Definition 3. v. (9) α(βu) = (αβ)u. + : V × V → V and · : K × V → V . (1) u + v ∈ V . β ∈ K.

80 . (b) α0 + α0 = α(0 + 0) = α0 ⇒ α0 + α0 = α0 α0 + α0 + (−α0) = α0 + (−α0) = 0 ⇒ k0 + 0 = 0 ⇒ α0 = 0. (c) (−1)u = −u. Proof. (a) 0u + 0u = (0 + 0)u = 0u ⇒ 0u + 0u = 0u 0u + 0u + (−0u) = 0u + (−0u) = 0 ⇒ 0u + 0 = 0 ⇒ 0u = 0. If K = R is the set of all real numbers. The elements of K are called scalars. V is called a complex vector space. If V is a vector space and u ∈ V and α is a scalar. So (−1)u must be −u.3. (b) α0 = 0. (d) If αu = 0. and if K = C is the set of all complex numbers. then (a) 0u = 0. then α = 0 or u = 0. (c) u + (−1)u = 1u + (−1)u = (1 + (−1))u = 0u = 0. V is called a real vector space. Theorem 3.

0. y. ♣Exercises 3. 2. z) = (0. z) + (x0 . y. If α 6= 0. z + z 0 ) and α(x. 3. 1. z 0 ) = (x + y 0 . list all axioms that fail to hold. z) with the operations (x. The set of all triples of real numbers (x. The set of all triples of real numbers (x. (d) Let αu = 0. Let u 6= 0. Determine which sets are vector spaces under the given oper- ations. then α−1 (αu) = α−1 0 = 0 ⇒ 0 = α−1 (αu) = (α−1 α)u = 1u = u. z).2 ♣. y. y + y 0 . The set of all 2 × 2 matrices of the form    a 1    1 b 81 . Then 0 = αu ⇒ α−1 0 = α−1 (αu) ⇒ 0 = u which is a contradiction. y 0 . 0). y 0 . z) + (x0 . z) with the operations (x. y. y + y 0 . z) = (αx. y. z 0 ) = (x + y 0 . Suppose α 6= 0. A set of objects is given together with operations of addition and scalar multiplication. For those that are not. z + z 0 ) and α(x. y. y.

4. 5. 6. The operations are moon+moon=moon and k(moon)=moon. The set whose only element is the moon.with matrix addition and scalar multiplication. The set of all 2 × 2 matrices of the form    a 0    0 b with matrix addition and scalar multiplication. The set of all 2 × 2 matrices of the form    a a+b    a+b b with matrix addition and scalar multiplication. 82 . where k is a real number.

6. Proof. Each vector v = (v1 . A subset W of a vector space V is called a subspace of V if W itself is a vector under the addition and scalar multiplication defined on V. then W 83 .    vn Let A be a m×n matrix. Consider the real vector space Rn .1. vn ) ∈ Rn is considered as a n × 1 matrix   v1      v2    v= .. Left to the reader as an exercise. v ∈ W . . v2 . (b) αu ∈ W for all u ∈ W and any scalar α. . . .3 Subspaces It is possible for one vector space to be contained within a larger vector space. called solution vectors. of the homogeneous linear system Ax = 0. Theorem 3.4.    . Example 3. If W = {s ∈ Rn |As = 0} is the set of all solutions.§3. Definition 3. A nonempty subset W of a vector space V is a subspace of V if and only if (a) u + v ∈ W for all u. .

vr if there exist scalars α1 . Then the set of all linear combinations of v1 . Theorem 3. . let u. . . then A(s + s0 ) = As + As0 = 0 + 0 = 0. s0 ∈ W and α is a scalar. Let S = {v1 . . A vector w is called a linear combination of the vectors v1 . . . v ∈ Span S and let α be 84 . vr . vr }. . . v2 . vr .8. v2 . . .5.is subspace of Rn . called the solution space of the system Ax = 0 since if s. . . denoted by Span S (or Span {v1 . . . . . . v2 . . vr are vectors in a vector space V and every vector in V can be written as a linear combination of v1 . In addition. . v2 . . Proof. . . . . . v2 . then we say that V is spanned by {v1 . vr } be a subset of a vector space V . To show that Span S is a subspace. v2 . αr such that w = α1 v1 + α2 v2 + · · · + αr vr . . . . vr . . v2 . . . . . . . Definition 3. Definition 3.7. called the linear space spanned by S or the space spanned by S. is a subspace of V . vr }). . v2 . A(αs) = α(As) = α0 = 0. α2 . Span S is the smallest subspace of V which contains v1 . If v1 .

. r. To show that Span S is the smallest. α2 . . . . . . Thus u + v. 2. . αr and β1 . . αu ∈ Span S and hence Span S is a subspace. vr . . . . . let W be any subspace of V . . . 2. β2 .any scalar. vi ∈ lin(S) for each i = 1. . . vr . . . W must contain all linear combinations α1 v1 + α2 v2 + · · · + αr vr of v1 . Thus Span S ⊆ W . βr such that u = α1 v1 + α2 v2 + · · · + αr vr v = β1 v1 + β2 v2 + · · · + βr vr . Then there exist scalars α1 . . . Since vi = 0v1 + 0v2 + · · · + 1vi + · · · + 0vr for each i = 1. so u + v = (α1 + β1 )v1 + (α2 + β2 )v2 + · · · + (αr + βr )vr αu = (αα1 )v1 + (αα2 )v2 + · · · + (ααr )vr . Since W is a subspace. r. . . which contains v1 . v2 . 85 . . v2 . .

b.3 1. Express the following as linear combinations of u = (2. Determine which of the following are subspaces of R3 : (a) all vectors of the form (a. 2. −1. c. 0). (c) (0. 5): (a) (5. v = (1. 6). and d are integers. 3) and w = (3. 2. (b) all matrices of the form    a b    c d where a + d = 0 (c) all 2 × 2 matrices A such that At = A. ♣Exercises 3. 3). 2. where b = a + c. 0. 0. 0. 1). c). b. 86 . c). (d) all vectors of the form (a. b. Determine which of the following are subspaces of M22 which is the vector space of all 2 × 2 matrices with matrix addition and scalar multiplication: (a) all matrices of the form    a b    c d where a. 1. 0). (d) (2. (b) (2. 5). (d) all 2 × 2 matrices A such that det(A) = 0. 4). 1. 9. (c) all vectors of the form (a. (b) all vectors of the form (a. 3. where b = a + c + 1.

that is. 3). α1 = 0. v2 = (2. −1). −1). 1. .4 Linear Independence Definition 3. v2 = (5. −1. §3. 87 . 1. . vr } is called a linearly independent set if the vector equation α1 v1 + α2 v2 + · · · + αr vr = 0 has only the trivial solution. the equation has other solutions. 1. 6. v2 = (4. −3. A set of vectors S = {v1 . v2 = (2. (b) v1 = (2. v4 = (1. 3. −1. 2. 8). it is a linearly depen- dent set. Example 3. v3 = (3. 2). 3). . 2. αr = 0. 9). 4). . 1). . v2 = (1. 4. (d) v1 = (1. v3 = (1.2. 1) form a linearly dependent or linearly independent set.4. 5). α2 = 0. 0). . −2. 0. Determine whether the vectors v1 = (1.9. v3 = (3. v3 = (5. . v4 = (6. Otherwise. −2. 2. 0). v3 = (8. 1). 4). 3). (c) v1 = (3. . 3). 3. 4. v2 . In each part determine whether the given vectors span R3 : (a) v1 = (1.

Suppose α1 v1 + α2 v2 + α3 v3 = (0. 88 . 6. 1 1 α1 = t. 0). −2α1 + 6α2 + 2α3 . so we have a system of linear equations α1 + 5α2 + 3α3 = 0 −2α1 + 6α2 + 2α3 = 0 3α1 − α2 + α3 = 0. −2. 0. 0. Then (0. α3 = t 2 2 for t ∈ R. Solving this system     1  1 5 3 | 0   1 0 | 0  2      −2 6 2 | 0 → 0 1 1 | 0     2      3 −1 1 | 0 0 0 0 | 0 Thus 1 α1 + α 2 3 = 0 1 α2 + α 2 3 = 0 0 · α3 = 0 Therefore. 1) = (0. 0. 0) = (α1 + 5α2 + 3α3 . 0) = α1 (1. 3) + α2 (5. 2. −1) + α3 (3.Solution. 3α1 − α2 + α3 ). α2 = t.

αi+1 . . so α1 v1 + α2 v2 + · · · + αi−1 vi−1 − vi + αi+1 vi+1 + · · · + αr vr = 0. . then α1 α2 αi−1 αi+1 αr vi = − v1 − v2 − · · · − vi−1 − vi+1 − · · · − vr . (b) linearly independent iff no vector in S is expressed as a linear combi- nation of other vectors in S. . (⇐) Suppose vi is a linear combination of S \{vi }. Then there exist scalars α1 . αr such that vi = α1 v1 + α2 v2 + · · · + αi−1 vi−1 + αi+1 vi+1 + · · · + αr vr . . . A nonempty set of vectors S is (a) linearly dependent iff at least one of the vectors in S is expressed as a linear combination of other vectors in S. . Thus S is linearly dependent. αr . (⇒) Suppose S is linearly dependent. . vr }. . . . Then there exist scalars α1 . . αi αi αi αi αi So vi is a linear combination of S \ {vi }. . If αi 6= 0.Theorem 3.6. Proof. . such that α1 v1 + α2 v2 + · · · + αr vr = 0. 89 . . v2 . α2 . α2 . not all zero. (a) Let S = {v1 . αi−1 . . .

. Proof. vr = (vr1 . Proof. Then we have the linear system v11 α1 + v21 α2 + · · · + vr1 αr = 0 v12 α1 + v22 α2 + · · · + vr2 αr = 0 .. then S is linearly dependent. . . (a) If a set contains the zero vector. . . Let v1 = (v11 . Let S = {v1 . v2 ..7. . . . vr2 . v22 . If r > n. Suppose α1 v1 + α2 v2 + · · · + αr vr = 0. (b) A set with exactly two vectors is linearly dependent if and only if one is a scalar multiple of the other. vr } be a set of vectors in Rn . then it is linearly dependent. v2n ) . . v12 . . .. . Theorem 3. . . Theorem 3. .8. vrn ). . . v1n α1 + v2n α2 + · · · + vrn αr = 0 90 .. v1n ) v2 = (v21 . . . .. . Left to the reader as exercises. . . (b) is left to the reader as an exercise. .

Since this linear system has more unknowns than equations (r > n), by Theorem 1.1 it has a nontrivial solution α1, α2, . . . , αr. Thus S is linearly dependent.

♣Exercises 3.4
1. Which of the following sets of vectors are linearly dependent?
(a) (2, −1, 4), (3, 6, 2), (2, 10, −4);
(b) (3, 1, 1), (2, −1, 5), (4, 0, −3);
(c) (6, 0, −1), (1, 1, 4);
(d) (1, 3, 3), (0, 1, 4), (5, 6, 3), (7, 2, −1).
2. For which real values of λ do the following vectors form a linearly dependent set in R3?
v1 = (λ, −1/2, −1/2), v2 = (−1/2, λ, −1/2), v3 = (−1/2, −1/2, λ).
3. If {v1, v2, v3} is a linearly dependent set of vectors, show that {v1, v2, v3, v4} is also a linearly dependent set for any other vector v4.
4. If S = {v1, v2, . . . , vn} is a linearly independent set of vectors, show that every subset of S with one or more vectors is also a linearly independent set.
5. For any vectors u, v and w, show that {u − v, v − w, w − u} is linearly dependent.

§3.5 Basis and Dimension

Definition 3.9. Let S = {v1, v2, . . . , vr} be a finite set of vectors in a vector space V. Then S is called a basis for V if
(i) S is linearly independent;
(ii) S spans V.

Example 3.3. Let
e1 = (1, 0, 0, . . . , 0), e2 = (0, 1, 0, . . . , 0), . . . , en = (0, 0, . . . , 0, 1)
be vectors in Rn. It is easy to see that S = {e1, e2, . . . , en} is linearly independent and spans Rn, so S is a basis for Rn. This basis is called the standard basis for Rn.

Definition 3.10. A vector space V is said to be finite-dimensional if either V = {0} or V has a basis with a finite number of vectors. Otherwise, that is, if V ≠ {0} and V has no such basis, V is called infinite-dimensional.

Theorem 3.9. If S = {v1, v2, . . . , vn} is a basis for a vector space V, then every set with more than n vectors is linearly dependent.

Proof. Let S′ = {w1, w2, . . . , wm} be a set of m vectors in V with m > n.

Since S spans V, each wi is a linear combination of the vectors of S; so we have

w1 = α11 v1 + α21 v2 + · · · + αn1 vn
w2 = α12 v1 + α22 v2 + · · · + αn2 vn
. . .
wm = α1m v1 + α2m v2 + · · · + αnm vn.

Suppose that α1 w1 + α2 w2 + · · · + αm wm = 0. Then we have

(α1 α11 + α2 α12 + · · · + αm α1m)v1 + (α1 α21 + α2 α22 + · · · + αm α2m)v2 + · · · + (α1 αn1 + α2 αn2 + · · · + αm αnm)vn = 0.

Since S is linearly independent, this holds exactly when

α1 α11 + α2 α12 + · · · + αm α1m = 0
α1 α21 + α2 α22 + · · · + αm α2m = 0
. . .
α1 αn1 + α2 αn2 + · · · + αm αnm = 0.

Since this linear system has more unknowns than equations (∵ m > n), by Theorem 1.1 it has a nontrivial solution. So S′ = {w1, w2, . . . , wm} is linearly dependent.

Theorem 3.10. Any two bases for a finite-dimensional vector space have the same number of vectors.

Proof. Let S = {v1, v2, . . . , vn} and S′ = {w1, w2, . . . , wm} be bases for a finite-dimensional vector space V. Since S is a basis and S′ is linearly independent, by Theorem 3.9, m ≤ n. On the other hand, since S′ is a basis and S is linearly independent, by Theorem 3.9, n ≤ m. Thus n = m.

Definition 3.11. The dimension of a finite-dimensional vector space V is the number of vectors in a basis for V and is denoted by dim(V). We define the zero vector space to have dimension zero; that is, if V = {0}, define dim(V) = 0.

Example 3.4. Determine a basis for and the dimension of the solution space of the homogeneous system

 2x1 + 2x2 −  x3       +  x5 = 0
 −x1 −  x2 + 2x3 − 3x4 +  x5 = 0
  x1 +  x2 − 2x3       −  x5 = 0
              x3 +  x4 +  x5 = 0.

   1 1 −2 0 −1 0    0 0 1 1 1 0 Reducing this matrix to reduced row echelon form.Solution. x4 = 0. The augmented matrix for the system is   2 2 −1 0 1 0      −1 −1 2 −3 1 0     . x3 = −t.   1 1 0 0 1 0      0 0 1 0 1 0     . x5 = t 95 . x4 = 0 Solving for the leading variables. x2 = s. x1 = −x2 − x5 x3 = −x5 x4 = 0 The solution set is given by x1 = −s − t.    0 0 0 1 0 0    0 0 0 0 0 0 The corresponding system of equations is x1 + x2 + x5 = 0 x3 + x5 = 0 .

where s and t are arbitrary values. Thus the solution space is
S = {x = (x1, x2, x3, x4, x5) | x1 = −s − t, x2 = s, x3 = −t, x4 = 0, x5 = t}.
Since

[ x1 ]   [ −s − t ]   [ −s ]   [ −t ]     [ −1 ]     [ −1 ]
[ x2 ]   [    s   ]   [  s ]   [  0 ]     [  1 ]     [  0 ]
[ x3 ] = [   −t   ] = [  0 ] + [ −t ] = s [  0 ] + t [ −1 ],
[ x4 ]   [    0   ]   [  0 ]   [  0 ]     [  0 ]     [  0 ]
[ x5 ]   [    t   ]   [  0 ]   [  t ]     [  0 ]     [  1 ]

the vectors
v1 = (−1, 1, 0, 0, 0)t, v2 = (−1, 0, −1, 0, 1)t
span the solution space, and we see that they are linearly independent. Therefore, {v1, v2} is a basis for S and dim(S) = 2.

Theorem 3.11. (a) If S = {v1, v2, . . . , vn} spans an n-dimensional vector space V, then S is a basis for V.
(b) If S = {v1, v2, . . . , vn} is linearly independent in an n-dimensional vector space V, then S is a basis for V.
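The basis found in Example 3.4 can be checked directly: each vector must satisfy every equation of the homogeneous system. A small sketch (coefficient matrix and vectors taken from the example):

```python
A = [[2, 2, -1, 0, 1],
     [-1, -1, 2, -3, 1],
     [1, 1, -2, 0, -1],
     [0, 0, 1, 1, 1]]
v1 = [-1, 1, 0, 0, 0]   # coefficient vector of the parameter s
v2 = [-1, 0, -1, 0, 1]  # coefficient vector of the parameter t

def is_solution(A, x):
    """True when Ax = 0, i.e. x lies in the solution space."""
    return all(sum(a * xi for a, xi in zip(row, x)) == 0 for row in A)

ok = is_solution(A, v1) and is_solution(A, v2)
```

Neither vector is a scalar multiple of the other, so together they form a basis of the 2-dimensional solution space.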

(c) If S = {v1, v2, . . . , vr} is linearly independent in an n-dimensional vector space V and r < n, then there exist vectors vr+1, vr+2, . . . , vn ∈ V such that S ∪ {vr+1, vr+2, . . . , vn} is a basis for V.

Proof. Left to the reader as exercises.

Example 3.5. Show that {(−3, 7), (5, 5)} is a basis for R2.

Solution. By (b) of Theorem 3.11, it is enough to show that {(−3, 7), (5, 5)} is linearly independent. Suppose that α(−3, 7) + β(5, 5) = (0, 0). Then

−3α + 5β = 0
 7α + 5β = 0.

Subtracting, 10α = 0, so α = 0 and then β = 0. Thus {(−3, 7), (5, 5)} is linearly independent.

♣Exercises 3.5
1. Which of the following sets of vectors are bases for R2?
(a) (2, 1), (3, 0);
(b) (4, 1), (−7, −8);
(c) (0, 0), (1, 3);
(d) (3, 9), (−4, −12).
2. Which of the following sets of vectors are bases for R3?
(a) (1, 0, 0), (2, 2, 0), (3, 3, 3);
(b) (3, 1, −4), (2, 5, 6), (1, 4, 8);
(c) (2, −3, 1), (4, 1, 1), (0, −7, 1);
(d) (1, 6, 4), (2, 4, −1), (−1, 2, 5).
3. Determine the dimension of and a basis for the solution space of the system:
(a) x1 + x2 − x3 = 0, −2x1 − x2 + 2x3 = 0, −x1 + x3 = 0;
(b) 3x1 + x2 + x3 + x4 = 0, 5x1 − x2 + x3 − x4 = 0.

§3.6 Row Space and Column Space and Rank

Definition 3.13. For an m × n matrix

    [ a11 a12 · · · a1n ]
A = [ a21 a22 · · · a2n ]
    [ . . .             ]
    [ am1 am2 · · · amn ]

the vectors
r1 = (a11, a12, . . . , a1n), r2 = (a21, a22, . . . , a2n), . . . , rm = (am1, am2, . . . , amn)
are called the row vectors of A, and the subspace of Rn spanned by the row vectors is called the row space of A. The vectors

     [ a11 ]        [ a12 ]             [ a1n ]
c1 = [ a21 ],  c2 = [ a22 ],  . . . , cn = [ a2n ]
     [ . . ]        [ . . ]             [ . . ]
     [ am1 ]        [ am2 ]             [ amn ]

are called the column vectors of A, and the subspace of Rm spanned by the column vectors is called the column space of A.

Theorem 3.12. Elementary row operations do not change the row space of a matrix.

Theorem 3.13. The nonzero row vectors in a row-echelon form of a matrix A form a basis for the row space of A.

Example 3.6. Find a basis for the space spanned by the vectors
v1 = (1, −2, 0, 0, 3), v2 = (2, −5, −3, −2, 6), v3 = (0, 5, 15, 10, 0), v4 = (2, 6, 18, 8, 6).

Solution. The space spanned by v1, v2, v3, v4 is the row space of the matrix

[ 1 −2  0  0 3 ]
[ 2 −5 −3 −2 6 ].
[ 0  5 15 10 0 ]
[ 2  6 18  8 6 ]

This matrix is reduced to the row echelon form

[ 1 −2 0 0 3 ]
[ 0  1 3 2 0 ].
[ 0  0 1 1 0 ]
[ 0  0 0 0 0 ]

Thus the nonzero row vectors
r1 = (1, −2, 0, 0, 3), r2 = (0, 1, 3, 2, 0), r3 = (0, 0, 1, 1, 0)
form a basis for the space spanned by the vectors v1, v2, v3, v4.
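The same computation can be automated: row-reduce and keep the nonzero rows, which by Theorems 3.12 and 3.13 form a basis for the row space (the fully reduced form gives a different, but equally valid, basis). A sketch with a hypothetical helper:

```python
from fractions import Fraction

def nonzero_rref_rows(rows):
    """Row-reduce and return the nonzero rows: a basis for the row space."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return [row for row in m if any(row)]

basis = nonzero_rref_rows([[1, -2, 0, 0, 3],
                           [2, -5, -3, -2, 6],
                           [0, 5, 15, 10, 0],
                           [2, 6, 18, 8, 6]])
# Three nonzero rows survive, so the spanned space has dimension 3,
# matching the three basis vectors found by hand above.
```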

Example 3.7. Find a basis for the column space of the matrix

    [ 1 0 1  1 ]
A = [ 3 2 5  1 ].
    [ 0 4 4 −4 ]

Solution. Transposing A,

     [ 1 3  0 ]
At = [ 0 2  4 ]
     [ 1 5  4 ]
     [ 1 1 −4 ]

and reducing to the row echelon form

[ 1 3 0 ]
[ 0 1 2 ].
[ 0 0 0 ]
[ 0 0 0 ]

Thus the nonzero vectors

     [ 1 ]        [ 0 ]
c1 = [ 3 ],  c2 = [ 1 ]
     [ 0 ]        [ 2 ]

form a basis for the column space of A.
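The transpose trick of Example 3.7 is easy to automate: row-reduce At and read the surviving rows back as column vectors of A. A sketch (fully reduced form, so the resulting basis differs from the hand computation but spans the same space):

```python
from fractions import Fraction

def rref_nonzero(rows):
    """Nonzero rows of the reduced row echelon form."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return [row for row in m if any(row)]

A = [[1, 0, 1, 1],
     [3, 2, 5, 1],
     [0, 4, 4, -4]]
At = [list(col) for col in zip(*A)]     # transpose
column_basis = rref_nonzero(At)         # each row here is a column vector of A
```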

Theorem 3.14. If A is any matrix, then the row space and column space of

A have the same dimension.


Definition 3.14. The dimension of the row (or column) space of a matrix A

is called the rank of A and is denoted by rank(A).

Theorem 3.15. Let A be an n × n matrix. Then the following statements are

equivalent.

(a) A is invertible.

(b) Ax = 0 has only the trivial solution.

(c) A is row equivalent to In .

(d) Ax = b is consistent for every n × 1 matrix b.

(e) det(A) 6= 0.

(f) rank(A) = n.

(g) The row vectors of A are linearly independent.

(h) The column vectors of A are linearly independent.

Proof. In Theorem 1.15, we showed that (a) ∼ (d) are equivalent, and in Theorem 2.7 that (a) and
(e) are equivalent. Now we will show (c) ⇒ (f ) ⇒ (g) ⇒

(h) ⇒ (c).

(c) ⇒ (f ). Since A is row equivalent to In which has n nonzero rows, the

row space of A has dimension n by Theorem 3.13. Hence rank(A) = n.

(f ) ⇒ (g). Since rank(A) = n, the row space of A has dimension n. Since

the n row vectors of A span the row space of A, by Theorem 3.11, the row

vectors of A are linearly independent.

(g) ⇒ (h). Assume that the row vectors of A are linearly independent.


Then the row space of A is n-dimensional. By Theorem 3.14, the column

space of A is n-dimensional. Since the column vectors of A span the column

space of A, the column vectors of A are linearly independent by Theorem 3.11.

(h) ⇒ (c). Assume that the column vectors of A are linearly independent.

Then the column space of A is n-dimensional. By Theorem 3.14, the row

space of A is n-dimensional. This means that the reduced row-echelon form of

A has n nonzero rows, that is, all rows are nonzero; so it should be the identity

matrix In . Hence A is row equivalent to In .

Theorem 3.16. A system of linear equations Ax = b is consistent if and

only if b is in the column space of A.

Proof. Let A be an m × n matrix and let

    [ x1 ]
x = [ x2 ].
    [ .. ]
    [ xn ]

Then Ax = b becomes

x1 c1 + x2 c2 + · · · + xn cn = b

where c1 , c2 . . . , cn are the column vectors of A. Thus b is a linear combination

of the column vectors of A; so it is in the column space of A.


Theorem 3.17. A system of linear equations Ax = b is consistent if and only if the rank of the coefficient matrix A is the same as the rank of the augmented matrix [A|b].

Proof. Note that the rank of A is the rank of the column space of A. By Theorem 3.16, Ax = b is consistent if and only if b is in the column space of A, if and only if b is a linear combination of the column vectors of A, if and only if the rank of the column space of A is the same as the rank of the matrix [A|b].

Theorem 3.18. If Ax = b is a consistent linear system of m equations in n unknowns, and if rank(A) = r, then the solution of the system contains n − r parameters. (That is, the solution is of the form (x1, x2, . . . , xn) such that r of x1, x2, . . . , xn are functions of the remaining n − r terms, which take arbitrary values.)

Proof. By Theorem 3.17, rank[A|b] = rank(A) = r; so the reduced row echelon form of the augmented matrix [A|b] has r nonzero rows. Since each of these r rows contains a leading 1, the corresponding leading variables can be expressed in terms of the remaining n − r unknowns.
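Theorem 3.17 gives a practical consistency test: compare rank(A) with rank([A|b]). A sketch (the `rank` helper is illustrative, counting pivots during row reduction):

```python
from fractions import Fraction

def rank(rows):
    """Rank = number of pivots found during row reduction."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[1, 2], [2, 4]]                       # rank 1
# b = (3, 6) lies in the column space, b = (3, 7) does not:
consistent = rank([row + [b] for row, b in zip(A, [3, 6])]) == rank(A)
inconsistent = rank([row + [b] for row, b in zip(A, [3, 7])]) == rank(A)
```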

♣Exercises 3.6
1. List the row vectors and column vectors of the following matrices, and find a basis for the row space, a basis for the column space, and the rank of the matrix:

(a) [ 1 −3 ]   (b) [ 1 2 −1 ]   (c) [ 1 1 2 1 ]
    [ 2 −6 ],      [ 2 4  6 ],      [ 1 0 1 2 ].
                   [ 0 0 −8 ]       [ 2 1 3 4 ]

2. Verify that the row space and column space have the same dimension:

(a) [ 2  0 2  2 ]   (b) [  2  3 5  7  4 ]
    [ 3 −4 −1 −9 ]      [ −1  2 1  0 −2 ].
    [ 1  2 3  7 ]       [  8 −3 1 −2  0 ]
    [ 4  1 5  9 ],

3. Find a basis for the space of R4 spanned by the given vectors:
(a) (1, 1, −4, −3), (2, 0, 2, −2), (2, −1, 3, 2);
(b) (−1, 1, −2, 0), (3, 3, 6, 0), (9, 0, 0, 3);
(c) (1, 1, 0, 0), (0, 0, 1, 1), (−2, 0, 2, 2), (0, −3, 0, 3).

Chapter Four
Inner Product Spaces

§4.1 Inner Products

Recall that a real vector space is a vector space over the field R of all real numbers.

Definition 4.1. An inner product on a real vector space V is a real-valued function <., .> : V × V → R such that for all u, v, w ∈ V and all α ∈ R,
(1) <u, v> = <v, u> (symmetry axiom);
(2) <u + v, w> = <u, w> + <v, w> (additivity axiom);
(3) <αu, v> = α<u, v> (homogeneity axiom);
(4) <v, v> ≥ 0 (positivity axiom), and <v, v> = 0 if and only if v = 0.
A real vector space with an inner product is called an inner product space.

Example 4.1. Let

    [ u1 ]       [ v1 ]
u = [ u2 ],  v = [ v2 ]
    [ .. ]       [ .. ]
    [ un ]       [ vn ]

be vectors in the Euclidean n-space Rn (expressed as n × 1 matrices), and let A be an invertible n × n matrix. If u · v = u1 v1 + u2 v2 + · · · + un vn is the Euclidean inner product on Rn, define <., .> on Rn by
<u, v> = Au · Av.
Then <., .> is an inner product on Rn (verify!), which is called the inner product on Rn generated by A.

Theorem 4.1. If u, v and w are vectors in an inner product space and α is a scalar, then
(a) <0, v> = <v, 0> = 0;
(b) <u, v + w> = <u, v> + <u, w>;
(c) <u, αv> = α<u, v>.

Proof. Left to the reader as exercises.
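Example 4.1 is easy to sanity-check numerically: for an invertible A, the function <u, v> = Au · Av satisfies the four axioms. A sketch for one particular A (the matrix and vectors are illustrative only):

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

A = [[1, 2], [-1, 3]]            # invertible: det = 5

def inner(u, v):
    """Inner product on R^2 generated by A: <u, v> = Au . Av."""
    return dot(matvec(A, u), matvec(A, v))

u, v, w = [2, -1], [-1, 3], [0, -5]
symmetric   = inner(u, v) == inner(v, u)
additive    = inner([a + b for a, b in zip(u, v)], w) == inner(u, w) + inner(v, w)
homogeneous = inner([-3 * x for x in u], v) == -3 * inner(u, v)
positive    = inner(u, u) > 0
```

Invertibility of A matters only for the "if and only if v = 0" half of the positivity axiom; the other three axioms hold for any A.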

♣Exercises 4.1
1. Let <u, v> be the Euclidean inner product on R2, and let u = (2, −1), v = (−1, 3), w = (0, −5) and α = −3. Verify that
(a) <u, v> = <v, u>;
(b) <u + v, w> = <u, w> + <v, w>;
(c) <αu, v> = α<u, v>;
(d) <0, v> = <v, 0> = 0;
(e) <u, v + w> = <u, v> + <u, w>;
(f) <u, αv> = α<u, v>.
2. Let w1, w2, . . . , wn be positive real numbers and let u = (u1, u2, . . . , un) and v = (v1, v2, . . . , vn). Show that <u, v> = w1 u1 v1 + w2 u2 v2 + · · · + wn un vn is an inner product on Rn.
3. If <u, v> is the Euclidean inner product on Rn, and if A is an n × n matrix, show that <u, Av> = <At u, v>.

§4.2 Length and Angle in Inner Product Spaces

Definition 4.2. If V is an inner product space, then the norm (or length) of a vector u ∈ V is denoted by ‖u‖ and is defined by
‖u‖ = <u, u>^(1/2).
The distance between two points (vectors) u and v in V is denoted by d(u, v) and is defined by
d(u, v) = ‖u − v‖.

Theorem 4.2 (Cauchy-Schwarz Inequality). If u and v are vectors in an inner product space, then
<u, v>^2 ≤ <u, u><v, v>.

Proof. If u = 0, then <u, v> = 0 = <u, u>, so the equality holds. Let u ≠ 0, set a = <u, u>, b = 2<u, v>, c = <v, v>, and let t ∈ R be arbitrary.

By the positivity of the inner product <., .>,
0 ≤ <tu + v, tu + v> = <u, u>t^2 + 2<u, v>t + <v, v> = at^2 + bt + c.
Thus at^2 + bt + c ≥ 0 for all t, and hence it has either no real roots or a double root. Therefore its discriminant must satisfy
b^2 − 4ac ≤ 0 ⇒ <u, v>^2 ≤ <u, u><v, v>.

Remark 4.1. Since ‖u‖^2 = <u, u> and ‖v‖^2 = <v, v>, the Cauchy-Schwarz inequality can be written as
<u, v>^2 ≤ ‖u‖^2 ‖v‖^2, or |<u, v>| ≤ ‖u‖ ‖v‖.

Example 4.2. Let u = (u1, u2, . . . , un) and v = (v1, v2, . . . , vn) be vectors in Rn. Then the inner product on Rn given by
<u, v> = u · v = (u1, u2, . . . , un) · (v1, v2, . . . , vn) = u1 v1 + u2 v2 + · · · + un vn
is its Euclidean inner product, and the Cauchy-Schwarz inequality becomes
|u1 v1 + u2 v2 + · · · + un vn| ≤ (u1^2 + u2^2 + · · · + un^2)^(1/2) (v1^2 + v2^2 + · · · + vn^2)^(1/2),
which is called the Cauchy inequality.
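The Cauchy inequality is straightforward to sanity-check numerically; a sketch with random vectors and the Euclidean inner product (a small tolerance absorbs floating-point rounding):

```python
import math
import random

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

random.seed(0)
holds = True
for _ in range(1000):
    u = [random.uniform(-10, 10) for _ in range(5)]
    v = [random.uniform(-10, 10) for _ in range(5)]
    # |<u, v>| <= ||u|| ||v||
    if abs(dot(u, v)) > math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)) + 1e-9:
        holds = False
```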

Theorem 4.3. If V is an inner product space, then the norm ‖u‖ = <u, u>^(1/2) and the distance d(u, v) = ‖u − v‖ satisfy all the properties listed in the table:

Basic Properties of Length                 Basic Properties of Distance
L1. ‖u‖ ≥ 0                                D1. d(u, v) ≥ 0
L2. ‖u‖ = 0 if and only if u = 0           D2. d(u, v) = 0 if and only if u = v
L3. ‖αu‖ = |α| ‖u‖                         D3. d(u, v) = d(v, u)
L4. ‖u + v‖ ≤ ‖u‖ + ‖v‖                    D4. d(u, v) ≤ d(u, w) + d(w, v)
    (triangle inequality)                      (triangle inequality)

Proof. We will prove L4. Others are left to the reader as exercises.

‖u + v‖^2 = <u + v, u + v> = <u, u> + 2<u, v> + <v, v>
          ≤ <u, u> + 2|<u, v>| + <v, v>
          ≤ <u, u> + 2‖u‖ ‖v‖ + <v, v>
          = ‖u‖^2 + 2‖u‖ ‖v‖ + ‖v‖^2 = (‖u‖ + ‖v‖)^2.

Taking square roots, ‖u + v‖ ≤ ‖u‖ + ‖v‖.

Remark 4.2. From Remark 4.1,
|<u, v>| ≤ ‖u‖ ‖v‖ ⇒ −1 ≤ <u, v>/(‖u‖ ‖v‖) ≤ 1

for any nonzero vectors u and v in an inner product space.

Definition 4.3. The angle between u and v is denoted by θ and is defined by
cos θ = <u, v>/(‖u‖ ‖v‖), 0 ≤ θ ≤ π.

Definition 4.4. In an inner product space, two vectors u and v are said to be orthogonal if <u, v> = 0. If u is orthogonal to each vector in a set W, we say that u is orthogonal to W.

Theorem 4.4 (Generalized Theorem of Pythagoras). If u and v are orthogonal vectors in an inner product space, then
‖u + v‖^2 = ‖u‖^2 + ‖v‖^2.

Proof. ‖u + v‖^2 = <u + v, u + v> = ‖u‖^2 + 2<u, v> + ‖v‖^2 = ‖u‖^2 + ‖v‖^2.

♣Exercises 4.2
1. In each part verify that the Cauchy-Schwarz inequality holds for the given vectors using the Euclidean inner product:
(a) u = (3, 2), v = (4, −1);
(b) u = (−3, 1, 0), v = (2, −1, 3);
(c) u = (−4, 2, 1), v = (8, −4, −2);
(d) u = (0, −2, 2, 1), v = (−1, −1, 1, 1).
2. In each part determine whether the given vectors are orthogonal with respect to the Euclidean inner product:
(a) u = (−1, 3, 2), v = (4, 2, −1);
(b) u = (−2, −2, −2), v = (1, 1, 1);
(c) u = (a, b), v = (−b, a);
(d) u = (1, −5, 4), v = (3, 3, 3);
(e) u = (0, 3, −2, 1), v = (5, 2, −1, 0);
(f) u = (a, b, c), v = (0, 0, 0).
3. In each part use the given inner product on R2 to find ‖w‖, where w = (−1, 3):
(a) the Euclidean inner product;
(b) the weighted Euclidean product <u, v> = 3u1 v1 + 2u2 v2, where u = (u1, u2) and v = (v1, v2);
(c) the inner product generated by the matrix A = [ 1 2 ]
                                                  [ −1 3 ].

4. Let V be an inner product space. Show that
‖u + v‖^2 + ‖u − v‖^2 = 2‖u‖^2 + 2‖v‖^2
for vectors in V.
5. Let V be an inner product space. Show that
<u, v> = (1/4)‖u + v‖^2 − (1/4)‖u − v‖^2
for vectors in V.
6. Let V be an inner product space. Show that if u and v are orthogonal vectors in V such that ‖u‖ = ‖v‖ = 1, then ‖u − v‖ = √2.

§4.3 Orthonormal Bases; Gram-Schmidt Process

Definition 4.5. A set of vectors W in an inner product space is called an orthogonal set if each pair of distinct vectors in W is orthogonal. An orthogonal set in which each vector has norm 1 is called an orthonormal set. Further, if W is a basis, it is called an orthogonal basis; in addition, if it is an orthonormal set, it is called an orthonormal basis.

Remark 4.3. If v is a nonzero vector in an inner product space, then v/‖v‖ has norm 1; replacing v by v/‖v‖ is called normalizing v.

Theorem 4.5. If S = {v1, v2, . . . , vn} is an orthonormal basis for an inner product space V, and u ∈ V is any vector, then
u = <u, v1>v1 + <u, v2>v2 + · · · + <u, vn>vn.

Proof. Since S = {v1, v2, . . . , vn} is a basis, u = α1 v1 + α2 v2 + · · · + αn vn for some scalars α1, α2, . . . , αn. For each i = 1, 2, . . . , n,
<u, vi> = <α1 v1 + α2 v2 + · · · + αn vn, vi> = α1 <v1, vi> + α2 <v2, vi> + · · · + αn <vn, vi> = αi

since
<vi, vj> = ‖vi‖^2 = 1 if i = j, and <vi, vj> = 0 if i ≠ j.

Theorem 4.6. If S = {v1, v2, . . . , vn} is an orthogonal set of nonzero vectors in an inner product space V, then S is linearly independent.

Proof. Assume that α1 v1 + α2 v2 + · · · + αn vn = 0 for some scalars α1, α2, . . . , αn. Then, for each i = 1, 2, . . . , n,
0 = <0, vi> = <α1 v1 + α2 v2 + · · · + αn vn, vi> = αi <vi, vi>.
Since each vector in S is nonzero, <vi, vi> ≠ 0; so αi <vi, vi> = 0 implies αi = 0 for each i.

Theorem 4.7. Let V be an inner product space and {v1, v2, . . . , vr} be an orthonormal set of vectors in V. If W is the space spanned by v1, v2, . . . , vr, then every vector u in V can be expressed in the form u = w1 + w2, where w1 is in W and w2 is orthogonal to W, by letting
w1 = <u, v1>v1 + <u, v2>v2 + · · · + <u, vr>vr,
w2 = u − <u, v1>v1 − <u, v2>v2 − · · · − <u, vr>vr.

Proof. Left to the reader as an exercise.

The vector w1 is called the orthogonal projection of u on W and is denoted by
projW u = <u, v1>v1 + <u, v2>v2 + · · · + <u, vr>vr.
The vector w2 = u − projW u is called the component of u orthogonal to W.

Theorem 4.8. Every nonzero finite-dimensional inner product space has an orthonormal basis.

Proof. Let V be a nonzero n-dimensional inner product space, and let S = {u1, u2, . . . , un} be a basis for V. We will construct an orthonormal basis from S by the following step-by-step construction, which is called the Gram-Schmidt process:

v1 = u1 / ‖u1‖,
v2 = (u2 − <u2, v1>v1) / ‖u2 − <u2, v1>v1‖,
v3 = (u3 − <u3, v1>v1 − <u3, v2>v2) / ‖u3 − <u3, v1>v1 − <u3, v2>v2‖,
v4 = (u4 − <u4, v1>v1 − <u4, v2>v2 − <u4, v3>v3) / ‖u4 − <u4, v1>v1 − <u4, v2>v2 − <u4, v3>v3‖,
. . .
vn = (un − <un, v1>v1 − · · · − <un, vn−1>vn−1) / ‖un − <un, v1>v1 − · · · − <un, vn−1>vn−1‖.

Then {v1, v2, . . . , vn} is an orthonormal basis for V.

Example 4.3. In the vector space R3 with the Euclidean inner product, construct an orthonormal basis from the basis {(1, 1, 1), (0, 1, 1), (0, 0, 1)}.

Solution. Let u1 = (1, 1, 1), u2 = (0, 1, 1), u3 = (0, 0, 1). Taking

v1 = u1/‖u1‖ = (1, 1, 1)/‖(1, 1, 1)‖ = (1/√3, 1/√3, 1/√3),

v2 = (u2 − <u2, v1>v1) / ‖u2 − <u2, v1>v1‖
   = ((0, 1, 1) − (2/√3)(1/√3, 1/√3, 1/√3)) / ‖(0, 1, 1) − (2/√3)(1/√3, 1/√3, 1/√3)‖
   = (−2/3, 1/3, 1/3) / ‖(−2/3, 1/3, 1/3)‖ = (−2/√6, 1/√6, 1/√6),

v3 = (u3 − <u3, v1>v1 − <u3, v2>v2) / ‖u3 − <u3, v1>v1 − <u3, v2>v2‖
   = ((0, 0, 1) − (1/√3)v1 − (1/√6)v2) / ‖(0, 0, 1) − (1/√3)v1 − (1/√6)v2‖
   = (0, −1/2, 1/2) / ‖(0, −1/2, 1/2)‖ = (0, −1/√2, 1/√2).

Then
(1/√3, 1/√3, 1/√3), (−2/√6, 1/√6, 1/√6), (0, −1/√2, 1/√2)
form an orthonormal basis for R3.
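The Gram-Schmidt steps above can be carried out by a short routine. A sketch using the Euclidean inner product (floating point, so the results match the example up to rounding):

```python
import math

def gram_schmidt(basis):
    """Orthonormalize a basis of R^n with the Euclidean inner product."""
    ortho = []
    for u in basis:
        w = list(u)
        for v in ortho:                          # subtract projections onto earlier v's
            c = sum(a * b for a, b in zip(u, v))
            w = [wi - c * vi for wi, vi in zip(w, v)]
        norm = math.sqrt(sum(x * x for x in w))
        ortho.append([x / norm for x in w])
    return ortho

v1, v2, v3 = gram_schmidt([(1, 1, 1), (0, 1, 1), (0, 0, 1)])
# v1 = (1/sqrt(3), 1/sqrt(3), 1/sqrt(3)), v2 = (-2/sqrt(6), 1/sqrt(6), 1/sqrt(6)),
# v3 = (0, -1/sqrt(2), 1/sqrt(2)), as computed by hand in the example.
```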

Theorem 4.9 (Projection Theorem). If W is a finite-dimensional subspace of an inner product space V, then every vector u ∈ V can be expressed in exactly one way as u = w1 + w2, where w1 ∈ W and w2 is orthogonal to W.

Theorem 4.10 (Best Approximation Theorem). If W is a finite-dimensional subspace of an inner product space V, and if u ∈ V, then projW u is the best approximation to u from W in the sense that
‖u − projW u‖ < ‖u − w‖
for every vector w ∈ W different from projW u.
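Theorem 4.10 can be illustrated numerically: the projection of u onto W is strictly closer to u than any other point of W. A sketch with W = span{(1, 0, 0), (0, 1, 0)} in R3 (an orthonormal basis, so projW u follows the formula after Theorem 4.7; the vectors are illustrative only):

```python
import math

def proj(u, onb):
    """Orthogonal projection of u onto span(onb), onb an orthonormal set."""
    p = [0.0] * len(u)
    for v in onb:
        c = sum(a * b for a, b in zip(u, v))
        p = [pi + c * vi for pi, vi in zip(p, v)]
    return p

def dist(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

onb = [(1, 0, 0), (0, 1, 0)]
u = (3, -2, 5)
p = proj(u, onb)                               # (3, -2, 0)
# A few other points of W are all strictly farther from u:
farther = all(dist(u, (3 + dx, -2 + dy, 0)) > dist(u, p)
              for dx, dy in [(1, 0), (0, 1), (-2, 3), (0.5, -0.5)])
```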

♣Exercises 4.3
1. Which of the following form orthonormal sets?
(a) (1/√2, 0, 1/√2), (1/√3, 1/√3, −1/√3);
(b) (2/3, −2/3, 1/3), (2/3, 1/3, −2/3), (1/3, 2/3, 2/3);
(c) (1, 0, 0), (0, 1/√2, 1/√2), (0, 0, 1);
(d) (1/√6, 1/√6, −2/√6), (1/√2, −1/√2, 0).
2. Let R3 have the Euclidean inner product. Use the Gram-Schmidt process to transform the basis {u1, u2, u3} into an orthonormal basis:
(a) u1 = (1, 1, 1), u2 = (−1, 1, 0), u3 = (1, 2, 1);
(b) u1 = (1, 0, 0), u2 = (3, 7, −2), u3 = (0, 4, 1).
3. Let {v1, v2, . . . , vn} be an orthonormal basis for an inner product space V. Show that if w is a vector in V, then
‖w‖^2 = <w, v1>^2 + <w, v2>^2 + · · · + <w, vn>^2.

§4.4 Coordinates; Change of Basis

There is a close relationship between the notion of a basis and the notion of a coordinate system. In this section we develop this idea and also discuss results about changing bases for vector spaces.

Theorem 4.11. If S = {v1, v2, . . . , vn} is a basis for a vector space V, then every vector v ∈ V is uniquely expressed in the form v = α1 v1 + α2 v2 + · · · + αn vn for some scalars α1, α2, . . . , αn.

The scalars α1, α2, . . . , αn are called the coordinates of v relative to the basis S. The coordinate vector of v relative to S is denoted by (v)S and is defined by (v)S = (α1, α2, . . . , αn). The coordinate matrix of v relative to S is denoted by [v]S and is defined by

       [ α1 ]
[v]S = [ α2 ].
       [ .. ]
       [ αn ]

Proof. Suppose that v = α1 v1 + α2 v2 + · · · + αn vn and also v = β1 v1 + β2 v2 + · · · + βn vn

for some scalars α1, . . . , αn, β1, β2, . . . , βn. Then
(α1 − β1)v1 + (α2 − β2)v2 + · · · + (αn − βn)vn = 0.
Since S is linearly independent, αi − βi = 0 for each i; that is, αi = βi, ∀i.

Theorem 4.12. If S is an orthonormal basis for an n-dimensional inner product space V, and if (u)S = (α1, α2, . . . , αn) and (v)S = (β1, β2, . . . , βn), then
(a) ‖u‖ = (α1^2 + α2^2 + · · · + αn^2)^(1/2);
(b) d(u, v) = ((α1 − β1)^2 + (α2 − β2)^2 + · · · + (αn − βn)^2)^(1/2);
(c) <u, v> = α1 β1 + α2 β2 + · · · + αn βn.

Proof. Let S = {v1, v2, . . . , vn}. Then u = α1 v1 + α2 v2 + · · · + αn vn and v = β1 v1 + β2 v2 + · · · + βn vn.
(a)
‖u‖ = <α1 v1 + α2 v2 + · · · + αn vn, α1 v1 + α2 v2 + · · · + αn vn>^(1/2)
    = ( Σ_{1≤i,j≤n} αi αj <vi, vj> )^(1/2) = ( Σ_{i=1}^{n} αi^2 )^(1/2),
since
<vi, vj> = 1 if i = j, and <vi, vj> = 0 if i ≠ j.

(b) and (c) are left to the reader.

♣Change of Basis Problem. If we change the basis for a vector space V from an old basis B = {u1, u2, . . . , un} to a new basis B′ = {u′1, u′2, . . . , u′n}, how is the old coordinate matrix [v]B of a vector v related to the new coordinate matrix [v]B′?

Solution. Since B is a basis for V and B′ ⊆ V, we may write

u′1 = γ11 u1 + γ12 u2 + · · · + γ1n un
u′2 = γ21 u1 + γ22 u2 + · · · + γ2n un
. . .
u′n = γn1 u1 + γn2 u2 + · · · + γnn un.

Let

       [ α1 ]           [ β1 ]
[v]B = [ α2 ],  [v]B′ = [ β2 ],
       [ .. ]           [ .. ]
       [ αn ]           [ βn ]

that is,
v = α1 u1 + α2 u2 + · · · + αn un,  v = β1 u′1 + β2 u′2 + · · · + βn u′n.

Then v = β1 u′1 + β2 u′2 + · · · + βn u′n becomes

v = β1 (γ11 u1 + γ12 u2 + · · · + γ1n un) + β2 (γ21 u1 + γ22 u2 + · · · + γ2n un) + · · · + βn (γn1 u1 + γn2 u2 + · · · + γnn un)
  = (β1 γ11 + β2 γ21 + · · · + βn γn1)u1 + (β1 γ12 + β2 γ22 + · · · + βn γn2)u2 + · · · + (β1 γ1n + β2 γ2n + · · · + βn γnn)un.

Thus

[ α1 ]   [ γ11 γ21 · · · γn1 ] [ β1 ]
[ α2 ] = [ γ12 γ22 · · · γn2 ] [ β2 ].
[ .. ]   [ . . .             ] [ .. ]
[ αn ]   [ γ1n γ2n · · · γnn ] [ βn ]

Set

    [ γ11 γ21 · · · γn1 ]
P = [ γ12 γ22 · · · γn2 ].
    [ . . .             ]
    [ γ1n γ2n · · · γnn ]

Then the j-th column of P equals the coordinate matrix [u′j]B of u′j relative to B, for each j = 1, 2, . . . , n, and P is denoted by
P = [[u′1]B, [u′2]B, . . . , [u′n]B]

and then
[v]B = P[v]B′ = [[u′1]B, [u′2]B, . . . , [u′n]B][v]B′.
The matrix P is called the transition matrix from B′ to B.

Theorem 4.13. If P is the transition matrix from a basis B′ to a basis B, then
(a) P is invertible;
(b) P⁻¹ is the transition matrix from the basis B to the basis B′.

Theorem 4.14. If P is the transition matrix from an orthonormal basis to another orthonormal basis for an inner product space, then P⁻¹ = Pᵗ.

Definition 4.6. A square matrix A such that A⁻¹ = Aᵗ is called an orthogonal matrix.

Theorem 4.15. If A is an n × n matrix, then the following are equivalent.
(a) A is orthogonal.
(b) The row vectors of A form an orthonormal set in Rn with the Euclidean inner product.
(c) The column vectors of A form an orthonormal set in Rn with the Euclidean inner product.

Proof. Omitted!

♣Exercises 4.4
1. Find the coordinate matrix and coordinate vector for w relative to the basis S = {u1, u2}:
(a) u1 = (1, 0), u2 = (0, 1); w = (3, −7);
(b) u1 = (2, −4), u2 = (3, 8); w = (1, 1);
(c) u1 = (1, 1), u2 = (0, 2); w = (a, b).
2. Consider the bases B = {u1, u2} and B′ = {v1, v2}, where

     [ 1 ]       [ 0 ]       [ 2 ]       [ −3 ]
u1 = [ 0 ], u2 = [ 1 ], v1 = [ 1 ], v2 = [  4 ].

(a) Find the transition matrix from B′ to B.
(b) Find the transition matrix from B to B′.
(c) Compute the coordinate matrix [w]B, where

    [  3 ]
w = [ −5 ].
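Exercise 2 above can be sketched in code: with B the standard basis, the columns of the transition matrix P are just v1 and v2 themselves, and Theorem 4.13 says the reverse direction uses P⁻¹ (exact rationals keep the 1/11 factors clean):

```python
from fractions import Fraction

# B = {(1,0), (0,1)} (standard), B' = {v1 = (2,1), v2 = (-3,4)}.
# Transition matrix from B' to B: columns are [v1]_B and [v2]_B.
P = [[Fraction(2), Fraction(-3)],
     [Fraction(1), Fraction(4)]]

def matvec(M, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in M]

def inv2(M):
    """Inverse of a 2 x 2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

w_B = [Fraction(3), Fraction(-5)]
w_Bprime = matvec(inv2(P), w_B)     # [w]_B' = P^{-1} [w]_B  (Theorem 4.13)
back = matvec(P, w_Bprime)          # [w]_B  = P [w]_B'
```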

Chapter Five
Linear Transformations

§5.1 Introduction to Linear Transformations

Definition 5.1. A function from a vector space V to a vector space W, T : V → W, is called a linear transformation if for all vectors u, v ∈ V and every scalar α,
(i) T(u + v) = T(u) + T(v), and
(ii) T(αu) = αT(u).

Example 5.1. Let A be an m × n matrix. Then the function T : Rn → Rm defined by T(x) = Ax is a linear transformation, which is called a matrix transformation or multiplication by A.

Example 5.2. A function from a vector space V to a vector space W, T : V → W, defined by T(v) = 0, is a linear transformation, which is called a zero transformation.

Example 5.3. The function from a vector space V to itself, T : V → V, defined by T(v) = v, is a linear transformation, which is called the identity transformation on V.

A linear transformation from a vector space V to itself is called a linear operator on V.

Example 5.4. Let V be a vector space and let α be a fixed scalar. Then the function T : V → V defined by T(v) = αv is a linear operator. If α > 1, then T is called a dilation of V, and if 0 < α < 1, then T is called a contraction of V.

Example 5.5. Let V be an inner product space and let W be a finite-dimensional subspace of V having S = {w1, w2, . . . , wr} as an orthonormal basis. Then the function T : V → W defined by
T(v) = <v, w1>w1 + <v, w2>w2 + · · · + <v, wr>wr
is a linear transformation, which is called the orthogonal projection of V onto W.

y). y) = (y. F (x. Determine whether F is linear: 1. 2. and suppose              1   0   0     1    3     4  T  0  =   . 3. 2.   z 129 . 5.    1    (b) Find T     3 . F (x. y) = (x. x). F (x. y) = (0. y). Let T : R3 → R2 be a matrix transformation. y). F (x.A formula is given for a function F : R2 → R2 . x − y). y) = (x2 . F (x. y) = (2x + y. ♣Exercises 5. 4. T  1  =    .   8    x    (b) Find T     y . F (x. T  0  =  .1 ♣. y) = (2x.         1   0   −7 0 0 1 (a) Find thematrix. 6. y + 1).

§5.2 Properties of Linear Transformations; Kernel and Range

In this section we develop some basic properties of linear transformations. We assume that V and W are vector spaces.

Theorem 5.1. If T : V → W is a linear transformation, then
(a) T(0) = 0;
(b) T(−v) = −T(v) for all v ∈ V;
(c) T(v − u) = T(v) − T(u) for all v, u ∈ V.

Proof. (a) Since 0 = 0v, T(0) = T(0v) = 0T(v) = 0.
(b) T(−v) = T((−1)v) = (−1)T(v) = −T(v).
(c) Since v − u = v + (−1)u, T(v − u) = T(v + (−1)u) = T(v) + (−1)T(u) = T(v) − T(u).

Definition 5.2. If T : V → W is a linear transformation, then
(a) ker(T) = {v ∈ V | T(v) = 0} is called the kernel (or nullspace) of T;
(b) R(T) = {T(v) ∈ W | v ∈ V} is called the range of T.

Theorem 5.2. If T : V → W is a linear transformation, then

(a) ker(T) is a subspace of V;
(b) R(T) is a subspace of W.

Proof. (a) Let v1, v2 ∈ ker(T) and let α be any scalar. Then
T(v1 + v2) = T(v1) + T(v2) = 0 + 0 = 0 and T(αv1) = αT(v1) = α0 = 0,
so v1 + v2, αv1 ∈ ker(T). Thus ker(T) is a subspace.
(b) Let w1, w2 ∈ R(T) and let α be any scalar. Then there exist v1, v2 ∈ V such that T(v1) = w1 and T(v2) = w2. Then
w1 + w2 = T(v1) + T(v2) = T(v1 + v2) ∈ R(T) and αw1 = αT(v1) = T(αv1) ∈ R(T),
so w1 + w2, αw1 ∈ R(T). Thus R(T) is a subspace.

Definition 5.3. If T : V → W is a linear transformation, then the dimension of the range R(T) is called the rank of T, and the dimension of the kernel ker(T) is called the nullity of T.

Theorem 5.3 (Dimension Theorem). If T : V → W is a linear transformation and V has dimension n, then
(rank of T) + (nullity of T) = n.

Proof. Omitted.

Theorem 5.4. If A is an m × n matrix, then the dimension of the solution space of Ax = 0 is n − rank(A).

Proof. Let T : Rn → Rm be multiplication by A, that is, T(x) = Ax for all x ∈ Rn. By Theorem 5.3,
(rank of T) + (nullity of T) = n, so (nullity of T) = n − (rank of T).
But ker(T) = {x ∈ Rn | T(x) = 0, i.e., Ax = 0} is the set of all solutions of Ax = 0, that is, the solution space of Ax = 0; thus
(nullity of T) = dimension of ker(T) = dimension of the solution space of Ax = 0.
Since
R(T) = {T(x) | x ∈ Rn} = {Ax | x ∈ Rn} = {b ∈ Rm | b = Ax, x ∈ Rn} = the set of all b such that Ax = b is consistent,

By Theorem 3.16, R(T) is therefore the column space of A, so

    (rank of T) = dimension of R(T) = dimension of the column space of A = rank(A).

Hence the dimension of the solution space of Ax = 0 is n − rank(A).
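Theorem 5.4 can be checked numerically: compute the rank by row reduction and read off the nullity as n − rank(A). A Python sketch, using a hypothetical 3 × 3 matrix chosen to have dependent rows:

```python
# Rank via Gaussian elimination; Theorem 5.4 then gives the nullity as
# n - rank(A).  The matrix A is a hypothetical example.

def rank(A, eps=1e-9):
    M = [list(map(float, row)) for row in A]   # work on a copy
    m, n = len(M), len(M[0])
    r = 0                                      # next pivot row
    for c in range(n):
        pivot = next((i for i in range(r, m) if abs(M[i][c]) > eps), None)
        if pivot is None:
            continue                           # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(m):
            if i != r and abs(M[i][c]) > eps:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2, 3],
     [2, 4, 6],       # = 2 * (row 1), so the rows are dependent
     [1, 0, 1]]

n = len(A[0])
print(rank(A), n - rank(A))   # rank 2, nullity 1
```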

♣Exercises 5.2

1. Let T : R^2 → R^2 be multiplication by

    [  2 −1 ]
    [ −8  4 ].

(1) Which of the following are in R(T)?
    (a) (1, −4)^t,  (b) (5, 0)^t,  (c) (−3, 12)^t.

(2) Which of the following are in ker(T)?
    (a) (5, 10)^t,  (b) (3, 2)^t,  (c) (1, 1)^t.

2. In each part, let T be multiplication by the given matrix:

    (a) [ 1 −1  3 ]    (b) [ 2 0 −1 ]
        [ 5  6 −4 ]        [ 4 0 −2 ]
        [ 7  4  2 ]        [ 0 0  0 ]

Find
(a) a basis for the range of T;
(b) a basis for the kernel of T;
(c) the rank and nullity of T.


§5.3 Linear Transformations from Rn to Rm

Theorem 5.5. If T : R^n → R^m is a linear transformation and {e1, e2, . . . , en} is the standard basis for R^n, then T is multiplication by A, where A is the matrix whose j-th column is T(ej) for each j = 1, 2, . . . , n; that is, T(x) = Ax for each x ∈ R^n. The matrix A is called the standard matrix for T.

Proof. Let

    T(e1) = (a11, a21, . . . , am1)^t,  T(e2) = (a12, a22, . . . , am2)^t,  . . . ,  T(en) = (a1n, a2n, . . . , amn)^t,

and set

    A = [ a11 a12 · · · a1n ]
        [ a21 a22 · · · a2n ]
        [  ·   ·         ·  ]
        [ am1 am2 · · · amn ].


 
If x = (x1, x2, . . . , xn)^t, then x = x1e1 + x2e2 + · · · + xnen, so

    T(x) = x1T(e1) + x2T(e2) + · · · + xnT(en)
         = (a11x1 + a12x2 + · · · + a1nxn, a21x1 + a22x2 + · · · + a2nxn, . . . , am1x1 + am2x2 + · · · + amnxn)^t
         = Ax.
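The construction in Theorem 5.5 (the j-th column of the standard matrix is T(ej)) can be carried out mechanically. A Python sketch; the operator T used here is a hypothetical example, not one from the text:

```python
# Standard matrix of T: apply T to e_1, ..., e_n and use the images as
# columns (Theorem 5.5).  T below is a hypothetical operator on R^2.

def standard_matrix(T, n):
    cols = [T([1 if i == j else 0 for i in range(n)]) for j in range(n)]
    m = len(cols[0])
    return [[cols[j][i] for j in range(n)] for i in range(m)]

def T(x):
    x1, x2 = x
    return [x1 + 3 * x2, 5 * x1]

A = standard_matrix(T, 2)
print(A)   # [[1, 3], [5, 0]]
```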

Example 5.6. Find the standard matrix for the transformation T : R^3 → R^4 defined by

    T(x1, x2, x3) = (x1 + x2, x1 − x2, x3, x1).


Solution. Since

    T(e1) = T(1, 0, 0) = (1 + 0, 1 − 0, 0, 1) = (1, 1, 0, 1),
    T(e2) = T(0, 1, 0) = (0 + 1, 0 − 1, 0, 0) = (1, −1, 0, 0),
    T(e3) = T(0, 0, 1) = (0 + 0, 0 − 0, 1, 0) = (0, 0, 1, 0),

the standard matrix for T is

    A = [ 1  1 0 ]
        [ 1 −1 0 ]
        [ 0  0 1 ]
        [ 1  0 0 ].
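The result of Example 5.6 can be verified by checking that multiplication by A reproduces T on a sample vector:

```python
# Check of Example 5.6: multiplication by the standard matrix A must
# reproduce T(x1, x2, x3) = (x1 + x2, x1 - x2, x3, x1).

A = [[1,  1, 0],
     [1, -1, 0],
     [0,  0, 1],
     [1,  0, 0]]

def T(x):
    x1, x2, x3 = x
    return [x1 + x2, x1 - x2, x3, x1]

x = [2, 5, -1]                 # an arbitrary test vector
Ax = [sum(a * b for a, b in zip(row, x)) for row in A]
print(Ax == T(x))              # True
```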

♣Exercises 5.3

1. Find the standard matrix of each of the following linear transformations:

(a) T(x1, x2) = (2x1 − x2, x1 + x2),
(b) T(x1, x2) = (x1, x2),
(c) T(x1, x2, x3) = (x1 + 2x2 + x3, x1 + 5x2, x3),
(d) T(x1, x2, x3) = (4x1, 7x2, −8x3),
(e) T(x1, x2, x3, x4) = (0, x1, 0, x4),
(f) T(x1, x2, x3, x4) = (x2, 0, x3, x4, x1 − x3).

§5.4 Matrices of Linear Transformations

In this section we show that if V and W are finite-dimensional vector spaces (not necessarily R^n and R^m), then any linear transformation T : V → W can be regarded as a matrix transformation, as follows. Suppose that V is an n-dimensional vector space and W is an m-dimensional vector space. Let B and B′ be bases for V and W, respectively, and for each x ∈ V let [x]_B be the coordinate matrix of x with respect to B. Then [x]_B ∈ R^n, and the coordinate matrix [T(x)]_{B′} is in R^m. Thus the linear transformation T, which maps x to T(x), defines a linear transformation T′ from R^n to R^m sending [x]_B to [T(x)]_{B′}. Using the standard matrix A for T′, we have

    T′([x]_B) = A[x]_B = [T(x)]_{B′}

for each x ∈ V.

To find the matrix A, let B = {v1, v2, . . . , vn}. Then

    A[v1]_B = [T(v1)]_{B′},  A[v2]_B = [T(v2)]_{B′},  . . . ,  A[vn]_B = [T(vn)]_{B′}.

But

    [v1]_B = (1, 0, . . . , 0)^t,  [v2]_B = (0, 1, . . . , 0)^t,  . . . ,  [vn]_B = (0, 0, . . . , 1)^t.

Thus {[v1]_B, [v2]_B, . . . , [vn]_B} is the standard basis for R^n. Since T′([vj]_B) = A[vj]_B = [T(vj)]_{B′} for each j = 1, 2, . . . , n, the j-th column of A is [T(vj)]_{B′}. Thus

    A = [[T(v1)]_{B′}, [T(v2)]_{B′}, . . . , [T(vn)]_{B′}],

which is called the matrix for T with respect to the bases B and B′; A is commonly denoted by [T]_{B,B′}. If V = W and B = B′, then [T]_{B,B} is simply denoted by [T]_B and is called the matrix for T with respect to the basis B.

Example 5.7. Let T : R^2 → R^2 be the linear operator defined by

    T(x1, x2) = (x1 + x2, −2x1 + 4x2).

Find the matrix for T with respect to the basis B = {u1, u2}, where u1 = (1, 1)^t and u2 = (1, 2)^t.

Solution. Since

    T(u1) = T(1, 1) = (1 + 1, −2 + 4) = (2, 2) = 2u1 + 0u2,
    T(u2) = T(1, 2) = (1 + 2, −2 + 8) = (3, 6) = 0u1 + 3u2,

the matrix for T with respect to B is

    [T]_B = [[T(u1)]_B, [T(u2)]_B] = [ 2 0 ]
                                     [ 0 3 ].

We end this chapter by defining two terms. A square matrix A = [a_ij] is called a diagonal matrix if a_ij = 0 whenever i ≠ j.

Definition 5.4. If A and B are square matrices, then B is said to be similar to A if there exists an invertible matrix P such that B = P^{-1}AP.

♣Exercises 5.4

1. Let T : R^2 → R^3 be defined by

    T(x1, x2) = (x1 + 2x2, −x1, 0).

(a) Find the matrix of T with respect to the bases B = {u1, u2} and B′ = {v1, v2, v3}, where

    u1 = (1, 3)^t,  u2 = (−2, 4)^t,
    v1 = (1, 1, 1)^t,  v2 = (2, 2, 0)^t,  v3 = (3, 0, 0)^t.

(b) Use the matrix obtained in (a) to compute T(8, 3).

2. Let v1 = (1, 3)^t and v2 = (−1, 4)^t, and let

    A = [  1 3 ]
        [ −2 5 ]

be the matrix for T : R^2 → R^2 with respect to the basis B = {v1, v2}.
(a) Find [T(v1)]_B and [T(v2)]_B.
(b) Find T(v1) and T(v2).
(c) Find a formula for T(x1, x2).
(d) Use the formula obtained in (c) to compute T(1, 1).

3. Let

    A = [  3 −2 1 0 ]
        [  1  6 2 1 ]
        [ −3  0 7 1 ]

be the matrix of T : R^4 → R^3 with respect to the bases B = {v1, v2, v3, v4} and B′ = {w1, w2, w3},

          8 1 1 (a) Find [T (v1 )]B 0 .where         0 2 1 6                  1   1   4   9          v1 =   . [T (v3 )]B 0 . w3 =  9  . (c) Find a formula for   x1       x2     T   . v2 =   . [T (v2 )]B 0 . (b) Find T (v1 ). v4 =            1   −1   −1   4          1 −1 2 2        0   −7   −6        w1 =  8    . v3 =   . T (v4 ). w2 =  8  .    0    0 143 .   x  3    x4 (d) Compute   2      2    T   . [T (v4 )]B 0 . T (v2 ). T (v3 ).

Chapter Six Eigenvalues and Eigenvectors

§6.1 Eigenvalues and Eigenvectors

Definition 6.1. If A is an n × n matrix, then a nonzero vector x ∈ R^n is called an eigenvector of A if there exists a scalar λ such that Ax = λx. The scalar λ is called an eigenvalue (or proper value or characteristic value) of A, and x is said to be an eigenvector corresponding to λ.

Remark 6.1. To find the eigenvalues of an n × n matrix A, we rewrite Ax = λx as Ax = λIx, or

    (λI − A)x = 0.

For λ to be an eigenvalue, this equation must have a nonzero solution for x; it has a nonzero solution if and only if det(λI − A) = 0. This equation is called the characteristic equation of A, and the polynomial det(λI − A) in λ is called the characteristic polynomial of A. Summarizing, we have the following theorem.

Theorem 6.1. If A is an n × n matrix, then the following are equivalent:
(a) λ is an eigenvalue of A.
(b) The system of equations (λI − A)x = 0 has nontrivial solutions.
(c) There is a nonzero vector x such that Ax = λx.
(d) λ is a real solution of the characteristic equation det(λI − A) = 0.

The eigenvectors of A corresponding to an eigenvalue λ are the nonzero vectors x such that Ax = λx, that is, the nonzero vectors in the solution space of (λI − A)x = 0, which is called the eigenspace of A corresponding to λ.

Example 6.1. Find the eigenvalues of the matrix

    A = [  3 2 ]
        [ −1 0 ]

and the eigenvectors corresponding to the eigenvalues.

Solution. The characteristic equation of A is

    0 = det(λI − A) = | λ−3  −2 |
                      |  1    λ | = (λ − 3)λ + 2 = λ^2 − 3λ + 2 = (λ − 1)(λ − 2).

So λ = 1 and λ = 2 are the eigenvalues of A.

 
Let x = (x1, x2)^t be an eigenvector corresponding to the eigenvalue λ; then Ax = λx. If λ = 1, then Ax = λx becomes

    [  3 2 ][ x1 ]   [ x1 ]
    [ −1 0 ][ x2 ] = [ x2 ],

so

    3x1 + 2x2 = x1,
    −x1 = x2.
If we set x2 = t for any real t ≠ 0, then x1 = −t; so x = (−t, t)^t is an eigenvector corresponding to λ = 1.

If λ = 2, then Ax = λx becomes

    [  3 2 ][ x1 ]     [ x1 ]
    [ −1 0 ][ x2 ] = 2 [ x2 ],

so

    3x1 + 2x2 = 2x1,
    −x1 = 2x2.

If we set x2 = t for any real t ≠ 0, then x1 = −2t; so x = (−2t, t)^t


is an eigenvector corresponding to λ = 2.
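The eigenvalue–eigenvector pairs found in Example 6.1 can be verified by checking Ax = λx directly, here with the particular choice t = 4:

```python
# Check of Example 6.1: A = [[3, 2], [-1, 0]] has eigenpairs
# (λ = 1, (-t, t)) and (λ = 2, (-2t, t)).

def mat_vec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[3, 2],
     [-1, 0]]
t = 4                          # any nonzero value of t works

v1 = [-t, t]
v2 = [-2 * t, t]

print(mat_vec(A, v1) == [1 * c for c in v1])   # True: A v1 = 1 * v1
print(mat_vec(A, v2) == [2 * c for c in v2])   # True: A v2 = 2 * v2
```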

♣Exercises 6.1

1. For the following matrices,

    (a) [ 3  0 ]    (b) [ 10 −9 ]    (c) [ 0 3 ]
        [ 8 −1 ]        [  4 −2 ]        [ 4 0 ]

    (d) [ −2 −7 ]    (e) [ 0 0 ]    (f) [ 1 0 ]
        [  1  2 ]        [ 0 0 ]        [ 0 1 ]

    (g) [  4 0 1 ]    (h) [  3   0 −5 ]
        [ −2 1 0 ]        [ 1/5 −1  0 ]
        [ −2 0 1 ]        [  1   1 −2 ]

    (i) [ −1  0  1 ]    (j) [  5 0 1 ]
        [ −1  3  0 ]        [  1 1 0 ]
        [ −4 13 −1 ]        [ −7 1 0 ]

(a) Find the characteristic equations;
(b) Find the eigenvalues;
(c) Find the eigenspaces.


2. Prove that λ = 0 is an eigenvalue of a matrix A if and only if A is not

invertible.

3. Prove that the constant term in the characteristic polynomial of an n × n matrix A is (−1)^n det(A). (Hint: The constant term is the value of the characteristic polynomial when λ = 0.)

4. Let A be an n × n matrix.

(a) Prove that the characteristic polynomial of A has degree n.

(b) Prove that the coefficient of λ^n in the characteristic polynomial is 1.

5. The trace of a square matrix A, denoted by tr(A), is the sum of the elements on the main diagonal. Show that the characteristic equation of a 2 × 2 matrix A is λ^2 − tr(A)λ + det(A) = 0.

6. Prove that the eigenvalues of a triangular matrix are the entries on the

main diagonal.

7. Show that if λ is an eigenvalue of A, then λ^2 is an eigenvalue of A^2; more generally, show that λ^n is an eigenvalue of A^n for each positive integer n.

8. Find the eigenvalues of A^9 where

    A = [ 1  3  7 11 ]
        [ 0 −1  3  8 ]
        [ 0  0 −2  4 ]
        [ 0  0  0  2 ].


§6.2 Diagonalization

♣The Diagonalization Problem. Given a linear operator T : V → V on a finite-dimensional vector space V, does there exist a basis B for V such that the matrix for T with respect to B, [T]_B, is diagonal?

♣Matrix Form of the Diagonalization Problem. Given a square matrix A, does there exist an invertible matrix P such that P^{-1}AP is diagonal?

Definition 6.2. A square matrix A is said to be diagonalizable if there exists an invertible matrix P such that P^{-1}AP is diagonal; the matrix P is said to diagonalize A.

Theorem 6.2. If A is an n × n matrix, then the following are equivalent:
(a) A is diagonalizable.
(b) A has n linearly independent eigenvectors.

Proof. (a) ⇒ (b) Suppose A is diagonalizable. Then there exists an invertible matrix

    P = [ p11 p12 · · · p1n ]
        [ p21 p22 · · · p2n ]
        [  ·   ·         ·  ]
        [ pn1 pn2 · · · pnn ]

such that P^{-1}AP = D, where

    D = [ λ1  0 · · ·  0 ]
        [  0 λ2 · · ·  0 ]
        [  ·  ·        ·  ]
        [  0  0 · · · λn ].

Then AP = PD. Let pj denote the j-th column vector of P for j = 1, 2, . . . , n. Then Apj is the j-th column vector of AP, and λj pj is the j-th column vector of PD. Since AP = PD, it follows that

    Apj = λj pj,   j = 1, 2, . . . , n.

  . pn . . . 2. pj is a nonzero vector for each j = 1. . . p2 . .  = PD  . . . . . . . .   . .   . (b) ⇒ (a) Suppose A has n linearly independent eigenvectors. and let   p p ··· p1n  11 12     p21 p22 · · · p2n    P = .. . ..15. by Theorem 3. p2 .    pn1 pn2 · · · pnn be the matrix whose columns are p1 .    λ1 pn1 λ2 pn2 · · · λn pnn    p11 p12 · · · p1n λ 0 ··· 0   1      p21 p22 · · · p2n   0 λ2 · · · 0     =  . . 2. pn are linearly independent. .  . . n. . . . . . . . . p2 . . Thus p1 . . . . p1 . λn . 2. . . j = 1.     pn1 pn2 · · · pnn 0 0 ··· λn 151 . . . . . n so that   λ1 p11 λ2 p12 · · · λn p1n      λ1 p21 λ2 p22 · · · λn p2n    AP =  . . .   . . pn with corresponding eigenvalues λ1 . . .   . ..Since P is invertible. . Then Apj is the j-th column vector of AP for j = 1. . . n. . p2 . .. But Apj = λj pj . . .  . . . λ2 . p1 . Since P is invertible. . respectively. pn are eigenvectors of A.  .

where D is the diagonal matrix whose diagonal entries are λ1, λ2, . . . , λn. Since the column vectors of P are linearly independent, P is invertible, so AP = PD implies P^{-1}AP = D, which is diagonal.

Remark 6.2. From the proof of Theorem 6.2, we obtain the following procedure for finding a matrix P which diagonalizes a diagonalizable n × n matrix A:

Step 1. Find n linearly independent eigenvectors p1, p2, . . . , pn.
Step 2. Form the matrix P whose columns are p1, p2, . . . , pn.
Step 3. P^{-1}AP is then diagonal with λ1, λ2, . . . , λn as its diagonal entries, where λi is the eigenvalue corresponding to the eigenvector pi for i = 1, 2, . . . , n.

Example 6.2. Find a matrix P which diagonalizes

    A = [  3 −2 0 ]
        [ −2  3 0 ]
        [  0  0 5 ].

Solution. The characteristic equation of A is

    0 = det(λI − A) = | λ−3   2    0  |
                      |  2   λ−3   0  |
                      |  0    0   λ−5 | = (λ − 5)((λ − 3)^2 − 4) = (λ − 5)^2(λ − 1).

Hence λ = 1 and λ = 5 are the eigenvalues of A.

Let x = (x1, x2, x3)^t be an eigenvector corresponding to the eigenvalue λ. Then Ax = λx, or equivalently, (λI − A)x = 0. If λ = 5, then (λI − A)x = 0 becomes

    [ 2 2 0 ][ x1 ]   [ 0 ]
    [ 2 2 0 ][ x2 ] = [ 0 ]
    [ 0 0 0 ][ x3 ]   [ 0 ].

Solving this system yields x1 = −s, x2 = s, x3 = t for s, t ∈ R. Thus the eigenvectors of A corresponding to λ = 5 are the nonzero vectors of the form

    x = (−s, s, t)^t = s(−1, 1, 0)^t + t(0, 0, 1)^t.

Note that the eigenvectors

    p1 = (−1, 1, 0)^t  and  p2 = (0, 0, 1)^t

corresponding to λ = 5 are linearly independent, so they form a basis for the eigenspace corresponding to λ = 5.

If λ = 1, then (λI − A)x = 0 becomes

    [ −2  2  0 ][ x1 ]   [ 0 ]
    [  2 −2  0 ][ x2 ] = [ 0 ]
    [  0  0 −4 ][ x3 ]   [ 0 ].

Solving this system yields x1 = t, x2 = t, x3 = 0 for t ∈ R. Thus the eigenvectors of A corresponding to λ = 1 are the nonzero vectors

    x = (t, t, 0)^t = t(1, 1, 0)^t,

so p3 = (1, 1, 0)^t forms a basis for the eigenspace corresponding to λ = 1. We see that {p1, p2, p3} is linearly independent. Thus the matrix

    P = [ −1 0 1 ]
        [  1 0 1 ]
        [  0 1 0 ]

diagonalizes A, and

    P^{-1}AP = [ 5 0 0 ]
               [ 0 5 0 ]
               [ 0 0 1 ].

Theorem 6.3. If v1, v2, . . . , vk are eigenvectors of a matrix A corresponding to distinct eigenvalues λ1, λ2, . . . , λk, then {v1, v2, . . . , vk} is a linearly independent set.

Proof. Omitted!

Theorem 6.4. If an n × n matrix A has n distinct eigenvalues, then A is diagonalizable.

Proof. If v1, v2, . . . , vn are eigenvectors of A corresponding to the n distinct eigenvalues, then, by Theorem 6.3, {v1, v2, . . . , vn} is linearly independent. Thus A has n linearly independent eigenvectors, and, by Theorem 6.2, A is diagonalizable.
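The conclusion of Example 6.2 can be verified without computing P^{-1}: the condition P^{-1}AP = D is equivalent to AP = PD, which involves only matrix multiplication. A Python sketch:

```python
# Check of Example 6.2: P diagonalizes A exactly when AP = PD with
# D = diag(5, 5, 1), and AP = PD needs no matrix inversion to verify.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[ 3, -2, 0],
     [-2,  3, 0],
     [ 0,  0, 5]]
P = [[-1, 0, 1],
     [ 1, 0, 1],
     [ 0, 1, 0]]
D = [[5, 0, 0],
     [0, 5, 0],
     [0, 0, 1]]

print(mat_mul(A, P) == mat_mul(P, D))   # True
```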

♣Exercises 6.2

1. Show that the following matrices are not diagonalizable:

    (a) [ 2 0 ]    (b) [ 2 −3 ]    (c) [ 3 0 0 ]
        [ 1 2 ]        [ 1 −1 ]        [ 0 2 0 ]
                                       [ 0 1 2 ]

2. Find a matrix P that diagonalizes A = [ −14 12 / −20 17 ] and determine P^{-1}AP.

3. Find a matrix P that diagonalizes A = [ 1 0 / 6 −1 ] and determine P^{-1}AP.

4. Find a matrix P that diagonalizes

    A = [ 1 0 0 ]
        [ 0 1 1 ]
        [ 0 1 1 ]

and determine P^{-1}AP.

5. Let

    A = [ a b ]
        [ c d ].

Show that
(a) A is diagonalizable if (a − d)^2 + 4bc > 0;
(b) A is not diagonalizable if (a − d)^2 + 4bc < 0.

6. Let T : R^2 → R^2 be the linear operator given by

    T(x1, x2) = (3x1 + 4x2, 2x1 + x2).

Find a basis for R^2 relative to which the matrix of T is diagonal.

7. Let A be an n × n matrix and P an invertible n × n matrix. Show that
(a) (P^{-1}AP)^2 = P^{-1}A^2P;
(b) (P^{-1}AP)^k = P^{-1}A^kP for each positive integer k.

8. Compute A^10 where

    A = [  1 0 ]
        [ −1 2 ].

(Hint: Find a matrix P that diagonalizes A and compute (P^{-1}AP)^10.)
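The identity (P^{-1}AP)^k = P^{-1}A^kP is what makes such hints work: if P^{-1}AP = D, then A^kP = PD^k, so powers of A reduce to powers of the diagonal entries of D. A Python sketch checking this for k = 3 with the matrix of Example 6.2:

```python
# If P^{-1}AP = D, then A^k P = P D^k, so A^k = P D^k P^{-1}: powers of
# A reduce to powers of diagonal entries.  Checked for k = 3 with the
# matrix of Example 6.2.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[ 3, -2, 0], [-2, 3, 0], [0, 0, 5]]
P = [[-1, 0, 1], [ 1, 0, 1], [0, 1, 0]]
D3 = [[125, 0, 0], [0, 125, 0], [0, 0, 1]]   # D^3 = diag(5^3, 5^3, 1^3)

A3 = mat_mul(A, mat_mul(A, A))
print(mat_mul(A3, P) == mat_mul(P, D3))      # True
```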

3. Definition 6. Given a linear operator T : V → V on a finite-dimensional vector space V .5. Theorem 6. Then there exists an orthogonal matrix P such that P −1 AP is diagonal. A square matrix A is said to be orthogonally diagonal- izable if there exists an Orthogonal matrix P such that P −1 AP (= P t AP ) is diagonal. [T ]B . (b) A has an orthonormal set of n eigenvectors. If A is an n × n matrix. (a) ⇒ (b).4. As shown in the proof of Theorem 6. the matrix P is said to orthogonally diagonalize A. does there exists an orthogonal matrix P such that P −1 AP (= P t AP ) is diagonal? Definition 6. Since 158 . is diagonal? ♣Matrix Form of the Orthogonal Diagonalization Problem. does there exists an orthonormal basis B for V such that the matrix for T with respect to B.§6.2. the n column vectors of P are eigenvectors of A. A square matrix A is said to be symmetric if A = At . (c) A is symmetric. Given a square matrix A. Symmetric Matrices ♣The Orthogonal Diagonalization Problem. then the following are equivalent: (a) A is orthogonally diagonalizable. Suppose A is orthogonally diagonalizable. Proof.3 Diagonalization.

Since P is orthogonal, these column vectors are orthonormal by Theorem 4.15, so A has an orthonormal set of n eigenvectors.

(b) ⇒ (a). Suppose A has an orthonormal set of n eigenvectors {p1, . . . , pn}. As shown in the proof of Theorem 6.2, the matrix P with these eigenvectors as columns diagonalizes A. Since these eigenvectors are orthonormal, P is orthogonal by Theorem 4.15, and hence P orthogonally diagonalizes A.

(a) ⇒ (c). Suppose A is orthogonally diagonalizable. Then there exists an orthogonal matrix P such that P^{-1}AP = D, where D is diagonal. Thus D = P^{-1}AP implies A = PDP^{-1} = PDP^t, since P is orthogonal. Therefore

    A^t = (PDP^t)^t = PD^tP^t = PDP^t = A,

so A is symmetric.

(c) ⇒ (a). Omitted! (beyond the scope of this elementary course)

Theorem 6.6. If A is a symmetric matrix, then eigenvectors from different eigenspaces are orthogonal.

Proof. Omitted!

Remark 6.3. From Theorems 6.5 and 6.6, we obtain the following procedure for finding an orthogonal matrix P which orthogonally diagonalizes a symmetric matrix A:

Step 1. Find a basis for each eigenspace of A.
Step 2. Using the Gram-Schmidt process, change each basis found in Step 1 into an orthonormal basis for the corresponding eigenspace.
Step 3. Form the matrix P whose columns are the basis vectors constructed in Step 2.

Example 6.3. Find an orthogonal matrix P that diagonalizes the symmetric matrix

    A = [ 4 2 2 ]
        [ 2 4 2 ]
        [ 2 2 4 ].

Solution. The characteristic equation of A is

    0 = det(λI − A) = | λ−4  −2   −2  |
                      | −2   λ−4  −2  |
                      | −2   −2   λ−4 | = (λ − 2)^2(λ − 8).

The eigenvalues of A are λ = 2 and λ = 8.

We see that

    u1 = (−1, 1, 0)^t  and  u2 = (−1, 0, 1)^t

form a basis for the eigenspace corresponding to λ = 2, and the Gram-Schmidt process yields the orthonormal basis eigenvectors

    v1 = (−1/√2, 1/√2, 0)^t  and  v2 = (−1/√6, −1/√6, 2/√6)^t.

The eigenspace corresponding to λ = 8 has

    u3 = (1, 1, 1)^t

as a basis, and the Gram-Schmidt process yields

    v3 = (1/√3, 1/√3, 1/√3)^t.

Finally, form the matrix whose columns are v1, v2, v3:

    P = [ −1/√2  −1/√6  1/√3 ]
        [  1/√2  −1/√6  1/√3 ]
        [    0    2/√6  1/√3 ].
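The orthonormal vectors of Example 6.3 can be checked numerically: with P formed from them, P^tAP should be diagonal with the eigenvalues 2, 2, 8 on the diagonal. A Python sketch (floating-point, so the off-diagonal entries are only approximately zero):

```python
# Check of Example 6.3: with P built from the Gram-Schmidt vectors,
# P^t A P should be diag(2, 2, 8) up to floating-point rounding.

from math import sqrt

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(M):
    return [list(col) for col in zip(*M)]

A = [[4, 2, 2], [2, 4, 2], [2, 2, 4]]
P = transpose([
    [-1 / sqrt(2),  1 / sqrt(2),           0],   # v1 (λ = 2)
    [-1 / sqrt(6), -1 / sqrt(6), 2 / sqrt(6)],   # v2 (λ = 2)
    [ 1 / sqrt(3),  1 / sqrt(3), 1 / sqrt(3)],   # v3 (λ = 8)
])

PtAP = mat_mul(transpose(P), mat_mul(A, P))
diag_ok = all(abs(PtAP[i][j] - (0 if i != j else [2, 2, 8][i])) < 1e-9
              for i in range(3) for j in range(3))
print(diag_ok)   # True
```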

This matrix P orthogonally diagonalizes A.

Theorem 6.7. (a) The characteristic equation of a symmetric matrix A has only real roots.
(b) If an eigenvalue λ of a symmetric matrix A is a k-fold root of its characteristic equation, then the eigenspace corresponding to λ is k-dimensional.

Proof. Omitted!

♣Exercises 6.3

♣. Find the dimensions of the eigenspaces of the following symmetric matrices:

    1. [ 1 1 ]    2. [  1 −4  2 ]    3. [ 1 1 1 ]
       [ 1 1 ]       [ −4  1 −2 ]       [ 1 1 1 ]
                     [  2 −2 −2 ]       [ 1 1 1 ]

    4. [ 4 4 0 0 ]    5. [ 10/3 −4/3   0  −4/3 ]    6. [ 6 0 0 ]
       [ 4 4 0 0 ]       [ −4/3 −5/3   0   1/3 ]       [ 0 3 3 ]
       [ 0 0 0 0 ]       [   0    0   −2    0  ]       [ 0 3 3 ]
       [ 0 0 0 0 ]       [ −4/3  1/3   0  −5/3 ]

7. Find a matrix that orthogonally diagonalizes

    A = [ a b ]
        [ b a ],

where b ≠ 0.

8. Two n × n matrices A and B are called orthogonally similar if there is an orthogonal matrix P such that B = P^tAP. Show that if A is symmetric and A and B are orthogonally similar, then B is symmetric.