
SSM: Elementary Linear Algebra

Chapter 1: Systems of Linear Equations and Matrices

Section 1.1

Exercise Set 1.1

1. (a), (c), and (f) are linear equations in x1 , x2 , and x3 .

(b) is not linear because of the term x1 x3 .

(d) is not linear because of the term x^(−2).

(e) is not linear because of the term x1^(3/5).

3. (a) and (d) are linear systems.

(b) is not a linear system because the first and second equations are not linear.

(c) is not a linear system because the second equation is not linear.

5. By inspection, (a) and (d) are both consistent; x1 = 3, x2 = 2, x3 = −2, x4 = 1 is a solution of (a) and x1 = 1, x2 = 3, x3 = 2, x4 = 2 is a solution of (d). Note that both systems have infinitely many solutions.

7. (a), (d), and (e) are solutions.

(b) and (c) do not satisfy any of the equations.

9. (a) 7x − 5y = 3
x = (5/7)y + 3/7
Let y = t. The solution is
x = (5/7)t + 3/7
y = t

(b) −8x1 + 2x2 − 5x3 + 6x4 = 1
x1 = (1/4)x2 − (5/8)x3 + (3/4)x4 − 1/8
Let x2 = r, x3 = s, and x4 = t. The solution is
x1 = (1/4)r − (5/8)s + (3/4)t − 1/8
x2 = r
x3 = s
x4 = t
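The parametric answer can be verified by substitution; the following Python check (an illustration, not part of the manual) uses exact rational arithmetic:

```python
from fractions import Fraction as F

# Parametric solution of -8*x1 + 2*x2 - 5*x3 + 6*x4 = 1 from Exercise 9(b):
# x1 = (1/4)r - (5/8)s + (3/4)t - 1/8, x2 = r, x3 = s, x4 = t.
def solution(r, s, t):
    x1 = F(1, 4) * r - F(5, 8) * s + F(3, 4) * t - F(1, 8)
    return x1, F(r), F(s), F(t)

# Substituting any parameter values should satisfy the original equation.
for (r, s, t) in [(0, 0, 0), (1, 2, 3), (-4, 7, 5)]:
    x1, x2, x3, x4 = solution(r, s, t)
    assert -8 * x1 + 2 * x2 - 5 * x3 + 6 * x4 == 1
print("all parameter choices satisfy the equation")
```

Any choice of r, s, t yields a point satisfying the equation, which is what it means for the family to be the general solution.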

11. (a) [2 0 0; 3 −4 0; 0 1 1] corresponds to
2x1 = 0
3x1 − 4x2 = 0
x2 = 1.


(b) [3 0 −2 5; 7 1 4 −3; 0 −2 1 7] corresponds to
3x1 − 2x3 = 5
7x1 + x2 + 4x3 = −3
−2x2 + x3 = 7.

(c) [7 2 1 −3 5; 1 2 4 0 1] corresponds to
7x1 + 2x2 + x3 − 3x4 = 5
x1 + 2x2 + 4x3 = 1.

(d) [1 0 0 0 7; 0 1 0 0 −2; 0 0 1 0 3; 0 0 0 1 4] corresponds to
x1 = 7
x2 = −2
x3 = 3
x4 = 4.

13. (a) The augmented matrix for
−2x1 = 6
3x1 = 8
9x1 = −3
is [−2 6; 3 8; 9 −3].

(b) The augmented matrix for
6x1 − x2 + 3x3 = 4
5x2 − x3 = 1
is [6 −1 3 4; 0 5 −1 1].

(c) The augmented matrix for
2x2 − 3x4 + x5 = 0
−3x1 − x2 + x3 = −1
6x1 + 2x2 − x3 + 2x4 − 3x5 = 6
is [0 2 0 −3 1 0; −3 −1 1 0 0 −1; 6 2 −1 2 −3 6].

(d) The augmented matrix for x1 − x5 = 7 is [1 0 0 0 −1 7].

15. If (a, b, c) is a solution of the system, then ax1^2 + bx1 + c = y1, ax2^2 + bx2 + c = y2, and ax3^2 + bx3 + c = y3, which simply means that the points are on the curve.

17. The solutions of x1 + kx2 = c are x1 = c − kt, x2 = t, where t is any real number. If these satisfy x1 + lx2 = d, then c − kt + lt = d, or c − d = (k − l)t for all real numbers t. In particular, if t = 0, then c = d, and if t = 1, then k = l.

True/False 1.1

(a) True; x1 = x2 = ... = xn = 0 will be a solution.

(b) False; only multiplication by nonzero constants is acceptable.

(c) True; if k = 6 the system has infinitely many solutions, while if k ≠ 6, the system has no solution.

(d) True; the equation can be solved for one variable in terms of the other(s), yielding parametric equations that give infinitely many solutions.

(e) False; the system
3x − 5y = −7
2x + 9y = 20
6x − 10y = −14
has the solution x = 1, y = 2.

(f) False; multiplying an equation by a nonzero constant c does not change the solutions of the system.

(g) True; subtracting one equation from another is the same as multiplying an equation by −1 and adding it to another.

(h) False; the second row corresponds to the equation 0x1 + 0x2 = −1, or 0 = −1, which is false.

Section 1.2

Exercise Set 1.2

1. (a) The matrix is in both row echelon and reduced row echelon form.

(b) The matrix is in both row echelon and reduced row echelon form.

(c) The matrix is in both row echelon and reduced row echelon form.

(d) The matrix is in both row echelon and reduced row echelon form.

(e) The matrix is in both row echelon and reduced row echelon form.

(f) The matrix is in both row echelon and reduced row echelon form.

(g) The matrix is in row echelon form.

3. (a) The matrix corresponds to the system
x1 − 3x2 + 4x3 = 7
x2 + 2x3 = 2
x3 = 5
or x1 = 7 + 3x2 − 4x3, x2 = 2 − 2x3, x3 = 5.
Thus x3 = 5, x2 = 2 − 2(5) = −8, and x1 = 7 + 3(−8) − 4(5) = −37. The solution is x1 = −37, x2 = −8, x3 = 5.

(b) The matrix corresponds to the system
x1 + 8x3 − 5x4 = 6
x2 + 4x3 − 9x4 = 3
x3 + x4 = 2
or x1 = 6 − 8x3 + 5x4, x2 = 3 − 4x3 + 9x4, x3 = 2 − x4.
Let x4 = t, then x3 = 2 − t, x2 = 3 − 4(2 − t) + 9t = 13t − 5, and x1 = 6 − 8(2 − t) + 5t = 13t − 10. The solution is x1 = 13t − 10, x2 = 13t − 5, x3 = −t + 2, x4 = t.

(c) The matrix corresponds to the system
x1 + 7x2 − 2x3 − 8x5 = −3
x3 + x4 + 6x5 = 5
x4 + 3x5 = 9
or x1 = −3 − 7x2 + 2x3 + 8x5, x3 = 5 − x4 − 6x5, x4 = 9 − 3x5.
Let x2 = s and x5 = t, then x4 = 9 − 3t, x3 = 5 − (9 − 3t) − 6t = −3t − 4, and x1 = −3 − 7s + 2(−3t − 4) + 8t = −7s + 2t − 11. The solution is x1 = −7s + 2t − 11, x2 = s, x3 = −3t − 4, x4 = −3t + 9, x5 = t.

(d) The last line of the matrix corresponds to the equation 0x1 + 0x2 + 0x3 = 1, which is not satisfied by any values of x1, x2, and x3, so the system is inconsistent.

5. The augmented matrix is [1 1 2 8; −1 −2 3 1; 3 −7 4 10].
Add the first row to the second row and add −3 times the first row to the third row.
[1 1 2 8; 0 −1 5 9; 0 −10 −2 −14]
Multiply the second row by −1.
[1 1 2 8; 0 1 −5 −9; 0 −10 −2 −14]
Add 10 times the second row to the third row.
[1 1 2 8; 0 1 −5 −9; 0 0 −52 −104]
Multiply the third row by −1/52.
[1 1 2 8; 0 1 −5 −9; 0 0 1 2]
Add 5 times the third row to the second row and −2 times the third row to the first row.
[1 1 0 4; 0 1 0 1; 0 0 1 2]
Add −1 times the second row to the first row.
[1 0 0 3; 0 1 0 1; 0 0 1 2]
The solution is x1 = 3, x2 = 1, x3 = 2.

7. The augmented matrix is [1 −1 2 −1 −1; 2 1 −2 −2 −2; −1 2 −4 1 1; 3 0 0 −3 −3].
Add −2 times the first row to the second row, 1 times the first row to the third row, and −3 times the first row to the fourth row.
[1 −1 2 −1 −1; 0 3 −6 0 0; 0 1 −2 0 0; 0 3 −6 0 0]
Multiply the second row by 1/3 and then add −1 times the new second row to the third row and add −3 times the new second row to the fourth row.
[1 −1 2 −1 −1; 0 1 −2 0 0; 0 0 0 0 0; 0 0 0 0 0]
Add the second row to the first row.
[1 0 0 −1 −1; 0 1 −2 0 0; 0 0 0 0 0; 0 0 0 0 0]
The corresponding system of equations is
x − w = −1
y − 2z = 0
or x = w − 1, y = 2z.
Let z = s and w = t. The solution is x = t − 1, y = 2s, z = s, w = t.

9. In Exercise 5, the following row echelon matrix occurred.
[1 1 2 8; 0 1 −5 −9; 0 0 1 2]
The corresponding system of equations is
x1 + x2 + 2x3 = 8
x2 − 5x3 = −9
x3 = 2
or x1 = −x2 − 2x3 + 8, x2 = 5x3 − 9, x3 = 2.
Since x3 = 2, x2 = 5(2) − 9 = 1, and x1 = −1 − 2(2) + 8 = 3. The solution is x1 = 3, x2 = 1, x3 = 2.

11. From Exercise 7, one row echelon form of the augmented matrix is [1 −1 2 −1 −1; 0 1 −2 0 0; 0 0 0 0 0; 0 0 0 0 0].
The corresponding system is
x − y + 2z − w = −1
y − 2z = 0
or x = y − 2z + w − 1, y = 2z.
Let z = s and w = t. Then y = 2s and x = 2s − 2s + t − 1 = t − 1. The solution is x = t − 1, y = 2s, z = s, w = t.

13. Since the system has more unknowns (4) than equations (3), it has nontrivial solutions.

15. Since the system has more unknowns (3) than equations (2), it has nontrivial solutions.

17. The augmented matrix is [2 1 3 0; 1 2 0 0; 0 1 1 0].
Interchange the first and second rows, then add −2 times the new first row to the second row.
[1 2 0 0; 0 −3 3 0; 0 1 1 0]
Multiply the second row by −1/3, then add −1 times the new second row to the third row.
[1 2 0 0; 0 1 −1 0; 0 0 2 0]
Multiply the third row by 1/2, then add the new third row to the second row.
[1 2 0 0; 0 1 0 0; 0 0 1 0]
Add −2 times the second row to the first row.
[1 0 0 0; 0 1 0 0; 0 0 1 0]
The solution, which can be read from the matrix, is x1 = 0, x2 = 0, x3 = 0.

19. The augmented matrix is [3 1 1 1 0; 5 −1 1 −1 0].
Multiply the first row by 1/3, then add −5 times the new first row to the second row.
[1 1/3 1/3 1/3 0; 0 −8/3 −2/3 −8/3 0]
Multiply the second row by −3/8.
[1 1/3 1/3 1/3 0; 0 1 1/4 1 0]
Add −1/3 times the second row to the first row.
[1 0 1/4 0 0; 0 1 1/4 1 0]
This corresponds to the system
x1 + (1/4)x3 = 0
x2 + (1/4)x3 + x4 = 0
or x1 = −(1/4)x3, x2 = −(1/4)x3 − x4.
Let x3 = 4s and x4 = t. Then x1 = −s and x2 = −s − t. The solution is x1 = −s, x2 = −s − t, x3 = 4s, x4 = t.

21. The augmented matrix is [0 2 2 4 0; 1 0 −1 −3 0; 2 3 1 1 0; −2 1 3 −2 0].
Interchange the first and second rows.
[1 0 −1 −3 0; 0 2 2 4 0; 2 3 1 1 0; −2 1 3 −2 0]
Add −2 times the first row to the third row and 2 times the first row to the fourth row.
[1 0 −1 −3 0; 0 2 2 4 0; 0 3 3 7 0; 0 1 1 −8 0]
Multiply the second row by 1/2, then add −3 times the new second row to the third row and −1 times the new second row to the fourth row.
[1 0 −1 −3 0; 0 1 1 2 0; 0 0 0 1 0; 0 0 0 −10 0]
Add 10 times the third row to the fourth row, −2 times the third row to the second row, and 3 times the third row to the first row.
[1 0 −1 0 0; 0 1 1 0 0; 0 0 0 1 0; 0 0 0 0 0]
The corresponding system is
w − y = 0
x + y = 0
z = 0
or w = y, x = −y, z = 0.
Let y = t. The solution is w = t, x = −t, y = t, z = 0.

23. The augmented matrix is [2 −1 3 4 9; 1 0 −2 7 11; 3 −3 1 5 8; 2 1 4 4 10].
Interchange the first and second rows.
[1 0 −2 7 11; 2 −1 3 4 9; 3 −3 1 5 8; 2 1 4 4 10]
Add −2 times the first row to the second and fourth rows, and add −3 times the first row to the third row.
[1 0 −2 7 11; 0 −1 7 −10 −13; 0 −3 7 −16 −25; 0 1 8 −10 −12]
Multiply the second row by −1, then add 3 times the new second row to the third row and −1 times the new second row to the fourth row.
[1 0 −2 7 11; 0 1 −7 10 13; 0 0 −14 14 14; 0 0 15 −20 −25]
Multiply the third row by −1/14, then add −15 times the new third row to the fourth row.
[1 0 −2 7 11; 0 1 −7 10 13; 0 0 1 −1 −1; 0 0 0 −5 −10]
Multiply the fourth row by −1/5, then add the new fourth row to the third row, add −10 times the new fourth row to the second row, and add −7 times the new fourth row to the first row.
[1 0 −2 0 −3; 0 1 −7 0 −7; 0 0 1 0 1; 0 0 0 1 2]
Add 7 times the third row to the second row and 2 times the third row to the first row.
[1 0 0 0 −1; 0 1 0 0 0; 0 0 1 0 1; 0 0 0 1 2]
The solution, which can be read from the matrix, is I1 = −1, I2 = 0, I3 = 1, I4 = 2.

25. The augmented matrix is [1 2 −3 4; 3 −1 5 2; 4 1 a^2 − 14 a + 2].
Add −3 times the first row to the second row and −4 times the first row to the third row.
[1 2 −3 4; 0 −7 14 −10; 0 −7 a^2 − 2 a − 14]
Add −1 times the second row to the third row.
[1 2 −3 4; 0 −7 14 −10; 0 0 a^2 − 16 a − 4]
The third row corresponds to the equation (a^2 − 16)z = a − 4, or (a + 4)(a − 4)z = a − 4.
If a = −4, this equation is 0z = −8, which has no solution.
If a = 4, this equation is 0z = 0, and z is a free variable. For any other value of a, the solution of this equation is z = 1/(a + 4). Thus, if a = 4, the system has infinitely many solutions; if a = −4, the system has no solution; if a ≠ ±4, the system has exactly one solution.

27. The augmented matrix is [1 2 1; 2 a^2 − 5 a − 1].
Add −2 times the first row to the second row.
[1 2 1; 0 a^2 − 9 a − 3]
The second row corresponds to the equation (a^2 − 9)y = a − 3, or (a + 3)(a − 3)y = a − 3.
If a = −3, this equation is 0y = −6, which has no solution. If a = 3, this equation is 0y = 0, and y is a free variable. For any other value of a, the solution of this equation is y = 1/(a + 3).
Thus, if a = −3, the system has no solution; if a = 3, the system has infinitely many solutions; if a ≠ ±3, the system has exactly one solution.

29. The augmented matrix is [2 1 a; 3 6 b].
Multiply the first row by 1/2, then add −3 times the new first row to the second row.
[1 1/2 a/2; 0 9/2 −3a/2 + b]
Multiply the second row by 2/9, then add −1/2 times the new second row to the first row.
[1 0 2a/3 − b/9; 0 1 −a/3 + 2b/9]
The solution is x = 2a/3 − b/9, y = −a/3 + 2b/9.

31. Add −2 times the first row to the second row.
[1 3; 0 1] is in row echelon form. Add −3 times the second row to the first row.
[1 0; 0 1] is another row echelon form for the matrix.

33. Let x = sin α, y = cos β, and z = tan γ; then the system is
x + 2y + 3z = 0
2x + 5y + 3z = 0
−x − 5y + 5z = 0.
The augmented matrix is [1 2 3 0; 2 5 3 0; −1 −5 5 0].
Add −2 times the first row to the second row, and add the first row to the third row.
[1 2 3 0; 0 1 −3 0; 0 −3 8 0]
Add 3 times the second row to the third row.
[1 2 3 0; 0 1 −3 0; 0 0 −1 0]
Multiply the third row by −1, then add 3 times the new third row to the second row and −3 times the new third row to the first row.
[1 2 0 0; 0 1 0 0; 0 0 1 0]
Add −2 times the second row to the first row.
[1 0 0 0; 0 1 0 0; 0 0 1 0]
The system has only the trivial solution, which corresponds to sin α = 0, cos β = 0, tan γ = 0.
For 0 ≤ α ≤ 2π, sin α = 0 ⇒ α = 0, π, 2π.
For 0 ≤ β ≤ 2π, cos β = 0 ⇒ β = π/2, 3π/2.
For 0 ≤ γ ≤ 2π, tan γ = 0 ⇒ γ = 0, π, 2π.
Thus, the original system has 3 · 2 · 3 = 18 solutions.

35. Let X = x^2, Y = y^2, and Z = z^2; then the system is
X + Y + Z = 6
X − Y + 2Z = 2
2X + Y − Z = 3.
The augmented matrix is [1 1 1 6; 1 −1 2 2; 2 1 −1 3].
Add −1 times the first row to the second row and −2 times the first row to the third row.
[1 1 1 6; 0 −2 1 −4; 0 −1 −3 −9]
Multiply the second row by −1/2, then add the new second row to the third row.
[1 1 1 6; 0 1 −1/2 2; 0 0 −7/2 −7]
Multiply the third row by −2/7, then add 1/2 times the new third row to the second row and −1 times the new third row to the first row.
[1 1 0 4; 0 1 0 3; 0 0 1 2]
Add −1 times the second row to the first row.
[1 0 0 1; 0 1 0 3; 0 0 1 2]
This corresponds to the system X = 1, Y = 3, Z = 2.
X = 1 ⇒ x = ±1
Y = 3 ⇒ y = ±√3
Z = 2 ⇒ z = ±√2
The solutions are x = ±1, y = ±√3, z = ±√2.

37. (0, 10): d = 10
(1, 7): a + b + c + d = 7
(3, −11): 27a + 9b + 3c + d = −11
(4, −14): 64a + 16b + 4c + d = −14
The system is
a + b + c + d = 7
27a + 9b + 3c + d = −11
64a + 16b + 4c + d = −14
d = 10
and the augmented matrix is [1 1 1 1 7; 27 9 3 1 −11; 64 16 4 1 −14; 0 0 0 1 10].
Add −27 times the first row to the second row and −64 times the first row to the third row.
[1 1 1 1 7; 0 −18 −24 −26 −200; 0 −48 −60 −63 −462; 0 0 0 1 10]
Multiply the second row by −1/18, then add 48 times the new second row to the third row.
[1 1 1 1 7; 0 1 4/3 13/9 100/9; 0 0 4 19/3 214/3; 0 0 0 1 10]
Multiply the third row by 1/4.
[1 1 1 1 7; 0 1 4/3 13/9 100/9; 0 0 1 19/12 107/6; 0 0 0 1 10]
Add −19/12 times the fourth row to the third row, −13/9 times the fourth row to the second row, and −1 times the fourth row to the first row.
[1 1 1 0 −3; 0 1 4/3 0 −10/3; 0 0 1 0 2; 0 0 0 1 10]
Add −4/3 times the third row to the second row and −1 times the third row to the first row.
[1 1 0 0 −5; 0 1 0 0 −6; 0 0 1 0 2; 0 0 0 1 10]
Add −1 times the second row to the first row.
[1 0 0 0 1; 0 1 0 0 −6; 0 0 1 0 2; 0 0 0 1 10]
The coefficients are a = 1, b = −6, c = 2, d = 10.

39. Since the homogeneous system has only the trivial solution, using the same steps of Gauss-Jordan elimination will reduce the augmented matrix of the nonhomogeneous system to the form [1 0 0 d1; 0 1 0 d2; 0 0 1 d3].
Thus the system has exactly one solution.

41. (a) If a ≠ 0, the sequence of matrices is
[a b; c d] → [1 b/a; c d] → [1 b/a; 0 (ad − bc)/a] → [1 b/a; 0 1] → [1 0; 0 1].
If a = 0, the sequence of matrices is
[0 b; c d] → [c d; 0 b] → [1 d/c; 0 b] → [1 d/c; 0 1] → [1 0; 0 1].

(b) Since ad − bc ≠ 0, the lines are neither identical nor parallel, so the augmented matrix [a b k; c d l] will reduce to [1 0 k1; 0 1 l1] and the system has exactly one solution.

43. (a) There are eight possibilities. p and q are any real numbers.
[1 0 0; 0 1 0; 0 0 1], [1 0 p; 0 1 q; 0 0 0], [1 p 0; 0 0 1; 0 0 0], [0 1 0; 0 0 1; 0 0 0], [1 p q; 0 0 0; 0 0 0], [0 1 p; 0 0 0; 0 0 0], [0 0 1; 0 0 0; 0 0 0], and [0 0 0; 0 0 0; 0 0 0]

(b) There are 16 possibilities. p, q, r, and s are any real numbers.
[1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1], [1 0 0 p; 0 1 0 q; 0 0 1 r; 0 0 0 0], [1 0 p 0; 0 1 q 0; 0 0 0 1; 0 0 0 0], [1 p 0 0; 0 0 1 0; 0 0 0 1; 0 0 0 0], [0 1 0 0; 0 0 1 0; 0 0 0 1; 0 0 0 0], [1 0 p q; 0 1 r s; 0 0 0 0; 0 0 0 0], [1 p 0 q; 0 0 1 r; 0 0 0 0; 0 0 0 0], [1 p q 0; 0 0 0 1; 0 0 0 0; 0 0 0 0], [0 1 0 p; 0 0 1 q; 0 0 0 0; 0 0 0 0], [0 1 p 0; 0 0 0 1; 0 0 0 0; 0 0 0 0], [0 0 1 0; 0 0 0 1; 0 0 0 0; 0 0 0 0], [1 p q r; 0 0 0 0; 0 0 0 0; 0 0 0 0], [0 1 p q; 0 0 0 0; 0 0 0 0; 0 0 0 0], [0 0 1 p; 0 0 0 0; 0 0 0 0; 0 0 0 0], [0 0 0 1; 0 0 0 0; 0 0 0 0; 0 0 0 0], and [0 0 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 0]
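Exercise 37's interpolating cubic can be sanity-checked by direct substitution; a brief Python check (illustrative, not part of the manual):

```python
# Exercise 37 found p(x) = a*x^3 + b*x^2 + c*x + d with a=1, b=-6, c=2, d=10.
def p(x, coeffs=(1, -6, 2, 10)):
    a, b, c, d = coeffs
    return a * x**3 + b * x**2 + c * x + d

# The cubic should pass through the four given points.
points = [(0, 10), (1, 7), (3, -11), (4, -14)]
assert all(p(x) == y for x, y in points)
print("the cubic interpolates all four points")
```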

5 + 1 2 + 3⎤

⎡ 1+ 6

3. (a) D + E = ⎢ −1 + (−1) 0 + 1 1 + 2 ⎥

⎢

⎥

2 + 1 4 + 3⎦

⎣ 3+ 4

⎡ 7 6 5⎤

= ⎢ −2 1 3⎥

⎢

⎥

⎣ 7 3 7⎦

True/False 1.2

(a) True; reduced row echelon form is a row echelon

form.

(b) False; for instance, adding a nonzero multiple of

the first row to any other row will result in a

matrix that is not in row echelon form.

⎡ 1− 6

(b) D − E = ⎢ −1 − (−1)

⎢

⎣ 3− 4

⎡ −5 4

= ⎢ 0 −1

⎢

⎣ −1 1

**(c) False; see Exercise 31.
**

(d) True; the sum of the number of leading 1’s and

the number of free variables is n.

(e) True; a column cannot contain more than one

leading 1.

5 − 1 2 − 3⎤

0 −1 1 − 2 ⎥

⎥

2 − 1 4 − 3⎦

−1⎤

−1⎥

⎥

1⎦

⎡ 15 0 ⎤

(c) 5 A = ⎢ −5 10 ⎥

⎢

⎥

⎣ 5 5⎦

**(f) False; in row echelon form, there can be nonzero
**

entries above the leading 1 in a column.

(g) True; since there are n leading 1’s, there are no

free variables, so only the trivial solution is

possible.

⎡ −7 −28 −14 ⎤

(d) −7C = ⎢

⎣ −21 −7 −35⎥⎦

**(h) False; if the system has more equations than
**

unknowns, a row of zeros does not indicate

infinitely many solutions.

**(e) 2B − C is not defined since 2B is a 2 × 2
**

matrix and C is a 2 × 3 matrix.

(i) False; the system could be inconsistent.

⎡ 24

(f) 4 E − 2 D = ⎢ −4

⎢

⎣ 16

⎡ 22

= ⎢ −2

⎢

⎣ 10

Section 1.3

Exercise Set 1.3

1. (a) BA is undefined.

(b) AC + D is 4 × 2.

4 12 ⎤ ⎡ 2 10 4 ⎤

4 8 ⎥ − ⎢ −2 0 2 ⎥

⎥ ⎢

⎥

4 12 ⎦ ⎣ 6 4 8⎦

−6 8 ⎤

4 6⎥

⎥

0 4⎦

(g) −3( D + 2 E )

⎛ ⎡ 1 5 2⎤ ⎡ 12 2 6 ⎤ ⎞

= −3 ⎜ ⎢ −1 0 1⎥ + ⎢ −2 2 4 ⎥ ⎟

⎜⎜ ⎢

⎥ ⎢

⎥ ⎟⎟

⎝ ⎣ 3 2 4⎦ ⎣ 8 2 6 ⎦ ⎠

⎡ 13 7 8⎤

= −3 ⎢ −3 2 5 ⎥

⎢

⎥

⎣ 11 4 10 ⎦

⎡ −39 −21 −24 ⎤

= ⎢ 9 −6 −15⎥

⎢

⎥

⎣ −33 −12 −30 ⎦

**(c) AE + B is undefined, since AE is 4 × 4 not
**

4 × 5.

(d) AB + B is undefined since AB is undefined.

(e) E(A + B) is 5 × 5.

(f) E(AC) is 5 × 2.

(g) E T A is undefined since ET is 4 × 5.

(h) ( AT + E ) D is 5 × 2.

⎡0 0⎤

(h) A − A = ⎢ 0 0 ⎥

⎢

⎥

⎣0 0⎦

(i) tr(D) = 1 + 0 + 4 = 5

9

Chapter 1: Systems of Linear Equations and Matrices

(j)

SSM: Elementary Linear Algebra

⎛ ⎡ 1 5 2 ⎤ ⎡ 18 3 9 ⎤ ⎞

tr( D − 3E ) = tr ⎜ ⎢ −1 0 1⎥ − ⎢ −3 3 6 ⎥ ⎟

⎜⎜ ⎢

⎥ ⎢

⎥ ⎟⎟

⎝ ⎣ 3 2 4 ⎦ ⎣ 12 3 9 ⎦ ⎠

⎛ ⎡ −17 2 −7 ⎤ ⎞

= tr ⎜ ⎢ 2 −3 −5⎥ ⎟

⎜⎜ ⎢

⎥ ⎟⎟

⎝ ⎣ −9 −1 −5⎦ ⎠

= −17 − 3 − 5

= −25

⎛ ⎡ 28 −7 ⎤ ⎞

(k) 4 tr(7 B) = 4 tr ⎜ ⎢

⎥⎟

⎝ ⎣ 0 14 ⎦ ⎠

= 4(28 + 14)

= 4(42)

= 168

(l) tr(A) is not defined because A is not a square matrix.

⎡ 3 0⎤

⎡ 4 −1⎤

5. (a) AB = ⎢ −1 2 ⎥ ⎢

⎢

⎥ ⎣ 0 2 ⎥⎦

⎣ 1 1⎦

⎡ (3 ⋅ 4) + (0 ⋅ 0) −(3 ⋅1) + (0 ⋅ 2) ⎤

= ⎢ −(1 ⋅ 4) + (2 ⋅ 0) (1⋅1) + (2 ⋅ 2) ⎥

⎢

⎥

⎣ (1 ⋅ 4) + (1 ⋅ 0) −(1 ⋅1) + (1 ⋅ 2) ⎦

⎡ 12 −3⎤

= ⎢ −4 5 ⎥

⎢

⎥

1⎦

⎣ 4

(b) BA is not defined since B is a 2 × 2 matrix and A is 3 × 2.

⎡ 18 3 9 ⎤ ⎡ 1 5 2 ⎤

(c) (3E ) D = ⎢ −3 3 6 ⎥ ⎢ −1 0 1⎥

⎢

⎥⎢

⎥

⎣ 12 3 9 ⎦ ⎣ 3 2 4 ⎦

⎡ (18 ⋅1) − (3 ⋅1) + (9 ⋅ 3) (18 ⋅ 5) + (3 ⋅ 0) + (9 ⋅ 2) (18 ⋅ 2) + (3 ⋅1) + (9 ⋅ 4) ⎤

= ⎢ −(3 ⋅1) − (3 ⋅1) + (6 ⋅ 3) −(3 ⋅ 5) + (3 ⋅ 0) + (6 ⋅ 2) −(3 ⋅ 2) + (3 ⋅1) + (6 ⋅ 4) ⎥

⎢

⎥

⎣ (12 ⋅1) − (3 ⋅1) + (9 ⋅ 3) (12 ⋅ 5) + (3 ⋅ 0) + (9 ⋅ 2) (12 ⋅ 2) + (3 ⋅1) + (9 ⋅ 4) ⎦

⎡ 42 108 75⎤

= ⎢ 12 −3 21⎥

⎢

⎥

⎣ 36 78 63⎦

⎡ 12 −3⎤

⎡1 4

(d) ( AB)C = ⎢ −4 5⎥ ⎢

⎢

⎥ ⎣3 1

1⎦

⎣ 4

⎡ (12 ⋅1) − (3 ⋅ 3)

= ⎢ −(4 ⋅1) + (5 ⋅ 3)

⎢

⎣ (4 ⋅1) + (1 ⋅ 3)

⎡ 3 45 9 ⎤

= ⎢11 −11 17 ⎥

⎢

⎥

⎣ 7 17 13⎦

2⎤

5⎥⎦

(12 ⋅ 4) − (3 ⋅1) (12 ⋅ 2) − (3 ⋅ 5) ⎤

−(4 ⋅ 4) + (5 ⋅1) −(4 ⋅ 2) + (5 ⋅ 5) ⎥

⎥

(4 ⋅ 4) + (1 ⋅1)

(4 ⋅ 2) + (1 ⋅ 5) ⎦

10

SSM: Elementary Linear Algebra

Section 1.3

⎡ 3 0⎤

⎛ ⎡ 4 −1⎤ ⎡ 1 4 2 ⎤ ⎞

(e) A( BC ) = ⎢ −1 2 ⎥ ⎜ ⎢

⎢

⎥ ⎝ ⎣ 0 2 ⎥⎦ ⎢⎣3 1 5⎥⎦ ⎟⎠

⎣ 1 1⎦

⎡ 3 0⎤

⎡ (4 ⋅1) − (1 ⋅ 3) (4 ⋅ 4) − (1 ⋅1) (4 ⋅ 2) − (1 ⋅ 5) ⎤

= ⎢ −1 2 ⎥ ⎢

⎢

⎥ ⎣(0 ⋅1) + (2 ⋅ 3) (0 ⋅ 4) + (2 ⋅1) (0 ⋅ 2) + (2 ⋅ 5) ⎥⎦

⎣ 1 1⎦

⎡ 3 0⎤

⎡ 1 15 3⎤

= ⎢ −1 2 ⎥ ⎢

⎢

⎥ ⎣6 2 10⎥⎦

⎣ 1 1⎦

⎡ (3 ⋅1) + (0 ⋅ 6) (3 ⋅15) + (0 ⋅ 2) (3 ⋅ 3) + (0 ⋅10) ⎤

= ⎢ −(1 ⋅1) + (2 ⋅ 6) −(1⋅15) + (2 ⋅ 2) −(1 ⋅ 3) + (2 ⋅10) ⎥

⎢

⎥

(1⋅15) + (1 ⋅ 2)

(1 ⋅ 3) + (1 ⋅10) ⎦

⎣ (1 ⋅1) + (1 ⋅ 6)

⎡ 3 45 9⎤

= ⎢11 −11 17 ⎥

⎢

⎥

⎣ 7 17 13⎦

⎡ 1 3⎤

⎡ 1 4 2⎤ ⎢

4 1⎥

(f) CCT = ⎢

⎣3 1 5⎥⎦ ⎢ 2 5⎥

⎣

⎦

⎡(1 ⋅1) + (4 ⋅ 4) + (2 ⋅ 2) (1 ⋅ 3) + (4 ⋅1) + (2 ⋅ 5) ⎤

=⎢

⎣ (3 ⋅1) + (1 ⋅ 4) + (5 ⋅ 2) (3 ⋅ 3) + (1⋅1) + (5 ⋅ 5) ⎥⎦

⎡ 21 17 ⎤

=⎢

⎣17 35⎥⎦

T

⎛ ⎡ 1 5 2⎤ ⎡ 3 0⎤ ⎞

(g) ( DA) = ⎜ ⎢ −1 0 1⎥ ⎢ −1 2 ⎥ ⎟

⎜⎜ ⎢

⎥⎢

⎥ ⎟⎟

⎝ ⎣ 3 2 4⎦ ⎣ 1 1⎦ ⎠

T

T

⎛ ⎡ (1⋅ 3) − (5 ⋅1) + (2 ⋅1) (1 ⋅ 0) + (5 ⋅ 2) + (2 ⋅1) ⎤ ⎞

= ⎜ ⎢ −(1 ⋅ 3) − (0 ⋅1) + (1 ⋅1) −(1 ⋅ 0) + (0 ⋅ 2) + (1 ⋅1) ⎥ ⎟

⎜⎜ ⎢

⎥ ⎟⎟

⎝ ⎣ (3 ⋅ 3) − (2 ⋅1) + (4 ⋅1) (3 ⋅ 0) + (2 ⋅ 2) + (4 ⋅1) ⎦ ⎠

T

⎛ ⎡ 0 12⎤ ⎞

= ⎜ ⎢ −2 1⎥ ⎟

⎜⎜ ⎢

⎥ ⎟⎟

⎝ ⎣ 11 8⎦ ⎠

⎡ 0 −2 11⎤

=⎢

1 8⎥⎦

⎣12

11

Chapter 1: Systems of Linear Equations and Matrices

SSM: Elementary Linear Algebra

⎛ ⎡ 1 3⎤

⎞

⎡ 4 −1⎤ ⎟ ⎡ 3 −1 1⎤

(h) (C T B) AT = ⎜ ⎢ 4 1⎥ ⎢

⎜⎜ ⎢

⎥ ⎣ 0 2⎥⎦ ⎟⎟ ⎢⎣ 0 2 1⎥⎦

⎝ ⎣ 2 5⎦

⎠

⎡ (1 ⋅ 4) + (3 ⋅ 0) −(1 ⋅1) + (3 ⋅ 2) ⎤

⎡ 3 −1 1⎤

= ⎢ (4 ⋅ 4) + (1 ⋅ 0) −(4 ⋅1) + (1 ⋅ 2) ⎥ ⎢

⎢

⎥ ⎣ 0 2 1⎥⎦

⎣(2 ⋅ 4) + (5 ⋅ 0) −(2 ⋅1) + (5 ⋅ 2) ⎦

5⎤

⎡ 4

⎡ 3 −1 1⎤

= ⎢16 −2⎥ ⎢

⎢

⎥ ⎣ 0 2 1⎥⎦

⎣ 8 8⎦

⎡ (4 ⋅ 3) + (5 ⋅ 0) −(4 ⋅1) + (5 ⋅ 2) (4 ⋅1) + (5 ⋅1) ⎤

= ⎢(16 ⋅ 3) − (2 ⋅ 0) −(16 ⋅1) − (2 ⋅ 2) (16 ⋅1) − (2 ⋅1) ⎥

⎢

⎥

⎣ (8 ⋅ 3) + (8 ⋅ 0) −(8 ⋅1) + (8 ⋅ 2) (8 ⋅1) + (8 ⋅1) ⎦

6 9⎤

⎡ 12

= ⎢ 48 −20 14 ⎥

⎢

⎥

8 16 ⎦

⎣ 24

(i)

⎛ ⎡ 1 5 2 ⎤ ⎡ 1 − 1 3⎤ ⎞

tr( DDT ) = tr ⎜ ⎢ −1 0 1⎥ ⎢ 5 0 2 ⎥ ⎟

⎜⎜ ⎢

⎥⎢

⎥ ⎟⎟

⎝ ⎣ 3 2 4⎦ ⎣2 1 4⎦ ⎠

⎛ ⎡ (1 ⋅1) + (5 ⋅ 5) + (2 ⋅ 2) −(1 ⋅1) + (5 ⋅ 0) + (2 ⋅1) (1⋅ 3) + (5 ⋅ 2) + (2 ⋅ 4) ⎤ ⎞

= tr ⎜ ⎢ −(1 ⋅1) + (0 ⋅ 5) + (1 ⋅ 2) (1⋅1) + (0 ⋅ 0) + (1 ⋅1) −(1 ⋅ 3) + (0 ⋅ 2) + (1⋅ 4) ⎥ ⎟

⎜⎜ ⎢

⎥ ⎟⎟

⎝ ⎣ (3 ⋅1) + (2 ⋅ 5) + (4 ⋅ 2) −(3 ⋅1) + (2 ⋅ 0) + (4 ⋅1) (3 ⋅ 3) + (2 ⋅ 2) + (4 ⋅ 4) ⎦ ⎠

⎛ ⎡30 1 21⎤ ⎞

1⎥ ⎟

= tr ⎜ ⎢ 1 2

⎜⎜ ⎢

⎥ ⎟⎟

⎝ ⎣ 21 1 29 ⎦ ⎠

= 30 + 2 + 29

= 61

(j)

⎛ ⎡ 24 −4 16 ⎤ ⎡ 1 5 2 ⎤ ⎞

tr(4 ET − D) = tr ⎜ ⎢ 4 4 4 ⎥ − ⎢ −1 0 1⎥ ⎟

⎜⎜ ⎢

⎥ ⎢

⎥⎟

8 12 ⎦ ⎣ 3 2 4 ⎦ ⎟⎠

⎝ ⎣ 12

⎛ ⎡ 23 −9 14 ⎤ ⎞

= tr ⎜ ⎢ 5 4 3⎥ ⎟

⎜⎜ ⎢

⎥ ⎟⎟

⎝ ⎣ 9 6 8⎦ ⎠

= 23 + 4 + 8

= 35

12

SSM: Elementary Linear Algebra

Section 1.3

⎛ ⎡ 1 3⎤

⎡ 3 −1

(k) tr(C T AT + 2 ET ) = tr ⎜ ⎢ 4 1⎥ ⎢

⎜⎜ ⎢

⎥ ⎣0 2

⎝ ⎣ 2 5⎦

⎛ ⎡ (1⋅ 3) + (3 ⋅ 0)

= tr ⎜ ⎢ (4 ⋅ 3) + (1 ⋅ 0)

⎜⎜ ⎢(2 ⋅ 3) + (5 ⋅ 0)

⎝⎣

⎛⎡ 3 5

= tr ⎜ ⎢12 −2

⎜⎜ ⎢

8

⎝⎣ 6

⎛ ⎡15 3

= tr ⎜ ⎢14 0

⎜⎜ ⎢

⎝ ⎣12 12

= 15 + 0 + 13

= 28

(l)

⎡12 −2

1⎤ ⎢

+

2 2

1⎥⎦ ⎢

6

4

⎣

−(1⋅1) + (3 ⋅ 2)

−(4 ⋅1) + (1 ⋅ 2)

−(2 ⋅1) + (5 ⋅ 2)

8⎤ ⎞

2⎥ ⎟

⎥⎟

6 ⎦ ⎟⎠

(1 ⋅1) + (3 ⋅1) ⎤ ⎡12 −2 8⎤ ⎞

(4 ⋅1) + (1 ⋅1) ⎥ + ⎢ 2 2 2 ⎥ ⎟

⎥ ⎢

⎥⎟

(2 ⋅1) + (5 ⋅1) ⎦ ⎣ 6 4 6 ⎦ ⎟⎠

4 ⎤ ⎡12 −2 8⎤ ⎞

5⎥ + ⎢ 2 2 2 ⎥ ⎟

⎥ ⎢

⎥⎟

7 ⎦ ⎣ 6 4 6 ⎦ ⎟⎠

12⎤ ⎞

7⎥ ⎟

⎥⎟

13⎦ ⎟⎠

⎛ ⎛ ⎡ 6 1 3⎤ ⎡ 1 3⎤ ⎞T ⎡ 3 0 ⎤ ⎞

⎜

⎟

tr(( ECT )T A) = tr ⎜ ⎜ ⎢ −1 1 2 ⎥ ⎢ 4 1⎥ ⎟ ⎢ −1 2 ⎥ ⎟

⎜⎜ ⎢

⎟

⎜ ⎝ ⎣ 4 1 3⎥⎦ ⎢⎣ 2 5⎥⎦ ⎟⎠ ⎢⎣ 1 1⎥⎦ ⎟

⎝

⎠

⎛ ⎛ ⎡ (6 ⋅1) + (1 ⋅ 4) + (3 ⋅ 2) (6 ⋅ 3) + (1 ⋅1) + (3 ⋅ 5) ⎤ ⎞T

⎜

= tr ⎜ ⎜ ⎢ −(1⋅1) + (1 ⋅ 4) + (2 ⋅ 2) −(1 ⋅ 3) + (1⋅1) + (2 ⋅ 5) ⎥ ⎟

⎜

⎟

⎜ ⎝⎜ ⎢⎣ (4 ⋅1) + (1⋅ 4) + (3 ⋅ 2) (4 ⋅ 3) + (1 ⋅1) + (3 ⋅ 5) ⎥⎦ ⎟⎠

⎝

⎛ ⎛ ⎡16 34 ⎤T ⎞ ⎡ 3 0 ⎤ ⎞

⎜⎜

⎟

⎟

= tr ⎜ ⎜ ⎢ 7 8⎥ ⎟ ⎢ −1 2 ⎥ ⎟

⎢

⎥

⎢

⎥

⎜ ⎜ ⎣14 28⎦ ⎟ ⎣ 1 1⎦ ⎟

⎠

⎝⎝

⎠

⎛

⎡ 3 0⎤ ⎞

⎡16 7 14 ⎤ ⎢

= tr ⎜ ⎢

−1 2 ⎥ ⎟

⎜⎜ ⎣34 8 28⎥⎦ ⎢

⎥ ⎟⎟

⎣ 1 1⎦ ⎠

⎝

⎡ 3 0 ⎤ ⎞⎟

⎢ −1 2 ⎥ ⎟

⎢

⎥

⎣ 1 1⎦ ⎟⎠

⎛ ⎡ (16 ⋅ 3) − (7 ⋅1) + (14 ⋅1) (16 ⋅ 0) + (7 ⋅ 2) + (14 ⋅1)⎤ ⎞

= tr ⎜ ⎢

⎥⎟

⎝ ⎣(34 ⋅ 3) − (8 ⋅1) + (28 ⋅1) (34 ⋅ 0) + (8 ⋅ 2) + (28 ⋅1) ⎦ ⎠

⎛ ⎡ 55 28⎤ ⎞

= tr ⎜ ⎢

⎥⎟

⎝ ⎣122 44 ⎦ ⎠

= 55 + 44

= 99

7. (a) The first row of AB is the first row of A times B.

⎡ 6 −2 4 ⎤

1 3⎥

[a1 B] = [3 −2 7] ⎢ 0

⎢

⎥

⎣ 7 7 5⎦

= [18 + 0 + 49 −6 − 2 + 49 12 − 6 + 35]

= [67 41 41]

13

Chapter 1: Systems of Linear Equations and Matrices

SSM: Elementary Linear Algebra

**(f) The third column of AA is A times the third
**

column of A.

⎡ 3 −2 7 ⎤ ⎡7 ⎤

[ A a3 ] = ⎢ 6 5 4 ⎥ ⎢ 4 ⎥

⎢

⎥⎢ ⎥

⎣ 0 4 9 ⎦ ⎣9 ⎦

⎡ 21 − 8 + 63 ⎤

= ⎢ 42 + 20 + 36 ⎥

⎢

⎥

⎣ 0 + 16 + 81 ⎦

⎡ 76 ⎤

= ⎢ 98 ⎥

⎢ ⎥

⎣97 ⎦

**(b) The third row of AB is the third row of A
**

times B.

[ a3 B ]

⎡ 6 −2 4 ⎤

1 3⎥

= [0 4 9] ⎢ 0

⎢

⎥

⎣ 7 7 5⎦

= [0 + 0 + 63 0 + 4 + 63 0 + 12 + 45]

= [63 67 57]

**(c) The second column of AB is A times the
**

second column of B.

⎡ 3 −2 7 ⎤ ⎡ −2 ⎤

[ A b 2 ] = ⎢6

5 4 ⎥ ⎢ 1⎥

⎢

⎥⎢ ⎥

⎣0 4 9 ⎦ ⎣ 7 ⎦

⎡ −6 − 2 + 49 ⎤

= ⎢ −12 + 5 + 28⎥

⎢

⎥

⎣ 0 + 4 + 63 ⎦

⎡ 41⎤

= ⎢ 21⎥

⎢ ⎥

⎣67 ⎦

⎡ −3 12 76 ⎤

9. (a) AA = ⎢ 48 29 98⎥

⎢

⎥

⎣ 24 56 97 ⎦

⎡ −3 ⎤

⎡ 3⎤

⎡ −2 ⎤

⎢ 48⎥ = 3 ⎢6 ⎥ + 6 ⎢ 5⎥

⎢ ⎥

⎢ ⎥

⎢ ⎥

⎣ 24 ⎦

⎣0 ⎦

⎣ 4⎦

⎡ 12 ⎤

⎡ 3⎤

⎡ −2 ⎤

⎡7 ⎤

⎢ 29 ⎥ = −2 ⎢ 6⎥ + 5 ⎢ 5⎥ + 4 ⎢ 4 ⎥

⎢ ⎥

⎢ ⎥

⎢ ⎥

⎢ ⎥

⎣ 56 ⎦

⎣ 0⎦

⎣ 4⎦

⎣9⎦

**(d) The first column of BA is B times the first
**

column of A.

⎡ 6 −2 4 ⎤ ⎡ 3 ⎤

[ B a1 ] = ⎢ 0

1 3⎥ ⎢ 6 ⎥

⎢

⎥⎢ ⎥

7

7

5⎦ ⎣ 0 ⎦

⎣

⎡18 − 12 + 0 ⎤

= ⎢ 0+6+0 ⎥

⎢

⎥

⎣ 21 + 42 + 0⎦

⎡ 6⎤

= ⎢ 6⎥

⎢ ⎥

⎣63⎦

⎡76 ⎤

⎡ 3⎤

⎡ −2 ⎤

⎡7 ⎤

⎢ 98⎥ = 7 ⎢6 ⎥ + 4 ⎢ 5⎥ + 9 ⎢ 4 ⎥

⎢ ⎥

⎢ ⎥

⎢ ⎥

⎢ ⎥

⎣97 ⎦

⎣0 ⎦

⎣ 4⎦

⎣9⎦

⎡ 64 14 38⎤

(b) BB = ⎢ 21 22 18⎥

⎢

⎥

⎣77 28 74⎦

⎡ 64 ⎤

⎡6⎤

⎡ 4⎤

⎢ 21⎥ = 6 ⎢ 0 ⎥ + 7 ⎢ 3⎥

⎢ ⎥

⎢ ⎥

⎢ ⎥

⎣77 ⎦

⎣7 ⎦

⎣ 5⎦

**(e) The third row of AA is the third row of A
**

times A.

[a3 A]

⎡ 3 −2 7 ⎤

= [ 0 4 9] ⎢ 6 5 4 ⎥

⎢

⎥

⎣0 4 9⎦

= [0 + 24 + 0 0 + 20 + 36 0 + 16 + 81]

= [ 24 56 97]

⎡ 14 ⎤

⎡ 6 ⎤ ⎡ −2 ⎤

⎡4⎤

⎢ 22 ⎥ = −2 ⎢ 0 ⎥ + 1 ⎢ 1⎥ + 7 ⎢ 3⎥

⎢ ⎥

⎢ ⎥ ⎢ ⎥

⎢ ⎥

⎣ 28⎦

⎣7 ⎦ ⎣ 7⎦

⎣ 5⎦

⎡ 38⎤

⎡6⎤

⎡ −2 ⎤

⎡ 4⎤

⎢ 18⎥ = 4 ⎢ 0 ⎥ + 3 ⎢ 1⎥ + 5 ⎢ 3⎥

⎢ ⎥

⎢ ⎥

⎢ ⎥

⎢ ⎥

⎣74 ⎦

⎣7 ⎦

⎣ 7⎦

⎣ 5⎦

⎡ x1 ⎤

⎡ 2 −3 5⎤

⎡ 7⎤

11. (a) A = ⎢ 9 −1 1⎥ , x = ⎢ x2 ⎥ , b = ⎢ −1⎥

⎢ ⎥

⎢ ⎥

⎢

⎥

⎣ 0⎦

⎣ 1 5 4⎦

⎣ x3 ⎦

⎡ 2 −3 5⎤ ⎡ x1 ⎤ ⎡ 7 ⎤

The equation is ⎢ 9 −1 1⎥ ⎢ x2 ⎥ = ⎢ −1⎥ .

⎢

⎥⎢ ⎥ ⎢ ⎥

⎣ 1 5 4⎦ ⎣ x3 ⎦ ⎣ 0 ⎦

14

SSM: Elementary Linear Algebra

Section 1.3

21. The diagonal entries of A + B are aii + bii , so

tr( A + B)

= (a11 + b11 ) + (a22 + b22 ) + + (ann + bnn )

= (a11 + a22 + + ann ) + (b11 + b22 + + bnn )

= tr(A) + tr( B)

⎡ x1 ⎤

1⎤

⎡ 1⎤

⎡ 4 0 −3

⎢x ⎥

⎢5

⎢ 3⎥

1 0 −8 ⎥

(b) A = ⎢

⎥, x = ⎢ 2⎥, b = ⎢ ⎥

2

−

5

9

−

1

x

⎢ 3⎥

⎢

⎥

⎢0⎥

⎣⎢ 0 3 −1 7 ⎦⎥

⎣⎢ 2 ⎦⎥

⎣⎢ x4 ⎦⎥

The equation is

1⎤ ⎡ x1 ⎤ ⎡ 1⎤

⎡ 4 0 −3

⎢5

1 0 −8⎥ ⎢ x2 ⎥ ⎢ 3⎥

⎢

⎥ ⎢ ⎥ = ⎢ ⎥.

⎢ 2 −5 9 −1⎥ ⎢ x3 ⎥ ⎢ 0 ⎥

3 −1 7 ⎦⎥ ⎣⎢ x4 ⎦⎥ ⎣⎢ 2 ⎦⎥

⎣⎢ 0

⎡ a11 0

⎢ 0 a

22

⎢

0

0

23. (a) ⎢

⎢ 0

0

⎢ 0

0

⎢

0

⎣⎢ 0

0

0

a33

0

0

0

0

0

0

a44

0

0

0

0

0

0

a55

0

0 ⎤

0 ⎥

⎥

0 ⎥

0 ⎥

0 ⎥

⎥

a66 ⎦⎥

(b) The system is

x1 + x2 + x3 = 2

=2

2 x1 + 3x2

5 x1 − 3x2 − 6 x3 = −9

⎡ a11 a12

⎢ 0 a

22

⎢

0

0

(b) ⎢

⎢ 0

0

⎢ 0

0

⎢

0

⎢⎣ 0

a13

a23

a33

0

0

0

a14

a24

a34

a44

0

0

a15

a25

a35

a45

a55

0

a16 ⎤

a26 ⎥

⎥

a36 ⎥

a46 ⎥

a56 ⎥⎥

a66 ⎥⎦

⎡ 1 1 0⎤ ⎡k ⎤

15. [ k 1 1] ⎢ 1 0 2 ⎥ ⎢ 1⎥

⎢

⎥⎢ ⎥

⎣ 0 2 −3⎦ ⎣ 1⎦

⎡k ⎤

= [ k + 1 k + 2 −1] ⎢ 1⎥

⎢ ⎥

⎣ 1⎦

⎡ a11 0

⎢a

a

⎢ 21 22

a

a

(c) ⎢ 31 32

⎢ a41 a42

⎢a

a

⎢ 51 52

⎢⎣ a61 a62

0

0

a33

a43

a53

a63

0

0

0

a44

a54

a64

0

0

0

0

a55

a65

0 ⎤

0 ⎥

⎥

0 ⎥

0 ⎥

0 ⎥

⎥

a66 ⎥⎦

⎡ a11 a12

⎢a

a

⎢ 21 22

0 a32

(d) ⎢

⎢ 0

0

⎢ 0

0

⎢

0

0

⎢⎣

0

a23

a33

a43

0

0

0

0

a34

a44

a54

0

0

0

0

a45

a55

a65

0 ⎤

0 ⎥

⎥

0 ⎥

0 ⎥

a56 ⎥

⎥

a66 ⎦⎥

**13. (a) The system is
**

5 x1 + 6 x2 − 7 x3 = 2

− x1 − 2 x2 + 3 x3 = 0

4 x2 − x3 = 3

= [ k 2 + k + k + 2 − 1]

= [ k 2 + 2k + 1]

The only solution of k 2 + 2k + 1 = 0 is k = −1.

3 ⎤ ⎡ 4

d − 2c ⎤

⎡a

17. ⎢

=

gives the

−2 ⎥⎦

⎣ −1 a + b ⎥⎦ ⎢⎣ d + 2c

equations

a=4

d − 2c = 3

d + 2c = −1

a + b = −2

a = 4 ⇒ b = −6

d − 2c = 3, d + 2c = −1 ⇒ 2d = 2, d = 1

d = 1 ⇒ 2c = −2, c = −1

The solution is a = 4, b = −6, c = −1, d = 1.

25.

⎛ x ⎞ ⎡1 1⎤ ⎡ x1 ⎤ ⎛ x1 + x2 ⎞

f ⎜ 1⎟=⎢

⎟

⎥⎢ ⎥ =⎜

⎝ x2 ⎠ ⎣ 0 1⎦ ⎣ x2 ⎦ ⎝ x2 ⎠

⎛1⎞ ⎛1 + 1⎞ ⎛ 2 ⎞

(a) f ⎜ ⎟ = ⎜

⎟=⎜ ⎟

⎝1⎠ ⎝ 1 ⎠ ⎝ 1 ⎠

y

1

f(x)

x

19. The ij-entry of kA is kaij . If kaij = 0 for all i, j

x

then either k = 0 or aij = 0 for all i, j, in which

1

case A = 0.

15

2

Chapter 1: Systems of Linear Equations and Matrices

⎛ 2⎞ ⎛ 2 + 0⎞ ⎛ 2⎞

(b) f ⎜ ⎟ = ⎜

⎟=⎜ ⎟

⎝ 0⎠ ⎝ 0 ⎠ ⎝ 0⎠

⎡5 0 ⎤

(b) The four square roots of ⎢

are

⎣0 9 ⎥⎦

⎡ 5 0⎤ ⎡− 5 0⎤ ⎡ 5 0 ⎤

,

, and

⎢ 0 3⎥ , ⎢ 0

3⎥⎦ ⎢⎣ 0 −3⎥⎦

⎣

⎦ ⎣

⎡− 5 0 ⎤

.

⎢ 0

−3⎥⎦

⎣

y

1

f(x) = x

1

(c)

x

2

⎛ 4 ⎞ ⎛ 4 + 3⎞ ⎛ 7 ⎞

f ⎜ ⎟=⎜

⎟=⎜ ⎟

⎝ 3⎠ ⎝ 3 ⎠ ⎝ 3⎠

5

SSM: Elementary Linear Algebra

True/False 1.3

y

**(a) True; only square matrices have main diagonals.
**

f(x)

x

**(b) False; an m × n matrix has m row vectors and
**

n column vectors.

x

8

4

(c) False; matrix multiplication is not commutative.

⎛ 2⎞ ⎛ 2 − 2⎞ ⎛ 0 ⎞

(d) f ⎜ ⎟ = ⎜

⎟=⎜ ⎟

⎝ −2 ⎠ ⎝ −2 ⎠ ⎝ −2 ⎠

**(d) False; the ith row vector of AB is found by
**

multiplying the ith row vector of A by B.

y

(e) True

x

1

⎡ 1 3⎤

(f) False; for example, if A = ⎢

and

⎣ 4 −1⎥⎦

⎡ 1 6⎤

⎡1 0 ⎤

and

B=⎢

, then AB = ⎢

⎥

0

2

⎣

⎦

⎣ 4 −2⎥⎦

tr(AB) = −1, while tr(A) = 0 and tr(B) = 3.

2

–1

–2

f(x)

x

27. Let A = [a_ij].
[a11 a12 a13; a21 a22 a23; a31 a32 a33][x; y; z] = [a11·x + a12·y + a13·z; a21·x + a22·y + a23·z; a31·x + a32·y + a33·z] = [x + y; x − y; 0]
The matrix equation yields the equations
a11·x + a12·y + a13·z = x + y
a21·x + a22·y + a23·z = x − y
a31·x + a32·y + a33·z = 0
For these equations to be true for all values of x, y, and z it must be that a11 = 1, a12 = 1, a21 = 1,

(g) False; for example, if A = [1 3; 4 −1] and B = [1 0; 0 2], then (AB)^T = [1 4; 6 −2], while A^T B^T = [1 8; 3 −2].

(h) True; for a square matrix A the main diagonals of A and A^T are the same.

(i) True; if A is a 6 × 4 matrix and B is an m × n matrix, then A^T is a 4 × 6 matrix and B^T is an n × m matrix. So B^T A^T is only defined if m = 4, and it can only be a 2 × 6 matrix if n = 2.

(Exercise 27, continued) a22 = −1, and a_ij = 0 for all other i, j. There is one such matrix, A = [1 1 0; 1 −1 0; 0 0 0].

(j) True; tr(cA) = c·a11 + c·a22 + ⋯ + c·ann = c(a11 + a22 + ⋯ + ann) = c·tr(A).

29. (a) Both [1 1; 1 1] and [−1 −1; −1 −1] are square roots of [2 2; 2 2].

(k) True; if A − C = B − C, then a_ij − c_ij = b_ij − c_ij, so it follows that a_ij = b_ij.
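The square-root claims of Exercise 29 can be verified by squaring each candidate; a small check (plain Python, added for illustration):

```python
import math

# Square each candidate matrix and compare against the target.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Part (a): integer square roots of [[2, 2], [2, 2]].
for S in ([[1, 1], [1, 1]], [[-1, -1], [-1, -1]]):
    assert matmul(S, S) == [[2, 2], [2, 2]]

# Part (b): the four square roots of [[5, 0], [0, 9]].
for s in (math.sqrt(5), -math.sqrt(5)):
    for t in (3, -3):
        P = matmul([[s, 0], [0, t]], [[s, 0], [0, t]])
        assert abs(P[0][0] - 5) < 1e-12 and P[1][1] == 9
        assert P[0][1] == 0 and P[1][0] == 0
```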


(l) False; for example, if A = [1 2; 3 4], B = [1 5; 3 7], and C = [1 0; 0 0], then AC = BC = [1 0; 3 0].

11. B^T = [2 4; −3 4];
(B^T)^{-1} =

(m) True; if A is an m × n matrix, then for both AB and BA to be defined B must be an n × m matrix. Then AB will be an m × m matrix and BA will be an n × n matrix, so m = n in order for the sum to be defined.

= 1/(8 + 12) [4 −4; 3 2] = [1/5 −1/5; 3/20 1/10]

13. ABC = [70 45; 122 79]
(ABC)^{-1} = 1/(70·79 − 122·45) [79 −45; −122 70] = 1/40 [79 −45; −122 70]

C^{-1} = 1/(6(−1) − 4(−2)) [−1 −4; 2 6] = 1/2 [−1 −4; 2 6]
B^{-1} = 1/(2·4 − (−3)·4) [4 3; −4 2] = 1/20 [4 3; −4 2]

(n) True; since the jth column vector of AB is A[jth column vector of B], a column of zeros in B will result in a column of zeros in AB.

(o) False; [1 0; 0 0][1 5; 3 7] = [1 5; 0 0]

A^{-1} = 1/(3·2 − 1·5) [2 −1; −5 3] = [2 −1; −5 3]
C^{-1}B^{-1}A^{-1} = 1/40 ([−1 −4; 2 6][4 3; −4 2][2 −1; −5 3]) = 1/40 [79 −45; −122 70]
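The computation illustrates the rule (ABC)^{-1} = C^{-1}B^{-1}A^{-1}. The sketch below re-verifies it with exact fractions, assuming A = [3 1; 5 2], B = [2 −3; 4 4], and C = [6 4; −2 −1] — the matrices inferred from the inverses shown above, not stated explicitly in this excerpt:

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    # Adjugate formula for a 2x2 matrix, kept exact with Fractions.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[F(M[1][1], 1) / det, F(-M[0][1], 1) / det],
            [F(-M[1][0], 1) / det, F(M[0][0], 1) / det]]

A = [[3, 1], [5, 2]]    # assumed
B = [[2, -3], [4, 4]]   # assumed
C = [[6, 4], [-2, -1]]  # assumed

ABC = matmul(matmul(A, B), C)
assert ABC == [[70, 45], [122, 79]]          # matches the text
lhs = inv2(ABC)
rhs = matmul(matmul(inv2(C), inv2(B)), inv2(A))
assert lhs == rhs                            # (ABC)^-1 == C^-1 B^-1 A^-1
```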

Section 1.4

Exercise Set 1.4

5. B^{-1} = 1/(2·4 − (−3)·4) [4 3; −4 2] = 1/20 [4 3; −4 2] = [1/5 3/20; −1/5 1/10]

7. D^{-1} = 1/(2·3 − 0) [3 0; 0 2] = 1/6 [3 0; 0 2] = [1/2 0; 0 1/3]

15. 7A = ((7A)^{-1})^{-1} = 1/((−3)(−2) − 1·7) [−2 −7; −1 −3] = −[−2 −7; −1 −3] = [2 7; 1 3]
Thus A = [2/7 1; 1/7 3/7].

9. Here, a = d = (1/2)(e^x + e^{−x}) and b = c = (1/2)(e^x − e^{−x}), so
ad − bc = (1/4)(e^x + e^{−x})^2 − (1/4)(e^x − e^{−x})^2 = (1/4)(e^{2x} + 2 + e^{−2x}) − (1/4)(e^{2x} − 2 + e^{−2x}) = 1.

Thus, [(1/2)(e^x + e^{−x}) (1/2)(e^x − e^{−x}); (1/2)(e^x − e^{−x}) (1/2)(e^x + e^{−x})]^{-1} = [(1/2)(e^x + e^{−x}) −(1/2)(e^x − e^{−x}); −(1/2)(e^x − e^{−x}) (1/2)(e^x + e^{−x})].

17. I + 2A = ((I + 2A)^{-1})^{-1} = 1/(−1·5 − 2·4) [5 −2; −4 −1] = −1/13 [5 −2; −4 −1] = [−5/13 2/13; 4/13 1/13]


(Exercise 11, continued) [1/5 −1/5; 3/20 1/10] = (B^{-1})^T

Thus 2A = [−5/13 2/13; 4/13 1/13] − [1 0; 0 1] = [−18/13 2/13; 4/13 −12/13] and A = [−9/13 1/13; 2/13 −6/13].

19. (a) A^3 = ([3 1; 2 1][3 1; 2 1])[3 1; 2 1] = [11 4; 8 3][3 1; 2 1] = [41 15; 30 11]

(b) A^{-3} = (A^3)^{-1} = 1/(41·11 − 15·30) [11 −15; −30 41] = [11 −15; −30 41]

(c) A^2 − 2A + I = [11 4; 8 3] − [6 2; 4 2] + [1 0; 0 1] = [6 2; 4 2]

(d) p(A) = A − 2I = [3 1; 2 1] − [2 0; 0 2] = [1 1; 2 −1]

(e) p(A) = 2A^2 − A + I = [22 8; 16 6] − [3 1; 2 1] + [1 0; 0 1] = [20 7; 14 6]

(f) p(A) = A^3 − 2A + 4I = [41 15; 30 11] − [6 2; 4 2] + [4 0; 0 4] = [39 13; 26 13]
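The arithmetic in parts (a)–(f) can be replayed mechanically; a short check in plain Python:

```python
# Recompute the Exercise 19 results for A = [[3, 1], [2, 1]].
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X))] for i in range(len(X))]

def scale(c, X):
    return [[c * v for v in row] for row in X]

A = [[3, 1], [2, 1]]
I = [[1, 0], [0, 1]]
A2 = matmul(A, A)
A3 = matmul(A2, A)

assert A3 == [[41, 15], [30, 11]]
assert add(add(A2, scale(-2, A)), I) == [[6, 2], [4, 2]]                # A^2 - 2A + I
assert add(add(scale(2, A2), scale(-1, A)), I) == [[20, 7], [14, 6]]    # 2A^2 - A + I
assert add(add(A3, scale(-2, A)), scale(4, I)) == [[39, 13], [26, 13]]  # A^3 - 2A + 4I
```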

21. (a) A^3 = ([3 0 0; 0 −1 3; 0 −3 −1][3 0 0; 0 −1 3; 0 −3 −1])[3 0 0; 0 −1 3; 0 −3 −1] = [9 0 0; 0 −8 −6; 0 6 −8][3 0 0; 0 −1 3; 0 −3 −1] = [27 0 0; 0 26 −18; 0 18 26]


(b) Note that the inverse of [−1 3; −3 −1] is 1/10 [−1 −3; 3 −1] = [−1/10 −3/10; 3/10 −1/10]. Thus
A^{-1} = [1/3 0 0; 0 −1/10 −3/10; 0 3/10 −1/10]
A^{-3} = (A^{-1})^3 = [1/9 0 0; 0 −2/25 3/50; 0 −3/50 −2/25][1/3 0 0; 0 −1/10 −3/10; 0 3/10 −1/10] = [1/27 0 0; 0 0.026 0.018; 0 −0.018 0.026]

(c) A^2 − 2A + I = [9 0 0; 0 −8 −6; 0 6 −8] − [6 0 0; 0 −2 6; 0 −6 −2] + [1 0 0; 0 1 0; 0 0 1] = [4 0 0; 0 −5 −12; 0 12 −5]

(d) p(A) = A − 2I = [3 0 0; 0 −1 3; 0 −3 −1] − [2 0 0; 0 2 0; 0 0 2] = [1 0 0; 0 −3 3; 0 −3 −3]

(e) p(A) = 2A^2 − A + I = [18 0 0; 0 −16 −12; 0 12 −16] − [3 0 0; 0 −1 3; 0 −3 −1] + [1 0 0; 0 1 0; 0 0 1] = [16 0 0; 0 −14 −15; 0 15 −14]

(f) p(A) = A^3 − 2A + 4I = [27 0 0; 0 26 −18; 0 18 26] − [6 0 0; 0 −2 6; 0 −6 −2] + [4 0 0; 0 4 0; 0 0 4] = [25 0 0; 0 32 −24; 0 24 32]


27. Since a11·a22 ⋯ ann ≠ 0, none of the diagonal entries are zero. Let B = [1/a11 0 ⋯ 0; 0 1/a22 ⋯ 0; ⋮ ⋱ ⋮; 0 0 ⋯ 1/ann]. Then AB = BA = I, so A is invertible and B = A^{-1}.
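The argument can be made concrete for a particular diagonal matrix; a sketch with exact fractions (the 3 × 3 example below is hypothetical, not from the text):

```python
from fractions import Fraction as F

# A diagonal matrix with nonzero diagonal entries, and the matrix of reciprocals.
diag = [F(3), F(-2), F(5)]  # hypothetical example entries
A = [[diag[i] if i == j else F(0) for j in range(3)] for i in range(3)]
B = [[1 / diag[i] if i == j else F(0) for j in range(3)] for i in range(3)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

I = [[F(1) if i == j else F(0) for j in range(3)] for i in range(3)]
assert matmul(A, B) == I and matmul(B, A) == I  # B is a two-sided inverse
```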
**

29. (a) If A has a row of zeros, then by formula (9) of Section 1.3, AB will have a row of zeros, so A cannot be

invertible.

(b) If B has a column of zeros, then by formula (8) of Section 1.3, AB will have a column of zeros, and B cannot

be invertible.

CT B −1 A2 BAC −1DA−2 BT C −2 = C T

31.

**CA−1 B −1 A−2 B(CT )−1 ⋅ CT B −1 A2 BAC −1DA−2 BT C −2 = CA−1B −1 A−2 B(C T ) −1 C T
**

DA−2 BT C −2 = CA−1B −1 A−2 B

**DA−2 BT C −2 ⋅ C 2 ( BT )−1 A2 = CA−1B −1 A−2 B ⋅ C 2 ( BT )−1 A2
**

D = CA−1B −1 A−2 BC 2 ( BT )−1 A2

**33. ( AB)−1 ( AC −1 )( D −1C −1 )−1 D −1 = ( B −1 A−1 )( AC −1 )(CD) D −1
**

= B −1 ( A−1 A)(C −1C )( DD −1 )

= B −1

35. Let X = [b11 b12 b13; b21 b22 b23; b31 b32 b33].
AX = I
[b11+b31 b12+b32 b13+b33; b11+b21 b12+b22 b13+b23; b21+b31 b22+b32 b23+b33] = [1 0 0; 0 1 0; 0 0 1]
Consider the equations from the first columns of the matrices.
b11 + b31 = 1
b11 + b21 = 0
b21 + b31 = 0
Thus b11 = b31 = −b21, so b11 = 1/2, b21 = −1/2, and b31 = 1/2.
Working similarly with the other columns of both matrices yields
A^{-1} = [1/2 1/2 −1/2; −1/2 1/2 1/2; 1/2 −1/2 1/2]

37. Let X = [b11 b12 b13; b21 b22 b23; b31 b32 b33].
AX = I
[b31 b32 b33; b11+b21 b12+b22 b13+b23; −b11+b21+b31 −b12+b22+b32 −b13+b23+b33] = [1 0 0; 0 1 0; 0 0 1]
Consider the equations from the first columns of the matrices.
b31 = 1
b11 + b21 = 0
−b11 + b21 + b31 = 0
Thus, b21 = −b11 and −2b11 + 1 = 0, so b11 = 1/2, b21 = −1/2, and b31 = 1.
Working similarly with the other columns of both matrices yields
A^{-1} = [1/2 1/2 −1/2; −1/2 1/2 1/2; 1 0 0]

39. The matrix form of the system is [3 −2; 4 5][x1; x2] = [−1; 3], so the solution is [x1; x2] = [3 −2; 4 5]^{-1}[−1; 3].
[3 −2; 4 5]^{-1} = 1/(3·5 − (−2)·4) [5 2; −4 3] = 1/23 [5 2; −4 3]
[x1; x2] = 1/23 [5 2; −4 3][−1; 3] = 1/23 [−5 + 6; 4 + 9] = 1/23 [1; 13]
The solution is x1 = 1/23, x2 = 13/23.

41. The matrix form of the system is [6 1; 4 −3][x1; x2] = [0; −2], so the solution is [x1; x2] = [6 1; 4 −3]^{-1}[0; −2].
[6 1; 4 −3]^{-1} = 1/(6·(−3) − 1·4) [−3 −1; −4 6] = −1/22 [−3 −1; −4 6]
[x1; x2] = −1/22 [−3 −1; −4 6][0; −2] = −1/22 [0 + 2; 0 − 12] = −1/22 [2; −12]
The solution is x1 = −1/11, x2 = 6/11.
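Both systems can be re-solved by the same x = A^{-1}b recipe; a sketch with exact fractions:

```python
from fractions import Fraction as F

def solve2(A, b):
    # Invert the 2x2 coefficient matrix via the adjugate formula, then multiply.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    inv = [[F(A[1][1], det), F(-A[0][1], det)],
           [F(-A[1][0], det), F(A[0][0], det)]]
    return [inv[0][0] * b[0] + inv[0][1] * b[1],
            inv[1][0] * b[0] + inv[1][1] * b[1]]

assert solve2([[3, -2], [4, 5]], [-1, 3]) == [F(1, 23), F(13, 23)]   # Exercise 39
assert solve2([[6, 1], [4, -3]], [0, -2]) == [F(-1, 11), F(6, 11)]   # Exercise 41
```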

51. (a) If A is invertible, then A^{-1} exists.
AB = AC
A^{-1}AB = A^{-1}AC
IB = IC
B = C

(b) The matrix A in Example 3 is not invertible.

53. (a) A( A−1 + B −1 ) B( A + B)−1 = ( AA−1 + AB −1 ) B( A + B)−1

= ( I + AB −1 ) B( A + B)−1

= ( IB + AB −1 B)( A + B) −1

= ( B + A)( A + B)−1

= ( A + B)( A + B)−1

=I

(b) A−1 + B −1 ≠ ( A + B)−1

55. (I − A)(I + A + A^2 + ⋯ + A^{k−1}) = (I − A)I + (I − A)A + (I − A)A^2 + ⋯ + (I − A)A^{k−1} = I − A + A − A^2 + A^2 − A^3 + ⋯ + A^{k−1} − A^k = I − A^k = I − 0 = I

True/False 1.4

(a) False; A and B are inverses if and only if AB = BA = I.

(b) False; ( A + B)2 = ( A + B)( A + B)

= A2 + AB + BA + B 2

Since AB ≠ BA the two terms cannot be combined.

(c) False; ( A − B)( A + B) = A2 + AB − BA − B 2 .

Since AB ≠ BA, the two terms cannot be combined.

(d) False; AB is invertible, but ( AB)−1 = B −1 A−1 ≠ A−1 B −1 .

(e) False; if A is an m × n matrix and B is an n × p matrix, with m ≠ p, then AB and (AB)^T are defined but A^T B^T is not defined.

(f) True; by Theorem 1.4.5.

(g) True; (kA + B)T = (kA)T + BT = kAT + BT .

(h) True; by Theorem 1.4.9.

(i) False; p(I) = (a0 + a1 + a2 + ⋯ + am)I, which is a matrix, not a scalar.


(j) True; if the square matrix A has a row of zeros, the product of A and any matrix B will also have a row or column of zeros (depending on whether the product is AB or BA), so AB = I is impossible. A similar statement can be made if A has a column of zeros.

5. (a) E interchanges the first and second rows of A.
EA = [0 1; 1 0][−1 −2 5 −1; 3 −6 −6 −6] = [3 −6 −6 −6; −1 −2 5 −1]

(k) False; for example, [1 0; 0 1] and [−1 0; 0 −1] are both invertible, but their sum [0 0; 0 0] is not invertible.

(b) E adds −3 times the second row to the third row.
EA = [1 0 0; 0 1 0; 0 −3 1][2 −1 0 −4 −4; 1 −3 −1 5 3; 2 0 1 3 −1] = [2 −1 0 −4 −4; 1 −3 −1 5 3; −1 9 4 −12 −10]
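The effect of E in part (b) — a row operation realized as left multiplication — can be demonstrated directly:

```python
# Left-multiplying by E performs row3 <- row3 - 3*row2 on A.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

E = [[1, 0, 0], [0, 1, 0], [0, -3, 1]]
A = [[2, -1, 0, -4, -4], [1, -3, -1, 5, 3], [2, 0, 1, 3, -1]]
EA = matmul(E, A)
assert EA == [[2, -1, 0, -4, -4], [1, -3, -1, 5, 3], [-1, 9, 4, -12, -10]]
```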

Section 1.5

Exercise Set 1.5

1. (a) The matrix results from adding −5 times the

first row of I 2 to the second row; it is an

elementary matrix.

(c) E adds 4 times the third row to the first row.
EA = [1 0 4; 0 1 0; 0 0 1][1 4; 2 5; 3 6] = [13 28; 2 5; 3 6]

**(b) The matrix is not elementary; more than one
**

elementary row operation on I 2 is required.

7. (a) To obtain B from A, the first and third rows must be interchanged.
E = [0 0 1; 0 1 0; 1 0 0]

**(c) The matrix is not elementary; more than one
**

elementary row operation on I3 is required.

(d) The matrix is not elementary; more than one

elementary row operation on I 4 is required.

(b) To obtain A from B, the first and third rows must be interchanged.
E = [0 0 1; 0 1 0; 1 0 0]

3. (a) The operation is to add 3 times the second row to the first row. The matrix is [1 3; 0 1].

(b) The operation is to multiply the first row by −1/7. The matrix is [−1/7 0 0; 0 1 0; 0 0 1].

(c) To obtain C from A, add −2 times the first row to the third row.
E = [1 0 0; 0 1 0; −2 0 1]

(c) The operation is to add 5 times the first row to the third row. The matrix is [1 0 0; 0 1 0; 5 0 1].

(d) To obtain A from C, add 2 times the first row to the third row.
E = [1 0 0; 0 1 0; 2 0 1]

(d) The operation is to interchange the first and third rows. The matrix is [0 0 1 0; 0 1 0 0; 1 0 0 0; 0 0 0 1].

9. [1 4 | 1 0; 2 7 | 0 1]
Add −2 times the first row to the second.

[1 4 | 1 0; 0 −1 | −2 1]
Multiply the second row by −1.
[1 4 | 1 0; 0 1 | 2 −1]
Add −4 times the second row to the first.
[1 0 | −7 4; 0 1 | 2 −1]
[1 4; 2 7]^{-1} = [−7 4; 2 −1]
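The [A | I] reduction used throughout these exercises can be automated; a minimal Gauss–Jordan sketch over exact fractions (not from the text), applied here to the matrix of Exercise 9:

```python
from fractions import Fraction as F

def invert(M):
    # Reduce [M | I] to [I | M^-1] with partial pivoting over Fractions.
    n = len(M)
    aug = [[F(v) for v in row] + [F(int(i == j)) for j in range(n)]
           for i, row in enumerate(M)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]        # scale pivot row to 1
        for r in range(n):
            if r != col and aug[r][col] != 0:       # clear the rest of the column
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

assert invert([[1, 4], [2, 7]]) == [[-7, 4], [2, -1]]
```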

(Exercise 13, continued)
[1 0 3 | 0 1 0; 0 5 −10 | 0 −2 1; 0 4 −10 | 1 −3 0]
Multiply the second row by 1/5.
[1 0 3 | 0 1 0; 0 1 −2 | 0 −2/5 1/5; 0 4 −10 | 1 −3 0]
Add −4 times the second row to the third.
[1 0 3 | 0 1 0; 0 1 −2 | 0 −2/5 1/5; 0 0 −2 | 1 −7/5 −4/5]
Multiply the third row by −1/2.
[1 0 3 | 0 1 0; 0 1 −2 | 0 −2/5 1/5; 0 0 1 | −1/2 7/10 2/5]
Add −3 times the third row to the first and 2 times the third row to the second.
[1 0 0 | 3/2 −11/10 −6/5; 0 1 0 | −1 1 1; 0 0 1 | −1/2 7/10 2/5]

11. [−1 3 | 1 0; 3 −2 | 0 1]
Multiply the first row by −1.
[1 −3 | −1 0; 3 −2 | 0 1]
Add −3 times the first row to the second.
[1 −3 | −1 0; 0 7 | 3 1]
Multiply the second row by 1/7.
[1 −3 | −1 0; 0 1 | 3/7 1/7]
Add 3 times the second row to the first.
[1 0 | 2/7 3/7; 0 1 | 3/7 1/7]
[−1 3; 3 −2]^{-1} = [2/7 3/7; 3/7 1/7]

(Exercise 13, conclusion)
[3 4 −1; 1 0 3; 2 5 −4]^{-1} = [3/2 −11/10 −6/5; −1 1 1; −1/2 7/10 2/5]
⎣ 2

13. [3 4 −1 | 1 0 0; 1 0 3 | 0 1 0; 2 5 −4 | 0 0 1]
Interchange the first and second rows.
[1 0 3 | 0 1 0; 3 4 −1 | 1 0 0; 2 5 −4 | 0 0 1]
Add −3 times the first row to the second and −2 times the first row to the third.
[1 0 3 | 0 1 0; 0 4 −10 | 1 −3 0; 0 5 −10 | 0 −2 1]
Interchange the second and third rows.

15. [−1 3 −4 | 1 0 0; 2 4 1 | 0 1 0; −4 2 −9 | 0 0 1]
Multiply the first row by −1.
[1 −3 4 | −1 0 0; 2 4 1 | 0 1 0; −4 2 −9 | 0 0 1]
Add −2 times the first row to the second and 4 times the first row to the third.
[1 −3 4 | −1 0 0; 0 10 −7 | 2 1 0; 0 −10 7 | −4 0 1]
Multiply the second row by 1/10.

(Exercise 19, continued)
[1 3 3 | 1/2 0 0; 2 7 6 | 0 1 0; 2 7 7 | 0 0 1]
Add −2 times the first row to both the second and third rows.
[1 3 3 | 1/2 0 0; 0 1 0 | −1 1 0; 0 1 1 | −1 0 1]
Add −1 times the second row to the third.
[1 3 3 | 1/2 0 0; 0 1 0 | −1 1 0; 0 0 1 | 0 −1 1]
Add −3 times the third row to the first.
[1 3 0 | 1/2 3 −3; 0 1 0 | −1 1 0; 0 0 1 | 0 −1 1]
Add −3 times the second row to the first.
[1 0 0 | 7/2 0 −3; 0 1 0 | −1 1 0; 0 0 1 | 0 −1 1]
[2 6 6; 2 7 6; 2 7 7]^{-1} = [7/2 0 −3; −1 1 0; 0 −1 1]

(Exercise 15, continued)
[1 −3 4 | −1 0 0; 0 1 −7/10 | 1/5 1/10 0; 0 −10 7 | −4 0 1]
Add 10 times the second row to the third.
[1 −3 4 | −1 0 0; 0 1 −7/10 | 1/5 1/10 0; 0 0 0 | −2 1 1]
Since there is a row of zeros on the left side, [−1 3 −4; 2 4 1; −4 2 −9] is not invertible.

17. [1 0 1 | 1 0 0; 0 1 1 | 0 1 0; 1 1 0 | 0 0 1]
Add −1 times the first row to the third.
[1 0 1 | 1 0 0; 0 1 1 | 0 1 0; 0 1 −1 | −1 0 1]
Add −1 times the second row to the third.
[1 0 1 | 1 0 0; 0 1 1 | 0 1 0; 0 0 −2 | −1 −1 1]
Multiply the third row by −1/2.
[1 0 1 | 1 0 0; 0 1 1 | 0 1 0; 0 0 1 | 1/2 1/2 −1/2]
Add −1 times the third row to both the first and second rows.
[1 0 0 | 1/2 −1/2 1/2; 0 1 0 | −1/2 1/2 1/2; 0 0 1 | 1/2 1/2 −1/2]
[1 0 1; 0 1 1; 1 1 0]^{-1} = [1/2 −1/2 1/2; −1/2 1/2 1/2; 1/2 1/2 −1/2]

21. [2 −4 0 0 | 1 0 0 0; 1 2 12 0 | 0 1 0 0; 0 0 2 0 | 0 0 1 0; 0 −1 −4 −5 | 0 0 0 1]
Interchange the first and second rows.
[1 2 12 0 | 0 1 0 0; 2 −4 0 0 | 1 0 0 0; 0 0 2 0 | 0 0 1 0; 0 −1 −4 −5 | 0 0 0 1]

(Exercise 21, continued)
Add −2 times the first row to the second.
[1 2 12 0 | 0 1 0 0; 0 −8 −24 0 | 1 −2 0 0; 0 0 2 0 | 0 0 1 0; 0 −1 −4 −5 | 0 0 0 1]
Interchange the second and fourth rows.
[1 2 12 0 | 0 1 0 0; 0 −1 −4 −5 | 0 0 0 1; 0 0 2 0 | 0 0 1 0; 0 −8 −24 0 | 1 −2 0 0]
Multiply the second row by −1.
[1 2 12 0 | 0 1 0 0; 0 1 4 5 | 0 0 0 −1; 0 0 2 0 | 0 0 1 0; 0 −8 −24 0 | 1 −2 0 0]
Add 8 times the second row to the fourth.
[1 2 12 0 | 0 1 0 0; 0 1 4 5 | 0 0 0 −1; 0 0 2 0 | 0 0 1 0; 0 0 8 40 | 1 −2 0 −8]

19. [2 6 6 | 1 0 0; 2 7 6 | 0 1 0; 2 7 7 | 0 0 1]
Multiply the first row by 1/2.

(Exercise 21, continued)
Multiply the third row by 1/2.
[1 2 12 0 | 0 1 0 0; 0 1 4 5 | 0 0 0 −1; 0 0 1 0 | 0 0 1/2 0; 0 0 8 40 | 1 −2 0 −8]
Add −8 times the third row to the fourth.
[1 2 12 0 | 0 1 0 0; 0 1 4 5 | 0 0 0 −1; 0 0 1 0 | 0 0 1/2 0; 0 0 0 40 | 1 −2 −4 −8]
Multiply the fourth row by 1/40.
[1 2 12 0 | 0 1 0 0; 0 1 4 5 | 0 0 0 −1; 0 0 1 0 | 0 0 1/2 0; 0 0 0 1 | 1/40 −1/20 −1/10 −1/5]
Add −5 times the fourth row to the second.
[1 2 12 0 | 0 1 0 0; 0 1 4 0 | −1/8 1/4 1/2 0; 0 0 1 0 | 0 0 1/2 0; 0 0 0 1 | 1/40 −1/20 −1/10 −1/5]
Add −12 times the third row to the first and −4 times the third row to the second.
[1 2 0 0 | 0 1 −6 0; 0 1 0 0 | −1/8 1/4 −3/2 0; 0 0 1 0 | 0 0 1/2 0; 0 0 0 1 | 1/40 −1/20 −1/10 −1/5]
Add −2 times the second row to the first.
[1 0 0 0 | 1/4 1/2 −3 0; 0 1 0 0 | −1/8 1/4 −3/2 0; 0 0 1 0 | 0 0 1/2 0; 0 0 0 1 | 1/40 −1/20 −1/10 −1/5]
[2 −4 0 0; 1 2 12 0; 0 0 2 0; 0 −1 −4 −5]^{-1} = [1/4 1/2 −3 0; −1/8 1/4 −3/2 0; 0 0 1/2 0; 1/40 −1/20 −1/10 −1/5]

23. [−1 0 1 0 | 1 0 0 0; 2 3 −2 6 | 0 1 0 0; 0 −1 2 0 | 0 0 1 0; 0 0 1 5 | 0 0 0 1]
Multiply the first row by −1.
[1 0 −1 0 | −1 0 0 0; 2 3 −2 6 | 0 1 0 0; 0 −1 2 0 | 0 0 1 0; 0 0 1 5 | 0 0 0 1]
Add −2 times the first row to the second.
[1 0 −1 0 | −1 0 0 0; 0 3 0 6 | 2 1 0 0; 0 −1 2 0 | 0 0 1 0; 0 0 1 5 | 0 0 0 1]
Interchange the second and third rows.
[1 0 −1 0 | −1 0 0 0; 0 −1 2 0 | 0 0 1 0; 0 3 0 6 | 2 1 0 0; 0 0 1 5 | 0 0 0 1]
Multiply the second row by −1.
[1 0 −1 0 | −1 0 0 0; 0 1 −2 0 | 0 0 −1 0; 0 3 0 6 | 2 1 0 0; 0 0 1 5 | 0 0 0 1]
Add −3 times the second row to the third.
[1 0 −1 0 | −1 0 0 0; 0 1 −2 0 | 0 0 −1 0; 0 0 6 6 | 2 1 3 0; 0 0 1 5 | 0 0 0 1]
Interchange the third and fourth rows.
[1 0 −1 0 | −1 0 0 0; 0 1 −2 0 | 0 0 −1 0; 0 0 1 5 | 0 0 0 1; 0 0 6 6 | 2 1 3 0]
Add −6 times the third row to the fourth.



[1 0 −1 0 | −1 0 0 0; 0 1 −2 0 | 0 0 −1 0; 0 0 1 5 | 0 0 0 1; 0 0 0 −24 | 2 1 3 −6]
Multiply the fourth row by −1/24.
[1 0 −1 0 | −1 0 0 0; 0 1 −2 0 | 0 0 −1 0; 0 0 1 5 | 0 0 0 1; 0 0 0 1 | −1/12 −1/24 −1/8 1/4]
Add −5 times the fourth row to the third.
[1 0 −1 0 | −1 0 0 0; 0 1 −2 0 | 0 0 −1 0; 0 0 1 0 | 5/12 5/24 5/8 −1/4; 0 0 0 1 | −1/12 −1/24 −1/8 1/4]
Add the third row to the first and 2 times the third row to the second.
[1 0 0 0 | −7/12 5/24 5/8 −1/4; 0 1 0 0 | 5/6 5/12 1/4 −1/2; 0 0 1 0 | 5/12 5/24 5/8 −1/4; 0 0 0 1 | −1/12 −1/24 −1/8 1/4]
[−1 0 1 0; 2 3 −2 6; 0 −1 2 0; 0 0 1 5]^{-1} = [−7/12 5/24 5/8 −1/4; 5/6 5/12 1/4 −1/2; 5/12 5/24 5/8 −1/4; −1/12 −1/24 −1/8 1/4]

25. (a) [k1 0 0 0 | 1 0 0 0; 0 k2 0 0 | 0 1 0 0; 0 0 k3 0 | 0 0 1 0; 0 0 0 k4 | 0 0 0 1]
Multiply the first row by 1/k1, the second row by 1/k2, the third row by 1/k3, and the fourth row by 1/k4.
[1 0 0 0 | 1/k1 0 0 0; 0 1 0 0 | 0 1/k2 0 0; 0 0 1 0 | 0 0 1/k3 0; 0 0 0 1 | 0 0 0 1/k4]
[k1 0 0 0; 0 k2 0 0; 0 0 k3 0; 0 0 0 k4]^{-1} = [1/k1 0 0 0; 0 1/k2 0 0; 0 0 1/k3 0; 0 0 0 1/k4]

(b) [k 1 0 0 | 1 0 0 0; 0 1 0 0 | 0 1 0 0; 0 0 k 1 | 0 0 1 0; 0 0 0 1 | 0 0 0 1]
Multiply the first and third rows by 1/k.
[1 1/k 0 0 | 1/k 0 0 0; 0 1 0 0 | 0 1 0 0; 0 0 1 1/k | 0 0 1/k 0; 0 0 0 1 | 0 0 0 1]
Add −1/k times the fourth row to the third and −1/k times the second row to the first.
[1 0 0 0 | 1/k −1/k 0 0; 0 1 0 0 | 0 1 0 0; 0 0 1 0 | 0 0 1/k −1/k; 0 0 0 1 | 0 0 0 1]
[k 1 0 0; 0 1 0 0; 0 0 k 1; 0 0 0 1]^{-1} = [1/k −1/k 0 0; 0 1 0 0; 0 0 1/k −1/k; 0 0 0 1]
0

27. [c c c | 1 0 0; 1 c c | 0 1 0; 1 1 c | 0 0 1]
Multiply the first row by 1/c (so c ≠ 0 is a requirement).


[1 1 1 | 1/c 0 0; 1 c c | 0 1 0; 1 1 c | 0 0 1]
Add −1 times the first row to the second and third rows.
[1 1 1 | 1/c 0 0; 0 c−1 c−1 | −1/c 1 0; 0 0 c−1 | −1/c 0 1]
Multiply the second and third rows by 1/(c−1) (so c ≠ 1 is a requirement).
[1 1 1 | 1/c 0 0; 0 1 1 | −1/(c(c−1)) 1/(c−1) 0; 0 0 1 | −1/(c(c−1)) 0 1/(c−1)]
Add −1 times the third row to the first and second rows.
[1 1 0 | 1/(c−1) 0 −1/(c−1); 0 1 0 | 0 1/(c−1) −1/(c−1); 0 0 1 | −1/(c(c−1)) 0 1/(c−1)]

The matrix is invertible for c ≠ 0, 1.

(Exercise 29, conclusion)
[−3 1; 2 2] = [1 0; 0 1/2]^{-1}[1 −1; 0 1]^{-1}[−1/4 0; 0 1]^{-1}[1 0; −1 1]^{-1} = [1 0; 0 2][1 1; 0 1][−4 0; 0 1][1 0; 1 1]
Note that other answers are possible.

31. Use elementary matrices to reduce [1 0 −2; 0 4 3; 0 0 1] to I3.
[1 0 2; 0 1 0; 0 0 1][1 0 −2; 0 4 3; 0 0 1] = [1 0 0; 0 4 3; 0 0 1]
[1 0 0; 0 1 −3; 0 0 1][1 0 0; 0 4 3; 0 0 1] = [1 0 0; 0 4 0; 0 0 1]
[1 0 0; 0 1/4 0; 0 0 1][1 0 0; 0 4 0; 0 0 1] = [1 0 0; 0 1 0; 0 0 1]
Thus
[1 0 0; 0 1/4 0; 0 0 1][1 0 0; 0 1 −3; 0 0 1][1 0 2; 0 1 0; 0 0 1][1 0 −2; 0 4 3; 0 0 1] = [1 0 0; 0 1 0; 0 0 1]
so
[1 0 −2; 0 4 3; 0 0 1]

29. Use elementary matrices to reduce [−3 1; 2 2] to I2.
[1 0; 0 1/2][−3 1; 2 2] = [−3 1; 1 1]
[1 −1; 0 1][−3 1; 1 1] = [−4 0; 1 1]
[−1/4 0; 0 1][−4 0; 1 1] = [1 0; 1 1]

= [1 0 2; 0 1 0; 0 0 1]^{-1}[1 0 0; 0 1 −3; 0 0 1]^{-1}[1 0 0; 0 1/4 0; 0 0 1]^{-1}
= [1 0 −2; 0 1 0; 0 0 1][1 0 0; 0 1 3; 0 0 1][1 0 0; 0 4 0; 0 0 1]
Note that other answers are possible.

[1 0; −1 1][1 0; 1 1] = [1 0; 0 1]
Thus,
[1 0; −1 1][−1/4 0; 0 1][1 −1; 0 1][1 0; 0 1/2][−3 1; 2 2] = [1 0; 0 1].


33. From Exercise 29,
[−3 1; 2 2]^{-1} = [1 0; −1 1][−1/4 0; 0 1][1 −1; 0 1][1 0; 0 1/2] = [−1/4 1/8; 1/4 3/8]

(c) True; since A and B are row equivalent, there exist elementary matrices E1, E2, …, Ek such that B = Ek ⋯ E2 E1 A. Similarly there exist elementary matrices E1′, E2′, …, El′ such that C = El′ ⋯ E2′ E1′ B = El′ ⋯ E2′ E1′ Ek ⋯ E2 E1 A, so A and C are row equivalent.

35. From Exercise 31,
[1 0 −2; 0 4 3; 0 0 1]^{-1} = [1 0 0; 0 1/4 0; 0 0 1][1 0 0; 0 1 −3; 0 0 1][1 0 2; 0 1 0; 0 0 1] = [1 0 2; 0 1/4 −3/4; 0 0 1]

(d) True; a homogeneous system has either exactly one solution or infinitely many solutions. Since A is not invertible, Ax = 0 cannot have exactly one solution.

(e) True; interchanging two rows is an elementary row operation, hence it does not affect whether a matrix is invertible.

(f) True; adding a multiple of the first row to the second row is an elementary row operation, hence it does not affect whether a matrix is invertible.

(g) False; since the sequence of row operations that converts an invertible matrix A into In is not unique, the expression of A as a product of elementary matrices is not unique. For instance,
[−3 1; 2 2] = [1 0; 0 2][1 1; 0 1][−4 0; 0 1][1 0; 1 1] = [1 −2; 0 1][1 0; 2 1][1 0; 0 −8][1 5; 0 1]
(This is the matrix from Exercise 29.)
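Both factorizations can be multiplied out to confirm they give the same matrix:

```python
from functools import reduce

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# The two elementary-matrix factorizations of [[-3, 1], [2, 2]] from part (g).
f1 = [[[1, 0], [0, 2]], [[1, 1], [0, 1]], [[-4, 0], [0, 1]], [[1, 0], [1, 1]]]
f2 = [[[1, -2], [0, 1]], [[1, 0], [2, 1]], [[1, 0], [0, -8]], [[1, 5], [0, 1]]]

for factors in (f1, f2):
    assert reduce(matmul, factors) == [[-3, 1], [2, 2]]
```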

37. A = [1 2 3; 1 4 1; 2 1 9]
Add −1 times the first row to the second row.
[1 2 3; 0 2 −2; 2 1 9]
Add −1 times the first row to the third row.
[1 2 3; 0 2 −2; 1 −1 6]

Section 1.6

Exercise Set 1.6

Add −1 times the second row to the first row.
[1 0 5; 0 2 −2; 1 −1 6]

1. The matrix form of the system is [1 1; 5 6][x1; x2] = [2; 9].

**Add the second row to the third row.
**

5⎤

⎡1 0

⎢0 2 −2 ⎥ = B

⎢

⎥

⎣ 1 1 4⎦

[1 1; 5 6]^{-1} = 1/(6 − 5) [6 −1; −5 1] = [6 −1; −5 1]
[6 −1; −5 1][2; 9] = [12 − 9; −10 + 9] = [3; −1]
The solution is x1 = 3, x2 = −1.
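The same answer falls out of a direct x = A^{-1}b computation:

```python
# Solve Exercise 1 via the 2x2 adjugate inverse (det = 1 here, so all integer).
A = [[1, 1], [5, 6]]
b = [2, 9]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]      # = 1
inv = [[A[1][1] // det, -A[0][1] // det],
       [-A[1][0] // det, A[0][0] // det]]        # [[6, -1], [-5, 1]]
x = [inv[0][0] * b[0] + inv[0][1] * b[1],
     inv[1][0] * b[0] + inv[1][1] * b[1]]
assert inv == [[6, -1], [-5, 1]] and x == [3, -1]
```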

True/False 1.5

(a) False; the product of two elementary matrices is

not necessarily elementary.

3. The matrix form of the system is [1 3 1; 2 2 1; 2 3 1][x1; x2; x3] = [4; −1; 3].
(b) True; by Theorem 1.5.2.

Find the inverse of the coefficient matrix.


(Exercise 3, continued)
[1 3 1 | 1 0 0; 2 2 1 | 0 1 0; 2 3 1 | 0 0 1]
[1 3 1 | 1 0 0; 0 −4 −1 | −2 1 0; 0 −3 −1 | −2 0 1]
[1 3 1 | 1 0 0; 0 1 1/4 | 1/2 −1/4 0; 0 −3 −1 | −2 0 1]
[1 3 1 | 1 0 0; 0 1 1/4 | 1/2 −1/4 0; 0 0 −1/4 | −1/2 −3/4 1]
[1 3 1 | 1 0 0; 0 1 1/4 | 1/2 −1/4 0; 0 0 1 | 2 3 −4]
[1 3 0 | −1 −3 4; 0 1 0 | 0 −1 1; 0 0 1 | 2 3 −4]
[1 0 0 | −1 0 1; 0 1 0 | 0 −1 1; 0 0 1 | 2 3 −4]
[−1 0 1; 0 −1 1; 2 3 −4][4; −1; 3] = [−4 + 3; 1 + 3; 8 − 3 − 12] = [−1; 4; −7]

(Exercise 5, continued)
[1 1 1 | 1 0 0; 0 5 5 | 4 0 1; 0 0 −5 | −1 1 0]
[1 1 1 | 1 0 0; 0 1 1 | 4/5 0 1/5; 0 0 −5 | −1 1 0]
[1 1 1 | 1 0 0; 0 1 1 | 4/5 0 1/5; 0 0 1 | 1/5 −1/5 0]
[1 1 0 | 4/5 1/5 0; 0 1 0 | 3/5 1/5 1/5; 0 0 1 | 1/5 −1/5 0]
[1 0 0 | 1/5 0 −1/5; 0 1 0 | 3/5 1/5 1/5; 0 0 1 | 1/5 −1/5 0]
[1/5 0 −1/5; 3/5 1/5 1/5; 1/5 −1/5 0][5; 10; 0] = [1; 3 + 2; 1 − 2] = [1; 5; −1]
The solution is x = 1, y = 5, z = −1.

The solution to Exercise 3 is x1 = −1, x2 = 4, x3 = −7.

7. The matrix form of the system is [3 5; 1 2][x1; x2] = [b1; b2].
[3 5; 1 2]^{-1} = 1/(6 − 5) [2 −5; −1 3] = [2 −5; −1 3]
[2 −5; −1 3][b1; b2] = [2b1 − 5b2; −b1 + 3b2]
The solution is x1 = 2b1 − 5b2, x2 = −b1 + 3b2.

5. The matrix form of the equation is [1 1 1; 1 1 −4; −4 1 1][x; y; z] = [5; 10; 0].
Find the inverse of the coefficient matrix.
[1 1 1 | 1 0 0; 1 1 −4 | 0 1 0; −4 1 1 | 0 0 1]

9. Write and reduce the augmented matrix for both systems.
[1 −5 | 1 −2; 3 2 | 4 5]
[1 −5 | 1 −2; 0 17 | 1 11]

(Exercise 5, continued) [1 1 1 | 1 0 0; 0 0 −5 | −1 1 0; 0 5 5 | 4 0 1]
30


(Exercise 9, continued)
[1 −5 | 1 −2; 0 1 | 1/17 11/17]
[1 0 | 22/17 21/17; 0 1 | 1/17 11/17]
(i) The solution is x1 = 22/17, x2 = 1/17.
(ii) The solution is x1 = 21/17, x2 = 11/17.

13. [1 3 | b1; −2 1 | b2]
[1 3 | b1; 0 7 | 2b1 + b2]
[1 3 | b1; 0 1 | (2/7)b1 + (1/7)b2]
[1 0 | (1/7)b1 − (3/7)b2; 0 1 | (2/7)b1 + (1/7)b2]
There are no restrictions on b1 and b2.

11. Write and reduce the augmented matrix for the four systems.
[4 −7 | 0 −4 −1 −5; 1 2 | 1 6 3 1]
[1 2 | 1 6 3 1; 4 −7 | 0 −4 −1 −5]
[1 2 | 1 6 3 1; 0 −15 | −4 −28 −13 −9]
[1 2 | 1 6 3 1; 0 1 | 4/15 28/15 13/15 3/5]
[1 0 | 7/15 34/15 19/15 −1/5; 0 1 | 4/15 28/15 13/15 3/5]
(i) The solution is x1 = 7/15, x2 = 4/15.
(ii) The solution is x1 = 34/15, x2 = 28/15.
(iii) The solution is x1 = 19/15, x2 = 13/15.
(iv) The solution is x1 = −1/5, x2 = 3/5.

15. [1 −2 5 | b1; 4 −5 8 | b2; −3 3 −3 | b3]
[1 −2 5 | b1; 0 3 −12 | −4b1 + b2; 0 −3 12 | 3b1 + b3]
[1 −2 5 | b1; 0 3 −12 | −4b1 + b2; 0 0 0 | −b1 + b2 + b3]
[1 −2 5 | b1; 0 1 −4 | −(4/3)b1 + (1/3)b2; 0 0 0 | −b1 + b2 + b3]
[1 0 −3 | −(5/3)b1 + (2/3)b2; 0 1 −4 | −(4/3)b1 + (1/3)b2; 0 0 0 | −b1 + b2 + b3]
The only restriction is from the third row: −b1 + b2 + b3 = 0 or b3 = b1 − b2.


17. [1 −1 3 2 | b1; −2 1 5 1 | b2; −3 2 2 −1 | b3; 4 −3 1 3 | b4]

(Exercise 19, continued)
[1 −1 1 | 1 0 0; 0 1 −2/5 | −2/5 1/5 0; 0 0 −1/5 | 4/5 −2/5 1]
[1 −1 1 | 1 0 0; 0 1 −2/5 | −2/5 1/5 0; 0 0 1 | −4 2 −5]

(Exercise 17, continued)
[1 −1 3 2 | b1; 0 −1 11 5 | 2b1 + b2; 0 −1 11 5 | 3b1 + b3; 0 1 −11 −5 | −4b1 + b4]
[1 0 −8 −3 | −b1 − b2; 0 1 −11 −5 | −2b1 − b2; 0 0 0 0 | b1 − b2 + b3; 0 0 0 0 | −2b1 + b2 + b4]
From the bottom two rows, b1 − b2 + b3 = 0 and −2b1 + b2 + b4 = 0. Thus b3 = −b1 + b2 and b4 = 2b1 − b2. Expressing b1 and b2 in terms of b3 and b4 gives b1 = b3 + b4 and b2 = 2b3 + b4.

(Exercise 21) A is invertible, so A^k is also invertible and A^k x = 0 has only the trivial solution. Note that (A^k)^{-1} = (A^{-1})^k.

23. Let x1 be a fixed solution of Ax = b, and let x be any other solution. Then A(x − x1) = Ax − Ax1 = b − b = 0. Thus x0 = x − x1 is a solution to Ax = 0, so x can be expressed as x = x1 + x0. Also A(x1 + x0) = Ax1 + Ax0 = b + 0 = b, so every matrix of the form x1 + x0 is a solution of Ax = b.

(Exercise 19, continued)
[1 −1 1 | 1 0 0; 2 3 0 | 0 1 0; 0 2 −1 | 0 0 1]
[1 −1 1 | 1 0 0; 0 5 −2 | −2 1 0; 0 2 −1 | 0 0 1]
[1 −1 1 | 1 0 0; 0 1 −2/5 | −2/5 1/5 0; 0 2 −1 | 0 0 1]
X = [3 −1 3; −2 1 −2; −4 2 −5][2 −1 5 7 8; 4 0 −3 0 1; 3 5 −7 2 1] = [11 12 −3 27 26; −6 −8 1 −18 −17; −15 −21 9 −38 −35]

True/False 1.6

21. Since Ax = 0 has only the trivial solution, A is

From the bottom two rows, b1 − b2 + b3 = 0 and

−1

1

5

3 −1 3 ⎤

⎡1 0 0

⎢0 1 0 −2 1 −2⎥

⎢

⎥

⎣0 0 1 −4 2 −5⎦

⎡ 3 −1 3⎤ ⎡ 2 −1

X = ⎢ −2 1 −2 ⎥ ⎢ 4 0

⎢

⎥⎢

⎣ −4 2 −5 ⎦ ⎣ 3 5

⎡ 11 12 −3 27

= ⎢ −6 −8

1 −18

⎢

⎣ −15 −21 9 −38

b1

3 2

⎡ 1 −1

⎤

⎢0 1 −11 −5

−2b1 − b2 ⎥

⎢

⎥

b1 − b2 + b3 ⎥

0 0

⎢0 0

0 0 −2b1 + b2 + b4 ⎦⎥

⎣⎢0 0

⎡ 1 −1 1⎤

19. X = ⎢ 2 3 0 ⎥

⎢

⎥

⎣ 0 2 −1⎦

0⎤

0 ⎥⎥

2 −5⎥⎦

0

5 −2

5⎤

⎡ 1 −1 0

⎢ 0 1 0 −2

1 −2 ⎥

⎢

⎥

⎣ 0 0 1 −4 2 −5 ⎦

b1

⎡ 1 −1 3 2

⎤

⎢0 −1 11 5

2b1 + b2 ⎥

⎢

⎥

b1 − b2 + b3 ⎥

⎢0 0 0 0

⎣⎢0 0 0 0 −2b1 + b2 + b4 ⎦⎥

⎡1

⎢0

⎢

⎢0

⎢⎣0

SSM: Elementary Linear Algebra

**(a) True; if a system of linear equations has more
**

than one solution, it has infinitely many

solutions.

0 0⎤

1 0⎥

⎥

5

0 1⎥⎦

**(b) True; if Ax = b has a unique solution, then A is
**

invertible and Ax = c has the unique solution

A−1c.
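As a quick sanity check on Exercise 19 (my own check, not part of the manual; `matmul` is a small helper written here), multiplying A by the computed inverse should give the identity, and A times X should reproduce B:

```python
# Verify Exercise 19: A * A^-1 = I and A * X = B (integer arithmetic only).
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

A = [[1, -1, 1], [2, 3, 0], [0, 2, -1]]
A_inv = [[3, -1, 3], [-2, 1, -2], [-4, 2, -5]]
B = [[2, -1, 5, 7, 8], [4, 0, -3, 0, 1], [3, 5, -7, 2, 1]]

X = matmul(A_inv, B)                       # X = A^-1 B
assert matmul(A, A_inv) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert matmul(A, X) == B                   # X really solves AX = B
```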

32

SSM: Elementary Linear Algebra

(c) True; if AB = I_n, then B = A^−1 and BA = I_n also.

(d) True; elementary row operations do not change the solution set of a linear system, and row equivalent matrices can be obtained from one another by elementary row operations.

(e) True; since (S^−1 A S)x = b, then S(S^−1 A S)x = Sb or A(Sx) = Sb, so Sx is a solution of Ay = Sb.

(f) True; the system Ax = 4x is equivalent to the system (A − 4I)x = 0. If the system Ax = 4x has a unique solution, then so will (A − 4I)x = 0, hence A − 4I is invertible. If A − 4I is invertible, then the system (A − 4I)x = 0 has a unique solution and so does the equivalent system Ax = 4x.

(g) True; if AB were invertible, then both A and B would have to be invertible.

Section 1.7

Exercise Set 1.7

1. The matrix is a diagonal matrix with nonzero entries on the diagonal, so it is invertible. The inverse is
[ 1/2 0 ]
[ 0 −1/5 ]

3. The matrix is a diagonal matrix with nonzero entries on the diagonal, so it is invertible. The inverse is
[ −1 0 0 ]
[ 0 1/2 0 ]
[ 0 0 3 ]

5. [ 3 0 0 ][ 2 1 ]   [ 6 3 ]
   [ 0 −1 0 ][ −4 1 ] = [ 4 −1 ]
   [ 0 0 2 ][ 2 5 ]   [ 4 10 ]

7. [ 5 0 0 ][ −3 2 0 4 −4 ]   [ −15 10 0 20 −20 ]
   [ 0 2 0 ][ 1 −5 3 0 3 ] = [ 2 −10 6 0 6 ]
   [ 0 0 −3 ][ −6 2 2 2 2 ]   [ 18 −6 −6 −6 −6 ]

9. A² = [ 1² 0; 0 (−2)² ] = [ 1 0; 0 4 ]
   A^−2 = [ 1^−2 0; 0 (−2)^−2 ] = [ 1 0; 0 1/4 ]
   A^−k = [ 1^−k 0; 0 (−2)^−k ] = [ 1 0; 0 1/(−2)^k ]

11. A² = [ (1/2)² 0 0; 0 (1/3)² 0; 0 0 (1/4)² ] = [ 1/4 0 0; 0 1/9 0; 0 0 1/16 ]
    A^−2 = [ (1/2)^−2 0 0; 0 (1/3)^−2 0; 0 0 (1/4)^−2 ] = [ 4 0 0; 0 9 0; 0 0 16 ]
    A^−k = [ (1/2)^−k 0 0; 0 (1/3)^−k 0; 0 0 (1/4)^−k ] = [ 2^k 0 0; 0 3^k 0; 0 0 4^k ]

13. The matrix is not symmetric since a12 = −8 ≠ 0 = a21.

15. The matrix is symmetric.

17. The matrix is not symmetric, since a23 = −6 ≠ 6 = a32.

19. The matrix is not symmetric, since a13 = 1 ≠ 3 = a31.

21. The matrix is not invertible because it is upper triangular and has a zero on the main diagonal.

23. For A to be symmetric, a12 = a21 or −3 = a + 5, so a = −8.

25. For A to be invertible, the entries on the main diagonal must be nonzero. Thus x ≠ 1, −2, 4.
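The diagonal-power pattern used in Exercises 9 and 11 can be spot-checked with exact arithmetic (a sketch of my own, plain Python with no external libraries — for a diagonal matrix, A^k just raises each diagonal entry to the k-th power):

```python
from fractions import Fraction as F

def diag_power(diag, k):
    # Powers of a diagonal matrix act entrywise on the diagonal.
    return [d ** k for d in diag]

A = [F(1, 2), F(1, 3), F(1, 4)]          # diagonal of A in Exercise 11
assert diag_power(A, 2) == [F(1, 4), F(1, 9), F(1, 16)]   # A^2
assert diag_power(A, -2) == [4, 9, 16]                     # A^-2
k = 5
assert diag_power(A, -k) == [2 ** k, 3 ** k, 4 ** k]       # A^-k = diag(2^k, 3^k, 4^k)
```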

27. By inspection, A = [ 1 0 0; 0 −1 0; 0 0 −1 ].

33. If A^T A = A, then A^T = (A^T A)^T = A^T (A^T)^T = A^T A = A, so A is symmetric. Since A = A^T, then A² = A^T A = A.

35. (a) A is symmetric since aij = aji for all i, j.

(b) i² − j² = j² − i² ⇒ 2i² = 2j², or i² = j² ⇒ i = j, since i, j > 0. Thus, A is not symmetric unless n = 1.

(c) A is symmetric since aij = aji for all i, j.

(d) Consider a12 and a21: a12 = 2(1)² + 2(2)³ = 18 while a21 = 2(2)² + 2(1)³ = 10. A is not symmetric unless n = 1.

37. (a) If A is invertible and skew-symmetric then (A^−1)^T = (A^T)^−1 = (−A)^−1 = −A^−1, so A^−1 is also skew-symmetric.

(b) Let A and B be skew-symmetric matrices.
(A^T)^T = (−A)^T = −A^T
(A + B)^T = A^T + B^T = −A − B = −(A + B)
(A − B)^T = A^T − B^T = −A + B = −(A − B)
(kA)^T = k(A^T) = k(−A) = −kA

(c) From the hint, it is sufficient to prove that (1/2)(A + A^T) is symmetric and (1/2)(A − A^T) is skew-symmetric.
(1/2)(A + A^T)^T = (1/2)(A^T + (A^T)^T) = (1/2)(A + A^T)
Thus (1/2)(A + A^T) is symmetric.
(1/2)(A − A^T)^T = (1/2)(A^T − (A^T)^T) = (1/2)(A^T − A) = −(1/2)(A − A^T)
Thus (1/2)(A − A^T) is skew-symmetric.

39. From A^T = −A, the entries on the main diagonal must be zero and the reflections of entries across the main diagonal must be opposites.
A = [ 0 0 −8; 0 0 −4; 8 4 0 ]

41. No; if A and B are commuting skew-symmetric matrices, then (AB)^T = (BA)^T = A^T B^T = (−A)(−B) = AB, so the product of commuting skew-symmetric matrices is symmetric rather than skew-symmetric.

43. Let A = [ x y; 0 z ]. Then
A³ = [ x³ y(x² + xz + z²); 0 z³ ],
so x³ = 1 and z³ = −8, or x = 1, z = −2. Then 3y = 30 and y = 10.
A = [ 1 10; 0 −2 ]

True/False 1.7

(a) True; since a diagonal matrix must be square and have zeros off the main diagonal, its transpose is also diagonal.

(b) False; the transpose of an upper triangular matrix is lower triangular.

(c) False; for example, [ 1 2; 0 1 ] + [ 3 0; 2 1 ] = [ 4 2; 2 2 ].

(d) True; the entries above the main diagonal determine the entries below the main diagonal in a symmetric matrix.

(e) True; in an upper triangular matrix, the entries below the main diagonal are all zero.
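Exercise 43 can be double-checked by cubing the answer directly (my own check; the target matrix [ 1 30; 0 −8 ] is inferred from the solution's values 3y = 30 and z³ = −8, since the manual does not restate it):

```python
# Cube A = [[1, 10], [0, -2]] and confirm the corner entries x^3 = 1,
# z^3 = -8 and the upper-right entry y(x^2 + xz + z^2) = 10 * 3 = 30.
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

A = [[1, 10], [0, -2]]
A3 = matmul(matmul(A, A), A)
assert A3 == [[1, 30], [0, -8]]
```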

(f) False; the inverse of an invertible lower triangular matrix is lower triangular.

(g) False; the diagonal entries may be negative, as long as they are nonzero.

(h) True; adding a diagonal matrix to a lower triangular matrix will not create nonzero entries above the main diagonal.

(i) True; since the entries below the main diagonal must be zero, so also must be the entries above the main diagonal.

(j) False; for example, [ 1 2; 0 1 ] + [ 5 0; 2 6 ] = [ 6 2; 2 7 ], which is symmetric.

(k) False; for example, [ 1 2; 5 1 ] + [ 1 3; −5 1 ] = [ 2 5; 0 2 ], which is upper triangular.

(l) False; for example, [ 1 0; 1 −1 ]² = [ 1 0; 0 1 ].

(m) True; if (kA)^T = kA, then 0 = (kA)^T − kA = kA^T − kA = k(A^T − A); since k ≠ 0, then A = A^T.

Section 1.8

Exercise Set 1.8

1. Label the network with nodes A, B, C, D, flow rates x1, x2, x3, and flow directions as shown in the figure.
Node A: x1 + x2 = 50
Node B: x1 + x3 = 30
Node C: x3 + 50 = 40
Node D: x2 + 50 = 60
The linear system is
x1 + x2 = 50
x1 + x3 = 30
x3 = −10
x2 = 10
By inspection, the solution is x1 = 40, x2 = 10, x3 = −10, and the completed network is shown.

3. (a) Consider the nodes from left to right on the top, then the bottom.
300 + x2 = 400 + x3
750 + x3 = 250 + x4
100 + x1 = 400 + x2
200 + x4 = 300 + x1
The system is
x2 − x3 = 100
x3 − x4 = −500
x1 − x2 = 300
−x1 + x4 = 100

(b) Reduce the augmented matrix.
[ 0 1 −1 0 | 100 ]
[ 0 0 1 −1 | −500 ]
[ 1 −1 0 0 | 300 ]
[ −1 0 0 1 | 100 ]

[ 1 −1 0 0 | 300 ]
[ 0 0 1 −1 | −500 ]
[ 0 1 −1 0 | 100 ]
[ −1 0 0 1 | 100 ]

[ 1 −1 0 0 | 300 ]
[ 0 0 1 −1 | −500 ]
[ 0 1 −1 0 | 100 ]
[ 0 −1 0 1 | 400 ]

[ 1 −1 0 0 | 300 ]
[ 0 1 −1 0 | 100 ]
[ 0 0 1 −1 | −500 ]
[ 0 −1 0 1 | 400 ]
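The inspected solution of Exercise 1 can be verified against all four node equations (a quick check of my own, not part of the manual):

```python
# Exercise 1: x1 = 40, x2 = 10, x3 = -10 must satisfy every node equation.
x1, x2, x3 = 40, 10, -10
assert x1 + x2 == 50        # node A
assert x1 + x3 == 30        # node B
assert x3 + 50 == 40        # node C
assert x2 + 50 == 60        # node D
```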

[ 1 −1 0 0 | 300 ]
[ 0 1 −1 0 | 100 ]
[ 0 0 1 −1 | −500 ]
[ 0 0 −1 1 | 500 ]

[ 1 −1 0 0 | 300 ]
[ 0 1 −1 0 | 100 ]
[ 0 0 1 −1 | −500 ]
[ 0 0 0 0 | 0 ]

[ 1 0 0 −1 | −100 ]
[ 0 1 0 −1 | −400 ]
[ 0 0 1 −1 | −500 ]
[ 0 0 0 0 | 0 ]

Since x4 is a free variable, let x4 = t. The solution to the system is x1 = −100 + t, x2 = −400 + t, x3 = −500 + t, x4 = t.

(c) The flow from A to B is x4. To keep the traffic flowing on all roads, each flow rate must be nonnegative. Thus, from x3 = −500 + t, the minimum flow from A to B is 500 vehicles per hour. Then x1 = 400, x2 = 100, x3 = 0, x4 = 500.

5. From Kirchhoff's current law, applied at either of the nodes, we have I1 + I2 = I3. From Kirchhoff's voltage law, applied to the left and right loops, we have 2I1 = 2I2 + 6 and 2I2 + 4I3 = 8. Thus the currents I1, I2, and I3 satisfy the following system of equations:
I1 + I2 − I3 = 0
2I1 − 2I2 = 6
2I2 + 4I3 = 8
Reduce the augmented matrix.
[ 1 1 −1 | 0 ]
[ 2 −2 0 | 6 ]
[ 0 2 4 | 8 ]

[ 1 1 −1 | 0 ]
[ 1 −1 0 | 3 ]
[ 0 1 2 | 4 ]

[ 1 1 −1 | 0 ]
[ 0 −2 1 | 3 ]
[ 0 1 2 | 4 ]

[ 1 1 −1 | 0 ]
[ 0 1 2 | 4 ]
[ 0 −2 1 | 3 ]

[ 1 1 −1 | 0 ]
[ 0 1 2 | 4 ]
[ 0 0 5 | 11 ]

From the last row we conclude that I3 = 11/5 = 2.2 A, and from back substitution it then follows that I2 = 4 − 2I3 = 4 − 22/5 = −2/5 = −0.4 A, and I1 = I3 − I2 = 11/5 + 2/5 = 13/5 = 2.6 A. Since I2 is negative, its direction is opposite to that indicated in the figure.

7. From application of the current law at each of the four nodes (clockwise from upper left) we have I1 = I2 + I4, I4 = I3 + I5, I6 = I3 + I5, and I1 = I2 + I6. These four equations can be reduced to the following:
I1 = I2 + I4, I4 = I3 + I5, I4 = I6
From application of the voltage law to the three inner loops (left to right) we have 10 = 20I1 + 20I2, 20I2 = 20I3, and 10 + 20I3 = 20I5. These equations can be simplified to:
2I1 + 2I2 = 1, I3 = I2, 2I5 − 2I3 = 1
This gives us six equations in the unknown currents I1, ..., I6. But, since I3 = I2 and I6 = I4, this can be simplified to a system of only four equations in the variables I1, I2, I4, and I5:
I1 − I2 − I4 = 0
I2 − I4 + I5 = 0
2I1 + 2I2 = 1
−2I2 + 2I5 = 1
Reduce the augmented matrix.
[ 1 −1 −1 0 | 0 ]
[ 0 1 −1 1 | 0 ]
[ 2 2 0 0 | 1 ]
[ 0 −2 0 2 | 1 ]
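The circuit system of Exercise 5 can be solved exactly with a minimal Gaussian-elimination sketch (my own helper, not from the text; `fractions` keeps the arithmetic exact):

```python
from fractions import Fraction as F

def solve(aug):
    # Gauss-Jordan elimination on an n x (n+1) augmented matrix.
    n = len(aug)
    m = [[F(v) for v in row] for row in aug]
    for i in range(n):
        p = next(r for r in range(i, n) if m[r][i] != 0)   # partial pivot
        m[i], m[p] = m[p], m[i]
        m[i] = [v / m[i][i] for v in m[i]]                  # normalize pivot row
        for r in range(n):
            if r != i and m[r][i] != 0:
                m[r] = [a - m[r][i] * b for a, b in zip(m[r], m[i])]
    return [row[-1] for row in m]

I1, I2, I3 = solve([[1, 1, -1, 0], [2, -2, 0, 6], [0, 2, 4, 8]])
assert (I1, I2, I3) == (F(13, 5), F(-2, 5), F(11, 5))  # 2.6 A, -0.4 A, 2.2 A
```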

[ 1 −1 −1 0 | 0 ]
[ 0 1 −1 1 | 0 ]
[ 0 4 2 0 | 1 ]
[ 0 −2 0 2 | 1 ]

[ 1 −1 −1 0 | 0 ]
[ 0 1 −1 1 | 0 ]
[ 0 0 6 −4 | 1 ]
[ 0 0 −2 4 | 1 ]

[ 1 −1 −1 0 | 0 ]
[ 0 1 −1 1 | 0 ]
[ 0 0 −2 4 | 1 ]
[ 0 0 6 −4 | 1 ]

[ 1 −1 −1 0 | 0 ]
[ 0 1 −1 1 | 0 ]
[ 0 0 1 −2 | −1/2 ]
[ 0 0 6 −4 | 1 ]

[ 1 −1 −1 0 | 0 ]
[ 0 1 −1 1 | 0 ]
[ 0 0 1 −2 | −1/2 ]
[ 0 0 0 8 | 4 ]

[ 1 −1 −1 0 | 0 ]
[ 0 1 −1 1 | 0 ]
[ 0 0 1 −2 | −1/2 ]
[ 0 0 0 1 | 1/2 ]

From this we conclude that I5 = 1/2, I6 = I4 = −1/2 + 2I5 = −1/2 + 1 = 1/2, I3 = I2 = I4 − I5 = 1/2 − 1/2 = 0, and I1 = I2 + I4 = 0 + 1/2 = 1/2.

9. We seek positive integers x1, x2, x3, and x4 that will balance the chemical equation
x1 (C3H8) + x2 (O2) → x3 (CO2) + x4 (H2O)
For each of the atoms in the equation (C, H, and O), the number of atoms on the left must be equal to the number of atoms on the right. This leads to the equations 3x1 = x3, 8x1 = 2x4, and 2x2 = 2x3 + x4. These equations form the homogeneous linear system
3x1 − x3 = 0
8x1 − 2x4 = 0
2x2 − 2x3 − x4 = 0
Reduce the augmented matrix.
[ 3 0 −1 0 | 0 ]
[ 8 0 0 −2 | 0 ]
[ 0 2 −2 −1 | 0 ]

[ 1 0 −1/3 0 | 0 ]
[ 8 0 0 −2 | 0 ]
[ 0 2 −2 −1 | 0 ]

[ 1 0 −1/3 0 | 0 ]
[ 0 0 8/3 −2 | 0 ]
[ 0 2 −2 −1 | 0 ]

[ 1 0 −1/3 0 | 0 ]
[ 0 2 −2 −1 | 0 ]
[ 0 0 8/3 −2 | 0 ]

[ 1 0 −1/3 0 | 0 ]
[ 0 1 −1 −1/2 | 0 ]
[ 0 0 8/3 −2 | 0 ]

[ 1 0 −1/3 0 | 0 ]
[ 0 1 −1 −1/2 | 0 ]
[ 0 0 1 −3/4 | 0 ]

From this we conclude that x4 is a free variable, and that the general solution of the system is given by x4 = t, x3 = (3/4)x4 = (3/4)t, x2 = x3 + (1/2)x4 = (3/4)t + (1/2)t = (5/4)t, and x1 = (1/3)x3 = (1/4)t. The smallest positive integer solutions are obtained by taking t = 4, in which case we obtain x1 = 1, x2 = 5, x3 = 3, and x4 = 4. Thus the balanced equation is C3H8 + 5O2 → 3CO2 + 4H2O.
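The balanced equation from Exercise 9 can be confirmed by counting atoms on each side (a check of mine, not the manual's):

```python
# Atom-count check for x1 C3H8 + x2 O2 -> x3 CO2 + x4 H2O
# with (x1, x2, x3, x4) = (1, 5, 3, 4).
x1, x2, x3, x4 = 1, 5, 3, 4
assert 3 * x1 == x3            # carbon
assert 8 * x1 == 2 * x4        # hydrogen
assert 2 * x2 == 2 * x3 + x4   # oxygen
```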

11. We must find positive integers x1, x2, x3, and x4 that balance the chemical equation
x1 (CH3COF) + x2 (H2O) → x3 (CH3COOH) + x4 (HF)
For each of the elements in the equation (C, H, O, and F), the number of atoms on the left must equal the number of atoms on the right. This leads to the equations x1 = x3, 3x1 + 2x2 = 4x3 + x4, x1 + x2 = 2x3, and x1 = x4. This is a very simple system of equations having (by inspection) the general solution x1 = x2 = x3 = x4 = t. Thus, taking t = 1, the balanced equation is CH3COF + H2O → CH3COOH + HF.

13. The graph of the polynomial p(x) = a0 + a1 x + a2 x² passes through the points (1, 1), (2, 2), and (3, 5) if and only if the coefficients a0, a1, and a2 satisfy the equations
a0 + a1 + a2 = 1
a0 + 2a1 + 4a2 = 2
a0 + 3a1 + 9a2 = 5
Reduce the augmented matrix.
[ 1 1 1 | 1 ]
[ 1 2 4 | 2 ]
[ 1 3 9 | 5 ]
Add −1 times row 1 to rows 2 and 3.
[ 1 1 1 | 1 ]
[ 0 1 3 | 1 ]
[ 0 2 8 | 4 ]
Add −2 times row 2 to row 3.
[ 1 1 1 | 1 ]
[ 0 1 3 | 1 ]
[ 0 0 2 | 2 ]
We conclude that a2 = 1, and from back substitution it follows that a1 = 1 − 3a2 = −2, and a0 = 1 − a1 − a2 = 2. Thus the quadratic polynomial is p(x) = x² − 2x + 2.

15. The graph of the polynomial p(x) = a0 + a1 x + a2 x² + a3 x³ passes through the points (−1, −1), (0, 1), (1, 3), and (4, −1) if and only if the coefficients a0, a1, a2, and a3 satisfy the equations
a0 − a1 + a2 − a3 = −1
a0 = 1
a0 + a1 + a2 + a3 = 3
a0 + 4a1 + 16a2 + 64a3 = −1
The augmented matrix of this system is
[ 1 −1 1 −1 | −1 ]
[ 1 0 0 0 | 1 ]
[ 1 1 1 1 | 3 ]
[ 1 4 16 64 | −1 ]

[ 1 0 0 0 | 1 ]
[ 1 −1 1 −1 | −1 ]
[ 1 1 1 1 | 3 ]
[ 1 4 16 64 | −1 ]

[ 1 0 0 0 | 1 ]
[ 0 −1 1 −1 | −2 ]
[ 0 1 1 1 | 2 ]
[ 0 4 16 64 | −2 ]

[ 1 0 0 0 | 1 ]
[ 0 1 −1 1 | 2 ]
[ 0 1 1 1 | 2 ]
[ 0 4 16 64 | −2 ]

[ 1 0 0 0 | 1 ]
[ 0 1 −1 1 | 2 ]
[ 0 0 2 0 | 0 ]
[ 0 0 20 60 | −10 ]

[ 1 0 0 0 | 1 ]
[ 0 1 −1 1 | 2 ]
[ 0 0 1 0 | 0 ]
[ 0 0 20 60 | −10 ]

[ 1 0 0 0 | 1 ]
[ 0 1 −1 1 | 2 ]
[ 0 0 1 0 | 0 ]
[ 0 0 0 60 | −10 ]

From the first row we see that a0 = 1, and from the last two rows we conclude that a2 = 0 and a3 = −1/6. Finally, from back substitution, it follows that a1 = 2 + a2 − a3 = 13/6. Thus the polynomial is p(x) = −(1/6)x³ + (13/6)x + 1.
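The cubic found in Exercise 15 can be checked against all four data points with exact arithmetic (my own check, not the manual's):

```python
from fractions import Fraction as F

# p(x) = 1 + (13/6) x + 0 x^2 - (1/6) x^3 must interpolate the four points.
a0, a1, a2, a3 = 1, F(13, 6), 0, F(-1, 6)
p = lambda x: a0 + a1 * x + a2 * x ** 2 + a3 * x ** 3

for x, y in [(-1, -1), (0, 1), (1, 3), (4, -1)]:
    assert p(x) == y
```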

17. (a) The quadratic polynomial p(x) = a0 + a1 x + a2 x² passes through the points (0, 1) and (1, 2) if and only if the coefficients a0, a1, and a2 satisfy the equations
a0 = 1
a0 + a1 + a2 = 2
The general solution of this system, using a1 as a parameter, is a0 = 1, a1 = k, a2 = 1 − k. Thus this family of polynomials is represented by the equation p(x) = 1 + kx + (1 − k)x², where −∞ < k < ∞.

(b) Below are graphs of four curves in the family: y = 1 + x² (k = 0), y = 1 + x (k = 1), y = 1 + 2x − x² (k = 2), and y = 1 + 3x − 2x² (k = 3). [Figure: the four curves plotted on one set of axes.]

True/False 1.8

(a) True; only such networks were considered.

(b) False; when a current passes through a resistor, there is a drop in electrical potential in the circuit.

(c) True

(d) False; the number of atoms of each element must be the same on each side of the equation.

(e) False; this is only true if the n points have distinct x-coordinates.

Section 1.9

Exercise Set 1.9

1. (a) A consumption matrix for this economy is
C = [ 0.50 0.25 ]
    [ 0.25 0.10 ]

(b) We have
(I − C)^−1 = [ 0.50 −0.25 ]^−1 = (1/0.3875) [ 0.90 0.25 ]
             [ −0.25 0.90 ]                 [ 0.25 0.50 ]
and so the production needed to provide customers $7000 worth of mechanical work and $14,000 worth of body work is given by
x = (I − C)^−1 d = (1/0.3875) [ 0.9 0.25 ][ 7000 ]  ≈ [ $25,290 ]
                              [ 0.25 0.5 ][ 14,000 ]   [ $22,581 ]

3. (a) A consumption matrix for this economy is
C = [ 0.1 0.6 0.4 ]
    [ 0.3 0.2 0.3 ]
    [ 0.4 0.1 0.2 ]

(b) The Leontief equation (I − C)x = d is represented by the augmented matrix
[ 0.9 −0.6 −0.4 | 1930 ]
[ −0.3 0.8 −0.3 | 3860 ]
[ −0.4 −0.1 0.8 | 5790 ]
The reduced row echelon form of this matrix is
[ 1 0 0 | 31,500 ]
[ 0 1 0 | 26,500 ]
[ 0 0 1 | 26,300 ]
Thus the required production vector is
x = [ $31,500 ]
    [ $26,500 ]
    [ $26,300 ]

5. The Leontief equation (I − C)x = d for this economy is
[ 0.9 −0.3 ][ x1 ] = [ 50 ]
[ −0.5 0.6 ][ x2 ]   [ 60 ]
thus the required production vector is
x = (I − C)^−1 d = (1/0.39) [ 0.6 0.3 ][ 50 ] = (1/0.39) [ 48 ] ≈ [ 123.08 ]
                            [ 0.5 0.9 ][ 60 ]            [ 79 ]   [ 202.56 ]
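The arithmetic in Exercise 5 can be redone exactly with the 2×2 inverse formula (a sketch of my own, using the I − C matrix given in the solution):

```python
from fractions import Fraction as F

# Entries of I - C from Exercise 5, kept as exact fractions.
a, b, c, d = F(9, 10), F(-3, 10), F(-1, 2), F(3, 5)
det = a * d - b * c                       # determinant of I - C
d1, d2 = 50, 60                           # demand vector

# x = (I - C)^-1 d using the adjugate formula for a 2x2 matrix.
x1 = (d * d1 - b * d2) / det
x2 = (-c * d1 + a * d2) / det

assert det == F(39, 100)                  # the 0.39 in the solution
assert round(float(x1), 2) == 123.08
assert round(float(x2), 2) == 202.56
```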

7. (a) The Leontief equation for this economy is
[ 1/2 0 ][ x1 ] = [ d1 ]
[ 0 0 ][ x2 ]   [ d2 ]
If d1 = 2 and d2 = 0, the system is consistent with general solution x1 = 4, x2 = t (0 ≤ t < ∞). If d1 = 2 and d2 = 1, the system is inconsistent.

(b) The consumption matrix C = [ 1/2 0; 0 1 ] indicates that the entire output of the second sector is consumed in producing that output; thus there is nothing left to satisfy any outside demand. Mathematically, the Leontief matrix I − C = [ 1/2 0; 0 0 ] is not invertible.

9. Since c21 c12 < 1 − c11, it follows that (1 − c11) − c21 c12 > 0. From this we conclude that the matrix I − C is invertible, and
(I − C)^−1 = (1/((1 − c11) − c21 c12)) [ 1 c12; c21 1 − c11 ]
has nonnegative entries. This shows that the economy is productive. Thus the Leontief equation (I − C)x = d has a unique solution for every demand vector d.

11. If C has row sums less than 1, then C^T has column sums less than 1. Thus, from Theorem 1.9.1, it follows that I − C^T is invertible and that (I − C^T)^−1 has nonnegative entries. Since I − C = (I^T − C^T)^T = (I − C^T)^T, we can conclude that I − C is invertible and that (I − C)^−1 = ((I − C^T)^T)^−1 = ((I − C^T)^−1)^T has nonnegative entries.

True/False 1.9

(a) False; open sectors are those that do not produce output.

(b) True

(c) True

(d) True; by Theorem 1.9.1.

(e) True

Chapter 1 Supplementary Exercises

1. The corresponding system is
3x1 − x2 + 4x4 = 1
2x1 + 3x3 + 3x4 = −1
Reduce the augmented matrix.
[ 3 −1 0 4 | 1 ]
[ 2 0 3 3 | −1 ]

[ 1 −1/3 0 4/3 | 1/3 ]
[ 2 0 3 3 | −1 ]

[ 1 −1/3 0 4/3 | 1/3 ]
[ 0 2/3 3 1/3 | −5/3 ]

[ 1 −1/3 0 4/3 | 1/3 ]
[ 0 1 9/2 1/2 | −5/2 ]

Thus, x3 and x4 are free variables. Let x3 = s and x4 = t. Then
x2 = −(9/2)x3 − (1/2)x4 − 5/2 = −(9/2)s − (1/2)t − 5/2
x1 = (1/3)x2 − (4/3)x4 + 1/3
   = (1/3)(−(9/2)s − (1/2)t − 5/2) − (4/3)t + 1/3
   = −(3/2)s − (3/2)t − 1/2
The solution is x1 = −(3/2)s − (3/2)t − 1/2, x2 = −(9/2)s − (1/2)t − 5/2, x3 = s, x4 = t.

3. The corresponding system is
2x1 − 4x2 + x3 = 6
−4x1 + 3x3 = −1
x2 − x3 = 3
Reduce the augmented matrix.
[ 2 −4 1 | 6 ]
[ −4 0 3 | −1 ]
[ 0 1 −1 | 3 ]

[ 1 −2 1/2 | 3 ]
[ −4 0 3 | −1 ]
[ 0 1 −1 | 3 ]

[ 1 −2 1/2 | 3 ]
[ 0 −8 5 | 11 ]
[ 0 1 −1 | 3 ]

[ 1 −2 1/2 | 3 ]
[ 0 1 −1 | 3 ]
[ 0 −8 5 | 11 ]

[ 1 −2 1/2 | 3 ]
[ 0 1 −1 | 3 ]
[ 0 0 −3 | 35 ]

[ 1 −2 1/2 | 3 ]
[ 0 1 −1 | 3 ]
[ 0 0 1 | −35/3 ]

Thus x3 = −35/3, x2 = x3 + 3 = −26/3, and x1 = 2x2 − (1/2)x3 + 3 = −52/3 + 35/6 + 3 = −17/2.
The solution is x1 = −17/2, x2 = −26/3, x3 = −35/3.

5. Reduce the augmented matrix.
[ 3/5 −4/5 | x ]
[ 4/5 3/5 | y ]

[ 1 −4/3 | (5/3)x ]
[ 4/5 3/5 | y ]

[ 1 −4/3 | (5/3)x ]
[ 0 5/3 | −(4/3)x + y ]

[ 1 −4/3 | (5/3)x ]
[ 0 1 | −(4/5)x + (3/5)y ]

[ 1 0 | (3/5)x + (4/5)y ]
[ 0 1 | −(4/5)x + (3/5)y ]

The solution is x′ = (3/5)x + (4/5)y, y′ = −(4/5)x + (3/5)y.

7. Reduce the augmented matrix.
[ 1 1 1 | 9 ]
[ 1 5 10 | 44 ]

[ 1 1 1 | 9 ]
[ 0 4 9 | 35 ]

[ 1 1 1 | 9 ]
[ 0 1 9/4 | 35/4 ]

[ 1 0 −5/4 | 1/4 ]
[ 0 1 9/4 | 35/4 ]

Thus z is a free variable. Let z = s. Then y = −(9/4)s + 35/4 and x = (5/4)s + 1/4. For x, y, and z to be positive integers, s must be a positive integer such that 5s + 1 and −9s + 35 are positive and divisible by 4. Since −9s + 35 > 0, s < 4, so s = 1, 2, or 3. The only possibility is s = 3. Thus, x = 4, y = 2, z = 3.

9. [ a 0 b | 2 ]
   [ a a 4 | 4 ]
   [ 0 a 2 | b ]
Add −1 times the first row to the second.
[ a 0 b | 2 ]
[ 0 a 4 − b | 2 ]
[ 0 a 2 | b ]
Add −1 times the second row to the third.
[ a 0 b | 2 ]
[ 0 a 4 − b | 2 ]
[ 0 0 b − 2 | b − 2 ]

(a) The system has a unique solution if the corresponding matrix can be put in reduced row echelon form without division by zero. Thus, a ≠ 0, b ≠ 2 is required.

(b) If a ≠ 0 and b = 2, then x3 is a free variable and the system has a one-parameter solution.

(c) If a = 0 and b = 2, the system has a two-parameter solution.
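The positive-integer solution of Supplementary Exercise 7 can be checked against both original equations (my own check):

```python
# x + y + z = 9 and x + 5y + 10z = 44 with the solution (4, 2, 3).
x, y, z = 4, 2, 3
assert x + y + z == 9
assert x + 5 * y + 10 * z == 44
```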

(d) If b ≠ 2 but a = 0, the system has no solution.

11. Note that K must be a 2 × 2 matrix. Let K = [ a b; c d ]. Then
[ 1 4 ]               [ 8 6 −6 ]
[ −2 3 ] K [ 2 0 0 ] = [ 6 −1 1 ]
[ 1 −2 ]   [ 0 1 −1 ]  [ −4 0 0 ]
or
[ 1 4 ]                [ 8 6 −6 ]
[ −2 3 ] [ 2a b −b ] = [ 6 −1 1 ]
[ 1 −2 ] [ 2c d −d ]   [ −4 0 0 ]
or
[ 2a + 8c   b + 4d   −b − 4d ]   [ 8 6 −6 ]
[ −4a + 6c  −2b + 3d  2b − 3d ] = [ 6 −1 1 ]
[ 2a − 4c   b − 2d   −b + 2d ]   [ −4 0 0 ]
Thus
2a + 8c = 8
b + 4d = 6
−4a + 6c = 6
−2b + 3d = −1
2a − 4c = −4
b − 2d = 0
Note that we have omitted the 3 equations obtained by equating elements of the last columns of these matrices because the information so obtained would be just a repeat of that gained by equating elements of the second columns. The augmented matrix of the above system is
[ 2 0 8 0 | 8 ]
[ 0 1 0 4 | 6 ]
[ −4 0 6 0 | 6 ]
[ 0 −2 0 3 | −1 ]
[ 2 0 −4 0 | −4 ]
[ 0 1 0 −2 | 0 ]
The reduced row-echelon form of this matrix is
[ 1 0 0 0 | 0 ]
[ 0 1 0 0 | 2 ]
[ 0 0 1 0 | 1 ]
[ 0 0 0 1 | 1 ]
[ 0 0 0 0 | 0 ]
[ 0 0 0 0 | 0 ]
Thus a = 0, b = 2, c = 1, and d = 1. The matrix is K = [ 0 2; 1 1 ].

13. (a) Here X must be a 2 × 3 matrix. Let X = [ a b c; d e f ]. Then
[ a b c ][ −1 0 1 ]   [ 1 2 0 ]
[ d e f ][ 1 1 0 ] = [ −3 1 5 ]
          [ 3 1 −1 ]
or
[ −a + b + 3c   b + c   a − c ]   [ 1 2 0 ]
[ −d + e + 3f   e + f   d − f ] = [ −3 1 5 ]
Equating entries in the first row leads to the system
−a + b + 3c = 1
b + c = 2
a − c = 0
Equating entries in the second row yields the system
−d + e + 3f = −3
e + f = 1
d − f = 5
These systems can be solved simultaneously by reducing the following augmented matrix.
[ −1 1 3 | 1 −3 ]
[ 0 1 1 | 2 1 ]
[ 1 0 −1 | 0 5 ]

[ 1 0 −1 | 0 5 ]
[ 0 1 1 | 2 1 ]
[ −1 1 3 | 1 −3 ]

[ 1 0 −1 | 0 5 ]
[ 0 1 1 | 2 1 ]
[ 0 1 2 | 1 2 ]

[ 1 0 −1 | 0 5 ]
[ 0 1 1 | 2 1 ]
[ 0 0 1 | −1 1 ]

[ 1 0 0 | −1 6 ]
[ 0 1 0 | 3 0 ]
[ 0 0 1 | −1 1 ]

Thus, a = −1, b = 3, c = −1, d = 6, e = 0, f = 1, and X = [ −1 3 −1; 6 0 1 ].
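The matrix K found above can be verified by forming the triple product directly (my own check; `matmul` is a small helper written here):

```python
# Confirm L * K * R reproduces the given right-hand side.
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

L = [[1, 4], [-2, 3], [1, -2]]
R = [[2, 0, 0], [0, 1, -1]]
K = [[0, 2], [1, 1]]
assert matmul(matmul(L, K), R) == [[8, 6, -6], [6, -1, 1], [-4, 0, 0]]
```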

(b) X must be a 2 × 2 matrix. Let X = [ x y; z w ]. Then
X [ 1 −1 2 ] = [ x + 3y −x 2x + y ]
  [ 3 0 1 ]   [ z + 3w −z 2z + w ]
If we equate matrix entries, this gives us the equations
x + 3y = −5    z + 3w = 6
−x = −1        −z = −3
2x + y = 0     2z + w = 7
Thus x = 1 and z = 3, so that the top two equations give y = −2 and w = 1. Since these values are consistent with the bottom two equations, we have that X = [ 1 −2; 3 1 ].

(c) Again, X must be a 2 × 2 matrix. Let X = [ x y; z w ], so that the matrix equation becomes
[ 3x + z   3y + w ]   [ x + 2y 4x ]   [ 2 −2 ]
[ −x + 2z  −y + 2w ] − [ z + 2w 4z ] = [ 5 4 ]
This yields the system of equations
2x − 2y + z = 2
−4x + 3y + w = −2
−x + z − 2w = 5
−y − 4z + 2w = 4
with matrix
[ 2 −2 1 0 | 2 ]
[ −4 3 0 1 | −2 ]
[ −1 0 1 −2 | 5 ]
[ 0 −1 −4 2 | 4 ]
which reduces to
[ 1 0 0 0 | −113/37 ]
[ 0 1 0 0 | −160/37 ]
[ 0 0 1 0 | −20/37 ]
[ 0 0 0 1 | −46/37 ]
Hence, x = −113/37, y = −160/37, z = −20/37, w = −46/37, and
X = [ −113/37 −160/37 ]
    [ −20/37 −46/37 ]

15. Since the coordinates of the given points must satisfy the polynomial, we have
p(1) = 2 ⇒ a + b + c = 2
p(−1) = 6 ⇒ a − b + c = 6
p(2) = 3 ⇒ 4a + 2b + c = 3
This system has augmented matrix
[ 1 1 1 | 2 ]
[ 1 −1 1 | 6 ]
[ 4 2 1 | 3 ]
which reduces to
[ 1 0 0 | 1 ]
[ 0 1 0 | −2 ]
[ 0 0 1 | 3 ]
Thus, a = 1, b = −2, and c = 3.

19. First suppose that AB^−1 = B^−1 A. Note that all matrices must be square and of the same size. Therefore (AB^−1)B = (B^−1 A)B or A = B^−1 AB, so that BA = B(B^−1 AB) = (BB^−1)(AB) = AB. A similar argument shows that if AB = BA then AB^−1 = B^−1 A.

21. Consider the ith row of AB. By the definition of matrix multiplication, this is
ai1 b11 + ai2 b21 + ⋯ + ain bn1 = (ai1 + ai2 + ⋯ + ain)/n = ri
since all entries bi1 = 1/n.
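The quadratic of Supplementary Exercise 15 can be checked against all three points (my own check):

```python
# p(x) = x^2 - 2x + 3 must satisfy p(1) = 2, p(-1) = 6, p(2) = 3.
p = lambda x: x * x - 2 * x + 3
assert p(1) == 2
assert p(-1) == 6
assert p(2) == 3
```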

Chapter 10
Applications of Linear Algebra

Section 10.1

Exercise Set 10.1

1. (a) Substituting the coordinates of the points into Equation (4) yields
| x y 1 |
| 1 −1 1 | = 0
| 2 2 1 |
which, upon cofactor expansion along the first row, yields −3x + y + 4 = 0; that is, y = 3x − 4.

(b) As in (a),
| x y 1 |
| 0 1 1 | = 0
| 1 −1 1 |
yields 2x + y − 1 = 0 or y = −2x + 1.

2. (a) Equation (9) yields
| x² + y²  x y 1 |
| 40       2 6 1 |
| 4        2 0 1 | = 0
| 34       5 3 1 |
which, upon first-row cofactor expansion, yields 18(x² + y²) − 72x − 108y + 72 = 0 or, dividing by 18, x² + y² − 4x − 6y + 4 = 0. Completing the squares in x and y yields the standard form (x − 2)² + (y − 3)² = 9.

(b) As in (a),
| x² + y²  x y 1 |
| 8        2 −2 1 |
| 34       3 5 1 | = 0
| 52       −4 6 1 |
yields 50(x² + y²) + 100x − 200y − 1000 = 0; that is, x² + y² + 2x − 4y − 20 = 0. In standard form this is (x + 1)² + (y − 2)² = 25.

3. Using Equation (10) we obtain
| x²  xy  y²  x  y  1 |
| 0   0   0   0  0  1 |
| 0   0   1   0  −1 1 |
| 4   0   0   2  0  1 | = 0.
| 4   −10 25  2  −5 1 |
| 16  −4  1   4  −1 1 |
Now expand along the first row and get
| x²  xy  y²  x  y |
| 0   0   1   0  −1 |
| 4   0   0   2  0 | = 0
| 4   −10 25  2  −5 |
| 16  −4  1   4  −1 |
by expansion along the second row (taking advantage of the zeros there). Add column five to column three and take advantage of another row of all but one zero to get
| x²  xy  y² + y  x |
| 4   0   0       2 |
| 4   −10 20      2 | = 0.
| 16  −4  0       4 |
This yields 160x² + 320xy + 160(y² + y) − 320x = 0; that is, x² + 2xy + y² − 2x + y = 0, which is the equation of a parabola.

4. (a) From Equation (11), the equation of the plane is
| x y z 1 |
| 1 1 −3 1 |
| 1 −1 1 1 | = 0.
| 0 −1 2 1 |
Expansion along the first row yields −2x − 4y − 2z = 0; that is, x + 2y + z = 0.

(b) As in (a),
| x y z 1 |
| 2 3 1 1 |
| 2 −1 −1 1 | = 0
| 1 2 1 1 |
yields −2x + 2y − 4z + 2 = 0; that is, −x + y − 2z + 1 = 0 for the equation of the plane.

5. (a) Equation (11) involves the determinant of the coefficient matrix of the system
[ x y z 1 ][ c1 ]   [ 0 ]
[ x1 y1 z1 1 ][ c2 ] = [ 0 ]
[ x2 y2 z2 1 ][ c3 ]   [ 0 ]
[ x3 y3 z3 1 ][ c4 ]   [ 0 ]
Rows 2 through 4 show that the plane passes through the three points (xi, yi, zi), i = 1, 2, 3, while row 1 gives the equation c1 x + c2 y + c3 z + c4 = 0. For the plane passing through the origin parallel to the plane passing through the three points, the constant term in the final equation will be 0, which is accomplished by using
| x y z 0 |
| x1 y1 z1 1 |
| x2 y2 z2 1 | = 0.
| x3 y3 z3 1 |

(b) The parallel planes passing through the origin are x + 2y + z = 0 and −x + y − 2z = 0, respectively.

6. (a) Using Equation (12), the equation of the sphere is
| x² + y² + z²  x y z 1 |
| 14            1 2 3 1 |
| 6             −1 2 1 1 | = 0.
| 2             1 0 1 1 |
| 6             1 2 −1 1 |
Expanding by cofactors along the first row yields 16(x² + y² + z²) − 32x − 64y − 32z + 32 = 0; that is, x² + y² + z² − 2x − 4y − 2z + 2 = 0. Completing the squares in each variable yields the standard form (x − 1)² + (y − 2)² + (z − 1)² = 4.
Note: When evaluating the cofactors, it is useful to take advantage of the column of ones and elementary row operations; for example, the cofactor of x² + y² + z² above can be evaluated as follows:
| 1 2 3 1 |    | 1 2 3 1 |
| −1 2 1 1 |    | −2 0 −2 0 |
| 1 0 1 1 |  = | 0 −2 −2 0 | = 16
| 1 2 −1 1 |   | 0 0 −4 0 |
by cofactor expansion of the latter determinant along the last column.

(b) As in (a),
| x² + y² + z²  x y z 1 |
| 5             0 1 −2 1 |
| 11            1 3 1 1 |
| 5             2 −1 0 1 | = 0
| 11            3 1 −1 1 |
yields −24(x² + y² + z²) + 48x + 48y + 72 = 0; that is, x² + y² + z² − 2x − 2y − 3 = 0 or, in standard form, (x − 1)² + (y − 1)² + z² = 5.

7. Substituting each of the points (x1, y1), (x2, y2), (x3, y3), (x4, y4), and (x5, y5) into the equation c1 x² + c2 xy + c3 y² + c4 x + c5 y + c6 = 0 yields
c1 x1² + c2 x1 y1 + c3 y1² + c4 x1 + c5 y1 + c6 = 0
⋮
c1 x5² + c2 x5 y5 + c3 y5² + c4 x5 + c5 y5 + c6 = 0.
These together with the original equation form a homogeneous linear system with a nontrivial solution for c1, c2, ..., c6. Thus the determinant of the coefficient matrix is zero, which is exactly Equation (10).

8. As in the previous problem, substitute the coordinates (xi, yi, zi) of each of the three points into the equation c1 x + c2 y + c3 z + c4 = 0 to obtain a homogeneous system with nontrivial solution for c1, ..., c4. Thus the determinant of the coefficient matrix is zero, which is exactly Equation (11).

9. Substituting the coordinates (xi, yi, zi) of the four points into the equation c1 (x² + y² + z²) + c2 x + c3 y + c4 z + c5 = 0 of the sphere yields four equations, which together with the above sphere equation form a homogeneous linear system for c1, ..., c5 with a nontrivial solution. Thus the determinant of this system is zero, which is Equation (12).

10. Upon substitution of the coordinates of the three points (x1, y1), (x2, y2), and (x3, y3), we obtain the equations:
c1 y + c2 x² + c3 x + c4 = 0
c1 y1 + c2 x1² + c3 x1 + c4 = 0
c1 y2 + c2 x2² + c3 x2 + c4 = 0
c1 y3 + c2 x3² + c3 x3 + c4 = 0.
This is a homogeneous system with a nontrivial solution for c1, c2, c3, c4, so the determinant of the coefficient matrix is zero; that is,
| y  x²  x  1 |
| y1 x1² x1 1 |
| y2 x2² x2 1 | = 0.
| y3 x3² x3 1 |

11. Expanding the determinant in Equation (9) by cofactors of the first row makes it apparent that the coefficient of x² + y² in the final equation is
| x1 y1 1 |
| x2 y2 1 |
| x3 y3 1 |
If the points are collinear, then the columns are linearly dependent (yi = m xi + b), so the coefficient of x² + y² is 0 and the resulting equation is that of the line through the three points.

12. If the three distinct points are collinear then two of the coordinates can be expressed in terms of the third. Without loss of generality, we can say that y and z can be expressed in terms of x, i.e., x is the parameter. If the line is (x, ax + b, cx + d), then the determinant in Equation (11) is equivalent to
| x  −ax − b  −cx − d  1 |
| x1 0 0 1 |
| x2 0 0 1 |
| x3 0 0 1 |
Expanding along the first row, it is clear that the determinant is 0 and Equation (11) becomes 0 = 0.

13. As in Exercise 11, the coefficient of x² + y² + z² will be 0, so Equation (12) gives the equation of the plane in which the four points lie.
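The circle produced in Exercise 2(a) can be checked against its three defining points (a quick check of mine, not the manual's):

```python
# (x - 2)^2 + (y - 3)^2 = 9 must pass through (2, 6), (2, 0), (5, 3).
on_circle = lambda x, y: (x - 2) ** 2 + (y - 3) ** 2 == 9
assert on_circle(2, 6)
assert on_circle(2, 0)
assert on_circle(5, 3)
```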

SSM: Elementary Linear Algebra

10. Upon substitution of the coordinates of the three points (x1, y1), (x2, y2), and (x3, y3) into
c1 y + c2 x² + c3 x + c4 = 0,
we obtain the equations
c1 y1 + c2 x1² + c3 x1 + c4 = 0
c1 y2 + c2 x2² + c3 x2 + c4 = 0
c1 y3 + c2 x3² + c3 x3 + c4 = 0.
This is a homogeneous system with a nontrivial solution for c1, c2, c3, c4, so the determinant of the coefficient matrix is zero; that is,
| y    x²    x    1 |
| y1   x1²   x1   1 |
| y2   x2²   x2   1 |  = 0.
| y3   x3²   x3   1 |

11. Expanding the determinant in Equation (9) by cofactors of the first row makes it apparent that the coefficient of x² + y² in the final equation is
| x1   y1   1 |
| x2   y2   1 |.
| x3   y3   1 |
If the points are collinear, then the columns are linearly dependent (yi = m xi + b), so the coefficient of x² + y² is 0 and the resulting equation is that of the line through the three points.

12. If the three distinct points are collinear, then two of the coordinates can be expressed in terms of the third. Without loss of generality, we can say that y and z can be expressed in terms of x; i.e., x is the parameter. If the line is (x, ax + b, cx + d), then the determinant in Equation (11) is equivalent to
| x    y − ax − b    z − cx − d    1 |
| x1        0             0        1 |
| x2        0             0        1 |.
| x3        0             0        1 |
Expanding along the first row, it is clear that the determinant is 0 and Equation (11) becomes 0 = 0.

13. As in Exercise 11, the coefficient of x² + y² + z² will be 0, so Equation (12) gives the equation of the plane in which the four points lie.

Section 10.2

Exercise Set 10.2

1. [Figure: feasible region with boundary lines 2x1 − x2 = 0, x2 = 1, 2x1 + 3x2 = 6, and x1 = 2; the extreme points are labeled.]
In the figure the feasible region is shown and the extreme points are labeled. The values of the objective function are shown in the following table:

Extreme point (x1, x2) | Value of z = 3x1 + 2x2
(0, 0)                 | 0
(1/2, 1)               | 7/2
(3/2, 1)               | 13/2
(2, 2/3)               | 22/3
(2, 0)                 | 6

Thus the maximum, 22/3, is attained when x1 = 2 and x2 = 2/3.

2. [Figure: the boundary lines x2 = 3, 2x1 − x2 = −2, and 4x1 − x2 = 0 are shown.]
The intersection of the five half-planes defined by the constraints is empty. Thus this problem has no feasible solutions.
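The corner-point enumeration in Exercise 1 can be reproduced with a short script (not part of the manual's solution; the vertex list is transcribed from the table above, not computed from the constraints):

```python
from fractions import Fraction as F

# Extreme points of the feasible region, read off the figure in Exercise 1
vertices = [(F(0), F(0)), (F(1, 2), F(1)), (F(3, 2), F(1)),
            (F(2), F(2, 3)), (F(2), F(0))]

def z(x1, x2):
    # Objective function z = 3x1 + 2x2
    return 3 * x1 + 2 * x2

best = max(vertices, key=lambda v: z(*v))  # best extreme point
print(best, z(*best))
```

Because a linear objective attains its maximum at an extreme point of the feasible region, checking the vertices is sufficient.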

Chapter 10: Applications of Linear Algebra

3. [Figure: unbounded feasible region bounded by the lines −x1 + x2 = 1, 3x1 − x2 = −5, and 2x1 + 4x2 = 12.]
The feasible region for this problem, shown in the figure, is unbounded. The value of z = −3x1 + 2x2 cannot be minimized in this region, since it becomes arbitrarily negative as we travel outward along the line −x1 + x2 = 1; i.e., the value of z is −3x1 + 2x2 = −3x1 + 2(x1 + 1) = −x1 + 2, and x1 can be arbitrarily large.

4. [Figure: feasible region bounded by the lines x1 + x2 = 10,000, x1 = 6000, x2 = 2000, and x1 − x2 = 0, with vertices (5000, 5000), (6000, 4000), (6000, 2000), and (2000, 2000).]
The feasible region and vertices for this problem are shown in the figure. The maximum value of the objective function z = .1x1 + .07x2 is attained when x1 = 6000 and x2 = 4000, so that z = 880. In other words, she should invest $6000 in bond A and $4000 in bond B for an annual yield of $880.

5. [Figure: unbounded feasible region with boundary lines including 3x1 − 2x2 = 0, 4x1 + 2x2 = 9, and x1 − 2x2 = 0, and extreme points (3/2, 9/4), (14/9, 25/18), and (40/21, 20/21).]
The feasible region and its extreme points are shown in the figure. Though the region is unbounded, x1 and x2 are always positive, so the objective function z = 7.5x1 + 5.0x2 is also. Thus it has a minimum, which is attained at the point where x1 = 14/9 and x2 = 25/18. The value of z there is 335/18. In the problem's terms, if we use 7/9 cup of milk and 25/18 ounces of corn flakes, a minimum cost of 18.6¢ is realized.

6. (a) Both x1 ≥ 0 and x2 ≥ 0 are nonbinding. Of the remaining constraints, only 2x1 + 3x2 ≤ 24 is binding.

(b) x1 − x2 ≤ v will be binding if the line x1 − x2 = v intersects the line x2 = 6 with 0 < x1 < 3, thus if −6 < v < −3. If v < −6, then the feasible region is empty.

(c) x2 ≤ v will be nonbinding for v ≥ 8, and the feasible region will be empty for v < 0.

7. Letting x1 be the number of Company A's containers shipped and x2 the number of Company B's, the problem is:
Maximize z = 2.20x1 + 3.00x2 subject to
40x1 + 50x2 ≤ 37,000
2x1 + 3x2 ≤ 2000
x1 ≥ 0
x2 ≥ 0.
[Figure: feasible region bounded by 40x1 + 50x2 = 37,000 and 2x1 + 3x2 = 2000, with vertices (0, 0), (925, 0), and (550, 300).]
The feasible region is shown in the figure. The vertex at which the maximum is attained is x1 = 550 and x2 = 300, where z = 2110. A truck should carry 550 containers from Company A and 300 containers from Company B for maximum shipping charges of $2110.

8. We must now maximize z = 2.50x1 + 3.00x2. The feasible region is as in Exercise 7, but now the maximum occurs for x1 = 925 and x2 = 0, where z = 2312.50.
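The vertex checks in Exercises 7 and 8 can be sketched the same way (a check script, not part of the manual; the corner points are taken from the figure):

```python
from fractions import Fraction as F

# Corner points of the common feasible region of Exercises 7 and 8
vertices = [(0, 0), (925, 0), (550, 300)]

def best(c1, c2):
    # Maximize c1*x1 + c2*x2 over the corner points; c1, c2 given as strings
    # so the dollar amounts stay exact rational numbers.
    return max(vertices, key=lambda v: F(c1) * v[0] + F(c2) * v[1])

v7 = best("2.20", "3.00")   # Exercise 7: expect (550, 300)
v8 = best("2.50", "3.00")   # Exercise 8: expect (925, 0)
```

Raising the per-container rate for Company A from $2.20 to $2.50 is exactly what moves the optimum from (550, 300) to (925, 0).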

925 containers from Company A and no containers from Company B should be shipped for maximum shipping charges of $2312.50.

Section 10.3

Exercise Set 10.3

1. The number of oxen is 50 per herd, and there are 7 herds, so there are 350 oxen. Hence the total number of oxen and sheep is 350 + 350 = 700.

9. Let x1 be the number of pounds of ingredient A used and x2 the number of pounds of ingredient B. Then the problem is:
Minimize z = 8x1 + 9x2 subject to
2x1 + 5x2 ≥ 10
2x1 + 3x2 ≥ 8
6x1 + 4x2 ≥ 12
x1 ≥ 0
x2 ≥ 0.
[Figure: unbounded feasible region bounded by 6x1 + 4x2 = 12, 2x1 + 3x2 = 8, and 2x1 + 5x2 = 10, with extreme points (0, 3), (2/5, 12/5), (5/2, 1), and (5, 0).]
Though the feasible region shown in the figure is unbounded, the objective function is always positive there and hence must have a minimum. This minimum occurs at the vertex where x1 = 2/5 and x2 = 12/5. The minimum value of z is 124/5, or 24.8¢. Each sack of feed should contain 0.4 lb of ingredient A and 2.4 lb of ingredient B for a minimum cost of 24.8¢.

2. (a) The equations are B = 2A, C = 3(A + B), D = 4(A + B + C), 300 = A + B + C + D. Solving this linear system gives A = 5 (and B = 10, C = 45, D = 240).

(b) The equations are B = 2A, C = 3B, D = 4C, 132 = A + B + C + D. Solving this linear system gives A = 4 (and B = 8, C = 24, D = 96).

3. Note that this is, effectively, Gaussian elimination applied to the augmented matrix
[ 1    1   | 10 ]
[ 1   1/4  |  7 ].

4. (a) Let x represent oxen and y represent sheep; then the equations are 5x + 2y = 10 and 2x + 5y = 8. The corresponding array is
 2    5
 5    2
 8   10
and the elimination step subtracts twice the numbers in column 2 from five times the numbers in column 1, giving

 0    5
21    2
20   10
and so y = 20/21 unit for a sheep, and x = 34/21 units for an ox.

10. It is sufficient to show that if a linear function has the same value at two points, then it has that value along the line connecting the two points. Let ax1 + bx2 be the function, and suppose ax1′ + bx2′ = ax1″ + bx2″ = c. Then (t x1′ + (1 − t)x1″, t x2′ + (1 − t)x2″) is an arbitrary point on the line connecting (x1′, x2′) to (x1″, x2″), and
a(t x1′ + (1 − t)x1″) + b(t x2′ + (1 − t)x2″) = t(ax1′ + bx2′) + (1 − t)(ax1″ + bx2″) = tc + (1 − t)c = c.

4. (b) Let x, y, and z represent the number of bundles of each class. Then the equations are
2x + y = 1
3y + z = 1
x + 4z = 1
and the corresponding array is

 0    2    1
 3    1    0
 1    0    4
 1    1    1
Subtract two times the numbers in the third column from the second column to get
 0    0    1
 3    1    0
 1   −8    4
 1   −1    1
Now subtract three times the numbers in the second column from the first column to get
 0    0    1
 0    1    0
25   −8    4
 4   −1    1
This is equivalent to the linear system
x + 4z = 1
y − 8z = −1
25z = 4.
From this, the solution is that a bundle of the first class contains 9/25 measure, a bundle of the second class contains 7/25 measure, and a bundle of the third class contains 4/25 measure.

5. (b) Exercise 7(b) may be solved using this technique. x1 represents gold, x2 represents brass, x3 represents tin, and x4 represents much-wrought iron, so n = 4 and
a1 = 60,
a2 = (2/3)(60) = 40,
a3 = (3/4)(60) = 45,
a4 = (3/5)(60) = 36.
Then
x1 = ((a2 + a3 + a4) − a1)/(n − 2) = (40 + 45 + 36 − 60)/(4 − 2) = 61/2
x2 = a2 − x1 = 40 − 61/2 = 19/2
x3 = a3 − x1 = 45 − 61/2 = 29/2
x4 = a4 − x1 = 36 − 61/2 = 11/2.
The crown was made with 30 1/2 minae of gold, 9 1/2 minae of brass, 14 1/2 minae of tin, and 5 1/2 minae of iron.
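The closed-form solution used in Exercise 5(b) can be verified with a few lines of Python (a check script, not part of the manual's solution):

```python
from fractions import Fraction as F

n = 4
# a1..a4 for the crown problem: 60 minae total, then 2/3, 3/4, 3/5 of it
a = [F(60), F(2, 3) * 60, F(3, 4) * 60, F(3, 5) * 60]   # 60, 40, 45, 36

x1 = (sum(a[1:]) - a[0]) / (n - 2)            # x1 = (a2 + a3 + a4 - a1)/(n - 2)
weights = [x1] + [aj - x1 for aj in a[1:]]    # xj = aj - x1 for j >= 2
```

Exact rational arithmetic confirms the four weights 61/2, 19/2, 29/2, and 11/2, which sum to the full 60 minae.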

5. (a) From equations 2 through n, xj = aj − x1 (j = 2, ..., n). Using these equations in equation 1 gives
x1 + (a2 − x1) + (a3 − x1) + ⋯ + (an − x1) = a1,
so
x1 = (a2 + a3 + ⋯ + an − a1)/(n − 2).
First find x1 in terms of the known quantities n and the ai. Then we can use xj = aj − x1 (j = 2, ..., n) to find the other xi.

6. (a) We can write this as
5x + y + z − K = 0
x + 7y + z − K = 0
x + y + 8z − K = 0
(a 3 × 4 system). Since the coefficient matrix of equations (5) is invertible (its determinant is 262), there is a unique solution x, y, z for every K; hence, K is an arbitrary parameter.

(b) Gaussian elimination gives x = 21K/131, y = 14K/131, z = 12K/131, for any choice of K. Since 131 is prime, we must choose K to be an integer multiple of 131 to get integer solutions. The obvious choice is K = 131, giving x = 21, y = 14, z = 12, and K = 131.
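The integer solution in Exercise 6(b) is easy to verify directly (a check script, not part of the manual's solution):

```python
# With K = 131, the solution x = 21K/131, y = 14K/131, z = 12K/131 becomes integral.
K = 131
x, y, z = 21 * K // 131, 14 * K // 131, 12 * K // 131

# Residuals of the three equations 5x + y + z = K, x + 7y + z = K, x + y + 8z = K
residuals = (5 * x + y + z - K, x + 7 * y + z - K, x + y + 8 * z - K)
```

All three residuals vanish, confirming that (21, 14, 12) with K = 131 solves the closed system exactly.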

(c) This solution corresponds to K = 262 = 2 · 131.

7. (a) The system is x + y = 1000, (1/5)x − (1/4)y = 10, with solution x = 577 7/9 and y = 422 2/9. The legitimate son receives 577 7/9 staters, the illegitimate son receives 422 2/9.

(b) The system is G + B = (2/3)(60), G + T = (3/4)(60), G + I = (3/5)(60), G + B + T + I = 60, with solution G = 30.5, B = 9.5, T = 14.5, and I = 5.5. The crown was made with 30 1/2 minae of gold, 9 1/2 minae of brass, 14 1/2 minae of tin, and 5 1/2 minae of iron.

(c) The system is A = B + (1/3)C, B = C + (1/3)A, C = (1/3)B + 10, with solution A = 45, B = 37.5, and C = 22.5. The first person has 45 minae, the second has 37 1/2, and the third has 22 1/2.

Section 10.4

Exercise Set 10.4

2. (a) Set h = .2 and
x1 = 0, y1 = .00000
x2 = .2, y2 = .19867
x3 = .4, y3 = .38942
x4 = .6, y4 = .56464
x5 = .8, y5 = .71736
x6 = 1.0, y6 = .84147.
Then
6(y1 − 2y2 + y3)/h² = −1.1880
6(y2 − 2y3 + y4)/h² = −2.3295
6(y3 − 2y4 + y5)/h² = −3.3750
6(y4 − 2y5 + y6)/h² = −4.2915
and the linear system (21) for the parabolic runout spline becomes
[ 5  1  0  0 ] [ M2 ]   [ −1.1880 ]
[ 1  4  1  0 ] [ M3 ] = [ −2.3295 ]
[ 0  1  4  1 ] [ M4 ]   [ −3.3750 ]
[ 0  0  1  5 ] [ M5 ]   [ −4.2915 ].
Solving this system yields M2 = −.15676, M3 = −.40421, M4 = −.55592, M5 = −.74712.
From (19) and (20) we have M1 = M2 = −.15676, M6 = M5 = −.74712.
The specific interval .4 ≤ x ≤ .6 is the third interval. Using (14) to solve for a3, b3, c3, and d3 gives
a3 = (M4 − M3)/(6h) = −.12643
b3 = M3/2 = −.20211
c3 = (y4 − y3)/h − (M4 + 2M3)h/6 = .92158
d3 = y3 = .38942.
The interpolating parabolic runout spline for .4 ≤ x ≤ .6 is thus
S(x) = −.12643(x − .4)³ − .20211(x − .4)² + .92158(x − .4) + .38942.

(b) S(.5) = −.12643(.1)³ − .20211(.1)² + .92158(.1) + .38942 = .47943.
Since sin(.5) = S(.5) = .47943 to five decimal places, the percentage error is zero.

3. (a) Given that the points lie on a single cubic curve, the cubic runout spline will agree exactly with the single cubic curve.
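The tridiagonal solve in Exercise 2(a) can be reproduced numerically (a sketch, not part of the manual's solution; `solve` is a naive elimination routine written here for the check):

```python
def solve(A, b):
    # Naive Gaussian elimination with partial pivoting -- fine for tiny systems.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

h, ys = 0.2, [0.00000, 0.19867, 0.38942, 0.56464, 0.71736, 0.84147]
rhs = [6 * (ys[i] - 2 * ys[i + 1] + ys[i + 2]) / h**2 for i in range(4)]
A = [[5, 1, 0, 0], [1, 4, 1, 0], [0, 1, 4, 1], [0, 0, 1, 5]]
M2, M3, M4, M5 = solve(A, rhs)

# Coefficients of the cubic piece on [.4, .6] from Equations (14)
a3 = (M4 - M3) / (6 * h)
b3 = M3 / 2
c3 = (ys[3] - ys[2]) / h - (M4 + 2 * M3) * h / 6
d3 = ys[2]
S_half = a3 * 0.1**3 + b3 * 0.1**2 + c3 * 0.1 + d3   # S(.5)
```

The computed S(.5) agrees with sin(.5) = .47943 to the tabulated precision.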


(b) Set h = 1 and
x1 = 0, y1 = 1
x2 = 1, y2 = 7
x3 = 2, y3 = 27
x4 = 3, y4 = 79
x5 = 4, y5 = 181.
Then
6(y1 − 2y2 + y3)/h² = 84
6(y2 − 2y3 + y4)/h² = 192
6(y3 − 2y4 + y5)/h² = 300
and the linear system (24) for the cubic runout spline becomes
[ 6  0  0 ] [ M2 ]   [  84 ]
[ 1  4  1 ] [ M3 ] = [ 192 ].
[ 0  0  6 ] [ M4 ]   [ 300 ]
Solving this system yields M2 = 14, M3 = 32, M4 = 50.
From (22) and (23) we have
M1 = 2M2 − M3 = −4
M5 = 2M4 − M3 = 68.
Using (14) to solve for the ai's, bi's, ci's, and di's we have
a1 = (M2 − M1)/(6h) = 3, a2 = (M3 − M2)/(6h) = 3, a3 = (M4 − M3)/(6h) = 3, a4 = (M5 − M4)/(6h) = 3,
b1 = M1/2 = −2, b2 = M2/2 = 7, b3 = M3/2 = 16, b4 = M4/2 = 25,
c1 = (y2 − y1)/h − (M2 + 2M1)h/6 = 5,
c2 = (y3 − y2)/h − (M3 + 2M2)h/6 = 10,
c3 = (y4 − y3)/h − (M4 + 2M3)h/6 = 33,
c4 = (y5 − y4)/h − (M5 + 2M4)h/6 = 74,
d1 = y1 = 1, d2 = y2 = 7, d3 = y3 = 27, d4 = y4 = 79.
For 0 ≤ x ≤ 1 we have S(x) = S1(x) = 3x³ − 2x² + 5x + 1.
For 1 ≤ x ≤ 2 we have
S(x) = S2(x) = 3(x − 1)³ + 7(x − 1)² + 10(x − 1) + 7 = 3x³ − 2x² + 5x + 1.
For 2 ≤ x ≤ 3 we have
S(x) = S3(x) = 3(x − 2)³ + 16(x − 2)² + 33(x − 2) + 27 = 3x³ − 2x² + 5x + 1.
For 3 ≤ x ≤ 4 we have
S(x) = S4(x) = 3(x − 3)³ + 25(x − 3)² + 74(x − 3) + 79 = 3x³ − 2x² + 5x + 1.
Thus S1(x) = S2(x) = S3(x) = S4(x), or S(x) = 3x³ − 2x² + 5x + 1 for 0 ≤ x ≤ 4.

4. The linear system (16) for the natural spline becomes
[ 4  1  0 ] [ M2 ]   [ −.0001116 ]
[ 1  4  1 ] [ M3 ] = [ −.0000816 ].
[ 0  1  4 ] [ M4 ]   [ −.0000636 ]
Solving this system yields M2 = −.0000252, M3 = −.0000108, M4 = −.0000132.
From (17) and (18) we have M1 = 0, M5 = 0.
Solving for the ai's, bi's, ci's, and di's from Equations (14) we have
a1 = (M2 − M1)/(6h) = −.00000042,
a2 = (M3 − M2)/(6h) = .00000024,
a3 = (M4 − M3)/(6h) = −.00000004,
a4 = (M5 − M4)/(6h) = .00000022,
b1 = M1/2 = 0, b2 = M2/2 = −.0000126, b3 = M3/2 = −.0000054, b4 = M4/2 = −.0000066, b5 = M5/2 = 0,
c1 = (y2 − y1)/h − (M2 + 2M1)h/6 = .000214,
c2 = (y3 − y2)/h − (M3 + 2M2)h/6 = .000088,
c3 = (y4 − y3)/h − (M4 + 2M3)h/6 = −.000092,
c4 = (y5 − y4)/h − (M5 + 2M4)h/6 = −.000212,
d1 = y1 = .99815, d2 = y2 = .99987, d3 = y3 = .99973, d4 = y4 = .99823.
The resulting natural spline is
S(x) = { −.00000042(x + 10)³ + .000214(x + 10) + .99815,                      −10 ≤ x ≤ 0
       { .00000024x³ − .0000126x² + .000088x + .99987,                        0 ≤ x ≤ 10
       { −.00000004(x − 10)³ − .0000054(x − 10)² − .000092(x − 10) + .99973,  10 ≤ x ≤ 20
       { .00000022(x − 20)³ − .0000066(x − 20)² − .000212(x − 20) + .99823,   20 ≤ x ≤ 30.
Assuming the maximum is attained in the interval [0, 10], we set S′(x) equal to zero in this interval:
S′(x) = .00000072x² − .0000252x + .000088 = 0.
To three significant digits the root of this quadratic equation in the interval [0, 10] is x = 3.93, and S(3.93) = 1.00004.
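The root-finding step in Exercise 4 can be checked with the quadratic formula (a numerical sketch, not part of the manual's solution):

```python
from math import sqrt

# Cubic piece of the natural spline on 0 <= x <= 10: S(x) = a x^3 + b x^2 + c x + d
a, b, c, d = 0.00000024, -0.0000126, 0.000088, 0.99987

# S'(x) = 3a x^2 + 2b x + c
A2, B2, C2 = 3 * a, 2 * b, c
disc = B2**2 - 4 * A2 * C2
x = (-B2 - sqrt(disc)) / (2 * A2)          # the root lying in [0, 10]
S_max = a * x**3 + b * x**2 + c * x + d    # spline value at the critical point
```

The smaller root of S′(x) = 0 is the one inside [0, 10]; the other root (about 31) falls outside the piece's interval.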

5. The linear system (24) for the cubic runout spline becomes
[ 6  0  0 ] [ M2 ]   [ −.0001116 ]
[ 1  4  1 ] [ M3 ] = [ −.0000816 ].
[ 0  0  6 ] [ M4 ]   [ −.0000636 ]
Solving this system yields M2 = −.0000186, M3 = −.0000131, M4 = −.0000106.
From (22) and (23) we have M1 = 2M2 − M3 = −.0000241, M5 = 2M4 − M3 = −.0000081.
Solving for the ai's, bi's, ci's, and di's from Equations (14) we have
a1 = (M2 − M1)/(6h) = .00000009, a2 = (M3 − M2)/(6h) = .00000009, a3 = (M4 − M3)/(6h) = .00000004, a4 = (M5 − M4)/(6h) = .00000004,
b1 = M1/2 = −.0000121, b2 = M2/2 = −.0000093, b3 = M3/2 = −.0000066, b4 = M4/2 = −.0000053,
c1 = (y2 − y1)/h − (M2 + 2M1)h/6 = .000282,
c2 = (y3 − y2)/h − (M3 + 2M2)h/6 = .000070,
c3 = (y4 − y3)/h − (M4 + 2M3)h/6 = −.000087,
c4 = (y5 − y4)/h − (M5 + 2M4)h/6 = −.000207,
d1 = y1 = .99815, d2 = y2 = .99987, d3 = y3 = .99973, d4 = y4 = .99823.
The resulting cubic runout spline is
S(x) = { .00000009(x + 10)³ − .0000121(x + 10)² + .000282(x + 10) + .99815,  −10 ≤ x ≤ 0
       { .00000009x³ − .0000093x² + .000070x + .99987,                        0 ≤ x ≤ 10
       { .00000004(x − 10)³ − .0000066(x − 10)² − .000087(x − 10) + .99973,  10 ≤ x ≤ 20
       { .00000004(x − 20)³ − .0000053(x − 20)² − .000207(x − 20) + .99823,  20 ≤ x ≤ 30.
Assuming the maximum is attained in the interval [0, 10], we set S′(x) equal to zero in this interval:
S′(x) = .00000027x² − .0000186x + .000070 = 0.
To three significant digits the root of this quadratic equation in the interval [0, 10] is 4.00 and S(4.00) = 1.00001.

6. (a) Set h = .5 and
x1 = 0, y1 = 0
x2 = .5, y2 = 1
x3 = 1, y3 = 0.
For a natural spline with n = 3, M1 = M3 = 0 and 4M2 = 6(y1 − 2y2 + y3)/h² = −48. Thus M2 = −12.
a1 = (M2 − M1)/(6h) = −4, a2 = (M3 − M2)/(6h) = 4,
b1 = M1/2 = 0, b2 = M2/2 = −6,
c1 = (y2 − y1)/h − (M2 + 2M1)h/6 = 3,
c2 = (y3 − y2)/h − (M3 + 2M2)h/6 = 0,
d1 = 0, d2 = 1.
For 0 ≤ x ≤ .5, we have S(x) = S1(x) = −4x³ + 3x.
For .5 ≤ x ≤ 1, we have S(x) = S2(x) = 4(x − .5)³ − 6(x − .5)² + 1 = 4x³ − 12x² + 9x − 1.
The resulting natural spline is
S(x) = { −4x³ + 3x,             0 ≤ x ≤ 0.5
       { 4x³ − 12x² + 9x − 1,   0.5 ≤ x ≤ 1.

(b) Again h = .5 and
x1 = .5, y1 = 1
x2 = 1, y2 = 0
x3 = 1.5, y3 = −1.
Here 4M2 = 6(y1 − 2y2 + y3)/h² = 0, so M1 = M2 = M3 = 0; hence all ai and bi are also 0.
c1 = (y2 − y1)/h = −2, c2 = (y3 − y2)/h = −2, d1 = y1 = 1, d2 = y2 = 0.
For .5 ≤ x ≤ 1, we have S(x) = S1(x) = −2(x − .5) + 1 = −2x + 2.
For 1 ≤ x ≤ 1.5, we have S(x) = S2(x) = −2(x − 1) + 0 = −2x + 2.
The resulting natural spline is
S(x) = { −2x + 2,   0.5 ≤ x ≤ 1
       { −2x + 2,   1 ≤ x ≤ 1.5.

(c) The three data points are collinear, so the spline is just the line the points lie on.

7. (b) Equations (15) together with the three equations in part (a) of the exercise statement give
4M1 + M2 + Mn−1 = 6(yn−1 − 2y1 + y2)/h²
M1 + 4M2 + M3 = 6(y1 − 2y2 + y3)/h²
M2 + 4M3 + M4 = 6(y2 − 2y3 + y4)/h²
⋮
Mn−3 + 4Mn−2 + Mn−1 = 6(yn−3 − 2yn−2 + yn−1)/h²
M1 + Mn−2 + 4Mn−1 = 6(yn−2 − 2yn−1 + y1)/h².
The linear system for M1, M2, ..., Mn−1 in matrix form is
[ 4  1  0  0  ⋯  0  0  0  1 ] [ M1   ]          [ yn−1 − 2y1 + y2     ]
[ 1  4  1  0  ⋯  0  0  0  0 ] [ M2   ]          [ y1 − 2y2 + y3       ]
[ 0  1  4  1  ⋯  0  0  0  0 ] [ M3   ]  = (6/h²) [ y2 − 2y3 + y4       ]
[ ⋮  ⋮  ⋮  ⋮      ⋮  ⋮  ⋮  ⋮ ] [ ⋮    ]          [ ⋮                   ]
[ 0  0  0  0  ⋯  0  1  4  1 ] [ Mn−2 ]          [ yn−3 − 2yn−2 + yn−1 ]
[ 1  0  0  0  ⋯  0  0  1  4 ] [ Mn−1 ]          [ yn−2 − 2yn−1 + y1   ].
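The natural spline computed in Exercise 6(a) can be spot-checked at the knots (a check script, not part of the manual's solution):

```python
def S1(x):
    # Piece of the Exercise 6(a) natural spline on [0, .5]
    return -4 * x**3 + 3 * x

def S2(x):
    # Piece on [.5, 1]
    return 4 * x**3 - 12 * x**2 + 9 * x - 1

# The spline must interpolate (0, 0), (.5, 1), (1, 0), and the two pieces
# must agree at the interior knot x = .5.
values = (S1(0.0), S1(0.5), S2(0.5), S2(1.0))
```

The matching values at x = .5 confirm continuity of the two cubic pieces at the interior knot.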

8. (b) Equations (15) together with the two equations in part (a) give
2M1 + M2 = 6(y2 − y1 − h y1′)/h²
M1 + 4M2 + M3 = 6(y1 − 2y2 + y3)/h²
M2 + 4M3 + M4 = 6(y2 − 2y3 + y4)/h²
⋮
Mn−2 + 4Mn−1 + Mn = 6(yn−2 − 2yn−1 + yn)/h²
Mn−1 + 2Mn = 6(yn−1 − yn + h yn′)/h².
This linear system for M1, M2, ..., Mn in matrix form is
[ 2  1  0  0  ⋯  0  0  0  0 ] [ M1   ]          [ −h y1′ − y1 + y2  ]
[ 1  4  1  0  ⋯  0  0  0  0 ] [ M2   ]          [ y1 − 2y2 + y3     ]
[ 0  1  4  1  ⋯  0  0  0  0 ] [ M3   ]  = (6/h²) [ y2 − 2y3 + y4     ]
[ 0  0  1  4  ⋯  0  0  0  0 ] [ ⋮    ]          [ ⋮                 ]
[ ⋮  ⋮  ⋮  ⋮      ⋮  ⋮  ⋮  ⋮ ] [ Mn−2 ]          [ yn−2 − 2yn−1 + yn ]
[ 0  0  0  0  ⋯  0  1  4  1 ] [ Mn−1 ]          [ yn−1 − yn + h yn′ ].
[ 0  0  0  0  ⋯  0  0  1  2 ] [ Mn   ]

Section 10.5

Exercise Set 10.5

1. (a) x(1) = Px(0) = [.4; .6] and x(2) = Px(1) = [.46; .54]. Continuing in this manner yields
x(3) = [.454; .546], x(4) = [.4546; .5454], and x(5) = [.45454; .54546].

(b) P is regular because all of the entries of P are positive. Its steady-state vector q solves (I − P)q = 0; that is,
[  .6  −.5 ] [ q1 ]   [ 0 ]
[ −.6   .5 ] [ q2 ] = [ 0 ].
This yields one independent equation, .6q1 − .5q2 = 0, or q1 = (5/6)q2. Solutions are thus of the form q = s[5/6; 1]. Set s = 1/((5/6) + 1) = 6/11 to obtain q = [5/11; 6/11].

2. (a) x(1) = Px(0) = [.7; .2; .1]; likewise x(2) = [.23; .52; .25] and x(3) = [.273; .396; .331].

(b) P is regular because all of its entries are positive. To solve (I − P)q = 0, i.e.
[  .8  −.1  −.7 ] [ q1 ]   [ 0 ]
[ −.6   .6  −.2 ] [ q2 ] = [ 0 ],
[ −.2  −.5   .9 ] [ q3 ]   [ 0 ]
reduce the coefficient matrix to row-echelon form:
[  8  −1  −7 ]     [ 2    5   −9 ]     [ 1  0  −22/21 ]
[ −6   6  −2 ]  →  [ 0  −21   29 ]  →  [ 0  1  −29/21 ].
[ −2  −5   9 ]     [ 0    0    0 ]     [ 0  0     0   ]
This yields solutions (setting q3 = s) of the form q = s[22/21; 29/21; 1]. To obtain a probability vector, take s = 1/(22/21 + 29/21 + 1) = 21/72, yielding q = [22/72; 29/72; 21/72].

3. (a) Solve (I − P)q = 0, i.e.,
[  2/3  −3/4 ] [ q1 ]   [ 0 ]
[ −2/3   3/4 ] [ q2 ] = [ 0 ].
The only independent equation is

(2/3)q1 = (3/4)q2, or q1 = (9/8)q2, yielding q = s[9/8; 1]. Setting s = 8/17 yields q = [9/17; 8/17].

(b) As in (a), solve
[  .19  −.26 ] [ q1 ]   [ 0 ]
[ −.19   .26 ] [ q2 ] = [ 0 ],
i.e., .19q1 = .26q2. Solutions have the form q = s[26/19; 1]. Set s = 19/45 to get q = [26/45; 19/45].

(c) Again, solve
[  2/3  −1/2    0  ] [ q1 ]   [ 0 ]
[ −1/3    1   −1/4 ] [ q2 ] = [ 0 ]
[ −1/3  −1/2   1/4 ] [ q3 ]   [ 0 ]
by reducing the coefficient matrix to row-echelon form:
[ 1  0  −1/4 ]
[ 0  1  −1/3 ],
[ 0  0    0  ]
yielding solutions of the form q = s[1/4; 1/3; 1]. Set s = 12/19 to get q = [3/19; 4/19; 12/19].

4. (a) Prove by induction that p12(n) = 0: Already true for n = 1. If true for n − 1, we have Pⁿ = Pⁿ⁻¹P, so
p12(n) = p11(n−1) p12 + p12(n−1) p22.
But p12 = p12(n−1) = 0, so p12(n) = 0 + 0 = 0. Thus, no power of P can have all positive entries, so P is not regular.

(b) If x = [x1; x2], then Px = [(1/2)x1; (1/2)x1 + x2], P²x = [(1/4)x1; (1/4)x1 + (1/2)x1 + x2], etc. We use induction to show
Pⁿx = [(1/2)ⁿ x1; (1 − (1/2)ⁿ)x1 + x2].
Already true for n = 1, 2. If true for n − 1, then
Pⁿx = P(Pⁿ⁻¹x)
    = P[(1/2)ⁿ⁻¹ x1; (1 − (1/2)ⁿ⁻¹)x1 + x2]
    = [(1/2)(1/2)ⁿ⁻¹ x1; (1/2)(1/2)ⁿ⁻¹ x1 + (1 − (1/2)ⁿ⁻¹)x1 + x2]
    = [(1/2)ⁿ x1; (1 − (1/2)ⁿ)x1 + x2].
Hence lim Pⁿx = [0; x1 + x2], which equals [0; 1] if x is a state vector.

(c) The Theorem says that the entries of the steady-state vector should be positive; they are not for [0; 1].

5. Let q = [1/k; 1/k; ...; 1/k]. Then
(Pq)i = Σ(j=1 to k) pij qj = (1/k) Σ(j=1 to k) pij = 1/k,
since the row sums of P are 1. Thus (Pq)i = qi for all i.

6. Since P has zero entries, consider
P² = [ 1/2  1/4  1/4 ]
     [ 1/4  1/2  1/4 ],
     [ 1/4  1/4  1/2 ]
so P is regular. Note that all the rows of P sum to 1. Since P is 3 × 3, Exercise 5 implies q = [1/3; 1/3; 1/3].

7. Let x = [x1; x2] be the state vector, with x1 = probability that John is happy and x2 = probability that John is sad. The transition matrix will be
P = [ 4/5  2/3 ]
    [ 1/5  1/3 ]
since the columns must sum to one. We find the steady-state vector for P by solving
[  1/5  −2/3 ] [ q1 ]   [ 0 ]
[ −1/5   2/3 ] [ q2 ] = [ 0 ],
i.e., (1/5)q1 = (2/3)q2, so q = s[10/3; 1]. Let s = 3/13 and get q = [10/13; 3/13], so 10/13 is the probability that John will be happy on a given day.

8. The state vector x = [x1; x2; x3] will represent the proportion of the population living in regions 1, 2, and 3, respectively. In the transition matrix, pij will represent the proportion of the people in region j who move to region i, yielding
P = [ .90  .15  .10 ]
    [ .05  .75  .05 ].
    [ .05  .10  .85 ]
To find the steady-state vector, we solve
[  .10  −.15  −.10 ] [ q1 ]   [ 0 ]
[ −.05   .25  −.05 ] [ q2 ] = [ 0 ].
[ −.05  −.10   .15 ] [ q3 ]   [ 0 ]
First reduce the coefficient matrix to row-echelon form
[ 1  0  −13/7 ]
[ 0  1   −4/7 ],
[ 0  0     0  ]
yielding q = s[13/7; 4/7; 1]. Set s = 7/24 and get q = [13/24; 4/24; 7/24]; i.e., in the long run 13/24 (or 54 1/6 %) of the people reside in region 1, 4/24 (or 16 2/3 %) in region 2, and 7/24 (or 29 1/6 %) in region 3.

Section 10.6

Exercise Set 10.6

1. Note that the matrix has the same number of rows and columns as the graph has vertices, and that ones in the matrix correspond to arrows in the graph.

(a) [ 0  0  0  1 ]
    [ 1  0  1  1 ]
    [ 1  1  0  1 ]
    [ 0  0  0  0 ]

(b) [ 0  1  1  0  0 ]
    [ 0  0  0  0  1 ]
    [ 1  0  0  1  0 ]
    [ 0  0  1  0  0 ]
    [ 0  0  1  0  0 ]

(c) [ 0  1  0  1  0  0 ]
    [ 1  0  0  0  0  0 ]
    [ 0  1  0  1  1  1 ]
    [ 0  0  0  0  0  1 ]
    [ 0  0  0  0  0  1 ]
    [ 0  0  1  0  1  0 ]

2. See the remark in problem 1; we obtain:
(a) [Figure: directed graph on vertices P1 through P4.]
(b) [Figure: directed graph on vertices P1 through P5.]
(c) [Figure: directed graph on vertices P1 through P6.]
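The long-run distribution found in Exercise 8 can be cross-checked by simply iterating the chain (a numerical sketch, not part of the manual's solution):

```python
# Transition matrix of Exercise 8 (columns sum to 1)
P = [[0.90, 0.15, 0.10],
     [0.05, 0.75, 0.05],
     [0.05, 0.10, 0.85]]

x = [1.0, 0.0, 0.0]          # any starting state vector works for a regular P
for _ in range(500):
    x = [sum(P[i][j] * x[j] for j in range(3)) for i in range(3)]
```

Because P is regular, the iterates converge to the unique steady-state vector regardless of the starting state; after 500 steps x agrees with [13/24, 4/24, 7/24] to machine precision.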

3. (a) [Figure: the directed graph determined by M, on vertices P1 through P4.]

(b) m12 = 1, so there is one 1-step connection from P1 to P2. Computing powers of M,
M² = [ 1  2  1  1 ]
     [ 0  1  1  1 ]
     [ 1  1  1  0 ]
     [ 1  1  0  1 ]
and
M³ = [ 2  3  2  2 ]
     [ 1  2  1  1 ]
     [ 1  2  1  2 ].
     [ 1  2  2  1 ]
So m12(2) = 2 and m12(3) = 3, meaning there are two 2-step and three 3-step connections from P1 to P2 by Theorem 10.6.1. These are:
1-step: P1 → P2
2-step: P1 → P4 → P2 and P1 → P3 → P2
3-step: P1 → P2 → P1 → P2, P1 → P3 → P4 → P2, and P1 → P4 → P3 → P2.

(c) Since m14 = 1, m14(2) = 1, and m14(3) = 2, there are one 1-step, one 2-step, and two 3-step connections from P1 to P4. These are:
1-step: P1 → P4
2-step: P1 → P3 → P4
3-step: P1 → P2 → P1 → P4 and P1 → P4 → P3 → P4.

4. (a) MᵀM = [ 1  0  0  0  0 ]
             [ 0  1  0  0  0 ]
             [ 0  0  1  1  0 ]
             [ 0  0  1  2  1 ]
             [ 0  0  0  1  2 ]

(b) The kth diagonal entry of MᵀM is Σ(i=1 to 5) mik², i.e., the sum of the squares of the entries in column k of M. These entries are 1 if family member i influences member k and 0 otherwise.

(c) The ij entry of MᵀM is the number of family members who influence both member i and member j.

5. (a) Note that to be contained in a clique, a vertex must have "two-way" connections with at least two other vertices. Thus, P4 could not be in a clique, so {P1, P2, P3} is the only possible clique. Inspection shows that this is indeed a clique.

(b) Not only must a clique vertex have two-way connections to at least two other vertices, but the vertices to which it is connected must share a two-way connection. This consideration eliminates P1 and P2, leaving {P3, P4, P5} as the only possible clique. Inspection shows that it is indeed a clique.

(c) The above considerations eliminate P1, P3, and P7 from being in a clique. Inspection shows that each of the sets {P2, P4, P6}, {P4, P6, P8}, {P2, P6, P8}, {P2, P4, P8}, and {P4, P5, P6} satisfies conditions (i) and (ii) in the definition of a clique. But note that P8 can be added to the first set and the conditions are still satisfied. P5 may not be added, so {P2, P4, P6, P8} is a clique, containing all the other possibilities except {P4, P5, P6}, which is also a clique.
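The connection counts in Exercise 3 can be checked by raising the vertex matrix to powers (a sketch, not part of the manual; M below is reconstructed from the connection lists above, one entry per listed arrow, and is not printed in the manual itself):

```python
def matmul(A, B):
    # Product of two square integer matrices given as lists of rows
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Assumed vertex matrix: an arrow Pi -> Pj contributes M[i-1][j-1] = 1
M = [[0, 1, 1, 1],
     [1, 0, 0, 0],
     [0, 1, 0, 1],
     [0, 1, 1, 0]]

M2 = matmul(M, M)    # entry (i, j) counts 2-step connections
M3 = matmul(M2, M)   # entry (i, j) counts 3-step connections
```

By Theorem 10.6.1, the (1, 2) entries of M², M³ give the 2-step and 3-step counts from P1 to P2, and similarly for the (1, 4) entries.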

6. (a) With the given M we get
S = [ 0  1  0  1  0 ]
    [ 1  0  1  0  0 ]
    [ 0  1  0  0  1 ]
    [ 1  0  0  0  1 ]
    [ 0  0  1  1  0 ]
and
S³ = [ 0  3  1  3  1 ]
     [ 3  0  3  1  1 ]
     [ 1  3  0  1  3 ].
     [ 3  1  1  0  3 ]
     [ 1  1  3  3  0 ]
Since sii(3) = 0 for all i, there are no cliques in the graph represented by M.

(b) Here
S = [ 0  1  0  1  0  0 ]
    [ 1  0  1  0  1  0 ]
    [ 0  1  0  1  0  1 ]
    [ 1  0  1  0  1  1 ]
    [ 0  1  0  1  0  0 ]
    [ 0  0  1  1  0  0 ]
and
S³ = [ 0  6  1  7  0  2 ]
     [ 6  0  7  1  6  3 ]
     [ 1  7  2  8  1  4 ].
     [ 7  1  8  2  7  5 ]
     [ 0  6  1  7  0  2 ]
     [ 2  3  4  5  2  2 ]
The elements along the main diagonal tell us that only P3, P4, and P6 are members of a clique. Since a clique contains at least three vertices, we must have {P3, P4, P6} as the only clique.

7. M = [ 0  0  1  1 ]
       [ 1  0  0  0 ]
       [ 0  1  0  1 ]
       [ 0  1  0  0 ].
Then
M² = [ 0  2  0  1 ]
     [ 0  0  1  1 ]
     [ 1  1  0  0 ]
     [ 1  0  0  0 ]
and
M + M² = [ 0  2  1  2 ]
         [ 1  0  1  1 ].
         [ 1  2  0  1 ]
         [ 1  1  0  0 ]
By summing the rows of M + M², we get that the power of P1 is 2 + 1 + 2 = 5, the power of P2 is 3, of P3 is 4, and of P4 is 2.

8. Associating vertex P1 with team A, P2 with B, ..., P5 with E, the game results yield the following dominance-directed graph:
[Figure: dominance-directed graph on vertices P1 through P5.]
This graph has vertex matrix
M = [ 0  1  1  1  0 ]
    [ 0  0  1  0  1 ]
    [ 0  0  0  1  1 ].
    [ 0  1  0  0  0 ]
    [ 1  0  0  1  0 ]
Then
M² = [ 0  1  1  1  2 ]
     [ 1  0  0  2  1 ]
     [ 1  1  0  1  0 ]
     [ 0  0  1  0  1 ]
     [ 0  2  1  1  0 ]
and
M + M² = [ 0  2  2  2  2 ]
         [ 1  0  1  2  2 ]
         [ 1  1  0  2  1 ].
         [ 0  1  1  0  1 ]
         [ 1  2  1  2  0 ]
Summing the rows, we get that the power of A is 8, of B is 6, of C is 5, of D is 3, and of E is 6. Thus ranking in decreasing order we get A in first place, B and E tie for second place, C in fourth place, and D last.

Section 10.7

Exercise Set 10.7

1. (a) From Equation (2), the expected payoff of the game is
pAq = [1/2  0  1/2] [ −4   6  −4   1 ] [ 1/4 ]
                    [  5  −7   3   8 ] [ 1/4 ]  = −5/8.
                    [ −8   0   6  −2 ] [ 1/4 ]
                                       [ 1/4 ]
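The power ranking in Exercise 8 can be confirmed mechanically (a check script, not part of the manual's solution):

```python
def matmul(A, B):
    # Product of two square integer matrices given as lists of rows
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Vertex matrix of the dominance-directed graph in Exercise 8 (teams A..E)
M = [[0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1],
     [0, 0, 0, 1, 1],
     [0, 1, 0, 0, 0],
     [1, 0, 0, 1, 0]]

M2 = matmul(M, M)
# The power of each vertex is the corresponding row sum of M + M^2
powers = [sum(M[i][j] + M2[i][j] for j in range(5)) for i in range(5)]
```

Sorting the teams by these row sums reproduces the ranking A first, B and E tied, then C, then D.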

(b) If player R uses strategy [p1  p2  p3] against player C's strategy [1/4; 1/4; 1/4; 1/4], his payoff will be
pAq = (−1/4)p1 + (9/4)p2 − p3.
Since p1, p2, and p3 are nonnegative and add up to 1, this is a weighted average of the numbers −1/4, 9/4, and −1. Clearly this is the largest if p1 = p3 = 0 and p2 = 1; that is, p = [0  1  0].

(c) As in (b), if player C uses [q1  q2  q3  q4]ᵀ against [1/2  0  1/2], we get
pAq = −6q1 + 3q2 + q3 − (1/2)q4.
Clearly this is minimized over all strategies by setting q1 = 1 and q2 = q3 = q4 = 0. That is, q = [1  0  0  0]ᵀ.

2. As per the hint, we will construct a 3 × 3 matrix with two saddle points, say a11 = a33 = 1. Such a matrix is
A = [ 1  2  1 ]
    [ 0  7  0 ].
    [ 1  2  1 ]
Note that a13 = a31 = 1 are also saddle points.

3. (a) Calling the matrix A, we see a22 is a saddle point, so the optimal strategies are pure, namely: p* = [0  1], q* = [0; 1], and the value of the game is v = a22 = 3.

(b) As in (a), a21 is a saddle point, so optimal strategies are p* = [0  1  0], q* = [1; 0], and the value of the game is v = a21 = 2.

(c) Here, a32 is a saddle point, so optimal strategies are p* = [0  0  1], q* = [0; 1; 0], and v = a32 = 2.

(d) Here, a21 is a saddle point, so p* = [0  1  0  0], q* = [1; 0; 0], and v = a21 = −2.

4. (a) Calling the matrix A, the formulas of Theorem 10.7.2 yield p* = [5/8  3/8], q* = [1/8; 7/8], v = 27/8 (A has no saddle points).

(b) As in (a), p* = [40/60  20/60] = [2/3  1/3], q* = [10/60; 50/60] = [1/6; 5/6], v = 1400/60 = 70/3. (Again, A has no saddle points.)

(c) For this matrix, a11 is a saddle point, so p* = [1  0], q* = [1; 0], and v = a11 = 3.

(d) This matrix has no saddle points, so, as in (a), p* = [−3/−5  −2/−5] = [3/5  2/5], q* = [−3/−5; −2/−5] = [3/5; 2/5], and v = −19/−5 = 19/5.

(e) Again, A has no saddle points, so as in (a), p* = [3/13  10/13], q* = [1/13; 12/13], and v = −29/13.
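The 2 × 2 formulas of Theorem 10.7.2 used in Exercise 4 can be packaged in a small helper (a sketch, not part of the manual's solution; `optimal_2x2` is a name chosen here, and the demo matrix is the card game solved in Exercise 5 below):

```python
from fractions import Fraction as F

def optimal_2x2(a11, a12, a21, a22):
    # Closed-form optimal strategies for a 2x2 matrix game with no saddle point.
    d = a11 - a12 - a21 + a22
    p = (F(a22 - a21, d), F(a11 - a12, d))   # row player's optimal strategy
    q = (F(a22 - a12, d), F(a11 - a21, d))   # column player's optimal strategy
    v = F(a11 * a22 - a12 * a21, d)          # value of the game
    return p, q, v

p, q, v = optimal_2x2(3, -4, -6, 7)
```

The formulas only apply when the matrix has no saddle point; otherwise the pure strategies through the saddle entry are optimal, as in parts (c) and (d) of Exercise 3.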

5. Let
a11 = payoff to R if the black ace and black two are played = 3.
a12 = payoff to R if the black ace and red three are played = −4.
a21 = payoff to R if the red four and black two are played = −6.
a22 = payoff to R if the red four and red three

226

SSM: Elementary Linear Algebra

Section 10.8

⎡ 1 0 − 78 ⎤ ⎡ p ⎤

⎡0⎤

1

79 ⎥

⎢

which reduces to ⎢0 1 − 54 ⎥ ⎢ p2 ⎥ = ⎢ 0 ⎥ .

⎥ ⎢ ⎥

79 ⎢

⎢

⎥

0

0 ⎦ ⎣ p3 ⎦ ⎣ ⎦

⎣0 0

⎡ 78 ⎤

⎢ 79 ⎥

Solutions are of the form p = ⎢ 54 ⎥ . Let

⎢ 79 ⎥

⎣1⎦

⎡ 78 ⎤

s = 79 to obtain p = ⎢54 ⎥ .

⎢ ⎥

⎣79 ⎦

are played = 7.

So, the payoff matrix for the game is

⎡ 3 −4 ⎤

A=⎢

.

⎣ −6 7 ⎥⎦

A has no saddle points, so from Theorem 10.7.2,

⎡ 11 ⎤

7 ⎤ , q* = ⎢ 20 ⎥ ; that is, player R

p* = ⎡ 13

9 ⎥

⎣ 20 20 ⎦

⎢⎣ 20

⎦

should play the black ace 65 percent of the time,

and player C should play the black two

55 percent of the time. The value of the game is

3

− , that is, player C can expect to collect on

20

the average 15 cents per game.

**2. (a) By Corollary 10.8.4, this matrix is
**

productive, since each of its row sums is .9.

Section 10.8

**(b) By Corollary 10.8.5, this matrix is
**

productive, since each of its column sums is

less than one.

Exercise Set 10.8

1. (a) Calling the given matrix E, we need to solve

⎡ 1 − 1 ⎤ ⎡ p ⎤ ⎡0 ⎤

3⎥

1 =

( I − E )p = ⎢ 2

.

1 ⎥ ⎢ p ⎥ ⎣⎢ 0 ⎦⎥

⎢⎣ − 12

⎣

2⎦

3⎦

⎡1⎤

1

1

This yields p1 = p2 , that is, p = s ⎢ 3 ⎥ .

2

3

⎣2⎦

2

⎡ ⎤

Set s = 2 and get p = ⎢ ⎥ .

⎣3⎦

⎡2⎤

⎡1.9 ⎤

(c) Try x = ⎢ 1 ⎥ . Then Cx = ⎢ .9 ⎥ , i.e., x > Cx,

⎢ ⎥

⎢ ⎥

⎣1 ⎦

⎣ .9 ⎦

so this matrix is productive by Theorem

10.8.3.

3. Theorem 10.8.2 says there will be one linearly independent price vector for the matrix E if some positive power of E is positive. Since E is not positive, try E²:
E² = [ .2  .34  .1 ]
     [ .2  .54  .6 ] > 0
     [ .6  .12  .3 ]
(b) As in (a), solve
(I − E)p = [  1/2   0  −1/2 ] [p1]   [0]
           [ −1/3   1  −1/2 ] [p2] = [0]
           [ −1/6  −1   1   ] [p3]   [0]
In row-echelon form, this reduces to
[ 1  0  −1   ] [p1]   [0]
[ 0  1  −5/6 ] [p2] = [0]
[ 0  0   0   ] [p3]   [0]
Solutions of this system have the form p = s[1; 5/6; 1]. Set s = 6 and get p = [6; 5; 6].

(c) As in (a), solve
(I − E)p = [  .65  −.50  −.30 ] [p1]   [0]
           [ −.25   .80  −.30 ] [p2] = [0]
           [ −.40  −.30   .60 ] [p3]   [0]

4. The exchange matrix for this arrangement (using A, B, and C in that order) is
E = [ 1/2  1/3  1/4 ]
    [ 1/3  1/3  1/4 ]
    [ 1/6  1/3  1/2 ]
For equilibrium, we must solve (I − E)p = 0. That is,
[  1/2  −1/3  −1/4 ] [p1]   [0]
[ −1/3   2/3  −1/4 ] [p2] = [0]
[ −1/6  −1/3   1/2 ] [p3]   [0]
Row reduction yields solutions of the form p = s[18/16; 15/16; 1]. Set s = 1600/15 and obtain p = [120; 100; 106.67]; i.e., the price of tomatoes was $120, corn was $100, and lettuce was $106.67.
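The equilibrium prices of Exercise 4 can be checked by extracting a nullspace vector of I − E; a minimal sketch, assuming NumPy:

```python
import numpy as np

# Exchange matrix for A, B, C from Exercise 4.
E = np.array([[1/2, 1/3, 1/4],
              [1/3, 1/3, 1/4],
              [1/6, 1/3, 1/2]])

# Equilibrium prices satisfy (I - E)p = 0; the last right singular
# vector spans the one-dimensional nullspace of the singular matrix.
_, _, Vt = np.linalg.svd(np.eye(3) - E)
p = Vt[-1]
p = p / p[2] * (1600 / 15)  # scale so the third price is 1600/15
print(np.round(p, 2))       # ≈ [120, 100, 106.67]
```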

(b) The increase in value is the second column of (I − C)⁻¹, (1/503)[542; 690; 170]. Thus the value of the coal-mining operation must increase by 542/503.
5. Taking the CE, EE, and ME in that order, we form the consumption matrix C, where cij = the amount (per consulting dollar) of the i-th engineer's services purchased by the j-th engineer. Thus,
C = [ 0   .2  .3 ]
    [ .1  0   .4 ]
    [ .3  .4  0  ]
We want to solve (I − C)x = d, where d is the demand vector, i.e.
[  1   −.2  −.3 ] [x1]   [500]
[ −.1   1   −.4 ] [x2] = [700]
[ −.3  −.4   1  ] [x3]   [600]
In row-echelon form this reduces to
[ 1  −.2  −.3     ] [x1]   [500]
[ 0   1   −.43877 ] [x2] = [765.31]
[ 0   0    1      ] [x3]   [1556.19]
Back-substitution yields the solution x = [1256.48; 1448.12; 1556.19]. The CE received $1256, the EE received $1448, and the ME received $1556.

7. The i-th column sum of E is Σ_{j=1..n} e_ji, and the elements of the i-th column of I − E are the negatives of the elements of E, except for the ii-th, which is 1 − e_ii. So, the i-th column sum of I − E is 1 − Σ_{j=1..n} e_ji = 1 − 1 = 0. Now, (I − E)^T has zero row sums, so the vector x = [1 1 ... 1]^T solves (I − E)^T x = 0. This implies det(I − E)^T = 0. But det(I − E)^T = det(I − E), so (I − E)p = 0 must have nontrivial (i.e., nonzero) solutions.
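The production levels of Exercise 5 can be reproduced with a direct solve of (I − C)x = d; a sketch, assuming NumPy:

```python
import numpy as np

# Consumption matrix (CE, EE, ME) and demand vector from Exercise 5.
C = np.array([[0.0, 0.2, 0.3],
              [0.1, 0.0, 0.4],
              [0.3, 0.4, 0.0]])
d = np.array([500.0, 700.0, 600.0])

x = np.linalg.solve(np.eye(3) - C, d)
print(np.round(x, 2))  # ≈ [1256.48, 1448.13, 1556.20]
```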

8. Let C be a consumption matrix whose column sums are less than one; then the row sums of C^T are less than one. By Corollary 10.8.4, C^T is productive, so (I − C^T)⁻¹ ≥ 0. But
(I − C)⁻¹ = (((I − C)^T)⁻¹)^T = ((I − C^T)⁻¹)^T ≥ 0.
Thus, C is productive.

6. (a) The solution of the system (I − C)x = d is x = (I − C)⁻¹d. The effect of increasing the demand di for the ith industry by one unit is the same as adding to d the vector ei = [0; ...; 0; 1; 0; ...; 0], where the 1 is in the ith row. The new solution is
(I − C)⁻¹(d + ei) = x + (I − C)⁻¹ei,
which has the effect of adding the ith column of (I − C)⁻¹ to the original solution.

Section 10.9

Exercise Set 10.9

1. Using Equation (18), we calculate
Yld2 = 30s/2 = 15s
Yld3 = 50s/(2 + 3/2) = 100s/7.
So all the trees in the second class should be harvested for an optimal yield (since s = 1000) of $15,000.
2. From the solution to Example 1, we see that for the fifth class to be harvested in the optimal case we must have
p5·s/(1/.28 + 1/.31 + 1/.25 + 1/.23) > 14.7s,
yielding p5 > $222.63.

3. Assume p2 = 1; then Yld2 = s/(1/.28) = .28s. Thus, for all the yields to be the same we must have
p3·s/(1/.28 + 1/.31) = .28s
p4·s/(1/.28 + 1/.31 + 1/.25) = .28s
p5·s/(1/.28 + 1/.31 + 1/.25 + 1/.23) = .28s
p6·s/(1/.28 + 1/.31 + 1/.25 + 1/.23 + 1/.37) = .28s
Solving these successively yields p3 = 1.90, p4 = 3.02, p5 = 4.24, and p6 = 5.00. Thus the ratio
p2 : p3 : p4 : p5 : p6 = 1 : 1.90 : 3.02 : 4.24 : 5.00.
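The successive solving in Exercise 3 is mechanical; a sketch, where the growth fractions g = .28, .31, .25, .23, .37 are taken to be those of the example the exercise refers to (an assumption on our part):

```python
# Growth fractions g_2..g_6 (assumed from the referenced example).
g = [0.28, 0.31, 0.25, 0.23, 0.37]

# With p2 = 1, Yld_k = p_k * s / sum(1/g_i for i < k) must equal
# Yld_2 = 0.28 s, so p_k = 0.28 * (partial sum of reciprocals).
recip_sum = 0.0
prices = []
for gk in g:
    recip_sum += 1.0 / gk
    prices.append(round(0.28 * recip_sum, 2))
print(prices)  # [1.0, 1.9, 3.02, 4.24, 5.0]
```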

5. Since y is the harvest vector, N = Σ_{i=1..n} yi is the number of trees removed from the forest. Then Equation (7) and the first of Equations (8) yield N = g1·x1, and from Equation (17) we obtain
N = g1·s/(1 + g1/g2 + ... + g1/g(k−1)) = s/(1/g1 + ... + 1/g(k−1)).

6. Set g1 = ... = g(n−1) = g and p2 = 1. Then from Equation (18), Yld2 = p2·s/(1/g1) = gs. Since we want all of the Yld_k's to be the same, we need to solve Yld_k = pk·s/((k − 1)/g) = gs for pk for 3 ≤ k ≤ n. Thus pk = k − 1. So the ratio
p2 : p3 : p4 : ... : pn = 1 : 2 : 3 : ... : (n − 1).
Section 10.10

Exercise Set 10.10

1. (a) Using the coordinates of the points as the columns of a matrix we obtain
[ 0  1  1  0 ]
[ 0  0  1  1 ]
[ 0  0  0  0 ]
(b) The scaling is accomplished by multiplication of the coordinate matrix on the left by
[ 3/2  0    0 ]
[ 0    1/2  0 ]
[ 0    0    1 ]
resulting in the matrix
[ 0  3/2  3/2  0   ]
[ 0  0    1/2  1/2 ]
[ 0  0    0    0   ]
which represents the vertices (0, 0, 0), (3/2, 0, 0), (3/2, 1/2, 0), and (0, 1/2, 0), as shown below.
(c) Adding the matrix
[ −2  −2  −2  −2 ]
[ −1  −1  −1  −1 ]
[  3   3   3   3 ]
to the original matrix yields
[ −2  −1  −1  −2 ]
[ −1  −1   0   0 ]
[  3   3   3   3 ]
which represents the vertices (−2, −1, 3), (−1, −1, 3), (−1, 0, 3), and (−2, 0, 3), as shown below.
(d) Multiplying by the matrix
[ cos(−30°)  −sin(−30°)  0 ]
[ sin(−30°)   cos(−30°)  0 ]
[ 0           0          1 ]
we obtain
[ 0  cos(−30°)  cos(−30°) − sin(−30°)  −sin(−30°) ]   [ 0   .866  1.366  .500 ]
[ 0  sin(−30°)  sin(−30°) + cos(−30°)   cos(−30°) ] = [ 0  −.500   .366  .866 ]
[ 0  0          0                       0         ]   [ 0   0      0     0    ]
The vertices are then (0, 0, 0), (.866, −.500, 0), (1.366, .366, 0), and (.500, .866, 0), as shown.
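Each of the operations in 1(b)–(d) is a single matrix product or sum; a sketch of part (d), assuming NumPy:

```python
import numpy as np

# Unit-square vertices as the columns of the coordinate matrix of 1(a).
P = np.array([[0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 0]], dtype=float)

t = np.radians(-30)  # rotate about the z-axis by -30 degrees
R = np.array([[np.cos(t), -np.sin(t), 0],
              [np.sin(t),  np.cos(t), 0],
              [0,          0,         1]])
print(np.round(R @ P, 3))  # columns ≈ (0,0,0), (.866,-.5,0), (1.366,.366,0), (.5,.866,0)
```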

2. (a) Simply perform the matrix multiplication
[ 1  1/2  0 ] [xi]   [ xi + (1/2)yi ]
[ 0   1   0 ] [yi] = [ yi           ]
[ 0   0   1 ] [zi]   [ zi           ]
(b) We multiply
[ 1  1/2  0 ] [ 0  1  1  0 ]   [ 0  1  3/2  1/2 ]
[ 0   1   0 ] [ 0  0  1  1 ] = [ 0  0   1    1  ]
[ 0   0   1 ] [ 0  0  0  0 ]   [ 0  0   0    0  ]
yielding the vertices (0, 0, 0), (1, 0, 0), (3/2, 1, 0), and (1/2, 1, 0).
(c) Obtain the vertices via
[  1  0  0 ] [ 0  1  1  0 ]   [ 0   1    1   0 ]
[ .6  1  0 ] [ 0  0  1  1 ] = [ 0  .6  1.6   1 ]
[  0  0  1 ] [ 0  0  0  0 ]   [ 0   0    0   0 ]
yielding (0, 0, 0), (1, .6, 0), (1, 1.6, 0), and (0, 1, 0), as shown.

3. (a) This transformation looks like scaling by the factors 1, −1, 1, respectively, and indeed its matrix is
[ 1   0  0 ]
[ 0  −1  0 ]
[ 0   0  1 ]

4. (a) The formulas for scaling, translation, and rotation yield the matrices
M1 = [ 1/2  0  0   ]
     [ 0    2  0   ]
     [ 0    0  1/3 ]
M2 = [ 1/2  1/2  ...  1/2 ]
     [ 0    0    ...  0   ]
     [ 0    0    ...  0   ]
M3 = [ 1  0        0        ]
     [ 0  cos 20°  −sin 20° ]
     [ 0  sin 20°  cos 20°  ]
M4 = [ cos(−45°)   0  sin(−45°) ]
     [ 0           1  0         ]
     [ −sin(−45°)  0  cos(−45°) ]
M5 = [ cos 90°  −sin 90°  0 ]
     [ sin 90°  cos 90°   0 ]
     [ 0        0         1 ]

(b) Clearly P′ = M5 M4 M3 (M1 P + M2).
(b) For this reflection we want to transform (xi, yi, zi) to (−xi, yi, zi) with the matrix
[ −1  0  0 ]
[  0  1  0 ]
[  0  0  1 ]
Negating the x-coordinates of the 12 points in view 1 yields the 12 points (−1.000, −.800, .000), (−.500, −.800, −.866), etc., as shown.

(c) Here we want to negate the z-coordinates, with the matrix
[ 1  0   0 ]
[ 0  1   0 ]
[ 0  0  −1 ]
This does not change View 1.

5. (a) As in 4(a),
M1 = [ .3  0   0 ]
     [ 0   .5  0 ]
     [ 0   0   1 ]
M2 = [ 1  0        0        ]
     [ 0  cos 45°  −sin 45° ]
     [ 0  sin 45°  cos 45°  ]
M3 = [ 1  1  1  ...  1 ]
     [ 0  0  0  ...  0 ]
     [ 0  0  0  ...  0 ]
M4 = [ cos 35°   0  sin 35° ]
     [ 0         1  0       ]
     [ −sin 35°  0  cos 35° ]
M5 = [ cos(−45°)  −sin(−45°)  0 ]
     [ sin(−45°)  cos(−45°)   0 ]
     [ 0          0           1 ]
M6 = [ 0  0  0  ...  0 ]
     [ 0  0  0  ...  0 ]
     [ 1  1  1  ...  1 ]
M7 = [ 2  0  0 ]
     [ 0  1  0 ]
     [ 0  0  1 ]

(b) As in 4(b), P′ = M7 (M6 + M5 M4 (M3 + M2 M1 P)).
6. Using the hint given, we have
R1 = [ cos β   0  sin β ]
     [ 0       1  0     ]
     [ −sin β  0  cos β ]
R2 = [ cos α  −sin α  0 ]
     [ sin α  cos α   0 ]
     [ 0      0       1 ]
R3 = [ cos θ   0  sin θ ]
     [ 0       1  0     ]
     [ −sin θ  0  cos θ ]
R4 = [ cos(−α)  −sin(−α)  0 ]
     [ sin(−α)  cos(−α)   0 ]
     [ 0        0         1 ]
and
R5 = [ cos(−β)   0  sin(−β) ]
     [ 0         1  0       ]
     [ −sin(−β)  0  cos(−β) ]

Section 10.11

Exercise Set 10.11

1. (a) The discrete mean value property yields the four equations
t1 = (1/4)(t2 + t3)
t2 = (1/4)(t1 + t4 + 1 + 1)
t3 = (1/4)(t1 + t4)
t4 = (1/4)(t2 + t3 + 1 + 1).
Translated into matrix notation, this becomes
[t1]   [ 0    1/4  1/4  0   ] [t1]   [ 0   ]
[t2] = [ 1/4  0    0    1/4 ] [t2] + [ 1/2 ]
[t3]   [ 1/4  0    0    1/4 ] [t3]   [ 0   ]
[t4]   [ 0    1/4  1/4  0   ] [t4]   [ 1/2 ]
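The temperature system above can be checked numerically, both by a direct solve and by the iteration t_{k+1} = M t_k + b used later in this exercise; a sketch, assuming NumPy:

```python
import numpy as np

# Mean-value matrix M and boundary vector b from Exercise 1(a).
M = np.array([[0, 1/4, 1/4, 0],
              [1/4, 0, 0, 1/4],
              [1/4, 0, 0, 1/4],
              [0, 1/4, 1/4, 0]])
b = np.array([0.0, 0.5, 0.0, 0.5])

t = np.linalg.solve(np.eye(4) - M, b)  # equilibrium temperatures

tk = np.zeros(4)                       # the iteration converges to the same t
for _ in range(60):
    tk = M @ tk + b
print(t, tk)  # both ≈ [0.25, 0.75, 0.25, 0.75]
```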

(b) To solve the system in part (a), we solve (I − M)t = b for t:
[  1    −1/4  −1/4   0   ] [t1]   [ 0   ]
[ −1/4   1     0    −1/4 ] [t2] = [ 1/2 ]
[ −1/4   0     1    −1/4 ] [t3]   [ 0   ]
[  0    −1/4  −1/4   1   ] [t4]   [ 1/2 ]
In row-echelon form, this is
[ 1  −1/4  −1/4    0    ] [t1]   [ 0    ]
[ 0   1    −1/15  −4/15 ] [t2] = [ 8/15 ]
[ 0   0     1     −1/2  ] [t3]   [ −1/8 ]
[ 0   0     0      1    ] [t4]   [ 3/4  ]
Back substitution yields the result t = [1/4; 3/4; 1/4; 3/4].

7. (a) We rewrite the formula for vi′ as
vi′ = [ 1·xi + x0·1 ]
      [ 1·yi + y0·1 ]
      [ 1·zi + z0·1 ]
      [ 1·1         ]
So
vi′ = [ 1  0  0  x0 ] [xi]
      [ 0  1  0  y0 ] [yi]
      [ 0  0  1  z0 ] [zi]
      [ 0  0  0  1  ] [1 ]

(b) We want to translate xi by −5, yi by +9, zi by −3, so x0 = −5, y0 = 9, z0 = −3. The matrix is
[ 1  0  0  −5 ]
[ 0  1  0   9 ]
[ 0  0  1  −3 ]
[ 0  0  0   1 ]

8. This can be done most easily by performing the multiplication RR^T and showing that it is I. For example, for the rotation matrix about the x-axis we obtain
RR^T = [ 1  0      0      ] [ 1  0       0     ]   [ 1  0  0 ]
       [ 0  cos θ  −sin θ ] [ 0  cos θ   sin θ ] = [ 0  1  0 ]
       [ 0  sin θ  cos θ  ] [ 0  −sin θ  cos θ ]   [ 0  0  1 ]
(c) t(1) = Mt(0) + b
       = [ 0    1/4  1/4  0   ] [0]   [ 0   ]   [ 0   ]
         [ 1/4  0    0    1/4 ] [0] + [ 1/2 ] = [ 1/2 ]
         [ 1/4  0    0    1/4 ] [0]   [ 0   ]   [ 0   ]
         [ 0    1/4  1/4  0   ] [0]   [ 1/2 ]   [ 1/2 ]
2. The average value of the temperature on the circle is (1/(2πr)) ∫ from −π to π of f(θ)·r dθ, where r is the radius of the circle and f(θ) is the temperature at the point of the circumference where the radius to that point makes the angle θ with the horizontal. Clearly f(θ) = 1 for −π/2 < θ < π/2 and is zero otherwise. Consequently, the value of the integral above (which equals the temperature at the center of the circle) is 1/2.
t(2) = Mt(1) + b = [1/8; 5/8; 1/8; 5/8]
t(3) = Mt(2) + b = [3/16; 11/16; 3/16; 11/16]
t(4) = Mt(3) + b = [7/32; 23/32; 7/32; 23/32]
t(5) = Mt(4) + b = [15/64; 47/64; 15/64; 47/64]
and
t(5) − t = [15/64; 47/64; 15/64; 47/64] − [1/4; 3/4; 1/4; 3/4] = [−1/64; −1/64; −1/64; −1/64].

(d) Using
percentage error = (computed value − actual value)/(actual value) × 100%,
we have that the percentage error for t1 and t3 was (−.0371/.2871) × 100% = −12.9%, and for t2 and t4 was (.0371/.7129) × 100% = 5.2%.

3. As in 1(c), but using M and b as in the problem statement, we obtain
t(1) = Mt(0) + b = [3/4  5/4  1/2  5/4  1/2  5/4  13/16  7/16  3/4]^T
t(2) = Mt(1) + b = [13/16  9/8  9/16  11/8  21/16  1  15/8]^T.

Section 10.12

Exercise Set 10.12

1. (c) The linear system
x31* = (1/20)[28 + x31* − x32*]
x32* = (1/20)[24 + 3x31* − 3x32*]
can be rewritten as
19x31* + x32* = 28
−3x31* + 23x32* = 24,
which has the solution x31* = 31/22, x32* = 27/22.
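The 2×2 system just solved can be verified, and its fixed-point character previewed, with a few lines (this is the same iteration used in Exercise 2):

```python
# Fixed point of x31 <- (28 + x31 - x32)/20, x32 <- (24 + 3*x31 - 3*x32)/20.
x31, x32 = 0.0, 0.0
for _ in range(30):
    x31, x32 = (28 + x31 - x32) / 20, (24 + 3 * x31 - 3 * x32) / 20
print(round(x31, 5), round(x32, 5))  # 1.40909 1.22727, i.e. 31/22 and 27/22
```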

2. (a) Setting x(0) = (x31(0), x32(0)) = (0, 0) and using part (b) of Exercise 1, we have
x31(1) = (1/20)[28] = 1.40000
x32(1) = (1/20)[24] = 1.20000
x31(2) = (1/20)[28 + 1.4 − 1.2] = 1.41000
x32(2) = (1/20)[24 + 3(1.4) − 3(1.2)] = 1.23000
x31(3) = (1/20)[28 + 1.41 − 1.23] = 1.40900
x32(3) = (1/20)[24 + 3(1.41) − 3(1.23)] = 1.22700
x31(4) = (1/20)[28 + 1.409 − 1.227] = 1.40910
x32(4) = (1/20)[24 + 3(1.409) − 3(1.227)] = 1.22730
x31(5) = (1/20)[28 + 1.4091 − 1.2273] = 1.40909
x32(5) = (1/20)[24 + 3(1.4091) − 3(1.2273)] = 1.22727
x31(6) = (1/20)[28 + 1.40909 − 1.22727] = 1.40909
x32(6) = (1/20)[24 + 3(1.40909) − 3(1.22727)] = 1.22727.

(b) x(0) = (1, 1) = (x31(0), x32(0)):
x31(1) = (1/20)[28 + 1 − 1] = 1.4
x32(1) = (1/20)[24 + 3(1) − 3(1)] = 1.2
Since x(1) in this part is the same as x(1) in part (a), we will get x(2) as in part (a), and therefore x(3), ..., x(6) will also be the same as in part (a).

(c) x(0) = (148, −15) = (x31(0), x32(0)):
x31(1) = (1/20)[28 + 148 − (−15)] = 9.55000
x32(1) = (1/20)[24 + 3(148) − 3(−15)] = 25.65000
x31(2) = (1/20)[28 + 9.55 − 25.65] = 0.59500
x32(2) = (1/20)[24 + 3(9.55) − 3(25.65)] = −1.21500
x31(3) = (1/20)[28 + 0.595 + 1.215] = 1.49050
x32(3) = (1/20)[24 + 3(0.595) + 3(1.215)] = 1.47150
x31(4) = (1/20)[28 + 1.4905 − 1.4715] = 1.40095
x32(4) = (1/20)[24 + 3(1.4905) − 3(1.4715)] = 1.20285
x31(5) = (1/20)[28 + 1.40095 − 1.20285] = 1.40991
x32(5) = (1/20)[24 + 3(1.40095) − 3(1.20285)] = 1.22972
x31(6) = (1/20)[28 + 1.40991 − 1.22972] = 1.40901
x32(6) = (1/20)[24 + 3(1.40991) − 3(1.22972)] = 1.22703
4. Referring to the figure below and starting with x0(1) = (0, 0): x0(1) is projected to x1(1) on L1, x1(1) is projected to x2(1) on L2, x2(1) is projected to x3(1) on L3, and so on.
(Figure: the lines L1: x2 = 1, L2: x1 − x2 = 2, and L3: x1 − x2 = 0, with the limit-cycle points A and B.)
As seen from the graph, the points of the limit cycle are x1* = A, x2* = B, x3* = A.
Since x1* is the point of intersection of L1 and L3, it follows on solving the system
x12* = 1
x11* − x12* = 0
that x1* = (1, 1). Since x2* = (x21*, x22*) is on L2, it follows that x21* − x22* = 2. Now x1*x2* is perpendicular to L2, therefore ((x22* − 1)/(x21* − 1))(1) = −1, so we have x22* − 1 = 1 − x21*, or x21* + x22* = 2.
Solving the system
x21* − x22* = 2
x21* + x22* = 2
gives x21* = 2 and x22* = 0. Thus the points on the limit cycle are x1* = (1, 1), x2* = (2, 0), x3* = (1, 1).
7. Let us choose units so that each pixel is one unit wide. Then aij = length of the center line of the i-th beam that lies in the j-th pixel. If the i-th beam crosses the j-th pixel squarely, it follows that aij = 1. From Fig. 10.12.11 in the text, it is then clear that
a17 = a18 = a19 = 1
a24 = a25 = a26 = 1
a31 = a32 = a33 = 1
a73 = a76 = a79 = 1
a82 = a85 = a88 = 1
a91 = a94 = a97 = 1
since beams 1, 2, 3, 7, 8, and 9 cross the pixels squarely. Next, the centerlines of beams 5 and 11 lie along the diagonals of pixels 3, 5, 7 and 1, 5, 9, respectively. Since these diagonals have length √2, we have
a53 = a55 = a57 = √2 = 1.41421
a11,1 = a11,5 = a11,9 = √2 = 1.41421.
In the following diagram, the hypotenuse of triangle A is the portion of the centerline of the 10th beam that lies in the 2nd pixel. The length of this hypotenuse is twice the height of triangle A, which in turn is √2 − 1. Thus,
a10,2 = 2(√2 − 1) = .82843.
By symmetry we also have
a10,2 = a10,6 = a12,4 = a12,8 = a62 = a64 = a46 = a48 = .82843.
Also from the diagram, we see that the hypotenuse of triangle B is the portion of the centerline of the 10th beam that lies in the 3rd pixel. Thus, a10,3 = 2 − √2 = .58579.
(Diagram: triangles A and B cut from pixels 2 and 3 by the centerline of beam 10, with marked lengths √2 − 1, 2(√2 − 1), and 2 − √2.)
By symmetry we have a10,3 = a12,7 = a61 = a49 = .58579.
The remaining aij's are all zero, and so the 12 beam equations (4) are
x7 + x8 + x9 = 13.00
x4 + x5 + x6 = 15.00
x1 + x2 + x3 = 8.00
.82843(x6 + x8) + .58579x9 = 14.79
1.41421(x3 + x5 + x7) = 14.31
.82843(x2 + x4) + .58579x1 = 3.81
x3 + x6 + x9 = 18.00
x2 + x5 + x8 = 12.00
x1 + x4 + x7 = 6.00
.82843(x2 + x6) + .58579x3 = 10.51
1.41421(x1 + x5 + x9) = 16.13
.82843(x4 + x8) + .58579x7 = 7.04
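The 12 equations can be assembled into a 12×9 system Ax = b; since the system is overdetermined, a least-squares solve is one reasonable way to estimate the pixel densities (that choice is ours — the text goes on to iterative methods):

```python
import numpy as np

d = 2 ** 0.5            # 1.41421: diagonal crossing length
e = 2 * (2 ** 0.5 - 1)  # 0.82843
f = 2 - 2 ** 0.5        # 0.58579

A = np.zeros((12, 9))   # rows = beams 1..12, columns = pixels x1..x9
A[0, [6, 7, 8]] = 1
A[1, [3, 4, 5]] = 1
A[2, [0, 1, 2]] = 1
A[3, [5, 7]] = e; A[3, 8] = f
A[4, [2, 4, 6]] = d
A[5, [1, 3]] = e; A[5, 0] = f
A[6, [2, 5, 8]] = 1
A[7, [1, 4, 7]] = 1
A[8, [0, 3, 6]] = 1
A[9, [1, 5]] = e; A[9, 2] = f
A[10, [0, 4, 8]] = d
A[11, [3, 7]] = e; A[11, 6] = f
b = np.array([13, 15, 8, 14.79, 14.31, 3.81, 18, 12, 6, 10.51, 16.13, 7.04])

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x, 2))
```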

8. Let us choose units so that each pixel is one unit wide. Then aij = area of the i-th beam that lies in the j-th pixel. Since the width of each beam is also one unit, it follows that aij = 1 if the i-th beam crosses the j-th pixel squarely. From Fig. 10.12.11 in the text, it is then clear that
a17 = a18 = a19 = 1
a24 = a25 = a26 = 1
a31 = a32 = a33 = 1
a73 = a76 = a79 = 1
a82 = a85 = a88 = 1
a91 = a94 = a97 = 1
since beams 1, 2, 3, 7, 8, and 9 cross the pixels squarely.
For the remaining aij's, first observe from the figure that an isosceles right triangle of height h, as indicated, has area h². From the diagram of the nine pixels, we then have
(Diagram: the nine pixels with regions A–F cut off by the edges of the diagonal beams; an isosceles right triangle of height h has area h².)
Area of triangle B = [(1/2)(√2 − 1)]² = (1/4)(3 − 2√2) = .04289
Area of triangle C = (1/2)² = 1/4 = .25000
Area of triangle F = [(3/2)(√2 − 1)]² = (9/4)(3 − 2√2) = .38604
We also have
Area of polygon A = 1 − 2 × (Area of triangle B) = √2 − 1/2 = .91421
Area of polygon D = 1 − (Area of triangle C) = 1 − 1/4 = 3/4 = .75000
Area of polygon E = 1 − (Area of triangle F) = (1/4)(18√2 − 23) = .61396
Referring back to Fig. 10.12.11, we see that
a11,1 = Area of polygon A = .91421
a10,1 = Area of triangle B = .04289
a11,2 = Area of triangle C = .25000
a10,2 = Area of polygon D = .75000
a10,3 = Area of polygon E = .61396
By symmetry we then have
a11,1 = a11,5 = a11,9 = a53 = a55 = a57 = .91421
a10,1 = a10,5 = a10,9 = a12,1 = a12,5 = a12,9 = a63 = a65 = a67 = a43 = a45 = a47 = .04289
a11,2 = a11,4 = a11,6 = a11,8 = a52 = a54 = a56 = a58 = .25000
a10,2 = a10,6 = a12,4 = a12,8 = a62 = a64 = a46 = a48 = .75000
a10,3 = a12,7 = a61 = a49 = .61396.
The remaining aij's are all zero, and so the 12 beam equations (4) are
x7 + x8 + x9 = 13.00
x4 + x5 + x6 = 15.00
x1 + x2 + x3 = 8.00
.04289(x3 + x5 + x7) + .75(x6 + x8) + .61396x9 = 14.79
.91421(x3 + x5 + x7) + .25(x2 + x4 + x6 + x8) = 14.31
.04289(x3 + x5 + x7) + .75(x2 + x4) + .61396x1 = 3.81
x3 + x6 + x9 = 18.00
x2 + x5 + x8 = 12.00
x1 + x4 + x7 = 6.00
.04289(x1 + x5 + x9) + .75(x2 + x6) + .61396x3 = 10.51
.91421(x1 + x5 + x9) + .25(x2 + x4 + x6 + x8) = 16.13
.04289(x1 + x5 + x9) + .75(x4 + x8) + .61396x7 = 7.04
Section 10.13

Exercise Set 10.13

1. Each of the subsets S1, S2, S3, S4 in the figure is congruent to the entire set scaled by a factor of 12/25. Also, the rotation angles for the four subsets are all 0°. The displacement distances can be determined from the figure to find the four similitudes that map the entire set onto the four subsets S1, S2, S3, S4.
(Figure: the unit square containing the subsets S1, S2 (bottom) and S3, S4 (top), each of side 12/25, separated by gaps of 1/25.)
These are, respectively,
Ti([x; y]) = (12/25) [ 1  0 ] [x] + [ei]     i = 1, 2, 3, 4,
                     [ 0  1 ] [y]   [fi]
where the four values of [ei; fi] are [0; 0], [13/25; 0], [0; 13/25], and [13/25; 13/25].
Because s = 12/25 and k = 4 in the definition of a self-similar set, the Hausdorff dimension of the set is dH(S) = ln(k)/ln(1/s) = ln(4)/ln(25/12) = 1.888.... The set is a fractal because its Hausdorff dimension is not an integer.
2. The rough measurements indicated in the figure give an approximate scale factor of s ≈ (15/16)/2 = .47 to two decimal places. Since k = 4, the Hausdorff dimension of the set is approximately dH(S) = ln(k)/ln(1/s) = ln(4)/ln(1/.47) = 1.8 to two significant digits. Examination of the figure reveals rotation angles of 180°, 180°, 0°, and −90° for the sets S1, S2, S3, and S4, respectively.
(Figure: the set, measuring 2″ across, with the subsets s1, s2, s3, s4, each measuring 15/16″ across.)
**3. By inspection, reading left to right and top to bottom, the triplets are:
**

(0, 0, 0) none are rotated

(1, 0, 0) the upper right iteration is rotated 90°

(2, 0, 0) the upper right iteration is rotated 180°

(3, 0, 0) the upper right iteration is rotated 270°

(0, 0, 1) the lower right iteration is rotated 90°

(0, 0, 2) the lower right iteration is rotated 180°

(1, 2, 0) the upper right iteration is rotated 90° and the lower left is rotated 180°

(2, 1, 3) the upper right iteration is rotated 180°, the lower left is rotated 90°, and the lower right is rotated 270°

(2, 0, 1) the upper right iteration is rotated 180° and the lower right is rotated 90°

(2, 0, 2) the upper right and lower right iterations are both rotated 180°

(2, 2, 0) the upper right and lower left iterations are both rotated 180°

(0, 3, 3) the lower left and lower right iterations are both rotated 270°

4. (a) The figure shows the original self-similar set and a decomposition of the set into seven nonoverlapping congruent subsets, each of which is congruent to the original set scaled by a factor s = 1/3. By inspection, the rotation angles are 0° for all seven subsets. The Hausdorff dimension of the set is dH(S) = ln(k)/ln(1/s) = ln(7)/ln(3) = 1.771.... Because its Hausdorff dimension is not an integer, the set is a fractal.

(b) The figure shows the original self-similar set and a decomposition of the set into three nonoverlapping congruent subsets, each of which is congruent to the original set scaled by a factor s = 1/2. By inspection, the rotation angles are 180° for all three subsets. The Hausdorff dimension of the set is dH(S) = ln(k)/ln(1/s) = ln(3)/ln(2) = 1.584.... Because its Hausdorff dimension is not an integer, the set is a fractal.

(c) The figure shows the original self-similar set and a decomposition of the set into three nonoverlapping congruent subsets, each of which is congruent to the original set scaled by a factor s = 1/2. By inspection, the rotation angles are 180°, 180°, and −90° for S1, S2, and S3, respectively. The Hausdorff dimension of the set is dH(S) = ln(k)/ln(1/s) = ln(3)/ln(2) = 1.584.... Because its Hausdorff dimension is not an integer, the set is a fractal.

(d) The figure shows the original self-similar set and a decomposition of the set into three nonoverlapping congruent subsets, each of which is congruent to the original set scaled by a factor s = 1/2. By inspection, the rotation angles are 180°, 180°, and −90° for S1, S2, and S3, respectively. The Hausdorff dimension of the set is dH(S) = ln(k)/ln(1/s) = ln(3)/ln(2) = 1.584.... Because its Hausdorff dimension is not an integer, the set is a fractal.
5. The matrix of the affine transformation in question is
[  .85  .04 ]
[ −.04  .85 ]
The matrix portion of a similitude is of the form
s [ cos θ  −sin θ ]
  [ sin θ   cos θ ]
Consequently, we must have s cos θ = .85 and s sin θ = −.04. Solving this pair of equations gives s = √((.85)² + (−.04)²) = .8509... and θ = tan⁻¹(−.04/.85) = −2.69...°.
6. Letting [x; y] be the vector to the tip of the fern and using the hint, we have [x; y] = T2([x; y]), or
[x]   [  .85  .04 ] [x]   [ .075 ]
[y] = [ −.04  .85 ] [y] + [ .180 ]
Solving this matrix equation gives
[x]   [ .15  −.04 ]⁻¹ [ .075 ]   [ .766 ]
[y] = [ .04   .15 ]   [ .180 ] = [ .996 ]
rounded to three decimal places.
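The fixed-point computation above is easy to confirm; a sketch, assuming NumPy:

```python
import numpy as np

# T2(v) = A v + t from Exercise 6; the fern's tip is its fixed point.
A = np.array([[0.85, 0.04],
              [-0.04, 0.85]])
t = np.array([0.075, 0.180])

tip = np.linalg.solve(np.eye(2) - A, t)
print(np.round(tip, 3))  # ≈ [0.766, 0.996]
```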

7. As the figure indicates, the unit square can be expressed as the union of 16 nonoverlapping congruent squares, each of side length 1/4. Consequently, the Hausdorff dimension of the unit square as given by Equation (2) of the text is dH(S) = ln(k)/ln(1/s) = ln(16)/ln(4) = 2.
8. The similitude T1 maps the unit square (whose vertices are (0, 0), (1, 0), (1, 1), and (0, 1)) onto the square whose vertices are (0, 0), (3/4, 0), (3/4, 3/4), and (0, 3/4). The similitude T2 maps the unit square onto the square whose vertices are (1/4, 0), (1, 0), (1, 3/4), and (1/4, 3/4). The similitude T3 maps the unit square onto the square whose vertices are (0, 1/4), (3/4, 1/4), (3/4, 1), and (0, 1). Finally, the similitude T4 maps the unit square onto the square whose vertices are (1/4, 1/4), (1, 1/4), (1, 1), and (1/4, 1). Each of these four smaller squares has side length 3/4, so that the common scale factor of the similitudes is s = 3/4. The right-hand side of Equation (2) of the text gives ln(k)/ln(1/s) = ln(4)/ln(4/3) = 4.818.... This is not the correct Hausdorff dimension of the square (which is 2) because the four smaller squares overlap.
9. Because s = 1/2 and k = 8, Equation (2) of the text gives dH(S) = ln(k)/ln(1/s) = ln(8)/ln(2) = 3 for the Hausdorff dimension of a unit cube. Because the Hausdorff dimension of the cube is the same as its topological dimension, the cube is not a fractal.
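All of the dimension computations in this exercise set use the same formula dH = ln(k)/ln(1/s); a one-liner covering several of the sets above:

```python
from math import log

def hausdorff_dim(k, s):
    # k self-similar pieces, each scaled by the factor s
    return log(k) / log(1 / s)

print(round(hausdorff_dim(4, 12/25), 3))  # Exercise 1: 1.889 (1.888...)
print(round(hausdorff_dim(7, 1/3), 3))    # Exercise 4(a): 1.771
print(round(hausdorff_dim(3, 1/2), 3))    # Exercises 4(b)-(d): 1.585
print(round(hausdorff_dim(8, 1/2), 3))    # Exercise 9 (unit cube): 3.0
```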

10. A careful examination of Figure Ex-10 in the text shows that the Menger sponge can be expressed as the union of 20 smaller nonoverlapping congruent Menger sponges, each of side length 1/3. Consequently, k = 20 and s = 1/3, and so the Hausdorff dimension of the Menger sponge is dH(S) = ln(k)/ln(1/s) = ln(20)/ln(3) = 2.726.... Because its Hausdorff dimension is not an integer, the Menger sponge is a fractal.
11. The figure shows the first four iterates as determined by Algorithm 1 and starting with the unit square as the initial set. Because k = 2 and s = 1/3, the Hausdorff dimension of the Cantor set is dH(S) = ln(k)/ln(1/s) = ln(2)/ln(3) = 0.6309.... Notice that the Cantor set is a subset of the unit interval along the x-axis and that its topological dimension must be 0 (since the topological dimension of any set is a nonnegative integer less than or equal to its Hausdorff dimension).
(Figure: the initial set and the first through fourth iterates.)
12. The area of the unit square S0 is, of course, 1. Each of the eight similitudes T1, T2, ..., T8 given in Equation (8) of the text has scale factor s = 1/3, and so each maps the unit square onto a smaller square of area 1/9. Because these eight smaller squares are nonoverlapping, their total area is 8/9, which is then the area of the set S1. By a similar argument, the area of the set S2 is 8/9 times the area of the set S1. Continuing the argument further, we find that the areas of S0, S1, S2, S3, S4, ... form the geometric sequence 1, 8/9, (8/9)², (8/9)³, (8/9)⁴, .... (Notice that this implies that the area of the Sierpinski carpet is 0, since the limit of (8/9)ⁿ as n tends to infinity is 0.)
Section 10.14

Exercise Set 10.14

1. Because 250 = 2·5³ it follows from (i) that Π(250) = 3·250 = 750.
Because 25 = 5² it follows from (ii) that Π(25) = 2·25 = 50.
Because 125 = 5³ it follows from (ii) that Π(125) = 2·125 = 250.
Because 30 = 6·5 it follows from (ii) that Π(30) = 2·30 = 60.
Because 10 = 2·5 it follows from (i) that Π(10) = 3·10 = 30.
Because 50 = 2·5² it follows from (i) that Π(50) = 3·50 = 150.
Because 3750 = 6·5⁴ it follows from (ii) that Π(3750) = 2·3750 = 7500.
Because 6 = 6·5⁰ it follows from (ii) that Π(6) = 2·6 = 12.
Because 5 = 5¹ it follows from (ii) that Π(5) = 2·5 = 10.
2. The point (0, 0) is obviously a 1-cycle. We now choose another of the 36 points of the form (m/6, n/6), say (0, 1/6). Its iterates produce the 12-cycle
(0, 1/6) → (1/6, 2/6) → (3/6, 5/6) → (2/6, 1/6) → (3/6, 4/6) → (1/6, 5/6) → (0, 5/6) → (5/6, 4/6) → (3/6, 1/6) → (4/6, 5/6) → (3/6, 2/6) → (5/6, 1/6) → (0, 1/6).
So far we have accounted for 13 of the 36 points. Taking one of the remaining points, say (0, 2/6), we arrive at the 4-cycle
(0, 2/6) → (2/6, 4/6) → (0, 4/6) → (4/6, 2/6) → (0, 2/6).
We continue in this way, each time starting with some point of the form (m/6, n/6) that has not yet appeared in a cycle, until we exhaust all such points. This yields a 3-cycle:
(0, 3/6) → (3/6, 0) → (3/6, 3/6) → (0, 3/6);
another 4-cycle:
(2/6, 0) → (2/6, 2/6) → (4/6, 0) → (4/6, 4/6) → (2/6, 0);
and another 12-cycle:
(1/6, 0) → (1/6, 1/6) → (2/6, 3/6) → (5/6, 2/6) → (1/6, 3/6) → (4/6, 1/6) → (5/6, 0) → (5/6, 5/6) → (4/6, 3/6) → (1/6, 4/6) → (5/6, 3/6) → (2/6, 5/6) → (1/6, 0).
The possible periods of points of the form (m/6, n/6) are thus 1, 3, 4, and 12. The least common multiple of these four numbers is 12, and so Π(6) = 12.
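The cycle hunt can be automated by iterating the map on all 36 grid points; a sketch, where we take the section's map to be (x, y) → (x + y, x + 2y) mod 1 (the map these cycles are consistent with), which reduces to arithmetic mod 6 on the (m/6, n/6) grid:

```python
from math import lcm

def step(m, n, p=6):
    # One application of (x, y) -> (x + y, x + 2y) mod 1 on the m/p grid.
    return (m + n) % p, (m + 2 * n) % p

def period(m, n, p=6):
    x, steps = step(m, n, p), 1
    while x != (m, n):
        x, steps = step(*x, p), steps + 1
    return steps

periods = {period(m, n) for m in range(6) for n in range(6)}
print(sorted(periods), lcm(*periods))  # [1, 3, 4, 12] 12
```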

3. (a) We are given that x0 = 3 and x1 = 7. With p = 15 we have
x2 = x1 + x0 mod 15 = 7 + 3 mod 15 = 10 mod 15 = 10,
x3 = x2 + x1 mod 15 = 10 + 7 mod 15 = 17 mod 15 = 2,
x4 = x3 + x2 mod 15 = 2 + 10 mod 15 = 12 mod 15 = 12,
x5 = x4 + x3 mod 15 = 12 + 2 mod 15 = 14 mod 15 = 14,
x6 = x5 + x4 mod 15 = 14 + 12 mod 15 = 26 mod 15 = 11,
x7 = x6 + x5 mod 15 = 11 + 14 mod 15 = 25 mod 15 = 10,
x8 = x7 + x6 mod 15 = 10 + 11 mod 15 = 21 mod 15 = 6,
x9 = x8 + x7 mod 15 = 6 + 10 mod 15 = 16 mod 15 = 1,
x10 = x9 + x8 mod 15 = 1 + 6 mod 15 = 7 mod 15 = 7,
x11 = x10 + x9 mod 15 = 7 + 1 mod 15 = 8 mod 15 = 8,
x12 = x11 + x10 mod 15 = 8 + 7 mod 15 = 15 mod 15 = 0,
x13 = x12 + x11 mod 15 = 0 + 8 mod 15 = 8 mod 15 = 8,
x14 = x13 + x12 mod 15 = 8 + 0 mod 15 = 8 mod 15 = 8,
x15 = x14 + x13 mod 15 = 8 + 8 mod 15 = 16 mod 15 = 1,
x16 = x15 + x14 mod 15 = 1 + 8 mod 15 = 9 mod 15 = 9,
x17 = x16 + x15 mod 15 = 9 + 1 mod 15 = 10 mod 15 = 10,
x18 = x17 + x16 mod 15 = 10 + 9 mod 15 = 19 mod 15 = 4,
x19 = x18 + x17 mod 15 = 4 + 10 mod 15 = 14 mod 15 = 14,
x20 = x19 + x18 mod 15 = 14 + 4 mod 15 = 18 mod 15 = 3,
x21 = x20 + x19 mod 15 = 3 + 14 mod 15 = 17 mod 15 = 2,
x22 = x21 + x20 mod 15 = 2 + 3 mod 15 = 5 mod 15 = 5,
x23 = x22 + x21 mod 15 = 5 + 2 mod 15 = 7 mod 15 = 7,
x24 = x23 + x22 mod 15 = 7 + 5 mod 15 = 12 mod 15 = 12,
x25 = x24 + x23 mod 15 = 12 + 7 mod 15 = 19 mod 15 = 4,
x26 = x25 + x24 mod 15 = 4 + 12 mod 15 = 16 mod 15 = 1,
x27 = x26 + x25 mod 15 = 1 + 4 mod 15 = 5 mod 15 = 5,
x28 = x27 + x26 mod 15 = 5 + 1 mod 15 = 6 mod 15 = 6,
x29 = x28 + x27 mod 15 = 6 + 5 mod 15 = 11 mod 15 = 11,
x30 = x29 + x28 mod 15 = 11 + 6 mod 15 = 17 mod 15 = 2,
x31 = x30 + x29 mod 15 = 2 + 11 mod 15 = 13 mod 15 = 13,
x32 = x31 + x30 mod 15 = 13 + 2 mod 15 = 15 mod 15 = 0,
x33 = x32 + x31 mod 15 = 0 + 13 mod 15 = 13 mod 15 = 13,
x34 = x33 + x32 mod 15 = 13 + 0 mod 15 = 13 mod 15 = 13,
x35 = x34 + x33 mod 15 = 13 + 13 mod 15 = 26 mod 15 = 11,
x36 = x35 + x34 mod 15 = 11 + 13 mod 15 = 24 mod 15 = 9,
x37 = x36 + x35 mod 15 = 9 + 11 mod 15 = 20 mod 15 = 5,
x38 = x37 + x36 mod 15 = 5 + 9 mod 15 = 14 mod 15 = 14,
x39 = x38 + x37 mod 15 = 14 + 5 mod 15 = 19 mod 15 = 4,
x40 = x39 + x38 mod 15 = 4 + 14 mod 15 = 18 mod 15 = 3,
x41 = x40 + x39 mod 15 = 3 + 4 mod 15 = 7 mod 15 = 7,
and finally: x40 = x0 and x41 = x1. Thus this sequence is periodic with period 40.
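As a quick machine check on the hand computation above, this short sketch (added for illustration) counts the number of steps until the state pair (x_n, x_{n+1}) first returns to (x0, x1).

```python
def fib_mod_period(x0, x1, p):
    # Iterate x_{n+1} = x_n + x_{n-1} (mod p) until the pair repeats.
    a, b, n = x0, x1, 0
    while True:
        a, b = b, (a + b) % p
        n += 1
        if (a, b) == (x0, x1):
            return n
```

With x0 = 3, x1 = 7 and p = 15 this returns 40, agreeing with the period found above.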


Chapter 10: Elementary Linear Algebra


(b) Step (ii) of the algorithm is
x_{n+1} = x_n + x_{n−1} mod p.
Replacing n in this formula by n + 1 gives
x_{n+2} = x_{n+1} + x_n mod p = (x_n + x_{n−1}) + x_n mod p = 2x_n + x_{n−1} mod p.
These equations can be written as
x_{n+1} = x_{n−1} + x_n mod p
x_{n+2} = x_{n−1} + 2x_n mod p
which in matrix form are
[x_{n+1}; x_{n+2}] = [1 1; 1 2][x_{n−1}; x_n] mod p.

(c) Beginning with [x0; x1] = [5; 5], we obtain
[x2; x3] = [1 1; 1 2][5; 5] mod 21 = [10; 15] mod 21 = [10; 15]
[x4; x5] = [1 1; 1 2][10; 15] mod 21 = [25; 40] mod 21 = [4; 19]
[x6; x7] = [1 1; 1 2][4; 19] mod 21 = [23; 42] mod 21 = [2; 0]
[x8; x9] = [1 1; 1 2][2; 0] mod 21 = [2; 2] mod 21 = [2; 2]
[x10; x11] = [1 1; 1 2][2; 2] mod 21 = [4; 6] mod 21 = [4; 6]
[x12; x13] = [1 1; 1 2][4; 6] mod 21 = [10; 16] mod 21 = [10; 16]
[x14; x15] = [1 1; 1 2][10; 16] mod 21 = [26; 42] mod 21 = [5; 0]
[x16; x17] = [1 1; 1 2][5; 0] mod 21 = [5; 5] mod 21 = [5; 5]
and we see that [x16; x17] = [x0; x1].

4. (c) We have
C([1/101; 0]) = [1 1; 1 2][1/101; 0] mod 1 = [1/101; 1/101],
C^2([1/101; 0]) = [1 1; 1 2][1/101; 1/101] mod 1 = [2/101; 3/101],
C^3([1/101; 0]) = [1 1; 1 2][2/101; 3/101] mod 1 = [5/101; 8/101],
C^4([1/101; 0]) = [1 1; 1 2][5/101; 8/101] mod 1 = [13/101; 21/101],
C^5([1/101; 0]) = [1 1; 1 2][13/101; 21/101] mod 1 = [34/101; 55/101].
Because all five iterates are different, the period of the periodic point (1/101, 0) must be greater than 5.
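The arithmetic in part (c) can be replayed mechanically. The sketch below (illustrative, not from the text) applies the matrix [1 1; 1 2] modulo 21 to successive pairs and reproduces the first return at (x16, x17).

```python
def step(v, p=21):
    # One application of [[1, 1], [1, 2]] modulo p.
    x, y = v
    return ((x + y) % p, (x + 2 * y) % p)

def pairs(v0, steps, p=21):
    # List the starting pair followed by `steps` successive images.
    out, v = [v0], v0
    for _ in range(steps):
        v = step(v, p)
        out.append(v)
    return out
```

Starting from (5, 5), eight steps return to (5, 5), matching the hand computation.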

5. If 0 ≤ x < 1 and 0 ≤ y < 1, then T(x, y) = (x + 5/12, y) mod 1, and so
T^2(x, y) = (x + 10/12, y) mod 1,
T^3(x, y) = (x + 15/12, y) mod 1, ...,
T^12(x, y) = (x + 60/12, y) mod 1 = (x + 5, y) mod 1 = (x, y).
Thus every point in S returns to its original position after 12 iterations, and so every point in S is a periodic point with period at most 12. Because every point is a periodic point, no point can have a dense set of iterates, and so the mapping cannot be chaotic.
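The claim that T^12 is the identity can be confirmed with exact rational arithmetic; the following sketch (an illustration, with an arbitrarily chosen sample point) checks it.

```python
from fractions import Fraction

def T(p):
    # T(x, y) = (x + 5/12, y) mod 1, computed exactly.
    x, y = p
    return ((x + Fraction(5, 12)) % 1, y % 1)

def iterate(p, n):
    for _ in range(n):
        p = T(p)
    return p
```

Twelve applications return the sample point to itself, while a single application moves it.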

6. (a) The matrix of Arnold's cat map, [1 1; 1 2], is one in which (i) the entries are all integers, (ii) the determinant is 1, and (iii) the eigenvalues, (3 + √5)/2 = 2.6180... and (3 − √5)/2 = 0.3819..., do not have magnitude 1. The three conditions of an Anosov automorphism are thus satisfied.

(b) The eigenvalues of the matrix [0 1; 1 0] are ±1, both of which have magnitude 1. By part (iii) of the definition of an Anosov automorphism, this matrix is not the matrix of an Anosov automorphism.
The entries of the matrix [3 2; 1 1] are integers; its determinant is 1; and neither of its eigenvalues, 2 ± √3, has magnitude 1. Consequently, this is the matrix of an Anosov automorphism.
The eigenvalues of the matrix [1 0; 0 1] are both equal to 1, and so both have magnitude 1. By part (iii) of the definition, this is not the matrix of an Anosov automorphism.
The entries of the matrix [5 7; 2 3] are integers; its determinant is 1; and neither of its eigenvalues, 4 ± √15, has magnitude 1. Consequently, this is the matrix of an Anosov automorphism.
The determinant of the matrix [6 2; 5 2] is 2, and so by part (ii) of the definition, this is not the matrix of an Anosov automorphism.

(c) The eigenvalues of the matrix [0 1; −1 0] are ±i, both of which have magnitude 1. By part (iii) of the definition, this cannot be the matrix of an Anosov automorphism.
Starting with an arbitrary point [x; y] in the interior of S (that is, with 0 < x < 1 and 0 < y < 1), we obtain
[0 1; −1 0][x; y] mod 1 = [y; −x] mod 1 = [y; 1 − x],
[0 1; −1 0][y; 1 − x] mod 1 = [1 − x; −y] mod 1 = [1 − x; 1 − y],
[0 1; −1 0][1 − x; 1 − y] mod 1 = [1 − y; −1 + x] mod 1 = [1 − y; x],
[0 1; −1 0][1 − y; x] mod 1 = [x; −1 + y] mod 1 = [x; y].
Thus every point in the interior of S is a periodic point with period at most 4. The geometric effect of this transformation, as seen by the iterates, is to rotate each point in the interior of S clockwise by 90° about the center point (1/2, 1/2) of S. Consequently, each point in the interior of S has period 4, with the exception of the center point (1/2, 1/2), which is a fixed point.
For points not in the interior of S, we first observe that the origin (0, 0) is a fixed point, which can easily be verified. Starting with a point of the form [x; 0] with 0 < x < 1, we obtain the 4-cycle
[x; 0] → [0; 1 − x] → [1 − x; 0] → [0; x] → [x; 0]
if x ≠ 1/2; otherwise we obtain the 2-cycle [1/2; 0] → [0; 1/2] → [1/2; 0]. Similarly, starting with a point of the form [0; y] with 0 < y < 1, we obtain the 4-cycle
[0; y] → [y; 0] → [0; 1 − y] → [1 − y; 0] → [0; y]
if y ≠ 1/2; otherwise we obtain the 2-cycle [0; 1/2] → [1/2; 0] → [0; 1/2]. Thus every point not in the interior of S is a periodic point with period 1, 2, or 4. Finally, because no point in S can have a dense set of iterates, it follows that the mapping cannot be chaotic.
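The three-part test applied above can be packaged as a helper. In the sketch below (not from the text), the eigenvalue condition for a 2 × 2 integer matrix with determinant ±1 is reduced to its trace t: with determinant 1 the characteristic polynomial is λ² − tλ + 1, which has a root of magnitude 1 exactly when |t| ≤ 2; with determinant −1 it is λ² − tλ − 1, which has a root of magnitude 1 exactly when t = 0. Treating determinant −1 as admissible is our assumption; the worked examples above either have determinant 1 or fail outright.

```python
def is_anosov(m):
    (a, b), (c, d) = m
    if not all(isinstance(v, int) for v in (a, b, c, d)):
        return False                      # (i) integer entries
    det = a * d - b * c
    if abs(det) != 1:
        return False                      # (ii) determinant +-1 (our reading)
    t = a + d
    if det == 1:
        return abs(t) > 2                 # (iii) no eigenvalue of magnitude 1
    return t != 0
```

The helper reproduces every verdict reached above.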

9. As per the hint, we wish to find the regions in S that map onto the four indicated regions in the figure below.
[Figure: the unit square S showing the four image regions — I′ with vertices (0, 0), (1/2, 1), (1, 1); II′ with vertices (1/2, 0), (1, 0), (1, 1); III′ with vertices (0, 0), (0, 1), (1/2, 1); IV′ with vertices (0, 0), (1/2, 0), (1, 1).]
We first consider region I′ with vertices (0, 0), (1/2, 1), and (1, 1). We seek points (x1, y1), (x2, y2), and (x3, y3), with entries that lie in [0, 1], that map onto these three points under the mapping [x; y] → [1 1; 1 2][x; y] + [a; b] for certain integer values of a and b to be determined. This leads to the three equations
[1 1; 1 2][x1; y1] + [a; b] = [0; 0],
[1 1; 1 2][x2; y2] + [a; b] = [1/2; 1],
[1 1; 1 2][x3; y3] + [a; b] = [1; 1].
The inverse of the matrix [1 1; 1 2] is [2 −1; −1 1]. We multiply the above three matrix equations by this inverse and set [c; d] = [2 −1; −1 1][a; b]. Notice that c and d must be integers. This leads to
[x1; y1] = [2 −1; −1 1][0; 0] − [c; d] = −[c; d],
[x2; y2] = [2 −1; −1 1][1/2; 1] − [c; d] = [0; 1/2] − [c; d],
[x3; y3] = [2 −1; −1 1][1; 1] − [c; d] = [1; 0] − [c; d].
The only integer values of c and d that will give values of xi and yi in the interval [0, 1] are c = d = 0. This then gives a = b = 0, and the mapping [x; y] → [1 1; 1 2][x; y] maps the three points (0, 0), (0, 1/2), and (1, 0) to the three points (0, 0), (1/2, 1), and (1, 1), respectively. The three points (0, 0), (0, 1/2), and (1, 0) define the triangular region labeled I in the diagram below, which then maps onto the region I′.
[Figure: S partitioned into four triangles — I with vertices (0, 0), (0, 1/2), (1, 0); II with vertices (0, 1/2), (0, 1), (1, 0); III with vertices (1, 0), (1, 1/2), (0, 1); IV with vertices (0, 1), (1, 1/2), (1, 1).]
For region II′, the calculations are as follows:
[1 1; 1 2][x1; y1] + [a; b] = [1/2; 0],
[1 1; 1 2][x2; y2] + [a; b] = [1; 1],
[1 1; 1 2][x3; y3] + [a; b] = [1; 0];
[x1; y1] = [2 −1; −1 1][1/2; 0] − [c; d] = [1; −1/2] − [c; d],
[x2; y2] = [2 −1; −1 1][1; 1] − [c; d] = [1; 0] − [c; d],
[x3; y3] = [2 −1; −1 1][1; 0] − [c; d] = [2; −1] − [c; d].
Only c = 1 and d = −1 will work. This leads to a = 0, b = −1, and the mapping [x; y] → [1 1; 1 2][x; y] + [0; −1] maps region II with vertices (0, 1/2), (0, 1), and (1, 0) onto region II′.
For region III′, the calculations are as follows:
[1 1; 1 2][x1; y1] + [a; b] = [0; 0],
[1 1; 1 2][x2; y2] + [a; b] = [1/2; 1],
[1 1; 1 2][x3; y3] + [a; b] = [0; 1];
[x1; y1] = −[c; d],
[x2; y2] = [0; 1/2] − [c; d],
[x3; y3] = [−1; 1] − [c; d].
Only c = −1 and d = 0 will work. This leads to a = −1, b = −1, and the mapping [x; y] → [1 1; 1 2][x; y] + [−1; −1] maps region III with vertices (1, 0), (1, 1/2), and (0, 1) onto region III′.
For region IV′, the calculations are as follows:
[1 1; 1 2][x1; y1] + [a; b] = [0; 0],
[1 1; 1 2][x2; y2] + [a; b] = [1; 1],
[1 1; 1 2][x3; y3] + [a; b] = [1/2; 0];
[x1; y1] = −[c; d],
[x2; y2] = [1; 0] − [c; d],
[x3; y3] = [1; −1/2] − [c; d].
Only c = 0 and d = −1 will work. This leads to a = −1, b = −2, and the mapping [x; y] → [1 1; 1 2][x; y] + [−1; −2] maps region IV with vertices (0, 1), (1, 1), and (1, 1/2) onto region IV′.

12. As per the hint, we want to find all solutions of the matrix equation [x0; y0] = [2 3; 3 5][x0; y0] − [r; s] where 0 ≤ x0 < 1, 0 ≤ y0 < 1, and r and s are nonnegative integers. This equation can be rewritten as [1 3; 3 4][x0; y0] = [r; s], which has the solution x0 = (−4r + 3s)/5 and y0 = (3r − s)/5. First trying r = 0 and s = 0, 1, 2, ..., then r = 1 and s = 0, 1, 2, ..., etc., we find that the only values of r and s that yield values of x0 and y0 lying in [0, 1) are:
r = 1 and s = 2, which give x0 = 2/5 and y0 = 1/5;
r = 2 and s = 3, which give x0 = 1/5 and y0 = 3/5;
r = 2 and s = 4, which give x0 = 4/5 and y0 = 2/5;
r = 3 and s = 5, which give x0 = 3/5 and y0 = 4/5.
We can then check that (2/5, 1/5) and (3/5, 4/5) form one 2-cycle and (1/5, 3/5) and (4/5, 2/5) form another 2-cycle.
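The two 2-cycles found in Exercise 12 are easy to confirm with exact arithmetic; this sketch (added for illustration) applies the cat map Γ(x, y) = (x + y, x + 2y) mod 1 to each listed point.

```python
from fractions import Fraction as F

def cat(p):
    # One application of the cat map, computed exactly on rationals.
    x, y = p
    return ((x + y) % 1, (x + 2 * y) % 1)

cycle_a = [(F(2, 5), F(1, 5)), (F(3, 5), F(4, 5))]
cycle_b = [(F(1, 5), F(3, 5)), (F(4, 5), F(2, 5))]
```

Each pair of points swaps under one application of the map, so both are genuine 2-cycles.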


14. Begin with a 101 × 101 array of white pixels and

add the letter ‘A’ in black pixels to it. Apply the

mapping T to this image, which will scatter the

black pixels throughout the image. Then

superimpose the letter ‘B’ in black pixels onto

this image. Apply the mapping T again and then

superimpose the letter ‘C’ in black pixels onto

the resulting image. Repeat this procedure with

the letters ‘D’ and ‘E’. The next application of the

mapping will return you to the letter ‘A’ with the

pixels for the letters ‘B’ through ‘E’ scattered in

the background.
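The scrambling scheme in Exercise 14 works because the cat map is periodic on a 101 × 101 pixel grid. The sketch below (an addition for illustration; the numeric period is not stated in the text) computes that period as the order of C = [1 1; 1 2] modulo p.

```python
def mat_mul(a, b, p):
    # 2x2 matrix product modulo p.
    return [[sum(a[i][k] * b[k][j] for k in range(2)) % p
             for j in range(2)] for i in range(2)]

def cat_map_period(p):
    # Smallest k with C^k congruent to I modulo p; every pixel returns after k steps.
    c = [[1, 1], [1, 2]]
    m, k = c, 1
    while m != [[1, 0], [0, 1]]:
        m = mat_mul(m, c, p)
        k += 1
    return k
```

For p = 6 this returns 12, the value Π(6) found earlier, and for p = 101 it returns a period greater than 5, consistent with Exercise 4(c).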

Section 10.15

Exercise Set 10.15

1. First group the plaintext into pairs, add the dummy letter T, and get the numerical equivalents from Table 1.
DA RK NI GH TT
4 1 18 11 14 9 7 8 20 20

(a) For the enciphering matrix A = [1 3; 2 1], reducing everything mod 26, we have
[1 3; 2 1][4; 1] = [7; 9] → GI
[1 3; 2 1][18; 11] = [51; 47] = [25; 21] → YU
[1 3; 2 1][14; 9] = [41; 37] = [15; 11] → OK
[1 3; 2 1][7; 8] = [31; 22] = [5; 22] → EV
[1 3; 2 1][20; 20] = [80; 60] = [2; 8] → BH
The Hill cipher is GIYUOKEVBH.

(b) For the enciphering matrix A = [4 3; 1 2], reducing everything mod 26, we have
[4 3; 1 2][4; 1] = [19; 6] → SF
[4 3; 1 2][18; 11] = [105; 40] = [1; 14] → AN
[4 3; 1 2][14; 9] = [83; 32] = [5; 6] → EF
[4 3; 1 2][7; 8] = [52; 23] = [0; 23] → ZW
[4 3; 1 2][20; 20] = [140; 60] = [10; 8] → JH
The Hill cipher is SFANEFZWJH.

2. (a) For A = [9 1; 7 2], det(A) = 18 − 7 = 11, which is not divisible by 2 or 13. Therefore by Corollary 10.15.3, A is invertible. From Equation (2):
A^(-1) = 11^(-1)[2 −1; −7 9] = 19[2 −1; −7 9] = [38 −19; −133 171] = [12 7; 23 15] (mod 26).
Checking:
AA^(-1) = [9 1; 7 2][12 7; 23 15] = [131 78; 130 79] = [1 0; 0 1] (mod 26)
A^(-1)A = [12 7; 23 15][9 1; 7 2] = [157 26; 312 53] = [1 0; 0 1] (mod 26).

(b) For A = [3 1; 5 3], det(A) = 9 − 5 = 4, which is divisible by 2. Therefore by Corollary 10.15.3, A is not invertible.

(c) For A = [8 11; 1 9], det(A) = 72 − 11 = 61 = 9 (mod 26), which is not divisible by 2 or 13. Therefore by Corollary 10.15.3, A is invertible. From (2):
A^(-1) = 9^(-1)[9 −11; −1 8] = 3[9 −11; −1 8] = [27 −33; −3 24] = [1 19; 23 24] (mod 26).
Checking:
AA^(-1) = [8 11; 1 9][1 19; 23 24] = [261 416; 208 235] = [1 0; 0 1] (mod 26)
A^(-1)A = [1 19; 23 24][8 11; 1 9] = [27 182; 208 469] = [1 0; 0 1] (mod 26).

(d) For A = [2 1; 1 7], det(A) = 14 − 1 = 13, which is divisible by 13. Therefore by Corollary 10.15.4, A is not invertible.

(e) For A = [3 1; 6 2], det(A) = 6 − 6 = 0, so that A is not invertible by Corollary 10.15.4.

(f) For A = [1 8; 1 3], det(A) = 3 − 8 = −5 = 21 (mod 26), which is not divisible by 2 or 13. Therefore by Corollary 10.15.4, A is invertible. From (2):
A^(-1) = 21^(-1)[3 −8; −1 1] = 5[3 −8; −1 1] = [15 −40; −5 5] = [15 12; 21 5] (mod 26).
Checking:
AA^(-1) = [1 8; 1 3][15 12; 21 5] = [183 52; 78 27] = [1 0; 0 1] (mod 26)
A^(-1)A = [15 12; 21 5][1 8; 1 3] = [27 156; 26 183] = [1 0; 0 1] (mod 26).

3. From Table 1 the numerical equivalent of this ciphertext is
19 1 11 14 15 24 1 15 10 24
Now we have to find the inverse of A = [4 1; 3 2]. Since det(A) = 8 − 3 = 5, we have by (2):
A^(-1) = 5^(-1)[2 −1; −3 4] = 21[2 −1; −3 4] = [42 −21; −63 84] = [16 5; 15 6] (mod 26).
To obtain the plaintext, multiply each ciphertext vector by A^(-1) and reduce the results modulo 26.
[16 5; 15 6][19; 1] = [309; 291] = [23; 5] → WE
[16 5; 15 6][11; 14] = [246; 249] = [12; 15] → LO
[16 5; 15 6][15; 24] = [360; 369] = [22; 5] → VE
[16 5; 15 6][1; 15] = [91; 105] = [13; 1] → MA
[16 5; 15 6][10; 24] = [280; 294] = [20; 8] → TH
The plaintext is thus WE LOVE MATH.
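The encipher/decipher arithmetic of Exercises 1–3 can be scripted directly. The sketch below (illustrative; the helper names are our own) uses the Table 1 coding A = 1, ..., Y = 25, Z = 0 and reduces everything modulo 26.

```python
def to_nums(text):
    # A=1, ..., Y=25, Z=0 (Table 1 coding).
    return [(ord(ch) - 64) % 26 for ch in text]

def to_text(nums):
    return "".join("Z" if n == 0 else chr(64 + n) for n in nums)

def hill2(matrix, text):
    # Apply a 2x2 enciphering (or deciphering) matrix to successive letter pairs.
    nums, out = to_nums(text), []
    for i in range(0, len(nums), 2):
        x, y = nums[i], nums[i + 1]
        out += [(matrix[0][0] * x + matrix[0][1] * y) % 26,
                (matrix[1][0] * x + matrix[1][1] * y) % 26]
    return to_text(out)
```

For example, hill2([[1, 3], [2, 1]], "DARKNIGHTT") reproduces the cipher GIYUOKEVBH of 1(a), and applying the inverse matrix [16 5; 15 6] to the ciphertext of Exercise 3 recovers WELOVEMATH.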

4. From Table 1 the numerical equivalent of the known plaintext is
AR MY
1 18 13 25
and the numerical equivalent of the corresponding ciphertext is
SL HK
19 12 8 11
so the corresponding plaintext and ciphertext vectors are
p1 = [1; 18] ↔ c1 = [19; 12]
p2 = [13; 25] ↔ c2 = [8; 11]
We want to reduce C = [c1^T; c2^T] = [19 12; 8 11] to I by elementary row operations and simultaneously apply these operations to P = [p1^T; p2^T] = [1 18; 13 25]. The calculations are as follows:
[19 12 | 1 18; 8 11 | 13 25]   (form the matrix [C P])
[1 132 | 11 198; 8 11 | 13 25]   (multiply the first row by 19^(-1) = 11 (mod 26))
[1 2 | 11 16; 8 11 | 13 25]   (replace 132 and 198 by their residues modulo 26)
[1 2 | 11 16; 0 −5 | −75 −103]   (add −8 times the first row to the second)
[1 2 | 11 16; 0 21 | 3 1]   (replace the entries in the second row by their residues modulo 26)
[1 2 | 11 16; 0 1 | 15 5]   (multiply the second row by 21^(-1) = 5 (mod 26))
[1 0 | −19 6; 0 1 | 15 5]   (add −2 times the second row to the first)
[1 0 | 7 6; 0 1 | 15 5]   (replace −19 by its residue modulo 26)
Thus (A^(-1))^T = [7 6; 15 5], so the deciphering matrix is A^(-1) = [7 15; 6 5] (mod 26).
Since det(A^(-1)) = 35 − 90 = −55 = 23 (mod 26),
A = (A^(-1))^(-1) = 23^(-1)[5 −15; −6 7] = 17[5 −15; −6 7] = [85 −255; −102 119] = [7 5; 2 15] (mod 26)
is the enciphering matrix.

5. From Table 1 the numerical equivalent of the known plaintext is
AT OM
1 20 15 13
and the numerical equivalent of the corresponding ciphertext is
JY QO
10 25 17 15
The corresponding plaintext and ciphertext vectors are:
p1 = [1; 20] ↔ c1 = [10; 25]
p2 = [15; 13] ↔ c2 = [17; 15]
We want to reduce C = [10 25; 17 15] to I by elementary row operations and simultaneously apply these operations to P = [1 20; 15 13]. The calculations are as follows:
[10 25 | 1 20; 17 15 | 15 13]   (form the matrix [C P])
[27 40 | 16 33; 17 15 | 15 13]   (add the second row to the first, since 10^(-1) does not exist mod 26)
[1 14 | 16 7; 17 15 | 15 13]   (replace the entries in the first row by their residues modulo 26)
[1 14 | 16 7; 0 −223 | −257 −106]   (add −17 times the first row to the second)
[1 14 | 16 7; 0 11 | 3 24]   (replace the entries in the second row by their residues modulo 26)
[1 14 | 16 7; 0 1 | 57 456]   (multiply the second row by 11^(-1) = 19 (mod 26))
[1 14 | 16 7; 0 1 | 5 14]   (replace the entries in the second row by their residues modulo 26)
[1 0 | −54 −189; 0 1 | 5 14]   (add −14 times the second row to the first)
[1 0 | 24 19; 0 1 | 5 14]   (replace −54 and −189 by their residues modulo 26)
Thus (A^(-1))^T = [24 19; 5 14], and so the deciphering matrix is A^(-1) = [24 5; 19 14] (mod 26).
From Table 1 the numerical equivalent of the given ciphertext is
LN GI HG YB VR EN JY QO
12 14 7 9 8 7 25 2 22 18 5 14 10 25 17 15
To obtain the plaintext pairs, we multiply each ciphertext vector by A^(-1):
[24 5; 19 14][12; 14] = [358; 424] = [20; 8] → TH
[24 5; 19 14][7; 9] = [213; 259] = [5; 25] → EY
[24 5; 19 14][8; 7] = [227; 250] = [19; 16] → SP
[24 5; 19 14][25; 2] = [610; 503] = [12; 9] → LI
[24 5; 19 14][22; 18] = [618; 670] = [20; 20] → TT
[24 5; 19 14][5; 14] = [190; 291] = [8; 5] → HE
[24 5; 19 14][10; 25] = [365; 540] = [1; 20] → AT
[24 5; 19 14][17; 15] = [483; 533] = [15; 13] → OM
which yields the message THEY SPLIT THE ATOM.

6. Since we want a Hill 3-cipher, we will group the letters in triples. From Table 1 the numerical equivalents of the known plaintext are
I H A V E C O M E
9 8 1 22 5 3 15 13 5
and the numerical equivalents of the corresponding ciphertext are
H P A F Q G G D U
8 16 1 6 17 7 7 4 21
The corresponding plaintext and ciphertext vectors are
p1 = [9; 8; 1] ↔ c1 = [8; 16; 1]
p2 = [22; 5; 3] ↔ c2 = [6; 17; 7]
p3 = [15; 13; 5] ↔ c3 = [7; 4; 21]
We want to reduce C = [8 16 1; 6 17 7; 7 4 21] to I by elementary row operations and simultaneously apply these operations to P = [9 8 1; 22 5 3; 15 13 5]. The calculations are as follows:
[8 16 1 | 9 8 1; 6 17 7 | 22 5 3; 7 4 21 | 15 13 5]   (form the matrix [C P])
[15 20 22 | 24 21 6; 6 17 7 | 22 5 3; 7 4 21 | 15 13 5]   (add the third row to the first, since 8^(-1) does not exist modulo 26)
[1 140 154 | 168 147 42; 6 17 7 | 22 5 3; 7 4 21 | 15 13 5]   (multiply the first row by 15^(-1) = 7 (mod 26))
[1 10 24 | 12 17 16; 6 17 7 | 22 5 3; 7 4 21 | 15 13 5]   (replace the entries in the first row by their residues modulo 26)
[1 10 24 | 12 17 16; 0 −43 −137 | −50 −97 −93; 0 −66 −147 | −69 −106 −107]   (add −6 times the first row to the second and −7 times the first row to the third)
[1 10 24 | 12 17 16; 0 9 19 | 2 7 11; 0 12 9 | 9 24 23]   (replace the entries in the second and third rows by their residues modulo 26)
[1 10 24 | 12 17 16; 0 1 57 | 6 21 33; 0 12 9 | 9 24 23]   (multiply the second row by 9^(-1) = 3 (mod 26))
[1 10 24 | 12 17 16; 0 1 5 | 6 21 7; 0 12 9 | 9 24 23]   (replace the entries in the second row by their residues modulo 26)
[1 0 −26 | −48 −193 −54; 0 1 5 | 6 21 7; 0 0 −51 | −63 −228 −61]   (add −10 times the second row to the first and −12 times the second row to the third)
[1 0 0 | 4 15 24; 0 1 5 | 6 21 7; 0 0 1 | 15 6 17]   (replace the entries in the first and third rows by their residues modulo 26)
[1 0 0 | 4 15 24; 0 1 0 | −69 −9 −78; 0 0 1 | 15 6 17]   (add −5 times the third row to the second)
[1 0 0 | 4 15 24; 0 1 0 | 9 17 0; 0 0 1 | 15 6 17]   (replace the entries in the second row by their residues modulo 26)
Thus (A^(-1))^T = [4 15 24; 9 17 0; 15 6 17], and so the deciphering matrix is A^(-1) = [4 9 15; 15 17 6; 24 0 17].
From Table 1 the numerical equivalent of the given ciphertext is
H P A F Q G G D U G G D H P G O D Y N O R
8 16 1 6 17 7 7 4 21 7 4 4 8 16 7 15 4 25 14 15 18
To obtain the plaintext triples, we multiply each ciphertext vector by A^(-1):
A^(-1)[8; 16; 1] = [191; 398; 209] = [9; 8; 1] → IHA
A^(-1)[6; 17; 7] = [282; 421; 263] = [22; 5; 3] → VEC
A^(-1)[7; 4; 21] = [379; 299; 525] = [15; 13; 5] → OME
A^(-1)[7; 4; 4] = [124; 197; 236] = [20; 15; 2] → TOB
A^(-1)[8; 16; 7] = [281; 434; 311] = [21; 18; 25] → URY
A^(-1)[15; 4; 25] = [471; 443; 785] = [3; 1; 5] → CAE
A^(-1)[14; 15; 18] = [461; 573; 642] = [19; 1; 18] → SAR
Finally, the message is I HAVE COME TO BURY CAESAR.
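The row reductions in Exercises 4 and 5 amount to computing (A^(-1))^T = C^(-1)P modulo 26. The sketch below (illustrative; `pow(det, -1, 26)` requires Python 3.8+) does the same with the 2 × 2 adjugate instead of row operations, then decodes with the recovered matrix.

```python
def inv2_mod26(m):
    # Adjugate-based inverse modulo 26; raises if det shares a factor with 26.
    (a, b), (c, d) = m
    det_inv = pow((a * d - b * c) % 26, -1, 26)
    return [[(det_inv * d) % 26, (-det_inv * b) % 26],
            [(-det_inv * c) % 26, (det_inv * a) % 26]]

def mul2_mod26(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) % 26
             for j in range(2)] for i in range(2)]

def deciphering_matrix(plain_pairs, cipher_pairs):
    # Rows of P are plaintext vectors, rows of C ciphertext vectors; P = C (A^-1)^T.
    p = [list(v) for v in plain_pairs]
    c = [list(v) for v in cipher_pairs]
    at = mul2_mod26(inv2_mod26(c), p)     # this is (A^-1)^T
    return [list(r) for r in zip(*at)]    # transpose to get A^-1

def decode(ciphertext, adec):
    # Table 1 coding: A=1, ..., Y=25, Z=0.
    nums = [(ord(ch) - 64) % 26 for ch in ciphertext]
    out = []
    for i in range(0, len(nums), 2):
        x, y = nums[i], nums[i + 1]
        out += [(adec[0][0] * x + adec[0][1] * y) % 26,
                (adec[1][0] * x + adec[1][1] * y) % 26]
    return "".join("Z" if n == 0 else chr(64 + n) for n in out)
```

On the data of Exercise 5 this recovers A^(-1) = [24 5; 19 14] and the message THEY SPLIT THE ATOM; on Exercise 4 it recovers A^(-1) = [7 15; 6 5].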


7. (a) Multiply each of the triples of the message by A = [1 1 0; 0 1 1; 1 1 1] and reduce the results modulo 2.
A[1; 1; 0] = [2; 1; 2] = [0; 1; 0]
A[1; 0; 1] = [1; 1; 2] = [1; 1; 0]
A[1; 1; 1] = [2; 2; 3] = [0; 0; 1]
The encoded message is 010110001.

(b) Reduce [A I] to [I A^(-1)] modulo 2.
[1 1 0 | 1 0 0; 0 1 1 | 0 1 0; 1 1 1 | 0 0 1]   (form the matrix [A I])
[1 1 0 | 1 0 0; 0 1 1 | 0 1 0; 2 2 1 | 1 0 1]   (add the first row to the third row)
[1 1 0 | 1 0 0; 0 1 1 | 0 1 0; 0 0 1 | 1 0 1]   (replace 2 by its residue modulo 2)
[1 1 0 | 1 0 0; 0 1 2 | 1 1 1; 0 0 1 | 1 0 1]   (add the third row to the second row)
[1 1 0 | 1 0 0; 0 1 0 | 1 1 1; 0 0 1 | 1 0 1]   (replace 2 by its residue modulo 2)
[1 2 0 | 2 1 1; 0 1 0 | 1 1 1; 0 0 1 | 1 0 1]   (add the second row to the first row)
[1 0 0 | 0 1 1; 0 1 0 | 1 1 1; 0 0 1 | 1 0 1]   (replace 2 by its residue modulo 2)
Thus A^(-1) = [0 1 1; 1 1 1; 1 0 1].
A^(-1)[0; 1; 0] = [1; 1; 0]
A^(-1)[1; 1; 0] = [1; 2; 1] = [1; 0; 1]
A^(-1)[0; 0; 1] = [1; 1; 1]
The decoded message is 110101111, which is the original message.

8. Since 29 is a prime number, by Corollary 10.15.2 a matrix A with entries in Z_29 is invertible if and only if det(A) ≠ 0 (mod 29).

Section 10.16

Exercise Set 10.16

1. Use induction on n, the case n = 1 being already given. If the result is true for n − 1, then
M^n = M^(n−1)M = (PD^(n−1)P^(-1))(PDP^(-1)) = PD^(n−1)(P^(-1)P)DP^(-1) = PD^(n−1)DP^(-1) = PD^nP^(-1),
proving the result.

2. Using Table 1 and the notation of Example 1, we derive the following equations:
a_n = (1/2)a_{n−1} + (1/4)b_{n−1}
b_n = (1/2)a_{n−1} + (1/2)b_{n−1} + (1/2)c_{n−1}
c_n = (1/4)b_{n−1} + (1/2)c_{n−1}.
The transition matrix is thus
M = [1/2 1/4 0; 1/2 1/2 1/2; 0 1/4 1/2].
The characteristic polynomial of M is
det(λI − M) = λ^3 − (3/2)λ^2 + (1/2)λ = λ(λ − 1)(λ − 1/2),
so the eigenvalues of M are λ1 = 1, λ2 = 1/2, and λ3 = 0. Corresponding eigenvectors (found by solving (λI − M)x = 0) are e1 = [1 2 1]^T, e2 = [1 0 −1]^T, and e3 = [1 −2 1]^T. Thus
M^n = PD^nP^(-1) = [1 1 1; 2 0 −2; 1 −1 1] · diag(1, (1/2)^n, 0) · [1/4 1/4 1/4; 1/2 0 −1/2; 1/4 −1/4 1/4].
This yields
x(n) = [a_n; b_n; c_n] = [1/4 + (1/2)^(n+1), 1/4, 1/4 − (1/2)^(n+1); 1/2, 1/2, 1/2; 1/4 − (1/2)^(n+1), 1/4, 1/4 + (1/2)^(n+1)][a0; b0; c0].
Remembering that a0 + b0 + c0 = 1, we obtain
a_n = 1/4 + (1/2)^(n+1)(a0 − c0)
b_n = 1/2
c_n = 1/4 − (1/2)^(n+1)(a0 − c0).
Since (1/2)^(n+1) approaches zero as n → ∞, we obtain a_n → 1/4, b_n → 1/2, and c_n → 1/4 as n → ∞.

3. Call M1 the matrix of Example 1, and M2 the matrix of Exercise 2. Then x(2n) = (M2M1)^n x(0) and x(2n+1) = M1(M2M1)^n x(0). We have
M2M1 = [1/2 1/4 0; 1/2 1/2 1/2; 0 1/4 1/2][1 1/2 0; 0 1/2 1; 0 0 0] = [1/2 3/8 1/4; 1/2 1/2 1/2; 0 1/8 1/4].
The characteristic polynomial of this matrix is λ^3 − (5/4)λ^2 + (1/4)λ, so the eigenvalues are λ1 = 1, λ2 = 1/4, λ3 = 0. Corresponding eigenvectors are e1 = [5 6 1]^T, e2 = [−1 0 1]^T, and e3 = [1 −2 1]^T. Thus,
(M2M1)^n = PD^nP^(-1) = [5 −1 1; 6 0 −2; 1 1 1] · diag(1, (1/4)^n, 0) · [1/12 1/12 1/12; −1/3 1/6 2/3; 1/4 −1/4 1/4].
Using the notation of Example 1 (recall that a0 + b0 + c0 = 1), we obtain
a_{2n} = 5/12 + (1/(6·4^n))(2a0 − b0 − 4c0)
b_{2n} = 1/2
c_{2n} = 1/12 − (1/(6·4^n))(2a0 − b0 − 4c0)
and
a_{2n+1} = 2/3 + (1/(6·4^n))(2a0 − b0 − 4c0)
b_{2n+1} = 1/3 − (1/(6·4^n))(2a0 − b0 − 4c0)
c_{2n+1} = 0.

4. The characteristic polynomial of M is (λ − 1)(λ − 1/2), so the eigenvalues are λ1 = 1 and λ2 = 1/2. Corresponding eigenvectors are easily found to be e1 = [1 0]^T and e2 = [1 −1]^T. From this point, the verification of Equation (7) is in the text.
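The limits found in Exercise 2 can be checked numerically with exact fractions; the sketch below (an illustration, not part of the text) iterates the transition matrix from the pure-AA start (1, 0, 0).

```python
from fractions import Fraction as F

# Transition matrix of Exercise 2, with exact rational entries.
M = [[F(1, 2), F(1, 4), F(0)],
     [F(1, 2), F(1, 2), F(1, 2)],
     [F(0),    F(1, 4), F(1, 2)]]

def step(x):
    return [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]

def iterate(x, n):
    for _ in range(n):
        x = step(x)
    return x
```

After 30 steps the middle component is exactly 1/2 and the first is within (1/2)^31 of 1/4, matching a_n → 1/4, b_n → 1/2, c_n → 1/4.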


5. From Equation (9), if b0 = .25 = 1/4, we get b1 = (1/4)/(9/8) = 2/9, then b2 = (2/9)/(10/9) = 1/5, b3 = (1/5)/(11/10) = 2/11, and, in general, b_n = 2/(8 + n). We will reach 2/20 = .10 in 12 generations. According to Equation (8), under the controlled program the percentage would be (1/2)^14 in 12 generations, or 1/16,384 = .00006 = .006%.

6. P^(-1)x(0) = [1/2; 1/2; 0; 0; (1/12)(1 + √5); (1/12)(1 − √5)]
D^nP^(-1)x(0) = [1/2; 1/2; 0; 0; (1/3)[(1/4)(1 + √5)]^(n+1); (1/3)[(1/4)(1 − √5)]^(n+1)]
PD^nP^(-1)x(0) =
[1/2 + (1/(3·4^(n+2)))[(−3 − √5)(1 + √5)^(n+1) + (−3 + √5)(1 − √5)^(n+1)];
(1/(3·4^(n+1)))[(1 + √5)^(n+1) + (1 − √5)^(n+1)];
(1/(3·4^(n+1)))[(1 + √5)^n + (1 − √5)^n];
(1/(3·4^(n+1)))[(1 + √5)^n + (1 − √5)^n];
(1/(3·4^(n+1)))[(1 + √5)^(n+1) + (1 − √5)^(n+1)];
1/2 + (1/(3·4^(n+2)))[(−3 − √5)(1 + √5)^(n+1) + (−3 + √5)(1 − √5)^(n+1)]].
As n tends to infinity, [(1 ± √5)/4]^n tends to 0 (since |1 ± √5| < 4), so x(n) approaches [1/2; 0; 0; 0; 0; 1/2].

7. From (13) we have that the probability that the limiting sibling-pairs will be type (A, AA) is
a_0 + (2/3)b_0 + (1/3)c_0 + (2/3)d_0 + (1/3)e_0.
The proportion of A genes in the population at the outset is as follows: all of the type (A, AA) genes, 2/3 of the type (A, Aa) genes, 1/3 of the type (A, aa) genes, etc., again yielding
a_0 + (2/3)b_0 + (1/3)c_0 + (2/3)d_0 + (1/3)e_0.

(c) x(6) = Lx(5) = [857, 285]^T, while λ_1 x(5) = [855, 287]^T ≈ x(6).

8. From an (A, AA) pair we get only (A, AA) pairs, and similarly for (a, aa). From either (A, aa) or (a, AA) pairs we must get an Aa female, who will not mature. Thus no offspring will come from such pairs. The transition matrix is then
[1 0 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 1].

9. For the first column of M we realize that parents of type (A, AA) can produce offspring only of that type, and similarly for the last column. The fifth column is like the second column, and follows the analysis in the text. For the middle two columns, say the third, note that male offspring from (A, aa) must be of type a, and females are of type Aa, because of the way the genes are inherited.

Section 10.17

Exercise Set 10.17

1. (a) The characteristic polynomial of L is λ² − λ − 3/4, so the eigenvalues of L are λ = 3/2 and λ = −1/2; thus λ_1 = 3/2, and x_1 = [1, 1/3]^T is the corresponding eigenvector.

(b) x(1) = Lx(0) = [100, 50]^T, x(2) = Lx(1) = [175, 50]^T, x(3) = Lx(2) = [250, 88]^T, x(4) = Lx(3) = [382, 125]^T, x(5) = Lx(4) = [570, 191]^T

5. a_1 is the average number of offspring produced in the first age period. a_2 b_1 is the number of offspring produced in the second period times the probability that the female will live into the second period, i.e., it is the expected number of offspring per female during the second period, and so on for all the periods. Thus the sum of these, which is the net reproduction rate, is the expected number of offspring produced by a given female during her expected lifetime.

7. R = 0 + 4(1/2) + 3(1/2)(1/4) = 19/8 = 2.375

8. R = 0 + (.00024)(.99651) + ⋯ + (.00240)(.99651)⋯(.987) = 1.49611

Section 10.18

Exercise Set 10.18

1. (a) The characteristic polynomial of L is λ³ − 2λ − 3/8 = (λ − 3/2)[λ² + (3/2)λ + 1/4], so λ_1 = 3/2. Thus h, the fraction harvested of each age group, is 1 − 2/3 = 1/3, so the yield is 33 1/3% of the population. The eigenvector corresponding to λ = 3/2 is [1, 1/3, 1/18]^T; this is the age distribution vector after each harvest.

(b) From Equation (10), the age distribution vector x_1 is [1, 1/2, 1/8]^T. Equation (9) tells us that h_1 = 1 − 8/19 = 11/19, so we harvest 11/19, or 57.9%, of the youngest age class. Since Lx_1 = [19/8, 1/2, 1/8]^T, the youngest class contains 79.2% of the population. Thus the yield is 57.9% of 79.2%, or 45.8% of the population.
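The two eigenvalue claims and the harvest fraction h = 1 − 1/λ_1 used above can be verified exactly (only the arithmetic from the solutions is being checked here):

```python
from fractions import Fraction

lam = Fraction(3, 2)

# λ1 = 3/2 is a root of both characteristic polynomials quoted above
assert lam**2 - lam - Fraction(3, 4) == 0       # Section 10.17, Exercise 1(a)
assert lam**3 - 2 * lam - Fraction(3, 8) == 0   # Section 10.18, Exercise 1(a)

# Sustainable uniform harvest: h = 1 - 1/λ1 = 1/3 of every age class
h = 1 - 1 / lam
assert h == Fraction(1, 3)
```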

2. The Leslie matrix of Example 1 has b_1 = .845, b_2 = .975, b_3 = .965, etc. This, together with the harvesting data from Equations (13) and the formula of Equation (5), yields
x_1 = [1 .845 .824 .795 .755 .699 .626 .532 0 0 0 0]^T
Lx_1 = [2.090 .845 .824 .795 .755 .699 .626 .532 .418 0 0 0]^T.
The total of the entries of Lx_1 is 7.584. The proportion of sheep harvested is h_1(Lx_1)_1 + h_9(Lx_1)_9 = 1.51, or 19.9% of the population.
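The totals quoted in Exercise 2 can be re-checked directly from the two vectors (values taken from the text; only the arithmetic is verified):

```python
x1  = [1, .845, .824, .795, .755, .699, .626, .532, 0, 0, 0, 0]
Lx1 = [2.090, .845, .824, .795, .755, .699, .626, .532, .418, 0, 0, 0]

total = sum(Lx1)
assert abs(total - 7.584) < 1e-9        # total of the entries of Lx1
# 1.51 harvested animals per normalized flock is about 19.9% of the population
assert abs(1.51 / total - 0.199) < 5e-4
```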

4. In this situation we have h_I ≠ 0 and h_1 = h_2 = ⋯ = h_{I−1} = h_{I+1} = ⋯ = h_n = 0. Equation (4) then takes the form
a_1 + a_2 b_1 + a_3 b_1 b_2 + ⋯ + a_I b_1 b_2 ⋯ b_{I−1}(1 − h_I) + a_{I+1} b_1 b_2 ⋯ b_I (1 − h_I) + ⋯ + a_n b_1 b_2 ⋯ b_{n−1}(1 − h_I) = 1, so
(1 − h_I)[a_I b_1 b_2 ⋯ b_{I−1} + a_{I+1} b_1 b_2 ⋯ b_I + ⋯ + a_n b_1 b_2 ⋯ b_{n−1}] = 1 − a_1 − a_2 b_1 − ⋯ − a_{I−1} b_1 b_2 ⋯ b_{I−2}.
So
h_I = (a_1 + a_2 b_1 + ⋯ + a_{I−1} b_1 b_2 ⋯ b_{I−2} − 1)/(a_I b_1 b_2 ⋯ b_{I−1} + ⋯ + a_n b_1 b_2 ⋯ b_{n−1}) + 1
    = (a_1 + a_2 b_1 + ⋯ + a_{I−1} b_1 b_2 ⋯ b_{I−2} − 1 + a_I b_1 b_2 ⋯ b_{I−1} + ⋯ + a_n b_1 b_2 ⋯ b_{n−1})/(a_I b_1 b_2 ⋯ b_{I−1} + ⋯ + a_n b_1 b_2 ⋯ b_{n−1})
    = (R − 1)/(a_I b_1 b_2 ⋯ b_{I−1} + ⋯ + a_n b_1 b_2 ⋯ b_{n−1})
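The identity h_I = (R − 1)/S can be exercised on sample data. A minimal sketch, assuming hypothetical fertility and survival values (they are not from the text; only the algebra of Equation (4) is being checked):

```python
from fractions import Fraction as F

a = [F(1, 2), F(3, 2), F(1)]   # hypothetical a_1, a_2, a_3
b = [F(1, 2), F(1, 4)]         # hypothetical b_1, b_2

p = [F(1)]
for bk in b:
    p.append(p[-1] * bk)       # p[k] = b_1 b_2 ... b_k

R = sum(ak * pk for ak, pk in zip(a, p))   # net reproduction rate

def restored_sum(I):
    """Left side of Equation (4) when only class I is harvested with
    h_I = (R - 1)/S and S = a_I b_1...b_{I-1} + ... + a_n b_1...b_{n-1}."""
    S = sum(a[k] * p[k] for k in range(I - 1, len(a)))
    hI = (R - 1) / S
    return sum(a[k] * p[k] * ((1 - hI) if k >= I - 1 else 1)
               for k in range(len(a)))

assert restored_sum(1) == 1
assert restored_sum(2) == 1
assert restored_sum(3) == 1
```

For every choice of harvested class I the sum returns to 1, which is exactly what solving Equation (4) for h_I guarantees.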

5. Here h_J = 1, h_I ≠ 0, and all the other h_k are zero. Then Equation (4) becomes
a_1 + a_2 b_1 + ⋯ + a_{I−1} b_1 b_2 ⋯ b_{I−2} + (1 − h_I)[a_I b_1 b_2 ⋯ b_{I−1} + ⋯ + a_{J−1} b_1 b_2 ⋯ b_{J−2}] = 1.
We solve for h_I to obtain
h_I = (a_1 + a_2 b_1 + ⋯ + a_{I−1} b_1 b_2 ⋯ b_{I−2} − 1)/(a_I b_1 b_2 ⋯ b_{I−1} + ⋯ + a_{J−1} b_1 b_2 ⋯ b_{J−2}) + 1
    = (a_1 + a_2 b_1 + ⋯ + a_{J−1} b_1 b_2 ⋯ b_{J−2} − 1)/(a_I b_1 b_2 ⋯ b_{I−1} + ⋯ + a_{J−1} b_1 b_2 ⋯ b_{J−2}).

Section 10.19

Exercise Set 10.19

1. From Theorem 10.19.1, we compute
a_0 = (1/π) ∫_0^{2π} (t − π)² dt = (2/3)π²,
a_k = (1/π) ∫_0^{2π} (t − π)² cos kt dt = 4/k², and
b_k = (1/π) ∫_0^{2π} (t − π)² sin kt dt = 0.
So the least-squares trigonometric polynomial of order 3 is
π²/3 + 4 cos t + cos 2t + (4/9) cos 3t.
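The coefficients a_0 = 2π²/3, a_k = 4/k², and b_k = 0 can be confirmed by numerical quadrature; a sketch using a composite midpoint rule (the helper name `coeff` is ours):

```python
import math

def coeff(f, trig, k, N=20000):
    # Composite-midpoint approximation of (1/π) ∫_0^{2π} f(t) trig(kt) dt
    h = 2 * math.pi / N
    return sum(f((i + 0.5) * h) * trig(k * (i + 0.5) * h)
               for i in range(N)) * h / math.pi

f = lambda t: (t - math.pi) ** 2
for k in (1, 2, 3):
    assert abs(coeff(f, math.cos, k) - 4 / k**2) < 1e-4   # a_k = 4/k^2
    assert abs(coeff(f, math.sin, k)) < 1e-4              # b_k = 0
assert abs(coeff(f, lambda x: 1.0, 0) - 2 * math.pi**2 / 3) < 1e-4  # a_0
```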

2. From Theorem 10.19.2, we compute
a_0 = (2/T) ∫_0^T t² dt = (2/3)T²,
a_k = (2/T) ∫_0^T t² cos(2kπt/T) dt = T²/(k²π²), and
b_k = (2/T) ∫_0^T t² sin(2kπt/T) dt = −T²/(kπ).
So the least-squares trigonometric polynomial of order 4 is
T²/3 + (T²/π²)[cos(2πt/T) + (1/4) cos(4πt/T) + (1/9) cos(6πt/T) + (1/16) cos(8πt/T)]
     − (T²/π)[sin(2πt/T) + (1/2) sin(4πt/T) + (1/3) sin(6πt/T) + (1/4) sin(8πt/T)].

3. From Theorem 10.19.2,
a_0 = (1/π) ∫_0^{2π} f(t) dt = (1/π) ∫_0^π sin t dt = 2/π (note the upper limit on the second integral),
a_k = (1/π) ∫_0^π sin t cos kt dt
    = (1/π)[(1/(k² − 1))(k sin kt sin t + cos kt cos t)]_0^π
    = (1/π)(1/(k² − 1))[0 + (−1)^(k−1) − 1]
    = 0 if k is odd, −2/(π(k² − 1)) if k is even,
b_k = (1/π) ∫_0^π sin kt sin t dt = 1/2 if k = 1, 0 if k > 1.
So the least-squares trigonometric polynomial of order 4 is
1/π + (1/2) sin t − (2/(3π)) cos 2t − (2/(15π)) cos 4t.

4. From Theorem 10.19.2,
a_0 = (1/π) ∫_0^{2π} sin(t/2) dt = 4/π,
a_k = (1/π) ∫_0^{2π} sin(t/2) cos kt dt = −4/(π(4k² − 1)) = −4/(π(2k − 1)(2k + 1)),
b_k = (1/π) ∫_0^{2π} sin(t/2) sin kt dt = 0.
So the least-squares trigonometric polynomial of order n is
2/π − (4/π)[cos t/(1·3) + cos 2t/(3·5) + cos 3t/(5·7) + ⋯ + cos nt/((2n − 1)(2n + 1))].

5. From Theorem 10.19.2,
a_0 = (2/T) ∫_0^T f(t) dt = (2/T) ∫_0^{T/2} t dt + (2/T) ∫_{T/2}^T (T − t) dt = T/2,
a_k = (2/T) ∫_0^{T/2} t cos(2kπt/T) dt + (2/T) ∫_{T/2}^T (T − t) cos(2kπt/T) dt
    = (4T/(4k²π²))((−1)^k − 1)
    = 0 if k is even, −8T/((2k)²π²) if k is odd.
So the least-squares trigonometric polynomial of order n is
T/4 − (8T/π²)[(1/2²) cos(2πt/T) + (1/6²) cos(6πt/T) + (1/10²) cos(10πt/T) + ⋯ + (1/(2n)²) cos(2πnt/T)]
if n is odd; the last term involves n − 1 if n is even.
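Exercise 3's piecewise result (a_k vanishes for odd k and equals −2/(π(k² − 1)) for even k) can be spot-checked numerically with a midpoint rule over [0, π], where f(t) = sin t is nonzero:

```python
import math

def a_k(k, N=20000):
    # (1/π) ∫_0^π sin t · cos kt dt  (f vanishes on [π, 2π])
    h = math.pi / N
    return sum(math.sin((i + 0.5) * h) * math.cos(k * (i + 0.5) * h)
               for i in range(N)) * h / math.pi

assert abs(a_k(1)) < 1e-6                                      # odd k: a_k = 0
for k in (2, 4):
    assert abs(a_k(k) - (-2 / (math.pi * (k**2 - 1)))) < 1e-6  # even k
```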

6. (a) ‖1‖ = (∫_0^{2π} 1² dt)^(1/2) = √(2π)

(b) ‖cos kt‖ = (∫_0^{2π} cos² kt dt)^(1/2) = (∫_0^{2π} (1 + cos 2kt)/2 dt)^(1/2) = √π

(c) ‖sin kt‖ = (∫_0^{2π} sin² kt dt)^(1/2) = (∫_0^{2π} (1 − cos 2kt)/2 dt)^(1/2) = √π

Section 10.20

Exercise Set 10.20

1. (a) Equation (2) is c_1[1; 1] + c_2[3; 5] + c_3[4; 2] = [3; 3] and Equation (3) is c_1 + c_2 + c_3 = 1. These equations can be written in combined matrix form as
[1 3 4; 1 5 2; 1 1 1][c_1; c_2; c_3] = [3; 3; 1].
This system has the unique solution c_1 = 1/5, c_2 = 2/5, and c_3 = 2/5. Because these coefficients are all nonnegative, it follows that v is a convex combination of the vectors v_1, v_2, and v_3.

(b) As in part (a), the system for c_1, c_2, and c_3 is [1 3 4; 1 5 2; 1 1 1][c_1; c_2; c_3] = [2; 4; 1], which has the unique solution c_1 = 2/5, c_2 = 4/5, and c_3 = −1/5. Because one of these coefficients is negative, it follows that v is not a convex combination of the vectors v_1, v_2, and v_3.

(c) As in part (a), the system for c_1, c_2, and c_3 is [3 −2 3; 3 −2 0; 1 1 1][c_1; c_2; c_3] = [0; 0; 1], which has the unique solution c_1 = 2/5, c_2 = 3/5, and c_3 = 0. Because these coefficients are all nonnegative, it follows that v is a convex combination of the vectors v_1, v_2, and v_3.

(d) As in part (a), the system for c_1, c_2, and c_3 is [3 −2 3; 3 −2 0; 1 1 1][c_1; c_2; c_3] = [1; 0; 1], which has the unique solution c_1 = 4/15, c_2 = 6/15, and c_3 = 5/15. Because these coefficients are all nonnegative, it follows that v is a convex combination of the vectors v_1, v_2, and v_3.

2. For both triangulations the number of triangles, m, is equal to 7; the number of vertex points, n, is equal to 7; and the number of boundary vertex points, k, is equal to 5. Equation (7), m = 2n − 2 − k, becomes 7 = 2(7) − 2 − 5, or 7 = 7.

3. Combining everything that is given in the statement of the problem, we obtain
w = Mv + b
  = M(c_1 v_1 + c_2 v_2 + c_3 v_3) + (c_1 + c_2 + c_3)b
  = c_1(Mv_1 + b) + c_2(Mv_2 + b) + c_3(Mv_3 + b)
  = c_1 w_1 + c_2 w_2 + c_3 w_3.

4. (a) [Figure: a triangulation of the region using vertices v_1 through v_7.]

(b) [Figure: a second triangulation of the same region using vertices v_1 through v_7.]

5. (a) Let M = [m_11 m_12; m_21 m_22] and b = [b_1; b_2]. Then the three matrix equations Mv_i + b = w_i, i = 1, 2, 3, can be written as the six scalar equations
m_11 + m_12 + b_1 = 4      m_21 + m_22 + b_2 = 3
2m_11 + 3m_12 + b_1 = 9    2m_21 + 3m_22 + b_2 = 5
2m_11 + m_12 + b_1 = 5     2m_21 + m_22 + b_2 = 3
The first, third, and fifth equations can be written in matrix form as [1 1 1; 2 3 1; 2 1 1][m_11; m_12; b_1] = [4; 9; 5], and the second, fourth, and sixth equations as [1 1 1; 2 3 1; 2 1 1][m_21; m_22; b_2] = [3; 5; 3].
The first system has the solution m_11 = 1, m_12 = 2, b_1 = 1, and the second system has the solution m_21 = 0, m_22 = 1, b_2 = 2. Thus we obtain M = [1 2; 0 1] and b = [1; 2].

(b) As in part (a), we are led to the two linear systems
[−2 2 1; 0 0 1; 2 1 1][m_11; m_12; b_1] = [−8; 0; 5] and [−2 2 1; 0 0 1; 2 1 1][m_21; m_22; b_2] = [1; 1; 4].
Solving these two linear systems leads to M = [3 −1; 1 1] and b = [0; 1].

(c) As in part (a), we are led to the two linear systems
[−2 1 1; 3 5 1; 1 0 1][m_11; m_12; b_1] = [0; 5; 3] and [−2 1 1; 3 5 1; 1 0 1][m_21; m_22; b_2] = [−2; 2; −3].
Solving these two linear systems leads to M = [1 0; 0 1] and b = [2; −3].

(d) As in part (a), we are led to the two linear systems
[0 2 1; 2 2 1; −4 −2 1][m_11; m_12; b_1] = [5/2; 7/2; −7/2] and [0 2 1; 2 2 1; −4 −2 1][m_21; m_22; b_2] = [−1; 3; −9].
Solving these two linear systems leads to M = [1/2 1; 2 0] and b = [1/2; −1].

7. (a) The vertices v_1, v_2, and v_3 of a triangle can be written as the convex combinations
v_1 = (1)v_1 + (0)v_2 + (0)v_3, v_2 = (0)v_1 + (1)v_2 + (0)v_3, and v_3 = (0)v_1 + (0)v_2 + (1)v_3.
In each of these cases, precisely two of the coefficients are zero and one coefficient is one.

(b) If, for example, v lies on the side of the triangle determined by the vectors v_1 and v_2, then from Exercise 6(a) we must have v = c_1 v_1 + c_2 v_2 + (0)v_3 where c_1 + c_2 = 1. Thus at least one of the coefficients, in this example c_3, must equal zero.

(c) From part (b), if at least one of the coefficients in the convex combination is zero, then the vector must lie on one of the sides of the triangle. Consequently, none of the coefficients can be zero if the vector lies in the interior of the triangle.

8. (a) Consider the vertex v_1 of the triangle and its opposite side determined by the vectors v_2 and v_3. The midpoint v_m of this opposite side is (v_2 + v_3)/2, and the point on the line segment from v_1 to v_m that is two-thirds of the distance to v_m is given by
(1/3)v_1 + (2/3)v_m = (1/3)v_1 + (2/3)·(v_2 + v_3)/2 = (1/3)v_1 + (1/3)v_2 + (1/3)v_3.

(b) (1/3)[2; 3] + (1/3)[5; 2] + (1/3)[1; 1] = (1/3)[8; 6] = [8/3; 2]
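The barycentric systems in Exercise 1 are small enough to solve exactly by Cramer's rule; a minimal sketch (the helper `solve3` is ours) reproduces parts (a) and (b):

```python
from fractions import Fraction as F

def solve3(A, rhs):
    # Cramer's rule for a 3x3 system, kept exact with Fraction
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    sol = []
    for j in range(3):
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = rhs[i]
        sol.append(F(det(M), d))
    return sol

A = [[1, 3, 4], [1, 5, 2], [1, 1, 1]]   # the combined matrix of Exercise 1

c = solve3(A, [3, 3, 1])                 # part (a): all nonnegative => convex
assert c == [F(1, 5), F(2, 5), F(2, 5)]
assert all(ci >= 0 for ci in c)

c = solve3(A, [2, 4, 1])                 # part (b): one negative => not convex
assert c == [F(2, 5), F(4, 5), F(-1, 5)]
```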


Chapter 2

Determinants

Section 2.1

Exercise Set 2.1

1. M11 = det[7 −1; 1 4] = 28 − (−1) = 29, C11 = (−1)^(1+1) M11 = 29
M12 = det[6 −1; −3 4] = 24 − 3 = 21, C12 = (−1)^(1+2) M12 = −21
M13 = det[6 7; −3 1] = 6 − (−21) = 27, C13 = (−1)^(1+3) M13 = 27
M21 = det[−2 3; 1 4] = −8 − 3 = −11, C21 = (−1)^(2+1) M21 = 11
M22 = det[1 3; −3 4] = 4 − (−9) = 13, C22 = (−1)^(2+2) M22 = 13
M23 = det[1 −2; −3 1] = 1 − 6 = −5, C23 = (−1)^(2+3) M23 = 5
M31 = det[−2 3; 7 −1] = 2 − 21 = −19, C31 = (−1)^(3+1) M31 = −19
M32 = det[1 3; 6 −1] = −1 − 18 = −19, C32 = (−1)^(3+2) M32 = 19
M33 = det[1 −2; 6 7] = 7 − (−12) = 19, C33 = (−1)^(3+3) M33 = 19

3. (a) M13 = det[0 0 3; 4 1 14; 4 1 2] = 3 det[4 1; 4 1] = 3(4 − 4) = 0
C13 = (−1)^(1+3) M13 = 0

(b) M23 = det[4 −1 6; 4 1 14; 4 1 2]
= 4 det[1 14; 1 2] − (−1) det[4 14; 4 2] + 6 det[4 1; 4 1]
= 4(2 − 14) + (8 − 56) + 6(4 − 4)
= −96
C23 = (−1)^(2+3) M23 = 96

(c) M22 = det[4 1 6; 4 0 14; 4 3 2]
= −4 det[1 6; 3 2] + 0 det[4 6; 4 2] − 14 det[4 1; 4 3]
= −4(2 − 18) − 14(12 − 4)
= −48
C22 = (−1)^(2+2) M22 = −48

(d) M21 = det[−1 1 6; 1 0 14; 1 3 2]
= −1 det[1 6; 3 2] + 0 det[−1 6; 1 2] − 14 det[−1 1; 1 3]
= −(2 − 18) − 14(−3 − 1)
= 72
C21 = (−1)^(2+1) M21 = −72

5. A = [3 5; −2 4]
det(A) = 12 − (−10) = 22
A^(−1) = (1/22)[4 −5; 2 3] = [2/11 −5/22; 1/11 3/22]

7. A = [−5 7; −7 −2]
det(A) = 10 − (−49) = 59
A^(−1) = (1/59)[−2 −7; 7 −5] = [−2/59 −7/59; 7/59 −5/59]

9. det[a−3 5; −3 a−2] = (a − 3)(a − 2) − (5)(−3) = a² − 5a + 6 + 15 = a² − 5a + 21

11. det[−2 1 4; 3 5 −7; 1 6 2]
= [(−2)(5)(2) + (1)(−7)(1) + (4)(3)(6)] − [(4)(5)(1) + (−2)(−7)(6) + (1)(3)(2)]
= [−20 − 7 + 72] − [20 + 84 + 6]
= −65

13. det[3 0 0; 2 −1 5; 1 9 −4]
= [(3)(−1)(−4) + (0)(5)(1) + (0)(2)(9)] − [(0)(−1)(1) + (3)(5)(9) + (0)(2)(−4)]
= 12 − 135
= −123

15. det(A) = (λ − 2)(λ + 4) − (−5) = λ² + 2λ − 8 + 5 = λ² + 2λ − 3 = (λ − 1)(λ + 3)
det(A) = 0 for λ = 1 or −3.

17. det(A) = (λ − 1)(λ + 1) − 0 = (λ − 1)(λ + 1); det(A) = 0 for λ = 1 or −1.

19. Expanding det[3 0 0; 2 −1 5; 1 9 −4] by cofactors:
(a) along the first row: 3 det[−1 5; 9 −4] = 3(4 − 45) = −123
(b) along the first column: 3 det[−1 5; 9 −4] − 2 det[0 0; 9 −4] + 1 det[0 0; −1 5] = 3(4 − 45) − 2(0 − 0) + (0 − 0) = −123
(c) along the second row: −2 det[0 0; 9 −4] + (−1) det[3 0; 1 −4] − 5 det[3 0; 1 9] = −2(0 − 0) − (−12 − 0) − 5(27 − 0) = 12 − 135 = −123
(d) along the second column: −0 det[2 5; 1 −4] + (−1) det[3 0; 1 −4] − 9 det[3 0; 2 5] = −(−12 − 0) − 9(15 − 0) = 12 − 135 = −123
(e) along the third row: 1 det[0 0; −1 5] − 9 det[3 0; 2 5] + (−4) det[3 0; 2 −1] = (0 − 0) − 9(15 − 0) − 4(−3 − 0) = −135 + 12 = −123
(f) along the third column: 0 det[2 −1; 1 9] − 5 det[3 0; 1 9] + (−4) det[3 0; 2 −1] = −5(27 − 0) − 4(−3 − 0) = −135 + 12 = −123

21. Expand along the second column:
det(A) = 5 det[−3 7; −1 5] = 5[−15 − (−7)] = −40

23. Expand along the first column:
det(A) = (k³ − k³) − (k³ − k³) + (k³ − k³) = 0

25. Expand along the third column, then along the first rows of the 3 × 3 matrices:
det(A) = −3 det[3 3 5; 2 2 −2; 2 10 2] − 3 det[3 3 5; 2 2 −2; 4 1 0]
= −3[3(24) − 3(8) + 5(16)] − 3[3(2) − 3(8) + 5(−6)]
= −3(128) − 3(−48)
= −240

27. det[1 0 0; 0 −1 0; 0 0 1] = (1)(−1)(1) = −1

29. The matrix is upper triangular, so the determinant is the product of the diagonal entries: (1)(1)(2)(3) = 6.

31. The matrix is triangular with a zero on the diagonal: det = (0)(2)(3)(8) = 0.

33. Expand along the third column:
det[sin θ  cos θ  0; −cos θ  sin θ  0; sin θ − cos θ  sin θ + cos θ  1] = det[sin θ  cos θ; −cos θ  sin θ] = sin²θ − (−cos²θ) = sin²θ + cos²θ = 1

35. M11 = 1 in both matrices, so expanding along the first row or column gives d_2 = d_1 + λ.

True/False 2.1

(a) False; the determinant is ad − bc.

(b) False; det[1 0; 0 1] = det[1 0 0; 0 1 0; 0 0 1] = 1.

(c) True; if i + j is even then (−1)^(i+j) = 1 and Cij = (−1)^(i+j) Mij = Mij. Similarly, if Cij = (−1)^(i+j) Mij = Mij, then (−1)^(i+j) = 1, so i + j must be even.

(d) True; let A = [a b c; b d e; c e f]. Then C12 = −det[b e; c f] = −(bf − ce) and C21 = −det[b c; e f] = −(bf − ce). Similar arguments show that C23 = C32 and C13 = C31.

(e) True; the value is det(A) regardless of the row or column chosen.

(f) True

(g) False; the determinant would be the product, not the sum.

(h) False; let A = [1 0; 0 1]. Then det(A) = 1, but det(cA) = det[c 0; 0 c] = c². In general, if A is an n × n matrix, then det(cA) = cⁿ det(A).

(i) False; let A = [1 0; 0 −1] and B = [−1 0; 0 1]. Then det(A) = det(B) = −1, but det(A + B) = 0.

(j) True; let A = [a b; c d], so that A² = [a² + bc  ab + bd; ac + cd  bc + d²] and
det(A²) = (a² + bc)(bc + d²) − (ab + bd)(ac + cd)
= a²bc + a²d² + bcd² + b²c² − (a²bc + 2abcd + bcd²)
= a²d² − 2abcd + b²c²
= (ad − bc)²
= (det(A))²
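The minor/cofactor table of Exercise 1 can be regenerated mechanically. A minimal sketch, assuming the matrix A = [1 −2 3; 6 7 −1; −3 1 4] implied by the 2 × 2 blocks above:

```python
def minor(A, i, j):
    # Delete row i and column j, then take the 2x2 determinant
    m = [[A[r][c] for c in range(3) if c != j] for r in range(3) if r != i]
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[1, -2, 3], [6, 7, -1], [-3, 1, 4]]   # matrix implied by the minors above

minors =    [[29, 21, 27], [-11, 13, -5], [-19, -19, 19]]
cofactors = [[29, -21, 27], [11, 13, 5], [-19, 19, 19]]
for i in range(3):
    for j in range(3):
        assert minor(A, i, j) == minors[i][j]
        assert (-1) ** (i + j) * minor(A, i, j) == cofactors[i][j]
```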

Section 2.2

Exercise Set 2.2

1. det(A) = (−2)(4) − (3)(1) = −8 − 3 = −11
A^T = [−2 1; 3 4]; det(A^T) = (−2)(4) − (1)(3) = −8 − 3 = −11

3. det(A) = [(2)(2)(6) + (−1)(4)(5) + (3)(1)(−3)] − [(3)(2)(5) + (2)(4)(−3) + (−1)(1)(6)] = [24 − 20 − 9] − [30 − 24 − 6] = −5
det(A^T) = [(2)(2)(6) + (1)(−3)(3) + (5)(−1)(4)] − [(5)(2)(3) + (2)(−3)(4) + (1)(−1)(6)] = [24 − 9 − 20] − [30 − 24 − 6] = −5

5. The matrix is I_4 with the third row multiplied by −5, so the determinant is −5.

7. The matrix is I_4 with the second and third rows interchanged, so the determinant is −1.

9. The matrix is I_4 with −9 times the fourth row added to the second row, so the determinant is 1.

11. det[0 3 1; 1 1 2; 3 2 4]
= −det[1 1 2; 0 3 1; 3 2 4]
= −det[1 1 2; 0 3 1; 0 −1 −2]
= det[1 1 2; 0 −1 −2; 0 3 1]
= −det[1 1 2; 0 1 2; 0 3 1]
= −det[1 1 2; 0 1 2; 0 0 −5]
= −(−5)
= 5

13. det[3 −6 9; −2 7 −2; 0 1 5]
= 3 det[1 −2 3; −2 7 −2; 0 1 5]
= 3 det[1 −2 3; 0 3 4; 0 1 5]
= −3 det[1 −2 3; 0 1 5; 0 3 4]
= −3 det[1 −2 3; 0 1 5; 0 0 −11]
= −3(−11)
= 33

15. det[2 1 3 1; 1 0 1 1; 0 2 1 0; 0 1 2 3]
= −det[1 0 1 1; 2 1 3 1; 0 2 1 0; 0 1 2 3]
= −det[1 0 1 1; 0 1 1 −1; 0 2 1 0; 0 1 2 3]
= −det[1 0 1 1; 0 1 1 −1; 0 0 −1 2; 0 0 1 4]
= det[1 0 1 1; 0 1 1 −1; 0 0 1 −2; 0 0 1 4]
= det[1 0 1 1; 0 1 1 −1; 0 0 1 −2; 0 0 0 6]
= 6

17. Adding 2 times the first row to the second row and then expanding along the first column twice reduces the determinant to
−det[1 0 1; 2 1 1; 0 1 1] = −det[1 0 1; 0 1 −1; 0 1 1] = −det[1 0 1; 0 1 −1; 0 0 2] = −2

19. Exercise 14: First add multiples of the first row to the remaining rows, then expand by cofactors along the first column. Repeat these steps with the 3 × 3 determinant, then evaluate the 2 × 2 determinant.
det[1 −2 3 1; 5 −9 6 3; −1 2 −6 −2; 2 8 6 1]
= det[1 −2 3 1; 0 1 −9 −2; 0 0 −3 −1; 0 12 0 −1]
= det[1 −9 −2; 0 −3 −1; 12 0 −1]
= det[1 −9 −2; 0 −3 −1; 0 108 23]
= det[−3 −1; 108 23]
= (−3)(23) − (−1)(108)
= 39
Exercise 15: Add −2 times the second row to the first row, then expand by cofactors along the first column. For the 3 × 3 determinant, add multiples of the first row to the remaining rows, then evaluate the 2 × 2 determinant.
det[2 1 3 1; 1 0 1 1; 0 2 1 0; 0 1 2 3]
= det[0 1 1 −1; 1 0 1 1; 0 2 1 0; 0 1 2 3]
= (−1) det[1 1 −1; 2 1 0; 1 2 3]
= −det[1 1 −1; 0 −1 2; 0 1 4]
= −det[−1 2; 1 4]
= −(−4 − 2)
= 6
Exercise 16: Add −1/2 times the first row to the second row, then expand by cofactors along the fourth column. For the 3 × 3 determinant, add −2 times the second row to the third row, then expand by cofactors along the second column. Finally, evaluate the 2 × 2 determinant:
= −(1/3)[−1/3 − (−5/6)] = −1/6
Exercise 17: Add 2 times the first row to the second row, then expand by cofactors along the first column two times. For the 3 × 3 determinant, add −2 times the first row to the second row, then expand by cofactors along the first column. Finally, evaluate the 2 × 2 determinant:
= −det[1 −1; 1 1] = −[1 − (−1)] = −2

21. Interchange the second and third rows, then the first and second rows, to arrive at the determinant whose value is given:
det[d e f; g h i; a b c] = −det[d e f; a b c; g h i] = det[a b c; d e f; g h i] = −6

23. Remove the common factors of 3 from the first row, −1 from the second row, and 4 from the third row to arrive at the determinant whose value is given:
det[3a 3b 3c; −d −e −f; 4g 4h 4i] = 3 det[a b c; −d −e −f; 4g 4h 4i] = −3 det[a b c; d e f; 4g 4h 4i] = −12 det[a b c; d e f; g h i] = −12(−6) = 72

25. Add −1 times the third row to the first row to arrive at the matrix whose value is given:
det[a+g b+h c+i; d e f; g h i] = det[a b c; d e f; g h i] = −6

27. Remove the common factor of −3 from the first row, then add 4 times the second row to the third row, to arrive at the matrix whose value is given:
det[−3a −3b −3c; d e f; g−4d h−4e i−4f] = −3 det[a b c; d e f; g−4d h−4e i−4f] = −3 det[a b c; d e f; g h i] = −3(−6) = 18

29. det[1 1 1; a b c; a² b² c²]
= det[1 1 1; 0 b−a c−a; 0 b²−a² c²−a²]
= det[b−a c−a; b²−a² c²−a²]
= (b − a)(c² − a²) − (c − a)(b² − a²)
= (b − a)(c + a)(c − a) − (c − a)(b + a)(b − a)
= (b − a)(c − a)[c + a − (b + a)]
= (b − a)(c − a)(c − b)

True/False 2.2

(a) True; interchanging two rows of a matrix causes the determinant to be multiplied by −1, so two such interchanges result in a multiplication by (−1)(−1) = 1.

(b) True; consider the transposes of the matrices, so that the column operations performed on A to get B are row operations on A^T.

(c) False; adding a multiple of one row to another does not change the determinant. In this case, det(B) = det(A).

(d) False; multiplying each row by its row number causes the determinant to be multiplied by that row number, so det(B) = n! det(A).

(e) True; since the columns are identical, they are proportional, so det(A) = 0 by Theorem 2.2.5.

(f) True; −1 times the second and fourth rows can be added to the sixth row without changing det(A). Since this introduces a row of zeros, det(A) = 0.
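The two evaluation strategies used throughout this section, cofactor expansion and row reduction to triangular form, must agree; a small sketch comparing them on the matrices of Exercises 11 and 13 (helper names are ours):

```python
from fractions import Fraction as F

def det_cofactor(A):
    # Recursive cofactor expansion along the first row
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        sub = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_cofactor(sub)
    return total

def det_gauss(A):
    # Gaussian elimination with row swaps; each swap flips the sign
    A = [[F(x) for x in row] for row in A]
    n, sign, d = len(A), 1, F(1)
    for k in range(n):
        piv = next((r for r in range(k, n) if A[r][k] != 0), None)
        if piv is None:
            return F(0)
        if piv != k:
            A[k], A[piv] = A[piv], A[k]
            sign = -sign
        d *= A[k][k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= f * A[k][c]
    return sign * d

M11 = [[0, 3, 1], [1, 1, 2], [3, 2, 4]]     # Exercise 11
assert det_cofactor(M11) == 5 == det_gauss(M11)
M13 = [[3, -6, 9], [-2, 7, -2], [0, 1, 5]]  # Exercise 13
assert det_cofactor(M13) == 33 == det_gauss(M13)
```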

Section 2.3

Exercise Set 2.3

1. det(A) = −4 − 6 = −10
det(2A) = det[−2 4; 6 8] = −16 − 24 = −40 = 2²(−10)

3. det(A) = [20 − 1 + 36] − [6 + 8 − 15] = 56
det(−2A) = det[−4 2 −6; −6 −4 −2; −2 −8 −10] = [−160 + 8 − 288] − [−48 − 64 + 120] = −448 = (−2)³(56)

5. det(AB) = det[9 −1 8; 31 1 17; 10 0 2] = [18 − 170 − 0] − [80 + 0 − 62] = −170
det(BA) = det[−1 −3 6; 17 11 4; 10 5 2] = [−22 − 120 + 510] − [660 − 20 − 102] = −170
det(A) = 2 det[2 1; 3 4] = 2(8 − 3) = 10
det(B) = [1 − 10 + 0] − [15 + 0 − 7] = −17
det(A + B) = det[3 0 3; 10 5 2; 5 0 3] = 5 det[3 3; 5 3] = 5(9 − 15) = −30 ≠ det(A) + det(B)

7. Add multiples of the second row to the first and third rows to simplify evaluating the determinant:
det(A) = det[0 3 5; −1 −1 0; 0 2 3] = −(−1) det[3 5; 2 3] = 9 − 10 = −1
Since det(A) ≠ 0, A is invertible.

9. Expand by cofactors of the first column:
det(A) = 2 det[1 −3; 0 2] = 2(2) = 4
Since det(A) ≠ 0, A is invertible.

11. The third column of A is twice the first column, so det(A) = 0 and A is not invertible.

13. Expand by cofactors of the first row:
det(A) = 2 det[1 0; 3 6] = 2(6) = 12
Since det(A) ≠ 0, A is invertible.

15. det(A) = (k − 3)(k − 2) − 4 = k² − 5k + 6 − 4 = k² − 5k + 2
Use the quadratic formula to solve k² − 5k + 2 = 0:
k = [−(−5) ± √((−5)² − 4(1)(2))]/(2(1)) = (5 ± √17)/2
A is invertible for k ≠ (5 ± √17)/2.

17. det(A) = [2 + 12k + 36] − [4k + 18 + 12] = 8k + 8
A is invertible for k ≠ −1.

19. A is the same matrix as in Exercise 7, which was shown to be invertible (det(A) = −1). The cofactors of A are C11 = −3, C12 = −(−3) = 3, C13 = −4 + 2 = −2, C21 = −(15 − 20) = 5, C22 = 6 − 10 = −4, C23 = −(8 − 10) = 2, C31 = 0 + 5 = 5, C32 = −(0 + 5) = −5, and C33 = −2 + 5 = 3.
The matrix of cofactors is [−3 3 −2; 5 −4 2; 5 −5 3].
A^(−1) = (1/det(A)) adj(A) = (1/(−1))[−3 5 5; 3 −4 −5; −2 2 3] = [3 −5 −5; −3 4 5; 2 −2 −3]

21. A is the same matrix as in Exercise 9, which was shown to be invertible (det(A) = 4). The cofactors of A are C11 = 2, C12 = 0, C13 = 0, C21 = −(−6) = 6, C22 = 4, C23 = 0, C31 = 9 − 5 = 4, C32 = −(−6) = 6, and C33 = 2.
The matrix of cofactors is [2 0 0; 6 4 0; 4 6 2].
A^(−1) = (1/det(A)) adj(A) = (1/4)[2 6 4; 0 4 6; 0 0 2] = [1/2 3/2 1; 0 1 3/2; 0 0 1/2]

23. Use row operations and cofactor expansion to simplify the determinant:
det(A) = det[1 3 1 1; 0 −1 0 0; 0 0 7 8; 0 0 1 1] = det[−1 0 0; 0 7 8; 0 1 1] = −det[7 8; 1 1] = −(7 − 8) = 1
Since det(A) ≠ 0, A is invertible. The cofactors of A are
C11 = det[5 2 2; 3 8 9; 3 2 2] = −4
C12 = −det[2 2 2; 1 8 9; 1 2 2] = −(−2) = 2
C13 = det[2 5 2; 1 3 9; 1 3 2] = −7
C14 = −det[2 5 2; 1 3 8; 1 3 2] = −(−6) = 6
C21 = −det[3 1 1; 3 8 9; 3 2 2] = −(−3) = 3
C22 = det[1 1 1; 1 8 9; 1 2 2] = −1
C23 = −det[1 3 1; 1 3 9; 1 3 2] = 0
C24 = det[1 3 1; 1 3 8; 1 3 2] = 0
C31 = det[3 1 1; 5 2 2; 3 2 2] = 0
C32 = −det[1 1 1; 2 2 2; 1 2 2] = 0
C33 = det[1 3 1; 2 5 2; 1 3 2] = −1
C34 = −det[1 3 1; 2 5 2; 1 3 2] = −(−1) = 1
C41 = −det[3 1 1; 5 2 2; 3 8 9] = −(1) = −1
C42 = det[1 1 1; 2 2 2; 1 8 9] = 0
C43 = −det[1 3 1; 2 5 2; 1 3 9] = −(−8) = 8
C44 = det[1 3 1; 2 5 2; 1 3 8] = −7
The matrix of cofactors is [−4 2 −7 6; 3 −1 0 0; 0 0 −1 1; −1 0 8 −7].
A^(−1) = (1/det(A)) adj(A) = [−4 3 0 −1; 2 −1 0 0; −7 0 −1 8; 6 0 1 −7]

25. A = [4 5 0; 11 1 2; 1 5 2]
det(A) = −2 det[4 5; 1 5] + 2 det[4 5; 11 1] = −2(20 − 5) + 2(4 − 55) = −132
A_x = [2 5 0; 3 1 2; 1 5 2]; det(A_x) = −2 det[2 5; 1 5] + 2 det[2 5; 3 1] = −2(10 − 5) + 2(2 − 15) = −36
A_y = [4 2 0; 11 3 2; 1 1 2]; det(A_y) = −2 det[4 2; 1 1] + 2 det[4 2; 11 3] = −2(4 − 2) + 2(12 − 22) = −24
A_z = [4 5 2; 11 1 3; 1 5 1]; det(A_z) = 4 det[1 3; 5 1] − 11 det[5 2; 5 1] + det[5 2; 1 3] = 4(1 − 15) − 11(5 − 10) + (15 − 2) = 12
x = det(A_x)/det(A) = −36/(−132) = 3/11
y = det(A_y)/det(A) = −24/(−132) = 2/11
z = det(A_z)/det(A) = 12/(−132) = −1/11
The solution is x = 3/11, y = 2/11, z = −1/11.

27. A = [1 −3 1; 2 −1 0; 4 0 −3]
det(A) = det[2 −1; 4 0] + (−3) det[1 −3; 2 −1] = (0 + 4) − 3(−1 + 6) = −11
A_1 = [4 −3 1; −2 −1 0; 0 0 −3]; det(A_1) = −3 det[4 −3; −2 −1] = −3(−4 − 6) = 30
A_2 = [1 4 1; 2 −2 0; 4 0 −3]; det(A_2) = det[2 −2; 4 0] − 3 det[1 4; 2 −2] = (0 + 8) − 3(−2 − 8) = 38
A_3 = [1 −3 4; 2 −1 −2; 4 0 0]; det(A_3) = 4 det[−3 4; −1 −2] = 4(6 + 4) = 40
x_1 = det(A_1)/det(A) = 30/(−11) = −30/11
x_2 = det(A_2)/det(A) = 38/(−11) = −38/11
x_3 = det(A_3)/det(A) = 40/(−11) = −40/11
The solution is x_1 = −30/11, x_2 = −38/11, x_3 = −40/11.

29. A = [3 −1 1; −1 7 −2; 2 6 −1]
Note that the third row of A is the sum of the first and second rows. Thus det(A) = 0 and Cramer's rule does not apply.

31. A = [4 1 1 1; 3 7 −1 1; 7 3 −5 8; 1 1 1 2]
Use row operations and cofactor expansion to simplify the determinant:
det(A) = det[0 −3 −3 −7; 0 4 −4 −5; 0 −4 −12 −6; 1 1 1 2]
= −det[−3 −3 −7; 4 −4 −5; −4 −12 −6]
= −det[−3 −3 −7; 4 −4 −5; 0 −16 −11]
= 3(44 − 80) + 4(33 − 112)
= −424
A_y = [4 6 1 1; 3 1 −1 1; 7 −3 −5 8; 1 3 1 2]
det(A_y) = det[0 −6 −3 7; 0 −8 −4 −5; 0 −24 −12 −6; 1 3 1 2]
= −det[−6 −3 7; −8 −4 −5; −24 −12 −6]
= −det[−6 −3 7; −8 −4 −5; 0 0 −34]
= −(−34) det[−6 −3; −8 −4]
= −(−34)(24 − 24)
= 0
y = det(A_y)/det(A) = 0/(−424) = 0

33. If all the entries in A are integers, then all the cofactors of A are integers, so all the entries in adj(A) are integers. Since det(A) = 1 and A^(−1) = (1/det(A)) adj(A) = adj(A), all the entries in A^(−1) are integers.

35. (a) det(3A) = 3³ det(A) = 27(−7) = −189
(b) det(A^(−1)) = 1/det(A) = −1/7
(c) det(2A^(−1)) = 2³ det(A^(−1)) = 8(−1/7) = −8/7
(d) det((2A)^(−1)) = 1/det(2A) = 1/(2³ det(A)) = 1/(8(−7)) = −1/56
(e) det[a g d; b h e; c i f] = det[a b c; g h i; d e f] = −det[a b c; d e f; g h i] = −det(A) = 7

37. (a) det(3A) = 3³ det(A) = 27(7) = 189
(b) det(A^(−1)) = 1/det(A) = 1/7
(c) det(2A^(−1)) = 2³ det(A^(−1)) = 8(1/7) = 8/7
(d) det((2A)^(−1)) = 1/det(2A) = 1/(2³ det(A)) = 1/(8(7)) = 1/56

True/False 2.3

(a) False; det(2A) = 2³ det(A) if A is a 3 × 3 matrix.

(b) False; consider A = [1 0; 0 1] and B = [−1 0; 0 −1]. Then det(A) = det(B) = 1, but det(A + B) = 0 ≠ det(2A) = 4.

(c) True; since A is invertible, det(A^(−1)) = 1/det(A) and det(A^(−1)BA) = (1/det(A))·det(B)·det(A) = det(B).

(d) False; a square matrix A is invertible if and only if det(A) ≠ 0.

(e) True; since adj(A) is the transpose of the matrix of cofactors of A, and (B^T)^T = B for any matrix B.

(f) True; the ij entry of A·adj(A) is a_i1 C_j1 + a_i2 C_j2 + ⋯ + a_in C_jn. If i ≠ j then this sum is 0, and when i = j then this sum is det(A). Thus A·adj(A) is a diagonal matrix with diagonal entries all equal to det(A), i.e., (det(A)) I_n.

(g) True; if det(A) were not 0, the system Ax = 0 would have exactly one solution (the trivial solution).

(h) True; if the reduced row echelon form of A were I_n, then there could be no such matrix b.

(i) True; since E is invertible.

(j) True; if A is invertible, then so is A^(−1), and so both Ax = 0 and A^(−1)x = 0 have only the trivial solution.

(k) True; if A is invertible, then det(A) ≠ 0 and A^(−1) is invertible, so det(A)·A^(−1) = adj(A) is invertible.

(l) False; if the kth row of A is a row of zeros, then every cofactor Cij = 0 for i ≠ k, hence the cofactor matrix has a row of zeros and the corresponding column (not row) of adj(A) is a column of zeros.

**Chapter 2 Supplementary Exercises
**

1. (a)

3 3

−4 2

=−

3 3

−4 2

1 1

= −3

−4 2

1 1

= −3

0 6

= −3(1)(6)

= −18

−4 2

= −4(3) − 2(3) = −18

3 3

56

SSM: Elementary Linear Algebra

Chapter 2 Supplementary Exercises
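The identity behind True/False item (f), A · adj(A) = det(A)·I, can be checked directly. A minimal sketch for the 3 × 3 case, using the matrix from Supplementary Exercise 3 as an example:

```python
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cof(m, r, c):
    # cofactor C_rc: signed determinant of the 2x2 minor of entry (r, c)
    rows = [m[i] for i in range(3) if i != r]
    minor = [[row[j] for j in range(3) if j != c] for row in rows]
    sign = -1 if (r + c) % 2 else 1
    return sign * (minor[0][0] * minor[1][1] - minor[0][1] * minor[1][0])

def adj(m):
    # adjugate: transpose of the matrix of cofactors
    return [[cof(m, c, r) for c in range(3)] for r in range(3)]

A = [[-1, 5, 2], [0, 2, -1], [-3, 1, 1]]
B = adj(A)
prod = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
d = det3(A)
assert d == 24
assert prod == [[d, 0, 0], [0, d, 0], [0, 0, d]]  # A . adj(A) = det(A) I
```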

7. (a) Expanding by cofactors of the first row:
|3 6 0 1; −2 3 1 4; 1 0 −1 1; −9 2 −2 2|
= 3|3 1 4; 0 −1 1; 2 −2 2| − 6|−2 1 4; 1 −1 1; −9 −2 2| − |−2 3 1; 1 0 −1; −9 2 −2|
= 3[3(−2 + 2) + 2(1 + 4)] − 6[−2(−2 + 2) − (2 + 9) + 4(−2 − 9)] − [−(−6 − 2) + (−4 + 27)]
= 30 + 330 − 31
= 329

(b) By row reduction:
|3 6 0 1; −2 3 1 4; 1 0 −1 1; −9 2 −2 2| = −|1 0 −1 1; −2 3 1 4; 3 6 0 1; −9 2 −2 2|
= −|1 0 −1 1; 0 3 −1 6; 0 6 3 −2; 0 2 −11 11|
= −|1 0 −1 1; 0 3 −1 6; 0 0 5 −14; 0 0 −31/3 7|
= −|1 0 −1 1; 0 3 −1 6; 0 0 5 −14; 0 0 0 −329/15|
= −(1)(3)(5)(−329/15)
= 329

9. Exercise 3: Recopying the first two columns and using the arrow technique,
|−1 5 2; 0 2 −1; −3 1 1| = [(−1)(2)(1) + (5)(−1)(−3) + (2)(0)(1)] − [(2)(2)(−3) + (−1)(−1)(1) + (5)(0)(1)]
= 13 − (−11)
= 24

Exercise 4:
|−1 −2 −3; −4 −5 −6; −7 −8 −9| = [(−1)(−5)(−9) + (−2)(−6)(−7) + (−3)(−4)(−8)] − [(−3)(−5)(−7) + (−1)(−6)(−8) + (−2)(−4)(−9)]
= −225 − (−225)
= 0


Exercise 5:
|3 0 −1; 1 1 1; 0 4 2| = [(3)(1)(2) + (0)(1)(0) + (−1)(1)(4)] − [(−1)(1)(0) + (3)(1)(4) + (0)(1)(2)]
= 2 − 12
= −10

Exercise 6:
|−5 1 4; 3 0 2; 1 −2 2| = [(−5)(0)(2) + (1)(2)(1) + (4)(3)(−2)] − [(4)(0)(1) + (−5)(2)(−2) + (1)(3)(2)]
= −22 − 26
= −48
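The arrow technique used in Exercise 9 translates directly into code. A minimal sketch, assuming the standard statement of the rule for 3 × 3 matrices (sum of downward-diagonal products minus sum of upward-diagonal products):

```python
def sarrus(m):
    # three "downward" products minus three "upward" products
    (a, b, c), (d, e, f), (g, h, i) = m
    return (a * e * i + b * f * g + c * d * h) - (c * e * g + a * f * h + b * d * i)

assert sarrus([[-1, 5, 2], [0, 2, -1], [-3, 1, 1]]) == 24    # Exercise 3
assert sarrus([[3, 0, -1], [1, 1, 1], [0, 4, 2]]) == -10     # Exercise 5
assert sarrus([[-5, 1, 4], [3, 0, 2], [1, -2, 2]]) == -48    # Exercise 6
```

Note that this shortcut works only for 3 × 3 matrices; for larger sizes one must fall back on cofactor expansion or row reduction, as in Exercise 7.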

11. The determinants for Exercises 1, 3, and 4 were evaluated in Exercises 1, 3, and 9.
Exercise 2:
|7 −1; −2 −6| = 7(−6) − (−1)(−2) = −44
The determinants of the matrices in Exercises 1−3 are nonzero, so those matrices are invertible. The matrix in Exercise 4 is not invertible, since its determinant is zero.

13. |5 b − 3; b − 2 −3| = 5(−3) − (b − 3)(b − 2) = −15 − (b^2 − 5b + 6) = −b^2 + 5b − 21

15. |0 0 0 0 −3; 0 0 0 −4 0; 0 0 −1 0 0; 0 2 0 0 0; 5 0 0 0 0| = (5)(2)(−1)(−4)(−3) = −120

17. Let A = [−4 2; 3 3]. Then det(A) = −18. The cofactors of A are C11 = 3, C12 = −3, C21 = −2, and C22 = −4. The matrix of cofactors is [3 −3; −2 −4].
A^(-1) = (1/det(A)) adj(A) = (−1/18)[3 −2; −3 −4] = [−1/6 1/9; 1/6 2/9]
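For a 2 × 2 matrix the adjugate formula used in Exercise 17 reduces to swapping the diagonal entries and negating the off-diagonal ones. A minimal sketch with exact arithmetic (the function name `inv2` is our own, not from the text):

```python
from fractions import Fraction

def inv2(m):
    # A^(-1) = (1/det A) adj(A) for a 2x2 matrix
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

A = [[-4, 2], [3, 3]]
assert inv2(A) == [[Fraction(-1, 6), Fraction(1, 9)],
                   [Fraction(1, 6), Fraction(2, 9)]]
```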

19. Let A = [−1 5 2; 0 2 −1; −3 1 1]. Then det(A) = 24.
The cofactors of A are C11 = 2 + 1 = 3, C12 = −(0 − 3) = 3, C13 = 0 + 6 = 6, C21 = −(5 − 2) = −3, C22 = −1 + 6 = 5, C23 = −(−1 + 15) = −14, C31 = −5 − 4 = −9, C32 = −(1 − 0) = −1, and C33 = −2 − 0 = −2.
The matrix of cofactors is [3 3 6; −3 5 −14; −9 −1 −2].
A^(-1) = (1/det(A)) adj(A) = (1/24)[3 −3 −9; 3 5 −1; 6 −14 −2] = [1/8 −1/8 −3/8; 1/8 5/24 −1/24; 1/4 −7/12 −1/12]

23. Let A = [3 6 0 1; −2 3 1 4; 1 0 −1 1; −9 2 −2 2]. Then det(A) = 329. The cofactors of A are C11 = 10, C12 = 55, C13 = −21, C14 = −31, C21 = −2, C22 = −11, C23 = 70, C24 = 72, C31 = 52, C32 = −43, C33 = −175, C34 = 102, C41 = −27, C42 = 16, C43 = −42, and C44 = −15.
The matrix of cofactors is [10 55 −21 −31; −2 −11 70 72; 52 −43 −175 102; −27 16 −42 −15].
A^(-1) = (1/det(A)) adj(A) = (1/329)[10 −2 52 −27; 55 −11 −43 16; −21 70 −175 −42; −31 72 102 −15]
= [10/329 −2/329 52/329 −27/329; 55/329 −11/329 −43/329 16/329; −3/47 10/47 −25/47 −6/47; −31/329 72/329 102/329 −15/329]

25. In matrix form, the system is
[3/5 −4/5; 4/5 3/5][x′; y′] = [x; y].
A = [3/5 −4/5; 4/5 3/5]
det(A) = 9/25 + 16/25 = 1
A_x′ = [x −4/5; y 3/5], so det(A_x′) = (3/5)x + (4/5)y.
A_y′ = [3/5 x; 4/5 y], so det(A_y′) = (3/5)y − (4/5)x = −(4/5)x + (3/5)y.
The solution is x′ = (3/5)x + (4/5)y, y′ = −(4/5)x + (3/5)y.

27. A = [1 1 α; 1 1 β; α β 1]
det(A) = |1 1 α; 0 0 β − α; 0 β − α 1 − α^2| = |0 β − α; β − α 1 − α^2| = 0 − (β − α)^2 = −(β − α)^2
det(A) = 0 if and only if α = β, and the system has a nontrivial solution if and only if det(A) = 0.

29. (a) Draw the perpendicular from the vertex of angle γ to side c as shown. [figure: a triangle with sides a, b, c in which the perpendicular splits side c into segments c1 and c2]
Then cos α = c1/b and cos β = c2/a, so c = c1 + c2 = a cos β + b cos α. A similar construction gives the other two equations in the system. The matrix form of the system is
[0 c b; c 0 a; b a 0][cos α; cos β; cos γ] = [a; b; c].


(b) Using Cramer's rule:
det(A) = |0 c b; c 0 a; b a 0| = −c|c a; b 0| + b|c 0; b a| = −c(0 − ab) + b(ac − 0) = 2abc
A_α = [a c b; b 0 a; c a 0]
det(A_α) = −c|b a; c 0| − a|a b; b a| = −c(0 − ac) − a(a^2 − b^2) = a(c^2 + b^2 − a^2)
cos α = det(A_α)/det(A) = a(c^2 + b^2 − a^2)/(2abc) = (c^2 + b^2 − a^2)/(2bc)
A_β = [0 a b; c b a; b c 0]
det(A_β) = −a|c a; b 0| + b|c b; b c| = −a(0 − ab) + b(c^2 − b^2) = b(a^2 + c^2 − b^2)
cos β = det(A_β)/det(A) = b(a^2 + c^2 − b^2)/(2abc) = (a^2 + c^2 − b^2)/(2ac)
A_γ = [0 c a; c 0 b; b a c]
det(A_γ) = −c|c b; b c| + a|c 0; b a| = −c(c^2 − b^2) + a(ac − 0) = c(a^2 + b^2 − c^2)
cos γ = det(A_γ)/det(A) = c(a^2 + b^2 − c^2)/(2abc) = (a^2 + b^2 − c^2)/(2ab)

31. If A is invertible then A^(-1) is also invertible and det(A) ≠ 0, so det(A)A^(-1) is invertible. Thus, adj(A) = det(A)A^(-1) is invertible and
(adj(A))^(-1) = (det(A)A^(-1))^(-1) = (1/det(A))(A^(-1))^(-1) = (1/det(A))A = adj(A^(-1))
since adj(A^(-1)) = det(A^(-1))(A^(-1))^(-1) = (1/det(A))A.

33. Let A be an n × n matrix for which the sum of the entries in each row is zero and let x1 be the n × 1 column matrix whose entries are all one. Then Ax1 = 0 because each entry in the product is the sum of the entries in a row of A. That is, Ax = 0 has a nontrivial solution, so det(A) = 0.

35. Add 10,000 times the first column, 1000 times the second column, 100 times the third column, and 10 times the fourth column to the fifth column. The entries in the fifth column are now 21,375, 38,798, 34,162, 40,223, and 79,154, respectively. Evaluating the determinant by expanding by cofactors of the fifth column gives
21,375C15 + 38,798C25 + 34,162C35 + 40,223C45 + 79,154C55.
Since each coefficient of Ci5 is divisible by 19, this sum must also be divisible by 19.
Note that the column operations do not change the value of the determinant; consider them as row operations on the transpose.
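The law-of-cosines formulas obtained in Exercise 29 can be checked on a concrete triangle. A minimal sketch (the helper `angles` is ours, not from the text; the 3-4-5 triangle is an arbitrary example):

```python
import math

def angles(a, b, c):
    # cos(alpha) = (b^2 + c^2 - a^2)/(2bc), and cyclically, per Exercise 29
    alpha = math.acos((b * b + c * c - a * a) / (2 * b * c))
    beta = math.acos((a * a + c * c - b * b) / (2 * a * c))
    gamma = math.acos((a * a + b * b - c * c) / (2 * a * b))
    return alpha, beta, gamma

al, be, ga = angles(3, 4, 5)
assert abs(al + be + ga - math.pi) < 1e-12   # angles of a triangle sum to pi
assert abs(ga - math.pi / 2) < 1e-12         # 3-4-5 is a right triangle
```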

Chapter 3

Euclidean Vector Spaces

Section 3.1

Exercise Set 3.1

1. (a)–(f) [sketches: the points (3, 4, 5), (−3, 4, 5), (3, −4, 5), (3, 4, −5), (−3, −4, 5), and (−3, 4, −5) plotted in an xyz-coordinate system]

3. (a)–(f) [sketches of the vectors in xy- and xyz-coordinate systems]


5. (a) P1P2→ = (3 − 4, 7 − 8) = (−1, −1) [sketch]

(b) P1P2→ = (−4 − 3, −7 − (−5)) = (−7, −2) [sketch]

(c) P1P2→ = (−2 − 3, 5 − (−7), −4 − 2) = (−5, 12, −6) [sketch]

7. (a) P1P2→ = (2 − 3, 8 − 5) = (−1, 3)

(b) P1P2→ = (2 − 5, 4 − (−2), 2 − 1) = (−3, 6, 1)

9. (a) Let the terminal point be B(b1, b2). Then (b1 − 1, b2 − 1) = (1, 2), so b1 = 2 and b2 = 3. The terminal point is B(2, 3).

(b) Let the initial point be A(a1, a2, a3). Then (−1 − a1, −1 − a2, 2 − a3) = (1, 1, 3), so a1 = −2, a2 = −2, and a3 = −1. The initial point is A(−2, −2, −1).

11. (a) One possibility is u = v; then the initial point (x, y, z) satisfies (3 − x, 0 − y, −5 − z) = (4, −2, −1), so that x = −1, y = 2, and z = −4. One possibility is (−1, 2, −4).

(b) One possibility is u = −v; then the initial point (x, y, z) satisfies (3 − x, 0 − y, −5 − z) = (−4, 2, 1), so that x = 7, y = −2, and z = −6. One possibility is (7, −2, −6).

13. (a) u + w = (4, −1) + (−3, −3) = (1, −4)

(b) v − 3u = (0, 5) − (12, −3) = (−12, 8)

(c) 2(u − 5w) = 2((4, −1) − (−15, −15)) = 2(19, 14) = (38, 28)

(d) 3v − 2(u + 2w) = (0, 15) − 2((4, −1) + (−6, −6)) = (0, 15) − 2(−2, −7) = (0, 15) + (4, 14) = (4, 29)

(e) −3(w − 2u + v) = −3((−3, −3) − (8, −2) + (0, 5)) = −3(−11, 4) = (33, −12)

(f) (−2u − v) − 5(v + 3w) = ((−8, 2) − (0, 5)) − 5((0, 5) + (−9, −9)) = (−8, −3) − 5(−9, −4) = (−8, −3) + (45, 20) = (37, 17)

15. (a) v − w = (4, 7, −3, 2) − (5, −2, 8, 1) = (−1, 9, −11, 1)

(b) 2u + 7v = (−6, 4, 2, 0) + (28, 49, −21, 14) = (22, 53, −19, 14)
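The componentwise arithmetic in Exercise 13 is mechanical and easy to verify in code. A minimal sketch (the helpers `add` and `scale` are ours, not from the text):

```python
u, v, w = (4, -1), (0, 5), (-3, -3)

def add(x, y):
    return tuple(a + b for a, b in zip(x, y))

def scale(k, x):
    return tuple(k * a for a in x)

# part (c): 2(u - 5w) = (38, 28)
assert add(scale(2, u), scale(-10, w)) == (38, 28)
# part (d): 3v - 2(u + 2w) = (4, 29)
assert add(scale(3, v), scale(-2, add(u, scale(2, w)))) == (4, 29)
```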


(c) −u + (v − 4w) = (3, −2, −1, 0) + ((4, 7, −3, 2) − (20, −8, 32, 4))

= (3, −2, −1, 0) + (−16, 15, −35, −2)

= (−13, 13, −36, −2)

(d) 6(u − 3v) = 6((−3, 2, 1, 0) − (12, 21, − 9, 6))

= 6(−15, − 19, 10, − 6)

= (−90, − 114, 60, − 36)

(e) − v − w = (−4, − 7, 3, − 2) − (5, − 2, 8, 1)

= (−9, − 5, − 5, − 3)

(f) (6v − w) − (4u + v) = ((24, 42, −18, 12) − (5, −2, 8, 1)) − ((−12, 8, 4, 0) + (4, 7, −3, 2))

= (19, 44, −26, 11) − (−8, 15, 1, 2)

= (27, 29, −27, 9)

17. (a) w − u = (−4, 2, − 3, − 5, 2) − (5, − 1, 0, 3, − 3)

= (−9, 3, − 3, − 8, 5)

(b) 2 v + 3u = (−2, − 2, 14, 4, 0) + (15, − 3, 0, 9, − 9)

= (13, − 5, 14, 13, − 9)

(c) −w + 3(v − u) = (4, −2, 3, 5, −2) + 3((−1, −1, 7, 2, 0) − (5, −1, 0, 3, −3))

= (4, −2, 3, 5, −2) + 3(−6, 0, 7, −1, 3)

= (4, −2, 3, 5, −2) + (−18, 0, 21, −3, 9)

= (−14, −2, 24, 2, 7)

(d) 5(−v + 4u − w) = 5((1, 1, −7, −2, 0) + (20, −4, 0, 12, −12) − (−4, 2, −3, −5, 2))

= 5(25, −5, −4, 15, −14)

= (125, −25, −20, 75, −70)

(e) −2(3w + v) + (2u + w) = −2((−12, 6, − 9, − 15, 6) + (−1, − 1, 7, 2, 0)) + ((10, − 2, 0, 6, − 6) + (−4, 2, − 3, − 5, 2))

= −2(−13, 5, − 2, − 13, 6) + (6, 0, − 3, 1, − 4)

= (26, − 10, 4, 26, − 12) + (6, 0, − 3, 1, − 4)

= (32, − 10, 1, 27, − 16)

(f) (1/2)(w − 5v + 2u) + v = (1/2)((−4, 2, −3, −5, 2) − (−5, −5, 35, 10, 0) + (10, −2, 0, 6, −6)) + (−1, −1, 7, 2, 0)
= (1/2)(11, 5, −38, −9, −4) + (−1, −1, 7, 2, 0)
= (11/2, 5/2, −19, −9/2, −2) + (−1, −1, 7, 2, 0)
= (9/2, 3/2, −12, −5/2, −2)

19. (a) v − w = (4, 0, − 8, 1, 2) − (6, − 1, − 4, 3, − 5)

= (−2, 1, − 4, − 2, 7)

(b) 6u + 2 v = (−18, 6, 12, 24, 24) + (8, 0, − 16, 2, 4)

= (−10, 6, − 4, 26, 28)


(c) (2u − 7w ) − (8v + u) = ((−6, 2, 4, 8, 8) − (42, − 7, − 28, 21, − 35)) − ((32, 0, − 64, 8, 16) + (−3, 1, 2, 4, 4))

= (−48, 9, 32, − 13, 43) − (29, 1, − 62, 12, 20)

= (−77, 8, 94, − 25, 23)

21. Let x = ( x1, x2 , x3 , x4 , x5 ).

2u − v + x = (−6, 2, 4, 8, 8) − (4, 0, − 8, 1, 2) + ( x1 , x2 , x3 , x4 , x5 )

= (−10 + x1, 2 + x2 , 12 + x3 , 7 + x4 , 6 + x5 )

7x + w = (7 x1, 7 x2 , 7 x3 , 7 x4 , 7 x5 ) + (6, − 1, − 4, 3, − 5)

= (7 x1 + 6, 7 x2 − 1, 7 x3 − 4, 7 x4 + 3, 7 x5 − 5)

Equate components.
−10 + x1 = 7x1 + 6 ⇒ x1 = −8/3
2 + x2 = 7x2 − 1 ⇒ x2 = 1/2
12 + x3 = 7x3 − 4 ⇒ x3 = 8/3
7 + x4 = 7x4 + 3 ⇒ x4 = 2/3
6 + x5 = 7x5 − 5 ⇒ x5 = 11/6
x = (−8/3, 1/2, 8/3, 2/3, 11/6)
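Each equation above has the form 2u_i − v_i + x_i = 7x_i + w_i, which solves to x_i = (2u_i − v_i − w_i)/6, so the whole of Exercise 21 can be checked at once. A minimal sketch with exact arithmetic:

```python
from fractions import Fraction

u = (-3, 1, 2, 4, 4)
v = (4, 0, -8, 1, 2)
w = (6, -1, -4, 3, -5)

# componentwise: 2u_i - v_i + x_i = 7x_i + w_i  =>  x_i = (2u_i - v_i - w_i)/6
x = tuple(Fraction(2 * ui - vi - wi, 6) for ui, vi, wi in zip(u, v, w))
assert x == (Fraction(-8, 3), Fraction(1, 2), Fraction(8, 3),
             Fraction(2, 3), Fraction(11, 6))
```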

23. (a) There is no scalar k such that ku is the given vector, so the given vector is not parallel to u.

(b) −2u = (4, −2, 0, −6, −10, −2)

The given vector is parallel to u.

(c) The given vector is 0, which is parallel to all vectors.

25. au + bv = (a, − a, 3a, 5a) + (2b, b, 0, − 3b)

= (a + 2b, − a + b, 3a, 5a − 3b)

= (1, − 4, 9, 18)

Equating the third components gives 3a = 9 or a = 3. Equating any other components gives b = −1. The scalars are a = 3, b = −1.

27. Equating components gives the following system of equations.
c1 + 3c2 = −1
−c1 + 2c2 + c3 = 1
c2 + 4c3 = 19
The reduced row echelon form of [1 3 0 −1; −1 2 1 1; 0 1 4 19] is [1 0 0 2; 0 1 0 −1; 0 0 1 5].
The scalars are c1 = 2, c2 = −1, c3 = 5.


29. Equating components gives the following system of equations.
−c1 + 2c2 + 7c3 + 6c4 = 0
3c1 + c3 + 3c4 = 5
2c1 + 4c2 + c3 + c4 = 6
−c2 + 4c3 + 2c4 = −3
The reduced row echelon form of [−1 2 7 6 0; 3 0 1 3 5; 2 4 1 1 6; 0 −1 4 2 −3] is [1 0 0 0 1; 0 1 0 0 1; 0 0 1 0 −1; 0 0 0 1 1].
The scalars are c1 = 1, c2 = 1, c3 = −1, c4 = 1.

31. Equating components gives the following system of equations.
−2c1 − 3c2 + c3 = 0
9c1 + 2c2 + 7c3 = 5
6c1 + c2 + 5c3 = 4
The reduced row echelon form of [−2 −3 1 0; 9 2 7 5; 6 1 5 4] is [1 0 1 0; 0 1 −1 0; 0 0 0 1].
From the bottom row, the system has no solution, so no such scalars exist.

33. (a) (1/2)PQ→ = (1/2)(7 − 2, −4 − 3, 1 − (−2)) = (1/2)(5, −7, 3) = (5/2, −7/2, 3/2)
Locating (1/2)PQ→ with its initial point at P gives the midpoint.
(2 + 5/2, 3 + (−7/2), −2 + 3/2) = (9/2, −1/2, −1/2)

(b) (3/4)PQ→ = (3/4)(5, −7, 3) = (15/4, −21/4, 9/4)
Locating (3/4)PQ→ with its initial point at P gives the desired point.
(2 + 15/4, 3 + (−21/4), −2 + 9/4) = (23/4, −9/4, 1/4)

True/False 3.1

(a) False; vector equivalence is determined solely by length and direction, not location.

(b) False; (a, b) is a vector in 2-space, while (a, b, 0) is a vector in 3-space. Alternatively, equivalent vectors must have the same number of components.

(c) False; v and kv are parallel for all values of k.

(d) True; apply Theorem 3.1.1 parts (a), (b), and (a) again: v + (u + w) = v + (w + u) = (v + w) + u = (w + v) + u.

(e) True; the vector −u can be added to both sides of the equation.

(f) False; a and b must be nonzero scalars for the statement to be true.

(g) False; the vectors v and −v have the same length and can be placed to be collinear, but are not equal.

(h) True; vector addition is defined component-wise.

(i) False; (k + m)(u + v) = (k + m)u + (k + m)v.

(j) True; x = (1/8)(5v + 4w).

(k) False; for example, if v2 = −v1 then a1v1 + a2v2 = b1v1 + b2v2 as long as a1 − a2 = b1 − b2.
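The scalars found in Exercise 29 can be confirmed by substituting back into the linear combination. A minimal sketch; the vectors v1 through v4 and the target are read off the columns and right-hand side of the system above:

```python
v1 = (-1, 3, 2, 0)
v2 = (2, 0, 4, -1)
v3 = (7, 1, 1, 4)
v4 = (6, 3, 1, 2)
target = (0, 5, 6, -3)
c = (1, 1, -1, 1)

# c1*v1 + c2*v2 + c3*v3 + c4*v4 should reproduce the target vector
combo = tuple(sum(ci * vi[k] for ci, vi in zip(c, (v1, v2, v3, v4)))
              for k in range(4))
assert combo == target
```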

Section 3.2

Exercise Set 3.2

1. (a) ||v|| = √(4^2 + (−3)^2) = √25 = 5
u1 = v/||v|| = (1/5)(4, −3) = (4/5, −3/5) has the same direction as v.
u2 = −v/||v|| = −(1/5)(4, −3) = (−4/5, 3/5) has the direction opposite v.

(b) ||v|| = √(2^2 + 2^2 + 2^2) = √12 = 2√3
u1 = v/||v|| = (1/(2√3))(2, 2, 2) = (1/√3, 1/√3, 1/√3) has the same direction as v.
u2 = −v/||v|| = −(1/(2√3))(2, 2, 2) = (−1/√3, −1/√3, −1/√3) has the direction opposite v.

(c) ||v|| = √(1^2 + 0^2 + 2^2 + 1^2 + 3^2) = √15
u1 = v/||v|| = (1/√15)(1, 0, 2, 1, 3) = (1/√15, 0, 2/√15, 1/√15, 3/√15) has the same direction as v.
u2 = −v/||v|| = −(1/√15)(1, 0, 2, 1, 3) = (−1/√15, 0, −2/√15, −1/√15, −3/√15) has the direction opposite v.
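The normalization step in Exercise 1 is the same in any dimension. A minimal sketch (the helper `unit` is ours, not from the text):

```python
import math

def unit(v):
    # v / ||v||: the unit vector with the same direction as v
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

u = unit((2, 2, 2))
assert all(abs(x - 1 / math.sqrt(3)) < 1e-12 for x in u)
# any normalized vector has norm 1
assert abs(sum(x * x for x in u) - 1) < 1e-12
```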

3. (a) ||u + v|| = ||(2, −2, 3) + (1, −3, 4)|| = ||(3, −5, 7)|| = √(3^2 + (−5)^2 + 7^2) = √83

(b) ||u|| + ||v|| = ||(2, −2, 3)|| + ||(1, −3, 4)|| = √(2^2 + (−2)^2 + 3^2) + √(1^2 + (−3)^2 + 4^2) = √17 + √26

(c) ||−2u + 2v|| = ||(−4, 4, −6) + (2, −6, 8)|| = ||(−2, −2, 2)|| = √((−2)^2 + (−2)^2 + 2^2) = √12 = 2√3

(d) ||3u − 5v + w|| = ||(6, −6, 9) − (5, −15, 20) + (3, 6, −4)|| = ||(4, 15, −15)|| = √(4^2 + 15^2 + (−15)^2) = √466

5. (a) ||3u − 5v + w|| = ||(−6, −3, 12, 15) − (15, 5, −25, 35) + (−6, 2, 1, 1)|| = ||(−27, −6, 38, −19)|| = √((−27)^2 + (−6)^2 + 38^2 + (−19)^2) = √2570

(b) ||3u|| − 5||v|| + ||w|| = ||(−6, −3, 12, 15)|| − 5||(3, 1, −5, 7)|| + ||(−6, 2, 1, 1)||
= √((−6)^2 + (−3)^2 + 12^2 + 15^2) − 5√(3^2 + 1^2 + (−5)^2 + 7^2) + √((−6)^2 + 2^2 + 1^2 + 1^2)
= √414 − 5√84 + √42
= 3√46 − 10√21 + √42

(c) ||−||u|| v|| = ||−√((−2)^2 + (−1)^2 + 4^2 + 5^2) (3, 1, −5, 7)|| = ||−√46 (3, 1, −5, 7)|| = √46 √(3^2 + 1^2 + (−5)^2 + 7^2) = √46 √84 = 2√966

7. ||kv|| = ||k(−2, 3, 0, 6)|| = |k| √((−2)^2 + 3^2 + 0 + 6^2) = |k| √49 = 7|k|
7|k| = 5 if |k| = 5/7, so k = 5/7 or k = −5/7.

9. (a) u · v = (3)(2) + (1)(2) + (4)(−4) = −8
u · u = ||u||^2 = 3^2 + 1^2 + 4^2 = 26
v · v = ||v||^2 = 2^2 + 2^2 + (−4)^2 = 24

(b) u · v = (1)(2) + (1)(−2) + (4)(3) + (6)(−2) = 0
u · u = ||u||^2 = 1^2 + 1^2 + 4^2 + 6^2 = 54
v · v = ||v||^2 = 2^2 + (−2)^2 + 3^2 + (−2)^2 = 21

11. (a) d(u, v) = ||u − v|| = √((3 − 1)^2 + (3 − 0)^2 + (3 − 4)^2) = √(2^2 + 3^2 + (−1)^2) = √14

(b) d(u, v) = ||u − v|| = √((0 − (−3))^2 + (−2 − 2)^2 + (−1 − 4)^2 + (1 − 4)^2) = √(3^2 + (−4)^2 + (−5)^2 + (−3)^2) = √59

(c) d(u, v) = ||u − v|| = √((3 − (−4))^2 + (−3 − 1)^2 + (−2 − (−1))^2 + (0 − 5)^2 + (−3 − 0)^2 + (13 − (−11))^2 + (5 − 4)^2) = √(7^2 + (−4)^2 + (−1)^2 + (−5)^2 + (−3)^2 + 24^2 + 1^2) = √677

13. (a) cos θ = (u · v)/(||u|| ||v||) = ((3)(1) + (3)(0) + (3)(4))/(√(3^2 + 3^2 + 3^2) √(1^2 + 0^2 + 4^2)) = 15/(√27 √17)
Since cos θ is positive, θ is acute.

(b) cos θ = (u · v)/(||u|| ||v||) = ((0)(−3) + (−2)(2) + (−1)(4) + (1)(4))/(√(0^2 + (−2)^2 + (−1)^2 + 1^2) √((−3)^2 + 2^2 + 4^2 + 4^2)) = −4/(√6 √45)
Since cos θ is negative, θ is obtuse.

(c) cos θ = (u · v)/(||u|| ||v||) = ((3)(−4) + (−3)(1) + (−2)(−1) + (0)(5) + (−3)(0) + (13)(−11) + (5)(4))/(√(3^2 + (−3)^2 + (−2)^2 + 0^2 + (−3)^2 + 13^2 + 5^2) √((−4)^2 + 1^2 + (−1)^2 + 5^2 + 0^2 + (−11)^2 + 4^2)) = −136/(√225 √180)
Since cos θ is negative, θ is obtuse.
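The sign test used in Exercise 13 (cos θ > 0 means acute, cos θ < 0 means obtuse) is straightforward to automate. A minimal sketch (the helper `cos_angle` is ours, not from the text):

```python
import math

def cos_angle(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# part (a): cos(theta) = 15/(sqrt(27) sqrt(17)) > 0, so theta is acute
c = cos_angle((3, 3, 3), (1, 0, 4))
assert abs(c - 15 / math.sqrt(27 * 17)) < 1e-12
assert c > 0
```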

15. The angle θ between the vectors is 30°.
a · b = ||a|| ||b|| cos θ = 9 · 5 cos 30° = 45(√3/2) = (45√3)/2

17. (a) u · (v · w) does not make sense because v · w is a scalar.

(b) u · (v + w) makes sense.

(c) ||u · v|| does not make sense because u · v is a scalar.

(d) (u · v) − ||u|| makes sense since both u · v and ||u|| are scalars.

19. (a) (−4, −3)/||(−4, −3)|| = (−4, −3)/√((−4)^2 + (−3)^2) = (−4, −3)/√25 = (−4/5, −3/5)

(b) (1, 7)/||(1, 7)|| = (1, 7)/√(1^2 + 7^2) = (1, 7)/√50 = (1/(5√2), 7/(5√2))

(c) (−3, 2, √3)/||(−3, 2, √3)|| = (−3, 2, √3)/√((−3)^2 + 2^2 + (√3)^2) = (−3, 2, √3)/√16 = (−3/4, 1/2, √3/4)

(d) (1, 2, 3, 4, 5)/||(1, 2, 3, 4, 5)|| = (1, 2, 3, 4, 5)/√(1^2 + 2^2 + 3^2 + 4^2 + 5^2) = (1, 2, 3, 4, 5)/√55 = (1/√55, 2/√55, 3/√55, 4/√55, 5/√55)

21. Divide v by ||v|| to get a unit vector that has the same direction as v, then multiply by m: m(v/||v||).

23. (a) cos θ = (u · v)/(||u|| ||v||) = ((2)(5) + (3)(−7))/(√(2^2 + 3^2) √(5^2 + (−7)^2)) = −11/(√13 √74) = −11/√962

(b) cos θ = (u · v)/(||u|| ||v||) = ((−6)(4) + (−2)(0))/(√((−6)^2 + (−2)^2) √(4^2 + 0^2)) = −24/(√40 √16) = −24/(8√10) = −3/√10

(c) cos θ = (u · v)/(||u|| ||v||) = ((1)(3) + (−5)(3) + (4)(3))/(√(1^2 + (−5)^2 + 4^2) √(3^2 + 3^2 + 3^2)) = 0/(√42 √27) = 0

(d) cos θ = (u · v)/(||u|| ||v||) = ((−2)(1) + (2)(7) + (3)(−4))/(√((−2)^2 + 2^2 + 3^2) √(1^2 + 7^2 + (−4)^2)) = 0/(√17 √66) = 0

25. (a) u · v = (3)(4) + (2)(−1) = 10
||u|| ||v|| = √(3^2 + 2^2) √(4^2 + (−1)^2) = √13 √17 ≈ 14.866

(b) |u · v| = |(−3)(2) + (1)(−1) + (0)(3)| = |−7| = 7
||u|| ||v|| = √((−3)^2 + 1^2 + 0^2) √(2^2 + (−1)^2 + 3^2) = √10 √14 ≈ 11.832

(c) u · v = (0)(1) + (2)(1) + (2)(1) + (1)(1) = 5
||u|| ||v|| = √(0^2 + 2^2 + 2^2 + 1^2) √(1^2 + 1^2 + 1^2 + 1^2) = √9 √4 = 6

27. It is a sphere of radius 1 centered at (x0, y0, z0).

True/False 3.2

(a) True; ||2v|| = 2||v||.

(b) True

(c) False; ||0|| = 0.

(d) True; the vectors are v/||v|| and −v/||v||.

(e) True; cos(π/3) = 1/2 = (u · v)/(||u|| ||v||).

(f) False; (u · v) + w does not make sense because u · v is a scalar. u · (v + w) is a meaningful expression.

(g) False; for example, if u = (1, 1), v = (−1, 1), and w = (1, −1), then u · v = u · w = 0.

(h) False; let u = (1, 1) and v = (−1, 1); then u · v = (1)(−1) + (1)(1) = 0. u · v = 0 indicates that the angle between u and v is 90°.

(i) True; if u = (u1, u2) and v = (v1, v2) then u1, u2 > 0 and v1, v2 < 0, so u1v1, u2v2 < 0, hence u1v1 + u2v2 < 0.

(j) True; use the triangle inequality twice.
||u + v + w|| = ||(u + v) + w|| ≤ ||u + v|| + ||w|| ≤ ||u|| + ||v|| + ||w||

Section 3.3

Exercise Set 3.3

1. (a) u · v = (6)(2) + (1)(0) + (4)(−3) = 0, so u and v are orthogonal.

(b) u · v = (0)(1) + (0)(1) + (−1)(1) = −1 ≠ 0, so u and v are not orthogonal.

(c) u · v = (−6)(3) + (0)(1) + (4)(6) = 6 ≠ 0, so u and v are not orthogonal.

(d) u · v = (2)(5) + (4)(3) + (−8)(7) = −34 ≠ 0, so u and v are not orthogonal.

3. (a) v1 · v2 = (2)(3) + (3)(2) = 12 ≠ 0
The vectors do not form an orthogonal set.

(b) v1 · v2 = (−1)(1) + (1)(1) = 0
The vectors form an orthogonal set.

(c) v1 · v2 = (−2)(1) + (1)(0) + (1)(2) = 0
v1 · v3 = (−2)(−2) + (1)(−5) + (1)(1) = 0
v2 · v3 = (1)(−2) + (0)(−5) + (2)(1) = 0
The vectors form an orthogonal set.

(d) v1 · v2 = (−3)(1) + (4)(2) + (−1)(5) = 0
v1 · v3 = (−3)(4) + (4)(−3) + (−1)(0) = −24
Since v1 · v3 ≠ 0, the vectors do not form an orthogonal set.

5. Let x = (x1, x2, x3).
u · x = (1)(x1) + (0)(x2) + (1)(x3) = x1 + x3
v · x = (0)(x1) + (1)(x2) + (1)(x3) = x2 + x3
Thus x1 = x2 = −x3. One vector orthogonal to both u and v is x = (1, 1, −1).
x/||x|| = (1, 1, −1)/√(1^2 + 1^2 + (−1)^2) = (1, 1, −1)/√3 = (1/√3, 1/√3, −1/√3)
The possible vectors are ±(1/√3, 1/√3, −1/√3).


7. AB→ = (−2 − 1, 0 − 1, 3 − 1) = (−3, −1, 2)
BC→ = (−3 − (−2), −1 − 0, 1 − 3) = (−1, −1, −2)
CA→ = (1 − (−3), 1 − (−1), 1 − 1) = (4, 2, 0)
AB→ · BC→ = (−3)(−1) + (−1)(−1) + (2)(−2) = 0
Since AB→ · BC→ = 0, AB→ and BC→ are orthogonal and the points form the vertices of a right triangle.

9. −2(x − (−1)) + 1(y − 3) + (−1)(z − (−2)) = 0
−2(x + 1) + (y − 3) − (z + 2) = 0

11. 0(x − 2) + 0(y − 0) + 2(z − 0) = 0
2z = 0

13. A normal to 4x − y + 2z = 5 is n1 = (4, −1, 2). A normal to 7x − 3y + 4z = 8 is n2 = (7, −3, 4). Since n1 and n2 are not parallel, the planes are not parallel.

15. 2y = 8x − 4z + 5 ⇒ 8x − 2y − 4z = −5
A normal to the plane is n1 = (8, −2, −4).
x = (1/2)z + (1/4)y ⇒ x − (1/4)y − (1/2)z = 0
A normal to the plane is n2 = (1, −1/4, −1/2).
Since n1 = 8n2, the planes are parallel.

17. A normal to 3x − y + z − 4 = 0 is n1 = (3, −1, 1) and a normal to x + 2z = −1 is n2 = (1, 0, 2).
n1 · n2 = (3)(1) + (−1)(0) + (1)(2) = 5
Since n1 and n2 are not orthogonal, the planes are not perpendicular.

19. (a) proj_a u = ((u · a)/||a||^2) a, so ||proj_a u|| = |u · a|/||a||.
||proj_a u|| = |(1)(−4) + (−2)(−3)|/√((−4)^2 + (−3)^2) = 2/√25 = 2/5

(b) ||proj_a u|| = |(3)(2) + (0)(3) + (4)(3)|/√(2^2 + 3^2 + 3^2) = 18/√22

21. u · a = (6)(3) + (2)(−9) = 0
The vector component of u along a is proj_a u = ((u · a)/||a||^2) a = (0/||a||^2)(3, −9) = (0, 0).
The vector component of u orthogonal to a is u − proj_a u = (6, 2) − (0, 0) = (6, 2).

23. u · a = (3)(1) + (1)(0) + (−7)(5) = −32
||a||^2 = 1^2 + 0^2 + 5^2 = 26
The vector component of u along a is proj_a u = ((u · a)/||a||^2) a = (−32/26)(1, 0, 5) = (−16/13, 0, −80/13).
The vector component of u orthogonal to a is u − proj_a u = (3, 1, −7) − (−16/13, 0, −80/13) = (55/13, 1, −11/13).
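The decomposition in Exercises 21–27 (u split into a component along a and a component orthogonal to a) is easy to verify with exact arithmetic. A minimal sketch using the data of Exercise 23 (the helper `proj` is ours, not from the text):

```python
from fractions import Fraction

def proj(u, a):
    # vector component of u along a: ((u . a)/||a||^2) a
    k = Fraction(sum(x * y for x, y in zip(u, a)),
                 sum(y * y for y in a))
    return tuple(k * y for y in a)

u, a = (3, 1, -7), (1, 0, 5)
p = proj(u, a)
orth = tuple(x - y for x, y in zip(u, p))  # component orthogonal to a
assert p == (Fraction(-16, 13), 0, Fraction(-80, 13))
# the orthogonal component really is orthogonal to a
assert sum(x * y for x, y in zip(orth, a)) == 0
```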


25. u · a = (1)(0) + (1)(2) + (1)(−1) = 1
||a||^2 = 0^2 + 2^2 + (−1)^2 = 5
The vector component of u along a is proj_a u = ((u · a)/||a||^2) a = (1/5)(0, 2, −1) = (0, 2/5, −1/5).
The vector component of u orthogonal to a is u − proj_a u = (1, 1, 1) − (0, 2/5, −1/5) = (1, 3/5, 6/5).

27. u · a = (2)(4) + (1)(−4) + (1)(2) + (2)(−2) = 2
||a||^2 = 4^2 + (−4)^2 + 2^2 + (−2)^2 = 40
The vector component of u along a is proj_a u = ((u · a)/||a||^2) a = (2/40)(4, −4, 2, −2) = (1/5, −1/5, 1/10, −1/10).
The vector component of u orthogonal to a is u − proj_a u = (2, 1, 1, 2) − (1/5, −1/5, 1/10, −1/10) = (9/5, 6/5, 9/10, 21/10).

29. D = |ax0 + by0 + c|/√(a^2 + b^2) = |4(−3) + 3(1) + 4|/√(4^2 + 3^2) = |−5|/√25 = 1

31. The line is 4x + y − 2 = 0.
D = |ax0 + by0 + c|/√(a^2 + b^2) = |4(2) + (1)(−5) − 2|/√(4^2 + 1^2) = 1/√17

33. The plane is x + 2y − 2z − 4 = 0.
D = |ax0 + by0 + cz0 + d|/√(a^2 + b^2 + c^2) = |(1)(3) + 2(1) − 2(−2) − 4|/√(1^2 + 2^2 + (−2)^2) = 5/√9 = 5/3

35. The plane is 2x + 3y − 4z − 1 = 0.
D = |ax0 + by0 + cz0 + d|/√(a^2 + b^2 + c^2) = |2(−1) + 3(2) − 4(1) − 1|/√(2^2 + 3^2 + (−4)^2) = |−1|/√29 = 1/√29

37. A point on 2x − y − z = 5 is P0(0, −5, 0). The distance between P0 and the plane −4x + 2y + 2z = 12 is
D = |−4(0) + 2(−5) + 2(0) − 12|/√((−4)^2 + 2^2 + 2^2) = |−22|/√24 = 22/(2√6) = 11/√6.

39. Note that the equation of the second plane is −2 times the equation of the first plane, so the planes coincide. The distance between the planes is 0.
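The point-to-plane distance formula used in Exercises 33–37 can be packaged once and reused. A minimal sketch (the helper `dist_point_plane` is ours, not from the text):

```python
import math

def dist_point_plane(p, coeffs):
    # distance from p = (x0, y0, z0) to the plane ax + by + cz + d = 0
    a, b, c, d = coeffs
    x0, y0, z0 = p
    return abs(a * x0 + b * y0 + c * z0 + d) / math.sqrt(a * a + b * b + c * c)

# Exercise 33: point (3, 1, -2) and plane x + 2y - 2z - 4 = 0 give D = 5/3
assert abs(dist_point_plane((3, 1, -2), (1, 2, -2, -4)) - 5 / 3) < 1e-12
```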


41. (a) α is the angle between v and i, so
cos α = (v · i)/(||v|| ||i||) = ((a)(1) + (b)(0) + (c)(0))/(||v|| · 1) = a/||v||.

(b) Similar to part (a), cos β = b/||v|| and cos γ = c/||v||.

(c) v/||v|| = (a/||v||, b/||v||, c/||v||) = (cos α, cos β, cos γ)

(d) Since v/||v|| is a unit vector, its magnitude is 1, so cos^2 α + cos^2 β + cos^2 γ = 1.

43. v · (k1w1 + k2w2) = v · (k1w1) + v · (k2w2) = k1(v · w1) + k2(v · w2) = k1(0) + k2(0) = 0

True/False 3.3

(a) True; the zero vector is orthogonal to all vectors.

(b) True; (ku) · (mv) = km(u · v) = km(0) = 0.

(c) True; the orthogonal projection of u on a has the same direction as a, while the vector component of u orthogonal to a is orthogonal (perpendicular) to a.

(d) True; since proj_b u has the same direction as b.

(e) True; if v has the same direction as a, then proj_a v = v.

(f) False; for a = (1, 0), u = (1, 10), and v = (1, 7), proj_a u = proj_a v.

(g) False; ||u + v|| ≤ ||u|| + ||v||.

Section 3.4

Exercise Set 3.4

1. The vector equation is x = x0 + tv.
(x, y) = (−4, 1) + t(0, −8)
Equating components gives the parametric equations.
x = −4, y = 1 − 8t

3. Since the given point is the origin, the vector equation is x = tv.
(x, y, z) = t(−3, 0, 1)
Equating components gives the parametric equations.
x = −3t, y = 0, z = t

5. For t = 0, the point is (3, −6). The coefficients of t give the parallel vector (−5, −1).

7. For t = 0, the point is (4, 6).
x = (4 − 4t, 6 − 6t) + (−2t, 0) = (4 − 6t, 6 − 6t)
The coefficients of t give the parallel vector (−6, −6).

9. The vector equation is x = x0 + t1v1 + t2v2.
(x, y, z) = (−3, 1, 0) + t1(0, −3, 6) + t2(−5, 1, 2)
Equating components gives the parametric equations.
x = −3 − 5t2, y = 1 − 3t1 + t2, z = 6t1 + 2t2

11. The vector equation is x = x0 + t1v1 + t2v2.
(x, y, z) = (−1, 1, 4) + t1(6, −1, 0) + t2(−1, 3, 1)
Equating components gives the parametric equations.
x = −1 + 6t1 − t2, y = 1 − t1 + 3t2, z = 4 + t2

13. One vector orthogonal to v is w = (3, 2). Using w, the vector equation is (x, y) = t(3, 2) and the parametric equations are x = 3t, y = 2t.

15. Two possible vectors that are orthogonal to v are v1 = (0, 1, 0) and v2 = (5, 0, 4). Using v1 and v2, the vector equation is (x, y, z) = t1(0, 1, 0) + t2(5, 0, 4) and the parametric equations are x = 5t2, y = t1, z = 4t2.


17. The augmented matrix of the system is [1 1 1 0; 2 2 2 0; 3 3 3 0], which has reduced row echelon form [1 1 1 0; 0 0 0 0; 0 0 0 0]. The solution of the system is x1 = −r − s, x2 = r, x3 = s, or x = (−r − s, r, s) in vector form.
Since the row vectors of [1 1 1; 2 2 2; 3 3 3] are all multiples of (1, 1, 1), only r1 needs to be considered.
r1 · x = (1, 1, 1) · (−r − s, r, s) = 1(−r − s) + 1(r) + 1(s) = 0

19. The augmented matrix of the system is [1 5 1 2 −1 0; 1 −2 −1 3 2 0], which has reduced row echelon form [1 0 −3/7 19/7 8/7 0; 0 1 2/7 −1/7 −3/7 0].
The solution of the system is x1 = (3/7)r − (19/7)s − (8/7)t, x2 = −(2/7)r + (1/7)s + (3/7)t, x3 = r, x4 = s, x5 = t, or
x = ((3/7)r − (19/7)s − (8/7)t, −(2/7)r + (1/7)s + (3/7)t, r, s, t) in vector form.
For this system, r1 = (1, 5, 1, 2, −1) and r2 = (1, −2, −1, 3, 2).
r1 · x = 1((3/7)r − (19/7)s − (8/7)t) + 5(−(2/7)r + (1/7)s + (3/7)t) + 1(r) + 2(s) − 1(t)
= (3/7)r − (19/7)s − (8/7)t − (10/7)r + (5/7)s + (15/7)t + r + 2s − t
= 0
r2 · x = 1((3/7)r − (19/7)s − (8/7)t) − 2(−(2/7)r + (1/7)s + (3/7)t) − 1(r) + 3(s) + 2(t)
= (3/7)r − (19/7)s − (8/7)t + (4/7)r − (2/7)s − (6/7)t − r + 3s + 2t
= 0

21. (a) A particular solution of x + y + z = 1 is (1, 0, 0).
The general solution of x + y + z = 0 is x = −s − t, y = s, z = t, which is s(−1, 1, 0) + t(−1, 0, 1) in vector form.
The equation can be represented by (1, 0, 0) + s(−1, 1, 0) + t(−1, 0, 1).

(b) Geometrically, the form of the equation indicates that it is a plane in R^3, passing through the point P(1, 0, 0) and parallel to the vectors (−1, 1, 0) and (−1, 0, 1).

23. (a) If x = (x, y, z) is orthogonal to a and b, then a · x = x + y + z = 0 and b · x = −2x + 3y = 0. The system is
x + y + z = 0
−2x + 3y = 0

(b) The reduced row echelon form of [1 1 1 0; −2 3 0 0] is [1 0 3/5 0; 0 1 2/5 0], so the solution will require one parameter. Since only one parameter is required, the solution space is a line through the origin in R^3.


(c) From part (b), the solution of the system is x = −(3/5)t, y = −(2/5)t, z = t, or x = (−(3/5)t, −(2/5)t, t) in vector form. Here, r1 = (1, 1, 1) and r2 = (−2, 3, 0).
r1 · x = 1(−(3/5)t) + 1(−(2/5)t) + 1(t) = 0
r2 · x = −2(−(3/5)t) + 3(−(2/5)t) + 0(t) = 0

25. (a) The reduced row echelon form of [3 2 −1 0; 6 4 −2 0; −3 −2 1 0] is [1 2/3 −1/3 0; 0 0 0 0; 0 0 0 0]. The solution of the system is x1 = −(2/3)s + (1/3)t, x2 = s, x3 = t.

(b) Since the second and third rows of [3 2 −1 2; 6 4 −2 4; −3 −2 1 −2] are scalar multiples of the first, it suffices to show that x1 = 1, x2 = 0, x3 = 1 is a solution of 3x1 + 2x2 − x3 = 2.
3(1) + 2(0) − 1 = 3 − 1 = 2

(c) In vector form, add (1, 0, 1) to (−(2/3)s + (1/3)t, s, t) to get x1 = 1 − (2/3)s + (1/3)t, x2 = s, x3 = 1 + t.

27. The reduced row echelon form of [3 4 1 2 3; 6 8 2 5 7; 9 12 3 10 13] is [1 4/3 1/3 0 1/3; 0 0 0 1 1; 0 0 0 0 0].
A general solution of the system is x1 = 1/3 − (4/3)s − (1/3)t, x2 = s, x3 = t, x4 = 1.
From the general solution of the nonhomogeneous system, a general solution of the homogeneous system is x1 = −(4/3)s − (1/3)t, x2 = s, x3 = t, x4 = 0 and a particular solution of the nonhomogeneous system is x1 = 1/3, x2 = 0, x3 = 0, x4 = 1.

True/False 3.4

(a) True; the vector equation is then x = x0 + tv where x0 is the given point and v is the given vector.

(b) False; two noncollinear vectors that are parallel to the plane are needed.

(c) True; the equation of the line is x = tv where v is any nonzero vector on the line.

(d) True; if b = 0, then the statement is true by Theorem 3.4.3. If b ≠ 0, so that there is a nonzero entry, say bi of b, and if x0 is a solution vector of the system, then by matrix multiplication, ri · x0 = bi where ri is the ith row vector of A.

(e) False; a particular solution of Ax = b must be used to obtain the general solution of Ax = b from the general solution of Ax = 0.

(f) True; A(x1 − x2) = Ax1 − Ax2 = b − b = 0.

Section 3.5

Exercise Set 3.5

**(c) In vector form, add (1, 0, 1) to
**

1

⎛ 2

⎞

⎜ − s + t , s, t ⎟ to get

3

3

⎝

⎠

2

1

x1 = 1 − s + t , x2 = s, x3 = 1 + t .

3

3

1. (a) v × w = (0, 2, − 3) × (2, 6, 7)

⎛ 2 −3

0 −3 0 2 ⎞

=⎜

,−

,

6

7

2 7 2 6 ⎟⎠

⎝

= (14 + 18, − (0 + 6), 0 − 4)

= (32, − 6, − 4)

**27. The reduced row echelon form of
**

⎡1 4 1 0 1⎤

⎡ 3 4 1 2 3⎤

3 3

3

⎢6 8 2 5 7 ⎥ is ⎢⎢0 0 0 1 1⎥⎥ .

⎢

⎥

⎢0 0 0 0 0⎥

⎣9 12 3 10 13⎦

⎣

⎦

A general solution of the system is

1 4

1

x1 = − s − t , x2 = s, x3 = t , x4 = 1.

3 3

3

From the general solution of the

nonhomogeneous system, a general solution of

4

1

the homogeneous system is x1 = − s − t ,

3

3

(b) u × ( v × w )

= (3, 2, − 1) × (32, − 6, − 4)

⎛ 2 −6

3 −1 3 2 ⎞

=⎜

,−

,

1

4

32

−

−

−4 32 −6 ⎟⎠

⎝

= (−8 − 6, − (−12 + 32), − 18 − 64)

= (−14, − 20, − 82)
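The 2×2-minor pattern used for cross products throughout this exercise set can be spot-checked numerically. A minimal plain-Python sketch (no external libraries), using the vectors of Exercise 1:

```python
def cross(u, v):
    # components: (u2*v3 - u3*v2, u3*v1 - u1*v3, u1*v2 - u2*v1)
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

u, v, w = (3, 2, -1), (0, 2, -3), (2, 6, 7)
print(cross(v, w))             # (32, -6, -4)
print(cross(u, cross(v, w)))   # (-14, -20, -82)
print(cross(cross(u, v), w))   # (27, 40, -42)
```

Note that the last two outputs differ, which is the non-associativity of the cross product that True/False (e) of this section illustrates with i, j, k.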

Chapter 3: Euclidean Vector Spaces

(c) u × v = (3, 2, −1) × (0, 2, −3)
= (|2 −1; 2 −3|, −|3 −1; 0 −3|, |3 2; 0 2|)
= (−6 + 2, −(−9 − 0), 6 − 0)
= (−4, 9, 6)
(u × v) × w = (−4, 9, 6) × (2, 6, 7)
= (|9 6; 6 7|, −|−4 6; 2 7|, |−4 9; 2 6|)
= (63 − 36, −(−28 − 12), −24 − 18)
= (27, 40, −42)

3. u × v is orthogonal to both u and v.
u × v = (−6, 4, 2) × (3, 1, 5)
= (|4 2; 1 5|, −|−6 2; 3 5|, |−6 4; 3 1|)
= (20 − 2, −(−30 − 6), −6 − 12)
= (18, 36, −18)

5. u × v is orthogonal to both u and v.
u × v = (−2, 1, 5) × (3, 0, −3)
= (|1 5; 0 −3|, −|−2 5; 3 −3|, |−2 1; 3 0|)
= (−3 − 0, −(6 − 15), 0 − 3)
= (−3, 9, −3)

7. u × v = (1, −1, 2) × (0, 3, 1)
= (|−1 2; 3 1|, −|1 2; 0 1|, |1 −1; 0 3|)
= (−1 − 6, −(1 − 0), 3 − 0)
= (−7, −1, 3)
||u × v|| = √((−7)² + (−1)² + 3²) = √59
The area is √59.

9. u × v = (2, 3, 0) × (−1, 2, −2)
= (|3 0; 2 −2|, −|2 0; −1 −2|, |2 3; −1 2|)
= (−6 − 0, −(−4 − 0), 4 + 3)
= (−6, 4, 7)
||u × v|| = √((−6)² + 4² + 7²) = √101
The area is √101.

11. P1P2 = (3, 2) and P3P4 = (−3, −2), so the side determined by P1 and P2 is parallel to the side determined by P3 and P4, but the direction is opposite; thus P1P2 and P1P4 are adjacent sides.
P1P4 = (3, 1)
The parallelogram can be considered in R3 as being determined by u = (3, 2, 0) and v = (3, 1, 0).
u × v = (|2 0; 1 0|, −|3 0; 3 0|, |3 2; 3 1|) = (0, 0, −3)
||u × v|| = √(0² + 0² + (−3)²) = 3
The area is 3.

13. AB = (1, 4) and AC = (−3, 2)
The triangle can be considered in R3 as being half of the parallelogram formed by u = (1, 4, 0) and v = (−3, 2, 0).
u × v = (|4 0; 2 0|, −|1 0; −3 0|, |1 4; −3 2|) = (0, 0, 14)
||u × v|| = √(0² + 0² + 14²) = 14
The area of the triangle is (1/2)||u × v|| = 7.

15. Let u = P1P2 = (−1, −5, 2) and v = P1P3 = (2, 0, 3).
u × v = (|−5 2; 0 3|, −|−1 2; 2 3|, |−1 −5; 2 0|) = (−15, 7, 10)
||u × v|| = √((−15)² + 7² + 10²) = √374
The area of the triangle is (1/2)||u × v|| = √374/2.

17. u · (v × w) = det[2 −6 2; 0 4 −2; 2 2 −4]
= 2|4 −2; 2 −4| + 2|−6 2; 4 −2|
= 2(−12) + 2(4)
= −16
The volume of the parallelepiped is |−16| = 16.
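The scalar triple product computations in Exercises 17–23 are 3×3 determinants, which can be checked with a direct cofactor expansion. A small sketch under that reading, using the Exercise 17 data:

```python
def det3(r1, r2, r3):
    a, b, c = r1; d, e, f = r2; g, h, i = r3
    # cofactor expansion along the first row
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# Exercise 17: u.(v x w) as a 3x3 determinant; |triple| is the volume
triple = det3((2, -6, 2), (0, 4, -2), (2, 2, -4))
print(triple, abs(triple))  # -16 16
```

The absolute value is what matters for the parallelepiped volume; the sign only records orientation.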

19. u · (v × w) = det[−1 −2 1; 3 0 −2; 5 −4 0]
= |3 0; 5 −4| + 2|−1 −2; 5 −4|
= −12 + 2(14)
= 16
Since u · (v × w) ≠ 0, the vectors do not lie in the same plane.

21. u · (v × w) = det[−2 0 6; 1 −3 1; −5 −1 1]
= −2|−3 1; −1 1| + 6|1 −3; −5 −1|
= −2(−2) + 6(−16)
= −92

23. u · (v × w) = det[a 0 0; 0 b 0; 0 0 c] = abc

25. Since u · (v × w) = 3, det[u1 u2 u3; v1 v2 v3; w1 w2 w3] = 3.

(a) u · (w × v) = det[u1 u2 u3; w1 w2 w3; v1 v2 v3] = −3 (Rows 2 and 3 were interchanged.)

(b) (v × w) · u = u · (v × w) = 3

(c) w · (u × v) = det[w1 w2 w3; u1 u2 u3; v1 v2 v3] = 3 (Rows 2 and 3 were interchanged, then rows 1 and 2 were interchanged.)

27. (a) Let u = AB = (−1, 2, 2) and v = AC = (1, 1, −1).
u × v = (|2 2; 1 −1|, −|−1 2; 1 −1|, |−1 2; 1 1|) = (−4, 1, −3)
||u × v|| = √((−4)² + 1² + (−3)²) = √26
The area of the triangle is (1/2)||u × v|| = √26/2.

(b) ||AB|| = √((−1)² + 2² + 2²) = 3
Let x be the length of the altitude from vertex C to side AB; then (1/2)||AB|| x = √26/2, so x = √26/||AB|| = √26/3.

29. (u + v) × (u − v)
= (u + v) × u − (u + v) × v
= (u × u) + (v × u) − ((u × v) + (v × v))
= 0 + (v × u) − (u × v) − 0
= (v × u) + (v × u)
= 2(v × u)

37. (a) a = PQ = (3, −1, −3), b = PR = (2, −1, 1), c = PS = (4, −4, 3)
a · (b × c) = det[3 −1 −3; 2 −1 1; 4 −4 3]
= 3|−1 1; −4 3| + 1|2 1; 4 3| − 3|2 −1; 4 −4|
= 3(1) + 1(2) − 3(−4)
= 17
The volume is (1/6)|a · (b × c)| = 17/6.

(b) a = PQ = (1, 2, −1), b = PR = (3, 4, 0), c = PS = (−1, −3, 4)
a · (b × c) = det[1 2 −1; 3 4 0; −1 −3 4]
= −1|3 4; −1 −3| + 4|1 2; 3 4|
= −(−5) + 4(−2)
= −3
The volume is (1/6)|a · (b × c)| = 3/6 = 1/2.
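The tetrahedron-volume formula of Exercise 37, V = |a · (b × c)|/6, can be evaluated exactly with `fractions.Fraction` instead of floating point. A sketch with both data sets from that exercise:

```python
from fractions import Fraction

def det3(r1, r2, r3):
    a, b, c = r1; d, e, f = r2; g, h, i = r3
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# Exercise 37: volume of a tetrahedron = |a.(b x c)| / 6, kept as an exact fraction
va = Fraction(abs(det3((3, -1, -3), (2, -1, 1), (4, -4, 3))), 6)
vb = Fraction(abs(det3((1, 2, -1), (3, 4, 0), (-1, -3, 4))), 6)
print(va, vb)  # 17/6 1/2
```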

True/False 3.5

(a) True; for nonzero vectors u and v, ||u × v|| = ||u|| ||v|| sin θ will only be 0 if θ = 0, i.e., if u and v are parallel.

(b) True; the cross product of two nonzero and noncollinear vectors will be perpendicular to both vectors, hence normal to the plane containing the vectors.

(c) False; the scalar triple product is a scalar, not a vector.

(d) True

(e) False; for example,
(i × i) × j = 0 × j = 0
i × (i × j) = i × k = −j

(f) False; for example, if v and w are distinct vectors that are both parallel to u, then u × v = u × w = 0, but v ≠ w.

Chapter 3 Supplementary Exercises

1. (a) 3v − 2u = (9, −3, 18) − (−4, 0, 8) = (13, −3, 10)

(b) u + v + w = (−2 + 3 + 2, 0 − 1 − 5, 4 + 6 − 5) = (3, −6, 5)
||u + v + w|| = √(3² + (−6)² + 5²) = √70

(c) The distance between −3u and v + 5w is ||−3u − (v + 5w)|| = ||−3u − v − 5w||.
−3u − v − 5w = (6, 0, −12) − (3, −1, 6) − (10, −25, −25) = (−7, 26, 7)
||−3u − v − 5w|| = √((−7)² + 26² + 7²) = √774

(d) u · w = (−2)(2) + (0)(−5) + (4)(−5) = −24
||w||² = 2² + (−5)² + (−5)² = 54
proj_w u = (u · w/||w||²)w = (−24/54)(2, −5, −5) = −(12/27)(2, −5, −5)

(e) u · (v × w) = det[−2 0 4; 3 −1 6; 2 −5 −5]
= −2|−1 6; −5 −5| + 4|3 −1; 2 −5|
= −2(35) + 4(−13)
= −122

(f) −5v + w = (−15, 5, −30) + (2, −5, −5) = (−13, 0, −35)
(u · v)w = ((−2)(3) + (0)(−1) + (4)(6))(2, −5, −5) = 18(2, −5, −5) = (36, −90, −90)
(−5v + w) × (u · v)w = (|0 −35; −90 −90|, −|−13 −35; 36 −90|, |−13 0; 36 −90|) = (−3150, −2430, 1170)

3. (a) 3v − 2u = (−9, 0, 24, 0) − (−4, 12, 4, 2) = (−5, −12, 20, −2)

(b) u + v + w = (−2 − 3 + 9, 6 + 0 + 1, 2 + 8 − 6, 1 + 0 − 6) = (4, 7, 4, −5)
||u + v + w|| = √(4² + 7² + 4² + (−5)²) = √106
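A computation like 1(f), which chains scaling, addition, the dot product, and the cross product, is easy to mistype by hand; a plain-Python spot check with the vectors u = (−2, 0, 4), v = (3, −1, 6), w = (2, −5, −5) from Exercise 1:

```python
def add(a, b):   return tuple(x + y for x, y in zip(a, b))
def scale(k, a): return tuple(k * x for x in a)
def dot(a, b):   return sum(x * y for x, y in zip(a, b))
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

u, v, w = (-2, 0, 4), (3, -1, 6), (2, -5, -5)
lhs = add(scale(-5, v), w)   # -5v + w = (-13, 0, -35)
rhs = scale(dot(u, v), w)    # (u.v)w  = (36, -90, -90)
print(cross(lhs, rhs))       # (-3150, -2430, 1170)
```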

(c) The distance between −3u and v + 5w is ||−3u − (v + 5w)|| = ||−3u − v − 5w||.
−3u − v − 5w = (6, −18, −6, −3) − (−3, 0, 8, 0) − (45, 5, −30, −30) = (−36, −23, 16, 27)
||−3u − v − 5w|| = √((−36)² + (−23)² + 16² + 27²) = √2810

(d) u · w = (−2)(9) + (6)(1) + (2)(−6) + (1)(−6) = −30
||w||² = 9² + 1² + (−6)² + (−6)² = 154
proj_w u = (u · w/||w||²)w = (−30/154)(9, 1, −6, −6) = −(15/77)(9, 1, −6, −6)
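The projection coefficient u · w/||w||² in parts like 1(d) and 3(d) is a ratio of integers, so `fractions.Fraction` reproduces the book's exact answers with no rounding. A sketch with the 3(d) data:

```python
from fractions import Fraction

def proj_coef(u, w):
    # coefficient of w in proj_w u = (u.w / ||w||^2) w, kept exact
    return Fraction(sum(a*b for a, b in zip(u, w)),
                    sum(a*a for a in w))

coef = proj_coef((-2, 6, 2, 1), (9, 1, -6, -6))
print(coef)                          # -15/77
print(tuple(coef * a for a in (9, 1, -6, -6)))
```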

5. u = (−32, −1, 19), v = (3, −1, 5), w = (1, 6, 2)

u ⋅ v = (−32)(3) + (−1)(−1) + (19)(5) = 0

u ⋅ w = (−32)(1) + (−1)(6) + (19)(2) = 0

v ⋅ w = (3)(1) + (−1)(6) + (5)(2) = 7

Since v ⋅ w ≠ 0, the vectors do not form an orthogonal set.

7. (a) The set of all such vectors is the line through the origin which is perpendicular to the given vector.

(b) The set of all such vectors is the plane through the origin which is perpendicular to the given vector.

(c) The only vector in R² that can be orthogonal to two noncollinear vectors is 0; the set is {0}, the origin.

(d) The set of all such vectors is the line through the origin which is perpendicular to the plane containing the

two given vectors.

9. True; ||u + v||² = (u + v) · (u + v) = u · u + u · v + v · u + v · v = ||u||² + u · v + v · u + ||v||²
Thus, if ||u + v||² = ||u||² + ||v||², then u · v = v · u = 0, so u and v are orthogonal.

11. Let S be S(−1, s2, s3). Then u = PQ = (3, 1, −2) and v = RS = (−6, s2 − 1, s3 − 1). u and v are parallel if u × v = 0.
u × v = (|1 −2; s2 − 1 s3 − 1|, −|3 −2; −6 s3 − 1|, |3 1; −6 s2 − 1|)
= (s3 − 1 + 2(s2 − 1), −(3(s3 − 1) − 12), 3(s2 − 1) + 6)
= (2s2 + s3 − 3, −3s3 + 15, 3s2 + 3)
If u × v = 0, then from the second and third components, s2 = −1 and s3 = 5, which also causes the first component to be 0. The point is S(−1, −1, 5).
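The solution of Exercise 11 can be verified by substituting s2 = −1, s3 = 5 back into the three cross-product components:

```python
# cross-product components of u x v from Exercise 11, as functions of s2, s3
def components(s2, s3):
    return (2*s2 + s3 - 3, -3*s3 + 15, 3*s2 + 3)

# all three components vanish, so u and v are parallel at S(-1, -1, 5)
print(components(-1, 5))  # (0, 0, 0)
```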

13. u = PQ = (3, 1, −2), v = PR = (2, 2, −3)
cos θ = (u · v)/(||u|| ||v||)
= [(3)(2) + (1)(2) + (−2)(−3)]/[√(3² + 1² + (−2)²) √(2² + 2² + (−3)²)]
= 14/(√14 √17)
= √14/√17

15. The plane is 5x − 3y + z + 4 = 0.
D = |5(−3) − 3(1) + 3 + 4|/√(5² + (−3)² + 1²) = 11/√35

17. The plane will contain u = PQ = (1, −2, −2) and v = PR = (5, −1, −5). Using P(−2, 1, 3), the vector equation is (x, y, z) = (−2, 1, 3) + t1(1, −2, −2) + t2(5, −1, −5). Equating components gives the parametric equations x = −2 + t1 + 5t2, y = 1 − 2t1 − t2, z = 3 − 2t1 − 5t2.

19. The vector equation is (x, y) = (0, −3) + t(8, −1). Equating components gives the parametric equations x = 8t, y = −3 − t.

21. One point on the line is (0, −5). Since the slope of the line is m = 3/1, the vector (1, 3) is parallel to the line. Using this point and vector gives the vector equation (x, y) = (0, −5) + t(1, 3). Equating components gives the parametric equations x = t, y = −5 + 3t.

23. From the vector equation, (−1, 5, 6) is a point on the plane and (0, −1, 3) × (2, −1, 0) will be a normal to the plane.
(0, −1, 3) × (2, −1, 0) = (|−1 3; −1 0|, −|0 3; 2 0|, |0 −1; 2 −1|) = (3, 6, 2)
A point-normal equation for the plane is 3(x + 1) + 6(y − 5) + 2(z − 6) = 0.

25. Two vectors in the plane are u = PQ = (−10, 4, −1) and v = PR = (−9, 6, −6). A normal to the plane is u × v.
u × v = (|4 −1; 6 −6|, −|−10 −1; −9 −6|, |−10 4; −9 6|) = (−18, −51, −24)
Using point P(9, 0, 4), a point-normal equation for the plane is −18(x − 9) − 51y − 24(z − 4) = 0.

29. The equation represents a plane perpendicular to the xy-plane which intersects the xy-plane in the line Ax + By = 0.
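Normals obtained by cross product, as in Exercises 23 and 25, should be orthogonal to both direction vectors of the plane; that property gives a one-line sanity check:

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Exercise 23: normal to the plane spanned by the two direction vectors
n = cross((0, -1, 3), (2, -1, 0))
print(n)  # (3, 6, 2)
# orthogonality to both directions confirms n is a valid normal
print(dot(n, (0, -1, 3)), dot(n, (2, -1, 0)))  # 0 0
```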

Chapter 4

General Vector Spaces

Section 4.1

Exercise Set 4.1

1. (a) u + v = (−1, 2) + (3, 4)

= (−1 + 3, 2 + 4)

= (2, 6)

3u = 3(−1, 2) = (0, 6)

(b) The sum of any two real numbers is a real number. The product of two real numbers is a real number and 0 is

a real number.

(c) Axioms 1−5 hold in V because they also hold in R 2 .

(e) Let u = (u1 , u2 ) with u1 ≠ 0. Then 1u = 1(u1 , u2 ) = (0, u2 ) ≠ u.

3. The set is a vector space with the given operations.

5. The set is not a vector space. Axiom 5 fails to hold because of the restriction that x ≥ 0. Axiom 6 fails to hold for

k < 0.

7. The set is not a vector space. Axiom 8 fails to hold because (k + m)2 ≠ k 2 + m 2 .

9. The set is a vector space with the given operations.

11. The set is a vector space with the given operations.

23. u + w = v + w                          Hypothesis
    (u + w) + (−w) = (v + w) + (−w)        Add −w to both sides.
    u + [w + (−w)] = v + [w + (−w)]        Axiom 3
    u + 0 = v + 0                          Axiom 5
    u = v                                  Axiom 4

25. (1) Axiom 7

(2) Axiom 4

(3) Axiom 5

(4) Axiom 1

(5) Axiom 3

(6) Axiom 5

(7) Axiom 4

True/False 4.1

(a) False; vectors are not restricted to being directed line segments.

(b) False; vectors are not restricted to being n-tuples of real numbers.

(c) True

Chapter 4: General Vector Spaces

(d) False; if a vector space V had exactly two elements, one of them would necessarily be the zero vector 0. Call the other vector u. Then 0 + 0 = 0, 0 + u = u, and u + 0 = u. Since −u must exist and −u ≠ 0, then −u = u and u + u = 0. Consider the scalar 1/2. Since (1/2)u would be an element of V, (1/2)u = 0 or (1/2)u = u. If (1/2)u = 0, then u = 1u = (1/2 + 1/2)u = (1/2)u + (1/2)u = 0 + 0 = 0. If (1/2)u = u, then u = 1u = (1/2 + 1/2)u = (1/2)u + (1/2)u = u + u = 0. Either way we get u = 0, which contradicts our assumption that we had exactly two elements.

(e) False; the zero vector would be 0 = 0 + 0x, which does not have degree exactly 1.

Section 4.2

Exercise Set 4.2

1. (a) This is a subspace of R3.

(b) This is not a subspace of R3, since (a1, 1, 1) + (a2, 1, 1) = (a1 + a2, 2, 2), which is not in the set.

(c) This is a subspace of R3.

(d) This is not a subspace of R3, since for k ≠ 1, k(a + c + 1) ≠ ka + kc + 1, so k(a, b, c) is not in the set.

(e) This is a subspace of R3.

3. (a) This is a subspace of P3.

(b) This is a subspace of P3.

(c) This is not a subspace of P3, since k(a0 + a1x + a2x² + a3x³) is not in the set for all noninteger values of k.

(d) This is a subspace of P3.

5. (a) This is a subspace of R∞.

(b) This is not a subspace of R∞, since for k ≠ 1, kv is not in the set.

(c) This is a subspace of R∞.

(d) This is a subspace of R∞.

7. Consider k1u + k2v = (a, b, c). Then (0k1 + k2, −2k1 + 3k2, 2k1 − k2) = (a, b, c). Equating components gives the system
k2 = a
−2k1 + 3k2 = b
2k1 − k2 = c
or Ax = b, where A = [0 1; −2 3; 2 −1], x = [k1; k2], and b = [a; b; c].
The systems can be solved simultaneously by reducing the matrix [0 1 2 3 0 0; −2 3 2 1 4 0; 2 −1 2 5 5 0], which reduces to [1 0 2 4 0 0; 0 1 2 3 0 0; 0 0 0 0 1 0]. The results can be read from the reduced matrix.

(a) (2, 2, 2) = 2u + 2v is a linear combination of u and v.

(b) (3, 1, 5) = 4u + 3v is a linear combination of u and v.

(c) (0, 4, 5) is not a linear combination of u and v.

(d) (0, 0, 0) = 0u + 0v is a linear combination of u and v.

9. Similar to the process in Exercise 7, determining whether the given matrices are linear combinations of A, B, and C can be accomplished by reducing the matrix
[4 1 0 6 0 6 −1; 0 −1 2 −8 0 0 5; −2 2 1 −1 0 3 7; −2 3 4 −8 0 8 1].
The matrix reduces to
[1 0 0 1 0 1 0; 0 1 0 2 0 2 0; 0 0 1 −3 0 1 0; 0 0 0 0 0 0 1].

(a) [6 −8; −1 −8] = 1A + 2B − 3C is a linear combination of A, B, and C.

(b) [0 0; 0 0] = 0A + 0B + 0C is a linear combination of A, B, and C.

(c) [6 0; 3 8] = 1A + 2B + 1C is a linear combination of A, B, and C.

(d) [−1 5; 7 1] is not a linear combination of A, B, and C.
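The reductions in Exercises 7 and 9 can be spot-checked by forming the claimed combinations directly. A sketch for Exercise 7, where u and v are taken to be the columns of the coefficient matrix A (an assumption consistent with the system above):

```python
u, v = (0, -2, 2), (1, 3, -1)  # assumed: the coefficient vectors of k1 and k2

def comb(k1, k2):
    return tuple(k1*a + k2*b for a, b in zip(u, v))

print(comb(2, 2))  # (2, 2, 2)  -> part (a)
print(comb(4, 3))  # (3, 1, 5)  -> part (b)
```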

11. Let b = (b1, b2, b3) be an arbitrary vector in R3.

(a) If k1v1 + k2v2 + k3v3 = b, then (2k1, 2k1 + k3, 2k1 + 3k2 + k3) = (b1, b2, b3) or
2k1 = b1
2k1 + k3 = b2
2k1 + 3k2 + k3 = b3
Since det[2 0 0; 2 0 1; 2 3 1] = −6 ≠ 0, the system is consistent for all values of b1, b2, and b3, so the given vectors span R3.

(b) If k1v1 + k2v2 + k3v3 = b, then (2k1 + 4k2 + 8k3, −k1 + k2 − k3, 3k1 + 2k2 + 8k3) = (b1, b2, b3) or
2k1 + 4k2 + 8k3 = b1
−k1 + k2 − k3 = b2
3k1 + 2k2 + 8k3 = b3
Since det[2 4 8; −1 1 −1; 3 2 8] = 0, the system is not consistent for all values of b1, b2, and b3, so the given vectors do not span R3.

(c) If k1v1 + k2v2 + k3v3 + k4v4 = b, then (3k1 + 2k2 + 5k3 + k4, k1 − 3k2 − 2k3 + 4k4, 4k1 + 5k2 + 9k3 − k4) = (b1, b2, b3) or
3k1 + 2k2 + 5k3 + k4 = b1
k1 − 3k2 − 2k3 + 4k4 = b2
4k1 + 5k2 + 9k3 − k4 = b3
Reducing the matrix [3 2 5 1 b1; 1 −3 −2 4 b2; 4 5 9 −1 b3] leads to
[1 −3 −2 4 b2; 0 1 1 −1 (3/11)b1 − (1/11)b2; 0 0 0 0 −(17/11)b1 + (7/11)b2 + b3].
The system has a solution only if −(17/11)b1 + (7/11)b2 + b3 = 0 or 17b1 = 7b2 + 11b3. This restriction means that the span of v1, v2, v3, and v4 is not all of R3, i.e., the given vectors do not span R3.

(d) If k1v1 + k2v2 + k3v3 + k4v4 = b, then (k1 + 3k2 + 4k3 + 3k4, 2k1 + 4k2 + 3k3 + 3k4, 6k1 + k2 + k3 + k4) = (b1, b2, b3) or
k1 + 3k2 + 4k3 + 3k4 = b1
2k1 + 4k2 + 3k3 + 3k4 = b2
6k1 + k2 + k3 + k4 = b3
The matrix [1 3 4 3 b1; 2 4 3 3 b2; 6 1 1 1 b3] reduces to
[1 0 0 1/39 −(1/39)b1 − (1/39)b2 + (7/39)b3; 0 1 0 16/39 −(16/39)b1 + (23/39)b2 − (5/39)b3; 0 0 1 17/39 (22/39)b1 − (17/39)b2 + (2/39)b3].
Thus, for any values of b1, b2, and b3, values of k1, k2, k3, and k4 can be found. The given vectors span R3.
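For parts (a) and (b) of Exercise 11 the spanning question comes down to whether the 3×3 coefficient determinant is nonzero; that check is easy to automate:

```python
def det3(r1, r2, r3):
    a, b, c = r1; d, e, f = r2; g, h, i = r3
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# Exercise 11(a): nonzero determinant -> the vectors span R^3
print(det3((2, 0, 0), (2, 0, 1), (2, 3, 1)))    # -6
# Exercise 11(b): zero determinant -> they do not span R^3
print(det3((2, 4, 8), (-1, 1, -1), (3, 2, 8)))  # 0
```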

13. Let p = a0 + a1x + a2x² be an arbitrary polynomial in P2. If k1p1 + k2p2 + k3p3 + k4p4 = p, then (k1 + 3k2 + 5k3 − 2k4) + (−k1 + k2 − k3 − 2k4)x + (2k1 + 4k3 + 2k4)x² = a0 + a1x + a2x² or
k1 + 3k2 + 5k3 − 2k4 = a0
−k1 + k2 − k3 − 2k4 = a1
2k1 + 4k3 + 2k4 = a2
Reducing the matrix [1 3 5 −2 a0; −1 1 −1 −2 a1; 2 0 4 2 a2] leads to
[1 3 5 −2 a0; 0 1 1 −1 (1/4)a0 + (1/4)a1; 0 0 0 0 −(1/2)a0 + (3/2)a1 + a2].
The system has a solution only if −(1/2)a0 + (3/2)a1 + a2 = 0 or a0 = 3a1 + 2a2. This restriction means that the span of p1, p2, p3, and p4 is not all of P2, i.e., they do not span P2.

15. (a) Since det(A) = 0, reduce the matrix [−1 1 1 0; 3 −1 0 0; 2 −4 −5 0]. The matrix reduces to [1 0 1/2 0; 0 1 3/2 0; 0 0 0 0]. The solution set is x = −(1/2)t, y = −(3/2)t, z = t, which is a line through the origin.

(b) Since det(A) = 0, reduce the matrix [1 −2 3 0; −3 6 9 0; −2 4 −6 0]. The matrix reduces to [1 −2 0 0; 0 0 1 0; 0 0 0 0]. The solution set is x = 2t, y = t, z = 0, which is a line through the origin.

(c) Since det(A) = −1 ≠ 0, the system has only the trivial solution, i.e., the origin.

(d) Since det(A) = 8 ≠ 0, the system has only the trivial solution, i.e., the origin.

(e) Since det(A) = 0, reduce the matrix [1 −1 1 0; 2 −1 4 0; 3 1 11 0]. The matrix reduces to [1 0 3 0; 0 1 2 0; 0 0 0 0]. The solution set is x = −3t, y = −2t, z = t, which is a line through the origin.

(f) Since det(A) = 0, reduce the matrix [1 −3 1 0; 2 −6 2 0; 3 −9 3 0]. The matrix reduces to [1 −3 1 0; 0 0 0 0; 0 0 0 0]. The solution set is x − 3y + z = 0, which is a plane through the origin.

17. Let f = f(x) and g = g(x) be elements of the set, so ∫_a^b f(x)dx = 0 and ∫_a^b g(x)dx = 0. Then
∫_a^b (f(x) + g(x))dx = ∫_a^b f(x)dx + ∫_a^b g(x)dx = 0
and f + g is in the set. Let k be any scalar. Then ∫_a^b kf(x)dx = k∫_a^b f(x)dx = k · 0 = 0, so kf is in the set. Thus the set is a subspace of C[a, b].

True/False 4.2

(a) True; this is part of the definition of a subspace.

(b) True; since a vector space is a subset of itself.

(c) False; for instance, the set W in Example 4 contains the origin (zero vector), but is not a subspace of R 2 .

(d) False; R 2 is not a subset of R3 .

(e) False; if b ≠ 0, i.e., the system is not homogeneous, then the solution set does not contain the zero vector.

(f) True; this is by the definition of the span of a set of vectors.

(g) True; by Theorem 4.2.2.

(h) False; consider the subspaces W1 = {( x, y) y = x} and W2 = {( x, y) y = − x} in R 2 . v1 = (1, 1) is in W1 and

v 2 = (1, − 1) is in W2 , but v1 + v 2 = (2, 0) is not in W1 ∪ W2 = {( x, y) y = ± x} .

**(i) False; let v1 = (1, 0), v 2 = (0, 1), u1 = (−1, 0), and u 2 = (0, − 1). Then {v1 , v 2 } and {u1, u 2 } both span R 2 but
**

{u1, u 2 } ≠ {v1 , v 2 }.

(j) True; the sum of two upper triangular matrices is upper triangular and the scalar multiple of an upper triangular

matrix is upper triangular.

(k) False; the span of x − 1, ( x − 1)2 , and ( x − 1)3 will only contain polynomials for which p(1) = 0, i.e., not all of P3 .

Section 4.3

Exercise Set 4.3

1. (a) u 2 = −5u1 , i.e., u 2 is a scalar multiple of u1 .

(b) Since there are 3 vectors and 3 > 2, the set is linearly dependent by Theorem 4.3.3.

(c) p 2 = 2p1 , i.e., p 2 is a scalar multiple of p1 .

85

Chapter 4: General Vector Spaces

SSM: Elementary Linear Algebra

(d) B = −A, i.e., B is a scalar multiple of A.

3. (a) The equation k1(3, 8, 7, −3) + k2(1, 5, 3, −1) + k3(2, −1, 2, 6) + k4(1, 4, 0, 3) = (0, 0, 0, 0) generates the homogeneous system
3k1 + k2 + 2k3 + k4 = 0
8k1 + 5k2 − k3 + 4k4 = 0
7k1 + 3k2 + 2k3 = 0
−3k1 − k2 + 6k3 + 3k4 = 0
Since det[3 1 2 1; 8 5 −1 4; 7 3 2 0; −3 −1 6 3] = 128 ≠ 0, the system has only the trivial solution and the vectors are linearly independent.
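The 4×4 determinants that decide independence in this exercise can be evaluated with a short recursive cofactor expansion; a sketch, checked against the 3(a) value:

```python
def det(m):
    # cofactor expansion along the first row; fine for the small matrices here
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

# Exercise 3(a): det != 0 -> only the trivial solution -> linear independence
print(det([[3, 1, 2, 1], [8, 5, -1, 4], [7, 3, 2, 0], [-3, -1, 6, 3]]))  # 128
```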

(b) The equation k1(0, 0, 2, 2) + k2(3, 3, 0, 0) + k3(1, 1, 0, −1) = (0, 0, 0, 0) generates the homogeneous system
3k2 + k3 = 0
3k2 + k3 = 0
2k1 = 0
2k1 − k3 = 0
The third equation gives k1 = 0, which gives k3 = 0 in the fourth equation. k3 = 0 gives k2 = 0 in the first two equations. Since the system has only the trivial solution, the vectors are linearly independent.

(c) The equation k1(0, 3, −3, −6) + k2(−2, 0, 0, −6) + k3(0, −4, −2, −2) + k4(0, −8, 4, −4) = (0, 0, 0, 0) generates the homogeneous system
−2k2 = 0
3k1 − 4k3 − 8k4 = 0
−3k1 − 2k3 + 4k4 = 0
−6k1 − 6k2 − 2k3 − 4k4 = 0
Since det[0 −2 0 0; 3 0 −4 −8; −3 0 −2 4; −6 −6 −2 −4] = 480 ≠ 0, the system has only the trivial solution and the vectors are linearly independent.

(d) The equation k1(3, 0, −3, 6) + k2(0, 2, 3, 1) + k3(0, −2, −2, 0) + k4(−2, 1, 2, 1) = (0, 0, 0, 0) generates the homogeneous system
3k1 − 2k4 = 0
2k2 − 2k3 + k4 = 0
−3k1 + 3k2 − 2k3 + 2k4 = 0
6k1 + k2 + k4 = 0
Since det[3 0 0 −2; 0 2 −2 1; −3 3 −2 2; 6 1 0 1] = 36 ≠ 0, the system has only the trivial solution and the vectors are linearly independent.

5. If the vectors lie in a plane, then they are linearly dependent.

(a) The equation k1v1 + k2v2 + k3v3 = 0 generates the homogeneous system
2k1 + 6k2 + 2k3 = 0
−2k1 + k2 = 0
4k2 − 4k3 = 0
Since det[2 6 2; −2 1 0; 0 4 −4] = −72 ≠ 0, the vectors are linearly independent, so they do not lie in a plane.

(b) The equation k1v1 + k2v2 + k3v3 = 0 generates the homogeneous system
−6k1 + 3k2 + 4k3 = 0
7k1 + 2k2 − k3 = 0
2k1 + 4k2 + 2k3 = 0
Since det[−6 3 4; 7 2 −1; 2 4 2] = 0, the vectors are linearly dependent, so they do lie in a plane.

7. (a) The equation k1v1 + k2v2 + k3v3 = 0 generates the homogeneous system
6k2 + 4k3 = 0
3k1 − 7k3 = 0
k1 + 5k2 + k3 = 0
−k1 + k2 + 3k3 = 0
The matrix [0 6 4 0; 3 0 −7 0; 1 5 1 0; −1 1 3 0] reduces to [1 0 −7/3 0; 0 1 2/3 0; 0 0 0 0; 0 0 0 0]. Since the system has a nontrivial solution, the vectors are linearly dependent.

(b) The solution of the system is k1 = (7/3)t, k2 = −(2/3)t, k3 = t.
t = 3/7: v1 = (2/7)v2 − (3/7)v3
t = −3/2: v2 = (7/2)v1 + (3/2)v3
t = 1: v3 = −(7/3)v1 + (2/3)v2

9. The equation k1v1 + k2v2 + k3v3 = 0 generates the homogeneous system
λk1 − (1/2)k2 − (1/2)k3 = 0
−(1/2)k1 + λk2 − (1/2)k3 = 0
−(1/2)k1 − (1/2)k2 + λk3 = 0
det[λ −1/2 −1/2; −1/2 λ −1/2; −1/2 −1/2 λ] = (1/4)(4λ³ − 3λ − 1) = (1/4)(λ − 1)(2λ + 1)²
For λ = 1 and λ = −1/2, the determinant is zero and the vectors form a linearly dependent set.

11. Let {va, vb, …, vn} be a (nonempty) subset of S. If this set were linearly dependent, then there would be a nonzero solution (ka, kb, …, kn) to kava + kbvb + ⋯ + knvn = 0. This can be expanded to a nonzero solution of k1v1 + k2v2 + ⋯ + krvr = 0 by taking all other coefficients as 0. This contradicts the linear independence of S, so the subset must be linearly independent.

13. If S is linearly dependent, then there is a nonzero solution (k1, k2, …, kr) to k1v1 + k2v2 + ⋯ + krvr = 0. Thus (k1, k2, …, kr, 0, 0, …, 0) is a nonzero solution to
k1v1 + k2v2 + ⋯ + krvr + kr+1vr+1 + ⋯ + knvn = 0
so the set {v1, v2, …, vr, vr+1, …, vn} is linearly dependent.

15. If {v1, v2, v3} were linearly dependent, there would be a nonzero solution to k1v1 + k2v2 + k3v3 = 0 with k3 ≠ 0 (since v1 and v2 are linearly independent). Solving for v3 gives v3 = −(k1/k3)v1 − (k2/k3)v2, which contradicts that v3 is not in span{v1, v2}. Thus, {v1, v2, v3} is linearly independent.

19. (a) If v1, v2, and v3 are placed with their initial points at the origin, they will not lie in the same plane, so they are linearly independent.

(b) If v1, v2, and v3 are placed with their initial points at the origin, they will lie in the same plane, so they are linearly dependent. Alternatively, note that v1 + v2 = v3 by placing v1 along the z-axis with its terminal point at the origin.

21. The Wronskian is
W(x) = |x cos x; 1 −sin x| = −x sin x − cos x.
W(0) = −cos 0 = −1, so the functions are linearly independent.

23. (a) W(x) = det[1 x e^x; 0 1 e^x; 0 0 e^x] = e^x
Since W(x) = e^x is nonzero for all x, the vectors are linearly independent.

(b) W(x) = det[1 x x²; 0 1 2x; 0 0 2] = 2
Since W(x) = 2 is nonzero for all x, the vectors are linearly independent.

25. W(x) = det[x cos x, sin x, cos x; cos x − x sin x, cos x, −sin x; −x cos x − 2 sin x, −sin x, −cos x]
Adding the first row to the third row gives the third row [−2 sin x, 0, 0], so
W(x) = −2 sin x |sin x cos x; cos x −sin x| = −2 sin x(−sin²x − cos²x) = 2 sin x.
Since 2 sin x is not identically zero, the vectors are linearly independent, hence span a three-dimensional subspace of F(−∞, ∞). Note that the determinant was simplified by adding the first row to the third row.

True/False 4.3

(a) False; the set {0} is linearly dependent.

(b) True

(c) False; the set {v1, v2} in R², where v1 = (0, 1) and v2 = (0, 2), is linearly dependent.

(d) True; suppose {kv1, kv2, kv3} were linearly dependent; then there would be a nontrivial solution to k1(kv1) + k2(kv2) + k3(kv3) = 0, which would give a nontrivial solution to k1v1 + k2v2 + k3v3 = 0.

(e) True; consider the sets {v1, v2}, {v1, v2, v3}, ..., {v1, v2, …, vn}. If {v1, v2} is linearly dependent, then v2 = kv1 for some k and the condition is met. If v2 ≠ kv1, then there is some 2 < k ≤ n for which the set {v1, v2, …, vk−1} is linearly independent but {v1, v2, …, vk} is linearly dependent. This means that there is a nontrivial solution of a1v1 + a2v2 + ⋯ + ak−1vk−1 + akvk = 0 with ak ≠ 0, and the equation can be solved to give vk as a linear combination of v1, v2, ..., vk−1.

(f) False; the set is
{[1 1; 0 0], [1 0; 1 0], [1 0; 0 1], [0 1; 1 0], [0 1; 0 1], [0 0; 1 1]}
and
1[1 1; 0 0] + 1[0 0; 1 1] + (−1)[1 0; 0 1] + (−1)[0 1; 1 0] = [0 0; 0 0]
so the set is linearly dependent.

(g) True; the first polynomial has a nonzero constant term, so it cannot be expressed as a linear combination of the others, and since the two others are not scalar multiples of one another, the three polynomials are linearly independent.

(h) False; the functions f1 and f2 are linearly dependent if there are scalars k1 and k2, not both zero, such that k1f1(x) + k2f2(x) = 0 for all real numbers x.

Section 4.4

Exercise Set 4.4

1. (a) The set S = {u1, u 2 , u3} is not linearly independent; u3 = 2u1 + u 2 .

(b) The set S = {u1, u 2 } spans a plane in R3 , not all of R3 .

(c) The set S = {p1, p 2 } does not span P2 ; x 2 is not a linear combination of p1 and p 2 .

(d) The set S = {A, B, C, D, E} is not linearly independent; E can be written as linear combination of A, B, C, and

D.
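Several coordinate-vector problems later in this exercise set reduce to small triangular systems solved by back-substitution. A sketch with the system that arises in Exercise 9(a), kept exact with `fractions.Fraction`:

```python
from fractions import Fraction

# c1 + 2*c2 + 3*c3 = 2;  2*c2 + 3*c3 = -1;  3*c3 = 3  (back-substitution)
c3 = Fraction(3, 3)
c2 = Fraction(-1 - 3*c3) / 2
c1 = Fraction(2) - 2*c2 - 3*c3
print(c1, c2, c3)  # 3 -2 1
```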

3. (a) As in Example 3, it is sufficient to consider det[1 2 3; 0 2 3; 0 0 3] = 6 ≠ 0. Since this determinant is nonzero, the set of vectors is a basis for R3.

(b) It is sufficient to consider det[3 2 1; 1 5 4; −4 6 8] = 26 ≠ 0. Since this determinant is nonzero, the set of vectors is a basis for R3.

(c) It is sufficient to consider det[2 4 0; −3 1 −7; 1 1 1] = 0. Since this determinant is zero, the set of vectors is not linearly independent.

(d) It is sufficient to consider det[1 2 −1; 6 4 2; 4 −1 5] = 0. Since this determinant is zero, the set of vectors is not linearly independent.

5. The equations
c1[3 6; 3 −6] + c2[0 −1; −1 0] + c3[0 −8; −12 −4] + c4[1 0; −1 2] = [0 0; 0 0] and
c1[3 6; 3 −6] + c2[0 −1; −1 0] + c3[0 −8; −12 −4] + c4[1 0; −1 2] = [a b; c d]
generate the systems
3c1 + c4 = 0
6c1 − c2 − 8c3 = 0
3c1 − c2 − 12c3 − c4 = 0
−6c1 − 4c3 + 2c4 = 0
and
3c1 + c4 = a
6c1 − c2 − 8c3 = b
3c1 − c2 − 12c3 − c4 = c
−6c1 − 4c3 + 2c4 = d
which have the same coefficient matrix. Since det[3 0 0 1; 6 −1 −8 0; 3 −1 −12 −1; −6 0 −4 2] = 48 ≠ 0, the matrices are linearly independent and also span M22, so they are a basis.

7. (a) Since S is the standard basis for R², (w)S = (3, −7).

(b) Solve c1u1 + c2u2 = w, or in terms of components, (2c1 + 3c2, −4c1 + 8c2) = (1, 1). Equating components gives
2c1 + 3c2 = 1
−4c1 + 8c2 = 1
The matrix [2 3 1; −4 8 1] reduces to [1 0 5/28; 0 1 3/14]. Thus, (w)S = (5/28, 3/14).

(c) Solve c1u1 + c2u2 = w, or in terms of components, (c1, c1 + 2c2) = (a, b). It is clear that c1 = a, and solving a + 2c2 = b for c2 gives c2 = (b − a)/2. Thus, (w)S = (a, (b − a)/2).

9. (a) Solve c1v1 + c2v2 + c3v3 = v, or in terms of components, (c1 + 2c2 + 3c3, 2c2 + 3c3, 3c3) = (2, −1, 3). Equating components gives
c1 + 2c2 + 3c3 = 2
2c2 + 3c3 = −1
3c3 = 3
which can easily be solved by back-substitution to get c1 = 3, c2 = −2, c3 = 1. Thus, (v)S = (3, −2, 1).

(b) Solve c1v1 + c2v2 + c3v3 = v, or in terms of components, (c1 − 4c2 + 7c3, 2c1 + 5c2 − 8c3, 3c1 + 6c2 + 9c3) = (5, −12, 3). Equating components gives
c1 − 4c2 + 7c3 = 5
2c1 + 5c2 − 8c3 = −12
3c1 + 6c2 + 9c3 = 3
The matrix [1 −4 7 5; 2 5 −8 −12; 3 6 9 3] reduces to [1 0 0 −2; 0 1 0 0; 0 0 1 1]. The solution of the system is c1 = −2, c2 = 0, c3 = 1, and (v)S = (−2, 0, 1).

11. Solve c1A1 + c2A2 + c3A3 + c4A4 = A. By inspection, c3 = −1 and c4 = 3. Thus it remains to solve the system
−c1 + c2 = 2
c1 + c2 = 0
which can be solved by adding the equations to get c2 = 1, from which c1 = −1. Thus, (A)S = (−1, 1, −1, 3).

13. Solving c1A1 + c2A2 + c3A3 + c4A4 = [0 0; 0 0], c1A1 + c2A2 + c3A3 + c4A4 = [a b; c d], and c1A1 + c2A2 + c3A3 + c4A4 = A gives the systems
c1 = 0
c1 + c2 = 0
c1 + c2 + c3 = 0
c1 + c2 + c3 + c4 = 0
and
c1 = a
c1 + c2 = b
c1 + c2 + c3 = c
c1 + c2 + c3 + c4 = d
and
c1 = 1
c1 + c2 = 0
c1 + c2 + c3 = 1
c1 + c2 + c3 + c4 = 0
It is easy to see that det[1 0 0 0; 1 1 0 0; 1 1 1 0; 1 1 1 1] = 1 ≠ 0, so {A1, A2, A3, A4} spans M22. The third system is easy to solve by inspection to get c1 = 1, c2 = −1, c3 = 1, c4 = −1; thus, A = A1 − A2 + A3 − A4.

15. Solving the systems c1p1 + c2p2 + c3p3 = 0, c1p1 + c2p2 + c3p3 = a + bx + cx², and c1p1 + c2p2 + c3p3 = p gives the systems
c1 = 0
c1 + c2 = 0
c1 + c2 + c3 = 0
and
c1 = a
c1 + c2 = b
c1 + c2 + c3 = c
and
c1 = 7
c1 + c2 = −1
c1 + c2 + c3 = 2
It is easy to see that det[1 0 0; 1 1 0; 1 1 1] = 1 ≠ 0, so {p1, p2, p3} spans P2. The third system is easy to solve by inspection to get c1 = 7, c2 = −8, c3 = 3; thus, p = 7p1 − 8p2 + 3p3.

17. From the diagram, j = u2 and u1 = (cos 30°)i + (sin 30°)j = (√3/2)i + (1/2)j. Solving u1 = (√3/2)i + (1/2)u2 for i in terms of u1 and u2 gives i = (2/√3)u1 − (1/√3)u2.

(a) (√3, 1) is √3 i + j.
√3 i + j = √3((2/√3)u1 − (1/√3)u2) + u2 = 2u1 − u2 + u2 = 2u1
The x′y′-coordinates are (2, 0).

(b) (1, 0) is i, which has x′y′-coordinates (2/√3, −1/√3).

(c) Since (0, 1) = j = u2, the x′y′-coordinates are (0, 1).

(d) (a, b) is ai + bj.
ai + bj = a((2/√3)u1 − (1/√3)u2) + bu2 = (2a/√3)u1 + (b − a/√3)u2
The x′y′-coordinates are (2a/√3, b − a/√3).

True/False 4.4

(a) False; {v1, …, vn} must also be linearly independent to be a basis.

(b) False; the span of the set must also be V for the set to be a basis.

(c) True; a basis must span the vector space.

(d) True; the standard basis is used for the coordinate vectors in Rn.

(e) False; the set {1 + x + x² + x³ + x⁴, x + x² + x³ + x⁴, x² + x³ + x⁴, x³ + x⁴, x⁴} is a basis for P4.

Section 4.5

Exercise Set 4.5

1. The matrix [1 1 −1 0; −2 −1 2 0; −1 0 1 0] reduces to [1 0 −1 0; 0 1 0 0; 0 0 0 0]. The solution of the system is x1 = t, x2 = 0, x3 = t, which can be written as (x1, x2, x3) = (t, 0, t) or (x1, x2, x3) = t(1, 0, 1). Thus, the solution space has dimension 1 and a basis is (1, 0, 1).

3. The matrix [1 −4 3 −1 0; 2 −8 6 −2 0] reduces to [1 −4 3 −1 0; 0 0 0 0 0]. The solution of the system is x1 = 4r − 3s + t, x2 = r, x3 = s, x4 = t, which can be written as (x1, x2, x3, x4) = (4r − 3s + t, r, s, t) or (x1, x2, x3, x4) = r(4, 1, 0, 0) + s(−3, 0, 1, 0) + t(1, 0, 0, 1).
=

au1 + ⎜ b −

⎟ u2

3⎠

3

⎝

a ⎞

⎛ 2

a, b −

The x ′y ′-coordinates are ⎜

⎟.

3⎠

⎝ 3

( x1 , x2 , x3 , x4 )

= r (4, 1, 0, 0) + s(−3, 0, 1, 0) + t (1, 0, 0, 1).

Thus, the solution space has dimension 3 and a

basis is (4, 1, 0, 0), (−3, 0, 1, 0), (1, 0, 0, 1).

91
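Coordinate-vector computations like the one in Exercise 7(b) reduce to solving a small linear system whose coefficient columns are the basis vectors; a minimal numerical check with NumPy (basis u1 = (2, −4), u2 = (3, 8) and w = (1, 1) are taken from that exercise):

```python
import numpy as np

# Columns are the basis vectors u1 = (2, -4) and u2 = (3, 8).
A = np.array([[2.0, 3.0],
              [-4.0, 8.0]])
w = np.array([1.0, 1.0])

# Solving A c = w gives the coordinate vector (w)_S = (c1, c2).
c = np.linalg.solve(A, w)
print(c)  # approximately [5/28, 3/14]
```

The same pattern verifies Exercise 9(a) with a 3 × 3 coefficient matrix.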

5. The matrix [2 1 3 0; 1 0 5 0; 0 1 1 0] reduces to [1 0 0 0; 0 1 0 0; 0 0 1 0], so the system has only the trivial solution. The solution space has dimension 0 and no basis.

7. (a) Solving the equation for x gives x = (2/3)y − (5/3)z, so parametric equations are x = (2/3)r − (5/3)s, y = r, z = s, which can be written in vector form as (x, y, z) = ((2/3)r − (5/3)s, r, s) = r(2/3, 1, 0) + s(−5/3, 0, 1). A basis is (2/3, 1, 0), (−5/3, 0, 1).

(b) Solving the equation for x gives x = y, so parametric equations are x = r, y = r, z = s, which can be written in vector form as (x, y, z) = (r, r, s) = r(1, 1, 0) + s(0, 0, 1). A basis is (1, 1, 0), (0, 0, 1).

(c) In vector form, the line is (x, y, z) = (2t, −t, 4t) = t(2, −1, 4). A basis is (2, −1, 4).

(d) The vectors can be parametrized as a = r, b = r + s, c = s, or (a, b, c) = (r, r + s, s) = r(1, 1, 0) + s(0, 1, 1). A basis is (1, 1, 0), (0, 1, 1).

9. (a) The dimension is n, since a basis is the set {A1, A2, …, An} where the only nonzero entry in Ai is aii = 1.

(b) In a symmetric matrix, the elements above the main diagonal determine the elements below the main diagonal, so the entries on and above the main diagonal determine all the entries. There are n + (n − 1) + ⋯ + 1 = n(n + 1)/2 entries on or above the main diagonal, so the space has dimension n(n + 1)/2.

(c) As in part (b), there are n(n + 1)/2 entries on or above the main diagonal, so the space has dimension n(n + 1)/2.

11. (a) Let p1 = p1(x) and p2 = p2(x) be elements of W. Then (p1 + p2)(1) = p1(1) + p2(1) = 0, so p1 + p2 is in W. Also (kp1)(1) = k(p1(1)) = k · 0 = 0, so kp1 is in W. Thus, W is a subspace of P2.

(b), (c) A basis for W is {−1 + x, −x + x²}, so the dimension is 2.

13. Let u1 = (1, 0, 0, 0), u2 = (0, 1, 0, 0), u3 = (0, 0, 1, 0), u4 = (0, 0, 0, 1). Then the set {v1, v2, u1, u2, u3, u4} clearly spans R⁴. The equation c1v1 + c2v2 + k1u1 + k2u2 + k3u3 + k4u4 = 0 leads to the system

c1 − 3c2 + k1 = 0
−4c1 + 8c2 + k2 = 0
2c1 − 4c2 + k3 = 0
−3c1 + 6c2 + k4 = 0.

The matrix [1 −3 1 0 0 0 0; −4 8 0 1 0 0 0; 2 −4 0 0 1 0 0; −3 6 0 0 0 1 0] reduces to [1 0 −2 0 0 −1 0; 0 1 −1 0 0 −1/3 0; 0 0 0 1 0 −4/3 0; 0 0 0 0 1 2/3 0]. From the reduced matrix it is clear that u1 is a linear combination of v1 and v2, so it will not be in the basis, and that {v1, v2, u2, u3} is linearly independent, so it is one possible basis. Similarly, u4 and either u2 or u3 will produce a linearly independent set. Thus, any two of the vectors (0, 1, 0, 0), (0, 0, 1, 0), and (0, 0, 0, 1) can be used.
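The pivot-column argument in Exercise 13 can be checked with SymPy's rref, which reports the pivot columns directly (a sketch; the column order v1, v2, u1, u2, u3, u4 matches the matrix above):

```python
from sympy import Matrix

# Columns: v1, v2, u1, u2, u3, u4 from Exercise 13.
M = Matrix([[1, -3, 1, 0, 0, 0],
            [-4, 8, 0, 1, 0, 0],
            [2, -4, 0, 0, 1, 0],
            [-3, 6, 0, 0, 0, 1]])
R, pivots = M.rref()
print(pivots)  # (0, 1, 3, 4) -> v1, v2, u2, u3 form one possible basis
```

The non-pivot columns (u1 and u4) are the ones that are linear combinations of the columns to their left.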

15. Let v3 = (a, b, c). The equation c1v1 + c2v2 + c3v3 = 0 leads to the system

c1 + ac3 = 0
−2c1 + 5c2 + bc3 = 0
3c1 − 3c2 + cc3 = 0.

The matrix [1 0 a 0; −2 5 b 0; 3 −3 c 0] reduces to [1 0 a 0; 0 1 (2a + b)/5 0; 0 0 −(9/5)a + (3/5)b + c 0], and the homogeneous system has only the trivial solution if −(9/5)a + (3/5)b + c ≠ 0. Thus, adding any vector v3 = (a, b, c) with 9a − 3b − 5c ≠ 0 will create a basis for R³.

True/False 4.5

(a) True; by definition.

(b) True; any basis of R¹⁷ will contain 17 linearly independent vectors.

(c) False; to span R¹⁷, at least 17 vectors are required.

(d) True; since the dimension of R⁵ is 5, linear independence is sufficient for 5 vectors to be a basis.

(e) True; since the dimension of R⁵ is 5, spanning is sufficient for 5 vectors to be a basis.

(f) True; if the set contains n vectors, it is a basis, while if it contains more than n vectors, it can be reduced to a basis.

(g) True; if the set contains n vectors, it is a basis, while if it contains fewer than n vectors, it can be enlarged to a basis.

(h) True; the set {[1 0; 0 1], [1 0; 0 −1], [0 1; 1 0], [0 1; −1 0]} forms a basis for M22.

(i) True; the set of n × n matrices has dimension n², while the set {I, A, A², …, A^(n²)} contains n² + 1 matrices.

(j) False; since P2 has dimension three, then by Theorem 4.5.6(c), the only three-dimensional subspace of P2 is P2 itself.

Section 4.6

Exercise Set 4.6

1. (a) Since S = {u1, u2} is the standard basis for R², [w]S = [3; −7].

(b) The matrix [2 3 1 0; −4 8 0 1] reduces to [1 0 2/7 −3/28; 0 1 1/7 1/14]. Thus PB→S = [2/7 −3/28; 1/7 1/14], where B is the standard basis, and
[w]S = PB→S [w]B = [2/7 −3/28; 1/7 1/14][1; 1] = [5/28; 3/14].

(c) The matrix [1 0 1 0; 1 2 0 1] reduces to [1 0 1 0; 0 1 −1/2 1/2]. Thus PB→S = [1 0; −1/2 1/2] and
[w]S = PB→S [w]B = [1 0; −1/2 1/2][a; b] = [a; −(1/2)a + (1/2)b].

3. (a) Since S is the standard basis B for P2, (p)S = (4, −3, 1) or [p]S = [4; −3; 1].

(b) The matrix [1 1 0 1 0 0; 1 0 1 0 1 0; 0 1 1 0 0 1] reduces to [1 0 0 1/2 1/2 −1/2; 0 1 0 1/2 −1/2 1/2; 0 0 1 −1/2 1/2 1/2]. Thus PB→S = [1/2 1/2 −1/2; 1/2 −1/2 1/2; −1/2 1/2 1/2] and
[p]S = PB→S [p]B = [1/2 1/2 −1/2; 1/2 −1/2 1/2; −1/2 1/2 1/2][2; −1; 1] = [0; 2; −1],
or (p)S = (0, 2, −1).

5. (a) From Theorem 4.6.2, PS→B = [1 2 3; 0 2 3; 0 0 3] where B is the standard basis for R³. Thus
[w]B = PS→B [w]S = [1 2 3; 0 2 3; 0 0 3][6; −1; 4] = [16; 10; 12]
and w = (16, 10, 12).

(b) The basis in Exercise 3(a) is the standard basis B for P2, so [q]B = [q]S = [3; 0; 4] and q = 3 + 4x².

(c) From Theorem 4.6.2, PS→E = [−1 1 0 0; 1 1 0 0; 0 0 1 0; 0 0 0 1] where E is the standard basis for M22. Thus
[B]E = PS→E [B]S = [−1 1 0 0; 1 1 0 0; 0 0 1 0; 0 0 0 1][−8; 7; 6; 3] = [15; −1; 6; 3]
and B = [15 −1; 6 3].

7. (a) The matrix [2 4 1 −1; 2 −1 3 −1] reduces to [1 0 13/10 −1/2; 0 1 −2/5 0], so PB′→B = [13/10 −1/2; −2/5 0].

(b) The matrix [1 −1 2 4; 3 −1 2 −1] reduces to [1 0 0 −5/2; 0 1 −2 −13/2], so PB→B′ = [0 −5/2; −2 −13/2].

(c) The matrix [2 4 1 0; 2 −1 0 1] reduces to [1 0 1/10 2/5; 0 1 1/5 −1/5], so PE→B = [1/10 2/5; 1/5 −1/5] where E is the standard basis for R².
[w]B = PE→B [w]E = [1/10 2/5; 1/5 −1/5][3; −5] = [−17/10; 8/5]
[w]B′ = PB→B′ [w]B = [0 −5/2; −2 −13/2][−17/10; 8/5] = [−4; −7]
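Reducing [B′ | B] as in Exercise 7(b) computes (B′)⁻¹B in one step, which is exactly the transition matrix P_{B→B′}; a NumPy sketch of that part:

```python
import numpy as np

# B' and B basis vectors as columns (Exercise 7(b)).
Bp = np.array([[1.0, -1.0],
               [3.0, -1.0]])
B = np.array([[2.0, 4.0],
              [2.0, -1.0]])

# P_{B->B'} solves B' X = B, i.e. X = (B')^{-1} B.
P = np.linalg.solve(Bp, B)
print(P)  # [[0, -2.5], [-2, -6.5]], i.e. [0 -5/2; -2 -13/2]
```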

(d) The matrix [1 −1 1 0; 3 −1 0 1] reduces to [1 0 −1/2 1/2; 0 1 −3/2 1/2], so PE→B′ = [−1/2 1/2; −3/2 1/2].
[w]B′ = PE→B′ [w]E = [−1/2 1/2; −3/2 1/2][3; −5] = [−4; −7]

9. (a) The matrix [3 1 −1 1 0 0; 1 1 0 0 1 0; −5 −3 2 0 0 1] reduces to [1 0 0 1 1/2 1/2; 0 1 0 −1 1/2 −1/2; 0 0 1 1 2 1], so PE→B′ = [1 1/2 1/2; −1 1/2 −1/2; 1 2 1].
[w]B′ = PE→B′ [w]E = [1 1/2 1/2; −1 1/2 −1/2; 1 2 1][−5; 8; −5] = [−7/2; 23/2; 6]

(b) The matrix [2 2 1 1 0 0; 1 −1 2 0 1 0; 1 1 1 0 0 1] reduces to [1 0 0 3/2 1/2 −5/2; 0 1 0 −1/2 −1/2 3/2; 0 0 1 −1 0 2], so PE→B = [3/2 1/2 −5/2; −1/2 −1/2 3/2; −1 0 2] where E is the standard basis for R³.
[w]B = PE→B [w]E = [3/2 1/2 −5/2; −1/2 −1/2 3/2; −1 0 2][−5; 8; −5] = [9; −9; −5]

(c) The matrix [3 1 −1 2 2 1; 1 1 0 1 −1 2; −5 −3 2 1 1 1] reduces to [1 0 0 3 2 5/2; 0 1 0 −2 −3 −1/2; 0 0 1 5 1 6], so PB→B′ = [3 2 5/2; −2 −3 −1/2; 5 1 6].
[w]B′ = PB→B′ [w]B = [3 2 5/2; −2 −3 −1/2; 5 1 6][9; −9; −5] = [−7/2; 23/2; 6]

11. (a) The span of f1 and f2 is the set of all linear combinations af1 + bf2 = a sin x + b cos x, and this vector can be represented by (a, b). Since g1 = 2f1 + f2 and g2 = 3f2, it is sufficient to compute det [2 1; 0 3] = 6. Since this determinant is nonzero, g1 and g2 form a basis for V.

(b) Since B can be represented as {(1, 0), (0, 1)}, PB′→B = [2 0; 1 3].

(c) The matrix [2 0 1 0; 1 3 0 1] reduces to [1 0 1/2 0; 0 1 −1/6 1/3], so PB→B′ = [1/2 0; −1/6 1/3].

(d) [h]B = [2; −5] since B is the standard basis for V.
[h]B′ = PB→B′ [h]B = [1/2 0; −1/6 1/3][2; −5] = [1; −2]

13. (a) PB→S = [1 2 3; 2 5 3; 1 0 8]

(b) The matrix [1 2 3 1 0 0; 2 5 3 0 1 0; 1 0 8 0 0 1] reduces to [1 0 0 −40 16 9; 0 1 0 13 −5 −3; 0 0 1 5 −2 −1], so PS→B = [−40 16 9; 13 −5 −3; 5 −2 −1].

(d) [w]B = PS→B [w]S = [−40 16 9; 13 −5 −3; 5 −2 −1][5; −3; 1] = [−239; 77; 30]
[w]S = PB→S [w]B = [1 2 3; 2 5 3; 1 0 8][−239; 77; 30] = [5; −3; 1]

(e) [w]S = [3; −5; 0]
[w]B = PS→B [w]S = [−40 16 9; 13 −5 −3; 5 −2 −1][3; −5; 0] = [−200; 64; 25]

15. (a) The matrix [1 2 1 1; 2 3 3 4] reduces to [1 0 3 5; 0 1 −1 −2], so PB2→B1 = [3 5; −1 −2].

(b) The matrix [1 1 1 2; 3 4 2 3] reduces to [1 0 2 5; 0 1 −1 −3], so PB1→B2 = [2 5; −1 −3].

(d) The matrix [1 2 1 0; 2 3 0 1] reduces to [1 0 −3 2; 0 1 2 −1], so PS→B1 = [−3 2; 2 −1] where S is the standard basis {(1, 0), (0, 1)} for R². Since w is the standard basis vector (0, 1) for R², [w]B1 = [2; −1].
[w]B2 = PB1→B2 [w]B1 = [2 5; −1 −3][2; −1] = [−1; 1]

(e) The matrix [1 1 1 0; 3 4 0 1] reduces to [1 0 4 −1; 0 1 −3 1], so PS→B2 = [4 −1; −3 1].
[w]B2 = PS→B2 [w]S = [4 −1; −3 1][2; 5] = [3; −1]
[w]B1 = PB2→B1 [w]B2 = [3 5; −1 −2][3; −1] = [4; −1]
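In Exercise 13, P_{S→B} is simply the inverse of P_{B→S}; a quick NumPy check of parts (b) and (d):

```python
import numpy as np

# P_{B->S} has the B basis vectors as its columns (Exercise 13(a)).
P_BS = np.array([[1.0, 2.0, 3.0],
                 [2.0, 5.0, 3.0],
                 [1.0, 0.0, 8.0]])
P_SB = np.linalg.inv(P_BS)  # transition matrix in the other direction

w_S = np.array([5.0, -3.0, 1.0])
print(P_SB @ w_S)  # [-239, 77, 30]
```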

17. (a) The matrix [3 1 −1 2 2 1; 1 1 0 1 −1 2; −5 −3 2 1 1 1] reduces to [1 0 0 3 2 5/2; 0 1 0 −2 −3 −1/2; 0 0 1 5 1 6], so PB1→B2 = [3 2 5/2; −2 −3 −1/2; 5 1 6].

(b) The matrix [2 2 1 1 0 0; 1 −1 2 0 1 0; 1 1 1 0 0 1] reduces to [1 0 0 3/2 1/2 −5/2; 0 1 0 −1/2 −1/2 3/2; 0 0 1 −1 0 2], so PS→B1 = [3/2 1/2 −5/2; −1/2 −1/2 3/2; −1 0 2] where S is the standard basis for R³.
[w]B1 = PS→B1 [w]S = [3/2 1/2 −5/2; −1/2 −1/2 3/2; −1 0 2][−5; 8; −5] = [9; −9; −5]

(c) The matrix [3 1 −1 1 0 0; 1 1 0 0 1 0; −5 −3 2 0 0 1] reduces to [1 0 0 1 1/2 1/2; 0 1 0 −1 1/2 −1/2; 0 0 1 1 2 1], so PS→B2 = [1 1/2 1/2; −1 1/2 −1/2; 1 2 1].
[w]B2 = PS→B2 [w]S = [1 1/2 1/2; −1 1/2 −1/2; 1 2 1][−5; 8; −5] = [−7/2; 23/2; 6]
Also, [w]B2 = PB1→B2 [w]B1 = [3 2 5/2; −2 −3 −1/2; 5 1 6][9; −9; −5] = [−7/2; 23/2; 6].

19. (a) (The text shows two diagrams: one with e1 and its image v1, the other with e2 and its image v2.) From the first diagram, it is clear that [v1]S = (cos 2θ, sin 2θ). From the second diagram, the angle between the positive y-axis and v2 is 2(π/2 − θ) = π − 2θ, so the angle between v2 and the positive x-axis is π/2 − (π − 2θ) = 2θ − π/2, and
[v2]S = (cos(2θ − π/2), sin(2θ − π/2)) = (sin 2θ, −cos 2θ).
Thus by Theorem 4.6.2,
PB→S = [cos 2θ sin 2θ; sin 2θ −cos 2θ].
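The matrix found in Exercise 19 is the standard matrix of the reflection about the line through the origin making angle θ with the x-axis, so applying it twice must give the identity; this makes a simple numerical sanity check (θ = 0.3 is an arbitrary test value, not one from the text):

```python
import numpy as np

theta = 0.3  # arbitrary angle for the check
c, s = np.cos(2 * theta), np.sin(2 * theta)
P = np.array([[c, s],
              [s, -c]])

# A reflection is its own inverse, so P @ P should be the identity.
print(P @ P)
```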

21. If w is a vector in the space and [w]B′ is the coordinate vector of w relative to B′, then [w]B = P[w]B′ and [w]C = Q[w]B, so [w]C = Q(P[w]B′) = QP[w]B′. The transition matrix from B′ to C is QP. The transition matrix from C to B′ is (QP)⁻¹ = P⁻¹Q⁻¹.

23. (a) By Theorem 4.6.2, P is the transition matrix from B = {(1, 1, 0), (1, 0, 2), (0, 2, 1)} to S.

(b) The matrix [1 1 0 1 0 0; 1 0 2 0 1 0; 0 2 1 0 0 1] reduces to [1 0 0 4/5 1/5 −2/5; 0 1 0 1/5 −1/5 2/5; 0 0 1 −2/5 2/5 1/5]. Thus, P is the transition matrix from S to B = {(4/5, 1/5, −2/5), (1/5, −1/5, 2/5), (−2/5, 2/5, 1/5)}.

27. B must be the standard basis for Rⁿ since, in particular, [ei]B = ei for the standard basis vectors e1, e2, …, en.

True/False 4.6

(a) True

(b) True; by Theorem 4.6.1.

(c) True

(d) True

(e) False; consider the bases S = {e1, e2, e3} and B = {2e3, 3e1, 5e2} for R³; then PB→S = [0 3 0; 0 0 5; 2 0 0], which is not a diagonal matrix.

(f) False; A would have to be invertible, not just square.

Section 4.7

Exercise Set 4.7

1. Row vectors: r1 = [2 −1 0 1], r2 = [3 5 7 −1], r3 = [1 4 2 7]
Column vectors: c1 = [2; 3; 1], c2 = [−1; 5; 4], c3 = [0; 7; 2], c4 = [1; −1; 7]

3. (a) The matrix [1 3 −2; 4 −6 10] reduces to [1 0 1; 0 1 −1], so [−2; 10] = [1; 4] − [3; −6].

(b) The matrix [1 1 2 −1; 1 0 1 0; 2 1 3 2] reduces to [1 0 1 0; 0 1 1 0; 0 0 0 1], so the system Ax = b is inconsistent and b is not in the column space of A.

(c) The matrix [1 −1 1 5; 9 3 1 1; 1 1 1 −1] reduces to [1 0 0 1; 0 1 0 −3; 0 0 1 1], so [5; 1; −1] = [1; 9; 1] − 3[−1; 3; 1] + [1; 1; 1].

(d) The matrix [1 −1 1 2; 1 1 −1 0; −1 −1 1 0] reduces to [1 0 0 1; 0 1 −1 −1; 0 0 0 0], so the system A[x1; x2; x3] = b has the solution x1 = 1, x2 = t − 1, x3 = t. Thus [2; 0; 0] = [1; 1; −1] + (t − 1)[−1; 1; −1] + t[1; −1; 1].
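Whether b lies in the column space of A (Exercise 3) is exactly the consistency test rank(A) = rank([A | b]); a NumPy sketch for part (b):

```python
import numpy as np

# Exercise 3(b): is b in the column space of A?
A = np.array([[1.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 3.0]])
b = np.array([-1.0, 0.0, 2.0])

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
print(rank_A, rank_Ab)  # 2 3 -> inconsistent, so b is not in the column space
```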

SSM: Elementary Linear Algebra

⎡1

⎢0

(e) ⎢

⎢1

⎢⎣0

2

1

2

1

0

2

1

2

1

1

3

2

⎡1

⎢0

⎢

⎢0

⎢⎣0

0

1

0

0

0

0

1

0

0 −26 ⎤

0

13⎥

⎥ , so

0 −7 ⎥

1

4 ⎥⎦

Section 4.7

x3 = s, x4 = t , or in vector form

4⎤

3⎥

⎥ reduces to

5⎥

7 ⎥⎦

⎡ −1⎤

⎡2⎤

⎡ −1⎤ ⎡ −2 ⎤

⎢ 0⎥

⎢ 1⎥

⎢ 0⎥ ⎢ 0⎥

x = ⎢ ⎥ + r ⎢ ⎥ + s⎢ ⎥ +t ⎢ ⎥.

⎢ 0⎥

⎢0⎥

⎢ 1⎥ ⎢ 0 ⎥

⎢⎣ 0 ⎥⎦

⎢⎣ 0 ⎥⎦

⎢⎣ 0 ⎥⎦ ⎢⎣ 1⎥⎦

The general form of the solution of Ax = 0 is

⎡2⎤

⎡ −1⎤ ⎡ −2 ⎤

⎢ 1⎥

⎢ 0⎥ ⎢ 0⎥

x = r ⎢ ⎥ + s ⎢ ⎥ +t ⎢ ⎥.

0

⎢ ⎥

⎢ 1⎥ ⎢ 0 ⎥

⎣⎢ 0 ⎦⎥

⎣⎢ 0 ⎦⎥ ⎣⎢ 1⎦⎥

⎡4⎤

⎡ 1⎤

⎡ 2⎤

⎡ 0⎤

⎡ 1⎤

⎢ 3⎥

⎢0⎥

⎢ 1⎥

⎢ 2⎥

⎢ 1⎥

⎢ ⎥ = −26 ⎢ ⎥ + 13 ⎢ ⎥ − 7 ⎢ ⎥ + 4 ⎢ ⎥ .

⎢ 5⎥

⎢ 1⎥

⎢ 2⎥

⎢ 1⎥

⎢ 3⎥

⎣⎢7 ⎦⎥

⎣⎢ 0 ⎦⎥

⎣⎢ 1⎦⎥

⎣⎢ 2 ⎦⎥

⎣⎢ 2 ⎦⎥

1 4⎤

⎡ 1 2 −3

⎢ −2

1 2

1 −1⎥

(d) ⎢

⎥ reduces to

3⎥

⎢ −1 3 −1 2

⎣⎢ 4 −7 0 −5 −5⎦⎥

⎡ 1 −3 1⎤

⎡ 1 −3 1⎤

5. (a) ⎢

reduces to ⎢

, and

⎥

⎣ 2 −6 2 ⎦

⎣0 0 0 ⎥⎦

the solution of Ax = b is x1 = 1 + 3t , x2 = t ,

⎡1

⎢

⎢0

⎢

⎢0

⎢⎣0

⎡ 1⎤ ⎡3⎤

or in vector form, x = ⎢ ⎥ + t ⎢ ⎥ .

⎣ 0⎦ ⎣ 1⎦

The general form of the solution of Ax = 0 is

⎡3⎤

x = t ⎢ ⎥.

⎣ 1⎦

0 − 75

1 − 45

− 15

3

5

6⎤

5⎥

7⎥

5

**and the solution of
**

⎥

0 0⎥

0 0 ⎥⎦

6 7

1

Ax = b is x1 = + s + t ,

5 5

5

7 4

3

x2 = + s − t , x3 = s, x4 = t , or in

5 5

5

⎡ 1 1 2 5⎤

(b) ⎢ 1 0 1 −2 ⎥ reduces to

⎢

⎥

⎣ 2 1 3 3⎦

⎡ 1 0 1 −2 ⎤

⎢0 1 1 7 ⎥ , and the solution of

⎢

⎥

⎣0 0 0 0 ⎦

Ax = b is x1 = −2 − t , x2 = 7 − t , x3 = t , or

0

0

0

0

⎡6⎤

⎡7⎤ ⎡ 1⎤

⎢5⎥

⎢5⎥ ⎢ 5⎥

7⎥

3

4

⎢

vector form, x = 5 + s ⎢ 5 ⎥ + t ⎢ − 5 ⎥ .

⎢ ⎥

⎢ ⎥ ⎢ ⎥

⎢ 0⎥

⎢ 1⎥ ⎢ 0 ⎥

⎢⎣ 0 ⎥⎦ ⎢⎣ 1⎥⎦

⎢⎣ 0⎥⎦

The general form of the solution of Ax = 0 is

⎡7⎤ ⎡ 1⎤

⎢5⎥ ⎢ 5⎥

3

4

x = s ⎢ 5 ⎥ + t ⎢− 5 ⎥ .

⎢ ⎥ ⎢ ⎥

⎢ 1⎥ ⎢ 0 ⎥

⎢⎣ 0⎥⎦ ⎢⎣ 1⎥⎦

⎡ −2 ⎤ ⎡ −1⎤

in vector form, x = ⎢ 7 ⎥ + t ⎢ −1⎥ . The

⎢ ⎥ ⎢ ⎥

⎣ 0 ⎦ ⎣ 1⎦

general form of the solution of Ax = 0 is

⎡ −1⎤

x = t ⎢ −1⎥ .

⎢ ⎥

⎣ 1⎦

**7. (a) A basis for the row space is r1 = [1 0 2],
**

r2 = [0 0 1]. A basis for the column

⎡ 1 −2 1 2 −1⎤

⎢ 2 −4 2 4 −2 ⎥

(c) ⎢

⎥ reduces to

1⎥

⎢ −1 2 −1 −2

⎢⎣ 3 −6 3 6 −3⎥⎦

⎡ 1⎤

⎡2⎤

space is c1 = ⎢0 ⎥ , c2 = ⎢ 1⎥ .

⎢ ⎥

⎢ ⎥

⎣0 ⎦

⎣0⎦

⎡ 1 −2 1 2 −1⎤

⎢0 0 0 0 0 ⎥

⎢

⎥ , and the solution of

⎢0 0 0 0 0 ⎥

⎣⎢0 0 0 0 0 ⎦⎥

Ax = b is x1 = −1 + 2r − s − 2t , x2 = r ,

99
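Exercise 5 illustrates that every solution of Ax = b is a particular solution plus a solution of the homogeneous system Ax = 0; a SymPy check for part (b):

```python
from sympy import Matrix

# Exercise 5(b): A x = b and the associated homogeneous system.
A = Matrix([[1, 1, 2], [1, 0, 1], [2, 1, 3]])
b = Matrix([5, -2, 3])

particular = Matrix([-2, 7, 0])   # the t = 0 solution found above
null_basis = A.nullspace()        # basis of the solution space of A x = 0
print(null_basis[0].T)            # Matrix([[-1, -1, 1]])
```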

(b) A basis for the row space is r1 = [1 −3 0 0], r2 = [0 1 0 0]. A basis for the column space is c1 = [1; 0; 0; 0], c2 = [−3; 1; 0; 0].

(c) A basis for the row space is r1 = [1 2 4 5], r2 = [0 1 −3 0], r3 = [0 0 1 −3], r4 = [0 0 0 1]. A basis for the column space is c1 = [1; 0; 0; 0; 0], c2 = [2; 1; 0; 0; 0], c3 = [4; −3; 1; 0; 0], c4 = [5; 0; −3; 1; 0].

(d) A basis for the row space is r1 = [1 2 −1 5], r2 = [0 1 4 3], r3 = [0 0 1 −7], r4 = [0 0 0 1]. A basis for the column space is c1 = [1; 0; 0; 0], c2 = [2; 1; 0; 0], c3 = [−1; 4; 1; 0], c4 = [5; 3; −7; 1].

9. (a) A basis for the row space is r1 = [1 0 2], r2 = [0 0 1]. A basis for the column space is c1 = [1; 0; 0], c2 = [2; 1; 0].

(b) A basis for the row space is r1 = [1 −3 0 0], r2 = [0 1 0 0]. A basis for the column space is c1 = [1; 0; 0; 0], c2 = [−3; 1; 0; 0].

(c) A basis for the row space is r1 = [1 2 4 5], r2 = [0 1 −3 0], r3 = [0 0 1 −3], r4 = [0 0 0 1]. A basis for the column space is c1 = [1; 0; 0; 0; 0], c2 = [2; 1; 0; 0; 0], c3 = [4; −3; 1; 0; 0], c4 = [5; 0; −3; 1; 0].

(d) A basis for the row space is r1 = [1 2 −1 5], r2 = [0 1 4 3], r3 = [0 0 1 −7], r4 = [0 0 0 1]. A basis for the column space is c1 = [1; 0; 0; 0], c2 = [2; 1; 0; 0], c3 = [−1; 4; 1; 0], c4 = [5; 3; −7; 1].

11. (a) A row echelon form of [1 1 −4 −3; 2 0 2 −2; 2 −1 3 2] is [1 1 −4 −3; 0 1 −5 −2; 0 0 1 −1/2]. Thus a basis for the subspace is (1, 1, −4, −3), (0, 1, −5, −2), (0, 0, 1, −1/2).
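A basis for a row space can also be read off from the reduced row echelon form; applied to the matrix of Exercise 11(a) it produces an equivalent, though different-looking, basis (a SymPy sketch):

```python
from sympy import Matrix, Rational

# Exercise 11(a): the subspace spanned by the given vectors.
M = Matrix([[1, 1, -4, -3],
            [2, 0, 2, -2],
            [2, -1, 3, 2]])
R, pivots = M.rref()
print(R)  # the nonzero rows of R form a basis for the row space of M
```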

(b) A row echelon form of [−1 1 −2 0; 3 3 6 0; 9 0 0 3] is [1 −1 2 0; 0 1 0 0; 0 0 1 −1/6]. Thus a basis for the subspace is (1, −1, 2, 0), (0, 1, 0, 0), (0, 0, 1, −1/6).

(c) A row echelon form of [1 1 0 0; 0 0 1 1; −2 0 2 2; 0 −3 0 3] is [1 1 0 0; 0 1 1 1; 0 0 1 1; 0 0 0 1]. Thus a basis for the subspace is (1, 1, 0, 0), (0, 1, 1, 1), (0, 0, 1, 1), (0, 0, 0, 1).

15. (a) The row echelon form of A is [1 0 0; 0 1 0; 0 0 0], so the general solution of the system Ax = 0 is x = 0, y = 0, z = t, or t[0; 0; 1]. Thus the null space of A is the z-axis, and the column space is the span of the first two columns c1 and c2 of A, which is the xy-plane.

(b) Reversing the reasoning in part (a), it is clear that one such matrix is [0 0 0; 0 1 0; 0 0 1].

17. (a) If the null space of a matrix A is the given line, then the equation Ax = 0 is equivalent to [3 −5; 0 0][x; y] = [0; 0], and a general form for A is [3a −5a; 3b −5b] where a and b are real numbers, not both of which are zero.

(b) det(A) = det(B) = 5. A and B are invertible, so the systems Ax = 0 and Bx = 0 have only the trivial solution; that is, the null spaces of A and B are both the origin. The second row of C is a multiple of the first, and it is clear that if 3x + y = 0, then (x, y) is in the null space of C; i.e., the null space of C is the line 3x + y = 0. The equation Dx = 0 has all x = [x; y] as solutions, so the null space of D is the entire xy-plane.

True/False 4.7

(a) True; by the definition of column space.

(b) False; the system Ax = b may not be consistent.

(c) False; the column vectors of A that correspond to the column vectors of R containing the leading 1’s form a basis for the column space of A.

(d) False; the row vectors of A may be linearly dependent if A is not in row echelon form.

(e) False; the matrices A = [1 3; 0 0] and B = [1 3; 2 6] have the same row space but different column spaces.

(f) True; elementary row operations do not change the null space of a matrix.

(g) True; elementary row operations do not change the row space of a matrix.

(h) False; see (e) for an example.

(i) True; this is the contrapositive of Theorem 4.7.1.

(j) False; assuming that A and B are both n × n matrices, the row space of A will have n vectors in its basis, since being invertible means that A is row equivalent to In. However, since B is singular, it is not row equivalent to In, so in reduced row echelon form it has at least one row of zeros. Thus the row space of B will have fewer than n vectors in its basis.

Section 4.8

Exercise Set 4.8

1. The reduced row echelon form of A is [1 0 −6/7 −4/7; 0 1 17/7 2/7; 0 0 0 0], so rank(A) = 2. Aᵀ = [1 −3 −2; 2 1 3; 4 5 9; 0 2 2] and the reduced row echelon form of Aᵀ is [1 0 1; 0 1 1; 0 0 0; 0 0 0], so rank(Aᵀ) = rank(A) = 2.

3. (a) Since rank(A) = 2, there are 2 leading variables. Since nullity(A) = 3 − 2 = 1, there is 1 parameter.

(b) Since rank(A) = 1, there is 1 leading variable. Since nullity(A) = 3 − 1 = 2, there are 2 parameters.

(c) Since rank(A) = 2, there are 2 leading variables. Since nullity(A) = 4 − 2 = 2, there are 2 parameters.

(d) Since rank(A) = 2, there are 2 leading variables. Since nullity(A) = 5 − 2 = 3, there are 3 parameters.

(e) Since rank(A) = 3, there are 3 leading variables. Since nullity(A) = 5 − 3 = 2, there are 2 parameters.

5. (a) Since A is 4 × 4, rank(A) + nullity(A) = 4, and the largest possible value of rank(A) is 4, so the smallest possible value of nullity(A) is 0.

(b) Since A is 3 × 5, rank(A) + nullity(A) = 5, and the largest possible value of rank(A) is 3, so the smallest possible value of nullity(A) is 2.

(c) Since A is 5 × 3, rank(A) + nullity(A) = 3, and the largest possible value of rank(A) is 3, so the smallest possible value of nullity(A) is 0.

7. (a) Since rank(A) = rank[A b], the system is consistent, and since nullity(A) = 3 − 3 = 0, there are 0 parameters in the general solution.

(b) Since rank(A) < rank[A b], the system is inconsistent.

(c) Since rank(A) = rank[A b], the system is consistent, and since nullity(A) = 3 − 1 = 2, there are 2 parameters in the general solution.

(d) Since rank(A) = rank[A b], the system is consistent, and since nullity(A) = 9 − 2 = 7, there are 7 parameters in the general solution.

(e) Since rank(A) < rank[A b], the system is inconsistent.

(f) Since rank(A) = rank[A b], the system is consistent, and since nullity(A) = 4 − 0 = 4, there are 4 parameters in the general solution.

(g) Since rank(A) = rank[A b], the system is consistent, and since nullity(A) = 2 − 2 = 0, there are 0 parameters in the general solution.

9. The augmented matrix is [1 −3 b1; 1 −2 b2; 1 1 b3; 1 −4 b4; 1 5 b5], which is row equivalent to [1 0 3b2 − 2b1; 0 1 b2 − b1; 0 0 b3 − 4b2 + 3b1; 0 0 b4 + b2 − 2b1; 0 0 b5 − 8b2 + 7b1]. Thus, the system is consistent if and only if

3b1 − 4b2 + b3 = 0
−2b1 + b2 + b4 = 0
7b1 − 8b2 + b5 = 0,

which has the general solution b1 = r, b2 = s, b3 = −3r + 4s, b4 = 2r − s, b5 = −7r + 8s.
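Exercise 1's conclusion rank(A) = rank(Aᵀ) is easy to confirm numerically (Aᵀ is the matrix displayed in that solution):

```python
import numpy as np

# A is reconstructed from the A^T shown in Exercise 1.
At = np.array([[1.0, -3.0, -2.0],
               [2.0, 1.0, 3.0],
               [4.0, 5.0, 9.0],
               [0.0, 2.0, 2.0]])
A = At.T
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(At))  # 2 2
```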

11. No; since rank(A) + nullity(A) = 3 and nullity(A) = 1, the row space and column space must both be 2-dimensional, i.e., planes in 3-space.

13. The rank is 2 if r = 2 and s = 1. Since the third column will always have a nonzero entry, the rank will never be 1.

17. (a) The number of leading 1’s is at most 3, since a matrix cannot have more leading 1’s than rows.

(b) The number of parameters is at most 5, since the solution cannot have more parameters than A has columns.

(c) The number of leading 1’s is at most 3, since the row space of A has the same dimension as the column space, which is at most 3.

(d) The number of parameters is at most 3, since the solution cannot have more parameters than A has columns.

19. Let A = [0 1; 0 0] and B = [1 2; 2 4]. Then rank(A) = rank(B) = 1, but since A² = [0 0; 0 0] and B² = [5 10; 10 20], rank(A²) = 0 ≠ rank(B²) = 1.

True/False 4.8

(a) False; the row vectors are not necessarily linearly independent, and if they are not, then neither are the column vectors, since dim(row space) = dim(column space).

(b) True; if the row vectors are linearly independent, then nullity(A) = 0 and rank(A) = n = the number of rows. But since rank(A) + nullity(A) = the number of columns, A must be square.

(c) False; the nullity of a nonzero m × n matrix is at most n − 1.

(d) False; if the added column is linearly dependent on the previous columns, adding it does not affect the rank.

(e) True; if the rows are linearly dependent, then the rank is at least 1 less than the number of rows, so since the matrix is square, its nullity is at least 1.

(f) False; if the nullity of A is zero, then Ax = b is consistent for every vector b.

(g) False; the dimension of the row space is always equal to the dimension of the column space.

(h) False; rank(A) = rank(Aᵀ) for all matrices.

(i) True; the row space and null space cannot both have dimension 1, since dim(row space) + dim(null space) = 3.

(j) False; a vector w in W⊥ need not be orthogonal to every vector in V; the relationship is that V⊥ is a subspace of W⊥.

Section 4.9

Exercise Set 4.9

1. (a) Since A has size 3 × 2, the domain of TA is R² and the codomain of TA is R³.

(b) Since A has size 2 × 3, the domain of TA is R³ and the codomain of TA is R².

(c) Since A has size 3 × 3, the domain of TA is R³ and the codomain of TA is R³.

(d) Since A has size 1 × 6, the domain of TA is R⁶ and the codomain of TA is R¹.

3. The domain of T is R² and the codomain of T is R³.
T(1, −2) = (1 − 2, −(−2), 3(1)) = (−1, 2, 3)

5. (a) The transformation is linear, with domain R³ and codomain R².

(b) The transformation is nonlinear, with domain R² and codomain R³.

(c) The transformation is linear, with domain R³ and codomain R³.
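The rank–nullity relationship rank(A) + nullity(A) = number of columns, used throughout Section 4.8, can be sketched numerically; the 3 × 5 matrix below is a hypothetical example matching the shape in Exercise 5(b), not a matrix from the text:

```python
import numpy as np

# Hypothetical 3x5 example: rank(A) + nullity(A) must equal 5.
A = np.array([[1.0, 0.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0, 2.0]])
rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank
print(rank, nullity)  # 3 2
```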

Chapter 4: General Vector Spaces

SSM: Elementary Linear Algebra

⎡0

⎢1

⎢

(d) The matrix is ⎢0

⎢0

⎢⎣ 1

(d) The transformation is nonlinear, with

**domain R 4 and codomain R 2 .
**

7. (a) T is a matrix transformation;

⎡0 0 0⎤

.

TA = ⎢

⎣ 0 0 0 ⎥⎦

(c) T is a matrix transformation;

⎡ 3 −4 0 ⎤

.

TA = ⎢

⎣ 2 0 −5⎥⎦

⎡ 2 −1 1⎤ ⎡ 2 ⎤ ⎡ 0 ⎤

(b) AT x = ⎢ 0 1 1⎥ ⎢ 1⎥ = ⎢ −2 ⎥

⎢

⎥⎢ ⎥ ⎢ ⎥

⎣ 0 0 0 ⎦ ⎣ −3⎦ ⎣ 0 ⎦

T(2, 1, −3) = (2(2) − (1) + (−3), 1 + (−3), 0)

= (0, −2, 0)

**(d) T is not a matrix transformation, since for
**

k ≠ 1,

T (k ( x, y, z )) = (k 2 y 2 , kz )

≠ kT ( x, y, z )

⎡ 1 0 0⎤ ⎡ 2 ⎤ ⎡ 2 ⎤

15. (a) ⎢0 1 0⎥ ⎢ −5⎥ = ⎢ −5⎥

⎢

⎥⎢ ⎥ ⎢ ⎥

⎣0 0 −1⎦ ⎣ 3⎦ ⎣ −3⎦

= (ky 2 , kz ).

The result is (2, −5, −3).

**(e) T is not a matrix transformation, since
**

T(0, 0, 0) = (−1, 0) ≠ 0.

⎡ 1 0 0⎤ ⎡ 2⎤ ⎡2⎤

(b) ⎢0 −1 0 ⎥ ⎢ −5⎥ = ⎢ 5⎥

⎢

⎥⎢ ⎥ ⎢ ⎥

⎣0 0 1⎦ ⎣ 3⎦ ⎣ 3⎦

The result is (2, 5, 3).

⎡ 3 5 −1⎤

9. By inspection, T = ⎢ 4 −1 1⎥ .

⎢

⎥

⎣ 3 2 −1⎦

For (x, y, z) = (−1, 2, 4),

w1 = 3(−1) + 5(2) − 4 = 3

⎡ −1 0 0⎤ ⎡ 2 ⎤ ⎡ −2 ⎤

(c) ⎢ 0 1 0⎥ ⎢ −5⎥ = ⎢ −5⎥

⎢

⎥⎢ ⎥ ⎢ ⎥

⎣ 0 0 1⎦ ⎣ 3⎦ ⎣ 3⎦

The result is (−2, −5, 3).

w2 = 4(−1) − (2) + 4 = −2

w3 = 3(−1) + 2(2) − (4) = −3

so T(−1, 2, 4) = (3, −2, −3).

⎡ 3 5 −1⎤ ⎡ −1⎤ ⎡ 3⎤

Also, ⎢ 4 −1 1⎥ ⎢ 2 ⎥ = ⎢ −2 ⎥ .

⎢

⎥⎢ ⎥ ⎢ ⎥

⎣ 3 2 −1⎦ ⎣ 4 ⎦ ⎣ −3⎦

⎡ 1 0 0 ⎤ ⎡ −2 ⎤ ⎡ −2 ⎤

17. (a) ⎢0 1 0 ⎥ ⎢ 1⎥ = ⎢ 1⎥

⎢

⎥⎢ ⎥ ⎢ ⎥

⎣ 0 0 0 ⎦ ⎣ 3⎦ ⎣ 0 ⎦

The result is (−2, 1, 0).

⎡ 0 1⎤

⎢ −1 0 ⎥

11. (a) The matrix is ⎢

⎥.

⎢ 1 3⎥

⎣⎢ 1 −1⎦⎥

⎡ 1 0 0 ⎤ ⎡ −2 ⎤ ⎡ −2 ⎤

(b) ⎢0 0 0 ⎥ ⎢ 1⎥ = ⎢ 0⎥

⎢

⎥⎢ ⎥ ⎢ ⎥

⎣0 0 1⎦ ⎣ 3⎦ ⎣ 3⎦

The result is (−2, 0, 3).

⎡ 7 2 −1 1⎤

(b) The matrix is ⎢ 0 1 1 0 ⎥ .

⎢

⎥

⎣ −1 0 0 0 ⎦

0

0

0

0

0

1⎤

0⎥

⎥

0⎥ .

0⎥

0 ⎥⎦

13. (a) AT x = ⎡−1 1⎤ ⎡−1⎤   ⎡5⎤
               ⎣ 0 1⎦ ⎣ 4⎦ = ⎣4⎦
T(−1, 4) = (−(−1) + 4, 4) = (5, 4)

(b) T is not a matrix transformation since T(0, 0, 0) ≠ 0.

⎡0

⎢0

⎢

(c) The matrix is ⎢0

⎢0

⎢⎣0

0 0

0 0

0 1

1 0

0 −1

(c) ⎡0 0 0⎤ ⎡−2⎤   ⎡0⎤
    ⎢0 1 0⎥ ⎢ 1⎥ = ⎢1⎥
    ⎣0 0 1⎦ ⎣ 3⎦   ⎣3⎦
The result is (0, 1, 3).

0⎤

0⎥

⎥

0⎥ .

0⎥

0 ⎥⎦


Section 4.9

19. (a) ⎡1     0         0    ⎤ ⎡−2⎤   ⎡1   0      0  ⎤ ⎡−2⎤   ⎡    −2     ⎤
        ⎢0  cos 30°  −sin 30°⎥ ⎢ 1⎥ = ⎢0  √3/2  −1/2 ⎥ ⎢ 1⎥ = ⎢(√3 − 2)/2 ⎥
        ⎣0  sin 30°   cos 30°⎦ ⎣ 2⎦   ⎣0  1/2    √3/2⎦ ⎣ 2⎦   ⎣(1 + 2√3)/2⎦
The result is (−2, (√3 − 2)/2, (1 + 2√3)/2).

(b) ⎡ cos 45°  0  sin 45°⎤ ⎡−2⎤   ⎡ 1/√2  0  1/√2⎤ ⎡−2⎤   ⎡ 0 ⎤
    ⎢    0     1     0   ⎥ ⎢ 1⎥ = ⎢  0    1   0  ⎥ ⎢ 1⎥ = ⎢ 1 ⎥
    ⎣−sin 45°  0  cos 45°⎦ ⎣ 2⎦   ⎣−1/√2  0  1/√2⎦ ⎣ 2⎦   ⎣2√2⎦
The result is (0, 1, 2√2).

(c) ⎡cos 90°  −sin 90°  0⎤ ⎡−2⎤   ⎡0 −1 0⎤ ⎡−2⎤   ⎡−1⎤
    ⎢sin 90°   cos 90°  0⎥ ⎢ 1⎥ = ⎢1  0 0⎥ ⎢ 1⎥ = ⎢−2⎥
    ⎣   0         0     1⎦ ⎣ 2⎦   ⎣0  0 1⎦ ⎣ 2⎦   ⎣ 2⎦
The result is (−1, −2, 2).
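The rotations in Exercise 19 can be checked numerically (a NumPy sketch of my own, not part of the original solution); part (a) is shown here:

```python
import numpy as np

x = np.array([-2.0, 1.0, 2.0])
t = np.pi / 6  # 30 degrees

# Rotation of 30 degrees about the x-axis, Exercise 19(a).
Rx = np.array([[1, 0, 0],
               [0, np.cos(t), -np.sin(t)],
               [0, np.sin(t), np.cos(t)]])
a = Rx @ x  # should equal (-2, (sqrt(3) - 2)/2, (1 + 2*sqrt(3))/2)
```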

21. (a) ⎡1      0           0     ⎤ ⎡−2⎤   ⎡1    0     0  ⎤ ⎡−2⎤   ⎡     −2     ⎤
        ⎢0  cos(−30°)  −sin(−30°)⎥ ⎢ 1⎥ = ⎢0   √3/2  1/2 ⎥ ⎢ 1⎥ = ⎢ (√3 + 2)/2 ⎥
        ⎣0  sin(−30°)   cos(−30°)⎦ ⎣ 2⎦   ⎣0  −1/2   √3/2⎦ ⎣ 2⎦   ⎣(−1 + 2√3)/2⎦
The result is (−2, (√3 + 2)/2, (−1 + 2√3)/2).


(b) ⎡ cos(−45°)  0  sin(−45°)⎤ ⎡−2⎤   ⎡1/√2  0  −1/√2⎤ ⎡−2⎤   ⎡−2√2⎤
    ⎢     0      1      0    ⎥ ⎢ 1⎥ = ⎢ 0    1    0  ⎥ ⎢ 1⎥ = ⎢  1 ⎥
    ⎣−sin(−45°)  0  cos(−45°)⎦ ⎣ 2⎦   ⎣1/√2  0   1/√2⎦ ⎣ 2⎦   ⎣  0 ⎦
The result is (−2√2, 1, 0).

(c) ⎡cos(−90°)  −sin(−90°)  0⎤ ⎡−2⎤   ⎡ 0 1 0⎤ ⎡−2⎤   ⎡1⎤
    ⎢sin(−90°)   cos(−90°)  0⎥ ⎢ 1⎥ = ⎢−1 0 0⎥ ⎢ 1⎥ = ⎢2⎥
    ⎣    0           0      1⎦ ⎣ 2⎦   ⎣ 0 0 1⎦ ⎣ 2⎦   ⎣2⎦
The result is (1, 2, 2).

25. v/‖v‖ = (2, 2, 1)/√(2² + 2² + 1²) = (2/3, 2/3, 1/3) = (a, b, c)
cos 180° = −1, sin 180° = 0
Substituting these values into the rotation formula gives the matrix
⎡(4/9)(2) − 1    (4/9)(2)       (2/9)(2)   ⎤   ⎡−1/9   8/9   4/9⎤
⎢  (4/9)(2)    (4/9)(2) − 1     (2/9)(2)   ⎥ = ⎢ 8/9  −1/9   4/9⎥.
⎣  (2/9)(2)      (2/9)(2)     (1/9)(2) − 1 ⎦   ⎣ 4/9   4/9  −7/9⎦
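For a 180° rotation, the general axis-rotation formula collapses to R = 2nnᵀ − I, where n is the unit axis vector; that identity (my own shortcut, not used in the original solution) gives a quick check of the matrix in Exercise 25:

```python
import numpy as np

n = np.array([2, 2, 1]) / 3          # unit vector along the rotation axis
R = 2 * np.outer(n, n) - np.eye(3)   # 180-degree rotation about n
```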

29. (a) The result is twice the orthogonal projection of x on the x-axis.

(b) The result is twice the reflection of x about the x-axis.

31. Since cos 2θ = cos²θ − sin²θ and sin 2θ = 2 sin θ cos θ, the matrix can be written as
⎡cos 2θ  −sin 2θ⎤
⎣sin 2θ   cos 2θ⎦
which has the effect of rotating x through the angle 2θ.
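The double-angle identities used in Exercise 31 can be verified numerically at an arbitrary angle (a sketch of my own; θ = 0.7 is just a test value):

```python
import numpy as np

theta = 0.7  # arbitrary test angle
c, s = np.cos(theta), np.sin(theta)

# Matrix built from the double-angle identities...
M = np.array([[c**2 - s**2, -2 * s * c],
              [2 * s * c, c**2 - s**2]])
# ...versus a direct rotation through 2*theta.
R = np.array([[np.cos(2 * theta), -np.sin(2 * theta)],
              [np.sin(2 * theta), np.cos(2 * theta)]])
```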

33. The result of the transformation is rotation through the angle θ followed by translation by x0. It is not a matrix transformation since T(0) = x0 ≠ 0.

35. The image of the line under the operator T is the line in Rn which has vector representation x = T(x0) + tT(v), unless T is the zero operator, in which case the image is 0.


3. (a) The standard matrix for T1 is ⎡1  1⎤ and the standard matrix for T2 is ⎡3 0⎤.
                                     ⎣1 −1⎦                                   ⎣2 4⎦

(b) The standard matrix for T2 ∘ T1 is
⎡3 0⎤ ⎡1  1⎤   ⎡3  3⎤
⎣2 4⎦ ⎣1 −1⎦ = ⎣6 −2⎦.
The standard matrix for T1 ∘ T2 is
⎡1  1⎤ ⎡3 0⎤   ⎡5  4⎤
⎣1 −1⎦ ⎣2 4⎦ = ⎣1 −4⎦.

(c) T1(T2(x1, x2)) = (5x1 + 4x2, x1 − 4x2)
T2(T1(x1, x2)) = (3x1 + 3x2, 6x1 − 2x2)

True/False 4.9

(a) False; if A is 2 × 3, then the domain of TA is R3.

(b) False; if A is m × n, then the codomain of TA is Rm.

(c) False; T(0) = 0 is not a sufficient condition. For instance, if T: Rn → R1 is given by T(v) = ‖v‖, then T(0) = 0, but T is not a matrix transformation.

(d) True
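The composition matrices in Exercise 3 are just matrix products, so they can be checked directly (a NumPy sketch of my own, not part of the original solution):

```python
import numpy as np

T1 = np.array([[1, 1],
               [1, -1]])
T2 = np.array([[3, 0],
               [2, 4]])

M21 = T2 @ T1  # standard matrix for T2 after T1
M12 = T1 @ T2  # standard matrix for T1 after T2
```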

(e) False; every matrix transformation has this property by the homogeneity property.

(f) True; the only matrix transformation with this property is the zero transformation (consider x = y ≠ 0).

(g) False; since b ≠ 0, T(0) = 0 + b = b ≠ 0, so T cannot be a matrix operator.

(h) False; there is no angle θ for which cos θ = sin θ = 1/2.

(i) True; when a = 1 it is the matrix for reflection about the x-axis and when a = −1 it is the matrix for reflection about the y-axis.

5. (a) The standard matrix for a rotation of 90° is
⎡cos 90°  −sin 90°⎤   ⎡0 −1⎤
⎣sin 90°   cos 90°⎦ = ⎣1  0⎦
and the standard matrix for reflection about the line y = x is
⎡0 1⎤
⎣1 0⎦.
⎡0 1⎤ ⎡0 −1⎤   ⎡1  0⎤
⎣1 0⎦ ⎣1  0⎦ = ⎣0 −1⎦

(b) The standard matrix for orthogonal projection on the y-axis is
⎡0 0⎤
⎣0 1⎦
and the standard matrix for a contraction with factor k = 1/2 is
⎡1/2   0 ⎤.
⎣ 0   1/2⎦

Section 4.10

Exercise Set 4.10

1. The standard matrix for TB ∘ TA is
BA = ⎡2 −3 3⎤ ⎡1 −2  0⎤   ⎡ 5 −1 21⎤
     ⎢5  0 1⎥ ⎢4  1 −3⎥ = ⎢10 −8  4⎥.
     ⎣6  1 7⎦ ⎣5  2  4⎦   ⎣45  3 25⎦
The standard matrix for TA ∘ TB is
AB = ⎡1 −2  0⎤ ⎡2 −3 3⎤   ⎡−8  −3  1⎤
     ⎢4  1 −3⎥ ⎢5  0 1⎥ = ⎢−5 −15 −8⎥.
     ⎣5  2  4⎦ ⎣6  1 7⎦   ⎣44 −11 45⎦

The composition in 5(b) is
⎡1/2   0 ⎤ ⎡0 0⎤   ⎡0  0 ⎤
⎣ 0   1/2⎦ ⎣0 1⎦ = ⎣0 1/2⎦.

(c) The standard matrix for reflection about the x-axis is
⎡1  0⎤
⎣0 −1⎦
and the standard matrix for a dilation with factor k = 3 is
⎡3 0⎤.
⎣0 3⎦
⎡3 0⎤ ⎡1  0⎤   ⎡3  0⎤
⎣0 3⎦ ⎣0 −1⎦ = ⎣0 −3⎦


7. (a) The standard matrix for reflection about the yz-plane is
⎡−1 0 0⎤
⎢ 0 1 0⎥
⎣ 0 0 1⎦
and the standard matrix for orthogonal projection on the xz-plane is
⎡1 0 0⎤
⎢0 0 0⎥.
⎣0 0 1⎦
⎡1 0 0⎤ ⎡−1 0 0⎤   ⎡−1 0 0⎤
⎢0 0 0⎥ ⎢ 0 1 0⎥ = ⎢ 0 0 0⎥
⎣0 0 1⎦ ⎣ 0 0 1⎦   ⎣ 0 0 1⎦

(b) The standard matrix for a rotation of 45° about the y-axis is
⎡ cos 45°  0  sin 45°⎤   ⎡ 1/√2  0  1/√2⎤
⎢    0     1     0   ⎥ = ⎢  0    1   0  ⎥
⎣−sin 45°  0  cos 45°⎦   ⎣−1/√2  0  1/√2⎦
and the standard matrix for a dilation with factor k = √2 is
⎡√2  0   0⎤
⎢ 0  √2  0⎥.
⎣ 0  0  √2⎦
⎡√2  0   0⎤ ⎡ 1/√2  0  1/√2⎤   ⎡ 1  0  1⎤
⎢ 0  √2  0⎥ ⎢  0    1   0  ⎥ = ⎢ 0  √2 0⎥
⎣ 0  0  √2⎦ ⎣−1/√2  0  1/√2⎦   ⎣−1  0  1⎦

(c) The standard matrix for orthogonal projection on the xy-plane is
⎡1 0 0⎤
⎢0 1 0⎥
⎣0 0 0⎦
and the standard matrix for reflection about the yz-plane is
⎡−1 0 0⎤
⎢ 0 1 0⎥.
⎣ 0 0 1⎦
⎡−1 0 0⎤ ⎡1 0 0⎤   ⎡−1 0 0⎤
⎢ 0 1 0⎥ ⎢0 1 0⎥ = ⎢ 0 1 0⎥
⎣ 0 0 1⎦ ⎣0 0 0⎦   ⎣ 0 0 0⎦

9. (a) [T1] = ⎡1 0⎤ and [T2] = ⎡0 0⎤.
              ⎣0 0⎦            ⎣0 1⎦
[T1][T2] = [T2][T1] = ⎡0 0⎤, so T1 ∘ T2 = T2 ∘ T1.
                      ⎣0 0⎦


⎡ cos(θ1 + θ 2 ) − sin(θ1 + θ 2 ) ⎤

(b) From Example 1, [T2 T1 ] = ⎢

⎥

⎣ sin(θ1 + θ 2 ) cos(θ1 + θ 2 ) ⎦

[T1 T2 ] = [T1 ][T2 ]

⎡cos θ1 − sin θ1 ⎤ ⎡cos θ 2 − sin θ 2 ⎤

=⎢

cos θ1 ⎥⎦ ⎢⎣ sin θ 2

cos θ 2 ⎥⎦

⎣ sin θ1

⎡ cos θ1 cos θ 2 − sin θ1 sin θ 2 − cos θ1 sin θ 2 − sin θ1 cos θ 2 ⎤

=⎢

⎥

⎣sin θ1 cos θ 2 + cos θ1 sin θ 2 − sin θ1 sin θ 2 + cos θ1 cos θ 2 ⎦

⎡cos(θ1 + θ 2 ) − sin(θ1 + θ 2 ) ⎤

=⎢

⎥

⎣ sin(θ1 + θ 2 ) cos(θ1 + θ 2 ) ⎦

= [T2 T1 ]

Thus T1 T2 = T2 T1.

(c) [T1] = ⎡1 0⎤ and [T2] = ⎡cos θ  −sin θ⎤.
           ⎣0 0⎦            ⎣sin θ   cos θ⎦
[T1][T2] = ⎡1 0⎤ ⎡cos θ  −sin θ⎤   ⎡cos θ  −sin θ⎤
           ⎣0 0⎦ ⎣sin θ   cos θ⎦ = ⎣  0       0  ⎦
[T2][T1] = ⎡cos θ  −sin θ⎤ ⎡1 0⎤   ⎡cos θ  0⎤
           ⎣sin θ   cos θ⎦ ⎣0 0⎦ = ⎣sin θ  0⎦
T1 ∘ T2 ≠ T2 ∘ T1

11. (a) Not one-to-one because the second row (column) is all 0’s, so the determinant is 0.

(b) One-to-one because the determinant is −1.

(c) One-to-one because the determinant is −1.

(d) One-to-one because the determinant is k 2 ≠ 0.

(e) One-to-one because the determinant is 1.

(f) One-to-one because the determinant is −1.

(g) One-to-one because the determinant is k 3 ≠ 0.

13. (a) [T] = ⎡ 1 2⎤
              ⎣−1 1⎦
det[T] = 1 + 2 = 3 ≠ 0, so T is one-to-one.
[T⁻¹] = (1/3) ⎡1 −2⎤   ⎡1/3  −2/3⎤
              ⎣1  1⎦ = ⎣1/3   1/3⎦
T⁻¹(w1, w2) = ((1/3)w1 − (2/3)w2, (1/3)w1 + (1/3)w2)


(b) [T] = ⎡ 4 −6⎤
          ⎣−2  3⎦
det[T] = 12 − 12 = 0, so T is not one-to-one.

(c) [T] = ⎡ 0 −1⎤
          ⎣−1  0⎦
det[T] = 0 − 1 = −1 ≠ 0, so T is one-to-one.
[T⁻¹] = (1/(−1)) ⎡0 1⎤   ⎡ 0 −1⎤
                 ⎣1 0⎦ = ⎣−1  0⎦
T⁻¹(w1, w2) = (−w2, −w1)

(d) [T] = ⎡ 3 0⎤
          ⎣−5 0⎦
det[T] = 3(0) − (−5)(0) = 0, so T is not one-to-one.
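The one-to-one tests and the inverse in Exercise 13 can be confirmed with `numpy.linalg` (my own verification sketch, not part of the original solution):

```python
import numpy as np

T = np.array([[1, 2],
              [-1, 1]])
Tinv = np.linalg.inv(T)  # should equal [[1/3, -2/3], [1/3, 1/3]]

# Part (b): a zero determinant means the operator is not one-to-one.
det_b = np.linalg.det(np.array([[4, -6], [-2, 3]]))
```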

15. (a) The inverse is (another) reflection about the x-axis in R2.

(b) The inverse is rotation through the angle −π/4 in R2.

(c) The inverse is contraction by a factor of 1/3 in R2.

(d) The inverse is (another) reflection about the yz-plane in R3.

(e) The inverse is dilation by a factor of 5 in R3.

17. Let u = ( x1 , y1 ), v = ( x2 , y2 ), and k be a scalar.

(a) T (u + v) = T ( x1 + x2 , y1 + y2 )

= (2( x1 + x2 ) + ( y1 + y2 ), ( x1 + x2 ) − ( y1 + y2 ))

= ((2 x1 + y1 ) + (2 x2 + y2 ), ( x1 − y1 ) + ( x2 − y2 ))

= T (u ) + T ( v )

T (ku) = T (kx1 , ky1 )

= (2kx1 + ky1, kx1 − ky1 )

= k (2 x1 + y1 , x1 − y1 )

= kT (u)

T is a matrix operator.

(b) T (ku) = T (kx1 , ky1 )

= (kx1 + 1, ky1 )

≠ k ( x1 + 1, y1 )

T is not a matrix operator, which can also be shown by considering T(u + v).


(c) T (u + v) = T ( x1 + x2 , y1 + y2 )

= ( y1 + y2 , y1 + y2 )

= T (u) + T ( v)

T (ku) = T (kx1 , ky1 )

= (ky1 , ky1 )

= k ( y1 , y1 )

= kT (u)

T is a matrix operator.

(d) T (u + v) = T ( x1 + x2 , y1 + y2 )

( 3 x1 + x2 , 3 y1 + y2 )

≠ ( 3 x1 + 3 x2 , 3 y1 + 3 y2 )

=

**T is not a matrix operator, which can also be shown by considering T(ku).
**

19. Let u = ( x1 , y1 , z1 ), v = ( x2 , y2 , z2 ), and k be a scalar.

(a) T (u + v) = T ( x1 + x2 , y1 + y2 , z1 + z2 )

= (0, 0)

= T (u) + T ( v)

T (ku) = T (kx1, ky1, kz1 )

= (0, 0)

= k (0, 0)

= kT (u)

T is a matrix transformation.

(b) T (u + v) = T ( x1 + x2 , y1 + y2 , z1 + z2 )

= (3( x1 + x2 ) − 4( y1 + y2 ), 2( x1 + x2 ) − 5( z1 + z2 ))

= ((3x1 − 4 y1 ) + (3x2 − 4 y2 ), (2 x1 − 5 z1 ) + (2 x2 − 5 z2 ))

= T (u ) + T ( v )

T (ku) = T (kx1 , ky1 , kz1 )

= (3kx1 − 4ky1 , 2kx1 − 5kz1 )

= k (3x1 − 4 y1, 2 x1 − 5 z1 )

= kT (u)

T is a matrix operator.

21. (a) ⎡−1 0⎤ ⎡1 0⎤   ⎡−1 0⎤
        ⎣ 0 1⎦ ⎣0 0⎦ = ⎣ 0 0⎦

(b) ⎡1  0⎤ ⎡0 1⎤   ⎡ 0 1⎤
    ⎣0 −1⎦ ⎣1 0⎦ = ⎣−1 0⎦

(c) ⎡0 0⎤ ⎡0 1⎤ ⎡3 0⎤   ⎡0 0⎤ ⎡3 0⎤   ⎡0 0⎤
    ⎣0 1⎦ ⎣1 0⎦ ⎣0 3⎦ = ⎣1 0⎦ ⎣0 3⎦ = ⎣3 0⎦

23. (a) TA (e1 ) = (−1, 2, 4), TA (e2 ) = (3, 1, 5), and TA (e3 ) = (0, 2, − 3)


(b) TA(e1 + e2 + e3) = (−1 + 3 + 0, 2 + 1 + 2, 4 + 5 − 3) = (2, 5, 6)

(c) TA(7e3) = 7(0, 2, −3) = (0, 14, −21)

(b) True; use c1 = c2 = 1 to get the additivity property and c1 = k, c2 = 0 to get the homogeneity property.
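Since the columns of A are the images TA(e1), TA(e2), TA(e3), the values in Exercise 23 follow by matrix multiplication (a NumPy sketch of my own):

```python
import numpy as np

# Columns are TA(e1), TA(e2), TA(e3) from Exercise 23(a).
A = np.array([[-1, 3, 0],
              [2, 1, 2],
              [4, 5, -3]])

b = A @ np.array([1, 1, 1])  # TA(e1 + e2 + e3)
c = A @ np.array([0, 0, 7])  # TA(7 e3)
```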

25. (a) Yes; if u and v are distinct vectors in the domain of T1, which is one-to-one, then T1(u) and T1(v) are distinct. If T1(u) and T1(v) are in the domain of T2, which is also one-to-one, then T2(T1(u)) and T2(T1(v)) are distinct. Thus T2 ∘ T1 is one-to-one.

(d) False; the zero transformation is one example.

(e) False; the zero transformation is one example.

(f) False; the zero transformation is one example.

(b) Yes; if T1(x, y) = (x, y, 0) and T2(x, y, z) = (x, y), then T1 is one-to-one but T2 is not one-to-one, and the composition T2 ∘ T1 is the identity transformation on R2, which is one-to-one. However, when T1 is not one-to-one and T2 is one-to-one, the composition T2 ∘ T1 is not one-to-one, since T1(u) = T1(v) for some distinct vectors u and v in the domain of T1, and consequently T2(T1(u)) = T2(T1(v)) for those same vectors.

Section 4.11

Exercise Set 4.11

1. (a) Reflection about the line y = −x maps (1, 0) to (0, −1) and (0, 1) to (−1, 0), so the standard matrix is
⎡ 0 −1⎤.
⎣−1  0⎦

(b) Reflection through the origin maps (1, 0) to (−1, 0) and (0, 1) to (0, −1), so the standard matrix is
⎡−1  0⎤.
⎣ 0 −1⎦

27. (a) The product of any m × n matrix and the n × 1 matrix of all zeros (the zero vector in Rn) is the m × 1 matrix of all zeros, which is the zero vector in Rm.

(b) One example is T(x1, x2) = (x1² + x2², x1x2).

(c) Orthogonal projection on the x-axis maps (1, 0) to (1, 0) and (0, 1) to (0, 0), so the standard matrix is
⎡1 0⎤.
⎣0 0⎦

(d) Orthogonal projection on the y-axis maps (1, 0) to (0, 0) and (0, 1) to (0, 1), so the standard matrix is
⎡0 0⎤.
⎣0 1⎦

29. (a) The range of T must be a proper subset of Rn, i.e., not all of Rn. Since det(A) = 0, there is some b1 in Rn such that Ax = b1 is inconsistent, hence b1 is not in the range of T.

(b) T must map infinitely many vectors to 0.

3. (a) A reflection through the xy-plane maps (1, 0, 0) to (1, 0, 0), (0, 1, 0) to (0, 1, 0), and (0, 0, 1) to (0, 0, −1), so the standard matrix is
⎡1 0  0⎤
⎢0 1  0⎥.
⎣0 0 −1⎦

(b) A reflection through the xz-plane maps (1, 0, 0) to (1, 0, 0), (0, 1, 0) to (0, −1, 0), and (0, 0, 1) to (0, 0, 1), so the standard matrix is
⎡1  0 0⎤
⎢0 −1 0⎥.
⎣0  0 1⎦

True/False 4.10

(a) False; for example T: R2 → R2 given by T(x1, x2) = (x1² + x2², x1x2) is not a matrix transformation, although T(0) = T(0, 0) = (0, 0) = 0.


**(b) The geometric effect is expansion by a
**

factor of 5 in the y-direction and reflection

about the x-axis.

**(c) A reflection through the yz-plane maps
**

(1, 0, 0) to (−1, 0, 0), (0, 1, 0) to (0, 1, 0),

and (0, 0, 1) to (0, 0, 1), so the standard

⎡ −1 0 0 ⎤

matrix is ⎢ 0 1 0 ⎥ .

⎢

⎥

⎣ 0 0 1⎦

**(c) The geometric effect is shearing by a factor
**

of 4 in the x-direction.

⎡ 1 0⎤ ⎡ 12 0 ⎤ ⎡ 12 0 ⎤

13. (a) ⎢

⎢

⎥=⎢

⎥

⎣0 5⎥⎦ ⎣ 0 1⎦ ⎣ 0 5⎦

5. (a) Rotation of 90° about the z-axis maps (1, 0, 0) to (0, 1, 0), (0, 1, 0) to (−1, 0, 0), and (0, 0, 1) to (0, 0, 1), so the standard matrix is
⎡0 −1 0⎤
⎢1  0 0⎥.
⎣0  0 1⎦

⎡ 1 0⎤ ⎡ 1 0⎤ ⎡ 1 0⎤

=

(b) ⎢

⎣ 2 1⎥⎦ ⎢⎣ 0 5⎥⎦ ⎢⎣ 2 5⎥⎦

⎡cos180° − sin 180°⎤ ⎡ 0 1⎤

(c) ⎢

⎣ sin 180° cos180°⎥⎦ ⎢⎣ 1 0 ⎥⎦

⎡ −1 0⎤ ⎡ 0 1⎤

=⎢

⎣ 0 −1⎥⎦ ⎢⎣ 1 0 ⎥⎦

⎡ 0 −1⎤

=⎢

⎣ −1 0⎥⎦

(b) Rotation of 90° about the x-axis maps (1, 0, 0) to (1, 0, 0), (0, 1, 0) to (0, 0, 1), and (0, 0, 1) to (0, −1, 0), so the standard matrix is
⎡1 0  0⎤
⎢0 0 −1⎥.
⎣0 1  0⎦

−1

⎡0 1⎤

⎡0 1⎤

, the inverse

=⎢

15. (a) Since ⎢

⎥

1

0

⎣

⎦

⎣ 1 0 ⎥⎦

transformation is reflection about y = x.

(c) Rotation of 90° about the y-axis maps (1, 0, 0) to (0, 0, −1), (0, 1, 0) to (0, 1, 0), and (0, 0, 1) to (1, 0, 0), so the standard matrix is
⎡ 0 0 1⎤
⎢ 0 1 0⎥.
⎣−1 0 0⎦
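The three 90° rotation matrices in Exercise 5 can also be generated from the general rotation formulas (a NumPy sketch of my own, not part of the original solution):

```python
import numpy as np

c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)

Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])   # about the z-axis
Rx = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])   # about the x-axis
Ry = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])   # about the y-axis
```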

⎡k 0⎤

(b) Since for 0 < k < 1, ⎢

⎣ 0 1⎥⎦

−1

⎡ 1 0⎤

= ⎢k

⎥ and

⎣ 0 1⎦

−1 ⎡ 1 0 ⎤

⎡ 1 0⎤

⎢⎣0 k ⎥⎦ = ⎢ 0 1 ⎥ , the inverse

k⎦

⎣

transformation is an expansion along the

same axis.

⎡ − 3 0 ⎤ ⎡ 0 ⎤ ⎡ 0 ⎤ ⎡ − 3 0 ⎤ ⎡ 1 ⎤ ⎡ − 3⎤

7. ⎢

=

,

,

=

⎣ 0 1⎥⎦ ⎢⎣0 ⎥⎦ ⎢⎣0 ⎥⎦ ⎢⎣ 0 1⎥⎦ ⎢⎣ 0 ⎥⎦ ⎢⎣ 0 ⎥⎦

⎡ −3 0 ⎤ ⎡0 ⎤ ⎡0 ⎤ ⎡ −3 0 ⎤ ⎡1⎤ ⎡ −3⎤

⎢⎣ 0 1⎥⎦ ⎢⎣1 ⎥⎦ = ⎢⎣1 ⎥⎦ , ⎢⎣ 0 1⎥⎦ ⎢⎣1⎥⎦ = ⎢⎣ 1⎥⎦

The image is a rectangle with vertices at (0, 0),

(−3, 0), (0, 1), and (−3, 1).

⎡ −1 0 ⎤

(c) Since ⎢

⎣ 0 1⎥⎦

y

−1

⎡ −1 0 ⎤

=⎢

and

⎣ 0 1⎥⎦

−1

⎡ 1 0⎤

⎡ 1 0⎤

⎢⎣0 −1⎥⎦ = ⎢⎣ 0 −1⎥⎦ the inverse

transformation is a reflection about the same

coordinate axis.

x

**9. (a) Shear by a factor of k = 4 in the y-direction
**

⎡ 1 0⎤

has matrix ⎢

.

⎣ 4 1⎥⎦

⎡1 k⎤

(d) Since k ≠ 0, ⎢

⎣0 1⎥⎦

−1

−1

⎡ 1 −k ⎤

=⎢

and

1⎥⎦

⎣0

⎡ 1 0⎤

⎡ 1 0⎤

⎢⎣ k 1⎥⎦ = ⎢⎣ −k 1⎥⎦ , the inverse

transformation is a shear (in the opposite

direction) along the same axis.

**(b) Shear by a factor of k = −2 in the x-direction
**

⎡ 1 −2 ⎤

has matrix ⎢

.

1⎥⎦

⎣0

11. (a) The geometric effect is expansion by a

factor of 3 in the x-direction.


17. The line y = 2x can be represented by ⎡ t⎤.
                                          ⎣2t⎦

**23. (a) To map (x, y, z) to (x + kz, y + kz, z), the
**

⎡1 0 k⎤

standard matrix is ⎢0 1 k ⎥ .

⎢

⎥

⎣0 0 1⎦

⎡ 1 3⎤ ⎡ t ⎤ ⎡ t + 6t ⎤ ⎡ 7t ⎤

=

=

(a) ⎢

⎣0 1⎥⎦ ⎢⎣ 2t ⎥⎦ ⎢⎣ 0 + 2t ⎥⎦ ⎢⎣ 2t ⎥⎦

The image is x = 7t, y = 2t, and using

1

2

t = x, this is the line y = x.

7

7

**(b) Shear in the xz-direction with factor k maps
**

(x, y, z) to (x + ky, y, z + ky). The standard

⎡ 1 k 0⎤

matrix is ⎢0 1 0 ⎥ .

⎢

⎥

⎣0 k 1⎦

Shear in the yz-direction with factor k maps

(x, y, z) to (x, y + kx, z + kx).

⎡ 1 0 0⎤

The standard matrix is ⎢ k 1 0⎥ .

⎢

⎥

⎣ k 0 1⎦

⎡ 1 0 ⎤ ⎡ t ⎤ ⎡t ⎤

(b) ⎢

1⎥⎢ ⎥ = ⎢ ⎥

⎣0 2 ⎦ ⎣ 2t ⎦ ⎣t ⎦

The image is x = t, y = t which is the line

y = x.

(c) ⎡0 1⎤ ⎡ t⎤   ⎡2t⎤
    ⎣1 0⎦ ⎣2t⎦ = ⎣ t⎦
The image is x = 2t, y = t, which is the line y = (1/2)x.
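The images of the line y = 2x in Exercise 17 can be checked at a sample point, since each operator maps the point (t, 2t) onto the claimed image line (a NumPy sketch of my own; t = 1 is an arbitrary choice):

```python
import numpy as np

p = np.array([1.0, 2.0])  # the point (t, 2t) with t = 1 on y = 2x

shear    = np.array([[1, 3], [0, 1]]) @ p     # part (a): lands on y = (2/7)x
compress = np.array([[1, 0], [0, 0.5]]) @ p   # part (b): lands on y = x
swap     = np.array([[0, 1], [1, 0]]) @ p     # part (c): lands on y = (1/2)x
reflect  = np.array([[-1, 0], [0, 1]]) @ p    # part (d): lands on y = -2x
```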

True/False 4.11

(a) False; the image will be a parallelogram, but it

will not necessarily be a square.

(b) True; an invertible matrix operator can be

expressed as a product of elementary matrices,

each of which has geometric effect among those

listed.

⎡ −1 0 ⎤ ⎡ t ⎤ ⎡ −t ⎤

(d) ⎢

=

⎣ 0 1⎥⎦ ⎢⎣ 2t ⎥⎦ ⎢⎣ 2t ⎥⎦

The image is x = −t, y = 2t, which is the line

y = −2x.

(c) True; by Theorem 4.11.3.

3⎤

⎡1

⎡cos 60° − sin 60°⎤ ⎡ t ⎤ ⎢ 2 − 2 ⎥ ⎡ t ⎤

(e) ⎢

=

1 ⎥ ⎢⎣ 2t ⎥⎦

⎣ sin 60° cos 60°⎥⎦ ⎢⎣ 2t ⎥⎦ ⎢ 3

2 ⎦

⎣2

1− 2 3 ⎤

⎡

t

=

⎢ 2 ⎥

⎢ 3+2 t ⎥

⎣ 2 ⎦

⎡ 1 0⎤

(d) True; ⎢

⎣0 −1⎥⎦

⎡ −1 0 ⎤

⎢⎣ 0 1⎥⎦

3+2

1− 2 3

x or y = −

⎡ 1 0⎤

=⎢

,

⎣0 −1⎥⎦

⎡ −1 0 ⎤

⎡0 1⎤

=⎢

, and ⎢

⎥

0

1

⎣ 1 0⎥⎦

⎣

⎦

−1

⎡0 1⎤

=⎢

.

⎣ 1 0 ⎥⎦

⎡1 1⎤

(e) False; ⎢

maps the unit square to the

⎣1 −1⎥⎦

1− 2 3

3+2

t, y =

t,

2

2

2

x, this is the line

and using t =

1− 2 3

The image is x =

y=

−1

−1

**parallelogram with vertices (0, 0), (1, 1), (1, −1),
**

and (2, 0) which is not a reflection about a line.

⎡ 1 −2 ⎤

(f) False; ⎢

maps the unit square to the

1⎥⎦

⎣2

parallelogram with vertices (0, 0), (1, 2), (−2, 1),

and (−1, 3) which is not a shear in either the xor y-direction.

8+5 3

x.

11

⎡ 3 1⎤ ⎡ x ⎤ ⎡ 3x + y ⎤ ⎡ x′ ⎤

19. (a) ⎢

=

=

⎣6 2 ⎥⎦ ⎢⎣ y ⎥⎦ ⎢⎣6 x + 2 y ⎥⎦ ⎢⎣ y ′⎥⎦

Thus, the image ( x′, y ′) satisfies y ′ = 2 x′

for every (x, y) in the plane.

**(g) True; this is an expansion by a factor of 3 in the
**

y-direction.

(b) No, since the matrix is not invertible.

Section 4.12

Exercise Set 4.12

1. (a) A is a stochastic matrix.


(b) A is not a stochastic matrix since the sums of the entries in each column are not 1.

so s = 9/17. The steady-state vector is
q = ⎡(8/9)(9/17)⎤   ⎡8/17⎤
    ⎣   9/17    ⎦ = ⎣9/17⎦.

(c) A is a stochastic matrix.

(d) A is not a stochastic matrix because the third

column is not a probability vector due to the

negative entry.

3. x1 = Px0 = ⎡0.5 0.6⎤ ⎡0.5⎤   ⎡0.55⎤
              ⎣0.5 0.4⎦ ⎣0.5⎦ = ⎣0.45⎦
x2 = Px1 = ⎡0.5 0.6⎤ ⎡0.55⎤   ⎡0.545⎤
           ⎣0.5 0.4⎦ ⎣0.45⎦ = ⎣0.455⎦
x3 = Px2 = ⎡0.5 0.6⎤ ⎡0.545⎤   ⎡0.5455⎤
           ⎣0.5 0.4⎦ ⎣0.455⎦ = ⎣0.4545⎦
x4 = Px3 = ⎡0.5 0.6⎤ ⎡0.5455⎤   ⎡0.54545⎤
           ⎣0.5 0.4⎦ ⎣0.4545⎦ = ⎣0.45455⎦
x4 = P⁴x0 = ⎡0.5455 0.5454⎤ ⎡0.5⎤   ⎡0.54545⎤
            ⎣0.4545 0.4546⎦ ⎣0.5⎦ = ⎣0.45455⎦

9. Each column of P is a probability vector and
P² = ⎡3/8   1/2  1/6 ⎤
     ⎢1/3   3/8  7/18⎥,
     ⎣7/24  1/8  4/9 ⎦
so P is a regular stochastic matrix. The system (I − P)q = 0 is
⎡ 1/2  −1/2    0 ⎤ ⎡q1⎤   ⎡0⎤
⎢−1/4   1/2  −1/3⎥ ⎢q2⎥ = ⎢0⎥.
⎣−1/4    0    1/3⎦ ⎣q3⎦   ⎣0⎦
The reduced row echelon form of I − P is
⎡1 0 −4/3⎤
⎢0 1 −4/3⎥,
⎣0 0   0 ⎦
so the general solution of the system is q1 = (4/3)s, q2 = (4/3)s, q3 = s. Since q must be a probability vector, 1 = q1 + q2 + q3 = (11/3)s, so s = 3/11.
The steady-state vector is
q = ⎡(4/3)(3/11)⎤   ⎡4/11⎤
    ⎢(4/3)(3/11)⎥ = ⎢4/11⎥.
    ⎣    3/11   ⎦   ⎣3/11⎦
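The iteration in Exercise 3 is easy to reproduce numerically (a NumPy sketch of my own, not part of the original solution):

```python
import numpy as np

P = np.array([[0.5, 0.6],
              [0.5, 0.4]])
x = np.array([0.5, 0.5])  # x0

for _ in range(4):  # x4 = P^4 x0
    x = P @ x
```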

5. (a) P is a regular stochastic matrix.

(b) P is not a regular stochastic matrix, since the entry in row 1, column 2 of P^k will be 0 for all k ≥ 1.

(c) P is a regular stochastic matrix since
P² = ⎡21/25  1/5⎤.
     ⎣ 4/25  4/5⎦

7. Since P is a stochastic matrix with all positive entries, P is a regular stochastic matrix. The system (I − P)q = 0 is
⎡ 3/4  −2/3⎤ ⎡q1⎤   ⎡0⎤
⎣−3/4   2/3⎦ ⎣q2⎦ = ⎣0⎦.
⎡ 3/4  −2/3  0⎤ reduces to ⎡1  −8/9  0⎤,
⎣−3/4   2/3  0⎦            ⎣0    0   0⎦
so the general solution is q1 = (8/9)s, q2 = s. Since q must be a probability vector, 1 = q1 + q2 = (17/9)s,

**11. (a) The entry 0.2 = p11 represents the
**

probability that something in state 1 stays in

state 1.

(b) The entry 0.1 = p12 represents the

probability that something in state 2 moves

to state 1.

⎡0.2 0.1⎤ ⎡1 ⎤ ⎡0.2 ⎤

(c) ⎢

=

⎣ 0.8 0.9 ⎥⎦ ⎢⎣0 ⎥⎦ ⎢⎣ 0.8 ⎥⎦

The probability is 0.8.

⎡0.2 0.1⎤ ⎡0.5⎤ ⎡ 0.15⎤

(d) ⎢

=

⎣ 0.8 0.9 ⎥⎦ ⎢⎣0.5⎥⎦ ⎢⎣ 0.85⎥⎦

The probability is 0.85.


13. (a) Let state 1 be good air. The transition matrix is
P = ⎡0.95 0.55⎤.
    ⎣0.05 0.45⎦

(b) P² ⎡1⎤   ⎡0.93 0.77⎤ ⎡1⎤   ⎡0.93⎤
       ⎣0⎦ = ⎣0.07 0.23⎦ ⎣0⎦ = ⎣0.07⎦
The probability is 0.93 that the air will be good.

(c) P³ ⎡0⎤   ⎡0.922 0.858⎤ ⎡0⎤   ⎡0.858⎤
       ⎣1⎦ = ⎣0.078 0.142⎦ ⎣1⎦ = ⎣0.142⎦
The probability is 0.142 that the air will be bad.

(d) P ⎡0.2⎤   ⎡0.95 0.55⎤ ⎡0.2⎤   ⎡0.63⎤
      ⎣0.8⎦ = ⎣0.05 0.45⎦ ⎣0.8⎦ = ⎣0.37⎦
The probability is 0.63 that the air will be good.
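The three probabilities in Exercise 13 follow from powers of P (a NumPy sketch of my own, not part of the original solution):

```python
import numpy as np

P = np.array([[0.95, 0.55],
              [0.05, 0.45]])

good2 = np.linalg.matrix_power(P, 2) @ np.array([1, 0])  # part (b)
bad3  = np.linalg.matrix_power(P, 3) @ np.array([0, 1])  # part (c)
mixed = P @ np.array([0.2, 0.8])                         # part (d)
```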

⎡ 0.95 0.03⎤

15. Let state 1 be living in the city. Then the transition matrix is P = ⎢

. The initial state vector is

⎣ 0.05 0.97 ⎥⎦

⎡100 ⎤

x0 = ⎢ ⎥ (in thousands).

⎣ 25⎦

⎡0.95 0.03⎤ ⎡100 ⎤ ⎡95.75 ⎤

=

(a) Px0 = ⎢

⎣0.05 0.97 ⎥⎦ ⎢⎣ 25⎥⎦ ⎢⎣ 29.25⎥⎦

2

**⎡0.95 0.03⎤ ⎡100⎤ ⎡91.84 ⎤
**

P 2 x0 = ⎢

=

⎣0.05 0.97 ⎥⎦ ⎢⎣ 25⎥⎦ ⎢⎣33.16 ⎥⎦

3

**⎡0.95 0.03⎤ ⎡100 ⎤ ⎡ 88.243 ⎤
**

P3x0 = ⎢

≈

⎣0.05 0.97 ⎥⎦ ⎢⎣ 25⎥⎦ ⎢⎣36.757 ⎥⎦

4

**⎡0.95 0.03⎤ ⎡100⎤ ⎡ 84.933 ⎤
**

P 4 x0 = ⎢

≈

⎣0.05 0.97 ⎥⎦ ⎢⎣ 25⎥⎦ ⎢⎣ 40.067 ⎥⎦

5

**⎡0.95 0.03⎤ ⎡100⎤ ⎡81.889⎤
**

P5 x0 = ⎢

≈

⎣0.05 0.97 ⎥⎦ ⎢⎣ 25⎥⎦ ⎢⎣ 43.111⎥⎦

Year      1       2       3       4       5
City      95,750  91,840  88,243  84,933  81,889
Suburbs   29,250  33,160  36,757  40,067  43,111

⎡ 0.05 −0.03⎤ ⎡ q1 ⎤ ⎡0 ⎤

.

(b) The system (I − P)q = 0 is ⎢

⎢ ⎥=

⎣ −0.05 0.03⎥⎦ ⎣ q2 ⎦ ⎢⎣0 ⎥⎦

⎡1 −0.6 ⎤

so the general solution is q1 = 0.6s, q2 = s. Since

The reduced row echelon form of I − P is ⎢

0 ⎥⎦

⎣0

1

⎡ 0.6(0.625) ⎤ ⎡0.375⎤

1 = q1 + q2 = 1.6s, s =

= 0.625 and the steady-state vector is q = ⎢

=

.

1.6

⎣ 0.625 ⎥⎦ ⎢⎣0.625⎥⎦

Over the long term, the population of 125,000 will be distributed with 0.375(125,000) = 46,875 in the city

and 0.625(125,000) = 78,125 in the suburbs.
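The steady-state vector claimed in Exercise 15(b) satisfies Pq = q, which can be checked directly (a NumPy sketch of my own, not part of the original solution):

```python
import numpy as np

P = np.array([[0.95, 0.03],
              [0.05, 0.97]])
q = np.array([0.375, 0.625])  # claimed steady-state vector

population = 125_000 * q      # long-term city/suburb split
```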


⎡1

⎢ 10

17. The transition matrix is P = ⎢ 4

⎢ 51

⎣⎢ 10

⎡ 23

⎡1 ⎤ ⎢ 100

(a) P 2 ⎢ 0⎥ = ⎢ 17

⎢ ⎥ ⎢ 50

⎣ 0⎦ ⎢ 43

⎣ 100

19

50

7

20

27

100

The probability is

11 ⎤

50 ⎥ ⎡1 ⎤

29 ⎥ ⎢ ⎥

0

50 ⎢ ⎥

⎥

1 ⎣0 ⎦

5 ⎥⎦


1

5

3

10

1

2

3⎤

5⎥

1⎥.

5

⎥

1

5 ⎦⎥

1 3 1

Column 3: 1 − − =

5 10 2

⎡7 1 1⎤

⎢ 10 10 5 ⎥

3

1⎥

P = ⎢ 15 10

⎢ 1 3 32 ⎥

⎢⎣ 10 5 10 ⎥⎦

The system (I − P)q = 0 is

⎡ 3 − 1 − 1⎤

10

5 ⎥ ⎡ q1 ⎤ ⎡ 0 ⎤

⎢ 10

7 − 1 ⎥ ⎢q ⎥ = ⎢ ⎥ .

⎢ − 15

0

10

2 ⎢ 2⎥ ⎢ ⎥

⎢ 1

⎥ q

0

3

7

⎢⎣ − 10 − 5 10 ⎥⎦ ⎣ 3 ⎦ ⎣ ⎦

The reduced row echelon form of I − P is

⎡ 1 0 −1⎤

⎢0 1 −1⎥ , so the general solution is q1 = s,

⎢

⎥

⎣0 0 0 ⎦

⎡ 23 ⎤

⎢ 100 ⎥

= ⎢ 17

⎥

⎢ 50

⎥

43

⎢⎣ 100 ⎥⎦

23

.

100

(b) The system (I − P)q = 0 is

⎡ 9 − 1 − 3⎤

5

5 ⎥ ⎡ q1 ⎤ ⎡ 0 ⎤

⎢ 10

7 − 1 ⎥ ⎢ q ⎥ = ⎢0 ⎥ .

⎢ −4

5 ⎢ 2⎥ ⎢ ⎥

⎢ 15 101

⎥ q

4

⎣ 3 ⎦ ⎣0 ⎦

⎢⎣ − 10 − 2

5 ⎥⎦

The reduced row echelon form of I − P is

⎡ 1 0 − 46 ⎤

47 ⎥

⎢

⎢0 1 − 66

⎥ , so the general solution is

47

⎢

⎥

0⎦

⎣0 0

46

66

q1 =

s , q2 =

s, q3 = s.

47

47

159

47

Since 1 = q1 + q2 + q3 =

s, s =

and

47

159

the steady-state vector is

⎡ 46 47 ⎤ ⎡ 46 ⎤

⎢ 47 159 ⎥ ⎢ 159 ⎥

q = ⎢ 66 47 ⎥ = ⎢ 22

⎥.

⎢ 47 159 ⎥ ⎢ 53 ⎥

⎢ 47 ⎥ ⎢ 47 ⎥

⎣ 159 ⎦ ⎣ 159 ⎦

q2 = s, q3 = s. Since 1 = q1 + q2 + q3 = 3s,

⎡1 ⎤

⎢3⎥

1

s = and the steady-state vector is q = ⎢ 13 ⎥ .

3

⎢1 ⎥

⎢⎣ 3 ⎥⎦

21. Since q is the steady-state vector for P, Pq = q, hence
P^k q = P^(k−1)(Pq) = P^(k−1)q = P^(k−2)(Pq) = ⋯ = q,
so P^k q = q for every positive integer k.

True/False 4.12

(a) True; the sum of the entries is 1 and all entries

are nonnegative.

(b) True; the column vectors are probability vectors and
⎡0.2  1⎤²   ⎡0.84  0.2⎤
⎣0.8  0⎦  = ⎣0.16  0.8⎦.

(c) 120q ≈ 120 ⎡0.2893⎤   ⎡35⎤
               ⎢0.4151⎥ ≈ ⎢50⎥
               ⎣0.2956⎦   ⎣35⎦

Over the long term, the probability that a car

ends up at location 1 is about 0.2893,

location 2 is about 0.4151, and location 3 is

about 0.2956. Thus the vector 120q gives

the long term distribution of 120 cars. The

locations should have 35, 50, and 35 parking

spaces, respectively.

(c) True; this is part of the definition of a transition matrix.

(d) False; the steady-state vector is a solution which

is also a probability vector.

(e) True

Chapter 4 Supplementary Exercises

7 1 1

19. Column 1: 1 − − =

10 10 5

3 3 1

Column 2: 1 − − =

10 5 10

1. (a) u + v = (3 + 1, −2 + 5, 4 + (−2)) = (4, 3, 2)

(−1)u = ((−1)3, 0, 0) = ( −3, 0, 0)


9. (a) A = ⎡1 0 1⎤ reduces to ⎡1 0 1⎤,
           ⎢0 1 0⎥            ⎢0 1 0⎥
           ⎣1 0 1⎦            ⎣0 0 0⎦
so rank(A) = 2 and nullity(A) = 1.

**(b) V is closed under addition because addition
**

in V is component-wise addition of real

numbers and the real numbers are closed

under addition. Similarly, V is closed under

scalar multiplication because the real

numbers are closed under multiplication.

⎡1

⎢0

(b) A = ⎢

⎢1

⎣⎢ 0

**(c) Axioms 1−5 hold in V because they hold in
**

R3 .

(d) Axiom 7:

k (u + v) = k (u1 + v1 , u2 + v2 , u3 + v3 )

= (k (u1 + v1 ), 0, 0)

= (ku1, 0, 0) + (kv1 , 0, 0)

= ku + kv

Axiom 8:

(k + m)u = ((k + m)u1 , 0, 0)

= (ku1 , 0, 0) + (mu1 , 0, 0)

= ku + mu

Axiom 9: k (mu) = k (mu1 , 0, 0)

= (kmu1 , 0, 0)

= (km)u

0

1

0

1

1

0

1

0

0⎤

1⎥

⎥ reduces to

0⎥

1⎦⎥

⎡ 1 0 1 0⎤

⎢0 1 0 1⎥

⎢

⎥ , so rank(A) = 2 and

⎢0 0 0 0 ⎥

⎣⎢0 0 0 0 ⎦⎥

nullity(A) = 2.
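The ranks and nullities in Supplementary Exercise 9 can be confirmed with `numpy.linalg.matrix_rank` (a sketch of my own; nullity is columns minus rank by the rank theorem):

```python
import numpy as np

A1 = np.array([[1, 0, 1],
               [0, 1, 0],
               [1, 0, 1]])
A2 = np.array([[1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1]])

r1 = np.linalg.matrix_rank(A1)  # rank 2, so nullity 3 - 2 = 1
r2 = np.linalg.matrix_rank(A2)  # rank 2, so nullity 4 - 2 = 2
```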

**(c) The row vectors ri will satisfy
**

⎧r if i is odd

. Thus, the reduced row

ri = ⎨ 1

⎩r2 if i is even

echelon form will have 2 nonzero rows. Its

rank will be 2 and its nullity will be n − 2.

11. (a) If p1 and p2 are in the set, then

(e) For any u = (u1 , u2 , u3 ) with u2 , u3 ≠ 0,

( p1 + p2 )(− x) = p1 (− x) + p2 (− x)

= p1 ( x) + p2 ( x)

= ( p1 + p2 )( x)

so the set is closed under addition.

Also

(kp1 )(− x) = k ( p1 (− x)) = k ( p1 ( x)) = (kp1 )( x)

so kp1 is in the set for any scalar k, and the

set is a subspace of Pn , the subspace

consisting of all even polynomials. A basis

1u = (1u1 , 0, 0) ≠ (u1, u2 , u3 ) = u.

⎡1 1 s⎤

3. The coefficient matrix is A = ⎢ 1 s 1⎥ and

⎢

⎥

⎣ s 1 1⎦

det( A) = −( s − 1) 2 ( s + 2), so for s ≠ 1, −2, A is

invertible and the solution space is the origin.

⎡1 1 1⎤

If s = 1, then A = ⎢1 1 1⎥ , which reduced to

⎢

⎥

⎣1 1 1⎦

⎡ 1 1 1⎤

⎢0 0 0 ⎥ so the solution space will require 2

⎢

⎥

⎣0 0 0 ⎦

parameters, i.e., it is a plane through the origin.

⎡ 1 1 −2 ⎤

If s = −2, then A = ⎢ 1 −2

1⎥ , which

⎢

⎥

1 1⎦

⎣ −2

⎡ 1 0 −1⎤

reduces to ⎢0 1 −1⎥ , so the solution space

⎢

⎥

⎣0 0 0 ⎦

will require 1 parameter, i.e., it will be a line

through the origin.

is {1, x 2 , x 4 , … , x 2m } where 2m = n if n is

even and 2m = n − 1 if n is odd.

(b) If p1 and p2 are in the set, then

( p1 + p2 )(0) = p1 (0) + p2 (0) = 0, so

**p1 + p2 is in the set. Also,
**

(kp1 )(0) = k ( p1 (0)) = k (0) = 0, so kp1 is in

the set. Thus, the set is a subspace of Pn ,

the subspace consisting of polynomials with

constant term 0. A basis is

{x, x 2 , x3 , … , x n }.

**7. A must be an invertible matrix for Av1 ,
**

Av 2 , … , Av n to be linearly independent.


13. (a) The entry aij , i > j, below the main diagonal is determined by the a ji entry above the main diagonal. A

⎧ ⎡ 1 0 0⎤ ⎡ 0 1 0 ⎤ ⎡ 0 0 1⎤ ⎡0 0 0 ⎤ ⎡0 0 0 ⎤

⎪

basis is ⎨ ⎢ 0 0 0⎥ , ⎢ 1 0 0 ⎥ , ⎢ 0 0 0 ⎥ , ⎢0 1 0 ⎥ , ⎢0 0 1⎥ ,

⎪⎩ ⎢⎣ 0 0 0⎥⎦ ⎢⎣ 0 0 0 ⎥⎦ ⎢⎣ 1 0 0 ⎥⎦ ⎢⎣0 0 0 ⎥⎦ ⎢⎣0 1 0 ⎥⎦

⎡0 0 0 ⎤ ⎫

⎢0 0 0 ⎥ ⎪⎬

⎢

⎥

⎣0 0 1⎦ ⎪⎭

⎧0 if i = j

(b) In a skew-symmetric matrix, aij = ⎨

. Thus, the entries below the main diagonal are determined

⎩−a ji if i ≠ j

⎧ ⎡ 0 1 0 ⎤ ⎡ 0 0 1⎤ ⎡0 0 0 ⎤ ⎫

⎪

⎪

by the entries above the main diagonal. A basis is ⎨ ⎢ −1 0 0 ⎥ , ⎢ 0 0 0 ⎥ , ⎢0 0 1⎥ ⎬ .

⎢

⎥

⎢

⎥

⎢

⎥

⎩⎪ ⎣ 0 0 0 ⎦ ⎣ −1 0 0 ⎦ ⎣0 −1 0 ⎦ ⎭⎪

15. Every 3 × 3, 4 × 4, and 5 × 5 submatrix will have determinant 0, so the rank can be at most 2. For i < 5 and j < 6,

⎡ 0 ai 6 ⎤

det ⎢

⎥ = −ai 6 a5 j so the rank is 2 if any of these products is nonzero, i.e., if the matrix has 2 nonzero

⎣ a5 j a56 ⎦

entries other than a56 . If exactly one entry in the last column is nonzero, the matrix will have rank 1, while if all

entries are zero, the rank will be 0. Thus, the possible ranks are 0, 1, or 2.


Chapter 5

Eigenvalues and Eigenvectors

0 ⎤ ⎡ x1 ⎤ ⎡0 ⎤

⎡λ − 3

5. (a) ⎢

=

−

λ

+ 1⎥⎦ ⎢⎣ x2 ⎥⎦ ⎢⎣0 ⎥⎦

8

⎣

Section 5.1

Exercise Set 5.1

⎡ 0 0 ⎤ ⎡ x1 ⎤ ⎡0 ⎤

.

λ = 3 gives ⎢

⎢ ⎥=

⎣ −8 4 ⎥⎦ ⎣ x2 ⎦ ⎢⎣0 ⎥⎦

⎡ 4 0 1⎤ ⎡1 ⎤ ⎡ 5⎤

1. Ax = ⎢ 2 3 2 ⎥ ⎢ 2⎥ = ⎢10 ⎥ = 5x

⎢

⎥⎢ ⎥ ⎢ ⎥

⎣ 1 0 4 ⎦ ⎣ 1 ⎦ ⎣ 5⎦

x corresponds to the eigenvalue λ = 5.
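The eigenvector check in Exercise 1 is a single matrix-vector product (a NumPy sketch of my own, not part of the original solution):

```python
import numpy as np

A = np.array([[4, 0, 1],
              [2, 3, 2],
              [1, 0, 4]])
x = np.array([1, 2, 1])

Ax = A @ x  # should equal 5x, so x is an eigenvector for lambda = 5
```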

⎡ 0 0⎤

reduces to

Since ⎢

⎣ −8 4 ⎥⎦

⎡1 − 1⎤

2 ⎥ , the

⎢

0⎦

⎣0

1

s, x2 = s, or

2

⎡1⎤

⎡1⎤

⎡ x1 ⎤ ⎡ 12 s ⎤

2

2

⎢ x ⎥ = ⎢ ⎥ = s ⎢ ⎥ , so ⎢ ⎥ is a basis for

⎣ 2⎦ ⎣ s ⎦

⎣1⎦

⎣1 ⎦

the eigenspace corresponding to λ = 3.

⎡ −4 0 ⎤ ⎡ x1 ⎤ ⎡0 ⎤

. Since

λ = −1 gives ⎢

⎢ ⎥=

⎣ −8 0 ⎥⎦ ⎣ x2 ⎦ ⎢⎣0 ⎥⎦

general solution is x1 =

0 ⎤

⎡λ − 3

= (λ − 3)(λ + 1)

3. (a) det ⎢

⎣ −8 λ + 1⎥⎦

= λ 2 − 2λ − 3

The characteristic equation is

λ 2 − 2λ − 3 = 0.

9 ⎤

⎡ λ − 10

(b) det ⎢

= (λ − 10)(λ + 2) + 36

4

−

λ

+ 2 ⎥⎦

⎣

= λ 2 − 8λ + 16

The characteristic equation is

⎡1 0 ⎤

⎡ −4 0 ⎤

⎢⎣ −8 0 ⎥⎦ reduces to ⎢⎣0 0 ⎥⎦ , the general

solution is x1 = 0, x2 = s, or

λ 2 − 8λ + 16 = 0.

⎡ x1 ⎤ ⎡0 ⎤

⎡0 ⎤

⎡0 ⎤

⎢ ⎥ = ⎢ ⎥ = s ⎢1 ⎥ , so ⎢1 ⎥ is a basis for the

⎣ ⎦

⎣ ⎦

⎣ x2 ⎦ ⎣ s ⎦

eigenspace corresponding to λ = −1.

⎡ λ −3⎤

(c) det ⎢

= λ 2 − 12

⎥

−

4

λ

⎣

⎦

The characteristic equation is λ 2 − 12 = 0.

9 ⎤ ⎡ x1 ⎤ ⎡0 ⎤

⎡λ − 10

(b) ⎢

=

−

λ

+ 2 ⎥⎦ ⎢⎣ x2 ⎥⎦ ⎢⎣0 ⎥⎦

4

⎣

7 ⎤

⎡λ + 2

(d) det ⎢

= (λ + 2)(λ − 2) + 7

⎣ −1 λ − 2 ⎥⎦

= λ2 + 3

⎡ −6 9 ⎤ ⎡ x1 ⎤ ⎡0 ⎤

.

λ = 4 gives ⎢

⎢ ⎥=

⎣ −4 6 ⎥⎦ ⎣ x2 ⎦ ⎢⎣0 ⎥⎦

⎡ −6 9 ⎤

reduces to

Since ⎢

⎣ −4 6 ⎥⎦

**The characteristic equation is λ 2 + 3 = 0.
**

⎡λ 0 ⎤

(e) det ⎢

= λ2

⎣ 0 λ ⎥⎦

⎡1 − 3⎤

2 ⎥ , the

⎢

0⎦

⎣0

3

s, x2 = s, or

2

⎡3⎤

⎡3⎤

⎡ x1 ⎤ ⎡ 32 s ⎤

2

2

⎢ x ⎥ = ⎢ ⎥ = s ⎢ ⎥ , so ⎢ ⎥ is a basis for

⎣ 2⎦ ⎣ s ⎦

⎣1⎦

⎣1 ⎦

the eigenspace corresponding to λ = 4.

general solution is x1 =

**The characteristic equation is λ 2 = 0.
**

⎡λ − 1 0 ⎤

(f) det ⎢

= (λ − 1)2 = λ 2 − 2λ + 1

λ − 1⎥⎦

⎣ 0

The characteristic equation is

⎡ λ −3⎤ ⎡ x1 ⎤ ⎡ 0 ⎤

(c) ⎢

⎢ ⎥=

⎣ −4 λ ⎥⎦ ⎣ x2 ⎦ ⎢⎣ 0 ⎥⎦

λ 2 − 2λ + 1 = 0.

⎡ 12

λ = 12 gives ⎢

⎣ −4

120

−3 ⎤ ⎡ x1 ⎤ ⎡0 ⎤

.

⎥⎢ ⎥ =

12 ⎦ ⎣ x2 ⎦ ⎣⎢0 ⎦⎥

SSM: Elementary Linear Algebra

Section 5.1

⎡1 − 3 ⎤

−3 ⎤

12 ⎥ ,

reduces

to

⎢

⎥

12 ⎦

⎢⎣0

0 ⎥⎦

3

the general solution is x1 =

s, x2 = s,

12

7. (a) The eigenvalues satisfy

⎡ 12

Since ⎢

⎣ −4

λ3 − 6λ 2 + 11λ − 6 = 0 or

(λ − 1)(λ − 2)(λ − 3) = 0, so the eigenvalues

are λ = 1, 2, 3.

(b) The eigenvalues satisfy λ3 − 2λ = 0 or

3 s⎤

⎡ 3 ⎤

⎡ 3 ⎤

⎡x ⎤ ⎡

or ⎢ 1 ⎥ = ⎢ 12 ⎥ = s ⎢ 12 ⎥ , so ⎢ 12 ⎥ is a

⎣ x2 ⎦ ⎢⎣ s ⎥⎦

⎢⎣ 1 ⎥⎦

⎢⎣ 1 ⎥⎦

basis for the eigenspace corresponding to

λ = 12.

(

)

are λ = 0, − 2 , 2 .

(c) The eigenvalues satisfy λ3 + 8λ 2 + λ + 8 = 0

λ = − 12 gives

⎡ − 12

−3 ⎤ ⎡ x1 ⎤ ⎡ 0 ⎤

.

⎢

⎥⎢ ⎥ =

− 12 ⎦ ⎣ x2 ⎦ ⎢⎣ 0 ⎥⎦

⎣ −4

⎡ − 12

−3 ⎤

Since ⎢

⎥ reduces to

− 12 ⎦

⎣ −4

⎡1

⎢

⎢⎣0

)(

λ λ + 2 λ − 2 = 0, so the eigenvalues

**or (λ + 8)(λ 2 + 1) = 0, so the only (real)
**

eigenvalue is λ = −8.

(d) The eigenvalues satisfy λ3 − λ 2 − λ − 2 = 0

**or (λ − 2)(λ 2 + λ + 1) = 0, so the only (real
**

eigenvalue is λ = 2.

3 ⎤

12 ⎥ ,

**the general solution is
**

0 ⎥⎦

3

x1 = −

s, x2 = s, or

12

(e) The eigenvalues satisfy

λ3 − 6λ 2 + 12λ − 8 = 0 or (λ − 2)3 = 0, so

the only eigenvalue is λ = 2.

3 s⎤

⎡− 3 ⎤

⎡− 3 ⎤

⎡ x1 ⎤ ⎡ −

12 ⎥ = s ⎢

12 ⎥ , so ⎢

12 ⎥ is

=

⎢

⎢x ⎥

⎣ 2 ⎦ ⎣⎢ s ⎦⎥

⎣⎢ 1 ⎦⎥

⎣⎢ 1 ⎦⎥

a basis for the eigenspace corresponding to

λ = − 12.

(f) The eigenvalues satisfy

λ3 − 2λ 2 − 15λ + 36 = 0 or

(λ + 4)(λ − 3)2 = 0, so the eigenvalues are

λ = −4, 3.

**(d) There are no eigenspaces since there are no
**

real eigenvalues.

0 ⎤

−2

⎡λ 0

⎢ −1 λ

−1

0 ⎥

9. (a) det ⎢

⎥

−

λ

+

0

1

2

0 ⎥

⎢

λ − 1⎦⎥

0

⎣⎢ 0 0

⎡λ 0⎤ ⎡ x1 ⎤ ⎡ 0⎤

(e) ⎢

⎢ ⎥=

⎣ 0 λ ⎥⎦ ⎣ x2 ⎦ ⎢⎣ 0⎥⎦

−2 ⎤

⎡λ 0

= (λ − 1) det ⎢ −1 λ

−1 ⎥

⎢

⎥

⎣ 0 −1 λ + 2⎦

= (λ − 1){λ[λ(λ + 2) − 1] − 2(1 − 0)}

⎡0 0 ⎤ ⎡ x1 ⎤ ⎡0 ⎤

.

λ = 0 gives ⎢

⎢ ⎥=

⎣0 0 ⎥⎦ ⎣ x2 ⎦ ⎢⎣0 ⎥⎦

Clearly, every vector in R 2 is a solution, so

⎡1 ⎤ ⎡ 0 ⎤

⎢⎣0 ⎥⎦ , ⎢⎣1 ⎥⎦ is a basis for the eigenspace

corresponding to λ = 0.

= (λ − 1)(λ3 + 2λ 2 − λ − 2)

= λ 4 + λ3 − 3λ 2 − λ + 2

The characteristic equation is

λ 4 + λ3 − 3λ 2 − λ + 2 = 0.

⎡λ − 1 0 ⎤ ⎡ x1 ⎤ ⎡ 0⎤

(f) ⎢

=

λ − 1⎥⎦ ⎢⎣ x2 ⎥⎦ ⎢⎣ 0⎥⎦

⎣ 0

⎡0 0 ⎤ ⎡ x1 ⎤ ⎡0 ⎤

.

λ = 1 gives ⎢

⎢ ⎥=

⎣0 0 ⎥⎦ ⎣ x2 ⎦ ⎢⎣0 ⎥⎦

Clearly, every vector in R 2 is a solution, so

⎡1 ⎤ ⎡ 0 ⎤

⎢⎣0 ⎥⎦ , ⎢⎣1 ⎥⎦ is a basis for the eigenspace

corresponding to λ = 1.
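The hand computations in Exercises 3 and 5 can be spot-checked numerically. The sketch below is not part of the manual; the matrix A for Exercise 5(a) is reconstructed from its λI − A, so treat it as an assumption:

```python
import numpy as np

# Exercise 5(a): lambda*I - A = [[l-3, 0], [-8, l+1]]  =>  A = [[3, 0], [8, -1]]
A = np.array([[3.0, 0.0], [8.0, -1.0]])

vals, vecs = np.linalg.eig(A)
print(sorted(vals))  # the two eigenvalues, -1 and 3

# The hand-computed basis vectors: [1/2, 1] for lambda = 3 and [0, 1] for lambda = -1.
for lam, v in [(3.0, np.array([0.5, 1.0])), (-1.0, np.array([0.0, 1.0]))]:
    # A v = lambda v confirms v is an eigenvector for lambda.
    print(np.allclose(A @ v, lam * v))
```

The same pattern (build A from λI − A, compare `np.linalg.eig` against the row-reduced eigenspace basis) works for every part of Exercise 5.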

(b) det[λ−10 9 0 0; −4 λ+2 0 0; 0 0 λ+2 7; 0 0 −1 λ−2]
= (λ − 10) det[λ+2 0 0; 0 λ+2 7; 0 −1 λ−2] − 9 det[−4 0 0; 0 λ+2 7; 0 −1 λ−2]
= (λ − 10)(λ + 2)[(λ + 2)(λ − 2) + 7] − 9(−4)[(λ + 2)(λ − 2) + 7]
= (λ² − 8λ − 20 + 36)(λ² − 4 + 7)
= (λ² − 8λ + 16)(λ² + 3)
= λ⁴ − 8λ³ + 19λ² − 24λ + 48
The characteristic equation is λ⁴ − 8λ³ + 19λ² − 24λ + 48 = 0.
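A cofactor expansion like the one above is easy to cross-check: NumPy's `poly` returns the characteristic-polynomial coefficients of a matrix. The matrix below is reconstructed from λI − A in part (b), so this is a sketch rather than part of the manual:

```python
import numpy as np

# From lambda*I - A in Exercise 9(b):
A = np.array([[10.0, -9.0, 0.0, 0.0],
              [4.0, -2.0, 0.0, 0.0],
              [0.0, 0.0, -2.0, -7.0],
              [0.0, 0.0, 1.0, 2.0]])

# np.poly gives the coefficients of det(lambda*I - A), highest degree first.
coeffs = np.poly(A)
print(coeffs.round(6))  # expect 1, -8, 19, -24, 48
```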

11. (a) (λI − A)x = 0 is [λ 0 −2 0; −1 λ −1 0; 0 −1 λ+2 0; 0 0 0 λ−1][x1; x2; x3; x4] = [0; 0; 0; 0].
λ = 1 gives [1 0 −2 0; −1 1 −1 0; 0 −1 3 0; 0 0 0 0]x = 0. Since [1 0 −2 0; −1 1 −1 0; 0 −1 3 0; 0 0 0 0] reduces to [1 0 −2 0; 0 1 −3 0; 0 0 0 0; 0 0 0 0], the general solution is x1 = 2s, x2 = 3s, x3 = s, x4 = t or [x1; x2; x3; x4] = s[2; 3; 1; 0] + t[0; 0; 0; 1], so [2; 3; 1; 0], [0; 0; 0; 1] is a basis for the eigenspace corresponding to λ = 1.
λ = −2 gives [−2 0 −2 0; −1 −2 −1 0; 0 −1 0 0; 0 0 0 −3]x = 0. Since [−2 0 −2 0; −1 −2 −1 0; 0 −1 0 0; 0 0 0 −3] reduces to [1 0 1 0; 0 1 0 0; 0 0 0 1; 0 0 0 0], the general solution is x1 = −s, x2 = 0, x3 = s, x4 = 0 or [x1; x2; x3; x4] = s[−1; 0; 1; 0], so [−1; 0; 1; 0] is a basis for the eigenspace corresponding to λ = −2.
λ = −1 gives [−1 0 −2 0; −1 −1 −1 0; 0 −1 1 0; 0 0 0 −2]x = 0. Since [−1 0 −2 0; −1 −1 −1 0; 0 −1 1 0; 0 0 0 −2] reduces to [1 0 2 0; 0 1 −1 0; 0 0 0 1; 0 0 0 0], the general solution is x1 = −2s, x2 = s, x3 = s, x4 = 0 or [x1; x2; x3; x4] = s[−2; 1; 1; 0], so [−2; 1; 1; 0] is a basis for the eigenspace corresponding to λ = −1.

(b) (λI − A)x = 0 is [λ−10 9 0 0; −4 λ+2 0 0; 0 0 λ+2 7; 0 0 −1 λ−2][x1; x2; x3; x4] = [0; 0; 0; 0].
λ = 4 gives [−6 9 0 0; −4 6 0 0; 0 0 6 −7; 0 0 −1 2]x = 0. Since [−6 9 0 0; −4 6 0 0; 0 0 6 −7; 0 0 −1 2] reduces to [1 −3/2 0 0; 0 0 1 0; 0 0 0 1; 0 0 0 0], the general solution is x1 = (3/2)s, x2 = s, x3 = 0, x4 = 0 or [x1; x2; x3; x4] = s[3/2; 1; 0; 0], so [3/2; 1; 0; 0] is a basis for the eigenspace corresponding to λ = 4.

13. Since A is upper triangular, the eigenvalues of A are the diagonal entries: 1, 1/2, 0, 2. Thus the eigenvalues of A⁹ are 1⁹ = 1, (1/2)⁹ = 1/512, 0⁹ = 0, and 2⁹ = 512.

15. If a line is invariant under A, then it can be expressed in terms of an eigenvector of A.

(a) det(λI − A) = det[λ−4 1; −2 λ−1] = (λ − 4)(λ − 1) + 2 = λ² − 5λ + 6 = (λ − 3)(λ − 2)
(λI − A)x = 0 is [λ−4 1; −2 λ−1][x; y] = [0; 0].
λ = 3 gives [−1 1; −2 2][x; y] = [0; 0]. [−1 1; −2 2] reduces to [1 −1; 0 0], so a general solution is x = y. The line y = x is invariant under A.
λ = 2 gives [−2 1; −2 1][x; y] = [0; 0]. [−2 1; −2 1] can be reduced to [2 −1; 0 0], so a general solution is 2x = y. The line y = 2x is invariant under A.

(b) det(λI − A) = det[λ −1; 1 λ] = λ² + 1
Since the characteristic equation λ² + 1 = 0 has no real solutions, there are no lines that are invariant under A.

(c) det(λI − A) = det[λ−2 −3; 0 λ−2] = (λ − 2)²
(λI − A)x = 0 is [λ−2 −3; 0 λ−2][x; y] = [0; 0].
λ = 2 gives [0 −3; 0 0][x; y] = [0; 0]. Since [0 −3; 0 0] reduces to [0 1; 0 0], the general solution is y = 0, and the line y = 0 is invariant under A.

23. We have that Ax = λx. A⁻¹(Ax) = Ix = x, but also A⁻¹(Ax) = A⁻¹(λx) = λ(A⁻¹x), so A⁻¹x = (1/λ)x. Thus 1/λ is an eigenvalue of A⁻¹, with corresponding eigenvector x.

25. We have that Ax = λx. Since s is a scalar, (sA)x = s(Ax) = s(λx) = (sλ)x, so sλ is an eigenvalue of sA with corresponding eigenvector x.
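The facts in Exercises 13, 23, and 25 can be illustrated with a quick numerical sketch; the triangular matrix below is an arbitrary example, not taken from the manual:

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 5.0]])   # triangular, eigenvalues 2 and 5

# Exercise 23: if Ax = lambda*x and A is invertible, A^{-1} has eigenvalue 1/lambda.
print(sorted(np.linalg.eigvals(np.linalg.inv(A))))   # reciprocals 1/5 and 1/2

# Exercise 25: sA has eigenvalue s*lambda (same eigenvectors).
print(sorted(np.linalg.eigvals(3.0 * A)))            # 3*2 and 3*5

# Exercise 13: the eigenvalues of A^9 are the 9th powers of the diagonal entries.
print(sorted(np.linalg.eigvals(np.linalg.matrix_power(A, 9))))
```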

True/False 5.1

(a) False; the vector x must be nonzero to be an eigenvector⎯if x = 0, then Ax = λx for all λ.

(b) False; if λ is an eigenvalue of A, then det(λI − A) = 0, so (λI − A)x = 0 has nontrivial solutions.

(c) True; since p(0) = 0² + 1 ≠ 0, λ = 0 is not an eigenvalue of A and A is invertible by Theorem 5.1.5.

(d) False; the eigenspace corresponding to λ contains the vector x = 0, which is not an eigenvector of A.

(e) True; if 0 is an eigenvalue of A, then 0² = 0 is an eigenvalue of A², so A² is singular.

(f) False; the reduced row echelon form of an invertible n × n matrix A is Iₙ, which has only one eigenvalue, λ = 1, whereas A can have eigenvalues that are not 1. For instance, the matrix A = [3 0; 8 −1] has the eigenvalues λ = 3 and λ = −1.

(g) False; if 0 is an eigenvalue of A, then A is singular, so the set of column vectors of A cannot be linearly independent.

Section 5.2

Exercise Set 5.2

1. Since the determinant is a similarity invariant and det(A) = 2 − 3 = −1 while det(B) = −2 − 0 = −2, A and B are not similar matrices.

3. A reduces to [1 0 0; 0 1 0; 0 0 1] while B reduces to [1 2 0; 0 0 1; 0 0 0], so they have different ranks. Since rank is a similarity invariant, A and B are not similar matrices.

5. Since the geometric multiplicity of an eigenvalue is less than or equal to its algebraic multiplicity, the eigenspace for λ = 0 can have dimension 1 or 2, the eigenspace for λ = 1 has dimension 1, and the eigenspace for λ = 2 can have dimension 1, 2, or 3.

7. Since the matrix is lower triangular, with 2s on the diagonal, the only eigenvalue is λ = 2. For λ = 2, 2I − A = [0 0; −1 0], which has rank 1, hence nullity 1. Thus, the matrix has only 1 linearly independent eigenvector and is not diagonalizable.

9. Since the matrix is lower triangular with diagonal entries of 3, 2, and 2, the eigenvalues are 3 and 2, where 2 has algebraic multiplicity 2. For λ = 3, 3I − A = [0 0 0; 0 1 0; 0 −1 1], which has rank 2, hence nullity 1, so the eigenspace corresponding to λ = 3 has dimension 1. For λ = 2, 2I − A = [−1 0 0; 0 0 0; 0 −1 0], which has rank 2, hence nullity 1, so the eigenspace corresponding to λ = 2 has dimension 1. The matrix is not diagonalizable, since it has only 2 linearly independent eigenvectors.

11. Since the matrix is upper triangular with diagonal entries 2, 2, 3, and 3, the eigenvalues are 2 and 3, each with algebraic multiplicity 2. For λ = 2, 2I − A = [0 1 0 −1; 0 0 −1 1; 0 0 −1 −2; 0 0 0 −1], which has rank 3, hence nullity 1, so the eigenspace corresponding to λ = 2 has dimension 1. For λ = 3, 3I − A = [1 1 0 −1; 0 1 −1 1; 0 0 0 −2; 0 0 0 0], which has rank 3, hence nullity 1, so the eigenspace corresponding to λ = 3 has dimension 1. The matrix is not diagonalizable since it has only 2 linearly independent eigenvectors. Note that showing that the geometric multiplicity of either eigenvalue is less than its algebraic multiplicity is sufficient to show that the matrix is not diagonalizable.
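Exercises 1 and 3 rest on determinant and rank being similarity invariants. A small sketch with an arbitrary example matrix (not the A, B of the exercises) makes the point:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
P = np.array([[2.0, 1.0], [1.0, 1.0]])      # any invertible matrix
B = np.linalg.inv(P) @ A @ P                 # B = P^{-1} A P is similar to A

# Similar matrices share determinant, trace, and rank.
print(round(np.linalg.det(A), 6), round(np.linalg.det(B), 6))
print(round(np.trace(A), 6), round(np.trace(B), 6))
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))
```

So showing that any one invariant differs, as the two exercises do, is enough to rule out similarity.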

13. A has eigenvalues λ1 = 1 and λ2 = −1 (the diagonal entries, since A is triangular). (λI − A)x = 0 is [λ−1 0; −6 λ+1][x1; x2] = [0; 0].
λ1 = 1 gives [0 0; −6 2][x1; x2] = [0; 0]. Since [0 0; −6 2] reduces to [1 −1/3; 0 0], the general solution is x1 = (1/3)s, x2 = s, or [x1; x2] = s[1/3; 1], and p1 = [1/3; 1] is a basis for the eigenspace corresponding to λ1 = 1.
λ2 = −1 gives [−2 0; −6 0][x1; x2] = [0; 0]. Since [−2 0; −6 0] reduces to [1 0; 0 0], the general solution is x1 = 0, x2 = s or [x1; x2] = s[0; 1], and p2 = [0; 1] is a basis for the eigenspace corresponding to λ2 = −1.
Thus, P = [p1 p2] = [1/3 0; 1 1] diagonalizes A.
P⁻¹AP = [3 0; −3 1][1 0; 6 −1][1/3 0; 1 1] = [λ1 0; 0 λ2] = [1 0; 0 −1]

15. A has eigenvalues λ1 = 2 and λ2 = 3 (the diagonal entries, since A is triangular). (λI − A)x = 0 is [λ−2 0 2; 0 λ−3 0; 0 0 λ−3][x1; x2; x3] = [0; 0; 0].
λ1 = 2 gives [0 0 2; 0 −1 0; 0 0 −1]x = 0. Since [0 0 2; 0 −1 0; 0 0 −1] reduces to [0 1 0; 0 0 1; 0 0 0], the general solution is x1 = s, x2 = 0, x3 = 0 or [x1; x2; x3] = s[1; 0; 0], and p1 = [1; 0; 0] is a basis for the eigenspace corresponding to λ1 = 2.
λ2 = 3 gives [1 0 2; 0 0 0; 0 0 0]x = 0, which has general solution x1 = −2s, x2 = t, x3 = s, or [x1; x2; x3] = s[−2; 0; 1] + t[0; 1; 0], and p2 = [−2; 0; 1], p3 = [0; 1; 0] is a basis for the eigenspace corresponding to λ2 = 3.
Thus P = [p1 p2 p3] = [1 −2 0; 0 0 1; 0 1 0] diagonalizes A.
P⁻¹AP = [1 0 2; 0 0 1; 0 1 0][2 0 −2; 0 3 0; 0 0 3][1 −2 0; 0 0 1; 0 1 0] = [λ1 0 0; 0 λ2 0; 0 0 λ2] = [2 0 0; 0 3 0; 0 0 3]

17. det(λI − A) = det[λ+1 −4 2; 3 λ−4 0; 3 −1 λ−3] = λ³ − 6λ² + 11λ − 6 = (λ − 1)(λ − 2)(λ − 3)
The eigenvalues are λ1 = 1, λ2 = 2, and λ3 = 3, each with algebraic multiplicity 1. Since each eigenvalue must have nonzero geometric multiplicity, the geometric multiplicity of each eigenvalue is also 1 and A is diagonalizable.
(λI − A)x = 0 is [λ+1 −4 2; 3 λ−4 0; 3 −1 λ−3][x1; x2; x3] = [0; 0; 0].
λ1 = 1 gives [2 −4 2; 3 −3 0; 3 −1 −2]x = 0. Since [2 −4 2; 3 −3 0; 3 −1 −2] reduces to [1 0 −1; 0 1 −1; 0 0 0], the general solution is x1 = s, x2 = s, x3 = s, or [x1; x2; x3] = s[1; 1; 1], and p1 = [1; 1; 1] is a basis for the eigenspace corresponding to λ1 = 1.
λ2 = 2 gives [3 −4 2; 3 −2 0; 3 −1 −1]x = 0. Since [3 −4 2; 3 −2 0; 3 −1 −1] reduces to [1 0 −2/3; 0 1 −1; 0 0 0], the general solution is x1 = (2/3)s, x2 = s, x3 = s or x1 = 2t, x2 = 3t, x3 = 3t, which is [x1; x2; x3] = t[2; 3; 3], and p2 = [2; 3; 3] is a basis for the eigenspace corresponding to λ2 = 2.
λ3 = 3 gives [4 −4 2; 3 −1 0; 3 −1 0]x = 0. Since [4 −4 2; 3 −1 0; 3 −1 0] reduces to [1 0 −1/4; 0 1 −3/4; 0 0 0], the general solution is x1 = (1/4)s, x2 = (3/4)s, x3 = s or x1 = t, x2 = 3t, x3 = 4t, which is [x1; x2; x3] = t[1; 3; 4], and p3 = [1; 3; 4] is a basis for the eigenspace corresponding to λ3 = 3.
Thus P = [p1 p2 p3] = [1 2 1; 1 3 3; 1 3 4] diagonalizes A and P⁻¹AP = [λ1 0 0; 0 λ2 0; 0 0 λ3] = [1 0 0; 0 2 0; 0 0 3].

19. Since A is triangular with diagonal entries of 0, 0, 1, the eigenvalues are λ1 = 0 (with algebraic multiplicity 2) and λ2 = 1. Since λ2 = 1 has algebraic multiplicity 1, it must also have geometric multiplicity 1.
(λI − A)x = 0 is [λ 0 0; 0 λ 0; −3 0 λ−1][x1; x2; x3] = [0; 0; 0].
λ1 = 0 gives [0 0 0; 0 0 0; −3 0 −1]x = 0. Since [0 0 0; 0 0 0; −3 0 −1] reduces to [1 0 1/3; 0 0 0; 0 0 0], the general solution is x1 = −(1/3)r, x2 = t, x3 = r or x1 = s, x2 = t, x3 = −3s, which is [x1; x2; x3] = s[1; 0; −3] + t[0; 1; 0]. Thus λ1 = 0 has geometric multiplicity 2, and A is diagonalizable. p1 = [1; 0; −3], p2 = [0; 1; 0] are a basis for the eigenspace corresponding to λ1 = 0.
λ2 = 1 gives [1 0 0; 0 1 0; −3 0 0]x = 0. Since [1 0 0; 0 1 0; −3 0 0] reduces to [1 0 0; 0 1 0; 0 0 0], a general solution is x1 = 0, x2 = 0, x3 = s or [x1; x2; x3] = s[0; 0; 1], and p3 = [0; 0; 1] is a basis for the eigenspace corresponding to λ2 = 1.
Thus P = [p1 p2 p3] = [1 0 0; 0 1 0; −3 0 1] diagonalizes A and P⁻¹AP = [λ1 0 0; 0 λ1 0; 0 0 λ2] = [0 0 0; 0 0 0; 0 0 1].

21. Since A is triangular with entries −2, −2, 3, and 3 on the diagonal, the eigenvalues are λ1 = −2 and λ2 = 3, both with algebraic multiplicity 2.
(λI − A)x = 0 is [λ+2 0 0 0; 0 λ+2 −5 5; 0 0 λ−3 0; 0 0 0 λ−3][x1; x2; x3; x4] = [0; 0; 0; 0].
λ1 = −2 gives [0 0 0 0; 0 0 −5 5; 0 0 −5 0; 0 0 0 −5]x = 0. Since [0 0 0 0; 0 0 −5 5; 0 0 −5 0; 0 0 0 −5] reduces to [0 0 1 0; 0 0 0 1; 0 0 0 0; 0 0 0 0], a general solution is x1 = s, x2 = t, x3 = 0, x4 = 0 or [x1; x2; x3; x4] = s[1; 0; 0; 0] + t[0; 1; 0; 0]. Thus λ1 = −2 has geometric multiplicity 2, and p1 = [1; 0; 0; 0], p2 = [0; 1; 0; 0] are a basis for the eigenspace corresponding to λ1 = −2.
λ2 = 3 gives [5 0 0 0; 0 5 −5 5; 0 0 0 0; 0 0 0 0]x = 0. Since [5 0 0 0; 0 5 −5 5; 0 0 0 0; 0 0 0 0] reduces to [1 0 0 0; 0 1 −1 1; 0 0 0 0; 0 0 0 0], a general solution is x1 = 0, x2 = s − t, x3 = s, x4 = t, or [x1; x2; x3; x4] = s[0; 1; 1; 0] + t[0; −1; 0; 1]. Thus λ2 = 3 has geometric multiplicity 2, and p3 = [0; 1; 1; 0], p4 = [0; −1; 0; 1] are a basis for the eigenspace corresponding to λ2 = 3.
Since the geometric and algebraic multiplicities are equal for both eigenvalues, A is diagonalizable.
P = [p1 p2 p3 p4] = [1 0 0 0; 0 1 1 −1; 0 0 1 0; 0 0 0 1] diagonalizes A, and P⁻¹AP = [λ1 0 0 0; 0 λ1 0 0; 0 0 λ2 0; 0 0 0 λ2] = [−2 0 0 0; 0 −2 0 0; 0 0 3 0; 0 0 0 3].
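A diagonalization like the one in Exercise 15 can be verified directly; A = [2 0 −2; 0 3 0; 0 0 3] and P are the matrices appearing in that solution's product P⁻¹AP:

```python
import numpy as np

A = np.array([[2.0, 0.0, -2.0], [0.0, 3.0, 0.0], [0.0, 0.0, 3.0]])
P = np.array([[1.0, -2.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])  # columns p1, p2, p3

# P^{-1} A P should be the diagonal matrix of eigenvalues 2, 3, 3.
D = np.linalg.inv(P) @ A @ P
print(D.round(10))   # expect diag(2, 3, 3)
```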

23. det(λI − A) = det[λ+1 −7 1; 0 λ−1 0; 0 −15 λ+2] = λ³ + 2λ² − λ − 2 = (λ + 2)(λ + 1)(λ − 1)
Thus, the eigenvalues of A are λ1 = −2, λ2 = −1, and λ3 = 1.
(λI − A)x = 0 is [λ+1 −7 1; 0 λ−1 0; 0 −15 λ+2][x1; x2; x3] = [0; 0; 0].
λ1 = −2 gives [−1 −7 1; 0 −3 0; 0 −15 0]x = 0. Since [−1 −7 1; 0 −3 0; 0 −15 0] reduces to [1 0 −1; 0 1 0; 0 0 0], a general solution is x1 = s, x2 = 0, x3 = s, or [x1; x2; x3] = s[1; 0; 1], so p1 = [1; 0; 1] is a basis for the eigenspace corresponding to λ1 = −2.
λ2 = −1 gives [0 −7 1; 0 −2 0; 0 −15 1]x = 0. Since [0 −7 1; 0 −2 0; 0 −15 1] reduces to [0 1 0; 0 0 1; 0 0 0], a general solution is x1 = s, x2 = 0, x3 = 0, or [x1; x2; x3] = s[1; 0; 0], so p2 = [1; 0; 0] is a basis for the eigenspace corresponding to λ2 = −1.
λ3 = 1 gives [2 −7 1; 0 0 0; 0 −15 3]x = 0. Since [2 −7 1; 0 0 0; 0 −15 3] reduces to [1 0 −1/5; 0 1 −1/5; 0 0 0], a general solution is x1 = (1/5)s, x2 = (1/5)s, x3 = s or x1 = t, x2 = t, x3 = 5t, which is [x1; x2; x3] = t[1; 1; 5], so p3 = [1; 1; 5] is a basis for the eigenspace corresponding to λ3 = 1.
P = [p1 p2 p3] = [1 1 1; 0 0 1; 1 0 5] diagonalizes A, and
D = P⁻¹AP = [0 −5 1; 1 4 −1; 0 1 0][−1 7 −1; 0 1 0; 0 15 −2][1 1 1; 0 0 1; 1 0 5] = [λ1 0 0; 0 λ2 0; 0 0 λ3] = [−2 0 0; 0 −1 0; 0 0 1].
Thus A¹¹ = PD¹¹P⁻¹ = P[(−2)¹¹ 0 0; 0 (−1)¹¹ 0; 0 0 1¹¹]P⁻¹ = P[−2048 0 0; 0 −1 0; 0 0 1]P⁻¹ = [−1 10,237 −2047; 0 1 0; 0 10,245 −2048].

25. det(λI − A) = det[λ−3 1 0; 1 λ−2 1; 0 1 λ−3] = λ³ − 8λ² + 19λ − 12 = (λ − 1)(λ − 3)(λ − 4)
The eigenvalues of A are λ1 = 1, λ2 = 3, and λ3 = 4.
(λI − A)x = 0 is [λ−3 1 0; 1 λ−2 1; 0 1 λ−3][x1; x2; x3] = [0; 0; 0].
λ1 = 1 gives [−2 1 0; 1 −1 1; 0 1 −2]x = 0. Since [−2 1 0; 1 −1 1; 0 1 −2] reduces to [1 0 −1; 0 1 −2; 0 0 0], a general solution is x1 = s, x2 = 2s, x3 = s or [x1; x2; x3] = s[1; 2; 1], and p1 = [1; 2; 1] is a basis for the eigenspace corresponding to λ1 = 1.
λ2 = 3 gives [0 1 0; 1 1 1; 0 1 0]x = 0. Since [0 1 0; 1 1 1; 0 1 0] reduces to [1 0 1; 0 1 0; 0 0 0], the general solution is x1 = s, x2 = 0, x3 = −s or [x1; x2; x3] = s[1; 0; −1], and p2 = [1; 0; −1] is a basis for the eigenspace corresponding to λ2 = 3.
λ3 = 4 gives [1 1 0; 1 2 1; 0 1 1]x = 0. Since [1 1 0; 1 2 1; 0 1 1] reduces to [1 0 −1; 0 1 1; 0 0 0], a general solution is x1 = s, x2 = −s, x3 = s, or [x1; x2; x3] = s[1; −1; 1], and p3 = [1; −1; 1] is a basis for the eigenspace corresponding to λ3 = 4.
Thus P = [p1 p2 p3] = [1 1 1; 2 0 −1; 1 −1 1] diagonalizes A and P⁻¹AP = D = [λ1 0 0; 0 λ2 0; 0 0 λ3] = [1 0 0; 0 3 0; 0 0 4].
P⁻¹ = [1/6 1/3 1/6; 1/2 0 −1/2; 1/3 −1/3 1/3]
Aⁿ = PDⁿP⁻¹ = [1 1 1; 2 0 −1; 1 −1 1][1 0 0; 0 3ⁿ 0; 0 0 4ⁿ][1/6 1/3 1/6; 1/2 0 −1/2; 1/3 −1/3 1/3]

27. Using the result in Exercise 20 of Section 5.1, one possibility is P = [−b −b; a−λ1 a−λ2] where λ1 = (1/2)[a + d + √((a − d)² + 4bc)] and λ2 = (1/2)[a + d − √((a − d)² + 4bc)].

31. If A is diagonalizable, then P⁻¹AP = D. For the same P, with k factors of A,
P⁻¹AᵏP = P⁻¹A(PP⁻¹)A(PP⁻¹)⋯(PP⁻¹)AP = (P⁻¹AP)(P⁻¹AP)⋯(P⁻¹AP) = Dᵏ.
Since D is a diagonal matrix, so is Dᵏ, so Aᵏ is also diagonalizable.

33. (a) The dimension of the eigenspace corresponding to an eigenvalue is the geometric multiplicity of the eigenvalue, which must be less than or equal to the algebraic multiplicity of the eigenvalue. Thus, for λ = 1, the dimension is 1; for λ = 3, the dimension is 1 or 2; for λ = 4, the dimension is 1, 2, or 3.

(b) If A is diagonalizable then each eigenvalue has the same algebraic and geometric multiplicity. Thus the dimensions of the eigenspaces are:
λ = 1: dimension 1
λ = 3: dimension 2
λ = 4: dimension 3

(c) Since only λ = 4 can have an eigenspace with dimension 3, the eigenvalue must be 4.

True/False 5.2

(a) True; use P = I.

(b) True; since A is similar to B, B = P1⁻¹AP1 for some invertible matrix P1. Since B is similar to C, C = P2⁻¹BP2 for some invertible matrix P2. Then
C = P2⁻¹BP2 = P2⁻¹(P1⁻¹AP1)P2 = (P2⁻¹P1⁻¹)A(P1P2) = (P1P2)⁻¹A(P1P2).
Since the product of invertible matrices is invertible, P = P1P2 is invertible and A is similar to C.

(c) True; since B = P⁻¹AP, then B⁻¹ = (P⁻¹AP)⁻¹ = P⁻¹A⁻¹P, so A⁻¹ and B⁻¹ are similar.
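The A¹¹ computation in Exercise 23 is easy to confirm; A = [−1 7 −1; 0 1 0; 0 15 −2] is the middle factor shown in that solution's product D = P⁻¹AP:

```python
import numpy as np

A = np.array([[-1, 7, -1], [0, 1, 0], [0, 15, -2]])

# Direct integer matrix power should reproduce P D^11 P^{-1}.
A11 = np.linalg.matrix_power(A, 11)
print(A11)   # expect [[-1, 10237, -2047], [0, 1, 0], [0, 10245, -2048]]
```

Diagonalizing first is still the better hand method: D¹¹ costs only three scalar powers, while repeated multiplication of A does not expose the eigenvalue structure.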

(d) False; a matrix P that diagonalizes A is not unique⎯it depends on the order of the eigenvalues in the diagonal matrix D and the choice of basis for each eigenspace.

(e) True; if P⁻¹AP = D, then P⁻¹A⁻¹P = (P⁻¹AP)⁻¹ = D⁻¹, so A⁻¹ is also diagonalizable.

(f) True; Dᵀ = (P⁻¹AP)ᵀ = PᵀAᵀ(P⁻¹)ᵀ = PᵀAᵀ(Pᵀ)⁻¹, so Aᵀ is also diagonalizable.

(g) True; if A is n × n and has n linearly independent eigenvectors, then the geometric multiplicity of each eigenvalue must be equal to its algebraic multiplicity, hence A is diagonalizable.

(h) True; if every eigenvalue of A has algebraic multiplicity 1, then every eigenvalue also has geometric multiplicity 1. Since the algebraic and geometric multiplicities are the same for each eigenvalue, A is diagonalizable.

Section 5.3

Exercise Set 5.3

1. conj(u) = conj((2 − i, 4i, 1 + i)) = (2 + i, −4i, 1 − i)
Re(u) = (Re(2 − i), Re(4i), Re(1 + i)) = (2, 0, 1)
Im(u) = (Im(2 − i), Im(4i), Im(1 + i)) = (−1, 4, 1)
‖u‖ = √(|2 − i|² + |4i|² + |1 + i|²) = √(5 + 16 + 2) = √23

5. If ix − 3v = conj(u), then ix = 3v + conj(u) and x = (1/i)(3v + conj(u)) = −i(3v + conj(u)). With v = (1 + i, 2 − i, 4) and conj(u) = conj((3 − 4i, 2 + i, −6i)) = (3 + 4i, 2 − i, 6i),
x = −i((3 + 3i, 6 − 3i, 12) + (3 + 4i, 2 − i, 6i)) = −i(6 + 7i, 8 − 4i, 12 + 6i) = (7 − 6i, −4 − 8i, 6 − 12i).

7. A = [−5i 4; 2−i 1+5i], so conj(A) = [5i 4; 2+i 1−5i].
Re(A) = [Re(−5i) Re(4); Re(2−i) Re(1+5i)] = [0 4; 2 1]
Im(A) = [Im(−5i) Im(4); Im(2−i) Im(1+5i)] = [−5 0; −1 5]
det(A) = −5i(1 + 5i) − 4(2 − i) = −5i + 25 − 8 + 4i = 17 − i
tr(A) = −5i + (1 + 5i) = 1

11. Using u · v = u1·conj(v1) + u2·conj(v2) + u3·conj(v3):
u · v = (i, 2i, 3) · (4, −2i, 1 + i) = i(4) + 2i(2i) + 3(1 − i) = 4i − 4 + 3 − 3i = −1 + i
u · w = (i, 2i, 3) · (2 − i, 2i, 5 + 3i) = i(2 + i) + 2i(−2i) + 3(5 − 3i) = 2i − 1 + 4 + 15 − 9i = 18 − 7i
v · w = (4, −2i, 1 + i) · (2 − i, 2i, 5 + 3i) = 4(2 + i) − 2i(−2i) + (1 + i)(5 − 3i) = 8 + 4i − 4 + 5 + 2i + 3 = 12 + 6i
v · u = 4(−i) − 2i(−2i) + (1 + i)(3) = −4i − 4 + 3 + 3i = −1 − i = conj(u · v)
u · (v + w) = (i, 2i, 3) · (6 − i, 0, 6 + 4i) = i(6 + i) + 2i(0) + 3(6 − 4i) = 6i − 1 + 18 − 12i = 17 − 6i = (−1 + i) + (18 − 7i) = u · v + u · w
k(u · v) = 2i(−1 + i) = −2i − 2 = −2 − 2i
(ku) · v = (2i(i, 2i, 3)) · (4, −2i, 1 + i) = (−2, −4, 6i) · (4, −2i, 1 + i) = −2(4) − 4(2i) + 6i(1 − i) = −8 − 8i + 6i + 6 = −2 − 2i = k(u · v)

13. u · v = (i, 2i, 3) · (4, 2i, 1 − i) = i(4) + 2i(−2i) + 3(1 + i) = 4i + 4 + 3 + 3i = 7 + 7i
w · u = conj(u · w) = conj(18 − 7i) = 18 + 7i
conj(u · v) − w · u = (7 − 7i) − (18 + 7i) = −11 − 14i
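The complex dot products in Exercises 11 and 13 conjugate the second factor: u·v = u1·conj(v1) + u2·conj(v2) + u3·conj(v3). NumPy's `vdot` conjugates its first argument instead, so the same value comes from `np.vdot(v, u)`. A quick check against Exercise 11 (a sketch, not part of the manual):

```python
import numpy as np

u = np.array([1j, 2j, 3])
v = np.array([4, -2j, 1 + 1j])
w = np.array([2 - 1j, 2j, 5 + 3j])

# Conjugate the SECOND factor, matching the manual's convention.
dot = lambda a, b: np.sum(a * np.conj(b))

print(dot(u, v))       # expect (-1+1j)
print(dot(u, w))       # expect (18-7j)
print(dot(v, u))       # expect (-1-1j), the conjugate of dot(u, v)
print(np.vdot(v, u))   # expect (-1+1j): vdot conjugates its first argument
```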

15. tr(A) = 4 + 0 = 4
det(A) = 4(0) − 1(−5) = 5
The characteristic equation of A is λ² − 4λ + 5 = 0, which has solutions λ = 2 ± i.
(λI − A)x = 0 is [λ−4 5; −1 λ][x1; x2] = [0; 0].
λ1 = 2 − i gives [−2−i 5; −1 2−i][x1; x2] = [0; 0]. Since [−2−i 5; −1 2−i] reduces to [1 −2+i; 0 0], a general solution is x1 = (2 − i)s, x2 = s or [x1; x2] = s[2−i; 1], so x1 = [2−i; 1] is a basis for the eigenspace corresponding to λ1 = 2 − i.
x2 = conj(x1) = [2+i; 1] is a basis for the eigenspace corresponding to λ2 = conj(λ1) = 2 + i.

17. tr(A) = 5 + 3 = 8
det(A) = 5(3) − 1(−2) = 17
The characteristic equation of A is λ² − 8λ + 17 = 0, which has solutions λ = 4 ± i.
(λI − A)x = 0 is [λ−5 2; −1 λ−3][x1; x2] = [0; 0].
λ1 = 4 − i gives [−1−i 2; −1 1−i][x1; x2] = [0; 0]. Since [−1−i 2; −1 1−i] reduces to [1 −1+i; 0 0], a general solution is x1 = (1 − i)s, x2 = s or [x1; x2] = s[1−i; 1], so x1 = [1−i; 1] is a basis for the eigenspace corresponding to λ1 = 4 − i.
x2 = conj(x1) = [1+i; 1] is a basis for the eigenspace corresponding to λ2 = conj(λ1) = 4 + i.

19. Here a = b = 1. The angle from the positive x-axis to the ray that joins the origin to the point (1, 1) is φ = tan⁻¹(1/1) = π/4.
|λ| = |1 + 1i| = √(1² + 1²) = √2

21. Here a = 1, b = −√3. The angle from the positive x-axis to the ray that joins the origin to the point (1, −√3) is φ = tan⁻¹(−√3/1) = −π/3.
|λ| = |1 − i√3| = √(1 + 3) = 2

23. tr(A) = −1 + 7 = 6
det(A) = −1(7) − 4(−5) = 13
The characteristic equation of A is λ² − 6λ + 13 = 0, which has solutions λ = 3 ± 2i.
For λ = 3 − 2i, (λI − A)x = 0 is [4−2i 5; −4 −4−2i][x1; x2] = [0; 0]. Since [4−2i 5; −4 −4−2i] reduces to [1 1+(1/2)i; 0 0], a general solution is x1 = (−1 − (1/2)i)s, x2 = s or x1 = (−2 − i)t, x2 = 2t, which is [x1; x2] = t[−2−i; 2], so x = [−2−i; 2] is a basis for the eigenspace corresponding to λ = 3 − 2i.
Re(x) = [−2; 2] and Im(x) = [−1; 0].
Thus A = PCP⁻¹ where P = [−2 −1; 2 0] and C = [3 −2; 2 3].

25. tr(A) = 8 + 2 = 10
det(A) = 8(2) − (−3)(6) = 34
The characteristic equation of A is λ² − 10λ + 34 = 0, which has solutions λ = 5 ± 3i.
For λ = 5 − 3i, (λI − A)x = 0 is [−3−3i −6; 3 3−3i][x1; x2] = [0; 0]. Since [−3−3i −6; 3 3−3i] reduces to [1 1−i; 0 0], a general solution is x1 = (−1 + i)s, x2 = s or x1 = (1 − i)t, x2 = −t, which is [x1; x2] = t[1−i; −1], so x = [1−i; −1] is a basis for the eigenspace corresponding to λ = 5 − 3i.
Re(x) = [1; −1] and Im(x) = [−1; 0].
Thus, A = PCP⁻¹ where P = [1 −1; −1 0] and C = [5 −3; 3 5].
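The factorization A = PCP⁻¹ found in Exercise 23 can be verified numerically. A = [−1 −5; 4 7] is reconstructed here from the λI − A shown in that solution, so treat the code as a sketch:

```python
import numpy as np

A = np.array([[-1.0, -5.0], [4.0, 7.0]])
P = np.array([[-2.0, -1.0], [2.0, 0.0]])   # columns Re(x), Im(x)
C = np.array([[3.0, -2.0], [2.0, 3.0]])    # rotation-scaling block for 3 + 2i

print(np.allclose(P @ C @ np.linalg.inv(P), A))   # expect True
print(np.linalg.eigvals(A))                        # the pair 3 +/- 2i
```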

27. (a) Using u · v = u1·conj(v1) + u2·conj(v2) + u3·conj(v3):
u · v = (2i, i, 3i) · (i, 6i, k) = 2i(−i) + i(−6i) + 3i·conj(k) = 2 + 6 + 3i·conj(k) = 8 + 3i·conj(k)
If u · v = 0, then 3i·conj(k) = −8, so conj(k) = −8/(3i) = (8/3)i and k = −(8/3)i.

(b) u · v = (k, k, 1 + i) · (1, −1, 1 − i) = k(1) + k(−1) + (1 + i)(1 + i) = (1 + i)² = 2i
Thus, there is no complex scalar k for which u · v = 0.

True/False 5.3

(a) False; non-real complex eigenvalues occur in conjugate pairs, so there must be an even number of them. Since a real 5 × 5 matrix will have 5 eigenvalues, at least one must be real.

(b) True; λ² − tr(A)λ + det(A) = 0 is the characteristic equation of a 2 × 2 matrix A.

(c) False; if A is a matrix with real entries and complex eigenvalues and k is a real scalar, k ≠ 0, 1, then kA has the same eigenvalues as A with the same algebraic multiplicities, but tr(kA) = k tr(A) ≠ tr(A).

(d) True; complex eigenvalues and eigenvectors occur in conjugate pairs.

(e) False; real symmetric matrices have real eigenvalues.

(f) False; this is only true if the eigenvalues of A satisfy |λ| = 1.

Section 5.4

Exercise Set 5.4

1. (a) The coefficient matrix for the system is A = [1 4; 2 3].
det(λI − A) = λ² − 4λ − 5 = (λ − 5)(λ + 1)
The eigenvalues of A are λ1 = 5 and λ2 = −1.
(λI − A)x = 0 is [λ−1 −4; −2 λ−3][x1; x2] = [0; 0].
λ1 = 5 gives [4 −4; −2 2][x1; x2] = [0; 0]. Since [4 −4; −2 2] reduces to [1 −1; 0 0], a general solution is x1 = s, x2 = s, or [x1; x2] = s[1; 1], and p1 = [1; 1] is a basis for the eigenspace corresponding to λ1 = 5.
λ2 = −1 gives [−2 −4; −2 −4][x1; x2] = [0; 0]. Since [−2 −4; −2 −4] reduces to [1 2; 0 0], a general solution is x1 = −2s, x2 = s, or [x1; x2] = s[−2; 1], and p2 = [−2; 1] is a basis for the eigenspace corresponding to λ2 = −1.
Thus P = [p1 p2] = [1 −2; 1 1] diagonalizes A and P⁻¹AP = D = [λ1 0; 0 λ2] = [5 0; 0 −1].
The substitutions y = Pu and y′ = Pu′ give the “diagonal system” u′ = Du = [5 0; 0 −1]u, or u1′ = 5u1, u2′ = −u2, which has the solution u = [c1e⁵ˣ; c2e⁻ˣ]. Then y = Pu gives the solution
y = [1 −2; 1 1][c1e⁵ˣ; c2e⁻ˣ] = [c1e⁵ˣ − 2c2e⁻ˣ; c1e⁵ˣ + c2e⁻ˣ]
and the solution of the system is
y1 = c1e⁵ˣ − 2c2e⁻ˣ
y2 = c1e⁵ˣ + c2e⁻ˣ.

(b) The initial conditions give c1 − 2c2 = 0, c1 + c2 = 0, which has the solution c1 = c2 = 0, so the solution satisfying the given initial conditions is y1 = 0, y2 = 0.

3. (a) The coefficient matrix for the system is A = [4 0 1; −2 1 0; −2 0 1].
det(λI − A) = λ³ − 6λ² + 11λ − 6 = (λ − 1)(λ − 2)(λ − 3)
The eigenvalues of A are λ1 = 1, λ2 = 2, and λ3 = 3.
(λI − A)x = 0 is [λ−4 0 −1; 2 λ−1 0; 2 0 λ−1][x1; x2; x3] = [0; 0; 0].
λ1 = 1 gives [−3 0 −1; 2 0 0; 2 0 0]x = 0. Since [−3 0 −1; 2 0 0; 2 0 0] reduces to [1 0 0; 0 0 1; 0 0 0], the general solution is x1 = 0, x2 = s, x3 = 0, or [x1; x2; x3] = s[0; 1; 0], and p1 = [0; 1; 0] is a basis for the eigenspace corresponding to λ1 = 1.
λ2 = 2 gives [−2 0 −1; 2 1 0; 2 0 1]x = 0. Since [−2 0 −1; 2 1 0; 2 0 1] reduces to [1 0 1/2; 0 1 −1; 0 0 0], the general solution is x1 = −(1/2)s, x2 = s, x3 = s or x1 = −t, x2 = 2t, x3 = 2t, which is [x1; x2; x3] = t[−1; 2; 2], and p2 = [−1; 2; 2] is a basis for the eigenspace corresponding to λ2 = 2.
λ3 = 3 gives [−1 0 −1; 2 2 0; 2 0 2]x = 0. Since [−1 0 −1; 2 2 0; 2 0 2] reduces to [1 0 1; 0 1 −1; 0 0 0], the general solution is x1 = s, x2 = −s, x3 = −s, or [x1; x2; x3] = s[1; −1; −1], and p3 = [1; −1; −1] is a basis for the eigenspace corresponding to λ3 = 3.
Thus P = [p1 p2 p3] = [0 −1 1; 1 2 −1; 0 2 −1] diagonalizes A and P⁻¹AP = D = [λ1 0 0; 0 λ2 0; 0 0 λ3] = [1 0 0; 0 2 0; 0 0 3].
The substitutions y = Pu and y′ = Pu′ give

the “diagonal system”

u1′ = u1

⎡1 0 0 ⎤

⎢

⎥

′

u = Du = 0 2 0 u or u2′ = 2u2 which

⎢

⎥

u3′ = 3u3

⎣0 0 3⎦

⎡ x1 ⎤

⎡0 ⎤

⎡0 ⎤

x3 = 0 or ⎢ x2 ⎥ = s ⎢1 ⎥ and p1 = ⎢1 ⎥ is a

⎢ ⎥

⎢ ⎥

⎢ ⎥

⎣0 ⎦

⎣0 ⎦

⎣ x3 ⎦

basis for the eigenspace corresponding to

λ1 = 1.

⎡ −2

λ 2 = 2 gives ⎢ 2

⎢

⎣ 2

⎡ −2 0 −1⎤

Since ⎢ 2 1 0 ⎥

⎢

⎥

⎣ 2 0 1⎦

⎡ 0 −1 1⎤

p3 ] = ⎢ 1 2 −1⎥

⎢

⎥

⎣ 0 2 −1⎦

0 −1⎤ ⎡ x1 ⎤ ⎡ 0⎤

1 0 ⎥ ⎢ x1 ⎥ = ⎢ 0⎥ .

⎥⎢ ⎥ ⎢ ⎥

0 1⎦ ⎣ x3 ⎦ ⎣ 0⎦

⎡ c ex ⎤

⎢ 1 ⎥

has the solution u = ⎢c2 e2 x ⎥ . Then y = Pu

⎢ 3x ⎥

⎣ c3e ⎦

gives the solution

reduces to

⎡1 0 1⎤

2⎥

⎢

, the general solution is

⎢0 1 −1⎥

⎢⎣0 0 0 ⎥⎦

133

Chapter 5: Eigenvalues and Eigenvectors

SSM: Elementary Linear Algebra

x ⎤

⎡

⎡0 −1 1⎤ ⎢ c1e ⎥

y = ⎢ 1 2 −1⎥ ⎢ c2 e2 x ⎥

⎢

⎥

⎣0 2 −1⎦ ⎢ c e3 x ⎥

⎣ 3 ⎦

⎡ − c e 2 x + c e3 x ⎤

2

3

⎢

⎥

= ⎢c1e x + 2c2 e 2 x − c3e3 x ⎥

⎢

⎥

2x

3x

⎣ 2c2 e − c3e

⎦

and the solution of the system is

⎡ 3 −1⎤ ⎡ x1 ⎤ ⎡0 ⎤

λ1 = 3 gives ⎢

.

⎢ ⎥=

⎣ −6 2 ⎥⎦ ⎣ x2 ⎦ ⎢⎣0 ⎥⎦

⎡1 − 1⎤

⎡ 3 −1⎤

3 ⎥ , the

reduces

to

Since ⎢

⎢

⎣ −6 2 ⎥⎦

0⎦

⎣0

1

general solution is x1 = s, x2 = s or x1 = t ,

3

⎡x ⎤

⎡1 ⎤

⎡1 ⎤

x2 = 3t which is ⎢ 1 ⎥ = t ⎢ ⎥ , so p1 = ⎢ ⎥ is a

x

3

⎣ ⎦

⎣ 3⎦

⎣ 2⎦

basis for the eigenspace corresponding to

λ1 = 3.

**y1 = −c2 e2 x + c3e3 x
**

y2 = c1e x + 2c2 e2 x − c3e3 x .

y3 = 2c2 e2 x − c3e3 x

⎡ −2 −1⎤ ⎡ x1 ⎤ ⎡ 0 ⎤

λ 2 = −2 gives ⎢

. Since

⎢ ⎥=

⎣ −6 −3⎥⎦ ⎣ x2 ⎦ ⎢⎣ 0 ⎥⎦

**(b) The initial conditions give
**

−c2 + c3 = −1

c1 + 2c2 − c3 = 1

2c2 − c3 = 0

⎡ −2 −1⎤

⎢⎣ −6 −3⎥⎦ reduces to

⎡1 1⎤

2 ⎥ , the general

⎢

0

0

⎣

⎦

1

solution is x1 = − s, x2 = s or x1 = t ,

2

⎡x ⎤

⎡ 1⎤

⎡ 1⎤

x2 = −2t which is ⎢ 1 ⎥ = t ⎢ ⎥ so p 2 = ⎢ ⎥

x

2

−

⎣ ⎦

⎣ −2 ⎦

⎣ 2⎦

is a basis for the eigenspace corresponding to

⎡ 1 1⎤

λ 2 = −2. Thus P = [p1 p 2 ] = ⎢

⎣3 −2 ⎥⎦

diagonalizes A and

0 ⎤ ⎡ 3 0⎤

⎡λ

P −1 AP = D = ⎢ 1

⎥=⎢

⎥.

⎣ 0 λ 2 ⎦ ⎣0 −2 ⎦

The substitutions y = Pu and y ′ = Pu ′ give the

**which has the solution c1 = 1, c2 = −1,
**

c3 = −2, so the solution satisfying the given

y1 = e2 x − 2e3 x

initial conditions is y2 = e x − 2e2 x + 2e3 x .

y3 = −2e 2 x + 2e3 x

**5. Let y = f(x) be a solution of y ′ = ay. Then for
**

g ( x) = f ( x)e − ax we have

g ′( x) = f ′( x)e− ax + f ( x)(− ae −ax )

= af ( x)e −ax − af ( x)e− ax

=0

⎡ 3 0⎤

“diagonal system” u′ = Du = ⎢

u or

⎣ 0 −2 ⎥⎦

u1′ = 3u1

u2′ = −2u2

**Hence g ( x) = f ( x)e −ax = c where c is a
**

constant or f ( x) = ceax .

⎡ c e3 x ⎤

which has the solution u = ⎢ 1 −2 x ⎥ . Then

⎢⎣c2 e ⎥⎦

y = Pu gives the solution

3x

3x

−2 x ⎤

⎡ 1 1⎤ ⎡ c1e ⎤ ⎡ c1e + c2 e

y=⎢

=

⎢

⎥

⎢

⎥.

⎣3 −2⎦⎥ ⎣⎢ c2 e−2 x ⎦⎥ ⎣⎢3c1e3 x − 2c2 e−2 x ⎦⎥

7. If y1 = y and y2 = y ′, then

y2′ = y ′′ = y ′ + 6 y = y2 + 6 y1 which yields the

system

y1′ = y2

y2′ = 6 y1 + y2

The coefficient matrix of this system is

⎡0 1⎤

A=⎢

.

⎣6 1⎥⎦

**Thus y1 = y = c1e3 x + c2 e−2 x is the solution of
**

the original differential equation.

**9. Use the substitutions y1 = y, y2 = y ′ = y1′ and
**

y3 = y ′′ = y2′ . This gives the system

det(λI − A) = λ 2 − λ − 6 = (λ − 3)(λ + 2)

The eigenvalues of A are λ1 = 3 and λ 2 = −2.

1 0⎤

⎡0

A = ⎢0

0 1⎥ .

⎢

⎥

⎣6 −11 6 ⎦

−1 ⎤ ⎡ x1 ⎤ ⎡0 ⎤

⎡λ

(λI − A)x = 0 is ⎢

.

⎢ ⎥=

⎣ −6 λ − 1⎥⎦ ⎣ x2 ⎦ ⎢⎣0 ⎥⎦

134

SSM: Elementary Linear Algebra

Section 5.4

⎡ x1 ⎤

⎡1 ⎤

⎡1 ⎤

⎢ x ⎥ = t ⎢ 3⎥ so p = ⎢3⎥ is a basis for the

3

⎢ 2⎥

⎢ ⎥

⎢ ⎥

⎣9 ⎦

⎣9 ⎦

⎣ x3 ⎦

eigenspace corresponding to λ3 = 3.

det(λI − A) = λ3 − 6λ 2 + 11λ − 6

= (λ − 1)(λ − 2)(λ − 3)

The eigenvalues of A are λ1 = 1, λ 2 = 2, and

λ3 = 3.

0 ⎤ ⎡ x1 ⎤ ⎡ 0⎤

⎡ λ −1

⎢

(λI − A)x = 0 is 0 λ

−1 ⎥ ⎢ x2 ⎥ = ⎢ 0⎥ .

⎢

⎥⎢ ⎥ ⎢ ⎥

⎣ −6 11 λ − 6 ⎦ ⎣ x3 ⎦ ⎣0 ⎦

Thus P = [p1 p 2

⎡1 1 1 ⎤

p3 ] = ⎢1 2 3⎥

⎢

⎥

⎣1 4 9 ⎦

diagonalizes A and

⎡ λ1 0

−1

P AP = D = ⎢ 0 λ 2

⎢

⎣0 0

⎡ 1 −1 0 ⎤ ⎡ x1 ⎤ ⎡0 ⎤

λ1 = 1 gives ⎢ 0 1 −1⎥ ⎢ x2 ⎥ = ⎢0 ⎥ .

⎢

⎥⎢ ⎥ ⎢ ⎥

⎣ −6 11 −5⎦ ⎣ x3 ⎦ ⎣0 ⎦

⎡ 1 −1 0 ⎤

⎡ 1 0 −1⎤

Since ⎢ 0 1 −1⎥ reduces to ⎢0 1 −1⎥ ,

⎢

⎥

⎢

⎥

⎣ −6 11 −5⎦

⎣0 0 0 ⎦

the general solution is x1 = s, x2 = s, x3 = s or

0 ⎤ ⎡1 0 0⎤

0 ⎥ = ⎢0 2 0⎥ .

⎥ ⎢

⎥

λ3 ⎦ ⎣0 0 3⎦

The substitutions y = Pu and y ′ = Pu ′ give the

⎡1 0 0 ⎤

“diagonal system” u′ = Du = ⎢0 2 0⎥ u or

⎢

⎥

⎣0 0 3⎦

⎡ x1 ⎤

⎡1⎤

⎡1⎤

⎢ x ⎥ = s ⎢1⎥ so p = ⎢1⎥ is a basis for the

1

⎢ 2⎥

⎢⎥

⎢⎥

⎣1⎦

⎣1⎦

⎣ x3 ⎦

eigenspace corresponding to λ1 = 1.

⎡ c ex ⎤

u1′ = u1

⎢ 1 ⎥

u2′ = 2u2 which has the solution u = ⎢c2 e2 x ⎥ .

⎢ 3x ⎥

u3′ = 3u3

⎣ c3e ⎦

Then y = Pu gives the solution

x ⎤

⎡

⎡1 1 1 ⎤ ⎢ c1e ⎥

y = ⎢1 2 3⎥ ⎢c2 e 2 x ⎥

⎢

⎥

⎣1 4 9 ⎦ ⎢ c e3 x ⎥

⎣ 3 ⎦

⎡ c e x + c e 2 x + c e3 x ⎤

2

3

⎢ 1

⎥

= ⎢ c1e x + 2c2 e2 x + 3c3e3 x ⎥ .

⎢ x

2x

3x ⎥

⎣c1e + 4c2 e + 9c3e ⎦

⎡ 2 −1 0 ⎤ ⎡ x1 ⎤ ⎡ 0⎤

λ 2 = 2 gives ⎢ 0 2 −1⎥ ⎢ x2 ⎥ = ⎢ 0⎥ .

⎢

⎥⎢ ⎥ ⎢ ⎥

⎣ −6 11 −4 ⎦ ⎣ x3 ⎦ ⎣ 0⎦

⎡1 0 − 1⎤

4⎥

⎢

⎡ 2 −1 0 ⎤

1⎥,

reduces

to

Since ⎢

−

0

1

⎢

2

⎣ 0 2 −1⎥⎦

⎢

⎥

0⎦

⎣0 0

1

1

s, x2 = s,

4

2

x3 = s or x1 = t , x2 = 2t , x3 = 4t which is

the general solution is x1 =

**Thus y1 = y = c1e x + c2 e2 x + c3e3 x is the
**

solution of the original differential equation.

⎡ x1 ⎤

⎡1 ⎤

⎡1 ⎤

⎢ x ⎥ = t ⎢ 2⎥ so p = ⎢ 2 ⎥ is a basis for the

2

2

⎢ ⎥

⎢ ⎥

⎢ ⎥

⎣ 4⎦

⎣4⎦

⎣ x3 ⎦

eigenspace corresponding to λ 2 = 2.

True/False 5.4

(a) False; just as not every system Ax = b has a

solution, not every system of differential

equations has a solution.

⎡ 3 −1 0 ⎤ ⎡ x1 ⎤ ⎡0 ⎤

λ3 = 3 gives ⎢ 0 3 −1⎥ ⎢ x2 ⎥ = ⎢0 ⎥ .

⎢

⎥⎢ ⎥ ⎢ ⎥

⎣ −6 11 −3⎦ ⎣ x3 ⎦ ⎣0 ⎦

⎡1 0 − 1⎤

⎡ 3 −1 0 ⎤

9⎥

⎢

Since ⎢ 0 3 −1⎥ reduces to ⎢0 1 − 1 ⎥ ,

3

⎢

⎥

⎢

⎥

⎣ −6 11 −3⎦

0⎦

⎣0 0

1

1

the general solution is x1 = s, x2 = s,

9

3

x3 = s or x1 = t , x2 = 3t , x3 = 9t which is

**(b) False; x and y may differ by any constant vector
**

b.

(c) True

(d) True; if A is a square matrix with distinct real

eigenvalues, it is diagonalizable.

(e) False; consider the case where A is

diagonalizable but not diagonal. Then A is

similar to a diagonal matrix P which can be used

to solve the system y ′ = Ay, but the systems

y ′ = Ay and u′ = Pu do not have the same

solutions.
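The diagonalization method used throughout this section can be checked numerically. The sketch below (assuming NumPy is available; the matrix and eigenvectors are those of Exercise 1) verifies that P⁻¹AP is diagonal and that y = Pu with u = (e^(5x), e^(−x)) really satisfies y′ = Ay:

```python
import numpy as np

# Coefficient matrix of the system y' = Ay from Exercise 1.
A = np.array([[1.0, 4.0],
              [2.0, 3.0]])

# Columns of P are the eigenvectors p1 = (1, 1) and p2 = (-2, 1).
P = np.array([[1.0, -2.0],
              [1.0,  1.0]])

# P^{-1} A P should be the diagonal matrix D = diag(5, -1).
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))  # diag(5, -1)

# One solution: y = P u with u = (e^{5x}, e^{-x}) (c1 = c2 = 1)
# must satisfy y' = Ay; compare y'(x) with A y(x) at a sample point.
x = 0.3
u = np.array([np.exp(5 * x), np.exp(-x)])
du = np.array([5 * np.exp(5 * x), -np.exp(-x)])
y, dy = P @ u, P @ du
assert np.allclose(dy, A @ y)
```

The same check works for the 3 × 3 systems of Exercises 3 and 9 by swapping in their A and P.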


Chapter 5: Eigenvalues and Eigenvectors

SSM: Elementary Linear Algebra

Chapter 5 Supplementary Exercises

1. (a) det(λI − A) = det[λ − cos θ  −sin θ; sin θ  λ − cos θ] = λ² − (2 cos θ)λ + 1
Thus the characteristic equation of A is λ² − (2 cos θ)λ + 1 = 0. For this equation, b² − 4ac = 4 cos² θ − 4 = 4(cos² θ − 1). Since 0 < θ < π, cos² θ < 1, hence b² − 4ac < 0, so the matrix has no real eigenvalues or eigenvectors.

(b) The matrix rotates vectors in R² through the angle θ. Since 0 < θ < π, no vector is transformed into a vector with the same or opposite direction.

3. (a) If D = [d1 0 ⋯ 0; 0 d2 ⋯ 0; ⋮ ⋮ ⋮; 0 0 ⋯ dn] has nonnegative entries on the main diagonal, then S = [√d1 0 ⋯ 0; 0 √d2 ⋯ 0; ⋮ ⋮ ⋮; 0 0 ⋯ √dn] satisfies S² = D, as required.

(b) If A is a diagonalizable matrix with nonnegative eigenvalues, there is a matrix P such that P⁻¹AP = D, where D = [λ1 0 ⋯ 0; 0 λ2 ⋯ 0; ⋮ ⋮ ⋮; 0 0 ⋯ λn] is a diagonal matrix with nonnegative entries on the main diagonal. By part (a) there is a matrix S1 such that S1² = D. Let S = PS1P⁻¹; then
S² = (PS1P⁻¹)(PS1P⁻¹) = PS1²P⁻¹ = PDP⁻¹ = A.

(c) A has eigenvalues λ1 = 1, λ2 = 4, λ3 = 9 with corresponding eigenvectors x1 = [1; 0; 0], x2 = [1; 1; 0], x3 = [1; 2; 2].
Thus P = [1 1 1; 0 1 2; 0 0 2] diagonalizes A, and P⁻¹AP = D = [1 0 0; 0 4 0; 0 0 9].
If S1 = [1 0 0; 0 2 0; 0 0 3], then S1² = D, and
S = PS1P⁻¹ = [1 1 1; 0 1 2; 0 0 2][1 0 0; 0 2 0; 0 0 3][1 −1 1/2; 0 1 −1; 0 0 1/2] = [1 1 0; 0 2 1; 0 0 3].

9. det(λI − A) = det[λ − 3  −6; −1  λ − 2] = λ² − 5λ
The characteristic equation of A is λ² − 5λ = 0, hence A² − 5A = 0, or A² = 5A.
A² = 5A = 5[3 6; 1 2] = [15 30; 5 10]
A³ = 5A² = 5[15 30; 5 10] = [75 150; 25 50]
A⁴ = 5A³ = 5[75 150; 25 50] = [375 750; 125 250]
A⁵ = 5A⁴ = 5[375 750; 125 250] = [1875 3750; 625 1250]

11. Suppose that Ax = λx for some nonzero x = [x1; x2; ⋮; xn]. Since every row of A is (c1, c2, …, cn), Ax = [c1x1 + c2x2 + ⋯ + cnxn; c1x1 + c2x2 + ⋯ + cnxn; ⋮; c1x1 + c2x2 + ⋯ + cnxn], so either x1 = x2 = ⋯ = xn and λ = c1 + c2 + ⋯ + cn = tr(A), or not all the xi are equal and λ must be 0.

13. If A is a k × k nilpotent matrix with Aⁿ = 0, then det(λI − Aⁿ) = det(λI) = λ^k, so Aⁿ has characteristic polynomial λ^k and the eigenvalues of Aⁿ are all 0. But if λ1 is an eigenvalue of A, then λ1ⁿ is an eigenvalue of Aⁿ, so all the eigenvalues of A must also be 0.

15. The matrix P = [0 1 0; 1 −1 1; −1 1 1] will diagonalize A, and P⁻¹AP = D = [0 0 0; 0 1 0; 0 0 −1], thus
A = PDP⁻¹ = [0 1 0; 1 −1 1; −1 1 1][0 0 0; 0 1 0; 0 0 −1][1 1/2 −1/2; 1 0 0; 0 1/2 1/2] = [1 0 0; −1 −1/2 −1/2; 1 −1/2 −1/2].

17. If λ is an eigenvalue of A, then λ³ will be an eigenvalue of A³ corresponding to the same eigenvector, i.e., if Ax1 = λx1, then A³x1 = λ³x1. Since A³ = A, the eigenvalues must satisfy λ³ = λ, and the only possibilities are −1, 0, and 1.
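The matrix square-root construction of Supplementary Exercise 3 is easy to verify numerically. A minimal sketch (assuming NumPy is available; the data are from part (c)):

```python
import numpy as np

# Exercise 3(c): A has eigenvalues 1, 4, 9 with eigenvectors
# (1,0,0), (1,1,0), (1,2,2), so P diagonalizes A.
P = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0],
              [0.0, 0.0, 2.0]])
D = np.diag([1.0, 4.0, 9.0])
A = P @ D @ np.linalg.inv(P)

# S1 is the entrywise square root of D, and S = P S1 P^{-1}
# satisfies S^2 = P S1^2 P^{-1} = P D P^{-1} = A.
S1 = np.diag([1.0, 2.0, 3.0])
S = P @ S1 @ np.linalg.inv(P)
print(np.round(S, 10))  # [[1, 1, 0], [0, 2, 1], [0, 0, 3]]
assert np.allclose(S @ S, A)
```

The same recipe gives a square root of any diagonalizable matrix with nonnegative eigenvalues.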


Chapter 6
Inner Product Spaces

Section 6.1

Exercise Set 6.1

1. (a) ⟨u, v⟩ = 1 · 3 + 1 · 2 = 5
(b) ⟨kv, w⟩ = ⟨3v, w⟩ = 9 · 0 + 6 · (−1) = −6
(c) ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩ = 1 · 0 + 1 · (−1) + 3 · 0 + 2 · (−1) = −3
(d) ||v|| = √⟨v, v⟩ = √(3² + 2²) = √13
(e) d(u, v) = ||u − v|| = ||(−2, −1)|| = √((−2)² + (−1)²) = √5
(f) ||u − kv|| = ||(1, 1) − 3(3, 2)|| = ||(−8, −5)|| = √((−8)² + (−5)²) = √89

3. (a) ⟨u, v⟩ = 3(4) + (−2)(5) = 2
⟨v, u⟩ = 4(3) + 5(−2) = 2
(b) ⟨u + v, w⟩ = ⟨(7, 3), (−1, 6)⟩ = 7(−1) + 3(6) = 11
⟨u, w⟩ + ⟨v, w⟩ = (3(−1) + (−2)(6)) + (4(−1) + 5(6)) = −15 + 26 = 11
(c) ⟨u, v + w⟩ = ⟨(3, −2), (3, 11)⟩ = 3(3) + (−2)(11) = −13
⟨u, v⟩ + ⟨u, w⟩ = (3(4) + (−2)(5)) + (3(−1) + (−2)(6)) = 2 − 15 = −13
(d) ⟨ku, v⟩ = ⟨(−12, 8), (4, 5)⟩ = −12(4) + 8(5) = −8
k⟨u, v⟩ = −4(2) = −8
⟨u, kv⟩ = ⟨(3, −2), (−16, −20)⟩ = 3(−16) + (−2)(−20) = −8
(e) ⟨0, v⟩ = ⟨(0, 0), (4, 5)⟩ = 0(4) + 0(5) = 0
⟨v, 0⟩ = ⟨(4, 5), (0, 0)⟩ = 4(0) + 5(0) = 0

5. Let A = [2 1; 1 1]. Then A^T = [2 1; 1 1] = A, so A^T A = A² = [5 3; 3 2].
(a) ⟨u, v⟩ = v^T A^T Au = [−1 1][5 3; 3 2][2; 1] = [−2 −1][2; 1] = −4 − 1 = −5
(b) ⟨v, w⟩ = w^T A^T Av = [0 −1][5 3; 3 2][−1; 1] = [−3 −2][−1; 1] = 3 − 2 = 1
(c) ⟨u + v, w⟩ = w^T A^T A(u + v) = [0 −1][5 3; 3 2][1; 2] = [−3 −2][1; 2] = −3 − 4 = −7
(d) ⟨v, v⟩ = v^T A^T Av = [−1 1][5 3; 3 2][−1; 1] = [−2 −1][−1; 1] = 2 − 1 = 1
||v|| = √⟨v, v⟩ = √1 = 1
(e) v − w = [−1; 1] − [0; −1] = [−1; 2]
⟨v − w, v − w⟩ = (v − w)^T A^T A(v − w) = [−1 2][5 3; 3 2][−1; 2] = [1 1][−1; 2] = −1 + 2 = 1
d(v, w) = ||v − w|| = √⟨v − w, v − w⟩ = √1 = 1
(f) ||v − w||² = ⟨v − w, v − w⟩ = (d(v, w))² = 1² = 1

7. (a) u^T v = [3 4; −2 8][−1 3; 1 1] = [1 13; 10 2]
⟨u, v⟩ = tr(u^T v) = 1 + 2 = 3
(b) u^T v = [1 −3; 2 5][4 6; 0 8] = [4 −18; 8 52]
⟨u, v⟩ = tr(u^T v) = 4 + 52 = 56

9. (a) A^T = [3 0; 0 2] = A, so A^T A = A² = [9 0; 0 4]. For u = [u1; u2] and v = [v1; v2],
⟨u, v⟩ = v^T A^T Au = [v1 v2][9 0; 0 4][u1; u2] = [9v1 4v2][u1; u2] = 9u1v1 + 4u2v2.
(b) ⟨u, v⟩ = 9(−3)(1) + 4(2)(7) = −27 + 56 = 29

11. (a) The matrix is A = [√3 0; 0 √5], since A^T A = A² = [3 0; 0 5].
(b) The matrix is A = [2 0; 0 √6], since A^T A = A² = [4 0; 0 6].

13. (a) ||A|| = √⟨A, A⟩ = √((−2)² + 5² + 3² + 6²) = √74
(b) ||A|| = 0 by inspection.

15. (a) A − B = [6 −1; 8 −2]
d(A, B) = ||A − B|| = √⟨A − B, A − B⟩ = √(6² + (−1)² + 8² + (−2)²) = √105
(b) A − B = [3 3; −5 −2]
d(A, B) = ||A − B|| = √⟨A − B, A − B⟩ = √(3² + 3² + (−5)² + (−2)²) = √47

17. ⟨p, q⟩ = p(−1)q(−1) + p(0)q(0) + p(1)q(1) + p(2)q(2) = (−2)(2) + (0)(1) + (2)(2) + (10)(5) = −4 + 0 + 4 + 50 = 50
||p|| = √⟨p, p⟩ = √([p(−1)]² + [p(0)]² + [p(1)]² + [p(2)]²) = √((−2)² + 0² + 2² + 10²) = √108 = 6√3

19. u − v = (−3, −3)
(a) d(u, v) = ||u − v|| = √⟨u − v, u − v⟩ = √((−3)² + (−3)²) = √18 = 3√2
(b) d(u, v) = ||u − v|| = √⟨u − v, u − v⟩ = √(3(−3)² + 2(−3)²) = √45 = 3√5
(c) A^T A = [1 −1; 2 3][1 2; −1 3] = [2 −1; −1 13]
⟨u − v, u − v⟩ = (u − v)^T A^T A(u − v) = [−3 −3][2 −1; −1 13][−3; −3] = [−3 −36][−3; −3] = 9 + 108 = 117
d(u, v) = ||u − v|| = √⟨u − v, u − v⟩ = √117 = 3√13

21. (a) For u = (x, y), ||u|| = ⟨u, u⟩^(1/2) = √((1/4)x² + (1/16)y²), and the unit circle is (1/4)x² + (1/16)y² = 1, or x²/4 + y²/16 = 1, which is the ellipse shown. [Figure: ellipse with x-intercepts ±2 and y-intercepts ±4.]
(b) For u = (x, y), ||u|| = ⟨u, u⟩^(1/2) = √(2x² + y²), and the unit circle is 2x² + y² = 1, or x²/(1/2) + y²/1 = 1, which is the ellipse shown. [Figure: ellipse with x-intercepts ±1/√2 and y-intercepts ±1.]

25. ||u + v||² + ||u − v||² = ⟨u + v, u + v⟩ + ⟨u − v, u − v⟩
= ⟨u, u + v⟩ + ⟨v, u + v⟩ + ⟨u, u − v⟩ − ⟨v, u − v⟩
= ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩ + ⟨u, u⟩ − ⟨u, v⟩ − ⟨v, u⟩ + ⟨v, v⟩
= 2⟨u, u⟩ + 2⟨v, v⟩
= 2||u||² + 2||v||²

27. Let V = [0 1; −1 0]; then ⟨V, V⟩ = 0(0) + (1)(−1) + (−1)(1) + (0)(0) = −2 < 0, so Axiom 4 fails.

29. (a) ⟨p, q⟩ = ∫ from −1 to 1 of (1 − x + x² + 5x³)(x − 3x²) dx
= ∫ from −1 to 1 of (x − 4x² + 4x³ + 2x⁴ − 15x⁵) dx
= [x²/2 − (4/3)x³ + x⁴ + (2/5)x⁵ − (5/2)x⁶] from −1 to 1
= −29/15 − (−1/15)
= −28/15
(b) ⟨p, q⟩ = ∫ from −1 to 1 of (x − 5x³)(2 + 8x²) dx
= ∫ from −1 to 1 of (2x − 2x³ − 40x⁵) dx
= [x² − x⁴/2 − (20/3)x⁶] from −1 to 1
= −37/6 − (−37/6)
= 0

True/False 6.1

(a) True; with w1 = w2 = 1, the weighted Euclidean inner product is the dot product on R².

(b) False; for example, using the dot product on R² with u = (0, −1) and v = (0, 5), ⟨u, v⟩ = 0(0) − 1(5) = −5.

(c) True; first apply Axiom 1, then Axiom 2.

(d) True; applying Axiom 3, Axiom 1, then Axiom 3 and Axiom 1 again gives ⟨ku, kv⟩ = k⟨u, kv⟩ = k⟨kv, u⟩ = k(k)⟨v, u⟩ = k²⟨u, v⟩.

(e) False; for example, using the dot product on R² with u = (1, 0) and v = (0, 1), ⟨u, v⟩ = 1(0) + 0(1) = 0, i.e., u and v are orthogonal.

(f) True; if 0 = ||v|| = √⟨v, v⟩, then v = 0 by Axiom 4.

(g) False; the matrix A must be invertible, otherwise there is a nontrivial solution x0 to the system Ax = 0, and ⟨x0, x0⟩ = Ax0 · Ax0 = 0 · 0 = 0 but x0 ≠ 0.

Section 6.2

Exercise Set 6.2

1. (a) ||u|| = √(1² + (−3)²) = √10
||v|| = √(2² + 4²) = √20
⟨u, v⟩ = 1(2) + (−3)(4) = −10
cos θ = ⟨u, v⟩/(||u|| ||v||) = −10/(√10 √20) = −10/√200 = −1/√2
(b) ||u|| = √((−1)² + 0²) = 1
||v|| = √(3² + 8²) = √73
⟨u, v⟩ = (−1)(3) + 0(8) = −3
cos θ = ⟨u, v⟩/(||u|| ||v||) = −3/√73
(c) ||u|| = √(1² + 0² + 1² + 0²) = √2
||v|| = √((−3)² + (−3)² + (−3)² + (−3)²) = √36 = 6
⟨u, v⟩ = 1(−3) + 0(−3) + 1(−3) + 0(−3) = −6
cos θ = ⟨u, v⟩/(||u|| ||v||) = −6/(6√2) = −1/√2
(d) ||u|| = √(2² + 1² + 7² + (−1)²) = √55
||v|| = √(4² + 0² + 0² + 0²) = 4
⟨u, v⟩ = 2(4) + 1(0) + 7(0) + (−1)(0) = 8
cos θ = ⟨u, v⟩/(||u|| ||v||) = 8/(4√55) = 2/√55
(e) ||u|| = √((−1)² + 5² + 2²) = √30
||v|| = √(2² + 4² + (−9)²) = √101
⟨u, v⟩ = (−1)(2) + 5(4) + 2(−9) = 0
cos θ = ⟨u, v⟩/(||u|| ||v||) = 0/(√30 √101) = 0
(f) ||u|| = √(4² + 1² + 8²) = √81 = 9
||v|| = √(1² + 0² + (−3)²) = √10
⟨u, v⟩ = 4(1) + 1(0) + 8(−3) = −20
cos θ = ⟨u, v⟩/(||u|| ||v||) = −20/(9√10)

3. (a) ||A|| = √⟨A, A⟩ = √(2² + 6² + 1² + (−3)²) = √50 = 5√2
||B|| = √⟨B, B⟩ = √(3² + 2² + 1² + 0²) = √14
⟨A, B⟩ = 2(3) + 6(2) + 1(1) + (−3)(0) = 19
cos θ = ⟨A, B⟩/(||A|| ||B||) = 19/(5√2 √14) = 19/(10√7)
(b) ||A|| = √⟨A, A⟩ = √(2² + 4² + (−1)² + 3²) = √30
||B|| = √⟨B, B⟩ = √((−3)² + 1² + 4² + 2²) = √30

⟨A, B⟩ = 2(−3) + 4(1) + (−1)(4) + 3(2) = 0
cos θ = ⟨A, B⟩/(||A|| ||B||) = 0/30 = 0

5. ⟨p, q⟩ = 1(0) + (−1)(2) + 2(1) = 0

7. 0 = ⟨u, w⟩ = 2(1) + k(2) + 6(3) = 20 + 2k
Thus k = −10 and u = (2, −10, 6).
0 = ⟨v, w⟩ = l(1) + 5(2) + 3(3) = l + 19
Thus, l = −19 and v = (−19, 5, 3).
⟨u, v⟩ = 2(−19) − 10(5) + 6(3) = −70 ≠ 0
Thus, there are no scalars such that the vectors are mutually orthogonal.

9. (a) ⟨u, v⟩ = 2(1) + 1(7) + 3k = 9 + 3k
u and v are orthogonal for k = −3.
(b) ⟨u, v⟩ = k(k) + k(5) + 1(6) = k² + 5k + 6
k² + 5k + 6 = (k + 2)(k + 3)
u and v are orthogonal for k = −2, −3.

11. (a) |⟨u, v⟩| = |3(4) + 2(−1)| = |10| = 10
||u|| ||v|| = √(3² + 2²) √(4² + (−1)²) = √13 √17 = √221 ≈ 14.9
(b) |⟨u, v⟩| = |−3(2) + 1(−1) + 0(3)| = |−7| = 7
||u|| ||v|| = √((−3)² + 1² + 0²) √(2² + (−1)² + 3²) = √10 √14 = √140 ≈ 11.8
(c) |⟨u, v⟩| = |−4(8) + 2(−4) + 1(−2)| = |−42| = 42
||u|| ||v|| = √((−4)² + 2² + 1²) √(8² + (−4)² + (−2)²) = √21 √84 = √1764 = 42
(d) |⟨u, v⟩| = |0(−1) + (−2)(−1) + 2(1) + 1(1)| = |5| = 5
||u|| ||v|| = √(0² + (−2)² + 2² + 1²) √((−1)² + (−1)² + 1² + 1²) = √9 √4 = 6

13. ⟨u, w1⟩ = 0 by inspection.
⟨u, w2⟩ = −1(1) + 1(−1) + 0(3) + 2(0) = −2
Since u is not orthogonal to w2, it is not orthogonal to the subspace.

15. (a) W⊥ will have dimension 1. A normal to the plane is u = (1, −2, −3), so W⊥ will consist of all scalar multiples of u, or tu = (t, −2t, −3t), so parametric equations for W⊥ are x = t, y = −2t, z = −3t.
(b) The line contains the vector u = (2, −5, 4), thus W⊥ is the plane through the origin with normal u, or 2x − 5y + 4z = 0.
(c) The intersection of the planes is the line x + z = 0 in the xz-plane (y = 0), which can be parametrized as (x, y, z) = (t, 0, −t) or t(1, 0, −1). W⊥ is the plane with normal (1, 0, −1), or x − z = 0.

17. ||u − v||² = ⟨u − v, u − v⟩
= ⟨u, u − v⟩ − ⟨v, u − v⟩
= ⟨u, u⟩ − ⟨u, v⟩ − ⟨v, u⟩ + ⟨v, v⟩
= ||u||² − 0 − 0 + ||v||²
= 2
Thus ||u − v|| = √2.

19. span{u1, u2, …, ur} consists of all vectors k1u1 + k2u2 + ⋯ + krur where k1, k2, …, kr are arbitrary scalars. Let v ∈ span{u1, u2, …, ur}. Then
⟨w, v⟩ = ⟨w, k1u1 + k2u2 + ⋯ + krur⟩
= k1⟨w, u1⟩ + k2⟨w, u2⟩ + ⋯ + kr⟨w, ur⟩
= 0 + 0 + ⋯ + 0
= 0
Thus if w is orthogonal to each vector u1, u2, …, ur, then w must be orthogonal to every vector in span{u1, u2, …, ur}.

21. Suppose that v is orthogonal to every basis vector. Then, as in Exercise 19, v is orthogonal to the span of the set of basis vectors, which is all of W; hence v is in W⊥. If v is not orthogonal to every basis vector, then v clearly cannot be in W⊥. Thus W⊥ consists of all vectors orthogonal to every basis vector.

27. Using the figure in the text, AB→ = v + u and BC→ = v − u. Use the Euclidean inner product on R² and note that ||u|| = ||v||.
⟨AB→, BC→⟩ = ⟨v + u, v − u⟩
= ⟨v, v⟩ − ⟨v, u⟩ + ⟨u, v⟩ − ⟨u, u⟩
= ||v||² − ||u||²
= 0
Thus AB→ and BC→ are orthogonal, so the angle at B is a right angle.

31. (a) W⊥ is the line through the origin which is perpendicular to y = x. Thus W⊥ is the line y = −x.
(b) W⊥ is the plane through the origin which is perpendicular to the y-axis. Thus W⊥ is the xz-plane.
(c) W⊥ is the line through the origin which is perpendicular to the yz-plane. Thus W⊥ is the x-axis.

True/False 6.2

(a) False; if u is orthogonal to every vector of a subspace W, then u is in W⊥.

(b) True; W ∩ W⊥ = {0}.

(c) True; for any vector w in W, ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩ = 0, so u + v is in W⊥.

(d) True; for any vector w in W, ⟨ku, w⟩ = k⟨u, w⟩ = k(0) = 0, so ku is in W⊥.

(e) False; if u and v are orthogonal, |⟨u, v⟩| = |0| = 0.

(f) False; if u and v are orthogonal, ||u + v||² = ||u||² + ||v||², thus ||u + v|| = √(||u||² + ||v||²) ≠ ||u|| + ||v||.

Section 6.3

Exercise Set 6.3

1. (a) ⟨(0, 1), (2, 0)⟩ = 0 + 0 = 0
The set is orthogonal.
(b) ⟨(−1/√2, 1/√2), (1/√2, 1/√2)⟩ = −1/2 + 1/2 = 0
The set is orthogonal.
(c) ⟨(−1/√2, 1/√2), (1/√2, −1/√2)⟩ = −1/2 − 1/2 = −1 ≠ 0
The set is not orthogonal.
(d) ⟨(0, 0), (0, 1)⟩ = 0 + 0 = 0
The set is orthogonal.

3. (a) ⟨(1/√2, 0, 1/√2), (1/√3, 1/√3, −1/√3)⟩ = 1/√6 + 0 − 1/√6 = 0
⟨(1/√2, 0, 1/√2), (−1/√2, 0, 1/√2)⟩ = −1/2 + 0 + 1/2 = 0
⟨(1/√3, 1/√3, −1/√3), (−1/√2, 0, 1/√2)⟩ = −1/√6 + 0 − 1/√6 = −2/√6 ≠ 0
The set is not orthogonal.
(b) ⟨(2/3, −2/3, 1/3), (2/3, 1/3, −2/3)⟩ = 4/9 − 2/9 − 2/9 = 0
⟨(2/3, −2/3, 1/3), (1/3, 2/3, 2/3)⟩ = 2/9 − 4/9 + 2/9 = 0
⟨(2/3, 1/3, −2/3), (1/3, 2/3, 2/3)⟩ = 2/9 + 2/9 − 4/9 = 0
The set is orthogonal.
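Checks like those in Exercises 1 and 3 (and the orthonormal sets that follow) can be automated. A minimal sketch in plain Python, with a hypothetical helper `is_orthonormal`, tested on the basis from Exercise 9 of this section:

```python
from itertools import combinations

def is_orthonormal(vectors, tol=1e-12):
    # every vector has norm 1 and distinct vectors are orthogonal
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    if any(abs(dot(v, v) - 1) > tol for v in vectors):
        return False
    return all(abs(dot(u, v)) <= tol for u, v in combinations(vectors, 2))

# Exercise 9: v1, v2, v3 form an orthonormal basis of R^3.
S = [(-3/5, 4/5, 0), (4/5, 3/5, 0), (0, 0, 1)]
print(is_orthonormal(S))  # True

# A set that is orthogonal but not normalized, or not orthogonal, fails.
assert not is_orthonormal([(1, 0), (1, 1)])
```

Dropping the norm test turns this into a plain orthogonality check for Exercises 1 and 3.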

(c) ⟨(1, 0, 0), (0, 1/√2, 1/√2)⟩ = 0 + 0 + 0 = 0
⟨(1, 0, 0), (0, 0, 1)⟩ = 0 + 0 + 0 = 0
⟨(0, 1/√2, 1/√2), (0, 0, 1)⟩ = 0 + 0 + 1/√2 = 1/√2 ≠ 0
The set is not orthogonal.
(d) ⟨(1/√6, −1/√6, 2/√6), (1/√2, 1/√2, 0)⟩ = 1/√12 − 1/√12 + 0 = 0
The set is orthogonal.

5. (a) ||p1(x)|| = √((2/3)² + (−2/3)² + (1/3)²) = √(4/9 + 4/9 + 1/9) = √1 = 1
||p2(x)|| = √((2/3)² + (1/3)² + (−2/3)²) = √(4/9 + 1/9 + 4/9) = √1 = 1
||p3(x)|| = √((1/3)² + (2/3)² + (2/3)²) = √(1/9 + 4/9 + 4/9) = √1 = 1
⟨p1(x), p2(x)⟩ = (2/3)(2/3) − (2/3)(1/3) + (1/3)(−2/3) = 4/9 − 2/9 − 2/9 = 0
⟨p1(x), p3(x)⟩ = (2/3)(1/3) − (2/3)(2/3) + (1/3)(2/3) = 2/9 − 4/9 + 2/9 = 0
⟨p2(x), p3(x)⟩ = (2/3)(1/3) + (1/3)(2/3) − (2/3)(2/3) = 2/9 + 2/9 − 4/9 = 0
The set is an orthonormal set.
(b) ||p1(x)|| = √(1²) = 1
||p2(x)|| = √((1/√2)² + (1/√2)²) = √(1/2 + 1/2) = √1 = 1
||p3(x)|| = √(1²) = 1
⟨p1(x), p2(x)⟩ = 1(0) + 0(1) + 0(1) = 0
⟨p1(x), p3(x)⟩ = 1(0) + 0(0) + 0(1) = 0
⟨p2(x), p3(x)⟩ = 0(0) + (1/√2)(0) + (1/√2)(1) = 1/√2 ≠ 0
The set is not an orthonormal set.

7. (a) ⟨(−1, 2), (6, 3)⟩ = −6 + 6 = 0
||(−1, 2)|| = √((−1)² + 2²) = √5
(−1, 2)/||(−1, 2)|| = (−1, 2)/√5 = (−1/√5, 2/√5)
||(6, 3)|| = √(6² + 3²) = √45 = 3√5
(6, 3)/||(6, 3)|| = (6/(3√5), 3/(3√5)) = (2/√5, 1/√5)
The vectors (−1/√5, 2/√5) and (2/√5, 1/√5) are an orthonormal set.
(b) ⟨(1, 0, −1), (2, 0, 2)⟩ = 2 + 0 − 2 = 0
⟨(1, 0, −1), (0, 5, 0)⟩ = 0 + 0 + 0 = 0
⟨(2, 0, 2), (0, 5, 0)⟩ = 0 + 0 + 0 = 0
||(1, 0, −1)|| = √(1² + 0² + (−1)²) = √2

(1, 0, −1)/||(1, 0, −1)|| = (1, 0, −1)/√2 = (1/√2, 0, −1/√2)
||(2, 0, 2)|| = √(2² + 0² + 2²) = √8 = 2√2
(2, 0, 2)/||(2, 0, 2)|| = (2/(2√2), 0, 2/(2√2)) = (1/√2, 0, 1/√2)
||(0, 5, 0)|| = √(0² + 5² + 0²) = 5
(0, 5, 0)/||(0, 5, 0)|| = (0, 5, 0)/5 = (0, 1, 0)
The vectors (1/√2, 0, −1/√2), (1/√2, 0, 1/√2), and (0, 1, 0) are an orthonormal set.
(c) ⟨(1/5, 1/5, 1/5), (−1/2, 1/2, 0)⟩ = −1/10 + 1/10 + 0 = 0
⟨(1/5, 1/5, 1/5), (1/3, 1/3, −2/3)⟩ = 1/15 + 1/15 − 2/15 = 0
⟨(−1/2, 1/2, 0), (1/3, 1/3, −2/3)⟩ = −1/6 + 1/6 + 0 = 0
||(1/5, 1/5, 1/5)|| = √((1/5)² + (1/5)² + (1/5)²) = √(3/25) = √3/5
(1/5, 1/5, 1/5)/||(1/5, 1/5, 1/5)|| = (5/√3)(1/5, 1/5, 1/5) = (1/√3, 1/√3, 1/√3)
||(−1/2, 1/2, 0)|| = √((−1/2)² + (1/2)² + 0²) = √(1/2) = 1/√2
(−1/2, 1/2, 0)/||(−1/2, 1/2, 0)|| = √2(−1/2, 1/2, 0) = (−1/√2, 1/√2, 0)
||(1/3, 1/3, −2/3)|| = √((1/3)² + (1/3)² + (−2/3)²) = √(6/9) = √6/3
(1/3, 1/3, −2/3)/||(1/3, 1/3, −2/3)|| = (3/√6)(1/3, 1/3, −2/3) = (1/√6, 1/√6, −2/√6)
The vectors (1/√3, 1/√3, 1/√3), (−1/√2, 1/√2, 0), and (1/√6, 1/√6, −2/√6) are an orthonormal set.

9. ||v1|| = √((−3/5)² + (4/5)² + 0²) = √(9/25 + 16/25) = √1 = 1
||v2|| = √((4/5)² + (3/5)² + 0²) = √(16/25 + 9/25) = √1 = 1
||v3|| = √(0² + 0² + 1²) = 1
⟨v1, v2⟩ = −12/25 + 12/25 + 0 = 0

⟨v1, v3⟩ = 0 + 0 + 0 = 0
⟨v2, v3⟩ = 0 + 0 + 0 = 0
(a) u = (1, −1, 2)
⟨u, v1⟩ = −3/5 − 4/5 + 0 = −7/5
⟨u, v2⟩ = 4/5 − 3/5 + 0 = 1/5
⟨u, v3⟩ = 0 + 0 + 2 = 2
Thus, (1, −1, 2) = −(7/5)v1 + (1/5)v2 + 2v3.
(b) u = (3, −7, 4)
⟨u, v1⟩ = −9/5 − 28/5 + 0 = −37/5
⟨u, v2⟩ = 12/5 − 21/5 + 0 = −9/5
⟨u, v3⟩ = 0 + 0 + 4 = 4
Thus, (3, −7, 4) = −(37/5)v1 − (9/5)v2 + 4v3.
(c) u = (1/7, −3/7, 5/7)
⟨u, v1⟩ = −3/35 − 12/35 + 0 = −15/35 = −3/7
⟨u, v2⟩ = 4/35 − 9/35 + 0 = −5/35 = −1/7
⟨u, v3⟩ = 0 + 0 + 5/7 = 5/7
Thus, (1/7, −3/7, 5/7) = −(3/7)v1 − (1/7)v2 + (5/7)v3.

11. ||v1||² = 1² + (−2)² + 3² + (−4)² = 30
||v2||² = 2² + 1² + (−4)² + (−3)² = 30
||v3||² = (−3)² + 4² + 1² + (−2)² = 30
||v4||² = 4² + 3² + 2² + 1² = 30
⟨v1, v2⟩ = 2 − 2 − 12 + 12 = 0
⟨v1, v3⟩ = −3 − 8 + 3 + 8 = 0
⟨v1, v4⟩ = 4 − 6 + 6 − 4 = 0
⟨v2, v3⟩ = −6 + 4 − 4 + 6 = 0
⟨v2, v4⟩ = 8 + 3 − 8 − 3 = 0
⟨v3, v4⟩ = −12 + 12 + 2 − 2 = 0
Thus, since S = {v1, v2, v3, v4} is a set of nonzero orthogonal vectors in R⁴, S is linearly independent. Since S contains 4 linearly independent orthogonal vectors in R⁴, it must be an orthogonal basis for R⁴.
(a) ⟨u, v1⟩ = −1 − 4 + 9 − 28 = −24
⟨u, v2⟩ = −2 + 2 − 12 − 21 = −33
⟨u, v3⟩ = 3 + 8 + 3 − 14 = 0
⟨u, v4⟩ = −4 + 6 + 6 + 7 = 15
u = (−24/30)v1 + (−33/30)v2 + (0/30)v3 + (15/30)v4 = −(4/5)v1 − (11/10)v2 + 0v3 + (1/2)v4

13. (a) ⟨w, u1⟩ = 4/3 + 0 + 10/3 = 14/3
⟨w, u2⟩ = 2/3 + 0 − 10/3 = −8/3
⟨w, u3⟩ = 4/3 + 0 − 5/3 = −1/3
Thus, w = (14/3)u1 − (8/3)u2 − (1/3)u3.
(b) ⟨w, u1⟩ = −1/√2 + 1/√2 + 0 = 0, and computing ⟨w, u2⟩ and ⟨w, u3⟩ similarly gives
w = 0u1 + (11/√6)u2 + (11/√66)u3.
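The coordinate formula used in Exercises 9–13, c_i = ⟨u, v_i⟩/||v_i||², can be sketched in plain Python (tested against Exercise 11(a)):

```python
def coords_in_orthogonal_basis(u, basis):
    # c_i = <u, v_i> / ||v_i||^2 for an orthogonal (not necessarily unit) basis
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return [dot(u, v) / dot(v, v) for v in basis]

# Exercise 11(a): orthogonal basis of R^4 with ||v_i||^2 = 30
basis = [(1, -2, 3, -4), (2, 1, -4, -3), (-3, 4, 1, -2), (4, 3, 2, 1)]
u = (-1, 2, 3, 7)

c = coords_in_orthogonal_basis(u, basis)
print(c)  # [-4/5, -11/10, 0, 1/2]

# Reconstructing u from the coordinates recovers the original vector.
rebuilt = [sum(ci * vi[k] for ci, vi in zip(c, basis)) for k in range(4)]
assert all(abs(a - b) < 1e-12 for a, b in zip(rebuilt, u))
```

For an orthonormal basis the denominators are 1 and the formula reduces to c_i = ⟨u, v_i⟩, as in Exercise 9.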

15. (a) ||v1||² = 1² + 1² + 1² + 1² = 4
||v2||² = 1² + 1² + (−1)² + (−1)² = 4
||v3||² = 1² + (−1)² + 1² + (−1)² = 4
⟨x, v1⟩ = 1 + 2 + 0 − 2 = 1
⟨x, v2⟩ = 1 + 2 + 0 + 2 = 5
⟨x, v3⟩ = 1 − 2 + 0 + 2 = 1
projW x = (1/4)v1 + (5/4)v2 + (1/4)v3
= (1/4, 1/4, 1/4, 1/4) + (5/4, 5/4, −5/4, −5/4) + (1/4, −1/4, 1/4, −1/4)
= (7/4, 5/4, −3/4, −5/4)
(b) ||v1||² = 0² + 1² + (−4)² + (−1)² = 18
||v2||² = 3² + 5² + 1² + 1² = 36
||v3||² = 1² + 0² + 1² + (−4)² = 18
⟨x, v1⟩ = 0 + 2 + 0 + 2 = 4
⟨x, v2⟩ = 3 + 10 + 0 − 2 = 11
⟨x, v3⟩ = 1 + 0 + 0 + 8 = 9
projW x = (4/18)v1 + (11/36)v2 + (9/18)v3 = (2/9)v1 + (11/36)v2 + (1/2)v3
= (0, 2/9, −8/9, −2/9) + (33/36, 55/36, 11/36, 11/36) + (1/2, 0, 1/2, −2)
= (17/12, 7/4, −1/12, −23/12)

17. (a) ⟨x, v1⟩ = 0 + 1/√18 + 0 + 2/√18 = 3/√18
⟨x, v2⟩ = 1/2 + 10/6 + 0 − 1/6 = 2
⟨x, v3⟩ = 1/√18 + 0 + 0 + 4/√18 = 5/√18
projW x = (3/√18)v1 + 2v2 + (5/√18)v3
= (0, 3/18, −12/18, −3/18) + (1, 5/3, 1/3, 1/3) + (5/18, 0, 5/18, −20/18)
= (23/18, 11/6, −1/18, −17/18)
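The projection formula projW x = Σ ⟨x, v_i⟩/||v_i||² v_i from Exercises 15 and 17 can be sketched directly (plain Python; checked against the answer to Exercise 15(a)):

```python
def proj_onto_subspace(x, orthogonal_basis):
    # proj_W x = sum_i <x, v_i>/||v_i||^2 * v_i
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    n = len(x)
    out = [0.0] * n
    for v in orthogonal_basis:
        c = dot(x, v) / dot(v, v)
        for k in range(n):
            out[k] += c * v[k]
    return out

# Exercise 15(a): x = (1, 2, 0, -2) projected onto W
basis = [(1, 1, 1, 1), (1, 1, -1, -1), (1, -1, 1, -1)]
p = proj_onto_subspace((1, 2, 0, -2), basis)
print(p)  # (7/4, 5/4, -3/4, -5/4)
assert all(abs(a - b) < 1e-12 for a, b in zip(p, (7/4, 5/4, -3/4, -5/4)))
```

The component of x orthogonal to W, used in Exercise 19, is then just x − projW x.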

(b) ⟨x, v1⟩ = 1/2 + 1 + 0 − 1/2 = 1
⟨x, v2⟩ = 1/2 + 1 + 0 + 1/2 = 2
⟨x, v3⟩ = 1/2 − 1 + 0 + 1/2 = 0
projW x = 1v1 + 2v2 + 0v3
= (1/2, 1/2, 1/2, 1/2) + (1, 1, −1, −1)
= (3/2, 3/2, −1/2, −1/2)

19. (a) From Exercise 14(a), w1 = projW x = (3/2, 3/2, −1, −1), so w2 = x − projW x = (−1/2, 1/2, 1, −1).
(b) From Exercise 15(a), w1 = projW x = (7/4, 5/4, −3/4, −5/4), so w2 = x − projW x = (−3/4, 3/4, 3/4, −3/4).

21. (a) First, transform the given basis into an orthogonal basis {v1, v2}.
v1 = u1 = (1, −3)
||v1|| = √(1² + (−3)²) = √10
v2 = u2 − (⟨u2, v1⟩/||v1||²)v1 = (2, 2) − ((2 − 6)/10)(1, −3) = (2, 2) − (−2/5, 6/5) = (12/5, 4/5)
||v2|| = √((12/5)² + (4/5)²) = √(160/25) = 4√10/5
The orthonormal basis is
q1 = v1/||v1|| = (1, −3)/√10 = (1/√10, −3/√10),
q2 = v2/||v2|| = (5/(4√10))(12/5, 4/5) = (3/√10, 1/√10).
[Figure: q1 and q2 plotted on the unit circle.]
(b) First, transform the given basis into an orthogonal basis {v1, v2}.
v1 = u1 = (1, 0)
||v1|| = √(1² + 0²) = 1
v2 = u2 − (⟨u2, v1⟩/||v1||²)v1 = (3, −5) − ((3 + 0)/1²)(1, 0) = (3, −5) − (3, 0) = (0, −5)
||v2|| = √(0² + (−5)²) = 5
The orthonormal basis is
q1 = v1/||v1|| = (1, 0)/1 = (1, 0),
q2 = v2/||v2|| = (0, −5)/5 = (0, −1).
[Figure: q1 and q2 plotted on the unit circle.]
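The Gram–Schmidt steps carried out by hand in these exercises can be sketched as a short routine (plain Python; tested on the R⁴ basis of Exercise 23):

```python
import math

def gram_schmidt(vectors):
    # classical Gram-Schmidt: subtract projections onto the earlier
    # orthogonal vectors, then normalize at the end
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    ortho = []
    for u in vectors:
        w = list(u)
        for v in ortho:
            c = dot(u, v) / dot(v, v)
            w = [wi - c * vi for wi, vi in zip(w, v)]
        ortho.append(w)
    return [[wi / math.sqrt(dot(w, w)) for wi in w] for w in ortho]

# Exercise 23: starting basis of R^4
q = gram_schmidt([(0, 2, 1, 0), (1, -1, 0, 0), (1, 2, 0, -1), (1, 0, 0, 1)])

# The q_i are orthonormal, and q1 = (0, 2/sqrt(5), 1/sqrt(5), 0).
dot = lambda a, b: sum(p * s for p, s in zip(a, b))
assert all(abs(dot(q[i], q[j]) - (i == j)) < 1e-12 for i in range(4) for j in range(4))
assert abs(q[0][1] - 2 / math.sqrt(5)) < 1e-12
```

For the weighted inner product of Exercise 25, only the `dot` helper needs to change.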

**23. First, transform the given basis into an orthogonal basis {v1 , v 2 , v3 , v 4 }.
**

v1 = u1 = (0, 2, 1, 0)

v1 = 02 + 22 + 12 + 02 = 5

v2 = u2 −

u 2 , v1

v1

2

v1

0−2+0+0

(0, 2, 1, 0)

5

⎛ 4 2 ⎞

= (1, − 1, 0, 0) + ⎜ 0, , , 0 ⎟

⎝ 5 5 ⎠

1 2 ⎞

⎛

= ⎜1, − , , 0 ⎟

5 5 ⎠

⎝

= (1, − 1, 0, 0) −

2

2

30

30

⎛ 1⎞ ⎛2⎞

v 2 = 12 + ⎜ − ⎟ + ⎜ ⎟ + 02 =

=

25

5

⎝ 5⎠ ⎝5⎠

v 3 = u3 −

u3 , v1

v1

v1 −

2

u3 , v 2

v2

2

v2

1− 5 + 0 + 0 ⎛

0+4+0+0

1 2 ⎞

(0, 2, 1, 0) −

⎜1, − , , 0 ⎟

6

5

5 5 ⎠

⎝

5

2

= (1, 2, 0, − 1) −

1 1 ⎞

⎛ 8 4 ⎞ ⎛1

= (1, 2, 0, − 1) − ⎜ 0, , , 0 ⎟ − ⎜ , − , , 0 ⎟

⎝ 5 5 ⎠ ⎝ 2 10 5 ⎠

⎛1 1

⎞

= ⎜ , , − 1, − 1⎟

⎝2 2

⎠

2

2

⎛1⎞ ⎛1⎞

v3 = ⎜ ⎟ + ⎜ ⎟ + (−1)2 + (−1) 2

⎝2⎠ ⎝2⎠

5

=

2

10

=

2

u ,v

u ,v

u ,v

v 4 = u 4 − 4 1 v1 − 4 2 v 2 − 4 3 v3

2

2

2

v1

v2

v3

= (1, 0, 0, 1) −

1

0+0+0+0

1+ 0 + 0 + 0 ⎛

1 2 ⎞ 2 + 0 + 0 −1⎛ 1 1

⎞

v1 −

−

−

,

,

,

1

0

⎜

⎟

⎜ , , − 1, − 1⎟

6

5

5

5 5 ⎠

⎝

⎠

2

2

⎝

5

2

1 1 ⎞ ⎛ 1

1 1 1⎞

⎛5

= (1, 0, 0, 1) − ⎜ , − , , 0 ⎟ − ⎜ − , − , , ⎟

6 3 ⎠ ⎝ 10 10 5 5 ⎠

⎝6

8 4⎞

⎛ 4 4

=⎜ , , − , ⎟

⎝ 15 15 15 5 ⎠

2

2

2

⎛ 4 ⎞ ⎛ 4 ⎞ ⎛ 8 ⎞ ⎛4⎞

v4 = ⎜ ⎟ + ⎜ ⎟ + ⎜ − ⎟ + ⎜ ⎟

⎝ 15 ⎠ ⎝ 15 ⎠ ⎝ 15 ⎠ ⎝ 5 ⎠

240

=

225

4

=

15

2

151
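The hand computations above can be checked numerically. The following is a minimal sketch of classical Gram-Schmidt with the standard Euclidean inner product, applied to Exercise 21(b); NumPy is an assumption here, the manual itself works by hand.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors (classical Gram-Schmidt)."""
    basis = []
    for u in vectors:
        v = np.asarray(u, dtype=float).copy()
        for q in basis:
            v -= np.dot(u, q) * q   # subtract the projection of u onto each earlier q
        basis.append(v / np.linalg.norm(v))
    return basis

# Exercise 21(b): u1 = (1, -3), u2 = (2, 2)
q1, q2 = gram_schmidt([(1, -3), (2, 2)])
```

The result agrees with the hand computation: q1 = (1, −3)/√10 and q2 = (3, 1)/√10.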

Chapter 6: Inner Product Spaces

The orthonormal basis is
q1 = v1/‖v1‖ = (0, 2, 1, 0)/√5 = (0, 2/√5, 1/√5, 0),
q2 = v2/‖v2‖ = (1, −1/5, 2/5, 0)/(√30/5) = (5/√30, −1/√30, 2/√30, 0),
q3 = v3/‖v3‖ = (1/2, 1/2, −1, −1)/(√10/2) = (1/√10, 1/√10, −2/√10, −2/√10),
q4 = v4/‖v4‖ = (4/15, 4/15, −8/15, 4/5)/(4√15/15) = (1/√15, 1/√15, −2/√15, 3/√15).

25. First, transform the given basis into an orthogonal basis {v1, v2, v3}. (The inner product here is the weighted inner product ⟨u, v⟩ = u1v1 + 2u2v2 + 3u3v3, as the computations below show.)
v1 = u1 = (1, 1, 1)
‖v1‖ = √⟨v1, v1⟩ = √(1² + 2(1)² + 3(1)²) = √(1 + 2 + 3) = √6
v2 = u2 − (⟨u2, v1⟩/‖v1‖²)v1
   = (1, 1, 0) − ((1(1) + 2(1)(1) + 3(0)(1))/6)(1, 1, 1)
   = (1, 1, 0) − (1/2)(1, 1, 1)
   = (1/2, 1/2, −1/2)
‖v2‖ = √⟨v2, v2⟩ = √((1/2)² + 2(1/2)² + 3(−1/2)²) = √(6(1/4)) = √6/2
v3 = u3 − (⟨u3, v1⟩/‖v1‖²)v1 − (⟨u3, v2⟩/‖v2‖²)v2
   = (1, 0, 0) − ((1 + 0 + 0)/6)(1, 1, 1) − ((1/2 + 0 + 0)/(6/4))(1/2, 1/2, −1/2)
   = (1, 0, 0) − (1/6, 1/6, 1/6) − (1/6, 1/6, −1/6)
   = (2/3, −1/3, 0)
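The orthonormal basis found in Exercise 23 can be cross-checked in NumPy (a sketch, not part of the manual): the hand-computed q's should be orthonormal, and stacking them as the columns of Q should make QᵀA upper triangular with a positive diagonal, which is exactly the R factor of a QR-decomposition.

```python
import numpy as np

# u1..u4 from Exercise 23, stacked as the columns of A
A = np.column_stack([(0, 2, 1, 0), (1, -1, 0, 0),
                     (1, 2, 0, -1), (1, 0, 0, 1)]).astype(float)

# hand-computed orthonormal basis from the solution above
Q = np.column_stack([
    np.array([0, 2, 1, 0]) / np.sqrt(5),
    np.array([5, -1, 2, 0]) / np.sqrt(30),
    np.array([1, 1, -2, -2]) / np.sqrt(10),
    np.array([1, 1, -2, 3]) / np.sqrt(15),
])

R = Q.T @ A   # should be upper triangular with positive diagonal
```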

‖v3‖ = √⟨v3, v3⟩ = √((2/3)² + 2(−1/3)² + 3(0)²) = √(4/9 + 2/9) = √6/3
The orthonormal basis is
q1 = v1/‖v1‖ = (1, 1, 1)/√6 = (1/√6, 1/√6, 1/√6),
q2 = v2/‖v2‖ = (1/2, 1/2, −1/2)/(√6/2) = (1/√6, 1/√6, −1/√6),
q3 = v3/‖v3‖ = (2/3, −1/3, 0)/(√6/3) = (2/√6, −1/√6, 0).

27. Let W = span{u1, u2}.
v1 = u1 = (1, 1, 1)
v2 = u2 − (⟨u2, v1⟩/‖v1‖²)v1
   = (2, 0, −1) − ((2 + 0 − 1)/(1² + 1² + 1²))(1, 1, 1)
   = (2, 0, −1) − (1/3, 1/3, 1/3)
   = (5/3, −1/3, −4/3)
{v1, v2} is an orthogonal basis for W.
‖v1‖ = √(1² + 1² + 1²) = √3
‖v2‖ = √((5/3)² + (−1/3)² + (−4/3)²) = √(42/9) = √42/3
w1 = projW w = (⟨w, v1⟩/‖v1‖²)v1 + (⟨w, v2⟩/‖v2‖²)v2
   = ((1 + 2 + 3)/3)(1, 1, 1) + ((5/3 − 2/3 − 12/3)/(42/9))(5/3, −1/3, −4/3)
   = 2(1, 1, 1) − (9/14)(5/3, −1/3, −4/3)
   = (2, 2, 2) − (15/14, −3/14, −6/7)
   = (13/14, 31/14, 20/7)
w2 = w − w1 = (1, 2, 3) − (13/14, 31/14, 20/7) = (1/14, −3/14, 1/7)

29. (a) A = [1 −1]
           [2  3]
Since det(A) = 3 + 2 = 5 ≠ 0, A is invertible, so it has a QR-decomposition.
Let u1 = (1, 2) and u2 = (−1, 3). Apply the Gram-Schmidt process with normalization to u1 and u2.
v1 = u1 = (1, 2), ‖v1‖ = √(1² + 2²) = √5
v2 = u2 − (⟨u2, v1⟩/‖v1‖²)v1 = (−1, 3) − ((−1 + 6)/5)(1, 2) = (−1, 3) − (1, 2) = (−2, 1)
‖v2‖ = √((−2)² + 1²) = √5
q1 = v1/‖v1‖ = (1, 2)/√5 = (1/√5, 2/√5)
q2 = v2/‖v2‖ = (−2, 1)/√5 = (−2/√5, 1/√5)
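Exercise 25's weighted inner product ⟨u, v⟩ = u1v1 + 2u2v2 + 3u3v3 fits the same Gram-Schmidt recipe once the weighted product is swapped in. A NumPy sketch (not part of the manual):

```python
import numpy as np

w = np.array([1.0, 2.0, 3.0])                # weights of the inner product
ip = lambda u, v: float(np.sum(w * u * v))   # <u, v> = u1 v1 + 2 u2 v2 + 3 u3 v3

def gram_schmidt(vectors, ip):
    """Gram-Schmidt with normalization under the inner product ip."""
    basis = []
    for u in vectors:
        u = np.asarray(u, dtype=float)
        v = u.copy()
        for q in basis:
            v -= ip(u, q) * q                # each q already has unit length under ip
        basis.append(v / np.sqrt(ip(v, v)))
    return basis

q1, q2, q3 = gram_schmidt([(1, 1, 1), (1, 1, 0), (1, 0, 0)], ip)
```

The output reproduces the hand result: q1 = (1, 1, 1)/√6, q2 = (1, 1, −1)/√6, q3 = (2, −1, 0)/√6.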

⟨u1, q1⟩ = 1/√5 + 4/√5 = √5
⟨u2, q1⟩ = −1/√5 + 6/√5 = √5
⟨u2, q2⟩ = 2/√5 + 3/√5 = √5
The QR-decomposition of A is
[1 −1]             [⟨u1, q1⟩  ⟨u2, q1⟩]   [1/√5  −2/√5][√5  √5]
[2  3] = [q1 q2]   [   0      ⟨u2, q2⟩] = [2/√5   1/√5][ 0  √5].

(b) A = [1 2]
        [0 1]
        [1 4]
By inspection, the column vectors of A are linearly independent, so A has a QR-decomposition. Let u1 = (1, 0, 1) and u2 = (2, 1, 4). Apply the Gram-Schmidt process with normalization to u1 and u2.
v1 = u1 = (1, 0, 1), ‖v1‖ = √(1² + 0² + 1²) = √2
v2 = u2 − (⟨u2, v1⟩/‖v1‖²)v1 = (2, 1, 4) − ((2 + 0 + 4)/2)(1, 0, 1) = (2, 1, 4) − (3, 0, 3) = (−1, 1, 1)
‖v2‖ = √((−1)² + 1² + 1²) = √3
q1 = v1/‖v1‖ = (1, 0, 1)/√2 = (1/√2, 0, 1/√2)
q2 = v2/‖v2‖ = (−1, 1, 1)/√3 = (−1/√3, 1/√3, 1/√3)
⟨u1, q1⟩ = 1/√2 + 0 + 1/√2 = √2
⟨u2, q1⟩ = 2/√2 + 0 + 4/√2 = 3√2
⟨u2, q2⟩ = −2/√3 + 1/√3 + 4/√3 = √3
The QR-decomposition of A is
[1 2]             [⟨u1, q1⟩  ⟨u2, q1⟩]   [1/√2  −1/√3][√2  3√2]
[0 1] = [q1 q2]   [   0      ⟨u2, q2⟩] = [ 0     1/√3][ 0   √3].
[1 4]                                    [1/√2   1/√3]

(c) A = [ 1 1]
        [−2 1]
        [ 2 1]
By inspection, the column vectors of A are linearly independent, so A has a QR-decomposition. Let u1 = (1, −2, 2) and u2 = (1, 1, 1). Apply the Gram-Schmidt process with normalization to u1 and u2.
v1 = u1 = (1, −2, 2), ‖v1‖ = √(1² + (−2)² + 2²) = 3
v2 = u2 − (⟨u2, v1⟩/‖v1‖²)v1 = (1, 1, 1) − ((1 − 2 + 2)/9)(1, −2, 2) = (1, 1, 1) − (1/9, −2/9, 2/9) = (8/9, 11/9, 7/9)
‖v2‖ = √((8/9)² + (11/9)² + (7/9)²) = √(234/81) = √234/9
q1 = v1/‖v1‖ = (1, −2, 2)/3 = (1/3, −2/3, 2/3)
q2 = v2/‖v2‖ = (8/9, 11/9, 7/9)/(√234/9) = (8/√234, 11/√234, 7/√234)
⟨u1, q1⟩ = 1/3 + 4/3 + 4/3 = 3
⟨u2, q1⟩ = 1/3 − 2/3 + 2/3 = 1/3
⟨u2, q2⟩ = 8/√234 + 11/√234 + 7/√234 = 26/√234 = √26/3

The QR-decomposition of A is
[ 1 1]             [⟨u1, q1⟩  ⟨u2, q1⟩]   [ 1/3   8/√234][3   1/3 ]
[−2 1] = [q1 q2]   [   0      ⟨u2, q2⟩] = [−2/3  11/√234][0  √26/3].
[ 2 1]                                    [ 2/3   7/√234]

(d) A = [1 0 2]
        [0 1 1]
        [1 2 0]
Since det(A) = −4 ≠ 0, A is invertible, so it has a QR-decomposition. Let u1 = (1, 0, 1), u2 = (0, 1, 2), and u3 = (2, 1, 0). Apply the Gram-Schmidt process with normalization to u1, u2, and u3.
v1 = u1 = (1, 0, 1), ‖v1‖ = √(1² + 0² + 1²) = √2
v2 = u2 − (⟨u2, v1⟩/‖v1‖²)v1 = (0, 1, 2) − ((0 + 0 + 2)/2)(1, 0, 1) = (0, 1, 2) − (1, 0, 1) = (−1, 1, 1)
‖v2‖ = √((−1)² + 1² + 1²) = √3
v3 = u3 − (⟨u3, v1⟩/‖v1‖²)v1 − (⟨u3, v2⟩/‖v2‖²)v2
   = (2, 1, 0) − ((2 + 0 + 0)/2)(1, 0, 1) − ((−2 + 1 + 0)/3)(−1, 1, 1)
   = (2, 1, 0) − (1, 0, 1) + (−1/3, 1/3, 1/3)
   = (2/3, 4/3, −2/3)
‖v3‖ = √((2/3)² + (4/3)² + (−2/3)²) = √(24/9) = 2√6/3
q1 = v1/‖v1‖ = (1, 0, 1)/√2 = (1/√2, 0, 1/√2)
q2 = v2/‖v2‖ = (−1, 1, 1)/√3 = (−1/√3, 1/√3, 1/√3)
q3 = v3/‖v3‖ = (2/3, 4/3, −2/3)/(2√6/3) = (1/√6, 2/√6, −1/√6)
⟨u1, q1⟩ = 1/√2 + 0 + 1/√2 = √2
⟨u2, q1⟩ = 0 + 0 + 2/√2 = √2

⟨u3, q1⟩ = 2/√2 + 0 + 0 = √2
⟨u2, q2⟩ = 0 + 1/√3 + 2/√3 = √3
⟨u3, q2⟩ = −2/√3 + 1/√3 + 0 = −1/√3
⟨u3, q3⟩ = 2/√6 + 2/√6 + 0 = 4/√6
The QR-decomposition of A is
[1 0 2]                [⟨u1, q1⟩  ⟨u2, q1⟩  ⟨u3, q1⟩]   [1/√2  −1/√3   1/√6][√2  √2    √2  ]
[0 1 1] = [q1 q2 q3]   [   0      ⟨u2, q2⟩  ⟨u3, q2⟩] = [ 0     1/√3   2/√6][ 0  √3  −1/√3].
[1 2 0]                [   0         0      ⟨u3, q3⟩]   [1/√2   1/√3  −1/√6][ 0   0   4/√6]

(e) A = [1 2 1]
        [1 1 1]
        [0 3 1]
Since det(A) = −1 ≠ 0, A is invertible, so it has a QR-decomposition.
Let u1 = (1, 1, 0), u2 = (2, 1, 3), and u3 = (1, 1, 1). Apply the Gram-Schmidt process with normalization to u1, u2, and u3.
v1 = u1 = (1, 1, 0), ‖v1‖ = √(1² + 1² + 0²) = √2
v2 = u2 − (⟨u2, v1⟩/‖v1‖²)v1 = (2, 1, 3) − ((2 + 1 + 0)/2)(1, 1, 0) = (2, 1, 3) − (3/2, 3/2, 0) = (1/2, −1/2, 3)
‖v2‖ = √((1/2)² + (−1/2)² + 3²) = √(19/2)
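The QR-decomposition in part (d) can be compared against a library routine. A sketch assuming NumPy: `np.linalg.qr` may return factors with flipped signs, so the diagonal of R is normalized to be positive first, matching the Gram-Schmidt convention used here; for an invertible matrix that factorization is unique, so R must match the hand-computed one exactly.

```python
import numpy as np

A = np.array([[1., 0., 2.],
              [0., 1., 1.],
              [1., 2., 0.]])

Q, R = np.linalg.qr(A)
D = np.sign(np.diag(R))          # flip columns of Q / rows of R so diag(R) > 0
Q, R = Q * D, D[:, None] * R

# R from the Gram-Schmidt computation in part (d)
R_hand = np.array([[np.sqrt(2), np.sqrt(2),  np.sqrt(2)],
                   [0.,         np.sqrt(3), -1/np.sqrt(3)],
                   [0.,         0.,          4/np.sqrt(6)]])
```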

v3 = u3 − (⟨u3, v1⟩/‖v1‖²)v1 − (⟨u3, v2⟩/‖v2‖²)v2
   = (1, 1, 1) − ((1 + 1 + 0)/2)(1, 1, 0) − ((1/2 − 1/2 + 3)/(19/2))(1/2, −1/2, 3)
   = (1, 1, 1) − (1, 1, 0) − (3/19, −3/19, 18/19)
   = (−3/19, 3/19, 1/19)
‖v3‖ = √((−3/19)² + (3/19)² + (1/19)²) = √(19/361) = 1/√19
q1 = v1/‖v1‖ = (1, 1, 0)/√2 = (1/√2, 1/√2, 0)
q2 = v2/‖v2‖ = (1/2, −1/2, 3)/√(19/2) = (1/√38, −1/√38, 6/√38)
q3 = v3/‖v3‖ = (−3/19, 3/19, 1/19)/(1/√19) = (−3/√19, 3/√19, 1/√19)
⟨u1, q1⟩ = 1/√2 + 1/√2 + 0 = √2
⟨u2, q1⟩ = 2/√2 + 1/√2 + 0 = 3/√2
⟨u3, q1⟩ = 1/√2 + 1/√2 + 0 = √2
⟨u2, q2⟩ = 2/√38 − 1/√38 + 18/√38 = 19/√38 = √38/2
⟨u3, q2⟩ = 1/√38 − 1/√38 + 6/√38 = 6/√38
⟨u3, q3⟩ = −3/√19 + 3/√19 + 1/√19 = 1/√19
The QR-decomposition of A is
[1 2 1]                [⟨u1, q1⟩  ⟨u2, q1⟩  ⟨u3, q1⟩]   [1/√2   1/√38  −3/√19][√2  3/√2    √2  ]
[1 1 1] = [q1 q2 q3]   [   0      ⟨u2, q2⟩  ⟨u3, q2⟩] = [1/√2  −1/√38   3/√19][ 0  √38/2  6/√38].
[0 3 1]                [   0         0      ⟨u3, q3⟩]   [ 0     6/√38   1/√19][ 0   0     1/√19]

(f) A = [ 1 0 1]
        [−1 1 1]
        [ 1 0 1] = [c1 c2 c3]
        [−1 1 1]
By inspection, c3 − c1 = 2c2, so the column vectors of A are not linearly independent and A does not have a QR-decomposition.

31. The diagonal entries of R are ⟨ui, qi⟩ for i = 1, 2, ..., n, where qi = vi/‖vi‖ is the normalization of a vector vi that is the result of applying the Gram-Schmidt process to {u1, u2, ..., un}. Thus, vi is ui minus a linear combination of the vectors v1, v2, ..., vi−1, so ui = vi + k1v1 + k2v2 + ⋯ + ki−1vi−1. Thus,
⟨ui, qi⟩ = ⟨ui, vi/‖vi‖⟩ = ⟨ui, vi⟩/‖vi‖ = ⟨vi, vi⟩/‖vi‖ = ‖vi‖.
Since each vector vi is nonzero, each diagonal entry of R is nonzero.

33. First transform the basis S = {p1, p2, p3} = {1, x, x²} into an orthogonal basis {v1, v2, v3}.
v1 = p1 = 1
‖v1‖² = ⟨v1, v1⟩ = ∫₀¹ 1² dx = x |₀¹ = 1, so ‖v1‖ = 1
⟨p2, v1⟩ = ∫₀¹ x · 1 dx = (1/2)x² |₀¹ = 1/2
v2 = p2 − (⟨p2, v1⟩/‖v1‖²)v1 = x − (1/2)(1) = −1/2 + x
‖v2‖² = ⟨v2, v2⟩ = ∫₀¹ (−1/2 + x)² dx = ∫₀¹ (1/4 − x + x²) dx = ((1/4)x − (1/2)x² + (1/3)x³) |₀¹ = 1/12,
so ‖v2‖ = 1/(2√3)
⟨p3, v1⟩ = ∫₀¹ x² · 1 dx = (1/3)x³ |₀¹ = 1/3
⟨p3, v2⟩ = ∫₀¹ x²(−1/2 + x) dx = ∫₀¹ (−(1/2)x² + x³) dx = (−(1/6)x³ + (1/4)x⁴) |₀¹ = 1/12
v3 = p3 − (⟨p3, v1⟩/‖v1‖²)v1 − (⟨p3, v2⟩/‖v2‖²)v2
   = x² − (1/3)(1) − ((1/12)/(1/12))(−1/2 + x)
   = x² − 1/3 + 1/2 − x
   = 1/6 − x + x²
‖v3‖² = ⟨v3, v3⟩ = ∫₀¹ (1/6 − x + x²)² dx
      = ∫₀¹ (1/36 − (1/3)x + (4/3)x² − 2x³ + x⁴) dx
      = ((1/36)x − (1/6)x² + (4/9)x³ − (1/2)x⁴ + (1/5)x⁵) |₀¹
      = 1/180,
so ‖v3‖ = 1/(6√5)
The orthonormal basis is
q1 = v1/‖v1‖ = 1/1 = 1,
q2 = v2/‖v2‖ = (−1/2 + x)/(1/(2√3)) = 2√3(−1/2 + x) = √3(−1 + 2x),
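The orthogonality claims of Exercise 33 can be verified with exact polynomial integration. A sketch assuming NumPy's polynomial class (not part of the manual): the three orthonormal polynomials 1, √3(2x − 1), and √5(6x² − 6x + 1) should satisfy ⟨qi, qj⟩ = δij under ⟨p, q⟩ = ∫₀¹ p(x)q(x) dx.

```python
import numpy as np
from numpy.polynomial import Polynomial as P

def ip(p, q):
    """<p, q> = integral of p*q over [0, 1], exact for polynomials."""
    r = (p * q).integ()
    return r(1) - r(0)

# orthonormal basis from Exercise 33 (coefficients in increasing degree)
q1 = P([1.0])                          # 1
q2 = np.sqrt(3) * P([-1.0, 2.0])       # sqrt(3)(-1 + 2x)
q3 = np.sqrt(5) * P([1.0, -6.0, 6.0])  # sqrt(5)(1 - 6x + 6x^2)
basis = [q1, q2, q3]
```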

q3 = v3/‖v3‖ = (1/6 − x + x²)/(1/(6√5)) = 6√5(1/6 − x + x²) = √5(1 − 6x + 6x²).

True/False 6.3

(a) False; for example, the vectors (1, 2) and (−1, 3) in R² are linearly independent but not orthogonal.

(b) False; the vectors must be nonzero for this to be true.

(c) True; a nontrivial subspace of R³ will have a basis, which can be transformed into an orthonormal basis with respect to the Euclidean inner product.

(d) True; a nonzero finite-dimensional inner product space will have a finite basis, which can be transformed into an orthonormal basis with respect to the inner product via the Gram-Schmidt process with normalization.

(e) False; projW x is a vector in W.

(f) True; every invertible n × n matrix has a QR-decomposition.

Section 6.4

Exercise Set 6.4

1. (a) A = [1 −1]
           [2  3]
           [4  5]
AᵀA = [1 2 4][1 −1]   [21 25]
      [−1 3 5][2 3] = [25 35]
              [4  5]
Aᵀb = [1 2 4][ 2]   [20]
      [−1 3 5][−1] = [20]
              [ 5]
The associated normal system is
[21 25][x1]   [20]
[25 35][x2] = [20].

(b) A = [ 2 −1 0]
        [ 3  1 2]
        [−1  4 5]
        [ 1  2 4]
AᵀA = [15 −1  5]
      [−1 22 30]
      [ 5 30 45]
Aᵀb = [2 3 −1 1][−1]   [−1]
      [−1 1 4 2][ 0] = [ 9]
      [0 2 5 4][ 1]    [13]
               [ 2]
The associated normal system is
[15 −1  5][x1]   [−1]
[−1 22 30][x2] = [ 9].
[ 5 30 45][x3]   [13]

3. (a) AᵀA = [1 −1 −1][ 1 1]   [ 3 −2]
             [1  1  2][−1 1] = [−2  6]
                      [−1 2]
Aᵀb = [1 −1 −1][ 7]   [14]
      [1  1  2][ 0] = [−7]
               [−7]
The associated normal system is
[ 3 −2][x1]   [14]
[−2  6][x2] = [−7].
[ 3 −2 14] reduces to [1 0 5  ], so the least squares solution is x1 = 5, x2 = 1/2.
[−2  6 −7]            [0 1 1/2]

(b) The associated normal system is
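The normal-equations route in Exercise 3(a) can be compared against a library least squares solver. A sketch assuming NumPy; both computations should produce x1 = 5, x2 = 1/2.

```python
import numpy as np

A = np.array([[1., 1.], [-1., 1.], [-1., 2.]])
b = np.array([7., 0., -7.])

# solve the normal system  A^T A x = A^T b
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# same answer via the built-in least squares routine
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]
```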

[ 7  4 −6][x1]   [18]
[ 4  3 −3][x2] = [12].
[−6 −3  6][x3]   [−9]
[ 7  4 −6 18]            [1 0 0 12]
[ 4  3 −3 12] reduces to [0 1 0 −3], so the least squares solution is x1 = 12, x2 = −3, x3 = 9.
[−6 −3  6 −9]            [0 0 1  9]

5. (a) e = b − Ax
= ( 7, 0, −7) − [ 1 1][5  ]
                [−1 1][1/2]
                [−1 2]
= (7, 0, −7) − (11/2, −9/2, −4)
= (3/2, 9/2, −3)
Aᵀe = [1 −1 −1][3/2]   [0]
      [1  1  2][9/2] = [0], so e is orthogonal to the column space of A.
               [−3 ]

(b) e = b − Ax = (6, 0, 9, 3) − [1 0 −1][12]
                                [2 1 −2][−3]
                                [1 1  0][ 9]
                                [1 1 −1]
= (6, 0, 9, 3) − (3, 3, 9, 0)
= (3, −3, 0, 3)
Aᵀe = [1 2 1 1][ 3]   [0]
      [0 1 1 1][−3] = [0], so e is orthogonal to the column space of A.
      [−1 −2 0 −1][ 0]  [0]
                  [ 3]
7. (a) AᵀA = [2 4 −2][ 2 1]   [24 8]
             [1 2  1][ 4 2] = [ 8 6]
                     [−2 1]
Aᵀb = [2 4 −2][3]   [12]
      [1 2  1][2] = [ 8]
              [1]
The associated normal system is
[24 8][x1]   [12]
[ 8 6][x2] = [ 8].
[24 8 12] reduces to [1 0 1/10], so the least squares solution is x1 = 1/10, x2 = 6/5, or x = [1/10].
[ 8 6  8]            [0 1 6/5 ]                                                               [6/5 ]
The error vector is
e = b − Ax = (3, 2, 1) − [ 2 1][1/10]
                         [ 4 2][6/5 ]
                         [−2 1]
= (3, 2, 1) − (7/5, 14/5, 1)
= (8/5, −4/5, 0)
so the least squares error is ‖e‖ = √((8/5)² + (−4/5)² + 0²) = √(80/25) = (4/5)√5.

(b) AᵀA = [1 −2 3][ 1  3]   [14  42]
          [3 −6 9][−2 −6] = [42 126]
                  [ 3  9]
Aᵀb = [1 −2 3][1]   [ 4]
      [3 −6 9][0] = [12]
              [1]
The associated normal system is
[14  42][x1]   [ 4]
[42 126][x2] = [12].
[14  42  4] reduces to
[42 126 12]

[1 3 2/7], so the least squares solutions are x1 = 2/7 − 3t, x2 = t, or x = [2/7] + t[−3] for t a real number.
[0 0 0  ]                                                                   [0  ]    [ 1]
The error vector is
e = b − Ax = (1, 0, 1) − [ 1  3]([2/7] + t[−3])
                         [−2 −6]([0  ]    [ 1])
                         [ 3  9]
= (1, 0, 1) − ((2/7, −4/7, 6/7) + t(0, 0, 0))
= (5/7, 4/7, 1/7).
The least squares error is
‖e‖ = √((5/7)² + (4/7)² + (1/7)²) = √(42/49) = (1/7)√42.

(c) AᵀA = [−1 2 0][−1 3 2]   [ 5 −1  4]
          [ 3 1 1][ 2 1 3] = [−1 11 10]
          [ 2 3 1][ 0 1 1]   [ 4 10 14]
Aᵀb = [−1 2 0][ 7]   [−7]
      [ 3 1 1][ 0] = [14]
      [ 2 3 1][−7]   [ 7]
The associated normal system is
[ 5 −1  4][x1]   [−7]
[−1 11 10][x2] = [14].
[ 4 10 14][x3]   [ 7]
[ 5 −1  4 −7]            [1 0 1 −7/6]
[−1 11 10 14] reduces to [0 1 1  7/6], so the least squares solutions are x1 = −7/6 − t, x2 = 7/6 − t,
[ 4 10 14  7]            [0 0 0  0  ]
x3 = t, or x = [−7/6] + t[−1] for t a real number.
               [ 7/6]    [−1]
               [ 0  ]    [ 1]
The error vector is
e = b − Ax = (7, 0, −7) − [−1 3 2]([−7/6] + t[−1])
                          [ 2 1 3]([ 7/6]    [−1])
                          [ 0 1 1]([ 0  ]    [ 1])
= (7, 0, −7) − ((14/3, −7/6, 7/6) + t(0, 0, 0))
= (7/3, 7/6, −49/6).
The least squares error is
‖e‖ = √((7/3)² + (7/6)² + (−49/6)²) = √(294/4) = (7/2)√6.

9. (a) Let A = [2 1 −2]
               [1 0 −1]
               [1 1  0].
               [1 1 −1]
AᵀA = [ 2  1 1  1][2 1 −2]   [ 7  4 −6]
      [ 1  0 1  1][1 0 −1] = [ 4  3 −3]
      [−2 −1 0 −1][1 1  0]   [−6 −3  6]
                  [1 1 −1]
det(AᵀA) = 3 ≠ 0, so the column vectors of A are linearly independent and the column space of A, W, is the subspace of R⁴ spanned by v1, v2, and v3.
161

−1

⎡ 3 −2 2 ⎤

= ⎢⎢ −2 2 −1⎥⎥

5

⎢⎣ 2 −1 3 ⎥⎦

Chapter 6: Inner Product Spaces

SSM: Elementary Linear Algebra

projW u = A( AT A)−1 AT u

⎡ 2 1 −2 ⎤ ⎡ 3 −2 2⎤

⎡6 ⎤

⎡ 2 1 1 1⎤ ⎢ ⎥

⎢ 1 0 −1⎥ ⎢

3

⎥

=⎢

⎥ ⎢ −2 2 −1⎥ ⎢ 1 0 1 1⎥ ⎢ ⎥

1

1

0

9

⎢

⎥

⎢

⎥ ⎢ 2 −1 5 ⎥ ⎣ −2 −1 0 −1⎦ ⎢ ⎥

3⎦

⎢⎣ 1 1 −1⎥⎦ ⎣

⎢⎣6 ⎥⎦

⎡7 ⎤

⎢ 2⎥

=⎢ ⎥

⎢ 9⎥

⎢⎣ 5⎥⎦

projW u = (7, 2, 9, 5)

⎡ 1 −2 −3⎤

⎢ 1 −1 −1⎥

(b) Let A = ⎢

⎥.

1⎥

⎢ 3 −2

1 3⎥⎦

⎢⎣ 0

⎡ 1 −2 −3⎤

⎡ 1 1 3 0⎤ ⎢

1 −1 −1⎥

AT A = ⎢ −2 −1 −2 1⎥ ⎢

⎥

3

1⎥

⎢

⎥ ⎢ −2

−

−

3

1

1

3

⎣

⎦ ⎢0

1 3⎦⎥

⎣

⎡ 11 −9 −1⎤

= ⎢ −9 10 8⎥

⎢

⎥

⎣ −1 8 20⎦

**det( AT A) = 10 ≠ 0 so the column vectors of A are linearly independent and the column space of A, W, is the
**

subspace of R 4 spanned by v1 , v 2 , and v3 .

86 − 31 ⎤

⎡ 68

5

5⎥

⎢ 5

219 − 79 ⎥

( AT A)−1 = ⎢ 86

5

10

10

⎢ 31

⎥

79

29

⎢⎣ − 5 − 10

10 ⎥⎦

projW u = A( AT A)−1 AT u

86

⎡ 1 −2 −3⎤ ⎡ 68

5

5

⎢ 1 −1 −1⎥ ⎢ 86

219

⎢

=⎢

⎥

10

1⎥ ⎢ 5

⎢ 3 −2

31

79

1 3⎥⎦ ⎢⎣ − 5 − 10

⎢⎣0

⎡ − 12 ⎤

⎢ 5⎥

⎢ − 4⎥

= ⎢ 125 ⎥

⎢ 5⎥

⎢ 16 ⎥

⎢⎣ 5 ⎥⎦

4 12 16 ⎞

⎛ 12

projW u = ⎜ − , − , , ⎟

5 5 5⎠

⎝ 5

⎤

⎡ −2 ⎤

− 31

5 ⎥⎡ 1

1 3 0⎤ ⎢ ⎥

0

79 ⎥ ⎢

− 10

−2 −1 −2 1⎥ ⎢ ⎥

2

⎢

⎥

⎥

1 3⎦ ⎢ ⎥

29 ⎣ −3 −1

⎢⎣ 4 ⎥⎦

10 ⎥⎦

162
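The projection formula projW u = A(AᵀA)⁻¹Aᵀu used in Exercise 9(a) is easy to check numerically. A NumPy sketch (not part of the manual); besides reproducing (7, 2, 9, 5), the matrix P = A(AᵀA)⁻¹Aᵀ should be idempotent, as every projection matrix is.

```python
import numpy as np

A = np.array([[2., 1., -2.],
              [1., 0., -1.],
              [1., 1.,  0.],
              [1., 1., -1.]])
u = np.array([6., 3., 9., 6.])

P = A @ np.linalg.inv(A.T @ A) @ A.T   # standard matrix of projection onto col(A)
proj = P @ u
```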

11. (a) AᵀA = [−1 2 0][−1 3 2]   [ 5 −1  4]
              [ 3 1 1][ 2 1 3] = [−1 11 10]
              [ 2 3 1][ 0 1 1]   [ 4 10 14]
det(AᵀA) = 0, so A does not have linearly independent column vectors.

(b) AᵀA = [ 2 0 −1  4][ 2 −1  3]   [ 21 −22  20]
          [−1 1  0 −5][ 0  1  1] = [−22  27 −17]
          [ 3 1 −2  3][−1  0 −2]   [ 20 −17  23]
                      [ 4 −5  3]
det(AᵀA) = 0, so A does not have linearly independent column vectors.

13. (a) W = the xz-plane is two-dimensional and one basis for W is {w1, w2} where
w1 = (1, 0, 0), w2 = (0, 0, 1).
Thus A = [1 0] and Aᵀ = [1 0 0].
         [0 0]          [0 0 1]
         [0 1]
AᵀA = [1 0 0][1 0]   [1 0]
      [0 0 1][0 0] = [0 1]
             [0 1]
[P] = A(AᵀA)⁻¹Aᵀ = [1 0][1 0][1 0 0]   [1 0][1 0 0]   [1 0 0]
                   [0 0][0 1][0 0 1] = [0 0][0 0 1] = [0 0 0]
                   [0 1]               [0 1]          [0 0 1]

(b) W = the yz-plane is two-dimensional and one basis for W is {w1, w2} where
w1 = (0, 1, 0) and w2 = (0, 0, 1).
Thus A = [0 0] and Aᵀ = [0 1 0].
         [1 0]          [0 0 1]
         [0 1]
AᵀA = [0 1 0][0 0]   [1 0]
      [0 0 1][1 0] = [0 1]
             [0 1]
[P] = A(AᵀA)⁻¹Aᵀ = [0 0][1 0][0 1 0]   [0 0][0 1 0]   [0 0 0]
                   [1 0][0 1][0 0 1] = [1 0][0 0 1] = [0 1 0]
                   [0 1]               [0 1]          [0 0 1]

15. (a) If x = s and y = t, then a point on the plane is
(s, t, −5s + 3t) = s(1, 0, −5) + t(0, 1, 3).
w1 = (1, 0, −5) and w2 = (0, 1, 3) are a basis for W.

(b) A = [ 1 0], Aᵀ = [1 0 −5],
        [ 0 1]       [0 1  3]
        [−5 3]
AᵀA = [1 0 −5][ 1 0]   [ 26 −15]
      [0 1  3][ 0 1] = [−15  10]
              [−5 3]
det(AᵀA) = 35, (AᵀA)⁻¹ = (1/35)[10 15]
                               [15 26]
[P] = A(AᵀA)⁻¹Aᵀ
= [ 1 0]((1/35)[10 15])[1 0 −5]
  [ 0 1](      [15 26])[0 1  3]
  [−5 3]
= (1/35)[ 10 15][1 0 −5]
        [ 15 26][0 1  3]
        [ −5  3]
= (1/35)[ 10 15 −5]
        [ 15 26  3]
        [ −5  3 34]

(c) [P][x0]   (1/35)[ 10 15 −5][x0]   [(10x0 + 15y0 − 5z0)/35 ]
       [y0] =       [ 15 26  3][y0] = [(15x0 + 26y0 + 3z0)/35 ]
       [z0]        [ −5  3 34][z0]    [(−5x0 + 3y0 + 34z0)/35 ]
The orthogonal projection is ((2x0 + 3y0 − z0)/7, (15x0 + 26y0 + 3z0)/35, (−5x0 + 3y0 + 34z0)/35).

(d) The orthogonal projection of P0(1, −2, 4) on W is
((2(1) + 3(−2) − 4)/7, (15(1) + 26(−2) + 3(4))/35, (−5(1) + 3(−2) + 34(4))/35) = (−8/7, −5/7, 25/7).
The distance between P0(1, −2, 4) and W is
d = √((1 − (−8/7))² + (−2 − (−5/7))² + (4 − 25/7)²) = √((15/7)² + (−9/7)² + (3/7)²) = 3√35/7.
Using Theorem 3.3.4, the distance is
|5(1) − 3(−2) + 4|/√(5² + (−3)² + 1²) = 15/√35 = 3√35/7.

17. By inspection, when t = 1, the point (t, t, t) = (1, 1, 1) is on line l. When s = 1, the point (s, 2s − 1, 1) = (1, 1, 1) is on line m. Thus, since ‖P − Q‖ ≥ 0, these are the values of s and t that minimize the distance between the lines.

19. If A has linearly independent column vectors, then AᵀA is invertible and the least squares solution of Ax = b is the solution of AᵀAx = Aᵀb. But since b is orthogonal to the column space of A, Aᵀb = 0, so x is a solution of AᵀAx = 0. Thus, x = 0 since AᵀA is invertible.

21. Aᵀ will have linearly independent column vectors, and the column space of Aᵀ is the row space of A. Thus, the standard matrix for the orthogonal projection of Rⁿ onto the row space of A is
[P] = Aᵀ[(Aᵀ)ᵀAᵀ]⁻¹(Aᵀ)ᵀ = Aᵀ(AAᵀ)⁻¹A.

True/False 6.4

(a) True; AᵀA is an n × n matrix.

(b) False; only square matrices have inverses, but AᵀA can be invertible when A is not a square matrix.

(c) True; if A is invertible, so is Aᵀ, so the product AᵀA is also invertible.

(d) True

(e) False; the system AᵀAx = Aᵀb may be consistent.

(f) True
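The projection matrix of Exercise 15 can be rebuilt and tested numerically. A sketch assuming NumPy (not part of the manual), reproducing [P], the projected point, and the distance 3√35/7 from part (d).

```python
import numpy as np

# basis of the plane z = -5x + 3y from Exercise 15(a), as the columns of A
A = np.array([[ 1., 0.],
              [ 0., 1.],
              [-5., 3.]])

P = A @ np.linalg.inv(A.T @ A) @ A.T
P_hand = np.array([[10., 15., -5.],
                   [15., 26.,  3.],
                   [-5.,  3., 34.]]) / 35.0

p0 = np.array([1., -2., 4.])
dist = np.linalg.norm(p0 - P @ p0)   # distance from P0 to the plane
```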

(g) False; the least squares solution may involve a parameter.

(h) True; if A has linearly independent column vectors, then AᵀA is invertible, so AᵀAx = Aᵀb has a unique solution.

Section 6.5

Exercise Set 6.5

1. M = [1 0], Mᵀ = [1 1 1],
       [1 1]       [0 1 2]
       [1 2]
MᵀM = [1 1 1][1 0]   [3 3]
      [0 1 2][1 1] = [3 5],
             [1 2]
(MᵀM)⁻¹ = (1/(15 − 9))[ 5 −3] = (1/6)[ 5 −3]
                      [−3  3]        [−3  3]
v* = (MᵀM)⁻¹Mᵀy = (1/6)[ 5 −3][1 1 1][0]   [−1/2]
                       [−3  3][0 1 2][2] = [ 7/2]
                                     [7]
The desired line is y = −1/2 + (7/2)x.
[Figure: the three data points and the fitted line in the xy-plane.]

3. M = [1 x1 x1²]   [1 2  4]
       [1 x2 x2²]   [1 3  9]
       [1 x3 x3²] = [1 5 25],  y = [  0]
       [1 x4 x4²]   [1 6 36]       [−10]
                                   [−48]
                                   [−76]
MᵀM = [1 1  1  1][1 2  4]   [ 4  16   74]
      [2 3  5  6][1 3  9] = [16  74  376]
      [4 9 25 36][1 5 25]   [74 376 2018]
                 [1 6 36]
(MᵀM)⁻¹ = (1/90)[ 1989 −1116 135]
                [−1116   649 −80]
                [  135   −80  10]
v* = [a0]                      [ 2]
     [a1] = (MᵀM)⁻¹Mᵀy =       [ 5]
     [a2]                      [−3]
The desired quadratic is y = 2 + 5x − 3x².

5. The two column vectors of M are linearly independent if and only if neither is a nonzero multiple of the other. Since all the entries in the first column are equal, the columns are linearly independent if and only if the second column has at least two different entries, i.e., if and only if at least two of the numbers x1, x2, ..., xn are distinct.

11. With the substitution X = 1/x, the problem becomes to find a line of the form y = a + b·X that best fits the data points (1, 7), (1/3, 3), (1/6, 1).
M = [1 1  ],  y = [7]
    [1 1/3]       [3]
    [1 1/6]       [1]
MᵀM = [1  1   1 ][1 1  ]   [3   3/2  ]
      [1 1/3 1/6][1 1/3] = [3/2 41/36],
                 [1 1/6]
(MᵀM)⁻¹ = (1/42)[ 41 −54]
                [−54 108]
v* = [a] = (MᵀM)⁻¹Mᵀy = [5/21]
     [b]                [48/7]
The line in terms of X is y = 5/21 + (48/7)X, so the required curve is y = 5/21 + 48/(7x).
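The quadratic fit of Exercise 3 can be reproduced by solving the normal equations numerically. A NumPy sketch (not part of the manual); since the four data points lie exactly on y = 2 + 5x − 3x², the least squares fit recovers those coefficients exactly.

```python
import numpy as np

x = np.array([2., 3., 5., 6.])
y = np.array([0., -10., -48., -76.])

M = np.column_stack([np.ones_like(x), x, x**2])   # design matrix [1, x, x^2]
v = np.linalg.solve(M.T @ M, M.T @ y)             # normal equations M^T M v = M^T y
```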

True/False 6.5

(a) False; there is only a unique least squares straight line fit if the data points do not lie on a vertical line.

(b) True; if the points are not collinear, there is no solution to the system.

(c) False; the sum d1² + d2² + ⋯ + dn² is minimized, not the individual terms di.

(d) True

Section 6.6

Exercise Set 6.6

1. With f(x) = 1 + x:
a0 = (1/π)∫₀^{2π} (1 + x) dx = (1/π)(x + x²/2) |₀^{2π} = 2 + 2π
ak = (1/π)∫₀^{2π} (1 + x) cos kx dx
   = (1/π)∫₀^{2π} cos kx dx + (1/π)∫₀^{2π} x cos kx dx
   = (1/(kπ)) sin kx |₀^{2π} + 0
   = 0
bk = (1/π)∫₀^{2π} (1 + x) sin kx dx
   = (1/π)∫₀^{2π} sin kx dx + (1/π)∫₀^{2π} x sin kx dx
   = 0 − 2/k
   = −2/k

(a) a0 = 2 + 2π, a1 = a2 = 0, b1 = −2, b2 = −1
Thus f(x) ≈ (1/2)(2 + 2π) − 2 sin x − sin 2x = (1 + π) − 2 sin x − sin 2x.

(b) a1 = a2 = ⋯ = an = 0
bk = −2/k, so bk sin kx = −2(sin kx / k).
f(x) ≈ (1 + π) − 2(sin x + sin 2x/2 + ⋯ + sin nx/n)

3. (a) The space W of continuous functions of the form a + beˣ over [0, 1] is spanned by v1 = 1 and v2 = eˣ.
First, transform the given basis into an orthogonal basis {g1, g2}.
g1 = v1 = 1
‖g1‖² = ∫₀¹ 1² dx = 1
⟨v2, g1⟩ = ∫₀¹ eˣ dx = eˣ |₀¹ = e − 1
g2 = v2 − (⟨v2, g1⟩/‖g1‖²)g1 = eˣ − ((e − 1)/1)(1) = 1 − e + eˣ
‖g2‖² = ∫₀¹ (1 − e + eˣ)² dx
      = ∫₀¹ (1 − 2e + e² + 2eˣ − 2e·eˣ + e²ˣ) dx
      = (x − 2ex + e²x + 2eˣ − 2eˣ⁺¹ + (1/2)e²ˣ) |₀¹
      = −3/2 + 2e − (1/2)e²
      = −(1/2)(1 − e)(3 − e)
g1 and g2 are an orthogonal basis for W. The least squares approximation of x is the orthogonal projection of x on W,
projW x = (⟨x, g1⟩/‖g1‖²)g1 + (⟨x, g2⟩/‖g2‖²)g2
⟨x, g1⟩ = ∫₀¹ x dx = (1/2)x² |₀¹ = 1/2
⟨x, g2⟩ = ∫₀¹ x(1 − e + eˣ) dx = (1 − e)/2 + 1 = (3 − e)/2
projW x = (1/2)/1 · (1) + ((3 − e)/2)/(−(1/2)(1 − e)(3 − e)) · (1 − e + eˣ)
        = 1/2 − (1/(1 − e))(1 − e + eˣ)
        = 1/2 − 1 − eˣ/(1 − e)
        = −1/2 + eˣ/(e − 1)
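The Fourier coefficients of Exercise 1 can be checked by numerical quadrature. A NumPy sketch (not part of the manual) using a midpoint rule as a stand-in for the integrals; it should reproduce a0 = 2 + 2π, a1 = 0, b1 = −2, b2 = −1 to high accuracy.

```python
import numpy as np

# midpoint-rule quadrature on [0, 2*pi]
n = 100000
dx = 2 * np.pi / n
x = (np.arange(n) + 0.5) * dx
f = 1 + x

a0 = np.sum(f) * dx / np.pi                # should be 2 + 2*pi
a1 = np.sum(f * np.cos(x)) * dx / np.pi    # should be 0
b1 = np.sum(f * np.sin(x)) * dx / np.pi    # should be -2
b2 = np.sum(f * np.sin(2*x)) * dx / np.pi  # should be -1
```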

(b) Let f1 = x and f2 = projW f1; then the mean square error is ‖f1 − f2‖². Recall that f1 − projW f1 = f1 − f2 and f2 are orthogonal. Therefore
‖f1 − f2‖² = ⟨f1 − f2, f1 − f2⟩
           = ⟨f1, f1 − f2⟩ − ⟨f2, f1 − f2⟩
           = ⟨f1, f1 − f2⟩
           = ⟨f1, f1⟩ − ⟨f1, f2⟩
           = ‖f1‖² − ⟨f1, g1⟩²/‖g1‖² − ⟨f1, g2⟩²/‖g2‖²
‖f1‖² = ∫₀¹ x² dx = (1/3)x³ |₀¹ = 1/3
‖f1 − f2‖² = 1/3 − (1/2)²/1 − ((3 − e)/2)²/(−(1/2)(1 − e)(3 − e))
           = 1/3 − 1/4 + (3 − e)/(2(1 − e))
           = 1/12 + (2 − 2e + 1 + e)/(2(1 − e))
           = 13/12 + (1 + e)/(2(1 − e))
Thus, the mean square error is 13/12 + (1 + e)/(2(1 − e)).

5. (a) The space W of continuous functions of the form a0 + a1x + a2x² over [−1, 1] is spanned by v1 = 1, v2 = x, and v3 = x².
g1 = v1 = 1
‖g1‖² = ∫₋₁¹ 1² dx = x |₋₁¹ = 2
⟨v2, g1⟩ = ∫₋₁¹ x dx = (1/2)x² |₋₁¹ = 0, so g2 = v2 = x
‖g2‖² = ∫₋₁¹ x² dx = (1/3)x³ |₋₁¹ = 2/3
⟨v3, g1⟩ = ∫₋₁¹ x² dx = 2/3
⟨v3, g2⟩ = ∫₋₁¹ x³ dx = (1/4)x⁴ |₋₁¹ = 0
g3 = v3 − (⟨v3, g1⟩/‖g1‖²)g1 − (⟨v3, g2⟩/‖g2‖²)g2 = x² − ((2/3)/2)(1) − 0 = −1/3 + x²
{g1, g2, g3} is an orthogonal basis for W. The least squares approximation of f1 = sin πx is the orthogonal projection of f1 on W,
projW f1 = (⟨f1, g1⟩/‖g1‖²)g1 + (⟨f1, g2⟩/‖g2‖²)g2 + (⟨f1, g3⟩/‖g3‖²)g3
⟨f1, g1⟩ = ∫₋₁¹ sin πx dx = −(1/π) cos πx |₋₁¹ = 0
⟨f1, g2⟩ = ∫₋₁¹ x sin πx dx = 2/π
⟨f1, g3⟩ = ∫₋₁¹ (−1/3 + x²) sin πx dx = 0
projW f1 = 0·g1 + ((2/π)/(2/3))x + 0·g3 = (3/π)x

(b) Let f2 = projW f1; then, in a clear extension of the process in Exercise 3(b),
‖f1 − f2‖² = ‖f1‖² − ⟨f1, g1⟩²/‖g1‖² − ⟨f1, g2⟩²/‖g2‖² − ⟨f1, g3⟩²/‖g3‖²
‖f1‖² = ∫₋₁¹ sin² πx dx = 1
‖f1 − f2‖² = 1 − 0 − (2/π)²/(2/3) − 0 = 1 − 6/π²

9. Let f(x) = { 1, 0 < x < π
             { 0, π ≤ x ≤ 2π.
a0 = (1/π)∫₀^{2π} f(x) dx = (1/π)∫₀^π dx = 1

0

π

0

6

π2

2

2

Chapter 6: Inner Product Spaces

2π

1

ak =

π ∫0

bk =

1

=

π

2π

∫0

f ( x) cos kx dx =

SSM: Elementary Linear Algebra

1 π

π

x = x22 + 4 x22 = 5 x22 = x2

∫0 cos kx dx = 0

5

, so

1

2

⎛

⎞

,

, 0⎟.

x = ± ⎜ 0,

5

5 ⎠

⎝

1 π

sin kx dx

π ∫0

1

(1 − (−1)k )

kπ

So the Fourier series is

1 ∞ 1

(1 − (−1)k ) sin kx.

+∑

2 k =1 kπ

=

⎡ u u2 ⎤

⎡ v1

3. Recall that if U = ⎢ 1

⎥ and V = ⎢v

u

u

4⎦

⎣ 3

⎣ 3

then U , V = u1v1 + u2 v2 + u3v3 + u4 v4 .

v2 ⎤

,

v4 ⎥⎦

(a) If U is a diagonal matrix, then u2 = u3 = 0

True/False 6.6

and U , V = u1v1 + u4 v4 .

For V to be in the orthogonal complement of

the subspace of all diagonal matrices, then it

must be the case that v1 = v4 = 0 and V

must have zeros on the main diagonal.

**(a) False; the area between the graphs is the error,
**

not the mean square error.

(b) True

(c) True

**(b) If U is a symmetric matrix, then u2 = u3
**

and U , V = u1v1 + u2 (v2 + v3 ) + u4 v4 .

2π

(d) False; 1 = 1, 1 = ∫ 12 dx = 2π ≠ 1

0

**Since u1 and u4 can take on any values, for
**

V to be in the orthogonal complement of the

subspace of all symmetric matrices, it must

be the case that v1 = v4 = 0 and v2 = −v3 ,

thus V must be skew-symmetric.

(e) True

Chapter 6 Supplementary Exercises

1. (a) Let v = (v1, v2 , v3 , v4 ).

v, u1 = v1,

1

If x = 1, then x2 = ±

f ( x)sin kx dx

5.

v, u 2 = v2 ,

5. Let u =

v, u3 = v3 ,

v, u 4 = v4

If v, u1 = v, u 4 = 0, then v1 = v4 = 0

and v = (0, v2 , v3 , 0). Since the angle θ

between u and v satisfies cos θ =

(

a1 , ..., an

⎛ 1

, ...,

v=⎜

⎝ a1

) and

1 ⎞

⎟ . By the Cauchy-Schwarz

an ⎠

Inequality, u ⋅ v

2

= (1 +

u, v

+ 1)2 ≤ u

2

v

2

or

n terms

,

u v

then v making equal angles with u 2 and u3

n 2 ≤ (a1 +

means that v2 = v3 . In order for the angle

⎛1

+ a1 ) ⎜ +

⎝ a1

+

1

an

⎞

⎟.

⎠

7. Let x = ( x1, x2 , x3 ).

**between v and u3 to be defined v ≠ 0.
**

Thus, v = (0, a, a, 0) with a ≠ 0.

x, u1 = x1 + x2 − x3

x, u 2 = −2 x1 − x2 + 2 x3

(b) As in part (a), since x, u1 = x, u 4 = 0,

x, u3 = − x1 + x3

x1 = x4 = 0.

x1, u3 = 0 ⇒ − x1 + x3 = 0, so x1 = x3 . Then

x, u1 = x2 and x, u 2 = − x2 , so x2 = 0 and

**Since u 2 = u3 = 1 and we want x = 1,
**

then the cosine of the angle between x and

u 2 is cos θ 2 = x, u 2 = x2 and, similarly,

x = ( x1, 0, x1 ). Then

x = x12 + x12 = 2 x12 = x1

cos θ3 = x, u3 = x3 , so we want x2 = 2 x3 ,

and x = 0, x2 , 2 x2 , 0 .

168

2.

SSM: Elementary Linear Algebra

If x = 1 then x1 = ±

1

Section 6.6

⎛ u, vi ⎞

cos αi = ⎜

⎜ u v ⎟⎟

i ⎠

⎝

and the vectors are

2

1 ⎞

⎛ 1

, 0,

±⎜

⎟.

2⎠

⎝ 2

⎛a ⎞

=⎜ i ⎟

⎝ u ⎠

9. For u = (u1 , u2 ), v = (v1 , v2 ) in R 2 , let

=

u, v = au1v1 + bu2 v2 be a weighted inner

v

2

= a(1)2 + b(2)2 = a + 4b = 1,

2

= a(3)2 + b(−1)2 = 9a + b = 1, and

cos2 α1 +

k (k

(b) As n approaches ∞,

approaches

π

2

1

n

.

+ cos2 α n =

a12 + a22 +

+ an2

a12 + a22 +

+ an2

= 1.

**To show that (W ⊥ ) ⊥ ⊆ W , let v be in (W ⊥ ) ⊥ .
**

Since v is in V, we have, by the Projection

Theorem, that v = w1 + w 2 where w1 is in W

and w 2 is in W ⊥ . By definition,

v, w 2 = w1 , w 2 = 0. But

v, w 2 = w1 + w 2 , w 2

= w1 , w 2 + w 2 , w 2

= w2, w2

**so that w 2 , w 2 = 0. Hence w 2 = 0 and
**

therefore v = w1, so that v is in W. Thus

ui , u = k 2 , so

ui u

+ an2

Thus W ⊆ (W ⊥ )⊥ .

**be the edges of the ‘cube’ in R n and
**

u = (k, k, k, ..., k) be the diagonal.

Then ui = k , u = k n , and

k2

a12 + a22 +

to every vector in W ⊥ , so that w is in (W ⊥ ) ⊥ .

11. (a) Let u1 = (k , 0, 0, ..., 0),

u 2 = (0, k , 0, ..., 0), ..., u 2 = (0, 0, 0, ..., k )

=

ai2

= 1)

W ⊆ (W ⊥ )⊥ . If w is in W, then w is orthogonal

⎡ 1 4⎤

⎡1 ⎤

⎡a ⎤

This leads to the system ⎢9

1⎥ ⎢ ⎥ = ⎢1 ⎥ .

⎢

⎥ ⎣b ⎦ ⎢ ⎥

⎣ 3 −2 ⎦

⎣0 ⎦

⎡ 1 4 1⎤

⎡ 1 0 0⎤

Since ⎢9

1 1⎥ reduces to ⎢0 1 0 ⎥ , the

⎢

⎥

⎢

⎥

⎣0 0 1⎦

⎣ 3 −2 0 ⎦

system is inconsistent and there is no such

weighted inner product.

ui , u

( vi

15. To show that (W ⊥ )⊥ = W , we first show that

u, v = a(1)(3) + b(2)(−1) = 3a − 2b = 0.

cos θ =

2

Therefore

**product. If u = (1, 2) and v = (3, −1) form an
**

orthonormal set, then

u

2

2

(W ⊥ ) ⊥ ⊆ W .

n)

=

1

n

⎡ 1 −1⎤

⎡ 1 2 4⎤

17. A = ⎢ 2 3⎥ , AT = ⎢

,

⎢

⎥

⎣ −1 3 5⎥⎦

4

5

⎣

⎦

⎡ 21 25⎤

,

AT A = ⎢

⎣ 25 35 ⎥⎦

⎡1 ⎤

⎡ 1 2 4 ⎤ ⎢ ⎥ ⎡ 4 s + 3⎤

AT b = ⎢

1 =

⎣ −1 3 5⎥⎦ ⎢ s ⎥ ⎢⎣5s + 2 ⎥⎦

⎣ ⎦

The associated normal system is

⎡ 21 25⎤ ⎡ x1 ⎤ ⎡ 4s + 3⎤

⎢⎣ 25 35 ⎥⎦ ⎢ x ⎥ = ⎢⎣5s + 2 ⎥⎦ .

⎣ 2⎦

If the least squares solution is x1 = 1 and x2 = 2,

.

approaches 0, so θ

.

**13. Recall that u can be expressed as the linear
**

combination u = a1 v1 + + an v n where

ai = u, vi for i = 1, ..., n. Thus

⎡ 21 25⎤ ⎡1 ⎤ ⎡ 71⎤ ⎡ 4 s + 3⎤

then ⎢

.

=

=

⎣ 25 35 ⎥⎦ ⎢⎣ 2 ⎥⎦ ⎢⎣95⎥⎦ ⎢⎣5s + 2⎥⎦

The resulting equations have solutions s = 17

and s = 18.6, respectively, so no such value of s

exists.

169
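The contradiction in Exercise 17 can be checked numerically. A sketch with NumPy: build the normal system AᵀAx = Aᵀb for b = (1, 1, s) and confirm that neither candidate value of s makes (1, 2) the least squares solution.

```python
import numpy as np

# Matrix A and right-hand side b = (1, 1, s) from Exercise 17
A = np.array([[1.0, -1.0],
              [2.0, 3.0],
              [4.0, 5.0]])

def normal_solution(s):
    """Solve the normal system (A^T A)x = A^T b for b = (1, 1, s)."""
    b = np.array([1.0, 1.0, s])
    return np.linalg.solve(A.T @ A, A.T @ b)

# The two component equations 71 = 4s + 3 and 95 = 5s + 2 would require
# s = 17 and s = 18.6 respectively, so neither value works on its own.
x_17 = normal_solution(17.0)
x_186 = normal_solution(18.6)
```

For s = 17 the first normal equation is satisfied by (1, 2) but the second is not, and vice versa for s = 18.6, matching the hand computation.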

Chapter 7
Diagonalization and Quadratic Forms

Section 7.1

Exercise Set 7.1

1. (b) Since A is orthogonal,
A⁻¹ = Aᵀ = [4/5  −9/25  12/25; 0  4/5  3/5; −3/5  −12/25  16/25].

(e) For the given matrix A, AᵀA = I₄, so A is orthogonal with inverse
Aᵀ = [1/2  1/2  1/2  1/2; 1/2  −5/6  1/6  1/6; 1/2  1/6  1/6  −5/6; 1/2  1/6  −5/6  1/6].

3. (a) For the given matrix A, AᵀA = [1 0; 0 1][1 0; 0 1] = [1 0; 0 1]. A is orthogonal with inverse Aᵀ = [1 0; 0 1].

(b) For the given matrix A,
AᵀA = [1/√2  1/√2; −1/√2  1/√2][1/√2  −1/√2; 1/√2  1/√2] = [1 0; 0 1].
A is orthogonal with inverse Aᵀ = [1/√2  1/√2; −1/√2  1/√2]. Note that the given matrix is the standard matrix for a rotation of 45°.

(c) ‖r1‖ = √(0² + 1² + (1/√2)²) = √(3/2) ≠ 1, so the matrix is not orthogonal.

(d) For the given matrix A, AᵀA = I₃, so A is orthogonal with inverse
Aᵀ = [−1/√2  0  1/√2; 1/√6  −2/√6  1/√6; 1/√3  1/√3  1/√3].

(f) ‖r2‖ = √((1/√3)² + (−1/2)²) = √(7/12) ≠ 1, so the matrix is not orthogonal.

7. (a) cos π/3 = 1/2, sin π/3 = √3/2
[x′; y′] = [1/2  √3/2; −√3/2  1/2][−2; 6] = [−1 + 3√3; 3 + √3]
Thus (x′, y′) = (−1 + 3√3, 3 + √3).


(b) Since a rotation matrix is orthogonal, Equation (2) gives
[x; y] = [1/2  −√3/2; √3/2  1/2][5; 2] = [5/2 − √3; (5/2)√3 + 1].
Thus (x, y) = (5/2 − √3, (5/2)√3 + 1).

9. If B = {u1, u2, u3} is the standard basis for R³, and B′ = {u1′, u2′, u3′} is the rotated basis, then the transition matrix from B′ to B is
P = [cos π/3  0  sin π/3; 0  1  0; −sin π/3  0  cos π/3] = [1/2  0  √3/2; 0  1  0; −√3/2  0  1/2].

(a) [x′; y′; z′] = P⁻¹[x; y; z] = [1/2  0  −√3/2; 0  1  0; √3/2  0  1/2][−1; 2; 5] = [−1/2 − (5/2)√3; 2; −(1/2)√3 + 5/2]
Thus (x′, y′, z′) = (−1/2 − (5/2)√3, 2, −(1/2)√3 + 5/2).

(b) [x; y; z] = P[x′; y′; z′] = [1/2  0  √3/2; 0  1  0; −√3/2  0  1/2][1; 6; −3] = [1/2 − (3/2)√3; 6; −(1/2)√3 − 3/2]
Thus (x, y, z) = (1/2 − (3/2)√3, 6, −(1/2)√3 − 3/2).

11. (a) If B = {u1, u2, u3} is the standard basis for R³ and B′ = {u1′, u2′, u3′}, then
[u1′]B = [cos θ; 0; −sin θ], [u2′]B = [0; 1; 0], and [u3′]B = [sin θ; 0; cos θ].
Thus the transition matrix from B′ to B is
P = [cos θ  0  sin θ; 0  1  0; −sin θ  0  cos θ],
i.e., [x; y; z] = P[x′; y′; z′]. Then
A = P⁻¹ = [cos θ  0  −sin θ; 0  1  0; sin θ  0  cos θ].

(b) With the same notation, [u1′]B = [1; 0; 0], [u2′]B = [0; cos θ; sin θ], and [u3′]B = [0; −sin θ; cos θ], so the transition matrix from B′ to B is
P = [1  0  0; 0  cos θ  −sin θ; 0  sin θ  cos θ] and
A = [1  0  0; 0  cos θ  sin θ; 0  −sin θ  cos θ].

13. Let A = [a + b  b − a; a − b  b + a]. Then
AᵀA = [2(a² + b²)  0; 0  2(a² + b²)],
so a and b must satisfy a² + b² = 1/2.

17. The row vectors of an orthogonal matrix are an orthonormal set.
‖r1‖² = a² + (1/√2)² + (−1/√2)² = a² + 1, so a = 0.
‖r2‖² = b² + (1/√6)² + (1/√6)² = b² + 1/3, so b = ±2/√6.
‖r3‖² = c² + (1/√3)² + (1/√3)² = c² + 2/3, so c = ±1/√3.
The column vectors of an orthogonal matrix are also an orthonormal set, from which it is clear that b and c must have opposite signs. Thus the only possibilities are a = 0, b = 2/√6, c = −1/√3 or a = 0, b = −2/√6, c = 1/√3.
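The computations in Exercises 9 and 11 rest on the fact that a rotation (transition) matrix P is orthogonal, so P⁻¹ = Pᵀ. A quick numerical check of Exercise 9(a) with NumPy:

```python
import numpy as np

theta = np.pi / 3
c, s = np.cos(theta), np.sin(theta)

# Transition matrix from B' to B for the rotated basis of Exercise 9
P = np.array([[c, 0.0, s],
              [0.0, 1.0, 0.0],
              [-s, 0.0, c]])

orthogonal = np.allclose(P.T @ P, np.eye(3))   # P^T P = I
# New coordinates of the point (-1, 2, 5): (x', y', z') = P^T (x, y, z)
new_coords = P.T @ np.array([-1.0, 2.0, 5.0])
```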

21. (a) Rotations about the origin, reflections about any line through the origin, and any combination of these are rigid operators.

(b) Rotations about the origin, dilations, contractions, reflections about lines through the origin, and combinations of these are angle preserving.

(c) All rigid operators on R² are angle preserving. Dilations and contractions are angle-preserving operators that are not rigid.

True/False 7.1

(a) False; only square matrices can be orthogonal.

(b) False; the row and column vectors are not unit vectors.

(c) False; only square matrices can be orthogonal.

(d) False; the column vectors must form an orthonormal set.

(e) True; since AᵀA = I for an orthogonal matrix A, A must be invertible.

(f) True; a product of orthogonal matrices is orthogonal, so A² is orthogonal, hence det(A²) = (det A)² = (±1)² = 1.

(g) True; since ‖Ax‖ = ‖x‖ for an orthogonal matrix.

(h) True; for any nonzero vector x, x/‖x‖ is a unit vector, so ‖A(x/‖x‖)‖ = (1/‖x‖)‖Ax‖ = 1. It follows that ‖Ax‖ = ‖x‖, so A is orthogonal by Theorem 7.1.3.

Section 7.2

Exercise Set 7.2

1. (a) det(λI − A) = |λ−1  −2; −2  λ−4| = λ² − 5λ
The characteristic equation is λ² − 5λ = 0 and the eigenvalues are λ = 0 and λ = 5. Both eigenspaces are one-dimensional.

(b) det(λI − A) = |λ−1  4  −2; 4  λ−1  2; −2  2  λ+2| = λ³ − 27λ − 54 = (λ − 6)(λ + 3)²
The characteristic equation is λ³ − 27λ − 54 = 0 and the eigenvalues are λ = 6 and λ = −3. The eigenspace for λ = 6 is one-dimensional; the eigenspace for λ = −3 is two-dimensional.

(c) det(λI − A) = |λ−1  −1  −1; −1  λ−1  −1; −1  −1  λ−1| = λ³ − 3λ² = λ²(λ − 3)
The characteristic equation is λ³ − 3λ² = 0 and the eigenvalues are λ = 3 and λ = 0. The eigenspace for λ = 3 is one-dimensional; the eigenspace for λ = 0 is two-dimensional.

(d) det(λI − A) = |λ−4  −2  −2; −2  λ−4  −2; −2  −2  λ−4| = λ³ − 12λ² + 36λ − 32 = (λ − 8)(λ − 2)²
The characteristic equation is λ³ − 12λ² + 36λ − 32 = 0 and the eigenvalues are λ = 8 and λ = 2. The eigenspace for λ = 8 is one-dimensional; the eigenspace for λ = 2 is two-dimensional.

(e) det(λI − A) = |λ−4  −4  0  0; −4  λ−4  0  0; 0  0  λ  0; 0  0  0  λ| = λ⁴ − 8λ³ = λ³(λ − 8)
The characteristic equation is λ⁴ − 8λ³ = 0 and the eigenvalues are λ = 0 and λ = 8. The eigenspace for λ = 0 is three-dimensional; the eigenspace for λ = 8 is one-dimensional.

(f) det(λI − A) = |λ−2  1  0  0; 1  λ−2  0  0; 0  0  λ−2  1; 0  0  1  λ−2| = λ⁴ − 8λ³ + 22λ² − 24λ + 9 = (λ − 1)²(λ − 3)²
The characteristic equation is λ⁴ − 8λ³ + 22λ² − 24λ + 9 = 0 and the eigenvalues are λ = 1 and λ = 3. Both eigenspaces are two-dimensional.

3. The characteristic equation of A is λ² − 13λ + 30 = (λ − 3)(λ − 10) = 0.
λ1 = 3: A basis for the eigenspace is x1 = [−2/√3; 1]. v1 = x1/‖x1‖ = [−2/√7; √3/√7] is an orthonormal basis.
λ2 = 10: A basis for the eigenspace is x2 = [√3/2; 1]. v2 = x2/‖x2‖ = [√3/√7; 2/√7] is an orthonormal basis.
Thus P = [−2/√7  √3/√7; √3/√7  2/√7] orthogonally diagonalizes A and
P⁻¹AP = [λ1  0; 0  λ2] = [3  0; 0  10].

5. The characteristic equation of A is λ³ + 28λ² − 1175λ − 3750 = (λ − 25)(λ + 3)(λ + 50) = 0.
λ1 = 25: A basis for the eigenspace is x1 = [−4/3; 0; 1]. v1 = x1/‖x1‖ = [−4/5; 0; 3/5] is an orthonormal basis.
λ2 = −3: A basis for the eigenspace is x2 = [0; 1; 0] = v2, since ‖x2‖ = 1.
λ3 = −50: A basis for the eigenspace is x3 = [3/4; 0; 1]. v3 = x3/‖x3‖ = [3/5; 0; 4/5] is an orthonormal basis.
Thus P = [−4/5  0  3/5; 0  1  0; 3/5  0  4/5] orthogonally diagonalizes A and
P⁻¹AP = [λ1  0  0; 0  λ2  0; 0  0  λ3] = [25  0  0; 0  −3  0; 0  0  −50].

7. The characteristic equation of A is λ³ − 6λ² + 9λ = λ(λ − 3)² = 0.
λ1 = 0: A basis for the eigenspace is x1 = [1; 1; 1]. v1 = x1/‖x1‖ = [1/√3; 1/√3; 1/√3] is an orthonormal basis.
λ2 = 3: A basis for the eigenspace is x2 = [−1; 1; 0], x3 = [−1; 0; 1]. The Gram-Schmidt process gives the orthonormal basis v2 = [−1/√2; 1/√2; 0], v3 = [−1/√6; −1/√6; 2/√6].
Thus P = [1/√3  −1/√2  −1/√6; 1/√3  1/√2  −1/√6; 1/√3  0  2/√6] orthogonally diagonalizes A and
P⁻¹AP = [λ1  0  0; 0  λ2  0; 0  0  λ2] = [0  0  0; 0  3  0; 0  0  3].

9. The characteristic equation of A is λ⁴ − 1250λ² + 390,625 = (λ + 25)²(λ − 25)² = 0.

λ1 = −25: A basis for the eigenspace is x1 = [−4/3; 1; 0; 0], x2 = [0; 0; −4/3; 1]. The Gram-Schmidt process gives the orthonormal basis v1 = [−4/5; 3/5; 0; 0], v2 = [0; 0; −4/5; 3/5].
λ2 = 25: A basis for the eigenspace is x3 = [3/4; 1; 0; 0], x4 = [0; 0; 3/4; 1]. The Gram-Schmidt process gives the orthonormal basis v3 = [3/5; 4/5; 0; 0], v4 = [0; 0; 3/5; 4/5].
Thus P = [v1 v3 v2 v4] = [−4/5  3/5  0  0; 3/5  4/5  0  0; 0  0  −4/5  3/5; 0  0  3/5  4/5] orthogonally diagonalizes A and
P⁻¹AP = [λ1  0  0  0; 0  λ2  0  0; 0  0  λ1  0; 0  0  0  λ2] = [−25  0  0  0; 0  25  0  0; 0  0  −25  0; 0  0  0  25].

15. No; a non-symmetric matrix A can have eigenvalues that are real numbers. For instance, the eigenvalues of the matrix [3  0; 8  −1] are 3 and −1.

17. If A is an orthogonal matrix, then its eigenvalues have absolute value 1 but may be complex. Since the eigenvalues of a symmetric matrix must be real numbers, the only possible eigenvalues for an orthogonal symmetric matrix are 1 and −1.

19. Yes; to diagonalize A, follow the steps on page 399. The Gram-Schmidt process will ensure that columns of P corresponding to the same eigenvalue are an orthonormal set. Since eigenvectors from distinct eigenvalues are orthogonal, this means that P will be an orthogonal matrix. Then since A is orthogonally diagonalizable, it must be symmetric.

True/False 7.2

(a) True; for any square matrix A, both AAᵀ and AᵀA are symmetric, hence orthogonally diagonalizable.

(b) True; since v1 and v2 are from distinct eigenspaces of a symmetric matrix, they are orthogonal, so
‖v1 + v2‖² = ⟨v1 + v2, v1 + v2⟩ = ⟨v1, v1⟩ + 2⟨v1, v2⟩ + ⟨v2, v2⟩ = ‖v1‖² + 0 + ‖v2‖².

(c) False; an orthogonal matrix is not necessarily symmetric.

(d) True; (P⁻¹AP)⁻¹ = P⁻¹A⁻¹P since P⁻¹ and A⁻¹ both exist, and since P⁻¹AP is diagonal, so is (P⁻¹AP)⁻¹.

(e) True; since ‖Ax‖ = ‖x‖ for an orthogonal matrix.

(f) True; if A is an n × n orthogonally diagonalizable matrix, then A has an orthonormal set of n eigenvectors, which form a basis for Rⁿ.
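Exercises 7 and 9 orthonormalize bases of repeated eigenspaces with the Gram-Schmidt process. A minimal sketch of that process in NumPy, applied to the λ = 3 eigenvectors x2, x3 of Exercise 7:

```python
import numpy as np

def gram_schmidt(vectors):
    """Return an orthonormal basis for the span of the given vectors."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in basis:
            w = w - (w @ u) * u        # remove the component along u
        basis.append(w / np.linalg.norm(w))
    return basis

v2, v3 = gram_schmidt([[-1.0, 1.0, 0.0], [-1.0, 0.0, 1.0]])
# v2 = (-1/sqrt(2), 1/sqrt(2), 0), v3 = (-1/sqrt(6), -1/sqrt(6), 2/sqrt(6))
```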

(g) True; if A is orthogonally diagonalizable, then A must be symmetric, so the eigenvalues of A will all be real numbers.

Section 7.3

Exercise Set 7.3

1. (a) 3x1² + 7x2² = [x1 x2][3  0; 0  7][x1; x2]

(b) 4x1² − 9x2² − 6x1x2 = [x1 x2][4  −3; −3  −9][x1; x2]

(c) 9x1² − x2² + 4x3² + 6x1x2 − 8x1x3 + x2x3 = [x1 x2 x3][9  3  −4; 3  −1  1/2; −4  1/2  4][x1; x2; x3]

3. [x y][2  −3; −3  5][x; y] = 2x² + 5y² − 6xy

5. Q = xᵀAx = [x1 x2][2  −1; −1  2][x1; x2]
The characteristic equation of A is λ² − 4λ + 3 = (λ − 3)(λ − 1) = 0, so the eigenvalues are λ = 3, 1. Orthonormal bases for the eigenspaces are λ = 3: [1/√2; −1/√2]; λ = 1: [1/√2; 1/√2].
For P = [1/√2  1/√2; −1/√2  1/√2], let x = Py; then
xᵀAx = yᵀ(PᵀAP)y = [y1 y2][3  0; 0  1][y1; y2]
and Q = 3y1² + y2².

7. Q = xᵀAx = [x1 x2 x3][3  2  0; 2  4  −2; 0  −2  5][x1; x2; x3]
The characteristic equation of A is λ³ − 12λ² + 39λ − 28 = (λ − 1)(λ − 4)(λ − 7) = 0, so the eigenvalues are λ = 1, 4, 7. Orthonormal bases for the eigenspaces are λ = 1: [−2/3; 2/3; 1/3]; λ = 4: [2/3; 1/3; 2/3]; λ = 7: [1/3; 2/3; −2/3].
For P = [−2/3  2/3  1/3; 2/3  1/3  2/3; 1/3  2/3  −2/3], let x = Py; then
xᵀAx = yᵀ(PᵀAP)y = [y1 y2 y3][1  0  0; 0  4  0; 0  0  7][y1; y2; y3]
and Q = y1² + 4y2² + 7y3².

9. (a) 2x² + xy + x − 6y + 2 = 0 can be written as 2x² + 0y² + xy + (x − 6y) + 2 = 0 or
[x y][2  1/2; 1/2  0][x; y] + [1  −6][x; y] + 2 = 0.

(b) y² + 7x − 8y − 5 = 0 can be written as 0x² + y² + 0xy + (7x − 8y) − 5 = 0 or
[x y][0  0; 0  1][x; y] + [7  −8][x; y] − 5 = 0.

11. (a) 2x² + 5y² = 20 is x²/10 + y²/4 = 1, which is the equation of an ellipse.

(b) x² − y² − 8 = 0 is x² − y² = 8 or x²/8 − y²/8 = 1, which is the equation of a hyperbola.

(c) 7y² − 2x = 0 is x = (7/2)y², which is the equation of a parabola.

(d) x² + y² − 25 = 0 is x² + y² = 25, which is the equation of a circle.
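The diagonalization of the quadratic form in Exercise 7 can be reproduced with `numpy.linalg.eigh`, which returns the eigenvalues of a symmetric matrix in ascending order together with orthonormal eigenvectors (the signs of the columns may differ from the hand computation):

```python
import numpy as np

A = np.array([[3.0, 2.0, 0.0],
              [2.0, 4.0, -2.0],
              [0.0, -2.0, 5.0]])

eigenvalues, P = np.linalg.eigh(A)   # ascending order: 1, 4, 7
D = P.T @ A @ P                      # change of variables x = Py

# In the new variables, Q = y1^2 + 4*y2^2 + 7*y3^2
```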

13. The equation can be written in matrix form as xᵀAx = −8 where A = [2  −2; −2  −1]. The eigenvalues of A are λ1 = 3 and λ2 = −2, with corresponding eigenvectors v1 = [2; −1] and v2 = [1; 2], respectively. Thus the matrix P = [2/√5  1/√5; −1/√5  2/√5] orthogonally diagonalizes A. Note that det(P) = 1, so P is a rotation matrix. The equation of the conic in the rotated x′y′-coordinate system is
[x′ y′][3  0; 0  −2][x′; y′] = −8,
which can be written as 2y′² − 3x′² = 8; thus the conic is a hyperbola. The angle through which the axes have been rotated is θ = tan⁻¹(−1/2) ≈ −26.6°.

15. The equation can be written in matrix form as xᵀAx = 15 where A = [11  12; 12  4]. The eigenvalues of A are λ1 = 20 and λ2 = −5, with corresponding eigenvectors v1 = [4; 3] and v2 = [−3; 4], respectively. Thus the matrix P = [4/5  −3/5; 3/5  4/5] orthogonally diagonalizes A. Note that det(P) = 1, so P is a rotation matrix. The equation of the conic in the rotated x′y′-coordinate system is
[x′ y′][20  0; 0  −5][x′; y′] = 15,
which we can write as 4x′² − y′² = 3; thus the conic is a hyperbola. The angle through which the axes have been rotated is θ = tan⁻¹(3/4) ≈ 36.9°.

17. (a) The eigenvalues of [1  0; 0  2] are λ = 1 and λ = 2, so the matrix is positive definite.

(b) The eigenvalues of [−1  0; 0  −2] are λ = −1 and λ = −2, so the matrix is negative definite.

(c) The eigenvalues of [−1  0; 0  2] are λ = −1 and λ = 2, so the matrix is indefinite.

(d) The eigenvalues of [1  0; 0  0] are λ = 1 and λ = 0, so the matrix is positive semidefinite.

(e) The eigenvalues of [0  0; 0  −2] are λ = 0 and λ = −2, so the matrix is negative semidefinite.

19. We have Q = x1² + x2² > 0 for (x1, x2) ≠ (0, 0); thus Q is positive definite.

21. We have Q = (x1 − x2)² > 0 for x1 ≠ x2 and Q = 0 for x1 = x2; thus Q is positive semidefinite.

23. We have Q = x1² − x2² > 0 for x1 ≠ 0, x2 = 0 and Q < 0 for x1 = 0, x2 ≠ 0; thus Q is indefinite.

25. (a) The eigenvalues of the matrix A = [5  −2; −2  5] are λ = 3 and λ = 7; thus A is positive definite. Since det[5] = 5 and det[5  −2; −2  5] = 21 are positive, we reach the same conclusion using Theorem 7.3.4.

(b) The eigenvalues of A = [2  −1  0; −1  2  0; 0  0  5] are λ = 1, λ = 3, and λ = 5; thus A is positive definite. The determinants of the principal submatrices are det[2] = 2, det[2  −1; −1  2] = 3, and det A = 15; thus we reach the same conclusion using Theorem 7.3.4.
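The axis rotations in Exercises 13 and 15 are eigendecompositions in which P is chosen with det(P) = 1. A sketch for the Exercise 13 matrix:

```python
import numpy as np

A = np.array([[2.0, -2.0],
              [-2.0, -1.0]])

w, V = np.linalg.eigh(A)          # ascending order: -2, 3
P = V[:, ::-1].copy()             # reorder columns to get diag(3, -2)
if np.linalg.det(P) < 0:
    P[:, 1] = -P[:, 1]            # flip a column so P is a rotation

D = P.T @ A @ P
# The conic x^T A x = -8 becomes 3x'^2 - 2y'^2 = -8, i.e. 2y'^2 - 3x'^2 = 8
```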

27. The quadratic form Q = 5x1² + x2² + kx3² + 4x1x2 − 2x1x3 − 2x2x3 can be expressed in matrix notation as Q = xᵀAx where A = [5  2  −1; 2  1  −1; −1  −1  k]. The determinants of the principal submatrices of A are det[5] = 5, det[5  2; 2  1] = 1, and det A = k − 2. Thus Q is positive definite if and only if k > 2.

31. (a) For each i = 1, …, n we have
(xi − x̄)² = xi² − 2xi x̄ + x̄² = xi² − (2/n) Σⱼ xi xⱼ + (1/n²)[Σⱼ xⱼ² + 2 Σ_{j<k} xⱼ xₖ].
Thus in the quadratic form s_x² = (1/(n − 1))[(x1 − x̄)² + (x2 − x̄)² + ⋯ + (xn − x̄)²] the coefficient of xi² is
(1/(n − 1))[1 − 2/n + (1/n²)n] = 1/n,
and the coefficient of xi xⱼ for i ≠ j is
(1/(n − 1))[−2/n − 2/n + (2/n²)n] = −2/(n(n − 1)).
It follows that s_x² = xᵀAx where A is the n × n matrix with diagonal entries 1/n and off-diagonal entries −1/(n(n − 1)).

(b) We have s_x² = (1/(n − 1))[(x1 − x̄)² + ⋯ + (xn − x̄)²] ≥ 0, and s_x² = 0 if and only if x1 = x̄, x2 = x̄, …, xn = x̄, i.e., if and only if x1 = x2 = ⋯ = xn. Thus s_x² is a positive semidefinite form.

33. The eigenvalues of A must be positive and equal to each other. That is, A must have a positive eigenvalue of multiplicity 2.

True/False 7.3

(a) True

(b) False; because of the term 4x1x2x3.

(c) True; (x1 − 3x2)² = x1² − 6x1x2 + 9x2².

(d) True; none of the eigenvalues will be 0.

(e) False; a symmetric matrix can be positive semidefinite or negative semidefinite.

(f) True

(g) True; x ⋅ x = x1² + x2² + ⋯ + xn²
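The principal-minor test of Theorem 7.3.4, used in Exercises 25 and 27, is straightforward to script. The sketch below checks the Exercise 27 matrix at the sample value k = 3 (any k > 2 should give all-positive minors):

```python
import numpy as np

def leading_principal_minors(A):
    """Determinants of the leading principal submatrices of A."""
    n = A.shape[0]
    return [float(np.linalg.det(A[:m, :m])) for m in range(1, n + 1)]

k = 3.0
A = np.array([[5.0, 2.0, -1.0],
              [2.0, 1.0, -1.0],
              [-1.0, -1.0, k]])

minors = leading_principal_minors(A)        # [5, 1, k - 2]
is_positive_definite = all(m > 0 for m in minors)
```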

(h) True

(i) False; this is only true for symmetric matrices.

(j) True

(k) False; there will be no cross product terms if aij = −aji for all i ≠ j.

(l) False; if c < 0, xᵀAx = c has no graph.

Section 7.4

Exercise Set 7.4

1. The quadratic form 5x² − y² can be written in matrix notation as xᵀAx where A = [5  0; 0  −1]. The eigenvalues of A are λ1 = 5 and λ2 = −1, with corresponding unit eigenvectors v1 = [1; 0] and v2 = [0; 1]. Thus the constrained maximum is 5 occurring at (x, y) = (±1, 0), and the constrained minimum is −1 occurring at (x, y) = (0, ±1).

3. The quadratic form 3x² + 7y² can be written in matrix notation as xᵀAx where A = [3  0; 0  7]. The eigenvalues of A are λ1 = 7 and λ2 = 3, with corresponding unit eigenvectors v1 = [0; 1] and v2 = [1; 0]. Thus the constrained maximum is 7 occurring at (x, y) = (0, ±1), and the constrained minimum is 3 occurring at (x, y) = (±1, 0).

5. The quadratic form 9x² + 4y² + 3z² can be expressed as xᵀAx where A = [9  0  0; 0  4  0; 0  0  3]. The eigenvalues of A are λ1 = 9, λ2 = 4, λ3 = 3, with corresponding eigenvectors v1 = [1; 0; 0], v2 = [0; 1; 0], v3 = [0; 0; 1]. Thus the constrained maximum is 9 occurring at (x, y, z) = (±1, 0, 0), and the constrained minimum is 3 occurring at (x, y, z) = (0, 0, ±1).

7. Rewrite the constraint as (x/2)² + (y/√2)² = 1, then let x = 2x1 and y = √2 y1. The problem is now to maximize z = xy = 2√2 x1y1 subject to x1² + y1² = 1. Write
z = xᵀAx = [x1 y1][0  √2; √2  0][x1; y1].
Since
|λ  −√2; −√2  λ| = λ² − 2 = (λ + √2)(λ − √2),
the largest eigenvalue of A is √2, with corresponding positive unit eigenvector [1/√2; 1/√2]. Thus the maximum value z = 2√2(1/√2)(1/√2) = √2 occurs when x = 2x1 = √2, y = √2 y1 = 1 or x = −√2, y = −1. The smallest eigenvalue of A is −√2, with corresponding unit eigenvectors ±[−1/√2; 1/√2]. Thus the minimum value is z = −√2, which occurs at (x, y) = (−√2, 1) and (√2, −1).

9. [Figure: the hyperbolas 5x² − y² = 5 and 5x² − y² = −1 drawn with the unit circle x² + y² = 1; the constrained extrema occur at the points (±1, 0) and (0, ±1).]
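The principle behind Exercises 1-7 — that the extreme values of xᵀAx on the unit circle are the largest and smallest eigenvalues of A — can be confirmed by sampling. A sketch using the Exercise 7 form z = 2√2 x1y1:

```python
import numpy as np

A = np.array([[0.0, np.sqrt(2.0)],
              [np.sqrt(2.0), 0.0]])

eigenvalues = np.linalg.eigvalsh(A)              # -sqrt(2), sqrt(2)

# Evaluate the form at many points of the unit circle
t = np.linspace(0.0, 2.0 * np.pi, 10001)
points = np.stack([np.cos(t), np.sin(t)])        # shape (2, 10001)
values = np.sum(points * (A @ points), axis=0)   # x^T A x at each point
```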

13. The first partial derivatives of f are fx(x, y) = 3x² − 3y and fy(x, y) = −3x − 3y². To find the critical points we set fx and fy equal to zero. This yields the equations y = x² and x = −y². From this we conclude that y = y⁴ and so y = 0 or y = 1. The corresponding values of x are x = 0 and x = −1, respectively. Thus there are two critical points: (0, 0) and (−1, 1).
The Hessian matrix is
H(x, y) = [fxx(x, y)  fxy(x, y); fyx(x, y)  fyy(x, y)] = [6x  −3; −3  −6y].
The eigenvalues of H(0, 0) = [0  −3; −3  0] are λ = ±3; this matrix is indefinite and so f has a saddle point at (0, 0). The eigenvalues of H(−1, 1) = [−6  −3; −3  −6] are λ = −3 and λ = −9; this matrix is negative definite and so f has a relative maximum at (−1, 1).

15. The first partial derivatives of f are fx(x, y) = 2x − 2xy and fy(x, y) = 4y − x². To find the critical points we set fx and fy equal to zero. This yields the equations 2x(1 − y) = 0 and y = (1/4)x². From the first, we conclude that x = 0 or y = 1. Thus there are three critical points: (0, 0), (2, 1), and (−2, 1).
The Hessian matrix is
H(x, y) = [2 − 2y  −2x; −2x  4].
The eigenvalues of H(0, 0) = [2  0; 0  4] are λ = 2 and λ = 4; this matrix is positive definite and so f has a relative minimum at (0, 0). The eigenvalues of H(2, 1) = [0  −4; −4  4] are λ = 2 ± 2√5. One of these is positive and one is negative; thus this matrix is indefinite and f has a saddle point at (2, 1). Similarly, the eigenvalues of H(−2, 1) = [0  4; 4  4] are λ = 2 ± 2√5; thus f has a saddle point at (−2, 1).

17. The problem is to maximize z = 4xy subject to x² + 25y² = 25, or (x/5)² + (y/1)² = 1. Let x = 5x1 and y = y1, so that the problem is to maximize z = 20x1y1 subject to x1² + y1² = 1. Write
z = xᵀAx = [x1 y1][0  10; 10  0][x1; y1].
Since
|λ  −10; −10  λ| = λ² − 100 = (λ + 10)(λ − 10),
the largest eigenvalue of A is λ = 10, which has positive unit eigenvector [1/√2; 1/√2]. Thus the maximum value of z = 20(1/√2)(1/√2) = 10, which occurs when x = 5x1 = 5/√2 and y = y1 = 1/√2; these are the corner points of the rectangle.

21. If x is a unit eigenvector corresponding to λ, then q(x) = xᵀAx = xᵀ(λx) = λ(xᵀx) = λ(1) = λ.

True/False 7.4

(a) False; if the only critical point of the quadratic form is a saddle point, then it will have neither a maximum nor a minimum value.

(b) True

(c) True

(d) False; the Hessian form of the second derivative test is inconclusive.

(e) True; if det(A) < 0, then A will have negative eigenvalues.

Section 7.5

Exercise Set 7.5

1. A* = [−2i  4  5−i; 1+i  −3i  0]
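The classification in Exercises 13 and 15 depends only on the definiteness of the Hessian at each critical point. A sketch for Exercise 15; the Hessian entries come from the partial derivatives given above (the f noted in the comment is a consistent reconstruction from those partials, not stated in the text):

```python
import numpy as np

def hessian(x, y):
    # Second partials of f with f_x = 2x - 2xy and f_y = 4y - x^2
    # (consistent with f(x, y) = x^2 - x^2*y + 2y^2)
    return np.array([[2.0 - 2.0 * y, -2.0 * x],
                     [-2.0 * x, 4.0]])

def classify(H):
    ev = np.linalg.eigvalsh(H)
    if np.all(ev > 0):
        return "relative minimum"
    if np.all(ev < 0):
        return "relative maximum"
    return "saddle point"

results = {p: classify(hessian(*p)) for p in [(0, 0), (2, 1), (-2, 1)]}
```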

3. A = [1  i  2−3i; −i  −3  1; 2+3i  1  2]

5. (a) A is not Hermitian since a31 ≠ conj(a13).

(b) A is not Hermitian since a22 is not real.

9. The following computations show that the row vectors of A are orthonormal:
‖r1‖ = √(|3/5|² + |(4/5)i|²) = √(9/25 + 16/25) = 1
‖r2‖ = √(|−4/5|² + |(3/5)i|²) = √(16/25 + 9/25) = 1
r1 · r2 = (3/5)(−4/5) + ((4/5)i)(−(3/5)i) = −12/25 + 12/25 = 0
Thus A is unitary, and A⁻¹ = A* = [3/5  −4/5; −(4/5)i  −(3/5)i].

11. The following computations show that the column vectors of A are orthonormal:
‖c1‖ = √(|(1/(2√2))(√3 + i)|² + |(1/(2√2))(1 + i√3)|²) = √(4/8 + 4/8) = 1
‖c2‖ = √(|(1/(2√2))(1 − i√3)|² + |(1/(2√2))(i − √3)|²) = √(4/8 + 4/8) = 1
c1 · c2 = (1/8)[(√3 + i)(1 + i√3) + (1 + i√3)(−i − √3)] = (1/8)(4i − 4i) = 0
Thus A is unitary, and A⁻¹ = A* = [(1/(2√2))(√3 − i)  (1/(2√2))(1 − i√3); (1/(2√2))(1 + i√3)  (1/(2√2))(−i − √3)].

13. The characteristic polynomial of A is λ² − 9λ + 18 = (λ − 3)(λ − 6); thus the eigenvalues of A are λ = 3 and λ = 6. The augmented matrix of the system (3I − A)x = 0 is [−1  −1+i  0; −1−i  −2  0], which reduces to [1  1−i  0; 0  0  0]. Thus v1 = [−1 + i; 1] is a basis for the eigenspace corresponding to λ = 3, and p1 = [(−1 + i)/√3; 1/√3] is a unit eigenvector.
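The orthonormality computations in Exercises 9 and 11 amount to checking AA* = I, where A* is the conjugate transpose. A sketch with the Exercise 9 matrix:

```python
import numpy as np

A = np.array([[3.0 / 5.0, 4.0j / 5.0],
              [-4.0 / 5.0, 3.0j / 5.0]])

A_star = A.conj().T                               # conjugate transpose
is_unitary = np.allclose(A @ A_star, np.eye(2))   # rows orthonormal
inverse_is_A_star = np.allclose(np.linalg.inv(A), A_star)
```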

A similar computation shows that p2 = [(1 − i)/√6; 2/√6] is a unit eigenvector corresponding to λ = 6. The vectors p1 and p2 are orthogonal, and the unitary matrix P = [p1 p2] diagonalizes the matrix A:
P*AP = [(−1−i)/√3  1/√3; (1+i)/√6  2/√6][4  1−i; 1+i  5][(−1+i)/√3  (1−i)/√6; 1/√3  2/√6] = [3  0; 0  6]

15. The characteristic polynomial of A is λ² − 10λ + 16 = (λ − 2)(λ − 8); thus the eigenvalues of A are λ = 2 and λ = 8. The augmented matrix of the system (2I − A)x = 0 is [−4  −2−2i  0; −2+2i  −2  0], which reduces to [2  1+i  0; 0  0  0]. Thus v1 = [−1 − i; 2] is a basis for the eigenspace corresponding to λ = 2, and p1 = [(−1 − i)/√6; 2/√6] is a unit eigenvector. A similar computation shows that p2 = [(1 + i)/√3; 1/√3] is a unit eigenvector corresponding to λ = 8. The vectors p1 and p2 are orthogonal, and the unitary matrix P = [p1 p2] diagonalizes the matrix A:
P*AP = [(−1+i)/√6  2/√6; (1−i)/√3  1/√3][6  2+2i; 2−2i  4][(−1−i)/√6  (1+i)/√3; 2/√6  1/√3] = [2  0; 0  8]

17. The characteristic polynomial of A is (λ − 5)(λ² + λ − 2) = (λ + 2)(λ − 1)(λ − 5); thus the eigenvalues of A are λ1 = −2, λ2 = 1, and λ3 = 5. The augmented matrix of the system (−2I − A)x = 0 is
[−7  0  0  0; 0  −1  1−i  0; 0  1+i  −2  0],
which can be reduced to
[1  0  0  0; 0  1  −1+i  0; 0  0  0  0].
Thus v1 = [0; 1 − i; 1] is a basis for the eigenspace corresponding to λ1 = −2, and p1 = [0; (1 − i)/√3; 1/√3] is a unit eigenvector. Similar computations show that p2 = [0; (−1 + i)/√6; 2/√6] is a unit eigenvector corresponding to λ2 = 1, and p3 = [1; 0; 0] is a unit eigenvector corresponding to λ3 = 5. The vectors {p1, p2, p3} form an orthogonal set, and the unitary matrix P = [p1 p2 p3] diagonalizes the matrix A:
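`numpy.linalg.eigh` also handles complex Hermitian matrices, so the unitary diagonalizations of Exercises 13-17 can be cross-checked. A sketch with the Exercise 15 matrix:

```python
import numpy as np

A = np.array([[6.0, 2.0 + 2.0j],
              [2.0 - 2.0j, 4.0]])

eigenvalues, P = np.linalg.eigh(A)    # ascending order: 2, 8
D = P.conj().T @ A @ P                # P*AP, should be diag(2, 8)
```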

P*AP = [0  (1+i)/√3  1/√3; 0  (−1−i)/√6  2/√6; 1  0  0][5  0  0; 0  −1  −1+i; 0  −1−i  0][0  0  1; (1−i)/√3  (−1+i)/√6  0; 1/√3  2/√6  0] = [−2  0  0; 0  1  0; 0  0  5]

Note: The matrix P is not unique. It depends on the particular choice of the unit eigenvectors. This is just one possibility.

19. A = [0  i  2−3i; i  0  1; −2−3i  −1  4i]

21. (a) a21 = −i ≠ i = −conj(a12), and a31 ≠ −conj(a13).

(b) a11 ≠ −conj(a11), a31 ≠ −conj(a13), and a32 ≠ −conj(a23).

23. The characteristic polynomial of A = [0  −1+i; 1+i  i] is λ² − iλ + 2 = (λ − 2i)(λ + i); thus the eigenvalues of A are λ = 2i and λ = −i.

27. We have
A*A = (1/√2)[e^(−iθ)  −ie^(−iθ); e^(iθ)  ie^(iθ)] · (1/√2)[e^(iθ)  e^(−iθ); ie^(iθ)  −ie^(−iθ)]
= (1/2)[1 + 1  e^(−2iθ) − e^(−2iθ); e^(2iθ) − e^(2iθ)  1 + 1]
= [1  0; 0  1];
thus A* = A⁻¹ and A is unitary.

29. (a) If B = (1/2)(A + A*), then
B* = ((1/2)(A + A*))* = (1/2)(A* + A**) = (1/2)(A* + A) = B.
Similarly, C* = C.

(b) We have
B + iC = (1/2)(A + A*) + (1/2)(A − A*) = A and
B − iC = (1/2)(A + A*) − (1/2)(A − A*) = A*.

(c) AA* = (B + iC)(B − iC) = B² − iBC + iCB + C² and A*A = (B − iC)(B + iC) = B² + iBC − iCB + C². Thus AA* = A*A if and only if −iBC + iCB = iBC − iCB, or 2iCB = 2iBC. Thus A is normal if and only if B and C commute, i.e., CB = BC.
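The decomposition in Exercise 29 can be verified on a sample matrix: B and C as defined there are Hermitian and recombine to A. A sketch (the test matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

B = (A + A.conj().T) / 2.0        # Hermitian part
C = (A - A.conj().T) / 2.0j       # also Hermitian; A = B + iC

B_hermitian = np.allclose(B, B.conj().T)
C_hermitian = np.allclose(C, C.conj().T)
recombines = np.allclose(B + 1j * C, A)
```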

SSM: Elementary Linear Algebra

Chapter 7 Supplementary Exercises

35. If H = I − 2uu*, then
H* = (I − 2uu*)* = I* − 2u**u* = I − 2uu* = H;
thus H is Hermitian.
HH* = (I − 2uu*)(I − 2uu*)
= I − 2uu* − 2uu* + 4uu*uu*
= I − 4uu* + 4u‖u‖²u*
= I
(since u*u = ‖u‖² = 1), and so H is unitary.

37. A = [1/√2, −i/√2; i/√2, −1/√2] is both Hermitian and unitary.

39. If P = uu*, then
Px = (uu*)x = (uuᵀ)x = u(uᵀx) = (x · u)u. Thus multiplication of x by P corresponds to ‖u‖² times the orthogonal projection of x onto W = span{u}. If ‖u‖ = 1, then multiplication of x by H = I − 2uu* corresponds to reflection of x about the hyperplane u⊥.

Chapter 7 Supplementary Exercises

1. (a) For A = [3/5, −4/5; 4/5, 3/5], AᵀA = [1, 0; 0, 1], so
A⁻¹ = Aᵀ = [3/5, 4/5; −4/5, 3/5].

(b) For A = [4/5, 0, −3/5; −9/25, 4/5, −12/25; 12/25, 3/5, 16/25], AᵀA = [1, 0, 0; 0, 1, 0; 0, 0, 1], so
A⁻¹ = Aᵀ = [4/5, −9/25, 12/25; 0, 4/5, 3/5; −3/5, −12/25, 16/25].
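As a numerical cross-check on the Householder construction H = I − 2uu* discussed above, the following sketch (my illustrative Python, not part of the manual; the particular unit vector u is an arbitrary example) confirms that H comes out both Hermitian and unitary:

```python
def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def conj_T(A):
    """Conjugate transpose A*."""
    return [[A[j][i].conjugate() for j in range(len(A))] for i in range(len(A[0]))]

# An arbitrary unit vector u in C^2 (|0.6|^2 + |0.8i|^2 = 1).
u = [[0.6 + 0j], [0.8j]]
uu_star = mat_mul(u, conj_T(u))
H = [[(1 if i == j else 0) - 2 * uu_star[i][j] for j in range(2)] for i in range(2)]

Hs = conj_T(H)
HHs = mat_mul(H, Hs)
hermitian_ok = all(abs(Hs[i][j] - H[i][j]) < 1e-12 for i in range(2) for j in range(2))
unitary_ok = all(abs(HHs[i][j] - (1 if i == j else 0)) < 1e-12
                 for i in range(2) for j in range(2))
```

Any unit vector u gives the same result, since HH* = I − 4uu* + 4u(u*u)u* = I whenever u*u = 1.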

Chapter 7: Diagonalization and Quadratic Forms

True/False 7.5

(a) False; the conjugate of i is −i ≠ i.

(b) False; for r1 = (−i/√2, i/√6, i/√3) and r2 = (0, −i/√6, i/√3),
r1 · r2 = (−i/√2)(0) + (i/√6)(i/√6) + (i/√3)(−i/√3)
= 0 + (i/√6)² − (i/√3)²
= −1/6 + 1/3
= 1/6.
Thus the row vectors do not form an orthonormal set and the matrix is not unitary.

(c) True; if A is unitary, so A⁻¹ = A*, then (A*)⁻¹ = A = (A*)*.

(d) False; normal matrices that are not Hermitian are also unitarily diagonalizable.

(e) False; if A is skew-Hermitian, then (A²)* = (A*)(A*) = (−A)(−A) = A² ≠ −A².

5. The characteristic equation of A is λ³ − 3λ² + 2λ = λ(λ − 2)(λ − 1), so the eigenvalues are λ = 0, 2, 1. Orthogonal bases for the eigenspaces are
λ = 0: [−1/√2, 0, 1/√2]ᵀ; λ = 2: [1/√2, 0, 1/√2]ᵀ; λ = 1: [0, 1, 0]ᵀ.
Thus P = [−1/√2, 1/√2, 0; 0, 0, 1; 1/√2, 1/√2, 0] orthogonally diagonalizes A, and
PᵀAP = [0, 0, 0; 0, 2, 0; 0, 0, 1].

7. In matrix form, the quadratic form is
xᵀAx = [x1 x2][1, −3/2; −3/2, 4][x1, x2]ᵀ.
The characteristic equation of A is λ² − 5λ + 7/4 = 0, which has solutions λ = (5 ± 3√2)/2, or λ ≈ 4.62, 0.38. Since both eigenvalues of A are positive, the quadratic form is positive definite.

9. (a) y − x² = 0 or y = x² represents a parabola.

(b) 3x − 11y² = 0 or x = (11/3)y² represents a parabola.
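The eigenvalue estimates in Exercise 7 are easy to confirm; this short sketch (illustrative Python, variable names my own) just solves the stated characteristic equation λ² − 5λ + 7/4 = 0 with the quadratic formula:

```python
import math

# Characteristic equation lambda^2 - 5*lambda + 7/4 = 0 of A = [[1, -3/2], [-3/2, 4]].
a, b, c = 1.0, -5.0, 7.0 / 4.0
disc = b * b - 4 * a * c            # 25 - 7 = 18, so sqrt(disc) = 3*sqrt(2)
lam1 = (-b + math.sqrt(disc)) / (2 * a)
lam2 = (-b - math.sqrt(disc)) / (2 * a)
positive_definite = lam1 > 0 and lam2 > 0
```

Both roots are positive, which is exactly the positive-definiteness criterion used in the solution.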


Chapter 8

Linear Transformations

Section 8.1

Exercise Set 8.1

1. T(−u) = ‖−u‖ = ‖u‖ = T(u) ≠ −T(u), so the function is not a linear transformation.

3. T(kA) = (kA)B = k(AB) = kT(A)
T(A1 + A2) = (A1 + A2)B = A1B + A2B = T(A1) + T(A2)
Thus T is a linear transformation.

5. F(kA) = (kA)ᵀ = kAᵀ = kF(A)
F(A + B) = (A + B)ᵀ = Aᵀ + Bᵀ = F(A) + F(B)
Thus F is a linear transformation.

7. Let p(x) = a0 + a1x + a2x² and q(x) = b0 + b1x + b2x².

(a) T(kp(x)) = ka0 + ka1(x + 1) + ka2(x + 1)² = kT(p(x))
T(p(x) + q(x)) = a0 + b0 + (a1 + b1)(x + 1) + (a2 + b2)(x + 1)²
= a0 + a1(x + 1) + a2(x + 1)² + b0 + b1(x + 1) + b2(x + 1)²
= T(p(x)) + T(q(x))
Thus T is a linear transformation.

(b) T(kp(x)) = T(ka0 + ka1x + ka2x²) = (ka0 + 1) + (ka1 + 1)x + (ka2 + 1)x² ≠ kT(p(x))
T is not a linear transformation.

9. For x = (x1, x2) = c1v1 + c2v2, we have (x1, x2) = c1(1, 1) + c2(1, 0) = (c1 + c2, c1), or
c1 + c2 = x1
c1 = x2,
which has the solution c1 = x2, c2 = x1 − x2.
(x1, x2) = x2(1, 1) + (x1 − x2)(1, 0) = x2v1 + (x1 − x2)v2
and
T(x1, x2) = x2T(v1) + (x1 − x2)T(v2) = x2(1, −2) + (x1 − x2)(−4, 1) = (−4x1 + 5x2, x1 − 3x2)
T(5, −3) = (−20 − 15, 5 + 9) = (−35, 14).
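The formula just derived can be spot-checked by recomputing T through the coordinates c1, c2; this minimal Python sketch (names are illustrative, not from the text) does exactly that:

```python
# T is determined by T(v1) = (1, -2) and T(v2) = (-4, 1),
# where v1 = (1, 1) and v2 = (1, 0).
def T(x1, x2):
    c1, c2 = x2, x1 - x2            # coordinates of (x1, x2) relative to {v1, v2}
    return (c1 * 1 + c2 * -4, c1 * -2 + c2 * 1)

image = T(5, -3)                    # should reproduce (-35, 14)
```

Evaluating at v1 and v2 themselves recovers the defining values, confirming the coordinate solution.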


11. For x = (x1, x2, x3) = c1v1 + c2v2 + c3v3, we have
(x1, x2, x3) = c1(1, 1, 1) + c2(1, 1, 0) + c3(1, 0, 0) = (c1 + c2 + c3, c1 + c2, c1), or
c1 + c2 + c3 = x1
c1 + c2 = x2
c1 = x3,
which has the solution c1 = x3, c2 = x2 − x3, c3 = x1 − (x2 − x3) − x3 = x1 − x2.
(x1, x2, x3) = x3v1 + (x2 − x3)v2 + (x1 − x2)v3
T(x1, x2, x3) = x3T(v1) + (x2 − x3)T(v2) + (x1 − x2)T(v3)
= x3(2, −1, 4) + (x2 − x3)(3, 0, 1) + (x1 − x2)(−1, 5, 1)
= (−x1 + 4x2 − x3, 5x1 − 5x2 − x3, x1 + 3x3)
T(2, 4, −1) = (−2 + 16 + 1, 10 − 20 + 1, 2 − 3) = (15, −9, −1)

13. T(2v1 − 3v2 + 4v3) = 2T(v1) − 3T(v2) + 4T(v3) = (2, −2, 4) − (0, 9, 6) + (−12, 4, 8) = (−10, −7, 6)

15. (a) T(5, 10) = (10 − 10, −40 + 40) = (0, 0), so (5, 10) is in ker(T).

(b) T(3, 2) = (6 − 2, −24 + 8) = (4, −16), so (3, 2) is not in ker(T).

(c) T(1, 1) = (2 − 1, −8 + 4) = (1, −4), so (1, 1) is not in ker(T).

17. (a) T(3, −8, 2, 0) = (12 − 8 − 4, 6 − 8 + 2, 18 − 18) = (0, 0, 0), so (3, −8, 2, 0) is in ker(T).

(b) T(0, 0, 0, 1) = (0 + 0 + 0 − 3, 0 + 0 + 0 − 4, 0 − 0 + 9) = (−3, −4, 9), so (0, 0, 0, 1) is not in ker(T).

(c) T(0, −4, 1, 0) = (−4 − 2, −4 + 1, −9) = (−6, −3, −9), so (0, −4, 1, 0) is not in ker(T).

19. (a) Since x + x² = x(1 + x), x + x² is in R(T).

(b), (c) Neither 1 + x nor 3 − x² can be expressed as xp(x) with p(x) in P2, so neither is in R(T).

21. (a) Since −8x + 4y = −4(2x − y) and 2x − y can assume any real value, R(T) is the set of all vectors (x, −4x), i.e., the line y = −4x. The vector (1, −4) is a basis for this space.

(b) T can be expressed as a matrix operator from R⁴ to R³ with the matrix
A = [4, 1, −2, −3; 2, 1, 1, −4; 6, 0, −9, 9].
A basis for R(T) is a basis for the column space of A. A row echelon form of A is
B = [1, 1/2, 1/2, −2; 0, 1, 4, −5; 0, 0, 0, 1].
The columns of A corresponding to the columns of B containing leading 1s are [4, 2, 6]ᵀ, [1, 1, 0]ᵀ, and [−3, −4, 9]ᵀ. Thus, (4, 2, 6), (1, 1, 0), and (−3, −4, 9) form a basis for R(T).

(c) R(T) is the set of all polynomials in P3 with constant term 0. Thus, {x, x², x³} is a basis for R(T).

23. (a) A row echelon form of A is [1, −1, 3; 0, 1, −19/11; 0, 0, 0], thus columns 1 and 2 of A, [1, 5, 7]ᵀ and [−1, 6, 4]ᵀ, form a basis for the column space of A, which is the range of T.

(b) A reduces to [1, 0, 14/11; 0, 1, −19/11; 0, 0, 0], so a general solution of Ax = 0 is x1 = −(14/11)s, x2 = (19/11)s, x3 = s, or x1 = −14t, x2 = 19t, x3 = 11t, so [−14, 19, 11]ᵀ is a basis for the null space of A, which is ker(T).

(c) R(T) is two-dimensional, so rank(T) = 2. ker(T) is one-dimensional, so nullity(T) = 1.

(d) The column space of A is two-dimensional, so rank(A) = 2. The null space of A is one-dimensional, so nullity(A) = 1.

25. (a) A row echelon form of A is [1, 2, 3, 0; 0, 1, 1, −2/7], thus columns 1 and 2 of A, [4, 1]ᵀ and [1, 2]ᵀ, form a basis for the column space of A, which is the range of T.

(b) A reduces to [1, 0, 1, 4/7; 0, 1, 1, −2/7], so a general solution of Ax = 0 is x1 = −s − (4/7)t, x2 = −s + (2/7)t, x3 = s, x4 = t, or x1 = −s − 4r, x2 = −s + 2r, x3 = s, x4 = 7r, so [−1, −1, 1, 0]ᵀ and [−4, 2, 0, 7]ᵀ form a basis for the null space of A, which is ker(T).

(c) Both R(T) and ker(T) are two-dimensional, so rank(T) = nullity(T) = 2.

(d) Both the column space and the null space of A are two-dimensional, so rank(A) = nullity(A) = 2.

27. (a) The kernel is the y-axis; the range is the entire xz-plane.

(b) The kernel is the x-axis; the range is the entire yz-plane.

(c) The kernel is the line through the origin perpendicular to the plane y = x; the range is the entire plane y = x.

29. (a) nullity(T) = 5 − rank(T) = 2

(b) dim(P4) = 5, so nullity(T) = 5 − rank(T) = 4

(c) Since R(T) = R³, T has rank 3. nullity(T) = 6 − rank(T) = 3

(d) nullity(T) = 4 − rank(T) = 1

31. (a) nullity(A) = 7 − rank(A) = 3, so the solution space of Ax = 0 has dimension 3.

(b) No, since rank(A) = 4 while R⁵ has dimension 5.

33. R(T) must be a subspace of R³; thus the possibilities are a line through the origin, a plane through the origin, the origin only, or all of R³.

35. (b) No; F(kx, ky) = (a1k²x² + b1k²y², a2k²x² + b2k²y²) = k²F(x, y) ≠ kF(x, y)

37. Let w = c1v1 + c2v2 + ⋯ + cnvn be any vector in V. Then
T(w) = c1T(v1) + c2T(v2) + ⋯ + cnT(vn) = c1v1 + c2v2 + ⋯ + cnvn = w.
Since w was an arbitrary vector in V, T must be the identity operator.

41. If p′(x) = 0, then p(x) is a constant, so ker(D) consists of all constant polynomials.

43. (a) The fourth derivative of any polynomial of degree 3 or less is 0, so T(f(x)) = f⁽⁴⁾(x) has ker(T) = P3.

(b) By similar reasoning, T(f(x)) = f⁽ⁿ⁺¹⁾(x) has ker(T) = Pn.

True/False 8.1

(a) True; c1 = k, c2 = 0 gives the homogeneity property and c1 = c2 = 1 gives the additivity property.

(b) False; every linear transformation will have T(−v) = −T(v).
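The column-space computations in Exercises 21 and 23 above can be double-checked with a small exact row-reduction routine; the sketch below (my Python, using Fractions to avoid round-off) verifies that the matrix from Exercise 21(b) has rank 3 and that its columns 1, 2, and 4 are independent:

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce a matrix (list of rows) over the rationals and count pivots."""
    M = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[4, 1, -2, -3], [2, 1, 1, -4], [6, 0, -9, 9]]
rank_A = rank(A)                                    # rank(T) = 3
cols_124 = [[4, 1, -3], [2, 1, -4], [6, 0, 9]]      # columns 1, 2, 4 of A
```

Since the three pivot columns span a space of dimension 3, they form a basis for R(T), as claimed.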


(c) True; only the zero transformation has this property.

(d) False; T(0) = v0 + 0 = v0 ≠ 0, so T is not a linear transformation.

(e) True

(f) True

(g) False; T does not necessarily have rank 4.

(h) False; det(A + B) ≠ det(A) + det(B) in general.

(i) False; nullity(T) = rank(T) = 2

Section 8.2

Exercise Set 8.2

1. (a) By inspection, ker(T) = {0}, so T is one-to-one.

(b) T(x, y) = 0 if 2x + 3y = 0 or x = −(3/2)y, so ker(T) = {k(−3/2, 1)} and T is not one-to-one.

(c) (x, y) is in ker(T) only if x + y = 0 and x − y = 0, so ker(T) = {0} and T is one-to-one.

(d) By inspection, ker(T) = {0}, so T is one-to-one.

(e) (x, y) is in ker(T) if x − y = 0 or x = y, so ker(T) = {k(1, 1)} and T is not one-to-one.

(f) (x, y, z) is in ker(T) if both x + y + z = 0 and x − y − z = 0, which is x = 0 and y + z = 0. Thus, ker(T) = {k(0, 1, −1)} and T is not one-to-one.

3. (a) By inspection, A reduces to [1, −2; 0, 0; 0, 0], so nullity(A) = 1 and multiplication by A is not one-to-one.

(b) A can be viewed as a mapping from R⁴ to R³; thus multiplication by A is not one-to-one.

(c) A reduces to [1, 0; 0, 1; 0, 0], so nullity(A) = 0 and Ax = 0 has only the trivial solution, so multiplication by A is one-to-one.

5. (a) All points on the line y = −x are mapped to 0, so ker(T) = {k(−1, 1)}.

(b) Since ker(T) ≠ {0}, T is not one-to-one.

7. (a) Since nullity(T) = 0, T is one-to-one.

(b) nullity(T) = n − (n − 1) = 1, so T is not one-to-one.

(c) Since n < m, T is not one-to-one.

(d) Since R(T) = Rⁿ, rank(T) = n, so nullity(T) = 0. Thus T is one-to-one.

9. If there were such a transformation T, then it would have nullity 0 (because it would be one-to-one), and have rank no greater than the dimension of W (because its range is a subspace of W). Then by the Dimension Theorem, dim(V) = rank(T) + nullity(T) ≤ dim(W), which contradicts dim(W) < dim(V). Thus there is no such transformation.

11. (a) T([a, b, c; b, d, e; c, e, f]) = [a, b, c, d, e, f]ᵀ

(b) T1([a, b; c, d]) = [a, b, c, d]ᵀ and T2([a, b; c, d]) = [a, c, b, d]ᵀ.

(c) If p(x) is a polynomial and p(0) = 0, then p(x) has constant term 0.
T(ax³ + bx² + cx) = [a, b, c]ᵀ

(d) T(a + b sin(x) + c cos(x)) = [a, b, c]ᵀ
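The kernel claims in Exercise 1 are easy to spot-check by substitution; this sketch (illustrative Python, my own naming) tests the map from part (f), T(x, y, z) = (x + y + z, x − y − z), against its claimed kernel generator (0, 1, −1):

```python
def T(x, y, z):
    """The transformation from Exercise 1(f)."""
    return (x + y + z, x - y - z)

kernel_gen = (0, 1, -1)       # claimed generator of ker(T)
img = T(*kernel_gen)          # should be the zero vector
```

Every multiple of (0, 1, −1) is likewise sent to zero, while vectors off that line are not.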

13. T(kf) = kf(0) + 2kf′(0) + 3kf′(1) = kT(f)
T(f + g) = f(0) + g(0) + 2(f′(0) + g′(0)) + 3(f′(1) + g′(1)) = T(f) + T(g)
Thus T is a linear transformation.
Let f = f(x) = x²(x − 1)²; then f′(x) = 2x(x − 1)(2x − 1), so f(0) = 0, f′(0) = 0, and f′(1) = 0.
T(f) = f(0) + 2f′(0) + 3f′(1) = 0, so ker(T) ≠ {0} and T is not one-to-one.

15. Yes; if T(a, b, c) = 0, then a = b = c = 0, so ker(T) = {0}.

17. No; T is not one-to-one because ker(T) ≠ {0}, as T(a) = a × a = 0.

19. Yes; since ⟨T(u), T(v)⟩ = ⟨u, v⟩, we have ‖T(u)‖ = √⟨T(u), T(u)⟩ = √⟨u, u⟩ = ‖u‖ = 1, and T(u) is orthogonal to T(v) if u is orthogonal to v.

True/False 8.2

(a) False; dim(R²) = 2 while dim(P2) = 3.

(b) True; if ker(T) = {0}, then rank(T) = 4, so T is one-to-one and onto.

(c) False; dim(M33) = 9 while dim(P9) = 10.

(d) True; for instance, if V consists of all matrices of the form [a, b, 0; c, d, 0], then T([a, b, 0; c, d, 0]) = [a, b, c, d]ᵀ is an isomorphism T: V → R⁴.

(e) False; T(I) = P − P = 0, so ker(T) ≠ {0}.

(f) False; if there were such a transformation T, then dim(ker(T)) = dim(R(T)), but since dim(ker(T)) + dim(R(T)) = dim(P4) = 5, there can be no such transformation.

Section 8.3

Exercise Set 8.3

1. (a) (T2 ∘ T1)(x, y) = T2(2x, 3y) = (2x − 3y, 2x + 3y)

(b) (T2 ∘ T1)(x, y) = T2(x − 3y, 0) = (4(x − 3y) − 5(0), 3(x − 3y) − 6(0)) = (4x − 12y, 3x − 9y)

(c) (T2 ∘ T1)(x, y) = T2(2x, −3y, x + y) = (2x + 3y, −3y + x + y) = (2x + 3y, x − 2y)

(d) (T2 ∘ T1)(x, y) = T2(x − y, y, x) = (0, (x − y) + y + x) = (0, 2x)

3. (a) (T1 ∘ T2)(A) = T1(Aᵀ) = tr([a, c; b, d]) = a + d

(b) (T2 ∘ T1)(A) does not exist because T1(A) is not a 2 × 2 matrix.

5. T2(v) = (1/4)v, so (T1 ∘ T2)(v) = T1((1/4)v) = 4((1/4)v) = v and (T2 ∘ T1)(v) = T2(4v) = (1/4)(4v) = v.

9. By inspection, T(x, y, z) = (x, y, 0). Then T(T(x, y, z)) = T(x, y, 0) = (x, y, 0) = T(x, y, z), or T ∘ T = T.

11. (a) Since det(A) = 0, T does not have an inverse.
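Compositions like those in Exercise Set 8.3 can be checked mechanically; this minimal Python sketch (the helper names are mine, not from the text) composes the two maps from Exercise 1(a) and confirms the formula (T2 ∘ T1)(x, y) = (2x − 3y, 2x + 3y):

```python
def T1(x, y):
    """Scaling from Exercise 1(a)."""
    return (2 * x, 3 * y)

def T2(u, v):
    return (u - v, u + v)

def compose(f, g):
    """Return f after g, i.e. (f o g)."""
    return lambda *args: f(*g(*args))

T2_after_T1 = compose(T2, T1)
```

Checking a couple of points against the closed-form answer is enough to catch a transposed sign or swapped component.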

(b) det(A) = −8 ≠ 0, so T has an inverse.
A⁻¹ = [1/8, 1/8, −3/4; 1/8, 1/8, 1/4; −3/8, 5/8, 1/4], so
T⁻¹([x1, x2, x3]ᵀ) = A⁻¹[x1, x2, x3]ᵀ = [(1/8)x1 + (1/8)x2 − (3/4)x3, (1/8)x1 + (1/8)x2 + (1/4)x3, −(3/8)x1 + (5/8)x2 + (1/4)x3]ᵀ.

(c) det(A) = −2 ≠ 0, so T has an inverse.
A⁻¹ = [1/2, −1/2, 1/2; −1/2, 1/2, 1/2; 1/2, 1/2, −1/2], so
T⁻¹([x1, x2, x3]ᵀ) = [(1/2)x1 − (1/2)x2 + (1/2)x3, −(1/2)x1 + (1/2)x2 + (1/2)x3, (1/2)x1 + (1/2)x2 − (1/2)x3]ᵀ.

(d) det(A) = 1 ≠ 0, so T has an inverse.
A⁻¹ = [3, 3, −1; −2, −2, 1; −4, −5, 2], so
T⁻¹([x1, x2, x3]ᵀ) = [3x1 + 3x2 − x3, −2x1 − 2x2 + x3, −4x1 − 5x2 + 2x3]ᵀ.

13. (a) For T to have an inverse, all the ai, i = 1, 2, ..., n, must be nonzero.

(b) T⁻¹(x1, x2, ..., xn) = ((1/a1)x1, (1/a2)x2, ..., (1/an)xn)

15. (a) Since T1(p(x)) = xp(x), T1⁻¹(p(x)) = (1/x)p(x). Since T2(p(x)) = p(x + 1), T2⁻¹(p(x)) = p(x − 1).

(b) (T2 ∘ T1)⁻¹(p(x)) = (T1⁻¹ ∘ T2⁻¹)(p(x)) = T1⁻¹(p(x − 1)) = (1/x)p(x − 1)

17. (a) T(1 − 2x) = (1 − 2(0), 1 − 2(1)) = (1, −1)

(c) Let p(x) = a0 + a1x; then T(p(x)) = (a0, a0 + a1), so if T(p(x)) = (0, 0), then a0 = a1 = 0 and p is the zero polynomial, so ker(T) = {0}.

(d) Since T(p(x)) = (a0, a0 + a1), T⁻¹(2, 3) has a0 = 2 and a0 + a1 = 3, or a1 = 1. Thus, T⁻¹(2, 3) = 2 + x.

21. (a) T1(x, y) = (x, −y) and T2(x, y) = (−x, y)
(T1 ∘ T2)(x, y) = T1(−x, y) = (−x, −y)
(T2 ∘ T1)(x, y) = T2(x, −y) = (−x, −y)
T1 ∘ T2 = T2 ∘ T1

(b) Consider (x, y) = (0, 1): (T2 ∘ T1)(0, 1) = T2(0, 0) = (0, 0), but (T1 ∘ T2)(0, 1) = T1(−sin θ, cos θ) = (−sin θ, 0). Thus T1 ∘ T2 ≠ T2 ∘ T1.

(c) T1(x, y, z) = (kx, ky, kz) and T2(x, y, z) = (x cos θ − y sin θ, x sin θ + y cos θ, z)
(T1 ∘ T2)(x, y, z) = (k(x cos θ − y sin θ), k(x sin θ + y cos θ), kz)
(T2 ∘ T1)(x, y, z) = (kx cos θ − ky sin θ, kx sin θ + ky cos θ, kz)
T1 ∘ T2 = T2 ∘ T1

True/False 8.3

(a) True

(b) False; the linear operators in Exercise 21(b) are an example.

(c) False; a linear transformation need not have an inverse.

(d) True; for T to have an inverse, it must be one-to-one.

(e) False; T⁻¹ does not exist.

(f) True; if T1 is not one-to-one, then there is some nonzero vector v1 with T1(v1) = 0; then (T2 ∘ T1)(v1) = T2(0) = 0 and ker(T2 ∘ T1) ≠ {0}.
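The commutation claim in Exercise 21(c) — uniform scaling commutes with rotation about the z-axis — can be confirmed numerically; in this sketch (illustrative Python, with an arbitrarily chosen k, θ, and test point) the two composition orders agree to floating-point tolerance:

```python
import math

def scale(k):
    """Uniform scaling by k in R^3."""
    return lambda x, y, z: (k * x, k * y, k * z)

def rot_z(theta):
    """Rotation by theta about the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return lambda x, y, z: (x * c - y * s, x * s + y * c, z)

k, theta = 2.0, 0.7
p = (1.0, -3.0, 5.0)
a = scale(k)(*rot_z(theta)(*p))     # (T1 o T2)(p)
b = rot_z(theta)(*scale(k)(*p))     # (T2 o T1)(p)
```

By contrast, swapping in the projection from 21(b) instead of the scaling would break the agreement, matching the text's counterexample.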

Section 8.4

Exercise Set 8.4

1. (a) T(u1) = T(1) = x = v2
T(u2) = T(x) = x² = v3
T(u3) = T(x²) = x³ = v4
Thus the matrix for T relative to B and B′ is
[0, 0, 0; 1, 0, 0; 0, 1, 0; 0, 0, 1].

(b) [T]B′,B [x]B = [0, 0, 0; 1, 0, 0; 0, 1, 0; 0, 0, 1][c0, c1, c2]ᵀ = [0, c0, c1, c2]ᵀ
[T(x)]B′ = [c0x + c1x² + c2x³]B′ = [0, c0, c1, c2]ᵀ

3. (a) T(1) = 1
T(x) = x − 1 = −1 + x
T(x²) = (x − 1)² = 1 − 2x + x²
Thus the matrix for T relative to B is [1, −1, 1; 0, 1, −2; 0, 0, 1].

(b) [T]B [x]B = [1, −1, 1; 0, 1, −2; 0, 0, 1][a0, a1, a2]ᵀ = [a0 − a1 + a2, a1 − 2a2, a2]ᵀ
For x = a0 + a1x + a2x²,
T(x) = a0 + a1(x − 1) + a2(x − 1)² = a0 − a1 + a2 + (a1 − 2a2)x + a2x²,
so [T(x)]B = [a0 − a1 + a2, a1 − 2a2, a2]ᵀ.
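The matrix in Exercise 3 can be rebuilt programmatically by applying T to each basis vector; this sketch (my Python, working with coefficient lists [a0, a1, a2]) expands p(x − 1) and assembles the columns:

```python
# Polynomials in P2 are stored as coefficient lists [a0, a1, a2].
def shift(p):
    """Coefficients of p(x - 1): expand a1*(x-1) + a2*(x-1)^2."""
    a0, a1, a2 = p
    return [a0 - a1 + a2, a1 - 2 * a2, a2]

# The columns of [T]_B are the images of the basis 1, x, x^2.
cols = [shift(e) for e in ([1, 0, 0], [0, 1, 0], [0, 0, 1])]
T_B = [[cols[j][i] for j in range(3)] for i in range(3)]
```

The assembled matrix matches the one derived by hand, and `shift` applied to an arbitrary coefficient list reproduces the general formula in part (b).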

5. (a) T(u1) = 0v1 − (1/2)v2 + (8/3)v3 and T(u2) = 0v1 + v2 + (4/3)v3, so
[T]B′,B = [0, 0; −1/2, 1; 8/3, 4/3].

7. (a) T(1) = 1
T(x) = 2x + 1 = 1 + 2x
T(x²) = (2x + 1)² = 1 + 4x + 4x²
[T]B = [1, 1, 1; 0, 2, 4; 0, 0, 4]

(b) The coordinate matrix of 2 − 3x + 4x² is [2, −3, 4]ᵀ.
[T]B [2, −3, 4]ᵀ = [1, 1, 1; 0, 2, 4; 0, 0, 4][2, −3, 4]ᵀ = [3, 10, 16]ᵀ
Thus T(2 − 3x + 4x²) = 3 + 10x + 16x².

(c) T(2 − 3x + 4x²) = 2 − 3(2x + 1) + 4(2x + 1)² = 2 − 6x − 3 + 16x² + 16x + 4 = 3 + 10x + 16x²

9. (a) Since A is the matrix for T relative to B, A = [[T(v1)]B [T(v2)]B]. That is,
[T(v1)]B = [1, −2]ᵀ and [T(v2)]B = [3, 5]ᵀ.

(b) Since [T(v1)]B = [1, −2]ᵀ,
T(v1) = 1v1 − 2v2 = [1, 3]ᵀ − [−2, 8]ᵀ = [3, −5]ᵀ.
Similarly, T(v2) = 3v1 + 5v2 = [3, 9]ᵀ + [−5, 20]ᵀ = [−2, 29]ᵀ.

(c) If [x1, x2]ᵀ = c1v1 + c2v2 = c1[1, 3]ᵀ + c2[−1, 4]ᵀ, then
x1 = c1 − c2
x2 = 3c1 + 4c2.
Solving for c1 and c2 gives c1 = (4/7)x1 + (1/7)x2, c2 = −(3/7)x1 + (1/7)x2, so
T([x1, x2]ᵀ) = c1T(v1) + c2T(v2)
= ((4/7)x1 + (1/7)x2)[3, −5]ᵀ + (−(3/7)x1 + (1/7)x2)[−2, 29]ᵀ
= [(18/7)x1 + (1/7)x2, −(107/7)x1 + (24/7)x2]ᵀ
= [18/7, 1/7; −107/7, 24/7][x1, x2]ᵀ

(d) T([1, 1]ᵀ) = [18/7, 1/7; −107/7, 24/7][1, 1]ᵀ = [19/7, −83/7]ᵀ

11. (a) Since A is the matrix for T relative to B, the columns of A are [T(v1)]B, [T(v2)]B, and [T(v3)]B, respectively. That is,
[T(v1)]B = [1, 2, 6]ᵀ, [T(v2)]B = [3, 0, −2]ᵀ, and [T(v3)]B = [−1, 5, 4]ᵀ.

(b) Since [T(v1)]B = [1, 2, 6]ᵀ,
T(v1) = v1 + 2v2 + 6v3 = 3x + 3x² − 2 + 6x + 4x² + 18 + 42x + 12x² = 16 + 51x + 19x².
Similarly, T(v2) = 3v1 − 2v3 = 9x + 9x² − 6 − 14x − 4x² = −6 − 5x + 5x², and
T(v3) = −v1 + 5v2 + 4v3 = −3x − 3x² − 5 + 15x + 10x² + 12 + 28x + 8x² = 7 + 40x + 15x².

(c) If a0 + a1x + a2x² = c1v1 + c2v2 + c3v3 = c1(3x + 3x²) + c2(−1 + 3x + 2x²) + c3(3 + 7x + 2x²), then
a0 = −c2 + 3c3
a1 = 3c1 + 3c2 + 7c3
a2 = 3c1 + 2c2 + 2c3.
Solving for c1, c2, and c3 gives
c1 = (1/3)(a0 − a1 + 2a2),
c2 = (1/8)(−5a0 + 3a1 − 3a2),
c3 = (1/8)(a0 + a1 − a2), so
T(a0 + a1x + a2x²) = c1T(v1) + c2T(v2) + c3T(v3)
= (1/3)(a0 − a1 + 2a2)(16 + 51x + 19x²) + (1/8)(−5a0 + 3a1 − 3a2)(−6 − 5x + 5x²) + (1/8)(a0 + a1 − a2)(7 + 40x + 15x²)
= (239a0 − 161a1 + 289a2)/24 + ((201a0 − 111a1 + 247a2)/8)x + ((61a0 − 31a1 + 107a2)/12)x²

(d) For 1 + x², a0 = 1, a1 = 0, a2 = 1.
T(1 + x²) = (239 + 289)/24 + ((201 + 247)/8)x + ((61 + 107)/12)x² = 22 + 56x + 14x²

13. (a) (T2 ∘ T1)(1) = T2(2) = 6x
(T2 ∘ T1)(x) = T2(−3x) = −9x²
Thus, [T2 ∘ T1]B′,B = [0, 0; 6, 0; 0, −9; 0, 0].
T1(1) = 2
T1(x) = −3x
Thus [T1]B″,B = [2, 0; 0, −3; 0, 0].
T2(1) = 3x
T2(x) = 3x²
T2(x²) = 3x³
Thus [T2]B′,B″ = [0, 0, 0; 3, 0, 0; 0, 3, 0; 0, 0, 3].

(b) [T2 ∘ T1]B′,B = [T2]B′,B″ [T1]B″,B

(c) [0, 0, 0; 3, 0, 0; 0, 3, 0; 0, 0, 3][2, 0; 0, −3; 0, 0] = [0, 0; 6, 0; 0, −9; 0, 0]

15. If T is a contraction or dilation of V, then T maps any basis B = {v1, v2, ..., vn} of V to {kv1, kv2, ..., kvn}, where k is a nonzero constant. Thus the matrix for T relative to B is
[k, 0, 0, ⋯, 0; 0, k, 0, ⋯, 0; 0, 0, k, ⋯, 0; ⋮, ⋮, ⋮, ⋱, ⋮; 0, 0, 0, ⋯, k].
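The substitution map of Exercise 7 lends itself to a quick check; this sketch (illustrative Python on coefficient lists [a0, a1, a2]) expands p(2x + 1) directly and reproduces the image of 2 − 3x + 4x²:

```python
def substitute(p):
    """Coefficients of p(2x + 1) for p(x) = a0 + a1 x + a2 x^2.

    a0 + a1(2x+1) + a2(2x+1)^2 = (a0 + a1 + a2) + (2a1 + 4a2) x + 4 a2 x^2
    """
    a0, a1, a2 = p
    return [a0 + a1 + a2, 2 * a1 + 4 * a2, 4 * a2]

image = substitute([2, -3, 4])   # coefficients of T(2 - 3x + 4x^2)
```

The result agrees with both the matrix computation in part (b) and the direct expansion in part (c).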


17. The matrix for T relative to B is the matrix whose columns are the transforms of the basis vectors in B in terms of the standard basis. Since B is the standard basis for Rⁿ, this matrix is the standard matrix for T. Also, since B′ is the standard basis for Rᵐ, the resulting transformation will give vector components relative to the standard basis.

19. (a) D(f1) = D(1) = 0
D(f2) = D(sin x) = cos x
D(f3) = D(cos x) = −sin x
The matrix for D relative to this basis is [0, 0, 0; 0, 0, −1; 0, 1, 0].

(b) D(f1) = D(1) = 0
D(f2) = D(eˣ) = eˣ
D(f3) = D(e²ˣ) = 2e²ˣ
The matrix for D relative to this basis is [0, 0, 0; 0, 1, 0; 0, 0, 2].

(c) D(f1) = D(e²ˣ) = 2e²ˣ
D(f2) = D(xe²ˣ) = e²ˣ + 2xe²ˣ
D(f3) = D(x²e²ˣ) = 2xe²ˣ + 2x²e²ˣ
The matrix for D relative to this basis is [2, 1, 0; 0, 2, 2; 0, 0, 2].

(d) D(4e²ˣ + 6xe²ˣ − 10x²e²ˣ):
[2, 1, 0; 0, 2, 2; 0, 0, 2][4, 6, −10]ᵀ = [14, −8, −20]ᵀ
Thus, D(4e²ˣ + 6xe²ˣ − 10x²e²ˣ) = 14e²ˣ − 8xe²ˣ − 20x²e²ˣ.

21. (a) [T2 ∘ T1]B′,B = [T2]B′,B″ [T1]B″,B

(b) [T3 ∘ T2 ∘ T1]B′,B = [T3]B′,B‴ [T2]B‴,B″ [T1]B″,B

True/False 8.4

(a) False; the conclusion would only be true if T: V → V were a linear operator, i.e., if V = W.

(b) False; the conclusion would only be true if T: V → V were a linear operator, i.e., if V = W.

(c) True; since the matrix for T is invertible, ker(T) = {0}.

(d) False; the matrix of S ∘ T relative to B is [S]B [T]B.

(e) True

Section 8.5

Exercise Set 8.5

1. T(u1) = [1, 0]ᵀ and T(u2) = [−2, −1]ᵀ, so [T]B = [1, −2; 0, −1].
By inspection, v1 = 2u1 + u2 and v2 = −3u1 + 4u2, so the transition matrix from B′ to B is P = [2, −3; 1, 4]. Thus
PB→B′ = P⁻¹ = [4/11, 3/11; −1/11, 2/11].
[T]B′ = PB→B′ [T]B PB′→B = [4/11, 3/11; −1/11, 2/11][1, −2; 0, −1][2, −3; 1, 4] = [−3/11, −56/11; −2/11, 3/11]

3. Since B is the standard basis for R²,
[T]B = [cos 45°, −sin 45°; sin 45°, cos 45°] = [1/√2, −1/√2; 1/√2, 1/√2].
The matrices PB→B′ and PB′→B are the same as in Exercise 1, so
[T]B′ = PB→B′ [T]B PB′→B = [4/11, 3/11; −1/11, 2/11][1/√2, −1/√2; 1/√2, 1/√2][2, −3; 1, 4] = [13/(11√2), −25/(11√2); 5/(11√2), 9/(11√2)].

5. T(u1) = [1, 0, 0]ᵀ, T(u2) = [0, 1, 0]ᵀ, and T(u3) = [0, 0, 0]ᵀ, so [T]B = [1, 0, 0; 0, 1, 0; 0, 0, 0].
By inspection, v1 = u1, v2 = u1 + u2, and v3 = u1 + u2 + u3, so the transition matrix from B′ to B is P = [1, 1, 1; 0, 1, 1; 0, 0, 1].
Thus PB→B′ = P⁻¹ = [1, −1, 0; 0, 1, −1; 0, 0, 1] and
[T]B′ = PB→B′ [T]B PB′→B = [1, −1, 0; 0, 1, −1; 0, 0, 1][1, 0, 0; 0, 1, 0; 0, 0, 0][1, 1, 1; 0, 1, 1; 0, 0, 1] = [1, 0, 0; 0, 1, 1; 0, 0, 0].

7. T(p1) = 6 + 3(x + 1) = 9 + 3x = (2/3)p1 + (1/2)p2, and T(p2) = 10 + 2(x + 1) = 12 + 2x = −(2/9)p1 + (4/3)p2, so [T]B = [2/3, −2/9; 1/2, 4/3].
q1 = −(2/9)p1 + (1/3)p2 and q2 = (7/9)p1 − (1/6)p2, so the transition matrix from B′ to B is
P = [−2/9, 7/9; 1/3, −1/6]. Thus P⁻¹ = PB→B′ = [3/4, 7/2; 3/2, 1] and
[T]B′ = PB→B′ [T]B PB′→B = [3/4, 7/2; 3/2, 1][2/3, −2/9; 1/2, 4/3][−2/9, 7/9; 1/3, −1/6] = [1, 1; 0, 1].

11. (a) The matrix for T relative to the standard basis B is [T]B = [1, −1; 2, 4]. The eigenvalues of [T]B are λ = 2 and λ = 3 with corresponding eigenvectors [1, −1]ᵀ and [1, −2]ᵀ.
Then for P = [1, 1; −1, −2], we have P⁻¹ = [2, 1; −1, −1] and P⁻¹[T]B P = [2, 0; 0, 3].
Since P represents the transition matrix from the basis B′ to the standard basis B, B′ = {[1, −1]ᵀ, [1, −2]ᵀ} is a basis for which [T]B′ is diagonal.

(b) The matrix for T relative to the standard basis B is [T]B = [4, −1; −3, 1].
The eigenvalues of [T]B are λ = (5 + √21)/2 and λ = (5 − √21)/2 with corresponding eigenvectors [(−3 − √21)/6, 1]ᵀ and [(−3 + √21)/6, 1]ᵀ.
Then for P = [(−3 − √21)/6, (−3 + √21)/6; 1, 1], we have
P⁻¹ = [−3/√21, (−3 + √21)/(2√21); 3/√21, (3 + √21)/(2√21)] and
P⁻¹[T]B P = [(5 + √21)/2, 0; 0, (5 − √21)/2].
Since P represents the transition matrix from the basis B′ to the standard basis B, B′ = {[(−3 − √21)/6, 1]ᵀ, [(−3 + √21)/6, 1]ᵀ} is a basis for which [T]B′ is diagonal.
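The change-of-basis product in Exercise 7 involves enough fractions that an exact check is worthwhile; this sketch (my Python, using Fractions so no rounding occurs) multiplies out PB→B′ [T]B PB′→B:

```python
from fractions import Fraction as F

def mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

T_B   = [[F(2, 3), F(-2, 9)], [F(1, 2), F(4, 3)]]    # [T]_B from Exercise 7
P     = [[F(-2, 9), F(7, 9)], [F(1, 3), F(-1, 6)]]   # transition matrix B' -> B
P_inv = [[F(3, 4), F(7, 2)], [F(3, 2), F(1)]]        # transition matrix B -> B'
T_Bp  = mul(mul(P_inv, T_B), P)                      # [T]_B'
```

The product collapses to the simple matrix [1, 1; 0, 1] obtained in the text, and P_inv really is the inverse of P.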

13. (a) T(1) = 5 + x²
T(x) = 6 − x
T(x²) = 2 − 8x − 2x²
Thus the matrix for T relative to the standard basis B is [T]B = [5, 6, 2; 0, −1, −8; 1, 0, −2]. The characteristic equation for [T]B is
λ³ − 2λ² − 15λ + 36 = (λ + 4)(λ − 3)² = 0,
so the eigenvalues of T are λ = −4 and λ = 3.

(b) A basis for the eigenspace corresponding to λ = −4 is [−2, 8/3, 1]ᵀ, so the polynomial −2 + (8/3)x + x² is a basis in P2. A basis for the eigenspace corresponding to λ = 3 is [5, −2, 1]ᵀ, so the polynomial 5 − 2x + x² is a basis in P2.

15. If v is an eigenvector of T corresponding to λ, then v is a nonzero vector which satisfies the equation T(v) = λv or (λI − T)(v) = 0. Thus v is in the kernel of λI − T.

17. Since C[x]B = D[x]B for all x in V, we can let x = vi for each of the basis vectors v1, ..., vn of V. Since [vi]B = ei for each i, where {e1, ..., en} is the standard basis for Rⁿ, this yields Cei = Dei for i = 1, 2, ..., n. But Cei and Dei are the ith columns of C and D, respectively. Since corresponding columns of C and D are all equal, C = D.

True/False 8.5

(a) False; every matrix is similar to itself since A = I⁻¹AI.

(b) True; if A = P⁻¹BP and B = Q⁻¹CQ, then A = P⁻¹(Q⁻¹CQ)P = (QP)⁻¹C(QP).

(c) True; invertibility is a similarity invariant.

(d) True; if A = P⁻¹BP, then A⁻¹ = (P⁻¹BP)⁻¹ = P⁻¹B⁻¹P.

(e) True

(f) False; for example, let T be the zero operator.

(g) True

(h) False; if B and B′ are different, let [T]B be given by the matrix PB′→B. Then
[T]B′,B = PB→B′ [T]B = PB→B′ PB′→B = In.

Chapter 8 Supplementary Exercises

1. No; T(x1 + x2) = A(x1 + x2) + B ≠ (Ax1 + B) + (Ax2 + B) = T(x1) + T(x2), and if c ≠ 1, then
T(cx) = cAx + B ≠ c(Ax + B) = cT(x).

3. T(kv) = ‖kv‖(kv) = k|k| ‖v‖v ≠ kT(v) if |k| ≠ 1.

5. (a) The matrix for T relative to the standard basis is A = [1, 0, 1, 1; 2, 1, 3, 1; 1, 0, 0, 1]. A basis for the range of T is a basis for the column space of A. A reduces to [1, 0, 0, 1; 0, 1, 0, −1; 0, 0, 1, 0].
Since row operations don't change the dependency relations among columns, the reduced form of A indicates that T(e3) and any two of T(e1), T(e2), T(e4) form a basis for the range.
The reduced form of A shows that the general solution of Ax = 0 is x1 = −s, x2 = s, x3 = 0, x4 = s, so a basis for the null space of A, which is the kernel of T, is [−1, 1, 0, 1]ᵀ.
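The kernel vector found in Exercise 5(a) can be verified by a single matrix-vector product; this sketch (illustrative Python) applies the matrix A from the solution to the claimed basis vector:

```python
A = [[1, 0, 1, 1],
     [2, 1, 3, 1],
     [1, 0, 0, 1]]

def T(x):
    """Multiplication by A, the standard matrix of the operator in Exercise 5."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

kernel_vec = [-1, 1, 0, 1]
img = T(kernel_vec)          # should be the zero vector in R^3
```

A nonzero test vector such as e1 maps to a nonzero image, consistent with the kernel being the single line spanned by (−1, 1, 0, 1).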

(b) Since R(T) is three-dimensional and ker(T) is one-dimensional, rank(T) = 3 and nullity(T) = 1.

7. (a) The matrix for T relative to B is
[T]B = [1, 1, 2, −2; 1, −1, −4, 6; 1, 2, 5, −6; 3, 2, 3, −2].
[T]B reduces to [1, 0, −1, 2; 0, 1, 3, −4; 0, 0, 0, 0; 0, 0, 0, 0], which has rank 2 and nullity 2. Thus, rank(T) = 2 and nullity(T) = 2.

(b) Since rank(T) < dim(V), T is not one-to-one.

9. (a) Since A = P⁻¹BP, then
Aᵀ = (P⁻¹BP)ᵀ = PᵀBᵀ(P⁻¹)ᵀ = ((Pᵀ)⁻¹)⁻¹Bᵀ(P⁻¹)ᵀ = ((P⁻¹)ᵀ)⁻¹Bᵀ(P⁻¹)ᵀ.
Thus, Aᵀ and Bᵀ are similar.

(b) A⁻¹ = (P⁻¹BP)⁻¹ = P⁻¹B⁻¹P
Thus A⁻¹ and B⁻¹ are similar.

11. For X = [a, b; c, d],
T(X) = [a + c, b + d; 0, 0] + [b, b; d, d] = [a + b + c, 2b + d; d, d].
T(X) = 0 gives the equations
a + b + c = 0
2b + d = 0
d = 0.
Thus b = 0, c = −a, and X is in ker(T) if it has the form [k, 0; −k, 0], so ker(T) = {k[1, 0; −1, 0]}, which is one-dimensional. Hence, nullity(T) = 1 and rank(T) = dim(M22) − nullity(T) = 3.

13. The standard basis for M22 is u1 = [1, 0; 0, 0], u2 = [0, 1; 0, 0], u3 = [0, 0; 1, 0], u4 = [0, 0; 0, 1]. Since L(u1) = u1, L(u2) = u3, L(u3) = u2, and L(u4) = u4, the matrix of L relative to the standard basis is
[1, 0, 0, 0; 0, 0, 1, 0; 0, 1, 0, 0; 0, 0, 0, 1].

15. The transition matrix from B′ to B is P = [1, 1, 1; 0, 1, 1; 0, 0, 1], thus
[T]B′ = P⁻¹[T]B P = [−4, 0, 9; 1, 0, −2; 0, 1, 1].

17. T([1, 0, 0]ᵀ) = [1, 0, 0]ᵀ, T([0, 1, 0]ᵀ) = [−1, 1, 1]ᵀ, and T([0, 0, 1]ᵀ) = [1, 0, −1]ᵀ, thus
[T]B = [1, −1, 1; 0, 1, 0; 0, 1, −1].
Note that this can also be read directly from [T(x)]B.

19. (a) D(f + g) = (f(x) + g(x))″ = f″(x) + g″(x) and D(kf) = (kf(x))″ = kf″(x).

(b) If f is in ker(D), then f has the form f = f(x) = a0 + a1x, so a basis for ker(D) is f(x) = 1, g(x) = x.

(c) D(f) = f if and only if f = f(x) = aeˣ + be⁻ˣ for arbitrary constants a and b. Thus, f(x) = eˣ and g(x) = e⁻ˣ form a basis for this subspace.
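The kernel computation in Exercise 11 reduces to checking which entry patterns vanish; this sketch (my Python, encoding the image matrix of T as a 4-tuple of its entries) confirms the one-dimensional kernel:

```python
def T(a, b, c, d):
    """Entries (row-major) of T([a, b; c, d]) = [a+b+c, 2b+d; d, d]."""
    return (a + b + c, 2 * b + d, d, d)

img = T(1, 0, -1, 0)     # the claimed kernel generator [1, 0; -1, 0]
```

Any matrix with b = d = 0 and c = −a is annihilated, and nothing else is, matching nullity(T) = 1.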


21. (c) Note that a1P1(x) + a2P2(x) + a3P3(x) evaluated at x1, x2, and x3 gives the values a1, a2, and a3, respectively, since Pi(xi) = 1 and Pi(xj) = 0 for i ≠ j. Thus
T(a1P1(x) + a2P2(x) + a3P3(x)) = [a1, a2, a3]ᵀ, or
T⁻¹([a1, a2, a3]ᵀ) = a1P1(x) + a2P2(x) + a3P3(x).

(d) From the computations in part (c), the points lie on the graph.

23. D(1) = 0
D(x) = 1
D(x²) = 2x
⋮
D(xⁿ) = nxⁿ⁻¹
This gives the matrix shown.

25. J(1) = x
J(x) = (1/2)x²
J(x²) = (1/3)x³
⋮
J(xⁿ) = (1/(n + 1))xⁿ⁺¹
This gives the (n + 2) × (n + 1) matrix
[0, 0, 0, ⋯, 0; 1, 0, 0, ⋯, 0; 0, 1/2, 0, ⋯, 0; 0, 0, 1/3, ⋯, 0; ⋮, ⋮, ⋮, ⋱, ⋮; 0, 0, 0, ⋯, 1/(n + 1)].
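The integration matrix in Exercise 25 follows a simple pattern that can be generated programmatically; this sketch (my Python, using exact Fractions) builds the (n + 2) × (n + 1) matrix of J on the basis 1, x, ..., xⁿ:

```python
from fractions import Fraction

def J_matrix(n):
    """Matrix of J(x^k) = x^(k+1)/(k+1) relative to 1, x, ..., x^n and 1, x, ..., x^(n+1)."""
    M = [[Fraction(0)] * (n + 1) for _ in range(n + 2)]
    for k in range(n + 1):
        M[k + 1][k] = Fraction(1, k + 1)
    return M

J2 = J_matrix(2)   # the 4 x 3 case: columns are the images of 1, x, x^2
```

Each column k has its single nonzero entry 1/(k + 1) one row below the diagonal, exactly the staircase shown above.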


Chapter 9

Numerical Methods

Section 9.1

Exercise Set 9.1

1. In matrix form, the system is
[3 −6; −2 5][x1; x2] = [3 0; −2 1][1 −2; 0 1][x1; x2] = [0; 1],
which reduces to [1 −2; 0 1][x1; x2] = [y1; y2] and [3 0; −2 1][y1; y2] = [0; 1].
The second system is
3y1 = 0
−2y1 + y2 = 1,
which has solution y1 = 0, y2 = 1, so the first system becomes
[1 −2; 0 1][x1; x2] = [0; 1], or
x1 − 2x2 = 0
x2 = 1,
which gives the solution to the original system: x1 = 2, x2 = 1.

3. Reduce the coefficient matrix.
[2 8; −1 −1] → [1 4; −1 −1] → [1 4; 0 3] → [1 4; 0 1] = U
The multipliers used were 1/2, 1, and 1/3, which leads to L = [2 0; −1 3], so the system is
[2 0; −1 3][1 4; 0 1][x1; x2] = [−2; −2].
[2 0; −1 3][y1; y2] = [−2; −2] is
2y1 = −2
−y1 + 3y2 = −2,
which has the solution y1 = −1, y2 = −1.
[1 4; 0 1][x1; x2] = [−1; −1] is
x1 + 4x2 = −1
x2 = −1,
which gives the solution to the original system: x1 = 3, x2 = −1.

5. Reduce the coefficient matrix.
[2 −2 −2; 0 −2 2; −1 5 2] → [1 −1 −1; 0 −2 2; −1 5 2] → [1 −1 −1; 0 −2 2; 0 4 1] → [1 −1 −1; 0 1 −1; 0 4 1] → [1 −1 −1; 0 1 −1; 0 0 5] → [1 −1 −1; 0 1 −1; 0 0 1] = U
The multipliers used were 1/2, 1, −1/2, −4, and 1/5, which leads to L = [2 0 0; 0 −2 0; −1 4 5], so the system is
[2 0 0; 0 −2 0; −1 4 5][1 −1 −1; 0 1 −1; 0 0 1][x1; x2; x3] = [−4; −2; 6].
[2 0 0; 0 −2 0; −1 4 5][y1; y2; y3] = [−4; −2; 6] is
2y1 = −4
−2y2 = −2
−y1 + 4y2 + 5y3 = 6,
which has the solution y1 = −2, y2 = 1, y3 = 0.
[1 −1 −1; 0 1 −1; 0 0 1][x1; x2; x3] = [−2; 1; 0] is
x1 − x2 − x3 = −2
x2 − x3 = 1
x3 = 0,
which gives the solution to the original system: x1 = −1, x2 = 1, x3 = 0.

7. Reduce the coefficient matrix.
[5 5 10; −8 −7 −9; 0 4 26] → [1 1 2; −8 −7 −9; 0 4 26] → [1 1 2; 0 1 7; 0 4 26] → [1 1 2; 0 1 7; 0 0 −2] → [1 1 2; 0 1 7; 0 0 1] = U
The multipliers used were 1/5, 8, −4, and −1/2, which leads to L = [5 0 0; −8 1 0; 0 4 −2], so the system is
[5 0 0; −8 1 0; 0 4 −2][1 1 2; 0 1 7; 0 0 1][x1; x2; x3] = [0; 1; 4].
[5 0 0; −8 1 0; 0 4 −2][y1; y2; y3] = [0; 1; 4] is
5y1 = 0
−8y1 + y2 = 1
4y2 − 2y3 = 4,
which has the solution y1 = 0, y2 = 1, y3 = 0.
[1 1 2; 0 1 7; 0 0 1][x1; x2; x3] = [0; 1; 0] is
x1 + x2 + 2x3 = 0
x2 + 7x3 = 1
x3 = 0,
which gives the solution to the original system: x1 = −1, x2 = 1, x3 = 0.

9. Reduce the coefficient matrix.
[−1 0 1 0; 2 3 −2 6; 0 −1 2 0; 0 0 1 5] → [1 0 −1 0; 2 3 −2 6; 0 −1 2 0; 0 0 1 5] → [1 0 −1 0; 0 3 0 6; 0 −1 2 0; 0 0 1 5] → [1 0 −1 0; 0 1 0 2; 0 −1 2 0; 0 0 1 5] → [1 0 −1 0; 0 1 0 2; 0 0 2 2; 0 0 1 5] → [1 0 −1 0; 0 1 0 2; 0 0 1 1; 0 0 1 5] → [1 0 −1 0; 0 1 0 2; 0 0 1 1; 0 0 0 4] → [1 0 −1 0; 0 1 0 2; 0 0 1 1; 0 0 0 1] = U
The multipliers used were −1, −2, 1/3, 1, 1/2, −1, and 1/4, which leads to
L = [−1 0 0 0; 2 3 0 0; 0 −1 2 0; 0 0 1 4], so the system is
[−1 0 0 0; 2 3 0 0; 0 −1 2 0; 0 0 1 4][1 0 −1 0; 0 1 0 2; 0 0 1 1; 0 0 0 1][x1; x2; x3; x4] = [5; −1; 3; 7].
[−1 0 0 0; 2 3 0 0; 0 −1 2 0; 0 0 1 4][y1; y2; y3; y4] = [5; −1; 3; 7] is
−y1 = 5
2y1 + 3y2 = −1
−y2 + 2y3 = 3
y3 + 4y4 = 7,
which has the solution y1 = −5, y2 = 3, y3 = 3, y4 = 1.
[1 0 −1 0; 0 1 0 2; 0 0 1 1; 0 0 0 1][x1; x2; x3; x4] = [−5; 3; 3; 1] is
x1 − x3 = −5
x2 + 2x4 = 3
x3 + x4 = 3
x4 = 1,
which gives the solution to the original system: x1 = −3, x2 = 1, x3 = 2, x4 = 1.

11. (a) Reduce A to upper triangular form.
[2 1 −1; −2 −1 2; 2 1 0] → [1 1/2 −1/2; −2 −1 2; 2 1 0] → [1 1/2 −1/2; 0 0 1; 2 1 0] → [1 1/2 −1/2; 0 0 1; 0 0 1] = U
The multipliers used were 1/2, 2, and −2, which leads to L = [2 0 0; −2 1 0; 2 0 1], where the 1's on the diagonal reflect that no multiplication was required on the 2nd and 3rd diagonal entries.
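The two triangular solves used in Exercises 1 through 9 follow one fixed pattern: forward substitution on Ly = b, then back substitution on Ux = y. The sketch below (not part of the manual's solution) implements that pattern and checks it against the factors from Exercise 5.

```python
import numpy as np

def lu_solve(L, U, b):
    """Solve LUx = b: forward substitution for Ly = b, then back substitution for Ux = y."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                      # forward: rows top to bottom
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):          # backward: rows bottom to top
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Factors and right-hand side from Exercise 5
L = np.array([[2.0, 0, 0], [0, -2, 0], [-1, 4, 5]])
U = np.array([[1.0, -1, -1], [0, 1, -1], [0, 0, 1]])
b = np.array([-4.0, -2, 6])
x = lu_solve(L, U, b)   # the exercise's solution is x = (-1, 1, 0)
```

Note that neither solve requires forming A or its inverse; each triangular system costs on the order of n^2 flops, which is the point developed later in Section 9.4.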

(b) To change the 2 on the diagonal of L to a 1, the first column of L is divided by 2, and the diagonal matrix has a 2 as the 1,1 entry. Thus
A = L1DU1 = [1 0 0; −1 1 0; 1 0 1][2 0 0; 0 1 0; 0 0 1][1 1/2 −1/2; 0 0 1; 0 0 1]
is the LDU-decomposition of A.

(c) Let U2 = DU1 and L2 = L1; then
A = L2U2 = [1 0 0; −1 1 0; 1 0 1][2 1 −1; 0 0 1; 0 0 1].

13. Reduce A to upper triangular form.
[3 −12 6; 0 2 0; 6 −28 13] → [1 −4 2; 0 2 0; 6 −28 13] → [1 −4 2; 0 2 0; 0 −4 1] → [1 −4 2; 0 1 0; 0 −4 1] → [1 −4 2; 0 1 0; 0 0 1] = U
The multipliers used were 1/3, −6, 1/2, and 4, which leads to L1 = [3 0 0; 0 2 0; 6 −4 1]. Since
L1 = [1 0 0; 0 1 0; 2 −2 1][3 0 0; 0 2 0; 0 0 1], then
A = [1 0 0; 0 1 0; 2 −2 1][3 0 0; 0 2 0; 0 0 1][1 −4 2; 0 1 0; 0 0 1]
is the LDU-decomposition of A.

15. P^{−1} = [0 1 0; 1 0 0; 0 0 1] and P^{−1}b = [1; 2; 5], so the system P^{−1}Ax = P^{−1}b is
[1 0 0; 0 1 0; 3 −5 1][1 2 2; 0 1 4; 0 0 17][x1; x2; x3] = [1; 2; 5].
[1 0 0; 0 1 0; 3 −5 1][y1; y2; y3] = [1; 2; 5] is
y1 = 1
y2 = 2
3y1 − 5y2 + y3 = 5,
which has the solution y1 = 1, y2 = 2, y3 = 12.
[1 2 2; 0 1 4; 0 0 17][x1; x2; x3] = [1; 2; 12] is
x1 + 2x2 + 2x3 = 1
x2 + 4x3 = 2
17x3 = 12,
which gives the solution of the original system: x1 = 21/17, x2 = −14/17, x3 = 12/17.

17. If rows 2 and 3 of A are interchanged, then the resulting matrix has an LU-decomposition. For
P = [1 0 0; 0 0 1; 0 1 0], PA = [3 −1 0; 0 2 1; 3 −1 1].
Reduce PA to upper triangular form.
[3 −1 0; 0 2 1; 3 −1 1] → [1 −1/3 0; 0 2 1; 3 −1 1] → [1 −1/3 0; 0 2 1; 0 0 1] → [1 −1/3 0; 0 1 1/2; 0 0 1] = U
The multipliers used were 1/3, −3, and 1/2, so L = [3 0 0; 0 2 0; 3 0 1]. Since P = P^{−1}, we have
A = PLU = [1 0 0; 0 0 1; 0 1 0][3 0 0; 0 2 0; 3 0 1][1 −1/3 0; 0 1 1/2; 0 0 1].
Since Pb = [−2; 4; 1], the system to solve is
[3 0 0; 0 2 0; 3 0 1][1 −1/3 0; 0 1 1/2; 0 0 1][x1; x2; x3] = [−2; 4; 1].
[3 0 0; 0 2 0; 3 0 1][y1; y2; y3] = [−2; 4; 1] is
3y1 = −2
2y2 = 4
3y1 + y3 = 1,
which has the solution y1 = −2/3, y2 = 2, y3 = 3.
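Exercise 17's row-interchange trick can be verified numerically. The sketch below (an illustration, not the manual's method) multiplies the factors computed above and confirms that LU rebuilds PA and that PLU rebuilds A, using that P = P^{−1} for a single row swap.

```python
import numpy as np

# Factors from Exercise 17 (P swaps rows 2 and 3)
P = np.array([[1.0, 0, 0], [0, 0, 1], [0, 1, 0]])
L = np.array([[3.0, 0, 0], [0, 2, 0], [3, 0, 1]])
U = np.array([[1.0, -1/3, 0], [0, 1, 1/2], [0, 0, 1]])

PA = L @ U       # should be the row-swapped matrix [3 -1 0; 0 2 1; 3 -1 1]
A = P @ L @ U    # should recover the original coefficient matrix
```

Because P merely exchanges two rows, applying it twice is the identity, which is why A = P(LU) undoes the interchange.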

[1 −1/3 0; 0 1 1/2; 0 0 1][x1; x2; x3] = [−2/3; 2; 3] is
x1 − (1/3)x2 = −2/3
x2 + (1/2)x3 = 2
x3 = 3,
which gives the solution to the original system: x1 = −1/2, x2 = 1/2, x3 = 3.

19. (a) If A has such an LU-decomposition, it can be written as
[a b; c d] = [1 0; w 1][x y; 0 z] = [x y; wx wy + z],
which leads to the equations
x = a
y = b
wx = c
wy + z = d.
Since a ≠ 0, the system has the unique solution x = a, y = b, w = c/a, and z = d − bc/a = (ad − bc)/a. Because the solution is unique, the LU-decomposition is also unique.

(b) From part (a), the LU-decomposition is
[a b; c d] = [1 0; c/a 1][a b; 0 (ad − bc)/a].

True/False 9.1

(a) False; if the matrix cannot be reduced to row echelon form without interchanging rows, then it does not have an LU-decomposition.

(b) False; if the row equivalence of A and U requires interchanging rows of A, then A does not have an LU-decomposition.

(c) True

(d) True

(e) True

Section 9.2

Exercise Set 9.2

1. (a) λ3 = −8 is the dominant eigenvalue.

(b) There is no dominant eigenvalue since |λ1| = |λ4| = 5.

3. The characteristic polynomial of A is λ^2 − 4λ − 6, so the eigenvalues of A are λ = 2 ± √10. λ = 2 + √10 ≈ 5.16228 is the dominant eigenvalue, and the dominant eigenvector is [1; 3 − √10] ≈ [1; −0.16228].
Ax0 = [5 −1; −1 −1][1; 0] = [5; −1]
x1 = Ax0/‖Ax0‖ = (1/√26)[5; −1] ≈ [0.98058; −0.19612]
λ(1) = (Ax1)^T x1 ≈ 5.24297
Ax1 ≈ [5 −1; −1 −1][0.98058; −0.19612] ≈ [5.09902; −0.78446]
x2 = Ax1/‖Ax1‖ ≈ [0.98837; −0.15206]
λ(2) = (Ax2)^T x2 ≈ 5.16138
Ax2 ≈ [5 −1; −1 −1][0.98837; −0.15206] ≈ [5.09391; −0.83631]
x3 = Ax2/‖Ax2‖ ≈ [0.98679; −0.16201]
λ(3) = (Ax3)^T x3 ≈ 5.16226
Ax3 ≈ [5 −1; −1 −1][0.98679; −0.16201] ≈ [5.09596; −0.82478]
x4 = Ax3/‖Ax3‖ ≈ [0.98715; −0.15977]
λ(4) = (Ax4)^T x4 ≈ 5.16223
Ax4 ≈ [5 −1; −1 −1][0.98715; −0.15977] ≈ [5.09552; −0.82738]
The dominant unit eigenvector is ≈ [0.98709; −0.16018], which x4 approximates to three decimal place accuracy. λ(4) approximates λ to four decimal place accuracy.

5. The characteristic equation of A is λ^2 − 6λ − 4 = 0, so the eigenvalues of A are λ = 3 ± √13. λ = 3 + √13 ≈ 6.60555 is the dominant eigenvalue with corresponding scaled eigenvector
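The normalized iteration of Exercise 3 is easy to sketch in code. The snippet below (an illustration under the exercise's data, not the manual's worksheet) rescales each iterate to a unit vector and reads off the dominant eigenvalue from the Rayleigh quotient.

```python
import numpy as np

# Power method with Euclidean rescaling, as in Exercise 3 of Section 9.2
A = np.array([[5.0, -1.0], [-1.0, -1.0]])
x = np.array([1.0, 0.0])                 # starting vector x0
for _ in range(20):
    Ax = A @ x
    x = Ax / np.linalg.norm(Ax)          # rescale to a unit vector each step
lam = (A @ x) @ x                        # Rayleigh quotient estimate of lambda

# exact dominant eigenvalue: 2 + sqrt(10) ≈ 5.16228
```

Twenty iterations are far more than the four worked above; the extra steps show that the estimate settles on 2 + √10 once the subdominant component has decayed.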

[(2 − √13)/3; 1] ≈ [−0.53518; 1].
Ax0 = [1 −3; −3 5][1; 1] = [−2; 2]
x1 = Ax0/max(Ax0) = [−1; 1]
λ(1) = (Ax1 · x1)/(x1 · x1) = 6
Ax1 = [1 −3; −3 5][−1; 1] = [−4; 8]
x2 = Ax1/max(Ax1) = [−0.5; 1]
λ(2) = (Ax2 · x2)/(x2 · x2) = 6.6
Ax2 = [1 −3; −3 5][−0.5; 1] = [−3.5; 6.5]
x3 = Ax2/max(Ax2) ≈ [−0.53846; 1]
λ(3) = (Ax3 · x3)/(x3 · x3) ≈ 6.60550
Ax3 ≈ [1 −3; −3 5][−0.53846; 1] ≈ [−3.53846; 6.61538]
x4 = Ax3/max(Ax3) ≈ [−0.53488; 1]
λ(4) = (Ax4 · x4)/(x4 · x4) ≈ 6.60555

7. (a) Ax0 = [2 −1; −1 2][1; 0] = [2; −1]
x1 = Ax0/max(Ax0) = [1; −0.5]
Ax1 = [2 −1; −1 2][1; −0.5] = [2.5; −2]
x2 = Ax1/max(Ax1) = [1; −0.8]
Ax2 = [2 −1; −1 2][1; −0.8] = [2.8; −2.6]
x3 = Ax2/max(Ax2) ≈ [1; −0.929]

(b) λ(1) = (Ax1 · x1)/(x1 · x1) = 2.8
λ(2) = (Ax2 · x2)/(x2 · x2) ≈ 2.976
λ(3) = (Ax3 · x3)/(x3 · x3) ≈ 2.997

(c) The characteristic polynomial of A is λ^2 − 4λ + 3 = (λ − 3)(λ − 1), so the dominant eigenvalue of A is λ = 3, with dominant eigenvector [1; −1].

(d) The percentage error is (3 − 2.997)/3 = 0.001, or 0.1%.

9. By Formula (10),
x5 = A^5x0/max(A^5x0) ≈ [0.99180; 1].
Thus λ(5) = (Ax5 · x5)/(x5 · x5) ≈ 2.99993.

13. (a) Starting with x0 = [1; 0; 0], it takes 8 iterations.
x1 ≈ [0.229; 0.668; 0.668], λ(1) ≈ 7.632
x2 ≈ [0.507; 0.320; 0.800], λ(2) ≈ 9.968
x3 ≈ [0.380; 0.197; 0.904], λ(3) ≈ 10.622
x4 ≈ [0.344; 0.096; 0.934], λ(4) ≈ 10.827
x5 ≈ [0.317; 0.044; 0.948], λ(5) ≈ 10.886
x6 ≈ [0.302; 0.016; 0.953], λ(6) ≈ 10.903
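Exercises 5 and 7 use the other common scaling: dividing each iterate by its largest entry so that max(x) = 1. A sketch of that variant for Exercise 7's matrix (illustrative only, not from the manual):

```python
import numpy as np

# Power method with max-entry scaling, as in Exercise 7 of Section 9.2
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
x = np.array([1.0, 0.0])                        # starting vector x0
for _ in range(10):
    Ax = A @ x
    x = Ax / Ax[np.argmax(np.abs(Ax))]          # scale so the largest-magnitude entry is 1
lam = (A @ x) @ x / (x @ x)                     # Rayleigh quotient lambda^(k)

# dominant eigenvalue of A is 3, with eigenvector proportional to (1, -1)
```

The first two passes of this loop reproduce x1 = (1, −0.5) and x2 = (1, −0.8) from part (a); either scaling converges to the same eigen-direction, differing only in how the iterate is normalized.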

x7 ≈ [0.294; 0.002; 0.956], λ(7) ≈ 10.908
x8 ≈ [0.290; −0.006; 0.957], λ(8) ≈ 10.909

(b) Starting with x0 = [1; 0; 0; 0], it takes 8 iterations.
x1 ≈ [0.577; 0; 0.577; 0.577], λ(1) ≈ 6.333
x2 ≈ [0.249; 0; 0.498; 0.830], λ(2) ≈ 8.062
x3 ≈ [0.193; 0.041; 0.376; 0.905], λ(3) ≈ 8.382
x4 ≈ [0.175; 0.073; 0.305; 0.933], λ(4) ≈ 8.476
x5 ≈ [0.167; 0.091; 0.266; 0.945], λ(5) ≈ 8.503
x6 ≈ [0.162; 0.101; 0.245; 0.951], λ(6) ≈ 8.511
x7 ≈ [0.159; 0.107; 0.234; 0.953], λ(7) ≈ 8.513
x8 ≈ [0.158; 0.110; 0.228; 0.954], λ(8) ≈ 8.513

Section 9.3

Exercise Set 9.3

1. The row sums of A are h0 = [1; 2; 2]. The row sums of A^T are a0 = [2; 0; 3].

3. Aa0 = [0 0 1; 1 0 1; 1 0 1][2; 0; 3] = [3; 5; 5]
h1 = Aa0/‖Aa0‖ ≈ [0.39057; 0.65094; 0.65094]
A^T h1 ≈ [1.30189; 0; 1.69246]
a1 = A^T h1/‖A^T h1‖ ≈ [0.60971; 0; 0.79262]

5. A^T A = [2 1 0 0; 1 2 0 0; 0 0 1 0; 0 0 0 0]
The column sums of A are [2; 2; 1; 0], so we start with a0 = (1/3)[2; 2; 1; 0].
a1 = A^T Aa0/‖A^T Aa0‖ ≈ [0.70225; 0.70225; 0.11704; 0]
a2 = A^T Aa1/‖A^T Aa1‖ ≈ [0.70656; 0.70656; 0.03925; 0]

a3 = A^T Aa2/‖A^T Aa2‖ ≈ [0.70705; 0.70705; 0.01309; 0]
a4 = A^T Aa3/‖A^T Aa3‖ ≈ [0.70710; 0.70710; 0.00436; 0]
a5 = A^T Aa4/‖A^T Aa4‖ ≈ [0.70711; 0.70711; 0.00145; 0]
a6 = A^T Aa5/‖A^T Aa5‖ ≈ [0.70711; 0.70711; 0.00048; 0]
From a6, it is apparent that sites 1 and 2 are tied for the most authority, while sites 3 and 4 are irrelevant.

7. A^T A = [1 0 0 0 1; 0 3 2 1 0; 0 2 2 1 0; 0 1 1 1 0; 1 0 0 0 2]
The column sums of A are [1; 3; 2; 1; 2], so we start with
a0 = (1/√19)[1; 3; 2; 1; 2] ≈ [0.22942; 0.68825; 0.45883; 0.22942; 0.45883].
a1 = A^T Aa0/‖A^T Aa0‖ ≈ [0.15250; 0.71166; 0.55916; 0.30500; 0.25416]
a2 = A^T Aa1/‖A^T Aa1‖ ≈ [0.08327; 0.72861; 0.58289; 0.32267; 0.13531]
a3 = A^T Aa2/‖A^T Aa2‖ ≈ [0.04370; 0.73455; 0.58889; 0.32670; 0.07075]
a4 = A^T Aa3/‖A^T Aa3‖ ≈ [0.02273; 0.73630; 0.59045; 0.32766; 0.03677]
a5 = A^T Aa4/‖A^T Aa4‖ ≈ [0.01179; 0.73679; 0.59086; 0.32790; 0.01908]
a6 = A^T Aa5/‖A^T Aa5‖ ≈ [0.00612; 0.73693; 0.59097; 0.32796; 0.00990]
From a6, it is apparent that site 2 has the most authority, followed by sites 3 and 4, while sites 1 and 5 are irrelevant.

Section 9.4

Exercise Set 9.4

1. (a) For n = 1000 = 10^3, the number of flops for both phases is
(2/3)(10^3)^3 + (3/2)(10^3)^2 − (7/6)(10^3) = 668,165,500,
which is 0.6681655 gigaflop, so it will take 0.6681655 × 10^{−1} ≈ 0.067 second.

(b) n = 10,000 = 10^4:
(2/3)(10^4)^3 + (3/2)(10^4)^2 − (7/6)(10^4) = 666,816,655,000 flops, or 666.816655 gigaflops.
The time is about 66.68 seconds.

(c) n = 100,000 = 10^5:
(2/3)(10^5)^3 + (3/2)(10^5)^2 − (7/6)(10^5) ≈ 666,682 × 10^9 flops, or 666,682 gigaflops.
The time is about 66,668 seconds, which is about 18.5 hours.
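The hub/authority updates used in Exercises 3 through 7 alternate h = Aa/‖Aa‖ and a = A^T h/‖A^T h‖. One round of that update, sketched for Exercise 3's adjacency matrix (an illustration, not the manual's tabulation):

```python
import numpy as np

# Adjacency matrix and starting authority vector from Exercise 3 of Section 9.3
A = np.array([[0.0, 0, 1], [1, 0, 1], [1, 0, 1]])
a0 = np.array([2.0, 0, 3])           # row sums of A^T

h1 = A @ a0
h1 = h1 / np.linalg.norm(h1)         # updated (normalized) hub vector
a1 = A.T @ h1
a1 = a1 / np.linalg.norm(a1)         # updated (normalized) authority vector
```

Composing the two steps gives a1 proportional to A^T A a0, which is why the later exercises iterate directly with A^T A: repeating the update is just the power method applied to A^T A.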

3. n = 10,000 = 10^4

(a) (2/3)n^3 ≈ (2/3)(10^12) ≈ 666.67 × 10^9
666.67 gigaflops are required, which will take 666.67/70 ≈ 9.52 seconds.

(b) n^2 ≈ 10^8 = 0.1 × 10^9
0.1 gigaflop is required, which will take about 0.0014 second.

(c) This is the same as part (a); about 9.52 seconds.

(d) 2n^3 ≈ 2 × 10^12 = 2000 × 10^9
2000 gigaflops are required, which will take about 28.57 seconds.

5. (a) n = 100,000 = 10^5
(2/3)n^3 ≈ (2/3) × 10^15 ≈ 0.667 × 10^15 = 6.67 × 10^5 × 10^9
Thus, the forward phase would require about 6.67 × 10^5 seconds.
n^2 = 10^10 = 10 × 10^9
The backward phase would require about 10 seconds.

(b) n = 10,000 = 10^4
(2/3)n^3 ≈ (2/3) × 10^12 ≈ 0.667 × 10^12 ≈ 6.67 × 10^2 × 10^9
About 667 gigaflops are required, so the computer would have to execute 2(667) = 1334 gigaflops per second.

7. Multiplying each of the n^2 entries of A by c requires n^2 flops.

9. Let C = [c_ij] = AB. Computing c_ij requires first multiplying each of the n entries a_ik by the corresponding entry b_kj, which requires n flops. Then the n terms a_ik b_kj must be summed, which requires n − 1 flops. Thus, each of the n^2 entries in AB requires 2n − 1 flops, for a total of n^2(2n − 1) = 2n^3 − n^2 flops.
Note that adding two numbers requires 1 flop, adding three numbers requires 2 flops, and in general, n − 1 flops are required to add n numbers.

Section 9.5

Exercise Set 9.5

1. The characteristic polynomial of
A^T A = [1; 2; 0][1 2 0] = [1 2 0; 2 4 0; 0 0 0]
is λ^2(λ − 5); thus the eigenvalues of A^T A are λ1 = 5 and λ2 = 0, and σ1 = √5 and σ2 = 0 are singular values of A.

3. The eigenvalues of
A^T A = [1 2; −2 1][1 −2; 2 1] = [5 0; 0 5]
are λ1 = 5 and λ2 = 5 (i.e., λ = 5 is an eigenvalue of multiplicity 2); thus the singular value of A is σ1 = √5.

5. The only eigenvalue of
A^T A = [1 1; −1 1][1 −1; 1 1] = [2 0; 0 2]
is λ = 2 (multiplicity 2), and the vectors v1 = [1; 0] and v2 = [0; 1] form an orthonormal basis for the eigenspace (which is all of R^2). The singular values of A are σ1 = √2 and σ2 = √2. We have
u1 = (1/σ1)Av1 = (1/√2)[1 −1; 1 1][1; 0] = [1/√2; 1/√2]
and
u2 = (1/σ2)Av2 = (1/√2)[1 −1; 1 1][0; 1] = [−1/√2; 1/√2].
This results in the following singular value decomposition of A:
A = UΣV^T = [1/√2 −1/√2; 1/√2 1/√2][√2 0; 0 √2][1 0; 0 1]
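The flop counts in Exercises 1 and 9 of Section 9.4 can be checked with exact integer arithmetic. The sketch below (plain arithmetic, no assumptions beyond the formulas above) evaluates both counting formulas.

```python
def gauss_flops(n):
    """Flops for both phases, Exercise 1: (2/3)n^3 + (3/2)n^2 - (7/6)n.
    Written over a common denominator so the division is exact."""
    return (4 * n**3 + 9 * n**2 - 7 * n) // 6

def matmul_flops(n):
    """Flops to form AB for n x n matrices, Exercise 9: n^2 entries at 2n - 1 flops each."""
    return n * n * (2 * n - 1)

total = gauss_flops(1000)   # the 668,165,500 flops quoted in Exercise 1(a)
```

The `// 6` division is exact for every integer n, since 4n^3 + 9n^2 − 7n is always divisible by both 2 and 3; that is why the textbook's formula always yields a whole number of flops.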

7. The eigenvalues of
A^T A = [4 0; 6 4][4 6; 0 4] = [16 24; 24 52]
are λ1 = 64 and λ2 = 4, with corresponding unit eigenvectors v1 = [1/√5; 2/√5] and v2 = [−2/√5; 1/√5], respectively. The singular values of A are σ1 = 8 and σ2 = 2. We have
u1 = (1/σ1)Av1 = (1/8)[4 6; 0 4][1/√5; 2/√5] = [2/√5; 1/√5], and
u2 = (1/σ2)Av2 = (1/2)[4 6; 0 4][−2/√5; 1/√5] = [−1/√5; 2/√5].
This results in the following singular value decomposition:
A = UΣV^T = [2/√5 −1/√5; 1/√5 2/√5][8 0; 0 2][1/√5 2/√5; −2/√5 1/√5]

9. The eigenvalues of
A^T A = [−2 −1 2; 2 1 −2][−2 2; −1 1; 2 −2] = [9 −9; −9 9]
are λ1 = 18 and λ2 = 0, with corresponding unit eigenvectors v1 = [−1/√2; 1/√2] and v2 = [1/√2; 1/√2], respectively. The only nonzero singular value of A is σ1 = √18 = 3√2, and we have
u1 = (1/σ1)Av1 = (1/(3√2))[−2 2; −1 1; 2 −2][−1/√2; 1/√2] = [2/3; 1/3; −2/3].
We must choose the vectors u2 and u3 so that {u1, u2, u3} is an orthonormal basis for R^3. A possible choice is
u2 = [1/√2; 0; 1/√2] and u3 = [√2/6; −2√2/3; −√2/6].
This results in the following singular value decomposition:
A = UΣV^T = [2/3 1/√2 √2/6; 1/3 0 −2√2/3; −2/3 1/√2 −√2/6][3√2 0; 0 0; 0 0][−1/√2 1/√2; 1/√2 1/√2]
Note: The singular value decomposition is not unique. It depends on the choice of the (extended) orthonormal basis for R^3. This is just one possibility.

11. The eigenvalues of
A^T A = [1 1 −1; 0 1 1][1 0; 1 1; −1 1] = [3 0; 0 2]
are λ1 = 3 and λ2 = 2, with corresponding unit eigenvectors v1 = [1; 0] and v2 = [0; 1], respectively. The singular values of A are σ1 = √3 and σ2 = √2. We have
u1 = (1/σ1)Av1 = (1/√3)[1 0; 1 1; −1 1][1; 0] = [1/√3; 1/√3; −1/√3]
and
u2 = (1/σ2)Av2 = (1/√2)[1 0; 1 1; −1 1][0; 1] = [0; 1/√2; 1/√2].
We choose u3 = [2/√6; −1/√6; 1/√6] so that {u1, u2, u3} is an orthonormal basis for R^3. This results in the following singular value decomposition:
A = UΣV^T = [1/√3 0 2/√6; 1/√3 1/√2 −1/√6; −1/√3 1/√2 1/√6][√3 0; 0 √2; 0 0][1 0; 0 1]
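The hand computations above can be cross-checked against a library SVD. The sketch below (a verification aid, not the manual's method) confirms the singular values of Exercise 7's matrix.

```python
import numpy as np

# Exercise 7 of Section 9.5: A = [4 6; 0 4]
A = np.array([[4.0, 6.0], [0.0, 4.0]])
sigma = np.linalg.svd(A, compute_uv=False)   # singular values, largest first
# expected: sigma = (8, 2), the square roots of the eigenvalues 64 and 4 of A^T A
```

Note that the U and V factors a library returns may differ from those above by sign choices; as the exercise solutions point out, the decomposition is not unique, only the singular values are.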

True/False 9.5

(a) False; if A is an m × n matrix, then A^T is an n × m matrix, and A^T A is an n × n matrix.

(b) True

(c) False; A^T A may have eigenvalues that are 0.

(d) False; A would have to be symmetric to be orthogonally diagonalizable.

(e) True; since A^T A is a symmetric n × n matrix.

(f) False; the eigenvalues of A^T A are the squares of the singular values of A.

(g) True

Section 9.6

Exercise Set 9.6

1. From Exercise 9 in Section 9.5, A has the singular value decomposition
A = [2/3 1/√2 √2/6; 1/3 0 −2√2/3; −2/3 1/√2 −√2/6][3√2 0; 0 0; 0 0][−1/√2 1/√2; 1/√2 1/√2].
Keeping only the blocks corresponding to the nonzero singular value (the first column of U, the 1 × 1 block [3√2] of Σ, and the first row of V^T) gives the reduced singular value decomposition of A:
A = U1Σ1V1^T = [2/3; 1/3; −2/3][3√2][−1/√2 1/√2].

3. From Exercise 11 in Section 9.5, A has the singular value decomposition
A = [1/√3 0 2/√6; 1/√3 1/√2 −1/√6; −1/√3 1/√2 1/√6][√3 0; 0 √2; 0 0][1 0; 0 1].
Keeping only the blocks corresponding to the nonzero singular values gives the reduced singular value decomposition of A:
A = U1Σ1V1^T = [1/√3 0; 1/√3 1/√2; −1/√3 1/√2][√3 0; 0 √2][1 0; 0 1].

5. The reduced singular value expansion of A is
A = 3√2 [2/3; 1/3; −2/3][−1/√2 1/√2].

7. The reduced singular value expansion of A is
A = √3 [1/√3; 1/√3; −1/√3][1 0] + √2 [0; 1/√2; 1/√2][0 1].

9. A rank 100 approximation of A requires storage space for 100(200 + 500 + 1) = 70,100 numbers, while A has 200(500) = 100,000 entries.

True/False 9.6

(a) True

(b) True

(c) False; V1 has size n × k, so V1^T has size k × n.

Chapter 9 Supplementary Exercises

1. Reduce A to upper triangular form.
[−6 2; 6 0] → [−3 1; 6 0] → [−3 1; 0 2] = U
The multipliers used were 1/2 and 2, so L = [2 0; −2 1] and A = [2 0; −2 1][−3 1; 0 2].

3. Reduce A to upper triangular form.
[2 4 6; 1 4 7; 1 3 7] → [1 2 3; 1 4 7; 1 3 7] → [1 2 3; 0 2 4; 1 3 7] → [1 2 3; 0 2 4; 0 1 4] → [1 2 3; 0 1 2; 0 1 4] → [1 2 3; 0 1 2; 0 0 2] → [1 2 3; 0 1 2; 0 0 1] = U
The multipliers used were 1/2, −1, −1, 1/2, −1, and 1/2,
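The reduced decompositions of Section 9.6 amount to truncating a full SVD. The sketch below (an illustration, not the manual's procedure) keeps the first k columns of U, the leading k singular values, and the first k rows of V^T, and counts storage as in Exercise 9 of that section.

```python
import numpy as np

def reduced_svd(A, k):
    """Rank-k truncation of the SVD: U1 (m x k), sigma1 (k,), V1^T (k x n)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]

def storage(m, n, k):
    """Numbers stored for a rank-k reduced SVD of an m x n matrix:
    k columns of U1, k singular values, k rows of V1^T."""
    return k * (m + n + 1)

count = storage(200, 500, 100)   # 70,100 vs. the 100,000 entries of A itself

# sanity check: a rank-1 matrix is reproduced exactly by its rank-1 truncation
A = np.outer([1.0, 2.0, 3.0], [4.0, 5.0])
U1, s1, Vt1 = reduced_svd(A, 1)
A1 = (U1 * s1) @ Vt1
```

When the truncation rank equals the rank of A, as in Exercises 1, 3, and 11, the reduced product reproduces A exactly; for smaller k it gives the best rank-k approximation.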

so L = [2 0 0; 1 2 0; 1 1 2] and
A = [2 0 0; 1 2 0; 1 1 2][1 2 3; 0 1 2; 0 0 1].

5. (a) The characteristic equation of A is λ^2 − 4λ + 3 = (λ − 3)(λ − 1) = 0, so the dominant eigenvalue of A is λ1 = 3, with corresponding positive unit eigenvector
v = [1/√2; 1/√2] ≈ [0.7071; 0.7071].

(b) Ax0 = [2 1; 1 2][1; 0] = [2; 1]
x1 = Ax0/‖Ax0‖ = (1/√5)[2; 1] ≈ [0.8944; 0.4472]
x2 = Ax1/‖Ax1‖ ≈ [0.7809; 0.6247]
x3 = Ax2/‖Ax2‖ ≈ [0.7328; 0.6805]
x4 = Ax3/‖Ax3‖ ≈ [0.7158; 0.6983]
x5 = Ax4/‖Ax4‖ ≈ [0.7100; 0.7042]
x5 ≈ [0.7100; 0.7042], as compared to v ≈ [0.7071; 0.7071].

(c) Ax0 = [2; 1]
x1 = Ax0/max(Ax0) = [1; 0.5]
x2 = Ax1/max(Ax1) = [1; 0.8]
x3 = Ax2/max(Ax2) ≈ [1; 0.9286]
x4 = Ax3/max(Ax3) ≈ [1; 0.9756]
x5 = Ax4/max(Ax4) ≈ [1; 0.9918]
x5 ≈ [1; 0.9918], as compared to the exact eigenvector v = [1; 1].

7. The Rayleigh quotients will converge to the dominant eigenvalue λ4 = −8.1. However, since the ratio |λ4|/|λ1| = 8.1/8 = 1.0125 is very close to 1, the rate of convergence is likely to be quite slow.

9. The eigenvalues of A^T A = [2 2; 2 2] are λ1 = 4 and λ2 = 0, with corresponding unit eigenvectors v1 = [−1/√2; −1/√2] and v2 = [1/√2; −1/√2], respectively. The only nonzero singular value of A is σ1 = √4 = 2, and we have
u1 = (1/σ1)Av1 = (1/2)[1 1; 0 0; 1 1][−1/√2; −1/√2] = [−1/√2; 0; −1/√2].
We must choose the vectors u2 and u3 so that {u1, u2, u3} is an orthonormal basis for R^3. A possible choice is u2 = [0; 1; 0] and u3 = [1/√2; 0; −1/√2]. This results in the following singular value decomposition:
A = UΣV^T = [−1/√2 0 1/√2; 0 1 0; −1/√2 0 −1/√2][2 0; 0 0; 0 0][−1/√2 −1/√2; 1/√2 −1/√2]

11. A has rank 2; thus U1 = [u1 u2] and V1^T = [v1^T; v2^T], and the reduced singular value decomposition of A is
A = U1Σ1V1^T = [1/2 1/2; 1/2 −1/2; 1/2 −1/2; 1/2 1/2][24 0; 0 12][2/3 −1/3 2/3; 2/3 2/3 −1/3].
