
Chapter 1

Problems

1.2 (1) Inconsistent.
(2) (x_1, x_2, x_3, x_4) = (−1 − 4t, 6 − 2t, 2 − 3t, t) for any t ∈ R.

1.3 (1) (x, y, z) = (t, −t, t). (3) (w, x, y, z) = (2, 0, 1, 3).

1.4 (1) b_1 + b_2 − b_3 = 0. (2) For any b_i's.

1.7 a = −17/2, b = 13/2, c = 13/4, d = −4.

1.9 Consider the matrices: A = [2 4; 3 6], B = [2 1; 3 4], C = [8 7; 0 1].

1.10 Compare the diagonal entries of AA^T and A^T A.

1.12 (1) Infinitely many for a = 4, exactly one for a ≠ ±4, and none for a = −4.
(2) Infinitely many for a = 2, none for a = −3, and exactly one otherwise.

1.14 (3) I = I^T = (AA^{−1})^T = (A^{−1})^T A^T means by definition (A^T)^{−1} = (A^{−1})^T.

1.17 Any permutation of n objects can be obtained by a finite number of interchanges of two objects.

1.21 Consider the case that some d_i is zero.

1.22 x = 2, y = 3, z = 1.

1.23 No, in general. Yes, if the system is consistent.

1.24 L = [1 0 0; −1 1 0; 0 −1 1], U = [1 −1 0; 0 1 −1; 0 0 1].

1.25 (1) Consider (i, j)-entries of AB for i < j.

(2) A can be written as a product of lower triangular elementary matrices.

1.26 L = [1 0 0; −1/2 1 0; 0 −2/3 1], D = [2 0 0; 0 3/2 0; 0 0 4/3], U = [1 −1/2 0; 0 1 −2/3; 0 0 1].
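As a sanity check, the three factors in 1.26 can be multiplied back together with exact arithmetic. The product shown in the assertion is what the factors yield, presumably the matrix being factored; this is a minimal pure-Python sketch, not the book's method:

```python
# Rebuild L·D·U from the factors in 1.26 with exact fractions and check
# that the product is the symmetric tridiagonal matrix [2 -1 0; -1 2 -1; 0 -1 2].
from fractions import Fraction as F

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

L = [[F(1), F(0), F(0)], [F(-1, 2), F(1), F(0)], [F(0), F(-2, 3), F(1)]]
D = [[F(2), F(0), F(0)], [F(0), F(3, 2), F(0)], [F(0), F(0), F(4, 3)]]
U = [[F(1), F(-1, 2), F(0)], [F(0), F(1), F(-2, 3)], [F(0), F(0), F(1)]]

A = matmul(matmul(L, D), U)
assert A == [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
```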

1.27 There are four possibilities for P.

1.28 (1) I_1 = 0.5, I_2 = 6, I_3 = 0.55. (2) I_1 = 0, I_2 = I_3 = 1, I_4 = I_5 = 5.

1.30 x = k[0.35; 0.40; 0.25], for k > 0.

1.31 A = [0.0 0.1 0.8; 0.4 0.7 0.1; 0.5 0.0 0.1] with d = [90; 10; 30].


Selected Answers and Hints 453

Exercises 1.10

1.1 Row-echelon forms are A, B, D, F. Reduced row-echelon forms are A, B, F.

1.2 (1) [1 −3 2 1 2; 0 0 1 −1/4 3/4; 0 0 0 0 0; 0 0 0 0 0].

1.3 (1) [1 −3 0 3/2 1/2; 0 0 1 −1/4 3/4; 0 0 0 0 0; 0 0 0 0 0].

1.4 (1) x_1 = 0, x_2 = 1, x_3 = −1, x_4 = 2. (2) x = 17/2, y = 3, z = −4.

1.5 (1) and (2).

1.6 For any b_i's.

1.7 b_1 − 2b_2 + 5b_3 ≠ 0.

1.8 (1) Take x to be the transpose of each row vector of A.

1.10 Try it with several kinds of diagonal matrices for B.

1.11 A^k = [1 2k 3k(k−1); 0 1 3k; 0 0 1].

1.13 See Problem 1.11.

1.15 (1) A^{−1}AB = B. (2) A^{−1}AC = C = A + I.

1.16 a = 0, c^{−1} = b ≠ 0.

1.17 A^{−1} = [1 −1 0 0; 0 1/2 −1/2 0; 0 0 1/3 −1/3; 0 0 0 1/4], B^{−1} = [13/8 −1/2 −1/8; −15/8 1/2 3/8; 5/4 0 −1/4].

1.18 A^{−1} = (1/15)[8 −19 2; 1 −23 4; 4 −2 1].

1.21 (1) x = A^{−1}b = [1/3 1/6 1/6; −4/3 −5/3 4/3; −1/3 −2/3 1/3][2; 5; 7] = [8/3; −5/3; −5/3].
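The matrix-vector product in 1.21 is easy to verify with exact fractions; a short sketch:

```python
# Check the product A^{-1} b from 1.21 with exact rational arithmetic.
from fractions import Fraction as F

Ainv = [[F(1, 3), F(1, 6), F(1, 6)],
        [F(-4, 3), F(-5, 3), F(4, 3)],
        [F(-1, 3), F(-2, 3), F(1, 3)]]
b = [F(2), F(5), F(7)]

# x_i = sum_j Ainv[i][j] * b[j]
x = [sum(Ainv[i][j] * b[j] for j in range(3)) for i in range(3)]
assert x == [F(8, 3), F(-5, 3), F(-5, 3)]
```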

1.22 (1) A = [1 0; 4 1][2 0; 0 3][1 1/2; 0 1] = LDU. (2) L = A, D = U = I.

1.23 (1) A = [1 0 0; 2 1 0; 3 1 1][1 0 0; 0 2 0; 0 0 −1][1 2 3; 0 1 1; 0 0 1],
(2) [1 0; b/a 1][a 0; 0 d − b^2/a][1 b/a; 0 1].

1.24 c = [2 −1 3]^T, x = [4 2 3]^T.

1.25 (2) A = [1 0 0; 1 1 0; 1 1 1][1 0 0; 0 3 0; 0 0 2][1 1 1; 0 1 4/3; 0 0 1].


1.26 (1) (A^k)^{−1} = (A^{−1})^k. (2) A^{n−1} = 0 if A ∈ M_{n×n}.
(3) (I − A)(I + A + ⋯ + A^{k−1}) = I − A^k.

1.27 (1) A = [1 1; 0 0]. (2) A = A^{−1}A^2 = A^{−1}A = I.

1.28 Exactly seven of them are true.
(8) If AB has the (right) inverse C, then A^{−1} = BC.
(10) Consider a permutation matrix [0 1; 1 0].

Chapter 2

Problems

2.2 (2), (4).

2.3 (1), (2), (4).

2.5 See Problem 1.11. For any square matrix A, A = (A + A^T)/2 + (A − A^T)/2.

2.6 Note that any vector v in W is of the form a_1x_1 + a_2x_2 + ⋯ + a_mx_m, which is a vector in U.

2.7 tr(AB −BA) = 0.

2.10 Linearly dependent.

2.12 Any basis for W must be a basis for V already, by Corollary 2.13.

2.13 (1) n − 1, (2) n(n+1)/2, (3) n(n−1)/2.

2.15 63a + 39b −13c + 5d = 0.

2.17 If b_1, …, b_n denote the column vectors of B, then AB = [Ab_1 ⋯ Ab_n].

2.18 Consider the matrix A from Example 2.19.

2.19 (1) rank = 3, nullity = 1. (2) rank = 2, nullity = 2.

2.20 Ax = b has a solution if and only if b ∈ C(A).

2.21 A^{−1}(AB) = B implies rank B = rank A^{−1}(AB) ≤ rank(AB).

2.22 By (2) of Theorem 2.24 and Corollary 2.21, a matrix A of rank r must have

an invertible submatrix C of rank r. By (1) of the same theorem, the rank

of C must be the largest.

2.24 dim(V +W) = 4 and dim(V ∩ W) = 1.

2.25 A basis for V is {(1, 0, 0, 0), (0, −1, 1, 0), (0, −1, 0, 1)}, for W: {(−1, 1, 0, 0), (0, 0, 2, 1)}, and for V ∩ W: {(3, −3, 2, 1)}. Thus, dim(V + W) = 4 means V + W = R^4 and any basis for R^4 works for V + W.

2.28 A = [1 0 0 0; 0 0 2 0; 1 1 1 1; 0 1 2 3], and [a; b; c; d] = A^{−1}[1; 2; 4; 4] = [1; 2; 1; 0].

Exercises 2.11

2.1 Consider 0(1, 1).

2.5 (1), (4).


2.6 No.

2.7 (1) p(x) = −p_1(x) + 3p_2(x) − 2p_3(x).

2.11 {(1, 1, 0), (1, 0, 1)}.

2.12 2.

2.13 Consider {e_j = {a_i}_{i=1}^∞} where a_i = 1 if i = j, and 0 otherwise.

2.14 (1) 0 = c_1Ab_1 + ⋯ + c_pAb_p = A(c_1b_1 + ⋯ + c_pb_p) implies c_1b_1 + ⋯ + c_pb_p = 0 since N(A) = {0}, and this also implies c_i = 0 for all i = 1, …, p since the columns of B are linearly independent.
(2) B has a right inverse. (3) and (4): Look at (1) and (2) above.

2.15 (1) {(−5, 3, 1)}. (2) 3.

2.16 5!, and dependent.

2.17 (1) R(A) = ⟨(1, 2, 0, 3), (0, 0, 1, 2)⟩, C(A) = ⟨(5, 0, 1), (0, 5, 2)⟩, N(A) = ⟨(−2, 1, 0, 0), (−3, 0, −2, 1)⟩.
(2) R(B) = ⟨(1, 1, −2, 2), (0, 2, 1, −5), (0, 0, 0, 1)⟩, C(B) = ⟨(1, −2, 0), (0, 1, 1), (0, 0, 1)⟩, N(B) = ⟨(5, −1, 2, 0)⟩.

2.18 rank = 2 when x = −3, rank = 3 when x ≠ −3.

2.20 Since uv^T = u[v_1 ⋯ v_n] = [v_1u ⋯ v_nu], each column vector of uv^T is of the form v_iu; that is, u spans the column space. Conversely, if A is of rank 1, then the column space is spanned by any one column of A, say the first column u of A, and the remaining columns are of the form v_iu, i = 2, …, n. Take v = [1 v_2 ⋯ v_n]^T. Then one can easily see that A = uv^T.
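The column argument above can be illustrated directly; the vectors u and v below are arbitrary examples, not taken from the exercise:

```python
# Build A = u v^T and check that column j equals v_j * u, so every column
# is a multiple of u and the column space is one-dimensional.
u = [1, 2, 3]
v = [4, 5, 6]
A = [[ui * vj for vj in v] for ui in u]   # outer product u v^T

cols = [[A[i][j] for i in range(3)] for j in range(3)]
for j, col in enumerate(cols):
    assert col == [v[j] * ui for ui in u]  # column j is v_j * u
```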

2.21 Four of them are true.

Chapter 3

Problems

3.1 To show W is a subspace, see Theorem 3.2. Let E_{ij} be the matrix with 1 at the (i, j)-th position and 0 at the others. Let F_k be the matrix with 1 at the (k, k)-th position, −1 at the (n, n)-th position and 0 at the others. Then the set {E_{ij}, F_k : 1 ≤ i ≠ j ≤ n, k = 1, …, n − 1} is a basis for W. Thus dim W = n^2 − 1.

3.2 tr(AB) = Σ_{i=1}^m Σ_{k=1}^n a_{ik}b_{ki} = Σ_{k=1}^n Σ_{i=1}^m b_{ki}a_{ik} = tr(BA).

3.3 [0 1; 1 0], since it simply interchanges the coordinates x and y.

3.4 If yes, (2, 1) = T(−6, −2, 0) = −2T(3, 1, 0) = (−2, −2).

3.5 If a_1v_1 + a_2v_2 + ⋯ + a_kv_k = 0, then 0 = T(a_1v_1 + a_2v_2 + ⋯ + a_kv_k) = a_1w_1 + a_2w_2 + ⋯ + a_kw_k implies a_i = 0 for i = 1, …, k.

3.6 (1) If T(x) = T(y), then S ◦ T(x) = S ◦ T(y) implies x = y. (4) They are invertible.


3.7 (1) T(x) = T(y) if and only if T(x − y) = 0, i.e., x − y ∈ Ker(T) = {0}.
(2) Let {v_1, …, v_n} be a basis for V. If T is one-to-one, then the set {T(v_1), …, T(v_n)} is linearly independent as the proof of Theorem 3.7 shows. Corollary 2.13 shows it is a basis for V. Thus, for any y ∈ V, we can write it as y = Σ_{i=1}^n a_iT(v_i) = T(Σ_{i=1}^n a_iv_i). Set x = Σ_{i=1}^n a_iv_i ∈ V. Then clearly T(x) = y, so that T is onto. If T is onto, then for each i = 1, …, n there exists x_i ∈ V such that T(x_i) = v_i. Then the set {x_1, …, x_n} is linearly independent in V, since, if Σ_{i=1}^n a_ix_i = 0, then 0 = T(Σ_{i=1}^n a_ix_i) = Σ_{i=1}^n a_iT(x_i) = Σ_{i=1}^n a_iv_i implies a_i = 0 for all i = 1, …, n. Thus it is a basis by Corollary 2.13 again. If T(x) = 0 for x = Σ_{i=1}^n a_ix_i ∈ V, then 0 = T(x) = Σ_{i=1}^n a_iT(x_i) = Σ_{i=1}^n a_iv_i implies a_i = 0 for all i = 1, …, n, that is, x = 0. Thus Ker(T) = {0}.

3.8 Use rotation R_{π/3} and reflection [1 0; 0 −1] about the x-axis.

3.9 (1) (5, 2, 3). (2) (2, 3, 0).

3.12 (1) [T]_α = [2 −3 4; 5 −1 2; 4 7 0], [T]_β = [0 7 4; 2 −1 5; 4 −3 2].

3.13 [T]_α^β = [1 2 0 0; 1 0 −3 1; 0 2 3 4].

3.15 [S + T]_α = [3 0 0; 2 2 3; 2 3 3], [T ◦ S]_α = [3 2 0; 3 3 3; 6 5 3].

3.16 [S]_α^β = [1 −1 0; 1 1 0; 0 0 1], [T]_α = [2 3 0; 0 3 6; 0 0 4].

3.17 (2) [T]_α^β = [1 0; 1 1], [T^{−1}]_β^α = [1 0; −1 1].

3.18 [Id]_β^α = (1/2)[0 −1 5; 4 3 −1; 2 1 1], [Id]_α^β = [−2 −3 7; 3 5 −10; 1 1 −2].

3.19 [T]_α = [1 2 1; 0 −1 0; 1 0 4], [T]_β = [1 4 5; −1 −2 −6; 1 1 5].

3.20 Write B = Q^{−1}AQ with some invertible matrix Q.
(1) det B = det(Q^{−1}AQ) = det Q^{−1} det A det Q = det A. (2) tr(B) = tr(Q^{−1}AQ) = tr(QQ^{−1}A) = tr(A) (see Problem 3.2). (3) Use Problem 2.21.

3.22 α* = {f_1(x, y, z) = x − (1/2)y, f_2(x, y, z) = (1/2)y, f_3(x, y, z) = −x + z}.

Exercises 3.10

3.1 (2).

3.2 ax^3 + bx^2 + ax + c.


3.5 (1) Consider the decomposition v = (v + T(v))/2 + (v − T(v))/2.

3.6 (1) {(x, (3/2)x, 2x) ∈ R^3 : x ∈ R}.

3.7 (2) T^{−1}(r, s, t) = ((1/2)r, 2r − s, 7r − 3s − t).

3.8 (1) Since T ◦ S is one-to-one from V into V , T ◦ S is also onto and so T is

onto. Moreover, if S(u) = S(v), then T ◦ S(u) = T ◦ S(v) implies u = v.

Thus, S is one-to-one, and so onto. This implies T is one-to-one. In fact, if

T(u) = T(v), then there exist x and y such that S(x) = u and S(y) = v.

Thus T ◦ S(x) = T ◦ S(y) implies x = y and so u = T(x) = T(y) = v.

3.9 Note that T cannot be one-to-one and S cannot be onto.

3.12 [5 4 −6 18; −4 −3 −2 0; 0 0 1 −12; 0 0 0 1].

3.13 (1) [−1/3 2/3; −5/3 1/3].

3.14 (1) [0 2; 3 −1], (2) [3 −4; 1 5].

3.15 (1) T(1, 0, 0) = (4, 0), T(1, 1, 0) = (1, 3), T(1, 1, 1) = (4, 3).

(2) T(x, y, z) = (4x −2y +z, y + 2z).

3.16 (1) [1 1 1; 0 1 2; 0 0 1], (4) [0 1 0; 0 0 1; 0 0 0].

3.17 (1) P = [0 0 1; 0 1 −1; 1 −1 0], (2) Q = [1 1 1; 1 1 0; 1 0 0] = P^{−1}.

3.18 Use the trace.

3.19 (1) [−7 −33 −13; 4 19 8].

3.20 (2) [5 1; 1 2], (4) [−2/3 1/3 4/3; 2/3 −1/3 −1/3; 7/3 −2/3 −8/3].

3.25 [T]_α = [0 2 1; −1 4 1; 1 0 1] = ([T*]_{α*})^T.

3.26 (1) [1 1 0; −1 0 2]. (2) [T]_α^β = [−3 1 −1; 1 2 1].

3.27 N(T) = {0}, C(T) = ⟨(2, 1, 0, 1), (1, 1, 1, 1), (4, 2, 2, 3)⟩, [T]_α^β = [1 0 2; 1 0 0; −1 0 −1; 1 1 3].

3.29 p_1(x) = 1 + x − (3/2)x^2, p_2(x) = −1/6 + (1/2)x^2, p_3(x) = −1/3 + x − (1/2)x^2.

3.30 Three of them are false.


Chapter 4

Problems

4.4 (1) −27, (2) 0, (3) (1 − x^4)^3.

4.8 (1) −14. (2) 0.

4.9 See Example 4.6, and use mathematical induction on n.

4.10 Find the cofactor expansion along the first row first, and then compute the cofactor expansion along the first column of each n × n submatrix (in the second step, use the proof of Cramer's rule).

4.15 If A = 0, then clearly adjA = 0. Otherwise, use A adjA = (det A)I.

4.16 Use adj A · adj(adj A) = det(adj A) I.

4.17 (1) x_1 = 4, x_2 = 1, x_3 = −2.
(2) x = 10/23, y = 5/6, z = 5/2.

4.18 The solution of the system Id(x) = x is x_i = det C_i / det I = det A.

Exercises 4.5

4.1 k = 0 or 2.

4.2 It is not necessary to compute A^2 or A^3.

4.3 −37.

4.4 (1) det A = (−1)^{n−1}(n − 1). (2) 0.

4.5 −2, 0, 1, 4.

4.6 Consider Σ_{σ∈S_n} a_{1σ(1)} ⋯ a_{nσ(n)}.

4.7 (1) 1, (2) 24.

4.8 (3) x_1 = 1, x_2 = −1, x_3 = 2, x_4 = −2.

4.9 (2) x = (3, 0, 4/11)^T.

4.10 k = 0 or ±1.

4.11 x = (−5, 1, 2, 3)^T.

4.12 x = 3, y = −1, z = 2.

4.13 (3) A_{11} = −2, A_{12} = 7, A_{13} = −8, A_{33} = 3.

4.16 A^{−1} = (1/72)[−3 5 9; 18 −6 18; 6 14 −18].

4.17 (1) adj(A) = [2 −7 −6; 1 −7 −3; −4 7 5], det(A) = −7, det(adj(A)) = 49, A^{−1} = −(1/7) adj(A).
(2) adj(A) = [1 1 −1; −10 4 2; 7 −3 −1], det A = 2, det(adj(A)) = 4, A^{−1} = (1/2) adj(A).

4.18 Note that (AB)^{−1} = B^{−1}A^{−1} and A^{−1} = adj(A)/det A. (The reader may also try to prove this equality for non-invertible matrices.)
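The identity behind this hint, adj(AB) = adj(B) adj(A), can be checked numerically from the cofactor definition of the adjugate; the 3×3 matrices below are made-up test data, not from the exercise:

```python
# Check adj(AB) = adj(B) adj(A) on a small integer example, with the
# adjugate computed as the transpose of the cofactor matrix.
def det2(m):
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

def adj3(M):
    # adj(M)[i][j] = cofactor C_{ji} = (-1)^{i+j} det(minor deleting row j, col i)
    def minor(r, c):
        rows = [M[i] for i in range(3) if i != r]
        return [[row[j] for j in range(3) if j != c] for row in rows]
    return [[(-1)**(i + j) * det2(minor(j, i)) for j in range(3)]
            for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[1, 2, 0], [0, 1, 3], [4, 0, 1]]
B = [[2, 1, 1], [0, 1, 0], [1, 0, 2]]
assert adj3(matmul(A, B)) == matmul(adj3(B), adj3(A))
```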

4.19 If we set A = [1 3; 3 1], then the area is (1/2)|det A| = 4.
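The area computation in 4.19 is a one-liner in code; half the absolute determinant of the 2×2 matrix whose rows are the two edge vectors (here (1, 3) and (3, 1), as in the answer):

```python
# Area of the parallelogram half: (1/2)|det [1 3; 3 1]| = (1/2)|1 - 9| = 4.
a, b, c, d = 1, 3, 3, 1
area = abs(a*d - b*c) / 2
assert area == 4
```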


4.20 If we set A = [1 2; 1 2; 2 1], then the area is (1/2)√|det(A^T A)| = 3√2/2.

4.21 Use det A = Σ_{σ∈S_n} sgn(σ)a_{1σ(1)} ⋯ a_{nσ(n)}. In fact, suppose B is a k×k matrix, and a permutation σ ∈ S_n sends some number i ≤ k into {k+1, …, n}; then there is an ℓ ≥ k+1 such that σ(ℓ) ≤ k. Thus a_{ℓσ(ℓ)} = 0, and so sgn(σ)a_{1σ(1)} ⋯ a_{ℓσ(ℓ)} ⋯ a_{nσ(n)} = 0. Therefore the only terms that do not vanish are those for σ : {1, …, k} → {1, …, k}. But then σ : {k+1, …, n} → {k+1, …, n}, i.e., σ = σ_1σ_2 with sgn(σ) = sgn(σ_1)sgn(σ_2). Hence,
det A = Σ_{σ_1∈S_k} sgn(σ_1)a_{1σ(1)} ⋯ a_{kσ(k)} Σ_{σ_2∈S_{n−k}} sgn(σ_2)a_{k+1,σ(k+1)} ⋯ a_{nσ(n)} = det B det D.

4.22 Multiply [I 0; B I] on the right.

4.23 vol(T(B)) = |det(A)| vol(B), for the matrix representation A of T. Clearly, C = AB, so vol(T(C)) = |det(AB)| = |det A||det B| = |det A| vol(T(B)).

4.24 Exactly seven of them are true.
(4) (cI_n − A)^T = cI_n − A^T.
(10) See Exercise 2.20: det(uv^T) = v_1 ⋯ v_n det([u ⋯ u]) = 0.
(13) Consider [1 0 1; 1 1 0; 0 1 1].

Chapter 5

Problems

5.1 Note that ⟨x, x⟩ = ax_1^2 + 2cx_1x_2 + bx_2^2 = a(x_1 + (c/a)x_2)^2 + ((ab − c^2)/a)x_2^2 > 0 for all x = (x_1, x_2) ≠ 0. For x = (1, 0), we get a > 0. For x = (−c/a, 1), we get ab − c^2 > 0. The converse is easy from the equation above.

5.2 ⟨x, y⟩^2 = ⟨x, x⟩⟨y, y⟩ if and only if ‖tx + y‖^2 = ⟨x, x⟩t^2 + 2⟨x, y⟩t + ⟨y, y⟩ = 0 has a repeated real root t_0.

5.3 (4) Compute the square of both sides and use Cauchy-Schwarz inequality.

5.5 ⟨f, g⟩ = ∫_0^1 f(x)g(x) dx defines an inner product on C[0, 1]. Use the Cauchy-Schwarz inequality or Problem 5.3.

5.6 (1) (1/√6)(2, 1, −1), (2) (1/√61)(6, 4, −3).

5.7 (1): Orthogonal, (2) and (3): None, (4): Orthonormal.

5.10 {1, √3(2x − 1), √5(6x^2 − 6x + 1)}.

5.12 (1) is just the definition, and use (1) to prove (2).

5.14 Proj_W(p) = (4/3, 5/3, −1/3).

5.16 P^T = (P^T P)^T = P^T P = P, and P = P^T P = P^2.

5.17 For x ∈ R^m, x = ⟨v_1, x⟩v_1 + ⋯ + ⟨v_m, x⟩v_m = (v_1v_1^T)x + ⋯ + (v_mv_m^T)x.


5.18 The null space of the matrix [1 2 1 2; 0 −1 −1 1] is x = t[1 −1 1 0]^T + s[−4 1 0 1]^T for t, s ∈ R.

5.20 R(A)^⊥ = N(A).

5.21 x = (1, −1, 0) +t(2, 1, −1) for any number t.

5.23 For A = [v_1 v_2], the two columns are linearly independent.

5.24 P = (1/3)[2 1 1; 1 2 −1; 1 −1 2].

5.26 −1/6 + x.

5.27 [s_0; v_0; (1/2)g] = x = (A^T A)^{−1}A^T b = [−0.4; 0.35; 16.1].

5.29 (1) r = 1/√2, s = 1/√6, a = −1/√3, b = 1/√3, c = 1/√3.

5.30 Extend {v_1, …, v_m} to an orthonormal basis {v_1, …, v_m, …, v_n}. Then ‖x‖^2 = Σ_{i=1}^m |⟨x, v_i⟩|^2 + Σ_{j=m+1}^n |⟨x, v_j⟩|^2.

5.31 (1) orthogonal. (2) not orthogonal.

5.32 Let A = QR = Q′R′ be two decompositions of A. Then Q^T Q′ = RR′^{−1}, which is an upper triangular and orthogonal matrix. Since (Q^T Q′)^T = (Q^T Q′)^{−1} = (RR′^{−1})^{−1} = R′R^{−1} is both upper and lower triangular, Q^T Q′ is diagonal and orthogonal, so that Q^T Q′ = D = diag[d_i] with d_i = ±1, i.e., Q′ = QD, or u_i′ = ±u_i for each i ≥ 1. Since c_1 = b′_{11}u_1′ = b_{11}u_1 with b′_{11}, b_{11} > 0, and so u_1′, u_1 are unit vectors in the c_1 direction, we have u_1′ = u_1. Assume that u_{j−1}′ = u_{j−1} and u_j′ = −u_j. Then u_j becomes a linear combination of u_1, …, u_{j−1}, since c_j = b_{1j}u_1 + ⋯ + b_{jj}u_j = b′_{1j}u_1′ + ⋯ + b′_{jj}u_j′. Thus, u_j′ = u_j, or d_j = 1, for all j ≥ 1, so that D = Id. Thus we get Q = Q′, and then R = R′ follows.

Exercises 5.12

5.1 Inner products are (2), (4), (5).

5.2 For the last condition of the definition, note that ⟨A, A⟩ = tr(A^T A) = Σ_{i,j} a_{ij}^2 = 0 if and only if a_{ij} = 0 for all i, j.

5.4 (1) k = 3.

5.5 (3) ‖f‖ = ‖g‖ = √(1/2). The angle is 0 if n = m, and π/2 if n ≠ m.

5.6 Use the Cauchy-Schwarz inequality and Problem 5.2 with x = (a_1, …, a_n) and y = (1, …, 1) in (R^n, ·).

5.7 (1) −37/4, √(19/3).
(2) If ⟨h, g⟩ = h(a/3 + b/2 + c) = 0 with h ≠ 0 a constant and g(x) = ax^2 + bx + c, then (a, b, c) is on the plane a/3 + b/2 + c = 0 in R^3.

5.10 (1) (3/2)v_2, (2) (1/2)v_2.

5.12 Orthogonal: (4). Nonorthogonal: (1), (2), (3).


5.16 Use induction on n. If n = 1, then A has only one column c_1 and A^T A = det(A^T A) is simply the square of the length of c_1. Assume the claim is true for n − 1. Let B_{m×(n−1)} be the submatrix of A with the first column c_1 removed so that A = [c_1 B], and let C_{m×n} = [a B], where a = c_1 − p and p = Proj_W(c_1) = a_2c_2 + ⋯ + a_nc_n ∈ W for some a_i's, where W = C(B). Then a is clearly orthogonal to c_2, …, c_k and p. Claim that
det(A^T A) = det(C^T C) = ‖a‖^2 det(B^T B) = ‖a‖^2 vol(T(B))^2 = vol(T(A))^2.
In fact,
det(A^T A) = det [a^T a + p^T p, p^T B; B^T p, B^T B]
= det [a^T a, 0; B^T p, B^T B] + det [p^T p, p^T B; B^T p, B^T B]
= det [a^T a, 0; 0, B^T B] + det([p^T; B^T][p B]).
Since p is a linear combination of the columns of B, one can easily verify that det([p^T; B^T][p B]) = 0, so that det(A^T A) = det(C^T C) = ‖a‖^2 det(B^T B). This also shows that the volume is independent of the choice of c_1 at the beginning.

5.17 Let A = [1 0 0; 0 1 0; 0 2 1; 0 1 2]. Then the volume of the tetrahedron is √(det(A^T A))/3 = 1.

5.19 Ax = b has a solution for every b ∈ R^m if k = m. It has infinitely many solutions if nullity = n − k = n − m > 0.

5.20 The line is a subspace with an orthonormal basis (1/√2)(1, 1), or is the column space of A = (1/√2)[1; 1].

5.21 Find a least squares solution of [1 0; 1 1; 1 2; 1 3][a; b] = [1; 3; 4; 4] for (a, b) in y = a + bx. Then y = x + 3/2.
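The normal equations for this fit can be solved exactly; a sketch with the data from 5.21:

```python
# Solve A^T A [a b]^T = A^T y for the line y = a + b x through the data
# points (0,1), (1,3), (2,4), (3,4), with exact rational arithmetic.
from fractions import Fraction as F

xs = [0, 1, 2, 3]
ys = [1, 3, 4, 4]
n = len(xs)
# entries of A^T A and A^T y for the design matrix A = [1 x_i]
s1, sx, sxx = n, sum(xs), sum(x*x for x in xs)
sy, sxy = sum(ys), sum(x*y for x, y in zip(xs, ys))

det = F(s1*sxx - sx*sx)            # determinant of A^T A
a = F(sxx*sy - sx*sxy) / det       # intercept
b = F(s1*sxy - sx*sy) / det        # slope
assert (a, b) == (F(3, 2), F(1))   # y = 3/2 + x
```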

5.22 Follow Exercise 5.21 with A = [1 −1 1 −1; 1 0 0 0; 1 1 1 1; 1 2 4 8; 1 3 9 27]. Then y = 2x^3 − 4x^2 + 3x − 5.


5.25 (1) Let h(x) = (1/2)(f(x) + f(−x)) and g(x) = (1/2)(f(x) − f(−x)). Then f = h + g.
(2) For f ∈ U and g ∈ V, ⟨f, g⟩ = ∫_{−1}^1 f(x)g(x)dx = −∫_1^{−1} f(−t)g(−t)dt = −∫_{−1}^1 f(t)g(t)dt = −⟨f, g⟩, by the change of variable x = −t.
(3) Expand the length in the inner product.

5.26 A^T A = [1, sin θ cos θ; sin θ cos θ, cos^2 θ] = [1 0; sin θ cos θ, 1][1 0; 0, cos^4 θ][1, sin θ cos θ; 0 1].
A = QR = [sin θ, cos θ; cos θ, −sin θ][1, sin θ cos θ; 0, cos^2 θ].
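The QR factorization in 5.26 can be checked numerically at a sample angle. Multiplying the factors back gives A = [sin θ, cos θ; cos θ, 0], which is the matrix consistent with the stated factors (the problem's A is not reprinted here, so this is an inference):

```python
# At t = 0.7, verify Q is orthogonal and Q R = [sin t, cos t; cos t, 0].
import math

t = 0.7
s, c = math.sin(t), math.cos(t)
Q = [[s, c], [c, -s]]
R = [[1.0, s * c], [0.0, c * c]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Qt = [[Q[j][i] for j in range(2)] for i in range(2)]
QtQ = matmul(Qt, Q)
assert all(abs(QtQ[i][j] - (i == j)) < 1e-12 for i in range(2) for j in range(2))

A = matmul(Q, R)
expected = [[s, c], [c, 0.0]]
assert all(abs(A[i][j] - expected[i][j]) < 1e-12 for i in range(2) for j in range(2))
```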

5.27 A^T A = I and det A^T = det A imply det A = ±1. The matrix A = [cos θ, sin θ; sin θ, −cos θ] is orthogonal with det A = −1.

5.28 Six of them are true.
(1) Consider (1, 0) and (−1, 0).
(2) Consider the two subspaces U and W of R^3 spanned by e_1 and e_2, respectively.
(3) The set of column vectors of a permutation matrix P is just {e_1, …, e_n}, which is a set of orthonormal vectors.

Chapter 6

Problems

6.3 Zero is an eigenvalue of AB if and only if AB is singular, if and only if BA is singular, if and only if zero is an eigenvalue of BA. Let λ be a nonzero eigenvalue of AB with (AB)x = λx for a nonzero vector x. Then the vector Bx is not zero, since λ ≠ 0, but (BA)(Bx) = B(λx) = λ(Bx). This means that Bx is an eigenvector of BA belonging to the eigenvalue λ, and λ is an eigenvalue of BA. Similarly, any nonzero eigenvalue of BA is also an eigenvalue of AB.

6.4 Consider the matrices [1 1; 0 1] and [1 0; 0 1].

6.5 Check with A = [1 1; 0 1].

6.6 If A is invertible, then AB = A(BA)A^{−1}.

6.7 (1) Use det A = λ_1 ⋯ λ_n. (2) Ax = λx if and only if x = λA^{−1}x.

6.8 (1) If Q = [x_1 x_2 x_3] diagonalizes A, then the diagonal matrix must be λI and AQ = λQI. Expand this equation and compare the corresponding columns of the equation to find a contradiction to the invertibility of Q.


6.9 Q = [2 3; 1 2], D = [2 0; 0 3]. Then A = QDQ^{−1} = [−1 6; −2 6].
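Reassembling A = QDQ^{−1} from the factors in 6.9 is quick to check: Q has determinant 1, so its inverse is [2 −3; −1 2] exactly. A minimal sketch:

```python
# Verify Q D Q^{-1} = [-1 6; -2 6] with integer arithmetic.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Q = [[2, 3], [1, 2]]
D = [[2, 0], [0, 3]]
Qinv = [[2, -3], [-1, 2]]   # det Q = 1, so the inverse is exact

A = matmul(matmul(Q, D), Qinv)
assert A == [[-1, 6], [-2, 6]]
```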

6.10 (1) The eigenvalues of A are 1, 1, −3, and their associated eigenvectors are (1, 1, 0), (−1, 0, 1) and (1, 3, 1), respectively.
(2) If f(x) = x^{10} + x^7 + 5x, then f(1), f(1) and f(−3) are the eigenvalues of A^{10} + A^7 + 5A.

6.11 Note that [a_{n+1}; a_n; a_{n−1}] = [2 1 −2; 1 0 0; 0 1 0][a_n; a_{n−1}; a_{n−2}]. The eigenvalues are 1, 2, −1 and the eigenvectors are (1, 1, 1), (4, 2, 1) and (1, −1, 1), respectively. It turns out that a_n = 2 − (2/3)(−1)^n − 2^n/3.

6.12 Write the characteristic polynomial as
f(x) = x^k − a_1x^{k−1} − ⋯ − a_{k−1}x − a_k = (x − λ)^m g(x),
where g(λ) ≠ 0. Then clearly f(λ) = f′(λ) = ⋯ = f^{(m−1)}(λ) = 0. For n ≥ k, let f_1(x) = x^{n−k}f(x) = x^n − a_1x^{n−1} − ⋯ − a_kx^{n−k}. Then one can easily show that
f_2(λ) = λf′_1(λ) = nλ^n − a_1(n − 1)λ^{n−1} − ⋯ − a_k(n − k)λ^{n−k} = 0,
since f′_1(λ) = (n − k)λ^{n−k−1}f(λ) + λ^{n−k}f′(λ) = 0. Inductively,
f_m(λ) = λf′_{m−1}(λ) = λ^2 f″_{m−2}(λ) + λf′_{m−2}(λ) = 0
= n^{m−1}λ^n − a_1(n − 1)^{m−2}λ^{n−1} − ⋯ − a_k(n − k)^{m−k−1}λ^{n−k}.
Thus, x_n = λ^n, nλ^n, …, n^{m−1}λ^n are m solutions. It is not hard to show that they are linearly independent.

6.15 The eigenvalues are 0, 0.4, and 1, and their eigenvectors are

(1, 4, −5), (1, 0, −1) and (3, 2, 5), respectively.

6.16 For (1), use (A + B)^k = Σ_{i=0}^k C(k, i)A^iB^{k−i} if AB = BA. For (2) and (3), use the definition of e^A. Use (1) for (4).

6.17 Note that e^{A^T} = (e^A)^T by definition (thus, if A is symmetric, so is e^A), and use (4).

6.18 Write A = 2I + N with N = [0 3 0; 0 0 3; 0 0 0]. Then N^3 = 0.

6.19 y_1 = c_1e^{2x} − (1/4)c_2e^{−3x}; y_2 = c_1e^{2x} + c_2e^{−3x}.

6.20 y_1 = −c_2e^{2x} + c_3e^{3x}, y_2 = c_1e^x + 2c_2e^{2x} − c_3e^{3x}, y_3 = 2c_2e^{2x} − c_3e^{3x};
and y_1 = e^{2x} − 2e^{3x}, y_2 = e^x − 2e^{2x} + 2e^{3x}, y_3 = −2e^{2x} + 2e^{3x}.

6.21 (1) [e^{−t}; e^{−t}], (2) [3e^t − 2; 2 − e^{−t}; e^{−t}].

6.22 With the basis α = {1, x, x^2}, [T]_α = A = [1 0 0; 0 2 0; 0 0 3].


6.23 With the standard basis for M_{2×2}(R), α = {E_{11} = [1 0; 0 0], E_{12} = [0 1; 0 0], E_{21} = [0 0; 1 0], E_{22} = [0 0; 0 1]}, [T]_α = A = [1 1 0 1; 1 1 1 0; 0 1 1 1; 1 0 1 1]. The eigenvalues are 3, 1, 1, −1, and their associated eigenvectors are (1, 1, 1, 1), (−1, 0, 1, 0), (0, −1, 0, 1), and (−1, 1, −1, 1), respectively.

6.24 With respect to the standard basis α, [T]_α = [4 0 1; 2 3 2; 1 0 4] with eigenvalues 3, 3, 5 and eigenvectors (0, 1, 0), (−1, 0, 1) and (1, 2, 1), respectively.

Exercises 6.6

6.1 (4) 0 of multiplicity 3, 4 of multiplicity 1. Eigenvectors are e_i − e_{i+1} for 1 ≤ i ≤ 3 and Σ_{i=1}^4 e_i.

6.2 f(λ) = (λ + 2)(λ^2 − 8λ + 15), λ_1 = −2, λ_2 = 3, λ_3 = 5, x_1 = (−35, 12, 19), x_2 = (0, 3, 1), x_3 = (0, 1, 1).

6.4 {v} is a basis for N(A), and {u, w} is a basis for C(A).

6.5 Note that the order in the product doesn't matter, and any eigenvector of A is killed by B. Since the eigenvalues are all different, the eigenvectors belonging to 1, 2, 3 form a basis. Thus B = 0; that is, B has only the zero eigenvalue, so all vectors are eigenvectors of B.

6.7 A = QDQ^{−1} = (1/2)[1 −2 −1; 1 4 −1; 1 2 7].

6.8 Note that R^n = W ⊕ Ker(P) and P(w) = w for w ∈ W and P(v) = 0 for v ∈ Ker(P). Thus, the eigenspace belonging to λ = 1 is W, and that belonging to λ = 0 is Ker(P).

6.9 For any w ∈ R^n, Aw = u(v^T w) = (v · w)u. Thus Au = (v · u)u, so u is an eigenvector belonging to the eigenvalue λ = v · u. The other eigenvectors are those in v^⊥ with eigenvalue zero. Thus, A has either two eigenspaces, E(λ) (1-dimensional, spanned by u) and E(0) = v^⊥, if v · u ≠ 0, or just one eigenspace E(0) = R^n if v · u = 0.

6.10 λv = Av = A^2v = λ^2v implies λ(λ − 1) = 0.

6.12 Use tr(A) = λ_1 + ⋯ + λ_n = a_{11} + ⋯ + a_{nn}.

6.13 (1) If k = 1, clearly x_1 ∈ U. Suppose that the claim is true for k, and x_1 + ⋯ + x_k + x_{k+1} = u ∈ U with x_i ∈ E_{λ_i}(A). Then, from
A(x_1 + ⋯ + x_{k+1}) = λ_1x_1 + ⋯ + λ_kx_k + λ_{k+1}x_{k+1} = Au = ū ∈ U,
λ_{k+1}x_1 + ⋯ + λ_{k+1}x_k + λ_{k+1}x_{k+1} = λ_{k+1}u ∈ U,
we get (λ_1 − λ_{k+1})x_1 + ⋯ + (λ_k − λ_{k+1})x_k = ū − λ_{k+1}u ∈ U. Thus by induction all the x_i's are in U.
(2) Write R^n = E_{λ_1}(A) ⊕ ⋯ ⊕ E_{λ_k}(A). Then the U ∩ E_{λ_i}(A), for i = 1, …, k, span U, since any basis vector u in U is of the form u = x_1 + ⋯ + x_k with x_i ∈ U ∩ E_{λ_i}(A) by (1). Thus U = U ∩ E_{λ_1}(A) ⊕ ⋯ ⊕ U ∩ E_{λ_k}(A).

6.14 A = QD_1Q^{−1} and B = QD_2Q^{−1} imply AB = BA since D_1D_2 = D_2D_1. Conversely, suppose AB = BA. If Ax = λ_ix with x ∈ E_{λ_i}(A), then ABx = BAx = λ_iBx implies Bx ∈ E_{λ_i}(A). That is, each eigenspace E_{λ_i}(A) is invariant under B, so that the restriction of B to E_{λ_i}(A) is diagonalized.

6.16 With respect to the basis α = {1, x, x^2}, [T]_α = [1 0 1; 0 1 1; 1 1 0]. The eigenvalues are 2, 1, −1 and the eigenvectors are (1, 1, 1), (−1, 1, 0) and (1, 1, −2), respectively.

6.19 Eigenvalues are 1, 1, 2 and eigenvectors are (1, 0, 0), (0, 1, 2) and (1, 2, 3). A^{10}x = (1025, 2050, 3076).

6.20 Clearly, a_0 = 1, a_1 = 2 and a_2 = 3. Inductively, one can easily see that the sequence {a_n : n ≥ 1} is a Fibonacci sequence: a_{n+1} = a_n + a_{n−1}. In fact, in {1, 2, …, n}, the size of the class of subsets with the required property may be counted as the number of members of the class for the set without n plus that of the class for the set without n and n − 1; to each member of this latter class just add n.

6.21 One can easily check that det A_n = det A_{n−1} − det A_{n−2}. Set a_n = det A_n, so that a_n = a_{n−1} − a_{n−2}. With a_{n−1} = a_{n−1}, we obtain a matrix equation:
x_n = [a_n; a_{n−1}] = [1 −1; 1 0][a_{n−1}; a_{n−2}] = Ax_{n−1} = A^n x_1,
with a_1 = 1 and a_2 = 0. Using the eigenvalues might make the computation a mess. Instead, one can use the Cayley-Hamilton Theorem 8.13: since the characteristic polynomial of A is λ^2 − λ + 1, A^2 − A + I = 0 holds. Thus, A^3 = A^2 − A = −I, so A^6 = I. One can now easily compute a_n modulo 6.
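The Cayley-Hamilton argument above is easy to confirm in code for A = [1 −1; 1 0]:

```python
# Check A^2 - A + I = 0 and A^6 = I for A = [1 -1; 1 0].
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, -1], [1, 0]]
I = [[1, 0], [0, 1]]

A2 = matmul(A, A)
# characteristic polynomial: A^2 - A + I = 0
assert all(A2[i][j] - A[i][j] + I[i][j] == 0 for i in range(2) for j in range(2))

A6 = matmul(matmul(A2, A2), A2)   # (A^2)^3 = A^6
assert A6 == I
```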

6.22 The characteristic equation is λ^2 − xλ − 0.18 = 0. Since λ = 1 is a solution, x = 0.82. The eigenvalues are now 1, −0.18 and the eigenvectors are (−0.3, −1) and (1, −0.6).

6.23 (1) e^A = [e, e − 1; 0, 1].
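This answer can be checked by summing the power series for the matrix exponential, assuming A = [1 1; 0 0] (a matrix consistent with the stated answer; the problem's A is not reprinted here). Since A^2 = A, the series collapses to I + (e − 1)A:

```python
# Sum e^A = sum_k A^k / k! numerically and compare with [e, e-1; 0, 1].
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1.0, 1.0], [0.0, 0.0]]        # assumed matrix with A^2 = A
term = [[1.0, 0.0], [0.0, 1.0]]     # current term, starting at A^0/0!
expA = [[0.0, 0.0], [0.0, 0.0]]
for k in range(1, 30):
    expA = [[expA[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    term = [[v / k for v in row] for row in matmul(term, A)]

expected = [[math.e, math.e - 1.0], [0.0, 1.0]]
assert all(abs(expA[i][j] - expected[i][j]) < 1e-9
           for i in range(2) for j in range(2))
```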

6.24 The initial status in 1985 is x_0 = (x_0, y_0, z_0) = (0.4, 0.2, 0.4), where x, y, z represent the percentage of large, medium, and small car owners. In 1995, the status is x_1 = [x_1; y_1; z_1] = [0.7 0.1 0; 0.3 0.7 0.1; 0 0.2 0.9][0.4; 0.2; 0.4] = Ax_0. Thus, in 2025, the status is x_4 = A^4x_0. The eigenvalues are 0.5, 0.8, and 1, whose eigenvectors are (−0.41, 0.82, −0.41), (0.47, 0.47, −0.94), and (−0.17, −0.52, −1.04), respectively.


6.27 (1) y_1(x) = −2e^{2(1−x)} + 4e^{2(x−1)}, y_2(x) = −e^{2(1−x)} + 2e^{2(x−1)}, y_3(x) = 2e^{2(1−x)} − 2e^{2(x−1)}.
(2) y_1(x) = e^{2x}(cos x − sin x), y_2(x) = 2e^{2x} sin x.

6.28 y_1 = 0, y_2 = 2e^{2t}, y_3 = e^{2t}.

6.29 (1) f(λ) = λ^3 − 10λ^2 + 28λ − 24, eigenvalues are 6, 2, 2, and eigenvectors are (1, 2, 1), (−1, 1, 0) and (−1, 0, 1).
(2) f(λ) = (λ − 1)(λ^2 − 6λ + 9), eigenvalues are 1, 3, 3, and eigenvectors are (2, −1, 1), (1, 1, 0) and (1, 0, 1).

6.30 Three of them are true:
(1) For A = [1 0; 0 1], B = Q^{−1}AQ means that [0 1; 1 0] = [1 0; 0 1].
(2) Consider A = [1 1; 0 2] and B = [1 1/2; 0 1/2].
(3) Consider [1 1; 0 1]. (4) Consider [1 0; 0 0]. (5) Consider [0 1; 1 0].
(6) If A is similar to I + A, then they have the same eigenvalues, so that tr(A) = tr(I + A) = n + tr(A), which cannot be equal.
(7) tr(A + B) = tr(A) + tr(B).

Chapter 7

Problems

7.1 (1) u · v = u^T v = Σ_i u_iv_i = Σ_i v_iu_i = v · u.
(3) (ku) · v = Σ_i ku_iv_i = k Σ_i u_iv_i = k(u · v).
(4) u · u = Σ_i |u_i|^2 ≥ 0, and u · u = 0 if and only if u_i = 0 for all i.

7.2 (1) If x = 0, clear. Suppose x ≠ 0 ≠ y. For any scalar k, 0 ≤ ⟨x − ky, x − ky⟩ = ⟨x, x⟩ − k̄⟨x, y⟩ − k⟨y, x⟩ + kk̄⟨y, y⟩. Let k = ⟨y, x⟩/⟨y, y⟩ to obtain ⟨x, x⟩⟨y, y⟩ − |⟨x, y⟩|^2 ≥ 0. Note that equality holds if and only if x = ky for some scalar k.
(2) Expand ‖x + y‖^2 = ⟨x + y, x + y⟩ and use (1).

7.3 Suppose that x and y are linearly independent, and consider the linear dependence a(x + y) + b(x − y) = 0 of x + y and x − y. Then 0 = (a + b)x + (a − b)y. Since x and y are linearly independent, we have a + b = 0 and a − b = 0, which are possible only for a = 0 = b. Thus x + y and x − y are linearly independent. Conversely, if x + y and x − y are linearly independent, then the linear dependence ax + by = 0 of x and y gives (1/2)(a + b)(x + y) + (1/2)(a − b)(x − y) = 0. Thus we get a = 0 = b, so x and y are linearly independent.

7.4 (1) Eigenvalues are 0, 0, 2 and their eigenvectors are (1, 0, −i) and (0, 1, 0), respectively.
(2) Eigenvalues are 3, (1 + √5)/2, (1 − √5)/2, and their eigenvectors are (1, −i, (1 − i)/2), (((√5 − 3)/2)i, 1, ((1 − √5)/2)(1 + i)), and (−((√5 + 3)/2)i, 1, ((1 + √5)/2)(1 + i)), respectively.

7.5 Refer to the real case.


7.6 (AB)^H = (A̅B̅)^T = B̅^T A̅^T = B^H A^H.

7.7 (A^H)(A^{−1})^H = (A^{−1}A)^H = I.

7.8 The determinant is just the product of the eigenvalues, and a Hermitian matrix has only real eigenvalues.

7.9 See Exercise 6.9.

7.10 To prove (3) directly, show that λ(x · y) = µ(x · y) by using the fact that A^H x = −µx when Ax = µx.

7.11 A^H = B^H + (iC)^H = B^T − iC^T = −B − iC = −A.

7.12 ±AB = (AB)^H = B^H A^H = (±B)(±A) = BA; + if they are Hermitian, − if they are skew-Hermitian.

7.13 Note that det U^H = det Ū, and 1 = det I = det(U^H U) = |det U|^2.

7.16 Since A^{−1} = A^H, (AB)^H(AB) = I.

7.17 Hermitian means the diagonal entries are real, and diagonality implies the off-diagonal entries are zero. Unitary means the diagonal entries must be ±1.

7.18 (1) If U = [(1/6)i√3 + 1/2, −(1/6)i√3 + 1/2; −(1/3)√3, (1/3)√3], then U^{−1}AU = [1/2 − (1/2)i√3, 0; 0, 1/2 + (1/2)i√3].
(2) If U = [0, 0, 1; −6/25 − (8/25)i, 2/5 + (1/5)i, 6/25 + (8/25)i; 0, −1, 0], then U^{−1}AU = [−1 0 0; 0 2i 1; 0 0 2i].

7.20 (4) Normal with eigenvalues 1 ± i, so that it is unitarily diagonalizable but not orthogonally.

7.22 This is a normal matrix. From a direct computation, one can find the eigenvalues 1 − i, 1 − i and 1 + 2i, and the corresponding eigenvectors (−1, 0, 1), (−1, 1, 0) and (1, 1, 1), respectively, which are not orthogonal. But by an orthonormalization, one can obtain a unitary transition matrix so that A is unitarily diagonalizable.

7.23 A^H A = (H_1 − H_2)(H_1 + H_2) = (H_1 + H_2)(H_1 − H_2) = AA^H if and only if H_1 H_2 − H_2 H_1 = 0.

7.24 In each subproblem, one direction is already proven in the theorems. For the other direction, suppose that U^H AU = D for a unitary matrix U and a diagonal matrix D.
(1) and (2) If all the eigenvalues of A are real (or purely imaginary), then the diagonal entries of D are all real (or purely imaginary). Thus D^H = ±D, so that A is Hermitian (or skew-Hermitian).
(3) The diagonal entries of D satisfy |λ| = 1. Thus D^H = D^{-1}, and A^H = U D^{-1} U^{-1} = A^{-1}.

7.25 Q = (1/√6) [ √3, −√2, −1 ; 0, √2, −2 ; √3, √2, 1 ].
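As a sanity check, the matrix Q given above can be verified to be orthogonal numerically; a minimal sketch using NumPy, with the entries copied from the answer:

```python
import numpy as np

# Numerical check (not from the text) that Q from 7.25 satisfies Q^T Q = I.
s2, s3, s6 = np.sqrt(2), np.sqrt(3), np.sqrt(6)
Q = (1 / s6) * np.array([
    [s3, -s2, -1.0],
    [0.0, s2, -2.0],
    [s3,  s2,  1.0],
])
assert np.allclose(Q.T @ Q, np.eye(3))   # columns are orthonormal
assert np.allclose(Q @ Q.T, np.eye(3))   # rows are orthonormal as well
print("Q is orthogonal")
```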

7.26 (1) A = (1/2) [ 1, −1 ; −1, 1 ] + (3/2) [ 1, 1 ; 1, 1 ],
(2) B = ((3 + 2√6)/6) [ 1, (1 + √6)(2 + i)/5 ; (1 + √6)(2 − i)/5, (7 + 2√6)/5 ] + ((3 − 2√6)/6) [ 1, (1 − √6)(2 + i)/5 ; (1 − √6)(2 − i)/5, (7 − 2√6)/5 ].
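Part (1) can be checked numerically: regrouping the coefficients as 1·P_1 + 3·P_2 with P_1, P_2 the half-matrices, the two terms are orthogonal projections that sum to I. A minimal sketch with NumPy:

```python
import numpy as np

# Check of 7.26 (1): (1/2)M1 + (3/2)M2 = 1*P1 + 3*P2 with P1, P2 projections.
P1 = 0.5 * np.array([[1.0, -1.0], [-1.0, 1.0]])
P2 = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])
A = 1 * P1 + 3 * P2                      # the decomposition given above
assert np.allclose(P1 + P2, np.eye(2))   # projections sum to the identity
assert np.allclose(P1 @ P2, np.zeros((2, 2)))     # mutually orthogonal
assert np.allclose(P1 @ P1, P1) and np.allclose(P2 @ P2, P2)
print(np.linalg.eigvalsh(A))             # eigenvalues 1 and 3, up to rounding
```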


7.27 Let A = λ_1 P_1 + ··· + λ_k P_k be the spectral decomposition of A. Then
A^H = λ̄_1 P_1 + ··· + λ̄_k P_k = λ_1 P_1 + ··· + λ_k P_k = A.

7.28 Take the Lagrange polynomials f_i such that f_i(λ_j) = δ_ij associated with the λ_i's as in Section 2.10.2. Then, by Corollary 7.13,
f_i(A) = f_i(λ_1)P_1 + ··· + f_i(λ_k)P_k = δ_i1 P_1 + ··· + δ_ik P_k = P_i.
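This identity can be illustrated numerically on a hypothetical symmetric matrix with distinct eigenvalues (the matrix below is an assumption for illustration, not one from the text):

```python
import numpy as np

# Illustration of 7.28: the Lagrange polynomial f_i with f_i(lam_j) = delta_ij,
# evaluated at A, reproduces the spectral projection P_i.
A = np.array([[2.0, 1.0], [1.0, 2.0]])   # hypothetical; eigenvalues 1 and 3
lams, V = np.linalg.eigh(A)

def lagrange_at_A(i, lams, A):
    # f_i(A) = prod_{j != i} (A - lam_j I) / (lam_i - lam_j)
    n = A.shape[0]
    F = np.eye(n)
    for j, lam in enumerate(lams):
        if j != i:
            F = F @ (A - lam * np.eye(n)) / (lams[i] - lam)
    return F

for i in range(len(lams)):
    P_i = np.outer(V[:, i], V[:, i])     # projection built from eigenvectors
    assert np.allclose(lagrange_at_A(i, lams, A), P_i)
print("f_i(A) = P_i for each i")
```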

7.29 (1) A = [ −1 ; 2 ; 2 ] = [ −1/3 ; 2/3 ; 2/3 ] [ 3 ] [ 1 ],
A^+ = [ −1 ; 2 ; 2 ]^+ = [ 1 ] [ 1/3 ] [ −1/3, 2/3, 2/3 ] = [ −1/9, 2/9, 2/9 ].
(2) B = [ −1/√2, 1/√2 ; 1/√2, 1/√2 ] [ √3, 0 ; 0, 1 ] [ 1/√6, −2/√6, 1/√6 ; −1/√2, 0, 1/√2 ].
(3) C^+ = [ 1/2, 0 ; 1/2, 0 ].
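The pseudoinverse in (1) can be confirmed directly with NumPy's built-in `pinv`:

```python
import numpy as np

# Check of 7.29 (1): for the column A = [-1, 2, 2]^T, the pseudoinverse
# should be [-1/9, 2/9, 2/9], i.e. A^T / ||A||^2 with ||A||^2 = 9.
A = np.array([[-1.0], [2.0], [2.0]])
A_plus = np.linalg.pinv(A)
assert np.allclose(A_plus, np.array([[-1/9, 2/9, 2/9]]))
assert np.allclose(A_plus, A.T / 9)
print("A^+ = [-1/9, 2/9, 2/9]")
```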

7.31 By the elementary row operations, and then column operations in the blocks,
det B = det [ X, −Y ; Y, X ] = det [ X + iY, −Y + iX ; Y, X ] = det [ X + iY, 0 ; Y, X − iY ] = det A det Ā = |det A|^2.
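The block-determinant identity can be spot-checked numerically with hypothetical random real blocks X, Y (random data is an assumption for illustration):

```python
import numpy as np

# Spot-check of 7.31: det [[X, -Y], [Y, X]] = |det(X + iY)|^2.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))
Y = rng.standard_normal((3, 3))
B = np.block([[X, -Y], [Y, X]])
lhs = np.linalg.det(B)
rhs = abs(np.linalg.det(X + 1j * Y)) ** 2
assert np.allclose(lhs, rhs)
print("det B = |det A|^2 confirmed numerically")
```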

Exercises 7.10

7.1 (1) √6, (2) 4.

7.4 (1) λ = i, x = t(1, −2 − i); λ = −i, x = t(1, −2 + i).
(2) λ = 1, x = t(i, 1); λ = −1, x = t(−i, 1).
(3) Eigenvalues are 2, 2 + i, 2 − i, and eigenvectors are (0, −1, 1), (1, −(1/5)(2 + i), 1), (1, −(1/5)(2 − i), 1).
(4) Eigenvalues are 0, −1, 2, and eigenvectors are (1, 0, −1), (1, −i, 1), (1, 2i, 1).

7.6 A + cI is invertible if det(A + cI) ≠ 0. However, for any matrix A, the equation det(A + cI) = 0, being a complex polynomial equation in c, always has a (complex) solution. For the real matrix [ cos θ, −sin θ ; sin θ, cos θ ], A + rI is invertible for every real number r, since A has no real eigenvalues.

7.7 (1) (1/√3) [ 1, 1 − i ; 1 + i, −1 ], (2) (1/2) [ 1, i, 1 − i ; √2 i, √2, 0 ; 1, i, −1 + i ].

7.10 (2) Q = (1/√2) [ 1, 1 ; 1, −1 ].


7.12 (1) Unitary; diagonal entries are {1, i}. (2) Orthogonal; {cos θ + i sin θ, cos θ − i sin θ}, where θ = cos^{-1}(0.6). (3) Hermitian; {1, 1 + √2, 1 − √2}.

7.13 (1) Since the eigenvalues of a skew-Hermitian matrix must always be purely imaginary, 1 cannot be an eigenvalue.
(2) Note that, for a skew-Hermitian matrix A, (e^A)^H = e^{A^H} = e^{−A} = (e^A)^{-1}.

7.14 det(U − λI) = det(U − λI)^T = det(U^T − λI).

7.15 U = (1/√2) [ 1, −1 ; 1, 1 ], D = U^H AU = [ 2 + i, 0 ; 0, 2 − i ].

7.17 (1) Let λ_1, . . . , λ_k be the distinct eigenvalues of A, and A = λ_1 P_1 + ··· + λ_k P_k the spectral decomposition of A. Then A^H = λ̄_1 P_1 + ··· + λ̄_k P_k. Problem 7.28 shows that P_i = f_i(A), where the f_i's are the Lagrange polynomials associated with the λ_i's as in Section 2.10.2. Then
A^H = Σ_{i=1}^k λ̄_i P_i = Σ_{i=1}^k λ̄_i f_i(A) = g(A),
where g = Σ λ̄_i f_i. The converse is clear.
(2) Clear from (1) since A^H = g(A).

7.18 (See Exercise 6.14.) Since AB = BA, each E_{λ_i}(A) is B-invariant. Since B is normal, B^H = g(B) for some polynomial g. Thus each E_{λ_i}(A) is both B- and B^H-invariant, so the restriction of B to E_{λ_i}(A) is normal. That is, A and B have orthonormal eigenvectors in E_{λ_i}(A) ∩ E_{µ_j}(B).

7.19 (2) The characteristic polynomial of W is f(λ) = λ^n − 1.
(3) The eigenvalues of A are, for k = 0, 1, 2, . . . , n − 1,
λ_k = Σ_{i=1}^n a_i ω^{(i−1)k} = a_1 + a_2 ω^k + ··· + a_n ω^{(n−1)k}.
(4) The characteristic polynomial of B is f(λ) = (λ − n + 1)(λ + 1)^{n−1}.
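The eigenvalue formula in (3) can be spot-checked for a hypothetical coefficient vector a (the values below are an assumption for illustration):

```python
import numpy as np

# Spot-check of 7.19 (3): the circulant matrix with first row (a_1, ..., a_n)
# has eigenvalues lam_k = sum_i a_i * omega^((i-1)k), omega = e^(2 pi i / n).
a = np.array([1.0, 2.0, 0.0, -1.0])          # hypothetical coefficients
n = len(a)
C = np.array([np.roll(a, r) for r in range(n)])   # circulant: shifted rows
omega = np.exp(2j * np.pi / n)
predicted = np.array([sum(a[i] * omega**(i * k) for i in range(n))
                      for k in range(n)])
computed = np.linalg.eigvals(C)
assert np.allclose(np.sort_complex(np.round(predicted, 8)),
                   np.sort_complex(np.round(computed, 8)))
print("circulant eigenvalues match the formula")
```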

7.20 The eigenvalues are 1, 1, 4, and the orthonormal eigenvectors are (1/√2, −1/√2, 0), (−1/√6, −1/√6, √2/√3) and (1/√3, 1/√3, 1/√3). Therefore,
A = (1/3) [ 2, −1, −1 ; −1, 2, −1 ; −1, −1, 2 ] + (4/3) [ 1, 1, 1 ; 1, 1, 1 ; 1, 1, 1 ].
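Reassembling A from this spectral decomposition (regrouped as 1·P_1 + 4·P_2) and recomputing its eigenvalues gives a quick numerical confirmation:

```python
import numpy as np

# Check of 7.20: the decomposition reassembles to A = [[2,1,1],[1,2,1],[1,1,2]]
# with eigenvalues 1, 1, 4, and the two terms are orthogonal projections.
P1 = (1/3) * np.array([[2.0, -1, -1], [-1, 2, -1], [-1, -1, 2]])
P2 = (1/3) * np.array([[1.0, 1, 1], [1, 1, 1], [1, 1, 1]])
A = 1 * P1 + 4 * P2
assert np.allclose(A, np.array([[2, 1, 1], [1, 2, 1], [1, 1, 2]]))
assert np.allclose(P1 + P2, np.eye(3))
assert np.allclose(P1 @ P2, np.zeros((3, 3)))
print(np.linalg.eigvalsh(A))   # eigenvalues 1, 1, 4, up to rounding
```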

7.21 If λ is an eigenvalue of A, then λ^n is an eigenvalue of A^n. Thus, if A^n = 0, then λ^n = 0, i.e., λ = 0. Conversely, by Schur's lemma, A is similar to an upper triangular matrix whose diagonal entries are the eigenvalues, which are assumed to be zero. Then it is easy to conclude that A is nilpotent.

7.22 Note that A^+ b = x_r is the unique optimal least squares solution in ℛ(A) = ℛ(U). Since, for any b ∈ R^m,
A^T A (U^T (UU^T)^{-1} (L^T L)^{-1} L^T) b = (U^T L^T L U)(U^T (UU^T)^{-1} (L^T L)^{-1} L^T) b
= U^T (L^T L)(UU^T)(UU^T)^{-1} (L^T L)^{-1} L^T b
= U^T L^T b = A^T b,
U^T (UU^T)^{-1} (L^T L)^{-1} L^T b is also a least squares solution. Moreover, it is in ℛ(A) = ℛ(U), since U^T times a column vector is a linear combination of the row vectors of U, which form a basis for ℛ(A). Therefore, it is optimal, so that U^T (UU^T)^{-1} (L^T L)^{-1} L^T b = A^+ b. That is, U^T (UU^T)^{-1} (L^T L)^{-1} L^T = A^+.
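The closed form can be spot-checked on a hypothetical full-rank factorization A = LU (random L and U are assumptions for illustration), comparing against NumPy's `pinv`:

```python
import numpy as np

# Spot-check of 7.22: for a full-rank factorization A = LU (L is m x r,
# U is r x n, both of rank r), A^+ = U^T (U U^T)^-1 (L^T L)^-1 L^T.
rng = np.random.default_rng(1)
L = rng.standard_normal((4, 2))    # rank 2 with probability 1
U = rng.standard_normal((2, 3))
A = L @ U
A_plus = U.T @ np.linalg.inv(U @ U.T) @ np.linalg.inv(L.T @ L) @ L.T
assert np.allclose(A_plus, np.linalg.pinv(A))
print("closed-form pseudoinverse matches pinv")
```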

7.23 Nine of them are true.
(2) Consider [ cos θ, −sin θ ; sin θ, cos θ ] with θ ≠ kπ.
(3) Since rank(A) = rank(A^H A) = rank(AA^H), λ = 0 may be an eigenvalue of both A^H A and AA^H with the same multiplicity. Now suppose λ ≠ 0 is an eigenvalue of A^H A with eigenvector x. Then Ax ≠ 0, and (AA^H)Ax = A(A^H A)x = λAx implies λ is also an eigenvalue of AA^H with eigenvector Ax. The converse is the same. Hence, A^H A and AA^H have the same eigenvalues.
(4) Consider [ 1, 1 ; 0, 2 ].
(5) If A is symmetric, then it is orthogonally diagonalizable. Moreover, if A is nilpotent, then all the eigenvalues must be zero, since otherwise it cannot be nilpotent. Thus the diagonal matrix is the zero matrix, and so is A.
(6) and (7) A permutation matrix is an orthogonal matrix, but need not be symmetric.
(8) If a nonzero nilpotent matrix N were Hermitian, then U^{-1}NU = D, where U is a unitary matrix and D is a diagonal matrix whose diagonal entries are not all zero. Thus D^k ≠ 0 for all k ≥ 1; that is, N^k ≠ 0 for all k ≥ 1.
(10) There is an invertible matrix Q such that A = Q^{-1}DQ. Thus,
det(A + iI) = det(D + iI) ≠ 0.
(11) Consider A = [ 1, −1 ; 2, −1 ]. (12) Modify (10).

Chapter 8

Problems

8.2 (2) [ 4, 1, 0 ; 0, 4, 1 ; 0, 0, 4 ], (3) [ 2, 0, 0, 0 ; 0, 2, 0, 0 ; 0, 0, 1, 1 ; 0, 0, 0, 1 ].

8.3 (1) For λ = −1, x_1 = (−2, 0, 1), x_2 = (0, 1, 1), and for λ = 0, x_1 = (−1, 1, 1). (2) For λ = 1, x_1 = (−2, 0, 1), x_2 = (5/2, 1/2, 0), and for λ = −1, x_1 = (−9, −1, 1).

8.5 The eigenvalue is −1 of multiplicity 3, and it has only one linearly independent eigenvector (1, 0, 3). The solution is
y(t) = (y_1(t), y_2(t), y_3(t))^T = e^{−t} ( −1 − 5t + 2t^2, −1 + 4t, 1 − 15t + 6t^2 )^T.


8.6 For any u, v ∈ C,
|u + v|^2 = (u + v)(ū + v̄) = |u|^2 + 2ℜ(uv̄) + |v|^2
≤ |u|^2 + 2|uv̄| + |v|^2 = |u|^2 + 2|u||v| + |v|^2 = (|u| + |v|)^2.
The equality holds ⇔ |uv̄| = ℜ(uv̄), i.e., uv̄ = ℜ(uv̄) ∈ R ⇔ u = |u|z and v = |v|z for some z = e^{iθ}.

8.7 Consider the Jordan canonical form Q^{-1} A^T Q = J of A^T. By taking the transpose of this equation, one gets P^{-1} A P = J^T, where P = (Q^T)^{-1}. Let P′ be the matrix obtained from P by reversing the order of the column vectors in each group corresponding to a Jordan block of P. Then it is easy to see that P′^{-1} A P′ = J = Q^{-1} A^T Q. That is, A and A^T have the same Jordan canonical form J, which means that the eigenspaces have the same dimension.

8.8 See Problem 6.2.

8.9 Let λ_1, . . . , λ_n be the eigenvalues of A. Then
f(λ) = det(λI − A) = (λ − λ_1) ··· (λ − λ_n).
Thus, f(B) = (B − λ_1 I_m) ··· (B − λ_n I_m) is nonsingular if and only if B − λ_i I_m, i = 1, . . . , n, are all nonsingular; that is, none of the λ_i's is an eigenvalue of B.
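This criterion can be illustrated with hypothetical diagonal matrices (chosen here only for illustration): f(B) is singular exactly when A and B share an eigenvalue.

```python
import numpy as np

# Illustration of 8.9: f is the characteristic polynomial of A;
# f(B) is singular iff A and B share an eigenvalue.
eigs_of_A = [1.0, 2.0]                       # hypothetical eigenvalues of A
B_shared = np.diag([2.0, 5.0])               # shares the eigenvalue 2
B_disjoint = np.diag([3.0, 5.0])             # shares none

def f_of(B, eigs):
    F = np.eye(B.shape[0])
    for lam in eigs:
        F = F @ (B - lam * np.eye(B.shape[0]))
    return F

assert abs(np.linalg.det(f_of(B_shared, eigs_of_A))) < 1e-12    # singular
assert abs(np.linalg.det(f_of(B_disjoint, eigs_of_A))) > 1e-12  # nonsingular
print("f(B) singular iff A and B share an eigenvalue")
```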

8.10 The characteristic polynomial of A is f(λ) = (λ − 1)(λ − 2)^2, and the remainder is 104A^2 − 228A + 138I = [ 14, 0, 84 ; 0, 98, 0 ; 0, 0, 98 ].

Exercises 8.5

8.1 Find the Jordan canonical form of A as Q^{-1}AQ = J. Since A is nonsingular, all the diagonal entries λ_i of J, being the eigenvalues of A, are nonzero. Hence, each Jordan block J_j of J is invertible. Now one can easily show that (Q^{-1}AQ)^{-1} = Q^{-1}A^{-1}Q = J^{-1}, which gives the Jordan form of A^{-1}, whose Jordan blocks are of the form J_j^{-1}.

8.3 (x, y) = (1/2)(4 + i, i).

8.4 (1) Use [ 3, 1 ; 1, 3 ] = [ 1/2, 1/2 ; −1/2, 1/2 ] [ 2, 0 ; 0, 4 ] [ 1, −1 ; 1, 1 ].
(2) y(t) = √2 e^{4t} ( 1/√2, 1/√2 )^T − √2 e^{2t} ( −1/√2, 1/√2 )^T.
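The factorization in (1) multiplies back to the original matrix, which is easy to confirm numerically:

```python
import numpy as np

# Check of 8.4 (1): the three factors multiply back to [[3,1],[1,3]],
# and the outer factors are mutually inverse.
P = np.array([[0.5, 0.5], [-0.5, 0.5]])
D = np.diag([2.0, 4.0])
Q = np.array([[1.0, -1.0], [1.0, 1.0]])
assert np.allclose(P @ D @ Q, np.array([[3.0, 1.0], [1.0, 3.0]]))
assert np.allclose(P @ Q, np.eye(2))
print("factorization verified")
```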

8.5 (1) Use A = [ −6, 2, 8 ; −3, 1, 2 ; 6, −1, −4 ] [ −2, 0, 0 ; 0, 2, 0 ; 0, 0, −4 ] [ 1/6, 0, 1/3 ; 0, 2, 1 ; 1/4, −1/2, 0 ].
(2) y_1(t) = −2e^{2(1−t)} + 4e^{2(t−1)}, y_2(t) = −e^{2(1−t)} + 2e^{2(t−1)}, y_3(t) = 2e^{2(1−t)} − 2e^{2(t−1)}.
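The three factors in (1) form a diagonalization S D S^{-1}, which can be checked numerically:

```python
import numpy as np

# Check of 8.5 (1): the third factor is the inverse of the first, so the
# product is a diagonalization with eigenvalues -2, 2, -4.
S = np.array([[-6.0, 2.0, 8.0], [-3.0, 1.0, 2.0], [6.0, -1.0, -4.0]])
D = np.diag([-2.0, 2.0, -4.0])
S_inv = np.array([[1/6, 0.0, 1/3], [0.0, 2.0, 1.0], [1/4, -1/2, 0.0]])
assert np.allclose(S @ S_inv, np.eye(3))
A = S @ D @ S_inv
assert np.allclose(np.sort(np.linalg.eigvals(A).real), [-4.0, -2.0, 2.0])
print("A has eigenvalues -4, -2, 2")
```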


8.6 y_1(t) = 2(t − 1)e^t, y_2(t) = −2te^t, y_3(t) = (2t − 1)e^t.

8.8 (1) (a − d)^2 + 4bc ≠ 0 or A = aI.

8.9 (1) t^2 + t − 11, (2) t^2 + 2t + 13, (3) (t − 1)(t^2 − 2t − 5).

8.10 (3) A^{-1} = [ 1, 0, −1 ; 0, 1/2, −1/2 ; 0, 0, 1 ].

8.11 (2) [ 5, −22, 101 ; 0, 27, −60 ; 0, 0, 87 ].

8.12 See the solution of Problem 8.7.

8.14 (3) Use (2) and the proof of Theorem 8.11.

(4) Use (3), Theorem 8.11, and (1) of Section 8.3.

8.15 (2) A^k = [ 1, 0, k ; 0, 2^k, 2^k − 1 ; 0, 0, 1 ].
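Taking A to be the k = 1 case of this closed form, repeated multiplication reproduces the formula for every k, which gives a quick consistency check:

```python
import numpy as np

# Check of 8.15 (2): with A the k = 1 case, A^k matches
# [[1, 0, k], [0, 2^k, 2^k - 1], [0, 0, 1]] for k = 1..5.
A = np.array([[1, 0, 1], [0, 2, 1], [0, 0, 1]])

def formula(k):
    return np.array([[1, 0, k], [0, 2**k, 2**k - 1], [0, 0, 1]])

Ak = np.eye(3, dtype=int)
for k in range(1, 6):
    Ak = Ak @ A
    assert (Ak == formula(k)).all()
print("A^k matches the closed form for k = 1..5")
```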

8.16 Four of them are true.

Chapter 9

Problems

9.1 (1) [ 9, 3, −4 ; 3, −1, 1 ; −4, 1, 4 ], (2) (1/2) [ 0, 1, 1 ; 1, 0, 1 ; 1, 1, 0 ], (3) [ 1, 1, 0, −5 ; 1, 1, 0, 0 ; 0, 0, −1, 2 ; −5, 0, 2, −1 ].

9.3 (1) The eigenvalues of A are 1, 2, 11. (2) The eigenvalues are 17, 0, −3, and so it is a hyperbolic cylinder. (3) A is singular and the linear form is present, thus the graph is a parabola.

9.5 B, with the eigenvalues 2, 2 + √2 and 2 − √2.

9.7 The determinant is the product of the eigenvalues.

9.9 (1) is indefinite. (2) and (3) are positive definite.

9.10 (1) local minimum, (2) saddle point.

9.12 (2) b_11 = b_14 = b_41 = b_44 = 1; all others are zero.

9.14 Let D be a diagonal matrix, and let D′ be obtained from D by interchanging two diagonal entries d_ii and d_jj, i ≠ j. Let P be the permutation matrix interchanging the i-th and j-th rows. Then PDP^T = D′.
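This congruence can be checked on a small hypothetical diagonal matrix (the entries below are an assumption for illustration):

```python
import numpy as np

# Check of 9.14: conjugating D by the permutation matrix that swaps
# rows i and j swaps the diagonal entries d_ii and d_jj.
D = np.diag([5.0, 7.0, 9.0])
P = np.eye(3)[[1, 0, 2]]        # permutation matrix swapping rows 0 and 1
assert np.allclose(P @ D @ P.T, np.diag([7.0, 5.0, 9.0]))
print("P D P^T swaps the two diagonal entries")
```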

9.15 Count the number of distinct inertias (p, q, k). For n, the number of inertias with p = i is n − i + 1.

9.16 (3) index = 2, signature = 1, and rank = 3.

9.17 Note that the maximum value of R(x) is the maximum eigenvalue of A, and

similarly for the minimum value.
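This extremal property of the Rayleigh quotient can be illustrated with a hypothetical symmetric matrix (chosen only for illustration): R(x) stays within [λ_min, λ_max] and attains the bounds at the eigenvectors.

```python
import numpy as np

# Illustration of 9.17: R(x) = x^T A x / x^T x is bounded by the extreme
# eigenvalues and attains them at the corresponding eigenvectors.
A = np.array([[2.0, 1.0], [1.0, 2.0]])   # hypothetical; eigenvalues 1 and 3
lams, V = np.linalg.eigh(A)              # ascending eigenvalues

def R(x):
    return (x @ A @ x) / (x @ x)

rng = np.random.default_rng(2)
samples = [R(rng.standard_normal(2)) for _ in range(1000)]
assert lams[0] - 1e-12 <= min(samples) and max(samples) <= lams[-1] + 1e-12
assert np.isclose(R(V[:, 0]), lams[0]) and np.isclose(R(V[:, -1]), lams[-1])
print("min/max of R(x) are the extreme eigenvalues")
```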


9.18 max = 7/2 at ±(1/√2, 1/√2), min = 1/2 at ±(1/√2, −1/√2).

9.19 (1) max = 4 at ±(1/√6)(1, 1, 2), min = −2 at ±(1/√3)(−1, −1, 1);
(2) max = 3 at ±(1/√6)(2, 1, 1), min = 0 at ±(1/√3)(1, −1, −1).

9.22 If u ∈ U ∩ W, then u = αx + βy ∈ W for some scalars α and β. Since x, y ∈ U, b(u, x) = b(u, y) = 0. But b(u, x) = βb(y, x) = −β and b(u, y) = αb(x, y) = α.

9.23 Let c(x, y) = (1/2)(b(x, y) + b(y, x)) and d(x, y) = (1/2)(b(x, y) − b(y, x)). Then b = c + d.
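In matrix form, this is the familiar split of any square matrix into symmetric and skew-symmetric parts; a minimal sketch with a hypothetical matrix B:

```python
import numpy as np

# Matrix-form sketch of 9.23: C = (B + B^T)/2 is symmetric,
# D = (B - B^T)/2 is skew-symmetric, and B = C + D.
B = np.array([[1.0, 4.0], [2.0, 3.0]])   # hypothetical bilinear-form matrix
C = (B + B.T) / 2
D = (B - B.T) / 2
assert np.allclose(C, C.T) and np.allclose(D, -D.T)
assert np.allclose(B, C + D)
print("b = symmetric part + skew-symmetric part")
```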

Exercises 9.13

9.1 (1) [ 1, 2 ; 2, 3 ], (3) [ 1, 2, 3 ; 2, −2, −4 ; 3, −4, −3 ], (4) [ 3, −2, 0 ; 5, 7, −8 ; 0, 4, −1 ].

9.4 (2) {(2, 1, 2), (−1, −2, 2), (1, 0, 0)}.

9.3 (i) If a = 0 = c, then λ_i = ±b. Thus the conic section is a hyperbola.
(ii) Since we assumed that b ≠ 0, the discriminant (a − c)^2 + 4b^2 > 0. By the symmetry of the equation in x and y, we may assume that a − c ≥ 0.
If a − c = 0, then λ_i = a ± b. Thus, the conic section is an ellipse if λ_1 λ_2 = a^2 − b^2 > 0, or a hyperbola if a^2 − b^2 < 0. If λ_1 λ_2 = a^2 − b^2 = 0, then it is a parabola when λ_1 ≠ 0 and e′ ≠ 0, or a line or two lines for the other cases.
If a − c > 0, let r^2 = (a − c)^2 + 4b^2 > 0. Then λ_i = ((a + c) ± r)/2 for i = 1, 2. Hence, 4λ_1 λ_2 = (a + c)^2 − r^2 = 4(ac − b^2). Thus, the conic section is an ellipse if det A = ac − b^2 > 0, or a hyperbola if det A = ac − b^2 < 0. If det A = ac − b^2 = 0, it is a parabola, or a line or two lines, depending on the possible values of d′, e′ and the eigenvalues.

9.6 If λ is an eigenvalue of A, then λ^2 and 1/λ are eigenvalues of A^2 and A^{-1}, respectively. Note x^T (A + B)x = x^T Ax + x^T Bx.

9.8 (1) Q = (1/√2) [ 1, 1 ; 1, −1 ]. The form is indefinite with eigenvalues λ = 5 and λ = −1.

9.10 (1) A = [ 2, −1 ; 2, 0 ], (2) B = [ 3, 9 ; 0, 6 ], (3) Q = [ 1, 2 ; 1, −1 ].

9.11 (2) The signature is 1, the index is 2, and the rank is 3.

9.15 (2) The point (1, π) is a critical point, and the Hessian is [ 1, 1 ; 1, −1 ]. Hence, f(1, π) is a local maximum.

9.18 Seven of them are true.
(5) Consider a bilinear form b(x, y) = x_1 y_1 − x_2 y_2 on R^2.
(7) The identity I is congruent to k^2 I for all nonzero k ∈ R. (8) See (7).
(9) Consider a bilinear form b(x, y) = x_1 y_2. Its matrix Q = [ 0, 1 ; 0, 0 ] is not diagonalizable.
