
# Math 206, Spring 2016

Assignment 14 Solutions

## Due: May 4, 2016

Part A.
(1) Complete problems 6 and 8 from section 6.2.
Solution. For problem 6, one can transform the given matrix $A$ to the following upper triangular matrix:

$$B = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 2 & 1 & 3 \\ 0 & 0 & 3 & 3 \\ 0 & 0 & 0 & 12 \end{pmatrix}.$$

The row operations it takes to perform this reduction are as follows: $-R_1 + R_2$, $-R_1 + R_3$, $-R_1 + R_4$, $R_2 \leftrightarrow R_3$, $-R_2 + R_4$, $-2R_3 + R_4$. Since we only required one row swap and no row scalings, we have that $(-1)^1\det(A) = \det(B)$. Since $B$ is triangular we have $\det(B) = (1)(2)(3)(12) = 72$. So we have that the determinant of the original matrix is $-72$.
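The two facts used here (the determinant of a triangular matrix is the product of its diagonal, and a single row swap flips the sign) can be checked with a short sketch. The matrix $B$ below is the one reconstructed in this solution; the textbook's original $A$ is not reproduced in this extract.

```python
from fractions import Fraction

def det(M):
    """Determinant via Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

# B as reconstructed in the solution above (entries are our reading
# of the garbled source).
B = [[Fraction(x) for x in row] for row in
     [[1, 1, 1, 1],
      [0, 2, 1, 3],
      [0, 0, 3, 3],
      [0, 0, 0, 12]]]

print(det(B))             # product of the diagonal: 1*2*3*12 = 72

# A single row swap flips the sign of the determinant:
B_swapped = [B[0], B[2], B[1], B[3]]
print(det(B_swapped))     # -72
```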
For problem 8, notice that if one performs the operations $R_1 \leftrightarrow R_2$, $R_2 \leftrightarrow R_3$, $R_3 \leftrightarrow R_4$, $R_4 \leftrightarrow R_5$, then one transforms the given matrix $A$ to the matrix

$$B = \begin{pmatrix} 1 & 0 & 0 & 0 & 3 \\ 0 & 1 & 0 & 0 & 4 \\ 0 & 0 & 1 & 0 & 5 \\ 0 & 0 & 0 & 1 & 6 \\ 0 & 0 & 0 & 0 & 2 \end{pmatrix}.$$

Since this latter matrix is upper triangular, we have $\det(B) = (1)(1)(1)(1)(2) = 2$. Since 4 row swaps (and no row scalings) were required to transform $A$ to $B$, we get $(-1)^4\det(A) = \det(B) = 2$. Hence the determinant of the original matrix is 2.
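A quick numerical check of this computation: since the textbook's $A$ is not reproduced in this extract, the sketch below reconstructs a candidate $A$ by undoing the four swaps on $B$, then confirms that both determinants equal 2.

```python
def det(M):  # Laplace expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

B = [[1, 0, 0, 0, 3],
     [0, 1, 0, 0, 4],
     [0, 0, 1, 0, 5],
     [0, 0, 0, 1, 6],
     [0, 0, 0, 0, 2]]

# Undo the swaps in reverse order to recover the matrix they started from.
A = [row[:] for row in B]
for i, j in [(3, 4), (2, 3), (1, 2), (0, 1)]:
    A[i], A[j] = A[j], A[i]

print(det(B))  # 2
print(det(A))  # 2: an even number of swaps leaves the determinant unchanged
```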


(2) Suppose that $A$ is a matrix corresponding to some orthogonal transformation. Prove that $\det(A)$ is equal to either $1$ or $-1$. [Hint: what is $A^TA$?]
Solution. Since $A$ corresponds to an orthogonal transformation, we know that $A^TA = I$. Hence we get

$$1 = \det(I) = \det(A^TA).$$

On the other hand, we know that the determinant is multiplicative, and that the determinant of a matrix is equal to the determinant of its transpose. Hence we get

$$\det(A^TA) = \det(A^T)\det(A) = \det(A)\det(A) = (\det(A))^2.$$

Combining these expressions we find that $(\det(A))^2 = 1$. Since the only real numbers that square to $1$ are $\pm 1$, we get the desired result.
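A concrete instance of this result: a rotation matrix is orthogonal with determinant $+1$, and composing it with a reflection gives determinant $-1$. (The angle and the $2 \times 2$ setting below are chosen purely for illustration.)

```python
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

t = 0.7
Q = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]          # rotation: orthogonal
R = matmul(Q, [[1, 0], [0, -1]])           # rotation followed by a reflection

# Q^T Q = I, as required of an orthogonal matrix:
Qt = [list(col) for col in zip(*Q)]
QtQ = matmul(Qt, Q)
assert all(abs(QtQ[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))

print(round(det2(Q), 12))   # 1.0
print(round(det2(R), 12))   # -1.0
```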

(3) Complete problems 16 and 18 from section 7.3.
Solution. For problem 16, note that the matrix $A$ has characteristic polynomial

$$\det\begin{pmatrix} 1-\lambda & 1 & 0 \\ 0 & -1-\lambda & 1 \\ -2 & -2 & -\lambda \end{pmatrix} = -\lambda^3 - \lambda = -\lambda(\lambda^2 + 1).$$
http://palmer.wellesley.edu/~aschultz/w16/math206


(The determinant calculation can most easily be carried out via Laplace expansion along, say, the first row.) Hence the only real eigenvalue is $\lambda = 0$. To compute a basis for the corresponding eigenspace we simply compute a basis for

$$\ker\begin{pmatrix} 1 & 1 & 0 \\ 0 & -1 & 1 \\ -2 & -2 & 0 \end{pmatrix}.$$

The standard procedure (row reduction, solving for pivot variables in terms of free variables) produces a basis

$$\mathcal{B} = \left\{ \begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix} \right\}.$$

Note that since

$$\sum_\lambda \dim(E_\lambda) < 3,$$

the matrix has no eigenbasis.
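The claims for problem 16 can be spot-checked numerically. The entries of $A$ below are our reading of the (sign-garbled) source; the check confirms that $\det(A - \lambda I) = -\lambda^3 - \lambda$ at several values and that $(-1, 1, 1)$ lies in $\ker(A)$.

```python
# Matrix for problem 16, with signs as reconstructed in the solution above.
A = [[ 1,  1, 0],
     [ 0, -1, 1],
     [-2, -2, 0]]

def det3(M):
    return (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))

# det(A - k*I) should equal -k^3 - k for every scalar k.
for k in [-2, -1, 0, 1, 2, 3]:
    AkI = [[A[i][j] - (k if i == j else 0) for j in range(3)] for i in range(3)]
    assert det3(AkI) == -k**3 - k

# The eigenvector (-1, 1, 1) found above lies in ker(A):
v = (-1, 1, 1)
print([sum(A[i][j] * v[j] for j in range(3)) for i in range(3)])  # [0, 0, 0]
```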

For problem 18, the characteristic polynomial of the matrix $A$ is

$$\det\begin{pmatrix} -\lambda & 0 & 0 & 0 \\ 0 & 1-\lambda & 0 & 1 \\ 0 & 0 & -\lambda & 0 \\ 0 & 0 & 0 & 1-\lambda \end{pmatrix} = \lambda^4 - 2\lambda^3 + \lambda^2 = \lambda^2(\lambda - 1)^2.$$

Hence the eigenvalues are $0$ and $1$ (each occurring with algebraic multiplicity 2). To find a basis for each eigenspace, we need to find bases for the following kernels:

$$E_0 = \ker\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad E_1 = \ker\begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$
Row reducing each produces the following bases (labeled according to the eigenspace they form a basis for):

$$\mathcal{B}_0 = \left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix} \right\}, \qquad \mathcal{B}_1 = \left\{ \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} \right\}.$$
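As with problem 16, these computations admit a quick numerical check. The matrix $A$ below is our reconstruction from the garbled source; the sketch verifies the characteristic polynomial at several values and confirms each basis vector is a genuine eigenvector.

```python
# Matrix for problem 18, as reconstructed in the solution above.
A = [[0, 0, 0, 0],
     [0, 1, 0, 1],
     [0, 0, 0, 0],
     [0, 0, 0, 1]]

def det(M):  # Laplace expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

# det(A - k*I) should equal k^4 - 2k^3 + k^2 = k^2 (k - 1)^2 for all k.
for k in range(-3, 4):
    AkI = [[A[i][j] - (k if i == j else 0) for j in range(4)] for i in range(4)]
    assert det(AkI) == k**2 * (k - 1)**2

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

# Basis vectors of E_0 satisfy Av = 0; the E_1 vector satisfies Av = v.
for v in [(1, 0, 0, 0), (0, 0, 1, 0)]:
    assert apply(A, v) == [0, 0, 0, 0]
assert apply(A, (0, 1, 0, 0)) == [0, 1, 0, 0]
print("all eigen-checks pass")
```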

(4) Complete problem 39 from section 7.3.


Solution. We address part (a) first. We have seen previously in the course that $\ker(\operatorname{proj}_V) = V^\perp$. On the other hand, we know that $\ker(\operatorname{proj}_V)$ is the $0$-eigenspace. Hence we have $E_0 = V^\perp$. The complementarity of dimensions tells us that $\dim(V^\perp) = \dim(\mathbb{R}^n) - \dim(V) = n - m$.

On the other hand, a vector $v$ satisfies $\operatorname{proj}_V(v) = v$ if and only if $v \in V$. But a vector satisfying $\operatorname{proj}_V(v) = v$ is just a $1$-eigenvector, and so we have $E_1 = V$. Hence $\dim(E_1) = m$. This settles the question of geometric multiplicity. What can be said about the corresponding algebraic multiplicities? We'd like to argue that $\dim(E_\lambda) = \text{alg. mult.}(\lambda)$ for $\lambda \in \{0, 1\}$; this will take a bit of work.
We know from a theorem in class that the geometric multiplicity of an eigenvalue $\lambda$ is a lower bound for the algebraic multiplicity of $\lambda$, and so it follows that the sum of the geometric multiplicities over all eigenvalues is a lower bound for the sum of the algebraic multiplicities over all eigenvalues:

$$\sum_\lambda \dim(E_\lambda) \le \sum_\lambda \text{alg. mult.}(\lambda).$$

On the other hand, the algebraic multiplicity of each $\lambda$ is the multiplicity of $\lambda$ as a root of the characteristic polynomial. From another theorem in class we know that the characteristic polynomial is a polynomial of degree $n$, and so the fundamental theorem of algebra says that it can't have more than $n$ roots (counted with multiplicity). This means that

$$\sum_\lambda \text{alg. mult.}(\lambda) \le n.$$

Since we already know that $\lambda = 0$ and $\lambda = 1$ have $\dim(E_0) = n - m$ and $\dim(E_1) = m$, we can plug these into our two inequalities to produce:

$$n = (n - m) + m = \dim(E_0) + \dim(E_1) \le \sum_\lambda \dim(E_\lambda) \le \sum_\lambda \text{alg. mult.}(\lambda) \le n.$$

Hence the inequalities in this expression must be bona fide equalities.

At this point we have shown that

$$\sum_\lambda \dim(E_\lambda) = n \quad \text{and} \quad \sum_\lambda \text{alg. mult.}(\lambda) = n;$$

we would still like to argue that $\dim(E_\lambda) = \text{alg. mult.}(\lambda)$ for $\lambda \in \{0, 1\}$. To achieve this, first notice that our two equalities imply that the only eigenvalues are $0$ and $1$: if there were another eigenvalue, then it would have to have geometric multiplicity at least $1$, and then we'd have

$$n = \sum_\lambda \dim(E_\lambda) > \dim(E_0) + \dim(E_1) = n,$$

a clear contradiction. From this it follows that we need to have

$$\text{alg. mult.}(0) + \text{alg. mult.}(1) = n.$$
We already know that $\dim(E_\lambda) \le \text{alg. mult.}(\lambda)$, and so we only have to rule out the possibility that $\dim(E_\lambda) < \text{alg. mult.}(\lambda)$ for some $\lambda \in \{0, 1\}$. But if this were the case, then we would have

$$n = \text{alg. mult.}(0) + \text{alg. mult.}(1) > \dim(E_0) + \dim(E_1) = n,$$

again a contradiction. Hence $\dim(E_\lambda) = \text{alg. mult.}(\lambda)$ for $\lambda \in \{0, 1\}$, as desired.
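A concrete numerical instance of this result, with $n = 3$ and $m = 2$: the sketch below projects $\mathbb{R}^3$ onto a plane $V$ (the subspace and its orthonormal basis are chosen purely for illustration) and confirms that vectors in $V$ are $1$-eigenvectors while $V^\perp$ is the $0$-eigenspace.

```python
import math

s = 1 / math.sqrt(2)
u1 = (1, 0, 0)
u2 = (0, s, s)      # together with u1, an orthonormal basis of the plane V

# Projection matrix P = u1 u1^T + u2 u2^T.
P = [[u1[i]*u1[j] + u2[i]*u2[j] for j in range(3)] for i in range(3)]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def close(x, y):
    return all(abs(a - b) < 1e-12 for a, b in zip(x, y))

# Vectors in V are 1-eigenvectors ...
assert close(apply(P, u1), u1)
assert close(apply(P, u2), u2)
# ... and V^perp = span{(0, 1, -1)} is the 0-eigenspace.
w = (0, s, -s)
assert close(apply(P, w), (0, 0, 0))

print("dim E_1 = m = 2, dim E_0 = n - m = 1, as the solution predicts")
```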