

1. Page 149, Exercises 12 and 13

In these exercises, one considers a function D : M_{n×n}(F) → F, defined on the n × n matrices with entries in a field F, with values in F. (You can take F = R for concreteness; this is unimportant here.) We also assume that D has the same multiplicative property as the determinant, namely

    D(AB) = D(A)D(B)    (1.1)
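As a quick sanity check of (1.1) for the ordinary determinant, here is a small Python sketch (the helper names det2 and matmul2 are ad hoc, and the sample matrices are arbitrary choices):

```python
from fractions import Fraction

def det2(m):
    # 2x2 determinant: ad - bc
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(x, y):
    # product of two 2x2 matrices
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[Fraction(1), Fraction(2)], [Fraction(3), Fraction(5)]]
B = [[Fraction(2), Fraction(-1)], [Fraction(4), Fraction(7)]]

# D(AB) = D(A)D(B) for D = det, as in (1.1)
assert det2(matmul2(A, B)) == det2(A) * det2(B)
```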

whenever A and B are n × n matrices. The goal of these questions is to see that D is (under some nondegeneracy conditions) the determinant function. This gives another way to define the determinant.

Show that either D(A) = 0 for all A, or D(I_n) = 1.

We want to prove that one of two possibilities occurs. To prove this, we suppose that the first one does not occur, and try to see that the second then has to occur.[1] So we have two pieces of information about D, namely (1.1) and the fact that there exists a matrix M such that D(M) ≠ 0, and we want to prove that

    D(I_n) = 1.    (1.2)

For the moment, the second piece of information seems mysterious, so we focus on (1.1) and on what we want to prove. We have information relating D to the multiplication of matrices, and we want to prove something about I_n. But I_n also has a relation to multiplication: for any matrix M, there holds M = I_n M = M I_n. Now, we want to use our information, so we take D of the identity above, and using (1.1) we get

    D(M) = D(I_n M) = D(I_n) D(M).

This is almost what we want; we just have to make sure that we do not divide by 0. But this is precisely how we can use our second piece of information: the equality above holds for every matrix M, so we may apply it to the particular M with D(M) ≠ 0. Dividing by D(M), we get D(I_n) = 1.

If D(I_n) = 1, prove that if M is invertible, then D(M) ≠ 0.

We want to prove something for all invertible matrices. So we let M be such a matrix, and we try to prove the result for this particular M. If we can do it without making any additional assumption on M, then the proof works for all invertible matrices.[2] What do we know about M? Since M is invertible, we know exactly that there exists another matrix N = M^{-1} such that

    I_n = MN = NM,

and applying D to both sides of this equality and using (1.1), we get

    1 = D(I_n) = D(MN) = D(M) D(N).

Now, for the product D(M)D(N) to be nonzero, both factors must be nonzero;[3] in particular D(M) ≠ 0. Since we have made no assumption on M besides invertibility, this works for all invertible matrices. This ends the proof.

For simplicity, we now stick to the n = 2 case, and we make the additional hypothesis that

    n = 2 and D(I_2) ≠ D(J), where J = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.    (1.3)

[4] In particular, the analysis above shows that D(I_2) = 1: if D were identically 0, we would have D(I_2) = D(J) = 0, contradicting (1.3); hence, by the first question, D(I_2) = 1.

We have introduced one new object, namely J, and since we only have information about the multiplicative properties of D, it is interesting to see what the multiplicative properties of J are. First, we can compute J^2 = I_2, and hence, taking D of both sides of this equality and using (1.1), we get

    1 = D(I_2) = D(J^2) = D(J)^2.

Since D(J) ≠ D(I_2) = 1 by (1.3), we conclude that

    D(J) = -1.    (1.4)

Besides, a simple computation shows that

    J \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} c & d \\ a & b \end{pmatrix}  and  \begin{pmatrix} a & b \\ c & d \end{pmatrix} J = \begin{pmatrix} b & a \\ d & c \end{pmatrix},    (1.5)

that is, multiplication on the left by J exchanges the rows of a matrix, while multiplication on the right exchanges the columns. These facts will be useful later on.

Date: Thursday, April 16th, 2008.
[1] This has the advantage of giving us another piece of information, besides (1.1), namely that the first possibility does not occur.
[2] If we do need additional assumptions, then we have at least a partial result (case 1), and we can then try to prove the result with the additional information that (case 1) does not hold.
[3] We even have the more precise information that D(M^{-1}) = 1/D(M).
[4] I think that all the results here would hold for any n if we replaced (1.3) by D(I_n) ≠ D(E_τ) for one transposition τ ∈ S_n, where E_τ is the matrix with (i, j)-entry δ_{iτ(j)}.
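The facts about J collected above (J^2 = I_2, det J = -1, and the row/column exchanges in (1.5)) can be checked mechanically for D = det; this is a Python sketch with ad hoc helpers:

```python
def det2(m):
    # 2x2 determinant: ad - bc
    (a, b), (c, d) = m
    return a * d - b * c

def mul(x, y):
    # product of two 2x2 matrices
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

J = [[0, 1], [1, 0]]
I2 = [[1, 0], [0, 1]]
A = [[5, 6], [7, 8]]  # arbitrary sample matrix

assert mul(J, J) == I2                # J^2 = I_2, so D(J)^2 = D(I_2)
assert det2(J) == -1                  # consistent with (1.4)
assert mul(J, A) == [[7, 8], [5, 6]]  # left multiplication by J swaps the rows
assert mul(A, J) == [[6, 5], [8, 7]]  # right multiplication by J swaps the columns
```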

We have accumulated some results, namely (1.1), (1.2), (1.3), (1.4) and (1.5), and all of these are just consequences of our hypotheses. We continue with the remaining questions.

Prove that D(0) = 0.

Since we only have information about multiplicative properties, we ask ourselves whether 0 has some special multiplicative property. And indeed, for every matrix M there holds 0 = 0 × M. Taking D on both sides of this equality and using (1.1), we get that, for all matrices M,

    D(0) = D(0) D(M).

From this we conclude that either D(0) = 0, in which case we are done, or D(0) ≠ 0, and then we can divide by D(0) in the equality above to get D(M) = 1 for all matrices M. But this is not true for M = J, since D(J) = -1 by (1.4).[5] Consequently,

    D(0) = 0    (1.6)

and the proof of (1.6) is complete.

Prove that if A^2 = 0, then D(A) = 0.    (1.7)

A priori, this question is not apparently linked to any multiplicative property, so we must examine more precisely what we know. There is one thing that we haven't used so far, namely (1.6). But the multiplicative property (1.1) combines nicely with it: since A is a square matrix with A^2 = 0,

    0 = D(0) = D(A^2) = D(A) D(A) = D(A)^2,

hence D(A) = 0.

Prove that D(B) = -D(A)    (1.8)
whenever B is obtained from A by exchanging the rows or the columns.

We examine J and its multiplicative properties (in case we did not do that before), and we rapidly see that (1.5) is exactly what we need: if B is obtained from A by interchanging the rows, we see by (1.5) that B = JA, and hence, using (1.1) and (1.4),

    D(B) = D(JA) = D(J) D(A) = -D(A).

In the case when B is obtained by interchanging the columns, we get B = AJ, and we conclude similarly.

Prove that D(A) = 0 if one row or one column of A is 0.    (1.9)

Using the action of J given in (1.5), together with (1.8), we see that it suffices to prove the result when the second row of A is 0, that is,

    A = \begin{pmatrix} a & b \\ 0 & 0 \end{pmatrix}

(the case when a column is 0 is treated similarly[6]). Again we look for special multiplicative properties, and here (1.7) becomes useful: the matrix

    J' = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}

satisfies (J')^2 = 0, hence D(J') = 0 by (1.7). Moreover,

    J' \begin{pmatrix} c & d \\ a & b \end{pmatrix} = \begin{pmatrix} a & b \\ 0 & 0 \end{pmatrix} = A

for any scalars c and d (it is not important what the first row of the second factor is). Hence, using (1.1),

    D(A) = D(J') D\begin{pmatrix} c & d \\ a & b \end{pmatrix} = 0.

This proves the result when a row is 0. (Another special choice is J'' = \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix}: then JJ'' = J'', and taking D of this equality gives -D(J'') = D(J''), hence D(J'') = 0 as well.)

Prove that D(A) = 0 whenever A is singular.

To prove this fact, we need to express the information "A is singular" by something more amenable to analysis. Writing

    A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},

singularity is equivalent to saying that the system AX = 0 has a nontrivial solution X = (α, β)^T, with α ≠ 0 or β ≠ 0, i.e.

    \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} α \\ β \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.    (1.10)

This looks encouraging: we have the product of two matrices giving a new matrix with a column which is 0. If we were dealing with square matrices, we could apply (1.9) and conclude. So let us try to make our column matrices into square matrices. For any γ, δ, the first trial gives

    AB = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} α & γ \\ β & δ \end{pmatrix} = \begin{pmatrix} 0 & aγ + bδ \\ 0 & cγ + dδ \end{pmatrix} = C,

and taking D of this equality gives D(A)D(B) = D(C) = 0, after using (1.10) and (1.9). To conclude that D(A) = 0, we just need to be able to say that D(B) ≠ 0, and using the result of the second question, we see that this is the case if B is invertible. So we just need to choose γ, δ such that B is invertible. But det B = αδ - βγ, so we see that choosing δ = α and γ = -β gives

    AB = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} α & -β \\ β & α \end{pmatrix} = \begin{pmatrix} 0 & -aβ + bα \\ 0 & -cβ + dα \end{pmatrix},

and det B = α^2 + β^2 > 0 since (α, β) was not trivial. Hence B is invertible, so D(B) ≠ 0, and consequently D(A) = 0. Since we have made no assumption on A (except that it is noninvertible), we get that D(A) = 0 for all noninvertible matrices A. Thus in all cases, D(A) = 0, and the proof is complete.

[5] Another way to conclude, without using that D(J) = -1, is to say that if D(0) ≠ 0, then all matrices have the same image (namely 1) under D; in particular D(I_2) = D(J), which again is ruled out by (1.3).
[6] Multiplying on the right by J exchanges the columns instead.

2. Page 155, Exercises 1, 2, 3 and 7

This section deals with the computation of some determinants. Remember that to compute determinants, one can
(1) use the multilinearity with respect to rows or columns;
(2) add a multiple of a row to another row, or a multiple of a column to another column, without changing the value of the determinant;
(3) switch two rows or two columns (this multiplies the determinant by -1);
(4) expand with respect to a row or a column;
(5) use the fact that if two rows or columns are linearly dependent (in particular if they are the same, or one is 0!), the determinant is 0;
(6) use the formula for a 2 × 2 determinant:

    det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc.

Note that this formula follows from item (4) and the fact that for a 1 × 1 determinant, det(a) = a. We refer to the book for more precise formulas.

2.1. Problem 1. Compute the following determinant:

    det A = det \begin{pmatrix} 0 & a & b \\ -a & 0 & c \\ -b & -c & 0 \end{pmatrix}.

Here we have three parameters about which we have no information (except that they belong to a field). To get some intuition, let us first examine some particular cases. Looking at the matrix, it is easy to see that if a = 0, then the first two columns are linearly dependent, so the determinant is 0 by item (5). Similarly if b = 0 or c = 0. So we can assume that abc ≠ 0, and in particular that all these scalars are invertible.

Using item (2) in the list of properties of the determinant above, we decide to delete the entry a by adding -a/b times the third column to the second. This gives

    det A = det \begin{pmatrix} 0 & 0 & b \\ -a & -ac/b & c \\ -b & -c & 0 \end{pmatrix}.

To get back to a matrix of the original form, we also need to cancel the first entry in the second row. To use the symmetry of the matrix, we proceed as before, but with rows: that is, we add -a/b times the third row to the second row. This gives

    det A = det \begin{pmatrix} 0 & 0 & b \\ 0 & 0 & c \\ -b & -c & 0 \end{pmatrix},

and the first two columns of this matrix are obviously linearly dependent (and this is indeed a matrix of the same shape as A, but with a = 0). Hence det A = 0. Thus in all cases, det A = 0.
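A Python sketch checking both arguments above on concrete numbers (the singular matrix A, the kernel vector (alpha, beta) = (2, -1), and the sample triples (a, b, c) are illustrative choices, not from the text):

```python
from fractions import Fraction

def det2(m):
    # 2x2 determinant: ad - bc
    (a, b), (c, d) = m
    return a * d - b * c

def mul2(x, y):
    # product of two 2x2 matrices
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Singular A with kernel vector (alpha, beta): A (2, -1)^T = 0.
A = [[1, 2], [2, 4]]
alpha, beta = 2, -1
B = [[alpha, -beta], [beta, alpha]]         # the choice gamma = -beta, delta = alpha
AB = mul2(A, B)
assert AB[0][0] == 0 and AB[1][0] == 0      # first column of AB is 0
assert det2(B) == alpha**2 + beta**2 == 5   # B is invertible, so D(B) != 0

# Problem 1: the 3x3 skew-symmetric determinant vanishes for any a, b, c.
def det3(m):
    # cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

for a, b, c in [(1, 2, 3), (Fraction(2, 3), 5, -4), (0, 1, 1)]:
    assert det3([[0, a, b], [-a, 0, c], [-b, -c, 0]]) == 0
```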

2.2. Problem 2. Compute the determinant of the Vandermonde matrix

    V(a, b, c) = det \begin{pmatrix} 1 & a & a^2 \\ 1 & b & b^2 \\ 1 & c & c^2 \end{pmatrix}.

Using item (2) in the list of properties of the determinant, we use the first row to delete the other entries in the first column. This gives

    V(a, b, c) = det \begin{pmatrix} 1 & a & a^2 \\ 0 & b - a & b^2 - a^2 \\ 0 & c - a & c^2 - a^2 \end{pmatrix}.

Now, we use the fact that b^2 - a^2 = (b - a)(b + a), and similarly for c^2 - a^2, and item (1) (the multilinearity) above to factor (b - a) out of the second row and (c - a) out of the third one. This gives

    V(a, b, c) = (b - a)(c - a) det \begin{pmatrix} 1 & a & a^2 \\ 0 & 1 & b + a \\ 0 & 1 & c + a \end{pmatrix}.

Expanding along the first column (item (4)) and using the formula for a 2 × 2 determinant (item (6)), we finally get

    V(a, b, c) = (b - a)(c - a)((c + a) - (b + a)) = (c - b)(c - a)(b - a).

2.3. Problem 3. Find all the elements of S_3, the symmetric group on 3 elements.

We need to find all the permutations of the set X = {1, 2, 3}. So we consider such a permutation σ and look at σ(1): it is either 1, 2 or 3. Below we write a permutation as the list of its images (σ(1), σ(2), σ(3)), and compositions act as (σ1 ∘ σ2)(x) = σ1(σ2(x)).

First, we can find the permutations that send 1 to 1 (thus, we assume that σ(1) = 1), and we look at the image of 2. Either the image of 2 is 2, in which case the image of 3 is 3, and we get the identity transformation (no shuffling); or the image of 2 is 3, and we get the transposition which exchanges 2 and 3, τ23 = (1, 3, 2).

Next, we can find the permutations that send 1 to 2: either 2 is sent to 1, which gives the transposition τ12 = (2, 1, 3), or 2 is sent to 3, and we get (2, 3, 1) = τ13 ∘ τ12.

Finally, the permutations that send 1 to 3 either send 2 to 2, in which case we get τ13 = (3, 2, 1), or send 2 to 1, and we obtain (3, 1, 2) = τ12 ∘ τ13.

To sum up, we get all the permutations of S_3:

    Permutation:  Id   τ23   τ12   τ13   (2, 3, 1)   (3, 1, 2)
    Sign:          1    -1    -1    -1        1           1

In particular, one can check that S_3 has cardinality 6 = 3! and that for any two permutations σ1, σ2, there holds

    Sign(σ1 ∘ σ2) = Sign(σ1) Sign(σ2).
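Both computations can be verified in a few lines of Python (a sketch; permutations are stored as tuples of images, and the sample triples (a, b, c) are arbitrary):

```python
from itertools import permutations

def det3(m):
    # cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# Problem 2: V(a, b, c) = (c - b)(c - a)(b - a) on a few sample triples
for a, b, c in [(1, 2, 3), (2, -1, 5), (0, 4, 4)]:
    V = det3([[1, a, a*a], [1, b, b*b], [1, c, c*c]])
    assert V == (c - b) * (c - a) * (b - a)

# Problem 3: S3 has 6 = 3! elements and Sign is multiplicative
def sign(p):
    # sign = (-1)^(number of inversions of the image tuple)
    inv = sum(1 for i in range(3) for j in range(i + 1, 3) if p[i] > p[j])
    return (-1) ** inv

def compose(s, t):
    # (s o t)(x) = s(t(x)); p[x-1] is the image of x
    return tuple(s[t[x] - 1] for x in range(3))

S3 = list(permutations((1, 2, 3)))
assert len(S3) == 6
assert sorted(sign(p) for p in S3) == [-1, -1, -1, 1, 1, 1]
for s in S3:
    for t in S3:
        assert sign(compose(s, t)) == sign(s) * sign(t)
```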

2.4. Problem 7. An n × n matrix M = (m_{ij})_{1≤i,j≤n} is called (lower) triangular if m_{ij} = 0 when i < j. Prove that if M is triangular, then

    det M = m_{11} m_{22} \cdots m_{nn},

that is to say, the determinant of M is the product of the diagonal entries of M.

To prove this, we first observe that this is true for a 2 × 2 matrix, using item (6) in the list above:

    det \begin{pmatrix} a & 0 \\ b & c \end{pmatrix} = ac.

For a general triangular matrix, we proceed by induction on the size. So suppose that the formula is true for all triangular matrices of size n, and let A = (a_{ij})_{1≤i,j≤n+1} be a triangular matrix of size n + 1. Since A is triangular, it has the block shape

    A = \begin{pmatrix} a_{11} & 0 \\ X & A_2 \end{pmatrix},    (2.1)

where X is a column of size n and A_2 = (m_{ij})_{1≤i,j≤n} is the new n × n matrix defined by m_{ij} = a_{i+1,j+1}. With this formula, we see that if i < j, then i + 1 < j + 1, and hence m_{ij} = a_{i+1,j+1} = 0, so that A_2 is again a triangular matrix, but of smaller size. Using the shape in (2.1), it is natural to expand with respect to the first row (item (4)). This gives

    det A = a_{11} det A_2.

Using the fact that the formula is true for n × n triangular matrices (hence for A_2), we get

    det A = a_{11} (a_{22} \cdots a_{n+1,n+1}).

This is the formula for (n + 1) × (n + 1) matrices. Consequently, the formula is true for triangular matrices of all sizes: it is true for matrices of size 2 × 2, hence for matrices of size 3 × 3 (for instance det A = a_{11} det A_2 = a_{11} a_{22} a_{33}), hence for matrices of size 4 × 4, and so on.

Brown University
E-mail address: Benoit.Pausader@math.brown.edu
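The formula can be cross-checked against the Leibniz (permutation-sum) expansion of the determinant; the 4 × 4 lower-triangular matrix below is an arbitrary example:

```python
from itertools import permutations

def det(m):
    # Leibniz formula: sum over permutations p of sign(p) * prod_i m[i][p(i)]
    n = len(m)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = (-1) ** inv  # sign of p via its inversion count
        for i in range(n):
            term *= m[i][p[i]]
        total += term
    return total

# Lower triangular: m_ij = 0 whenever i < j
M = [[2, 0, 0, 0],
     [5, 3, 0, 0],
     [1, -4, 7, 0],
     [0, 6, 2, -1]]
assert det(M) == 2 * 3 * 7 * (-1)  # product of the diagonal entries
```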