
# Let A be a square n×n matrix.

The determinant of A is defined by

det(A) = ∑_{σ∈Sₙ} (−1)^σ a1σ(1) a2σ(2) ⋯ anσ(n)

where (−1)^σ is +1 for an even permutation σ and −1 for an odd one.

This means the sum is taken over all n! permutations in Sₙ. Each term in the sum has one element from each row and one from each column. Let's look at the determinants of sizes 1, 2, and 3.

n = 1: There is only one permutation (which is even) and det(A) = a11.

n = 2: There are two permutations, 1 2 (even) and 2 1 (odd), so

det(A) = a11a22 − a12a21.

You can remember this by looking at the two diagonals of

[ a11 a12 ]
[ a21 a22 ]

The main diagonal (top-left to bottom-right) is added and the anti-diagonal is subtracted.

n = 3: There are 6 permutations, 3 even (1 2 3, 2 3 1, 3 1 2) and 3 odd (2 1 3, 1 3 2, 3 2 1), so

det(A) = a11a22a33 + a12a23a31 + a13a21a32 − a12a21a33 − a11a23a32 − a13a22a31.
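The definition above can be turned directly into code. Here is a minimal sketch using only the standard library; the helper names `sign` and `det_leibniz` are my own, not from the notes. It sums one product per permutation, with the sign determined by counting inversions.

```python
from itertools import permutations
from math import prod

def sign(perm):
    """Sign of a permutation given as a tuple of 0-based indices:
    +1 for an even number of inversions, -1 for an odd number."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_leibniz(A):
    """det(A) as the sum over all n! permutations; each term takes
    one entry from every row and every column of A (a list of lists)."""
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# n = 2 check: a11*a22 - a12*a21 = 1*4 - 2*3 = -2
print(det_leibniz([[1, 2], [3, 4]]))                 # -2
# n = 3 check against the six-term formula above
print(det_leibniz([[2, 0, 1], [1, 3, 0], [0, 1, 4]]))  # 25
```

Since the sum has n! terms, this is only practical for small n, but it matches the definition term for term.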

The three positive terms come from the top-left to bottom-right diagonals, with the entries wrapping around cyclically as necessary, while the negative terms come from the top-right to bottom-left diagonals:

[ a11 a12 a13 ]
[ a21 a22 a23 ]
[ a31 a32 a33 ]

n = 4: There are 24 permutations and not enough diagonals in a 4×4 matrix for this mnemonic device to work. Other ways of computing the determinant are needed.

Here are two propositions that follow directly from the definition of the determinant.

Proposition: Let A have a row of zeros. Then det(A) = 0.

Proof: Suppose row i of A is zero. Then every term in det(A) has one factor from row i and hence is zero.

Proposition: If T = [tij] is triangular, then det(T) = t11t22t33 ⋯ tnn, the product of the diagonal entries.

Proof: Assume for convenience that T is lower triangular. Each term in the determinant has one factor from each row and one from each column. Which permutations can possibly give a nonzero product? In row 1, only t11 can be chosen. In row 2, only t22 can be chosen, since column 1 has already been used. Continuing down the rows, only the diagonal element can be chosen in each row for a nonzero product. This means all but one permutation leads to a zero product, and the surviving one is the identity permutation 1 2 3 … n. Since this is an even permutation, det(T) = t11t22 ⋯ tnn. A similar argument works if T is upper triangular.
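Both propositions are easy to check numerically. This sketch uses NumPy's `np.linalg.det` (which computes the determinant by LU factorization, not by the permutation sum, so it serves as an independent check); the matrices `T` and `Z` are made-up examples.

```python
import numpy as np

# A lower-triangular matrix: det(T) should be the diagonal product 2*3*7 = 42.
T = np.array([[2.0, 0.0, 0.0],
              [5.0, 3.0, 0.0],
              [1.0, 4.0, 7.0]])
print(np.linalg.det(T))

# A matrix with a row of zeros: det(Z) should be 0.
Z = np.array([[1.0, 2.0, 3.0],
              [0.0, 0.0, 0.0],
              [4.0, 5.0, 6.0]])
print(np.linalg.det(Z))
```

The printed values agree with the propositions up to floating-point rounding.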

Proposition: Let A be n×n and let B be A with rows k and m interchanged. Then det(B) = −det(A).

Proof: Assume that k < m. We have bij = aij if i is neither k nor m, while bkj = amj and bmj = akj. Then

det(B) = ∑_{σ∈Sₙ} (−1)^σ b1σ(1)b2σ(2) ⋯ bkσ(k) ⋯ bmσ(m) ⋯ bnσ(n)
       = ∑_{σ∈Sₙ} (−1)^σ a1σ(1)a2σ(2) ⋯ amσ(k) ⋯ akσ(m) ⋯ anσ(n).

Switching amσ(k) and akσ(m) puts the row indices in the correct order, but this changes the permutation σ. Write τ for σ with its values at k and m swapped, so τ(k) = σ(m) and τ(m) = σ(k). Then

det(B) = −∑_{τ∈Sₙ} (−1)^τ a1τ(1) ⋯ akτ(k) ⋯ amτ(m) ⋯ anτ(n) = −det(A),

since τ and σ have opposite signs.

Corollary: Let A be an n×n matrix with two rows equal. Then det(A) = 0.

Proof: A = (A with the two equal rows swapped), so by the last proposition det(A) = −det(A), and the result follows.

Proposition: Let A be n×n and let B be obtained from A by multiplying row m by c. Then det(B) = c det(A).

Proof: We have bij = aij if i is not m and bmj = camj. Then

det(B) = ∑_{σ∈Sₙ} (−1)^σ b1σ(1)b2σ(2) ⋯ bmσ(m) ⋯ bnσ(n)
       = ∑_{σ∈Sₙ} (−1)^σ a1σ(1)a2σ(2) ⋯ (camσ(m)) ⋯ anσ(n) = c det(A).

Corollary: det(cA) = cⁿ det(A) if A is n×n.

Proof: Apply the last result to each of the n rows of A.

Proposition: Let A be n×n and let B be obtained from A by adding c times row k to row m. Then det(B) = det(A).

Proof: We have bij = aij if i is not m and bmj = amj + cakj. Then

det(B) = ∑_{σ∈Sₙ} (−1)^σ b1σ(1)b2σ(2) ⋯ bmσ(m) ⋯ bnσ(n)
       = ∑_{σ∈Sₙ} (−1)^σ a1σ(1)a2σ(2) ⋯ (amσ(m) + cakσ(m)) ⋯ anσ(n)     (expand the sum)
       = ∑_{σ∈Sₙ} (−1)^σ a1σ(1) ⋯ amσ(m) ⋯ anσ(n) + c ∑_{σ∈Sₙ} (−1)^σ a1σ(1) ⋯ akσ(k) ⋯ akσ(m) ⋯ anσ(n)
       = det(A) + c det(D),

where D is the matrix A with row k replacing row m. Since D has two identical rows, det(D) = 0, and det(B) = det(A).

Corollary: det(I) = 1.

Now apply these last three results with A as the identity matrix, where E is an elementary matrix. If E is I with two rows swapped, det(E) = −1. If E is I with a row multiplied by c, det(E) = c. If E is I with a multiple of one row added to another, det(E) = 1.

These last three propositions together show det(EA) = det(E) det(A) for any elementary matrix E. This is used to prove that det(AB) = det(A) det(B) for all n×n matrices A and B.
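The effect of each elementary row operation on the determinant can be demonstrated numerically. A sketch using NumPy on a random 4×4 matrix (the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
d = np.linalg.det(A)

# Swap rows 0 and 2: determinant changes sign.
B = A.copy(); B[[0, 2]] = B[[2, 0]]
assert np.isclose(np.linalg.det(B), -d)

# Multiply row 1 by c = 5: determinant is multiplied by 5.
C = A.copy(); C[1] *= 5
assert np.isclose(np.linalg.det(C), 5 * d)

# Add 3 * row 0 to row 3: determinant is unchanged.
D = A.copy(); D[3] += 3 * D[0]
assert np.isclose(np.linalg.det(D), d)

# Scaling the whole n x n matrix multiplies det by c**n (here 2**4).
assert np.isclose(np.linalg.det(2 * A), 2**4 * d)
```

Each assertion matches one of the propositions or corollaries above.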

Cofactor expansion: Here is a sketch of the proof. Assume the expansion is on row 1. The key idea is to collect all of the terms containing a11, then all those with a12, and so on. This means grouping the σ's by the value of σ(1). There are thus n groups in the sum:

det(A) = ∑_{σ∈Sₙ} (−1)^σ a1σ(1)a2σ(2) ⋯ anσ(n)
       = a11 ∑_{σ(1)=1} (−1)^σ a2σ(2) ⋯ anσ(n) + a12 ∑_{σ(1)=2} (−1)^σ a2σ(2) ⋯ anσ(n) + ⋯ + a1n ∑_{σ(1)=n} (−1)^σ a2σ(2) ⋯ anσ(n).

The sum multiplying a11, namely ∑_{σ(1)=1} (−1)^σ a2σ(2) ⋯ anσ(n), has (n−1)! summands, and each one has one entry from each row and column except for row 1 and column 1. Each of these permutations maps the set {2, …, n} to itself and can be thought of as a permutation in Sₙ₋₁. The sum is therefore the determinant of the submatrix of A obtained by removing row 1 and column 1. This is called the (1,1) minor of A, denoted M11. Similarly, each of the other sums is a minor of A, obtained by removing row 1 and column j for each j. The only problem remaining is the sign carried by these restricted permutations; it can be shown that within each group they either all flip signs or all have the same sign as the original permutation, and the pattern is (−1)^(1+j) for the jth term. This leads to the cofactor of A, denoted Cij:

Cij = (−1)^(i+j) Mij.

Then the row-1 expansion becomes

det(A) = a11C11 + a12C12 + a13C13 + ⋯ + a1nC1n = ∑_{j=1}^{n} a1jC1j,

and for any row i,

det(A) = ai1Ci1 + ai2Ci2 + ai3Ci3 + ⋯ + ainCin = ∑_{j=1}^{n} aijCij.

Proposition: det(A) = det(Aᵀ).

Proof: Let B = Aᵀ, so bij = aji. Then

det(B) = ∑_{σ∈Sₙ} (−1)^σ b1σ(1)b2σ(2) ⋯ bnσ(n) = ∑_{σ∈Sₙ} (−1)^σ aσ(1)1aσ(2)2 ⋯ aσ(n)n.

Reorder each product to put the row indices in the natural order; then the column indices appear in the order of σ⁻¹, the inverse of σ. As σ runs through all permutations of Sₙ, so does σ⁻¹, and σ and σ⁻¹ have the same number of inversions. Thus the two expressions are equal and det(B) = det(A).

Any of these results about the rows of a matrix and determinants apply to columns as well: merely apply the result to the transpose of the matrix.
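The row-1 expansion gives a natural recursive algorithm. A minimal sketch (the function name `det_cofactor` is mine; it uses 0-based indices, so the text's row 1 is row 0 here):

```python
def det_cofactor(A):
    """Determinant by cofactor expansion along the first row:
    det(A) = sum_j a_{1j} * C_{1j}, where C_{1j} = (-1)^(1+j) * M_{1j}.
    A is a list of lists."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor M_{1j}: delete the first row and column j.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        # 0-based j gives the sign pattern (-1)**j, i.e. (-1)^(1+j) 1-based.
        total += (-1) ** j * A[0][j] * det_cofactor(minor)
    return total

A = [[2, 0, 1], [1, 3, 0], [0, 1, 4]]
print(det_cofactor(A))   # 25
```

This still does O(n!) work in the worst case, but it mirrors the expansion formula exactly.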

For example, swapping two columns negates det(A), and if two columns of A are equal then det(A) = 0.
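The transpose proposition and its column corollaries can be verified numerically as well. A sketch using NumPy on a random 3×3 matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
d = np.linalg.det(A)

# det(A) = det(A^T)
assert np.isclose(np.linalg.det(A.T), d)

# Swapping two columns negates the determinant.
B = A.copy(); B[:, [0, 2]] = B[:, [2, 0]]
assert np.isclose(np.linalg.det(B), -d)

# Two equal columns force det = 0.
C = A.copy(); C[:, 1] = C[:, 0]
assert np.isclose(np.linalg.det(C), 0.0)
```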