
Existence of the Determinant

Learning Goals: students learn that the determinant really exists, and find some formulas for it.

So far our formula for the determinant is ±(product of pivots). This isn't such a good
formula, because for all we know changing the order of the rows might change the pivots, or at
least the sign. And it doesn't tell us how the determinant depends on any particular entry in the
matrix. So we need some other ways of finding the determinant. "Other," not "better." Why?
Because in practice we find the determinant by reducing and multiplying the pivots (if we bother
finding it at all; it's more theoretically useful than practically useful). In theory, though, other
formulas give us more insight.
Let’s use the properties we have found to discover a new formula.
Start with a matrix
\[
A = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{bmatrix}.
\]
We have the linearity property that says the determinant is linear in each row. So we can split up the determinant as a linear combination along the first row:
\[
\begin{vmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{vmatrix}
= a_{11} \begin{vmatrix} 1 & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix}
+ a_{12} \begin{vmatrix} 0 & 1 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix}
+ \cdots
+ a_{1n} \begin{vmatrix} 0 & 0 & \cdots & 1 \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix}.
\]
So there are n terms. Each of these can be split up into a linear combination along the second row,
and so on, until we get n^n terms which are all possible combinations of one element from each
row, times a determinant of a matrix with one 1 in each row.
But many—indeed most—of these terms are zero. Any of these matrices with two of its 1's in the
same column has two identical rows and thus has determinant zero. The only terms
remaining are the ones where each 1 is in a different column.
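To make the counting concrete, here is a small sketch in plain Python (variable names are my own): for n = 3 it enumerates all n^n = 27 choices of one column per row, then keeps only the choices where every row's 1 lands in a distinct column.

```python
from itertools import product

n = 3
# Each term in the full expansion picks one column index per row,
# giving n**n column-choice tuples in total.
choices = list(product(range(n), repeat=n))

# A term survives only when the chosen columns are all distinct;
# a repeated column means two identical rows, hence determinant zero.
surviving = [c for c in choices if len(set(c)) == n]

print(len(choices), len(surviving))  # 27 total, 6 = 3! survive
```

The 6 surviving tuples are exactly the permutations of the columns, which is where the next section picks up.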

Permutations and permutation matrices

We recognize these matrices as permutation matrices, the P’s we used in row reduction to
swap rows. We will formalize this a little more now:

Definition: a permutation σ is a function whose domain and image are both the set of numbers
{1, 2, …, n}. (Since the image is the whole set, such a function is automatically one-to-one.)

The permutation matrix P is the matrix which has one 1 in each row, and the 1 in row k is
in column σ(k). The determinant of a permutation matrix is either 1 or –1, because after
changing rows around (which changes the sign of the determinant) a permutation matrix
becomes I, whose determinant is one.
Definition: the sign of a permutation, sgn(σ), is the determinant of the corresponding
permutation matrix.
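As a sketch of this definition, the helper below (my own function, not from the text) computes sgn(σ) exactly the way the notes describe: swap entries until the permutation becomes the identity, flipping the sign once per swap, just as row swaps flip the determinant of P.

```python
def sgn_by_swaps(perm):
    """Sort the permutation to the identity by swaps, flipping the
    sign on each swap; this mirrors row-reducing P to I."""
    p = list(perm)  # values in {1, ..., n}
    sign = 1
    for i in range(len(p)):
        while p[i] != i + 1:
            j = p[i] - 1               # position where value p[i] belongs
            p[i], p[j] = p[j], p[i]    # one swap, one sign flip
            sign = -sign
    return sign

print(sgn_by_swaps((2, 1, 3)))  # one swap needed, so -1
print(sgn_by_swaps((3, 1, 2)))  # two swaps needed, so +1
```

Note that this particular procedure always terminates, but it is only one way of reaching the identity; the well-definedness question below is precisely whether every other way gives the same answer.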

Of course, this may not be well defined. How do we know that one way of turning P into
I doesn’t require an odd number of row swaps while a different way of doing it might need an
even number? We still have to prove that sgn(σ) is uniquely defined. But once that is done we
will have proved

Theorem: the determinant of a matrix is
\[
\det A = \sum_{\sigma} a_{1\sigma(1)}\, a_{2\sigma(2)} \cdots a_{n\sigma(n)}\, \operatorname{sgn}(\sigma),
\]
where the summation is taken over all possible permutations σ of {1, 2, …, n}.

The text refers to this as the "big formula." It is quite large—there are n! terms in the
sum. This is one reason why determinants are not computed this way—computationally, n! compared
to n^3 operations for row reduction is a nightmare.
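Despite being impractical, the big formula is easy to implement directly. Here is a sketch in plain Python (function names are mine); it computes sgn(σ) as the parity of the number of out-of-order pairs, a characterization that is justified later in these notes.

```python
from itertools import permutations

def sgn(perm):
    """+1 for an even number of out-of-order pairs, -1 for odd."""
    n = len(perm)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_big_formula(A):
    """Sum over all n! permutations of sgn(sigma) times the product
    a[0][sigma(0)] * ... * a[n-1][sigma(n-1)] (0-based indexing)."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = sgn(sigma)
        for row, col in enumerate(sigma):
            term *= A[row][col]
        total += term
    return total

print(det_big_formula([[1, 2], [3, 4]]))                    # 1*4 - 2*3 = -2
print(det_big_formula([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))   # 2*3*4 = 24
```

Even for a 10 × 10 matrix this loop runs 10! ≈ 3.6 million times, which is the n! versus n^3 nightmare in action.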
This does prove something, though. If the sign of a permutation is well-defined, then
there is a unique function that satisfies our defining properties for determinants. In other words,
we now know that there is at most one determinant. If we show that signs are well-defined and
that this formula does actually satisfy the properties, we will have shown existence, and we're done.

Theorem: sgn(σ) is well-defined.

Proof: we will find a function whose definition makes it obviously well defined, and then
show that it is equivalent to sgn(σ).
To this end, given permutation σ, define its “disorderliness” d(σ) (not a standard
term—just a convenience for this proof) to be the number of pairs (i, j) with i < j but
σ(i) > σ(j). For example, in the permutation (3, 7, 1, 4, 2, 6, 5) (that is, σ(1) = 3, σ(2) = 7,
and so forth) the disordered pairs (i, j) are (1, 3), (1, 5), (2, 3), (2, 4), (2, 5), (2, 6), (2, 7),
(4, 5), and (6, 7), so the disorderliness is 9. It is clear from the definition that the disorderliness
of a permutation is well defined.
We will show that sgn(σ) = 1 if d(σ) is even and –1 if it is odd.
First, if any two adjacent values in σ are switched, the disorderliness changes by
exactly one. For example, d(3, 7, 4, 1, 2, 6, 5) is 10, while that of (3, 7, 1, 4, 2, 6, 5) is 9.
The simple reason for this is that switching two adjacent numbers changes the relative
order of only the pair involving those two numbers.
Now swapping any two terms in σ will change the disorderliness by an odd
number. Why? Because any swap can be achieved by swapping adjacent items k times
(if there are k things between the items being swapped) until the two items to be swapped
are next to each other, swapping them, and then making k more adjacent swaps until the
rest of the terms are back in order. For example, there are two things between the a and
the d below, and we can swap them and leave everything else alone, with 2⋅2 + 1 = 5
adjacent swaps: (a, b, c, d, e) → (b, a, c, d, e) → (b, c, a, d, e) → (b, c, d, a, e) → (b, d, c,
a, e) → (d, b, c, a, e).
Thus any swap will always change the disorderliness by ±1 an odd number of times
(2k + 1 times), so the disorderliness changes by an odd number.
Now the disorderliness of the identity permutation (corresponding to the identity
matrix, of course) is zero. If σ is any permutation of even disorderliness, it always takes
an even number of swaps to turn it into the identity. An odd number can’t do it, because
changing an even number by an odd number an odd number of times can’t leave you with
zero! Similarly, every sequence of swaps that changes a permutation with odd
disorderliness into the identity must have an odd number of swaps.
Since a swap of terms in a permutation corresponds to a swap of rows in a matrix,
the parity of the number of swaps needed to recover the identity is the same for any
particular permutation. So permutations of even disorderliness have sign +1 and permutations
of odd disorderliness have sign –1.
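The disorderliness count from this proof is straightforward to compute. A minimal sketch (the function name is my own) reproducing the examples above:

```python
def disorderliness(sigma):
    """d(sigma): the number of pairs i < j with sigma(i) > sigma(j)."""
    n = len(sigma)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if sigma[i] > sigma[j])

print(disorderliness((3, 7, 1, 4, 2, 6, 5)))   # 9, as in the example
# Swapping the adjacent values 1 and 4 changes d by exactly one:
print(disorderliness((3, 7, 4, 1, 2, 6, 5)))   # 10
```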

Now, we finish by showing that our “big formula” does indeed have the defining
properties of a determinant so that, as promised, the determinant exists and is unique.

Theorem: the formula satisfies the defining qualities of a determinant.

Proof: three of the properties are easy. Namely, det(I) = 1 is trivial, since the identity
permutation gives the only nonzero term in the expansion, and its sign is 1. The
pulling-out-multiples-from-a-row property is similarly easy, since this multiple appears
exactly once in every term of the sum, as we dissect the row in which it was multiplied.
The swapping-rows property is also simple, because the same products of a_ij's occur in the
determinant of the swapped matrix, only attached to the permutation composed with the swap
of those two rows, so all of the signs are changed.
That leaves the add-a-multiple-of-a-row property. So let's say r times row i has
been added to row j. Then (note carefully the subscripts of the second term in the
parentheses!)
\[
\sum_{\sigma} a_{1\sigma(1)} \cdots \bigl(a_{j\sigma(j)} + r\,a_{i\sigma(j)}\bigr) \cdots a_{n\sigma(n)}\, \operatorname{sgn}(\sigma)
= \sum_{\sigma} a_{1\sigma(1)} \cdots a_{j\sigma(j)} \cdots a_{n\sigma(n)}\, \operatorname{sgn}(\sigma)
+ r \sum_{\sigma} a_{1\sigma(1)} \cdots a_{i\sigma(j)} \cdots a_{n\sigma(n)}\, \operatorname{sgn}(\sigma).
\]
The first of these two sums is det(A). The second cancels itself out when we sum over all
sigmas: for each pair of values k and l, the term with σ(i) = k and σ(j) = l has exactly the
same product of a's as the term with σ(i) = l and σ(j) = k but all other values the same.
These two sigmas have opposite signs, so the terms cancel each other in the sum, and thus
the second sum is zero.
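A quick numerical check of this cancellation, reusing a brute-force big-formula determinant (a sketch in plain Python; helper names are mine): adding r times row 1 to row 3 leaves the sum unchanged.

```python
from itertools import permutations

def sgn(perm):
    """Parity of the inversion count."""
    n = len(perm)
    inv = sum(1 for i in range(n) for j in range(i + 1, n)
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det(A):
    """Big-formula determinant: sum over all n! permutations."""
    n = len(A)
    total = 0
    for s in permutations(range(n)):
        term = sgn(s)
        for row, col in enumerate(s):
            term *= A[row][col]
        total += term
    return total

A = [[2, 1, 0],
     [1, 3, 1],
     [0, 1, 4]]
r = 5
B = [row[:] for row in A]
B[2] = [b + r * a for a, b in zip(A[0], A[2])]  # row 3 += r * row 1

print(det(A), det(B))  # both 18: the extra terms cancel in pairs
```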