
Coordinate-Free Approach to Trace and Determinant

Matt Rosenzweig

1 Trace
Let V be a finite-dimensional vector space over a field F, and let T ∈ End(V ). Let {e1 , · · · , en } and {f1 , · · · , fn }
be two bases for V , and let A = (aij ) and B = (bij ) be the matrices of T with respect to {e1 , · · · , en } and
{f1 , · · · , fn }, respectively. Then
$$
b_{11} + \cdots + b_{nn} = \sum_{i=1}^{n} f_i^*(Tf_i) = \sum_{i=1}^{n} f_i^*\big(T(e_1^*(f_i)e_1 + \cdots + e_n^*(f_i)e_n)\big)
$$
$$
= \sum_{i=1}^{n} e_1^*(f_i)f_i^*(Te_1) + \cdots + \sum_{i=1}^{n} e_n^*(f_i)f_i^*(Te_n)
$$
$$
= e_1^*\Big(\sum_{i=1}^{n} f_i^*(Te_1)f_i\Big) + \cdots + e_n^*\Big(\sum_{i=1}^{n} f_i^*(Te_n)f_i\Big)
$$
$$
= e_1^*(Te_1) + \cdots + e_n^*(Te_n) = a_{11} + \cdots + a_{nn}
$$

So if we define the trace of an operator to be the sum of the diagonal entries of any matrix representation, the
trace is well-defined.
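As a numerical sanity check (an illustration, not part of the argument), the following Python/NumPy sketch verifies that the sum of diagonal entries is invariant under a change of basis, i.e. under conjugation B = P⁻¹AP:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))   # matrix of T in the basis {e_i}
P = rng.standard_normal((n, n))   # change-of-basis matrix (invertible almost surely)

# Matrix of T in the new basis {f_i}: B = P^{-1} A P
B = np.linalg.inv(P) @ A @ P

# The diagonal sums agree up to floating-point error
assert np.isclose(np.trace(A), np.trace(B))
```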

I claim that End(V) ≅ V ⊗ V∗. Indeed, the bilinear map

(v, w∗) ↦ w∗(·)v

from V × V∗ to End(V) induces a linear map V ⊗ V∗ → End(V), which is an isomorphism since V is finite-dimensional.

Let e1 , · · · , en form a basis for V and denote the dual basis by e∗1 , · · · , e∗n . For T ∈ End(V ) fixed, we can write

$$
Tv = e_1^*(v)Te_1 + \cdots + e_n^*(v)Te_n,
$$
so that under the isomorphism above, T corresponds to $Te_1 \otimes e_1^* + \cdots + Te_n \otimes e_n^* \in V \otimes V^*$.

By the universal property of the tensor product, there is a unique linear map ev : V ⊗ V∗ → F satisfying

$$
\mathrm{ev} : v \otimes w^* \mapsto w^*(v)
$$

which we call the evaluation map. We define the trace of the endomorphism T, denoted tr(T), to be the image under ev of the element of V ⊗ V∗ corresponding to T.
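To make the identification End(V) ≅ V ⊗ V∗ concrete, here is a small Python sketch (the representation of a tensor as a list of (vector, covector) pairs is my own illustration, not notation from the text): T is stored as Σᵢ Teᵢ ⊗ eᵢ∗, and applying ev recovers the matrix trace.

```python
import numpy as np

n = 3
A = np.arange(1.0, 10.0).reshape(n, n)   # matrix of T in the standard basis

# T as an element of V ⊗ V*:  T = sum_i (T e_i) ⊗ e_i*
basis = np.eye(n)
pairs = [(A @ basis[i], basis[i]) for i in range(n)]   # (vector, covector) pairs

# ev(v ⊗ w*) = w*(v), extended linearly over the pairs
ev = sum(w @ v for v, w in pairs)

assert np.isclose(ev, np.trace(A))   # both equal 1 + 5 + 9 = 15
```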
We now show that this coordinate-free definition of trace coincides with the usual matrix-dependent definition.
Let e1 , · · · , en be as above, and suppose that the matrix of T with respect to this basis is given by

$$
A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix}
$$

The trace of the matrix A is $\sum_{j=1}^{n} a_{jj}$. Since

T ei = a1i e1 + · · · + ani en ,

we see that

$$
\operatorname{tr}(T) = e_1^*(Te_1) + \cdots + e_n^*(Te_n) = e_1^*\Big(\sum_{j=1}^{n} a_{j1}e_j\Big) + \cdots + e_n^*\Big(\sum_{j=1}^{n} a_{jn}e_j\Big) = a_{11} + \cdots + a_{nn}
$$

2 Determinant
Let T : V → V be a linear transformation of V over a field F. T induces a linear transformation
$$
\Lambda(T) : \bigwedge^{n}(V) \to \bigwedge^{n}(V), \qquad v_1 \wedge \cdots \wedge v_n \mapsto Tv_1 \wedge \cdots \wedge Tv_n
$$

Since $\bigwedge^{n}(V)$ is 1-dimensional (it has basis $e_1 \wedge \cdots \wedge e_n$, where $e_1, \cdots, e_n$ form a basis of V), there exists a unique scalar α, which depends on T, such that

$$
\Lambda(T)(v_1 \wedge \cdots \wedge v_n) = \alpha\,(v_1 \wedge \cdots \wedge v_n)
$$

We define det(T) to be this scalar α.
It turns out that α coincides with the usual Leibniz formula for the determinant of an n × n matrix taught in
intro linear algebra classes. Indeed, decomposing v1 ∧ · · · ∧ vn , we see that
$$
v_1 \wedge \cdots \wedge v_n = \Big(\sum_{i=1}^{n} v_1^i e_i\Big) \wedge \cdots \wedge \Big(\sum_{i=1}^{n} v_n^i e_i\Big) = \sum_{\sigma \in S_n} v_1^{\sigma(1)} \cdots v_n^{\sigma(n)}\, e_{\sigma(1)} \wedge \cdots \wedge e_{\sigma(n)}
$$
$$
= \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, v_1^{\sigma(1)} \cdots v_n^{\sigma(n)}\, e_1 \wedge \cdots \wedge e_n,
$$

where terms with a repeated basis vector vanish, so only permutations survive in the middle sum.

We know that T can be represented as a matrix A = (aij) with respect to the basis {e1 , · · · , en }, where the kth column consists of the scalars in the expansion T ek = a1k e1 + · · · + ank en . Hence,
$$
\Lambda(T)(v_1 \wedge \cdots \wedge v_n) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, v_1^{\sigma(1)} \cdots v_n^{\sigma(n)}\; Te_1 \wedge \cdots \wedge Te_n
$$
$$
= \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, v_1^{\sigma(1)} \cdots v_n^{\sigma(n)} \Big(\sum_{j=1}^{n} a_{j1}e_j\Big) \wedge \cdots \wedge \Big(\sum_{j=1}^{n} a_{jn}e_j\Big)
$$
$$
= \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, v_1^{\sigma(1)} \cdots v_n^{\sigma(n)} \sum_{\tau \in S_n} \operatorname{sgn}(\tau)\, a_{\tau(1)1} \cdots a_{\tau(n)n}\; e_1 \wedge \cdots \wedge e_n
$$
$$
= \sum_{\tau \in S_n} \operatorname{sgn}(\tau)\, a_{\tau(1)1} \cdots a_{\tau(n)n}\; v_1 \wedge \cdots \wedge v_n
$$
$$
= \sum_{\tau \in S_n} \operatorname{sgn}(\tau)\, a_{1\tau^{-1}(1)} \cdots a_{n\tau^{-1}(n)}\; v_1 \wedge \cdots \wedge v_n
$$

The last equality shows that det(Aᵀ) = det(A), since τ ↦ τ⁻¹ defines a bijection of the symmetric group Sₙ with sgn(τ⁻¹) = sgn(τ).
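The Leibniz formula and the identity det(Aᵀ) = det(A) can be checked numerically; the sketch below (my own illustration, with the sign of a permutation computed by counting inversions) compares a brute-force permutation sum against NumPy's determinant:

```python
import itertools
import numpy as np

def sgn(perm):
    """Sign of a permutation (given as a tuple) via its inversion count."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def leibniz_det(A):
    """det(A) = sum over tau in S_n of sgn(tau) * prod_j A[tau(j), j]."""
    n = A.shape[0]
    return sum(sgn(tau) * np.prod([A[tau[j], j] for j in range(n)])
               for tau in itertools.permutations(range(n)))

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

assert np.isclose(leibniz_det(A), leibniz_det(A.T))   # det(A^T) = det(A)
assert np.isclose(leibniz_det(A), np.linalg.det(A))
```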
The coordinate-free approach to determinants allows for easier verification of many of the basic properties of
determinants. Suppose T, S : V → V are endomorphisms. I claim that det(T S) = det(T ) det(S). Indeed,

$$
\det(TS)\, e_1 \wedge \cdots \wedge e_n = \Lambda(TS)(e_1 \wedge \cdots \wedge e_n) = TSe_1 \wedge \cdots \wedge TSe_n
$$
$$
= \Lambda(T)(Se_1 \wedge \cdots \wedge Se_n) = \Lambda(T)\big(\det(S)\, e_1 \wedge \cdots \wedge e_n\big)
$$
$$
= \det(S)\,\Lambda(T)(e_1 \wedge \cdots \wedge e_n) = \det(T)\det(S)\, e_1 \wedge \cdots \wedge e_n
$$
Since the determinant of the identity endomorphism I is 1, we see from the multiplicativity of the determinant
that for an invertible transformation T , det(T −1 ) = (det T )−1 . If c is some scalar, then since Λ(cT ) = cn Λ(T ),
it follows that det(cT ) = cn det(T ). Suppose σ ∈ Sn is some permutation of {1, · · · , n} and S is the linear
transformation obtained from T by
Sej = T eσ(j) , ∀j = 1, · · · , n
Then
$$
\Lambda(S)(e_1 \wedge \cdots \wedge e_n) = Se_1 \wedge \cdots \wedge Se_n = Te_{\sigma(1)} \wedge \cdots \wedge Te_{\sigma(n)} = \operatorname{sgn}(\sigma)\, Te_1 \wedge \cdots \wedge Te_n = \operatorname{sgn}(\sigma)\, \Lambda(T)(e_1 \wedge \cdots \wedge e_n),
$$

which implies that det(S) = sgn(σ) det(T ). In particular, if we switch two columns of the matrix giving T , the determinant changes by a factor of −1. By considering the transpose, we see that switching two rows also changes the determinant by a factor of −1.
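These properties — det(T⁻¹) = (det T)⁻¹, det(cT) = cⁿ det(T), and the sign change under a column swap — can be spot-checked numerically; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
c = 2.5

# det(cT) = c^n det(T)
assert np.isclose(np.linalg.det(c * A), c**n * np.linalg.det(A))

# Swapping two columns flips the sign of the determinant
B = A[:, [1, 0, 2]]          # swap columns 0 and 1
assert np.isclose(np.linalg.det(B), -np.linalg.det(A))

# det(T^{-1}) = det(T)^{-1}
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / np.linalg.det(A))
```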

Suppose now that there exists a basis {e1 , · · · , en } for V with respect to which the matrix of T is upper-
triangular. In this case, I claim that the determinant of T is simply the product of the diagonal entries of the
matrix. Indeed, in the expression
$$
\det(T) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, a_{\sigma(1)1} \cdots a_{\sigma(n)n},
$$

we see that a term can be nonzero only if σ(i) ≤ i for i = 1, · · · , n, by the hypothesis that the matrix (aik ) is upper-triangular. By induction (σ(1) ≤ 1 forces σ(1) = 1, which in turn forces σ(2) = 2, and so on), such a σ must be the identity permutation. Therefore, det(T ) = a11 · · · ann . In particular, the determinant of a diagonal matrix is the product of its diagonal entries. So if we can somehow get an n × n matrix A into upper-triangular form, then we can quickly compute its determinant.
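A quick numerical check of the triangular case (illustration only, on a hand-picked matrix):

```python
import numpy as np

U = np.array([[2.0, 7.0, 1.0],
              [0.0, 3.0, 5.0],
              [0.0, 0.0, 4.0]])

# For an upper-triangular matrix, only the identity permutation contributes,
# so the determinant is the product of the diagonal entries: 2 * 3 * 4 = 24.
assert np.isclose(np.linalg.det(U), np.prod(np.diag(U)))
```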
What about block-diagonal and block-upper-triangular matrices? Suppose there exists a basis {e1 , · · · , en , f1 , · · · , fm } for V such that span{e1 , · · · , en } and span{f1 , · · · , fm } are both T -invariant subspaces of V . Let S, S′ : V → V be defined by

$$
Se_j = Te_j,\quad S'e_j = 0 \quad \forall j = 1, \cdots, n \qquad \text{and} \qquad Sf_j = 0,\quad S'f_j = Tf_j \quad \forall j = 1, \cdots, m
$$


Since $\bigwedge^{n+m}(V) \cong \big(\bigwedge^{n} V\big) \wedge \big(\bigwedge^{m} V\big)$, we see that

$$
\Lambda(T)(e_1 \wedge \cdots \wedge e_n \wedge f_1 \wedge \cdots \wedge f_m) = Te_1 \wedge \cdots \wedge Te_n \wedge Tf_1 \wedge \cdots \wedge Tf_m
$$
$$
= (Se_1 + S'e_1) \wedge \cdots \wedge (Se_n + S'e_n) \wedge (Sf_1 + S'f_1) \wedge \cdots \wedge (Sf_m + S'f_m)
$$
$$
= Se_1 \wedge \cdots \wedge Se_n \wedge S'f_1 \wedge \cdots \wedge S'f_m
$$
$$
= \Lambda(S)(e_1 \wedge \cdots \wedge e_n) \wedge \Lambda(S')(f_1 \wedge \cdots \wedge f_m)
$$

where the penultimate equality follows from the (n + m)-linearity of the exterior product, since S′ej = 0 and Sfj = 0. By induction, we see that if an n × n matrix A is block-diagonal, with blocks A1 , · · · , Ak of size n1 × n1 , · · · , nk × nk , respectively (n1 + · · · + nk = n), then

$$
\det(A) = \det(A_1) \cdots \det(A_k)
$$
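The block-diagonal formula can likewise be checked numerically; in the sketch below the block matrix is assembled with np.block (my choice of tooling, not from the text):

```python
import numpy as np

A1 = np.array([[1.0, 2.0],
               [3.0, 4.0]])
A2 = np.array([[0.0, 1.0],
               [-1.0, 0.0]])

# Assemble the block-diagonal matrix [[A1, 0], [0, A2]]
Z = np.zeros((2, 2))
A = np.block([[A1, Z], [Z, A2]])

# det(A) = det(A1) * det(A2)
assert np.isclose(np.linalg.det(A),
                  np.linalg.det(A1) * np.linalg.det(A2))
```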
