
Differential Forms

The Algebra of Alternating Tensors


Definition. For any finite-dimensional real vector space $V$, let $\Lambda^k(V)$ denote the subspace of $T^k(V)$ consisting of alternating tensors ($k$-covectors).
There are no nonzero alternating $k$-tensors on $V$ if $k > \dim V$.
Every 0-tensor (which is a real number) is alternating, because there is no argument to interchange.
Similarly, every 1-tensor is alternating.
An alternating 2-tensor is just a skew-symmetric bilinear form on $V$.
Note that any 2-tensor $T$ can be expressed as the sum of an alternating tensor and a symmetric one, because
$$T(X,Y) = \tfrac{1}{2}\bigl(T(X,Y) - T(Y,X)\bigr) + \tfrac{1}{2}\bigl(T(X,Y) + T(Y,X)\bigr) = A(X,Y) + S(X,Y),$$
where
$$A(X,Y) = \tfrac{1}{2}\bigl(T(X,Y) - T(Y,X)\bigr)$$
is alternating, and
$$S(X,Y) = \tfrac{1}{2}\bigl(T(X,Y) + T(Y,X)\bigr)$$
is symmetric.
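In components this is just the decomposition of a matrix into its skew-symmetric and symmetric parts; a quick NumPy check (not part of the text, variable names are mine):

```python
import numpy as np

# A 2-tensor on R^3, represented by its component matrix T_ij = T(E_i, E_j).
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))

A = (T - T.T) / 2   # alternating (skew-symmetric) part
S = (T + T.T) / 2   # symmetric part

assert np.allclose(T, A + S)   # the decomposition recovers T
assert np.allclose(A, -A.T)    # A(X, Y) = -A(Y, X)
assert np.allclose(S, S.T)     # S(X, Y) = S(Y, X)
```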
Let $S_k$ denote the group of permutations of the set $\{1, \dots, k\}$.
Given a $k$-tensor $T$ and a permutation $\sigma \in S_k$, define a new tensor ${}^\sigma T$ by
$${}^\sigma T(X_1, \dots, X_k) = T(X_{\sigma(1)}, \dots, X_{\sigma(k)}).$$
The tensor $S$ defined above is $\operatorname{Sym} T = \frac{1}{k!} \sum_{\sigma \in S_k} {}^\sigma T$, the symmetrization of $T$.
Definition. The map $\operatorname{Alt} \colon T^k(V) \to \Lambda^k(V)$, called the alternating projection, is defined as follows:
$$\operatorname{Alt} T = \frac{1}{k!} \sum_{\sigma \in S_k} (\operatorname{sgn}\sigma)\,({}^\sigma T).$$
More explicitly, this means
$$\operatorname{Alt} T(X_1, \dots, X_k) = \frac{1}{k!} \sum_{\sigma \in S_k} (\operatorname{sgn}\sigma)\, T(X_{\sigma(1)}, \dots, X_{\sigma(k)}).$$
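The alternating projection is easy to implement on component arrays; the following NumPy sketch (the helper names `sgn` and `alt` are mine, not from the text) builds $\operatorname{Alt} T$ for an arbitrary 3-tensor and checks that the result is alternating and is fixed by a second application of $\operatorname{Alt}$:

```python
import itertools
import math
import numpy as np

def sgn(perm):
    """Sign of a permutation given as a tuple of 0-based indices."""
    inversions = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def alt(T):
    """Alternating projection of a covariant k-tensor stored as a k-dim array,
    with components T[i_1, ..., i_k] = T(E_{i_1}, ..., E_{i_k})."""
    k = T.ndim
    total = sum(sgn(p) * np.transpose(T, p)
                for p in itertools.permutations(range(k)))
    return total / math.factorial(k)

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3, 3))   # an arbitrary 3-tensor on R^3
A = alt(T)

assert np.allclose(np.swapaxes(A, 0, 1), -A)   # Alt T is alternating
assert np.allclose(alt(A), A)                  # Alt fixes alternating tensors
```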
Examples. If $T$ is any 1-tensor, then $\operatorname{Alt} T = T$.
If $T$ is a 2-tensor, then
$$\operatorname{Alt} T(X,Y) = \tfrac{1}{2}\bigl(T(X,Y) - T(Y,X)\bigr).$$
For a 3-tensor $T$,
$$\operatorname{Alt} T(X,Y,Z) = \tfrac{1}{6}\bigl(T(X,Y,Z) + T(Y,Z,X) + T(Z,X,Y) - T(Y,X,Z) - T(X,Z,Y) - T(Z,Y,X)\bigr).$$
Lemma 3 (Properties of the Alternating Projection).
(a) For any tensor $T$, $\operatorname{Alt} T$ is alternating.
(b) $T$ is alternating iff $\operatorname{Alt} T = T$.
Typeset by AMS-TeX
Elementary Alternating Tensors
Definition. Let $k \in \mathbb{N}$. An ordered $k$-tuple $I = (i_1, \dots, i_k)$ of positive integers is called a multi-index of length $k$.
If $I$ is such a multi-index and $\sigma \in S_k$ is a permutation, we write $I_\sigma$ for the multi-index
$$I_\sigma = (i_{\sigma(1)}, \dots, i_{\sigma(k)}).$$
Note that $I_{\sigma\tau} = (I_\sigma)_\tau$ for $\sigma, \tau \in S_k$.
If $I$ and $J$ are multi-indices of length $k$, we define
$$\delta^I_J = \begin{cases} \operatorname{sgn}\sigma & \text{if neither $I$ nor $J$ has a repeated index and $J = I_\sigma$ for some $\sigma \in S_k$,} \\ 0 & \text{if $I$ or $J$ has a repeated index or $J$ is not a permutation of $I$.} \end{cases}$$
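This bookkeeping symbol is straightforward to compute; a small Python sketch (function names are mine, not from the text):

```python
def perm_sign(perm):
    """Sign of a permutation (tuple of 0-based indices), via inversion count."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def delta(I, J):
    """delta^I_J: sgn(sigma) if J = I_sigma with no repeated indices, else 0."""
    if len(set(I)) < len(I) or len(set(J)) < len(J):
        return 0                       # repeated index
    if sorted(I) != sorted(J):
        return 0                       # J is not a permutation of I
    # J[a] = I[sigma(a)] determines sigma uniquely (no repeats)
    sigma = tuple(I.index(j) for j in J)
    return perm_sign(sigma)

assert delta((1, 3), (3, 1)) == -1     # one transposition
assert delta((1, 2, 3), (1, 2, 3)) == 1
assert delta((1, 1), (1, 1)) == 0      # repeated index
assert delta((1, 2), (1, 3)) == 0      # not a permutation of I
```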
Let $V$ be an $n$-dimensional vector space, and suppose $(\varepsilon^1, \dots, \varepsilon^n)$ is any basis for $V^*$.
We will define a collection of alternating tensors on $V$ that generalize the determinant function on $\mathbb{R}^n$.
For each multi-index $I = (i_1, \dots, i_k)$ of length $k$ such that $1 \le i_1, \dots, i_k \le n$, define a covariant $k$-tensor $\varepsilon^I$ by
$$\varepsilon^I(X_1, \dots, X_k) = \det \begin{pmatrix} \varepsilon^{i_1}(X_1) & \cdots & \varepsilon^{i_1}(X_k) \\ \vdots & \ddots & \vdots \\ \varepsilon^{i_k}(X_1) & \cdots & \varepsilon^{i_k}(X_k) \end{pmatrix} \tag{12.1}$$
$$= \det \begin{pmatrix} X_1^{i_1} & \cdots & X_k^{i_1} \\ \vdots & \ddots & \vdots \\ X_1^{i_k} & \cdots & X_k^{i_k} \end{pmatrix}.$$
In other words, if $X$ denotes the matrix whose columns are the components of the vectors $X_1, \dots, X_k$ with respect to the basis $(E_i)$ dual to $(\varepsilon^i)$, then $\varepsilon^I(X_1, \dots, X_k)$ is the determinant of the $k \times k$ minor consisting of rows $i_1, \dots, i_k$ of $X$.
Because the determinant changes sign whenever two columns are interchanged, it is clear that $\varepsilon^I$ is an alternating $k$-tensor.
Definition. $\varepsilon^I$ is called an elementary alternating tensor or elementary $k$-covector.
Example. In terms of the standard dual basis $(e^1, e^2, e^3)$ for $(\mathbb{R}^3)^*$, we have
$$e^{13}(X, Y) = X^1 Y^3 - Y^1 X^3; \qquad e^{123}(X, Y, Z) = \det(X, Y, Z).$$
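Equivalently, $\varepsilon^I$ picks out the determinant of a $k \times k$ minor; a NumPy sketch (function name is mine) that reproduces the example above:

```python
import numpy as np

def eps(I, *vectors):
    """epsilon^I(X_1, ..., X_k): determinant of rows i_1, ..., i_k (1-based,
    as in the text) of the matrix whose columns are the X's."""
    X = np.column_stack(vectors)
    rows = [i - 1 for i in I]
    return np.linalg.det(X[rows, :])

X = np.array([1.0, 2.0, 3.0])
Y = np.array([4.0, 5.0, 6.0])
Z = np.array([7.0, 8.0, 10.0])

# e^{13}(X, Y) = X^1 Y^3 - Y^1 X^3
assert np.isclose(eps((1, 3), X, Y), X[0] * Y[2] - Y[0] * X[2])
# e^{123}(X, Y, Z) = det(X, Y, Z)
assert np.isclose(eps((1, 2, 3), X, Y, Z),
                  np.linalg.det(np.column_stack([X, Y, Z])))
```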
Lemma 4. Let $(E_i)$ be a basis for $V$, let $(\varepsilon^i)$ be the dual basis for $V^*$, and let $\varepsilon^I$ be as defined above.
(a) If $I$ has a repeated index, then $\varepsilon^I = 0$.
(b) If $J = I_\sigma$ for some $\sigma \in S_k$, then $\varepsilon^I = (\operatorname{sgn}\sigma)\,\varepsilon^J$.
(c) $\varepsilon^I(E_{j_1}, \dots, E_{j_k}) = \delta^I_J$.
The significance of the elementary $k$-covectors is that they provide a convenient basis for $\Lambda^k(V)$.
Definition. A multi-index $I = (i_1, \dots, i_k)$ is said to be increasing if $i_1 < \dots < i_k$.
It will be useful to use a primed summation sign to denote a sum over only increasing multi-indices, so that, for example,
$${\sum_I}' T_I\,\varepsilon^I = \sum_{\{I \,:\, 1 \le i_1 < \dots < i_k \le n\}} T_I\,\varepsilon^I.$$
Proposition 5. Let $V$ be an $n$-dimensional vector space. If $(\varepsilon^i)$ is any basis for $V^*$, then for each positive integer $k \le n$, the collection of $k$-covectors
$$\mathcal{E} = \{\varepsilon^I : I \text{ is an increasing multi-index of length } k\}$$
is a basis for $\Lambda^k(V)$. Therefore
$$\dim \Lambda^k(V) = \binom{n}{k} = \frac{n!}{k!\,(n-k)!}.$$
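The count of increasing multi-indices behind this dimension formula can be checked directly (the illustrative values $n = 5$, $k = 3$ are mine):

```python
from itertools import combinations
from math import comb

n, k = 5, 3
# Increasing multi-indices of length k with entries in {1, ..., n}
increasing = list(combinations(range(1, n + 1), k))

assert len(increasing) == comb(n, k) == 10   # dim of Lambda^3 of a 5-dim space
```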
Proof. (I) If $k > n$, then $\Lambda^k(V)$ is the trivial vector space, since any $k$ vectors are linearly dependent.
(II) If $k \le n$, we need to show that $\mathcal{E}$ spans $\Lambda^k(V)$ and is independent.
Let $(E_i)$ be the basis for $V$ dual to $(\varepsilon^i)$.
(i) To show that $\mathcal{E}$ spans $\Lambda^k(V)$, let $T \in \Lambda^k(V)$ be arbitrary.
For each multi-index $I = (i_1, \dots, i_k)$, define a real number $T_I$ by
$$T_I = T(E_{i_1}, \dots, E_{i_k}).$$
Since $T$ is alternating,
$$T_I = 0 \quad \text{if $I$ contains a repeated index,}$$
and
$$T_J = (\operatorname{sgn}\sigma)\,T_I \quad \text{if } J = I_\sigma \text{ for } \sigma \in S_k.$$
For any multi-index $J$, Lemma 4 thus gives
$${\sum_I}' T_I\,\varepsilon^I(E_{j_1}, \dots, E_{j_k}) = {\sum_I}' T_I\,\delta^I_J = T_J = T(E_{j_1}, \dots, E_{j_k}).$$
Therefore ${\sum_I}' T_I\,\varepsilon^I = T$. Hence $\mathcal{E}$ spans $\Lambda^k(V)$.
(ii) To show that $\mathcal{E}$ is an independent set, suppose
$${\sum_I}' T_I\,\varepsilon^I = 0$$
for some coefficients $T_I$.
Let $J$ be any increasing multi-index. Applying both sides to $(E_{j_1}, \dots, E_{j_k})$ and using Lemma 4,
$$0 = {\sum_I}' T_I\,\varepsilon^I(E_{j_1}, \dots, E_{j_k}) = T_J.$$
Thus each coefficient $T_J = 0$.
In particular, for an $n$-dimensional vector space $V$, this proposition implies that $\Lambda^n(V)$ is 1-dimensional and is spanned by $\varepsilon^{1 \cdots n}$.
By definition, this elementary $n$-covector acts on vectors $(X_1, \dots, X_n)$ by taking the determinant of their component matrix $X = (X^i_j)$.
For example, on $\mathbb{R}^n$ with the standard basis, $\varepsilon^{1 \cdots n}$ is precisely the determinant function.
Lemma 6. Suppose $V$ is an $n$-dimensional vector space and $\omega \in \Lambda^n(V)$. If $T \colon V \to V$ is any linear map and $X_1, \dots, X_n$ are arbitrary vectors in $V$, then
$$\omega(TX_1, \dots, TX_n) = (\det T)\,\omega(X_1, \dots, X_n). \tag{2}$$
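Since $\Lambda^n(\mathbb{R}^n)$ is spanned by the determinant, checking (2) for $\omega = \det$ comes down to the multiplicativity of the determinant; a quick NumPy spot check (values are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
T = rng.standard_normal((n, n))   # matrix of a linear map T : R^n -> R^n
X = rng.standard_normal((n, n))   # columns are the vectors X_1, ..., X_n

# omega = det: omega(T X_1, ..., T X_n) = det(T @ X) = det(T) det(X)
lhs = np.linalg.det(T @ X)
rhs = np.linalg.det(T) * np.linalg.det(X)
assert np.isclose(lhs, rhs)
```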
Proof. Let $(E_i)$ be any basis for $V$, and let $(\varepsilon^i)$ be the dual basis.
Since both sides of (2) are multilinear functions of $X_1, \dots, X_n$, it suffices to verify it in the special case $X_i = E_i$, $i = 1, \dots, n$.
By Proposition 5, we can write $\omega = c\,\varepsilon^{1 \cdots n}$ for some $c \in \mathbb{R}$.
Hence, we have
$$(\det T)\,\omega(E_1, \dots, E_n) = (\det T)\,c\,\varepsilon^{1 \cdots n}(E_1, \dots, E_n) = c \det T.$$
On the other hand, letting $(T^j_i)$ denote the matrix of $T$ with respect to this basis, and letting $T_i = TE_i$, we have
$$\omega(TE_1, \dots, TE_n) = c\,\varepsilon^{1 \cdots n}(T_1, \dots, T_n) = c \det\bigl(\varepsilon^j(T_i)\bigr) = c \det(T^j_i).$$
The Wedge Product

Definition. If $\omega \in \Lambda^k(V)$ and $\eta \in \Lambda^\ell(V)$, we define the wedge product or exterior product of $\omega$ and $\eta$ to be the alternating $(k+\ell)$-tensor
$$\omega \wedge \eta = \frac{(k+\ell)!}{k!\,\ell!}\,\operatorname{Alt}(\omega \otimes \eta). \tag{12.3}$$
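Definition (12.3) translates directly into code on component arrays; the sketch below (helper names are mine, not from the text) checks that for two 1-covectors, $\omega \wedge \eta = \omega \otimes \eta - \eta \otimes \omega$:

```python
import itertools
import math
import numpy as np

def sgn(p):
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return -1 if inv % 2 else 1

def alt(T):
    """Alternating projection on a covariant k-tensor stored as a k-dim array."""
    k = T.ndim
    return sum(sgn(p) * np.transpose(T, p)
               for p in itertools.permutations(range(k))) / math.factorial(k)

def wedge(w, e):
    """Wedge product of alternating tensors, per (12.3)."""
    k, l = w.ndim, e.ndim
    coeff = math.factorial(k + l) // (math.factorial(k) * math.factorial(l))
    return coeff * alt(np.multiply.outer(w, e))   # outer = tensor product

w = np.array([1.0, 2.0, 3.0])   # a 1-covector on R^3
e = np.array([4.0, 5.0, 6.0])

assert np.allclose(wedge(w, e), np.outer(w, e) - np.outer(e, w))
assert np.allclose(wedge(w, e), -wedge(e, w))   # anticommutativity for k = l = 1
```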
Lemma 7. Let $(\varepsilon^1, \dots, \varepsilon^n)$ be a basis for $V^*$. For any multi-indices $I = (i_1, \dots, i_k)$ and $J = (j_1, \dots, j_\ell)$,
$$\varepsilon^I \wedge \varepsilon^J = \varepsilon^{IJ}, \tag{4}$$
where $IJ$ is the multi-index $(i_1, \dots, i_k, j_1, \dots, j_\ell)$ obtained by concatenating $I$ and $J$.
Proof. By multilinearity, it suffices to show that
$$\varepsilon^I \wedge \varepsilon^J(E_{p_1}, \dots, E_{p_{k+\ell}}) = \varepsilon^{IJ}(E_{p_1}, \dots, E_{p_{k+\ell}}) \tag{5}$$
for any sequence $(E_{p_1}, \dots, E_{p_{k+\ell}})$ of basis vectors. We consider several cases:
Case I: $P = (p_1, \dots, p_{k+\ell})$ has a repeated index.
In this case, both sides of (5) are zero.
Case II: $P$ contains an index that does not appear in either $I$ or $J$.
In this case, both sides of (5) are zero by Lemma 4(c).
Case III: $P = IJ$ and $P$ has no repeated indices.
In this case, the right-hand side of (5) is 1, so we need to show that the left-hand side is also equal to 1. By definition,
$$\varepsilon^I \wedge \varepsilon^J(E_{p_1}, \dots, E_{p_{k+\ell}}) = \frac{(k+\ell)!}{k!\,\ell!}\,\operatorname{Alt}(\varepsilon^I \otimes \varepsilon^J)(E_{p_1}, \dots, E_{p_{k+\ell}})$$
$$= \frac{1}{k!\,\ell!} \sum_{\sigma \in S_{k+\ell}} (\operatorname{sgn}\sigma)\,\varepsilon^I(E_{p_{\sigma(1)}}, \dots, E_{p_{\sigma(k)}})\,\varepsilon^J(E_{p_{\sigma(k+1)}}, \dots, E_{p_{\sigma(k+\ell)}}).$$
By Lemma 4 again, the only terms in the sum above that give nonzero values are those in which $\sigma$ permutes the first $k$ indices and the last $\ell$ indices of $P$ separately.
In other words, $\sigma$ must be of the form $\sigma = \tau\eta$, where $\tau \in S_k$ acts by permuting $\{1, \dots, k\}$ and $\eta \in S_\ell$ acts by permuting $\{k+1, \dots, k+\ell\}$.
Since $\operatorname{sgn}(\tau\eta) = (\operatorname{sgn}\tau)(\operatorname{sgn}\eta)$, we have
$$\varepsilon^I \wedge \varepsilon^J(E_{p_1}, \dots, E_{p_{k+\ell}}) = \frac{1}{k!\,\ell!} \sum_{\tau \in S_k,\ \eta \in S_\ell} (\operatorname{sgn}\tau)(\operatorname{sgn}\eta)\,\varepsilon^I(E_{p_{\tau(1)}}, \dots, E_{p_{\tau(k)}})\,\varepsilon^J(E_{p_{\eta(k+1)}}, \dots, E_{p_{\eta(k+\ell)}})$$
$$= \left(\frac{1}{k!} \sum_{\tau \in S_k} (\operatorname{sgn}\tau)\,\varepsilon^I(E_{p_{\tau(1)}}, \dots, E_{p_{\tau(k)}})\right)\left(\frac{1}{\ell!} \sum_{\eta \in S_\ell} (\operatorname{sgn}\eta)\,\varepsilon^J(E_{p_{\eta(k+1)}}, \dots, E_{p_{\eta(k+\ell)}})\right)$$
$$= (\operatorname{Alt}\varepsilon^I)(E_{p_1}, \dots, E_{p_k})\,(\operatorname{Alt}\varepsilon^J)(E_{p_{k+1}}, \dots, E_{p_{k+\ell}})$$
$$= \varepsilon^I(E_{p_1}, \dots, E_{p_k})\,\varepsilon^J(E_{p_{k+1}}, \dots, E_{p_{k+\ell}}) = 1.$$
Case IV: $P$ is a permutation of $IJ$ and $P$ has no repeated indices.
In this case, applying a permutation to $P$ brings us back to Case III.
Since the effect of the permutation is to multiply both sides of (5) by the same sign, the result holds in this case as well.
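Lemma 7 can be spot-checked numerically by building the component arrays of the $\varepsilon^I$ and the wedge product straight from their definitions (all helper names are mine, not from the text):

```python
import itertools
import math
import numpy as np

def sgn(p):
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return -1 if inv % 2 else 1

def alt(T):
    k = T.ndim
    return sum(sgn(p) * np.transpose(T, p)
               for p in itertools.permutations(range(k))) / math.factorial(k)

def wedge(w, e):
    k, l = w.ndim, e.ndim
    coeff = math.factorial(k + l) // (math.factorial(k) * math.factorial(l))
    return coeff * alt(np.multiply.outer(w, e))

def eps(I, n):
    """Component array of epsilon^I on R^n (I uses 1-based indices, as in
    the text): T[j_1, ..., j_k] = det(delta^{i_a}_{j_b + 1})."""
    k = len(I)
    T = np.zeros((n,) * k)
    for idx in itertools.product(range(n), repeat=k):
        M = np.array([[1.0 if i - 1 == j else 0.0 for j in idx] for i in I])
        T[idx] = np.linalg.det(M)
    return T

n = 4
# Lemma 7: epsilon^{(1,2)} wedge epsilon^{(3)} = epsilon^{(1,2,3)}
assert np.allclose(wedge(eps((1, 2), n), eps((3,), n)), eps((1, 2, 3), n))
```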
Proposition 8 (Properties of the Wedge Product).
(a) Bilinearity:
$$(a\omega + a'\omega') \wedge \eta = a\,(\omega \wedge \eta) + a'\,(\omega' \wedge \eta),$$
$$\eta \wedge (a\omega + a'\omega') = a\,(\eta \wedge \omega) + a'\,(\eta \wedge \omega').$$
(b) Associativity: $\omega \wedge (\eta \wedge \xi) = (\omega \wedge \eta) \wedge \xi$.
(c) Anticommutativity: For $\omega \in \Lambda^k(V)$ and $\eta \in \Lambda^\ell(V)$, $\omega \wedge \eta = (-1)^{k\ell}\,\eta \wedge \omega$.
(d) If $(\varepsilon^1, \dots, \varepsilon^n)$ is any basis for $V^*$ and $I = (i_1, \dots, i_k)$ is any multi-index, then
$$\varepsilon^{i_1} \wedge \dots \wedge \varepsilon^{i_k} = \varepsilon^I. \tag{7}$$
(e) For any covectors $\omega^1, \dots, \omega^k$ and vectors $X_1, \dots, X_k$,
$$\omega^1 \wedge \dots \wedge \omega^k(X_1, \dots, X_k) = \det\bigl(\omega^j(X_i)\bigr). \tag{8}$$
Proof. (a) follows immediately from the definition, because the tensor product is bilinear and $\operatorname{Alt}$ is linear.
(b) To prove associativity, note that Lemma 7 gives
$$(\varepsilon^I \wedge \varepsilon^J) \wedge \varepsilon^K = \varepsilon^{IJ} \wedge \varepsilon^K = \varepsilon^{IJK} = \varepsilon^I \wedge \varepsilon^{JK} = \varepsilon^I \wedge (\varepsilon^J \wedge \varepsilon^K).$$
The general case follows from bilinearity.
(c) Using Lemma 7 again, we obtain
$$\varepsilon^I \wedge \varepsilon^J = \varepsilon^{IJ} = (\operatorname{sgn}\tau)\,\varepsilon^{JI} = (\operatorname{sgn}\tau)\,\varepsilon^J \wedge \varepsilon^I,$$
where $\tau$ is the permutation that sends $IJ$ to $JI$. We have
$$\operatorname{sgn}\tau = (-1)^{k\ell},$$
since $\tau$ can be decomposed as a composition of $k\ell$ transpositions (each index of $I$ must be moved past each of the $\ell$ indices of $J$).
Anticommutativity then follows from bilinearity.
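The sign $(-1)^{k\ell}$ can be verified by counting inversions of the block-swap permutation directly (a small illustrative sketch, not part of the text):

```python
def block_swap_sign(k, l):
    """Sign of the permutation sending (1..k, k+1..k+l) to (k+1..k+l, 1..k)."""
    perm = list(range(k, k + l)) + list(range(k))
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

# Every pair (index of I, index of J) contributes one inversion: k*l in total.
for k in range(1, 5):
    for l in range(1, 5):
        assert block_swap_sign(k, l) == (-1) ** (k * l)
```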
(d) is an immediate consequence of Lemma 7 and induction.
(e) To prove (e), we note that the special case in which each $\omega^j$ is one of the basis covectors $\varepsilon^{i_j}$ reduces to (7).
Since both sides of (8) are multilinear in $(\omega^1, \dots, \omega^k)$, this suffices.
For any $n$-dimensional vector space $V$, define a vector space $\Lambda(V)$ by
$$\Lambda(V) = \bigoplus_{k=0}^n \Lambda^k(V).$$
It follows from Proposition 5 that $\dim \Lambda(V) = 2^n$.
Proposition 8 shows that the wedge product turns $\Lambda(V)$ into an associative algebra, called the exterior algebra of $V$.
This algebra is not commutative, but it has a closely related property.
Definition. An algebra $A$ is said to be graded if it has a direct sum decomposition $A = \bigoplus_k A^k$ such that the product satisfies
$$(A^k)(A^\ell) \subseteq A^{k+\ell}.$$
A graded algebra is anticommutative if the product satisfies $ab = (-1)^{k\ell}\,ba$ for $a \in A^k$, $b \in A^\ell$.
Proposition 8(c) shows that $\Lambda(V)$ is an anticommutative graded algebra.
Differential Forms on Manifolds
Definition. The subset of $T^k M$ consisting of alternating tensors is denoted by $\Lambda^k M$:
$$\Lambda^k M = \coprod_{p \in M} \Lambda^k(T_p M).$$
Definition. A section of $\Lambda^k M$ is called a differential $k$-form, or a $k$-form; this is a (continuous) tensor field whose value at each point is an alternating tensor.
The integer $k$ is called the degree of the form.
Definition. Denote the vector space of smooth sections of $\Lambda^k M$ by $\mathcal{A}^k(M)$.
In any smooth chart, a $k$-form $\omega$ can be written locally as
$$\omega = {\sum_I}'\, \omega_I\, dx^{i_1} \wedge \dots \wedge dx^{i_k} = {\sum_I}'\, \omega_I\, dx^I,$$
where the coefficients $\omega_I$ are continuous functions defined on the coordinate domain, and we use $dx^I$ as an abbreviation for $dx^{i_1} \wedge \dots \wedge dx^{i_k}$.
In terms of differential forms, the result of Lemma 4(c) translates to
$$dx^{i_1} \wedge \dots \wedge dx^{i_k}\left(\frac{\partial}{\partial x^{j_1}}, \dots, \frac{\partial}{\partial x^{j_k}}\right) = \delta^I_J.$$
Thus the component functions $\omega_I$ of $\omega$ are determined by
$$\omega_I = \omega\left(\frac{\partial}{\partial x^{i_1}}, \dots, \frac{\partial}{\partial x^{i_k}}\right).$$
A 0-form is just a continuous real-valued function.
A 1-form is a covector field.
Definition. The wedge product of two differential forms is defined pointwise:
$$(\omega \wedge \eta)_p = \omega_p \wedge \eta_p.$$
Thus the wedge product of a $k$-form with an $\ell$-form is a $(k+\ell)$-form.
If $f$ is a 0-form and $\eta$ is a $k$-form, we interpret the wedge product $f \wedge \eta$ to mean the ordinary product $f\eta$.
If we define
$$\mathcal{A}^*(M) = \bigoplus_{k=0}^n \mathcal{A}^k(M), \tag{11}$$
the wedge product turns $\mathcal{A}^*(M)$ into an associative, anticommutative graded algebra.
Definition. If $F \colon M \to N$ is a smooth map and $\omega$ is a smooth differential form on $N$, the pullback $F^*\omega$ is a smooth differential form on $M$, defined as for any smooth tensor field:
$$(F^*\omega)_p(X_1, \dots, X_k) = \omega_{F(p)}(F_* X_1, \dots, F_* X_k).$$
If $\iota \colon N \hookrightarrow M$ is the inclusion map of an immersed submanifold, then we use the notation $\omega|_N$ for $\iota^*\omega$.
Lemma 10. Suppose $F \colon M \to N$ is smooth.
(a) $F^* \colon \mathcal{A}^k(N) \to \mathcal{A}^k(M)$ is linear.
(b) $F^*(\omega \wedge \eta) = (F^*\omega) \wedge (F^*\eta)$.
(c) In any smooth chart,
$$F^*\left({\sum_I}'\, \omega_I\, dy^{i_1} \wedge \dots \wedge dy^{i_k}\right) = {\sum_I}'\, (\omega_I \circ F)\, d(y^{i_1} \circ F) \wedge \dots \wedge d(y^{i_k} \circ F).$$
Proposition 12. Let $F \colon M \to N$ be a smooth map between $n$-manifolds. If $(x^i)$ and $(y^j)$ are smooth coordinates on open sets $U \subseteq M$ and $V \subseteq N$, respectively, and $u$ is a smooth real-valued function on $V$, then the following holds on $U \cap F^{-1}(V)$:
$$F^*(u\, dy^1 \wedge \dots \wedge dy^n) = (u \circ F)(\det DF)\, dx^1 \wedge \dots \wedge dx^n, \tag{12}$$
where $DF$ represents the matrix of partial derivatives of $F$ in coordinates.
Proof. It suffices to show that both sides of (12) give the same result when evaluated on $(\partial/\partial x^1, \dots, \partial/\partial x^n)$. From Lemma 10,
$$F^*(u\, dy^1 \wedge \dots \wedge dy^n) = (u \circ F)\, dF^1 \wedge \dots \wedge dF^n.$$
Proposition 8(e) shows that
$$dF^1 \wedge \dots \wedge dF^n\left(\frac{\partial}{\partial x^1}, \dots, \frac{\partial}{\partial x^n}\right) = \det\left(dF^j\left(\frac{\partial}{\partial x^i}\right)\right) = \det\left(\frac{\partial F^j}{\partial x^i}\right).$$
Therefore
$$F^*(u\, dy^1 \wedge \dots \wedge dy^n)\left(\frac{\partial}{\partial x^1}, \dots, \frac{\partial}{\partial x^n}\right) = (u \circ F) \det DF.$$
Corollary 13. If $(U, (x^i))$ and $(\widetilde{U}, (\widetilde{x}^j))$ are overlapping smooth coordinate charts on $M$, then the following identity holds on $U \cap \widetilde{U}$:
$$d\widetilde{x}^1 \wedge \dots \wedge d\widetilde{x}^n = \det\left(\frac{\partial \widetilde{x}^j}{\partial x^i}\right) dx^1 \wedge \dots \wedge dx^n. \tag{13}$$
Proof. Apply Proposition 12 with $F$ equal to the identity map of $U \cap \widetilde{U}$, but using coordinates $(x^i)$ in the domain and $(\widetilde{x}^j)$ in the range.