Shlomo Sternberg
April 23, 2004
Contents

1 The Campbell Baker Hausdorff Formula
  1.1 The problem.
  1.2 The geometric version of the CBH formula.
  1.3 The Maurer-Cartan equations.
  1.4 Proof of CBH from Maurer-Cartan.
  1.5 The differential of the exponential and its inverse.
  1.6 The averaging method.
  1.7 The Euler MacLaurin Formula.
  1.8 The universal enveloping algebra.
    1.8.1 Tensor product of vector spaces.
    1.8.2 The tensor product of two algebras.
    1.8.3 The tensor algebra of a vector space.
    1.8.4 Construction of the universal enveloping algebra.
    1.8.5 Extension of a Lie algebra homomorphism to its universal enveloping algebra.
    1.8.6 Universal enveloping algebra of a direct sum.
    1.8.7 Bialgebra structure.
  1.9 The Poincaré-Birkhoff-Witt Theorem.
  1.10 Primitives.
  1.11 Free Lie algebras
    1.11.1 Magmas and free magmas on a set
    1.11.2 The Free Lie Algebra L_X.
    1.11.3 The free associative algebra Ass(X).
  1.12 Algebraic proof of CBH and explicit formulas.
    1.12.1 Abstract version of CBH and its algebraic proof.
    1.12.2 Explicit formula for CBH.

2 sl(2) and its Representations.
  2.1 Low dimensional Lie algebras.
  2.2 sl(2) and its irreducible representations.
  2.3 The Casimir element.
  2.4 sl(2) is simple.
  2.5 Complete reducibility.
  2.6 The Weyl group.

3 The classical simple algebras.
  3.1 Graded simplicity.
  3.2 sl(n + 1)
  3.3 The orthogonal algebras.
  3.4 The symplectic algebras.
  3.5 The root structures.
    3.5.1 A_n = sl(n + 1).
    3.5.2 C_n = sp(2n), n ≥ 2.
    3.5.3 D_n = o(2n), n ≥ 3.
    3.5.4 B_n = o(2n + 1), n ≥ 2.
    3.5.5 Diagrammatic presentation.
  3.6 Low dimensional coincidences.
  3.7 Extended diagrams.

4 Engel-Lie-Cartan-Weyl
  4.1 Engel's theorem
  4.2 Solvable Lie algebras.
  4.3 Linear algebra
  4.4 Cartan's criterion.
  4.5 Radical.
  4.6 The Killing form.
  4.7 Complete reducibility.

5 Conjugacy of Cartan subalgebras.
  5.1 Derivations.
  5.2 Cartan subalgebras.
  5.3 Solvable case.
  5.4 Toral subalgebras and Cartan subalgebras.
  5.5 Roots.
  5.6 Bases.
  5.7 Weyl chambers.
  5.8 Length.
  5.9 Conjugacy of Borel subalgebras

6 The simple finite dimensional algebras.
  6.1 Simple Lie algebras and irreducible root systems.
  6.2 The maximal root and the minimal root.
  6.3 Graphs.
  6.4 Perron-Frobenius.
  6.5 Classification of the irreducible ∆.
  6.6 Classification of the irreducible root systems.
  6.7 The classification of the possible simple Lie algebras.

7 Cyclic highest weight modules.
  7.1 Verma modules.
  7.2 When is dim Irr(λ) < ∞?
  7.3 The value of the Casimir.
  7.4 The Weyl character formula.
  7.5 The Weyl dimension formula.
  7.6 The Kostant multiplicity formula.
  7.7 Steinberg's formula.
  7.8 The Freudenthal - de Vries formula.
  7.9 Fundamental representations.
  7.10 Equal rank subgroups.

8 Serre's theorem.
  8.1 The Serre relations.
  8.2 The first five relations.
  8.3 Proof of Serre's theorem.
  8.4 The existence of the exceptional root systems.

9 Clifford algebras and spin representations.
  9.1 Definition and basic properties
    9.1.1 Definition.
    9.1.2 Gradation.
    9.1.3 ∧p as a C(p) module.
    9.1.4 Chevalley's linear identification of C(p) with ∧p.
    9.1.5 The canonical antiautomorphism.
    9.1.6 Commutator by an element of p.
    9.1.7 Commutator by an element of ∧²p.
  9.2 Orthogonal action of a Lie algebra.
    9.2.1 Expression for ν in terms of dual bases.
    9.2.2 The adjoint action of a reductive Lie algebra.
  9.3 The spin representations.
    9.3.1 The even dimensional case.
    9.3.2 The odd dimensional case.
    9.3.3 Spin ad and V_ρ.

10 The Kostant Dirac operator
  10.1 Antisymmetric trilinear forms.
  10.2 Jacobi and Clifford.
  10.3 Orthogonal extension of a Lie algebra.
  10.4 The value of [v² + ν(Cas_r)]₀.
  10.5 Kostant's Dirac Operator.
  10.6 Eigenvalues of the Dirac operator.
  10.7 The geometric index theorem.
    10.7.1 The index of equivariant Fredholm maps.
    10.7.2 Induced representations and Bott's theorem.
    10.7.3 Landweber's index theorem.

11 The center of U(g).
  11.1 The Harish-Chandra isomorphism.
    11.1.1 Statement
    11.1.2 Example of sl(2).
    11.1.3 Using Verma modules to prove that γ_H : Z(g) → U(h)^W.
    11.1.4 Outline of proof of bijectivity.
    11.1.5 Restriction from S(g*)^g to S(h*)^W.
    11.1.6 From S(g)^g to S(h)^W.
    11.1.7 Completion of the proof.
  11.2 Chevalley's theorem.
    11.2.1 Transcendence degrees.
    11.2.2 Symmetric polynomials.
    11.2.3 Fixed fields.
    11.2.4 Invariants of finite groups.
    11.2.5 The Hilbert basis theorem.
    11.2.6 Proof of Chevalley's theorem.
Chapter 1

The Campbell Baker Hausdorff Formula
1.1 The problem.
Recall the power series:

exp A = 1 + A + (1/2)A² + (1/3!)A³ + ⋯ ,   log(1 + A) = A − (1/2)A² + (1/3)A³ − ⋯ .
We want to study these series in a ring where convergence makes sense; for example in the ring of n × n matrices. The exponential series converges everywhere, and the series for the logarithm converges in a small enough neighborhood of the origin. Of course,

log(exp A) = A,   exp(log(1 + A)) = 1 + A

where these series converge, or as formal power series.
In particular, if A and B are two elements which are close enough to 0 we can study the convergent series

log[(exp A)(exp B)]

which will yield an element C such that exp C = (exp A)(exp B). The problem is that A and B need not commute. For example, if we retain only the linear and constant terms in the series we find

log[(1 + A + ⋯)(1 + B + ⋯)] = log(1 + A + B + ⋯) = A + B + ⋯ .

On the other hand, if we go out to terms of second order, the non-commutativity begins to enter:

log[(1 + A + (1/2)A² + ⋯)(1 + B + (1/2)B² + ⋯)]
  = A + B + (1/2)A² + AB + (1/2)B² − (1/2)(A + B + ⋯)²
  = A + B + (1/2)[A, B] + ⋯

where

[A, B] := AB − BA   (1.1)

is the commutator of A and B, also known as the Lie bracket of A and B.
Collecting the terms of degree three we get, after some computation,

(1/12)[A²B + AB² + B²A + BA² − 2ABA − 2BAB] = (1/12)[A, [A, B]] + (1/12)[B, [B, A]].

This suggests that the series for log[(exp A)(exp B)] can be expressed entirely in terms of successive Lie brackets of A and B. This is so, and is the content of the Campbell-Baker-Hausdorff formula.
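A quick numerical check of this expansion is possible with matrices. The following sketch is not part of the text; numpy/scipy, the 4 × 4 size, the scale 0.1 and the random seed are arbitrary choices.

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(1)
A, B = 0.1 * rng.standard_normal((2, 4, 4))   # two small random 4x4 matrices
br = lambda X, Y: X @ Y - Y @ X               # the Lie bracket [X, Y]

lhs = logm(expm(A) @ expm(B))
rhs = (A + B + br(A, B) / 2
       + br(A, br(A, B)) / 12 + br(B, br(B, A)) / 12)
# small; the discrepancy is of degree four in (A, B), so it shrinks
# roughly 16-fold if A and B are halved
print(np.linalg.norm(lhs - rhs))
```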
One of the important consequences of the mere existence of this formula is the following. Suppose that g is the Lie algebra of a Lie group G. Then the local structure of G near the identity, i.e. the rule for the product of two elements of G sufficiently close to the identity, is determined by its Lie algebra g. Indeed, the exponential map is locally a diffeomorphism from a neighborhood of the origin in g onto a neighborhood V of the identity, and if U ⊂ V is a (possibly smaller) neighborhood of the identity such that U · U ⊂ V, the product of a = exp ξ and b = exp η, with a ∈ U and b ∈ U, is then completely expressed in terms of successive Lie brackets of ξ and η.
We will give two proofs of this important theorem. One will be geometric - the explicit formula for the series for log[(exp A)(exp B)] will involve integration, and so makes sense over the real or complex numbers. We will derive the formula from the "Maurer-Cartan equations" which we will explain in the course of our discussion. Our second version will be more algebraic. It will involve such ideas as the universal enveloping algebra, comultiplication and the Poincaré-Birkhoff-Witt theorem. In both proofs, many of the key ideas are at least as important as the theorem itself.
1.2 The geometric version of the CBH formula.

To state this formula we introduce some notation. Let ad A denote the operation of bracketing on the left by A, so

ad A(B) := [A, B].

Define the function ψ by

ψ(z) = z log z / (z − 1)

which is defined as a convergent power series around the point z = 1, so

ψ(1 + u) = (1 + u) log(1 + u)/u = (1 + u)(1 − u/2 + u²/3 − ⋯) = 1 + u/2 − u²/6 + ⋯ .

In fact, we will also take this as a definition of the formal power series for ψ in terms of u. The Campbell-Baker-Hausdorff formula says that

log((exp A)(exp B)) = A + ∫₀¹ ψ((exp ad A)(exp t ad B)) B dt.   (1.2)
Remarks.

1. The formula says that we are to substitute

u = (exp ad A)(exp t ad B) − 1

into the definition of ψ, apply this operator to the element B and then integrate. In carrying out this computation we can ignore all terms in the expansion of ψ in terms of ad A and ad B where a factor of ad B occurs on the right, since (ad B)B = 0. For example, to obtain the expansion through terms of degree three in the Campbell-Baker-Hausdorff formula, we need only retain quadratic and lower order terms in u, and so

u = ad A + (1/2)(ad A)² + t ad B + (t²/2)(ad B)² + ⋯
u² = (ad A)² + t(ad B)(ad A) + ⋯

∫₀¹ (1 + u/2 − u²/6) dt = 1 + (1/2) ad A + (1/12)(ad A)² − (1/12)(ad B)(ad A) + ⋯

where the dots indicate either higher order terms or terms with ad B occurring on the right. So up through degree three (1.2) gives

log(exp A)(exp B) = A + B + (1/2)[A, B] + (1/12)[A, [A, B]] − (1/12)[B, [A, B]] + ⋯

agreeing with our preceding computation.
2. The meaning of the exponential function on the left hand side of the Campbell-Baker-Hausdorff formula differs from its meaning on the right. On the right hand side, exponentiation takes place in the algebra of endomorphisms of the ring in question. In fact, we will want to make a fundamental reinterpretation of the formula. We want to think of A, B, etc. as elements of a Lie algebra, g. Then the exponentiations on the right hand side of (1.2) are still taking place in End(g). On the other hand, if g is the Lie algebra of a Lie group G, then there is an exponential map exp : g → G, and this is what is meant by the exponentials on the left of (1.2). This exponential map is a diffeomorphism on some neighborhood of the origin in g, and its inverse, log, is defined in some neighborhood of the identity in G. This is the meaning we will attach to the logarithm occurring on the left in (1.2).

3. The most crucial consequence of the Campbell-Baker-Hausdorff formula is that it shows that the local structure of the Lie group G (the multiplication law for elements near the identity) is completely determined by its Lie algebra.

4. For example, we see from the right hand side of (1.2) that group multiplication and group inverse are analytic if we use exponential coordinates.
5. Consider the function τ defined by

τ(w) := w / (1 − e^{−w}).   (1.3)

This is a familiar function from analysis, as it enters into the Euler-Maclaurin formula, see below. (It is the exponential generating function of (−1)^k b_k, where the b_k are the Bernoulli numbers.) Then

ψ(z) = τ(log z).
6. The formula is named after three mathematicians, Campbell, Baker, and Hausdorff. But this is a misnomer. Substantially earlier than the works of any of these three, there appeared a paper by Friedrich Schur, "Neue Begruendung der Theorie der endlichen Transformationsgruppen," Mathematische Annalen 35 (1890), 161-197. Schur writes down, as convergent power series, the composition law for a Lie group in terms of "canonical coordinates", i.e., in terms of linear coordinates on the Lie algebra. He writes down recursive relations for the coefficients, obtaining a version of the formulas we will give below. I am indebted to Prof. Schmid for this reference.
Our strategy for the proof of (1.2) will be to prove a differential version of it:

d/dt log((exp A)(exp tB)) = ψ((exp ad A)(exp t ad B)) B.   (1.4)

Since log((exp A)(exp tB)) = A when t = 0, integrating (1.4) from 0 to 1 will prove (1.2). Let us define Γ = Γ(t) = Γ(t, A, B) by

Γ = log((exp A)(exp tB)).   (1.5)

Then

exp Γ = exp A exp tB

and so

d/dt exp Γ(t) = exp A (d/dt exp tB) = exp A (exp tB)B = (exp Γ(t))B,  so
(exp −Γ(t)) (d/dt) exp Γ(t) = B.

We will prove (1.4) by finding a general expression for

exp(−C(t)) (d/dt) exp(C(t))

where C = C(t) is a curve in the Lie algebra, g, see (1.11) below.
In our derivation of (1.4) from (1.11) we will make use of an important property of the adjoint representation which we might as well state now: For any a ∈ G, define the linear transformation

Ad a : g → g,  X ↦ aXa⁻¹.

(In geometrical terms, this can be thought of as follows: (the differential of) left multiplication by a carries g = T_I(G) into the tangent space T_a(G) to G at the point a. Right multiplication by a⁻¹ carries this tangent space back to g, and so the combined operation is a linear map of g into itself which we call Ad a.) Notice that Ad is a representation in the sense that

Ad(ab) = (Ad a)(Ad b)  ∀ a, b ∈ G.

In particular, for any A ∈ g, we have the one parameter family of linear transformations Ad(exp tA) and

d/dt Ad(exp tA)X = (exp tA)AX(exp −tA) + (exp tA)X(−A)(exp −tA) = (exp tA)[A, X](exp −tA),  so

d/dt Ad(exp tA) = Ad(exp tA) ∘ ad A.

But ad A is a linear transformation acting on g, and the solution to the differential equation

d/dt M(t) = M(t) ad A,  M(0) = I

(in the space of linear transformations of g) is exp t ad A. Thus Ad(exp tA) = exp(t ad A). Setting t = 1 gives the important formula

Ad(exp A) = exp(ad A).   (1.6)

As an application, consider the Γ introduced above. We have

exp(ad Γ) = Ad(exp Γ) = Ad((exp A)(exp tB)) = (Ad exp A)(Ad exp tB) = (exp ad A)(exp t ad B)

hence

ad Γ = log((exp ad A)(exp t ad B)).   (1.7)
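For matrix Lie algebras, formula (1.6) can be checked directly. The sketch below is mine, not the text's; it realizes ad A as an n² × n² matrix acting on the row-major vectorization of X.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 3
A, X = rng.standard_normal((2, n, n))
A = 0.5 * A                     # keep exp(ad A) well conditioned
I = np.eye(n)

# ad A as a matrix on row-major vec(X): vec(AX - XA) = (A⊗I - I⊗A^T) vec(X)
adA = np.kron(A, I) - np.kron(I, A.T)

lhs = expm(A) @ X @ expm(-A)                       # Ad(exp A) X
rhs = (expm(adA) @ X.reshape(-1)).reshape(n, n)    # exp(ad A) applied to X
print(np.allclose(lhs, rhs))                       # True
```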
1.3 The Maurer-Cartan equations.

If G is a Lie group and γ = γ(t) is a curve on G with γ(0) = A ∈ G, then A⁻¹γ is a curve which passes through the identity at t = 0. Hence (A⁻¹γ)′(0) is a tangent vector at the identity, i.e. an element of g, the Lie algebra of G.
In this way, we have defined a linear differential form θ on G with values in g. In case G is a subgroup of the group of all invertible n × n matrices (say over the real numbers), we can write this form as

θ = A⁻¹ dA.

We can then think of the A occurring above as a collection of n² real valued functions on G (the matrix entries considered as functions on the group) and dA as the matrix of differentials of these functions. The above equation giving θ is then just matrix multiplication. For simplicity, we will work in this case, although the main theorem, equation (1.8) below, works for any Lie group and is quite standard.
The definitions of the groups we are considering amount to constraints on A, and then differentiating these constraints shows that A⁻¹dA takes values in g, and gives a description of g. It is best to explain this by examples:
• O(n): AA† = I, so dA A† + A dA† = 0, or

(A⁻¹dA) + (A⁻¹dA)† = 0.

o(n) consists of antisymmetric matrices.

• Sp(n): Let

J := ( 0  I ; −I  0 )

and let Sp(n) consist of all matrices satisfying

AJA† = J.

Then

dA J A† + A J dA† = 0

or

(A⁻¹dA)J + J(A⁻¹dA)† = 0.

The equation BJ + JB† = 0 defines the Lie algebra sp(n).

• Let J be as above and define Gl(n, C) to consist of all invertible matrices satisfying

AJ = JA.

Then

dA J = J dA

and so

A⁻¹dA J = A⁻¹J dA = J A⁻¹dA.
We return to general considerations: Let us take the exterior derivative of the defining equation θ = A⁻¹dA. For this we need to compute d(A⁻¹): Since

d(AA⁻¹) = 0

we have

dA · A⁻¹ + A · d(A⁻¹) = 0

or

d(A⁻¹) = −A⁻¹ dA A⁻¹.

This is the generalization to matrices of the formula in elementary calculus for the derivative of 1/x. Using this formula we get

dθ = d(A⁻¹dA) = −(A⁻¹dA A⁻¹) ∧ dA = −A⁻¹dA ∧ A⁻¹dA

or the Maurer-Cartan equation

dθ + θ ∧ θ = 0.   (1.8)

If we use commutator instead of multiplication we would write this as

dθ + (1/2)[θ, θ] = 0.   (1.9)

The Maurer-Cartan equation is of central importance in geometry and physics, far more important than the Campbell-Baker-Hausdorff formula itself.
Suppose we have a map g : R² → G, with s, t coordinates on the plane. Pull θ back to the plane, so

g*θ = g⁻¹(∂g/∂s) ds + g⁻¹(∂g/∂t) dt.

Define

α = α(s, t) := g⁻¹ ∂g/∂s

and

β = β(s, t) := g⁻¹ ∂g/∂t

so that

g*θ = α ds + β dt.

Then collecting the coefficient of ds ∧ dt in the Maurer-Cartan equation gives

∂β/∂s − ∂α/∂t + [α, β] = 0.   (1.10)

This is the version of the Maurer-Cartan equation we shall use in our proof of the Campbell Baker Hausdorff formula. Of course this version is completely equivalent to the general version, since a two form is determined by its restriction to all two dimensional surfaces.
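Equation (1.10) can be tested numerically for a concrete map into a matrix group. The following sketch is not from the text; the choice g(s, t) = exp(sX)exp(tY) and the finite differences standing in for the partial derivatives are mine.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
X, Y = 0.4 * rng.standard_normal((2, 3, 3))
g = lambda s, t: expm(s * X) @ expm(t * Y)

def alpha(s, t, h=1e-5):       # g^{-1} dg/ds by a central difference
    return np.linalg.inv(g(s, t)) @ (g(s + h, t) - g(s - h, t)) / (2 * h)

def beta(s, t, h=1e-5):        # g^{-1} dg/dt
    return np.linalg.inv(g(s, t)) @ (g(s, t + h) - g(s, t - h)) / (2 * h)

s0, t0, h = 0.3, 0.7, 1e-4
dbeta_ds = (beta(s0 + h, t0) - beta(s0 - h, t0)) / (2 * h)
dalpha_dt = (alpha(s0, t0 + h) - alpha(s0, t0 - h)) / (2 * h)
a, b = alpha(s0, t0), beta(s0, t0)
# (1.10): d(beta)/ds - d(alpha)/dt + [alpha, beta] = 0, up to finite-difference error
print(np.linalg.norm(dbeta_ds - dalpha_dt + a @ b - b @ a))   # ~1e-6
```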
1.4 Proof of CBH from Maurer-Cartan.

Let C(t) be a curve in the Lie algebra g and let us apply (1.10) to

g(s, t) := exp[sC(t)]

so that

α(s, t) = g⁻¹ ∂g/∂s = exp[−sC(t)] exp[sC(t)]C(t) = C(t),
β(s, t) = g⁻¹ ∂g/∂t = exp[−sC(t)] (∂/∂t) exp[sC(t)],

so by (1.10)

∂β/∂s − C′(t) + [C(t), β] = 0.

For fixed t consider the last equation as the differential equation (in s)

dβ/ds = −(ad C)β + C′,  β(0) = 0

where C := C(t), C′ := C′(t).
If we expand β(s, t) as a formal power series in s (for fixed t):

β(s, t) = a₁s + a₂s² + a₃s³ + ⋯

and compare coefficients in the differential equation we obtain a₁ = C′, and

n aₙ = −(ad C)a_{n−1}

or

β(s, t) = sC′(t) + (1/2)s²(−ad C(t))C′(t) + ⋯ + (1/n!)sⁿ(−ad C(t))^{n−1}C′(t) + ⋯ .
If we define

φ(z) := (e^z − 1)/z = 1 + (1/2!)z + (1/3!)z² + ⋯

and set s = 1 in the expression we derived above for β(s, t) we get

exp(−C(t)) (d/dt) exp(C(t)) = φ(−ad C(t))C′(t).   (1.11)
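Formula (1.11) is also easy to test numerically for matrices. In the sketch below, which is not from the text, the curve C(t) = X + tY is an arbitrary choice, ad C(t) is realized as an n² × n² matrix, and φ is evaluated by its power series.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n = 3
X, Y = 0.3 * rng.standard_normal((2, n, n))
C = lambda t: X + t * Y            # an arbitrary curve in gl(n); C'(t) = Y
I = np.eye(n)
ad = lambda M: np.kron(M, I) - np.kron(I, M.T)   # acts on row-major vec

def phi(Z, terms=30):              # phi(z) = (e^z - 1)/z as a power series in Z
    out = term = np.eye(Z.shape[0])
    for k in range(1, terms):
        term = term @ Z / (k + 1)  # term = Z^k/(k+1)!
        out = out + term
    return out

t, h = 0.5, 1e-6
lhs = expm(-C(t)) @ (expm(C(t + h)) - expm(C(t - h))) / (2 * h)
rhs = (phi(-ad(C(t))) @ Y.reshape(-1)).reshape(n, n)
print(np.linalg.norm(lhs - rhs))   # ~1e-9, i.e. zero up to the finite difference
```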
Now to the proof of the Campbell-Baker-Hausdorff formula. Suppose that A and B are chosen sufficiently near the origin so that

Γ = Γ(t) = Γ(t, A, B) := log((exp A)(exp tB))

is defined for all |t| ≤ 1. Then, as we remarked,

exp Γ = exp A exp tB

so exp ad Γ = (exp ad A)(exp t ad B) and hence

ad Γ = log((exp ad A)(exp t ad B)).

We have

d/dt exp Γ(t) = exp A (d/dt exp tB) = exp A (exp tB)B = (exp Γ(t))B,  so
(exp −Γ(t)) (d/dt) exp Γ(t) = B  and therefore
φ(−ad Γ(t))Γ′(t) = B  by (1.11), so
φ(−log((exp ad A)(exp t ad B)))Γ′(t) = B.

Now for |z − 1| < 1

φ(−log z) = (e^{−log z} − 1)/(−log z) = (z⁻¹ − 1)/(−log z) = (z − 1)/(z log z)

so

ψ(z)φ(−log z) ≡ 1  where  ψ(z) := z log z/(z − 1),

so

Γ′(t) = ψ((exp ad A)(exp t ad B)) B.

This proves (1.4) and integrating from 0 to 1 proves (1.2).
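The integral formula (1.2) itself can be verified numerically for matrices. The sketch below is mine, not the text's: ad A and ad B are represented as n² × n² matrices, ψ(1 + u) is summed as the series 1 + Σ_{k≥1} (−1)^{k−1} u^k/(k(k+1)) — which requires A and B small enough that ‖u‖ < 1 — and the t-integral is done by Gauss-Legendre quadrature.

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(4)
n = 3
A, B = 0.03 * rng.standard_normal((2, n, n))   # small enough for the psi series
I = np.eye(n)
ad = lambda M: np.kron(M, I) - np.kron(I, M.T)  # row-major vec convention
adA, adB = ad(A), ad(B)

def psi(W, terms=40):   # psi(z) = z log z/(z-1), via psi(1+u) = 1 + sum (-1)^(k-1) u^k/(k(k+1))
    U = W - np.eye(W.shape[0])
    out, Uk = np.eye(W.shape[0]), np.eye(W.shape[0])
    for k in range(1, terms):
        Uk = Uk @ U
        out = out + (-1) ** (k - 1) * Uk / (k * (k + 1))
    return out

# right hand side of (1.2): A + integral_0^1 psi(exp(ad A) exp(t ad B)) B dt
nodes, weights = np.polynomial.legendre.leggauss(20)
t_vals = 0.5 * (nodes + 1)          # map [-1, 1] to [0, 1]
integral = sum(0.5 * w * psi(expm(adA) @ expm(t * adB)) @ B.reshape(-1)
               for w, t in zip(weights, t_vals))
rhs = A + integral.reshape(n, n)

lhs = logm(expm(A) @ expm(B))
print(np.linalg.norm(lhs - rhs))    # essentially zero (about 1e-12 or smaller)
```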
1.5 The differential of the exponential and its inverse.

Once again, equation (1.11), which we derived from the Maurer-Cartan equation, is of significant importance in its own right, perhaps more than the use we made of it - to prove the Campbell-Baker-Hausdorff theorem. We will rewrite this equation in terms of more familiar geometric operations, but first some preliminaries:
The exponential map exp sends the Lie algebra g into the corresponding Lie group, and is a differentiable map. If ξ ∈ g we can consider the differential of exp at the point ξ:

d(exp)_ξ : g = Tg_ξ → TG_{exp ξ}

where we have identified g with its tangent space at ξ, which is possible since g is a vector space. In other words, d(exp)_ξ maps the tangent space to g at the point ξ into the tangent space to G at the point exp(ξ). At ξ = 0 we have

d(exp)₀ = id

and hence, by the implicit function theorem, d(exp)_ξ is invertible for sufficiently small ξ. Now the Maurer-Cartan form, evaluated at the point exp ξ, sends TG_{exp ξ} back to g:

θ_{exp ξ} : TG_{exp ξ} → g.

Hence

θ_{exp ξ} ∘ d(exp)_ξ : g → g

and is invertible for sufficiently small ξ. We claim that

τ(ad ξ) ∘ (θ_{exp ξ} ∘ d(exp)_ξ) = id   (1.12)

where τ is as defined above in (1.3). Indeed, we claim that (1.12) is an immediate consequence of (1.11).
Recall the definition (1.3) of the function τ, so that τ(z) = 1/φ(−z). Multiply both sides of (1.11) by τ(ad C(t)) to obtain

τ(ad C(t)) exp(−C(t)) (d/dt) exp(C(t)) = C′(t).   (1.13)

Choose the curve C so that ξ = C(0) and η = C′(0). Then the chain rule says that

(d/dt) exp(C(t)) |_{t=0} = d(exp)_ξ(η).

Thus

exp(−C(t)) (d/dt) exp(C(t)) |_{t=0} = θ_{exp ξ} d(exp)_ξ η,

the result of applying the Maurer-Cartan form θ (at the point exp(ξ)) to the image of η under the differential of the exponential map at ξ ∈ g. Then (1.13) at t = 0 translates into (1.12). QED
1.6 The averaging method.

In this section we will give another important application of (1.10): For fixed ξ ∈ g, the differential of the exponential map is a linear map from g = T_ξ(g) to T_{exp ξ}G. The (differential of) left translation by exp ξ carries T_{exp ξ}(G) back to T_eG = g. Let us denote this composite by (exp ξ)⁻¹ d(exp)_ξ. So

θ_{exp ξ} ∘ d(exp)_ξ = (exp ξ)⁻¹ d(exp)_ξ : g → g

is a linear map. We claim that for any η ∈ g

(exp ξ)⁻¹ d(exp)_ξ(η) = ∫₀¹ Ad_{exp(−sξ)} η ds.   (1.14)

We will prove this by applying (1.10) to

g(s, t) = exp(t(ξ + sη)).

Indeed,

β(s, t) := g(s, t)⁻¹ ∂g/∂t = ξ + sη

so

∂β/∂s ≡ η

and

β(0, t) ≡ ξ.

The left hand side of (1.14) is α(0, 1) where

α(s, t) := g(s, t)⁻¹ ∂g/∂s

so we may use (1.10) to get an ordinary differential equation for α(0, t). Defining

γ(t) := α(0, t),

(1.10) becomes

dγ/dt = η + [γ, ξ].   (1.15)

For any ζ ∈ g,

d/dt Ad_{exp −tξ} ζ = Ad_{exp −tξ}[ζ, ξ] = [Ad_{exp −tξ} ζ, ξ].

So for constant ζ ∈ g,

Ad_{exp −tξ} ζ

is a solution of the homogeneous equation corresponding to (1.15). So, by Lagrange's method of variation of constants, we look for a solution of (1.15) of the form

γ(t) = Ad_{exp −tξ} ζ(t)

and (1.15) becomes

ζ′(t) = Ad_{exp tξ} η

or

γ(t) = Ad_{exp −tξ} ∫₀ᵗ Ad_{exp sξ} η ds

is the solution of (1.15) with γ(0) = 0. Setting t = 1 gives

γ(1) = Ad_{exp −ξ} ∫₀¹ Ad_{exp sξ} η ds

and replacing s by 1 − s in the integral gives (1.14).
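For matrices, (1.14) reads exp(−ξ)·d(exp)_ξ(η) = ∫₀¹ exp(−sξ) η exp(sξ) ds, which can be checked directly. The sketch below is not from the text; it uses a central difference for d(exp)_ξ and Gauss-Legendre quadrature for the integral.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
xi, eta = 0.5 * rng.standard_normal((2, 3, 3))

h = 1e-6
d_exp = (expm(xi + h * eta) - expm(xi - h * eta)) / (2 * h)   # d(exp)_xi(eta)
lhs = expm(-xi) @ d_exp

nodes, weights = np.polynomial.legendre.leggauss(30)
s_vals = 0.5 * (nodes + 1)
rhs = sum(0.5 * w * expm(-s * xi) @ eta @ expm(s * xi)
          for w, s in zip(weights, s_vals))
print(np.linalg.norm(lhs - rhs))    # ~0, up to the finite-difference error
```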
1.7 The Euler MacLaurin Formula.

We pause to remind the reader of a different role that the τ function plays in mathematics. We have seen in (1.12) that τ enters into the inverse of the exponential map. In a sense, this formula is taking into account the non-commutativity of the group multiplication, so τ is helping to relate the non-commutative to the commutative.
But much earlier in mathematical history, τ was introduced to relate the discrete to the continuous: Let D denote the differentiation operator in one variable. Then if we think of D as the one dimensional vector field ∂/∂h it generates the one parameter group exp hD which consists of translation by h. In particular, taking h = 1 we have

(e^D f)(x) = f(x + 1).

This equation is equally valid in a purely algebraic sense, taking f to be a polynomial and

e^D = 1 + D + (1/2)D² + (1/3!)D³ + ⋯ .

This series is infinite. But if p is a polynomial of degree d, then D^k p = 0 for k > d, so when applied to any polynomial the above sum is really finite. Since

D^k e^{ah} = a^k e^{ah}

it follows that if F is any formal power series in one variable, we have

F(D)e^{ah} = F(a)e^{ah}   (1.16)

in the ring of power series in two variables. Of course, under suitable convergence conditions this is an equality of functions of h.
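The shift identity e^D f = f(· + 1) is easy to watch in action on polynomials, where the series terminates. The following sympy sketch is mine, not the text's; the polynomial is an arbitrary example.

```python
import sympy as sp

x = sp.symbols('x')
p = x**4 - 3*x**2 + 7*x          # any polynomial will do (degree 4 here)

# e^D truncated far enough that all remaining terms kill a degree-4 polynomial
shifted = sum(sp.diff(p, x, k) / sp.factorial(k) for k in range(6))
print(sp.expand(shifted - p.subs(x, x + 1)))    # 0
```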
For example, the function τ(z) = z/(1 − e^{−z}) converges for |z| < 2π since ±2πi are the closest zeros of the denominator (other than 0) to the origin. Hence

τ(d/dh) e^{zh} = e^{zh} · z/(1 − e^{−z})   (1.17)

holds for 0 < |z| < 2π. Here the infinite order differential operator on the left is regarded as the limit of the finite order differential operators obtained by truncating the power series for τ at higher and higher orders.
Let a < b be integers. Then for any non-negative values of h₁ and h₂ we have

∫_{a−h₁}^{b+h₂} e^{zx} dx = e^{h₂z} e^{bz}/z − e^{−h₁z} e^{az}/z

for z ≠ 0. So if we set

D₁ := d/dh₁,  D₂ := d/dh₂,

then for 0 < |z| < 2π we have

τ(D₁)τ(D₂) ∫_{a−h₁}^{b+h₂} e^{zx} dx = τ(z) e^{h₂z} e^{bz}/z − τ(−z) e^{−h₁z} e^{az}/z

because τ(D₁)F(h₂) = F(h₂) when applied to any function of h₂, since the constant term in τ is one and all of the differentiations with respect to h₁ give zero.
Setting h₁ = h₂ = 0 gives

τ(D₁)τ(D₂) ∫_{a−h₁}^{b+h₂} e^{zx} dx |_{h₁=h₂=0} = e^{az}/(1 − e^z) + e^{bz}/(1 − e^{−z}),  0 < |z| < 2π.

On the other hand, the geometric sum gives

Σ_{k=a}^{b} e^{kz} = e^{az}(1 + e^z + e^{2z} + ⋯ + e^{(b−a)z}) = e^{az}(1 − e^{(b−a+1)z})/(1 − e^z) = e^{az}/(1 − e^z) + e^{bz}/(1 − e^{−z}).

We have thus proved the following exact Euler-MacLaurin formula:

τ(D₁)τ(D₂) ∫_{a−h₁}^{b+h₂} f(x) dx |_{h₁=h₂=0} = Σ_{k=a}^{b} f(k),   (1.18)

where the sum on the right is over integer values of k, and we have proved this formula for functions f of the form f(x) = e^{zx}, 0 < |z| < 2π. It is also true when z = 0 by passing to the limit or by direct evaluation.
Repeatedly differentiating (1.18) (with f(x) = e^{zx}) with respect to z gives the corresponding formula with f(x) = xⁿe^{zx} and hence for all functions of the form x ↦ p(x)e^{zx} where p is a polynomial and |z| < 2π.
There is a corresponding formula with remainder for C^k functions.
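Because only finitely many terms of τ act on a polynomial, (1.18) can be checked exactly for polynomial integrands (the z → 0 case and its z-derivatives). The following sympy sketch is not from the text; the polynomial, the endpoints and the truncation order are arbitrary choices.

```python
import sympy as sp

x, h1, h2, w = sp.symbols('x h1 h2 w')
a, b, N = 0, 5, 8                      # summation endpoints and truncation order

f = x**3 - 2*x + 1                     # a sample polynomial
F = sp.integrate(f, (x, a - h1, b + h2))   # the h-dependent integral

# truncated power series of tau(w) = w/(1 - e^{-w})
tau = sp.series(w / (1 - sp.exp(-w)), w, 0, N).removeO()

def apply_tau(expr, h):
    """Apply the truncated operator tau(d/dh) to expr."""
    return sum(tau.coeff(w, k) * sp.diff(expr, h, k) for k in range(N))

lhs = apply_tau(apply_tau(F, h1), h2).subs({h1: 0, h2: 0})
rhs = sum(f.subs(x, k) for k in range(a, b + 1))
print(sp.simplify(lhs - rhs))          # 0
```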
1.8 The universal enveloping algebra.

We will now give an alternative (algebraic) version of the Campbell-Baker-Hausdorff theorem. It depends on several notions which are extremely important in their own right, so we pause to develop them.
A universal algebra of a Lie algebra L is a map ε : L → UL where UL is an associative algebra with unit such that

1. ε is a Lie algebra homomorphism, i.e. it is linear and

ε[x, y] = ε(x)ε(y) − ε(y)ε(x);

2. If A is any associative algebra with unit and α : L → A is any Lie algebra homomorphism then there exists a unique homomorphism φ of associative algebras such that

α = φ ∘ ε.

It is clear that if UL exists, it is unique up to a unique isomorphism. So we may then talk of the universal algebra of L. We will call it the universal enveloping algebra, and sometimes write it with parentheses, i.e. write U(L).
In case L = g is the Lie algebra of left invariant vector fields on a group G, we may think of L as consisting of left invariant first order homogeneous differential operators on G. Then we may take UL to consist of all left invariant differential operators on G. In this case the construction of UL is intuitive and obvious. The ring of differential operators D on any manifold is filtered by degree: D_n consists of those differential operators with total degree at most n. The quotient D_n/D_{n−1} consists of the homogeneous differential operators of degree n, i.e. homogeneous polynomials in the vector fields with function coefficients. For the case of left invariant differential operators on a group, these vector fields may be taken to be left invariant, and the function coefficients to be constant. In other words, (UL)_n/(UL)_{n−1} consists of all symmetric polynomial expressions, homogeneous of degree n, in L. This is the content of the Poincaré-Birkhoff-Witt theorem. In the algebraic case we have to do some work to get all of this. We first must construct U(L).
1.8.1 Tensor product of vector spaces.

Let E₁, . . . , E_m be vector spaces and (f, F) a multilinear map f : E₁ × ⋯ × E_m → F. Similarly (g, G). If ℓ is a linear map ℓ : F → G and g = ℓ ∘ f, then we say that ℓ is a morphism of (f, F) to (g, G). In this way we make the set of all (f, F) into a category. We want a universal object in this category; that is, an object with a unique morphism into every other object. So we want a pair (t, T) where T is a vector space, t : E₁ × ⋯ × E_m → T is a multilinear map, and for every (f, F) there is a unique linear map ℓ_f : T → F with

f = ℓ_f ∘ t.

Uniqueness. Suppose (t, T) and (t′, T′) are both universal. By the universal property t′ = ℓ_{t′} ∘ t and t = ℓ′_t ∘ t′, so t = (ℓ′_t ∘ ℓ_{t′}) ∘ t; but also t = id ∘ t. So ℓ′_t ∘ ℓ_{t′} = id. Similarly the other way. Thus (t, T), if it exists, is unique up to a unique morphism. This is a standard argument, valid in any category, proving the uniqueness of "initial elements".

Existence. Let M be the free vector space on the symbols (x₁, . . . , x_m), x_i ∈ E_i. Let N be the subspace generated by all the

(x₁, . . . , x_i + x′_i, . . . , x_m) − (x₁, . . . , x_i, . . . , x_m) − (x₁, . . . , x′_i, . . . , x_m)

and all the

(x₁, . . . , ax_i, . . . , x_m) − a(x₁, . . . , x_i, . . . , x_m)

for all i = 1, . . . , m, x_i, x′_i ∈ E_i, a ∈ k. Let T = M/N and

t((x₁, . . . , x_m)) = (x₁, . . . , x_m)/N.

This is universal by its very construction. QED

We introduce the notation

T = T(E₁ × ⋯ × E_m) =: E₁ ⊗ ⋯ ⊗ E_m.

The universality implies an isomorphism

(E₁ ⊗ ⋯ ⊗ E_m) ⊗ (E_{m+1} ⊗ ⋯ ⊗ E_{m+n}) ≅ E₁ ⊗ ⋯ ⊗ E_{m+n}.
1.8.2 The tensor product of two algebras.

If A and B are algebras, they are in particular vector spaces, so we can form their tensor product as vector spaces. We define a product structure on A ⊗ B by defining

(a₁ ⊗ b₁) · (a₂ ⊗ b₂) := a₁a₂ ⊗ b₁b₂.

It is easy to check that this extends to give an algebra structure on A ⊗ B. In case A and B are associative algebras so is A ⊗ B, and if in addition both A and B have unit elements, then 1_A ⊗ 1_B is a unit element for A ⊗ B. We will frequently drop the subscripts on the unit elements, for it is easy to see from the position relative to the tensor product sign the algebra to which the unit belongs. In other words, we will write the unit for A ⊗ B as 1 ⊗ 1. We have an isomorphism of A into A ⊗ B given by

a ↦ a ⊗ 1

when both A and B are associative algebras with units. Similarly for B. Notice that

(a ⊗ 1) · (1 ⊗ b) = a ⊗ b = (1 ⊗ b) · (a ⊗ 1).

In particular, an element of the form a ⊗ 1 commutes with an element of the form 1 ⊗ b.
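For matrix algebras the tensor product just described is realized by the Kronecker product, and the two displayed identities can be checked directly. The following numpy sketch is mine, not the text's; the sizes 2 and 3 are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
a1, a2 = rng.standard_normal((2, 2, 2))   # elements of A = 2x2 matrices
b1, b2 = rng.standard_normal((2, 3, 3))   # elements of B = 3x3 matrices
I2, I3 = np.eye(2), np.eye(3)

# (a1 ⊗ b1)(a2 ⊗ b2) = a1 a2 ⊗ b1 b2
print(np.allclose(np.kron(a1, b1) @ np.kron(a2, b2), np.kron(a1 @ a2, b1 @ b2)))  # True
# a ⊗ 1 commutes with 1 ⊗ b
print(np.allclose(np.kron(a1, I3) @ np.kron(I2, b1), np.kron(I2, b1) @ np.kron(a1, I3)))  # True
```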
1.8.3 The tensor algebra of a vector space.

Let V be a vector space. The tensor algebra of a vector space is the solution of the universal problem for maps α of V into an associative algebra: it consists of an algebra TV and a map ι : V → TV such that ι is linear, and for any linear map α : V → A where A is an associative algebra there exists a unique algebra homomorphism ψ : TV → A such that α = ψ ∘ ι. We set

TⁿV := V ⊗ ⋯ ⊗ V  (n factors).

We define the multiplication to be the isomorphism

TⁿV ⊗ TᵐV → T^{n+m}V

obtained by "dropping the parentheses," i.e. the isomorphism given at the end of the last subsection. Then

TV := ⊕ₙ TⁿV

(with T⁰V the ground field) is a solution to this universal problem, and hence the unique solution.
1.8.4 Construction of the universal enveloping algebra.

If we take V = L to be a Lie algebra, and let I be the two sided ideal in TL generated by the elements [x, y] − x ⊗ y + y ⊗ x, then

UL := TL/I

is a universal algebra for L. Indeed, any homomorphism α of L into an associative algebra A extends to a unique algebra homomorphism ψ : TL → A which must vanish on I if it is to be a Lie algebra homomorphism.
1.8.5 Extension of a Lie algebra homomorphism to its universal enveloping algebra.

If h : L → M is a Lie algebra homomorphism, then the composition

ε_M ∘ h : L → UM

induces a homomorphism

UL → UM,

and this assignment, sending Lie algebra homomorphisms into associative algebra homomorphisms, is functorial.
1.8.6 Universal enveloping algebra of a direct sum.

Suppose that L = L₁ ⊕ L₂, with ε_i : L_i → U(L_i), and ε : L → U(L) the canonical homomorphisms. Define

f : L → U(L₁) ⊗ U(L₂),  f(x₁ + x₂) = ε₁(x₁) ⊗ 1 + 1 ⊗ ε₂(x₂).

This is a homomorphism because x₁ and x₂ commute. It thus extends to a homomorphism

ψ : U(L) → U(L₁) ⊗ U(L₂).

Also,

x₁ ↦ ε(x₁)

is a Lie algebra homomorphism of L₁ → U(L) which thus extends to a unique algebra homomorphism

φ₁ : U(L₁) → U(L)

and similarly φ₂ : U(L₂) → U(L). We have

φ₁(x₁)φ₂(x₂) = φ₂(x₂)φ₁(x₁),  x₁ ∈ L₁, x₂ ∈ L₂,

since [x₁, x₂] = 0. As the ε_i(x_i) generate U(L_i), the above equation holds with x_i replaced by arbitrary elements u_i ∈ U(L_i), i = 1, 2. So we have a homomorphism

φ : U(L₁) ⊗ U(L₂) → U(L),  φ(u₁ ⊗ u₂) := φ₁(u₁)φ₂(u₂).

We have

φ ∘ ψ(x₁ + x₂) = φ(x₁ ⊗ 1) + φ(1 ⊗ x₂) = x₁ + x₂

so φ ∘ ψ = id on L and hence on U(L), and

ψ ∘ φ(x₁ ⊗ 1 + 1 ⊗ x₂) = x₁ ⊗ 1 + 1 ⊗ x₂

so ψ ∘ φ = id on L₁ ⊗ 1 + 1 ⊗ L₂ and hence on U(L₁) ⊗ U(L₂). Thus

U(L₁ ⊕ L₂) ≅ U(L₁) ⊗ U(L₂).
1.8.7 Bialgebra structure.

Consider the map L → U(L) ⊗ U(L):

x ↦ x ⊗ 1 + 1 ⊗ x.

Then

(x ⊗ 1 + 1 ⊗ x)(y ⊗ 1 + 1 ⊗ y) = xy ⊗ 1 + x ⊗ y + y ⊗ x + 1 ⊗ xy,

and multiplying in the reverse order and subtracting gives

[x ⊗ 1 + 1 ⊗ x, y ⊗ 1 + 1 ⊗ y] = [x, y] ⊗ 1 + 1 ⊗ [x, y].

Thus the map x ↦ x ⊗ 1 + 1 ⊗ x determines an algebra homomorphism

∆ : U(L) → U(L) ⊗ U(L).

Define

ε : U(L) → k,  ε(1) = 1, ε(x) = 0 for x ∈ L,

and extend as an algebra homomorphism. Then

(ε ⊗ id)(x ⊗ 1 + 1 ⊗ x) = 1 ⊗ x,  x ∈ L.

We identify k ⊗ L with L and so can write the above equation as

(ε ⊗ id)(x ⊗ 1 + 1 ⊗ x) = x,  x ∈ L.

The algebra homomorphism

(ε ⊗ id) ∘ ∆ : U(L) → U(L)

is the identity (on 1 and on L) and hence is the identity. Similarly

(id ⊗ ε) ∘ ∆ = id.

A vector space C with a map ∆ : C → C ⊗ C (called a comultiplication) and a map ε : C → k (called a counit) satisfying

(ε ⊗ id) ∘ ∆ = id

and

(id ⊗ ε) ∘ ∆ = id

is called a coalgebra. If C is an algebra and both ∆ and ε are algebra homomorphisms, we say that C is a bialgebra (sometimes shortened to "bigebra"). So we have proved that (U(L), ∆, ε) is a bialgebra.
Also

[(∆ ⊗ id) ∘ ∆](x) = x ⊗ 1 ⊗ 1 + 1 ⊗ x ⊗ 1 + 1 ⊗ 1 ⊗ x = [(id ⊗ ∆) ∘ ∆](x)

for x ∈ L and hence for all elements of U(L). Hence the comultiplication is coassociative. (It is also cocommutative.)
1.9 The Poincaré-Birkhoff-Witt Theorem.

Suppose that V is a vector space made into a Lie algebra by declaring that all brackets are zero. Then the ideal I in TV defining U(V) is generated by x ⊗ y − y ⊗ x, and the quotient TV/I is just the symmetric algebra, SV. So the universal enveloping algebra of the trivial Lie algebra is the symmetric algebra.
For any Lie algebra L define U_nL to be the subspace of UL generated by products of at most n elements of L, i.e. by all products

ε(x₁) ⋯ ε(x_m),  m ≤ n.

For example,

U₀L = k, the ground field,

and

U₁L = k ⊕ ε(L).

We have

U₀L ⊂ U₁L ⊂ ⋯ ⊂ U_nL ⊂ U_{n+1}L ⊂ ⋯

and

U_mL · U_nL ⊂ U_{m+n}L.
We define

gr_n UL := U_nL/U_{n−1}L

and

gr UL := ⊕ₙ gr_n UL

with the multiplication

gr_m UL × gr_n UL → gr_{m+n} UL

induced by the multiplication on UL.
If a ∈ U_nL we let ā ∈ gr_n UL denote its image under the projection U_nL → U_nL/U_{n−1}L = gr_n UL. We may write a as a sum of products of at most n elements of L:

a = Σ_{m_μ ≤ n} c_μ ε(x_{μ,1}) ⋯ ε(x_{μ,m_μ}).

Then ā is the image of the corresponding homogeneous sum

Σ_{m_μ = n} c_μ ε(x_{μ,1}) ⋯ ε(x_{μ,m_μ}).

In other words, as an algebra, gr UL is generated by the images of the elements ε(x), x ∈ L. But all such elements commute. Indeed, for x, y ∈ L,

ε(x)ε(y) − ε(y)ε(x) = ε([x, y])

by the defining property of the universal enveloping algebra. The right hand side of this equation belongs to U₁L. Hence

ε(x)ε(y) − ε(y)ε(x) ≡ 0

in gr₂ UL. This proves that gr UL is commutative. Hence, by the universal property of the symmetric algebra, there exists a unique algebra homomorphism

w : SL → gr UL

extending the linear map L → gr UL which sends x to the image of ε(x). Since these images generate gr UL as an algebra, we know that this map is surjective. The Poincaré-Birkhoff-Witt theorem asserts that

w : SL → gr UL is an isomorphism.   (1.19)
Suppose that we choose a basis x_i, i ∈ I, of L where I is a totally ordered set. Since

ε(x_i)ε(x_j) − ε(x_j)ε(x_i) = ε([x_i, x_j])

we can rearrange any product of the ε(x_i) so as to be in increasing order, at the cost of introducing terms which are products of fewer factors. This shows that the elements

x_M := ε(x_{i₁}) ⋯ ε(x_{i_m}),  M := (i₁, . . . , i_m),  i₁ ≤ ⋯ ≤ i_m,

span UL as a vector space. We claim that (1.19) is equivalent to

Theorem 1 (Poincaré-Birkhoff-Witt.) The elements x_M form a basis of UL.
Proof that (1.19) is equivalent to the statement of the theorem. For any expression x_M as above, we denote its length by ℓ(M) = m. The elements x̄_M are the images under w of the monomial basis in S^m(L). As we know that w is surjective, equation (1.19) is equivalent to the assertion that w is injective. This amounts to the non-existence of a relation of the form

Σ_{ℓ(M)=n} c_M x_M = Σ_{ℓ(M)<n} c_M x_M

with some non-zero coefficients on the left hand side. But any non-trivial relation between the x_M can be rewritten in the above form by moving the terms of highest length to one side. QED

We now turn to the proof of the theorem:
Let V be the vector space with basis z_M where M runs over all ordered sequences i₁ ≤ i₂ ≤ ⋯ ≤ i_n. (Recall that we have chosen a well ordering on I and that the x_i, i ∈ I, form a basis of L.)
Furthermore, the empty sequence z_∅ is allowed, and we will identify the symbol z_∅ with the number 1 ∈ k. If i ∈ I and M = (i₁, . . . , i_n) we write i ≤ M if i ≤ i₁, and then let (i, M) denote the ordered sequence (i, i₁, . . . , i_n). In particular, we adopt the convention that if M = ∅ is the empty sequence then i ≤ M for all i, in which case (i, M) = (i). Recall that if M = (i₁, . . . , i_n) we set ℓ(M) = n and call it the length of M. So, for example, ℓ(i, M) = ℓ(M) + 1 if i ≤ M.

Lemma 1 We can make V into an L module in such a way that

x_i z_M = z_{iM}  whenever i ≤ M.   (1.20)
Proof of lemma. We will inductively define a map

L × V → V,  (x, v) ↦ xv

and then show that it satisfies the equation

xyv − yxv = [x, y]v,  x, y ∈ L, v ∈ V,   (1.21)

which is the condition that makes V into an L module. Our definition will be such that (1.20) holds. In fact, we will define x_i z_M inductively on ℓ(M) and on i. So we start by defining

x_i z_∅ = z_{(i)}

which is in accordance with (1.20). This defines x_i z_M for ℓ(M) = 0. For ℓ(M) = 1 we define

x_i z_{(j)} = z_{(i,j)}  if i ≤ j,

while if i > j we set

x_i z_{(j)} = x_j z_{(i)} + [x_i, x_j]z_∅ = z_{(j,i)} + Σ_k c^k_{ij} z_{(k)}
where

[x_i, x_j] = Σ_k c^k_{ij} x_k

is the expression for the Lie bracket of x_i with x_j in terms of our basis. These c^k_{ij} are known as the structure constants of the Lie algebra L in terms of the given basis. Notice that the first of these two cases is consistent with (and forced on us by) (1.20), while the second is forced on us by (1.21). We now have defined x_i z_M for all i and all M with ℓ(M) ≤ 1, and we have done so in such a way that (1.20) holds, and (1.21) holds where it makes sense (i.e. for ℓ(M) = 0).
So suppose that we have defined x_j z_N for all j if ℓ(N) < ℓ(M), and for all j < i if ℓ(N) = ℓ(M), in such a way that

x_j z_N is a linear combination of z_L's with ℓ(L) ≤ ℓ(N) + 1.   (∗)
We then define

x_i z_M := z_{iM}  if i ≤ M,
x_i z_M := x_j(x_i z_N) + [x_i, x_j]z_N  if M = (jN) with i > j.   (1.22)

This makes sense since x_i z_N is already defined as a linear combination of z_L's with ℓ(L) ≤ ℓ(N) + 1 = ℓ(M), and because [x_i, x_j] can be written as a linear combination of the x_k as above. Furthermore (∗) holds with j and N replaced by i and M. Furthermore, (1.20) holds by construction. We must check (1.21). By linearity, this means that we must show that

x_i x_j z_N − x_j x_i z_N = [x_i, x_j]z_N.
If i = j both sides are zero. Also, since both sides are antisymmetric in i and j, we may assume that i > j. If j ≤ N and i > j then this equation holds by definition. So we need only deal with the case where j ≤ N fails, which means that N = (kP) with k ≤ P and i > j > k. So we have, by definition,

x_j z_N = x_j z_{(kP)} = x_j x_k z_P = x_k x_j z_P + [x_j, x_k]z_P.

Now if j ≤ P then x_j z_P = z_{(jP)} and k ≤ (jP). If j ≤ P fails, then x_j z_P = z_Q + w where still k ≤ Q and w is a linear combination of elements of length < ℓ(N). So we know that (1.21) holds for x = x_i, y = x_k and v = z_{(jP)} (if j ≤ P) or v = z_Q (otherwise). Also, by induction, we may assume that we have verified (1.21) for all v′ of length < ℓ(N). So we may apply (1.21) to x = x_i, y = x_k and v = x_j z_P, and also to x = x_i, y = [x_j, x_k], v = z_P. So

x_i x_j z_N = x_k x_i x_j z_P + [x_i, x_k]x_j z_P + [x_j, x_k]x_i z_P + [x_i, [x_j, x_k]]z_P.
Similarly, the same result holds with i and j interchanged. Subtracting this interchanged version from the preceding equation, the two middle terms from each equation cancel and we get

(x_i x_j − x_j x_i)z_N = x_k(x_i x_j − x_j x_i)z_P + ([x_i, [x_j, x_k]] − [x_j, [x_i, x_k]])z_P
 = x_k[x_i, x_j]z_P + ([x_i, [x_j, x_k]] − [x_j, [x_i, x_k]])z_P
 = [x_i, x_j]x_k z_P + ([x_k, [x_i, x_j]] + [x_i, [x_j, x_k]] − [x_j, [x_i, x_k]])z_P
 = [x_i, x_j]z_N.

(In passing from the second line to the third we used (1.21) applied to z_P (by induction), and from the third to the last we used the antisymmetry of the bracket and Jacobi's identity.) QED
Proof of the PBW theorem. We have made V into an L module and hence into a U(L) module. By construction, we have, inductively,

x_M z_∅ = z_M.

But if

Σ c_M x_M = 0

then

0 = Σ c_M x_M z_∅ = Σ c_M z_M,

contradicting the fact that the z_M are independent. QED

In particular, the map ε : L → U(L) is an injection, and so we may identify L as a subspace of U(L).
1.10 Primitives.

An element x of a bialgebra is called primitive if

∆(x) = x ⊗ 1 + 1 ⊗ x.

So the elements of L are primitives in U(L).
We claim that these are the only primitives.
First prove this for the case L is abelian, so U(L) = S(L). Then we may think of S(L) ⊗ S(L) as polynomials in twice the number of variables as those of S(L), and

∆(P)(u, v) = P(u + v).

The condition of being primitive says that

P(u + v) = P(u) + P(v).

Taking homogeneous components, the same equality holds for each homogeneous component. But if P is homogeneous of degree n, taking u = v gives

2ⁿ P(u) = 2P(u)

so P = 0 unless n = 1.
Taking gr, this shows that for any Lie algebra the primitives are contained in U₁(L). But

∆(c + x) = c(1 ⊗ 1) + x ⊗ 1 + 1 ⊗ x

so the condition of primitivity requires c = 2c, i.e. c = 0. QED
1.11 Free Lie algebras

1.11.1 Magmas and free magmas on a set

A set M with a map

M × M → M,  (x, y) ↦ xy

is called a magma. Thus a magma is a set with a binary operation with no axioms at all imposed.
Let X be any set. Define Xₙ inductively by X₁ := X and

Xₙ := ⋃_{p+q=n} X_p × X_q  (disjoint union)

for n ≥ 2. Thus X₂ consists of all expressions ab where a and b are elements of X. (We write ab instead of (a, b).) An element of X₃ is either an expression of the form (ab)c or an expression of the form a(bc). An element of X₄ has one out of five forms: a((bc)d), a(b(cd)), (ab)(cd), ((ab)c)d or (a(bc))d.
Set

M_X := ⋃_{n=1}^∞ Xₙ.

An element w ∈ M_X is called a non-associative word, and its length ℓ(w) is the unique n such that w ∈ Xₙ. We have a "multiplication" map M_X × M_X → M_X given by the inclusion

X_p × X_q → X_{p+q}.

Thus the multiplication on M_X is concatenation of non-associative words.
If N is any magma, and f : X → N is any map, we define f : M_X → N by setting f = f on X₁, by

f : X₂ → N,  f(ab) = f(a)f(b),

and inductively

f : X_p × X_q → N,  f(uv) = f(u)f(v).

Any element of Xₙ has a unique expression as uv where u ∈ X_p and v ∈ X_q for a unique (p, q) with p + q = n, so this inductive definition is valid.
It is clear that f is a magma homomorphism and is uniquely determined by the original map f. Thus M_X is the "free magma on X" or the "universal magma on X" in the sense that it is the solution to the universal problem associated to a map from X to any magma.
Let A_X be the vector space of finite formal linear combinations of elements of M_X. So an element of A_X is a finite sum Σ c_m m with m ∈ M_X and c_m in the ground field. The multiplication in M_X extends by bilinearity to make A_X into an algebra. If we are given a map X → B where B is any algebra, we get a unique magma homomorphism M_X → B extending this map (where we think of B as a magma) and then a unique algebra map A_X → B extending this map by linearity.
Notice that the algebra A_X is graded, since every element of M_X has a length and the multiplication on M_X is graded. Hence A_X is the free algebra on X in the sense that it solves the universal problem associated with maps of X to algebras.
1.11.2 The Free Lie Algebra L_X.

In A_X let I be the two-sided ideal generated by all elements of the form aa, a ∈ A_X, and all elements of the form (ab)c + (bc)a + (ca)b, a, b, c ∈ A_X. We set

L_X := A_X/I

and call L_X the free Lie algebra on X. Any map from X to a Lie algebra L extends to a unique algebra homomorphism from L_X to L.
We claim that the ideal I defining L_X is graded. This means that if a = Σ aₙ is a decomposition of an element of I into its homogeneous components, then each of the aₙ also belongs to I. To prove this, let J ⊂ I denote the set of all a = Σ aₙ with the property that all the homogeneous components aₙ belong to I. Clearly J is a two sided ideal. We must show that I ⊂ J. For this it is enough to prove the corresponding fact for the generating elements. Clearly if

a = Σ a_p,  b = Σ b_q,  c = Σ c_r

then

(ab)c + (bc)a + (ca)b = Σ_{p,q,r} ((a_p b_q)c_r + (b_q c_r)a_p + (c_r a_p)b_q).

But also if x = Σ x_m then

x² = Σₙ xₙ² + Σ_{m<n} (x_m x_n + x_n x_m)

and

x_m x_n + x_n x_m = (x_m + x_n)² − x_m² − x_n² ∈ I,

so I ⊂ J.
The fact that I is graded means that L_X inherits the structure of a graded algebra.
1.11.3 The free associative algebra Ass(X).

Let V_X be the vector space of all finite formal linear combinations of elements of X. Define

Ass_X := T(V_X),

the tensor algebra of V_X. Any map of X into an associative algebra A extends to a unique linear map from V_X to A and hence to a unique algebra homomorphism from Ass_X to A. So Ass_X is the free associative algebra on X.
We have the maps X → L_X and ε : L_X → U(L_X), and hence their composition maps X to the associative algebra U(L_X) and so extends to a unique homomorphism

Ψ : Ass_X → U(L_X).

On the other hand, the commutator bracket gives a Lie algebra structure to Ass_X, and the map X → Ass_X thus gives rise to a Lie algebra homomorphism

L_X → Ass_X

which determines an associative algebra homomorphism

Φ : U(L_X) → Ass_X.

Both compositions Φ ∘ Ψ and Ψ ∘ Φ are the identity on X and hence, by uniqueness, the identity everywhere. We obtain the important result that U(L_X) and Ass_X are canonically isomorphic:

U(L_X) ≅ Ass_X.   (1.23)

Now the Poincaré-Birkhoff-Witt theorem guarantees that the map ε : L_X → U(L_X) is injective. So under the above isomorphism, the map L_X → Ass_X is injective. On the other hand, by construction, the map X → V_X induces a surjective Lie algebra homomorphism from L_X onto the Lie subalgebra of Ass_X generated by X. So we see that under the isomorphism (1.23), L_X ⊂ U(L_X) is mapped isomorphically onto the Lie subalgebra of Ass_X generated by X.
Now the map

X → Ass_X ⊗ Ass_X,  x ↦ x ⊗ 1 + 1 ⊗ x

extends to a unique algebra homomorphism

∆ : Ass_X → Ass_X ⊗ Ass_X.

Under the identification (1.23) this is none other than the map

∆ : U(L_X) → U(L_X) ⊗ U(L_X)

and hence we conclude that L_X is the set of primitive elements of Ass_X:

L_X = {w ∈ Ass_X | ∆(w) = w ⊗ 1 + 1 ⊗ w}   (1.24)

under the identification (1.23).
1.12 Algebraic proof of CBH and explicit formulas.

We recall our constructs of the past few sections: X denotes a set, L_X the free Lie algebra on X, and Ass_X the free associative algebra on X, so that Ass_X may be identified with the universal enveloping algebra of L_X. Since Ass_X may be identified with the non-commutative polynomials indexed by X, we may consider its completion, F_X, the algebra of formal power series indexed by X. Since the free Lie algebra L_X is graded we may also consider its completion, which we shall denote by 𝓛_X. Finally let m denote the ideal in F_X generated by X. The maps

exp : m → 1 + m,  log : 1 + m → m

are well defined by their formal power series and are mutual inverses. (There is no convergence issue since everything is within the realm of formal power series.) Furthermore exp is a bijection of the set of α ∈ m satisfying ∆α = α ⊗ 1 + 1 ⊗ α to the set of all β ∈ 1 + m satisfying ∆β = β ⊗ β.
1.12.1 Abstract version of CBH and its algebraic proof.

In particular, since the set {β ∈ 1 + m | ∆β = β ⊗ β} forms a group, we conclude that for any A, B ∈ 𝓛_X there exists a C ∈ 𝓛_X such that

exp C = (exp A)(exp B).

This is the abstract version of the Campbell-Baker-Hausdorff formula. It depends basically on two algebraic facts: that the universal enveloping algebra of the free Lie algebra is the free associative algebra, and that the set of primitive elements in the universal enveloping algebra (those satisfying ∆α = α ⊗ 1 + 1 ⊗ α) is precisely the original Lie algebra.
1.12.2 Explicit formula for CBH.

Define the map

Φ : m ∩ Ass_X → L_X,
Φ(x₁ ⋯ xₙ) := [x₁, [x₂, . . . , [x_{n−1}, xₙ] ⋯ ]] = ad(x₁) ⋯ ad(x_{n−1})(xₙ),

and let Θ : Ass_X → End(L_X) be the algebra homomorphism extending the Lie algebra homomorphism ad : L_X → End(L_X). We claim that

Φ(uv) = Θ(u)Φ(v)  ∀ u ∈ Ass_X, v ∈ m ∩ Ass_X.   (1.25)

Proof. It is enough to prove this formula when u is a monomial, u = x₁ ⋯ xₙ. We do this by induction on n. For n = 0 the assertion is obvious and for n = 1 it follows from the definition of Φ. Suppose n > 1. Then

Φ(x₁ ⋯ xₙ v) = Θ(x₁)Φ(x₂ ⋯ xₙ v) = Θ(x₁)Θ(x₂ ⋯ xₙ)Φ(v) = Θ(x₁ ⋯ xₙ)Φ(v). QED
Let L^n_X denote the n-th graded component of L_X. So L¹_X consists of linear combinations of elements of X, L²_X is spanned by all brackets of pairs of elements of X, and in general L^n_X is spanned by elements of the form

[u, v],  u ∈ L^p_X, v ∈ L^q_X, p + q = n.

We claim that

Φ(u) = nu  ∀ u ∈ L^n_X.   (1.26)

For n = 1 this is immediate from the definition of Φ. So by induction it is enough to verify this on elements of the form [u, v] as above. We have

Φ([u, v]) = Φ(uv − vu)
 = Θ(u)Φ(v) − Θ(v)Φ(u)
 = qΘ(u)v − pΘ(v)u  (by induction)
 = q[u, v] − p[v, u]  (since Θ(w) = ad(w) for w ∈ L_X)
 = (p + q)[u, v].  QED
We can now write down an explicit formula for the n-th term in the Campbell-Baker-Hausdorff expansion. Consider the case where X consists of two elements, X = {x, y}, x ≠ y. Let us write

z = log((exp x)(exp y)),  z ∈ 𝓛_X,  z = Σ_{n=1}^∞ zₙ(x, y).

We want an explicit expression for zₙ(x, y). We know that

zₙ = (1/n)Φ(zₙ)

and zₙ is a sum of non-commutative monomials of degree n in x and y. Now

(exp x)(exp y) = (Σ_{p=0}^∞ x^p/p!)(Σ_{q=0}^∞ y^q/q!) = 1 + Σ_{p+q≥1} x^p y^q/(p! q!)

so

z = log((exp x)(exp y))
 = Σ_{m=1}^∞ ((−1)^{m+1}/m) (Σ_{p+q≥1} x^p y^q/(p! q!))^m
 = Σ_{p_i+q_i≥1} ((−1)^{m+1}/m) · x^{p₁}y^{q₁} x^{p₂}y^{q₂} ⋯ x^{p_m}y^{q_m} / (p₁! q₁! ⋯ p_m! q_m!).
We want to apply (1/n)Φ to the terms in this last expression which are of total degree n, so as to obtain zₙ. So let us examine what happens when we apply Φ to an expression occurring in the numerator: If q_m ≥ 2 we get 0, since we will have ad(y)(y) = 0. Similarly we will get 0 if q_m = 0, p_m ≥ 2. Hence the only terms which survive are those with q_m = 1 or q_m = 0, p_m = 1. Accordingly we decompose zₙ into these two types:

zₙ = (1/n) Σ_{p+q=n} (z′_{p,q} + z″_{p,q}),   (1.27)

where

z′_{p,q} = Σ ((−1)^{m+1}/m) · ad(x)^{p₁} ad(y)^{q₁} ⋯ ad(x)^{p_m} y / (p₁! q₁! ⋯ p_m!)

summed over all

p₁ + ⋯ + p_m = p,  q₁ + ⋯ + q_{m−1} = q − 1,  p_i + q_i ≥ 1,  p_m ≥ 1,

and

z″_{p,q} = Σ ((−1)^{m+1}/m) · ad(x)^{p₁} ad(y)^{q₁} ⋯ ad(y)^{q_{m−1}} x / (p₁! q₁! ⋯ q_{m−1}!)

summed over

p₁ + ⋯ + p_{m−1} = p − 1,  q₁ + ⋯ + q_{m−1} = q,
p_i + q_i ≥ 1 (i = 1, . . . , m − 1),  q_{m−1} ≥ 1.

The first four terms are:

z₁(x, y) = x + y
z₂(x, y) = (1/2)[x, y]
z₃(x, y) = (1/12)[x, [x, y]] + (1/12)[y, [y, x]]
z₄(x, y) = −(1/24)[x, [y, [x, y]]].
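These low order terms can be compared numerically with log((exp x)(exp y)) for matrices. The sketch below is not from the text; the size, scale and seed are arbitrary, and the point is only that adding z₄ removes the degree-four part of the error.

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(7)
x, y = 0.1 * rng.standard_normal((2, 3, 3))
br = lambda a, b: a @ b - b @ a

z = logm(expm(x) @ expm(y))
z123 = x + y + br(x, y) / 2 + br(x, br(x, y)) / 12 + br(y, br(y, x)) / 12
z4 = -br(x, br(y, br(x, y))) / 24
print(np.linalg.norm(z - z123))        # error dominated by the degree-4 term
print(np.linalg.norm(z - z123 - z4))   # noticeably smaller: degree-5 remainder
```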
Chapter 2
sl(2) and its
Representations.
In this chapter (and in most of the succeeding chapters) all Lie algebras and
vector spaces are over the complex numbers.
2.1 Low dimensional Lie algebras.

Any one dimensional Lie algebra must be commutative, since [X, X] = 0 in any Lie algebra.
If g is a two dimensional Lie algebra, say with basis X, Y, then [aX + bY, cX + dY] = (ad − bc)[X, Y], so that there are two possibilities: [X, Y] = 0, in which case g is commutative, or [X, Y] ≠ 0, call it B, and the Lie bracket of any two elements of g is a multiple of B. So if C is not a multiple of B, we have [C, B] = cB for some c ≠ 0, and setting A = c⁻¹C we get a basis A, B of g with the bracket relations

[A, B] = B.

This is an interesting Lie algebra; it is the Lie algebra of the group of all affine transformations of the line, i.e. all transformations of the form

x ↦ ax + b,  a ≠ 0.

For this reason it is sometimes called the "ax + b group". Since

( a  b ; 0  1 )( x ; 1 ) = ( ax + b ; 1 )

we can realize the group of affine transformations of the line as a group of two by two matrices. Writing

a = exp tA,  b = tB

so that

( a  0 ; 0  1 ) = exp t( A  0 ; 0  0 ),   ( 1  b ; 0  1 ) = exp t( 0  B ; 0  0 ),

we see that our algebra g with basis A, B and [A, B] = B is indeed the Lie algebra of the ax + b group.
In a similar way, we could list all possible three dimensional Lie algebras, by first classifying them according to dim[g, g] and then analyzing the possibilities for each value of this dimension. Rather than going through all the details, we list the most important examples of each type. If dim[g, g] = 0 the algebra is commutative, so there is only one possibility.
A very important example arises when dim[g, g] = 1, and that is the Heisenberg algebra, with basis P, Q, Z and bracket relations

[P, Q] = Z,  [Z, P] = [Z, Q] = 0.

Up to constants (such as Planck's constant and i) these are the famous Heisenberg commutation relations. Indeed, we can realize this algebra as an algebra of operators on functions of one variable x: let P = D = differentiation, and let Q be multiplication by x. Since, for any function f = f(x), we have

D(xf) = f + xDf

we see that [D, Q] = id, so setting Z = id, we obtain the Heisenberg algebra.
As an example with dim[g. g] = 2 we have (the complexiﬁcation of) the Lie
algebra of the group of Euclidean motions in the plane. Here we can ﬁnd a basis
/. r. n of g with brackets given by
[/. r] = n. [/. n] = −r. [r. n] = 0.
More generally we could start with a commutative two dimensional algebra and
adjoin an element / with ad / acting as an arbitrary linear transformation, ¹
of our two dimensional space.
The item of study of this chapter is the algebra :(2) of all two by two
matrices of trace zero, where [g. g] = g.
2.2 sl(2) and its irreducible representations.
Indeed sl(2) is spanned by the matrices
$$h = \begin{pmatrix} 1 & 0\\ 0 & -1\end{pmatrix}, \qquad
e = \begin{pmatrix} 0 & 1\\ 0 & 0\end{pmatrix}, \qquad
f = \begin{pmatrix} 0 & 0\\ 1 & 0\end{pmatrix}.$$
They satisfy
$$[h,e] = 2e, \qquad [h,f] = -2f, \qquad [e,f] = h.$$
Thus every element of sl(2) can be expressed as a sum of brackets of elements
of sl(2), in other words
$$[\mathrm{sl}(2), \mathrm{sl}(2)] = \mathrm{sl}(2).$$
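These bracket relations are easy to verify directly; the following small NumPy sketch (an added illustration, not part of the original text) checks them with the explicit $2\times 2$ matrices.

```python
import numpy as np

h = np.array([[1, 0], [0, -1]])
e = np.array([[0, 1], [0, 0]])
f = np.array([[0, 0], [1, 0]])

def br(a, b):
    return a @ b - b @ a

assert np.array_equal(br(h, e), 2 * e)   # [h, e] = 2e
assert np.array_equal(br(h, f), -2 * f)  # [h, f] = -2f
assert np.array_equal(br(e, f), h)       # [e, f] = h
```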
The bracket relations above are also satisfied by the matrices
$$\rho_2(h) := \begin{pmatrix} 2 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & -2\end{pmatrix}, \quad
\rho_2(e) := \begin{pmatrix} 0 & 2 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0\end{pmatrix}, \quad
\rho_2(f) := \begin{pmatrix} 0 & 0 & 0\\ 1 & 0 & 0\\ 0 & 2 & 0\end{pmatrix},$$
the matrices
$$\rho_3(h) := \begin{pmatrix} 3 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & -1 & 0\\ 0 & 0 & 0 & -3\end{pmatrix}, \quad
\rho_3(e) := \begin{pmatrix} 0 & 3 & 0 & 0\\ 0 & 0 & 2 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0\end{pmatrix}, \quad
\rho_3(f) := \begin{pmatrix} 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0\\ 0 & 2 & 0 & 0\\ 0 & 0 & 3 & 0\end{pmatrix},$$
and, more generally, the $(n+1)\times(n+1)$ matrices given by
$$\rho_n(h) := \begin{pmatrix} n & & & &\\ & n-2 & & &\\ & & \ddots & &\\ & & & -n+2 &\\ & & & & -n\end{pmatrix}, \qquad
\rho_n(e) := \begin{pmatrix} 0 & n & & &\\ & 0 & n-1 & &\\ & & \ddots & \ddots &\\ & & & 0 & 1\\ & & & & 0\end{pmatrix},$$
$$\rho_n(f) := \begin{pmatrix} 0 & & & &\\ 1 & 0 & & &\\ & 2 & \ddots & &\\ & & \ddots & 0 &\\ & & & n & 0\end{pmatrix}.$$
These representations of sl(2) are all irreducible, as is seen by successively
applying $\rho_n(e)$ to any nonzero vector until a vector with nonzero entry in
the first position and all other entries zero is obtained. Then keep applying
$\rho_n(f)$ to fill up the entire space.
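As an illustration (added here, not part of the original text), the matrices $\rho_n(h)$, $\rho_n(e)$, $\rho_n(f)$ can be generated for any $n$ and the bracket relations checked mechanically:

```python
import numpy as np

def rho(n):
    """Return the (n+1)x(n+1) matrices rho_n(h), rho_n(e), rho_n(f)."""
    h = np.diag([n - 2 * i for i in range(n + 1)]).astype(float)
    e = np.diag([n - i for i in range(n)], k=1).astype(float)   # superdiagonal n, n-1, ..., 1
    f = np.diag([i + 1 for i in range(n)], k=-1).astype(float)  # subdiagonal 1, 2, ..., n
    return h, e, f

def br(a, b):
    return a @ b - b @ a

for n in range(1, 6):
    h, e, f = rho(n)
    assert np.allclose(br(h, e), 2 * e)
    assert np.allclose(br(h, f), -2 * f)
    assert np.allclose(br(e, f), h)
```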
These are all the finite dimensional irreducible representations of sl(2), as
can be seen as follows: In U(sl(2)) we have
$$[h, f^k] = -2k f^k, \qquad [h, e^k] = 2k e^k \tag{2.1}$$
$$[e, f^k] = -k(k-1)f^{k-1} + k f^{k-1}h. \tag{2.2}$$
Equation (2.1) follows from the fact that bracketing by any element is a derivation
and the fundamental relations in sl(2). Equation (2.2) is proved by induction:
For $k = 1$ it is true from the defining relations of sl(2). Assuming it for
$k$, we have
$$\begin{aligned}
[e, f^{k+1}] &= [e,f]f^k + f[e,f^k]\\
&= hf^k - k(k-1)f^k + kf^k h\\
&= [h,f^k] + f^k h - k(k-1)f^k + kf^k h\\
&= -2kf^k - k(k-1)f^k + (k+1)f^k h\\
&= -(k+1)k f^k + (k+1)f^k h.
\end{aligned}$$
We may rewrite (2.2) as
$$\Bigl[\,e,\ \frac{1}{k!}f^k\Bigr] = (-k+1)\frac{1}{(k-1)!}f^{k-1} + \frac{1}{(k-1)!}f^{k-1}h. \tag{2.3}$$
In any finite dimensional module $V$, the element $h$ has at least one eigenvector.
This follows from the fundamental theorem of algebra, which asserts that any
polynomial has at least one root; in particular the characteristic polynomial of
any linear transformation on a finite dimensional space has a root. So there is
a vector $u$ such that $hu = \mu u$ for some complex number $\mu$. Then
$$h(eu) = [h,e]u + ehu = 2eu + \mu eu = (\mu+2)(eu).$$
Thus $eu$ is again an eigenvector of $h$, this time with eigenvalue $\mu+2$. Successively
applying $e$ yields a vector $v_\lambda$ such that
$$hv_\lambda = \lambda v_\lambda, \qquad ev_\lambda = 0. \tag{2.4}$$
Then $U(\mathrm{sl}(2))v_\lambda$ is an invariant subspace, hence all of $V$. We say that $v$ is a
cyclic vector for the action of g on $V$ if $U(\mathrm g)v = V$.
We are thus led to study all modules for sl(2) with a cyclic vector $v_\lambda$ satisfying
(2.4). In any such space the elements
$$\frac{1}{k!}f^k v_\lambda$$
span, and are eigenvectors of $h$ of weight $\lambda - 2k$. For any $\lambda\in\mathbf C$ we can construct
such a module as follows: Let $\mathbf b^+$ denote the subalgebra of sl(2) generated by
$h$ and $e$. Then $U(\mathbf b^+)$, the universal enveloping algebra of $\mathbf b^+$, can be regarded
as a subalgebra of U(sl(2)). We can make $\mathbf C$ into a $\mathbf b^+$ module, and hence a
$U(\mathbf b^+)$ module, by
$$h\cdot 1 := \lambda, \qquad e\cdot 1 := 0.$$
Then the space
$$U(\mathrm{sl}(2)) \otimes_{U(\mathbf b^+)} \mathbf C,$$
with $e$ acting on $\mathbf C$ as $0$ and $h$ acting via multiplication by $\lambda$, is a cyclic module
with cyclic vector $v_\lambda = 1\otimes 1$ which satisfies (2.4). It is a "universal" such
module in the sense that any other cyclic module with cyclic vector satisfying
(2.4) is a homomorphic image of the one we just constructed.
This space $U(\mathrm{sl}(2))\otimes_{U(\mathbf b^+)}\mathbf C$ is infinite dimensional. It is irreducible unless
there is some
$$\frac{1}{k!}f^k v_\lambda \quad\text{with}\quad e\,\frac{1}{k!}f^k v_\lambda = 0,$$
where $k$ is an integer $\geq 1$. Indeed, any nonzero vector $u$ in the space is a finite
linear combination of the basis elements $\frac{1}{k!}f^k v_\lambda$; choose $k$ to be the largest
integer so that the coefficient of the corresponding element does not vanish.
Then successive application of the element $e$ ($k$ times) will yield a multiple of
$v_\lambda$, and if this multiple is nonzero, then $U(\mathrm{sl}(2))u = U(\mathrm{sl}(2))v_\lambda$ is the whole
space.
But
$$e\,\frac{1}{k!}f^k v_\lambda = \Bigl[e, \frac{1}{k!}f^k\Bigr]v_\lambda = (1-k+\lambda)\frac{1}{(k-1)!}f^{k-1}v_\lambda.$$
This vanishes only if $\lambda$ is an integer and $k = \lambda+1$, in which case there is a
unique finite dimensional quotient, of dimension $\lambda+1$. QED
The ﬁnite dimensional irreducible representations having zero as a weight
are all odd dimensional and have only even weights. We will call them “even”.
They are called “integer spin” representations by the physicists. The others are
“odd” or “half spin” representations.
2.3 The Casimir element.
In U(sl(2)) consider the element
$$C := \tfrac{1}{2}h^2 + ef + fe \tag{2.5}$$
called the Casimir element, or simply the "Casimir", of sl(2).

Since $ef = fe + [e,f] = fe + h$ in U(sl(2)) we can also write
$$C = \tfrac{1}{2}h^2 + h + 2fe. \tag{2.6}$$
This implies that if $v$ is a "highest weight vector" in an sl(2) module, satisfying
$ev = 0$, $hv = \lambda v$, then
$$Cv = \tfrac{1}{2}\lambda(\lambda+2)\,v. \tag{2.7}$$
Now in U(sl(2)) we have
$$[h,C] = 2\bigl([h,f]e + f[h,e]\bigr) = 2(-2fe + 2fe) = 0$$
and
$$\begin{aligned}
[C,e] &= \tfrac{1}{2}\cdot 2(eh + he) + 2e - 2he\\
&= eh - he + 2e\\
&= -[h,e] + 2e\\
&= 0.
\end{aligned}$$
Similarly
$$[C,f] = 0.$$
In other words, $C$ lies in the center of the universal enveloping algebra of sl(2),
i.e. it commutes with all elements. If $V$ is a module which possesses a "highest
weight vector" $v_\lambda$ as above, and if $V$ has the property that $v_\lambda$ is a cyclic vector,
meaning that $V = U(\mathrm{sl}(2))v_\lambda$, then $C$ takes on the constant value
$$C = \frac{\lambda(\lambda+2)}{2}\,\mathrm{Id}$$
since $C$ is central and $v_\lambda$ is cyclic.
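In the irreducible $(n+1)$-dimensional representation $\rho_n$ constructed above, the Casimir should therefore act as the scalar $\tfrac12 n(n+2)$. A short NumPy check (an added illustration, re-using the $\rho_n$ construction from the earlier sketch) confirms this and also confirms that $C$ commutes with $\rho_n(h)$, $\rho_n(e)$, $\rho_n(f)$:

```python
import numpy as np

def rho(n):
    h = np.diag([n - 2 * i for i in range(n + 1)]).astype(float)
    e = np.diag([n - i for i in range(n)], k=1).astype(float)
    f = np.diag([i + 1 for i in range(n)], k=-1).astype(float)
    return h, e, f

for n in range(1, 6):
    h, e, f = rho(n)
    C = 0.5 * h @ h + e @ f + f @ e          # Casimir in the representation rho_n
    assert np.allclose(C, 0.5 * n * (n + 2) * np.eye(n + 1))
    for x in (h, e, f):                      # C is central: it commutes with rho_n(sl(2))
        assert np.allclose(C @ x, x @ C)
```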
2.4 sl(2) is simple.
An ideal $I$ in a Lie algebra g is a subspace of g which is invariant under the
adjoint representation. In other words, $I$ is an ideal if $[g, I] \subset I$. If a Lie
algebra g has the property that its only ideals are $0$ and g itself, and if g is not
commutative, we say that g is simple. Let us prove that sl(2) is simple. Since
sl(2) is not commutative, we must prove that the only ideals are $0$ and sl(2)
itself. We do this by introducing some notation which will allow us to generalize
the proof in the next chapter. Let
$$g = \mathrm{sl}(2)$$
and set
$$g_{-1} := \mathbf C f, \qquad g_0 := \mathbf C h, \qquad g_1 := \mathbf C e,$$
so that g, as a vector space, is the direct sum of the three one dimensional
spaces
$$g = g_{-1}\oplus g_0\oplus g_1.$$
Correspondingly, write any $x\in g$ as
$$x = x_{-1} + x_0 + x_1.$$
If we let
$$d := \tfrac{1}{2}h$$
then we have
$$\begin{aligned}
x &= x_{-1} + x_0 + x_1,\\
[d,x] &= -x_{-1} + 0 + x_1, \quad\text{and}\\
[d,[d,x]] &= x_{-1} + 0 + x_1.
\end{aligned}$$
Since the matrix
$$\begin{pmatrix} 1 & 1 & 1\\ -1 & 0 & 1\\ 1 & 0 & 1\end{pmatrix}$$
is invertible, we see that we can solve for the "components" $x_{-1}$, $x_0$ and $x_1$ in
terms of $x$, $[d,x]$, $[d,[d,x]]$. This means that if $I$ is an ideal, then
$$I = I_{-1}\oplus I_0\oplus I_1,$$
where
$$I_{-1} := I\cap g_{-1}, \qquad I_0 := I\cap g_0, \qquad I_1 := I\cap g_1.$$
Now if $I_0 \neq 0$ then $d = \tfrac{1}{2}h \in I$, and hence $e = [d,e]$ and $f = -[d,f]$ also belong
to $I$, so $I = \mathrm{sl}(2)$. If $I_{-1}\neq 0$, so that $f\in I$, then $h = [e,f]\in I$, so $I = \mathrm{sl}(2)$.
Similarly, if $I_1\neq 0$, so that $e\in I$, then $h = [e,f]\in I$, so $I = \mathrm{sl}(2)$.

Thus if $I\neq 0$ then $I = \mathrm{sl}(2)$, and we have proved that sl(2) is simple.
2.5 Complete reducibility.
We will use the Casimir element $C$ to prove that every finite dimensional representation
$V$ of sl(2) is completely reducible, which means that if $V'$ is an
invariant subspace there exists a complementary invariant subspace $V''$ so that
$V = V'\oplus V''$. Indeed we will prove:

Theorem 2 1. Every finite dimensional representation of sl(2) is completely
reducible.

2. Each irreducible subspace is a cyclic highest weight module with highest
weight $n$, where $n$ is a nonnegative integer.

3. When the representation is decomposed into a direct sum of irreducible
components, the number of components with even highest weight is the
multiplicity of $0$ as an eigenvalue of $h$, and

4. the number of components with odd highest weight is the multiplicity of $1$
as an eigenvalue of $h$.

Proof. We know that every irreducible finite dimensional representation is a
cyclic module with integer highest weight, that those with even highest weight
contain $0$ as an eigenvalue of $h$ with multiplicity one and do not contain $1$ as
an eigenvalue of $h$, and that those with odd highest weight contain $1$ as an
eigenvalue of $h$ with multiplicity one, and do not contain $0$ as an eigenvalue. So
2), 3) and 4) follow from 1). We must prove 1).
We ﬁrst prove
Proposition 1 Let $0 \to V' \to V \to k \to 0$ be an exact sequence of sl(2)
modules such that the action of sl(2) on $k$ is trivial (as it must be, since
sl(2) has no nontrivial one dimensional modules). Then this sequence splits,
i.e. there is a line in $V$ supplementary to $V'$ on which sl(2) acts trivially.

This proposition is, of course, a special case of the theorem we want to prove.
But we shall see that it is sufficient to prove the theorem.

Proof of proposition. It is enough to prove the proposition for the case
that $V'$ is an irreducible module. Indeed, if $V_1$ is a submodule, then by induction
on $\dim V$ we may assume the theorem is known for $0\to V'/V_1\to V/V_1\to k\to 0$,
so that there is a one dimensional invariant subspace $M$ in $V/V_1$ supplementary
to $V'/V_1$ on which the action is trivial. Let $N$ be the inverse image of $M$ in $V$.
By another application of the proposition, this time to the sequence
$$0\to V_1\to N\to M\to 0,$$
we find an invariant line, $L$, in $N$ complementary to $V_1$. So $N = V_1\oplus L$. Since
$(V/V_1) = (V'/V_1)\oplus M$ we must have $L\cap V' = \{0\}$. But since $\dim V = \dim V' + 1$, we must have $V = V'\oplus L$. In other words $L$ is a one dimensional
subspace of $V$ which is complementary to $V'$.

Next we are reduced to proving the proposition for the case that sl(2) acts
faithfully on $V'$. Indeed, let $k$ = the kernel of the action on $V'$. Since sl(2) is
simple, either $k = \mathrm{sl}(2)$ or $k = 0$. Suppose that $k = \mathrm{sl}(2)$. For all $x\in\mathrm{sl}(2)$ we
have, by hypothesis, $xV\subset V'$, and for $x\in k = \mathrm{sl}(2)$ we have $xV' = 0$. Hence
$$[\mathrm{sl}(2),\mathrm{sl}(2)] = \mathrm{sl}(2)$$
acts trivially on all of $V$ and the proposition is obvious. So we are reduced
to the case that $V'$ is irreducible and the action, $\rho$, of sl(2) on $V'$ is injective.
We have our Casimir element $C$ whose image in $\operatorname{End} V$ must map $V\to V'$,
since every element of sl(2) does. On the other hand, $C = \tfrac12 n(n+2)\,\mathrm{Id}\neq 0$ on $V'$,
since we are assuming that the action of sl(2) on the irreducible module $V'$ is
not trivial. In particular, the restriction of $C$ to $V'$ is an isomorphism. Hence
the kernel of $C_\rho: V\to V$ is an invariant line supplementary to $V'$. We have proved the
proposition.

Proof of theorem from proposition. Let $0\to E'\to E$ be an exact
sequence of sl(2) modules, and we may assume that $E'\neq 0$. We want to find an
invariant complement to $E'$ in $E$. Define $W$ to be the subspace of $\operatorname{Hom}_k(E,E')$
consisting of those maps whose restriction to $E'$ is a scalar times the identity, and let $W'\subset W$ be the
subspace consisting of those linear transformations whose restriction to $E'$ is
zero. Each of these is a submodule of $\operatorname{End}(E)$. We get a sequence
$$0\to W'\to W\to k\to 0$$
and hence a complementary line of invariant elements in $W$. In particular, we
can find an element $T$ which is invariant, maps $E\to E'$, and whose restriction
to $E'$ is nonzero. Then $\ker T$ is an invariant complementary subspace. QED
2.6 The Weyl group.
We have
$$\exp e = \begin{pmatrix} 1 & 1\\ 0 & 1\end{pmatrix} \quad\text{and}\quad \exp(-f) = \begin{pmatrix} 1 & 0\\ -1 & 1\end{pmatrix}$$
so
$$(\exp e)(\exp -f)(\exp e) = \begin{pmatrix} 1 & 1\\ 0 & 1\end{pmatrix}\begin{pmatrix} 1 & 0\\ -1 & 1\end{pmatrix}\begin{pmatrix} 1 & 1\\ 0 & 1\end{pmatrix} = \begin{pmatrix} 0 & 1\\ -1 & 0\end{pmatrix}.$$
Since
$$\exp\operatorname{ad} x = \operatorname{Ad}(\exp x)$$
we see that
$$\tau := (\exp\operatorname{ad} e)(\exp\operatorname{ad}(-f))(\exp\operatorname{ad} e)$$
consists of conjugation by the matrix
$$\begin{pmatrix} 0 & 1\\ -1 & 0\end{pmatrix}.$$
Thus
$$\tau(h) = \begin{pmatrix} 0 & 1\\ -1 & 0\end{pmatrix}\begin{pmatrix} 1 & 0\\ 0 & -1\end{pmatrix}\begin{pmatrix} 0 & -1\\ 1 & 0\end{pmatrix} = \begin{pmatrix} -1 & 0\\ 0 & 1\end{pmatrix} = -h,$$
$$\tau(e) = \begin{pmatrix} 0 & 1\\ -1 & 0\end{pmatrix}\begin{pmatrix} 0 & 1\\ 0 & 0\end{pmatrix}\begin{pmatrix} 0 & -1\\ 1 & 0\end{pmatrix} = \begin{pmatrix} 0 & 0\\ -1 & 0\end{pmatrix} = -f,$$
and similarly $\tau(f) = -e$. In short
$$\tau:\ e\mapsto -f,\quad f\mapsto -e,\quad h\mapsto -h.$$
In particular, $\tau$ induces the "reflection" $h\mapsto -h$ on $\mathbf C h$ and hence the reflection
$\mu\mapsto -\mu$ (which we shall also denote by $s$) on the (one dimensional) dual space.

In any finite dimensional module $V$ of sl(2) the action of the element $\tau =
(\exp e)(\exp -f)(\exp e)$ is defined, and
$$\tau^{-1}h\,\tau = \operatorname{Ad}_{\tau^{-1}}(h) = s^{-1}h = sh,$$
so if
$$hu = \mu u$$
then
$$h(\tau u) = \tau(\tau^{-1}h\,\tau)u = \tau\,s(h)\,u = -\mu\,\tau u = (s\mu)\,\tau u.$$
So if
$$V_\mu := \{u\in V \mid hu = \mu u\}$$
then
$$\tau(V_\mu) = V_{s\mu}. \tag{2.8}$$
The two element group consisting of the identity and the element $s$ (acting
as a reflection as above) is called the Weyl group of sl(2). Its generalization
to an arbitrary simple Lie algebra, together with the generalization of formula
(2.8), will play a key role in what follows.
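A quick matrix check of the conjugation computed above (an added illustration, assuming NumPy): with $g$ the product $(\exp e)(\exp -f)(\exp e)$, conjugation sends $h\mapsto -h$, $e\mapsto -f$ and $f\mapsto -e$.

```python
import numpy as np

h = np.array([[1, 0], [0, -1]])
e = np.array([[0, 1], [0, 0]])
f = np.array([[0, 0], [1, 0]])

# g = (exp e)(exp -f)(exp e); each factor is unipotent, so the exponentials are exact
g = np.array([[1, 1], [0, 1]]) @ np.array([[1, 0], [-1, 1]]) @ np.array([[1, 1], [0, 1]])
ginv = np.array([[0, -1], [1, 0]])       # inverse of [[0, 1], [-1, 0]]

tau = lambda x: g @ x @ ginv             # conjugation by g

assert np.array_equal(g, np.array([[0, 1], [-1, 0]]))
assert np.array_equal(tau(h), -h)
assert np.array_equal(tau(e), -f)
assert np.array_equal(tau(f), -e)
```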
Chapter 3
The classical simple
algebras.
In this chapter we introduce the "classical" finite dimensional simple Lie
algebras, which come in four families: the algebras sl(n+1) consisting of all
traceless $(n+1)\times(n+1)$ matrices, the orthogonal algebras on even and odd
dimensional spaces (the structures for the even and odd cases are different), and
the symplectic algebras (whose definition we will give below). We will prove
that they are indeed simple by a uniform method, the method that we used
in the preceding chapter to prove that sl(2) is simple. So we axiomatize this
method.
3.1 Graded simplicity.
We introduce the following conditions on the Lie algebra g:
$$g = \bigoplus_{i=-1}^{\infty} g_i \tag{3.1}$$
$$[g_i, g_j] \subset g_{i+j} \tag{3.2}$$
$$[g_1, g_{-1}] = g_0 \tag{3.3}$$
$$[g_{-1}, x] = 0 \ \Rightarrow\ x = 0, \qquad \forall\ x\in g_i,\ \forall\, i\geq 0 \tag{3.4}$$
$$\text{There exists a } d\in g_0 \text{ satisfying } [d,x] = kx,\ x\in g_k,\ \forall\, k, \tag{3.5}$$
and
$$g_{-1} \text{ is irreducible under the (adjoint) action of } g_0. \tag{3.6}$$
Condition (3.4) means that if $x\in g_i$, $i\geq 0$, is such that $[u,x] = 0$ for all $u\in g_{-1}$,
then $x = 0$.
We wish to show that any nonzero g satisfying these six conditions is simple.
We know that $g_{-1}$, $g_0$ and $g_1$ are all nonzero, since $0\neq d\in g_0$ by (3.5) and
$[g_{-1}, g_1] = g_0$ by (3.3). So g can not be the one dimensional commutative
algebra, and hence what we must show is that any nonzero ideal $I$ of g must
be all of g.
We first show that any ideal $I$ must be a graded ideal, i.e. that
$$I = I_{-1}\oplus I_0\oplus I_1\oplus\cdots, \qquad\text{where } I_j := I\cap g_j.$$
Indeed, write any $x\in g$ as $x = x_{-1}+x_0+x_1+\cdots+x_k$ and successively bracket
by $d$ to obtain
$$\begin{aligned}
x &= x_{-1}+x_0+x_1+\cdots+x_k\\
[d,x] &= -x_{-1}+0+x_1+\cdots+kx_k\\
[d,[d,x]] &= x_{-1}+0+x_1+\cdots+k^2x_k\\
&\;\;\vdots\\
(\operatorname{ad} d)^k x &= (-1)^k x_{-1}+0+x_1+\cdots+k^k x_k\\
(\operatorname{ad} d)^{k+1} x &= (-1)^{k+1} x_{-1}+0+x_1+\cdots+k^{k+1} x_k.
\end{aligned}$$
The matrix
$$\begin{pmatrix}
1 & 1 & 1 & \cdots & 1\\
-1 & 0 & 1 & \cdots & k\\
\vdots & & & & \vdots\\
(-1)^k & 0 & 1 & \cdots & k^k\\
(-1)^{k+1} & 0 & 1 & \cdots & k^{k+1}
\end{pmatrix}$$
is non singular. Indeed, it is a van der Monde matrix, that is a matrix of the
form
$$\begin{pmatrix}
1 & 1 & \cdots & 1\\
t_1 & t_2 & \cdots & t_{k+2}\\
\vdots & & & \vdots\\
t_1^k & t_2^k & \cdots & t_{k+2}^k\\
t_1^{k+1} & t_2^{k+1} & \cdots & t_{k+2}^{k+1}
\end{pmatrix}$$
whose determinant is
$$\prod_{i<j}(t_i - t_j)$$
and hence nonzero if all the $t_j$ are distinct. Since $t_1 = -1$, $t_2 = 0$, $t_3 = 1$ etc. in
our case, our matrix is invertible, and so we can solve for each of the components
of $x$ in terms of the $(\operatorname{ad} d)^j x$. In particular, if $x\in I$ then all the $(\operatorname{ad} d)^j x\in I$,
since $I$ is an ideal, and hence all the components $x_j$ of $x$ belong to $I$ as claimed.
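For concreteness, the van der Monde determinant identity used here can be confirmed symbolically for a small size, e.g. with the nodes $-1, 0, 1, 2$ that occur when $k = 2$ (an added SymPy sketch; note that the displayed product agrees with the determinant only up to an overall sign in general, and they coincide for this size).

```python
import sympy as sp
from math import prod

t = [-1, 0, 1, 2]                         # the values t_1, ..., t_{k+2} with k = 2
M = sp.Matrix([[tj**i for tj in t] for i in range(len(t))])   # van der Monde matrix
det = M.det()
p = prod(ti - tj for i, ti in enumerate(t) for tj in t[i + 1:])  # prod_{i<j}(t_i - t_j)
assert det == p           # here det equals the product (they agree up to sign in general)
assert det != 0           # nonzero since the t_j are distinct
```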
The subspace $I_{-1}\subset g_{-1}$ is invariant under the adjoint action of $g_0$ on
$g_{-1}$, and since we are assuming that this action is irreducible, there are two
possibilities: $I_{-1} = 0$ or $I_{-1} = g_{-1}$. We will show that in the first case $I = 0$
and in the second case that $I = g$.

Indeed, if $I_{-1} = 0$ we will show inductively that $I_j = 0$ for all $j\geq 0$. Suppose
$0\neq u\in I_0$. Since every element of $[g_{-1}, u]$ belongs to $I$ and to $g_{-1}$, we conclude
that $[g_{-1}, u] = 0$ and hence that $u = 0$ by (3.4). Thus $I_0 = 0$. Suppose that we
know that $I_{j-1} = 0$. Then the same argument shows that any $u\in I_j$ satisfies
$[g_{-1}, u] = 0$ and hence $u = 0$. So $I_j = 0$ for all $j$, and since $I$ is the sum of all
the $I_j$, we conclude that $I = 0$.

Now suppose that $I_{-1} = g_{-1}$. Then $g_0 = [g_{-1}, g_1] = [I_{-1}, g_1]\subset I$. Furthermore,
since $d\in g_0\subset I$ we conclude that $g_k\subset I$ for all $k\neq 0$, since every
element $u$ of such a $g_k$ can be written as $u = \frac{1}{k}[d,u]\in I$. Hence $I = g$. QED
For example, the Lie algebra of all polynomial vector fields, where
$$g_k = \Bigl\{\sum X_i\frac{\partial}{\partial x_i}\ :\ X_i \text{ homogeneous polynomials of degree } k+1\Bigr\},$$
is a simple Lie algebra. Here $d$ is the Euler vector field
$$d = x_1\frac{\partial}{\partial x_1} + \cdots + x_n\frac{\partial}{\partial x_n}.$$
This algebra is infinite dimensional. We are primarily interested in the finite
dimensional Lie algebras.
3.2 sl(n + 1)
Write the most general matrix in sl(n+1) as
$$\begin{pmatrix} -\operatorname{tr} A & u^*\\ v & A\end{pmatrix}$$
where $A$ is an arbitrary $n\times n$ matrix, $v$ is a column vector and $u^* = (u_1,\dots,u_n)$
is a row vector. Let $g_{-1}$ consist of matrices with just the top row, i.e. with
$v = A = 0$. Let $g_1$ consist of matrices with just the left column, i.e. with
$A = u^* = 0$. Let $g_0$ consist of matrices with just the central block, i.e. with
$v = u^* = 0$. Let
$$d = \frac{1}{n+1}\begin{pmatrix} -n & 0\\ 0 & I\end{pmatrix}$$
where $I$ is the $n\times n$ identity matrix. Thus $g_0$ acts on $g_{-1}$ as the algebra of all
endomorphisms, and so $g_{-1}$ is irreducible. We have
$$\Bigl[\begin{pmatrix} 0 & 0\\ v & 0\end{pmatrix},\ \begin{pmatrix} 0 & u^*\\ 0 & 0\end{pmatrix}\Bigr] = \begin{pmatrix} -\langle u^*, v\rangle & 0\\ 0 & v\otimes u^*\end{pmatrix},$$
where $\langle u^*, v\rangle$ denotes the value of the linear function $u^*$ on the vector $v$, and
this is precisely the trace of the rank one linear transformation $v\otimes u^*$. Thus
all our axioms are satisfied. The algebra sl(n+1) is simple.
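A small numerical illustration of this grading (an added sketch, not in the original text) for the case $n = 2$, i.e. sl(3): with $d$ as above, bracketing by $d$ multiplies the top-row part by $-1$ and the left-column part by $+1$, and the bracket of a left-column element with a top-row element lands in the block-diagonal part $g_0$.

```python
import numpy as np

n = 2
d = np.diag([-float(n)] + [1.0] * n) / (n + 1)   # d = (1/(n+1)) diag(-n, 1, ..., 1)

def br(a, b):
    return a @ b - b @ a

u = np.zeros((n + 1, n + 1)); u[0, 1:] = [1.0, 2.0]    # element of g_{-1}: top row u*
v = np.zeros((n + 1, n + 1)); v[1:, 0] = [3.0, -1.0]   # element of g_1: left column v

assert np.allclose(br(d, u), -u)                 # [d, u] = -u on g_{-1}
assert np.allclose(br(d, v), v)                  # [d, v] = +v on g_1

c = br(v, u)                                     # should lie in g_0: block diagonal, trace zero
assert np.allclose(c[0, 1:], 0) and np.allclose(c[1:, 0], 0)
assert abs(np.trace(c)) < 1e-12
```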
3.3 The orthogonal algebras.
The algebra o(2) is one dimensional and (hence) commutative. In our (real)
Euclidean three dimensional space, the algebra o(3) has a basis $X, Y, Z$ (infinitesimal
rotations about each of the axes) with bracket relations
$$[X,Y] = Z, \qquad [Y,Z] = X, \qquad [Z,X] = Y$$
(the usual formulae for the "vector product" in three dimensions). But we are over
the complex numbers, so we can consider the basis $X+iY$, $-X+iY$, $iZ$ and find
that
$$[iZ, X+iY] = X+iY, \quad [iZ, -X+iY] = -(-X+iY), \quad [X+iY, -X+iY] = 2iZ.$$
These are the bracket relations for sl(2) with $e = X+iY$, $f = -X+iY$, $h = 2iZ$.
In other words, the complexification of our three dimensional world is the
irreducible three dimensional representation of sl(2), so $\mathrm o(3)\cong\mathrm{sl}(2)$, which is
simple.
To study the higher dimensional orthogonal algebras it is useful to make two
remarks:
If \ is a vector space with a nondegenerate symmetric bilinear form ( . ),
we get an isomorphism of \ with its dual space \
∗
sending every n ∈ \ to the
linear function /
u
where /
u
(·) = (·. n). This gives an identiﬁcation of
End(\ ) = \ ⊗\
∗
with \ ⊗\.
Under this identiﬁcation, the elements of o(\ ) become identiﬁed with the anti
symmetric two tensors, that is with elements of ∧
2
(\ ). (In terms of an or
thonormal basis, a matrix ¹ belongs to o(\ ) if and only if it is antisymmetric.)
Explicitly, an element n∧· becomes identiﬁed with the linear transformation
¹
u∧v
where
¹
u∧v
r = (r. ·)n −(n. r)·.
This has the following consequence. Suppose that . ∈ \ with (.. .) = 0, and
let n be any element of \ . Then
¹
w∧z
. = (.. .)n −(.. n).
and so l(o(\ )). = \ . On the other hand, suppose that n ∈ \ with (n. n) = 0.
We can ﬁnd · ∈ \ with (·. ·) = 0 and (·. n) = 1. Now suppose in addition that
dim \ ≥ 3. We can then ﬁnd a . ∈ \ orthogonal to the plane spanned by n
and · and with (.. .) = 1. Then
¹
z∧v
n = ..
so . ∈ l(o(\ ))c and hence l(o(\ ))n = \ . We have proved:
1 If dim \ ≥ 3, then every nonzero vector in \ is cyclic, i.e the representa
tion of o(\ ) on \ is irreducible.
(In two dimensions this is false  the line spanned by a vector c with (c. c) = 0
is a one dimensional invariant subspace.)
We now show that
2 o(\ ) is simple for dim \ ≥ 5.
For this, begin by writing down the bracket relations for elements of o(\ ) in
terms of their parametrization by elements of ∧
2
\ . Direct computation shows
that
[¹
u∧v
. ¹
x∧y
] = (·. r)¹
u∧y
−(n. r)¹
v∧y
−(·. n)¹
u∧x
+ (n. n)¹
v∧x
. (3.7)
Now let n = dim\ −2 and choose a basis
n. ·. r
1
. . . . . r
n
of \ where
(n. n) = (n. r
i
) = (·. ·) = (·. r
i
) = 0 ∀i. (n. ·) = 1. (r
i
. r
j
) = δ
ij
.
Let g := o(\ ) and write \ for the subspace spanned by the r
i
. Set
d := ¹
u∧v
and
g
−1
:= ¦¹
v∧x
. r ∈ \¦. g
0
:= o(\) ⊕Cd. g
1
:= ¦¹
u∧x
. r ∈ \¦.
It then follows from (3.7) that d satisﬁes (3.5). The spaces g
−1
and g
1
look like
copies of \ with the o(\) part of g
0
acting as o(\), hence irreducibly since
dim\ ≥ 3. All our remaining axioms are easily veriﬁed. Hence o(\ ) is simple
for dim\ ≥ 5.
We have seen that o(3) = :(2) is simple.
However o(4) is not simple, being isomorphic to :(2) ⊕:(2): Indeed, if 7
1
and 7
2
are vector spaces equipped with nondegenerate antisymmetric bilinear
forms ' . `
1
and ' . `
2
then 7
1
⊗7
2
has a nondegenerate symmetric bilinear form
( . ) determined by
(n
1
⊗n
2
. ·
1
⊗·
2
) = 'n
1
. ·
1
`
1
'n
2
. ·
2
`
2
.
The algebra :(2) acting on its basic two dimensional representation inﬁnitesi
mally preserves the antisymmetric form given by
r
1
r
2
.
n
1
n
2
= r
1
n
2
−r
2
n
1
.
Hence, if we take 7 = 7
1
= 7
2
to be this two dimensional space, we see that
:(2) ⊕:(2) acts as inﬁnitesimal orthogonal transformations on 7 ⊗7 which is
four dimensional. But o(4) is six dimensional so the embedding of :(2) ⊕:(2)
in o(4) is in fact an isomorphism since 3 + 3 = 6.
3.4 The symplectic algebras.
We consider an even dimensional space with coordinates c
1
. c
2
. . . . . j
1
. j
2
. . . . .
The polynomials have a Poisson bracket
¦1. o¦ :=
¸
∂1
∂j
i
∂o
∂c
i
−
∂1
∂c
i
∂o
∂j
i
. (3.8)
This is clearly antisymmetric, and direct computation will show that the Ja
cobi identity is satisﬁed. Here is a more interesting proof of Jacobi’s identity:
Notice that if 1 is a constant, then ¦1. o¦ = 0 for all o. So in doing bracket
computations we can ignore constants. On the other hand, if we take o to be
successively c
1
. . . . . c
n
. j
1
. . . . . j
n
in (3.8) we see that the partial derivatives of 1
are completely determined by how it brackets with all o, in fact with all linear
o. If we ﬁx 1, the map
/ →¦1. /¦
is a derivation, i.e. it is linear and satisﬁes
¦1. /
1
/
2
¦ = ¦1. /
1
¦/
2
+/
1
¦1. /
2
¦.
This follows immediately from from the deﬁnition (3.8). Now Jacobi’s identity
amounts to the assertion that
¦¦1. o¦. /¦ = ¦1. ¦o. /¦¦ −¦o. ¦1. /¦¦.
i.e. that the derivation
/ →¦¦1. o¦. /¦
is the commutator of the of the derivations
/ →¦1. /¦ and / →¦o. /¦.
It is enough to check this on linear polynomials /, and hence on the polynomials
c
j
and j
k
. If we take / = c
j
then
¦1. c
j
¦ =
∂1
∂j
j
. ¦o. c
j
¦ =
∂o
∂j
j
so
¦1. ¦o. c
j
¦¦ =
¸
∂1
∂j
i
∂
2
o
∂c
i
∂j
j
−
∂1
∂c
i
∂
2
o
∂j
i
∂j
j
¦1. ¦1. c
j
¦¦ =
¸
∂o
∂j
i
∂
2
1
∂c
i
∂j
j
−
∂o
∂c
i
∂
2
1
∂j
i
∂j
j
so
¦1. ¦o. c
j
¦¦ −¦o. ¦1. c
j
¦¦ =
∂
∂j
j
¦1. o¦
= ¦¦1. o¦. c
j
¦
as desired, with a similar computation for $p_k$.
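The Jacobi identity for the Poisson bracket can also be verified directly on symbols; the following SymPy sketch (an added illustration, not from the original text) checks it for sample polynomials in $q_1, q_2, p_1, p_2$.

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
qs, ps = (q1, q2), (p1, p2)

def pb(f, g):
    """Poisson bracket {f, g} = sum_i (df/dp_i dg/dq_i - df/dq_i dg/dp_i), as in (3.8)."""
    return sum(sp.diff(f, p) * sp.diff(g, q) - sp.diff(f, q) * sp.diff(g, p)
               for q, p in zip(qs, ps))

f = q1**2 * p2 + q2 * p1
g = p1 * p2 + q1 * q2**2
h = q1 * p1**2 - q2

jacobi = pb(pb(f, g), h) + pb(pb(g, h), f) + pb(pb(h, f), g)
assert sp.simplify(jacobi) == 0
```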
The symplectic algebra :j(2n) is deﬁned to be the subalgebra consisting of
all homogeneous quadratic polynomials. We divide these polynomials into three
groups as follows: Let g
1
consist of homogeneous polynomials in the c’s alone,
so g
1
is spanned by the c
i
c
j
. Let g
−1
be the quadratic polynomials in the j’s
alone, and let g
0
be the mixed terms, so spanned by the c
i
j
j
. It is easy to see
that g
0
∼ o(n) and that [g
−1
. g
1
] = g
0
. To check that g
−1
is irreducible under
g
0
, observe that [j
1
c
j
. j
k
j
] = 0 if , = / or /, and [j
1
c
j
. j
j
j
] is a multiple of
j
1
j
. So we can by one or two brackets carry any nonzero element of g
−1
into
a nonzero multiply of j
2
1
, and then get any monomial from j
2
1
by bracketing
with j
i
c
1
appropriately. The element d is given by
1
2
(j
1
c
1
+ +j
n
c
n
).
We have shown that the symplectic algebra is simple, but we haven’t really
explained what it is. Consider the space of \ of homogenous linear polynomials,
i.e all polynomials of the form
/ = o
1
c
1
+ +o
n
c
n
+/
1
j
q
+ +/
n
j
n
.
Deﬁne an antisymmetric bilinear form ω on \ by setting
ω(/. /
) := ¦/. /
).
From the formula (3.8) it follows that the Poisson bracket of two linear functions
is a constant, so ω does indeed deﬁne an antisymmetric bilinear form on \ ,
and we know that this bilinear form is nondegenerate. Furthermore, if 1 is a
homogenous quadratic polynomial, and / is linear, then ¦1. /¦ is again linear,
and if we denote the map
/ →¦1. /¦
by ¹ = ¹
f
, then Jacobi’s identity translates into
ω(¹/. /
) +ω(/¹/
) = 0 (3.9)
since ¦/. /
¦ is a constant. Condition (3.9) can be interpreted as saying that ¹
belongs to the Lie algebra of the group of all linear transformations 1 on \
which preserve ω, i.e. which satisfy
ω(1/. 1/
) = ω(/. /
).
This group is known as the symplectic group. The form ω induces an isomor
phism of \ with \
∗
and hence of Hom(\. \ ) = \ ⊗ \
∗
with \ ⊗ \ , and this
time the image of the set of ¹ satisfying (3.9) consists of all symmetric ten
sors of degree two, i.e. of o
2
(\ ). (Just as in the orthogonal case we got the
antisymmetric tensors). But the space o
2
(\ ) is the same as the space of ho
mogenous polynomials of degree two. In other words, the symplectic algebra as
deﬁned above is the same as the Lie algebra of the symplectic group.
It is an easy theorem in linear algebra, that if \ is a vector space which
carries a nondegenerate antisymmetric bilinear form, then \ must be even
dimensional, and if dim \ = 2n then it is isomorphic to the space constructed
above. We will not pause to prove this theorem.
3.5 The root structures.
We are going to choose a basis for each of the classical simple algebras which
generalizes the basis c. 1. / that we chose for :(2). Indeed, for each classical
simple algebra g we will ﬁrst choose a maximal commutative subalgebra h all
of whose elements are semisimple = diagonizable in the adjoint representation.
Since the adjoint action of all the elements of h commute, this means that they
can be simultaneously diagonalized. Thus we can decompose g into a direct
sum of simultaneous eigenspaces
g = h ⊕
α
g
α
(3.10)
where 0 = α ∈ h
∗
and
g
α
:= ¦r ∈ g[[/. r] = α(/)r ∀ / ∈ h¦.
The linear functions α are called roots (originally because the α(/) are roots
of the characteristic polynomial of ad(/)). The simultaneous eigenspace g
α
is
called the root space corresponding to α. The collection of all roots will usually
be denoted by Φ.
Let us see how this works for each of the classical simple algebras.
3.5.1 A
n
= sl(n + 1).
We choose h to consist of the diagonal matrices in the algebra :(n + 1) of all
(n + 1) (n + 1) matrices with trace zero. As a basis of h we take
/
1
:=
¸
¸
¸
¸
1 0 0 0
0 −1 0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 0
¸
/
2
:=
¸
¸
¸
¸
¸
¸
0 0 0 0
0 1 0 0
0 0 −1 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 0
¸
.
.
. :=
.
.
.
/
n
:=
¸
¸
¸
¸
0 0 0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 1 0
0 0 0 −1
¸
.
Let 1
i
denote the linear function which assigns to each diagonal matrix its
ith (diagonal) entry,
Let 1
ij
denote the matrix with one in the i. , position and zero’s elsewhere.
Then
[/. 1
ij
] = (1
i
(/) −1
j
(/))1
ij
∀ / ∈ h
so the linear functions of the form
1
i
−1
j
. i = ,
are the roots.
We may subdivide the set of roots into two classes: the positive roots
Φ
+
:= ¦1
i
−1
j
; i < ,¦
and the negative roots
Φ
−
:= −Φ
+
= ¦1
j
−1
i
. i < ,¦.
Every root is either positive or negative. If we deﬁne
α
i
:= 1
i
−1
i+1
then every positive root can be written as a sum of the α
i
:
1
i
−1
j
= α
i
+ +α
j−1
.
We have
α
i
(/
i
) = 2.
and for i = ,
α
i
(/
i±1
) = −1. α
i
(/
j
) = 0. , = i ±1. (3.11)
The elements
1
i,i+1
. /
i
. 1
i+1,i
form a subalgebra of :(n + 1) isomorphic to :(2). We may call it :(2)
i
.
3.5.2 C
n
= sp(2n), n ≥ 2.
Let h consist of all linear combinations of j
1
c
1
. . . . . j
n
c
n
and let 1
i
be deﬁned
by
1
i
(o
1
j
1
c
1
+ +o
n
j
n
c
n
) = o
i
so 1
1
. . . . . 1
n
is the basis of h
∗
dual to the basis j
1
c
1
. . . . . j
n
c
n
of h.
If / = o
1
j
1
c
1
+ +o
n
j
n
c
n
then
[/. c
i
c
j
] = (o
i
+o
j
)c
i
c
j
[/. c
i
j
j
] = (o
i
−o
j
)c
i
j
j
[/. j
i
j
j
] = −(o
i
+o
j
)j
i
j
j
so the roots are
±(1
i
+1
j
) all i. , and 1
i
−1
j
i = ,.
We can divide the roots Φ into positive and negative roots by setting
Φ
+
= ¦1
i
+1
j
¦
all ij
∪ ¦1
i
−1
j
¦
i<j
.
If we set
α
1
:= 1
1
−1
2
. . . . . α
n−1
:= 1
n−1
−1
n
. α
n
:= 21
n
then every positive root is a sum of the α
i
. Indeed, 1
n−1
+ 1
n
= α
n−1
+ α
n
and 21
n−1
= 2α
n−1
+α
n
and so on. In particular 2α
n−1
+α
n
is a root.
If we set
/
1
:= j
1
c
1
−j
2
c
2
. . . . . /
n−1
:= j
n−1
c
n−1
−j
n
c
n
. /
n
:= j
n
c
n
then
α
i
(/
i
) = 2
while for i = ,
α
i
(/
i±1
) = −1. i = 1. . . . . n −1
α
i
(/
j
) = 0. , = i ±1. i = 1. . . . . n (3.12)
α
n
(/
n−1
) = −2.
In particular, the elements /
i
. c
i
j
i+1
. c
i+1
j
i
for i = 1. . . . . n−1 form a subalgebra
isomorphic to :(2) as do the elements /
n
.
1
2
c
2
n
. −
1
2
j
2
n
. We call these subalgebras
:(2)
i
. i = 1. . . . . n.
3.5.3 D
n
= o(2n), n ≥ 3.
We choose a basis n
1
. . . . . n
n
. ·
1
. . . . . ·
n
of our orthogonal vector space \ such
that
(n
i
. n
j
) = (·
i
. ·
j
) = 0. ∀ i. ,. (n
i
. ·
j
) = δ
ij
.
We let h be the subalgebra of o(\ ) spanned by the ¹
uivi
, i = 1. . . . . n. Here we
have written ¹
xy
instead of ¹
x∧y
in order to save space. We take
¹
u1v1
. . . . . ¹
unvn
as a basis of h and let 1
1
. . . . . 1
n
be the dual basis. Then
±1
k
±1
/ = /
are the roots since from (3.7) we have
[¹
uivi
. ¹
u
k
u
] = (δ
ik
+δ
i
)¹
u
k
u
[¹
uivi
. ¹
u
k
v
] = (δ
ik
−δ
i
)¹
u
k
v
[¹
uivi
. ¹
v
k
v
] = −(δ
ik
+δ
i
)¹
v
k
v
.
We can choose as positive roots the
1
k
+1
. 1
k
−1
. / < /
and set
α
i
:= 1
i
−1
i+1
. i = 1. . . . . n −1. α
n
:= 1
n−1
+1
n
.
Every positive root is a sum of these simple roots. If we set
/
i
:= ¹
uivi
−¹
ui+1vi+1
. i = 1. . . . n −1.
and
/
n
= ¹
un−1vn−1
+¹
unvn
then
α
i
(/
i
) = 2
and for i = ,
α
i
(/
j
) = 0 , = i ±1. i = 1. . . . n −2
α
i
(/
i±1
) = −1 i = 1. . . . . n −2
α
n−1
(/
n−2
) = −1 (3.13)
α
n
(/
n−2
) = −1
α
n
(/
n−1
) = 0.
For i = 1. . . . . n −1 the elements /
i
. ¹
uivi+1
. ¹
ui+1vi
form a subalgebra isomor
phic to :(2) as do /
n
. ¹
un−1un
. ¹
vn−1vn
.
3.5.4 B
n
= o(2n + 1) n ≥ 2.
We choose a basis n
1
. . . . . n
n
. ·
1
. . . . . ·
n
. r of our orthogonal vector space \ such
that
(n
i
. n
j
) = (·
i
. ·
j
) = 0. ∀ i. ,. (n
i
. ·
j
) = δ
ij
.
and
(r. n
i
) = (r. ·
i
) = 0 ∀ i. (r. r) = 1.
As in the even dimensional case we let h be the subalgebra of o(\ ) spanned by
the ¹
uivi
, i = 1. . . . . n and take
¹
u1v1
. . . . . ¹
unvn
as a basis of h and let 1
1
. . . . . 1
n
be the dual basis. Then
±1
i
±1
j
i = ,. ±1
i
are roots. We take
1
i
±1
j
. 1 ≤ i < , ≤ n. together with 1
i
. i = 1. . . . . n
to be the positive roots, and
α
i
:= 1
i
−1
i+1
. i = 1. . . . . n −1. α
n
:= 1
n
to be the simple roots. We let
/
i
:= ¹
uivi
−¹
ui+1vi+1
. i = 1. . . . n −1.
as in the even case, but set
/
n
:= 2¹
unvn
.
Then every positive root can be written as a sum of the simple roots,
α
i
(/
i
) = 2. i = 1. . . . n.
and for i = ,
α
i
(/
j
) = 0 , = i ±1. i = 1. . . . n
α
i
(/
i±1
) = −1 i = 1. . . . . n −2. n (3.14)
α
n−1
(/
n
) = −2
Notice that in this case α
n−1
+ 2α
n
= 1
n−1
+ 1
n
is a root. Finally we can
construct subalgebras isomorphic to :(2), with the ﬁrst n − 1 as in the even
orthogonal case and the last :(2) spanned by /
n
. ¹
unx
. −¹
vnx
.
3.5.5 Diagrammatic presentation.
The information of the last four subsections can be summarized in each of the
following four diagrams:
The way to read this diagram is as follows: each node in the diagram stands
for a simple root, reading from left to right, starting with α
1
at the left. (In the
diagram 1
the two rightmost nodes are α
−1
and α
, say the top α
−1
and the
bottom α
.) Two nodes α
i
and α
j
are connected by (one or more) edges if and
only if α
i
(/
j
) = 0.
In all cases, the diﬀerence, α
i
−α
j
is never a root, and, for i = ,. α
i
(/
j
) ≤ 0
and is an integer. If, for i = ,, α
i
(/
j
) < 0 then α
i
+α
j
is a root.
In two of the cases (1
and C
) it happens that α
i
(/
j
) = −2. Then α
i
+α
j
and α
i
+ 2α
j
are roots, and we draw a double bond with an arrow pointing
towards α
j
. In this case 2 is is the maximum integer such that α
i
+ /α
j
is a
root. In all other cases, this maximum integer / is one if the nodes are connected
(and zero it they are not).
3.6 Low dimensional coincidences.
We have already seen that o(4) ∼ :(2) ⊕:(2). We also have
o(6) ∼ :(4).
• • . . . . . . •
¨
¨
r
r
•
•
1
/ ≥ 4
• •
. . . . . .
•< • C
/ ≥ 2
• • . . . . . . • • 1
/ ≥ 3
•
. . . . . . . . . . . .
• ¹
Figure 3.1: Dynkin diagrams of the classical simple algebras.
Both algebras are ﬁfteen dimensional and both are simple. So to realize this
isomorphism we need only ﬁnd an orthogonal representation of :(4) on a six
dimensional space. If we let \ = C
4
with the standard representation of :(4),
we get a representation of :(4) on ∧
2
(\ ) which is six dimensional. So we must
describe a nondegenerate bilinear form on ∧
2
\ which is invariant under the
action of :(4). We have a map, wedge product, of
∧
2
\ ∧
2
\ →∧
4
\.
Furthermore this map is symmetric, and invariant under the action of o(4).
However :(4) preserves a basis (a nonzero element) of ∧
4
\ and so we may
identify ∧
4
\ with C. It is easy to check that the bilinear form so obtained is
nondegenerate
We also have the identiﬁcation
:j(4) ∼ o(5)
both algebras being ten dimensional. To see this let \ = C
4
with an antisym
metric form ω preserved by oj(4). Then ω ⊗ ω induces a symmetric bilinear
form on \ ⊗\ as we have seen. Sitting inside \ ⊗\ as an invariant subspace
is ∧
2
\ as we have seen, which is six dimensional. But ∧
2
\ is not irreducible as
a representation of :j(4). Indeed, ω ∈ ∧
2
\
∗
is invariant, and hence its kernel is
a ﬁve dimensional subspace of ∧
2
\ which is invariant under :j(4). We thus get
a nonzero homomorphism :j(4) → o(5) which must be an isomorphism since
:j(4) is simple.
These coincidences can be seen in the diagrams. If we were to allow / = 2 in
the diagram for 1
it would be indistinguishable from C
2
. If we were to allow
/ = 3 in the diagram for 1
it would be indistinguishable from ¹
3
.
3.7 Extended diagrams.
It follows from Jacobi’s identity that in the decomposition (3.10), we have
[g
α
. g
α
] ⊂ g
α+α
(3.15)
with the understanding that the right hand side is zero if α + α
is not a root.
In each of the cases examined above, every positive root is a linear combination
of the simple roots with nonnegative integer coeﬃcients. Since the algebra is
ﬁnite, there must be a maximal positive root β in the sense that β +α
i
is not
a root for any simple root. For example, in the case of ¹
n
= :(n +1), the root
β := 1
1
−1
n+1
is maximal. The corresponding g
β
consists of all (n+1)(n+1)
matrices with zeros everywhere except in the upper right hand corner. We can
also consider the minimal root which is the negative of the maximal root, so
α
0
:= −β = 1
n+1
−1
1
in the case of ¹
n
. Continuing to study this case, let
/
0
:= /
n+1
−/
1
.
Then we have
α
i
(/
i
) = 2. i = 0. . . . n
and
α
0
(/
1
) = α
0
(/
n
) = −1. α
0
(/
i
) = 0. i = 0. 1. n.
This means that if we write out the (n + 1) (n + 1) matrix whose entries are
α
i
(/
j
). i. , = 0. . . . n we obtain a matrix of the form
21 −`
where `
ij
= 1 if and only if , = ±1 with the understanding that n+1 = 0 and
−1 = n, i.e we do the subscript arithmetic mod n. In other words, ` is the
adjacency matrix of the cyclic graph with n + 1 vertices labeled 0. . . . n. Also,
we have
/
0
+/
1
+ +/
n
= 0.
If we apply α
i
to this equation for i = 0. . . . n we obtain
(21 −`)1 = 0.
where 1 is the column vector all of whose entries are 1. We can write this
equation as
`1 = 21.
In other words, 1 is an eigenvector of ` with eigenvalue 2.
In the chapters that follow we shall see that any ﬁnite dimensional simple Lie
algebra has roots, simple roots, maximal roots etc. giving rise to a matrix `
with integer entries which is irreducible (in the sense of nonnegative matrices 
deﬁnition later on) and which has an eigenvector with positive (integer) entries
with eigenvalue 2. This will allow us to classify the simple (ﬁnite dimensional)
Lie algebras.
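As a quick sanity check of the eigenvector statement for the $A_n$ case discussed above (an added illustration, assuming NumPy): with $M$ the adjacency matrix of the cycle on $n+1$ vertices, the all-ones vector is an eigenvector of $M$ with eigenvalue $2$, i.e. $(2I - M)\mathbf 1 = 0$.

```python
import numpy as np

n = 5                                     # case A_5: cycle with n + 1 = 6 vertices
N = n + 1
M = np.zeros((N, N), dtype=int)
for i in range(N):
    M[i, (i + 1) % N] = 1                 # adjacency matrix of the cyclic graph
    M[i, (i - 1) % N] = 1

ones = np.ones(N, dtype=int)
assert np.array_equal(M @ ones, 2 * ones)  # M 1 = 2 * 1, i.e. (2I - M) 1 = 0
```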
Chapter 4
EngelLieCartanWeyl
We return to the general theory of Lie algebras. Many of the results in this
chapter are valid over arbitrary ﬁelds, indeed if we use the axioms to deﬁne
a Lie algebra over a ring many of the results are valid in this generality. But
some of the results depend heavily on the ring being an algebraically closed ﬁeld
of characteristic zero. As a compromise, throughout this chapter we deal with
ﬁelds, and will assume that all vector spaces and all Lie algebras which appear
are ﬁnite dimensional. We will indicate the necessary additional assumptions on
the ground ﬁeld as they occur. The treatment here follows Serre pretty closely.
4.1 Engel’s theorem
Deﬁne a Lie algebra g to be nilpotent if:
∃n[ [r
1
. [r
2
. . . . r
n+1
] . . . ] = 0 ∀ r
1
. . . . . r
n+1
∈ g.
Example: n
+
:= n
+
(o(d)) := all strictly upper triangular matrices. Notice
that the product of any d + 1 such matrices is zero.
The claim is that all nilpotent Lie algebras are essentially like n
+
.
We can reformulate the deﬁnition of nilpotent as saying that the product of
any n operators ad r
i
vanishes. One version of Engel’s theorem is
Theorem 3 g is nilpotent if and only if ad r is a nilpotent operator for each
r ∈ g.
This follows (taking \ = g and the adjoint representation) from
Theorem 4 Engel Let ρ : g → End(\ ) be a representation such that ρ(r) is
nilpotent for each r ∈ g. Then there exists a basis in terms of which ρ(g) ⊂
n
+
(o(d)), i.e. becomes strictly upper triangular. Here d =dim \ .
Given a single nilpotent operator, we can always ﬁnd a nonzero vector, ·
which it sends into zero. Then on \´¦·¦ a nonzero vector which the induced
map sends into zero etc. So in terms of such a ﬂag, the corresponding matrix
is strictly upper triangular. The theorem asserts that we are can ﬁnd a single
ﬂag which works for all ρ(r). In view of the above proof for a single operator,
Engel’s theorem follows from the following simpler looking statement:
Theorem 5 Under the hypotheses of Engel’s theorem, if \ = 0, there exists a
nonzero vector · ∈ \ such that ρ(r)· = 0 ∀r ∈ g.
Proof of Theorem 5 in seven easy steps.
• Replace g by its image, i.e. assume that g ⊂ End \ .
• Then (ad r)n = 1
x
n −1
x
n where 1
x
is the linear map of End \ into itself
given by left multiplication by r, and 1
x
is given by right multiplication
by r. Both 1
x
and 1
x
are nilpotent as operators since r is nilpotent.
Also they commute. Hence by the binomial formula (ad r)
n
= (1
x
−1
x
)
n
vanishes for suﬃciently large n.
• We may assume (by induction) that for any Lie algebra, m, of smaller
dimension than that of g (and any representation) there exists a · ∈ \
such that r· = 0 ∀r ∈ m.
• Let k ⊂ g be a subalgebra, k = g, and let
· = ·(k) := ¦r ∈ g[(ad r)k ⊂ k¦
be its normalizer. The claim is that
3 ·(k) is strictly larger than k.
To see this, observe that each r ∈ k acts on k and on g´k by nilpotent
maps, and hence there is an 0 = ˆ n ∈ g´k killed by all r ∈ k. But then
n ∈ k, and [n. r] = −[r. n] ∈ k for all r ∈ k. So n ∈ ·(k). n ∈ k.
• If g = 0, there is an ideal i ⊂ g such that dimg´i = 1. Indeed, let i be a
maximal proper subalgebra of g. Its normalizer is strictly larger, hence all
of g, so i is an ideal. The inverse image in g of a line in g´i is a subalgebra,
and is strictly larger than i. Hence it must be all of g.
• Choose such an ideal, i. The subspace
\ ⊂ \. \ = ¦·[r· = 0. ∀r ∈ i¦
is invariant under g. Indeed, if n ∈ g. n ∈ \ then rnn = nrn+[r. n]n =
0.
• \ = 0 by induction. Take n ∈ g. n ∈ i. It preserves \ and is nilpotent.
Hence there is a nonzero · ∈ \ with n· = 0. Since n and i span g, we
have r· = 0 ∀r ∈ g. QED
No assumptions about the ground ﬁeld went into this.
4.2 Solvable Lie algebras.
Let g be a Lie algebra. 1
n
g is deﬁned inductively by
1
0
g := g. 1
1
(g) := [g. g]. . . . . 1
n+1
g := [1
n
g. 1
n
g].
If we take b to consist of all upper triangular n n matrices, then 1
1
b = n
+
consists of all strictly triangular matrices and then successive brackets eventually
lead to zero. We claim that the following conditions are equivalent and any Lie
algebra satisfying them is called solvable.
1. ∃n [1
n
g = 0.
2. ∃n such that for every family of 2
n
elements of g the successive brackets
of brackets vanish; e.g for n = 4 this says
[[[[r
1
. r
2
]. [r
3
. r
4
]]. [[r
5
. r
6
]. [r
7
. r
8
]]]. [[[r
9
. r
10
]. [r
11
. r
12
]]. [[r
13
.r
14
]. [r
15
. r
16
]]]] = 0.
3. There exists a sequence of subspaces g := i
1
⊃ i
2
⊃ ⊃ i
n
= 0 such each
is an ideal in the preceding and such that the quotient i
j
´i
j+1
is abelian,
i.e. [i
j
. i
j
] ⊂ i
j+1
.
Proof of the equivalence of these conditions. [g. g] is always an ideal in
g so the 1
j
g form a sequence of ideals demanded by 3), and hence 1) ⇒3). We
also have the obvious implications 3) ⇒ 2) and 2) ⇒ 1). So all these deﬁnitions
are equivalent.
Theorem 6 [Lie.] Let g be a solvable Lie algebra over an algebraically closed
ﬁeld / of characteristic zero, and (ρ. \ ) a ﬁnite dimensional representation of g.
Then we can ﬁnd a basis of \ so that ρ(g) consists of upper triangular matrices.
By induction on dim \ this reduces to
Theorem 7 [Lie.] Under the same hypotheses, there exists a (nonzero) com
mon eigenvector · for all the ρ(n), i.e. there is a vector · ∈ \ and a function
χ : g →/ such that
ρ(n)· = χ(n)· ∀ n ∈ g. (4.1)
Lemma 2 Suppose that i is an ideal of g and (4.1) holds for all n ∈ i. Then
χ([r. /]) = 0. ∀ r ∈ g / ∈ i.
Proof of lemma. For r ∈ g let \
i
be the subspace spanned by ·. r·. . . . . r
i−1
·
and let n 0 be minimal such that \
n
= \
n+1
. So \
n
is ﬁnite dimensional and
r\
n
⊂ \
n
. Also \
n
= \
n+k
∀/.
Also, for / ∈ i, (dropping the ρ) we have:
/· = χ(/)·
/r· = r/· −[r. /]·
≡ χ(/)r· mod \
1
/r
2
· = r/r· + [/. r]r·
≡ χ(/)r
2
· +nr·. mod \
1
n ∈ 1
≡ χ(/)r
2
· +χ(n)r· mod \
1
= χ(/)r
2
· mod \
2
.
.
.
.
.
.
/r
i
· ≡ χ(/)r
i
· mod \
i
.
Thus \
n
is invariant under i and for each / ∈ i, tr
Vn
/ = nχ(/). In particular
both r and / leave \
n
invariant and tr
Vn
[r. /] = 0 since the trace of any
commutator is zero. This proves the lemma.
Proof of theorem by induction on dim g, which we may assume to be positive.
Let m be any subspace of g with g ⊃ m⊃ [g. g]. Then [g. m] ⊂ [g. g] ⊂ m so m
is an ideal in g. In particular, we may choose m to be a subspace of codimension
1 containing [g. g]. By induction we can ﬁnd a · ∈ \ and a χ : m → / such
that (4.1) holds for all elements of m. Let
\ := ¦n ∈ \ [/n = χ(/)n ∀ / ∈ m¦.
If r ∈ g, then
/rn = r/n −[r. /]n = χ(/)rn −χ([r. /])n = χ(/)rn
since χ([r. /]) = 0 by the lemma. Thus \ is stable under all of g. Pick
r ∈ g. r ∈ m, and let · ∈ \ be an eigenvector of r with eigenvalue λ, say.
Then · is a simultaneous eigenvector for all of g with χ extended as
χ(/ +:r) = χ(/) +:λ. QED
We had to divide by n in the above argument. In fact, the theorem is not
true over a ﬁeld of characteristic 2, with :(2) as a counterexample.
Applied to the adjoint representation, Lie’s theorem says that there is a ﬂag
of ideals with commutative quotients, and hence [g. g] is nilpotent.
4.3 Linear algebra
Let \ be a ﬁnite dimensional vector space over an algebraically closed ﬁeld of
characteristic zero, and let
det(T1 −n) =
¸
(T −λ
i
)
mi
be the factorization of its characteristic polynomial where the λ
i
are distinct.
Let o(T) be any polynomial satisfying
o(T) ≡ λ
i
mod (T −λ
i
)
mi
. o(T) ≡ 0 mod T.
which is possible by the Chinese remainder theorem. For each i let \
i
:= the
kernel of (n − λ
i
)
mi
. Then \ =
¸
\
i
and on \
i
, the operator o(n) is just the
scalar operator λ
i
1. In particular : = o(n) is semisimple (its eigenvectors span
\ ) and, since : is a polynomial in n it commutes with n. So
n = : +n
where
n = ·(n). ·(T) = T −o(T)
is nilpotent. Also
n: = :n.
We claim that these two elements are uniquely determined by
n = : +n. :n = n:.
with : semisimple and n nilpotent. Indeed, since :n = n:. :n = n: so :(n −
λ
i
)
k
= (n−λ
i
)
k
: so :\
i
⊂ \
i
. Since :−n is nilpotent, : has the same eigenvalues
on \
i
as n does, i.e. λ
i
. So : and hence n is uniquely determined.
If 1(T) is any polynomial with vanishing constant term, then if ¹ ⊂ 1
are subspaces with n1 ⊂ ¹ then 1(n)1 ⊂ ¹. So, in particular, :1 ⊂ ¹ and
n1 ⊂ ¹.
Deﬁne
\
p,q
:= \ ⊗\ ⊗ ⊗\ ⊗\
∗
⊗ ⊗\
∗
with j copies of \ and c copies of \
∗
. Let n ∈ End(\ ) act on \
∗
by −n
∗
and
on \
pq
by derivation, so , for example,
n
12
= n ⊗1 ⊗1 −1 ⊗n
∗
⊗1 −1 ⊗1 ⊗n
∗
.
Similarly, n
11
acts on \
1,1
= \ ⊗\
∗
by
n
11
(r ⊗/) = nr ⊗/ −r ⊗n
∗
/.
Under the identiﬁcation of \ ⊗\
∗
with End(\ ), the element r⊗/ acts on n ∈ \
by sending it into
/(n)r.
So the element n
11
(r ⊗/) sends n to
/(n)n(r) −(n
∗
/)(n)r = /(n)n(r) −/(n(n))r.
This is the same as the commutator of the operator n with the operator (cor
responding to) r ⊗ / acting on n. In other words, under the identiﬁcation of
\ ⊗\
∗
with End(\ ), the linear transformation n
11
gets identiﬁed with ad n.
Proposition 2 If n = : + n is the decomposition of n then n
pq
= :
pq
+ n
pq
is
the decomposition of n
pq
.
Proof. [:
pq
. n
pq
] = 0 and the tensor products of an eigenbasis for : is an
eigenbasis for :
pq
. Also n
pq
is a sum of commuting nilpotents hence nilpotent.
The map n →n
pq
is linear hence n
pq
= :
pq
+n
pq
. QED
If φ : / → / is a map, we deﬁne φ(:) by φ(:)
Vi
= φ(λ
i
). If we choose a
polynomial such that 1(0) = 0. 1(λ
i
) = φ(λ
i
) then 1(n) = φ(:).
Proposition 3 Suppose that φ is additive. Then
(φ(:))
pq
= φ(:
pq
).
Proof. Decompose \
pq
into a sum of tensor products of the \
i
or \
∗
j
. On each
such space we have
φ(:
p,q
) = φ(λ
i1
+ − )
= φ(λ
i1
) +φ(..)..
= (φ(:))
p,q
where the middle equation is just the additivity. QED
As an immediate consequence we obtain
Proposition 4 Notation as above. If ¹ ⊂ 1 ⊂ \
p,q
with n
pq
1 ⊂ ¹ then for
any additive map, φ(:)
pq
1 ⊂ ¹
Proposition 5 (over C) Let n = : +n as above. If tr(nφ(:)) = 0 for φ(:) = :
then n is nilpotent.
Proof. tr nφ(:) =
¸
:
i
λ
i
λ
i
=
¸
:
i
[λ
i
[
2
. So the condition implies that all the
λ
i
= 0. QED
4.4 Cartan’s criterion.
Let g ⊂ End(\ ) be a Lie subalgebra where \ is ﬁnite dimensional vector space
over C. Then
g is solvable ⇔ tr(rn) = 0 ∀r ∈ g. n ∈ [g. g].
Proof. Suppose g is solvable. Choose a basis for which g is upper triangular.
Then every n ∈ [g. g] has zeros on the diagonal, Hence tr(rn) = 0. For the
reverse implication, it is enough to show that [g. g] is nilpotent, and, by Engel,
that each n ∈ [g. g] is nilpotent. So it is enough to show that tr n: = 0, where
: is the semisimple part of n, by Proposition 5 above. If it were true that : ∈ g
we would be done, but this need not be so. Write
n =
¸
[r
i
. n
i
].
Now for o. /. c ∈ End(\ )
tr([o. /]c) = tr(o/c −/oc)
= tr(/co −/oc)
= tr(/[c. o]) so
tr(n:) =
¸
tr([r
i
. n
i
]:)
=
¸
tr(n
i
[:. r
i
]).
So it is enough to show that ad : : g → [g. g]. We know that ad n : g → [g. g],
and we can, by Lagrange interpolation, ﬁnd a polynomial 1 such that 1(n) = :.
The result now follows from Prop. 4:
Since End(\ ) ∼ \
1,1
, take ¹ = [g. g] and 1 = g. Then ad n = n
1,1
so
n
1,1
g ⊂ [g. g[ and hence :
1,1
g ⊂ [g. g] or [:. r] ∈ [g. g] ∀r ∈ g. QED
4.5 Radical.
If i is an ideal of g and g´i is solvable, then 1
(n)
(g´i) = 0 implies that 1
(n)
g ⊂ i.
If i itself is solvable with 1
(m)
i = 0, then 1
(m+n)
g = 0. So we have proved:
Proposition 6 If i ⊂ g is an ideal, and both i and g´i are solvable, so is g.
If i and j are solvable ideals, then (i + j)´j ∼ i´(i ∩ j) is solvable, being the
homomorphic image of a solvable algebra. So, by the previous proposition:
Proposition 7 If i and j are solvable ideals in g so is i +j. In particular, every
Lie algebra g has a largest solvable ideal which contains all other solvable ideals.
It is denoted by rad g or simply by r when g is ﬁxed.
An algebra g is called semisimple if rad g = 0. Since 1i is an ideal
whenever i is (by Jacobi’s identity), if r = 0 then the last nonzero 1
(n)
r is
an abelian ideal. So an equivalent deﬁnition is: g is semisimple if it has no
nonzero abelian ideals.
We shall call a Lie algebra simple if it is not abelian and if it has no proper
ideals. We shall show in the next section that every semisimple Lie algebra is
the direct sum of simple Lie algebras in a unique way.
4.6 The Killing form.
A bilinear form ( . ) : g g →/ is called invariant if
([r. n]. .) + (n. [r. .]) = 0 ∀r. n. . ∈ g. (4.2)
Notice that if ( . ) is an invariant form, and i is an ideal, then i
⊥
is again an
ideal.
One way of producing invariant forms is from representations: if(ρ. \ ) is a
representation of g, then
(r. n)
ρ
:= tr ρ(r)ρ(n)
is invariant. Indeed,
([r. n]. .)
ρ
+ (n. [r. .])
ρ
= tr¦(ρ(r)ρ(n) −ρ(n)ρ(r))ρ(.)) +ρ(n)(ρ(r)ρ(.) −ρ(.)ρ(r))¦
= tr¦ρ(r)ρ(n)ρ(.) −ρ(n)ρ(.)ρ(r)¦
= 0.
In particular, if we take ρ = ad. \ = g the corresponding bilinear form is
called the Killing form and will be denoted by ( . )
κ
. We will also sometimes
write κ(r. n) instead of (r. n)
κ
.
Theorem 8 g is semisimple if and only if its Killing form is nondegenerate.
Proof. Suppose g is not semisimple and so has a nonzero abelian ideal, a.
We will show that (r. n)
κ
= 0 ∀r ∈ a. n ∈ g. Indeed, let σ = ad rad n. Then
σ maps g → a and a → 0. Hence in terms of a basis starting with elements of
a and extending, it (is upper triangular and) has 0 along the diagonal. Hence
tr σ = 0. Hence if g is not semisimple then its Killing form is degenerate.
Conversely, suppose that g is semisimple. We wish to show that the Killing
form is nondegenerate. So let u := g
⊥
= ¦r[ tr ad rad n = 0 ∀n ∈ g¦. If
r ∈ u. . ∈ g then
tr¦ad[r. .] ad n¦ = tr¦ad rad . ad n −ad . ad rad n)¦
= tr¦ad r(ad . ad n −ad n ad .)¦
= tr ad rad[.. n]
= 0.
so u is an ideal. In particular, tr
u
(ad r
u
ad n
u
) = tr
g
(ad
g
rad
g
n) for r. n ∈ u, as
can be seen from a block decomposition starting with a basis of u and extending
to g.
If we take n ∈ 1u, we see that tr ad u1ad u = 0, so ad u is solvable by
Cartan’s criterion. But the kernel of the map u →ad u is the center of u. So if
ad u is solvable, so is u. QED
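For sl(2) the theorem is easy to check concretely (an added illustration, assuming NumPy): in the basis $h, e, f$ the Killing form has matrix $\begin{pmatrix}8&0&0\\0&0&4\\0&4&0\end{pmatrix}$, whose determinant is $-128\neq 0$, so the form is nondegenerate.

```python
import numpy as np

h = np.array([[1, 0], [0, -1]]); e = np.array([[0, 1], [0, 0]]); f = np.array([[0, 0], [1, 0]])
basis = [h, e, f]

def br(a, b):
    return a @ b - b @ a

def coords(x):
    """Coordinates of a traceless 2x2 matrix in the basis h, e, f."""
    return np.array([x[0, 0], x[0, 1], x[1, 0]], dtype=float)

def ad(x):
    return np.column_stack([coords(br(x, b)) for b in basis])

K = np.array([[np.trace(ad(a) @ ad(b)) for b in basis] for a in basis])
print(K)                       # [[8,0,0],[0,0,4],[0,4,0]]
print(np.linalg.det(K))        # -128.0: nonzero, so the Killing form is nondegenerate
```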
Proposition 8 Let g be a semisimple algebra, i any ideal of g, and i
⊥
its
orthocomplement with respect to its Killing form. Then i ∩ i
⊥
= 0.
Indeed, i ∩i
⊥
is an ideal on which t: ad rad n ≡ 0 hence is solvable by Cartan’s
criterion. Since g is semisimple, there are no nontrivial solvable ideals. QED
Therefore
Proposition 9 Every semisimple Lie algebra is the direct sum of simple Lie
algebras.
Proposition 10 1g = g for a semisimple Lie algebra.
(Since this is true for each simple component.)
Proposition 11 Let φ : g →s be a surjective homomorphism of a semisimple
Lie algebra onto a simple Lie algebra. Then if g =
¸
g
i
is a decomposition of
g into simple ideals, the restriction, φ
i
of φ to each summand is zero, except for
one summand where it is an isomorphism.
Proof. Since s is simple, the image of every φ
i
is 0 or all of s. If φ
i
is surjective
for some i then it is an isomorphism since g
i
is simple. There is at least one i
for which it is surjective since φ is surjective. On the other hand, it can not be
surjective for for two ideals, g
i
. g
j
i = , for then φ[g
i
. g
j
] = 0 = [s. s] = s. QED
4.7 Complete reducibility.
The basic theorem is
Theorem 9 [Weyl.] Every ﬁnite dimensional representation of a semisimple
Lie algebra is completely reducible.
Proof.
1. If ρ : g → End \ is injective, then the form ( . )
ρ
is nondegenerate.
Indeed, the ideal consisting of all r such that (r. n)
ρ
= 0 ∀n ∈ g is solvable
by Cartan’s criterion, hence 0.
2. The Casimir operator. Let (c
i
) and (1
i
) be bases of g which are dual
with respect to some nondegenerate invariant bilinear form, (. ). So
(c
i
. 1
j
) = δ
ij
. As the form is nondegenerate and invariant, it deﬁnes
a map of
g ⊗g →End g; r ⊗n(n) = (n. n)r.
This map is an isomorphism and is a g morphism. Under this map,
¸
c
i
⊗1
i
(n) =
¸
(n. 1
i
)c
i
= n
by the deﬁnition of dual bases. Hence under the inverse map
End g →g ⊗g
the identity element, id, corresponds to
¸
c
i
⊗1
i
(and so this expression
is independent of the choice of dual bases). Since id is annihilated by
commutator by any element of End(g), we conclude that
¸
i
c
i
⊗ 1
i
is
annihilated by the action of all (ad r)
2
= ad r ⊗ 1 + 1 ⊗ ad r. r ∈ g.
Indeed, for r. c. 1. n ∈ g we have
((ad r)
2
(c ⊗1)) n = (ad rc ⊗1 +c ⊗ad r1) n
= (1. n)[r. c] + ([r. 1]. n)c
= (1. n)[r. c] −(1. [r. n])c by (4.2)
= ((ad r)(c ⊗1) −(c ⊗1)(ad r)) n.
Set
C :=
¸
i
c
i
1
i
∈ l(1). (4.3)
Thus C is the image of the element
¸
i
c
i
⊗1
i
under the multiplication map
g⊗g →l(g), and is independent of the choice of dual bases. Furthermore,
C is annihilated by ad r acting on l(g). In other words, it commutes with
all elements of g, and hence with all of l(g); it is in the center of l(g).
The C corresponding to the Killing form is called the Casimir element,
its image in any representation is called the Casimir operator.
3. Suppose that ρ : g → End \ is injective. The (image of the) central
element corresponding to ( . )
ρ
deﬁnes an element of End \ denoted by
C
ρ
and
tr C
ρ
= tr ρ(
¸
c
i
1
i
)
= tr
¸
ρ(c
i
)ρ(1
i
)
=
¸
i
(c
i
. 1
i
)
= dim g
With these preliminaries, we can state the main proposition:
Proposition 12 Let 0 →\ →\ →/ →0 be an exact sequence of g modules,
where g is semisimple, and the action of g on / is trivial (as it must be). Then
this sequence splits, i.e. there is a line in \ supplementary to \ on which g
acts trivially.
The proof of the proposition and of the theorem is almost identical to the proof
we gave above for the special case of :(2). We will need only one or two
additional arguments. As in the case of :(2), the proposition is a special case
of the theorem we want to prove. But we shall see that it is suﬃcient to prove
the theorem.
Proof of proposition. It is enough to prove the proposition for the case
that \ is an irreducible module. Indeed, if \
1
is a submodule, then by induction
on dim \ we may assume the theorem is known for 0 →\´\
1
→\´\
1
→/ →0
so that there is a one dimensional invariant subspace ` in \´\
1
supplementary
to \´\
1
on which the action is trivial. Let · be the inverse image of ` in \.
By another application of the proposition, this time to the sequence
0 →\
1
→· →` →0
we ﬁnd an invariant line, 1, in · complementary to \
1
. So · = \
1
⊕1. Since
(\´\
1
) = (\´\
1
) ⊕ ` we must have 1 ∩ \ = ¦0¦. But since dim \ = dim
\ + 1, we must have \ = \ ⊕ 1. In other words 1 is a one dimensional
subspace of \ which is complementary to \ .
Next we can reduce to proving the proposition for the case that g acts
faithfully on V. Indeed, let i = the kernel of the action on V. For all x ∈ g we
have, by hypothesis, xW ⊂ V, and for x ∈ i we have xV = 0. Hence Di = [i, i] acts
trivially on W. But i = Di since i is semisimple. Hence i acts trivially on W
and we may pass to g/i. This quotient is again semisimple, since i is a sum of
some of the simple ideals of g.

So we are reduced to the case that V is irreducible and the action, ρ, of g
on V is injective. Then we have an invariant element C_ρ whose image in End W
must map W → V since every element of g does. (We may assume that g ≠ 0.)
On the other hand, C_ρ ≠ 0; indeed its trace is dim g. The restriction of C_ρ to
V can not have a nontrivial kernel, since this would be an invariant subspace.
Hence the restriction of C_ρ to V is an isomorphism. Hence ker C_ρ : W → W is
an invariant line supplementary to V. We have proved the proposition.

Proof of the theorem from the proposition. Let 0 → E′ → E be an exact
sequence of g modules, and we may assume that E′ ≠ 0. We want to find an
invariant complement to E′ in E. Define W to be the subspace of Hom_k(E, E′)
whose elements have restriction to E′ equal to a scalar times the identity, and
let V ⊂ W be the subspace consisting of those linear transformations whose
restriction to E′ is zero. Each of these is a submodule of End(E). We get a sequence

    0 → V → W → k → 0

and hence a complementary line of invariant elements in W. In particular, we
can find an element, T, which is invariant, maps E → E′, and whose restriction
to E′ is nonzero. Then ker T is an invariant complementary subspace. QED
As an illustration of the construction of the Casimir operator consider g = sl(2)
with

    h = [ 1  0 ]    e = [ 0  1 ]    f = [ 0  0 ]
        [ 0 −1 ],       [ 0  0 ],       [ 1  0 ].

Then

    tr(ad h)² = 8,   tr(ad e)(ad f) = 4,

so the dual basis to the basis h, e, f is h/8, f/4, e/4, or, if we divide the metric
by 4, the dual basis is h/2, f, e, and so the Casimir operator C is

    (1/2)h² + ef + fe = (1/2)h² + h + 2fe.

This coincides with the C that we used in Chapter II.
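The numbers above are easy to check by machine. The following short Python
computation (ours, not part of the text) builds ad h, ad e, ad f in the ordered
basis (h, e, f), recovers tr(ad h)² = 8 and tr(ad e)(ad f) = 4, and verifies that
(1/2)h² + ef + fe and (1/2)h² + h + 2fe give the same operator in the defining
representation.

    import numpy as np

    # ad h, ad e, ad f of sl(2) in the ordered basis (h, e, f):
    # [h,e] = 2e, [h,f] = -2f, [e,f] = h.
    ad_h = np.diag([0.0, 2.0, -2.0])
    ad_e = np.array([[0.0, 0.0, 1.0], [-2.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
    ad_f = np.array([[0.0, -1.0, 0.0], [0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
    print(np.trace(ad_h @ ad_h), np.trace(ad_e @ ad_f))   # 8.0  4.0

    # Casimir operator in the defining (2-dimensional) representation.
    h = np.array([[1.0, 0.0], [0.0, -1.0]])
    e = np.array([[0.0, 1.0], [0.0, 0.0]])
    f = np.array([[0.0, 0.0], [1.0, 0.0]])
    C1 = 0.5 * h @ h + e @ f + f @ e
    C2 = 0.5 * h @ h + h + 2 * f @ e
    print(np.allclose(C1, C2))   # True; both equal (3/2) times the identity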
Chapter 5
Conjugacy of Cartan subalgebras.
It is a standard theorem in linear algebra that any unitary matrix can be
diagonalized (by conjugation by unitary matrices). On the other hand, it is easy
to check that the subgroup T ⊂ U(n) consisting of all diagonal unitary matrices is a
maximal commutative subgroup: any matrix which commutes with all diagonal
unitary matrices must itself be diagonal; indeed if A is a diagonal matrix with
distinct entries along the diagonal, any matrix which commutes with A must be
diagonal. Notice that T is a product of circles, i.e. a torus.

This theorem has an immediate generalization to compact Lie groups: Let
G be a compact Lie group, and let T and T′ be two maximal tori. (So T and T′
are connected commutative subgroups (hence necessarily tori) and each is not
strictly contained in a larger connected commutative subgroup.) Then there
exists an element a ∈ G such that aT′a⁻¹ = T. To prove this, choose one
parameter subgroups of T and T′ which are dense in each. That is, choose x
and x′ in the Lie algebra g of G such that the curve t → exp tx is dense in T
and the curve t → exp tx′ is dense in T′. If we could find a ∈ G such that the

    a(exp tx′)a⁻¹ = exp t Ad_a x′

commute with all the exp sx, then a(exp tx′)a⁻¹ would commute with all
elements of T, hence belong to T, and by continuity, aT′a⁻¹ ⊂ T and hence = T.
So we would like to find an a ∈ G such that

    [Ad_a x′, x] = 0.

Put a positive definite scalar product ( , ) on g, the Lie algebra of G, which is
invariant under the adjoint action of G. This is always possible by choosing any
positive definite scalar product and then averaging it over G.

Choose a ∈ G such that (Ad_a x′, x) is a maximum. Let

    w := Ad_a x′.

We wish to show that

    [w, x] = 0.

For any z ∈ g we have

    ([z, w], x) = (d/dt)(Ad_{exp tz} w, x)|_{t=0} = 0

by the maximality. But

    ([z, w], x) = (z, [w, x])

by the invariance of ( , ), hence [w, x] is orthogonal to all of g hence 0. QED

We want to give an algebraic proof of the analogue of this theorem for Lie
algebras over the complex numbers. In contrast to the elementary proof given
above for compact groups, the proof in the general Lie algebra case will be
quite involved, and the flavor of the proof will be quite different for the solvable
and semisimple cases. Nevertheless, some of the ingredients of the above proof
(choosing "generic elements" analogous to the choice of x and x′, for example)
will make their appearance. The proofs in this chapter follow Humphreys.
5.1 Derivations.
Let δ be a derivation of the Lie algebra g. This means that

    δ([u, v]) = [δ(u), v] + [u, δ(v)]   ∀ u, v ∈ g.

Then, for a, b ∈ C,

    (δ − a − b)[u, v]  = [(δ − a)u, v] + [u, (δ − b)v]
    (δ − a − b)²[u, v] = [(δ − a)²u, v] + 2[(δ − a)u, (δ − b)v] + [u, (δ − b)²v]
    (δ − a − b)³[u, v] = [(δ − a)³u, v] + 3[(δ − a)²u, (δ − b)v] + 3[(δ − a)u, (δ − b)²v] + [u, (δ − b)³v]
        ...
    (δ − a − b)^n[u, v] = Σ_k \binom{n}{k} [(δ − a)^k u, (δ − b)^{n−k} v].

Consequences:

• Let g_a = g_a(δ) denote the generalized eigenspace corresponding to the
eigenvalue a, so (δ − a)^k = 0 on g_a for large enough k. Then

    [g_a, g_b] ⊂ g_{a+b}.    (5.1)

• Let s = s(δ) denote the diagonalizable (semisimple) part of δ, so that
s(δ) = a on g_a. Then, for u ∈ g_a, v ∈ g_b,

    s(δ)[u, v] = (a + b)[u, v] = [s(δ)u, v] + [u, s(δ)v],

so s, and hence also n = n(δ), the nilpotent part of δ, are both derivations.

• [δ, ad x] = ad(δx). Indeed, [δ, ad x](u) = δ([x, u]) − [x, δ(u)] = [δ(x), u].
In particular, the space of inner derivations, Inn g, is an ideal in Der g.

• If g is semisimple then Inn g = Der g. Indeed, split off an invariant
complement to Inn g in Der g (possible by Weyl's theorem on complete
reducibility). For any δ in this invariant complement, we must have [δ, ad x] = 0
since [δ, ad x] = ad δx. This says that δx is in the center of g. Hence
δx = 0 ∀ x hence δ = 0.

• Hence any x ∈ g can be uniquely written as x = s + n, s ∈ g, n ∈ g,
where ad s is semisimple and ad n is nilpotent. This is known as the
decomposition into semisimple and nilpotent parts for a semisimple Lie algebra.

• (Back to general g.) Let k be a subalgebra containing g_0(ad x) for some
x ∈ g. Then x belongs to g_0(ad x), hence to k, hence ad x preserves N_g(k)
(by Jacobi's identity). We have

    x ∈ g_0(ad x) ⊂ k ⊂ N_g(k) ⊂ g,

all of these subspaces being invariant under ad x. Therefore, the characteristic
polynomial of ad x restricted to N_g(k) is a factor of the characteristic
polynomial of ad x acting on g. But all the zeros of this characteristic
polynomial are accounted for by the generalized zero eigenspace g_0(ad x),
which is a subspace of k. This means that ad x acts on N_g(k)/k without
zero eigenvalue.

On the other hand, ad x acts trivially on this quotient space since x ∈ k
and hence [N_g(k), x] ⊂ k by the definition of the normalizer. Hence

    N_g(k) = k.    (5.2)
We now come to the key lemma.

Lemma 3 Let k ⊂ g be a subalgebra. Let z ∈ k be such that g_0(ad z) does not
strictly contain any g_0(ad x), x ∈ k. Suppose that

    k ⊂ g_0(ad z).

Then

    g_0(ad z) ⊂ g_0(ad y)   ∀ y ∈ k.

Proof. Choose z as in the lemma, and let x be an arbitrary element of k.
By hypothesis, x ∈ g_0(ad z) and we know that [g_0(ad z), g_0(ad z)] ⊂ g_0(ad z).
Therefore [x, g_0(ad z)] ⊂ g_0(ad z) and hence

    ad(z + cx) g_0(ad z) ⊂ g_0(ad z)

for all constants c. Thus ad(z + cx) acts on the quotient space g/g_0(ad z). We
can factor the characteristic polynomial of ad(z + cx) acting on g as

    P_{ad(z+cx)}(T) = f(T, c) g(T, c)

where f is the characteristic polynomial of ad(z + cx) on g_0(ad z) and g is the
characteristic polynomial of ad(z + cx) on g/g_0(ad z). Write

    f(T, c) = T^r + f_1(c)T^{r−1} + ··· + f_r(c),         r := dim g_0(ad z),
    g(T, c) = T^{n−r} + g_1(c)T^{n−r−1} + ··· + g_{n−r}(c),   n := dim g.

The f_i and the g_i are polynomials of degree at most i in c. Since 0 is not an
eigenvalue of ad z on g/g_0(ad z), we see that g_{n−r}(0) ≠ 0. So we can find r + 1
values of c for which g_{n−r}(c) ≠ 0, and hence for these values,

    g_0(ad(z + cx)) ⊂ g_0(ad z).

By the minimality, this forces

    g_0(ad(z + cx)) = g_0(ad z)

for these values of c. This means that f(T, c) = T^r for these values of c, so each
of the polynomials f_1, . . . , f_r has r + 1 distinct roots, and hence is identically
zero. Hence

    g_0(ad(z + cx)) ⊃ g_0(ad z)

for all c. Take c = 1, x = y − z to conclude the truth of the lemma. QED
5.2 Cartan subalgebras.
A Cartan subalgebra (CSA) is deﬁned to be a nilpotent subalgebra which is its
own normalizer. A Borel subalgebra (BSA) is deﬁned to be a maximal solvable
subalgebra. The goal is to prove
Theorem 10 Any two CSA’s are conjugate. Any two BSA’s are conjugate.
Here the word conjugate means the following: Define

    N(g) = { x | ∃ y ∈ g, a ≠ 0, with x ∈ g_a(ad y) }.

Notice that every element of N(g) is ad nilpotent and that N(g) is stable
under Aut(g). As any x ∈ N(g) is nilpotent, exp ad x is well defined as an
automorphism of g, and we let

    E(g)

denote the group generated by these elements. It is a normal subgroup of the
group of automorphisms. Conjugacy means that there is a φ ∈ E(g) with
φ(h_1) = h_2 where h_1 and h_2 are CSA's. Similarly for BSA's.
As a first step we give an alternative characterization of a CSA.

Proposition 13 h is a CSA if and only if h = g_0(ad z) where g_0(ad z) contains
no proper subalgebra of the form g_0(ad x).

Proof. Suppose h = g_0(ad z) which is minimal in the sense of the proposition.
Then we know by (5.2) that h is its own normalizer. Also, by the lemma,
h ⊂ g_0(ad x) ∀ x ∈ h. Hence ad x acts nilpotently on h for all x ∈ h. Hence, by
Engel's theorem, h is nilpotent and hence is a CSA.

Suppose that h is a CSA. Since h is nilpotent, we have h ⊂ g_0(ad x) ∀ x ∈
h. Choose a minimal z. By the lemma,

    g_0(ad z) ⊂ g_0(ad x)   ∀ x ∈ h.

Thus h acts nilpotently on g_0(ad z)/h. If this space were not zero, we could
find a nonzero common eigenvector with eigenvalue zero by Engel's theorem.
This means that there is a y ∉ h with [y, h] ⊂ h, contradicting the fact that h is
its own normalizer. QED

Lemma 4 If φ : g → g′ is a surjective homomorphism and h is a CSA of g
then φ(h) is a CSA of g′.

Clearly φ(h) is nilpotent. Let k = Ker φ and identify g′ = g/k so φ(h) = h + k.
If x + k normalizes h + k then x normalizes h + k. But h = g_0(ad z) for some
minimal such z, and as an algebra containing a g_0(ad z), h + k is self-normalizing.
So x ∈ h + k. QED

Lemma 5 Let φ : g → g′ be surjective, as above, and h′ a CSA of g′. Any CSA
h of m := φ⁻¹(h′) is a CSA of g.

h is nilpotent by assumption. We must show it is its own normalizer in g. By
the preceding lemma, φ(h) is a Cartan subalgebra of h′. But φ(h) is nilpotent
and hence would have a common eigenvector with eigenvalue zero in h′/φ(h),
contradicting the self-normalizing property of φ(h), unless φ(h) = h′. So φ(h) =
h′. If x ∈ g normalizes h, then φ(x) normalizes h′. Hence φ(x) ∈ h′ so x ∈ m
so x ∈ h. QED
5.3 Solvable case.
In this case a Borel subalgebra is all of g so we must prove conjugacy for CSA's.
In case g is nilpotent, we know that any CSA is all of g, since g = g_0(ad z) for
any z ∈ g. So we may proceed by induction on dim g. Let h_1 and h_2 be Cartan
subalgebras of g. We want to show that they are conjugate. Choose an abelian
ideal a of smallest possible positive dimension and let g′ = g/a. By Lemma 4
the images h′_1 and h′_2 of h_1 and h_2 in g′ are CSA's of g′ and hence there is
a σ′ ∈ E(g′) with σ′(h′_1) = h′_2. We claim that we can lift this to a σ ∈ E(g).
That is, we claim

Lemma 6 Let φ : g → g′ be a surjective homomorphism. If σ′ ∈ E(g′) then
there exists a σ ∈ E(g) such that φ ∘ σ = σ′ ∘ φ, i.e. the diagram

    g ──φ──→ g′
    │σ        │σ′
    ↓         ↓
    g ──φ──→ g′

commutes.

Proof of the lemma. It is enough to prove this on generators. Suppose that
x′ ∈ g′_a(y′) and choose y ∈ g, φ(y) = y′, so φ(g_a(y)) = g′_a(y′), and hence we
can find an x ∈ N(g) mapping onto x′. Then exp ad x is the desired σ in the
above diagram if σ′ = exp ad x′. QED

Back to the proof of the conjugacy theorem in the solvable case. Let m_1 :=
φ⁻¹(h′_1), m_2 := φ⁻¹(h′_2). We have a σ with σ(m_1) = m_2 so σ(h_1) and h_2 are
both CSA's of m_2. If m_2 ≠ g we are done by induction. So the one new case
is where

    g = a + h_1 = a + h_2.
Write

    h_2 = g_0(ad x)

for some x ∈ g. Since a is an ideal, it is stable under ad x and we can split it
into its 0 and nonzero generalized eigenspaces:

    a = a_0(ad x) ⊕ a_*(ad x).

Since a is abelian, ad of every element of a acts trivially on each summand, and
since h_2 = g_0(ad x) and a is an ideal, this decomposition is stable under h_2,
hence under all of g. By our choice of a as a minimal abelian ideal, one or the
other of these summands must vanish. If a = a_0(ad x) we would have a ⊂ h_2
so g = h_2 and g is nilpotent. There is nothing to prove. So the only case to
consider is a = a_*(ad x). Since h_2 = g_0(ad x) we have

    a = g_*(ad x).

Since g = h_1 + a, write

    x = y + z,   y ∈ h_1,   z ∈ g_*(ad x).

Since ad x is invertible on g_*(ad x), write z = [x, z′], z′ ∈ a_*(ad x). Since a is
an abelian ideal, (ad z′)² = 0, so exp(ad z′) = 1 + ad z′. So

    exp(ad z′) x = x − z = y.

So h := g_0(ad y) is a CSA (of g), and since y ∈ h_1 we have h_1 ⊂ g_0(ad y) = h
and hence h_1 = h. So exp ad z′ conjugates h_2 into h_1. Writing z′ as a sum of
its generalized eigencomponents, and using the fact that all the elements of a
commute, we can write the exponential as a product of the exponentials of the
summands. QED
5.4 Toral subalgebras and Cartan subalgebras.
The strategy is now to show that any two BSA's of an arbitrary Lie algebra are
conjugate. Any CSA is nilpotent, hence solvable, hence contained in a BSA.
This reduces the proof of the conjugacy theorem for CSA's to that of BSA's,
as we know the conjugacy of CSA's in a solvable algebra. Since the radical is
contained in any BSA, it is enough to prove this theorem for semisimple Lie
algebras. So for this section the Lie algebra g will be assumed to be semisimple.
Since g does not consist entirely of ad nilpotent elements, it contains some x
which is not ad nilpotent, and the semisimple part x_s of x is a nonzero ad
semisimple element of g. A subalgebra consisting entirely of semisimple elements
is called toral; for example, the line through x_s.

Lemma 7 Any toral subalgebra t is abelian.

Proof. The elements ad x, x ∈ t, can each be diagonalized. We must
show that ad x has no eigenvectors with nonzero eigenvalues in t. Let y be an
eigenvector, so [x, y] = ay. Then (ad y)x = −ay is a zero eigenvector of ad y,
which is impossible unless ay = 0, since ad y annihilates all its zero eigenvectors
and is invertible on the subspace spanned by the eigenvectors corresponding to
nonzero eigenvalues. QED

One of the consequences of the considerations in this section will be:

Theorem 11 A subalgebra h of a semisimple Lie algebra g is a CSA if and
only if it is a maximal toral subalgebra.

To prove this we want to develop some of the theory of roots. So fix a
maximal toral subalgebra h. Decompose g into simultaneous eigenspaces

    g = C_g(h) ⊕ ⨁ g_α(h)

where

    C_g(h) := { x ∈ g | [h, x] = 0 ∀ h ∈ h }

is the centralizer of h, where α ranges over nonzero linear functions on h and

    g_α(h) := { x ∈ g | [h, x] = α(h)x ∀ h ∈ h }.

As h will be fixed for most of the discussion, we will drop the (h) and write

    g = g_0 ⊕ ⨁ g_α

where g_0 = C_g(h). We have
• [g_α, g_β] ⊂ g_{α+β} (by Jacobi), so

• ad x is nilpotent if x ∈ g_α, α ≠ 0.

• If α + β ≠ 0 then κ(x, y) = 0 ∀ x ∈ g_α, y ∈ g_β.

The last item follows by choosing an h ∈ h with α(h) + β(h) ≠ 0. Then
0 = κ([h, x], y) + κ(x, [h, y]) = (α(h) + β(h))κ(x, y), so κ(x, y) = 0. This implies
that g_0 is orthogonal to all the g_α, α ≠ 0, and hence the nondegeneracy of κ
implies that

Proposition 14 The restriction of κ to g_0 × g_0 is nondegenerate.

Our next intermediate step is to prove:

Proposition 15

    h = g_0    (5.3)

if h is a maximal toral subalgebra.

Proceed according to the following steps:

    x ∈ g_0 ⇒ x_s ∈ g_0, x_n ∈ g_0.    (5.4)

Indeed, x ∈ g_0 ⇔ ad x : h → 0, and then ad x_s, ad x_n also map h → 0.

    x ∈ g_0, x semisimple ⇒ x ∈ h.    (5.5)

Indeed, such an x commutes with all of h. As the sum of commuting semisimple
transformations is again semisimple, we conclude that h + Cx is a toral
subalgebra. By maximality it must coincide with h.

We now show that

Lemma 8 The restriction of the Killing form κ to h × h is nondegenerate.

So suppose that κ(h, x) = 0 ∀ x ∈ h. This means that κ(h, x) = 0 for all
semisimple x ∈ g_0. Suppose that y ∈ g_0 is nilpotent. Since h commutes with y,
(ad h)(ad y) is again nilpotent, hence has trace zero. Hence κ(h, y) = 0, and
therefore κ(h, x) = 0 ∀ x ∈ g_0. Hence h = 0. QED

Next observe that

Lemma 9 g_0 is a nilpotent Lie algebra.

Indeed, all semisimple elements of g_0 commute with all of g_0, since they belong
to h, and a nilpotent element is ad nilpotent on all of g so certainly on g_0.
Finally, any x ∈ g_0 can be written as a sum x_s + x_n of commuting elements
which are ad nilpotent on g_0, hence x is. Thus g_0 consists entirely of ad nilpotent
elements and hence is nilpotent by Engel's theorem. QED
Now suppose that h ∈ h, x, y ∈ g_0. Then

    κ(h, [x, y]) = κ([h, x], y) = κ(0, y) = 0

and hence, by the nondegeneracy of κ on h, we conclude that

Lemma 10

    h ∩ [g_0, g_0] = 0.

We next prove

Lemma 11 g_0 is abelian.

Suppose that [g_0, g_0] ≠ 0. Since g_0 is nilpotent, it has a nonzero center
contained in [g_0, g_0]. Choose a nonzero element z ∈ [g_0, g_0] in this center. It can
not be semisimple for then it would lie in h. So it has a nonzero nilpotent part,
n, which also must lie in the center of g_0, by the B ⊂ A theorem we proved in
our section on linear algebra. But then ad n ad x is nilpotent for any x ∈ g_0
since [x, n] = 0. This implies that κ(n, g_0) = 0 which is impossible. QED

Completion of the proof of (5.3). We know that g_0 is abelian. But then, if
h ≠ g_0, we would find a nonzero nilpotent element in g_0 which commutes with
all of g_0 (proven to be commutative). Hence κ(n, g_0) = 0 which is impossible.
This completes the proof of (5.3). QED

So we have the decomposition

    g = h ⊕ ⨁_{α≠0} g_α

which shows that any maximal toral subalgebra h is a CSA.
Conversely, suppose that h is a CSA. For any x = x_s + x_n ∈ g, g_0(ad x_s) ⊂
g_0(ad x) since x_n is an ad nilpotent element commuting with ad x_s. If we choose
x ∈ h minimal so that h = g_0(ad x), we see that we may replace x by x_s
and write h = g_0(ad x_s). But g_0(ad x_s) contains some maximal toral subalgebra
containing x_s, which is then a Cartan subalgebra contained in h and hence must
coincide with h. This completes the proof of the theorem. QED
5.5 Roots.
We have proved that the restriction of κ to h is nondegenerate. This allows us
to associate to every linear function φ on h the unique element t_φ ∈ h given by

    φ(h) = κ(t_φ, h).

The set of α ∈ h*, α ≠ 0, for which g_α ≠ 0 is called the set of roots and is
denoted by Φ. We have

• Φ spans h*, for otherwise ∃ h ≠ 0 : α(h) = 0 ∀ α ∈ Φ, implying that
[h, g_α] = 0 ∀ α so [h, g] = 0.

• α ∈ Φ ⇒ −α ∈ Φ, for otherwise g_α ⊥ g.

• x ∈ g_α, y ∈ g_{−α}, α ∈ Φ ⇒ [x, y] = κ(x, y)t_α. Indeed,

    κ(h, [x, y]) = κ([h, x], y) = α(h)κ(x, y) = κ(κ(x, y)t_α, h).

• [g_α, g_{−α}] is one dimensional with basis t_α. This follows from the preceding
and the fact that g_α can not be perpendicular to g_{−α}, since otherwise it
would be orthogonal to all of g.

• α(t_α) = κ(t_α, t_α) ≠ 0. Otherwise, choosing x ∈ g_α, y ∈ g_{−α} with κ(x, y) =
1, we get

    [x, y] = t_α,   [t_α, x] = [t_α, y] = 0.

So x, y, t_α span a solvable three dimensional algebra. Acting as ad on g,
it is superdiagonalizable, by Lie's theorem, and hence ad t_α, which is in the
commutator algebra of this subalgebra, is nilpotent. Since it is ad semisimple
by definition of h, it must lie in the center, which is impossible.
• Choose e_α ∈ g_α, f_α ∈ g_{−α} with

    κ(e_α, f_α) = 2 / κ(t_α, t_α).

Set

    h_α := (2 / κ(t_α, t_α)) t_α.

Then e_α, f_α, h_α span a subalgebra isomorphic to sl(2). Call it sl(2)_α. We
shall soon see that this notation is justified, i.e. that g_α is one dimensional
and hence that sl(2)_α is well defined, independent of any "choices" of
e_α, f_α but depends only on α.

• Consider the action of sl(2)_α on the subalgebra m := h ⊕ Σ g_{nα} where n ∈
Z. The zero eigenvectors of h_α consist of h ⊂ m. One of these corresponds
to the adjoint representation of sl(2)_α ⊂ m. The orthocomplement of
h_α ∈ h gives dim h − 1 trivial representations of sl(2)_α. This must exhaust
all the even maximal weight representations, as we have accounted for all
the zero weights of sl(2)_α acting on g. In particular, dim g_α = 1 and no
integer multiple of α other than −α is a root. Now consider the subalgebra
p := h ⊕ Σ g_{cα}, c ∈ C. This is a module for sl(2)_α. Hence all such c's
must be multiples of 1/2. But 1/2 can not occur, since the double of a
root is not a root. Hence the ±α are the only multiples of α which are
roots.
Now consider β ∈ Φ, β ≠ ±α. Let

    k := ⨁_j g_{β+jα}.

Each nonzero summand is one dimensional, and k is an sl(2)_α module. Also
β + iα ≠ 0 for any i, and evaluation on h_α gives β(h_α) + 2i. All weights differ
by multiples of 2 and so k is irreducible. Let q be the maximal integer so that
β + qα ∈ Φ, and r the maximal integer so that β − rα ∈ Φ. Then the entire
string

    β − rα, β − (r − 1)α, . . . , β + qα

are roots, and

    β(h_α) − 2r = −(β(h_α) + 2q)

or

    β(h_α) = r − q ∈ Z.

These integers are called the Cartan integers.

We can transfer the bilinear form κ from h to h* by defining

    (γ, δ) = κ(t_γ, t_δ).

So

    β(h_α) = κ(t_β, h_α) = 2κ(t_β, t_α)/κ(t_α, t_α) = 2(β, α)/(α, α).

So

    2(β, α)/(α, α) = r − q ∈ Z.
Choose a basis α_1, . . . , α_ℓ of h* consisting of roots. This is possible because
the roots span h*. Any root β can be written uniquely as a linear combination

    β = c_1 α_1 + ··· + c_ℓ α_ℓ

where the c_i are complex numbers. We claim that in fact the c_i are rational
numbers. Indeed, taking the scalar product relative to ( , ) of this equation
with the α_i gives the ℓ equations

    (β, α_i) = c_1(α_1, α_i) + ··· + c_ℓ(α_ℓ, α_i).

Multiplying the ith equation by 2/(α_i, α_i) gives a set of ℓ equations for the ℓ
coefficients c_i where all the coefficients are rational numbers, as are the left hand
sides. Solving these equations for the c_i shows that the c_i are rational.

Let E be the real vector space spanned by the α ∈ Φ. Then ( , ) restricts
to a real scalar product on E. Also, for any λ ≠ 0 ∈ E,

    (λ, λ) = κ(t_λ, t_λ) = tr(ad t_λ)² = Σ_{α∈Φ} α(t_λ)² > 0.

So the scalar product ( , ) on E is positive definite. E is a Euclidean space.
In the string of roots, β is q steps down from the top, so q steps up from the
bottom is also a root, so

    β − (r − q)α

is a root, or

    β − (2(β, α)/(α, α)) α ∈ Φ.

But

    β − (2(β, α)/(α, α)) α = s_α(β)

where s_α denotes Euclidean reflection in the hyperplane perpendicular to α. In
other words, for every α ∈ Φ

    s_α : Φ → Φ.    (5.6)

The subgroup of the orthogonal group of E generated by these reflections
is called the Weyl group and is denoted by W. We have thus associated to
every semisimple Lie algebra, and to every choice of Cartan subalgebra, a finite
subgroup of the orthogonal group generated by reflections. (This subgroup
is finite, because all the generating reflections, s_α, and hence the group they
generate, preserve the finite set of all roots, which span the space.) Once we
have completed the proof of the conjugacy theorem for Cartan subalgebras
of a semisimple algebra, then we will know that the Weyl group is determined,
up to isomorphism, by the semisimple algebra, and does not depend on the
choice of Cartan subalgebra.

We define

    ⟨β, α⟩ := 2(β, α)/(α, α).

So

    ⟨β, α⟩ = β(h_α)    (5.7)
           = r − q ∈ Z    (5.8)

and

    s_α(β) = β − ⟨β, α⟩α.    (5.9)
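These formulas are easy to experiment with on a concrete root system. The
following Python sketch (ours, not from the text) realizes the root system of
sl(3) (type A_2) in the plane x + y + z = 0 and checks (5.6)-(5.9): each s_α
permutes Φ and all the Cartan integers ⟨β, α⟩ are integers.

    import itertools
    import numpy as np

    # Simple roots of A_2 in the plane x + y + z = 0 of R^3.
    a1 = np.array([1.0, -1.0, 0.0])
    a2 = np.array([0.0, 1.0, -1.0])
    simple = [a1, a2]
    roots = [a1, a2, a1 + a2, -a1, -a2, -(a1 + a2)]

    def pairing(beta, alpha):
        """Cartan integer <beta, alpha> = 2(beta, alpha)/(alpha, alpha)."""
        return 2 * beta.dot(alpha) / alpha.dot(alpha)

    def reflect(beta, alpha):
        """Reflection s_alpha(beta) = beta - <beta, alpha> alpha, as in (5.9)."""
        return beta - pairing(beta, alpha) * alpha

    for alpha, beta in itertools.product(roots, roots):
        assert any(np.allclose(reflect(beta, alpha), g) for g in roots)   # (5.6)
        assert abs(pairing(beta, alpha) - round(pairing(beta, alpha))) < 1e-12
    print(sorted({round(pairing(b, a)) for a in roots for b in roots}))   # [-2, -1, 0, 1, 2]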
So far, we have defined the reflection s_α purely in terms of the root structure
on E, which is the real subspace of h* generated by the roots. But in fact,
s_α, and hence the entire Weyl group, arises as (an) automorphism(s) of g which
preserve h. Indeed, we know that e_α, f_α, h_α span a subalgebra sl(2)_α isomorphic
to sl(2). Now exp ad e_α and exp ad(−f_α) are elements of E(g). Consider

    τ_α := (exp ad e_α)(exp ad(−f_α))(exp ad e_α) ∈ E(g).    (5.10)

We claim that

Proposition 16 The automorphism τ_α preserves h and on h it is given by

    τ_α(h) = h − α(h)h_α.    (5.11)

In particular, the transformation induced by τ_α on E is s_α.
Proof. It suffices to prove (5.11). If α(h) = 0, then both ad e_α and ad f_α vanish
on h, so τ_α(h) = h and (5.11) is true. Now h_α and ker α span h. So we need
only check (5.11) for h_α, where it says that τ_α(h_α) = −h_α. But we have already
verified this for the algebra sl(2). QED

We can also verify (5.11) directly. We have

    exp(ad e_α)(h) = h − α(h)e_α

for any h ∈ h. Now [f_α, e_α] = −h_α so

    (ad f_α)²(e_α) = [f_α, −h_α] = [h_α, f_α] = −2f_α.

So

    exp(−ad f_α)(exp ad e_α)h = (id − ad f_α + (1/2)(ad f_α)²)(h − α(h)e_α)
        = h − α(h)e_α − α(h)f_α − α(h)h_α + α(h)f_α
        = h − α(h)h_α − α(h)e_α.

If we now apply exp ad e_α to this last expression and use the fact that α(h_α) = 2,
we get the right hand side of (5.11).
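As a numerical sanity check of (5.10)-(5.11) one can compute τ_α on g = sl(2)
itself, where h_α = h. The matrices below are the adjoint matrices in the
ordered basis (h, e, f); this is our own illustration, not part of the text.

    import numpy as np
    from scipy.linalg import expm

    # ad h, ad e, ad f of sl(2) in the ordered basis (h, e, f).
    ad_e = np.array([[0.0, 0.0, 1.0], [-2.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
    ad_f = np.array([[0.0, -1.0, 0.0], [0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])

    tau = expm(ad_e) @ expm(-ad_f) @ expm(ad_e)      # the automorphism (5.10)
    h_vec = np.array([1.0, 0.0, 0.0])                # the element h in coordinates
    print(np.round(tau @ h_vec, 10))                 # (-1, 0, 0): tau(h) = -h, as (5.11) predicts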
5.6 Bases.
∆ ⊂ Φ is called a Base if it is a basis of E (so #∆ = ℓ = dim_R E = dim_C h)
and every β ∈ Φ can be written as Σ_{α∈∆} k_α α, k_α ∈ Z, with either all the
coefficients k_α ≥ 0 or all ≤ 0. Roots are accordingly called positive or negative
and we define the height of a root by

    ht β := Σ_α k_α.

Given a base, we get a partial order on E by defining λ ≻ μ iff λ − μ is a sum of
positive roots or zero. We have

    (α, β) ≤ 0,   α ≠ β ∈ ∆,    (5.12)

since otherwise (α, β) > 0 and

    s_α(β) = β − (2(β, α)/(α, α)) α

is a root with the coefficient of β = 1 > 0 and the coefficient of α < 0, contradicting
the definition which says that roots must have all coefficients nonnegative
or nonpositive.
To construct a base, choose a γ ∈ E with (γ, β) ≠ 0 ∀ β ∈ Φ. Such an element
is called regular. Then every root has positive or negative scalar product with
γ, dividing the set of roots into two subsets:

    Φ = Φ⁺ ∪ Φ⁻,   Φ⁻ = −Φ⁺.

A root β ∈ Φ⁺ is called decomposable if β = β_1 + β_2, β_1, β_2 ∈ Φ⁺, indecomposable
otherwise. Let ∆(γ) consist of the indecomposable elements of Φ⁺(γ).

Theorem 12 ∆(γ) is a base, and every base is of the form ∆(γ) for some γ.

Proof. Every β ∈ Φ⁺ can be written as a nonnegative integer combination of
∆(γ), for otherwise choose one that can not be so written with (γ, β) as small
as possible. In particular, β ∉ ∆(γ), so β is not indecomposable. Write β = β_1 + β_2,
β_i ∈ Φ⁺. Then (γ, β) = (γ, β_1) + (γ, β_2) and hence (γ, β_1) < (γ, β) and
(γ, β_2) < (γ, β). By our choice of β this means β_1 and β_2 are nonnegative
integer combinations of elements of ∆(γ) and hence so is β, contradiction.

Now (5.12) holds for ∆ = ∆(γ), for if not, α − β is a root, so either α − β ∈ Φ⁺,
so α = (α − β) + β is decomposable, or β − α ∈ Φ⁺ and β is decomposable.
This implies that ∆(γ) is linearly independent: for suppose Σ_α c_α α = 0, and
let p_α be the positive coefficients and −q_β the negative ones, so

    Σ_α p_α α = Σ_β q_β β,

all coefficients positive. Let ε be this common vector. Then (ε, ε) = Σ p_α q_β (α, β) ≤
0 so ε = 0, which is impossible unless all the coefficients vanish, since all scalar
products with γ are strictly positive. Since the elements of Φ span E this shows
that ∆(γ) is a basis of E and hence a base.

Now let us show that every base is of the desired form: For any base ∆,
let Φ⁺ = Φ⁺(∆) denote the set of those roots which are nonnegative integral
combinations of the elements of ∆ and let Φ⁻ = Φ⁻(∆) denote the ones which
are nonpositive integral combinations of elements of ∆. Define δ_α, α ∈ ∆, to
be the projection of α onto the orthogonal complement of the space spanned by
the other elements of the base. Then

    (δ_α, α′) = 0, α ≠ α′,   (δ_α, α) = (δ_α, δ_α) > 0,

so γ = Σ r_α δ_α, r_α > 0, satisfies

    (γ, α) > 0 ∀ α ∈ ∆

hence

    Φ⁺(∆) ⊂ Φ⁺(γ)   and   Φ⁻(∆) ⊂ Φ⁻(γ)

hence

    Φ⁺(∆) = Φ⁺(γ)   and   Φ⁻(∆) = Φ⁻(γ).

Since every element of Φ⁺ can be written as a sum of elements of ∆ with
nonnegative integer coefficients, the only indecomposable elements can be the
elements of ∆, so ∆(γ) ⊂ ∆; but then they must be equal since they have the
same cardinality ℓ = dim E. QED
5.7 Weyl chambers.
Define E_β := β^⊥. Then E − ∪ E_β is the union of Weyl chambers, each consisting
of regular γ's with the same Φ⁺. So the Weyl chambers are in one to
one correspondence with the bases, and the Weyl group permutes them.

Fix a base, ∆. Our goal in this section is to prove that the reflections
s_α, α ∈ ∆, generate the Weyl group, W, and that W acts simply transitively
on the Weyl chambers.

Each s_α, α ∈ ∆, sends α → −α. But acting on λ = Σ c_β β, the reflection s_α
does not change the coefficient of any other element of the base. If λ ∈ Φ⁺ and
λ ≠ α, we must have c_β > 0 for some β ≠ α in the base ∆. Then the coefficient
of β in the expansion of s_α(λ) is positive, and hence all its coefficients must be
nonnegative. So s_α(λ) ∈ Φ⁺. In short, the only element of Φ⁺ sent into Φ⁻ is
α. So if

    δ := (1/2) Σ_{β∈Φ⁺} β   then   s_α δ = δ − α.
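Continuing the A_2 sketch given after (5.9) (so a1, a2, reflect and numpy are as
defined there), here is a quick check of the identity s_α δ = δ − α for the simple
roots; this is our own illustration.

    # delta is half the sum of the positive roots a1, a2, a1 + a2.
    delta = 0.5 * (a1 + a2 + (a1 + a2))
    for alpha in simple:
        assert np.allclose(reflect(delta, alpha), delta - alpha)
    print("s_alpha(delta) = delta - alpha holds for every simple root alpha")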
If β ∈ Φ⁺, β ∉ ∆, then we can not have (β, α) ≤ 0 ∀ α ∈ ∆, for then {β} ∪ ∆
would be linearly independent. So β − α is a root for some α ∈ ∆, and since we
have changed only one coefficient, it must be a positive root. Hence any β ∈ Φ⁺
can be written as

    β = α_1 + ··· + α_p,   α_i ∈ ∆,

where all the partial sums are positive roots.
Let γ be any vector in a Euclidean space, and let s_γ denote reflection in the
hyperplane orthogonal to γ. Let R be any orthogonal transformation. Then

    s_{Rγ} = R s_γ R⁻¹    (5.13)

as follows immediately from the definition.

Let α_1, . . . , α_i ∈ ∆, and, for short, let us write s_i := s_{α_i}.

Lemma 12 If s_1 ··· s_{i−1} α_i < 0 then ∃ j < i, j ≥ 1, so that

    s_1 ··· s_i = s_1 ··· s_{j−1} s_{j+1} ··· s_{i−1}.
Proof. Set β_{i−1} := α_i, β_j := s_{j+1} ··· s_{i−1} α_i, j < i − 1. Since β_{i−1} ∈ Φ⁺
and β_0 ∈ Φ⁻ there must be some j for which β_j ∈ Φ⁺ and s_j β_j = β_{j−1} ∈ Φ⁻,
implying that β_j = α_j, so by (5.13) with R = s_{j+1} ··· s_{i−1} we conclude that

    s_j = (s_{j+1} ··· s_{i−1}) s_i (s_{j+1} ··· s_{i−1})⁻¹

or

    s_j s_{j+1} ··· s_i = s_{j+1} ··· s_{i−1}

implying the lemma. QED

As a consequence, if s = s_1 ··· s_t is a shortest expression for s, then, since
s_t α_t ∈ Φ⁻, we must have sα_t ∈ Φ⁻.
Keeping ∆ fixed in the ensuing discussion, we will call the elements of ∆ simple
roots, and the corresponding reflections simple reflections. Let W′ denote
the subgroup of W generated by the simple reflections, s_α, α ∈ ∆. (Eventually
we will prove that this is all of W.) It now follows that if s ∈ W′ and s∆ = ∆
then s = id. Indeed, if s ≠ id, write s in a minimal fashion as a product of
simple reflections. By what we have just proved, it must send some simple root
into a negative root. So W′ permutes the Weyl chambers without fixed points.

We now show that W′ acts transitively on the Weyl chambers:

Let γ ∈ E be a regular element. We claim

    ∃ s ∈ W′ with (s(γ), α) > 0 ∀ α ∈ ∆.

Indeed, choose s ∈ W′ so that (s(γ), δ) is as large as possible. Then

    (s(γ), δ) ≥ (s_α s(γ), δ)
             = (s(γ), s_α δ)
             = (s(γ), δ) − (s(γ), α),   so
    (s(γ), α) ≥ 0 ∀ α ∈ ∆.

We can't have equality in this last inequality since s(γ) is not orthogonal to any
root. This proves that W′ acts transitively on all Weyl chambers and hence on
all bases.
We next claim that every root belongs to at least one base. Choose a (nonregular)
γ′ ⊥ α, but γ′ ∉ E_β, β ≠ α. Then choose γ close enough to γ′ so that
(γ, α) > 0 and (γ, α) < |(γ, β)| ∀ β ≠ α. Then in Φ⁺(γ) the element α must be
indecomposable. If β is any root, we have shown that there is an s′ ∈ W′ with
s′β = α_i ∈ ∆. By (5.13) this implies that every reflection s_β in W is conjugate
by an element of W′ to a simple reflection: s_β = (s′)⁻¹ s_i s′ ∈ W′. Since W is
generated by the s_β, this shows that W′ = W.
5.8 Length.
Define the length ℓ(s) of an element s of W as the minimal word length in its
expression as a product of simple reflections. Define n(s) to be the number of
positive roots made negative by s. We know that n(s) = ℓ(s) if ℓ(s) = 0 or 1.
We claim that

    ℓ(s) = n(s)

in general.

Proof by induction on ℓ(s). Write s = s_1 ··· s_i in reduced form and let
α = α_i. We have sα ∈ Φ⁻. Then n(s s_i) = n(s) − 1 since s_i leaves all positive
roots positive except α. Also ℓ(s s_i) = ℓ(s) − 1. So apply induction. QED
Let C = C(∆) be the Weyl chamber associated to the base ∆. Let C̄ denote
its closure.

Lemma 13 If λ, μ ∈ C̄ and s ∈ W satisfies sλ = μ, then s is a product of
simple reflections which fix λ. In particular, λ = μ. So C̄ is a fundamental
domain for the action of W on E.

Proof. By induction on ℓ(s). If ℓ(s) = 0 then s = id and the assertion is clear
with the empty product. So we may assume that n(s) > 0, so s sends some
positive root to a negative root, and hence must send some simple root to a
negative root. So let α ∈ ∆ be such that sα ∈ Φ⁻. Since μ ∈ C̄, we have
(μ, β) ≥ 0 ∀ β ∈ Φ⁺ and hence (μ, sα) ≤ 0. So

    0 ≥ (μ, sα) = (s⁻¹μ, α) = (λ, α) ≥ 0.

So (λ, α) = 0, so s_α λ = λ, and hence s s_α λ = μ. But n(s s_α) = n(s) − 1 since
s_α(α) = −α and s_α permutes all the other positive roots. So ℓ(s s_α) = ℓ(s) − 1
and we can apply induction to conclude that s = (s s_α)s_α is a product of simple
reflections which fix λ.
5.9 Conjugacy of Borel subalgebras
We need to prove this for semisimple algebras since the radical is contained in
every maximal solvable subalgebra.

Define a standard Borel subalgebra (relative to a choice of CSA h and a
system of simple roots, ∆) to be

    b(∆) := h ⊕ ⨁_{β∈Φ⁺(∆)} g_β.

Define the corresponding nilpotent Lie algebra by

    n⁺(∆) := ⨁_{β∈Φ⁺} g_β.

Since each s_α can be realized as (exp e_α)(exp −f_α)(exp e_α), every element of W
can be realized as an element of E(g). Hence all standard Borel subalgebras
relative to a given Cartan subalgebra are conjugate.

Notice that if x normalizes a Borel subalgebra, b, then

    [b + Cx, b + Cx] ⊂ b

and so b + Cx is a solvable subalgebra containing b and hence must coincide
with b:

    N_g(b) = b.

In particular, if x ∈ b then its semisimple and nilpotent parts lie in b.

From now on, fix a standard BSA, b. We want to prove that any other BSA,
b′, is conjugate to b. We may assume that the theorem is known for Lie algebras
of smaller dimension, or for b″ with b ∩ b″ of greater dimension, since if dim
b ∩ b′ = dim b, so that b′ ⊃ b, we must have b′ = b by maximality. Therefore
we can proceed by downward induction on the dimension of the intersection
b ∩ b′.
Suppose b ∩ b′ ≠ 0. Let n′ be the set of nilpotent elements in b ∩ b′. So

    n′ = n⁺ ∩ b′.

Also [b ∩ b′, b ∩ b′] ⊂ n⁺ ∩ b′ = n′, so n′ is a nilpotent ideal in b ∩ b′.

Suppose that n′ ≠ 0. Then since g contains no solvable ideals,

    k := N_g(n′) ≠ g.

Consider the action of n′ on b/(b ∩ b′). By Engel, there exists a y ∈ b, y ∉ b ∩ b′,
with [x, y] ∈ b ∩ b′ ∀ x ∈ n′. But [x, y] ∈ [b, b] ⊂ n⁺ and so [x, y] ∈ n′. So y ∈ k.
Thus y ∈ k ∩ b, y ∉ b ∩ b′. Similarly, we can interchange the roles of b and
b′ in the above argument, replacing n⁺ by the nilpotent subalgebra [b′, b′] of
b′, to conclude that there exists a y′ ∈ k ∩ b′, y′ ∉ b ∩ b′. In other words, the
inclusions

    k ∩ b ⊃ b ∩ b′,   k ∩ b′ ⊃ b ∩ b′

are strict.
Both b ∩ k and b′ ∩ k are solvable subalgebras of k. Let c, c′ be BSA's of k
containing them. By induction, there is a σ ∈ E(k) ⊂ E(g) with σ(c′) = c. Now
let b″ be a BSA containing c. We have

    b″ ∩ b ⊃ c ∩ b ⊃ k ∩ b ⊃ b′ ∩ b

with the last inclusion strict. So by induction there is a τ ∈ E(g) with τ(b″) = b.
Hence

    τσ(c′) ⊂ b.

Then

    b ∩ τσ(b′) ⊃ τσ(c′) ∩ τσ(b′) ⊃ τσ(b′ ∩ k) ⊃ τσ(b ∩ b′)

with the last inclusion strict. So by induction we can further conjugate τσ b′
into b.
So we must now deal with the case that n′ = 0, but we will still assume
that b ∩ b′ ≠ 0. Since any Borel subalgebra contains both the semisimple and
nilpotent parts of any of its elements, we conclude that b ∩ b′ consists entirely
of semisimple elements, and so is a toral subalgebra, call it t. If x ∈ b, t ∈ t =
b ∩ b′ and [x, t] ∈ t, then we must have [x, t] = 0, since all elements of [b, b] are
nilpotent. So

    N_b(t) = C_b(t).

Let c be a CSA of C_b(t). Since a Cartan subalgebra is its own normalizer, we
have t ⊂ c. So we have

    t ⊂ c ⊂ C_b(t) = N_b(t) ⊂ N_b(c).

Let t ∈ t, y ∈ N_b(c). Then [t, y] ∈ c and successive brackets by t will eventually
yield 0, since c is nilpotent. Thus (ad t)^k y = 0 for some k, and since t is
semisimple, [t, y] = 0. Thus y ∈ C_b(t) and hence y ∈ c since c is its own normalizer
in C_b(t). Thus c is a CSA of b. We can now apply the conjugacy theorem for
CSA's of solvable algebras to conjugate c into h.
So we may assume from now on that t ⊂ h. If t = h, then decomposing
b′ into root spaces under h, we find that the nonzero root spaces must consist
entirely of negative roots, and there must be at least one such, since b′ ≠ h.
But then we can find a τ_α which conjugates this into a positive root, preserving
h, and then τ_α(b′) ∩ b has larger dimension and we can further conjugate into
b.

So we may assume that

    t ⊂ h

is strict.

If

    b′ ⊂ C_g(t)

then since we also have h ⊂ C_g(t), we can find a BSA, b″, of C_g(t) containing h,
and conjugate b′ to b″, since we are assuming that t ≠ 0 and hence C_g(t) ≠ g.
Since b″ ∩ b ⊃ h has bigger dimension than b′ ∩ b, we can further conjugate to
b by the induction hypothesis.
If

    b′ ⊄ C_g(t)

then there is a common nonzero eigenvector for ad t in b′, call it x. So there
is a t′ ∈ t such that [t′, x] = c′x, c′ ≠ 0. Setting

    t := (1/c′) t′

we have [t, x] = x. Let Φ_t ⊂ Φ consist of those roots for which β(t) is a positive
rational number. Then

    s := h ⊕ ⨁_{β∈Φ_t} g_β

is a solvable subalgebra and so lies in a BSA, call it b″. Since t ⊂ b″, x ∈ b″,
we see that b″ ∩ b′ has strictly larger dimension than b ∩ b′. Also b″ ∩ b has
strictly larger dimension than b ∩ b′ since h ⊂ b ∩ b″. So we can conjugate b′
to b″ and then b″ to b.

This leaves only the case b ∩ b′ = 0, which we will show is impossible. Let
t′ be a maximal toral subalgebra of b′. We can not have t′ = 0, for then b′
would consist entirely of nilpotent elements, hence be nilpotent by Engel, and also
self-normalizing, as is every BSA. Hence it would be a CSA, which is impossible
since every CSA in a semisimple Lie algebra is toral. So choose a CSA, h′,
containing t′, and then a standard BSA, b″, containing h′. By the preceding, we
know that b′ is conjugate to b″ and, in particular, has the same dimension as b″.
But the dimension of each standard BSA (relative to any Cartan subalgebra)
is strictly greater than half the dimension of g, contradicting the hypothesis
g ⊃ b ⊕ b′. QED
Chapter 6
The simple finite dimensional algebras.
In this chapter we classify all possible root systems of simple Lie algebras. A
consequence, as we shall see, is the classification of the simple Lie algebras
themselves. The amazing result, due to Killing with some repair work by Élie
Cartan, is that with only five exceptions, the root systems of the classical
algebras that we studied in Chapter III exhaust all possibilities.

The logical structure of this chapter is as follows: We first show that the
root system of a simple Lie algebra is irreducible (definition below). We then
develop some properties of the root structure of an irreducible root system;
in particular we will introduce its extended Cartan matrix. We then use the
Perron-Frobenius theorem to classify all possible such matrices. (For the expert,
this means that we first classify the Dynkin diagrams of the affine algebras of
the simple Lie algebras. Surprisingly, this is simpler and more efficient than
the classification of the diagrams of the finite dimensional simple Lie algebras
themselves.) From the extended diagrams it is an easy matter to get all possible
bases of irreducible root systems. We then develop a few more facts about root
systems which allow us to conclude that an isomorphism of irreducible root
systems implies an isomorphism of the corresponding Lie algebras. We postpone
the proof of the existence of the exceptional Lie algebras until Chapter VIII,
where we prove Serre's theorem which gives a unified presentation of all the
simple Lie algebras in terms of generators and relations derived directly from
the Cartan integers of the simple root system.

Throughout this chapter we will be dealing with semisimple Lie algebras
over the complex numbers.

6.1 Simple Lie algebras and irreducible root systems.
We choose a Cartan subalgebra h of a semisimple Lie algebra g, so we have the
corresponding set Φ of roots and the real (Euclidean) space E that they span.
We say that Φ is irreducible if Φ can not be partitioned into two disjoint
subsets

    Φ = Φ_1 ∪ Φ_2

such that every element of Φ_1 is orthogonal to every element of Φ_2.

Proposition 17 If g is simple then Φ is irreducible.

Proof. Suppose that Φ is not irreducible, so we have a decomposition as above.
If α ∈ Φ_1 and β ∈ Φ_2 then

    (α + β, α) = (α, α) > 0   and   (α + β, β) = (β, β) > 0,

which means that α + β can not belong to either Φ_1 or Φ_2 and so is not a root.
This means that

    [g_α, g_β] = 0.

In other words, the subalgebra g_1 of g generated by all the g_α, α ∈ Φ_1, is
centralized by all the g_β, so g_1 is a proper subalgebra of g, since if g_1 = g this
would say that g has a nonzero center, which is not true for any semisimple
Lie algebra. The above equation also implies that the normalizer of g_1 contains
all the g_γ where γ ranges over all the roots. But these g_γ generate g. So g_1 is
a proper ideal in g, contradicting the assumption that g is simple. QED
Let us choose a base ∆ for the root system Φ of a semisimple Lie algebra.
We say that ∆ is irreducible if we can not partition ∆ into two nonempty
mutually orthogonal sets as in the definition of irreducibility of Φ above.

Proposition 18 Φ is irreducible if and only if ∆ is irreducible.

Proof. Suppose that Φ is not irreducible, so it has a decomposition as above.
This induces a partition of ∆ which is nontrivial unless ∆ is wholly contained
in Φ_1 or Φ_2. If ∆ ⊂ Φ_1, say, then since E is spanned by ∆, this means that all
the elements of Φ_2 are orthogonal to E, which is impossible. So if ∆ is irreducible
so is Φ. Conversely, suppose that

    ∆ = ∆_1 ∪ ∆_2

is a partition of ∆ into two nonempty mutually orthogonal subsets. We have
proved that every root is conjugate to a simple root by an element of the Weyl
group W, which is generated by the simple reflections. Let Φ_1 consist of those
roots which are conjugate to an element of ∆_1 and Φ_2 consist of those roots
which are conjugate to an element of ∆_2. The reflections s_β, β ∈ ∆_2, commute
with the reflections s_α, α ∈ ∆_1, and furthermore

    s_β(α) = α

since (α, β) = 0. So any element of Φ_1 is conjugate to an element of ∆_1 by
an element of the subgroup W_1 generated by the s_α, α ∈ ∆_1. But each such
reflection adds or subtracts α. So Φ_1 is in the subspace E_1 of E spanned by ∆_1,
and so is orthogonal to all the elements of Φ_2. So if Φ is irreducible so is ∆.
QED
We are now into the business of classifying irreducible bases.
6.2 The maximal root and the minimal root.
Suppose that Φ is an irreducible root system and ∆ a base, so irreducible.
Recall that once we have chosen ∆, every root β is an integer combination of
the elements of ∆ with all coefficients nonnegative, or with all coefficients
nonpositive. We write β ≻ 0 in the first case, and β ≺ 0 in the second case. This
defines a partial order on the elements of E by

    μ ≺ λ if and only if λ − μ = Σ_{α∈∆} k_α α,    (6.1)

where the k_α are nonnegative integers. This partial order will prove very
important to us in representation theory.

Also, for any β = Σ k_α α ∈ Φ⁺ we define its height by

    ht β = Σ_α k_α.    (6.2)

Proposition 19 Suppose that Φ is an irreducible root system and ∆ a base.
Then

• There exists a unique β ∈ Φ⁺ which is maximal relative to the ordering ≺.

• This β = Σ k_α α where all the k_α are positive.

• (β, α) ≥ 0 for all α ∈ ∆ and (β, α) > 0 for at least one α ∈ ∆.
Proof. Choose a β = Σ k_α α which is maximal relative to the ordering.
At least one of the k_α > 0. We claim that all the k_α > 0. Indeed, suppose
not. This partitions ∆ into ∆_1, the set of α for which k_α > 0, and ∆_2, the
set of α for which k_α = 0. Now the scalar product of any two distinct simple
roots is ≤ 0. (Recall that this followed from the fact that if (α_1, α_2) > 0,
then s_2(α_1) = α_1 − ⟨α_1, α_2⟩α_2 would be a root whose α_1 coefficient is positive
and whose α_2 coefficient is negative, which is impossible.) In particular, all the
(α_1, α_2) ≤ 0, α_1 ∈ ∆_1, α_2 ∈ ∆_2, and so

    (β, α_2) ≤ 0 ∀ α_2 ∈ ∆_2.

The irreducibility of ∆ implies that (α_1, α_2) ≠ 0 for at least one pair α_1 ∈
∆_1, α_2 ∈ ∆_2. But this scalar product must then be negative. So

    (β, α_2) < 0

and hence

    s_{α_2}β = β − ⟨β, α_2⟩α_2

is a root with

    s_{α_2}β − β ≻ 0,

contradicting the maximality of β. So we have proved that all the k_α are positive.
Furthermore, this same argument shows that (β, α) ≥ 0 for all α ∈ ∆. Since the
elements of ∆ form a basis of E, at least one of the scalar products must not
vanish, and so be positive. We have established the second and third items in
the proposition for any maximal β. We will now show that this maximal weight
is unique.

Suppose there were two, β and β′. Write β′ = Σ k′_α α where all the k′_α > 0.
Then (β, β′) > 0 since (β, α) ≥ 0 for all α and > 0 for at least one. Since s_β β′
is a root, this would imply that β − β′ is a root, unless β = β′. But if β − β′ is
a root, it is either positive or negative, contradicting the maximality of one or
the other. QED
Let us label the elements of ∆ as α_1, . . . , α_ℓ, and let us set

    α_0 := −β

so that α_0 is the minimal root. From the second and third items in the proposition
we know that

    α_0 + k_1α_1 + ··· + k_ℓα_ℓ = 0    (6.3)

and that

    ⟨α_0, α_i⟩ ≤ 0

for all i and < 0 for some i.

Let us take the left hand side (call it γ) of (6.3) and successively compute
⟨γ, α_i⟩, i = 0, 1, . . . , ℓ. We obtain

    (     2        ⟨α_1, α_0⟩   ···   ⟨α_ℓ, α_0⟩   ) (  1  )
    ( ⟨α_0, α_1⟩       2       ···   ⟨α_ℓ, α_1⟩   ) ( k_1 )
    (    ...                   ...       ...      ) ( ... )  = 0.
    ( ⟨α_0, α_ℓ⟩    ···    ⟨α_{ℓ−1}, α_ℓ⟩    2    ) ( k_ℓ )

This means that if we write the matrix on the left of this equation as 2I − A,
then A is a matrix with 0 on the diagonal and whose i, j entry is −⟨α_j, α_i⟩.
So A is a nonnegative matrix with integer entries with the properties

• if A_{ij} ≠ 0 then A_{ji} ≠ 0,

• the diagonal entries of A are 0,

• A is irreducible in the sense that we can not partition the indices into two
nonempty subsets I and J such that A_{ij} = 0 ∀ i ∈ I, j ∈ J, and

• A has an eigenvector of eigenvalue 2 with all its entries positive.
We will show that the PerronFrobenius theorem allows us to classify all
such matrices. From here it is an easy matter to classify all irreducible root
systems and then all simple Lie algebras. For this it is convenient to introduce
the language of graph theory.
6.3 Graphs.
An undirected graph Γ = (N, E) consists of a set N (for us finite) and a
subset E of the set of subsets of N of cardinality two. We call elements of N
"nodes" or "vertices" and the elements of E "edges". If e = {i, j} ∈ E we
say that the "edge" e joins the vertices i and j, or that "i and j are adjacent".
Notice that in this definition our edges are "undirected": {i, j} = {j, i}, and we
do not allow self-loops. An example of a graph is the "cycle" A_ℓ^(1) with ℓ + 1
vertices, so N = {0, 1, 2, . . . , ℓ} with 0 adjacent to ℓ and to 1, with 1 adjacent
to 0 and to 2, etc.

The adjacency matrix A of a graph Γ is the (symmetric) 0−1 matrix
whose rows and columns are indexed by the elements of N and whose i, j-th
entry A_{ij} = 1 if i is adjacent to j and zero otherwise.

For example, the adjacency matrix of the graph A_3^(1) is

    ( 0 1 0 1 )
    ( 1 0 1 0 )
    ( 0 1 0 1 )
    ( 1 0 1 0 ).

We can think of A as follows: Let V be the vector space with basis given by
the nodes, so we can think of the ith coordinate of a vector x ∈ V as assigning
the value x_i to the node i. Then y = Ax assigns to i the sum of the values x_j
summed over all nodes j adjacent to i.

A path of length r is a sequence of nodes x_{i_1}, x_{i_2}, . . . , x_{i_r} where each node
is adjacent to the next. So, for example, the number of paths of length 2 joining
i to j is the i, j-th entry in A² and similarly, the number of paths of length r
joining i to j is the i, j-th entry in A^r. The graph is said to be connected if
there is a path (of some length) joining every pair of vertices. In terms of the
adjacency matrix, this means that for every i and j there is some r such that
the i, j entry of A^r is nonzero. In terms of the theory of nonnegative matrices
(see below) this says that the matrix A is irreducible.

Notice that if 1 denotes the column vector all of whose entries are 1, then 1
is an eigenvector of the adjacency matrix of A_ℓ^(1), with eigenvalue 2, and all the
entries of 1 are positive. In view of the Perron-Frobenius theorem to be stated
below, this implies that 2 is the maximum eigenvalue of this matrix.
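A small numerical illustration of this point (ours, not from the text): the
adjacency matrix of a cycle has the all-ones vector as an eigenvector with
eigenvalue 2, and 2 is indeed its largest eigenvalue.

    import numpy as np

    def cycle_adjacency(n):
        """Adjacency matrix of the cycle on n >= 3 vertices (the graph A_{n-1}^(1))."""
        A = np.zeros((n, n))
        for i in range(n):
            A[i, (i + 1) % n] = 1
            A[i, (i - 1) % n] = 1
        return A

    A = cycle_adjacency(6)
    ones = np.ones(6)
    print(np.allclose(A @ ones, 2 * ones))   # True: eigenvector with eigenvalue 2
    print(np.max(np.linalg.eigvalsh(A)))     # 2.0: the maximal eigenvalue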
We modify the notion of the adjacency matrix as follows: We start with a
connected graph Γ as before, but modify its adjacency matrix by replacing some
of the ones that occur by positive integers a_{ij}. If, in this replacement, a_{ij} > 1,
we redraw the graph so that there is an arrow with a_{ij} lines pointing towards
the node i. For example, the graph labeled A_1^(1) in Table Aff 1 corresponds to
the matrix

    ( 0 2 )
    ( 2 0 )

which clearly has (1, 1)ᵀ as a positive eigenvector with eigenvalue 2.

Similarly, diagram A_2^(2) in Table Aff 2 corresponds to the matrix

    ( 0 4 )
    ( 1 0 )

which has (2, 1)ᵀ as eigenvector with eigenvalue 2. In the diagrams, the coefficient
next to a node gives the coordinate of the eigenvector with eigenvalue 2, and it
is immediate to check from the diagram that this is indeed an eigenvector with
eigenvalue 2. For example, the 2 next to a node with an arrow pointing toward
it in C_ℓ^(1) satisfies 2·2 = 2·1 + 2, etc.

It will follow from the Perron-Frobenius theorem, to be stated and proved
below, that these are the only possible connected diagrams with maximal
eigenvalue two.

All the graphs so far have zeros along the diagonal. If we relax this condition,
and allow for any nonnegative integer on the diagonal, then the only new
possibilities are those given in Figure 4.

Let us call a matrix symmetrizable if A_{ij} ≠ 0 ⇒ A_{ji} ≠ 0. The main result
of this chapter will be to show that the lists in Figures 1-4 exhaust all
irreducible matrices with nonnegative integer entries which are symmetrizable
and have maximum eigenvalue 2.
6.4 Perron-Frobenius.

We say that a real matrix T is nonnegative (or positive) if all the entries
of T are nonnegative (or positive). We write T ≥ 0 or T > 0. We will use
these definitions primarily for square (n × n) matrices and for column vectors
= (n × 1) matrices. We let

    Q := { x ∈ R^n : x ≥ 0, x ≠ 0 }

so Q is the nonnegative "orthant" excluding the origin. Also let

    C := { x ≥ 0 : ||x|| = 1 }.

So C is the intersection of the orthant with the unit sphere.

A nonnegative square matrix T is called primitive if there is a k such that
all the entries of T^k are positive. It is called irreducible if for any i, j there
is a k = k(i, j) such that (T^k)_{ij} > 0. For example, as mentioned above, the
adjacency matrix of a connected graph is irreducible.
[Figure 6.1: Aff 1. The diagrams E_8^(1), E_7^(1), E_6^(1), F_4^(1), G_2^(1), D_ℓ^(1) (ℓ ≥ 4), C_ℓ^(1) (ℓ ≥ 2), B_ℓ^(1) (ℓ ≥ 3), A_ℓ^(1) (ℓ ≥ 2), and A_1^(1). The integer printed next to each node is the corresponding coordinate of the eigenvector with eigenvalue 2.]
[Figure 6.2: Aff 2. The diagrams E_6^(2), D_{ℓ+1}^(2) (ℓ ≥ 2), A_{2ℓ−1}^(2) (ℓ ≥ 3), A_{2ℓ}^(2) (ℓ ≥ 2), and A_2^(2), with the eigenvector of eigenvalue 2 indicated at each node.]
[Figure 6.3: Aff 3. The diagram D_4^(3), with node labels 1, 2, 1.]
[Figure 6.4: Loops allowed. The additional connected diagrams with maximal eigenvalue 2 that arise when nonzero diagonal entries (self-loops) are permitted, again with the eigenvector of eigenvalue 2 indicated at each node.]
If T is irreducible then 1 +T is primitive.
In this section we will assume that T is nonnegative and irreducible.
Theorem 13 (Perron-Frobenius.)

1. T has a positive (real) eigenvalue λ_max such that all other eigenvalues of
T satisfy

    |λ| ≤ λ_max.

2. Furthermore λ_max has algebraic and geometric multiplicity one, and has
an eigenvector x with x > 0.

3. Any nonnegative eigenvector is a multiple of x.

4. More generally, if y ≥ 0, y ≠ 0 is a vector and μ is a number such that

    Ty ≤ μy

then

    y > 0, and μ ≥ λ_max,

with μ = λ_max if and only if y is a multiple of x.

5. If 0 ≤ S ≤ T, S ≠ T, then every eigenvalue σ of S satisfies |σ| < λ_max.

6. In particular, all the diagonal minors T_(i) obtained from T by deleting
the ith row and column have eigenvalues all of which have absolute value
< λ_max.

We will present a proof of this theorem after first showing how it classifies
the possible connected diagrams with maximal eigenvalue two. But first let us
clarify the meaning of the last two assertions of the theorem. The matrix T_(i) is
usually thought of as an (n−1) × (n−1) matrix obtained by "striking out" the
ith row and column. But we can also consider the matrix T_i obtained from T
by replacing the ith row and column by all zeros. If x is an n-vector which is
an eigenvector of T_i, then the (n−1)-vector y obtained from x by omitting the
ith entry of x is then an eigenvector of T_(i) with the same eigenvalue (unless
the vector x only had nonzero entries in the ith position). Conversely, if y is
an eigenvector of T_(i), then inserting 0 at the ith position will give an n-vector
which is an eigenvector of T_i with the same eigenvalue as that of y.

More generally, suppose that S is obtained from T by replacing a certain
number of rows and the corresponding columns by all zeros. Then we may apply
item 5) of the theorem to this n × n matrix, S, or to the "compressed version" of
S obtained by eliminating all these rows and columns.
We will want to apply this to the following special case. A subgraph Γ′
of a graph Γ is the graph obtained by eliminating some nodes, and all edges
emanating from these nodes. Thus, if A is the adjacency matrix of Γ and A′
is the adjacency matrix of Γ′, then A′ is obtained from A by striking out some
rows and their corresponding columns. Thus if Γ is irreducible, so that we may
apply the Perron-Frobenius theorem to A, and if Γ′ is a proper subgraph (so
we have actually deleted some rows and columns of A to obtain A′), then the
maximum eigenvalue of A′ is strictly less than the maximum eigenvalue of A.
Similarly, if an entry A_{ij} is > 1, the matrix A′ obtained from A by decreasing
this entry while still keeping it positive will have a strictly smaller maximal
eigenvalue.

We now apply this theorem to conclude that the diagrams listed in Figures
Aff 1, Aff 2, and Aff 3 are all the possible connected diagrams with maximal
eigenvalue two. A direct check shows that the vector whose coordinate at each node is
the integer attached to that node given in the figure is an eigenvector with
eigenvalue 2. Perron-Frobenius then guarantees that 2 is the maximal eigenvalue. But
now that we have shown that for each of these diagrams the maximal eigenvalue
is two, any "larger" diagram must have maximal eigenvalue strictly greater than
two and any "smaller" diagram must have maximal eigenvalue strictly less than
two.

To get started, this argument shows that A_1^(1) is the only diagram for which
there is an i, j for which both a_{ij} and a_{ji} are > 1. Indeed, if A were such a
matrix, by striking out all but the i and j rows and columns, we would obtain a
two by two matrix whose off diagonal entries are both ≥ 2. If there were strict
inequality, the maximum eigenvalue of this matrix would have to be bigger than
2 (and hence also that of the original diagram) by Perron-Frobenius.
So other than ¹
(1)
1
, we may assume that if o
ij
1 then o
ji
= 1.
Since any diagram with some entry o
ij
≥ 4 must contain ¹
(2)
2
we see that
this is the only diagram with this property and with maximum eigenvalue 2.
So other than this case, all o
ij
≤ 3.
Diagram G
(1)
2
shows that a diagram with only two vertices and a triple bond
has maximum eigenvalue strictly less than 2, since it is contained in G
(1)
2
as
a subdiagram. So any diagram with a triple bond must have at least three
vertices. But then it must “contain” either G
(1)
2
or 1
(3)
4
. But as both of these
have maximal eigenvalue 2, it can not strictly contain either. So G
(1)
2
and
1
(3)
4
.are the only possibilities with a triple bond.
Since A_ℓ^(1), ℓ ≥ 2, is a cycle with maximum eigenvalue 2, no graph can contain a cycle without actually being a cycle, i.e. being A_ℓ^(1). On the other hand, a simple chain with only single bonds is contained in A_ℓ^(1), and so must have maximum eigenvalue strictly less than 2. So other than A_ℓ^(1), every candidate must contain at least one branch point or one double bond.
If the graph contains two double bonds, there are three possibilities as to the mutual orientation of the arrows: they could point toward one another as in C_ℓ^(1), away from one another as in D_{ℓ+1}^(2), or in the same direction as in A_{2ℓ}^(2). But then these are the only possibilities for diagrams with two double bonds, as no diagram can strictly contain any of them.
Also, striking off one end vertex of C_ℓ^(1) yields a graph with one extreme vertex carrying a double bond, with the arrow pointing away from that vertex, and no branch points. Striking out one of the two vertices at the end opposite the double bond in B_ℓ^(1) yields a graph with one extreme vertex carrying a double bond with the arrow pointing toward this vertex. So either such diagram must have maximum eigenvalue < 2.
Thus if there are no branch points, there must be at least one double bond and at least two vertices on either side of the double bond. The graph with exactly two vertices on either side is strictly contained in F_4^(1) and so is excluded. So there must be at least three vertices on one side and two on the other side of the double bond. But then F_4^(1) and E_6^(2) exhaust the possibilities for one double bond and no branch points.
If there is a double bond and a branch point, then either the double bond points toward the branch, as in A_{2ℓ-1}^(2), or away from the branch, as in B_ℓ^(1). But then these exhaust the possibilities for a diagram containing both a double bond and a branch point.
If there are two branch points, the diagram must contain D_ℓ^(1) and hence must coincide with D_ℓ^(1).
So we are left with the task of analyzing the possibilities for diagrams with no double bonds and a single branch point. Let m denote the minimum number of vertices on some leg of the branch (excluding the branch point itself). If m ≥ 2, then the diagram contains E_6^(1) and hence must coincide with E_6^(1). So we may assume that m = 1. If two branches have only one vertex emanating, then the diagram is strictly contained in D_ℓ^(1) and hence excluded. So each of the two other legs has at least two vertices. If both of these legs have more than two vertices on them, the graph must contain, and hence coincide with, E_7^(1). We are left with the sole possibility that one of the legs emanating from the branch point has one vertex and a second leg has two vertices. But then the graph either contains or is contained in E_8^(1), so E_8^(1) is the only such possibility.
We have completed the proof that the diagrams listed in Aff 1, Aff 2 and Aff 3 are the only diagrams without loops with maximum eigenvalue 2.
If we allow loops, an easy extension of the above argument shows that the only new diagrams are the ones in the table "Loops allowed".
6.5 Classiﬁcation of the irreducible ∆.
Notice that if we remove a vertex labeled 1 (and the bonds emanating from it) from any of the diagrams in Aff 2 or Aff 3 we obtain a diagram which can also be obtained by removing a vertex labeled 1 from one of the diagrams in Aff 1. (In the diagram so obtained we ignore the remaining labels.) Indeed, removing the right hand vertex labeled 1 from D_4^(3) yields A_2, which is obtained from A_2^(1) by removing a vertex. Removing the left vertex marked 1 gives G_2, the diagram obtained from G_2^(1) by removing the vertex marked 1.
Removing a vertex from A_2^(2) gives A_1. Removing the vertex labeled 1 from A_{2ℓ}^(2) yields B_ℓ, obtained by removing one of the vertices labeled 1 from B_ℓ^(1).
Removing a vertex labeled 1 from A_{2ℓ-1}^(2) yields B_ℓ or C_ℓ, removing a vertex labeled 1 from D_{ℓ+1}^(2) yields B_ℓ, and removing a vertex labeled 1 from E_6^(2) yields F_4 or C_4.
Thus all irreducible ∆ correspond to graphs obtained by removing a vertex labeled 1 from the diagrams in table Aff 1. So we have classified all possible Dynkin diagrams of all irreducible ∆. They are given in the table labeled Dynkin diagrams.
6.6 Classiﬁcation of the irreducible root systems.
It is useful to introduce here some notation due to Bourbaki: A subset Φ of a Euclidean space E is called a root system if the following axioms hold:
• Φ is finite, spans E, and does not contain 0.
• If α ∈ Φ then the only multiples of α which are in Φ are ±α.
• If α ∈ Φ then the reflection s_α in the hyperplane orthogonal to α sends Φ into itself.
• If α, β ∈ Φ then ⟨β, α⟩ ∈ Z.
Recall that
⟨β, α⟩ := 2(β, α)/(α, α),
so that the reflection s_α is given by
s_α(β) = β − ⟨β, α⟩ α.
We have shown that each semisimple Lie algebra gives rise to a root system, and derived properties of the root system. If we go back to the various arguments, we will find that most of them apply to a "general" root system according to the above definition. The one place where we used Lie algebra arguments directly was in showing that if β ≠ ±α is a root, then the collection of j such that β + jα is a root forms an unbroken chain going from −r to q, where r − q = ⟨β, α⟩. For this we used the representation theory of sl(2). So we now pause to give an alternative proof of this fact based solely on the axioms above, and in the process derive some additional useful information about roots.
For any two nonzero vectors α and β in E, the cosine of the angle between them is given by
‖α‖ ‖β‖ cos θ = (α, β).
So
⟨β, α⟩ = 2 (‖β‖/‖α‖) cos θ.
Interchanging the roles of α and β and multiplying gives
⟨β, α⟩ ⟨α, β⟩ = 4 cos² θ.
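For example (a worked check added here, using the G_2 root system): if α is a short root and β a long root at angle θ = 5π/6 with ‖β‖² = 3‖α‖², then ⟨β, α⟩ = 2√3 · (−√3/2) = −3 and ⟨α, β⟩ = −1, so ⟨β, α⟩⟨α, β⟩ = 3 = 4 cos²(5π/6), consistent with the table below.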
[Figure 6.5: Dynkin diagrams. The diagrams shown are A_ℓ (ℓ ≥ 1), B_ℓ (ℓ ≥ 2), C_ℓ (ℓ ≥ 2), D_ℓ (ℓ ≥ 4), E_6, E_7, E_8, F_4 and G_2.]
The right hand side is a nonnegative integer between 0 and 4. So assuming that α ≠ ±β and ‖β‖ ≥ ‖α‖, the possibilities are listed in the following table:

⟨α, β⟩   ⟨β, α⟩   θ        ‖β‖²/‖α‖²
  0        0      π/2      undetermined
  1        1      π/3      1
 −1       −1      2π/3     1
  1        2      π/4      2
 −1       −2      3π/4     2
  1        3      π/6      3
 −1       −3      5π/6     3
Proposition 20 If α ≠ ±β and if (α, β) > 0 then α − β is a root. If (α, β) < 0 then α + β is a root.

Proof. The second assertion follows from the first by replacing β by −β. So we need to prove the first assertion. From the table, one or the other of ⟨β, α⟩ or ⟨α, β⟩ equals one. So either s_α β = β − α is a root or s_β α = α − β is a root. But roots occur along with their negatives, so in either event α − β is a root. QED
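For instance (a small example added for concreteness): in the root system of sl(3), the simple roots α_1, α_2 satisfy (α_1, α_2) < 0, and indeed α_1 + α_2 is a root; while (α_1 + α_2, α_2) > 0, and the difference α_1 is a root.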
Proposition 21 Suppose that α ≠ ±β are roots. Let r be the largest integer such that β − rα is a root, and let q be the largest integer such that β + qα is a root. Then β + iα is a root for all −r ≤ i ≤ q. Furthermore r − q = ⟨β, α⟩, so in particular |q − r| ≤ 3.

Proof. Suppose not. Then we can find a p and an s with −r ≤ p < s ≤ q such that β + pα is a root but β + (p+1)α is not a root, and β + sα is a root but β + (s−1)α is not. The preceding proposition then implies that
⟨β + pα, α⟩ ≥ 0 while ⟨β + sα, α⟩ ≤ 0,
which is impossible since ⟨α, α⟩ = 2 > 0.
Now s_α adds a multiple of α to any root, and so preserves the string of roots β − rα, β − (r−1)α, …, β + qα. Furthermore
s_α(β + iα) = β − (⟨β, α⟩ + i)α,
so s_α reverses the order of the string. In particular
s_α(β + qα) = β − rα.
The left hand side is β − (⟨β, α⟩ + q)α, so r − q = ⟨β, α⟩ as stated in the proposition. QED
We can now apply all the preceding definitions and arguments to conclude that the Dynkin diagrams above classify all the irreducible bases ∆ of root systems.
Since every root is conjugate to a simple root, we can use the Dynkin diagrams to conclude that in an irreducible root system either all roots have the same length (cases A, D, E) or there are two root lengths (the remaining cases). Furthermore, if β denotes a long root and α a short root, the ratio ‖β‖²/‖α‖² is 2 in the cases B, C, and F_4, and 3 in the case G_2.

Proposition 22 In an irreducible root system, the Weyl group W acts irreducibly on E. In particular, the W-orbit of any root spans E.
Proof. Let E' be a proper invariant subspace, and let E'' denote its orthogonal complement, so
E = E' ⊕ E''.
For any root α, if e ∈ E' then s_α e = e − ⟨e, α⟩ α ∈ E'. So either (e, α) = 0 for all e ∈ E', in which case α ∈ E'', or else α ∈ E'. Thus every root lies in E' or in E''. Since the roots span E, they can not all belong to the same proper subspace, so some roots lie in E' and some in E''; but roots in E' are orthogonal to roots in E'', and this contradicts the irreducibility of the root system. QED
Proposition 23 If there are two distinct root lengths in an irreducible root system, then all roots of the same length are conjugate under the Weyl group. Also, the maximal root is long.

Proof. Suppose that α and β have the same length. By the preceding proposition we can find a Weyl group element w such that wβ is not orthogonal to α. So we may assume that ⟨β, α⟩ ≠ 0. Since α and β have the same length, the table above gives ⟨β, α⟩ = ±1. Replacing β by −β = s_β β if necessary, we may assume that ⟨β, α⟩ = 1. Then
(s_β s_α s_β)(α) = (s_β s_α)(α − β) = s_β(−α − β + α) = s_β(−β) = β. QED
Let (E, Φ) and (E', Φ') be two root systems. We say that a linear map
f : E → E'
is an isomorphism from the root system (E, Φ) to the root system (E', Φ') if f is a linear isomorphism of E onto E' with f(Φ) = Φ' and
⟨f(β), f(α)⟩ = ⟨β, α⟩
for all α, β ∈ Φ.
Theorem 14 Let ∆ = {α_1, …, α_ℓ} be a base of Φ. Suppose that (E', Φ') is a second root system with base ∆' = {α'_1, …, α'_ℓ}, and that
⟨α_i, α_j⟩ = ⟨α'_i, α'_j⟩  ∀ 1 ≤ i, j ≤ ℓ.
Then the bijection
α_i ↦ α'_i
extends to a unique isomorphism f : (E, Φ) → (E', Φ'). In other words, the Cartan matrix A of ∆ determines Φ up to isomorphism. In particular, the Dynkin diagrams characterize all possible irreducible root systems.
Proof. Since ∆ is a basis of E and ∆' is a basis of E', the map α_i ↦ α'_i extends to a unique linear isomorphism f of E onto E'. The equality in the theorem implies that for α, β ∈ ∆ we have
s_{f(α)} f(β) = f(β) − ⟨f(β), f(α)⟩ f(α) = f(s_α β).
Since the Weyl groups are generated by these simple reflections, this implies that the map
w ↦ f ∘ w ∘ f⁻¹
is an isomorphism of W onto W'. Every β ∈ Φ is of the form w(α) where w ∈ W and α is a simple root. Thus
f(β) = (f ∘ w ∘ f⁻¹) f(α) ∈ Φ',
so f(Φ) = Φ'. Since s_α(β) = β − ⟨β, α⟩α, the number ⟨β, α⟩ is determined by the reflection s_α acting on β. But then the corresponding formula for Φ', together with the fact that
s_{f(α)} = f ∘ s_α ∘ f⁻¹,
implies that
⟨f(β), f(α)⟩ = ⟨β, α⟩.
QED
6.7 The classification of the possible simple Lie algebras.

Suppose that (g, h) is a pair consisting of a semisimple Lie algebra g and a Cartan subalgebra h. This determines the corresponding Euclidean space E and root system Φ. Suppose we have a second such pair (g', h'). We would like to show that an isomorphism of (E, Φ) with (E', Φ') determines a Lie algebra isomorphism of g with g'. This would then imply that the Dynkin diagrams classify all possible simple Lie algebras. We would still be left with the problem of showing that the exceptional Lie algebras exist. We will defer this until Chapter VIII, where we prove Serre's theorem, which gives a direct construction of all the simple Lie algebras in terms of generators and relations determined by the Cartan matrix.
We need a few preliminaries.

Proposition 24 Every positive root can be written as a sum of simple roots
α_{i_1} + ⋯ + α_{i_k}
in such a way that every partial sum is again a root.

Proof. By induction (say on the height) it is enough to prove that for every positive root β which is not simple, there is a simple root α such that β − α is a root. We can not have (β, α) ≤ 0 for all α ∈ ∆, for this would imply that the set {β} ∪ ∆ is independent (by the same method that we used to prove that ∆ is independent). So (β, α) > 0 for some α ∈ ∆, and so β − α is a root. Since β is not simple, its height is at least two, and so subtracting α leaves a root which is neither zero nor negative, hence positive. QED
Proposition 25 Let (g, h) be a semisimple Lie algebra with a choice of Cartan subalgebra, let Φ be the corresponding root system, and let ∆ be a base. Then g is generated as a Lie algebra by the subspaces g_α, g_{−α}, α ∈ ∆.

From the representation theory of sl(2)_α we know that [g_α, g_β] = g_{α+β} if α + β is a root. Thus, by the preceding proposition, we can successively obtain all the g_β for β positive by bracketing the g_α, α ∈ ∆. Similarly we can get all the g_β for β negative from the g_{−α}. So we can get all the root spaces. But [g_α, g_{−α}] = C h_α, so we can get all of h. The decomposition
g = h ⊕ ⨁_{γ∈Φ} g_γ
then shows that we have generated all of g.
Here is the big theorem:
Theorem 15 Let (g, h) and (g', h') be simple Lie algebras with choices of Cartan subalgebras, and let Φ, Φ' be the corresponding root systems. Suppose there is an isomorphism
f : (E, Φ) → (E', Φ')
which is an isometry of Euclidean spaces. Extend f to an isomorphism h* → h'* via complexification, and let f : h → h' denote the corresponding isomorphism on the Cartan subalgebras obtained by identifying h and h' with their duals using the Killing form.
Fix a base ∆ of Φ and ∆' of Φ'. Choose 0 ≠ x_α ∈ g_α for α ∈ ∆ and 0 ≠ x'_{α'} ∈ g'_{α'} for α' ∈ ∆'. Extend f to a linear map
f : h ⊕ ⨁_{α∈∆} g_α → h' ⊕ ⨁_{α'∈∆'} g'_{α'}
by
f(x_α) = x'_{α'}, where α' = f(α).
Then f extends to a unique isomorphism g → g'.

Proof. The uniqueness is easy. Given x_α there is a unique y_α ∈ g_{−α} for which
[x_α, y_α] = h_α,
so f, if it exists, is determined on the y_α and hence on all of g, since the x_α and y_α generate g by the preceding proposition.
To prove the existence, we will construct the graph of this isomorphism. That is, we will construct a subalgebra k of g ⊕ g' whose projections onto the first and onto the second factor are isomorphisms:
Use the x_α and y_α as above, with the corresponding elements x'_α and y'_α in g'. Let
x̄_α := x_α ⊕ x'_α ∈ g ⊕ g',
and similarly define
ȳ_α := y_α ⊕ y'_α
and
h̄_α := h_α ⊕ h'_α.
Let β be the (unique) maximal root of g, and choose x ∈ g_β. Make a similar choice of x' ∈ g'_{β'}, where β' is the maximal root of g'. Set
x̄ := x ⊕ x'.
Let m ⊂ g ⊕ g' be the subspace spanned by all the
ad ȳ_{α_{i_1}} ⋯ ad ȳ_{α_{i_m}} x̄.
The element ad ȳ_{α_{i_1}} ⋯ ad ȳ_{α_{i_m}} x̄ belongs to g_{β − Σ α_{i_j}} ⊕ g'_{β' − Σ α'_{i_j}}, so
m ∩ (g_β ⊕ g'_{β'}) is one dimensional.
In particular m is a proper subspace of g ⊕ g'.
Let k denote the subalgebra of g ⊕ g' generated by the x̄_α, the ȳ_α and the h̄_α. We claim that
[k, m] ⊂ m.
Indeed, it is enough to prove that m is invariant under the adjoint action of the generators of k. For the ad ȳ_α this follows from the definition. For the ad h̄_α we use the fact that
[h, y_α] = −α(h) y_α
to move the ad h̄_α past all the ad ȳ_γ at the cost of introducing some scalar multiple, while
ad h̄_α x̄ = ⟨β, α⟩ x ⊕ ⟨β', α'⟩ x' = ⟨β, α⟩ x̄,
because f is an isomorphism of root systems.
Finally, [x̄_{α_1}, ȳ_{α_2}] = 0 if α_1 ≠ α_2 ∈ ∆, since α_1 − α_2 is not a root. On the other hand [x̄_α, ȳ_α] = h̄_α. So we can move the ad x̄_α past the ad ȳ_γ at the expense of introducing an ad h̄_α every time γ = α. Now α + β is not a root, since β is the maximal root. So [x_α, x] = 0. Thus ad x̄_α x̄ = 0, and we have proved that [k, m] ⊂ m. But since m is a proper subspace of g ⊕ g', this implies that k is a proper subalgebra, since otherwise m would be a proper ideal, and the only proper ideals in g ⊕ g' are g and g'.
Now the subalgebra k can not contain any element of the form z ⊕ 0, z ≠ 0, for if it did, it would have to contain all of the elements of the form u ⊕ 0: we could repeatedly apply ad x̄_α's until we reached the maximal root space, and then get all of g ⊕ 0, which would mean that k would also contain all of 0 ⊕ g' and hence all of g ⊕ g', which we know not to be the case. Similarly k can not contain any element of the form 0 ⊕ z'. So the projections of k onto g and onto g' are linear isomorphisms. By construction they are Lie algebra homomorphisms. Hence the inverse of the projection of k onto g followed by the projection of k onto g' is a Lie algebra isomorphism of g onto g'. By construction it sends x_α to x'_α and h_α to h'_α, and so is an extension of f. QED
Chapter 7
Cyclic highest weight
modules.
In this chapter, g will denote a semisimple Lie algebra for which we have chosen a Cartan subalgebra h and a base ∆ for the roots Φ = Φ⁺ ∪ Φ⁻ of g.
We will be interested in describing its finite dimensional irreducible representations. If V is a finite dimensional module for g, then h has at least one simultaneous eigenvector; that is, there is a μ ∈ h* and a w ≠ 0 in V such that
h w = μ(h) w  ∀ h ∈ h.   (7.1)
The linear function μ is called a weight and the vector w is called a weight vector. If x ∈ g_α, then
h x w = [h, x] w + x h w = (μ + α)(h) x w.
This shows that the span of all vectors w satisfying an equation of the type (7.1) (for varying μ) is an invariant subspace. If V is irreducible, then the weight vectors (those satisfying an equation of the type (7.1)) must span all of V. Furthermore, since V is finite dimensional, there must be a vector v and a linear function λ such that
h v = λ(h) v  ∀ h ∈ h,   e_α v = 0  ∀ α ∈ Φ⁺.   (7.2)
Using irreducibility again, we conclude that
V = U(g) v.
The module is cyclic, generated by v. In fact we can be more precise: Let h_1, …, h_ℓ be the basis of h corresponding to the choice of simple roots, and let
e_i ∈ g_{α_i},  f_i ∈ g_{−α_i},
where α_1, …, α_m are all the positive roots. (We can choose them so that each e_i and f_i generate a little sl(2).) Then
g = n⁻ ⊕ h ⊕ n⁺,
where e_1, …, e_m is a basis of n⁺, h_1, …, h_ℓ is a basis of h, and f_1, …, f_m is a basis of n⁻. The Poincaré-Birkhoff-Witt theorem says that monomials of the form
f_1^{i_1} ⋯ f_m^{i_m} h_1^{j_1} ⋯ h_ℓ^{j_ℓ} e_1^{k_1} ⋯ e_m^{k_m}
form a basis of U(g). Here we have chosen to place all the e's to the extreme right, with the h's in the middle and the f's to the left. It now follows that the elements
f_1^{i_1} ⋯ f_m^{i_m} v
span V. Every such element, if nonzero, is a weight vector with weight
λ − (i_1 α_1 + ⋯ + i_m α_m).
Recall that
μ ≺ λ means that λ − μ = Σ k_i α_i, α_i > 0,
where the k_i are nonnegative integers. We have shown that every weight μ of V satisfies
μ ≺ λ.
So we make the definition: A cyclic highest weight module for g is a module (not necessarily finite dimensional) which has a vector v⁺ such that
x⁺ v⁺ = 0  ∀ x⁺ ∈ n⁺,   h v⁺ = λ(h) v⁺  ∀ h ∈ h,
and
V = U(g) v⁺.
In any such cyclic highest weight module every submodule is a direct sum of its weight spaces (by van der Monde). The weight spaces V_μ all satisfy
μ ≺ λ
and we have
V = ⨁ V_μ.
Any proper submodule can not contain the highest weight vector, and so the sum of two proper submodules is again a proper submodule. Hence any such V has a unique maximal proper submodule and hence a unique irreducible quotient. The quotient of any highest weight module by an invariant submodule, if not zero, is again a cyclic highest weight module with the same highest weight.
7.1 Verma modules.

There is a "biggest" cyclic highest weight module associated with any λ ∈ h*, called the Verma module. It is defined as follows: Let us set
b := h ⊕ n⁺.
Given any λ ∈ h*, let C_λ denote the one dimensional vector space C with basis z⁺ and with the action of b given by
(h + Σ_{β>0} x_β) z⁺ := λ(h) z⁺.
So C_λ is a left U(b) module. By the Poincaré-Birkhoff-Witt theorem, U(g) is a free right U(b) module with basis {f_1^{i_1} ⋯ f_m^{i_m}}, and so we can form the Verma module
Verm(λ) := U(g) ⊗_{U(b)} C_λ,
which is a cyclic module with highest weight vector v⁺ := 1 ⊗ z⁺.
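For orientation, here is the simplest case (an added example, not in the original): for g = sl(2) with the usual basis e, h, f, and λ identified with the number λ(h), the Verma module Verm(λ) has basis v⁺, f v⁺, f² v⁺, …, with
h (f^k v⁺) = (λ − 2k) f^k v⁺,   e (f^k v⁺) = k(λ − k + 1) f^{k−1} v⁺,   f (f^k v⁺) = f^{k+1} v⁺.
It is irreducible unless λ is a nonnegative integer, in which case the span of f^{λ+1} v⁺, f^{λ+2} v⁺, … is the unique maximal proper submodule and the quotient Irr(λ) has dimension λ + 1.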
Furthermore, any two irreducible cyclic highest weight modules with the same highest weight are isomorphic. Indeed, if V and W are two such, with highest weight vectors v⁺ and w⁺, consider V ⊕ W, which has (v⁺, w⁺) as a maximal weight vector with weight λ; hence Z := U(g)(v⁺, w⁺) is cyclic of highest weight λ. Projections onto the first and second factors give nonzero homomorphisms, which must be surjective. But Z has a unique irreducible quotient. Hence these projections must induce isomorphisms on this quotient, so V and W are isomorphic.
Hence, up to isomorphism, there is a unique irreducible cyclic highest weight module with highest weight λ. We call it
Irr(λ).
In short, we have constructed a "largest" highest weight module Verm(λ) and a "smallest" highest weight module Irr(λ).
7.2 When is dim Irr(λ) < ∞?

If Irr(λ) is finite dimensional, then it is finite dimensional as a module over any subalgebra, in particular over any subalgebra isomorphic to sl(2). Applied to the subalgebra sl(2)_i generated by e_i, h_i, f_i we conclude that
λ(h_i) ∈ Z.
Such a weight is called integral. Furthermore, the representation theory of sl(2) says that the maximal weight of any finite dimensional representation must satisfy
λ(h_i) = ⟨λ, α_i⟩ ≥ 0,
so that λ lies in the closure of the fundamental Weyl chamber. Such a weight is called dominant. So a necessary condition for Irr(λ) to be finite dimensional is that λ be dominant integral. We now show that, conversely, Irr(λ) is finite dimensional whenever λ is dominant integral.
For this we recall that in the universal enveloping algebra U(g) we have
1. [e_j, f_i^{k+1}] = 0 if i ≠ j,
2. [h_j, f_i^{k+1}] = −(k+1) α_i(h_j) f_i^{k+1},
3. [e_i, f_i^{k+1}] = −(k+1) f_i^k (k·1 − h_i),
where the first two equations are consequences of the fact that ad is a derivation and
[e_i, f_j] = 0 if i ≠ j, since α_i − α_j is not a root,
and
[h_j, f_i] = −α_i(h_j) f_i.
The last is the fact about sl(2) which we proved in Chapter II. Notice that it follows from 1.) that e_j (f_i^k v⁺) = 0 for all k and all i ≠ j, and from 3.) that
e_i f_i^{λ(h_i)+1} v⁺ = 0,
so that f_i^{λ(h_i)+1} v⁺ is a maximal weight vector. If it were nonzero, the cyclic module it generates would be a proper submodule of Irr(λ), contradicting the irreducibility. Hence
f_i^{λ(h_i)+1} v⁺ = 0.
So for each i the subspace spanned by v⁺, f_i v⁺, …, f_i^{λ(h_i)} v⁺ is a finite dimensional sl(2)_i module. In particular Irr(λ) contains some finite dimensional sl(2)_i modules. Let V' denote the sum of all such. If W is a finite dimensional sl(2)_i module, then g_α W is again finite dimensional, and thus so is their sum, which is a finite dimensional sl(2)_i module. Hence V' is g stable, hence is all of Irr(λ).
In particular, the e_i and the f_i act as locally nilpotent operators on Irr(λ).
So the operators τ_i := (exp e_i)(exp −f_i)(exp e_i) are well defined and
τ_i (Irr(λ))_μ = Irr(λ)_{s_i μ},
so
dim Irr(λ)_{wμ} = dim Irr(λ)_μ  ∀ w ∈ W   (7.3)
where W denotes the Weyl group. These weight spaces are all finite dimensional: Indeed their dimension is at most the corresponding dimension in the Verma module Verm(λ), since Irr(λ)_μ is a quotient space of Verm(λ)_μ. But Verm(λ)_μ has a basis consisting of those f_1^{k_1} ⋯ f_m^{k_m} v⁺. The number of such elements is the number of ways of writing
λ − μ = k_1 α_1 + ⋯ + k_m α_m.
So dim Verm(λ)_μ is the number of m-tuples of nonnegative integers (k_1, …, k_m) such that the above equation holds. This number is clearly finite, and is known as P_K(λ − μ), the Kostant partition function of λ − μ, which will play a central role in what follows.
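As a computational aside (a minimal sketch added here, not part of the original; the root data for A_2 = sl(3) are hard-coded as an assumption), the Kostant partition function can be tabulated by brute force:

    from itertools import product

    # positive roots of A_2 = sl(3), written in the basis of simple roots (alpha_1, alpha_2)
    positive_roots = [(1, 0), (0, 1), (1, 1)]

    def kostant_partition(mu, max_coeff=20):
        """Number of ways to write mu (in the simple-root basis) as a nonnegative
        integer combination of the positive roots."""
        count = 0
        for ks in product(range(max_coeff + 1), repeat=len(positive_roots)):
            total = tuple(sum(k * r[i] for k, r in zip(ks, positive_roots)) for i in range(2))
            if total == mu:
                count += 1
        return count

    print(kostant_partition((1, 1)))   # 2: alpha_1 + alpha_2, or the single root alpha_1 + alpha_2
    print(kostant_partition((2, 1)))   # 2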
Now every element of E is conjugate under W to an element of the closure of the fundamental Weyl chamber, i.e. to a μ satisfying
(μ, α_i) ≥ 0,
i.e. to a μ that is dominant. We claim that there are only finitely many dominant weights μ with μ ≺ λ, which will complete the proof of finite dimensionality. Indeed, the sum of two dominant weights is dominant, so λ + μ is dominant. On the other hand, λ − μ = Σ k_i α_i with the k_i ≥ 0. So
(λ, λ) − (μ, μ) = (λ + μ, λ − μ) = Σ k_i (λ + μ, α_i) ≥ 0.
So μ lies in the intersection of the ball of radius √(λ, λ) with the discrete set of weights ≺ λ, which is finite.
We record a consequence of (7.3) which is useful under very special circumstances. Suppose we are given a finite dimensional representation of g with the property that each weight space is one dimensional and all weights are conjugate under W. Then this representation must be irreducible. For example, take g = sl(n+1) and consider the representation of g on Λ^k(C^{n+1}), 1 ≤ k ≤ n. In terms of the standard basis e_1, …, e_{n+1} of C^{n+1}, the elements e_{i_1} ∧ ⋯ ∧ e_{i_k} are weight vectors with weights L_{i_1} + ⋯ + L_{i_k}, where h consists of all diagonal traceless matrices and L_i is the linear function which assigns to each diagonal matrix its i-th entry.
These weight spaces are all one dimensional and conjugate under the Weyl group. Hence these representations are irreducible, with highest weight
ω_k := L_1 + ⋯ + L_k,
in terms of the usual choice of base h_1, …, h_n, where h_j is the diagonal matrix with 1 in the j-th position, −1 in the (j+1)-st position and zeros elsewhere.
Notice that
ω_i(h_j) = δ_ij,
so that the ω_i form a basis of the "weight lattice" consisting of those λ ∈ h* which take integral values on h_1, …, h_n.
7.3 The value of the Casimir.
Recall that our basis of U(g) consists of the elements
f_1^{i_1} ⋯ f_m^{i_m} h_1^{j_1} ⋯ h_ℓ^{j_ℓ} e_1^{k_1} ⋯ e_m^{k_m}.
The elements of U(h) are then the ones with no e or f component in their expression. So we have a vector space direct sum decomposition
U(g) = U(h) ⊕ (U(g) n⁺ + n⁻ U(g)),
where n⁺ and n⁻ are the corresponding nilpotent subalgebras. Let γ denote projection onto the first factor in this decomposition. Now suppose z ∈ Z(g), the center of the universal enveloping algebra. In particular, z ∈ U(g)^h. The eigenvalue of the monomial above under the action of h ∈ h is
Σ_{s=1}^{m} (k_s − i_s) α_s(h).
So no monomial in the expression for z can have f factors alone. We have proved that
z − γ(z) ∈ U(g) n⁺  ∀ z ∈ Z(g).   (7.4)
For any λ ∈ h*, the element z ∈ Z(g) acts as a scalar, call it χ_λ(z), on the Verma module associated to λ. In particular, if λ is a dominant integral weight, it acts by this same scalar on the irreducible finite dimensional module associated to λ.
On the other hand, the linear map λ : h → C extends to a homomorphism, which we will also denote by λ, of U(h) = S(h) → C. Explicitly, if we think of elements of U(h) = S(h) as polynomials on h*, then λ(P) = P(λ) for P ∈ S(h). Since n⁺ v = 0 if v is the maximal weight vector, we conclude from (7.4) that
χ_λ(z) = λ(γ(z))  ∀ z ∈ Z(g).   (7.5)
We want to apply this formula to the second order Casimir element associated to the Killing form κ. So let h^1, …, h^ℓ ∈ h be the dual basis to h_1, …, h_ℓ relative to κ, i.e.
κ(h^i, h_j) = δ_ij.
Let x_α ∈ g_α be a basis (i.e. nonzero) element and z_α ∈ g_{−α} be the dual basis element to x_α under the Killing form, so the second order Casimir element is
Cas_κ = Σ_i h^i h_i + Σ_α x_α z_α,
where the second sum on the right is over all roots. We might choose x_α = e_α for positive roots, and then the corresponding z_α is some multiple of f_α. (And, for present purposes, we might even choose f_α = z_α for positive α.) The problem is that the z_α for positive α in the above expression for Cas_κ are written to the right, and we must move them to the left. So we write
Cas_κ = Σ_i h^i h_i + Σ_{α>0} [x_α, z_α] + Σ_{α>0} z_α x_α + Σ_{α<0} x_α z_α.
This expression for Cas_κ has all the n⁺ elements moved to the right; in particular, all of the summands in the last two sums annihilate v_λ. Hence
γ(Cas_κ) = Σ_i h^i h_i + Σ_{α>0} [x_α, z_α]
and
χ_λ(Cas_κ) = Σ_i λ(h^i) λ(h_i) + Σ_{α>0} λ([x_α, z_α]).
For any h ∈ h we have
κ(h, [x_α, z_α]) = κ([h, x_α], z_α) = α(h) κ(x_α, z_α) = α(h),
so
[x_α, z_α] = t_α,
where t_α ∈ h is uniquely determined by
κ(t_α, h) = α(h)  ∀ h ∈ h.
Let ( , )_κ denote the bilinear form on h* obtained from the identification of h with h* given by κ. Then
Σ_{α>0} λ([x_α, z_α]) = Σ_{α>0} λ(t_α) = Σ_{α>0} (λ, α)_κ = 2(λ, ρ)_κ   (7.6)
where
ρ := (1/2) Σ_{α>0} α.
On the other hand, let the constants a_i be defined by
λ(h) = Σ_i a_i κ(h_i, h)  ∀ h ∈ h.
In other words, λ corresponds to Σ a_i h_i under the isomorphism of h with h*, so
(λ, λ)_κ = Σ_{i,j} a_i a_j κ(h_i, h_j).
Since κ(h^i, h_j) = δ_ij we have
λ(h^i) = a_i.
Combined with λ(h_i) = Σ_j a_j κ(h_j, h_i), this gives
(λ, λ)_κ = Σ_i λ(h^i) λ(h_i).   (7.7)
Combined with (7.6) this yields
χ_λ(Cas_κ) = (λ + ρ, λ + ρ)_κ − (ρ, ρ)_κ.   (7.8)
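As a sanity check (a worked example added here, not in the original): for g = sl(2) with Killing form κ(x, y) = 4 tr(xy) one finds (α, α)_κ = 1/2 and ρ = α/2, so for the (m+1)-dimensional irreducible with highest weight λ = mα/2,
χ_λ(Cas_κ) = (λ + ρ, λ + ρ)_κ − (ρ, ρ)_κ = ((m+1)² − 1)/8 = m(m+2)/8,
which is the familiar eigenvalue of the Casimir, normalized by the Killing form, on that representation.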
We now use this innocuous looking formula to prove the following: We let L = L_g ⊂ h*_R denote the lattice of integral linear forms on h, i.e.
L = { μ ∈ h* | 2(μ, φ)/(φ, φ) ∈ Z ∀ φ ∈ ∆ }.   (7.9)
L is called the weight lattice of g.
For μ, λ ∈ L recall that
μ ≺ λ
if λ − μ is a sum of positive roots. Then
Proposition 26 Any cyclic highest weight module Z(λ), λ ∈ L, has a composition series whose quotients are irreducible modules Irr(μ), where μ ≺ λ satisfies
(μ + ρ, μ + ρ)_κ = (λ + ρ, λ + ρ)_κ.   (7.10)
In fact, if
d = Σ dim Z(λ)_μ,
where the sum is over all μ satisfying (7.10), then there are at most d steps in the composition series.

Remark. There are only finitely many μ ∈ L satisfying (7.10), since the set of all μ satisfying (7.10) is compact and L is discrete. Each weight is of finite multiplicity. Therefore d is finite.

Proof by induction on d. We first show that if d = 1 then Z(λ) is irreducible. Indeed, if not, any proper submodule W, being the sum of its weight spaces, must have a highest weight vector with highest weight μ, say. But then
χ_λ(Cas_κ) = χ_μ(Cas_κ),
since W is a submodule of Z(λ) and Cas_κ takes on the constant value χ_λ(Cas_κ) on Z(λ). Thus μ and λ both satisfy (7.10), contradicting the assumption d = 1.
In general, suppose that Z(λ) is not irreducible, so it has a submodule W and quotient module Z(λ)/W. Each of these is a cyclic highest weight module, and we have a corresponding composition series on each factor. In particular,
d = d_W + d_{Z(λ)/W},
so that the d's are strictly smaller for the submodule and the quotient module. Hence we can apply induction. QED
For each λ ∈ L we introduce a formal symbol e(λ), which we want to think of as an "exponential", so the symbols are multiplied according to the rule
e(μ) e(ν) = e(μ + ν).   (7.11)
The character of a module N is defined as
ch_N = Σ dim N_μ · e(μ).
In all cases we will consider (cyclic highest weight modules and the like) all these dimensions will be finite, so the coefficients are well defined, but (in the case of Verma modules, for example) there may be infinitely many terms in the (formal) sum. Logically, such a formal sum is nothing other than a function on L giving the "coefficient" of each e(μ).
In the case that N is finite dimensional, the above sum is finite. If
F = Σ F_μ e(μ) and G = Σ G_ν e(ν)
are two finite sums, then their product (using the rule (7.11)) corresponds to convolution:
(Σ F_μ e(μ)) (Σ G_ν e(ν)) = Σ (F ⋆ G)_λ e(λ),
where
(F ⋆ G)_λ := Σ_{μ+ν=λ} F_μ G_ν.
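In computational terms (a minimal sketch added here, not part of the original; it assumes weights are represented as integer tuples), such finite exponential sums are just dictionaries weight ↦ coefficient, and the product rule (7.11) becomes a convolution:

    from collections import defaultdict

    def convolve(F, G):
        """Product of two finite 'exponential sums', each given as {weight_tuple: coefficient}."""
        H = defaultdict(int)
        for mu, a in F.items():
            for nu, b in G.items():
                lam = tuple(m + n for m, n in zip(mu, nu))
                H[lam] += a * b
        return dict(H)

    # example in a rank-1 lattice: (e(1) + e(-1))^2 = e(2) + 2 e(0) + e(-2)
    F = {(1,): 1, (-1,): 1}
    print(convolve(F, F))   # {(2,): 1, (0,): 2, (-2,): 1}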
So we let Z_fin(L) denote the set of Z-valued functions on L which vanish outside a finite set. It is a commutative ring under convolution, and we will oscillate in notation between writing an element of Z_fin(L) as an "exponential sum" and thinking of it as a function of finite support.
Since we also want to consider infinite sums such as the characters of Verma modules, we enlarge the space Z_fin(L) by defining Z_gen(L) to consist of Z-valued functions whose supports are contained in finite unions of sets of the form λ − Σ_{α>0} k_α α. The convolution of two functions belonging to Z_gen(L) is well defined, and belongs to Z_gen(L). So Z_gen(L) is again a ring.
It now follows from Proposition 26 that
ch_{Z(λ)} = Σ ch_{Irr(μ)},
where the sum is over the finitely many terms in the composition series. In particular, we can apply this to Z(λ) = Verm(λ), the Verma module. Let us order the μ_i ≺ λ satisfying (7.10) in such a way that μ_i ≺ μ_j ⇒ i ≤ j. Then for each of the finitely many μ_j occurring we get a corresponding formula for ch_{Verm(μ_j)}, and so we get a collection of equations
ch_{Verm(μ_j)} = Σ a_{ij} ch_{Irr(μ_i)},
where a_{ii} = 1 and i ≤ j in the sum. We can invert this upper triangular matrix and therefore conclude that there is a formula of the form
ch_{Irr(λ)} = Σ b(μ) ch_{Verm(μ)},   (7.12)
where the sum is over μ ≺ λ satisfying (7.10), with coefficients b(μ) that we shall soon determine. But we do know that b(λ) = 1.
7.4 The Weyl character formula.
We will now prove

Proposition 27 The nonzero coefficients in (7.12) occur only when
μ = w(λ + ρ) − ρ,
where w ∈ W, the Weyl group of g, and then
b(μ) = (−1)^w.
Here
(−1)^w := det w.

We will prove this by proving some combinatorial facts about multiplication of sums of exponentials.
We recall our notation: For λ ∈ h*, Irr(λ) denotes the unique irreducible module of highest weight λ, Verm(λ) denotes the Verma module of highest weight λ, and, more generally, Z(λ) denotes an arbitrary cyclic module of highest weight λ. Also
ρ := (1/2) Σ_{φ∈Φ⁺} φ
is one half the sum of the positive roots. Let λ_i, i = 1, …, dim h, be the basis of the weight lattice L dual to the base ∆. So
λ_i(h_{α_j}) = ⟨λ_i, α_j⟩ = δ_ij.
Since s_i(α_i) = −α_i while keeping all the other positive roots positive, we saw that this implied that
s_i ρ = ρ − α_i
and therefore
⟨ρ, α_i⟩ = 1,  i = 1, …, ℓ := dim h.
In other words
ρ = (1/2) Σ_{φ∈Φ⁺} φ = λ_1 + ⋯ + λ_ℓ.   (7.13)
The Kostant partition function P_K(μ) is defined as the number of sets of nonnegative integers k_β such that
μ = Σ_{β∈Φ⁺} k_β β.
(The value is zero if μ can not be expressed as a sum of positive roots.)
For any module N and any μ ∈ h*, N_μ denotes the weight space of weight μ. For example, in the Verma module Verm(λ), the only nonzero weight spaces are the ones where μ = λ − Σ_{β∈Φ⁺} k_β β, and the multiplicity of this weight space, i.e. the dimension of Verm(λ)_μ, is the number of ways of expressing λ − μ in this fashion, i.e.
dim Verm(λ)_μ = P_K(λ − μ).   (7.14)
In terms of the character notation introduced in the preceding section we can write this as
ch_{Verm(λ)} = Σ P_K(λ − μ) e(μ).
To be consistent with Humphreys' notation, define the Kostant function p by
p(ν) = P_K(−ν),
and then in succinct language
ch_{Verm(λ)} = p( · − λ).   (7.15)
Observe that if
F = Σ F(μ) e(μ)
then
F e(λ) = Σ F(μ) e(λ + μ) = Σ F(ν − λ) e(ν).
We can express this by saying that
F e(λ) = F( · − λ).
Thus, for example,
ch_{Verm(λ)} = p( · − λ) = p e(λ).
Also observe that if
F_α = 1/(1 − e(−α)) := 1 + e(−α) + e(−2α) + ⋯
then
(1 − e(−α)) F_α = 1
and
∏_{α∈Φ⁺} F_α = p
by the definition of the Kostant function.
Define the function q by
q := ∏_{α∈Φ⁺} (e(α/2) − e(−α/2)) = e(ρ) ∏ (1 − e(−α)),
since e(ρ) = ∏_{α∈Φ⁺} e(α/2). Notice that
w q = (−1)^w q.
It is enough to check this for fundamental reflections, but they have the property that they make exactly one positive root negative, hence change the sign of q.
We have
q p = e(ρ).   (7.16)
Indeed,
q p e(−ρ) = [∏ (1 − e(−α))] e(ρ) p e(−ρ) = [∏ (1 − e(−α))] p = [∏ (1 − e(−α))] ∏ F_α = 1.
Therefore,
q ch_{Verm(λ)} = q p e(λ) = e(ρ) e(λ) = e(λ + ρ).
Let us now multiply both sides of (7.12) by q and use the preceding equation. We obtain
q ch_{Irr(λ)} = Σ b(μ) e(μ + ρ),
where the sum is over all μ ≺ λ satisfying (7.10), and the b(μ) are coefficients we must determine.
Now ch_{Irr(λ)} is invariant under the Weyl group W, and q transforms by (−1)^w. Hence if we apply w ∈ W to the preceding equation we obtain
(−1)^w q ch_{Irr(λ)} = Σ b(μ) e(w(μ + ρ)).
This shows that the set of μ + ρ with nonzero coefficients is stable under W, and the coefficients transform by the sign representation on each W-orbit. In particular, each element of the form μ = w(λ+ρ) − ρ has (−1)^w as its coefficient. We can thus write
q ch_{Irr(λ)} = Σ_{w∈W} (−1)^w e(w(λ + ρ)) + R,
where R is a sum of terms corresponding to μ + ρ which are not of the form w(λ + ρ). We claim that there are no such terms and hence R = 0. Indeed, if there were such a term, the transformation properties under W would demand that there be such a term with μ + ρ in the closure of the positive Weyl chamber, i.e.
μ + ρ ∈ Λ := L ∩ D,
where
D = D_g = { λ ∈ E | (λ, φ) ≥ 0 ∀ φ ∈ ∆ }
and E = h*_R denotes the space of real linear combinations of the roots. But we claim that
μ ≺ λ, (μ + ρ, μ + ρ) = (λ + ρ, λ + ρ), and μ + ρ ∈ Λ  ⟹  μ = λ.
Indeed, write μ = λ − π, π = Σ k_α α, k_α ≥ 0, so
0 = (λ + ρ, λ + ρ) − (μ + ρ, μ + ρ)
  = (λ + ρ, λ + ρ) − (λ + ρ − π, λ + ρ − π)
  = (λ + ρ, π) + (π, μ + ρ)
  ≥ (λ + ρ, π)   since μ + ρ ∈ Λ
  ≥ 0,
since λ + ρ ∈ Λ and in fact lies in the interior of D. But the last inequality is strict unless π = 0. Hence π = 0. We will have occasion to use this type of argument several times again in the future. In any event we have derived the fundamental formula
q ch_{Irr(λ)} = Σ_{w∈W} (−1)^w e(w(λ + ρ)).   (7.17)
Notice that if we take λ = 0, so that Irr(λ) is the trivial representation with character 1, then (7.17) becomes
q = Σ_{w∈W} (−1)^w e(wρ),
and this is precisely the denominator in the Weyl character formula:
WCF   ch_{Irr(λ)} = [ Σ_{w∈W} (−1)^w e(w(λ + ρ)) ] / [ Σ_{w∈W} (−1)^w e(wρ) ].   (7.18)
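For example (an illustrative computation added here): for g = sl(2) and λ = mα/2, the Weyl group is {1, s} with s(ν) = −ν and ρ = α/2, so (7.18) reads
ch_{Irr(λ)} = (e((m+1)α/2) − e(−(m+1)α/2)) / (e(α/2) − e(−α/2)) = e(mα/2) + e((m−2)α/2) + ⋯ + e(−mα/2),
recovering the familiar weights m, m−2, …, −m of the (m+1)-dimensional representation.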
7.5 The Weyl dimension formula.
For any weight μ we define
A_μ := Σ_{w∈W} (−1)^w e(wμ).
Then we can write the Weyl character formula as
ch_{Irr(λ)} = A_{λ+ρ} / A_ρ.
For any weight μ, define the homomorphism Ψ_μ from the ring Z_fin(L) into the ring of formal power series in one variable t by the formula
Ψ_μ(e(ν)) = e^{(ν,μ)_κ t}
(and extend linearly). The left hand side of the Weyl character formula belongs to Z_fin(L), and hence so does the right hand side, which is a quotient of two elements of Z_fin(L). Therefore for any μ we have
Ψ_μ(ch_{Irr(λ)}) = Ψ_μ(A_{ρ+λ}) / Ψ_μ(A_ρ).
Moreover
Ψ_μ(A_ν) = Ψ_ν(A_μ)   (7.19)
for any pair of weights. Indeed,
Ψ_μ(A_ν) = Σ_w (−1)^w e^{(μ, wν)_κ t} = Σ_w (−1)^w e^{(w⁻¹μ, ν)_κ t} = Σ_w (−1)^w e^{(wμ, ν)_κ t} = Ψ_ν(A_μ).
In particular,
Ψ_ρ(A_λ) = Ψ_λ(A_ρ) = Ψ_λ(q) = Ψ_λ( ∏ (e(α/2) − e(−α/2)) )
  = ∏_{α∈Φ⁺} ( e^{(λ,α)_κ t/2} − e^{−(λ,α)_κ t/2} )
  = [ ∏ (λ, α)_κ ] t^{#Φ⁺} + terms of higher degree in t.
Hence
Ψ_ρ(ch_{Irr(λ)}) = Ψ_ρ(A_{λ+ρ}) / Ψ_ρ(A_ρ) = [ ∏ (λ + ρ, α)_κ ] / [ ∏ (ρ, α)_κ ] + terms of positive degree in t.
Now consider the composite homomorphism: first apply Ψ_ρ and then set t = 0. This has the effect of replacing every e(μ) by the constant 1. Hence, applied to the left hand side of the Weyl character formula, this gives the dimension of the representation Irr(λ). The previous equation shows that when this composite homomorphism is applied to the right hand side of the Weyl character formula, we get the right hand side of the Weyl dimension formula:
dim Irr(λ) = ∏_{α∈Φ⁺} (λ + ρ, α)_κ / ∏_{α∈Φ⁺} (ρ, α)_κ.   (7.20)
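As a quick worked example (added for illustration): take g = sl(3), whose positive roots are α_1, α_2, α_1 + α_2 and for which ρ = α_1 + α_2. For the adjoint representation the highest weight is λ = α_1 + α_2 = ρ, so (7.20) gives
dim Irr(λ) = (2ρ, α_1)(2ρ, α_2)(2ρ, α_1+α_2) / [(ρ, α_1)(ρ, α_2)(ρ, α_1+α_2)] = 2 · 2 · 2 = 8,
as it should.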
7.6 The Kostant multiplicity formula.
Let us multiply the fundamental equation (7.17) by p e(−ρ) and use the fact (7.16) that q p e(−ρ) = 1 to obtain
ch_{Irr(λ)} = Σ_{w∈W} (−1)^w p e(−ρ) e(w(λ + ρ)).
But
p e(−ρ) e(w(λ + ρ)) = p( · − w(λ + ρ) + ρ),
or, in more pedestrian terms, the left hand side of this equation has, as its coefficient of e(μ), the value
p(μ + ρ − w(λ + ρ)).
On the other hand, by definition,
ch_{Irr(λ)} = Σ dim(Irr(λ)_μ) e(μ).
We thus obtain Kostant's formula for the multiplicity of a weight μ in the irreducible module with highest weight λ:
KMF   dim(Irr(λ))_μ = Σ_{w∈W} (−1)^w p(μ + ρ − w(λ + ρ)).   (7.21)
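As a check in the simplest case (an added example): for g = sl(2), W = {1, s} with s(ν) = −ν, ρ = α/2, and P_K(ξ) = 1 precisely when ξ is a nonnegative integer multiple of α. For λ = mα/2 and μ = (m − 2k)α/2 with 0 ≤ k ≤ m, the two terms of (7.21) are
p(μ + ρ − (λ + ρ)) = P_K(λ − μ) = P_K(kα) = 1 and p(μ + ρ + (λ + ρ)) = P_K(−(m − k + 1)α) = 0,
so dim(Irr(λ))_μ = 1 − 0 = 1, as expected.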
It will be convenient to introduce some notation which simplifies the appearance of the Kostant multiplicity formula: For w ∈ W and μ ∈ L (or in E, for that matter) define
w · μ := w(μ + ρ) − ρ.   (7.22)
This defines another action of W on E, in which the "origin of the orthogonal transformations w has been shifted from 0 to −ρ". Then we can rewrite the Kostant multiplicity formula as
dim(Irr(λ))_μ = Σ_{w∈W} (−1)^w P_K(w · λ − μ)   (7.23)
or as
ch(Irr(λ)) = Σ_{w∈W} Σ_μ (−1)^w P_K(w · λ − μ) e(μ),   (7.24)
where P_K is the original Kostant partition function.
For the purposes of the next section it will be useful to record the following lemma:

Lemma 14 If ν is a dominant weight and e ≠ w ∈ W, then w · ν is not dominant.

Proof. If ν is dominant, so lies in the closure of the positive Weyl chamber, then ν + ρ lies in the interior of the positive Weyl chamber. Hence if w ≠ e, then w(ν + ρ)(h_i) < 0 for some i, and so w · ν = w(ν + ρ) − ρ is not dominant. QED
7.7 Steinberg’s formula.
Suppose that λ' and λ'' are dominant integral weights. Decompose Irr(λ') ⊗ Irr(λ'') into irreducibles, and let n(λ) = n(λ, λ' ⊗ λ'') denote the multiplicity of Irr(λ) in this decomposition into irreducibles (with n(λ) = 0 if Irr(λ) does not appear as a summand in the decomposition). In particular, n(ν) = 0 if ν is not a dominant weight, since Irr(ν) is infinite dimensional in this case, so can not appear as a summand in the decomposition. In terms of characters, we have
ch(Irr(λ')) ch(Irr(λ'')) = Σ_λ n(λ) ch(Irr(λ)).
Steinberg's formula is a formula for n(λ). To derive it, use the Weyl character formula
ch(Irr(λ'')) = A_{λ''+ρ} / A_ρ,   ch(Irr(λ)) = A_{λ+ρ} / A_ρ
in the above formula to obtain
ch(Irr(λ')) A_{λ''+ρ} = Σ_λ n(λ) A_{λ+ρ}.
Use the Kostant multiplicity formula (7.24) for λ':
ch(Irr(λ')) = Σ_{w∈W} Σ_μ (−1)^w P_K(w · λ' − μ) e(μ),
and the definition
A_{λ''+ρ} = Σ_{u∈W} (−1)^u e(u(λ'' + ρ)),
and the similar expression for A_{λ+ρ}, to get
Σ_μ Σ_{u,w∈W} (−1)^{uw} P_K(w · λ' − μ) e(u(λ'' + ρ) + μ) = Σ_λ Σ_w n(λ) (−1)^w e(w(λ + ρ)).
Let us make a change of variables on the right hand side, writing
ν = w · λ,
so the right hand side becomes
Σ_ν Σ_w (−1)^w n(w⁻¹ · ν) e(ν + ρ).
If ν is a dominant weight, then by Lemma 14, w⁻¹ · ν is not dominant if w⁻¹ ≠ e. So n(w⁻¹ · ν) = 0 if w ≠ e, and so the coefficient of e(ν + ρ) is precisely n(ν) when ν is dominant.
On the left hand side let
μ = ν − u · λ''
to obtain
Σ_{ν,u,w} (−1)^{uw} P_K(w · λ' + u · λ'' − ν) e(ν + ρ).
Comparing coefficients for ν dominant gives
n(ν) = Σ_{u,w} (−1)^{uw} P_K(w · λ' + u · λ'' − ν).   (7.25)
7.8 The Freudenthal  de Vries formula.
We return to the study of a semisimple Lie algebra g and get a refinement of the Weyl dimension formula by looking at the next order term in the expansion we used to derive the Weyl dimension formula from the Weyl character formula.
By definition, the Killing form restricted to the Cartan subalgebra h is given by
κ(h, h') = Σ_α α(h) α(h'),
where the sum is over all roots. If μ, λ ∈ h*, with t_μ, t_λ the elements of h corresponding to them under the Killing form, we have
(λ, μ)_κ = κ(t_λ, t_μ) = Σ_α α(t_λ) α(t_μ),
so
(λ, μ)_κ = Σ_α (λ, α)_κ (μ, α)_κ.   (7.26)
For each λ in the weight lattice L we have let e(λ) denote the "formal exponential", so Z_fin(L) is the space spanned by the e(λ), and we have defined the homomorphism
Ψ_ρ : Z_fin(L) → C[[t]],  e(λ) ↦ e^{(λ,ρ)_κ t}.
Let N and D be the images under Ψ_ρ of the Weyl numerator and denominator. So
N = Ψ_ρ(A_{ρ+λ}) = Ψ_{ρ+λ}(A_ρ)
by (7.19), and
A_ρ = q = ∏_{α∈Φ⁺} (e(α/2) − e(−α/2)),   (7.27)
and therefore
N(t) = ∏_{α>0} ( e^{(λ+ρ,α)_κ t/2} − e^{−(λ+ρ,α)_κ t/2} ) = ∏ (λ + ρ, α)_κ t [ 1 + (1/24)(λ + ρ, α)²_κ t² + ⋯ ],
with a similar formula for D. Then N/D → d(λ), the dimension of the representation, as t → 0; this is the usual proof (reproduced above) of the Weyl dimension formula. Carrying the expansion one step further gives
N/D = d(λ) [ 1 + (1/24) Σ_{α>0} [ (λ + ρ, α)²_κ − (ρ, α)²_κ ] t² + ⋯ ].
For any weight μ we have (μ, μ)_κ = Σ (μ, α)²_κ by (7.26), where the sum is over all roots, so
N/D = d(λ) [ 1 + (1/48) [ (λ + ρ, λ + ρ)_κ − (ρ, ρ)_κ ] t² + ⋯ ],
and we recognize the coefficient of (1/48) t² in the above expression as χ_λ(Cas_κ), the scalar giving the value of the Casimir associated to the Killing form in the representation with highest weight λ.
On the other hand, the image under Ψ_ρ of the character of the irreducible representation with highest weight λ is
Σ_μ e^{(μ,ρ)_κ t} = Σ_μ ( 1 + (μ, ρ)_κ t + (1/2)(μ, ρ)²_κ t² + ⋯ ),
where the sum is over all weights of the irreducible representation, counted with multiplicity. Comparing coefficients of t² gives
Σ_μ (μ, ρ)²_κ = (1/24) d(λ) χ_λ(Cas_κ).
Applied to the adjoint representation, the left hand side becomes (ρ, ρ)_κ by (7.26), while d(λ) is the dimension of the Lie algebra. On the other hand, χ_λ(Cas_κ) = 1, since tr ad(Cas_κ) = dim(g) by the definition of Cas_κ. So we get
(ρ, ρ)_κ = (1/24) dim g   (7.28)
for any semisimple Lie algebra g.
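For instance (an added check): for g = sl(2) with Killing form κ(x, y) = 4 tr(xy) one has ρ = α/2 and (ρ, ρ)_κ = 1/8, which indeed equals dim g / 24 = 3/24.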
An algebra which is the direct sum of a commutative Lie algebra and a semisimple Lie algebra is called reductive. The previous result of Freudenthal and de Vries has been generalized by Kostant from semisimple Lie algebras to all reductive Lie algebras: Suppose that g is merely reductive, and that we have chosen a symmetric bilinear form on g which is invariant under the adjoint representation, and denote the associated Casimir element by Cas_g. We claim that (7.28) generalizes to
(1/24) tr ad(Cas_g) = (ρ, ρ).   (7.29)
(Notice that if g is semisimple and we take our symmetric bilinear form to be the Killing form ( , )_κ, then (7.29) becomes (7.28).) To prove (7.29), observe that both sides decompose into sums as we decompose g into a sum of its center and its simple ideals, since this must be an orthogonal decomposition for our invariant scalar product. The contribution of the center is zero on both sides, so we are reduced to proving (7.29) for a simple algebra. Then our symmetric bilinear form ( , ) must be a scalar multiple of the Killing form:
( , ) = c² ( , )_κ
for some nonzero scalar c. If z_1, …, z_N is an orthonormal basis of g for ( , )_κ, then z_1/c, …, z_N/c is an orthonormal basis for ( , ). Thus
Cas_g = (1/c²) Cas_κ.
So
tr ad(Cas_g) = (1/c²) tr ad(Cas_κ) = (1/c²) dim g.
But on h* we have the dual relation
(ρ, ρ) = (1/c²) (ρ, ρ)_κ.
Combining the last two equations with (7.28) yields (7.29).
Notice that the same proof shows that we can generalize (7.8) to
χ_λ(Cas) = (λ + ρ, λ + ρ) − (ρ, ρ),   (7.30)
valid for any reductive Lie algebra equipped with a symmetric bilinear form invariant under the adjoint representation.
7.9 Fundamental representations.
We let ω_i denote the weight which satisfies
ω_i(h_j) = δ_ij,
so that the ω_i form an integral basis of L and are dominant. We call these the basic weights. If (V, ρ) and (W, σ) are two finite dimensional irreducible representations with highest weights λ and μ, then (V ⊗ W, ρ ⊗ σ) contains the irreducible representation with highest weight λ + μ, with highest weight vector v_λ ⊗ w_μ, the tensor product of the highest weight vectors in V and W. Taking this "highest" component in the tensor product is known as the Cartan product of the two irreducible representations.
Let (V_i, ρ_i) be the irreducible representation corresponding to the basic weight ω_i. Then every finite dimensional irreducible representation of g can be obtained by Cartan products from these, and for that reason they are called the fundamental representations.
For the case of A_n = sl(n+1) we have already verified that the fundamental representations are Λ^k(V), where V = C^{n+1} and where the basic weights are
ω_i = L_1 + ⋯ + L_i.
We now sketch the results for the other classical simple algebras, leaving the details as an exercise in the use of the Weyl dimension formula.
For C_n = sp(2n) it is immediate to check that these same expressions give the basic weights. However, while V = C^{2n} = Λ¹(V) is irreducible, the higher order exterior powers are not: Indeed, the symplectic form Ω ∈ Λ²(V*) is preserved, and hence so is the map
Λ^j(V) → Λ^{j−2}(V)
given by contraction by Ω. It is easy to check that this map is surjective (for j = 2, …, n). The kernel is thus an invariant subspace of dimension
(2n choose j) − (2n choose j−2),
and a (not completely trivial) application of the Weyl dimension formula will show that these are indeed the dimensions of the irreducible representations with highest weight ω_j. Thus these kernels are the fundamental representations of C_n. Here are some of the details:
We have
ρ = ω_1 + ⋯ + ω_n = Σ (n − i + 1) L_i.
The most general dominant weight is of the form
Σ k_i ω_i = a_1 L_1 + ⋯ + a_n L_n,
where
a_1 = k_1 + ⋯ + k_n,  a_2 = k_2 + ⋯ + k_n,  …,  a_n = k_n,
and the k_i are nonnegative integers. So we can equally well use any decreasing sequence a_1 ≥ a_2 ≥ ⋯ ≥ a_n ≥ 0 of integers to parameterize the irreducible representations. We have
(ρ, L_i − L_j) = j − i,   (ρ, L_i + L_j) = 2n + 2 − i − j.
Multiplying these all together gives the denominator in the Weyl dimension formula.
Similarly the numerator becomes
∏_{i<j} (l_i − l_j) ∏_{i≤j} (l_i + l_j),
where
l_i := a_i + n − i + 1.
If we set m_i := n − i + 1, then we can write the Weyl dimension formula as
dim V(a_1, …, a_n) = [ ∏_{i<j} (l_i² − l_j²)/(m_i² − m_j²) ] · ∏_i (l_i/m_i),
where for the case i = j we have taken out a common factor of 2^n from the numerator and the denominator.
An easy induction shows that
∏_{i<j} (m_i² − m_j²) ∏_i m_i = (2n−1)!(2n−3)! ⋯ 1!,
so if we set
r_i := l_i − 1 = a_i + n − i,
then
dim V(a_1, …, a_n) = [ ∏_{i<j} (r_i − r_j)(r_i + r_j + 2) ∏_i (r_i + 1) ] / [ (2n−1)!(2n−3)! ⋯ 1! ].
For example, suppose we want to compute the dimension of the fundamental representation corresponding to ω_2 = L_1 + L_2, so a_1 = a_2 = 1, a_i = 0 for i > 2. In applying the preceding formula, all of the factors with 2 < i < j are the same as for the trivial representation, as is r_1 − r_2. The ratios of the remaining r_i − r_j factors (i = 1, 2) to those of the trivial representation are
∏_{j=3}^{n} j/(j−1) · ∏_{j=3}^{n} (j−1)/(j−2) = ∏_{j=3}^{n} j/(j−2).
Similarly the r_i + r_j terms give a factor
(2n+1)/(2n−1) · ∏_{j=3}^{n} (2n + 2 − j)/(2n − j),
and the terms r_1 + 1, r_2 + 1 contribute a factor
(n+1)/(n−1).
In multiplying all of these terms together there is a huge cancellation, and what is left for the dimension of this fundamental representation is
(2n+1)(2n−2)/2.
Notice that this equals
(2n choose 2) − 1 = dim Λ²V − 1.
More generally, this dimension argument will show that the fundamental representations are the kernels of the contraction maps i(Ω) : Λ^k(V) → Λ^{k−2}(V), where Ω is the symplectic form.
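As a numerical cross-check (a short sketch added here, not in the original; it simply evaluates the closed form displayed above), one can verify that dim V(1,1,0,…,0) agrees with dim Λ²(C^{2n}) − 1 for small n:

    from math import comb, factorial

    def dim_sp_rep(a):
        """Weyl dimension formula for sp(2n) with decreasing integer labels a = (a_1, ..., a_n)."""
        n = len(a)
        r = [a[i] + n - (i + 1) for i in range(n)]   # r_i = a_i + n - i (with i counted from 1)
        num = 1
        for i in range(n):
            num *= r[i] + 1
            for j in range(i + 1, n):
                num *= (r[i] - r[j]) * (r[i] + r[j] + 2)
        den = 1
        for k in range(1, 2 * n, 2):
            den *= factorial(k)                      # (2n-1)!(2n-3)! ... 1!
        return num // den

    for n in range(2, 7):
        a = (1, 1) + (0,) * (n - 2)
        assert dim_sp_rep(a) == comb(2 * n, 2) - 1
    print("checked")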
For B_n it is easy to check that ω_i := L_1 + ⋯ + L_i (i ≤ n−1) and ω_n = (1/2)(L_1 + ⋯ + L_n) are the basic weights, and the Weyl dimension formula gives the value (2n+1 choose j) for j ≤ n−1 as the dimension of the irreducible with highest weight ω_j, so that these representations are Λ^j(V), j = 1, …, n−1, while the dimension of the irreducible corresponding to ω_n is 2^n. This is the spin representation, which we will study later.
Finally, for D_n = o(2n) the basic weights are
ω_j = L_1 + ⋯ + L_j,  j ≤ n−2,
and
ω_{n−1} := (1/2)(L_1 + ⋯ + L_{n−1} + L_n)  and  ω_n := (1/2)(L_1 + ⋯ + L_{n−1} − L_n).
The Weyl dimension formula shows that the first n−2 fundamental representations are in fact the representations on Λ^j(V), j = 1, …, n−2, while the last two have dimension 2^{n−1}. These are the half spin representations, which we will also study later.
7.10 Equal rank subgroups.
In this section we present a generalization of the Weyl character formula due to Gross, Kostant, Ramond and Sternberg. It depends on an interpretation of the Weyl denominator in terms of the spin representation of the orthogonal algebra o(g/h), and so on some results which we will prove in Chapter IX. But its logical place is in this chapter, so we will quote the results that we will need. You might prefer to read this section after Chapter IX.
Let p be an even dimensional space with a symmetric bilinear form such that
p = p⁺ ⊕ p⁻
is a direct sum decomposition of p into two isotropic subspaces. In other words, p⁺ and p⁻ are each half the dimension of p, and the scalar product of any two vectors in p⁺ vanishes, as does the scalar product of any two elements of p⁻. For example, we might take p = n⁺ ⊕ n⁻ and the symmetric bilinear form to be the Killing form. Then p^± = n^± is such a desired decomposition.
The symmetric bilinear form then puts p^± into duality, i.e. we may identify p⁻ with (p⁺)* and vice versa. Suppose that we have a commutative Lie algebra h acting on p as infinitesimal isometries, so as to preserve each p^±, that the e_i⁺ are weight vectors corresponding to weights β_i, and that the e_i⁻ form the dual basis, corresponding to the negatives of these weights, −β_i. In particular, we have a Lie algebra homomorphism ν from h to o(p), and the two spin representations of o(p) give two representations of h. By abuse of language, let us denote these two representations by Spin⁺ν and Spin⁻ν. We can also consider the characters of these representations of h. According to equation (9.22) (to be proved in Chapter IX) we have
ch_{Spin⁺ν} − ch_{Spin⁻ν} = ∏_j ( e(β_j/2) − e(−β_j/2) ).
In the case that h is the Cartan subalgebra of a semisimple Lie algebra and p^± = n^±, we recognize this expression as the Weyl denominator.
Now let g be a semisimple Lie algebra and r ⊂ g a reductive subalgebra of the same rank. This means that we can choose a Cartan subalgebra of g which is also a Cartan subalgebra of r. The roots of r form a subset of the roots of g. The Weyl group W_g acts simply transitively on the Weyl chambers of g, each of which is contained in a Weyl chamber for r. We choose a positive root system for g, which then determines a positive root system for r, and the positive Weyl chamber for g is contained in the positive Weyl chamber for r.
Let
C ⊂ W_g
denote the set of those elements of the Weyl group of g which map the positive Weyl chamber of g into the positive Weyl chamber for r. By the simple transitivity of the Weyl group actions on chambers, we know that the elements of C form coset representatives for the subgroup W_r ⊂ W_g. In particular, the number of elements of C is the same as the index of W_r in W_g.
Let
ρ_g and ρ_r
denote half the sum of the positive roots of g and of r respectively. For any dominant weight λ of g, the weight λ + ρ_g lies in the interior of the positive Weyl chamber for g. Hence for each c ∈ C, the element c(λ + ρ_g) lies in the interior of the positive Weyl chamber for r, and hence
c • λ := c(λ + ρ_g) − ρ_r
is a dominant weight for r, and each of these is distinct.
Let V_λ denote the irreducible representation of g with highest weight λ. We can consider it as a representation of the subalgebra r. Also the Killing form (or more generally any ad-invariant symmetric bilinear form) on g induces an invariant form on r. Let p denote the orthogonal complement of r in g. We thus get a homomorphism of r into the orthogonal algebra o(g/r), which is an even dimensional orthogonal algebra, and hence has two spin representations. To specify which of these two spin representations we shall denote by S⁺ and which by S⁻, we note that there is a one dimensional weight space with weight ρ_g − ρ_r, and we let S⁺ denote the spin representation which contains that one dimensional space. The spaces S^± are o(g/r) modules, and via the homomorphism r → o(g/r) we can consider them as r modules.
Finally, for any dominant integral weight μ of r we let U_μ denote the irreducible module of r with highest weight μ.
With all this notation we can now state

Theorem 16 [GKRS] In the representation ring R(r) we have
V_λ ⊗ S⁺ − V_λ ⊗ S⁻ = Σ_{c∈C} (−1)^c U_{c•λ}.   (7.31)

Proof. To say that the above equation holds in the representation ring of r means that when we take the signed sums of the characters of the representations occurring on both sides we get equality. In the special case that r = h, we have observed that (7.31) is just the Weyl character formula:
χ(Irr(λ)) ( χ(S⁺_{g/h}) − χ(S⁻_{g/h}) ) = Σ_{w∈W_g} (−1)^w e(w(λ + ρ_g)).
The general case follows from this special case by dividing both sides of this equation by χ(S⁺_{r/h}) − χ(S⁻_{r/h}). The left hand side becomes the character of the left hand side of (7.31), because the weights that go into this quotient via (9.22) are exactly those roots of g which are not roots of r. The right hand side becomes the character of the right hand side of (7.31) by reorganizing the sum and using the Weyl character formula for r. QED
Chapter 8
Serre’s theorem.
We have classiﬁed all the possibilities for an irreducible Cartan matrix via the
classiﬁcation of the possible Dynkin diagrams. The four major series in our clas
siﬁcation correspond to the classical simple algebras we introduced in Chapter
III. The remaining ﬁve cases also correspond to simple algebras  the “excep
tional algebras”. Each deserves a discussion on its own. However a theorem of
Serre guarantees that starting with any Cartan matrix, there is a corresponding
semisimple Lie algebra. Any root system gives rise to a Cartan matrix. So even
before studying each of the simple algebras in detail, we know in advance that
they exist, provided that we know that the corresponding root system exists.
We present Serre’s theorem in this chapter. At the end of the chapter we show
that each of the exceptional root systems exists. This then proves the existence
of the exceptional simple Lie algebras.
8.1 The Serre relations.
Recall that if α and β are roots,
⟨β, α⟩ := 2(β, α)/(α, α),
and the string of roots of the form β + jα is unbroken and extends from β − rα to β + qα, where r − q = ⟨β, α⟩.
In particular, if α, β ∈ ∆, so that β − α is not a root, the string is
β, β + α, …, β + qα,
where
q = −⟨β, α⟩.
Thus
(ad e_α)^{−⟨β,α⟩+1} e_β = 0
for e_α ∈ g_α, e_β ∈ g_β, but
(ad e_α)^k e_β ≠ 0 for 0 ≤ k ≤ −⟨β, α⟩
if e_α ≠ 0, e_β ≠ 0. So if ∆ = {α_1, …, α_ℓ} we may choose
e_i ∈ g_{α_i},  f_i ∈ g_{−α_i}
so that
e_1, …, e_ℓ, f_1, …, f_ℓ
generate the algebra and
[h_i, h_j] = 0,  1 ≤ i, j ≤ ℓ   (8.1)
[e_i, f_i] = h_i   (8.2)
[e_i, f_j] = 0,  i ≠ j   (8.3)
[h_i, e_j] = ⟨α_j, α_i⟩ e_j   (8.4)
[h_i, f_j] = −⟨α_j, α_i⟩ f_j   (8.5)
(ad e_i)^{−⟨α_j,α_i⟩+1} e_j = 0,  i ≠ j   (8.6)
(ad f_i)^{−⟨α_j,α_i⟩+1} f_j = 0,  i ≠ j.   (8.7)
Serre’s theorem says that this is a presentation of a (semi)simple Lie algebra.
In particular, the Cartan matrix gives a presentation of a simple Lie algebra,
showing that for every Dynkin diagram there exists a unique simple Lie algebra.
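To make the presentation concrete, here is what these relations say for the smallest nontrivial Cartan matrix, that of type A_2 (this worked example is an illustration added here, not part of the original text). With ⟨α_1, α_2⟩ = ⟨α_2, α_1⟩ = −1 the generators e_1, e_2, f_1, f_2, h_1, h_2 are subject to
$$\begin{aligned}
&[h_1, h_2] = 0, \qquad [e_i, f_i] = h_i, \qquad [e_1, f_2] = [e_2, f_1] = 0,\\
&[h_i, e_j] = \langle \alpha_j, \alpha_i\rangle\, e_j, \qquad [h_i, f_j] = -\langle \alpha_j, \alpha_i\rangle\, f_j,\\
&(\operatorname{ad} e_1)^2 e_2 = (\operatorname{ad} e_2)^2 e_1 = 0, \qquad (\operatorname{ad} f_1)^2 f_2 = (\operatorname{ad} f_2)^2 f_1 = 0,
\end{aligned}$$
and the resulting algebra is sl(3): the eight elements e_1, e_2, [e_1, e_2], f_1, f_2, [f_1, f_2], h_1, h_2 form a basis.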
8.2 The first five relations.
Let f be the free Lie algebra on 3ℓ generators, X_1, ..., X_ℓ, Y_1, ..., Y_ℓ, Z_1, ..., Z_ℓ. If g is a semisimple Lie algebra with generators and relations (8.1)–(8.7), we have a unique homomorphism f → g where X_i ↦ e_i, Y_i ↦ f_i, Z_i ↦ h_i. We want to consider an intermediate algebra, m, where we make use of all but the last two sets of relations. So let I be the ideal in f generated by the elements
$$ [Z_i, Z_j], \quad [X_i, Y_j] - \delta_{ij} Z_i, \quad [Z_i, X_j] - \langle\alpha_j,\alpha_i\rangle X_j, \quad [Z_i, Y_j] + \langle\alpha_j,\alpha_i\rangle Y_j. $$
We let m := f/I and denote the image of X_i in m by x_i etc.
We will first exhibit m as a Lie subalgebra of the algebra of endomorphisms of a vector space. This will allow us to conclude that the x_i, y_j and z_k are linearly independent and from this deduce the structure of m. We will then find that there is a homomorphism of m onto our desired semisimple Lie algebra sending x ↦ e, y ↦ f, z ↦ h.
So consider a vector space with basis v_1, ..., v_ℓ and let A be the tensor algebra over this vector space. We drop the tensor product signs in the algebra, so write
$$ v_{i_1} v_{i_2} \cdots v_{i_t} := v_{i_1} \otimes \cdots \otimes v_{i_t} $$
for any finite sequence of integers with values from 1 to ℓ. We make A into an f module as follows: We let the Z_i act as derivations of A, determined by their actions on generators by
$$ Z_i 1 = 0, \qquad Z_j v_i = -\langle\alpha_i,\alpha_j\rangle\, v_i. $$
So if we define
$$ c_{ij} := \langle\alpha_i,\alpha_j\rangle $$
we have
$$ Z_j(v_{i_1}\cdots v_{i_t}) = -(c_{i_1 j} + \cdots + c_{i_t j})(v_{i_1}\cdots v_{i_t}). $$
The action of the Z_i is diagonal in this basis, so their actions commute. We let the Y_i act by left multiplication by v_i. So
$$ Y_j\, v_{i_1}\cdots v_{i_t} := v_j v_{i_1}\cdots v_{i_t} $$
and hence
$$ [Z_i, Y_j] = -c_{ji}\, Y_j = -\langle\alpha_j,\alpha_i\rangle\, Y_j $$
as desired. We now want to define the action of the X_i so that the relations analogous to (8.2) and (8.3) hold. Since Z_i 1 = 0 these relations will hold when applied to the element 1 if we set
$$ X_j 1 = 0 \quad \forall j $$
and
$$ X_j v_i = 0 \quad \forall i, j. $$
Suppose we define
$$ X_j(v_p v_q) = -\delta_{jp}\, c_{qj}\, v_q. $$
Then
$$ Z_i X_j(v_p v_q) = \delta_{jp}\, c_{qj}\, c_{qi}\, v_q = -c_{qi}\, X_j(v_p v_q) $$
while
$$ X_j Z_i(v_p v_q) = \delta_{jp}\, c_{qj}(c_{pi} + c_{qi})\, v_q = -(c_{pi} + c_{qi})\, X_j(v_p v_q). $$
Thus
$$ [Z_i, X_j](v_p v_q) = c_{ji}\, X_j(v_p v_q) $$
as desired.
In general, define
$$ X_j(v_{p_1}\cdots v_{p_t}) := v_{p_1}\bigl(X_j(v_{p_2}\cdots v_{p_t})\bigr) - \delta_{p_1 j}(c_{p_2 j} + \cdots + c_{p_t j})(v_{p_2}\cdots v_{p_t}) \qquad (8.8) $$
for t ≥ 2. We claim that
$$ Z_i X_j(v_{p_1}\cdots v_{p_t}) = -(c_{p_1 i} + \cdots + c_{p_t i} - c_{ji})\, X_j(v_{p_1}\cdots v_{p_t}). $$
Indeed, we have verified this for the case t = 2. By induction, we may assume that X_j(v_{p_2}···v_{p_t}) is an eigenvector of Z_i with eigenvalue −(c_{p_2 i} + ··· + c_{p_t i} − c_{ji}). Multiplying this on the left by v_{p_1} produces the first term on the right of (8.8). On the other hand, this multiplication produces an eigenvector of Z_i with eigenvalue −(c_{p_1 i} + ··· + c_{p_t i} − c_{ji}). As for the second term on the right of (8.8), if j ≠ p_1 it does not appear. If j = p_1 then c_{p_1 i} + ··· + c_{p_t i} − c_{ji} = c_{p_2 i} + ··· + c_{p_t i}. So in either case, the right hand side of (8.8) is an eigenvector of Z_i with eigenvalue −(c_{p_1 i} + ··· + c_{p_t i} − c_{ji}). But then
$$ [Z_i, X_j] = \langle\alpha_j,\alpha_i\rangle\, X_j $$
as desired. We have defined an action of f on A whose kernel contains I, hence it descends to an action of m on A.
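The recursion (8.8) is easy to experiment with on a computer. The following Python sketch (an illustration added here, not part of the original text; the Cartan matrix C, the encoding of words as tuples and all function names are choices made for this example) realizes the operators Z_j, Y_j, X_j on the tensor algebra A for the Cartan matrix of type A_2 and checks the relations defining m on a sample word:

# Elements of the tensor algebra A are stored as dicts mapping words
# (tuples of indices) to coefficients.
C = [[2, -1], [-1, 2]]                     # c_{ij} = <alpha_i, alpha_j>, type A_2

def clean(a):
    return {w: c for w, c in a.items() if c != 0}

def add(a, b):
    out = dict(a)
    for w, c in b.items():
        out[w] = out.get(w, 0) + c
    return clean(out)

def scal(t, a):
    return clean({w: t * c for w, c in a.items()})

def Z(j, a):                               # Z_j w = -(sum_s c_{s j}) w
    return clean({w: -sum(C[s][j] for s in w) * c for w, c in a.items()})

def Y(j, a):                               # Y_j w = v_j w
    return {(j,) + w: c for w, c in a.items()}

def X(j, a):                               # recursion (8.8); X_j(1) = 0
    out = {}
    for w, c in a.items():
        if w:
            p1, rest = w[0], w[1:]
            t1 = Y(p1, X(j, {rest: 1}))                       # v_{p1} X_j(rest)
            t2 = {rest: -(p1 == j) * sum(C[s][j] for s in rest)}
            out = add(out, scal(c, add(t1, t2)))
    return out

def brk(f, g, a):                          # [f, g] applied to a
    return add(f(g(a)), scal(-1, g(f(a))))

w = {(0, 1, 1, 0): 1}                      # the sample word v_1 v_2 v_2 v_1
for i in range(2):
    for j in range(2):
        assert brk(lambda a: Z(i, a), lambda a: X(j, a), w) == scal(C[j][i], X(j, w))
        assert brk(lambda a: X(i, a), lambda a: Y(j, a), w) == (Z(i, w) if i == j else {})
print("relations of the form (8.2)-(8.4) verified on the sample word")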
Let φ : m → End A denote this action. Suppose that z := a_1 z_1 + ··· + a_ℓ z_ℓ for some complex numbers a_1, ..., a_ℓ and that φ(z) = 0. The operator φ(z) has eigenvalues
$$ -\sum_j a_j c_{ij} $$
when acting on the subspace V of A spanned by the v_i. All of these must be zero. But the Cartan matrix is non-singular. Hence all the a_i = 0. This shows that the space spanned by the z_i is in fact ℓ-dimensional and is an ℓ-dimensional abelian subalgebra of m. Call this subalgebra z.
Now consider the 3ℓ-dimensional subspace of f spanned by the X_i, Y_i and Z_i, i = 1, ..., ℓ. We wish to show that it projects onto a 3ℓ-dimensional subspace of m under the natural passage to the quotient f → m = f/I. The image of this subspace is spanned by the x_i, y_i and z_i. Since φ(x_i) ≠ 0 and φ(y_i) ≠ 0 we know that x_i ≠ 0 and y_i ≠ 0. Suppose we had a linear relation of the form
$$ \sum a_i x_i + \sum b_i y_i + z = 0, \qquad z \in \mathbf{z}. $$
Choose some z' ∈ z such that α_i(z') ≠ 0 and α_i(z') ≠ α_j(z') for any i ≠ j. This is possible since the α_i are all linearly independent. Bracketing the above equation by z' gives
$$ \sum \alpha_i(z')\, a_i x_i - \sum \alpha_i(z')\, b_i y_i = 0 $$
by the relations (8.4) and (8.5). Repeated bracketing by z' and using the van der Monde (or induction) argument shows that a_i = 0, b_i = 0 and hence that z = 0.
We have proved that the elements x_i, y_j, z_k in m are linearly independent.
The element
$$ [x_{i_1}, [x_{i_2}, [\cdots [x_{i_{t-1}}, x_{i_t}]\cdots]]] $$
is an eigenvector of ad z_i with eigenvalue
$$ c_{i_1 i} + \cdots + c_{i_t i}. $$
For any pair of elements μ and λ of z* (or of h*) recall that
$$ \mu \prec \lambda $$
denotes the fact that λ − μ = Σ k_i α_i where the k_i are all non-negative integers. For any λ ∈ z* let m_λ denote the set of all m ∈ m satisfying
$$ [z, m] = \lambda(z)\, m \quad \forall z \in \mathbf{z}. $$
Then we have shown that the subalgebra x of m generated by x_1, ..., x_ℓ is contained in
$$ m^+ := \bigoplus_{0 \prec \lambda} m_\lambda. $$
Similarly, the subalgebra y of m generated by the y_i lies in
$$ m^- := \bigoplus_{\lambda \prec 0} m_\lambda. $$
In particular, the vector space sum
$$ y + z + x $$
is direct since z ⊂ m_0. We claim that this is in fact all of m. First of all, observe that it is a subalgebra. Indeed, [y_i, x_j] = −δ_{ij} z_i lies in this subspace, and hence
$$ [y_i, [x_{j_1}, [\cdots [x_{j_{t-1}}, x_{j_t}]\cdots]]] \in x \quad \text{for } t \ge 2. $$
Thus the subspace y + z + x is closed under ad y_i and hence under any product of these operators. Similarly for ad x_i. Since these generate the algebra m we see that y + z + x = m and hence
$$ x = m^+ \quad \text{and} \quad y = m^-. $$
We have shown that
$$ m = m^- \oplus z \oplus m^+ $$
where z is an abelian subalgebra of dimension ℓ, where the subalgebra m^+ is generated by x_1, ..., x_ℓ, where the subalgebra m^- is generated by y_1, ..., y_ℓ, and where the 3ℓ elements x_1, ..., x_ℓ, y_1, ..., y_ℓ, z_1, ..., z_ℓ are linearly independent.
There is a further property of m which we want to use in the next section in the proof of Serre's theorem. For all i ≠ j between 1 and ℓ define the elements x_{ij} and y_{ij} by
$$ x_{ij} := (\operatorname{ad} x_i)^{-c_{ji}+1}(x_j), \qquad y_{ij} := (\operatorname{ad} y_i)^{-c_{ji}+1}(y_j). $$
Conditions (8.6) and (8.7) amount to setting these elements, and hence the ideal that they generate, equal to zero. We claim that for all k and all i ≠ j between 1 and ℓ we have
$$ \operatorname{ad} x_k\,(y_{ij}) = 0 \qquad (8.9) $$
and
$$ \operatorname{ad} y_k\,(x_{ij}) = 0. \qquad (8.10) $$
By symmetry, it is enough to prove the first of these equations. If k ≠ i then [x_k, y_i] = 0 by (8.3) and hence
$$ \operatorname{ad} x_k\,(y_{ij}) = (\operatorname{ad} y_i)^{-c_{ji}+1}[x_k, y_j] = (\operatorname{ad} y_i)^{-c_{ji}+1}\, \delta_{kj}\, z_j $$
by (8.2) and (8.3). If k ≠ j this is zero. If k = j we can write this as
$$ (\operatorname{ad} y_i)^{-c_{ji}}(\operatorname{ad} y_i)\, z_j = (\operatorname{ad} y_i)^{-c_{ji}}\, c_{ij}\, y_i. $$
If c_{ij} = 0 there is nothing to prove. If c_{ij} ≠ 0 then c_{ji} ≠ 0 and in fact is strictly negative since the angles between all elements of a base are obtuse. But then
$$ (\operatorname{ad} y_i)^{-c_{ji}}\, y_i = 0. $$
It remains to consider the case where k = i. The algebra generated by x_i, y_i, z_i is isomorphic to sl(2) with [x_i, y_i] = z_i, [z_i, x_i] = 2x_i, [z_i, y_i] = −2y_i. We have a decomposition of m into weight spaces for all of z, in particular into weight spaces for this little sl(2). Now [x_i, y_j] = 0 (from (8.3)) so y_j is a maximal weight vector for this sl(2) with weight −c_{ji}, and (8.9) is just a standard property of a maximal weight module for sl(2) with non-negative integer maximal weight.
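For the reader's convenience (this step is only quoted, not spelled out, in the text), the standard sl(2) fact being used is the identity, valid for any vector v with e v = 0 and h v = m v:
$$ [e, f^{\,N}]\, v = N\,(m - N + 1)\, f^{\,N-1} v, \qquad \text{so in particular} \qquad e\, f^{\,m+1} v = 0 \ \text{ when } m \in \mathbb{Z}_{\ge 0}. $$
Applied with e = ad x_i, f = ad y_i, v = y_j and m = −c_{ji}, this is exactly the vanishing asserted in (8.9) for k = i.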
8.3 Proof of Serre's theorem.
Let k be the ideal of m generated by the x_{ij} and y_{ij} as defined above. We wish to show that
$$ g := m/k $$
is a semisimple Lie algebra with Cartan subalgebra h, the image of z, and root system Φ. For this purpose, let i now denote the ideal in m^+ generated by the x_{ij} and j the ideal in m^- generated by the y_{ij}, so that
$$ i + j \subset k. $$
We claim that j is an ideal of m. Indeed, each y_{ij} is a weight vector for z, and [z, m^-] ⊂ m^-, hence [z, j] ⊂ j. On the other hand, we know that [x_k, m^-] ⊂ m^- + z and [x_k, y_{ij}] = 0 by (8.9). So (ad x_k) j ⊂ j by Jacobi. Since the x_k generate m^+, Jacobi then implies that [m^+, j] ⊂ j as well, hence j is an ideal of m. Similarly, i is an ideal of m. Hence i + j is an ideal of m, and since it contains the generators of k, it must coincide with k, i.e.
$$ k = i + j. $$
In particular, z ∩ k = {0} and so z projects isomorphically onto an ℓ-dimensional abelian subalgebra of g = m/k. Furthermore, since j ∩ m^+ = {0} and i ∩ m^- = {0} we have
$$ g = n^- \oplus h \oplus n^+ \qquad (8.11) $$
as a vector space where
$$ n^- = m^-/j \quad \text{and} \quad n^+ = m^+/i, $$
and n^+ is a sum of weight spaces of h, summed over λ ≻ 0, while n^- is a sum of weight spaces of h with λ ≺ 0. We have to see which weight spaces survive the passage to the quotient. The sl(2) generated by x_i, y_i, z_i is not sent into zero by the projection of m onto g since z_i is not sent into zero. Since sl(2) is simple, this means that the projection map is an isomorphism when restricted to this sl(2). Let us denote the images of x_i, y_i, z_i by e_i, f_i, h_i. Thus g is generated by the 3ℓ elements
$$ e_1, \dots, e_\ell,\ f_1, \dots, f_\ell,\ h_1, \dots, h_\ell $$
and all the axioms (8.1)–(8.7) are satisfied.
We must show that g is finite dimensional, semisimple, and has Φ as its root system.
First observe that ad e_i acts nilpotently on each of the generators of the algebra g, and hence acts locally nilpotently on all of g. Similarly for ad f_i. Hence the automorphism
$$ \tau_i := (\exp \operatorname{ad} e_i)(\exp \operatorname{ad}(-f_i))(\exp \operatorname{ad} e_i) $$
is well defined on all of g. So if s_i denotes the reflection in the Weyl group W corresponding to α_i, we have
$$ \tau_i(g_\lambda) = g_{s_i\lambda}. $$
Notice that each of the m_λ is finite dimensional, since the dimension of m_λ for λ ≻ 0 is at most the number of ways to write λ as a sum of successive α_i, each such sum corresponding to an element [x_{i_1}, [x_{i_2}, [···, x_{i_t}]···]]. (In particular m_{kα_i} = {0} for k > 1.) Similarly for λ ≺ 0. So it follows that each of the g_λ is finite dimensional, that
$$ \dim g_{w\lambda} = \dim g_\lambda \quad \forall w \in W, $$
and that
$$ g_{k\lambda} = 0 \quad \text{for } k \neq -1, 0, 1. $$
Furthermore, g_{α_i} is one dimensional, and since every root is conjugate to a simple root, we conclude that
$$ \dim g_\alpha = 1 \quad \forall \alpha \in \Phi. $$
We now show that
$$ g_\lambda = \{0\} \quad \text{for } \lambda \neq 0,\ \lambda \notin \Phi. $$
Indeed, suppose that g_λ ≠ {0}. We know that λ is not a multiple of α for any α ∈ Φ, since we know this to be true for simple roots, and the dimensions of the g_λ are invariant under the Weyl group, each root being conjugate to a simple root. So λ^⊥ does not coincide with any hyperplane orthogonal to a root. So we can find a μ ∈ λ^⊥ such that (α, μ) ≠ 0 for all roots α. We may find a w ∈ W which maps μ into the positive Weyl chamber for ∆, so that (α_i, wμ) ≥ 0, and hence (α_i, wμ) > 0, for i = 1, ..., ℓ. Now
$$ \dim g_{w\lambda} = \dim g_\lambda $$
and for the latter to be non-zero, we must have
$$ w\lambda = \sum k_i \alpha_i $$
with the coefficients all non-negative or all non-positive integers. But
$$ 0 = (\lambda, \mu) = (w\lambda, w\mu) = \sum k_i (\alpha_i, w\mu) $$
with (α_i, wμ) > 0 ∀ i. Hence all the k_i = 0.
So
$$ \dim g = \ell + \operatorname{Card} \Phi. $$
We conclude the proof if we show that g is semisimple, i.e. contains no non-zero abelian ideals. So suppose that a is an abelian ideal. Since a is an ideal, it is stable under h and hence decomposes into weight spaces. If g_α ∩ a ≠ {0}, then g_α ⊂ a and hence [g_{−α}, g_α] ⊂ a, and hence the entire sl(2) generated by g_α and g_{−α} is contained in a, which is impossible since a is abelian and sl(2) is simple. So a ⊂ h. But then a must be annihilated by all the roots, which implies that a = {0} since the roots span h*. QED
8.4 The existence of the exceptional root systems.
The idea of the construction is as follows. For each Dynkin diagram we will choose a lattice L in a Euclidean space V, and then let Φ consist of all vectors in this lattice having all the same length, or having one of two prescribed lengths. We then check that
$$ \frac{2(\alpha_1, \alpha_2)}{(\alpha_1, \alpha_1)} \in \mathbb{Z} \quad \forall\ \alpha_1, \alpha_2 \in \Phi. $$
This implies that reflection through the hyperplane orthogonal to α_1 preserves L, and since reflections preserve length, that these reflections preserve Φ. This will show that Φ is a root system, and then calculation shows that it is of the desired type.
(G_2). Let V be the plane in R³ consisting of all vectors (x, y, z) with
$$ x + y + z = 0. $$
Let L be the intersection of the three dimensional standard lattice Z³ with V. Let E_1, E_2, E_3 denote the standard basis of R³. Let Φ consist of all vectors in L of squared length 2 or 6. So Φ consists of the six short vectors
$$ \pm(E_i - E_j), \quad i < j, $$
and the six long vectors
$$ \pm(2E_i - E_j - E_k), \quad \text{where } \{i, j, k\} = \{1, 2, 3\}. $$
We may choose the base to consist of
$$ \alpha_1 = E_1 - E_2, \qquad \alpha_2 = -2E_1 + E_2 + E_3. $$
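The count of twelve roots and the integrality condition can be checked mechanically. The following short Python sketch (an added illustration, not part of the original text) enumerates the lattice vectors of squared length 2 or 6 in the plane x + y + z = 0 and verifies both claims:

from itertools import product

roots = []
for v in product(range(-3, 4), repeat=3):          # integer vectors in Z^3
    if sum(v) == 0 and sum(x * x for x in v) in (2, 6):
        roots.append(v)

def ip(a, b):
    return sum(x * y for x, y in zip(a, b))

assert len(roots) == 12
assert all(2 * ip(a, b) % ip(a, a) == 0 for a in roots for b in roots)
print("G_2: 12 roots, all Cartan integers are integral")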
F_4. Let V = R⁴ and L = Z⁴ + Z(½(E_1 + E_2 + E_3 + E_4)). Let Φ consist of all vectors of L of squared length 1 or 2. So Φ consists of the 24 long roots
$$ \pm E_i \pm E_j, \quad i < j, $$
and the 24 short roots
$$ \pm E_i, \qquad \tfrac{1}{2}(\pm E_1 \pm E_2 \pm E_3 \pm E_4). $$
For ∆ we may take
$$ \alpha_1 = E_2 - E_3, \quad \alpha_2 = E_3 - E_4, \quad \alpha_3 = E_4, \quad \alpha_4 = \tfrac{1}{2}(E_1 - E_2 - E_3 - E_4). $$
E_8. Let V = R⁸. Let
$$ L_0 := \{\textstyle\sum c_i E_i :\ c_i \in \mathbb{Z},\ \sum c_i \text{ even}\}. $$
Let
$$ L := L_0 + \mathbb{Z}\bigl(\tfrac{1}{2}(E_1 + E_2 + E_3 + E_4 + E_5 + E_6 + E_7 + E_8)\bigr). $$
Let Φ consist of all vectors in L of squared length 2. So Φ consists of the 240 roots
$$ \pm E_i \pm E_j \ (i < j), \qquad \tfrac{1}{2}\sum_{i=1}^{8} \pm E_i \ \ (\text{even number of } + \text{ signs}). $$
For ∆ we may take
$$ \alpha_1 = \tfrac{1}{2}(E_1 - E_2 - E_3 - E_4 - E_5 - E_6 - E_7 + E_8), \qquad \alpha_2 = E_1 + E_2, \qquad \alpha_i = E_{i-1} - E_{i-2} \ (3 \le i \le 8). $$
E_7 is obtained from E_8 by letting V be the span of the first 7 of the α_i; E_6 is obtained from E_8 by letting V be the span of the first 6 of the α_i.
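The same kind of mechanical check works for the larger lattices; here is a Python sketch for E_8 (again an added illustration, not part of the original text; coordinates are doubled so that every vector has integer entries). It rebuilds Φ from the description above, confirms that there are 240 roots of squared length 2, and checks that all Cartan integers are integral. The analogous check for F_4 has exactly the same structure.

from itertools import combinations, product

roots = set()
for i, j in combinations(range(8), 2):              # +-E_i +- E_j, doubled
    for si, sj in product((2, -2), repeat=2):
        v = [0] * 8
        v[i], v[j] = si, sj
        roots.add(tuple(v))
for signs in product((1, -1), repeat=8):            # (1/2) sum +-E_i, doubled
    if signs.count(1) % 2 == 0:                     # even number of + signs
        roots.add(signs)

assert len(roots) == 240
assert all(sum(x * x for x in v) == 8 for v in roots)     # squared length 2
# 2(a,b)/(a,a) = (doubled dot product)/4 must always be an integer
assert all(sum(x * y for x, y in zip(a, b)) % 4 == 0 for a in roots for b in roots)
print("E_8: 240 roots of squared length 2, integral Cartan integers")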
Chapter 9
Cliﬀord algebras and spin
representations.
9.1 Deﬁnition and basic properties
9.1.1 Deﬁnition.
Let p be a vector space with a symmetric bilinear form ( , ). The Clifford algebra associated to this data is the algebra
$$ C(p) := T(p)/I $$
where T(p) denotes the tensor algebra
$$ T(p) = k \oplus p \oplus (p \otimes p) \oplus \cdots $$
and where I denotes the ideal in T(p) generated by all elements of the form
$$ w_1 w_2 + w_2 w_1 - 2(w_1, w_2)1, \qquad w_1, w_2 \in p, $$
and 1 is the unit element of the tensor algebra. The space p injects as a subspace of C(p) and generates C(p) as an algebra.
A linear map f of p to an associative algebra A with unit 1_A is called a Clifford map if
$$ f(w_1)f(w_2) + f(w_2)f(w_1) = 2(w_1, w_2)1_A \quad \forall\ w_1, w_2 \in p, $$
or, what amounts to the same thing (by polarization, since we are not over a field of characteristic 2), if
$$ f(w)^2 = (w, w)1_A \quad \forall\ w \in p. $$
Any Clifford map gives rise to a unique algebra homomorphism of C(p) to A whose restriction to p is f. The Clifford algebra is "universal" with respect to this property.
If the bilinear form is identically zero, then C(p) = ∧p, the exterior algebra. But we will be interested in the opposite extreme, the case where the bilinear form is nondegenerate.
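Since the defining relation only involves the values (v_i, v_j) on a basis, Clifford multiplication is easy to carry out explicitly when the bilinear form is diagonal. The sketch below (an added illustration, not part of the original text; the function name and data layout are ad hoc) multiplies basis monomials v_S = v_{i_1}···v_{i_k} by repeatedly applying v_i v_j = −v_j v_i for i ≠ j and v_i v_i = q_i, and then checks the defining relation on the generators:

def clifford_mul(S, T, q):
    """Multiply the basis monomials v_S and v_T; return (coefficient, monomial)."""
    word, coeff = list(S) + list(T), 1
    i = 0
    while i < len(word) - 1:
        if word[i] == word[i + 1]:          # contract: v_i v_i = q[i]
            coeff *= q[word[i]]
            del word[i:i + 2]
            i = max(i - 1, 0)
        elif word[i] > word[i + 1]:         # anticommute distinct generators
            word[i], word[i + 1] = word[i + 1], word[i]
            coeff *= -1
            i = max(i - 1, 0)
        else:
            i += 1
    return coeff, tuple(word)

q = [1, 1, 1]                               # an "orthonormal" form on a 3-space
# the defining relation v_i v_j + v_j v_i = 2 (v_i, v_j) 1 on the generators
for i in range(3):
    for j in range(3):
        c1, m1 = clifford_mul((i,), (j,), q)
        c2, m2 = clifford_mul((j,), (i,), q)
        assert m1 == m2
        assert c1 + c2 == (2 * q[i] if i == j else 0)
print("Clifford relations hold on the generators")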
9.1.2 Gradation.
The ideal I defining the Clifford algebra is not Z homogeneous (unless the bilinear form is identically zero) since its generators w_1 w_2 + w_2 w_1 − 2(w_1, w_2)1 are "mixed", being a sum of terms of degree two and degree zero in T(p). But these terms are both even. So the Z/2Z gradation is preserved upon passing to the quotient. In other words, C(p) is a Z/2Z graded algebra:
$$ C(p) = C_0(p) \oplus C_1(p) $$
where the elements of C_0(p) consist of sums of products of elements of p with an even number of factors, and C_1(p) consists of sums of terms each a product of elements of p with an odd number of factors. The usual rules for multiplication in a graded algebra obtain:
$$ C_0(p)\,C_0(p) \subset C_0(p), \quad C_0(p)\,C_1(p) \subset C_1(p), \quad C_1(p)\,C_1(p) \subset C_0(p). $$
9.1.3 ∧p as a C(p) module.
Let p be a vector space with a non-degenerate symmetric bilinear form. The exterior algebra ∧p inherits a bilinear form which we continue to denote by ( , ). Here the spaces ∧^k(p) and ∧^ℓ(p) are orthogonal if k ≠ ℓ, while
$$ (x_1 \wedge \cdots \wedge x_k,\ y_1 \wedge \cdots \wedge y_k) = \det\bigl((x_i, y_j)\bigr). $$
For v ∈ p let e(v) ∈ End(∧p) denote exterior multiplication by v and ι(v) the transpose of e(v) relative to this bilinear form on ∧p.
So ι(v) is interior multiplication by the element of p* corresponding to v under the map p → p* induced by ( , )_p. The map
$$ p \to \operatorname{End}(\wedge p), \qquad v \mapsto e(v) + \iota(v) $$
is a Clifford map, i.e. satisfies
$$ (e(v) + \iota(v))^2 = (v, v)_p\, \mathrm{id} $$
and so extends to a homomorphism
$$ C(p) \to \operatorname{End} \wedge p $$
making ∧p into a C(p) module. We let xy denote the product of x and y in C(p).
9.1.4 Chevalley's linear identification of C(p) with ∧p.
Consider the linear map
$$ C(p) \to \wedge p, \qquad x \mapsto x1 $$
where 1 ∈ ∧^0 p under the identification of ∧^0 p with the ground field. The element x1 on the extreme right means the image of 1 under the action of x ∈ C(p).
For elements v_1, ..., v_k ∈ p this map sends
$$\begin{aligned}
v_1 &\mapsto v_1\\
v_1 v_2 &\mapsto v_1 \wedge v_2 + (v_1, v_2)1\\
v_1 v_2 v_3 &\mapsto v_1 \wedge v_2 \wedge v_3 + (v_1, v_2)v_3 - (v_1, v_3)v_2 + (v_2, v_3)v_1\\
v_1 v_2 v_3 v_4 &\mapsto v_1 \wedge v_2 \wedge v_3 \wedge v_4 + (v_2, v_3)\,v_1 \wedge v_4 - (v_2, v_4)\,v_1 \wedge v_3 + (v_3, v_4)\,v_1 \wedge v_2\\
&\qquad + (v_1, v_2)\,v_3 \wedge v_4 - (v_1, v_3)\,v_2 \wedge v_4 + (v_1, v_4)\,v_2 \wedge v_3\\
&\qquad + (v_1, v_4)(v_2, v_3) - (v_1, v_3)(v_2, v_4) + (v_1, v_2)(v_3, v_4)\\
&\ \ \vdots
\end{aligned}$$
If the v's form an "orthonormal" basis of p then the products
$$ v_{i_1} \cdots v_{i_k}, \quad i_1 < i_2 < \cdots < i_k, \quad k = 0, 1, \dots, n $$
form a basis of C(p) while the
$$ v_{i_1} \wedge \cdots \wedge v_{i_k}, \quad i_1 < i_2 < \cdots < i_k, \quad k = 0, 1, \dots, n $$
form a basis of ∧p, and in fact
$$ v_1 \cdots v_k \mapsto v_1 \wedge \cdots \wedge v_k \quad \text{if } (v_i, v_j) = 0\ \forall i \neq j. \qquad (9.1) $$
In particular, the map C(p) → ∧p given above is an isomorphism of vector spaces, so we may identify C(p) with ∧p as a vector space if we choose, and then consider that ∧p has two products: the Clifford product, which we denote by juxtaposition, and the exterior product, which we denote with a ∧.
Notice that this identification preserves the Z/2Z gradation: an even element of the Clifford algebra is identified with an even element of the exterior algebra and an odd element is identified with an odd element.
9.1.5 The canonical antiautomorphism.
The Clifford algebra has a canonical antiautomorphism σ which is the identity map on p. In particular, for v_i ∈ p we have σ(v_1 v_2) = v_2 v_1, σ(v_1 v_2 v_3) = v_3 v_2 v_1, etc. By abuse of language, we use the same letter σ to denote the similar antiautomorphism on ∧p, and observe from the above computations (in particular from the corresponding choice of bases) that σ commutes with our identifying map C(p) → ∧p, so the notation is consistent. We have
$$ \sigma = (-1)^{\frac{1}{2}k(k-1)}\,\mathrm{id} \quad \text{on } \wedge^k(p). $$
For small values of k we have

k : (−1)^{k(k−1)/2}
0 : 1
1 : 1
2 : −1
3 : −1
4 : 1
5 : 1
6 : −1

We will use subscripts to denote the homogeneous components of elements of ∧p. Notice that if u ∈ ∧²p then σu = −u by the above table, while σ(u²) = (σu)² = u². Since u² is even (and hence has only even homogeneous components) and since the maximum degree of the homogeneous component of u² is 4, we conclude that
$$ u^2 = (u^2)_0 + (u^2)_4 \quad \forall\ u \in \wedge^2 p. \qquad (9.2) $$
For the same reason
$$ v^2 = (v^2)_0 + (v^2)_4 \quad \forall\ v \in \wedge^3 p. \qquad (9.3) $$
We also claim the following:
$$ (ww')_0 = (\sigma w, w') = (-1)^{\frac{1}{2}k(k-1)}(w, w') \quad \forall\ w, w' \in \wedge^k(p). \qquad (9.4) $$
Indeed, it is sufficient to verify this for w, w' belonging to a basis of ∧p, say the basis given by all elements of the form (9.1), in which case both sides of (9.4) vanish unless w = w'. If w = w' = v_1 ∧ ··· ∧ v_k (say) then
$$ (ww)_0 = \iota(v_1)\cdots\iota(v_k)\, v_1 \wedge \cdots \wedge v_k = (-1)^{\frac{1}{2}k(k-1)}(v_1, v_1)\cdots(v_k, v_k) = (-1)^{\frac{1}{2}k(k-1)}(w, w), $$
proving (9.4).
As special cases that we will use later on, observe that
$$ (ww')_0 = -(w, w') \quad \forall\ w, w' \in \wedge^2 p \qquad (9.5) $$
and
$$ (vv')_0 = -(v, v') \quad \forall\ v, v' \in \wedge^3 p. \qquad (9.6) $$
9.1.6 Commutator by an element of p.
For any y ∈ p consider the linear map
$$ w \mapsto [y, w] = yw - (-1)^k wy \quad \text{for } w \in \wedge^k p, $$
which is (anti)commutator in the Clifford multiplication by y. We claim that
$$ [y, w] = 2\iota(y)w. \qquad (9.7) $$
In particular, [y, ·], which is automatically a derivation for the Clifford multiplication, is also a derivation for the exterior multiplication. Alternatively, this equation says that ι(y), which is a derivation for the exterior algebra multiplication, is also a derivation for the Clifford multiplication.
To prove (9.7) write
nn = o(no(n)).
Then
nn = n ∧ n +ι(n)n. nn = o(n ∧ o(n)) +o(ι(n)on) = n ∧ n + (oι(n)o)n.
We may assume that n ∈ ∧
k
p. Then
n ∧ n −(−1)
k
n ∧ n = 0.
so we must show that
oι(n)on = (−1)
k−1
ι(n)n.
For this we may assume that n = 0 and we may write
n = n ∧ . +.
.
where ι(n)n = 1 and ι(n). = ι(n).
= 0. In fact, we may assume that . and .
are sums of products of linear elements all of which are orthogonal to n. Then
ι(n)o. = ι(n)o.
= 0 so
ι(n)on = (−1)
k−1
o.
since . has degree one less than n and hence
oι(n)on = (−1)
k−1
. = (−1)
k−1
ι(n)n. Q11
9.1.7 Commutator by an element of ∧²p.
Suppose that
$$ u \in \wedge^2 p. $$
Then for y ∈ p we have
$$ [u, y] = -[y, u] = -2\iota(y)u. \qquad (9.8) $$
In particular, if u = y_i ∧ y_j where y_i, y_j ∈ p we have
$$ [u, y] = 2(y_j, y)y_i - 2(y_i, y)y_j \quad \forall\ y \in p. \qquad (9.9) $$
If (y_i, y_j) = 0 this is an "infinitesimal rotation" in the plane spanned by y_i and y_j. Since the y_i ∧ y_j, i < j, form a basis of ∧²p if y_1, ..., y_n form an "orthonormal" basis of p, we see that the map
$$ u \mapsto [u, \cdot\,] $$
gives an isomorphism of ∧²p with the orthogonal algebra o(p). This identification differs by a factor of two from the identification that we had been using earlier.
Now each element of o(p) (in fact any linear transformation on p) induces a derivation of ∧p. We claim that under the above identification of ∧²p with o(p), the derivation corresponding to u ∈ ∧²p is Clifford commutation by u. In symbols, if θ_u denotes this induced derivation, we claim that
$$ \theta_u(w) = [u, w] = uw - wu \quad \forall\ w \in \wedge p. \qquad (9.10) $$
To verify this, it is enough to check it on basis elements of the form (9.1), and hence by the derivation property for each v_j, where this reduces to (9.8).
We can now be more explicit about the degree four component of the Cliﬀord
square of an element of ∧
2
p, i.e. the element (n
2
)
4
occurring on the right of
(9.2). We claim that for any three elements n. n
. n
∈ p
1
2
ι(n
)ι(n
)ι(n)n
2
= (n∧n
. n)ι(n
)n+(n
∧n
. n)ι(n)n+(n
∧n. n)ι(n
)n. (9.11)
To prove this observe that
ι(n)n
2
= (ι(n)n) n +n(ι(n)n)
ι(n
)ι(n)n
2
= (ι(n
)ι(n)n) n −ι(n)nι(n
)n +ι(n
)nι(n)n +nι(n
)ι(n)n
= 2 ((n ∧ n
. n)n +ι(n
)n ∧ ι(n)n)
1
2
ι(n
)ι(n
)ι(n)n
2
= (n ∧ n
. n)ι(n
)n +ι(n
)ι(n
)n ∧ ι(n)n −ι(n
)n ∧ ι(n
)ι(n)n
= (n ∧ n
. n)ι(n
)n + (n
∧ n
. n)ι(n)n + (n
∧ n. n)ι(n
)n
as required.
We can also be explicit about the degree zero component of u². Indeed, it follows from (9.9) that if u = y_i ∧ y_j, i < j, where y_1, ..., y_n form an "orthonormal" basis of p, then
$$ \operatorname{tr}(\operatorname{ad}_p u)^2 = -8(y_i, y_i)(y_j, y_j), $$
where ad_p u denotes the (commutator) action of u on p under our identification of ∧²p with o(p). But
$$ (y_i \wedge y_j,\ y_i \wedge y_j) = (y_i, y_i)(y_j, y_j)\ (= \pm 1). $$
So using (9.5) we see that
$$ (u^2)_0 = \tfrac{1}{8}\operatorname{tr}(\operatorname{ad}_p u)^2 = -(u, u) \qquad (9.12) $$
for u ∈ ∧²p.
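As a quick illustration of (9.12) (added here, not in the original text), take u = y_1 ∧ y_2 with (y_1, y_1) = (y_2, y_2) = 1. By (9.9), ad_p u sends y_1 ↦ −2y_2 and y_2 ↦ 2y_1 and annihilates the remaining basis vectors, so
$$ \operatorname{tr}(\operatorname{ad}_p u)^2 = \operatorname{tr}\begin{pmatrix} 0 & 2\\ -2 & 0 \end{pmatrix}^{2} = \operatorname{tr}\begin{pmatrix} -4 & 0\\ 0 & -4 \end{pmatrix} = -8, $$
and indeed (u²)_0 = (1/8)(−8) = −1 = −(u, u).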
9.2 Orthogonal action of a Lie algebra.
Let r be a Lie algebra. Suppose that we have a representation of r acting as infinitesimal orthogonal transformations of p, which means, in view of the identification of ∧²p with o(p), that we have a map
$$ \nu : r \to \wedge^2 p $$
such that
$$ x \cdot y = -2\iota(y)\nu(x) \qquad (9.13) $$
where x · y denotes the action of x ∈ r on y ∈ p.
9.2.1 Expression for ν in terms of dual bases.
It will be useful for us to write equation (9.13) in terms of a basis. So let y_1, ..., y_n be a basis of p and let z_1, ..., z_n be the dual basis relative to ( , ) = ( , )_p. We claim that
$$ \nu(x) = -\frac{1}{4}\sum_j y_j \wedge (x \cdot z_j). \qquad (9.14) $$
Indeed, it suffices to verify (9.13) for each of the elements z_i. Now
$$ \iota(z_i)\Bigl[-\frac{1}{4}\sum_j y_j \wedge x \cdot z_j\Bigr] = -\frac{1}{4}\, x \cdot z_i + \frac{1}{4}\sum_j (z_i,\ x \cdot z_j)\, y_j. $$
But
$$ (z_i,\ x \cdot z_j) = -(x \cdot z_i,\ z_j) $$
since x acts as an infinitesimal orthogonal transformation relative to ( , ). So we can write the sum as
$$ \frac{1}{4}\sum_j (z_i,\ x \cdot z_j)\, y_j = -\frac{1}{4}\sum_j (x \cdot z_i,\ z_j)\, y_j = -\frac{1}{4}\, x \cdot z_i, $$
yielding
$$ \iota(z_i)\Bigl[-\frac{1}{4}\sum_j y_j \wedge x \cdot z_j\Bigr] = -\frac{1}{2}\, x \cdot z_i, $$
which is (9.13).
9.2.2 The adjoint action of a reductive Lie algebra.
For future use we record here a special case of (9.14): Suppose that p = r = g is a reductive Lie algebra with an invariant symmetric bilinear form, and the action is the adjoint action, i.e. x · y = [x, y]. Let h be a Cartan subalgebra of g and let Φ denote the set of roots, and suppose that we have chosen root vectors e_φ, e_{−φ}, φ ∈ Φ so that
$$ (e_\phi, e_{-\phi}) = 1. $$
Let h_1, ..., h_s be a basis of h and h^1, ..., h^s the dual basis. Let
$$ \psi : g \to \wedge^2 g $$
be the map ν when applied to this adjoint action. Then (9.14) becomes
$$ \psi(x) = \frac{1}{4}\Bigl[\sum_{i=1}^{s} h_i \wedge [h^i, x] + \sum_{\phi \in \Phi} e_{-\phi} \wedge [e_\phi, x]\Bigr]. \qquad (9.15) $$
In case x = h ∈ h this formula simplifies. The [h^i, h] = 0, and in the second sum we have
$$ e_{-\phi} \wedge [e_\phi, h] = -\phi(h)\, e_{-\phi} \wedge e_\phi $$
which is invariant under the interchange of φ and −φ. So let us make a choice Φ^+ of positive roots. Then we can write (9.15) as
$$ \psi(h) = -\frac{1}{2}\sum_{\phi \in \Phi^+} \phi(h)\, e_{-\phi} \wedge e_\phi, \qquad h \in h. \qquad (9.16) $$
Now
$$ e_{-\phi} \wedge e_\phi = -1 + e_{-\phi} e_\phi. $$
So if
$$ \rho := \frac{1}{2}\sum_{\phi \in \Phi^+} \phi \qquad (9.17) $$
is one half the sum of the positive roots, we have
$$ \psi(h) = \rho(h) - \frac{1}{2}\sum_{\phi \in \Phi^+} \phi(h)\, e_{-\phi}\, e_\phi, \qquad h \in h. \qquad (9.18) $$
In this equation, the multiplication on the right is in the Clifford algebra.
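For example (an illustration added here, not part of the original text), for g = sl(2) with Φ^+ = {α} and root vectors normalized so that (e_α, e_{−α}) = 1, formula (9.18) reads
$$ \psi(h) = \tfrac{1}{2}\alpha(h) - \tfrac{1}{2}\alpha(h)\, e_{-\alpha}\, e_\alpha = \tfrac{1}{2}\alpha(h)\bigl(1 - e_{-\alpha}\, e_\alpha\bigr), \qquad h \in h. $$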
9.3 The spin representations.
If
$$ p = p_1 \oplus p_2 $$
is a direct sum decomposition of a vector space p with a symmetric bilinear form into two orthogonal subspaces, then it follows from the definition of the Clifford algebra that
$$ C(p) = C(p_1) \otimes C(p_2) $$
where the multiplication on the tensor product is taken in the sense of superalgebras, that is
$$ (a_1 \otimes a_2)(b_1 \otimes b_2) := a_1 b_1 \otimes a_2 b_2 $$
if either a_2 or b_1 is even, but
$$ (a_1 \otimes a_2)(b_1 \otimes b_2) := -a_1 b_1 \otimes a_2 b_2 $$
if both a_2 and b_1 are odd. It costs a sign to move one odd symbol past another.
9.3.1 The even dimensional case.
Suppose that p is even dimensional. If the metric is split (which is always the case if the metric is non-degenerate and we are over the complex numbers) then p is a direct sum of two dimensional mutually orthogonal split spaces W_i, so let us examine first the case of a two dimensional split space p, spanned by ι, e with
$$ (\iota, \iota) = (e, e) = 0, \qquad (\iota, e) = \tfrac{1}{2}. $$
Let T be a one dimensional space with basis t and consider the linear map of p → End(∧T) determined by
$$ e \mapsto e(t), \qquad \iota \mapsto \iota(t^*) $$
where e(t) denotes exterior multiplication by t and ι(t*) denotes interior multiplication by t*, the dual element to t in T*. This is a Clifford map since
$$ e(t)^2 = 0 = \iota(t^*)^2, \qquad e(t)\iota(t^*) + \iota(t^*)e(t) = \mathrm{id}. $$
This therefore extends to a map of C(p) → End(∧T). Explicitly, if we use 1 ∈ ∧^0 T, t ∈ ∧^1 T as a basis of ∧T this map is given by
$$ 1 \mapsto \begin{pmatrix} 1 & 0\\ 0 & 1\end{pmatrix}, \qquad \iota \mapsto \begin{pmatrix} 0 & 1\\ 0 & 0\end{pmatrix}, \qquad e \mapsto \begin{pmatrix} 0 & 0\\ 1 & 0\end{pmatrix}, \qquad \iota e \mapsto \begin{pmatrix} 1 & 0\\ 0 & 0\end{pmatrix}. $$
This shows that the map is an isomorphism. If now
$$ p = W_1 \oplus \cdots \oplus W_m $$
is a direct sum of two dimensional split spaces, and we write
$$ T = T_1 \oplus \cdots \oplus T_m $$
where C(W_i) ≅ End(∧T_i) as above, then since
$$ \wedge T = \wedge T_1 \otimes \cdots \otimes \wedge T_m $$
we see that
$$ C(p) \cong \operatorname{End}(\wedge T). $$
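The two by two matrices displayed above can be checked directly. The following Python sketch (an added illustration, not part of the original text; the matrix helpers are ad hoc) verifies that they satisfy the Clifford relations of the split plane, e² = ι² = 0 and eι + ιe = 2(e, ι) id = id:

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

E    = [[0, 0], [1, 0]]      # image of e    (exterior multiplication by t)
IOTA = [[0, 1], [0, 0]]      # image of iota (interior multiplication by t*)
ID   = [[1, 0], [0, 1]]

assert mul(E, E) == [[0, 0], [0, 0]] and mul(IOTA, IOTA) == [[0, 0], [0, 0]]
assert add(mul(E, IOTA), mul(IOTA, E)) == ID
assert mul(IOTA, E) == [[1, 0], [0, 0]]      # the matrix listed for iota e
print("the 2-dimensional model satisfies the Clifford relations")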
In particular, C(p) is isomorphic to the full 2^m × 2^m matrix algebra and hence has a unique (up to isomorphism) irreducible module. One model of this is
$$ S = \wedge T. $$
We can write
$$ S = S^+ \oplus S^- $$
as a supervector space, where we choose the standard Z_2 grading on ∧T to determine the grading on S if m is even, but use the opposite grading (for reasons which will become apparent in a moment) if m is odd.
The even part, C
0
(p) of C(p) acts irreducibly on each of o
±
. Since ∧
2
p
together with the constants generates C
0
(p) we see that the action of ∧
2
p on
each of o
±
is irreducible. Since ∧
2
p under Cliﬀord commutation is isomorphic
to o(p) the two modules o
±
give irreducible modules for the even orthogonal
algebra o(p). These are the half spin representations of the even orthogonal
algebras.
We can identify o = o
+
⊕ o
−
as a left ideal in C(p) as follows: Suppose
that we write
p = p
+
⊕p
−
where p
±
are complementary isotropic subspaces. Choose a basis c
+
1
. . . . . c
+
m
of
p
+
and let
c
+
:= c
+
1
c
+
m
= c
+
1
∧ ∧ c
+
m
∈ ∧
m
p
+
.
We have
n
+
c
+
= 0. ∀ n
+
∈ p
+
and hence
(∧p
+
)
+
c
+
= 0.
In other words
∧p
+
c
+
consists of all scalar multiples of c
+
.
Since
∧p
−
⊗∧p
+
→C(p). n
−
⊗n
+
→n
−
n
+
is a linear bijection, we see that
C(p)c
+
= ∧p
−
c
+
.
This means that the left ideal generated by c
+
in C(p) has dimension 2
m
, and
hence must be isomorphic as a left C(p) module to o. In particular it is a
minimal left ideal.
Let c
−
1
. . . . c
−
m
be a basis of p
−
and for any subset J = ¦i
1
. . . . . i
j
¦. i
1
<
i
2
< i
j
of ¦1. . . . . :¦ let
c
J
−
:= c
−
i1
∧ ∧ c
−
ij
= c
−
i1
c
−
ij
.
Then the elements
c
J
−
c
+
form a basis of this model of o as J ranges over all subsets of ¦1. . . . . :¦.
For example, suppose that we have a commutative Lie algebra h acting
on p as inﬁnitesimal isometries, so as to preserve each p
±
, that the c
+
i
are
weight vectors corresponding to weights β
i
and that the c
−
i
form the dual basis,
corresponding to the negative of these weights −β
i
. Then it follows from (9.14)
that the image, ν(/) ∈ ∧
2
(p) ⊂ C(p) of an element / ∈ h is given by
ν(/) =
1
2
¸
β
i
(/)c
+
i
∧ c
−
i
=
1
2
¸
β
i
(/)(1 −c
−
i
c
+
i
).
Thus
ν(/)c
+
= ρ
p
(/)c
+
(9.19)
where
ρ
p
:=
1
2
(β
1
+ +β
m
). (9.20)
For a subset J of ¦1. . . . . :¦ let us set
β
J
:=
¸
j∈J
β
j
.
Then we have
[ν(/). c
J
−
] = −β
J
(/)c
J
−
and so
ν(/)(c
J
−
c
+
) = [ν(/). c
J
−
]c
+
+c
J
−
ν(/)c
+
= (ρ
p
(/) −β
J
(/))c
J
−
c
+
.
So if we denote the action of ν(/) on o
±
by Spin
±
ν(/) and the action of ν(/)
on o = o
+
⊕o
−
by Spin ν(/) we have proved that
The c
J
−
c
+
are weight vectors of Spin ν with weights ρ
p
−β
J
. (9.21)
It follows from (9.21) that the diﬀerence of the characters of Spin
+
ν and Spin
−
ν
is given by
ch
Spin+ν
−ch
Spin−ν
=
¸
j
c(
1
2
β
j
) −c(−
1
2
β
j
)
= c(ρ
p
)
¸
j
(1 −c(−β
j
)) .
(9.22)
There are two special cases which are of particular importance: First, this
applies to the case where we take h to be a Cartan subalgebra of o(p) = o(C
2k
)
itself, say the diagonal matrices in the block decomposition of o(p) given by the
decomposition
C
2k
= C
k
⊕C
k
into two isotropic subspaces. In this case the β
i
is just the ith diagonal entry
and (9.22) yields the standard formula for the diﬀerence of the characters of the
spin representations of the even orthogonal algebras.
A second very important case is where we take h to be the Cartan subalgebra
of a semisimple Lie algebra g, and take
p := n
+
⊕n
−
relative to a choice of positive roots. Then the β
j
are just the positive roots,
and we see that the right hand side of (9.22) is just the Weyl denominator, the
denominator occurring in the Weyl character formula. This means that we can
write the Weyl character formula as
ch(Irr(λ) ⊗o
+
) −ch(Irr(λ) ⊗o
−
) =
¸
w∈W
(−1)
w
c(n • λ)
where
n • λ := n(λ +ρ).
If we let l
µ
denote the one dimensional module for h given by the weight j we
can drop the characters from the preceding equation and simply write the Weyl
character formula as an equation in virtual representations of h:
Irr(λ) ⊗o
+
−Irr(λ) ⊗o
−
=
¸
w∈W
(−1)
w
l
w•λ
. (9.23)
The reader can now go back to the preceding chapter and to Theorem 16
where this version of the Weyl character formula has been generalized from the
Cartan subalgebra to the case of a reductive subalgebra of equal rank. In the
next chapter we shall see the meaning of this generalization in terms of the
Kostant Dirac operator.
9.3.2 The odd dimensional case.
Since every odd dimensional space with a nonsingular bilinear form can be
written as a sum of a one dimensional space and an even dimensional space (both
nondegenerate), we need only look at the Cliﬀord algebra of a one dimensional
space with a basis element r such that (r. r) = 1 (since we are over the complex
numbers). This Cliﬀord algebra is two dimensional, spanned by 1 and r with
r
2
= 1, the element r being odd. This algebra clearly has itself as a canonical
module under left multiplication and is irreducible as a Z´2Z module. We may
call this the spin representation of Cliﬀord algebra of a one dimensional space.
Under the even part of the Cliﬀord algebra (i.e. under the scalars) it splits
into two isomorphic (one dimensional) spaces corresponding to the basis 1. r of
the Cliﬀord algebra. Relative to this basis 1. r we have the left multiplication
representation given by
1 →
1 0
0 1
. r →
0 1
1 0
.
Let us use C(C) to denote the Cliﬀord algebra of the one dimensional or
thogonal vector space just described, and o(C) its canonical module. Then
if
q = p ⊕C
is an orthogonal decomposition of an odd dimensional vector space into a direct
sum of an even dimensional space and a one dimensional space (both non
degenerate) we have
C(q)
∼
= C(p) ⊗C(C)
∼
= End(o(q))
where
o(q) := o(p) ⊗o(C)
all tensor products being taken in the sense of superalgebra. We have a decom
position
o(q) = o
+
(q) ⊕o
−
(q)
as a super vector space where
o
+
(q) = o
+
(p) ⊕ro
−
(p). o
−
(q) = o
−
(p) ⊕ro
+
(p).
These two spaces are equivalent and irreducible as C
0
(q) modules. Since the
even part of the Cliﬀord algebra is generated by ∧
2
q together with the scalars,
we see that either of these spaces is a model for the irreducible spin representa
tion of o(q) in this odd dimensional case.
Consider the decomposition p = p
+
⊕p
−
that we used to construct a model
for o(p) as being the left ideal in C(p) generated by ∧
m
p
+
where : = dimp
+
.
We have
∧(C⊕p
−
) = ∧(C) ⊗∧p
−
.
and
Proposition 28 The left ideal in the Cliﬀord algebra generated by ∧
m
p
+
is a
model for the spin representation.
Notice that this description is valid for both the even and the odd dimensional
case.
9.3.3 Spin ad and V
ρ
.
We want to consider the following situation: g is a simple Lie algebra and we
take ( . ) to be the Killing form. We have
Φ : g →∧
2
g ⊂ C(g)
which is the map ν associated to the adjoint representation of g. Let h be
a Cartan subalgebra and Φ the collection of roots. We choose root vectors
c
φ
. φ ∈ Φ so that
(c
φ
. c
−φ
) = 1.
Then it follows from (9.14) that
Φ(r) =
1
4
¸
¸
/
i
∧ [/
i
. r]
g
+
¸
φ∈Φ
c
−φ
∧ [c
φ
. r]
g
¸
(9.24)
where the brackets are the Lie brackets of g, where the /
i
range over a basis
of h and the /
i
over a dual basis. This equation simpliﬁes in the special cases
where r = / ∈ h and in the case where r = c
ψ
. ψ ∈ Φ
+
relative to a choice,
Φ
+
of positive roots. In the case that r = / ∈ h we have seen that [/
i
. /] = 0
and the equation simpliﬁes to
Φ(/) = ρ(/)1 −
1
2
¸
φ∈Φ
+
φ(/)c
−φ
c
φ
(9.25)
where
ρ =
1
2
¸
φ∈Φ
+
φ
is one half the sum of then positive roots.
We claim that for ψ ∈ Φ
+
we have
Φ(c
ψ
) =
¸
r
γ
c
ψ
(9.26)
where the sum is over pairs (γ
. ψ
) such that either
1. γ
= 0. ψ
= ψ and r
γ
∈ h or
2. γ
∈ Φ. ψ
∈ Φ
+
and γ
+ψ
= ψ. and r
γ
∈ g
γ
.
To see this, ﬁrst observe that this ﬁrst sum on the right of (9.24) gives
¸
ψ(/
i
)/
i
∧ c
ψ
and so all these summands are of the form 1). For each summand
c
−φ
∧ [c
φ
. c
ψ
]
of the second sum, we may assume that either φ +ψ = 0 or that φ +ψ ∈ Φ for
otherwise [c
φ
. c
ψ
] = 0. If φ +ψ = 0, so ψ = −φ = 0, we have [c
φ
. c
ψ
] ∈ h which
is orthogonal to c
−φ
since φ = 0. So
c
−φ
∧ [c
φ
. c
ψ
] = −[c
φ
. c
ψ
]c
ψ
again has the form 1).
If φ +ψ = τ = 0 is a root, then (c
−φ
. c
τ
) = 0 since φ = τ. If τ ∈ Φ
+
then
c
−φ
∧ [c
φ
. c
ψ
] = c
−φ
n
τ
.
where n
τ
is a multiple of c
τ
so this summand is of the form 2). If τ is a negative
root, the φ must be a negative root so −φ is a positive root, and we can switch
the order of the factors in the preceding expression at the expense of introducing
a sign. So again this is of the form 2), completing the proof of (9.26).
Let n
+
be the subalgebra of g generated by the positive root vectors and
similarly n
−
the subalgebra generated by the negative root vectors so
g = n
+
⊕b
−
. b
−
:= n
−
⊕h
is an h stable decomposition of g into a direct sum of the nilradical and its
opposite Borel subalgebra.
Let · be the number of positive roots and let
0 = n ∈ ∧
N
n
+
.
Clearly
nn = 0 ∀n ∈ n
+
.
Hence by (9.26) we have
Φ(n
+
)n = 0
while by (9.25)
Φ(/)n = ρ(/)n ∀ / ∈ h.
This implies that the cyclic module
Φ(l(g))n
is a model for the irreducible representation \
ρ
of g with highest weight ρ. Left
multiplication by Φ(r). r ∈ g gives the action of g on this module.
Furthermore, if nc = 0 for some c ∈ C(g) then nc has the same property:
Φ(n
+
)nc = 0. Φ(/)nc = ρ(/)nc. ∀ / ∈ h.
Thus every nc = 0 also generates a g module isomorphic to \
ρ
.
Now the map
∧n
+
⊗∧b
−
→C(g). r ⊗/ →r/
is a linear isomorphism and right Cliﬀord multiplication of ∧
N
n
+
by ∧n
+
is
just ∧
N
n
+
, all the elements of of ∧
+
n
+
yielding 0. So we have the vector space
isomorphism
nC(g) = ∧
N
n
+
⊗∧b
−
.
In other words,
Φ(l(g))nC(g)
is a direct sum of irreducible modules all isomorphic to V_ρ with multiplicity equal to
$$ \dim \wedge b_- = 2^{s+N} $$
where s = dim h and N = dim n_− = dim n_+. Let us compute the dimension of V_ρ using the Weyl dimension formula, which asserts that for any irreducible finite dimensional representation V_λ with highest weight λ we have
$$ \dim V_\lambda = \frac{\prod_{\phi \in \Phi^+} (\rho + \lambda, \phi)}{\prod_{\phi \in \Phi^+} (\rho, \phi)}. $$
If we plug in λ = ρ we see that each factor in the numerator is twice the corresponding factor in the denominator, so
$$ \dim V_\rho = 2^N. \qquad (9.27) $$
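For instance (an added illustration, not part of the original text), for g = sl(3) the positive roots are α_1, α_2 and α_1 + α_2, so N = 3; moreover ρ = α_1 + α_2 is the highest root, so V_ρ is the adjoint representation, and (9.27) is confirmed by
$$ \dim V_\rho = \prod_{\phi \in \Phi^+} \frac{(\rho + \rho, \phi)}{(\rho, \phi)} = 2^{\,|\Phi^+|} = 2^3 = 8 = \dim \mathfrak{sl}(3). $$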
But then
dimΦ(l(g))nC(g) = 2
s+2N
= dimC(g).
This implies that
C(g) = Φ(l(g))nC(g) = Φ(l(g))n(∧b
−
). (9.28)
proving that C(g) is primary of type \
ρ
with multiplicity 2
s+N
as a represen
tation of g under the left multiplication action of Φ(g).
This implies that any submodule for this action, in particular any left ideal
of C(g), is primary of type \
ρ
. Since we have realized the spin representation
of C(g) as a left ideal in C(g) we have proved the important
Theorem 17 Spin ad is primary of type \
ρ
.
One consequence of this theorem is the following:
Proposition 29 The weights of \
ρ
are
ρ −φ
J
(9.29)
where J ranges over subsets of the positive roots and
φ
J
=
¸
φ∈J
φ
J
each occurring with multiplicity equal to the number of subsets J yielding the
same value of φ
J
.
Indeed, (9.21) gives the weights of Spin ad, but several of the β
J
are equal
due to the trivial action of ad(h) on itself. However this contribution to the
multiplicity of each weight occurring in (9.21) is the same, and hence is equal
to the multiplicity of \
ρ
in Spin ad. So each weight vector of \
ρ
must be of the
form (9.29) each occurring with the multiplicity given in the proposition.
Chapter 10
The Kostant Dirac operator
Let p be a vector space with a nondegenerate symmetric bilinear form. We have the Clifford algebra C(p) and the identification of o(p) = ∧²(p) inside C(p).
10.1 Antisymmetric trilinear forms.
Let φ be an antisymmetric trilinear form on p. Then φ deﬁnes an antisymmetric
map
/ = /
φ
: p ⊗p →p
by the formula
(/(n. n
). n
) = φ(n. n
. n
) ∀ n. n
. n
∈ p.
This bilinear map “leaves ( . ) invariant” in the sense that
(/(n. n
). n
) = (n. /(n
. n
)).
Conversely, any antisymmetric map / : p ⊗ p → p satisfying this condition
deﬁnes an antisymmetric form φ. Finally either of these two objects deﬁnes an
element · ∈ ∧
3
p by
−2(·. n ∧ n
∧ n
) = (/(n. n
). n
) = φ(n. n
. n
). (10.1)
We can write this relation in several alternative ways: Since
−2(·. n ∧ n
∧ n
) = −2(ι(n
)ι(n)·. n
) = 2(ι(n)ι(n
)·. n
)
we have
/(n. n
) = 2ι(n)ι(n
)·. (10.2)
Also, ι(n)· ∈ ∧
2
p and so is identiﬁed with an element of o(p) by commutator
in the Cliford algebra:
ad(ι(n)·)(n
) = [ι(n)·. n
] = −2ι(n
)ι(n)·
so
ad(ι(n)·)(n
) = [ι(n)·. n
] = /(n. n
). (10.3)
10.2 Jacobi and Cliﬀord.
Given an antisymmetric bilinear map / : p ⊗p →p we may deﬁne
Jac(/) : p ⊗p ⊗p →p
by
Jac(/)(n. n
. n
) = /(/(n. n
). n
) +/(/(n
. n
). n) +/(/(n
. n). n
)
so that the vanishing of Jac(/) is the usual Jacobi identity. It is easy to check
that Jac(/) is antisymmetric and that if / satisﬁes (/(n. n
). n
) = (n. /(n
. n
))
then the four form
n. n
. n
. n
→(Jac(/)(n. n
. n
). n
)
is antisymmetric. We claim that if · ∈ ∧
3
p as in the preceding subsection, then
ι(n
)ι(n
)ι(n)·
2
=
1
2
Jac(/)(n. n
. n
). (10.4)
To prove this observe that
ι(n)·
2
= (ι(n)·)· −·(ι(n)·)
ι(n
)ι(n)·
2
= (ι(n
)ι(n)·)· + (ι(n)·)(ι(n
)·) −(ι(n
)·)(ι(n)·) +·(ι(n
)ι(n)·)
ι(n
)ι(n
)ι(n)·
2
= −(ι(n
)ι(n)·)ι(n
)· + (ι(n
)·)(ι(n
)ι(n)·) + (ι(n
)ι(n)·)ι(n
)·
+(ι(n)·)(ι(n
)ι(n
)· −(ι(n
)ι(n
)·)(ι(n)·) −(ι(n
)·)(ι(n
)ι(n)·))
= [ι(n
)·. ι(n
)ι(n)·] + [ι(n
)·. ι(n)ι(n
)·] + [ι(n)·. ι(n
)ι(n
)·]
=
1
2
Jac(/)(n. n
. n
)
by (10.2) and (10.3).
Equation (10.4) describes the degree four component of ·
2
in terms of Jac(/).
We can be explicit about the degree zero component of ·
2
. We claim that
(·
2
)
0
=
1
24
tr
n
¸
j=1
[n →c
j
/(n
j
. /(n
j
. n))]. c
j
:= (n
j
. n
j
). (10.5)
Indeed, by (9.6) we know that (·
2
)
0
= −(·. ·) and since n
i
∧ n
j
∧ n
k
. i < , < /
form an “orthonormal” basis of ∧
3
p we have
−(·. ·) = −
¸
1≤i<j<k≤n
±(·. n
i
∧ n
j
∧ n
k
)
2
. ± = c
i
c
j
c
k
= −
1
6
n,n,n
¸
i=1,j=1,k=1
±(·. n
i
∧ n
j
∧ n
k
)
2
= −
1
6
n,n,n
¸
i=1,j=1,k=1
±(ι(n
k
)ι(n
j
)·. n
i
)
2
= −
1
24
n,n,n
¸
i=1,j=1,k=1
±(/(n
j
. n
k
). n
i
)
2
= −
1
24
n,n
¸
j=1,k=1
c
j
c
k
(/(n
j
. n
k
). /(n
j
. n
k
))
=
1
24
n,n
¸
j=1,k=1
c
j
c
k
(/(n
j
. /(n
j
. n
k
)). n
k
)
proving (10.5).
10.3 Orthogonal extension of a Lie algebra.
Let us get back to the general case of a Lie algebra r acting as inﬁnitesimal
orthogonal transformations on p and the map ν : r → ∧
2
p given by (9.13).
Suppose that the Lie algebra r has a nondegenerate invariant symmetric bilinear
form ( . )
r
. We have the transpose map
ν
†
: ∧
2
p →r
since both r and ∧
2
p have nondegenerate symmetric bilinear forms. For n and
n
in p, let us deﬁne
[n. n
]
r
:= −2ν
†
(n ∧ n
). .
This map is an r morphism which says that
[r. [n. n
]
r
] = [r n. n
]
r
+ [n. r n
]
r
. (10.6)
where the bracket on the left denotes the Lie bracket on r. Also, we have
(r. [n. n
]
r
)
r
= −2(r. ν
†
(n ∧ n
))
r
= −2(ν(r). n ∧ n
)
p
= −2(ι(n)ν(r). n
)
p
= (r n. n
)
p
.
So we have proved
(r. [n. n
]
r
)
r
= (r n. n
)
p
. (10.7)
This has the following signiﬁcance: Suppose that we want to make r ⊕p into a
Lie algebra with an invariant symmetric bilinear form ( . ) such that
• r and p are orthogonal under ( . ),
• the restriction of ( . ) to r is ( . )
r
and the restriction of ( . ) to p is ( . )
p
,
and
• [r. p] ⊂ p and the bracket of an element of r with an element of p is given
by [r. n] = r n.
Then
the r component of [n. n
] must be given by [n. n
]
r
.
Thus to deﬁne a Lie algebra structure on r ⊕ p we must specify the p
component of the bracket of two elements of p. This amounts to specifying
a · ∈ ∧
3
p as we have seen, and the condition that the Jacobi identity hold
for r. n. n
with r ∈ r and n. n
∈ p amounts to the condition that · ∈ ∧
3
p be
invariant under the action of r. It then follows that if we try to deﬁne [ . ] = [ . ]
v
by
[n. n
] = [n. n
]
r
+ 2ι(n)ι(n
)·
then
([.. .
]. .
) = (.. [.
. .
])
for any three elements of g := r ⊕ p, and the Jacobi identity is satisﬁed if at
least one of these elements belongs to r. Furthermore, for any r ∈ r we have
([[n. n
]. n
]. r) = ([[n. n
]. n
]. r)
r
= ([n. n
]. [n
. r])
p
by (10.7)
= ([r. [n. n
]]. n
)
p
= ([[r. n]. n
]. n
)
p
+ ([n. [r. n
]]. n
)
p
= ([r. n]. [n
. n
])
p
+ ([r. n
]. [n
. n])
p
= (r. [n. [n
. n
])
r
+ (r. [n
. [n
. n]])
r
or
([[n. n
]. n
] + [[n
. n
]. n] + [[n
. n]. n
]. r) = 0.
In other words, the r component of the Jacobi identity holds for three elements
of p.
So what remains to be checked is the p component of the Jacobi identity for
three elements of p. This is the sum
Jac(/)(n. n
. n
) + [n. n
]
r
n
+ [n
. n
]
r
n + [n
. n]
r
n
.
Let us choose an“orthonormal” basis ¦r
i
¦. i = 1. . . . . : of r and write
[n. n
]
r
=
¸
i
c
i
([n. n
]. r
i
)r
i
. c
i
:= (r
i
. r
i
)
r
= ±1
so
[n. n
]
r
n
=
¸
i
c
i
([n. n
]
r
. r
i
)r
i
n
.
Then by (9.11) and (10.4) we see that the Jacobi identity is
ι(n)ι(n
)ι(n
)
·
2
+ν(Cas
r
)
= 0
where
Cas
r
:=
¸
i
c
i
r
2
i
∈ l(r)
does not depend on the choice of basis, and ν : l(r) →C(p) is the extension of
the homomorphism ν : r → C(p). In particular, we have proved that · deﬁnes
an extension of the Lie algebra structure satisfying our condition if and only if
·
2
+ν(Cas
r
) ∈ C (10.8)
i.e. has no component of degree four.
Suppose that this condition holds. We then have deﬁned a Lie algebra
structure on
g = r ⊕p.
We let 1
r
and 1
p
denote projections onto the ﬁrst and second components of
our decomposition. Our Lie bracket on g, denoted simply by [ . ] satisﬁes
[r. r
] = [r. r
]
r
. r. r
∈ r (10.9)
[r. n] = r n. r ∈ r. n ∈ p (10.10)
1
r
[n. n
] = [n. n
]
r
= −2ν
†
(n ∧ n
) n. n
∈ p (10.11)
1
p
[n. n
] = /(n. n
) = 2ι(n)ι(n
)·. n. n
∈ p. (10.12)
From now on we will assume that we are over the complex numbers or that
we are over the reals and the symmetric bilinear forms are positive deﬁnite.
This is not for any mathematical reasons but because the formulas become a
bit complicated if we put in all the signs. We leave the general case to the
reader.
10.4 The value of [v
2
+ν(Cas
r
)]
0
.
Condition (10.8) says that the degree four component of ·
2
+ν(Cas
r
) vanishes.
Assume that this holds, so we have constructed a Lie algebra. We will now
compute the degree zero component of ·
2
+ν(Cas
r
). The answer will be given
in equation (10.13) below.
We can write (10.5) as
(·
2
)
0
=
1
24
tr 1
p
n
¸
j=1
ad
g
(n
j
)1
p
ad
g
(n
j
)1
p
=
1
24
tr 1
p
n
¸
j=1
ad
g
(n
j
)1
p
ad
g
(n
j
)
in view of (10.12) where ad
g
denotes the adjoint action on all of g. On the other
hand, we have from (9.12) that
ν(Cas
r
)
0
=
1
8
tr
¸
i
(ad r
i
)
2
1
p
=
1
8
r
¸
i=1
n
¸
j=1
([r
i
. [r
i
. n
j
]]. n
j
).
We can rewrite this sum as
1
8
r,n
¸
i=1,j=1
([r
i
. n
j
]. [n
j
. r
i
])
which equals
1
8
¸
r,n,n
i=1,j=1,k=1
([r
i
. n
j
]. n
k
)(n
k
. [n
j
. r
i
]). But this equals
1
8
r,n,n
¸
i=1,j=1,k=1
(r
i
. [n
j
. n
k
])([n
k
. n
j
]. r
i
)
=
1
8
¸
j=1,k=1
(1
r
[n
j
. n
k
]. 1
r
[n
k
. n
j
])
=
1
8
n,n
¸
j=1,k=1
(1
r
[n
j
. n
k
]. [n
k
. n
j
])
=
1
8
n,n
¸
j=1,k=1
([n
j
. 1
r
[n
j
. n
k
]]. n
k
)
=
1
8
tr 1
p
n
¸
j=1
ad
g
(n
j
)1
r
ad
g
(n
j
)1
p
.
In other words
ν(Cas
r
)
0
=
1
8
tr 1
p
n
¸
j=1
ad
g
(n
j
)1
r
ad
g
(n
j
)1
p
=
1
8
tr
n
¸
j=1
ad
g
(n
j
)1
r
ad
g
(n
j
)1
p
.
Multiplying this equation by 1´3 and adding it to the above expression for (·
2
)
0
gives
1
3
ν(Cas
r
)
0
+ (·
2
)
0
=
1
24
tr
n
¸
j=1
(ad
g
n
j
)
2
1
p
.
We can write
ν(Cas
r
)
0
=
1
8
r,n
¸
i=1,j=1
([r
i
. n
j
]. [n
j
. r
i
]) =
1
8
r,n
¸
i=1,j=1
(r
i
. [n
j
. [n
j
. r
i
]])
=
1
8
tr 1
r
n
¸
j=1
(ad
g
n
j
)
2
1
r
=
1
8
tr
n
¸
j=1
(ad
g
n
j
)
2
1
r
.
Multiplying by 1´3 and adding to the preceding equation gives
2
3
ν(Cas
r
)
0
+ (·
2
)
0
=
1
24
tr
n
¸
j=1
(ad
g
n
j
)
2
.
On the other hand
ν(Cas
r
)
0
=
1
8
r,n
¸
i=1,j=1
([r
i
. [r
i
. n
j
]]. n
j
) =
1
8
tr ad
p
(Cas
r
)
=
1
8
(tr ad
g
(Cas
r
) −tr ad
r
(Cas
r
)) .
Multiplying by 1´3 and adding to the preceding equation, and using the fact
that Cas
g
= Cas
r
+
¸
n
2
j
gives
ν(Cas
r
) +·
2
=
1
24
(tr ad
g
(Cas
g
) −tr ad
r
(Cas
r
)) (10.13)
when (10.8) holds.
Suppose now that the Lie algebra r is reductive and that the Lie algebra g
we created out of r and p using a · ∈ ∧
3
p satisfying (10.8) is also reductive.
Using (7.29) for g and for r in the right hand side of (10.13) yields
ν(Cas
r
) +·
2
= ((ρ
g
. ρ
g
) −(ρ
r
. ρ
r
)) . (10.14)
10.5 Kostant's Dirac Operator.
Suppose that we have constructed our Lie algebra g = r + p from a v ∈ ∧³p satisfying (10.8). We are going to define
$$ K \in U(g) \otimes C(p) $$
as follows: Let y_1, ..., y_n be an orthonormal basis of p. Then
$$ K := \sum_i y_i \otimes y_i + 1 \otimes v. \qquad (10.15) $$
(On the left of the tensor product sign the y_i ∈ p is considered as an element of U(g) via the canonical injection of p in U(p) ⊂ U(g), and on the right of the tensor product sign it lies in C(p) via the canonical injection of p into C(p).)
We have a homomorphism l(r) →l(g), in particular a Lie algebra injection
r → l(g). We also have a Lie algebra homomorphism ν : r → C(p). In
particular, we have the diagonal Lie algebra map
diag : r →l(g) ⊗C(p). diag(r) = r ⊗1 + 1 ⊗ν(r)
and this extends to an algebra map
diag : l(r) →l(g) ⊗C(p).
For example,
diag(Cas
r
) =
¸
i
(r
i
⊗1 + 1 ⊗ν(r
i
))
2
where r
1
. . . . . r
r
is an orthonormal basis of r. In other words
diag(Cas
r
) =
r
¸
i=1
r
2
i
⊗1 + 2
r
¸
i=1
r
i
⊗ν(r
i
) +
r
¸
i=1
1 ⊗ν(r
i
)
2
. (10.16)
We claim that
K
2
= Cas
g
⊗1 −diag(Cas
r
) +
1
24
(tr ad
g
(Cas
g
) −tr ad
r
(Cas
r
)) 1 ⊗1. (10.17)
To prove this, let us write (10.15) as
K = K
+K
.
So
(K
)
2
= 1 ⊗·
2
and hence
(K
)
2
+
r
¸
j=1
1 ⊗ν(r
i
)
2
=
1
24
(tr ad
g
(Cas
g
) −tr ad
r
(Cas
r
)) 1 ⊗1
by (10.13). We have
(K
)
2
=
¸
ij
n
i
n
j
⊗n
i
n
j
=
¸
i
n
2
i
⊗1 +
¸
i=j
n
i
n
j
⊗n
i
n
j
=
¸
i
n
2
i
⊗1 +
¸
i<j
(n
i
n
j
−n
j
n
i
) ⊗n
i
n
j
=
¸
i
n
2
i
⊗1 +
¸
i<j
[n
i
. n
j
] ⊗n
i
n
j
=
¸
i
n
2
i
⊗1 −2
¸
i<j
ν
†
(n
i
∧ n
j
) ⊗n
i
∧ n
j
+ 2
¸
i<j
ι(n
i
)ι(n
j
)· ⊗n
i
∧ n
j
.
where we have used the decomposition of [n
i
. n
j
] into its r and p components to
get to the last expression. We can write the middle term in the last expression
as
−2
¸
i<j
ν
†
(n
i
∧ n
j
) ⊗n
i
∧ n
j
= −2
r
¸
k=1
¸
i<j
(ν
†
(n
i
∧ n
j
). r
k
)r
k
⊗n
i
∧ n
j
= −2
r
¸
k=1
¸
i<j
(n
i
∧ n
j
. ν(r
k
))r
k
⊗n
i
∧ n
j
= −2
r
¸
k=1
¸
i<j
r
k
⊗(n
i
∧ n
j
. ν(r
k
))n
i
∧ n
j
= −2
¸
i
r
i
⊗ν(r
i
).
Since
¸
r
2
i
⊗1 +
¸
n
2
j
⊗1 = Cas
g
⊗1 we conclude that
(K
)
2
+ (K
)
2
+ diag(Cas
r
)
= Cas
g
⊗1+
1
24
(tr ad
g
(Cas
g
) −tr ad
r
(Cas
r
)) 1⊗1+2
¸
i<j
ι(n
i
)ι(n
j
)· ⊗n
i
∧n
j
.
To complete the proof of (10.17) we must show that
K
K
+K
K
= −2
¸
i<j
ι(n
i
)ι(n
j
)· ⊗n
i
∧ n
j
.
But
K
K
+K
K
=
¸
n
j
⊗[n
j
. ·]
= 2
¸
n
j
⊗ι(n
j
)·
= 2
¸
i<k
¸
j
n
j
⊗(ι(n
j
)·. n
i
∧ n
k
)n
i
∧ n
k
=
¸
i<k
¸
j
n
j
⊗(2·. n
i
∧ n
k
∧ n
j
)n
i
∧ n
k
=
¸
i<k
¸
j
n
j
⊗(2ι(n
k
)ι(n
i
)·. n
j
)n
i
∧ n
j
=
¸
i<k
¸
j
(2ι(n
k
)ι(n
i
)·. n
j
)n
j
⊗n
i
∧ n
j
=
¸
i<k
2ι(n
k
)ι(n
i
)· ⊗n
i
∧ n
k
.
completing the proof.
In the case where r and g are reductive we have the alternative formula
$$ K^2 = \operatorname{Cas}_g \otimes 1 - \operatorname{diag}(\operatorname{Cas}_r) + \bigl((\rho_g, \rho_g) - (\rho_r, \rho_r)\bigr)\, 1 \otimes 1. \qquad (10.18) $$
172 CHAPTER 10. THE KOSTANT DIRAC OPERATOR
Suppose that λ is the highest weight of a ﬁnite dimensional irreducible rep
resentation \
λ
of g so that we get a surjective homomorphism
l(g) →End(\
λ
)
and hence a corresponding homomorphism
l(g) ⊗C(p) →End(\
λ
) ⊗C(p).
Also let diag
λ
denote the composition of this homomorphism with
diag : l(r) →U(g) ⊗C(p).
Then from the value of the Casimir (7.8) and (10.18) we get
K
2
λ
= ((λ +ρ
g
. λ +ρ
g
) −(ρ
r
. ρ
r
)) 1 ⊗1 −diag
λ
(Cas
r
). (10.19)
10.6 Eigenvalues of the Dirac operator.
We consider the situation where g = r ⊕ p is a Lie algebra with invariant
symmetric bilinear form, where r has the same rank as g, and where we have
chosen a common Cartan subalgebra
h ⊂ r ⊂ g.
We let / denote the dimension of h, i.e. the common rank of r and g. We let
Φ = Φ
g
denote the set of roots of g, let \ = \
g
denote the Weyl group of g,
and let \
r
denote the Weyl group of r so that
\
r
⊂ \
and we let c denote the index of \
r
in \.
A choice of positive roots Φ
+
for g amounts to a choice of a Borel subalgebra
b of g and then b ∩ r is a Borel subalgebra of r, which picks out a system of
positive roots Φ
+
r
for r and then
Φ
+
r
⊂ Φ
+
.
The corresponding Weyl chambers are
1 = 1
g
= ¦λ ∈ h
∗
R
[(λ. φ) ≥ 0 ∀φ ∈ Φ
+
¦
and
1
r
= ¦λ ∈ h
∗
R
[(λ. φ) ≥ 0 ∀φ ∈ Φ
+
r
¦
so
1 ⊂ 1
r
and we have chosen a crosssection C of \
r
in \ as
C = ¦n ∈ \[n1 ⊂ 1
r
¦.
so
\ = \
r
C. 1
r
=
¸
w∈C
n1.
We let L = L
g
⊂ h
∗
R
denote the lattice of g integral linear forms on h, i.e.
L = ¦j ∈ h
∗
[2
(j. φ)
(φ. φ)
∈ Z ∀φ ∈ ∆¦.
We let
ρ = ρ
g
=
1
2
¸
φ∈∆
+
φ
and
ρ
r
=
1
2
¸
φ∈∆
+
r
φ.
We set
L
r
= the lattice spanned by L and ρ
r
.
and
Λ := L ∩ 1. Λ
r
:= L
r
∩ 1
r
.
For any r module 7 we let Γ(7) denote its set of weights, and we shall
assume that
Γ(7) ⊂ L
r
.
For such a representation deﬁne
:
Z
:= max
γ∈Γ(Z)
(γ +ρ
r
. γ +ρ
r
). (10.20)
For any j ∈ Λ
r
we let 7
µ
denote the irreducible module with highest weight j.
Proposition 30 Let
Γ
max
(7) := ¦j ∈ Γ(7)[(j +ρ
r
. j +ρ
r
) = :
Z
¦.
Let j ∈ Γ
max
(7). Then
1. j ∈ Λ
r
.
2. If . = 0 is a weight vector with weight j then . is a highest weight vector,
and hence the submodule l(r). is irreducible and equivalent to 7
µ
.
3. Let
Y
max
:=
¸
µ∈Γmax(Z)
7
µ
and
Y := l(r)Y
max
.
Then :
Z
−(ρ
r
. ρ
r
) is the maximal eigenvalue of Cas
r
on 7 and Y is the
corresponding eigenspace.
Proof. We ﬁrst show that
j ∈ Γ
max
⇒ j +ρ
r
∈ Λ
r
.
Suppose not, so there exists a n = 1. n ∈ \
r
such that
nj +nρ
r
∈ Λ
r
.
But n changes the sign of some of the positive roots (the number of such changes
being equal the length of n in terms of the generating reﬂections), and so ρ
r
−nρ
r
is a nontrivial sum of positive roots. Therefore
(nj +nρ
r
. ρ
r
−nρ
r
) ≥ 0. (ρ
r
−nρ
r
. ρ
r
−nρ
r
) 0
and
nj +ρ
r
= (nj +nρ
r
) + (ρ
r
−nρ
r
)
satisﬁes
(nj +ρ
r
. nj +ρ
r
) (nj +nρ
r
. nj +nρ
r
) = (j +ρ
r
. j +ρ) = :
Z
contradicting the deﬁnition of :
Z
. Now suppose that . is a weight vector
with weight j which is not a highest weight vector. Then there will be some
irreducible component of 7 containing . and having some weight j
such that
j
−j is a non trivial sum of positive roots. We have
j
+ρ
r
= (j
−j) + (j +ρ
r
)
so by the same argument we conclude that
(j
+ρ
r
. j
+ρ
r
) :
Z
since j + ρ
r
∈ Λ
r
, and again this is impossible. Hence . is a highest weight
vector implying that j ∈ Λ
r
. This proves 1) and 2).
We have already veriﬁed that the eigenvalue of the Casimir Cas
r
on any 7
γ
is (γ +ρ
r
. γ +ρ
r
) −(ρ
r
. ρ
r
). This proves 3).
Consider the irreducible representation \
ρ
of g corresponding to ρ = ρ
g
. By
the same arguments, any weight γ = ρ of \
ρ
lying in 1 must satisfy (γ. γ) <
(ρ. ρ) and hence any weight γ of \
ρ
satisfying (γ. γ) = (ρ. ρ) must be of the form
γ = nρ
for a unique n ∈ \. But
nρ = ρ −
¸
φ∈Jw
φ = ρ −φ
J
where
J
w
:= n(−Φ
+
) ∩ Φ
+
.
We know that all the weights of \
ρ
are of the form ρ −φ
J
as J ranges over all
subsets of Φ
+
. So
(ρ. ρ) ≥ (ρ −φ
J
. ρ −φ
J
) (10.21)
where we have strict inequality unless J = J
w
for some n ∈ \.
Now let λ ∈ Λ, let \
λ
be the corresponding irreducible module with highest
weight λ and let γ be a weight of \
λ
. As usual, let J denote a subset of the
positive roots, J ⊂ Φ
+
. We claim that
Proposition 31 We have
(λ +ρ. λ +ρ) ≥ (γ +ρ −φ
J
. γ +ρ −φ
J
) (10.22)
with strict inequality unless there exists a n ∈ \ such that
γ = nλ. and J = J
w
in which case the n is unique.
Proof. Choose n such that
n
−1
(γ +ρ −φ
J
) ∈ Λ.
Since n
−1
(γ) is a weight of \
λ
, λ−n
−1
(γ) is a sum (possibly empty) of positive
roots. Also n
−1
(ρ −φ
J
) is a weight of \
ρ
and hence ρ −n
−1
(ρ −φ
J
) is a sum
(possibly empty) of positive roots. Since
λ +ρ = (λ −n
−1
γ) + (ρ −n
−1
(ρ −φ
J
) +n
−1
(γ +ρ −φ
J
)).
we conclude that
(λ +ρ. λ +ρ) ≥ (n
−1
(γ +ρ −φ
J
). n
−1
(γ +ρ −φ
J
)) = (γ +ρ −φ
J
. γ +ρ −φ
J
)
with strict inequality unless λ − n
−1
(γ) = 0 = ρ − n
−1
(ρ − φ
J
), and this last
equality implies that J = J
w
. QED
We have the spin representation Spin ν where ν : r → C(p). Call this
module o. Consider
\
λ
⊗o
as a r module. Then, letting γ denote a weight of \
λ
, we have
Γ(\
λ
⊗o) = ¦j = γ +ρ
p
−φ
J
¦ (10.23)
where
ρ
p
=
1
2
¸
J∈Φ
+
p
φ. Φ
+
p
:= Φ
+
´Φ
+
r
.
In other words, Φ
p
are the roots of g which are not roots of r, or, put another
way, they are the weights of p considered as a r module. (Our equal rank
assumption says that 0 does not occur as one of these weights.) For the weights
j of \
λ
⊗o the form (10.23) gives
j +ρ
r
= γ +ρ −φ
J
. J ⊂ ∆
+
p
.
So if we set 7 = \
λ
⊗o as a r module, (10.22) says that
(λ +ρ. λ +ρ) ≥ :
Z
.
But we may take J = ∅ as one of our weights showing that
:
Z
= (λ +ρ
g
. λ +ρ
g
). (10.24)
To determine $\Gamma_{\max}(Z)$ as in Prop. 30 we again use Prop. 31 and (10.23): a weight $\mu = \gamma+\rho_{\mathfrak p}-\phi_J$ belongs to $\Gamma_{\max}(Z)$ if and only if $\gamma = w\lambda$ and $J = J_w$. But then
$$\rho_{\mathfrak g} - \phi_{J_w} = w\rho_{\mathfrak g}.$$
Since $\rho_{\mathfrak g} = \rho_{\mathfrak r}+\rho_{\mathfrak p}$ we see from the form (10.23) that
$$\mu+\rho_{\mathfrak r} = w(\lambda+\rho_{\mathfrak g}) \qquad (10.25)$$
where $w$ is unique, and
$$J_w\subset\Phi^+_{\mathfrak p}.$$
We claim that this condition is the same as the condition $w(D)\subset D_{\mathfrak r}$ defining our cross-section $C$. Indeed, $w\in C$ if and only if $(\phi, w\rho_{\mathfrak g}) > 0$ for all $\phi\in\Phi^+_{\mathfrak r}$. But $(\phi,w\rho) = (w^{-1}\phi,\rho) > 0$ if and only if $\phi\in w(\Phi^+)$. Since $J_w = w(-\Phi^+)\cap\Phi^+$, we see that $J_w\subset\Phi^+_{\mathfrak p}$ is equivalent to the condition $w\in C$.

Now for $\mu\in\Gamma_{\max}(Z)$ we have
$$\mu = w(\lambda+\rho)-\rho_{\mathfrak r} =: w\bullet\lambda \qquad (10.26)$$
where $\gamma = w(\lambda)$ and so has multiplicity one in $V_\lambda$.
Furthermore, we claim that the weight $\rho_{\mathfrak p}-\phi_{J_w}$ has multiplicity one in $S$. Indeed, consider the representation
$$Z_{\rho_{\mathfrak r}}\otimes S$$
of ${\mathfrak r}$. It has the weight $\rho = \rho_{\mathfrak r}+\rho_{\mathfrak p}$ as a highest weight, and in fact all of the weights of $V_{\rho_{\mathfrak g}}$ occur among its weights. Hence, on dimensional grounds, say from the Weyl character formula, we conclude that it coincides, as a representation of ${\mathfrak r}$, with the restriction of the representation $V_{\rho_{\mathfrak g}}$ to ${\mathfrak r}$. But since $\rho_{\mathfrak g}-\phi_{J_w} = w\rho_{\mathfrak g}$ has multiplicity one in $V_{\rho_{\mathfrak g}}$, we conclude that $\rho_{\mathfrak p}-\phi_{J_w}$ has multiplicity one in $S$.

We have proved that each of the $w\bullet\lambda$ has multiplicity one in $V_\lambda\otimes S$, with corresponding weight vectors
$$z_{w\bullet\lambda} := v_{w\lambda}\otimes e^-_{J_w}e_+.$$
So each of the submodules
$$Z_{w\bullet\lambda} := U({\mathfrak r})\,z_{w\bullet\lambda} \qquad (10.27)$$
occurs with multiplicity one in $V_\lambda\otimes S$. The length of $w\in C$ (in terms of the simple reflections of $W$ determined by $\Delta$) is the number of positive roots changed into negative roots, i.e. the cardinality of $J_w$. The parity of this cardinality gives the sign of $\det w$ and also determines whether $e^-_J e_+$ belongs to $S^+$ or to $S^-$. From Prop. 31 and equation (10.24) we know that the maximum eigenvalue of $\mathrm{Cas}_{\mathfrak r}$ on $V_\lambda\otimes S$ is
$$(\lambda+\rho_{\mathfrak g},\ \lambda+\rho_{\mathfrak g}) - (\rho_{\mathfrak r},\ \rho_{\mathfrak r}).$$
Now $K_\lambda\in\mathrm{End}(V_\lambda\otimes S)$ commutes with the action of ${\mathfrak r}$ and interchanges the two summands:
$$K_\lambda: V_\lambda\otimes S^+\to V_\lambda\otimes S^-, \qquad K_\lambda: V_\lambda\otimes S^-\to V_\lambda\otimes S^+.$$
Furthermore, by (10.19), the kernel of $K_\lambda^2$ is the eigenspace of $\mathrm{Cas}_{\mathfrak r}$ corresponding to the eigenvalue $(\lambda+\rho,\lambda+\rho)-(\rho_{\mathfrak r},\rho_{\mathfrak r})$. Thus
$$\mathrm{Ker}(K_\lambda^2) = \bigoplus_{w\in C} Z_{w\bullet\lambda}.$$
Each of these modules lies either in $V_\lambda\otimes S^+$ or in $V_\lambda\otimes S^-$, one or the other but not both. Hence
$$\mathrm{Ker}(K_\lambda^2) = \mathrm{Ker}(K_\lambda)$$
and so
$$\mathrm{Ker}(K_\lambda)\big|_{V_\lambda\otimes S^+} = \bigoplus_{w\in C,\ \det w = 1} Z_{w\bullet\lambda} \qquad (10.28)$$
and
$$\mathrm{Ker}(K_\lambda)\big|_{V_\lambda\otimes S^-} = \bigoplus_{w\in C,\ \det w = -1} Z_{w\bullet\lambda}. \qquad (10.29)$$
Let
$$L^\pm := \bigoplus_{w\in C,\ \det w = \pm 1} Z_{w\bullet\lambda}. \qquad (10.30)$$
It follows from (10.28) that $K_\lambda$ induces an injection of
$$(V_\lambda\otimes S^+)/L^+ \to V_\lambda\otimes S^-$$
which we can follow by the projection
$$V_\lambda\otimes S^- \to (V_\lambda\otimes S^-)/L^-.$$
Hence $K_\lambda$ induces a bijection
$$\tilde K_\lambda: (V_\lambda\otimes S^+)/L^+ \to (V_\lambda\otimes S^-)/L^-. \qquad (10.31)$$
In short, we have proved that the sequence
$$0\to L^+\to V_\lambda\otimes S^+\to V_\lambda\otimes S^-\to L^-\to 0 \qquad (10.32)$$
is exact in a very precise sense, where the middle map is the Kostant Dirac operator: each summand of $L^+$ occurs exactly once in $V_\lambda\otimes S^+$ and similarly for $L^-$. This gives a much more precise statement of Theorem 16 and a completely different proof.
10.7 The geometric index theorem.

Let $r$ be the representation of $G$ on the space ${\mathcal F}(G)$ of smooth functions, or on the space $L^2(G)$ of $L^2$ functions on $G$, coming from right multiplication. Thus
$$[r(a)f](b) = f(ba).$$
Then $K$ acts on ${\mathcal F}(G)\otimes S$ or on $L^2(G)\otimes S$ and centralizes the action of $\mathrm{diag}\,{\mathfrak r}$. If $U$ is a module for the subgroup $H$ (whose Lie algebra is ${\mathfrak r}$), we may consider ${\mathcal F}(G)\otimes S\otimes U$ or $L^2(G)\otimes S\otimes U$, and $K\otimes 1$ commutes with $\mathrm{diag}\,{\mathfrak r}\otimes 1$ and with the action $\rho$ of $H$ on $U$, i.e. with $1\otimes 1\otimes\rho$. If $H$ is connected, this implies that $K$ commutes with the diagonal action of $\tilde H$, the universal cover of $H$, on ${\mathcal F}(G)\otimes S\otimes U$ or $L^2(G)\otimes S\otimes U$ given by
$$h\mapsto r(h)\otimes\mathrm{Spin}(h)\otimes\rho(h), \qquad h\in\tilde H,$$
where $\mathrm{Spin}:\tilde H\to\mathrm{Spin}({\mathfrak p})$ is the group homomorphism corresponding to the Lie algebra homomorphism $\nu$. If $G/H$ is a spin manifold, the invariants under this action correspond to smooth or $L^2$ sections of ${\mathcal S}\otimes{\mathcal U}$, where ${\mathcal S}$ is the spin bundle of $G/H$ and ${\mathcal U}$ is the vector bundle on $G/H$ corresponding to $U$. Thus $K$ descends (by restriction) to a differential operator $\partial$ on $G/H$ and we shall compute its $G$-index for irreducible $U$. The key result, due to Landweber, asserts that if $U$ belongs to a multiplet coming from an irreducible $V$ of $G$, then this index is, up to a sign, equal to $V$. If $U$ does not belong to a multiplet, then this index is zero. We begin with some preliminary results due to Bott.
10.7.1 The index of equivariant Fredholm maps.

Let $E$ and $F$ be Hilbert spaces which are unitary modules for the compact Lie group $G$. Suppose that
$$E = \widehat{\bigoplus_n}\,E_n, \qquad F = \widehat{\bigoplus_n}\,F_n$$
are completed direct sum decompositions into subspaces which are $G$-invariant and finite dimensional, and that
$$T: E\to F$$
is a Fredholm map (finite dimensional kernel and cokernel) such that
$$T(E_n)\subset F_n.$$
We write
$$\mathrm{Index}_G\,T = \mathrm{Ker}\,T - \mathrm{Coker}\,T$$
as an element of $R(G)$, the ring of virtual representations of $G$. Thus $R(G)$ is the space of finite linear combinations $\sum_\lambda a_\lambda V_\lambda$, $a_\lambda\in{\mathbb Z}$, as $V_\lambda$ ranges over the irreducible representations of $G$. (Here, and in what follows, we are regarding any finite dimensional representation of $G$ as an element of $R(G)$ by its decomposition into irreducibles, and similarly the difference of any two finite dimensional representations is an element of $R(G)$.)

If we denote the restriction of $T$ to $E_n$ by $T_n$, then
$$\mathrm{Index}_G\,T = \sum_n \mathrm{Index}_G\,T_n$$
where all but a finite number of terms on the right vanish. For each $n$ we have the exact sequence
$$0\to\mathrm{Ker}\,T_n\to E_n\to F_n\to\mathrm{Coker}\,T_n\to 0.$$
Thus
$$\mathrm{Index}_G\,T_n = E_n - F_n$$
as elements of $R(G)$. Therefore we can write
$$\mathrm{Index}_G\,T = \sum_n (E_n - F_n) \qquad (10.33)$$
in $R(G)$, where all but a finite number of terms on the right vanish. We shall refer to this as Bott's equation.
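Bott's equation is easy to manipulate mechanically. The sketch below is only an illustration, not something from the text: virtual representations are stored as integer-valued dictionaries, the irreducible labels 'V0', 'V1', ... and the block data are invented, and the index is accumulated blockwise exactly as in (10.33).

```python
from collections import Counter

def index(blocks):
    """Bott's equation (10.33): Index_G T = sum over n of (E_n - F_n),
    with each E_n, F_n a dict {irreducible label: multiplicity}."""
    total = Counter()
    for E_n, F_n in blocks:
        total.update(E_n)      # add E_n
        total.subtract(F_n)    # subtract F_n
    return {k: v for k, v in total.items() if v != 0}

# Toy data for three invariant blocks of a hypothetical equivariant map T.
blocks = [({'V0': 1}, {'V0': 1}),   # kernel and cokernel cancel
          ({'V1': 2}, {'V1': 1}),   # contributes V1
          ({'V2': 1}, {'V3': 1})]   # contributes V2 - V3
print(index(blocks))                # {'V1': 1, 'V2': 1, 'V3': -1}
```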
10.7.2 Induced representations and Bott's theorem.

Let $H$ be a closed subgroup of $G$. Given any $H$-action $\rho$ on a vector space $U$, we consider the associated vector bundle $G\times_H U$ over the homogeneous space $G/H$. The sections of this bundle are then equivariant $U$-valued functions on $G$ satisfying $s(gh) = \rho(h)^{-1}s(g)$ for all $h\in H$. Applying the Peter-Weyl theorem, we can decompose the space of $L^2$ maps from $G$ to $U$ into a sum over the irreducible representations $V_\lambda$ of $G$,
$$L^2(G)\otimes U \;\cong\; \widehat{\bigoplus_\lambda}\, V_\lambda\otimes V_\lambda^*\otimes U,$$
with respect to the $G\times G\times H$ action $\ell\otimes r\otimes\rho$. The $H$-equivariance condition is equivalent to requiring that the functions be invariant under the diagonal $H$-action $h\mapsto r(h)\otimes\rho(h)$. Restricting the Peter-Weyl decomposition above to the $H$-invariant subspace, we obtain
$$L^2(G\times_H U) \;\cong\; \widehat{\bigoplus_\lambda}\, V_\lambda\otimes(V_\lambda^*\otimes U)^H \;\cong\; \widehat{\bigoplus_\lambda}\, V_\lambda\otimes\mathrm{Hom}_H(V_\lambda, U). \qquad (10.34)$$
The Lie group $G$ acts on the space of sections by $\ell(a)$, the left action of $G$ on functions, which is preserved by this construction. The space $L^2(G\times_H U)$ is thus an infinite dimensional representation of $G$.

The intertwining number of two representations gives us an inner product
$$\langle V, W\rangle_G = \dim_{\mathbb C}\mathrm{Hom}_G(V, W)$$
on $R(G)$, with respect to which the irreducible representations of $G$ form an orthonormal basis. Taking the formal completion of $R(G)$, we define $\hat R(G)$ to be the space of possibly infinite formal sums $\sum_\lambda a_\lambda V_\lambda$. The intertwining number then extends to a pairing $R(G)\times\hat R(G)\to{\mathbb Z}$.

If $H$ is a subgroup of $G$, every representation of $G$ automatically restricts to a representation of $H$. This gives us a pullback map $i^*: R(G)\to R(H)$, corresponding to the inclusion $i: H\to G$. The map $U\mapsto L^2(G\times_H U)$ discussed above assigns to each $H$-representation an induced infinite dimensional $G$-representation. Expressed in terms of our representation ring notation, this induction map becomes the homomorphism $i_*: R(H)\to\hat R(G)$ given by
$$i_*U = \sum_\lambda \langle i^*V_\lambda,\ U\rangle_H\, V_\lambda,$$
the formal adjoint to the pullback $i^*$. This is the content of the Frobenius reciprocity theorem.
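For finite groups the adjunction between $i^*$ and $i_*$ can be checked directly from character tables. The following sketch is an illustration only (the choice $G = S_3$ with the two-element subgroup generated by a transposition is mine, not the text's); it expands $i_*U$ for the trivial representation $U$ of the subgroup by computing the multiplicities $\langle i^*V_\lambda, U\rangle_H$.

```python
from fractions import Fraction

# Character table of S_3 on the classes {e}, {transpositions}, {3-cycles}.
G_chars = {'triv': [1, 1, 1], 'sign': [1, -1, 1], 'std': [2, 0, -1]}

# H = {e, (12)}: classes {e}, {(12)}; restrictions of the S_3 characters.
H_class_sizes = [1, 1]
restricted = {'triv': [1, 1], 'sign': [1, -1], 'std': [2, 0]}
triv_H = [1, 1]                       # trivial character of H

def inner(chi, psi, sizes, order):
    """<chi, psi> = (1/|group|) * sum over classes of size * chi * psi."""
    return Fraction(sum(s * a * b for s, a, b in zip(sizes, chi, psi)), order)

# i_* triv_H = sum over lambda of <i^* V_lambda, triv_H>_H  V_lambda
print({lam: inner(restricted[lam], triv_H, H_class_sizes, 2) for lam in G_chars})
# multiplicities 1, 0, 1: the induced representation is triv + std
```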
A homogeneous differential operator on $G/H$ is a differential operator $D: \Gamma({\mathcal E})\to\Gamma({\mathcal F})$ between two homogeneous vector bundles ${\mathcal E}$ and ${\mathcal F}$ that commutes with the left action of $G$ on sections. If the operator is elliptic, then its kernel and cokernel are both finite dimensional representations of $G$, and thus its $G$-index is a virtual representation in $R(G)$. In this case, the index takes a particularly elegant form.

Theorem 18 (Bott) If $D: \Gamma(G\times_H U_0)\to\Gamma(G\times_H U_1)$ is an elliptic homogeneous differential operator, then the $G$-equivariant index of $D$ is given by
$$\mathrm{Index}_G\,D = i_*(U_0 - U_1),$$
where $i_*(U_0-U_1)$ is a finite element in $\hat R(G)$, i.e. belongs to $R(G)$.

In particular, note that the index of a homogeneous differential operator depends only on the vector bundles involved and not on the operator itself! To prove the theorem, just use Bott's formula (10.33), where the subscript $n$ is replaced by $\lambda$ labeling the $G$-irreducibles.
10.7.3 Landweber's index theorem.

Suppose that $G$ is semisimple and simply connected and $H$ is a reductive subgroup of maximal rank. Suppose further that $G/H$ is a spin manifold. Then we can compose the spin representation $S = S^+\oplus S^-$ of $\mathrm{Spin}({\mathfrak p})$ with the lifted map $\mathrm{Spin}:\tilde H\to\mathrm{Spin}({\mathfrak p})$ to obtain a homogeneous vector bundle, the spin bundle ${\mathcal S}$ over $G/H$. For any representation of $H$ on $U$ the Kostant Dirac operator descends to an operator
$$\partial_U: \Gamma({\mathcal S}^\pm\otimes{\mathcal U})\to\Gamma({\mathcal S}^\mp\otimes{\mathcal U}).$$
(This operator has the same symbol as the Dirac operator arising from the Levi-Civita connection on $G/H$ twisted by ${\mathcal U}$, and has the same index by Bott's theorem. For the precise relation between this Dirac operator coming from $K$ and the Dirac operator coming from the Levi-Civita connection we refer to Landweber's thesis.)

The following theorem of Landweber gives an expression for the index of this Kostant Dirac operator. In particular, if we consider $G/T$, where $T$ is a maximal torus (which is always a spin manifold), this theorem becomes a version of the Borel-Weil-Bott theorem expressed in terms of spinors and the Dirac operator, instead of in its customary form involving holomorphic sections and Dolbeault cohomology.

Theorem 19 (Landweber) Let $G/H$ be a spin manifold, and let $U_\mu$ be an irreducible representation of $H$ with highest weight $\mu$. The $G$-equivariant index of the Dirac operator $\partial_{U_\mu}$ is the virtual $G$-representation
$$\mathrm{Index}_G\,\partial_{U_\mu} = (-1)^{\dim{\mathfrak p}/2}(-1)^w\, V_{w(\mu+\rho_H)-\rho_G} \qquad (10.35)$$
if there exists an element $w\in W_G$ in the Weyl group of $G$ such that the weight $w(\mu+\rho_H)-\rho_G$ is dominant for $G$. If no such $w$ exists, then $\mathrm{Index}_G\,\partial_{U_\mu} = 0$.
Proof. For any irreducible representation $V_\lambda$ of $G$ with highest weight $\lambda$ we have
$$V_\lambda\otimes(S^+ - S^-) = \sum_{w\in C}(-1)^w\, U_{w\bullet\lambda}$$
by [GKRS]. Hence
$$\mathrm{Hom}_{\mathfrak r}\big(V_\lambda\otimes(S^+-S^-),\ U_\mu\big) = 0$$
if $\mu\neq w\bullet\lambda$ for every $w\in C$, while
$$\mathrm{Hom}_{\mathfrak r}\big(V_\lambda\otimes(S^+-S^-),\ U_\mu\big) = (-1)^w$$
if $\mu = w\bullet\lambda$. But, by (10.33) and Theorem 18 we have
$$\mathrm{Index}_G\,\partial_U = \widehat{\bigoplus_\lambda}\, V_\lambda\otimes\big(V_\lambda^*\otimes(S^+-S^-)\otimes U_\mu\big)^{\mathfrak r} = \widehat{\bigoplus_\lambda}\,\mathrm{Hom}_{\mathfrak r}\big(V_\lambda\otimes(S^+-S^-)^*,\ U_\mu\big)\otimes V_\lambda.$$
Now $(S^+-S^-)^* = S^+-S^-$ if $\dim{\mathfrak p}\equiv 0\pmod 4$ while $(S^+-S^-)^* = S^--S^+$ if $\dim{\mathfrak p}\equiv 2\pmod 4$. Hence
$$\mathrm{Index}_G\,\partial_{U_\mu} = (-1)^{\dim{\mathfrak p}/2}\,\widehat{\bigoplus_\lambda}\,\mathrm{Hom}_{\mathfrak r}\big(V_\lambda\otimes(S^+-S^-),\ U_\mu\big)\otimes V_\lambda. \qquad (10.36)$$
The right hand side of (10.36) vanishes if $\mu$ does not belong to a multiplet, i.e. is not of the form
$$w\bullet\lambda = w(\lambda+\rho_{\mathfrak g})-\rho_{\mathfrak r}$$
for some $\lambda$. The condition $w\bullet\lambda = \mu$ can thus be written as
$$w^{-1}(\mu+\rho_{\mathfrak r})-\rho_{\mathfrak g} = \lambda.$$
If this equation does hold, then we get the formula in the theorem (with $w$ replaced by $w^{-1}$, which has the same determinant). QED
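A rank-one sanity check of (10.35) can be carried out by hand or by machine. In the sketch below the choices are mine, not the text's: $G = SU(2)$, $H = T$ a maximal torus, weights of $T$ identified with integers, the positive root normalized to $2$ so that $\rho_G = 1$, $\rho_H = 0$ and $\dim{\mathfrak p} = 2$. The code simply evaluates the right hand side of (10.35); for $\mu = 0$ no Weyl element works and the index vanishes, as the theorem asserts.

```python
def landweber_index(mu):
    """Evaluate the right hand side of (10.35) for G = SU(2), H = T,
    returning (sign, highest weight) or None when the index is zero."""
    rho_G, rho_H, dim_p = 1, 0, 2
    for w, det_w in ((1, 1), (-1, -1)):       # W_G = {1, s}, acting by sign
        weight = w * (mu + rho_H) - rho_G
        if weight >= 0:                        # dominant for SU(2)
            return (-1) ** (dim_p // 2) * det_w, weight
    return None

for mu in range(-3, 4):
    print(mu, landweber_index(mu))
```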
In general, if $G/H$ is not a spin manifold, then in order to obtain a similar result we must instead consider the operator
$$\partial_{U_\mu}: \big(L^2(G)\otimes S^\pm\otimes U_\mu\big)^{\mathfrak r}\to\big(L^2(G)\otimes S^\mp\otimes U_\mu\big)^{\mathfrak r}$$
viewed as an operator on $G$, restricted to the space of $(S\otimes U_\mu)$-valued functions on $G$ that are invariant under the diagonal ${\mathfrak r}$-action $\nu(Z) = \mathrm{diag}(Z)+\sigma(Z)$, where $\sigma$ is the ${\mathfrak r}$-action on $U_\mu$. Note that if $S\otimes U_\mu$ is induced by a representation of the Lie group $H$, then this operator descends to a well-defined operator on $G/H$ as before. In general, the $G$-equivariant index of this operator $\partial_{U_\mu}$ is once again given by (10.35). To prove this, we note that Bott's identity (10.33) and his theorem continue to hold for the induction map $i_*: R({\mathfrak r})\to\hat R({\mathfrak g})$ using the representation rings for the Lie algebras instead of the Lie groups. Working in the Lie algebra context, we no longer need concern ourselves with the topological obstructions occurring in the global Lie group picture. The rest of the proof of Theorem 19 continues unchanged.
Chapter 11
The center of U(g).
The purpose of this chapter is to study the center of the universal enveloping
algebra of a semisimple Lie algebra g. We have already made use of the (second
order) Casimir element.
11.1 The Harish-Chandra isomorphism.

Let us return to the situation and notation of Section 7.3. We have the monomial basis
$$f_1^{i_1}\cdots f_m^{i_m}\, h_1^{j_1}\cdots h_n^{j_n}\, e_1^{k_1}\cdots e_m^{k_m}$$
of $U({\mathfrak g})$, the decomposition
$$U({\mathfrak g}) = U({\mathfrak h})\oplus\big(U({\mathfrak g}){\mathfrak n}_+ + {\mathfrak n}_-U({\mathfrak g})\big)$$
and the projection
$$\gamma: U({\mathfrak g})\to U({\mathfrak h})$$
onto the first factor of this decomposition. This projection restricts to a projection, also denoted by $\gamma$,
$$\gamma: Z({\mathfrak g})\to U({\mathfrak h}).$$
The projection $\gamma: Z({\mathfrak g})\to U({\mathfrak h})$ is a bit awkward. However Harish-Chandra showed that by making a slight modification in $\gamma$ we get an isomorphism of $Z({\mathfrak g})$ onto the ring of Weyl group invariants of $U({\mathfrak h}) = S({\mathfrak h})$. Harish-Chandra's modification is as follows: As usual, let
$$\rho := \frac12\sum_{\alpha>0}\alpha.$$
Recall that for each $i$, the reflection $s_i$ sends $\alpha_i\mapsto-\alpha_i$ and permutes the remaining positive roots. Hence
$$s_i\rho = \rho-\alpha_i.$$
But by definition,
$$s_i\rho = \rho-\langle\rho,\alpha_i\rangle\alpha_i,$$
and so
$$\langle\rho,\alpha_i\rangle = 1$$
for all $i = 1,\dots,m$. So
$$\rho = \omega_1+\cdots+\omega_m,$$
i.e. $\rho$ is the sum of the fundamental weights.

11.1.1 Statement

Define
$$\sigma: {\mathfrak h}\to U({\mathfrak h}), \qquad \sigma(h) = h-\rho(h)1. \qquad (11.1)$$
This is a linear map from ${\mathfrak h}$ to the commutative algebra $U({\mathfrak h})$ and hence, by the universal property of $U({\mathfrak h})$, extends to an algebra homomorphism of $U({\mathfrak h})$ to itself which is clearly an automorphism. We will continue to denote this automorphism by $\sigma$. Set
$$\gamma_H := \sigma\circ\gamma.$$
Then Harish-Chandra's theorem asserts that
$$\gamma_H: Z({\mathfrak g})\to U({\mathfrak h})^W$$
and is an isomorphism of algebras.
11.1.2 Example of sl(2).

To see what is going on let's look at this simplest case. The Casimir of degree two is
$$\frac12 h^2 + ef + fe,$$
as can be seen from the definition. Or it can be checked directly that this element is in the center. It is not written in our standard form, which requires that the $f$ be on the left. But $ef = fe + [e,f] = fe + h$. So the way of writing this element in terms of the above basis is
$$\frac12 h^2 + h + 2fe,$$
and applying $\gamma$ to it yields
$$\frac12 h^2 + h.$$
There is only one positive root and its value on $h$ is $2$, so $\rho(h) = 1$. Thus $\sigma$ sends $\frac12 h^2 + h$ into
$$\frac12(h-1)^2 + h - 1 = \frac12 h^2 - h + \frac12 + h - 1 = \frac12(h^2-1).$$
The Weyl group in this case is just the identity together with the reflection $h\mapsto -h$, and the expression on the right is clearly Weyl group invariant.
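This little computation is easy to confirm symbolically. The sketch below is an illustrative check, not part of the text: it takes the image of the Casimir under $\gamma$, applies the Harish-Chandra shift $h\mapsto h-1$, and verifies the invariance under $h\mapsto -h$.

```python
from sympy import symbols, Rational, expand, simplify

h = symbols('h')

# gamma kills the term 2fe, leaving (1/2)h^2 + h; the Harish-Chandra
# twist sigma then substitutes h -> h - rho(h) = h - 1.
gamma_image = Rational(1, 2)*h**2 + h
hc_image = expand(gamma_image.subs(h, h - 1))

print(hc_image)                                    # h**2/2 - 1/2
print(simplify(hc_image - hc_image.subs(h, -h)))   # 0, i.e. Weyl invariant
```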
11.1.3 Using Verma modules to prove that $\gamma_H: Z({\mathfrak g})\to U({\mathfrak h})^W$.

Any $\mu\in{\mathfrak h}^*$ can be thought of as a linear map of ${\mathfrak h}$ into the commutative algebra ${\mathbb C}$ and hence extends to an algebra homomorphism of $U({\mathfrak h})$ to ${\mathbb C}$. If we regard an element of $U({\mathfrak h}) = S({\mathfrak h})$ as a polynomial function on ${\mathfrak h}^*$, then this homomorphism is just evaluation at $\mu$.

From our definitions,
$$(\lambda-\rho)\big(\gamma(z)\big) = \gamma_H(z)(\lambda) \qquad\forall\, z\in Z({\mathfrak g}).$$
Let us consider the Verma module $\mathrm{Verm}(\lambda-\rho)$, where we denote its highest weight vector by $v_+$. For any $z\in Z({\mathfrak g})$ we have $hzv_+ = zhv_+ = (\lambda-\rho)(h)\,zv_+$ and $e_izv_+ = ze_iv_+ = 0$. So $zv_+$ is a highest weight vector with weight $\lambda-\rho$ and hence must be some multiple of $v_+$. Call this multiple $\varphi_\lambda(z)$. Then
$$z\,f_1^{i_1}\cdots f_m^{i_m}v_+ = f_1^{i_1}\cdots f_m^{i_m}\varphi_\lambda(z)v_+,$$
so $z$ acts as scalar multiplication by $\varphi_\lambda(z)$ on all of $\mathrm{Verm}(\lambda-\rho)$. To see what this scalar is, observe that since $z-\gamma(z)\in U({\mathfrak g}){\mathfrak n}_+$, we see that $z$ has the same action on $v_+$ as does $\gamma(z)$, which is multiplication by $(\lambda-\rho)(\gamma(z)) = \lambda(\gamma_H(z))$. In other words,
$$\varphi_\lambda(z) = \lambda(\gamma_H(z)) = \gamma_H(z)(\lambda).$$
Notice that in this argument we only used the fact that $\mathrm{Verm}(\lambda-\rho)$ is a cyclic highest weight module: if $V$ is any cyclic highest weight module with highest weight $\mu-\rho$ then $z$ acts on it as multiplication by $\varphi_\mu(z) = \mu(\gamma_H(z)) = \gamma_H(z)(\mu)$. We will use this observation in a moment.
Getting back to the case of $\mathrm{Verm}(\lambda-\rho)$, for a simple root $\alpha = \alpha_i$ let $m = m_i := \langle\lambda,\alpha_i\rangle$ and suppose that $m$ is an integer. The element
$$f_i^{m}v_+ \in \mathrm{Verm}(\lambda-\rho)_\mu$$
where
$$\mu = \lambda-\rho-m\alpha = s_i\lambda-\rho.$$
Now from the point of view of the ${\mathfrak{sl}}(2)$ generated by $e_i, f_i$, the vector $v_+$ is a maximal weight vector with weight $m-1$. Hence $e_if_i^{m}v_+ = 0$. Since $[e_j,f_i] = 0$ for $i\neq j$, we have $e_jf_i^{m}v_+ = 0$ as well. So $f_i^{m}v_+\neq 0$ is a maximal weight vector with weight $s_i\lambda-\rho$. Call the highest weight module it generates $M$. Then from $M$ we see that
$$\varphi_{s_i\lambda}(z) = \varphi_\lambda(z).$$
Hence, since the $s_i$ generate $W$, we have proved that
$$(\gamma_H(z))(w\lambda) = (\gamma_H(z))(\lambda) \qquad\forall\, w\in W$$
if $\lambda$ is dominant integral. But two polynomials which agree on all dominant integral weights must agree everywhere. We have shown that the image of $\gamma_H$ lies in $S({\mathfrak h})^W$.

Furthermore, we have
$$z_1z_2 - \gamma(z_1)\gamma(z_2) = z_1\big(z_2-\gamma(z_2)\big) + \gamma(z_2)\big(z_1-\gamma(z_1)\big) \in U({\mathfrak g}){\mathfrak n}_+.$$
So
$$\gamma(z_1z_2) = \gamma(z_1)\gamma(z_2).$$
This says that $\gamma$ is an algebra homomorphism on $Z({\mathfrak g})$, and since $\gamma_H = \sigma\circ\gamma$ where $\sigma$ is an automorphism, we conclude that $\gamma_H$ is a homomorphism of algebras.

Equally well, we can argue directly from the fact that $z\in Z({\mathfrak g})$ acts as multiplication by $\varphi_\lambda(z) = \gamma_H(z)(\lambda)$ on $\mathrm{Verm}(\lambda-\rho)$ that $\gamma_H$ is an algebra homomorphism.
11.1.4 Outline of proof of bijectivity.

To complete the proof of Harish-Chandra's theorem we must prove that $\gamma_H$ is a bijection. For this we will introduce some intermediate spaces and homomorphisms. Let $Y({\mathfrak g}) := S({\mathfrak g})^{\mathfrak g}$ denote the subspace fixed by the adjoint representation (extended to the symmetric algebra by derivations). This is a subalgebra, and the filtration on $S({\mathfrak g})$ induces a filtration on $Y({\mathfrak g})$. We shall produce an isomorphism
$$\bar\jmath: Y({\mathfrak g})\to S({\mathfrak h})^W.$$
We also have a canonical linear isomorphism between $S({\mathfrak g})$ and $U({\mathfrak g})$, given by the symmetric embedding of the elements of $S^k({\mathfrak g})$ into $U({\mathfrak g})$; let $s$ denote the inverse isomorphism restricted to $Z({\mathfrak g})$. We shall see that $s: Z({\mathfrak g})\to Y({\mathfrak g})$ is an isomorphism. Finally, define
$$S_k({\mathfrak g}) = S^0({\mathfrak g})\oplus S^1({\mathfrak g})\oplus\cdots\oplus S^k({\mathfrak g})$$
so as to get a filtration on $S({\mathfrak g})$. This induces a filtration on $S({\mathfrak h})\subset S({\mathfrak g})$. We shall show that for any $z\in U_k({\mathfrak g})\cap Z({\mathfrak g})$ we have
$$(\bar\jmath\circ s)(z) \equiv \gamma_H(z) \mod S_{k-1}({\mathfrak g}).$$
This proves inductively that $\gamma_H$ is an isomorphism, since $s$ and $\bar\jmath$ are. Also, since $\sigma$ does not change the highest order component of an element in $S({\mathfrak h})$, it will be enough to prove that for $z\in U_k({\mathfrak g})\cap Z({\mathfrak g})$ we have
$$(\bar\jmath\circ s)(z) \equiv \gamma(z) \mod S_{k-1}({\mathfrak g}). \qquad (11.2)$$
We now proceed to the details.
11.1.5 Restriction from $S({\mathfrak g}^*)^{\mathfrak g}$ to $S({\mathfrak h}^*)^W$.

We first discuss polynomials on ${\mathfrak g}$, that is, elements of $S({\mathfrak g}^*)$. Let $\tau$ be a finite dimensional representation of ${\mathfrak g}$, and consider the symmetric function $F$ of degree $k$ on ${\mathfrak g}$ given by
$$(X_1,\dots,X_k)\mapsto\sum_\pi\mathrm{tr}\big(\tau(X_{\pi 1})\cdots\tau(X_{\pi k})\big),$$
where the sum is over all permutations. For any $Y\in{\mathfrak g}$, by definition,
$$(YF)(X_1,\dots,X_k) = F([Y,X_1],X_2,\dots,X_k)+\cdots+F(X_1,\dots,X_{k-1},[Y,X_k]).$$
Applied to the function
$$(X_1,\dots,X_k)\mapsto\mathrm{tr}\big(\tau(X_1)\cdots\tau(X_k)\big)$$
the terms telescope, leaving
$$\mathrm{tr}\big(\tau(Y)\tau(X_1)\cdots\tau(X_k)\big) - \mathrm{tr}\big(\tau(X_1)\cdots\tau(X_k)\tau(Y)\big) = 0.$$
In other words, the function
$$X\mapsto\mathrm{tr}\,\tau(X)^k$$
belongs to $S({\mathfrak g}^*)^{\mathfrak g}$. Now since ${\mathfrak h}$ is a subspace of ${\mathfrak g}$, the restriction map induces a homomorphism
$$r: S({\mathfrak g}^*)\to S({\mathfrak h}^*).$$
If $F\in S({\mathfrak g}^*)^{\mathfrak g}$ then, as a function on ${\mathfrak g}$, it is invariant under the automorphisms
$$\tau_i := (\exp\,\mathrm{ad}\,e_i)(\exp\,\mathrm{ad}(-f_i))(\exp\,\mathrm{ad}\,e_i);$$
each $\tau_i$ preserves ${\mathfrak h}$ and acts on it as the simple reflection $s_i$, and hence
$$r: S({\mathfrak g}^*)^{\mathfrak g}\to S({\mathfrak h}^*)^W.$$
If $F\in S({\mathfrak g}^*)^{\mathfrak g}$ is such that its restriction to ${\mathfrak h}$ vanishes, then its value at any element which is conjugate to an element of ${\mathfrak h}$ (under the subgroup of automorphisms of ${\mathfrak g}$ generated by the $\tau_i$) must also vanish. But these include a dense set in ${\mathfrak g}$, so $F$, being continuous, must vanish everywhere. So the restriction of $r$ to $S({\mathfrak g}^*)^{\mathfrak g}$ is injective.
To prove that it is surjective, it is enough to prove that $S({\mathfrak h}^*)^W$ is spanned by the restrictions to ${\mathfrak h}$ of the functions of the form $X\mapsto\mathrm{tr}\,\tau(X)^k$, as $\tau$ ranges over all finite dimensional representations and $k$ ranges over all non-negative integers. Now the $k$-th powers of the elements of ${\mathfrak h}^*$ span $S^k({\mathfrak h}^*)$. So we can write any element of $S({\mathfrak h}^*)^W$ as a linear combination of the $\mathrm{Av}(\lambda^k)$, where $\mathrm{Av}$ denotes averaging over $W$. So it is enough to show that for any dominant weight $\lambda$ we can express $\mathrm{Av}(\lambda^k)$ in terms of the $\mathrm{tr}\,\tau^k$. Let $D_\lambda$ denote the (finite) set of all dominant weights $\prec\lambda$. Let $\tau$ denote the finite dimensional representation with highest weight $\lambda$. Then $\mathrm{tr}\,\tau(X)^k - \mathrm{Av}(\lambda^k)(X)$ is a combination of the $\mathrm{Av}(\mu^k)(X)$ with $\mu\in D_\lambda$. Hence by induction on the finite set $D_\lambda$ we get the desired result. In short, we have proved that
$$r: S({\mathfrak g}^*)^{\mathfrak g}\to S({\mathfrak h}^*)^W$$
is bijective.
11.1.6 From $S({\mathfrak g})^{\mathfrak g}$ to $S({\mathfrak h})^W$.

Now we transfer all this information from $S({\mathfrak g}^*)$ to $S({\mathfrak g})$: use the Killing form to identify ${\mathfrak g}$ with ${\mathfrak g}^*$ and hence get an isomorphism
$$\alpha: S({\mathfrak g})\to S({\mathfrak g}^*).$$
Similarly, let
$$\beta: S({\mathfrak h})\to S({\mathfrak h}^*)$$
be the isomorphism induced by the restriction of the Killing form to ${\mathfrak h}$, which we know to be non-degenerate. Notice that $\beta$ commutes with the action of the Weyl group. We can write
$$S({\mathfrak g}) = S({\mathfrak h}) + J$$
where $J$ is the ideal in $S({\mathfrak g})$ generated by ${\mathfrak n}_+$ and ${\mathfrak n}_-$. Let
$$j: S({\mathfrak g})\to S({\mathfrak h})$$
denote the homomorphism obtained by quotienting out by this ideal. We claim that the diagram
$$\begin{array}{ccc}
S({\mathfrak g}) & \stackrel{\alpha}{\longrightarrow} & S({\mathfrak g}^*)\\
\downarrow{\scriptstyle j} & & \downarrow{\scriptstyle r}\\
S({\mathfrak h}) & \stackrel{\beta}{\longrightarrow} & S({\mathfrak h}^*)
\end{array}$$
commutes. Indeed, since all maps are algebra homomorphisms, it is enough to check this on generators, that is on elements of ${\mathfrak g}$. If $X\in{\mathfrak g}$ and $h\in{\mathfrak h}$, then
$$\langle h,\ r\alpha(X)\rangle = \langle h,\ \alpha(X)\rangle = (h,X),$$
where the scalar product on the right is the Killing form. But since $h$ is orthogonal under the Killing form to ${\mathfrak n}_+ + {\mathfrak n}_-$, we have
$$(h,X) = (h,jX) = \langle h,\ \beta(jX)\rangle. \qquad\text{QED}$$
Upon restriction to the ${\mathfrak g}$- and $W$-invariants, we have proved that the right hand column is a bijection, and hence so is the left hand column, since $\beta$ is a $W$-module morphism. Recalling that we have defined $Y({\mathfrak g}) := S({\mathfrak g})^{\mathfrak g}$, we have shown that the restriction of $j$ to $Y({\mathfrak g})$ is an isomorphism, call it $\bar\jmath$:
$$\bar\jmath: Y({\mathfrak g})\to S({\mathfrak h})^W.$$
11.1.7 Completion of the proof.

Now we have a canonical linear bijection of $S({\mathfrak g})$ with $U({\mathfrak g})$ which maps
$$S({\mathfrak g})\ni X_1\cdots X_r\ \mapsto\ \frac{1}{r!}\sum_{\pi\in\Sigma_r}X_{\pi 1}\cdots X_{\pi r},$$
where the multiplication on the left is in $S({\mathfrak g})$, the multiplication on the right is in $U({\mathfrak g})$, and $\Sigma_r$ denotes the permutation group on $r$ letters. This map is a ${\mathfrak g}$-module morphism. In particular it induces the bijection
$$s: Z({\mathfrak g})\to Y({\mathfrak g}).$$
Our proof will be complete once we prove (11.2). This is a calculation: write
$$u_{AJB} := f^Ah^Je^B$$
for our usual monomial basis, where the multiplication is in the universal enveloping algebra. Let us also write
$$p_{AJB} := f^Ah^Je^B \in S({\mathfrak g}),$$
where now the powers and multiplication are taken in $S({\mathfrak g})$. The image of $u_{AJB}$ under the canonical isomorphism of $U({\mathfrak g})$ with $S({\mathfrak g})$ will not be $p_{AJB}$ in general, but will differ from $p_{AJB}$ by a term of lower filtration degree. Now the projection $\gamma: U({\mathfrak g})\to U({\mathfrak h})$ coming from the decomposition
$$U({\mathfrak g}) = U({\mathfrak h})\oplus\big({\mathfrak n}_-U({\mathfrak g})+U({\mathfrak g}){\mathfrak n}_+\big)$$
sends $u_{AJB}\mapsto 0$ unless $A = 0 = B$, and is the identity on $u_{0J0}$. Similarly,
$$j(p_{AJB}) = 0\ \text{unless}\ A = 0 = B, \qquad j(p_{0J0}) = p_{0J0} = h^J.$$
These two facts complete the proof of (11.2). QED
11.2 Chevalley's theorem.

Harish-Chandra's theorem says that the center of the universal enveloping algebra of a semisimple Lie algebra is isomorphic to the ring of Weyl group invariants in the polynomial algebra $S({\mathfrak h})$. Chevalley's theorem asserts that this ring is in fact a polynomial ring in $\ell$ generators, where $\ell = \dim{\mathfrak h}$. To prove Chevalley's theorem we need to call on some facts from field theory and from the representation theory of finite groups.

11.2.1 Transcendence degrees.

A field extension $K : k$ is finitely generated if there are elements $\alpha_1,\dots,\alpha_n$ of $K$ so that $K = k(\alpha_1,\dots,\alpha_n)$. In other words, every element of $K$ can be written as a rational expression in the $\alpha_1,\dots,\alpha_n$.

Elements $t_1,\dots,t_k$ of $K$ are called (algebraically) independent (over $k$) if there is no non-trivial polynomial $p$ with coefficients in $k$ such that
$$p(t_1,\dots,t_k) = 0.$$
Lemma 15 If $K : k$ is finitely generated, then there exists an intermediate field $M$ such that $M = k(\alpha_1,\dots,\alpha_r)$, where the $\alpha_1,\dots,\alpha_r$ are independent transcendental elements, and $K : M$ is a finite extension (i.e. $K$ has finite dimension over $M$ as a vector space).

Proof. We are assuming that $K = k(\beta_1,\dots,\beta_q)$. If all the $\beta_i$ are algebraic, then $K : k$ is a finite extension. Otherwise one of the $\beta_i$ is transcendental. Call this $\alpha_1$. If $K : k(\alpha_1)$ is a finite extension we are done. Otherwise one of the remaining $\beta_i$ is transcendental over $k(\alpha_1)$. Call it $\alpha_2$. So $\alpha_1,\alpha_2$ are independent. Proceed.
Lemma 16 If there is another collection $\gamma_1,\dots,\gamma_s$ of independent transcendental elements so that $K : k(\gamma_1,\dots,\gamma_s)$ is finite, then $s = r$. This common number is called the transcendence degree of $K$ over $k$.

Proof. If $s = 0$, then every element of $K$ is algebraic, contradicting the assumption that the $\alpha_1,\dots,\alpha_r$ are independent, unless $r = 0$. So we may assume that $s > 0$. Since $K : M$ is finite, there is a polynomial $p$ such that
$$p(\gamma_1,\alpha_1,\dots,\alpha_r) = 0.$$
This polynomial must contain at least one $\alpha$, since $\gamma_1$ is transcendental. Renumber if necessary so that $\alpha_1$ occurs in $p$. Then $\alpha_1$ is algebraic over $k(\gamma_1,\alpha_2,\dots,\alpha_r)$ and $K : k(\gamma_1,\alpha_2,\dots,\alpha_r)$ is finite. Continuing this way we can successively replace $\alpha$'s by $\gamma$'s until we conclude that $K : k(\gamma_1,\dots,\gamma_r)$ is finite. If $s > r$ the $\gamma$'s would then not be algebraically independent (each of $\gamma_{r+1},\dots,\gamma_s$ being algebraic over $k(\gamma_1,\dots,\gamma_r)$), so $s\leq r$, and similarly $r\leq s$.

Notice that if $\alpha_1,\dots,\alpha_n$ are algebraically independent then $k(\alpha_1,\dots,\alpha_n)$ is isomorphic to the field of rational functions in $n$ indeterminates, $k(t_1,\dots,t_n)$, since $k(\alpha_1,\dots,\alpha_n) = k(\alpha_1,\dots,\alpha_{n-1})(\alpha_n)$ by clearing denominators.
11.2.2 Symmetric polynomials.

The symmetric group $S_n$ acts on $k(t_1,\dots,t_n)$ by permuting the variables. The fixed field, $F$, contains all the symmetric polynomials, in particular the elementary symmetric polynomials $s_1,\dots,s_n$, where $s_r$ is the sum of all possible distinct products of the variables taken $r$ at a time. Using the general theory of field extensions, we can conclude that

Proposition 32 $F = k(s_1,\dots,s_n)$.

The strategy of the proof is to first show that the dimension of the extension $k(t_1,\dots,t_n) : k(s_1,\dots,s_n)$ is $\leq n!$, and then a basic theorem in Galois theory (which we shall recall) says that the dimension of $k(t_1,\dots,t_n) : F$ equals the order of the group $G = S_n$, which is $n!$. Since $k(s_1,\dots,s_n)\subset F$ this will imply the proposition. Consider the extensions
$$k(t_1,\dots,t_n)\supset k(s_1,\dots,s_n,t_n)\supset k(s_1,\dots,s_n).$$
We have the equation $f(t_n) = 0$ where
$$f(t) := t^n - s_1t^{n-1} + s_2t^{n-2} - \cdots + (-1)^ns_n.$$
This shows that the dimension of the field extension $k(s_1,\dots,s_n,t_n) : k(s_1,\dots,s_n)$ is $\leq n$. If we let $\sigma_1,\dots,\sigma_{n-1}$ denote the elementary symmetric functions in the $n-1$ variables $t_1,\dots,t_{n-1}$, we have
$$s_j = t_n\sigma_{j-1} + \sigma_j,$$
so
$$k(s_1,\dots,s_n,t_n) = k(t_n,\sigma_1,\dots,\sigma_{n-1}).$$
By induction we may assume that
$$\dim k(t_1,\dots,t_n) : k(s_1,\dots,s_n,t_n) = \dim k(t_n)(t_1,\dots,t_{n-1}) : k(t_n)(\sigma_1,\dots,\sigma_{n-1}) \leq (n-1)!,$$
proving that
$$\dim k(t_1,\dots,t_n) : k(s_1,\dots,s_n) \leq n!.$$
A fundamental theorem of Galois theory says

Theorem 20 Let $G$ be a finite subgroup of the group of automorphisms of the field $K$, and let $F$ be the fixed field. Then
$$\dim[K : F] = \#G.$$

This theorem, whose proof we will recall in the next section, then completes the proof of the proposition. The proposition implies that every symmetric polynomial is a rational function of the elementary symmetric functions.
In fact, every symmetric polynomial is a polynomial in the elementary symmetric functions, which is a stronger result. This is proved as follows: put the lexicographic order on the set of $n$-tuples of integers, and therefore on the set of monomials; so $x_1^{i_1}\cdots x_n^{i_n}$ is greater than $x_1^{j_1}\cdots x_n^{j_n}$ in this ordering if $i_1 > j_1$, or if $i_1 = j_1$ and $i_2 > j_2$, or etc. Any polynomial has a "leading monomial", the greatest monomial relative to this lexicographic order. The leading monomial of a product of polynomials is the product of their leading monomials. We shall prove our contention by induction on the order of the leading monomial. Notice that if $p$ is a symmetric polynomial, then the exponents $i_1,\dots,i_n$ of its leading term must satisfy
$$i_1\geq i_2\geq\cdots\geq i_n,$$
for otherwise the monomial obtained by switching two adjacent exponents (which occurs with the same coefficient in the symmetric polynomial $p$) would be strictly higher in our lexicographic order. Suppose that the coefficient of this leading monomial is $a$. Then
$$q = a\,s_1^{i_1-i_2}s_2^{i_2-i_3}\cdots s_{n-1}^{i_{n-1}-i_n}s_n^{i_n}$$
has the same leading monomial with the same coefficient. Hence $p-q$ has a smaller leading monomial. QED
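The induction just described is completely effective, and a direct transcription makes a convenient check. The sketch below is illustrative code, not part of the text: the symbols s1, s2, s3 stand in for the elementary symmetric polynomials, the leading monomial is taken with respect to the lexicographic order used above, and the power sum $x_1^2+x_2^2+x_3^2$ comes out as $s_1^2-2s_2$.

```python
from itertools import combinations
from sympy import symbols, Poly, prod, expand

x = symbols('x1 x2 x3')
s = symbols('s1 s2 s3')              # stand-ins for e_1, e_2, e_3

def elem(r, vars):
    """Elementary symmetric polynomial of degree r in the given variables."""
    return sum(prod(c) for c in combinations(vars, r))

def to_elementary(p, vars, svars):
    """Greedy reduction by lexicographically leading monomials, exactly as
    in the inductive proof above."""
    es = [elem(r, vars) for r in range(1, len(vars) + 1)]
    answer, p = 0, expand(p)
    while p != 0:
        d = Poly(p, *vars).as_dict()
        exps = max(d)                            # lex-leading exponent tuple
        a = d[exps]                              # its coefficient
        powers = [exps[i] - exps[i + 1] for i in range(len(vars) - 1)] + [exps[-1]]
        answer += a * prod(sv**k for sv, k in zip(svars, powers))
        p = expand(p - a * prod(er**k for er, k in zip(es, powers)))
    return answer

# x1^2 + x2^2 + x3^2  ->  s1^2 - 2*s2  (up to the order of the terms)
print(to_elementary(x[0]**2 + x[1]**2 + x[2]**2, x, s))
```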
11.2.3 Fixed fields.

We now turn to the proof of the theorem of the previous section.

Lemma 17 Any set of distinct monomorphisms of a field $K$ into a field $L$ is linearly independent over $L$.

Let $\lambda_1,\dots,\lambda_n$ be distinct monomorphisms of $K\to L$. The assertion is that there can not exist $a_1,\dots,a_n\in L$, not all zero, such that
$$a_1\lambda_1(x)+\cdots+a_n\lambda_n(x) \equiv 0 \qquad\forall\, x\in K.$$
Assume the contrary, so that such an equation holds, and we may assume that none of the $a_i = 0$. Looking at all such possible equations, we may pick one which involves the fewest number of terms, and we may assume that this is the equation we are studying. In other words, no such equation holds with fewer terms. Since $\lambda_1\neq\lambda_n$, there exists a $y\in K$ such that $\lambda_1(y)\neq\lambda_n(y)$, and in particular $y\neq 0$. Substituting $yx$ for $x$ gives
$$a_1\lambda_1(yx)+\cdots+a_n\lambda_n(yx) = 0,$$
so
$$a_1\lambda_1(y)\lambda_1(x)+\cdots+a_n\lambda_n(y)\lambda_n(x) = 0,$$
and multiplying our original equation by $\lambda_1(y)$ and subtracting gives
$$a_2\big(\lambda_1(y)-\lambda_2(y)\big)\lambda_2(x)+\cdots+a_n\big(\lambda_1(y)-\lambda_n(y)\big)\lambda_n(x) = 0,$$
which is a non-trivial equation with fewer terms. Contradiction.
Let $n = \#G$, and let the elements of $G$ be $g_1 = 1,\dots,g_n$. Suppose that $\dim K : F = m < n$. Let $x_1,\dots,x_m$ be a basis of $K$ over $F$. The system of equations
$$g_1(x_j)y_1+\cdots+g_n(x_j)y_n = 0, \qquad j = 1,\dots,m,$$
has more unknowns than equations, and so we can find $y_1,\dots,y_n\in K$, not all zero, solving these equations. Any $u\in K$ can be expanded as
$$u = c_1x_1+\cdots+c_mx_m, \qquad c_i\in F,$$
and so
$$g_1(u)y_1+\cdots+g_n(u)y_n = \sum_j c_j\big[g_1(x_j)y_1+\cdots+g_n(x_j)y_n\big] = 0,$$
showing that the monomorphisms $g_i$ are linearly dependent. This contradicts the lemma.
Suppose that $\dim K : F > n$. Let $x_1,\dots,x_n,x_{n+1}$ be linearly independent over $F$, and find $y_1,\dots,y_{n+1}\in K$, not all zero, solving the $n$ equations
$$g_j(x_1)y_1+\cdots+g_j(x_{n+1})y_{n+1} = 0, \qquad j = 1,\dots,n.$$
Choose a solution with the fewest possible non-zero $y$'s and relabel so that the first $r$ are the non-vanishing ones, so the equations now read
$$g_j(x_1)y_1+\cdots+g_j(x_r)y_r = 0, \qquad j = 1,\dots,n,$$
and no such equations hold with fewer than $r$ of the $y$'s. Applying $g\in G$ to the preceding equation gives
$$gg_j(x_1)g(y_1)+\cdots+gg_j(x_r)g(y_r) = 0.$$
But $gg_j$ runs over all the elements of $G$, and so $g(y_1),\dots,g(y_r)$ is a solution of our original equations. In other words we have
$$g_j(x_1)y_1+\cdots+g_j(x_r)y_r = 0 \quad\text{and}\quad g_j(x_1)g(y_1)+\cdots+g_j(x_r)g(y_r) = 0.$$
Multiplying the first equations by $g(y_1)$, the second by $y_1$, and subtracting gives
$$g_j(x_2)\big[y_2g(y_1)-g(y_2)y_1\big]+\cdots+g_j(x_r)\big[y_rg(y_1)-g(y_r)y_1\big] = 0,$$
a system with fewer $y$'s. This can not happen unless the coefficients vanish, i.e.
$$y_ig(y_1) = y_1g(y_i),$$
or
$$y_iy_1^{-1} = g\big(y_iy_1^{-1}\big) \qquad\forall\, g\in G.$$
This means that
$$y_iy_1^{-1}\in F.$$
Setting $z_i = y_i/y_1$ and $c = y_1$, we get the equation
$$x_1cz_1+\cdots+x_rcz_r = 0$$
as the first of our system of equations. Dividing by $c$ gives a linear relation among $x_1,\dots,x_r$, contradicting the assumption that they are independent.
11.2.4 Invariants of finite groups.

Let $G$ be a finite group acting on a vector space $V$. Its action on the symmetric algebra $S(V^*)$, which is the same as the algebra of polynomial functions on $V$, is given by
$$(gf)(v) = f(g^{-1}v).$$
Let
$$R = S(V^*)^G$$
be the ring of invariants. Let $S = S(V^*)$ and let $L$ be the field of quotients of $S$, so that $L = k(t_1,\dots,t_n)$ where $n = \dim V$. From the theorem on fixed fields, we know that the dimension of $L$ as an extension of $L^G$ is equal to the number of elements in $G$, in particular finite. So $L^G$ has transcendence degree $n$ over the ground field.

Clearly the field of fractions of $R$ is contained in $L^G$. We claim that they coincide. Indeed, suppose that $p, q\in S$ with $p/q\in L^G$. Multiply the numerator and denominator by $\prod gq$, the product taken over all $g\in G$, $g\neq 1$. The new denominator is $G$-invariant, and since $p/q$ is $G$-invariant, so is the new numerator; thus we have expressed $p/q$ as the quotient of two elements of $R$.

If the finite group $G$ acts on a vector space $E$, then averaging over the group, i.e. the map
$$E\ni v\ \mapsto\ v^\natural := \frac{1}{\#G}\sum_{g\in G}gv,$$
is a projection onto the subspace of invariant elements:
$$\natural: E\to E^G.$$
In particular, if $E$ is finite dimensional,
$$\dim E^G = \mathrm{tr}\,\natural. \qquad (11.3)$$
We may apply the averaging operator to our (infinite dimensional) situation where $S = S(V^*)$ and $R = S^G$, in which case we have the additional obvious fact that
$$(pq)^\natural = p^\natural q \qquad\forall\, p\in S,\ q\in R.$$
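The averaging operator and the identity just displayed are easy to experiment with. In the sketch below (an illustration only; the group of sign changes in two variables is my choice and not the group discussed in the text) the operator $\natural$ is implemented by explicit substitutions, and $(pq)^\natural = p^\natural q$ is checked for an invariant $q$.

```python
from sympy import symbols, Rational, expand

x, y = symbols('x y')

# A toy group for the illustration: sign changes in x and in y (order 4).
group = [lambda p: p,
         lambda p: p.subs(x, -x),
         lambda p: p.subs(y, -y),
         lambda p: p.subs({x: -x, y: -y}, simultaneous=True)]

def natural(p):
    """The averaging operator p -> p^natural, projecting onto invariants."""
    return expand(Rational(1, len(group)) * sum(g(p) for g in group))

p = x**4 + x*y                 # not invariant; its average is x**4
q = x**2 + 5*y**2              # invariant
print(natural(p))                              # x**4
print(expand(natural(q*p) - q*natural(p)))     # 0, i.e. (pq)^nat = p^nat q
```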
Let $R_+\subset R$ denote the subring of $R$ consisting of elements with constant term zero. Let
$$I := SR_+,$$
so that $I$ is an ideal in $S$. By the Hilbert basis theorem (whose proof we recall in the next section) the ideal $I$ is finitely generated, and hence, from any set of generators, we may choose a finite set of generators.

Theorem 21 Let $f_1,\dots,f_r$ be homogeneous elements of $R_+$ which generate $I$ as an ideal of $S$. Then $f_1,\dots,f_r$ together with $1$ generate $R$ as a $k$-algebra. In particular, $R$ is a finitely generated $k$-algebra.

Proof. We must show that any $f\in R$ can be expressed as a polynomial in the $f_1,\dots,f_r$, and since every $f$ is a sum of its homogeneous components, it is enough to do this for homogeneous $f$; we proceed by induction on its degree. The statement is obvious for degree zero. For positive degree, $f\in R_+\subset I$, so
$$f = s_1f_1+\cdots+s_rf_r, \qquad s_i\in S,$$
and since $f, f_1,\dots,f_r$ are homogeneous, we may assume the $s_i$ are homogeneous of degree $\deg f - \deg f_i$, since all other contributions must cancel. Now apply $\natural$ to get
$$f = s_1^\natural f_1+\cdots+s_r^\natural f_r.$$
The $s_i^\natural$ lie in $R$ and have lower homogeneous degree than $f$, and hence can be expressed as polynomials in $f_1,\dots,f_r$. Hence so can $f$.
11.2.5 The Hilbert basis theorem.

A commutative ring is called Noetherian if any of the following equivalent conditions holds:

1. If $I_1\subset I_2\subset\cdots$ is an ascending chain of ideals, then there is a $k$ such that $I_k = I_{k+1} = I_{k+2} = \cdots$.

2. Every non-empty set of ideals has a maximal element with respect to inclusion.

3. Every ideal is finitely generated.

The Hilbert basis theorem asserts that if $R$ is a Noetherian ring, then so is the polynomial ring $R[X]$. In particular, all ideals in $k[X_1,\dots,X_n]$ are finitely generated.

Let $I$ be an ideal in $R[X]$, and for any positive integer $k$ let $L_k(I)\subset R$ be defined by
$$L_k(I) := \Big\{a_k\in R\ \Big|\ \exists\, a_{k-1},\dots,a_0\in R\ \text{with}\ \sum_{j=0}^k a_jX^j\in I\Big\}.$$
For each $k$, $L_k(I)$ is an ideal in $R$. Multiplying by $X$ shows that
$$L_k(I)\subset L_{k+1}(I).$$
Hence these ideals stabilize. If $I\subset J$ and $L_k(I) = L_k(J)$ for all $k$, we claim that this implies that $I = J$. Indeed, suppose not, and choose a polynomial of smallest degree belonging to $J$ but not to $I$; say this degree is $k$. Its leading coefficient belongs to $L_k(J)$ and can not belong to $L_k(I)$, because otherwise we could find a polynomial of smaller degree belonging to $J$ and not to $I$.

Proof of the Hilbert basis theorem. Let
$$I_0\subset I_1\subset\cdots$$
be an ascending chain of ideals in $R[X]$. Consider the set of ideals $L_p(I_q)$. We can choose a maximal member, say $L_p(I_q)$. So for $k\geq p$ we have
$$L_k(I_j) = L_k(I_q) \qquad\forall\, j\geq q.$$
For each of the finitely many values $i = 0,\dots,p-1$, the ascending chain
$$L_i(I_0)\subset L_i(I_1)\subset\cdots$$
stabilizes. So we can find a large enough $r$ (bigger than $q$ and than the finitely many values needed to stabilize the various chains) so that
$$L_i(I_j) = L_i(I_r) \qquad\forall\, j\geq r,\ \forall\, i.$$
This shows that $I_j = I_r$ for all $j\geq r$.
11.2.6 Proof of Chevalley's theorem.

This says that if $k = {\mathbb R}$ and $W$ is a finite subgroup of $O(V)$ generated by reflections, then its ring of invariants is a polynomial ring in $n$ generators, where $n = \dim V$. Without loss of generality we may assume that $W$ acts effectively, i.e. no non-zero vector is fixed by all of $W$.

Let $f_1,\dots,f_r$ be a minimal set of homogeneous generators. Suppose we could prove that they are algebraically independent. Since the transcendence degree of the quotient field of $R$ is $n = \dim V$, we would conclude that $r = n$. So the whole point is to prove that a minimal set of homogeneous generators must be algebraically independent, i.e. that there can not exist a non-zero polynomial $h = h(u_1,\dots,u_r)$ so that
$$h(f_1,\dots,f_r) = 0. \qquad (11.4)$$
So we want to get a smaller set of generators assuming that such a relation exists. Let
$$d_1 := \deg f_1,\ \dots,\ d_r := \deg f_r.$$
For any non-zero monomial $a\,u_1^{e_1}\cdots u_r^{e_r}$ occurring in $h$, the term
$$a\,f_1^{e_1}\cdots f_r^{e_r}$$
we get by substituting $f$'s for $u$'s has degree
$$d = e_1d_1+\cdots+e_rd_r.$$
Fixing one value of $d$, we may throw away all monomials in $h$ which do not satisfy this equation, since the terms of each fixed degree $d$ must vanish on $(f_1,\dots,f_r)$ by themselves. Now set
$$h_i := \frac{\partial h}{\partial u_i}(f_1,\dots,f_r),$$
so that $h_i\in R$ is homogeneous of degree $d-d_i$, and let $J$ be the ideal in $R$ generated by the $h_i$. Renumber $f_1,\dots,f_r$ so that $h_1,\dots,h_m$ is a minimal generating set for $J$. This means that
$$h_i = \sum_{j=1}^m a_{ij}h_j, \qquad a_{ij}\in R,$$
for $i > m$ (if $m < r$; if $m = r$ we have no such equations). Once again, since the $h_i$ are homogeneous of degree $d-d_i$, we may assume that each $a_{ij}$ is homogeneous of degree $d_j-d_i$ by throwing away extraneous terms.

Now let us differentiate the equation (11.4) with respect to $x_k$, $k = 1,\dots,n$, to obtain
$$\sum_{i=1}^r h_i\frac{\partial f_i}{\partial x_k} = 0, \qquad k = 1,\dots,n,$$
and substituting the above expressions for $h_i$, $i > m$, we get
$$\sum_{i=1}^m h_i\left[\frac{\partial f_i}{\partial x_k} + \sum_{j=m+1}^r a_{ji}\frac{\partial f_j}{\partial x_k}\right] = 0, \qquad k = 1,\dots,n.$$
Set
$$p_i := \frac{\partial f_i}{\partial x_k} + \sum_{j=m+1}^r a_{ji}\frac{\partial f_j}{\partial x_k}, \qquad i = 1,\dots,m,$$
so that each $p_i$ is homogeneous with
$$\deg p_i = d_i - 1,$$
and we have the equation
$$h_1p_1+\cdots+h_mp_m = 0. \qquad (11.5)$$
We will prove that this implies that
$$p_1\in I. \qquad (11.6)$$
Assuming this for the moment, this means that
$$\frac{\partial f_1}{\partial x_k} + \sum_{j=m+1}^r a_{j1}\frac{\partial f_j}{\partial x_k} = \sum_{i=1}^r f_iq_i$$
where $q_i\in S$. Multiply these equations by $x_k$, sum over $k$, and apply Euler's formula for homogeneous polynomials,
$$\sum_k x_k\frac{\partial f}{\partial x_k} = (\deg f)\,f.$$
We get
$$d_1f_1 + \sum_{j=m+1}^r d_ja_{j1}f_j = \sum_{i=1}^r f_ir_i$$
with $\deg r_1 > 0$ if it is not zero. Once again, the left hand side is homogeneous of degree $d_1$, so we can throw away all terms on the right which are not of this degree because of cancellation. But this means that we throw away the term involving $f_1$, and we have expressed $f_1$ in terms of $f_2,\dots,f_r$, contradicting our choice of $f_1,\dots,f_r$ as a minimal generating set.

So the proof of Chevalley's theorem is reduced to proving that (11.5) implies (11.6), and for this we must use the fact that $W$ is generated by reflections, which we have not yet used. The desired implication is a consequence of the following

Proposition 33 Let $h_1,\dots,h_m\in R$ be homogeneous with $h_1$ not in the ideal of $R$ generated by $h_2,\dots,h_m$. Suppose that (11.5) holds with homogeneous elements $p_i\in S$. Then (11.6) holds.
Notice that $h_1$ can not lie in the ideal of $S$ generated by $h_2,\dots,h_m$ either, because we could apply the averaging operator to the equation
$$h_1 = k_2h_2+\cdots+k_mh_m, \qquad k_i\in S,$$
to arrange that the same equation holds with $k_i$ replaced by $k_i^\natural\in R$.

We prove the proposition by induction on the degree of $p_1$. This degree must be positive, since a non-zero constant $p_1$ would put $h_1$ in the ideal generated by the remaining $h_i$ (and $p_1 = 0$ lies in $I$ trivially). Let $s$ be a reflection in $W$ and $H$ its hyperplane of fixed vectors. Then
$$sp_i - p_i = 0 \ \text{on}\ H.$$
Let $\ell$ be a non-zero linear function whose zero set is this hyperplane. With no loss of generality, we may assume that the last variable, $x_n$, occurs with non-zero coefficient in $\ell$ relative to some choice of orthogonal coordinates. In fact, by rotation, we can arrange (temporarily) that $\ell = x_n$. Expanding out the polynomial $sp_i - p_i$ in powers of the (rotated) variables, we see that $sp_i - p_i$ can have no terms which are monomials in $x_1,\dots,x_{n-1}$ alone. Put invariantly, we see that
$$sp_i - p_i = \ell\,r_i$$
where $r_i$ is homogeneous of degree one less than that of $p_i$. Apply $s$ to equation (11.5) and subtract to get
$$\ell\,(h_1r_1+\cdots+h_mr_m) = 0.$$
Since $\ell\neq 0$ we may divide by $\ell$ to get an equation of the form (11.5) with $p_1$ replaced by $r_1$, which has lower degree. So $r_1\in I$ by induction, i.e.
$$sp_1 - p_1\in I.$$
Now $W$ stabilizes $R_+$ and hence $I$, and, since $W$ is generated by reflections, it follows that every $w\in W$ acts trivially on the image of $p_1$ in the quotient space $S/I$. Thus $p_1^\natural\equiv p_1 \mod I$. So $p_1\in I$, since $p_1^\natural$, being an invariant with zero constant term, lies in $R_+\subset I$. QED
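Chevalley's theorem can be seen concretely in small examples. The sketch below is an illustration of mine, not from the text: the reflection group is the symmetry group of the square acting on ${\mathbb R}^2$ (type $B_2$, of order $8$), a standard choice of basic invariants is $f_1 = x^2+y^2$ and $f_2 = x^2y^2$ of degrees $2$ and $4$, and the code checks their invariance and writes the invariant $x^4+y^4$ as a polynomial in them.

```python
from sympy import symbols, expand

x, y = symbols('x y')

# The dihedral reflection group of order 8: sign changes and the swap of x, y.
group = []
for sx in (1, -1):
    for sy in (1, -1):
        group.append(lambda p, sx=sx, sy=sy: p.subs({x: sx*x, y: sy*y}, simultaneous=True))
        group.append(lambda p, sx=sx, sy=sy: p.subs({x: sx*y, y: sy*x}, simultaneous=True))

f1, f2 = x**2 + y**2, x**2*y**2
# both generators are invariant under the whole group:
print(all(expand(g(f1) - f1) == 0 and expand(g(f2) - f2) == 0 for g in group))
# the invariant x^4 + y^4 is a polynomial in them: f1^2 - 2*f2
print(expand(f1**2 - 2*f2 - (x**4 + y**4)))   # 0
```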