CT, Lent 2005
1 What is Representation Theory?
Groups arise in nature as “sets of symmetries (of an object), which are closed under composition and under taking inverses”. For example, the symmetric group S_n is the group of all permutations (symmetries) of {1, . . . , n}; the alternating group A_n is the set of all symmetries preserving the parity of the number of ordered pairs (did you really remember that one?); the dihedral group D_2n is the group of symmetries of the regular n-gon in the plane. The orthogonal group O(3) is the group of distance-preserving transformations of Euclidean space which fix the origin. There is also the group of all distance-preserving transformations, which includes the translations along with O(3).¹
The official definition is of course more abstract: a group is a set G with a binary operation ∗ which is associative, has a unit element e, and for which inverses exist. Associativity allows a convenient abuse of notation, where we write gh for g ∗ h; we have ghk = (gh)k = g(hk) and parentheses are unnecessary. I will often write 1 for e, but this is dangerous on rare occasions, such as when studying the group Z under addition; in that case, e = 0.
The abstract deﬁnition notwithstanding, the interesting situation involves a group “acting”
on a set. Formally, an action of a group G on a set X is an “action map” a : G × X → X which is compatible with the group law, in the sense that
a(h, a(g, x)) = a(hg, x) and a(e, x) = x.
This justifies the abusive notation a(g, x) = g.x or even gx, for we have h(gx) = (hg)x.
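As a concrete sanity check (a small Python sketch, not part of the notes; all names are illustrative), the rotation action of Z/12 on a 12-point set satisfies both axioms:

```python
# Toy check of the action axioms a(h, a(g, x)) = a(hg, x) and a(e, x) = x,
# for the additive group Z/12 acting on X = {0, ..., 11} by rotation.
n = 12

def a(g, x):
    """Action map: g moves the point x by g steps around the cycle."""
    return (g + x) % n

e = 0  # the unit element of Z/12
for g in range(n):
    for h in range(n):
        for x in range(n):
            # compatibility with the group law (here the group law is addition mod n)
            assert a(h, a(g, x)) == a((h + g) % n, x)
for x in range(n):
    assert a(e, x) == x  # the unit acts trivially
```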
From this point of view, geometry asks, “Given a geometric object X, what is its group of
symmetries?” Representation theory reverses the question to “Given a group G, what objects X
does it act on?” and attempts to answer this question by classifying such X up to isomorphism.
Before restricting to the linear case, our main concern, let us remember another way to
describe an action of G on X. Every g ∈ G defines a map a(g) : X → X by x → gx. This map is a bijection, with inverse map a(g⁻¹): indeed, a(g⁻¹) ◦ a(g)(x) = g⁻¹gx = ex = x from the properties of the action. Hence a(g) belongs to the set Perm(X) of bijective self-maps of X.
This set forms a group under composition, and the properties of an action imply that
1.1 Proposition. An action of G on X “is the same as” a group homomorphism α : G →
Perm(X).
1.2 Remark. There is a logical abuse here: clearly an action, defined as a map a : G × X → X, is not the same as the homomorphism α in the Proposition; you are meant to read that specifying one is completely equivalent to specifying the other, unambiguously. But the definitions are designed to allow such abuse without much danger, and I will frequently indulge in that (in fact I denoted α by a in lecture).²
¹ This group is isomorphic to the semidirect product O(3) ⋉ R³ — but if you do not know what this means, do not worry.
² With respect to abuse, you may wish to err on the side of caution when writing up solutions in your exam!
The reformulation of Prop. 1.1 leads to the following observation. For any action a of H on X and group homomorphism ϕ : G → H, there is defined a restricted or pulled-back action ϕ*a of G on X, as ϕ*a = a ◦ ϕ. In the original definition, the action sends (g, x) to ϕ(g)(x).
(1.3) Example: Tautological action of Perm(X) on X
This is the obvious action, call it T, sending (f, x) to f(x), where f : X → X is a bijection and x ∈ X. Check that it satisfies the properties of an action! In this language, the action a of G on X is α*T, with the homomorphism α of the proposition — the pullback under α of the tautological action.
(1.4) Linearity.
The question of classifying all possible X with an action of G is hopeless in such generality, but one should recall that, in first approximation, mathematics is linear. So we shall take our X to be a vector space over some ground field, and ask that the action of G be linear as well, in other words, that it should preserve the vector space structure. Our interest is mostly confined to the case when the field of scalars is C, although we shall occasionally mention how the picture changes when other fields are studied.
1.5 Definition. A linear representation ρ of G on a complex vector space V is a set-theoretic action on V which preserves the linear structure, that is,
ρ(g)(v₁ + v₂) = ρ(g)v₁ + ρ(g)v₂, ∀v₁, v₂ ∈ V,
ρ(g)(kv) = k·ρ(g)v, ∀k ∈ C, v ∈ V.
Unless otherwise mentioned, representation will mean finite-dimensional complex representation.
(1.6) Example: The general linear group
Let V be a complex vector space of dimension n < ∞. After choosing a basis, we can identify it with Cⁿ, although we shall avoid doing so without good reason. Recall that the endomorphism algebra End(V) is the set of all linear maps (or operators) L : V → V, with the natural addition of linear maps and composition as multiplication. (If you do not remember, you should verify that the sum and composition of two linear maps is also a linear map.) If V has been identified with Cⁿ, a linear map is uniquely representable by a matrix, and the addition of linear maps becomes entry-wise addition, while composition becomes matrix multiplication. (Another good fact to review if it seems lost in the mists of time.)
Inside End(V) there is contained the group GL(V) of invertible linear operators (those admitting a multiplicative inverse); the group operation, of course, is composition (matrix multiplication). I leave it to you to check that this is a group, with unit the identity operator Id.
The following should be obvious enough, from the deﬁnitions.
1.7 Proposition. V is naturally a representation of GL(V ).
It is called the standard representation of GL(V ). The following corresponds to Prop. 1.1,
involving the same abuse of language.
1.8 Proposition. A representation of G on V “is the same as” a group homomorphism from
G to GL(V ).
Proof. Observe that, to give a linear action of G on V, we must assign to each g ∈ G a linear self-map ρ(g) ∈ End(V). Compatibility of the action with the group law requires
ρ(h)(ρ(g)(v)) = ρ(hg)(v), ρ(1)(v) = v, ∀v ∈ V,
whence we conclude that ρ(1) = Id, ρ(hg) = ρ(h) ◦ ρ(g). Taking h = g⁻¹ shows that ρ(g) is invertible, hence lands in GL(V). The first relation then says that we are dealing with a group homomorphism.
1.9 Definition. An isomorphism φ between two representations (ρ₁, V₁) and (ρ₂, V₂) of G is a linear isomorphism φ : V₁ → V₂ which intertwines with the action of G, that is, satisfies
φ(ρ₁(g)(v)) = ρ₂(g)(φ(v)).
Note that the equality makes sense even if φ is not invertible, in which case it is just called an intertwining operator or G-linear map. However, if φ is invertible, we can write instead
ρ₂ = φ ◦ ρ₁ ◦ φ⁻¹, (1.10)
meaning that we have an equality of linear maps after inserting any group element g. Observe that this relation determines ρ₂, if ρ₁ and φ are known. We can finally formulate the
Basic Problem of Representation Theory: Classify all representations of a given group G,
up to isomorphism.
For arbitrary G, this is very hard! We shall concentrate on ﬁnite groups, where a very good
general theory exists. Later on, we shall study some examples of topological compact groups,
such as U(1) and SU(2). The general theory for compact groups is also completely understood,
but requires more diﬃcult methods.
I close with a simple observation, tying in with Definition 1.9. Given any representation ρ of G on a space V of dimension n, a choice of basis in V identifies this linearly with Cⁿ. Call the isomorphism φ. Then, by formula (1.10), we can define a new representation ρ₂ of G on Cⁿ, which is isomorphic to (ρ, V). So any n-dimensional representation of G is isomorphic to a representation on Cⁿ. The use of an abstract vector space does not lead to ‘new’ representations, but it does free us from the presence of a distinguished basis.
2 Lecture
Today we discuss the representations of a cyclic group, and then proceed to define the important notions of irreducibility and complete reducibility.
(2.1) Concrete realisation of isomorphism classes
We observed last time that every m-dimensional representation of a group G was isomorphic to a representation on Cᵐ. This leads to a concrete realisation of the set of m-dimensional isomorphism classes of representations.
2.2 Proposition. The set of m-dimensional isomorphism classes of G-representations is in bijection with the quotient
Hom(G; GL(m; C)) / GL(m; C)
of the set of group homomorphisms to GL(m) by the overall conjugation action on the latter.
Proof. Conjugation by φ ∈ GL(m) sends a homomorphism ρ to the new homomorphism g → φ ◦ ρ(g) ◦ φ⁻¹. According to Definition 1.9, this has exactly the effect of identifying isomorphic representations.
2.3 Remark. The proposition is not as useful (for us) as it looks. It can be helpful in understanding certain infinite discrete groups — such as Z below — in which case the set Hom can have interesting geometric structures. However, for finite groups, the set of isomorphism classes is finite, so its description above is not too enlightening.
(2.4) Example: Representations of Z.
We shall classify all representations of the group Z, with its additive structure. We must have ρ(0) = Id. Aside from that, we must specify an invertible matrix ρ(n) for every n ∈ Z. However, given ρ(1), we can recover ρ(n) as ρ(1 + · · · + 1) = ρ(1)ⁿ. So the representation is completely determined by the single operator ρ(1). Conversely, for any invertible map ρ(1) ∈ GL(m), we obtain a representation of Z this way.
Thus, m-dimensional isomorphism classes of representations of Z are in bijection with the conjugacy classes in GL(m). These can be parametrised by the Jordan canonical form (see the next example). We will have m continuous parameters — the eigenvalues, which are nonzero complex numbers, and are defined up to reordering — and some discrete parameters whenever two or more eigenvalues coincide, specifying the Jordan block sizes.
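The fact that ρ(1) determines everything can be illustrated with a small sketch (not from the notes; names are illustrative). Take ρ(1) to be an invertible, non-diagonalisable Jordan block and recover ρ(n) as a matrix power:

```python
# Sketch: a representation of Z is determined by the single invertible matrix rho(1).
# Here rho(1) is a 2x2 Jordan block with eigenvalue 1 (invertible, not diagonalisable),
# and rho(n) is recovered as the n-th matrix power, for n >= 0.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rho(n, rho1):
    """rho(n) = rho(1)^n, computed by repeated multiplication."""
    R = [[1, 0], [0, 1]]  # rho(0) = Id
    for _ in range(n):
        R = matmul(R, rho1)
    return R

rho1 = [[1, 1], [0, 1]]  # a Jordan block: invertible since det = 1
# The homomorphism property rho(m + n) = rho(m) rho(n) holds automatically:
for m in range(5):
    for n in range(5):
        assert rho(m + n, rho1) == matmul(rho(m, rho1), rho(n, rho1))
print(rho(5, rho1))  # -> [[1, 5], [0, 1]]
```

Negative n work the same way, using the inverse of ρ(1).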
(2.5) Example: the cyclic group of order n
Let G = {1, g, . . . , gⁿ⁻¹}, with the relation gⁿ = 1. A representation of G on V defines an invertible endomorphism ρ(g) ∈ GL(V). As before, ρ(1) = Id and ρ(gᵏ) = ρ(g)ᵏ, so all other images of ρ are determined by the single operator ρ(g).
Choosing a basis of V allows us to convert ρ(g) into a matrix A, but we shall want to be careful with our choice. Recall from general theory that there exists a Jordan basis in which ρ(g) takes its block-diagonal Jordan normal form
A =
[ J₁ 0  …  0 ]
[ 0  J₂ …  0 ]
[ 0  0  … Jₘ ]
where the Jordan blocks Jₖ take the form
J =
[ λ 1 0 … 0 ]
[ 0 λ 1 … 0 ]
[ 0 0 0 … 1 ]
[ 0 0 0 … λ ].
However, we must impose the condition Aⁿ = Id. But Aⁿ itself will be block-diagonal, with blocks Jₖⁿ, so we must have Jₖⁿ = Id. To compute that, let N be the Jordan matrix with λ = 0; we then have J = λ·Id + N, so
Jⁿ = (λ·Id + N)ⁿ = λⁿ·Id + n·λⁿ⁻¹·N + (n(n−1)/2)·λⁿ⁻²·N² + · · · ;
but notice that Nᵖ, for any p, is the matrix with zeroes and ones only, with the ones in index positions (i, j) with j = i + p (a line parallel to the diagonal, p steps above it). So the sum above can be Id only if λⁿ = 1 and N = 0. In other words, J is a 1 × 1 block, and ρ(g) is diagonal in this basis. We conclude the following
2.6 Proposition. If V is a representation of the cyclic group G of order n, there exists a basis in which the action of every group element is diagonal, with nth roots of unity on the diagonal.
In particular, the m-dimensional representations of Cₙ are classified up to isomorphism by unordered m-tuples of nth roots of unity.
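Both halves of the Jordan argument can be checked numerically (a sketch, not from the notes): diagonal matrices of nth roots of unity satisfy Aⁿ = Id, while a nontrivial Jordan block never does.

```python
# Sketch for the cyclic group of order n = 6: diag(zeta_1, ..., zeta_m), with each
# zeta a 6th root of unity, satisfies A^6 = Id; a nontrivial Jordan block
# J = [[1, 1], [0, 1]] fails, since J^k = Id + k*N grows linearly off the diagonal.
import cmath

n = 6
roots = [cmath.exp(2j * cmath.pi * k / n) for k in (1, 3, 4)]  # three 6th roots of unity
assert all(abs(z ** n - 1) < 1e-9 for z in roots)  # so diag(roots)^6 = Id

def jordan_power(k):
    """J^k for the Jordan block J = [[1, 1], [0, 1]]: the (0,1) entry is k."""
    a, b = 1, 0
    for _ in range(k):
        b = b + a
    return [[1, b], [0, 1]]

assert jordan_power(n) == [[1, 6], [0, 1]]  # not the identity: J^6 != Id
```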
(2.7) Example: Finite abelian groups
The discussion for cyclic groups generalises to any ﬁnite Abelian group A. (The resulting
classiﬁcation of representations is more or less explicit, depending on whether we are willing to
use the classiﬁcation theorem for ﬁnite abelian groups; see below.) We recall the following fact
from linear algebra:
2.8 Proposition. Any family of commuting, separately diagonalisable m × m matrices can be simultaneously diagonalised.
The proof is delegated to the example sheet; at any rate, an easier treatment of ﬁnite abelian
groups will emerge from Schur’s Lemma in Lecture 4.
This implies that any representation of A is isomorphic to one where every group element acts diagonally. Each diagonal entry then determines a one-dimensional representation of A. So the classification reads: m-dimensional isomorphism classes of representations of A are in bijection with unordered m-tuples of 1-dimensional representations. Note that for 1-dimensional representations, viewed as homomorphisms ρ : A → C^×, there is no distinction between identity and isomorphism (the conjugation action of GL(1; C) on itself is trivial).
To say more, we must invoke the classification of finite abelian groups, according to which A is isomorphic to a direct product of cyclic groups. To specify a 1-dimensional representation of A we must then specify a root of unity of the appropriate order independently for each generator.
(2.9) Subrepresentations and Reducibility
Let ρ : G →GL(V ) be a representation of G.
2.10 Definition. A subrepresentation of V is a G-invariant subspace W ⊆ V; that is, ρ(g)(w) ∈ W for all w ∈ W and g ∈ G.
W becomes a representation of G under the action ρ(g).
Recall that, given a subspace W ⊆ V, we can form the quotient space V/W, the set of W-cosets v + W in V. If W was G-invariant, the G-action on V descends to (= defines) an action on V/W by setting g(v + W) := ρ(g)(v) + W. If we choose another v′ in the same coset as v, then v − v′ ∈ W, so ρ(g)(v − v′) ∈ W, and then the cosets ρ(g)(v) + W and ρ(g)(v′) + W agree.
2.11 Deﬁnition. With this action, V/W is called the quotient representation of V under W.
2.12 Definition. The direct sum of two representations (ρ₁, V₁) and (ρ₂, V₂) is the space V₁ ⊕ V₂ with the block-diagonal action ρ₁ ⊕ ρ₂ of G.
(2.13) Example
In the direct sum V₁ ⊕ V₂, V₁ is a subrepresentation and V₂ is isomorphic to the associated quotient representation. Of course the roles of 1 and 2 can be interchanged. However, one should take care that for an arbitrary group, it need not be the case that a representation V with subrepresentation W decomposes as W ⊕ V/W. This will be proved for complex representations of finite groups.
2.14 Definition. A representation is called irreducible if it contains no proper nonzero invariant subspaces. It is called completely reducible if it decomposes as a direct sum of irreducible subrepresentations.
In particular, irreducible representations are completely reducible.
For example, 1-dimensional representations of any group are irreducible. Earlier, we thus proved that finite-dimensional complex representations of a finite abelian group are completely reducible: indeed, we decomposed V into a direct sum of lines L₁ ⊕ · · · ⊕ L_{dim V}, along the vectors in the diagonal basis. Each line is preserved by the action of the group. In the cyclic case, the possible actions of Cₙ on a line correspond to the n eligible roots of unity to specify for ρ(g).
2.15 Proposition. Every complex representation of a finite abelian group is completely reducible, and every irreducible representation is 1-dimensional.
It will be our goal to establish an analogous proposition for every finite group G. The result is called the Complete Reducibility Theorem. For non-abelian groups, we shall have to give up on the 1-dimensional requirement, but we shall still salvage a canonical decomposition.
3 Complete Reducibility and Unitarity
In the homework, you will find an example of a complex representation of the group Z which is not completely reducible, and also of a representation of the cyclic group of prime order p over the finite field F_p which is not completely reducible. This underlines the importance of the following Complete Reducibility Theorem for finite groups.
3.1 Theorem. Every complex representation of a ﬁnite group is completely reducible.
The theorem is so important that we shall give two proofs. The first uses inner products, and so applies only to R or C, but generalises to compact groups. The more algebraic proof, on the other hand, extends to any field of scalars whose characteristic does not divide the order of the group (equivalently, the order of the group should not be 0 in the field).
Beautiful as it is, the result would have limited value without some supply of irreducible representations. It turns out that the following example provides an adequate supply.
(3.2) Example: the regular representation
Let C[G] be the vector space of complex functions on G. It has a basis {e_g}_{g∈G}, with e_g representing the function equal to 1 at g and 0 elsewhere. G acts on this basis as follows:
λ(g)(e_h) = e_{gh}.
This set-theoretic action extends by linearity to the vector space:
λ(g)(Σ_{h∈G} v_h e_h) = Σ_{h∈G} v_h λ(g)e_h = Σ_{h∈G} v_h e_{gh}.
(Exercise: check that this defines a linear action.) On coordinates, the action is opposite to what you might expect: namely, the h-coordinate of λ(g)(v) is v_{g⁻¹h}. The result is the left regular representation of G. Later we will decompose λ into irreducibles, and we shall see that every irreducible isomorphism class of G-representations occurs in the decomposition.
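The regular representation is easy to realise on a computer. Here is a minimal sketch (not from the notes; names illustrative) for G = Z/3, checking both the homomorphism property and the coordinate formula (λ(g)v)_h = v_{g⁻¹h}:

```python
# Sketch: the left regular representation of G = Z/3 on C[G].
# A vector in C[G] is a dict h -> v_h; the group law is addition mod 3.
import random

G = [0, 1, 2]

def lam(g, v):
    """(lambda(g) v)_h = v_{g^{-1} h}; here g^{-1} = -g mod 3."""
    return {h: v[(h - g) % 3] for h in G}

# lambda is a homomorphism: lambda(g) lambda(h) = lambda(g + h)
v = {h: random.random() for h in G}
for g in G:
    for h in G:
        assert lam(g, lam(h, v)) == lam((g + h) % 3, v)

# On basis vectors, lambda(g) e_h = e_{gh}:
e1 = {0: 0, 1: 1, 2: 0}                   # e_h with h = 1
assert lam(2, e1) == {0: 1, 1: 0, 2: 0}   # e_{2+1} = e_0
```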
3.3 Remark. If G acts on a set X, let C[X] be the vector space of functions on X, with obvious basis {e_x}_{x∈X}. By linear extension of the permutation action ρ(g)(e_x) = e_{gx}, we get a linear action of G on C[X]; this is the permutation representation associated to X.
(3.4) Unitarity
Inner products are an important aid in investigating real or complex representations, and lead
to a ﬁrst proof of Theorem 3.1.
3.5 Definition. A representation ρ of G on a complex vector space V is unitary if V has been equipped with a hermitian inner product ⟨ | ⟩ which is preserved by the action of G, that is,
⟨v|w⟩ = ⟨ρ(g)(v)|ρ(g)(w)⟩, ∀v, w ∈ V, g ∈ G.
It is unitarisable if it can be equipped with such a product (even if none has been chosen).
For example, the regular representation of a finite group is unitarisable: it is made unitary by declaring the standard basis vectors e_g to be orthonormal.
The representation is unitary iff the homomorphism ρ : G → GL(V) lands inside the unitary group U(V) (defined with respect to the inner product). We can restate this condition in the form ρ(g)* = ρ(g⁻¹). (The latter is also ρ(g)⁻¹.)
3.6 Theorem (Unitary Criterion). Finite-dimensional unitary representations of any group are completely reducible.
The proof relies on the following
3.7 Lemma. Let V be a unitary representation of G and let W be an invariant subspace. Then, the orthocomplement W⊥ is also G-invariant.
Proof. We must show that, ∀v ∈ W⊥ and ∀g ∈ G, gv ∈ W⊥. Now, v ∈ W⊥ ⇔ ⟨v|w⟩ = 0, ∀w ∈ W. If so, then ⟨gv|gw⟩ = 0, for any g ∈ G and ∀w ∈ W. This implies that ⟨gv|w′⟩ = 0, ∀w′ ∈ W, since we can choose w = g⁻¹w′ ∈ W, using the invariance of W. But then, gv ∈ W⊥, and g ∈ G was arbitrary.
We are now ready to prove the following stronger version of the unitary criterion (3.6).
3.8 Theorem. A finite-dimensional unitary representation of a group admits an orthogonal decomposition into irreducible unitary subrepresentations.
Proof. Clearly, any subrepresentation is unitary for the restricted inner product, so we must merely produce the decomposition into irreducibles. Assume that V is not irreducible; it then contains a proper nonzero invariant subspace W ⊂ V. By Lemma 3.7, W⊥ is another subrepresentation, and we clearly have an orthogonal decomposition V = W ⊕ W⊥, which is G-invariant. Continuing with W and W⊥ must terminate in an irreducible decomposition, by finite-dimensionality.
3.9 Remark. The assumption dim V < ∞ is in fact unnecessary for finite G, but cannot be removed without changes to the statement, for general G. For representations of infinite, non-compact groups on infinite-dimensional (Hilbert) spaces, the most we can usually hope for is a decomposition into a direct integral of irreducibles. For example, this happens for the translation action of the group R on the Hilbert space L²(R). The irreducible “components” are the Fourier modes exp(ikx), labelled by k ∈ R; note that they are not quite in L². Nonetheless, there results an integral decomposition of any vector f(x) ∈ L²(R) into Fourier modes,
f(x) = (1/2π) ∫_R f̂(k) exp(−ikx) dk,
known to you as the Fourier inversion formula; f̂(k)/2π should be regarded as the coordinate value of f along the “basis vector” exp(−ikx). Even this milder expectation of integral decomposition can fail for more general groups, and leads to delicate and difficult problems of functional analysis (von Neumann algebras).
3.10 Theorem (Weyl’s unitary trick). Finite-dimensional representations of finite groups are unitarisable.
Proof. Starting with any hermitian, positive definite inner product ⟨ | ⟩ on your representation, construct a new one ⟨ | ⟩′ by averaging over G,
⟨v|w⟩′ := (1/|G|) Σ_{g∈G} ⟨gv|gw⟩.
Check invariance and positivity (homework).
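The averaging step can be watched in action on a two-element group (a sketch, not part of the notes; matrices and names are illustrative). The involution below is not unitary for the standard product, but the averaged product is G-invariant:

```python
# Sketch: averaging an inner product over a finite group makes it invariant.
# G = Z/2 acts on C^2 via rho(g) = [[1, 1], [0, -1]], an involution (rho(g)^2 = Id)
# which is NOT unitary for the standard hermitian product.

def apply(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def std(v, w):
    """Standard hermitian inner product on C^2."""
    return sum(x.conjugate() * y for x, y in zip(v, w))

g = [[1, 1], [0, -1]]  # rho of the nontrivial element; rho(e) = Id

def avg(v, w):
    """Averaged product: (1/|G|) * sum over G of <rho(x)v | rho(x)w>."""
    return (std(v, w) + std(apply(g, v), apply(g, w))) / 2

v, w = [1 + 2j, 3j], [2, 1 - 1j]
# The standard product is not preserved by g ...
assert std(apply(g, v), apply(g, w)) != std(v, w)
# ... but the averaged product is G-invariant:
assert abs(avg(apply(g, v), apply(g, w)) - avg(v, w)) < 1e-12
```

The invariance is exact here: replacing (v, w) by (gv, gw) merely permutes the two summands, since g² = Id.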
3.11 Remark. Weyl’s unitary trick applies to continuous representations of compact groups. In that case, we use integration over G to average. The key point is the existence of a measure on G which is invariant for the translation action of the group on itself (the Haar measure). For groups of geometric origin, such as U(1) (and even U(n) or SU(n)), the existence of such a measure is obvious, but in general it is a difficult theorem.
The complete reducibility theorem (3.1) for ﬁnite groups follows from Theorem 3.8 and
Weyl’s unitary trick.
(3.12) Alternative proof of Complete Reducibility
The following argument makes no appeal to inner products, and so has the advantage of working
over more general ground ﬁelds.
3.13 Lemma. Let V be a finite-dimensional representation of the finite group G over a field of characteristic not dividing |G|. Then, every invariant subspace W ⊆ V has an invariant complement W′ ⊆ V.
Recall that the subspace W′ is a complement of W if W ⊕ W′ = V.
Proof. The complementing condition can be broken down into two parts:
• W ∩ W′ = {0};
• dim W + dim W′ = dim V.
We’d like to construct the complement using the same trick as with the inner product, by “averaging a complement over G” to produce an invariant complement. While we cannot average subspaces, note that any complement of W is completely determined as the kernel of the projection operator P : V → W, which sends a vector v ∈ V to its W-component P(v). Choose now an arbitrary complement W″ ⊆ V (not necessarily invariant) and call P the associated projection operator. It satisfies:
• P = Id on W;
• Im P = W.
Now let Q := (1/|G|) Σ_{g∈G} g ◦ P ◦ g⁻¹. I claim that:
• Q commutes with G, that is, Q ◦ h = h ◦ Q, ∀h ∈ G;
• Q = Id on W;
• Im Q = W.
The first condition implies that W′ := ker Q is a G-invariant subspace. Indeed, v ∈ ker Q ⇒ Q(v) = 0, hence h ◦ Q(v) = 0, hence Q(hv) = 0 and hv ∈ ker Q, ∀h ∈ G. The second condition implies that W′ ∩ W = {0}, while the third implies that dim W′ + dim W = dim V; so we have constructed our invariant complement.
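The averaged projection can be computed explicitly in the smallest interesting case (a sketch under illustrative choices, not from the notes): G = Z/2 acting on C² with invariant line W = span(e₁).

```python
# Sketch of the averaging argument for G = Z/2 acting on C^2 (char 0, so 2 != 0).
# rho(g) = [[1, 1], [0, -1]] fixes the line W = span(e1), an invariant subspace.
# P projects onto W along the NON-invariant complement span(e2); averaging repairs it.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
g = [[1, 1], [0, -1]]          # g is its own inverse
P = [[1, 0], [0, 0]]           # arbitrary projection onto W, kernel span(e2)

assert matmul(g, g) == I

# Q = (1/|G|) sum over G of x P x^{-1} = (P + g P g) / 2
gPg = matmul(g, matmul(P, g))
Q = [[(P[i][j] + gPg[i][j]) / 2 for j in range(2)] for i in range(2)]

assert matmul(Q, g) == matmul(g, Q)        # Q commutes with G
assert [row[0] for row in Q] == [1, 0]     # Q(e1) = e1: Q = Id on W
assert matmul(Q, Q) == Q                   # Q is a projection with image W
# Hence ker Q = span((1, -2)) is the invariant complement W'.
```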
4 Schur’s Lemma
To understand representations, we also need to understand the automorphisms of a representation. These are the invertible self-maps commuting with the G-action. To see the issue, assume given a representation of G, and say you have performed a construction within it (such as a drawing of your favourite Disney character; more commonly, it will be a distinguished geometric object). If someone presents you with another representation, claimed to be isomorphic to yours, how uniquely can you repeat your construction in the other copy of the representation? Schur’s Lemma answers this for irreducible representations.
4.1 Theorem (Schur’s lemma over C). If V is an irreducible complex G-representation, then every linear operator φ : V → V commuting with G is a scalar.
Proof. Let λ be an eigenvalue of φ. I claim that the eigenspace E_λ is G-invariant. Indeed, v ∈ E_λ ⇒ φ(v) = λv, whence
φ(gv) = gφ(v) = g(λv) = λ·gv,
so gv ∈ E_λ, and g was arbitrary. But then, E_λ = V by irreducibility, so φ = λ·Id.
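A numerical illustration (a sketch, not part of the notes): averaging any matrix M over conjugation by an irreducible representation produces an operator commuting with G, which by the lemma must be a scalar; since conjugation preserves the trace, that scalar is tr(M)/dim. Here we use the irreducible 2-dimensional representation of S₃, with g ↦ diag(ω, ω²) and r ↦ the swap matrix (as it appears in Lecture 5).

```python
# Sketch: Schur's lemma seen numerically. For the irreducible 2-dim rep of S3,
# M' = (1/|G|) sum_g rho(g) M rho(g)^{-1} commutes with G, hence equals (tr M / 2) Id.
import cmath

w = cmath.exp(2j * cmath.pi / 3)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
g = [[w, 0], [0, w**2]]   # rho(g): diagonal with the nontrivial cube roots of unity
r = [[0, 1], [1, 0]]      # rho(r): the swap

# The six matrices in the image of rho: {g^a, g^a r : a = 0, 1, 2}.
powers = [I, g, matmul(g, g)]
group = powers + [matmul(p, r) for p in powers]

def inverse(A):
    """2x2 inverse via the adjugate formula."""
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [[A[1][1]/det, -A[0][1]/det], [-A[1][0]/det, A[0][0]/det]]

M = [[2, 5], [-1, 3j]]    # an arbitrary test matrix
avg = [[0, 0], [0, 0]]
for U in group:
    C = matmul(U, matmul(M, inverse(U)))
    avg = [[avg[i][j] + C[i][j] / 6 for j in range(2)] for i in range(2)]

scalar = (M[0][0] + M[1][1]) / 2   # tr(M)/dim, as Schur predicts
for i in range(2):
    for j in range(2):
        expected = scalar if i == j else 0
        assert abs(avg[i][j] - expected) < 1e-9
```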
Given two representations V and W, we denote by Hom_G(V, W) the vector space of intertwiners from V to W, meaning the linear operators which commute with the G-action.
4.2 Corollary. If V and W are irreducible, the space Hom_G(V, W) is either 1-dimensional or {0}, depending on whether or not the representations are isomorphic. In the first case, any nonzero map is an isomorphism.
Proof. Indeed, the kernel and the image of an intertwiner φ are invariant subspaces of V and W, respectively (proof as for the eigenspaces above). Irreducibility leaves ker φ = {0} or V, and Im φ = {0} or W, as the only options. So if φ is not injective, ker φ = V and φ = 0. If φ is injective and V ≠ {0}, then Im φ = W and so φ is an isomorphism. Finally, to see that two nonzero intertwiners φ, ψ differ by a scalar factor, apply Schur’s lemma to φ⁻¹ ◦ ψ.
(4.3) Schur’s Lemma over other ﬁelds
The correct statement over other fields (even over R) requires some preliminary definitions.
4.4 Definition. An algebra over a field k is an associative ring with unit, containing a distinguished copy of k, commuting with every algebra element, and with 1 ∈ k being the algebra unit. A division ring is a ring where every nonzero element is invertible, and a division algebra is a division ring which is also a k-algebra.
4.5 Definition. Let V be a G-representation over k. The endomorphism algebra End_G(V) is the space of linear self-maps φ : V → V which commute with the group action, ρ(g) ◦ φ = φ ◦ ρ(g), ∀g ∈ G. The addition is the usual addition of linear maps, and the multiplication is composition.
I entrust it to your care to check that End_G(V) is indeed a k-algebra. In earlier notation, End_G(V) = Hom_G(V, V); however, Hom_G(V, W) is only a vector space in general.
4.6 Theorem (Schur’s Lemma). If V is an irreducible finite-dimensional G-representation over k, then End_G(V) is a finite-dimensional division algebra over k.
Proof. For any φ : V → V commuting with G, ker φ and Im φ are G-invariant subspaces. Irreducibility implies that either ker φ = {0}, so φ is injective and then an isomorphism (for dimensional reasons, or because Im φ = V), or else ker φ = V and then φ = 0.
A second proof of Schur’s Lemma over C can now be deduced from the following.
4.7 Proposition. The only finite-dimensional division algebra over C is C itself.
Proof. Let, indeed, A be the algebra and α ∈ A. The elements 1, α, α², . . . of the finite-dimensional vector space A over C must be linearly dependent. So we have a relation P(α) = 0 in A, for some nonzero polynomial P with complex coefficients, which we may take monic. But such a polynomial factors into linear terms ∏_k (α − α_k), for some complex numbers α_k, the roots of P. Since we are in a division ring and the product is zero, one of the factors must be zero, therefore α ∈ C and so A = C.
4.8 Remark. Over the reals, there are three finite-dimensional division algebras, namely R, C and H. All of these can occur as endomorphism algebras of irreducibles of finite groups (see Example Sheet 1).
(4.9) Application: Finite abelian groups revisited
Schur’s Lemma gives a shorter proof of the 1-dimensionality of irreps for finite abelian groups.
4.10 Theorem. Any irreducible complex representation of an abelian group G is one-dimensional.
Proof. Let g ∈ G and call ρ the representation. Then, ρ(g) commutes with every ρ(h). By irreducibility and Schur’s lemma, it follows that ρ(g) is a scalar. As each group element acts by a scalar, every line in V is invariant, so irreducibility implies that V itself is a line.
As one might expect from the failure of Schur’s lemma, this proposition is utterly false over other ground fields. For example, over the reals, the two irreducible representations of Z/3 have dimensions 1 and 2. (They are the trivial and the rotation representations, respectively.) However, one can show in general that the irreps are 1-dimensional over their endomorphism ring, which is an extension field of k.
The structure theorem asserts that every finite abelian G splits as ∏_k C_{n_k}, for cyclic groups C_{n_k} of orders n_k. Choose a generator g_k in each factor; this must act on any line by a root of unity of order n_k. So, to specify a one-dimensional representation, one must choose such a root of unity for each k. In particular, the number of irreducible isomorphism classes of representations of a finite abelian group equals the order of the group. Nonetheless, there is no canonical correspondence between group elements and representations; one must choose generators in each cyclic factor for that.
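The count can be made concrete (a sketch with illustrative names, not from the notes). For A = Z/2 × Z/3 there are |A| = 6 one-dimensional representations, one per pair of roots of unity:

```python
# Sketch: the 1-dimensional representations of A = Z/2 x Z/3 are exactly the pairs
# (2nd root of unity for the first generator, 3rd root of unity for the second),
# so their number equals |A| = 6.
import cmath
from itertools import product

def roots_of_unity(n):
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

reps = list(product(roots_of_unity(2), roots_of_unity(3)))
assert len(reps) == 6  # one irreducible class per group element

# Each pair really is a homomorphism A -> C^x:
for s, t in reps:
    def rho(a, b):
        return (s ** a) * (t ** b)
    for a1, b1 in product(range(2), range(3)):
        for a2, b2 in product(range(2), range(3)):
            lhs = rho((a1 + a2) % 2, (b1 + b2) % 3)
            rhs = rho(a1, b1) * rho(a2, b2)
            assert abs(lhs - rhs) < 1e-9
```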
4.11 Remark. The equality above generalises to non-abelian finite groups, but not as naively as one might think. One correct statement is that the number of irreducible isomorphism classes equals the number of conjugacy classes in the group. Another generalisation asserts that the squared dimensions of the irreducible isomorphism classes add up to the order of the group.
5 Isotypical Decomposition
To motivate the result we prove today, recall that any diagonalisable linear endomorphism A : V → V leads to an eigenspace decomposition of the space V as ⊕_λ V(λ), where V(λ) denotes the subspace of vectors verifying Av = λv. The decomposition is canonical, in the sense that it depends on A alone and on no other choices. (By contrast, there is no canonical eigenbasis of V: we must choose a basis in each V(λ) for that.) To show how useful this is, we now classify the irreducible representations of D₆ “by hand”, also proving complete reducibility in the process.
(5.1) Example: Representations of D₆
The symmetric group S₃ on three letters, also isomorphic to the dihedral group D₆, has two generators g, r satisfying g³ = r² = 1, gr = rg⁻¹. It is easy to spot three irreducible representations:
• The trivial 1-dimensional representation 1
• The sign representation S
• The geometric 2-dimensional representation W.
All these representations are naturally defined on real vector spaces, but we wish to work with the complex numbers, so we view them as complex vector spaces instead. For instance, the 2-dimensional representation is defined on W = C² by letting g act as the diagonal matrix ρ(g) = diag(ω, ω²), and r as the matrix
ρ(r) =
[ 0 1 ]
[ 1 0 ];
here, ω = exp(2πi/3) is the basic cube root of unity. (Exercise: Relate this to the geometric action of D₆. You must change bases, to diagonalise g.)
Let now (π, V) be any complex representation of D₆. We can diagonalise the action of π(g) and split V into eigenspaces,
V = V(1) ⊕ V(ω) ⊕ V(ω²).
The relation rgr⁻¹ = g⁻¹ shows that π(r) must preserve V(1) and interchange the other summands: that is, π(r) : V(ω) → V(ω²) is an isomorphism, with inverse π(r) (since r² = 1).
Decompose V(1) into the π(r)-eigenspaces; the eigenvalues are ±1, and since g acts trivially there, it follows that V(1) splits into a sum of copies of 1 and S. Further, choose a basis e₁, . . . , eₙ of V(ω), and let e′ₖ = π(r)eₖ. Then, π(r) acts on the 2-dimensional space Ceₖ ⊕ Ce′ₖ as the matrix
[ 0 1 ]
[ 1 0 ],
while π(g) acts as diag(ω, ω²). It follows that we have split V(ω) ⊕ V(ω²) into n copies of the 2-dimensional representation W.
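The matrices above can be verified directly (a quick sketch, not from the notes): they satisfy the D₆ relations, and ρ(r) swaps the ω- and ω²-eigenvectors of ρ(g).

```python
# Sketch: check that the matrices for W satisfy g^3 = r^2 = 1 and gr = rg^{-1},
# and that rho(r) swaps the omega- and omega^2-eigenvectors of rho(g).
import cmath

w = cmath.exp(2j * cmath.pi / 3)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B):
    return all(abs(A[i][j] - B[i][j]) < 1e-9 for i in range(2) for j in range(2))

I = [[1, 0], [0, 1]]
g = [[w, 0], [0, w**2]]
r = [[0, 1], [1, 0]]
g_inv = [[w**2, 0], [0, w]]  # g^{-1} = g^2

assert close(matmul(g, matmul(g, g)), I)       # g^3 = 1
assert close(matmul(r, r), I)                  # r^2 = 1
assert close(matmul(g, r), matmul(r, g_inv))   # gr = rg^{-1}

# e1 = (1, 0) spans V(omega); r e1 = (0, 1) spans V(omega^2), as in the argument above.
e1 = [1, 0]
re1 = [r[0][0]*e1[0] + r[0][1]*e1[1], r[1][0]*e1[0] + r[1][1]*e1[1]]
assert re1 == [0, 1]
```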
(5.2) Decomposition into isotypical components
Continuing the general theory, the next task is to understand completely reducible representations in terms of their irreducible constituents. A completely reducible V decomposes into a sum of irreducibles. Grouping isomorphic irreducible summands into blocks, we write V ≅ ⊕_k V_k, where each V_k is isomorphic to a sum of n_k copies of the irreducible representation W_k; the latter are assumed non-isomorphic for different k’s. Another notation we might employ is
V ≅ ⊕_k W_k^{⊕n_k}.
We call n_k the multiplicity of W_k in V, and the decomposition of V just described the canonical decomposition of V, or decomposition into isotypical components V_k. This terminology must now be justified: we do not know yet whether this decomposition is unique, nor even whether the n_k are unambiguously defined.
5.3 Lemma. Let V = ⊕ₖ Vₖ and V′ = ⊕ₖ V′ₖ be canonical decompositions of two representations
V and V′. Then, any G-invariant homomorphism φ : V′ → V maps V′ₖ to Vₖ.

Note that, for any completely reducible V and V′, we can arrange the decompositions to be
labelled by the same indexes, adding zero blocks if some Wₖ is not present in the decomposition.
Obviously, φ will be zero on those. We are not assuming uniqueness of the decompositions;
rather, this will follow from the Lemma.
Proof. We consider the “block decomposition” of φ with respect to the direct sum decompositions
of V and V′. Specifically, restricting φ to the summand V′ₗ and projecting to Vₖ leads to
the “block” φₖₗ : V′ₗ → Vₖ, which is a linear G-map. Consider now an off-diagonal block φₖₗ,
k ≠ l. If we decompose Vₖ and V′ₗ further into copies of the irreducibles Wₖ and Wₗ, the resulting
blocks of φₖₗ give G-linear maps between non-isomorphic irreducibles, and such maps are all
null by Corollary 4.2. So φₖₗ = 0 if k ≠ l, as asserted.
5.4 Theorem. Let V = ⊕ₖ Vₖ be a canonical decomposition of V.
(i) Every subrepresentation of V which is isomorphic to Wₖ is contained in Vₖ.
(ii) The canonical decomposition is unique; that is, it does not depend on the original decomposition
of V into irreducibles.
(iii) The endomorphism algebra End_G(Vₖ) is isomorphic to a matrix algebra M_{nₖ}(C), a choice
of isomorphism coming from a decomposition of Vₖ into a direct sum of the irreducible Wₖ.
(iv) End_G(V) is isomorphic to the direct sum of matrix algebras ⊕ₖ M_{nₖ}(C), block-diagonal
with respect to the canonical decomposition of V.
Proof. Part (i) follows from the Lemma by taking V′ = Wₖ. This gives an intrinsic description
of Vₖ, as the sum of all copies of Wₖ contained in V, leading to part (ii).
Let φ ∈ End_G(Vₖ) and write Vₖ as a direct sum ≅ Wₖ^⊕nₖ of copies of Wₖ. In the resulting
block decomposition of φ, each block φ_pq is a G-invariant linear map between copies of Wₖ. By
Schur's lemma, all blocks are scalar multiples of the identity: φ_pq = f_pq·Id, for some constant
f_pq. Assigning to φ the matrix [f_pq] identifies End_G(Vₖ) with M_{nₖ}(C).[3] Part (iv) follows from
(iii) and the Lemma.
5.5 Remark. When the ground ﬁeld is not C, we must allow the appropriate division rings to
replace C in the theorem.
(5.6) Operations on representations
We now discuss some operations on group representations; they can be used to enlarge the supply
of examples, once a single interesting representation of a group is known. We have already met
the direct sum of two representations in Lecture 2; however, that cannot be used to produce
new irreducibles.
5.7 Definition. The dual representation of V is the representation ρ* on the dual vector space
V* := Hom(V; C) of linear maps V → C, defined by ρ*(g)(L) = L ∘ ρ(g)⁻¹.

That is, a linear map L : V → C is mapped to the linear map ρ*(g)(L) sending v to L(ρ(g)⁻¹(v)).
Note how this definition preserves the duality pairing between V and V*:
    L(v) = ρ*(g)(L)(ρ(g)(v)).
There is actually little choice in the matter, because the more naive option ρ*(g)(L) = L ∘ ρ(g)
does not, in fact, determine an action of G. The following is left as an exercise.
5.8 Proposition. The dual representation ρ* is irreducible if and only if ρ was so.
Exercise. Study duality on irreducible representations of abelian groups.
The dual representation is a special instance of the Hom space of two representations: we can
replace the 1dimensional space of scalars C, the target space of our linear map, by an arbitrary
representation W.
5.9 Definition. For two vector spaces V, W, Hom(V, W) will denote the vector space of linear
maps from V to W. If V and W carry linear G-actions ρ and σ, then Hom(V, W) carries a
natural G-action in which g ∈ G sends φ : V → W to σ(g) ∘ φ ∘ ρ(g)⁻¹.

In the special case W = C with the trivial G-action, we recover the construction of V*. Note
also that the invariant vectors in the Hom space are precisely the G-linear maps from V to W.
When V = W, this Hom space and its invariant part are algebras, but of course not in general.
6 Tensor products
We now construct the tensor product of two vector spaces V and W. For simplicity, we will take
them to be ﬁnitedimensional. We give two deﬁnitions; the ﬁrst is more abstract, and contains
all the information you need to work with tensor products, but gives little idea of the true size
of the resulting space. The second is quite concrete, but requires a choice of basis, and in this
way breaks any symmetry we might wish to exploit in the context of group actions.
[3] A different decomposition of Vₖ is related to the old one by a G-isomorphism S : Wₖ^⊕nₖ → Wₖ^⊕nₖ; but this
G-isomorphism is itself block-decomposable into scalar multiples of Id, so the effect on φ is simply to conjugate
the associated matrix [f_pq] by S.
6.1 Definition. The tensor product V ⊗ₖ W of two vector spaces V and W over k is the
k-vector space based on elements v ⊗ w, labelled by pairs of vectors v ∈ V and w ∈ W, modulo
the following relations, for all scalars λ ∈ k, v, v₁, v₂ ∈ V and w, w₁, w₂ ∈ W:
    (v₁ + v₂) ⊗ w = v₁ ⊗ w + v₂ ⊗ w;
    v ⊗ (w₁ + w₂) = v ⊗ w₁ + v ⊗ w₂;
    (λv) ⊗ w = v ⊗ (λw) = λ(v ⊗ w).
In other words, V ⊗ W is the quotient of the vector space with basis {v ⊗ w} by the subspace
spanned by the differences of left- and right-hand sides in each identity above. When the ground
field is understood, it can be omitted from the notation.
6.2 Proposition. Let {e₁, ..., e_m} and {f₁, ..., f_n} be bases of V and W. Then {e_p ⊗ f_q}, with
1 ≤ p ≤ m, 1 ≤ q ≤ n, is a vector space basis of V ⊗ W.
Proof. First, we show that the vectors e_p ⊗ f_q span the tensor product V ⊗ W. Indeed, for any
v ⊗ w, we can express the factors v and w linearly in terms of the basis elements. The tensor
relations then express v ⊗ w as a linear combination of the e_p ⊗ f_q. To check linear independence,
we construct, for each pair (p, q), a linear functional L : V ⊗ W → k equal to 1 on e_p ⊗ f_q and
vanishing on all other e_r ⊗ f_s. This shows that no linear relation can hold among our vectors.
Let ε : V → k and ϕ : W → k be the linear maps which are equal to 1 on e_p, resp. f_q, and
vanish on the other basis vectors. We let L(v ⊗ w) = ε(v)·ϕ(w). We note by inspection that
L takes equal values on the two sides of each of the identities in (6.1), and so it descends to a
well-defined linear map on the quotient space V ⊗ W. This completes the proof.
The tensor product behaves like a multiplication with respect to the direct sum:
6.3 Proposition. There are natural isomorphisms U ⊗ (V ⊗ W) ≅ (U ⊗ V) ⊗ W and
(U ⊕ V) ⊗ W ≅ (U ⊗ W) ⊕ (V ⊗ W).

Proof. “Naturality” means that no choice of basis is required to produce an isomorphism. Indeed,
send u ⊗ (v ⊗ w) to (u ⊗ v) ⊗ w for the first map, and (u, v) ⊗ w to (u ⊗ w, v ⊗ w) in the
second; extend by linearity to the spaces and check that the tensor relations (6.1) are preserved
by the maps.
6.4 Deﬁnition. The tensor product of two representations ρ on V and σ on W of a group G is
the representation ρ ⊗σ on V ⊗W deﬁned by the condition
(ρ ⊗σ)(g)(v ⊗w) = ρ(g)(v) ⊗σ(g)(w),
and extended to all vectors in V ⊗W by linearity.
By inspection, this construction preserves the linear relations in Def. 6.1, so it does indeed lead
to a linear action of G on the tensor product. We shall write this action in matrix form, using
a basis as in Proposition 6.2, but ﬁrst let us note a connection between these constructions.
6.5 Proposition. If dim V, dim W < ∞, there is a natural isomorphism of vector spaces (preserving
G-actions, if defined) from W ⊗ V* to Hom(V, W).

Proof. There is a natural map from the first space to the second, defined by sending w ⊗ L ∈
W ⊗ V* to the rank one linear map v ↦ w·L(v). We extend this to all of W ⊗ V* by linearity.
Clearly, this preserves the G-actions, as defined earlier, so we just need to check the map is an
isomorphism. In the basis {f_i} of W, the dual basis {e*_j} of V* and the basis {f_i ⊗ e*_j} of W ⊗ V*
constructed in Prop. 6.2, we can check that the basis vector f_i ⊗ e*_j is sent to the elementary
matrix E_ij; and the latter form a basis of all linear maps from V to W.
(6.6) The tensor product of two linear maps
Consider two linear operators A ∈ End(V) and B ∈ End(W). We define an operator A ⊗ B ∈
End(V ⊗ W) by (A ⊗ B)(v ⊗ w) = (Av) ⊗ (Bw), extending by linearity to all vectors in V ⊗ W.
In bases {e_i}, {f_k} of V, W, our operators are given by matrices A_ij, 1 ≤ i, j ≤ m, and B_kl,
1 ≤ k, l ≤ n. Let us write the matrix form of A ⊗ B in the basis {e_i ⊗ f_k} of V ⊗ W. Note that
a basis index for V ⊗ W is a pair (i, k) with 1 ≤ i ≤ m, 1 ≤ k ≤ n. We have the simple formula
    (A ⊗ B)_{(i,k)(j,l)} = A_ij·B_kl,
which is checked by observing that the application of A ⊗ B to the basis vector e_j ⊗ f_l contains
e_i ⊗ f_k with the advertised coefficient A_ij·B_kl.
Here are some useful properties of A⊗B.
6.7 Proposition. Let λ_i (1 ≤ i ≤ m) and μ_k (1 ≤ k ≤ n) be the eigenvalues of A and B.
(i) The eigenvalues of A ⊗ B are the products λ_i·μ_k.
(ii) Tr(A ⊗ B) = Tr(A)·Tr(B).
(iii) det(A ⊗ B) = det(A)ⁿ·det(B)^m.
Proof. Tr(A ⊗ B) = Σ_{i,k} (A ⊗ B)_{(i,k)(i,k)} = Σ_{i,k} A_ii·B_kk = (Σ_i A_ii)·(Σ_k B_kk) = Tr(A)·Tr(B),
proving part (ii). When A and B are diagonalisable, parts (i) and (iii) are seen as follows.
Choose the {e_i}, {f_k} to be eigenbases for A, B. Then A ⊗ B is also diagonal, with entries
λ_i·μ_k. This deals with the generic case, because generic matrices are diagonalisable. For any
matrix, we can choose a diagonalisable matrix arbitrarily close by, and the formulae (i) and (iii)
apply. If you dislike this argument, choose instead bases in which A and B are upper-triangular
(this can always be done), and check that A ⊗ B is also upper-triangular in the tensor basis.
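With the lexicographic ordering of index pairs, A ⊗ B is exactly numpy's Kronecker product, so
the three parts of the Proposition can be sanity-checked numerically. The triangular test
matrices below are our own illustrative choice, not taken from the notes:

```python
import numpy as np

# Small, explicitly triangular matrices so the eigenvalues are visible.
A = np.array([[2., 1.],
              [0., 3.]])          # eigenvalues 2, 3
B = np.array([[1., 0.],
              [5., 4.]])          # eigenvalues 1, 4
m = n = 2

AB = np.kron(A, B)               # (A (x) B)[(i,k),(j,l)] = A[i,j] * B[k,l]

assert np.isclose(np.trace(AB), np.trace(A) * np.trace(B))        # part (ii)
assert np.isclose(np.linalg.det(AB),
                  np.linalg.det(A)**n * np.linalg.det(B)**m)      # part (iii)
# Part (i): the eigenvalues of A (x) B are the pairwise products.
assert np.allclose(sorted(np.linalg.eigvals(AB).real),
                   sorted([2*1, 2*4, 3*1, 3*4]))
```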
6.8 Remark. A concrete proof of (iii) can be given using row-reduction. Applied to A and B, this
results in upper-triangular matrices S and T, whose diagonal entries multiply to give the two
determinants. Now, ordering the index pairs (i, k) lexicographically, the matrix A ⊗ B acquires
a decomposition into blocks of size n × n:
    [ A₁₁B  A₁₂B  ⋯  A₁ₘB ]
    [ A₂₁B        ⋯  A₂ₘB ]
    [  ⋮               ⋮  ]
    [ Aₘ₁B  Aₘ₂B  ⋯  AₘₘB ]
We can apply the row-reduction procedure for A, treating the blocks in this matrix as entries;
this results in the matrix [S_ij·B]. Applying now the row-reduction procedure for B to each
block of n consecutive rows results in the matrix [S_ij·T], which is S ⊗ T. Its determinant is
Π_{i,k} S_ii·T_kk = (det S)ⁿ·(det T)^m. The statement about eigenvalues follows from the determinant
formula by considering the characteristic polynomial.
7 Examples and complements
The formula for A ⊗ B permits us to write the matrix form of the tensor product of two
representations, in the basis (6.2). However, the matrices involved are often too large to give
any insight. In the following example, we take a closer look at the tensor square W^⊗2 := W ⊗ W
of the irreducible 2-dimensional representation of D₆; we will see that it decomposes into the
three distinct irreducibles listed in Lecture 5.
(7.1) Example: Decomposing the tensor square of a representation
Let G = D₆. It has the three irreps 1, S, W listed in Lecture 5. It is easy to check that 1 ⊗ S = S,
S ⊗ S = 1. To study W ⊗ W, we could use the matrix formula to write the 4 × 4 matrices of ρ ⊗ ρ, but
there is a better way to understand what happens.
Under the action of g, W decomposes as a sum L₁ ⊕ L₂ of eigenlines. We also call L₀ the
line with trivial action of the group C₃ generated by g. We get a C₃-isomorphism
    W ⊗ W = (L₁ ⊗ L₁) ⊕ (L₂ ⊗ L₂) ⊕ (L₁ ⊗ L₂) ⊕ (L₂ ⊗ L₁).
Now, as C₃-reps, L₁ ⊗ L₁ ≅ L₂, L₂ ⊗ L₂ ≅ L₁ and L₁ ⊗ L₂ ≅ L₀, so our decomposition is isomorphic
to L₂ ⊕ L₁ ⊕ L₀ ⊕ L₀. We can also say something about r: since it swaps L₁ and L₂ in each
factor of the tensor product, we see that, in the last decomposition, it must swap L₁ with L₂ and
also the two copies of L₀ among themselves. We suspect that the first two lines add up to an
isomorphic copy of W; to confirm this, choose a basis vector e₁ in L₁, and let e₂ = ρ^⊗2(r)(e₁) ∈ L₂.
Because r² = 1, we see that in this basis of L₂ ⊕ L₁, G acts by the same matrices as on W.
On the last two summands, g acts trivially, but r swaps the two lines. Choosing a basis of
the form {f, ρ^⊗2(r)(f)} leads again to the matrix expression ρ^⊗2(r) = ( 0 1 ; 1 0 ); this diagonalises to
diag(1, −1). So we recognise here the sum of the trivial representation and the sign representation
S of S₃. All in all, we have
    W ⊗ W ≅ W ⊕ L₀ ⊕ S.
The appearance of the ‘new’ irreducible sign representation shows that there cannot be a
straightforward universal formula for the decomposition of tensor products of irreducibles. One
substantial accomplishment of the theory of characters, which we will study next, is a simple
calculus which permits the decomposition of representations into irreducibles.
For your amusement, here is the matrix form of ρ ⊗ ρ, in a basis matching the decomposition
into lines given earlier. (Note: this is not the lexicographically ordered basis of C² ⊗ C².)
    ρ^⊗2(g) = [ ω² 0 0 0 ; 0 ω 0 0 ; 0 0 1 0 ; 0 0 0 1 ],    ρ^⊗2(r) = [ 0 1 0 0 ; 1 0 0 0 ; 0 0 0 1 ; 0 0 1 0 ].
(7.2) The symmetric group action on tensor powers
There was no possibility of W ⊗ W being irreducible: switching the two factors in the product
gives an action of C₂ which commutes with D₆, and the ±1 eigenspaces of the action must then
be invariant under D₆. The (+1)-eigenspace, the part of W ⊗ W invariant under the switch, is
called the symmetric square, and the (−1)-part is the exterior square. These are special instances
of the following more general decomposition.
Let V be a representation of G, ρ : G → GL(V), and let V^⊗n := V ⊗ ⋯ ⊗ V (n times). If
v₁, ..., v_m is a basis of V, then a basis of V^⊗n is the collection of vectors v_{k₁} ⊗ ⋯ ⊗ v_{kₙ}, where
the indexes (k₁, ..., kₙ) range over {1, ..., m}ⁿ. So V^⊗n has dimension mⁿ.
More generally, any n-tuple of vectors u₁, ..., uₙ ∈ V defines a vector u₁ ⊗ ⋯ ⊗ uₙ ∈ V^⊗n.
A general vector in V^⊗n is a linear combination of these.
7.3 Proposition. The symmetric group acts on V^⊗n by permuting the factors: σ ∈ Sₙ sends
u₁ ⊗ ⋯ ⊗ uₙ to u_{σ(1)} ⊗ ⋯ ⊗ u_{σ(n)}.

Proof. This prescription defines a permutation action on our standard basis vectors, which
extends by linearity to the entire space. By expressing a general u₁ ⊗ ⋯ ⊗ uₙ in terms of the
standard basis vectors, we can check that σ maps it to u_{σ(1)} ⊗ ⋯ ⊗ u_{σ(n)}, as claimed.
7.4 Proposition. Setting ρ^⊗n(g)(u₁ ⊗ ⋯ ⊗ uₙ) = ρ(g)(u₁) ⊗ ⋯ ⊗ ρ(g)(uₙ) and extending by
linearity defines an action of G on V^⊗n, which commutes with the action of Sₙ.

Proof. Assuming this defines an action, commutation with Sₙ is clear, as applying ρ^⊗n(g) before
or after a permutation of the factors has the same effect. We can see that we get an action from
the first (abstract) definition of the tensor product: ρ^⊗n(g) certainly defines an action on the
(huge) vector space based on all the symbols u₁ ⊗ ⋯ ⊗ uₙ, and this action preserves the linear
relations defining V^⊗n; so it descends to a well-defined linear operator on the latter.
7.5 Corollary. Every Sₙ-isotypical component of V^⊗n is a G-subrepresentation.

Indeed, since ρ^⊗n(g) commutes with Sₙ, it must preserve the canonical decomposition of V^⊗n
under Sₙ (proved in Lecture 4).
Recall two familiar representations of the symmetric group: the trivial 1-dimensional
representation 1 and the sign representation ε : Sₙ → {±1} (defined by declaring that every
transposition goes to −1). The two corresponding blocks in V^⊗n are called Symⁿ V, the symmetric
power, and Λⁿ V, the exterior or alternating power of V. They are also called the
invariant/anti-invariant subspaces, or the symmetric/anti-symmetric parts of V^⊗n.
7.6 Proposition. Let P± be the following linear self-maps of V^⊗n:
    P₊ : u₁ ⊗ ⋯ ⊗ uₙ ↦ (1/n!) Σ_{σ∈Sₙ} u_{σ(1)} ⊗ ⋯ ⊗ u_{σ(n)},
    P₋ : u₁ ⊗ ⋯ ⊗ uₙ ↦ (1/n!) Σ_{σ∈Sₙ} ε(σ)·u_{σ(1)} ⊗ ⋯ ⊗ u_{σ(n)}.
Then:
• Im(P±) ⊂ V^⊗n belongs to the invariant/anti-invariant subspace;
• P± acts as the identity on invariant/anti-invariant vectors;
• P±² = P±, and so the two operators are the projections onto Symⁿ V and Λⁿ V.

Proof. The first part is easy to see: we are averaging the transforms of a vector under the
symmetric group action, so the result is clearly invariant (or anti-invariant, when the sign is
inserted). The second part is obvious, and the third follows from the first two.
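For n = 2 the two operators are simply (1 ± T)/2, where T is the switch of factors. A minimal
numerical sketch (our own illustration, with dim V = m = 3) confirms the three bullet points
and the expected dimensions m(m+1)/2 and m(m−1)/2:

```python
import numpy as np

m = 3
# The switch operator T on V (x) V, in the basis e_p (x) e_q (index p*m + q).
T = np.zeros((m * m, m * m))
for p in range(m):
    for q in range(m):
        T[q * m + p, p * m + q] = 1   # T(e_p (x) e_q) = e_q (x) e_p

P_plus = (np.eye(m * m) + T) / 2      # projector onto Sym^2 V
P_minus = (np.eye(m * m) - T) / 2     # projector onto Lambda^2 V

assert np.allclose(P_plus @ P_plus, P_plus)      # P+^2 = P+
assert np.allclose(P_minus @ P_minus, P_minus)   # P-^2 = P-
assert np.allclose(P_plus @ P_minus, 0)          # complementary images
# rank = trace for a projector: dim Sym^2 = 6, dim Lambda^2 = 3
assert round(np.trace(P_plus)) == 6 and round(np.trace(P_minus)) == 3
```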
7.7 Proposition (Basis for Symⁿ V, Λⁿ V). If {v₁, ..., v_m} is a basis for V, then:
a basis for Symⁿ V is given by the vectors (1/n!) Σ_{σ∈Sₙ} v_{k_{σ(1)}} ⊗ ⋯ ⊗ v_{k_{σ(n)}}, for 1 ≤ k₁ ≤ ⋯ ≤ kₙ ≤ m,
and a basis for Λⁿ V by the vectors (1/n!) Σ_{σ∈Sₙ} ε(σ)·v_{k_{σ(1)}} ⊗ ⋯ ⊗ v_{k_{σ(n)}}, for 1 ≤ k₁ < ⋯ < kₙ ≤ m.
Proof. (Sketch) The previous proposition and our basis for V^⊗n make it clear that the vectors
above span Symⁿ and Λⁿ, if we range over all choices of the k_i. However, an out-of-order collection
of k_i can be ordered by a permutation, resulting in the same basis vector (up to a sign, in the
alternating case). So the out-of-order indexes merely repeat some of the basis vectors. Also, in
the alternating case, the sum is zero as soon as two indexes have equal values. Finally, distinct
sets of indexes k_i lead to linear combinations of disjoint sets of standard basis vectors in V^⊗n,
which shows the linear independence of our collections.
7.8 Corollary. Λⁿ V = 0 if n > m. More generally, dim Λᵐ V = 1 and dim Λᵏ V equals the
binomial coefficient (m choose k).
Notation: In the symmetric power, we often use u₁ ⋯ uₙ to denote the symmetrised vectors
of Prop. 7.6, while in the exterior power, one uses u₁ ∧ ⋯ ∧ uₙ.
8 The character of a representation
We now turn to the core results of the course, the character theory of group representations.
This allows an eﬀective calculus with group representations, including their tensor products,
and their decomposition into irreducibles.
We want to attach invariants to a representation ρ of a ﬁnite group G on V . The matrix
coeﬃcients of ρ(g) are basisdependent, hence not true invariants. Observe, however, that g
generates a ﬁnite cyclic subgroup of G; this implies the following (see Lecture 2).
8.1 Proposition. If G and dimV are ﬁnite, then every ρ(g) is diagonalisable.
More precisely, all eigenvalues of ρ(g) will be roots of unity of orders dividing that of G. (Apply
Lagrange’s theorem to the cyclic subgroup generated by g.)
To each g ∈ G, we can assign its eigenvalues on V, up to order. Numerical invariants
result from the elementary symmetric functions of these eigenvalues, which you also know as
the coefficients of the characteristic polynomial det_V[λ − ρ(g)]. Especially meaningful are
• the constant term, det ρ(g) (up to sign);
• the sub-leading term, Tr ρ(g) (up to sign).
The following is clear from multiplicativity of the determinant:

8.2 Proposition. The map g ↦ det_V ρ(g) ∈ C defines a 1-dimensional representation of G.
This is a nice, but “weak” invariant. For instance, you may know that the alternating
group A₅ is simple, that is, it has no proper normal subgroups. Because it is not abelian,
any homomorphism to the abelian group C^× must be trivial, so the determinant of any of its
representations is 1. Even for abelian groups, the determinant is too weak to distinguish
isomorphism classes of representations. The winner turns out to be the other interesting invariant.
8.3 Definition. The character of the representation ρ is the complex-valued function on G
defined by χ_ρ(g) := Tr_V(ρ(g)).

The Big Theorem of the course is that this is a complete invariant, in the sense that it
determines ρ up to isomorphism.
For a representation ρ : G → GL(V), we have defined χ_V : G → C by χ_V(g) = Tr_V(ρ(g)).
8.4 Theorem (First properties).
1. χ_V is conjugation-invariant: χ_V(hgh⁻¹) = χ_V(g) for all g, h ∈ G;
2. χ_V(1) = dim V;
3. χ_V(g⁻¹) = χ̄_V(g), the complex conjugate;
4. For two representations V, W, χ_{V⊕W} = χ_V + χ_W and χ_{V⊗W} = χ_V·χ_W;
5. For the dual representation V*, χ_{V*}(g) = χ_V(g⁻¹).
Proof. Parts (1) and (2) are clear. For part (3), choose an invariant inner product on V; unitarity
of ρ(g) implies that ρ(g⁻¹) = ρ(g)⁻¹ equals the conjugate transpose of ρ(g), whence (3) follows
by taking the trace.[4] Part (4) is clear from
    Tr [ A 0 ; 0 B ] = Tr A + Tr B,  and  ρ_{V⊕W} = [ ρ_V 0 ; 0 ρ_W ].
The product formula follows from the identity Tr(A ⊗ B) = Tr A · Tr B from Lecture 6. Finally,
Part (5) follows from the fact that the action of g on a linear map L ∈ Hom(V; C) = V* is
composition with ρ(g)⁻¹ (Lecture 5).
8.5 Remark. Conjugation-invariant functions on G are also called central functions or class
functions. Their value at a group element g depends only on the conjugacy class of g. We can
therefore view them as functions on the set of conjugacy classes.
(8.6) The representation ring
We can rewrite property (4) more professionally by introducing an algebraic construct.
8.7 Definition. The representation ring R_G of the finite group G is the free abelian group
based on the set of isomorphism classes of irreducible representations of G, with multiplication
reflecting the tensor product decomposition of irreducibles:
    [V]·[W] = Σₖ nₖ·[Vₖ]  iff  V ⊗ W ≅ ⊕ₖ Vₖ^⊕nₖ,
where we write [V] for the isomorphism class of an irreducible representation V, and the tensor
product V ⊗ W has been decomposed into irreducibles Vₖ, with multiplicities nₖ.
Thus defined, the representation ring is associative and commutative, with identity the trivial
1-dimensional representation: [C]·[W] = [W] for any W, because C ⊗ W ≅ W.
Example: G = Cₙ, the cyclic group of order n.
Our irreducibles are L₀, ..., L_{n−1}, where the generator of Cₙ acts on Lₖ as exp(2πik/n). We
have Lₚ ⊗ L_q ≅ L_{p+q}, subscripts being taken modulo n. It follows that R_{Cₙ} ≅ Z[X]/(Xⁿ − 1),
identifying Lₖ with Xᵏ.
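The isomorphism with Z[X]/(Xⁿ − 1) can be made concrete: a (virtual) representation becomes
its vector of multiplicities, and the tensor product becomes multiplication of exponents mod n
(cyclic convolution). A small sketch for n = 4, our own illustration:

```python
import numpy as np

n = 4

def tensor(a, b):
    """Product in R_{C_n}: a, b are multiplicity vectors of L_0, ..., L_{n-1}."""
    out = np.zeros(n, dtype=int)
    for p in range(n):
        for q in range(n):
            out[(p + q) % n] += a[p] * b[q]   # L_p (x) L_q = L_{p+q mod n}
    return out

L1 = np.array([0, 1, 0, 0])   # the representation L_1
L3 = np.array([0, 0, 0, 1])   # the representation L_3
assert list(tensor(L1, L3)) == [1, 0, 0, 0]           # L_1 (x) L_3 = L_0
# (L_0 + L_1) (x) (L_0 + L_1) = L_0 + 2 L_1 + L_2:
assert list(tensor([1, 1, 0, 0], [1, 1, 0, 0])) == [1, 2, 1, 0]
```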
The professional restatement of property (4) in Theorem 8.4 is the following.
8.8 Proposition. The character is a ring homomorphism from R_G to the class functions on
G. It takes the involution V ↦ V* to complex conjugation.
Later, we shall return to this homomorphism and list some other good properties.
(8.9) Orthogonality of characters
For two complex functions ϕ, ψ on G, we let
    ⟨ϕ|ψ⟩ := (1/|G|) Σ_{g∈G} ϕ̄(g)·ψ(g).
In particular, this defines an inner product on class functions ϕ, ψ, where we sum over conjugacy
classes C ⊂ G:
    ⟨ϕ|ψ⟩ = (1/|G|) Σ_C |C|·ϕ̄(C)·ψ(C).
We are now ready for the ﬁrst part of our main theorem. There will be a complement in the
next lecture.
[4] For an alternative argument, recall that the eigenvalues of ρ(g) are roots of unity, and those of ρ(g)⁻¹ will be
their inverses; but these are also their conjugates. Recall now that the trace is the sum of the eigenvalues.
8.10 Theorem (Orthogonality of characters).
1. If V is irreducible, then ‖χ_V‖² = 1.
2. If V, W are irreducible and not isomorphic, then ⟨χ_V|χ_W⟩ = 0.
Before proving the theorem, let us list some consequences to illustrate its power.
8.11 Corollary. The number of times an irreducible representation V appears in an irreducible
decomposition of some W is ⟨χ_V|χ_W⟩.
8.12 Corollary. The above number (called the multiplicity of V in W) is independent of the
irreducible decomposition.
We had already proved this in Lecture 4, but we now have a second proof.
8.13 Corollary. Two representations are isomorphic iﬀ they have the same character.
(Use complete reducibility and the ﬁrst corollary above).
8.14 Corollary. The multiplicity of the trivial representation in W is (1/|G|) Σ_{g∈G} χ_W(g).
8.15 Corollary (Irreducibility criterion). V is irreducible iff ‖χ_V‖² = 1.
Proof. Decompose V into irreducibles as ⊕ₖ Vₖ^⊕nₖ; then χ_V = Σₖ nₖ·χₖ, and ‖χ_V‖² = Σₖ nₖ².
So ‖χ_V‖² = 1 iff all the nₖ's vanish but one, whose value must be 1.
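As a numerical illustration (our own, not part of the notes), here is the orthonormality check
for the character table of S₃ ≅ D₆, together with the multiplicity computation of Corollary 8.11
applied to W ⊗ W, recovering the decomposition found in Lecture 7:

```python
import numpy as np

# Character table of S3: columns are the classes {e}, {transpositions},
# {3-cycles}, of sizes 1, 3, 2. Rows: trivial, sign, 2-dimensional W.
sizes = np.array([1, 3, 2])
chars = np.array([[1,  1,  1],
                  [1, -1,  1],
                  [2,  0, -1]], dtype=float)

def ip(a, b):
    """<a|b> = (1/|G|) * sum over classes of |C| * conj(a) * b, |G| = 6."""
    return np.sum(sizes * np.conj(a) * b) / 6

# Orthonormality of the irreducible characters:
for i in range(3):
    for j in range(3):
        assert np.isclose(ip(chars[i], chars[j]), 1.0 if i == j else 0.0)

# Multiplicities in W (x) W, whose character is chi_W^2 = (4, 0, 1):
chi = chars[2] ** 2
assert [round(float(ip(c, chi))) for c in chars] == [1, 1, 1]  # 1 + S + W
```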
In preparation for the proof of orthogonality, we establish the following Lemma. For two
representations V, W of G and any linear map φ : V → W, define
    φ₀ = (1/|G|) Σ_{g∈G} ρ_W(g) ∘ φ ∘ ρ_V(g)⁻¹.
8.16 Lemma.
1. φ₀ intertwines the actions of G.
2. If V and W are irreducible and not isomorphic, then φ₀ = 0.
3. If V = W and ρ_V = ρ_W, then φ₀ = (Tr φ / dim V)·Id.
Proof. For part (1), we just examine the result of conjugating by h ∈ G:
    ρ_W(h) ∘ φ₀ ∘ ρ_V(h)⁻¹ = (1/|G|) Σ_{g∈G} ρ_W(h)·ρ_W(g) ∘ φ ∘ ρ_V(g)⁻¹·ρ_V(h)⁻¹
                           = (1/|G|) Σ_{g∈G} ρ_W(hg) ∘ φ ∘ ρ_V(hg)⁻¹
                           = (1/|G|) Σ_{g∈G} ρ_W(g) ∘ φ ∘ ρ_V(g)⁻¹ = φ₀.
Part (2) now follows from Schur's lemma. For part (3), Schur's lemma again tells us that φ₀ is
a scalar; to find it, it suffices to take the trace over V: then φ₀ = (Tr(φ₀)/dim V)·Id. But we have
    Tr(φ₀) = (1/|G|) Σ_{g∈G} Tr(ρ_V(g) ∘ φ ∘ ρ_V(g)⁻¹) = Tr(φ),
as claimed.
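Part (3) is easy to test numerically: averaging an arbitrary linear map over the 2-dimensional
irreducible of D₆ from Lecture 5 should produce the scalar Tr φ / dim V. A sketch (our own,
assuming numpy; φ is an arbitrary choice):

```python
import numpy as np

# The 2-dimensional irreducible of D6: g = diag(w, w^2), r = swap.
w = np.exp(2j * np.pi / 3)
g = np.diag([w, w**2])
r = np.array([[0, 1], [1, 0]], dtype=complex)
G = [np.eye(2), g, g @ g, r, r @ g, r @ g @ g]   # all six elements

phi = np.array([[1., 2.], [3., 4.]], dtype=complex)   # an arbitrary map
phi0 = sum(M @ phi @ np.linalg.inv(M) for M in G) / len(G)

# Lemma 8.16(3): the average is (Tr phi / dim V) * Id.
assert np.allclose(phi0, np.trace(phi) / 2 * np.eye(2))
```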
Proof of orthogonality. Choose invariant inner products on V, W and orthonormal bases {v_i}
and {w_j} for the two spaces. When V = W we shall use the same inner product and basis in
both. We shall use the “bra-ket” vector notation explained in the handout. Then we have
    ⟨χ_W|χ_V⟩ = (1/|G|) Σ_{g∈G} χ̄_W(g)·χ_V(g) = (1/|G|) Σ_{g∈G} χ_W(g⁻¹)·χ_V(g)
              = (1/|G|) Σ_{i,j} Σ_{g∈G} ⟨w_i|ρ_W(g)⁻¹|w_i⟩·⟨v_j|ρ_V(g)|v_j⟩.
We now interpret each summand as follows. Recall first that |w_i⟩⟨v_j| designates the linear map
V → W which sends a vector v to the vector w_i·⟨v_j|v⟩. The product ⟨w_i|ρ_W(g)⁻¹|w_i⟩·⟨v_j|ρ_V(g)|v_j⟩
is then interpretable as the result of applying the linear map ρ_W(g)⁻¹ ∘ |w_i⟩⟨v_j| ∘ ρ_V(g) to the
vector |v_j⟩, and then taking the inner product with w_i.
Fix now i and j and sum over g ∈ G. Lemma 8.16 then shows that the average of the linear maps,
    (1/|G|) Σ_{g∈G} ρ_W(g)⁻¹ ∘ |w_i⟩⟨v_j| ∘ ρ_V(g) = { 0 if V ≇ W;  Tr(|w_i⟩⟨v_j|)/dim V · Id if ρ_V = ρ_W }.   (8.17)
In the first case, summing over i, j still leads to zero, and we have proved that χ_V ⊥ χ_W. In the
second case,
    Tr(|w_i⟩⟨v_j|) = { 1 if i = j; 0 otherwise },
and summing over i, j leads to a factor of dim V, cancelling the denominator in the second line
of (8.17) and completing our proof.
8.18 Remark. The inner product of characters can be defined over any ground field k of
characteristic not dividing |G|, by using χ(g⁻¹) in lieu of χ̄(g). We have used Schur's Lemma over C
in establishing Part (3) of Lemma 8.16, although not for Parts 1 and 2. Hence, the orthogonality
of characters of non-isomorphic irreducibles holds over any such field k, but orthonormality
‖χ‖² = 1 can fail if k is not algebraically closed. The value of the square depends on the
decomposition of the representation in the algebraic closure k̄. This can be determined from
the division algebra D_V = End_G(V) over k and the Galois theory of its centre, which will be a
finite extension field of k. See, for instance, Serre's book for more detail.
(8.19) The representation ring again
We defined the representation ring R_G of G as the free abelian group based on the isomorphism
classes of irreducible representations of G. Using complete reducibility, we can identify the linear
combinations of these basis elements with non-negative coefficients with isomorphism classes of
G-representations: we simply send a representation ⊕ᵢ Vᵢ^⊕nᵢ, decomposed into irreducibles, to
the linear combination Σᵢ nᵢ·[Vᵢ] ∈ R_G (the bracket denotes the isomorphism class). General
elements of R_G are sometimes called virtual representations of G. We defined the multiplication
in R_G using the tensor product of representations. We sometimes denote the product by ⊗, to
remember its origin. The one-dimensional trivial representation 1 is the unit.
We then defined the character χ_V of a representation ρ on V to be the complex-valued
function on G with χ_V(g) = Tr_V ρ(g), the operator trace in the representation ρ. This function
is conjugation-invariant, or a class function on G. We denote the space of complex class functions
on G by C[G]^G. By linearity, χ becomes a map χ : R_G → C[G]^G. We then checked that
• χ_{V⊕W} = χ_V + χ_W and χ_{V⊗W} = χ_V·χ_W. Thus, χ : R_G → C[G]^G is a ring homomorphism.
• Duality corresponds to complex conjugation: χ_{V*} = χ̄_V.
• The linear functional Inv : R_G → Z, sending a virtual representation to the multiplicity
(= coefficient) of the trivial representation 1, corresponds to averaging over G:
    Inv(V) = (1/|G|) Σ_{g∈G} χ_V(g).
• We can define a positive definite inner product on R_G by ⟨V|W⟩ := Inv(V* ⊗ W). This
corresponds to the obvious inner product on characters, (1/|G|) Σ_{g∈G} χ̄_V(g)·χ_W(g).
8.20 Remark. The structure detailed above for R_G is that of a Frobenius ring with involution.
(In a ring without involution, we would just require that the bilinear pairing (V, W) ↦ Inv(V ⊗ W)
should be non-degenerate.) Similarly, C[G]^G is a complex Frobenius algebra with involution,
and we are saying that the character χ is a homomorphism of such structures. It fails to be an
isomorphism “only in the obvious way”: the two rings share a basis of irreducible representations
(resp. characters), but the R_G coefficients are integral, not complex.
9 The Regular Representation
Recall that the regular representation of G has basis {e_g}, labelled by g ∈ G; we let G permute
the basis vectors according to its left action, λ(h)(e_g) = e_{hg}. This defines a linear action on the
span of the e_g.
We can identify the span of the e_g with the space C[G] of complex functions on G, by
matching e_g with the function sending g to 1 ∈ C and all other group elements to zero. Under
this correspondence, an element h ∈ G acts (“on the left”) on C[G] by sending the function ϕ to
the function λ(h)(ϕ), with λ(h)(ϕ)(g) := ϕ(h⁻¹g).
We can see that the character of the regular representation is
    χ_reg(g) = { |G| if g = 1; 0 otherwise };
in particular, ‖χ_reg‖² = |G|²/|G| = |G|, so this is far from irreducible. Indeed, every irreducible
representation appears in the regular representation, as the following proposition shows.

9.1 Proposition. The multiplicity of any irreducible representation in the regular representation
equals its dimension.
Proof. ⟨χ_V|χ_reg⟩ = (1/|G|)·χ̄_V(1)·χ_reg(1) = χ_V(1) = dim V.

For instance, the trivial representation appears exactly once, which we can also confirm directly:
the only translation-invariant functions on G are constant.
9.2 Proposition. Σ_V dim²V = |G|, the sum ranging over all irreducible isomorphism classes.

Proof. From χ_reg = Σ_V dim V · χ_V we get ‖χ_reg‖² = Σ_V dim²V, as desired.
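Both propositions can be checked against the character table of S₃ (classes {e}, transpositions,
3-cycles, of sizes 1, 3, 2); the sketch below is our own illustration, using that the characters
of S₃ are real:

```python
import numpy as np

sizes = np.array([1, 3, 2])
chi_reg = np.array([6, 0, 0])                 # chi_reg: |G| at e, 0 elsewhere
irreducibles = {'trivial': np.array([1, 1, 1]),
                'sign':    np.array([1, -1, 1]),
                'W':       np.array([2, 0, -1])}

# Proposition 9.1: multiplicity of each irreducible in the regular
# representation equals its dimension chi(1).
for name, chi in irreducibles.items():
    mult = np.sum(sizes * chi * chi_reg) / 6  # <chi | chi_reg>
    assert np.isclose(mult, chi[0])

# Proposition 9.2: the sum of squared dimensions equals |G| = 6.
assert sum(chi[0] ** 2 for chi in irreducibles.values()) == 6
```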
We now proceed to complete our main orthogonality theorem.
9.3 Theorem (Completeness of characters). The irreducible characters form a basis for
the space of complex class functions.
Thus, they form an orthonormal basis for that space. The proof consists in showing that if
a class function ϕ is orthogonal to every irreducible character, then its value at every group
element is zero. For this, we need the following.
Construction. To every function ϕ : G → C, and to every representation (V, ρ) of G, we assign
the following linear self-map ρ(ϕ) of V:
    ρ(ϕ) = Σ_{g∈G} ϕ(g)·ρ(g).
We note, by applying ρ(h) (h ∈ G) and relabelling the sum, that ρ(h)·ρ(ϕ) = ρ(λ(h)ϕ).
9.4 Lemma. ϕ is a class function iff ρ(ϕ) commutes with G, in every representation V.

Proof. If ϕ is a class function, then ϕ(h⁻¹gh) = ϕ(g) for all g, h, and so
    ρ(h)·ρ(ϕ)·ρ(h⁻¹) = Σ_{g∈G} ϕ(g)·ρ(hgh⁻¹) = Σ_{k∈G} ϕ(h⁻¹kh)·ρ(k) = Σ_{k∈G} ϕ(k)·ρ(k) = ρ(ϕ).
In the other direction, let V be the regular representation; then ρ(ϕ)(e₁) = Σ_{g∈G} ϕ(g)·e_g, and
    λ(h)·ρ(ϕ)(e₁) = Σ_{g∈G} ϕ(g)·e_{hg} = Σ_{g∈G} ϕ(h⁻¹g)·e_g,  whereas
    ρ(ϕ)·λ(h)(e₁) = ρ(ϕ)(e_h) = Σ_{g∈G} ϕ(g)·e_{gh} = Σ_{g∈G} ϕ(gh⁻¹)·e_g;
equating the two shows that ϕ(h⁻¹g) = ϕ(gh⁻¹), as claimed.
When ϕ is a class function and V is irreducible, Schur's Lemma and the previous result show
that ρ(ϕ) is a scalar. To find it, we compute the trace:
    Tr[ρ(ϕ)] = Σ_{g∈G} ϕ(g)·χ_V(g) = |G|·⟨χ_{V*}|ϕ⟩
(recall that χ̄_V = χ_{V*}). We obtain that, when V is irreducible,
    ρ(ϕ) = (|G| / dim V)·⟨χ_{V*}|ϕ⟩·Id.   (9.5)
Proof of Completeness. Assume that the class function ϕ is orthogonal to all irreducible
characters. By (9.5), ρ(ϕ) = 0 in every irreducible representation V, hence (by complete reducibility)
also in the regular representation. But, as we saw in the proof of Lemma 9.4, we have
ρ(ϕ)(e₁) = Σ_{g∈G} ϕ(g)·e_g in the regular representation; so ϕ = 0, as desired.
(9.6) The Group algebra
We now study the regular representation in greater depth. Define a multiplication on C[G] by
setting e_g·e_h = e_{gh} on the basis elements and extending by linearity:
    (Σ_g ϕ_g·e_g)·(Σ_h ψ_h·e_h) = Σ_{g,h} ϕ_g·ψ_h·e_{gh} = Σ_g (Σ_h ϕ_{gh⁻¹}·ψ_h)·e_g.
This multiplication is associative and distributive over addition, but it is commutative only when
G is. (Associativity follows from the same property in the group.) It contains e_1 as the
multiplicative identity, and the copy C·e_1 of C commutes with everything. This makes C[G] into
a C-algebra. The group G embeds into the group of multiplicatively invertible elements of C[G]
by g → e_g. For this reason, we will often replace e_g by g in our notation, and think of elements
of C[G] as linear combinations of group elements, with the obvious multiplication.
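The multiplication rule above is easy to experiment with. Here is a minimal sketch (my own, with S_3 as the test group) of the convolution product on C[G], confirming associativity and the failure of commutativity.

```python
from itertools import permutations

# Small illustration (mine): the product on C[S_3].
G = list(permutations(range(3)))

def mul(s, t):                   # group law: (s*t)(i) = s(t(i))
    return tuple(s[t[i]] for i in range(3))

def conv(phi, psi):              # (sum phi_g e_g)(sum psi_h e_h) = sum phi_g psi_h e_{gh}
    out = {g: 0 for g in G}
    for g in G:
        for h in G:
            out[mul(g, h)] += phi[g] * psi[h]
    return out

def delta(g):                    # the basis element e_g
    return {x: int(x == g) for x in G}

a, b = delta((1, 0, 2)), delta((0, 2, 1))   # the transpositions (12) and (23)
e = delta((0, 1, 2))
assert conv(e, a) == a                              # e_1 is the identity
assert conv(conv(a, b), a) == conv(a, conv(b, a))   # associativity
print(conv(a, b) == conv(b, a))                     # False: C[S_3] is not commutative
```
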
9.7 Proposition. There is a natural bijection between modules over C[G] and complex representations
of G.
Proof. On each representation (V, ρ) of G, we let the group algebra act in the obvious way,
ρ(∑_g ϕ_g g)(v) = ∑_g ϕ_g ρ(g)(v).
Conversely, given a C[G]-module M, the action of C·e_1 makes it into a complex vector space and
a group action is defined by embedding G inside C[G] as explained above.
We can reformulate the discussion above (perhaps more obscurely) by saying that a group
homomorphism ρ : G → GL(V) extends naturally to an algebra homomorphism from C[G] to
End(V); V is a module over End(V) and the extended homomorphism makes it into a
C[G]-module. Conversely, any such homomorphism defines by restriction a representation of G,
from which the original map can then be recovered by linearity.
Choosing a complete list (up to isomorphism) of irreducible representations V of G gives a
homomorphism of algebras,
⊕ρ_V : C[G] → ⊕_V End(V), ϕ → (ρ_V(ϕ))_V. (9.8)
Now each space End(V) carries two commuting actions of G: these are left composition with
ρ_V(g), and right composition with ρ_V(g)⁻¹. For an alternative realisation, End(V) is isomorphic
to V ⊗ V∗ and G acts separately on the two factors. There are also two commuting actions of
G on C[G], by multiplication on the left and on the right, respectively:
(λ(h)ϕ)(g) = ϕ(h⁻¹g), (ρ(h)ϕ)(g) = ϕ(gh).
It is clear from our construction that the two actions of G × G on the left and right sides of
(9.8) correspond, hence our algebra homomorphism is also a map of G × G-representations. The
main result of this subsection is the following much more precise version of Proposition 9.1.
9.9 Theorem. The homomorphism (9.8) is an isomorphism of algebras, and hence of
G × G-representations. In particular, we have an isomorphism of G × G-representations,
C[G] ≅ ⊕_V V ⊗ V∗,
with the sum ranging over the irreducible representations of G.
For the proof, we require the following
9.10 Lemma. Let V, W be complex irreducible representations of the finite groups G and H.
Then, V ⊗ W is an irreducible representation of G × H. Moreover, every irrep of G × H is of
that form.
The reader should be cautioned that the statement can fail if the field is not algebraically closed.
Indeed, the proof uses the orthonormality of characters.
Proof. Direct calculation of the square norm shows that the character χ_V(g)·χ_W(h) of G × H
is irreducible. (Homework: do this!) Moreover, I claim that these characters span the class
functions on G × H. Indeed, conjugacy classes in G × H are Cartesian products of classes in the
factors, and we can write every indicator class function on G and H as a linear combination of
characters; so we can write every indicator class function on G × H as a linear combination of
the χ_V χ_W.
In particular, it follows that the representation V ⊗ V∗ of G × G is irreducible. Moreover, no
two such reps for distinct V can be isomorphic; indeed, on pairs (g, 1) ∈ G × G, the character
reduces to (dim V)·χ_V(g), which determines V.
Proof of Theorem 9.9. By dimension count, it suffices to show that our map ⊕ρ_V is surjective.
The G × G-structure will be essential for this. Note from our Lemma that the decomposition
S := ⊕_V V ⊗ V∗ splits the sum S into pairwise non-isomorphic, irreducible representations.
From the results in Lecture 4, we know that any G × G-map into S must be block-diagonal
for the isotypical decompositions. In particular, if a map surjects onto each factor separately,
it surjects onto the sum S. Since the summands are irreducible, it suffices then to show that
each projection to End(V) is non-zero. But the group identity e_1 maps to the identity in each
End(V).
9.11 Remark. (i) It can be shown that the inverse map to ⊕ρ_V assigns to φ ∈ End(V) the
function g → Tr_V(ρ_V(g)φ)/dim V. (Optional Homework.)
(ii) Schur’s Lemma ensures that the invariants on the right-hand side for the action of the
diagonal copy G ⊂ G × G are the multiples of the identity in each summand. On the left side,
we get the conjugation-invariant functions, or class functions.
10 The character table
In view of our theorem about completeness and orthogonality of characters, the goal of complex
representation theory for a finite group is to produce the character table. This is the square
matrix whose rows are labelled by the irreducible characters, and whose columns are labelled
by the conjugacy classes in the group. The entries in the table are the values of the characters.
A representation can be regarded as “known” as soon as its character is known. Indeed,
one can extract the irreducible multiplicities from the dot products with the rows of the
character table. You will see in the homework (Q. 2.5) that you can even construct the isotypical
decomposition of your representation, once the irreducible characters are known.
Orthonormality of characters implies a certain orthonormality of the rows of the character
table, when viewed as a matrix A. “A certain” refers to the fact that the inner product between
characters is not the dot product of the rows; rather, the column of a class C must be weighed
by a factor |C|/|G| in the dot product (as the sum goes over group elements and not conjugacy
classes). In other words, if B is the matrix obtained from A by multiplying each column by
√(|C|/|G|), then the orthonormality relations imply that B is a unitary matrix. Orthonormality
of its columns leads to the
10.1 Proposition (The second orthogonality relations). For any conjugacy classes C, C′,
we have, summing over the irreducible characters χ of G,
∑_χ χ(C) χ̄(C′) = |G|/|C| if C = C′, and 0 otherwise.
10.2 Remark. The orbit-stabiliser theorem, applied to the conjugation action of G on itself and
some element c ∈ C, shows that the prefactor |G|/|C| is the order of Z_G(c), the centraliser of c
in G (the subgroup of G-elements commuting with c).
As a reformulation, we can express the indicator function E_C of a conjugacy class C ⊂ G in
terms of characters:
E_C(g) = (|C|/|G|) ∑_χ χ̄(C) χ(g).
Note that the coefficients need not be integral, so the indicator function of a conjugacy class is
not the character of a representation.
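Both the column orthogonality and the indicator-function formula can be checked directly on a small table. The sketch below (my own; the S_3 table and class sizes are standard) verifies them.

```python
# Checking the second orthogonality relations on the S_3 character table
# (my illustration; classes e, (12), (123) of sizes 1, 3, 2, |G| = 6).
table = [[1, 1, 1],     # trivial
         [1, -1, 1],    # sign
         [2, 0, -1]]    # standard
sizes = [1, 3, 2]

for c in range(3):
    for cp in range(3):
        s = sum(row[c] * row[cp] for row in table)   # characters are real here
        assert s == (6 // sizes[c] if c == cp else 0)

# Indicator function of the class of transpositions, by the displayed formula:
E = [sizes[1] / 6 * sum(row[1] * row[g] for row in table) for g in range(3)]
print(E)  # [0.0, 1.0, 0.0]
```
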
(10.3) Example: The cyclic group C_n of order n
Call g a generator, and let L_k be the one-dimensional representation where g acts as ω^k, where
ω = exp(2πi/n). Then we know that L_0, . . . , L_{n−1} are all the irreducibles and the character
table is the following:
        {1}    {g}    . . .   {g^q}     . . .
L_0      1      1     . . .     1       . . .
L_1      1      ω     . . .    ω^q      . . .
 ⋮       ⋮      ⋮               ⋮
L_p      1     ω^p    . . .   ω^{pq}    . . .
 ⋮       ⋮      ⋮               ⋮
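For the cyclic group every class is a singleton, so the weighted matrix B of the previous subsection is just the table divided by √n, and unitarity can be checked directly; this is my own numerical sketch (B is exactly the discrete Fourier matrix).

```python
import cmath

# Check (mine): the weighted C_n character table, entry omega^{pq}/sqrt(n),
# is a unitary matrix -- it is exactly the discrete Fourier matrix.
n = 5
omega = cmath.exp(2j * cmath.pi / n)
B = [[omega ** (p * q) / n ** 0.5 for q in range(n)] for p in range(n)]

for p1 in range(n):
    for p2 in range(n):
        s = sum(B[p1][q] * B[p2][q].conjugate() for q in range(n))
        assert abs(s - (1 if p1 == p2 else 0)) < 1e-9
print("rows of B are orthonormal")
```
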
Moving on to D_6, for which we had found the one-dimensional irreducibles 1, S and the
2-dimensional irreducible V , we note that 1 + 1 + 2² = 6 so there are no other irreducibles. We
thus expect 3 conjugacy classes and, indeed, we find {1}, {g, g²}, {r, rg, rg²}, the three reflections
being all conjugate (g⁻¹rg = rg², etc). From our matrix construction of the representation we
fill in the entries of the character table:
      {1}   {g, g²}   {rg, . . .}
1      1       1           1
S      1       1          −1
V      2      −1           0
Let us check one of the orthogonality relations: ‖χ_V‖² = (2² + 1²)/6 = 5/6, which does not match
the theorem very well; that is, of course, because we forgot to weigh the conjugacy classes by
their size. It is helpful to make a note of the weights somewhere in the character table. The
correct computation is ‖χ_V‖² = (2² + 2·1²)/6 = 1.
The general dihedral group D_{2n} works very similarly, but there is a distinction, according
to the parity of n. Let us first take n = 2m + 1, odd, and let ω = exp(2πi/n). We can
again see the trivial and sign representations 1 and S; additionally, we have m two-dimensional
representations V_k, labelled by 0 < k ≤ m, in which the generator g of the cyclic subgroup C_n
acts as the diagonal matrix with entries ω^k, ω^{−k}, and the reflection r switches the basis vectors.
Irreducibility of V_k follows as in the case n = 3, but we will of course be able to check that from
the character. The conjugacy classes are {1}, the pairs of opposite rotations {g^k, g^{−k}} and all n
reflections, m + 2 total, matching the number of representations we found. Here is the character
table:
       {1}   . . .   {g^q, g^{−q}}    . . .   refs.
1       1    . . .        1           . . .     1
S       1    . . .        1           . . .    −1
⋮       ⋮                 ⋮                     ⋮
V_p     2    . . .   2cos(2πpq/n)     . . .     0
⋮       ⋮                 ⋮                     ⋮
The case of even n = 2m is slightly different. We can still find the representations 1, S and V_k,
for 1 ≤ k ≤ m, but the sum of squared dimensions 1 + 1 + m·2² = 4m + 2 is now incorrect.
The problem is explained by observing that, in the representation V_m, g acts as −Id, so g and
r are simultaneously diagonalisable and V_m splits into two lines L_+ ⊕ L_−, distinguished by the
eigenvalues of r. The sum of squares is now the correct 1 + 1 + 4(m−1) + 1 + 1 = 4m. Observe
also that there are two conjugacy classes of reflections, {r, rg², . . . , rg^{n−2}} and {rg, rg³, . . .}.
The character table of D_{4m} is thus the following (0 < p, q < m):
       {1}   . . .   {g^q, g^{−q}}    . . .   {r, rg², . . .}   {rg, . . .}
1       1    . . .        1           . . .         1               1
S       1    . . .        1           . . .        −1              −1
⋮       ⋮                 ⋮                         ⋮               ⋮
V_p     2    . . .   2cos(2πpq/n)     . . .         0               0
⋮       ⋮                 ⋮                         ⋮               ⋮
L_±     1    . . .      (−1)^q        . . .        ±1              ∓1
For the Group algebra see Lecture 13.
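As a numerical sanity check of the odd case (my own instance, n = 5, m = 2, so D_10), the rows of the table above are orthonormal under the class-size weights:

```python
import math

# Row orthonormality for D_10 (mine): classes {1}, {g, g^-1}, {g^2, g^-2},
# reflections, of sizes 1, 2, 2, 5.
n, m = 5, 2
sizes = [1, 2, 2, 5]
rows = [[1, 1, 1, 1],       # trivial
        [1, 1, 1, -1]]      # sign
for p in range(1, m + 1):   # the two-dimensional V_p
    rows.append([2] + [2 * math.cos(2 * math.pi * p * q / n) for q in (1, 2)] + [0])

for i, r in enumerate(rows):
    for j, s in enumerate(rows):
        ip = sum(w * x * y for w, x, y in zip(sizes, r, s)) / (2 * n)
        assert abs(ip - (1 if i == j else 0)) < 1e-9
print("character table of D_10 is orthonormal under class-size weights")
```
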
11 Groups of order pq
Generalising the dihedral groups, we now construct the character table of certain groups of order
pq, where p and q are two primes such that q | (p−1). We define a special group F_{p,q} by generators
and relations. It can be shown that, other than the cyclic group of the same order, this is the
only group of order pq.
(11.1) The group F_{p,q}.
A theorem from algebra asserts that the group (Z/p)^× of non-zero residue classes under
multiplication is cyclic. A proof of this fact can be given as follows: one shows (from the structure
theorem, or directly) that any finite abelian group in which, for each d, the equation x^d = 1 has
no more than d solutions is cyclic. For the group (Z/p)^×, this property follows because Z/p is a
field, so the polynomial x^d − 1 cannot have more than d roots.
Let then u be an element of multiplicative order q. (Such elements exist if q | (p − 1), and
there are precisely ϕ(q) = q − 1 of them.) Consider the group F_{p,q} with two generators a, b
satisfying the relations a^p = b^q = 1, bab⁻¹ = a^u. When q = 2, we must take u = −1 and we
obtain the dihedral group D_{2p}.
11.2 Proposition. F_{p,q} has exactly pq elements, namely a^m b^n, for 0 ≤ m < p, 0 ≤ n < q.
Proof. Clearly, these elements exhaust F_{p,q}, because the commutation relation allows us to move
all b’s to the right of the a’s in any word. To show that there are no further identifications,
the best way is to realise the group concretely. We shall in fact embed F_{p,q} within the group
GL_2(Z/p) of invertible 2 × 2 matrices with coefficients in Z/p. Specifically, we consider the
matrices of the form
[x y; 0 1] : x a power of u, y arbitrary.
There are pq such matrices and they are closed under multiplication. We realise F_{p,q} by sending
a → [1 1; 0 1],   b → [u 0; 0 1].
11.3 Remark. It can be shown (you’ll do this in the homework) that a different choice of u leads
to an isomorphic group. So omitting u from the notation F_{p,q} is in fact justified.
Let now r = (p − 1)/q, and let v_1, . . . , v_r be a set of representatives for the cosets of (Z/p)^×
modulo the subgroup generated by u. We view the v_i as residue classes mod p; thus, {v_i u^j} is
a complete set of non-zero residue classes mod p, as 1 ≤ i ≤ r and 0 ≤ j < q.
11.4 Proposition. F_{p,q} has q + r conjugacy classes. A complete list of representatives is
{1, a^{v_m}, b^n}, with 1 ≤ m ≤ r and 1 ≤ n < q.
Proof. More precisely, the class of a^{v_m} contains the elements a^{v_m u^i}, 0 ≤ i < q, and that of b^n
the elements a^j b^n, 0 ≤ j < p. We obtain the other powers of a by successive b-conjugation
(conjugation by a does not change those elements), and the a^j b^n by a-conjugation: ab^n a⁻¹ =
a^{1−u^n} b^n, and u^n ≠ 1 if 0 < n < q, so 1−u^n ≠ 0 mod p and repeated conjugation produces
all powers of a. Moreover, it is clear that further conjugation by b does not enlarge this set.
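The class count can be confirmed by brute force on the matrix model; this is my own check for p = 7, q = 3, u = 2, where r = 2, so the proposition predicts 5 classes of sizes 1, 3, 3, 7, 7.

```python
# Counting the conjugacy classes of F_{7,3} by brute force (mine).
p, q, u = 7, 3, 2
I = ((1, 0), (0, 1))

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

xs = sorted({pow(u, j, p) for j in range(q)})           # the powers of u mod p
G = [((x, y), (0, 1)) for x in xs for y in range(p)]    # the pq matrices above

def inv(M):                       # brute-force inverse within the finite group
    return next(N for N in G if mul(M, N) == I)

classes = {frozenset(mul(mul(h, g), inv(h)) for h in G) for g in G}
print(sorted(len(c) for c in classes))                  # [1, 3, 3, 7, 7]
```
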
(11.5) Representations.
The subgroup generated by a is normal in F_{p,q}, and the quotient group is cyclic of order q. We
can thus see q 1-dimensional representations of F_{p,q}, on which a acts trivially, but b acts by
the various roots of unity of order q. We need to find r more representations, whose squared
dimensions should add up to (p − 1)q = rq². The obvious guess is to look for q-dimensional
representations. In fact, much as we did for the dihedral group, we can find them and spot their
irreducibility without too much effort.
Note that, on any representation, the eigenvalues of a are p-th roots of unity, powers of
ω = exp(2πi/p). Assume now that ω^k is an eigenvalue, k ≠ 0. I claim that the powers of exponent
ku, ku², . . . , ku^{q−1} are also present among the eigenvalues. Indeed, let v be the ω^k-eigenvector;
we have
ρ(a)ρ(b)⁻¹(v) = ρ(b)⁻¹ρ(a^u)(v) = ρ(b)⁻¹ω^{ku}(v) = ω^{ku}ρ(b)⁻¹(v),
and so ρ(b)⁻¹(v) is an eigenvector with eigenvalue ω^{ku}. Thus, the non-trivial eigenvalues of a
cannot appear in isolation, but rather in the r families {ω^{v_i u^j}}, in each of which 0 ≤ j < q.
Fixing i, the smallest possible matrix is then
ρ_i(a) = diag[ω^{v_i}, ω^{v_i u}, . . . , ω^{v_i u^{q−1}}].
Note now that the matrix B which cyclically permutes the standard basis vectors satisfies
Bρ_i(a)B⁻¹ = ρ_i(a^u), so we can set ρ_i(b) = B to get a q-dimensional representation of F_{p,q}. This
is irreducible, for we saw that no representation on which a acts non-trivially can have dimension
lower than q.
(11.6) The character table
Let η = exp(2πi/q). Call L_k the 1-dimensional representation in which b acts as η^k and V_m the
representation ρ_m above. The character table for F_{p,q} looks as follows. Note that 0 < k < q,
0 < m ≤ r, 1 ≤ l ≤ r, 0 < n < q, and the sum in the table ranges over 0 ≤ s < q.
       1    a^{v_l}                  b^n
1      1    1                        1
L_k    1    1                        η^{kn}
V_m    q    ∑_s ω^{v_m v_l u^s}      0
The orthogonality relations are not completely obvious but require a short calculation. The key
identity is ∑ ω^{x v_m u^s} = −1, where x mod p is a fixed, non-zero residue and m and s in the sum
range over all possible choices: the products v_m u^s run over all non-zero residues mod p, and the
p-th roots of unity ω^{xt}, t ≠ 0, sum to −1.
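The short calculation can also be delegated to the computer; here is my own numerical orthogonality check of the table for the instance p = 7, q = 3, u = 2, with coset representatives v = (1, 3) for the subgroup {1, 2, 4} of (Z/7)^×.

```python
import cmath

# Orthogonality of the F_{7,3} character table (my instance).
p, q, u, v = 7, 3, 2, (1, 3)
w = cmath.exp(2j * cmath.pi / p)
eta = cmath.exp(2j * cmath.pi / q)

def S(x):                          # sum over 0 <= s < q of omega^{x u^s}
    return sum(w ** (x * pow(u, s, p)) for s in range(q))

# Rows on the classes 1, a^{v_1}, a^{v_2}, b, b^2, of sizes 1, 3, 3, 7, 7:
rows = [[1, 1, 1, 1, 1],
        [1, 1, 1, eta, eta ** 2],                    # L_1
        [1, 1, 1, eta ** 2, eta ** 4],               # L_2
        [q, S(v[0] * v[0]), S(v[0] * v[1]), 0, 0],   # V_1
        [q, S(v[1] * v[0]), S(v[1] * v[1]), 0, 0]]   # V_2
sizes = [1, 3, 3, 7, 7]

for i, r in enumerate(rows):
    for j, s in enumerate(rows):
        ip = sum(c * x * complex(y).conjugate()
                 for c, x, y in zip(sizes, r, s)) / 21
        assert abs(ip - (1 if i == j else 0)) < 1e-9
print("character table of F_{7,3} is orthonormal")
```
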
12 The alternating group A_5
In this lecture we construct the character table of the alternating group A_5 of even permutations
on five letters. Unlike the previous examples, we will now exploit the properties of characters in
finding representations and proving their irreducibility.
We start with the conjugacy classes. Recall that every permutation is a product of disjoint
cycles, uniquely up to their order, and that two permutations are conjugate in the symmetric
group iff they have the same cycle type, meaning the cycle lengths (with their multiplicities).
Thus, a complete set of representatives for the conjugacy classes in S_5 is
Id, (12), (12)(34), (123), (123)(45), (1234), (12345),
seven in total. Of these, only Id, (12)(34), (123) and (12345) are in A_5.
Now, a conjugacy class in S_5 may or may not break up into several classes in A_5. The reason
is that we may or may not be able to relate two given permutations in A_5 which are conjugate
in S_5 by an even permutation. Now the first three permutations in our list all commute with
some transposition. This ensures that we can implement the effect of any conjugation in S_5 by
a conjugation in A_5: conjugate by the extra transposition if necessary. We do not have such an
argument for the 5-cycle, so there is the possibility that the cycles (12345) and (21345) are not
conjugate in A_5. To make sure, we count the size of the conjugacy class by the orbit-stabiliser
theorem. The size of the conjugacy class containing g ∈ G is |G| divided by the order of the
centraliser of g. The centraliser of (12345) in S_5 is the cyclic subgroup of order 5 generated by
this cycle, and is therefore the same as the centraliser in A_5. So the order of the conjugacy class
in S_5 is 24, but in A_5 only 12. So our suspicions are confirmed, and there are two classes of
5-cycles in A_5.
12.1 Remark. The same orbit-stabiliser argument shows that the conjugacy class in S_n of an
even permutation σ is a single conjugacy class in A_n precisely when σ commutes with some odd
permutation; else, it breaks up into two classes of equal size.
For use in computations, we note that the orders of the conjugacy classes are 1 for Id, 15 for
(12)(34), 20 for (123) and 12 each for (12345) and (21345). We have, as we should,
1 + 15 + 20 + 12 + 12 = 60 = |A_5|.
We now start producing the rows of the character table, listing the conjugacy classes in the
order above. The trivial rep 1 gives the row 1, 1, 1, 1, 1. Next, we observe that A_5 acts naturally
on C⁵ by permuting the basis vectors. This representation is not irreducible; indeed, the line
spanned by the vector e_1 + . . . + e_5 is invariant. The character χ_{C⁵} is 5, 1, 2, 0, 0 and its dot
product ⟨χ_{C⁵}|χ_1⟩ = 5/60 + 1/4 + 2/3 = 1 with the trivial rep shows that 1 appears in C⁵ with
multiplicity one. The complement V has character χ_V = 4, 0, 1, −1, −1, whose square norm is
‖χ_V‖² = 16/60 + 1/3 + 1/5 + 1/5 = 1,
showing that V is irreducible.
We seek more representations, and consider for that the tensor square V^{⊗2}. We already know
this is reducible, because it splits as Sym²V ⊕ Λ²V, but we try to extract irreducible components.
For this we need the character formulae for Sym²V and Λ²V.
12.2 Proposition. For any g ∈ G,
χ_{Sym²V}(g) = [χ_V(g)² + χ_V(g²)]/2,
χ_{Λ²V}(g) = [χ_V(g)² − χ_V(g²)]/2.
Proof. Having fixed g, we choose a basis of V on which g acts diagonally, with eigenvalues
λ_1, . . . , λ_d, repeated as necessary. From our bases of Sym²V, Λ²V, we see that the eigenvalues
of g on the two spaces are λ_p λ_q, with 1 ≤ p ≤ q ≤ d on Sym² and with 1 ≤ p < q ≤ d on Λ².
Then, the proposition reduces to the obvious identities
2 ∑_{p≤q} λ_p λ_q = (∑_p λ_p)² + ∑_p λ_p²,   2 ∑_{p<q} λ_p λ_q = (∑_p λ_p)² − ∑_p λ_p².
The class function g → χ_V(g²) is denoted by Ψ²χ_V. In our case, it is seen to give the values
(4, 4, 1, −1, −1), whereas χ_V² is (16, 0, 1, 1, 1). So our characters are χ_{Sym²V} = (10, 2, 1, 0, 0)
and χ_{Λ²V} = (6, −2, 0, 1, 1). We can see that Sym²V will have a positive dot product with 1;
specifically,
⟨χ_{Sym²V}|χ_1⟩ = 1/6 + 2/4 + 1/3 = 1,
so Sym²V contains 1 exactly once. We also compute
⟨χ_{Sym²V}|χ_V⟩ = 4/6 + 1/3 = 1,
showing that V appears once as well. The complement W of 1 ⊕ V has character (5, 1, −1, 0, 0),
with norm-square
‖χ_W‖² = 25/60 + 1/4 + 1/3 = 1,
showing that W is a new, 5-dimensional irreducible representation.
We need two more representations. The squared dimensions we have so far add up to
1 + 4² + 5² = 42, missing 18. The only decomposition into two squares is 3² + 3², so we are
looking for two 3-dimensional ones. With luck, they will both appear in Λ²V. This will happen
precisely if
• ‖χ_{Λ²V}‖² = 2,
• χ_{Λ²V} is orthogonal to the other three characters.
We check the first condition, leaving the second as an exercise:
‖χ_{Λ²V}‖² = 36/60 + 4/4 + 1/5 + 1/5 = 1 + 1 = 2.
We are left with the problem of decomposing χ_{Λ²V} into two irreducible characters. These
must be unit vectors orthogonal to the other three characters. If we were sure the values are
real, there would be a unique solution, discoverable by easy linear algebra. With complex values
allowed, there is a one-parameter family of solutions; but it turns out only one solution will take
the value 3 at the conjugacy class Id, so a bit of work will produce the answer. However, there
is a better way (several, in fact).
One method is to understand Λ²V from the point of view of S_5. (In the homework, you
get to check that it is an irreducible representation of S_5; this is why this makes sense.) Notice
that conjugation by your favourite transposition τ defines an automorphism of A_5, hence a
permutation of the conjugacy classes. Under it, the first three classes stay fixed, but the two
types of 5-cycles get interchanged. Therefore, this automorphism will switch the last two columns
of the character table. Now, the same automorphism also acts on the set of representations,
because the formula
^τρ(σ) := ρ(τστ⁻¹)
defines a new representation ^τρ from any given representation ρ, acting on the same vector space.
It is easy to see that ^τρ is irreducible iff ρ is so (they will have the same invariant subspaces
in general). So, the same automorphism also permutes the rows of the character table. Now,
the three rows we found have dimensions 1, 4, 5 and are the unique irreducible reps of those
dimensions. So, at most, τ can switch the last two rows; else, it does nothing.
Now, if switching the last two columns did nothing, it would follow that all characters of A_5
take equal values on the last two conjugacy classes, contradicting the fact that the characters
span all class functions. So we must be in the first case and τ must switch the last two rows.
But then it follows that the bottom two entries in the first three columns (those not moved by
τ) are always equal; as we know they sum to χ_{Λ²V}, they are (3, −1, 0). For the last two rows
and two columns, swapping the rows must have the same effect as swapping the columns, so the
last 2 × 2 block is
[a b; b a];
we have a + b = 1 and, by orthogonality, ab = −1; whence a, b = (1 ± √5)/2.
All in all, the character table for A_5 is the following:
      {Id}   (12)(34)   (123)   (12345)   (21345)
1       1       1         1        1         1
V       4       0         1       −1        −1
W       5       1        −1        0         0
Λ′      3      −1         0        a         b
Λ″      3      −1         0        b         a
12.3 Remark. There are two other ways to fill in the last two rows. One is to argue a priori
that the character values are real. This can be done using the little result in Question 2.6.b.
Knowing that, observe that the character value of a 5-cycle in a 3-dimensional representation
must be a sum of three fifth roots of unity. The only real values of such sums are 3 and the two
values a, b above. The value 3 could only be attained if the 5-cycles acted as the identity; but
in that case, they would commute with everything in the group. This is not possible, as A_5 is
simple, hence has no proper non-trivial normal subgroups, hence is isomorphic to its image under
any non-trivial representation; so any relation holding in the representation would have to hold
in the group. So the only possible character values are a and b, and you can easily check that
the table above is the only option compatible with orthogonality.
Another way to fill the last rows, of course, would be to “spot” the representations. They
actually come from the two identifications of A_5 with the group of rotations of the icosahedron.
The five-cycles become rotations by multiples of 2π/5, whose traces are seen to be a and b from
the matrix description of these rotations.
13 Integrality in the group algebra
The goal of this section is to study the properties of integral elements in the group algebra,
resulting in the remarkable theorem that the degree of an irreducible representation divides the
order of the group. We recall first the notion of integrality.
13.1 Definition. Let S ⊂ R be a subring of the commutative ring R. An element x ∈ R is
called integral over S if the following equivalent conditions are satisfied:
(a) x satisfies some monic polynomial equation with coefficients in S;
(b) the ring S[x] generated by x over S within R is a finitely generated S-module.
If S = Z, x is simply called integral. An integral element of C is called an algebraic integer.
Recall that an S-module M is finitely generated if it contains elements m_1, . . . , m_k such that
every m ∈ M is expressible as s_1 m_1 + · · · + s_k m_k, for suitable s_i ∈ S. Both conditions are seen
to be equivalent to the statement that, for some large N, x^N is expressible in terms of the x^n,
n < N, with coefficients in S. Here are two general facts that we will use.
13.2 Proposition. If x, y ∈ R are both integral over S, then so are x + y and xy.
Proof. We know that x is integral over S and that y is integral over S[x]. Hence S[x] is a finitely
generated S-module and S[x, y] is a finitely generated S[x]-module. All in all, S[x, y] is a finitely
generated S-module. (The products of a generator of S[x] over S with a generator of S[x, y] over
S[x] give a set of generators of S[x, y] over S.)
13.3 Proposition. The image of an integral element under any ring homomorphism is integral.
This is clear from the first definition. Finally, we note the following fact about algebraic integers.
13.4 Proposition. Any rational algebraic integer is a ‘genuine’ integer (in Z).
Sketch of proof. Assume that the reduced fraction x = p/q satisfies a monic polynomial equation
of degree n with integer coefficients, and get a contradiction by showing that the denominator
q^n in the leading term cannot be cancelled by any of the other terms in the sum.
13.5 Remark. The proposition is a special case of Gauss’ Lemma, which says that any monic
factor with rational coefficients of a monic polynomial with integral coefficients must in fact
have integral coefficients.
Our goal is the following
13.6 Theorem. The dimension of any complex irreducible representation of a finite group
divides the order of the group.
Calling G the group and V the representation, we will prove that |G|/dim V is an algebraic
integer. The proposition above will then imply the theorem. Observe the identity
|G|/dim V = ∑_{g∈G} χ_V(g⁻¹) · χ_V(g)/dim V = ∑_C χ̄_V(C) · χ_V(C)|C|/dim V,
where the last sum ranges over the conjugacy classes C. The theorem then follows from the
following two propositions.
13.7 Proposition. The values χ(g) of the character of any representation of G are algebraic
integers.
Proof. Indeed, χ(g) is a sum of eigenvalues of ρ(g), which are roots of unity; apply 13.2.
13.8 Proposition. For any conjugacy class C ⊂ G and irreducible representation V , the number
χ_V(C)·|C|/dim V is an algebraic integer.
To prove the last proposition, we need to recall the group algebra C[G], the space of formal
linear combinations ∑ ϕ_g g of group elements with multiplication prescribed by the group law
and the distributive law. It contains the subspace of class functions. We need to know that this
is a commutative subalgebra; in fact we have the following, more precise
13.9 Proposition. The space of class functions is the centre Z(C[G]) of C[G].
Proof. Class functions are precisely those elements ϕ ∈ C[G] for which λ(g)ϕρ(g)⁻¹ = ϕ, or
λ(g)ϕ = ϕρ(g), in terms of the left and right multiplication actions of G. Rewriting in terms of
the multiplication law in C[G], this says g · ϕ = ϕ · g, so class functions are those commuting
with every g ∈ G. But that is equivalent to their commutation with every element of C[G].
Let now E_C ∈ C[G] denote the indicator function of the conjugacy class C ⊂ G. Note that
E_{1} is the identity in C[G].
13.10 Proposition. ⊕_C Z·E_C is a subring of Z(C[G]). In particular, every E_C is integral in
Z(C[G]).
Proof. Writing E_C = ∑_{g∈C} g makes it clear that a product of two E’s is a sum of group elements
(with integral coefficients). Grouping them by conjugacy class leads to an integral combination
of the other E’s. Integrality of E_C follows, because the ring Z[E_C] is a subgroup of the free,
finitely generated abelian group ⊕_C Z·E_C, and so it is a finitely generated abelian group as
well.
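The integrality of the class-sum products is easy to observe concretely; this sketch (my own, for G = S_3) computes the integer structure constants directly.

```python
from itertools import permutations

# Illustration for G = S_3 (mine): products of the class sums E_C are
# integral combinations of class sums, as the proof asserts.
G = list(permutations(range(3)))

def mul(s, t):
    return tuple(s[t[i]] for i in range(3))

def label(p):                     # the class of p: identity, transposition, 3-cycle
    return {3: 'e', 1: 't', 0: 'c'}[sum(p[i] == i for i in range(3))]

classes = {'e': [], 't': [], 'c': []}
for g in G:
    classes[label(g)].append(g)

def class_sum_product(c1, c2):    # coefficients of E_e, E_t, E_c in E_{c1} E_{c2}
    coef = {'e': 0, 't': 0, 'c': 0}
    for g in classes[c1]:
        for h in classes[c2]:
            coef[label(mul(g, h))] += 1
    # the product is central, hence constant on classes: divide by class size
    return {k: n // len(classes[k]) for k, n in coef.items()}

print(class_sum_product('t', 't'))   # {'e': 3, 't': 0, 'c': 3}
print(class_sum_product('c', 'c'))   # {'e': 2, 't': 0, 'c': 1}
```
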
Recall now that the group homomorphism ρ_V : G → GL(V) extends by linearity to an algebra
homomorphism ρ_V : C[G] → End(V); namely, ρ_V(ϕ) = ∑_g ϕ_g ρ_V(g). This is compatible with
the conjugation action of G, and so class functions must go to the subalgebra End_G(V) of
G-invariant endomorphisms. When V is irreducible, the latter is isomorphic to C, by Schur’s
lemma. Since Tr_V(α·Id) = α·dim V, the isomorphism is realised by taking the trace, and
dividing by the dimension. We obtain the following
13.11 Proposition. When V is irreducible, Tr_V/dim V is an algebra homomorphism:
Tr_V(·)/dim V : Z(C[G]) → C,   ∑_g ϕ_g g → ∑_g ϕ_g χ_V(g)/dim V.
Applying this homomorphism to E_C results in χ_V(C)·|C|/dim V. Propositions 13.3 and 13.10
imply (13.8) and thus our theorem.
14 Induced Representations
The study of A_5 illustrates the need for more methods to construct representations. Now, for
any group, we have the regular representation, which contains every irreducible; but there is no
good method to break it up in general. We need some smaller representations.
One tool is suggested by the first of the representations of A_5, the permutation representation
C⁵. In general, whenever G acts on X, it will act linearly on the space C[X] of complex-valued
functions on X. This is called the permutation representation of X. Some of its basic properties
are listed in Q. 2.3. For example, the decomposition X = ⊔ X_k into orbits of the G-action
leads to a direct sum decomposition C[X] = ⊕ C[X_k]. So if we are interested in irreducible
representations, we might as well confine ourselves to transitive group actions. In this case, X
can be identified with the space G/H of left cosets of G for some subgroup H (the stabiliser of
some point). You can check that, if G = A_5 and H = A_4, then C[X] ≅ C⁵, the permutation
representation studied in the last lecture.
We will generalise this construction. Let then H ⊂ G be a subgroup and ρ : H → GL(W) a
representation thereof.
14.1 Definition. The induced representation Ind_H^G W is the subspace of the space of all maps
G → W, with the left action of G on G, consisting of those maps which are invariant under the
simultaneous action of H on the right on G and by ρ on W.
Under the left action of an element k ∈ G, a map ϕ : G → W gets sent to the map g → ϕ(k⁻¹g) ∈
W. The simultaneous action of h ∈ H on G and on W sends it to the map g → ρ(h)(ϕ(gh)).
This makes it clear that the two actions, of G and of H, commute, and so the subspace of
H-invariant maps is a subrepresentation of G.
We can write the actions a bit more explicitly by observing that the space of W-valued maps
on G is isomorphic to C[G] ⊗ W: using the natural basis e_g of functions on G and any basis
{f_i} of W, we identify the basis element e_g ⊗ f_i of C[G] ⊗ W with the map taking g ∈ G to f_i
and every other group element to zero. In this basis, the left action of k ∈ G sends e_g ⊗ w to
e_{kg} ⊗ w. The right action of h ∈ H sends it to e_{gh⁻¹} ⊗ w, whereas the simultaneous action on
the right and on W sends it to e_{gh⁻¹} ⊗ ρ(h)(w). A general function ∑_{g∈G} e_g ⊗ w_g ∈ C[G] ⊗ W
gets sent by h to
∑_{g∈G} e_{gh⁻¹} ⊗ ρ(h)(w_g) = ∑_{g∈G} e_g ⊗ ρ(h)(w_{gh}).
Thus, the H-invariance condition is
ρ(h)(w_{gh}) = w_g, ∀h ∈ H. (*)
Let R ⊂ G be a set of representatives for the cosets G/H. For r ∈ R, let W_r ⊂ Ind_H^G W be
the subspace of functions supported on rH, that is, the functions vanishing everywhere else.
14.2 Proposition. Each vector space W_r is isomorphic to W, and we have a decomposition,
as a vector space, Ind_H^G W ≅ ⊕_r W_r.
In particular, dim Ind_H^G W = |G/H| · dim W.
Proof. For each r, we construct an isomorphism from W to W_r by
w ∈ W → ∑_{h∈H} e_{rh} ⊗ ρ(h⁻¹)(w).
Clearly, this function is supported on rH, and, in view of (*), it is H-invariant; moreover,
the same condition makes it clear that every invariant function supported on rH has this form,
for some w ∈ W. Also, every function ∑_{g∈G} e_g ⊗ w_g can be rewritten as
∑_{r∈R} ∑_{h∈H} e_{rh} ⊗ w_{rh} = ∑_{r∈R} ∑_{h∈H} e_{rh} ⊗ ρ(h⁻¹)(w_r),
in view of the same H-invariance condition. This gives the desired decomposition ⊕_r W_r.
Examples
1. If H = {1} and W is the trivial representation, Ind_H^G W is the regular representation of G.
2. If H is any subgroup and W is the trivial representation, then Ind_H^G W ≅ C[G/H] ⊗ W, the
permutation representation on G/H times the vector space W with the trivial G-action.
3. (This is not obvious.) More generally, if the action of H on W extends to an action of the
ambient group G, then Ind_H^G W ≅ C[G/H] ⊗ W, with the extended G-action on W.
Let us summarise the basic properties of induction.

14.3 Theorem (Basic Properties).
(i) Ind(W_1 ⊕ W_2) ≅ Ind W_1 ⊕ Ind W_2.
(ii) dim Ind_H^G W = |G/H| · dim W.
(iii) Ind_{{1}}^G 1 is the regular representation of G.
(iv) Let H ⊂ K ⊂ G. Then, Ind_K^G Ind_H^K W ≅ Ind_H^G W.
Proof. Part (i) is obvious from the definition, and we have seen (ii) and (iii). Armed with sufficient patience, you can check property (iv) from the character formula for the induced representation that we will give in the next lecture. Here is the proof from first definitions.

Ind_H^G W is the space of maps from G to W that are invariant under H. An element of Ind_K^G Ind_H^K W is a map from G into the space of maps from K into W, invariant under K and under H. This is the same as a map φ : G × K → W satisfying

    φ(gk′, k) = φ(g, k′k) and φ(g, kh^{-1}) = ρ_W(h)(φ(g, k)).

Such a map is determined by its restriction to G × {1}, and the resulting restriction ψ : G → W satisfies ψ(gh^{-1}) = ρ_W(h)(ψ(g)) for h ∈ H, and as such determines an element of Ind_H^G W. Conversely, such a ψ is extended to a φ on all of G × K using the K-invariance condition.
15 Induced characters and Frobenius Reciprocity

(15.1) Character formula for Ind_H^G W

Induction is used to construct new representations, in the hope of producing irreducibles. It is quite rare that an induced rep will be irreducible (but we will give a useful criterion for that, due to Mackey); so we have to extract the 'known' irreducibles with the correct multiplicities. The usual method to calculate multiplicities involves characters, so we need a character formula for the induced representation.
We begin with a procedure which assigns to any class function on H a class function on G. The obvious extension by zero is usually not a class function: the complement of H in G is usually not a union of conjugacy classes. The most natural fix would be to take the transforms of the extended function under conjugation by all elements of G, and add them up. This, however, is a bit redundant, because many of these conjugates coincide:

15.2 Lemma. If ϕ is a class function on H, extended by zero to all of G, and x ∈ G is fixed, then the "conjugate function" g ∈ G ↦ ϕ(x^{-1}gx) depends only on the coset xH.

Proof. Indeed, ϕ(h^{-1}x^{-1}gxh) = ϕ(x^{-1}gx), because ϕ was invariant under conjugation in H.
This justifies the following; recall that R ⊂ G is a system of representatives for the cosets G/H.

15.3 Definition. Let ϕ be a class function on H. The induced function Ind_H^G(ϕ) is the class function on G given by

    Ind_H^G(ϕ)(g) = ∑_{r∈R} ϕ(r^{-1}gr) = (1/|H|) ∑_{x∈G} ϕ(x^{-1}gx),

where we declare ϕ to be zero at the points of G \ H.

The first sum thus ranges over those r for which r^{-1}gr ∈ H. We are ready for the main result.
15.4 Theorem. The character of the induced representation Ind_H^G W is Ind_H^G(χ_W).
Proof. Recall from the previous lecture that we can decompose Ind W as

    Ind_H^G W = ⊕_{r∈R} W_r, (15.5)

where W_r ≅ W as a vector space. Moreover, as each W_r consists of the functions supported only on the coset rH, the action of G on Ind W permutes the summands, in accordance with the action of G on the left cosets G/H.

We want the trace of the action of g ∈ G. We can confine ourselves to the summands in (15.5) which are preserved by g; the others contribute purely off-diagonal blocks to the action. Now, grH = rH iff r^{-1}gr = k ∈ H. For each such r, we need the trace of g on W_r. Now the isomorphism W ≅ W_r was defined by

    w ∈ W ↦ ∑_{h∈H} e_{rh} ⊗ h^{-1}w,

which transforms under g to give

    g · ∑_{h∈H} e_{rh} ⊗ h^{-1}w = ∑_{h∈H} e_{grh} ⊗ h^{-1}w = ∑_{h∈H} e_{rkh} ⊗ h^{-1}w = ∑_{h∈H} e_{rh} ⊗ h^{-1}kw,

where the last step reindexes the sum, replacing h by k^{-1}h. So the action of g on W_r corresponds, under this isomorphism, to the action of k = r^{-1}gr on W, and the block W_r contributes χ_W(r^{-1}gr) to the trace of g. Summing over the relevant r gives our theorem.
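The character formula is easy to test by machine. The sketch below is an editor's example, not part of the notes: for G = S_3 and H = A_3 with ϕ a nontrivial character of A_3, it evaluates Ind_H^G(ϕ)(g) = (1/|H|) ∑_{x∈G} ϕ(x^{-1}gx), extending ϕ by zero outside H; the result is the character (2, -1, 0) of the 2-dimensional irreducible representation of S_3.

```python
import cmath

def compose(p, q):          # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    out = [0, 0, 0]
    for i, v in enumerate(p):
        out[v] = i
    return tuple(out)

G = [(0,1,2), (1,2,0), (2,0,1), (1,0,2), (0,2,1), (2,1,0)]
H = [(0,1,2), (1,2,0), (2,0,1)]               # A_3
omega = cmath.exp(2j * cmath.pi / 3)
phi = {H[0]: 1, H[1]: omega, H[2]: omega**2}  # nontrivial character of A_3

def induced(g):
    # phi.get(..., 0) implements "extend phi by zero outside H"
    total = sum(phi.get(compose(inverse(x), compose(g, x)), 0) for x in G)
    return total / len(H)

assert abs(induced((0,1,2)) - 2) < 1e-12      # identity: |G/H| * phi(1) = 2
assert abs(induced((1,2,0)) + 1) < 1e-12      # a 3-cycle: omega + omega^2 = -1
assert abs(induced((1,0,2))) < 1e-12          # a transposition: 0
```

The vanishing on transpositions reflects the fact that no conjugate of a transposition lies in A_3.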
(15.6) Frobenius Reciprocity

Surprisingly, one can compute the multiplicities of the G-irreducibles in Ind W in a way which does not require knowledge of its character on G, but only the character of W on H. The practical advantage is that a dot product of characters on G, involving one induced rep, is reduced to one on H. This is the content of the Frobenius reciprocity theorem below. To state it nicely, we introduce some notation. Write Res_G^H V, or just Res V, when regarding a G-representation V merely as a representation of the subgroup H. (Res V is the "restricted" representation.) For two G-representations U, V, let

    ⟨U|V⟩_G = ⟨χ_U|χ_V⟩;

recall that this is also the dimension of the space Hom_G(U, V) of G-invariant linear maps. (Check it by decomposing into irreducibles.) Because the isomorphism Hom(U, V) ≅ V ⊗ U* is compatible with the actions of G, the same number is also the dimension of the subspace (V ⊗ U*)^G ⊂ V ⊗ U* of G-invariant vectors.
15.7 Theorem (Frobenius Reciprocity). ⟨V | Ind_H^G W⟩_G = ⟨Res_G^H V | W⟩_H.

15.8 Remark. Viewing the new ⟨·|·⟩_G as an inner product between representations, the theorem says, in more abstract language, that Ind_H^G and Res_G^H are adjoint functors.

15.9 Corollary. The multiplicity of the G-irreducible representation V in the induced representation Ind_H^G W is equal to the multiplicity of the irreducible H-representation W in the restricted representation Res_G^H V.
Proof of Frobenius reciprocity. One can check directly (you will do so in the homework) that the linear map Ind, from class functions on H to class functions on G, is the hermitian adjoint of the restriction of class functions from G to H; the reciprocity formula then follows from Theorem 15.4. Here, we give a proof which applies more generally (even when character theory does not).

We must show that dim Hom_G(V; Ind W) = dim Hom_H(V; W). We prove this by constructing a natural isomorphism between the two spaces (independent of arbitrary choices),

    Hom_G(V; Ind W) ≅ Hom_H(V; W). (*)

The chain of isomorphisms leading to (*) is the following:

    Hom_G(V; Ind W) = Hom_G(V; (C[G] ⊗ W)^H)
                    = Hom_{G×H}(V; C[G] ⊗ W)
                    = (C[G] ⊗ Hom(V; W))^{G×H}
                    = Hom_H(V; W),

where the superscripts G, H indicate that we take the subspaces of invariant vectors under the respective groups, and it remains to explain the group actions and the isomorphisms.

The first step is the definition of the induced representation. Now, because the G-action (on V and on the left on C[G]) commutes with the H-action (on the right on C[G] and on W), we get an action of G × H on Hom(V; C[G] ⊗ W), and the invariant vectors under G × H are precisely those invariant under both G and H; which accounts for our second isomorphism. In the third step, we have used the isomorphism of vector spaces

    Hom(V; C[G] ⊗ W) = C[G] ⊗ Hom(V; W),

which equates the linear maps from V to W-valued functions on G with the functions on G with values in Hom(V; W): indeed, both sides are maps from G × V to W which are linear in V. We keep note of the fact that G acts on the left on C[G] and on V, while H acts on the right on C[G] and on W.
The last step gets rid of G altogether. It is explained by the fact that a map on G, with values anywhere, which is invariant under left translation by G is completely determined by its value at the identity; and this value can be prescribed without constraint. To be more precise, let φ : G × V → W be a map linear in V, and denote by φ_g ∈ Hom(V, W) its value at g. Invariance under g forces φ_{gk} = φ_k ∘ ρ_V(g^{-1}), as seen by acting on φ by g and evaluating at k ∈ G. In particular, φ_g = φ_1 ∘ ρ_V(g^{-1}). Conversely, for any ψ_1 = ψ ∈ Hom(V; W), the map on G sending k to ψ_k := ψ_1 ∘ ρ_V(k)^{-1} is invariant, because

    ψ_{gk} = ψ_1 ∘ ρ_V(gk)^{-1} = ψ_1 ∘ ρ_V(k)^{-1} ∘ ρ_V(g)^{-1} = ψ_k ∘ ρ_V(g)^{-1},

as needed for invariance under any g ∈ G.
We have thus shown that sending φ : G → Hom(V; W) to φ_1 gives a linear isomorphism from the subspace of G-invariants in C[G] ⊗ Hom(V; W) to Hom(V; W). It remains to compare the H-invariance conditions on the two spaces, to establish the fourth and last isomorphism in the chain. It turns out, in fact, that the H-action on the first space, on the right on G and on W, agrees with the H-action on Hom(V, W), simultaneously on V and on W. Indeed, acting by h ∈ H on φ results in the map g ↦ ρ_W(h) ∘ φ_{gh}. In particular, 1 goes to ρ_W(h) ∘ φ_h. But we know that φ_h = φ_1 ∘ ρ_V(h)^{-1}, whence it follows that the transformed map sends 1 ∈ G to ρ_W(h) ∘ φ_1 ∘ ρ_V(h)^{-1}. So the effect on φ_1 ∈ Hom(V; W) is that of the simultaneous action of h on V and W, as claimed.
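As a concrete sanity check of the reciprocity formula (an editor's example, not in the notes), take G = S_3, H = A_3, V the 2-dimensional irreducible of S_3 and W the character h ↦ ω^j of A_3. Both sides of ⟨V | Ind_H^G W⟩_G = ⟨Res_G^H V | W⟩_H can be computed from character values alone, and both equal 1.

```python
import cmath

omega = cmath.exp(2j * cmath.pi / 3)

# Characters listed elementwise: S_3 = {e, c, c^2, t1, t2, t3}, A_3 = {e, c, c^2}.
chi_V = [2, -1, -1, 0, 0, 0]       # the 2-dim irreducible character of S_3
ind_W = [2, -1, -1, 0, 0, 0]       # Ind of the omega-character (it equals chi_V)
res_V = [2, -1, -1]                # chi_V restricted to A_3
chi_W = [1, omega, omega**2]       # the omega-character of A_3

def inner(u, v):
    """<u|v> = (1/|G|) sum_g u(g) * conj(v(g))."""
    return sum(a * b.conjugate() for a, b in zip(u, v)) / len(u)

lhs = inner(chi_V, ind_W)   # <V | Ind W>_G
rhs = inner(res_V, chi_W)   # <Res V | W>_H
assert abs(lhs - 1) < 1e-12 and abs(rhs - 1) < 1e-12
```

In words: V occurs once in Ind W, and W occurs once in Res V, in agreement with Corollary 15.9.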
16 Mackey theory

The main result of this section describes the restriction to a subgroup K ⊂ G of an induced representation Ind_H^G W. No relation is assumed between K and H, but the most useful application of this rather technical formula comes when K = H: in this case, we obtain a neat irreducibility criterion for Ind_H^G W. The formulae become much cleaner when H is a normal subgroup of G, but this plays no role in their proof.

16.1 Definition. The double coset space K\G/H is the set of K × H-orbits on G, for the left action of K and the right action of H.

Note that left and right multiplication commute, so we do indeed get an action of the product group K × H. There are equivalent ways to describe the set: as the set of orbits for the (left) action of K on the coset space G/H, or as the set of orbits for the right action of H on the space K\G. An important special case is when H = K and is normal in G: in this case, K\G/H equals the simple coset space G/H. (Why?)
Let S ⊂ G be a set of representatives for the double cosets (one point in each orbit); we thus have G = ⊔_{s∈S} KsH. For each s ∈ S, let

    H_s = sHs^{-1} ∩ K.

Then, H_s is a subgroup of K, but it also embeds into H by the homomorphism H_s ∋ x ↦ s^{-1}xs. To see its meaning, note that H_s ⊂ K is the stabiliser of the coset sH under the action of K on G/H, while the embedding in H identifies H_s with the stabiliser of Ks under the (right) action of H on K\G.

A representation ρ : H → GL(W) leads to a representation of H_s on the same space by ρ_s(x) = ρ(s^{-1}xs). We write W_s for the space W, when this action of H_s is understood.
16.2 Theorem (Mackey's Restriction Formula).

    Res_G^K Ind_H^G(W) ≅ ⊕_{s∈S} Ind_{H_s}^K(W_s).
Proof. We have G = ⊔_{s∈S} KsH, and the left K-action and right H-action permute these double cosets. We can then decompose the space of W-valued functions on G into a sum of functions supported on each double coset. H-invariance of the decomposition means that the invariant subspace Ind_H^G(W) ⊂ C[G] ⊗ W decomposes accordingly, into the sum over s of the H-invariants in C[KsH] ⊗ W.

We claim now that the H-invariant part of C[KsH] ⊗ W is isomorphic to Ind_{H_s}^K(W_s), as a representation of K. Let then φ : KsH → W be a map invariant for the action of H (on the right on KsH, and on W). Restricting to the subset Ks gives a map ψ : K → W, defined by ψ(k) := φ(ks), which is invariant under the corresponding action of H_s. But this is a vector in Ind_{H_s}^K(W_s), so we do get a natural linear map φ ↦ ψ from the first space to the second.

Conversely, let now ψ : K → W represent a vector in the induced representation Ind_{H_s}^K(W_s); we define a map φ : KsH → W by φ(ksh) := ρ(h)^{-1}(ψ(k)). If x ∈ H, then

    φ(ksh·x^{-1}) = ρ(xh^{-1})(ψ(k)) = ρ(x)(φ(ksh)),

showing that φ is invariant under the action of H (on the right on KsH and simultaneously on W). There is a problem, however: the same group element may have several expressions as ksh, with k ∈ K and h ∈ H, and we must check that the value φ(ksh) is independent of the expression. Let ksh = k′sh′; then k^{-1}k′ = s(hh′^{-1})s^{-1}, so this common value y lies in H_s, with s^{-1}ys = hh′^{-1} and k′ = ky. But then,

    φ(ksh) = ρ(h)^{-1}(ψ(k)) = ρ(h)^{-1}ρ(hh′^{-1})(ψ(ky)) = ρ(h′)^{-1}(ψ(k′)) = φ(k′sh′),

as needed, where the second equality uses the invariance of ψ under y ∈ H_s. The maps φ ↦ ψ and ψ ↦ φ are clearly inverse to each other, completing the proof of the theorem.
16.3 Remark. One can give a computational proof of Mackey’s theorem using the formula for
the character of an induced representation. However, the proof given above applies even when
character theory does not (e.g. when complete reducibility fails).
(16.4) Mackey's irreducibility criterion

We will now investigate irreducibility of the induced representation Ind_H^G(W) by applying Mackey's formula to the case K = H.

Note that, when K = H, we have for each s ∈ S two representations of the group H_s = sHs^{-1} ∩ H on the space W: ρ_s, defined as above, and the restricted representation Res_H^{H_s} W. The two actions stem from the two embeddings of H_s inside H, by s^{-1}-conjugation and by the natural inclusion, respectively. Call two representations of a group disjoint if they have no irreducible components in common. This is equivalent to orthogonality of their characters.
16.5 Theorem (Mackey's Irreducibility Criterion). Ind_H^G W is irreducible if and only if
(i) W is irreducible;
(ii) for each s ∈ S \ H, the representations W_s and Res_H^{H_s} W are disjoint.

16.6 Remark. The set of representatives S for the double cosets was arbitrary, so we might as well impose condition (ii) for every g ∈ G \ H; but it suffices to check it for g ∈ S.
The criterion becomes especially effective when H is normal in G. In this case, the double cosets are the same as the simple cosets (left or right). Moreover, sHs^{-1} = H, so H_s = H for all s; and the representation W_s of H is irreducible, if W was so. Hence the following

16.7 Corollary. Let H ⊂ G be a normal subgroup. Then, Ind_H^G W is irreducible iff W is irreducible and the representations W and W_s are non-isomorphic, for all s ∈ G \ H.

Again, it suffices to check the condition on a set of representatives; in fact, it turns out that the isomorphism class of W_g, for g ∈ G, depends only on the coset gH.
Proof of Mackey's criterion. We have, in the notation ⟨U|V⟩ := ⟨χ_U|χ_V⟩,

    ⟨Ind_H^G W | Ind_H^G W⟩_G = ⟨Res_G^H Ind_H^G W | W⟩_H
                              = ∑_{s∈S} ⟨Ind_{H_s}^H W_s | W⟩_H
                              = ∑_{s∈S} ⟨W | Ind_{H_s}^H W_s⟩_H
                              = ∑_{s∈S} ⟨Res_H^{H_s} W | W_s⟩_{H_s}.

For the first and last equalities, we have used the Frobenius reciprocity formula; the second equality is Mackey's formula, while the third uses the fact that the dot product of characters is an integer (hence real).

Now every term in the final sum is a non-negative integer, so the answer is 1, as required by irreducibility, only if all terms but one vanish, and the non-zero term is 1 itself. Now the choice of representatives S was arbitrary, so we can assume that 1 was chosen to represent the identity coset H·1·H = H. If so, H_1 = H and W_1 = W, and the s = 1 term is ‖χ_W‖². So this is the non-vanishing term, and it must equal 1, so W must be irreducible; while all other terms must vanish, which is part (ii) of Mackey's criterion.
(16.8) Examples

1. If G = D_2n, H = C_n and W is the representation L_k, on which the chosen generator acts with eigenvalue exp(2πik/n), the induced representation is V_k. The transform of L_k under a reflection is the representation L_{-k}, and Mackey's criterion holds whenever exp(2πik/n) ≠ exp(-2πik/n), that is, when k ≠ 0, n/2 (the latter when n is even); and indeed, in each of the excluded cases, V_k breaks up into two lines.
2. If G = S_5 and H = A_5, Mackey's criterion fails for the first three irreducible representations 1, V and W, but holds for the two 'halves' of Λ²V. So the first three representations induce reducible ones, but the last two induce irreducible representations. One can check (by characters, but also directly) that the first three induced representations split as 1 ⊕ S, V ⊕ (V ⊗ S) and W ⊕ (W ⊗ S), while the two halves of Λ²V each induce Λ²V, which is irreducible under S_5. As a matter of fact, this leads to the complete list of irreducibles of S_5.
17 Nilpotent groups and p-groups

As an application of induced representations, we now show that for a certain class of groups (the nilpotent ones) all irreducible representations arise by induction from a 1-dimensional representation of an appropriate subgroup. Except for the definition that we are about to give, this section contains optional material.

17.1 Definition. The group G is solvable if there is a finite chain

    G = G^{(n)} ⊃ G^{(n-1)} ⊃ ··· ⊃ G^{(0)} = {1}

of subgroups such that G^{(k)} is normal in G^{(k+1)} and the quotient group G^{(k+1)}/G^{(k)} is abelian. G is nilpotent if the G^{(k)} are normal in all of G and each G^{(k+1)}/G^{(k)} is central in G/G^{(k)}.
(17.2) Examples

(i) Every abelian group is nilpotent.
(ii) The group B of upper-triangular invertible matrices with entries in Z/p is solvable, but not nilpotent; the subgroups B^{(k)} have 1's on the diagonal and zeroes on progressively more superdiagonals.
(iii) The subgroup N ⊂ B of upper-triangular matrices with 1's on the diagonal is nilpotent.
(iv) Every subgroup and quotient group of a solvable (nilpotent) group is solvable (nilpotent): just use the chain of intersections with the G^{(k)}, or the quotient chain, as appropriate.
The following proposition underscores the importance of nilpotent groups. Recall that a p-group is a group whose order is a power of the prime number p.

17.3 Proposition. Every p-group is nilpotent.

Proof. We will show that the centre Z of a p-group G is non-trivial. Since G/Z is also a p-group, we can repeat the argument and produce the chain in 17.1.

Consider the conjugacy classes in G. Since they are orbits of the conjugation action, the orbit-stabiliser theorem implies that their orders are powers of p. If 1 were the only central element, then the order of every other conjugacy class would be ≡ 0 (mod p), and the class orders would then add up to 1 (mod p). However, the order of G is ≡ 0 (mod p); contradiction.
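The class equation argument can be watched in action on a small p-group. The following sketch (an editor's example, not in the notes) takes the Heisenberg group over Z/3, of order 27, encoding the matrix [[1,a,c],[0,1,b],[0,0,1]] as the triple (a, b, c); it computes the conjugacy classes and confirms that every class size is a power of 3 and that the centre is non-trivial.

```python
from itertools import product

p = 3

def mult(x, y):
    # matrix product in the Heisenberg group, in (a, b, c) coordinates
    return ((x[0] + y[0]) % p, (x[1] + y[1]) % p,
            (x[2] + y[2] + x[0] * y[1]) % p)

def inv(x):
    a, b, c = x
    return ((-a) % p, (-b) % p, (a * b - c) % p)

G = list(product(range(p), repeat=3))

# Conjugacy classes as orbits of the conjugation action.
classes, seen = [], set()
for g in G:
    if g in seen:
        continue
    orbit = {mult(mult(x, g), inv(x)) for x in G}
    seen |= orbit
    classes.append(orbit)

centre = [g for g in G if all(mult(g, x) == mult(x, g) for x in G)]

assert len(centre) == p                        # non-trivial centre, as proved
assert all(len(c) in (1, p) for c in classes)  # class sizes are powers of p
assert sum(len(c) for c in classes) == p**3    # the class equation
```

Here the central classes are the singletons, and their number (3) is indeed ≡ 0 (mod p) only because it exceeds 1, exactly as the proof requires.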
17.4 Remark. The same argument can be applied to the (linear) action of G on a vector space
over a ﬁnite ﬁeld of characteristic p, and implies that every such action has nonzero invariant
vectors. This has the remarkable implication that the only irreducible representation of a pgroup
in characteristic p is the trivial one.
17.5 Theorem. Every irreducible representation of a nilpotent group is induced from a one-dimensional representation of an appropriate subgroup.

The subgroup will of course depend on the representation. Still, this is rather a good result, especially for p-groups, whose structure is usually sensible enough that an understanding of all subgroups is not a hopeless task. The proof will occupy the rest of the lecture and relies on the following

17.6 Lemma (Clever Lemma). Let G be a finite group, A ⊂ G a normal subgroup and V an irreducible representation of G. Then, V = Ind_H^G W for some intermediate group A ⊂ H ⊂ G and an irreducible representation W of H whose restriction to A is isotypical.

Recall that "W is isotypical for A" means that it is a direct sum of several copies of the same irreducible representation of A. Note that we may have H = G and W = V, if the restriction of V to A is already isotypical. We are interested in the special case when A is abelian, in which case the fact that the irreducible representations are 1-dimensional implies that the isotypical representations are precisely the scalar ones.

17.7 Corollary. With the same assumptions, if A ⊂ G is abelian, then either its action on V is scalar, or else V is induced from a proper subgroup H ⊂ G.
Proof of the Clever Lemma. Let V = ⊕_i V_i be the isotypical decomposition of V with respect to A. If there is a single summand, we take H = G as indicated. Otherwise, observe that the action of G on V must permute the blocks V_i. Indeed, denoting the action by ρ, choose g ∈ G and let ^gρ(a) := ρ(gag^{-1}). Conjugation by g defines an automorphism of the (normal) subgroup A, which must permute its irreducible characters.

Then, ρ(g) : V → V defines an isomorphism of A-representations, if we let A act via ρ on the first space and via ^gρ on the second. Moreover, the same decomposition V = ⊕_i V_i is also the isotypical decomposition under the action ^gρ of A. Of course, the irreducible type assigned to V_i has been changed, according to the action of g-conjugation on characters of A; but the fact that all irreducible summands within each V_i are isomorphic, and distinct from those in V_j, j ≠ i, cannot have changed.

It follows that ρ(g) preserves the blocks in the isotypical decomposition; reverting to the action ρ on the second space, we conclude that ρ(g) permutes the blocks V_i according to its permutation action on the irreducible characters of A.
Note, moreover, that this permutation action of G is transitive (if we ignore zero blocks). Indeed, the sum of the blocks within an orbit would be a G-invariant subspace of V, which was assumed irreducible. It follows that all non-zero blocks have the same dimension.

Choose now a non-zero block V_0 and let H be its stabiliser within G. From the orbit-stabiliser theorem, dim V = |G/H| · dim V_0. I claim that V ≅ Ind_H^G V_0. Indeed, Frobenius reciprocity gives

    dim Hom_G(Ind_H^G V_0, V) = dim Hom_H(V_0, Res_G^H V) ≥ 1,

the inequality holding because V_0 is an H-subrepresentation of V. However, V is irreducible under G and its dimension equals that of Ind_H^G V_0, so any non-zero G-map between the two spaces is an isomorphism.
17.8 Lemma (Little Lemma). If G is nilpotent and non-abelian, then it contains a normal, abelian and non-central subgroup.

Proof. Let Z ⊂ G be the centre and let G′ ⊂ G be the smallest subgroup in the chain (17.1) not contained in Z. Then, the image of G′ in G/Z is central, and so gg′g^{-1} ∈ g′Z for any g′ ∈ G′ and any g ∈ G. Thus, any g′ ∈ G′ \ Z generates, together with Z, a subgroup of G with the desired properties.
Proof of Theorem 17.5. Let ρ : G → GL(V) be an irreducible representation of the nilpotent group G. The subgroup ker ρ is normal, and G/ker ρ is also nilpotent. If G/ker ρ is abelian, then the irreducible representation V must be one-dimensional and we are done. Else, let A ⊂ G/ker ρ be a normal, abelian, non-central subgroup. For a non-central element a ∈ A, ρ(a) cannot be a scalar: otherwise it would commute with ρ(g) for every g ∈ G, so that ρ(gag^{-1}a^{-1}) = Id for every g; but then gag^{-1}a^{-1} ∈ ker ρ for every g, and a would be central in G/ker ρ.

It follows from Corollary 17.7 that there exists a proper subgroup H ⊂ G, containing ker ρ, with a representation W of H/ker ρ, such that V is induced from H/ker ρ to G/ker ρ. But that is also the representation Ind_H^G W. The subgroup H is also nilpotent, and we can repeat the argument until we find a 1-dimensional W.
(17.9) Example

A non-abelian example is the Heisenberg group of upper-triangular 3×3 matrices over Z/p with 1's on the diagonal. There are (p − 1) irreducible representations of dimension p, and they are all induced from 1-dimensional representations of the abelian subgroup of elements of the form

    [ 1 ∗ ∗ ]
    [ 0 1 0 ]
    [ 0 0 1 ]
18 Burnside's p^a q^b theorem
(18.1) Additional comments

Induction from representations of "known" subgroups is an important tool in constructing the character table of a group. Frobenius' and Mackey's results can be used to (attempt to) break up induced reps into irreducibles. For arbitrary finite groups, there isn't a result as good as the one for p-groups: it need not be true that every irreducible representation is induced from a 1-dimensional one. However, the following theorem of Brauer's is a good substitute for that. I refer you to Serre's book for the proof, which is substantially more involved than that of Theorem 17.5.

If G is any finite group and |G| = p^n·m, with m prime to p, then the theorems of Sylow assert the following:

1. G contains subgroups of order p^n;
2. all these subgroups are conjugate;
3. every p-subgroup of G is contained in one of these.

The subgroups in (1) are called the Sylow p-subgroups of G.

An elementary p-subgroup of G is one which decomposes as a direct product of a cyclic subgroup of order prime to p and a p-group. Up to conjugation, the elementary subgroups of G are of the form ⟨x⟩ × S, where x ∈ G and S is a subgroup of a fixed Sylow subgroup of the centraliser of x in G.
(18.3) Burnside's theorem

One impressive application of the theory of characters to group theory is the following theorem of Burnside's, proved early in the 20th century. It was not given a purely group-theoretic proof until 1972. Let p, q be primes and a, b arbitrary natural numbers.

18.4 Theorem (Burnside). Every group of order p^a·q^b is solvable.

Noting that every subgroup and quotient group of such a group has order of the same form, it suffices to prove the following

18.5 Proposition. Every group of order p^a·q^b ≠ p, q contains a proper non-trivial normal subgroup.

Indeed, we can continue applying the proposition to the normal subgroup and the quotient group, until we produce a chain where all subquotients have orders p or q (and are thus abelian). Groups containing no proper non-trivial normal subgroups are called simple, so we will prove that the order of a non-abelian simple group must have at least three distinct prime factors. The group A_5, of order 60 = 2²·3·5, is simple, so this is the end of this road.
We will keep reformulating the desired property of our groups until we convert it into an integrality property of characters.

1. Every non-trivial representation of a simple group is faithful.
(Indeed, the kernel, being a proper normal subgroup, must be trivial.)

2. If some group element g ≠ 1 acts as a scalar in a representation ρ, then either g is central or else ρ is not faithful. Either way, the group is not simple.
(Indeed, ugu^{-1}g^{-1} ∈ ker ρ for every u ∈ G.)

3. With standard notation, ρ(g) is a scalar iff |χ(g)| = χ(1).
(Indeed, χ(g) is a sum of its χ(1) eigenvalues, which are roots of unity. The triangle inequality for the norm of the sum is strict, unless all the eigenvalues agree.)
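The strict triangle inequality in step 3 is easy to see numerically; the following snippet is an editor's illustration with arbitrarily chosen fifth roots of unity, not a computation from the text.

```python
import cmath

def root(k, n):
    """The root of unity exp(2*pi*i*k/n)."""
    return cmath.exp(2j * cmath.pi * k / n)

equal   = [root(1, 5)] * 3                       # three equal roots of unity
unequal = [root(0, 5), root(1, 5), root(2, 5)]   # three distinct ones

assert abs(abs(sum(equal)) - 3) < 1e-12   # |sum| = d exactly when all agree
assert abs(sum(unequal)) < 3 - 1e-6       # strictly smaller otherwise
```

So |χ(g)| = χ(1) forces all eigenvalues of ρ(g) to coincide, i.e. ρ(g) is scalar.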
The following two facts now imply Proposition 18.5, and thus Burnside's theorem.

18.6 Proposition. For any character χ of the finite group G, |χ(g)| = χ(1) if and only if χ(g)/χ(1) is a non-zero algebraic integer.

18.7 Proposition. If G has order p^a·q^b, there exist a non-trivial irreducible character χ and an element g ≠ 1 such that χ(g)/χ(1) is a non-zero algebraic integer.
Proof of (18.6). Let d = χ(1) be the dimension of the representation; we have

    α := χ(g)/χ(1) = (ω_1 + ··· + ω_d)/d,

for suitable roots of unity ω_k. Now, the minimal polynomial of α over Q is

    P_α(X) = ∏_β (X − β),

where β ranges over all the distinct transforms of α under the action of the Galois group Γ of the algebraic closure Q̄ of Q.⁵ This is because P_α, having rational coefficients, is invariant under the action of Γ, and so it must admit all the β's as roots. On the other hand, the product above is clearly Γ-invariant, hence it has rational coefficients, and has α as a root; so it must agree with the minimal polynomial.

Gauss' Lemma asserts that any monic polynomial with integer coefficients is irreducible over Q as soon as it is so over Z. Therefore, if α is an algebraic integer, then P_α(X) must have integer coefficients: indeed, being an algebraic integer, α must satisfy some monic equations with integer coefficients, among which the one of least degree must be irreducible over Q, and hence must agree with the minimal polynomial P_α.

Now, if α ≠ 0, then no transform β can be zero. If so, the product of all the β, which is an integer, has modulus 1 or more. However, a Galois transform of a root of unity is again a root of unity, so every β is an average of roots of unity, and has modulus ≤ 1. We therefore get a contradiction with algebraic integrality, unless α = 0 or |α| = 1.
Proof of 18.7. We will find a conjugacy class {1} ≠ C ⊂ G whose order is a power of p, and an irreducible character χ with χ(C) ≠ 0 and χ(1) not divisible by p. Assuming that, recall from Lecture 13 that the number |C|·χ(C)/χ(1) is an algebraic integer. As |C| is a power of p, which does not divide the denominator, we are tempted to conclude that χ(C)/χ(1) is already an algebraic integer. This would be clear from the prime factorisation, if χ(C) were an actual integer. As it is, we must be more careful and argue as follows.

Bézout's theorem secures the existence of integers m, n such that m|C| + nχ(1) = 1. Then,

    m|C|·χ(C)/χ(1) + nχ(C) = χ(C)/χ(1),

displaying χ(C)/χ(1) as a sum of algebraic integers.
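The Bézout step is ordinary extended Euclid. A small sketch (the numbers |C| = 8, χ(1) = 21 are the editor's arbitrary choice of a p-power and an integer prime to p):

```python
def extended_gcd(a, b):
    """Return (g, m, n) with m*a + n*b == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, m, n = extended_gcd(b, a % b)
    return g, n, m - (a // b) * n

C_size, chi_1 = 8, 21          # |C| = 2^3 and chi(1) prime to p = 2
g, m, n = extended_gcd(C_size, chi_1)
assert g == 1 and m * C_size + n * chi_1 == 1
```

Coprimality of |C| and χ(1) is exactly what the choice of C and χ in the proof guarantees.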
It remains to find the desired C and χ. Choose a q-Sylow subgroup H ⊂ G; this has non-trivial centre, as proved in Proposition 17.3. The centraliser of a non-trivial element g ∈ Z(H) contains H, and so the conjugacy class C of g has p-power order, by the orbit-stabiliser theorem. This gives our C.

The column orthogonality relations for characters, applied to the classes C and {1}, give

    1 + ∑_{χ≠1} χ(C)·χ(1) = 0,

where the χ in the sum range over the non-trivial irreducible characters. Each χ(C) is an algebraic integer. If every χ(1) for which χ(C) ≠ 0 were divisible by p, then dividing the entire equation by p would show that 1/p is an algebraic integer; contradiction. So there exists a χ with χ(C) ≠ 0 and χ(1) not divisible by p, as required.
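As a sanity check of this last step (an editor's example, not in the notes), take G = S_3 with p = 3, q = 2: the class C of transpositions has size 3 = p, and the transpositions are central in the 2-Sylow subgroups they generate. The column orthogonality relation against the identity class, and the existence of the required witness χ, can be verified from the character table.

```python
# Irreducible characters of S_3 on the transposition class C, stored as
# (chi(1), chi(C)) pairs: trivial, sign, and the 2-dimensional irreducible.
table = [(1, 1), (1, -1), (2, 0)]

# Column orthogonality of the columns {1} and C:
#   sum over all chi of chi(1) * chi(C) = 0, i.e. 1 + sum_{chi != 1} = 0.
total = sum(chi_1 * chi_C for chi_1, chi_C in table)
assert total == 0

p = 3   # |C| = 3 is a power of p
witnesses = [(chi_1, chi_C) for chi_1, chi_C in table[1:]
             if chi_C != 0 and chi_1 % p != 0]
assert witnesses == [(1, -1)]   # the sign character is the required chi
```

Here the sign character plays the role of χ: its value on C is -1 ≠ 0 and its degree 1 is prime to p.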
⁵ It suffices to consider the extension field generated by the roots of unity appearing in α.
19 Topological groups

In studying the finite-dimensional complex representations of a finite group G, we saw that

1. they are all unitarisable;
2. they are completely reducible;
3. the characters of irreducible representations form an orthonormal basis of the space of class functions;
4. representations are determined by their characters, up to isomorphism;
5. the character map from the representation ring R_G to class functions is a ring homomorphism, compatible with involutions (χ ↦ χ̄ corresponds to V ↦ V*) and with inner products: ⟨χ_V|χ_W⟩ = ⟨V|W⟩ := dim Hom_G(V; W).
Note that (1) ⇒ (2) and (3) ⇒ (4); part (5) is obvious from the definition of characters, save for the inner product bit, which follows from (1) and (3).

We will now study the representation theory of infinite compact groups, and will see that the same properties (1)–(5) hold. Point (3) will be amended to account for the fact that the space of class functions is infinite-dimensional. Correspondingly, there will be infinitely many isomorphism classes of irreducibles, but their characters will still form a complete orthonormal set in the (Hilbert) space of class functions. (This is also called a Hilbert space basis.)

There is a good general theory for all compact groups. In the special case of compact Lie groups (the groups of isometries of geometric objects, such as SO(n), SU(n), U(n)), the theory is due to Hermann Weyl⁶ and is outrageously successful: unlike the finite group case, we can list all irreducible characters in explicit form! That is the Weyl character formula. However, the general theory requires two deeper theorems of analysis (existence of the Haar measure and the Peter-Weyl theorem, see below) that go a bit beyond our means. For simple groups such as U(1), SU(2) and related ones, these results are quite easy to establish directly, and so we shall study them completely rigorously. It is not much more difficult to handle U(n) for general n, but we only have time for a quick survey.
19.1 Definition. A topological group is a group which is also a topological space, and for which the group operations are continuous. It is called compact if it is so as a topological space. A representation of a topological group G on a finite-dimensional vector space V is a continuous group homomorphism ρ : G → GL(V), with the topology of GL(V) inherited from the space End(V) of linear self-maps.

19.2 Remark. One can define continuity for infinite-dimensional representations, but more care is needed when defining GL(V) and its topology. Typically, one starts with some topology on V, then one chooses a topology on the space End(V) of continuous linear operators (for instance, the norm topology on bounded operators, when V is a Hilbert space), and then one embeds GL(V) as a closed subset of End(V) × End(V) by the map g ↦ (g, g^{-1}). This defines the structure of a topological group on GL(V).
(19.3) Orthonormality of characters
In establishing the key properties (1) and (3) above, the two ingredients were Schur’s Lemma
and Weyl’s trick of averaging over G. The ﬁrst one applies without change to any group. The
second must be replaced by integration over G. Here, we take advantage of the continuity
assumption in our representations, which ensures that all intervening functions in the argument
are continuous on G, and can be integrated.
Even so, the key properties of the average over G were its invariance under left and right
translation, and the fact that G has total volume 1 (so that the average of the constant function
⁶ The unitary case was worked out by Schur.
1 is 1). The second property can always be fixed by normalising the volume, but the first imposed a strong constraint on the volume form we use for integration: the volume element dg on the group must be left and right invariant. A (difficult) theorem of Haar asserts that the two constraints determine the volume element uniquely (subject to a reasonable technical constraint: specifically, any compact, Hausdorff group carries a unique bi-invariant regular Borel measure of total mass 1). For Lie groups, an easier proof of existence can be given; we'll see that for U(1) and SU(2).
Armed with an invariant integration measure, the main results and their consequences in
Lecture 8 follow as before, replacing $\frac{1}{|G|}\sum_{g\in G}$ in the arguments with $\int_G dg$.
19.4 Proposition. Every finite-dimensional representation of a compact group is unitarisable, and hence completely reducible.
19.5 Theorem. Characters of irreducible representations of a compact group have unit L² norm, and characters of non-isomorphic irreducibles are orthogonal.
19.6 Corollary. A representation is irreducible iﬀ its character has norm 1.
Completeness of characters is a different matter. Its formulation now is that the characters form a Hilbert space basis of the space of square-integrable class functions. In addition to the orthogonality properties, this asserts that any continuous class function which is orthogonal to every irreducible character is null. This asserts the existence of an "ample supply" of irreducible representations, and so it cannot be self-evident.
In the case of finite groups, we found enough representations by studying the regular representation. This method is also applicable to the compact case, but relies here on a fundamental theorem that we state without proof:
19.7 Theorem (Peter-Weyl). The space of square-integrable functions on G is the Hilbert space direct sum over the finite-dimensional irreducible representations V,

$$L^2(G) \cong \widehat{\bigoplus}_V \mathrm{End}(V).$$

The map from left to right sends a function f on G to the operators $\int_G f(g)\,\rho_V(g)\,dg$. The inverse map sends φ ∈ End(V) to the function g ↦ Tr_V(ρ_V(g)*φ). The inner product on End(V) corresponding to L² on G is given by ⟨φ|ψ⟩ = Tr_V(φ*ψ) · dim V.
As opposed to the case of finite groups, the proof for compact groups is somewhat involved. (Lie groups allow for an easier argument.) However, we shall make no use of the Peter-Weyl theorem here; in the cases we study, the completeness of characters will be established by direct construction of representations.
(19.8) A compact group: U(1)
To illustrate the need for a topology and the restriction to continuous representations, we start off by ignoring these aspects. As an abelian group, U(1) is isomorphic to R/Z via the map x ↦ exp(2πix). Now R is a vector space over Q, and general nonsense (Hamel's basis theorem) asserts that it must have a basis A = {α} ⊂ R as a Q-vector space; moreover, we can choose our first basis element to be 1. Thus, we have isomorphisms of abelian groups,
$$\mathbb{R} \cong \mathbb{Q} \oplus \bigoplus_{\alpha \neq 1} \mathbb{Q}\alpha, \qquad \mathbb{R}/\mathbb{Z} \cong \mathbb{Q}/\mathbb{Z} \oplus \bigoplus_{\alpha \neq 1} \mathbb{Q}\alpha.$$
It's easy to see that the choice of an integer n and of a complex number z_α for each α ∈ A, α ≠ 1, defines a one-dimensional complex representation of the second group, by specifying

$$(x, x_\alpha) \mapsto \exp(2\pi i n x) \cdot \prod_{\alpha \neq 1} \exp(2\pi i z_\alpha x_\alpha).$$
Notice that, for every element in our group, all but finitely many coordinates x_α must vanish, so that almost all factors in the product are 1.
A choice of complex number for each element of a basis of R over Q is not a sensible collection of data; and it turns out this does not even succeed in covering all 1-dimensional representations of R/Z. Things will improve dramatically when we impose the continuity requirement; we will see in the next lecture that the only surviving datum is the integer n.
19.9 Remark. The subgroup Q/Z turns out to have a more interesting representation theory, even when continuity is ignored. The one-dimensional representations are classified by the profinite completion Ẑ of Z.
Henceforth, "representation" of a topological group will mean continuous representation. We now classify the finite-dimensional representations of U(1). The character theory is closely tied to the theory of Fourier series.
19.10 Theorem. A continuous 1-dimensional representation U(1) → C^× has the form z ↦ z^n, for some integer n.
For the proof, we will need the following

19.11 Lemma. The closed proper subgroups of U(1) are the cyclic subgroups μ_n of nth roots of unity (n ≥ 1).
Proof of the Lemma. If q ∈ U(1) is not a root of unity, then its powers are dense in U(1)
(exercise, using the pigeonhole principle). So, any closed, proper subgroup of U(1) consists only
of roots of 1. Among those, there must be one of smallest argument (in absolute value) or else
there would be a sequence converging to 1; these would again generate a dense subgroup of U(1).
The root of unity of smallest argument is then the generator.
Proof of Theorem 19.10. A continuous map ϕ : U(1) → C^× has compact, hence bounded, image. The image must lie on the unit circle, because the integral powers of any other complex number form an unbounded sequence. So ϕ is a continuous homomorphism from U(1) to itself.
Now, ker ϕ is a closed subgroup of U(1). If ker ϕ is the entire group, then ϕ ≅ 1 and the theorem holds with n = 0. If ker ϕ = μ_n (n ≥ 1), we will now show that ϕ(z) = z^{±n}, with the same choice of sign for all z. To see this, define a continuous function

$$\psi : [0, 2\pi/n] \to \mathbb{R}, \qquad \psi(0) = 0, \qquad \psi(\theta) = \arg \varphi(e^{i\theta});$$

in other words, we parametrise U(1) by the argument θ, start with ψ(0) = 0, which is one value of the argument of ϕ(1) = 1, and choose the argument so as to make the function continuous. Because ker ϕ = μ_n, ψ must be injective on [0, 2π/n). By continuity, it must be monotonically increasing or decreasing (Intermediate Value Theorem), and we must have ψ(2π/n) = ±2π: the value zero is ruled out by monotonicity, and any other multiple of 2π would lead to an intermediate value of θ with ϕ(e^{iθ}) = 1. Henceforth, ± denotes the sign of ψ(2π/n).

Because ϕ is a homomorphism and ϕ(e^{2πi/n}) = 1, ϕ(e^{2πi/mn}) must be an mth root of unity, and so ψ({2πk/mn}) ⊂ {±2πk/m}, k = 0, …, m. By monotonicity, these m + 1 values must be taken exactly once and in the natural order, so ψ(2πk/mn) = ±2πk/m, for all m and all k = 0, …, m. But then ψ(θ) = ±nθ, by continuity, and ϕ(z) = z^{±n}, as claimed.
19.12 Remark. Complete reducibility now shows that continuous finite-dimensional representations of U(1) are in fact algebraic: that is, the entries in a matrix representation are Laurent polynomials in z. This is not an accident; it holds for a large class of topological groups (the compact Lie groups).
(19.13) Character theory
The homomorphisms ρ_n : U(1) → C^×, ρ_n(z) = z^n, form the complete list of irreducible representations of U(1). Clearly, their characters (which we abusively denote by the same symbol) are linearly independent; in fact, they are orthonormal in the inner product

$$\langle \varphi | \psi \rangle = \frac{1}{2\pi} \int_0^{2\pi} \overline{\varphi(\theta)}\,\psi(\theta)\,d\theta,$$

which corresponds to "averaging over U(1)" (z = e^{iθ}). The functions ρ_n are sometimes called Fourier modes, and their finite linear combinations are the Fourier polynomials.
As U(1) is abelian, it coincides with the space of its conjugacy classes. We can now state
our main theorem.
19.14 Theorem. (i) The functions ρ_n form a complete list of irreducible characters of U(1).
(ii) Every finite-dimensional representation V of U(1) is isomorphic to a sum of the ρ_n. Its character χ_V is a Fourier polynomial. The multiplicity of ρ_n in V is the inner product ⟨ρ_n|χ_V⟩.
Recall that complete reducibility of representations follows by Weyl’s unitary trick, averaging
any given inner product by integration on U(1).
As the space of (continuous) class functions is infinite-dimensional, it requires a bit of care to state the final part of our main theorem, that the characters form a "basis" of the space of class functions. The good setting for that is the Hilbert space of square-integrable functions, which we discuss below. For now, let us just note an algebraic version of the result.
19.15 Proposition. The ρ_n form a basis of the polynomial functions on U(1) ⊂ R².

Indeed, on the unit circle, z̄ = z^{-1}, so the Fourier polynomials are also the polynomials in z and z̄, which can also be expressed as polynomials in x, y.
(19.16) Digression: Fourier series
The Fourier modes ρ_n form a complete orthonormal set (orthonormal basis) of the Hilbert space L²(U(1)). This means that every square-integrable function f ∈ L² has a (Fourier) series expansion

$$f(\theta) = \sum_{n=-\infty}^{\infty} f_n e^{in\theta} = \sum_{n=-\infty}^{\infty} f_n \rho_n, \qquad (*)$$

which converges in mean square; the Fourier coefficients f_n are given by the formula

$$f_n = \langle \rho_n | f \rangle = \frac{1}{2\pi} \int_0^{2\pi} e^{-in\theta} f(\theta)\,d\theta. \qquad (**)$$

Recall that mean-square convergence signifies that the partial sums approach f in the distance defined by the inner product. The proof of most of this is fairly easy. Orthonormality of the ρ_n implies the Fourier expansion formula for the Fourier polynomials, the finite linear combinations of the ρ_n. (Of course, in this case the Fourier series is a finite sum, and no analysis is needed.) Furthermore, for any given finite collection S of indexes n, and for any f ∈ L², the sum of terms in (*) with n ∈ S is the orthogonal projection of f onto the span of the ρ_n, n ∈ S. Hence, any finite sum of the squared Fourier coefficients |f_n|² in (*) is bounded by ‖f‖², and the Fourier series converges. Moreover, the limit g has the property that f − g is orthogonal to each ρ_n; in other words, g is the projection of f onto the span of all the ρ_n.
The more difficult part is to show that any f ∈ L² which is orthogonal to all ρ_n vanishes. One method is to derive it from a powerful theorem of Weierstraß, which says that the Fourier polynomials are dense in the space of continuous functions, in the sense of uniform convergence. (The result holds for continuous functions on any compact subset of R^N; here, N = 2.) Approximating a candidate f, orthogonal to all ρ_n, by a sequence of Fourier polynomials p_k leads to a contradiction, because

$$\|f - p_k\|^2 = \|f\|^2 + \|p_k\|^2 \geq \|f\|^2,$$

by orthogonality, and yet uniform convergence certainly implies convergence in mean square.
We summarise the basic facts in the following

19.17 Theorem. (i) Any continuous function on U(1) can be uniformly approximated by finite linear combinations of the ρ_n.
(ii) Any square-integrable function f ∈ L²(U(1)) has a series expansion f = Σ_n f_n ρ_n, with Fourier coefficients f_n given by (**).
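The expansion (*) and the coefficient formula (**) are easy to test numerically. Below is a minimal sketch (assuming Python with numpy; the test function is an arbitrary smooth choice of mine, not from the text), computing Fourier coefficients by a Riemann sum on the circle and checking mean-square convergence of a truncated series.

```python
import numpy as np

def fourier_coeff(f, n, M=4096):
    """Approximate f_n = (1/2pi) * integral of e^{-in theta} f(theta) over [0, 2pi]."""
    theta = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)
    return np.mean(np.exp(-1j * n * theta) * f(theta))

# Arbitrary smooth test function on the circle (my choice, for illustration only).
f = lambda t: np.exp(np.cos(t)) * np.sin(2 * t)

theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
partial = np.zeros_like(theta, dtype=complex)
for n in range(-20, 21):
    partial += fourier_coeff(f, n) * np.exp(1j * n * theta)

# Mean-square error of the truncated series; tiny, since f is smooth
# and its Fourier coefficients decay very fast.
mse = np.mean(np.abs(f(theta) - partial) ** 2)
print(mse < 1e-10)
```

The rapid decay of the coefficients for a smooth f is what makes the truncation at |n| ≤ 20 already essentially exact here.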
20 The group SU(2)
We move on to the group SU(2). We will describe its conjugacy classes and find the bi-invariant volume form, whose existence implies the unitarisability and complete reducibility of its continuous finite-dimensional representations. In the next lecture, we will list the irreducibles with their characters.
Giving away the plot, observe that SU(2) acts on the space P of polynomials in two variables z_1, z_2, by its natural linear action on the coordinates. We can decompose P into the homogeneous pieces P_n, n = 0, 1, 2, …, preserved by the action of SU(2): thus, the space P_0 of constants is the trivial one-dimensional representation, the space P_1 of linear functions the standard 2-dimensional one, etc. Then P_0, P_1, P_2, … is the complete list of irreducible representations of SU(2), up to isomorphism, and every continuous finite-dimensional representation is isomorphic to a direct sum of those.
(20.1) SU(2) and the quaternions
By definition, SU(2) is the group of complex 2×2 matrices preserving the complex inner product and with determinant 1; the group operation is matrix multiplication. Explicitly,

$$SU(2) = \left\{ \begin{pmatrix} u & v \\ -\bar v & \bar u \end{pmatrix} : u, v \in \mathbb{C},\ |u|^2 + |v|^2 = 1 \right\}.$$
Geometrically, this can be identified with the three-dimensional unit sphere in C². It is useful to replace C² with Hamilton's quaternions H, generated over R by elements i, j satisfying the relations i² = j² = −1, ij = −ji. Thus, H is four-dimensional, spanned by 1, i, j and k := ij. The conjugate of a quaternion q = a + bi + cj + dk is q̄ := a − bi − cj − dk, and the quaternion norm is

$$\|q\|^2 = q\bar q = \bar q q = a^2 + b^2 + c^2 + d^2.$$

The relation $\overline{q_1 q_2} = \bar q_2 \bar q_1$ shows that ‖·‖ : H → R is multiplicative,

$$\|q_1 q_2\|^2 = q_1 q_2\, \overline{q_1 q_2} = q_1 q_2 \bar q_2 \bar q_1 = \|q_1\|^2\, \|q_2\|^2;$$

in particular, the "unit quaternions" (the quaternions of unit norm) form a group under multiplication. Direct calculation establishes the following.
20.2 Proposition. Sending $\begin{pmatrix} u & v \\ -\bar v & \bar u \end{pmatrix}$ to q = u + vj gives an isomorphism of SU(2) with the multiplicative group of unit quaternions.
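Proposition 20.2 amounts to checking that matrix multiplication matches quaternion multiplication; here is a quick numerical sanity check (a sketch assuming numpy; the helper `su2` and the random sampling are my own illustration).

```python
import numpy as np

def su2(u, v):
    """The matrix ((u, v), (-conj v, conj u)) associated to the quaternion u + v j."""
    return np.array([[u, v], [-np.conj(v), np.conj(u)]])

rng = np.random.default_rng(0)

def random_unit():
    """A random unit quaternion u + v j, written via two complex numbers."""
    a, b, c, d = rng.normal(size=4)
    u, v = complex(a, b), complex(c, d)
    norm = np.sqrt(abs(u) ** 2 + abs(v) ** 2)
    return u / norm, v / norm

(u1, v1), (u2, v2) = random_unit(), random_unit()
A, B = su2(u1, v1), su2(u2, v2)

# Quaternion product (u1 + v1 j)(u2 + v2 j) = (u1 u2 - v1 conj(v2)) + (u1 v2 + v1 conj(u2)) j,
# using the rule j z = conj(z) j for complex z.
u3 = u1 * u2 - v1 * np.conj(v2)
v3 = u1 * v2 + v1 * np.conj(u2)

print(np.allclose(A @ B, su2(u3, v3)))  # the map is a homomorphism
```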
(20.3) Conjugacy classes
A theorem from linear algebra asserts that unitary matrices are diagonalisable in an orthonormal
eigenbasis. The diagonalised matrix is then also unitary, so its diagonal elements are complex
numbers of unit norm. These numbers, of course, are the eigenvalues of the matrix. They
are only determined up to reordering. We therefore get a bijection of conjugacy classes in the unitary group with unordered N-tuples of complex numbers of unit norm:

$$U(N)/U(N) \leftrightarrow U(1)^N/S_N,$$

where the symmetric group S_N acts by permuting the N factors of U(1)^N. The map from right to left is defined by the inclusion U(1)^N ⊂ U(N) and is therefore continuous; a general theorem from topology ensures that it is in fact a homeomorphism.⁷
Clearly, restricting to SU(N) imposes the det = 1 restriction on the right-hand side U(1)^N. However, there is a potential problem in sight, because it might happen, in principle, that two matrices in SU(N) are conjugate in U(N) but not in SU(N). For a general subgroup H ⊂ G this warrants careful consideration. In the case at hand, however, we are lucky. The scalar matrices U(1)·Id ⊂ U(N) are central, that is, their conjugation action is trivial. Now, for any matrix A ∈ U(N) and any Nth root r of det A, we have r^{-1}A ∈ SU(N), and conjugating by the latter has the same effect as conjugating by A. We then get a bijection

$$SU(N)/SU(N) \leftrightarrow S\!\left(U(1)^N\right)/S_N,$$

where U(1)^N is identified with the group of diagonal unitary N×N matrices and the S on the right, standing for "special", indicates its subgroup of determinant 1 matrices.
20.4 Proposition. The normalised trace ½Tr : SU(2) → C gives a homeomorphism of the set of conjugacy classes in SU(2) with the interval [−1, 1].
The set of conjugacy classes is given the quotient topology.
Proof. As discussed, matrices are conjugate in SU(2) iff their eigenvalues agree up to order. The eigenvalues form a pair {z, z^{-1}}, with z on the unit circle, so they are the roots of

$$X^2 - (z + z^{-1})X + 1,$$

in which the middle coefficient ranges over [−2, 2].
The quaternion picture provides a good geometric model of this map: the half-trace becomes the projection q ↦ a on the real axis R ⊂ H. The conjugacy classes are therefore the 2-dimensional spheres of constant latitude on the unit sphere, plus the two poles, the singletons {±1}. The latter correspond to the central elements ±I_2 ∈ SU(2).
(20.5) Characters as Laurent polynomials
The preceding discussion implies that the characters of continuous SU(2)representations are
continuous functions on [−1, 1]. It will be more proﬁtable, however, to parametrise that interval
by the “latitude” φ ∈ [0, π], rather than the real part cos φ. We can in fact let φ range from
[0, 2π], provide we remember our functions are invariant under φ ↔−φ and periodic with period
2π. We call such functions of φ Weylinvariant, or even. Functions which change sign under
that same symmetry are antiinvariant or odd.
⁷ We use the quotient topologies on both sides. The result I am alluding to asserts that a continuous bijection from a compact space to a Hausdorff space is a homeomorphism.
Notice that φ parametrises the subgroup of diagonal matrices, isomorphic to U(1),

$$\begin{pmatrix} e^{i\varphi} & 0 \\ 0 & e^{-i\varphi} \end{pmatrix} = \begin{pmatrix} z & 0 \\ 0 & z^{-1} \end{pmatrix} \in SU(2),$$

and the transformation φ ↔ −φ is induced by conjugation by the matrix $\begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}$ ∈ SU(2). The character of an SU(2)-representation V will be the function of z ∈ U(1)

$$\chi_V(z) = \mathrm{Tr}_V \begin{pmatrix} z & 0 \\ 0 & z^{-1} \end{pmatrix},$$

invariant under z ↔ z^{-1}. When convenient, we shall re-express it in terms of φ and abusively write χ_V(φ).
A representation of SU(2) restricts to one of our U(1), and we know from the last lecture that the characters of the latter are polynomials in z and z^{-1}. Such functions are called Laurent polynomials. We therefore obtain the

20.6 Proposition. Characters of SU(2)-representations are even Laurent polynomials in z.
(20.7) The volume form and Weyl’s Integration formula for SU(2)
For ﬁnite groups, averaging over the group was used to prove complete reducibility, to deﬁne the
inner product of characters and prove the orthogonality theorems. For compact groups (such as
U(1)), integration over the group must be used instead. In both cases, the essential property of
the operation is its invariance under left and right translations on the group. To state this more
precisely: the linear functional sending a continuous function ϕ on G to $\int_G \varphi(g)\,dg$ is invariant under left and right translations of ϕ. A secondary property (which is arranged by appropriate scaling) is that the average or integral of the constant function 1 is 1.
We thus need a volume form over SU(2) which is invariant under left and right translations.
Our geometric model for SU(2) as the unit sphere in R⁴ allows us to find it directly, without
allows us to ﬁnd it directly, without
appealing to Haar’s (diﬃcult) general result. Note, indeed, that the actions of SU(2) on H by
left and right multiplication preserve the quaternion norm, hence the Euclidean distance. In
particular, the usual Euclidean volume element on the unit sphere must be invariant under both
left and right multiplication by SU(2) elements.
20.8 Remark. The right-left action of SU(2) × SU(2) on R⁴ = H, whereby α × β sends q ∈ H to αqβ^{-1}, gives a homomorphism h : SU(2) × SU(2) → SO(4). (The action preserves orientation, because the group is connected.) Clearly, (−Id, −Id) acts as the identity, so h factors through the quotient SU(2) × SU(2)/{±(Id, Id)}, and indeed turns out to give an isomorphism of the latter with SO(4). We will discuss this in Lecture 22.
To understand the inner product of characters, we are interested in integrating class functions
over SU(2). This must be expressible in terms of their restriction to U(1). This is made explicit
by the following theorem, whose importance cannot be overestimated: as we shall see, it implies
the character formulae for all irreducible representations of SU(2). Let dg be the Euclidean volume form on the unit sphere in H, normalised to total volume 1, and let ∆(φ) = e^{iφ} − e^{-iφ} be the Weyl denominator.
20.9 Theorem (Weyl Integration Formula). For a continuous class function f on SU(2), we have

$$\int_{SU(2)} f(g)\,dg = \frac{1}{2} \cdot \frac{1}{2\pi} \int_0^{2\pi} f(\varphi)\,|\Delta(\varphi)|^2\,d\varphi = \frac{1}{\pi} \int_0^{2\pi} f(\varphi)\,\sin^2\varphi\,d\varphi.$$
Thus, the integral over SU(2) can be computed by restriction to the U(1) of diagonal matrices, after correcting the measure by the factor ½|∆(φ)|². We are abusively writing f(φ) for the value of f at diag[e^{iφ}, e^{-iφ}].
Proof. In the presentation of SU(2) as the unit sphere in H, the function f is constant on the spheres of constant latitude. The volume of a spherical slice of width dφ is C sin²φ dφ, with the constant C normalised by the "total volume 1" condition

$$\int_0^\pi C \sin^2\varphi\,d\varphi = 1,$$

whence C = 2/π, in agreement with the theorem.
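The normalisation can be checked numerically: a Monte Carlo average of the class function (Re q)² over the unit sphere in H should agree with the value 1/4 given by the Weyl integration formula (a sketch assuming numpy; the sample and grid sizes are arbitrary choices of mine).

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo average of the class function f(q) = (Re q)^2 = cos^2(phi)
# over the unit sphere in H (uniform measure, total mass 1).
q = rng.normal(size=(10**6, 4))
q /= np.linalg.norm(q, axis=1, keepdims=True)
mc = np.mean(q[:, 0] ** 2)

# Weyl integration formula: (1/pi) * integral of cos^2(phi) sin^2(phi) = 1/4,
# computed as a Riemann sum on a uniform grid (exact for trig polynomials).
phi = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
weyl = 2.0 * np.mean(np.cos(phi) ** 2 * np.sin(phi) ** 2)

print(abs(weyl - 0.25) < 1e-6, abs(mc - weyl) < 0.01)
```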
21 Irreducible characters of SU(2)
Let us check the irreducibility of some representations.
• The trivial 1-dimensional representation is obviously irreducible; its character has norm 1, due to our normalisation of the measure.
• The character of the standard representation on C² is e^{iφ} + e^{-iφ} = 2 cos φ. Its norm is

$$\frac{4}{\pi} \int_0^{2\pi} \cos^2\varphi\,\sin^2\varphi\,d\varphi = \frac{1}{\pi} \int_0^{2\pi} (2\cos\varphi\sin\varphi)^2\,d\varphi = \frac{1}{\pi} \int_0^{2\pi} \sin^2 2\varphi\,d\varphi = 1,$$

so C² is irreducible. Of course, this can also be seen directly (no invariant lines).
• The character of the tensor square C² ⊗ C² is 4cos²φ, so its square norm is

$$\frac{1}{\pi} \int_0^{2\pi} 16\cos^4\varphi\,\sin^2\varphi\,d\varphi = \frac{1}{\pi} \int_0^{2\pi} 4\cos^2\varphi\,\sin^2 2\varphi\,d\varphi = \frac{1}{\pi} \int_0^{2\pi} (\sin 3\varphi + \sin\varphi)^2\,d\varphi$$
$$= \frac{1}{\pi} \int_0^{2\pi} \left( \sin^2 3\varphi + \sin^2\varphi + 2\sin\varphi\sin 3\varphi \right) d\varphi = 1 + 1 = 2,$$

so this is reducible. Indeed, we have (C²)^{⊗2} = Sym²C² ⊕ Λ²C²; these must be irreducible. Λ²C² is the trivial 1-dim. representation, since its character is 1 (the product of the eigenvalues), and so Sym²C² is a new, 3-dimensional irreducible representation, with character 2cos2φ + 1 = z² + z^{-2} + 1.
So far we have found the irreducible representations C = Sym⁰C², C² = Sym¹C² and Sym²C², with characters 1, z + z^{-1} and z² + 1 + z^{-2}. This surely lends credibility to the following.
21.1 Theorem. The character χ_n of Sym^n C² is z^n + z^{n-2} + … + z^{2-n} + z^{-n}. Its norm is 1, and so each Sym^n C² is an irreducible representation of SU(2). Moreover, as n ranges over the integers n ≥ 0, this is the complete list of irreducibles.

21.2 Remark. Note that Sym^n C² has dimension n + 1.
Proof. The standard basis vectors e_1, e_2 scale under g = diag[z, z^{-1}] by the factors z^{±1}. Now, the basis vectors of Sym^n C² arise by symmetrising the vectors e_1^{⊗n}, e_1^{⊗(n-1)} ⊗ e_2, …, e_2^{⊗n} in the nth tensor power (C²)^{⊗n}. The vectors appearing in the symmetrisation of e_1^{⊗p} ⊗ e_2^{⊗(n-p)} are tensor products containing p factors of e_1 and n − p factors of e_2, in some order; each of these is an eigenvector for g, with eigenvalue z^p · z^{-(n-p)} = z^{2p-n}. Hence, the eigenvalues of g on Sym^n are {z^n, z^{n-2}, …, z^{-n}}, and the character formula follows.
We have

$$\chi_n(z) = \frac{z^{n+1} - z^{-n-1}}{z - z^{-1}},$$

whence |χ_n(z)|² |∆(z)|² = |z^{n+1} − z^{-n-1}|² = 4 sin²[(n + 1)φ]. Thus,

$$\|\chi_n\|^2 = \frac{1}{4\pi} \int_0^{2\pi} 4\sin^2[(n+1)\varphi]\,d\varphi = \frac{1}{\pi} \cdot \pi = 1,$$

proving irreducibility.
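Orthonormality of the χ_n under the Weyl measure can also be confirmed numerically (a sketch assuming numpy; the Riemann sums here are exact because the integrands are trigonometric polynomials of low degree).

```python
import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
z = np.exp(1j * phi)

def chi(n):
    """Character of Sym^n C^2 at diag[z, z^-1]: z^n + z^(n-2) + ... + z^-n."""
    return sum(z ** (n - 2 * k) for k in range(n + 1))

def weyl_inner(f, g):
    """<f|g> = (1/2)(1/2pi) * integral of conj(f) g |Delta|^2 dphi, as a Riemann sum."""
    delta_sq = np.abs(z - z ** -1) ** 2
    return np.mean(np.conj(f) * g * delta_sq) / 2.0

gram = np.array([[weyl_inner(chi(m), chi(n)) for n in range(5)] for m in range(5)])
print(np.allclose(gram, np.eye(5)))  # the characters are orthonormal
```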
To see completeness, observe that the functions ∆(z)χ_n(z) span the vector space of odd Laurent polynomials (§20.5). For any character χ, the function χ(z)∆(z) is an odd Laurent polynomial. If χ were a new irreducible character, χ(z)∆(z) would have to be orthogonal to all anti-invariant Laurent polynomials, in the standard inner product on the circle. (Combined with the weight |∆|², this is the correct SU(2) inner product up to scale, as per the integration formula.) But this is impossible, as the orthogonal complement of the odd polynomials is spanned by the even ones.
(21.3) Another look at the irreducibles
The group SU(2) acts linearly on the vector space P of polynomials in z_1, z_2 by transforming the variables in the standard way,

$$\begin{pmatrix} u & v \\ -\bar v & \bar u \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} u z_1 + v z_2 \\ -\bar v z_1 + \bar u z_2 \end{pmatrix}.$$

This preserves the decomposition of P as the direct sum of the spaces P_n of homogeneous polynomials of degree n.
21.4 Proposition. As SU(2)-representations, Sym^n C² ≅ P_n, the space of homogeneous polynomials of degree n in two variables.
Proof. An isomorphism from P_n to Sym^n C² is defined by sending z_1^p z_2^{n-p} to the symmetrisation of e_1^{⊗p} ⊗ e_2^{⊗(n-p)}. By linearity, this will send (uz_1 + vz_2)^p (−v̄z_1 + ūz_2)^{n-p} to the symmetrisation of (ue_1 + ve_2)^{⊗p} ⊗ (−v̄e_1 + ūe_2)^{⊗(n-p)}. This shows that our isomorphism commutes with the SU(2)-action.
21.5 Remark. Equality of the representations up to isomorphism can also be seen by checking the character of a diagonal matrix g, for which the z_1^p z_2^{n-p} are eigenvectors.
(21.6) Tensor product of representations
Knowledge of the characters allows us to ﬁnd the decomposition of tensor products. Obviously,
tensoring with C does not change a representation. The next example is
$$\chi_1 \cdot \chi_n = (z + z^{-1}) \cdot \frac{z^{n+1} - z^{-n-1}}{z - z^{-1}} = \frac{z^{n+2} + z^n - z^{-n} - z^{-n-2}}{z - z^{-1}} = \chi_{n+1} + \chi_{n-1},$$

provided n ≥ 1; if n = 0, of course, the answer is χ_1. More generally, we have
21.7 Theorem. If p ≥ q, then χ_p χ_q = χ_{p+q} + χ_{p+q-2} + … + χ_{p-q}. Hence,

$$\mathrm{Sym}^p\, \mathbb{C}^2 \otimes \mathrm{Sym}^q\, \mathbb{C}^2 \cong \bigoplus_{k=0}^{q} \mathrm{Sym}^{p+q-2k}\, \mathbb{C}^2.$$
Proof.

$$\chi_p(z)\,\chi_q(z) = \frac{z^{p+1} - z^{-p-1}}{z - z^{-1}} \left( z^q + z^{q-2} + \dots + z^{-q} \right) = \sum_{k=0}^{q} \frac{z^{p+q+1-2k} - z^{2k-p-q-1}}{z - z^{-1}} = \sum_{k=0}^{q} \chi_{p+q-2k};$$

the condition p ≥ q ensures that there are no cancellations in the sum.
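The decomposition in Theorem 21.7 can be verified by evaluating both sides of the character identity on the circle (a sketch assuming numpy; the choice p = 5, q = 3 is arbitrary).

```python
import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
z = np.exp(1j * phi)

def chi(n):
    """Character of Sym^n C^2: z^n + z^(n-2) + ... + z^-n."""
    return sum(z ** (n - 2 * k) for k in range(n + 1))

p, q = 5, 3
lhs = chi(p) * chi(q)
rhs = sum(chi(p + q - 2 * k) for k in range(q + 1))  # chi_{p+q} + ... + chi_{p-q}
print(np.allclose(lhs, rhs))
```

Since two Laurent polynomials agreeing on the whole unit circle are equal, this pointwise check is a genuine verification of the identity for the chosen p, q.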
21.8 Remark. Let us define formally χ_{-n} = −χ_{n-2}, a "virtual character". Thus, χ_0 = 1, the trivial character, χ_{-1} = 0 and χ_{-2} = −1. The decomposition formula

$$\chi_p \chi_q = \chi_{p+q} + \chi_{p+q-2} + \dots + \chi_{p-q}$$

then holds regardless of whether p ≥ q or not.
(21.9) Multiplicities
The multiplicity of the irreducible representation Sym^n in a representation with character χ(z) can be computed as the inner product ⟨χ|χ_n⟩, with the integration formula (20.9). The following recipe, which exploits the simple form of the functions ∆(z)χ_n(z) = z^{n+1} − z^{-n-1}, is sometimes helpful.

21.10 Proposition. ⟨χ|χ_n⟩ is the coefficient of z^{n+1} in ∆(z)χ(z).
22 Some SU(2)-related groups
We now discuss groups closely related to SU(2) and describe their irreducible representations and characters. These are SO(3), SO(4) and U(2). We start with the following

22.1 Proposition. SO(3) ≅ SU(2)/{±Id}, SO(4) ≅ (SU(2) × SU(2))/{±(Id, Id)} and U(2) ≅ (U(1) × SU(2))/{±(Id, Id)}.
Proof. Recall the left-right multiplication action of SU(2), viewed as the space of unit norm quaternions, on H ≅ R⁴. This gives a homomorphism SU(2) × SU(2) → SO(4). Note that α × β sends 1 ∈ H to αβ^{-1}, so α × β fixes 1 iff α = β. The pair α × α fixes every other quaternion iff α is central in SU(2), that is, α = ±Id. So the kernel of our homomorphism is {±(Id, Id)}.

We see surjectivity and construct the isomorphism SU(2)/{±Id} ≅ SO(3) at the same time. Restricting our left-right action to the diagonal copy of SU(2) leads to the conjugation action of SU(2) on the space of pure quaternions, spanned by i, j, k. I claim that this generates the full rotation group SO(3): indeed, rotations in the ⟨i, j⟩-plane are implemented by elements a + bk, and similarly with any permutation of i, j, k, and these rotations generate SO(3). This constructs a surjective homomorphism from SU(2) to SO(3), and we already know that the kernel is {±Id}. So we have the assertion about SO(3).

Returning to SO(4), we can take any orthonormal, pure quaternion frame to any other one by a conjugation. We can also take 1 ∈ H to any other unit vector by a left multiplication. Combining these, it follows that we can take any orthonormal 4-frame to any other one by a suitable conjugation, followed by a left multiplication.

Finally, the assertion about U(2) is clear, as both U(1) and SU(2) sit naturally in U(2) (the former as the scalar matrices), and they intersect in {±Id}.
The group isomorphisms in the proposition are in fact homeomorphisms; that means the inverse map is continuous (using the quotient topology on the quotient groups). It is not difficult (although a bit painful) to prove this directly from the construction, but it follows more easily from the following topological fact, which is of independent interest.

22.2 Proposition. Any continuous bijection from a compact space to a Hausdorff space is a homeomorphism.
(22.3) Representations
It follows that continuous representations of the three groups in Proposition 22.1 are the same as continuous representations of SU(2), SU(2) × SU(2) and U(1) × SU(2), respectively, which send −Id, −(Id, Id) and −(Id, Id) to the identity matrix.

22.4 Corollary. The complete list of irreducible representations of SO(3) is Sym^{2n} C², n ≥ 0.
This formulation is slightly abusive, as C² itself is not a representation of SO(3) but only of SU(2) (−Id acts as −Id). But the sign problem is fixed on all even symmetric powers. For example, Sym² C² is the standard 3-dimensional representation of SO(3). (There is little choice about it, as it is the only 3-dimensional representation on the list.)

The image of our copy {diag[z, z^{-1}]} ⊂ SU(2) of U(1), under our homomorphism, is the family of matrices

$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi & \cos\phi \end{pmatrix},$$

with (caution!) ϕ = 2φ. These cover all the conjugacy classes in SO(3), and the irreducible characters of SO(3) restrict to this family as

$$1 + 2\cos\phi + 2\cos(2\phi) + \dots + 2\cos(n\phi).$$
(22.5) Representations of a product
Handling the other two groups, SO(4) and U(2), requires the following

22.6 Lemma. For the product G × H of two compact groups G and H, the complete list of irreducible representations consists of the tensor products V ⊗ W, as V and W range over the irreducibles of G and H, independently.

Proof using character theory. From the properties of the tensor product of matrices, it follows that the character of V ⊗ W at the element g × h is

$$\chi_{V \otimes W}(g \times h) = \chi_V(g) \cdot \chi_W(h).$$

Now, a conjugacy class in G × H is a Cartesian product of conjugacy classes in G and H, and character theory ensures that the χ_V and χ_W form Hilbert space bases of the L² class functions on the two groups. It follows that the χ_V(g)·χ_W(h) form a Hilbert space basis of the class functions on G × H, so this is a complete list of irreducible characters.
Incidentally, the proof establishes the completeness of characters for the product G × H, if it was known on the factors. So, given our knowledge of U(1) and SU(2), it does give a complete argument. However, we also indicate another proof which does not rely on character theory, but uses complete reducibility instead. The advantage is that we only need to know complete reducibility under one of the factors G, H.
Proof without characters. Let U be an irreducible representation of G × H, and decompose it into isotypical components U_i under the action of H. Because the action of G commutes with that of H, it must be block-diagonal in this decomposition, and irreducibility implies that we have a single block. Let then W be the unique irreducible H-type appearing in U, and let V := Hom_H(W, U). This inherits an action of G from U, and we have an isomorphism U ≅ W ⊗ Hom_H(W, U) = V ⊗ W (see Example Sheet 2), with G and H acting on the two factors. Moreover, any proper G-subrepresentation of V would lead to a proper G × H-subrepresentation of U, and this implies irreducibility of V.
22.7 Corollary. The complete list of irreducible representations of SO(4) is Sym^m C² ⊗ Sym^n C², with the two SU(2) factors acting on the two Sym factors. Here, m, n ≥ 0 and m ≡ n (mod 2).
(22.8) A closer look at U(2)
Listing the representations of U(2) requires a comment. As U(2) = (U(1) × SU(2))/{±(Id, Id)}, in principle we should tensor together pairs of representations of U(1) and SU(2) of matching parity, so that −(Id, Id) acts as +Id in the product. However, observe that U(2) acts naturally on C², extending the SU(2)-action, so the action of SU(2) on Sym^n C² also extends to all of U(2). Thereunder, the factor U(1) of scalar matrices does not act trivially, but rather by the nth power of the natural representation (since it acts naturally on C²).

In addition, U(2) has a 1-dimensional representation det : U(2) → U(1). Denote by det^{⊗m} its mth tensor power; it restricts to the trivial representation on SU(2) and to the 2mth power of the natural representation on the scalar subgroup U(1). The classification of representations of U(2) in terms of their restrictions to the subgroups U(1) and SU(2) converts then into the

22.9 Proposition. The complete list of irreducible representations of U(2) is det^{⊗m} ⊗ Sym^n C², with m, n ∈ Z, n ≥ 0.
(22.10) Characters of U(2)
We saw earlier that the conjugacy classes of U(2) were labelled by unordered pairs {z_1, z_2} of eigenvalues. These are double-covered by the subgroup of diagonal matrices diag[z_1, z_2]. The trace of this matrix on Sym^n C^2 is z_1^n + z_1^{n−1} z_2 + · · · + z_2^n, and so the character of det^⊗m ⊗ Sym^n C^2 is the symmetric^8 Laurent polynomial

    z_1^{m+n} z_2^m + z_1^{m+n−1} z_2^{m+1} + · · · + z_1^m z_2^{m+n}.    (22.11)
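Assuming a Python interpreter is at hand, here is a minimal numerical sketch of two basic properties of the character (22.11): its value at z_1 = z_2 = 1 is the dimension n + 1 of det^⊗m ⊗ Sym^n C^2, and it is symmetric under the swap z_1 ↔ z_2 (the footnote's point). The function name `chi` is just for this illustration.

```python
def chi(m, n, z1, z2):
    """Character of det^m tensor Sym^n C^2 at diag(z1, z2), formula (22.11)."""
    return sum(z1 ** (m + n - j) * z2 ** (m + j) for j in range(n + 1))

z1, z2 = complex(0.3, 0.4), complex(-0.7, 0.2)
for m, n in [(0, 3), (2, 1), (-1, 4)]:
    assert chi(m, n, 1, 1) == n + 1                            # dimension of the representation
    assert abs(chi(m, n, z1, z2) - chi(m, n, z2, z1)) < 1e-12  # symmetry in z1, z2
print("ok")
```

Negative m is allowed, matching the Laurent (rather than genuine) polynomial nature of the characters.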
Moreover, these are all the irreducible characters of U(2). Clearly, they span the space of
symmetric Laurent polynomials. General theory tells us that they should form a Hilbert space
basis of the space of class functions, with respect to the U(2) inner product; but we can check
this directly from the
22.12 Proposition (Weyl integration formula for U(2)). For a class function f on U(2), we have

    ∫_{U(2)} f(u) du = (1/2) · (1/4π²) ∫_0^{2π} ∫_0^{2π} f(φ_1, φ_2) |e^{iφ_1} − e^{iφ_2}|² dφ_1 dφ_2.
Here, f(φ_1, φ_2) abusively denotes f(diag[e^{iφ_1}, e^{iφ_2}]). We call Δ(z_1, z_2) := z_1 − z_2 = e^{iφ_1} − e^{iφ_2} the Weyl denominator for U(2). The proposition is proved by lifting f to the double cover U(1) × SU(2) of U(2) and applying the integration formula for SU(2); we omit the details.
Armed with the integration formula, we could have discovered the character formulae (22.11) a
priori, before constructing the representations. Of course, proving that all formulae are actually
realised as characters requires either a direct construction (as given here) or a general existence
argument, as secured by the Peter-Weyl theorem.
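Assuming numpy is available, the orthonormality of the characters (22.11) under the inner product of Proposition 22.12 can be checked numerically; this is a hedged sketch, with the grid size and the specific (m, n) pairs chosen only for illustration. The trapezoidal rule on a periodic grid is exact for trigonometric polynomials of low degree, so the agreement is to machine precision.

```python
import numpy as np

def chi(m, n, z1, z2):
    # character of det^m tensor Sym^n C^2, formula (22.11)
    return sum(z1 ** (m + n - j) * z2 ** (m + j) for j in range(n + 1))

K = 64  # grid points per circle; enough for the low-degree characters below
phi = 2 * np.pi * np.arange(K) / K
p1, p2 = np.meshgrid(phi, phi, indexing="ij")
z1, z2 = np.exp(1j * p1), np.exp(1j * p2)
weight = np.abs(z1 - z2) ** 2  # the Weyl factor |e^{i phi1} - e^{i phi2}|^2

def inner(f, g):
    # discretisation of (1/2)(1/4pi^2) * double integral of f * conj(g) * weight
    return np.sum(f * np.conj(g) * weight).real / (2 * K * K)

c03 = chi(0, 3, z1, z2)
c11 = chi(1, 1, z1, z2)
assert abs(inner(c03, c03) - 1) < 1e-10   # each character has norm 1
assert abs(inner(c11, c11) - 1) < 1e-10
assert abs(inner(c03, c11)) < 1e-10       # distinct characters are orthogonal
print("orthonormal")
```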
^8 Under the switch z_1 ↔ z_2.
23 The unitary group∗
The representation theory of the general unitary groups U(N) is very similar to that of U(2), with the obvious change that the characters are now symmetric Laurent polynomials in N variables, with integer coefficients; the variables represent the eigenvalues of a unitary matrix. These are polynomials in the z_i and z_i^{−1} which are invariant under permutation of the variables. Any such polynomial can be expressed as (z_1 z_2 · · · z_N)^n f(z_1, . . . , z_N), where f is a genuine symmetric polynomial with integer coefficients. The fact that all U(N) characters are of this form follows from the classification of conjugacy classes in Lecture 20, and the representation theory of the subgroup U(1)^N: representations of the latter are completely reducible, with the irreducibles being tensor products of N one-dimensional representations of the factors (Lemma 22.6).
(23.1) Symmetry vs. antisymmetry
Let us introduce some notation. For an N-tuple λ = (λ_1, . . . , λ_N) of integers, denote by z^λ the (Laurent) monomial z_1^{λ_1} z_2^{λ_2} · · · z_N^{λ_N}. Given σ ∈ S_N, the symmetric group on N letters, z^{σ(λ)} will denote the monomial for the permuted N-tuple σ(λ). There is a distinguished N-tuple δ = (N−1, N−2, . . . , 0). The antisymmetric Laurent polynomials

    a_λ(z) := Σ_{σ∈S_N} ε(σ) z^{σ(λ)},
where ε(σ) denotes the signature of σ, are distinguished by the following simple observation:
23.2 Proposition. As λ ranges over the decreasingly ordered N-tuples λ_1 ≥ λ_2 ≥ . . . ≥ λ_N, the a_{λ+δ} form a Z-basis of the integral antisymmetric Laurent polynomials.
The polynomial a_δ(z) is special, as the following identities show:

    a_δ(z) = det[z_p^{N−q}] = Π_{p<q} (z_p − z_q).

The last product is also denoted Δ(z) and called the Weyl denominator. The first identity is simply the big formula for the determinant of the matrix with entries z_p^{N−q}; the second is called the Vandermonde formula and can be proved in several ways. One can do induction on N, combined with some clever row operations. Or, one notices that the determinant is a homogeneous polynomial in the z_p, of degree N(N−1)/2, and vanishes whenever z_p = z_q for some p ≠ q; this implies that the product on the right divides the determinant, so we have equality up to a constant factor, and we need only match the coefficient for a single term in the formula, for instance z^δ.
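Assuming sympy is available, the Vandermonde identity can be verified symbolically for a small N; this is an illustrative sketch, not part of the proof.

```python
import sympy as sp

# Check det[z_p^{N-q}] = prod_{p<q} (z_p - z_q) for N = 4.
N = 4
z = sp.symbols(f"z1:{N + 1}")
M = sp.Matrix(N, N, lambda p, q: z[p] ** (N - 1 - q))  # columns carry exponents N-1, ..., 0
det_form = M.det()
prod_form = sp.prod(z[p] - z[q] for p in range(N) for q in range(p + 1, N))
assert sp.expand(det_form - prod_form) == 0
print("Vandermonde verified for N =", N)
```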
23.3 Remark. There is a determinant formula (although not a product expansion) for every a_λ:

    a_λ(z) = det[z_p^{λ_q}].
23.4 Proposition. Multiplication by Δ establishes a bijection between symmetric and antisymmetric Laurent polynomials with integer coefficients.

Proof. The obvious part (which is the only one we need) is that Δ · f is antisymmetric and integral if f is symmetric integral. In the other direction, antisymmetry of g implies its vanishing whenever z_p = z_q for some p ≠ q, which (by repeated use of Bézout's theorem, that g(a) = 0 ⇒ (X − a) | g(X)) implies that Δ(z) divides g, with integral quotient.
23.5 Remark. The proposition and its proof apply equally well to the genuine polynomials, as
opposed to Laurent polynomials.
(23.6) The irreducible characters of U(N)
23.7 Definition. The Schur function s_λ is defined as

    s_λ(z) := a_{λ+δ}(z) / a_δ(z) = a_{λ+δ}(z) / Δ(z).
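Assuming sympy is available, the definition can be checked by hand for N = 2, where a_{(l_1,l_2)}(z) = z_1^{l_1} z_2^{l_2} − z_1^{l_2} z_2^{l_1} and δ = (1, 0); the expected values s_{(2,0)} = z_1² + z_1 z_2 + z_2² and s_{(1,1)} = z_1 z_2 follow from dividing out z_1 − z_2. A minimal sketch:

```python
import sympy as sp

z1, z2 = sp.symbols("z1 z2")

def a(l1, l2):
    # antisymmetrised monomial a_(l1,l2) for N = 2
    return z1 ** l1 * z2 ** l2 - z1 ** l2 * z2 ** l1

def schur(l1, l2):
    # s_lambda = a_{lambda+delta} / a_delta with lambda = (l1, l2), delta = (1, 0)
    return sp.cancel(a(l1 + 1, l2) / a(1, 0))

assert sp.expand(schur(2, 0) - (z1**2 + z1 * z2 + z2**2)) == 0
assert sp.expand(schur(1, 1) - z1 * z2) == 0
print("ok")
```

The `cancel` call confirms in passing that the ratio is a genuine polynomial, as Proposition 23.4 guarantees.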
By the previous proposition, the Schur functions form an integral basis of the space of symmetric Laurent polynomials with integer coefficients. The Schur polynomials are the genuine polynomials among the s_λ's; they correspond to the λ's where each λ_k ≥ 0, and they form a basis of the space of integral symmetric polynomials. Our main theorem is now
23.8 Theorem. The Schur functions are precisely the irreducible characters of U(N).
Following the model of SU(2) in the earlier sections and the general orthogonality theory of
characters, the theorem breaks up into two statements.
23.9 Proposition. The Schur functions are orthonormal in the inner product defined by integration over U(N), with respect to the normalised invariant measure.
23.10 Proposition. Every s_λ is the character of some representation of U(N).
We shall not prove Proposition 23.9 here; as in the case of SU(2), it reduces to the following
integration formula, whose proof, however, is now more diﬃcult, due to the absence of a concrete
geometric model for U(N) with its invariant measure.
23.11 Theorem (Weyl Integration for U(N)). For a class function f on U(N), we have

    ∫_{U(N)} f(u) du = (1/N!) · (1/(2π)^N) ∫_0^{2π} · · · ∫_0^{2π} f(φ_1, . . . , φ_N) |Δ(z)|² dφ_1 · · · dφ_N.

We have used the standard convention f(φ_1, . . . , φ_N) = f(diag[z_1, . . . , z_N]) and z_p = e^{iφ_p}.
(23.12) Construction of representations
Much like in the case of SU(2), the irreps of U(N) can be found in terms of the tensor powers of its standard representation on C^N. Formulating this precisely requires a comment. A representation of U(N) will be called polynomial iff its character is a genuine (as opposed to Laurent) polynomial in the z_p. For example, the determinant representation u → det u is polynomial, as its character is z_1 · · · z_N, but its dual representation has character (z_1 · · · z_N)^{−1} and is not polynomial.
23.13 Remark. A symmetric polynomial in the z_p is expressible in terms of the elementary symmetric polynomials. If the z_p are the eigenvalues of a matrix u, these are the coefficients of the characteristic polynomial of u. As such, they have polynomial expressions in terms of the entries of u. Thus, in any polynomial representation ρ, the trace of ρ(u) is a polynomial in the entries of u ∈ U(N). It turns out in fact that all entries of the matrix ρ(u) are polynomially expressible in terms of those of u, without involving the entries of u^{−1}.
Now, the character of (C^N)^{⊗d} is the polynomial (z_1 + · · · + z_N)^d, and it follows from our discussion of Schur functions and polynomials that any representation of U(N) appearing in the irreducible decomposition of (C^N)^{⊗d} is polynomial. The key step in proving the existence of representations now consists in showing that every Schur polynomial appears in the irreducible decomposition of (C^N)^{⊗d}, for some d. This implies that all Schur polynomials are characters of representations. Moreover, any Schur function converts into a Schur polynomial upon multiplication by a large power of det; so this does imply that all Schur functions are characters of U(N)-representations (a polynomial representation tensored with a large inverse power of det).
Note that the Schur function s_λ is homogeneous of degree |λ| = λ_1 + · · · + λ_N; so if it appears in any (C^N)^{⊗d}, it must do so for d = |λ|. Checking its presence requires us only to show that

    ⟨s_λ | (z_1 + · · · + z_N)^{|λ|}⟩ > 0
in the inner product (23.11). While there is a reasonably simple argument for this, it turns out
that we can (and will) do much more with little extra eﬀort.
(23.14) Schur-Weyl duality
To state the key theorem, observe that the space (C^N)^{⊗d} carries an action of the symmetric group S_d on d letters, which permutes the factors in every vector v_1 ⊗ v_2 ⊗ · · · ⊗ v_d. Clearly, this commutes with the action of U(N), in which a matrix u ∈ U(N) transforms all tensor factors simultaneously. This way, (C^N)^{⊗d} becomes a representation of the product group U(N) × S_d.
23.15 Theorem (Schur-Weyl duality). Under the action of U(N) × S_d, (C^N)^{⊗d} decomposes as a direct sum of products

    ⊕_λ V_λ ⊗ S_λ

labelled by partitions λ of d with no more than N parts, in which V_λ is the irreducible representation of U(N) with character s_λ and S_λ is an irreducible representation of S_d.
The representations S_λ, for various λ's, depend only on λ (and not on N) and are pairwise non-isomorphic. For N ≥ d they exhaust all irreducible representations of S_d.
Recall that a partition of d with n parts is a sequence λ_1 ≥ λ_2 ≥ . . . ≥ λ_n > 0 of integers summing to d. Once N ≥ d, all partitions of d are represented in the sum. As the number of conjugacy classes in S_d equals the number of partitions, the last sentence in the theorem is obvious. Nothing else in the theorem is obvious; it will be proved in the next section.
24 The symmetric group and Schur-Weyl duality∗
(24.1) Conjugacy classes in the symmetric group
The duality theorem relates the representations of the unitary groups U(N) to those of the symmetric groups S_d, so we must say a word about the latter. Recall that every permutation has a unique decomposition as a product of disjoint cycles, and that two permutations are conjugate if and only if they have the same cycle type, which is the collection of all cycle lengths, with multiplicities. Ordering the lengths decreasingly leads to a partition µ = (µ_1 ≥ . . . ≥ µ_n > 0) of d. Assume that the numbers 1, 2, . . . , d occur m_1, m_2, . . . , m_d times, respectively, in µ; that is, m_k is the number of k-cycles in our permutation. We shall also refer to the cycle type by the notation (m). Thus, the conjugacy classes in S_d are labelled by the (m)'s such that Σ_k k·m_k = d.
It follows that the number of irreducible characters of S_d is the number of partitions of d. What is also true but unexpected is that there is a natural way to assign irreducible representations to partitions, as we shall explain.
24.2 Proposition. The conjugacy class of cycle type (m) has order

    d! / (Π_k m_k! · k^{m_k}).

Proof. By the orbit-stabiliser theorem, applied to the conjugation action of S_d on itself, it suffices to show that the centraliser of a permutation of cycle type (m) has order Π_k m_k! · k^{m_k}. But an element of the centraliser must permute the k-cycles, for all k, while preserving the cyclic order of the elements within. We get a factor of m_k! from the permutations of the m_k k-cycles, and k^{m_k} from independent cyclic permutations within each k-cycle.
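As a sanity check of the class-size formula, the sizes of all conjugacy classes of S_d must sum to |S_d| = d!. Assuming Python 3.8+ is available, a minimal sketch (the helper names are illustrative):

```python
from math import factorial, prod

def partitions(d, max_part=None):
    """Yield partitions of d as weakly decreasing tuples."""
    if max_part is None:
        max_part = d
    if d == 0:
        yield ()
        return
    for k in range(min(d, max_part), 0, -1):
        for rest in partitions(d - k, k):
            yield (k,) + rest

def class_size(d, mu):
    # Proposition 24.2: d! / (prod_k m_k! * k^m_k), where m_k = multiplicity of k in mu
    m = {k: mu.count(k) for k in set(mu)}
    return factorial(d) // prod(factorial(mk) * k ** mk for k, mk in m.items())

for d in range(1, 8):
    assert sum(class_size(d, mu) for mu in partitions(d)) == factorial(d)
print("class sizes sum to d! for d = 1..7")
```

For instance, `class_size(3, (3,))` gives 2, the two 3-cycles of S_3.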
(24.3) The irreducible characters of S_d
Choose N > 0 and denote by p_µ or p_(m) the product of power sums

    p_µ(z) = Π_k (z_1^{µ_k} + z_2^{µ_k} + · · · + z_N^{µ_k}) = Π_{k>0} (z_1^k + · · · + z_N^k)^{m_k};

note that if we are to allow a number of trailing zeroes at the end of µ, they must be ignored in the first product. For any partition λ of d with no more than N parts, we define a function on conjugacy classes of S_d by

    ω_λ(m) = coefficient of z^{λ+δ} in p_(m)(z) · Δ(z).

It is important to observe that ω_λ(m) does not depend on N, provided the latter is larger than the number of parts of λ; indeed, an extra variable z_{N+1} can be set to zero, p_(m) is then unchanged, while both Δ(z) and z^{λ+δ} acquire a factor of z_1 z_2 · · · z_N, leading to the same coefficient in the definition.
Antisymmetry of p_(m)(z)·Δ(z) and the definition of the Schur functions lead to the following

24.4 Proposition. p_(m)(z) = Σ_λ ω_λ(m) s_λ(z), the sum running over the partitions of d with no more than N parts.

Indeed, this is equivalent to the identity

    p_(m)(z) · Δ(z) = Σ_λ ω_λ(m) a_{λ+δ}(z),

which just restates the definition of the ω_λ.
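Assuming sympy is available, the definition of ω_λ(m) can be evaluated directly for d = 3, N = 3, where it should recover the familiar character table of S_3 (trivial, standard, sign); this is an illustrative sketch, anticipating Theorem 24.5 below.

```python
import sympy as sp

z = sp.symbols("z1 z2 z3")
N = 3
delta = (2, 1, 0)
Delta = sp.prod(z[p] - z[q] for p in range(N) for q in range(p + 1, N))

def power_sum(k):
    return sum(v ** k for v in z)

def omega(lam, mu):
    # coefficient of z^(lambda + delta) in p_(m)(z) * Delta(z)
    poly = sp.Poly(sp.expand(sp.prod(power_sum(k) for k in mu) * Delta), *z)
    target = tuple(l + d for l, d in zip(lam, delta))
    return poly.coeff_monomial(sp.prod(v ** e for v, e in zip(z, target)))

classes = [(1, 1, 1), (2, 1), (3,)]   # cycle types of S_3
assert [omega((3, 0, 0), mu) for mu in classes] == [1, 1, 1]   # trivial character
assert [omega((2, 1, 0), mu) for mu in classes] == [2, 0, -1]  # standard 2-dimensional
assert [omega((1, 1, 1), mu) for mu in classes] == [1, -1, 1]  # sign character
print("S_3 character table recovered")
```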
24.5 Theorem (Frobenius character formula). The ω_λ are the irreducible characters of the symmetric group S_d.

We will prove that the ω_λ are orthonormal in the inner product of class functions on S_d. We will then show that they are characters of representations of S_d, which will imply their irreducibility. The second part will be proved in conjunction with Schur-Weyl duality.
Proof of orthonormality. We must show that for any partitions λ, ν,

    Σ_(m) ω_λ(m) ω_ν(m) / (Π_k m_k! · k^{m_k}) = δ_{λν}.    (24.6)

We will prove these identities simultaneously for all d, via the identity

    Σ_{(m);λ,ν} [ω_λ(m) ω_ν(m) / (Π_k m_k! · k^{m_k})] s_λ(z) s_ν(w) ≡ Σ_λ s_λ(z) s_λ(w)    (24.7)

for variables z, w, with (m) ranging over all cycle types of all symmetric groups S_d and λ, ν ranging over all partitions with no more than N parts of all integers d. Independence of the Schur polynomials implies (24.6).
From Proposition 24.4, the left side in (24.7) is

    Σ_(m) p_(m)(z) p_(m)(w) / (Π_k m_k! · k^{m_k}),
which is also

    Σ_(m) Π_{k>0} [(z_1^k + · · · + z_N^k)^{m_k} (w_1^k + · · · + w_N^k)^{m_k}] / (m_k! · k^{m_k})
      = Σ_(m) Π_{k>0} (Σ_{p,q} (z_p w_q)^k / k)^{m_k} / m_k!
      = Π_{k>0} exp(Σ_{p,q} (z_p w_q)^k / k)
      = exp(Σ_{p,q;k} (z_p w_q)^k / k)
      = exp(−Σ_{p,q} log(1 − z_p w_q))
      = Π_{p,q} (1 − z_p w_q)^{−1}.
We are thus reduced to proving the identity

    Σ_λ s_λ(z) s_λ(w) = Π_{p,q} (1 − z_p w_q)^{−1}.    (24.8)
There are (at least) two ways to proceed. We can multiply both sides by Δ(z)Δ(w) and show that each side agrees with the N × N Cauchy determinant det[(1 − z_p w_q)^{−1}]. The equality

    det[(1 − z_p w_q)^{−1}] = Δ(z)Δ(w) · Π_{p,q} (1 − z_p w_q)^{−1}

can be proved by clever row operations and induction on N. (See Appendix A of Fulton-Harris, Representation Theory, for help if needed.) On the other hand, if we expand each entry in the
matrix in a geometric series, multilinearity of the determinant gives

    det[(1 − z_p w_q)^{−1}] = Σ_{l_1,...,l_N ≥ 0} det[(z_p w_q)^{l_q}]
      = Σ_{l_1,...,l_N ≥ 0} det[z_p^{l_q}] · Π_q w_q^{l_q}
      = Σ_{l_1,...,l_N ≥ 0} a_l(z) w^l
      = Σ_{l_1 > l_2 > · · · > l_N ≥ 0} a_l(z) a_l(w),

the last identity from the antisymmetry in l of the a_l. The result is the left side of (24.8) multiplied by Δ(z)Δ(w), as desired.
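Assuming sympy is available, the Cauchy determinant identity used in this argument can be checked symbolically for small N; a minimal sketch:

```python
import sympy as sp

# Verify det[(1 - z_p w_q)^{-1}] = Delta(z) Delta(w) / prod_{p,q} (1 - z_p w_q) for N = 3.
N = 3
z = sp.symbols(f"z1:{N + 1}")
w = sp.symbols(f"w1:{N + 1}")
C = sp.Matrix(N, N, lambda p, q: 1 / (1 - z[p] * w[q]))
Dz = sp.prod(z[p] - z[q] for p in range(N) for q in range(p + 1, N))
Dw = sp.prod(w[p] - w[q] for p in range(N) for q in range(p + 1, N))
rhs = Dz * Dw / sp.prod(1 - zp * wq for zp in z for wq in w)
assert sp.cancel(C.det() - rhs) == 0
print("Cauchy determinant identity holds for N =", N)
```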
Another proof of (24.8) can be given by exploiting the orthogonality properties of Schur functions and a judicious use of Cauchy's integration formula. Indeed, the left-hand side Σ(z, w) of the equation has the property that

    (1/N!) ∮ · · · ∮ Σ(z, w^{−1}) f(w) Δ(w) Δ(w^{−1}) (dw_1 / 2πi·w_1) · · · (dw_N / 2πi·w_N) = f(z)

for any symmetric polynomial f(w), while any homogeneous symmetric Laurent polynomial containing negative powers of the w_q integrates to zero. (The integrals are over the unit circles.) An N-fold application of Cauchy's formula shows that the same is true for Π_{p,q} (1 − z_p w_q^{−1})^{−1}, the function obtained from the right-hand side of (24.8) after substituting w_q ↔ w_q^{−1}, subject to the convergence condition |z_p| < |w_q| for all p, q. I will not give the detailed calculation, as the care needed in discussing convergence makes this argument more difficult, in the end, than the one with the Cauchy determinant, but just observe that the algebraic manipulation

    Δ(w)Δ(w^{−1}) / [w_1 · · · w_N · Π_{p,q} (1 − z_p w_q^{−1})] = Π_{p≠q} (w_q − w_p) / Π_{p,q} (w_q − z_p)
holds, and the desired integral is rewritten as

    (1/N!) ∮ · · · ∮ [Π_{p≠q} (w_q − w_p) / Π_{p,q} (w_q − z_p)] f(w) (dw_1 / 2πi) · · · (dw_N / 2πi) = f(z),

for any symmetric polynomial f. For N = 1 one recognises the Cauchy formula; in general, we have simple poles at the z_p, to which values the w_q are to be set, in all possible permutations. (The numerator ensures that setting different w's to the same z value gives no contribution.) Note that, if f is not symmetric, the output is the symmetrisation of f.
(24.9) Proof of the key theorems
We now prove the following, with a single calculation:
1. The ω_λ are characters of representations of S_d.
2. Every Schur polynomial is the character of some representation of U(N).
3. The Schur-Weyl duality theorem.
Note that (1) and (2) automatically entail the irreducibility of the corresponding representations. Specifically, we will verify the following

24.10 Proposition. The character of (C^N)^{⊗d}, as a representation of S_d × U(N), is

    Σ_λ ω_λ(m) s_λ(z),

with the sum ranging over the partitions of d with no more than N parts.
The character is the trace of the action of M := σ ◦ diag[z_1, . . . , z_N], for any permutation σ of cycle type (m). By Lemma 22.6, (C^N)^{⊗d} breaks up as a sum ⊕_i W_i ⊗ V_i of tensor products of irreducible representations of S_d and U(N), respectively. Collecting together the irreducible representations of U(N) expresses the character of (C^N)^{⊗d} as

    Σ_λ χ_λ(m) s_λ(z),

for certain characters χ_λ of S_d. Comparing this with the proposition implies (1)-(3) in our list.
Proof. Let e_1, . . . , e_N be the standard basis of C^N. A basis for (C^N)^{⊗d} is e_{i_1} ⊗ e_{i_2} ⊗ · · · ⊗ e_{i_d}, for all possible indexing functions i : {1, . . . , d} → {1, . . . , N}. Under M, this basis vector gets transformed to z_{i_1} · · · z_{i_d} · e_{i_{σ(1)}} ⊗ e_{i_{σ(2)}} ⊗ · · · ⊗ e_{i_{σ(d)}}. This is off-diagonal unless i_k = i_{σ(k)} for each k, that is, unless i is constant within each cycle of σ. So the only contribution to the trace comes from such i.
The diagonal entry for such an i is a product of one factor z_i^k for each k-cycle, with the appropriate index i. Summing this over all these i leads to

    Π_{k>0} (z_1^k + · · · + z_N^k)^{m_k} = p_(m)(z),

and our proposition now follows from Prop. 24.4.
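Assuming numpy is available, the trace computation in this proof can be replayed numerically for N = 2, d = 3 with σ a 3-cycle, where p_(m)(z) = z_1³ + z_2³; the specific values of z are arbitrary test points.

```python
import numpy as np
from itertools import product

N, d = 2, 3
z = np.array([0.6 + 0.2j, -0.4 + 0.7j])
sigma = [1, 2, 0]  # a 3-cycle on positions 0, 1, 2

# Sum the diagonal entries of M = sigma . diag(z) over the basis e_{i1} x ... x e_{id}:
# only indexing functions i constant on each cycle of sigma contribute.
trace = 0
for i in product(range(N), repeat=d):
    if all(i[k] == i[sigma[k]] for k in range(d)):
        trace += np.prod(z[list(i)])

p3 = np.sum(z ** 3)  # p_(m)(z) for a single 3-cycle
assert abs(trace - p3) < 1e-12
print("trace matches p_(m)(z)")
```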
The reformulation of Prop. 1.1 leads to the following observation. For any action a of H on X and group homomorphism ϕ : G → H, there is defined a restricted or pulled-back action ϕ*a of G on X. In terms of the original definition, the pulled-back action sends (g, x) to a(ϕ(g), x).

(1.3) Example: Tautological action of Perm(X) on X
This is the obvious action, call it T, sending (f, x) to f(x), where f : X → X is a bijection and x ∈ X. Check that it satisfies the properties of an action! In this language, the action a of G on X is α*T, with the homomorphism α of the proposition: the pullback under α of the tautological action.

(1.4) Linearity
The question of classifying all possible X with action of G is hopeless in such generality, but one should recall that, in first approximation, mathematics is linear. So we shall take our X to be a vector space over some ground field, and ask that the action of G be linear as well, in other words, that it should preserve the vector space structure. Our interest is mostly confined to the case when the field of scalars is C, although we shall occasionally mention how the picture changes when other fields are studied.

1.5 Definition. A linear representation ρ of G on a complex vector space V is a set-theoretic action on V which preserves the linear structure, that is,

    ρ(g)(v_1 + v_2) = ρ(g)v_1 + ρ(g)v_2, ∀v_1, v_2 ∈ V,
    ρ(g)(kv) = k·ρ(g)v, ∀k ∈ C, v ∈ V.

Unless otherwise mentioned, representation will mean finite-dimensional complex representation.

(1.6) Example: The general linear group
Let V be a complex vector space of dimension n < ∞. After choosing a basis, we can identify it with C^n, although we shall avoid doing so without good reason. Recall that the endomorphism algebra End(V) is the set of all linear maps (or operators) L : V → V, with the natural addition of linear maps and the composition as multiplication. (If you do not remember, you should verify that the sum and composition of two linear maps are also linear maps.)
If V has been identified with C^n, a linear map is uniquely representable by a matrix, and the addition of linear maps becomes the entry-wise addition, while the composition becomes the matrix multiplication. (Another good fact to review if it seems lost in the mists of time.) Inside End(V) there is contained the group GL(V) of invertible linear operators (those admitting a multiplicative inverse); the group operation, of course, is composition (matrix multiplication). I leave it to you to check that this is a group, with unit the identity operator Id. The following should be obvious enough, from the definitions.

1.7 Proposition. V is naturally a representation of GL(V). It is called the standard representation of GL(V).

The following corresponds to Prop. 1.1, involving the same abuse of language.

1.8 Proposition. A representation of G on V "is the same as" a group homomorphism from G to GL(V).

Proof. Observe that, to give a linear action of G on V, we must assign to each g ∈ G a linear self-map ρ(g) ∈ End(V). Compatibility of the action with the group law requires

    ρ(h)(ρ(g)(v)) = ρ(hg)(v), ρ(1)(v) = v, ∀v ∈ V,
whence we conclude that ρ(1) = Id, ρ(hg) = ρ(h) ◦ ρ(g). Taking h = g^{−1} shows that ρ(g) is invertible, hence lands in GL(V). The first relation then says that we are dealing with a group homomorphism.
1.9 Definition. An isomorphism φ between two representations (ρ_1, V_1) and (ρ_2, V_2) of G is a linear isomorphism φ : V_1 → V_2 which intertwines with the action of G, that is, satisfies φ(ρ_1(g)(v)) = ρ_2(g)(φ(v)).

Note that the equality makes sense even if φ is not invertible, in which case it is just called an intertwining operator or G-linear map. However, if φ is invertible, we can write instead

    ρ_2 = φ ◦ ρ_1 ◦ φ^{−1},    (1.10)
meaning that we have an equality of linear maps after inserting any group element g. Observe that this relation determines ρ_2, if ρ_1 and φ are known. We can finally formulate the Basic Problem of Representation Theory: Classify all representations of a given group G, up to isomorphism. For arbitrary G, this is very hard! We shall concentrate on finite groups, where a very good general theory exists. Later on, we shall study some examples of topological compact groups, such as U(1) and SU(2). The general theory for compact groups is also completely understood, but requires more difficult methods.

I close with a simple observation, tying in with Definition 1.9. Given any representation ρ of G on a space V of dimension n, a choice of basis in V identifies this linearly with C^n. Call the isomorphism φ. Then, by formula (1.10), we can define a new representation ρ_2 of G on C^n, which is isomorphic to (ρ, V). So any n-dimensional representation of G is isomorphic to a representation on C^n. The use of an abstract vector space does not lead to 'new' representations, but it does free us from the presence of a distinguished basis.
Lecture 2
Today we discuss the representations of a cyclic group, and then proceed to define the important notions of irreducibility and complete reducibility.

(2.1) Concrete realisation of isomorphism classes
We observed last time that every m-dimensional representation of a group G was isomorphic to a representation on C^m. This leads to a concrete realisation of the set of m-dimensional isomorphism classes of representations.

2.2 Proposition. The set of m-dimensional isomorphism classes of G-representations is in bijection with the quotient Hom(G; GL(m; C))/GL(m; C) of the set of group homomorphisms to GL(m) by the overall conjugation action on the latter.

Proof. Conjugation by φ ∈ GL(m) sends a homomorphism ρ to the new homomorphism g → φ ◦ ρ(g) ◦ φ^{−1}. According to Definition 1.9, this has exactly the effect of identifying isomorphic representations.

2.3 Remark. The proposition is not as useful (for us) as it looks. It can be helpful in understanding certain infinite discrete groups, such as Z below, in which case the set Hom can have interesting geometric structures. However, for finite groups, the set of isomorphism classes is finite, so its description above is not too enlightening.
(2.4) Example: Representations of Z
We shall classify all representations of the group Z, with its additive structure. To define a representation on a vector space V, we must specify an invertible matrix ρ(n) for every n ∈ Z. However, there is no choice involved beyond ρ(1): we must have ρ(0) = Id, and we can recover ρ(n) as ρ(1 + · · · + 1) = ρ(1)^n. Conversely, for any invertible map ρ(1) ∈ GL(m), we obtain a representation of Z this way. In other words, m-dimensional isomorphism classes of representations of Z are in bijection with the conjugacy classes in GL(m). These can be parametrised by the Jordan canonical form (see the next example): we will have m continuous parameters (the eigenvalues, which are non-zero complex numbers and are defined up to reordering) and some discrete parameters, specifying the Jordan block sizes, whenever two or more eigenvalues coincide.

(2.5) Example: the cyclic group of order n
Let G = {1, g, . . . , g^{n−1}}, with the relation g^n = 1. A representation of G on V defines an invertible endomorphism ρ(g) ∈ GL(V). As before, ρ(1) = Id and ρ(g^k) = ρ(g)^k, so all other images of ρ are determined by the single operator ρ(g). Choosing a basis of V allows us to convert ρ(g) into a matrix A, but we shall want to be careful with our choice: we must impose the condition A^n = Id. Recall from general theory that there exists a Jordan basis in which ρ(g) takes its block-diagonal Jordan normal form

    A = [ J_1 0 · · · 0 ; 0 J_2 · · · 0 ; · · · ; 0 0 · · · J_m ],

where each Jordan block J_k takes the form J = λ·Id + N, with λ down the diagonal and ones just above it; here N is the Jordan matrix with λ = 0, and N^p is the matrix with zeroes and ones only, the ones lying on a line parallel to the diagonal, p steps above it. But A^n itself will be block-diagonal, so we must have J_k^n = Id for every block. To compute that, note

    J^n = (λ·Id + N)^n = λ^n·Id + n λ^{n−1} N + (n(n−1)/2) λ^{n−2} N² + · · · .

So the sum above can be Id only if λ^n = 1 and N = 0; that is, each J_k is a 1 × 1 block, and ρ(g) is diagonal in this basis, with nth roots of unity on the diagonal. We conclude the following

2.6 Proposition. If V is a representation of the cyclic group G of order n, there exists a basis in which ρ(g) is diagonal, with nth roots of unity on the diagonal. Thus, the m-dimensional representations of C_n are classified up to isomorphism by unordered m-tuples of nth roots of unity.

(2.7) Example: Finite abelian groups
The discussion for cyclic groups generalises to any finite Abelian group A. (The resulting classification of representations is more or less explicit, depending on whether we are willing to use the classification theorem for finite abelian groups; see below.) We recall the following fact from linear algebra:
2.8 Proposition. Any family of commuting, separately diagonalisable m × m matrices can be simultaneously diagonalised.

The proof is delegated to the example sheet. This implies that any representation of A is isomorphic to one where every group element acts diagonally. Each diagonal entry then determines a one-dimensional representation of A. Note that for 1-dimensional representations, there is no distinction between identity and isomorphism (the conjugation action of GL(1, C) on itself is trivial). So the classification reads: m-dimensional isomorphism classes of representations of A are in bijection with unordered m-tuples of 1-dimensional representations of A, viewed as homomorphisms ρ : A → C^×.

To say more, we must invoke the classification of finite abelian groups, according to which A is isomorphic to a direct product of cyclic groups. To specify a 1-dimensional representation of A we must then specify a root of unity of the appropriate order independently for each generator; in the cyclic case, the possible actions of C_n on a line correspond to the n eligible roots of unity to specify for ρ(g).

(2.9) Subrepresentations and Reducibility
Let ρ : G → GL(V) be a representation of G.

2.10 Definition. A subrepresentation of V is a G-invariant subspace W ⊆ V; that is, ∀w ∈ W, g ∈ G ⇒ ρ(g)(w) ∈ W. With this action, W becomes a representation of G under ρ(g).

2.11 Definition. Recall that, given a subspace W ⊆ V, we can form the quotient space V/W, the set of W-cosets v + W in V. If W was G-invariant, the G-action on V descends to (= defines) an action on V/W by setting g(v + W) := ρ(g)(v) + W. (If we choose another v' in the same coset as v, then v − v' ∈ W, so ρ(g)(v − v') ∈ W, and then the cosets ρ(g)(v) + W and ρ(g)(v') + W agree.) With this action, V/W is called the quotient representation of V under W.

2.12 Definition. The direct sum of two representations (ρ_1, V_1) and (ρ_2, V_2) is the space V_1 ⊕ V_2 with the block-diagonal action ρ_1 ⊕ ρ_2 of G.

(2.13) Example
In the direct sum V_1 ⊕ V_2, V_1 is a subrepresentation and V_2 is isomorphic to the associated quotient representation. Of course the roles of 1 and 2 can be interchanged. However, one should take care that, for an arbitrary group, it need not be the case that any representation V with subrepresentation W decomposes as W ⊕ V/W.

2.14 Definition. A representation is called irreducible if it contains no proper invariant subspaces. It is called completely reducible if it decomposes as a direct sum of irreducible subrepresentations.

In particular, irreducible representations are completely reducible, and 1-dimensional representations of any group are irreducible. Earlier, we decomposed a representation V of the finite abelian group A into a direct sum of lines L_1 ⊕ · · · ⊕ L_{dim V}, along the vectors in the diagonal basis. Each line is preserved by the action of the group, so we thus proved that finite-dimensional complex representations of a finite abelian group are completely reducible:

2.15 Proposition. Every complex representation of a finite abelian group is completely reducible.

It will be our goal to establish an analogous statement for every finite group G. For non-abelian groups we shall have to give up on the 1-dimensional requirement, but we shall still salvage a canonical decomposition; the result is called the Complete Reducibility Theorem. At any rate, an easier treatment of finite abelian groups will emerge from Schur's Lemma in Lecture 4.
3 Complete Reducibility and Unitarity
In the homework, you find an example of a complex representation of the group Z which is not completely reducible, and also of a representation of the cyclic group of prime order p over the finite field F_p which is not completely reducible. This underlines the importance of the following Complete Reducibility Theorem for finite groups.

3.1 Theorem. Every complex representation of a finite group is completely reducible.

The theorem is so important that we shall give two proofs. The first uses inner products, and so applies only to R or C, but generalises to compact groups. The more algebraic proof, on the other hand, extends to any field of scalars whose characteristic does not divide the order of the group (equivalently, the order of the group should not be 0 in the field). Beautiful as it is, the result would have limited value without some supply of irreducible representations. It turns out that the following example provides an adequate supply.

(3.2) Example: the regular representation
Let C[G] be the vector space of complex functions on G. It has a basis {e_g}_{g∈G}, with e_g representing the function equal to 1 at g and 0 elsewhere. G acts on this basis as follows: λ(g)(e_h) = e_{gh}. This set-theoretic action extends by linearity to the vector space:

    λ(g)(Σ_{h∈G} v_h · e_h) = Σ_{h∈G} v_h · λ(g)(e_h) = Σ_{h∈G} v_h · e_{gh}.

(Exercise: check that this defines a linear action.) On coordinates, the action is opposite to what you might expect: namely, the h-coordinate of λ(g)(v) is v_{g^{−1}h}. The result is the left regular representation of G. Later we will decompose λ into irreducibles, and we shall see that every irreducible isomorphism class of G-reps occurs in the decomposition.

3.3 Remark. If G acts on a set X, let C[X] be the vector space of functions on X, with obvious basis {e_x}_{x∈X}. By linear extension of the permutation action, ρ(g)(e_x) = e_{gx}, we get a linear action of G on C[X]; this is the permutation representation associated to X.

(3.4) Unitarity
Inner products are an important aid in investigating real or complex representations.

3.5 Definition. A representation ρ of G on a complex vector space V is unitary if V has been equipped with a hermitian inner product ⟨ | ⟩ which is preserved by the action of G, that is, ⟨v|w⟩ = ⟨ρ(g)(v)|ρ(g)(w)⟩, ∀v, w ∈ V, g ∈ G. It is unitarisable if it can be equipped with such a product (even if none has been chosen).

We can restate this condition in the form ρ(g)* = ρ(g^{−1}). (The latter is also ρ(g)^{−1}.) The representation is unitary iff the homomorphism ρ : G → GL(V) lands inside the unitary group U(V) (defined with respect to the inner product). For example, the regular representation of a finite group is unitarisable: it is made unitary by declaring the standard basis vectors e_g to be orthonormal.

3.6 Theorem (Unitary Criterion). Finite-dimensional unitary representations of any group are completely reducible.

The proof relies on the following
The proof relies on the following

3.7 Lemma. Let V be a unitary representation of G and let W be an invariant subspace. Then, the orthocomplement W⊥ is also G-invariant.

Proof. We must show that gv ∈ W⊥, ∀v ∈ W⊥ and ∀g ∈ G. Clearly, v ∈ W⊥ ⇔ ⟨v | w⟩ = 0, ∀w ∈ W. If so, then ⟨gv | gw⟩ = 0, for any g ∈ G and ∀w ∈ W. This implies that ⟨gv | w⟩ = 0, ∀w ∈ W, since, using the invariance of W, we can choose w′ = g⁻¹w ∈ W and write ⟨gv | w⟩ = ⟨gv | gw′⟩ = 0. As v and g were arbitrary, gv ∈ W⊥.

We are now ready to prove the following stronger version of the unitary criterion (3.6).

3.8 Theorem. A finite-dimensional unitary representation of a group admits an orthogonal decomposition into irreducible unitary subrepresentations.

Proof. Any subrepresentation is unitary for the restricted inner product, so we must merely produce the decomposition into irreducibles. Assume that V is not irreducible; it then contains a proper invariant subspace W ⊆ V. By Lemma 3.7, W⊥ is another subrepresentation, and we clearly have an orthogonal decomposition V = W ⊕ W⊥. Continuing with W and W⊥ must terminate in an irreducible decomposition, by finite-dimensionality.

3.9 Remark. The assumption dim V < ∞ is in fact unnecessary for finite G, but, for general G, it cannot be removed without changes to the statement.

3.10 Theorem (Weyl's unitary trick). Finite-dimensional representations of finite groups are unitarisable.

Proof. Starting with a hermitian, positive definite inner product ⟨· | ·⟩₀ on your representation, construct a new one ⟨· | ·⟩ by averaging over G:

    ⟨v | w⟩ := (1/|G|) Σ_{g∈G} ⟨gv | gw⟩₀,

which is G-invariant. Check invariance and positivity (homework).

The complete reducibility theorem (3.1) for finite groups follows from Theorem 3.8 and Weyl's unitary trick.

Weyl's unitary trick applies to continuous representations of compact groups; there, we use integration over G to average. The key point is the existence of a measure on G which is invariant for the translation action of the group on itself (the Haar measure). For groups of geometric origin, such as U(1) (and even U(n) or SU(n)), the existence of such a measure is obvious, but in general it is a difficult theorem.

3.11 Remark. For representations of infinite, noncompact groups on infinite-dimensional (Hilbert) spaces, the most we can usually hope for is a decomposition into a direct integral of irreducibles; this leads to delicate and difficult problems of functional analysis (von Neumann algebras). For example, this happens for the translation action of the group R on the Hilbert space L²(R). The irreducible "components" are the Fourier modes exp(−ikx), labelled by k ∈ R; note that they are not quite in L². In that case, there results an integral decomposition of any vector f(x) ∈ L²(R) into Fourier modes,

    f(x) = (1/2π) ∫_R f̂(k) exp(−ik·x) dk,

known to you as the Fourier inversion formula; f̂(k)/2π should be regarded as the coordinate value of f along the "basis vector" exp(−ikx). Even this milder expectation of integral decomposition can fail for more general groups.
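The averaging in Weyl's unitary trick can be watched concretely. In the following sketch (an illustration, not from the notes; the starting inner product and test vectors are arbitrary choices), G = S3 permutes the coordinates of R³, and averaging a non-invariant inner product over the six group elements produces an invariant one.

```python
# Weyl's averaging trick, sketched for G = S3 acting on R^3 by permuting
# coordinates (an illustration; the choices of M, v, w are arbitrary).
from itertools import permutations

G = list(permutations(range(3)))            # the six elements of S3

def act(g, v):
    # Send e_j to e_{g(j)}, i.e. (g·v)_i = v_{g^{-1}(i)}.
    w = [0.0] * 3
    for j, x in enumerate(v):
        w[g[j]] = x
    return w

# An arbitrary positive-definite but NOT invariant inner product:
M = [2.0, 5.0, 11.0]
ip0 = lambda v, w: sum(m * a * b for m, a, b in zip(M, v, w))

# Average it over the group: <v|w> := (1/|G|) * sum over g of <gv|gw>_0.
ip = lambda v, w: sum(ip0(act(g, v), act(g, w)) for g in G) / len(G)

v, w = [1.0, 2.0, 3.0], [-1.0, 0.5, 4.0]
# The averaged product is G-invariant ...
assert all(abs(ip(act(g, v), act(g, w)) - ip(v, w)) < 1e-9 for g in G)
# ... while the original one is not.
assert any(abs(ip0(act(g, v), act(g, w)) - ip0(v, w)) > 1e-9 for g in G)
```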
(3.12) Alternative proof of Complete Reducibility

The following argument makes no appeal to inner products, and so has the advantage of working over more general ground fields.

3.13 Lemma. Let V be a finite-dimensional representation of the finite group G over a field of characteristic not dividing |G|. Then, every invariant subspace W ⊆ V has an invariant complement W′ ⊆ V.

Recall that the subspace W′ is a complement of W if W ⊕ W′ = V. The complementing condition can be broken down into two parts:
• W ∩ W′ = {0}
• dim W + dim W′ = dim V.

Proof. We'd like to construct the complement using the same trick as with the inner product, by "averaging a complement over G" to produce an invariant complement. While we cannot average subspaces, we can average linear maps. To see the issue, note that any complement of W is completely determined as the kernel of a projection operator P : V → W. Choose now an arbitrary complement W″ ⊆ V (not necessarily invariant) and call P the projection operator which sends a vector v ∈ V to its W-component P(v). It satisfies
• P = Id on W
• Im P = W.

Now let Q := (1/|G|) Σ_{g∈G} g ∘ P ∘ g⁻¹. I claim that:
• Q commutes with G, that is, Q ∘ h = h ∘ Q, ∀h ∈ G;
• Q = Id on W;
• Im Q = W.
(The claims are readily checked: reindexing the sum over G gives the first, and the other two follow because each g ∘ P ∘ g⁻¹ is the identity on W and has image in W, by the invariance of W.)

The first condition implies that W′ := ker Q is a G-invariant subspace: indeed, v ∈ ker Q ⇒ Q(v) = 0, hence h ∘ Q(v) = 0, hence Q(hv) = 0 and hv ∈ ker Q, ∀h ∈ G. The second condition implies that W ∩ W′ = {0}, while the third implies that dim W + dim W′ = dim V. So we have constructed our invariant complement.

4 Schur's Lemma

To understand representations, we also need to understand the automorphisms of a representation; these are the invertible self-maps commuting with the G-action. Indeed, assume given a representation of G, and say you have performed a construction within it (such as a drawing of your favourite Disney character; more commonly, it will be a distinguished geometric object). If someone presents you with another representation, claimed to be isomorphic to yours, how uniquely can you repeat your construction in the other copy of the representation? Schur's Lemma answers this for irreducible representations.

4.1 Theorem (Schur's lemma over C). If V is an irreducible complex G-representation, then every linear operator φ : V → V commuting with G is a scalar.
Proof. Let λ be an eigenvalue of φ (one exists, as we are over C). I claim that the eigenspace E_λ is G-invariant: indeed, v ∈ E_λ ⇒ φ(v) = λv, whence φ(gv) = gφ(v) = g(λv) = λ · gv, so gv ∈ E_λ, and g was arbitrary. But then, E_λ = V by irreducibility, so φ = λId.

4.2 Corollary. If V and W are irreducible, the space HomG(V, W) of intertwiners is either 1-dimensional or {0}, depending on whether or not the representations are isomorphic.

Proof. The kernel and the image of an intertwiner φ are invariant subspaces of V and W, respectively (proof as for the eigenspaces above). Irreducibility leaves ker φ = 0 or V, and Im φ = 0 or W, as the only options. If φ ≠ 0, then ker φ = 0 and Im φ = W, so φ is an isomorphism; that is, any nonzero map is an isomorphism. Finally, to see that two nonzero intertwiners φ, ψ differ by a scalar factor, apply Schur's lemma to φ⁻¹ ∘ ψ.

(4.3) Schur's Lemma over other fields

The correct statement over other fields (even over R) requires some preliminary definitions.

4.4 Definition. An algebra over a field k is an associative ring with unit, containing a distinguished copy of k, commuting with every algebra element, and with 1 ∈ k being the algebra unit. A division ring is a ring where every nonzero element is invertible, and a division algebra is a division ring which is also a k-algebra.

4.5 Definition. Let V be a G-representation over k. The endomorphism algebra EndG(V) is the space of linear self-maps φ : V → V which commute with the group action. The addition is the usual addition of linear maps, and the multiplication is composition.

I entrust it to your care to check that EndG(V) is indeed a k-algebra. Given two representations V and W, we denote by HomG(V, W) the vector space of intertwiners from V to W, meaning the linear maps which commute with the G-action; in earlier notation, EndG(V) = HomG(V, V). Note, however, that HomG(V, W) is only a vector space in general.

4.6 Theorem (Schur's Lemma). If V is an irreducible finite-dimensional G-representation over k, then EndG(V) is a finite-dimensional division algebra over k.

Proof. For any φ : V → V commuting with G, ker φ and Im φ are G-invariant subspaces. Irreducibility implies that either ker φ = {0}, so φ is injective and then an isomorphism (for dimensional reasons, or because Im φ = V), or else ker φ = V, and then φ = 0. So every nonzero element of EndG(V) is invertible.

A second proof of Schur's Lemma over C can now be deduced from the following.

4.7 Proposition. The only finite-dimensional division algebra over C is C itself.

Proof. Let A be the algebra and α ∈ A. The elements 1, α, α², ... of the finite-dimensional vector space A over C must be linearly dependent. So we have a relation P(α) = 0 in A, for some polynomial P with complex coefficients. But such a polynomial factors into linear terms Π_k (α − α_k), for some complex numbers α_k, the roots of P. Since we are in a division ring and the product is zero, one of the factors must be zero. But then, α = α_k, therefore α ∈ C and so A = C.

4.8 Remark. Over the reals, there are three finite-dimensional division algebras, namely R, C and H. All of these can occur as endomorphism algebras of irreducibles of finite groups (see Example Sheet 1).
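Schur's lemma has computational teeth: averaging an arbitrary matrix over an irreducible representation yields an intertwiner, which must then be a scalar. A sketch (not from the notes) using the standard 2-dimensional irreducible representation of S3 = D6, with g acting as diag(ω, ω²) and r as the coordinate swap — this concrete model only appears later in the notes, so it is an assumption here.

```python
# Schur's lemma in action (a sketch): averaging an arbitrary matrix M over the
# group produces an operator commuting with G, hence a scalar multiple of Id.
import cmath

w = cmath.exp(2j * cmath.pi / 3)                    # basic cube root of unity
g = [[w, 0], [0, w ** 2]]
r = [[0, 1], [1, 0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

e = [[1, 0], [0, 1]]
g2 = matmul(g, g)
reps = [e, g, g2, r, matmul(g, r), matmul(g2, r)]   # all six group elements

# All six matrices are unitary, so rho(h)^{-1} is the conjugate transpose.
dag = lambda A: [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

M = [[1, 2], [3, 4]]                                # an arbitrary matrix
phi = [[sum(matmul(matmul(h, M), dag(h))[i][j] for h in reps) / 6
        for j in range(2)] for i in range(2)]

# phi commutes with the whole group, so Schur forces phi = lambda * Id;
# averaging preserves the trace, so lambda = Tr(M)/2 = 2.5.
assert all(abs(phi[i][j] - (2.5 if i == j else 0)) < 1e-9
           for i in range(2) for j in range(2))
```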
(4.9) Application: Finite abelian groups revisited

Schur's Lemma gives a shorter proof of the 1-dimensionality of irreps for finite abelian groups, also proving complete reducibility in the process.

4.10 Theorem. Any irreducible complex representation of an abelian group G is one-dimensional.

Proof. Let g ∈ G and call ρ the representation. Then, ρ(g) commutes with every ρ(h). By irreducibility and Schur's lemma, it follows that ρ(g) is a scalar. As each group element acts by a scalar, every line in V is invariant, so irreducibility implies that V itself is a line.

4.11 Remark. As one might expect from the failure of Schur's lemma, this proposition is utterly false over other ground fields. For example, over the reals, the two irreducible representations of Z/3 have dimensions 1 and 2. Nonetheless, one can show in general that the irreps are 1-dimensional over their endomorphism ring, which is an extension field of k.

In particular, the number of irreducible isomorphism classes of representations of a finite abelian group equals the order of the group. The structure theorem asserts that every finite abelian G splits as Π_k C_{n_k}, for cyclic groups C_{n_k} of orders n_k. Choose a generator g_k in each factor; to specify a one-dimensional representation, this must act on any line by a root of unity of order dividing n_k, and one must choose such a root of unity for each k. However, there is no canonical correspondence between group elements and representations: one must choose generators in each cyclic factor for that.

The equality above generalises to non-abelian finite groups, but not as naively as one might think. One correct statement is that the number of irreducible isomorphism classes equals the number of conjugacy classes in the group. Another generalisation asserts that the squared dimensions of the irreducible isomorphism classes add up to the order of the group.

5 Isotypical Decomposition

To motivate the result we prove today, recall that any diagonalisable linear endomorphism A : V → V leads to an eigenspace decomposition of the space V as ⊕_λ V(λ), where V(λ) denotes the subspace of vectors verifying Av = λv. The decomposition is canonical, in the sense that it depends on A alone and no other choices. (By contrast, there is no canonical eigenbasis of V: we must choose a basis in each V(λ) for that.)

(5.1) Example: Representations of D6

The symmetric group S3 on three letters, also isomorphic to the dihedral group D6, has two generators g, r satisfying g³ = r² = 1, gr = rg⁻¹. To show how useful this is, we now classify the irreducible representations of D6 "by hand". It is easy to spot three irreducible representations:
• The trivial 1-dimensional representation 1
• The sign representation S
• The geometric 2-dimensional representation W.
All these representations are naturally defined on real vector spaces (they are the trivial and the rotation representations, respectively), but we wish to work with the complex numbers, so we view them as complex vector spaces instead.

The 2-dimensional rep W is defined on C² by letting g act as the diagonal matrix ρ(g) = diag(ω, ω²), where ω = exp(2πi/3) is the basic cube root of unity, and r as the matrix

    ρ(r) = [ 0 1 ; 1 0 ].

(Exercise: Relate this to the geometric action of D6; you must change bases to diagonalise g.)
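The matrices just written down can be sanity-checked against the presentation of D6. A small sketch (not from the notes):

```python
# Sanity check (a sketch) that the matrices defining W satisfy the D6
# presentation g^3 = r^2 = 1 and g r = r g^{-1}.
import cmath

w = cmath.exp(2j * cmath.pi / 3)
g = [[w, 0], [0, w ** 2]]
r = [[0, 1], [1, 0]]
e = [[1, 0], [0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B, tol=1e-9):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

g_inv = matmul(g, g)                               # g^{-1} = g^2, since g^3 = 1

assert close(matmul(g, matmul(g, g)), e)           # g^3 = 1
assert close(matmul(r, r), e)                      # r^2 = 1
assert close(matmul(g, r), matmul(r, g_inv))       # g r = r g^{-1}
```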
Proof. Let now (π, V) be any complex representation of D6. We can diagonalise the action of π(g) and split V into eigenspaces, V = V(1) ⊕ V(ω) ⊕ V(ω²). The relation rgr⁻¹ = g⁻¹ shows that π(r) must preserve V(1) and interchange the other summands: that is, π(r) : V(ω) → V(ω²) is an isomorphism, with inverse π(r) (since r² = 1).

Decompose V(1) into the π(r)-eigenspaces. Because r² = 1, the eigenvalues are ±1, and since g acts trivially it follows that V(1) splits into a sum of copies of 1 and S.

Further, choose a basis e_1, ..., e_n of V(ω), and let e′_k = π(r)e_k. Then, π(r) acts on the 2-dimensional space Ce_k ⊕ Ce′_k as the matrix [ 0 1 ; 1 0 ], while π(g) acts as diag(ω, ω²). It follows that we have split V(ω) ⊕ V(ω²) into n copies of the 2-dimensional representation W, as asserted.

(5.2) Decomposition into isotypical components

Continuing the general theory, the next task is to understand completely reducible representations in terms of their irreducible constituents. A completely reducible V decomposes into a sum of irreducibles. Grouping isomorphic irreducible summands into blocks, we write V ≅ ⊕_k V_k, where each V_k is isomorphic to a sum of n_k copies of the irreducible representation W_k; the latter are assumed non-isomorphic for different k's. Another notation we might employ is V ≅ ⊕_k W_k^{⊕n_k}. We call n_k the multiplicity of W_k in V, and the decomposition of V just described the canonical decomposition, or decomposition into isotypical components V_k. This terminology must now be justified: we do not know yet whether this decomposition is unique, and not even whether the n_k are unambiguously defined; rather, this will follow from the Lemma.

5.3 Lemma. Let V = ⊕_k V_k, V′ = ⊕_k V′_k be canonical decompositions of two representations V and V′. Then, any G-invariant homomorphism φ : V → V′ maps V_k to V′_k; that is, φ is block-diagonal with respect to the canonical decompositions. (We are not assuming uniqueness of the decompositions. Also, for any completely reducible V and V′, we can arrange the decompositions to be labelled by the same indexes, adding zero blocks if some W_k is not present in the decomposition; obviously, φ will be zero on those.)

Proof. We consider the "block decomposition" of φ with respect to the direct sum decompositions of V and V′. Specifically, restricting φ to the summand V_l and projecting to V′_k leads to the "block" φ_kl : V_l → V′_k, which is a linear G-map. Consider now an off-diagonal block φ_kl, k ≠ l. If we decompose V_l and V′_k further into copies of the irreducibles W_l and W_k, the resulting blocks of φ_kl give G-linear maps between non-isomorphic irreducibles, and such maps are all null by Corollary 4.2. So φ_kl = 0 if k ≠ l, as asserted.

5.4 Theorem. Let V = ⊕_k V_k be a canonical decomposition of V.
(i) Every subrepresentation of V which is isomorphic to W_k is contained in V_k.
(ii) The canonical decomposition is unique; that is, it does not depend on the original decomposition of V into irreducibles.
(iii) The endomorphism algebra EndG(V_k) is isomorphic to a matrix algebra M_{n_k}(C), a choice of isomorphism coming from a decomposition of V_k into a direct sum of copies of the irreducible W_k.
(iv) EndG(V) is isomorphic to the direct sum of matrix algebras ⊕_k M_{n_k}(C).
Proof. Part (i) follows from the Lemma, by taking V = W_k: the inclusion of the subrepresentation is a G-map whose blocks to the other isotypical components vanish. This gives an intrinsic description of V_k, as the sum of all copies of W_k contained in V, leading to part (ii).

For (iii), let φ ∈ EndG(V_k) and write V_k as a direct sum ≅ W_k^{⊕n_k} of copies of W_k. In the resulting block-decomposition of φ, each block φ_pq is a G-invariant linear map between copies of W_k. By Schur's lemma, all blocks are scalar multiples of the identity: φ_pq = f_pq · Id, for some constant f_pq. Assigning to φ the matrix [f_pq] identifies EndG(V_k) with M_{n_k}(C). (A different decomposition of V_k is related to the old one by a G-isomorphism S : W_k^{⊕n_k} → W_k^{⊕n_k}, but this G-isomorphism is itself block-decomposable into scalar multiples of Id, so the effect on φ is simply to conjugate the associated matrix [f_pq] by S.)

Part (iv) follows from (iii) and the Lemma.

5.5 Remark. When the ground field is not C, we must allow the appropriate division rings to replace C in the theorem.

(5.6) Operations on representations

We now discuss some operations on group representations; they can be used to enlarge the supply of examples, once a single interesting representation of a group is known. We have already met the direct sum of two representations in Lecture 2; that cannot be used to produce new irreducibles, however.

5.7 Definition. The dual representation of V is the representation ρ* on the dual vector space V* := Hom(V, C) of linear maps V → C, defined by ρ*(g)(L) = L ∘ ρ(g)⁻¹. That is, a linear map L : V → C is mapped to the linear map ρ*(g)(L) sending v to L(ρ(g)⁻¹(v)).

There is actually little choice in the matter, because the more naive option ρ*(g)(L) = L ∘ ρ(g) does not determine an action of G. Note how this definition preserves the duality pairing between V and V*: L(v) = (ρ*(g)(L))(ρ(g)(v)).

The following is left as an exercise.

5.8 Proposition. The dual representation ρ* is irreducible if and only if ρ was so.

Exercise. Study duality on irreducible representations of abelian groups.

The dual representation is a special instance of the Hom space of two representations: we can replace the 1-dimensional space of scalars C, the target space of our linear map, by an arbitrary representation W. For two vector spaces V, W, Hom(V, W) will denote the vector space of linear maps from V to W.

5.9 Definition. If V and W carry linear G-actions ρ and σ, then Hom(V, W) carries a natural G-action in which g ∈ G sends φ : V → W to σ(g) ∘ φ ∘ ρ(g)⁻¹.

In the special case W = C with the trivial G-action, we recover the construction of V*. Note also that the invariant vectors in the Hom space are precisely the G-linear maps from V to W. When V = W, this Hom space and its invariant part are algebras, but of course not in general.

6 Tensor products

We now construct the tensor product of two vector spaces V and W. For simplicity, we will take them to be finite-dimensional. We give two definitions. The first is more abstract, and contains all the information you need to work with tensor products, but gives little idea of the true size of the resulting space. The second is quite concrete, but requires a choice of basis, and in this way breaks any symmetry we might wish to exploit in the context of group actions.
6.1 Definition. The tensor product V ⊗_k W of two vector spaces V and W over k is the k-vector space based on elements v ⊗ w, labelled by pairs of vectors v ∈ V and w ∈ W, modulo the following relations, for all v, v1, v2 ∈ V, w, w1, w2 ∈ W and every scalar k:

    (v1 + v2) ⊗ w = v1 ⊗ w + v2 ⊗ w,
    v ⊗ (w1 + w2) = v ⊗ w1 + v ⊗ w2,
    (k · v) ⊗ w = v ⊗ (k · w) = k · (v ⊗ w).

In other words, V ⊗ W is the quotient of the vector space with basis {v ⊗ w} by the subspace spanned by the differences of left- and right-hand sides in each identity above. When the ground field is understood, it can be omitted from the notation.

6.2 Proposition. Assume dim V, dim W < ∞, and let {e1, ..., em} and {f1, ..., fn} be bases of V and W, resp. Then, {e_p ⊗ f_q}, with 1 ≤ p ≤ m, 1 ≤ q ≤ n, is a vector space basis of V ⊗ W.

Proof. First, we show that the vectors e_p ⊗ f_q span the tensor product V ⊗ W. Indeed, for any v ⊗ w, we can express the factors v and w linearly in terms of the basis elements. The tensor relations then express v ⊗ w as a linear combination of the e_p ⊗ f_q.

To check linear independence, we construct, for each pair (p, q), a linear functional L : V ⊗ W → k equal to 1 on e_p ⊗ f_q and vanishing on all other e_r ⊗ f_s. Let ε : V → k and ϕ : W → k be the linear maps which are equal to 1 on e_p, resp. f_q, and vanish on the other basis vectors. We let L(v ⊗ w) = ε(v) · ϕ(w), extended by linearity. We note by inspection that L takes equal values on the two sides of each of the identities in (6.1), and so it descends to a well-defined linear map on the quotient space V ⊗ W. This shows that no linear relation can hold among our vectors.

The tensor product behaves like a multiplication with respect to the direct sum:

6.3 Proposition. There are natural isomorphisms U ⊗ (V ⊗ W) ≅ (U ⊗ V) ⊗ W and (U ⊕ V) ⊗ W ≅ (U ⊗ W) ⊕ (V ⊗ W).

Proof. Send u ⊗ (v ⊗ w) to (u ⊗ v) ⊗ w for the first map, and (u, v) ⊗ w to (u ⊗ w, v ⊗ w) in the second; extend by linearity to the spaces, and check that the tensor relations (6.1) are preserved by the maps. "Naturality" means that no choice of basis is required to produce an isomorphism.

6.4 Definition. The tensor product of two representations ρ on V and σ on W of a group G is the representation ρ ⊗ σ on V ⊗ W defined by the condition

    (ρ ⊗ σ)(g)(v ⊗ w) = ρ(g)(v) ⊗ σ(g)(w),

extended to all vectors in V ⊗ W by linearity. By inspection, this construction preserves the linear relations in Def. 6.1, so it does indeed lead to a linear action of G on the tensor product.

We shall write this action in matrix form, but first let us note a connection between these constructions.

6.5 Proposition. If dim V, dim W < ∞, there is a natural isomorphism of vector spaces (preserving G-actions, if defined) from W ⊗ V* to Hom(V, W), defined by sending w ⊗ L ∈ W ⊗ V* to the rank one linear map v ↦ w · L(v), and extended to all of W ⊗ V* by linearity.

Proof. Clearly, this preserves the G-actions, so we just need to check the map is an isomorphism. Using a basis as in Proposition 6.2, the dual basis {e*_j} of V* and the basis {f_i ⊗ e*_j} of W ⊗ V*, we can check that the basis vector f_i ⊗ e*_j is sent to the elementary matrix E_ij; and the latter form a basis of all linear maps from V to W. This completes the proof.
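In coordinates, the isomorphism of Proposition 6.5 is easy to exhibit. A sketch (with V = W = C², an arbitrary choice, and vectors represented by coordinate lists):

```python
# The map of Proposition 6.5 in coordinates (a sketch): w (x) L goes to the
# rank-one matrix v -> w * L(v); the basis vector f_i (x) e*_j goes to the
# elementary matrix E_ij.
def rank_one(w, L):
    # w = coordinates of a vector of W in {f_i}; L = coordinates of a
    # functional on V in the dual basis {e*_j}. The matrix has entries w_i L_j.
    return [[wi * Lj for Lj in L] for wi in w]

# f_0 (x) e*_1 goes to E_01:
assert rank_one([1, 0], [0, 1]) == [[0, 1], [0, 0]]
# Bilinearity mirrors the tensor relations: (2w) (x) L = w (x) (2L).
assert rank_one([2, 4], [3, 5]) == rank_one([1, 2], [6, 10])
# A general 2x2 matrix is a sum of rank-one images, matching
# dim(W (x) V*) = dim Hom(V, W) = 4.
A = [[x + y for x, y in zip(row1, row2)]
     for row1, row2 in zip(rank_one([1, 0], [1, 2]), rank_one([0, 1], [3, 4]))]
assert A == [[1, 2], [3, 4]]
```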
(6.6) The tensor product of two linear maps

Consider two linear operators A ∈ End(V) and B ∈ End(W). We define an operator A ⊗ B ∈ End(V ⊗ W) by

    (A ⊗ B)(v ⊗ w) = (Av) ⊗ (Bw),

extending by linearity to all vectors in V ⊗ W.

Let us write the matrix form of A ⊗ B in the basis {e_i ⊗ f_k} of V ⊗ W. In bases {e_i}, {f_k} of V, W, our operators are given by matrices A_ij, 1 ≤ i, j ≤ m, and B_kl, 1 ≤ k, l ≤ n. Note that a basis index for V ⊗ W is a pair (i, k), with 1 ≤ i ≤ m, 1 ≤ k ≤ n. We have the simple formula

    (A ⊗ B)_{(i,k)(j,l)} = A_ij · B_kl,

which is checked by observing that the application of A ⊗ B to the basis vector e_j ⊗ f_l contains e_i ⊗ f_k with the advertised coefficient A_ij · B_kl. Ordering the index pairs (i, k) lexicographically, the matrix A ⊗ B acquires a decomposition into blocks of size n × n:

    A ⊗ B = [ A11·B  A12·B  ···  A1m·B
              A21·B  A22·B  ···  A2m·B
               ···    ···         ···
              Am1·B  Am2·B  ···  Amm·B ]

6.7 Proposition. Here are some useful properties of A ⊗ B:
(i) the eigenvalues of A ⊗ B are the products λ_i · µ_k of the eigenvalues λ_i of A and µ_k of B;
(ii) Tr(A ⊗ B) = Tr(A)Tr(B);
(iii) det(A ⊗ B) = det(A)^n det(B)^m.

Proof. Part (ii) follows from the matrix form:

    Tr(A ⊗ B) = Σ_{i,k} (A ⊗ B)_{(i,k)(i,k)} = Σ_{i,k} A_ii B_kk = Σ_i A_ii · Σ_k B_kk = Tr(A) · Tr(B),

proving part (ii). When A and B are diagonalisable, parts (i) and (iii) are seen as follows. Choose the {e_i}, {f_k} to be eigenbases for A, B. Then, A ⊗ B is also diagonal in the basis (6.2), with entries λ_i · µ_k, and the formulae (i) and (iii) apply. This deals with the generic case, because generic matrices are diagonalisable; for any matrix, we can choose a diagonalisable matrix arbitrarily closeby. If you dislike this argument, choose instead bases in which A and B are upper-triangular (this can always be done), and check that A ⊗ B is also upper-triangular in the tensor basis, with the λ_i · µ_k on the diagonal. The statement about eigenvalues follows from the determinant formula by considering the characteristic polynomial.

A concrete proof of (iii) can be given using row-reduction. Applied to A and B, this results in upper-triangular matrices S and T, whose diagonal entries multiply to give the two determinants. We can apply the row-reduction procedure for A treating the blocks in the matrix above as entries; this results in the matrix [S_ij · B]. Applying now the row-reduction procedure for B to each block of n consecutive rows results in the matrix [S_ij · T], which is S ⊗ T. Its determinant is

    Π_{i,k} S_ii T_kk = (det S)^n · (det T)^m,

proving part (iii).

6.8 Remark. The formula for A ⊗ B permits us to write the matrix form of the tensor product of two representations. However, the matrices involved are often too large to give any insight.

7 Examples and complements

In the following example, we take a closer look at the tensor square V⊗² := W ⊗ W of the irreducible 2-dimensional representation of D6; we will see that it decomposes into the three distinct irreducibles listed in Lecture 5.
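The trace and determinant identities of Proposition 6.7 admit a quick numerical check. The following sketch (with arbitrarily chosen small matrices, not from the notes) implements the block formula for A ⊗ B directly:

```python
# The block (Kronecker) matrix of A (x) B, with the trace and determinant
# identities of Proposition 6.7 checked on a small example (a sketch).
def kron(A, B):
    m, n = len(A), len(B)
    # Row index (i, k) and column index (j, l), ordered lexicographically.
    return [[A[i][j] * B[k][l] for j in range(m) for l in range(n)]
            for i in range(m) for k in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def det(A):
    # Laplace expansion along the first row (fine for small matrices).
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

A = [[1, 2], [3, 4]]          # m = 2, trace 5, det -2
B = [[0, 1], [1, 1]]          # n = 2, trace 1, det -1

AB = kron(A, B)
assert len(AB) == 4
assert trace(AB) == trace(A) * trace(B)          # part (ii)
assert det(AB) == det(A) ** 2 * det(B) ** 2      # part (iii): det(A)^n det(B)^m
```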
(7.1) Example: Decomposing the tensor square of a representation

Let G = D6, with the three irreps 1, S, W listed in Lecture 5. It is easy to check that 1 ⊗ S = S and S ⊗ S = 1, so the interesting tensor square is W ⊗ W.

To study W ⊗ W, note that under the action of the cyclic group C3 generated by g, W decomposes as a sum L1 ⊕ L2 of eigenlines. We also call L0 the line with trivial action of C3. As C3-reps, L1 ⊗ L1 ≅ L2, L2 ⊗ L2 ≅ L1, L1 ⊗ L2 ≅ L0. We get a C3-isomorphism

    W ⊗ W = (L1 ⊗ L1) ⊕ (L2 ⊗ L2) ⊕ (L1 ⊗ L2) ⊕ (L2 ⊗ L1),

so our decomposition is isomorphic, as C3-reps, to L2 ⊕ L1 ⊕ L0 ⊕ L0. We can also say something about r: since it swaps L1 and L2 in each factor of the tensor product, it must swap L1 ⊗ L1 with L2 ⊗ L2, and also the two factors of L0 ⊕ L0 among themselves.

We suspect that the first two lines add up to an isomorphic copy of W. To confirm this, choose a basis vector e1 in L1 ⊗ L1, and let e2 = ρ⊗²(r)(e1) ∈ L2 ⊗ L2. Under the action of g, we see that, in this basis of L2 ⊕ L1, G acts by the same matrices as on W, as claimed.

On the last two summands, g acts trivially, but r swaps the two lines. Choosing a basis of the form {f, ρ⊗²(r)(f)} leads again to the matrix expression ρ⊗²(r) = [ 0 1 ; 1 0 ]. Because r² = 1, this diagonalises to diag[1, −1]. So we recognise here the sum of the trivial representation 1 and the sign representation S of S3.

All in all, we have W ⊗ W ≅ W ⊕ 1 ⊕ S. The appearance of the 'new' irreducible sign representation shows that there cannot be a straightforward universal formula for the decomposition of tensor products of irreducibles. One substantial accomplishment of the theory of characters, which we will study next, is a simple calculus which permits the decomposition of representations into irreducibles.

For your amusement, here is the matrix form of ρ ⊗ ρ, in a basis matching the decomposition into lines given earlier (note: this is not the lexicographically ordered basis of C² ⊗ C²):

    ρ⊗²(g) = diag[ω², ω, 1, 1],    ρ⊗²(r) = [ 0 1 0 0 ; 1 0 0 0 ; 0 0 0 1 ; 0 0 1 0 ].

(7.2) The symmetric group action on tensor powers

There was no possibility of W ⊗ W being irreducible: switching the two factors in the product gives an action of C2 which commutes with D6, and the ±1 eigenspaces of the action must then be invariant under D6. The +1-eigenspace, the part of W ⊗ W invariant under the switch, is called the symmetric square, and the (−1)-part is the exterior square. These are special instances of the following more general decomposition.

More generally, let V be a representation of G, ρ : G → GL(V), and let V⊗n := V ⊗ ··· ⊗ V (n times). Any n-tuple of vectors u1, ..., un ∈ V defines a vector u1 ⊗ ··· ⊗ un ∈ V⊗n. If v1, ..., vm is a basis of V, then a basis of V⊗n is the collection of vectors v_{k1} ⊗ ··· ⊗ v_{kn}, where the indexes k1, ..., kn range over {1, ..., m}^n. So V⊗n has dimension m^n. A general vector in V⊗n is a linear combination of these.

7.3 Proposition. The symmetric group Sn acts on V⊗n by permuting the factors: σ ∈ Sn sends u1 ⊗ ··· ⊗ un to u_{σ(1)} ⊗ ··· ⊗ u_{σ(n)}.

Proof. This prescription defines a permutation action on our standard basis vectors, which extends by linearity to the entire space. By expressing a general u1 ⊗ ··· ⊗ un in terms of the standard basis vectors, we can check that σ maps it to u_{σ(1)} ⊗ ··· ⊗ u_{σ(n)}.
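Anticipating the character theory of Section 8 (on a tensor product, traces multiply), the decomposition of W ⊗ W can be confirmed by pure bookkeeping with the character values of the D6 irreducibles. A sketch, not from the notes:

```python
# Character bookkeeping for W (x) W (a sketch): traces multiply on tensor
# products, and the numbers match the decomposition W (x) W = W + 1 + S
# on representatives of the three conjugacy classes of D6 = S3.
import cmath

w = cmath.exp(2j * cmath.pi / 3)
chi_W = {'e': 2, 'g': w + w ** 2, 'r': 0}        # traces of rho on W
chi_1 = {'e': 1, 'g': 1, 'r': 1}                 # trivial rep
chi_S = {'e': 1, 'g': 1, 'r': -1}                # sign rep

for c in ['e', 'g', 'r']:
    lhs = chi_W[c] * chi_W[c]                    # character of W (x) W
    rhs = chi_W[c] + chi_1[c] + chi_S[c]
    assert abs(lhs - rhs) < 1e-9
```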
Recall two familiar representations of the symmetric group Sn: the trivial 1-dimensional representation 1, and the sign representation ε : Sn → {±1} (defined by declaring that every transposition goes to −1).

7.4 Proposition. Setting ρ⊗n(g)(u1 ⊗ ··· ⊗ un) = ρ(g)(u1) ⊗ ··· ⊗ ρ(g)(un), and extending by linearity, defines an action of G on V⊗n, which commutes with the action of Sn.

Proof. We can see that we get an action from the first (abstract) definition of the tensor product: ρ⊗n(g) certainly defines an action on the (huge) vector space based on all the symbols u1 ⊗ ··· ⊗ un, and this action preserves the linear relations defining V⊗n, so it descends to a well-defined linear operator on the latter. Finally, commutation with Sn is clear, as applying ρ⊗n(g) before or after a permutation of the factors has the same effect.

7.5 Corollary. Every Sn-isotypical component of V⊗n is a G-subrepresentation.

Proof. Indeed, since ρ⊗n(g) commutes with Sn, it must preserve the canonical decomposition of V⊗n under Sn (proved in Lecture 4).

The two blocks of V⊗n corresponding to 1 and ε are called Sym^n V and Λ^n V: the symmetric powers, and the exterior or alternating powers of V. They are also called the invariant/anti-invariant subspaces, or the symmetric/antisymmetric parts of V⊗n.

7.6 Proposition. Let P± be the following linear self-maps on V⊗n:

    u1 ⊗ ··· ⊗ un ↦ (1/n!) Σ_{σ∈Sn} u_{σ(1)} ⊗ ··· ⊗ u_{σ(n)}          for P+,
    u1 ⊗ ··· ⊗ un ↦ (1/n!) Σ_{σ∈Sn} ε(σ) u_{σ(1)} ⊗ ··· ⊗ u_{σ(n)}     for P−.

Then:
• Im(P±) ⊂ V⊗n belongs to the invariant/anti-invariant subspace;
• P± acts as the identity on invariant/anti-invariant vectors;
• P±² = P±;
and so the two operators are the projections onto Sym^n V and Λ^n V.

Proof. The first part is easy to see: we are averaging the transforms of a vector under the symmetric group action, so the result is clearly invariant (or anti-invariant, when the sign is inserted). The second part is obvious, and the third follows from the first two.

7.7 Proposition (Basis for Sym^n V, Λ^n V). If {v1, ..., vm} is a basis for V, then a basis for Sym^n V is

    { (1/n!) Σ_{σ∈Sn} v_{k_σ(1)} ⊗ ··· ⊗ v_{k_σ(n)} },  as 1 ≤ k1 ≤ ··· ≤ kn ≤ m,

and a basis for Λ^n V is

    { (1/n!) Σ_{σ∈Sn} ε(σ) v_{k_σ(1)} ⊗ ··· ⊗ v_{k_σ(n)} },  as 1 ≤ k1 < ··· < kn ≤ m.

Proof. (Sketch) The previous proposition and our basis for V⊗n make it clear that the vectors above span Sym^n and Λ^n: indeed, an out-of-order collection of k_i can be ordered by a permutation, resulting in the same basis vector (up to a sign, in the alternating case); also, in the alternating case, the sum is zero as soon as two indexes have equal values. So, if we range over all choices of k_i, the out-of-order indexes merely repeat some of the basis vectors. Finally, distinct sets of indexes k_i lead to linear combinations of disjoint sets of standard basis vectors in V⊗n, which shows the linear independence of our collections.
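The projections P± can be written out explicitly for n = 2. A sketch (not from the notes; dim V = 3 is an arbitrary choice, and exact fractions keep the checks deterministic) verifying that P± are complementary projections whose traces give the dimensions of Sym² V and Λ² V:

```python
# The projections P+- of Proposition 7.6 on V (x) V for dim V = m = 3
# (a sketch): check idempotence and read off dim Sym^2 V and dim L^2 V as
# traces (the trace of a projection is the dimension of its image).
from fractions import Fraction

m = 3
idx = [(i, j) for i in range(m) for j in range(m)]     # basis e_i (x) e_j

def proj(sign):
    # P+-(e_a (x) e_b) = (1/2)(e_a (x) e_b +- e_b (x) e_a)
    P = {}
    for r in idx:
        for c in idx:
            val = Fraction(0)
            if r == c:
                val += Fraction(1, 2)
            if r == (c[1], c[0]):
                val += sign * Fraction(1, 2)
            P[(r, c)] = val
    return P

def matmul(P, Q):
    return {(r, c): sum(P[(r, k)] * Q[(k, c)] for k in idx)
            for r in idx for c in idx}

Pp, Pm = proj(1), proj(-1)
assert matmul(Pp, Pp) == Pp and matmul(Pm, Pm) == Pm        # projections
assert all(v == 0 for v in matmul(Pp, Pm).values())         # complementary

tr = lambda P: sum(P[(r, r)] for r in idx)
assert tr(Pp) == Fraction(m * (m + 1), 2)                   # dim Sym^2 V = 6
assert tr(Pm) == Fraction(m * (m - 1), 2)                   # dim L^2 V = 3
```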
7.8 Corollary. Λ^n V = 0 if n > m. More generally, dim Λ^m V = 1 and dim Λ^k V = (m choose k), the binomial coefficient.

Notation: In the symmetric power, we often use u1 · ... · un to denote the symmetrised vectors of Prop. 7.6, while in the exterior power, one uses u1 ∧ ... ∧ un.
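The bases of Proposition 7.7 also give the dimension counts directly: weakly increasing index sequences for Sym^n V, strictly increasing ones for Λ^n V. A sketch (the closed form for the symmetric count is a standard fact, not stated in the text):

```python
# Dimension counts from Proposition 7.7 / Corollary 7.8 (a sketch): bases of
# Sym^n V and L^n V are indexed by weakly / strictly increasing index
# sequences, so the dimensions are binomial coefficients.
from itertools import combinations, combinations_with_replacement
from math import comb

m, n = 4, 2
dim_sym = sum(1 for _ in combinations_with_replacement(range(m), n))
dim_alt = sum(1 for _ in combinations(range(m), n))

assert dim_sym == comb(m + n - 1, n) == 10      # dim Sym^2 V for m = 4
assert dim_alt == comb(m, n) == 6               # dim L^2 V = (m choose 2)
assert sum(1 for _ in combinations(range(m), m + 1)) == 0   # L^n V = 0, n > m
```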
8 The character of a representation

We now turn to the core results of the course, the character theory of group representations. This allows an effective calculus with group representations, including their tensor products, and their decomposition into irreducibles.

We want to attach invariants to a representation ρ of a finite group G on V. The matrix coefficients of ρ(g) are basis-dependent, hence not true invariants. Observe, however, that g generates a finite cyclic subgroup of G; this implies the following (see Lecture 2).

8.1 Proposition. If |G| and dim V are finite, then every ρ(g) is diagonalisable. More precisely, all eigenvalues of ρ(g) will be roots of unity of orders dividing that of G. (Apply Lagrange's theorem to the cyclic subgroup generated by g.)

To each g ∈ G, we can assign its eigenvalues on V, up to order. Numerical invariants result from the elementary symmetric functions of these eigenvalues, which you also know as the coefficients of the characteristic polynomial det_V[λ − ρ(g)]. Especially meaningful are
• the constant term, det ρ(g) (up to sign);
• the subleading term, Tr ρ(g) (up to sign).

The following is clear from multiplicativity of the determinant:

8.2 Proposition. The map g ↦ det_V ρ(g) ∈ C defines a 1-dimensional representation of G.

This is a nice, but "weak" invariant. For instance, you may know that the alternating group A5 is simple, that is, it has no proper normal subgroups. Because it is not abelian, any homomorphism from it to the abelian group C× must be trivial, so the determinant of any of its representations is 1. Even for abelian groups, the determinant is too weak to distinguish isomorphism classes of representations. The winner turns out to be the other interesting invariant.

8.3 Definition. The character of the representation ρ is the complex-valued function on G defined by χ_ρ(g) := Tr_V(ρ(g)).

The Big Theorem of the course is that this is a complete invariant, in the sense that it determines ρ up to isomorphism.
For a representation ρ : G → GL(V), we have defined χV : G → C by χV(g) = Tr_V(ρ(g)).

8.4 Theorem (First properties).
1. χV is conjugation-invariant: χV(hgh⁻¹) = χV(g), ∀g, h ∈ G;
2. χV(1) = dim V;
3. χV(g⁻¹) = \overline{χV(g)};
4. for two representations V, W: χ_{V⊕W} = χV + χW, and χ_{V⊗W} = χV · χW;
5. for the dual representation V*: χ_{V*}(g) = χV(g⁻¹).
Proof. Parts (1) and (2) are clear. For part (3), choose an invariant inner product on V; unitarity of ρ(g) implies that ρ(g⁻¹) = ρ(g)⁻¹ = ρ(g)†, whence (3) follows by taking the trace.⁴ Part (4) is clear from

    Tr [A 0; 0 B] = Tr A + Tr B,   and   ρ_{V⊕W} = [ρV 0; 0 ρW].

The product formula follows from the identity Tr(A ⊗ B) = Tr A · Tr B from Lecture 6. Finally, part (5) follows from the fact that the action of g on a linear map L ∈ Hom(V; C) = V* is composition with ρ(g)⁻¹ (Lecture 5).

8.5 Remark. Conjugation-invariant functions on G are also called central functions or class functions. Their value at a group element g depends only on the conjugacy class of g, so we can view them as functions on the set of conjugacy classes.

(8.6) The representation ring

We can rewrite property (4) more professionally by introducing an algebraic construct.

8.7 Definition. The representation ring RG of the finite group G is the free abelian group based on the set of isomorphism classes of irreducible representations of G, with multiplication reflecting the tensor product decomposition of irreducibles:

    [V] · [W] = Σ_k n_k · [V_k]   iff   V ⊗ W ≅ ⊕_k V_k^{⊕n_k},

where we write [V] for the isomorphism class of an irreducible representation V, and the tensor product V ⊗ W has been decomposed into irreducibles V_k, with multiplicities n_k.

Thus defined, the representation ring is associative and commutative, with identity the trivial 1-dimensional representation: [C] · [W] = [W] for any W, because C ⊗ W ≅ W.

Example: G = Cn, the cyclic group of order n. Our irreducibles are L0, …, L_{n−1}, where the generator of Cn acts on L_k as exp(2πik/n). We have L_p ⊗ L_q ≅ L_{p+q}, subscripts being taken modulo n. It follows that RCn ≅ Z[X]/(X^n − 1), identifying L_k with X^k.

The professional restatement of property (4) in Theorem 8.4 is the following.

8.8 Proposition. The character is a ring homomorphism from RG to the class functions on G. It takes the involution V ↦ V* to complex conjugation.

Later, we shall return to this homomorphism and list some other good properties.

(8.9) Orthogonality of characters

For two complex functions ϕ, ψ on G, we let

    ⟨ϕ|ψ⟩ := (1/|G|) Σ_{g∈G} \overline{ϕ(g)} · ψ(g).

In particular, this defines an inner product on class functions ϕ, ψ, for which we may instead sum over conjugacy classes C ⊂ G:

    ⟨ϕ|ψ⟩ = (1/|G|) Σ_C |C| · \overline{ϕ(C)} ψ(C).

We are now ready for the first part of our main theorem. There will be a complement in the next lecture.
⁴ For an alternative argument, recall that the eigenvalues of ρ(g) are roots of unity, and those of ρ(g⁻¹) are their inverses; but these are also their complex conjugates. Recall now that the trace is the sum of the eigenvalues.
8.10 Theorem (Orthogonality of characters).
1. If V is irreducible, then ‖χV‖² = 1.
2. If V, W are irreducible and not isomorphic, then ⟨χV|χW⟩ = 0.

Before proving the theorem, let us list some consequences to illustrate its power.

8.11 Corollary. The number of times an irreducible representation V appears in an irreducible decomposition of some W is ⟨χV|χW⟩.

8.12 Corollary. The above number (called the multiplicity of V in W) is independent of the irreducible decomposition. (We had already proved this in Lecture 4, but we now have a second proof.)

8.13 Corollary. Two representations are isomorphic iff they have the same character. (Use complete reducibility and the first corollary above.)

8.14 Corollary. The multiplicity of the trivial representation in W is (1/|G|) Σ_{g∈G} χW(g).

8.15 Corollary (Irreducibility criterion). V is irreducible iff ‖χV‖² = 1.

Proof. Decompose V into irreducibles as ⊕_k V_k^{⊕n_k}; then χV = Σ_k n_k · χ_k, and ‖χV‖² = Σ_k n_k². So ‖χV‖² = 1 iff all the n_k vanish but one, whose value must be 1.
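As a sketch (the example group and labels are mine, not from the notes), we can verify the orthogonality relations and the irreducibility criterion numerically for the three irreducible characters of S3; the characters here are real, so conjugation in the inner product is omitted.

```python
from itertools import permutations

# G = S3, as tuples; |G| = 6.
group = list(permutations(range(3)))

def sign(sigma):
    # Sign of a permutation via inversion count.
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if sigma[i] > sigma[j]:
                s = -s
    return s

chi_triv = {g: 1 for g in group}
chi_sign = {g: sign(g) for g in group}
# Character of the standard 2-dim irreducible: fixed points minus 1.
chi_std = {g: sum(1 for j in range(3) if g[j] == j) - 1 for g in group}

def inner(phi, psi):
    # <phi|psi> = (1/|G|) sum_g phi(g) psi(g)  (real-valued characters).
    return sum(phi[g] * psi[g] for g in group) / len(group)
```

The irreducibility criterion ‖χ‖² = 1 holds for each row, and distinct rows are orthogonal.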
In preparation for the proof of orthogonality, we establish the following Lemma. For two representations V, W of G and any linear map φ : V → W, define

    φ0 = (1/|G|) Σ_{g∈G} ρW(g) ∘ φ ∘ ρV(g)⁻¹.

8.16 Lemma.
1. φ0 intertwines the actions of G.
2. If V and W are irreducible and not isomorphic, then φ0 = 0.
3. If V = W and ρV = ρW, then φ0 = (Tr φ / dim V) · Id.

Proof. For part (1), we just examine the result of conjugating by h ∈ G:

    ρW(h) ∘ φ0 ∘ ρV(h)⁻¹ = (1/|G|) Σ_{g∈G} ρW(h)ρW(g) ∘ φ ∘ ρV(g)⁻¹ρV(h)⁻¹
                        = (1/|G|) Σ_{g∈G} ρW(hg) ∘ φ ∘ ρV(hg)⁻¹
                        = (1/|G|) Σ_{g∈G} ρW(g) ∘ φ ∘ ρV(g)⁻¹ = φ0.

Part (2) now follows from Schur's lemma. For part (3), Schur's lemma again tells us that φ0 is a scalar; to find it, it suffices to take the trace over V: the scalar is Tr(φ0)/dim V. But we have

    Tr(φ0) = (1/|G|) Σ_{g∈G} Tr(ρV(g) ∘ φ ∘ ρV(g)⁻¹) = Tr(φ),

as claimed.
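A small numerical check of the Lemma (an illustration of mine, not part of the notes): averaging an arbitrary map over the 2-dimensional irreducible representation of S3, realised concretely by the rotations and reflections of a triangle, must produce (Tr φ / dim V) · Id.

```python
import numpy as np

# The 2-dim irreducible representation of S3, as symmetries of a triangle.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -s], [s, c]])          # rotation by 120 degrees
f = np.array([[1.0, 0.0], [0.0, -1.0]])  # a reflection
I = np.eye(2)
group = [I, r, r @ r, f, f @ r, f @ r @ r]  # all 6 elements

# Average an arbitrary linear map phi over the group (Lemma 8.16 with V = W):
phi = np.array([[1.0, 2.0], [3.0, 4.0]])
phi0 = sum(g @ phi @ np.linalg.inv(g) for g in group) / len(group)

# Schur forces phi0 = (Tr phi / dim V) * Id = 2.5 * Id here.
```

The averaged map also commutes with every ρ(g), verifying part (1) of the Lemma.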
Proof of orthogonality. Choose invariant inner products on V, W and orthonormal bases {v_i} and {w_j} for the two spaces; when V = W we shall use the same inner product and basis in both. We shall use the "bra–ket" vector notation explained in the handout. Recall first that |w_i⟩⟨v_j| designates the linear map V → W which sends a vector v to the vector w_i · ⟨v_j|v⟩. Then we have

    ⟨χW|χV⟩ = (1/|G|) Σ_{g∈G} \overline{χW(g)} χV(g) = (1/|G|) Σ_{g∈G} χW(g⁻¹) χV(g)
            = (1/|G|) Σ_{i,j} Σ_{g∈G} ⟨w_i|ρW(g)⁻¹|w_i⟩ · ⟨v_j|ρV(g)|v_j⟩.        (8.17)

We now interpret each summand as follows. The product ⟨w_i|ρW(g)⁻¹|w_i⟩⟨v_j|ρV(g)|v_j⟩ is interpretable as the result of applying the linear map ρW(g)⁻¹ ∘ |w_i⟩⟨v_j| ∘ ρV(g) to the vector v_j, and then taking the dot product with w_i.

Fix now i and j and sum over g ∈ G. Lemma 8.16 then shows that the sum of linear maps

    (1/|G|) Σ_{g∈G} ρW(g)⁻¹ ∘ |w_i⟩⟨v_j| ∘ ρV(g) = 0 if V ≇ W,
                                                  = (Tr(|w_i⟩⟨v_j|) / dim V) · Id if ρV = ρW.

In the first case, summing over i, j still leads to zero, and we have proved that χV ⊥ χW. In the second case, Tr(|w_i⟩⟨v_j|) = ⟨v_j|w_i⟩, which is 1 if i = j and 0 otherwise. Hence, summing over i, j leads to a factor of dim V, cancelling the denominator in the second line of (8.17) and completing our proof.

8.18 Remark. The inner product of characters can be defined over any ground field k of characteristic not dividing |G|, by using χ(g⁻¹) in lieu of \overline{χ(g)}. We have used Schur's Lemma over C in establishing Part (3) of Lemma 8.16, although not for Parts 1 and 2. Hence, the orthogonality of characters of non-isomorphic irreducibles holds over any such field k, but orthonormality ‖χ‖² = 1 can fail if k is not algebraically closed. The value of the square depends on the decomposition of the representation in the algebraic closure k̄. This can be determined from the division algebra DV = End_k^G(V) and the Galois theory of its centre, which will be a finite extension field of k. See, for instance, Serre's book for more detail.

(8.19) The representation ring again

We defined the representation ring RG of G as the free abelian group based on the isomorphism classes of irreducible representations of G. Using complete reducibility, we can identify the linear combinations of these basis elements with non-negative coefficients with isomorphism classes of G-representations: we simply send a representation, decomposed into irreducibles as ⊕_i V_i^{⊕n_i}, to the linear combination Σ_i n_i [V_i] ∈ RG (the bracket denotes the isomorphism class). General elements of RG are sometimes called virtual representations of G.

We defined the multiplication in RG using the tensor product of representations; we sometimes denote the product by ⊗, to remember its origin. The one-dimensional trivial representation 1 is the unit. We then defined the character χV of a representation ρ on V to be the complex-valued function on G with χV(g) = Tr_V ρ(g), the operator trace in the representation ρ. This function is conjugation-invariant, i.e. a class function on G. We denote the space of complex class functions on G by C[G]^G; thus, χ becomes a map χ : RG → C[G]^G. We then checked that:

• χ_{V⊕W} = χV + χW and χ_{V⊗W} = χV · χW; thus, χ : RG → C[G]^G is a ring homomorphism.
• Duality corresponds to complex conjugation: χ(V*) = \overline{χV}.
• The linear functional Inv : RG → Z, sending a virtual representation to the multiplicity (= coefficient) of the trivial representation 1, corresponds to averaging over G:

    Inv(V) = (1/|G|) Σ_{g∈G} χV(g).

• We can define a positive definite inner product on RG by ⟨V|W⟩ := Inv(V* ⊗ W). This corresponds to the obvious inner product on characters, ⟨χV|χW⟩ = (1/|G|) Σ_{g∈G} \overline{χV(g)} χW(g).

8.20 Remark. The structure detailed above makes RG a Frobenius ring with involution. Similarly, C[G]^G is a complex Frobenius algebra with involution, and we are saying that the character χ is a homomorphism of such structures. It fails to be an isomorphism "only in the obvious way": the two rings share a basis of irreducible representations (resp. characters), but the RG coefficients are integral, not complex. (In a ring without involution, we would just require that the bilinear pairing V × W → Inv(V ⊗ W) should be non-degenerate.)

9 The Regular Representation

Recall that the regular representation of G has basis {e_g}, labelled by g ∈ G; elements h ∈ G act ("on the left") by λ(h)(e_g) = e_{hg}. This defines a linear action on the span of the e_g. We can identify the span of the e_g with the space C[G] of complex functions on G, by matching e_g with the function sending g to 1 ∈ C and all other group elements to zero. Under this correspondence, h ∈ G acts by sending the function ϕ to the function λ(h)(ϕ)(g) := ϕ(h⁻¹g).

We can see that the character of the regular representation is

    χreg(g) = |G| if g = 1, and 0 otherwise.

9.1 Proposition. The multiplicity of any irreducible representation in the regular representation equals its dimension. In particular, every irreducible representation appears in the regular representation.

Proof. ⟨χV|χreg⟩ = (1/|G|) · \overline{χV(1)} · χreg(1) = \overline{χV(1)} = dim V, as desired.

For instance, the trivial representation appears exactly once — which we can also confirm directly: the only translation-invariant functions on G are constant. So the regular representation is far from irreducible, as the following proposition shows.

9.2 Proposition. Σ_V dim² V = |G|, the sum ranging over all irreducible isomorphism classes.

Proof. From χreg = Σ_V dim V · χV we get ‖χreg‖² = Σ_V dim² V; but directly, ‖χreg‖² = |G|²/|G| = |G|.

We now proceed to complete our main orthogonality theorem.

9.3 Theorem (Completeness of characters). The irreducible characters form a basis for the space of complex class functions; since we already know they are orthonormal, they form an orthonormal basis for that space.

The proof consists in showing that if a class function ϕ is orthogonal to every irreducible character, then its value at every group element is zero. For this, we need the following.
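For instance (a sketch of mine with G = Z/4, chosen for illustration), one can tabulate χreg directly from the translation matrices λ(h):

```python
# Regular representation of G = Z/4: lambda(h) sends e_g to e_{h+g}.
# Its character is |G| at the identity and 0 elsewhere.
n = 4

def lam(h):
    # Matrix of lambda(h) in the basis {e_0, ..., e_{n-1}}.
    return [[1 if (h + g) % n == r else 0 for g in range(n)] for r in range(n)]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

chi_reg = [trace(lam(h)) for h in range(n)]
```

Here ‖χreg‖² = |G|²/|G| = 4, consistent with Σ dim² V = 4 (the four 1-dimensional irreducibles of Z/4).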
Construction. To every function ϕ : G → C, and to every representation (V, ρ) of G, we assign the following linear self-map ρ(ϕ) of V:

    ρ(ϕ) = Σ_{g∈G} ϕ(g) ρ(g).

For example, in the regular representation, ρ(ϕ)(e_1) = Σ_{g∈G} ϕ(g) e_g. We note also that ρ(h)ρ(ϕ) = ρ(λ(h)ϕ).

9.4 Lemma. ϕ is a class function iff ρ(ϕ) commutes with G, in every representation V.

Proof. If ϕ is a class function, then, by applying ρ(h) (h ∈ G) and relabelling the sum,

    ρ(h)ρ(ϕ)ρ(h⁻¹) = Σ_{g∈G} ϕ(g)ρ(hgh⁻¹) = Σ_{k∈G} ϕ(h⁻¹kh)ρ(k) = Σ_{k∈G} ϕ(k)ρ(k) = ρ(ϕ).

In the other direction, let V be the regular representation. If ρ(ϕ) commutes with G, then

    λ(h)ρ(ϕ)(e_1) = Σ_{g∈G} ϕ(g) e_{hg} = Σ_{g∈G} ϕ(h⁻¹g) e_g,   whereas
    ρ(ϕ)λ(h)(e_1) = ρ(ϕ)(e_h) = Σ_{g∈G} ϕ(g) e_{gh} = Σ_{g∈G} ϕ(gh⁻¹) e_g;

equating the two shows that ϕ(h⁻¹g) = ϕ(gh⁻¹), that is, ϕ(h⁻¹gh) = ϕ(g) for all g, h, so ϕ is a class function.

When ϕ is a class function and V is irreducible, Schur's Lemma and the previous result show that ρ(ϕ) is a scalar. To find it, we compute the trace:

    Tr[ρ(ϕ)] = Σ_{g∈G} ϕ(g) χV(g) = |G| · ⟨χ_{V*}|ϕ⟩

(recall that \overline{χV} = χ_{V*}). We obtain that, when V is irreducible,

    ρ(ϕ) = (|G| · ⟨χ_{V*}|ϕ⟩ / dim V) · Id.        (9.5)

Proof of Completeness. Assume that the class function ϕ is orthogonal to all irreducible characters. By (9.5), ρ(ϕ) = 0 in every irreducible representation V, hence (by complete reducibility) also in the regular representation. But, as we saw in the proof of Lemma 9.4, ρ(ϕ)(e_1) = Σ_{g∈G} ϕ(g) e_g in the regular representation, so ϕ = 0, as desired.

(9.6) The Group algebra

We now study the regular representation in greater depth. Define a multiplication on C[G] by setting e_g · e_h = e_{gh} on the basis elements and extending by linearity:

    (Σ_g ϕ_g e_g) · (Σ_h ψ_h e_h) = Σ_{g,h} ϕ_g ψ_h e_{gh} = Σ_g (Σ_h ϕ_{gh⁻¹} ψ_h) e_g.

This makes C[G] into a C-algebra, with the obvious multiplication. This multiplication is associative and distributive for addition (associativity follows from the same property in the group), but it is not commutative unless G was so. It contains e_1 as the multiplicative identity, and the copy C·e_1 of C commutes with everything. The group G embeds into the group of multiplicatively invertible elements of C[G] by g ↦ e_g. For this reason, we will often replace e_g by g in our notation, and think of elements of C[G] as linear combinations of group elements.
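A minimal sketch of this multiplication for the cyclic group Z/5 (the example and names are mine): multiplying two elements of C[G] is exactly convolution of their coefficient vectors.

```python
# Group algebra C[G] for G = Z/5: e_g * e_h = e_{g+h}, extended bilinearly.
# Elements are coefficient vectors; the product is cyclic convolution.
n = 5

def convolve(phi, psi):
    out = [0] * n
    for g in range(n):
        for h in range(n):
            out[(g + h) % n] += phi[g] * psi[h]
    return out

e0 = [1, 0, 0, 0, 0]   # e_1 of the notes: the group unit, here the residue 0
a = [1, 2, 0, 0, 0]    # e_0 + 2 e_1
b = [0, 0, 3, 0, 1]    # 3 e_2 + e_4
```

For an abelian group the algebra is commutative; for a non-abelian G the same construction gives a non-commutative algebra.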
9.7 Proposition. There is a natural bijection between modules over C[G] and complex representations of G.

Proof. On each representation (V, ρ) of G, we let the group algebra act in the obvious way:

    ρ(Σ_g ϕ_g · g)(v) = Σ_g ϕ_g ρ(g)(v).

Conversely, given a C[G]-module M, the action of C·e_1 makes it into a complex vector space, and a group action is defined by embedding G inside C[G] as explained above. (Homework: do this!)

We can reformulate the discussion above (perhaps more obscurely) by saying that a group homomorphism ρ : G → GL(V) extends naturally to an algebra homomorphism from C[G] to End(V); conversely, any such homomorphism defines by restriction a representation of G. Indeed, V is a module over End(V), and the extended homomorphism makes it into a C[G]-module.

Choosing a complete list (up to isomorphism) of irreducible representations V of G gives a homomorphism of algebras

    ⊕ρV : C[G] → ⊕_V End(V),   ϕ ↦ ⊕(ρV(ϕ)).        (9.8)

There are also two commuting actions of G on C[G], by multiplication on the left and on the right, respectively: (λ(h)ϕ)(g) = ϕ(h⁻¹g), (ρ(h)ϕ)(g) = ϕ(gh). Each space End(V) also carries two commuting actions of G: left composition with ρV(g), and right composition with ρV(g)⁻¹. It is clear from our construction that the two actions of G × G on the left and right sides of (9.8) correspond, hence our algebra homomorphism is also a map of G × G-representations.

The main result of this subsection is the following much more precise version of Proposition 9.1.

9.9 Theorem. The homomorphism (9.8) is an isomorphism of algebras, and hence of G × G-representations. In particular, we have an isomorphism of G × G-representations

    C[G] ≅ ⊕_V V ⊗ V*,

with the sum ranging over the irreducible representations of G. (Recall that End(V) is isomorphic to V ⊗ V*, and G acts separately on the two factors.)

For the proof, we require the following lemma; as before, the proof uses the orthonormality of characters.

9.10 Lemma. Let V, W be complex irreducible representations of the finite groups G and H, respectively. Then V ⊗ W is an irreducible representation of G × H. Moreover, every irrep of G × H is of that form.

The reader should be cautioned that the statement can fail if the field is not algebraically closed.

Proof. Direct calculation of the square norm shows that the character χV(g) · χW(h) of G × H is irreducible. I claim that these characters span the class functions on G × H; every irrep of G × H then has one of them as its character, and so is of the asserted form. Indeed, conjugacy classes in G × H are Cartesian products of classes in the factors, and we can write every indicator class function in G and in H as a linear combination of characters; so we can write every indicator class function on G × H as a linear combination of the χV × χW.
Proof of Theorem 9.9. The G × G-structure will be essential for this. Note from our Lemma that the decomposition S := ⊕_V V ⊗ V* splits the sum S into pairwise non-isomorphic, irreducible representations of G × G: it follows from Lemma 9.10 that each V ⊗ V* is G × G-irreducible, and no two such reps for distinct V can be isomorphic — restricted to pairs (g, 1) ∈ G × G, the character reduces to (a multiple of) that of V.

From the results in Lecture 4, we know that any G × G-map into S must be block-diagonal for the isotypical decompositions. Since the summands are irreducible, it suffices to show that our map ⊕ρV is surjective; indeed, if a map surjects onto each factor separately, it surjects onto the sum S. It suffices then to show that each projection to End(V) is non-zero. But the group identity e_1 maps to the identity in each End(V).

9.11 Remark. (i) It can be shown that the inverse map to ⊕ρV assigns to φ ∈ End(V) the function g ↦ dim V · Tr_V(ρV(g⁻¹)φ)/|G|, from which the original map can then be recovered by linearity. (Optional Homework: do this!)
(ii) Schur's Lemma ensures that the invariants on the right-hand side, for the action of the diagonal copy G ⊂ G × G, are the multiples of the identity in each summand. On the left side, we get the conjugation-invariant functions, or class functions.

10 The character table

In view of our theorem about completeness and orthogonality of characters, the goal of complex representation theory for a finite group is to produce the character table. This is the square matrix whose rows are labelled by the irreducible characters, and whose columns are labelled by the conjugacy classes in the group; the entries in the table are the values of the characters. A representation can be regarded as "known" as soon as its character is known: once the irreducible characters are known, one can extract the irreducible multiplicities from the dot products with the rows of the character table. You will see in the homework (Q. 2.5) that you can even construct the isotypical decomposition of your representation.

Orthonormality of characters implies a certain orthonormality of the rows of the character table. "A certain" reflects the fact that the inner product between characters is not the plain dot product of the rows; rather, the column of a class C must be weighed by a factor |C|/|G| in the dot product (as the sum goes over group elements and not conjugacy classes). In other words, if B is the matrix obtained from the table A by multiplying each column by √(|C|/|G|), then the orthonormality relations imply that B is a unitary matrix. Orthonormality of its columns leads to the

10.1 Proposition (The second orthogonality relations). For any conjugacy classes C, C′,

    Σ_χ \overline{χ(C)} · χ(C′) = |G|/|C| if C = C′, and 0 otherwise,

summing over the irreducible characters χ of G.

The orbit–stabiliser theorem, applied to the conjugation action of G on itself and some element c ∈ C, shows that the prefactor |G|/|C| is the order of Z_G(c), the centraliser of c in G (the subgroup of G-elements commuting with c).

10.2 Remark. As a reformulation, we can express the indicator function E_C of a conjugacy class C ⊂ G in terms of characters:

    E_C(g) = (|C|/|G|) Σ_χ \overline{χ(C)} · χ(g),

summing over the irreducible characters χ of G. Note that the coefficients need not be integral, so the indicator function of a conjugacy class is not, in general, the character of a representation.
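We can check the second orthogonality relations on the character table of S3 (an illustration of mine): columns, dotted over the irreducible characters, give |G|/|C| on the diagonal and 0 off it.

```python
# Character table of S3; rows: trivial, sign, standard 2-dim rep.
# Columns: classes of e, transpositions, 3-cycles (sizes 1, 3, 2); |G| = 6.
table = [
    [1,  1,  1],
    [1, -1,  1],
    [2,  0, -1],
]
class_size = [1, 3, 2]
G = 6

def column_dot(c1, c2):
    # Second orthogonality: sum over irreducible characters chi of
    # chi(C1) * chi(C2); equals |G|/|C| when C1 = C2, else 0 (real table).
    return sum(row[c1] * row[c2] for row in table)
```

The diagonal values 6, 2, 3 are exactly the centraliser orders |Z_G(c)| of representatives of the three classes.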
(10.3) Example: the cyclic group Cn of order n

Call g a generator, let ω = exp(2πi/n), and let L_k be the one-dimensional representation in which g acts as ω^k. Then we know that L0, …, L_{n−1} are all the irreducibles, and the character table is the following:

          {1}   {g}   …   {g^q}   …
    L0     1     1    …     1     …
    …
    L_p    1    ω^p   …   ω^{pq}  …
    …

Moving on to D6, we had found the one-dimensional irreducibles 1, S and the 2-dimensional irreducible V; we note that 1 + 1 + 2² = 6, so there are no other irreducibles. The conjugacy classes are {1}, {g, g²} and {r, rg, rg²}, the three reflections being all conjugate (g⁻¹rg = rg², etc.); so we expect 3 conjugacy classes, matching the number of representations we found. From our matrix construction of the representation we fill in the entries of the character table:

         {1}   {g, g²}   {r, rg, rg²}
    1     1       1           1
    S     1       1          −1
    V     2      −1           0

Let us check one of the orthogonality relations: ‖χV‖² = (2² + 1²)/6 = 5/6?! The problem is explained by observing that we forgot to weigh the conjugacy classes by their size. The correct computation is ‖χV‖² = (2² + 2·1² + 3·0²)/6 = 1. It is helpful to make a note of the weights somewhere in the character table.

The general dihedral group D2n works very similarly, but there is a distinction according to the parity of n. Let us first take n = 2m + 1, odd. The conjugacy classes are {1}, the pairs of opposite rotations {g^k, g^{−k}} for 1 ≤ k ≤ m, and all n reflections: m + 2 in total. We can again see the trivial and sign representations 1 and S; additionally, we have m two-dimensional representations V_k, labelled by 0 < k ≤ m, in which the generator g of the cyclic subgroup Cn acts as the diagonal matrix with entries ω^k, ω^{−k} (where ω = exp(2πi/n)), and the reflection r switches the basis vectors. Irreducibility of V_k follows as in the case n = 3, but we will of course be able to check that from the character. The sum of squared dimensions is 1 + 1 + m·2² = 4m + 2 = |D2n|, matching the number of representations we found.

The case of even n = 2m is slightly different. We can still find the representations 1, S and V_k, but the sum of squared dimensions 1 + 1 + m·2² = 4m + 2 is now incorrect — it exceeds the group order 4m, which does not match the theorem very well. The problem is explained by observing that, in the representation V_m, g acts as −Id, so g and r are simultaneously diagonalisable and V_m splits into two lines L_+ ⊕ L_−, distinguished by the eigenvalues ±1 of r. The sum of squares is now the correct 1 + 1 + 4(m − 1) + 1 + 1 = 4m. Observe also that there are now two conjugacy classes of reflections, {r, rg², …, rg^{n−2}} and {rg, rg³, …}.
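As a numerical sanity check (a sketch of mine, with n = 5 as an assumed example), the class-weighted inner product confirms that the characters χ_{V_k} of D10 have norm 1 and are mutually orthogonal:

```python
import numpy as np

# Characters of the 2-dim irreducibles V_k of D_{2n}, n = 5 (odd case).
# Classes: {1}, {g, g^-1}, {g^2, g^-2}, and the n reflections.
n = 5
weights = [1, 2, 2, 5]   # class sizes; group order is 2n = 10
order = 2 * n

def chi_V(k):
    # g has eigenvalues w^k, w^-k on V_k; reflections swap the axes (trace 0).
    return [2] + [2 * np.cos(2 * np.pi * k * j / n) for j in (1, 2)] + [0]

def inner(c1, c2):
    # Real characters: class-weighted inner product, no conjugation needed.
    return sum(w * x * y for w, x, y in zip(weights, c1, c2)) / order
```

Forgetting the weights would give the wrong answer, exactly as in the D6 computation above.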
The character table of D4m (n = 2m) is thus the following (0 < p, q < m):

           {1}   {g^q, g^{−q}}   {g^m}    {r, rg², …}   {rg, rg³, …}
    1       1          1            1          1              1
    S       1          1            1         −1             −1
    L_±     1       (−1)^q       (−1)^m       ±1             ∓1
    V_p     2    2cos(2πpq/n)    2(−1)^p       0              0

11 Groups of order pq

Generalising the dihedral groups, we now construct the character table of certain groups of order pq, where p and q are two primes such that q | (p − 1).

A theorem from algebra asserts that the group (Z/p)× of non-zero residue classes under multiplication is cyclic. A proof of this fact can be given as follows: one shows (from the structure theorem, or directly) that any abelian group in which, for each d, the equation x^d = 1 has no more than d solutions is cyclic. For the group (Z/p)×, this property follows because Z/p is a field, so the polynomial x^d − 1 cannot split into more than d linear factors.

Let then u be an element of multiplicative order q in (Z/p)×. (Such elements exist if q | (p − 1), and there are precisely φ(q) = q − 1 of them.)

We define a special group F_{p,q} with two generators a, b satisfying the relations

    a^p = b^q = 1,   bab⁻¹ = a^u.

It can be shown (you'll do this in the homework) that a different choice of u leads to an isomorphic group; so omitting u from the notation F_{p,q} is in fact justified. When q = 2, we must take u = −1 and we obtain the dihedral group D_{2p}. It can be shown that, up to isomorphism, F_{p,q} is the only group of order pq other than the cyclic group of the same order.

11.4 Proposition. F_{p,q} has exactly pq elements, namely the a^m b^n with 0 ≤ m < p, 0 ≤ n < q.

Proof. Clearly, these elements exhaust F_{p,q}, because the commutation relation allows us to move all b's to the right of the a's in any word. To show that there are no further identifications, the best way is to realise the group concretely. We shall in fact embed F_{p,q} within the group GL₂(Z/p) of invertible 2 × 2 matrices with coefficients in Z/p. Specifically, we consider the matrices of the form

    [x y; 0 1] :  x a power of u, y arbitrary.

There are pq such matrices, and they are closed under multiplication. We realise F_{p,q} by sending

    a ↦ [1 1; 0 1],   b ↦ [u 0; 0 1].

Note that the subgroup generated by a is normal in F_{p,q}, and the quotient group is cyclic of order q.

Let now r = (p − 1)/q, and let v₁, …, v_r be a set of representatives for the cosets of (Z/p)× modulo the subgroup generated by u; we view the v_i as residue classes mod p. Clearly, {v_i u^j}, with 1 ≤ i ≤ r and 0 ≤ j < q, is a complete set of non-zero residue classes mod p.

F_{p,q} has q + r conjugacy classes; a complete list of representatives is 1, the a^{v_m} with 1 ≤ m ≤ r, and the b^n with 0 < n < q. We start with the conjugacy classes to see this. The class of a^{v_m} contains the elements a^{v_m u^i}, 0 ≤ i < q: we obtain these powers of a by successive b-conjugation (conjugation by a does not change those elements), and, fixing i, it is clear that further conjugation by b does not enlarge this set. The class of b^n contains the elements a^j b^n, 0 ≤ j < p, obtained by a-conjugation: ab^n a⁻¹ = a^{1−u^n} b^n, and u^n ≠ 1 for 0 < n < q, so 1 − u^n ≠ 0 mod p and repeated conjugation produces all powers of a. Indeed, 1 + r·q + (q − 1)·p = pq, so these classes exhaust the group.

(11.5) Representations. Unlike the previous examples, we will now exploit the properties of characters in finding representations and proving their irreducibility. We can see q 1-dimensional representations of F_{p,q}, on which a acts trivially but b acts by the various roots of unity of order q (these factor through the cyclic quotient of order q). We need to find r more representations, whose squared dimensions should add up to pq − q = (p − 1)q = rq². The obvious guess is to look for q-dimensional representations; much as we did for the dihedral group, we can find them and spot their irreducibility without too much effort.

On any representation, the eigenvalues of a are pth roots of unity, powers of ω = exp(2πi/p). Assume now that ω^k is an eigenvalue, with k ≠ 0, and let v be the ω^k-eigenvector. Then we have

    ρ(a)ρ(b)⁻¹(v) = ρ(b)⁻¹ρ(a^u)(v) = ρ(b)⁻¹ ω^{ku}(v) = ω^{ku} ρ(b)⁻¹(v),

and so ρ(b)⁻¹(v) is an eigenvector with eigenvalue ω^{ku}. Iterating, the powers of exponent ku, ku², …, ku^{q−1} are also present among the eigenvalues. Thus, the non-trivial eigenvalues of a cannot appear in isolation, but rather in the r families {ω^{v_i u^j}}. The smallest possible matrix for a representation with a ≠ 1 is then

    ρ_i(a) = diag[ω^{v_i}, ω^{v_i u}, …, ω^{v_i u^{q−1}}].

Note now that the matrix B which cyclically permutes the standard basis vectors satisfies B ρ_i(a) B⁻¹ = ρ_i(a^u); so we can set ρ_i(b) = B to get a q-dimensional representation of F_{p,q}. This is irreducible, for we saw that no representation where a ≠ 1 can have dimension lower than q.

(11.6) The character table. Let η = exp(2πi/q). Call L_k the 1-dimensional representation in which b acts as η^k, and V_m (0 < m ≤ r) the q-dimensional representation ρ_m above. The character table of F_{p,q} looks as follows, where 1 ≤ l ≤ r, 0 < n < q, and the sum ranges over 0 ≤ s < q:

           1        a^{v_l}             b^n
    L_k    1           1               η^{kn}
    V_m    q   Σ_s ω^{v_m v_l u^s}       0

The orthogonality relations are not completely obvious, but require only a short calculation.
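That short calculation hinges on an exponential sum over the non-zero residues mod p; here is a numerical check (a sketch of mine, with the assumed example p = 7, q = 3, u = 2, and coset representatives v₁ = 1, v₂ = 3):

```python
import cmath

# p = 7, u = 2 has multiplicative order 3 = q mod 7, and r = (p-1)/q = 2.
# With coset representatives 1 and 3, the exponents v_m * u^s sweep out
# every non-zero residue mod p exactly once.
p, q, u = 7, 3, 2
reps = [1, 3]
exponents = sorted(v * pow(u, s, p) % p for v in reps for s in range(q))

w = cmath.exp(2j * cmath.pi / p)
x = 4  # any fixed non-zero residue mod p
S = sum(w ** (x * t) for t in exponents)
# Since the full sum of w^{x j} over all residues j mod p vanishes,
# the sum over the non-zero residues is -1.
```

The same check works for any non-zero x, which is exactly what the orthogonality computation needs.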
The key identity is Σ_{m,s} ω^{x v_m u^s} = −1, where x mod p is a fixed, non-zero residue and m and s in the sum range over all possible choices: the exponents v_m u^s run over all non-zero residues mod p exactly once, and the sum of ω^{xj} over all residues j mod p vanishes.

12 The alternating group A5

In this lecture we construct the character table of the alternating group A5 of even permutations on five letters. Recall that every permutation is a product of disjoint cycles, uniquely up to their order, and that two permutations are conjugate in the symmetric
group iff they have the same cycle type, meaning the cycle lengths (with their multiplicities). Thus, a complete set of representatives for the conjugacy classes in S5 is

    Id, (12), (123), (1234), (12345), (12)(34), (123)(45),

seven in total. Of these, only Id, (12)(34), (123) and (12345) are in A5.

12.1 Remark. A conjugacy class in S5 may or may not break up into several classes in A5. The reason is that we may or may not be able to relate two given permutations in A5, which are conjugate in S5, by an even permutation. The same orbit–stabiliser argument as below shows that the conjugacy class in Sn of an even permutation σ remains a single conjugacy class in An precisely when σ commutes with some odd permutation; else, it breaks up into two classes of equal size.

Now, the first three permutations in our list all commute with some transposition. This ensures that we can implement the effect of any conjugation in S5 by a conjugation in A5: conjugate by the extra transposition if necessary. We do not have such an argument for the 5-cycle, so there is the possibility that the cycles (12345) and (21345) are not conjugate in A5. To make sure, we count the size of the conjugacy class by the orbit–stabiliser theorem: the size of the conjugacy class containing g ∈ G is |G| divided by the order of the centraliser of g. The centraliser of (12345) in S5 is the cyclic subgroup of order 5 generated by this cycle, and is therefore the same as the centraliser in A5. So the order of the conjugacy class in S5 is 24, but in A5 only 12. So our suspicions are confirmed, and there are two classes of 5-cycles in A5.

For use in computations, we note that the orders of the conjugacy classes of Id, (12)(34), (123), (12345), (21345) are 1, 15, 20, 12 and 12, respectively. We have, indeed, 1 + 15 + 20 + 12 + 12 = 60 = |A5|.

We now start producing the rows of the character table, listing the conjugacy classes in the order above. The trivial rep 1 gives the row 1, 1, 1, 1, 1. Next, we observe that A5 acts naturally on C⁵ by permuting the basis vectors. The character χ_{C⁵} is 5, 1, 2, 0, 0, and its dot product

    ⟨χ1|χ_{C⁵}⟩ = 5/60 + 1/4 + 2/3 = 1

with the trivial rep shows that 1 appears in C⁵ with multiplicity one. This representation is not irreducible; indeed, the line spanned by the vector e1 + … + e5 is invariant. The complement V has character χV = 4, 0, 1, −1, −1, whose square norm is

    ‖χV‖² = 16/60 + 1/3 + 1/5 + 1/5 = 1,

showing that V is irreducible.

We seek more representations, and consider for that the tensor square V⊗², which splits as Sym²V ⊕ Λ²V. We already know this is reducible, but we try to extract irreducible components. For this we need the character formulae for Sym²V and Λ²V.

12.2 Proposition. χ_{Sym²V}(g) = (χV(g)² + χV(g²))/2, and χ_{Λ²V}(g) = (χV(g)² − χV(g²))/2.

The class function g ↦ χV(g²) is denoted by Ψ²χV.

Proof. For any g ∈ G, we choose a basis of V on which g acts diagonally, with eigenvalues λ1, …, λd. Having fixed g, from our bases of Sym²V, Λ²V, we see that the eigenvalues of g on the two spaces are the λ_p λ_q, with 1 ≤ p ≤ q ≤ d on Sym² and with 1 ≤ p < q ≤ d on Λ². Then the proposition reduces to the obvious identities

    Σ_{p≤q} λ_p λ_q = [(Σ_p λ_p)² + Σ_p λ_p²]/2,   Σ_{p<q} λ_p λ_q = [(Σ_p λ_p)² − Σ_p λ_p²]/2.
the same automorphism also permutes the rows of the character table. there is a better way (several. it is seen to give the values (4. 6 3 showing that V appears once as well. 0. 5 and are the unique irreducible reps of those dimensions. they will both appear in Λ2 V . We also compute χSym2 V χV = 4 1 + = 1. 6 4 3 so Sym2 V contains 1 exactly once. 1. So. leaving the second as an exercise: χΛ2 V 2 = 36 4 1 1 + + + = 1 + 1 = 2. However. 1. Therefore. With luck. 2. but the two types of 5cycles get interchanged. it does nothing. We check the ﬁrst condition. 1. this is why this make sense). 60 4 3 showing that W is a new. hence a permutation of the conjugacy classes. there would be a unique solution. −1. else. • χΛ2 V is orthogonal to the other three characters. So we must be in the ﬁrst case and τ must switch the last two rows. 0. The complement W of 1 ⊕ V has character (5. it would follow that all characters of A5 take equal values on the last two conjugacy classes. So our characters are χSym2 V = (10. 0. But then it follows that the bottom two entries in the ﬁrst three columns (those not moved by 29 . Now. 4. 1. acting on the same vector space. τ can switch the last two rows. One method is to understand Λ2 V from the point of view of S5 . In our case. 0) V and χΛ2 V = (6. Now. Now. you get to check that it is an irreducible representation of S5 . This will happen precisely if • χΛ2 V 2 = 2. discoverable by easy linear algebra. 60 4 5 5 We are left with the problem of decomposing χΛ2 V into two irreducible characters. because the formula τ ρ(σ) := ρ(τ στ −1 ) deﬁnes a new representation τ ρ from any given representation ρ. With complex values allowed. Thereunder. the same automorphism also acts on the set of representations. −1. If we were sure the values are real. −2. but it turns out only one solution will take the value 3 at the conjugacy class Id. the three rows we found have dimensions 1. 
It is easy to see that τ ρ is irreducible iﬀ ρ is so (they will have the same invariant subspaces in general). 1). speciﬁcally. so a bit of work will produce the answer. We need two more representations. in fact). The squared dimensions we have so far add up to 1 + 42 + 52 = 42. whereas χ2 is (16. We can see that Sym2 V will have a positive dot product with 1. 0. −1). These must be unit vectors orthogonal to the other three characters. there is a oneparameter family of solutions.The class function g → χV (g 2 ) is denoted by Ψ2 χV . at most. contradicting the fact that the characters span all class functions. 1). with normsquare 25 1 1 χW 2 = + + = 1. if switching the last two columns did nothing. Notice that conjugation by your favourite transposition τ deﬁnes an automorphism of A5 . 1 2 1 χSym2 V χ1 = + + = 1. The only decomposition into two squares is 32 + 32 . this automorphism will switch the last two columns of the character table. so we are looking for two 3dimensional ones. 0). (In the homework. 1. the ﬁrst three classes stay ﬁxed. So. 4. 1. 5dimensional irreducible representation. missing 18.
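The bookkeeping above is easy to confirm by machine. The following Python sketch (not part of the notes; class sizes and characters hard-coded from the discussion above) recomputes χ_{Sym²V} and χ_{Λ²V} and the inner products:

```python
from fractions import Fraction

# Conjugacy classes of A5: {Id}, (12)(34), (123) and the two classes of
# 5-cycles, with sizes as computed by the orbit-stabiliser theorem.
sizes = [1, 15, 20, 12, 12]
order = sum(sizes)                               # |A5| = 60

def inner(phi, psi):
    # <phi, psi> = (1/|G|) sum over classes of |C| phi(C) conj(psi(C));
    # all values here are real, so no conjugation is needed.
    return sum(Fraction(n) * a * b for n, a, b in zip(sizes, phi, psi)) / order

chi_1 = [1, 1, 1, 1, 1]
chi_V = [4, 0, 1, -1, -1]
psi2  = [4, 4, 1, -1, -1]                        # Psi^2(chi_V): g -> chi_V(g^2)

chi_sym = [(c * c + p) // 2 for c, p in zip(chi_V, psi2)]   # (10, 2, 1, 0, 0)
chi_alt = [(c * c - p) // 2 for c, p in zip(chi_V, psi2)]   # (6, -2, 0, 1, 1)

assert inner(chi_sym, chi_1) == 1 and inner(chi_sym, chi_V) == 1
chi_W = [s - o - v for s, o, v in zip(chi_sym, chi_1, chi_V)]
assert chi_W == [5, 1, -1, 0, 0] and inner(chi_W, chi_W) == 1
assert inner(chi_alt, chi_alt) == 2   # Lambda^2 V is a sum of two irreducibles
```

The exact rational arithmetic via `Fraction` avoids any floating-point fuzz in the orthogonality relations.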
Knowing that, swapping the rows must have the same effect as swapping the columns. But then the bottom two entries in the first three columns (those not moved by τ) are always equal, so the last 2 × 2 block is

    a  b
    b  a.

For the last two rows and two columns, we have a + b = 1 and, by orthogonality, ab = −1, whence

    a, b = (1 ± √5)/2.

All in all, the character table of A5 is the following:

          {Id}   (12)(34)   (123)   (12345)   (21345)
    1      1        1          1        1         1
    V      4        0          1       −1        −1
    W      5        1         −1        0         0
    Λ′     3       −1          0        a         b
    Λ″     3       −1          0        b         a

There are two other ways to fill in the last two rows. One is to argue a priori that the character values are real; this can be done using the little result in Question 2. Then observe that the character value of a 5-cycle in a 3-dimensional representation must be a sum of three fifth roots of unity. The only real values of such sums are 3 and the two values a, b above. The value 3 could only be attained if the 5-cycles acted as the identity; but then any relation holding in the representation would have to hold in the group, which is not possible: A5 is simple, hence has no proper normal subgroups, hence is isomorphic to its image under any nontrivial representation. So the only possible character values are a and b, and you can easily check that the table above is the only option compatible with orthogonality.

Another way to fill in the last rows would be to "spot" the representations. They actually come from the two identifications of A5 with the group of isometries of the icosahedron. The five-cycles become rotations by multiples of 2π/5, whose traces are seen to be a and b from the matrix description of these rotations.

13 Integrality in the group algebra

The goal of this section is to study the properties of integral elements in the group algebra, resulting in the remarkable theorem that the degree of an irreducible representation divides the order of the group. We recall first the notion of integrality.

13.1 Definition. Let S ⊂ R be a subring of the commutative ring R. An element x ∈ R is called integral over S if the following equivalent conditions are satisfied:
(a) x satisfies some monic polynomial equation with coefficients in S;
(b) the ring S[x] generated by x over S within R is a finitely generated S-module.

Recall that an S-module M is finitely generated if it contains elements m1, . . . , mk such that every m ∈ M is expressible as s1m1 + · · · + skmk, for suitable si ∈ S. Both conditions are seen to be equivalent to the statement that, for some large N, x^N is expressible in terms of the x^n, n < N, with coefficients in S. If S = Z, x is simply called integral; an integral element of C is called an algebraic integer.

13.2 Proposition. If x, y ∈ R are both integral over S, then so are x + y and xy.

Proof. We know that x is integral over S, and y is integral over S, hence also over S[x]. So S[x] is a finitely generated S-module and S[x, y] is a finitely generated S[x]-module; hence S[x, y] is a finitely generated S-module, and it contains x + y and xy. (The products of a set of generators of S[x] over S and of one of S[x, y] over S[x] give a set of generators of S[x, y] over S.)
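As a quick illustration of the definition, the values a, b = (1 ± √5)/2 in the A5 table are algebraic integers: they are sums of fifth roots of unity. A short numerical check (Python, illustrative only):

```python
import cmath
import math

# a = 1 + z + z^4 and b = 1 + z^2 + z^3, with z a primitive fifth root of
# unity, are the traces of the two rotation classes in the 3-dim reps.
z = cmath.exp(2j * math.pi / 5)
a = (1 + z + z**4).real
b = (1 + z**2 + z**3).real

# Both satisfy the monic integer polynomial x^2 - x - 1: algebraic integers.
for x in (a, b):
    assert abs(x**2 - x - 1) < 1e-12
assert abs(a + b - 1) < 1e-12 and abs(a * b + 1) < 1e-12   # a+b = 1, ab = -1
assert abs(a - (1 + math.sqrt(5)) / 2) < 1e-12
```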
13.3 Proposition. The image of an integral element under any ring homomorphism is integral.

Finally, we note the following fact about algebraic integers.

13.4 Proposition. Any rational algebraic integer is a 'genuine' integer (in Z).

Sketch of proof. Assume that the reduced fraction x = p/q satisfies a monic polynomial equation of degree n with integer coefficients, and get a contradiction by showing that the denominator q^n in the leading term cannot be cancelled by any of the other terms in the sum.

13.5 Remark. The proposition is a special case of Gauss' Lemma, which says that any monic factor with rational coefficients of a monic polynomial with integral coefficients must in fact have integral coefficients.

Our goal is the following

13.6 Theorem. The dimension of any complex irreducible representation of a finite group divides the order of the group.

Calling G the group and V the representation, we will prove that |G|/dim V is an algebraic integer; being rational, it must then be a genuine integer, by (13.4). Observe the identity

    |G|/dim V = Σ_{g∈G} χ_V(g⁻¹) · χ_V(g)/dim V = Σ_C χ̄_V(C) · |C| · χ_V(C)/dim V,

where the last sum ranges over the conjugacy classes C. The theorem then follows from the following two propositions, combined with the fact (13.2) that sums and products of algebraic integers are again algebraic integers.

13.7 Proposition. For any conjugacy class C ⊂ G and irreducible representation V, the number χ_V(C) · |C|/dim V is an algebraic integer.

13.8 Proposition. The values χ(g) of the character of any representation of G are algebraic integers.

Proof. The eigenvalues of ρ(g) are roots of unity, and roots of unity are clearly integral, from the first definition; χ(g), being their sum, is integral by (13.2).

To prove the remaining proposition (13.7), we need to recall the group algebra C[G], the space of formal linear combinations Σ ϕg · g of group elements, with multiplication prescribed by the group law and the distributive law. It contains the subspace of class functions. We need to know that this is a commutative subalgebra; in fact, we have the following, more precise

13.9 Proposition. The space of class functions is the centre Z(C[G]) of C[G].

Proof. Class functions are precisely those elements ϕ ∈ C[G] for which λ(g)ϕρ(g)⁻¹ = ϕ, in terms of the left and right multiplication actions of G. Rewriting in terms of the multiplication law in C[G], this says g · ϕ = ϕ · g, so class functions are those commuting with every g ∈ G. But that is equivalent to their commutation with every element of C[G].

Let now E_C ∈ C[G] denote the indicator function of the conjugacy class C ⊂ G. Note that E_{1} is the identity in C[G].

13.10 Proposition. Every E_C is integral in Z(C[G]).

Proof. Writing E_C = Σ_{g∈C} g makes it clear that a product of two E's is a sum of group elements (with integral coefficients), invariant under conjugation; grouping the elements by conjugacy class leads to an integral combination of the other E's. So ⊕_C Z · E_C is a subring of Z(C[G]). Integrality of each E_C follows, because the ring Z[E_C] is a subgroup of the free, finitely generated abelian group ⊕_C Z · E_C, and so it is a finitely generated abelian group as well.
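The key point in the proof of 13.10 — that the coefficients of E_C · E_D are integral and constant on conjugacy classes — can be seen concretely. Here is a sketch (Python, using S3 as a small example; permutations as tuples, not part of the notes):

```python
from itertools import permutations
from collections import Counter

elems = list(permutations(range(3)))             # the group S3

def mul(p, q):                                   # composition: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    q = [0] * 3
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def conj_class(g):
    return frozenset(mul(mul(x, g), inv(x)) for x in elems)

classes = {conj_class(g) for g in elems}         # e, transpositions, 3-cycles
assert len(classes) == 3

# Expand E_C * E_D in the group algebra: the coefficient of each group element
# is an integer, constant on conjugacy classes, so the product is an integral
# combination of the E's -- the Z-span of the class sums is a ring.
for C in classes:
    for D in classes:
        coeff = Counter()
        for g in C:
            for h in D:
                coeff[mul(g, h)] += 1
        for E in classes:
            assert len({coeff[g] for g in E}) == 1   # constant on each class
```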
Recall now that the group homomorphism ρ_V : G → GL(V) extends by linearity to an algebra homomorphism ρ_V : C[G] → End(V), ρ_V(ϕ) = Σ_g ϕg · ρ_V(g). This is compatible with the conjugation action of G, and so class functions must go to the subalgebra End_G(V) of G-invariant endomorphisms. When V is irreducible, the latter is isomorphic to C, by Schur's lemma; since Tr_V(α · Id) = α dim V, the isomorphism is realised by taking the trace and dividing by the dimension. We obtain the following

13.11 Proposition. When V is irreducible, Tr_V/dim V is an algebra homomorphism:

    Tr_V( )/dim V : Z(C[G]) → C,    Σ_g ϕg · g → Σ_g ϕg · χ_V(g)/dim V.

Applying this homomorphism to E_C results in |C| · χ_V(C)/dim V. Propositions 13.3 and 13.10 then imply (13.7), and thus our theorem.

14 Induced Representations

The study of A5 illustrates the need for more methods to construct representations. One tool is suggested by the first of the representations of A5, namely, the permutation representation C5. Recall that, whenever G acts on a set X, it will act linearly on the space C[X] of complex-valued functions on X. This is called the permutation representation of X; some of its basic properties are listed in Question 2. For example, if G = A5 and H = A4, with the action of G on the cosets G/H, then C[G/H] ≅ C5, the permutation representation studied in the last lecture.

In general, the decomposition X = ⊔ X_k into orbits of the G-action leads to a direct sum decomposition C[X] = ⊕ C[X_k]. So if we are interested in irreducible representations, we might as well confine ourselves to transitive group actions; in this case, X can be identified with the space G/H of left cosets of G for some subgroup H (the stabiliser of some point). Of course, for any group we also have the regular representation, with the left action of G on G, which contains every irreducible; but there is no good method to break it up in general. We need some smaller representations, and we will now generalise the construction of C[G/H].

Let then H ⊂ G be a subgroup and ρ : H → GL(W) a representation thereof.

14.1 Definition. The induced representation Ind_H^G W is the subspace of the space of all maps G → W which are invariant under the action of H, acting on the right on G and simultaneously by ρ on W.

Under the left action of an element k ∈ G, a map ϕ : G → W gets sent to the map g → ϕ(k⁻¹g) ∈ W. The simultaneous action of h ∈ H on G and on W sends ϕ to the map g → ρ(h)(ϕ(gh)).

We can write the actions a bit more explicitly by observing that the space of W-valued maps on G is isomorphic to C[G] ⊗ W: using the natural basis {e_g} of functions on G and any basis {f_i} of W, we identify the basis element e_g ⊗ f_i of C[G] ⊗ W with the map taking g ∈ G to f_i and every other group element to zero. In this basis, the left action of k ∈ G sends e_g ⊗ w to e_{kg} ⊗ w. The right action of h ∈ H sends it to e_{gh⁻¹} ⊗ w, whereas the simultaneous action on the right on G and on W sends it to e_{gh⁻¹} ⊗ ρ(h)(w). A general function Σ_{g∈G} e_g ⊗ w_g ∈ C[G] ⊗ W gets sent by h to

    Σ_g e_{gh⁻¹} ⊗ ρ(h)(w_g) = Σ_g e_g ⊗ ρ(h)(w_{gh}).

This makes it clear that the two actions, of G and of H, commute, and so the subspace of H-invariant maps is a subrepresentation of G. In these terms, the H-invariance condition is

    ρ(h)(w_{gh}) = w_g,  ∀h ∈ H.   (*)
Let R ⊂ G be a set of representatives for the cosets G/H. In view of (*), every function Σ_{g∈G} e_g ⊗ w_g ∈ Ind_H^G W can be rewritten as

    Σ_{r∈R} Σ_{h∈H} e_{rh} ⊗ w_{rh} = Σ_{r∈R} Σ_{h∈H} e_{rh} ⊗ ρ(h⁻¹)(w_r),

so it is determined by its values w_r at the representatives. For r ∈ R, let W_r ⊂ Ind_H^G W be the subspace of functions supported on the coset rH, the functions vanishing everywhere else.

14.2 Proposition. Each vector space W_r is isomorphic to W, and, as a vector space, Ind_H^G W ≅ ⊕_r W_r. In particular, dim Ind_H^G W = |G/H| · dim W.

Proof. For each r, we construct an isomorphism from W to W_r by

    w ∈ W → Σ_{h∈H} e_{rh} ⊗ ρ(h⁻¹)(w).

Clearly, this function is supported on rH, and it is H-invariant; conversely, in view of the same H-invariance condition, every invariant function supported on rH has this form. This gives the desired decomposition.

Examples
1. If H = {1} and W is the trivial representation, Ind_{1}^G W is the regular representation of G.
2. If H is any subgroup and W is the trivial representation, then Ind_H^G W ≅ C[G/H], the permutation representation on G/H.
3. More generally, if the action of H on W extends to an action of the ambient group G, then Ind_H^G W ≅ C[G/H] ⊗ W, the permutation representation on G/H times the vector space W with the extended G-action. (This is not obvious.)

Let us summarise the basic properties of induction.

14.3 Theorem (Basic Properties).
(i) Ind(W1 ⊕ W2) ≅ IndW1 ⊕ IndW2.
(ii) dim Ind_H^G W = |G/H| · dim W.
(iii) Ind_{1}^G 1 is the regular representation of G.
(iv) Let H ⊂ K ⊂ G. Then Ind_K^G Ind_H^K W ≅ Ind_H^G W.

Proof. Part (i) is obvious from the definition, and we have seen (ii) and (iii). Armed with sufficient patience, you can check property (iv) from the character formula for the induced representation that we will give in the next lecture; here is the proof from first definitions. An element of Ind_K^G Ind_H^K W is a map from G into the space of maps from K into W, invariant under K and under H, as appropriate. This is also a map φ : G × K → W, satisfying φ(gk′, k) = φ(g, k′k), and moreover φ(g, kh⁻¹) = ρ_W(h)φ(g, k) for h ∈ H. Such a map is determined by its restriction to G × {1}, and the resulting restriction ψ : G → W satisfies ψ(gh⁻¹) = ρ_W(h)(ψ(g)); that is, it is H-invariant, and as such determines an element of Ind_H^G W. Conversely, such a ψ is extended to a φ on all of G × K using the K-invariance condition.
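Example 2 and property (ii) can be checked numerically in the case G = A5, H = A4 used above; a tiny Python sketch (class data as in the previous section, illustrative only):

```python
# Ind from A4 up to A5 of the trivial rep is C[A5/A4] = C^5, the permutation
# representation on 5 points; its character counts fixed points on each class.
sizes    = [1, 15, 20, 12, 12]        # classes: e, (12)(34), (123), two 5-cycle classes
chi_perm = [5, 1, 2, 0, 0]            # fixed points of a class representative

# dim Ind = [G:H] * dim W = 60/12 * 1 = 5
assert chi_perm[0] == 60 // 12
# <chi_perm, 1> = 1: the trivial rep appears exactly once ...
assert sum(n * c for n, c in zip(sizes, chi_perm)) // 60 == 1
# ... and the complement is chi_V from the A5 character table.
assert [c - 1 for c in chi_perm] == [4, 0, 1, -1, -1]
```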
15 Induced characters and Frobenius Reciprocity

(15.1) Character formula for Ind_H^G W. Induction is used to construct new representations, in the hope of producing irreducibles. It is quite rare that an induced rep will be irreducible (but we will give a useful criterion for that, due to Mackey), so we have to extract the 'known' irreducibles with the correct multiplicities. The usual method to calculate multiplicities involves characters, so we need a character formula for the induced representation.

We begin with a procedure which assigns to any class function on H a class function on G. The obvious extension by zero is usually not a class function — the complement of H in G is usually not a union of conjugacy classes. The most natural fix would be to take the transforms of the extended function under conjugation by all elements of G, and add them up. This, however, is a bit redundant, because many of these conjugates coincide:

15.2 Lemma. If ϕ is a class function on H, extended by zero to all of G, and x ∈ G is fixed, then the "conjugate function" g ∈ G → ϕ(x⁻¹gx) depends only on the coset xH.

Proof. ϕ(h⁻¹x⁻¹gxh) = ϕ(x⁻¹gx), because ϕ was invariant under conjugation in H.

This justifies the following

15.3 Definition. Let ϕ be a class function on H. The induced function Ind_H^G(ϕ) is the class function on G given by

    Ind_H^G(ϕ)(g) = Σ_{r∈R} ϕ(r⁻¹gr) = (1/|H|) Σ_{x∈G} ϕ(x⁻¹gx),

where we declare ϕ to be zero at the points of G \ H; recall that R ⊂ G is a system of representatives for the cosets G/H.

We are ready for the main result.

15.4 Theorem. The character of the induced representation Ind_H^G W is Ind_H^G(χ_W).

Proof. Recall from the previous lecture that we can decompose

    Ind_H^G W = ⊕_{r∈R} W_r,   (15.5)

where W_r ≅ W as a vector space, as each W_r consists of the functions supported only on the coset rH. We want the trace of the action of g ∈ G. Now, the action of G on IndW permutes the summands in (15.5), in accordance with the action of G on the left cosets G/H. We can confine ourselves to the summands which are preserved by g; the others contribute purely off-diagonal blocks to the action of g, hence nothing to its trace. Now, grH = rH iff r⁻¹gr = k ∈ H; the sum thus ranges over those r for which r⁻¹gr ∈ H. For each such r, we need the trace of g on W_r. The isomorphism W ≅ W_r was defined by

    w ∈ W → Σ_{h∈H} e_{rh} ⊗ h⁻¹w,

which transforms under g to give

    g Σ_{h∈H} e_{rh} ⊗ h⁻¹w = Σ_{h∈H} e_{grh} ⊗ h⁻¹w = Σ_{h∈H} e_{rkh} ⊗ h⁻¹w = Σ_{h∈H} e_{rh} ⊗ h⁻¹kw,

and so, under this isomorphism, the action of g on W_r corresponds to the action of k = r⁻¹gr on W. The block W_r therefore contributes χ_W(r⁻¹gr) to the trace of g, and summing over the relevant r gives our theorem.
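The averaging formula of 15.3–15.4 is easy to test on a small case. The following Python sketch (illustrative, not from the notes) induces the character ω of H = A3 ⊂ S3 = G and recovers the 2-dimensional irreducible character of S3:

```python
import cmath
import math
from itertools import permutations

G = list(permutations(range(3)))                 # S3

def mul(p, q):                                   # composition: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    q = [0] * 3
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

w = cmath.exp(2j * math.pi / 3)
r = (1, 2, 0)                                    # the 3-cycle generating H = A3
chi = {(0, 1, 2): 1, r: w, mul(r, r): w * w}     # chi = omega on H, zero off H

def ind_chi(g):
    # Ind(chi)(g) = (1/|H|) * sum over x in G of chi(x^{-1} g x)
    return sum(chi.get(mul(mul(inv(x), g), x), 0) for x in G) / 3

# Induced character: 2 at the identity, -1 on 3-cycles, 0 on transpositions.
assert abs(ind_chi((0, 1, 2)) - 2) < 1e-12
assert abs(ind_chi(r) + 1) < 1e-12
assert abs(ind_chi((1, 0, 2))) < 1e-12
# Norm-square 1: this induced representation happens to be irreducible.
assert abs(sum(abs(ind_chi(g))**2 for g in G) / 6 - 1) < 1e-12
```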
(15.6) Frobenius Reciprocity

Induction has a converse operation, restriction: when regarding a G-representation V merely as a representation of the subgroup H, we write Res_H^G V, or just ResV (the "restricted" representation). Surprisingly, one can compute multiplicities of G-irreducibles in IndW in a way which does not require knowledge of its character on G, but only the character of W on H. This is the content of the Frobenius reciprocity theorem below. The practical advantage is that a dot product of characters on G, involving one induced rep, is reduced to one on H.

To state it nicely, we introduce some notation. For two G-representations U, V, let ⟨U | V⟩_G = ⟨χ_U | χ_V⟩. Recall that this number is also the dimension of the space Hom_G(U, V) of G-invariant linear maps; indeed, it is the dimension of the subspace (V ⊗ U*)^G ⊂ V ⊗ U* of G-invariant vectors, because the isomorphism Hom(U, V) ≅ V ⊗ U* is compatible with the actions of G. (Check it by decomposing into irreducibles.)

15.7 Theorem (Frobenius Reciprocity). ⟨V | Ind_H^G W⟩_G = ⟨Res_H^G V | W⟩_H.

15.8 Remark. Viewing ⟨ | ⟩ as an inner product between representations, the theorem says, in more abstract language, that Ind_H^G and Res_H^G are adjoint functors.

15.9 Corollary. The multiplicity of the G-irreducible representation V in the induced representation Ind_H^G W is equal to the multiplicity of the irreducible H-representation W in the restricted representation Res_H^G V.

Proof of Frobenius reciprocity. One can check directly (you will do so in the homework) that the linear map Ind, from class functions on H to class functions on G, is the hermitian adjoint of the restriction of class functions from G to H; the Reciprocity formula then follows from Theorem 15.4. However, we give a proof which applies more generally (even when character theory does not). We must show that

    dim Hom_G(V, IndW) = dim Hom_H(V, W),

and we do so by constructing a natural isomorphism between the two spaces (independent of arbitrary choices):

    Hom_G(V, IndW) ≅ Hom_H(V, W).   (*)

The chain of isomorphisms leading to (*) is the following:

    Hom_G(V, IndW) = Hom_G(V, (C[G] ⊗ W)^H)
                   = Hom_{G×H}(V, C[G] ⊗ W)
                   = (C[G] ⊗ Hom(V, W))^{G×H}
                   = Hom_H(V, W).

We keep note of the fact that G acts on the left on C[G] and on V, while H acts on the right on C[G] and on W; because the G-action (on V and on the left on C[G]) commutes with the H-action (on the right on C[G] and on W), we get an action of the product group G × H on Hom(V, C[G] ⊗ W). Here, the superscripts G, H indicate that we take the subspaces of invariant vectors under the respective groups, and the invariant vectors under G × H are precisely those invariant under both G and H. The first step in the chain is the definition of the induced representation, and this observation accounts for the second isomorphism. In the third step, we have used the isomorphism of vector spaces Hom(V, C[G] ⊗ W) = C[G] ⊗ Hom(V, W), which equates the linear maps from V to W-valued functions on G with the functions on G with values in Hom(V, W): both sides are maps from G × V to W which are linear in V. It remains to explain the fourth and last isomorphism in the chain.
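A numerical check of Theorem 15.7 on the running example G = S3, H = A3 (Python sketch, not part of the notes; V is the 2-dimensional irreducible of S3 and W = ω):

```python
import cmath
import math
from itertools import permutations

G = list(permutations(range(3)))                 # S3

def mul(p, q):
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    q = [0] * 3
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

w = cmath.exp(2j * math.pi / 3)
r = (1, 2, 0)
chi_H = {(0, 1, 2): 1, r: w, mul(r, r): w * w}   # the character omega of H = A3

def ind(g):                                      # induced class function on G
    return sum(chi_H.get(mul(mul(inv(x), g), x), 0) for x in G) / 3

def chi_V(g):                                    # 2-dim irreducible of S3:
    return sum(g[i] == i for i in range(3)) - 1  # permutation character minus 1

# <V | Ind W>_G  versus  <Res V | W>_H  -- they agree, and equal 1 here.
lhs = sum(chi_V(g) * ind(g).conjugate() for g in G) / 6
rhs = sum(chi_V(h) * chi_H[h].conjugate() for h in chi_H) / 3
assert abs(lhs - rhs) < 1e-12 and abs(lhs - 1) < 1e-12
```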
The last step gets rid of G altogether. It is explained by the fact that a map on G, with values anywhere, which is invariant under left multiplication by G is completely determined by its value at the identity, and this value can be prescribed without constraint. To be more precise, let φ : G × V → W be a map linear on V, and denote by φ_g ∈ Hom(V, W) its value at g. Invariance under g forces us to have φ_{gk} = φ_k ∘ ρ_V(g⁻¹), as seen by acting on φ by g and evaluating at k ∈ G; in particular, φ_g = φ_1 ∘ ρ_V(g⁻¹). Conversely, for any ψ_1 = ψ ∈ Hom(V, W), the map on G sending k to ψ_k := ψ_1 ∘ ρ_V(k)⁻¹ is invariant, because

    ψ_{gk} = ψ_1 ∘ ρ_V(gk)⁻¹ = ψ_1 ∘ ρ_V(k)⁻¹ ∘ ρ_V(g)⁻¹ = ψ_k ∘ ρ_V(g)⁻¹,

as needed for invariance under any g ∈ G. We have thus shown that sending φ : G → Hom(V, W) to φ_1 gives a linear isomorphism from the subspace of G-invariants in C[G] ⊗ Hom(V, W) to Hom(V, W).

It remains to compare the H-invariance conditions on the two spaces. Acting on φ by h ∈ H results in the map g → ρ_W(h) ∘ φ_{gh}; in particular, 1 goes to ρ_W(h) ∘ φ_h. But we know that φ_h = φ_1 ∘ ρ_V(h)⁻¹, whence it follows that the transformed map sends 1 ∈ G to ρ_W(h) ∘ φ_1 ∘ ρ_V(h)⁻¹. It turns out, in fact, that the H-action on the first space agrees, under our isomorphism, with the H-action on Hom(V, W) given by the simultaneous action of h on V and W. The H-invariant vectors on the two sides therefore correspond, which establishes the fourth and last isomorphism in the chain.

16 Mackey theory

The main result of this section describes the restriction to a subgroup K ⊂ G of an induced representation Ind_H^G W. No relation is assumed between K and H, and the answer is phrased in terms of double cosets.

16.1 Definition. The double coset space K\G/H is the set of K × H-orbits on G, for the left action of K and the right action of H.

Note that left and right multiplication commute, so we get indeed an action of the product group K × H. There are equivalent ways to describe the set: as the set of orbits for the (left) action of K on the coset space G/H, or as the set of orbits for the right action of H on the space K\G. (Why?) An important special case is when H = K and is normal in G: in this case, K\G/H equals the simple coset space G/H.

Let S ⊂ G be a set of representatives for the double cosets (one point in each orbit); we thus have G = ⊔_s KsH. For each s ∈ S, let H_s = sHs⁻¹ ∩ K. H_s is a subgroup of K, but it also embeds into H by the homomorphism H_s ∋ x → s⁻¹xs. To see its meaning, note that H_s ⊂ K is the stabiliser of the coset sH under the action of K on G/H, while the embedding in H identifies H_s with the stabiliser of Ks under the (right) action of H on K\G.

A representation ρ : H → GL(W) leads to a representation of H_s on the same space by ρ_s(x) = ρ(s⁻¹xs). We write W_s for the space W, when this action of H_s is understood.

16.2 Theorem (Mackey's Restriction Formula). Res_K^G Ind_H^G(W) ≅ ⊕_{s∈S} Ind_{H_s}^K(W_s).

The formulae become much cleaner when H is a normal subgroup of G, but the most useful application of this rather technical formula comes in when K = H: in this case, we obtain a neat irreducibility criterion for Ind_H^G W.
Proof. We have G = ⊔_s KsH, and the left K-action and the right H-action preserve each double coset. We can then decompose the space of W-valued functions on G into a sum of functions supported on the various double cosets. H-invariance of the decomposition means that the invariant subspace Ind_H^G(W) ⊂ C[G] ⊗ W decomposes accordingly, into the sum of H-invariants in the spaces C[KsH] ⊗ W. We claim now that the H-invariant part of C[KsH] ⊗ W is isomorphic to Ind_{H_s}^K(W_s), as a representation of K.

Let then φ : KsH → W be a map invariant for the action of H (on the right on KsH, and simultaneously on W). Restricting to the subset Ks gives a map ψ : K → W, by setting ψ(k) := φ(ks), invariant under the corresponding action of H_s. But this is a vector in Ind_{H_s}^K(W_s), so we do get a natural linear map φ → ψ from the first space to the second.

Conversely, let ψ : K → W represent a vector in the induced representation Ind_{H_s}^K(W_s). We define a map φ : KsH → W by φ(ksh) := ρ(h)⁻¹ψ(k). There is a problem, however: the same group element may have several expressions as ksh, with k ∈ K and h ∈ H, and we must check that the value φ(ksh) is independent of that. Let ksh = k′sh′. Then k⁻¹k′ = s · h h′⁻¹ · s⁻¹, and so this common value y is in H_s. But then,

    φ(k′sh′) = ρ(h′)⁻¹ψ(k′) = ρ(h′)⁻¹ψ(ky) = ρ(h′)⁻¹ρ(hh′⁻¹)⁻¹ψ(k) = ρ(h)⁻¹ψ(k) = φ(ksh),

where for the third equality we have used the invariance of ψ under y ∈ H_s, noting that s⁻¹ys = hh′⁻¹ ∈ H. Moreover, φ is indeed invariant under the action of H (on the right on KsH, and on W): if x ∈ H, then

    φ(ksh · x⁻¹) = ρ(xh⁻¹)ψ(k) = ρ(x)φ(ksh),

as needed. The maps φ → ψ and ψ → φ are clearly inverse to each other, completing the proof of the theorem.

16.3 Remark. One can give a computational proof of Mackey's theorem using the formula for the character of an induced representation; but the proof given above applies even when character theory does not (e.g. when complete reducibility fails).

(16.4) Mackey's irreducibility criterion

We will now investigate irreducibility of the induced representation Ind_H^G(W) by applying Mackey's formula to the case K = H. In this case, we have for each s ∈ S two representations of the group H_s = sHs⁻¹ ∩ H on the space W: ρ_s, defined as above, and the restricted representation Res_{H_s} W. The two actions stem from the two embeddings of H_s inside H, by s⁻¹-conjugation and by the natural inclusion.

Call two representations of a group disjoint if they have no irreducible components in common; this is equivalent to orthogonality of their characters.

16.5 Theorem (Mackey's Irreducibility Criterion). Ind_H^G W is irreducible if and only if
(i) W is irreducible;
(ii) for each s ∈ S \ H, the representations W_s and Res_{H_s} W are disjoint.

16.6 Remark. The set of representatives S for the double cosets was arbitrary. In fact, it turns out that, for g ∈ G, the isomorphism class of W_g depends only on the coset gH, so in (ii) it suffices to check the condition on a set of representatives; but we might as well impose condition (ii) on any g ∈ G \ H.

The criterion becomes especially effective when H is normal in G. In that case, the double cosets are the same as the simple cosets (left or right), and sHs⁻¹ = H, so H_s = H for all s; moreover, the representation W_s of H is irreducible if W was so. Hence the following

16.7 Corollary. Let H ⊂ G be a normal subgroup. Then Ind_H^G W is irreducible iff W is irreducible and the representations W and W_s are non-isomorphic, for all s ∈ G \ H (again, it suffices to check this for s ∈ S).
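In the normal case K = H, Mackey's formula says Res_H Ind_H^G(ω) = Σ_s ω_s, a sum over coset representatives of the conjugate characters. A one-line check for H = A3 normal in S3 (Python, illustrative; the induced character was computed in the last section):

```python
import cmath
import math

# Res_{A3} Ind_{A3}^{S3}(omega) = omega + omega^s on {e, r, r^2}, where s is a
# transposition and omega^s(x) = omega(s^{-1} x s) = omega^2 (s inverts r).
w = cmath.exp(2j * math.pi / 3)
res_ind = [2, -1, -1]                 # the induced character, restricted to A3
omega   = [1, w, w * w]
omega_s = [1, w * w, w]               # the conjugate character

for u, v, t in zip(res_ind, omega, omega_s):
    assert abs(u - (v + t)) < 1e-12
```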
Proof of Mackey's criterion. Ind_H^G W is irreducible iff ⟨Ind_H^G W | Ind_H^G W⟩_G = 1. In the notation ⟨U | V⟩ := ⟨χ_U | χ_V⟩, we have

    ⟨Ind_H^G W | Ind_H^G W⟩_G = ⟨Res_H Ind_H^G W | W⟩_H = Σ_s ⟨Ind_{H_s}^H W_s | W⟩_H
                             = Σ_s ⟨W | Ind_{H_s}^H W_s⟩_H = Σ_s ⟨Res_{H_s} W | W_s⟩_{H_s}.

For the first and last equality, we have used the Frobenius reciprocity formula; the second equality is Mackey's formula; while the third uses the fact that the dot product of characters is an integer (hence real). Now, every term in the final sum is a non-negative integer, so the total is 1 only if all terms but one vanish, and the non-zero term is 1 itself. The choice of representatives S was arbitrary, so we can assume that 1 was chosen to represent the identity coset H·H = H; then H_1 = H and W_1 = W, and the s = 1 term is ‖χ_W‖², a positive integer. So this is the non-vanishing term, and it must equal 1: that is, W must be irreducible, as required by irreducibility of IndW, while all other terms must vanish — which is part (ii) of Mackey's criterion.

(16.8) Examples

1. If G = D2n, H = Cn and W is the representation L_k with generator eigenvalue exp(2πik/n), the induced representation is V_k. The transform of L_k under a reflection is the representation L_{−k}, and Mackey's criterion holds whenever exp(2πik/n) ≠ exp(−2πik/n), that is, unless k = 0 or k = n/2 (the latter when n is even). Indeed, in those two excluded cases, V_k breaks up into two lines.

2. If G = S5 and H = A5, Mackey's criterion fails for the first three irreducible representations 1, V and W, but holds for the two 'halves' of Λ²V. So the first three representations induce reducible ones, but the last two induce irreducible representations. One can check (by characters, but also directly) that the first three induced representations split as 1 ⊕ S, V ⊕ (V ⊗ S) and W ⊕ (W ⊗ S), where S denotes the sign representation, while Λ′ and Λ″ each induce Λ²V, which is irreducible under S5. As a matter of fact, this leads to the complete list of irreducibles of S5.

17 Nilpotent groups and p-groups

As an application of induced representations, we now show that, for a certain class of groups (the nilpotent ones), all irreducible representations arise by induction from a 1-dimensional representation of an appropriate subgroup. Except for the definition that we are about to give, this section contains optional material.

17.1 Definition. The group G is solvable if there is a finite chain G = G(n) ⊃ G(n−1) ⊃ · · · ⊃ G(0) = {1} of subgroups such that G(k) is normal in G(k+1) and the quotient group G(k+1)/G(k) is abelian. G is nilpotent if, in addition, the G(k) are normal in all of G and each G(k+1)/G(k) is central in G/G(k).

(17.2) Examples
(i) Every abelian group is nilpotent.
(ii) The group B of upper-triangular invertible matrices with entries in Z/p is solvable, but not nilpotent.
(iii) The subgroup N ⊂ B of upper-triangular matrices with 1's on the diagonal is nilpotent: the subgroups in the relevant chain have 1's on the diagonal and zeroes on progressively more superdiagonals.
(iv) Every subgroup and quotient group of a solvable (nilpotent) group is solvable (nilpotent): just use the chain of intersections with the G(k), or the quotient chain, as appropriate.
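Example 1 of (16.8) can be verified by computing the norm-square of the induced character directly: on the rotation r^j it is exp(2πikj/n) + exp(−2πikj/n) = 2cos(2πkj/n), and it vanishes on reflections. A short Python sketch (illustrative):

```python
import math

# Norm-square of the character of Ind_{Cn}^{D2n}(L_k): it equals 1 (so V_k is
# irreducible) exactly when 2k is nonzero mod n, matching Mackey's criterion.
def norm_sq_induced(n, k):
    vals = [2 * math.cos(2 * math.pi * k * j / n) for j in range(n)] + [0.0] * n
    return sum(v * v for v in vals) / (2 * n)

for n in range(3, 10):
    for k in range(n):
        expected = 2 if (2 * k) % n == 0 else 1   # k = 0 or n/2: two lines
        assert abs(norm_sq_induced(n, k) - expected) < 1e-9
```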
17.3 Proposition. Every p-group is nilpotent.

Recall that a p-group is a group whose order is a power of the prime number p.

Proof. We will show that the centre Z of a p-group G is nontrivial. Since G/Z is also a p-group, we can then repeat the argument and produce the chain in 17.1. Consider the conjugacy classes in G. Since they are orbits of the conjugation action, the orbit–stabiliser theorem implies that their orders are powers of p. If 1 were the only central element, then the order of every other conjugacy class would be 0 (mod p), and the class orders would then add up to 1 (mod p). However, the order of G is 0 (mod p); contradiction.

17.4 Remark. The same argument can be applied to the (linear) action of G on a vector space over a finite field of characteristic p, and implies that every such action has nonzero invariant vectors. This has the remarkable implication that the only irreducible representation of a p-group in characteristic p is the trivial one.

17.5 Theorem. Every irreducible representation of a nilpotent group is induced from a one-dimensional representation of an appropriate subgroup.

The subgroup will of course depend on the representation. Still, this is rather a good result, especially for p-groups, whose structure is usually sensible enough that an understanding of all subgroups is not a hopeless task. The proof will occupy the rest of the lecture and relies on the following

17.6 Lemma (Clever Lemma). Let G be a finite group, A ⊂ G a normal subgroup and V an irreducible representation of G. Then, either the restriction of V to A is isotypical, or else V is induced from a proper subgroup H ⊂ G. More precisely, V = Ind_H^G W for some intermediate group A ⊂ H ⊂ G and an irreducible representation W of H whose restriction to A is isotypical.

Recall that "W is isotypical for A" means that it is a direct sum of many copies of the same irreducible representation of A. Note that we may have H = G and W = V, if the restriction of V to A is already isotypical. We are interested in the special case when A is abelian, in which case the fact that the irreducible reps of A are 1-dimensional implies that the isotypical representations are precisely the scalar ones:

17.7 Corollary. With the same assumptions, if A ⊂ G is abelian, then either its action on V is scalar, or else V is induced from a proper subgroup H ⊂ G.

Proof of the Clever Lemma. Let V = ⊕_i V_i be the isotypical decomposition of V with respect to A. If there is a single summand, we take H = G, as indicated. Otherwise, observe that the action of G on V must permute the blocks V_i. Indeed, choose g ∈ G and let gρ(a) := ρ(gag⁻¹). Conjugation by g defines an automorphism of the (normal) subgroup A, which must permute its irreducible characters. Then, the same decomposition V = ⊕_i V_i is also the isotypical decomposition under the action gρ of A — the fact that all irreducible summands within each V_i are isomorphic, and distinct from those in V_j for j ≠ i, cannot have changed — however, the irreducible type assigned to V_i has been changed, according to the action of g-conjugation on characters of A. Moreover, ρ(g) : V → V defines an isomorphism of A-representations, if we let A act via ρ on the first space and via gρ on the second. Reverting to the action ρ on the second space, it follows that ρ(g) sends the blocks of the isotypical decomposition to blocks, and we conclude that ρ(g) permutes the blocks V_i according to its permutation action on the irreducible characters of A.
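The class-equation argument in the proof of 17.3 can be watched in action on a small p-group. A Python sketch for the dihedral group of order 8 (a 2-group, realised as permutations of the square's vertices; illustrative only):

```python
def mul(p, q):                        # composition of vertex permutations
    return tuple(p[q[i]] for i in range(4))

def inv(p):
    q = [0] * 4
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

r, s = (1, 2, 3, 0), (0, 3, 2, 1)     # rotation and a diagonal reflection
G, frontier = set(), [(0, 1, 2, 3)]
while frontier:                       # generate the group <r, s>
    g = frontier.pop()
    if g in G:
        continue
    G.add(g)
    frontier += [mul(g, r), mul(g, s)]
assert len(G) == 8

classes = {frozenset(mul(mul(x, g), inv(x)) for x in G) for g in G}
centre = [g for g in G if all(mul(g, x) == mul(x, g) for x in G)]

# Class sizes are powers of 2 and sum to |G|; since |G| = 0 (mod 2) and the
# identity class has size 1, the centre must contain more than the identity.
assert sorted(len(C) for C in classes) == [1, 1, 2, 2, 2]
assert len(centre) == 2               # nontrivial centre, as the proof predicts
```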
5. Frobenius = H reciprocity gives dim HomG (IndG V0 .5. noncentral subgroup. Let Z ⊂ G be the centre and let G ⊂ G be the smallest subgroup in the chain (17. let A ⊂ G/ ker ρ be a normal.7 that there exists a proper subgroup H ⊂ G with a representation W of H/ ker ρ. which is substantially more involved than that of Theorem 17. V is irreducible under G and its dimension equals that of IndG V0 . so gag −1 ∈ ker ρ and then a would be central in G/ ker ρ.1) Additional comments Induction from representations of “known” subgroups is an important tool in constructing the character table of a group. For a noncentral element a ∈ A. If G is nilpotent and nonabelian. If G/ ker ρ is abelian. moreover. gg g −1 ∈ g Z for any g ∈ G. H G the inequality holding because V0 is an Hsubrepresentation of V . and so for any g ∈ G . It follows that all nonzero blocks have the same dimension. and we can repeat the argument until H we ﬁnd a 1dimensional W . Then. then it contains a normal.1) not contained in Z. I refer you to Serre’s book for the proof. The subgroup ker ρ is normal and G/ ker ρ is also nilpotent. From the orbitstabiliser theorem. However. There are (p − 1) irreducible representations of dimension p. Choose now a (nonzero) block V0 and let H be its stabiliser within G. any such g generates. abelian. It follows from Corollary 17. But that is also the representation IndG W . Thus.8 Lemma (Little Lemma). Frobenius’ and Mackey’s results can be used to (attempt to) break up induced reps into irreducibles. there isn’t a result as good as the one for pgroups — it need not be true that every irreducible representation is induced from a 1dimensional one. ρ(a) cannot be a scalar. the sum of blocks within an orbit will be a Ginvariant subspace of V . However. then the irreducible representation V must be onedimensional and we are done. 
and they are all induced from 1dimensional representations of the abelian subgroup of elements of the following form 1 ∗ ∗ 0 1 0 0 0 1 18 Burnside’s pa q b theorem (18. abelian and noncentral subgroup. G /Z is central in G/Z. together with Z. Indeed. For arbitrary ﬁnite groups. with m prime to p. which was assumed irreducible. or else it would commute with ρ(g) for any g ∈G. The subgroup H is also nilpotent. so any nonzero Gmap between the two H spaces is an isomorphism. ResH V ) ≥ 1. Proof. Proof of Theorem 17. (if we ignore zero blocks). such that V is induced from H/ ker ρ to G/ ker ρ. Else. 17. V ) = dim HomH (V0 . the following theorem of Brauer’s is a good substitute for that. (17. If G is any ﬁnite group and G = pn m. and then ρ(gag −1 ) = Id for any g. Let ρ : G → V be an irreducible representation of the nilpotent group G. dim V = G/H · dim V0 . I claim that V ∼ IndG V0 . a subgroup of G with the desired properties.Note. that this permutation action of G is transitive.9) Example A nonabelian exampleis the Heisenberg group of uppertriangular 3 × 3 matrices in Z/p with 1’s on the diagonal. Indeed. then two theorems of Sylow assert the following: 40 .
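As a quick computational cross-check of this example (a sketch, not part of the notes; the encoding of a matrix as a triple is an assumption of the sketch), the following code counts the conjugacy classes of the Heisenberg group over Z/p by brute force. The count p² + p − 1 matches the p² one-dimensional characters together with the (p − 1) representations of dimension p, and the squared dimensions sum to p³, the order of the group.

```python
from itertools import product

def heisenberg_classes(p):
    """Count conjugacy classes of the Heisenberg group over Z/p.

    A triple (a, b, c) stands for the matrix with rows
    (1, a, c), (0, 1, b), (0, 0, 1); multiplication below is
    just matrix multiplication in these coordinates.
    """
    def mul(x, y):
        a1, b1, c1 = x
        a2, b2, c2 = y
        return ((a1 + a2) % p, (b1 + b2) % p, (c1 + c2 + a1 * b2) % p)

    def inv(x):
        a, b, c = x
        return (-a % p, -b % p, (a * b - c) % p)

    elements = list(product(range(p), repeat=3))
    seen, classes = set(), 0
    for x in elements:
        if x in seen:
            continue
        classes += 1
        for g in elements:            # the whole orbit of x under conjugation
            seen.add(mul(mul(g, x), inv(g)))
    return classes
```

For p = 3 this returns 11 = 3² + 3 − 1, and 9·1² + 2·3² = 27 = 3³ confirms the sum-of-squares count.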
18 Burnside's pᵃqᵇ theorem

(18.1) Additional comments
Induction from representations of "known" subgroups is an important tool in constructing the character table of a group; Frobenius' and Mackey's results can be used to (attempt to) break up induced reps into irreducibles. For arbitrary finite groups, there isn't a result as good as the one for p-groups — it need not be true that every irreducible representation is induced from a 1-dimensional one. However, the following theorem of Brauer's is a good substitute for that. I refer you to Serre's book for the proof, which is substantially more involved than that of Theorem 17.5.

18.2 Theorem (Brauer). The representation ring of G is spanned over Z by representations induced from 1-dimensional reps of elementary subgroups.

In other words, every irreducible character is an integral linear combination of characters induced in this way. An elementary p-subgroup of G is one which decomposes as a direct product of a cyclic subgroup and a p-group. With standard notations: if G is any finite group and |G| = pⁿ·m, with m prime to p, then two theorems of Sylow assert the following:
1. G contains subgroups of order pⁿ; moreover, every p-subgroup of G is contained in one of these.
2. All these subgroups are conjugate.
The subgroups in (1) are called the Sylow p-subgroups of G. Up to conjugation, the elementary subgroups of G are of the form ⟨x⟩ × S, where x ∈ G and S is a subgroup of a fixed Sylow subgroup of the centraliser of x in G.

Knowing the integral span of the characters within the space of class functions determines the irreducible characters, as the functions of square-norm 1 whose value at 1 ∈ G is positive. So, in principle, the irreducible characters can be found in finitely many steps, once the structure of the group is sufficiently well known.

(18.3) Burnside's theorem
One impressive application of the theory of characters to group theory is the following theorem of Burnside's, proved early in the 20th century; it was not given a purely group-theoretic proof until 1972. Let p, q be prime and a, b arbitrary natural numbers.

18.4 Theorem (Burnside). Every group of order pᵃqᵇ (≠ p, q) contains a non-trivial normal subgroup.

Equivalently: every group of order pᵃqᵇ is solvable. Indeed, noting that every subgroup and quotient group of such a group again has order of this form, we can continue applying the theorem to the normal subgroup and the quotient group, until we produce a chain where all subquotients have orders p or q (and thus are abelian).

Groups containing no non-trivial normal subgroups are called simple; so we will prove that the order of a simple group either is prime or contains at least three distinct prime factors. The group A₅, of order 60 = 2²·3·5, is simple, so this is the end of this road: three distinct prime factors really can occur.

We will keep reformulating the desired property of our groups until we convert it into an integrality property of characters.
1. Every non-trivial representation of a simple group is faithful. (Indeed, the kernel, being a normal subgroup, must be trivial.)
2. If some group element g ≠ 1 acts as a scalar in a representation ρ, then either g is central, or else ρ is not faithful. (Indeed, ρ(ugu⁻¹g⁻¹) = Id, so ugu⁻¹g⁻¹ ∈ ker ρ, for any u ∈ G.) Either way, the group is not simple.
3. So, it suffices to prove the following

18.5 Proposition. If G has order pᵃqᵇ (≠ p, q), there exist an irreducible character χ ≠ 1 of G and an element g ≠ 1 such that χ(g)/χ(1) is a non-zero algebraic integer.

The following two facts show that, for such χ and g, the representation matrix ρ(g) is a scalar, so that, by (1) and (2) above, G is not simple. First, ρ(g) is a scalar iff |χ(g)| = χ(1): indeed, χ(g) is a sum of its χ(1) eigenvalues, which are roots of unity, and the triangle inequality for the norm of the sum is strict, unless all the eigenvalues agree. Second, |χ(g)| = χ(1) if and only if χ(g)/χ(1) is a non-zero algebraic integer; one direction is clear from the first fact, and the other is Proposition 18.6 below. Proposition 18.5 itself follows from Proposition 18.7.

18.6 Proposition. For any character χ of the finite group G and any g ∈ G, if χ(g)/χ(1) is a non-zero algebraic integer, then |χ(g)| = χ(1).

18.7 Proposition. If G has order pᵃqᵇ (≠ p, q), then G contains a conjugacy class C ≠ {1} and a non-trivial irreducible character χ with χ(C) ≠ 0 and χ(C)/χ(1) an algebraic integer.
Proof of (18.6). Let d = χ(1) be the dimension of the representation; then we have

α := χ(g)/χ(1) = (ω₁ + ⋯ + ω_d)/d,

for suitable roots of unity ω_k. Recall that the minimal polynomial of α over Q is

Pα(X) = ∏_β (X − β),

where β ranges over all the distinct transforms of α under the action of the Galois group Γ of the algebraic closure Q̄ of Q.⁵ This is because the product above is clearly Γ-invariant, hence it has rational coefficients, and has α as a root; on the other hand, Pα, having rational coefficients, must admit all the β's as roots, and hence must agree with the product. Moreover, if α is an algebraic integer, then Pα(X) must have integer coefficients: indeed, α must satisfy some monic equations with integral coefficients, among which the one of least degree must be irreducible over Z; Gauss' Lemma asserts that any monic polynomial with integer coefficients is irreducible over Q as soon as it is so over Z, and so it must agree with the minimal polynomial Pα.

Now, the Galois transform of a root of unity is also a root of unity, so every β is an average of d roots of unity, and hence has modulus ≤ 1. Assume α ≠ 0; then no transform β can be zero, and the product of all the β's, being (up to sign) the constant term of Pα, is a non-zero integer, and so has modulus 1 or more. Since each factor has modulus ≤ 1, every |β| must equal 1; in particular |α| = 1, i.e. |χ(g)| = χ(1), as required. (If instead 0 < |α| < 1, the product would have modulus strictly less than 1, and we therefore get a contradiction from algebraic integrality.)

Proof of 18.7. We will find a conjugacy class {1} ≠ C ⊂ G whose order is a power of p, and an irreducible character χ ≠ 1, with χ(C) ≠ 0 and χ(1) not divisible by p. As |C| is a power of p, which does not divide the denominator χ(1), we are tempted to conclude that χ(C)/χ(1) is already an algebraic integer; this would be clear from the prime factorisation, if χ(C) were an actual integer. As it is, each χ(C), being a sum of roots of unity, is an algebraic integer, but we must be more careful and argue as follows. Recall from Lecture 13 that the number |C|·χ(C)/χ(1) is an algebraic integer. Bézout's theorem secures the existence of integers m, n such that m·|C| + n·χ(1) = 1. Then,

m·|C|·χ(C)/χ(1) + n·χ(C) = χ(C)/χ(1),

displaying χ(C)/χ(1) as a sum of algebraic integers, as needed.

It remains to find the desired C and χ. Choose a q-Sylow subgroup H ⊂ G. As proved in Proposition 17.3, this has non-trivial centre. The centraliser of a non-trivial element g ∈ Z(H) contains H, and so, from the orbit–stabiliser theorem, the conjugacy class C of g has p-power order. This gives our C. Now, the column orthogonality relations of the characters, applied to the columns of C and {1}, give

1 + Σ_{χ≠1} χ(C)·χ(1) = 0,

where the χ in the sum range over the non-trivial irreducible characters. If every χ(1) for which χ(C) ≠ 0 were divisible by p, then dividing the entire equation by p would show that 1/p was an algebraic integer; contradiction. So there exists a χ with χ(C) ≠ 0 and χ(1) not divisible by p, as desired.

⁵ It suffices to consider the extension field generated by the roots of unity appearing in α.
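The solvability reformulation above can be illustrated by a small brute-force computation (a sketch, not part of the notes, assuming only the standard permutation models of A₄ and A₅): the derived series of A₄, of order 12 = 2²·3, terminates at the trivial group, while A₅, whose order has three distinct prime factors, equals its own commutator subgroup, consistent with its simplicity.

```python
def compose(g, h):
    """(g∘h)(i) = g[h[i]] for permutations given as tuples."""
    return tuple(g[h[i]] for i in range(len(g)))

def inverse(g):
    inv = [0] * len(g)
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

def generate(gens):
    """Close a finite set of permutations under composition.

    In a finite group, a non-empty subset closed under products
    is automatically a subgroup.
    """
    group = set(gens)
    frontier = list(gens)
    while frontier:
        g = frontier.pop()
        for h in list(group):
            for prod in (compose(g, h), compose(h, g)):
                if prod not in group:
                    group.add(prod)
                    frontier.append(prod)
    return group

def derived(group):
    """Commutator subgroup, generated by all g h g^-1 h^-1."""
    comms = {compose(compose(g, h), compose(inverse(g), inverse(h)))
             for g in group for h in group}
    return generate(comms)
```

For example, A₄ = ⟨(0 1 2), (1 2 3)⟩ has derived series of orders 12, 4, 1, whereas the derived subgroup of A₅ has order 60 again.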
19 Topological groups

In studying the finite-dimensional complex representations of a finite group G, we saw that:
1. they are all unitarisable;
2. they are completely reducible;
3. the characters of irreducible representations form an orthonormal basis of the space of class functions;
4. representations are determined, up to isomorphism, by their characters;
5. the character map on the representation ring R_G is a ring homomorphism, compatible with involutions: χ ↔ χ̄ corresponds to V ↔ V*.
Note that 1 ⇒ 2 and 3 ⇒ 4; moreover, ⟨χ_V|χ_W⟩ = ⟨V|W⟩ := dim Hom_G(V, W), which follows from (1) and (3), while part (5) is obvious from the definition of characters.

We will now study the representation theory of infinite compact groups, and will see that the same properties (1)–(5) hold. Correspondingly, there will be infinitely many isomorphism classes of irreducibles. Point (3) will be amended to account for the fact that the space of class functions is infinite-dimensional, but the characters will still form a complete orthonormal set in the (Hilbert) space of square-integrable class functions. (This is also called a Hilbert space basis.)

There is a good general theory for all compact groups, but it requires two deeper theorems of analysis (existence of the Haar measure and the Peter–Weyl theorem, see below) that go a bit beyond our means. In the special case of compact Lie groups — the groups of isometries of geometric objects, such as SO(n), SU(n), U(n) — the theory is due to Hermann Weyl⁶ and is outrageously successful: unlike the finite group case, we can list all irreducible characters in explicit form! That is the Weyl character formula. For the simplest groups, such as U(1), SU(2) and related ones, these results are quite easy to establish directly, and so we shall study them completely rigorously. It is not much more difficult to handle U(n) for general n, but we only have time for a quick survey.

19.1 Definition. A topological group is a group which is also a topological space, and for which the group operations are continuous. It is called compact if it is so as a topological space.

A representation of a topological group G on a finite-dimensional vector space V is a continuous group homomorphism ρ : G → GL(V), with the topology of GL(V) inherited from the space End(V) of linear self-maps.

19.2 Remark. One can define continuity for infinite-dimensional representations, but more care is needed when defining GL(V) and its topology. Typically, one starts with some topology on V; then one chooses a topology on the space End(V) of continuous linear operators (for instance, the norm topology on bounded operators, when V is a Hilbert space), and then one embeds GL(V) as a closed subset of End(V) × End(V) by the map g ↦ (g, g⁻¹). This defines the structure of a topological group on GL(V).

(19.3) Orthonormality of characters
In establishing the key properties (1) and (3) above, the two ingredients were Schur's Lemma and Weyl's trick of averaging over G — averaging vectors, and with inner products. The first one applies without change to any group. The second must be replaced by integration over G; here, we take advantage of the continuity assumption in our representations, which ensures that all intervening functions in the argument are continuous on G, and can be integrated. The key properties of the average over G were its invariance under left and right translation, and the fact that G has total volume 1 (so that the average of the constant function 1 is 1).

⁶ The unitary case was worked out by Schur.
The second property can always be fixed by normalising the volume, but the first imposed a strong constraint on the volume form we use for integration: the volume element dg on the group must be left- and right-invariant. A (difficult) theorem of Haar asserts that the two constraints determine the volume element uniquely (subject to a reasonable technical constraint: specifically, any compact, Hausdorff group carries a unique bi-invariant regular Borel measure of total mass 1). Lie groups allow for an easier argument, and an easier proof of existence can be given; we'll see that for U(1) and SU(2) the invariant volume can be exhibited directly.

Armed with an invariant integration measure, the main results and their consequences in Lecture 8 follow as before, replacing the average |G|⁻¹ Σ_{g∈G} in the arguments with the integral ∫_G … dg. Thus:

19.4 Proposition. Every finite-dimensional representation of a compact group is unitarisable, and hence completely reducible.

19.5 Theorem. Characters of irreducible representations of a compact group have unit L² norm, and characters of non-isomorphic irreducibles are orthogonal.

19.6 Corollary. A representation is irreducible iff its character has norm 1.

Completeness of characters is a different matter. As opposed to the case of finite groups, the space of class functions is now infinite-dimensional, and so completeness cannot be self-evident. Its formulation now is that the characters form a Hilbert space basis of the space of square-integrable class functions; equivalently, this asserts that any continuous class function which is orthogonal to every irreducible character is null. In the case of finite groups, we found enough representations by studying the regular representation. This method is also applicable to the compact case, but relies here on a fundamental theorem that we state without proof:

19.7 Theorem (Peter–Weyl). The space of square-integrable functions on G is the Hilbert space direct sum, over the finite-dimensional irreducible representations V,

L²(G) ≅ ⊕̂_V End(V).

The map from left to right sends a function f on G to the collection of operators ∫_G f(g)·ρ_V(g) dg; the inverse map sends φ ∈ End(V) to the function g ↦ Tr_V(ρ_V(g)*·φ). Moreover, the inner product on End(V) corresponding to L² on G is given by ⟨φ|ψ⟩ = Tr_V(φ*ψ) · dim V.

In addition to the orthogonality properties, this asserts the existence of an "ample supply" of irreducible representations. However, we shall make no use of the Peter–Weyl theorem here: in the cases we study, the completeness of characters will be established by direct construction of representations.

(19.8) A compact group: U(1)
To illustrate the need for a topology and the restriction to continuous representations, we start off by ignoring these aspects. As an abelian group, U(1) is isomorphic to R/Z via the map x ↦ exp(2πix). Now, R is a vector space over Q, and general nonsense (Hamel's basis theorem) asserts that it must have a basis A = {α} ⊂ R as a Q-vector space; we can choose our first basis element to be 1. Thus, we have isomorphisms of abelian groups

R ≅ Q ⊕ ⊕_{α≠1} Q·α,   R/Z ≅ Q/Z × ⊕_{α≠1} Q·α.

It's easy to see that the choice of an integer n and of a complex number z_α for each α ≠ 1 in A defines a one-dimensional complex representation of the second group, by specifying

(x, (x_α)) ↦ exp(2πinx) · ∏_{α≠1} exp(2πi·z_α·x_α).
Notice that, for every element in our group, all but finitely many coordinates x_α must vanish, so that almost all factors in the product are 1.

19.9 Remark. A choice of complex number for each element of a basis of R over Q is not a sensible collection of data, and it turns out that this does not even succeed in covering all 1-dimensional representations of R/Z. The subgroup Q/Z turns out to have a more interesting representation theory, even when continuity is ignored: its one-dimensional representations are classified by the profinite completion Ẑ of Z.

Things will improve dramatically when we impose the continuity requirement: we will see in the next lecture that the only surviving datum is the integer n. Henceforth, "representation" of a topological group will mean continuous representation. We now classify the finite-dimensional representations of U(1).

19.10 Theorem. A continuous 1-dimensional representation ϕ : U(1) → C^× has the form z ↦ zⁿ, for some integer n.

For the proof, we will need the following

19.11 Lemma. Any closed, proper subgroup of U(1) consists only of roots of 1; more precisely, the closed proper subgroups of U(1) are the cyclic subgroups μ_n of nth roots of unity (n ≥ 1).

Proof of the Lemma. If q ∈ U(1) is not a root of unity, then its powers are dense in U(1) (exercise, using the pigeonhole principle); so a closed, proper subgroup consists of roots of unity. If it contained roots of unity of arbitrarily high order, these would again generate a dense subgroup of U(1). So, among its elements other than 1, there must be one of smallest argument (in absolute value), or else there would be a sequence converging to 1; the root of unity of smallest argument is then the generator.

Proof of Theorem 19.10. A continuous map ϕ : U(1) → C^× has compact, hence bounded, image. The image must lie on the unit circle, because the integral powers of any other complex number form an unbounded sequence. So ϕ is a continuous homomorphism from U(1) to itself. Now, ker ϕ is a closed subgroup of U(1). If ker ϕ is the entire group, then ϕ ≅ 1 and the theorem holds with n = 0. If ker ϕ = μ_n (n ≥ 1), we will now show that ϕ(z) = z^{±n}.

Define a continuous function ψ : [0, 2π/n] → R, ψ(θ) = arg ϕ(e^{iθ}): start with ψ(0) = 0, which is one value of the argument of ϕ(1) = 1, and then choose the argument so as to make the function continuous. Because ker ϕ = μ_n, ψ must be injective on [0, 2π/n), so it must be monotonically increasing or decreasing (Intermediate Value Theorem), and we must have ψ(2π/n) = ±2π: the value zero is ruled out by monotonicity, and any other multiple of 2π would lead to an intermediate value of θ with ϕ(e^{iθ}) = 1. Because ϕ is a homomorphism and e^{2πi/n} ∈ ker ϕ, ϕ(e^{2πi/mn}) must be an mth root of unity, and so ψ({2πk/mn}) ⊂ {±2πk/m}, for all m and all k = 0, …, m. By monotonicity, these m + 1 values must be taken exactly once and in the natural order, with the same choice of sign for all k; so ψ(2πk/mn) = ±2πk/m. By continuity, ψ(θ) = ±n·θ; in other words, ϕ(z) = z^{±n}, where ± denotes the sign of ψ(2π/n), as claimed.

19.12 Remark. Complete reducibility now shows that continuous finite-dimensional representations of U(1) are in fact algebraic: that is, the entries in a matrix representation are Laurent polynomials in z. This is not an accident; it holds for a large class of topological groups (the compact Lie groups). The character theory is closely tied to the theory of Fourier series. Henceforth, we parametrise U(1) by the argument θ.
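The pigeonhole exercise quoted in the proof of the Lemma can be probed numerically (a sketch, not part of the notes; the particular q below is an illustrative choice): for q not a root of unity, the minimum distance from the powers q, q², …, q^N to 1 is forced below roughly 2π/N, since two of the N + 1 points 1, q, …, q^N must fall within an arc of length 2π/N of each other.

```python
import cmath
import math

def min_distance_to_1(q, N):
    """min |q**k - 1| over 1 <= k <= N, by direct enumeration."""
    z, best = 1, 2.0
    for _ in range(N):
        z *= q
        best = min(best, abs(z - 1))
    return best

# An angle that is (to machine precision) an irrational multiple of 2*pi:
q = cmath.exp(2j * math.pi * math.sqrt(2))
```

Running `min_distance_to_1(q, N)` for growing N shows the powers creeping back arbitrarily close to 1, which is the engine behind their density in U(1).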
(19.13) Character theory
The homomorphisms ρ_n : U(1) → C^×, ρ_n(z) = zⁿ, form the complete list of irreducible representations of U(1), as we shall now see. The functions ρ_n are sometimes called Fourier modes, and their finite linear combinations are the Fourier polynomials. Clearly, they are orthonormal in the inner product

⟨ϕ|ψ⟩ = (1/2π) ∫₀^{2π} ϕ(θ)̄·ψ(θ) dθ,

which corresponds to "averaging over U(1)" (z = e^{iθ}); hence, by orthogonality, their characters (which we abusively denote by the same symbol) are linearly independent. On the unit circle, z̄ = z⁻¹, so the Fourier polynomials are also the polynomials in z and z̄; in other words, the ρ_n form a basis of the polynomial functions on U(1) ⊂ R², which can also be expressed as polynomials in x, y.

Recall that complete reducibility of representations follows by Weyl's unitary trick, averaging any given inner product by integration on U(1). We can now state our main theorem. As the space of (continuous) class functions is infinite-dimensional (U(1) being abelian, it coincides with the space of its conjugacy classes), it requires a bit of care to state the final part of our main theorem, that the characters form a "basis" of the space of class functions; the good setting for that is the Hilbert space of square-integrable functions, which we discuss below.

19.14 Theorem.
(i) The functions ρ_n form a complete list of irreducible characters of U(1).
(ii) Every finite-dimensional representation V of U(1) is isomorphic to a sum of the ρ_n. Its character χ_V is a Fourier polynomial, and the multiplicity of ρ_n in V is the inner product ⟨ρ_n|χ_V⟩.

The proof of most of this is fairly easy; the difficult point is the completeness claim in (i). One method is to derive it from a powerful theorem of Weierstraß', which says that the Fourier polynomials are dense in the space of continuous functions, in the sense of uniform convergence. For now, let us just note an algebraic version of the result.

19.15 Proposition. Orthonormality of the ρ_n implies the Fourier expansion formula for the Fourier polynomials: every Fourier polynomial f is the sum Σ_n ⟨ρ_n|f⟩·ρ_n. In this case, the Fourier series is a finite sum, and no analysis is needed.

(19.16) Digression: Fourier series
The Fourier modes ρ_n form a complete orthonormal set (orthonormal basis) of the Hilbert space L²(U(1)). This means that every square-integrable function f ∈ L² has a (Fourier) series expansion

f(θ) = Σ_{n=−∞}^{∞} f_n·e^{inθ} = Σ_{n=−∞}^{∞} f_n·ρ_n,   (*)

which converges in mean square, with the Fourier coefficients f_n given by the formula

f_n = ⟨ρ_n|f⟩ = (1/2π) ∫₀^{2π} e^{−inθ}·f(θ) dθ.   (**)

Recall that mean-square convergence signifies that the partial sums approach f in the distance defined by the inner product. Clearly, for any given finite collection S of indexes n, the sum of the terms in (*) with n ∈ S is the orthogonal projection of f onto the span of the ρ_n, n ∈ S. Hence, any finite sum of squared norms |f_n|² of the Fourier coefficients in (*) is bounded by ‖f‖²; so the series in (*) converges in mean square, and the limit g has the property that f − g is orthogonal to each ρ_n — in other words, g is the projection of f onto the span of all the ρ_n.
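The orthonormality of the ρ_n and the coefficient formula (**) can be spot-checked numerically (a sketch, not part of the notes). For integrands that are Fourier polynomials, the uniform Riemann sum below computes the inner product essentially exactly, since a sum of e^{ikθ} over M equispaced points vanishes unless M divides k.

```python
import cmath
import math

def inner(phi, psi, M=4096):
    """<phi|psi> = (1/2pi) ∫ conj(phi(θ)) psi(θ) dθ, by a uniform Riemann sum."""
    s = 0
    for j in range(M):
        t = 2 * math.pi * j / M
        s += phi(t).conjugate() * psi(t)
    return s / M

def rho(n):
    """The Fourier mode ρ_n(e^{iθ}) = e^{inθ}."""
    return lambda t: cmath.exp(1j * n * t)
```

For the Fourier polynomial f = 2ρ_{−1} + 3ρ₂, the formula (**) recovers the coefficients 2 and 3 and returns 0 for every other mode.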
The more difficult part is to show that any f ∈ L² which is orthogonal to all the ρ_n vanishes; then g = f above, and the Fourier series converges to f. This follows from the theorem of Weierstraß' just quoted. (The result holds for continuous functions on any compact subset of R^N; here, N = 2.) Approximating a candidate f, orthogonal to all ρ_n, by a sequence of Fourier polynomials p_k leads to a contradiction, because ‖f − p_k‖² = ‖f‖² + ‖p_k‖² ≥ ‖f‖², and yet uniform convergence certainly implies convergence in mean square. We summarise the basic facts in the following

19.17 Theorem.
(i) Any continuous function on U(1) can be uniformly approximated by finite linear combinations of the ρ_n.
(ii) Any square-integrable function f ∈ L²(U(1)) has a series expansion f = Σ f_n·ρ_n, converging in mean square, with Fourier coefficients f_n given by (**).

20 The group SU(2)

We move on to the group SU(2). We will describe its conjugacy classes and find the bi-invariant volume form, whose existence implies the unitarisability and complete reducibility of its continuous finite-dimensional representations; in the next lecture, we will list the irreducibles with their characters.

Giving away the plot: observe that SU(2) acts on the space P of polynomials in two variables z₁, z₂, by its natural linear action on the coordinates. We can decompose P into the homogeneous pieces P_n, preserved by the action of SU(2): thus, the space P₀ of constants is the trivial one-dimensional representation, the space P₁ of linear functions the standard 2-dimensional one, and so on. Then P₀, P₁, P₂, … is the complete list of irreducible representations of SU(2), up to isomorphism, and every continuous finite-dimensional representation is isomorphic to a direct sum of those.

(20.1) SU(2) and the quaternions
By definition, SU(2) is the group of complex 2×2 matrices preserving the complex inner product and with determinant 1; the group operation is matrix multiplication. Explicitly,

SU(2) = { ( u v ; −v̄ ū ) : u, v ∈ C, |u|² + |v|² = 1 }.

Geometrically, this can be identified with the three-dimensional unit sphere in C².

It is useful to replace C² with Hamilton's quaternions H. H is four-dimensional, generated over R by elements i, j and k := ij, and spanned by 1, i, j, k, with i, j satisfying the relations i² = j² = −1, ij = −ji. The conjugate of a quaternion q = a + bi + cj + dk is q̄ := a − bi − cj − dk, and the quaternion norm is

‖q‖² = q·q̄ = q̄·q = a² + b² + c² + d².

The relation (q₁q₂)̄ = q̄₂·q̄₁ shows that

‖q₁q₂‖² = q₁q₂·(q₁q₂)̄ = q₁q₂·q̄₂q̄₁ = ‖q₂‖²·q₁q̄₁ = ‖q₁‖²·‖q₂‖²;

in particular, ‖·‖² : H → R is multiplicative, and the "unit quaternions" (the quaternions of unit norm) form a group under multiplication. Direct calculation establishes the following.

20.2 Proposition. Sending ( u v ; −v̄ ū ) to q = u + vj gives an isomorphism of SU(2) with the multiplicative group of unit quaternions.
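Proposition 20.2 amounts to the identity (u₁ + v₁j)(u₂ + v₂j) = (u₁u₂ − v₁v̄₂) + (u₁v₂ + v₁ū₂)j, which matches the product of the corresponding matrices. A short numerical sketch (not part of the notes; the tuple encoding of a + bi + cj + dk is an assumption of the sketch):

```python
def qmul(x, y):
    """Hamilton product of quaternions encoded as tuples (a, b, c, d)."""
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def norm2(q):
    """The quaternion norm squared a^2 + b^2 + c^2 + d^2."""
    return sum(t * t for t in q)

def to_quaternion(u, v):
    """The map of Proposition 20.2: first row (u, v) of the matrix -> u + vj."""
    return (u.real, u.imag, v.real, v.imag)
```

The tests below check the defining relations i² = −1, ij = −ji = k, the multiplicativity of the norm, and that matrix multiplication in SU(2) matches quaternion multiplication.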
(20.3) Conjugacy classes
A theorem from linear algebra asserts that unitary matrices are diagonalisable in an orthonormal eigenbasis. The diagonalised matrix is then also unitary, so its diagonal elements are complex numbers of unit norm; these numbers are the eigenvalues of the matrix, and they are only determined up to reordering. We therefore get a bijection of conjugacy classes in the unitary group with unordered N-tuples of complex numbers of unit norm:

U(N)/U(N) ↔ U(1)^N/S_N,

where U(1)^N is identified with the group of diagonal unitary N × N matrices, and the symmetric group S_N acts by permuting the N factors of U(1)^N. We use the quotient topologies on both sides; the set of conjugacy classes is given the quotient topology. The map from right to left is defined by the inclusion U(1)^N ⊂ U(N) and is therefore continuous, and a general theorem from topology ensures that it is in fact a homeomorphism: the result I am alluding to asserts that a continuous bijection from a compact space to a Hausdorff space is a homeomorphism.

Restricting to SU(N) imposes the det = 1 restriction on the right side U(1)^N, and we then get a bijection

SU(N)/SU(N) ↔ S(U(1)^N)/S_N,

where the S on the right, standing for "special", indicates its subgroup of determinant 1 matrices. However, there is a potential problem in sight: it might happen, in principle, that two matrices in SU(N) will be conjugate in U(N), but not in SU(N).⁷ In the case at hand, we are lucky: the scalar matrices U(1)·Id ⊂ U(N) are central, so their conjugation action is trivial, and, for any matrix A ∈ U(N) and any Nth root r of det A, we have r⁻¹A ∈ SU(N), and conjugating by the latter has the same effect as conjugation by A. Thus, matrices in SU(2) are conjugate iff their eigenvalues agree up to order.

In the case of SU(2), the eigenvalues form a pair {z, z⁻¹}, with z on the unit circle, so they are the roots of X² − (z + z⁻¹)X + 1, in which the middle coefficient ranges over [−2, 2]. We therefore obtain the

20.4 Proposition. The normalised trace ½·Tr : SU(2) → C gives a homeomorphism of the set of conjugacy classes in SU(2) with the interval [−1, 1].

The quaternion picture provides a good geometric model of this map: the half-trace becomes the projection q ↦ a on the real axis R ⊂ H. The conjugacy classes are therefore the 2-dimensional spheres of constant latitude on the unit sphere, plus the two poles; the latter correspond to the central elements ±I₂ ∈ SU(2).

It will be more profitable, however, to parametrise the interval [−1, 1] by the "latitude" φ ∈ [0, π], rather than the real part cos φ. We can in fact let φ range over [0, 2π], provided we remember that our functions are invariant under φ ↔ −φ and periodic with period 2π. We call such functions of φ Weyl-invariant; functions which change sign under that same symmetry are anti-invariant, or odd.

⁷ For a general situation of a subgroup H ⊂ G, this deserves careful consideration.
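As a numerical illustration of this picture (a sketch, not part of the notes): for a random unit quaternion a + bi + cj + dk, the associated SU(2) matrix has real trace 2a, and its eigenvalues are the unit-modulus roots of X² − 2aX + 1, with product 1.

```python
import cmath
import math
import random

def su2_matrix(a, b, c, d):
    """Unit quaternion a+bi+cj+dk -> the matrix [[u, v], [-conj(v), conj(u)]]."""
    u, v = complex(a, b), complex(c, d)
    return [[u, v], [-v.conjugate(), u.conjugate()]]

random.seed(1)
x = [random.gauss(0, 1) for _ in range(4)]
r = math.sqrt(sum(t * t for t in x))
a, b, c, d = (t / r for t in x)           # a random unit quaternion

m = su2_matrix(a, b, c, d)
tr = m[0][0] + m[1][1]                    # equals 2a, a real number
disc = cmath.sqrt(tr * tr - 4)            # roots of X^2 - tr*X + 1
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
```

The half-trace a alone pins down the conjugacy class: the eigenvalue pair is cos φ ± i sin φ with cos φ = a.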
(20.5) Characters as Laurent polynomials
The preceding discussion implies that the characters of continuous SU(2)-representations are continuous functions on [−1, 1], or, equivalently, Weyl-invariant functions of φ. Notice that φ parametrises the subgroup of diagonal matrices

( e^{iφ} 0 ; 0 e^{−iφ} ) = ( z 0 ; 0 z⁻¹ ) ∈ SU(2),   z = e^{iφ},

isomorphic to U(1), and the transformation φ ↔ −φ is induced by conjugation by the matrix ( 0 i ; i 0 ) ∈ SU(2). The character of an SU(2)-representation V will be the function of z ∈ U(1)

χ_V(z) = Tr_V ( z 0 ; 0 z⁻¹ ),

invariant under z ↔ z⁻¹. A representation of SU(2) restricts to one of our U(1), and we know from last lecture that the characters of the latter are polynomials in z and z⁻¹; such functions are called Laurent polynomials. When convenient, we shall re-express χ_V in terms of φ and abusively write χ_V(φ). We therefore obtain the

20.6 Proposition. Characters of SU(2)-representations are even Laurent polynomials in z, that is, invariant under z ↔ z⁻¹.

(20.7) The volume form and Weyl's Integration formula for SU(2)
For finite groups, averaging over the group was used to prove complete reducibility, to define the inner product of characters, and to prove the orthogonality theorems. For compact groups (such as U(1)), integration over the group must be used instead. In both cases, the essential property of the operation is its invariance under left and right translations on the group; a secondary property (which is arranged by appropriate scaling) is that the average or integral of the constant function 1 is 1. To state this more precisely: the linear functional sending a continuous function ϕ on G to ∫_G ϕ(g) dg is invariant under left and right translations of ϕ.

We thus need a volume form over SU(2) which is invariant under left and right translations, normalised to total volume 1. Our geometric model for SU(2) as the unit sphere in R⁴ allows us to find it directly, without appealing to Haar's (difficult) general result. Note, indeed, that the actions of SU(2) on H by left and right multiplication preserve the quaternion norm, hence the Euclidean distance; so the usual Euclidean volume element on the unit sphere must be invariant under both left and right multiplication by SU(2) elements.

20.8 Remark. The right×left action of SU(2) × SU(2) on R⁴ = H, whereby α × β sends q ∈ H to αqβ⁻¹, gives a homomorphism h : SU(2) × SU(2) → SO(4). (The action preserves orientation, because the group is connected.) Clearly, (−Id, −Id) acts as the identity, so h factors through the quotient SU(2) × SU(2)/{±(Id, Id)}, and indeed turns out to give an isomorphism of the latter with SO(4). We will discuss this in Lecture 22.

To understand the inner product of characters, we are interested in integrating class functions over SU(2). This is made explicit by the following theorem, whose importance cannot be overestimated: as we shall see, it implies the character formulae for all irreducible representations of SU(2). Let dg be the Euclidean volume form on the unit sphere in H, normalised to total volume 1, and let ∆(φ) = e^{iφ} − e^{−iφ} be the Weyl denominator.

20.9 Theorem (Weyl Integration Formula). For a continuous class function f on SU(2), we have

∫_{SU(2)} f(g) dg = (1/2)·(1/2π) ∫₀^{2π} f(φ)·|∆(φ)|² dφ = (1/π) ∫₀^{2π} f(φ)·sin²φ dφ.
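Before the proof, the formula can be sanity-checked numerically (a sketch, not part of the notes): the total mass comes out as 1, the squared norm of the class function 2cos φ (the trace of the standard representation) as 1, and that of 4cos²φ (the trace of its tensor square) as 2. The uniform Riemann sum is essentially exact here, since the integrands are trigonometric polynomials of low degree.

```python
import math

def su2_integral(f, M=4096):
    """∫_SU(2) f dg for a class function f(φ), via (1/π) ∫_0^{2π} f(φ) sin²φ dφ."""
    h = 2 * math.pi / M
    return sum(f(j * h) * math.sin(j * h) ** 2 for j in range(M)) * h / math.pi
```

Squared norms of real class functions are computed as su2_integral of their squares.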
Proof. In the presentation of SU(2) as the unit sphere in H, the conjugacy classes are the spheres of constant latitude φ, and the class function f is constant on each of them. The volume of a spherical slice of width dφ is C·sin²φ dφ, with the constant C normalised by the "total volume 1" condition

∫₀^π C·sin²φ dφ = 1,

whence C = 2/π. Thus, the integral over SU(2) can be computed by restriction to the U(1) of diagonal matrices, after correcting the measure dφ/2π by the factor ½·|∆(φ)|² = 2 sin²φ; this agrees with C·sin²φ dφ on [0, π], by the φ ↔ −φ symmetry. We are abusively writing f(φ) for the value of f at

( e^{iφ} 0 ; 0 e^{−iφ} ).

21 Irreducible characters of SU(2)

Let us check the irreducibility of some representations.

• The trivial 1-dimensional representation is obviously irreducible. Its norm is 1, due to our normalisation of the measure.

• The character of the standard representation on C² is e^{iφ} + e^{−iφ} = 2 cos φ. Its norm is 1:

(4/π) ∫₀^{2π} cos²φ·sin²φ dφ = (1/π) ∫₀^{2π} (2 cos φ sin φ)² dφ = (1/π) ∫₀^{2π} sin²2φ dφ = 1,

so C² is irreducible. Of course, this can also be seen directly (no invariant lines).

• The character of the tensor square C² ⊗ C² is 4 cos²φ, so its square norm is

(1/π) ∫₀^{2π} 16 cos⁴φ·sin²φ dφ = (1/π) ∫₀^{2π} 4 cos²φ·sin²2φ dφ
  = (1/π) ∫₀^{2π} (sin 3φ + sin φ)² dφ
  = (1/π) ∫₀^{2π} (sin²3φ + sin²φ + 2 sin φ sin 3φ) dφ = 1 + 1 = 2,

so this is reducible. Indeed, (C²)^{⊗2} = Sym²C² ⊕ Λ²C². Λ²C² is the trivial 1-dim representation, since its character is 1 (the product of the eigenvalues), and so Sym²C² is a new, 3-dimensional irreducible representation, with character 2 cos 2φ + 1 = z² + 1 + z⁻². Moreover, its character has norm 1, in agreement with the theorem below.

So far we have found the irreducible representations C = Sym⁰C², C² = Sym¹C², Sym²C², with characters 1, z + z⁻¹, z² + 1 + z⁻². This surely lends credibility to the following.

21.1 Theorem. The character χ_n of SymⁿC² is zⁿ + z^{n−2} + … + z^{2−n} + z^{−n}, and so each SymⁿC² (n ≥ 0) is an irreducible representation of SU(2). Moreover, this is the complete list of irreducibles.

21.2 Remark. Note that SymⁿC² has dimension n + 1.
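Both claims about χ_n — the closed form sin((n+1)φ)/sin φ, which appears in the proof below, and the unit norm in the Weyl integration formula — can be spot-checked numerically (a sketch, not part of the notes):

```python
import math

def chi(n, t):
    """Character of Sym^n C^2 at latitude φ = t: sum of z^{n-2k}, z = e^{it}."""
    return sum(math.cos((n - 2 * k) * t) for k in range(n + 1))

def sq_norm(n, M=4096):
    """Squared norm of chi_n in (1/π) ∫_0^{2π} |chi_n|² sin²φ dφ."""
    h = 2 * math.pi / M
    return sum(chi(n, j * h) ** 2 * math.sin(j * h) ** 2 for j in range(M)) * h / math.pi
```

The imaginary parts of the summands z^{n−2k} cancel in conjugate pairs, so the real cosine sum above equals the character.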
. But this is impossible. χn 2 = 1 4π 2π 4 sin2 [(n + 1)φ] dφ = 0 1 · π = 1. . Symn C2 ∼ Pn . If χ was a new irreducible character. An isomorphism from Pn to Symn C2 is deﬁned by sending z1 z2 to the symmetrisation ⊗(n−p) . e⊗n in the 1 2 ⊗(n−p) nth tensor power (C2 )⊗n . The next example is χ1 · χn = (z + z −1 ) z n+1 − z −n−1 z n+2 + z n − z −n − z −2−n = = χn+1 + χn−1 .6) Tensor product of representations Knowledge of the characters allows us to ﬁnd the decomposition of tensor products. . e1 ⊗ e2 . (21. Equality of the representations up to isomorphism can also be seen by checking p n−p the character of a diagonal matrix g.5 Remark. The standard basis vectors e1 . To see completeness. This shows that our isomorphism commutes with the of (ue1 + ve2 ) v 1 ¯ 2 SU(2)action. as per the integration formula). (21. z u v uz1 + vz2 · 1 = . . (Combined with the weight ∆2 . χ(z)∆(z) would have to be orthogonal to all antiinvariant Laurent polynomials.4 Proposition. By linearity. 21. for which the z1 z2 are eigenvectors. As SU(2)representations. e2 scale under g = z 0 by the factors z ±1 . π proving irreducibility. 21. More generally. if n = 0 of course the answer is χ1 . Hence. For any character χ. the space of homogeneous poly= nomials of degree n in two variables. We have z n+1 − z −n−1 . The vectors appearing in the symmetrisation of e⊗p ⊗ e2 are 1 tensor products containing p factors of e1 and n − p factors of e2 . z −n } and the character formula follows. Thus. z2 by transforming the variables in the standard way.Proof. −¯ u v ¯ z2 −¯z1 + uz2 v ¯ This preserves the decomposition of P as the direct sum of the spaces Pn of homogeneous polynomials of degree n. each of these is an eigenvectors for g. the eigenvalues of g on Symn are {z n . p n−p Proof. Obviously. with eigenvalue z p · z n−p . 0 z −1 ⊗(n−1) the basis vectors of Symn C2 arise by symmetrising the vectors e⊗n . . z n−2 . in the standard inner product on the circle. in some order. 
this will send (uz1 + vz2 )p · (−¯z1 + uz2 )n−p to the symmetrisation v ¯ of e⊗p ⊗ e2 1 ⊗p ⊗ (−¯e + ue )⊗(n−p) . tensoring with C does not change a representation. this is the correct SU(2) inner product up to scale. we have 51 . z − z −1 z − z −1 provided n ≥ 1. Now. χn (z) = z − z −1 whence χn (z)2 ∆(z)2 = z n+1 − z −n−1 2 = 4 sin2 [(n + 1)φ]. as the orthogonal complement of the odd polynomials is spanned by the even ones.3) Another look at the irreducibles The group SU(2) acts linearly on the vector space P of polynomials in z1 .5). the function χ(z)∆(z) is an odd Laurent polynomial. . observe that the functions ∆(z)χn (z) span the vector space of odd Laurent polynomials (§20. . .
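The identity χ1 · χn = χ_{n+1} + χ_{n−1} can be checked mechanically by treating characters as Laurent polynomials in z. Below is a minimal sketch in Python; the helper names `chi`, `mul` and `add` are our own, not anything from the notes.

```python
# Characters of SU(2) on diag(z, 1/z), as Laurent polynomials {exponent: coefficient}.

def chi(n):
    # chi_n = z^n + z^(n-2) + ... + z^(-n), the character of Sym^n C^2
    return {n - 2 * k: 1 for k in range(n + 1)}

def mul(f, g):
    # product of two Laurent polynomials
    h = {}
    for a, ca in f.items():
        for b, cb in g.items():
            h[a + b] = h.get(a + b, 0) + ca * cb
    return {e: c for e, c in h.items() if c != 0}

def add(f, g):
    # sum of two Laurent polynomials
    h = dict(f)
    for e, c in g.items():
        h[e] = h.get(e, 0) + c
    return {e: c for e, c in h.items() if c != 0}

# chi_1 * chi_n = chi_{n+1} + chi_{n-1}, for n >= 1:
for n in range(1, 8):
    assert mul(chi(1), chi(n)) == add(chi(n + 1), chi(n - 1))
```

The same dictionaries make the eigenvalue list {z^n, z^{n−2}, ..., z^{−n}} of Proposition 21.4 explicit.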
21.7 Theorem (Clebsch–Gordan). If p ≥ q, then

    Sym^p C² ⊗ Sym^q C² ≅ ⊕_{k=0}^{q} Sym^{p+q−2k} C².

Proof.

    χp(z) · χq(z) = (z^{p+1} − z^{−p−1})/(z − z^{−1}) · (z^q + z^{q−2} + ··· + z^{−q})
                  = Σ_{k=0}^{q} (z^{p+q+1−2k} − z^{2k−p−q−1})/(z − z^{−1})
                  = Σ_{k=0}^{q} χ_{p+q−2k}(z);

the condition p ≥ q ensures that there are no cancellations in the sum. □

21.8 Remark. Let us define formally χ_{−n} := −χ_{n−2}, a "virtual character"; thus χ0 = 1, the trivial character, χ_{−1} = 0 and χ_{−2} = −1. The decomposition formula then becomes

    χp · χq = χ_{p+q} + χ_{p+q−2} + ··· + χ_{p−q},

regardless whether p ≥ q or not.

(21.9) Multiplicities
The multiplicity of the irreducible representation Symⁿ in a representation with character χ(z) can be computed as the inner product ⟨χ, χn⟩, with the integration (20.9). The following recipe, which exploits the simple form of the functions ∆(z)χn(z) = z^{n+1} − z^{−n−1}, is sometimes helpful.

21.10 Proposition. For any character χ, ⟨χ, χn⟩ is the coefficient of z^{n+1} in ∆(z)χ(z).
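The Clebsch–Gordan rule, together with the virtual-character convention of Remark 21.8, can be verified for small p, q by the same Laurent-polynomial bookkeeping. This is a sketch under our own naming (`chi`, `mul`, `add` are not from the notes); the point is that the formula holds for all p, q once χ_{−n} = −χ_{n−2} is adopted.

```python
# Clebsch-Gordan with virtual characters: chi_p * chi_q = sum_{k=0}^{q} chi_{p+q-2k}.

def chi(n):
    # virtual-character convention: chi_{-1} = 0 and chi_{-n} = -chi_{n-2}
    if n == -1:
        return {}
    if n < 0:
        return {e: -c for e, c in chi(-n - 2).items()}
    return {n - 2 * k: 1 for k in range(n + 1)}

def mul(f, g):
    h = {}
    for a, ca in f.items():
        for b, cb in g.items():
            h[a + b] = h.get(a + b, 0) + ca * cb
    return {e: c for e, c in h.items() if c != 0}

def add(f, g):
    h = dict(f)
    for e, c in g.items():
        h[e] = h.get(e, 0) + c
    return {e: c for e, c in h.items() if c != 0}

# The decomposition formula holds regardless of whether p >= q:
for p in range(5):
    for q in range(5):
        rhs = {}
        for k in range(q + 1):
            rhs = add(rhs, chi(p + q - 2 * k))
        assert mul(chi(p), chi(q)) == rhs
```

For p < q the virtual characters with negative index cancel exactly the spurious terms, which is the content of the remark.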
22 Some SU(2)-related groups
We now discuss three groups closely related to SU(2) and describe their irreducible representations and characters. These are SO(3), SO(4) and U(2). We start with the following

22.1 Proposition. SO(3) ≅ SU(2)/{±Id}, SO(4) ≅ SU(2) × SU(2)/{±(Id, Id)}, and U(2) ≅ U(1) × SU(2)/{±(Id, Id)}.

Proof. Recall the left×right multiplication action of SU(2), viewed as the space of unit norm quaternions, on H ≅ R⁴; this gives a homomorphism from SU(2) × SU(2) to SO(4). Restricting the left×right action to the diagonal copy of SU(2) leads to the conjugation action of SU(2) on the space of pure quaternions, spanned by i, j, k. Rotations in the i, j plane are implemented by conjugation by the elements a + bk, and similarly with any permutation of i, j, k; I claim that these rotations generate the full rotation group SO(3): indeed, we can take any orthonormal pure quaternion frame to any other one by a conjugation. This constructs a surjective homomorphism from SU(2) to SO(3) (we see surjectivity and construct the isomorphism at the same time), and we already know that the kernel is {±Id}. So we have the assertion about SO(3).

Returning to SO(4): note that α × β sends 1 ∈ H to αβ^{−1}, so α × β fixes 1 iff α = β; and the pair α × α fixes every other quaternion iff α is central in SU(2), that is, α = ±Id. So the kernel of our homomorphism is {±(Id, Id)}. For surjectivity, we can take 1 ∈ H to any other unit vector by a left multiplication, and any orthonormal pure quaternion frame to any other one by a conjugation; combining these, it follows that we can take any orthonormal 4-frame to any other one by a suitable conjugation followed by a left multiplication.

Finally, the assertion about U(2) is clear, as both U(1) and SU(2) sit naturally in U(2) (the former as the scalar matrices) and intersect at {±Id}. □

22.2 Proposition. The complete list of irreducible representations of SO(3) is Sym^{2n} C², n ≥ 0.

This formulation is slightly abusive, as C² itself is not a representation of SO(3), but only of SU(2) (−Id acts as −Id). But the sign problem is fixed on all even symmetric powers, which are genuine representations of SO(3). For example, Sym² C² is the standard 3-dimensional representation of SO(3). (There is little choice about it, as it is the only 3-dimensional representation on the list.) The image, in our homomorphism, of our copy {diag[z, z^{−1}]} ⊂ SU(2) of U(1) is the family of matrices

    [ 1     0        0    ]
    [ 0   cos ϕ   −sin ϕ  ]
    [ 0   sin ϕ    cos ϕ  ]

with (caution!) ϕ = 2φ. These cover all the conjugacy classes in SO(3), and the irreducible characters of SO(3) restrict to this family as 1 + 2 cos ϕ + 2 cos(2ϕ) + ··· + 2 cos(nϕ).

(22.3) Representations
It follows that continuous representations of the three groups in Proposition 22.1 are the same as continuous representations of SU(2), SU(2) × SU(2) and SU(2) × U(1), respectively, which send −Id, resp. −(Id, Id), to the identity matrix.

22.4 Corollary. The group isomorphisms in Proposition 22.1 are in fact homeomorphisms.

It is not difficult (although a bit painful) to prove this directly from the construction, by checking that the inverse map is continuous (using the quotient topology on the quotient groups); but it follows more easily from a topological fact of independent interest: any continuous bijection from a compact space to a Hausdorff space is a homeomorphism.

(22.5) Representations of a product
Handling the other two groups, SO(4) and U(2), given our knowledge of U(1) and SU(2), requires the following

22.6 Lemma. For the product G × H of two compact groups G and H, the complete list of irreducible representations consists of the tensor products V ⊗ W, as V and W range over the irreducibles of G and H.

Proof using character theory. From the properties of the tensor product of matrices, it follows that the character of V ⊗ W at the element g × h is χ_{V⊗W}(g × h) = χV(g) · χW(h). Now, a conjugacy class in G × H is a Cartesian product of conjugacy classes in G and H, and character theory ensures that the χV and χW, independently, form Hilbert space bases of the L² class functions on the two groups. It follows that the products χV(g) · χW(h) form a Hilbert space basis of the class functions on G × H, so this is a complete list of irreducible characters. Incidentally, the proof establishes the completeness of characters for the product G × H, if it was known on the factors. □

We also indicate another proof, which does not rely on character theory but uses complete reducibility instead. The advantage is that we only need to know complete reducibility under one of the factors G, H; nevertheless, it does give a complete argument.
Proof without characters. Let U be an irreducible representation of G × H, and decompose it into isotypical components Ui under the action of H. Because the action of G commutes with that of H, it must be block-diagonal in this decomposition, and irreducibility implies that we have a single block. Let then W be the unique irreducible H-type appearing in U, and let V := Hom_H(W, U); this inherits an action of G from U, and we have an isomorphism U ≅ W ⊗ Hom_H(W, U) = V ⊗ W (see Example Sheet 2). Clearly, any proper G-subrepresentation of V would lead to a proper G × H-subrepresentation of U, and this implies irreducibility of V. □

22.7 Corollary. The complete list of irreducible representations of SO(4) is Sym^m C² ⊗ Sym^n C², with m, n ≥ 0 and m = n mod 2, with the two SU(2) factors acting on the two Sym factors.

Indeed, the parity condition m = n mod 2 is precisely what is needed so that −(Id, Id) acts as +Id in the product.

(22.8) A closer look at U(2)
Listing the representations of U(2) requires a comment. In principle, we should tensor together pairs of representations of U(1) and SU(2) of matching parity. However, observe that U(2) acts naturally on C², extending the SU(2)-action, so the action of SU(2) on Symⁿ C² also extends to all of U(2). Thereunder, the factor U(1) of scalar matrices does not act trivially, but rather by the nth power of the natural representation (since it acts naturally on C²). In addition, U(2) has a 1-dimensional representation det : U(2) → U(1); denote by det^{⊗m} its mth tensor power, m ∈ Z. It restricts to the trivial representation on SU(2), and to the 2mth power of the natural representation on the scalar subgroup U(1). As U(2) = U(1) × SU(2)/{±(Id, Id)}, the classification of representations of U(2) in terms of their restrictions to the subgroups U(1) and SU(2) converts then into the

22.9 Proposition. The complete list of irreducible representations of U(2) is det^{⊗m} ⊗ Symⁿ C², with m ∈ Z, n ≥ 0. □

(22.10) Characters of U(2)
We saw earlier that the conjugacy classes of U(2) were labelled by unordered pairs {z1, z2} of eigenvalues; these are double-covered by the subgroup of diagonal matrices diag[z1, z2]. The trace of this matrix on Symⁿ C² is z1ⁿ + z1^{n−1} z2 + ··· + z2ⁿ, and so the character of det^{⊗m} ⊗ Symⁿ C² is the symmetric⁸ Laurent polynomial

    z1^{m+n} z2^m + z1^{m+n−1} z2^{m+1} + ··· + z1^m z2^{m+n}.    (22.11)

Moreover, these are all the irreducible characters of U(2). General theory tells us that they should form a Hilbert space basis of the space of class functions; clearly, they span the space of symmetric Laurent polynomials, but we can check this directly from the

22.12 Proposition (Weyl integration formula for U(2)). For a class function f on U(2), we have

    ∫_{U(2)} f(u) du = (1/2) · (1/4π²) ∫₀^{2π} ∫₀^{2π} f(φ1, φ2) · |e^{iφ1} − e^{iφ2}|² dφ1 dφ2.

Here, f(φ1, φ2) abusively denotes f(diag[e^{iφ1}, e^{iφ2}]). We call ∆(z1, z2) := z1 − z2 = e^{iφ1} − e^{iφ2} the Weyl denominator for U(2). The proposition is proved by lifting f to the double cover U(1) × SU(2) of U(2) and applying the integration formula for SU(2); we omit the details.

Armed with the integration formula, we could have discovered the character formulae (22.11) a priori, before constructing the representations. However, proving that all formulae are actually realised as characters requires either a direct construction (as given here) or a general existence argument.

⁸ Symmetric under the switch z1 ↔ z2.
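Proposition 22.12 lends itself to a numerical sanity check: the characters (22.11) should come out orthonormal. A uniform grid sum is exact here, since the integrands are trigonometric polynomials of small degree. This is only a sketch; the function names `char` and `inner` are ours.

```python
# Numeric check of the Weyl integration formula for U(2): orthonormality of the
# characters of det^m (x) Sym^n C^2.
import cmath
from math import pi

def char(m, n, f1, f2):
    # character (22.11) at diag(e^{i f1}, e^{i f2})
    z1, z2 = cmath.exp(1j * f1), cmath.exp(1j * f2)
    return sum(z1 ** (m + j) * z2 ** (m + n - j) for j in range(n + 1))

def inner(mn1, mn2, K=32):
    # (1/2)(1/4 pi^2) double integral of chi_1 * conj(chi_2) * |Delta|^2,
    # approximated (here: exactly, for small m, n) by a K x K grid sum
    total = 0.0
    for a in range(K):
        for b in range(K):
            f1, f2 = 2 * pi * a / K, 2 * pi * b / K
            weight = abs(cmath.exp(1j * f1) - cmath.exp(1j * f2)) ** 2
            total += char(*mn1, f1, f2) * char(*mn2, f1, f2).conjugate() * weight
    return total / (2 * K ** 2)   # (1/2)(1/4 pi^2)(2 pi / K)^2 * sum

assert abs(inner((0, 1), (0, 1)) - 1) < 1e-9   # standard rep of U(2) has norm 1
assert abs(inner((1, 0), (0, 2))) < 1e-9       # det is orthogonal to Sym^2
```

The grid sum equals the integral because a uniform sum over K points integrates e^{ikφ} exactly for 0 < |k| < K.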
23 The unitary group∗
The representation theory of the general unitary groups U(N) is very similar to that of U(2), with the obvious change that the characters are now symmetric Laurent polynomials in N variables: polynomials in the zi and zi^{−1} which are invariant under permutation of the variables. Here, the variables represent the eigenvalues of a unitary matrix. Any such polynomial can be expressed as (z1 z2 ··· zN)^n · f(z1, ..., zN), for some (possibly negative) integer n, where f is a genuine symmetric polynomial with integer coefficients. The fact that all U(N) characters are of this form follows from the classification of conjugacy classes in Lecture 20, and from the representation theory of the subgroup U(1)^N of diagonal matrices: representations of the latter are completely reducible, with the irreducibles being tensor products of N one-dimensional representations of the factors (Lemma 22.6).

(23.1) Symmetry vs. antisymmetry
Let us introduce some notation. For an N-tuple λ = (λ1, ..., λN) of integers, denote by z^λ the (Laurent) monomial z1^{λ1} z2^{λ2} ··· zN^{λN}. Given σ ∈ SN, the symmetric group on N letters, z^{σ(λ)} will denote the monomial for the permuted N-tuple σ(λ). There is a distinguished N-tuple δ = (N−1, N−2, ..., 0). We shall study the antisymmetric Laurent polynomials

    a_λ(z) := Σ_{σ ∈ SN} ε(σ) · z^{σ(λ)},

where ε(σ) denotes the signature of σ. The polynomial a_δ(z) is special, as the following identities show:

    a_δ(z) = det[z_p^{N−q}] = ∏_{p<q} (z_p − z_q).

The first identity is simply the big formula for the determinant of the matrix with entries z_p^{N−q}; the second is called the Vandermonde formula, and can be proved in several ways. One can do induction on N, combined with some clever row operations. Or, one notices that the determinant is a homogeneous polynomial in the z_p, of degree N(N−1)/2, which vanishes whenever z_p = z_q for some p ≠ q; this implies that the product on the right divides the determinant, so we have equality up to a constant factor, and we need only match the coefficient for a single term in the formula, for instance z^δ. The last product is also denoted ∆(z) and called the Weyl denominator.

Antisymmetric polynomials are distinguished by the following simple observation.

23.2 Proposition. Multiplication by ∆ establishes a bijection between symmetric and antisymmetric Laurent polynomials with integer coefficients.

Proof. The obvious part (which is the only one we need) is that ∆ · f is antisymmetric and integral, if f is symmetric and integral. In the other direction, antisymmetry of a Laurent polynomial g implies its vanishing whenever z_p = z_q for some p ≠ q, which (by repeated use of Bézout's theorem, that g(a) = 0 ⇒ (X − a) | g(X)) implies that ∆(z) divides g, with integral quotient. □

23.3 Remark. The proposition and its proof apply equally well to the genuine polynomials, as opposed to Laurent polynomials.

23.4 Proposition. As λ ranges over the decreasingly ordered N-tuples λ1 ≥ λ2 ≥ ... ≥ λN, the a_{λ+δ} form a Z-basis of the integral antisymmetric Laurent polynomials.

23.5 Remark. There is a determinant formula (although not a product expansion) for every a_λ:

    a_λ(z) = det[z_p^{λ_q}].
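The Vandermonde formula can be tested exactly, by expanding det[z_p^{N−q}] with the big (Leibniz) determinant formula at integer sample points. A small stdlib-only sketch; all helper names are ours.

```python
# Check a_delta(z) = det[z_p^{N-q}] = prod_{p<q} (z_p - z_q) at integer points.
from itertools import permutations

def sign(perm):
    # signature via inversion count
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def det(M):
    # Leibniz ("big") formula for the determinant
    total = 0
    for perm in permutations(range(len(M))):
        term = sign(perm)
        for i in range(len(M)):
            term *= M[i][perm[i]]
        total += term
    return total

def vandermonde(z):
    N = len(z)
    M = [[z[p] ** (N - 1 - q) for q in range(N)] for p in range(N)]
    return det(M)

z = [2, 3, 5, 7]
prod = 1
for p in range(len(z)):
    for q in range(p + 1, len(z)):
        prod *= z[p] - z[q]
assert vandermonde(z) == prod
```

Since both sides are polynomials of degree N(N−1)/2, agreement at enough points would even constitute a proof; here we only spot-check.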
(23.6) The irreducible characters of U(N)

23.7 Definition. The Schur function s_λ is defined as

    s_λ(z) := a_{λ+δ}(z) / a_δ(z) = a_{λ+δ}(z) / ∆(z).

By the previous proposition, the Schur functions form an integral basis of the space of symmetric Laurent polynomials with integer coefficients. Our main theorem is now

23.8 Theorem. The Schur functions are precisely the irreducible characters of U(N).

As in the case of SU(2), the theorem breaks up into two statements.

23.9 Proposition. The Schur functions are orthonormal in the inner product defined by integration over U(N), with respect to the normalised invariant measure.

23.10 Proposition. Every s_λ is the character of some representation of U(N).

We shall not prove Proposition 23.9 here. Following the model of SU(2) in the earlier sections and the general orthogonality theory of characters, it reduces to the following integration formula, whose proof, however, is now more difficult, due to the absence of a concrete geometric model for U(N) with its invariant measure.

23.11 Theorem (Weyl Integration for U(N)). For a class function f on U(N), we have

    ∫_{U(N)} f(u) du = (1/N!) · (1/(2π)^N) ∫₀^{2π} ··· ∫₀^{2π} f(φ1, ..., φN) · |∆(z)|² dφ1 ··· dφN.

We have used the standard convention f(φ1, ..., φN) = f(diag[z1, ..., zN]) and z_p = e^{iφ_p}.

(23.12) Construction of representations
Much like in the case of SU(2), the irreducible representations of U(N) can be found in terms of the tensor powers of its standard representation on C^N. Formulating this precisely requires a comment. A representation of U(N) will be called polynomial iff its character is a genuine (as opposed to Laurent) polynomial in the z_p. For example, the determinant representation u → det u is polynomial, as its character is z1 ··· zN; but its dual representation has character (z1 ··· zN)^{−1}, and is not polynomial.

23.13 Remark. In fact, in any polynomial representation ρ, the trace of ρ(u) is a polynomial in the entries of u ∈ U(N): a symmetric polynomial in the z_p is expressible in terms of the elementary symmetric polynomials, and if the z_p are the eigenvalues of a matrix u, these are the coefficients of the characteristic polynomial of u; as such, they have polynomial expressions in terms of the entries of u. It turns out that all entries of the matrix ρ(u) are then polynomially expressible in terms of those of u, without involving the entries of u^{−1}.

The Schur polynomials are the genuine polynomials among the s_λ; they correspond to the λ's where each λk ≥ 0, and they form a basis of the space of integral symmetric polynomials. Any Schur function converts into a Schur polynomial upon multiplication by a large power of det, so establishing that all Schur polynomials are characters of representations does imply that all Schur functions are characters of U(N)-representations (a polynomial representation tensored with a large inverse power of det). The key step in proving the existence of representations now consists in showing that every Schur polynomial appears in the irreducible decomposition of (C^N)^{⊗d}, for some d. Now, the character of (C^N)^{⊗d} is the polynomial (z1 + ··· + zN)^d, and it follows from our discussion of Schur functions and polynomials that any representation of U(N) appearing in the irreducible decomposition of (C^N)^{⊗d} is polynomial.
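The bialternant formula of Definition 23.7 is directly computable: evaluate a_{λ+δ} and a_δ at rational sample points and take the quotient. A sketch with our own helper names, checked against closed forms in two variables.

```python
# Evaluate s_lambda = a_{lambda+delta} / a_delta at sample points.
from fractions import Fraction
from itertools import permutations

def sign(perm):
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def a(lam, z):
    # a_lambda(z) = sum over sigma of sign(sigma) * prod_p z_p^{lam_{sigma(p)}}
    total = 0
    for perm in permutations(range(len(z))):
        term = sign(perm)
        for p, q in enumerate(perm):
            term *= z[p] ** lam[q]
        total += term
    return total

def schur(lam, z):
    delta = list(range(len(z) - 1, -1, -1))   # (N-1, N-2, ..., 0)
    return a([l + d for l, d in zip(lam, delta)], z) / a(delta, z)

z = [Fraction(2), Fraction(3)]
assert schur([1, 0], z) == z[0] + z[1]                         # character of C^2
assert schur([1, 1], z) == z[0] * z[1]                         # character of det
assert schur([2, 0], z) == z[0]**2 + z[0]*z[1] + z[1]**2       # character of Sym^2
```

For N = 2, s_{(p+q, q)} = (z1 z2)^q · χ_p in the notation of the SU(2) sections, consistent with Proposition 22.9.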
Note that the Schur function s_λ is homogeneous of degree |λ| = λ1 + ··· + λN, so if it appears in any (C^N)^{⊗d}, it must do so for d = |λ|. Checking its presence requires us only to show that ⟨s_λ, (z1 + ··· + zN)^{|λ|}⟩ > 0 in the inner product (23.11). While there is a reasonably simple argument for this, it turns out that we can (and will) do much more with little extra effort.

(23.14) Schur–Weyl duality
To state the key theorem, observe that the space (C^N)^{⊗d} carries an action of the symmetric group Sd on d letters, which permutes the factors in every vector v1 ⊗ v2 ⊗ ··· ⊗ vd. Clearly, this commutes with the action of U(N), in which a matrix u ∈ U(N) transforms all tensor factors simultaneously. This way, (C^N)^{⊗d} becomes a representation of the product group U(N) × Sd. Recall that a partition of d with n parts is a sequence λ1 ≥ λ2 ≥ ... ≥ λn > 0 of integers summing to d.

23.15 Theorem (Schur–Weyl duality). Under the action of U(N) × Sd, (C^N)^{⊗d} decomposes as a direct sum of products

    (C^N)^{⊗d} ≅ ⊕_λ V_λ ⊗ S^λ,

labelled by partitions λ of d with no more than N parts, in which V_λ is the irreducible representation of U(N) with character s_λ and S^λ is an irreducible representation of Sd. The representations S^λ, for various λ, are pairwise non-isomorphic, and depend only on λ (and not on N). For N ≥ d, they exhaust all irreducible representations of Sd.

Granting the rest, the last sentence in the theorem is obvious: the number of irreducible characters of Sd is the number of conjugacy classes, which (as we recall below) is the number of partitions of d; and once N ≥ d, all partitions of d are represented in the sum, so the pairwise non-isomorphic S^λ must exhaust the irreducibles. Nothing else in the theorem is obvious; it will be proved in the next section. What is also true, but unexpected, is that there is a natural way to assign irreducible representations to partitions.

24 The symmetric group and Schur–Weyl duality∗

(24.1) Conjugacy classes in the symmetric group
The duality theorem relates the representations of the unitary groups U(N) to those of the symmetric groups Sd, so we must say a word about the latter. Recall that every permutation has a unique decomposition as a product of disjoint cycles, and that two permutations are conjugate if and only if they have the same cycle type, which is the collection of all cycle lengths, with multiplicities. Ordering the lengths decreasingly leads to a partition μ = (μ1 ≥ ... ≥ μn > 0) of d. Assume that the numbers 1, 2, ..., d occur m1, m2, ..., md times, respectively, in μ; that is, mk is the number of k-cycles in our permutation. We shall also refer to the cycle type by the notation (m). Clearly, the conjugacy classes in Sd are labelled by the (m)'s such that Σ_k k·mk = d.

24.2 Proposition. The conjugacy class of cycle type (m) has order

    d! / (∏_k mk! · k^{mk}).

Proof. By the orbit–stabiliser theorem, applied to the conjugation action of Sd on itself, it suffices to show that the centraliser of a permutation of cycle type (m) has order ∏_k mk! · k^{mk}. But an element of the centraliser must permute the k-cycles, for all k, while preserving the cyclic order of the elements within. We get a factor of mk! from the permutations of the mk k-cycles, and k^{mk} from independent cyclic permutations within each k-cycle. □
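Proposition 24.2 is easy to confirm by brute force for a small symmetric group; the sketch below enumerates S₄ and compares class sizes with the formula (helper names are ours).

```python
# Conjugacy class sizes in S_4 versus d! / prod_k (m_k! * k^{m_k}).
from itertools import permutations
from math import factorial

def cycle_type(perm):
    # multiset of cycle lengths of a permutation of {0, ..., d-1}, sorted decreasingly
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start not in seen:
            n, x = 0, start
            while x not in seen:
                seen.add(x)
                x = perm[x]
                n += 1
            lengths.append(n)
    return tuple(sorted(lengths, reverse=True))

d = 4
classes = {}
for perm in permutations(range(d)):
    t = cycle_type(perm)
    classes[t] = classes.get(t, 0) + 1

for mu, size in classes.items():
    centraliser = 1
    for k in set(mu):
        mk = mu.count(k)          # m_k = number of k-cycles
        centraliser *= factorial(mk) * k ** mk
    assert size == factorial(d) // centraliser
```

For instance, the 2+2 cycle type gives centraliser order 2!·2² = 8 and class size 24/8 = 3.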
(24.3) The irreducible characters of Sd
Choose N > 0 and denote by p_μ, or p_(m), the product of power sums

    p_μ(z) = ∏_k (z1^{μk} + z2^{μk} + ··· + zN^{μk}) = ∏_{k>0} (z1^k + ··· + zN^k)^{mk};

note that if we are to allow a number of trailing zeroes at the end of μ, they must be ignored in the first product. For any partition λ of d with no more than N parts, we define a function on conjugacy classes of Sd by

    ω_λ(m) = coefficient of z^{λ+δ} in p_(m)(z) · ∆(z).

It is important to observe that ω_λ(m) does not depend on N, provided the latter is larger than the number of parts of λ. Indeed, an extra variable z_{N+1} can be set to zero: p_(m) is then unchanged, while both ∆(z) and z^{λ+δ} acquire a factor of z1 z2 ··· zN, leading to the same coefficient in the definition.

Antisymmetry of p_(m)(z) · ∆(z) and the definition of the Schur functions lead to the following

24.4 Proposition. p_(m)(z) = Σ_λ ω_λ(m) · s_λ(z), the sum running over the partitions of d with no more than N parts.

Indeed, this is equivalent to the identity p_(m)(z) · ∆(z) = Σ_λ ω_λ(m) · a_{λ+δ}(z), which just restates the definition of the ω_λ.

24.5 Theorem (Frobenius character formula). The ω_λ are the irreducible characters of the symmetric group Sd.

We will prove that the ω_λ are orthonormal in the inner product of class functions on Sd, which will imply their irreducibility. We will then show that they are characters of representations of Sd; the second part will be proved in conjunction with Schur–Weyl duality.

Proof of orthonormality. We must show that, for any partitions λ, ν of d,

    Σ_{(m)} ω_λ(m) · ω_ν(m) / (∏_k mk! · k^{mk}) = δ_{λν}.    (24.6)

We will prove these identities simultaneously for all d, via the identity

    Σ_{(m),λ,ν} [ω_λ(m) · ω_ν(m) / ∏_k (mk! · k^{mk})] · s_λ(z) s_ν(w) ≡ Σ_λ s_λ(z) s_λ(w),    (24.7)

for variables z, w, with (m) ranging over all cycle types of all symmetric groups Sd, and λ, ν ranging over all partitions, with no more than N parts, of all integers d. Independence of the Schur polynomials implies (24.6).
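The definition of ω_λ(m) as a coefficient is concrete enough to compute. As a sanity check of the Frobenius formula, the sketch below (with N = 3 variables and our own helper names) recovers the familiar character table of S₃.

```python
# omega_lambda(m) = coefficient of z^{lambda+delta} in p_(m)(z) * Delta(z), N = 3.
from itertools import permutations

N = 3
delta = (2, 1, 0)

def mul(f, g):
    # product of polynomials in N variables, as dicts {exponent tuple: coefficient}
    h = {}
    for a, ca in f.items():
        for b, cb in g.items():
            e = tuple(x + y for x, y in zip(a, b))
            h[e] = h.get(e, 0) + ca * cb
    return h

def power_sum(k):
    # z1^k + z2^k + z3^k
    return {tuple(k if j == i else 0 for j in range(N)): 1 for i in range(N)}

def sign(perm):
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

# Delta = a_delta = sum over sigma of sign(sigma) * z^{sigma(delta)}
Delta = {}
for perm in permutations(range(N)):
    e = tuple(delta[q] for q in perm)
    Delta[e] = Delta.get(e, 0) + sign(perm)

def omega(lam, cycle_lengths):
    f = dict(Delta)
    for k in cycle_lengths:
        f = mul(f, power_sum(k))
    target = tuple(l + d for l, d in zip(lam, delta))
    return f.get(target, 0)

# Classes of S_3: identity, a transposition, a 3-cycle.
classes = [(1, 1, 1), (2, 1), (3,)]
table = {(3, 0, 0): (1, 1, 1),     # trivial character
         (2, 1, 0): (2, 0, -1),    # standard 2-dimensional character
         (1, 1, 1): (1, -1, 1)}    # sign character
for lam, values in table.items():
    assert tuple(omega(lam, c) for c in classes) == values
```

In particular ω_λ(identity) returns the dimension of the corresponding representation.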
From Proposition 24.4, the left side in (24.7) is

    Σ_{(m)} p_(m)(z) · p_(m)(w) / ∏_k (mk! · k^{mk}),

which is also

    Σ_{(m)} ∏_{k>0} (z1^k + ··· + zN^k)^{mk} (w1^k + ··· + wN^k)^{mk} / (mk! · k^{mk})
        = ∏_{k>0} exp( Σ_{p,q} (z_p w_q)^k / k )
        = exp( Σ_{k>0} Σ_{p,q} (z_p w_q)^k / k )
        = exp( −Σ_{p,q} log(1 − z_p w_q) ) = ∏_{p,q} (1 − z_p w_q)^{−1}.

We are thus reduced to proving the identity

    Σ_λ s_λ(z) s_λ(w) = ∏_{p,q} (1 − z_p w_q)^{−1}.    (24.8)

There are (at least) two ways to proceed. We can multiply both sides by ∆(z)∆(w) and show that each side agrees with the N × N Cauchy determinant det[(1 − z_p w_q)^{−1}]. The equality

    det[(1 − z_p w_q)^{−1}] = ∆(z)∆(w) · ∏_{p,q} (1 − z_p w_q)^{−1}

can be proved by clever row operations and induction on N. (See Appendix A of Fulton–Harris, Representation Theory, for help if needed.) On the other hand, if we expand each entry in the matrix in a geometric series, multilinearity of the determinant in the columns gives

    det[(1 − z_p w_q)^{−1}] = Σ_{l1,...,lN ≥ 0} det[z_p^{l_q}] · w^l = Σ_{l1 > l2 > ··· > lN ≥ 0} a_l(z) · a_l(w),

where w^l := w1^{l1} ··· wN^{lN}; the last identity follows from the antisymmetry in l of the a_l. Writing l = λ + δ and dividing by ∆(z)∆(w) = a_δ(z) a_δ(w), the result is the left side of (24.8) multiplied by ∆(z)∆(w), as desired.

Another proof of (24.8) can be given by exploiting the orthogonality properties of Schur functions and a judicious use of Cauchy's integration formula. I will not give the detailed calculation, as the care needed in discussing convergence makes this argument more difficult, in the end, than the one with the Cauchy determinant, but here is a sketch. The left-hand side Σ(z, w) of the equation has the property that

    (1/N!) ∮ ··· ∮ Σ(z, w^{−1}) f(w) ∆(w)∆(w^{−1}) dw1/(2πi w1) ··· dwN/(2πi wN) = f(z)

for any symmetric polynomial f(w): this expresses the orthonormality of the Schur functions, while any homogeneous symmetric Laurent polynomial containing negative powers of the w_q integrates to zero. (The integrals are over the unit circles, subject to the convergence condition |z_p| < |w_q| for all p, q.) An N-fold application of Cauchy's formula shows that the same is true for the function obtained from the right-hand side of (24.8) after substituting w_q ↔ w_q^{−1}. Indeed, the algebraic manipulation

    ∆(w)∆(w^{−1}) · ∏_{p,q} (1 − z_p w_q^{−1})^{−1} = w1 ··· wN · ∏_{p≠q} (w_q − w_p) / ∏_{p,q} (w_q − z_p)

rewrites the desired integral as

    (1/N!) ∮ ··· ∮ [ ∏_{p≠q} (w_q − w_p) / ∏_{p,q} (w_q − z_p) ] f(w) (dw1/2πi) ··· (dwN/2πi) = f(z).
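The Cauchy determinant identity quoted above can be verified exactly at rational sample points, which is a useful guard against sign and indexing slips. A stdlib-only sketch; helper names are ours.

```python
# Exact check of det[(1 - z_p w_q)^{-1}] = Delta(z) Delta(w) / prod_{p,q} (1 - z_p w_q).
from fractions import Fraction
from itertools import permutations

def det(M):
    total = 0
    for perm in permutations(range(len(M))):
        s = 1
        for i in range(len(perm)):
            for j in range(i + 1, len(perm)):
                if perm[i] > perm[j]:
                    s = -s
        term = s
        for i in range(len(M)):
            term *= M[i][perm[i]]
        total += term
    return total

def vand(x):
    # Delta(x) = prod_{p<q} (x_p - x_q)
    p = Fraction(1)
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            p *= x[i] - x[j]
    return p

z = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 5)]
w = [Fraction(1, 7), Fraction(2, 7), Fraction(3, 7)]

lhs = det([[1 / (1 - zp * wq) for wq in w] for zp in z])

denom = Fraction(1)
for zp in z:
    for wq in w:
        denom *= 1 - zp * wq

assert lhs == vand(z) * vand(w) / denom
```

The sample points are chosen with |z_p|, |w_q| < 1, matching the convergence regime of the geometric-series expansion.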
For N = 1, one recognises Cauchy's formula. In general, we have simple poles at the z_p, and evaluating by residues amounts to deciding to which values among the z_p the w_q are to be set, in all possible permutations; the numerator ensures that setting different w's to the same z value gives no contribution, and the N! permutations compensate the 1/N! prefactor, leaving f(z). Note that, in general, if f is not symmetric, the output is the symmetrisation of f.

(24.9) Proof of the key theorems
We now prove the following, with a single calculation:
1. The ω_λ are characters of representations of Sd.
2. Every Schur polynomial is the character of some representation of U(N).
3. The Schur–Weyl duality theorem.
Note that (1) and (2) automatically entail the irreducibility of the corresponding representations.

Specifically, we will verify the following

24.10 Proposition. The character of (C^N)^{⊗d}, as a representation of Sd × U(N), is Σ_λ ω_λ(m) · s_λ(z), with the sum ranging over the partitions of d with no more than N parts.

By Lemma 22.6 and complete reducibility, (C^N)^{⊗d} breaks up, as a representation of Sd × U(N), as a sum ⊕ Wi ⊗ Vi of tensor products of irreducible representations of Sd and U(N). Collecting together the irreducible representations of U(N) expresses the character of (C^N)^{⊗d} as Σ_λ χ_λ(m) · s_λ(z), for certain characters χ_λ of Sd. Comparing this with the proposition implies (1)–(3) in our list.

Proof. Let e1, ..., eN be the standard basis of C^N. A basis for (C^N)^{⊗d} is given by the vectors e_{i1} ⊗ e_{i2} ⊗ ··· ⊗ e_{id}, for all possible indexing functions i : {1, 2, ..., d} → {1, 2, ..., N}. The character is the trace of the action of M := σ × diag[z1, ..., zN], for any permutation σ of cycle type (m). Under M, the basis vector above gets transformed to

    z_{i1} ··· z_{id} · e_{iσ(1)} ⊗ e_{iσ(2)} ⊗ ··· ⊗ e_{iσ(d)}.

This is off-diagonal unless i_k = i_{σ(k)} for each k, that is, unless i is constant within each cycle of σ; so the only contribution to the trace comes from such i. The diagonal entry for such an i is a product, over the cycles of σ, of a factor z_i^k for each k-cycle, where i is the common value of the indexing function on that cycle. Summing this over all these i leads to

    ∏_{k>0} (z1^k + ··· + zN^k)^{mk} = p_(m)(z),

and our proposition now follows from Proposition 24.4. □
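The trace computation in the proof of Proposition 24.10 can be replayed by brute force on a small case: sum the diagonal entries of σ × diag(z) over the tensor basis and compare with the power sum p_(m)(z). A sketch with our own names.

```python
# Trace of sigma x diag(z) on (C^N)^{(x) d} equals p_(m)(z) for the cycle type of sigma.
from itertools import product

def trace(sigma, z):
    # sum over basis tensors e_{i_1} (x) ... (x) e_{i_d}; a tensor is diagonal
    # for sigma x diag(z) iff i_k = i_{sigma(k)} for all k (i constant on cycles),
    # and then contributes its eigenvalue prod_k z_{i_k}
    d, N, total = len(sigma), len(z), 0
    for i in product(range(N), repeat=d):
        if all(i[k] == i[sigma[k]] for k in range(d)):
            t = 1
            for k in range(d):
                t *= z[i[k]]
            total += t
    return total

z = [2, 3, 5]   # stand-ins for the eigenvalues z_1, z_2, z_3

# sigma = (1 2)(3), cycle type m_1 = m_2 = 1: trace = (z1+z2+z3)(z1^2+z2^2+z3^2)
assert trace((1, 0, 2), z) == sum(z) * sum(x ** 2 for x in z)

# sigma a 3-cycle, cycle type m_3 = 1: trace = z1^3 + z2^3 + z3^3
assert trace((1, 2, 0), z) == sum(x ** 3 for x in z)
```

The `if` condition is exactly the "constant within each cycle" criterion from the proof, and the contribution of each surviving basis vector is one factor z_i^k per k-cycle.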