Andrew Baker
07/11/2011 © A. J. Baker
School of Mathematics & Statistics, University of Glasgow.
Email address: a.baker@maths.gla.ac.uk
URL: http://www.maths.gla.ac.uk/~ajb
Contents

Chapter 1. Linear and multilinear algebra
1.1. Basic linear algebra
1.2. Class functions and the Cayley–Hamilton Theorem
1.3. Separability
1.4. Basic notions of multilinear algebra
Exercises on Chapter 1

Chapter 2. Representations of finite groups
2.1. Linear representations
2.2. G-homomorphisms and irreducible representations
2.3. New representations from old
2.4. Permutation representations
2.5. Properties of permutation representations
2.6. Calculating in permutation representations
2.7. Generalized permutation representations
Exercises on Chapter 2

Chapter 3. Character theory
3.1. Characters and class functions on a finite group
3.2. Properties of characters
3.3. Inner products of characters
3.4. Character tables
3.5. Examples of character tables
3.6. Reciprocity formulæ
3.7. Representations of semidirect products
3.8. Real representations
Exercises on Chapter 3

Chapter 4. Some applications to group theory
4.1. Characters and the structure of groups
4.2. A result on representations of simple groups
4.3. A Theorem of Frobenius
Exercises on Chapter 4

Appendix A. Background information on groups
A.1. The Isomorphism and Correspondence Theorems
A.2. Some definitions and notation
A.3. Group actions
A.4. The Sylow theorems
A.5. Solvable groups
A.6. Product and semidirect product groups
A.7. Some useful groups
A.8. Some useful Number Theory

Bibliography

Solutions
Chapter 1
Chapter 2
Chapter 3
Chapter 4
CHAPTER 1
Linear and multilinear algebra
In this chapter we will study the linear algebra required in representation theory. Some of
this will be familiar but there will also be new material, especially that on ‘multilinear’ algebra.
1.1. Basic linear algebra
Throughout the remainder of these notes, k will denote a field, i.e., a commutative ring with unity 1 in which every non-zero element has an inverse. Most of the time in representation theory we will work with the field of complex numbers C and occasionally the field of real numbers R. However, a lot of what we discuss will work over more general fields, including those of finite characteristic such as F_p = Z/p for a prime p. Here, the characteristic of the field k is defined to be the smallest natural number p ∈ N such that

p1 = 1 + ··· + 1 (p summands) = 0,

provided such a number exists, in which case k is said to have finite or positive characteristic; otherwise k is said to have characteristic 0. When the characteristic of k is finite it is actually a prime number.
1.1.1. Bases, linear transformations and matrices. Let V be a finite dimensional vector space over k, i.e., a k-vector space. Recall that a basis for V is a linearly independent spanning set for V. The dimension of V (over k), denoted dim_k V, is the number of elements in any basis. We will often view k itself as a 1-dimensional k-vector space with basis {1}, or indeed any set {x} with x ≠ 0.
Given two k-vector spaces V, W, a linear transformation (or linear mapping) from V to W is a function φ: V −→ W such that

φ(v_1 + v_2) = φ(v_1) + φ(v_2)   (v_1, v_2 ∈ V),
φ(tv) = tφ(v)   (t ∈ k, v ∈ V).
We will denote the set of all linear transformations V −→ W by Hom_k(V, W). This is a k-vector space with the operations of addition and scalar multiplication given by

(φ + θ)(u) = φ(u) + θ(u)   (φ, θ ∈ Hom_k(V, W)),
(tφ)(u) = t(φ(u)) = φ(tu)   (t ∈ k).
The extension property in the next result is an important property of a basis.
Proposition 1.1. Let V, W be k-vector spaces with V finite dimensional, and {v_1, ..., v_m} a basis for V where m = dim_k V. Given a function φ: {v_1, ..., v_m} −→ W, there is a unique linear transformation φ̃: V −→ W such that

φ̃(v_j) = φ(v_j)   (1 ⩽ j ⩽ m).
We can express this with the aid of the commutative diagram

    {v_1, ..., v_m} --inclusion--> V
              \                    :
             φ \                   :  ∃! φ̃
                v                  v
                         W

in which the dotted arrow indicates the (unique) solution to the problem of filling in the diagram with a linear transformation V −→ W whose composite with the inclusion {v_1, ..., v_m} −→ V agrees with the function φ on the left hand side.
Proof. The formula for φ̃ is

φ̃( ∑_{j=1}^{m} λ_j v_j ) = ∑_{j=1}^{m} λ_j φ(v_j).    □

The linear transformation φ̃ is known as the linear extension of φ and is often just denoted by φ.
Let V, W be finite dimensional k-vector spaces with bases {v_1, ..., v_m} and {w_1, ..., w_n} respectively, where m = dim_k V and n = dim_k W. By Proposition 1.1, for 1 ⩽ i ⩽ m and 1 ⩽ j ⩽ n, the function

φ_ij: {v_1, ..., v_m} −→ W;   φ_ij(v_k) = δ_ik w_j   (1 ⩽ k ⩽ m)

has a unique extension to a linear transformation φ_ij: V −→ W.
Proposition 1.2. The collection of functions φ_ij: V −→ W (1 ⩽ i ⩽ m, 1 ⩽ j ⩽ n) forms a basis for Hom_k(V, W). Hence

dim_k Hom_k(V, W) = dim_k V · dim_k W = mn.
A particular and important case of this is the dual space of V,

V* = Hom_k(V, k).

Notice that dim_k V* = dim_k V. Given any basis {v_1, ..., v_m} of V, define elements v*_i ∈ V* (i = 1, ..., m) by

v*_i(v_k) = δ_ik,

where δ_ij is the Kronecker δ-function for which

δ_ij = 1 if i = j, and 0 otherwise.

Then the set of functions {v*_1, ..., v*_m} forms a basis of V*. There is an associated isomorphism V −→ V* under which

v_j ←→ v*_j.
If we set V** = (V*)*, the double dual of V, then there is an isomorphism V* −→ V** under which

v*_j ←→ (v*_j)*.

Here we use the fact that the v*_j form a basis for V*. Composing these two isomorphisms we obtain a third, V −→ V**, given by

v_j ←→ (v*_j)*.

In fact, this composite does not depend on the basis of V used, although the two factors do! It is sometimes called the canonical isomorphism V −→ V**.
The set of all (k-)endomorphisms of V is

End_k(V) = Hom_k(V, V).

This is a ring (actually a k-algebra, and also non-commutative if dim_k V > 1) with addition as above, and composition of functions as its multiplication. There is a ring monomorphism

k −→ End_k(V);   t −→ t Id_V,

which embeds k into End_k(V) as the subring of scalars. We also have

dim_k End_k(V) = (dim_k V)².
Let GL_k(V) denote the group of all invertible k-linear transformations V −→ V, i.e., the group of units in End_k(V). This is usually called the general linear group of V or the group of linear automorphisms of V, and is denoted GL_k(V) or Aut_k(V).
Now let v = {v_1, ..., v_m} and w = {w_1, ..., w_n} be bases for V and W. Then given a linear transformation φ: V −→ W we may define the matrix of φ with respect to the bases v and w to be the n × m matrix with coefficients in k,

_w[φ]_v = [a_ij],

where

φ(v_j) = ∑_{k=1}^{n} a_kj w_k.
Now suppose we have a second pair of bases for V and W, say v′ = {v′_1, ..., v′_m} and w′ = {w′_1, ..., w′_n}. Then we can write

v′_j = ∑_{r=1}^{m} p_rj v_r,   w′_j = ∑_{s=1}^{n} q_sj w_s,

for some p_ij, q_ij ∈ k. If we form the m × m and n × n matrices P = [p_ij] and Q = [q_ij], then we have the following standard result.
Proposition 1.3. The matrices _w[φ]_v and _w′[φ]_v′ are related by the formulæ

_w′[φ]_v′ = Q _w[φ]_v P⁻¹ = Q[a_ij]P⁻¹.

In particular, if W = V, w = v and w′ = v′, then

_v′[φ]_v′ = P _v[φ]_v P⁻¹ = P[a_ij]P⁻¹.
1.1.2. Quotients and complements. Let W ⊆ V be a vector subspace. Then we define the quotient space V/W to be the set of equivalence classes under the equivalence relation ∼ on V defined by

u ∼ v if and only if v − u ∈ W.

We denote the class of v by v + W. This set V/W becomes a vector space with operations

(u + W) + (v + W) = (u + v) + W,
λ(v + W) = (λv) + W,

and zero element 0 + W. There is a linear transformation, usually called the quotient map q: V −→ V/W, defined by

q(v) = v + W.

Then q is surjective, has kernel ker q = W, and has the following universal property.
Theorem 1.4. Let f: V −→ U be a linear transformation with W ⊆ ker f. Then there is a unique linear transformation f̄: V/W −→ U for which f = f̄ ◦ q. This can be expressed in the diagram

    V ----q----> V/W
     \            :
    f \           :  ∃! f̄
       v          v
            U

in which all the arrows represent linear transformations.
Proof. We define f̄ by

f̄(v + W) = f(v),

which makes sense since if v′ ∼ v, then v′ − v ∈ W, hence

f(v′) = f((v′ − v) + v) = f(v′ − v) + f(v) = f(v).

The uniqueness follows from the fact that q is surjective.    □
Notice also that

(1.1)   dim_k V/W = dim_k V − dim_k W.
A linear complement (in V) of a subspace W ⊆ V is a subspace W′ ⊆ V such that the restriction q|_W′ : W′ −→ V/W is a linear isomorphism. The next result sums up properties of linear complements; we leave the proofs as exercises.
Theorem 1.5. Let W ⊆ V and W′ ⊆ V be vector subspaces of the k-vector space V with dim_k V = n. Then the following conditions are equivalent.
(a) W′ is a linear complement of W in V.
(b) Let {w_1, ..., w_r} be a basis for W, and {w_{r+1}, ..., w_n} a basis for W′. Then

{w_1, ..., w_n} = {w_1, ..., w_r} ∪ {w_{r+1}, ..., w_n}

is a basis for V.
(c) Every v ∈ V has a unique expression of the form

v = v_1 + v_2

for some elements v_1 ∈ W, v_2 ∈ W′. In particular, W ∩ W′ = {0}.
(d) Every linear transformation h: W′ −→ U has a unique extension to a linear transformation H: V −→ U with W ⊆ ker H.
(e) W is a linear complement of W′ in V.
(f) There is a linear isomorphism J: V ≅−→ W × W′ for which im J|_W = W × {0} and im J|_W′ = {0} × W′.
(g) There are unique linear transformations p: V −→ V and p′: V −→ V having images im p = W, im p′ = W′ and which satisfy

p² = p ◦ p = p,   p′² = p′ ◦ p′ = p′,   Id_V = p + p′.
We often write V = W ⊕ W′ whenever W′ is a linear complement of W. The maps p, p′ of Theorem 1.5(g) are often called the (linear) projections onto W and W′.
The situation just discussed can be extended to the case of r subspaces V_1, ..., V_r ⊆ V for which

V = V_1 + ··· + V_r = { ∑_{j=1}^{r} v_j : v_j ∈ V_j },

and inductively we have that V_k is a linear complement of (V_1 ⊕ ··· ⊕ V_{k−1}) in (V_1 + ··· + V_k).
A linear complement for a subspace W ⊆ V always exists, since each basis {w_1, ..., w_r} of W extends to a basis {w_1, ..., w_r, w_{r+1}, ..., w_n} of V, and on taking W′ to be the subspace spanned by {w_{r+1}, ..., w_n}, Theorem 1.5(b) implies that W′ is a linear complement.
1.2. Class functions and the Cayley–Hamilton Theorem

In this section k can be any field. Let A = [a_ij] be an n × n matrix over k.
Definition 1.6. The characteristic polynomial of A is the polynomial (in the variable X)

char_A(X) = det(X I_n − [a_ij]) = ∑_{k=0}^{n} c_k(A) X^k ∈ k[X],

where I_n is the n × n identity matrix.
Notice that c_n(A) = 1, so this polynomial in X is monic and has degree n. The coefficients c_k(A) ∈ k are polynomial functions of the entries a_ij. The following is an important result about the characteristic polynomial.
Theorem 1.7 (Cayley–Hamilton Theorem: matrix version). The matrix A satisfies the polynomial identity

char_A(A) = ∑_{k=0}^{n} c_k(A) A^k = 0.
Example 1.8. Let A be the real matrix

A = [ 0  −1 ]
    [ 1   0 ].

Then

char_A(X) = det [  X  1 ]
                [ −1  X ]  = X² + 1.

By calculation we find that A² + I_2 = O_2, as claimed.
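As a quick numerical aside (not part of the original notes), this identity is easy to confirm with NumPy:

```python
import numpy as np

# The matrix from Example 1.8 (rotation by 90 degrees)
A = np.array([[0, -1],
              [1, 0]])

# Cayley-Hamilton predicts A^2 + I_2 = O_2, since char_A(X) = X^2 + 1
result = A @ A + np.eye(2)
print(result)  # the 2x2 zero matrix
```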
Lemma 1.9. Let A = [a_ij] and P be n × n matrices with coefficients in k. Then if P is invertible,

char_{PAP⁻¹}(X) = char_A(X).

Thus each of the coefficients c_k(A) (0 ⩽ k ⩽ n) satisfies

c_k(PAP⁻¹) = c_k(A).
Proof. We have

char_{PAP⁻¹}(X) = det(X I_n − PAP⁻¹)
               = det(P(X I_n)P⁻¹ − PAP⁻¹)
               = det(P(X I_n − A)P⁻¹)
               = det P · det(X I_n − A) · det P⁻¹
               = det P · char_A(X) · (det P)⁻¹
               = char_A(X).

Comparing coefficients we obtain the result.    □
This result shows that as functions of A (and hence of the a_ij), the coefficients c_k(A) are invariant or class functions in the sense that they are invariant under conjugation,

c_r(PAP⁻¹) = c_r(A).
Recall that for an n × n matrix A = [a_ij], the trace of A, tr A ∈ k, is defined by

tr A = ∑_{j=1}^{n} a_jj.
Proposition 1.10. For any n × n matrix A over k we have

c_{n−1}(A) = −tr A   and   c_0(A) = (−1)^n det A.
Proof. The coefficient of X^{n−1} in det(X I_n − [a_ij]) is

−∑_{r=1}^{n} a_rr = −tr[a_ij] = −tr A,

giving the formula for c_{n−1}(A). Putting X = 0 in det(X I_n − [a_ij]) gives

c_0(A) = det([−a_ij]) = (−1)^n det[a_ij] = (−1)^n det A.    □
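Both Lemma 1.9 and Proposition 1.10 are easy to check numerically; the following NumPy sketch (an illustrative aside, with a randomly chosen 3 × 3 matrix) does so:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(3, 3)).astype(float)

# np.poly returns the monic characteristic polynomial coefficients,
# highest degree first: [c_n, c_{n-1}, ..., c_0] with c_n = 1
coeffs = np.poly(A)

# Proposition 1.10: c_{n-1}(A) = -tr A and c_0(A) = (-1)^n det A
assert np.isclose(coeffs[1], -np.trace(A))
assert np.isclose(coeffs[-1], (-1) ** 3 * np.linalg.det(A))

# Lemma 1.9: the c_k are class functions, c_k(P A P^{-1}) = c_k(A)
P = rng.random((3, 3)) + 3 * np.eye(3)  # diagonally dominant, hence invertible
conj_coeffs = np.poly(P @ A @ np.linalg.inv(P))
assert np.allclose(conj_coeffs, coeffs)
print(coeffs)
```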
Now let φ: V −→ V be a linear transformation on a finite dimensional k-vector space V with a basis v = {v_1, ..., v_n}. Consider the matrix of φ relative to v,

[φ]_v = [a_ij],

where

φ(v_j) = ∑_{r=1}^{n} a_rj v_r.

Then the trace of φ with respect to the basis v is

tr_v φ = tr [φ]_v.
If we change to a second basis w say, there is an invertible n × n matrix P = [p_ij] such that

w_j = ∑_{r=1}^{n} p_rj v_r,

and then

[φ]_w = P [φ]_v P⁻¹.

Hence,

tr_w φ = tr(P [φ]_v P⁻¹) = tr_v φ.
Thus we see that the quantity

tr φ = tr_v φ

only depends on φ, not on the basis v. We call this the trace of φ. Similarly, we can define

det φ = det A.

More generally, we can consider the polynomial

char_φ(X) = char_{[φ]_v}(X),
which by Lemma 1.9 is independent of the basis v. Thus all of the coefficients c_k(A) are functions of φ and do not depend on the basis used, so we may write c_k(φ) in place of c_k(A). In particular, an alternative way to define tr φ and det φ is by setting

tr φ = −c_{n−1}(φ) = tr A,   det φ = (−1)^n c_0(φ) = det A.
We also call char_φ(X) the characteristic polynomial of φ. The following is a formulation of the Cayley–Hamilton Theorem for a linear transformation.
Theorem 1.11 (Cayley–Hamilton Theorem: linear transformation version). If φ: V −→ V is a k-linear transformation on the finite dimensional k-vector space V, then φ satisfies the polynomial identity

char_φ(φ) = 0.

More explicitly, if

char_φ(X) = ∑_{r=0}^{n} c_r(φ) X^r,

then writing φ⁰ = Id_V, we have

∑_{r=0}^{n} c_r(φ) φ^r = 0.
There is an important connection between class functions of matrices (such as the trace and determinant) and eigenvalues. Recall that if k is an algebraically closed field, then any non-constant monic polynomial with coefficients in k factors into d linear factors over k, where d is the degree of the polynomial.
Proposition 1.12. Let k be an algebraically closed field and let A be an n × n matrix with entries in k. Then the eigenvalues of A in k are the roots of the characteristic polynomial char_A(X) in k. In particular, A has at most n distinct eigenvalues in k.
On factoring char_A(X) into linear factors over k we may find some repeated linear factors corresponding to 'repeated' or 'multiple' roots. If a linear factor (X − λ) appears to degree m say, we say that λ is an eigenvalue of multiplicity m. If every eigenvalue of A has multiplicity 1, then A is diagonalisable in the sense that there is an invertible matrix P satisfying

PAP⁻¹ = diag(λ_1, ..., λ_n),

the diagonal matrix with the n distinct diagonal entries λ_k down the leading diagonal. More generally, let

(1.2)   char_A(X) = (X − λ_1) ··· (X − λ_n),

where now we allow some of the λ_j to be repeated. Then we can describe tr A and det A in terms of the eigenvalues λ_j.
Proposition 1.13. The following identities hold:

tr A = ∑_{j=1}^{n} λ_j = λ_1 + ··· + λ_n,   det A = λ_1 ··· λ_n.
Proof. These can be verified by considering the degree (n − 1) and constant terms in Equation (1.2) and using Proposition 1.10.    □
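As a small numerical illustration of Proposition 1.13 (an aside, not part of the original text):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
eigenvalues = np.linalg.eigvals(A)

# tr A is the sum of the eigenvalues and det A is their product
assert np.isclose(eigenvalues.sum(), np.trace(A))
assert np.isclose(eigenvalues.prod(), np.linalg.det(A))
```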
Remark 1.14. More generally, the coefficient (−1)^{n−k} c_k(A) can be expressed as the (n − k)-th elementary symmetric function in λ_1, ..., λ_n.
We can also apply the above discussion to a linear transformation φ: V −→ V, where an eigenvector for the eigenvalue λ ∈ k is a non-zero vector v ∈ V satisfying φ(v) = λv.
The characteristic polynomial may not be the polynomial of smallest degree satisfied by a matrix or a linear transformation. By definition, a minimal polynomial of an n × n matrix A or linear transformation φ: V −→ V is a (non-zero) monic polynomial f(X) of smallest possible degree for which f(A) = 0 or f(φ) = 0.
Lemma 1.15. For an n × n matrix A or a linear transformation φ: V −→ V, let f(X) be a minimal polynomial and g(X) be any other polynomial for which g(A) = 0 or g(φ) = 0. Then f(X) | g(X). Hence f(X) is the unique minimal polynomial.
Proof. We only give the proof for matrices; the proof for a linear transformation is similar. Suppose that f(X) ∤ g(X). By long division we have

g(X) = q(X)f(X) + r(X),

where r(X) ≠ 0 and deg r(X) < deg f(X). Since r(A) = g(A) − q(A)f(A) = 0, a suitable scalar multiple of r(X) is a monic polynomial satisfied by A of degree less than that of f(X), contradicting the minimality of f(X). Hence f(X) | g(X). In particular, if g(X) has the same degree as f(X), the minimality of g(X) also gives g(X) | f(X). As these are both monic polynomials, this implies f(X) = g(X).    □
We write min_A(X) or min_φ(X) for the minimal polynomial of A or φ. Note also that

min_A(X) | char_A(X)   and   min_φ(X) | char_φ(X).
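The distinction between char_A(X) and min_A(X) can be seen concretely; the following SymPy aside (illustrative, not from the notes) uses the diagonal matrix diag(2, 2, 3):

```python
import sympy as sp

X = sp.symbols('X')
A = sp.Matrix([[2, 0, 0],
               [0, 2, 0],
               [0, 0, 3]])
I = sp.eye(3)

# char_A(X) = (X - 2)^2 (X - 3) has degree 3 ...
char_poly = sp.factor(A.charpoly(X).as_expr())
print(char_poly)

# ... but A already satisfies the degree-2 polynomial (X - 2)(X - 3),
# so min_A(X) = (X - 2)(X - 3), a proper divisor of char_A(X)
assert (A - 2 * I) * (A - 3 * I) == sp.zeros(3)
assert (A - 2 * I) != sp.zeros(3) and (A - 3 * I) != sp.zeros(3)
```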
1.3. Separability
Lemma 1.16. Let V be a finite dimensional vector space over C and let φ: V −→ V be a linear transformation. Suppose that

0 ≠ f(X) = ∑_{r=0}^{m} c_r X^r ∈ C[X]

is a polynomial with no repeated linear factors over C and that φ satisfies the relation

∑_{r=0}^{m} c_r φ^r = 0,

i.e., for every v ∈ V,

∑_{r=0}^{m} c_r φ^r(v) = 0.

Then V has a basis v = {v_1, ..., v_n} consisting of eigenvectors of φ.
Proof. By multiplying by the inverse of the leading coefficient of f(X) we can replace f(X) by a monic polynomial with the same properties, so we will assume that f(X) is monic, i.e., c_m = 1. Factoring over C, we obtain

f(X) = f_m(X) = (X − λ_1) ··· (X − λ_m),

where the λ_j ∈ C are distinct. Put

f_{m−1}(X) = (X − λ_1) ··· (X − λ_{m−1}).
Notice that f_m(X) = f_{m−1}(X)(X − λ_m), hence (X − λ_m) cannot divide f_{m−1}(X), since this would lead to a contradiction to the assumption that f_m(X) has no repeated linear factors. Using long division of (X − λ_m) into f_{m−1}(X), we see that

f_{m−1}(X) = q_m(X)(X − λ_m) + r_m,

where the remainder r_m ∈ C cannot be 0, since if it were then (X − λ_m) would divide f_{m−1}(X). Dividing by r_m if necessary, we see that for some non-zero s_m ∈ C,

s_m f_{m−1}(X) − q_m(X)(X − λ_m) = 1.
Substituting X = φ, we have for any v ∈ V,

s_m f_{m−1}(φ)(v) − q_m(φ)(φ − λ_m Id_V)(v) = v.

Notice that we have

(φ − λ_m Id_V)(s_m f_{m−1}(φ)(v)) = s_m f_m(φ)(v) = 0

and

f_{m−1}(φ)(q_m(φ)(φ − λ_m Id_V)(v)) = q_m(φ) f_m(φ)(v) = 0.

Thus we can decompose v into a sum v = v_m + v′_m, where

(φ − λ_m Id_V)(v_m) = 0 = f_{m−1}(φ)(v′_m).
Consider the following two subspaces of V:

V_m = {v ∈ V : (φ − λ_m Id_V)(v) = 0},
V′_m = {v ∈ V : f_{m−1}(φ)(v) = 0}.

We have shown that V = V_m + V′_m. If v ∈ V_m ∩ V′_m, then from above we would have

v = s_m f_{m−1}(φ)(v) − q_m(φ)(φ − λ_m Id_V)(v) = 0.

So V_m ∩ V′_m = {0}, hence V = V_m ⊕ V′_m. We can now consider V′_m in place of V, noticing that for v ∈ V′_m we have φ(v) ∈ V′_m, since

f_{m−1}(φ)(φ(v)) = φ(f_{m−1}(φ)(v)) = 0.
Continuing in this fashion, we eventually see that

V = V_1 ⊕ ··· ⊕ V_m,

where for v ∈ V_k,

(φ − λ_k Id_V)(v) = 0.

If we choose a basis v^(k) of V_k, then the (disjoint) union

v = v^(1) ∪ ··· ∪ v^(m)

is a basis for V, consisting of eigenvectors of φ.    □
The condition on φ in this result is sometimes referred to as the separability or semisimplicity
of φ. We will make use of this when discussing characters of representations.
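The lemma can be checked computationally; this SymPy aside (illustrative, not from the notes) contrasts a matrix satisfying a polynomial with no repeated linear factors with one whose minimal polynomial has a repeated factor:

```python
import sympy as sp

# A satisfies f(X) = X^2 - 1 = (X - 1)(X + 1), which has no repeated
# linear factors, so Lemma 1.16 predicts a basis of eigenvectors
A = sp.Matrix([[0, 1],
               [1, 0]])
assert A**2 == sp.eye(2)
assert A.is_diagonalizable()

# In contrast, J satisfies (X - 1)^2, which has a repeated factor and
# no smaller-degree polynomial; indeed J is not diagonalisable
J = sp.Matrix([[1, 1],
               [0, 1]])
assert (J - sp.eye(2))**2 == sp.zeros(2)
assert not J.is_diagonalizable()
```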
1.4. Basic notions of multilinear algebra

In this section we will describe the tensor product of r vector spaces. We will most often consider the case where r = 2, but give the general case for completeness. Multilinear algebra is important in differential geometry, relativity, electromagnetism, fluid mechanics and indeed much of advanced applied mathematics, where tensors play a rôle.
Let V_1, ..., V_r and W be k-vector spaces. A function

F: V_1 × ··· × V_r −→ W

is k-multilinear if it satisfies

(ML1)   F(v_1, ..., v_{k−1}, v_k + v′_k, v_{k+1}, ..., v_r) = F(v_1, ..., v_k, ..., v_r) + F(v_1, ..., v_{k−1}, v′_k, v_{k+1}, ..., v_r),

(ML2)   F(v_1, ..., v_{k−1}, t v_k, v_{k+1}, ..., v_r) = t F(v_1, ..., v_{k−1}, v_k, v_{k+1}, ..., v_r)

for v_j, v′_j ∈ V_j and t ∈ k. It is symmetric if for any permutation σ ∈ S_r (the permutation group on r objects),

(MLS)   F(v_{σ(1)}, ..., v_{σ(k)}, ..., v_{σ(r)}) = F(v_1, ..., v_k, ..., v_r),

and is alternating or skew-symmetric if

(MLA)   F(v_{σ(1)}, ..., v_{σ(k)}, ..., v_{σ(r)}) = sign(σ) F(v_1, ..., v_k, ..., v_r),

where sign(σ) ∈ {±1} is the sign of σ.
The tensor product of V_1, ..., V_r is a k-vector space V_1 ⊗ V_2 ⊗ ··· ⊗ V_r together with a function

τ: V_1 × ··· × V_r −→ V_1 ⊗ V_2 ⊗ ··· ⊗ V_r

satisfying the following universal property.

UP-TP: For any k-vector space W and multilinear map F: V_1 × ··· × V_r −→ W, there is a unique linear transformation F′: V_1 ⊗ ··· ⊗ V_r −→ W for which F′ ◦ τ = F.
In diagram form this becomes

    V_1 × ··· × V_r ----τ----> V_1 ⊗ ··· ⊗ V_r
               \                      :
              F \                     :  ∃! F′
                 v                    v
                            W

where the dotted arrow represents a unique linear transformation making the diagram commute.
When V_1 = V_2 = ··· = V_r = V, we call V ⊗ ··· ⊗ V the r-th tensor power and write T^r V.
Our next result provides an explicit description of a tensor product.
Proposition 1.17. If the finite dimensional k-vector space V_k (1 ⩽ k ⩽ r) has a basis

v_k = {v_{k,1}, ..., v_{k,n_k}},

where dim_k V_k = n_k, then V_1 ⊗ ··· ⊗ V_r has a basis consisting of the vectors

v_{1,i_1} ⊗ ··· ⊗ v_{r,i_r} = τ(v_{1,i_1}, ..., v_{r,i_r}),

where 1 ⩽ i_k ⩽ n_k. Hence we have

dim_k V_1 ⊗ ··· ⊗ V_r = n_1 ··· n_r.
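With bases chosen as in Proposition 1.17, the tensor product of coordinate spaces can be modelled by the Kronecker product of coordinate vectors; this NumPy aside (illustrative, not from the notes) shows the dimension count and one of the multilinearity formulæ:

```python
import numpy as np

# u lies in V1 of dimension 2, v in V2 of dimension 3; the coordinates
# of u ⊗ v in the basis {e_i ⊗ e_j} are given by the Kronecker product
u = np.array([1.0, 2.0])
v = np.array([3.0, 0.0, -1.0])

t = np.kron(u, v)
assert t.shape == (2 * 3,)  # dim V1 ⊗ V2 = n1 * n2 = 6

# bilinearity, cf. (MLF2): (2u) ⊗ v = 2 (u ⊗ v)
assert np.allclose(np.kron(2 * u, v), 2 * t)
```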
More generally, for any sequence of vectors w_1 ∈ V_1, ..., w_r ∈ V_r, we set

w_1 ⊗ ··· ⊗ w_r = τ(w_1, ..., w_r).
These satisfy the multilinearity formulæ

(MLF1)   w_1 ⊗ ··· ⊗ w_{k−1} ⊗ (w_k + w′_k) ⊗ w_{k+1} ⊗ ··· ⊗ w_r
            = w_1 ⊗ ··· ⊗ w_k ⊗ ··· ⊗ w_r + w_1 ⊗ ··· ⊗ w_{k−1} ⊗ w′_k ⊗ w_{k+1} ⊗ ··· ⊗ w_r,

(MLF2)   w_1 ⊗ ··· ⊗ w_{k−1} ⊗ t w_k ⊗ w_{k+1} ⊗ ··· ⊗ w_r
            = t(w_1 ⊗ ··· ⊗ w_{k−1} ⊗ w_k ⊗ w_{k+1} ⊗ ··· ⊗ w_r).
We will see later that the tensor power T^r V can be decomposed as a direct sum T^r V = Sym^r V ⊕ Alt^r V, consisting of the symmetric and the antisymmetric or alternating tensors Sym^r V and Alt^r V.
We end with some useful results.
Proposition 1.18. Let V_1, ..., V_r, V be finite dimensional k-vector spaces. Then there is a linear isomorphism

V*_1 ⊗ ··· ⊗ V*_r ≅ (V_1 ⊗ ··· ⊗ V_r)*.

In particular,

T^r(V*) ≅ (T^r V)*.
Proof. Use the universal property to construct a linear transformation with suitable properties.    □
Proposition 1.19. Let V, W be finite dimensional k-vector spaces. Then there is a k-linear isomorphism

W ⊗ V* ≅ Hom_k(V, W)

under which, for α ∈ V* and w ∈ W,

w ⊗ α ←→ wα,

where by definition wα: V −→ W is the function determined by wα(v) = α(v)w for v ∈ V.
Proof. The function W × V* −→ Hom_k(V, W) given by (w, α) −→ wα is bilinear, and hence factors uniquely through a linear transformation W ⊗ V* −→ Hom_k(V, W). For bases v = {v_1, ..., v_m} and w = {w_1, ..., w_n} of V and W, the vectors w_j ⊗ v*_i form a basis of W ⊗ V*. Under the above linear mapping, w_j ⊗ v*_i gets sent to the function w_j v*_i, which maps v_k to w_j if k = i and to 0 otherwise. Using Propositions 1.2 and 1.17, it is now straightforward to verify that these functions are linearly independent and span Hom_k(V, W).    □
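In coordinates, the correspondence w ⊗ α ←→ wα of Proposition 1.19 is just the outer product; a brief NumPy illustration (an aside, not from the notes):

```python
import numpy as np

# The rank-one map wα: v ↦ α(v) w corresponds to the outer product of
# the coordinate vectors of w and α
w = np.array([1.0, 2.0])           # w ∈ W = R^2
alpha = np.array([0.0, 1.0, 3.0])  # α ∈ V* with V = R^3, α(v) = alpha · v

M = np.outer(w, alpha)  # the 2x3 matrix of the map wα: V -> W

v = np.array([2.0, 5.0, 1.0])
assert np.allclose(M @ v, (alpha @ v) * w)  # (wα)(v) = α(v) w
```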
Proposition 1.20. Let V_1, ..., V_r, W_1, ..., W_r be finite dimensional k-vector spaces, and for each 1 ⩽ k ⩽ r let φ_k: V_k −→ W_k be a linear transformation. Then there is a unique linear transformation

φ_1 ⊗ ··· ⊗ φ_r: V_1 ⊗ ··· ⊗ V_r −→ W_1 ⊗ ··· ⊗ W_r

given on each tensor v_1 ⊗ ··· ⊗ v_r by the formula

(φ_1 ⊗ ··· ⊗ φ_r)(v_1 ⊗ ··· ⊗ v_r) = φ_1(v_1) ⊗ ··· ⊗ φ_r(v_r).
Proof. This follows from the universal property UP-TP.    □
Exercises on Chapter 1
1-1. Consider the 2-dimensional C-vector space V = C². Viewing V as a 4-dimensional R-vector space, show that

W = {(z, w) ∈ C² : z = −w}

is an R-vector subspace of V. Is it a C-vector subspace?
Show that the function θ: W −→ C given by

θ(z, w) = Re z + Im w

is an R-linear transformation. Choose R-bases for W and C and determine the matrix of θ with respect to these. Use these bases to extend θ to an R-linear transformation Θ: V −→ C agreeing with θ on W. Is there an extension which is C-linear?
1-2. Let V = C⁴ as a C-vector space. Suppose that σ: V −→ V is the function defined by

σ(z_1, z_2, z_3, z_4) = (z_3, z_4, z_1, z_2).

Show that σ is a C-linear transformation. Choose a basis for V and determine the matrix of σ relative to it. Hence determine the characteristic and minimal polynomials of σ, and show that there is a basis for V consisting of eigenvectors of σ.
1-3. For the matrix

A = [ 18   5  15 ]
    [ −6   5  −9 ]
    [ −2  −1   5 ]

show that the characteristic polynomial is char_A(X) = (X − 12)(X − 8)², and find a basis for C³ consisting of eigenvectors of A. Determine the minimal polynomial of A.
1-4. For each of the following k-vector spaces V and subspaces W, find a linear complement W′.
(i) k = R, V = R³, W = {(x_1, x_2, x_3) : x_2 − 2x_3 = 0};
(ii) k = R, V = R⁴, W = {(x_1, x_2, x_3, x_4) : x_2 − 2x_3 = 0 = x_1 + x_4};
(iii) k = C, V = C⁴, W = {(z_1, z_2, z_3, z_4) : z_2 − iz_3 = 0 = z_1 + 4iz_4};
(iv) k = R, V = (R³)*, W = {α : α(e_3) = 0}.
1-5. Let V be a 2-dimensional C-vector space with basis {v_1, v_2}. Describe a basis for the tensor square T²V = V ⊗ V and state the universal property for the natural function τ: V × V −→ T²V.
Let F: V × V −→ C be a non-constant C-bilinear function for which

F(v, u) = −F(u, v)   (u, v ∈ V).

(Such a function is called alternating or odd.) Show that F factors through a linear transformation F′: T²V −→ C and find ker F′.
If G: V × V −→ C is a second such function, show that there is a t ∈ C for which G(u, v) = tF(u, v) for all u, v ∈ V.
1-6. Let V be a finite dimensional k-vector space where char k = 0 (e.g., k = Q, R, C) and dim_k V = n, where n is even.
Let F: V × V −→ k be an alternating k-bilinear function which is non-degenerate in the sense that for each non-zero v ∈ V, there is a w ∈ V such that F(v, w) ≠ 0.
Show that there is a basis {v_1, ..., v_n} for V for which

F(v_{2r−1}, v_{2r}) = −F(v_{2r}, v_{2r−1}) = 1   (r = 1, ..., n/2),
F(v_i, v_j) = 0   whenever |i − j| ≠ 1.

[Hint: Try using induction on m = n/2, starting with m = 1.]
CHAPTER 2
Representations of finite groups
2.1. Linear representations
In discussing representations, we will be mainly interested in the situations where k = R or k = C. However, other cases are important, and unless we specifically state otherwise we will usually assume that k is an arbitrary field of characteristic 0. For fields of finite characteristic dividing the order of the group, representation theory becomes more subtle, and the resulting theory is called Modular Representation Theory. Another important property of the field k required in many situations is that it is algebraically closed, in the sense that every non-constant polynomial over k has a root in k; this is true for C but not for R. However, the latter case is important in many applications of the theory. Throughout this section, G will denote a finite group.
A homomorphism of groups ρ: G −→ GL_k(V) defines a k-linear action of G on V by

g · v = ρ_g v = ρ(g)(v),

which we call a k-representation or k-linear representation of G in (or on) V. Sometimes V together with ρ is called a G-module, although we will not use that terminology. The case where ρ(g) = Id_V for all g ∈ G is called the trivial representation in V. Notice that we have the following identities:
(Rep1)   (hg) · v = ρ_{hg} v = ρ_h ◦ ρ_g v = h · (g · v)   (h, g ∈ G, v ∈ V),
(Rep2)   g · (v_1 + v_2) = ρ_g(v_1 + v_2) = ρ_g v_1 + ρ_g v_2 = g · v_1 + g · v_2   (g ∈ G, v_i ∈ V),
(Rep3)   g · (tv) = ρ_g(tv) = t ρ_g(v) = t(g · v)   (g ∈ G, v ∈ V, t ∈ k).
A vector subspace W of V which is closed under the action of elements of G is called a G-submodule or G-subspace; we sometimes say that W is stable under the action of G. It is usual to view W as a representation in its own right, using the 'restriction' ρ|_W : G −→ GL_k(W) defined by

ρ|_W(g)(w) = ρ_g(w).

The pair consisting of W and ρ|_W is called a subrepresentation of the original representation.
Given a basis v = {v_1, ..., v_n} for V with dim_k V = n, for each g ∈ G we have the associated matrix [r_ij(g)] of ρ(g) relative to v, which is defined by

(RepMat)   ρ_g v_j = ∑_{k=1}^{n} r_kj(g) v_k.
Example 2.1. Let ρ: G −→ GL_k(V) be a representation with dim_k V = 1. Given any non-zero element v ∈ V (which forms a basis for V), we have for each g ∈ G a λ_g ∈ k satisfying g · v = λ_g v. By Equation (Rep1), for g, h ∈ G we have

λ_{hg} v = λ_h λ_g v,
and hence

λ_{hg} = λ_h λ_g.
From this it is easy to see that λ_g ≠ 0. Thus there is a homomorphism Λ: G −→ k^× given by

Λ(g) = λ_g.

Although this appears to depend on the choice of v, in fact it is independent of it (we leave this as an exercise). As G is finite, every element g ∈ G has finite order |g|, and it easily follows that

λ_g^{|g|} = 1,

so λ_g is a |g|-th root of unity. Hence, given a 1-dimensional representation of a group, we can regard it as equivalent to such a homomorphism G −→ k^×.
Here are two illustrations of Example 2.1.
Example 2.2. Take k = R. Then the only roots of unity in R are ±1, hence we can assume that for a 1-dimensional representation over R, Λ: G −→ {1, −1}, where the codomain is a group under multiplication. The sign function sign: S_n −→ {1, −1} provides an interesting and important example of this.
Example 2.3. Now take k = C. Then for each n ∈ N we have n distinct n-th roots of unity in C^×. We will denote the set of all n-th roots of unity by μ_n, and the set of all roots of unity by

μ_∞ = ∪_{n∈N} μ_n,

where we use the inclusions μ_m ⊆ μ_n whenever m | n. These are abelian groups under multiplication.
Given a 1-dimensional representation over C, the function Λ can be viewed as a homomorphism Λ: G −→ μ_∞, or even Λ: G −→ μ_{|G|} by Lagrange's Theorem.
For example, if G = C is cyclic of order N say, then we must have for any 1-dimensional representation of C that Λ: C −→ μ_N. Note that there are exactly N such homomorphisms.
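As a computational aside (not from the notes), the N homomorphisms for a cyclic group of order N correspond to the N distinct N-th roots of unity:

```python
import cmath

# For the cyclic group C = <c> of order N, a 1-dimensional complex
# representation sends the generator c to an N-th root of unity
N = 6
roots = [cmath.exp(2j * cmath.pi * k / N) for k in range(N)]

# each choice lies in μ_N ...
for lam in roots:
    assert abs(lam ** N - 1) < 1e-12

# ... and the N choices are distinct, giving exactly N homomorphisms
rounded = {(round(z.real, 9), round(z.imag, 9)) for z in roots}
assert len(rounded) == N
```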
Example 2.4. Let G be a simple group which is not abelian. Then given a 1-dimensional representation ρ: G −→ GL_k(V) of G, the associated homomorphism Λ: G −→ μ_{|G|} has abelian image, hence ker Λ has to be bigger than {e_G}. Since G has no non-trivial proper normal subgroups, we must have ker Λ = G. Hence ρ(g) = Id_V for every g ∈ G.
Indeed, for any representation ρ: G −→ GL_k(V) we have ker ρ = G or ker ρ = {e_G}. Hence, either the representation is trivial or ρ is an injective homomorphism, which therefore embeds G into GL_k(V). This severely restricts the smallest dimension of non-trivial representations of non-abelian simple groups.
Example 2.5. Let G = {e, τ} ≅ Z/2 and let V be any representation of G over any field k not of characteristic 2. Then there are k-vector subspaces V_+, V_− of V for which V = V_+ ⊕ V_− and the action of G is given by

τ · v = v if v ∈ V_+,   τ · v = −v if v ∈ V_−.
Proof. Define linear transformations ε_+, ε_−: V −→ V by

ε_+(v) = (1/2)(v + τ · v),   ε_−(v) = (1/2)(v − τ · v).

It is easily verified that

ε_+(τ · v) = ε_+(v),   ε_−(τ · v) = −ε_−(v).

We take V_+ = im ε_+ and V_− = im ε_−, and the direct sum decomposition follows from the identity

v = ε_+(v) + ε_−(v).    □
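The proof can be checked in coordinates; the following NumPy aside (illustrative, with τ chosen to be the coordinate-reversing permutation of R³, an assumption not made in the text) verifies the projection identities:

```python
import numpy as np

# τ acts on V = R^3 by reversing the order of the coordinates; τ² = Id
tau = np.fliplr(np.eye(3))
assert np.allclose(tau @ tau, np.eye(3))

eps_plus = (np.eye(3) + tau) / 2   # ε_+, projecting onto V_+
eps_minus = (np.eye(3) - tau) / 2  # ε_-, projecting onto V_-

# both maps are idempotent and they sum to the identity
assert np.allclose(eps_plus @ eps_plus, eps_plus)
assert np.allclose(eps_minus @ eps_minus, eps_minus)
assert np.allclose(eps_plus + eps_minus, np.eye(3))

# τ acts as +1 on im ε_+ and as -1 on im ε_-
assert np.allclose(tau @ eps_plus, eps_plus)
assert np.allclose(tau @ eps_minus, -eps_minus)
```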
The decomposition in this example corresponds to the two distinct irreducible representations of Z/2. Later we will see (at least over the complex numbers C) that there is always such a decomposition of a representation of a finite group G, with factors corresponding to the distinct irreducible representations of G.
Example 2.6. Let D_2n be the dihedral group of order 2n described in Section A.7.2. This group is generated by elements α of order n and β of order 2, subject to the relation

βαβ = α^{−1}.

We can realise D_2n as the symmetry group of the regular n-gon centred at the origin with vertices on the unit circle (we take the first vertex to be (1, 0)). It is easily checked that relative to the standard basis {e_1, e_2} of R^2, we get

α^r = [ cos 2rπ/n  −sin 2rπ/n ]      βα^r = [  cos 2rπ/n  −sin 2rπ/n ]
      [ sin 2rπ/n   cos 2rπ/n ],            [ −sin 2rπ/n  −cos 2rπ/n ],

for r = 0, . . . , (n − 1).
Thus we have a 2-dimensional representation ρ_R of D_2n over R, where the matrices of ρ_R(α^r) and ρ_R(βα^r) are given by the above. We can also view R^2 as a subset of C^2 and interpret these matrices as having coefficients in C. Thus we obtain a 2-dimensional complex representation ρ_C of D_2n with the above matrices relative to the C-basis {e_1, e_2}.
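The matrices above and the defining relation βαβ = α^{−1} are easy to verify numerically; the sketch below uses n = 5, an illustrative choice.

```python
import numpy as np

def alpha_r(n, r):
    """Rotation matrix of alpha^r acting on R^2."""
    t = 2 * r * np.pi / n
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def beta_alpha_r(n, r):
    """Matrix of beta alpha^r (reflection composed with rotation)."""
    t = 2 * r * np.pi / n
    return np.array([[np.cos(t), -np.sin(t)], [-np.sin(t), -np.cos(t)]])

n = 5
alpha, beta = alpha_r(n, 1), beta_alpha_r(n, 0)
# Defining relations of D_2n: alpha^n = I, beta^2 = I, beta alpha beta = alpha^{-1}.
assert np.allclose(np.linalg.matrix_power(alpha, n), np.eye(2))
assert np.allclose(beta @ beta, np.eye(2))
assert np.allclose(beta @ alpha @ beta, np.linalg.inv(alpha))
```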
2.2. G-homomorphisms and irreducible representations
Suppose that we have two representations ρ: G −→ GL_k(V) and σ: G −→ GL_k(W). Then a linear transformation f: V −→ W is called G-equivariant, G-linear or a G-homomorphism with respect to ρ and σ if for each g ∈ G the diagram

    V ──f──> W
    │ρ_g     │σ_g
    ↓        ↓
    V ──f──> W

commutes, i.e., σ_g ∘ f = f ∘ ρ_g, or equivalently σ_g ∘ f ∘ ρ_{g^{−1}} = f. A G-homomorphism which is a linear isomorphism is called a G-isomorphism or G-equivalence, and we say that the representations are G-isomorphic or G-equivalent.
We define an action of G on Hom_k(V, W), the vector space of k-linear transformations V −→ W, by

(g · f)(v) = σ_g f(ρ_{g^{−1}} v)  (f ∈ Hom_k(V, W)).

This is another G-representation. The G-invariant subspace Hom_G(V, W) = Hom_k(V, W)^G is then equal to the set of all G-homomorphisms.
If the only G-subspaces of V are {0} and V, then ρ is called irreducible or simple.
Given a subrepresentation W ⊆ V, the quotient vector space V/W also admits a linear action of G, ρ_W: G −→ GL_k(V/W), the quotient representation, where

ρ_W(g)(v + W) = ρ(g)(v) + W,

which is well defined since whenever v′ − v ∈ W,

ρ(g)(v′) + W = ρ(g)(v + (v′ − v)) + W = (ρ(g)(v) + ρ(g)(v′ − v)) + W = ρ(g)(v) + W.
Proposition 2.7. If f: V −→ W is a G-homomorphism, then
(a) ker f is a G-subspace of V;
(b) im f is a G-subspace of W.

Proof. (a) Let v ∈ ker f. Then for g ∈ G,

f(ρ_g v) = σ_g f(v) = 0,

so ρ_g v ∈ ker f. Hence ker f is a G-subspace of V.
(b) Let w ∈ im f with w = f(u) for some u ∈ V. Now

σ_g w = σ_g f(u) = f(ρ_g u) ∈ im f,

hence im f is a G-subspace of W.  □
Theorem 2.8 (Schur's Lemma). Let ρ: G −→ GL_C(V) and σ: G −→ GL_C(W) be irreducible representations of G over the field C, and let f: V −→ W be a G-linear map.
(a) If f is not the zero map, then f is an isomorphism.
(b) If V = W and ρ = σ, then for some λ ∈ C, f is given by

f(v) = λv  (v ∈ V).

Remark 2.9. Part (a) is true for any field k in place of C. Part (b) is true for any algebraically closed field in place of C.
Proof. (a) Proposition 2.7 implies that ker f ⊆ V and im f ⊆ W are G-subspaces. By the irreducibility of V, either ker f = V (in which case f is the zero map) or ker f = {0}, in which case f is injective. Similarly, irreducibility of W implies that im f = {0} (in which case f is the zero map) or im f = W, in which case f is surjective. Thus if f is not the zero map it must be an isomorphism.

(b) Let λ ∈ C be an eigenvalue of f, with eigenvector v_0 ≠ 0. Let f_λ: V −→ V be the linear transformation for which

f_λ(v) = f(v) − λv  (v ∈ V).

For each g ∈ G,

ρ_g f_λ(v) = ρ_g f(v) − ρ_g λv = f(ρ_g v) − λ ρ_g v = f_λ(ρ_g v),

showing that f_λ is G-linear. Since f_λ(v_0) = 0, Proposition 2.7 shows that ker f_λ is a non-zero G-subspace, so by irreducibility ker f_λ = V. As

dim_C V = dim_C ker f_λ + dim_C im f_λ,

we see that im f_λ = {0} and so

f_λ(v) = 0  (v ∈ V).  □
A linear transformation f : V −→ V is sometimes called a homothety if it has the form
f(v) = λv (v ∈ V ).
In this proof, it is essential that we take k = C rather than k = R for example, since we
need the fact that every polynomial over C has a root to guarantee that linear transformations
V −→ V always have eigenvalues. This theorem can fail to hold for representations over R as
the next example shows.
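Part (b) of Schur's Lemma can also be observed numerically: averaging an arbitrary complex matrix f over an irreducible complex representation produces a G-linear map, which must therefore be a scalar multiple of the identity. The sketch below is illustrative, not from the text; it uses the 2-dimensional representation of S_3 ≅ D_6 built from the matrices of Example 2.6 with n = 3.

```python
import numpy as np

rng = np.random.default_rng(0)

def rot(k):
    t = 2 * np.pi * k / 3
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

refl = np.array([[1.0, 0.0], [0.0, -1.0]])
# The six matrices of the 2-dimensional (irreducible over C) representation of D_6.
group = [rot(k) for k in range(3)] + [refl @ rot(k) for k in range(3)]

# Average an arbitrary complex matrix f over the group.  The result commutes
# with every group element, so by Schur's Lemma (b) it is a scalar matrix.
f = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
p = sum(g @ f @ np.linalg.inv(g) for g in group) / len(group)

lam = p[0, 0]
assert np.allclose(p, lam * np.eye(2))
```

Averaging preserves the trace, so the scalar is tr(f)/2 here; over R the same averaging need not give a scalar, in line with Example 2.10.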
Example 2.10. Let k = R and V = C, considered as a 2-dimensional R-vector space. Let

G = µ_4 = {1, −1, i, −i}

be the group of all 4th roots of unity, with ρ: µ_4 −→ GL_k(V) given by

ρ_α z = αz.

Then this defines a 2-dimensional representation of G over R. If we use the basis {u = 1, v = i}, then

ρ_i u = v,  ρ_i v = −u.

From this we see that any G-subspace of V containing a non-zero element w = au + bv also contains −bu + av, and hence must be all of V (exercise). So V is irreducible. But the linear transformation φ: V −→ V given by

φ(au + bv) = −bu + av = ρ_i(au + bv)

is G-linear, yet is not multiplication by a real number (this is left as an exercise).
We will give a version of Schur's Lemma which does not assume that the field k is algebraically closed. First we recall some notation. Let D be a ring equipped with a ring homomorphism η: k −→ D such that for all t ∈ k and x ∈ D,

η(t)x = xη(t).

Since η is injective, we identify k with a subring of D and take η to be the inclusion function. Taking scalar multiplication to be t · x = tx, D becomes a k-vector space. If D is finite dimensional and every non-zero element x ∈ D is invertible, then D is called a k-division algebra; if moreover k is the centre of D, then D is called a k-central division algebra. When k = R, then up to isomorphism the only R-division algebras are R, C and H (the quaternions, of dimension 4). Over an algebraically closed field things are simpler.
Proposition 2.11. If k is algebraically closed, then the only k-central division algebra is k itself.
Proof. Suppose that D is a k-central division algebra and let z ∈ D be a non-zero element. Multiplication by z gives a k-linear transformation τ_z: D −→ D. By the usual theory of eigenvalues over an algebraically closed field, τ_z has an eigenvalue λ say, with eigenvector v ∈ D, i.e., v ≠ 0 and

(λ − z)v = 0.

Multiplying on the right by v^{−1} gives λ − z = 0, hence z = λ ∈ k.  □
The next result is proved in a similar way to part (a) of Schur's Lemma 2.8.

Theorem 2.12 (Schur's Lemma: general version). Let k be a field and G a finite group. Let ρ: G −→ GL_k(W) be an irreducible k-representation. Then Hom_G(W, W) is a k-division algebra.
The next result is also valid for all fields in which |G| is invertible.
Theorem 2.13 (Maschke's Theorem). Let V be a k-vector space and ρ: G −→ GL_k(V) a k-representation. Let W ⊆ V be a G-subspace of V. Then there is a projection onto W which is G-equivariant. Equivalently, there is a linear complement W′ of W which is also a G-subspace.

Proof. Let p: V −→ V be a projection onto W. Define a linear transformation p_0: V −→ V by

p_0(v) = (1/|G|) Σ_{g∈G} ρ_g ∘ p ∘ ρ_g^{−1}(v).
Then for v ∈ V,

ρ_g ∘ p ∘ ρ_g^{−1}(v) ∈ W,

since im p = W and W is a G-subspace; hence p_0(v) ∈ W. We also have

p_0(ρ_g v) = (1/|G|) Σ_{h∈G} ρ_h p(ρ_h^{−1} ρ_g v)
           = (1/|G|) Σ_{h∈G} ρ_g ρ_{g^{−1}h} p(ρ_{g^{−1}h}^{−1} v)
           = ρ_g ( (1/|G|) Σ_{h∈G} ρ_{g^{−1}h} p(ρ_{g^{−1}h}^{−1} v) )
           = ρ_g ( (1/|G|) Σ_{h∈G} ρ_h p(ρ_h^{−1} v) )
           = ρ_g p_0(v),

which shows that p_0 is G-equivariant. If w ∈ W,

p_0(w) = (1/|G|) Σ_{g∈G} ρ_g p(ρ_g^{−1} w) = (1/|G|) Σ_{g∈G} ρ_g ρ_g^{−1} w = (1/|G|) Σ_{g∈G} w = (1/|G|)(|G| w) = w.

Hence p_0|_W = Id_W, showing that p_0 has image W.

Now consider W′ = ker p_0, which is a G-subspace by part (a) of Proposition 2.7. It is a linear complement for W: if v ∈ W ∩ W′, then v = p_0(v) = 0, while any v ∈ V can be written v = p_0(v) + (v − p_0(v)) with p_0(v) ∈ W and v − p_0(v) ∈ W′.  □
Theorem 2.14. Let ρ: G −→ GL_k(V) be a linear representation of a finite group with V non-trivial. Then there are G-subspaces U_1, . . . , U_r ⊆ V, each of which is a non-trivial irreducible subrepresentation, with

V = U_1 ⊕ · · · ⊕ U_r.

Proof. We proceed by induction on n = dim_k V. If n = 1, the result is true with U_1 = V. So assume that the result holds whenever dim_k V < n. Now either V is irreducible, or there is a non-trivial proper G-subspace U_1 ⊆ V, which we may take to be of minimal positive dimension and hence irreducible. By Theorem 2.13, there is a G-complement U′_1 of U_1 in V with dim_k U′_1 < n. By the inductive hypothesis there are irreducible G-subspaces U_2, . . . , U_r ⊆ U′_1 ⊆ V for which

U′_1 = U_2 ⊕ · · · ⊕ U_r,

and so we find

V = U_1 ⊕ U_2 ⊕ · · · ⊕ U_r.  □
We will see later that given any two such collections of non-trivial irreducible subrepresentations U_1, . . . , U_r and W_1, . . . , W_s, we have s = r and, for each k, the number of W_j that are G-isomorphic to U_k is equal to the number of U_j that are G-isomorphic to U_k. The proof of this will use characters, which give further information, such as the multiplicity of each irreducible occurring as a summand in V. The irreducible representations U_k are called the irreducible factors or summands of the representation V.
An important example of a G-subspace of any representation ρ on V is the G-invariant subspace

V^G = {v ∈ V : ρ_g v = v ∀ g ∈ G}.

We can construct a projection map V −→ V^G which is G-linear, provided that the characteristic of k does not divide |G|. In practice we will be mainly interested in the case where k = C, so from now on in this section we will assume that k has characteristic 0.
Proposition 2.15. Let ε: V −→ V be the k-linear transformation defined by

ε(v) = (1/|G|) Σ_{g∈G} ρ_g v.

Then
(a) for g ∈ G and v ∈ V, ρ_g ε(v) = ε(v);
(b) ε is G-linear;
(c) for v ∈ V^G, ε(v) = v, and so im ε = V^G.
Proof. (a) Let g ∈ G and v ∈ V. Then

ρ_g ε(v) = ρ_g ( (1/|G|) Σ_{h∈G} ρ_h v ) = (1/|G|) Σ_{h∈G} ρ_g ρ_h v = (1/|G|) Σ_{h∈G} ρ_{gh} v = (1/|G|) Σ_{h∈G} ρ_h v = ε(v).

(b) Similarly, for g ∈ G and v ∈ V,

ε(ρ_g v) = (1/|G|) Σ_{h∈G} ρ_h(ρ_g v) = (1/|G|) Σ_{h∈G} ρ_{hg} v = (1/|G|) Σ_{k∈G} ρ_k v = ε(v).

By (a), this agrees with ρ_g ε(v). Hence ε is G-linear.

(c) For v ∈ V^G,

ε(v) = (1/|G|) Σ_{g∈G} ρ_g v = (1/|G|) Σ_{g∈G} v = (1/|G|) |G| v = v.

Together with (a), this also shows that im ε = V^G.  □
2.3. New representations from old
Let G be a ﬁnite group and k a ﬁeld. In this section we will see how new representations
can be manufactured from existing ones. As well as allowing interesting new examples to be
constructed, this sometimes gives ways of understanding representations in terms of familiar
ones. This will be important when we have learnt how to decompose representations in terms
of irreducibles and indeed is sometimes used to construct the latter.
Let V_1, . . . , V_r be k-vector spaces admitting representations ρ^1, . . . , ρ^r of G. Then for each g ∈ G and each j we have the corresponding linear transformation ρ^j_g: V_j −→ V_j. By Proposition 1.20 there is a unique linear transformation

ρ^1_g ⊗ · · · ⊗ ρ^r_g : V_1 ⊗ · · · ⊗ V_r −→ V_1 ⊗ · · · ⊗ V_r.

It is easy to verify that this gives a representation of G on the tensor product V_1 ⊗ · · · ⊗ V_r, called the tensor product of the original representations. By Proposition 1.20 we have the formula

(2.1)  ρ^1_g ⊗ · · · ⊗ ρ^r_g (v_1 ⊗ · · · ⊗ v_r) = ρ^1_g v_1 ⊗ · · · ⊗ ρ^r_g v_r

for v_j ∈ V_j (j = 1, . . . , r).
Let V, W be k-vector spaces supporting representations ρ: G −→ GL_k(V) and σ: G −→ GL_k(W). Recall that Hom_k(V, W) is the set of all linear transformations V −→ W; it is a k-vector space whose addition and scalar multiplication are given by the following formulæ for φ, θ ∈ Hom_k(V, W) and t ∈ k:

(φ + θ)(u) = φ(u) + θ(u),  (tφ)(u) = t(φ(u)) = φ(tu).

There is an action of G on Hom_k(V, W) defined by

(τ_g φ)(u) = σ_g φ(ρ_{g^{−1}} u).

This turns out to be a linear representation of G on Hom_k(V, W).

As a particular example of this, taking W = k with the trivial action of G (i.e., σ_g = Id_k), we obtain an action of G on the dual of V,

V* = Hom_k(V, k).

This action determines the contragredient representation ρ*. Explicitly, this satisfies

ρ*_g φ = φ ∘ ρ_{g^{−1}}.
Proposition 2.16. Let ρ: G −→ GL_k(V) be a representation and v = {v_1, . . . , v_n} a basis of V. Suppose that relative to v,

[ρ_g]_v = [r_{ij}(g)]  (g ∈ G).

Then relative to the dual basis v* = {v*_1, . . . , v*_n}, we have

[ρ*_g]_{v*} = [r_{ji}(g^{−1})]  (g ∈ G),

or equivalently,

[ρ*_g]_{v*} = [ρ_{g^{−1}}]^T.
Proof. If we write

[ρ*_g]_{v*} = [t_{ij}(g)],

then by definition,

ρ*_g v*_s = Σ_{r=1}^n t_{rs}(g) v*_r.

Now for each i = 1, . . . , n,

(ρ*_g v*_j)(v_i) = Σ_{r=1}^n t_{rj}(g) v*_r(v_i),

which gives

v*_j(ρ_{g^{−1}} v_i) = t_{ij}(g),

and hence

t_{ij}(g) = v*_j( Σ_{k=1}^n r_{ki}(g^{−1}) v_k ) = r_{ji}(g^{−1}).  □
Another perspective on the above is provided by the next result, whose proof is left as an exercise.

Proposition 2.17. The k-linear isomorphism

Hom_k(V, W) ≅ W ⊗_k V*

is a G-isomorphism, where the right-hand side carries the tensor product representation σ ⊗ ρ*.
Using these ideas together with Proposition 2.15, we obtain the following useful result.

Proposition 2.18. For k of characteristic 0, the G-homomorphism

ε: Hom_k(V, W) −→ Hom_k(V, W)

of Proposition 2.15 has image equal to the set Hom_k(V, W)^G of G-homomorphisms V −→ W, which is also G-isomorphic to (W ⊗_k V*)^G.
Now let ρ: G −→ GL_k(V) be a representation of G and let H ⩽ G. We can restrict ρ to H and obtain a representation ρ|_H: H −→ GL_k(V) of H, usually denoted ρ ↓^G_H or Res^G_H ρ; the H-module V is also denoted V ↓^G_H or Res^G_H V.

Similarly, if G ⩽ K, then we can form the induced representation ρ ↑^K_G : K −→ GL_k(V ↑^K_G) as follows. Take K_R to be the G-set consisting of the underlying set of K with the G-action

g · x = xg^{−1}.

Define

V ↑^K_G = Ind^K_G V = Map(K_R, V)^G = {f: K −→ V : f(x) = ρ_g f(xg) ∀ x ∈ K, g ∈ G}.

Then K acts linearly on V ↑^K_G by

(k · f)(x) = f(kx),

and so we obtain a linear representation of K. The induced representation is often denoted ρ ↑^K_G or Ind^K_G ρ. The dimension of V ↑^K_G is dim_k V ↑^K_G = [K : G] dim_k V. Later we will meet reciprocity laws relating these induction and restriction operations.
2.4. Permutation representations
Let G be a finite group and X a finite G-set, i.e., a finite set X equipped with an action of G on X, written gx. A finite dimensional G-representation ρ: G −→ GL_k(V) over k is a permutation representation on X if there is an injective G-map j: X −→ V such that im j = j(X) ⊆ V is a k-basis for V. Notice that a permutation representation really depends on the injection j. We frequently have situations where X ⊆ V and j is the inclusion of the subset X. The condition that j be a G-map amounts to the requirement that

ρ_g(j(x)) = j(gx)  (g ∈ G, x ∈ X).
Definition 2.19. A homomorphism from a permutation representation j_1: X_1 −→ V_1 to a second one j_2: X_2 −→ V_2 is a G-linear transformation Φ: V_1 −→ V_2 such that

Φ(j_1(x)) ∈ im j_2  (x ∈ X_1).

A G-homomorphism of permutation representations which is a G-isomorphism is called a G-isomorphism of permutation representations.

Notice that by the injectivity of j_2, this implies the existence of a unique G-map φ: X_1 −→ X_2 for which

j_2(φ(x)) = Φ(j_1(x))  (x ∈ X_1).

Equivalently, we could specify the G-map φ: X_1 −→ X_2, and then Φ: V_1 −→ V_2 would be the unique linear extension of φ restricted to im j_2 (see Proposition 1.1). In the case where Φ is a G-isomorphism, it is easily verified that φ: X_1 −→ X_2 is a G-equivalence.
To show that such permutation representations exist in abundance, we proceed as follows. Let X be a finite set equipped with a G-action. Let k[X] = Map(X, k), the set of all functions X −→ k. This is a finite dimensional k-vector space with addition and scalar multiplication defined by

(f_1 + f_2)(x) = f_1(x) + f_2(x),  (tf)(x) = t(f(x)),

for f_1, f_2, f ∈ Map(X, k), t ∈ k and x ∈ X. There is an action of G on Map(X, k) given by

(g · f)(x) = f(g^{−1}x).

If Y is a second finite G-set and φ: X −→ Y is a G-map, then we define the induced function φ_*: k[X] −→ k[Y] by

(φ_* f)(y) = Σ_{x ∈ φ^{−1}{y}} f(x) = Σ_{φ(x)=y} f(x).
Theorem 2.20. Let G be a finite group.
(a) For a finite G-set X, k[X] is a finite dimensional permutation representation of dimension dim_k k[X] = |X|.
(b) For a G-map φ: X −→ Y, the induced function φ_*: k[X] −→ k[Y] is a G-linear transformation.

Proof. (a) For each x ∈ X we have a function δ_x: X −→ k given by

δ_x(y) = 1 if y = x,  δ_x(y) = 0 otherwise.

The map j: X −→ k[X] given by j(x) = δ_x is easily seen to be an injection. It is also a G-map, since

δ_{gx}(y) = 1 ⟺ δ_x(g^{−1}y) = 1,

and hence

j(gx)(y) = δ_{gx}(y) = δ_x(g^{−1}y) = (g · δ_x)(y).
Given a function f: X −→ k, consider

f − Σ_{x∈X} f(x) δ_x ∈ k[X].

Then for y ∈ X,

f(y) − Σ_{x∈X} f(x) δ_x(y) = f(y) − f(y) = 0,

hence f − Σ_{x∈X} f(x) δ_x is the constant function taking the value 0 on X. So the functions δ_x (x ∈ X) span k[X]. They are also linearly independent, since if the constant function with value 0 is expressed in the form Σ_{x∈X} t_x δ_x for some t_x ∈ k, then for each y ∈ X,

0 = Σ_{x∈X} t_x δ_x(y) = t_y,

hence all the coefficients t_x must be 0.
(b) The k-linearity of φ_* is easily checked. To show it is a G-map, for g ∈ G,

(g · φ_* f)(y) = (φ_* f)(g^{−1}y) = Σ_{x ∈ φ^{−1}{g^{−1}y}} f(x) = Σ_{x ∈ φ^{−1}{y}} f(g^{−1}x),

since

φ^{−1}{g^{−1}y} = {x ∈ X : gφ(x) = y} = {x ∈ X : φ(gx) = y} = {g^{−1}x : x ∈ X, x ∈ φ^{−1}{y}}.

Since by definition

(g · f)(x) = f(g^{−1}x),

we have

g · (φ_* f) = φ_*(g · f).  □
Given a permutation representation k[X], we will often use the injection j to identify X with a subset of k[X]. If φ: X −→ Y is a G-map, notice that

φ_*( Σ_{x∈X} t_x δ_x ) = Σ_{x∈X} t_x δ_{φ(x)}.

We will sometimes write x instead of δ_x, so that a typical element of k[X] is written Σ_{x∈X} t_x x, with each t_x ∈ k, rather than Σ_{x∈X} t_x δ_x. Another convenient notational device is to list the elements of X as x_1, x_2, . . . , x_n and then identify n = {1, 2, . . . , n} with X via the correspondence k ←→ x_k. Then we can identify k[n] ≅ k^n with k[X] using the correspondence

(t_1, t_2, . . . , t_n) ←→ Σ_{k=1}^n t_k x_k.
2.5. Properties of permutation representations
Let X be a finite G-set. The next result shows how to reduce an arbitrary permutation representation to a direct sum of ones induced from transitive G-sets.
Proposition 2.21. Let X = X_1 ⨿ X_2, where X_1, X_2 ⊆ X are closed under the action of G. Then there is a G-isomorphism

k[X] ≅ k[X_1] ⊕ k[X_2].
Proof. Let j_1: X_1 −→ X and j_2: X_2 −→ X be the inclusion maps, which are G-maps. By Theorem 2.20(b), there are G-linear transformations j_{1*}: k[X_1] −→ k[X] and j_{2*}: k[X_2] −→ k[X]. For

f = Σ_{x∈X} t_x x ∈ k[X],

we have the 'restrictions'

f_1 = Σ_{x∈X_1} t_x x,  f_2 = Σ_{x∈X_2} t_x x.

We define our linear map k[X] −→ k[X_1] ⊕ k[X_2] by

f ↦ (f_1, f_2).

It is easily seen that this is a linear transformation, and moreover it has an inverse given by

(h_1, h_2) ↦ j_{1*} h_1 + j_{2*} h_2.

Finally, the latter is a G-map, being the sum of two G-maps, so its inverse is a G-map as well.  □
Let X_1 and X_2 be G-sets. Then X = X_1 × X_2 can be made into a G-set with action given by

g · (x_1, x_2) = (gx_1, gx_2).

Proposition 2.22. Let X_1 and X_2 be G-sets. Then there is a G-isomorphism

k[X_1] ⊗ k[X_2] ≅ k[X_1 × X_2].
Proof. The function F: k[X_1] × k[X_2] −→ k[X_1 × X_2] defined by

F( Σ_{x∈X_1} s_x x, Σ_{y∈X_2} t_y y ) = Σ_{x∈X_1} Σ_{y∈X_2} s_x t_y (x, y)

is k-bilinear. Hence by the universal property of the tensor product (Section 1.4, UPTP), there is a unique linear transformation F′: k[X_1] ⊗ k[X_2] −→ k[X_1 × X_2] for which

F′(x ⊗ y) = (x, y)  (x ∈ X_1, y ∈ X_2).

This is easily seen to be a G-linear isomorphism.  □
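In matrix terms, this isomorphism says that the permutation matrix of g acting on k[X_1 × X_2] (with pairs ordered lexicographically) is the Kronecker product of its matrices on k[X_1] and k[X_2]. A small check, with illustrative choices of the actions, not from the text:

```python
import numpy as np

# Permutation matrix of g acting on X = {0,...,n-1}, where g is given as a
# tuple of images.
def pmat(perm):
    n = len(perm)
    m = np.zeros((n, n))
    for j in range(n):
        m[perm[j], j] = 1
    return m

# g acts on X1 (3 points) and on X2 (2 points).
g1 = (1, 2, 0)        # 3-cycle on X1
g2 = (1, 0)           # swap on X2
# Diagonal action on X1 x X2, with pairs (x, y) ordered lexicographically,
# so the pair (x, y) gets index 2x + y.
g12 = tuple(g1[x] * 2 + g2[y] for x in range(3) for y in range(2))

# The matrix of g on k[X1 x X2] is the Kronecker product of its matrices
# on k[X1] and k[X2] -- the matrix form of k[X1] (x) k[X2] = k[X1 x X2].
assert np.allclose(np.kron(pmat(g1), pmat(g2)), pmat(g12))
```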
Definition 2.23. Let G be a finite group. The regular representation of G over k is the G-representation k[G]. This has dimension dim_k k[G] = |G|.

Proposition 2.24. The regular representation of a finite group G over a field k is a ring (in fact a k-algebra). Moreover, this ring is commutative if and only if G is abelian.
Proof. Let a = Σ_{g∈G} a_g g and b = Σ_{g∈G} b_g g, where a_g, b_g ∈ k. Then we define the product of a and b by

ab = Σ_{g∈G} ( Σ_{h∈G} a_h b_{h^{−1}g} ) g.

Note that for g, h ∈ G we have, in k[G],

(1g)(1h) = gh.

For commutativity, each such product (1g)(1h) must agree with (1h)(1g), which happens if and only if G is abelian. The rest of the details are left as an exercise.  □
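The product formula can be implemented directly, representing an element of k[G] as a dictionary of coefficients. The sketch below is illustrative, with G = S_3 encoded as tuples of images; it checks that basis elements multiply like group elements and that k[S_3] is non-commutative.

```python
from itertools import permutations

# Elements of S_3 as tuples of images of 0,1,2; composition (s*t)(x) = s(t(x)).
def compose(s, t):
    return tuple(s[t[x]] for x in range(3))

def inverse(s):
    inv = [0, 0, 0]
    for x in range(3):
        inv[s[x]] = x
    return tuple(inv)

G = list(permutations(range(3)))

# Product in k[G]:  ab = sum_g ( sum_h a_h b_{h^{-1} g} ) g.
def mult(a, b):
    return {g: sum(a.get(h, 0) * b.get(compose(inverse(h), g), 0) for h in G)
            for g in G}

# Basis elements multiply like group elements: (1g)(1h) = gh ...
g, h = (1, 0, 2), (0, 2, 1)
assert mult({g: 1}, {h: 1}) == {k: (1 if k == compose(g, h) else 0) for k in G}
# ... and since S_3 is non-abelian, k[S_3] is non-commutative.
assert mult({g: 1}, {h: 1}) != mult({h: 1}, {g: 1})
```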
The ring k[G] is called the group algebra or group ring of G over k. The next result is left as an exercise for those who know about modules. It provides a link between the study of modules over k[G] and G-representations; the group ring construction thus provides an important source of noncommutative rings and their modules.

Proposition 2.25. Let V be a k-vector space. If V carries a G-representation, then it admits the structure of a k[G]-module, defined by

( Σ_{g∈G} a_g g ) v = Σ_{g∈G} a_g (g · v).

Conversely, if V is a k[G]-module, then it admits a G-representation with action defined by

g · v = (1g)v.
2.6. Calculating in permutation representations
In this section we determine how the permutation representation k[X] looks in terms of the basis consisting of the elements x (x ∈ X). We know that g ∈ G acts by sending x to gx. Hence, if we label the rows and columns of a matrix by the elements of X, the |X| × |X| matrix [g] of g with respect to this basis has xy entry

(2.2)  [g]_{xy} = δ_{x,gy} = 1 if x = gy, 0 otherwise,

where δ_{a,b} denotes the Kronecker δ function, which is 0 except when a = b, when it takes the value 1. Thus there is exactly one 1 in each row and each column, and 0's everywhere else. The following is an important example.
Let X = n = {1, 2, . . . , n} and G = S_n, the symmetric group of degree n, acting on n in the usual way. We may take as a basis for k[n] the functions δ_j (1 ⩽ j ⩽ n) given by

δ_j(k) = 1 if k = j, 0 otherwise.

Relative to this basis, the action of σ ∈ S_n is given by the n × n matrix [σ] whose ij-th entry is

(2.3)  [σ]_{ij} = 1 if i = σ(j), 0 otherwise.
Taking n = 3, we get

[(1 3 2)] = [ 0 1 0 ]    [(1 3)] = [ 0 0 1 ]    [(1 3 2)(1 3)] = [(1 2)] = [ 0 1 0 ]
            [ 0 0 1 ],             [ 0 1 0 ],                              [ 1 0 0 ]
            [ 1 0 0 ]              [ 1 0 0 ]                               [ 0 0 1 ].

As expected, we also have

[(1 3 2)][(1 3)] = [ 0 1 0 ] [ 0 0 1 ]   [ 0 1 0 ]
                   [ 0 0 1 ] [ 0 1 0 ] = [ 1 0 0 ] = [(1 3 2)(1 3)].
                   [ 1 0 0 ] [ 1 0 0 ]   [ 0 0 1 ]
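These matrix computations are easy to reproduce; the sketch below encodes permutations 0-based as tuples of images (an implementation convenience, not the text's notation) and checks that (1 3 2)(1 3) = (1 2) holds at the matrix level.

```python
import numpy as np

# Matrix [sigma] with [sigma]_{ij} = 1 if i = sigma(j), for sigma in S_3.
def pmat(sigma):
    m = np.zeros((3, 3))
    for j in range(3):
        m[sigma[j], j] = 1
    return m

c132 = (2, 0, 1)   # the cycle (1 3 2): 1 -> 3 -> 2 -> 1, written 0-based
t13 = (2, 1, 0)    # the transposition (1 3)
t12 = (1, 0, 2)    # the transposition (1 2)

# (1 3 2)(1 3) = (1 2), and matrix multiplication matches composition:
assert np.allclose(pmat(c132) @ pmat(t13), pmat(t12))
```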
An important fact about permutation representations is the following, which makes their characters easy to calculate.

Proposition 2.26. Let X be a finite G-set and k[X] the associated permutation representation. Let g ∈ G and let ρ_g: k[X] −→ k[X] be the linear transformation induced by g. Then

tr ρ_g = |X^g| = |{x ∈ X : gx = x}| = the number of elements of X fixed by g.

Proof. Take the elements of X to be a basis for k[X]. Then tr ρ_g is the sum of the diagonal terms in the matrix [ρ_g] relative to this basis. Making use of Equation (2.2), we see that

tr ρ_g = number of non-zero diagonal terms in [ρ_g] = number of elements of X fixed by g.  □
Our next result shows that permutation representations are self-dual.

Proposition 2.27. Let X be a finite G-set and k[X] the associated permutation representation. Then there is a G-isomorphism k[X] ≅ k[X]*.

Proof. Take as a basis of k[X] the elements x ∈ X. Then a basis for the dual space k[X]* consists of the elements x*. By definition of the action of G on k[X]* = Hom_k(k[X], k), we have

(g · x*)(y) = x*(g^{−1}y)  (g ∈ G, y ∈ X).

A familiar calculation shows that g · x* = (gx)*, and so this basis is also permuted by G. Now define a function φ: k[X] −→ k[X]* by

φ( Σ_{x∈X} a_x x ) = Σ_{x∈X} a_x x*.

This is a k-linear isomorphism which also satisfies

φ( g · Σ_{x∈X} a_x x ) = φ( Σ_{x∈X} a_x (gx) ) = Σ_{x∈X} a_x (gx)* = g · φ( Σ_{x∈X} a_x x ).

Hence φ is a G-isomorphism.  □
2.7. Generalized permutation representations
It is useful to generalize the notion of permutation representation somewhat. Let V be a finite dimensional k-vector space with a representation of G, ρ: G −→ GL_k(V); we will usually write gv = ρ_g v. For a finite G-set X, we can consider the set Map(X, V) of all functions X −→ V; this is also a finite dimensional k-vector space, with addition and scalar multiplication defined by

(f_1 + f_2)(x) = f_1(x) + f_2(x),  (tf)(x) = t(f(x)),

for f_1, f_2, f ∈ Map(X, V), t ∈ k and x ∈ X. There is a representation of G on Map(X, V) given by

(g · f)(x) = g f(g^{−1}x).

We call this a generalized permutation representation of G.
Proposition 2.28. Let Map(X, V) be a generalized permutation representation of G, where V has basis v = {v_1, . . . , v_n}. Then the functions δ_{x,j}: X −→ V (x ∈ X, 1 ⩽ j ⩽ n) given by

δ_{x,j}(y) = v_j if y = x, 0 otherwise,

for y ∈ X, form a basis for Map(X, V). Hence

dim_k Map(X, V) = |X| dim_k V.
Proof. Let f: X −→ V. Then for any y ∈ X,

f(y) = Σ_{j=1}^n f_j(y) v_j,

where each f_j: X −→ k is a function. It suffices now to show that any function h: X −→ k has a unique expression as

h = Σ_{x∈X} h_x δ_x,

where h_x ∈ k and δ_x: X −→ k is given by

δ_x(y) = 1 if y = x, 0 otherwise.

But for y ∈ X,

h(y) = Σ_{x∈X} h_x δ_x(y) ⟺ h(y) = h_y.

Hence h = Σ_{x∈X} h(x) δ_x is the unique expansion of this form. Combining this with the above, we have

f(y) = Σ_{j=1}^n f_j(y) v_j = Σ_{j=1}^n Σ_{x∈X} f_j(x) δ_x(y) v_j,

and so

f = Σ_{j=1}^n Σ_{x∈X} f_j(x) δ_{x,j}

is the unique such expression, since δ_{x,j}(y) = δ_x(y) v_j.  □
Proposition 2.29. If V = V_1 ⊕ V_2 is a direct sum of representations V_1, V_2, then there is a G-isomorphism

Map(X, V) ≅ Map(X, V_1) ⊕ Map(X, V_2).

Proof. Recall that every v ∈ V has a unique expression of the form v = v_1 + v_2 with v_1 ∈ V_1, v_2 ∈ V_2. Define a function

Map(X, V) −→ Map(X, V_1) ⊕ Map(X, V_2);  f ↦ f_1 + f_2,

where f_1: X −→ V_1 and f_2: X −→ V_2 satisfy

f(x) = f_1(x) + f_2(x)  (x ∈ X).

This is easily seen to be both a linear isomorphism and a G-homomorphism, hence a G-isomorphism.  □
Proposition 2.30. Let X = X_1 ⨿ X_2, where X_1, X_2 ⊆ X are closed under the action of G. Then there is a G-isomorphism

Map(X, V) ≅ Map(X_1, V) ⊕ Map(X_2, V).

Proof. Let j_1: X_1 −→ X and j_2: X_2 −→ X be the inclusion maps, which are G-maps. Then given f: X −→ V, we have two functions f_k: X −→ V (k = 1, 2) given by

f_k(x) = f(x) if x ∈ X_k, 0 otherwise.

Define a function

Map(X, V) −→ Map(X_1, V) ⊕ Map(X_2, V);  f ↦ f_1 + f_2.

This is easily seen to be a linear isomorphism. Using the fact that X_k is closed under the action of G, we see that

(g · f)_k = g · f_k,

so

g · (f_1 + f_2) = g · f_1 + g · f_2.

Therefore this map is a G-isomorphism.  □
These results tell us how to reduce an arbitrary generalized permutation representation to a direct sum of ones induced from a transitive G-set X and an irreducible representation V.
Exercises on Chapter 2
2-1. Consider the function σ: D_2n −→ GL_C(C^2) given by

σ_{α^r}(xe_1 + ye_2) = ζ^r xe_1 + ζ^{−r} ye_2,  σ_{α^r β}(xe_1 + ye_2) = ζ^r ye_1 + ζ^{−r} xe_2,

where σ_g = σ(g) and ζ = e^{2πi/n}. Show that this defines a 2-dimensional representation of D_2n over C. Show that this representation is irreducible and determine ker σ.
2-2. Show that there is a 3-dimensional real representation θ: Q_8 −→ GL_R(R^3) of the quaternion group Q_8 for which

θ_i(xe_1 + ye_2 + ze_3) = xe_1 − ye_2 − ze_3,  θ_j(xe_1 + ye_2 + ze_3) = −xe_1 + ye_2 − ze_3.

Show that this representation is not irreducible and determine ker θ.
2-3. Consider the 2-dimensional complex vector space

V = {(x_1, x_2, x_3) ∈ C^3 : x_1 + x_2 + x_3 = 0}.

Show that the symmetric group S_3 has a representation ρ on V defined by

ρ_σ(x_1, x_2, x_3) = (x_{σ^{−1}(1)}, x_{σ^{−1}(2)}, x_{σ^{−1}(3)})

for σ ∈ S_3. Show that this representation is irreducible.
2-4. If p is a prime and G is a non-trivial finite p-group, show that there is a non-trivial 1-dimensional representation of G. More generally, show that this holds for any finite solvable group.
2-5. Let X = {1, 2, 3} = 3 with the usual action of the symmetric group S_3. Consider the complex permutation representation of S_3 associated to this, with underlying vector space V = C[3].
(i) Show that the invariant subspace

V^{S_3} = {v ∈ V : σ · v = v ∀ σ ∈ S_3}

is 1-dimensional.
(ii) Find a 2-dimensional S_3-subspace W ⊆ V such that V = V^{S_3} ⊕ W.
(iii) Show that the W of (ii) is irreducible.
(iv) Show that the restriction W ↓^{S_3}_H of the representation of S_3 on W to the subgroup H = {e, (1 2)} is not irreducible.
(v) Show that the restriction W ↓^{S_3}_K of the representation of S_3 on W to the subgroup K = {e, (1 2 3), (1 3 2)} is not irreducible.
2-6. Let the finite group G act on the finite set X and let C[X] be the associated complex permutation representation.
(i) If the action of G on X is transitive (i.e., there is exactly one orbit), show that there is a 1-dimensional G-subspace C{v_X} with basis vector v_X = Σ_{x∈X} x. Find a G-subspace W_X ⊆ C[X] for which C[X] = C{v_X} ⊕ W_X.
(ii) For a general G-action on X, for each G-orbit Y in X use (i) to find a 1-dimensional G-subspace V_Y and another, W_Y, of dimension |Y| − 1, such that

C[X] = V_{Y_1} ⊕ W_{Y_1} ⊕ · · · ⊕ V_{Y_r} ⊕ W_{Y_r},

where Y_1, . . . , Y_r are the distinct G-orbits of X.
2-7. If ρ: G −→ GL_k(V) is an irreducible representation, prove that the contragredient representation ρ*: G −→ GL_k(V*) is also irreducible.
2-8. Let k be a field of characteristic different from 2. Let G be a finite group of even order and let C ⩽ G be a subgroup of order 2 with generator γ. Consider the regular representation of G, which comes from the natural left action of G on k[G].

Now consider the action of the generator γ on k[G] by multiplication of basis vectors on the right. Denote the +1 and −1 eigenspaces of this action by k[G]_+ and k[G]_− respectively. Show that the subspaces k[G]_± are G-subspaces and that

dim_k k[G]_+ = dim_k k[G]_− = |G|/2.

Deduce that k[G] = k[G]_+ ⊕ k[G]_− as G-representations.
CHAPTER 3
Character theory
3.1. Characters and class functions on a ﬁnite group
Let G be a finite group and let ρ: G −→ GL_C(V) be a finite dimensional C-representation of dimension dim_C V = n. For g ∈ G, the linear transformation ρ_g: V −→ V will sometimes be written g· or g. The character of g in the representation ρ is the trace of g on V, i.e.,

χ_ρ(g) = tr ρ_g = tr g.

We can view χ_ρ as a function χ_ρ: G −→ C, the character of the representation ρ.
Definition 3.1. A function θ: G −→ C is a class function if for all g, h ∈ G,

θ(hgh^{−1}) = θ(g),

i.e., θ is constant on each conjugacy class of G.

Proposition 3.2. For all g, h ∈ G,

χ_ρ(hgh^{−1}) = χ_ρ(g).

Hence χ_ρ: G −→ C is a class function on G.
Proof. We have

ρ_{hgh^{−1}} = ρ_h ∘ ρ_g ∘ ρ_{h^{−1}} = ρ_h ∘ ρ_g ∘ ρ_h^{−1},

and so

χ_ρ(hgh^{−1}) = tr(ρ_h ∘ ρ_g ∘ ρ_h^{−1}) = tr ρ_g = χ_ρ(g).  □
Example 3.3. Let G = S_3 act on the set 3 = {1, 2, 3} in the usual way. Let V = C[3] be the associated permutation representation over C, where we take as basis e = {e_1, e_2, e_3} with action

σ · e_j = e_{σ(j)}.

Let us determine the character of this representation ρ: S_3 −→ GL_C(V). The elements of S_3 written using cycle notation are the following:

1, (1 2), (2 3), (1 3), (1 2 3), (1 3 2).

The matrices of these elements with respect to e are

I_3,  [ 0 1 0 ]  [ 1 0 0 ]  [ 0 0 1 ]  [ 0 0 1 ]  [ 0 1 0 ]
      [ 1 0 0 ]  [ 0 0 1 ]  [ 0 1 0 ]  [ 1 0 0 ]  [ 0 0 1 ]
      [ 0 0 1 ], [ 0 1 0 ], [ 1 0 0 ], [ 0 1 0 ], [ 1 0 0 ].

Taking traces we obtain

χ_ρ(1) = 3,  χ_ρ((1 2)) = χ_ρ((2 3)) = χ_ρ((1 3)) = 1,  χ_ρ((1 2 3)) = χ_ρ((1 3 2)) = 0.
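These traces agree with counting fixed points, as the next result records; a quick check for S_3 acting on {1, 2, 3} (0-based tuples, an implementation convenience):

```python
from itertools import permutations

# chi(sigma) for the permutation representation C[{1,2,3}] is the number of
# fixed points of sigma.
def chi(sigma):
    return sum(1 for x in range(3) if sigma[x] == x)

chars = sorted(chi(s) for s in permutations(range(3)))
# identity -> 3, three transpositions -> 1, two 3-cycles -> 0
assert chars == [0, 0, 1, 1, 1, 3]
```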
Notice that χ_ρ(g) ∈ Z for all g ∈ G. Indeed, by Proposition 2.26 we have the following important and useful result.

Proposition 3.4. Let X be a G-set and ρ the associated permutation representation on C[X]. Then for each g ∈ G,

χ_ρ(g) = |X^g| = |{x ∈ X : g · x = x}| = the number of elements of X fixed by g;

in particular, χ_ρ(g) is a non-negative integer.
The next result sheds further light on the significance of the character of a G-representation over the complex number field C, and makes use of linear algebra developed in Section 1.3 of Chapter 1.

Theorem 3.5. For g ∈ G, there is a basis v = {v_1, . . . , v_n} of V consisting of eigenvectors of the linear transformation g.

Proof. Let d = |g|, the order of g. For v ∈ V,

(g^d − Id_V)(v) = 0.

Now we can apply Lemma 1.16 with the polynomial f(X) = X^d − 1, which has d distinct roots in C.  □

There may well be a polynomial identity of smaller degree satisfied by the linear transformation g on V. However, if a polynomial f(X) satisfied by g has deg f(X) ⩽ d and no repeated linear factors, then f(X) | (X^d − 1).
Corollary 3.6. The distinct eigenvalues of the linear transformation g on V are d-th roots of unity. More precisely, if d_0 is the smallest natural number such that for all v ∈ V,

(g^{d_0} − Id_V)(v) = 0,

then the distinct eigenvalues of g are d_0-th roots of unity.

Proof. An eigenvalue λ (with eigenvector v_λ ≠ 0) of g satisfies

(g^d − Id_V)(v_λ) = 0,

hence

(λ^d − 1)v_λ = 0.  □
Corollary 3.7. For any g ∈ G we have

χ_ρ(g) = Σ_{j=1}^n λ_j,

where λ_1, . . . , λ_n are the n eigenvalues of ρ_g on V, including repetitions.
Corollary 3.8. For g ∈ G we have

χ_ρ(g^{−1}) = \overline{χ_ρ(g)} = χ_{ρ*}(g).

Proof. If the eigenvalues of ρ_g, including repetitions, are λ_1, . . . , λ_n, then the eigenvalues of ρ_{g^{−1}}, including repetitions, are easily seen to be λ_1^{−1}, . . . , λ_n^{−1}. But if ζ is a root of unity then ζ^{−1} = \overline{ζ}, and so χ_ρ(g^{−1}) = \overline{χ_ρ(g)}. The second equality follows from Proposition 2.16.  □
Now let us return to the idea of functions on a group which are invariant under conjugation. Denote by G_c the set G, and let G act on it by conjugation,

    g · x = gxg^{−1}.

The set of all functions G_c −→ C, Map(G_c, C), has an action of G given by

    (g · α)(x) = α(gxg^{−1})

for α ∈ Map(G_c, C), g ∈ G and x ∈ G_c. Then the class functions are those which are invariant under conjugation, and hence form the set Map(G_c, C)^G, which is a C-vector subspace of Map(G_c, C).
Proposition 3.9. The C-vector space Map(G_c, C)^G has as a basis the set of all functions Δ_C : G_c −→ C, for C a conjugacy class in G, defined by

    Δ_C(x) = { 1 if x ∈ C, 0 if x ∉ C }.

Thus dim_C Map(G_c, C)^G is the number of conjugacy classes in G.
Proof. From the proof of Theorem 2.20, we know that a class function α: G_c −→ C has a unique expression of the form

    α = ∑_{x∈G_c} a_x δ_x

for suitable a_x ∈ C. But

    g · α = ∑_{x∈G_c} a_x (g · δ_x) = ∑_{x∈G_c} a_x δ_{gxg^{−1}} = ∑_{x∈G_c} a_{gxg^{−1}} δ_x.

Hence by uniqueness and the definition of class function, we must have

    a_{gxg^{−1}} = a_x   (g ∈ G, x ∈ G_c).

Hence,

    α = ∑_C a_C ∑_{x∈C} δ_x,

where for each conjugacy class C we choose any element c_0 ∈ C and put a_C = a_{c_0}; here the outer sum is over all the conjugacy classes C of G. We now find that

    Δ_C = ∑_{x∈C} δ_x,

and the rest of the proof is straightforward. □
We will see that the characters of non-isomorphic irreducible representations of G also form a basis of Map(G_c, C)^G. We set C(G) = Map(G_c, C)^G.
3.2. Properties of characters
In this section we will see some other important properties of characters.
Theorem 3.10. Let G be a finite group with finite dimensional complex representations ρ: G −→ GL_C(V) and σ: G −→ GL_C(W). Then

(a) χ_ρ(e) = dim_C V, and for g ∈ G, |χ_ρ(g)| ⩽ χ_ρ(e).

(b) The tensor product representation ρ ⊗ σ has character

    χ_{ρ⊗σ} = χ_ρ χ_σ,

i.e., for each g ∈ G,

    χ_{ρ⊗σ}(g) = χ_ρ(g)χ_σ(g).

(c) Let τ : G −→ GL_C(U) be a representation which is G-isomorphic to the direct sum of ρ and σ, so U ≅ V ⊕ W. Then

    χ_τ = χ_ρ + χ_σ,

i.e., for each g ∈ G,

    χ_τ(g) = χ_ρ(g) + χ_σ(g).
Proof. (a) The first statement is immediate from the definition. For the second, using Theorem 3.5, we may choose a basis v = {v_1, . . . , v_r} of V for which ρ_g v_k = λ_k v_k, where each λ_k is a root of unity (hence satisfies |λ_k| = 1). Then

    |χ_ρ(g)| = |∑_{k=1}^{r} λ_k| ⩽ ∑_{k=1}^{r} |λ_k| = r = χ_ρ(e).
(b) Let g ∈ G. By Theorem 3.5, there are bases v = {v_1, . . . , v_r} and w = {w_1, . . . , w_s} for V and W consisting of eigenvectors for ρ_g and σ_g, with corresponding eigenvalues λ_1, . . . , λ_r and μ_1, . . . , μ_s. The elements v_i ⊗ w_j form a basis for V ⊗ W and, by the formula of Equation (2.1), the action of g on these vectors is given by

    (ρ ⊗ σ)_g · (v_i ⊗ w_j) = λ_i μ_j v_i ⊗ w_j.

Finally Corollary 3.7 implies

    tr(ρ ⊗ σ)_g = ∑_{i,j} λ_i μ_j = χ_ρ(g)χ_σ(g).
(c) For g ∈ G, choose bases v = {v_1, . . . , v_r} and w = {w_1, . . . , w_s} for V and W consisting of eigenvectors for ρ_g and σ_g, with corresponding eigenvalues λ_1, . . . , λ_r and μ_1, . . . , μ_s. Then v ∪ w = {v_1, . . . , v_r, w_1, . . . , w_s} is a basis for U consisting of eigenvectors for τ_g with the above eigenvalues. Then

    χ_τ(g) = tr τ_g = λ_1 + ··· + λ_r + μ_1 + ··· + μ_s = χ_ρ(g) + χ_σ(g). □
3.3. Inner products of characters
In this section we will discuss a way to 'compare' characters, using a scalar or inner product on the vector space of class functions C(G). In particular, we will see that the character of a representation determines it up to a G-isomorphism. We will again work over the field of complex numbers C.
We begin with the notion of a scalar or inner product on a finite dimensional C-vector space V. A function ( | ): V × V −→ C is called a hermitian inner or scalar product on V if for v, v_1, v_2, w ∈ V and z_1, z_2 ∈ C,

    (z_1 v_1 + z_2 v_2 | w) = z_1 (v_1|w) + z_2 (v_2|w),                    (LLin)
    (w | z_1 v_1 + z_2 v_2) = \overline{z_1} (w|v_1) + \overline{z_2} (w|v_2),  (RLin)
    (v|w) = \overline{(w|v)},                                               (Symm)
    0 ⩽ (v|v) ∈ R, with equality if and only if v = 0.                     (PoDe)
A set of vectors {v_1, . . . , v_k} is said to be orthonormal if

    (v_i|v_j) = δ_ij = { 1 if i = j, 0 else }.
We will define an inner product ( | )_G on C(G) = Map(G_c, C)^G, often writing ( | ) when the group G is clear from the context.
Definition 3.11. For α, β ∈ C(G), let

    (α|β)_G = (1/|G|) ∑_{g∈G} α(g)\overline{β(g)}.
Proposition 3.12. ( | ) = ( | )_G is a hermitian inner product on C(G).

Proof. The properties LLin, RLin and Symm are easily checked. We will show that PoDe holds. We have

    (α|α) = (1/|G|) ∑_{g∈G} α(g)\overline{α(g)} = (1/|G|) ∑_{g∈G} |α(g)|^2 ⩾ 0,

with equality if and only if α(g) = 0 for all g ∈ G. Hence ( | ) satisfies PoDe. □
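Definition 3.11 is straightforward to implement by grouping the sum over G into conjugacy classes. The sketch below assumes the S_3 character table worked out in Section 3.4 (classes e, (1 2), (1 2 3) of sizes 1, 3, 2) and checks that the irreducible characters are orthonormal, as Corollary 3.14 below predicts; it is an illustration, not part of the text.

```python
# Character table of S_3 over the class representatives e, (1 2), (1 2 3),
# with conjugacy class sizes 1, 3, 2 (so |S_3| = 6).
CLASS_SIZES = [1, 3, 2]
TABLE = {"chi1": [1, 1, 1], "chi2": [1, -1, 1], "chi3": [2, 0, -1]}

def inner(alpha, beta):
    """(alpha|beta)_G = (1/|G|) sum_g alpha(g) * conj(beta(g)), grouped by class."""
    order = sum(CLASS_SIZES)
    return sum(s * a * complex(b).conjugate()
               for s, a, b in zip(CLASS_SIZES, alpha, beta)) / order
```

For example (χ_3|χ_3) = (1·4 + 3·0 + 2·1)/6 = 1, while (χ_1|χ_2) = (1 − 3 + 2)/6 = 0.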
Now let ρ: G −→ GL_C(V) and θ: G −→ GL_C(W) be finite dimensional representations over C. We know how to determine (χ_ρ|χ_θ)_G from the definition. Here is another interpretation of this quantity. Recall from Proposition 2.17 the representations of G on W ⊗ V^∗ and Hom_C(V, W); in fact these are G-isomorphic, W ⊗ V^∗ ≅ Hom_C(V, W). By Proposition 2.18, the G-invariant subspaces (W ⊗ V^∗)^G and Hom_C(V, W)^G are subrepresentations and are images of G-homomorphisms ε_1: W ⊗ V^∗ −→ W ⊗ V^∗ and ε_2: Hom_C(V, W) −→ Hom_C(V, W).
Proposition 3.13. We have

    (χ_θ|χ_ρ)_G = tr ε_1 = tr ε_2.
Proof. Let g ∈ G. By Theorem 3.5 and Corollary 3.7 we can find bases v = {v_1, . . . , v_r} for V and w = {w_1, . . . , w_s} for W consisting of eigenvectors with corresponding eigenvalues
λ_1, . . . , λ_r and μ_1, . . . , μ_s. The elements w_j ⊗ v_i^∗ form a basis for W ⊗ V^∗, and moreover g acts on these by

    (θ ⊗ ρ^∗)_g (w_j ⊗ v_i^∗) = μ_j \overline{λ_i} w_j ⊗ v_i^∗,

using Proposition 2.16. By Corollary 3.7 we have

    tr(θ ⊗ ρ^∗)_g = ∑_{i,j} μ_j \overline{λ_i} = (∑_j μ_j)(∑_i \overline{λ_i}) = χ_θ(g)\overline{χ_ρ(g)}.

By definition of ε_1, we have

    tr ε_1 = (1/|G|) ∑_{g∈G} tr(θ ⊗ ρ^∗)_g = (1/|G|) ∑_{g∈G} χ_θ(g)\overline{χ_ρ(g)} = (χ_θ|χ_ρ).

Since ε_2 corresponds to ε_1 under the G-isomorphism

    W ⊗ V^∗ ≅ Hom_C(V, W),

we obtain tr ε_1 = tr ε_2. □
Corollary 3.14. For irreducible representations ρ and θ,

    (χ_θ|χ_ρ) = { 1 if ρ and θ are G-equivalent, 0 otherwise }.
Proof. By Schur's Lemma, Theorem 2.8,

    dim_k Hom_k(V, W)^G = { 1 if ρ and θ are G-equivalent, 0 otherwise }.

Since ε_2 is the identity on Hom_k(V, W)^G, the result follows. □
Thus if we take a collection of non-equivalent irreducible representations {ρ_1, . . . , ρ_r}, their characters form an orthonormal set {χ_{ρ_1}, . . . , χ_{ρ_r}} in C(G), i.e.,

    (χ_{ρ_i}|χ_{ρ_j}) = δ_ij.

By Proposition 3.9 we know that dim_C C(G) is equal to the number of conjugacy classes in G. We will show that the characters of the distinct inequivalent irreducible representations form a basis for C(G); thus there must be dim_C C(G) such distinct inequivalent irreducibles.
Theorem 3.15. The characters of all the distinct inequivalent irreducible representations
of G form an orthonormal basis for C(G).
Proof. Suppose α ∈ C(G) and for every irreducible ρ we have (α|χ_ρ) = 0. We will show that α = 0.

Suppose that ρ: G −→ GL_C(V) is any representation of G. Then define ρ_α: V −→ V by

    ρ_α(v) = ∑_{g∈G} α(g) ρ_g v.
For any h ∈ G and v ∈ V we have

    ρ_α(ρ_h v) = ∑_{g∈G} α(g) ρ_g (ρ_h v)
               = ρ_h (∑_{g∈G} α(g) ρ_{h^{−1}gh} v)
               = ρ_h (∑_{g∈G} α(h^{−1}gh) ρ_{h^{−1}gh} v)
               = ρ_h (∑_{g∈G} α(g) ρ_g v)
               = ρ_h ρ_α(v).

Hence ρ_α ∈ Hom_C(V, V)^G, i.e., ρ_α is G-linear.
Now applying this to an irreducible ρ with dim ρ = n: by Schur's Lemma, Theorem 2.8, there must be a λ ∈ C for which ρ_α = λ Id_V.

Taking traces, we have tr ρ_α = nλ. Also

    tr ρ_α = ∑_{g∈G} α(g) tr ρ_g = ∑_{g∈G} α(g) χ_ρ(g) = |G| (α|χ_{ρ^∗}).

Hence we obtain

    λ = (|G| / dim_C V) (α|χ_{ρ^∗}).

If (α|χ_ρ) = 0 for all irreducible ρ, then as ρ^∗ is irreducible whenever ρ is, we must have ρ_α = 0 for every such irreducible ρ.
Since every representation ρ decomposes into a sum of irreducible subrepresentations, it is easily verified that for every ρ we also have ρ_α = 0 for such an α.

Now apply this to the regular representation ρ = ρ_reg on V = C[G]. Taking the basis vector e ∈ C[G], we have

    ρ_α(e) = ∑_{g∈G} α(g) ρ_g e = ∑_{g∈G} α(g) ge = ∑_{g∈G} α(g) g.

But this must be 0, hence we have

    ∑_{g∈G} α(g) g = 0

in C[G], which can only happen if α(g) = 0 for every g ∈ G, since the elements g ∈ G form a basis of C[G]. Thus α = 0 as desired.
Now for any α ∈ C(G), we can form the function

    α′ = α − ∑_{i=1}^{r} (α|χ_{ρ_i}) χ_{ρ_i},

where ρ_1, ρ_2, . . . , ρ_r is a complete set of non-isomorphic irreducible representations of G. For each k we have

    (α′|χ_{ρ_k}) = (α|χ_{ρ_k}) − ∑_{i=1}^{r} (α|χ_{ρ_i})(χ_{ρ_i}|χ_{ρ_k})
                 = (α|χ_{ρ_k}) − ∑_{i=1}^{r} (α|χ_{ρ_i}) δ_ik
                 = (α|χ_{ρ_k}) − (α|χ_{ρ_k}) = 0,

hence α′ = 0. So the characters χ_{ρ_i} span C(G), and orthogonality shows that they are linearly independent; hence they form a basis. □
Recall Theorem 2.14, which says that any representation V can be decomposed into irreducible G-subspaces,

    V = V_1 ⊕ ··· ⊕ V_m.
Theorem 3.16. Let V = V_1 ⊕ ··· ⊕ V_m be a decomposition into irreducible subspaces. If ρ_k: G −→ GL_C(V_k) is the representation on V_k and ρ: G −→ GL_C(V) is the representation on V, then (χ_ρ|χ_{ρ_k}) = (χ_{ρ_k}|χ_ρ) is equal to the number of the factors V_j that are G-equivalent to V_k.

More generally, if also W = W_1 ⊕ ··· ⊕ W_n is a decomposition into irreducible subspaces, with σ_k: G −→ GL_C(W_k) the representation on W_k and σ: G −→ GL_C(W) the representation on W, then

    (χ_σ|χ_{ρ_k}) = (χ_{ρ_k}|χ_σ)

is equal to the number of the factors W_j that are G-equivalent to V_k, and

    (χ_ρ|χ_σ) = (χ_σ|χ_ρ) = ∑_k (χ_σ|χ_{ρ_k}) = ∑_ℓ (χ_{σ_ℓ}|χ_ρ).
3.4. Character tables
The character table of a finite group G is the array formed as follows. Its columns correspond to the conjugacy classes of G, while its rows correspond to the characters χ_i of the inequivalent irreducible representations of G. The j-th conjugacy class C_j is indicated by displaying a representative c_j ∈ C_j. In the (i, j)-th entry we put χ_i(c_j).
          c_1        c_2       ···    c_n
    χ_1   χ_1(c_1)   χ_1(c_2)  ···   χ_1(c_n)
    χ_2   χ_2(c_1)   χ_2(c_2)  ···   χ_2(c_n)
     ⋮       ⋮          ⋮                ⋮
    χ_n   χ_n(c_1)   χ_n(c_2)  ···   χ_n(c_n)
Conventionally we take c_1 = e and χ_1 to be the trivial character corresponding to the trivial 1-dimensional representation. Since χ_1(g) = 1 for g ∈ G, the top of the table will always have the form

          e    c_2   ···   c_n
    χ_1   1     1    ···    1

Also, the first column will consist of the dimensions of the irreducibles ρ_i, χ_i(e).
For the symmetric group S_3 we have

          e   (1 2)   (1 2 3)
    χ_1   1     1        1
    χ_2   1    −1        1
    χ_3   2     0       −1
The representations corresponding to the χ_j will be discussed later. Once we have the character table of a group G, we can decompose an arbitrary representation into its irreducible constituents, since if the distinct irreducibles have characters χ_j (1 ⩽ j ⩽ r), then a representation ρ on V has a decomposition

    V ≅ n_1 V_1 ⊕ ··· ⊕ n_r V_r,

where n_j V_j ≅ V_j ⊕ ··· ⊕ V_j means a G-subspace isomorphic to the sum of n_j copies of the irreducible representation corresponding to χ_j. Theorem 3.16 now gives n_j = (χ_ρ|χ_j). The non-negative integer n_j is called the multiplicity of the irreducible V_j in V. The following irreducibility criterion is very useful.
Proposition 3.17. If ρ: G −→ GL_C(V) is a non-zero representation, then V is irreducible if and only if (χ_ρ|χ_ρ) = 1.
Proof. If V = n_1 V_1 ⊕ ··· ⊕ n_r V_r, then by orthonormality of the χ_j,

    (χ_ρ|χ_ρ) = (∑_i n_i χ_i | ∑_j n_j χ_j) = ∑_i ∑_j n_i n_j (χ_i|χ_j) = ∑_j n_j^2.

So (χ_ρ|χ_ρ) = 1 if and only if n_1^2 + ··· + n_r^2 = 1. Remembering that the n_j are non-negative integers, we see that (χ_ρ|χ_ρ) = 1 if and only if all but one of the n_j are zero and for some k, n_k = 1. Thus V ≅ V_k and so is irreducible. □
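The criterion is cheap to apply in practice. As an illustrative sketch (assuming the S_3 class data used throughout this section), the norm-square of the 3-dimensional permutation character of S_3, with values (3, 1, 0) on the classes of e, (1 2), (1 2 3), comes out as 2, so that representation splits into two distinct irreducibles, while χ_3 has norm-square 1.

```python
CLASS_SIZES = [1, 3, 2]   # classes e, (1 2), (1 2 3) of S_3; |S_3| = 6

def norm_sq(chi):
    """(chi|chi) for a character given by its values on the class representatives."""
    return sum(s * abs(x) ** 2 for s, x in zip(CLASS_SIZES, chi)) / sum(CLASS_SIZES)

perm_char = [3, 1, 0]   # character of the permutation representation C[{1,2,3}]
chi3 = [2, 0, -1]       # the 2-dimensional irreducible character of S_3
```

By Proposition 3.17, norm_sq(chi3) = 1 certifies irreducibility, and norm_sq(perm_char) = 2 = 1² + 1² matches the decomposition into the trivial representation plus the χ_3 summand.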
Notice that for the character table of S_3 we can check that the characters satisfy this criterion and are also orthonormal. Provided we believe that the rows really do represent characters, we have found an orthonormal basis for the class functions C(S_3). We will return to this problem later.
Example 3.18. Let us assume that the above character table for S_3 is correct, and let ρ = ρ_reg be the regular representation of S_3 on the vector space V = C[S_3]. Let us take as a basis for V the elements of S_3. Then

    ρ_σ τ = στ,

hence the matrix [ρ_σ] of ρ_σ relative to this basis has 0's down its main diagonal, except when σ = e, for which it is the 6 × 6 identity matrix. The character is χ given by

    χ(σ) = tr[ρ_σ] = { 6 if σ = e, 0 otherwise }.
Thus we obtain

    (χ_ρ|χ_1) = (1/6) ∑_{σ∈S_3} χ_ρ(σ)\overline{χ_1(σ)} = (1/6) × 6 = 1,
    (χ_ρ|χ_2) = (1/6) ∑_{σ∈S_3} χ_ρ(σ)\overline{χ_2(σ)} = (1/6) × 6 = 1,
    (χ_ρ|χ_3) = (1/6) ∑_{σ∈S_3} χ_ρ(σ)\overline{χ_3(σ)} = (1/6)(6 × 2) = 2.

Hence we have

    C[S_3] ≅ V_1 ⊕ V_2 ⊕ V_3 ⊕ V_3 = V_1 ⊕ V_2 ⊕ 2V_3.
In fact we have seen the representation V_3 already in Problem Sheet 2, Qu. 5(b). It is easily verified that the character of that representation is χ_3.
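The multiplicity computation in Example 3.18 can be replayed numerically; the sketch below (an illustration, using the S_3 table and class sizes from this section) recovers the multiplicities 1, 1, 2 from the regular character (6, 0, 0).

```python
CLASS_SIZES = [1, 3, 2]   # classes e, (1 2), (1 2 3) of S_3
chi_reg = [6, 0, 0]       # character of the regular representation of S_3
TABLE = [[1, 1, 1], [1, -1, 1], [2, 0, -1]]   # chi_1, chi_2, chi_3

def inner(a, b):
    # All values here are real, so the complex conjugation may be omitted.
    return sum(s * x * y for s, x, y in zip(CLASS_SIZES, a, b)) / 6

# n_j = (chi_reg | chi_j), the multiplicity of V_j in C[S_3]:
multiplicities = [inner(chi_reg, chi) for chi in TABLE]
```

Only the identity class contributes, so each multiplicity reduces to χ_j(e), exactly as Theorem 3.19(b) below asserts in general.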
Of course, in order to use character tables, we first need to determine them! So far we do not know much about this, beyond the fact that the number of rows has to be the same as the number of conjugacy classes of the group G, and the existence of the 1-dimensional trivial character, which we will always denote by χ_1 and whose value is χ_1(g) = 1 for g ∈ G. The characters of the distinct complex irreducible representations of G are the irreducible characters of G.
Theorem 3.19. Let G be a finite group. Let χ_1, . . . , χ_r be the distinct complex irreducible characters and ρ_reg the regular representation of G on C[G].

(a) Every complex irreducible representation of G occurs in C[G]. Equivalently, for each irreducible character χ_j, (χ_{ρ_reg}|χ_j) ≠ 0.

(b) The multiplicity n_j of the irreducible V_j with character χ_j in C[G] is given by

    n_j = dim_C V_j = χ_j(e).
So to find all the irreducible characters, we only have to decompose the regular representation!

Proof. Using the formula

    χ_{ρ_reg}(g) = { |G| if g = e, 0 if g ≠ e },

we have

    n_j = (χ_{ρ_reg}|χ_j) = (1/|G|) ∑_{g∈G} χ_{ρ_reg}(g)\overline{χ_j(g)} = (1/|G|) χ_{ρ_reg}(e)\overline{χ_j(e)} = χ_j(e). □
Corollary 3.20. We have

    |G| = ∑_{j=1}^{r} n_j^2 = ∑_{j=1}^{r} (χ_{ρ_reg}|χ_j)^2.
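Corollary 3.20 gives a quick consistency check on any proposed character table: the squares of the first-column entries must sum to |G|. For the tables in this chapter (the S_4 dimensions come from Example 3.30 below), the check is one line each; this is an illustrative sketch.

```python
# First-column entries chi_j(e) = dim V_j of the character tables:
s3_dims = [1, 1, 2]        # S_3, |G| = 6
s4_dims = [1, 1, 3, 3, 2]  # S_4, |G| = 24

# |G| = sum of squares of the irreducible dimensions (Corollary 3.20):
s3_check = sum(d * d for d in s3_dims)
s4_check = sum(d * d for d in s4_dims)
```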
The following result also holds, but the proof requires some Algebraic Number Theory.

Proposition 3.21. For each irreducible character χ_j, n_j = (χ_{ρ_reg}|χ_j) divides the order of G, i.e., n_j | |G|.
The following row and column orthogonality relations for the character table of a group G are very important.

Theorem 3.22. Let χ_1, . . . , χ_r be the distinct complex irreducible characters of G, let e = g_1, . . . , g_r be a collection of representatives for the conjugacy classes of G, and for each k let C_G(g_k) be the centralizer of g_k.

(a) Row orthogonality: for 1 ⩽ i, j ⩽ r,

    ∑_{k=1}^{r} χ_i(g_k)\overline{χ_j(g_k)} / |C_G(g_k)| = (χ_i|χ_j) = δ_ij.

(b) Column orthogonality: for 1 ⩽ i, j ⩽ r,

    ∑_{k=1}^{r} χ_k(g_i)\overline{χ_k(g_j)} / |C_G(g_i)| = δ_ij.
Proof.
(a) We have

    δ_ij = (χ_i|χ_j) = (1/|G|) ∑_{g∈G} χ_i(g)\overline{χ_j(g)}
         = (1/|G|) ∑_{k=1}^{r} (|G| / |C_G(g_k)|) χ_i(g_k)\overline{χ_j(g_k)}

[since the conjugacy class of g_k contains |G| / |C_G(g_k)| elements]

         = ∑_{k=1}^{r} χ_i(g_k)\overline{χ_j(g_k)} / |C_G(g_k)|.

(b) Let ψ_s: G −→ C be the function given by

    ψ_s(g) = { 1 if g is conjugate to g_s, 0 if g is not conjugate to g_s }.

By Theorem 3.15, there are λ_k ∈ C such that

    ψ_s = ∑_{k=1}^{r} λ_k χ_k.

But then λ_j = (ψ_s|χ_j). We also have

    (ψ_s|χ_j) = (1/|G|) ∑_{g∈G} ψ_s(g)\overline{χ_j(g)}
              = ∑_{k=1}^{r} ψ_s(g_k)\overline{χ_j(g_k)} / |C_G(g_k)|
              = \overline{χ_j(g_s)} / |C_G(g_s)|,

hence

    ψ_s = ∑_{j=1}^{r} (\overline{χ_j(g_s)} / |C_G(g_s)|) χ_j.

Thus we have the required formula

    δ_st = ψ_s(g_t) = ∑_{j=1}^{r} χ_j(g_t)\overline{χ_j(g_s)} / |C_G(g_s)|. □
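Column orthogonality is easy to verify directly on a finished table. The sketch below checks Theorem 3.22(b) on the S_3 table, with centralizer orders |C_G(g_i)| = |G| divided by the class sizes; it is an illustration, not part of the text.

```python
# Character table of S_3: rows chi_k, columns g_1 = e, g_2 = (1 2), g_3 = (1 2 3).
TABLE = [[1, 1, 1], [1, -1, 1], [2, 0, -1]]
CENTRALIZERS = [6, 2, 3]   # |C_G(g_i)| = |G| / (size of the class of g_i)

def column_inner(i, j):
    # All table entries are real, so complex conjugation is omitted.
    return sum(row[i] * row[j] for row in TABLE) / CENTRALIZERS[i]

gram = [[column_inner(i, j) for j in range(3)] for i in range(3)]
```

For instance the e and (1 2) columns give (1·1 + 1·(−1) + 2·0)/6 = 0, while the (1 2) column against itself gives (1 + 1 + 0)/2 = 1.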
3.5. Examples of character tables
Equipped with the results of the last section, we can proceed to find some character tables. For abelian groups we have the following result, which follows from what we have seen already, together with the fact that in an abelian group every conjugacy class has exactly one element.
Proposition 3.23. Let G be a finite abelian group. Then there are |G| distinct complex irreducible characters, each of which is 1-dimensional. Moreover, in the regular representation each irreducible occurs with multiplicity 1, i.e.,

    C[G] ≅ V_1 ⊕ ··· ⊕ V_{|G|}.
Example 3.24. Let G = ⟨g_0⟩ ≅ Z/n be cyclic of order n. Let ζ_n = e^{2πi/n}, the 'standard' primitive n-th root of unity. Then for each k = 0, 1, . . . , (n − 1) we may define a 1-dimensional representation ρ_k: G −→ C^× by

    ρ_k(g_0^r) = ζ_n^{rk}.

The character of ρ_k is χ_k, given by

    χ_k(g_0^r) = ζ_n^{rk}.

Clearly these are all irreducible and non-isomorphic.
Let us consider the orthogonality relations for these characters. We have

    (χ_k|χ_k) = (1/n) ∑_{r=0}^{n−1} χ_k(g_0^r)\overline{χ_k(g_0^r)}
              = (1/n) ∑_{r=0}^{n−1} ζ_n^{kr}\overline{ζ_n^{kr}}
              = (1/n) ∑_{r=0}^{n−1} 1 = n/n = 1.

For 0 ⩽ k < ℓ ⩽ (n − 1) we have

    (χ_k|χ_ℓ) = (1/n) ∑_{r=0}^{n−1} χ_k(g_0^r)\overline{χ_ℓ(g_0^r)}
              = (1/n) ∑_{r=0}^{n−1} ζ_n^{kr}\overline{ζ_n^{ℓr}}
              = (1/n) ∑_{r=0}^{n−1} ζ_n^{(k−ℓ)r}.
By row orthogonality this sum is 0. This is a special case of the following identity, which is often used in many parts of Mathematics.

Lemma 3.25. Let d ∈ N, m ∈ Z and ζ_d = e^{2πi/d}. Then

    ∑_{r=0}^{d−1} ζ_d^{mr} = { d if d | m, 0 otherwise }.
Proof. We give a proof which does not use character theory!

If d ∤ m, then ζ_d^m ≠ 1. Then we have

    ζ_d^m ∑_{r=0}^{d−1} ζ_d^{mr} = ∑_{r=0}^{d−1} ζ_d^{m(r+1)} = ∑_{s=1}^{d} ζ_d^{ms} = ∑_{r=0}^{d−1} ζ_d^{mr},

hence

    (ζ_d^m − 1) ∑_{r=0}^{d−1} ζ_d^{mr} = 0,

and so

    ∑_{r=0}^{d−1} ζ_d^{mr} = 0.

If d | m, then

    ∑_{r=0}^{d−1} ζ_d^{mr} = ∑_{r=0}^{d−1} 1 = d. □
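Lemma 3.25 is also easy to confirm numerically (up to floating-point error); this quick sketch uses Python's standard `cmath` module and is purely illustrative.

```python
import cmath

def zeta_sum(d, m):
    """sum_{r=0}^{d-1} zeta_d^{m r}, where zeta_d = e^{2 pi i / d}."""
    zeta = cmath.exp(2j * cmath.pi / d)
    return sum(zeta ** (m * r) for r in range(d))

# d | m gives d; d not dividing m gives 0 (up to rounding error).
full = zeta_sum(6, 12)   # expect 6
zero = zeta_sum(6, 4)    # expect 0
```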
As a special case of Example 3.24, consider the case where n = 3 and G = ⟨g_0⟩ ≅ Z/3. The character table of G is

          e    g_0     g_0^2
    χ_1   1     1        1
    χ_2   1    ζ_3     ζ_3^2
    χ_3   1   ζ_3^2     ζ_3
Example 3.26. Let G = ⟨a_0, b_0⟩ be abelian of order 4, so G ≅ Z/2 × Z/2. The character table of G is as follows.

                 e    a_0   b_0   a_0 b_0
    χ_1 = χ_00   1     1     1       1
    χ_10         1    −1     1      −1
    χ_01         1     1    −1      −1
    χ_11         1    −1    −1       1
Example 3.27. The character table of the quaternion group of order 8, Q_8, is as follows.

          1   −1    i    j    k
    χ_1   1    1    1    1    1
    χ_i   1    1    1   −1   −1
    χ_j   1    1   −1    1   −1
    χ_k   1    1   −1   −1    1
    χ_2   2   −2    0    0    0
Proof. There are 5 conjugacy classes:

    {1}, {−1}, {i, −i}, {j, −j}, {k, −k}.

As always we have the trivial character χ_1. There are 3 homomorphisms Q_8 −→ C^× given by

    ρ_i(i^r) = 1 and ρ_i(j) = ρ_i(k) = −1,
    ρ_j(j^r) = 1 and ρ_j(i) = ρ_j(k) = −1,
    ρ_k(k^r) = 1 and ρ_k(i) = ρ_k(j) = −1.

These provide three 1-dimensional representations, with characters χ_i, χ_j, χ_k taking values

    χ_i(i^r) = 1 and χ_i(j) = χ_i(k) = −1,
    χ_j(j^r) = 1 and χ_j(i) = χ_j(k) = −1,
    χ_k(k^r) = 1 and χ_k(i) = χ_k(j) = −1.
Since |Q_8| = 8, we might try looking for a 2-dimensional complex representation. But the definition of Q_8 provides us with the inclusion homomorphism j: Q_8 −→ GL_C(C^2), where we interpret the matrices as taken in terms of the standard basis. The character of this representation is χ_2, given by

    χ_2(1) = 2, χ_2(−1) = −2, χ_2(±i) = χ_2(±j) = χ_2(±k) = 0.

This completes the determination of the character table of Q_8. □
Example 3.28. The character table of the dihedral group of order 8, D_8, is as follows.

          e   α^2    α    β    αβ
    χ_1   1    1     1    1     1
    χ_2   1    1     1   −1    −1
    χ_3   1    1    −1    1    −1
    χ_4   1    1    −1   −1     1
    χ_5   2   −2     0    0     0
Proof. The elements of D_8 are

    e, α, α^2, α^3, β, αβ, α^2 β, α^3 β,

and these satisfy the relations

    α^4 = e = β^2,  βαβ = α^{−1}.

The conjugacy classes are the sets

    {e}, {α^2}, {α, α^3}, {β, α^2 β}, {αβ, α^3 β}.

There are two obvious 1-dimensional representations, namely the trivial one ρ_1 and also ρ_2, where

    ρ_2(α) = 1, ρ_2(β) = −1.

The character of ρ_2 is determined by

    χ_2(α^r) = 1, χ_2(βα^r) = −1.

A third 1-dimensional representation comes from the homomorphism ρ_3: D_8 −→ C^× given by

    ρ_3(α) = −1, ρ_3(β) = 1.
The fourth 1-dimensional representation comes from the homomorphism ρ_4: D_8 −→ C^× for which

    ρ_4(α) = −1, ρ_4(β) = −1.

The characters χ_1, χ_2, χ_3, χ_4 are clearly distinct and thus orthonormal.

Before describing χ_5 as the character of a 2-dimensional representation, we will determine it up to a scalar factor. Suppose that

    χ_5(e) = a, χ_5(α^2) = b, χ_5(α) = c, χ_5(β) = d, χ_5(βα) = e

for a, b, c, d, e ∈ C. The orthonormality conditions give (χ_5|χ_j) = δ_{j5}. For j = 1, 2, 3, 4 we obtain the following linear system:

                                  ⎡a⎤
    ⎡ 1  1  2  2  2 ⎤            ⎢b⎥   ⎡0⎤
    ⎢ 1  1  2 −2 −2 ⎥            ⎢c⎥ = ⎢0⎥            (3.1)
    ⎢ 1  1 −2  2 −2 ⎥            ⎢d⎥   ⎢0⎥
    ⎣ 1  1 −2 −2  2 ⎦            ⎣e⎦   ⎣0⎦
which has solutions

    b = −a, c = d = e = 0.

If χ_5 is an irreducible character, we must also have (χ_5|χ_5) = 1, giving

    1 = (1/8)(|a|^2 + |a|^2) = |a|^2 / 4,

and so a = ±2. So we must have the stated bottom row. The corresponding representation appears in Example 2.6, where it is viewed as a complex representation which is easily seen to have character χ_5. □
Remark 3.29. The groups Q_8 and D_8 have identical character tables even though they are non-isomorphic! This shows that character tables do not always distinguish non-isomorphic groups.
Example 3.30. The character table of the symmetric group S_4 is as follows.

          e   (1 2)   (1 2)(3 4)   (1 2 3)   (1 2 3 4)
         [1]   [6]       [3]         [8]        [6]
    χ_1   1     1         1           1           1
    χ_2   1    −1         1           1          −1
    χ_3   3     1        −1           0          −1
    χ_4   3    −1        −1           0           1
    χ_5   2     0         2          −1           0
Proof. Recall that the conjugacy classes correspond to the different cycle types, which are represented by the elements in the following list, where the numbers in brackets [ ] give the sizes of the conjugacy classes:

    e [1], (1 2) [6], (1 2)(3 4) [3], (1 2 3) [8], (1 2 3 4) [6].

So there are 5 rows and columns in the character table. The sign representation sign: S_4 −→ C^× is 1-dimensional and has character

    χ_2(e) = χ_2((1 2)(3 4)) = χ_2(1 2 3) = 1 and χ_2(1 2) = χ_2(1 2 3 4) = −1.
The 4-dimensional permutation representation ρ̃_4 corresponding to the action on 4 = {1, 2, 3, 4} has character χ_{ρ̃_4} given by

    χ_{ρ̃_4}(σ) = number of fixed points of σ.

So we have

    χ_{ρ̃_4}(e) = 4, χ_{ρ̃_4}((1 2)(3 4)) = χ_{ρ̃_4}(1 2 3 4) = 0, χ_{ρ̃_4}(1 2 3) = 1, χ_{ρ̃_4}(1 2) = 2.

We know that this representation has the form

    C[4] = C[4]^{S_4} ⊕ W,

where W is a 3-dimensional S_4-subspace whose character χ_3 is determined by

    χ_1 + χ_3 = χ_{ρ̃_4},

hence

    χ_3 = χ_{ρ̃_4} − χ_1.

So we obtain the following values for χ_3:

    χ_3(e) = 3, χ_3((1 2)(3 4)) = χ_3(1 2 3 4) = −1, χ_3(1 2 3) = 0, χ_3(1 2) = 1.
Calculating the inner product of this with itself gives

    (χ_3|χ_3) = (1/24)(9 + 6 + 3 + 0 + 6) = 1,

and so χ_3 is the character of an irreducible representation.

From this information we can deduce that the two remaining irreducibles must have dimensions n_4, n_5 for which

    n_4^2 + n_5^2 = 24 − 1 − 1 − 9 = 13,

and thus we can take n_4 = 3 and n_5 = 2, since these are the only possible values up to order.
If we form the tensor product ρ_2 ⊗ ρ_3, we get a character χ_4 given by

    χ_4(g) = χ_2(g)χ_3(g),

hence the 4th line in the table. Then (χ_4|χ_4) = 1, and so χ_4 really is an irreducible character.
For χ_5, recall that the regular representation ρ_reg has character χ_{ρ_reg} decomposing as

    χ_{ρ_reg} = χ_1 + χ_2 + 3χ_3 + 3χ_4 + 2χ_5,

hence we have

    χ_5 = (1/2)(χ_{ρ_reg} − χ_1 − χ_2 − 3χ_3 − 3χ_4),

which gives the last row of the table. □
Notice that in this example, the tensor product ρ_3 ⊗ ρ_5 is a 6-dimensional representation that cannot be irreducible. Its character χ_{ρ_3 ⊗ ρ_5} must be a linear combination of the irreducibles,

    χ_{ρ_3 ⊗ ρ_5} = ∑_{j=1}^{5} (χ_{ρ_3 ⊗ ρ_5}|χ_j) χ_j.

Recall that for g ∈ S_4,

    χ_{ρ_3 ⊗ ρ_5}(g) = χ_{ρ_3}(g)χ_{ρ_5}(g).
For the values of the coefficients we have

    (χ_{ρ_3 ⊗ ρ_5}|χ_1) = (1/24)(6 + 0 − 6 + 0 + 0) = 0,
    (χ_{ρ_3 ⊗ ρ_5}|χ_2) = (1/24)(6 + 0 − 6 + 0 + 0) = 0,
    (χ_{ρ_3 ⊗ ρ_5}|χ_3) = (1/24)(18 + 0 + 6 + 0 + 0) = 1,
    (χ_{ρ_3 ⊗ ρ_5}|χ_4) = (1/24)(18 + 0 + 6 + 0 + 0) = 1,
    (χ_{ρ_3 ⊗ ρ_5}|χ_5) = (1/24)(12 + 0 − 12 + 0 + 0) = 0.

Thus we have

    χ_{ρ_3 ⊗ ρ_5} = χ_3 + χ_4.
In general it is hard to predict how the tensor product of representations decomposes in terms of irreducibles.
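Row orthogonality (Theorem 3.22(a)) can be verified for the whole S_4 table at once; the sketch below weights each column by its class size [1, 6, 3, 8, 6] and checks that the rows form an orthonormal set. This is an illustrative check, not part of the original text.

```python
CLASS_SIZES = [1, 6, 3, 8, 6]   # e, (1 2), (1 2)(3 4), (1 2 3), (1 2 3 4)
TABLE = [
    [1,  1,  1,  1,  1],   # chi_1
    [1, -1,  1,  1, -1],   # chi_2 (sign)
    [3,  1, -1,  0, -1],   # chi_3
    [3, -1, -1,  0,  1],   # chi_4
    [2,  0,  2, -1,  0],   # chi_5
]

def inner(a, b):
    # All entries are real, so complex conjugation is omitted.
    return sum(s * x * y for s, x, y in zip(CLASS_SIZES, a, b)) / 24

gram = [[inner(TABLE[i], TABLE[j]) for j in range(5)] for i in range(5)]
```

The resulting Gram matrix is the 5 × 5 identity, confirming that the five rows really are distinct irreducible characters.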
3.6. Reciprocity formulæ
Let H ⩽ G, let ρ: G −→ GL_C(V) be a representation of G, and let σ: H −→ GL_C(W) be a representation of H. Recall that the induced representation σ↑_H^G is of dimension |G/H| dim_C W, while the restriction ρ↓_H^G has dimension dim_C V. We will write χ_ρ↓_H^G and χ_σ↑_H^G for the characters of these representations. First we show how to calculate the character of an induced representation.
Lemma 3.31. The character of the induced representation σ↑_H^G is given by

    χ_σ↑_H^G (g) = (1/|H|) ∑_{x∈G, g∈xHx^{−1}} χ_σ(x^{−1}gx).

Proof. See [1, §16]. □
Example 3.32. Let H = {e, α, α^2, α^3} ⩽ D_8 and let σ: H −→ C^× be the 1-dimensional representation of H for which

    σ(α^k) = i^k.

Decompose the induced representation σ↑_H^{D_8} into its irreducible summands over the group D_8.
Proof. We will use the character table of D_8 given in Example 3.28. Notice that H ◁ D_8, hence for x ∈ D_8 we have xHx^{−1} = H. Let χ = χ_σ↑_H^{D_8} be the character of this induced representation. We have

    χ(g) = (1/4) ∑_{x∈D_8, g∈xHx^{−1}} χ_σ(x^{−1}gx)
         = { (1/4) ∑_{x∈D_8} χ_σ(x^{−1}gx) if g ∈ H,  0 if g ∉ H }.
Thus if g ∈ H we find that

    χ(g) = { (1/4)(4χ_σ(α) + 4χ_σ(α^3)) if g = α, α^3,
             (1/4)(8χ_σ(α^2))           if g = α^2,
             (1/4)(8χ_σ(e))             if g = e }.

Hence we have

    χ(g) = { i + i^3 = 0 if g = α, α^3,
             −2          if g = α^2,
             2           if g = e,
             0           if g ∉ H }.
Taking inner products with the irreducible characters χ_j, we obtain the following:

    (χ|χ_1)_{D_8} = (1/8)(2 − 2 + 0 + 0 + 0) = 0,
    (χ|χ_2)_{D_8} = (1/8)(2 − 2 + 0 + 0 + 0) = 0,
    (χ|χ_3)_{D_8} = (1/8)(2 − 2 + 0 + 0 + 0) = 0,
    (χ|χ_4)_{D_8} = (1/8)(2 − 2 + 0 + 0 + 0) = 0,
    (χ|χ_5)_{D_8} = (1/8)(4 + 4 + 0 + 0 + 0) = 1.

Hence we must have χ = χ_5, giving another derivation of the representation ρ_5. □
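The induced character in Example 3.32 can be recomputed mechanically from Lemma 3.31. The sketch below encodes D_8 as pairs (r, s) standing for α^r β^s (an encoding chosen here purely for illustration) and evaluates χ(g) = (1/|H|) ∑ χ_σ(x^{−1}gx) over the x with x^{−1}gx ∈ H.

```python
# D_8 elements encoded as (r, s) = alpha^r beta^s, with
# alpha^4 = e = beta^2 and beta * alpha = alpha^{-1} * beta.
def mul(x, y):
    r1, s1 = x
    r2, s2 = y
    # (a^r1 b^s1)(a^r2 b^s2) = a^(r1 + (-1)^s1 r2) b^(s1 + s2)
    return ((r1 + (-1) ** s1 * r2) % 4, (s1 + s2) % 2)

def inv(x):
    r, s = x
    return ((-r) % 4, 0) if s == 0 else (r, 1)  # reflections are involutions

G = [(r, s) for r in range(4) for s in range(2)]
H = [(r, 0) for r in range(4)]                # the cyclic subgroup <alpha>
sigma = {(r, 0): 1j ** r for r in range(4)}   # sigma(alpha^k) = i^k

def induced_char(g):
    """chi(g) from Lemma 3.31; H is normal, so we sum over all x in G."""
    total = 0
    for x in G:
        c = mul(mul(inv(x), g), x)
        if c in H:
            total += sigma[c]
    return total / len(H)
```

This reproduces the values above: 2 at e, −2 at α², 0 on α, α³, and 0 off H.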
Theorem 3.33 (Frobenius Reciprocity). There is a linear isomorphism

    Hom_G(W↑_H^G, V) ≅ Hom_H(W, V↓_H^G).

Equivalently, on characters we have

    (χ_σ↑_H^G | χ_ρ)_G = (χ_σ | χ_ρ↓_H^G)_H.

Proof. See [1, §16]. □
Example 3.34. Let σ be the irreducible representation of S_3 with character χ_3 and underlying vector space W. Decompose the induced representation W↑_{S_3}^{S_4} into its irreducible summands over the group S_4.
Proof. Let

    W↑_{S_3}^{S_4} ≅ n_1 V_1 ⊕ n_2 V_2 ⊕ n_3 V_3 ⊕ n_4 V_4 ⊕ n_5 V_5.

Then

    n_j = (χ_j | χ_σ↑_{S_3}^{S_4})_{S_4} = (χ_j↓_{S_3}^{S_4} | χ_σ)_{S_3}.
To evaluate the restriction χ_j↓_{S_3}^{S_4}, we take only elements of S_4 lying in S_3. Hence we have

    n_1 = (χ_1↓_{S_3}^{S_4} | χ_σ)_{S_3} = (1/6)(2 + 0 − 2) = 0,
    n_2 = (χ_2↓_{S_3}^{S_4} | χ_σ)_{S_3} = (1/6)(1 · 2 + 0 + 1 · (−2)) = 0,
    n_3 = (χ_3↓_{S_3}^{S_4} | χ_σ)_{S_3} = (1/6)(3 · 2 + 0 + 0 · (−2)) = 1,
    n_4 = (χ_4↓_{S_3}^{S_4} | χ_σ)_{S_3} = (1/6)(3 · 2 + 0 + 0 · (−2)) = 1,
    n_5 = (χ_5↓_{S_3}^{S_4} | χ_σ)_{S_3} = (1/6)(2 · 2 + 0 + (−1) · (−2)) = 6/6 = 1.

Hence we have

    W↑_{S_3}^{S_4} ≅ V_3 ⊕ V_4 ⊕ V_5. □
3.7. Representations of semidirect products
Recall the notion of a semidirect product G = N ⋊ H; this has N ◁ G, H ⩽ G, H ∩ N = {e} and HN = NH = G. We will describe a way to produce the irreducible characters of G from those of the groups N ◁ G and H ⩽ G.
Proposition 3.35. Let φ: Q −→ G be a homomorphism and let ρ: G −→ GL_C(V) be a representation of G. Then the composite φ^∗ρ = ρ ∘ φ is a representation of Q on V. Moreover, if φ^∗ρ is irreducible over Q, then ρ is irreducible over G.

Proof. The first part is clear.

For the second, suppose that W ⊆ V is a G-subspace. Then for h ∈ Q and w ∈ W we have

    (φ^∗ρ)_h w = ρ_{φ(h)} w ∈ W.

Hence W is a Q-subspace. By irreducibility of φ^∗ρ, W = {0} or W = V; hence V is irreducible over G. □
The representation φ^∗ρ is called the representation on V induced by φ, and we often denote the underlying Q-module by φ^∗V. If j: Q −→ G is the inclusion of a subgroup, then j^∗ρ = ρ↓_Q^G, the restriction of ρ to Q.

In the case of G = N ⋊ H, there is a surjection π: G −→ H given by

    π(nh) = h   (n ∈ N, h ∈ H),

as well as the inclusions i: N −→ G and j: H −→ G. We can apply the above to each of these homomorphisms.
Now let ρ: G −→ GL_C(V) be an irreducible representation of the semidirect product G = N ⋊ H. Then i^∗V decomposes as

    i^∗V = W_1 ⊕ ··· ⊕ W_m,

where each W_k is a non-zero irreducible N-subspace. For each g ∈ G, notice that if x ∈ N and w ∈ W_1, then

    ρ_x(ρ_g w) = ρ_{xg} w = ρ_g ρ_{g^{−1}xg} w = ρ_g w′

for w′ = ρ_{g^{−1}xg} w. Since g^{−1}xg ∈ g^{−1}Ng = N,

    gW_1 = {ρ_g w : w ∈ W_1}
is an N-subspace of i^∗V. If we take

    W̃_1 = { ∑_{g∈G} ρ_g w_g : w_g ∈ W_1 },

then we can verify that W̃_1 is a non-zero N-subspace of i^∗V, and in fact is also a G-subspace of V. Since V is irreducible, this shows that V = W̃_1.
Now let

    H_1 = {h ∈ H : hW_1 = W_1} ⊆ H.

Then we can verify that H_1 ⩽ H ⩽ G. The semidirect product

    G_1 = N ⋊ H_1 = {nh ∈ G : n ∈ N, h ∈ H_1} ⩽ G

also acts on W_1, since for nh ∈ G_1 and w ∈ W_1,

    ρ_{nh} w = ρ_n ρ_h w = ρ_n w′′ ∈ W_1,

where w′′ = ρ_h w; hence W_1 is a G_1-subspace of V↓_{G_1}^G. Notice that by the second part of Proposition 3.35, W_1 is irreducible over G_1.
Lemma 3.36. There is a G-isomorphism

    W_1↑_{G_1}^G ≅ V.

Proof. See the books [3, 4]. □

Thus every irreducible of G = N ⋊ H arises from an irreducible representation of N which extends to a representation (actually irreducible) of such a subgroup N ⋊ K ⩽ N ⋊ H = G for K ⩽ H, but to no larger subgroup.
Example 3.37. Let D_2n be the dihedral group of order 2n. Then every irreducible representation of D_2n has dimension 1 or 2.

Proof. We have D_2n = N ⋊ H, where N = ⟨α⟩ ≅ Z/n and H = {e, β}. The n distinct irreducibles ρ_k of N are all 1-dimensional by Example 3.24. Hence for each of these we have a subgroup H_k ⩽ H such that the action of N extends to N ⋊ H_k, and so the corresponding induced representation V_k↑_{N⋊H_k}^{D_2n} is irreducible and has dimension |D_2n| / |N ⋊ H_k| = 2/|H_k|. Every irreducible of D_2n occurs this way. □

For n = 4, it is a useful exercise to identify the irreducibles in the character table in this way.
3.8. Real representations
This section is inspired by [4, §13.2], to which the reader is referred for details.
Suppose that ρ: G −→ GL_C(V) is an irreducible complex representation of a finite group G, where dim_C V = n. We can view V as a real vector space, and then dim_R V = 2n. Thus we have an underlying real representation ρ_R: G −→ GL_R(V).

If we view C as a 2-dimensional R-vector space, then we can consider the 4n-dimensional real vector space C ⊗_R V as a 2n-dimensional C-vector space, by defining scalar multiplication on basic tensors to be given by

    z · (w ⊗ v) = (zw) ⊗ v
for z, w ∈ C and v ∈ V. A basis {v_1, . . . , v_n} for V as a C-vector space gives rise to a basis {1 ⊗ v_1, i ⊗ v_1, 1 ⊗ v_2, i ⊗ v_2, . . . , 1 ⊗ v_n, i ⊗ v_n} for C ⊗_R V as a C-vector space. Furthermore, there is an action of G on C ⊗_R V given on basic tensors by

    g · (w ⊗ v) = w ⊗ ρ_g v.

It is easy to see that this is C-linear, and so we obtain a 2n-dimensional complex representation

    C ⊗ ρ_R: G −→ GL_C(C ⊗_R V),

called the complexification of the real representation ρ_R. It is not difficult to see that

    (3.2)    χ_{C⊗ρ_R} = χ_ρ + \overline{χ_ρ}.

Notice that when χ_ρ takes only real values,

    χ_{C⊗ρ_R} = 2χ_ρ,

so C ⊗ ρ_R = 2ρ = ρ + ρ.
There are 3 different mutually exclusive situations that can occur.

(A) χ_ρ ≠ \overline{χ_ρ}.
(B) χ_ρ = \overline{χ_ρ} and ρ = C ⊗_R ρ_0 for some real representation ρ_0 of G.
(C) χ_ρ = \overline{χ_ρ}, but ρ is not the complexification of a real representation.
Exercises on Chapter 3
3-1. Determine the characters of the representations in Questions 1, 2, 3 of Chapter 2.
3-2. Let ρ_c: G −→ GL_C(C[G_c]) denote the permutation representation associated to the conjugation action of G on its own underlying set G_c, i.e., g · x = gxg^{−1}. Let χ_c = χ_{ρ_c} be the character of ρ_c.

(i) For x ∈ G, show that the vector subspace V_x spanned by all the conjugates of x is a G-subspace. What is dim V_x?

(ii) For g ∈ G, show that χ_c(g) = |C_G(g)|, where C_G(g) is the centralizer of g in G.

(iii) For any class function α ∈ C(G), determine (α|χ_c).

(iv) If χ_1, . . . , χ_r are the distinct irreducible characters of G and ρ_1, . . . , ρ_r are the corresponding irreducible representations, determine the multiplicity of ρ_j in C[G_c].

(v) Carry out these calculations for the groups S_3, S_4, A_4, D_8, Q_8.
3-3. Let G be a finite group and H ≤ G a subgroup. Consider the set of cosets G/H as a G-set with action given by g · xH = gxH, and let ρ be the associated permutation representation on C[G/H].
(i) Show that for g ∈ G,
χ_ρ(g) = |{xH ∈ G/H : g ∈ xHx^{−1}}|.
(ii) If H ▹ G (i.e., H is a normal subgroup), show that
χ_ρ(g) = 0 if g ∉ H, and χ_ρ(g) = |G/H| if g ∈ H.
(iii) Determine the character χ_ρ when G = S_4 (the permutation group of the set {1, 2, 3, 4}) and H = S_3 (viewed as the subgroup of all permutations fixing 4).
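For part (iii) one can check by machine (an illustration of ours) that the coset space S_4/S_3 with this action is G-equivalent to the natural action on {1, 2, 3, 4}, so χ_ρ(g) is just the number of fixed points of g.

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

G = list(permutations(range(4)))          # S_4, acting on {0, 1, 2, 3}
H = [p for p in G if p[3] == 3]           # copy of S_3 fixing the last point

# left cosets xH, each stored as a frozenset so they can be compared
cosets = {frozenset(compose(x, h) for h in H) for x in G}
assert len(cosets) == 4                   # |G/H| = 24/6

for g in G:
    # χ_ρ(g) = number of cosets with g · xH = xH
    coset_fixed = sum(1 for C in cosets
                      if {compose(g, x) for x in C} == C)
    point_fixed = sum(1 for i in range(4) if g[i] == i)
    assert coset_fixed == point_fixed     # G/H ≅ {1, 2, 3, 4} as G-sets
```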
3-4. Let ρ : G −→ GL_C(W) be a representation and let ρ_j : G −→ GL_C(V_j) (j = 1, . . . , r) be the distinct irreducible representations of G with characters χ_j = χ_{ρ_j}.
(i) For each i, show that ε_i : W −→ W is a G-linear transformation, where
ε_i(w) = (χ_i(e)/|G|) ∑_{g∈G} χ̄_i(g) ρ_g w.
(ii) Let W_{j,k} ⊆ W be nonzero G-subspaces such that
W = W_{1,1} ⊕ · · · ⊕ W_{1,s_1} ⊕ W_{2,1} ⊕ · · · ⊕ W_{2,s_2} ⊕ · · · ⊕ W_{r,1} ⊕ · · · ⊕ W_{r,s_r},
where each W_{j,k} is G-isomorphic to V_j. Show that if w ∈ W_{j,k} then ε_i(w) ∈ W_{j,k}.
(iii) By considering for each pair j, k the restriction of ε_i to a G-linear transformation ε′_i : W_{j,k} −→ W_{j,k}, show that for w ∈ W_{j,k},
ε_i(w) = w if i = j, and ε_i(w) = 0 otherwise.
Deduce that im ε_i = W_{i,1} ⊕ · · · ⊕ W_{i,s_i}.
[Remark: The subspace W_i = im ε_i is called the subspace associated to the irreducible ρ_i and depends only on ρ and ρ_i. Consequently, the decomposition W = W_1 ⊕ · · · ⊕ W_r is called the canonical decomposition of W. Given each W_j, there are many different ways to decompose it into irreducible G-subspaces G-isomorphic to V_j, hence the original finer decomposition is non-canonical.]
(iv) Show that
ε_i ∘ ε_j = ε_i if i = j, and ε_i ∘ ε_j = 0 otherwise.
(v) For the group S_3, use these ideas to find the canonical decomposition of the regular representation C[S_3]. Repeat this for some other groups and non-irreducible representations.
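The orthogonality in (iv) can be verified numerically for G = S_3 (a sketch of ours, with ad-hoc names): in the group algebra, ε_i corresponds to the central idempotent (χ_i(e)/|G|) ∑_g χ̄_i(g) g, and these multiply exactly as claimed. All characters of S_3 are real, so no conjugation is needed.

```python
from itertools import permutations
from fractions import Fraction

def compose(p, q):
    # composition of permutations of {0, 1, 2} written as tuples
    return tuple(p[q[i]] for i in range(len(p)))

G = list(permutations(range(3)))   # S_3, |G| = 6

def fixed_points(p):
    # the number of fixed points distinguishes the three classes of S_3
    return sum(1 for i in range(3) if p[i] == i)

# character values on {identity, transpositions, 3-cycles},
# keyed by number of fixed points (3, 1, 0 respectively)
chars = [
    {3: 1, 1: 1, 0: 1},     # trivial
    {3: 1, 1: -1, 0: 1},    # sign
    {3: 2, 1: 0, 0: -1},    # 2-dimensional
]

def idempotent(chi):
    # ε_i corresponds to (χ_i(e)/|G|) Σ_g χ̄_i(g) g; χ̄_i = χ_i here
    return {g: Fraction(chi[3] * chi[fixed_points(g)], 6) for g in G}

def convolve(a, b):
    # product in the group algebra C[S_3]
    out = {g: Fraction(0) for g in G}
    for g, cg in a.items():
        for h, ch in b.items():
            out[compose(g, h)] += cg * ch
    return out

eps = [idempotent(c) for c in chars]
for i in range(3):
    for j in range(3):
        expected = eps[i] if i == j else {g: Fraction(0) for g in G}
        assert convolve(eps[i], eps[j]) == expected   # part (iv)
```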
3-5. Let A_4 be the alternating group and ζ = e^{2πi/3} ∈ C.
(i) Verify the orthogonality relations for the character table of A_4 given below.

          e    (1 2)(3 4)   (1 2 3)    (1 3 2)
         [1]      [3]         [4]        [4]
χ_1       1        1           1          1
χ_2       1        1           ζ          ζ^{−1}
χ_3       1        1           ζ^{−1}     ζ
χ_4       3       −1           0          0

(ii) Let ρ : A_4 −→ GL_C(V) be the permutation representation of A_4 associated to the conjugation action of A_4 on the set X = A_4. Using the character table in (i), express V as a direct sum n_1 V_1 ⊕ n_2 V_2 ⊕ n_3 V_3 ⊕ n_4 V_4, where V_j denotes an irreducible representation with character χ_j.
(iii) For each of the representations V_i, determine its contragredient representation V*_i as a direct sum n_1 V_1 ⊕ n_2 V_2 ⊕ n_3 V_3 ⊕ n_4 V_4.
(iv) For each of the representations V_i ⊗ V_j, determine its direct sum decomposition n_1 V_1 ⊕ n_2 V_2 ⊕ n_3 V_3 ⊕ n_4 V_4.
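Part (i) is easy to confirm numerically (a sketch of ours): with class sizes [1, 3, 4, 4] and |A_4| = 12, the rows of the table are orthonormal for the inner product (χ|ψ) = (1/|G|) ∑_{classes} |class| χ(g) ψ̄(g).

```python
import cmath

z = cmath.exp(2j * cmath.pi / 3)     # ζ = e^{2πi/3}
sizes = [1, 3, 4, 4]                 # conjugacy class sizes, |A_4| = 12
table = [
    [1, 1, 1, 1],
    [1, 1, z, z**-1],
    [1, 1, z**-1, z],
    [3, -1, 0, 0],
]

def inner(chi, psi):
    # (χ|ψ) = (1/|G|) Σ_classes |class| · χ(g) · conj(ψ(g))
    return sum(s * a * complex(b).conjugate()
               for s, a, b in zip(sizes, chi, psi)) / 12

for i in range(4):
    for j in range(4):
        expected = 1 if i == j else 0
        assert abs(inner(table[i], table[j]) - expected) < 1e-12
```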
3-6. Let A ≤ S_n be an abelian group which acts transitively on the set n = {1, . . . , n}.
(i) Show that for each k ∈ n the stabilizer of k is trivial. Deduce that |A| = n.
(ii) Show that the permutation representation C[n] of A decomposes as
C[n] = ρ_1 ⊕ · · · ⊕ ρ_n,
where ρ_1, . . . , ρ_n are the distinct irreducible representations of A.
CHAPTER 4
Some applications to group theory
In this chapter we will see some applications of representation theory to Group Theory.
4.1. Characters and the structure of groups
In this section we will give some results relating the character table of a ﬁnite group to its
subgroup structure.
Let ρ : G −→ GL_C(V) be a representation for which dim_C V = n = χ_ρ(e). Define a subset of G by

ker χ_ρ = {g ∈ G : χ_ρ(g) = n}.

Proposition 4.1. We have
ker χ_ρ = ker ρ,
hence ker χ_ρ is a normal subgroup of G, ker χ_ρ ▹ G.
Proof. For g ∈ ker χ_ρ, let v = {v_1, . . . , v_n} be a basis of V consisting of eigenvectors of ρ_g, so ρ_g v_k = λ_k v_k for suitable λ_k ∈ C; indeed, each λ_k is a root of unity and so has the form λ_k = e^{t_k i} for some t_k ∈ R. Then

χ_ρ(g) = ∑_{k=1}^{n} λ_k.

Recall that for t ∈ R, e^{ti} = cos t + i sin t. Hence

χ_ρ(g) = ∑_{k=1}^{n} cos t_k + i ∑_{k=1}^{n} sin t_k.

Since χ_ρ(g) = n,

∑_{k=1}^{n} cos t_k = n,

which can only happen if each cos t_k = 1, but then each sin t_k = 0. So all the λ_k equal 1, which implies that ρ_g = Id_V. Thus ker χ_ρ = ker ρ as claimed.
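Proposition 4.1 in miniature (our illustration, not in the notes): for the 1-dimensional sign representation of S_3, where χ(g) = sign(g) and χ(e) = 1, the set ker χ = {g : sign(g) = 1} is A_3, and it is indeed closed under conjugation, i.e., normal.

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

def sign(p):
    # parity of the number of inversions
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return -1 if inv % 2 else 1

G = list(permutations(range(3)))  # S_3
# ker χ for the sign character: elements where χ(g) = χ(e) = 1
ker = {g for g in G if sign(g) == 1}
assert len(ker) == 3              # A_3
# ker χ = ker ρ is normal: closed under conjugation by all of G
assert all(compose(compose(g, k), inverse(g)) in ker
           for g in G for k in ker)
```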
Proposition 4.2. Let χ_1, . . . , χ_r be the distinct irreducible characters of G. Then

∩_{k=1}^{r} ker χ_k = {e}.
Proof. Set
K = ∩_{k=1}^{r} ker χ_k ≤ G.
By Proposition 4.1, for each k, ker χ_k = ker ρ_k, hence K ▹ G. Indeed, since K ≤ ker ρ_k, there is a factorisation of ρ_k : G −→ GL_C(V_k),

G --p--> G/K --ρ′_k--> GL_C(V_k),

where p : G −→ G/K is the quotient homomorphism. As p is surjective, it is easy to check that ρ′_k is an irreducible representation of G/K, with character χ′_k. Clearly the χ′_k are distinct irreducible characters of the corresponding irreducible representations, and n_k = χ_k(e) = χ′_k(eK) are their dimensions.
By Corollary 3.20, we have

n_1² + · · · + n_r² = |G|,

since the χ_k are the distinct irreducible characters of G. But we also have

n_1² + · · · + n_r² ≤ |G/K|,

since the χ′_k are some of the distinct irreducible characters of G/K. Combining these we obtain |G| ≤ |G/K|, which can only happen if |G/K| = |G|, i.e., if K = {e}. So in fact

∩_{k=1}^{r} ker χ_k = {e}.
Proposition 4.3. Let χ_1, . . . , χ_r be the distinct irreducible characters of G and let r = {1, . . . , r}. Then a subgroup N ≤ G is normal if and only if it has the form

N = ∩_{k∈S} ker χ_k

for some subset S ⊆ r.
Proof. Let N ▹ G and suppose the quotient group G/N has s distinct irreducible representations σ_k : G/N −→ GL_C(W_k) (k = 1, . . . , s) with characters χ̃_k. Each of these gives rise to a composite representation of G,

σ′_k : G --q--> G/N --σ_k--> GL_C(W_k),

and again this is irreducible because the quotient homomorphism q : G −→ G/N is surjective. This gives s distinct irreducible characters of G, so each χ_{σ′_k} is actually one of the χ_j.
By Proposition 4.2 applied to the quotient group G/N,

∩_{k=1}^{s} ker σ_k = ∩_{k=1}^{s} ker χ̃_k = {eN},

hence, since ker σ′_k = q^{−1} ker σ_k, we have

∩_{k=1}^{s} ker χ_{σ′_k} = ∩_{k=1}^{s} ker σ′_k = N.

Conversely, for any S ⊆ r,

∩_{k∈S} ker χ_k ▹ G,

since for each k, ker χ_k ▹ G.
Corollary 4.4. G is simple if and only if for every irreducible character χ_k ≠ χ_1 and every e ≠ g ∈ G, χ_k(g) ≠ χ_k(e). Hence the character table can be used to decide whether G is simple.
Corollary 4.5. The character table can be used to decide whether G is solvable.
Proof. G is solvable if and only if there is a sequence of subgroups

{e} = G_ℓ ▹ G_{ℓ−1} ▹ · · · ▹ G_1 ▹ G_0 = G

for which the quotient groups G_s/G_{s+1} are abelian. This can be seen from the character table. For a solvable group we can take the subgroups to be those of the derived series, given by G^{(0)} = G and, in general, G^{(s+1)} = [G^{(s)}, G^{(s)}]. It is easily verified that G^{(s)} ▹ G and that G^{(s)}/G^{(s+1)} is abelian. By Proposition 4.3 we can now check whether such a sequence of normal subgroups exists using the character table.
We can also define the subset

ker |χ_ρ| = {g ∈ G : |χ_ρ(g)| = χ_ρ(e)}.

Proposition 4.6. ker |χ_ρ| is a normal subgroup of G.
Proof. If g ∈ ker |χ_ρ|, then using the notation of the proof of Proposition 4.1, so that |χ_ρ(g)| = χ_ρ(e) = n, we find that

n² = |χ_ρ(g)|²
   = | ∑_{k=1}^{n} cos t_k + i ∑_{k=1}^{n} sin t_k |²
   = ( ∑_{k=1}^{n} cos t_k )² + ( ∑_{k=1}^{n} sin t_k )²
   = ∑_{k=1}^{n} cos² t_k + ∑_{k=1}^{n} sin² t_k + 2 ∑_{1≤k<ℓ≤n} (cos t_k cos t_ℓ + sin t_k sin t_ℓ)
   = n + 2 ∑_{1≤k<ℓ≤n} cos(t_k − t_ℓ)
   ≤ n + 2·(n(n − 1)/2) = n + n(n − 1) = n²,

with equality possible if and only if cos(t_k − t_ℓ) = 1 whenever 1 ≤ k < ℓ ≤ n. Assuming that t_j ∈ [0, 2π) for each j, we must have t_ℓ = t_k, since we do indeed have equality here. Hence ρ(g) = λ_g Id_V for some λ_g ∈ C. In fact, |λ_g| = 1 since the eigenvalues of ρ_g are roots of unity.
If g_1, g_2 ∈ ker |χ_ρ|, then

ρ_{g_1 g_2} = λ_{g_1} λ_{g_2} Id_V,

and so g_1 g_2 ∈ ker |χ_ρ|; hence ker |χ_ρ| is a subgroup of G. Normality is also easily verified.
4.2. A result on representations of simple groups
Let G be a finite nonabelian simple group (hence of order |G| > 1). We already know that G has no nontrivial 1-dimensional representations.
Theorem 4.7. A finite nonabelian simple group G has no nontrivial irreducible 2-dimensional representations.
Proof. Suppose we have a nontrivial 2-dimensional irreducible representation ρ of G. By choosing a basis we can assume that we are considering a representation ρ : G −→ GL_C(C²). Since G is simple and ρ is nontrivial, the normal subgroup ker ρ ▹ G must be trivial, so ρ is injective. We can form the composite det ∘ ρ : G −→ C^×, a homomorphism whose kernel is a normal subgroup of G; this kernel cannot be trivial, for then G would embed in the abelian group C^×, so it must equal G. Hence ρ : G −→ SL_C(C²), where

SL_C(C²) = {A ∈ GL_C(C²) : det A = 1}.

Now notice that since ρ is irreducible and 2-dimensional, Proposition 3.21 tells us that |G| is even (this is the only time we actually use this result!). By Cauchy's Lemma, Theorem A.13, there is an element t ∈ G of order 2. Hence ρ_t ∈ SL_C(C²) also has order 2, since ρ is injective. Since ρ_t satisfies the polynomial identity

ρ_t² − I_2 = O_2,

its eigenvalues must be ±1. By Theorem 3.5 we know that we can diagonalise ρ_t, hence at least one eigenvalue must be −1. If one eigenvalue were 1, then for a suitable invertible matrix P we would have

P ρ_t P^{−1} =
[ 1   0 ]
[ 0  −1 ],

implying det ρ_t = −1, which contradicts the fact that det ρ_t = 1. Hence we must have −1 as a repeated eigenvalue, and so for a suitable invertible matrix P,

P ρ_t P^{−1} =
[ −1   0 ]
[  0  −1 ]
= −I_2,

and hence

ρ_t = P^{−1}(−I_2)P = −I_2.

For g ∈ G,

ρ_{gtg^{−1}} = ρ_g ρ_t ρ_g^{−1} = ρ_g(−I_2)ρ_g^{−1} = −I_2 = ρ_t,

and since ρ is injective, gtg^{−1} = t. Thus e ≠ t ∈ Z(G) = {e}, since Z(G) ▹ G. This provides a contradiction.
4.3. A Theorem of Frobenius
Let G be a finite group and H ≤ G a subgroup which has the following property:
For all g ∈ G − H, gHg^{−1} ∩ H = {e}.
Such a subgroup H is called a Frobenius complement.
Theorem 4.8 (Frobenius's Theorem). Let H ≤ G be a Frobenius complement and let

K = G − ∪_{g∈G} gHg^{−1} ⊆ G,

the subset of G consisting of all elements of G which are not conjugate to elements of H. Then N = K ∪ {e} is a normal subgroup of G, for which G is the semidirect product G = N ⋊ H.
Such a subgroup N is called a Frobenius kernel of G.
The remainder of this section will be devoted to giving a proof of this theorem using Character Theory. We begin by showing that

(4.1) |K| = |G|/|H| − 1.

First observe that if e ≠ g ∈ xHx^{−1} ∩ yHy^{−1}, then e ≠ x^{−1}gx ∈ H ∩ x^{−1}yHy^{−1}x; the latter can only occur if x^{−1}y ∈ H. Notice that the normalizer N_G(H) is no bigger than H, hence N_G(H) = H. Thus there are exactly |G|/|N_G(H)| = |G|/|H| distinct conjugates of H, with only the one element e in common to two or more of them. So the number of elements of G which are conjugate to elements of H is

(|G|/|H|)(|H| − 1) + 1.

Hence,

|K| = |G| − (|G|/|H|)(|H| − 1) − 1 = |G|/|H| − 1.

Now let α ∈ C(H) be a class function on the group H. We can define a function α̃ : G −→ C by

α̃(g) = α(xgx^{−1}) if xgx^{−1} ∈ H, and α̃(g) = α(e) if g ∈ K.

This is well defined and is also a class function on G. We also have

(4.2) α̃ = α ↑^G_H − α(e)(χ_ξ ↑^G_H − χ^G_1),

where we use the notation of Qu. 4-2 in the Problems. In fact, χ_ξ ↑^G_H − χ^G_1 is the character of a representation of G.
Given two class functions α, β on H,

(α̃|β̃)_G = (1/|G|) ∑_{g∈G} α̃(g) \overline{β̃(g)}
         = (1/|G|) [ (|K| + 1) α(e) \overline{β(e)} + ∑_{g∈G−N} α̃(g) \overline{β̃(g)} ]
         = (1/|G|) [ (|G|/|H|) α(e) \overline{β(e)} + (|G|/|H|) ∑_{e≠h∈H} α(h) \overline{β(h)} ]   [by Equation (4.1)]
         = (1/|H|) ∑_{h∈H} α(h) \overline{β(h)}
         = (α|β)_H.

For χ an irreducible character of H,

(χ̃|χ̃)_G = (χ|χ)_H = 1

by Proposition 3.17. Also, Equation (4.2) implies that

χ̃ = ∑_j m_j χ^G_j,

where m_j ∈ Z and the χ^G_j are the distinct irreducible characters of G. Using Frobenius Reciprocity 3.33, these coefficients m_j are given by

m_j = (χ̃|χ^G_j)_G = (χ|χ^G_j ↓^G_H)_H ≥ 0,

since χ and χ^G_j ↓^G_H are characters of H. As χ̃(e) = χ(e) > 0, χ̃ is itself the character of some representation ρ of G, i.e., χ̃ = χ_ρ. Notice that

N = {g ∈ G : χ_ρ(g) = χ_ρ(e)} = ker ρ.

Hence, by Proposition 4.1, N is a normal subgroup of G.
Now H ∩ N = {e} by construction. Moreover, |N| = |K| + 1 = |G|/|H| by Equation (4.1), so

|NH| = |N||H| = |G|,

hence NH = G. So G = N ⋊ H. This completes the proof of Theorem 4.8.
An equivalent formulation of this result is the following, which can be found in [1, Chapter 6].
Theorem 4.9 (Frobenius's Theorem: group action version). Let the finite group G act transitively on the set X, and suppose that each element g ≠ e fixes at most one element of X, i.e., |X^g| ≤ 1. Then

N = {g ∈ G : |X^g| = 0} ∪ {e}

is a normal subgroup of G.
Proof. Let x ∈ X be fixed by some element of G not equal to the identity element e, and let H = Stab_G(x). Then for k ∈ G − H, the element k · x ≠ x has

Stab_G(k · x) = k Stab_G(x) k^{−1} = kHk^{−1}.

If e ≠ g ∈ H ∩ kHk^{−1}, then g stabilizes both x and k · x, but this contradicts the assumption on the number of fixed points of elements of G. Hence H is a Frobenius complement. Now the result follows from Theorem 4.8.
Example 4.10. The subgroup H = {e, (1 2)} ≤ S_3 satisfies the conditions of Theorem 4.8. Then

∪_{g∈S_3} gHg^{−1} = {e, (1 2), (1 3), (2 3)},

and N = {e, (1 2 3), (1 3 2)} is a Frobenius kernel.
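Example 4.10 can be checked mechanically (a sketch of ours, with ad-hoc tuple-permutation helpers): H = {e, (1 2)} is a Frobenius complement in S_3, and the resulting N consists of the identity and the two 3-cycles, which indeed form a normal subgroup complementing H.

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

G = list(permutations(range(3)))       # S_3
e = (0, 1, 2)
H = {e, (1, 0, 2)}                     # {e, (1 2)}

# H is a Frobenius complement: gHg^{-1} ∩ H = {e} for g ∉ H
for g in G:
    if g not in H:
        conj = {compose(compose(g, h), inverse(g)) for h in H}
        assert conj & H == {e}

# N = K ∪ {e}: elements not conjugate into H, plus the identity
conjugates = {compose(compose(g, h), inverse(g)) for g in G for h in H}
N = (set(G) - conjugates) | {e}
assert N == {e, (1, 2, 0), (2, 0, 1)}  # the two 3-cycles: A_3

# N is a normal subgroup with N ∩ H = {e} and NH = G, so G = N ⋊ H
assert all(compose(a, b) in N for a in N for b in N)
assert all(compose(compose(g, n), inverse(g)) in N for g in G for n in N)
assert {compose(n, h) for n in N for h in H} == set(G)
```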
Exercises on Chapter 4
4-1. Let ρ : G −→ GL_C(V) be a representation.
(i) Show that the sets
ker χ_ρ = {g ∈ G : χ_ρ(g) = χ_ρ(e)} ⊆ G,
ker |χ_ρ| = {g ∈ G : |χ_ρ(g)| = χ_ρ(e)} ⊆ G,
are normal subgroups of G for which ker χ_ρ ≤ ker |χ_ρ| and ker χ_ρ = ker ρ.
[Hint: Recall that for t ∈ R, e^{ti} = cos t + i sin t.]
(ii) Show that the commutator subgroup [ker |χ_ρ|, ker |χ_ρ|] of ker |χ_ρ| is a subgroup of ker χ_ρ.
4-2. Let G be a finite group and X = G/H the finite G-set on which G acts transitively with action written g · kH = gkH for g, k ∈ G. Let ξ be the associated permutation representation on C[X].
(i) Using the definition of induced representations, show that ξ is G-isomorphic to ξ^H_1 ↑^G_H, where ξ^H_1 is the trivial 1-dimensional representation of H.
(ii) Let W_X ⊆ C[X] be the G-subspace of Qu. 2-7(i) and let θ be the representation on W_X. Show that χ_θ = χ_ξ − χ^G_1, where χ^G_1 is the character of the trivial 1-dimensional representation of G.
(iii) Use Frobenius Reciprocity to prove that (χ_θ|χ^G_1)_G = 0.
4-3. Continuing with the setup in Qu. 4-2, with the additional assumption that |X| ≥ 2, let Y = X × X be given the associated diagonal action g · (x_1, x_2) = (g · x_1, g · x_2), and let σ be the associated permutation representation on C[Y]. The action on X is said to be doubly transitive or 2-transitive if, whenever (x_1, x_2), (x′_1, x′_2) ∈ Y with x_1 ≠ x_2 and x′_1 ≠ x′_2, there is a g ∈ G for which g · (x_1, x_2) = (x′_1, x′_2).
(i) Show that χ_σ = χ_ξ², i.e., show that for every g ∈ G, χ_σ(g) = χ_ξ(g)².
(ii) Show that the action on X is 2-transitive if and only if the action on Y has exactly two orbits.
(iii) Show that the action on X is 2-transitive if and only if (χ_σ|χ^G_1)_G = 2.
(iv) Show that the action on X is 2-transitive if and only if θ is irreducible.
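Parts (i)–(iii) can be illustrated for the natural (2-transitive) action of S_3 on three points (our sketch): the average of χ_ξ(g)² over G counts the orbits on X × X, and there are exactly two of them — the diagonal and its complement.

```python
from itertools import permutations

G = list(permutations(range(3)))  # S_3 acting naturally on X = {0, 1, 2}

# χ_ξ(g) = number of fixed points of g on X
chi = [sum(1 for i in range(3) if g[i] == i) for g in G]

# (χ_σ | χ_1)_G = (1/|G|) Σ_g χ_ξ(g)^2 = number of orbits on X × X
avg = sum(c * c for c in chi) / len(G)
assert avg == 2.0   # exactly two orbits, so the action is 2-transitive
```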
4-4. The following is the character table of a certain group G of order 60, where the numbers in brackets [ ] are the sizes of the conjugacy classes of G, χ_k is the character of an irreducible representation ρ_k, and α = (1 + √5)/2, β = (1 − √5)/2.

       g_1 = e_G   g_2    g_3    g_4    g_5
          [1]      [20]   [15]   [12]   [12]
χ_1        1         1      1      1      1
χ_2        5        −1      1      0      0
χ_3        4         1      0     −1     −1
χ_4        3         0     −1      α      β
χ_5        a         b      c      d      e

(i) Determine the dimension of the representation ρ_5.
(ii) Use row orthogonality to determine χ_5.
(iii) Show that G is a simple group.
(iv) Decompose each of the contragredient representations ρ*_j as a direct sum of the irreducible representations ρ_k.
(v) Decompose each of the tensor product representations ρ_s ⊗ ρ_t as a direct sum of the irreducible representations ρ_k.
(vi) Identify this group G up to isomorphism.
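Without giving away parts (i)–(ii), the four rows printed above can at least be checked against row orthogonality by machine (a sketch of ours; α and β are real here, so no conjugation is needed).

```python
from math import sqrt

sizes = [1, 20, 15, 12, 12]               # class sizes, |G| = 60
a, b = (1 + sqrt(5)) / 2, (1 - sqrt(5)) / 2
rows = [
    [1, 1, 1, 1, 1],
    [5, -1, 1, 0, 0],
    [4, 1, 0, -1, -1],
    [3, 0, -1, a, b],
]

def inner(x, y):
    # (χ|ψ) = (1/|G|) Σ_classes |class| χ(g) ψ(g); all values real
    return sum(s * u * v for s, u, v in zip(sizes, x, y)) / 60

for i in range(4):
    for j in range(4):
        assert abs(inner(rows[i], rows[j]) - (1 if i == j else 0)) < 1e-9
```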
APPENDIX A
Background information on groups
A.1. The Isomorphism and Correspondence Theorems
The three Isomorphism Theorems and the Correspondence Theorem are fundamental results
of Group Theory. We will write H ≤ G and N ▹ G to indicate that H is a subgroup and N is a
normal subgroup of G.
Recall that given a normal subgroup N ▹ G the quotient or factor group G/N has for its
elements the distinct cosets
gN = {gn ∈ G : n ∈ N} (g ∈ G).
Then the natural homomorphism
π: G −→ G/N; π(g) = gN
is surjective with kernel ker π = N.
Theorem A.1 (First Isomorphism Theorem). Let φ : G −→ H be a homomorphism with N = ker φ. Then there is a unique homomorphism φ̄ : G/N −→ H such that φ̄ ∘ π = φ. Equivalently, there is a unique factorisation

φ : G --π--> G/N --φ̄--> H.

In diagram form this becomes

          π
    G --------> G/N
     \          /
    φ \        / ∃! φ̄
       v      v
          H

where all the arrows represent group homomorphisms.
Theorem A.2 (Second Isomorphism Theorem). Let H ≤ G and N ▹ G. Then there is an isomorphism

HN/N ≅ H/(H ∩ N);  hN ←→ h(H ∩ N).

Theorem A.3 (Third Isomorphism Theorem). Let K ▹ G and N ▹ G with N ▹ K. Then K/N ▹ G/N is a normal subgroup, and there is an isomorphism

G/K ≅ (G/N)/(K/N);  gK ←→ (gN)(K/N).

Theorem A.4 (Correspondence Theorem). There is a one-one correspondence between subgroups of G containing N and subgroups of G/N, given by

H ←→ π(H) = H/N,  π^{−1}Q ←→ Q,

where

π^{−1}Q = {g ∈ G : π(g) = gN ∈ Q}.

Moreover, under this correspondence, H ▹ G if and only if π(H) ▹ G/N.
A.2. Some definitions and notation
Let G be a group.
Definition A.5. The centre of G is the subset

Z(G) = {c ∈ G : gc = cg for all g ∈ G}.

This is a normal subgroup of G, i.e., Z(G) ▹ G.
Definition A.6. Let g ∈ G; then the centralizer of g is

C_G(g) = {c ∈ G : cg = gc}.

This is a subgroup of G, i.e., C_G(g) ≤ G.
Definition A.7. Let H ≤ G. The normalizer of H in G is

N_G(H) = {c ∈ G : cHc^{−1} = H}.

This is a subgroup of G containing H; moreover, H is a normal subgroup of N_G(H), i.e., H ▹ N_G(H).
Definition A.8. G is simple if its only normal subgroups are G and {e}. Equivalently, it has no nontrivial proper normal subgroups.
Definition A.9. The order of G, |G|, is the number of elements in G when G is finite, and ∞ otherwise. If g ∈ G, the order of g, |g|, is the smallest natural number n ∈ N such that g^n = e, provided such a number exists; otherwise it is ∞. Equivalently, |g| = |⟨g⟩|, the order of the cyclic group generated by g. If G is finite, then every element has finite order.
Theorem A.10 (Lagrange's Theorem). If G is a finite group and H ≤ G, then |H| divides |G|. In particular, for any g ∈ G, |g| divides |G|.
Definition A.11. Two elements x, y ∈ G are conjugate in G if there exists g ∈ G such that y = gxg^{−1}. The conjugacy class of x is the set of all elements of G conjugate to x,

x^G = {y ∈ G : y = gxg^{−1} for some g ∈ G}.

Conjugacy is an equivalence relation on G, and the distinct conjugacy classes are the distinct equivalence classes.
A.3. Group actions
Let G be a group (with identity element e = e_G) and let X be a set. Recall that an action of G on X is a rule assigning to each g ∈ G a bijection φ_g : X −→ X, satisfying the identities

φ_{gh} = φ_g ∘ φ_h,  φ_{e_G} = Id_X.

We will frequently make use of the notation

g · x = φ_g(x)

(or even just write gx) when the action is clear, but sometimes we may need to refer explicitly to the action. It is often useful to view an action as corresponding to a function

Φ : G × X −→ X;  Φ(g, x) = φ_g(x).

It is also frequently important to regard an action of G as corresponding to a group homomorphism

φ : G −→ Perm(X);  g ↦ φ_g,

where Perm(X) denotes the group of all permutations (i.e., bijections X −→ X) of the set X. If n = {1, 2, . . . , n}, then S_n = Perm(n) is the symmetric group on n objects; S_n has order n!, i.e., |S_n| = n!.
Given such an action of G on X, we make the following definitions:

Stab_φ(x) = {g ∈ G : φ_g(x) = x},
Orb_φ(x) = {y ∈ X : y = φ_g(x) for some g ∈ G},
X^G = {x ∈ X : gx = x for all g ∈ G}.
Then Stab_φ(x) is called the stabilizer of x and is often denoted Stab_G(x) when the action is clear, while Orb_φ(x) is called the orbit of x and is often denoted Orb_G(x). X^G is called the fixed point set of the action.
Theorem A.12. Let φ be an action of G on X, and let x ∈ X.
(a) Stab_φ(x) is a subgroup of G. Hence if G is finite, then so is Stab_φ(x) and, by Lagrange's Theorem, |Stab_φ(x)| divides |G|.
(b) There is a bijection

G/Stab_φ(x) ←→ Orb_φ(x);  g Stab_φ(x) ←→ g · x = φ_g(x).

Furthermore, this bijection is G-equivariant in the sense that

hg Stab_φ(x) ←→ h · (g · x).

In particular, if G is finite, then so is Orb_φ(x), and we have

|Orb_φ(x)| = |G|/|Stab_φ(x)|.

(c) The distinct orbits partition X into a disjoint union of subsets,

X = ⨿_{distinct orbits} Orb_φ(x).

Equivalently, there is an equivalence relation ∼_G on X for which the distinct orbits are the equivalence classes, given by

x ∼_G y ⇐⇒ y = g · x for some g ∈ G.

Hence, when X is finite,

|X| = ∑_{distinct orbits} |Orb_φ(x)|.
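Part (b), the orbit–stabilizer relation, is easy to confirm by brute force (our illustration) for the natural action of S_4 on {0, 1, 2, 3}:

```python
from itertools import permutations

G = list(permutations(range(4)))   # S_4 acting naturally on X = {0, 1, 2, 3}

for x in range(4):
    orbit = {g[x] for g in G}
    stab = [g for g in G if g[x] == x]
    # orbit-stabilizer: |Orb(x)| · |Stab(x)| = |G|
    assert len(orbit) * len(stab) == len(G)
```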
This theorem is the basis of many arguments in Combinatorics and Number Theory as well as in Group Theory. Here is an important group-theoretic example, often called Cauchy's Lemma.
Theorem A.13 (Cauchy's Lemma). Let G be a finite group and let p be a prime for which p | |G|. Then there is an element g ∈ G of order p.
Proof. Let

X = {(g_1, g_2, . . . , g_p) ∈ G^p : g_1 g_2 · · · g_p = e_G}.

Let H be the group of all cyclic permutations of the set {1, 2, . . . , p}; this is a cyclic group of order p. Consider the following action of H on X:

γ · (g_1, g_2, . . . , g_p) = (g_{γ^{−1}(1)}, g_{γ^{−1}(2)}, . . . , g_{γ^{−1}(p)}).

It is easily seen that this is an action. By Theorem A.12, the size of each orbit must divide |H| = p, hence it must be 1 or p since p is prime. On the other hand, the first p − 1 entries of an element of X can be chosen freely, with the last entry then determined, so

|X| = |G|^{p−1} ≡ 0 (mod p),

since p | |G|. Again by Theorem A.12, we have

|X| = ∑_{distinct orbits} |Orb_H(x)|,

and hence

∑_{distinct orbits} |Orb_H(x)| ≡ 0 (mod p).

But there is at least one orbit of size 1, namely that containing e = (e_G, . . . , e_G); hence,

∑_{distinct orbits not containing e} |Orb_H(x)| ≡ −1 (mod p).

If all the left-hand summands were equal to p, we would obtain a contradiction, so at least one other orbit contains exactly one element. But such an orbit must have the form

Orb_H((g, g, . . . , g)),  with g^p = e_G.

Hence g is the desired element of order p.
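The counting in this proof can be watched in action for G = S_3 and p = 3 (our illustration; permutations are tuples and `compose` is composition):

```python
from itertools import permutations, product

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

G = list(permutations(range(3)))   # S_3, |G| = 6
e = (0, 1, 2)
p = 3                              # prime dividing |G|

# X = {(g_1, g_2, g_3) : g_1 g_2 g_3 = e}
X = [t for t in product(G, repeat=p)
     if compose(compose(t[0], t[1]), t[2]) == e]
assert len(X) == len(G) ** (p - 1)      # |X| = |G|^{p-1} = 36 ≡ 0 (mod 3)

# orbits of the cyclic shift have size 1 or 3; the size-1 orbits are
# exactly the constant tuples (g, g, g) with g^3 = e
fixed = [t for t in X if t == t[1:] + t[:1]]
assert all(t[0] == t[1] == t[2] for t in fixed)
assert len(fixed) % p == 0 and len(fixed) > 1   # so some g ≠ e has g^3 = e
```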
Later we will meet the following type of action. Let k be a field and V a vector space over k. Let GL_k(V) denote the group of all invertible k-linear transformations V −→ V. Then for any group G, a group homomorphism ρ : G −→ GL_k(V) defines a k-linear action of G on V by

g · v = ρ(g)(v).

This is also called a k-representation of G in (or on) V. One extreme example is provided by the case where G = GL_k(V) with ρ = Id_{GL_k(V)}. We will be mainly interested in the situation where G is finite and k = R or k = C; however, other cases are important in Mathematics.
If we have actions of G on sets X and Y, a function φ : X −→ Y is called G-equivariant or a G-map if

φ(gx) = gφ(x)  (g ∈ G, x ∈ X).

An invertible G-map is called a G-equivalence (it is easily seen that the inverse map is itself a G-map). We say that two G-sets are G-equivalent if there is a G-equivalence between them. Another way to understand these ideas is as follows. If Map(X, Y) denotes the set of all functions X −→ Y, then we can define an action of G on Map(X, Y) by

(g · φ)(x) = g(φ(g^{−1}x)).

Then the fixed point set of this action is

Map(X, Y)^G = {φ : gφ(g^{−1}x) = φ(x) ∀ x, g} = {φ : φ(gx) = gφ(x) ∀ x, g}.

So Map_G(X, Y) = Map(X, Y)^G is just the set of all G-equivariant maps.
A.4. The Sylow theorems
The Sylow Theorems provide the beginnings of a systematic study of the structure of finite groups. For a finite group G, they connect the factorisation of |G| into prime powers,

|G| = p_1^{r_1} p_2^{r_2} · · · p_d^{r_d},

where 2 ≤ p_1 < p_2 < · · · < p_d with each p_k prime and each r_k > 0, to the existence of subgroups of prime power order, often called p-subgroups. They also provide a sort of converse to Lagrange's Theorem.
Here are the three Sylow Theorems. Recall that a proper subgroup H < G is maximal if it is contained in no larger proper subgroup; also, a subgroup P ≤ G is a p-Sylow subgroup if |P| = p^k where p^{k+1} ∤ |G|.
Theorem A.14 (Sylow's First Theorem). A p-subgroup P ≤ G is maximal if and only if it is a p-Sylow subgroup. Hence every p-subgroup is contained in a p-Sylow subgroup.
Theorem A.15 (Sylow's Second Theorem). Any two p-Sylow subgroups P, P′ ≤ G are conjugate in G.
Theorem A.16 (Sylow's Third Theorem). Let P ≤ G be a p-Sylow subgroup with |P| = p^k, so |G| = p^k m where p ∤ m. Also let n_p be the number of distinct p-Sylow subgroups of G. Then
(i) n_p ≡ 1 (mod p);
(ii) m ≡ 0 (mod n_p).
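Theorem A.16 already pins down n_p quite tightly by arithmetic alone. A small sketch (ours; the function name is ad hoc) lists the values of n_p compatible with conditions (i) and (ii):

```python
# Conditions (i) and (ii): n_p ≡ 1 (mod p) and n_p divides m,
# where m is the order with its p-part stripped off.
def candidates(order, p):
    m = order
    while m % p == 0:
        m //= p
    return [n for n in range(1, m + 1) if m % n == 0 and n % p == 1]

assert candidates(24, 2) == [1, 3]   # S_4 in fact has n_2 = 3
assert candidates(24, 3) == [1, 4]   # S_4 in fact has n_3 = 4
assert candidates(15, 3) == [1] and candidates(15, 5) == [1]
```

For |G| = 15 both Sylow subgroups are forced to be normal (n_3 = n_5 = 1), which is the first step in showing that every group of order 15 is cyclic.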
Finally, we end with an important result on chains of subgroups in a finite p-group.
Theorem A.17. Let P be a finite p-group. Then there is a sequence of subgroups

{e} = P_0 ≤ P_1 ≤ · · · ≤ P_n = P,

with |P_k| = p^k and P_{k−1} ▹ P_k for 1 ≤ k ≤ n.
We also have the following, which can be proved directly by the method used in the proof of Theorem A.13. Recall that for any group G, its centre is the normal subgroup

Z(G) = {c ∈ G : cg = gc for all g ∈ G} ▹ G.

Theorem A.18. Let P be a nontrivial finite p-group. Then the centre of P is nontrivial, i.e., Z(P) ≠ {e}.
A.5. Solvable groups
Definition A.19. A group G which has a sequence of subgroups

{e} = H_0 ≤ H_1 ≤ · · · ≤ H_n = G,

with H_{k−1} ▹ H_k and H_k/H_{k−1} cyclic of prime order, is called solvable (or soluble).
Solvable groups are generalizations of p-groups, in that every finite p-group is solvable. A finite solvable group G can be thought of as built up from the abelian subquotients H_k/H_{k−1}. Since finite abelian groups are easily understood, the complexity lies in the way these subquotients are 'glued' together.
More generally, for a group G, a series of subgroups

G = G_0 > G_1 > · · · > G_r = {e}

is called a composition series for G if G_{j+1} ▹ G_j for each j, and each successive quotient group G_j/G_{j+1} is simple. The quotient groups G_j/G_{j+1} (and groups isomorphic to them) are called the composition factors of the series, which is said to have length r. Every finite group has a composition series, with the solvable groups being those whose composition factors are abelian. Thus, to study a general finite group we need to analyse both the finite simple groups and the ways that they can be glued together to appear as subquotients in composition series.
A.6. Product and semidirect product groups
Given two groups H, K, their product G = H × K is the set of ordered pairs

H × K = {(h, k) : h ∈ H, k ∈ K}

with multiplication (h_1, k_1) · (h_2, k_2) = (h_1 h_2, k_1 k_2), identity e_G = (e_H, e_K) and inverses given by (h, k)^{−1} = (h^{−1}, k^{−1}).
A group G is the semidirect product G = N ⋊ H of the subgroups N, H if N ▹ G, H ≤ G, H ∩ N = {e} and HN = NH = G. Thus each element g ∈ G has a unique expression g = hn where n ∈ N, h ∈ H. The multiplication is given in terms of such factorisations by

(h_1 n_1)(h_2 n_2) = (h_1 h_2)(h_2^{−1} n_1 h_2 n_2),

where h_2^{−1} n_1 h_2 ∈ N by the normality of N.
An example of a semidirect product is provided by the symmetric group on 3 letters, S_3. Here we can take

N = {e, (1 2 3), (1 3 2)},  H = {e, (1 2)}.

H can also be taken to be one of the subgroups {e, (1 3)} or {e, (2 3)}.
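The multiplication rule above can be verified exhaustively for S_3 = N ⋊ H (a sketch of ours, with ad-hoc tuple-permutation helpers):

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

e = (0, 1, 2)
N = [e, (1, 2, 0), (2, 0, 1)]   # the 3-cycles: normal in S_3
H = [e, (1, 0, 2)]              # {e, (1 2)}

# every g ∈ S_3 factors uniquely as g = h n with h ∈ H, n ∈ N
factorisations = {compose(h, n) for h in H for n in N}
assert len(factorisations) == 6

# the semidirect multiplication rule:
# (h1 n1)(h2 n2) = (h1 h2)(h2^{-1} n1 h2 n2), with h2^{-1} n1 h2 ∈ N
for h1 in H:
    for n1 in N:
        for h2 in H:
            for n2 in N:
                lhs = compose(compose(h1, n1), compose(h2, n2))
                conj = compose(compose(inverse(h2), n1), h2)
                assert conj in N
                rhs = compose(compose(h1, h2), compose(conj, n2))
                assert lhs == rhs
```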
A.7. Some useful groups
In this section we define various groups that will prove useful as test examples in the theory we will develop. Some of these will be familiar, although the notation may vary from that in previous encounters with these groups.
A.7.1. The quaternion group. The quaternion group of order 8, Q_8, has as elements the following 2 × 2 complex matrices:

±1, ±i, ±j, ±k,

where

1 = [ 1 0 ; 0 1 ] = I_2,  i = [ i 0 ; 0 −i ],  j = [ 0 1 ; −1 0 ],  k = [ 0 i ; i 0 ].
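These matrices can be multiplied out by machine (a sketch of ours) to confirm the familiar quaternion relations i² = j² = k² = −1 and ij = k, jk = i, ki = j:

```python
# 2×2 complex matrices as nested tuples ((a, b), (c, d))
def matmul(A, B):
    return tuple(tuple(sum(A[r][t] * B[t][c] for t in range(2))
                       for c in range(2)) for r in range(2))

one = ((1, 0), (0, 1))
i = ((1j, 0), (0, -1j))
j = ((0, 1), (-1, 0))
k = ((0, 1j), (1j, 0))
neg = lambda A: tuple(tuple(-x for x in row) for row in A)

# defining relations of Q_8
assert matmul(i, i) == matmul(j, j) == matmul(k, k) == neg(one)
assert matmul(i, j) == k and matmul(j, k) == i and matmul(k, i) == j
assert matmul(j, i) == neg(k)   # noncommutative
```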
A.7.2. Dihedral groups.
Definition A.20. The dihedral group of order 2n, D_{2n}, is generated by two elements α, β of orders |α| = n and |β| = 2 which satisfy the relation

βαβ = α^{−1}.

The distinct elements of D_{2n} are

α^r, α^r β  (r = 0, . . . , n − 1).

Notice that we also have α^r β = βα^{−r}. A useful geometric interpretation of D_{2n} is provided by the following.
Proposition A.21. The group D_{2n} is isomorphic to the symmetry group of a regular n-gon in the plane, with α corresponding to a rotation through 2π/n about the centre and β corresponding to the reflection in a line through a vertex and the centre.
A.7.3. Symmetric and alternating groups. The symmetric group on n objects, S_n, is best handled using cycle notation. Thus, if σ ∈ S_n, then we express σ in terms of its disjoint cycles. Here the cycle (i_1 i_2 . . . i_k) is the element which acts on the set n = {1, 2, . . . , n} by sending i_r to i_{r+1} (if r < k) and i_k to i_1, while fixing the remaining elements of n; the length of this cycle is k and we say that it is a k-cycle. Every permutation σ has a unique expression (apart from order) as a composition of its disjoint cycles, i.e., cycles with no common entries. We usually suppress the cycles of length 1; thus (1 2 3)(4 6)(5) = (1 2 3)(4 6).
It is also possible to express a permutation σ as a composition of 2-cycles; such a decomposition is not unique, but the number of 2-cycles taken modulo 2 (or equivalently, whether this number is even or odd, i.e., its parity) is unique. The sign of σ is

sign σ = (−1)^{number of 2-cycles} = ±1.

Theorem A.22. The function sign : S_n −→ {1, −1} is a surjective group homomorphism.
The kernel of sign is called the alternating group A_n and its elements are called even permutations, while elements of S_n not in A_n are called odd permutations. Notice that |A_n| = |S_n|/2 = n!/2. S_n is the disjoint union of the two cosets eA_n = A_n and τA_n, where τ ∈ S_n is any odd permutation.
Here are the elements of A_3 and S_3 expressed in cycle notation.

A_3 : e = (1)(2)(3), (1 2 3) = (1 3)(1 2), (1 3 2) = (1 2)(1 3).
S_3 : e, (1 2 3), (1 3 2), (1 2)e = (1 2), (1 2)(1 2 3) = (1)(2 3), (1 2)(1 3 2) = (2)(1 3).
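The sign function can be checked computationally (our sketch computes sign by counting inversions, which has the same parity as any decomposition into 2-cycles):

```python
from itertools import permutations

def sign(p):
    # parity of the number of inversions equals the parity of any
    # decomposition of p into 2-cycles
    n = len(p)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return -1 if inv % 2 else 1

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

G = list(permutations(range(4)))  # S_4
# sign is a homomorphism: sign(pq) = sign(p) sign(q)
assert all(sign(compose(p, q)) == sign(p) * sign(q)
           for p in G for q in G)
# its kernel A_4 contains exactly half of S_4
assert sum(1 for p in G if sign(p) == 1) == len(G) // 2
```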
A.8. Some useful Number Theory
In this section we record some number-theoretic results that are useful in studying finite groups. These should be familiar, and no proofs are given. Details can be found in [2] or any other basic book on abstract algebra.
Definition A.23. Given two integers a, b, their highest common factor or greatest common divisor is the highest positive common factor, and is written (a, b). It has the property that any integer common divisor of a and b divides (a, b).
Definition A.24. Two integers a, b are coprime if (a, b) = 1.
Theorem A.25. Let a, b ∈ Z. Then there are integers r, s such that ra + sb = (a, b). In particular, if a and b are coprime, then there are integers r, s such that ra + sb = 1.
More generally, if a_1, . . . , a_n are pairwise coprime, then there are integers r_1, . . . , r_n such that

r_1 a_1 + · · · + r_n a_n = 1.

These results are consequences of the Euclidean or Division Algorithm for Z.
EA: Let a, b ∈ Z with b ≠ 0. Then there are unique q, r ∈ Z for which 0 ≤ r < |b| and a = qb + r.
It can be shown that in this situation, (a, b) = (b, r). This allows a determination of the highest common factor of a and b by repeatedly using EA until the remainder r becomes 0, when the previous remainder will be (a, b).
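The repeated-division procedure just described, extended to keep track of the coefficients r, s of Theorem A.25, is sketched below (our code; the names are ad hoc):

```python
def ext_gcd(a, b):
    # repeated use of EA: returns (g, r, s) with g = (a, b) = r*a + s*b
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t

g, r, s = ext_gcd(60, 24)
assert g == 12 and r * 60 + s * 24 == 12
g, r, s = ext_gcd(8, 15)          # coprime, so r·8 + s·15 = 1
assert g == 1 and r * 8 + s * 15 == 1
```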
Bibliography
[1] J. L. Alperin & R. B. Bell, Groups and Representations, Springer-Verlag (1995).
[2] J. B. Fraleigh, A First Course in Abstract Algebra, 5th Edition, Addison-Wesley (1994).
[3] G. James & M. Liebeck, Representations and Characters of Groups, Cambridge University Press (1993).
[4] J.-P. Serre, Linear Representations of Finite Groups, Springer-Verlag (1977).
[5] S. Sternberg, Group Theory and Physics, Cambridge University Press (1994).