
Algebras

Karin Erdmann

Mathematical Institute

University of Oxford

October 2007


Introduction

You will probably have studied groups acting on sets, and have seen that group actions

occur in many situations. Very often, the underlying set has the structure of a vector space,

and the group acts by linear transformation. This is very good because in this case, one gets

new information, for example, by exploiting invariant subspaces, or eigenvalues, or other

properties coming from linear algebra.

The action of a group element on a vector space is always invertible. But in many

applications one needs to deal with linear maps which are not invertible. As an extreme,

take a linear map whose iteration eventually maps everything to zero. For example, taking derivatives of functions is a linear map, and if a polynomial of degree 2 is differentiated three times one always gets zero.

Therefore one would also like to study actions by linear transformations which are not

necessarily invertible. One appropriate structure to model such actions is that of an ’algebra’,

more precisely, an associative K-algebra where K is a ﬁeld. This includes many known

examples, like polynomials K[X] , or square matrices. New examples are group algebras

which can be thought of as a ’linearisation’ of groups; and these ensure that group actions

by linear transformations can be viewed from this new perspective.

We will start by introducing these algebras, and give examples, and we will investigate

some general properties.

If an algebra A acts on a vector space V , then V together with this action is said to

be an A-module. [This is analogous to the approach for group actions. ’If G acts on a set

Ω, then Ω is a G-set’ becomes ’If A acts on a vector space V then V is an A-module’.]

The second chapter studies modules, and their general properties. Furthermore, we deﬁne

actions of algebras on vector spaces, and show that A-modules and actions of A on vector

spaces are the same. For some purposes, the language of modules is more convenient, but

sometimes it is more natural to think of actions.

An A-module V is said to be simple (or ‘irreducible’) if it is non-zero, and if it does

not have any subspaces which are invariant under the action of A except V and 0. Simple

modules are the 'building blocks' for arbitrary finite-dimensional A-modules. The Jordan–Hölder Theorem makes this precise, and we will prove this theorem, for modules, in the

third chapter.

The nicest modules are the ones which are direct sums of simple modules; they are called 'semisimple'. An algebra for which all modules are semisimple is said to be a semisimple algebra. Semisimple modules and algebras are investigated in chapter 4. Fortunately, the structure of semisimple algebras is completely understood; it is described by the Wedderburn Theorem. This is a very important result on algebras, and it is used in many situations. We will give a proof when K = C, in chapter 5.

Maschke’s Theorem characterizes precisely when algebras arising from group actions are

semisimple. Namely, this is the case if and only if the characteristic of the ﬁeld does not

divide the order of the group. This is proved in chapter 6.

Maschke’s Theorem has numerous important applications. Combining it with Wedder-

burn’s theorem gives a complete description of the irreducible representations of G over C.

This can be taken as the starting point for the study of group characters. Given a representation ρ : G → GL(n, C), the character associated to this representation is the function χ : G → C which takes g ∈ G to the trace of ρ(g), that is χ(g) = Σ_{i=1}^n a_ii where ρ(g) = [a_ij]. Characters have very remarkable properties. For example, one can detect by

just looking at the characters whether or not two representations are equivalent. Characters

have applications in many other parts of mathematics (or even in other sciences).

The last chapter deals with general properties of characters, and gives some applications.

This chapter is short since the subject is well covered by existing literature. Further material

can be found for example in the books by Ledermann ??, or James and Liebeck ??.

It is expected that you are familiar with first and second year basic linear algebra, such as elementary properties of vector spaces and linear maps. We also expect that you remember the group axioms. We make some conventions: We only consider rings with

identity, and all vector spaces will be ﬁnite-dimensional, except for the polynomial ring

K[X]. We mention some occasions when results also hold without assuming that the vector

spaces are ﬁnite-dimensional.

Oxford, 2007 KE

1 Algebras

The main objects we want to study are algebras over a field. Roughly speaking, an algebra is a ring which is also a vector space, in which scalars commute with everything.

We recall the definition of a ring.

Deﬁnition 1.1 (Reminder)

A ring R is an abelian group (R, +) which in addition has another operation, (r, s) → rs : R × R → R, called multiplication, such that

(i) (Distributivity) r(x + y) = rx + ry and (r + s)x = rx + sx;

(ii) (Associativity) r(st) = (rs)t.

The ring is commutative if rs = sr for all r, s ∈ R. An identity of R is an element 1_R ∈ R such that 1_R x = x1_R = x for all x ∈ R.

Convention In this course, all rings are assumed to have an identity. (If no confusion is likely we write 1 for 1_R.) Rings are usually not commutative.

You have already seen various examples:

(1) The integers, Z. The rational numbers, Q, etc.

(2) The polynomials K[X] in one variable X, with coeﬃcients in K. Similarly, the polyno-

mials K[X, Y ] in two commuting variables X and Y , with coeﬃcients in K.

(3) The n × n matrices M_n(K), with entries in a field K, with respect to matrix multiplication and addition. This is not commutative for n ≥ 2.

(4) If R and S are rings, the direct product of R and S is defined as

R × S = {(r, s) : r ∈ R, s ∈ S},

where addition and multiplication are componentwise.


1.1 Algebras

The above examples (2) and (3) are not just rings but also vector spaces. There are many more rings which are vector spaces, and this has led to the definition of an algebra.

Deﬁnition 1.2

An algebra A over a ﬁeld K (or a K-algebra) is a ring, with multiplication

(a, b) → a.b (a, b ∈ A)

which also is a K-vector space, with scalar multiplication

(λ, a) → λa (λ ∈ K, a ∈ A),

and where the scalar multiplication and the ring multiplication satisfy

λ(a.b) = (λa).b = a.(λb) (λ ∈ K, a, b ∈ A).

The algebra A is finite-dimensional if dim_K(A) < ∞.

The condition relating scalar multiplication and ring multiplication says that scalars

commute with everything. One might want to spell out the various axioms. We have already

listed the ones for a ring. To say that A is a K-vector space means that for all a, b ∈ A and

λ, µ ∈ K we have

(i) λ(b +c) = λb +λc;

(ii) (λ +µ)a = λa +µa;

(iii) (λµ)a = λ(µa);

(iv) 1_K a = a.

Properties (i) and (ii) are sometimes summarized by saying that ’scalar multiplication is

bilinear’.

Strictly speaking, we should say that A is an associative algebra; the underlying multipli-

cation in the ring is associative. There are other types of algebras, for example Lie algebras;

but we will only consider associative algebras.

Since A is a vector space and 1_A is a non-zero vector, it follows that the map λ → λ1_A from K to A is one-to-one. We will therefore view K as a subset of A, using this map as an identification. This also means that it is not really necessary to have different notation for scalar multiplication and ring multiplication, so we will usually write ab instead of a.b; this should not cause confusion.


Example 1.3

Let A be the set of upper triangular 2 × 2 matrices over R, that is,

A = { ( x  y
        0  z ) : x, y, z ∈ R },

with respect to matrix addition and multiplication. This is clearly a subspace of M_2(R). Furthermore, it is a subring: since it is a subspace, we know that (A, +) is a subgroup of M_2(R); furthermore, a product of two upper triangular matrices is again upper triangular; and the identity of M_2(R) lies in A. So A is a ring. Scalar multiples of the identity matrix commute with all matrices, so the compatibility of scalar multiplication and ring multiplication holds.

1.2 The multiplication

Suppose A is an algebra; what does one need in order to understand the multiplication? Take any vector space basis of A, say {v_1, ..., v_n}. If we know the products v_i v_j for all i, j then we know all products. Namely, take two arbitrary elements a, b ∈ A; then a = Σ_i a_i v_i and b = Σ_j b_j v_j for a_i, b_j ∈ K, and

ab = (Σ_i a_i v_i)(Σ_j b_j v_j) = Σ_{i,j=1}^n a_i b_j (v_i v_j).

This is very useful; in practice one would aim to use a basis where the products v_i v_j are 'easy', for example one may take the identity 1_A as one of the basis elements.
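The expansion above is easy to mechanize. The sketch below is my own illustration, not part of the notes: an element is stored as its coefficient list with respect to the basis, and the basis products v_i v_j as a table of coefficient lists (the table shown is a hypothetical example).

```python
# Multiply two elements of an algebra given only a table of basis products.
# Elements are coefficient lists [a_1, ..., a_n] with respect to v_1, ..., v_n;
# table[i][j] is the coefficient list of the product v_i v_j (assumed known).

def multiply(a, b, table):
    n = len(a)
    result = [0] * n
    for i in range(n):
        for j in range(n):
            coeff = a[i] * b[j]           # the scalar a_i b_j
            for k in range(n):            # add a_i b_j (v_i v_j) to the result
                result[k] += coeff * table[i][j][k]
    return result

# Example table: a 2-dimensional algebra in which v_1 is the identity
# and v_2^2 = 0 (a hypothetical choice for illustration).
table = [
    [[1, 0], [0, 1]],   # v_1 v_1 = v_1,  v_1 v_2 = v_2
    [[0, 1], [0, 0]],   # v_2 v_1 = v_2,  v_2 v_2 = 0
]
print(multiply([1, 1], [1, 1], table))  # (v_1 + v_2)^2 = v_1 + 2 v_2  → [1, 2]
```

The triple loop is exactly the double sum ab = Σ_{i,j} a_i b_j (v_i v_j), distributed over the coordinates of each product v_i v_j.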

Example 1.4

In Example 1.3, you would probably choose as a basis the 'matrix units' in A, that is

E_11 = ( 1  0      E_12 = ( 0  1      E_22 = ( 0  0
         0  0 ),            0  0 ),            0  1 ).

Then the products are easy to describe: E_ii² = E_ii and E_11 E_12 = E_12 = E_12 E_22, and E_12 E_11 = 0 = E_22 E_12. One might visualize the multiplication by a diagram like

E_11 •  --E_12-->  • E_22
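These relations are easy to confirm numerically; the following sketch (my addition, using NumPy, not part of the notes) checks each product from Example 1.4.

```python
import numpy as np

# Matrix units of the upper triangular 2x2 algebra from Example 1.3.
E11 = np.array([[1, 0], [0, 0]])
E12 = np.array([[0, 1], [0, 0]])
E22 = np.array([[0, 0], [0, 1]])

# The products computed in Example 1.4:
assert (E11 @ E11 == E11).all() and (E22 @ E22 == E22).all()  # E_ii^2 = E_ii
assert (E11 @ E12 == E12).all() and (E12 @ E22 == E12).all()  # E11 E12 = E12 = E12 E22
assert (E12 @ E11 == 0).all() and (E22 @ E12 == 0).all()      # E12 E11 = 0 = E22 E12
print("all matrix unit relations hold")
```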

1.3 Constructing algebras

You can also construct algebras using the fact that the multiplication is already determined by the products of basis elements. You might start with some vector space V, fix a basis, and just define the products of any two basis elements. However, you need to make sure that the associative law holds.

Exercise 1.1

Let V have basis v_1, v_2. Which of the following products satisfies the associative law? If so, does this define an algebra (with identity)? Here c, d ∈ K.

(i) v_1 v_2 = v_2 v_1 = v_2, v_1² = v_1, v_2² = cv_1 + dv_2.

(ii) v_1 v_2 = v_2 v_1 = v_2, v_1² = v_2, v_2² = v_1 + v_2.

Solution 1.5

Consider the definition in (i). The products in which v_1 occurs tell us that v_1 is the identity. So to check associativity, we only need to compare (v_2 v_2)v_2 and v_2(v_2 v_2), and one checks that these are equal. So the definition in (i) defines an algebra with identity.

Now consider the definition in (ii). We have (v_1 v_1)v_2 = v_2 v_2 = v_1 + v_2 but v_1(v_1 v_2) = v_1 v_2 = v_2. These are not equal, and this multiplication does not satisfy the associative law.
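By bilinearity, associativity holds for all elements as soon as (v_i v_j)v_k = v_i(v_j v_k) for all basis vectors, so a computer can check it by brute force. A sketch (my own illustration; for definition (i) I take c = d = 1, a choice not made in the notes):

```python
# Structure constants: prod[i][j] is the coefficient list of v_{i+1} v_{j+1}.
defn_i  = [[[1, 0], [0, 1]],            # v1^2 = v1,  v1 v2 = v2
           [[0, 1], [1, 1]]]            # v2 v1 = v2, v2^2 = c v1 + d v2, c = d = 1
defn_ii = [[[0, 1], [0, 1]],            # v1^2 = v2,  v1 v2 = v2
           [[0, 1], [1, 1]]]            # v2 v1 = v2, v2^2 = v1 + v2

def mult(a, b, prod):
    n = len(a)
    out = [0] * n
    for i in range(n):
        for j in range(n):
            for k in range(n):
                out[k] += a[i] * b[j] * prod[i][j][k]
    return out

def associative(prod):
    n = len(prod)
    basis = [[1 if t == s else 0 for t in range(n)] for s in range(n)]
    return all(mult(mult(u, v, prod), w, prod) == mult(u, mult(v, w, prod), prod)
               for u in basis for v in basis for w in basis)

print(associative(defn_i), associative(defn_ii))  # → True False
```

The failing triple for (ii) is exactly the one found by hand: (v_1 v_1)v_2 ≠ v_1(v_1 v_2).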

1.4 Some important examples

(1) The ﬁeld K is a K-algebra.

(2) Polynomial rings K[X], or K[X, Y ], are K-algebras.

(3) The n × n matrices M_n(K), with respect to matrix multiplication and addition.

There are many more algebras consisting of matrices. For example, take the upper triangular matrices

T_n(K) := { [a_ij] ∈ M_n(K) : a_ij = 0 for i > j }.

They also form an algebra, with respect to matrix multiplication and addition.

(4) Let V be a K-vector space, and define

End_K(V) := { α : V → V : α is K-linear },

the K-linear transformations V → V. This is a K-algebra if one takes composition of maps as the multiplication, and where the addition and scalar multiplication are pointwise, as usual.

(5) The field C is also an algebra over R, of dimension 2. Similarly, the field Q(√2) is an algebra over Q, of dimension 2. More generally, if K is a subfield of a larger field L, then L is an algebra over K.

(6) The algebra H of quaternions is the 4-dimensional algebra over R with basis 1, i, j, k, where the multiplication is defined by

i² = j² = k² = −1,  ij = −ji = k,  jk = −kj = i,  ki = −ik = j.

This algebra is a division ring: the general element of H is of the form u = a + bi + cj + dk with a, b, c, d ∈ R. Let ū := a − bi − cj − dk; then ūu = a² + b² + c² + d², which is ≠ 0 for u ≠ 0, and one can write down the inverse of a non-zero element u.
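As a sanity check (my addition, not part of the notes), one can realize 1, i, j, k by 2 × 2 complex matrices — a standard faithful representation, not the abstract definition used above — and verify the relations and the formula ūu = (a² + b² + c² + d²)·1.

```python
import numpy as np

# Represent 1, i, j, k by 2x2 complex matrices (a standard choice).
one = np.eye(2)
i = np.array([[1j, 0], [0, -1j]])
j = np.array([[0, 1], [-1, 0]], dtype=complex)
k = i @ j

# The defining relations of H:
assert np.allclose(i @ j, k) and np.allclose(j @ i, -k)
assert np.allclose(j @ k, i) and np.allclose(k @ j, -i)
assert np.allclose(k @ i, j) and np.allclose(i @ k, -j)
assert np.allclose(i @ i, -one) and np.allclose(j @ j, -one) and np.allclose(k @ k, -one)

# ubar u = (a^2 + b^2 + c^2 + d^2) 1, which yields the inverse of u != 0.
a, b, c, d = 2.0, -1.0, 0.5, 3.0
u = a * one + b * i + c * j + d * k
ubar = a * one - b * i - c * j - d * k
assert np.allclose(ubar @ u, (a*a + b*b + c*c + d*d) * one)
print("quaternion relations and norm formula verified")
```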

(7) Let G be a group and K any field. The group algebra A = KG has underlying vector space with basis {v_g : g ∈ G}. The multiplication of the basis elements is defined as

v_g v_h := v_{gh},

and it is extended to linear combinations. This defines an associative multiplication:

(v_g v_h)v_x = v_{gh} v_x = v_{(gh)x} = v_{g(hx)} = v_g v_{hx} = v_g(v_h v_x).

The identity of KG is the element v_1, where 1 = 1_G is the identity of G. This algebra has dimension equal to the order of G.

Some authors write simply g for the vector in KG, instead of v_g.
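The multiplication of KG can be sketched in code by storing an element as a map from group elements to coefficients; here G is the cyclic group Z/3 with composition (g + h) mod 3 (my choice of example, not from the notes).

```python
from collections import defaultdict

# An element of the group algebra K G is a dict {g: coefficient}.
def ga_multiply(a, b, compose):
    out = defaultdict(int)
    for g, ag in a.items():
        for h, bh in b.items():
            out[compose(g, h)] += ag * bh   # v_g v_h = v_{gh}, extended bilinearly
    return dict(out)

compose = lambda g, h: (g + h) % 3          # G = Z/3 = {0, 1, 2}
u = {0: 1, 1: 2}                            # v_0 + 2 v_1
w = {1: 1, 2: 1}                            # v_1 + v_2
print(ga_multiply(u, w, compose))           # → {1: 1, 2: 3, 0: 2}
```

The product is (v_0 + 2v_1)(v_1 + v_2) = v_1 + v_2 + 2v_2 + 2v_0 = 2v_0 + v_1 + 3v_2, matching the output.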

(8) If A is any K-algebra, then the 'opposite algebra' A^op of A has underlying space A, and the multiplication in A^op is defined by

a ∗ b := ba  (a, b ∈ A).

It is easy to check that this is again an algebra. Clearly (A^op)^op ≅ A.

1.5 Subalgebras and ideals, factor rings

We recall some standard constructions which are completely analogous to those you have

seen for commutative rings.

The example (3) in 1.4 suggests that we should deﬁne a ’subalgebra’. Suppose A is a K-

algebra, then a subalgebra B of A is a subset of A which is an algebra with respect to the

operations in A, that is:

Deﬁnition 1.6

Let B be a subset of A. Then B is a subalgebra if B is a subspace such that

(i) for all b_1, b_2 ∈ B, the product b_1 b_2 belongs to B; and

(ii) the identity 1_A belongs to B.

1.5.1 Examples

Let A = M_n(K). This has many important subalgebras.

(1) The upper triangular matrices T_n(K) form a subalgebra of A. This is not commutative for n ≥ 2; for example, the matrix units E_11 and E_12 do not commute (see 1.4).


(2) Let α ∈ A, and define A_α to be the span of {1, α, α², ...}. That is, A_α is the space of all matrices which are polynomials in α. This is a subalgebra of A, and it is always commutative.

(3) The diagonal matrices D_n(K) form a subalgebra of A, of dimension n.

(4) The 'three-subspace algebra' is the subalgebra of M_4(K) defined by

{ ( a_1  b_1  b_2  b_3
    0    a_2  0    0
    0    0    a_3  0
    0    0    0    a_4 ) : a_i, b_j ∈ K }.

(5) There are also subalgebras such as

{ ( a  b  0  0
    c  d  0  0
    0  0  x  y
    0  0  z  u ) : a, b, c, d, x, y, z, u ∈ K } ⊂ M_4(K).

Not every subring of M_n(K) is a subalgebra. For example, M_n(Z) is a subring of M_n(R), but it is not a subalgebra.

Deﬁnition 1.7

If R is a ring (or an algebra) then I is a left ideal of R provided (I, +) is a subgroup of (R, +) such that rx ∈ I for all x ∈ I and r ∈ R. Similarly, I is a right ideal of R if (I, +) is a subgroup such that xr ∈ I for all x ∈ I and r ∈ R. I is an ideal if it is both a left ideal and a right ideal.

For example, if z ∈ R then Rz = {rz : r ∈ R} is a left ideal. For non-commutative rings, Rz need not be an ideal.

Exercise 1.2

Let R = M_n(K) with n ≥ 1, and let z be the 'matrix unit' z = E_11. Calculate Rz, and also zR. Are they equal?

1.5.2 Factor rings

If I is an ideal of R, consider the cosets r + I for r ∈ R. Recall that the cosets R/I form a ring, with + and · defined 'as usual' by

(r + I) + (s + I) := (r + s) + I,  (r + I)(s + I) := (rs) + I

for r, s ∈ R.

When the ring is a K-algebra then we have some extra structure.


Lemma 1.8

Assume A is an algebra.

(a) Suppose I is a left (or right or 2-sided) ideal of A. Then I is a subspace of A.

(b) If I is an ideal of A then A/I is an algebra.

Proof

(a) By definition, (I, +) is a group. We need to show that if c ∈ K and x ∈ I then cx ∈ I. But c1_A ∈ A, and

cx = c(1_A x) = (c1_A)x ∈ I.

(b) We know already that the cosets A/I form a ring, and they also form a vector space (see A1). We only have to check that scalars commute with everything, but this property is inherited from A. Explicitly, let λ ∈ K and a, b ∈ A; then

(λ1_A + I)[(a + I)(b + I)] = (λ1_A + I)(ab + I) = λ1_A(ab) + I = (λa)b + I = (λa + I)(b + I),

and since λ(ab) = a(λb), it is also equal to (a + I)(λb + I).

Example 1.9

Let A = K[X], the algebra of polynomials, and let I be a non-zero ideal of A. Then there is some non-zero polynomial f(X) such that I = (f(X)), a principal ideal, and A/I = K[X]/(f(X)). Such a factor algebra is finite-dimensional; its dimension is the degree of f(X).

1.6 Algebra homomorphisms

Deﬁnition 1.10

Let A and B be K-algebras. A map φ : A → B is a K-algebra homomorphism if

(i) φ is K-linear,

(ii) φ(ab) = φ(a)φ(b) for all a, b ∈ A; and

(iii) φ(1_A) = 1_B.

The map φ is a K-algebra isomorphism if it is a K-algebra homomorphism and is in

addition bijective.

Example 1.11

Let A be the algebra of upper triangular 2 × 2 matrices over R (see 1.3), and let B be the direct product of two copies of R, that is B = R × R. Define φ : A → B by

φ( ( a  b
     0  c ) ) := (a, c).

Then φ is linear; as a vector space map it is a projection onto some coordinates. You should check that φ preserves the multiplication and maps the identity of A to the identity of B.
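A quick randomized check (my own sketch, not part of the notes) that φ preserves multiplication on upper triangular matrices:

```python
import random

# phi sends the upper triangular matrix [[a, b], [0, c]] to (a, c) in R x R.
def phi(m):
    return (m[0][0], m[1][1])

def mat_mult(m, n):   # 2x2 matrix product
    return [[sum(m[r][t] * n[t][s] for t in range(2)) for s in range(2)]
            for r in range(2)]

for _ in range(100):
    m = [[random.random(), random.random()], [0, random.random()]]
    n = [[random.random(), random.random()], [0, random.random()]]
    p = phi(mat_mult(m, n))                             # phi(mn)
    q = tuple(x * y for x, y in zip(phi(m), phi(n)))    # phi(m) phi(n), componentwise
    assert all(abs(s - t) < 1e-9 for s, t in zip(p, q))
print("phi preserves multiplication on random samples")
```

This works because the diagonal entries of a product of upper triangular matrices are the products of the diagonal entries.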

When you write linear transformations of a vector space as matrices with respect to a

ﬁxed basis, you basically prove that the algebra of linear transformations is isomorphic to

the algebra of square matrices. We recall the proof, partly as a reminder, but also since we

will later need a generalization.

Lemma 1.12

Suppose V is an n-dimensional vector space over the field K. Then the algebras End_K(V) and M_n(K) are isomorphic.

Proof

We ﬁx a K-basis of V . Suppose α is a linear transformation of V , let M(α) be the matrix

of α with respect to the ﬁxed basis. Then deﬁne a map

ψ : End

K

(V ) → M

n

(K), ψ(α) := M(α).

One checks that ψ is K-linear. One also checks that it preserves the multiplication, that is

M(β) ◦ M(α) = M(β ◦ α). [This is done in ﬁrst year linear algebra]. The map ψ is also a

one-to-one. Suppose M(α) = 0, then by deﬁnition of the matrix α maps the ﬁxed basis to

zero, but then α = 0. The map ψ is surjective, because every n n matrix deﬁnes a linear

transformation of V .

In general, homomorphisms and isomorphisms are very important for comparing different algebras.

Exercise 1.3

Suppose φ : A → B is an isomorphism of K-algebras. Show that:

(i) if a ∈ A then a² = 0 if and only if φ(a)² = 0;

(ii) a ∈ A is a zero divisor if and only if φ(a) is a zero divisor;

(iii) A is a field if and only if B is a field.

1.6.1 Some common algebra homomorphisms

Some algebra homomorphisms occur very often; we list a few here. You are encouraged to check these in detail.


(1) Let I be an ideal of A; then the 'canonical map' π : A → A/I, defined by π(a) := a + I, is an algebra homomorphism.

(2) Substitution is an algebra homomorphism whenever it makes sense. Let B be any K-

algebra and b ∈ B. Deﬁne ψ : K[X] −→ B by

ψ(f) = f(b) (f ∈ K[X]).

(3) Let A = A_1 × A_2, the direct product of two algebras. Then the projection π_1 : A → A_1 defined by π_1(a_1, a_2) := a_1 is an algebra homomorphism, and similarly the projection π_2 from A onto A_2 is an algebra homomorphism.

Note however that the inclusion map a_1 → (a_1, 0) is not an algebra homomorphism, as it does not take the identity of A_1 to the identity of A.

Exactly as for rings we have an Isomorphism Theorem.

Theorem 1.13 (Isomorphism Theorem)

Let A and B be K-algebras, and suppose φ : A → B is a K-algebra homomorphism. Then ker(φ) is an ideal of A, im(φ) is a subalgebra of B, and

A/ker(φ) ≅ im(φ).

Proof

Almost everything follows from the isomorphism theorem for rings. We only need to check that im(φ) is actually a subalgebra of B. Since φ is linear, the image im(φ) is a subspace; and we know it is a subring containing the identity of B, and therefore it is a subalgebra.

Example 1.14

Suppose A = A_1 × A_2, the direct product of two K-algebras. Then, as we have seen, the projection π_1 : A → A_1 is an algebra homomorphism, and it is onto. By the Isomorphism Theorem we have A/ker(π_1) ≅ A_1. Furthermore, the definition of π_1 gives that

ker(π_1) = {(0, a_2) : a_2 ∈ A_2} = {0} × A_2.

This also shows that {0} × A_2 is an ideal of A.

1.7 Some algebras of small dimensions

One might like to know how many K-algebras there are of a given dimension, up to isomorphism, and if possible to have a complete description. Looking at small dimensions, we observe that any 1-dimensional K-algebra is isomorphic to K. Namely, it must contain the scalar multiples of the identity, and this is then the whole algebra, by dimension.

We consider now algebras of dimension 2 over R. The construction in 1.9 produces many examples. Namely, take any polynomial f(X) ∈ R[X] of degree 2, and take A := R[X]/(f(X)). We ask when two such algebras are isomorphic. We would also like to know whether there are others.

We will now classify 2-dimensional algebras over R, up to isomorphism. Take such an algebra A. We can choose a basis which contains the identity of A, say {1_A, b}.

Then b² must be a linear combination of 1_A and b, so there are scalars c, d ∈ R such that b² = c1_A + db. We consider the polynomial X² − dX − c, and we complete squares:

X² − dX − c = (X − d/2)² − (c + (d/2)²).

Let β′ := b − (d/2)1_A; this is an element of the algebra. We also set r := c + (d/2)², which is a scalar. Then we have

β′² = r1_A.

Then set

β := |r|^(−1/2) β′  if r ≠ 0,    β := β′  if r = 0.

Then the set {1_A, β} also is a basis of A, and we have β² = 0 or β² = ±1_A.

This brings A into only three possible forms. We write A_j for the algebra in which β² = j1_A, for j = 0, 1, −1. We want to show that no two of these three algebras are isomorphic. We use Exercise 1.3.

(1) The algebra A_0 has a non-zero element with square zero. By Exercise 1.3, any algebra isomorphic to A_0 must have such an element.

(2) The algebra A_1 does not have a non-zero element whose square is zero. Suppose α² = 0 for α ∈ A_1. Write α = x1 + yβ with x, y ∈ R; then

α² = (x² + y²)1 + 2xyβ = 0,

and it follows that 2xy = 0 and x² + y² = 0, since 1_A and β are linearly independent. Now x, y ∈ R, and we deduce x = y = 0, and therefore α = 0.

This shows that the algebra A_1 is not isomorphic to A_0.

(3) Consider the algebra A_{−1}. This occurs in nature: namely, C is such an R-algebra, taking β = i. In fact we can see directly that A_{−1} is a field, from

(c1 + dβ)(c1 − dβ) = (c² + d²)1,

and if c1 + dβ ≠ 0 we can write down its inverse with respect to multiplication.

Clearly A_0 is not a field, and A_1 also is not a field; it has zero divisors: (β − 1)(β + 1) = 0. So A_{−1} is not isomorphic to A_0 or A_1.

We can list a 'canonical representative' for each of the three algebras. Consider the algebra R[X]/(X²); this is 2-dimensional and is generated by a non-zero element with square zero, so it is isomorphic to A_0. Next, consider R[X]/(X² − 1); this has a generator with square equal to 1, so it is isomorphic to A_1. Similarly, R[X]/(X² + 1) is isomorphic to A_{−1}. To summarize, we have proved:

Lemma 1.15

Up to isomorphism, there are precisely three 2-dimensional algebras over R. Any 2-dimensional algebra over R is isomorphic to precisely one of

R[X]/(X²),  R[X]/(X² − 1),  R[X]/(X² + 1).

One might ask what happens over other fields. There are infinitely many non-isomorphic 2-dimensional algebras over Q, and there are only two 2-dimensional algebras over C (see the exercises).

Deﬁnition 1.16

The K-algebra A is generated by a set S = {α_1, ..., α_k} if A is the K-span of 1 together with all 'monomials' α_{i_1} ⋯ α_{i_r} for α_{i_ν} ∈ S.

Sometimes it is useful, for practical purposes, to have a small set of generators. For example, the polynomial algebra A = K[X] is generated by X. Or, let A = KG be the group algebra; if G = ⟨g⟩ is cyclic, then A is generated by v_g.

1.8 Finite-dimensional algebras A which can be generated by one element.

Suppose A is a finite-dimensional algebra that is generated by one element α, say. Then A is spanned by the set {1, α, α², ...}. [The algebra A_α in 1.4 is an example.] There is a polynomial of smallest degree, m(X), such that m(α) = 0. This is the same argument as in linear algebra, when we prove that a linear map has a minimal polynomial. Namely, since A is finite-dimensional, the elements α^j cannot all be linearly independent. Let n be smallest such that 1, α, ..., α^n are linearly dependent. Then write

α^n = Σ_{i=0}^{n−1} c_i α^i

for some c_i ∈ K. Then, as in linear algebra, if m(X) = X^n − Σ_{i=0}^{n−1} c_i X^i, then m(X) is the unique monic polynomial of smallest degree such that m(α) = 0.

Define ψ : K[X] → A by substituting α, that is,

ψ(f) = f(α).

This is a K-algebra homomorphism (see 1.6.1). It is surjective, by the definition of A. The Isomorphism Theorem shows that

K[X]/ker(ψ) ≅ A.

Moreover, ker(ψ) = (m(X)); namely, as in linear algebra, we have f(α) = 0 if and only if m(X) divides f(X). This also shows that the dimension of K[X]/(m(X)) is equal to the degree of m(X) (which you have probably seen).

Example 1.17

Let A = KG where G = ⟨g⟩, the cyclic group of order 3. Then A is generated by α = v_g. The previous discussion shows that A ≅ K[X]/(m(X)) where m(X) is the minimal polynomial of v_g. We have

v_g³ = v_{g³} = v_1 = 1_A,

and therefore the minimal polynomial of v_g divides X³ − 1. We know that dim A = 3, and hence m(X) must have degree 3; it follows that m(X) = X³ − 1.
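This can be confirmed with the matrix of v_g acting on the basis v_1, v_g, v_{g²} of KG, which is the cyclic shift matrix (the code is my own sketch, not part of the notes).

```python
import numpy as np

# v_g acts on the basis v_1, v_g, v_{g^2} of K G as a cyclic shift.
P = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])

# P satisfies X^3 - 1 = 0 ...
assert (np.linalg.matrix_power(P, 3) == np.eye(3)).all()

# ... and no polynomial of smaller degree: I, P, P^2 are linearly independent,
# so the minimal polynomial has degree 3 and must be X^3 - 1.
M = np.stack([np.eye(3).ravel(), P.ravel(),
              np.linalg.matrix_power(P, 2).ravel()])
assert np.linalg.matrix_rank(M) == 3
print("minimal polynomial of the shift is X^3 - 1")
```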

EXERCISES

1.4. Let A = B = C and φ(a) := ā, the map which takes a to its complex conjugate. Verify that:

(i) the map φ is a ring homomorphism;

(ii) A is a 2-dimensional R-algebra, and the map φ is an R-algebra homomorphism;

(iii) we know that A and B are C-algebras; show that φ is not a C-algebra homomorphism.

1.5. Let A be the set of matrices

A = { (  a  b
        −b  a ) : a, b ∈ R }.

Show that A is an R-subalgebra of M_2(R). [Which of the three algebras in ?? is it?]

1.6. Let K = Z/2Z, the field with two elements. Let A be the set of matrices

A = { ( a  b
        b  a + b ) : a, b ∈ K }.

Show that A is a subalgebra of M_2(K). [Note that A is generated by the matrix (0 1; 1 1). Find its square.]

1.7. Show that the algebra in 1.5.1(5) is isomorphic to the direct product M_2(K) × M_2(K).


1.8. Consider the three-subspace algebra S in 1.5.1(4). Show that there is a surjective algebra homomorphism from S onto the direct product K × K × K × K.

1.9. Let S be the three-subspace algebra in 1.5.1(4). Find a description of S which is similar to the algebra in Example 1.4.

1.10. (Continuation) Find the opposite algebra S^op. Is it isomorphic to S?

1.11. Show that there are precisely two 2-dimensional algebras over C, up to isomor-

phism.

1.12. Consider 2-dimensional algebras over Q. Show that the algebras Q[X]/(X² − p) and Q[X]/(X² − q) are not isomorphic if p and q are distinct primes.

Solution 1.18

We fix the basis {1_A, α} of A = Q[X]/(X² − p), where 1_A is the coset of 1 and α is the coset of X; that is, α² = p1_A. Similarly we fix the basis of B = Q[X]/(X² − q) consisting of 1_B and β with β² = q1_B.

Suppose ψ : A → B is an algebra isomorphism; then

ψ(1_A) = 1_B,  ψ(α) = c1_B + dβ

where c, d ∈ Q and d ≠ 0 (if d = 0 then ψ(α) would be a scalar multiple of ψ(1_A), and ψ could not be injective). We must have

p1_B = pψ(1_A) = ψ(p1_A) = ψ(α²) = ψ(α)² = (c1_B + dβ)² = c²1_B + d²β² + 2cdβ = (c² + qd²)1_B + 2cdβ.

But 1_B and β are linearly independent (√q ∉ Q), so 2cd = 0 and p = c² + qd². Since d ≠ 0 we must have c = 0, and then p = qd² with d ∈ Q, which is impossible for distinct primes p and q: it would make p/q the square of a rational number. This contradiction shows that A and B are not isomorphic.

2 Representations, Modules

We want to study actions of groups and algebras on vector spaces. If V is a vector space, then End_K(V) is the algebra of linear transformations of V, and this contains GL(V), the group of invertible linear transformations of V. When V is n-dimensional and we use matrices with respect to some fixed basis, then End_K(V) becomes M_n(K), and GL(V) becomes GL_n(K).

Convention We consistently write all maps on the left, following the practice used in linear algebra. To be consistent, we then also let groups act on the left of vector spaces.

Deﬁnition 2.1

Let G be a group, and let V be some vector space. A (linear) representation of G on V is

a group homomorphism

ρ : G −→ GL(V ).

The representation has degree n where n = dimV .

If we write linear transformations as matrices with respect to a ﬁxed basis, we get the

group homomorphism ρ : G → GL(n, K). This is sometimes called a matrix representation

of G.

Deﬁnition 2.2

Let A be a K-algebra and V a vector space. A representation of A on V is a K-algebra homomorphism

θ : A −→ End_K(V).

The representation has degree n, where n = dim V. If we write linear transformations as matrices with respect to a fixed basis, we get an algebra homomorphism

θ : A −→ M_n(K).

This is sometimes called a matrix representation of A.

The definitions of a representation as above also make sense when V is not finite-dimensional. But recall that in these notes, all vector spaces are assumed to be finite-dimensional (except for K[X]).

2.0.1 Examples

(1) Let A be a subalgebra of End_K(V) where V is a vector space. Then the inclusion map a → a from A to End_K(V) is clearly an algebra homomorphism, hence is a representation. Similarly, if A is a subalgebra of M_n(K) then the inclusion map is a representation of A. For example, take A = End_K(V), or M_n(K). Or take A as in Exercise 1.5 and V = R².

(2) Let V = A and define θ : A → End_K(A) by

θ(a) = [x → ax].

Then θ(a) ∈ End_K(A). This is known as the 'left regular representation'. You should check that θ is an algebra homomorphism.

(3) Let A = K[X], the algebra of polynomials. Take a vector space V and a linear transformation α of V. Define

θ : A → End_K(V),  θ(f) := f(α),

that is, substitute α into f. This is a representation of A. [In chapter 1 we have seen that θ is an algebra homomorphism.] For each α there is such a representation, and we write θ = θ_α.

Lemma 2.3

Every representation of the algebra A = K[X] is of the form θ_α for some linear transformation α.

Proof

Suppose φ : A → End_K(V) is a representation. Then set α := φ(X). This is a linear transformation of V. Furthermore, if f ∈ A, say f = Σ_i a_i X^i, then we have

φ(f) = Σ_i a_i φ(X)^i = Σ_i a_i α^i = f(α),

where the first equality holds since φ is an algebra homomorphism. So φ = θ_α.


2.0.2 Examples

(1) Suppose Ω is a finite G-set. Take a vector space V with a basis labelled by the elements of Ω, say V = span{b_ω : ω ∈ Ω}. (We will later call this V = KΩ.) Define

ρ : G → GL(V)

as follows. For g ∈ G, we take for ρ(g) the linear map which takes b_ω to b_{g(ω)}. One checks that ρ is a group homomorphism; this comes from the fact that G acts on Ω. Note that we write maps on the left.

For example, let n = 3 and G = S_3. If we take Ω = {1, 2, 3} and if g is the transposition permuting 1 and 2, then ρ(g) has matrix

ρ(g) = ( 0  1  0
         1  0  0
         0  0  1 ).
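The construction ρ(g) : b_ω → b_{g(ω)} can be sketched in code for G = S_3. In this sketch (my own, not from the notes) a permutation g of {0, 1, 2} is stored as the tuple (g(0), g(1), g(2)), re-indexing Ω = {1, 2, 3} from 0.

```python
import itertools
import numpy as np

# rho(g) is the permutation matrix sending b_w to b_{g(w)}:
# column w has a single 1, in row g(w).
def rho(g):
    n = len(g)
    m = np.zeros((n, n), dtype=int)
    for w in range(n):
        m[g[w], w] = 1
    return m

def compose(g, h):            # (gh)(w) = g(h(w)): maps written on the left
    return tuple(g[h[w]] for w in range(len(h)))

# rho is a group homomorphism: rho(gh) = rho(g) rho(h) for all g, h in S_3.
S3 = list(itertools.permutations(range(3)))
assert all((rho(compose(g, h)) == rho(g) @ rho(h)).all() for g in S3 for h in S3)

# The transposition swapping the first two points gives the matrix in the text:
print(rho((1, 0, 2)))   # the matrix with rows (0 1 0), (1 0 0), (0 0 1)
```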

(2) As a special case of (1), take Ω = G, where the action is by left multiplication. The corresponding representation is the left regular representation. That is, for g ∈ G, ρ(g) is the linear map with

v_x → v_{gx}.

Its degree is the order of G.

(3) Let G be any group and take V = K. Define ρ : G → GL(K) by

ρ(g) = 1_K  (g ∈ G).

This is a representation of G, called the trivial representation.

(4) Let G be the group of symmetries of a square. As a group, this is isomorphic to the dihedral group of order 8. Draw the square in the plane, such that the center is at the origin and the corners are at (±1, ±1). Let V = R². For g ∈ G, let ρ(g) be the linear map of V which induces the symmetry given by g. We write ρ(g) as a matrix with respect to the standard basis.

Let σ be the rotation by π/2 (anti-clockwise); then

ρ(σ) = ( 0  −1
         1   0 ).

Suppose τ ∈ G is the reflection taking (1, 1) to (1, −1); then

ρ(τ) = ( 1   0
         0  −1 ).

The group G can be generated by σ and τ. Since we want to define a group homomorphism, these two matrices already determine the action of all elements of G. But we must check that this really defines a group homomorphism.


The group G has a presentation

⟨σ, τ : σ^4 = 1, τ^2 = 1, τ^{−1}στ = σ^{−1}⟩.

All we have to do is to check that the matrices ρ(σ) and ρ(τ) satisfy the relations deﬁning G, and this is an easy calculation. Then we have shown that we have a well-deﬁned representation

ρ : G → GL_2(R)

which takes σ, τ to the matrices ρ(σ) and ρ(τ) deﬁned above.
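The "easy calculation" checking the relations can be sketched as follows (an illustration, not from the text):

```python
import numpy as np

# Check that rho(sigma) and rho(tau) satisfy the defining relations
# of the dihedral group of order 8.
S = np.array([[0, -1],
              [1,  0]])   # rho(sigma), rotation by pi/2
T = np.array([[1,  0],
              [0, -1]])   # rho(tau), a reflection
I = np.eye(2, dtype=int)

assert np.array_equal(np.linalg.matrix_power(S, 4), I)   # sigma^4 = 1
assert np.array_equal(T @ T, I)                          # tau^2 = 1
# tau^{-1} sigma tau = sigma^{-1}; here tau^{-1} = tau and sigma^{-1} = sigma^3
assert np.array_equal(T @ S @ T, np.linalg.matrix_power(S, 3))
```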

Exercise 2.1

Show that the trivial representation (3) can be viewed as a special case of (1).

Remark 2.4

(a) Let A = KG, the group algebra of a ﬁnite group G. Suppose we have a representation of A, say θ : A → End_K(V). For g ∈ G, the element v_g ∈ A is invertible, and hence θ(v_g) also is invertible and therefore lies in GL(V). Moreover, if g, h ∈ G then

θ(v_g v_h) = θ(v_g)θ(v_h), θ(v_1) = Id_V

since θ preserves the multiplication. So we can deﬁne

ρ : G → GL(V), ρ(g) := θ(v_g)

and this is a group homomorphism. This shows that any representation of the group algebra A = KG automatically gives a group representation of G.

(b) Conversely, suppose V is a vector space over K and ρ : G → GL(V) is a representation of G. We view G as a basis of the group algebra A = KG, and therefore we get a linear map θ : A → End_K(V) by setting θ(Σ_g a_g v_g) := Σ_g a_g ρ(g). One checks that this is an algebra homomorphism. This shows that every representation of G also gives a representation of the group algebra KG.

Deﬁnition 2.5

Let θ_1, θ_2 be representations of the algebra A, where θ_1 : A → End_K(V_1) and θ_2 : A → End_K(V_2). Then θ_1 and θ_2 are said to be equivalent if there is a vector space isomorphism ψ : V_1 → V_2 such that

ψ^{−1} ◦ θ_2(a) ◦ ψ = θ_1(a)

for all a ∈ A.

This means that θ_1(a) and θ_2(a) should be simultaneously similar, for all a ∈ A. In the special case when A = K[X] we therefore have the following.


Lemma 2.6

Let A = K[X]. Then two representations θ_α and θ_β are equivalent if and only if the linear transformations α and β are similar.

Take any ideal I of the algebra A, and let B = A/I. Since the canonical map A → A/I

is an algebra map, we can view representations of the factor algebra as representations of

the original algebra. More precisely:

Lemma 2.7

Let A be any algebra, and I an ideal of A. Let B := A/I. Then the representations of B are

precisely the representations of A which map I to zero.

Proof

Let γ : A → End_K(V) be a K-algebra homomorphism such that γ(x) = 0 for all x ∈ I. Then deﬁne

γ̄ : B → End_K(V)

by γ̄(a + I) := γ(a). This is well-deﬁned: if a + I = a′ + I, then a − a′ ∈ I, so γ(a − a′) = 0 and therefore γ(a) = γ(a′). One checks that γ̄ is an algebra homomorphism; this is straightforward.

Conversely, let θ : B → End_K(V) be a representation of B. Deﬁne θ̃ : A → End_K(V) by

θ̃(a) := θ(a + I).

That is, θ̃ is the composition of θ with the canonical map π : A → A/I, and therefore θ̃ is an algebra homomorphism.

Deﬁnition 2.8

Given a representation θ of B, where B = A/I, the corresponding representation θ̃ of A as in the Lemma is called the inﬂation of θ.

2.1 Modules

If we have a representation of the algebra A on the vector space V , then A ‘acts on V ’,

and V together with this action is said to be an A-module. This is analogous to the case of

groups acting on sets. [Given a group homomorphism from G to Sym(Ω), then Ω together

with this action is a G-set].

Modules can be deﬁned for arbitrary rings, not just algebras. They are very common

(and important), for example modules over Z occur frequently. The basic concepts are the

same; therefore we give the general deﬁnition.


Deﬁnition 2.9

Let R be a ring. An R-module is an abelian group (M, +) together with a map

R × M → M, (r, m) → rm (r ∈ R, m ∈ M)

such that for all r, s ∈ R and all m, n ∈ M:

(i) (r + s)m = rm + sm;
(ii) r(m + n) = rm + rn;
(iii) r(sm) = (rs)m;
(iv) 1_R m = m.

One can think of a module as a generalization of a vector space: A vector space is an

abelian group M together with a scalar multiplication, that is, a map K × M → M, satisfying

the usual axioms. If one replaces K by a ring R, then one gets an R-module. When R = K,

that is R is a ﬁeld, then R-modules are therefore exactly the same as K-vector spaces.

The above deﬁnes a left R-module, and one deﬁnes right R-modules analogously. When R

is not commutative the behaviour of left modules and of right modules can be diﬀerent; to

go into details is beyond this course (see however an exercise in chapter 3).

We will consider only left modules since our rings are K-algebras, and scalars are usually

written to the left.

Example 2.10

Take any left ideal I of R, then I is an R-module. First, (I, +) is a group, by deﬁnition. The

properties (i) to (iv) hold even for m, n, r, s ∈ R, and then also for m, n ∈ I and r, s ∈ R.

In this course, we will focus on the case when the ring is a K-algebra. Some of the general

properties are the same for rings.

Convention We write R and M if we think of an R-module for a general ring, and we

write A and V if we work with an A-module where A is a K-algebra.

Suppose A is a K-algebra. Then A-modules are automatically vector spaces, and this is

very important:

Lemma 2.11

Let A be a K-algebra. Then any A-module is automatically a K-vector space.

Proof

Recall that we view K as a subset of A, so this gives us a map K × V → V, and it satisﬁes the vector space axioms; they are just (i) to (iv) in 2.9.


2.1.1 Relating A-modules and representations of A

The following shows that ’modules’ and ’representations’ of an algebra are the same. This is a formal matter; nothing is ‘done’ to the modules or representations, it only describes two diﬀerent views of the same object. It is convenient, as it often saves one a lot of checking, and it gives twice as much information.

Lemma 2.12

Let A be a K-algebra.

(a) Suppose V is an A-module. Then we have a representation of A on V,

θ : A → End_K(V), θ(a) = [v → av] (a ∈ A, v ∈ V).

(b) Suppose σ : A → End_K(V) is a representation. Then V becomes an A-module by setting

av := σ(a)(v), (a ∈ A, v ∈ V).

Proof

(a) The map θ(a) lies in End_K(V): it is a linear transformation of V, since

θ(a)(λ_1 v_1 + λ_2 v_2) = a(λ_1 v_1 + λ_2 v_2) = (aλ_1 v_1) + (aλ_2 v_2)
= λ_1(av_1) + λ_2(av_2)
= λ_1 θ(a)(v_1) + λ_2 θ(a)(v_2).

To show that it is an algebra homomorphism,

θ(ab)(v) = (ab)v = a(bv) = θ(a)[bv] = θ(a)[θ(b)(v)] = [θ(a) ◦ θ(b)](v)

which holds for all v ∈ V, hence θ(ab) = θ(a)θ(b). Similarly one checks that θ(1_A) = Id_V.

(b) It is straightforward to check the axioms for an A-module. For example,

(a_1 + a_2)v = σ(a_1 + a_2)(v) = [σ(a_1) + σ(a_2)](v) = σ(a_1)(v) + σ(a_2)(v) = a_1 v + a_2 v.

We leave the rest as an exercise.


2.1.2 Examples

(1) When A = K, then A-modules are the same as K-vector spaces.

(2) The ’natural module’. Assume A is a subalgebra of End_K(V). Then V is an A-module, where the action of A is just applying the linear maps to the vectors. That is,

(a, v) → a(v) (a ∈ A, v ∈ V).

This is the action where the representation is the inclusion map A ⊆ End_K(V). So V is an A-module.

Alternatively, one can check the module axioms: Let a, b ∈ A and v, w ∈ V; then

(i) (a + b)(v) = a(v) + b(v), by the deﬁnition of the sum of two maps;
(ii) a(v + w) = a(v) + a(w), since a is linear;
(iii) (ab)(v) = a(b(v)), since the multiplication in End_K(V) is deﬁned to be composition of maps;
(iv) 1_A(v) = v.

(3) The natural module also has a matrix version. Let A be a subalgebra of M_n(K), and let V := (K^n)^t, the space of column vectors. Then V is an A-module if one takes the action to be multiplication of the matrix with the column vector.
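The module axioms for this matrix version amount to familiar matrix identities; a minimal sketch (not from the text, with made-up matrices in M_2(R)):

```python
import numpy as np

# Two matrices from a subalgebra of M_2(R) (here: upper triangular
# matrices) acting on a column vector v.
a = np.array([[1.0, 2.0],
              [0.0, 3.0]])
b = np.array([[2.0, 0.0],
              [0.0, 1.0]])
v = np.array([1.0, 1.0])

# module axiom (i): (a + b)v = av + bv
assert np.allclose((a + b) @ v, a @ v + b @ v)
# module axiom (iii): (ab)v = a(bv)
assert np.allclose((a @ b) @ v, a @ (b @ v))
```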

(4) Permutation modules. Let A = KG, where G is a ﬁnite group. Suppose Ω is any G-set, and let

V = KΩ = span{b_ω : ω ∈ Ω}

as in 2.0.2(1). Deﬁne an action of A on KΩ by setting

v_g b_ω := b_{g(ω)}

and extending to linear combinations in A. This deﬁnes an A-module.

To see this, take the group representation in 2.0.2(1) and view it as a representation of KG (as in 2.4(b)), then use 2.12.

Alternatively, you can check the axioms.

(5) The ‘trivial module’. Let A = KG, where G is a ﬁnite group. The trivial module has underlying vector space K, and the action of A is deﬁned by

v_g x = x (x ∈ K, g ∈ G).

Take the trivial representation in 2.0.2(3), view it as a representation of KG, and use 2.12. Or else, check the axioms.


Lemma 2.13

Let A be an algebra and B = A/I, where I is an ideal of A. Then the B-modules are precisely the A-modules V on which I acts as zero, where the actions are related by

(a + I)v = av (a ∈ A, v ∈ V).

Proof

This is a reformulation of what we called ’inﬂation’. Apply 2.7 and use 2.12.

2.2 K[X]-modules

Take a vector space V and some linear transformation α : V → V. We have deﬁned the representation θ_α from A := K[X] to End_K(V) by θ_α(f) = f(α). This makes V into an A-module by setting

fv := f(α)(v) (f ∈ A, v ∈ V).

We denote this K[X]-module by V_α. Since every representation of A is of the form θ_α for some α, every K[X]-module is isomorphic to V_α for some α.

The following relates K[X]-modules with modules for factor algebras K[X]/I. This is important since for I ≠ 0, these factor algebras are ﬁnite-dimensional, and many ﬁnite-dimensional algebras occurring ’in nature’ are of this form.

Lemma 2.14

Let A = K[X]/(f), where f is some non-zero polynomial in K[X]. Then the A-modules can be viewed as the K[X]-modules V_α which satisfy f(α) = 0.

Proof

This is a special case of 2.13 with I = (f). Note that on V = V_α the polynomial f acts by fv = f(α)(v), so I maps V to zero if and only if f(α) = 0.

Note that we only change the point of view; we ’don’t do anything’ to the module. One advantage of modules as compared with representations is that this perspective naturally leads to new concepts. We introduce some of these now.
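As a small sketch (not from the text), Lemma 2.14 with f = X^3 says that modules for K[X]/(X^3) correspond to linear maps α with α^3 = 0; a nilpotent Jordan block is the standard example:

```python
import numpy as np

# alpha is a 3x3 Jordan block with eigenvalue 0, so alpha^3 = 0 and
# V_alpha = K^3 becomes a module for A = K[X]/(X^3).
alpha = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [0.0, 0.0, 0.0]])

# the ideal (X^3) acts as zero on V_alpha ...
assert not np.linalg.matrix_power(alpha, 3).any()
# ... but X itself does not act as zero
assert alpha.any()
```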


2.3 Submodules, factor modules

Deﬁnition 2.15

Let R be a ring and M some R-module. A submodule of M is a subgroup (U, +) which is

closed under the action of R, that is ru ∈ U for all r ∈ R and u ∈ U.

2.3.1 Examples

(1) The left ideals I of R are precisely the submodules of the R-module R.

(2) Suppose M_1 and M_2 are R-modules. Then the direct product

M_1 × M_2 := {(m_1, m_2) : m_i ∈ M_i}

is an R-module if one deﬁnes the action of R componentwise, that is,

r(m_1, m_2) := (rm_1, rm_2) (r ∈ R, m_i ∈ M_i).

(3) Consider the 2-dimensional R-algebra A_0 at the end of Chapter 1. The 1-dimensional subspace spanned by β is a submodule of A_0.

On the other hand, if you look at the algebra A_1 in the same section, then the subspace spanned by β is not a submodule. (But the space spanned by β − 1_A is a submodule.)

Exercise 2.2

Let A = A_1 × A_2, the product of two K-algebras. Suppose M is some A-module. Deﬁne

M_1 := {(1_{A_1}, 0)m : m ∈ M}, M_2 := {(0, 1_{A_2})m : m ∈ M}.

Show that M_1 and M_2 are submodules of M and that M = M_1 ⊕ M_2.

You might note that we have seen direct products of modules, and also direct sums. The products are needed to construct a new module from given ones which are not related. On the other hand, if we write M = M_1 ⊕ M_2, then we always implicitly mean that M is a given module and M_1, M_2 are submodules. [Some books distinguish these two constructions by calling them ’external’ and ’internal’ direct sum.]

Exercise 2.3

Let A = M_n(K), the algebra of n × n matrices over K, and consider the A-module A. We deﬁne C_i ⊂ A to be the set of matrices which are zero outside the i-th column. Show that C_i is a submodule of A, and that

A = C_1 ⊕ C_2 ⊕ . . . ⊕ C_n.


Suppose M is an R-module and U is a submodule; you know that the cosets M/U := {m + U : m ∈ M} form an abelian group. In this situation the set of cosets even has the structure of an R-module.

Deﬁnition 2.16

Let M be an R-module and U a submodule of M. Then the cosets M/U form an R-module if one deﬁnes

r(m + U) := rm + U, (r ∈ R, m ∈ M).

This is called the factor module.

One has to check that the action is well-deﬁned: If m + U = m′ + U, then m − m′ ∈ U, and then r(m − m′) ∈ U as well. But r(m − m′) = rm − rm′, and therefore rm + U = rm′ + U. The axioms are inherited from M.

Example 2.17

Let M = R as an R-module; then for any d ∈ R, M has a submodule I = Rd, and a factor module R/Rd. When R = Z, you will have seen these. In general, a module of the form Rd = {rd : r ∈ R} is said to be a cyclic R-module.

2.4 Module homomorphisms

We have said that a ’module’ is a generalization of a vector space where scalars are replaced

by elements in the ring. Accordingly, R-module homomorphisms are the analog of linear

maps of vector spaces.

Deﬁnition 2.18

Suppose R is a ring, and M and N are R-modules. A map φ : M → N is an R-module homomorphism if for all m_1, m_2 ∈ M and r ∈ R we have

(i) φ(m_1 + m_2) = φ(m_1) + φ(m_2); and
(ii) φ(rm) = rφ(m).

An isomorphism of R-modules is an R-module homomorphism which is also bijective. The set of all R-module homomorphisms from M to N is denoted by

Hom_R(M, N).

An R-module homomorphism from M to M is called an R-endomorphism of M, and the set of all R-endomorphisms of M is denoted by

End_R(M).


In the case when the ring is a K-algebra A, this deﬁnition also says that φ must be K-linear. Namely, we view λ ∈ K as an element of A by taking λ1_A, and then we have for λ, µ ∈ K that

φ(λm_1 + µm_2) = φ((λ1_A)m_1 + (µ1_A)m_2)
= (λ1_A)φ(m_1) + (µ1_A)φ(m_2)
= λφ(m_1) + µφ(m_2).

Exercise 2.4

Suppose V is an A-module, where A is a K-algebra. The set End_A(V) of all A-module homomorphisms V → V is, by what we just noted, a subset of End_K(V). Check that it is actually a subalgebra.

Example 2.19

Consider A = K[X]. The algebra is generated by X, so an A-module homomorphism between A-modules is just a linear map that commutes with the action of X. We have described the A-modules (see 2.2): let V_α and W_β be A-modules; then an A-module homomorphism is a linear map θ such that θ(Xv) = Xθ(v) (v ∈ V_α). On V_α, the element X acts by α, and on W_β, the action of X is given by β. So this means

θ(α(v)) = β(θ(v)) (v ∈ V_α).

This holds for all v, so we have

θ ◦ α = β ◦ θ.

In particular V_α ≅ W_β if and only if there is an invertible linear map θ such that

θ^{−1} ◦ β ◦ θ = α.
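The intertwining condition can be checked numerically (a sketch, not from the text; the matrices are made up):

```python
import numpy as np

# If beta = theta alpha theta^{-1} for an invertible theta, then theta
# is a K[X]-module isomorphism V_alpha -> W_beta: it intertwines the
# actions of X, and hence of every polynomial in X.
alpha = np.array([[1.0, 1.0],
                  [0.0, 2.0]])
theta = np.array([[1.0, 2.0],
                  [3.0, 4.0]])            # any invertible linear map
beta = theta @ alpha @ np.linalg.inv(theta)

# theta ∘ alpha = beta ∘ theta
assert np.allclose(theta @ alpha, beta @ theta)

# the same holds for a polynomial in X, e.g. f = X^2 + 3X:
f_alpha = alpha @ alpha + 3 * alpha
f_beta = beta @ beta + 3 * beta
assert np.allclose(theta @ f_alpha, f_beta @ theta)
```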

Exercise 2.5

Suppose A is a K-algebra, and assume V and W are A-modules. Show that V ≅ W as A-modules if and only if the corresponding representations are equivalent.

2.4.1 Some common module homomorphisms

(1) Suppose U is a submodule of an R-module M, then the ’canonical map’ π : M → M/U,

deﬁned by π(m) = m+U, is an R-module homomorphism.


(2) Suppose M is an R-module, and m ∈ M. Then we always have an R-module homomorphism φ : R → M, given by

φ(r) := rm (r ∈ R).

This is a very common homomorphism; perhaps we might call it a ’multiplication homomorphism’. There is a general version of this, which also is very common. Namely, suppose m_1, m_2, . . . , m_n are given elements in M. Now take the R-module R^n := R × R × . . . × R, and deﬁne

ψ : R^n → M, ψ(r_1, r_2, . . . , r_n) := r_1 m_1 + r_2 m_2 + . . . + r_n m_n

(r_1, . . . , r_n ∈ R). You should check that this is indeed an R-module homomorphism.

(3) Suppose M = M_1 × M_2, the direct product of two R-modules. Then the projection maps π_i onto the coordinates are R-module homomorphisms. Similarly, the inclusion maps

ι_1 : M_1 → M, ι_1(m_1) := (m_1, 0)

and similarly ι_2, are R-module homomorphisms. You should check this as well.

(4) Similarly, if M = U ⊕ V, the direct sum of two submodules U and V, then the projection maps and the inclusion maps are R-module homomorphisms.

Theorem 2.20 (Isomorphism theorems)

(a) Suppose φ : M → N is an R-module homomorphism. Then ker(φ) is a submodule of M and im(φ) is a submodule of N, and

M/ker(φ) ≅ im(φ).

(b) Suppose U, V are submodules of M; then so are U + V and U ∩ V, and

(U + V)/U ≅ V/(U ∩ V).

(c) Suppose U ⊆ V ⊆ M are submodules; then V/U is a submodule of M/U, and

(M/U)/(V/U) ≅ M/V.

Proof

(a) Since φ is in particular a homomorphism of the additive groups, we know that ker(φ) is a subgroup of M, so we just have to check that it is R-invariant. Let m ∈ ker(φ) and r ∈ R; then

φ(rm) = rφ(m) = r·0 = 0

and rm ∈ ker(φ). Similarly one checks that im(φ) is a submodule of N. The isomorphism theorem for abelian groups shows that the map

ψ : M/ker(φ) → im(φ), ψ(m + ker(φ)) = φ(m)

is well-deﬁned and is an isomorphism of abelian groups. One now checks that this map is in fact an R-module homomorphism.

Parts (b) and (c) hold for abelian groups, and one just has to check that the maps used in that case are also compatible with the action of R. For example, in (b), the general element of (U + V)/U can be written as v + U for v ∈ V. Then the map is deﬁned as

ψ : (U + V)/U → V/(U ∩ V), ψ(v + U) = v + U ∩ V.

If r ∈ R then

ψ(r(v + U)) = ψ((rv) + U) = rv + U ∩ V = r(v + U ∩ V) = rψ(v + U).

2.5 The submodule correspondence

Suppose M is an R-module and U is a submodule. Then there is an inclusion-preserving 1-1 correspondence between

(i) the submodules of M/U, and
(ii) the submodules of M that contain U.

Namely, given a submodule X of M/U, deﬁne

X̃ := {m ∈ M : m + U ∈ X}.

This is a submodule of M and it contains U:

(a) First, X̃ is a subgroup of M. It contains 0, and if x_1, x_2 ∈ X̃ then

(x_1 ± x_2) + U = (x_1 + U) ± (x_2 + U)

and this lies in X, since X is a subgroup of M/U.

(b) X̃ is a submodule: Let r ∈ R and x ∈ X̃; then

rx + U = r(x + U) ∈ X

since X is a submodule of M/U, and therefore rx ∈ X̃.

Conversely, given a submodule V of M such that U ⊆ V, the set V/U := {v + U : v ∈ V} is a submodule of M/U, as we have seen. We leave it as an exercise to show that these correspondences preserve inclusion.

To get the 1-1 correspondence, we must check that X̃/U = X, and that the submodule of M corresponding to V/U is again V. First,

X̃/U = {x + U : x ∈ X̃} = {x + U : x + U ∈ X} = X.

Second, we have

{x ∈ M : x + U ∈ V/U} = {x ∈ M : x + U = v + U for some v ∈ V}.

Now x + U = v + U if and only if x − v ∈ U, that is, x − v = u for some u ∈ U. But U ⊆ V by assumption, so then x = v + u ∈ V; conversely, if x ∈ V then x + U ∈ V/U. Hence {x ∈ M : x + U ∈ V/U} = V.


2.6 Tensor products

This is not part of the B2 syllabus.

Deﬁnition 2.21

Suppose V and W are vector spaces over some ﬁeld K, with bases v_1, . . . , v_m and w_1, . . . , w_n respectively. For each i, j with 1 ≤ i ≤ m and 1 ≤ j ≤ n, we introduce a symbol v_i ⊗ w_j. The tensor product space V ⊗ W is deﬁned to be the mn-dimensional vector space over K with a basis given by

{v_i ⊗ w_j : 1 ≤ i ≤ m, 1 ≤ j ≤ n}.

Thus V ⊗ W consists of all expressions of the form

Σ_{i,j} λ_{i,j} (v_i ⊗ w_j) (λ_{i,j} ∈ K).

For v ∈ V and w ∈ W with v = Σ_{i=1}^m λ_i v_i and w = Σ_{j=1}^n µ_j w_j (with λ_i, µ_j ∈ K) we deﬁne v ⊗ w by

v ⊗ w := Σ_{i,j} λ_i µ_j (v_i ⊗ w_j).

For example,

(2v_1 − v_2) ⊗ (w_1 + w_2) = 2(v_1 ⊗ w_1) − v_2 ⊗ w_1 + 2(v_1 ⊗ w_2) − (v_2 ⊗ w_2).

Note that not every element of V ⊗ W is of the form v ⊗ w.
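In coordinates, this recipe is the Kronecker product of the coefficient vectors; a sketch (not from the text, with the basis ordered v_1⊗w_1, v_1⊗w_2, v_2⊗w_1, v_2⊗w_2):

```python
import numpy as np

# Coefficients of v = 2*v1 - v2 and w = w1 + w2 on the chosen bases.
v = np.array([2.0, -1.0])
w = np.array([1.0, 1.0])

# np.kron gives the coefficients of v ⊗ w on the basis
# (v1⊗w1, v1⊗w2, v2⊗w1, v2⊗w2), matching the example above.
coeffs = np.kron(v, w)
assert np.array_equal(coeffs, np.array([2.0, 2.0, -1.0, -1.0]))
```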

Exercise 2.6

Show that v_1 ⊗ w_1 + v_2 ⊗ w_2 cannot be expressed in the form v ⊗ w for v ∈ V and w ∈ W.

Proposition 2.22

If e_1, . . . , e_m is any basis of V and f_1, . . . , f_n is any basis of W, then

{e_i ⊗ f_j : 1 ≤ i ≤ m, 1 ≤ j ≤ n}

is a basis for V ⊗ W.

Proof

Write v_i = Σ_{k=1}^m c_{ki} e_k and w_j = Σ_{l=1}^n d_{lj} f_l with c_{ki}, d_{lj} ∈ K. Then

v_i ⊗ w_j = Σ_{k,l} c_{ki} d_{lj} (e_k ⊗ f_l).

This shows that the mn elements e_i ⊗ f_j span V ⊗ W. But dim(V ⊗ W) = mn, and therefore they form a basis.


Now suppose G is a group, and ρ_V : G → GL(V) and ρ_W : G → GL(W) are representations of G. Then we have a representation of G on V ⊗ W. In the following we use the bases of V, W and V ⊗ W as above.

Proposition 2.23

For g ∈ G, deﬁne ρ : G → GL(V ⊗ W) by

ρ(g)(v_i ⊗ w_j) := ρ_V(g)(v_i) ⊗ ρ_W(g)(w_j).

Then ρ is a representation of G.

Proof

We must show that ρ(gh) = ρ(g) ◦ ρ(h) for all g, h ∈ G. One way is ﬁrst to check that for all v ∈ V and w ∈ W we have

ρ(g)(v ⊗ w) = ρ_V(g)(v) ⊗ ρ_W(g)(w).

When this is done, we get

ρ(gh)(v_i ⊗ w_j) = ρ_V(gh)(v_i) ⊗ ρ_W(gh)(w_j)
= ρ_V(g)[ρ_V(h)(v_i)] ⊗ ρ_W(g)[ρ_W(h)(w_j)]
= ρ(g)[ρ_V(h)(v_i) ⊗ ρ_W(h)(w_j)]
= ρ(g)[ρ(h)(v_i ⊗ w_j)].
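In matrix form, ρ(g) on V ⊗ W is the Kronecker product of the matrices of ρ_V(g) and ρ_W(g), and the multiplicativity in the proof is the mixed-product rule (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD); a sketch (not from the text, with made-up matrices):

```python
import numpy as np

A = np.array([[0, -1], [1, 0]])   # matrix of rho_V(g)
B = np.array([[1, 0], [0, -1]])   # matrix of rho_W(g)
C = np.array([[0, 1], [1, 0]])    # matrix of rho_V(h)
D = np.array([[2, 0], [0, 3]])    # matrix of rho_W(h)

lhs = np.kron(A, B) @ np.kron(C, D)   # rho(g) rho(h)
rhs = np.kron(A @ C, B @ D)           # rho(gh)
assert np.array_equal(lhs, rhs)
```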

EXERCISES

2.7. Suppose M is an A-module with submodules U, V and W.

(a) Check that U + V and U ∩ V are submodules of M. Show by means of an example that it is not in general the case that U ∩ (V + W) = (U ∩ V) + (U ∩ W). [Try A = R, and U, V, W subspaces of R^2.]

(b) Show that U ∖ V is never a submodule. Show also that U ∪ V is a submodule if and only if U ⊆ V or V ⊆ U.

2.8. Suppose M = U × V, the direct product of A-modules U and V. Check that Ũ := {(u, 0) : u ∈ U} is a submodule of M, isomorphic to U. Write down a similar submodule Ṽ of M isomorphic to V, and show that M = Ũ ⊕ Ṽ, the direct sum of submodules.

2.9. Let A = KG be the group algebra, where G is a ﬁnite group. The trivial A-module is deﬁned to be the 1-dimensional module with underlying space K, with action

v_g x = x (g ∈ G, x ∈ K).

Show that the corresponding representation ρ : A → End_K(K) satisﬁes ρ(v_g) = Id_K for all g ∈ G. Check that this is indeed a representation.

2.10. Let Ω be a transitive G-set, and let V = KΩ be the corresponding permutation module. Let ζ := Σ_{ω∈Ω} b_ω. Show that v_g ζ = ζ for all g ∈ G, and deduce that Kζ is a submodule of V and that it is isomorphic to the trivial A-module.

2.11. (Continuation) Show also that Kζ is the unique submodule of V which is isomorphic to the trivial module. Is this still true when Ω is not transitive?

2.12. Let A = K[X]/(X^n), and let V = A, as an A-module. By applying the submodule correspondence, or otherwise, ﬁnd all submodules of V. Deduce that if V_1 and V_2 are submodules, then either V_1 ⊆ V_2, or V_2 ⊆ V_1.

2.13. Let A = CG be the group algebra of the dihedral group of order 10,

G = ⟨σ, τ : σ^5 = 1, τ^2 = 1, τστ^{−1} = σ^{−1}⟩.

Suppose ω is some 5-th root of 1. Show that the matrices

ρ(σ) =
[ ω  0      ]
[ 0  ω^{−1} ],

ρ(τ) =
[ 0 1 ]
[ 1 0 ]

satisfy the deﬁning relations for G, and hence give rise to a group representation ρ : G → GL(2, C), and an A-module.

2.14. (Continuation) Let G and ρ be as above, and view ρ : G → GL(V), where V = C^2. Consider the tensor product V ⊗ V as a G-module. Does this have a 1-dimensional submodule? That is, does there exist some ζ = Σ c_{ij}(v_i ⊗ v_j) ∈ V ⊗ V which is a common eigenvector for all group elements?

3

The Jordan-Hölder Theorem

Let A be a ﬁnite-dimensional K-algebra. We have seen that every A-module V is also a

K-vector space. This allows us to apply results from linear algebra to A-modules.

Deﬁnition 3.1

Suppose V is an A-module. Then V is simple (or irreducible) if V is non-zero, and if it does not have any submodules other than 0 and V.

For example, take a module V such that dim_K(V) = 1; then V must be simple. It does not have any subspace except 0 and V, and therefore it cannot have a submodule except 0 or V. The converse is not true: simple modules can have arbitrarily large dimension, or can even be inﬁnite-dimensional (see an exercise).

Example 3.2

Let A = M_n(K), and take V to be the natural module, the space of column vectors V = (K^n)^t.

We claim that V is simple: We have to show that if U is a non-zero submodule of V then actually U = V. So take such a U, and take a non-zero element u ∈ U, say

u = (x_1, x_2, . . . , x_n)^t.

The algebra A contains all ’matrix units’ E_{ij}, and one checks that E_{ij}u has x_j in the i-th coordinate, and all other coordinates are zero.


Since u ≠ 0, for some j we know that x_j is non-zero. So for this value of j, (x_j^{−1})E_{ij}u is the basis vector ε_i of V. But (x_j^{−1})E_{ij} lies in A, and therefore ε_i ∈ U. But i is arbitrary, so U contains a basis of V and therefore U = V.

The method we used to show that V is simple is more general.

If m ∈ V, where V is some module, let Am := {am : a ∈ A}. This is a submodule of V; you should check this.
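The matrix-unit computation in Example 3.2 can be sketched numerically (an illustration, not from the text; the vector u is made up):

```python
import numpy as np

# For A = M_3(K): the matrix unit E_ij picks out the j-th coordinate
# of a column vector u and places it in the i-th coordinate.
def E(i, j, n=3):
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

u = np.array([0.0, 5.0, 7.0])   # a non-zero column vector

# E_{0,1} u has u[1] in coordinate 0, zeros elsewhere
assert np.array_equal(E(0, 1) @ u, np.array([5.0, 0.0, 0.0]))

# rescaling by u[1]^{-1} produces the standard basis vector e_0,
# so Au contains every standard basis vector, i.e. Au = V
assert np.array_equal((1 / u[1]) * (E(0, 1) @ u), np.array([1.0, 0.0, 0.0]))
```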

Lemma 3.3

Let V be an A-module. Then V is simple if and only if for each 0 ≠ m ∈ V we have Am = V.

Proof

⇒ Suppose V is simple, and take 0 ≠ m ∈ V. We know that Am is a submodule, and it contains m (= 1_A m), so Am is non-zero and therefore Am = V.

⇐ Suppose 0 ≠ U is a submodule of V; then there is some non-zero m ∈ U. Since U is a submodule, we have Am ⊆ U, but by the hypothesis

V = Am ⊆ U ⊆ V

and hence U = V.

Example 3.4

Let A = RG, where G is the symmetry group of the square; see Chapter 2. We have seen there is a representation ρ : G → GL_2(R) such that

ρ(σ) =
[ 0 −1 ]
[ 1  0 ],

ρ(τ) =
[ 1  0 ]
[ 0 −1 ].

The corresponding A-module is V = R^2, and for g ∈ G, the basis element v_g acts on V through

v_g (x_1, x_2)^t = ρ(g)(x_1, x_2)^t.

We claim that V is simple. Suppose, for a contradiction, that V has a submodule 0 ≠ U ⊂ V with U ≠ V. Then U is 1-dimensional, say U is spanned by u. But then v_σ u = λu for some λ ∈ R, which means that u ∈ R^2 is an eigenvector of ρ(σ). But the matrix ρ(σ) does not have a real eigenvalue, a contradiction.
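The eigenvalue claim is quickly verified (a sketch, not from the text):

```python
import numpy as np

# The rotation matrix rho(sigma) has characteristic polynomial X^2 + 1,
# so its eigenvalues are ±i: no real eigenvalue, hence no 1-dimensional
# invariant subspace of R^2.
S = np.array([[0.0, -1.0],
              [1.0,  0.0]])
eigenvalues = np.linalg.eigvals(S)

assert np.allclose(sorted(eigenvalues, key=lambda z: z.imag), [-1j, 1j])
assert all(abs(ev.imag) > 0.5 for ev in eigenvalues)   # none is real
```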

We will need to understand also when a factor module is simple. This is done by using

the submodule correspondence.


Lemma 3.5

Suppose V is an A-module and U is a submodule of V. Then the module V/U is simple ⇐⇒ U is a maximal submodule of V. [That is, if U ⊆ W ⊆ V then W = U or W = V.]

Proof

Apply the submodule correspondence.

Deﬁnition 3.6

Suppose V is an A-module. A composition series of V is a ﬁnite chain of submodules

0 = V_0 ⊂ V_1 ⊂ V_2 ⊂ . . . ⊂ V_n = V

such that the factor modules V_i/V_{i−1} are simple, for 1 ≤ i ≤ n. The length of the composition series is n, the number of quotients.

3.1 Examples

(1) If V is simple then 0 = V_0 ⊂ V_1 = V is a composition series.

(2) Given a composition series as in the deﬁnition, if V_k is one of the terms, then V_k ‘inherits’ the composition series

0 = V_0 ⊂ V_1 ⊂ . . . ⊂ V_k.

(3) Let K = R and take A to be the 2-dimensional algebra over R with basis {1_A, β} such that β^2 = 0 (see ??). Take V := A. If V_1 is the space spanned by β, then V_1 is a submodule. [It is a subspace, and it is invariant under the action of the basis of A.] Since V_1 and V/V_1 are 1-dimensional, they are simple. Hence V has composition series

0 = V_0 ⊂ V_1 ⊂ V_2 = V.

(4) Let A = M_n(K) and V = A. In Exercise ?? we have seen that A = C_1 ⊕ C_2 ⊕ . . . ⊕ C_n, a direct sum of simple A-modules. So we have a ﬁnite chain of submodules

0 ⊂ C_1 ⊂ C_1 ⊕ C_2 ⊂ . . . ⊂ C_1 ⊕ . . . ⊕ C_{n−1} ⊂ A.

Each factor module is simple: By the isomorphism theorem,

(C_1 ⊕ . . . ⊕ C_k)/(C_1 ⊕ . . . ⊕ C_{k−1}) ≅ C_k/(C_k ∩ (C_1 ⊕ . . . ⊕ C_{k−1})) = C_k/{0} = C_k.

So this chain is a composition series.


Lemma 3.7

Assume V is a ﬁnite-dimensional A-module. Then V has a composition series.

Proof

This is proved by induction on dim V. If dim V = 1 then V is simple, and we are done by 3.1(1).

So assume now that dim V > 1. If V is simple then, by 3.1(1), V has a composition series. Otherwise, V has proper submodules. Take a proper submodule U ⊂ V of largest possible dimension. Then V/U must be simple, by the submodule correspondence. Since dim U < dim V, we can apply the inductive hypothesis. So U has a composition series, say

0 = U_0 ⊂ U_1 ⊂ U_2 ⊂ . . . ⊂ U_k = U.

This gives us a composition series of V,

0 = U_0 ⊂ U_1 ⊂ U_2 ⊂ . . . ⊂ U_k = U ⊂ V.

In general, a module can have many composition series (we will see examples). The Jordan-Hölder Theorem shows that any two composition series have the same length, and the same factors up to isomorphism, counted with multiplicities:

Theorem 3.8 (Jordan-Hölder Theorem)

Suppose V has two composition series

(I) 0 ⊂ V_1 ⊂ V_2 ⊂ . . . ⊂ V_{n−1} ⊂ V_n = V
(II) 0 ⊂ W_1 ⊂ W_2 ⊂ . . . ⊂ W_{m−1} ⊂ W_m = V.

Then n = m, and there is a permutation σ of {1, 2, . . . , n} such that V_i/V_{i−1} ≅ W_{σ(i)}/W_{σ(i)−1} for each i.

The simple factor modules V_i/V_{i−1} are called the composition factors of V. By this theorem, they only depend on V, and not on the composition series.

Example 3.9

Let A = M_n(K) and V = A. With the notation as in Chapter 2.1, V has submodules

V_1 := C_1, V_2 := C_1 ⊕ C_2, . . . , V_i := C_1 ⊕ C_2 ⊕ . . . ⊕ C_i.


These submodules form a series

0 = V_0 ⊂ V_1 ⊂ . . . ⊂ V_{n−1} ⊂ V_n = V.

This is a composition series, as we have seen. The module V also has submodules

W_1 := C_n, W_2 := C_n ⊕ C_{n−1}, . . . , W_j := C_n ⊕ C_{n−1} ⊕ . . . ⊕ C_{n−j+1},

and this gives us a series of submodules

0 = W_0 ⊂ W_1 ⊂ . . . ⊂ W_{n−1} ⊂ W_n = V.

This also is a composition series, since W_j/W_{j−1} ≅ C_{n−j+1}, which is simple. Both composition series have length n, and if we take the permutation

σ = (1 n)(2 n−1) . . .

then W_{σ(i)}/W_{σ(i)−1} ≅ C_i ≅ V_i/V_{i−1}.

For the proof of the Jordan-Hölder Theorem, we need to compare two given composition series. The case when V_{n−1} is diﬀerent from W_{m−1} makes more work, and we will use the following.

Lemma 3.10

With the notation as in the Jordan-Hölder Theorem, suppose V_{n−1} ≠ W_{m−1}. Let D := V_{n−1} ∩ W_{m−1}. Then

(a) V_{n−1}/D ≅ V/W_{m−1}, and hence it is simple;
(b) W_{m−1}/D ≅ V/V_{n−1}, and hence it is simple.

Proof

We ﬁrst show that V_{n−1} + W_{m−1} = V.

We have V_{n−1} ⊆ V_{n−1} + W_{m−1} ⊆ V, and since V/V_{n−1} is simple, V_{n−1} is a maximal submodule of V. So either V_{n−1} + W_{m−1} = V, or V_{n−1} + W_{m−1} = V_{n−1}.

Assume (for a contradiction) that V_{n−1} + W_{m−1} = V_{n−1}; then we have

W_{m−1} ⊆ V_{n−1} + W_{m−1} ⊆ V_{n−1} ⊂ V.

But W_{m−1} also is a maximal submodule of V, therefore W_{m−1} = V_{n−1}, a contradiction to the hypothesis. Therefore V_{n−1} + W_{m−1} = V, as stated.

Now we apply an isomorphism theorem and get

V/W_{m−1} = (V_{n−1} + W_{m−1})/W_{m−1} ≅ V_{n−1}/(V_{n−1} ∩ W_{m−1}) = V_{n−1}/D.

Similarly one shows that V/V_{n−1} ≅ W_{m−1}/D.

Proof (of the Jordan-Hölder Theorem)

Given two composition series (I) and (II) as above, we say that they are equivalent provided n = m and there is a permutation σ ∈ Sym(n) such that V_i/V_{i−1} ≅ W_{σ(i)}/W_{σ(i)−1}. In this proof we will abbreviate 'composition series' by CS.

We use induction on n. Assume first n = 1. Then V is simple, so W_1 = V (since there is no non-zero submodule except V); and m = 1.

Now suppose n > 1. The inductive hypothesis is that the theorem holds for modules which have a composition series of length ≤ n − 1.

(a) Assume first that V_{n−1} = W_{m−1} =: U, say. Then the module U inherits a CS of length n − 1 from (I). By the inductive hypothesis, any two composition series of U have length n − 1. So the composition series of U inherited from (II) also has length n − 1, and therefore m − 1 = n − 1 and m = n. Moreover, by the inductive hypothesis, there is a permutation σ of n − 1 letters such that V_i/V_{i−1} ≅ W_{σ(i)}/W_{σ(i)−1}. We also have V_n/V_{n−1} = W_n/W_{n−1}. So if we view σ as an element of Sym(n) fixing n, then we have the required permutation.

(b) Now assume V_{n−1} ≠ W_{m−1}, and define D := V_{n−1} ∩ W_{m−1}. Take a composition series of D, say

0 = D_0 ⊂ D_1 ⊂ ... ⊂ D_t = D.

Then V has composition series

(III) 0 = D_0 ⊂ D_1 ⊂ ... ⊂ D_t = D ⊂ V_{n−1} ⊂ V

(IV) 0 = D_0 ⊂ D_1 ⊂ ... ⊂ D_t = D ⊂ W_{m−1} ⊂ V

since, by 3.10, the quotients V_{n−1}/D and W_{m−1}/D are simple. Moreover, by Lemma 3.10, we know that (III) and (IV) are equivalent: take the permutation σ = (t+1 t+2).

Next, we claim that m = n. The module V_{n−1} inherits a CS of length n − 1 from (I). So by the inductive hypothesis, all CS of V_{n−1} have length n − 1. But the CS which is inherited from (III) has length t + 1, and hence n − 1 = t + 1. Now, the module W_{m−1} inherits from (IV) a composition series of length t + 1 = n − 1, so by the inductive hypothesis all CS of W_{m−1} have length n − 1. In particular the CS inherited from (II) does, and therefore m − 1 = n − 1 and m = n.

The series (I) and (III) are equivalent: By the inductive hypothesis, there is a permutation of n − 1 letters, γ say, such that

D_i/D_{i−1} ≅ V_{γ(i)}/V_{γ(i)−1}  (i ≠ n − 1),  and  V_{n−1}/D ≅ V_{γ(n−1)}/V_{γ(n−1)−1}.


We view γ as a permutation of n letters, and then also V/V_{n−1} = V_n/V_{n−1} ≅ V_{γ(n)}/V_{γ(n)−1}, which proves the claim.

Similarly one shows that (II) and (IV) are equivalent. We have already seen that (III)

and (IV) are equivalent as well, and it follows that (I) and (II) are equivalent. This completes

the proof.

Lemma 3.11

Suppose V is a ﬁnite-dimensional A-module, and N is a submodule of V . Then there is a

composition series of V in which N is one of the terms.

Proof

Take a composition series of N, say

0 = N_0 ⊂ N_1 ⊂ ... ⊂ N_t = N.

Now take a composition series of V/N. By the Submodule Correspondence, we can write such a composition series as

0 = U_0/N ⊂ U_1/N ⊂ ... ⊂ U_s/N = V/N

since any submodule of V/N is of the form U/N where U is a submodule of V containing N. Moreover, by the submodule correspondence, U_i/N ⊆ U_{i+1}/N if and only if U_i ⊆ U_{i+1}. So we have U_0 = N and U_s = V, and we get a series of submodules of V

(∗) 0 = N_0 ⊂ N_1 ⊂ ... ⊂ N_t = N ⊂ U_1 ⊂ U_2 ⊂ ... ⊂ U_s = V.

We know that N_i/N_{i−1} is simple. Moreover, by an isomorphism theorem we have

U_i/U_{i−1} ≅ (U_i/N)/(U_{i−1}/N)

which is simple. This proves that (∗) is a composition series of V.

3.2 Examples

(1) Let A = M_n(K) and V = A. We have constructed a composition series in 3.9, and we have seen that any two composition factors of A are isomorphic to the natural module.

(2) This example shows that non-isomorphic composition factors can occur.

Let K = R and A = R × C, the direct product of R-algebras. Let S_1 := {(r, 0) : r ∈ R} ⊂ A; this is a left ideal of A and therefore a submodule. Let also S_2 := {(0, z) : z ∈ C} ⊂ A; this also is a left ideal of A and therefore a submodule.


Consider the series

(∗) 0 ⊂ S_1 ⊂ A.

We claim that A/S_1 ≅ S_2. Define ψ : A → S_2 to be the projection onto the second coordinate. By ??, this is an A-module homomorphism, and it is clearly onto, and it has kernel S_1.

To show that (∗) is a composition series, we must verify that S_1 and S_2 are simple. This is clear for S_1 since it is 1-dimensional. To prove it for S_2 we apply 3.3.

Take 0 ≠ (0, w) ∈ S_2; we must show that the submodule A(0, w) generated by (0, w) is equal to S_2.

Since w is a non-zero complex number, (0, w^{−1}) lies in A, and therefore (0, w^{−1})(0, w) = (0, 1) is contained in the submodule generated by (0, w); it follows that this submodule is S_2.
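This computation is easy to replay by machine. A small sketch (my own illustration, not from the text), modelling elements of A = R × C as Python pairs with componentwise multiplication:

```python
# Elements of A = R x C as pairs (r, z); multiplication is componentwise.
def mul(p, q):
    return (p[0] * q[0], p[1] * q[1])

w = 3 - 4j                 # any non-zero complex number
gen = (0.0, w)             # a non-zero element of S_2
inv = (0.0, 1 / w)         # (0, w^{-1}) also lies in A
one_s2 = mul(inv, gen)     # (0, w^{-1}) * (0, w) = (0, 1)

assert one_s2[0] == 0.0
assert abs(one_s2[1] - 1) < 1e-12   # A(0, w) contains (0, 1), hence all of S_2
```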

Exercise 3.1

Let A = T_2(K), the algebra of upper triangular 2 × 2 matrices. Find a composition series of the A-module A. Verify that non-isomorphic composition factors occur.

For a finite-dimensional algebra A, the composition series of V = A are very important, because they actually give information on all simple A-modules. We will show this now; it is based on the following:

Lemma 3.12

Suppose S is a simple A-module, so that S = As for some non-zero s ∈ S. Let

I := Ann(s) = {a ∈ A : as = 0}.

Then I is a left ideal of A, and S ≅ A/I as A-modules.

Proof

Define a map

ψ : A → S, ψ(a) := as.

This is an A-module homomorphism, by ??. It is clearly onto, and hence by the Isomorphism Theorem we have

A/Ker(ψ) ≅ Im(ψ) = S.

By definition, the kernel of ψ is Ann(s). In particular it is a left ideal, by the Isomorphism Theorem.

Corollary 3.13

Let A be a ﬁnite-dimensional algebra. Then every simple A-module occurs as a composition

factor of A, up to isomorphism. Hence there are only ﬁnitely many simple A-modules (up

to isomorphism).


Proof

By Lemma 3.12 we know that if S is a simple A-module then S ≅ A/I for some left ideal I. Now, I is then a submodule, so by 3.11 there is some composition series of A in which I is one of the terms. So A/I is a composition factor of A.

Example 3.14

Let A = M_n(C); this has a unique simple module, up to isomorphism, namely the natural module (C^n)^t of column vectors. This follows from 3.1 and 3.13.

3.3 Simple A_1 × A_2-modules

Let A = A_1 × A_2, the direct product of two K-algebras. Recall from 1.14 that A_1 and A_2 are factor algebras of A (taking the projections). Recall that we can 'inflate' modules from factor algebras to the large algebra, see 2.7. So we inflate the simple modules for A_1 and A_2, and we get A-modules. These inflations are still simple as A-modules, roughly speaking since we 'don't do anything'. But you should check this.

Now consider a simple A-module S. We apply exercise ??; this shows that S = S_1 ⊕ S_2 where S_i is a module for the algebra A_i. But S is simple, so S = S_1 and S_2 = 0, or S = S_2 and S_1 = 0. Say S = S_1; then from the definition of S_1 in exercise ?? we see that elements of the ideal {0} × A_2 of A annihilate S_1. This shows that S_1 really is the inflation of a module for A_1, and it is still simple as such. Hence every simple A-module can be viewed as a module for A_1 or for A_2 (not both). We have now proved the following.

Lemma 3.15

The simple A-modules are precisely the inflations of the simple A_1-modules, together with the inflations of the simple A_2-modules to A.

Example 3.16

Let A = A_1 × A_2 where A_1 = M_2(K) and A_2 = M_3(K). By 3.14, the natural 2-dimensional module of column vectors is the only simple A_1-module, up to isomorphism, and similarly the natural 3-dimensional module is the only simple A_2-module, up to isomorphism. By the lemma, the algebra A has precisely two simple modules, up to isomorphism. The action on the 2-dimensional module is explicitly

(a_1, a_2)v = a_1 v   (v ∈ (K^2)^t, a_i ∈ A_i).


Remark 3.17

Let R be any ring. Then the deﬁnition of ’simple’ also makes sense for R-modules, and

in the deﬁnition of simple modules, we could equally well have taken R instead of A. For

general rings, the concept of ’simple’ modules is far less important, even when the module

in question is ’small’, such as generated by one element.

For example, take R = Z and M = R. This does not have a simple submodule. Namely, any non-zero submodule U of M is a left ideal, hence is of the form U = Za for some 0 ≠ a ∈ Z. Then for example Z(2a) is a proper non-zero submodule of U, so U is not simple.

EXERCISES

3.2. Find a composition series for the 3-subspace algebra.

3.3. This extends example 3.4. Let A = CG where G is the group of symmetries of the square. Let V = C^2; this is an A-module if we take the representation as in example 3.4 (or example 3.2). Show that V is simple.

3.4. Suppose V and W are A-modules and φ : V → W is an A-module isomorphism.

(a) Show that V is simple if and only if W is simple.

(b) Suppose 0 = V_0 ⊂ V_1 ⊂ ... ⊂ V_n = V is a composition series of V. Show that then

0 ⊂ φ(V_1) ⊂ ... ⊂ φ(V_n) = W

is a composition series of W.

3.5. Suppose M is an A-module and that U and V are maximal submodules of M. Suppose U ≠ V; show that then U + V = M.

3.6. Let A be the (3-dimensional) algebra of all upper triangular 2 × 2 matrices over a field K. Find a composition series of the A-module A. Show that A has precisely two simple modules, up to isomorphism.

3.7. Let A be the matrix ring

A = ( C  C
      0  R ).

[That is, A is the subring of M_2(C) of upper triangular matrices with (2,2)-entry in R.] Show that A is an R-algebra. What is its dimension (over R)? Let

I = ( 0  C
      0  0 ).

(a) Show that I is a simple left ideal of A. It is also a right ideal. Is I simple as a right ideal?

(b) Show that A/I is isomorphic to C ⊕ R, as R-algebras.


[A simple left ideal of A is a left ideal I such that there are no left ideals J of A such that 0 ≠ J ⊂ I and J ≠ I.]

3.8. Let V be a 2-dimensional vector space over K, and let A be a subalgebra of End_K(V). Recall that V is then an A-module (by applying linear transformations to vectors). Show that V is not simple as an A-module if and only if there is some 0 ≠ v ∈ V which is an eigenvector for all α ∈ A.

3.9. Let A be the R-algebra in question 3.7. Find a composition series of the A-module

A. Find also all simple A-modules (up to isomorphism).

3.10. Let A = K[X]/I where I = (f(X)).

(a) Let f(X) = X^4 − 1 and K = R. Find all simple A-modules (up to isomorphism).

(b) Let f(X) = X^3 − 2 and K = Q. Find all simple A-modules (up to isomorphism).

3.11. Let V be an A-module where A is a finite-dimensional algebra, and let M and N be maximal submodules of V such that M ≠ N. Prove that

(i) M + N = V, and

(ii) V/M ≅ N/(M ∩ N) and V/N ≅ M/(M ∩ N).

Suppose now that M ∩ N = 0. Deduce that then M and N are simple, and hence that V has two composition series

0 ⊂ M ⊂ V, and 0 ⊂ N ⊂ V.

Write down the permutation as in the Jordan-Hölder Theorem.

3.12. Let A = K[X]/(X^n) and V = A. Show that V has a unique composition series, which has length n. [You might use 2.12.]

3.13. Find the simple modules for the algebra A = K[X]/(X^2) × K[X]/(X^3).

).

3.14. Let A = M_2(R), and V = A. The following will show that A has infinitely many different composition series.

(a) Let e ∈ A be a projection, that is, e^2 = e. Show that then Ae := {ae : a ∈ A} is a submodule of A. Show that if e ≠ 0, 1 then

0 ⊂ Ae ⊂ A

is a composition series of A. [You may apply the Jordan-Hölder Theorem.]

(b) For λ ∈ R, check that e_λ is a projection, where

e_λ = ( 1  λ
        0  0 ).

(c) Show that for λ ≠ µ, the modules Ae_λ and Ae_µ are distinct. Hence deduce that V has infinitely many different composition series.


3.15. Suppose A = CG where G is the dihedral group of order 10, as in Exercise 2.13. Suppose V is a simple A-module.

(a) Prove that dim V ≤ 2. [Show that if w is an eigenvector of the linear map x → v_σ x, then so is v_τ w, and Span{w, v_τ w} is an A-submodule of V.]

(b) Show that if dim V = 1, then v_τ has eigenvalue ±1, and v_σ^2 has eigenvalue 1 on V. Hence find all 1-dimensional simple A-modules.

3.16. Let V be any vector space over K, and let A = End_K(V). Show that V is a simple A-module. [The interesting case is when V is infinite-dimensional.]

4

Simple and semisimple modules, semisimple

algebras

Let A be a finite-dimensional K-algebra. The Jordan-Hölder Theorem shows that simple modules are 'building blocks' for arbitrary finite-dimensional A-modules. So it is important to understand simple modules. The first question one might ask is: given two simple A-modules, how can we find out whether or not they are isomorphic? This is answered by Schur's lemma, which we will now present. In fact Schur's lemma has many applications (and we'll give a few).

Lemma 4.1 (Schur’s Lemma)

Suppose S and T are simple A-modules and φ : S −→ T is an A-module homomorphism.

Then either φ = 0, or φ is an isomorphism.

Suppose S = T and K = C. If dim S < ∞ then φ = λ Id_S for some scalar λ ∈ C.

Proof

Suppose φ is non-zero. The kernel ker(φ) is an A-submodule of S, but S is simple. Since φ ≠ 0, ker(φ) ≠ S. So ker(φ) = 0 and φ is 1-1.

The image im(φ) is a submodule of T, and T is simple. Since φ ≠ 0, we know im(φ) ≠ 0 and therefore im(φ) = T. So φ is onto, and we have proved that φ is an isomorphism.

For the last part, we know that over C, φ has an eigenvalue, λ say. That is, there is some non-zero v ∈ S such that φ(v) = λv. The map λ Id_S is also an A-module homomorphism, and so is φ − λ Id_S. The kernel of φ − λ Id_S is a submodule and is non-zero (it contains v). It follows that ker(φ − λ Id_S) = S, so that we have φ = λ Id_S.


This is very general; in the first part S and T need not be finite-dimensional. It has many applications. One is that, when A is a C-algebra, elements in the centre of the algebra act as scalars on simple modules.

The centre of A is defined to be

Z(A) := {z ∈ A : za = az for all a ∈ A}.

Lemma 4.2

Let A be a C-algebra, and S a simple A-module. If z ∈ Z(A) then there is some λ = λ_z ∈ C such that zx = λ_z x for all x ∈ S.

Proof

The linear map ρ : S → S deﬁned by ρ(s) = zs is an A-module homomorphism (this is easy

to check). But S is simple, and A is an algebra over C. So by Schur’s Lemma there is some

λ ∈ C such that ρ(s) = λs for all s ∈ S, that is zs = λs.

Corollary 4.3

Assume A is a commutative algebra over C. Then every simple A-module is 1-dimensional.

Proof

Let S be a simple A-module. We have A = Z(A), so by the previous lemma, every a ∈ A acts as scalar multiplication on S. Take 0 ≠ v ∈ S; then for every a ∈ A, av belongs to the span of v. So the span of v is a non-zero submodule, so it must be equal to S.

The assumption that the field is C is important. For example, consider the 2-dimensional algebra A over R with basis {1_A, β} where β^2 = −1_A. Take V = A; this is a simple A-module: Suppose, for a contradiction, that V has a non-trivial submodule; this must be 1-dimensional, say it is spanned by some 0 ≠ v ∈ A. Then βv is a scalar multiple of v, that is, v is an eigenvector of β. But β does not have an eigenvector in A. So we have a commutative algebra over R which has a 2-dimensional simple module.

Inﬁnite-dimensional algebras can behave diﬀerently.

Example 4.4

The Heisenberg algebra H is the algebra over C generated by two non-commuting elements X and Y, and the only relation which holds in the algebra is that XY − YX = q·1_H, where q is a non-zero element in C. It is not finite-dimensional; for example, the monomials 1, X, X^2, ... are linearly independent.

The Heisenberg algebra does not have any ﬁnite-dimensional simple modules:


Suppose, for a contradiction, S is a finite-dimensional simple H-module. Fix a basis of S, and write multiplication by X, Y as matrices with respect to this basis. Then the matrix of XY − YX is qI_n, where I_n is the identity matrix and n = dim S. Take the trace of this matrix:

0 = Tr(XY − YX) = Tr(qI_n) = qn,

so dim S = 0; but S ≠ 0, a contradiction.
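The contradiction rests only on the identity Tr(XY) = Tr(YX). A numerical spot check (my own illustration, with random real matrices via numpy):

```python
import numpy as np

# The trace of any commutator XY - YX vanishes, so XY - YX = q*I_n with
# q != 0 is impossible for finite matrices.
rng = np.random.default_rng(1)
n = 4
X = rng.standard_normal((n, n))
Y = rng.standard_normal((n, n))
commutator = X @ Y - Y @ X

assert abs(np.trace(commutator)) < 1e-10   # Tr(XY - YX) = 0, up to rounding
assert not np.allclose(commutator, 0)      # though the commutator itself is non-zero
```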

4.1 Some classiﬁcations of simple modules

Let A be a ﬁnite-dimensional algebra over K. We have seen that all simple A-modules occur

as composition factors of the A-module A (see 3.13). In particular this implies that simple

modules of a ﬁnite-dimensional algebra are always ﬁnite-dimensional.

One could ask whether it is possible, given A, to completely describe all simple modules, that is, one simple module from each isomorphism class. In general this is rather hard, but in special cases it is possible.

4.1.1 Simple modules of A = CG where G is a cyclic group

Let A = CG where G = ⟨g⟩, a cyclic group of order n. Then A is commutative, so by 4.3, every simple A-module is 1-dimensional. So let S = span{x} be a 1-dimensional A-module. Then the structure of S is completely determined by the action of v_g, since v_g generates the algebra A. We must have v_g x = λx for some λ ∈ C. We have then

x = v_1 x = v_{g^n} x = (v_g)^n x = λ^n x

and λ^n = 1. Hence λ = exp(2kπi/n) for some k with 0 ≤ k ≤ n − 1.

This really does define an A-module; to see this it suffices to note that the corresponding map from G to GL(S) is a group homomorphism.

Choose and fix a primitive n-th root of unity, ω say. Then λ = ω^k for some k with 0 ≤ k ≤ n − 1. The representation we want is the map

ρ : G → GL(1, C), ρ(g^j) := ω^{jk}  (1 ≤ j ≤ n),

and one checks that this is a group homomorphism.

Note also that for different k we get representations which are not equivalent. In total we have n distinct simple modules.
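The construction is easy to make concrete. A sketch (my own choice of parameters: n = 6 and ω = exp(2πi/6)) that checks each ρ_k really is a group homomorphism, and that different k give different characters:

```python
import cmath

n = 6
omega = cmath.exp(2j * cmath.pi / n)      # a fixed primitive n-th root of unity

def rho(k, j):
    """Value of the k-th one-dimensional representation on g^j."""
    return omega ** (j * k)

for k in range(n):
    for i in range(n):
        for j in range(n):
            # homomorphism property: rho_k(g^i g^j) = rho_k(g^i) rho_k(g^j)
            assert abs(rho(k, (i + j) % n) - rho(k, i) * rho(k, j)) < 1e-9

# different k give different values on g, so the n modules are pairwise distinct
values = [rho(k, 1) for k in range(n)]
assert all(abs(values[a] - values[b]) > 1e-6
           for a in range(n) for b in range(a + 1, n))
```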


4.1.2 Simple modules for G where G has order p^r, over F_p

Another type of algebra where we can find all simple modules, up to isomorphism, is the group algebra A = KG where G is some group of order p^r for p a prime, and K = F_p, the field with p elements.

Lemma 4.5

Assume |G| = p^r, p prime, and K = F_p, the finite field with p elements. Then the trivial module is the only simple A-module (up to isomorphism).

Proof

Let V be a simple A-module. Let ρ : G → GL(V) be the corresponding representation. View V as a G-set; then it is a disjoint union of orbits, and each orbit has size dividing |G|, i.e. some power of p.

If dim(V) = n, then the set V has size p^n, which is a power of p. So the number of orbits of size 1 is divisible by p. Now {0} is an orbit of size 1, so the number of orbits of size 1 is non-zero, and then it must be at least p. So there is some 0 ≠ x ∈ V such that ρ_g x = x for all g ∈ G.

For the module, this means v_g x = x for all g ∈ G. Then Span{x} is a submodule. But V is simple, so V = Span{x}, and this is the trivial module.
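The counting argument can be replayed on the smallest example. A sketch (my own choice: p = 2, a group of order 2 generated by the matrix (1 1; 0 1) acting on V = F_2^2):

```python
from itertools import product

# Count fixed vectors of a p-group acting on F_p^n: the count is divisible
# by p, hence there is a non-zero fixed vector.
p = 2
g = [[1, 1],
     [0, 1]]          # a matrix of order 2 over F_2

def act(m, v):
    """Apply the matrix m to the vector v over F_p."""
    return tuple(sum(m[r][c] * v[c] for c in range(2)) % p for r in range(2))

fixed = [v for v in product(range(p), repeat=2) if act(g, v) == v]

assert len(fixed) % p == 0                # number of size-1 orbits divisible by p
assert (0, 0) in fixed                    # the zero vector is always fixed
assert any(v != (0, 0) for v in fixed)    # so a non-zero fixed vector exists
```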

Deﬁnition 4.6

Let A be a K-algebra, and let V be a non-zero (finite-dimensional) A-module. Then V is semisimple if V has simple submodules S_1, S_2, ..., S_k such that

V = S_1 ⊕ S_2 ⊕ ... ⊕ S_k.

4.1.3 Examples

(1) Any simple module is semisimple. (So the name 'semisimple' is reasonable.)

(2) Let A = K. Then A-modules are the same as vector spaces. Given a vector space V, take a basis {b_1, ..., b_n} and set S_i := Span{b_i}. Then S_i is a simple A-submodule of V, and V = S_1 ⊕ ... ⊕ S_n. This shows that every A-module is semisimple.

(3) Let A = M_n(K) and V = A. We know from ?? that V = C_1 ⊕ C_2 ⊕ ... ⊕ C_n where C_i is the space of matrices which are zero outside the i-th column. We have also seen that each C_i is a simple A-module. So V is semisimple.

(4) Not every module is semisimple. Let A be the 2-dimensional algebra over R with basis {1_A, β} such that β^2 = 0. Let V = A; this is not semisimple.


Assume for a contradiction that V is semisimple. In 3.1 we have proved that V has a composition series of length two, with two 1-dimensional composition factors. So V is not simple, and then we can only have V = S_1 ⊕ S_2 where S_1 and S_2 are 1-dimensional submodules of V. Then we have a basis of V consisting of eigenvectors for every element in A, and in particular eigenvectors for x → βx. But this map is not diagonalizable (for example, β has only the eigenvalue 0, and if it were diagonalizable then it would follow that β = 0).
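One can see the failure concretely in a matrix model (a sketch of my own: β corresponds to the matrix b = (0 1; 0 0), cf. the matrix description in 4.1.4):

```python
import numpy as np

# beta acts as a non-zero nilpotent map; its only eigenvalue is 0, so it
# cannot be diagonalizable: a diagonalizable map with all eigenvalues 0
# is the zero map.
b = np.array([[0.0, 1.0],
              [0.0, 0.0]])

assert np.allclose(np.linalg.eigvals(b), 0)   # only eigenvalue is 0
assert not np.allclose(b, 0)                  # but beta is non-zero
assert np.allclose(b @ b, 0)                  # and beta^2 = 0
```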

Given some A-module V , how can we decide whether or not it is semisimple? There are

several criteria, and each of them has advantages, depending on the circumstances.

Lemma 4.7

Let V be an A-module, then the following are equivalent.

(1) If U is a submodule of V then there is a submodule C of V such that U ⊕C = V .

(2) V is a direct sum of simple submodules (that is, V is semisimple).

(3) V is a sum of simple modules.

Proof

(1)⇒ (2) We may assume that V ,= 0, then V has at least one simple submodule. There

is then a submodule U of V which is a direct sum of simple modules, of largest possible

dimension. We must show that U = V . By (1), there is a submodule C such that V = U⊕C.

Assume (for a contradiction) that U ,= V , then C is non-zero, and then C must have a simple

submodule, S say. Consider now U

:= U+S. We have S∩U ⊆ C∩U = 0, that is U

= U⊕S.

Since U is a direct sum of simple submodules, also U

**is a direct sum of simple submodules.
**

But U is a proper submodule of U

and dimU < dimU

**, which contradicts the choice of U.
**

This shows that U = V , that is, V is a direct sum of simple modules.

(2) ⇒ (3): This is clear.

(3) ⇒ (1): Let U be a submodule of V. Consider the set of submodules of V given by

𝒮 = {W ⊆ V : U ∩ W = 0}.

Then 𝒮 ≠ ∅. Take C ∈ 𝒮 of largest possible dimension. We claim that then U ⊕ C = V.

By construction U ∩ C = 0. Assume U + C ≠ V. Since V = S_1 + S_2 + ... + S_k where the S_j are simple submodules of V, there must be a simple submodule S_i of V with S_i ⊄ U + C, and then S_i ⊄ C. So C ⊂ C + S_i, a proper submodule, and dim C < dim(C + S_i). So we get a contradiction if we show that the module C + S_i belongs to the set 𝒮. So we must show that

(C + S_i) ∩ U = 0:

Take u = c + x ∈ U with c ∈ C and x ∈ S_i. Then x = u − c ∈ (U + C) ∩ S_i. But (U + C) ∩ S_i is a submodule of S_i and is not equal to S_i (since S_i is not contained in U + C). So (U + C) ∩ S_i = 0. It follows that x = 0 and u = c ∈ U ∩ C = 0.

So we now have a contradiction. This completes the proof that V = U ⊕ C.


Lemma 4.8

(a) Submodules and factor modules of semi-simple modules are semi-simple.

(b) If V_1 and V_2 are semisimple A-modules then V_1 × V_2 is a semisimple A-module.

Exercise 4.1

Suppose f : S → X is an A-module homomorphism where S is simple. Show that

then f(S) is either simple, or is zero.

Solution: The Isomorphism Theorem gives f(S) ≅ S/ker(f). Since S is simple, we have ker(f) = 0 or ker(f) = S. In the first case, f(S) ≅ S and f(S) is simple; otherwise f(S) = 0.

Proof (of 4.8)

(a) Suppose V is semi-simple with factor module V/U. Let π : V → V/U be the canonical map π(v) = v + U; this is an A-module homomorphism. Suppose V = S_1 + S_2 + ... + S_k with S_i simple; then π(V) = π(S_1) + π(S_2) + ... + π(S_k), and π(S_i) is either zero or simple. So V/U is a sum of simple modules, hence is semi-simple [here we use part (3) of 4.7].

If U is a submodule of V then by part (1) of 4.7 we know V = U ⊕ C, and then U ≅ V/C, so U is semi-simple by what we have just proved.

(b) Let V_1 and V_2 be semisimple. By exercise 2.8 we have V := V_1 × V_2 = Ṽ_1 ⊕ Ṽ_2 with Ṽ_i ≅ V_i. So Ṽ_i is a direct sum of simple modules, for each i, and hence V is also a direct sum of simple modules.

In Example 4.1.3(2) we have seen that the algebra A = K has the property that every A-module is semisimple. Other algebras have the same property, and this has inspired the following definition.

Deﬁnition 4.9

The algebra A is semisimple if every A-module is semisimple.

How can one see whether or not A is semisimple without having to check all modules? This

is easy, because of the following.

Lemma 4.10

Let A be a finite-dimensional K-algebra. Then A is semisimple ⇔ the A-module A is semisimple.

Proof

If all A-modules are semisimple then in particular A as an A-module is semisimple.


To prove the other implication, suppose A as an A-module is semisimple. Take an arbitrary A-module M. Take a K-basis of M, say {m_1, ..., m_n}. Write A^n = A × A × ... × A, the direct product of n copies of A. Define ψ : A^n → M by

ψ(a_1, ..., a_n) = Σ_{i=1}^{n} a_i m_i.

This is an A-module homomorphism (by 1.6.1) and it is surjective. So the Isomorphism Theorem gives that M ≅ A^n/ker(ψ).

The module A is semi-simple, and then so is A^n, by 4.8 part (b) and induction on n. Now 4.8(a) shows that M is semi-simple.

4.1.4 Examples

(1) The algebra A = M_n(K) is semi-simple. (See 4.1.3.)

(2) Let A be the 2-dimensional algebra over R as in 4.1.3(4). We have found there a module which is not semisimple, and hence A is not semisimple.

The algebra is also isomorphic to the algebra of matrices

{ ( a  b
    0  a ) : a, b ∈ R }.

[Namely, the algebra is 2-dimensional and it contains a non-zero element with square zero; see ??.] So this algebra of matrices also is not semisimple. However, it is a subalgebra of M_2(R), which is semisimple!

The last example shows that a subalgebra of a semisimple algebra need not be semisimple. On the other hand, factor algebras of semisimple algebras are always semisimple; we show this now.

Lemma 4.11

Let I be an ideal of A and B = A/I. The following are equivalent:

(i) V is a semisimple B-module.

(ii) V is a semisimple A-module with IV = 0.

Hence if A is semisimple then B is semisimple.

Proof

Recall from 2.7 that B-modules can be viewed as the A-modules V with IV = 0, and where

the two actions are related by the equation

ax = (a +I)x, (a ∈ A, x ∈ V ).

[We write IV for the span of the set ¦xv : x ∈ I, v ∈ V ¦.]


(i) ⇒ (ii): Let V = S_1 ⊕ ... ⊕ S_k where the S_i are simple B-submodules of V. Then we view V as an A-module with IV = 0. Then IS_i ⊆ IV = 0, therefore S_i can also be viewed as an A-module. As an A-module it is simple as well: namely, if 0 ≠ x ∈ S_i then Ax = Bx = S_i. So V is also semisimple as an A-module.

(ii) ⇒ (i): Suppose V = S_1 ⊕ ... ⊕ S_k with S_i simple A-submodules of V, and IV = 0. Then IS_i ⊆ IV = 0, so S_i can be viewed as a B-module, and it is still simple as a B-module, since for 0 ≠ x ∈ S_i we have Ax = S_i but Ax = Bx.

For the last part, suppose A is semisimple. Take any B-module V; then we can view V as an A-module satisfying IV = 0. But A is semisimple, therefore V is semisimple as an A-module. By the implication (ii) ⇒ (i) it is also semisimple as a B-module. This shows that B is semisimple.

Proposition 4.12

Let A = A_1 × A_2, the direct product of algebras. Then A is semisimple if and only if both A_1 and A_2 are semisimple.

Proof

⇒: Suppose A is semisimple. The projection π_1 : A → A_1 onto the first coordinate is an algebra homomorphism and it is surjective. By 4.11, A_1 is semisimple. Similarly, A_2 is semisimple.

⇐: Assume A_1 and A_2 are semisimple. Write A_1 = S_1 ⊕ S_2 ⊕ ... ⊕ S_k where the S_i are simple A_1-submodules of A_1; similarly A_2 = T_1 ⊕ ... ⊕ T_l with T_i simple A_2-submodules of A_2. Then the A-module A_1 × A_2 can be written as the sum of all S_i × {0} and {0} × T_j. These are simple A-modules, by 3.15.

EXERCISES

4.2. For each of the following subalgebras A of M_2(K), consider the natural module V = (K^2)^t of column vectors. Show that V is simple, and find End_A(V), that is, the algebra of linear maps φ : V → V which commute with all elements in A. By Schur's Lemma, this algebra is a division ring. Identify it with 'something known'.

(i) K = R, A = { ( a  b
                  −b  a ) : a, b ∈ R }.

(ii) K = Z/2Z, A = { ( a  b
                       b  a+b ) : a, b ∈ Z/2Z }.

[Note: to see that in each case A really is an algebra, see the Exercise 1.5.]


4.3. Suppose A is a finite-dimensional algebra over a finite field F, and S is a (finite-dimensional) simple A-module. Let D := End_A(S). Show that then D must be a field.

4.4. An idempotent of an algebra A is an element e ∈ A such that e^2 = e. Let e_1 and e_2 be idempotents such that 1_A = e_1 + e_2 and e_1 e_2 = 0 = e_2 e_1. Assume also that e_1 and e_2 are central in A, that is, e_i a = a e_i for all a ∈ A.

(a) Suppose V is an A-module; show that then e_1 V and e_2 V are submodules of V and that V = e_1 V ⊕ e_2 V. Moreover, show that e_1 V = {v ∈ V : v = e_1 v}.

Suppose now that S_1 and S_2 are simple A-modules such that S_1 = e_1 S_1 and S_2 = e_2 S_2.

(b) Show that then S_1 is not isomorphic to S_2.

(c) Assume K = C. Let V = S_1 ⊕ S_2; show that End_A(V) is isomorphic to the algebra of diagonal matrices in M_2(C). [Hint: Apply Schur's Lemma.]

Show also that if W = S_1 ⊕ S_1 then End_A(W) ≅ M_2(C).

4.5. (Continuation) With the notation of the previous question, what can you say about End_A(N) where N = (S_1 ⊕ S_1) ⊕ (S_2 ⊕ S_2 ⊕ S_2)?

4.6. Suppose A is an algebra and N is some A-module. We define a subquotient of N to be a module Y/X where X, Y are submodules of N such that 0 ⊆ X ⊆ Y ⊆ N. Suppose N has composition length 3, and assume that every subquotient of N which has composition length 2 is semisimple. Show that then N must be semisimple.

[Suggestion: Choose a simple submodule X of N and show that there are submodules U_1 ≠ U_2 of N, both containing X, of composition length 2. Then show that U_1 + U_2 is the direct sum of three simple modules.]

4.7. Let G be the symmetric group and Ω the natural G-set, so that the permutation module KΩ has basis {b_1, b_2, ..., b_n}. Let

W := { Σ_i a_i b_i : a_i ∈ K, Σ_i a_i = 0 }.

Show that W is a submodule of KΩ. Show also that if K = C then W is simple, and CΩ is the direct sum of W with a copy of the trivial module.

5

The Wedderburn Theorem

Given a K-algebra A, when is it semisimple? Wedderburn's theorem answers this, and it gives a complete description of arbitrary semisimple algebras. We will prove the theorem for the case K = C, and we will explain what the answer is for arbitrary fields. To start, we consider commutative finite-dimensional semisimple algebras over C. The classification of such algebras is attributed to Weierstrass and Dedekind.

Proposition 5.1

Suppose A is a finite-dimensional commutative algebra over C. Then A is semisimple ⇔ A is isomorphic to the direct product of copies of C, as an algebra: A ≅ C × C × ... × C.

Proof

⇐: We know that C as an algebra over C is semisimple, and by Proposition 4.12, so is the direct product of finitely many copies of C.

⇒: Suppose A is the direct sum of simple submodules,

A = S_1 ⊕ S_2 ⊕ ... ⊕ S_k.

By Corollary 4.3, each S_i is 1-dimensional. We write the identity of A as

1_A = e_1 + e_2 + ... + e_k,  e_i ∈ S_i.

(a) We claim that e_i^2 = e_i and e_i e_j = 0 for i ≠ j. Namely, we have

e_i = e_i 1_A = e_i e_1 + e_i e_2 + ... + e_i e_k

and therefore

e_i − e_i^2 = e_i e_1 + ... + e_i e_{i−1} + e_i e_{i+1} + ... + e_i e_k.

58 5. The Wedderburn Theorem

The left hand side belongs to S

i

and the right hand side belongs to

j=i

S

j

. But the sum

is direct, therefore

S

i

∩

j=i

S

j

= 0.

So e

2

i

= e

i

, and now 0 = e

i

e

1

+. . . +e

i

e

i−1

+e

i

e

i+1

+. . . +e

i

e

k

and since we have a direct

sum, each summand must be zero.

We claim that e_i ≠ 0 for all i. Take some non-zero x ∈ S_i; then

x = x 1_A = x e_1 + ... + x e_i + ... + x e_k.

Now x − x e_i ∈ S_i ∩ Σ_{j≠i} S_j = 0 and therefore x = x e_i ≠ 0. It follows that e_i must be
non-zero. Since S_i is 1-dimensional, we deduce that S_i = C e_i, and therefore for each a ∈ A
we have a e_i = a_i e_i with a_i ∈ C.

Now, for every a ∈ A we have

(∗)  a = a 1_A = a e_1 + a e_2 + ... + a e_k = a_1 e_1 + ... + a_k e_k.

Define a map ψ : A → C × ... × C by

ψ(a) := (a_1, a_2, ..., a_k)   (a_i as in (∗)).

This is clearly linear. It is also onto, as ψ(e_i) = (0, ..., 0, 1, 0, ..., 0) with the 1 in the
i-th place, by the above. It is 1-1: if all a_i are zero then a = 0, by (∗). The map ψ is an
algebra homomorphism:

ψ(a)ψ(b) = (a_1, ..., a_k)(b_1, b_2, ..., b_k) = (a_1 b_1, a_2 b_2, ..., a_k b_k) = ψ(ab)

since

ab 1_A = a(b_1 e_1 + b_2 e_2 + ... + b_k e_k)
       = a b_1 e_1 + a b_2 e_2 + ... + a b_k e_k
       = b_1 (a e_1) + b_2 (a e_2) + ... + b_k (a e_k)
       = b_1 a_1 e_1 + b_2 a_2 e_2 + ... + b_k a_k e_k
       = a_1 b_1 e_1 + a_2 b_2 e_2 + ... + a_k b_k e_k.

For arbitrary semisimple algebras, the building blocks are matrix algebras M_n(C), and
the structure theorem for semisimple algebras over C is due to Wedderburn (though, ac-
cording to [1], a more general version was proved by Artin).

Theorem 5.2

[The Wedderburn Theorem for C] Let A be a finite-dimensional algebra over C. Then A is
semisimple if and only if A is isomorphic to a direct product of matrix rings,

A ≅ M_{n_1}(C) × M_{n_2}(C) × ... × M_{n_k}(C).

One direction is already known: we know that the direct product of matrix algebras is
always semisimple. Namely, in 4.1.3 we have seen that M_n(K) is semisimple for each n ≥ 1,
in fact for any field K. Now, Lemma 4.12 shows that the direct product of matrix algebras
also is semisimple.

To prove that any finite-dimensional semisimple algebra over C is isomorphic to a direct
product of such matrix rings takes more work.

The first thing we might ask is: where do the matrices come from? We are used to writing
linear maps as matrices, and a good guess might be that this should be generalized in some
way. If a linear map is written as a matrix, one starts by fixing a basis of the space, and works
with coordinates with respect to this basis. For example, if we take a 2-dimensional space
then we identify the vector space with column vectors in K^2.

For our generalization we imitate this, and we consider an A-module which is a direct
product. So let V = U_1 × U_2 = { (u_1, u_2) : u_i ∈ U_i } where U_1 and U_2 are A-modules. We
define a 'matrix algebra', with underlying space

Λ = { [γ_ij] : γ_ij ∈ Hom_A(U_j, U_i) }

and with matrix addition and multiplication. One checks that this really is an algebra. Note
however that the matrix entries do not commute in general.

Next, we want to relate this algebra to the endomorphism algebra of V. Let π_i : V → U_i
be the projection,

π_i(u_1, u_2) = u_i.

This is an A-module homomorphism, by 1.6.1.

Similarly, let κ_1 : U_1 → V be the inclusion map,

κ_1(u_1) = (u_1, 0),

and similarly define κ_2. These are also A-module homomorphisms, by 1.6.1. These maps
have a very important property:

(∗) We have κ_1 π_1 + κ_2 π_2 = Id_V :

Namely, if m = (u_1, u_2) then u_i = π_i(m) and

κ_1 π_1(m) + κ_2 π_2(m) = κ_1(u_1) + κ_2(u_2) = (u_1, 0) + (0, u_2) = (u_1, u_2) = m.

Lemma 5.3

Let V = U_1 × U_2. Then the algebra End_A(V) is isomorphic to Λ.


Remark 5.4

When A = K and the U_i are just 1-dimensional vector spaces, then this is the same as
writing linear maps of a 2-dimensional space as matrices. The proof in general is completely
analogous.

Proof

Let γ : V → V be an A-module homomorphism. Then

γ_ij := π_i ◦ γ ◦ κ_j : U_j → U_i

is an A-module homomorphism. Now define

Ψ : End_A(V) → Λ,   γ ↦ [γ_ij].

We claim that this map Ψ is an algebra homomorphism.

(a) It is linear: we have

π_i (cγ + δ) κ_j = c π_i γ κ_j + π_i δ κ_j

for all i, j, where c is a scalar and γ, δ are A-module endomorphisms of V, and therefore
Ψ(cγ + δ) = cΨ(γ) + Ψ(δ).

(b) Ψ commutes with taking products: consider Ψ(γ)Ψ(δ) = [γ_ij][δ_ij]. This matrix product
has ij-entry

Σ_{t=1}^{2} γ_it δ_tj = Σ_t (π_i γ κ_t)(π_t δ κ_j) = π_i γ (κ_1 π_1 + κ_2 π_2) δ κ_j.

By (∗) we have that κ_1 π_1 + κ_2 π_2 = Id_V, and so the ij-th entry is equal to

π_i γ δ κ_j,

and this is precisely the ij-th entry of Ψ(γδ). This is true for all i, j and therefore
Ψ(γ)Ψ(δ) = Ψ(γδ).

(c) If γ = Id_V then one gets γ_ij = 0 for i ≠ j and γ_ii = Id_{U_i}. This shows that Ψ(Id_V) is the
identity matrix in Λ.

(d) The map Ψ is one-to-one: suppose π_i γ κ_j = 0 for all i, j. Using (∗) twice, we can expand

γ = Id_V ◦ γ ◦ Id_V = Σ_{i,j} κ_i (π_i γ κ_j) π_j,

and each summand is zero. That is, γ = 0.

(e) The map Ψ is also onto: given a matrix [φ_ij] in Λ, define φ : V → V by setting

φ(u_1, u_2) = [φ_ij] (u_1, u_2)^t,

viewing (u_1, u_2) as a column vector. One checks that Ψ(φ) = [φ_ij].


Example 5.5

Let A be an algebra over C. Let V = S_1 × S_2 × S_2 = { (s_1, s_2, s̃_2) : s_1 ∈ S_1, s_2 ∈ S_2, s̃_2 ∈ S_2 }
where S_1 and S_2 are simple A-modules. We assume S_1 is not isomorphic to S_2.

Let φ : V → V be an A-module homomorphism; then

φ(s_1, s_2, s̃_2) = φ(s_1, 0, 0) + φ(0, s_2, 0) + φ(0, 0, s̃_2).

So we look at each of the three terms. Consider φ(s_1, 0, 0) = (x_1, x_2, x̃_2) with x_1 ∈ S_1 and
the other two components in S_2.

We get a map s_1 ↦ x_1, and by Schur's Lemma this map is a scalar multiple of the
identity; in other words, there is a scalar λ such that x_1 = λ s_1, for any s_1 ∈ S_1. Again by
Schur's Lemma, x_2 = 0 and x̃_2 = 0. We start writing down the matrix for φ, and what we
have found tells us that the first column of this matrix is (λ, 0, 0)^t.

If we continue in this way, we get a matrix of the form

λ 0
0 A

where A ∈ M_2(C).

Proposition 5.6

Assume V is a semisimple A-module. Then End_A(V) is isomorphic to a direct product of
matrix rings,

M_{n_1}(C) × ... × M_{n_k}(C).

Proof

Let V = S_1 ⊕ S_2 ⊕ ... ⊕ S_n where the S_i are simple. By 5.3 and induction, we have
End_A(V) ≅ Λ where

(∗)  Λ = { [φ_ij] : φ_ij : S_j → S_i }.

Now we apply Schur's Lemma. If S_j ≇ S_i then φ_ij = 0.

Suppose S_j ≅ S_i. Then we identify S_j and S_i, and by Schur's Lemma, φ_ij = λ_ij Id with
λ_ij ∈ C.

We label the simple modules so that isomorphic ones come together: let S_1 ≅ S_2 ≅ ... ≅ S_{n_1},
then S_{n_1+1} ≅ S_{n_1+2} ≅ ... ≅ S_{n_1+n_2}, where S_{n_1+1} ≇ S_1, and so on.

Then a matrix in Λ has block diagonal shape: off-diagonal blocks are zero, and on the
diagonal we have arbitrary matrix blocks, that is, matrices of the form diag(A_1, A_2, ..., A_k).

Multiplying two of these, we get

diag(A_1, A_2, ..., A_k) diag(B_1, B_2, ..., B_k) = diag(A_1 B_1, A_2 B_2, ..., A_k B_k).

This shows that the multiplication is precisely as in the direct product of matrix rings. This
suggests to define

Θ(diag(A_1, A_2, ..., A_k)) = (A_1, A_2, ..., A_k),

which is an element in M_{n_1}(C) × ... × M_{n_k}(C). This map Θ is clearly C-linear and bijective.
We have just shown that it preserves products, hence it is an isomorphism of algebras.
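The blockwise multiplication used above is easy to check numerically. The following sketch (with made-up 1×1 and 2×2 blocks, chosen only for illustration) multiplies two block-diagonal matrices and confirms that the product is again block diagonal, with blocks multiplied separately:

```python
# Check: diag(A1, A2) * diag(B1, B2) == diag(A1*B1, A2*B2).
# The block entries below are arbitrary sample values.

def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def block_diag(*blocks):
    """Assemble diag(A_1, ..., A_k) from square blocks."""
    n = sum(len(B) for B in blocks)
    M = [[0] * n for _ in range(n)]
    offset = 0
    for B in blocks:
        for i, row in enumerate(B):
            for j, entry in enumerate(row):
                M[offset + i][offset + j] = entry
        offset += len(B)
    return M

A1, B1 = [[2]], [[5]]                        # 1x1 blocks
A2, B2 = [[1, 2], [3, 4]], [[0, 1], [1, 0]]  # 2x2 blocks

lhs = matmul(block_diag(A1, A2), block_diag(B1, B2))
rhs = block_diag(matmul(A1, B1), matmul(A2, B2))
assert lhs == rhs
```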

Recall from 1.4 that the 'opposite algebra' A^op of A is the algebra with underlying vector
space A, and with multiplication

a ∗ b := ba.

Lemma 5.7

Let A = M_n(K). Then A^op is isomorphic to A.

Proof

For any n × n matrix X, let τ(X) := X^t, the transpose of the matrix. This is linear (from
basic linear algebra), and it is a vector space isomorphism since τ^2 = id. Furthermore, as
one learns in elementary linear algebra,

τ(XY) = (XY)^t = Y^t X^t = τ(Y)τ(X) = τ(X) ∗ τ(Y).

This shows that τ is an isomorphism of algebras A → A^op.
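As a quick sanity check, the identity τ(XY) = τ(Y)τ(X) can be verified on small matrices; the sample entries below are arbitrary and only for illustration:

```python
# Check that the transpose reverses products: (XY)^t = Y^t X^t.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    return [list(col) for col in zip(*X)]

X = [[1, 2], [3, 4]]
Y = [[0, 1], [5, 7]]

# tau(XY) equals tau(Y) tau(X), i.e. multiplication in the opposite order
assert transpose(matmul(X, Y)) == matmul(transpose(Y), transpose(X))
```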

Exercise 5.1

Let A = A_1 × A_2, the direct product of algebras. Check that A^op is isomorphic to
A_1^op × A_2^op.

Solution 5.8

The underlying vector spaces for A^op and A_1^op × A_2^op are the same. The multiplication
in A^op is

(a_1, a_2) ∗ (b_1, b_2) = (b_1, b_2)(a_1, a_2) = (b_1 a_1, b_2 a_2).

The multiplication in A_1^op × A_2^op is

(a_1, a_2)(b_1, b_2) = (a_1 ∗ b_1, a_2 ∗ b_2) = (b_1 a_1, b_2 a_2).

Hence the identity map gives us an algebra isomorphism from A^op to A_1^op × A_2^op.

Lemma 5.9

Let A be any K-algebra. Then A is isomorphic to End_A(A)^op.

Proof

(a) For a ∈ A, define 'right multiplication' r_a : A → A by

r_a(x) = xa   (x ∈ A).

Then r_a is an A-module homomorphism, by 1.6.1. Furthermore, we see that if a = 1_A then
r_a is the identity map of A.

(b) We have End_A(A) = { r_a : a ∈ A }: one inclusion holds by (a), and for the other
inclusion, take f : A → A to be an A-module homomorphism. Set a := f(1_A). Then for any
x ∈ A,

f(x) = f(x 1_A) = x f(1_A) = xa = r_a(x).

This is true for all x ∈ A, and hence f = r_a.

(c) Consider the composition. We have

r_a ◦ r_b (x) = r_a(xb) = (xb)a = x(ba) = r_ba(x).

So r_b ∗ r_a = r_a ◦ r_b = r_ba.

Hence we define a map ψ : A → End_A(A)^op by setting

ψ(a) = r_a.

By (a), it takes the identity to the identity, and by (c), it preserves products. One
checks that ψ is K-linear. Finally, by (b) it is bijective, and we have now proved that ψ is
an isomorphism.
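The order reversal in step (c) is the whole reason the opposite algebra appears. A small numerical sketch (with A taken to be 2×2 integer matrices and arbitrary sample values, an illustrative choice) shows that composing right multiplications gives r_a ◦ r_b = r_{ba}, not r_{ab}:

```python
# Right multiplications compose in reversed order: (r_a ∘ r_b)(x) = x(ba).
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def r(a):
    # right multiplication by a: x -> x a
    return lambda x: matmul(x, a)

a = [[1, 1], [0, 1]]
b = [[2, 0], [1, 1]]
x = [[1, 2], [3, 4]]

composed = r(a)(r(b)(x))                 # (x b) a
assert composed == r(matmul(b, a))(x)    # equals x(ba) ...
assert composed != r(matmul(a, b))(x)    # ... and differs from x(ab) here
```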

5.1 The proof of Wedderburn's theorem

We only have to put the previous results together. By 5.9, and then by 5.6 applied to the
semisimple A-module A, we have

A ≅ (End_A(A))^op ≅ (M_{n_1}(C) × ... × M_{n_k}(C))^op,

and by 5.1 and 5.7, this is isomorphic to

(M_{n_1}(C))^op × ... × (M_{n_k}(C))^op ≅ M_{n_1}(C) × ... × M_{n_k}(C).

Remark 5.10

We get the result for the commutative case now as a corollary. Namely, for a commutative
semisimple algebra, all matrix blocks in the Wedderburn theorem must be commutative,
and this is only true if n_i = 1 for all i.

We can now give a complete description of the simple modules of a semisimple algebra over

C.

Corollary 5.11

Let A be a finite-dimensional semisimple algebra over C, and suppose A ≅ M_{n_1}(C) × ... ×
M_{n_k}(C) as algebras. Then A has precisely k simple modules (up to isomorphism). They are
of the form S_1, S_2, ..., S_k where we can take S_i = (C^{n_i})^t, column vectors of length n_i,
and such that

(a) the i-th factor of A acts on S_i by matrix multiplication;

(b) the i-th factor of A acts on S_j as zero for j ≠ i.

In particular, the dimensions of the simple modules are n_1, n_2, ..., n_k.

Proof

In 3.15 we have classified the simple modules of an algebra of the form A_1 × A_2. Namely,
these are precisely all modules of the form

S × {0},   {0} × T,

where S runs through the simple A_1-modules, and T runs through the simple A_2-modules.
We apply this inductively, and see that all but one of the factors of our product of matrix
blocks act as zero on a simple A-module.

From 3.14 we know that M_{n_i}(C) has a unique simple module (up to isomorphism),
namely the natural module of column vectors (C^{n_i})^t.

Remark 5.12

What can we say about a finite-dimensional semisimple algebra A over an arbitrary field K?
The answer is that A is always isomorphic to a product of matrix rings where the matrix
blocks are M_{n_i}(D_i) and D_i is some division ring containing K.


One can see this with little trouble if one goes through the proof and checks where we
used that the field was C. In fact, this is only used when we apply Schur's Lemma, to say
that the endomorphism ring of the simple module S_i is isomorphic to C. In general we can
only say that the endomorphism ring of S_i is a division ring; this is the general version of
Schur's Lemma.

Everything else stays the same, and then the proof gives

A ≅ M_{n_1}(D_1) × ... × M_{n_k}(D_k).

EXERCISES

5.2. Suppose A is a finite-dimensional commutative semisimple algebra over C. Find
all ideals of A.

5.3. Show that A = M_n(C) does not have any ideals except 0 and A. Hence find all
ideals of a finite-dimensional semisimple algebra over C.

5.4. Suppose A = K_1 × K_2 × K_3, the direct product of three fields. Find all the ideals
of A.

5.5. Suppose A = M_{n_1}(C) × ... × M_{n_k}(C). Show that the center of A is commutative
and semisimple, and has dimension k.

5.6. Suppose A is a finite-dimensional semisimple algebra over C. Suppose x is an
element in the center Z(A). Show that if x is nilpotent then x = 0.

5.7. Which of the following commutative algebras over C are semisimple? Note that
the algebras in (1) have dimension 2, and the others have dimension 4.

(1) C[X]/(X^2 − X),  C[X]/(X^2),  C[X]/(X^2 − 1).

(2) C[X_1]/(X_1^2 − X_1) × C[X_2]/(X_2^2 − X_2)

(3) C[X_1, X_2]/(X_1^2 − X_1, X_2^2 − X_2)

(4) C[X_1]/(X_1^2) × C[X_2]/(X_2^2)

(5) C[X_1, X_2]/(X_1^2, X_2^2)

6

Maschke’s Theorem

In the previous chapter we have proved a main structure theorem for semisimple algebras.

One would like now to know to which algebras this can be applied. For example, you might

ask when a group algebra of a ﬁnite group is semisimple. This is answered by Maschke’s

theorem.

Theorem 6.1 (Maschke’s Theorem)

Let G be a ﬁnite group and A = KG the group algebra where K is some ﬁeld. Then A is

semisimple if and only if the characteristic of K does not divide the order of G.

The main idea of the proof we are going to present is that from any linear map
between KG-modules one can always construct a KG-homomorphism, by 'averaging over
the group'.

Lemma 6.2

Suppose M and N are KG-modules, and f : M → N is a K-linear map. Define

T(f) : M → N,   m ↦ Σ_{g∈G} v_g (f(v_{g^{-1}} m)).

Then T(f) is a KG-homomorphism.

Proof

One checks that T(f) is linear. Alternatively, one could argue that multiplication by elements
in A is linear, f is linear, and T(f) is a linear combination of compositions of linear
maps and is therefore linear. To see that it is an A-homomorphism, it suffices to check on
the basis of A, so let x ∈ G; then

T(f)(v_x m) = Σ_g v_g (f(v_{g^{-1}} v_x m)) = Σ_g v_x (v_{x^{-1}g} (f(v_{(x^{-1}g)^{-1}} m))) = v_x (T(f)(m)),

where the middle equality writes v_g = v_x v_{x^{-1}g}, and the last holds since x^{-1}g runs
through G as g does.
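The averaging construction can be tried out numerically. In the sketch below (an illustrative choice, not an example from the text), the cyclic group of order 3 acts on K^3 by cyclically shifting coordinates; starting from an arbitrary, non-equivariant matrix F, the average T = Σ_g ρ(g) F ρ(g)^{-1} does commute with the group action:

```python
# Averaging an arbitrary linear map over C_3 produces a KG-homomorphism.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]   # rho(g): cyclic shift of coordinates
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
powers   = [I, P, matmul(P, P)]          # rho(1), rho(g), rho(g^2)
inverses = [I, matmul(P, P), P]          # rho(g)^{-1} = rho(g^2), and so on

F = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]   # arbitrary: NOT equivariant itself
assert matmul(P, F) != matmul(F, P)

# T = sum over the group of rho(g) F rho(g)^{-1}
T = [[sum(matmul(matmul(g, F), ginv)[i][j] for g, ginv in zip(powers, inverses))
      for j in range(3)] for i in range(3)]

assert matmul(P, T) == matmul(T, P)      # T commutes with the action
```

Note also that tr(T) = |G|·tr(F), a fact used again in Corollary 6.3.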

6.1 Proof of Maschke’s Theorem

⇐ Assume that the characteristic of K does not divide |G|. Let W be a submodule of
A = KG; we must show that there is a submodule C of A such that W ⊕ C = A.

There is a subspace V such that W ⊕ V = A. Let π : A → A be the projection onto W
with kernel V; this is just a linear map. Define

γ := (1/|G|) T(π).

This is a scalar multiple of an A-module homomorphism and hence is also an A-module
homomorphism. So C := Ker(γ) is an A-submodule. We will now show that KG = W ⊕ C
as KG-modules.

(a) We claim that the restriction of γ to W is the identity map, and that Im(γ) = W:
if m ∈ W then v_{g^{-1}} m ∈ W and so π(v_{g^{-1}} m) = v_{g^{-1}} m, therefore v_g π(v_{g^{-1}} m) = m and

γ(m) = (1/|G|) Σ_{g∈G} m = m.

This implies W ⊆ Im(γ). Conversely, let m ∈ A. Since π(v_{g^{-1}} m) ∈ W and W is a submodule,
it follows that v_g π(v_{g^{-1}} m) ∈ W and then γ(m) ∈ W.

(b) We claim that W ∩ C = 0: if m ∈ W ∩ C then γ(m) = m and γ(m) = 0, so m = 0.

(c) W + C = A: we have dim(W + C) = dim(W) + dim(C) (by (b) and linear algebra),
which is dim Im(γ) + dim Ker(γ), and this is equal to dim(A) by rank-nullity.

⇒ For the converse of Maschke's Theorem, suppose A = KG is semisimple. We claim that
char(K) does not divide the order of G:

Let ω := Σ_{g∈G} v_g, which is an element of KG. Check that

(∗)  v_x ω = ω   (all x ∈ G).

Therefore, let U be the span of ω; this is a (1-dimensional) submodule of A. Suppose A is
semisimple; then there is a submodule C of A such that U ⊕ C = A. Then one checks that
U = Ae where e is an idempotent of A, and so e = cω for some non-zero c ∈ K. From
e^2 = e ≠ 0 we get, using (∗), that

0 ≠ ω^2 = |G| ω,

and consequently |G| ≠ 0 in K.
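The failure in bad characteristic is visible already in the smallest case. The sketch below (an illustrative choice) works in the group algebra of the cyclic group {1, g} of order 2 over a field of characteristic 2, representing an element x_0·1 + x_1·g as a pair with coefficients mod 2:

```python
# In char 2, omega = 1 + g satisfies omega^2 = |G|*omega = 0,
# so the span of omega cannot be generated by a non-zero idempotent.
p = 2  # the characteristic, which divides |G| = 2

def mult(x, y):
    # (x0 + x1 g)(y0 + y1 g) with g^2 = 1, coefficients mod p
    return ((x[0] * y[0] + x[1] * y[1]) % p,
            (x[0] * y[1] + x[1] * y[0]) % p)

omega = (1, 1)                       # omega = 1 + g, the sum over the group
assert mult(omega, omega) == (0, 0)  # omega squares to zero
```

So here the span of ω is a submodule with no complement, and KG is not semisimple.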


6.1.1 Exploiting the map T further

In the ﬁrst Lemma of this section we have seen that by ’averaging over G’ we can produce

KG-module homomorphisms starting with arbitrary linear maps. This is a very powerful

tool which is used in other contexts. Recall that the trace tr(φ) of a linear transformation

is the trace of some matrix representing φ. For a detailed reminder, see the beginning of

chapter 7.

Corollary 6.3

Suppose V and W are simple CG-modules, and suppose f : V → W is a linear transformation.

(a) Assume V and W are not isomorphic; then T(f) = 0.

(b) Assume V = W; then T(f) = λI where

λ = (|G| / dim V) tr(f).

Proof

We apply 6.2 and Schur's Lemma; this gives (a), and also that in (b), T(f) is a
multiple of the identity. We calculate the trace of T(f). We have

tr(v_g f v_{g^{-1}}) = tr(f)   (g ∈ G),

hence tr T(f) = |G| tr(f). On the other hand, the trace of T(f) is equal to λ dim V. The
statement follows.

6.1.2 Permutation modules

Suppose G is a finite group and Ω is a left G-set. Recall that the permutation module KΩ
is defined to be the vector space over K with basis

{ b_ω : ω ∈ Ω },

where the action is given by

v_g b_ω := b_{gω}.

Let ζ := Σ_ω b_ω. We have seen that v_g ζ = ζ for all g ∈ G, and hence ⟨ζ⟩ is a submodule
isomorphic to the trivial module. The following is a 'Maschke type' argument.

Lemma 6.4

If char(K) does not divide |Ω| then the submodule spanned by ζ is a direct summand of
KΩ.


Proof

Let ψ : KΩ → ⟨ζ⟩ be the linear map with ψ(b_ω) = ζ for each ω. Check that this is a
KG-homomorphism.

Then ψ(ζ) = |Ω| ζ. So if |Ω| ≠ 0 in K then the intersection of Ker(ψ) with the trivial
module ⟨ζ⟩ is zero, and by dimensions KΩ = ⟨ζ⟩ ⊕ Ker(ψ).

6.2 Some consequences of Maschke’s Theorem

Suppose G is a finite group; then by Maschke's Theorem the group algebra CG is semisimple.
We can therefore apply Wedderburn's theorem and obtain that

CG ≅ M_{n_1}(C) × M_{n_2}(C) × ... × M_{n_k}(C).

This contains a lot of information. First, by comparing dimensions, we have

|G| = Σ_{i=1}^{k} n_i^2.

Moreover, the sizes of the matrix blocks are the dimensions of the simple modules for CG,
by 5.11. There is always the trivial representation, which is 1-dimensional. We can take this
to correspond to the matrix block M_{n_1}(C), that is, n_1 = 1.

The number k of matrix blocks has an interpretation in terms of the group. It is equal

to the number of conjugacy classes (see the exercises below).

Example 6.5

We can sometimes find the dimensions of simple modules. Let G be the dihedral group of
order 10, as in Exercise 2.13. Then by Exercise 3.15, the dimension of a simple module is
≤ 2, and there are precisely two 1-dimensional simple modules. We have now

10 = 1 + 1 + Σ_{i=3}^{k} n_i^2,

and the only possible solution is 10 = 1 + 1 + 4 + 4. So we know that there are two non-
isomorphic 2-dimensional simple modules. You might find these explicitly.

Then we know there are four matrix blocks, so there should be four conjugacy classes.
Perhaps you know this anyway.
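The dimension count in this example can be confirmed by brute force. The sketch below (written only to illustrate the arithmetic) looks for all ways to write the remaining 10 − 1 − 1 = 8 as a sum of squares n_i^2 with n_i ≥ 2:

```python
# With |G| = 10 and exactly two 1-dimensional simples, the remaining block
# sizes n_i >= 2 must satisfy sum of n_i^2 = 8; the only option is 2, 2.
from itertools import combinations_with_replacement

target = 10 - 1 - 1          # subtract the two 1-dimensional blocks
max_n = int(target ** 0.5)   # n_i^2 <= target
solutions = set()
for r in range(1, target + 1):
    for combo in combinations_with_replacement(range(2, max_n + 1), r):
        if sum(n * n for n in combo) == target:
            solutions.add(combo)

assert solutions == {(2, 2)}  # i.e. 10 = 1 + 1 + 4 + 4
```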

EXERCISES

6.1. Show that the center of the group algebra KG has a basis consisting of the class
sums. The class sum [C] of a conjugacy class C = g^G is defined to be the sum of
all elements in C (it has |G| / |C_G(g)| terms).


6.2. Let G be the symmetric group Sym(3), of order 6, and let V = KΩ where Ω is
the natural G-set, of size 3.

(a) Suppose K = C; express V as a direct sum of two simple CG-modules.

(b) Suppose K has characteristic 3. Show that then V has a composition series
of length 3.

6.3. Let G be the dihedral group of order 2n where n is odd. Generalize 2.13 and 3.15.
Find the dimensions of all simple CG-modules. Now do the same when n is even
and n > 2. What happens if n = 2?

6.4. Let A = KG, the group algebra of a finite group. If C is a conjugacy class of G,
define the class sum to be

[C] := Σ_{g∈C} v_g.

Show that [C] belongs to the center Z(A) of A. Show also that the class sums
form a K-basis for Z(A).

6.5. (Continuation) Suppose now that K = C. Deduce from Wedderburn's Theorem
that the number of matrix blocks is equal to the number of conjugacy classes of
G. What does this tell us if G is abelian?

6.6. Let A = CG where G is the symmetric group Sym(3), and take σ = (1 2 3) and
τ = (1 2). We want to show directly that A is a direct product of three matrix
algebras. [We know from 6.3 that there should be two blocks of size 1 and one
block of size 2.]

(a) Let e_± := (1/6)(1 ± v_τ)(1 + v_σ + v_{σ^2}); show that e_± are idempotents in
the centre of A, and e_+ e_− = 0.

(b) Let

f = (1/3)(1 + ω^{-1} v_σ + ω v_{σ^2}),

where ω is a primitive 3rd root of 1. Let f_1 := v_τ f v_{τ^{-1}}. Show that f and f_1 are
orthogonal idempotents, and that

f + f_1 = 1_A − e_− − e_+.

(c) Show that Span{f, f v_τ, v_τ f, f_1} is an algebra, isomorphic to M_2(C).

Apply these calculations, and show directly that A is isomorphic to a product of
three matrix algebras. [Taking direct sums might be more natural.]

7

Characters

Suppose that ρ : G → GL(n, C) is a representation of the group G. With each n × n matrix
ρ(g) we can associate its trace, that is, we add its diagonal entries. We write χ(g) for this
trace. The function χ : G → C is defined to be the character associated to the representation
ρ.

Characters of representations have many nice properties, and they are a very important
tool for calculating with representations of groups.

7.1 Definition, examples, basic properties

Suppose A is some n × n matrix; recall that the trace of A is defined as the sum of the
diagonal entries,

tr(A) := Σ_{i=1}^{n} a_ii.

Recall also that tr(AB) = tr(BA) if B is some other n × n matrix, and therefore
tr(P^{-1} A P) = tr(A) if P is an invertible n × n matrix.

If φ : V → V is a linear transformation of a finite-dimensional vector space V then we
write tr(φ) for the trace of a matrix of φ, with respect to some basis. By the above, this
does not depend on the choice of a basis.

As well, over C, the trace tr(A) is equal to the sum of the eigenvalues of A. Actually,
most of our matrices will satisfy equations of the form A^m = I, all over C, and then A is
diagonalizable, by linear algebra.

Deﬁnition 7.1

Suppose ρ : G → GL(n, C) is a representation of G. The associated character χ_ρ : G → C is
defined by

χ_ρ(g) := tr(ρ(g)).

We also write χ_V if V is the CG-module corresponding to the representation ρ.

Example 7.2

The trivial representation of G is the map ρ : G → GL_1(C) with ρ(g) = 1 for each g ∈ G.
The associated character is known as the 'trivial character'. Write χ_1 for this character;
then χ_1(g) = 1 for each g ∈ G.

Example 7.3

Let Ω be a G-set, and V = CΩ the permutation module corresponding to Ω. Call its
character χ_Ω, the 'permutation character'. Then

χ_Ω(g) = |Fix_Ω(g)|

where Fix_Ω(g) = { ω ∈ Ω : g(ω) = ω }.
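This formula reflects the fact that the trace of a permutation matrix counts the 1's on its diagonal, one for each fixed point. A short sketch over Sym(3) (chosen for illustration) checks this for every group element:

```python
# Trace of a permutation matrix = number of fixed points of the permutation.
from itertools import permutations

def perm_matrix(p):
    # column j carries basis vector e_j to e_{p[j]}
    n = len(p)
    return [[1 if p[j] == i else 0 for j in range(n)] for i in range(n)]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

for p in permutations(range(3)):
    fixed = sum(1 for i in range(3) if p[i] == i)
    assert trace(perm_matrix(p)) == fixed

assert trace(perm_matrix((0, 1, 2))) == 3  # identity: chi(1) = |Omega|
assert trace(perm_matrix((1, 2, 0))) == 0  # a 3-cycle fixes nothing
assert trace(perm_matrix((1, 0, 2))) == 1  # a transposition fixes one point
```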

Example 7.4

Recall that the (left) regular representation has underlying vector space V = CG, and the
action is by left multiplication. Let χ_reg be its character. Then

χ_reg(g) = |G| if g = 1_G, and 0 otherwise.

This is also a special case of a permutation character.

Given a finite group G and a representation ρ : G → GL(V), we want to find the trace
of ρ(g) for g ∈ G. Since g has finite order, we know that ρ(g)^m = ρ(g^m) = ρ(1) = I for some
m ≥ 1. So the linear transformation ρ(g) satisfies the polynomial equation

X^m − 1 = 0

and therefore it is diagonalizable, with eigenvalues some m-th roots of 1. This is very good
to know in many applications.

Deﬁnition 7.5

Let χ be a character of G. Then χ is said to be irreducible if χ = χ_V where V is a simple
CG-module.


For example, the trivial character is irreducible. More generally, if the corresponding

module is 1-dimensional then the associated character is irreducible.

Example 7.6

Let G = Sym(n), the symmetric group, and let CΩ be the natural permutation module, where
Ω = {1, 2, ..., n}. In chapter 4 we have seen that CΩ ≅ K ⊕ W as a CG-module, where W
is simple, of dimension n − 1 (and K is a copy of the trivial module). So χ_W is an irreducible
character. By 7.3, we have

χ_W(g) = |Fix_Ω(g)| − 1   (g ∈ G).

Lemma 7.7

Suppose ρ_1 and ρ_2 are equivalent representations of G, with associated characters χ_1 and
χ_2. Then χ_1 = χ_2.

Proof

By assumption, there is an invertible matrix P such that for all g ∈ G we have

ρ_1(g) = P ρ_2(g) P^{-1}.

Then

χ_1(g) = tr(ρ_1(g)) = tr(P ρ_2(g) P^{-1}) = tr(ρ_2(g)) = χ_2(g).

Proposition 7.8

Let ρ : G → GL(n, C) be a representation of the finite group G, and let χ be the associated
character. Then

(i) χ(1) = n;

(ii) χ(g^{-1}) = \overline{χ(g)}, the complex conjugate, for g ∈ G;

(iii) χ(y g y^{-1}) = χ(g) for all y, g ∈ G. That is, χ is a class function.

Proof

(i) We have ρ(1) = I_n, the identity n × n matrix. It has trace n.

(ii) Fix some g ∈ G. The matrix ρ(g) is diagonalizable (as we noted before, since g has finite
order), so let P be some invertible n × n matrix with P ρ(g) P^{-1} = D, a diagonal matrix,
and let ω_1, ..., ω_n be its diagonal entries. Then the ω_i are m-th roots of unity, where
g^m = 1; in particular each ω_i has absolute value 1, so ω_i^{-1} = \overline{ω_i}. Hence the
inverse of D is diagonal with diagonal entries \overline{ω_i}. But the inverse of D is
P ρ(g^{-1}) P^{-1}, and hence we have

χ(g^{-1}) = Σ_i \overline{ω_i} = \overline{Σ_i ω_i} = \overline{χ(g)}.

(iii) This is clear from tr(P^{-1} A P) = tr(A): we have ρ(y g y^{-1}) = ρ(y) ρ(g) ρ(y)^{-1},
which has the same trace as ρ(g).
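Part (ii) can be seen concretely on a 1-dimensional representation. In the sketch below (an illustrative choice, not an example from the text), a cyclic group of order 3 acts by a primitive cube root of unity ω, and the character value at each inverse is the complex conjugate:

```python
# For chi(g^j) = omega^j with omega a primitive cube root of 1,
# chi(g^{-j}) equals the complex conjugate of chi(g^j).
import cmath

omega = cmath.exp(2j * cmath.pi / 3)
chi = {j: omega ** j for j in range(3)}   # chi(g^j) = omega^j

for j in range(3):
    assert abs(chi[(-j) % 3] - chi[j].conjugate()) < 1e-12
```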

Lemma 7.9

Suppose W_1, W_2 are CG-modules, with corresponding characters χ_1, χ_2. Then the character
χ associated to W_1 ⊕ W_2 is equal to χ_1 + χ_2.

Proof

Write ρ_i for the representation of G on W_i for i = 1, 2, and write ρ for the representation
of G on W_1 ⊕ W_2. Take a basis of W_1 and one of W_2; then the union is a basis of W_1 ⊕ W_2.
Since W_1 and W_2 are submodules, for each g ∈ G the matrix of ρ(g) is block diagonal, where
the diagonal blocks are ρ_1(g) and ρ_2(g). Therefore tr(ρ(g)) = tr(ρ_1(g)) + tr(ρ_2(g)).

Let S_1, S_2, ..., S_k be the simple CG-modules and let χ_1, χ_2, ..., χ_k be the corresponding
characters. That is, they are the irreducible characters of G.

Corollary 7.10

Suppose W is any CG-module; write W = ⊕_i S_i^{a_i} as a direct sum of simple CG-modules,
where the a_i ≥ 0. Then the character χ_W is given by

χ_W = Σ_i a_i χ_i.

Hence, to understand all characters, we need to understand the irreducible characters.

7.2 The orthogonality relations

Characters are functions from G to C. So we set

C^G := { φ : G → C }.

This is a vector space over C, with + and scalar multiplication pointwise. It has dimension
|G|. We define an inner product on C^G by setting

⟨φ, ψ⟩ := (1/|G|) Σ_{g∈G} φ(g) \overline{ψ(g)}.


Exercise 7.1

Show that ⟨−, −⟩ is an inner product on C^G.

Let Cl(G) be the set of φ ∈ C^G which are constant on conjugacy classes. This is a subspace
of C^G; its dimension is the number of conjugacy classes. The characters of G are contained
in Cl(G). Moreover, the number of irreducible characters is precisely the dimension of Cl(G).

The motivation for the inner product comes from orthogonality properties of the
irreducible characters. The following is sometimes called the 'first orthogonality relation'.

Theorem 7.11

Suppose χ and χ′ are irreducible characters of G, corresponding to representations V and
W. Then

⟨χ, χ′⟩ = 1 if V ≅ W, and 0 if V ≇ W.

Proof

By 7.9 we can assume that V = W in the case V ≅ W. Let ρ_V and ρ_W be the corresponding
representations. Write ρ_W(g) = [a_kl(g)] and ρ_V(g^{-1}) = [b_λτ(g^{-1})]. Using
χ′(g^{-1}) = \overline{χ′(g)} from 7.8 (and replacing g by g^{-1} in the sum), we reformulate
the inner product:

(∗)  ⟨χ, χ′⟩ = (1/|G|) Σ_{g∈G} ( Σ_{i=1}^{dim W} a_ii(g) ) ( Σ_{j=1}^{dim V} b_jj(g^{-1}) )
            = Σ_{i,j} [ (1/|G|) Σ_{g∈G} a_ii(g) b_jj(g^{-1}) ].

From Chapter 6, for any linear map h : V → W we have

(∗∗)  T(h) = Σ_{g∈G} ρ_W(g) h ρ_V(g^{-1}) = λI if V = W, and 0 if V ≇ W,

where λ = tr(h)|G| / dim V. Now we fix i, j and take h := E_ij; this has trace δ_ij. Taking
matrices,

ρ_W(g) E_ij ρ_V(g^{-1}) = [ a_ki(g) b_jτ(g^{-1}) ]_{k,τ}.

Then the kτ-th entry of the matrix (∗∗) is

Σ_{g∈G} a_ki(g) b_jτ(g^{-1}) = δ_ij δ_kτ |G| / dim V if V = W, and 0 if V ≇ W.

Now take k = i and τ = j, and then sum over all i, j; we get that (∗) is 0 if V ≇ W, and
otherwise (∗) is equal to

Σ_{i,j} δ_ij / dim V = Σ_{i=1}^{dim V} 1 / dim V = 1,

as stated.
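The relation can be checked numerically on a small example. For a cyclic group of order 3 the irreducible characters are χ_t(g^j) = ω^{tj} with ω a primitive cube root of unity (compare the examples in the next section); the sketch below confirms that they are orthonormal for the inner product above:

```python
# First orthogonality for the cyclic group of order 3:
# <chi_s, chi_t> = (1/|G|) sum_g chi_s(g) * conjugate(chi_t(g)) = delta_{st}.
import cmath

n = 3
omega = cmath.exp(2j * cmath.pi / n)

def inner(s, t):
    return sum(omega ** (s * j) * (omega ** (t * j)).conjugate()
               for j in range(n)) / n

for s in range(n):
    for t in range(n):
        expected = 1 if s == t else 0
        assert abs(inner(s, t) - expected) < 1e-12
```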


Theorem 7.12

Suppose V is a finite-dimensional CG-module with character χ_V. Write V = V_1 ⊕ V_2 ⊕ ... ⊕ V_r
where the V_i are simple. If S is any simple CG-module with character χ_S, then

⟨χ_V, χ_S⟩ = #{ i : V_i ≅ S }.

Proof

We have χ_V = χ_1 + χ_2 + ... + χ_r where χ_i is the character of V_i. Then

⟨χ_V, χ_S⟩ = ⟨χ_1, χ_S⟩ + ... + ⟨χ_r, χ_S⟩.

If V_i ≅ S then the inner product is 1; otherwise it is zero.

Theorem 7.13

Suppose V and W are CG-modules, with characters χ_V and χ_W. Then V ≅ W if and only
if χ_V = χ_W.

Proof

⇒ This is 7.9.

⇐ Suppose χ_V = χ_W; then ⟨χ_V, χ_S⟩ = ⟨χ_W, χ_S⟩ for all simple modules S. By 7.12 it
follows that V is isomorphic to W.

7.3 The character table

The irreducible characters of a ﬁnite group G are the building blocks for all characters of G,

and it is convenient to display them in the form of a matrix, which is known as the character

table of G.

We have seen that characters are constant on conjugacy classes. We know that the

number of conjugacy classes is equal to the number of matrix blocks in the Wedderburn

decomposition, hence is equal to the number of irreducible characters.

We recall that the irreducible characters are labelled as χ_1, ..., χ_k, and we take χ_1 to be
the trivial character.

Let C_1, C_2, ..., C_k be the conjugacy classes of G. Pick g_i ∈ C_i. We make the convention
that g_1 = 1.

Deﬁnition 7.14

The character table of G is the k × k matrix

[χ_i(g_j)]_{i,j}.


Example 7.15

Let G be a cyclic group of order n, say G = ⟨g⟩. In chapter 4 we have classified the irreducible representations of G over C. Take ω to be a primitive n-th root of unity; then for each t with 1 ≤ t ≤ n we have the 1-dimensional simple module on which g acts with eigenvalue ω^t, and hence g^j acts with eigenvalue ω^{jt}. The group is abelian, so each element is the only member of its conjugacy class, and all classes have size 1.

So, for example, when n = 3 the character table is

        1     g     g^2
χ_1     1     1     1
χ_2     1     ω     ω^2
χ_3     1     ω^2   ω
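The table of values ω^{jt} in Example 7.15 can be built and row orthogonality checked numerically. A minimal Python sketch (an illustration, not from the text), for n = 3; here row t = 0 is the trivial character, corresponding to t = n in the text's labelling:

```python
import cmath

# Character table of the cyclic group C_n: row t has entries omega^(t*j),
# omega a primitive n-th root of unity (here n = 3, as in Example 7.15).
n = 3
omega = cmath.exp(2j * cmath.pi / n)
table = [[omega ** (t * j) for j in range(n)] for t in range(n)]

# Row orthogonality: <chi_s, chi_t> = (1/n) sum_j chi_s(g^j) * conj(chi_t(g^j))
# equals 1 if s = t and 0 otherwise (all classes have size 1 here).
for s in range(n):
    for t in range(n):
        ip = sum(table[s][j] * table[t][j].conjugate() for j in range(n)) / n
        assert abs(ip - (1 if s == t else 0)) < 1e-9
print("rows are orthonormal")
```

Changing `n` checks the same relation for any cyclic group.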

Example 7.16

Let G be the Klein 4-group, say G = ⟨x, y⟩. This has four 1-dimensional (simple) modules. We describe them by the eigenvalues of x and y on a generator.

        x     y
S_1     1     1
S_2    −1     1
S_3     1    −1
S_4    −1    −1

Let χ_1, χ_2, χ_3, χ_4 be the corresponding irreducible characters. Then the character table is

        1     x     y     xy
χ_1     1     1     1     1
χ_2     1    −1     1    −1
χ_3     1     1    −1    −1
χ_4     1    −1    −1     1

Example 7.17

Let G be the dihedral group of order 8. We take the presentation as in chapter 2,

G = ⟨σ, τ : σ^4 = 1, τ^2 = 1, τστ^{−1} = σ^{−1}⟩.

The element σ^2 commutes with all elements of G, and hence the subgroup N := ⟨σ^2⟩ is normal. One checks that G/N is isomorphic to the Klein 4-group. Any representation of G/N can be inflated to a representation of G, and therefore we have four 1-dimensional representations (by the previous example). These are still irreducible viewed as representations of G, and this gives us four 1-dimensional irreducible characters.

In chapter 2 we have constructed a 2-dimensional representation of G. This is checked to be simple (alternatively, check that the inner product of its character with itself is 1). The formula

|G| = Σ_i n_i^2

shows that we have found all irreducible characters.


Moreover, G has five conjugacy classes. We choose representatives for the classes, say

g_1 = 1, g_2 = σ^2, g_3 = σ, g_4 = τ, g_5 = στ.

The element σ^2 acts trivially on the 1-dimensional modules, and we can write down the character values of the 1-dimensional modules by copying the character table of the Klein 4-group appropriately. For the 2-dimensional representation we calculate the trace of the representation constructed in chapter 2. We get the character table:

        1     σ^2   σ     τ     στ
χ_1     1     1     1     1     1
χ_2     1     1    −1     1    −1
χ_3     1     1     1    −1    −1
χ_4     1     1    −1    −1     1
χ_5     2    −2     0     0     0
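This table can be verified numerically: the rows are orthonormal with respect to the inner product in which each column is weighted by its class size. A Python sketch (an illustration, not part of the text), with class sizes 1, 1, 2, 2, 2 and |G| = 8:

```python
# Row orthogonality for the dihedral group of order 8, weighting each
# column by the size of its conjugacy class.
sizes = [1, 1, 2, 2, 2]
table = [
    [1,  1,  1,  1,  1],
    [1,  1, -1,  1, -1],
    [1,  1,  1, -1, -1],
    [1,  1, -1, -1,  1],
    [2, -2,  0,  0,  0],
]
order = sum(sizes)  # |G| = 8

# <chi_a, chi_b> = (1/|G|) sum over classes of size * chi_a * chi_b
# (all values are real here, so no conjugation is needed).
for a in range(5):
    for b in range(5):
        ip = sum(s * x * y for s, x, y in zip(sizes, table[a], table[b])) / order
        assert ip == (1 if a == b else 0)
print("all 25 inner products check out")
```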

By 7.11 the rows of the character table satisfy an orthogonality relation. We will now

see that this implies orthogonality of the columns of the character table.

Theorem 7.18

Fix j, ℓ. Then we have

Σ_{i=1}^{k} χ_i(g_j) χ̄_i(g_ℓ) =
    0              if j ≠ ℓ
    |C_G(g_j)|     if j = ℓ

where the bar denotes complex conjugation.

Proof

Define a matrix X := [x_{ij}] with

x_{ij} := |C_G(g_j)|^{−1/2} χ_i(g_j).

Consider the (i, ℓ)-entry of X X̄^t. Since the class of g_j contains |G|/|C_G(g_j)| elements, this is

Σ_{j=1}^{k} χ_i(g_j) χ̄_ℓ(g_j) |C_G(g_j)|^{−1} = (1/|G|) Σ_{g∈G} χ_i(g) χ̄_ℓ(g) = ⟨χ_i, χ_ℓ⟩ = δ_{iℓ}.

This means that X X̄^t = I. Therefore X̄^t X = I as well. We spell this out and get

δ_{jℓ} = Σ_{i=1}^{k} |C_G(g_j)|^{−1/2} |C_G(g_ℓ)|^{−1/2} χ̄_i(g_j) χ_i(g_ℓ),

which gives the statement.
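For the character table of the dihedral group of order 8 above, the column relation of Theorem 7.18 can be checked directly. A Python sketch (illustrative only), using the centralizer orders |C_G(g_j)| = |G| / (class size) = 8, 8, 4, 4, 4:

```python
# Column orthogonality (Theorem 7.18) for the dihedral group of order 8:
# sum_i chi_i(g_j) * conj(chi_i(g_l)) equals |C_G(g_j)| if j = l, else 0.
table = [
    [1,  1,  1,  1,  1],
    [1,  1, -1,  1, -1],
    [1,  1,  1, -1, -1],
    [1,  1, -1, -1,  1],
    [2, -2,  0,  0,  0],
]
centralizer = [8, 8, 4, 4, 4]   # |C_G(g_j)| = |G| / (class size)

for j in range(5):
    for l in range(5):
        # all character values are real here, so conjugation is omitted
        s = sum(table[i][j] * table[i][l] for i in range(5))
        assert s == (centralizer[j] if j == l else 0)
print("columns are orthogonal")
```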


Example 7.19

Let G = A_4, the alternating group on four letters. Recall that |G| = 12. We want to find the character table.

(1) Let N be the Klein 4-group ⊂ G. Then N is a normal subgroup of G, and G/N is cyclic of order 3. Each simple module for G/N over C can be viewed as a CG-module, by 'inflation'. We therefore get three simple modules for CG of dimension 1, so we have three linear characters.

(2) The group G has four conjugacy classes (!). Representatives are g_1 = 1, g_2 = (12)(34), g_3 = (123) and g_4 = (132). The character table is square, so there must be precisely one more irreducible character. The sum-of-squares formula shows that it has degree three.

We start constructing the character table. Let ω be a primitive 3rd root of unity, and recall 1 + ω + ω^2 = 0. Then we have (using Example 7.15)

        1     g_2   g_3   g_4
χ_1     1     1     1     1
χ_2     1     1     ω     ω^2
χ_3     1     1     ω^2   ω
χ_4     3

We find the last row by using the orthogonality relations. From the orthogonality of the second and first columns we get

0 = 1 + 1 + 1 + 3 χ_4(g_2)

and hence χ_4(g_2) = −1.

Next, the orthogonality of the third and first columns shows that

0 = 1 + ω + ω^2 + 3 χ_4(g_3)

and therefore χ_4(g_3) = 0. Similarly χ_4(g_4) = 0.

This shows that there is an irreducible representation of A_4 of degree 3, and it also tells us the character of this representation. So one would like to actually construct such a representation!

Take the natural permutation module CΩ of S_4. This is the direct sum of the trivial module and a (simple) module W of dimension 3 (see 7.6 and chapter 4). We view this module W as a module for A_4, and we see that the character of the corresponding representation is precisely χ_4.

So we can deduce that W is a simple CA_4-module (and then it must also be simple as a CS_4-module: a proof without calculations!).
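The character χ_4 can also be recomputed from the permutation action: the permutation character counts fixed points, and subtracting the trivial character gives the degree-3 character. A Python sketch (illustrative only; permutations act on {0, 1, 2, 3} rather than {1, 2, 3, 4}):

```python
# The degree-3 character of A_4 from the natural permutation module:
# chi_perm(g) = number of fixed points, and chi_W = chi_perm - 1.
def cycles_to_perm(cycles, n=4):
    """Build the list p with p[i] = image of i from a list of cycles."""
    p = list(range(n))
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            p[a] = b
    return p

# class representatives 1, (12)(34), (123), (132), written on {0,1,2,3}
reps = [[], [(0, 1), (2, 3)], [(0, 1, 2)], [(0, 2, 1)]]
class_sizes = [1, 3, 4, 4]                 # |A_4| = 12

chi_W = []
for cycles in reps:
    p = cycles_to_perm(cycles)
    fixed = sum(1 for i in range(4) if p[i] == i)
    chi_W.append(fixed - 1)
print(chi_W)   # [3, -1, 0, 0], matching the row found by orthogonality

# <chi_W, chi_W> = 1, so W is simple as a CA_4-module (values are real)
ip = sum(s * v * v for s, v in zip(class_sizes, chi_W)) / 12
assert ip == 1
```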

7.4 Products of characters

This part is not in the B2 syllabus


We have deﬁned tensor products of vector spaces, and tensor products of group repre-

sentations. The importance of this is that the character of a tensor product is the product

of the characters. This gives a very powerful tool to construct new characters from known

ones.

Theorem 7.20

Assume G is ﬁnite. Suppose V, W are CG-modules, with characters χ

V

, χ

W

respectively.

Then V ⊗W has character χ

V

χ

W

.

Proof

Let ρ_V, ρ_W be the representations corresponding to V, W respectively, and write ρ for the representation corresponding to V ⊗ W. Let g ∈ G. We can choose a basis e_i of eigenvectors of g on V, with eigenvalues λ_i, and a basis f_j of eigenvectors of g on W, with eigenvalues μ_j (say). Then we use the basis {e_i ⊗ f_j} of V ⊗ W to calculate the character of V ⊗ W. We have

ρ(g)(e_i ⊗ f_j) = ρ_V(g)(e_i) ⊗ ρ_W(g)(f_j) = λ_i e_i ⊗ μ_j f_j = λ_i μ_j (e_i ⊗ f_j).

Hence

χ_{V⊗W}(g) = Σ_{i,j} λ_i μ_j = (Σ_i λ_i)(Σ_j μ_j) = χ_V(g) χ_W(g).
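In matrix terms the theorem says that the trace of a Kronecker product is the product of the traces. A quick numerical sketch in Python (an illustration, not from the text); the two sample matrices are arbitrary choices standing in for ρ_V(g) and ρ_W(g):

```python
# Character of a tensor product (Theorem 7.20): the trace of a Kronecker
# product equals the product of the traces, tr(A tensor B) = tr(A) tr(B).
def kron(A, B):
    """Kronecker product of two square matrices given as lists of lists."""
    m, n = len(A), len(B)
    return [[A[i][j] * B[k][l] for j in range(m) for l in range(n)]
            for i in range(m) for k in range(n)]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

A = [[2, 1], [0, 3]]           # trace 5
B = [[1, 1], [0, 1]]           # trace 2
assert trace(kron(A, B)) == trace(A) * trace(B)
print(trace(kron(A, B)))       # 10
```

Since the diagonal of A ⊗ B consists of all products A[i][i] · B[k][k], the identity holds for any pair of square matrices, which is exactly the eigenvalue computation in the proof.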

EXERCISES

7.2. Calculate the character table of the symmetric group G = S_4. [You might exploit the factor groups C_2 (≅ G/A_4) and S_3 (≅ G/V_4, V_4 the Klein 4-group). You might also exploit products of characters χχ′ where χ′ is a linear character.]

7.3. Calculate the character table of the symmetric group S_5.

7.4. Find the character table of the quaternion group of order 8. Verify that this is the same as the character table of the dihedral group of order 8.

Bibliography

[1] C.W. Curtis, Pioneers of representation theory, AMS History of Mathematics 15, 1999.

[2] Y. A. Drozd, V. V. Kirichenko, Finite-dimensional Algebras, Springer 1994.

[3] G. James, M. Liebeck, Representations and characters of groups, 2nd edition, CUP 2001.

[4] W. Ledermann, Introduction to group characters, 2nd edition, CUP 1987.
