De Gruyter Graduate
Celine Carstensen
Benjamin Fine
Gerhard Rosenberger
Abstract Algebra
Applications to Galois Theory,
Algebraic Geometry and Cryptography
De Gruyter
Mathematics Subject Classification 2010: Primary: 12-01, 13-01, 16-01, 20-01; Secondary: 01-01,
08-01, 11-01, 14-01, 94-01.
This book is Volume 11 of the Sigma Series in Pure Mathematics, Heldermann Verlag.
ISBN 978-3-11-025008-4
e-ISBN 978-3-11-025009-1
Library of Congress Cataloging-in-Publication Data
Carstensen, Celine.
Abstract algebra : applications to Galois theory, algebraic geometry, and cryptography / by Celine Carstensen, Benjamin Fine, and Gerhard Rosenberger.
p. cm. – (Sigma series in pure mathematics ; 11)
Includes bibliographical references and index.
ISBN 978-3-11-025008-4 (alk. paper)
1. Algebra, Abstract. 2. Galois theory. 3. Geometry, Algebraic.
4. Cryptography. I. Fine, Benjamin, 1948– II. Rosenberger, Gerhard. III. Title.
QA162.C375 2011
512.02–dc22
2010038153
Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie;
detailed bibliographic data are available in the Internet at http://dnb.dnb.de.
© 2011 Walter de Gruyter GmbH & Co. KG, Berlin/New York
Typesetting: DaTeX Gerd Blumenstein, Leipzig, www.datex.de
Printing and binding: AZ Druck und Datentechnik GmbH, Kempten
Printed on acid-free paper
Printed in Germany
www.degruyter.com
Preface
Traditionally, mathematics has been separated into three main areas: algebra, analysis
and geometry. Of course there is a great deal of overlap between these areas. For
example, topology, which is geometric in nature, owes its origins and problems as
much to analysis as to geometry. Further, the basic techniques in studying topology
are predominantly algebraic. In general, algebraic methods and symbolism pervade
all of mathematics and it is essential for anyone learning any advanced mathematics
to be familiar with the concepts and methods in abstract algebra.
This is an introductory text on abstract algebra. It grew out of courses given to
advanced undergraduates and beginning graduate students in the United States and
to mathematics students and teachers in Germany. We assume that the students are
familiar with Calculus and with some linear algebra, primarily matrix algebra and the
basic concepts of vector spaces, bases and dimensions. All other necessary material
is introduced and explained in the book. We assume however that the students have
some, but not a great deal, of mathematical sophistication. Our experience is that the material in this book can be completed in a full year's course. We present the material sequentially so that polynomials and field extensions precede an in-depth look at group theory. We feel that a student who goes through the material in these notes will
attain a solid background in abstract algebra and be able to move on to more advanced
topics.
The centerpiece of these notes is the development of Galois theory and its important
applications, especially the insolvability of the quintic. After introducing the basic algebraic structures, groups, rings and fields, we begin the theory of polynomials and
polynomial equations over ﬁelds. We then develop the main ideas of ﬁeld extensions
and adjoining elements to ﬁelds. After this we present the necessary material from
group theory needed to complete both the insolvability of the quintic and solvability
by radicals in general. Hence the middle part of the book, Chapters 9 through 14, is concerned with group theory including permutation groups, solvable groups, abelian groups and group actions. Chapter 14 is somewhat off to the side of the main theme
of the book. Here we give a brief introduction to free groups, group presentations
and combinatorial group theory. With the group theory material in hand we return
to Galois theory and study general normal and separable extensions and the fundamental theorem of Galois theory. Using this we present several major applications
of the theory including solvability by radicals and the insolvability of the quintic, the
fundamental theorem of algebra, the construction of regular n-gons and the famous impossibilities: squaring the circle, doubling the cube and trisecting an angle. We
finish in a slightly different direction giving an introduction to algebraic and group-based cryptography.
October 2010 Celine Carstensen
Benjamin Fine
Gerhard Rosenberger
Contents
Preface v
1 Groups, Rings and Fields 1
1.1 Abstract Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Rings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Integral Domains and Fields . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Subrings and Ideals . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5 Factor Rings and Ring Homomorphisms . . . . . . . . . . . . . . . . 9
1.6 Fields of Fractions . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.7 Characteristic and Prime Rings . . . . . . . . . . . . . . . . . . . . . 14
1.8 Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2 Maximal and Prime Ideals 21
2.1 Maximal and Prime Ideals . . . . . . . . . . . . . . . . . . . . . . . 21
2.2 Prime Ideals and Integral Domains . . . . . . . . . . . . . . . . . . . 22
2.3 Maximal Ideals and Fields . . . . . . . . . . . . . . . . . . . . . . . 24
2.4 The Existence of Maximal Ideals . . . . . . . . . . . . . . . . . . . . 25
2.5 Principal Ideals and Principal Ideal Domains . . . . . . . . . . . . . . 27
2.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3 Prime Elements and Unique Factorization Domains 29
3.1 The Fundamental Theorem of Arithmetic . . . . . . . . . . . . . . . 29
3.2 Prime Elements, Units and Irreducibles . . . . . . . . . . . . . . . . 35
3.3 Unique Factorization Domains . . . . . . . . . . . . . . . . . . . . . 38
3.4 Principal Ideal Domains and Unique Factorization . . . . . . . . . . . 41
3.5 Euclidean Domains . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.6 Overview of Integral Domains . . . . . . . . . . . . . . . . . . . . . 51
3.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4 Polynomials and Polynomial Rings 53
4.1 Polynomials and Polynomial Rings . . . . . . . . . . . . . . . . . . . 53
4.2 Polynomial Rings over Fields . . . . . . . . . . . . . . . . . . . . . . 55
4.3 Polynomial Rings over Integral Domains . . . . . . . . . . . . . . . . 57
4.4 Polynomial Rings over Unique Factorization Domains . . . . . . . . 58
4.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
5 Field Extensions 66
5.1 Extension Fields and Finite Extensions . . . . . . . . . . . . . . . . . 66
5.2 Finite and Algebraic Extensions . . . . . . . . . . . . . . . . . . . . 69
5.3 Minimal Polynomials and Simple Extensions . . . . . . . . . . . . . 70
5.4 Algebraic Closures . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.5 Algebraic and Transcendental Numbers . . . . . . . . . . . . . . . . 75
5.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
6 Field Extensions and Compass and Straightedge Constructions 80
6.1 Geometric Constructions . . . . . . . . . . . . . . . . . . . . . . . . 80
6.2 Constructible Numbers and Field Extensions . . . . . . . . . . . . . . 80
6.3 Four Classical Construction Problems . . . . . . . . . . . . . . . . . 83
6.3.1 Squaring the Circle . . . . . . . . . . . . . . . . . . . . . . . 83
6.3.2 The Doubling of the Cube . . . . . . . . . . . . . . . . . . . 83
6.3.3 The Trisection of an Angle . . . . . . . . . . . . . . . . . . . 83
6.3.4 Construction of a Regular n-Gon . . . . . . . . . . . . . . 84
6.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
7 Kronecker’s Theorem and Algebraic Closures 91
7.1 Kronecker’s Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . 91
7.2 Algebraic Closures and Algebraically Closed Fields . . . . . . . . . . 94
7.3 The Fundamental Theorem of Algebra . . . . . . . . . . . . . . . . . 100
7.3.1 Splitting Fields . . . . . . . . . . . . . . . . . . . . . . . . . 100
7.3.2 Permutations and Symmetric Polynomials . . . . . . . . . . . 101
7.4 The Fundamental Theorem of Algebra . . . . . . . . . . . . . . . . . 105
7.5 The Fundamental Theorem of Symmetric Polynomials . . . . . . . . 109
7.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
8 Splitting Fields and Normal Extensions 113
8.1 Splitting Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
8.2 Normal Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
8.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
9 Groups, Subgroups and Examples 119
9.1 Groups, Subgroups and Isomorphisms . . . . . . . . . . . . . . . . . 119
9.2 Examples of Groups . . . . . . . . . . . . . . . . . . . . . . . . . . 121
9.3 Permutation Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
9.4 Cosets and Lagrange’s Theorem . . . . . . . . . . . . . . . . . . . . 128
9.5 Generators and Cyclic Groups . . . . . . . . . . . . . . . . . . . . . 133
9.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
10 Normal Subgroups, Factor Groups and Direct Products 141
10.1 Normal Subgroups and Factor Groups . . . . . . . . . . . . . . . . . 141
10.2 The Group Isomorphism Theorems . . . . . . . . . . . . . . . . . . . 146
10.3 Direct Products of Groups . . . . . . . . . . . . . . . . . . . . . . . 149
10.4 Finite Abelian Groups . . . . . . . . . . . . . . . . . . . . . . . . . . 151
10.5 Some Properties of Finite Groups . . . . . . . . . . . . . . . . . . . . 156
10.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
11 Symmetric and Alternating Groups 161
11.1 Symmetric Groups and Cycle Decomposition . . . . . . . . . . . . . 161
11.2 Parity and the Alternating Groups . . . . . . . . . . . . . . . . . . . 164
11.3 Conjugation in S_n . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
11.4 The Simplicity of A_n . . . . . . . . . . . . . . . . . . . . . . . . . 168
11.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
12 Solvable Groups 171
12.1 Solvability and Solvable Groups . . . . . . . . . . . . . . . . . . . . 171
12.2 Solvable Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
12.3 The Derived Series . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
12.4 Composition Series and the Jordan–Hölder Theorem . . . . . . . . . 177
12.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
13 Groups Actions and the Sylow Theorems 180
13.1 Group Actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
13.2 Conjugacy Classes and the Class Equation . . . . . . . . . . . . . . . 181
13.3 The Sylow Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . 183
13.4 Some Applications of the Sylow Theorems . . . . . . . . . . . . . . 187
13.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
14 Free Groups and Group Presentations 192
14.1 Group Presentations and Combinatorial Group Theory . . . . . . . . 192
14.2 Free Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
14.3 Group Presentations . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
14.3.1 The Modular Group . . . . . . . . . . . . . . . . . . . . . . 200
14.4 Presentations of Subgroups . . . . . . . . . . . . . . . . . . . . . . . 207
14.5 Geometric Interpretation . . . . . . . . . . . . . . . . . . . . . . . . 209
14.6 Presentations of Factor Groups . . . . . . . . . . . . . . . . . . . . . 212
14.7 Group Presentations and Decision Problems . . . . . . . . . . . . . . 213
14.8 Group Amalgams: Free Products and Direct Products . . . . . . . . . 214
14.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
15 Finite Galois Extensions 217
15.1 Galois Theory and the Solvability of Polynomial Equations . . . . . . 217
15.2 Automorphism Groups of Field Extensions . . . . . . . . . . . . . . 218
15.3 Finite Galois Extensions . . . . . . . . . . . . . . . . . . . . . . . . 220
15.4 The Fundamental Theorem of Galois Theory . . . . . . . . . . . . . 221
15.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
16 Separable Field Extensions 233
16.1 Separability of Fields and Polynomials . . . . . . . . . . . . . . . . . 233
16.2 Perfect Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
16.3 Finite Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
16.4 Separable Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . 238
16.5 Separability and Galois Extensions . . . . . . . . . . . . . . . . . . . 241
16.6 The Primitive Element Theorem . . . . . . . . . . . . . . . . . . . . 245
16.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
17 Applications of Galois Theory 248
17.1 Applications of Galois Theory . . . . . . . . . . . . . . . . . . . . . 248
17.2 Field Extensions by Radicals . . . . . . . . . . . . . . . . . . . . . . 248
17.3 Cyclotomic Extensions . . . . . . . . . . . . . . . . . . . . . . . . . 252
17.4 Solvability and Galois Extensions . . . . . . . . . . . . . . . . . . . 253
17.5 The Insolvability of the Quintic . . . . . . . . . . . . . . . . . . . . . 254
17.6 Constructibility of Regular n-Gons . . . . . . . . . . . . . . . . . . 259
17.7 The Fundamental Theorem of Algebra . . . . . . . . . . . . . . . . . 261
17.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
18 The Theory of Modules 265
18.1 Modules Over Rings . . . . . . . . . . . . . . . . . . . . . . . . . . 265
18.2 Annihilators and Torsion . . . . . . . . . . . . . . . . . . . . . . . . 270
18.3 Direct Products and Direct Sums of Modules . . . . . . . . . . . . . 271
18.4 Free Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
18.5 Modules over Principal Ideal Domains . . . . . . . . . . . . . . . . . 276
18.6 The Fundamental Theorem for Finitely Generated Modules . . . . . . 279
18.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
19 Finitely Generated Abelian Groups 285
19.1 Finite Abelian Groups . . . . . . . . . . . . . . . . . . . . . . . . . . 285
19.2 The Fundamental Theorem: p-Primary Components . . . . . . . . . 286
19.3 The Fundamental Theorem: Elementary Divisors . . . . . . . . . . . 288
19.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
20 Integral and Transcendental Extensions 295
20.1 The Ring of Algebraic Integers . . . . . . . . . . . . . . . . . . . . . 295
20.2 Integral ring extensions . . . . . . . . . . . . . . . . . . . . . . . . . 298
20.3 Transcendental ﬁeld extensions . . . . . . . . . . . . . . . . . . . . . 302
20.4 The transcendence of e and π . . . . . . . . . . . . . . . . . . . . . 307
20.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
21 The Hilbert Basis Theorem and the Nullstellensatz 312
21.1 Algebraic Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
21.2 Algebraic Varieties and Radicals . . . . . . . . . . . . . . . . . . . . 312
21.3 The Hilbert Basis Theorem . . . . . . . . . . . . . . . . . . . . . . . 314
21.4 The Hilbert Nullstellensatz . . . . . . . . . . . . . . . . . . . . . . . 315
21.5 Applications and Consequences of Hilbert’s Theorems . . . . . . . . 317
21.6 Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
21.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
22 Algebraic Cryptography 326
22.1 Basic Cryptography . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
22.2 Encryption and Number Theory . . . . . . . . . . . . . . . . . . . . 331
22.3 Public Key Cryptography . . . . . . . . . . . . . . . . . . . . . . . . 335
22.3.1 The Difﬁe–Hellman Protocol . . . . . . . . . . . . . . . . . . 336
22.3.2 The RSA Algorithm . . . . . . . . . . . . . . . . . . . . . . 337
22.3.3 The ElGamal Protocol . . . . . . . . . . . . . . . . . . . . . 339
22.3.4 Elliptic Curves and Elliptic Curve Methods . . . . . . . . . . 341
22.4 Noncommutative Group-based Cryptography . . . . . . . . . . . . . 342
22.4.1 Free Group Cryptosystems . . . . . . . . . . . . . . . . . . . 345
22.5 Ko–Lee and Anshel–Anshel–Goldfeld Methods . . . . . . . . . . . . 349
22.5.1 The Ko–Lee Protocol . . . . . . . . . . . . . . . . . . . . . . 350
22.5.2 The Anshel–Anshel–Goldfeld Protocol . . . . . . . . . . . . 350
22.6 Platform Groups and Braid Group Cryptography . . . . . . . . . . . 351
22.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
Bibliography 359
Index 363
Chapter 1
Groups, Rings and Fields
1.1 Abstract Algebra
Abstract algebra or modern algebra can be best described as the theory of algebraic
structures. Brieﬂy, an algebraic structure is a set S together with one or more binary
operations on it satisfying axioms governing the operations. There are many alge
braic structures but the most commonly studied structures are groups, rings, ﬁelds
and vector spaces. Also widely used are modules and algebras. In this ﬁrst chapter
we will look at some basic preliminaries concerning groups, rings and fields. We will only briefly touch on groups here; a more extensive treatment will be done later in the book.
Mathematics traditionally has been subdivided into three main areas – analysis,
algebra and geometry. These areas overlap in many places so that it is often difﬁcult
to determine whether a topic is one in geometry say or in analysis. Algebra and
algebraic methods permeate all these disciplines and most of mathematics has been algebraicized – that is, it uses the methods and language of algebra. Groups, rings and
ﬁelds play a major role in the modern study of analysis, topology, geometry and even
applied mathematics. We will see these connections in examples throughout the book.
Abstract algebra has its origins in two main areas and questions that arose in these
areas – the theory of numbers and the theory of equations. The theory of numbers
deals with the properties of the basic number systems – integers, rationals and reals
while the theory of equations, as the name indicates, deals with solving equations, in
particular polynomial equations. Both are subjects that date back to classical times.
A whole section of Euclid's Elements is dedicated to number theory. The foundations for the modern study of number theory were laid by Fermat in the 1600s and then by Gauss in the 1800s. In an attempt to prove Fermat's Last Theorem, Gauss introduced the complex integers a + bi, where a and b are integers, and showed that this set has unique factorization. These ideas were extended by Dedekind and Kronecker who
developed a wide ranging theory of algebraic number ﬁelds and algebraic integers.
A large portion of the terminology used in abstract algebra – rings, ideals, factorization – comes from the study of algebraic number fields. This has evolved into the modern
discipline of algebraic number theory.
The second origin of modern abstract algebra was the problem of trying to determine a formula for finding the solutions, in terms of radicals, of a fifth degree polynomial. It was proved first by Ruffini in 1800 and then by Abel that it is impossible to find a formula in terms of radicals for such a solution. Galois in 1820 extended
this and showed that such a formula is impossible for any degree ﬁve or greater. In
proving this he laid the groundwork for much of the development of modern abstract
algebra especially ﬁeld theory and ﬁnite group theory. Earlier, in 1800, Gauss proved
the fundamental theorem of algebra which says that any nonconstant complex polynomial equation must have a solution. One of the goals of this book is to present a
comprehensive treatment of Galois theory and a proof of the results mentioned above.
The locus of real points (x, y) which satisfy a polynomial equation f(x, y) = 0 is called an algebraic plane curve. Algebraic geometry deals with the study of algebraic plane curves and extensions to loci in a higher number of variables. Algebraic geometry is intricately tied to abstract algebra and especially commutative algebra. We will
touch on this in the book also.
Finally linear algebra, although a part of abstract algebra, arose in a somewhat
different context. Historically it grew out of the study of solution sets of systems of
linear equations and the study of the geometry of real n-dimensional spaces. It began
to be developed formally in the early 1800s with work of Jordan and Gauss and then
later in the century by Cayley, Hamilton and Sylvester.
1.2 Rings
The primary motivating examples for algebraic structures are the basic number systems: the integers Z, the rational numbers Q, the real numbers R and the complex numbers C. Each of these has two basic operations, addition and multiplication, and forms what is called a ring. We formally define this.
Definition 1.2.1. A ring is a set R with two binary operations defined on it, addition, denoted by +, and multiplication, denoted by · or just by juxtaposition, satisfying the following six axioms:
(1) Addition is commutative: a + b = b + a for each pair a, b in R.
(2) Addition is associative: a + (b + c) = (a + b) + c for a, b, c ∈ R.
(3) There exists an additive identity, denoted by 0, such that a + 0 = a for each a ∈ R.
(4) For each a ∈ R there exists an additive inverse, denoted by −a, such that a + (−a) = 0.
(5) Multiplication is associative: a(bc) = (ab)c for a, b, c ∈ R.
(6) Multiplication is left and right distributive over addition: a(b + c) = ab + ac and (b + c)a = ba + ca for a, b, c ∈ R.
If in addition
(7) Multiplication is commutative: ab = ba for each pair a, b in R,
then R is a commutative ring.
Further if
(8) There exists a multiplicative identity, denoted by 1, such that a · 1 = a and 1 · a = a for each a in R,
then R is a ring with identity.
If R satisfies (1) through (8) then R is a commutative ring with an identity.
A set G with one operation, +, on it satisfying axioms (1) through (4) is called an abelian group. We will discuss these further later in the chapter.
The number systems Z, Q, R, C are all commutative rings with identity.
A ring R with only one element is called trivial. A ring R with identity is trivial if and only if 0 = 1.
A finite ring is a ring R with only finitely many elements in it. Otherwise R is an infinite ring. Z, Q, R, C are all infinite rings. Examples of finite rings are given by the integers modulo n, Z_n, with n > 1. The ring Z_n consists of the elements 0, 1, 2, ..., n − 1 with addition and multiplication done modulo n. That is, for example, 4 · 3 = 12 = 2 modulo 5. Hence in Z_5 we have 4 · 3 = 2. The rings Z_n are all finite commutative rings with identity.
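The arithmetic of Z_n is easy to experiment with. The following is a small illustrative sketch, not part of the text, using Python's % operator; the helper names add_mod and mul_mod are our own.

```python
def add_mod(a, b, n):
    """Addition in Z_n: add in Z, then reduce modulo n."""
    return (a + b) % n

def mul_mod(a, b, n):
    """Multiplication in Z_n: multiply in Z, then reduce modulo n."""
    return (a * b) % n

# The example from the text: 4 * 3 = 12 = 2 modulo 5, so 4 * 3 = 2 in Z_5.
print(mul_mod(4, 3, 5))  # 2
```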
To give examples of rings without an identity consider the set nZ = {nz : z ∈ Z} consisting of all multiples of the fixed integer n. It is an easy verification (see exercises) that this forms a ring under the same addition and multiplication as in Z but that there is no identity for multiplication. Hence for each n ∈ Z with n > 1 we get an infinite commutative ring without an identity.
To obtain examples of noncommutative rings we consider matrices. Let M_2(Z) be the set of 2 × 2 matrices with integral entries. Addition of matrices is done componentwise, that is

  [ a_1  b_1 ]   [ a_2  b_2 ]   [ a_1 + a_2   b_1 + b_2 ]
  [ c_1  d_1 ] + [ c_2  d_2 ] = [ c_1 + c_2   d_1 + d_2 ]

while multiplication is matrix multiplication

  [ a_1  b_1 ] [ a_2  b_2 ]   [ a_1 a_2 + b_1 c_2   a_1 b_2 + b_1 d_2 ]
  [ c_1  d_1 ] [ c_2  d_2 ] = [ c_1 a_2 + d_1 c_2   c_1 b_2 + d_1 d_2 ].

Then again it is an easy verification (see exercises) that M_2(Z) forms a ring. Further, since matrix multiplication is noncommutative, this forms a noncommutative ring. However the identity matrix does form a multiplicative identity for it. M_2(nZ) with n > 1 provides an example of an infinite noncommutative ring without an identity. Finally M_2(Z_n) for n > 1 will give an example of a finite noncommutative ring.
1.3 Integral Domains and Fields
Our basic number systems have the property that if ab = 0 then either a = 0 or b = 0.
However this is not necessarily true in the modular rings. For example, 2 · 3 = 0 in Z_6.
Definition 1.3.1. A zero divisor in a ring R is an element a ∈ R with a ≠ 0 such that there exists an element b ≠ 0 with ab = 0. A commutative ring with an identity 1 ≠ 0 and with no zero divisors is called an integral domain. Notice that having no zero divisors is equivalent to the fact that if ab = 0 in R then either a = 0 or b = 0.
Hence Z, Q, R, C are all integral domains but from the example above Z_6 is not.
In general we have the following.
Theorem 1.3.2. Z_n is an integral domain if and only if n is a prime.
Proof. First of all notice that under multiplication modulo n an element m is 0 if and only if n divides m. We will make this precise shortly. Recall further Euclid's lemma which says that if a prime p divides a product ab then p divides a or p divides b.
Now suppose that n is a prime and ab = 0 in Z_n. Then n divides ab. From Euclid's lemma it follows that n divides a or n divides b. In the first case a = 0 in Z_n while in the second b = 0 in Z_n. It follows that there are no zero divisors in Z_n and since Z_n is a commutative ring with an identity it is an integral domain.
Conversely suppose Z_n is an integral domain. Suppose that n is not prime. Then n = ab with 1 < a < n, 1 < b < n. It follows that ab = 0 in Z_n with neither a nor b being zero. Therefore they are zero divisors which is a contradiction. Hence n must be prime.
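The dichotomy in the theorem can be checked by brute force for small n. The following sketch (an illustration of ours, not from the text) lists the zero divisors of Z_n; the list is empty exactly when n is prime.

```python
def zero_divisors(n):
    """All a in Z_n with a != 0 such that ab = 0 for some b != 0."""
    return [a for a in range(1, n)
            if any((a * b) % n == 0 for b in range(1, n))]

print(zero_divisors(6))  # [2, 3, 4] -- e.g. 2 * 3 = 0 in Z_6
print(zero_divisors(7))  # []       -- Z_7 is an integral domain
```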
In Q every nonzero element has a multiplicative inverse. This is not true in Z where only the elements −1, 1 have multiplicative inverses within Z.
Definition 1.3.3. A unit in a ring R with identity is an element a which has a multiplicative inverse, that is, an element b such that ab = ba = 1. If a is a unit in R we denote its inverse by a^−1.
Hence every nonzero element of Q and of R and of C is a unit but in Z the only units are ±1. In M_2(R) the units are precisely those matrices that have nonzero determinant while in M_2(Z) the units are those integral matrices that have determinant ±1.
Definition 1.3.4. A field F is a commutative ring with an identity 1 ≠ 0 where every nonzero element is a unit.
The rationals Q, the reals R and the complexes C are all fields. If we relax the commutativity requirement and just require that in the ring R with identity each nonzero element is a unit then we get a skew field or division ring.
Lemma 1.3.5. If F is a field then F is an integral domain.
Proof. Since a field F is already a commutative ring with an identity we must only show that there are no zero divisors in F.
Suppose that ab = 0 with a ≠ 0. Since F is a field and a is nonzero it has an inverse a^−1. Hence

  a^−1(ab) = a^−1 · 0 = 0 ⟹ (a^−1 a)b = 0 ⟹ b = 0.

Therefore F has no zero divisors and must be an integral domain.
Recall that Z_n was an integral domain only when n was a prime. This turns out to also be necessary and sufficient for Z_n to be a field.
Theorem 1.3.6. Z_n is a field if and only if n is a prime.
Proof. First suppose that Z_n is a field. Then from Lemma 1.3.5 it is an integral domain, so from Theorem 1.3.2 n must be a prime.
Conversely suppose that n is a prime. We must show that Z_n is a field. Since we already know that Z_n is an integral domain we must only show that each nonzero element of Z_n is a unit. Here we need some elementary facts from number theory. If a, b are integers we use the notation a|b to indicate that a divides b.
Recall that given nonzero integers a, b their greatest common divisor or GCD d > 0 is a positive integer which is a common divisor, that is d|a and d|b, and if d_1 is any other common divisor then d_1|d. We denote the greatest common divisor of a, b by either gcd(a, b) or (a, b). It can be proved that given nonzero integers a, b their GCD exists, is unique and can be characterized as the least positive linear combination of a and b. If the GCD of a and b is 1 then we say that a and b are relatively prime or coprime. This is equivalent to being able to express 1 as a linear combination of a and b.
Now let a ∈ Z_n with n prime and a ≠ 0. Since a ≠ 0 we have that n does not divide a. Since n is prime it follows that a and n must be relatively prime, (a, n) = 1. From the number theoretic remarks above we then have that there exist x, y with

  ax + ny = 1.

However in Z_n the element ny = 0 and so in Z_n we have

  ax = 1.

Therefore a has a multiplicative inverse in Z_n and is hence a unit. Since a was an arbitrary nonzero element we conclude that Z_n is a field.
The theorem above is actually a special case of a more general result from which
Theorem 1.3.6 could also be obtained.
Theorem 1.3.7. Each ﬁnite integral domain is a ﬁeld.
Proof. Let F be a finite integral domain. We must show that F is a field. It is clearly sufficient to show that each nonzero element of F is a unit. Let

  {0, 1, r_1, ..., r_n}

be the elements of F. Let r_i be a fixed nonzero element and multiply each element of F by r_i on the left. Now

  if r_i r_j = r_i r_k then r_i (r_j − r_k) = 0.

Since r_i ≠ 0 it follows that r_j − r_k = 0 or r_j = r_k. Therefore all the products r_i r_j are distinct. Hence

  F = {0, 1, r_1, ..., r_n} = r_i F = {0, r_i, r_i r_1, ..., r_i r_n}.

Hence the identity element 1 must be in the right-hand list, that is, there is an r_j such that r_i r_j = 1. Therefore r_i has a multiplicative inverse and is hence a unit. Therefore F is a field.
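The key step of the proof — multiplication by a fixed nonzero element permutes the elements of a finite integral domain — can be seen concretely in Z_7. This short sketch (ours, not from the text) multiplies every element of Z_7 by r = 3 and recovers the whole ring, so in particular 1 appears among the products.

```python
# Multiply every element of Z_7 by the fixed nonzero element r = 3.
r = 3
products = {(r * a) % 7 for a in range(7)}
print(sorted(products))  # [0, 1, 2, 3, 4, 5, 6] -- the same set as Z_7 itself
```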
1.4 Subrings and Ideals
A very important concept in algebra is that of a substructure, that is, a subset having the same structure as the superset.
Definition 1.4.1. A subring of a ring R is a nonempty subset S that is also a ring under the same operations as R. If R is a field and S also a field then it is a subfield.
If S ⊆ R then S satisfies the same basic axioms, associativity and commutativity of addition for example. Therefore S will be a subring if it is nonempty and closed under the operations, that is, closed under addition, multiplication and taking additive inverses.
Lemma 1.4.2. A subset S of a ring R is a subring if and only if S is nonempty and whenever a, b ∈ S we have a + b ∈ S, a − b ∈ S and ab ∈ S.
Example 1.4.3. Show that if n > 1 the set nZ is a subring of Z. Here clearly nZ is nonempty. Suppose a = nz_1, b = nz_2 are two elements of nZ. Then

  a + b = nz_1 + nz_2 = n(z_1 + z_2) ∈ nZ
  a − b = nz_1 − nz_2 = n(z_1 − z_2) ∈ nZ
  ab = nz_1 nz_2 = n(nz_1 z_2) ∈ nZ.

Therefore nZ is a subring.
Example 1.4.4. Show that the set of real numbers of the form

  S = {u + v√2 : u, v ∈ Q}

is a subring of R.
Here 1 + √2 ∈ S, so S is nonempty. Suppose a = u_1 + v_1√2, b = u_2 + v_2√2 are two elements of S. Then

  a + b = (u_1 + v_1√2) + (u_2 + v_2√2) = u_1 + u_2 + (v_1 + v_2)√2 ∈ S
  a − b = (u_1 + v_1√2) − (u_2 + v_2√2) = u_1 − u_2 + (v_1 − v_2)√2 ∈ S
  a · b = (u_1 + v_1√2)(u_2 + v_2√2) = (u_1 u_2 + 2 v_1 v_2) + (u_1 v_2 + v_1 u_2)√2 ∈ S.

Therefore S is a subring.
We will see this example later as an algebraic number field.
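The multiplication formula for S can be exercised with exact rational arithmetic. In the sketch below (an illustration of ours, not from the text) a pair (u, v) of Fractions stands for u + v√2, and mul implements the product rule derived above.

```python
from fractions import Fraction

def mul(p, q):
    """(u1 + v1*sqrt(2)) * (u2 + v2*sqrt(2)), with each number stored as (u, v)."""
    (u1, v1), (u2, v2) = p, q
    return (u1 * u2 + 2 * v1 * v2, u1 * v2 + v1 * u2)

a = (Fraction(1), Fraction(1))       # 1 + sqrt(2)
b = (Fraction(1, 2), Fraction(3))    # 1/2 + 3*sqrt(2)
print(mul(a, b))  # (Fraction(13, 2), Fraction(7, 2)), i.e. 13/2 + (7/2)*sqrt(2)
```

The result again has the form u + v√2 with u, v ∈ Q, which is exactly the closure the example establishes.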
In the following we are especially interested in special types of subrings called ideals.
Definition 1.4.5. Let R be a ring and I ⊆ R. Then I is a (two-sided) ideal if the following properties hold:
(1) I is nonempty.
(2) If a, b ∈ I then a ± b ∈ I.
(3) If a ∈ I and r is any element of R then ra ∈ I and ar ∈ I.
We denote the fact that I forms an ideal in R by I ⊲ R.
Notice that if a, b ∈ I, then from (3) we have ab ∈ I and ba ∈ I. Hence I forms a subring, that is, each ideal is also a subring. {0} and the whole ring R are trivial ideals of R.
If we assume that in (3) only ra ∈ I then I is called a left ideal. Analogously we define a right ideal.
Lemma 1.4.6. Let R be a commutative ring and a ∈ R. Then the set

(a) = aR = {ar : r ∈ R}

is an ideal of R.
This ideal is called the principal ideal generated by a.
Proof. We must verify the three properties of the definition. Since a ∈ R we have that aR is nonempty. If u = ar₁, v = ar₂ are two elements of aR then

u ± v = ar₁ ± ar₂ = a(r₁ ± r₂) ∈ aR

so (2) is satisfied.
Finally let u = ar₁ ∈ aR and r ∈ R. Then

ru = rar₁ = a(rr₁) ∈ aR and ur = ar₁r = a(r₁r) ∈ aR.
Recall that a ∈ (a) if R has an identity.
Notice that if n ∈ Z then the principal ideal generated by n is precisely the ring nZ, which we have already examined. Hence for each n > 1 the subring nZ is actually an ideal. We can show more.
Theorem 1.4.7. Any subring of Z is of the form nZ for some n. Hence each subring
of Z is actually a principal ideal.
Proof. Let S be a subring of Z. If S = {0} then S = 0Z, so we may assume that S has nonzero elements. Since S is a subring, if it has nonzero elements it must have positive elements (since it contains the additive inverse of each of its elements).
Let S⁺ be the set of positive elements in S. From the remarks above this is a nonempty set and so there must be a least positive element n. We claim that S = nZ.
Let m be a positive element in S. By the division algorithm

m = qn + r,

where either r = 0 or 0 < r < n. Suppose that r ≠ 0. Then

r = m − qn.

Now m ∈ S and n ∈ S. Since S is a subring it is closed under addition, so qn ∈ S. But S is a subring, so m − qn ∈ S. It follows that r ∈ S. But this is a contradiction since n was the least positive element of S. Therefore r = 0 and m = qn. Hence each positive element of S is a multiple of n.
Now let m be a negative element of S. Then −m ∈ S and −m is positive. Hence −m = qn and thus m = (−q)n. Therefore every element of S is a multiple of n and so S = nZ.
It follows that every subring of Z is of this form and therefore every subring of Z is an ideal.
We mention that this is true in Z but not in general: for example, Z is a subring of Q but not an ideal of Q.
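Concretely, the least positive element n of the subring of Z generated by a finite set of integers is just their greatest common divisor, since that subring consists of all Z-linear combinations of the set. A small Python sketch (subring_generator is our illustrative name, not from the text):

```python
from math import gcd
from functools import reduce

# The subring of Z generated by a finite set of integers consists of all
# Z-linear combinations of them, so it equals nZ where n = gcd of the set.
def subring_generator(elements):
    return reduce(gcd, elements, 0)

print(subring_generator([12, 18]))  # 6: the subring {12x + 18y} is 6Z
print(subring_generator([4, 7]))    # 1: 4 and 7 together generate all of Z
```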
An extension of the proof of Lemma 1.4.6 gives the following. We leave the proof as an exercise.
Lemma 1.4.8. Let R be a commutative ring and a₁, …, aₙ ∈ R a finite set of elements of R. Then the set

(a₁, …, aₙ) = {r₁a₁ + r₂a₂ + ⋯ + rₙaₙ : rᵢ ∈ R}

is an ideal of R.
This ideal is called the ideal generated by a₁, …, aₙ.
Recall that a₁, …, aₙ are in (a₁, …, aₙ) if R has an identity.
Theorem 1.4.9. Let R be a commutative ring with an identity 1 ≠ 0. Then R is a field if and only if the only ideals in R are {0} and R.
Proof. Suppose that R is a field and I ◁ R is an ideal. We must show that either I = {0} or I = R. Suppose that I ≠ {0}; then we must show that I = R.
Since I ≠ {0} there exists an element a ∈ I with a ≠ 0. Since R is a field this element a has an inverse a⁻¹. Since I is an ideal it follows that a⁻¹a = 1 ∈ I. Let r ∈ R; then, since 1 ∈ I, we have r·1 = r ∈ I. Hence R ⊆ I and hence I = R.
Conversely suppose that R is a commutative ring with an identity whose only ideals are {0} and R. We must show that R is a field, or equivalently that every nonzero element of R has a multiplicative inverse.
Let a ∈ R with a ≠ 0. Since R is a commutative ring and a ≠ 0, the principal ideal aR is a nonzero ideal in R. Hence aR = R. Therefore the multiplicative identity 1 ∈ aR. It follows that there exists an r ∈ R with ar = 1. Hence a has a multiplicative inverse and R must be a field.
1.5 Factor Rings and Ring Homomorphisms
Given an ideal I in a ring R we can build a new ring called the factor ring or quotient ring of R modulo I. The special condition on the subring I that rI ⊆ I and Ir ⊆ I for all r ∈ R, which makes it an ideal, is precisely what allows this construction to yield a ring.
Definition 1.5.1. Let I be an ideal in a ring R. Then a coset of I is a subset of R of the form

r + I = {r + i : i ∈ I}

with r a fixed element of R.
Lemma 1.5.2. Let I be an ideal in a ring R. Then the cosets of I partition R, that is, any two cosets either coincide or are disjoint.
We leave the proof to the exercises.
Now on the set of all cosets of an ideal we will build a new ring.
Theorem 1.5.3. Let I be an ideal in a ring R. Let R/I be the set of all cosets of I in R, that is,

R/I = {r + I : r ∈ R}.

We define addition and multiplication on R/I in the following manner:

(r₁ + I) + (r₂ + I) = (r₁ + r₂) + I
(r₁ + I)(r₂ + I) = (r₁r₂) + I.

Then R/I forms a ring called the factor ring of R modulo I. The zero element of R/I is 0 + I and the additive inverse of r + I is −r + I.
Further, if R is commutative then R/I is commutative, and if R has an identity then R/I has an identity 1 + I.
Proof. The proof that R/I satisfies the ring axioms under the definitions above is straightforward. For example

(r₁ + I) + (r₂ + I) = (r₁ + r₂) + I = (r₂ + r₁) + I = (r₂ + I) + (r₁ + I)

and so addition is commutative.
What must be shown is that both addition and multiplication are well-defined. That is, if

r₁ + I = r′₁ + I and r₂ + I = r′₂ + I

then

(r₁ + I) + (r₂ + I) = (r′₁ + I) + (r′₂ + I)

and

(r₁ + I)(r₂ + I) = (r′₁ + I)(r′₂ + I).
Now if r₁ + I = r′₁ + I then r₁ − r′₁ ∈ I and so r₁ = r′₁ + i₁ for some i₁ ∈ I. Similarly if r₂ + I = r′₂ + I then r₂ − r′₂ ∈ I and so r₂ = r′₂ + i₂ for some i₂ ∈ I. Then

(r₁ + I) + (r₂ + I) = (r′₁ + i₁ + I) + (r′₂ + i₂ + I) = (r′₁ + I) + (r′₂ + I)

since i₁ + I = I and i₂ + I = I. Similarly

(r₁ + I)(r₂ + I) = (r′₁ + i₁ + I)(r′₂ + i₂ + I)
= (r′₁r′₂ + r′₁i₂ + r′₂i₁ + i₁i₂) + I
= (r′₁r′₂) + I

since all the products other than r′₁r′₂ lie in the ideal I.
This shows that addition and multiplication are well-defined. It also shows why the ideal property is necessary.
As an example let R be the integers Z. As we have seen, each subring is an ideal and is of the form nZ for some natural number n. The factor ring Z/nZ is called the residue class ring modulo n, denoted Zₙ. Notice that we can take as cosets

0 + nZ, 1 + nZ, …, (n − 1) + nZ.

Addition and multiplication of cosets is then just addition and multiplication modulo n; thus this is just a formalization of the ring Zₙ that we have already looked at. Recall that Zₙ is an integral domain if and only if n is prime, and Zₙ is a field for precisely the same n. If n = 0 then Z/nZ is the same as Z.
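The cosets of nZ and the well-definedness argument of Theorem 1.5.3 can be made concrete in a few lines of Python. This is a minimal sketch (the class Coset is our illustration, not a standard library type); note that equal cosets with different representatives give equal sums and products:

```python
# A sketch of the factor ring Z/nZ: the coset r + nZ is stored by an
# arbitrary representative r, and operations follow Theorem 1.5.3.
class Coset:
    def __init__(self, r, n):
        self.n = n
        self.r = r  # any representative of r + nZ

    def __eq__(self, other):
        # r1 + nZ = r2 + nZ exactly when r1 - r2 lies in nZ
        return self.n == other.n and (self.r - other.r) % self.n == 0

    def __add__(self, other):
        return Coset(self.r + other.r, self.n)

    def __mul__(self, other):
        return Coset(self.r * other.r, self.n)

# Well-definedness: different representatives of the same coset
# produce equal results under + and *.
a, a2 = Coset(2, 6), Coset(8, 6)   # both represent 2 + 6Z
b = Coset(5, 6)
assert a == a2
assert a + b == a2 + b             # sum is independent of representatives
assert a * b == a2 * b
print((a * b).r % 6)               # 2 * 5 = 10, and 10 mod 6 = 4
```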
We now show that ideals and factor rings are closely related to certain mappings
between rings.
Definition 1.5.4. Let R and S be rings. Then a mapping f : R → S is a ring homomorphism if

f(r₁ + r₂) = f(r₁) + f(r₂) for any r₁, r₂ ∈ R
f(r₁r₂) = f(r₁)f(r₂) for any r₁, r₂ ∈ R.

In addition,
(1) f is an epimorphism if it is surjective.
(2) f is a monomorphism if it is injective.
(3) f is an isomorphism if it is bijective, that is, both surjective and injective. In this case R and S are said to be isomorphic rings, which we denote by R ≅ S.
(4) f is an endomorphism if R = S, that is, it is a ring homomorphism from a ring to itself.
(5) f is an automorphism if R = S and f is an isomorphism.
Lemma 1.5.5. Let R and S be rings and let f : R → S be a ring homomorphism. Then
(1) f(0) = 0, where the first 0 is the zero element of R and the second is the zero element of S.
(2) f(−r) = −f(r) for any r ∈ R.
Proof. We obtain f(0) = 0 from the equation f(0) = f(0 + 0) = f(0) + f(0). Hence 0 = f(0) = f(r − r) = f(r + (−r)) = f(r) + f(−r), that is, f(−r) = −f(r).
Definition 1.5.6. Let R and S be rings and let f : R → S be a ring homomorphism. Then the kernel of f is

ker(f) = {r ∈ R : f(r) = 0}.

The image of f, denoted im(f), is the range of f within S. That is,

im(f) = {s ∈ S : there exists r ∈ R with f(r) = s}.
Theorem 1.5.7 (ring isomorphism theorem). Let R and S be rings and let

f : R → S

be a ring homomorphism. Then
(1) ker(f) is an ideal in R, im(f) is a subring of S and

R/ker(f) ≅ im(f).

(2) Conversely, suppose that I is an ideal in a ring R. Then the map f : R → R/I given by f(r) = r + I for r ∈ R is a ring homomorphism whose kernel is I and whose image is R/I.
The theorem says that the concepts of ideal of a ring and kernel of a ring homomorphism coincide: each ideal is the kernel of a ring homomorphism and the kernel of each ring homomorphism is an ideal.
Proof. Let f : R → S be a ring homomorphism and let I = ker(f). We show first that I is an ideal. If r₁, r₂ ∈ I then f(r₁) = f(r₂) = 0. It follows from the homomorphism property that

f(r₁ ± r₂) = f(r₁) ± f(r₂) = 0 ± 0 = 0
f(r₁r₂) = f(r₁)f(r₂) = 0·0 = 0.

Therefore I is a subring.
Now let i ∈ I and r ∈ R. Then

f(ri) = f(r)f(i) = f(r)·0 = 0 and f(ir) = f(i)f(r) = 0·f(r) = 0

and hence I is an ideal.
Consider the factor ring R/I. Define f* : R/I → im(f) by f*(r + I) = f(r). We show that f* is an isomorphism.
First we show that it is well-defined. Suppose that r₁ + I = r₂ + I; then r₁ − r₂ ∈ I = ker(f). It follows that f(r₁ − r₂) = 0, so f(r₁) = f(r₂). Hence f*(r₁ + I) = f*(r₂ + I) and the map f* is well-defined.
Now

f*((r₁ + I) + (r₂ + I)) = f*((r₁ + r₂) + I) = f(r₁ + r₂)
= f(r₁) + f(r₂) = f*(r₁ + I) + f*(r₂ + I)

and

f*((r₁ + I)(r₂ + I)) = f*((r₁r₂) + I) = f(r₁r₂)
= f(r₁)f(r₂) = f*(r₁ + I)f*(r₂ + I).

Hence f* is a homomorphism. We must now show that it is injective and surjective.
Suppose that f*(r₁ + I) = f*(r₂ + I). Then f(r₁) = f(r₂), so that f(r₁ − r₂) = 0. Hence r₁ − r₂ ∈ ker(f) = I. Therefore r₁ + I = r₂ + I and the map f* is injective.
Finally let s ∈ im(f). Then there exists an r ∈ R such that f(r) = s. Then f*(r + I) = s, so the map f* is surjective and hence an isomorphism. This proves the first part of the theorem.
To prove the second part let I be an ideal in R and R/I the factor ring. Consider the map f : R → R/I given by f(r) = r + I. From the definition of addition and multiplication in the factor ring R/I it is clear that this is a homomorphism. Consider the kernel of f. If r ∈ ker(f) then f(r) = r + I = 0 = 0 + I. This implies that r ∈ I, and hence the kernel of this map is exactly the ideal I, completing the theorem.
Theorem 1.5.7 is called the ring isomorphism theorem or the first ring isomorphism theorem. We mention that there is an analogous theorem for each algebraic structure, in particular for groups and vector spaces. We will mention the result for groups in Section 1.8.
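For a concrete instance of part (2) and the isomorphism in part (1), take the natural map f : Z → Z₆, f(r) = r mod 6, whose kernel is 6Z. A brute-force Python check over a finite window of integers (the window is only a sample, of course):

```python
# The natural homomorphism f : Z -> Z_6, f(r) = r mod 6, has kernel 6Z,
# and the induced map f*(r + 6Z) = f(r) realizes Z/ker(f) ≅ im(f).
n = 6
f = lambda r: r % n

# In a finite window, the kernel elements are exactly the multiples of 6.
window = range(-24, 25)
kernel = [r for r in window if f(r) == 0]
assert kernel == list(range(-24, 25, 6))

# f respects both ring operations, so it is a ring homomorphism
# (checked here only on a sample of integers).
for r1 in range(-10, 10):
    for r2 in range(-10, 10):
        assert f(r1 + r2) == (f(r1) + f(r2)) % n
        assert f(r1 * r2) == (f(r1) * f(r2)) % n

print(sorted({f(r) for r in window}))  # the image is all of Z_6
```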
1.6 Fields of Fractions
The integers are an integral domain and the rationals Q are a field that contains the integers. First we show that Q is the smallest field containing Z.
Theorem 1.6.1. The rationals Q are the smallest field containing the integers Z. That is, if Z ⊆ F ⊆ Q with F a subfield of Q then F = Q.
Proof. Since Z ⊆ F we have m, n ∈ F for any two integers m, n. Since F is a subfield it is closed under division, that is, under taking multiplicative inverses, and hence each fraction m/n with n ≠ 0 lies in F. Since each element of Q is such a fraction it follows that Q ⊆ F. Since F ⊆ Q it follows that F = Q.
Notice that to construct the rationals from the integers we form all the fractions m/n with n ≠ 0, where m₁/n₁ = m₂/n₂ if m₁n₂ = n₁m₂. We then perform the standard operations on fractions. If we start with any integral domain D we can mimic this construction to build a field of fractions from D that is the smallest field containing D.
Theorem 1.6.2. Let D be an integral domain. Then there is a field F containing D, called the field of fractions for D, such that each element of F is a fraction from D, that is, an element of the form d₁d₂⁻¹ with d₁, d₂ ∈ D, d₂ ≠ 0. Further, F is unique up to isomorphism and is the smallest field containing D.
Proof. The proof just mimics the construction of the rationals from the integers. Let

F* = {(d₁, d₂) : d₂ ≠ 0, d₁, d₂ ∈ D}.

Define on F* the equivalence relation

(d₁, d₂) ∼ (d′₁, d′₂) if d₁d′₂ = d₂d′₁.

Let F be the set of equivalence classes and define addition and multiplication in the usual manner as for fractions, where the result is the corresponding equivalence class:

(d₁, d₂) + (d₃, d₄) = (d₁d₄ + d₂d₃, d₂d₄)
(d₁, d₂)(d₃, d₄) = (d₁d₃, d₂d₄).

It is now straightforward to verify the ring axioms for F. The multiplicative inverse of (d₁, 1) is (1, d₁) for d₁ ≠ 0 in D.
As with Z we identify the elements of F with fractions d₁/d₂.
The proof that F is the smallest field containing D is the same as for Q from Z.
As an example, Q is the field of fractions for Z. A familiar but less common example is the following.
Let R[x] be the set of polynomials over the real numbers R. It can be shown that R[x] forms an integral domain. Its field of fractions consists of all formal quotients f(x)/g(x), where f(x), g(x) are real polynomials with g(x) ≠ 0. This field is called the field of rational functions over R and is denoted R(x).
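The construction in Theorem 1.6.2 can be carried out literally for D = Z: pairs of integers with the stated equivalence relation recover ordinary fraction arithmetic. A minimal Python sketch (the helper names equivalent, add, mul are ours):

```python
# Field of fractions of D = Z: elements are pairs (d1, d2) with d2 != 0,
# where (d1, d2) ~ (e1, e2) exactly when d1*e2 == d2*e1.
def equivalent(a, b):
    return a[0] * b[1] == a[1] * b[0]

def add(a, b):
    return (a[0] * b[1] + a[1] * b[0], a[1] * b[1])

def mul(a, b):
    return (a[0] * b[0], a[1] * b[1])

half = (1, 2)
third = (2, 6)                  # a non-reduced representative of 1/3
assert equivalent(third, (1, 3))

s = add(half, third)            # (1*6 + 2*2, 2*6) = (10, 12)
assert equivalent(s, (5, 6))    # 1/2 + 1/3 = 5/6

p = mul(half, third)
assert equivalent(p, (1, 6))    # 1/2 * 1/3 = 1/6
```

Note that the operations act on representatives, and equivalence of the results does not depend on which representatives were chosen, just as in the proof.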
1.7 Characteristic and Prime Rings
We saw in the last section that Q is the smallest field containing the integers. Since any subfield of Q must contain the identity, it follows that any nontrivial subfield of Q must contain the integers and hence be all of Q. Therefore Q has no nontrivial subfields. We say that Q is a prime field.
Definition 1.7.1. A field F is a prime field if F contains no nontrivial subfields.
Lemma 1.7.2. Let K be any field. Then K contains a prime field F as a subfield.
Proof. Let K₁, K₂ be subfields of K. If k₁, k₂ ∈ K₁ ∩ K₂ then k₁ ± k₂ ∈ K₁ since K₁ is a subfield, and k₁ ± k₂ ∈ K₂ since K₂ is a subfield. Therefore k₁ ± k₂ ∈ K₁ ∩ K₂. Similarly k₁k₂⁻¹ ∈ K₁ ∩ K₂ for k₂ ≠ 0. It follows that K₁ ∩ K₂ is again a subfield; the same argument applies to the intersection of any family of subfields.
Now let F be the intersection of all subfields of K. From the argument above F is a subfield, and the only nontrivial subfield of F is F itself. Hence F is a prime field.
Definition 1.7.3. Let R be a commutative ring with an identity 1 ≠ 0. The smallest positive integer n such that n·1 = 1 + 1 + ⋯ + 1 = 0 is called the characteristic of R. If there is no such n, then R has characteristic 0. We denote the characteristic by char(R).
Notice first that the characteristics of Z, Q and R are all zero. Further, the characteristic of Zₙ is n.
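The characteristic of Zₙ can be found by literally adding the identity to itself until 0 is reached. A brute-force Python sketch (characteristic is our illustrative helper, not from the text):

```python
# char(R) for R = Z_n: the least k > 0 with k*1 = 0 in R, by repeated
# addition of the identity (with a cutoff, since char 0 never terminates).
def characteristic(n, bound=10**6):
    total = 0
    for k in range(1, bound + 1):
        total = (total + 1) % n   # add the identity once more
        if total == 0:
            return k
    return 0  # no such k up to the bound

print(characteristic(12))  # 12
print(characteristic(7))   # 7
```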
Theorem 1.7.4. Let R be an integral domain. Then the characteristic of R is either 0 or a prime. In particular the characteristic of a field is zero or a prime.
Proof. Suppose that R is an integral domain and char(R) = n ≠ 0. Suppose that n = mk with 1 < m < n, 1 < k < n. Then n·1 = 0 = (m·1)(k·1). Since R is an integral domain we have no zero divisors, and hence m·1 = 0 or k·1 = 0. However this is a contradiction since n is the least positive integer such that n·1 = 0. Therefore n must be a prime.
We have seen that every ﬁeld contains a prime ﬁeld. We extend this.
Definition 1.7.5. A commutative ring R with an identity 1 ≠ 0 is a prime ring if the only subring containing the identity is the whole ring.
Clearly both the integers Z and the modular integers Zₙ are prime rings. In fact, up to isomorphism they are the only prime rings.
Theorem 1.7.6. Let R be a prime ring. If char(R) = 0 then R ≅ Z, while if char(R) = n > 0 then R ≅ Zₙ.
Proof. Suppose that char(R) = 0. Let S = {r = m·1 : r ∈ R, m ∈ Z}. Then S is a subring of R containing the identity (see the exercises) and hence S = R. However the map m·1 ↦ m gives an isomorphism from S to Z. It follows that R is isomorphic to Z.
If char(R) = n > 0 the proof is identical. Since n·1 = 0 the subring S of R defined above is all of R and is isomorphic to Zₙ.
Theorem 1.7.6 can be extended to fields, with Q taking the place of Z and Zₚ, with p a prime, taking the place of Zₙ.
Theorem 1.7.7. Let K be a prime field. If K has characteristic 0 then K ≅ Q, while if K has characteristic p then K ≅ Zₚ.
Proof. The proof is identical to that of Theorem 1.7.6; however, we consider the smallest subfield K₁ of K containing S.
We mention that there can be infinite fields of characteristic p. Consider for example the field of fractions of the polynomial ring Zₚ[x]. This is the field of rational functions with coefficients in Zₚ.
We now give a theorem on fields of characteristic p that will be important much later when we look at Galois theory.
Theorem 1.7.8. Let K be a field of characteristic p. Then the mapping φ : K → K given by φ(k) = k^p is an injective endomorphism of K. In particular (a + b)^p = a^p + b^p for any a, b ∈ K.
This mapping is called the Frobenius homomorphism of K.
Further, if K is finite, φ is an automorphism.
Proof. We first show that φ is a homomorphism. Now

φ(ab) = (ab)^p = a^p b^p = φ(a)φ(b).

We need a little more work for addition. Writing C(p, i) for the binomial coefficient, we have

φ(a + b) = (a + b)^p = Σ_{i=0}^{p} C(p, i) a^i b^(p−i) = a^p + Σ_{i=1}^{p−1} C(p, i) a^i b^(p−i) + b^p

by the binomial expansion, which holds in any commutative ring. However

C(p, i) = p(p − 1) ⋯ (p − i + 1) / (i(i − 1) ⋯ 1)

and it is clear that p | C(p, i) for 1 ≤ i ≤ p − 1. Hence in K we have C(p, i)·1 = 0 and so

φ(a + b) = (a + b)^p = a^p + b^p = φ(a) + φ(b).

Therefore φ is a homomorphism.
Further, φ is always injective. To see this suppose that φ(x) = φ(y). Then

φ(x − y) = 0 ⟹ (x − y)^p = 0.

But K is a field, so there are no zero divisors, and we must have x − y = 0, that is, x = y.
If K is finite and φ is injective, it must also be surjective and hence an automorphism of K.
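The additivity of φ is easy to test exhaustively in the fields Zₚ. A Python sketch (frobenius_is_additive is our name), which also shows the identity failing for a composite modulus, where Zₙ is not even a domain:

```python
# In K = Z_p the Frobenius map x -> x^p satisfies (a + b)^p = a^p + b^p.
# Brute-force check of this identity for all pairs in small prime fields.
def frobenius_is_additive(p):
    return all((a + b) ** p % p == (a ** p + b ** p) % p
               for a in range(p) for b in range(p))

assert all(frobenius_is_additive(p) for p in (2, 3, 5, 7, 11))

# The same identity fails for a composite modulus, e.g. n = 4:
print((1 + 1) ** 4 % 4, (1 ** 4 + 1 ** 4) % 4)  # 0 2 (not equal)
```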
1.8 Groups
We close this first chapter by introducing some basic definitions and results from group theory that mirror the results presented for rings and fields. We will look at group theory in more detail later in the book; proofs will be given at that point.
Definition 1.8.1. A group G is a set with one binary operation (which we will denote by multiplication) such that
(1) The operation is associative.
(2) There exists an identity for this operation.
(3) Each g ∈ G has an inverse for this operation.
If, in addition, the operation is commutative, the group G is called an abelian group. The order of G is the number of elements in G, denoted by |G|. If |G| < ∞, G is a finite group; otherwise G is an infinite group.
Groups most often arise from invertible mappings of a set onto itself. Such mappings are called permutations.
Theorem 1.8.2. The set of all permutations of a set A forms a group called the symmetric group on A, which we denote by S_A. If A has more than 2 elements then S_A is nonabelian.
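For A = {0, 1, 2} the six permutations and their compositions can be enumerated directly. A Python sketch (compose, composing right-to-left, is our convention, not the book's):

```python
from itertools import permutations

# S_A for A = {0, 1, 2}: permutations as tuples, where the tuple f sends
# i to f[i]; compose(f, g) applies g first, then f.
def compose(f, g):
    return tuple(f[g[i]] for i in range(len(g)))

elems = list(permutations(range(3)))
print(len(elems))  # 6, the order of the symmetric group on 3 elements

# Non-abelian: two transpositions that do not commute.
s, t = (1, 0, 2), (0, 2, 1)   # swap 0,1 and swap 1,2 respectively
print(compose(s, t))  # (1, 2, 0)
print(compose(t, s))  # (2, 0, 1)
```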
Definition 1.8.3. Let G₁ and G₂ be groups. Then a mapping f : G₁ → G₂ is a (group) homomorphism if

f(g₁g₂) = f(g₁)f(g₂) for any g₁, g₂ ∈ G₁.

As with rings we have further:
(1) f is an epimorphism if it is surjective.
(2) f is a monomorphism if it is injective.
(3) f is an isomorphism if it is bijective, that is, both surjective and injective. In this case G₁ and G₂ are said to be isomorphic groups, which we denote by G₁ ≅ G₂.
(4) f is an endomorphism if G₁ = G₂, that is, it is a homomorphism from a group to itself.
(5) f is an automorphism if G₁ = G₂ and f is an isomorphism.
Lemma 1.8.4. Let G₁ and G₂ be groups and let f : G₁ → G₂ be a homomorphism. Then
(a) f(1) = 1, where the first 1 is the identity element of G₁ and the second is the identity element of G₂.
(b) f(g⁻¹) = (f(g))⁻¹ for any g ∈ G₁.
If A is a set, |A| denotes the size of A.
Theorem 1.8.5. If A₁ and A₂ are sets with |A₁| = |A₂| then S_{A₁} ≅ S_{A₂}. If |A| = n with n finite we call S_A the symmetric group on n elements, which we denote by Sₙ. Further we have |Sₙ| = n!.
Subgroups are defined in a manner analogous to subrings. Special types of subgroups, called normal subgroups, take the place in group theory that ideals play in ring theory.
Definition 1.8.6. A subset H of a group G is a subgroup if H ≠ ∅ and H forms a group under the same operation as G. Equivalently, H is a subgroup if H ≠ ∅ and H is closed under the operation and inverses.
Definition 1.8.7. If H is a subgroup of a group G, then a left coset of H is a subset of G of the form gH = {gh : h ∈ H}. A right coset of H is a subset of G of the form Hg = {hg : h ∈ H}.
As with rings, the cosets of a subgroup partition the group. We call the number of right cosets of a subgroup H in a group G the index of H in G, denoted [G : H]. One can prove that the number of right cosets is equal to the number of left cosets. For finite groups we have the following beautiful result, called Lagrange's theorem.
Theorem 1.8.8 (Lagrange's theorem). Let G be a finite group and H a subgroup. Then the order of H divides the order of G. In particular

|G| = |H| [G : H].
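Lagrange's theorem can be observed directly for the additive group G = Z₁₂ and the subgroup H = {0, 4, 8}. A Python sketch (representing cosets as frozensets is our encoding choice):

```python
# Lagrange's theorem for G = Z_12 under addition mod 12 and H = {0, 4, 8}:
# the cosets g + H partition G and |G| = |H| * [G : H].
G = set(range(12))
H = {0, 4, 8}

cosets = {frozenset((g + h) % 12 for h in H) for g in G}

assert set().union(*cosets) == G              # the cosets cover G
assert all(len(c) == len(H) for c in cosets)  # every coset has size |H|

print(len(G), len(H), len(cosets))  # 12 3 4, and indeed 12 = 3 * 4
```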
Normal subgroups take the place of ideals in group theory.
Definition 1.8.9. A subgroup H of a group G is a normal subgroup, denoted H ◁ G, if every left coset of H is also a right coset, that is, gH = Hg for each g ∈ G. Note that this does not say that g and H commute elementwise, just that the subsets gH and Hg are the same. Equivalently, H is normal if g⁻¹Hg = H for any g ∈ G.
Normal subgroups allow us to construct factor groups just as ideals allowed us to
construct factor rings.
Theorem 1.8.10. Let H be a normal subgroup of a group G. Let G/H be the set of all cosets of H in G, that is,

G/H = {gH : g ∈ G}.

We define multiplication on G/H in the following manner:

(g₁H)(g₂H) = g₁g₂H.

Then G/H forms a group called the factor group or quotient group of G modulo H. The identity element of G/H is 1H and the inverse of gH is g⁻¹H.
Further, if G is abelian then G/H is also abelian.
Finally, as with rings and ideals, normal subgroups and factor groups are closely tied to homomorphisms.
Definition 1.8.11. Let G₁ and G₂ be groups and let f : G₁ → G₂ be a homomorphism. Then the kernel of f, denoted ker(f), is

ker(f) = {g ∈ G₁ : f(g) = 1}.

The image of f, denoted im(f), is the range of f within G₂. That is,

im(f) = {h ∈ G₂ : there exists g ∈ G₁ with f(g) = h}.
Theorem 1.8.12 (group isomorphism theorem). Let G₁ and G₂ be groups and let f : G₁ → G₂ be a homomorphism. Then
(1) ker(f) is a normal subgroup of G₁, im(f) is a subgroup of G₂ and

G₁/ker(f) ≅ im(f).

(2) Conversely, suppose that H is a normal subgroup of a group G. Then the map f : G → G/H given by f(g) = gH for g ∈ G is a homomorphism whose kernel is H and whose image is G/H.
1.9 Exercises
1. Let φ : K → R be a homomorphism from a field K to a ring R. Show: either φ(a) = 0 for all a ∈ K, or φ is a monomorphism.
2. Let R be a ring and M ≠ ∅ an arbitrary set. Show that the following are equivalent:
(i) The ring of all mappings from M to R is a field.
(ii) M contains only one element and R is a field.
3. Let π be a set of prime numbers. Define

Q_π = { a/b : all prime divisors of b are in π }.

(i) Show that Q_π is a subring of Q.
(ii) Let R be a subring of Q and let a/b ∈ R with coprime integers a, b. Show that 1/b ∈ R.
(iii) Determine all subrings R of Q. (Hint: Consider the set of all prime divisors of denominators of reduced elements of R.)
4. Prove Lemma 1.5.2.
5. Let R be a commutative ring with an identity 1 ∈ R. Let A, B and C be ideals in R. Set A + B := {a + b : a ∈ A, b ∈ B} and AB := ({ab : a ∈ A, b ∈ B}), the ideal generated by the products ab.
Show:
(i) A + B ◁ R, A + B = (A ∪ B)
(ii) AB = {a₁b₁ + ⋯ + aₙbₙ : n ∈ N, aᵢ ∈ A, bᵢ ∈ B}, AB ⊆ A ∩ B
(iii) A(B + C) = AB + AC, (A + B)C = AC + BC, (AB)C = A(BC)
(iv) A ≠ R ⟺ A ∩ R* = ∅, where R* denotes the units of R
(v) a, b ∈ R ⟹ (a) + (b) = {xa + yb : x, y ∈ R}
(vi) a, b ∈ R ⟹ (a)(b) = (ab). Here (a) = Ra = {xa : x ∈ R}.
6. Solve the following congruence:

3x ≡ 5 mod 7.

Is this congruence also solvable mod 17?
7. Show that the set of 2 × 2 matrices over a ring R forms a ring.
8. Prove Lemma 1.4.8.
9. Prove that if R is a ring with identity and S = {r = m·1 : r ∈ R, m ∈ Z}, then S is a subring of R containing the identity.
Chapter 2
Maximal and Prime Ideals
2.1 Maximal and Prime Ideals
In the first chapter we defined ideals I in a ring R and then the factor ring R/I of R modulo the ideal I. We saw further that if R is commutative then R/I is also commutative, and if R has an identity then so does R/I. This raises further questions concerning the structure of factor rings. In particular we can ask under what conditions R/I forms an integral domain and under what conditions R/I forms a field. These questions lead us to define certain special properties of ideals, called prime ideals and maximal ideals.
For motivation let us look back at the integers Z. Recall that each proper ideal in Z has the form nZ for some n > 1 and the resulting factor ring Z/nZ is isomorphic to Zₙ. We proved the following result.
Theorem 2.1.1. Zₙ = Z/nZ is an integral domain if and only if n = p, a prime. Further, Zₙ is a field, again if and only if n = p is a prime.
Hence for the integers Z a factor ring is a field if and only if it is an integral domain. We will see later that this is not true in general. However, what is clear is that the special ideals nZ leading to integral domains and fields are precisely those with n a prime. We look at the ideals pZ with p a prime in two different ways and then use these in subsequent sections to give the general definitions. We first need a famous result, Euclid's lemma, from number theory. For integers a, b the notation a | b means that a divides b.
Lemma 2.1.2 (Euclid). If p is a prime and p | ab then p | a or p | b.
Proof. Recall that the greatest common divisor or GCD of two integers a, b is an integer d > 0 such that d is a common divisor of both a and b, and if d₁ is another common divisor of a and b then d₁ | d. We express the GCD of a, b by d = (a, b). It is known that for any two integers a, b their GCD exists, is unique, and further is the least positive linear combination of a and b, that is, the least positive integer of the form ax + by for integers x, y. The integers a, b are relatively prime if their GCD is 1, that is, (a, b) = 1. In this case 1 is a linear combination of a and b.
Now suppose p | ab where p is a prime. If p does not divide a then, since the only positive divisors of p are 1 and p, it follows that (a, p) = 1. Hence 1 is expressible as a linear combination of a and p, that is, ax + py = 1 for some integers x, y. Multiplying through by b gives

abx + pby = b.

Now p | ab so p | abx, and p | pby. Therefore p | abx + pby, that is, p | b.
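The proof is effective: the extended Euclidean algorithm produces the integers x, y with ax + py = 1. A Python sketch (ext_gcd is our helper, written in the usual recursive form):

```python
# Euclid's lemma made constructive: if p does not divide a, extended gcd
# gives x, y with a*x + p*y = 1, and multiplying by b exhibits p | b.
def ext_gcd(a, b):
    # returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

p, a, b = 7, 10, 21          # p | a*b = 210 and p does not divide a
g, x, y = ext_gcd(a, p)
assert g == 1                # (a, p) = 1
assert a * x + p * y == 1

# Multiply a*x + p*y = 1 through by b: both summands below are
# divisible by p, hence so is their sum b.
assert a * b * x + p * b * y == b
assert b % p == 0
```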
We now recast this lemma in two different ways in terms of the ideal pZ. Notice that pZ consists precisely of all the multiples of p. Hence p | ab is equivalent to ab ∈ pZ.
Lemma 2.1.3. If p is a prime and ab ∈ pZ then a ∈ pZ or b ∈ pZ.
This conclusion will be taken as the deﬁnition of a prime ideal in the next section.
Lemma 2.1.4. If p is a prime and pZ ⊆ nZ then n = 1 or n = p. That is, every ideal in Z containing pZ, with p a prime, is either all of Z or pZ.
Proof. Suppose that pZ ⊆ nZ. Then p ∈ nZ, so p is a multiple of n. Since p is a prime it follows easily that either n = 1 or n = p.
In Section 2.3 the conclusion of this lemma will be taken as the deﬁnition of a
maximal ideal.
2.2 Prime Ideals and Integral Domains
Motivated by Lemma 2.1.3 we make the following general definition for commutative rings R with identity.
Definition 2.2.1. Let R be a commutative ring. An ideal P in R with P ≠ R is a prime ideal if whenever ab ∈ P with a, b ∈ R then either a ∈ P or b ∈ P.
This property of an ideal is precisely what is necessary and sufficient to make the factor ring R/P an integral domain.
Theorem 2.2.2. Let R be a commutative ring with an identity 1 ≠ 0 and let P be a nontrivial ideal in R. Then P is a prime ideal if and only if the factor ring R/P is an integral domain.
Proof. Let R be a commutative ring with an identity 1 ≠ 0 and let P be a prime ideal. We show that R/P is an integral domain. From the results in the last chapter, R/P is again a commutative ring with an identity. Therefore we must show that there are no zero divisors in R/P. Suppose that (a + P)(b + P) = 0 in R/P. The zero element in R/P is 0 + P and hence

(a + P)(b + P) = 0 = 0 + P ⟹ ab + P = 0 + P ⟹ ab ∈ P.

However P is a prime ideal, so we must then have a ∈ P or b ∈ P. If a ∈ P then a + P = P = 0 + P, so a + P = 0 in R/P. The identical argument works if b ∈ P. Therefore there are no zero divisors in R/P and hence R/P is an integral domain.
Conversely suppose that R/P is an integral domain. We must show that P is a prime ideal. Suppose that ab ∈ P. Then (a + P)(b + P) = ab + P = 0 + P. Hence in R/P we have

(a + P)(b + P) = 0.

However R/P is an integral domain, so it has no zero divisors. It follows that either a + P = 0, and hence a ∈ P, or b + P = 0, and b ∈ P. Therefore either a ∈ P or b ∈ P, so P is a prime ideal.
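For R = Z and P = nZ this says that Zₙ has no zero divisors exactly when n is prime. A brute-force Python check (has_zero_divisors is our illustrative helper):

```python
# Z/nZ is an integral domain exactly when n is prime, i.e. exactly when
# the ideal nZ is prime. Brute-force search for zero divisors in Z_n.
def has_zero_divisors(n):
    return any(a * b % n == 0
               for a in range(1, n) for b in range(1, n))

print([n for n in range(2, 16) if not has_zero_divisors(n)])
# the moduli without zero divisors are the primes: [2, 3, 5, 7, 11, 13]
```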
In a commutative ring R we can define a multiplication of ideals. We then obtain an exact analog of Euclid's lemma. Since R is commutative, each ideal is two-sided.
Definition 2.2.3. Let R be a commutative ring with an identity 1 ≠ 0 and let A and B be ideals in R. Define

AB = {a₁b₁ + ⋯ + aₙbₙ : aᵢ ∈ A, bᵢ ∈ B, n ∈ N}.

That is, AB is the set of finite sums of products ab with a ∈ A and b ∈ B.
Lemma 2.2.4. Let R be a commutative ring with an identity 1 ≠ 0 and let A and B be ideals in R. Then AB is an ideal.
Proof. We must verify that AB is a subring and that it is closed under multiplication from R. Let r₁, r₂ ∈ AB. Then

r₁ = a₁b₁ + ⋯ + aₙbₙ for some aᵢ ∈ A, bᵢ ∈ B

and

r₂ = a′₁b′₁ + ⋯ + a′ₘb′ₘ for some a′ᵢ ∈ A, b′ᵢ ∈ B.

Then

r₁ ± r₂ = a₁b₁ + ⋯ + aₙbₙ ± a′₁b′₁ ± ⋯ ± a′ₘb′ₘ

which is clearly in AB. Further, r₁r₂ is a sum of products of the form aᵢbᵢa′ⱼb′ⱼ. Consider for example the first such term a₁b₁a′₁b′₁. Since R is commutative this is equal to

(a₁a′₁)(b₁b′₁).

Now a₁a′₁ ∈ A since A is a subring and b₁b′₁ ∈ B since B is a subring. Hence this term is in AB, and similarly for each of the other terms. Therefore r₁r₂ ∈ AB and hence AB is a subring.
Now let r ∈ R and consider rr₁. This is

rr₁ = ra₁b₁ + ⋯ + raₙbₙ.

Now raᵢ ∈ A for each i since A is an ideal. Hence each summand is in AB and then rr₁ ∈ AB. Therefore AB is an ideal.
Lemma 2.2.5. Let R be a commutative ring with an identity 1 ≠ 0 and let A and B be ideals in R. If P is a prime ideal in R then AB ⊆ P implies that A ⊆ P or B ⊆ P.
Proof. Suppose that AB ⊆ P with P a prime ideal, and suppose that B is not contained in P. We show that A ⊆ P. Since AB ⊆ P, each product ab with a ∈ A, b ∈ B lies in P. Choose a b ∈ B with b ∉ P and let a be an arbitrary element of A. Then ab ∈ P. Since P is a prime ideal this implies either a ∈ P or b ∈ P. But by assumption b ∉ P, so a ∈ P. Since a was arbitrary we have A ⊆ P.
2.3 Maximal Ideals and Fields
Now, motivated by Lemma 2.1.4, we define a maximal ideal.
Definition 2.3.1. Let R be a ring and M an ideal in R. Then M is a maximal ideal if M ≠ R and if J is an ideal in R with M ⊆ J then M = J or J = R.
If R is a commutative ring with an identity, this property of an ideal M is precisely what is necessary and sufficient for R/M to be a field.
Theorem 2.3.2. Let R be a commutative ring with an identity 1 ≠ 0 and let M be an ideal in R. Then M is a maximal ideal if and only if the factor ring R/M is a field.
Proof. Suppose that 1 is a commutative ring with an identity 1 = 0 and let 1 be an
ideal in 1. Suppose ﬁrst that 1 is a maximal ideal and we show that the factor ring
1,1 is a ﬁeld.
Since 1 is a commutative ring with an identity the factor ring 1,1 is also a com
mutative ring with an identity. We must show then that each nonzero element of 1,1
has a multiplicative inverse. Suppose then that r = r ÷1 ÷ 1,1 is a nonzero element
of 1,1. It follows that r … 1. Consider the set (r. 1) = {r. ÷ i : . ÷ 1. i ÷ 1}.
This is also an ideal (see exercises) called the ideal generated by r and 1, denoted
(r. 1). Clearly 1 Ç (r. 1) and since r … 1 and r = r 1 ÷ 0 ÷ (r. 1) it follows that
(r. 1) = 1. Since 1 is a maximal ideal it follows that (r. 1) = 1 the whole ring.
Hence the identity element 1 ÷ (r. 1) and so there exist elements . ÷ 1 and i ÷ 1
such that 1 = r. ÷i . But then 1 ÷ (r ÷1)(. ÷1) and so 1 ÷1 = (r ÷1)(. ÷1).
Since 1 + M is the multiplicative identity of R/M it follows that x + M is the multiplicative inverse of r + M in R/M. Since r + M was an arbitrary nonzero element of R/M it follows that R/M is a field.
Now suppose that R/M is a field for an ideal M. We show that M must be maximal. Suppose then that M_1 is an ideal with M ⊆ M_1 and M ≠ M_1. We must show that M_1 is all of R. Since M ≠ M_1 there exists an r ∈ M_1 with r ∉ M. Therefore the element r + M is nonzero in the factor ring R/M and since R/M is a field it must have a multiplicative inverse x + M. Hence (r + M)(x + M) = rx + M = 1 + M and therefore there is an m ∈ M with 1 = rx + m. Since r ∈ M_1 and M_1 is an ideal we get that rx ∈ M_1. Further since M ⊆ M_1 it follows that rx + m ∈ M_1 and so 1 ∈ M_1. If r_1 is an arbitrary element of R then r_1·1 = r_1 ∈ M_1. Hence R ⊆ M_1 and so M_1 = R. Therefore M is a maximal ideal.
Recall that a ﬁeld is already an integral domain. Combining this with the ideas of
prime and maximal ideals we obtain:
Theorem 2.3.3. Let R be a commutative ring with an identity 1 ≠ 0. Then each maximal ideal is a prime ideal.
Proof. Suppose that R is a commutative ring with an identity and M is a maximal ideal in R. Then from Theorem 2.3.2 we have that the factor ring R/M is a field. But a field is an integral domain, so R/M is an integral domain. Therefore from Theorem 2.2.2 we have that M must be a prime ideal.
The converse is not true in general. That is, there are prime ideals that are not maximal. Consider for example R = Z, the integers, and P = {0}. Then P is an ideal and R/P = Z/{0} ≅ Z is an integral domain. Hence {0} is a prime ideal. However Z is not a field, so {0} is not maximal. Note however that in the integers Z a nonzero proper ideal is maximal if and only if it is a prime ideal.
2.4 The Existence of Maximal Ideals
In this section we prove that in any ring R with an identity there do exist maximal ideals. Further, given an ideal I ≠ R there exists a maximal ideal M_0 such that I ⊆ M_0. To prove this we need three important equivalent results from logic and set theory.
First recall that a partial order ≤ on a set S is a reflexive, transitive relation on S. That is, a ≤ a for all a ∈ S and if a ≤ b, b ≤ c then a ≤ c. This is a “partial” order since there may exist elements a, b ∈ S where neither a ≤ b nor b ≤ a. If A is any set then it is clear that containment of subsets is a partial order on the power set P(A).
If ≤ is a partial order on a set M, then a chain on M is a subset K ⊆ M such that a, b ∈ K implies that a ≤ b or b ≤ a. A chain on M is bounded if there exists an m ∈ M such that k ≤ m for all k ∈ K. The element m is called an upper bound for K. An element m_0 ∈ M is maximal if whenever m ∈ M with m_0 ≤ m then m = m_0. We now state the three important results from logic.
Zorn’s lemma. If each chain of M has an upper bound in M then there is at least one maximal element in M.
Axiom of well-ordering. Each set M can be well-ordered, so that each nonempty subset of M contains a least element.
Axiom of choice. Let {M_i : i ∈ I} be a nonempty collection of nonempty sets. Then there is a mapping f : I → ∪_{i∈I} M_i with f(i) ∈ M_i for all i ∈ I.
The following can be proved.
Theorem 2.4.1. Zorn’s lemma, the axiom of well-ordering and the axiom of choice are all equivalent.
We now show the existence of maximal ideals in commutative rings with identity.
Theorem 2.4.2. Let R be a commutative ring with an identity 1 ≠ 0 and let I be an ideal in R with I ≠ R. Then there exists a maximal ideal M_0 in R with I ⊆ M_0. In particular a ring with an identity contains maximal ideals.
Proof. Let I be an ideal in the commutative ring R. We must show that there exists a maximal ideal M_0 in R with I ⊆ M_0.
Let

M = {X : X is an ideal of R with I ⊆ X ≠ R}.

Then M is partially ordered by containment. We want to show first that each chain in M has an upper bound in M. If K = {X_j : X_j ∈ M, j ∈ J} is a chain let

X′ = ∪_{j∈J} X_j.

If a, b ∈ X′ then there exist i, j ∈ J with a ∈ X_i, b ∈ X_j. Since K is a chain either X_i ⊆ X_j or X_j ⊆ X_i. Without loss of generality suppose that X_i ⊆ X_j so that a, b ∈ X_j. Then a ± b ∈ X_j ⊆ X′ and ab ∈ X_j ⊆ X′ since X_j is an ideal. Further if r ∈ R then ra ∈ X_j ⊆ X′ since X_j is an ideal. Therefore X′ is an ideal in R.
Since X_j ≠ R it follows that 1 ∉ X_j for all j ∈ J. Therefore 1 ∉ X′ and so X′ ≠ R. It follows that under the partial order of containment X′ is an upper bound for K.
We now use Zorn’s lemma. From the argument above each chain in M has an upper bound. Hence for an ideal I the set M above has a maximal element. This maximal element M_0 is then a maximal ideal containing I.
2.5 Principal Ideals and Principal Ideal Domains
Recall again that in the integers Z each ideal I is of the form nZ for some integer n. Hence in Z each ideal can be generated by a single element.
Lemma 2.5.1. Let R be a commutative ring and a_1, ..., a_n be elements of R. Then the set

(a_1, ..., a_n) = {r_1a_1 + ··· + r_na_n : r_i ∈ R}

forms an ideal in R called the ideal generated by a_1, ..., a_n.
Proof. The proof is straightforward. Let

a = r_1a_1 + ··· + r_na_n,  b = s_1a_1 + ··· + s_na_n

with r_1, ..., r_n, s_1, ..., s_n elements of R, be two elements of (a_1, ..., a_n). Then

a ± b = (r_1 ± s_1)a_1 + ··· + (r_n ± s_n)a_n ∈ (a_1, ..., a_n)
ab = (r_1s_1a_1)a_1 + (r_1s_2a_1)a_2 + ··· + (r_ns_na_n)a_n ∈ (a_1, ..., a_n)

so (a_1, ..., a_n) forms a subring. Further if r ∈ R we have

ra = (rr_1)a_1 + ··· + (rr_n)a_n ∈ (a_1, ..., a_n)

and so (a_1, ..., a_n) is an ideal.
Definition 2.5.2. Let R be a commutative ring. An ideal I ⊆ R is a principal ideal if it has a single generator. That is,

I = (a) = aR for some a ∈ R.

We now restate Theorem 1.4.7 of Chapter 1.
Theorem 2.5.3. Every nonzero ideal in Z is a principal ideal.
Proof. Every ideal I in Z is of the form nZ. This is the principal ideal generated by n.
Definition 2.5.4. A principal ideal domain or PID is an integral domain in which every ideal is principal.
Corollary 2.5.5. The integers Z are a principal ideal domain.
We mention that the set of polynomials F[x] with coefficients from a field F is also a principal ideal domain. We will return to this in the next chapter.
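The principal ideal property can also be seen concretely: in Z the ideal generated by two elements a and b coincides with the principal ideal generated by gcd(a, b). A quick numerical check (the function name is ours):

```python
from math import gcd

def ideal_elements(a, b, bound):
    # The elements of (a, b) = {ra + sb : r, s in Z} lying in [-bound, bound].
    return {r * a + s * b
            for r in range(-bound, bound + 1)
            for s in range(-bound, bound + 1)
            if -bound <= r * a + s * b <= bound}

# (12, 18) = 6Z since gcd(12, 18) = 6.
print(ideal_elements(12, 18, 30) == set(range(-30, 31, 6)))   # True
```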
Not every integral domain is a PID. Consider F[x, y], the set of polynomials over a field F in two variables x, y. Let I consist of all the polynomials with zero constant term.
Lemma 2.5.6. The set I in F[x, y] as defined above is an ideal but not a principal ideal.
Proof. We leave the proof that I forms an ideal to the exercises. To show that it is not a principal ideal suppose I = (p(x, y)). Now the polynomial q(x) = x has zero constant term so q(x) ∈ I. Hence p(x, y) cannot be a constant polynomial. Further, if p(x, y) had any terms with y in them there would be no way to multiply p(x, y) by a polynomial h(x, y) and obtain just x. Therefore p(x, y) can contain no terms with y in them. But the same argument using s(y) = y shows that p(x, y) cannot have any terms with x in them. Therefore there can be no such p(x, y) generating I, and so I is not principal and F[x, y] is not a principal ideal domain.
2.6 Exercises
1. Consider the set (r, I) = {rx + i : x ∈ R, i ∈ I} where I is an ideal. Prove that this is also an ideal, called the ideal generated by r and I, denoted (r, I).
2. Let R and S be commutative rings and let φ : R → S be a ring epimorphism. Let M be a maximal ideal in R. Show: φ(M) is a maximal ideal in S if and only if ker(φ) ⊆ M. Is φ(M) always a prime ideal of S?
3. Let A_1, ..., A_t be ideals of a commutative ring R. Let P be a prime ideal of R. Show:
(i) ∩_{i=1}^t A_i ⊆ P ⟹ A_j ⊆ P for at least one index j.
(ii) ∩_{i=1}^t A_i = P ⟹ A_j = P for at least one index j.
4. Which of the following ideals A are prime ideals of R? Which are maximal ideals?
(i) A = (x), R = Z[x].
(ii) A = (x^2), R = Z[x].
(iii) A = (1 + √5), R = Z[√5].
(iv) A = (x, y), R = Q[x, y].
5. Let ω = (1 + √−3)/2. Show that (2) is a prime ideal and even a maximal ideal of Z[ω], but (2) is neither a prime ideal nor a maximal ideal of Z[i].
6. Let R = {a/b : a, b ∈ Z, b odd}. Show that R is a subring of Q and that there is only one maximal ideal M in R.
7. Let R be a ring with an identity. Let x, y ∈ R and let x ≠ 0 not be a zero divisor. Further let (x) be a prime ideal with (x) ⊆ (y) ≠ R. Show that (x) = (y).
8. Consider F[x, y], the set of polynomials over a field F in two variables x, y. Let I consist of all the polynomials with zero constant term. Prove that the set I is an ideal.
Chapter 3
Prime Elements and Unique Factorization Domains
3.1 The Fundamental Theorem of Arithmetic
The integers Z have served as much of our motivation for properties of integral domains. In the last chapter we saw that Z is a principal ideal domain and furthermore that prime ideals ≠ {0} are maximal. From the viewpoint of the multiplicative structure of Z and the viewpoint of classical number theory the most important property of Z is the fundamental theorem of arithmetic. This states that any integer n ≠ 0 is uniquely expressible as a product of primes, where uniqueness is up to ordering and the introduction of ±1, that is, units. In this chapter we show that this property is not unique to the integers; there are many other integral domains where it also holds. These are called unique factorization domains and we will present several examples. First we review the fundamental theorem of arithmetic, its proof and several other ideas from classical number theory.
Theorem 3.1.1 (fundamental theorem of arithmetic). Given any integer n ≠ 0 there is a factorization

n = εp_1p_2···p_k

where ε = ±1 and p_1, ..., p_k are primes. Further this factorization is unique up to the ordering of the factors.
There are two main ingredients that go into the proof: induction and Euclid’s lemma. We presented the latter in the last chapter. In turn, however, Euclid’s lemma depends upon the existence of greatest common divisors and their linear expressibility. Therefore to begin we present several basic ideas from number theory.
The starting point for the theory of numbers is divisibility.
Definition 3.1.2. If a, b are integers we say that a divides b, or that a is a factor or divisor of b, if there exists an integer q such that b = aq. We denote this by a | b; b is then a multiple of a. If b > 1 is an integer whose only factors are ±1, ±b then b is a prime; otherwise b > 1 is composite.
Lemma 3.1.3. The following properties hold:
(1) a | b ⟹ a | bc for any integer c.
(2) a | b and b | c implies a | c.
(3) a | b and a | c implies that a | (bx + cy) for any integers x, y.
(4) a | b and b | a implies that a = ±b.
(5) If a | b and a > 0, b > 0 then a ≤ b.
(6) a | b if and only if ca | cb for any integer c ≠ 0.
(7) a | 0 for all a ∈ Z and 0 | a only for a = 0.
(8) a | ±1 only for a = ±1.
(9) a_1 | b_1 and a_2 | b_2 implies that a_1a_2 | b_1b_2.
If b, c, x, y are integers then an integer bx + cy is called a linear combination of b, c. Thus part (3) of Lemma 3.1.3 says that if a is a common divisor of b, c then a divides any linear combination of b and c.
Further, note that if b > 1 is composite then there exist x > 0 and y > 0 such that b = xy, and from part (5) we must have 1 < x < b, 1 < y < b.
In ordinary arithmetic, given a, b we can always attempt to divide a into b. The next result, called the division algorithm, says that if a > 0 either a will divide b or the remainder of the division of b by a will be less than a.
Theorem 3.1.4 (division algorithm). Given integers a, b with a > 0 there exist unique integers q and r such that b = qa + r where either r = 0 or 0 < r < a.
One may think of q and r as the quotient and remainder respectively when dividing
b by a.
Proof. Given a, b with a > 0 consider the set

S = {b − qa ≥ 0 : q ∈ Z}.

If b > 0 then b + a ≥ 0 and this sum is in S. If b ≤ 0 then there exists a q > 0 with −qa < b. Then b + qa > 0 and is in S. Therefore in either case S is nonempty. Hence S is a nonempty subset of N ∪ {0} and therefore has a least element r. If r ≠ 0 we must show that 0 < r < a. Suppose r ≥ a; then r = a + x with x ≥ 0 and x < r since a > 0. Then b − qa = r = a + x ⟹ b − (q + 1)a = x. This means that x ∈ S. Since x < r this contradicts the minimality of r. Therefore if r ≠ 0 it follows that 0 < r < a.
The only thing left is to show the uniqueness of q and r. Suppose b = q_1a + r_1 also. By the construction above r_1 must also be the minimal element of S. Hence r_1 ≤ r and r ≤ r_1 so r = r_1. Now

b − qa = b − q_1a ⟹ (q_1 − q)a = 0

but since a > 0 it follows that q_1 − q = 0 so that q = q_1.
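The proof's construction (descending to the least nonnegative element of S in steps of a) translates directly into code. The sketch below, with our own naming, mirrors it; in practice Python's built-in divmod computes the same pair:

```python
def division_algorithm(b, a):
    # Find q, r with b = q*a + r and 0 <= r < a, for a > 0,
    # by locating the least nonnegative element of S = {b - qa}.
    assert a > 0
    q, r = 0, b
    while r < 0:        # climb up into S
        r += a
        q -= 1
    while r >= a:       # descend to the least element of S
        r -= a
        q += 1
    return q, r

print(division_algorithm(23, 7))    # (3, 2):  23 = 3*7 + 2
print(division_algorithm(-23, 7))   # (-4, 5): -23 = -4*7 + 5
```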
The next idea that is necessary is the concept of greatest common divisor.
Definition 3.1.5. Given nonzero integers a, b their greatest common divisor or GCD d > 0 is a positive integer which is a common divisor, that is d | a and d | b, and if d_1 is any other common divisor then d_1 | d. We denote the greatest common divisor of a, b by either gcd(a, b) or (a, b).
Certainly, if a, b are nonzero integers with a > 0 and a | b then a = gcd(a, b). The next result says that given any nonzero integers they do have a greatest common divisor and it is unique.
Theorem 3.1.6. Given nonzero integers a, b their GCD exists, is unique and can be characterized as the least positive linear combination of a and b.
Proof. Given nonzero a, b consider the set

S = {ax + by > 0 : x, y ∈ Z}.

Now a^2 + b^2 > 0 so S is a nonempty subset of N and hence has a least element d > 0. We show that d is the GCD.
First we must show that d is a common divisor. Now d = ax + by and is the least such positive linear combination. By the division algorithm a = qd + r with 0 ≤ r < d. Suppose r ≠ 0. Then r = a − qd = a − q(ax + by) = (1 − qx)a − qby > 0. Hence r is a positive linear combination of a and b and therefore is in S. But then r < d, contradicting the minimality of d in S. It follows that r = 0 and so a = qd and d | a. An identical argument shows that d | b and so d is a common divisor of a and b. Let d_1 be any other common divisor of a and b. Then d_1 divides any linear combination of a and b and so d_1 | d. Therefore d is the GCD of a and b.
Finally we must show that d is unique. Suppose d_1 is another GCD of a and b. Then d_1 > 0 and d_1 is a common divisor of a, b. Then d_1 | d since d is a GCD. Identically d | d_1 since d_1 is a GCD. Therefore d = ±d_1 and then d = d_1 since they are both positive.
If (a, b) = 1 then we say that a, b are relatively prime. It follows that a and b are relatively prime if and only if 1 is expressible as a linear combination of a and b. We need the following three results.
Lemma 3.1.7. If d = (a, b) then a = a_1d and b = b_1d with (a_1, b_1) = 1.
Proof. If d = (a, b) then d | a and d | b. Hence a = a_1d and b = b_1d. We have

d = ax + by = a_1dx + b_1dy.
Dividing both sides of the equation by d we obtain

1 = a_1x + b_1y.

Therefore (a_1, b_1) = 1.
Lemma 3.1.8. For any integer c we have that (a, b) = (a, b + ac).
Proof. Suppose (a, b) = d and (a, b + ac) = d_1. Now d is the least positive linear combination of a and b; suppose d = ax + by. d_1 is a linear combination of a and b + ac, so that

d_1 = ar + (b + ac)s = a(cs + r) + bs.

Hence d_1 is also a linear combination of a and b and therefore d_1 ≥ d. On the other hand d_1 | a and d_1 | (b + ac) and so d_1 | b. Therefore d_1 | d so d_1 ≤ d. Combining these we must have d_1 = d.
The next result, called the Euclidean algorithm, provides a technique for both finding the GCD of two integers and expressing the GCD as a linear combination.
Theorem 3.1.9 (Euclidean algorithm). Given integers b and a > 0 with a ∤ b, form the repeated divisions

b = q_1a + r_1, 0 < r_1 < a
a = q_2r_1 + r_2, 0 < r_2 < r_1
⋮
r_{n-2} = q_nr_{n-1} + r_n, 0 < r_n < r_{n-1}
r_{n-1} = q_{n+1}r_n.

The last nonzero remainder r_n is the GCD of a, b. Further r_n can be expressed as a linear combination of a and b by successively eliminating the r_i’s in the intermediate equations.
Proof. In taking the successive divisions as outlined in the statement of the theorem each remainder r_i gets strictly smaller and still nonnegative. Hence the process must finally end with a zero remainder. Therefore there is a last nonzero remainder r_n. We must show that this is the GCD.
Now from Lemma 3.1.8 the gcd (a, b) = (a, b − q_1a) = (a, r_1) = (r_1, a − q_2r_1) = (r_1, r_2). Continuing in this manner we have then that (a, b) = (r_{n-1}, r_n) = r_n since r_n divides r_{n-1}. This shows that r_n is the GCD.
To express r_n as a linear combination of a and b notice first that

r_n = r_{n-2} − q_nr_{n-1}.
Substituting this in the immediately preceding division we get

r_n = r_{n-2} − q_n(r_{n-3} − q_{n-1}r_{n-2}) = (1 + q_nq_{n-1})r_{n-2} − q_nr_{n-3}.

Doing this successively we ultimately express r_n as a linear combination of a and b.
Example 3.1.10. Find the GCD of 270 and 2412 and express it as a linear combination of 270 and 2412.
We apply the Euclidean algorithm:

2412 = (8)(270) + 252
270 = (1)(252) + 18
252 = (14)(18).

Therefore the last nonzero remainder is 18, which is the GCD. We now must express 18 as a linear combination of 270 and 2412.
From the first equation

252 = 2412 − (8)(270)

which gives in the second equation

270 = 2412 − (8)(270) + 18 ⟹ 18 = (−1)(2412) + (9)(270)

which is the desired linear combination.
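The back-substitution in the example can be automated: this is the extended Euclidean algorithm, a standard companion to Theorem 3.1.9, sketched here in a minimal recursive form (our code, not the book's):

```python
def extended_gcd(a, b):
    # Return (d, x, y) with d = gcd(a, b) and d = a*x + b*y.
    if b == 0:
        return (a, 1, 0)
    d, x, y = extended_gcd(b, a % b)
    # d = b*x + (a % b)*y and a % b = a - (a // b)*b,
    # so d = a*y + b*(x - (a // b)*y).
    return (d, y, x - (a // b) * y)

d, x, y = extended_gcd(2412, 270)
print(d, x, y)      # 18 -1 9, i.e. 18 = (-1)(2412) + (9)(270)
```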
The next result that we need is Euclid’s lemma. We stated and proved this in the
last chapter but we restate it here.
Lemma 3.1.11 (Euclid’s lemma). If p is a prime and p | ab then p | a or p | b.
We can now prove the fundamental theorem of arithmetic. Induction sufﬁces to
show that there always exists such a decomposition into prime factors.
Lemma 3.1.12. Any integer n > 1 can be expressed as a product of primes, perhaps with only one factor.
Proof. The proof is by induction. n = 2 is prime so the claim is true at the lowest level. Suppose that any integer k with 2 ≤ k < n can be decomposed into prime factors; we must show that n then also has a prime factorization.
If n is prime then we are done. Suppose then that n is composite. Hence n = m_1m_2 with 1 < m_1 < n, 1 < m_2 < n. By the inductive hypothesis both m_1 and m_2 can be expressed as products of primes. Therefore n can also, using the primes from m_1 and m_2, completing the proof.
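The induction corresponds to the familiar procedure of trial division, which exhibits such a decomposition explicitly; a short sketch:

```python
def prime_factors(n):
    # Decompose n > 1 into a list of (not necessarily distinct) primes.
    factors = []
    p = 2
    while p * p <= n:
        while n % p == 0:     # remove the prime factor p completely
            factors.append(p)
            n //= p
        p += 1
    if n > 1:                 # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(360))     # [2, 2, 2, 3, 3, 5]
```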
Before we continue to the fundamental theorem we mention that the existence of a
prime decomposition, unique or otherwise, can be used to prove that the set of primes
is inﬁnite. The proof we give goes back to Euclid and is quite straightforward.
Theorem 3.1.13. There are inﬁnitely many primes.
Proof. Suppose that there are only finitely many primes p_1, ..., p_n. Each of these is positive so we can form the positive integer

N = p_1p_2···p_n + 1.

From Lemma 3.1.12 N has a prime decomposition. In particular there is a prime p which divides N. Then

p | (p_1p_2···p_n + 1).

Since the only primes are assumed to be p_1, p_2, ..., p_n it follows that p = p_i for some i = 1, ..., n. But then p | p_1p_2···p_n, so p cannot divide p_1···p_n + 1, which is a contradiction. Therefore p is not one of the given primes, showing that the list of primes must be endless.
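Euclid's construction is easy to illustrate numerically: N leaves remainder 1 on division by every prime in the list, so each prime factor of N is new. For instance (this particular list is our illustration):

```python
from math import prod

primes = [2, 3, 5, 7, 11, 13]
N = prod(primes) + 1          # 30031, which factors as 59 * 509

# N is congruent to 1 modulo each listed prime ...
print(all(N % p == 1 for p in primes))    # True
# ... so no listed prime divides N, yet N > 1 has a prime factor.
print(59 * 509 == N)                      # True
```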
We can now prove the fundamental theorem of arithmetic.
Proof. We assume that n ≥ 1. If n ≤ −1 we consider −n and the proof is the same. The statement certainly holds for n = 1 with k = 0. Now suppose n > 1. From Lemma 3.1.12, n has a prime decomposition

n = p_1p_2···p_m.

We must show that this is unique up to the ordering of the factors. Suppose then that n has another such factorization n = q_1q_2···q_k with the q_i all prime. We must show that m = k and that the primes are the same. Now we have

n = p_1p_2···p_m = q_1···q_k.

Assume that k ≥ m. From

n = p_1p_2···p_m = q_1···q_k

it follows that p_1 | q_1q_2···q_k. From Lemma 3.1.11 then we must have that p_1 | q_i for some i. But q_i is prime and p_1 > 1, so it follows that p_1 = q_i. Therefore we can eliminate p_1 and q_i from both sides of the factorization to obtain

p_2···p_m = q_1···q_{i-1}q_{i+1}···q_k.

Continuing in this manner we can eliminate all the p_i from the left side of the factorization to obtain

1 = q_{m+1}···q_k.

If q_{m+1}, ..., q_k were primes this would be impossible. Therefore m = k and each prime p_i appears among the primes q_1, ..., q_k. Therefore the factorizations differ only in the order of the factors, proving the theorem.
3.2 Prime Elements, Units and Irreducibles
We now let R be an arbitrary integral domain and attempt to mimic the divisibility definitions and properties.
Definition 3.2.1. Let R be an integral domain.
(1) Suppose that a, b ∈ R. Then a is a factor or divisor of b if there exists a c ∈ R with b = ac. We denote this, as in the integers, by a | b. If a is a factor of b then b is called a multiple of a.
(2) An element a ∈ R is a unit if a has a multiplicative inverse within R, that is, there exists an element a^{-1} ∈ R with aa^{-1} = 1.
(3) A prime element of R is an element p ≠ 0 such that p is not a unit and if p | ab then p | a or p | b.
(4) An irreducible in R is an element c ≠ 0 such that c is not a unit and if c = ab then a or b must be a unit.
(5) a and b in R are associates if there exists a unit e ∈ R with a = eb.
Notice that in the integers Z the units are just ±1 and the set of prime elements coincides with the set of irreducible elements; in Z these are precisely the prime numbers. On the other hand if F is a field every nonzero element is a unit, so in F there are no prime elements and no irreducible elements.
Recall that the modular rings Z_n are fields (and integral domains) when n is a prime. In general if n is not a prime then Z_n is a commutative ring with an identity and a unit is still an invertible element. We can characterize the units within Z_n.
Lemma 3.2.2. a ∈ Z_n is a unit if and only if (a, n) = 1.
Proof. Suppose (a, n) = 1. Then there exist x, y ∈ Z such that ax + ny = 1. This implies that ax ≡ 1 mod n, which in turn implies that ax = 1 in Z_n and therefore a is a unit.
Conversely suppose a is a unit in Z_n. Then there is an x ∈ Z_n with ax = 1. In terms of congruences then

ax ≡ 1 mod n ⟹ n | ax − 1 ⟹ ax − 1 = ny ⟹ ax − ny = 1.

Therefore 1 is a linear combination of a and n and so (a, n) = 1.
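Lemma 3.2.2 can be verified directly for small moduli: the invertible residues mod n are exactly those coprime to n. A sketch (the helper name is ours):

```python
from math import gcd

def units_mod(n):
    # Residues a in Z_n possessing a multiplicative inverse mod n.
    return [a for a in range(1, n)
            if any(a * b % n == 1 for b in range(1, n))]

print(units_mod(12))                                    # [1, 5, 7, 11]
print([a for a in range(1, 12) if gcd(a, 12) == 1])     # the same list
```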
If R is a commutative ring with an identity then the set of units within R forms a group.
Lemma 3.2.3. If R is a commutative ring with an identity then the set of units in R forms an abelian group under ring multiplication. This is called the unit group of R, denoted U(R).
Proof. The commutativity and associativity of U(R) follow from the ring properties. The identity of U(R) is the multiplicative identity of R, while the ring multiplicative inverse of each unit serves as its group inverse. We must show that U(R) is closed under ring multiplication. If a ∈ R is a unit we denote its multiplicative inverse by a^{-1}. Now suppose a, b ∈ U(R). Then a^{-1}, b^{-1} exist. It follows that

(ab)(b^{-1}a^{-1}) = a(bb^{-1})a^{-1} = aa^{-1} = 1.

Hence ab has an inverse, namely b^{-1}a^{-1} (= a^{-1}b^{-1} in a commutative ring), and hence ab is also a unit. Therefore U(R) is closed under ring multiplication.
In general irreducible elements are not prime. Consider for example the subring of the complex numbers (see exercises) given by

R = Z[i√5] = {x + iy√5 : x, y ∈ Z}.

This is a subring of the complex numbers C and hence can have no zero divisors. Therefore R is an integral domain.
For an element x + iy√5 ∈ R define its norm by

N(x + iy√5) = |x + iy√5|^2 = x^2 + 5y^2.

Since x, y ∈ Z it is clear that the norm of an element in R is a nonnegative integer. Further if a ∈ R with N(a) = 0 then a = 0.
We have the following result concerning the norm.
Lemma 3.2.4. Let R and N be as above. Then
(1) N(ab) = N(a)N(b) for any elements a, b ∈ R.
(2) The units of R are those a ∈ R with N(a) = 1. In R the only units are ±1.
Proof. The fact that the norm is multiplicative is straightforward and left to the exercises. If a ∈ R is a unit then there exists a multiplicative inverse b ∈ R with ab = 1. Then N(ab) = N(a)N(b) = 1. Since both N(a) and N(b) are nonnegative integers we must have N(a) = N(b) = 1.
Conversely suppose that N(a) = 1. If a = x + iy√5 then x^2 + 5y^2 = 1. Since x, y ∈ Z we must have y = 0 and x^2 = 1. Then a = x = ±1.
Using this lemma we can show that R possesses irreducible elements that are not prime.
Lemma 3.2.5. Let R be as above. Then 3 = 3 + i·0·√5 is an irreducible element in R but 3 is not prime.
Proof. Suppose that 3 = ab with a, b ∈ R and a, b nonunits. Then N(3) = 9 = N(a)N(b) with neither N(a) = 1 nor N(b) = 1. Hence both N(a) = 3 and N(b) = 3. Let a = x + iy√5. It follows that x^2 + 5y^2 = 3. Since x, y ∈ Z this is impossible. Therefore one of a or b must be a unit and 3 is an irreducible element.
We show that 3 is not prime in R. Let a = 2 + i√5 and b = 2 − i√5. Then ab = 9 and hence 3 | ab. Suppose 3 | a, so that a = 3c for some c ∈ R. Then

9 = N(a) = N(3)N(c) = 9N(c) ⟹ N(c) = 1.

Therefore c is a unit in R and from Lemma 3.2.4 we get c = ±1. Hence a = ±3. This is a contradiction, so 3 does not divide a. An identical argument shows that 3 does not divide b. Therefore 3 is not a prime element in R.
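The norm computations above are easy to check mechanically; here an element x + iy√5 of R is modeled as the integer pair (x, y) (this representation is our own device, not the book's):

```python
def norm(z):
    x, y = z
    return x * x + 5 * y * y     # N(x + iy*sqrt(5)) = x^2 + 5y^2

def mul(z, w):
    # (x1 + iy1*sqrt5)(x2 + iy2*sqrt5)
    #   = (x1*x2 - 5*y1*y2) + i(x1*y2 + x2*y1)*sqrt5
    (x1, y1), (x2, y2) = z, w
    return (x1 * x2 - 5 * y1 * y2, x1 * y2 + x2 * y1)

# No element of R has norm 3, so 3 (of norm 9) cannot split into two nonunits.
print(any(norm((x, y)) == 3
          for x in range(-2, 3) for y in range(-2, 3)))   # False

# Yet 9 factors in two genuinely different ways:
print(mul((3, 0), (3, 0)), mul((2, 1), (2, -1)))          # (9, 0) (9, 0)
```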
We now examine the relationship between prime elements and irreducibles.
Theorem 3.2.6. Let R be an integral domain. Then
(1) Each prime element of R is irreducible.
(2) p ∈ R is a prime element if and only if p ≠ 0 and (p) = pR is a prime ideal.
(3) p ∈ R is irreducible if and only if p ≠ 0 and (p) = pR is maximal in the set of all principal ideals of R which are not equal to R.
Proof. (1) Suppose that p ∈ R is a prime element and p = ab. We must show that either a or b must be a unit. Now p | ab so either p | a or p | b. Without loss of generality we may assume that p | a, so a = pr for some r ∈ R. Hence p = ab = (pr)b = p(rb). However R is an integral domain, so p − prb = p(1 − rb) = 0 implies that 1 − rb = 0 and hence rb = 1. Therefore b is a unit and hence p is irreducible.
(2) Suppose that p is a prime element. Then p ≠ 0. Consider the ideal pR and suppose that ab ∈ pR. Then ab is a multiple of p and hence p | ab. Since p is prime it follows that p | a or p | b. If p | a then a ∈ pR, while if p | b then b ∈ pR. Therefore pR is a prime ideal.
Conversely suppose that pR is a prime ideal and suppose that p | ab. Then ab ∈ pR so a ∈ pR or b ∈ pR. If a ∈ pR then p | a and if b ∈ pR then p | b, and so p is prime.
(3) Let p be irreducible; then p ≠ 0. Suppose that pR ⊆ aR where a ∈ R. Then p = ra for some r ∈ R. Since p is irreducible it follows that either a is a unit or r is a unit. If r is a unit then a = r^{-1}p and so aR = pR, and pR ≠ R since p is not a unit. If a is a unit then aR = R. Therefore pR is maximal in the set of principal ideals not equal to R.
Conversely suppose p ≠ 0 and pR is a maximal ideal in the set of principal ideals ≠ R. Let p = ab with a not a unit. We must show that b is a unit. Since aR ≠ R and pR ⊆ aR, from the maximality we must have pR = aR. Hence a = rp for some r ∈ R. Then p = ab = rpb and as before we must have rb = 1 and b a unit.
Theorem 3.2.7. Let R be a principal ideal domain. Then:
(1) An element p ∈ R is irreducible if and only if it is a prime element.
(2) A nonzero ideal of R is a maximal ideal if and only if it is a prime ideal.
(3) The maximal ideals of R are precisely those ideals pR where p is a prime element.
Proof. First note that {0} is a prime ideal but not maximal.
(1) We already know that prime elements are irreducible. To show the converse suppose that p is irreducible. Since R is a principal ideal domain, from Theorem 3.2.6 we have that pR is a maximal ideal, and each maximal ideal is also a prime ideal. Therefore from Theorem 3.2.6 we have that p is a prime element.
(2) We already know that each maximal ideal is a prime ideal. To show the converse suppose that P ≠ {0} is a prime ideal. Then P = pR where p is a prime element with p ≠ 0. Therefore p is irreducible from part (1) and hence pR is a maximal ideal from Theorem 3.2.6.
(3) This follows directly from the proof of part (2) and Theorem 3.2.6.
3.3 Unique Factorization Domains
We now consider integral domains where there is unique factorization into primes. If R is an integral domain and a, b ∈ R then we say that a and b are associates if there exists a unit c ∈ R with a = cb.
Definition 3.3.1. An integral domain D is a unique factorization domain or UFD if for each d ∈ D either d = 0, d is a unit, or d has a factorization into primes which is unique up to ordering and unit factors. This means that if

d = p_1···p_m = q_1···q_k

then m = k and each p_i is an associate of some q_j.
There are several relationships in integral domains that are equivalent to unique
factorization.
Definition 3.3.2. Let R be an integral domain.
(1) R has property (A) if and only if for each nonunit a ≠ 0 there are irreducible elements q_1, ..., q_r ∈ R satisfying a = q_1···q_r.
(2) R has property (A′) if and only if for each nonunit a ≠ 0 there are prime elements p_1, ..., p_r ∈ R satisfying a = p_1···p_r.
(3) R has property (B) if and only if whenever q_1, ..., q_r and q′_1, ..., q′_s are irreducible elements of R with

q_1···q_r = q′_1···q′_s

then r = s and there is a permutation σ ∈ S_r such that for each i ∈ {1, ..., r} the elements q_i and q′_{σ(i)} are associates (uniqueness up to ordering and unit factors).
(4) R has property (C) if and only if each irreducible element of R is a prime element.
Notice that properties (A) and (C) together are equivalent to what we defined as unique factorization. Hence an integral domain satisfying (A) and (C) is a UFD. We show next that there are other equivalent formulations.
Theorem 3.3.3. In an integral domain R the following are equivalent:
(1) R is a UFD.
(2) R satisfies properties (A) and (B).
(3) R satisfies properties (A) and (C).
(4) R satisfies property (A′).
Proof. As remarked before the statement of the theorem, by definition (A) and (C) are equivalent to unique factorization. We show here that (2), (3) and (4) are equivalent. First we show that (2) implies (3).
Suppose that R satisfies properties (A) and (B). We must show that it also satisfies (C), that is, we must show that if q ∈ R is irreducible then q is prime. Suppose that q ∈ R is irreducible and q | ab with a, b ∈ R. Then we have ab = cq for some c ∈ R. If a is a unit, from ab = cq we get that b = a^{-1}cq and q | b. Identically if b is a unit. Therefore we may assume that neither a nor b is a unit.
If c = 0 then since R is an integral domain either a = 0 or b = 0 and q | a or q | b. We may assume then that c ≠ 0.
If c is a unit then q = c^{-1}ab, and since q is irreducible either c^{-1}a or b is a unit. If c^{-1}a is a unit then a is also a unit, so if c is a unit either a or b is a unit, contrary to our assumption.
Therefore we may assume that c ≠ 0 and c is not a unit. From property (A) we have

a = q_1···q_r,  b = q′_1···q′_s,  c = q″_1···q″_t
where q_1, ..., q_r, q′_1, ..., q′_s, q″_1, ..., q″_t are all irreducible. Hence

q_1···q_r q′_1···q′_s = q″_1···q″_t q.

From property (B) q is an associate of some q_i or q′_j. Hence q | q_i or q | q′_j. It follows that q | a or q | b and therefore q is a prime element.
That (3) implies (4) is direct.

We show that (4) implies (2).

Suppose that R satisfies property (A′). We must show that it satisfies both (A) and (B). We show first that (A) follows from (A′) by showing that irreducible elements are prime.

Suppose that q is irreducible. Then from (A′) we have

q = p_1 ⋯ p_r

with each p_i prime. Since q is irreducible, without loss of generality p_2 ⋯ p_r is a unit and p_1 is a nonunit, and hence p_i | 1 for i = 2, …, r. Thus q is an associate of p_1 and so q is prime. Therefore (A) holds.
We now show that (B) holds. Let

q_1 ⋯ q_r = q′_1 ⋯ q′_s

where the q_i, q′_j are all irreducible and hence prime. Then

q′_1 | q_1 ⋯ q_r

and so q′_1 | q_i for some i. Without loss of generality suppose q′_1 | q_1. Then q_1 = aq′_1. Since q_1 is irreducible it follows that a is a unit, and q_1 and q′_1 are associates. It follows then that

aq_2 ⋯ q_r = q′_2 ⋯ q′_s

since R has no zero divisors. Property (B) then holds by induction, and the theorem is proved.
Note that in our new terminology Z is a UFD. In the next section we will present other examples of UFDs; however, not every integral domain is a unique factorization domain.

As we defined in the last section, let R be the following subring of C:

R = Z[i√5] = {x + i√5 y : x, y ∈ Z}.

R is an integral domain, and we showed using the norm that 3 is irreducible in R. Analogously we can show that the elements 2 + i√5, 2 − i√5 are also irreducible in R, and further that 3 is not an associate of either 2 + i√5 or 2 − i√5. Then

9 = 3 · 3 = (2 + i√5)(2 − i√5)

gives two different decompositions of an element in terms of irreducible elements. The fact that R is not a UFD also follows from the fact that 3 is an irreducible element which is not prime.
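This failure of unique factorization is easy to check numerically. The sketch below (the helper names are ours) represents x + i√5 y as the integer pair (x, y) and uses the multiplicative norm N(x + i√5 y) = x² + 5y²; since x² + 5y² = 3 has no integer solutions, no element of norm 3 exists, so the three factors above admit no proper factorization.

```python
def mul(a, b):
    # (x1 + i*sqrt(5)*y1)(x2 + i*sqrt(5)*y2)
    #   = (x1*x2 - 5*y1*y2) + i*sqrt(5)*(x1*y2 + y1*x2)
    (x1, y1), (x2, y2) = a, b
    return (x1 * x2 - 5 * y1 * y2, x1 * y2 + y1 * x2)

def norm(a):
    # N(x + i*sqrt(5)*y) = x^2 + 5*y^2; multiplicative
    x, y = a
    return x * x + 5 * y * y

# 9 = 3 * 3 = (2 + i*sqrt(5))(2 - i*sqrt(5))
print(mul((3, 0), (3, 0)))    # (9, 0)
print(mul((2, 1), (2, -1)))   # (9, 0)

# All three factors have norm 9; a proper factor would need norm 3,
# but x^2 + 5*y^2 = 3 has no integer solutions.
print([norm(t) for t in ((3, 0), (2, 1), (2, -1))])  # [9, 9, 9]
```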
Notice also that Euclid’s proof, that there are inﬁnitely many primes in Z, works in
any UFD.
Theorem 3.3.4. Let 1 be a UFD. Then 1 has inﬁnitely many prime elements.
Unique factorization is tied to the famous solution of Fermat’s big theorem. Wiles
and Taylor in 1995 proved the following.
Theorem 3.3.5. The equation .
]
÷,
]
= :
]
has no integral solutions with .,: = 0
for any prime ¸ _ 3.
Kummer tried to prove this theorem by attempting to factor x^p = z^p − y^p. Call the conclusion of Theorem 3.3.5 in an integral domain R property (F_p). Let ζ = e^{2πi/p}. Then

z^p − y^p = ∏_{j=0}^{p−1} (z − ζ^j y).

View this equation in the ring

R = Z[ζ] = {∑_{j=0}^{p−1} a_j ζ^j : a_j ∈ Z}.

Kummer proved that if R is a UFD then property (F_p) holds. However, by independent work of Uchida and Montgomery (1971), R is a UFD only if p ≤ 19 (see [41]).
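The factorization identity above can be verified numerically for specific values (a quick sketch; the function name is ours):

```python
import cmath

def cyclotomic_product(z, y, p):
    # Computes the product of (z - zeta^j * y) for j = 0, ..., p-1,
    # where zeta = e^{2*pi*i/p}; this should equal z^p - y^p.
    zeta = cmath.exp(2j * cmath.pi / p)
    prod = 1
    for j in range(p):
        prod *= (z - zeta**j * y)
    return prod

p, z, y = 5, 7, 3
print(z**p - y**p)                 # 16564
print(cyclotomic_product(z, y, p)) # approximately 16564 + 0j
```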
3.4 Principal Ideal Domains and Unique Factorization

In this section we prove that every principal ideal domain (PID) is a unique factorization domain (UFD). We say that an ascending chain of ideals in R

I_1 ⊆ I_2 ⊆ ⋯ ⊆ I_n ⊆ ⋯

becomes stationary if there exists an m such that I_r = I_m for all r ≥ m.

Theorem 3.4.1. Let R be an integral domain. If each ascending chain of principal ideals in R becomes stationary, then R satisfies property (A).
Proof. Suppose that a ≠ 0 is not a unit in R, and suppose that a is not a product of irreducible elements. Clearly then a cannot itself be irreducible. Hence a = a_1 b_1 with a_1, b_1 ∈ R and a_1, b_1 not units. If both a_1 and b_1 can be expressed as products of irreducible elements then so can a. Without loss of generality then suppose that a_1 is not a product of irreducible elements.

Since a_1 | a we have the inclusion of ideals aR ⊆ a_1R. If a_1R = aR then a_1 ∈ aR and a_1 = ar = a_1b_1r, which implies that b_1 is a unit, contrary to our assumption. Therefore aR ≠ a_1R and the inclusion is proper. By iteration we then obtain a strictly increasing chain of ideals

aR ⊊ a_1R ⊊ ⋯ ⊊ a_nR ⊊ ⋯ .

From our hypothesis on R this chain must become stationary, contradicting the argument above that each inclusion is proper. Therefore a must be a product of irreducibles.
Theorem 3.4.2. Each principal ideal domain R is a unique factorization domain.

Proof. Suppose that R is a principal ideal domain. R satisfies property (C) by Theorem 3.2.7(1), so to show that it is a unique factorization domain we must show that it also satisfies property (A). From the previous theorem it suffices to show that each ascending chain of principal ideals becomes stationary. Consider such an ascending chain

a_1R ⊆ a_2R ⊆ ⋯ ⊆ a_nR ⊆ ⋯ .

Now let

I = ⋃_{i=1}^{∞} a_iR.

Now I is an ideal in R and hence a principal ideal. Therefore I = aR for some a ∈ R. Since I is a union, there exists an m such that a ∈ a_mR. Therefore I = aR ⊆ a_mR, and hence I = a_mR and a_iR ⊆ a_mR for all i ≥ m. Therefore the chain becomes stationary, and from Theorem 3.4.1 R satisfies property (A).
Since we showed that the integers Z are a PID, we can recover the fundamental theorem of arithmetic from Theorem 3.4.2. We now present another important example of a PID and hence a UFD. In the next chapter we will look in detail at polynomials with coefficients in an integral domain. Below we consider polynomials with coefficients in a field and for the present leave out many of the details.

If F is a field and n is a nonnegative integer, then a polynomial of degree n over F is a formal sum of the form

P(x) = a_0 + a_1x + ⋯ + a_nx^n

with a_i ∈ F for i = 0, …, n, a_n ≠ 0, and x an indeterminate. A polynomial P(x) over F is either a polynomial of some degree or the expression P(x) = 0, which is called the zero polynomial and has degree −∞. We denote the degree of P(x) by deg P(x). A polynomial of degree zero has the form P(x) = a_0; it is called a constant polynomial and can be identified with the corresponding element of F. The elements a_i ∈ F are called the coefficients of P(x); a_n is the leading coefficient. If a_n = 1, P(x) is called a monic polynomial. Two nonzero polynomials are equal if and only if they have the same degree and exactly the same coefficients. A polynomial of degree 1 is called a linear polynomial, while one of degree two is a quadratic polynomial.
We denote by F[x] the set of all polynomials over F, and we will show that F[x] becomes a principal ideal domain and hence a unique factorization domain. We first define addition, subtraction, and multiplication on F[x] by algebraic manipulation. That is, suppose P(x) = a_0 + a_1x + ⋯ + a_nx^n and Q(x) = b_0 + b_1x + ⋯ + b_mx^m. Then

P(x) ± Q(x) = (a_0 ± b_0) + (a_1 ± b_1)x + ⋯,

that is, the coefficient of x^i in P(x) ± Q(x) is a_i ± b_i, where a_i = 0 for i > n and b_j = 0 for j > m. Multiplication is given by

P(x)Q(x) = (a_0b_0) + (a_1b_0 + a_0b_1)x + (a_0b_2 + a_1b_1 + a_2b_0)x² + ⋯ + (a_nb_m)x^{n+m},

that is, the coefficient of x^i in P(x)Q(x) is a_0b_i + a_1b_{i−1} + ⋯ + a_ib_0.
Example 3.4.3. Let P(x) = 3x² + 4x − 6 and Q(x) = 2x + 7 in Q[x]. Then

P(x) + Q(x) = 3x² + 6x + 1

and

P(x)Q(x) = (3x² + 4x − 6)(2x + 7) = 6x³ + 29x² + 16x − 42.
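The coefficient formulas above translate directly into code. A minimal sketch (list index i holds the coefficient of x^i; the function names are ours) reproduces Example 3.4.3:

```python
def poly_add(a, b):
    # coefficient lists; index i holds the coefficient of x^i
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [ai + bi for ai, bi in zip(a, b)]

def poly_mul(a, b):
    # the coefficient of x^k is the sum of a_i * b_j over i + j = k
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# Example 3.4.3: P(x) = -6 + 4x + 3x^2, Q(x) = 7 + 2x
P, Q = [-6, 4, 3], [7, 2]
print(poly_add(P, Q))  # [1, 6, 3]     i.e. 3x^2 + 6x + 1
print(poly_mul(P, Q))  # [-42, 16, 29, 6]  i.e. 6x^3 + 29x^2 + 16x - 42
```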
From the definitions the following degree relationships are clear. The proofs are in the exercises.

Lemma 3.4.4. Let P(x) ≠ 0, Q(x) ≠ 0 in F[x]. Then:

(1) deg P(x)Q(x) = deg P(x) + deg Q(x).

(2) deg(P(x) ± Q(x)) ≤ max(deg P(x), deg Q(x)) if P(x) ± Q(x) ≠ 0.
We next obtain the following.

Theorem 3.4.5. If F is a field, then F[x] forms an integral domain. F can be naturally embedded into F[x] by identifying each element of F with the corresponding constant polynomial. The only units in F[x] are the nonzero elements of F.

Proof. Verification of the basic ring properties is solely computational and is left to the exercises. Since deg P(x)Q(x) = deg P(x) + deg Q(x), it follows that if neither P(x) = 0 nor Q(x) = 0 then P(x)Q(x) ≠ 0, and therefore F[x] is an integral domain.

If G(x) is a unit in F[x], then there exists an H(x) ∈ F[x] with G(x)H(x) = 1. From the degrees we have deg G(x) + deg H(x) = 0, and since deg G(x) ≥ 0 and deg H(x) ≥ 0, this is possible only if deg G(x) = deg H(x) = 0. Therefore G(x) ∈ F.
Now that we have F[x] as an integral domain, we proceed to show that F[x] is a principal ideal domain and hence that there is unique factorization into primes. We first repeat the definition of a prime in F[x]. If f(x) ≠ 0 has no nontrivial, nonunit factors (it cannot be factorized into polynomials of lower degree), then f(x) is a prime in F[x], or a prime polynomial. A prime polynomial is also called an irreducible polynomial. Clearly, if deg g(x) = 1 then g(x) is irreducible.

The fact that F[x] is a principal ideal domain follows from the division algorithm for polynomials, which is entirely analogous to the division algorithm for integers.

Lemma 3.4.6 (division algorithm in F[x]). If f(x) ≠ 0, g(x) ≠ 0 are in F[x], then there exist unique polynomials q(x), r(x) ∈ F[x] such that f(x) = q(x)g(x) + r(x), where r(x) = 0 or deg r(x) < deg g(x). (The polynomials q(x) and r(x) are called respectively the quotient and remainder.)

This theorem is essentially long division of polynomials. A formal proof is based on induction on the degree of f(x). We omit this but give some examples from Q[x].
Example 3.4.7. (a) Let f(x) = 3x⁴ − 6x² + 8x − 6 and g(x) = 2x² + 4. Then

(3x⁴ − 6x² + 8x − 6)/(2x² + 4) = (3/2)x² − 6 with remainder 8x + 18.

Thus here q(x) = (3/2)x² − 6 and r(x) = 8x + 18.

(b) Let f(x) = 2x⁵ + 2x⁴ + 6x³ + 10x² + 4x and g(x) = x² + x. Then

(2x⁵ + 2x⁴ + 6x³ + 10x² + 4x)/(x² + x) = 2x³ + 6x + 4.

Thus here q(x) = 2x³ + 6x + 4 and r(x) = 0.
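Long division in Q[x] can be mechanized with exact rational arithmetic. A small sketch (function name ours), checked against both parts of Example 3.4.7:

```python
from fractions import Fraction

def poly_divmod(f, g):
    # long division in Q[x]; coefficient lists, index i = coefficient of x^i
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    q = [Fraction(0)] * max(len(f) - len(g) + 1, 1)
    r = f[:]
    while len(r) >= len(g) and any(r):
        d = len(r) - len(g)          # degree gap
        c = r[-1] / g[-1]            # quotient of leading coefficients
        q[d] = c
        for i, gi in enumerate(g):   # subtract c * x^d * g(x)
            r[i + d] -= c * gi
        while r and r[-1] == 0:      # trim zero leading terms
            r.pop()
    return q, r

# Example 3.4.7(a): f = 3x^4 - 6x^2 + 8x - 6, g = 2x^2 + 4
q, r = poly_divmod([-6, 8, -6, 0, 3], [4, 0, 2])
print([str(c) for c in q])  # ['-6', '0', '3/2']  i.e. (3/2)x^2 - 6
print([str(c) for c in r])  # ['18', '8']         i.e. 8x + 18
```

A zero remainder comes back as the empty list, as in Example 3.4.7(b).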
Theorem 3.4.8. Let F be a field. Then the polynomial ring F[x] is a principal ideal domain and hence a unique factorization domain.

Proof. The proof is essentially analogous to the proof in the integers. Let I be a nonzero ideal in F[x] and let f(x) be a polynomial in I of minimal degree. We claim that I = (f(x)), the principal ideal generated by f(x). Let g(x) ∈ I. We must show that g(x) is a multiple of f(x). By the division algorithm in F[x] we have

g(x) = q(x)f(x) + r(x)

where r(x) = 0 or deg(r(x)) < deg(f(x)). Suppose r(x) ≠ 0, so that deg(r(x)) < deg(f(x)). However, r(x) = g(x) − q(x)f(x) ∈ I since I is an ideal and g(x), f(x) ∈ I. This is a contradiction, since f(x) was assumed to be a polynomial in I of minimal degree. Therefore r(x) = 0, and hence g(x) = q(x)f(x) is a multiple of f(x). Therefore each element of I is a multiple of f(x), and hence I = (f(x)).

Therefore F[x] is a principal ideal domain and, from Theorem 3.4.2, a unique factorization domain.

We proved that in a principal ideal domain every ascending chain of ideals becomes stationary. In general a ring R (commutative or not) satisfies the ascending chain condition, or ACC, if every ascending chain of left (or right) ideals in R becomes stationary. A ring satisfying the ACC is called a Noetherian ring.
3.5 Euclidean Domains

In analyzing the proofs of unique factorization in both Z and F[x], it is clear that each depends primarily on the division algorithm. In Z the division algorithm depended on the fact that the positive integers can be ordered, and in F[x] on the fact that the degrees of nonzero polynomials are nonnegative integers and hence can be ordered. This basic idea can be generalized in the following way.

Definition 3.5.1. An integral domain D is a Euclidean domain if there exists a function N from D* = D \ {0} to the nonnegative integers such that

(1) N(r_1) ≤ N(r_1r_2) for any r_1, r_2 ∈ D*.

(2) For all r_1, r_2 ∈ D with r_1 ≠ 0 there exist q, r ∈ D such that

r_2 = qr_1 + r

where either r = 0 or N(r) < N(r_1).

The function N is called a Euclidean norm on D.

Therefore Euclidean domains are precisely those integral domains which allow division algorithms. In the integers Z define N(z) = |z|. Then N is a Euclidean norm on Z, and hence Z is a Euclidean domain. On F[x] define N(p(x)) = deg(p(x)) if p(x) ≠ 0. Then N is also a Euclidean norm on F[x], so that F[x] is also a Euclidean domain. In any Euclidean domain we can mimic the proofs of unique factorization in both Z and F[x] to obtain the following:

Theorem 3.5.2. Every Euclidean domain is a principal ideal domain and hence a unique factorization domain.

Before proving this theorem we must develop some results on the number theory of general Euclidean domains. First some properties of the norm.
Lemma 3.5.3. If R is a Euclidean domain then:

(a) N(1) is minimal among {N(r) : r ∈ R*}.

(b) N(u) = N(1) if and only if u is a unit.

(c) N(a) = N(b) for a, b ∈ R* if a, b are associates.

(d) N(a) < N(ab) unless b is a unit.

Proof. (a) From property (1) of Euclidean norms we have

N(1) ≤ N(1 · r) = N(r) for any r ∈ R*.

(b) Suppose u is a unit. Then there exists u⁻¹ with u · u⁻¹ = 1. Then

N(u) ≤ N(u · u⁻¹) = N(1).

From the minimality of N(1) it follows that N(u) = N(1).

Conversely suppose N(u) = N(1). Apply the division algorithm to get

1 = qu + r.

If r ≠ 0 then N(r) < N(u) = N(1), contradicting the minimality of N(1). Therefore r = 0 and 1 = qu. Then u has a multiplicative inverse and hence is a unit.

(c) Suppose a, b ∈ R* are associates. Then a = ub with u a unit. Then

N(b) ≤ N(ub) = N(a).

On the other hand b = u⁻¹a, so

N(a) ≤ N(u⁻¹a) = N(b).

Since N(a) ≤ N(b) and N(b) ≤ N(a), it follows that N(a) = N(b).

(d) Suppose N(a) = N(ab). Apply the division algorithm:

a = q(ab) + r

where r = 0 or N(r) < N(ab). If r ≠ 0 then

r = a − qab = a(1 − qb) ⟹ N(ab) = N(a) ≤ N(a(1 − qb)) = N(r),

contradicting N(r) < N(ab). Hence r = 0 and a = q(ab) = (qb)a. Then

a = (qb)a = 1 · a ⟹ qb = 1

since there are no zero divisors in an integral domain. Hence b is a unit. Since N(a) ≤ N(ab), it follows that if b is not a unit we must have N(a) < N(ab).
We can now prove Theorem 3.5.2.

Proof. Let D be a Euclidean domain. We show that each ideal I in D is principal. If I = {0} then I = (0) and I is principal. Therefore we may assume that there are nonzero elements in I. Let a be a nonzero element of I of minimal norm. We claim that I = (a). Let b ∈ I. We must show that b is a multiple of a. Now by the division algorithm

b = qa + r

where either r = 0 or N(r) < N(a). As in Z and F[x] we have a contradiction if r ≠ 0: in this case N(r) < N(a), but r = b − qa ∈ I since I is an ideal, contradicting the minimality of N(a). Therefore r = 0 and b = qa, and hence I = (a).
As a final example of a Euclidean domain we consider the Gaussian integers

Z[i] = {a + bi : a, b ∈ Z}.

It was first observed by Gauss that this set permits unique factorization. To show this we need a Euclidean norm on Z[i].

Definition 3.5.4. If z = a + bi ∈ Z[i] then its norm N(z) is defined by

N(a + bi) = a² + b².

The basic properties of this norm follow directly from the definition (see exercises).

Lemma 3.5.5. If α, β ∈ Z[i] then:

(1) N(α) is an integer for all α ∈ Z[i].

(2) N(α) ≥ 0 for all α ∈ Z[i].

(3) N(α) = 0 if and only if α = 0.

(4) N(α) ≥ 1 for all α ≠ 0.

(5) N(αβ) = N(α)N(β), that is, the norm is multiplicative.
From the multiplicativity of the norm we have the following concerning primes and units in Z[i].

Lemma 3.5.6. (1) u ∈ Z[i] is a unit if and only if N(u) = 1.

(2) If π ∈ Z[i] and N(π) = p, where p is an ordinary prime in Z, then π is a prime in Z[i].

Proof. Certainly u is a unit if and only if N(u) = N(1). But in Z[i] we have N(1) = 1, so the first part follows.

Suppose next that π ∈ Z[i] with N(π) = p for a prime p ∈ Z. Suppose that π = π_1π_2. From the multiplicativity of the norm we have

N(π) = p = N(π_1)N(π_2).

Since each norm is a positive ordinary integer and p is a prime, it follows that either N(π_1) = 1 or N(π_2) = 1. Hence either π_1 or π_2 is a unit. Therefore π is a prime in Z[i].
Armed with this norm we can show that Z[i] is a Euclidean domain.

Theorem 3.5.7. The Gaussian integers Z[i] form a Euclidean domain.

Proof. That Z[i] forms a commutative ring with an identity can be verified directly and easily. If αβ = 0 then N(α)N(β) = 0, and since there are no zero divisors in Z we must have N(α) = 0 or N(β) = 0. But then either α = 0 or β = 0, and hence Z[i] is an integral domain. To complete the proof we show that the norm N is a Euclidean norm.

From the multiplicativity of the norm we have, if α, β ≠ 0,

N(αβ) = N(α)N(β) ≥ N(α) since N(β) ≥ 1.

Therefore property (1) of Euclidean norms is satisfied. We must now show that the division algorithm holds.

Let α = a + bi and β = c + di ≠ 0 be Gaussian integers. Recall that for a nonzero complex number z = x + iy its inverse is

1/z = z̄/|z|² = (x − iy)/(x² + y²).

Therefore as a complex number

α/β = αβ̄/|β|² = (a + bi)(c − di)/(c² + d²) = (ac + bd)/(c² + d²) + ((bc − ad)/(c² + d²))i = u + iv.

Now since a, b, c, d are integers, u, v must be rationals. The set

{u + iv : u, v ∈ Q}

is called the set of the Gaussian rationals.
If u, v ∈ Z then u + iv ∈ Z[i], α = qβ with q = u + iv, and we are done. Otherwise choose ordinary integers m, n satisfying |u − m| ≤ 1/2 and |v − n| ≤ 1/2, and let q = m + in. Then q ∈ Z[i]. Let r = α − qβ. We must show that N(r) < N(β). Working with the complex absolute value we get

|r| = |α − qβ| = |β| · |α/β − q|.

Now

|α/β − q| = |(u − m) + i(v − n)| = √((u − m)² + (v − n)²) ≤ √((1/2)² + (1/2)²) < 1.

Therefore

|r| < |β| ⟹ |r|² < |β|² ⟹ N(r) < N(β),

completing the proof.

Since Z[i] forms a Euclidean domain, it follows from our previous results that Z[i] must be a principal ideal domain and hence a unique factorization domain.

Corollary 3.5.8. The Gaussian integers are a UFD.
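The rounding construction in the proof gives a practical division with remainder, and with it the Euclidean algorithm, in Z[i]. A sketch (names ours) using Python's built-in complex numbers, reliable for small integer inputs; note that the gcd is only determined up to the units ±1, ±i:

```python
def gauss_divmod(alpha, beta):
    # Division with remainder in Z[i] following the proof above:
    # round the Gaussian-rational quotient to the nearest lattice point.
    n = beta.real**2 + beta.imag**2                              # N(beta)
    u = (alpha.real * beta.real + alpha.imag * beta.imag) / n    # Re(alpha/beta)
    v = (alpha.imag * beta.real - alpha.real * beta.imag) / n    # Im(alpha/beta)
    q = complex(round(u), round(v))
    return q, alpha - q * beta

def gauss_gcd(alpha, beta):
    # Euclidean algorithm in Z[i]; terminates since N(r) < N(beta) each step
    while beta != 0:
        _, r = gauss_divmod(alpha, beta)
        alpha, beta = beta, r
    return alpha

g = gauss_gcd(complex(11, 3), complex(1, 8))
print(g)  # (1-2j), a common divisor of norm 5 (up to a unit)
```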
Since we will now be dealing with many kinds of integers, we will refer to the ordinary integers Z as the rational integers and the ordinary primes p as the rational primes. It is clear that Z can be embedded into Z[i]. However, not every rational prime is also prime in Z[i]. The primes in Z[i] are called the Gaussian primes. For example, we can show that both 1 + i and 1 − i are Gaussian primes, that is, primes in Z[i]. However (1 + i)(1 − i) = 2, so that the rational prime 2 is not a prime in Z[i]. Using the multiplicativity of the Euclidean norm in Z[i] we can describe all the units and primes in Z[i].

Theorem 3.5.9. (1) The only units in Z[i] are ±1, ±i.

(2) Suppose π is a Gaussian prime. Then π is either:

(a) a positive rational prime p ≡ 3 mod 4 or an associate of such a rational prime;

(b) 1 + i or an associate of 1 + i;

(c) a + bi or a − bi, where a > 0, b > 0, a is even and N(π) = a² + b² = p with p a rational prime congruent to 1 mod 4, or an associate of a + bi or a − bi.
Proof. (1) Suppose u = x + iy ∈ Z[i] is a unit. Then from Lemma 3.5.6 we have N(u) = x² + y² = 1, implying that (x, y) = (0, ±1) or (x, y) = (±1, 0). Hence u = ±1 or u = ±i.

(2) Now suppose that π is a Gaussian prime. Since N(π) = ππ̄ and π̄ ∈ Z[i], it follows that π | N(π). N(π) is a rational integer, so N(π) = p_1 ⋯ p_k where the p_i's are rational primes. By Euclid's lemma π | p_i for some p_i, and hence a Gaussian prime must divide at least one rational prime. On the other hand, suppose π | p and π | q where p, q are different rational primes. Then (p, q) = 1, and hence there exist x, y ∈ Z such that 1 = px + qy. It follows that π | 1, a contradiction. Therefore a Gaussian prime divides one and only one rational prime.

Let p be the rational prime that π divides. Then N(π) | N(p) = p². Since N(π) is a rational integer, it follows that N(π) = p or N(π) = p². If π = a + bi then a² + b² = p or a² + b² = p².

If p = 2 then a² + b² = 2 or a² + b² = 4. It follows that π = ±2, ±2i or π = 1 + i or an associate of 1 + i. Since (1 + i)(1 − i) = 2 and neither 1 + i nor 1 − i is a unit, it follows that neither 2 nor any of its associates is prime. Then π = 1 + i or an associate of 1 + i. To see that 1 + i is prime, suppose 1 + i = αβ. Then N(1 + i) = 2 = N(α)N(β). It follows that either N(α) = 1 or N(β) = 1, and either α or β is a unit.
If p ≠ 2 then either p ≡ 3 mod 4 or p ≡ 1 mod 4. Suppose first that p ≡ 3 mod 4. Then a² + b² = p would imply from Fermat's two-square theorem (see [35]) that p ≡ 1 mod 4. Therefore from the remarks above a² + b² = p², and N(π) = N(p). Since π | p we have p = απ with α ∈ Z[i]. From N(π) = N(p) we get that N(α) = 1 and α is a unit. Therefore π and p are associates. Hence in this case π is an associate of a rational prime congruent to 3 mod 4.

Finally suppose p ≡ 1 mod 4. From the remarks above, either N(π) = p or N(π) = p². Suppose N(π) = p², so that a² + b² = p². Since p ≡ 1 mod 4, from Fermat's two-square theorem there exist m, n ∈ Z with m² + n² = p. Let u = m + in; then the norm N(u) = p. Since p is a rational prime, it follows that u is a Gaussian prime. Similarly its conjugate ū is also a Gaussian prime. Now (uū)² = p² = N(π). Since π | N(π), it follows that π | uūuū, and from Euclid's lemma either π | u or π | ū. If π | u they are associates, since both are primes. But this is a contradiction, since N(π) = p² ≠ p = N(u). The same is true if π | ū.

It follows that if p ≡ 1 mod 4 then N(π) ≠ p². Therefore in this case N(π) = p = a² + b². An associate of π has both a, b > 0 (see exercises). Further, since a² + b² = p, exactly one of a or b must be even. If a is odd then b is even, and then iπ is an associate of π whose real part is even, completing the proof.
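The trichotomy of Theorem 3.5.9 can be explored computationally. The sketch below (the helper name is ours) finds the two-square decomposition p = a² + b² by brute force for small primes p ≡ 1 mod 4, and reports how each rational prime behaves in Z[i]:

```python
def two_squares(p):
    # brute-force Fermat two-square decomposition for a prime p = 1 mod 4
    a = 1
    while a * a <= p:
        b2 = p - a * a
        b = int(b2 ** 0.5)
        if b * b == b2:
            return (a, b)
        a += 1
    return None

for p in (5, 13, 29, 7, 11, 2):
    if p % 4 == 1:
        a, b = two_squares(p)
        print(f"{p} = {a}^2 + {b}^2: splits as ({a}+{b}i)({a}-{b}i)")
    elif p == 2:
        print("2 = -i (1+i)^2: up to units, the square of 1+i")
    else:
        print(f"{p} stays prime in Z[i] (p = 3 mod 4)")
```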
Finally we mention that the methods used in Z[i] cannot be applied to all quadratic integers. For example, we have seen that there is not unique factorization in Z[i√5].
3.6 Overview of Integral Domains

Here we present some additional definitions for special types of integral domains.

Definition 3.6.1. (1) A Dedekind domain D is an integral domain such that each nonzero proper ideal A ({0} ≠ A ≠ D) can be written as a product of prime ideals

A = P_1 ⋯ P_r

with each P_i a prime ideal, and the factorization is unique up to ordering.

(2) A Prüfer ring is an integral domain R such that

A(B ∩ C) = AB ∩ AC

for all ideals A, B, C in R.

Dedekind domains arise naturally in algebraic number theory. It can be proved that the ring of algebraic integers in any algebraic number field is a Dedekind domain (see [35]).

If R is a Dedekind domain it is also a Prüfer ring. If R is a Prüfer ring and a unique factorization domain, then R is a principal ideal domain.

In the next chapter we will prove a theorem due to Gauss that if R is a UFD then the polynomial ring R[x] is also a UFD. If F is a field we have already seen that F[x] is a UFD. Hence the polynomial ring in several variables F[x_1, …, x_n] is also a UFD. This fact plays an important role in algebraic geometry.
3.7 Exercises

1. Let R be an integral domain and let π ∈ R \ (U(R) ∪ {0}). Show:

(i) If for each a ∈ R with π ∤ a there exist λ, μ ∈ R with λπ + μa = 1, then π is a prime element of R.

(ii) Give an example of a prime element π in a UFD R which does not satisfy the condition of (i).

2. Let R be a UFD and let a_1, …, a_t be pairwise coprime elements of R. If a_1 ⋯ a_t is an mth power (m ∈ N), then each factor a_i is an associate of an mth power. Is each a_i necessarily an mth power?

3. Decide whether the unit groups of Z[√3], Z[√5] and Z[√7] are finite or infinite. For which a ∈ Z are (1 + √5) and (a + √5) associates in Z[√5]?

4. Let k ∈ Z with k ≠ x² for all x ∈ Z. Let α = a + b√k and β = c + d√k be elements of Z[√k], and set N(α) = a² − kb², N(β) = c² − kd². Show:

(i) The equality of the absolute values of N(α) and N(β) is necessary for α and β to be associates in Z[√k]. Is this condition also sufficient?

(ii) The irreducibility of N(α) in Z is sufficient for the irreducibility of α in Z[√k]. Is this also necessary?

5. In general irreducible elements are not prime. Consider the set of complex numbers given by

R = Z[i√5] = {x + i√5 y : x, y ∈ Z}.

Show that they form a subring of C.

6. For an element x + iy√5 ∈ R define its norm by

N(x + iy√5) = |x + iy√5|² = x² + 5y².

Prove that the norm is multiplicative, that is, N(ab) = N(a)N(b).

7. Prove Lemma 3.4.4.

8. Prove that the set of polynomials R[x] with coefficients in a ring R forms a ring.

9. Prove the basic properties of the norm on the Gaussian integers. If α, β ∈ Z[i] then:

(i) N(α) is an integer for all α ∈ Z[i].

(ii) N(α) ≥ 0 for all α ∈ Z[i].

(iii) N(α) = 0 if and only if α = 0.

(iv) N(α) ≥ 1 for all α ≠ 0.

(v) N(αβ) = N(α)N(β), that is, the norm is multiplicative.
Chapter 4
Polynomials and Polynomial Rings

4.1 Polynomials and Polynomial Rings

In the last chapter we saw that if F is a field then the set of polynomials with coefficients in F, which we denoted F[x], forms a unique factorization domain. In this chapter we take a more detailed look at polynomials over a general ring R. We then prove that if R is a UFD then the polynomial ring R[x] is also a UFD. We first take a formal look at polynomials.
Let R be a commutative ring with an identity. Consider the set R̃ of functions f from the nonnegative integers N_0 = N ∪ {0} into R with only finitely many nonzero values. That is,

R̃ = {f : N_0 → R : f(n) ≠ 0 for only finitely many n}.

On R̃ we define the following addition and multiplication:

(f + g)(m) = f(m) + g(m),
(f · g)(m) = ∑_{i+j=m} f(i)g(j).

If we let x = (0, 1, 0, …) and identify (r, 0, …) with r ∈ R, then

x⁰ = (1, 0, …) = 1 and x^{i+1} = x · x^i.

Now if f = (r_0, r_1, r_2, …) then f can be written as

f = ∑_{i=0}^{∞} r_i x^i = ∑_{i=0}^{m} r_i x^i

for some m ≥ 0, since r_i ≠ 0 for only finitely many i. Further, this presentation is unique.
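This construction can be sketched directly in code: a finitely supported function is a dictionary mapping exponents to nonzero coefficients (names ours; Python ints stand in for a general ring R):

```python
def poly_sum(f, g):
    # (f + g)(m) = f(m) + g(m); drop zero coefficients to keep finite support
    h = dict(f)
    for n, c in g.items():
        h[n] = h.get(n, 0) + c
    return {n: c for n, c in h.items() if c != 0}

def poly_prod(f, g):
    # (f * g)(m) = sum of f(i) g(j) over i + j = m
    h = {}
    for i, fi in f.items():
        for j, gj in g.items():
            h[i + j] = h.get(i + j, 0) + fi * gj
    return {n: c for n, c in h.items() if c != 0}

x = {1: 1}                    # the indeterminate x = (0, 1, 0, ...)
print(poly_prod(x, x))        # {2: 1}, i.e. x^2

f = poly_sum({0: 3}, {1: 2})  # 3 + 2x
print(poly_prod(f, f))        # {0: 9, 1: 12, 2: 4}, i.e. 9 + 12x + 4x^2
```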
We now call x an indeterminate over R and write each element of R̃ as f(x) = ∑_{i=0}^{m} r_i x^i with f(x) = 0 or r_m ≠ 0. We also now write R[x] for R̃. Each element of R[x] is called a polynomial over R. The elements r_0, …, r_m are called the coefficients of f(x), with r_m the leading coefficient. If r_m ≠ 0, the natural number m is called the degree of f(x), which we denote by deg f(x). We say that f(x) = 0 has degree −∞. The uniqueness of the representation of a polynomial implies that two nonzero polynomials are equal if and only if they have the same degree and exactly the same coefficients. A polynomial of degree 1 is called a linear polynomial, while one of degree two is a quadratic polynomial. The set of polynomials of degree 0 together with 0 forms a ring isomorphic to R and hence can be identified with R; these are the constant polynomials. Thus the ring R embeds in the set of polynomials R[x]. The following results concerning degree are straightforward.
Lemma 4.1.1. Let f(x) ≠ 0, g(x) ≠ 0 in R[x]. Then:

(a) deg f(x)g(x) ≤ deg f(x) + deg g(x).

(b) deg(f(x) ± g(x)) ≤ max(deg f(x), deg g(x)).

If R is an integral domain then we have equality in (a).

Theorem 4.1.2. Let R be a commutative ring with an identity. Then the set of polynomials R[x] forms a ring, called the ring of polynomials over R. The ring R, identified with 0 and the polynomials of degree 0, naturally embeds into R[x]. R[x] is commutative if and only if R is commutative. Further, R[x] is uniquely determined by R and x.
Proof. Set f(x) = ∑_{i=0}^{m} r_i x^i and g(x) = ∑_{j=0}^{n} s_j x^j. The ring properties follow directly by computation. The identification of r ∈ R with the polynomial r(x) = r provides the embedding of R into R[x]. From the definition of multiplication in R[x], if R is commutative then R[x] is commutative. Conversely, if R[x] is commutative then, from the embedding of R into R[x], it follows that R must be commutative. Note that if R has a multiplicative identity 1 ≠ 0 then this is also the multiplicative identity of R[x].

Finally, if S is a ring that contains R and α ∈ S, then

R[α] = {∑_{i≥0} r_i α^i : r_i ∈ R and r_i ≠ 0 for only a finite number of i}

is a homomorphic image of R[x] via the map

∑_{i≥0} r_i x^i ↦ ∑_{i≥0} r_i α^i.

Hence R[x] is uniquely determined by R and x.
If R is an integral domain then irreducible polynomials are defined as irreducibles in the ring R[x]. If R is a field then f(x) is an irreducible polynomial if there is no factorization f(x) = g(x)h(x) where g(x) and h(x) are polynomials of lower degree than f(x). Otherwise f(x) is called reducible. In elementary mathematics polynomials are considered as functions. We recover that idea via the concept of evaluation.

Definition 4.1.3. Let f(x) = r_0 + r_1x + ⋯ + r_nx^n be a polynomial over a commutative ring R with an identity and let c ∈ R. Then the element

f(c) = r_0 + r_1c + ⋯ + r_nc^n ∈ R

is called the evaluation of f(x) at c.

Definition 4.1.4. If f(x) ∈ R[x] and f(c) = 0 for c ∈ R, then c is called a zero or a root of f(x) in R.
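Evaluation is usually computed by Horner's rule, which rewrites f(c) as nested multiplications, r_0 + c(r_1 + c(r_2 + ⋯)). A small sketch (names ours):

```python
def evaluate(coeffs, c):
    # Horner's rule; coeffs[i] is the coefficient of x^i
    acc = 0
    for r in reversed(coeffs):
        acc = acc * c + r
    return acc

# f(x) = x^2 - 5x + 6; the elements 2 and 3 are roots
f = [6, -5, 1]
print([evaluate(f, c) for c in (1, 2, 3)])  # [2, 0, 0]
```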
4.2 Polynomial Rings over Fields

We now restate some of the results of the last chapter for F[x] where F is a field. We then consider some consequences of these results for zeros of polynomials.

Theorem 4.2.1. If F is a field, then F[x] forms an integral domain. F can be naturally embedded into F[x] by identifying each element of F with the corresponding constant polynomial. The only units in F[x] are the nonzero elements of F.

Proof. Verification of the basic ring properties is solely computational and is left to the exercises. Since deg P(x)Q(x) = deg P(x) + deg Q(x), it follows that if neither P(x) = 0 nor Q(x) = 0 then P(x)Q(x) ≠ 0, and therefore F[x] is an integral domain.

If G(x) is a unit in F[x], then there exists an H(x) ∈ F[x] with G(x)H(x) = 1. From the degrees we have deg G(x) + deg H(x) = 0, and since deg G(x) ≥ 0 and deg H(x) ≥ 0, this is possible only if deg G(x) = deg H(x) = 0. Therefore G(x) ∈ F.

Now that we have F[x] as an integral domain, we proceed to show that F[x] is a principal ideal domain and hence that there is unique factorization into primes. We first repeat the definition of a prime in F[x]. If f(x) ≠ 0 has no nontrivial, nonunit factors (it cannot be factorized into polynomials of lower degree), then f(x) is a prime in F[x], or a prime polynomial. A prime polynomial is also called an irreducible polynomial over F. Clearly, if deg g(x) = 1 then g(x) is irreducible.

The fact that F[x] is a principal ideal domain follows from the division algorithm for polynomials, which is entirely analogous to the division algorithm for integers.

Theorem 4.2.2 (division algorithm in F[x]). If f(x) ≠ 0, g(x) ≠ 0 are in F[x], then there exist unique polynomials q(x), r(x) ∈ F[x] such that f(x) = q(x)g(x) + r(x), where r(x) = 0 or deg r(x) < deg g(x). (The polynomials q(x) and r(x) are called respectively the quotient and remainder.)

This theorem is essentially long division of polynomials. A formal proof is based on induction on the degree of f(x). We omit this but give some examples from Q[x].
Example 4.2.3. (a) Let f(x) = 3x⁴ − 6x² + 8x − 6 and g(x) = 2x² + 4. Then

(3x⁴ − 6x² + 8x − 6)/(2x² + 4) = (3/2)x² − 6 with remainder 8x + 18.

Thus here q(x) = (3/2)x² − 6 and r(x) = 8x + 18.

(b) Let f(x) = 2x⁵ + 2x⁴ + 6x³ + 10x² + 4x and g(x) = x² + x. Then

(2x⁵ + 2x⁴ + 6x³ + 10x² + 4x)/(x² + x) = 2x³ + 6x + 4.

Thus here q(x) = 2x³ + 6x + 4 and r(x) = 0.
Theorem 4.2.4. Let F be a field. Then the polynomial ring F[x] is a principal ideal domain and hence a unique factorization domain.

We now give some consequences relative to zeros of polynomials in F[x].

Theorem 4.2.5. If f(x) ∈ F[x] and c ∈ F with f(c) = 0, then

f(x) = (x − c)h(x),

where deg h(x) < deg f(x).

Proof. Divide f(x) by x − c. Then by the division algorithm we have

f(x) = (x − c)h(x) + r(x)

where r(x) = 0 or deg r(x) < deg(x − c) = 1. Hence if r(x) ≠ 0 then r(x) is a polynomial of degree 0, that is, a constant polynomial r(x) = r with r ∈ F. Hence we have

f(x) = (x − c)h(x) + r.

This implies that

0 = f(c) = 0 · h(c) + r = r,

and therefore r = 0 and f(x) = (x − c)h(x). Since deg(x − c) = 1, we must have deg h(x) < deg f(x).

If f(x) = (x − c)^k h(x) for some k ≥ 1 with h(c) ≠ 0, then c is called a zero of order k.
Theorem 4.2.6. Let ¡(.) ÷ JŒ. with degree 2 or 3. Then ¡ is irreducible if and
only if ¡(.) doesn’t have a zero in J.
Proof. Suppose that ¡(.) is irreducible of degree 2 or 3. If ¡(.) has a zero c then
from Theorem 4.2.4 we have ¡(.) = (. ÷c)h(.) with h(.) of degree 1 or 2. There
fore ¡(.) is reducible a contradiction and hence ¡(.) cannot have a zero.
Conversely from Theorem 4.2.4 if ¡(.) has a zero and if of degree greater than 1
then ¡(.) is reducible.
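Over a finite field such as Z_p the zero test of Theorem 4.2.6 is a finite check. The sketch below applies it; the coefficient-list convention (constant term first) and the function names are ours.

```python
def has_root_mod_p(coeffs, p):
    """coeffs from the constant term up; True if f has a zero in Z_p."""
    return any(sum(c * pow(a, i, p) for i, c in enumerate(coeffs)) % p == 0
               for a in range(p))

def irreducible_deg2or3_mod_p(coeffs, p):
    """Theorem 4.2.6: a polynomial of degree 2 or 3 over the field Z_p
    is irreducible iff it has no zero in Z_p."""
    deg = len(coeffs) - 1
    assert deg in (2, 3), "criterion only valid in degrees 2 and 3"
    return not has_root_mod_p(coeffs, p)

# x^2 + x + 1 over Z_2: f(0) = 1, f(1) = 1, so no zero, hence irreducible
print(irreducible_deg2or3_mod_p([1, 1, 1], 2))  # True
# x^2 + 1 over Z_2: x = 1 is a zero, hence reducible
print(irreducible_deg2or3_mod_p([1, 0, 1], 2))  # False
```

Note that the zero test is only decisive in degrees 2 and 3; a degree-4 polynomial can factor into two quadratics without having any zero.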
4.3 Polynomial Rings over Integral Domains

Here we consider R[x] where R is an integral domain.

Definition 4.3.1. Let R be an integral domain. Then a_1, a_2, ..., a_n ∈ R are coprime
if the set of all common divisors of a_1, ..., a_n consists only of units.

Notice that this concept depends on the ring R. For example 6 and 9
are not coprime over the integers Z since 3|6 and 3|9 and 3 is not a unit. However 6
and 9 are coprime over the rationals Q. Here 3 is a unit.
Definition 4.3.2. Let f(x) = Σ_{i=0}^n r_i x^i ∈ R[x] where R is an integral domain.
Then f(x) is a primitive polynomial, or just primitive, if r_0, r_1, ..., r_n are coprime
in R.
Theorem 4.3.3. Let R be an integral domain. Then

(a) The units of R[x] are the units of R.

(b) If p is a prime element of R then p is a prime element of R[x].

Proof. If r ∈ R is a unit then since R embeds into R[x] it follows that r is also a unit
in R[x]. Conversely suppose that h(x) ∈ R[x] is a unit. Then there is a g(x) such
that h(x)g(x) = 1. Hence deg h(x) + deg g(x) = deg 1 = 0. Since degrees are
nonnegative integers it follows that deg h(x) = deg g(x) = 0 and hence h(x) ∈ R.

Now suppose that p is a prime element of R. Then p ≠ 0 and pR is a prime ideal
in R. We must show that pR[x] is a prime ideal in R[x]. Consider the map
τ : R[x] → (R/pR)[x] given by

τ(Σ_{i=0}^n r_i x^i) = Σ_{i=0}^n (r_i + pR) x^i.

Then τ is an epimorphism with kernel pR[x]. Since pR is a prime ideal we know that
R/pR is an integral domain. It follows that (R/pR)[x] is also an integral domain.
Hence pR[x] must be a prime ideal in R[x] and therefore p is also a prime element
of R[x].
Recall that each integral domain R can be embedded into a unique field of fractions K.
We can use results on K[x] to deduce some results in R[x].

Lemma 4.3.4. If K is a field then each nonzero f(x) ∈ K[x] is primitive.

Proof. Since K is a field each nonzero element of K is a unit. Therefore the only
common divisors of the coefficients of f(x) are units and hence f(x) ∈ K[x] is
primitive.
Theorem 4.3.5. Let R be an integral domain. Then each irreducible f(x) ∈ R[x] of
degree > 0 is primitive.

Proof. Let f(x) be an irreducible polynomial in R[x] and let r ∈ R be a common
divisor of the coefficients of f(x). Then f(x) = rg(x) where g(x) ∈ R[x]. Then
deg f(x) = deg g(x) > 0 so g(x) ∉ R. Since the units of R[x] are the units of R it
follows that g(x) is not a unit in R[x]. Since f(x) is irreducible it follows that r must
be a unit in R[x] and hence r is a unit in R. Therefore f(x) is primitive.
Theorem 4.3.6. Let R be an integral domain and K its field of fractions. If f(x) ∈
R[x] is primitive and irreducible in K[x] then f(x) is irreducible in R[x].

Proof. Suppose that f(x) ∈ R[x] is primitive and irreducible in K[x] and suppose
that f(x) = g(x)h(x) where g(x), h(x) ∈ R[x] ⊆ K[x]. Since f(x) is irreducible
in K[x] either g(x) or h(x) must be a unit in K[x]. Without loss of generality suppose
that g(x) is a unit in K[x]. Then g(x) = g ∈ K. But g(x) ∈ R[x] and K ∩ R[x] = R.
Hence g ∈ R. Then g is a divisor of the coefficients of f(x) and as f(x) is primitive
g must be a unit in R and therefore also a unit in R[x]. Therefore f(x) is
irreducible in R[x].
4.4 Polynomial Rings over Unique Factorization Domains

In this section we prove that if R is a UFD then the polynomial ring R[x] is also a
UFD. We first need the following, due to Gauss.

Theorem 4.4.1 (Gauss' lemma). Let R be a UFD and f(x), g(x) primitive polynomials
in R[x]. Then their product f(x)g(x) is also primitive.

Proof. Let R be a UFD and f(x), g(x) primitive polynomials in R[x]. Suppose that
f(x)g(x) is not primitive. Then there is a prime element p ∈ R that divides each
of the coefficients of f(x)g(x). Then p|f(x)g(x). Since prime elements of R are
also prime elements of R[x] it follows that p is also a prime element of R[x] and
hence p|f(x) or p|g(x). Therefore either f(x) or g(x) is not primitive, giving a
contradiction.
Theorem 4.4.2. Let R be a UFD and K its field of fractions.

(a) If g(x) ∈ K[x] is nonzero then there is a nonzero a ∈ K such that ag(x) ∈ R[x]
is primitive.

(b) Let f(x), g(x) ∈ R[x] with g(x) primitive and f(x) = ag(x) for some a ∈ K.
Then a ∈ R.

(c) If f(x) ∈ R[x] is nonzero then there is a b ∈ R and a primitive g(x) ∈ R[x]
such that f(x) = bg(x).

Proof. (a) Suppose that g(x) = Σ_{i=0}^n a_i x^i with a_i = r_i/s_i, r_i, s_i ∈ R. Set
s = s_0 s_1 ··· s_n. Then sg(x) is a nonzero element of R[x]. Let d be a greatest common
divisor of the coefficients of sg(x). If we set a = s/d then ag(x) is primitive.

(b) For a ∈ K there are coprime r, s ∈ R satisfying a = r/s. Suppose that a ∉ R.
Then there is a prime element p ∈ R dividing s. Since g(x) is primitive p does not
divide all the coefficients of g(x). However we also have f(x) = ag(x) = (r/s)g(x).
Hence sf(x) = rg(x) where p|s and p does not divide r. Therefore p divides all the
coefficients of g(x), a contradiction, and hence a ∈ R.

(c) From part (a) there is a nonzero a ∈ K such that af(x) is primitive in R[x].
Then f(x) = a^(−1)(af(x)). From part (b) we must have a^(−1) ∈ R. Set g(x) = af(x)
and b = a^(−1).
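For R = Z, Theorem 4.4.2 (c) is the familiar factorization of an integer polynomial into its content times a primitive polynomial. A minimal sketch (the coefficient-list convention, constant term first, and the function name are ours):

```python
from math import gcd

def content_and_primitive(coeffs):
    """Write f = b * g as in Theorem 4.4.2 (c) for R = Z: b is the content
    (gcd of the coefficients) and g = f/b is primitive."""
    b = 0
    for c in coeffs:
        b = gcd(b, c)           # gcd(0, c) == |c|, so this folds over all coefficients
    if b == 0:
        raise ValueError("f must be nonzero")
    return b, [c // b for c in coeffs]

# f(x) = 6x^2 + 10x + 4 = 2 * (3x^2 + 5x + 2), and 3x^2 + 5x + 2 is primitive
b, g = content_and_primitive([4, 10, 6])
print(b, g)  # 2 [2, 5, 3]
```

Here b is determined up to a unit of Z (that is, up to sign), matching the uniqueness one expects from part (b).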
Theorem 4.4.3. Let R be a UFD and K its field of fractions. Let f(x) ∈ R[x] be a
polynomial of degree ≥ 1.

(a) If f(x) is primitive and f(x)|g(x) in K[x] then f(x) divides g(x) also in R[x].

(b) If f(x) is irreducible in R[x] then it is also irreducible in K[x].

(c) If f(x) is primitive and a prime element of K[x] then f(x) is also a prime
element of R[x].

Proof. (a) Suppose that g(x) = f(x)h(x) with h(x) ∈ K[x]. From Theorem 4.4.2
part (a) there is a nonzero a ∈ K such that h_1(x) = ah(x) is primitive in R[x].
Hence g(x) = (1/a)(f(x)h_1(x)). From Gauss' lemma f(x)h_1(x) is primitive in R[x]
and therefore from Theorem 4.4.2 part (b) we have 1/a ∈ R. It follows that f(x)|g(x)
in R[x].

(b) Suppose that g(x) ∈ K[x] is a factor of f(x). From Theorem 4.4.2 part (a)
there is a nonzero a ∈ K with g_1(x) = ag(x) primitive in R[x]. Since a is a unit in
K it follows that

g(x)|f(x) in K[x] implies g_1(x)|f(x) in K[x]

and hence, since g_1(x) is primitive, by part (a)

g_1(x)|f(x) in R[x].

However by assumption f(x) is irreducible in R[x]. This implies that either g_1(x) is
a unit in R or g_1(x) is an associate of f(x).

If g_1(x) is a unit then g_1 ∈ R and g_1 = ga and hence g ∈ K, that is g = g(x) is
a unit.

If g_1(x) is an associate of f(x) then f(x) = bg(x) where b ∈ K since g_1(x) =
ag(x) with a ∈ K. Combining these it follows that f(x) has only trivial factors in
K[x] and since by assumption f(x) is nonconstant it follows that f(x) is irreducible
in K[x].

(c) Suppose that f(x)|g(x)h(x) with g(x), h(x) ∈ R[x]. Since f(x) is a prime
element in K[x] we have that f(x)|g(x) or f(x)|h(x) in K[x]. From part (a) we have
f(x)|g(x) or f(x)|h(x) in R[x], implying that f(x) is a prime element in R[x].
We can now state and prove our main result.

Theorem 4.4.4 (Gauss). Let R be a UFD. Then the polynomial ring R[x] is also a
UFD.

Proof. By induction on degree we show that each nonunit f(x) ∈ R[x], f(x) ≠ 0, is
a product of prime elements. Since R is an integral domain so is R[x], and so the fact
that R[x] is a UFD then follows from Theorem 3.3.3.

If deg f(x) = 0 then f(x) = f is a nonunit in R. Since R is a UFD, f is a product
of prime elements in R. However from Theorem 4.3.3 each prime factor is then also
prime in R[x]. Therefore f(x) is a product of prime elements.

Now suppose n > 0 and that the claim is true for all polynomials f(x) of degree
< n. Let f(x) be a polynomial of degree n > 0. From Theorem 4.4.2 (c) there is an
a ∈ R and a primitive h(x) ∈ R[x] satisfying f(x) = ah(x). Since R is a UFD the
element a is a product of prime elements in R or a is a unit in R. Since the units in
R[x] are the units in R and a prime element in R is also a prime element in R[x] it
follows that a is a product of prime elements in R[x] or a is a unit in R[x]. Let K
be the field of fractions of R. Then K[x] is a UFD. Hence h(x) is a product of prime
elements of K[x]. Let p(x) ∈ K[x] be a prime divisor of h(x). From Theorem 4.4.2
we can assume, after multiplication by a suitable field element, that p(x) ∈ R[x] and p(x) is
primitive. From Theorem 4.4.3 (c) it follows that p(x) is a prime element of R[x] and
further from Theorem 4.4.3 (a) that p(x) is a divisor of h(x) in R[x]. Therefore

f(x) = ah(x) = ap(x)g(x) ∈ R[x]

where

(1) a is a product of prime elements of R[x] or a is a unit in R[x],

(2) deg p(x) > 0, since p(x) is a prime element in K[x],

(3) p(x) is a prime element in R[x], and

(4) deg g(x) < deg f(x) since deg p(x) > 0.

By our inductive hypothesis we have then that g(x) is a product of prime elements
in R[x] or g(x) is a unit in R[x]. Therefore the claim holds for f(x) and therefore
holds for all f(x) by induction.
If R[x] is a polynomial ring over R we can form a polynomial ring in a new
indeterminate y over this ring to form (R[x])[y]. It is straightforward that (R[x])[y] is
isomorphic to (R[y])[x]. We denote both of these rings by R[x, y] and consider this
as the ring of polynomials in two commuting variables x, y with coefficients in R.

If R is a UFD then from Theorem 4.4.4 R[x] is also a UFD and hence R[x, y]
is also a UFD. Inductively then the ring of polynomials in n commuting variables
R[x_1, x_2, ..., x_n] is also a UFD.

Corollary 4.4.5. If R is a UFD then the polynomial ring in n commuting variables
R[x_1, ..., x_n] is also a UFD.
We now give a condition for a polynomial in R[x] to have a zero in K where K
is the field of fractions of R.

Theorem 4.4.6. Let R be a UFD and K its field of fractions. Let f(x) = x^n +
r_{n−1}x^{n−1} + ··· + r_0 ∈ R[x]. Suppose that β ∈ K is a zero of f(x). Then β is in R
and is a divisor of r_0.

Proof. Let β = r/s where s ≠ 0, r, s ∈ R and r, s are coprime. Now

f(r/s) = 0 = r^n/s^n + r_{n−1} r^{n−1}/s^{n−1} + ··· + r_0.

Multiplying through by s^n, it follows that s must divide r^n. Since r and s are coprime s must be a unit and
then without loss of generality we may assume that s = 1. Then β = r ∈ R and

r(r^{n−1} + r_{n−1}r^{n−2} + ··· + r_1) = −r_0

and so r|r_0.
Note that since Z is a UFD, Gauss' theorem implies that Z[x] is also a UFD. However
Z[x] is not a principal ideal domain. For example the set of integral polynomials
with even constant term is an ideal but not principal. We leave the verification to the
exercises. On the other hand we saw that if K is a field K[x] is a PID. The question
arises as to when R[x] actually is a principal ideal domain. It turns out to be precisely
when R is a field.

Theorem 4.4.7. Let R be a commutative ring with an identity. Then the following are
equivalent:

(1) R is a field.

(2) R[x] is Euclidean.

(3) R[x] is a principal ideal domain.

Proof. From Section 4.2 we know that (1) implies (2) which in turn implies (3).
Therefore we must show that (3) implies (1). Assume then that R[x] is a principal
ideal domain. Define the map

τ : R[x] → R

by

τ(f(x)) = f(0).

It is easy to see that τ is a ring homomorphism with R[x]/ker(τ) ≅ R. Therefore
ker(τ) ≠ R[x]. Since R[x] is a principal ideal domain it is an integral domain, and hence
so is its subring R. It follows that ker(τ) must be a prime ideal since the quotient ring
is an integral domain. However since R[x] is a principal ideal domain nonzero prime
ideals are maximal ideals, and ker(τ) = (x) ≠ 0, so ker(τ) is a maximal ideal. Therefore
R ≅ R[x]/ker(τ) is a field.
We now consider the relationship between irreducibles in R[x] for a general integral
domain R and irreducibles in K[x] where K is its field of fractions. This is handled by
the next result, called Eisenstein's criterion.

Theorem 4.4.8 (Eisenstein's criterion). Let R be an integral domain and K its field
of fractions. Let f(x) = Σ_{i=0}^n a_i x^i ∈ R[x] be of degree n > 0. Let p be a prime
element of R satisfying

(1) p|a_i for i = 0, ..., n − 1,

(2) p does not divide a_n,

(3) p^2 does not divide a_0.

Then:

(a) If f(x) is primitive then f(x) is irreducible in R[x].

(b) Suppose that R is a UFD. Then f(x) is also irreducible in K[x].
Proof. (a) Suppose that f(x) = g(x)h(x) with g(x), h(x) ∈ R[x]. Suppose that

g(x) = Σ_{i=0}^k b_i x^i, b_k ≠ 0 and h(x) = Σ_{j=0}^l c_j x^j, c_l ≠ 0.

Then a_0 = b_0 c_0. Now p|a_0 but p^2 does not divide a_0. This implies that either p
does not divide b_0 or p does not divide c_0. Without loss of generality assume that p|b_0
and p does not divide c_0.

Since a_n = b_k c_l and p does not divide a_n it follows that p does not divide b_k. Let
b_j be the first coefficient of g(x) which is not divisible by p. Consider

a_j = b_j c_0 + ··· + b_0 c_j

where everything after the first term is divisible by p. Since p divides neither
b_j nor c_0 it follows that p does not divide b_j c_0 and therefore p does not divide a_j,
which implies that j = n. Then from j ≤ k ≤ n it follows that k = n. Therefore
deg g(x) = deg f(x) and hence deg h(x) = 0. Thus h ∈ R. Then from f(x) =
hg(x) with f primitive it follows that h is a unit and therefore f(x) is irreducible.

(b) Suppose that f(x) = g(x)h(x) with g(x), h(x) ∈ R[x]. The fact that f(x)
was primitive was only used in the final part of part (a), so by the same arguments as in
part (a) we may assume without loss of generality that h ∈ R ⊆ K. Therefore f(x)
is irreducible in K[x].
We give some examples.

Example 4.4.9. Let R = Z and p a prime number. Suppose that n, m are integers
such that n ≥ 1 and p does not divide m. Then x^n ± pm is irreducible in Z[x] and
Q[x]. In particular (pm)^(1/n) is irrational.
Example 4.4.10. Let R = Z and p a prime number. Consider the polynomial

Φ_p(x) = (x^p − 1)/(x − 1) = x^(p−1) + x^(p−2) + ··· + 1.

Since all the coefficients of Φ_p(x) are equal to 1, Eisenstein's criterion is not directly
applicable. However Φ_p(x) is irreducible in Z[x] if and only if, for an integer a,
the polynomial Φ_p(x + a) is irreducible in Z[x]. Now

Φ_p(x + 1) = ((x + 1)^p − 1)/((x + 1) − 1)
= (x^p + (p choose 1)x^(p−1) + ··· + (p choose p−1)x + 1 − 1)/x
= x^(p−1) + (p choose 1)x^(p−2) + ··· + (p choose p−1).

Now p | (p choose i) for 1 ≤ i ≤ p − 1 (see exercises) and moreover (p choose p−1) = p is not
divisible by p^2. Therefore we can apply the Eisenstein criterion to Φ_p(x + 1) and conclude that
Φ_p(x) is irreducible in Z[x] and Q[x].
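The divisibility conditions of Theorem 4.4.8 are easy to check mechanically. The sketch below verifies the criterion for a given prime, and builds the coefficients of the shifted cyclotomic polynomial Φ_p(x + 1) from the binomial expansion above; the coefficient-list convention (constant term first) and the function names are ours.

```python
from math import comb

def eisenstein_applies(coeffs, p):
    """Check Eisenstein's criterion (Theorem 4.4.8) for a prime p:
    p | a_i for i < n, p does not divide a_n, p^2 does not divide a_0.
    coeffs are listed from the constant term a_0 up to a_n."""
    a0, an, lower = coeffs[0], coeffs[-1], coeffs[:-1]
    return (all(a % p == 0 for a in lower)
            and an % p != 0
            and a0 % (p * p) != 0)

def shifted_cyclotomic(p):
    """Coefficients of Phi_p(x + 1) = ((x + 1)^p - 1)/x, constant term first:
    the coefficient of x^i is C(p, i + 1)."""
    return [comb(p, i + 1) for i in range(p)]

# Exercise-style check: 3x^4 + 7x^2 + 14x + 7 satisfies Eisenstein at p = 7
print(eisenstein_applies([7, 14, 7, 0, 3], 7))       # True
# Phi_5(x + 1) = x^4 + 5x^3 + 10x^2 + 10x + 5 satisfies Eisenstein at p = 5
print(eisenstein_applies(shifted_cyclotomic(5), 5))  # True
```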
Theorem 4.4.11. Let R be a UFD and K its field of fractions. Let f(x) = Σ_{i=0}^n a_i x^i
∈ R[x] be a polynomial of degree ≥ 1. Let P be a prime ideal in R with a_n ∉ P. Let
S = R/P and let α : R[x] → S[x] be defined by

α(Σ_{i=0}^n r_i x^i) = Σ_{i=0}^n (r_i + P) x^i.

Then α is an epimorphism, and if α(f(x)) is irreducible in S[x] then f(x) is irreducible
in K[x].

Proof. By Theorem 4.4.3 there is an a ∈ R and a primitive g(x) ∈ R[x] satisfying
f(x) = ag(x). Since a_n ∉ P we have that α(a) ≠ 0 and further the highest
coefficient of g(x) is also not an element of P. If α(g(x)) were reducible then α(f(x))
would also be reducible. Thus α(g(x)) is irreducible. If g(x) is irreducible in R[x]
then from Theorem 4.4.3 (b) f(x) = ag(x) is also irreducible in K[x]. Therefore to prove
the theorem it suffices to consider the case where f(x) is primitive in R[x].

Now suppose that f(x) is primitive. We show that f(x) is irreducible in R[x].
Suppose on the contrary that f(x) = g(x)h(x), g(x), h(x) ∈ R[x] with g(x), h(x) nonunits in
R[x]. Since f(x) is primitive, g, h ∉ R and so deg g(x) < deg f(x) and deg h(x) <
deg f(x).

Now we have α(f(x)) = α(g(x))α(h(x)). Since P is a prime ideal, S = R/P is an
integral domain and so in S[x] we have

deg α(g(x)) + deg α(h(x)) = deg α(f(x)) = deg f(x)

since a_n ∉ P. Since R is a UFD it has no zero divisors and so

deg f(x) = deg g(x) + deg h(x).

Now

deg α(g(x)) ≤ deg g(x),
deg α(h(x)) ≤ deg h(x).

Therefore deg α(g(x)) = deg g(x) ≥ 1 and deg α(h(x)) = deg h(x) ≥ 1. Therefore α(f(x))
is reducible and we have a contradiction. Hence f(x) is irreducible in R[x] and, being
primitive, f(x) is also irreducible in K[x] by Theorem 4.4.3 (b).

It is important to note that α(f(x)) being reducible does not imply that f(x) is
reducible. For example f(x) = x^2 + 1 is irreducible in Z[x]. However in Z_2[x] we
have

x^2 + 1 = (x + 1)^2

and hence α(f(x)) is reducible in Z_2[x].
Example 4.4.12. Let f(x) = x^5 − x^2 + 1 ∈ Z[x]. Choose P = 2Z so that

α(f(x)) = x^5 + x^2 + 1 ∈ Z_2[x].

Suppose that in Z_2[x] we have α(f(x)) = g(x)h(x) with g(x), h(x) nonunits. Without loss of generality we
may assume that g(x) is of degree 1 or 2.

If deg g(x) = 1 then α(f(x)) has a zero c in Z_2. The two possibilities for c are
c = 0 or c = 1. Then:

If c = 0 then 0 + 0 + 1 = 1 ≠ 0.
If c = 1 then 1 + 1 + 1 = 1 ≠ 0.

Hence the degree of g(x) cannot be 1.

Suppose deg g(x) = 2. The monic polynomials of degree 2 over Z_2 have the form

x^2 + x + 1, x^2 + x, x^2 + 1, x^2.

The last three, x^2 + x, x^2 + 1, x^2, all have zeros in Z_2 so they cannot divide α(f(x)).
Therefore g(x) must be x^2 + x + 1. Applying the division algorithm we obtain

α(f(x)) = (x^3 + x^2)(x^2 + x + 1) + 1

and therefore x^2 + x + 1 does not divide α(f(x)). It follows that α(f(x)) is irreducible
and from the previous theorem f(x) must be irreducible in Q[x].
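Since Z_2[x] has only finitely many polynomials of each degree, the case analysis of Example 4.4.12 can be automated by trial division against every monic polynomial of degree up to half of deg f. A brute-force sketch (coefficient lists over Z_2, constant term first; the function names are ours):

```python
def divides_mod2(g, f):
    """True if g (monic) divides f in Z_2[x], by long division with XOR."""
    r = f[:]
    while len(r) >= len(g):
        if r[-1]:
            shift = len(r) - len(g)
            for i, c in enumerate(g):
                r[i + shift] ^= c   # subtraction = addition = XOR in Z_2
        r.pop()                     # leading coefficient is now 0
    return not any(r)               # zero remainder means g | f

def irreducible_mod2(f):
    """Brute-force test as in Example 4.4.12: f in Z_2[x] is irreducible iff
    no monic polynomial of degree 1 .. deg(f)//2 divides it."""
    n = len(f) - 1
    for d in range(1, n // 2 + 1):
        for bits in range(2 ** d):
            g = [(bits >> i) & 1 for i in range(d)] + [1]  # monic, degree d
            if divides_mod2(g, f):
                return False
    return True

# x^5 + x^2 + 1 over Z_2, the reduction of x^5 - x^2 + 1 from Example 4.4.12
print(irreducible_mod2([1, 0, 1, 0, 0, 1]))  # True
```

Degree-1 trial divisors reproduce the zero test in the example, and the degree-2 loop tries exactly the four quadratics listed above.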
4.5 Exercises

1. For which a, b ∈ Z does the polynomial x^2 − 3x − 1 divide the polynomial
x^3 + x^2 + ax + b?

2. Let a + bi ∈ C be a zero of f(x) ∈ R[x]. Show that a − bi is also a zero of f(x).

3. Determine all irreducible polynomials over R. Factorize f(x) ∈ R[x] into
irreducible polynomials.

4. Let R be an integral domain, A ⊲ R an ideal and f ∈ R[x] a monic polynomial.
Define f̄ ∈ (R/A)[x] by the mapping R[x] → (R/A)[x], f = Σ a_i x^i ↦ f̄ = Σ ā_i x^i,
where ā := a + A. Show: if f̄ ∈ (R/A)[x] is irreducible then
f ∈ R[x] is also irreducible.

5. Decide if the following polynomials f ∈ R[x] are irreducible:

(i) f(x) = x^3 + 2x^2 + 3, R = Z.
(ii) f(x) = x^5 − 2x + 1, R = Q.
(iii) f(x) = 3x^4 + 7x^2 + 14x + 7, R = Q.
(iv) f(x) = x^7 + (3 − i)x^2 + (3 + 4i)x + 4 + 2i, R = Z[i].
(v) f(x) = x^4 + 3x^3 + 2x^2 + 3x + 4, R = Q.
(vi) f(x) = 8x^3 − 4x^2 + 2x − 1, R = Z.

6. Let R be an integral domain with characteristic 0, let k ≥ 1 and α ∈ R. In R[x]
define the derivatives f^(k)(x), k = 0, 1, 2, ..., of a polynomial f(x) ∈ R[x] by

f^(0)(x) := f(x),
f^(k)(x) := (f^(k−1))'(x).

Show that α is a zero of order k of the polynomial f(x) ∈ R[x] if and only if
f(α) = f^(1)(α) = ··· = f^(k−1)(α) = 0 but f^(k)(α) ≠ 0.

7. Prove that the set of integral polynomials with even constant term is an ideal of Z[x] but
not principal.

8. Prove that p | (p choose i) for 1 ≤ i ≤ p − 1 when p is a prime.
Chapter 5
Field Extensions
5.1 Extension Fields and Finite Extensions
Much of algebra in general arose from the theory of equations, specifically polynomial
equations. As discovered by Galois and Abel, the solution of polynomial equations
over fields is intimately tied to the theory of field extensions. This theory eventually
blossoms into Galois theory. In this chapter we discuss the basic material concerning
field extensions.

Recall that if L is a field and K ⊆ L is also a field under the same operations as L
then K is called a subfield of L. If we view this situation from the viewpoint of L we
say that L is an extension field or field extension of K. If K, L are fields with K ⊆ L
we always assume that K is a subfield of L.

Definition 5.1.1. If K, L are fields with K ⊆ L then we say that L is a field extension
or extension field of K. We denote this by L|K.

Note that this is equivalent to having a field monomorphism

i : K → L

and then identifying K and i(K).

As examples we have that R is an extension field of Q and C is an extension field
of both R and Q. If K is any field then the ring of polynomials K[x] over K is an
integral domain. Let K(x) be the field of fractions of K[x]. This is called the field of
rational functions over K. Since K can be considered as part of K[x] it follows that
K ⊆ K(x) and hence K(x) is an extension field of K.

A crucial concept is that of the degree of a field extension. Recall that a vector space
V over a field F consists of an abelian group V together with scalar multiplication
from F satisfying:

(1) fv ∈ V if f ∈ F, v ∈ V.

(2) f(u + v) = fu + fv for f ∈ F, u, v ∈ V.

(3) (f + g)v = fv + gv for f, g ∈ F, v ∈ V.

(4) (fg)v = f(gv) for f, g ∈ F, v ∈ V.

(5) 1v = v for v ∈ V.
Notice that if K is a subfield of L then products of elements of L by elements
of K are still in L. Since L is an abelian group under addition, L can be considered
as a vector space over K. Thus any extension field is a vector space over any of its
subfields. Using this we define the degree [L : K] of an extension K ⊆ L as the
dimension dim_K(L) of L as a vector space over K. We call L a finite extension of K
if [L : K] < ∞.

Definition 5.1.2. If L is an extension field of K then the degree of the extension L|K
is defined as the dimension, dim_K(L), of L as a vector space over K. We denote the
degree by [L : K]. The field extension L|K is a finite extension if the degree [L : K]
is finite.
Lemma 5.1.3. [C : R] = 2 but [R : Q] = ∞.

Proof. Every complex number can be written uniquely as a + ib where a, b ∈ R.
Hence the elements 1, i constitute a basis for C over R and therefore the dimension
is 2, that is [C : R] = 2.

The fact that [R : Q] = ∞ depends on the existence of transcendental numbers.
An element r ∈ R is algebraic (over Q) if it satisfies some nonzero polynomial with
coefficients from Q. That is, P(r) = 0, where

0 ≠ P(x) = a_0 + a_1 x + ··· + a_n x^n with a_i ∈ Q.

Any q ∈ Q is algebraic since if P(x) = x − q then P(q) = 0. However, many
irrationals are also algebraic. For example √2 is algebraic since x^2 − 2 = 0 has √2
as a root. An element r ∈ R is transcendental if it is not algebraic.

In general it is very difficult to show that a particular element is transcendental.
However there are uncountably many transcendental elements (see exercises).
Specific examples are e and π. We will give a proof of their transcendence later in this
book.

Since e is transcendental, for any natural number n the set of vectors {1, e,
e^2, ..., e^n} must be independent over Q, for otherwise there would be a polynomial
that e would satisfy. Therefore we have infinitely many independent vectors in R
over Q, which would be impossible if R had finite degree over Q.
Lemma 5.1.4. If K is any field then [K(x) : K] = ∞.

Proof. For any n the elements 1, x, x^2, ..., x^n are independent over K. Therefore as
in the proof of Lemma 5.1.3, K(x) must be infinite dimensional over K.

If L|K and L_1|K_1 are field extensions then they are isomorphic field extensions if
there exists a field isomorphism f : L → L_1 such that the restriction f|_K is an isomorphism from
K to K_1.

Suppose that K ⊆ L ⊆ M are fields. Below we show that the degrees multiply. In
this situation, where K ⊆ L ⊆ M, we call L an intermediate field.
Theorem 5.1.5. Let K, L, M be fields with K ⊆ L ⊆ M. Then

[M : K] = [M : L][L : K].

Note that [M : K] = ∞ if and only if either [M : L] = ∞ or [L : K] = ∞.

Proof. Let {x_i : i ∈ I} be a basis for L as a vector space over K and let {y_j : j ∈ J}
be a basis for M as a vector space over L. To prove the result it is sufficient to show
that the set

T = {x_i y_j : i ∈ I, j ∈ J}

is a basis for M as a vector space over K. To show this we must show that T is a
linearly independent set over K and that T spans M.

Suppose that

Σ_{i,j} k_{ij} x_i y_j = 0 where k_{ij} ∈ K.

We can then write this sum as

Σ_j (Σ_i k_{ij} x_i) y_j = 0.

But Σ_i k_{ij} x_i ∈ L. Since {y_j : j ∈ J} is a basis for M over L the y_j are independent
over L and hence for each j we get Σ_i k_{ij} x_i = 0. Now since {x_i : i ∈ I} is a basis
for L over K it follows that the x_i are linearly independent, and since for each j we
have Σ_i k_{ij} x_i = 0 it must be that k_{ij} = 0 for all i and for all j. Therefore the set T
is linearly independent over K.

Now suppose that m ∈ M. Then since {y_j : j ∈ J} spans M over L we have

m = Σ_j c_j y_j with c_j ∈ L.

However {x_i : i ∈ I} spans L over K and so for each c_j we have

c_j = Σ_i k_{ij} x_i with k_{ij} ∈ K.

Combining these two sums we have

m = Σ_{i,j} k_{ij} x_i y_j

and hence T spans M over K. Therefore T is a basis for M over K and the result is
proved.
Corollary 5.1.6. (a) If [L : K] is a prime number then there exists no proper
intermediate field between K and L.

(b) If K ⊆ L and [L : K] = 1 then K = L.

Let L|K be a field extension and suppose that A ⊆ L. Then certainly there are
subrings of L containing both A and K, for example L. We denote by K[A] the
intersection of all subrings of L containing both K and A. Since the intersection of
subrings is a subring it follows that K[A] is a subring containing both K and A and
the smallest such subring. We call K[A] the ring adjunction of A to K.

In an analogous manner we let K(A) be the intersection of all subfields of L
containing both K and A. This is then a subfield of L and the smallest subfield of L
containing both K and A. The subfield K(A) is called the field adjunction of A to K.

Clearly K[A] ⊆ K(A). If A = {a_1, ..., a_n} then we write

K[A] = K[a_1, ..., a_n] and K(A) = K(a_1, ..., a_n).

Definition 5.1.7. The field extension L|K is finitely generated if there exist a_1, ...,
a_n ∈ L such that L = K(a_1, ..., a_n). The extension L|K is a simple extension if
there is an a ∈ L with L = K(a). In this case a is called a primitive element of L|K.
Later we will look at an alternative way to view the adjunction constructions in
terms of polynomials.

5.2 Finite and Algebraic Extensions

We now turn to the relationship between field extensions and the solution of
polynomial equations.

Definition 5.2.1. Let L|K be a field extension. An element a ∈ L is algebraic over
K if there exists a nonzero polynomial p(x) ∈ K[x] with p(a) = 0. L is an algebraic
extension of K if each element of L is algebraic over K. An element a ∈ L that is
not algebraic over K is called transcendental. L is a transcendental extension if there
are transcendental elements in L, that is elements that are not algebraic over K.

For the remainder of this section we assume that L|K is a field extension.

Lemma 5.2.2. Each element of K is algebraic over K.

Proof. Let k ∈ K. Then k is a root of the polynomial p(x) = x − k ∈ K[x].
We now tie algebraic extensions to finite extensions.

Theorem 5.2.3. If L|K is a finite extension then L|K is an algebraic extension.

Proof. Suppose that L|K is a finite extension and a ∈ L. We must show that a is
algebraic over K. Suppose that [L : K] = n < ∞; then dim_K(L) = n. It follows that
any n + 1 elements of L are linearly dependent over K.

Now consider the elements 1, a, a^2, ..., a^n in L. These are n + 1 elements
in L so they are dependent over K. Hence there exist c_0, ..., c_n ∈ K, not all zero, such
that

c_0 + c_1 a + ··· + c_n a^n = 0.

Let p(x) = c_0 + c_1 x + ··· + c_n x^n. Then p(x) is a nonzero polynomial in K[x] and p(a) = 0. Therefore a
is algebraic over K. Since a was arbitrary it follows that L is an algebraic extension
of K.
From the previous theorem it follows that every finite extension is algebraic. The
converse is not true, that is, there are algebraic extensions that are not finite. We will
give examples in Section 5.4.

The following lemma gives some examples of algebraic and transcendental
extensions.

Lemma 5.2.4. C|R is algebraic but R|Q and C|Q are transcendental. If K is any
field then K(x)|K is transcendental.

Proof. Since 1, i constitute a basis for C over R we have [C : R] = 2. Hence C is a
finite extension of R and therefore from Theorem 5.2.3 an algebraic extension. More
directly, if α = a + ib ∈ C then α is a zero of x^2 − 2ax + (a^2 + b^2) ∈ R[x].

The existence of transcendental numbers (we will discuss these more fully in
Section 5.5) shows that both R|Q and C|Q are transcendental extensions.

Finally the element x ∈ K(x) is not a zero of any nonzero polynomial in K[x]. Therefore
x is a transcendental element so the extension K(x)|K is transcendental.
5.3 Minimal Polynomials and Simple Extensions

If L|K is a field extension and a ∈ L is algebraic over K then p(a) = 0 for some
nonzero polynomial p(x) ∈ K[x]. In this section we consider the smallest such polynomial
and tie it to a simple extension of K.

Definition 5.3.1. Suppose that L|K is a field extension and a ∈ L is algebraic over K.
The polynomial m_a(x) ∈ K[x] is the minimal polynomial of a over K if

(1) m_a(x) has leading coefficient 1, that is, it is a monic polynomial.

(2) m_a(a) = 0.

(3) If f(x) ∈ K[x] with f(a) = 0 then m_a(x)|f(x).

Hence m_a(x) is the monic polynomial of minimal degree that has a as a zero.
We prove next that every algebraic element has such a minimal polynomial.

Theorem 5.3.2. Suppose that L|K is a field extension and a ∈ L is algebraic over K.
Then

(1) The minimal polynomial m_a(x) ∈ K[x] exists and is irreducible over K.

(2) K[a] = K(a) ≅ K[x]/(m_a(x)) where (m_a(x)) is the principal ideal in K[x]
generated by m_a(x).

(3) [K(a) : K] = deg(m_a(x)). Therefore K(a)|K is a finite extension.

Proof. (1) Suppose that a ∈ L is algebraic over K. Let

I = {f(x) ∈ K[x] : f(a) = 0}.

Since a is algebraic, I ≠ {0}. It is straightforward to show (see exercises) that I is an
ideal in K[x]. Since K is a field we have that K[x] is a principal ideal domain. Hence
there exists g(x) ∈ K[x] with I = (g(x)). Let b be the leading coefficient of g(x).
Then m_a(x) = b^(−1)g(x) is a monic polynomial. We claim that m_a(x) is the minimal
polynomial of a and that m_a(x) is irreducible.

First it is clear that I = (g(x)) = (m_a(x)). If f(x) ∈ K[x] with f(a) = 0 then
f(x) = h(x)m_a(x) for some h(x). Therefore m_a(x) divides any polynomial that has
a as a zero. It follows that m_a(x) is the minimal polynomial.

Suppose that m_a(x) = g_1(x)g_2(x). Then since m_a(a) = 0 it follows that either
g_1(a) = 0 or g_2(a) = 0. Suppose g_1(a) = 0. Then from above m_a(x)|g_1(x)
and since g_1(x)|m_a(x) we must then have that g_2(x) is a unit. Therefore m_a(x) is
irreducible.

(2) Consider the map τ : K[x] → K[a] given by

τ(Σ_i k_i x^i) = Σ_i k_i a^i.

Then τ is a ring epimorphism (see exercises) and

ker(τ) = {f(x) ∈ K[x] : f(a) = 0} = (m_a(x))

from the argument in the proof of part (1). It follows that

K[x]/(m_a(x)) ≅ K[a].

Since m_a(x) is irreducible we have that K[x]/(m_a(x)) is a field and therefore K[a] =
K(a).

(3) Let n = deg(m_a(x)). We claim that the elements 1, a, ..., a^(n−1) are a basis for
K[a] = K(a) over K. First suppose that

Σ_{i=0}^{n−1} c_i a^i = 0

with not all c_i = 0 and c_i ∈ K. Then h(a) = 0 where h(x) = Σ_{i=0}^{n−1} c_i x^i. But this
contradicts the fact that m_a(x) has minimal degree over all nonzero polynomials in K[x] that
have a as a zero. Therefore the set 1, a, ..., a^(n−1) is linearly independent over K.

Now let b ∈ K[a] ≅ K[x]/(m_a(x)). Then there is a g(x) ∈ K[x] with b = g(a).
By the division algorithm

g(x) = h(x)m_a(x) + r(x)

where r(x) = 0 or deg(r(x)) < deg(m_a(x)). Now

r(a) = g(a) − h(a)m_a(a) = g(a) = b.

If r(x) = 0 then b = 0. If r(x) ≠ 0 then since deg(r(x)) < n we have

r(x) = c_0 + c_1 x + ··· + c_{n−1}x^(n−1)

with c_i ∈ K (some, but not all, of the c_i may be zero). This implies that

b = r(a) = c_0 + c_1 a + ··· + c_{n−1}a^(n−1)

and hence b is a linear combination over K of 1, a, ..., a^(n−1). Hence 1, a, ..., a^(n−1)
spans K[a] over K and hence forms a basis.
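Theorem 5.3.2 makes the arithmetic of a simple algebraic extension completely concrete: every element of K(a) is a K-linear combination of 1, a, ..., a^(n−1), and products are reduced using m_a(a) = 0. A small sketch for K = Q and a = √2, so m_a(x) = x^2 − 2 and elements are pairs (c_0, c_1) representing c_0 + c_1·a (the pair representation is our own convention):

```python
from fractions import Fraction as F

# Arithmetic in Q(sqrt 2) = Q[x]/(x^2 - 2): by Theorem 5.3.2 (3) the set
# {1, a} is a basis, and a^2 is rewritten as 2 whenever it appears.

def mul(u, v):
    """(u0 + u1*a)(v0 + v1*a) with a^2 = 2."""
    u0, u1 = u
    v0, v1 = v
    return (u0 * v0 + 2 * u1 * v1, u0 * v1 + u1 * v0)

def inverse(u):
    """1/(u0 + u1*a), rationalizing by the conjugate u0 - u1*a; the norm
    u0^2 - 2*u1^2 is nonzero for u != 0 since sqrt 2 is irrational."""
    u0, u1 = u
    norm = u0 * u0 - 2 * u1 * u1
    return (F(u0, 1) / norm, F(-u1, 1) / norm)

u = (1, 1)            # 1 + sqrt 2
print(mul(u, u))      # (3, 2)  i.e. 3 + 2*sqrt 2
print(inverse(u))     # (Fraction(-1, 1), Fraction(1, 1))  i.e. -1 + sqrt 2
```

The existence of inverse illustrates part (2) of the theorem: because x^2 − 2 is irreducible, the quotient ring is a field, so K[a] already equals K(a).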
Theorem 5.3.3. Suppose that L|K is a field extension and a ∈ L is algebraic over K.
Suppose that f(x) ∈ K[x] is a monic polynomial with f(a) = 0. Then f(x) is the
minimal polynomial of a if and only if f(x) is irreducible in K[x].

Proof. Suppose that f(x) is the minimal polynomial of a. Then f(x) is irreducible
from the previous theorem.

Conversely suppose that f(x) is monic, irreducible and f(a) = 0. From the previous
theorem m_a(x)|f(x). Since f(x) is irreducible we have f(x) = cm_a(x) with
c ∈ K. However since both f(x) and m_a(x) are monic we must have c = 1 and
f(x) = m_a(x).

We now show that a finite extension of K is actually finitely generated over K and
further it is generated by finitely many algebraic elements.
Theorem 5.3.4. Let $L|K$ be a field extension. Then the following are equivalent:

(1) $L|K$ is a finite extension.

(2) $L|K$ is an algebraic extension and there exist elements $a_1, \ldots, a_n \in L$ such that $L = K(a_1, \ldots, a_n)$.

(3) There exist algebraic elements $a_1, \ldots, a_n \in L$ such that $L = K(a_1, \ldots, a_n)$.
Proof. (1) $\Rightarrow$ (2). We have seen in Theorem 5.2.3 that a finite extension is algebraic. Suppose that $a_1, \ldots, a_n$ are a basis for $L$ over $K$. Then clearly $L = K(a_1, \ldots, a_n)$.

(2) $\Rightarrow$ (3). If $L|K$ is an algebraic extension and $L = K(a_1, \ldots, a_n)$ then each $a_i$ is algebraic over $K$.

(3) $\Rightarrow$ (1). Suppose that there exist algebraic elements $a_1, \ldots, a_n \in L$ such that $L = K(a_1, \ldots, a_n)$. We show that $L|K$ is a finite extension. We do this by induction on $n$. If $n = 1$ then $L = K(a)$ for some algebraic element $a$ and the result follows from Theorem 5.3.2. Suppose now that $n \ge 2$. We assume then that any extension $K(a_1, \ldots, a_{n-1})$ with $a_1, \ldots, a_{n-1}$ algebraic elements is a finite extension. Now suppose that we have $L = K(a_1, \ldots, a_n)$ with $a_1, \ldots, a_n$ algebraic elements. Then
$$[K(a_1, \ldots, a_n) : K] = [K(a_1, \ldots, a_{n-1})(a_n) : K(a_1, \ldots, a_{n-1})]\,[K(a_1, \ldots, a_{n-1}) : K].$$
The second factor $[K(a_1, \ldots, a_{n-1}) : K]$ is finite from the inductive hypothesis. The first factor $[K(a_1, \ldots, a_{n-1})(a_n) : K(a_1, \ldots, a_{n-1})]$ is also finite from Theorem 5.3.2, since it is the degree of a simple extension of the field $K(a_1, \ldots, a_{n-1})$ by the algebraic element $a_n$. Therefore $[K(a_1, \ldots, a_n) : K]$ is finite.
Theorem 5.3.5. Suppose that $K$ is a field and $R$ is an integral domain with $K \subseteq R$. Then $R$ can be viewed as a vector space over $K$. If $\dim_K(R) < \infty$ then $R$ is a field.

Proof. Let $r_0 \in R$ with $r_0 \ne 0$. Define the map $\tau$ from $R$ to $R$ given by
$$\tau(r) = rr_0.$$
It is easy to show (see exercises) that this is a linear transformation of $R$ considered as a vector space over $K$.

Suppose that $\tau(r) = 0$. Then $rr_0 = 0$ and hence $r = 0$, since $r_0 \ne 0$ and $R$ is an integral domain. It follows that $\tau$ is an injective map. Since $R$ is a finite dimensional vector space over $K$ and $\tau$ is an injective linear transformation, $\tau$ must also be surjective. This implies that there exists an $r_1$ with $\tau(r_1) = 1$. Then $r_1 r_0 = 1$ and hence $r_0$ has an inverse within $R$. Since $r_0$ was an arbitrary nonzero element of $R$ it follows that $R$ is a field.
Theorem 5.3.6. Suppose that $K \subseteq L \subseteq M$ is a chain of field extensions. Then $M|K$ is algebraic if and only if $M|L$ is algebraic and $L|K$ is algebraic.

Proof. If $M|K$ is algebraic then certainly $M|L$ and $L|K$ are algebraic.

Now suppose that $M|L$ and $L|K$ are algebraic. We show that $M|K$ is algebraic. Let $a \in M$. Then since $a$ is algebraic over $L$ there exist $b_0, b_1, \ldots, b_n \in L$ with
$$b_0 + b_1 a + \cdots + b_n a^n = 0.$$
Each $b_i$ is algebraic over $K$ and hence $K(b_0, \ldots, b_n)$ is finite dimensional over $K$. Since $a$ is a zero of the polynomial above, $a$ is algebraic over $K(b_0, \ldots, b_n)$, and therefore $K(b_0, \ldots, b_n)(a) = K(b_0, \ldots, b_n, a)$ is also finite dimensional over $K$. Therefore $K(b_0, \ldots, b_n, a)$ is a finite extension of $K$ and hence an algebraic extension of $K$. Since $a \in K(b_0, \ldots, b_n, a)$ it follows that $a$ is algebraic over $K$ and therefore $M$ is algebraic over $K$.
5.4 Algebraic Closures
As before suppose that $L|K$ is a field extension. Since each element of $K$ is algebraic over $K$, there are certainly elements of $L$ that are algebraic over $K$. Let $A_K$ denote the set of all elements of $L$ that are algebraic over $K$. We prove that $A_K$ is actually a subfield of $L$. It is called the algebraic closure of $K$ within $L$.
Theorem 5.4.1. Suppose that $L|K$ is a field extension and let $A_K$ denote the set of all elements of $L$ that are algebraic over $K$. Then $A_K$ is a subfield of $L$. $A_K$ is called the algebraic closure of $K$ in $L$.

Proof. Since $K \subseteq A_K$ we have that $A_K \ne \emptyset$. Let $a, b \in A_K$. Since $a, b$ are both algebraic over $K$, from Theorem 5.3.4 we have that $K(a, b)$ is a finite extension of $K$. Therefore $K(a, b)$ is an algebraic extension of $K$ and hence each element of $K(a, b)$ is algebraic over $K$. Now $a, b \in K(a, b)$ and $K(a, b)$ is a field, so $a \pm b$, $ab$ and, if $b \ne 0$, $a/b$ are all in $K(a, b)$ and hence all algebraic over $K$. Therefore $a \pm b$, $ab$ and $a/b$ (if $b \ne 0$) are all in $A_K$. It follows that $A_K$ is a subfield of $L$.
In Section 5.2 we showed that every finite extension is an algebraic extension. We mentioned that the converse is not necessarily true, that is, there are algebraic extensions that are not finite. Here we give an example.
Theorem 5.4.2. Let $A$ be the algebraic closure of the rational numbers $Q$ within the complex numbers $C$. Then $A$ is an algebraic extension of $Q$ but $[A : Q] = \infty$.

Proof. From the previous theorem $A$ is an algebraic extension of $Q$. We show that it cannot be a finite extension. By Eisenstein's criterion the rational polynomial $f(x) = x^p - p$ is irreducible over $Q$ for any prime $p$. Let $a$ be a zero in $C$ of $f(x)$. Then $a \in A$ and $[Q(a) : Q] = p$. Therefore $[A : Q] \ge p$ for all primes $p$. Since there are infinitely many primes this implies that $[A : Q] = \infty$.
5.5 Algebraic and Transcendental Numbers
In this section we consider the chain of field extensions $Q \subseteq R \subseteq C$.

Definition 5.5.1. An algebraic number $\alpha$ is an element of $C$ which is algebraic over $Q$. Hence an algebraic number is an $\alpha \in C$ such that $f(\alpha) = 0$ for some nonzero $f(x) \in Q[x]$. If $\alpha \in C$ is not algebraic it is transcendental.

We will let $A$ denote the totality of algebraic numbers within the complex numbers $C$, and $T$ the set of transcendentals, so that $C = A \cup T$. In the language of the last section, $A$ is the algebraic closure of $Q$ within $C$. As in the general case, if $\alpha \in C$ is algebraic we will let $m_\alpha(x)$ denote the minimal polynomial of $\alpha$ over $Q$.

We now examine the sets $A$ and $T$ more closely. Since $A$ is precisely the algebraic closure of $Q$ in $C$, we have from our general result that $A$ actually forms a subfield of $C$. Further, since the intersection of subfields is again a subfield, it follows that $A' = A \cap R$, the real algebraic numbers, form a subfield of the reals.

Theorem 5.5.2. The set $A$ of algebraic numbers forms a subfield of $C$. The subset $A' = A \cap R$ of real algebraic numbers forms a subfield of $R$.
Since each rational is algebraic it is clear that there are algebraic numbers. Further, there are irrational algebraic numbers, $\sqrt{2}$ for example, since it is a zero of the irreducible polynomial $x^2 - 2$ over $Q$. On the other hand we haven't examined the question of whether transcendental numbers really exist. To show that any particular complex number is transcendental is in general quite difficult. However it is relatively easy to show that there are uncountably many transcendentals.

Theorem 5.5.3. The set $A$ of algebraic numbers is countably infinite. Therefore $T$, the set of transcendental numbers, and $T' = T \cap R$, the real transcendental numbers, are uncountably infinite.
Proof. Let
$$P_n = \{f(x) \in Q[x] : \deg(f(x)) \le n\}.$$
If $f(x) \in P_n$ then $f(x) = q_0 + q_1 x + \cdots + q_n x^n$ with $q_i \in Q$, so we can identify a polynomial of degree $\le n$ with an $(n+1)$-tuple $(q_0, q_1, \ldots, q_n)$ of rational numbers. Therefore the set $P_n$ has the same size as the $(n+1)$-fold Cartesian product of $Q$:
$$Q^{n+1} = Q \times Q \times \cdots \times Q.$$
Since a finite Cartesian product of countable sets is still countable it follows that $P_n$ is a countable set.

Now let
$$B_n = \bigcup_{f(x) \in P_n} \{\text{roots of } f(x)\},$$
that is, $B_n$ is the union of all roots in $C$ of all rational polynomials of degree $\le n$. Since each such $f(x)$ has a maximum of $n$ roots and since $P_n$ is countable, it follows that $B_n$ is a countable union of finite sets and hence is still countable. Now
$$A = \bigcup_{n=1}^{\infty} B_n$$
so $A$ is a countable union of countable sets and is therefore countable.

Since both $R$ and $C$ are uncountably infinite, the second assertions follow directly from the countability of $A$. If, say, $T$ were countable then $C = A \cup T$ would also be countable, which is a contradiction.
From Theorem 5.5.3 we know that there exist infinitely many transcendental numbers. Liouville in 1851 gave the first proof of the existence of transcendentals by exhibiting a few. He gave as one the following example.
Theorem 5.5.4. The real number
$$c = \sum_{j=1}^{\infty} \frac{1}{10^{j!}}$$
is transcendental.

Proof. First of all, since $\frac{1}{10^{j!}} \le \frac{1}{10^{j}}$ and $\sum_{j=1}^{\infty} \frac{1}{10^{j}}$ is a convergent geometric series, it follows from the comparison test that the infinite series defining $c$ converges and defines a real number. Further, since $\sum_{j=1}^{\infty} \frac{1}{10^{j}} = \frac{1}{9}$, it follows that $c \le \frac{1}{9} < 1$.

Suppose that $c$ is algebraic, so that $g(c) = 0$ for some nonzero rational polynomial $g(x)$. Multiplying through by the least common multiple of all the denominators in $g(x)$ we may suppose that $f(c) = 0$ for some integral polynomial $f(x) = \sum_{j=0}^{n} m_j x^j$. Then $c$ satisfies
$$\sum_{j=0}^{n} m_j c^j = 0$$
for some integers $m_0, \ldots, m_n$.
If $0 < x < 1$ then by the triangle inequality
$$|f'(x)| = \Big|\sum_{j=1}^{n} j m_j x^{j-1}\Big| \le \sum_{j=1}^{n} |j m_j| = B$$
where $B$ is a real constant depending only on the coefficients of $f(x)$.
Now let
$$c_k = \sum_{j=1}^{k} \frac{1}{10^{j!}}$$
be the $k$th partial sum for $c$. Then
$$|c - c_k| = \sum_{j=k+1}^{\infty} \frac{1}{10^{j!}} < 2 \cdot \frac{1}{10^{(k+1)!}}.$$
Apply the mean value theorem to $f(x)$ at $c$ and $c_k$ to obtain
$$|f(c) - f(c_k)| = |c - c_k|\,|f'(\xi)|$$
for some $\xi$ with $c_k < \xi < c < 1$. Now since $0 < \xi < 1$ we have
$$|c - c_k|\,|f'(\xi)| < \frac{2B}{10^{(k+1)!}}.$$
On the other hand, since $f(x)$ can have at most $n$ roots, it follows that for all $k$ large enough we would have $f(c_k) \ne 0$. Since $f(c) = 0$ we have
$$|f(c) - f(c_k)| = |f(c_k)| = \Big|\sum_{j=0}^{n} m_j c_k^j\Big| \ge \frac{1}{10^{nk!}}$$
since for each $j$, $m_j c_k^j$ is a rational number with denominator $10^{jk!}$, so the nonzero sum is a nonzero rational with denominator dividing $10^{nk!}$. However if $k$ is chosen sufficiently large and $n$ is fixed we have
$$\frac{1}{10^{nk!}} > \frac{2B}{10^{(k+1)!}},$$
contradicting the estimate from the mean value theorem. Therefore $c$ is transcendental.
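The two competing estimates in the proof are easy to illustrate numerically with exact rational arithmetic; this is a sanity check of my own, not part of Liouville's argument:

```python
from fractions import Fraction
from math import factorial

def liouville_partial(k):
    """k-th partial sum c_k = sum_{j=1}^{k} 10^(-j!), computed exactly."""
    return sum(Fraction(1, 10 ** factorial(j)) for j in range(1, k + 1))

# Tail estimate |c - c_k| < 2/10^((k+1)!), tested for k = 3 with the
# j = 4, 5 terms standing in for the (rapidly convergent) tail.
tail = liouville_partial(5) - liouville_partial(3)
bound = Fraction(2, 10 ** factorial(4))
print(tail < bound)  # True

# c_k has denominator only 10^(k!) yet lies within 2/10^((k+1)!) of c --
# an approximation far better than any algebraic number of fixed degree allows.
print(liouville_partial(3))  # 110001/1000000
```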
In 1873 Hermite proved that $e$ is transcendental, while Lindemann in 1882 showed that $\pi$ is transcendental. Schneider in 1934 showed that $a^b$ is transcendental if $a \ne 0, 1$, $a$ and $b$ are algebraic and $b$ is irrational. Later in the book we will prove that both $e$ and $\pi$ are transcendental. An interesting open question is the following:

Is $\pi$ transcendental over $Q(e)$?
To close this section we show that in general if $a \in L$ is transcendental over $K$ then $K(a)|K$ is isomorphic to the field of rational functions over $K$.

Theorem 5.5.5. Suppose that $L|K$ is a field extension and $a \in L$ is transcendental over $K$. Then $K(a)|K$ is isomorphic to $K(x)|K$. Here the isomorphism $\rho : K(x) \to K(a)$ can be chosen such that $\rho(x) = a$.

Proof. Define the map $\rho : K(x) \to K(a)$ by
$$\rho\Big(\frac{f(x)}{g(x)}\Big) = \frac{f(a)}{g(a)}$$
for $f(x), g(x) \in K[x]$ with $g(x) \ne 0$; note that $g(a) \ne 0$ since $a$ is transcendental over $K$. Then $\rho$ is a surjective homomorphism and $\rho(x) = a$. Since $\rho \ne 0$ it follows that $\rho$ is an isomorphism.
5.6 Exercises
1. Let $a \in C$ with $a^3 - 2a - 2 = 0$ and $b = a^2 - a$. Compute the minimal polynomial $m_b(x)$ of $b$ over $Q$ and compute the inverse of $b$ in $Q(a)$.

2. Determine the algebraic closure of $R$ in $C(x)$.

3. Let $a_n := \sqrt[2^n]{2} \in R$, $n = 1, 2, 3, \ldots$, and let $A := \{a_n : n \in N\}$ and $L := Q(A)$. Show:

(i) $[Q(a_n) : Q] = 2^n$.

(ii) $[L : Q] = \infty$.

(iii) $L = \bigcup_{n=1}^{\infty} Q(a_n)$.

(iv) $L$ is algebraic over $Q$.

4. Determine $[L : Q]$ for

(i) $L = Q(\sqrt{2}, \sqrt{-2})$.

(ii) $L = Q(\sqrt{3}, \sqrt{3} + \sqrt[3]{3})$.

(iii) $L = Q\big(\frac{1+i}{\sqrt{2}}, \frac{1-i}{\sqrt{2}}\big)$.

5. Show that $Q(\sqrt{2}, \sqrt{3}) = \{a + b\sqrt{2} + c\sqrt{3} + d\sqrt{6} : a, b, c, d \in Q\}$. Determine the degree of $Q(\sqrt{2}, \sqrt{3})$ over $Q$. Further show that $Q(\sqrt{2}, \sqrt{3}) = Q(\sqrt{2} + \sqrt{3})$.

6. Let $K$, $L$ be fields and $a \in L$ be transcendental over $K$. Show:

(i) Each element of $K(a) \setminus K$ is transcendental over $K$.

(ii) $a^n$ is transcendental over $K$ for each $n \ge 1$.

(iii) If $L' := K\big(\frac{a^3}{a+1}\big)$ then $a$ is algebraic over $L'$. Determine the minimal polynomial $m_a(x)$ of $a$ over $L'$.
7. Let $K$ be a field and $a \in K(x) \setminus K$. Show:

(i) $x$ is algebraic over $K(a)$.

(ii) If $L$ is a field with $K \subsetneq L \subseteq K(x)$, then $[K(x) : L] < \infty$.

(iii) $a$ is transcendental over $K$.

8. Suppose that $a \in L$ is algebraic over $K$. Let
$$I = \{f(x) \in K[x] : f(a) = 0\}.$$
Since $a$ is algebraic, $I \ne \{0\}$. Prove that $I$ is an ideal in $K[x]$.

9. Prove that there are uncountably many transcendental numbers. To do this show that the set $A$ of algebraic numbers is countable, as follows:

(i) Show that $Q_n[x]$, the set of rational polynomials of degree $\le n$, is countable (finite Cartesian product of countable sets).

(ii) Let $B_n = \{\text{zeros of polynomials in } Q_n[x]\}$. Show that $B_n$ is countable.

(iii) Show that $A = \bigcup_{n=1}^{\infty} B_n$ and conclude that $A$ is countable.

(iv) Show that the transcendental numbers are uncountable.

10. Consider the map $\tau : K[x] \to K[a]$ given by
$$\tau\Big(\sum_i k_i x^i\Big) = \sum_i k_i a^i.$$
Show that $\tau$ is a ring epimorphism.

11. Suppose that $K$ is a field and $R$ is an integral domain with $K \subseteq R$. Then $R$ can be viewed as a vector space over $K$. Let $r_0 \in R$ with $r_0 \ne 0$. Define the map from $R$ to $R$ given by
$$\tau(r) = rr_0.$$
Show that this is a linear transformation of $R$ considered as a vector space over $K$.
Chapter 6
Field Extensions and Compass and Straightedge
Constructions
6.1 Geometric Constructions
Greek mathematicians in the classical period posed the problem of constructing certain geometric figures in the Euclidean plane using only a straightedge and a compass. These are known as geometric construction problems.

Recall from elementary geometry that using a straightedge and compass it is possible to draw a line parallel to a given line segment through a given point, to extend a given line segment, and to erect a perpendicular to a given line at a given point on that line. There were other geometric construction problems for which the Greeks could not find straightedge and compass solutions but which, on the other hand, they were never able to prove impossible. In particular there were four famous (to the Greeks) insolvable construction problems. The first is the squaring of the circle: given a circle, to construct using straightedge and compass a square having area equal to that of the given circle. The second is the doubling of the cube: given a cube of given side length, to construct using a straightedge and compass a side of a cube having double the volume of the original cube. The third problem is the trisection of an angle: to trisect a given angle using only a straightedge and compass. The final problem is the construction of a regular n-gon: which regular n-gons can be constructed using only straightedge and compass?
By translating each of these problems into the language of field extensions we can show that each of the first three problems is insolvable in general, and we can give the complete solution to the construction of the regular n-gons.
6.2 Constructible Numbers and Field Extensions
We now translate the geometric construction problems into the language of field extensions. As a first step we define a constructible number.
Definition 6.2.1. Suppose we are given a line segment of unit length. A real number $\alpha$ is constructible if we can construct a line segment of length $|\alpha|$ in a finite number of steps from the unit segment using a straightedge and compass.
Our first result is that the set of all constructible numbers forms a subfield of $R$.

Theorem 6.2.2. The set $C$ of all constructible numbers forms a subfield of $R$. Further, $Q \subseteq C$.

Proof. Let $C$ be the set of all constructible numbers. Since the given unit length segment is constructible, we have $1 \in C$. Therefore $C \ne \emptyset$, and thus to show that it is a field we must show that it is closed under the field operations.

Suppose $\alpha, \beta$ are constructible. We must show then that $\alpha \pm \beta$, $\alpha\beta$, and $\alpha/\beta$ for $\beta \ne 0$ are constructible. If $\alpha, \beta > 0$, construct a line segment of length $|\alpha|$. At one end of this line segment extend it by a segment of length $|\beta|$. This constructs a segment of length $\alpha + \beta$. Similarly, if $\alpha > \beta$, lay off a segment of length $|\beta|$ at the beginning of a segment of length $|\alpha|$. The remaining piece will be $\alpha - \beta$. By considering cases we can do this in the same manner if either $\alpha$ or $\beta$ or both are negative. These constructions are pictured in Figure 6.1. Therefore $\alpha \pm \beta$ are constructible.

Figure 6.1

In Figure 6.2 we show how to construct $\alpha\beta$. Let the line segment $OA$ have length $|\alpha|$. Consider a line $L$ through $O$ not coincident with $OA$. Let $OB$ have length $|\beta|$ as in the diagram. Let $P$ be on ray $OB$ so that $OP$ has length $1$. Draw $PA$ and then find $Q$ on ray $OA$ such that $BQ$ is parallel to $PA$. From similar triangles we then have
$$\frac{|OP|}{|OB|} = \frac{|OA|}{|OQ|} \implies \frac{1}{|\beta|} = \frac{|\alpha|}{|OQ|}.$$
Then $|OQ| = |\alpha||\beta|$, and so $\alpha\beta$ is constructible.

Figure 6.2

A similar construction, pictured in Figure 6.3, shows that $\alpha/\beta$ for $\beta \ne 0$ is constructible. Find $OA$, $OB$, $OP$ as above. Now connect $A$ to $B$ and let $PQ$ be parallel to $AB$. From similar triangles again we have
$$\frac{1}{|\beta|} = \frac{|OQ|}{|\alpha|} \implies |OQ| = \frac{|\alpha|}{|\beta|}.$$
Hence $\alpha/\beta$ is constructible.

Figure 6.3

Therefore $C$ is a subfield of $R$. Since $\operatorname{char} C = 0$, it follows that $Q \subseteq C$.
Let us now consider analytically how a constructible number is found in the plane. Starting at the origin and using the unit length and the constructions above, we can locate any point in the plane with rational coordinates. That is, we can construct the point $P = (q_1, q_2)$ with $q_1, q_2 \in Q$. Using only straightedge and compass, any further point in the plane can be determined in one of the following three ways.

1. The intersection point of two lines each of which passes through two known points each having rational coordinates.

2. The intersection point of a line passing through two known points having rational coordinates and a circle whose center has rational coordinates and whose radius squared is rational.

3. The intersection point of two circles each of whose centers has rational coordinates and each of whose radii is the square root of a rational number.

Analytically, the first case involves the solution of a pair of linear equations each with rational coefficients and thus only leads to other rational numbers. In cases two and three we must solve equations of the form $x^2 + y^2 + ax + by + c = 0$ with $a, b, c \in Q$. These will then be quadratic equations over $Q$, and thus the solutions will either be in $Q$ or in a quadratic extension $Q(\sqrt{\alpha})$ of $Q$. Once a real quadratic extension of $Q$ is found, the process can be iterated. Conversely it can be shown that if $\alpha$ is constructible, so is $\sqrt{\alpha}$. We thus can prove the following theorem.
Theorem 6.2.3. If $\gamma$ is constructible with $\gamma \notin Q$, then there exists a finite number of elements $\alpha_1, \ldots, \alpha_r \in R$ with $\alpha_r = \gamma$ such that for $i = 1, \ldots, r$, $Q(\alpha_1, \ldots, \alpha_i)$ is a quadratic extension of $Q(\alpha_1, \ldots, \alpha_{i-1})$. In particular, $[Q(\gamma) : Q] = 2^n$ for some $n \ge 1$.

Therefore, the constructible numbers are precisely those real numbers that are contained in repeated quadratic extensions of $Q$. In the next section we use this idea to show the impossibility of the first three mentioned construction problems.
6.3 Four Classical Construction Problems
We now consider the aforementioned construction problems. Our main technique will be to use Theorem 6.2.3. From this result we have that if $\gamma$ is constructible with $\gamma \notin Q$, then $[Q(\gamma) : Q] = 2^n$ for some $n \ge 1$.

6.3.1 Squaring the Circle

Theorem 6.3.1. It is impossible to square the circle. That is, it is impossible in general, given a circle, to construct using straightedge and compass a square having area equal to that of the given circle.

Proof. Suppose the given circle has radius $1$. It is then constructible and would have an area of $\pi$. A corresponding square would then have to have a side of length $\sqrt{\pi}$. To be constructible a number $\alpha$ must have $[Q(\alpha) : Q] = 2^n < \infty$ and hence $\alpha$ must be algebraic. However $\pi$ is transcendental, so $\sqrt{\pi}$ is also transcendental and therefore not constructible.
6.3.2 The Doubling of the Cube
Theorem 6.3.2. It is impossible to double the cube. This means that it is impossible in general, given a cube of given side length, to construct using a straightedge and compass a side of a cube having double the volume of the original cube.

Proof. Let the given side length be $1$, so that the original volume is also $1$. To double this we would have to construct a side of length $2^{1/3}$. However $[Q(2^{1/3}) : Q] = 3$, since the minimal polynomial over $Q$ is $m_{2^{1/3}}(x) = x^3 - 2$. This is not a power of $2$, so $2^{1/3}$ is not constructible.
6.3.3 The Trisection of an Angle
Theorem 6.3.3. It is impossible to trisect an angle. This means that it is impossible in general to trisect a given angle using only a straightedge and compass.

Proof. An angle $\theta$ is constructible if and only if a segment of length $|\cos\theta|$ is constructible. Since $\cos(\pi/3) = 1/2$, the angle $\pi/3$ is constructible. We show that it cannot be trisected by straightedge and compass.

The following trigonometric identity holds:
$$\cos(3\theta) = 4\cos^3(\theta) - 3\cos(\theta).$$
Let $\alpha = \cos(\pi/9)$. From the above identity we have $4\alpha^3 - 3\alpha - \frac{1}{2} = 0$. The polynomial $4x^3 - 3x - \frac{1}{2}$ is irreducible over $Q$, and hence the minimal polynomial of $\alpha$ over $Q$ is $m_\alpha(x) = x^3 - \frac{3}{4}x - \frac{1}{8}$. It follows that $[Q(\alpha) : Q] = 3$, and hence $\alpha$ is not constructible. Therefore the corresponding angle $\pi/9$ is not constructible. Thus $\pi/3$ is constructible, but it cannot be trisected.
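The irreducibility of $4x^3 - 3x - \frac{1}{2}$ can be confirmed by the rational root test, since a cubic over $Q$ is reducible exactly when it has a rational root. A quick sketch of my own (not the book's method):

```python
from fractions import Fraction

def has_rational_root(coeffs):
    """Rational root test. coeffs = [a0, a1, ..., an] are integers with a0, an nonzero.
    Any rational root p/q in lowest terms has p | a0 and q | an."""
    a0, an = coeffs[0], coeffs[-1]
    candidates = {Fraction(s * p, q)
                  for p in range(1, abs(a0) + 1) if a0 % p == 0
                  for q in range(1, abs(an) + 1) if an % q == 0
                  for s in (1, -1)}
    return any(sum(c * r ** i for i, c in enumerate(coeffs)) == 0
               for r in candidates)

# 4x^3 - 3x - 1/2 cleared of denominators is 8x^3 - 6x - 1:
print(has_rational_root([-1, -6, 0, 8]))  # False, so the cubic is irreducible over Q
```

For quadratics and cubics "no rational root" is equivalent to irreducibility over $Q$; in higher degree it is only a necessary condition.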
6.3.4 Construction of a Regular n-Gon
The final construction problem we consider is the construction of regular n-gons. The algebraic study of the constructibility of regular n-gons was initiated by Gauss in the early part of the nineteenth century.

Notice first that a regular n-gon is constructible for $n \ge 3$ if and only if the angle $\frac{2\pi}{n}$ is constructible, which is the case if and only if the length $\cos\frac{2\pi}{n}$ is a constructible number. From our techniques, if $\cos\frac{2\pi}{n}$ is a constructible number then necessarily $[Q(\cos(\frac{2\pi}{n})) : Q] = 2^m$ for some $m$. After we discuss Galois theory we will see that this condition is also sufficient. Therefore $\cos\frac{2\pi}{n}$ is a constructible number if and only if $[Q(\cos(\frac{2\pi}{n})) : Q] = 2^m$ for some $m$.

The solution of this problem, that is, the determination of when $[Q(\cos(\frac{2\pi}{n})) : Q] = 2^m$, involves two concepts from number theory: the Euler phi-function and Fermat primes.

Definition 6.3.4. For any natural number $n$, the Euler phi-function is defined by
$$\varphi(n) = \text{the number of integers less than or equal to } n \text{ and relatively prime to } n.$$

Example 6.3.5. $\varphi(6) = 2$ since among $1, 2, 3, 4, 5, 6$ only $1, 5$ are relatively prime to $6$.

It is fairly straightforward to develop a formula for $\varphi(n)$. A formula is first determined for primes and for prime powers and then pasted back together via the fundamental theorem of arithmetic.
Lemma 6.3.6. For any prime $p$ and $m > 0$,
$$\varphi(p^m) = p^m - p^{m-1} = p^m\Big(1 - \frac{1}{p}\Big).$$

Proof. If $1 \le a \le p$ then either $a = p$ or $(a, p) = 1$. It follows that the positive integers less than or equal to $p^m$ which are not relatively prime to $p^m$ are precisely the multiples of $p$, that is $p, 2p, 3p, \ldots, p^{m-1}p$. All other positive $a \le p^m$ are relatively prime to $p^m$. Hence the number relatively prime to $p^m$ is
$$p^m - p^{m-1}.$$
Lemma 6.3.7. If $(a, b) = 1$ then $\varphi(ab) = \varphi(a)\varphi(b)$.

Proof. Given a natural number $n$, a reduced residue system modulo $n$ is a set of integers $x_1, \ldots, x_k$ such that each $x_i$ is relatively prime to $n$, $x_i \not\equiv x_j \bmod n$ unless $i = j$, and if $(x, n) = 1$ for some integer $x$ then $x \equiv x_i \bmod n$ for some $i$. Clearly $\varphi(n)$ is the size of a reduced residue system modulo $n$.

Let $R_a = \{x_1, \ldots, x_{\varphi(a)}\}$ be a reduced residue system modulo $a$, let $R_b = \{y_1, \ldots, y_{\varphi(b)}\}$ be a reduced residue system modulo $b$, and let
$$S = \{ay_i + bx_j : i = 1, \ldots, \varphi(b),\ j = 1, \ldots, \varphi(a)\}.$$
We claim that $S$ is a reduced residue system modulo $ab$. Since $S$ has $\varphi(a)\varphi(b)$ elements it will follow that $\varphi(ab) = \varphi(a)\varphi(b)$.

To show that $S$ is a reduced residue system modulo $ab$ we must show three things: first, that each $x \in S$ is relatively prime to $ab$; second, that the elements of $S$ are distinct modulo $ab$; and finally, that given any integer $n$ with $(n, ab) = 1$ we have $n \equiv s \bmod ab$ for some $s \in S$.

Let $x = ay_i + bx_j$. Then since $(x_j, a) = 1$ and $(b, a) = 1$ it follows that $(x, a) = 1$. Analogously $(x, b) = 1$. Since $x$ is relatively prime to both $a$ and $b$ we have $(x, ab) = 1$. This shows that each element of $S$ is relatively prime to $ab$.

Next suppose that
$$ay_i + bx_j \equiv ay_k + bx_l \bmod ab.$$
Then
$$ab \mid (ay_i + bx_j) - (ay_k + bx_l) \implies ay_i \equiv ay_k \bmod b.$$
Since $(a, b) = 1$ it follows that $y_i \equiv y_k \bmod b$. But then $y_i = y_k$ since $R_b$ is a reduced residue system. Similarly $x_j = x_l$. This shows that the elements of $S$ are distinct modulo $ab$.

Finally suppose $(n, ab) = 1$. Since $(a, b) = 1$ there exist $x, y$ with $ax + by = 1$. Then
$$anx + bny = n.$$
Since $(x, b) = 1$ and $(n, b) = 1$ it follows that $(nx, b) = 1$. Therefore there is a $y_i \in R_b$ with $nx = y_i + tb$. In the same manner $(ny, a) = 1$ and so there is an $x_j \in R_a$ with $ny = x_j + ua$. Then
$$a(y_i + tb) + b(x_j + ua) = n \implies n = ay_i + bx_j + (t + u)ab \implies n \equiv ay_i + bx_j \bmod ab$$
and we are done.
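The proof's system $S = \{ay_i + bx_j\}$ can be checked by brute force for small coprime moduli; the concrete pair $a = 4$, $b = 9$ below is my own choice of example:

```python
from math import gcd

def reduced_residues(n):
    """The canonical reduced residue system mod n: the units among 1..n."""
    return [x for x in range(1, n + 1) if gcd(x, n) == 1]

a, b = 4, 9  # gcd(a, b) = 1
Ra, Rb = reduced_residues(a), reduced_residues(b)

# The lemma's candidate system S = { a*y + b*x : y in Rb, x in Ra }, taken mod ab.
S = sorted({(a * y + b * x) % (a * b) for y in Rb for x in Ra})

print(S == reduced_residues(a * b))                       # True: S is a reduced residue system mod 36
print(len(Ra) * len(Rb) == len(reduced_residues(a * b)))  # True: phi(4)*phi(9) = phi(36)
```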
We now give the general formula for $\varphi(n)$.

Theorem 6.3.8. Suppose $n = p_1^{e_1} \cdots p_k^{e_k}$. Then
$$\varphi(n) = (p_1^{e_1} - p_1^{e_1 - 1})(p_2^{e_2} - p_2^{e_2 - 1}) \cdots (p_k^{e_k} - p_k^{e_k - 1}).$$
Proof. From the previous lemmas we have
$$\varphi(n) = \varphi(p_1^{e_1})\varphi(p_2^{e_2}) \cdots \varphi(p_k^{e_k}) = (p_1^{e_1} - p_1^{e_1 - 1})(p_2^{e_2} - p_2^{e_2 - 1}) \cdots (p_k^{e_k} - p_k^{e_k - 1})$$
$$= p_1^{e_1}(1 - 1/p_1) \cdots p_k^{e_k}(1 - 1/p_k) = p_1^{e_1} \cdots p_k^{e_k}\,(1 - 1/p_1) \cdots (1 - 1/p_k) = n\prod_i \Big(1 - \frac{1}{p_i}\Big).$$
Example 6.3.9. Determine $\varphi(126)$. Now
$$126 = 2 \cdot 3^2 \cdot 7 \implies \varphi(126) = \varphi(2)\varphi(3^2)\varphi(7) = (1)(3^2 - 3)(6) = 36.$$
Hence there are $36$ units in $Z_{126}$.
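Theorem 6.3.8 translates directly into a procedure: factor out each prime $p$ dividing $n$ and multiply by $(1 - 1/p)$. A sketch using trial division (adequate for small $n$):

```python
def phi(n):
    """Euler phi via the product formula phi(n) = n * prod over primes p | n of (1 - 1/p)."""
    result, m = n, n
    p = 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p        # multiply result by (1 - 1/p)
            while m % p == 0:            # strip the full prime power p^e from m
                m //= p
        p += 1
    if m > 1:                            # one prime factor larger than sqrt(n) may remain
        result -= result // m
    return result

print(phi(126))  # 36, matching Example 6.3.9
```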
An interesting result with many generalizations in number theory is the following.

Theorem 6.3.10. For every $n \ge 1$,
$$\sum_{d \mid n} \varphi(d) = n,$$
the sum being taken over all divisors $d \ge 1$ of $n$.
Proof. We first prove the theorem for prime powers and then paste together via the fundamental theorem of arithmetic.

Suppose that $n = p^e$ for $p$ a prime. Then the divisors of $n$ are $1, p, p^2, \ldots, p^e$, so
$$\sum_{d \mid n} \varphi(d) = \varphi(1) + \varphi(p) + \varphi(p^2) + \cdots + \varphi(p^e) = 1 + (p - 1) + (p^2 - p) + \cdots + (p^e - p^{e-1}).$$
Notice that this sum telescopes, that is, $1 + (p - 1) = p$, $p + (p^2 - p) = p^2$ and so on. Hence the sum is just $p^e$ and the result is proved for $n$ a prime power.
We now do an induction on the number of distinct prime factors of $n$. The above argument shows that the result is true if $n$ has only one distinct prime factor. Assume that the result is true whenever an integer has fewer than $k$ distinct prime factors, and suppose $n = p_1^{e_1} \cdots p_k^{e_k}$ has $k$ distinct prime factors. Then $n = p^e c$ where $p = p_1$, $e = e_1$ and $c$ has fewer than $k$ distinct prime factors. By the inductive hypothesis
$$\sum_{d_1 \mid c} \varphi(d_1) = c.$$
Since $(c, p) = 1$ the divisors of $n$ are all of the form $p^\alpha d_1$ where $d_1 \mid c$ and $\alpha = 0, 1, \ldots, e$. It follows that
$$\sum_{d \mid n} \varphi(d) = \sum_{d_1 \mid c} \varphi(d_1) + \sum_{d_1 \mid c} \varphi(p d_1) + \cdots + \sum_{d_1 \mid c} \varphi(p^e d_1).$$
Since $(d_1, p^\alpha) = 1$ for any divisor $d_1$ of $c$ this sum equals
$$\sum_{d_1 \mid c} \varphi(d_1) + \sum_{d_1 \mid c} \varphi(p)\varphi(d_1) + \cdots + \sum_{d_1 \mid c} \varphi(p^e)\varphi(d_1)$$
$$= \sum_{d_1 \mid c} \varphi(d_1) + (p - 1)\sum_{d_1 \mid c} \varphi(d_1) + \cdots + (p^e - p^{e-1})\sum_{d_1 \mid c} \varphi(d_1)$$
$$= c + (p - 1)c + (p^2 - p)c + \cdots + (p^e - p^{e-1})c.$$
As in the case of prime powers this sum telescopes, giving the final result
$$\sum_{d \mid n} \varphi(d) = p^e c = n.$$
Example 6.3.11. Consider $n = 10$. The divisors are $1, 2, 5, 10$, and $\varphi(1) = 1$, $\varphi(2) = 1$, $\varphi(5) = 4$, $\varphi(10) = 4$. Then
$$\varphi(1) + \varphi(2) + \varphi(5) + \varphi(10) = 1 + 1 + 4 + 4 = 10.$$
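The divisor identity is easy to confirm computationally over a range of $n$; here $\varphi$ is computed straight from Definition 6.3.4 by counting:

```python
from math import gcd

def phi(n):
    """Euler phi by direct count (Definition 6.3.4)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def divisor_phi_sum(n):
    """Sum of phi(d) over all divisors d of n; Theorem 6.3.10 says this equals n."""
    return sum(phi(d) for d in range(1, n + 1) if n % d == 0)

print([divisor_phi_sum(n) for n in (10, 12, 36, 100)])  # [10, 12, 36, 100]
```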
We will see later in the book that the Euler phi-function plays an important role in the structure theory of abelian groups.

We now turn to Fermat primes.

Definition 6.3.12. The Fermat numbers are the sequence $(F_n)$ of positive integers defined by
$$F_n = 2^{2^n} + 1, \quad n = 0, 1, 2, 3, \ldots.$$
If a particular $F_n$ is prime it is called a Fermat prime.
Fermat believed that all the numbers in this sequence were primes. In fact $F_0, F_1, F_2, F_3, F_4$ are all prime but $F_5$ is composite and divisible by $641$ (see exercises). It is still an open question whether or not there are infinitely many Fermat primes. It has been conjectured that there are only finitely many. On the other hand, if a number of the form $2^n + 1$ is a prime for some integer $n$ then it must be a Fermat prime.
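Both facts are quick to verify by machine: trial division certifies $F_0, \ldots, F_4$ prime, and a single remainder exposes the famous factor of $F_5$.

```python
def fermat(n):
    """The n-th Fermat number F_n = 2^(2^n) + 1."""
    return 2 ** (2 ** n) + 1

def is_prime(n):
    """Trial division; fast enough up to F_4 = 65537."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print([is_prime(fermat(n)) for n in range(5)])  # [True, True, True, True, True]
print(fermat(5) % 641)                          # 0, so 641 | F_5 = 4294967297
```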
Theorem 6.3.13. If $a \ge 2$ and $a^n + 1$, $n \ge 1$, is a prime then $a$ is even and $n = 2^m$ for some nonnegative integer $m$. In particular, if $p = 2^k + 1$, $k \ge 1$, is a prime then $k = 2^n$ for some $n$ and $p$ is a Fermat prime.

Proof. If $a$ is odd then $a^n + 1$ is even and hence not a prime. Suppose then that $a$ is even and $n = kl$ with $k$ odd and $k \ge 3$. Then
$$\frac{a^{kl} + 1}{a^l + 1} = a^{(k-1)l} - a^{(k-2)l} + \cdots + 1.$$
Therefore $a^l + 1$ divides $a^{kl} + 1$ if $k \ge 3$. Hence if $a^n + 1$ is a prime we must have $n = 2^m$.
We can now state the solution to the constructibility of regular n-gons.

Theorem 6.3.14. A regular n-gon is constructible with a straightedge and compass if and only if $n = 2^m p_1 \cdots p_k$ where $p_1, \ldots, p_k$ are distinct Fermat primes.

Before proving the theorem, notice for example that a regular 20-gon is constructible since $20 = 2^2 \cdot 5$ and $5$ is a Fermat prime. On the other hand a regular 11-gon is not constructible.
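Since the proof below reduces constructibility of the n-gon to "$\varphi(n)$ is a power of $2$", the criterion is easy to tabulate; the power-of-two check via `m & (m - 1)` is a standard bit trick, and the equivalence to the Fermat-prime factorization form is exactly Theorem 6.3.14.

```python
from math import gcd

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def ngon_constructible(n):
    """Constructibility of the regular n-gon (n >= 3), in the equivalent
    form used in the proof: phi(n) must be a power of 2."""
    m = phi(n)
    return n >= 3 and (m & (m - 1)) == 0

print(ngon_constructible(20), ngon_constructible(11))  # True False
print([n for n in range(3, 31) if ngon_constructible(n)])
# [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30]
```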
Proof. Let $\rho = e^{2\pi i/n}$ be a primitive $n$th root of unity. Since
$$e^{2\pi i/n} = \cos\Big(\frac{2\pi}{n}\Big) + i\sin\Big(\frac{2\pi}{n}\Big)$$
it is easy to compute (see exercises) that
$$\rho + \frac{1}{\rho} = 2\cos\Big(\frac{2\pi}{n}\Big).$$
Therefore $Q(\rho + \frac{1}{\rho}) = Q(\cos(\frac{2\pi}{n}))$. After we discuss Galois theory in more detail we will prove that
$$\Big[Q\Big(\rho + \frac{1}{\rho}\Big) : Q\Big] = \frac{\varphi(n)}{2}$$
where $\varphi(n)$ is the Euler phi-function. Therefore $\cos(\frac{2\pi}{n})$ is constructible if and only if $\frac{\varphi(n)}{2}$, and hence $\varphi(n)$, is a power of $2$.
Suppose that $n = 2^m p_1^{e_1} \cdots p_k^{e_k}$ with the $p_i$ odd primes. Then from Theorem 6.3.8
$$\varphi(n) = 2^{m-1}(p_1^{e_1} - p_1^{e_1 - 1})(p_2^{e_2} - p_2^{e_2 - 1}) \cdots (p_k^{e_k} - p_k^{e_k - 1}).$$
If this is a power of $2$ each factor must also be a power of $2$. Now
$$p_i^{e_i} - p_i^{e_i - 1} = p_i^{e_i - 1}(p_i - 1).$$
If this is to be a power of $2$ we must have $e_i = 1$ and $p_i - 1 = 2^{k_i}$ for some $k_i$. Therefore each odd prime appears to the first power and $p_i = 2^{k_i} + 1$ is a Fermat prime, proving the theorem.
6.4 Exercises
1. Let $\varphi$ be a given angle. In which of the following cases is the angle $\psi$ constructible from the angle $\varphi$ by compass and straightedge?

(a) $\varphi = \frac{\pi}{13}$, $\psi = \frac{\pi}{26}$.

(b) $\varphi = \frac{\pi}{33}$, $\psi = \frac{\pi}{11}$.

(c) $\varphi = \frac{\pi}{7}$, $\psi = \frac{\pi}{12}$.
2. (The golden section) In the plane let $AB$ be a given segment from $A$ to $B$ with length $a$. The segment $AB$ should be divided such that the proportion of $AB$ to the length of the bigger subsegment is equal to the proportion of the length of the bigger subsegment to the length of the smaller subsegment:
$$\frac{a}{b} = \frac{b}{a - b},$$
where $b$ is the length of the bigger subsegment. Such a division is called division by the golden section. If we write $b = ax$, $0 < x < 1$, then $\frac{1}{x} = \frac{x}{1 - x}$, that is, $x^2 = 1 - x$. Show:

(a) $\frac{1}{x} = \frac{1 + \sqrt{5}}{2} = \alpha$.

(b) Construct the division of $AB$ by the golden section with compass and straightedge.

(c) If we divide the radius $r > 0$ of a circle by the golden section, then the bigger part of the so divided radius is the side of the regular 10-gon with its 10 vertices on the circle.

3. Given a regular 10-gon such that the 10 vertices are on a circle with radius $r > 0$. Show that the length of each side is equal to the bigger part of the radius divided by the golden section. Describe the procedure for the construction of the regular 10-gon and 5-gon.
90 Chapter 6 Field Extensions and Compass and Straightedge Constructions
4. Construct the regular 17-gon with compass and straightedge. Hint: We have to construct the number ½(ω + ω⁻¹) = cos(2π/17), where ω = e^{2πi/17}. First, construct the positive zero ω₁ of the polynomial x² + x − 4; we get

    ω₁ = ½(√17 − 1) = ω + ω⁻¹ + ω² + ω⁻² + ω⁴ + ω⁻⁴ + ω⁸ + ω⁻⁸.

Then, construct the positive zero ω₂ of the polynomial x² − ω₁x − 1; we get

    ω₂ = ¼(√17 − 1 + √(34 − 2√17)) = ω + ω⁻¹ + ω⁴ + ω⁻⁴.

From ω₁ and ω₂ construct β = ½(ω₂² − ω₁ + ω₂ − 4). Then ω₃ = 2 cos(2π/17) is the bigger of the two positive zeros of the polynomial x² − ω₂x + β.
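Each step of the hint can be checked numerically. A short Python sanity check (my own, illustrative only) confirms that the successive quadratic zeros really end at 2 cos(2π/17):

```python
import math

s17 = math.sqrt(17)
w1 = (s17 - 1) / 2                              # positive zero of x^2 + x - 4
assert abs(w1**2 + w1 - 4) < 1e-12

w2 = (s17 - 1 + math.sqrt(34 - 2 * s17)) / 4    # positive zero of x^2 - w1*x - 1
assert abs(w2**2 - w1 * w2 - 1) < 1e-12

beta = (w2**2 - w1 + w2 - 4) / 2
# the bigger zero of x^2 - w2*x + beta is 2*cos(2*pi/17)
w3 = (w2 + math.sqrt(w2**2 - 4 * beta)) / 2
assert abs(w3 - 2 * math.cos(2 * math.pi / 17)) < 1e-12
```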
5. The Fibonacci numbers f_n, n ∈ ℕ ∪ {0}, are defined by f₀ = 0, f₁ = 1 and f_{n+2} = f_{n+1} + f_n for n ∈ ℕ ∪ {0}. Show:
(a) f_n = (αⁿ − βⁿ)/(α − β) with α = (1 + √5)/2, β = (1 − √5)/2.
(b) (f_{n+1}/f_n)_{n∈ℕ} converges and lim_{n→∞} f_{n+1}/f_n = (1 + √5)/2 = α.
(c)
    (0 1)ⁿ   (f_{n−1}   f_n   )
    (1 1)  = (f_n     f_{n+1}),   n ∈ ℕ.
(d) f₁ + f₂ + ⋯ + f_n = f_{n+2} − 1, n ≥ 1.
(e) f_{n−1} f_{n+1} − f_n² = (−1)ⁿ, n ∈ ℕ.
(f) f₁² + f₂² + ⋯ + f_n² = f_n f_{n+1}, n ∈ ℕ.
(g) The Fermat numbers F₀, F₁, F₂, F₃, F₄ are all prime but F₅ is composite and divisible by 641.
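Parts (a) and (d)–(g) lend themselves to a quick machine check before attempting the proofs. A Python sketch (my own, illustrative only):

```python
import math

f = [0, 1]
for _ in range(30):
    f.append(f[-1] + f[-2])

alpha = (1 + math.sqrt(5)) / 2
beta = (1 - math.sqrt(5)) / 2
# (a) Binet's formula f_n = (alpha^n - beta^n)/(alpha - beta)
for n in range(20):
    assert abs(f[n] - (alpha**n - beta**n) / (alpha - beta)) < 1e-6
# (d) f_1 + ... + f_n = f_{n+2} - 1
assert sum(f[1:11]) == f[12] - 1
# (e) f_{n-1} f_{n+1} - f_n^2 = (-1)^n  (Cassini's identity)
for n in range(1, 20):
    assert f[n - 1] * f[n + 1] - f[n]**2 == (-1)**n
# (f) f_1^2 + ... + f_n^2 = f_n f_{n+1}
assert sum(x * x for x in f[1:11]) == f[10] * f[11]
# (g) F_0..F_4 = 3, 5, 17, 257, 65537 are prime, but 641 divides F_5
fermat = [2**(2**k) + 1 for k in range(6)]
assert fermat[5] % 641 == 0
```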
6. Let ρ = e^{2πi/n} be a primitive n-th root of unity. Using

    e^{2πi/n} = cos(2π/n) + i sin(2π/n)

show that

    ρ + 1/ρ = 2 cos(2π/n).
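The identity can be checked numerically for small n; since 1/ρ is the complex conjugate of ρ, the sum is real. A quick Python check (my own, illustrative only):

```python
import cmath
import math

# rho + 1/rho = 2*cos(2*pi/n) for the primitive n-th root of unity e^{2*pi*i/n}
for n in range(1, 20):
    rho = cmath.exp(2j * math.pi / n)
    assert abs((rho + 1 / rho) - 2 * math.cos(2 * math.pi / n)) < 1e-12
```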
Chapter 7
Kronecker’s Theorem and Algebraic Closures
7.1 Kronecker’s Theorem
In the last chapter we proved that if L|K is a field extension then there exists an intermediate field K ⊆ A ⊆ L such that A is algebraic over K and contains all the elements of L that are algebraic over K. We call A the algebraic closure of K within L. In this chapter we prove that starting with any field K we can construct an extension field L that is algebraic over K and is algebraically closed. By this we mean that there are no proper algebraic extensions of L or, equivalently, that there are no irreducible nonlinear polynomials in L[x]. In the final section of this chapter we will give a proof of the famous fundamental theorem of algebra, which in the language of this chapter says that the field ℂ of complex numbers is algebraically closed. We will present another proof of this important result later in the book after we discuss Galois theory.

First we need the following crucial result of Kronecker, which says that given a polynomial f(x) in K[x], where K is a field, we can construct an extension field L of K in which f(x) has a root α. We say that L has been constructed by adjoining α to K. Recall that if f(x) ∈ K[x] is irreducible of degree greater than one then f(x) can have no roots in K. We first need the following concept.
Definition 7.1.1. Let L|K and L′|K be field extensions. Then a K-isomorphism is an isomorphism τ : L → L′ that is the identity map on K, that is, fixes each element of K.
Theorem 7.1.2 (Kronecker's theorem). Let K be a field and f(x) ∈ K[x] a nonconstant polynomial. Then there exists a finite extension K′ of K where f(x) has a root.
Proof. Suppose that f(x) ∈ K[x]. We know that f(x) factors into irreducible polynomials. Let p(x) be an irreducible factor of f(x). From the material in Chapter 4 we know that since p(x) is irreducible the principal ideal (p(x)) in K[x] is a maximal ideal. To see this suppose that g(x) ∉ (p(x)), so that g(x) is not a multiple of p(x). Since p(x) is irreducible, it follows that gcd(p(x), g(x)) = 1. Thus there exist h(x), k(x) ∈ K[x] with

    h(x)p(x) + k(x)g(x) = 1.

The element on the left is in the ideal (p(x), g(x)), so the identity, 1, is in this ideal. Therefore this ideal is the whole ring K[x]. Since g(x) was arbitrary, this implies that the principal ideal (p(x)) is maximal.
Now let K′ = K[x]/(p(x)). Since (p(x)) is a maximal ideal it follows that K′ is a field. We show that K can be embedded in K′ and that p(x) has a zero in K′.

First consider the map α : K[x] → K′ given by α(f(x)) = f(x) + (p(x)). This is a homomorphism. Since the identity element 1 ∈ K is not in (p(x)) it follows that α restricted to K is nontrivial. Therefore α restricted to K is a monomorphism: its kernel is an ideal of the field K, hence either {0} or K, and since α|_K is nontrivial we get ker(α|_K) = {0}. Therefore K can be embedded into α(K), which is contained in K′, and so K′ can be considered as an extension field of K.

Consider the element a = x + (p(x)) ∈ K′. Then p(a) = p(x) + (p(x)) = 0 + (p(x)) since p(x) ∈ (p(x)). But 0 + (p(x)) is the zero element of the factor ring K[x]/(p(x)). Therefore in K′ we have p(a) = 0 and hence p(x) has a zero in K′. Since p(x) divides f(x) we must have f(a) = 0 in K′ also. Therefore we have constructed an extension field of K in which f(x) has a zero.
We now outline a slightly more constructive proof of Kronecker's theorem. From this construction we say that the field K′ is constructed by adjoining the root α to K.
Proof of Kronecker's theorem. We can assume that f(x) is irreducible. Suppose that f(x) = a₀ + a₁x + ⋯ + a_n xⁿ with a_n ≠ 0. Define α to satisfy

    a₀ + a₁α + ⋯ + a_n αⁿ = 0.

Now define K′ = K(α) in the following manner. We let

    K(α) = {c₀ + c₁α + ⋯ + c_{n−1}α^{n−1} : c_i ∈ K}.

Then on K(α) define addition and subtraction componentwise and define multiplication by algebraic manipulation, replacing powers of α of degree n or higher by using

    αⁿ = (−a₀ − a₁α − ⋯ − a_{n−1}α^{n−1}) / a_n.

We claim that K′ = K(α) then forms a field of finite degree over K. The basic ring properties follow easily by computation (see exercises) using the definitions. We must show then that every nonzero element of K(α) has a multiplicative inverse. Let g(α) ∈ K(α) be nonzero. Then the corresponding polynomial g(x) ∈ K[x] is a polynomial of degree ≤ n − 1. Since f(x) is irreducible of degree n it follows that f(x) and g(x) must be relatively prime, that is gcd(f(x), g(x)) = 1. Hence there exist a(x), b(x) ∈ K[x] with

    a(x)f(x) + b(x)g(x) = 1.

Evaluate these polynomials at α to get

    a(α)f(α) + b(α)g(α) = 1.

Since by definition we have f(α) = 0 this becomes

    b(α)g(α) = 1.

Now b(α) might have degree higher than n − 1 in α. However, using the relation f(α) = 0 we can rewrite b(α) as b̄(α), where b̄(α) now has degree ≤ n − 1 in α and hence is in K(α). Therefore

    b̄(α)g(α) = 1

and hence g(α) has a multiplicative inverse. It follows that K(α) is a field and by definition f(α) = 0. The elements 1, α, …, α^{n−1} form a basis for K(α) over K and hence

    [K(α) : K] = n.
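The arithmetic described in this proof is concrete enough to program. The following Python sketch (my own illustration, not the book's construction verbatim) takes K = ℚ and f(x) = x² − 2, represents elements of K(α) as coefficient pairs [c₀, c₁] meaning c₀ + c₁α with α² = 2, and checks that f(α) = 0 and that nonzero elements are invertible. For this quadratic case the inverse comes from the explicit conjugate formula rather than the extended Euclidean algorithm used in the proof.

```python
from fractions import Fraction

def mul(u, v):
    # (u0 + u1*a)(v0 + v1*a) = u0*v0 + 2*u1*v1 + (u0*v1 + u1*v0)*a, using a^2 = 2
    return [u[0]*v[0] + 2*u[1]*v[1], u[0]*v[1] + u[1]*v[0]]

def inv(u):
    # (c0 + c1*a)^(-1) = (c0 - c1*a)/(c0^2 - 2*c1^2); the denominator is nonzero
    # for u != 0 since x^2 - 2 is irreducible over Q (sqrt(2) is irrational)
    d = u[0]*u[0] - 2*u[1]*u[1]
    return [u[0]/d, -u[1]/d]

a = [Fraction(0), Fraction(1)]          # alpha itself
assert mul(a, a) == [2, 0]              # alpha^2 = 2, i.e. f(alpha) = 0
u = [Fraction(1), Fraction(3)]          # 1 + 3*alpha
assert mul(u, inv(u)) == [1, 0]         # every nonzero element is invertible
```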
Example 7.1.3. Let f(x) = x² + 1 ∈ ℝ[x]. This is irreducible over ℝ. We construct the field in which it has a root.

Let α be an indeterminate with α² + 1 = 0, that is, α² = −1. The extension field ℝ(α) then has the form

    ℝ(α) = {x + αy : x, y ∈ ℝ, α² = −1}.

It is clear that this field is ℝ-isomorphic to the complex numbers ℂ, that is, ℝ(α) ≅ ℝ(i) ≅ ℂ.
In Chapter 5 we showed that if L|K is a field extension and a ∈ L is algebraic over K then there is a smallest algebraic extension K(a) of K within L. Further, K(a) is determined by the minimal polynomial m_a(x). The difference between that construction and the construction in Kronecker's theorem is that in the proof of Kronecker's theorem α is defined to be the root and we constructed the field around it, whereas in the previous construction a was assumed to satisfy the polynomial and K(a) was an already existing field that contained a. However, the next theorem says that these constructions are the same up to K-isomorphism.
Theorem 7.1.4. Let p(x) ∈ K[x] be an irreducible polynomial and let K′ = K(α) be the extension field of K constructed in Kronecker's theorem in which p(x) has a zero α. Let L be an extension field of K and suppose that a ∈ L is algebraic with minimal polynomial m_a(x) = p(x). Then K(α) is K-isomorphic to K(a).

Proof. If L|K is a field extension and a ∈ L is algebraic then the construction of the subfield K(a) within L is identical to the construction outlined above for K(α). If deg(p(x)) = n then the elements 1, a, …, a^{n−1} constitute a basis for K(a) over K and the elements 1, α, …, α^{n−1} constitute a basis for K(α) over K. The mapping

    τ : K(a) → K(α)

defined by τ(k) = k if k ∈ K and τ(a) = α, and then extended by linearity, is easily shown to be a K-isomorphism (see exercises).
Theorem 7.1.5. Let K be a field. Then the following are equivalent.
(1) Each nonconstant polynomial in K[x] has a zero in K.
(2) Each nonconstant polynomial in K[x] factors into linear factors over K. That is, for each f(x) ∈ K[x] there exist elements a₁, …, a_n, b ∈ K with

    f(x) = b(x − a₁) ⋯ (x − a_n).

(3) An element of K[x] is irreducible if and only if it is of degree one.
(4) If L|K is an algebraic extension then L = K.

Proof. Suppose that each nonconstant polynomial in K[x] has a zero in K. Let f(x) ∈ K[x] with deg(f(x)) = n. If a₁ is a zero of f(x) then

    f(x) = (x − a₁)h(x),

where the degree of h(x) is n − 1. Now h(x) has a zero a₂ in K, so that

    f(x) = (x − a₁)(x − a₂)g(x)

with deg(g(x)) = n − 2. Continuing in this manner, f(x) factors completely into linear factors. Hence (1) implies (2).

Now suppose (2), that is, that each nonconstant polynomial in K[x] factors into linear factors over K. Suppose that f(x) is irreducible. If deg(f(x)) > 1 then f(x) factors into linear factors and hence is not irreducible. Therefore f(x) must be of degree 1, and (2) implies (3).

Now suppose that an element of K[x] is irreducible if and only if it is of degree one, and suppose that L|K is an algebraic extension. Let a ∈ L. Then a is algebraic over K. Its minimal polynomial m_a(x) is monic and irreducible over K and hence from (3) is linear. Therefore m_a(x) = x − a ∈ K[x]. It follows that a ∈ K and hence L = K. Therefore (3) implies (4).

Finally suppose that whenever L|K is an algebraic extension then L = K. Suppose that f(x) is a nonconstant polynomial in K[x]. From Kronecker's theorem there exists a field extension L and a ∈ L with f(a) = 0. However L is a finite and hence algebraic extension, so by supposition L = K. Therefore a ∈ K and f(x) has a zero in K. Therefore (4) implies (1), completing the proof.
In the next section we will prove that given a field K we can always find an extension field L with the properties of the last theorem.

7.2 Algebraic Closures and Algebraically Closed Fields

A field K is termed algebraically closed if K has no algebraic extensions other than K itself. This is equivalent to any one of the conditions of Theorem 7.1.5.
Definition 7.2.1. A field K is algebraically closed if every nonconstant polynomial f(x) ∈ K[x] has a zero in K.

The following theorem is just a restatement of Theorem 7.1.5.

Theorem 7.2.2. A field K is algebraically closed if and only if it satisfies any one of the following conditions.
(1) Each nonconstant polynomial in K[x] has a zero in K.
(2) Each nonconstant polynomial in K[x] factors into linear factors over K. That is, for each f(x) ∈ K[x] there exist elements a₁, …, a_n, b ∈ K with

    f(x) = b(x − a₁) ⋯ (x − a_n).

(3) An element of K[x] is irreducible if and only if it is of degree one.
(4) If L|K is an algebraic extension then L = K.
The prime example of an algebraically closed field is the field ℂ of complex numbers. The fundamental theorem of algebra says that any nonconstant complex polynomial has a complex root.

We now show that the algebraic closure of one field within an algebraically closed field is algebraically closed. First we define a general algebraic closure.

Definition 7.2.3. An extension field L of a field K is an algebraic closure of K if L is algebraically closed and L|K is algebraic.
Theorem 7.2.4. Let K be a field and L|K an extension of K with L algebraically closed. Let E = A_L be the algebraic closure of K within L. Then E is an algebraic closure of K.

Proof. Let E = A_L be the algebraic closure of K within L. We know that E|K is algebraic; therefore we must show that E is algebraically closed.

Let f(x) be a nonconstant polynomial in E[x]. Then f(x) ∈ L[x]. Since L is algebraically closed, f(x) has a zero a in L. Since f(a) = 0 and f(x) ∈ E[x] it follows that a is algebraic over E. However E is algebraic over K, and therefore a is also algebraic over K. Hence a ∈ E and f(x) has a zero in E. Therefore E is algebraically closed.
We want to note the distinction between being algebraically closed and being an algebraic closure.

Lemma 7.2.5. The field ℂ of complex numbers is an algebraic closure of ℝ but not an algebraic closure of ℚ. An algebraic closure of ℚ is 𝔸, the field of algebraic numbers within ℂ.

Proof. ℂ is algebraically closed (the fundamental theorem of algebra) and since [ℂ : ℝ] = 2 it is algebraic over ℝ. Therefore ℂ is an algebraic closure of ℝ. Although ℂ is algebraically closed and contains the rational numbers ℚ, it is not an algebraic closure of ℚ, since it is not algebraic over ℚ: there exist transcendental elements.

On the other hand 𝔸, the field of algebraic numbers within ℂ, is an algebraic closure of ℚ by Theorem 7.2.4.
We now show that every field has an algebraic closure. To do this we first show that any field can be embedded into an algebraically closed field.

Theorem 7.2.6. Let K be a field. Then K can be embedded into an algebraically closed field.

Proof. We show first that there is an extension field L of K in which each nonconstant polynomial f(x) ∈ K[x] has a zero.

Assign to each nonconstant f(x) ∈ K[x] the symbol y_f and consider

    R = K[y_f : f(x) ∈ K[x]],

the polynomial ring over K in the variables y_f. Set

    I = { Σ_{j=1}^{n} f_j(y_{f_j}) r_j : r_j ∈ R, f_j(x) ∈ K[x] nonconstant }.

It is straightforward that I is an ideal in R. Suppose that I = R. Then 1 ∈ I. Hence there is a linear combination

    1 = g₁ f₁(y_{f₁}) + ⋯ + g_n f_n(y_{f_n})

where g_i ∈ R. In the n polynomials g₁, …, g_n there are only a finite number of variables, say for example

    y_{f₁}, …, y_{f_n}, …, y_{f_m}.

Hence

    1 = Σ_{i=1}^{n} g_i(y_{f₁}, …, y_{f_m}) f_i(y_{f_i}).   (*)

Successive applications of Kronecker's theorem let us construct an extension field E of K in which each f_i has a zero a_i. Substituting a_i for y_{f_i} in (*) above we get 1 = 0, a contradiction. Therefore I ≠ R.

Since I is an ideal not equal to the whole ring R, it follows that I is contained in a maximal ideal M of R. Set L = R/M. Since M is maximal, L is a field. Now K ∩ M = {0}: if not, suppose that a ∈ K ∩ M with a ≠ 0. Then a⁻¹a = 1 ∈ M and then M = R. Now define τ : K → L by τ(k) = k + M. Since K ∩ M = {0} it follows that ker(τ) = {0}, so τ is a monomorphism. This allows us to identify K with τ(K) and shows that K embeds into L.

Now suppose that f(x) is a nonconstant polynomial in K[x]. Then

    f(y_f + M) = f(y_f) + M.

However, by the construction f(y_f) ∈ M, so that

    f(y_f + M) = 0 + M = M, the zero element of L.

Therefore y_f + M is a zero of f(x).

Therefore we have constructed a field L in which every nonconstant polynomial in K[x] has a zero.

We now iterate this procedure to form a chain of fields

    K ⊆ L₁ (= L) ⊆ L₂ ⊆ ⋯

such that each nonconstant polynomial of L_i[x] has a zero in L_{i+1}.

Now let L̂ = ∪_{i=1}^{∞} L_i. It is easy to show (see exercises) that L̂ is a field. If f(x) is a nonconstant polynomial in L̂[x] then there is some i with f(x) ∈ L_i[x]. Therefore f(x) has a zero in L_{i+1} ⊆ L̂. Hence f(x) has a zero in L̂, and L̂ is algebraically closed.
Theorem 7.2.7. Let K be a field. Then K has an algebraic closure.

Proof. Let L̂ be an algebraically closed field containing K, which exists by Theorem 7.2.6.

Now let L = A_{L̂} be the set of elements of L̂ that are algebraic over K. From Theorem 7.2.4, L is an algebraic closure of K.
The following lemma is straightforward. We leave the proof to the exercises.

Lemma 7.2.8. Let K, K′ be fields and φ : K → K′ a homomorphism. Then φ̄ : K[x] → K′[x] given by

    φ̄( Σ_{i=0}^{n} k_i x^i ) = Σ_{i=0}^{n} φ(k_i) x^i

is also a homomorphism. By convention we identify φ and φ̄ and write φ = φ̄. If φ is an isomorphism then so is φ̄.
Lemma 7.2.9. Let K, K′ be fields and φ : K → K′ an isomorphism. Let f(x) ∈ K[x] be irreducible. Let K ⊆ K(a) and K′ ⊆ K′(a′), where a is a zero of f(x) and a′ is a zero of φ(f(x)). Then there is an isomorphism ψ : K(a) → K′(a′) with ψ|_K = φ and ψ(a) = a′. Further, ψ is uniquely determined.

Proof. This is a generalized version of Theorem 7.1.4. If b ∈ K(a) then from the construction of K(a) there is a polynomial g(x) ∈ K[x] with b = g(a). Define a map

    ψ : K(a) → K′(a′)

by

    ψ(b) = φ(g(x))(a′).

We show that ψ is an isomorphism.

First, ψ is well-defined. Suppose that b = g(a) = h(a) with h(x) ∈ K[x]. Then (g − h)(a) = 0. Since f(x) is irreducible this implies that f(x) = c·m_a(x), and since a is a zero of (g − h)(x) we get f(x) | (g − h)(x). Then

    φ(f(x)) | (φ(g(x)) − φ(h(x))).

Since φ(f(x))(a′) = 0 this implies that φ(g(x))(a′) = φ(h(x))(a′), and hence the map ψ is well-defined.

It is easy to show that ψ is a homomorphism. Let b₁ = g₁(a), b₂ = g₂(a). Then b₁b₂ = g₁g₂(a). Hence

    ψ(b₁b₂) = (φ(g₁g₂))(a′) = φ(g₁)(a′)φ(g₂)(a′) = ψ(b₁)ψ(b₂).

In the same manner we have ψ(b₁ + b₂) = ψ(b₁) + ψ(b₂).

Now suppose that k ∈ K, so that k ∈ K[x] is a constant polynomial. Then ψ(k) = (φ(k))(a′) = φ(k). Therefore ψ restricted to K is precisely φ.

As ψ is not the zero mapping it follows that ψ is a monomorphism. Since the image of ψ contains K′ and a′, it contains K′(a′), so ψ is also onto and hence an isomorphism.

Finally, since K(a) is generated by K and a, and ψ restricted to K is φ, it follows that ψ is uniquely determined by φ and ψ(a) = a′. Hence ψ is unique.
Theorem 7.2.10. Let L|K be an algebraic extension. Suppose that L₁ is an algebraically closed field and φ is an isomorphism from K to K₁ ⊆ L₁. Then there exists a monomorphism ψ from L to L₁ with ψ|_K = φ.

In particular the theorem can be applied to monomorphisms of a field K within an algebraic closure L of K. Specifically, suppose that K ⊆ L where L is an algebraic closure of K and let α : K → L be a monomorphism with α(K) = K. Then there exists an automorphism α* of L with α*|_K = α.
Proof of Theorem 7.2.10. Consider the set

    ℳ = {(M, τ) : M is a field with K ⊆ M ⊆ L and τ : M → L₁ is a monomorphism with τ|_K = φ}.

The set ℳ is nonempty since (K, φ) ∈ ℳ. Order ℳ by (M₁, τ₁) < (M₂, τ₂) if M₁ ⊆ M₂ and (τ₂)|_{M₁} = τ₁. Let

    𝒦 = {(M_i, τ_i) : i ∈ I}

be a chain in ℳ. Let (M, τ) be defined by

    M = ∪_{i∈I} M_i   with   τ(a) = τ_i(a) for all a ∈ M_i.

It is clear that (M, τ) is an upper bound for the chain 𝒦. Since each chain has an upper bound, it follows from Zorn's lemma that ℳ has a maximal element (N, ρ). We show that N = L.

Suppose that N ⊊ L. Let a ∈ L \ N. Then a is algebraic over N, and further algebraic over K since L|K is algebraic. Let m_a(x) ∈ N[x] be the minimal polynomial of a relative to N. Since L₁ is algebraically closed, ρ(m_a(x)) has a zero a′ ∈ L₁. Therefore there is a monomorphism ρ′ : N(a) → L₁ with ρ′ restricted to N the same as ρ. It follows that (N, ρ) < (N(a), ρ′) since a ∉ N. This contradicts the maximality of (N, ρ). Therefore N = L, completing the proof.
Combining the previous two theorems we can now prove that any two algebraic closures of a field K are the same up to K-isomorphism, that is, up to an isomorphism that is the identity on K.

Theorem 7.2.11. Let L₁ and L₂ be algebraic closures of the field K. Then there is a K-isomorphism τ : L₁ → L₂. Again, by a K-isomorphism we mean that τ is the identity on K.

Proof. From Theorem 7.2.10 there is a monomorphism τ : L₁ → L₂ with τ the identity on K. Since L₁ is algebraically closed, so is τ(L₁). Then L₂|τ(L₁) is an algebraic extension, and since τ(L₁) is algebraically closed we must have L₂ = τ(L₁). Therefore τ is also surjective and hence an isomorphism.
The following corollary is immediate.

Corollary 7.2.12. Let L|K and L′|K be field extensions with a ∈ L and a′ ∈ L′ algebraic elements over K. Then K(a) is K-isomorphic to K(a′) if and only if [K(a) : K] = [K(a′) : K] and there is an element a″ ∈ K(a′) with m_a(x) = m_{a″}(x).
7.3 The Fundamental Theorem of Algebra

In this section we give a proof of the fact that the complex numbers form an algebraically closed field. This is known as the fundamental theorem of algebra. First we need the concept of a splitting field for a polynomial. In the next chapter we will examine this concept more deeply.

7.3.1 Splitting Fields

We have just seen that given an irreducible polynomial over a field K we can always find a field extension in which this polynomial has a root. We now push this further to obtain field extensions where a given polynomial has all its roots.

Definition 7.3.1. If K is a field, 0 ≠ f(x) ∈ K[x], and K′ is an extension field of K, then f(x) splits in K′ (K′ may be K) if f(x) factors into linear factors in K′[x]. Equivalently, this means that all the roots of f(x) are in K′.

K′ is a splitting field for f(x) over K if K′ is the smallest extension field of K in which f(x) splits. (A splitting field for f(x) is the smallest extension field in which f(x) has all its possible roots.)

K′ is a splitting field over K if it is the splitting field for some finite set of polynomials over K.

Theorem 7.3.2. If K is a field and 0 ≠ f(x) ∈ K[x], then there exists a splitting field for f(x) over K.

Proof. The splitting field is constructed by repeatedly adjoining roots. Suppose without loss of generality that f(x) is irreducible of degree n over K. From Theorem 6.2.2 there exists a field K′ containing α with f(α) = 0. Then f(x) = (x − α)g(x) ∈ K′[x] with deg g(x) = n − 1. By an inductive argument g(x) has a splitting field, and therefore so does f(x).
In the next chapter we will give a further characterization of splitting fields.

7.3.2 Permutations and Symmetric Polynomials

To obtain a proof of the fundamental theorem of algebra we need to go a bit outside of our main discussion of rings and fields and introduce symmetric polynomials. In order to introduce this concept we first review some basic ideas from elementary group theory, which we will look at in detail later in the book.

Definition 7.3.3. A group G is a set with one binary operation, which we will denote by multiplication, such that
(1) The operation is associative, that is, (g₁g₂)g₃ = g₁(g₂g₃) for all g₁, g₂, g₃ ∈ G.
(2) There exists an identity for this operation, that is, an element 1 such that 1g = g for each g ∈ G.
(3) Each g ∈ G has an inverse for this operation, that is, for each g there exists a g⁻¹ with the property that gg⁻¹ = 1.

If in addition the operation is commutative (g₁g₂ = g₂g₁ for all g₁, g₂ ∈ G), the group G is called an abelian group. The order of G is the number of elements in G, denoted |G|. If |G| < ∞, G is a finite group. H ⊆ G is a subgroup if H is also a group under the same operation as G. Equivalently, H is a subgroup if H ≠ ∅ and H is closed under the operation and inverses.
Groups most often arise from invertible mappings of a set onto itself. Such mappings are called permutations.

Definition 7.3.4. If T is a set, a permutation on T is a one-to-one mapping of T onto itself. We denote by S_T the set of all permutations on T.

Theorem 7.3.5. For any set T, S_T forms a group under composition, called the symmetric group on T. If T, T₁ have the same cardinality (size), then S_T ≅ S_{T₁}. If T is a finite set with |T| = n, then S_T is a finite group and |S_T| = n!.
Proof. If S_T is the set of all permutations on the set T, we must show that composition is an operation on S_T that is associative and has an identity and inverses.

Let f, g ∈ S_T. Then f, g are one-to-one mappings of T onto itself. Consider f ∘ g : T → T. If f ∘ g(t₁) = f ∘ g(t₂), then f(g(t₁)) = f(g(t₂)) and g(t₁) = g(t₂), since f is one-to-one. But then t₁ = t₂ since g is one-to-one. Hence f ∘ g is one-to-one.

If t ∈ T, there exists t₁ ∈ T with f(t₁) = t since f is onto. Then there exists t₂ ∈ T with g(t₂) = t₁ since g is onto. Putting these together, f(g(t₂)) = t, and therefore f ∘ g is onto. Therefore f ∘ g is also a permutation, and composition gives a valid binary operation on S_T.

The identity function 1(t) = t for all t ∈ T will serve as the identity for S_T, while the inverse function for each permutation will be its inverse. Such unique inverse functions exist since each permutation is a bijection.

Finally, composition of functions is always associative and therefore S_T forms a group.

If T, T₁ have the same cardinality, then there exists a bijection σ : T → T₁. Define a map F : S_T → S_{T₁} in the following manner: if f ∈ S_T, let F(f) be the permutation on T₁ given by F(f)(t₁) = σ(f(σ⁻¹(t₁))). It is straightforward to verify that F is an isomorphism (see the exercises).

Finally, suppose |T| = n < ∞. Then T = {t₁, …, t_n}. Each f ∈ S_T can be pictured as

    f = ( t₁    …  t_n
          f(t₁) …  f(t_n) ).

For t₁ there are n choices for f(t₁). For t₂ there are only n − 1 choices since f is one-to-one. This continues down to only one choice for t_n. Using the multiplication principle, the number of choices for f, and therefore the size of S_T, is

    n(n − 1) ⋯ 1 = n!.

For a set with n elements we denote S_T by S_n, called the symmetric group on n symbols.
Example 7.3.6. Write down the six elements of S₃ and give the multiplication table for the group.

Name the three elements 1, 2, 3, and write [1 2 3 / i j k] for the permutation sending 1 ↦ i, 2 ↦ j, 3 ↦ k. The six elements of S₃ are then:

    1 = [1 2 3 / 1 2 3],  a = [1 2 3 / 2 3 1],  b = [1 2 3 / 3 1 2],
    c = [1 2 3 / 2 1 3],  d = [1 2 3 / 3 2 1],  e = [1 2 3 / 1 3 2].

The multiplication table for S₃ can be written down directly by doing the required composition. For example,

    ac = [1 2 3 / 2 3 1][1 2 3 / 2 1 3] = [1 2 3 / 3 2 1] = d.

To see this, note that a : 1 → 2, 2 → 3, 3 → 1; c : 1 → 2, 2 → 1, 3 → 3, and so ac : 1 → 3, 2 → 2, 3 → 1.

It is somewhat easier to construct the multiplication table if we make some observations. First, a² = b and a³ = 1. Next, c² = 1, d = ac, e = a²c, and finally ac = ca².
From these relations the following multiplication table can be constructed:

         |  1     a     a²    c     ac    a²c
    -----+------------------------------------
     1   |  1     a     a²    c     ac    a²c
     a   |  a     a²    1     ac    a²c   c
     a²  |  a²    1     a     a²c   c     ac
     c   |  c     a²c   ac    1     a²    a
     ac  |  ac    c     a²c   a     1     a²
     a²c |  a²c   ac    c     a²    a     1

To see this, consider, for example, (ac)a² = a(ca²) = a(ac) = a²c.
More generally, we can say that S₃ has a presentation given by

    S₃ = ⟨a, c : a³ = c² = 1, ac = ca²⟩.

By this we mean that S₃ is generated by a, c, or that S₃ has generators a, c, and the whole group and its multiplication table can be generated by using the relations a³ = c² = 1, ac = ca².
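The table and relations above are easy to verify by machine. A small Python check (my own encoding, not the book's: a permutation is a tuple p with p[i−1] the image of i, and composition applies the right factor first, matching the example):

```python
from itertools import permutations

def compose(f, g):
    # (f o g)(i) = f(g(i)): apply g first, then f
    return tuple(f[g[i] - 1] for i in range(3))

one = (1, 2, 3)
a = (2, 3, 1)       # a: 1->2, 2->3, 3->1
c = (2, 1, 3)       # c: 1->2, 2->1, 3->3

assert len(set(permutations((1, 2, 3)))) == 6      # |S_3| = 3! = 6
assert compose(a, c) == (3, 2, 1)                  # ac = d
assert compose(compose(a, a), a) == one            # a^3 = 1
assert compose(c, c) == one                        # c^2 = 1
a2 = compose(a, a)                                 # a^2 = b
assert compose(a, c) == compose(c, a2)             # defining relation ac = ca^2
```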
An important result, the form of which we will see later in our work on extension fields, is the following.

Lemma 7.3.7. Let T be a set and T₁ ⊆ T a subset. Let H be the subset of S_T that fixes each element of T₁, that is, f ∈ H if f(t) = t for all t ∈ T₁. Then H is a subgroup.

Proof. H ≠ ∅ since 1 ∈ H. Now suppose h₁, h₂ ∈ H. Let t₁ ∈ T₁ and consider h₁ ∘ h₂(t₁) = h₁(h₂(t₁)). Now h₂(t₁) = t₁ since h₂ ∈ H, but then h₁(t₁) = t₁ since h₁ ∈ H. Therefore h₁ ∘ h₂ ∈ H and H is closed under composition. If h₁ fixes t₁ then h₁⁻¹ also fixes t₁, so H is also closed under inverses and is therefore a subgroup.
We now apply these ideas of permutations to certain polynomials in independent indeterminates over a field. We will look at these in detail later in this book.

Definition 7.3.8. Let y₁, …, y_n be (independent) indeterminates over a field K. A polynomial f(y₁, …, y_n) ∈ K[y₁, …, y_n] is a symmetric polynomial in y₁, …, y_n if f(y₁, …, y_n) is unchanged by any permutation σ of {y₁, …, y_n}, that is, f(y₁, …, y_n) = f(σ(y₁), …, σ(y_n)).

If K ⊆ K′ are fields and α₁, …, α_n are in K′, then we call a polynomial f(α₁, …, α_n) with coefficients in K symmetric in α₁, …, α_n if f(α₁, …, α_n) is unchanged by any permutation σ of {α₁, …, α_n}.
Example 7.3.9. Let K be a field and k₀, k₁ ∈ K. Let h(y₁, y₂) = k₀(y₁ + y₂) + k₁(y₁y₂).

There are two permutations on {y₁, y₂}, namely σ₁ : y₁ → y₁, y₂ → y₂ and σ₂ : y₁ → y₂, y₂ → y₁. Applying either one of these two to {y₁, y₂} leaves h(y₁, y₂) invariant. Therefore h(y₁, y₂) is a symmetric polynomial.
Definition 7.3.10. Let x, y₁, …, y_n be indeterminates over a field K (or elements of an extension field K′ of K). Form the polynomial

    p(x, y₁, …, y_n) = (x − y₁) ⋯ (x − y_n).

The i-th elementary symmetric polynomial s_i in y₁, …, y_n, for i = 1, …, n, is (−1)^i a_i, where a_i is the coefficient of x^{n−i} in p(x, y₁, …, y_n).
Example 7.3.11. Consider y₁, y₂, y₃. Then

    p(x, y₁, y₂, y₃) = (x − y₁)(x − y₂)(x − y₃)
                     = x³ − (y₁ + y₂ + y₃)x² + (y₁y₂ + y₁y₃ + y₂y₃)x − y₁y₂y₃.

Therefore, the three elementary symmetric polynomials in y₁, y₂, y₃ over any field are
(1) s₁ = y₁ + y₂ + y₃,
(2) s₂ = y₁y₂ + y₁y₃ + y₂y₃,
(3) s₃ = y₁y₂y₃.
In general, the pattern of the last example holds for y₁, …, y_n. That is,

    s₁ = y₁ + y₂ + ⋯ + y_n
    s₂ = y₁y₂ + y₁y₃ + ⋯ + y_{n−1}y_n
    s₃ = y₁y₂y₃ + y₁y₂y₄ + ⋯ + y_{n−2}y_{n−1}y_n
     ⋮
    s_n = y₁ ⋯ y_n.
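The pattern can be checked numerically for sample values: expand p(x) = (x − y₁)(x − y₂)(x − y₃) and compare the coefficient of x^{n−i} with (−1)^i s_i. A Python sketch (my own illustration):

```python
from itertools import combinations
from math import prod

ys = [2, 3, 5]

coeffs = [1]          # coefficients of the product so far, lowest degree first
for y in ys:
    shifted = [0] + coeffs                       # x * p(x)
    scaled = [y * c for c in coeffs] + [0]       # y * p(x), padded to same length
    coeffs = [s - t for s, t in zip(shifted, scaled)]   # (x - y) * p(x)

n = len(ys)
for i in range(1, n + 1):
    s_i = sum(prod(c) for c in combinations(ys, i))     # i-th elementary symmetric
    assert coeffs[n - i] == (-1) ** i * s_i
```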
The importance of the elementary symmetric polynomials is that any symmetric polynomial can be built up from the elementary symmetric polynomials. We make this precise in the next theorem, called the fundamental theorem of symmetric polynomials. We will use this important result several times, and we will give a complete proof in Section 7.5.
Theorem 7.3.12 (fundamental theorem of symmetric polynomials). If f is a symmetric polynomial in the indeterminates y₁, …, y_n over a field K, that is, f ∈ K[y₁, …, y_n] and f is symmetric, then there exists a unique g ∈ K[y₁, …, y_n] such that f(y₁, …, y_n) = g(s₁, …, s_n). That is, any symmetric polynomial in y₁, …, y_n is a polynomial expression in the elementary symmetric polynomials in y₁, …, y_n.
From this theorem we obtain the following two lemmas, which will be crucial in our proof of the fundamental theorem of algebra.

Lemma 7.3.13. Let p(x) ∈ K[x] and suppose p(x) has the roots α₁, …, α_n in the splitting field K′. Then the elementary symmetric polynomials in α₁, …, α_n are in K.

Proof. Suppose p(x) = c₀ + c₁x + ⋯ + c_n xⁿ ∈ K[x]. In K′[x], p(x) splits, with roots α₁, …, α_n, and thus in K′[x],

    p(x) = c_n(x − α₁) ⋯ (x − α_n).

The coefficients are then c_n(−1)^i s_i(α₁, …, α_n), where the s_i(α₁, …, α_n) are the elementary symmetric polynomials in α₁, …, α_n. However, p(x) ∈ K[x], so each coefficient is in K. It follows then that for each i, c_n(−1)^i s_i(α₁, …, α_n) ∈ K, and hence s_i(α₁, …, α_n) ∈ K since c_n ∈ K.
Lemma 7.3.14. Let p(x) ∈ K[x] and suppose p(x) has the roots α₁, …, α_n in the splitting field K′. Suppose further that g(x) = g(x, α₁, …, α_n) ∈ K′[x]. If g(x) is a symmetric polynomial in α₁, …, α_n, then g(x) ∈ K[x].

Proof. If g(x) = g(x, α₁, …, α_n) is symmetric in α₁, …, α_n, then from Theorem 7.3.12 its coefficients are polynomial expressions in the elementary symmetric polynomials in α₁, …, α_n. From Lemma 7.3.13 these are in the ground field K, so the coefficients of g(x) are in K. Therefore g(x) ∈ K[x].
7.4 The Fundamental Theorem of Algebra

We now present a proof of the fundamental theorem of algebra.

Theorem 7.4.1 (fundamental theorem of algebra). Any nonconstant complex polynomial has a complex root. In other words, the complex number field ℂ is algebraically closed.

The proof depends on the following sequence of lemmas. The crucial one is the last, which says that any real polynomial must have a complex root.

Lemma 7.4.2. Any odd-degree real polynomial must have a real root.
106 Chapter 7 Kronecker’s Theorem and Algebraic Closures
Proof. This is a consequence of the intermediate value theorem from analysis.
Suppose P(x) ∈ R[x] with deg P(x) = n = 2k + 1 and suppose the leading coefficient aₙ > 0 (the proof is almost identical if aₙ < 0). Then

P(x) = aₙxⁿ + (lower terms)

and n is odd. Then,
(1) lim_{x→∞} P(x) = lim_{x→∞} aₙxⁿ = ∞ since aₙ > 0.
(2) lim_{x→−∞} P(x) = lim_{x→−∞} aₙxⁿ = −∞ since aₙ > 0 and n is odd.

From (1), P(x) gets arbitrarily large positively, so there exists an x₁ with P(x₁) > 0. Similarly, from (2) there exists an x₂ with P(x₂) < 0.
A real polynomial is a continuous real-valued function for all x ∈ R. Since P(x₁)P(x₂) < 0, it follows from the intermediate value theorem that there exists an x₃, between x₁ and x₂, such that P(x₃) = 0.
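The sign-change argument above is constructive: once x₁ and x₂ with opposite signs are found, bisection homes in on a real root. A minimal sketch in plain Python, using a hypothetical odd-degree polynomial (not one from the text):

```python
# Numeric illustration of Lemma 7.4.2: an odd-degree real polynomial
# changes sign for large |x|, so bisection locates a real root.

def p(x):
    # hypothetical example: p(x) = x^3 - 4x + 1, odd degree, leading coeff > 0
    return x**3 - 4*x + 1

lo, hi = -10.0, 10.0
assert p(lo) < 0 < p(hi)     # the sign change guaranteed by the limits (1), (2)

# Bisection: keep the subinterval that still contains the sign change.
for _ in range(100):
    mid = (lo + hi) / 2
    if p(lo) * p(mid) <= 0:
        hi = mid
    else:
        lo = mid

root = (lo + hi) / 2
print(abs(p(root)) < 1e-9)   # the bisection limit is a real root
```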
Lemma 7.4.3. Any degree-two complex polynomial must have a complex root.
Proof. This is a consequence of the quadratic formula and of the fact that any complex number has a square root.
If P(x) = ax² + bx + c, a ≠ 0, then the roots formally are

x₁ = (−b + √(b² − 4ac))/2a,  x₂ = (−b − √(b² − 4ac))/2a.

From DeMoivre's theorem every complex number has a square root, hence x₁, x₂ exist in C. They of course are the same if b² − 4ac = 0.
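The two formal roots can be checked numerically: `cmath.sqrt` returns a complex square root of any complex number, which is exactly the DeMoivre fact the proof uses. The coefficients below are hypothetical:

```python
import cmath

# Sketch of Lemma 7.4.3: quadratic formula + complex square roots give
# complex roots of any degree-two complex polynomial.
a, b, c = 1 + 2j, -3j, 4.0           # hypothetical complex coefficients

d = cmath.sqrt(b * b - 4 * a * c)    # a complex square root always exists
x1 = (-b + d) / (2 * a)
x2 = (-b - d) / (2 * a)

for x in (x1, x2):
    # each candidate really is a root of ax^2 + bx + c
    print(abs(a * x * x + b * x + c) < 1e-9)
```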
To go further we need the concept of the conjugate of a polynomial and some
straightforward consequences of this idea.
Definition 7.4.4. If P(x) = a₀ + ⋯ + aₙxⁿ is a complex polynomial then its conjugate is the polynomial P̄(x) = ā₀ + ⋯ + āₙxⁿ. That is, the conjugate is the polynomial whose coefficients are the complex conjugates of those of P(x).
Lemma 7.4.5. For any P(x) ∈ C[x] we have:
(1) P̄(z̄) is the complex conjugate of P(z) if z ∈ C.
(2) P(x) is a real polynomial if and only if P(x) = P̄(x).
(3) If P(x)Q(x) = H(x) then H̄(x) = P̄(x)Q̄(x).
Proof. (1) Suppose z ∈ C and P(z) = a₀ + ⋯ + aₙzⁿ. Then

P̄(z̄) = ā₀ + ā₁z̄ + ⋯ + āₙz̄ⁿ,

which is the complex conjugate of a₀ + a₁z + ⋯ + aₙzⁿ = P(z).
(2) Suppose P(x) is real; then āᵢ = aᵢ for all its coefficients and hence P(x) = P̄(x). Conversely suppose P(x) = P̄(x). Then aᵢ = āᵢ for all its coefficients and hence aᵢ ∈ R for each aᵢ and so P(x) is a real polynomial.
(3) The proof is a computation and left to the exercises.
Lemma 7.4.6. Suppose G(x) ∈ C[x]. Then H(x) = G(x)Ḡ(x) ∈ R[x].
Proof. By Lemma 7.4.5 (3), H̄(x) = Ḡ(x)·Ḡ̄(x), and the conjugate of Ḡ(x) is G(x). Hence H̄(x) = Ḡ(x)G(x) = G(x)Ḡ(x) = H(x), and therefore H(x) is a real polynomial by Lemma 7.4.5 (2).
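The lemma is easy to check numerically: multiply a coefficient list by its conjugate coefficient list and look at the imaginary parts. The polynomial G below is a hypothetical example:

```python
# Numeric check of Lemma 7.4.6: for G(x) with complex coefficients,
# H(x) = G(x)·Ḡ(x) has real coefficients.  A polynomial is a
# coefficient list [a0, a1, ...].

def conj_poly(g):
    return [z.conjugate() for z in g]

def mul_poly(g, h):
    out = [0j] * (len(g) + len(h) - 1)
    for i, gi in enumerate(g):
        for j, hj in enumerate(h):
            out[i + j] += gi * hj
    return out

G = [1 + 1j, 2 - 3j, 0.5j]           # hypothetical: (1+i) + (2-3i)x + 0.5i x^2
H = mul_poly(G, conj_poly(G))

print(all(abs(z.imag) < 1e-12 for z in H))   # every coefficient of H is real
```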
Lemma 7.4.7. If every nonconstant real polynomial has a complex root, then every nonconstant complex polynomial has a complex root.
Proof. Let P(x) ∈ C[x] and suppose that every nonconstant real polynomial has at least one complex root. Let H(x) = P(x)P̄(x). From Lemma 7.4.6, H(x) ∈ R[x]. By supposition there exists a z₀ ∈ C with H(z₀) = 0. Then P(z₀)P̄(z₀) = 0, and since C is a field it has no zero divisors. Hence either P(z₀) = 0 or P̄(z₀) = 0. In the first case z₀ is a root of P(x). In the second case P̄(z₀) = 0. Then from Lemma 7.4.5 (1), P(z̄₀) is the conjugate of P̄(z₀) = 0, so P(z̄₀) = 0. Therefore, z̄₀ is a root of P(x).
Now we come to the crucial lemma.
Lemma 7.4.8. Any nonconstant real polynomial has a complex root.
Proof. Let f(x) = a₀ + a₁x + ⋯ + aₙxⁿ ∈ R[x] with n ≥ 1, aₙ ≠ 0. The proof is an induction on the degree n of f(x).
Suppose n = 2^m q where q is odd. We do the induction on m. If m = 0 then f(x) has odd degree and the theorem is true from Lemma 7.4.2. Assume then that the theorem is true for all degrees d = 2^k q′ where k < m and q′ is odd. Now assume that the degree of f(x) is n = 2^m q.
Suppose K′ is the splitting field for f(x) over R in which the roots are α₁, …, αₙ. We show that at least one of these roots must be in C. (In fact, all are in C, but to prove the lemma we need only show at least one.)
Let h ∈ Z and form the polynomial

H(x) = ∏_{i<j} (x − (αᵢ + αⱼ + hαᵢαⱼ)).
This is in K′[x]. In forming H(x) we chose pairs of roots {αᵢ, αⱼ}, so the number of such pairs is the number of ways of choosing two elements out of n = 2^m q elements. This is given by

(2^m q)(2^m q − 1)/2 = 2^{m−1} q(2^m q − 1) = 2^{m−1} q′

with q′ odd. Therefore, the degree of H(x) is 2^{m−1} q′.
H(x) is a symmetric polynomial in the roots α₁, …, αₙ. Since α₁, …, αₙ are the roots of a real polynomial, from Lemma 7.3.14 any polynomial in the splitting field symmetric in these roots must be a real polynomial.
Therefore, H(x) ∈ R[x] with degree 2^{m−1} q′. By the inductive hypothesis, then, H(x) must have a complex root. This implies that there exists a pair {αᵢ, αⱼ} with

αᵢ + αⱼ + hαᵢαⱼ ∈ C.

Since h was an arbitrary integer, for any integer h₁ there must exist such a pair {αᵢ, αⱼ} with

αᵢ + αⱼ + h₁αᵢαⱼ ∈ C.
Now let h₁ vary over the integers. Since there are only finitely many such pairs {αᵢ, αⱼ}, it follows that there must be at least two different integers h₁, h₂ such that

z₁ = αᵢ + αⱼ + h₁αᵢαⱼ ∈ C and z₂ = αᵢ + αⱼ + h₂αᵢαⱼ ∈ C.
Then z₁ − z₂ = (h₁ − h₂)αᵢαⱼ ∈ C, and since h₁, h₂ ∈ Z ⊆ C it follows that αᵢαⱼ ∈ C. But then h₁αᵢαⱼ ∈ C, from which it follows that αᵢ + αⱼ ∈ C. Then,

p(x) = (x − αᵢ)(x − αⱼ) = x² − (αᵢ + αⱼ)x + αᵢαⱼ ∈ C[x].

However, p(x) is then a degree-two complex polynomial and so from Lemma 7.4.3 its roots are complex. Therefore, αᵢ, αⱼ ∈ C, and therefore f(x) has a complex root.
It is now easy to give a proof of the fundamental theorem of algebra. From Lemma 7.4.8 every nonconstant real polynomial has a complex root. From Lemma 7.4.7, if every nonconstant real polynomial has a complex root, then every nonconstant complex polynomial has a complex root, proving the fundamental theorem.
Theorem 7.4.9. If K is a finite dimensional field extension of C, then K = C.
Proof. Let a ∈ K. Consider the elements 1, a, a², …. These elements become linearly dependent over C and we get a nonconstant polynomial over C with root a. By the fundamental theorem of algebra we know that a ∈ C.
Corollary 7.4.10. If K is a finite dimensional field extension of R, then K = R or K = C.
7.5 The Fundamental Theorem of Symmetric Polynomials
In the proof of the fundamental theorem of algebra that was given in the previous section we used the fact that any symmetric polynomial in n indeterminates is a polynomial in the elementary symmetric polynomials in these indeterminates. In this section we give a proof of this theorem.
Let R be an integral domain with x₁, …, xₙ (independent) indeterminates over R and let R[x₁, …, xₙ] be the polynomial ring in these indeterminates. Any polynomial f(x₁, …, xₙ) ∈ R[x₁, …, xₙ] is composed of a sum of pieces of the form a·x₁^{i₁} ⋯ xₙ^{iₙ} with a ∈ R. We first put an order on these pieces of a polynomial.
The piece a·x₁^{i₁} ⋯ xₙ^{iₙ} with a ≠ 0 is called higher than the piece b·x₁^{j₁} ⋯ xₙ^{jₙ} with b ≠ 0 if the first one of the differences

i₁ − j₁, i₂ − j₂, …, iₙ − jₙ

that differs from zero is in fact positive. The highest piece of a polynomial f(x₁, …, xₙ) is denoted by HG(f).
Lemma 7.5.1. For f(x₁, …, xₙ), g(x₁, …, xₙ) ∈ R[x₁, …, xₙ] we have

HG(fg) = HG(f) HG(g).

Proof. We use an induction on n, the number of indeterminates. It is clearly true for n = 1, and now assume that the statement holds for all polynomials in k indeterminates with k < n and n ≥ 2. Order the polynomials via exponents on the first indeterminate x₁ so that

f(x₁, …, xₙ) = x₁ⁱ φᵢ(x₂, …, xₙ) + x₁^{i−1} φ_{i−1}(x₂, …, xₙ) + ⋯ + φ₀(x₂, …, xₙ)
g(x₁, …, xₙ) = x₁ˢ ψₛ(x₂, …, xₙ) + x₁^{s−1} ψ_{s−1}(x₂, …, xₙ) + ⋯ + ψ₀(x₂, …, xₙ).

Then HG(fg) = x₁^{i+s} HG(φᵢψₛ). By the inductive hypothesis

HG(φᵢψₛ) = HG(φᵢ) HG(ψₛ).

Hence

HG(fg) = x₁^{i+s} HG(φᵢ) HG(ψₛ) = (x₁ⁱ HG(φᵢ))(x₁ˢ HG(ψₛ)) = HG(f) HG(g).
The elementary symmetric polynomials in n indeterminates x₁, …, xₙ are

s₁ = x₁ + x₂ + ⋯ + xₙ
s₂ = x₁x₂ + x₁x₃ + ⋯ + x_{n−1}xₙ
s₃ = x₁x₂x₃ + x₁x₂x₄ + ⋯ + x_{n−2}x_{n−1}xₙ
⋮
sₙ = x₁ ⋯ xₙ.
These were found by forming the polynomial p(x, x₁, …, xₙ) = (x − x₁) ⋯ (x − xₙ). The i-th elementary symmetric polynomial sᵢ in x₁, …, xₙ is then (−1)ⁱaᵢ, where aᵢ is the coefficient of x^{n−i} in p(x, x₁, …, xₙ).
In general, for 1 ≤ k ≤ n,

sₖ = Σ_{i₁<i₂<⋯<iₖ} x_{i₁} x_{i₂} ⋯ x_{iₖ},

where the sum is taken over all the (n choose k) different systems of indices i₁, …, iₖ with i₁ < i₂ < ⋯ < iₖ.
Further, a polynomial s(x₁, …, xₙ) is a symmetric polynomial if s(x₁, …, xₙ) is unchanged by any permutation σ of {x₁, …, xₙ}, that is, s(x₁, …, xₙ) = s(σ(x₁), …, σ(xₙ)).
Lemma 7.5.2. In the highest piece a·x₁^{k₁} ⋯ xₙ^{kₙ}, a ≠ 0, of a symmetric polynomial s(x₁, …, xₙ) we have k₁ ≥ k₂ ≥ ⋯ ≥ kₙ.

Proof. Assume that kᵢ < kⱼ for some i < j. As a symmetric polynomial, s(x₁, …, xₙ) also must then contain the piece a·x₁^{k₁} ⋯ xᵢ^{kⱼ} ⋯ xⱼ^{kᵢ} ⋯ xₙ^{kₙ}, which is higher than a·x₁^{k₁} ⋯ xᵢ^{kᵢ} ⋯ xⱼ^{kⱼ} ⋯ xₙ^{kₙ}, giving a contradiction.
Lemma 7.5.3. The product s₁^{k₁−k₂} s₂^{k₂−k₃} ⋯ s_{n−1}^{k_{n−1}−kₙ} sₙ^{kₙ} with k₁ ≥ k₂ ≥ ⋯ ≥ kₙ has the highest piece x₁^{k₁} x₂^{k₂} ⋯ xₙ^{kₙ}.
Proof. From the definition of the elementary symmetric polynomials we have that

HG(sₖᵗ) = (x₁x₂ ⋯ xₖ)ᵗ, 1 ≤ k ≤ n, t ≥ 1.

From Lemma 7.5.1,

HG(s₁^{k₁−k₂} s₂^{k₂−k₃} ⋯ s_{n−1}^{k_{n−1}−kₙ} sₙ^{kₙ})
= x₁^{k₁−k₂}(x₁x₂)^{k₂−k₃} ⋯ (x₁ ⋯ x_{n−1})^{k_{n−1}−kₙ}(x₁ ⋯ xₙ)^{kₙ}
= x₁^{k₁} x₂^{k₂} ⋯ xₙ^{kₙ}.
Theorem 7.5.4. Let s(x₁, …, xₙ) ∈ R[x₁, …, xₙ] be a symmetric polynomial. Then s(x₁, …, xₙ) can be uniquely expressed as a polynomial f(s₁, …, sₙ) in the elementary symmetric polynomials s₁, …, sₙ with coefficients from R.
Proof. We prove the existence of the polynomial f by induction on the size of the highest pieces. If in the highest piece of a symmetric polynomial all exponents are zero, then it is constant, that is, an element of R, and there is nothing to prove.
Now we assume that each symmetric polynomial with highest piece smaller than that of s(x₁, …, xₙ) can be written as a polynomial in the elementary symmetric polynomials. Let a·x₁^{k₁} ⋯ xₙ^{kₙ}, a ≠ 0, be the highest piece of s(x₁, …, xₙ). Let

t(x₁, …, xₙ) = s(x₁, …, xₙ) − a·s₁^{k₁−k₂} ⋯ s_{n−1}^{k_{n−1}−kₙ} sₙ^{kₙ}.
Clearly, t(x₁, …, xₙ) is another symmetric polynomial, and from Lemma 7.5.3 the highest piece of t(x₁, …, xₙ) is smaller than that of s(x₁, …, xₙ). Therefore, t(x₁, …, xₙ) and hence s(x₁, …, xₙ) = t(x₁, …, xₙ) + a·s₁^{k₁−k₂} ⋯ s_{n−1}^{k_{n−1}−kₙ} sₙ^{kₙ} can be written as a polynomial in s₁, …, sₙ.
To prove the uniqueness of this expression assume that s(x₁, …, xₙ) = f(s₁, …, sₙ) = g(s₁, …, sₙ). Then f(s₁, …, sₙ) − g(s₁, …, sₙ) = h(s₁, …, sₙ) = φ(x₁, …, xₙ) is the zero polynomial in x₁, …, xₙ. Hence, if we write h(s₁, …, sₙ) as a sum of products of powers of the s₁, …, sₙ, all coefficients vanish, because two different products of powers in the s₁, …, sₙ have different highest pieces. This follows from the previous set of lemmas. Therefore, f and g are the same, proving the theorem.
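A single step of the reduction in the proof can be traced on a concrete symmetric polynomial. For s = x₁² + x₂² + x₃² the highest piece is x₁² (exponents k = (2, 0, 0)), so the algorithm subtracts s₁^{2−0}s₂^{0−0}s₃⁰ = s₁², leaving −2s₂; hence x₁² + x₂² + x₃² = s₁² − 2s₂. A numeric spot-check at hypothetical integer points:

```python
import random
from itertools import combinations
from math import prod

# Worked instance of Theorem 7.5.4: x1^2 + x2^2 + x3^2 = s1^2 - 2 s2,
# obtained by one reduction step of the proof's algorithm.

def elem_sym(k, vals):
    return sum(prod(c) for c in combinations(vals, k))

random.seed(1)
for _ in range(5):
    x = [random.randint(-10, 10) for _ in range(3)]
    s1, s2 = elem_sym(1, x), elem_sym(2, x)
    print(sum(v * v for v in x) == s1 ** 2 - 2 * s2)   # True each time
```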
7.6 Exercises
1. Suppose that f(x) = a₀ + a₁x + ⋯ + aₙxⁿ ∈ K[x], K a field, with aₙ ≠ 0 and f(x) irreducible. Define α to satisfy

a₀ + a₁α + ⋯ + aₙαⁿ = 0

and define K′ = K(α) in the following manner. We let

K(α) = {c₀ + c₁α + ⋯ + c_{n−1}α^{n−1} : cᵢ ∈ K}.

Then on K(α) define addition and subtraction componentwise and define multiplication by algebraic manipulation, replacing powers of α higher than α^{n−1} by using

αⁿ = (−a₀ − a₁α − ⋯ − a_{n−1}α^{n−1})/aₙ.

Prove that K′ = K(α) forms a ring.
2. Let f, g ∈ K[x] be irreducible polynomials of degree 2 over the field K. Let α₁, α₂ respectively β₁, β₂ be zeros of f and g. For 1 ≤ i, j ≤ 2 let vᵢⱼ = αᵢ + βⱼ. Show:
(a) [K(vᵢⱼ) : K] ∈ {1, 2, 4}.
(b) For fixed f, g there are at most two different degrees in (a).
(c) Decide which combinations of degrees in (b) (with f, g variable) are possible, and give an example in each case.
3. Let L|K be a field extension, let v ∈ L and f(x) ∈ L[x] a polynomial of degree ≥ 1. Let all coefficients of f(x) be algebraic over K. If f(v) = 0, then v is algebraic over K.
4. Let L|K be a field extension and let M be an intermediate field such that the extension M|K is algebraic. For v ∈ L the following are equivalent:
(a) v is algebraic over M.
(b) v is algebraic over K.
5. Let L|K be a field extension and v₁, v₂ ∈ L. Then the following are equivalent:
(a) v₁ and v₂ are algebraic over K.
(b) v₁ + v₂ and v₁v₂ are algebraic over K.
6. Let L|K be a simple field extension. Then there is an extension field L′ of L of the form L′ = L(v₁, v₂) with:
(a) v₁ and v₂ are transcendental over K.
(b) The set of all elements of L′ that are algebraic over K is L.
7. In the proof of Theorem 7.1.4 show that the mapping

τ : K(a) → K(α)

defined by τ(k) = k if k ∈ K and τ(a) = α and then extended by linearity is a K-isomorphism.
8. For each i ∈ I let Kᵢ be a field. Now let K̂ = ⋃_{i∈I} Kᵢ. Show that K̂ is a field.
9. Prove Lemma 7.2.8.
10. If T, T₁ are sets with the same cardinality, then there exists a bijection σ : T → T₁. Define a map F : S_T → S_{T₁} in the following manner: if f ∈ S_T, let F(f) be the permutation on T₁ given by F(f)(t₁) = σ(f(σ⁻¹(t₁))). Prove that F is an isomorphism.
11. Prove that if P(x), Q(x), H(x) ∈ C[x] and P(x)Q(x) = H(x), then H̄(x) = P̄(x)Q̄(x).
Chapter 8
Splitting Fields and Normal Extensions
8.1 Splitting Fields
In the last chapter we introduced splitting ﬁelds and used this idea to present a proof
of the fundamental theorem of algebra. The concept of a splitting ﬁeld is essential to
the Galois theory of equations so in this chapter we look more deeply at this idea.
Definition 8.1.1. Let K be a field and f(x) a nonconstant polynomial in K[x]. An extension field L of K is a splitting field for f(x) over K if
(a) f(x) splits into linear factors in L[x].
(b) If K ⊆ M ⊆ L and M ≠ L then f(x) does not split into linear factors in M[x].
From part (b) in the definition the following is clear.
Lemma 8.1.2. L is a splitting field for f(x) ∈ K[x] if and only if f(x) splits into linear factors in L[x] and if f(x) = b(x − a₁) ⋯ (x − aₙ) with b ∈ K then L = K(a₁, …, aₙ).
Example 8.1.3. The field C of complex numbers is a splitting field for the polynomial p(x) = x² + 1 in R[x]. In fact, since C is algebraically closed, it is a splitting field for any real polynomial f(x) ∈ R[x] which has at least one nonreal zero.
The field Q(i), adjoining i to Q, is a splitting field for x² + 1 over Q.
The next result was used in the previous chapter. We restate and reprove it here.
Theorem 8.1.4. Let K be a field. Then each nonconstant polynomial in K[x] has a splitting field.
Proof. Let K̄ be an algebraic closure of K. Then f(x) splits in K̄[x], that is, f(x) = b(x − a₁) ⋯ (x − aₙ) with b ∈ K and aᵢ ∈ K̄. Let L = K(a₁, …, aₙ). Then L is the splitting field for f(x) over K.
We next show that the splitting field over K of a given polynomial is unique up to K-isomorphism.
Theorem 8.1.5. Let K, K′ be fields and φ : K → K′ an isomorphism. Let f(x) be a nonconstant polynomial in K[x] and f′(x) = φ(f(x)) its image in K′[x]. Suppose that L is a splitting field for f(x) over K and L′ is a splitting field for f′(x) over K′.
(a) Suppose that L′ ⊆ L″. Then if ψ : L → L″ is a monomorphism with ψ|_K = φ then ψ is an isomorphism from L onto L′. Moreover ψ maps the set of zeros of f(x) in L onto the set of zeros of f′(x) in L′. The map ψ is uniquely determined by the values of the zeros of f(x).
(b) If g(x) is an irreducible factor of f(x) in K[x] and a is a zero of g(x) in L and a′ is a zero of g′(x) = φ(g(x)) in L′ then there is an isomorphism ψ from L to L′ with ψ|_K = φ and ψ(a) = a′.
Before giving the proof of this theorem we note that the following important result is a direct consequence of it.
Theorem 8.1.6. A splitting field for f(x) ∈ K[x] is unique up to K-isomorphism.
Proof of Theorem 8.1.5. Suppose that f(x) = b(x − a₁) ⋯ (x − aₙ) ∈ L[x] and that f′(x) = b′(x − a′₁) ⋯ (x − a′ₙ) ∈ L′[x]. Then

f′(x) = φ(f(x)) = ψ(f(x)) = (ψ(b))(x − ψ(a₁)) ⋯ (x − ψ(aₙ)).

We have proved that polynomials have unique factorization over fields. Since L′ ⊆ L″ it follows that the set of zeros (ψ(a₁), …, ψ(aₙ)) is a permutation of the set of zeros (a′₁, …, a′ₙ). In particular this implies that ψ(aᵢ) ∈ L′, that is,

im(ψ) = L′ = K′(a′₁, …, a′ₙ).

Since the image of ψ is K′(a′₁, …, a′ₙ) = K′(ψ(a₁), …, ψ(aₙ)) it is clear that ψ is uniquely determined by the images ψ(aᵢ). This proves part (a).
For part (b) embed L′ in an algebraic closure L″. Hence there is a monomorphism

φ′ : K(a) → L″

with φ′|_K = φ and φ′(a) = a′. Hence there is a monomorphism ψ : L → L″ with ψ|_{K(a)} = φ′. Then from part (a) it follows that ψ : L → L′ is an isomorphism.
Example 8.1.7. Let f(x) = x³ − 7 ∈ Q[x]. This has no zeros in Q, and since it is of degree 3 it follows that it must be irreducible in Q[x].
Let ω = −1/2 + (√3/2)i ∈ C. Then it is easy to show by computation that ω² = −1/2 − (√3/2)i and ω³ = 1. Therefore the three zeros of f(x) in C are

a₁ = 7^{1/3}, a₂ = ω·7^{1/3}, a₃ = ω²·7^{1/3}.
Hence L = Q(a₁, a₂, a₃) is the splitting field of f(x). Since the minimal polynomial of all three zeros over Q is the same (f(x)) it follows that

Q(a₁) ≅ Q(a₂) ≅ Q(a₃).

Since Q(a₁) ⊆ R and a₂, a₃ are nonreal it is clear that a₂, a₃ ∉ Q(a₁).
Suppose that Q(a₂) = Q(a₃). Then ω = a₃a₂⁻¹ ∈ Q(a₂) and so 7^{1/3} = ω⁻¹a₂ ∈ Q(a₂). Hence Q(a₁) ⊆ Q(a₂) and therefore Q(a₁) = Q(a₂) since they have the same degree over Q. This contradiction shows that Q(a₂) and Q(a₃) are distinct.
By computation we have a₃ = a₁⁻¹a₂² and hence

L = Q(a₁, a₂, a₃) = Q(a₁, a₂) = Q(7^{1/3}, ω).

Now the degree of L over Q is

[L : Q] = [Q(7^{1/3}, ω) : Q(ω)][Q(ω) : Q].

Now [Q(ω) : Q] = 2 since the minimal polynomial of ω over Q is x² + x + 1. Since no zero of f(x) lies in Q(ω) and the degree of f(x) is 3 it follows that f(x) is irreducible over Q(ω). Therefore we have that the degree of L over Q(ω) is 3. Hence

[L : Q] = (2)(3) = 6.
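The algebraic relations in this example are easy to confirm numerically with floating-point complex arithmetic:

```python
# Numeric check of Example 8.1.7: with w = -1/2 + (sqrt(3)/2)i the three
# zeros of x^3 - 7 are 7^(1/3), w·7^(1/3), w^2·7^(1/3), and a3 = a1^(-1)·a2^2,
# which is why Q(a1, a2, a3) = Q(a1, a2).

w = -0.5 + (3 ** 0.5 / 2) * 1j
assert abs(w ** 3 - 1) < 1e-12           # w is a cube root of unity

r = 7 ** (1 / 3)
a1, a2, a3 = r, w * r, w ** 2 * r
for a in (a1, a2, a3):
    print(abs(a ** 3 - 7) < 1e-9)         # each a_i is a zero of x^3 - 7

print(abs(a3 - a2 ** 2 / a1) < 1e-9)      # a3 = a1^(-1) a2^2
```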
We now have the following lattice diagram of fields and subfields:

We do not know, however, if there are any more intermediate fields. There could, for example, be infinitely many. However, as we will see when we do the Galois theory, there are no others.
8.2 Normal Extensions
We now consider algebraic field extensions L of K which have the property that if f(x) ∈ K[x] has a zero in L then f(x) must split in L. In particular we show that if L is a splitting field of finite degree for some g(x) ∈ K[x] then L has this property.
Definition 8.2.1. A field extension L of a field K is a normal extension if
(a) L|K is algebraic.
(b) Each irreducible polynomial f(x) ∈ K[x] that has a zero in L splits into linear factors in L[x].

Note that in Example 8.1.7 the extension fields Q(aᵢ)|Q are not normal extensions. Although f(x) has a zero in Q(aᵢ), the polynomial f(x) does not split into linear factors in Q(aᵢ)[x].
We now show that L|K is a finite normal extension if and only if L is the splitting field for some f(x) ∈ K[x].

Theorem 8.2.2. Let L|K be a finite extension. Then the following are equivalent.
(a) L|K is a normal extension.
(b) L is a splitting field for some f(x) ∈ K[x].
(c) If L ⊆ L′ and ψ : L → L′ is a monomorphism with ψ|_K the identity map on K, then ψ is an automorphism of L, that is, ψ(L) = L.
Proof. Suppose that L|K is a finite normal extension. Since L|K is a finite extension, L is algebraic over K, and since it is of finite degree we have L = K(a₁, …, aₙ) with aᵢ algebraic over K.
Let fᵢ(x) ∈ K[x] be the minimal polynomial of aᵢ. Since L|K is a normal extension, fᵢ(x) splits in L[x]. This is true for each i = 1, …, n. Let f(x) = f₁(x)f₂(x) ⋯ fₙ(x). Then f(x) splits into linear factors in L[x]. Since L = K(a₁, …, aₙ) the polynomial f(x) cannot have all its zeros in any intermediate extension between K and L. Therefore L is the splitting field for f(x). Hence (a) implies (b).
Now suppose that L ⊆ L′ and ψ : L → L′ is a monomorphism with ψ|_K the identity map on K. Then the extension field ψ(L) of K is also a splitting field for f(x) since ψ|_K is the identity on K. Since ψ maps the zeros of f(x) in L ⊆ L′ onto the zeros of f(x) in ψ(L) ⊆ L′, it follows that ψ(L) = L. Hence (b) implies (c).
Finally suppose (c). Hence we assume that if L ⊆ L′ and ψ : L → L′ is a monomorphism with ψ|_K the identity map on K then ψ is an automorphism of L, that is, ψ(L) = L.
As before, L|K is algebraic since L|K is finite. Suppose that f(x) ∈ K[x] is irreducible and that a ∈ L is a zero of f(x). There are algebraic elements a₁, …, aₙ ∈ L with L = K(a₁, …, aₙ) since L|K is finite. For i = 1, …, n let fᵢ(x) ∈ K[x] be the minimal polynomial of aᵢ and let g(x) = f(x)f₁(x) ⋯ fₙ(x). Let L′ be the splitting field of g(x). Clearly L ⊆ L′. Let b ∈ L′ be a zero of f(x). From Theorem 8.1.5 there is an automorphism ψ of L′ with ψ(a) = b and ψ|_K the identity on K. Hence by our assumption ψ|_L is an automorphism of L. It follows that b ∈ L and hence f(x) splits in L[x]. Therefore (c) implies (a), completing the proof.
To give simple examples of normal extensions we have the following lemma.
Lemma 8.2.3. If L is an extension of K with [L : K] = 2 then L is a normal extension of K.
Proof. Suppose that [L : K] = 2. Then L|K is algebraic since it is finite. Let f(x) ∈ K[x] be irreducible with leading coefficient 1 and which has a zero in L. Let a be one zero. Then f(x) must be the minimal polynomial of a. However deg(mₐ(x)) ≤ [L : K] = 2 and hence f(x) is of degree 1 or 2. Since f(x) has a zero in L it follows that it must split into linear factors in L[x] and therefore L is a normal extension.

Later we will tie this result to group theory when we prove that a subgroup of index 2 must be a normal subgroup.
Example 8.2.4. As a first example of the lemma consider the polynomial f(x) = x² − 2. In R this splits as (x − √2)(x + √2) and hence the field Q(√2) is the splitting field of f(x) = x² − 2 over Q. Therefore Q(√2) is a normal extension of Q.
Example 8.2.5. As a second example consider the polynomial x⁴ − 2 in Q[x]. The zeros in C are

2^{1/4}, 2^{1/4}i, 2^{1/4}i², 2^{1/4}i³.

Hence

L = Q(2^{1/4}, 2^{1/4}i, 2^{1/4}i², 2^{1/4}i³)

is the splitting field of x⁴ − 2 over Q.
Now

L = Q(2^{1/4}, 2^{1/4}i, 2^{1/4}i², 2^{1/4}i³) = Q(2^{1/4}, i).

Therefore we have

[L : Q] = [L : Q(2^{1/4})][Q(2^{1/4}) : Q].

Since x⁴ − 2 is irreducible over Q we have [Q(2^{1/4}) : Q] = 4. Since i has degree 2 over any real field we have [L : Q(2^{1/4})] = 2. Therefore L is a normal extension of Q(2^{1/4}), and x² − √2 ∈ Q(√2)[x] has the splitting field Q(2^{1/4}).
Altogether we have that L|Q(2^{1/4}), Q(2^{1/4})|Q(2^{1/2}), Q(2^{1/2})|Q and L|Q are normal extensions. However Q(2^{1/4})|Q is not normal since 2^{1/4} is a zero of x⁴ − 2 but Q(2^{1/4}) does not contain all the zeros of x⁴ − 2.
Hence, we have the following ﬁgure.
Figure 8.1
8.3 Exercises
1. Determine the splitting field of f(x) ∈ Q[x] and its degree over Q in the following cases:
(a) f(x) = x⁴ − p, where p is a prime.
(b) f(x) = x^p − 2, where p is a prime.
2. Determine the degree of the splitting field of the polynomial x⁴ + 4 over Q. Determine the splitting field of x⁶ − 4x⁴ + 4x² − 3 over Q.
3. For each a ∈ Z let fₐ(x) = x³ − ax² + (a − 3)x + 1 ∈ Q[x] be given.
(a) fₐ is irreducible over Q for each a ∈ Z.
(b) If b ∈ R is a zero of fₐ, then also (1 − b)⁻¹ and (b − 1)b⁻¹ are zeros of fₐ.
(c) Determine the splitting field L of fₐ(x) over Q and its degree [L : Q].
4. Let K be a field and f(x) ∈ K[x] a polynomial of degree n. Let L be a splitting field of f(x). Show:
(a) If a₁, …, aₙ ∈ L are the zeros of f, then [K(a₁, …, aₜ) : K] ≤ n(n − 1) ⋯ (n − t + 1) for each t with 1 ≤ t ≤ n.
(b) L over K is of degree at most n!.
(c) If f(x) is irreducible over K then n divides [L : K].
Chapter 9
Groups, Subgroups and Examples
9.1 Groups, Subgroups and Isomorphisms
Recall from Chapter 1 that the three most commonly studied algebraic structures are groups, rings and fields. We have now looked rather extensively at rings and fields, and in this chapter we consider the basic concepts of group theory. Groups arise in many different areas of mathematics. For example, they arise in geometry as groups of congruence motions and in topology as groups of various types of continuous functions. Later in this book they will appear in Galois theory as groups of automorphisms of fields. First we recall the definition of a group given previously in Chapter 1.
Definition 9.1.1. A group G is a set with one binary operation, which we will denote by multiplication, such that
(1) The operation is associative, that is, (g₁g₂)g₃ = g₁(g₂g₃) for all g₁, g₂, g₃ ∈ G.
(2) There exists an identity for this operation, that is, an element 1 such that 1g = g and g1 = g for each g ∈ G.
(3) Each g ∈ G has an inverse for this operation, that is, for each g there exists a g⁻¹ with the property that gg⁻¹ = 1 and g⁻¹g = 1.

If in addition the operation is commutative, that is, g₁g₂ = g₂g₁ for all g₁, g₂ ∈ G, the group G is called an abelian group.
The order of G, denoted |G|, is the number of elements in the group G. If |G| < ∞, G is a finite group; otherwise it is an infinite group.
It follows easily from the definition that the identity is unique and that each element has a unique inverse.

Lemma 9.1.2. If G is a group then there is a unique identity. Further, if g ∈ G its inverse is unique. Finally, if g₁, g₂ ∈ G then (g₁g₂)⁻¹ = g₂⁻¹g₁⁻¹.

Proof. Suppose that 1 and e are both identities for G. Then 1e = e since 1 is an identity and 1e = 1 since e is an identity. Therefore 1 = e and there is only one identity.
Next suppose that g ∈ G and g₁ and g₂ are inverses for g. Then

g₁gg₂ = (g₁g)g₂ = 1g₂ = g₂
since g₁g = 1. On the other hand

g₁gg₂ = g₁(gg₂) = g₁1 = g₁

since gg₂ = 1. It follows that g₁ = g₂ and g has a unique inverse.
Finally consider

(g₁g₂)(g₂⁻¹g₁⁻¹) = g₁(g₂g₂⁻¹)g₁⁻¹ = g₁1g₁⁻¹ = g₁g₁⁻¹ = 1.

Therefore g₂⁻¹g₁⁻¹ is an inverse for g₁g₂, and since inverses are unique it is the inverse of the product.
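The reversal of order in (g₁g₂)⁻¹ = g₂⁻¹g₁⁻¹ matters in nonabelian groups. A concrete check in the (nonabelian) group of invertible 2×2 rational matrices, using exact `Fraction` arithmetic and hypothetical matrices:

```python
from fractions import Fraction as F

# Check of Lemma 9.1.2 in the group of invertible 2x2 rational matrices:
# (AB)^(-1) = B^(-1) A^(-1).  A matrix is a tuple of row tuples.

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def inv(A):
    (a, b), (c, d) = A
    det = a * d - b * c              # must be nonzero for invertibility
    return ((d / det, -b / det), (-c / det, a / det))

A = ((F(1), F(2)), (F(3), F(5)))     # hypothetical invertible matrices
B = ((F(2), F(1)), (F(7), F(4)))

print(inv(mul(A, B)) == mul(inv(B), inv(A)))   # True: the inverse reverses order
```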
Groups most often arise as permutations on a set. We will see this, as well as other specific examples of groups, in the next sections.
Finite groups can be completely described by their group tables or multiplication tables. These are sometimes called Cayley tables. In general, let G = {g₁, …, gₙ} be a group; then the multiplication table of G is:

        g₁   g₂   ⋯   gⱼ   ⋯   gₙ
  g₁
  g₂
  ⋮
  gᵢ                 gᵢgⱼ
  ⋮
  gₙ

The entry in the row of gᵢ ∈ G and column of gⱼ ∈ G is the product (in that order) gᵢgⱼ in G.
Groups satisfy the cancellation law for multiplication.

Lemma 9.1.3. If G is a group and a, b, c ∈ G with ab = ac or ba = ca then b = c.

Proof. Suppose that ab = ac. Then a has an inverse a⁻¹ so we have

a⁻¹(ab) = a⁻¹(ac).

From the associativity of the group operation we then have

(a⁻¹a)b = (a⁻¹a)c ⟹ 1·b = 1·c ⟹ b = c.

A consequence of Lemma 9.1.3 is that each row and each column in a group table is just a permutation of the group elements. That is, each group element appears exactly once in each row and each column.
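This row/column property is easy to confirm on a small group. A sketch using (Z₆, +), where the "multiplication" table is addition mod 6:

```python
# Cayley-table check in (Z_6, +): every row and every column is a
# permutation of the group elements, as forced by the cancellation law.

n = 6
elements = list(range(n))
table = [[(a + b) % n for b in elements] for a in elements]

rows_ok = all(sorted(row) == elements for row in table)
cols_ok = all(sorted(table[i][j] for i in range(n)) == elements
              for j in range(n))
print(rows_ok and cols_ok)   # True
```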
A subset H ⊆ G is a subgroup of G if H is also a group under the same operation as G. As for rings and fields, a subset of a group is a subgroup if it is nonempty and closed under both the group operation and inverses.
Lemma 9.1.4. A subset H ⊆ G is a subgroup if H ≠ ∅ and H is closed under the operation and inverses. That is, if a, b ∈ H then ab ∈ H and a⁻¹, b⁻¹ ∈ H.

We leave the proof of this to the exercises.

Let G be a group and g ∈ G; we denote by gⁿ, n ∈ N, as with numbers, the product of g taken n times. A negative exponent will indicate the inverse of the positive exponent. As usual, let g⁰ = 1. Clearly group exponentiation will satisfy the standard laws of exponents. Now consider the set

H = {1 = g⁰, g, g⁻¹, g², g⁻², …}

of all powers of g. We will denote this by (g).
Lemma 9.1.5. If G is a group and g ∈ G then (g) forms a subgroup of G, called the cyclic subgroup generated by g. (g) is abelian even if G is not.

Proof. If g ∈ G then g ∈ (g) and hence (g) is nonempty. Suppose then that a = gᵐ, b = gⁿ are elements of (g). Then ab = gᵐgⁿ = g^{m+n} ∈ (g), so (g) is closed under the group operation. Further, a⁻¹ = (gᵐ)⁻¹ = g⁻ᵐ ∈ (g), so (g) is closed under inverses. Therefore (g) is a subgroup.
Finally, ab = gᵐgⁿ = g^{m+n} = g^{n+m} = gⁿgᵐ = ba and hence (g) is abelian.
Suppose that g ∈ G and gᵐ = 1 for some positive integer m. Then let n be the smallest positive integer such that gⁿ = 1. It follows that the elements {1, g, g², …, g^{n−1}} are all distinct, but for any other power gᵏ we have gᵏ = gᵗ for some t = 0, 1, …, n − 1 (see exercises). The cyclic subgroup generated by g then has order n, and we say that g has order n, which we denote by o(g) = n. If no such n exists we say that g has infinite order. We will look more deeply at cyclic groups and subgroups in Section 9.5.
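Computing an order just means listing powers of g until the identity reappears. A sketch in the multiplicative group Z₇* = {1, …, 6} with the hypothetical choice g = 3:

```python
# Order of an element: powers of g = 3 in Z_7* until the identity recurs.
# The smallest n with g^n = 1 is o(g).

g, p = 3, 7
powers = []
x = 1
while True:
    x = (x * g) % p
    powers.append(x)
    if x == 1:
        break

print(len(powers))        # o(3) = 6, so (3) is all of Z_7*
print(sorted(powers))     # [1, 2, 3, 4, 5, 6]
```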
We introduce one more concept before looking at examples.

Definition 9.1.6. If G and H are groups then a mapping f : G → H is a (group) homomorphism if f(g₁g₂) = f(g₁)f(g₂) for any g₁, g₂ ∈ G. If f is also a bijection then it is an isomorphism.

As with rings and fields we say that two groups G and H are isomorphic, denoted by G ≅ H, if there exists an isomorphism f : G → H. This means that abstractly G and H have exactly the same algebraic structure.
9.2 Examples of Groups
As already mentioned groups arise in many diverse areas of mathematics. In this
section and the next we present speciﬁc examples of groups.
First of all, any ring or field under addition forms an abelian group. Hence, for example, (Z, +), (Q, +), (R, +), (C, +), where Z, Q, R, C are respectively the integers, the rationals, the reals and the complex numbers, all are infinite abelian groups. If Zₙ is the modular ring Z/nZ then for any natural number n, (Zₙ, +) forms a finite abelian group. In abelian groups the group operation is often denoted by + and the identity element by 0 (zero).
In a field F, the nonzero elements are all invertible and form a group under multiplication. This is called the multiplicative group of the field F and is usually denoted by F*. Since multiplication in a field is commutative, the multiplicative group of a field is an abelian group. Hence Q*, R*, C* are all infinite abelian groups, while if p is a prime Zₚ* forms a finite abelian group. Recall that if p is a prime then the modular ring Zₚ is a field.
Within Q*, R*, C* there are certain multiplicative subgroups. Since the positive rationals Q⁺ and the positive reals R⁺ are closed under multiplication and inverse, they form subgroups of Q* and R* respectively. In C, if we consider the set of all complex numbers z with |z| = 1, then these form a multiplicative subgroup. Further, within this subgroup, if we consider the set of nth roots of unity z (that is, zⁿ = 1) for a fixed n, this forms a subgroup, this time of finite order.
The multiplicative group of a ﬁeld is a special case of the unit group of a ring. If
1 is a ring with identity, recall that a unit is an element of 1 with a multiplicative
inverse. Hence in Z the only units are ˙1 while in any ﬁeld every nonzero element is
a unit.
Lemma 9.2.1. If 1 is a ring with identity then the set of units in 1 forms a group
under multiplication called the unit group of 1 and is denoted by U(1). If 1 is a ﬁeld
then U(1) = 1
+
.
Proof. Let R be a ring with identity. Then the identity 1 itself is a unit, so 1 ∈ U(R) and hence U(R) is nonempty. If e ∈ R is a unit then it has a multiplicative inverse e^{-1}. Clearly the multiplicative inverse then has an inverse, namely e, so e^{-1} ∈ U(R) if e is. Hence to show U(R) is a group we must show that it is closed under products. Let e_1, e_2 ∈ U(R). Then there exist e_1^{-1}, e_2^{-1}. It follows that e_2^{-1}e_1^{-1} is an inverse for e_1e_2. Hence e_1e_2 is also a unit and U(R) is closed under products. Therefore for any ring R with identity, U(R) forms a multiplicative group.
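As a small computational illustration (our own sketch, not from the text; the helper name `unit_group` is ours), the unit group U(Z_n) consists of the residues coprime to n, and the closure and inverse properties used in the proof can be checked directly:

```python
from math import gcd

def unit_group(n):
    """Units of Z_n: residues a with gcd(a, n) = 1, a group under * mod n."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

U = unit_group(12)
# Closure: the product of two units is again a unit.
assert {(a * b) % 12 for a in U for b in U} == set(U)
# Inverses: every unit has a multiplicative inverse inside U.
for a in U:
    assert any((a * b) % 12 == 1 for b in U)
```

Here U(Z_12) = {1, 5, 7, 11}, a group of order 4.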
To present examples of nonabelian groups we turn to matrices. If F is a field we let

GL(n, F) = {n × n matrices over F with nonzero determinant}

and

SL(n, F) = {n × n matrices over F with determinant one}.
Section 9.2 Examples of Groups 123
Lemma 9.2.2. If F is a field then for n ≥ 2, GL(n, F) forms a nonabelian group under matrix multiplication and SL(n, F) forms a subgroup.

GL(n, F) is called the n-dimensional general linear group over F, while SL(n, F) is called the n-dimensional special linear group over F.
Proof. Recall that for two n × n matrices A, B with n ≥ 2 over a field we have

det(AB) = det(A) det(B)

where det is the determinant.
Now for any field the n × n identity matrix I has determinant 1 and hence I ∈ GL(n, F). Since determinants multiply, the product of two matrices with nonzero determinant has nonzero determinant, so GL(n, F) is closed under products. Further, over a field F, if A is an invertible matrix then

det(A^{-1}) = 1/det(A)

and so if A has nonzero determinant so does its inverse. It follows that GL(n, F) contains the inverse of any of its elements. Since matrix multiplication is associative it follows that GL(n, F) forms a group. It is nonabelian since in general matrix multiplication is noncommutative.
We leave the fact that SL(n, F) forms a subgroup to the exercises.
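The determinant identity behind the proof can be tried out numerically; the following sketch (ours, not from the text) works with exact rational 2 × 2 matrices and checks that the product of two determinant-one matrices again lies in SL(2, Q):

```python
from fractions import Fraction as Fr

def det2(m):
    # determinant of a 2x2 matrix given as a list of rows
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def mul2(a, b):
    # ordinary 2x2 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[Fr(1), Fr(2)], [Fr(0), Fr(1)]]   # det 1, so A is in SL(2, Q)
B = [[Fr(2), Fr(1)], [Fr(1), Fr(1)]]   # det 1 as well

# det is multiplicative, so GL and SL are closed under products.
assert det2(mul2(A, B)) == det2(A) * det2(B) == 1
```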
Groups play an important role in geometry. In any metric geometry an isometry is a mapping that preserves distance. To understand a geometry one must understand the group of isometries. We look briefly at the Euclidean geometry of the plane E².
An isometry or congruence motion of E² is a transformation or bijection T of E² that preserves distance, that is, d(a, b) = d(T(a), T(b)) for all points a, b ∈ E².
Theorem 9.2.3. The set of congruence motions of E² forms a group called the Euclidean group. We denote the Euclidean group by E.
Proof. The identity map 1 is clearly an isometry, and since composition of mappings is associative we need only show that the product of isometries is an isometry and that the inverse of an isometry is an isometry.
Let T, U be isometries. Then d(a, b) = d(T(a), T(b)) and d(a, b) = d(U(a), U(b)) for any points a, b. Now consider

d(TU(a), TU(b)) = d(T(U(a)), T(U(b))) = d(U(a), U(b))

since T is an isometry. However,

d(U(a), U(b)) = d(a, b)

since U is an isometry. Combining these we have that TU is also an isometry.
124 Chapter 9 Groups, Subgroups and Examples
Consider T^{-1} and points a, b. Then

d(T^{-1}(a), T^{-1}(b)) = d(TT^{-1}(a), TT^{-1}(b))

since T is an isometry. But TT^{-1} = 1 and hence

d(T^{-1}(a), T^{-1}(b)) = d(TT^{-1}(a), TT^{-1}(b)) = d(a, b).

Therefore T^{-1} is also an isometry and hence E is a group.
One of the major results concerning E is the following. We refer to [25], [26] and
[22] for a more thorough treatment.
Theorem 9.2.4. If T ∈ E then T is either a translation, rotation, reflection or glide reflection. The set of translations and rotations forms a subgroup.
Proof. We outline a brief proof. If T is an isometry and T fixes the origin (0, 0), then T is a linear mapping. It follows that T is a rotation or a reflection. If T does not fix the origin then there is a translation T_0 such that T_0T fixes the origin. This gives the translations and glide reflections. In the exercises we expand on more of the proof.
If D is a geometric figure in E², such as a triangle or square, then a symmetry of D is a congruence motion T : E² → E² that leaves D in place. It may, however, move the individual elements of D. So for example a rotation about the center of a circle is a symmetry of the circle.
Lemma 9.2.5. If D is a geometric figure in E² then the set of symmetries of D forms a subgroup of E called the symmetry group of D, denoted by Sym(D).
Proof. We show that Sym(D) is a subgroup of E. The identity map 1 fixes D, so 1 ∈ Sym(D) and hence Sym(D) is nonempty. Let T, U ∈ Sym(D). Then T maps D to D and so does U. It follows directly that so does the composition TU, and hence TU ∈ Sym(D). If T maps D to D then certainly so does its inverse. Therefore Sym(D) is a subgroup of E.
Example 9.2.6. Let T be an equilateral triangle. Then there are exactly six symmetries of T (see exercises). These are

1 = the identity
r = a rotation of 120° around the center of T
r² = a rotation of 240° around the center of T
f = a reflection over the perpendicular bisector of one of the sides
fr = the composition of f and r
fr² = the composition of f and r².
Sym(T) is called the dihedral group D_3. In the next section we will see that it is isomorphic to S_3, the symmetric group on 3 symbols.
9.3 Permutation Groups
Groups most often appear as groups of transformations or permutations on a set. In
this section we will take a short look at permutation groups and then examine them
more deeply in the next chapter. We recall some ideas ﬁrst introduced in Chapter 7 in
relation to the proof of the fundamental theorem of algebra.
Definition 9.3.1. If A is a set, a permutation on A is a one-to-one mapping of A onto itself. We denote by S_A the set of all permutations on A.
Theorem 9.3.2. For any set A, S_A forms a group under composition, called the symmetric group on A. If |A| > 2 then S_A is nonabelian. Further, if A, B have the same cardinality, then S_A ≅ S_B.
Proof. If S_A is the set of all permutations on the set A, we must show that composition is an operation on S_A that is associative and has an identity and inverses.
Let f, g ∈ S_A. Then f, g are one-to-one mappings of A onto itself. Consider f ∘ g : A → A. If f ∘ g(a_1) = f ∘ g(a_2), then f(g(a_1)) = f(g(a_2)) and g(a_1) = g(a_2), since f is one-to-one. But then a_1 = a_2 since g is one-to-one. Hence f ∘ g is one-to-one.
If a ∈ A, there exists a_1 ∈ A with f(a_1) = a since f is onto. Then there exists a_2 ∈ A with g(a_2) = a_1 since g is onto. Putting these together, f(g(a_2)) = a, and therefore f ∘ g is onto. Therefore f ∘ g is also a permutation, and composition gives a valid binary operation on S_A.
The identity function 1(a) = a for all a ∈ A will serve as the identity for S_A, while the inverse function for each permutation will be its inverse. Such unique inverse functions exist since each permutation is a bijection.
Finally, composition of functions is always associative, and therefore S_A forms a group.
Suppose that |A| > 2. Then A has at least 3 elements; call them a_1, a_2, a_3. Consider the two permutations f and g which fix (leave unchanged) all of A except a_1, a_2, a_3 and on these three elements satisfy

f(a_1) = a_2, f(a_2) = a_3, f(a_3) = a_1
g(a_1) = a_2, g(a_2) = a_1, g(a_3) = a_3.
Then under composition

f(g(a_1)) = a_3, f(g(a_2)) = a_2, f(g(a_3)) = a_1

while

g(f(a_1)) = a_1, g(f(a_2)) = a_3, g(f(a_3)) = a_2.

Therefore f ∘ g ≠ g ∘ f and hence S_A is not abelian.
If A, B have the same cardinality, then there exists a bijection σ : A → B. Define a map F : S_A → S_B in the following manner: if f ∈ S_A, let F(f) be the permutation on B given by F(f)(b) = σ(f(σ^{-1}(b))). It is straightforward to verify that F is an isomorphism (see the exercises).
If A_1 ⊆ A then those permutations on A that map A_1 to A_1 form a subgroup of S_A, called the stabilizer of A_1 and denoted stab(A_1). We leave the proof to the exercises.

Lemma 9.3.3. If A_1 ⊆ A then stab(A_1) = {f ∈ S_A : f : A_1 → A_1} forms a subgroup of S_A.
A permutation group is any subgroup of S_A for some set A.
We now look at finite permutation groups. Let A be a finite set, say A = {a_1, a_2, ..., a_n}. Then each f ∈ S_A can be pictured as a two-rowed array, with the image of each point written below it:

f = (a_1 ... a_n ; f(a_1) ... f(a_n)).

For a_1 there are n choices for f(a_1). For a_2 there are only n − 1 choices since f is one-to-one. This continues down to only one choice for a_n. Using the multiplication principle, the number of choices for f, and therefore the size of S_A, is

n(n − 1) ··· 1 = n!.

We have thus proved the following theorem.

Theorem 9.3.4. If |A| = n then |S_A| = n!.

For a set with n elements we denote S_A by S_n, called the symmetric group on n symbols.
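The count |S_A| = n! is easy to confirm by brute force for small n (an illustrative Python check of ours, not part of the text):

```python
# The permutations of an n-element set, enumerated directly, number n!.
from itertools import permutations
from math import factorial

for n in range(1, 7):
    assert len(list(permutations(range(n)))) == factorial(n)
```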
Example 9.3.5. Write down the six elements of S_3 and give the multiplication table for the group.
Name the three elements 1, 2, 3. Writing each permutation as a two-rowed array (top row the points, bottom row their images), the six elements of S_3 are then

1 = (1 2 3; 1 2 3), a = (1 2 3; 2 3 1), b = (1 2 3; 3 1 2),
c = (1 2 3; 2 1 3), d = (1 2 3; 3 2 1), e = (1 2 3; 1 3 2).
The multiplication table for S_3 can be written down directly by doing the required composition. For example,

ac = (1 2 3; 2 3 1)(1 2 3; 2 1 3) = (1 2 3; 3 2 1) = d.

To see this, note that a : 1 → 2, 2 → 3, 3 → 1; c : 1 → 2, 2 → 1, 3 → 3, and so ac : 1 → 3, 2 → 2, 3 → 1.
It is somewhat easier to construct the multiplication table if we make some observations. First, a² = b and a³ = 1. Next, c² = 1, d = ac, e = a²c and finally ac = ca².
From these relations the following multiplication table can be constructed:

        1     a     a²    c     ac    a²c
  1     1     a     a²    c     ac    a²c
  a     a     a²    1     ac    a²c   c
  a²    a²    1     a     a²c   c     ac
  c     c     a²c   ac    1     a²    a
  ac    ac    c     a²c   a     1     a²
  a²c   a²c   ac    c     a²    a     1

To see this, consider, for example, (ac)a² = a(ca²) = a(ac) = a²c.
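The table can also be verified mechanically; in this sketch (ours, not from the text) permutations of {0, 1, 2} stand in for permutations of {1, 2, 3}, stored as tuples with f[i] the image of i, and composition is taken right to left as in the example above:

```python
def compose(f, g):
    """(f∘g)(x) = f(g(x)) for permutations of {0,1,2} given as tuples."""
    return tuple(f[g[i]] for i in range(3))

one = (0, 1, 2)
a = (1, 2, 0)          # the 3-cycle a of the text: 1→2, 2→3, 3→1
c = (1, 0, 2)          # the transposition c: 1→2, 2→1, 3→3
a2 = compose(a, a)

# The defining relations a^3 = c^2 = 1 and ac = ca^2.
assert compose(a, a2) == one
assert compose(c, c) == one
assert compose(a, c) == compose(c, a2)

# The six words 1, a, a^2, c, ac, a^2c are distinct and closed under product.
elems = [one, a, a2, c, compose(a, c), compose(a2, c)]
assert len(set(elems)) == 6
for x in elems:
    for y in elems:
        assert compose(x, y) in elems
```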
More generally, we can say that S_3 has a presentation given by

S_3 = (a, c : a³ = c² = 1, ac = ca²).

By this we mean that S_3 is generated by a, c, or that S_3 has generators a, c, and the whole group and its multiplication table can be generated by using the relations a³ = c² = 1, ac = ca².
A theorem of Cayley actually shows that every group is a permutation group: a group G is a permutation group on the group G itself considered as a set. This result, however, does not give much information about the group.

Theorem 9.3.6 (Cayley's theorem). Let G be a group. Consider the set of elements of G. Then the group G is a permutation group on the set G, that is, G is a subgroup of S_G.
Proof. We show that to each g ∈ G we can associate a permutation of the set G. If g ∈ G, let π_g be the map given by

π_g : g_1 ↦ gg_1 for each g_1 ∈ G.

It is straightforward to show that each π_g is a permutation on G.
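The map g ↦ π_g of the proof can be watched in action on a concrete example (our own sketch; the group here is Z_6 under addition mod 6, so π_g is addition by g):

```python
# Left multiplication (here: addition mod 6) by g permutes the elements of G,
# and g ↦ π_g respects the group operation, as in the proof of Cayley's theorem.
n = 6
G = list(range(n))

def pi(g):
    return tuple((g + x) % n for x in G)   # π_g : x ↦ g + x

# Each π_g is a bijection of G, i.e. a permutation.
for g in G:
    assert sorted(pi(g)) == G

# Homomorphism property π_{g+h} = π_g ∘ π_h, and distinct g give distinct π_g.
for g in G:
    for h in G:
        assert pi((g + h) % n) == tuple(pi(g)[pi(h)[x]] for x in G)
assert len({pi(g) for g in G}) == n
```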
9.4 Cosets and Lagrange's Theorem

In this section, given a group G and a subgroup H, we define an equivalence relation on G. The equivalence classes all have the same size and are called the left (or right) cosets of H in G.

Definition 9.4.1. Let G be a group and H ⊆ G a subgroup. For a, b ∈ G define a ~ b if a^{-1}b ∈ H.
Lemma 9.4.2. Let G be a group and H ⊆ G a subgroup. Then the relation defined above is an equivalence relation on G. The equivalence classes all have the form aH for a ∈ G and are called the left cosets of H in G. Clearly G is a disjoint union of its left cosets.
Proof. Let us show, first of all, that this is an equivalence relation. Now a ~ a since a^{-1}a = e ∈ H. Therefore the relation is reflexive. Further, a ~ b implies a^{-1}b ∈ H, but since H is a subgroup of G we have b^{-1}a = (a^{-1}b)^{-1} ∈ H and so b ~ a. Therefore the relation is symmetric. Finally suppose that a ~ b and b ~ c. Then a^{-1}b ∈ H and b^{-1}c ∈ H. Since H is a subgroup, a^{-1}b · b^{-1}c = a^{-1}c ∈ H, and hence a ~ c. Therefore the relation is transitive and hence is an equivalence relation.
For a ∈ G the equivalence class is

[a] = {g ∈ G : a ~ g} = {g ∈ G : a^{-1}g ∈ H}.

But a^{-1}g ∈ H means precisely that g ∈ aH. It follows that the equivalence class of a ∈ G is precisely the set

aH = {g ∈ G : g = ah for some h ∈ H}.

These classes, aH, are called left cosets of H, and since they are equivalence classes they partition G. This means that every element of G is in one and only one left coset. In particular, bH = H = eH if and only if b ∈ H.

If aH is a left coset then we call the element a a coset representative. A complete collection of coset representatives, one for each of the distinct left cosets of H, is called a (left) transversal of H in G.
One could define another equivalence relation by defining a ~ b if and only if ba^{-1} ∈ H. Again this can be shown to be an equivalence relation on G, and the equivalence classes here are sets of the form

Ha = {g ∈ G : g = ha for some h ∈ H},

called right cosets of H. Also, of course, G is the (disjoint) union of its distinct right cosets.
It is easy to see that any two left (right) cosets have the same order (number of elements). To demonstrate this, consider the mapping aH → bH via ah ↦ bh where h ∈ H. It is not hard to show that this mapping is 1-1 and onto (see exercises). Thus we have |aH| = |bH|. (This is also true for right cosets and can be established in a similar manner.) Letting b ∈ H in the above discussion, we see |aH| = |H| for any a ∈ G, that is, the size of each left or right coset is exactly the same as that of the subgroup H.
One can also see that the collection {aH} of all distinct left cosets has the same number of elements as the collection {Ha} of all distinct right cosets. In other words, the number of left cosets equals the number of right cosets (this number may be infinite). For consider the map

f : aH ↦ Ha^{-1}.

This mapping is well-defined: for if aH = bH, then b = ah where h ∈ H. Thus f(bH) = Hb^{-1} = Hh^{-1}a^{-1} = f(aH). It is not hard to show that this mapping is 1-1 and onto (see exercises). Hence the number of left cosets equals the number of right cosets.
Definition 9.4.3. Let G be a group and H ⊆ G a subgroup. The number of distinct left cosets, which is the same as the number of distinct right cosets, is called the index of H in G, denoted by [G : H].
Now let us consider the case where the group G is finite. Each left coset has the same size as the subgroup H, and here both are finite. Hence |aH| = |H| for each coset. Further, the group G is a disjoint union of the left cosets, that is,

G = H ∪ g_1H ∪ ··· ∪ g_nH.

Since this is a disjoint union we have

|G| = |H| + |g_1H| + ··· + |g_nH| = |H| + |H| + ··· + |H| = |H|[G : H].
This establishes the following extremely important theorem.
Theorem 9.4.4 (Lagrange's theorem). Let G be a group and H ⊆ G a subgroup. Then

|G| = |H|[G : H].

If G is a finite group this implies that both the order of a subgroup and the index of a subgroup are divisors of the order of the group.
This theorem plays a crucial role in the structure theory of ﬁnite groups since it
greatly restricts the size of subgroups. For example in a group of order 10 there can
be proper subgroups only of orders 1, 2 and 5.
As an immediate corollary, we have the following result.
Corollary 9.4.5. The order of any element g ∈ G, where G is a finite group, divides the order of the group. In particular, if |G| = n and g ∈ G then o(g)|n and g^n = 1.

Proof. Let g ∈ G and o(g) = m. Then m is the size of the cyclic subgroup generated by g and hence m divides n by Lagrange's theorem. Then n = mk and so

g^n = g^{mk} = (g^m)^k = 1^k = 1.
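Both conclusions of the corollary are easy to check by machine on a concrete finite group; this sketch (ours, with the hypothetical helper `elem_order`) uses the unit group U(Z_20), which has order 8:

```python
from math import gcd

# In U(Z_n) every element order divides |U(Z_n)|, and g^|G| = 1 (Lagrange).
n = 20
U = [a for a in range(1, n) if gcd(a, n) == 1]
order_G = len(U)   # 8 here

def elem_order(g):
    # smallest k >= 1 with g^k = 1 mod n
    k, x = 1, g
    while x != 1:
        x = (x * g) % n
        k += 1
    return k

for g in U:
    assert order_G % elem_order(g) == 0
    assert pow(g, order_G, n) == 1
```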
Before leaving this section we consider some results concerning general subsets of a group.
Suppose that G is a group and S is an arbitrary nonempty subset of G, that is, S ⊆ G and S ≠ ∅; such a set S is usually called a complex of G.
If U and V are two complexes of G, the product UV is defined as follows:

UV = {uv ∈ G : u ∈ U, v ∈ V}.
Now suppose that U, V are subgroups of G. When is the complex UV again a subgroup of G?

Theorem 9.4.6. The product UV of two subgroups U, V of a group G is itself a subgroup if and only if U and V commute, that is, if and only if UV = VU.
Proof. We note first that when we say U and V commute, we do not demand that this is so elementwise. In other words, it is not required that uv = vu for all u ∈ U and all v ∈ V; all that is required is that for any u ∈ U and v ∈ V, uv = v_1u_1 for some elements u_1 ∈ U and v_1 ∈ V.
Assume that UV is a subgroup of G. Let u ∈ U and v ∈ V. Then u ∈ U{1} ⊆ UV and v ∈ {1}V ⊆ UV. But since UV is assumed itself to be a subgroup, it follows that vu ∈ UV. Hence each product vu ∈ UV and so VU ⊆ UV. In an identical manner UV ⊆ VU and so UV = VU.
Conversely, suppose that UV = VU. Let g_1 = u_1v_1 ∈ UV and g_2 = u_2v_2 ∈ UV. Then

g_1g_2 = (u_1v_1)(u_2v_2) = u_1(v_1u_2)v_2 = u_1(u_3v_3)v_2 = (u_1u_3)(v_3v_2) ∈ UV

since v_1u_2 = u_3v_3 for some u_3 ∈ U and v_3 ∈ V. Further,

g_1^{-1} = (u_1v_1)^{-1} = v_1^{-1}u_1^{-1} = u_4v_4 ∈ UV

for some u_4 ∈ U and v_4 ∈ V, again since VU = UV. It follows that UV is a subgroup.
Theorem 9.4.7 (product formula). Let U, V be subgroups of G and let R be a left transversal of the intersection U ∩ V in U. Then

UV = ⋃_{r∈R} rV

where this is a disjoint union. In particular, if U, V are finite then

|UV| = |U||V| / |U ∩ V|.
Proof. Since R ⊆ U we have that

⋃_{r∈R} rV ⊆ UV.

In the other direction let uv ∈ UV. Then

U = ⋃_{r∈R} r(U ∩ V).

It follows that u = rv′ with r ∈ R and v′ ∈ U ∩ V. Hence

uv = rv′v ∈ rV.

The union of cosets of V is disjoint, so

uv ∈ ⋃_{r∈R} rV.

Therefore UV ⊆ ⋃_{r∈R} rV, proving the equality.
Now suppose that |U| and |V| are finite. Then we have

|UV| = |R||V| = [U : U ∩ V]|V| = (|U| / |U ∩ V|)|V| = |U||V| / |U ∩ V|.
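A small numerical check of the product formula (our own sketch, not from the text): in the additive group Z_12 the complex UV becomes the sumset U + V, and with U = (2) of order 6 and V = (3) of order 4 we expect |U + V| = 6 · 4 / |U ∩ V| = 12:

```python
# Product formula |UV| = |U||V| / |U ∩ V| in additive notation for Z_12.
n = 12
U = {x for x in range(n) if x % 2 == 0}   # subgroup (2), order 6
V = {x for x in range(n) if x % 3 == 0}   # subgroup (3), order 4
UV = {(u + v) % n for u in U for v in V}  # the "product" U + V

assert len(UV) == len(U) * len(V) // len(U & V)   # 12 == 6*4/2
```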
We now show that index is multiplicative. Later we will see how this fact is related
to the multiplicativity of the degree of ﬁeld extensions.
Theorem 9.4.8. Suppose G is a group and U and V are subgroups with U ⊆ V ⊆ G. Then if G is the disjoint union

G = ⋃_{r∈R} rV

and V is the disjoint union

V = ⋃_{s∈S} sU

then we get a disjoint union for G as

G = ⋃_{r∈R, s∈S} rsU.

In particular, if [G : V] and [V : U] are finite then

[G : U] = [G : V][V : U].
Proof. Now

G = ⋃_{r∈R} rV = ⋃_{r∈R} r( ⋃_{s∈S} sU ) = ⋃_{r∈R, s∈S} rsU.

Suppose that r_1s_1U = r_2s_2U. Then r_1s_1UV = r_2s_2UV. But s_1UV = V and s_2UV = V, so r_1V = r_2V, which implies that r_1 = r_2. Then s_1U = s_2U, which implies that s_1 = s_2. Therefore the union is disjoint.
The index formula now follows directly.
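The index formula can be confirmed on a small chain of subgroups; this sketch (ours, with the hypothetical helper `index`) uses U = (4) ⊆ V = (2) ⊆ Z_12, where [G : V] = 2 and [V : U] = 2:

```python
# Multiplicativity of the index: [G : U] = [G : V][V : U] for U ⊆ V ⊆ G.
n = 12
G = set(range(n))
V = {0, 2, 4, 6, 8, 10}   # subgroup (2)
U = {0, 4, 8}             # subgroup (4)

def index(big, small):
    # number of distinct cosets a + small with a ranging over big
    return len({frozenset((a + s) % n for s in small) for a in big})

assert index(G, U) == index(G, V) * index(V, U)   # 4 == 2 * 2
```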
The next result says that the intersection of subgroups of finite index must again be of finite index.

Theorem 9.4.9 (Poincaré). Suppose that U, V are subgroups of finite index in G. Then U ∩ V is also of finite index. Further,

[G : U ∩ V] ≤ [G : U][G : V].

If [G : U], [G : V] are relatively prime then equality holds.
Proof. Let r be the number of left cosets of U in G that are contained in UV; r is finite since the index [G : U] is finite. From Theorem 9.4.7 we then have

[V : U ∩ V] = r ≤ [G : U].

Then from Theorem 9.4.8,

[G : U ∩ V] = [G : V][V : U ∩ V] ≤ [G : V][G : U].

Since both [G : U] and [G : V] are finite, so is [G : U ∩ V].
Now [G : U] | [G : U ∩ V] and [G : V] | [G : U ∩ V]. If [G : U] and [G : V] are relatively prime then

[G : U][G : V] | [G : U ∩ V], so [G : U][G : V] ≤ [G : U ∩ V].

Therefore we must have equality.
Corollary 9.4.10. Suppose that [G : U] and [G : V] are finite and relatively prime. Then G = UV.

Proof. From Theorem 9.4.9 we have

[G : U ∩ V] = [G : U][G : V].

From Theorem 9.4.8,

[G : U ∩ V] = [G : V][V : U ∩ V].

Combining these we have

[V : U ∩ V] = [G : U].

The number of left cosets of U in G that are contained in VU is therefore equal to the number of all left cosets of U in G. It follows then that we must have G = UV.
9.5 Generators and Cyclic Groups

We saw that if G is any group and g ∈ G then the powers of g generate a subgroup of G, called the cyclic subgroup generated by g. Here we explore more fully the idea of generating a group or subgroup. We first need the following.
Lemma 9.5.1. If U and V are subgroups of a group G then their intersection U ∩ V is also a subgroup.
Proof. Since the identity of G is in both U and V, we have that U ∩ V is nonempty. Suppose that g_1, g_2 ∈ U ∩ V. Then g_1, g_2 ∈ U and hence g_1^{-1}g_2 ∈ U since U is a subgroup. Analogously g_1^{-1}g_2 ∈ V. Hence g_1^{-1}g_2 ∈ U ∩ V and therefore U ∩ V is a subgroup.
Now let S be a subset of a group G. The subset S is certainly contained in at least one subgroup of G, namely G itself. Let {U_α} be the collection of all subgroups of G containing S. Then ⋂_α U_α is again a subgroup of G by Lemma 9.5.1. Further, it is the smallest subgroup of G containing S (see the exercises). We call ⋂_α U_α the subgroup of G generated by S and denote it by (S) or grp(S). We call the set S a set of generators for (S).

Definition 9.5.2. A subset M of a group G is a set of generators for G if G = (M), that is, the smallest subgroup of G containing M is all of G. We say that G is generated by M and that M is a set of generators for G.
Notice that any group G has at least one set of generators, namely G itself. If G = (M) and M is a finite set, then we say that G is finitely generated. Clearly any finite group is finitely generated. Shortly we will give an example of a finitely generated infinite group.
Example 9.5.3. The set of all reflections forms a set of generators for the Euclidean group E. Recall that any T ∈ E is either a translation, a rotation, a reflection or a glide reflection. It can be shown (see exercises) that any one of these can be expressed as a product of 3 or fewer reflections.
We now consider the case where a group G has a single generator.

Definition 9.5.4. A group G is cyclic if there exists a g ∈ G such that G = (g).

In this case G = {g^n : n ∈ Z}, that is, G consists of all the powers of the element g. If there exists an integer m such that g^m = 1, then there exists a smallest such positive integer, say n. It follows that g^k = g^l if and only if k ≡ l mod n. In this situation the distinct powers of g are precisely

{1 = g^0, g, g^2, ..., g^{n-1}}.

It follows that |G| = n. We then call G a finite cyclic group. If no such power exists then all the powers of g are distinct and G is an infinite cyclic group.
We show next that any two cyclic groups of the same order are isomorphic.

Theorem 9.5.5. (a) If G = (g) is an infinite cyclic group then G ≅ (Z, +), that is, the integers under addition.
(b) If G = (g) is a finite cyclic group of order n then G ≅ (Z_n, +), that is, the integers modulo n under addition.
It follows that for a given order there is only one cyclic group up to isomorphism.
Proof. Let G be an infinite cyclic group with generator g. Map g onto 1 ∈ (Z, +). Since g generates G and 1 generates Z under addition, this can be extended to a homomorphism. It is straightforward to show that this defines an isomorphism.
Now let G be a finite cyclic group of order n with generator g. As above, map g to 1 ∈ Z_n and extend to a homomorphism. Again it is straightforward to show that this defines an isomorphism.
Now let G and H be two cyclic groups of the same order. If both are infinite then both are isomorphic to (Z, +) and hence isomorphic to each other. If both are finite of order n then both are isomorphic to (Z_n, +) and hence isomorphic to each other.
Theorem 9.5.6. Let G = (g) be a finite cyclic group of order n. Then every subgroup of G is also cyclic. Further, if d|n there exists a unique subgroup of G of order d.
Proof. Let G = (g) be a finite cyclic group of order n and suppose that H is a subgroup of G. Notice that if g^m ∈ H then g^{-m} is also in H since H is a subgroup. Hence H must contain positive powers of the generator g. Let t be the smallest positive power of g such that g^t ∈ H. We claim that H = (g^t), the cyclic subgroup of G generated by g^t. Let h ∈ H; then h = g^m for some positive integer m ≥ t. Divide m by t to get

m = qt + r where r = 0 or 0 < r < t.

If r ≠ 0 then r = m − qt > 0. Now g^m ∈ H and g^t ∈ H, so g^{qt} ∈ H for any q since H is a subgroup. It follows that g^m g^{-qt} = g^{m-qt} ∈ H. This implies that g^r ∈ H. However, this is a contradiction since r < t and t is the least positive power in H. It follows that r = 0, so m = qt. This implies that g^m = g^{qt} = (g^t)^q, that is, g^m is a power of g^t. Therefore every element of H is a power of g^t, and therefore g^t generates H; hence H is cyclic.
Now suppose that d|n, so that n = kd. Let H = (g^k), that is, the subgroup of G generated by g^k. We claim that H has order d and that any other subgroup H_1 of G with order d coincides with H. Now (g^k)^d = g^{kd} = g^n = 1, so the order of g^k divides d and hence is ≤ d. Suppose that (g^k)^{d_1} = g^{kd_1} = 1 with d_1 < d. Then since the order of g is n we have n = kd | kd_1 with d_1 < d, which is impossible. Therefore the order of g^k is d and H = (g^k) is a subgroup of G of order d.
Now let H_1 be a subgroup of G of order d. We must show that H_1 = H. Let h ∈ H_1, so h = g^t and hence g^{td} = 1. It follows that n|td and so kd|td, hence k|t, that is, t = qk for some positive integer q. Therefore g^t = (g^k)^q ∈ H. Therefore H_1 ⊆ H, and since they are of the same size, H = H_1.
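Both statements of the theorem are easy to confirm by brute force in a small case (our own sketch, not from the text). In the additive group Z_12, single generators suffice to enumerate all subgroups, since the theorem itself says every subgroup is cyclic; the unique subgroup of order d is then (n/d):

```python
# In the cyclic group Z_12 there is exactly one subgroup of order d
# for each divisor d of 12, namely the one generated by 12/d.
n = 12

def generated(g):
    return frozenset((k * g) % n for k in range(n))

subgroups = {generated(g) for g in range(n)}   # all subgroups (each is cyclic)
for d in range(1, n + 1):
    if n % d == 0:
        of_order_d = [H for H in subgroups if len(H) == d]
        assert of_order_d == [generated(n // d)]
```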
Theorem 9.5.7. Let G = (g) be an infinite cyclic group. Then a subgroup H is of the form H = (g^t) for a positive integer t. Further, if t_1, t_2 are positive integers with t_1 ≠ t_2 then (g^{t_1}) and (g^{t_2}) are distinct.
Proof. Let G = (g) be an infinite cyclic group and H a subgroup of G. As in the proof of Theorem 9.5.6, H must contain positive powers of the generator g. Let t be the smallest positive power of g such that g^t ∈ H. We claim that H = (g^t), the cyclic subgroup of G generated by g^t. Let h ∈ H; then h = g^m for some positive integer m ≥ t. Divide m by t to get

m = qt + r where r = 0 or 0 < r < t.

If r ≠ 0 then r = m − qt > 0. Now g^m ∈ H and g^t ∈ H, so g^{qt} ∈ H for any q since H is a subgroup. It follows that g^m g^{-qt} = g^{m-qt} ∈ H. This implies that g^r ∈ H. However, this is a contradiction since r < t and t is the least positive power in H. It follows that r = 0, so m = qt. This implies that g^m = g^{qt} = (g^t)^q, that is, g^m is a power of g^t. Therefore every element of H is a power of g^t, and therefore g^t generates H and hence H = (g^t).
From the proof above, in the subgroup (g^t) the integer t is the smallest positive power of g in (g^t). Therefore if t_1, t_2 are positive integers with t_1 ≠ t_2 then (g^{t_1}) and (g^{t_2}) are distinct.
Theorem 9.5.8. Let G = (g) be a cyclic group. Then
(a) If G = (g) is finite of order n then g^k is also a generator if and only if (k, n) = 1. That is, the generators of G are precisely those powers g^k where k is relatively prime to n.
(b) If G = (g) is infinite then the only generators are g, g^{-1}.
Proof. (a) Let G = (g) be a finite cyclic group of order n and suppose that (k, n) = 1. Then there exist integers x, y with kx + ny = 1. It follows that

g = g^{kx+ny} = (g^k)^x (g^n)^y = (g^k)^x

since g^n = 1. Hence g is a power of g^k, which implies that every element of G is also a power of g^k. Therefore g^k is also a generator.
Conversely, suppose that g^k is also a generator. Then g is a power of g^k, so there exists an x such that g = g^{kx}. It follows that kx ≡ 1 modulo n, and so there exists a y such that

kx + ny = 1.

This then implies that (k, n) = 1.
(b) If G = (g) is infinite then any power of g other than g^{±1} generates a proper subgroup. For if g were a power of g^m, so that g = g^{mx}, it would follow that g^{mx-1} = 1, so that g has finite order, contradicting that G is infinite cyclic.
Recall that for positive integers n the Euler phi-function is defined as follows.

Definition 9.5.9. For any n > 0, let

φ(n) = the number of integers less than or equal to n and relatively prime to n.

Example 9.5.10. φ(6) = 2 since among 1, 2, 3, 4, 5, 6 only 1, 5 are relatively prime to 6.
Corollary 9.5.11. If G = (g) is finite of order n then there are φ(n) generators for G, where φ is the Euler phi-function.

Proof. From Theorem 9.5.8 the generators of G are precisely the powers g^k where (k, n) = 1. The numbers relatively prime to n are counted by the Euler phi-function.
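Theorem 9.5.8(a) and the corollary can both be checked mechanically on the additive model Z_n with generator 1 (an illustrative sketch of ours; `phi` is our own naive implementation of the Euler phi-function):

```python
from math import gcd

def phi(n):
    # naive Euler phi-function: count 1 <= k <= n with gcd(k, n) = 1
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# In Z_n the element k generates the whole group iff gcd(k, n) = 1,
# so there are exactly phi(n) generators.
for n in range(2, 30):
    gens = [k for k in range(n) if len({(j * k) % n for j in range(n)}) == n]
    assert len(gens) == phi(n)
    assert all(gcd(k, n) == 1 for k in gens)

assert phi(6) == 2   # matching Example 9.5.10
```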
Recall that in an arbitrary group G, if g ∈ G, then the order of g, denoted o(g), is the order of the cyclic subgroup generated by g. Given two elements g, h ∈ G, in general there is no relationship between o(g), o(h) and the order of the product gh. However, if they commute there is a very direct relationship.
Lemma 9.5.12. Let G be an arbitrary group and g, h ∈ G both of finite order o(g), o(h). If g and h commute, that is, gh = hg, then o(gh) divides lcm(o(g), o(h)). In particular, if G is an abelian group then o(gh) | lcm(o(g), o(h)) for all g, h ∈ G of finite order. Further, if (g) ∩ (h) = {1} then o(gh) = lcm(o(g), o(h)).
Proof. Suppose o(g) = n and o(h) = m are finite. If g, h commute then for any k we have (gh)^k = g^k h^k. Let t = lcm(n, m); then t = k_1 m and t = k_2 n. Hence

(gh)^t = g^t h^t = (g^n)^{k_2} (h^m)^{k_1} = 1.

Therefore the order of gh is finite and divides t. Suppose that (g) ∩ (h) = {1}, that is, the cyclic subgroup generated by g intersects trivially with the cyclic subgroup generated by h. Let k = o(gh), which we know is finite from the first part of the lemma. Let t = lcm(n, m). We then have (gh)^k = g^k h^k = 1, which implies that g^k = h^{-k}. Since the cyclic subgroups have only trivial intersection, this implies that g^k = 1 and h^k = 1. But then n|k and m|k and hence t|k. Since k|t it follows that k = t.
Recall that if m and n are relatively prime then lcm(m, n) = mn. Further, if the orders of g and h are relatively prime, it follows from Lagrange's theorem that (g) ∩ (h) = {1}. We then get the following.

Corollary 9.5.13. If g, h commute and o(g) and o(h) are finite and relatively prime then o(gh) = o(g)o(h).
Definition 9.5.14. If G is a finite abelian group then the exponent of G is the lcm of the orders of all elements of G. That is,

exp(G) = lcm{o(g) : g ∈ G}.

As a consequence of Lemma 9.5.12 we obtain:

Lemma 9.5.15. Let G be a finite abelian group. Then G contains an element of order exp(G).
Proof. Suppose that exp(G) = p_1^{e_1} ··· p_k^{e_k} with the p_i distinct primes. By the definition of exp(G) there is a g_i ∈ G with o(g_i) = p_i^{e_i} r_i, with p_i and r_i relatively prime. Let h_i = g_i^{r_i}. Then from Lemma 9.5.12 we get o(h_i) = p_i^{e_i}. Now let g = h_1 h_2 ··· h_k. From the corollary to Lemma 9.5.12 we have o(g) = p_1^{e_1} ··· p_k^{e_k} = exp(G).
If K is a field then the multiplicative group of nonzero elements of K is an abelian group K*. The above results lead to the fact that a finite subgroup of K* must actually be cyclic.

Theorem 9.5.16. Let K be a field. Then any finite subgroup of K* is cyclic.
Proof. Let A ⊆ K* with |A| = n. Suppose that m = exp(A). Consider the polynomial f(x) = x^m − 1 ∈ K[x]. Since the order of each element in A divides m, it follows that a^m = 1 for all a ∈ A and hence each a ∈ A is a zero of the polynomial f(x). Hence f(x) has at least n zeros. Since a polynomial of degree m over a field can have at most m zeros, it follows that n ≤ m. From Lemma 9.5.15 there is an element a ∈ A with o(a) = m. Since |A| = n it follows that m|n and hence m ≤ n. Therefore m = n and hence A = (a), showing that A is cyclic.
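For the finite fields Z_p the theorem says Z_p* is cyclic of order p − 1, that is, some residue has order exactly p − 1 (a "primitive root"). A brute-force confirmation for small primes (our own sketch, with the hypothetical helper `order_mod`):

```python
# For each small prime p, Z_p* is cyclic: some g has order exactly p - 1.
def order_mod(g, p):
    k, x = 1, g
    while x != 1:
        x = (x * g) % p
        k += 1
    return k

for p in (3, 5, 7, 11, 13):
    assert any(order_mod(g, p) == p - 1 for g in range(1, p))
```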
We close this section with two other results concerning cyclic groups. The first proves, using group theory, a very interesting number-theoretic result concerning the Euler phi-function.

Theorem 9.5.17. For n > 1 and d ≥ 1,

∑_{d|n} φ(d) = n.
Proof. Consider a cyclic group G of order n. For each d|n, d ≥ 1, there is a unique cyclic subgroup H of order d. H then has φ(d) generators. Each element in G generates its own cyclic subgroup H_1, say of order d, and hence must be included among the φ(d) generators of H_1. Therefore

∑_{d|n} φ(d) = the sum of the numbers of generators of the cyclic subgroups of G.

But this must count the whole group, and hence this sum is n.
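The identity is easy to verify numerically for a range of n (our own sketch; `phi` is again a naive implementation of the Euler phi-function):

```python
from math import gcd

def phi(n):
    # naive Euler phi-function
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# The sum of phi(d) over all positive divisors d of n equals n.
for n in range(1, 50):
    assert sum(phi(d) for d in range(1, n + 1) if n % d == 0) == n
```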
We shall make use of the above theorem directly in the following theorem.

Theorem 9.5.18. If |G| = n and if for each positive d such that d|n, G has at most one cyclic subgroup of order d, then G is cyclic (and consequently has exactly one cyclic subgroup of order d).
Proof. For each J[n, J > 0, let [(J) = the number of elements of G of order J.
Then
d[n
[(J) = n.
Now suppose that [(J) = 0 for a given J[n. Then there exists an a ÷ G of order J
which generates a cyclic subgroup, (a), of order J of G. We claim that all elements
of G of order J are in (a). Indeed, if b ÷ G with o(b) = J and b … (a), then (b) is a
second cyclic subgroup of order J, distinct from (a). This contradicts the hypothesis,
so the claim is proved. Thus, if [(J) = 0, then [(J) = ç(J). In general, we
have [(J) _ ç(J), for all positive J[n. But n =
d[n
[(J) _
d[n
ç(J), by
the previous theorem. It follows, clearly, from this that [(J) = ç(J) for all J[n. In
Section 9.6 Exercises 139
particular, [(n) = ç(n) _ 1. Hence, there exists at least one element of G of order
n; hence G is cyclic. This completes the proof.
Corollary 9.5.19. If in a group G of order n, for each d | n, the equation x^d = 1 has at most d solutions in G, then G is cyclic.

Proof. The hypothesis clearly implies that G can have at most one cyclic subgroup of order d since all elements of such a subgroup satisfy the equation. So Theorem 9.5.18 applies to give our result.
If H is a subgroup of a group G then G operates as a group of permutations on the set {aH : a ∈ L} of left cosets of H in G, where L is a left transversal of H in G. This we can use to show that a finitely generated group has only finitely many subgroups of a given finite index.

Theorem 9.5.20. Let G be a finitely generated group. Then the number of subgroups of index n < ∞ is finite.
Proof. Let H be a subgroup of index n. We choose a left transversal {c_1, ..., c_n} for H in G where c_1 = 1 represents H. G permutes the set of cosets c_iH by multiplication from the left. This induces a homomorphism τ_H from G to S_n as follows. For each g ∈ G let τ_H(g) be the permutation which maps i to j if gc_iH = c_jH. τ_H(g) fixes the number 1 if and only if g ∈ H because c_1H = H. Now, let H and K be two different subgroups of index n in G. Then there exists g ∈ H with g ∉ K and τ_H(g) ≠ τ_K(g), and hence τ_H and τ_K are different. Since G is finitely generated there are only finitely many homomorphisms from G to S_n. Therefore the number of subgroups of index n < ∞ is finite.
9.6 Exercises

1. Prove Lemma 9.1.4.

2. Suppose that g ∈ G and g^m = 1 for some positive integer m. Let n be the smallest positive integer such that g^n = 1. Show that the elements 1, g, g^2, ..., g^{n−1} are all distinct, but for any other power g^k we have g^k = g^t for some t = 0, 1, ..., n − 1.

3. Let G be a group and U_1, U_2 be finite subgroups of G. If |U_1| and |U_2| are relatively prime, then U_1 ∩ U_2 = {e}.

4. Let A, B be subgroups of a finite group G. If |A| |B| > |G| then A ∩ B ≠ {e}.
5. Let G be the set of all real matrices of the form

( a  b )
( −b a )

where a^2 + b^2 ≠ 0. Show:

(a) G is a group.

(b) For each n ∈ N there is at least one element of order n in G.

6. Let p be a prime, and let G = SL(2, p). Show:

(a) G has at least 2p − 2 elements of order p (their exact number is p^2 − 1).

(b) If p is odd, then x^p = 1 does not hold for all x ∈ G.

7. Let p be a prime and a ∈ Z. Show that a^p ≡ a mod p.

8. Let F be a field. Show that the set of n × n matrices of determinant 1 over F forms a group.

9. Here we outline a proof that every planar Euclidean congruence motion is either a rotation, translation, reflection or glide reflection. An isometry in this problem is a planar Euclidean congruence motion. Show:

(a) If T is an isometry then it is completely determined by its action on a triangle; this is equivalent to showing that if T fixes three noncollinear points then it must be the identity.

(b) If an isometry T has exactly one fixed point then it must be a rotation with that point as center.

(c) If an isometry T has two fixed points then it fixes the line joining them. Then show that if T is not the identity it must be a reflection through this line.

(d) If an isometry T has no fixed point but preserves orientation then it must be a translation.

(e) If an isometry T has no fixed point but reverses orientation then it must be a glide reflection.

10. Let P_n be a regular n-gon and D_n its group of symmetries. Show that |D_n| = 2n. (Hint: First show that |D_n| ≤ 2n and then exhibit 2n distinct symmetries.)

11. If A, B have the same cardinality, then there exists a bijection σ : A → B. Define a map F : S_A → S_B in the following manner: if f ∈ S_A, let F(f) be the permutation on B given by F(f)(b) = σ(f(σ^{-1}(b))). Show that F is an isomorphism.

12. Prove Lemma 9.3.3.
12. Prove Lemma 9.3.3.
Chapter 10
Normal Subgroups, Factor Groups and
Direct Products
10.1 Normal Subgroups and Factor Groups
In rings we saw that there were certain special types of subrings, called ideals, that
allowed us to deﬁne factor rings. The analogous object for groups is called a normal
subgroup which we will deﬁne and investigate in this section.
Definition 10.1.1. Let G be an arbitrary group and suppose that H_1 and H_2 are subgroups of G. We say that H_2 is conjugate to H_1 if there exists an element a ∈ G such that H_2 = aH_1a^{-1}. H_1, H_2 are then called conjugate subgroups of G.
Lemma 10.1.2. Let G be an arbitrary group. Then the relation of conjugacy is an
equivalence relation on the set of subgroups of G.
Proof. We must show that conjugacy is reflexive, symmetric and transitive. If H is a subgroup of G then 1^{-1}H1 = H and hence H is conjugate to itself; therefore the relation is reflexive.

Suppose that H_1 is conjugate to H_2. Then there exists a g ∈ G with g^{-1}H_1g = H_2. This implies that gH_2g^{-1} = H_1. However (g^{-1})^{-1} = g and hence letting g_1 = g^{-1} we have g_1^{-1}H_2g_1 = H_1. Therefore H_2 is conjugate to H_1 and conjugacy is symmetric.

Finally suppose that H_1 is conjugate to H_2 and H_2 is conjugate to H_3. Then there exist g_1, g_2 ∈ G with H_2 = g_1^{-1}H_1g_1 and H_3 = g_2^{-1}H_2g_2. Then

H_3 = g_2^{-1}g_1^{-1}H_1g_1g_2 = (g_1g_2)^{-1}H_1(g_1g_2).

Therefore H_3 is conjugate to H_1 and conjugacy is transitive.
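Conjugacy of subgroups can be watched in a small concrete group. The sketch below (our own illustration, not from the text) works in S_3, where the three subgroups of order 2 turn out to form a single conjugacy class:

```python
from itertools import permutations

# Conjugacy of subgroups illustrated in S_3. Elements are tuples p with p[i]
# the image of i; the three subgroups of order 2 form one conjugacy class.

G = list(permutations(range(3)))

def compose(p, q):                     # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conjugate(a, H):                   # the subgroup a H a^(-1)
    return frozenset(compose(compose(a, h), inverse(a)) for h in H)

e = (0, 1, 2)
# The three transpositions each generate a subgroup of order 2.
transpositions = [p for p in G if sum(p[i] != i for i in range(3)) == 2]
subgroups = [frozenset({e, t}) for t in transpositions]

# Conjugating one order-2 subgroup by all of G sweeps out all three of them:
# a single equivalence class under the relation of Lemma 10.1.2.
orbit = {conjugate(a, subgroups[0]) for a in G}
assert orbit == set(subgroups)
print("the", len(subgroups), "subgroups of order 2 in S_3 are all conjugate")
```

Conjugating by the identity returns the subgroup itself, which is exactly the reflexivity step of the proof above.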
Lemma 10.1.3. Let G be an arbitrary group. Then for g ∈ G the map a ↦ g^{-1}ag is an automorphism of G.

Proof. For a fixed g ∈ G define the map f : G → G by f(a) = g^{-1}ag for a ∈ G. We must show that this is a homomorphism and that it is one-to-one and onto.

Let a_1, a_2 ∈ G. Then

f(a_1a_2) = g^{-1}a_1a_2g = (g^{-1}a_1g)(g^{-1}a_2g) = f(a_1)f(a_2).

Hence f is a homomorphism.

If f(a_1) = f(a_2) then g^{-1}a_1g = g^{-1}a_2g. Clearly by the cancellation law we then have a_1 = a_2 and hence f is one-to-one.

Finally let a ∈ G and let a_1 = gag^{-1}. Then a = g^{-1}a_1g and hence f(a_1) = a. It follows that f is onto and therefore f is an automorphism of G.
In general a subgroup H of a group G may have many different conjugates. However in certain situations the only conjugate of a subgroup H is H itself. If this is the case we say that H is a normal subgroup. We will see shortly that this is precisely the analog for groups of the concept of an ideal in rings.

Definition 10.1.4. Let G be an arbitrary group. A subgroup H is a normal subgroup of G, which we denote by H ⊲ G, if g^{-1}Hg = H for all g ∈ G.

Since the conjugation map is an isomorphism it follows that if g^{-1}Hg ⊆ H for all g ∈ G then g^{-1}Hg = H for all g ∈ G. Hence in order to show that a subgroup is normal we need only show inclusion.

Lemma 10.1.5. Let N be a subgroup of a group G. If aNa^{-1} ⊆ N for all a ∈ G, then aNa^{-1} = N for all a ∈ G. In particular, aNa^{-1} ⊆ N for all a ∈ G implies that N is a normal subgroup.
Notice that if g^{-1}Hg = H then Hg = gH. That is, as sets the left coset gH is equal to the right coset Hg. Hence for each h_1 ∈ H there is an h_2 ∈ H with gh_1 = h_2g. If H ⊲ G this is true for all g ∈ G. Further if H is normal then for the product of two cosets g_1H and g_2H we have

(g_1H)(g_2H) = g_1(Hg_2)H = g_1g_2(HH) = g_1g_2H.

If (g_1H)(g_2H) = (g_1g_2)H for all g_1, g_2 ∈ G we necessarily have gHg^{-1} = H for all g ∈ G.

Hence we have proved:

Lemma 10.1.6. Let H be a subgroup of a group G. Then the following are equivalent:

(1) H is a normal subgroup of G.

(2) g^{-1}Hg = H for all g ∈ G.

(3) gH = Hg for all g ∈ G.

(4) (g_1H)(g_2H) = (g_1g_2)H for all g_1, g_2 ∈ G.
This is precisely the condition needed to construct factor groups. First we give
some examples of normal subgroups.
Lemma 10.1.7. Every subgroup of an abelian group is normal.
Proof. Let G be abelian and H a subgroup of G. Suppose g ∈ G. Then gh = hg for all h ∈ H since G is abelian. It follows that gH = Hg. Since this is true for every g ∈ G it follows that H is normal.
Lemma 10.1.8. Let H ⊆ G be a subgroup of index 2, that is [G : H] = 2. Then H is normal in G.

Proof. Suppose that [G : H] = 2. We must show that gH = Hg for all g ∈ G. If g ∈ H then clearly H = gH = Hg. Therefore we may assume that g is not in H. Then there are only 2 left cosets and 2 right cosets. That is,

G = H ∪ gH = H ∪ Hg.

Since the union is a disjoint union we must have gH = Hg and hence H is normal.
Lemma 10.1.9. Let K be any field. Then the group SL(n, K) is a normal subgroup of GL(n, K) for any positive integer n.

Proof. Recall that GL(n, K) is the group of n × n matrices over the field K with nonzero determinant while SL(n, K) is the subgroup of n × n matrices over the field K with determinant equal to 1. Let U ∈ SL(n, K) and T ∈ GL(n, K). Consider T^{-1}UT. Then

det(T^{-1}UT) = det(T^{-1}) det(U) det(T) = det(U) det(T^{-1}T) = det(U) det(I) = det(U) = 1.

Hence T^{-1}UT ∈ SL(n, K) for any U ∈ SL(n, K) and any T ∈ GL(n, K). It follows that T^{-1} SL(n, K) T ⊆ SL(n, K) and therefore SL(n, K) is normal in GL(n, K).
The intersection of normal subgroups is again normal and the product of normal subgroups is normal.

Lemma 10.1.10. Let N_1, N_2 be normal subgroups of the group G. Then

(1) N_1 ∩ N_2 is a normal subgroup of G.

(2) N_1N_2 is a normal subgroup of G.

(3) If H is any subgroup of G then N_1 ∩ H is a normal subgroup of H and N_1H = HN_1.
Proof. (1) Let n ∈ N_1 ∩ N_2 and g ∈ G. Then g^{-1}ng ∈ N_1 since N_1 is normal. Similarly g^{-1}ng ∈ N_2 since N_2 is normal. Hence g^{-1}ng ∈ N_1 ∩ N_2. It follows that g^{-1}(N_1 ∩ N_2)g ⊆ N_1 ∩ N_2 and therefore N_1 ∩ N_2 is normal.

(2) Let n_1 ∈ N_1, n_2 ∈ N_2. Since N_1, N_2 are both normal, N_1N_2 = N_2N_1 as sets and the complex N_1N_2 forms a subgroup of G. Let g ∈ G and n_1n_2 ∈ N_1N_2. Then

g^{-1}(n_1n_2)g = (g^{-1}n_1g)(g^{-1}n_2g) ∈ N_1N_2

since g^{-1}n_1g ∈ N_1 and g^{-1}n_2g ∈ N_2. Therefore N_1N_2 is normal in G.

(3) Let h ∈ H and n ∈ N_1 ∩ H. Then as in part (1), h^{-1}nh ∈ N_1 ∩ H and therefore N_1 ∩ H is a normal subgroup of H. The same argument as in part (2) shows that N_1H = HN_1.
We now construct factor groups or quotient groups of a group modulo a normal subgroup.

Definition 10.1.11. Let G be an arbitrary group and H a normal subgroup of G. Let G/H denote the set of distinct left (and hence also right) cosets of H in G. On G/H define the multiplication

(g_1H)(g_2H) = g_1g_2H

for any elements g_1H, g_2H in G/H.
Theorem 10.1.12. Let G be a group and H a normal subgroup of G. Then G/H under the operation defined above forms a group. This group is called the factor group or quotient group of G modulo H. The identity element is the coset 1H = H and the inverse of a coset gH is g^{-1}H.
Proof. We first show that the operation on G/N is well-defined. Suppose that a′N = aN and b′N = bN. Then b′ ∈ bN and so b′ = bn_1. Similarly a′ = an_2, where n_1, n_2 ∈ N. Therefore

a′b′N = an_2bn_1N = an_2bN

since n_1 ∈ N. But b^{-1}n_2b = n_3 ∈ N, since N is normal, so the right-hand side of the equation can be written as

an_2bN = abN.

Thus we have shown that if N ⊲ G then a′b′N = abN, and the operation on G/N is indeed well-defined.

The associative law is true because coset multiplication as defined above uses the ordinary group operation which is by definition associative.

The coset N serves as the identity element of G/N. Notice that

aN N = aN^2 = aN

and

N aN = aN^2 = aN.

The inverse of aN is a^{-1}N since

aNa^{-1}N = aa^{-1}N^2 = N.
We emphasize that the elements of G/N are cosets and thus subsets of G. If |G| < ∞, then |G/N| = [G : N], the number of cosets of N in G. It is also to be emphasized that in order for G/N to be a group, N must be a normal subgroup of G.
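The coset construction is easy to make concrete in additive notation. A minimal Python sketch (our own example, not from the text) builds G/N for G = Z_12 and N = {0, 4, 8}, and checks the well-definedness argument of the proof above directly:

```python
# G = Z_12 with N = {0, 4, 8}: the cosets g + N form the factor group G/N of
# order [G : N] = 4 under coset addition (a + N) + (b + N) = (a + b) + N.

G = list(range(12))
N = frozenset({0, 4, 8})

def coset(g):
    return frozenset((g + n) % 12 for n in N)

cosets = {coset(g) for g in G}
assert len(cosets) == 4        # |G/N| = [G : N] = 12/3 = 4

# Well-definedness: the sum coset depends only on the cosets themselves,
# not on which representatives a2, b2 are chosen from them.
for a in G:
    for b in G:
        for a2 in coset(a):
            for b2 in coset(b):
                assert coset(a2 + b2) == coset(a + b)

assert coset(0) == N           # the coset N is the identity of G/N
print("G/N is a well-defined group with", len(cosets), "elements")
```

Every subgroup of the abelian group Z_12 is normal (Lemma 10.1.7), which is why the well-definedness check cannot fail here.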
In some cases properties of G are preserved in factor groups.
Lemma 10.1.13. If G is abelian then any factor group of G is also abelian. If G is
cyclic then any factor group of G is also cyclic.
Proof. Suppose that G is abelian and H is a subgroup of G. H is necessarily normal from Lemma 10.1.7 so that we can form the factor group G/H. Let g_1H, g_2H ∈ G/H. Since G is abelian we have g_1g_2 = g_2g_1. Then in G/H,

(g_1H)(g_2H) = (g_1g_2)H = (g_2g_1)H = (g_2H)(g_1H).

Therefore G/H is abelian.

We leave the proof of the second part to the exercises.
An extremely important concept is when a group contains no proper normal subgroups other than the identity subgroup {1}.

Definition 10.1.14. A group G ≠ {1} is simple provided that N ⊲ G implies N = G or N = {1}.
One of the most outstanding problems in group theory has been to give a complete classification of all finite simple groups. In other words, this is the program to discover all finite simple groups and to prove that there are no more to be found. This was accomplished through the efforts of many mathematicians. The proof of this magnificent result took thousands of pages. We refer the reader to [18] for a complete discussion of this. We give one elementary example.
Lemma 10.1.15. Any finite group of prime order is simple and cyclic.

Proof. Suppose that G is a finite group and |G| = p where p is a prime. Let g ∈ G with g ≠ 1. Then ⟨g⟩ is a nontrivial subgroup of G so its order divides the order of G by Lagrange's theorem. Since g ≠ 1 and p is a prime we must have |⟨g⟩| = p. Therefore ⟨g⟩ is all of G, that is G = ⟨g⟩, and hence G is cyclic.

The argument above shows that G has no nontrivial proper subgroups and therefore no nontrivial normal subgroups. Therefore G is simple.
In the next chapter we will examine certain other ﬁnite simple groups.
10.2 The Group Isomorphism Theorems
In Chapter 1 we saw that there was a close relationship between ring homomorphisms
and factor rings. In particular to each ideal, and consequently to each factor ring,
there is a ring homomorphism that has that ideal as its kernel. Conversely to each ring
homomorphism its kernel is an ideal and the corresponding factor ring is isomorphic
to the image of the homomorphism. This was formalized in Theorem 1.5.7 which we
called the ring isomorphism theorem. We now look at the group theoretical analog
of this result, called the group isomorphism theorem. We will then examine some
consequences of this result that will be crucial in the Galois theory of ﬁelds.
Definition 10.2.1. If G_1 and G_2 are groups and f : G_1 → G_2 is a group homomorphism then the kernel of f, denoted ker(f), is defined as

ker(f) = {g ∈ G_1 : f(g) = 1}.

That is, the kernel is the set of the elements of G_1 that map onto the identity of G_2. The image of f, denoted im(f), is the set of elements of G_2 mapped onto by f from elements of G_1. That is

im(f) = {g_2 ∈ G_2 : f(g_1) = g_2 for some g_1 ∈ G_1}.

Note that if f is a surjection then im(f) = G_2.
As with ring homomorphisms the kernel measures how far a homomorphism is from being an injection, that is, a one-to-one mapping.
Lemma 10.2.2. Let G_1 and G_2 be groups and f : G_1 → G_2 a group homomorphism. Then f is injective if and only if ker(f) = {1}.

Proof. Suppose that f is injective. Since f(1) = 1 we always have 1 ∈ ker(f). Suppose that g ∈ ker(f). Then f(g) = f(1). Since f is injective this implies that g = 1 and hence ker(f) = {1}.

Conversely suppose that ker(f) = {1} and f(g_1) = f(g_2). Then

f(g_1)(f(g_2))^{-1} = 1 ⟹ f(g_1g_2^{-1}) = 1 ⟹ g_1g_2^{-1} ∈ ker(f).

Then since ker(f) = {1} we have g_1g_2^{-1} = 1 and hence g_1 = g_2. Therefore f is injective.
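The equivalence of Lemma 10.2.2 is easy to test exhaustively on a small family of homomorphisms. The sketch below (our own example) uses the additive maps f_c(x) = c·x on Z_12, whose identity element is 0:

```python
# Lemma 10.2.2 checked on the twelve homomorphisms f_c : Z_12 -> Z_12,
# f_c(x) = c*x mod 12 (additive notation, so the trivial kernel is {0}).

G = range(12)

def kernel(c):
    return {x for x in G if (c * x) % 12 == 0}

def is_injective(c):
    return len({(c * x) % 12 for x in G}) == 12

for c in range(12):
    # trivial kernel holds exactly when the map is injective
    assert (kernel(c) == {0}) == is_injective(c)
print("ker(f_c) trivial exactly when f_c is injective, for all 12 maps")
```

For c = 3, for example, the kernel is {0, 4, 8} and the map collapses those three elements to 0, so it is not injective; for c = 5 the kernel is {0} and the map is a bijection.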
We now state the group isomorphism theorem. This is entirely analogous to the
ring isomorphism theorem replacing ideals by normal subgroups. We note that this
theorem is sometimes called the ﬁrst group isomorphism theorem.
Theorem 10.2.3 (group isomorphism theorem). (a) Let G_1 and G_2 be groups and f : G_1 → G_2 a group homomorphism. Then ker(f) is a normal subgroup of G_1, im(f) is a subgroup of G_2 and

G_1/ker(f) ≅ im(f).

(b) Conversely suppose that N is a normal subgroup of a group G. Then there exists a group H and a homomorphism f : G → H such that ker(f) = N and im(f) = H.
Proof. (a) Since 1 ∈ ker(f) the kernel is nonempty. Suppose that g_1, g_2 ∈ ker(f). Then f(g_1) = f(g_2) = 1. It follows that f(g_1g_2^{-1}) = f(g_1)(f(g_2))^{-1} = 1. Hence g_1g_2^{-1} ∈ ker(f) and therefore ker(f) is a subgroup of G_1. Further for any g ∈ G_1 we have

f(g^{-1}g_1g) = (f(g))^{-1} f(g_1) f(g) = (f(g))^{-1} 1 f(g) = f(g^{-1}g) = f(1) = 1.

Hence g^{-1}g_1g ∈ ker(f) and ker(f) is a normal subgroup.

It is straightforward to show that im(f) is a subgroup of G_2.

Consider the map f̂ : G_1/ker(f) → im(f) defined by f̂(g ker(f)) = f(g). We show that this is an isomorphism.

Suppose that g_1 ker(f) = g_2 ker(f). Then g_1g_2^{-1} ∈ ker(f) so that f(g_1g_2^{-1}) = 1. This implies that f(g_1) = f(g_2) and hence the map f̂ is well-defined. Now

f̂(g_1 ker(f) g_2 ker(f)) = f̂(g_1g_2 ker(f)) = f(g_1g_2) = f(g_1)f(g_2) = f̂(g_1 ker(f)) f̂(g_2 ker(f))

and therefore f̂ is a homomorphism.

Suppose that f̂(g_1 ker(f)) = f̂(g_2 ker(f)). Then f(g_1) = f(g_2) and hence g_1 ker(f) = g_2 ker(f). It follows that f̂ is injective.

Finally suppose that h ∈ im(f). Then there exists a g ∈ G_1 with f(g) = h. Then f̂(g ker(f)) = h and f̂ is a surjection onto im(f). Therefore f̂ is an isomorphism, completing the proof of part (a).

(b) Conversely suppose that N is a normal subgroup of G. Define the map f : G → G/N by f(g) = gN for g ∈ G. By the definition of the product in the quotient group G/N it is clear that f is a homomorphism with im(f) = G/N. If g ∈ ker(f) then f(g) = gN = N since N is the identity in G/N. However this implies that g ∈ N and hence it follows that ker(f) = N, completing the proof.
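The isomorphism G_1/ker(f) ≅ im(f) can be observed numerically. A small sketch of our own: for f(x) = 4x on Z_12 the kernel is {0, 3, 6, 9}, the image is {0, 4, 8}, and the induced map f̂ on cosets is a well-defined bijection:

```python
# First isomorphism theorem checked for f : Z_12 -> Z_12, f(x) = 4x mod 12.

G = list(range(12))
f = lambda x: (4 * x) % 12

K = frozenset(x for x in G if f(x) == 0)      # ker(f) = {0, 3, 6, 9}
image = sorted({f(x) for x in G})             # im(f)  = [0, 4, 8]

def coset(g):
    return frozenset((g + k) % 12 for k in K)

# g ker(f) -> f(g) should be well-defined and bijective onto im(f).
fhat = {}
for g in G:
    fhat.setdefault(coset(g), set()).add(f(g))

assert all(len(vals) == 1 for vals in fhat.values())   # well-defined
assert len(fhat) == len(image)                         # bijective onto im(f)
print("|G/ker(f)| =", len(fhat), "= |im(f)| =", len(image))
```

The counts match the theorem: [G : ker(f)] = 12/4 = 3 = |im(f)|.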
There are two related theorems that are called the second isomorphism theorem and
the third isomorphism theorem.
Theorem 10.2.4 (second isomorphism theorem). Let N be a normal subgroup of a group G and U a subgroup of G. Then U ∩ N is normal in U and

(UN)/N ≅ U/(U ∩ N).

Proof. From Lemma 10.1.10 we know that U ∩ N is normal in U. Define the map

α : UN → U/(U ∩ N)

by α(un) = u(U ∩ N). If un = u′n′ then u′^{-1}u = n′n^{-1} ∈ U ∩ N. Therefore u′(U ∩ N) = u(U ∩ N) and hence the map α is well-defined.

Suppose that un, u′n′ ∈ UN. Since N is normal in G we have that unu′n′ ∈ uu′N. Hence unu′n′ = uu′n″ with n″ ∈ N. Then

α(unu′n′) = α(uu′n″) = uu′(U ∩ N).

However U ∩ N is normal in U so

uu′(U ∩ N) = u(U ∩ N)u′(U ∩ N) = α(un)α(u′n′).

Therefore α is a homomorphism.

We have im(α) = U/(U ∩ N) by definition. Suppose that un ∈ ker(α). Then α(un) = U ∩ N, which implies u ∈ U ∩ N and hence un ∈ N; conversely every element of N is in the kernel. Therefore ker(α) = N. From the group isomorphism theorem we then have

UN/N ≅ U/(U ∩ N),

proving the theorem.
Theorem 10.2.5 (third isomorphism theorem). Let N and M be normal subgroups of a group G with N a subgroup of M. Then M/N is a normal subgroup of G/N and

(G/N)/(M/N) ≅ G/M.

Proof. Define the map β : G/N → G/M by

β(gN) = gM.

It is straightforward that β is well-defined and a homomorphism. If gN ∈ ker(β) then β(gN) = gM = M and hence g ∈ M. It follows that ker(β) = M/N. In particular this shows that M/N is normal in G/N. From the group isomorphism theorem then

(G/N)/(M/N) ≅ G/M.
For a normal subgroup N in G the homomorphism f : G → G/N provides a one-to-one correspondence between subgroups of G containing N and the subgroups of G/N. This correspondence will play a fundamental role in the study of subfields of a field.

Theorem 10.2.6 (correspondence theorem). Let N be a normal subgroup of a group G and let f be the corresponding homomorphism f : G → G/N. Then the mapping

φ : H ↦ f(H),

where H is a subgroup of G containing N, provides a one-to-one correspondence between all the subgroups of G/N and the subgroups of G containing N.
Proof. We first show that the mapping φ is surjective. Let H_1 be a subgroup of G/N and let

H = {g ∈ G : f(g) ∈ H_1}.

We show that H is a subgroup of G and that N ⊆ H.

If g_1, g_2 ∈ H then f(g_1) ∈ H_1 and f(g_2) ∈ H_1. Therefore f(g_1)f(g_2) ∈ H_1 and hence f(g_1g_2) ∈ H_1. Therefore g_1g_2 ∈ H. In an identical fashion g_1^{-1} ∈ H. Therefore H is a subgroup of G. If n ∈ N then f(n) = 1 ∈ H_1 and hence n ∈ H. Therefore N ⊆ H, showing that the map φ is surjective.

Suppose that φ(H_1) = φ(H_2) where H_1 and H_2 are subgroups of G containing N. This implies that f(H_1) = f(H_2). Let g_1 ∈ H_1. Then f(g_1) = f(g_2) for some g_2 ∈ H_2. Then g_1g_2^{-1} ∈ ker(f) = N ⊆ H_2. It follows that g_1g_2^{-1} ∈ H_2 so that g_1 ∈ H_2. Hence H_1 ⊆ H_2. In a similar fashion H_2 ⊆ H_1 and therefore H_1 = H_2. It follows that φ is injective.
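The correspondence can be watched in a cyclic example of our own choosing: G = Z_12 with N = {0, 6}, where G/N is cyclic of order 6 via the map x ↦ x mod 6. Subgroups of Z_m are exactly the sets ⟨d⟩ for divisors d of m, which makes the bookkeeping easy:

```python
# Correspondence theorem for G = Z_12 and N = {0, 6}: the subgroups of G
# containing N should match the subgroups of G/N, here realized as Z_6.

N = frozenset({0, 6})

def subgroups(m):
    """All subgroups of Z_m: one <d> for each divisor d of m."""
    return {frozenset(range(0, m, d)) for d in range(1, m + 1) if m % d == 0}

containing_N = {H for H in subgroups(12) if N <= H}

def project(H):
    """Image of H under the quotient map Z_12 -> Z_12/N, realized as Z_6."""
    return frozenset(x % 6 for x in H)

images = {project(H) for H in containing_N}
assert images == subgroups(6)               # surjective onto subgroups of G/N
assert len(images) == len(containing_N)     # and injective
print(len(images), "subgroups on each side of the correspondence")
```

Both sides have four subgroups: {0, 6} ⊆ {0, 3, 6, 9} ⊆ ... on the G side matches {0} ⊆ {0, 3} ⊆ ... on the G/N side, preserving inclusions.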
10.3 Direct Products of Groups
In this section we look at a very important construction, the direct product, which
allows us to build new groups out of existing groups. This construction is the analog
for groups of the direct sum of rings. As an application of this construction, in the
next section we present a theorem which completely describes the structure of ﬁnite
abelian groups.
Let G_1, G_2 be groups and let G be the Cartesian product of G_1 and G_2. That is

G = G_1 × G_2 = {(a, b) : a ∈ G_1, b ∈ G_2}.

On G define

(a_1, b_1)(a_2, b_2) = (a_1a_2, b_1b_2).

With this operation it is straightforward to verify the group axioms for G, and hence G becomes a group.
Theorem 10.3.1. Let G_1, G_2 be groups and G the Cartesian product G_1 × G_2 with the operation defined above. Then G forms a group called the direct product of G_1 and G_2. The identity element is (1, 1) and (g, h)^{-1} = (g^{-1}, h^{-1}).

This construction can be iterated to any finite number of groups G_1, ..., G_n (also to an infinite number, but we won't consider that here) to form the direct product G_1 × G_2 × ··· × G_n.
Theorem 10.3.2. For groups G_1 and G_2 we have G_1 × G_2 ≅ G_2 × G_1, and G_1 × G_2 is abelian if and only if each G_i, i = 1, 2, is abelian.

Proof. The map (a, b) ↦ (b, a), where a ∈ G_1, b ∈ G_2, provides an isomorphism from G_1 × G_2 to G_2 × G_1.

Suppose that both G_1, G_2 are abelian. Then for a_1, a_2 ∈ G_1 and b_1, b_2 ∈ G_2 we have

(a_1, b_1)(a_2, b_2) = (a_1a_2, b_1b_2) = (a_2a_1, b_2b_1) = (a_2, b_2)(a_1, b_1)

and hence G_1 × G_2 is abelian.

Conversely suppose G_1 × G_2 is abelian and suppose that a_1, a_2 ∈ G_1. Then for the identity 1 ∈ G_2 we have

(a_1a_2, 1) = (a_1, 1)(a_2, 1) = (a_2, 1)(a_1, 1) = (a_2a_1, 1).

Therefore a_1a_2 = a_2a_1 and G_1 is abelian. Identically G_2 is abelian.
We show next that in G_1 × G_2 there are normal subgroups H_1, H_2 with H_1 ≅ G_1 and H_2 ≅ G_2.

Theorem 10.3.3. Let G = G_1 × G_2. Let H_1 = {(a, 1) : a ∈ G_1} and H_2 = {(1, b) : b ∈ G_2}. Then both H_1 and H_2 are normal subgroups of G with G = H_1H_2 and H_1 ∩ H_2 = {1}. Further H_1 ≅ G_1, H_2 ≅ G_2 and G/H_1 ≅ G_2 and G/H_2 ≅ G_1.

Proof. Map G_1 × G_2 onto G_2 by (a, b) ↦ b. It is clear that this map is a homomorphism and that the kernel is H_1 = {(a, 1) : a ∈ G_1}. This establishes that H_1 is a normal subgroup of G and that G/H_1 ≅ G_2. In an identical fashion we get that G/H_2 ≅ G_1. The map (a, 1) ↦ a provides the isomorphism from H_1 onto G_1.
If the factors are finite it is easy to find the order of G_1 × G_2. The size of the Cartesian product is just the product of the sizes of the factors.

Lemma 10.3.4. If |G_1| and |G_2| are finite then |G_1 × G_2| = |G_1| |G_2|.
Now suppose that G is a group with normal subgroups G_1, G_2 such that G = G_1G_2 and G_1 ∩ G_2 = {1}. Then we will show that G is isomorphic to the direct product G_1 × G_2. In this case we say that G is the internal direct product of its subgroups and that G_1, G_2 are direct factors of G.
Theorem 10.3.5. Suppose that G is a group with normal subgroups G_1, G_2 such that G = G_1G_2 and G_1 ∩ G_2 = {1}. Then G is isomorphic to the direct product G_1 × G_2.

Proof. Since G = G_1G_2 each element of G has the form ab with a ∈ G_1, b ∈ G_2. We first show that each a ∈ G_1 commutes with each b ∈ G_2. Consider the element aba^{-1}b^{-1}. Since G_1 is normal, ba^{-1}b^{-1} ∈ G_1, which implies that aba^{-1}b^{-1} ∈ G_1. Since G_2 is normal, aba^{-1} ∈ G_2, which implies that aba^{-1}b^{-1} ∈ G_2. Therefore aba^{-1}b^{-1} ∈ G_1 ∩ G_2 = {1} and hence aba^{-1}b^{-1} = 1, so that ab = ba.

Now map G onto G_1 × G_2 by f(ab) = (a, b). We claim that this is an isomorphism. It is clearly onto. Now

f((a_1b_1)(a_2b_2)) = f(a_1a_2b_1b_2) = (a_1a_2, b_1b_2) = (a_1, b_1)(a_2, b_2) = f(a_1b_1)f(a_2b_2)

so that f is a homomorphism. The kernel is trivial since G_1 ∩ G_2 = {1}, and so f is an isomorphism.
Although the resulting groups are isomorphic, we call G_1 × G_2 an external direct product if we started with the groups G_1, G_2 and constructed G_1 × G_2, and call G_1 × G_2 an internal direct product if we started with a group G having normal subgroups as in the theorem.
10.4 Finite Abelian Groups
We now use the results of the last section to present a theorem that completely describes the structure of finite abelian groups. This theorem is a special case of a general result on modules that we will examine in detail later in the book.
Theorem 10.4.1 (basis theorem for finite abelian groups). Let G be a finite abelian group. Then G is a direct product of cyclic groups of prime power order.

Before giving the proof we give two examples showing how this theorem leads to the classification of finite abelian groups.

Since all cyclic groups of order n are isomorphic to (Z_n, +) we will denote a cyclic group of order n by Z_n.
Example 10.4.2. Classify all abelian groups of order 60. Let G be an abelian group of order 60. From Theorem 10.4.1, G must be a direct product of cyclic groups of prime power order. Now 60 = 2^2 · 3 · 5, so the only primes involved are 2, 3 and 5. Hence the cyclic groups involved in the direct product decomposition of G have order 2, 4, 3 or 5 (by Lagrange's theorem they must be divisors of 60). Therefore G must be of the form

G ≅ Z_4 × Z_3 × Z_5 or
G ≅ Z_2 × Z_2 × Z_3 × Z_5.

Hence up to isomorphism there are only two abelian groups of order 60.
Example 10.4.3. Classify all abelian groups of order 180. Now 180 = 2^2 · 3^2 · 5, so the only primes involved are 2, 3 and 5. Hence the cyclic groups involved in the direct product decomposition of G have order 2, 4, 3, 9 or 5 (by Lagrange's theorem they must be divisors of 180). Therefore G must be of the form

G ≅ Z_4 × Z_9 × Z_5,
G ≅ Z_2 × Z_2 × Z_9 × Z_5,
G ≅ Z_4 × Z_3 × Z_3 × Z_5 or
G ≅ Z_2 × Z_2 × Z_3 × Z_3 × Z_5.

Hence up to isomorphism there are four abelian groups of order 180.
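The counting pattern in these two examples generalizes: for each prime power p^e in the factorization, the possible cyclic decompositions correspond to the partitions of the exponent e, so the total count is a product of partition numbers. The text does not spell this out, but the following sketch of ours implements that standard consequence of the basis theorem:

```python
# Counting abelian groups of order n, in the spirit of Examples 10.4.2 and
# 10.4.3: for n = p1^e1 ... pk^ek the count is the product of the numbers
# of partitions of the exponents ei (a consequence of Theorem 10.4.1).

def partitions(e):
    """Number of partitions of the nonnegative integer e."""
    ways = [1] + [0] * e
    for part in range(1, e + 1):
        for total in range(part, e + 1):
            ways[total] += ways[total - part]
    return ways[e]

def prime_exponents(n):
    """Exponents in the prime factorization of n."""
    exps, p = [], 2
    while p * p <= n:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        if e:
            exps.append(e)
        p += 1
    if n > 1:
        exps.append(1)
    return exps

def abelian_group_count(n):
    result = 1
    for e in prime_exponents(n):
        result *= partitions(e)
    return result

assert abelian_group_count(60) == 2      # matches Example 10.4.2
assert abelian_group_count(180) == 4     # matches Example 10.4.3
print(abelian_group_count(60), abelian_group_count(180))
```

For 60 = 2^2 · 3 · 5 the exponents are 2, 1, 1 with partition counts 2, 1, 1, giving the two groups found above; for 180 = 2^2 · 3^2 · 5 the counts 2, 2, 1 give four.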
The proof of Theorem 10.4.1 involves the following lemmas.
Lemma 10.4.4. Let G be a finite abelian group and let p | |G| where p is a prime. Then all the elements of G whose orders are a power of p form a normal subgroup of G. This subgroup is called the p-primary component of G, which we will denote by G_p.

Proof. Let p be a prime with p | |G| and let a and b be two elements of G of order a power of p. Since G is abelian the order of ab divides the lcm of the orders, which is again a power of p. Therefore ab ∈ G_p. The order of a^{-1} is the same as the order of a so a^{-1} ∈ G_p and therefore G_p is a subgroup.
Lemma 10.4.5. Let G be a finite abelian group of order n. Suppose that n = p_1^{e_1} ··· p_k^{e_k} with p_1, ..., p_k distinct primes. Then

G ≅ G_{p_1} × ··· × G_{p_k}

where G_{p_i} is the p_i-primary component of G.

Proof. Each G_{p_i} is normal since G is abelian, and since distinct primes are relatively prime the intersection of the G_{p_i} is the identity. Therefore Lemma 10.4.5 will follow by showing that each element of G is a product of elements in the G_{p_i}.

Let g ∈ G. Then the order of g is p_1^{f_1} ··· p_k^{f_k}. We write this as p_i^{f_i} m with (m, p_i) = 1. Then g^m has order p_i^{f_i} and hence is in G_{p_i}. Now since p_1^{f_1}, ..., p_k^{f_k} are relatively prime there exist m_1, ..., m_k with

m_1 p_1^{f_1} + ··· + m_k p_k^{f_k} = 1

and hence

g = (g^{p_1^{f_1}})^{m_1} ··· (g^{p_k^{f_k}})^{m_k}.

Therefore g is a product of elements in the G_{p_i}.
We next need the concept of a basis. Let G be any finitely generated abelian group (finite or infinite) and let g_1, ..., g_n be a set of generators for G. The generators g_1, ..., g_n form a basis if

G = ⟨g_1⟩ × ··· × ⟨g_n⟩,

that is, G is the direct product of the cyclic subgroups generated by the g_i. The basis theorem for finite abelian groups says that any finite abelian group has a basis.

Suppose that G is a finite abelian group with a basis g_1, ..., g_k so that G = ⟨g_1⟩ × ··· × ⟨g_k⟩. Since G is finite each g_i has finite order, say m_i. It follows then from the fact that G is a direct product that each g ∈ G can be expressed as

g = g_1^{n_1} ··· g_k^{n_k}

and further the integers n_1, ..., n_k are unique modulo the order of g_i. Hence each integer n_i can be chosen in the range 0, 1, ..., m_i − 1 and within this range for the element g the integer n_i is unique.

From the previous lemma each finite abelian group splits into a direct product of its p-primary components for different primes p. Hence to complete the proof of the basis theorem we must show that any finite abelian group of order p^n for some prime p has a basis. We call an abelian group of order p^n an abelian p-group.
Consider an abelian group G of order p^n for a prime p. It is somewhat easier to complete the proof if we consider the group using additive notation. That is, the operation is written +, the identity as 0, and powers are given by multiples. Hence if an element g ∈ G has order p^k then in additive notation p^k g = 0. A set of elements g_1, ..., g_k is then a basis for G if each g ∈ G can be expressed uniquely as g = m_1g_1 + ··· + m_kg_k where the m_i are unique modulo the order of g_i. We say that the g_1, ..., g_k are independent, and this is equivalent to the fact that whenever m_1g_1 + ··· + m_kg_k = 0 then m_i ≡ 0 modulo the order of g_i. We now prove that any abelian p-group has a basis.

Lemma 10.4.6. Let G be a finite abelian group of prime power order p^n for some prime p. Then G is a direct product of cyclic groups.
Notice that in the group G we have p^n g = 0 for all g ∈ G as a consequence of Lagrange's theorem. Further every element has order a power of p. The smallest power of p, say p^i, such that p^i g = 0 for all g ∈ G is called the exponent of G. Any finite abelian p-group must have some exponent p^i.
Proof. The proof of this lemma is by induction on the exponent.

The lowest possible exponent is $p$, so suppose first that $pg = 0$ for all $g \in G$. Since $G$ is finite it has a finite system of generators. Let $S = \{g_1, \dots, g_k\}$ be a minimal set of generators for $G$. We claim that this is a basis. Since this is a set of generators, to show it is a basis we must show that the $g_i$ are independent. Hence suppose that we have

$$m_1 g_1 + \cdots + m_k g_k = 0 \tag{1}$$

for some set of integers $m_i$. Since the order of each $g_i$ is $p$, as explained above we may assume that $0 \le m_i < p$ for $i = 1, \dots, k$. Suppose that some $m_i \ne 0$. Then $(m_i, p) = 1$ and hence there exists an $x_i$ with $m_i x_i \equiv 1 \bmod p$ (see Chapter 4). Multiplying equation (1) by $x_i$ we get, modulo $p$,

$$m_1 x_i g_1 + \cdots + g_i + \cdots + m_k x_i g_k = 0,$$

and rearranging,

$$g_i = -m_1 x_i g_1 - \cdots - m_k x_i g_k.$$

But then $g_i$ can be expressed in terms of the other $g_j$, and therefore the set $\{g_1, \dots, g_k\}$ is not minimal. It follows that $g_1, \dots, g_k$ constitute a basis, and the lemma is true for exponent $p$.
Now suppose that any finite abelian group of exponent $p^{n-1}$ has a basis, and assume that $G$ has exponent $p^n$. Consider the set $\overline{G} = pG = \{pg : g \in G\}$. It is straightforward to check that this forms a subgroup (see exercises). Since $p^n g = 0$ for all $g \in G$, it follows that $p^{n-1}(pg) = 0$ for all $pg \in \overline{G}$, and so the exponent of $\overline{G}$ is at most $p^{n-1}$. By the inductive hypothesis $\overline{G}$ has a basis

$$\overline{S} = \{pg_1, \dots, pg_k\}.$$

Consider the set $\{g_1, \dots, g_k\}$ and adjoin to this set the set of all elements $h \in G$ satisfying $ph = 0$. Call this set $S_1$, so that we have

$$S_1 = \{g_1, \dots, g_k, h_1, \dots, h_t\}.$$

We claim that $S_1$ is a set of generators for $G$. Let $g \in G$. Then $pg \in \overline{G}$, which has the basis $pg_1, \dots, pg_k$, so that

$$pg = m_1 pg_1 + \cdots + m_k pg_k.$$

This implies that

$$p(g - m_1 g_1 - \cdots - m_k g_k) = 0,$$
so that $g - m_1 g_1 - \cdots - m_k g_k$ must be one of the $h_i$. Hence

$$g - m_1 g_1 - \cdots - m_k g_k = h_i,$$

so that $g = m_1 g_1 + \cdots + m_k g_k + h_i$, proving the claim.
Now $S_1$ is finite, so there is a minimal subset of $S_1$ that is still a generating system for $G$. Call this $S_0$ and suppose that $S_0$, renumbering if necessary, is

$$S_0 = \{g_1, \dots, g_r, h_1, \dots, h_s\} \quad \text{with } ph_i = 0 \text{ for } i = 1, \dots, s.$$

The subgroup generated by $h_1, \dots, h_s$ has exponent $p$, so by the inductive hypothesis it has a basis. We may then assume that $h_1, \dots, h_s$ is a basis for this subgroup and hence is independent. We claim now that $g_1, \dots, g_r, h_1, \dots, h_s$ are independent and hence form a basis for $G$.
Suppose that

$$m_1 g_1 + \cdots + m_r g_r + n_1 h_1 + \cdots + n_s h_s = 0 \tag{2}$$

for some integers $m_1, \dots, m_r, n_1, \dots, n_s$. Each $m_i$ and each $n_i$ must be divisible by $p$. Suppose for example that some $m_i$ is not. Then $(m_i, p) = 1$ and then $(m_i, p^n) = 1$. This implies that there exists an $x_i$ with $m_i x_i \equiv 1 \bmod p^n$. Multiplying through by $x_i$ and rearranging we then obtain

$$g_i = -m_1 x_i g_1 - \cdots - n_s x_i h_s.$$

Therefore $g_i$ can be expressed in terms of the remaining elements of $S_0$, contradicting the minimality of $S_0$. An identical argument works if some $n_i$ is not divisible by $p$. Therefore the relation (2) takes the form

$$a_1 pg_1 + \cdots + a_r pg_r + b_1 ph_1 + \cdots + b_s ph_s = 0. \tag{3}$$

Each of the terms $ph_i = 0$, so that (3) becomes

$$a_1 pg_1 + \cdots + a_r pg_r = 0.$$

The $pg_1, \dots, pg_r$ are independent, being part of a basis of $pG$, and hence $a_i pg_i = 0$ for each $i$; that is, each term $m_i g_i = a_i pg_i = 0$. Now (2) becomes

$$n_1 h_1 + \cdots + n_s h_s = 0.$$

However $h_1, \dots, h_s$ are independent, so each $n_i \equiv 0$ modulo the order of $h_i$, completing the claim.

Therefore the whole group $G$ has a basis, proving the lemma by induction.

For more details see the proof of the general result on modules over principal ideal domains later in the book. There is also an additional elementary proof of the basis theorem for finitely generated abelian groups.
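The defining property of a basis is easy to check by brute force in a small example. The following sketch (an illustration we add here, not from the text; all helper names are ours) verifies that $g_1 = (1, 0)$ and $g_2 = (0, 1)$ form a basis of the abelian 2-group $\mathbb{Z}_4 \times \mathbb{Z}_2$, written additively: every element arises exactly once as $m_1 g_1 + m_2 g_2$ with each $m_i$ running modulo the order of $g_i$.

```python
from itertools import product

# G = Z_4 x Z_2, an abelian 2-group of order 8, written additively.
# g1 has order 4 and g2 has order 2.
g1, g2 = (1, 0), (0, 1)

def combine(m1, m2):
    # m1*g1 + m2*g2 computed inside Z_4 x Z_2
    return ((m1 * g1[0] + m2 * g2[0]) % 4, (m1 * g1[1] + m2 * g2[1]) % 2)

reps = {}
for m1, m2 in product(range(4), range(2)):  # m_i runs modulo the order of g_i
    reps.setdefault(combine(m1, m2), []).append((m1, m2))

assert len(reps) == 8                            # every element of G is reached
assert all(len(v) == 1 for v in reps.values())   # and the expression is unique
```

The same brute-force test distinguishes $\mathbb{Z}_4 \times \mathbb{Z}_2$ from $\mathbb{Z}_8$ and $\mathbb{Z}_2 \times \mathbb{Z}_2 \times \mathbb{Z}_2$, since the orders of the basis elements differ.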
10.5 Some Properties of Finite Groups
Classiﬁcation is an extremely important concept in algebra. A large part of the theory
is devoted to classifying all structures of a given type, for example all UFD’s. In most
cases this is not possible. Since for a given ﬁnite n there are only ﬁnitely many group
tables it is theoretically possible to classify all groups of order n. However even for
small n this becomes impractical. We close the chapter by looking at some further
results on ﬁnite groups and then using these to classify all the ﬁnite groups up to
order 10.
Before stating the classiﬁcation we give some further examples of groups that are
needed.
Example 10.5.1. In Example 9.2.6 we saw that the symmetry group of an equilateral triangle has 6 elements and is generated by elements $r$ and $f$ which satisfy the relations $r^3 = f^2 = 1$, $f^{-1}rf = r^{-1}$, where $r$ is a rotation of $120°$ about the center of the triangle and $f$ is a reflection through an altitude. This was called the dihedral group $D_3$ of order 6.
This can be generalized to any regular $n$-gon. If $D$ is a regular $n$-gon, then the symmetry group $D_n$ has $2n$ elements and is called the dihedral group of order $2n$. It is generated by elements $r$ and $f$ which satisfy the relations $r^n = f^2 = 1$, $f^{-1}rf = r^{n-1}$, where $r$ is a rotation of $\frac{2\pi}{n}$ about the center of the $n$-gon and $f$ is a reflection. Hence $D_4$, the symmetries of a square, has order 8, and $D_5$, the symmetries of a regular pentagon, has order 10.
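The dihedral relations can be checked concretely by realizing $D_4$ as permutations of the square's vertices. This is a small sketch we add for illustration (the helper names are ours): $r$ rotates the vertices $0, 1, 2, 3$ and $f$ is a reflection; the relations $r^4 = f^2 = 1$ and $f^{-1}rf = r^3$ hold, and the generated group has $2n = 8$ elements.

```python
# Permutations are tuples p with p[i] = image of vertex i.
def compose(p, q):            # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def power(p, k):
    result = tuple(range(len(p)))
    for _ in range(k):
        result = compose(p, result)
    return result

identity = (0, 1, 2, 3)
r = (1, 2, 3, 0)              # rotation by 2*pi/4
f = (1, 0, 3, 2)              # a reflection of the square
f_inv = f                     # a reflection is its own inverse

assert power(r, 4) == identity and compose(f, f) == identity
assert compose(f_inv, compose(r, f)) == power(r, 3)   # f^-1 r f = r^(n-1)

# The group generated by r and f has 2n = 8 elements.
elems, frontier = {identity}, [identity]
while frontier:
    p = frontier.pop()
    for gen in (r, f):
        q = compose(gen, p)
        if q not in elems:
            elems.add(q)
            frontier.append(q)
assert len(elems) == 8
```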
Example 10.5.2. Let $i, j, k$ be the generators of the quaternions. Then we have

$$i^2 = j^2 = k^2 = -1, \quad (-1)^2 = 1 \quad \text{and} \quad ijk = -1.$$

These elements then form a group of order 8, called the quaternion group and denoted by $Q$. Since $ijk = -1$ we have $ij = -ji$, and the generators $i$ and $j$ satisfy the relations $i^4 = j^4 = 1$, $i^2 = j^2$, $ij = i^2ji$.
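These relations can likewise be verified mechanically. The sketch below (added for illustration; helper names are ours) models quaternions as 4-tuples $(a, b, c, d)$ standing for $a + bi + cj + dk$ under the standard Hamilton product, and confirms $i^2 = j^2 = k^2 = ijk = -1$, $ij = -ji$, and that $\{\pm 1, \pm i, \pm j, \pm k\}$ is closed with 8 elements.

```python
# Hamilton product of quaternions represented as (a, b, c, d) = a + bi + cj + dk.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

one = (1, 0, 0, 0)
i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
neg = lambda q: tuple(-x for x in q)

assert qmul(i, i) == qmul(j, j) == qmul(k, k) == neg(one)   # i^2 = j^2 = k^2 = -1
assert qmul(qmul(i, j), k) == neg(one)                      # ijk = -1
assert qmul(i, j) == neg(qmul(j, i))                        # ij = -ji

# The eight elements {±1, ±i, ±j, ±k} form a group of order 8.
Q = {one, neg(one), i, neg(i), j, neg(j), k, neg(k)}
assert all(qmul(p, q) in Q for p in Q for q in Q) and len(Q) == 8
```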
We now state the main classiﬁcation and then prove it in a series of lemmas.
Theorem 10.5.3. Let $G$ be a finite group.

(a) If $|G| = 2$ then $G \cong \mathbb{Z}_2$.

(b) If $|G| = 3$ then $G \cong \mathbb{Z}_3$.

(c) If $|G| = 4$ then $G \cong \mathbb{Z}_4$ or $G \cong \mathbb{Z}_2 \times \mathbb{Z}_2$.

(d) If $|G| = 5$ then $G \cong \mathbb{Z}_5$.

(e) If $|G| = 6$ then $G \cong \mathbb{Z}_6 \cong \mathbb{Z}_2 \times \mathbb{Z}_3$ or $G \cong D_3$, the dihedral group with 6 elements. (Note $D_3 \cong S_3$, the symmetric group on 3 symbols.)

(f) If $|G| = 7$ then $G \cong \mathbb{Z}_7$.

(g) If $|G| = 8$ then $G \cong \mathbb{Z}_8$ or $G \cong \mathbb{Z}_4 \times \mathbb{Z}_2$ or $G \cong \mathbb{Z}_2 \times \mathbb{Z}_2 \times \mathbb{Z}_2$ or $G \cong D_4$, the dihedral group of order 8, or $G \cong Q$, the quaternion group.

(h) If $|G| = 9$ then $G \cong \mathbb{Z}_9$ or $G \cong \mathbb{Z}_3 \times \mathbb{Z}_3$.

(i) If $|G| = 10$ then $G \cong \mathbb{Z}_{10} \cong \mathbb{Z}_2 \times \mathbb{Z}_5$ or $G \cong D_5$, the dihedral group with 10 elements.
Recall from Section 10.1 that a finite group of prime order must be cyclic. Hence the cases $|G| = 2, 3, 5, 7$ of the theorem are handled. We next consider the case where $G$ has order $p^2$ where $p$ is a prime.

Definition 10.5.4. If $G$ is a group then its center, denoted $Z(G)$, is the set of elements in $G$ which commute with everything in $G$. That is,

$$Z(G) = \{g \in G : gh = hg \text{ for all } h \in G\}.$$
Lemma 10.5.5. For any group $G$ the following hold:

(a) The center $Z(G)$ is a normal subgroup.

(b) $G = Z(G)$ if and only if $G$ is abelian.

(c) If $G/Z(G)$ is cyclic then $G$ is abelian.

Proof. (a) and (b) are direct and we leave them to the exercises. Consider the case where $G/Z(G)$ is cyclic. Then each coset of $Z(G)$ has the form $g^n Z(G)$ where $g \in G$. Let $a, b \in G$. Then since $a, b$ lie in cosets of the center we have $a = g^n u$ and $b = g^m v$ with $u, v \in Z(G)$. Then

$$ab = (g^n u)(g^m v) = (g^n g^m)(uv) = (g^n g^m)(vu) = (g^m v)(g^n u) = ba$$

since $u, v$ commute with everything. Therefore $G$ is abelian.
A $p$-group is any finite group of prime power order. We need the following result, whose proof is based on what is called the class equation, which we will prove in Chapter 13.

Lemma 10.5.6. A finite $p$-group has a nontrivial center of order at least $p$.

Lemma 10.5.7. If $|G| = p^2$ with $p$ a prime then $G$ is abelian, and hence $G \cong \mathbb{Z}_{p^2}$ or $G \cong \mathbb{Z}_p \times \mathbb{Z}_p$.

Proof. Suppose that $|G| = p^2$. Then from the previous lemma $G$ has a nontrivial center, and hence $|Z(G)| = p$ or $|Z(G)| = p^2$. If $|Z(G)| = p^2$ then $G = Z(G)$ and $G$ is abelian. If $|Z(G)| = p$ then $|G/Z(G)| = p$. Since $p$ is a prime this implies that $G/Z(G)$ is cyclic, and hence from Lemma 10.5.5 $G$ is abelian.
Lemma 10.5.7 handles the cases $n = 4$ and $n = 9$. Therefore if $|G| = 4$ we must have $G \cong \mathbb{Z}_4$ or $G \cong \mathbb{Z}_2 \times \mathbb{Z}_2$, and if $|G| = 9$ we must have $G \cong \mathbb{Z}_9$ or $G \cong \mathbb{Z}_3 \times \mathbb{Z}_3$.

This leaves $n = 6, 8, 10$. We next handle 6 and 10.
Lemma 10.5.8. If $G$ is any group in which every nontrivial element has order 2, then $G$ is abelian.

Proof. Suppose that $g^2 = 1$ for all $g \in G$. This implies that $g = g^{-1}$ for all $g \in G$. Let $a, b$ be arbitrary elements of $G$. Then

$$(ab)^2 = 1 \implies abab = 1 \implies ab = b^{-1}a^{-1} = ba.$$

Therefore $a, b$ commute and $G$ is abelian.
Lemma 10.5.9. If $|G| = 6$ then $G \cong \mathbb{Z}_6$ or $G \cong D_3$.

Proof. Since $6 = 2 \cdot 3$, if $G$ is abelian then $G \cong \mathbb{Z}_2 \times \mathbb{Z}_3$. Notice that if an abelian group has an element of order $m$ and an element of order $n$ with $(n, m) = 1$, then it has an element of order $mn$. Therefore for order 6, if $G$ is abelian there is an element of order 6, and hence $G \cong \mathbb{Z}_2 \times \mathbb{Z}_3 \cong \mathbb{Z}_6$.

Now suppose that $G$ is nonabelian. The nontrivial elements of $G$ have orders 2, 3 or 6. If there is an element of order 6 then $G$ is cyclic and hence abelian. If every element has order 2 then $G$ is abelian. Therefore there is an element of order 3, say $g \in G$. The cyclic subgroup $\langle g \rangle = \{1, g, g^2\}$ then has index 2 in $G$ and is therefore normal. Let $h \in G$ with $h \notin \langle g \rangle$. Since $g, g^2$ both generate $\langle g \rangle$, we must have $\langle g \rangle \cap \langle h \rangle = \{1\}$. If $h$ also had order 3 then

$$|\langle g, h \rangle| = \frac{|\langle g \rangle|\,|\langle h \rangle|}{|\langle g \rangle \cap \langle h \rangle|} = 9,$$

which is impossible. Therefore $h$ must have order 2. Since $\langle g \rangle$ is normal we have $h^{-1}gh = g^t$ for $t = 1$ or $2$. If $h^{-1}gh = g$ then $g, h$ commute and the group $G$ is abelian. Therefore $h^{-1}gh = g^2 = g^{-1}$. It follows that $g, h$ generate a subgroup of $G$ satisfying

$$g^3 = h^2 = 1, \quad h^{-1}gh = g^{-1}.$$

This defines a subgroup of order 6 isomorphic to $D_3$ and hence must be all of $G$.
Lemma 10.5.10. If $|G| = 10$ then $G \cong \mathbb{Z}_{10}$ or $G \cong D_5$.

Proof. The proof is almost identical to that for $n = 6$. Since $10 = 2 \cdot 5$, if $G$ were abelian then $G \cong \mathbb{Z}_2 \times \mathbb{Z}_5 \cong \mathbb{Z}_{10}$.

Now suppose that $G$ is nonabelian. As for $n = 6$, $G$ must contain a normal cyclic subgroup of order 5, say $\langle g \rangle = \{1, g, g^2, g^3, g^4\}$. If $h \notin \langle g \rangle$ then exactly as for $n = 6$ it follows that $h$ must have order 2 and $h^{-1}gh = g^t$ for some $t \in \{1, 2, 3, 4\}$. If $h^{-1}gh = g$ then $g, h$ commute and $G$ is abelian. Notice that $h^{-1} = h$. Suppose that $h^{-1}gh = hgh = g^2$. Then

$$(hgh)^3 = (g^2)^3 = g^6 = g \implies g = h^2gh^2 = hg^2h = g^4 \implies g = 1,$$

which is a contradiction. Similarly $hgh = g^3$ leads to a contradiction. Therefore

$$h^{-1}gh = g^4 = g^{-1},$$

and $g, h$ generate a subgroup of order 10 satisfying

$$g^5 = h^2 = 1, \quad h^{-1}gh = g^{-1}.$$

Therefore this is all of $G$, and $G$ is isomorphic to $D_5$.
This leaves the case $n = 8$, which is the most difficult. If $|G| = 8$ and $G$ is abelian then clearly $G \cong \mathbb{Z}_8$ or $G \cong \mathbb{Z}_4 \times \mathbb{Z}_2$ or $G \cong \mathbb{Z}_2 \times \mathbb{Z}_2 \times \mathbb{Z}_2$. The proof of Theorem 10.5.3 is then completed with the following.
Lemma 10.5.11. If $G$ is a nonabelian group of order 8 then $G \cong D_4$ or $G \cong Q$.

Proof. The nontrivial elements of $G$ have orders 2, 4 or 8. If there is an element of order 8 then $G$ is cyclic and hence abelian, while if every element has order 2 then $G$ is abelian. Hence we may assume that $G$ has an element of order 4, say $g$. Then $\langle g \rangle$ has index 2 and is a normal subgroup. Suppose first that $G$ has an element $h \notin \langle g \rangle$ of order 2. Then

$$h^{-1}gh = g^t \quad \text{for some } t \in \{1, 2, 3\}.$$

If $h^{-1}gh = g$ then, as in the cases of orders 6 and 10, $\langle g, h \rangle$ defines an abelian subgroup of order 8 and hence $G$ is abelian. If $h^{-1}gh = g^2$ then

$$(h^{-1}gh)^2 = (g^2)^2 = g^4 = 1 \implies g = h^{-2}gh^2 = h^{-1}g^2h = g^4 \implies g^3 = 1,$$

contradicting the fact that $g$ has order 4. Therefore $h^{-1}gh = g^3 = g^{-1}$. It follows that $g, h$ define a subgroup of order 8 isomorphic to $D_4$. Since $|G| = 8$ this must be all of $G$, and $G \cong D_4$.

Therefore we may now assume that every element $h \in G$ with $h \notin \langle g \rangle$ has order 4. Let $h$ be such an element. Then $h^2$ has order 2, so $h^2 \in \langle g \rangle$, which implies that $h^2 = g^2$. This further implies that $g^2$ is central, that is, commutes with everything. Identifying $g$ with $i$, $h$ with $j$ and $g^2$ with $-1$, we get that $G$ is isomorphic to $Q$, completing Lemma 10.5.11 and the proof of Theorem 10.5.3.
In principle this type of analysis can be used to determine the structure of any finite group, although it quickly becomes impractical. A major tool in this classification is the following important result, known as the Sylow theorem, which for now we just state. We will prove this theorem in Chapter 13. If $|G| = p^a n$ with $p$ a prime and $(n, p) = 1$, then a subgroup of $G$ of order $p^a$ is called a $p$-Sylow subgroup. It is not clear at first that a group will contain $p$-Sylow subgroups.

Theorem 10.5.12 (Sylow theorem). Let $|G| = p^a n$ with $p$ a prime and $(n, p) = 1$.

(a) $G$ contains a $p$-Sylow subgroup.

(b) All $p$-Sylow subgroups of $G$ are conjugate in $G$.

(c) Any $p$-subgroup of $G$ is contained in a $p$-Sylow subgroup.

(d) The number of $p$-Sylow subgroups of $G$ is of the form $1 + pk$ and divides $n$.
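As a small illustration of part (d) (our own check, not from the text), take $G = S_3$ with $|G| = 6 = 2 \cdot 3$. Every Sylow subgroup of $S_3$ has prime order and is therefore cyclic, so we can count the Sylow subgroups by collecting the distinct cyclic subgroups of orders 2 and 3.

```python
from itertools import permutations

def compose(p, q):
    # composition of permutations of {0, 1, 2} given as tuples
    return tuple(p[q[i]] for i in range(3))

def cyclic(g):
    # the cyclic subgroup generated by g, as a frozenset
    elems, x = set(), g
    while x not in elems:
        elems.add(x)
        x = compose(g, x)
    return frozenset(elems)

G = list(permutations(range(3)))
sylow2 = {cyclic(g) for g in G if len(cyclic(g)) == 2}   # 2-Sylow subgroups
sylow3 = {cyclic(g) for g in G if len(cyclic(g)) == 3}   # 3-Sylow subgroups

assert len(sylow2) == 3    # 3 = 1 + 2*1 and 3 divides 3
assert len(sylow3) == 1    # 1 = 1 + 3*0 and 1 divides 2
```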
10.6 Exercises

1. Prove that if $G$ is cyclic then any factor group of $G$ is also cyclic.

2. Prove that for any group $G$ the center $Z(G)$ is a normal subgroup, and $G = Z(G)$ if and only if $G$ is abelian.

3. Let $U_1$ and $U_2$ be subgroups of a group $G$ and let $x, y \in G$. Show:

(i) If $xU_1 = yU_2$ then $U_1 = U_2$.

(ii) Give an example to show that $xU_1 = U_2x$ does not imply $U_1 = U_2$.

4. Let $U, V$ be subgroups of a group $G$ and let $x, y \in G$. Show that if $UxV \cap UyV \ne \emptyset$ then $UxV = UyV$.

5. Let $N$ be a cyclic normal subgroup of the group $G$. Then all subgroups of $N$ are normal subgroups of $G$. Give an example to show that the statement is not correct if $N$ is not cyclic.

6. Let $N_1$ and $N_2$ be normal subgroups of $G$. Show:

(i) If all elements of $N_1$ and $N_2$ have finite order, then so do all elements of $N_1N_2$.

(ii) Let $e_1, e_2 \in \mathbb{N}$. If $n_i^{e_i} = 1$ for all $n_i \in N_i$ ($i = 1, 2$), then $x^{e_1e_2} = 1$ for all $x \in N_1N_2$.

7. Find groups $N_1, N_2$ and $G$ with $N_1 \lhd N_2 \lhd G$ but such that $N_1$ is not a normal subgroup of $G$.

8. Let $G$ be a group generated by $a$ and $b$, and let $bab^{-1} = a^r$ and $a^n = 1$ for suitable $r \in \mathbb{Z}$, $n \in \mathbb{N}$. Show:

(i) The subgroup $A := \langle a \rangle$ is a normal subgroup of $G$.

(ii) $G/A = \langle bA \rangle$.

(iii) $G = \{b^ja^i : i, j \in \mathbb{Z}\}$.

9. Prove that a group of order 24 cannot be simple.
Chapter 11

Symmetric and Alternating Groups

11.1 Symmetric Groups and Cycle Decomposition

Groups most often appear as groups of transformations or permutations on a set. In Galois theory groups will appear as permutation groups on the zeros of a polynomial. In Section 9.3 we introduced permutation groups and the symmetric group $S_n$. In this chapter we look more carefully at the structure of $S_n$ and for each $n$ introduce a very important normal subgroup $A_n$ of $S_n$ called the alternating group on $n$ symbols.

Recall that if $A$ is a set, a permutation on $A$ is a one-to-one mapping of $A$ onto itself. The set $S_A$ of all permutations on $A$ forms a group under composition called the symmetric group on $A$. If $|A| > 2$ then $S_A$ is nonabelian. Further, if $A, B$ have the same cardinality, then $S_A \cong S_B$.
If $|A| = n$ then $|S_A| = n!$, and in this case we denote $S_A$ by $S_n$, called the symmetric group on $n$ symbols. For example $|S_3| = 6$. In Example 9.3.5 we showed that the six elements of $S_3$ can be given by:

$$1 = \begin{pmatrix} 1 & 2 & 3 \\ 1 & 2 & 3 \end{pmatrix}, \quad a = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 3 & 1 \end{pmatrix}, \quad b = \begin{pmatrix} 1 & 2 & 3 \\ 3 & 1 & 2 \end{pmatrix},$$

$$c = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 1 & 3 \end{pmatrix}, \quad d = \begin{pmatrix} 1 & 2 & 3 \\ 3 & 2 & 1 \end{pmatrix}, \quad e = \begin{pmatrix} 1 & 2 & 3 \\ 1 & 3 & 2 \end{pmatrix}.$$

Further we saw that $S_3$ has a presentation given by

$$S_3 = \langle a, c : a^3 = c^2 = 1,\ ac = ca^2 \rangle.$$
By this we mean that $S_3$ is generated by $a, c$, or that $S_3$ has generators $a, c$, and the whole group and its multiplication table can be generated by using the relations $a^3 = c^2 = 1$, $ac = ca^2$.

In general a permutation group is any subgroup of $S_A$ for a set $A$.

For the remainder of this chapter we will only consider finite symmetric groups $S_n$ and always consider the set $A$ to be $A = \{1, 2, 3, \dots, n\}$.
Definition 11.1.1. Suppose that $f$ is a permutation of $A = \{1, 2, \dots, n\}$ which has the following effect on the elements of $A$: there exists an element $a_1 \in A$ such that $f(a_1) = a_2$, $f(a_2) = a_3$, $\dots$, $f(a_{k-1}) = a_k$, $f(a_k) = a_1$, and $f$ leaves all other elements (if there are any) of $A$ fixed, i.e., $f(a_j) = a_j$ for $a_j \ne a_i$, $i = 1, 2, \dots, k$. Such a permutation $f$ is called a cycle or a $k$-cycle.

We use the following notation for a $k$-cycle $f$ as given above:

$$f = (a_1, a_2, \dots, a_k).$$
The cycle notation is read from left to right: it says $f$ takes $a_1$ into $a_2$, $a_2$ into $a_3$, etc., and finally $a_k$, the last symbol, into $a_1$, the first symbol. Moreover, $f$ leaves all the other elements not appearing in the representation above fixed.

Note that one can write the same cycle in many ways using this type of notation; e.g., $f = (a_2, a_3, \dots, a_k, a_1)$. In fact any cyclic rearrangement of the symbols gives the same cycle. The integer $k$ is the length of the cycle. Note we allow a cycle to have length 1, i.e., $f = (a_1)$; this is just the identity map. For this reason, we will usually designate the identity of $S_n$ by $(1)$ or just 1. (Of course, it could also be written as $(a_i)$ for any $a_i \in A$.)

If $f$ and $g$ are two cycles, they are called disjoint cycles if the elements moved by one are left fixed by the other, that is, their representations contain different elements of the set $A$ (their representations are disjoint as sets).
Lemma 11.1.2. If $f$ and $g$ are disjoint cycles, then they must commute, that is, $fg = gf$.

Proof. Since the cycles $f$ and $g$ are disjoint, each element moved by $f$ is fixed by $g$ and vice versa. First suppose $f(a_i) \ne a_i$. This implies that $g(a_i) = a_i$ and $f^2(a_i) \ne f(a_i)$. But since $f^2(a_i) \ne f(a_i)$, the element $f(a_i)$ is also moved by $f$, so $g(f(a_i)) = f(a_i)$. Thus $(fg)(a_i) = f(g(a_i)) = f(a_i)$, while $(gf)(a_i) = g(f(a_i)) = f(a_i)$. Similarly, if $g(a_j) \ne a_j$, then $(fg)(a_j) = (gf)(a_j)$. Finally, if $f(a_k) = a_k$ and $g(a_k) = a_k$ then clearly $(fg)(a_k) = a_k = (gf)(a_k)$. Thus $gf = fg$.
Before proceeding further with the theory, let us consider a specific example. Let $A = \{1, 2, \dots, 8\}$ and let

$$f = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ 2 & 4 & 6 & 5 & 1 & 7 & 3 & 8 \end{pmatrix}.$$

We pick an arbitrary number from the set $A$, say 1. Then $f(1) = 2$, $f(2) = 4$, $f(4) = 5$, $f(5) = 1$. Now select an element from $A$ not in the set $\{1, 2, 4, 5\}$, say 3. Then $f(3) = 6$, $f(6) = 7$, $f(7) = 3$. Next select any element of $A$ not occurring in the set $\{1, 2, 4, 5\} \cup \{3, 6, 7\}$. The only element left is 8, and $f(8) = 8$. It is clear that we can now write the permutation $f$ as a product of cycles:

$$f = (1, 2, 4, 5)(3, 6, 7)(8)$$

where the order of the cycles is immaterial since they are disjoint and therefore commute. It is customary to omit cycles such as $(8)$ and write $f$ simply as

$$f = (1245)(367)$$

with the understanding that the elements of $A$ not appearing are left fixed by $f$.
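The procedure just carried out by hand is easy to mechanize. In the following sketch (our own helper, added for illustration) perm maps each $i$ to $f(i)$; we repeatedly pick the smallest unused element and follow $f$ until the cycle closes, recovering the decomposition of the example above.

```python
def cycle_decomposition(perm):
    # perm: dict mapping each element to its image under f
    seen, cycles = set(), []
    for start in sorted(perm):
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:     # follow f until we return to start
            seen.add(x)
            cycle.append(x)
            x = perm[x]
        cycles.append(tuple(cycle))
    return cycles

f = {1: 2, 2: 4, 3: 6, 4: 5, 5: 1, 6: 7, 7: 3, 8: 8}
assert cycle_decomposition(f) == [(1, 2, 4, 5), (3, 6, 7), (8,)]
```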
It is not difficult to generalize what was done here for a specific example and show that any permutation $f$ can be written uniquely, except for order, as a product of disjoint cycles. Thus let $f$ be a permutation on the set $A = \{1, 2, \dots, n\}$, and let $a_1 \in A$. Let $f(a_1) = a_2$, $f^2(a_1) = f(a_2) = a_3$, etc., and continue until a repetition is obtained. We claim that this first occurs for $a_1$, that is, the first repetition is, say, $f^k(a_1) = f(a_k) = a_{k+1} = a_1$. For suppose the first repetition occurs at the $k$th iterate of $f$, with

$$f^k(a_1) = f(a_k) = a_{k+1}$$

and $a_{k+1} = a_j$, where $j < k$. Then

$$f^k(a_1) = f^{j-1}(a_1),$$

and so $f^{k-j+1}(a_1) = a_1$. However, $k - j + 1 < k$ if $j \ne 1$, and we assumed that the first repetition occurred for $k$. Thus $j = 1$, and so $f$ does cyclically permute the set $\{a_1, a_2, \dots, a_k\}$. If $k < n$, then there exists $b_1 \in A$ such that $b_1 \notin \{a_1, a_2, \dots, a_k\}$, and we may proceed similarly with $b_1$. We continue in this manner until all the elements of $A$ are accounted for. It is then seen that $f$ can be written in the form

$$f = (a_1, \dots, a_k)(b_1, \dots, b_l)(c_1, \dots, c_m) \cdots (h_1, \dots, h_t).$$

Note that all powers $f^i(a_1)$ belong to the set $\{a_1 = f^0(a_1) = f^k(a_1),\ a_2 = f^1(a_1),\ \dots,\ a_k = f^{k-1}(a_1)\}$, all powers $f^i(b_1)$ belong to the set $\{b_1 = f^0(b_1) = f^l(b_1),\ b_2 = f^1(b_1),\ \dots,\ b_l = f^{l-1}(b_1)\}$, and so on. Here, by definition, $b_1$ is the smallest element of $\{1, 2, \dots, n\}$ which does not belong to the first of these sets, $c_1$ is the smallest element of $\{1, 2, \dots, n\}$ which belongs to neither of the first two sets, and so on.

Therefore, by construction, all the cycles are disjoint. From this it follows that $k + l + m + \cdots + t = n$. It is clear that this factorization is unique except for the order of the factors, since it tells explicitly what effect $f$ has on each element of $A$.

In summary we have proven the following result.

Theorem 11.1.3. Every permutation in $S_n$ can be written uniquely, up to the order of the factors, as a product of disjoint cycles.
Example 11.1.4. The elements of $S_3$ can be written in cycle notation as $1 = (1)$, $(1, 2)$, $(1, 3)$, $(2, 3)$, $(1, 2, 3)$, $(1, 3, 2)$. This is the largest symmetric group which consists entirely of cycles.

In $S_4$, for example, the element $(1, 2)(3, 4)$ is not a cycle.
Suppose we multiply two elements of $S_3$, say $(1, 2)$ and $(1, 3)$. In forming the product or composition here, we read from right to left. Thus, to compute $(1, 2)(1, 3)$: the permutation $(1, 3)$ takes 1 into 3 and then the permutation $(1, 2)$ takes 3 into 3, so the composite $(1, 2)(1, 3)$ takes 1 into 3. Continuing, the permutation $(1, 3)$ takes 3 into 1 and then the permutation $(1, 2)$ takes 1 into 2, so the composite $(1, 2)(1, 3)$ takes 3 into 2. Finally, $(1, 3)$ takes 2 into 2 and then $(1, 2)$ takes 2 into 1, so $(1, 2)(1, 3)$ takes 2 into 1. Thus we see

$$(1, 2)(1, 3) = (1, 3, 2).$$

As another example of this cycle multiplication consider the product in $S_5$,

$$(1, 2)(2, 4, 5)(1, 3)(1, 2, 5).$$

Reading from right to left, $1 \to 2 \to 2 \to 4 \to 4$, so $1 \to 4$. Now $4 \to 4 \to 4 \to 5 \to 5$, so $4 \to 5$. Next $5 \to 1 \to 3 \to 3 \to 3$, so $5 \to 3$. Then $3 \to 3 \to 1 \to 1 \to 2$, so $3 \to 2$. Finally $2 \to 5 \to 5 \to 2 \to 1$, so $2 \to 1$. Since all the elements of $A = \{1, 2, 3, 4, 5\}$ have been accounted for, we have

$$(1, 2)(2, 4, 5)(1, 3)(1, 2, 5) = (1, 4, 5, 3, 2).$$
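Such right-to-left products can be checked mechanically. In the sketch below (helper names are ours) multiply applies the rightmost cycle first, exactly as in the computations above, and reproduces both products.

```python
def cycle_to_map(cycle, n):
    # a cycle acting on {1, ..., n}, all other elements fixed
    m = {i: i for i in range(1, n + 1)}
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        m[a] = b
    return m

def multiply(cycles, n):
    # compose a list of cycles, rightmost factor acting first
    maps = [cycle_to_map(c, n) for c in cycles]
    result = {}
    for i in range(1, n + 1):
        x = i
        for m in reversed(maps):
            x = m[x]
        result[i] = x
    return result

assert multiply([(1, 2), (1, 3)], 3) == cycle_to_map((1, 3, 2), 3)
assert multiply([(1, 2), (2, 4, 5), (1, 3), (1, 2, 5)], 5) == cycle_to_map((1, 4, 5, 3, 2), 5)
```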
Let $f \in S_n$. If $f$ is a cycle of length 2, i.e., $f = (a_1, a_2)$ where $a_1, a_2 \in A$, then $f$ is called a transposition. Any cycle can be written as a product of transpositions, namely

$$(a_1, \dots, a_k) = (a_1, a_k)(a_1, a_{k-1}) \cdots (a_1, a_2).$$

From Theorem 11.1.3 any permutation can be written in terms of cycles, but from the above any cycle can be written as a product of transpositions. Thus we have the following result.

Theorem 11.1.5. Let $f \in S_n$ be any permutation of degree $n$. Then $f$ can be written as a product of transpositions.
11.2 Parity and the Alternating Groups

If $f$ is a permutation with a cycle decomposition

$$(a_1, \dots, a_k)(b_1, \dots, b_j) \cdots (m_1, \dots, m_t)$$

then $f$ can be written as a product of

$$W(f) = (k - 1) + (j - 1) + \cdots + (t - 1)$$

transpositions. The number $W(f)$ is uniquely associated with the permutation $f$, since $f$ is uniquely represented (up to order) as a product of disjoint cycles. However, there is nothing unique about the number of transpositions occurring in an arbitrary representation of $f$ as a product of transpositions. For example, in $S_3$,

$$(1, 3, 2) = (1, 2)(1, 3) = (1, 2)(1, 3)(1, 2)(1, 2),$$

since $(1, 2)(1, 2) = (1)$, the identity permutation of $S_3$.

Although the number of transpositions in a representation of a permutation $f$ as a product of transpositions is not unique, we shall show that the parity (evenness or oddness) of that number is unique. Moreover, it depends solely on the number $W(f)$ uniquely associated with the representation of $f$. More explicitly, we have the following result.

Theorem 11.2.1. If $f$ is a permutation written as a product of disjoint cycles and $W(f)$ is the associated integer given above, then if $W(f)$ is even (odd), any representation of $f$ as a product of transpositions must contain an even (odd) number of transpositions.
Proof. We first observe the following:

$$(a, b)(b, c_1, \dots, c_t)(a, b_1, \dots, b_k) = (a, b_1, \dots, b_k, b, c_1, \dots, c_t),$$

$$(a, b)(a, b_1, \dots, b_k, b, c_1, \dots, c_t) = (a, b_1, \dots, b_k)(b, c_1, \dots, c_t).$$

Suppose now that $f$ is represented as a product of disjoint cycles, where we include all the 1-cycles of elements of $A$ which $f$ fixes, if any. If $a$ and $b$ occur in the same cycle in this representation of $f$,

$$f = (a, b_1, \dots, b_k, b, c_1, \dots, c_t) \cdots,$$

then in the computation of $W(f)$ this cycle contributes $k + t + 1$. Now consider $(a, b)f$. Since the cycles are disjoint and disjoint cycles commute,

$$(a, b)f = (a, b)(a, b_1, \dots, b_k, b, c_1, \dots, c_t) \cdots,$$

since neither $a$ nor $b$ can occur in any factor of $f$ other than $(a, b_1, \dots, b_k, b, c_1, \dots, c_t)$. So $(a, b)$ cancels out, and we find that $(a, b)f = (b, c_1, \dots, c_t)(a, b_1, \dots, b_k) \cdots$. Since $W((b, c_1, \dots, c_t)(a, b_1, \dots, b_k)) = k + t$ but $W((a, b_1, \dots, b_k, b, c_1, \dots, c_t)) = k + t + 1$, we have $W((a, b)f) = W(f) - 1$.

A similar analysis shows that in the case where $a$ and $b$ occur in different cycles in the representation of $f$, then $W((a, b)f) = W(f) + 1$. Combining both cases, we have

$$W((a, b)f) = W(f) \pm 1.$$

Now let $f$ be written as a product of $m$ transpositions, say

$$f = (a_1, b_1)(a_2, b_2) \cdots (a_m, b_m).$$

Then

$$(a_m, b_m) \cdots (a_2, b_2)(a_1, b_1)f = 1.$$

Iterating this, together with the fact that $W(1) = 0$, shows that

$$W(f) + (\pm 1) + (\pm 1) + \cdots + (\pm 1) = 0,$$

where there are $m$ terms of the form $\pm 1$. Thus

$$W(f) = (\pm 1) + (\pm 1) + \cdots + (\pm 1),$$

$m$ times. Note if exactly $p$ of the signs are $+$ and $q = m - p$ are $-$, then $m = p + q$ and $W(f) = p - q$. Hence $m \equiv W(f) \pmod 2$. Thus $W(f)$ is even if and only if $m$ is even, and this completes the proof.
It now makes sense to state the following definition, since we know that the parity is indeed unique.

Definition 11.2.2. A permutation $f \in S_n$ is said to be even if it can be written as a product of an even number of transpositions. Similarly, $f$ is called odd if it can be written as a product of an odd number of transpositions.

Definition 11.2.3. On the group $S_n$ we define the sign function $\operatorname{sgn} : S_n \to (\mathbb{Z}_2, +)$ by $\operatorname{sgn}(\pi) = 0$ if $\pi$ is an even permutation and $\operatorname{sgn}(\pi) = 1$ if $\pi$ is an odd permutation.

We note that if $f$ and $g$ are even permutations then so are $fg$ and $f^{-1}$, and also the identity permutation is even. Further, if $f$ is even and $g$ is odd it is clear that $fg$ is odd. From this it is straightforward to establish the following.

Lemma 11.2.4. $\operatorname{sgn}$ is a homomorphism from $S_n$ onto $(\mathbb{Z}_2, +)$.
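The quantity $W(f)$ gives a direct way to compute $\operatorname{sgn}$. The sketch below (our own helper; perm maps each symbol to its image) sums $(\text{length} - 1)$ over the disjoint cycles of $f$ and reduces modulo 2, as in Theorem 11.2.1.

```python
def sgn(perm):
    # perm: dict mapping each symbol to its image; returns W(f) mod 2
    seen, W = set(), 0
    for start in perm:
        if start in seen:
            continue
        length, x = 0, start
        while x not in seen:      # trace out one disjoint cycle
            seen.add(x)
            length += 1
            x = perm[x]
        W += length - 1           # each cycle contributes (length - 1)
    return W % 2

# (1,3,2) in S_3 is even: it is (1,2)(1,3), a product of two transpositions.
assert sgn({1: 3, 3: 2, 2: 1}) == 0
# A single transposition (1,2) in S_3 is odd.
assert sgn({1: 2, 2: 1, 3: 3}) == 1
```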
We now let

$$A_n = \{\pi \in S_n : \operatorname{sgn}(\pi) = 0\}.$$

That is, $A_n$ is precisely the set of even permutations in $S_n$.

Theorem 11.2.5. For each $n \in \mathbb{N}$ the set $A_n$ forms a normal subgroup of index 2 in $S_n$, called the alternating group on $n$ symbols. Further, $|A_n| = \frac{n!}{2}$.

Proof. By Lemma 11.2.4, $\operatorname{sgn} : S_n \to (\mathbb{Z}_2, +)$ is a homomorphism. Then $\ker(\operatorname{sgn}) = A_n$, and therefore $A_n$ is a normal subgroup of $S_n$. Since $\operatorname{im}(\operatorname{sgn}) = \mathbb{Z}_2$ we have $|\operatorname{im}(\operatorname{sgn})| = 2$ and hence $|S_n/A_n| = 2$. Therefore $[S_n : A_n] = 2$. Since $|S_n| = n!$, it follows from Lagrange's theorem that $|A_n| = \frac{n!}{2}$.
11.3 Conjugation in $S_n$

Recall that in a group $G$ two elements $x, y \in G$ are conjugate if there exists a $g \in G$ with $g^{-1}xg = y$. Conjugacy is an equivalence relation on $G$. In the symmetric groups $S_n$ it is easy to determine whether two elements are conjugate. We say that two permutations in $S_n$ have the same cycle structure if they have the same number of cycles and the lengths are the same. Hence, for example, in $S_8$ the permutations

$$\pi_1 = (1, 3, 6, 7)(2, 5) \quad \text{and} \quad \pi_2 = (2, 3, 5, 6)(1, 8)$$

have the same cycle structure. In fact two permutations $\pi_1, \pi_2 \in S_n$ are conjugate if and only if they have the same cycle structure. Therefore in $S_8$ the two permutations above are conjugate.
Lemma 11.3.1. Let

$$\pi = (a_{11}, a_{12}, \dots, a_{1k_1}) \cdots (a_{s1}, a_{s2}, \dots, a_{sk_s})$$

be the cycle decomposition of $\pi \in S_n$. Let $\tau \in S_n$ and denote by $a_{ij}^\tau$ the image of $a_{ij}$ under $\tau$. Then

$$\tau\pi\tau^{-1} = (a_{11}^\tau, a_{12}^\tau, \dots, a_{1k_1}^\tau) \cdots (a_{s1}^\tau, a_{s2}^\tau, \dots, a_{sk_s}^\tau).$$

Proof. Consider $a_{11}$. Operating on the left, as with functions, we have

$$\tau\pi\tau^{-1}(a_{11}^\tau) = \tau\pi(a_{11}) = \tau(a_{12}) = a_{12}^\tau.$$

The same computation then follows for all the symbols $a_{ij}$, proving the lemma.
Theorem 11.3.2. Two permutations $\pi_1, \pi_2 \in S_n$ are conjugate if and only if they have the same cycle structure.

Proof. Suppose that $\pi_2 = \tau\pi_1\tau^{-1}$. Then from Lemma 11.3.1, $\pi_1$ and $\pi_2$ have the same cycle structure.

Conversely, suppose that $\pi_1$ and $\pi_2$ have the same cycle structure. Let

$$\pi_1 = (a_{11}, a_{12}, \dots, a_{1k_1}) \cdots (a_{s1}, a_{s2}, \dots, a_{sk_s}),$$

$$\pi_2 = (b_{11}, b_{12}, \dots, b_{1k_1}) \cdots (b_{s1}, b_{s2}, \dots, b_{sk_s}),$$

where we place the cycles of the same length under each other. Let $\tau$ be the permutation in $S_n$ that maps each symbol in $\pi_1$ to the digit below it in $\pi_2$. Then from Lemma 11.3.1 we have $\tau\pi_1\tau^{-1} = \pi_2$, and hence $\pi_1$ and $\pi_2$ are conjugate.
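The construction of $\tau$ in the proof can be carried out explicitly for the $S_8$ example above. In this sketch (helper names ours) we pair the cycles of $\pi_1 = (1, 3, 6, 7)(2, 5)$, including its fixed points 4 and 8, with those of $\pi_2 = (2, 3, 5, 6)(1, 8)$, whose fixed points are 4 and 7, and check that $\tau\pi_1\tau^{-1} = \pi_2$.

```python
def cycles_to_map(cycles, n):
    # build the permutation of {1, ..., n} with the given disjoint cycles
    m = {i: i for i in range(1, n + 1)}
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            m[a] = b
    return m

n = 8
pi1 = cycles_to_map([(1, 3, 6, 7), (2, 5)], n)
pi2 = cycles_to_map([(2, 3, 5, 6), (1, 8)], n)

# tau sends each symbol of pi1 to the digit below it in pi2 (fixed points paired too:
# (4)(8) in pi1 over (4)(7) in pi2).
tau = {1: 2, 3: 3, 6: 5, 7: 6, 2: 1, 5: 8, 4: 4, 8: 7}
tau_inv = {v: k for k, v in tau.items()}

conj = {i: tau[pi1[tau_inv[i]]] for i in range(1, n + 1)}   # tau pi1 tau^{-1}
assert conj == pi2
```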
11.4 The Simplicity of $A_n$

A simple group is a group $G$ with no nontrivial proper normal subgroups. Up to this point the only examples we have of simple groups are cyclic groups of prime order. In this section we prove that for $n \ge 5$ each alternating group $A_n$ is a simple group.

Theorem 11.4.1. For each $n \ge 3$ each $\pi \in A_n$ is a product of cycles of length 3.

Proof. Let $\pi \in A_n$. Since $\pi$ is a product of an even number of transpositions, to prove the theorem it suffices to show that if $\tau_1, \tau_2$ are transpositions then $\tau_1\tau_2$ is a product of 3-cycles.

Suppose that $a, b, c, d$ are distinct digits in $\{1, \dots, n\}$. There are three cases to consider. First:

Case (1): $(a, b)(a, b) = 1 = (1, 2, 3)^0$, and hence the claim is true here.

Next:

Case (2): $(a, b)(b, c) = (c, a, b)$, and hence the claim is true here also.

Finally:

Case (3): $(a, b)(c, d) = (a, b)(b, c)(b, c)(c, d) = (c, a, b)(c, d, b)$,

since $(b, c)(b, c) = 1$. Therefore the claim is true here also, proving the theorem.
Now our main result:

Theorem 11.4.2. For $n \ge 5$ the alternating group $A_n$ is a simple nonabelian group.

Proof. Suppose that $N$ is a nontrivial normal subgroup of $A_n$ with $n \ge 5$. We show that $N = A_n$ and hence that $A_n$ is simple.

We claim first that $N$ must contain a 3-cycle. Let $1 \ne \pi \in N$; then $\pi$ is not a transposition, since $\pi \in A_n$. Therefore $\pi$ moves at least 3 digits. If $\pi$ moves exactly 3 digits then it is a 3-cycle and we are done. Suppose then that $\pi$ moves at least 4 digits. Let $\pi = \tau_1 \cdots \tau_r$ with the $\tau_i$ disjoint cycles.

Case (1): There is a $\tau_i = (\dots, a, b, c, d)$. Set $\sigma = (a, b, c) \in A_n$. Then

$$\pi\sigma\pi^{-1} = \tau_i\sigma\tau_i^{-1} = (b, c, d),$$

since from Lemma 11.3.1 we have $\pi\sigma\pi^{-1} = (a^{\tau_i}, b^{\tau_i}, c^{\tau_i}) = (b, c, d)$. Further, since $\pi \in N$ and $N$ is normal, we have

$$\pi(\sigma\pi^{-1}\sigma^{-1}) = (b, c, d)(a, c, b) = (a, d, b) \in N.$$

Therefore in this case $N$ contains a 3-cycle.

Case (2): There is a $\tau_i$ which is a 3-cycle. Then

$$\pi = (a, b, c)(d, e, \dots) \cdots.$$

Now set $\sigma = (a, b, d) \in A_n$; then

$$\pi\sigma\pi^{-1} = (a^\pi, b^\pi, d^\pi) = (b, c, e)$$

and

$$\sigma^{-1}\pi\sigma\pi^{-1} = (d, b, a)(b, c, e) = (b, c, e, a, d) \in N.$$

Now use Case (1). Therefore in this case $N$ has a 3-cycle.

In the final case $\pi$ is a disjoint product of transpositions.

Case (3): $\pi = (a, b)(c, d) \cdots$. Since $n \ge 5$ there is an $e \ne a, b, c, d$. Let $\sigma = (a, c, e) \in A_n$. Then

$$\pi\sigma\pi^{-1} = (a^\pi, c^\pi, e^\pi) = (b, d, e_1) \quad \text{with } e_1 = e^\pi \ne b, d.$$

Let $\gamma = (\sigma^{-1}\pi\sigma)\pi^{-1}$. This is in $N$ since $N$ is normal. If $e = e_1$ then $\gamma = (e, c, a)(b, d, e) = (a, e, b, d, c)$, and we can use Case (1) to get that $N$ contains a 3-cycle. If $e \ne e_1$ then $\gamma = (e, c, a)(b, d, e_1) \in N$, and then we can use Case (2) to obtain that $N$ contains a 3-cycle.

These three cases show that $N$ must contain a 3-cycle.

If $N$ is normal in $A_n$ then, from the argument above, $N$ contains a 3-cycle $\tau$. However, from Theorem 11.3.2 any two 3-cycles in $S_n$ are conjugate. Hence $\tau$ is conjugate to any other 3-cycle in $S_n$. Since $N$ is normal and $\tau \in N$, each of these conjugates must also be in $N$. Therefore $N$ contains all 3-cycles in $S_n$. From Theorem 11.4.1 each element of $A_n$ is a product of 3-cycles. It follows then that each element of $A_n$ is in $N$. However, since $N \subseteq A_n$ this is only possible if $N = A_n$, completing the proof.
Theorem 11.4.3. Let p be a prime and U ⊆ S_p a subgroup. Let τ be a transposition
and α a p-cycle with α, τ ∈ U. Then U = S_p.
Proof. Suppose without loss of generality that τ = (1, 2). Since α, α^2, ..., α^{p-1}
are p-cycles with no fixed points there is an i with α^i(1) = 2. Without loss of
generality we may then assume that α = (1, 2, a_3, ..., a_p). Let
π = ( 1  2  a_3  ···  a_p )
    ( 1  2   3   ···   p  ).
Then from Lemma 10.3.4 we have
παπ^{-1} = (1, 2, ..., p).
Further π(1, 2)π^{-1} = (1, 2). Hence U_1 = πUπ^{-1} contains (1, 2) and (1, 2, ..., p).
Now we have
(1, 2, ..., p)(1, 2)(1, 2, ..., p)^{-1} = (2, 3) ∈ U_1.
Analogously
(1, 2, ..., p)(2, 3)(1, 2, ..., p)^{-1} = (3, 4) ∈ U_1,
and so on until
(1, 2, ..., p)(p − 2, p − 1)(1, 2, ..., p)^{-1} = (p − 1, p) ∈ U_1.
Hence the transpositions (1, 2), (2, 3), ..., (p − 1, p) ∈ U_1. Moreover
(1, 2)(2, 3)(1, 2) = (1, 3) ∈ U_1.
In an identical fashion each (1, k) ∈ U_1. Then for any digits s, t we have
(1, s)(1, t)(1, s) = (s, t) ∈ U_1.
Therefore U_1 contains all the transpositions of S_p and hence U_1 = S_p. Since
U = π^{-1}U_1π we must have U = S_p also.
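The theorem can be verified computationally for p = 5: the closure of a transposition and a 5-cycle under composition is all of S_5. This Python sketch (illustrative, not from the text; symbols are 0-based, so τ = (1 2) and α = (1 2 3 4 5) become tuples of images) builds that closure directly:

```python
from itertools import permutations

def compose(p, q):
    # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

# 0-based symbols: tau swaps 0 and 1, alpha cycles 0->1->2->3->4->0
tau = (1, 0, 2, 3, 4)
alpha = (1, 2, 3, 4, 0)

# close the generating set under composition; in a finite group this
# closure automatically contains the identity and all inverses
group = {tau, alpha}
frontier = [tau, alpha]
while frontier:
    g = frontier.pop()
    for h in (tau, alpha):
        prod = compose(g, h)
        if prod not in group:
            group.add(prod)
            frontier.append(prod)

print(len(group))  # 120 = |S_5|
assert group == set(permutations(range(5)))
```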
11.5 Exercises
1. Show that for n ≥ 3 the group A_n is generated by {(1, 2, k) : 3 ≤ k ≤ n}.
2. Let σ ∈ S_n be a permutation whose disjoint cycle lengths are k_1, ..., k_s. Show
that the order of σ is the least common multiple of k_1, ..., k_s. Compute the order
of
τ = ( 1 2 3 4 5 6 7 )
    ( 2 6 5 1 3 4 7 ) ∈ S_7.
3. Let G = S_4.
(i) Determine a noncyclic subgroup H of order 4 of G.
(ii) Show that H is normal.
(iii) Show that φ(g)(h) := ghg^{-1} defines an epimorphism φ : G → Aut(H) for
g ∈ G and h ∈ H. Determine its kernel.
4. Show that all subgroups of order 6 of S_4 are conjugate.
Chapter 12
Solvable Groups
12.1 Solvability and Solvable Groups
The original motivation for Galois theory grew out of a famous problem in the theory
of equations. This problem was to determine the solvability or insolvability of a
polynomial equation of degree 5 or higher in terms of a formula involving the
coefficients of the polynomial and only using algebraic operations and radicals. This
question arose out of the well-known quadratic formula.
The ability to solve quadratic equations, and in essence the quadratic formula, was
known to the Babylonians some 3600 years ago. With the discovery of imaginary
numbers, the quadratic formula then says that any degree two polynomial over C can
be solved by radicals in terms of the coefficients. In the sixteenth century the Italian
mathematician Niccolo Tartaglia discovered a similar formula in terms of radicals to
solve cubic equations. This cubic formula is now known erroneously as Cardano's
formula in honor of Cardano, who first published it in 1545. An earlier special
version of this formula was discovered by Scipione del Ferro. Cardano's student
Ferrari extended the formula to solutions by radicals for fourth degree polynomials.
The combination of these formulas says that polynomial equations of degree four or
less over the complex numbers can be solved by radicals.
From Cardano's work until the very early nineteenth century, attempts were made
to find similar formulas for degree five polynomials. In 1805 Ruffini proved that fifth
degree polynomial equations are insolvable by radicals in general. Therefore there
exists no comparable formula for degree 5. Abel in 1825–1826 and Galois in 1831
extended Ruffini's result and proved the insolvability by radicals for all degrees five
or greater. In doing this, Galois developed a general theory of field extensions and its
relationship to group theory. This has come to be known as Galois theory and is really
the main focus of this book.
The proof of the insolvability of the quintic and higher involved a translation of
the problem into a group theory setting. For a polynomial equation to be solvable by
radicals its corresponding Galois group (a concept we will introduce in Chapter 16)
must be a solvable group. This is a group with a certain defined structure. In this
chapter we introduce and discuss this class of groups.
12.2 Solvable Groups
A normal series for a group G is a finite chain of subgroups beginning with G and
ending with the identity subgroup {1},
G = G_0 ⊳ G_1 ⊳ G_2 ⊳ ··· ⊳ G_n = {1},
in which each G_{i+1} is a proper normal subgroup of G_i. The factor groups G_i/G_{i+1}
are called the factors of the series and n is the length of the series.
Definition 12.2.1. A group G is solvable if it has a normal series with abelian factors,
that is, G_i/G_{i+1} is abelian for all i = 0, 1, ..., n − 1. Such a normal series is called
a solvable series.
If G is a nontrivial abelian group then G = G_0 ⊳ {1} provides a solvable series.
Hence any abelian group is solvable. Further, the symmetric group S_3 on 3 symbols
is solvable but nonabelian. Consider the series
S_3 ⊳ A_3 ⊳ {1}.
Since |S_3| = 6 we have |A_3| = 3 and hence A_3 is cyclic and therefore abelian.
Further |S_3/A_3| = 2 and hence the factor group S_3/A_3 is also cyclic and hence
abelian. Therefore the series above gives a solvable series for S_3.
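The solvable series for S_3 can be confirmed directly with a short computation. The Python sketch below is an illustration added here, not part of the original text; permutations are written as tuples of images on the symbols {0, 1, 2}:

```python
from itertools import permutations

def compose(p, q):
    # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

def sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

S3 = list(permutations(range(3)))
A3 = [p for p in S3 if sign(p) == 1]

# A3 is normal in S3: conjugation keeps even permutations even
assert all(compose(compose(g, a), inverse(g)) in A3 for g in S3 for a in A3)

# A3 is abelian (it is cyclic of order 3) ...
assert all(compose(a, b) == compose(b, a) for a in A3 for b in A3)

# ... and the factor group S3/A3 has order 2, hence is cyclic and abelian
print(len(S3) // len(A3))  # 2
```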
Lemma 12.2.2. If G is a finite solvable group then G has a normal series with cyclic
factors.
Proof. If G is a finite solvable group then by definition it has a normal series with
abelian factors. Hence to prove the lemma it suffices to show that a finite abelian
group has a normal series with cyclic factors.
Let A be a nontrivial finite abelian group. We do an induction on the order of A.
If |A| = 2 then A itself is cyclic and the result follows. Suppose that |A| > 2. Choose
an a ∈ A with a ≠ 1. Let N = ⟨a⟩ so that N is cyclic. Then A/N is abelian of order
less than |A|, so by induction A/N has a normal series with cyclic factors. By the
correspondence theorem this series pulls back to a normal series from A down to N
with cyclic factors, and appending N ⊳ {1}, whose factor N is cyclic, gives the result.
Solvability is preserved under subgroups and factor groups.
Theorem 12.2.3. Let G be a solvable group. Then:
(1) Any subgroup H of G is also solvable.
(2) Any factor group G/N of G is also solvable.
Proof. (1) Let G be a solvable group and suppose that
G = G_0 ⊳ G_1 ⊳ ··· ⊳ G_r = {1}
is a solvable series for G. Hence G_{i+1} is a normal subgroup of G_i for each i and the
factor group G_i/G_{i+1} is abelian.
Now let H be a subgroup of G and consider the chain of subgroups
H = H ∩ G_0 ⊵ H ∩ G_1 ⊵ ··· ⊵ H ∩ G_r = {1}.
Since G_{i+1} is normal in G_i we know that H ∩ G_{i+1} is normal in H ∩ G_i and
hence, after deleting repetitions, this gives a finite normal series for H. Further, from
the second isomorphism theorem we have for each i,
(H ∩ G_i)/(H ∩ G_{i+1}) = (H ∩ G_i)/((H ∩ G_i) ∩ G_{i+1})
≅ (H ∩ G_i)G_{i+1}/G_{i+1} ⊆ G_i/G_{i+1}.
However G_i/G_{i+1} is abelian, so each factor in the normal series for H is abelian.
Therefore the above series is a solvable series for H and hence H is also solvable.
(2) Let N be a normal subgroup of G. Then from (1) N is also solvable. As above
let
G = G_0 ⊳ G_1 ⊳ ··· ⊳ G_r = {1}
be a solvable series for G. Consider the chain of subgroups
G/N = G_0N/N ⊵ G_1N/N ⊵ ··· ⊵ G_rN/N = N/N = {1}.
Let m ∈ G_i, n ∈ N. Then since N is normal in G,
(mn)^{-1}G_{i+1}N(mn) = n^{-1}m^{-1}G_{i+1}Nmn = n^{-1}m^{-1}G_{i+1}mnN
= n^{-1}G_{i+1}nN = n^{-1}G_{i+1}N = n^{-1}NG_{i+1} = NG_{i+1} = G_{i+1}N.
It follows that G_{i+1}N is normal in G_iN for each i and therefore the series for G/N
is a normal series.
Further, again from the isomorphism theorems,
(G_iN/N)/(G_{i+1}N/N) ≅ G_iN/G_{i+1}N ≅ G_i/(G_i ∩ G_{i+1}N)
≅ (G_i/G_{i+1})/((G_i ∩ G_{i+1}N)/G_{i+1}).
However the last group (G_i/G_{i+1})/((G_i ∩ G_{i+1}N)/G_{i+1}) is a factor group of the
group G_i/G_{i+1}, which is abelian. Hence this last group is also abelian and therefore
each factor in the normal series for G/N is abelian. Hence this series is a solvable
series and G/N is solvable.
The following is a type of converse of the above theorem.
Theorem 12.2.4. Let G be a group and H a normal subgroup of G. If both H and
G/H are solvable then G is solvable.
Proof. Suppose that
H = H_0 ⊳ H_1 ⊳ ··· ⊳ H_r = {1},
G/H = G_0/H ⊳ G_1/H ⊳ ··· ⊳ G_s/H = H/H = {1}
are solvable series for H and G/H respectively. Then
G = G_0 ⊳ G_1 ⊳ ··· ⊳ G_s = H ⊳ H_1 ⊳ ··· ⊳ {1}
gives a normal series for G. Further, from the isomorphism theorems, for i < s we
have
G_i/G_{i+1} ≅ (G_i/H)/(G_{i+1}/H),
and hence each factor is abelian. Therefore this is a solvable series for G and hence
G is solvable.
This theorem allows us to prove that solvability is preserved under direct products.
Corollary 12.2.5. Let G and H be solvable groups. Then their direct product G × H
is also solvable.
Proof. Suppose that G and H are solvable groups and K = G × H. Recall from
Chapter 10 that G can be considered as a normal subgroup of K with K/G ≅ H.
Therefore G is a solvable normal subgroup of K and K/G is a solvable quotient. It
follows then from Theorem 12.2.4 that K is solvable.
We saw that the symmetric group S_3 is solvable. However the following theorem
shows that the symmetric group S_n is not solvable for n ≥ 5. This result will be
crucial to the proof of the insolvability of the quintic and higher.
Theorem 12.2.6. For n ≥ 5 the symmetric group S_n is not solvable.
Proof. For n ≥ 5 we saw that the alternating group A_n is simple. Further, A_n is
nonabelian. Hence the only normal series for A_n is A_n ⊳ {1}, whose factor A_n is
nonabelian, and so A_n has no solvable series. Therefore A_n is not solvable. If S_n
were solvable for n ≥ 5 then from Theorem 12.2.3 A_n would also be solvable.
Therefore S_n must also be nonsolvable for n ≥ 5.
In general for a simple, solvable group we have the following.
Lemma 12.2.7. If a group G is both simple and solvable then G is cyclic of prime
order.
Proof. Suppose that G is a nontrivial simple, solvable group. Since G is simple the
only normal series for G is G = G_0 ⊳ {1}. Since G is solvable the factors are abelian
and hence G is abelian. Again since G is simple G must be cyclic. If G were infinite
then G ≅ (Z, +). However then 2Z is a nontrivial proper normal subgroup, a contra-
diction. Therefore G must be finite cyclic. If the order were not prime then for each
proper divisor of the order there would be a nontrivial proper normal subgroup.
Therefore G must be of prime order.
In general a finite p-group is solvable.
Theorem 12.2.8. A finite p-group G is solvable.
Proof. Suppose that |G| = p^n. We do this by induction on n. If n = 1 then |G| = p
and G is cyclic, hence abelian and therefore solvable. Suppose that n > 1. Then as
used previously G has a nontrivial center Z(G). If Z(G) = G then G is abelian and
hence solvable. If Z(G) ≠ G then Z(G) is a finite p-group of order less than p^n.
From our inductive hypothesis Z(G) must be solvable. Further, G/Z(G) is then also
a finite p-group of order less than p^n, so it is also solvable. Hence Z(G) and G/Z(G)
are both solvable, so from Theorem 12.2.4 G is solvable.
12.3 The Derived Series
Let G be a group and let a, b ∈ G. The product aba^{-1}b^{-1} is called the commutator
of a and b. We write [a, b] = aba^{-1}b^{-1}.
Clearly [a, b] = 1 if and only if a and b commute.
Definition 12.3.1. Let G′ be the subgroup of G which is generated by the set of all
commutators,
G′ = gp({[x, y] : x, y ∈ G}).
G′ is called the commutator (or derived) subgroup of G. We sometimes write G′ =
[G, G].
Theorem 12.3.2. For any group G the commutator subgroup G′ is a normal sub-
group of G and G/G′ is abelian. Further, if H is a normal subgroup of G then G/H
is abelian if and only if G′ ⊆ H.
Proof. The commutator subgroup G′ consists of all finite products of commutators
and inverses of commutators. However
[a, b]^{-1} = (aba^{-1}b^{-1})^{-1} = bab^{-1}a^{-1} = [b, a]
and so the inverse of a commutator is once again a commutator. It then follows that
G′ is precisely the set of all finite products of commutators, i.e., G′ is the set of all
elements of the form
h_1 h_2 ··· h_n
where each h_i is a commutator of elements of G.
If h = [a, b] for a, b ∈ G, and x ∈ G, then xhx^{-1} = [xax^{-1}, xbx^{-1}] is again
a commutator of elements of G. Now from our previous comments, an arbitrary
element of G′ has the form h_1 h_2 ··· h_n, where each h_i is a commutator. Thus
x(h_1 h_2 ··· h_n)x^{-1} = (xh_1x^{-1})(xh_2x^{-1}) ··· (xh_nx^{-1}) and, since by the above
each xh_ix^{-1} is a commutator, x(h_1 h_2 ··· h_n)x^{-1} ∈ G′. It follows that G′ is a
normal subgroup of G.
Consider the factor group G/G′. Let aG′ and bG′ be any two elements of G/G′.
Then
[aG′, bG′] = aG′bG′(aG′)^{-1}(bG′)^{-1} = aG′bG′a^{-1}G′b^{-1}G′ = aba^{-1}b^{-1}G′ = G′
since [a, b] ∈ G′. In other words, any two elements of G/G′ commute and therefore
G/G′ is abelian.
Now let N be a normal subgroup of G with G/N abelian. Let a, b ∈ G; then aN
and bN commute since G/N is abelian. Therefore
[aN, bN] = aNbNa^{-1}Nb^{-1}N = aba^{-1}b^{-1}N = N.
It follows that [a, b] ∈ N. Therefore all commutators of elements in G lie in N and
therefore the commutator subgroup G′ ⊆ N.
From the second part of Theorem 12.3.2 we see that G′ is the minimal normal
subgroup N of G such that G/N is abelian. We call G/G′ = G^{ab} the abelianization
of G.
We consider next the following inductively defined sequence of subgroups of an
arbitrary group G, called the derived series.
Definition 12.3.3. For an arbitrary group G define G^{(0)} = G and G^{(1)} = G′ and
then inductively G^{(n+1)} = (G^{(n)})′. That is, G^{(n+1)} is the commutator subgroup or
derived group of G^{(n)}. The chain of subgroups
G = G^{(0)} ⊇ G^{(1)} ⊇ ··· ⊇ G^{(n)} ⊇ ···
is called the derived series for G.
Notice that since G^{(i+1)} is the commutator subgroup of G^{(i)}, the factor
G^{(i)}/G^{(i+1)} is abelian. If the derived series reaches {1} after finitely many steps
then G has a normal series with abelian factors and hence is solvable. The converse is
also true and characterizes solvable groups in terms of the derived series.
Theorem 12.3.4. A group G is solvable if and only if its derived series terminates,
that is, there exists an n such that G^{(n)} = {1}.
Proof. If G^{(n)} = {1} for some n then as explained above the derived series provides
a solvable series for G and hence G is solvable.
Conversely suppose that G is solvable and let
G = G_0 ⊳ G_1 ⊳ ··· ⊳ G_r = {1}
be a solvable series for G. We claim first that G_i ⊇ G^{(i)} for all i. We do this by
induction on i. If i = 0 then G = G_0 = G^{(0)}. Suppose that G_i ⊇ G^{(i)}. Then
G′_i ⊇ (G^{(i)})′ = G^{(i+1)}. Since G_i/G_{i+1} is abelian it follows from Theorem 12.3.2
that G_{i+1} ⊇ G′_i. Therefore G_{i+1} ⊇ G^{(i+1)}, establishing the claim.
Now since G_r = {1} the claim gives G^{(r)} ⊆ G_r = {1} and therefore G^{(r)} = {1},
proving the theorem.
The length of the derived series is called the solvability length of a solvable group
G. The class of solvable groups of class c consists of those solvable groups of
solvability length c or less.
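Theorem 12.3.4 suggests a direct computation: starting from a finite group, repeatedly form commutator subgroups until the series stabilizes. The Python sketch below (an illustration added here, not part of the original text) does this for S_4 and finds the derived series S_4 ⊇ A_4 ⊇ V ⊇ {1}, so S_4 is solvable of solvability length 3:

```python
from itertools import permutations

def compose(p, q):
    # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

def commutator(a, b):
    # [a, b] = a b a^{-1} b^{-1}
    return compose(compose(a, b), compose(inverse(a), inverse(b)))

def derived_subgroup(G):
    gens = {commutator(a, b) for a in G for b in G}
    H, frontier = set(gens), list(gens)
    while frontier:                       # close the generators under products
        g = frontier.pop()
        for h in gens:
            p = compose(g, h)
            if p not in H:
                H.add(p)
                frontier.append(p)
    return H

G = set(permutations(range(4)))           # S4, order 24
series = [G]
while len(series[-1]) > 1:
    series.append(derived_subgroup(series[-1]))

print([len(H) for H in series])           # [24, 12, 4, 1]
```

The successive orders 24, 12, 4, 1 identify the terms as S_4, A_4, the Klein 4-group V, and {1}.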
12.4 Composition Series and the Jordan–Hölder Theorem
The concept of a normal series is extremely important in the structure theory of
groups. This is especially true for finite groups. If
G = G_0 ⊳ G_1 ⊳ ··· ⊳ {1},
G = H_0 ⊳ H_1 ⊳ ··· ⊳ {1}
are two normal series for the group G then the second is a refinement of the first if all
the terms of the first series occur in the second. Further, two normal series are called
equivalent (or isomorphic) if there exists a one-to-one correspondence between the
factors of the two series (hence the lengths must be the same) such that the corre-
sponding factors are isomorphic.
Theorem 12.4.1 (Schreier's theorem). Any two normal series for a group G have
equivalent refinements.
Proof. Consider two normal series for G,
G = G_0 ⊳ G_1 ⊳ ··· ⊳ G_s ⊳ {1} = G_{s+1},
G = H_0 ⊳ H_1 ⊳ ··· ⊳ H_t ⊳ {1} = H_{t+1}.
Now define
G_{ij} = (G_i ∩ H_j)G_{i+1},   j = 0, 1, 2, ..., t + 1,
H_{ji} = (G_i ∩ H_j)H_{j+1},   i = 0, 1, 2, ..., s + 1.
Then we have
G = G_{00} ⊵ G_{01} ⊵ ··· ⊵ G_{0,t+1} = G_1 = G_{10} ⊵ ··· ⊵ G_{1,t+1}
= G_2 ⊵ ··· ⊵ G_{s,t+1} = {1}
and
G = H_{00} ⊵ H_{01} ⊵ ··· ⊵ H_{0,s+1} = H_1 = H_{10} ⊵ ··· ⊵ H_{1,s+1}
= H_2 ⊵ ··· ⊵ H_{t,s+1} = {1}.
Now applying the third isomorphism theorem to the groups G_i, H_j, G_{i+1}, H_{j+1},
we have that G_{i,j+1} = (G_i ∩ H_{j+1})G_{i+1} is a normal subgroup of G_{ij} =
(G_i ∩ H_j)G_{i+1}, and H_{j,i+1} = (G_{i+1} ∩ H_j)H_{j+1} is a normal subgroup of
H_{ji} = (G_i ∩ H_j)H_{j+1}. Furthermore,
G_{ij}/G_{i,j+1} ≅ H_{ji}/H_{j,i+1}.
Thus the above two are normal series which are refinements of the two given series
and they are equivalent.
A proper normal subgroup N of a group G is called maximal in G if there does not
exist any normal subgroup M of G with N ⊂ M ⊂ G, all inclusions proper. This is
the group-theoretic analog of a maximal ideal. An alternative characterization is the
following: N is a maximal normal subgroup of G if and only if G/N is simple.
A normal series in which each factor is simple can have no proper refinements.
Definition 12.4.2. A composition series for a group G is a normal series in which all
the inclusions are proper and each G_{i+1} is maximal in G_i; equivalently, a normal
series in which each factor is simple.
It is possible that an arbitrary group does not have a composition series, or, even if
it does have one, that a subgroup of it may not have one. Of course, a finite group
does have a composition series.
In the case in which a group G does have a composition series, the following
important theorem, called the Jordan–Hölder theorem, provides a type of unique
factorization.
Theorem 12.4.3 (Jordan–Hölder theorem). If a group G has a composition series,
then any two composition series are equivalent, that is, the composition factors are
unique.
Proof. Suppose we are given two composition series. Applying Theorem 12.4.1 we
get that the two composition series have equivalent refinements. But the only refine-
ment of a composition series is one obtained by introducing repetitions. If in the
one-to-one correspondence between the factors of these refinements the paired factors
equal to {1} are disregarded, that is, if we drop the repetitions, we see that the original
composition series are equivalent.
We remarked in Chapter 10 that the simple groups are important because they play
a role in finite group theory somewhat analogous to that of the primes in number
theory. In particular, an arbitrary finite group G can be broken down into simple
components. These uniquely determined simple components are, according to the
Jordan–Hölder theorem, the factors of a composition series for G.
12.5 Exercises
1. Let K be a field and
G = { ( a x y )
      ( 0 b z ) : a, b, c, x, y, z ∈ K, abc ≠ 0 }.
      ( 0 0 c )
Show that G is solvable.
2. A group G is called polycyclic if it has a normal series with cyclic factors. Show:
(i) Each subgroup and each factor group of a polycyclic group is polycyclic.
(ii) In a polycyclic group any two normal series with cyclic factors have the same
number of infinite cyclic factors.
3. Let G be a group. Show:
(i) If G is finite and solvable, then G is polycyclic.
(ii) If G is polycyclic, then G is finitely generated.
(iii) The group (Q, +) is solvable, but not polycyclic.
4. Let N_1 and N_2 be normal subgroups of G. Show:
(i) If N_1 and N_2 are solvable, then N_1N_2 is also a solvable normal subgroup
of G.
(ii) Is (i) still true if we replace "solvable" by "abelian"?
5. Let N_1, ..., N_t be normal subgroups of a group G. Show: if all factor groups
G/N_i are solvable, then G/(N_1 ∩ ··· ∩ N_t) is also solvable.
Chapter 13
Group Actions and the Sylow Theorems
13.1 Group Actions
A group action of a group G on a set A is a homomorphism from G into S_A, the
symmetric group on A. We say that G acts on A. Hence G acts on A if to each g ∈ G
there corresponds a permutation
π_g : A → A
such that
(1) π_{g_1}(π_{g_2}(a)) = π_{g_1 g_2}(a) for all g_1, g_2 ∈ G and for all a ∈ A,
(2) π_1(a) = a for all a ∈ A.
For the remainder of this chapter if g ∈ G and a ∈ A we will write ga for π_g(a).
Group actions are an extremely important idea and we use this idea in the present
chapter to prove several fundamental results in group theory.
If G acts on the set A then we say that two elements a_1, a_2 ∈ A are congruent
under G if there exists a g ∈ G with ga_1 = a_2. The set
Ga = {a_1 ∈ A : a_1 = ga for some g ∈ G}
is called the orbit of a. It consists of the elements congruent to a under G.
Lemma 13.1.1. If G acts on A then congruence under G is an equivalence relation
on A.
Proof. Any element a ∈ A is congruent to itself via the identity map and hence the
relation is reflexive. If a_1 ~ a_2, so that ga_1 = a_2 for some g ∈ G, then g^{-1}a_2 = a_1
and so a_2 ~ a_1 and the relation is symmetric. Finally, if g_1a_1 = a_2 and g_2a_2 = a_3
then g_2g_1a_1 = a_3 and the relation is transitive.
Recall that the equivalence classes under an equivalence relation partition a set. For
a given a ∈ A its equivalence class under this relation is precisely its orbit as defined
above.
Corollary 13.1.2. If G acts on the set A then the orbits under G partition the set A.
We say that G acts transitively on A if any two elements of A are congruent un-
der G. That is, the action is transitive if for any a_1, a_2 ∈ A there is some g ∈ G such
that ga_1 = a_2.
If a ∈ A the stabilizer of a consists of those g ∈ G that fix a. Hence
Stab_G(a) = {g ∈ G : ga = a}.
The following lemma is easily proved and left to the exercises.
Lemma 13.1.3. If G acts on A then for any a ∈ A the stabilizer Stab_G(a) is a
subgroup of G.
We now prove the crucial theorem concerning group actions.
Theorem 13.1.4. Suppose that G acts on A and a ∈ A. Let Ga be the orbit of a
under G and Stab_G(a) its stabilizer. Then
[G : Stab_G(a)] = |Ga|.
That is, the size of the orbit of a is the index of its stabilizer in G.
Proof. Suppose that g_1, g_2 ∈ G with g_1Stab_G(a) = g_2Stab_G(a), that is, they define
the same left coset of the stabilizer. Then g_2^{-1}g_1 ∈ Stab_G(a). This implies that
g_2^{-1}g_1a = a so that g_2a = g_1a. Hence any two elements in the same left coset
of the stabilizer produce the same image of a in Ga. Conversely, if g_1a = g_2a then
g_1, g_2 define the same left coset of Stab_G(a). This shows that there is a one-to-one
correspondence between left cosets of Stab_G(a) and elements of Ga. It follows that
the size of Ga is precisely the index of the stabilizer.
We will use this theorem repeatedly with different group actions to obtain important
group theoretic results.
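Theorem 13.1.4 is easy to check on the natural action of S_4 on the set {0, 1, 2, 3}. In the Python sketch below (an illustration added here, not part of the original text) the orbit of a point has size 4 while its stabilizer, a copy of S_3, has index 24/6 = 4:

```python
from itertools import permutations

G = list(permutations(range(4)))    # S4 acting naturally on {0, 1, 2, 3}
a = 0
orbit = {g[a] for g in G}           # the images ga of a under the action
stab = [g for g in G if g[a] == a]  # the permutations fixing a

# |orbit of a| equals the index [G : Stab_G(a)]
print(len(orbit), len(G) // len(stab))  # 4 4
```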
13.2 Conjugacy Classes and the Class Equation
In Section 10.5 we introduced the center of a group,
Z(G) = {g ∈ G : gg_1 = g_1g for all g_1 ∈ G},
and showed that it is a normal subgroup of G. We then used this normal subgroup in
conjunction with what we called the class equation to show that any finite p-group
has a nontrivial center. In this section we use group actions to derive the class equation
and prove the result for finite p-groups.
Recall that if G is a group then two elements g_1, g_2 ∈ G are conjugate if there
exists a g ∈ G with g^{-1}g_1g = g_2. We saw that conjugacy is an equivalence relation
on G. For g ∈ G its equivalence class is called its conjugacy class, which we will
denote by Cl(g). Thus
Cl(g) = {g_1 ∈ G : g_1 is conjugate to g}.
If g ∈ G then its centralizer C_G(g) is the set of elements in G that commute with g:
C_G(g) = {g_1 ∈ G : gg_1 = g_1g}.
Theorem 13.2.1. Let G be a finite group and g ∈ G. Then the centralizer of g is a
subgroup of G and
[G : C_G(g)] = |Cl(g)|.
That is, the index of the centralizer of g is the size of its conjugacy class.
In particular, for a finite group the size of each conjugacy class divides the order of
the group.
Proof. Let the group G act on itself by conjugation. That is, g(g_1) = g^{-1}g_1g. It is
easy to show that this is an action on the set G (see exercises). For g ∈ G its orbit
under this action is precisely its conjugacy class Cl(g) and the stabilizer is its
centralizer C_G(g). The statements in the theorem then follow directly from Theo-
rem 13.1.4.
For any group G, since conjugacy is an equivalence relation, the conjugacy classes
partition G. Hence
G = ⋃_{g∈G} Cl(g)
where this union is a disjoint union. It follows that
|G| = Σ |Cl(g)|
where this sum is taken over the distinct conjugacy classes.
If Cl(g) = {g}, that is, the conjugacy class of g is g alone, then C_G(g) = G so that
g commutes with all of G. Therefore in this case g ∈ Z(G). This is true for every
element of the center and therefore
G = Z(G) ∪ ⋃_{g∉Z(G)} Cl(g)
where again the second union is a disjoint union. The size of G is then the sum of
these disjoint pieces, so
|G| = |Z(G)| + Σ_{g∉Z(G)} |Cl(g)|.
However from Theorem 13.2.1 |Cl(g)| = [G : C_G(g)], so the equation above becomes
|G| = |Z(G)| + Σ_{g∉Z(G)} [G : C_G(g)].
This is known as the class equation.
Theorem 13.2.2 (class equation). Let G be a finite group. Then
|G| = |Z(G)| + Σ_{g∉Z(G)} [G : C_G(g)]
where the sum is taken over representatives of the distinct conjugacy classes not
contained in the center.
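The class equation can be verified on a small nonabelian group with nontrivial center. The Python sketch below (an illustration added here, not part of the original text) builds the dihedral group of order 8 as permutations of the corners of a square and checks |G| = |Z(G)| + Σ|Cl(g)|, with 8 = 2 + (2 + 2 + 2):

```python
def compose(p, q):
    # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

# dihedral group of order 8: symmetries of a square with corners 0,1,2,3
r = (1, 2, 3, 0)   # rotation by 90 degrees
s = (0, 3, 2, 1)   # reflection through the diagonal 0-2
G = {r, s}
frontier = [r, s]
while frontier:    # close the generators under composition
    g = frontier.pop()
    for h in (r, s):
        p = compose(g, h)
        if p not in G:
            G.add(p)
            frontier.append(p)

center = [g for g in G if all(compose(g, x) == compose(x, g) for x in G)]

# conjugacy classes of the non-central elements
class_sizes, seen = [], set()
for g in G:
    if g in seen or g in center:
        continue
    cls = {compose(compose(x, g), inverse(x)) for x in G}
    seen |= cls
    class_sizes.append(len(cls))

# |G| = |Z(G)| + sum of the sizes of the nontrivial classes
print(len(G), len(center), sorted(class_sizes))  # 8 2 [2, 2, 2]
```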
As a first application we prove the result that finite p-groups have nontrivial centers.
Theorem 13.2.3. Let G be a finite p-group. Then G has a nontrivial center.
Proof. Let G be a finite p-group, so that |G| = p^n for some n ≥ 1, and consider the
class equation
|G| = |Z(G)| + Σ_{g∉Z(G)} [G : C_G(g)].
For g ∉ Z(G) the centralizer C_G(g) is a proper subgroup of G, so the index
[G : C_G(g)] is a divisor of |G| greater than 1; hence p | [G : C_G(g)] for each such g.
Further p | |G|. Therefore p must divide |Z(G)|, and hence |Z(G)| = p^m for some
m ≥ 1. Therefore Z(G) is nontrivial.
The idea of conjugacy and the centralizer of an element can be extended to sub-
groups. If H_1, H_2 are subgroups of a group G then H_1, H_2 are conjugate if there
exists a g ∈ G such that g^{-1}H_1g = H_2. As for elements, conjugacy is an equiva-
lence relation on the set of subgroups of G.
If H ⊆ G is a subgroup then its conjugacy class consists of all the subgroups of G
conjugate to it. The normalizer of H is
N_G(H) = {g ∈ G : g^{-1}Hg = H}.
As for elements, let G act on the set of subgroups of G by conjugation. That is, for
g ∈ G the map is given by H → g^{-1}Hg. For H ⊆ G the stabilizer under this action
is precisely the normalizer. Hence exactly as for elements we obtain the following
theorem.
Theorem 13.2.4. Let G be a group and H ⊆ G a subgroup. Then the normalizer
N_G(H) of H is a subgroup of G, H is normal in N_G(H), and
[G : N_G(H)] = number of conjugates of H in G.
13.3 The Sylow Theorems
If G is a finite group and H ⊆ G is a subgroup then Lagrange's theorem guarantees
that the order of H divides the order of G. However the converse of Lagrange's
theorem is false. That is, if G is a finite group of order n and if d | n, then G need
not contain a subgroup of order d. If d is a prime p or a power p^e of a prime p,
however, then we shall see that G must contain subgroups of that order. In particular,
we shall see that if p^a is the highest power of p that divides n, then all subgroups
of that order are actually conjugate, and we shall finally get a formula concerning
the number of such subgroups. These theorems constitute the Sylow theorems, which
we will examine in this section. First we give an example where the converse of
Lagrange's theorem is false.
Lemma 13.3.1. The alternating group A_4 on 4 symbols has order 12 but has no
subgroup of order 6.
Proof. Suppose that there exists a subgroup U ⊆ A_4 with |U| = 6. Then [A_4 : U] = 2
since |A_4| = 12 and hence U is normal in A_4.
Now id, (1, 2)(3, 4), (1, 3)(2, 4), (1, 4)(2, 3) are in A_4. These each have order 2 and
commute, so they form a subgroup V ⊆ A_4 of order 4. This subgroup V ≅ Z_2 × Z_2.
Then
12 = |A_4| ≥ |VU| = |V||U| / |V ∩ U| = (4 · 6) / |V ∩ U|.
It follows that V ∩ U ≠ {1}, and since U and V are both normal in A_4 (conjugation
permutes the three products of disjoint transpositions, so V is normal) we have that
V ∩ U is also normal in A_4.
Now (1, 2)(3, 4) ∈ V and by renaming the entries in V if necessary we may assume
that it is also in U, so that (1, 2)(3, 4) ∈ V ∩ U. Since (1, 2, 3) ∈ A_4 we have
(3, 2, 1)(1, 2)(3, 4)(1, 2, 3) = (1, 3)(2, 4) ∈ V ∩ U
and then
(3, 2, 1)(1, 3)(2, 4)(1, 2, 3) = (1, 4)(2, 3) ∈ V ∩ U.
But then V ⊆ V ∩ U and so V ⊆ U. But this is impossible since |V| = 4, which
does not divide |U| = 6.
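The lemma can also be confirmed by exhaustive search: since a nonempty finite subset of a group that is closed under products is a subgroup, it suffices to test every 6-element subset of A_4 for closure. The following Python sketch is an illustration added here, not part of the original text:

```python
from itertools import combinations, permutations

def compose(p, q):
    # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def sign(p):
    # parity via the inversion count
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

A4 = [p for p in permutations(range(4)) if sign(p) == 1]
ident = tuple(range(4))

# test every 6-element subset containing the identity for closure;
# a subgroup would have to be such a subset
found = False
for subset in combinations(A4, 6):
    if ident not in subset:
        continue
    S = set(subset)
    if all(compose(a, b) in S for a in S for b in S):
        found = True
print(found)  # False: A4 has no subgroup of order 6
```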
Definition 13.3.2. Let G be a finite group with |G| = n and let p be a prime such
that p^a | n but no higher power of p divides n. A subgroup of G of order p^a is called
a p-Sylow subgroup.
It is not at all clear that a p-Sylow subgroup must exist. We will prove that for each
prime p | n a p-Sylow subgroup exists.
We first consider and prove a very special case.
Theorem 13.3.3. Let G be a finite abelian group and let p be a prime such that
p | |G|. Then G contains at least one element of order p.
Proof. Suppose that G is a finite abelian group of order pn. We use induction on n.
If n = 1 then G has order p, hence is cyclic and so has an element of order p.
Suppose that the theorem is true for all abelian groups of order pm with m < n and
suppose that G has order pn. Let g ∈ G, g ≠ 1. If the order of g is pt for some
integer t then g^t ≠ 1 and g^t has order p, proving the theorem in this case. Hence
we may suppose that each g ∈ G has order prime to p, and we show that there must
be an element whose order is a multiple of p; the above argument then gives an
element of exact order p.
Hence we have 1 ≠ g ∈ G with order m where (m, p) = 1. Since m | |G| = pn we
must have m | n. Since G is abelian, ⟨g⟩ is normal and the factor group G/⟨g⟩ is
abelian of order p(n/m) < pn. By the inductive hypothesis G/⟨g⟩ has an element
h⟨g⟩ of order p, h ∈ G, and hence h^p ∈ ⟨g⟩, say h^p = g^k for some k. Now g^k has
order m_1 | m, so the order of h is divisible by p and divides pm_1. As above, h^{m_1}
then has order p, proving the theorem.
Therefore if G is an abelian group and if p | |G|, then G contains a subgroup of
order p, namely the cyclic subgroup of order p generated by an element a ∈ G of
order p whose existence is guaranteed by the above theorem. We now present the
first Sylow theorem.
Theorem 13.3.4 (first Sylow theorem). Let G be a finite group and let p | |G|. Then
G contains a p-Sylow subgroup, that is, a p-Sylow subgroup exists.
Proof. We do induction on |G|. If |G| = p then G is cyclic and G is its own p-Sylow
subgroup. Assume then that every group of order less than |G| has a p-Sylow
subgroup for every prime dividing its order.
Suppose that |G| = p^t m with (m, p) = 1. We must show that G contains a
subgroup of order p^t. If G has a proper subgroup H whose index is prime to p then
|H| = p^t m_1 with m_1 < m. Therefore by the inductive hypothesis H has a p-Sylow
subgroup of order p^t. This will also be a subgroup of G and hence a p-Sylow
subgroup of G. Therefore we may assume that every proper subgroup of G has index
divisible by p. Now consider the class equation for G,
|G| = |Z(G)| + Σ_{g∉Z(G)} [G : C_G(g)].
For g ∉ Z(G) the centralizer C_G(g) is a proper subgroup, so by assumption each of
the indices is divisible by p; also p | |G|. Therefore p | |Z(G)|. It follows that Z(G)
is a finite abelian group whose order is divisible by p. From Theorem 13.3.3 there
exists an element g ∈ Z(G) ⊆ G of order p. Since g ∈ Z(G) we must have ⟨g⟩
normal in G. The factor group G/⟨g⟩ then has order p^{t−1}m and by the inductive
hypothesis must have a p-Sylow subgroup K of order p^{t−1} and hence of index m.
By the correspondence theorem there is a subgroup P of G with ⟨g⟩ ⊆ P such that
P/⟨g⟩ ≅ K. Therefore |P| = p^t and P is a p-Sylow subgroup of G.
On the basis of this theorem, we can now strengthen the result obtained in Theo-
rem 13.3.3.
Theorem 13.3.5 (Cauchy). If G is a finite group and if p is a prime such that p | |G|,
then G contains at least one element of order p.
Proof. Let P be a p-Sylow subgroup of G, and let |P| = p^t. If g ∈ P, g ≠ 1, then
the order of g is p^{t_1} for some t_1 ≥ 1. Then g^{p^{t_1 − 1}} has order p.
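Cauchy's theorem is easy to check on a small example. The Python sketch below (an illustration added here, not part of the original text) confirms that S_4, of order 24 = 2^3 · 3, contains elements of order 2 and of order 3 but, as Lagrange's theorem requires, none of order 5:

```python
from itertools import permutations

def compose(p, q):
    # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def order(g):
    # smallest n >= 1 with g^n = identity
    e = tuple(range(len(g)))
    x, n = g, 1
    while x != e:
        x = compose(x, g)
        n += 1
    return n

S4 = list(permutations(range(4)))  # |S4| = 24 = 2^3 * 3

# elements of prime order p exist exactly for the primes p dividing |G|
print(any(order(g) == 2 for g in S4))  # True
print(any(order(g) == 3 for g in S4))  # True
print(any(order(g) == 5 for g in S4))  # False (5 does not divide 24)
```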
We have seen that p-Sylow subgroups exist. We now wish to show that any two
p-Sylow subgroups are conjugate. This is the content of the second Sylow theorem.
Theorem 13.3.6 (second Sylow theorem). Let G be a finite group and p a prime
such that p | |G|. Then any p-subgroup H of G is contained in a p-Sylow subgroup.
Further, all p-Sylow subgroups of G are conjugate. That is, if P_1 and P_2 are any two
p-Sylow subgroups of G then there exists an a ∈ G such that P_1 = aP_2a^{-1}.
Proof. Let C be the set of p-Sylow subgroups of G and let G act on C by conjugation.
This action will of course partition C into disjoint orbits. Let P be a fixed p-Sylow
subgroup and C_1 be its orbit under the conjugation action. The size of the orbit is the
index of its stabilizer, that is, |C_1| = [G : Stab_G(P)]. Now P ⊆ Stab_G(P) and P is
a maximal p-subgroup of G. It follows that the index of Stab_G(P) must be prime to
p, and so the number of p-Sylow subgroups conjugate to P is prime to p.
Now let H be a p-subgroup of G and let H act on C_1 by conjugation. C_1 will
itself decompose into disjoint orbits under this action. Further, the size of each orbit
is an index of a subgroup of H and hence must be a power of p. On the other hand
the size of the whole set C_1 is prime to p. Therefore there must be one orbit that has
size exactly 1. This orbit contains a p-Sylow subgroup P′, and P′ is fixed by H under
conjugation, that is, H normalizes P′. It follows that HP′ is a subgroup of G and P′
is normal in HP′. From the second isomorphism theorem we then obtain
HP′/P′ ≅ H/(H ∩ P′).
Since H is a p-group the size of H/(H ∩ P′) is a power of p and therefore so
is the size of HP′/P′. But P′ is also a p-group, so it follows that HP′ also has
order a power of p. Now P′ ⊆ HP′ but P′ is a maximal p-subgroup of G. Hence
HP′ = P′. This is possible only if H ⊆ P′, proving the first assertion in the theorem.
Therefore any p-subgroup of G is contained in a p-Sylow subgroup.
Now let H be a p-Sylow subgroup P_1 and let P_1 act on C_1. Exactly as in the
argument above P_1 ⊆ P′ where P′ is a conjugate of P. Since P_1 and P′ are both
p-Sylow subgroups they have the same size and hence P_1 = P′. This implies that
P_1 is a conjugate of P. Since P_1 and P are arbitrary p-Sylow subgroups it follows
that all p-Sylow subgroups are conjugate.
We come now to the last of the three Sylow theorems. This one gives us information concerning the number of p-Sylow subgroups.

Theorem 13.3.7 (third Sylow theorem). Let G be a finite group and p a prime such that p | |G|. Then the number of p-Sylow subgroups of G is of the form 1 + pk and divides |G|. It follows that if |G| = p^a m with (p, m) = 1 then the number of p-Sylow subgroups divides m.

Proof. Let P be a p-Sylow subgroup and let P act on C, the set of all p-Sylow subgroups, by conjugation. Now P normalizes itself so there is one orbit, namely {P}, that has size exactly 1. Every other orbit has size a power of p, since the size is the index of a proper subgroup of P and therefore must be divisible by p. Hence the size of C is 1 + pk. Finally, since all p-Sylow subgroups are conjugate, |C| = [G : Stab_G(P)], which divides |G|; being congruent to 1 mod p it is prime to p and hence divides m.
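The arithmetic constraints of the third Sylow theorem are easy to tabulate. The following sketch (the helper name is ours, not the book's) lists, for |G| = p^a · m with (p, m) = 1, the divisors of m that are congruent to 1 mod p, i.e. the candidates for the number of p-Sylow subgroups:

```python
# Candidates for the number of p-Sylow subgroups of a group of the given
# order: divisors of the p'-part m that are congruent to 1 mod p.
def sylow_candidates(order, p):
    m = order
    while m % p == 0:      # strip the p-part, leaving m with gcd(p, m) = 1
        m //= p
    return [d for d in range(1, m + 1) if m % d == 0 and d % p == 1]

print(sylow_candidates(60, 5))   # [1, 6]
print(sylow_candidates(60, 3))   # [1, 4, 10]
print(sylow_candidates(20, 5))   # [1], so the 5-Sylow subgroup is normal
```

For |G| = 60 this returns the candidate counts 1 or 6 for p = 5 and 1, 4 or 10 for p = 3, exactly the cases analyzed in the proof of Theorem 13.4.7 below.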
13.4 Some Applications of the Sylow Theorems
We now give some applications of the Sylow theorems. First we show that the converse of Lagrange's theorem is true both for general p-groups and for finite abelian groups.
Theorem 13.4.1. Let G be a group of order p^n. Then G contains at least one normal subgroup of order p^m for each m such that 0 ≤ m ≤ n.

Proof. We use induction on n. For n = 1 the theorem is trivial. By Lemma 10.5.7 any group of order p² is abelian. This together with Theorem 13.3.3 establishes the claim for n = 2.

We now assume the theorem is true for all groups of order p^k where 1 ≤ k < n, with n > 2. Let G be a group of order p^n. From Lemma 10.3.4 G has a nontrivial center of order at least p and hence an element g ∈ Z(G) of order p. Let N = ⟨g⟩. Since g ∈ Z(G) it follows that N is a normal subgroup of order p. Then G/N is of order p^{n-1} and therefore contains (by the induction hypothesis) normal subgroups of orders p^{m-1} for 0 ≤ m − 1 ≤ n − 1. These groups are of the form H/N, where the normal subgroup H ⊆ G contains N and is of order p^m, 1 ≤ m ≤ n, because

|H| = |N| [H : N] = |N| |H/N|.
On the basis of the first Sylow theorem we see that if G is a finite group and if p^k | |G|, then G must contain a subgroup of order p^k. One can actually show that, as in the case of p-Sylow subgroups, the number of such subgroups is of the form 1 + pt, but we shall not prove this here.
Theorem 13.4.2. Let G be a finite abelian group of order n. Suppose that d | n. Then G contains a subgroup of order d.
Proof. Suppose that n = p_1^{e_1} ⋯ p_k^{e_k} is the prime factorization of n. Then d = p_1^{f_1} ⋯ p_k^{f_k} for some nonnegative f_1, …, f_k. Now G has a p_1-Sylow subgroup H_1 of order p_1^{e_1}. Hence from Theorem 13.4.1 H_1 has a subgroup P_1 of order p_1^{f_1}. Similarly there are subgroups P_2, …, P_k of G of respective orders p_2^{f_2}, …, p_k^{f_k}. Further, since the orders are relatively prime, P_i ∩ P_j = {1} if i ≠ j. It follows that ⟨P_1, P_2, …, P_k⟩ has order |P_1||P_2| ⋯ |P_k| = p_1^{f_1} ⋯ p_k^{f_k} = d.
In Section 10.5 we examined the classification of finite groups of small orders. Here we use the Sylow theorems to extend some of this material further.

Theorem 13.4.3. Let p, q be distinct primes with p < q and q not congruent to 1 mod p. Then any group of order pq is cyclic. For example any group of order 15 must be cyclic.
Proof. Suppose that |G| = pq with p < q and q not congruent to 1 mod p. The number of q-Sylow subgroups is of the form 1 + qk and divides p. Since q is greater than p this implies that there can be only one, and hence there is a normal q-Sylow subgroup H. Since q is a prime, H is cyclic of order q and therefore there is an element g of order q.

The number of p-Sylow subgroups is of the form 1 + pk and divides q. Since q is not congruent to 1 mod p this implies that there also can be only one p-Sylow subgroup and hence there is a normal p-Sylow subgroup P. Since p is a prime, P is cyclic of order p and therefore there is an element h of order p. Since p, q are distinct primes H ∩ P = {1}. Consider the element g^{-1}h^{-1}gh. Since P is normal, g^{-1}h^{-1}g ∈ P. Then g^{-1}h^{-1}gh = (g^{-1}h^{-1}g)h ∈ P. But H is also normal so h^{-1}gh ∈ H. This then implies that g^{-1}h^{-1}gh = g^{-1}(h^{-1}gh) ∈ H and therefore g^{-1}h^{-1}gh ∈ P ∩ H. It follows then that g^{-1}h^{-1}gh = 1, that is gh = hg. Since g, h commute, the order of gh is the lcm of the orders of g and h, which is pq. Therefore G has an element of order pq. Since |G| = pq this implies that G is cyclic.
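Theorem 13.4.3 is easy to check numerically. This short sketch (ours, not from the text) lists the orders pq < 100 with p < q primes and q not congruent to 1 mod p; by the theorem every group of such an order is cyclic. Note that no order 2q appears, since an odd prime q is always congruent to 1 mod 2:

```python
# Trial-division primality test, adequate for small numbers
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# Orders pq < 100 (p < q primes, q not congruent to 1 mod p):
# by Theorem 13.4.3 every group of one of these orders is cyclic.
cyclic_orders = sorted(
    p * q
    for p in range(2, 100) if is_prime(p)
    for q in range(p + 1, 100) if is_prime(q) and q % p != 1 and p * q < 100
)
print(cyclic_orders)  # [15, 33, 35, 51, 65, 69, 77, 85, 87, 91, 95]
```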
In the above theorem we assumed that q is not congruent to 1 mod p; since q is odd this forces p ≠ 2. In the case p = 2 we get another possibility.
Theorem 13.4.4. Let p be an odd prime and G a finite group of order 2p. Then either G is cyclic or G is isomorphic to the dihedral group of order 2p, that is the group of symmetries of a regular p-gon. In this latter case G is generated by two elements g and h which satisfy the relations g^p = h² = (gh)² = 1.
Proof. As in the proof of Theorem 13.4.3, G must have a normal cyclic subgroup of order p, say ⟨g⟩. Since 2 | |G| the group G must have an element of order 2, say h. Consider the order of gh. By Lagrange's theorem this element can have order 1, 2, p or 2p. If the order is 1 then gh = 1 and g = h^{-1} = h. This is impossible
since g has order p and h has order 2. If the order of gh is p then from the second Sylow theorem gh ∈ ⟨g⟩. But this implies that h ∈ ⟨g⟩, which is impossible since every nontrivial element of ⟨g⟩ has order p. Therefore the order of gh is either 2 or 2p.

If the order of gh is 2p then since G has order 2p it must be cyclic.

If the order of gh is 2 then within G we have the relations g^p = h² = (gh)² = 1. Let H = ⟨g, h⟩ be the subgroup of G generated by g and h. The relations g^p = h² = (gh)² = 1 imply that H has order 2p. Since |G| = 2p we get that H = G. G is isomorphic to the dihedral group D_p of order 2p (see exercises).

In the above description g represents a rotation through 2π/p of a regular p-gon about its center, while h represents any reflection across a line of symmetry of the regular p-gon.
We have looked at the finite fields Z_p. We give an example of a p-Sylow subgroup of a matrix group over Z_p.
Example 13.4.5. Consider GL(n, p), the group of n × n invertible matrices over Z_p. If {v_1, …, v_n} is a basis for (Z_p)^n over Z_p then the size of GL(n, p) is the number of linearly independent images {w_1, …, w_n} of {v_1, …, v_n}. For w_1 there are p^n − 1 choices, for w_2 there are p^n − p choices and so on. It follows that

|GL(n, p)| = (p^n − 1)(p^n − p) ⋯ (p^n − p^{n-1}) = p^{1+2+···+(n-1)} m = p^{n(n-1)/2} m

with (p, m) = 1. Therefore a p-Sylow subgroup must have size p^{n(n-1)/2}.

Let P be the subgroup of upper triangular matrices with 1's on the diagonal. Then P has size p^{1+2+···+(n-1)} = p^{n(n-1)/2} and is therefore a p-Sylow subgroup of GL(n, p).
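The order count in Example 13.4.5 can be verified by direct computation; the function name below is ours:

```python
from math import gcd, prod

# |GL(n, p)| = (p^n - 1)(p^n - p) ... (p^n - p^(n-1)), counting the choices
# of linearly independent images of a basis of (Z_p)^n.
def gl_order(n, p):
    return prod(p**n - p**i for i in range(n))

n, p = 3, 5
order = gl_order(n, p)
sylow_size = p ** (n * (n - 1) // 2)   # p^(n(n-1)/2), the p-part
m = order // sylow_size
print(order, sylow_size, m, gcd(p, m))  # 1488000 125 11904 1
```

For n = 3, p = 5 this gives |GL(3, 5)| = 1 488 000 = 5³ · 11904 with (5, 11904) = 1, so a 5-Sylow subgroup has order 5³ = 125, the size of the subgroup of upper triangular matrices with 1's on the diagonal.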
The final example is a bit more difficult. We mentioned that a major result on finite groups is the classification of the finite simple groups. This classification showed that any finite simple group is either cyclic of prime order, lies in one of several infinite classes of groups such as the alternating groups A_n, or is one of a number of special examples called sporadic groups. One of the major tools in this classification is the following famous result, the Feit–Thompson theorem, which shows that any finite group G of odd order is solvable and, in addition, that if G is not cyclic then G is nonsimple.

Theorem 13.4.6 (Feit–Thompson theorem). Any finite group of odd order is solvable.

The proof of this theorem, one of the major results in algebra in the twentieth century, is far beyond the scope of this book – the proof is actually hundreds of pages in length when one counts the results used. However we can look at the smallest nonabelian simple group.
Theorem 13.4.7. Suppose that G is a simple group of order 60. Then G is isomorphic to A_5. Further A_5 is the smallest nonabelian finite simple group.

Proof. Suppose that G is a simple group of order 60 = 2² · 3 · 5. The number of 5-Sylow subgroups is of the form 1 + 5k and divides 12. Hence there are 1 or 6. Since G is assumed simple and all 5-Sylow subgroups are conjugate there cannot be only one, and hence there are 6. Since each of these is cyclic of order 5 they intersect only in the identity. Hence these 6 subgroups cover 24 distinct nontrivial elements.

The number of 3-Sylow subgroups is of the form 1 + 3k and divides 20. Hence there are 1, 4 or 10. We claim that there are 10. There can't be only 1 since G is simple. Suppose there were 4. Let G act on the set of 3-Sylow subgroups by conjugation. Since an action is a permutation this gives a homomorphism f from G into S_4. By the first isomorphism theorem G/ker(f) ≅ im(f). However since G is simple the kernel must be trivial, and this implies that G would imbed into S_4. This is impossible since |G| = 60 > 24 = |S_4|. Therefore there are ten 3-Sylow subgroups. Since each of these is cyclic of order 3 they intersect only in the identity, and therefore these 10 subgroups cover 20 distinct nontrivial elements.

Hence together with the elements in the 5-Sylow subgroups we have 44 nontrivial elements.

The number of 2-Sylow subgroups is of the form 1 + 2k and divides 15. Hence there are 1, 3, 5 or 15. We claim that there are 5. As before there can't be only 1 since G is simple. There can't be 3 since, as for the case of 3-Sylow subgroups, this would imply an imbedding of G into S_3, which is impossible since |S_3| = 6. Suppose that there were 15 2-Sylow subgroups, each of order 4. The pairwise intersections would have a maximum of 2 elements, and therefore each of these subgroups would contribute at least 2 distinct nontrivial elements. This gives a minimum of 30 distinct elements. However we already have 44 nontrivial elements from the 3-Sylow and 5-Sylow subgroups. Since |G| = 60 this is too many. Therefore G must have five 2-Sylow subgroups.

Now let G act on the set of 2-Sylow subgroups. As above this implies an imbedding of G into S_5, so we may consider G as a subgroup of S_5. However the only subgroup of S_5 of order 60 is A_5 and therefore G ≅ A_5.

The proof that A_5 is the smallest nonabelian simple group is actually brute force. We show that any group G of order less than 60 either has prime order or is nonsimple. There are strong tools that we can use. By the Feit–Thompson theorem we must only consider groups of even order. From Theorem 13.4.4 we don't have to consider orders 2p. The rest can be done by an analysis using Sylow theory. For example we show that any group of order 20 is nonsimple. Since 20 = 2² · 5 the number of 5-Sylow subgroups is 1 + 5k and divides 4. Hence there is only one; therefore it must be normal and so G is nonsimple. There is a strong theorem, whose proof is usually done with representation theory, which says that any group whose order is divisible by only two primes is solvable. Therefore for |G| < 60 we only have to show that groups of order 30 = 2 · 3 · 5 and 42 = 2 · 3 · 7 are nonsimple. This is done in the same manner
as the first part of this proof. Suppose |G| = 30. The number of 5-Sylow subgroups is of the form 1 + 5k and divides 6, hence there are 1 or 6. If G were simple there would have to be 6, covering 24 distinct nontrivial elements. The number of 3-Sylow subgroups is of the form 1 + 3k and divides 10, hence there is 1 or there are 10 (the value 4, allowed by the congruence alone, is also ruled out: if there were 4 and G were simple then G would imbed into S_4, again impossible since |G| = 30 > 24). If there were 10 these would cover an additional 20 distinct elements, which is impossible since we already have 24 and G has order 30. Therefore there is only one and hence a normal 3-Sylow subgroup. It follows that G cannot be simple. The case |G| = 42 is even simpler. There must be a normal 7-Sylow subgroup.
13.5 Exercises

1. Prove Lemma 13.1.3.

2. Let the group G act on itself by conjugation, that is g(g_1) = g^{-1}g_1g. Prove that this is an action on the set G.

3. Show that the dihedral group D_n of order 2n has the presentation (r, f : r^n = f² = (rf)² = 1).

4. Show that each group of order ≤ 59 is solvable.

5. Let P_1 and P_2 be two different p-Sylow subgroups of a finite group G. Show that P_1P_2 is not a subgroup of G.

6. Let G be a finite group. For a prime p the following are equivalent:
(i) G has exactly one p-Sylow subgroup.
(ii) The product of two elements of order p has again order p.

7. Let P and Q be two p-Sylow subgroups of the finite group G. If Z(P) is a normal subgroup of Q, then Z(P) = Z(Q).

8. Let p be a prime and G = SL(2, p). Let P = ⟨a⟩, where a = (1 1; 0 1) (rows of the matrix).
(i) Determine the normalizer N_G(P) and the number of p-Sylow subgroups of G.
(ii) Determine the centralizer C_G(a). How many elements of order p does G have? Into how many conjugacy classes can they be decomposed?
(iii) Show that all subgroups of G of order p(p − 1) are conjugate.
(iv) Show that G has no elements of order p(p − 1) for p ≥ 5.
Chapter 14
Free Groups and Group Presentations

14.1 Group Presentations and Combinatorial Group Theory

In discussing the symmetric group on 3 symbols and then the various dihedral groups in Chapters 9, 10 and 11 we came across the concept of a group presentation. Roughly, for a group G a presentation consists of a set of generators X for G, so that G = ⟨X⟩, and a set of relations between the elements of X from which in principle the whole group table can be constructed. In this chapter we make this concept precise. As we will see, every group G has a presentation, but it is mainly in the case where the group is finite or countably infinite that presentations are most useful. Historically the idea of group presentations arose out of the attempt to describe the countably infinite fundamental groups that came out of low-dimensional topology. The study of groups using group presentations is called combinatorial group theory.
Before looking at group presentations in general we revisit two examples of finite groups and then a class of infinite groups.

Consider the symmetric group on 3 symbols, S_3. We saw that it has the following 6 elements, written in two-row form with each point above its image:

1 = (1 2 3; 1 2 3),  a = (1 2 3; 2 3 1),  b = (1 2 3; 3 1 2),
c = (1 2 3; 2 1 3),  d = (1 2 3; 3 2 1),  e = (1 2 3; 1 3 2).

Notice that a³ = 1, c² = 1 and that ac = ca². We claim that

(a, c : a³ = c² = (ac)² = 1)

is a presentation for S_3. First it is easy to show that S_3 = ⟨a, c⟩. Indeed

1 = 1,  a = a,  b = a²,  c = c,  d = ac,  e = a²c

and so clearly a, c generate S_3.
Now from (ac)² = acac = 1 we get that ca = a²c. This implies that if we write any sequence (or word, in our later language) in a and c we can rearrange it so that the only powers of a appearing are a and a², the only power of c is c, and all a terms precede c terms. For example

aca²cac = aca(acac) = aca = a(ca) = a(a²c) = (a³)c = c.
Therefore, using the three relations from the presentation above, each element of S_3 can be written as a^α c^β with α = 0, 1, 2 and β = 0, 1. From this the multiplication of any two elements can be determined.
any two elements can be determined.
Exactly this type of argument applies to all the dihedral groups D
n
. We saw that
in general [D
n
[ = 2n. Since these are the symmetry groups of a regular ngon we
always have a rotation r of angle
2t
n
about the center of the ngon. This element r
would have order n. Let ¡ be a reﬂection about any line of symmetry. Then ¡
2
= 1
and r¡ is a reﬂection about the rotated line which is also a line of symmetry. Therefore
(r¡ )
2
= 1. Exactly as for S
3
the relation (r¡ )
2
= 1 implies that ¡ r = r
1
¡ =
r
n1
¡ . This allows us to always place r terms in front of ¡ terms in any word on r
and ¡ . Therefore the elements of D
n
are always of the form
r
˛
¡
ˇ
. ˛ = 0. 1. 2. . . . . n ÷ 1. ˇ = 0. 1
and further the relations r
n
= ¡
2
= (r¡ )
2
= 1 allow us to rearrange any word in r
and ¡ into this form. It follows that [(r. ¡ )[ = 2n and hence D
n
= (r. ¡ ) together
with the relations above. Hence we obtain:
Theorem 14.1.1. If D
n
is the symmetry group of a regular ngon then a presentation
for D
n
is given by
D
n
= (r. ¡ : r
n
= ¡
2
= (r¡ )
2
= 1).
We now give one class of infinite examples. If G is an infinite cyclic group, so that G ≅ Z, then G = (g : ) is a presentation for G. That is, G has a single generator with no relations.

A direct product of n copies of Z is called a free abelian group of rank n. We will denote this by Z^n. A presentation for Z^n is then given by

Z^n = (x_1, x_2, …, x_n : x_i x_j = x_j x_i for all i, j = 1, …, n).
14.2 Free Groups

Crucial to the concept of a group presentation is the idea of a free group.

Definition 14.2.1. A group F is free on a subset X if every map f : X → G with G a group can be extended to a unique homomorphism f : F → G. X is called a free basis for F. In general a group F is a free group if it is free on some subset X. If X is a free basis for a free group F we write F = F(X).
We first show that given any set X there does exist a free group with free basis X. Let X = {x_i}_{i∈I} be a set (possibly empty). We will construct a group F(X) which is free with free basis X. First let X^{-1} be a set disjoint from X but bijective to X. If x_i ∈ X then the corresponding element of X^{-1} under the bijection we denote x_i^{-1} and say that x_i and x_i^{-1} are associated. The set X^{-1} is called the set of formal inverses from X and we call X ∪ X^{-1} the alphabet. Elements of the alphabet are called letters; hence a letter has the form x_i^{ε_i} where ε_i = ±1. A word in X is a finite sequence of letters from the alphabet. That is, a word has the form

w = x_{i_1}^{ε_{i_1}} x_{i_2}^{ε_{i_2}} ⋯ x_{i_n}^{ε_{i_n}}

where x_{i_j} ∈ X and ε_{i_j} = ±1. If n = 0 we call it the empty word, which we will denote e. The integer n is called the length of the word. Words of the form x_i x_i^{-1} or x_i^{-1} x_i are called trivial words. We let W(X) be the set of all words on X.
If w_1, w_2 ∈ W(X) we say that w_1 is equivalent to w_2, denoted w_1 ~ w_2, if w_1 can be converted to w_2 by a finite string of insertions and deletions of trivial words. For example if w_1 = x_3 x_4 x_4^{-1} x_2 x_2 and w_2 = x_3 x_2 x_2 then w_1 ~ w_2. It is straightforward to verify that this is an equivalence relation on W(X) (see exercises). Let F(X) denote the set of equivalence classes in W(X) under this relation; hence F(X) is a set of equivalence classes of words from X.

A word w ∈ W(X) is said to be freely reduced or reduced if it has no trivial subwords (a subword is a connected sequence within a word). Hence in the example above w_2 = x_3 x_2 x_2 is reduced but w_1 = x_3 x_4 x_4^{-1} x_2 x_2 is not reduced. In each equivalence class in F(X) there is a unique element of minimal length. Further this element must be reduced, or else it would be equivalent to something of smaller length. Two reduced words in W(X) are either equal or lie in different equivalence classes in F(X). Hence F(X) can also be considered as the set of all reduced words from W(X).
Given a word w = x_{i_1}^{ε_{i_1}} x_{i_2}^{ε_{i_2}} ⋯ x_{i_n}^{ε_{i_n}} we can find the unique reduced word equivalent to w via the following free reduction process. Beginning from the left side of w we cancel each occurrence of a trivial subword. After all these possible cancellations we have a word w′. Now we repeat the process again, starting from the left side. Since w has finite length, eventually the resulting word will either be empty or reduced. The final reduced word is the free reduction of w.
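The free reduction process can be implemented in a single left-to-right pass with a stack; the representation of letters as (generator, exponent) pairs is our choice, not the book's notation. The stack makes the repeated passes described above unnecessary, since cancelling a trivial subword exposes the previous letter again automatically:

```python
# Free reduction of a word. A word is a list of pairs (generator, exponent)
# with exponent +1 or -1; one pass with a stack removes every trivial
# subword x x^-1 or x^-1 x, including nested cancellations.
def free_reduce(word):
    stack = []
    for letter in word:
        if stack and stack[-1][0] == letter[0] and stack[-1][1] == -letter[1]:
            stack.pop()          # cancel a trivial subword
        else:
            stack.append(letter)
    return stack

# x3 x4 x4^-1 x2 x2 reduces to x3 x2 x2, the example above
w = [("x3", 1), ("x4", 1), ("x4", -1), ("x2", 1), ("x2", 1)]
print(free_reduce(w))  # [('x3', 1), ('x2', 1), ('x2', 1)]
```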
Now we build a multiplication on F(X). If

w_1 = x_{i_1}^{ε_{i_1}} x_{i_2}^{ε_{i_2}} ⋯ x_{i_n}^{ε_{i_n}},   w_2 = x_{j_1}^{ε_{j_1}} x_{j_2}^{ε_{j_2}} ⋯ x_{j_m}^{ε_{j_m}}

are two words in W(X) then their concatenation w_1 ∗ w_2 is simply w_2 placed after w_1:

w_1 ∗ w_2 = x_{i_1}^{ε_{i_1}} ⋯ x_{i_n}^{ε_{i_n}} x_{j_1}^{ε_{j_1}} ⋯ x_{j_m}^{ε_{j_m}}.

If w_1, w_2 ∈ F(X) then we define their product as

w_1 w_2 = the equivalence class of w_1 ∗ w_2.
That is, we concatenate w_1 and w_2, and the product is the equivalence class of the resulting word. It is easy to show that if w_1 ~ w_1′ and w_2 ~ w_2′ then w_1 ∗ w_2 ~ w_1′ ∗ w_2′, so that the above multiplication is well-defined. Equivalently we can think of this product in the following way. If w_1, w_2 are reduced words then to find w_1 w_2 first concatenate and then freely reduce. Notice that if x_{i_n}^{ε_{i_n}} x_{j_1}^{ε_{j_1}} is a trivial word then it is cancelled when the concatenation is formed. We say then that there is cancellation in forming the product w_1 w_2. Otherwise the product is formed without cancellation.
Theorem 14.2.2. Let X be a set and let F(X) be as above. Then F(X) is a free group with free basis X. Further, if X = ∅ then F(X) = {1}, if |X| = 1 then F(X) ≅ Z, and if |X| ≥ 2 then F(X) is nonabelian.

Proof. We first show that F(X) is a group and then show that it satisfies the universal mapping property on X. We consider F(X) as the set of reduced words in W(X) with the multiplication defined above. Clearly the empty word acts as the identity element 1. If w = x_{i_1}^{ε_{i_1}} x_{i_2}^{ε_{i_2}} ⋯ x_{i_n}^{ε_{i_n}} and w^{-1} = x_{i_n}^{-ε_{i_n}} x_{i_{n-1}}^{-ε_{i_{n-1}}} ⋯ x_{i_1}^{-ε_{i_1}} then both w ∗ w^{-1} and w^{-1} ∗ w freely reduce to the empty word, and so w^{-1} is the inverse of w. Therefore each element of F(X) has an inverse. Therefore to show that F(X) forms a group we must show that the multiplication is associative. Let
w_1 = x_{i_1}^{ε_{i_1}} ⋯ x_{i_n}^{ε_{i_n}},   w_2 = x_{j_1}^{ε_{j_1}} ⋯ x_{j_m}^{ε_{j_m}},   w_3 = x_{k_1}^{ε_{k_1}} ⋯ x_{k_p}^{ε_{k_p}}

be three freely reduced words in F(X). We must show that

(w_1 w_2)w_3 = w_1(w_2 w_3).

To prove this we use induction on m, the length of w_2. If m = 0 then w_2 is the empty word, hence the identity, and the claim is certainly true. Now suppose that m = 1, so that w_2 = x_{j_1}^{ε_{j_1}}. We must consider exactly four cases.
Case (1): There is no cancellation in forming either w_1 w_2 or w_2 w_3. That is, x_{j_1}^{ε_{j_1}} ≠ x_{i_n}^{-ε_{i_n}} and x_{j_1}^{ε_{j_1}} ≠ x_{k_1}^{-ε_{k_1}}. Then the product w_1 w_2 is just the concatenation of the words, and so is (w_1 w_2)w_3. The same is true for w_1(w_2 w_3). Therefore in this case w_1(w_2 w_3) = (w_1 w_2)w_3.

Case (2): There is cancellation in forming w_1 w_2 but not in forming w_2 w_3. Then if we concatenate all three words the only cancellation occurs between w_1 and w_2, in either w_1(w_2 w_3) or in (w_1 w_2)w_3, and hence they are equal. Therefore in this case w_1(w_2 w_3) = (w_1 w_2)w_3.

Case (3): There is cancellation in forming w_2 w_3 but not in forming w_1 w_2. This is entirely analogous to Case (2), so in this case also w_1(w_2 w_3) = (w_1 w_2)w_3.
Case (4): There is cancellation both in forming w_1 w_2 and in forming w_2 w_3. Then x_{j_1}^{ε_{j_1}} = x_{i_n}^{-ε_{i_n}} and x_{j_1}^{ε_{j_1}} = x_{k_1}^{-ε_{k_1}}. Here

(w_1 w_2)w_3 = x_{i_1}^{ε_{i_1}} ⋯ x_{i_{n-1}}^{ε_{i_{n-1}}} x_{k_1}^{ε_{k_1}} x_{k_2}^{ε_{k_2}} ⋯ x_{k_p}^{ε_{k_p}}.

On the other hand

w_1(w_2 w_3) = x_{i_1}^{ε_{i_1}} ⋯ x_{i_n}^{ε_{i_n}} x_{k_2}^{ε_{k_2}} ⋯ x_{k_p}^{ε_{k_p}}.

However these are equal since x_{i_n}^{ε_{i_n}} = x_{k_1}^{ε_{k_1}}. Therefore in this final case w_1(w_2 w_3) = (w_1 w_2)w_3. It follows inductively from these four cases that the associative law holds in F(X), and therefore F(X) forms a group.

Now suppose that f : X → G is a map from X into a group G. By the construction of F(X) as a set of reduced words this can be extended to a unique homomorphism: if w ∈ F(X) with w = x_{i_1}^{ε_{i_1}} ⋯ x_{i_n}^{ε_{i_n}} then define f(w) = f(x_{i_1})^{ε_{i_1}} ⋯ f(x_{i_n})^{ε_{i_n}}. Since multiplication in F(X) is concatenation this defines a homomorphism, and again from the construction of F(X) it is the only one extending f. This is analogous to constructing a linear transformation from one vector space to another by specifying the images of a basis. Therefore F(X) satisfies the universal mapping property of Definition 14.2.1 and hence F(X) is a free group with free basis X.

The final parts of Theorem 14.2.2 are straightforward. If X is empty the only reduced word is the empty word and hence the group is just the identity. If X has a single letter then F(X) has a single generator and is therefore cyclic. It is easy to see that it must be torsion-free, and therefore F(X) is infinite cyclic, that is F(X) ≅ Z. Finally if |X| ≥ 2 let x_1, x_2 ∈ X. Then x_1 x_2 ≠ x_2 x_1 and both are reduced. Therefore F(X) is nonabelian.
The proof of Theorem 14.2.2 provides another way to look at free groups.

Theorem 14.2.3. F is a free group if and only if there is a generating set X such that every element of F has a unique representation as a freely reduced word on X.

The structure of a free group is entirely dependent on the cardinality of a free basis. In particular the cardinality |X| of a free basis for a free group F is unique and is called the rank of F. If |X| < ∞, F is of finite rank. If F has rank n and X = {x_1, …, x_n} we say that F is free on {x_1, …, x_n}. We denote this by F(x_1, x_2, …, x_n).
Theorem 14.2.4. If X and Y are sets with the same cardinality, that is |X| = |Y|, then F(X) ≅ F(Y), that is the resulting free groups are isomorphic. Further if F(X) ≅ F(Y) then |X| = |Y|.

Proof. Suppose that f : X → Y is a bijection from X onto Y. Now Y ⊆ F(Y), so there is a unique homomorphism φ : F(X) → F(Y) extending f. Since f is a bijection it has an inverse f^{-1} : Y → X, and since F(Y) is free there is a unique homomorphism φ_1 from F(Y) to F(X) extending f^{-1}. Then φφ_1 is the identity map on F(Y) and φ_1φ is the identity map on F(X). Therefore φ, φ_1 are isomorphisms with φ = φ_1^{-1}.

Conversely suppose that F(X) ≅ F(Y). In F(X) let N(X) be the subgroup generated by all squares in F(X), that is

N(X) = ⟨{g² : g ∈ F(X)}⟩.

Then N(X) is a normal subgroup and the factor group F(X)/N(X) is abelian with every nontrivial element of order 2 (see exercises). Therefore F(X)/N(X) can be considered as a vector space over Z_2, the finite field of order 2, with the image of X as a vector space basis. Hence |X| is the dimension of this vector space. Let N(Y) be the corresponding subgroup of F(Y). Since F(X) ≅ F(Y) we would have F(X)/N(X) ≅ F(Y)/N(Y), and therefore |Y| is the dimension of the vector space F(Y)/N(Y). Therefore |X| = |Y| from the uniqueness of dimension of vector spaces.
Expressing elements of F(X) as reduced words gives a normal form for elements in a free group F. As we will see in Section 14.5 this solves what is termed the word problem for free groups. Another important concept is the following: a freely reduced word W = x_{v_1}^{e_1} x_{v_2}^{e_2} ⋯ x_{v_n}^{e_n} is cyclically reduced if v_1 ≠ v_n, or if v_1 = v_n then e_1 ≠ −e_n. Clearly then every element of a free group is conjugate to an element given by a cyclically reduced word. This provides a method to determine conjugacy in free groups.

Theorem 14.2.5. In a free group F two elements g_1, g_2 are conjugate if and only if a cyclically reduced word for g_1 is a cyclic permutation of a cyclically reduced word for g_2.
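Theorem 14.2.5 yields a simple algorithm for the conjugacy problem in a free group. The sketch below uses our own names and word representation (words as freely reduced lists of (generator, exponent) pairs): cyclically reduce both words, then compare cyclic permutations.

```python
# Strip conjugating letters x ... x^-1 from both ends until the word is
# cyclically reduced; the input is assumed freely reduced.
def cyclically_reduce(word):
    w = list(word)
    while len(w) >= 2 and w[0][0] == w[-1][0] and w[0][1] == -w[-1][1]:
        w = w[1:-1]
    return w

# Conjugacy test of Theorem 14.2.5: conjugate iff one cyclically reduced
# word is a cyclic permutation of the other.
def conjugate_in_free_group(w1, w2):
    c1, c2 = cyclically_reduce(w1), cyclically_reduce(w2)
    if len(c1) != len(c2):
        return False
    return any(c2[i:] + c2[:i] == c1 for i in range(max(len(c2), 1)))

# x y x^-1 is conjugate to y, but not to y^-1
w = [("x", 1), ("y", 1), ("x", -1)]
print(conjugate_in_free_group(w, [("y", 1)]))   # True
print(conjugate_in_free_group(w, [("y", -1)]))  # False
```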
The theory of free groups has a large and extensive literature. We close this section by stating several important properties. Proofs for these results can be found in [24], [23] or [15].

Theorem 14.2.6. A free group is torsion-free.

From Theorem 14.2.4 we can deduce:

Theorem 14.2.7. An abelian subgroup of a free group must be cyclic.

Finally, a celebrated theorem of Nielsen and Schreier states that a subgroup of a free group must be free.

Theorem 14.2.8 (Nielsen–Schreier). A subgroup of a free group is itself a free group.

Combinatorially, F is free on X if X is a set of generators for F and there are no nontrivial relations.
There are several different proofs of this result (see [24]), with the most straightforward being topological in nature. We give an outline of a simple topological proof in Section 14.4.

Nielsen, using a technique now called Nielsen transformations in his honor, first proved this theorem about 1920 for finitely generated subgroups. Schreier shortly after found a combinatorial method to extend this to arbitrary subgroups. A complete version of the original combinatorial proof appears in [24] and in the notes by Johnson [19].

Schreier's combinatorial proof also allows for a description of the free basis for the subgroup. In particular, let F be free on X, and H ⊆ F a subgroup. Let T = {t_α} be a complete set of right coset representatives for F mod H with the property that if t_α = x_{v_1}^{e_1} x_{v_2}^{e_2} ⋯ x_{v_n}^{e_n} ∈ T, with e_i = ±1, then all the initial segments 1, x_{v_1}^{e_1}, x_{v_1}^{e_1}x_{v_2}^{e_2}, etc. are also in T. Such a system of coset representatives can always be found and is called a Schreier system or Schreier transversal for H. If g ∈ F let ḡ represent its coset representative in T, and further define, for g ∈ F and t ∈ T,

S_{t,g} = tg(\overline{tg})^{-1}.

Notice that S_{t,g} ∈ H for all t, g. We then have:

Theorem 14.2.9 (explicit form of Nielsen–Schreier). Let F be free on X and H a subgroup of F. If T is a Schreier transversal for F mod H then H is free on the set

{S_{t,x} : t ∈ T, x ∈ X, S_{t,x} ≠ 1}.
Example 14.2.10. Let F be free on {a, b} and let H be the normal subgroup of F generated by all squares in F.

Then F/H = (a, b : a² = b² = (ab)² = 1) = Z_2 × Z_2. It follows that a Schreier system for F mod H is {1, a, b, ab}, where a and b are their own coset representatives and the representative of ba is ab. From this it can be shown that H is free on the generating set

x_1 = a², x_2 = bab^{-1}a^{-1}, x_3 = b², x_4 = abab^{-1}, x_5 = ab²a^{-1}.

The theorem also allows for a computation of the rank of H given the rank of F and the index. Specifically:

Corollary 14.2.11. Suppose F is free of rank n and [F : H] = k. Then H is free of rank nk − k + 1.

In the example above F is free of rank 2 and H has index 4, so H is free of rank 2 · 4 − 4 + 1 = 5.
14.3 Group Presentations
The signiﬁcance of free groups stems from the following result which is easily de
duced from the deﬁnition and will lead us directly to a formal deﬁnition of a group
Section 14.3 Group Presentations 199
presentation. Let G be any group and J the free group on the elements of G consid
ered as a set. The identity map ¡ : G ÷ G can be extended to a homomorphism of
J onto G, therefore:
Theorem 14.3.1. Every group G is a homomorphic image of a free group. That is let
G be any group. Then G = J,N where J is a free group.
In the above theorem, instead of taking all the elements of G, we can consider just
a set X of generators for G. Then G is a factor group of F(X), G ≅ F(X)/N. The
normal subgroup N is the kernel of the homomorphism from F(X) onto G. We use
Theorem 14.3.1 to formally define a group presentation.
If H is a subgroup of a group G then the normal closure of H, denoted by N(H), is
the smallest normal subgroup of G containing H. This can be described alternatively
in the following manner: the normal closure of H is the subgroup of G generated by
all conjugates of elements of H.
Now suppose that G is a group with X a set of generators for G. We also call X
a generating system for G. Now let G = F(X)/N as in Theorem 14.3.1 and the
comments after it. N is the kernel of the homomorphism f : F(X) → G. It follows
that if r is a free group word with r ∈ N then r = 1 in G (under the homomorphism).
We then call r a relator in G and the equation r = 1 a relation in G. Suppose that R
is a subset of N such that N = N(R); then R is called a set of defining relators for G.
The equations r = 1, r ∈ R, are a set of defining relations for G. It follows that any
relator in G is a product of conjugates of elements of R. Equivalently r ∈ F(X) is
a relator in G if and only if r can be reduced to the empty word by insertions and
deletions of elements of R and trivial words.
Definition 14.3.2. Let G be a group. Then a group presentation for G consists of
a set of generators X for G and a set R of defining relators. In this case we write
G = ⟨X ; R⟩. We could also write the presentation in terms of defining relations as
G = ⟨X ; r = 1, r ∈ R⟩.
From Theorem 14.3.1 it follows immediately that every group has a presentation.
However, in general there are many presentations for the same group. If R ⊆ R_1 ⊆ N
then R_1 is also a set of defining relators.
Lemma 14.3.3. Let G be a group. Then G has a presentation.
If G = ⟨X ; R⟩ and X is finite then G is said to be finitely generated. If R is finite,
G is finitely related. If both X and R are finite, G is finitely presented.
Using group presentations we get another characterization of free groups.
Theorem 14.3.4. F is a free group if and only if F has a presentation of the form
F = ⟨X ; ⟩.
200 Chapter 14 Free Groups and Group Presentations
Mimicking the construction of a free group from a set X we can show that to each
presentation corresponds a group. Suppose that we are given a supposed presentation
⟨X ; R⟩ where R is given as a set of words in X. Consider the free group F(X)
on X. Define two words w_1, w_2 on X to be equivalent if w_1 can be transformed into
w_2 using insertions and deletions of elements of R and trivial words. As in the free
group case this is an equivalence relation. Let G be the set of equivalence classes.
If we define multiplication as before as concatenation followed by passing to the
appropriate equivalence class then G is a group. Further each r ∈ R must equal the
identity in G so that G = ⟨X ; R⟩. Notice that here there may be no unique reduced
word for an element of G.
Theorem 14.3.5. Given ⟨X ; R⟩, where X is a set and R is a set of words on X, there
exists a group G with presentation ⟨X ; R⟩.
We now give some examples of group presentations.
Example 14.3.6. A free group of rank n has a presentation
F_n = ⟨x_1, …, x_n ; ⟩.
Example 14.3.7. A free abelian group of rank n has a presentation
Z^n = ⟨x_1, …, x_n ; x_i x_j x_i^{-1} x_j^{-1}, i = 1, …, n, j = 1, …, n⟩.
Example 14.3.8. A cyclic group of order n has a presentation
Z_n = ⟨x ; x^n = 1⟩.
Example 14.3.9. The dihedral group of order 2n, the symmetry group of a regular
n-gon, has a presentation
⟨r, f ; r^n = 1, f^2 = 1, (rf)^2 = 1⟩.
We look at this example in Section 14.3.1.
14.3.1 The Modular Group
In this section we give a more complicated example and then a nice application to
number theory.
If R is any commutative ring with an identity, then the set of invertible n × n matrices
with entries from R forms a group under matrix multiplication called the n-dimensional
general linear group over R (see [25]). This group is denoted by GL_n(R).
Since det(A) det(B) = det(AB) for square matrices A, B it follows that the subset
of GL_n(R) consisting of those matrices of determinant 1 forms a subgroup. This
subgroup is called the special linear group over R and is denoted by SL_n(R). In this
section we concentrate on SL_2(Z), or more specifically on a quotient of it, PSL_2(Z),
and find presentations for them.
The group SL_2(Z) then consists of 2 × 2 integral matrices of determinant one:
SL_2(Z) = { ( a  b ; c  d ) : a, b, c, d ∈ Z, ad − bc = 1 }.
SL_2(Z) is called the homogeneous modular group and an element of SL_2(Z) is called
a unimodular matrix.
If G is any group, recall that its center Z(G) consists of those elements of G which
commute with all elements of G:
Z(G) = {g ∈ G : gh = hg for all h ∈ G}.
Z(G) is a normal subgroup of G and hence we can form the factor group G/Z(G).
For G = SL_2(Z) the only unimodular matrices that commute with all others are
±1 = ±( 1  0 ; 0  1 ). Therefore Z(SL_2(Z)) = {1, −1}. The quotient
SL_2(Z)/Z(SL_2(Z)) = SL_2(Z)/{1, −1}
is denoted PSL_2(Z) and is called the projective special linear group or inhomogeneous
modular group. More commonly PSL_2(Z) is just called the Modular Group
and denoted by M.
M arises in many different areas of mathematics including number theory, complex
analysis and Riemann surface theory and the theory of automorphic forms and
functions. M is perhaps the most widely studied single finitely presented group. Complete
discussions of M and its structure can be found in the books Integral Matrices
by M. Newman [38] and Algebraic Theory of the Bianchi Groups by B. Fine [34].
Since M = PSL_2(Z) = SL_2(Z)/{1, −1} it follows that each element of M can be
considered as ±A where A is a unimodular matrix. A projective unimodular matrix
is then
±( a  b ; c  d ), a, b, c, d ∈ Z, ad − bc = 1.
The elements of M can also be considered as linear fractional transformations over
the complex numbers:
z′ = (az + b)/(cz + d), a, b, c, d ∈ Z, ad − bc = 1, where z ∈ C.
Thought of in this way, M forms a Fuchsian group, which is a discrete group of
isometries of the non-Euclidean hyperbolic plane. The book by Katok [20] gives
a solid and clear introduction to such groups. This material can also be found in
condensed form in [35].
We now determine presentations for both SL_2(Z) and M = PSL_2(Z).
Theorem 14.3.10. The group SL_2(Z) is generated by the elements
X = ( 0  −1 ; 1  0 ) and Y = ( 0  1 ; −1  −1 ).
Further a complete set of defining relations for the group in terms of these generators
is given by
X^4 = Y^3 = YX^2Y^{-1}X^{-2} = 1.
It follows that SL_2(Z) has the presentation
⟨X, Y ; X^4 = Y^3 = YX^2Y^{-1}X^{-2} = 1⟩.
Proof. We first show that SL_2(Z) is generated by X and Y, that is, every matrix A in
the group can be written as a product of powers of X and Y.
Let
U = ( 1  1 ; 0  1 ).
Then a direct multiplication shows that U = XY and we show that SL_2(Z) is generated
by X and U, which implies that it is also generated by X and Y. Further
U^n = ( 1  n ; 0  1 )
so that U has infinite order.
Let A = ( a  b ; c  d ) ∈ SL_2(Z). Then we have
XA = ( −c  −d ; a  b ) and U^k A = ( a + kc  b + kd ; c  d )
for any k ∈ Z. We may assume that |c| ≤ |a|, otherwise start with XA rather than A.
If c = 0 then A = ±U^q for some q. If A = U^q then certainly A is in the group
generated by X and U. If A = −U^q then A = X^2 U^q since X^2 = −1. It follows
that here also A is in the group generated by X and U.
Now suppose c ≠ 0. Apply the Euclidean algorithm to a and c in the following
modified way:
a = q_0 c + r_1
−c = q_1 r_1 + r_2
r_1 = q_2 r_2 + r_3
  ⋮
(−1)^n r_{n−1} = q_n r_n + 0
where r_n = ±1 since (a, c) = 1. Then
XU^{q_n} ⋯ XU^{q_0} A = ±U^{q_{n+1}}
with q_{n+1} ∈ Z.
Therefore
A = X^m U^{q_0} XU^{q_1} ⋯ XU^{q_n} XU^{q_{n+1}}
with m = 0, 1, 2, 3; q_0, q_1, …, q_{n+1} ∈ Z and q_0, …, q_n ≠ 0. Therefore X and U
and hence X and Y generate SL_2(Z).
We must now show that
X^4 = Y^3 = YX^2Y^{-1}X^{-2} = 1
are a complete set of defining relations for SL_2(Z), that is, that every relation on these
generators is derivable from these. It is straightforward to see that X and Y do satisfy
these relations. Assume then that we have a relation
S = X^{ε_1} Y^{α_1} X^{ε_2} Y^{α_2} ⋯ Y^{α_n} X^{ε_{n+1}} = 1
with all ε_i, α_j ∈ Z. Using the set of relations
X^4 = Y^3 = YX^2Y^{-1}X^{-2} = 1
we may transform S so that
S = X^{ε_1} Y^{α_1} ⋯ Y^{α_m} X^{ε_{m+1}}
with ε_1, ε_{m+1} = 0, 1, 2 or 3 and α_i = 1 or 2 for i = 1, …, m and m ≥ 0. Multiplying
by a suitable power of X we obtain
Y^{α_1} X ⋯ Y^{α_m} X = X^{α} = S_1
with m ≥ 0 and α = 0, 1, 2 or 3. Assume that m ≥ 1 and let
S_1 = ( a  −b ; −c  d ).
We show by induction that
a, b, c, d ≥ 0, b + c > 0
or
a, b, c, d ≤ 0, b + c < 0.
This claim for the entries of S_1 is true for
YX = ( 1  0 ; −1  1 ) and Y^2 X = ( −1  1 ; 0  −1 ).
Suppose it is correct for S_2 = ( a_1  −b_1 ; −c_1  d_1 ). Then
YXS_2 = ( a_1  −b_1 ; −(a_1 + c_1)  b_1 + d_1 )
and
Y^2 XS_2 = ( −(a_1 + c_1)  b_1 + d_1 ; c_1  −d_1 ).
Therefore the claim is correct for all S_1 with m ≥ 1. This gives a contradiction, for
the entries of X^α with α = 0, 1, 2 or 3 do not satisfy the claim. Hence m = 0 and S
can be reduced to a trivial relation by the given set of relations. Therefore they are a
complete set of defining relations and the theorem is proved.
Corollary 14.3.11. The modular group M = PSL_2(Z) has the presentation
M = ⟨x, y ; x^2 = y^3 = 1⟩.
Further x, y can be taken as the linear fractional transformations
x : z′ = −1/z and y : z′ = −1/(z + 1).
Proof. The center of SL_2(Z) is {1, −1}. Since X^2 = −1, setting X^2 = 1 in the
presentation for SL_2(Z) gives the presentation for M. Writing the projective matrices as
linear fractional transformations gives the second statement.
This corollary says that M is the free product of a cyclic group of order 2 and a
cyclic group of order 3, a concept we will introduce in Section 14.7.
We note that there is an elementary alternative proof of Corollary 14.3.11, as far
as showing that X^2 = Y^3 = 1 are a complete set of defining relations. As linear
fractional transformations we have
X(z) = −1/z, Y(z) = −1/(z + 1), Y^2(z) = −(z + 1)/z.
Now let
R^+ = {x ∈ R : x > 0} and R^− = {x ∈ R : x < 0}.
Then
X(R^−) ⊆ R^+ and Y^α(R^+) ⊆ R^−, α = 1, 2.
Let S be a relator. Using the relations X^2 = Y^3 = 1 and a suitable conjugation we may
assume that either S = 1 is a consequence of these relations or that
S = Y^{α_1} X Y^{α_2} X ⋯ X Y^{α_n}
with 1 ≤ α_i ≤ 2.
In this second case if x ∈ R^+ then S(x) ∈ R^− and hence S ≠ 1.
This type of ping-pong argument can be used in many examples (see [23], [15]
and [19]). As another example consider the unimodular matrices
A = ( 0  1 ; −1  2 ), B = ( 0  −1 ; 1  2 ).
Let A, B denote the corresponding linear fractional transformations in the modular
group M. We have
A^n = ( −n + 1  n ; −n  n + 1 ), B^n = ( −n + 1  −n ; n  n + 1 )
for n ∈ Z.
In particular A and B have infinite order. Now
A^n(R^−) ⊆ R^+ and B^n(R^+) ⊆ R^−
for all n ≠ 0. The ping-pong argument used for any element of the type
S = A^{n_1} B^{m_1} ⋯ B^{m_k} A^{n_{k+1}}
with all n_i, m_i ≠ 0 shows that S(x) ∈ R^+ if x ∈ R^−. It follows
that there are no nontrivial relations on A and B and therefore the subgroup of M
generated by A, B must be a free group of rank 2.
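Both the closed forms for A^n, B^n and the ping-pong containments can be spot-checked numerically. The snippet below (our own helper names; exact rational arithmetic via the fractions module) tests them for a range of exponents and sample points:

```python
# Spot-checking the closed forms A^n, B^n and the ping-pong containments
# A^n(R^-) in R^+ and B^n(R^+) in R^- on sample points.
from fractions import Fraction

A = ((0, 1), (-1, 2))
B = ((0, -1), (1, 2))

def mul(M, N):
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def inv(M):
    (a, b), (c, d) = M          # det M = 1
    return ((d, -b), (-c, a))

def mat_pow(M, n):
    R, G = ((1, 0), (0, 1)), (M if n >= 0 else inv(M))
    for _ in range(abs(n)):
        R = mul(R, G)
    return R

def lft(M, z):                  # the linear fractional transformation of M
    return Fraction(M[0][0] * z + M[0][1], M[1][0] * z + M[1][1])

for n in range(-5, 6):
    assert mat_pow(A, n) == ((1 - n, n), (-n, n + 1))
    assert mat_pow(B, n) == ((1 - n, -n), (n, n + 1))

neg = [Fraction(-1, 3), Fraction(-7, 2), Fraction(-100)]
pos = [Fraction(1, 3), Fraction(5, 2), Fraction(42)]
for n in [k for k in range(-4, 5) if k != 0]:
    assert all(lft(mat_pow(A, n), z) > 0 for z in neg)
    assert all(lft(mat_pow(B, n), z) < 0 for z in pos)
print("closed forms and ping-pong containments hold on the samples")
```

Of course the sample check is only evidence; the containments themselves are proved by the induction implicit in the closed forms.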
To close this section we give a nice number theoretical application of the modular
group. First we need the following corollary to Corollary 14.3.11.
Corollary 14.3.12. Let M = ⟨X, Y ; X^2 = Y^3 = 1⟩ be the modular group. If A is
an element of order 2 then A is conjugate to X. If B is an element of order 3 then B
is conjugate to either Y or Y^2.
Definition 14.3.13. Let a, n be relatively prime integers with a ≠ 0, n ≥ 1. Then
a is a quadratic residue mod n if there exists an x ∈ Z with x^2 ≡ a mod n, that is
a = x^2 + kn for some k ∈ Z.
The following is called Fermat's two-square theorem.
Theorem 14.3.14 (Fermat's two-square theorem). Let n > 0 be a natural number.
Then n = a^2 + b^2 with (a, b) = 1 if and only if −1 is a quadratic residue modulo n.
Proof. Suppose −1 is a quadratic residue mod n. Then there exists an x with
x^2 ≡ −1 mod n, or x^2 = −1 − mn. This implies that −x^2 − mn = 1, so that there must
exist a projective unimodular matrix
A = ±( x  n ; m  −x ).
It is straightforward that A^2 = 1, so by Corollary 14.3.12, A is conjugate within M
to X. Now consider conjugates of X within M. Let T = ( a  b ; c  d ). Then
T^{-1} = ( d  −b ; −c  a )
and
TXT^{-1} = ( a  b ; c  d )( 0  1 ; −1  0 )( d  −b ; −c  a )
= ±( −(bd + ac)  a^2 + b^2 ; −(c^2 + d^2)  bd + ac ).   (*)
Therefore any conjugate of X must have form (*) and therefore A must have form (*).
Therefore n = a^2 + b^2. Further (a, b) = 1 since in finding form (*) we had ad − bc = 1.
Conversely suppose n = a^2 + b^2 with (a, b) = 1. Then there exist c, d ∈ Z with
ad − bc = 1 and hence there exists a projective unimodular matrix
T = ±( a  b ; c  d ).
Then
TXT^{-1} = ±( α  a^2 + b^2 ; γ  −α ) = ±( α  n ; γ  −α ).
This has determinant one, so
−α^2 − nγ = 1 ⟹ α^2 = −1 − nγ ⟹ α^2 ≡ −1 mod n.
Therefore −1 is a quadratic residue mod n.
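Theorem 14.3.14 is easy to confirm by brute force for small n. The following check (our own function names, not from the text) tests the equivalence between "−1 is a quadratic residue mod n" and "n is a coprime sum of two squares":

```python
# Brute-force check of Fermat's two-square criterion for small n.
from math import gcd, isqrt

def minus_one_is_qr(n):
    """Is -1 a quadratic residue mod n?"""
    return any((x * x + 1) % n == 0 for x in range(n))

def coprime_two_square(n):
    """Some representation n = a^2 + b^2 with gcd(a, b) = 1, if one exists."""
    for a in range(isqrt(n) + 1):
        b2 = n - a * a
        b = isqrt(b2)
        if b * b == b2 and gcd(a, b) == 1:
            return (a, b)
    return None

for n in range(1, 400):
    assert (coprime_two_square(n) is not None) == minus_one_is_qr(n)
print("criterion verified for n = 1, ..., 399")
```

For instance 5 = 1^2 + 2^2 with 2^2 ≡ −1 mod 5, while 21 has no coprime two-square representation and −1 is not a square mod 21.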
This type of group theoretical proof can be extended in several directions. Kern-Isberner
and Rosenberger [21] considered groups of matrices of the form
U = ( a  b√N ; c√N  d ), a, b, c, d, N ∈ Z, ad − Nbc = 1
or
U = ( a√N  b ; c  d√N ), a, b, c, d, N ∈ Z, Nad − bc = 1.
They then proved that if
N ∈ {1, 2, 4, 5, 6, 8, 9, 10, 12, 13, 16, 18, 22, 25, 28, 37, 58}
and n ∈ N with (n, N) = 1 then:
(1) If −N is a quadratic residue mod n and n is a quadratic residue mod N then n
can be written as n = x^2 + Ny^2 with x, y ∈ Z.
(2) Conversely if n = x^2 + Ny^2 with x, y ∈ Z and (x, y) = 1 then −N is a
quadratic residue mod n and n is a quadratic residue mod N.
The proof of the above results depends on the class number of Q(√−N) (see [21]).
In another direction Fine [33] and [32] showed that the Fermat two-square property
is actually a property satisfied by many rings R. These are called sum of squares
rings. For example if p ≡ 3 mod 4 then Z_{p^n} for n > 1 is a sum of squares ring.
14.4 Presentations of Subgroups
Given a group presentation G = ⟨X ; R⟩ it is possible to find a presentation for a subgroup
H of G. The procedure to do this is called the Reidemeister–Schreier process
and is a consequence of the explicit version of the Nielsen–Schreier theorem (Theorem
14.2.9). We give a brief description. A complete description and a verification of
its correctness can be found in [24] or in [15].
Let G be a group with the presentation ⟨a_1, …, a_n ; R_1, …, R_k⟩. Let H be a
subgroup of G and T a Schreier system for G mod H defined analogously as above.
Reidemeister–Schreier Process. Let G, H and T be as above. Then H is generated
by the set
{S_{t,a_ν} : t ∈ T, a_ν ∈ {a_1, …, a_n}, S_{t,a_ν} ≠ 1}
with a complete set of defining relations given by conjugates of the original relators
rewritten in terms of the subgroup generating set.
In order to actually rewrite the relators in terms of the new generators we use a mapping
τ on words on the generators of G called the Reidemeister rewriting process.
This map is defined as follows: If
W = a_{ν_1}^{e_1} a_{ν_2}^{e_2} ⋯ a_{ν_j}^{e_j} with e_i = ±1 defines an element of H
then
τ(W) = S_{t_1, a_{ν_1}}^{e_1} S_{t_2, a_{ν_2}}^{e_2} ⋯ S_{t_j, a_{ν_j}}^{e_j}
where t_i is the coset representative of the initial segment of W preceding a_{ν_i} if e_i = 1,
and t_i is the representative of the initial segment of W up to and including a_{ν_i}^{-1} if
e_i = −1. The complete set of relators rewritten in terms of the subgroup generators
is then given by
{τ(t R_i t^{-1})} with t ∈ T, where R_i runs over all relators in G.
We present two examples: one with a finite group and then an important example
with a free group which shows that a countable free group contains free subgroups of
arbitrary ranks.
Example 14.4.1. Let G = A_4 be the alternating group on 4 symbols. Then a presentation
for G is
G = A_4 = ⟨a, b ; a^2 = b^3 = (ab)^3 = 1⟩.
Let H = A_4′ be the commutator subgroup. We use the above method to find a presentation
for H. Now
G/H = A_4/A_4′ = ⟨a, b ; a^2 = b^3 = (ab)^3 = [a, b] = 1⟩ = ⟨b ; b^3 = 1⟩.
Therefore [A_4 : A_4′] = 3. A Schreier system is then {1, b, b^2}. The generators for A_4′
are then
X_1 = S_{1,a} = a,  X_2 = S_{b,a} = bab^{-1},  X_3 = S_{b^2,a} = b^2 a b^{-2}
while the relations are
1. τ(aa) = S_{1,a} S_{1,a} = X_1^2
2. τ(baab^{-1}) = X_2^2
3. τ(b^2 aab^{-2}) = X_3^2
4. τ(bbb) = 1
5. τ(b bbb b^{-1}) = 1
6. τ(b^2 bbb b^{-2}) = 1
7. τ(ababab) = S_{1,a} S_{b,a} S_{b^2,a} = X_1 X_2 X_3
8. τ(b ababab b^{-1}) = S_{b,a} S_{b^2,a} S_{1,a} = X_2 X_3 X_1
9. τ(b^2 ababab b^{-2}) = S_{b^2,a} S_{1,a} S_{b,a} = X_3 X_1 X_2.
Therefore, after eliminating redundant relations and using that X_3 = (X_1 X_2)^{-1}, we get as
a presentation for A_4′
⟨X_1, X_2 ; X_1^2 = X_2^2 = (X_1 X_2)^2 = 1⟩.
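Example 14.4.1 can be double-checked in a concrete realization. Taking a ↦ (0 1)(2 3) and b ↦ (0 1 2) — one possible (assumed) choice of generators for A_4 on the symbols 0–3 — the sketch below verifies the defining relations, builds the Schreier generators, and confirms that they satisfy the derived presentation of the Klein four group:

```python
# Checking Example 14.4.1 in a permutation realization of A_4 on {0,1,2,3}.
from itertools import product

def mul(p, q):                      # apply q first, then p
    return tuple(p[q[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

e = (0, 1, 2, 3)
a = (1, 0, 3, 2)                    # (0 1)(2 3)
b = (1, 2, 0, 3)                    # (0 1 2)

# defining relations of A_4 = <a, b ; a^2 = b^3 = (ab)^3 = 1>
ab = mul(a, b)
assert mul(a, a) == e and mul(b, mul(b, b)) == e
assert mul(ab, mul(ab, ab)) == e

# the Schreier generators of the commutator subgroup
X1 = a
X2 = mul(b, mul(a, inv(b)))                       # b a b^{-1}
X3 = mul(mul(b, b), mul(a, inv(mul(b, b))))       # b^2 a b^{-2}

# relations of the derived presentation <X1, X2 ; X1^2 = X2^2 = (X1 X2)^2 = 1>
X12 = mul(X1, X2)
assert mul(X1, X1) == e and mul(X2, X2) == e and mul(X12, X12) == e
assert X3 == X12                    # X3 is redundant

# X1, X2 generate the Klein four group = A_4'
H = {e, X1, X2, X12}
assert all(mul(p, q) in H for p, q in product(H, repeat=2))
print("A_4' is the Klein four group on X1, X2")
```

This confirms that A_4′ is the Klein four group, in agreement with the presentation just derived.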
Example 14.4.2. Let F = ⟨x, y ; ⟩ be the free group of rank 2. Let H be the commutator
subgroup. Then
F/H = ⟨x, y ; [x, y] = 1⟩ = Z × Z,
a free abelian group of rank 2. It follows that H has infinite index in F. As Schreier
coset representatives we can take
t_{m,n} = x^m y^n,  m = 0, ±1, ±2, …,  n = 0, ±1, ±2, ….
The corresponding nontrivial Schreier generators for H are
x_{m,n} = x^m y^n x y^{-n} x^{-m-1},  m = 0, ±1, ±2, …,  n = ±1, ±2, ….
The relations are only trivial and therefore H is free on the countably infinitely many
generators above. It follows that a free group of rank 2 contains as a subgroup a
free group of countably infinite rank. Since a free group of countably infinite rank
contains as subgroups free groups of all finite ranks, it follows that a free group of
rank 2 contains as a subgroup a free subgroup of any arbitrary finite rank.
Theorem 14.4.3. Let F be free of rank 2. Then the commutator subgroup F′ is free
of countably infinite rank. In particular a free group of rank 2 contains as a subgroup
a free group of any finite rank n.
Corollary 14.4.4. Let n, m be any pair of positive integers with n, m ≥ 2 and let F_n, F_m
be free groups of ranks n and m respectively. Then F_n can be embedded into F_m and F_m
can be embedded into F_n.
14.5 Geometric Interpretation
Combinatorial group theory has its origins in topology and complex analysis. Especially
important in the development is the theory of the fundamental group. This connection
is so deep that many people consider combinatorial group theory as the study
of the fundamental group – especially the fundamental group of a low-dimensional
complex. This connection proceeds in both directions. The fundamental group provides
methods and insights to study the topology. In the other direction the topology
can be used to study the groups.
Recall that if X is a topological space then its fundamental group based at a point
x_0, denoted π(X, x_0), is the group of all homotopy classes of closed paths at x_0. If X
is path connected then the fundamental groups at different points are all isomorphic
and we can speak of the fundamental group of X, which we will denote π(X). Historically
group presentations were developed to handle the fundamental groups of spaces
which allowed simplicial or cellular decompositions. In these cases the presentation
of the fundamental group can be read off from the combinatorial decomposition of
the space.
An (abstract) simplicial complex or cell complex K is a topological space consisting
of a set of points called the vertices, which we will denote V(K), and collections
of subsets of vertices called simplexes or cells which have the property that the intersection
of any two simplices is again a simplex. If n is the number of vertices in a cell
then n − 1 is called its dimension. Hence the set of vertices are the 0-dimensional cells
and a simplex {v_1, …, v_n} is an (n − 1)-dimensional cell. The 1-dimensional cells
are called edges. These have the form {u, v} where u and v are vertices. One should
think of the cells in a geometric manner so that the edges are really edges, the 2-cells
are filled triangles that are equivalent to disks, and so on. The maximum dimension of
any cell in a complex K is called the dimension of K. From now on we will assume
that our simplicial complexes are path connected.
A graph Γ is just a 1-dimensional simplicial complex. Hence Γ consists of just
vertices and edges. If K is any complex then the set of vertices and edges is called the
1-skeleton of K. Similarly all the cells of dimension less than or equal to 2 comprise
the 2-skeleton. A connected graph with no closed paths in it is called a tree. If K is
any complex then a maximal tree in K is a tree that is contained in no other tree
within K.
From the viewpoint of combinatorial group theory what is relevant is that if K is
a complex then a presentation of its fundamental group can be determined from its
2-skeleton and read off directly. In particular:
Theorem 14.5.1. Suppose that K is a connected cell complex. Suppose that T is
a maximal tree within the 1-skeleton of K. Then a presentation for π(K) can be
determined in the following manner:
Generators: the edges of K, with the edges outside the maximal tree T the essential ones;
Relations: (a) {u, v} = 1 if {u, v} is an edge in T;
(b) {u, v}{v, w} = {u, w} if u, v, w lie in a simplex of K.
From this the following is obvious:
Corollary 14.5.2. The fundamental group of a connected graph is free. Further its
rank is the number of edges outside a maximal tree.
A connected graph is homotopic to a wedge or bouquet of circles. If there are
n circles in a bouquet of circles then the fundamental group is free of rank n. The
converse is also true: a free group can be realized as the fundamental group of a
wedge of circles.
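Corollary 14.5.2 is directly computable: a breadth-first search tree is a maximal tree, so the rank of the fundamental group of a connected graph is E − (V − 1). A minimal sketch (our own function name, not from the text):

```python
# Rank of the free fundamental group of a connected graph:
# number of edges outside a maximal tree, i.e. E - (V - 1).
from collections import deque

def fundamental_group_rank(vertices, edges):
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen = {vertices[0]}
    tree_edges = 0
    queue = deque([vertices[0]])
    while queue:                       # a BFS tree is a maximal tree
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                tree_edges += 1
                queue.append(v)
    assert len(seen) == len(vertices)  # connectedness
    return len(edges) - tree_edges

# a 4-cycle with one chord: V = 4, E = 5, so the rank is 5 - 3 = 2
print(fundamental_group_rank([0, 1, 2, 3],
                             [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))  # 2
```

Collapsing the spanning tree shows this graph is homotopic to a bouquet of two circles.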
An important concept in applying combinatorial group theory is that of a covering
complex.
Definition 14.5.3. Suppose that K is a complex. Then a complex K_1 is a covering
complex for K if there exists a surjection p : K_1 → K, called a covering map, with
the property that for any cell s ∈ K the inverse image p^{-1}(s) is a union of pairwise
disjoint cells in K_1 and p restricted to any of the preimage cells is a homeomorphism.
That is, for each simplex S in K we have
p^{-1}(S) = ⋃ S_i and p : S_i → S is a bijection for each i.
The following then becomes clear.
Lemma 14.5.4. If K_1 is a connected covering complex for K then K_1 and K have
the same dimension.
What is crucial in using covering complexes to study the fundamental group is that
there is a Galois theory of covering complexes and maps. The covering map p induces
a homomorphism of the fundamental group which we will also call p. Then:
Theorem 14.5.5. Let K_1 be a covering complex of K with covering map p. Then
p(π(K_1)) is a subgroup of π(K). Conversely to each subgroup H of π(K) there is a
covering complex K_1 with π(K_1) ≅ H. Hence there is a one-to-one correspondence
between subgroups of the fundamental group of a complex K and covers of K.
We will see the analog of this theorem in regard to algebraic field extensions in
Chapter 15.
A topological space X is simply connected if π(X) = {1}. Hence the covering
complex of K corresponding to the identity in π(K) is simply connected. This is
called the universal cover of K since it covers any other cover of K.
Based on Theorem 14.5.1 we get a very simple proof of the Nielsen–Schreier theorem.
Theorem 14.5.6 (Nielsen–Schreier). Any subgroup of a free group is free.
Proof. Let F be a free group. Then F = π(K) where K is a connected graph. Let
H be a subgroup of F. Then H corresponds to a cover K_1 of K. But a cover is also
1-dimensional and hence H = π(K_1) where K_1 is a connected graph; hence H
is also free.
The fact that a presentation of a fundamental group of a simplicial complex is determined
by its 2-skeleton goes in the other direction also. That is, given an arbitrary
presentation there exists a 2-dimensional complex whose fundamental group has that
presentation. Essentially, given a presentation ⟨X ; R⟩ we consider a wedge of circles
with cardinality |X|. We then paste on a 2-cell for each relator W in R bounded by
the path corresponding to the word W.
Theorem 14.5.7. Given an arbitrary presentation ⟨X ; R⟩ there exists a connected
2-complex K with π(K) = ⟨X ; R⟩.
We note that the books by Rotman [26] and Camps, große Rebel and Rosenberger
[15] have very nice, detailed and accessible descriptions of groups and complexes.
Cayley and then Dehn introduced for each group G a graph, now called its Cayley
graph, as a tool to apply complexes to the study of G. The Cayley graph is actually
tied to a presentation and not to the group itself. Gromov reversed the procedure and
showed that by considering the geometry of the Cayley graph one could get information
about the group. This led to the development of the theory of hyperbolic groups
(see for instance [15]).
Definition 14.5.8. Let G = ⟨X ; R⟩ be a presentation. We form a graph Γ(G, X) in
the following way. Let A = X ∪ X^{-1}. For the vertex set of Γ(G, X) we take the
elements of G, that is V(Γ) = {g : g ∈ G}. The edges of Γ are given by the set
{(g, x) : g ∈ G, x ∈ A}. We call g the initial point and gx the terminal point. That
is, two points g, g_1 in the vertex set are connected by an edge if g_1 = gx for some
x ∈ A. We have (g, x)^{-1} = (gx, x^{-1}). This gives a directed graph called the Cayley
graph of G on the generating set X.
Call x the label on the edge (g, x). Given g ∈ G, then g is represented by at least
one word W in A. This represents a path in the Cayley graph. The length of the word
W is the length of the path. This is equivalent to making each edge have length one.
If we take the distance between two points as the minimum path length we make the
Cayley graph a metric space. This metric is called the word metric. If we extend this
metric to all pairs of points in the Cayley graph in the obvious way (making each edge
a unit real interval) then the Cayley graph becomes a geodesic metric space. Each
closed path in the Cayley graph represents a relator.
By left multiplication the group G acts on the Cayley graph as a group of isometries.
Further the action of G on the Cayley graph is without inversion, that is ge ≠ e^{-1}
if e is an edge.
If we sew in a 2-cell for each closed path in the Cayley graph we get a simply
connected 2-complex called the Cayley complex.
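The word metric of Definition 14.5.8's graph can be computed by breadth-first search from the identity. As an illustration (our choice of example, not the book's), here is the Cayley graph of S_3 on a transposition and a 3-cycle:

```python
# Word metric on the Cayley graph of S_3 with X = {a, b},
# a a transposition and b a 3-cycle, via BFS from the identity.
from collections import deque

def mul(p, q):                      # apply q first, then p
    return tuple(p[q[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

a = (1, 0, 2)                       # (0 1)
b = (1, 2, 0)                       # (0 1 2)
A = [a, inv(a), b, inv(b)]          # A = X with the inverses adjoined

e = (0, 1, 2)
dist = {e: 0}
queue = deque([e])
while queue:
    g = queue.popleft()
    for x in A:
        h = mul(g, x)               # edge (g, x) runs from g to gx
        if h not in dist:
            dist[h] = dist[g] + 1
            queue.append(h)

print(len(dist), max(dist.values()))   # 6 vertices, diameter 2
```

Every element of S_3 lies within distance 2 of the identity in this word metric, since each is a, a power of b, or a times a power of b.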
14.6 Presentations of Factor Groups
Let G be a group with a presentation G = ⟨X ; R⟩. Suppose that H is a factor
group of G, that is H ≅ G/N for some normal subgroup N of G. We show that
a presentation for H is then H = ⟨X ; R ∪ R_1⟩ where R_1 is a, perhaps additional,
system of relators.
Theorem 14.6.1 (Dyck's theorem). Let G = ⟨X ; R⟩ and suppose that H ≅ G/N
where N is a normal subgroup of G. Then a presentation for H is ⟨X ; R ∪ R_1⟩ for
some set of words R_1 on X. Conversely the presentation ⟨X ; R ∪ R_1⟩ defines a group
that is a factor group of G.
Proof. Since each element of H is a coset of N, the elements have the form gN for g ∈ G. It is
clear then that the images of X generate H. Further since H is a homomorphic image
of G each relator in R is a relator in H. Let N_1 be a set of elements that generate
N and let R_1 be the corresponding words in the free group on X. Then R_1 is an
additional set of relators in H. Hence R ∪ R_1 is a set of relators for H. Any relator
in H is either a relator in G, and hence a consequence of R, or can be realized as an
element of G that lies in N, and hence is a consequence of R_1. Therefore R ∪ R_1 is a
complete set of defining relators for H and H has the presentation H = ⟨X ; R ∪ R_1⟩.
Conversely, let G = ⟨X ; R⟩ and G_1 = ⟨X ; R ∪ R_1⟩. Then G = F(X)/N_1 where N_1 =
N(R) and G_1 = F(X)/N_2 where N_2 = N(R ∪ R_1). Hence N_1 ⊆ N_2. The normal
subgroup N_2/N_1 of F(X)/N_1 corresponds to a normal subgroup H of G and
therefore by the isomorphism theorem
G/H ≅ (F(X)/N_1)/(N_2/N_1) ≅ F(X)/N_2 ≅ G_1.
14.7 Group Presentations and Decision Problems
We have seen that given any group G there exists a presentation for it, G = ⟨X ; R⟩.
In the other direction, given any presentation ⟨X ; R⟩ we have seen that there is a group
with that presentation. In principle every question about a group can be answered via
a presentation. However things are not that simple. Max Dehn, in his pioneering work
on combinatorial group theory about 1910, introduced the following three fundamental
group decision problems.
(1) Word Problem: Suppose G is a group given by a finite presentation. Does there
exist an algorithm to determine if an arbitrary word w in the generators of G
defines the identity element of G?
(2) Conjugacy Problem: Suppose G is a group given by a finite presentation. Does
there exist an algorithm to determine if an arbitrary pair of words u, v in the
generators of G define conjugate elements of G?
(3) Isomorphism Problem: Does there exist an algorithm to determine, given two
arbitrary finite presentations, whether the groups they present are isomorphic or
not?
All three of these problems have negative answers in general. That is, for each of
these problems one can find a finite presentation for which these questions cannot
be answered algorithmically (see [23]). Attempts at solutions, and solutions in
restricted cases, have been of central importance in combinatorial group theory. For
this reason combinatorial group theory has always searched for and studied classes of
groups in which these decision problems are solvable.
For finitely generated free groups there are simple and elegant solutions to all three
problems. If F is a free group on x_1, …, x_n and W is a freely reduced word in
x_1, …, x_n then W ≠ 1 if and only if L(W) ≥ 1. Since freely reducing any word
to a freely reduced word is algorithmic, this provides a solution to the word problem.
Further a freely reduced word W = x_{ν_1}^{e_1} x_{ν_2}^{e_2} ⋯ x_{ν_n}^{e_n} is cyclically reduced if ν_1 ≠ ν_n,
or, if ν_1 = ν_n, then e_1 ≠ −e_n. Clearly then every element of a free group is conjugate
to an element given by a cyclically reduced word called a cyclic reduction. This leads
to a solution to the conjugacy problem. Suppose V and W are two words in the
generators of F and \overline{V}, \overline{W} are respective cyclic reductions. Then V is conjugate to
W if and only if \overline{V} is a cyclic permutation of \overline{W}. Finally two finitely generated free
groups are isomorphic if and only if they have the same rank.
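The free-group solutions to the word and conjugacy problems are short programs. In the sketch below (our conventions, not the book's: lowercase letters are generators, uppercase their inverses) free reduction settles the word problem and comparison of cyclic reductions up to cyclic permutation settles the conjugacy problem:

```python
# Word and conjugacy problem for a finitely generated free group.
# Convention (ours): lowercase letters are generators, uppercase their inverses.

def free_reduce(w):
    out = []
    for s in w:
        if out and out[-1] == s.swapcase():
            out.pop()
        else:
            out.append(s)
    return "".join(out)

def is_identity(w):                  # word problem
    return free_reduce(w) == ""

def cyclic_reduce(w):
    w = free_reduce(w)
    while len(w) >= 2 and w[0] == w[-1].swapcase():
        w = w[1:-1]
    return w

def conjugate(v, w):                 # conjugacy problem
    v, w = cyclic_reduce(v), cyclic_reduce(w)
    return len(v) == len(w) and (v in w + w if v else True)

print(is_identity("xyYX"))           # x y y^{-1} x^{-1} = 1 -> True
print(conjugate("Yxy", "x"))         # y^{-1} x y ~ x -> True
print(conjugate("xy", "xY"))         # False
```

The substring test `v in w + w` is the standard check that one word is a cyclic permutation of another of the same length.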
14.8 Group Amalgams: Free Products and Direct Products
Closely related to free groups in both form and properties are free products of groups.
Let A = ⟨a_1, … ; R_1, …⟩ and B = ⟨b_1, … ; S_1, …⟩ be two groups. We consider A
and B to be disjoint. Then:
Definition 14.8.1. The free product of A and B, denoted A * B, is the group G with
the presentation ⟨a_1, …, b_1, … ; R_1, …, S_1, …⟩, that is, the generators of G consist
of the disjoint union of the generators of A and B with relators taken as the disjoint
union of the relators R_i of A and S_j of B. A and B are called the factors of G.
In an analogous manner the concept of a free product can be extended to an arbitrary
collection of groups.
Definition 14.8.2. If A_α = ⟨gens A_α ; rels A_α⟩, α ∈ I, is a collection of groups, then
their free product G = * A_α is the group whose generators consist of the disjoint
union of the generators of the A_α and whose relators are the disjoint union of the
relators of the A_α.
Free products exist and are nontrivial. We have:
Theorem 14.8.3. Let G = A * B. Then the maps A → G and B → G are injections.
The subgroup of G generated by the generators of A has the presentation
⟨generators of A ; relators of A⟩, that is, it is isomorphic to A. Similarly for B. Thus A
and B can be considered as subgroups of G. In particular A * B is nontrivial if A
and B are.
Free products share many properties with free groups. First of all there is a categorical
formulation of free products. Specifically we have:
Theorem 14.8.4. A group G is the free product of its subgroups A and B if A and B
generate G and, given homomorphisms f_1 : A → H, f_2 : B → H into a group H,
there exists a unique homomorphism f : G → H extending f_1 and f_2.
Secondly each element of a free product has a normal form related to the reduced
words of free groups. If G = · + T then a reduced sequence or reduced word in G
is a sequence g
1
g
2
. . . g
n
, n _ 0, with g
i
= 1, each g
i
in either · or T and g
i
. g
i÷1
not both in the same factor. Then:
Theorem 14.8.5. Each element g ÷ G = · + T has a unique representation as a
reduced sequence. The length n is unique and is called the syllable length. The case
n = 0 is reserved for the identity.
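The normal form of Theorem 14.8.5 is computable: to multiply two reduced sequences, concatenate them and repeatedly merge adjacent syllables from the same factor, dropping any syllable that becomes trivial. A minimal sketch for Z₂ ∗ Z₃ (the names and the representation of syllables are my own):

```python
# Syllables are pairs (factor, exponent) with factor 'A' (cyclic of order 2)
# or 'B' (cyclic of order 3) and a nonzero exponent modulo that order.
ORDER = {'A': 2, 'B': 3}

def multiply(u, v):
    """Concatenate two reduced sequences and restore the normal form."""
    out = []
    for factor, exp in list(u) + list(v):
        if out and out[-1][0] == factor:
            f, e = out.pop()
            e = (e + exp) % ORDER[factor]   # merge adjacent syllables of one factor
            if e != 0:
                out.append((factor, e))     # drop identity syllables entirely
        else:
            out.append((factor, exp % ORDER[factor]))
    return out
```

The stack-based loop handles cascading cancellations: multiplying a·b·a by a·b² first cancels a·a, then b·b², leaving the single syllable a (syllable length 1).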
A reduced word g₁ ⋯ gₙ ∈ G = A ∗ B is called cyclically reduced if either n ≤ 1, or n ≥ 2 and g₁ and gₙ are from different factors. Certainly every element of G is conjugate to a cyclically reduced word.

From this we obtain several important properties of free products which are analogous to properties in free groups.

Theorem 14.8.6. An element of finite order in a free product is conjugate to an element of finite order in a factor. In particular a finite subgroup of a free product is entirely contained in a conjugate of a factor.

Theorem 14.8.7. If two elements of a free product commute then they are both powers of a single element or are contained in a conjugate of an abelian subgroup of a factor.

Finally a theorem of Kurosh extends the Nielsen–Schreier theorem to free products.

Theorem 14.8.8 (Kurosh). A subgroup of a free product is also a free product. Explicitly, if G = A ∗ B and H ⊆ G then

H = F ∗ (∗A_α) ∗ (∗B_β)

where F is a free group, (∗A_α) is a free product of conjugates of subgroups of A and (∗B_β) is a free product of conjugates of subgroups of B.

We note that the rank of F as well as the number of the other factors can be computed. A complete discussion of these is in [24], [23] and [15].
If A and B are disjoint groups then we now have two types of products forming new groups out of them, the free product and the direct product. In both these products the original factors inject. In the free product there are no relations between elements of A and elements of B, while in the direct product each element of A commutes with each element of B. If a ∈ A and b ∈ B a cross commutator is [a, b] = aba⁻¹b⁻¹. The direct product is a factor group of the free product, and the kernel is precisely the normal subgroup generated by all the cross commutators.

Theorem 14.8.9. Suppose that A and B are disjoint groups. Then

A × B = (A ∗ B)/H

where H is the normal closure in A ∗ B of all the cross commutators. In particular a presentation for A × B is given by

A × B = (gens A, gens B : rels A, rels B, [a, b] for all a ∈ A, b ∈ B).
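For a small concrete check of Theorem 14.8.9, the presentation can be handed to a computer algebra system. The sketch below (assuming SymPy's finitely presented group machinery) takes A = B = Z₂: adding the cross commutator [a, b] to the free product presentation (a, b : a², b²) collapses the infinite group Z₂ ∗ Z₂ to Z₂ × Z₂ of order 4.

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

F, a, b = free_group("a b")
# Free product presentation of Z2 * Z2 plus the cross commutator a b a^-1 b^-1:
direct_product = FpGroup(F, [a**2, b**2, a*b*a**-1*b**-1])
print(direct_product.order())   # → 4, i.e. Z2 x Z2
```

The order is computed by coset enumeration, which terminates here because the quotient is finite; without the cross commutator the enumeration would not terminate.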
14.9 Exercises

1. Let X⁻¹ be a set disjoint from X but bijective to X. A word in X is a finite sequence of letters from the alphabet X ∪ X⁻¹. That is, a word has the form

w = x_{i₁}^{ε₁} x_{i₂}^{ε₂} ⋯ x_{iₙ}^{εₙ}

where x_{iⱼ} ∈ X and εⱼ = ±1. Let W(X) be the set of all words on X. If w₁, w₂ ∈ W(X) we say that w₁ is equivalent to w₂, denoted w₁ ∼ w₂, if w₁ can be converted to w₂ by a finite string of insertions and deletions of trivial words. Verify that this is an equivalence relation on W(X).
2. In F(X) let N(X) be the subgroup generated by all squares in F(X), that is

N(X) = ⟨{g² : g ∈ F(X)}⟩.

Show that N(X) is a normal subgroup and that the factor group F(X)/N(X) is abelian, where every nontrivial element has order 2.

3. Show that a free group F is torsion-free.

4. Let F be a free group and a, b ∈ F. Show: If aᵏ = bᵏ, k ≠ 0, then a = b.
5. Let F = (a, b : ) be a free group with basis {a, b}. Let cᵢ = aⁱba⁻ⁱ, i ∈ Z. Then G = ⟨cᵢ, i ∈ Z⟩ is free with basis {cᵢ | i ∈ Z}.

6. Show that (x, y : x²y³, x³y⁴) ≅ (x : x) = {1}.
7. Let G = (a₁, ..., aₙ : a₁² ⋯ aₙ²), n ≥ 1, and α : G → Z₂ the epimorphism with α(aᵢ) = −1 for all i. Let U be the kernel of α. Then U has a presentation

U = (x₁, ..., x_{n−1}, y₁, ..., y_{n−1} : y₁x₁ ⋯ y_{n−1}x_{n−1}y_{n−1}⁻¹x_{n−1}⁻¹ ⋯ y₁⁻¹x₁⁻¹).
8. Let M = (x, y : x², y³) ≅ PSL(2, Z) be the modular group. Let M′ be the commutator subgroup. Show that M′ is a free group of rank 2 with basis {[x, y], [x, y²]}.
Chapter 15
Finite Galois Extensions
15.1 Galois Theory and the Solvability of Polynomial Equations
As we mentioned in Chapter 1, one of the origins of abstract algebra was the problem
of trying to determine a formula for ﬁnding the solutions in terms of radicals of a
ﬁfth degree polynomial. It was proved ﬁrst by Rufﬁni in 1800 and then by Abel that
it is impossible to find a formula in terms of radicals for such a solution. Galois, around 1830, extended this and showed that such a formula is impossible for any degree five
or greater. In proving this he laid the groundwork for much of the development of
modern abstract algebra especially ﬁeld theory and ﬁnite group theory. One of the
goals of this book has been to present a comprehensive treatment of Galois theory and
a proof of the results mentioned above. At this point we have covered enough general
algebra and group theory to discuss Galois extensions and general Galois theory.
In modern terms, Galois theory is that branch of mathematics that deals with the
interplay of the algebraic theory of ﬁelds, the theory of equations and ﬁnite group
theory. This theory was introduced by Evariste Galois about 1830 in his study of the
insolvability by radicals of quintic (degree 5) polynomials, a result proved somewhat
earlier by Rufﬁni and independently by Abel. Galois was the ﬁrst to see the close
connection between ﬁeld extensions and permutation groups. In doing so he initiated
the study of ﬁnite groups. He was the ﬁrst to use the term group, as an abstract
concept, although his deﬁnition was really just for a closed set of permutations.
The method Galois developed not only facilitated the proof of the insolvability of the quintic and of higher degrees but led to other applications and to a much larger theory as well.
The main idea of Galois theory is to associate to certain special types of algebraic field extensions, called Galois extensions, a group called the Galois group. The properties of the field extension will be reflected in the properties of the group, which are somewhat easier to examine. Thus, for example, solvability by radicals can be translated into solvability of groups, which was discussed in Chapter 12. Showing that for every degree five or greater there exists a field extension whose Galois group is not solvable proves that there cannot be a general formula for solvability by radicals.
The tie-in to the theory of equations is as follows: If f(x) = 0 is a polynomial equation over some field K, we can form the splitting field L. This is usually a Galois extension, and therefore has a Galois group, called the Galois group of the equation. As before, properties of this group will reflect properties of this equation.
15.2 Automorphism Groups of Field Extensions

In order to define the Galois group we must first consider the automorphism group of a field extension. In this section K, L, M will always be (commutative) fields with additive identity 0 and multiplicative identity 1.

Definition 15.2.1. Let L|K be a field extension. Then the set

Aut(L|K) = {α ∈ Aut(L) : α|_K = the identity on K}

is called the set of automorphisms of L over K. Notice that if α ∈ Aut(L|K) then α(k) = k for all k ∈ K.
Lemma 15.2.2. Let L|K be a field extension. Then Aut(L|K) forms a group, called the Galois group of L|K.

Proof. Aut(L|K) ⊆ Aut(L), and hence to show that Aut(L|K) is a group we only have to show that it is a subgroup of Aut(L). Now the identity map on L is certainly the identity map on K, so 1 ∈ Aut(L|K) and hence Aut(L|K) is nonempty. If α, β ∈ Aut(L|K) then consider α⁻¹β. If k ∈ K then β(k) = k and α(k) = k, so α⁻¹(k) = k. Therefore α⁻¹β(k) = k for all k ∈ K and hence α⁻¹β ∈ Aut(L|K). It follows that Aut(L|K) is a subgroup of Aut(L) and therefore a group.

If f(x) ∈ K[x] \ K and L is the splitting field of f(x) over K then Aut(L|K) is also called the Galois group of f(x).
Theorem 15.2.3. If K is the prime field of L then Aut(L|K) = Aut(L).

Proof. We must show that any automorphism of L restricts to the identity on the prime field K. If α ∈ Aut(L) then α(1) = 1 and so α(n·1) = n·1. Therefore α fixes all integer multiples of the identity. However, every element of K can be written as a quotient (n·1)/(m·1) of integer multiples of the identity. Since α is a field homomorphism and α fixes both the numerator and the denominator, it follows that α fixes every element of this form and hence fixes each element of K.
For splitting fields the Galois group is a permutation group on the roots of the defining polynomial.

Theorem 15.2.4. Let f(x) ∈ K[x] and L the splitting field of f(x) over K. Suppose that f(x) has roots α₁, ..., αₙ ∈ L.

(a) Then each φ ∈ Aut(L|K) is a permutation of the roots. In particular Aut(L|K) is isomorphic to a subgroup of Sₙ, and each φ is uniquely determined by its action on the zeros of f(x).

(b) If f(x) is irreducible then Aut(L|K) operates transitively on {α₁, ..., αₙ}. Hence for each i, j there is a φ ∈ Aut(L|K) such that φ(αᵢ) = αⱼ.

(c) If f(x) = b(x − α₁) ⋯ (x − αₙ) with α₁, ..., αₙ distinct and Aut(L|K) operates transitively on α₁, ..., αₙ, then f(x) is irreducible.
Proof. For the proofs we use the results of Chapter 8.

(a) Let φ ∈ Aut(L|K). Then from Theorem 8.1.5 we obtain that φ permutes the roots α₁, ..., αₙ. Hence φ|_{α₁,...,αₙ} ∈ Sₙ. This then defines a homomorphism τ : Aut(L|K) → Sₙ by τ(φ) = φ|_{α₁,...,αₙ}. Further, φ is uniquely determined by the images φ(αᵢ). It follows that τ is a monomorphism.

(b) If f(x) is irreducible then Aut(L|K) operates transitively on the set {α₁, ..., αₙ}, again following from Theorem 8.1.5.

(c) Suppose that f(x) = b(x − α₁) ⋯ (x − αₙ) with α₁, ..., αₙ distinct and that Aut(L|K) operates transitively on α₁, ..., αₙ. Assume that f(x) = g(x)h(x) with g(x), h(x) ∈ K[x] \ K. Without loss of generality let α₁ be a zero of g(x) and αₙ be a zero of h(x). Let σ ∈ Aut(L|K) with σ(α₁) = αₙ. However σ(g(x)) = g(x), that is, σ(α₁) is a zero of σ(g(x)) = g(x), which gives a contradiction since the roots are distinct and so αₙ is not a zero of g(x). Therefore f(x) must be irreducible.
Example 15.2.5. Let f(x) = (x² − 2)(x² − 3) ∈ Q[x]. The field L = Q(√2, √3) is the splitting field of f(x). Over L we have

f(x) = (x − √2)(x + √2)(x − √3)(x + √3).

We want to determine the Galois group Aut(L|Q) = Aut(L) = G.
Lemma 15.2.6. The Galois group G above is the Klein 4-group.

Proof. First we show that |Aut(L)| ≤ 4. Let α ∈ Aut(L). Then α is uniquely determined by α(√2) and α(√3), and

2 = α(2) = α((√2)²) = (α(√2))².

Hence α(√2) = ±√2. Analogously α(√3) = ±√3. From this it follows that |Aut(L)| ≤ 4. Further α² = 1 for any α ∈ G.

Next we show that the polynomial f(x) = x² − 3 is irreducible over K = Q(√2). Assume that x² − 3 were reducible over K. Then √3 ∈ K. This implies that √3 = a/b + (c/d)√2 with a, b, c, d ∈ Z, b ≠ 0 ≠ d and gcd(c, d) = 1. Then bd√3 = ad + bc√2, hence 3b²d² = a²d² + 2b²c² + 2√2 adbc. Since bd ≠ 0 this implies that we must have ac = 0. If c = 0 then √3 = a/b ∈ Q, a contradiction. If a = 0 then √3 = (c/d)√2, which implies 3d² = 2c². It follows from this that 3 | gcd(c, d) = 1, again a contradiction. Hence f(x) = x² − 3 is irreducible over K = Q(√2).

Since L is the splitting field of f(x) over K and f(x) is irreducible over K, there exists an automorphism α ∈ Aut(L) with α(√3) = −√3 and α|_K = 1_K, that is, α(√2) = √2. Analogously there is a β ∈ Aut(L) with β(√2) = −√2 and β(√3) = √3. Clearly α ≠ β, αβ = βα and α ≠ αβ ≠ β. It follows that Aut(L) = {1, α, β, αβ}, completing the proof.
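Both steps of the proof can be checked by machine. The sketch below (assuming SymPy's support for algebraic extension fields) tests the irreducibility of x² − 3 over Q(√2), and models the four automorphisms by their sign patterns on √2 and √3 to confirm the Klein 4-group structure:

```python
from sympy import Poly, sqrt, symbols

x = symbols('x')
# x**2 - 3 has no root in Q(sqrt(2)) ...
assert Poly(x**2 - 3, x, extension=sqrt(2)).is_irreducible
# ... while x**2 - 2 of course factors there.
assert not Poly(x**2 - 2, x, extension=sqrt(2)).is_irreducible

# Model each automorphism by the pair (alpha(sqrt 2)/sqrt 2, alpha(sqrt 3)/sqrt 3).
group = [(s2, s3) for s2 in (1, -1) for s3 in (1, -1)]
compose = lambda g, h: (g[0] * h[0], g[1] * h[1])
# Klein 4-group: abelian, and every element is its own inverse.
assert all(compose(g, g) == (1, 1) for g in group)
assert all(compose(g, h) == compose(h, g) for g in group for h in group)
```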
15.3 Finite Galois Extensions

We now define (finite) Galois extensions. First we introduce the concept of a fix field. Let L be a field and G a subgroup of Aut(L). Define the set

Fix(L, G) = {k ∈ L : g(k) = k for all g ∈ G}.

Theorem 15.3.1. For a subgroup G ⊆ Aut(L), the set Fix(L, G) is a subfield of L called the fix field of G in L.

Proof. 1 ∈ L is in Fix(L, G), so Fix(L, G) is not empty. Let k₁, k₂ ∈ Fix(L, G) and let g ∈ G. Then g(k₁ ± k₂) = g(k₁) ± g(k₂) since g is an automorphism. Then g(k₁) ± g(k₂) = k₁ ± k₂, and it follows that k₁ ± k₂ ∈ Fix(L, G). In an analogous manner k₁k₂⁻¹ ∈ Fix(L, G) if k₂ ≠ 0, and therefore Fix(L, G) is a subfield of L.
Using the concept of a fix field we define a finite Galois extension.

Definition 15.3.2. L|K is a (finite) Galois extension if there exists a finite subgroup G ⊆ Aut(L) such that K = Fix(L, G).

We now give some examples of finite Galois extensions.

Lemma 15.3.3. Let L = Q(√2, √3) and K = Q. Then L|K is a Galois extension.
Proof. Let G = Aut(L|K). From the example in the previous section there are automorphisms α, β ∈ G with

α(√3) = −√3, α(√2) = √2 and β(√2) = −√2, β(√3) = √3.

We have

Q(√2, √3) = {c + d√3 : c, d ∈ Q(√2)}.

Let t = a₁ + b₁√2 + (a₂ + b₂√2)√3 ∈ Fix(L, G). Then applying β we have

t = β(t) = a₁ − b₁√2 + (a₂ − b₂√2)√3.

It follows that b₁ + b₂√3 = 0, that is, b₁ = b₂ = 0 since √3 ∉ Q. Therefore t = a₁ + a₂√3. Applying α we have t = α(t) = a₁ − a₂√3 and hence a₂ = 0. Therefore t = a₁ ∈ Q. Hence Q = Fix(L, G) and L|K is a Galois extension.
Lemma 15.3.4. Let L = Q(2^{1/4}) and K = Q. Then L|K is not a Galois extension.

Proof. Suppose that α ∈ Aut(L) and a = 2^{1/4}. Then a is a zero of x⁴ − 2 and hence α(a) is one of 2^{1/4}, −2^{1/4}, i·2^{1/4}, −i·2^{1/4}. Since i ∉ L the last two are impossible, so α(a) = ±2^{1/4}. In particular α(√2) = α(a²) = √2 and therefore

Fix(L, Aut(L)) ⊇ Q(√2) ≠ Q.

Hence Q cannot be the fix field of any subgroup of Aut(L), and L|K is not a Galois extension.
15.4 The Fundamental Theorem of Galois Theory

We now state the fundamental theorem of Galois theory. This theorem describes the interplay between the Galois group and Galois extensions. In particular the result ties together subgroups of the Galois group and intermediate fields between K and L.

Theorem 15.4.1 (fundamental theorem of Galois theory). Let L|K be a Galois extension with Galois group G = Aut(L|K). For each intermediate field E let τ(E) be the subgroup of G fixing E. Then:

(1) τ is a bijection between intermediate fields of L|K containing K and subgroups of G.

(2) L|K is a finite extension, and if M is an intermediate field then

|L : M| = |Aut(L|M)|,
|M : K| = |Aut(L|K) : Aut(L|M)|.

(3) If M is an intermediate field then
(a) L|M is always a Galois extension,
(b) M|K is a Galois extension if and only if Aut(L|M) is a normal subgroup of Aut(L|K).

(4) If M is an intermediate field and M|K is a Galois extension then
(a) α(M) = M for all α ∈ Aut(L|K),
(b) the map φ : Aut(L|K) → Aut(M|K) with φ(α) = α|_M is an epimorphism,
(c) Aut(M|K) ≅ Aut(L|K)/Aut(L|M).

(5) The lattice of subfields of L containing K is the inverted lattice of subgroups of Aut(L|K).
We will prove this main result via a series of theorems and then combine them all.
Theorem 15.4.2. Let G be a group, L a field and α₁, ..., αₙ pairwise distinct group homomorphisms from G to L*, the multiplicative group of L. Then α₁, ..., αₙ are linearly independent elements of the L-vector space of all maps from G to L.
Proof. The proof is by induction on n. If n = 1 and kα₁ = 0 with k ∈ L, then 0 = kα₁(1) = k·1 and hence k = 0. Now suppose that n ≥ 2 and that any n − 1 of the α₁, ..., αₙ are linearly independent over L. If

∑_{i=1}^{n} kᵢαᵢ = 0, kᵢ ∈ L,  (∗)

then we must show that all kᵢ = 0. Since α₁ ≠ αₙ there exists an a ∈ G with α₁(a) ≠ αₙ(a). Let g ∈ G and apply the sum above to ag. We get

∑_{i=1}^{n} kᵢ αᵢ(a) αᵢ(g) = 0.  (∗∗)

Now multiply equation (∗), applied to g, by αₙ(a) ∈ L to get

∑_{i=1}^{n} kᵢ αₙ(a) αᵢ(g) = 0.  (∗∗∗)

If we subtract equation (∗∗∗) from equation (∗∗) then the last term vanishes and we have, for every g, an equation in the n − 1 homomorphisms α₁, ..., α_{n−1}. Since these are linearly independent we obtain, for the coefficient of α₁,

k₁(α₁(a) − αₙ(a)) = 0.

Since α₁(a) ≠ αₙ(a) we must have k₁ = 0. Now α₂, ..., αₙ are by assumption linearly independent, so k₂ = ⋯ = kₙ = 0 also. Hence all the coefficients must be zero and therefore the mappings are independent.
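Theorem 15.4.2 can be checked in a small concrete case (an illustrative example of mine, not from the text): the four distinct homomorphisms Z/4Z → C∗ given by g ↦ i^{kg} for k = 0, 1, 2, 3 have a nonsingular matrix of values, which is exactly linear independence:

```python
from sympy import I, Matrix

# Row k holds the values of the character g -> I**(k*g) on g = 0, 1, 2, 3.
chars = Matrix(4, 4, lambda k, g: I**(k * g))
assert chars.det() != 0   # nonsingular, so the four characters are independent
```

The matrix is a Vandermonde matrix in the distinct values 1, i, −1, −i, which is why its determinant cannot vanish.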
Theorem 15.4.3. Let α₁, ..., αₙ be pairwise distinct monomorphisms from the field L into the field L′. Let

K = {k ∈ L : α₁(k) = α₂(k) = ⋯ = αₙ(k)}.

Then K is a subfield of L with |L : K| ≥ n.

Proof. Certainly K is a field. Assume that r = |L : K| < n and let {a₁, ..., a_r} be a basis of the K-vector space L. We consider the following system of linear equations with r equations and n unknowns:

α₁(a₁)x₁ + ⋯ + αₙ(a₁)xₙ = 0
⋮
α₁(a_r)x₁ + ⋯ + αₙ(a_r)xₙ = 0.

Since r < n there exists a nontrivial solution (x₁, ..., xₙ) ∈ (L′)ⁿ.

Let a ∈ L. Then

a = ∑_{j=1}^{r} lⱼaⱼ with lⱼ ∈ K.

From the definition of K we have

α₁(lⱼ) = αᵢ(lⱼ) for i = 2, ..., n.

Then with our nontrivial solution (x₁, ..., xₙ) we have

∑_{i=1}^{n} xᵢαᵢ(a) = ∑_{i=1}^{n} xᵢ ( ∑_{j=1}^{r} αᵢ(lⱼ)αᵢ(aⱼ) ) = ∑_{j=1}^{r} α₁(lⱼ) ∑_{i=1}^{n} xᵢαᵢ(aⱼ) = 0

since α₁(lⱼ) = αᵢ(lⱼ) for i = 2, ..., n. This holds for all a ∈ L and hence ∑_{i=1}^{n} xᵢαᵢ = 0, contradicting Theorem 15.4.2. Therefore our assumption that |L : K| < n must be false and hence |L : K| ≥ n.
Definition 15.4.4. Let L be a field and G a finite subgroup of Aut(L). The map tr_G : L → L given by

tr_G(k) = ∑_{α∈G} α(k)

is called the G-trace of L.
Theorem 15.4.5. Let L be a field and G a finite subgroup of Aut(L). Then

{0} ≠ tr_G(L) ⊆ Fix(L, G).

Proof. Let β ∈ G. Then

β(tr_G(k)) = ∑_{α∈G} βα(k) = ∑_{α∈G} α(k) = tr_G(k).

Therefore tr_G(L) ⊆ Fix(L, G).

Now assume that tr_G(k) = 0 for all k ∈ L. Then ∑_{α∈G} α(k) = 0 for all k ∈ L. It follows that ∑_{α∈G} α is the zero map and hence the elements α ∈ G are linearly dependent as elements of the L-vector space of all maps from L to L. This contradicts Theorem 15.4.2 and hence the trace cannot be the zero map.
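As a tiny illustration (my own example, not from the text), take G = {1, β} ⊆ Aut(Q(√2)) with β the conjugation √2 ↦ −√2. The G-trace of a + b√2 is 2a, which indeed lies in Fix(L, G) = Q and is not identically zero:

```python
from sympy import sqrt, symbols, expand

a, b = symbols('a b', rational=True)
k = a + b*sqrt(2)
# tr_G(k) = k + beta(k), with beta realized as the substitution sqrt(2) -> -sqrt(2)
tr = expand(k + k.subs(sqrt(2), -sqrt(2)))
assert tr == 2*a   # the sqrt(2) part cancels; the trace lands in Q
```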
Theorem 15.4.6. Let L be a field and G a finite subgroup of Aut(L). Then

|L : Fix(L, G)| = |G|.

Proof. Let K = Fix(L, G) and suppose that |G| = n. From Theorem 15.4.3 we know that |L : K| ≥ n. We must show that |L : K| ≤ n.

Suppose that G = {α₁, ..., αₙ}. To prove the result we show that if m > n and a₁, ..., a_m ∈ L then a₁, ..., a_m are linearly dependent over K.

We consider the system of equations

α₁⁻¹(a₁)x₁ + ⋯ + α₁⁻¹(a_m)x_m = 0
⋮
αₙ⁻¹(a₁)x₁ + ⋯ + αₙ⁻¹(a_m)x_m = 0.

Since m > n there exists a nontrivial solution (y₁, ..., y_m) ∈ Lᵐ. Suppose that y_t ≠ 0. Using Theorem 15.4.5 we can choose k ∈ L with tr_G(k) ≠ 0. Define

(x₁, ..., x_m) = k y_t⁻¹ (y₁, ..., y_m).

This m-tuple (x₁, ..., x_m) is then also a nontrivial solution of the system of equations considered above, and x_t = k, so that tr_G(x_t) = tr_G(k) ≠ 0.

Now we apply αᵢ to the ith equation to obtain

a₁α₁(x₁) + ⋯ + a_m α₁(x_m) = 0
⋮
a₁αₙ(x₁) + ⋯ + a_m αₙ(x_m) = 0.

Summation leads to

0 = ∑_{j=1}^{m} aⱼ ∑_{i=1}^{n} αᵢ(xⱼ) = ∑_{j=1}^{m} (tr_G(xⱼ)) aⱼ

by definition of the G-trace. Since the coefficients tr_G(xⱼ) lie in K and tr_G(x_t) ≠ 0, the elements a₁, ..., a_m are linearly dependent over K. Therefore |L : K| ≤ n. Combining this with Theorem 15.4.3 we get that |L : K| = n = |G|.
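Theorem 15.4.6 can be illustrated numerically. For L = Q(√2, √3) and G the Klein 4-group of Lemma 15.2.6 we have Fix(L, G) = Q, and indeed |L : Q| = 4 = |G|. The sketch below (assuming SymPy) confirms the degree via the minimal polynomial of the primitive element √2 + √3:

```python
from sympy import minimal_polynomial, sqrt, symbols

x = symbols('x')
m = minimal_polynomial(sqrt(2) + sqrt(3), x)
print(m)   # → x**4 - 10*x**2 + 1, of degree 4 = |G|
```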
Theorem 15.4.7. Let L be a field and G a finite subgroup of Aut(L). Then

Aut(L|Fix(L, G)) = G.

Proof. G ⊆ Aut(L|Fix(L, G)), since if g ∈ G then g ∈ Aut(L) and g fixes Fix(L, G) by definition. Therefore we must show that Aut(L|Fix(L, G)) ⊆ G. Assume then that there exists an α ∈ Aut(L|Fix(L, G)) with α ∉ G. Suppose, as in the previous proof, that |G| = n and G = {α₁, ..., αₙ} with α₁ = 1. Now

Fix(L, G) = {a ∈ L : a = α₂(a) = ⋯ = αₙ(a)}
= {a ∈ L : α(a) = a = α₂(a) = ⋯ = αₙ(a)}.

From Theorem 15.4.3 we then have |L : Fix(L, G)| ≥ n + 1. However from Theorem 15.4.6 |L : Fix(L, G)| = n, giving a contradiction.
Suppose that L|K is a Galois extension. We now establish that the map τ between intermediate fields K ⊆ E ⊆ L and subgroups of Aut(L|K) is a bijection.

Theorem 15.4.8. Let L|K be a Galois extension. Then:

(1) Aut(L|K) is finite and Fix(L, Aut(L|K)) = K.

(2) If H ⊆ Aut(L|K) then Aut(L|Fix(L, H)) = H.

Proof. (1) If L|K is a Galois extension there is a finite subgroup G of Aut(L) with K = Fix(L, G). From Theorem 15.4.7 we have G = Aut(L|K). In particular Aut(L|K) is finite and K = Fix(L, Aut(L|K)).

(2) Let H ⊆ Aut(L|K). From part (1) H is finite, and then Aut(L|Fix(L, H)) = H from Theorem 15.4.7.
Theorem 15.4.9. Let L|K be a field extension. Then the following are equivalent.

(1) L|K is a Galois extension.

(2) |L : K| = |Aut(L|K)| < ∞.

(3) |Aut(L|K)| < ∞ and K = Fix(L, Aut(L|K)).

Proof. (1) ⇒ (2): Now |Aut(L|K)| < ∞ and Fix(L, Aut(L|K)) = K from Theorem 15.4.8. Therefore |L : K| = |Aut(L|K)| from Theorem 15.4.6.

(2) ⇒ (3): Let G = Aut(L|K). Then K ⊆ Fix(L, G) ⊆ L. From Theorem 15.4.6 we have

|L : Fix(L, G)| = |G| = |L : K|,

hence |Fix(L, G) : K| = 1 and K = Fix(L, G).

(3) ⇒ (1) follows directly from the definition, completing the proof.
We now show that if L|K is a Galois extension then L|M is also a Galois extension for any intermediate field M.

Theorem 15.4.10. Let L|K be a Galois extension and K ⊆ M ⊆ L an intermediate field. Then L|M is always a Galois extension and

|M : K| = |Aut(L|K) : Aut(L|M)|.

Proof. Let G = Aut(L|K). Then |G| < ∞ and further K = Fix(L, G) from Theorem 15.4.9. Define H = Aut(L|M) and M′ = Fix(L, H). We must show that M′ = M, for then L|M is a Galois extension.

Since the elements of H fix M we have M ⊆ M′. Let G = ⋃_{i=1}^{r} αᵢH be a disjoint union of the cosets of H. Let α₁ = 1 and define βᵢ = (αᵢ)|_M. The β₁, ..., β_r are pairwise distinct, for if βᵢ = βⱼ, that is (αᵢ)|_M = (αⱼ)|_M, then αⱼ⁻¹αᵢ ∈ H, so αᵢ and αⱼ are in the same coset.

We claim that

{a ∈ M : β₁(a) = ⋯ = β_r(a)} = M ∩ Fix(L, G).

Further we know that

M ∩ Fix(L, G) = M ∩ K = K

from Theorem 15.4.9.

To establish the claim, it is clear that

M ∩ Fix(L, G) ⊆ {a ∈ M : β₁(a) = ⋯ = β_r(a)}

since for a ∈ M ∩ Fix(L, G) we have βᵢ(a) = αᵢ(a) = a for all i. Hence we must show that

{a ∈ M : β₁(a) = ⋯ = β_r(a)} ⊆ M ∩ Fix(L, G).

To do this we must show that for such an a we have α(a) = a for all α ∈ G. We have α ∈ αᵢH for some i and hence α = αᵢγ for some γ ∈ H. We obtain then

α(a) = αᵢ(γ(a)) = αᵢ(a) = βᵢ(a) = β₁(a) = a,

proving the inclusion and establishing the claim.

Now |M : K| ≥ r from Theorem 15.4.3, applied to the distinct monomorphisms β₁, ..., β_r : M → L. From the degree formula we get

|L : M′| |M′ : M| |M : K| = |L : K| = |G| = |G : H| |H| = r |L : M′|

since |L : K| = |G| from Theorem 15.4.9 and |H| = |L : M′| from Theorem 15.4.6. Therefore |M′ : M| |M : K| = r, and since |M : K| ≥ r it follows that |M′ : M| = 1 and hence M = M′. Now

|M : K| = |G : H| = |Aut(L|K) : Aut(L|M)|

completing the proof.
Lemma 15.4.11. Let L|K be a field extension and K ⊆ M ⊆ L an intermediate field. If α ∈ Aut(L|K) then

Aut(L|α(M)) = α Aut(L|M) α⁻¹.

Proof. Now β ∈ Aut(L|α(M)) if and only if β(α(a)) = α(a) for all a ∈ M. This occurs if and only if α⁻¹βα(a) = a for all a ∈ M, which is true if and only if β ∈ α Aut(L|M) α⁻¹.
Lemma 15.4.12. Let L|K be a Galois extension and K ⊆ M ⊆ L an intermediate field. Suppose that α(M) = M for all α ∈ Aut(L|K). Then

φ : Aut(L|K) → Aut(M|K), φ(α) = α|_M,

is an epimorphism with kernel ker(φ) = Aut(L|M).

Proof. It is clear that φ is a homomorphism with ker(φ) = Aut(L|M) (see exercises). We must show that it is an epimorphism.

Let G = im(φ). Since L|K is a Galois extension we get that

Fix(M, G) = Fix(L, Aut(L|K)) ∩ M = K ∩ M = K.

Then from Theorem 15.4.8 we have

Aut(M|K) = Aut(M|Fix(M, G)) = G

and therefore φ is an epimorphism.
Theorem 15.4.13. Let L|K be a Galois extension and K ⊆ M ⊆ L an intermediate field. Then the following are equivalent.

(1) M|K is a Galois extension.

(2) If α ∈ Aut(L|K) then α(M) = M.

(3) Aut(L|M) is a normal subgroup of Aut(L|K).

Proof. (1) ⇒ (2): Suppose that M|K is a Galois extension. Let Aut(M|K) = {α₁, ..., α_r}. Consider the αᵢ as monomorphisms from M into L, and let α_{r+1} : M → L be a monomorphism with α_{r+1}|_K = 1. Then

{a ∈ M : α₁(a) = α₂(a) = ⋯ = α_r(a) = α_{r+1}(a)} = K

since M|K is a Galois extension. Therefore from Theorem 15.4.3 we have that, if α₁, ..., α_r, α_{r+1} were distinct, then

|M : K| ≥ r + 1 > r = |Aut(M|K)| = |M : K|,

giving a contradiction. Hence if α_{r+1} ∈ Aut(L|K) is arbitrary then α_{r+1}|_M ∈ {α₁, ..., α_r}, that is, α_{r+1}(M) = M.

(2) ⇒ (1): Suppose that α(M) = M for all α ∈ Aut(L|K). The map φ : Aut(L|K) → Aut(M|K) with φ(α) = α|_M is then defined and surjective. Since L|K is a Galois extension, Aut(L|K) is finite. Therefore H = Aut(M|K) is also finite. To prove (1) it is then sufficient to show that K = Fix(M, H).

The field K ⊆ Fix(M, H) by the definition of the fix field. Hence we must show that Fix(M, H) ⊆ K. Let a ∈ M with a ∉ K. Recall that L|K is a Galois extension and therefore Fix(L, Aut(L|K)) = K, so there exists an α ∈ Aut(L|K) with α(a) ≠ a. Define β = α|_M. Then β ∈ H, since α(M) = M by our original assumption, and β(a) ≠ a, so a ∉ Fix(M, H). Therefore K = Fix(M, H) and M|K is a Galois extension.

(2) ⇒ (3): Suppose that α(M) = M for all α ∈ Aut(L|K). That Aut(L|M) is a normal subgroup of Aut(L|K) then follows from Lemma 15.4.12, since Aut(L|M) is the kernel of φ.

(3) ⇒ (2): Suppose that Aut(L|M) is a normal subgroup of Aut(L|K). Let α ∈ Aut(L|K); then from our assumption and Lemma 15.4.11 we get that

Aut(L|α(M)) = Aut(L|M).

Now L|M and L|α(M) are Galois extensions by Theorem 15.4.10. Therefore

α(M) = Fix(L, Aut(L|α(M))) = Fix(L, Aut(L|M)) = M,

completing the proof.
We now combine all of these results to give the proof of Theorem 15.4.1, the fundamental theorem of Galois theory.

Proof of Theorem 15.4.1. Let L|K be a Galois extension.

(1) Let G ⊆ Aut(L|K). Both G and Aut(L|K) are finite from Theorem 15.4.8. Further G = Aut(L|Fix(L, G)) from Theorem 15.4.7. Now let M be an intermediate field of L|K. Then L|M is a Galois extension from Theorem 15.4.10, and then Fix(L, Aut(L|M)) = M from Theorem 15.4.8.

(2) Let M be an intermediate field of L|K. From Theorem 15.4.10, L|M is a Galois extension. From Theorem 15.4.9 we have |L : M| = |Aut(L|M)|. Applying Theorem 15.4.10 we get the result on indices

|M : K| = |Aut(L|K) : Aut(L|M)|.

(3) Let M be an intermediate field of L|K.

(a) From Theorem 15.4.10 we have that L|M is a Galois extension.

(b) From Theorem 15.4.13, M|K is a Galois extension if and only if Aut(L|M) is a normal subgroup of Aut(L|K).

(4) Let M|K be a Galois extension.

(a) α(M) = M for all α ∈ Aut(L|K) from Theorem 15.4.13.

(b) That the map φ : Aut(L|K) → Aut(M|K) with φ(α) = α|_M is an epimorphism follows from Lemma 15.4.12 and Theorem 15.4.13.

(c) Aut(M|K) ≅ Aut(L|K)/Aut(L|M) follows directly from the group isomorphism theorem.

(5) That the lattice of subfields of L containing K is the inverted lattice of subgroups of Aut(L|K) follows directly from the previous results.
In Chapter 8 we looked at the following example (Example 8.1.7). Here we analyze it further using Galois theory.

Example 15.4.14. Let f(x) = x³ − 7 ∈ Q[x]. This has no zeros in Q, and since it is of degree 3 it follows that it must be irreducible in Q[x].

Let ω = −1/2 + (√3/2)i ∈ C. Then it is easy to show by computation that

ω² = −1/2 − (√3/2)i and ω³ = 1.

Therefore the three zeros of f(x) in C are

a₁ = 7^{1/3}, a₂ = ω·7^{1/3}, a₃ = ω²·7^{1/3}.

Hence L = Q(a₁, a₂, a₃) is the splitting field of f(x). Since the minimal polynomial of all three zeros over Q is the same, namely f(x), it follows that

Q(a₁) ≅ Q(a₂) ≅ Q(a₃).

Since Q(a₁) ⊆ R and a₂, a₃ are nonreal, it is clear that a₂, a₃ ∉ Q(a₁).
Suppose that Q(a₂) = Q(a₃). Then ω = a₃a₂⁻¹ ∈ Q(a₂) and so 7^{1/3} = ω⁻¹a₂ ∈ Q(a₂). Hence Q(a₁) ⊆ Q(a₂) and therefore Q(a₁) = Q(a₂) since they have the same degree over Q. This contradiction shows that Q(a₂) and Q(a₃) are distinct.

By computation we have a₃ = a₁⁻¹a₂² and hence

L = Q(a₁, a₂, a₃) = Q(a₁, a₂) = Q(7^{1/3}, ω).

Now the degree of L over Q is

|L : Q| = |Q(7^{1/3}, ω) : Q(ω)| · |Q(ω) : Q|.

Now |Q(ω) : Q| = 2 since the minimal polynomial of ω over Q is x² + x + 1. Since no zero of f(x) lies in Q(ω) and the degree of f(x) is 3, it follows that f(x) is irreducible over Q(ω). Therefore the degree of L over Q(ω) is 3. Hence

|L : Q| = (2)(3) = 6.
Clearly then we have the following lattice of intermediate fields: Q is contained in each of Q(ω), Q(a₁), Q(a₂) and Q(a₃), and each of these is contained in L.

The question then arises as to whether these are all the intermediate fields. The answer is yes, which we now prove.

Let G = Aut(L|Q) = Aut(L) (Aut(L|Q) = Aut(L) since Q is a prime field). By Theorem 15.2.4, G is isomorphic to a subgroup of S₃, and G acts transitively on {a₁, a₂, a₃} since f is irreducible. Let δ : C → C be the automorphism of C taking each element to its complex conjugate. Then δ(f) = f and δ|_L ∈ G (see Theorem 8.2.2). Since a₁ ∈ R we get that δ|_{a₁,a₂,a₃} = (a₂, a₃), the 2-cycle that maps a₂ to a₃ and a₃ to a₂. Since G is transitive on {a₁, a₂, a₃} there is a τ ∈ G with τ(a₁) = a₂.
Case 1: τ(a₃) = a₃. Then τ = (a₁, a₂) and (a₁, a₂)(a₂, a₃) = (a₁, a₂, a₃) ∈ G.

Case 2: τ(a₃) ≠ a₃. Then τ is a 3-cycle.

In either case G contains a transposition and a 3-cycle, and these generate all of S₃. Hence G is all of S₃. Then L|Q is a Galois extension from Theorem 15.4.9 since |G| = |L : Q|.

The subgroups of S₃ are as follows: the trivial subgroup, the three subgroups of order 2 generated by the transpositions, the alternating group A₃ of order 3, and S₃ itself, six subgroups in all.

Hence the above lattice of fields is complete. L|Q, Q|Q, Q(ω)|Q and L|Q(aᵢ) are Galois extensions, while Q(aᵢ)|Q with i = 1, 2, 3 are not Galois extensions.
15.5 Exercises

1. Let K ⊆ M ⊆ L be a chain of fields and let φ : Aut(L|K) → Aut(M|K) be defined by φ(α) = α|_M. Show that φ is an epimorphism with kernel ker(φ) = Aut(L|M).

2. Show that Q(5^{1/4})|Q(√5) and Q(√5)|Q are Galois extensions and that Q(5^{1/4})|Q is not a Galois extension.

3. Let L|K be a field extension and u, v ∈ L algebraic over K with |K(u) : K| = m and |K(v) : K| = n. If m and n are coprime, then |K(u, v) : K| = nm.
4. Let p, q be prime numbers with p ≠ q. Let L = Q(√p, q^{1/3}). Show that L = Q(√p · q^{1/3}). Determine a basis of L over Q and the minimal polynomial of √p · q^{1/3}.

5. Let L = Q(2^{1/n}) with n ≥ 2.

(i) Determine the number of Q-embeddings σ : L → R. Show that for each such embedding we have σ(L) = L.

(ii) Determine Aut(L|Q).
6. Let α = √(5 + 2√5).

(i) Determine the minimal polynomial of α over Q.

(ii) Show that Q(α)|Q is a Galois extension.

(iii) Determine Aut(Q(α)|Q).

7. Let K be a field of prime characteristic p and let f(x) = xᵖ − x − a ∈ K[x] be an irreducible polynomial. Let L = K(v), where v is a zero of f(x).

(i) If β is a zero of f(x) then so is β + 1.

(ii) L|K is a Galois extension.

(iii) There is exactly one K-automorphism σ of L with σ(v) = v + 1.

(iv) The Galois group Aut(L|K) is cyclic with generator σ.
Chapter 16
Separable Field Extensions
16.1 Separability of Fields and Polynomials
In the previous chapter we introduced and examined Galois extensions. Recall that
1[1 is a Galois extension if there exists a ﬁnite subgroup G Ç Aut(1) such that
1 = Fix(1. G). The following questions immediately arise.
(1) Under what conditions is a ﬁeld extension 1[1 a Galois extension?
(2) When is 1[1 a Galois extension when 1 is the splitting ﬁeld of a polynomial
¡(.) ÷ 1Œ.?
In this chapter we consider these questions and completely characterize Galois ex
tensions. In order to do this we must introduce separable extensions.
Definition 16.1.1. Let K be a field. Then a nonconstant polynomial f(x) ∈ K[x] is called separable over K if each irreducible factor of f(x) has only simple zeros in its splitting field.
We now extend this deﬁnition to ﬁeld extensions.
Definition 16.1.2. Let L|K be a field extension and a ∈ L. Then a is separable over K if a is a zero of a separable polynomial in K[x]. The field extension L|K is a separable field extension, or just separable, if all a ∈ L are separable over K. In particular a separable extension is an algebraic extension.
Finally we consider ﬁelds where every nonconstant polynomial is separable.
Definition 16.1.3. A field K is perfect if each nonconstant polynomial in K[x] is separable over K.

The following is straightforward from the definitions: an element a is separable over K if and only if its minimal polynomial m_a(x) is separable.
If f(x) ∈ K[x] then f(x) = ∑_{i=0}^{n} k_i x^i with k_i ∈ K. The formal derivative of f(x) is then f'(x) = ∑_{i=1}^{n} i·k_i x^{i−1}. As in ordinary calculus we have the usual differentiation rules

(f(x) + g(x))' = f'(x) + g'(x)

and

(f(x)g(x))' = f'(x)g(x) + f(x)g'(x)

for f(x), g(x) ∈ K[x].
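These rules can be checked mechanically on coefficient lists; the sketch below (not from the text, helper names are our own) represents a polynomial by its coefficient list [k_0, ..., k_n]. Since the derivative is defined purely formally, no limits are involved; the rules are identities of coefficient sequences.

```python
# Formal derivative on coefficient lists [k_0, k_1, ..., k_n].

def deriv(f):
    return [i * k for i, k in enumerate(f)][1:] or [0]

def add(f, g):
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f)); g = g + [0] * (n - len(g))
    return [a + b for a, b in zip(f, g)]

def mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def trim(f):  # drop trailing zero coefficients before comparing
    while len(f) > 1 and f[-1] == 0:
        f = f[:-1]
    return f

f = [1, 0, 3, 2]   # 1 + 3x^2 + 2x^3
g = [5, -1, 4]     # 5 - x + 4x^2

# sum rule: (f + g)' = f' + g'
assert trim(deriv(add(f, g))) == trim(add(deriv(f), deriv(g)))
# product rule: (fg)' = f'g + fg'
assert trim(deriv(mul(f, g))) == trim(add(mul(deriv(f), g), mul(f, deriv(g))))
```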
Lemma 16.1.4. Let K be a field and f(x) an irreducible nonconstant polynomial in K[x]. Then f(x) is separable if and only if its formal derivative is nonzero.

Proof. Let L be the splitting field of f(x) over K. Let f(x) = (x − a)^r g(x) where (x − a) does not divide g(x). Then

f'(x) = (x − a)^{r−1} (r·g(x) + (x − a)g'(x)).

Hence a is a zero of f(x) in L of multiplicity m ≥ 2 if and only if (x − a)|f(x) and also (x − a)|f'(x).

Let f(x) be a separable polynomial in K[x] and let a be a zero of f(x) in L. Then if f(x) = (x − a)^r g(x) with (x − a) not dividing g(x) we must have r = 1. Then

f'(x) = g(x) + (x − a)g'(x).

If g'(x) = 0 then f'(x) = g(x) ≠ 0. Now suppose that g'(x) ≠ 0, and assume that f'(x) = 0. Then necessarily (x − a)|g(x), giving a contradiction. Therefore f'(x) ≠ 0.

Conversely suppose that f'(x) ≠ 0. Assume that f(x) is not separable. Then f(x) and f'(x) have a common zero a ∈ L. Let m_a(x) be the minimal polynomial of a over K. Then m_a(x)|f(x) and m_a(x)|f'(x). Since f(x) is irreducible the degree of m_a(x) must equal the degree of f(x). But m_a(x) also divides f'(x) ≠ 0, whose degree is less than that of f(x), giving a contradiction. Therefore f(x) must be separable.
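Computationally, the mechanism behind the lemma is that a multiple zero of f forces a nontrivial common divisor of f and f'. A small sketch with exact rational arithmetic (our own helper names, not the book's):

```python
# Detect multiple zeros via gcd(f, f') over Q, with polynomials as
# coefficient lists [k_0, ..., k_n] of Fractions.
from fractions import Fraction

def deriv(f):
    return [i * c for i, c in enumerate(f)][1:] or [Fraction(0)]

def poly_mod(f, g):                       # remainder of f divided by g
    f = f[:]
    while len(f) >= len(g) and any(f):
        q = f[-1] / g[-1]
        for i in range(len(g)):
            f[len(f) - len(g) + i] -= q * g[i]
        f.pop()
    while len(f) > 1 and f[-1] == 0:
        f.pop()
    return f

def poly_gcd(f, g):                       # Euclidean algorithm in Q[x]
    while any(g):
        f, g = g, poly_mod(f, g)
    return f

F = Fraction
sep   = [F(-2), F(0), F(1)]               # x^2 - 2: only simple zeros
insep = [F(2), F(-3), F(0), F(1)]         # (x - 1)^2 (x + 2): double zero at 1

assert len(poly_gcd(sep, deriv(sep))) == 1        # gcd is a constant
assert len(poly_gcd(insep, deriv(insep))) == 2    # gcd is x - 1 up to a constant
```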
We now consider the following example of a nonseparable polynomial over the finite field Z_p of p elements. We will denote this field from now on by GF(p), the Galois field of p elements.

Example 16.1.5. Let K = GF(p) and L = K(t), the field of rational functions in t over K. Consider the polynomial f(x) = x^p − t ∈ L[x].

Now K[t]/tK[t] ≅ K. Since K is a field this implies that tK[t] is a maximal ideal and hence a prime ideal in K[t] with prime element t ∈ K[t] (see Theorem 3.2.7). By the Eisenstein criterion f(x) is an irreducible polynomial in L[x] (see Theorem 4.4.8). However f'(x) = p·x^{p−1} = 0 since char(L) = p. Therefore f(x) is not separable.
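In coefficient terms this is easy to watch (a sketch with our own helper; only the x-coefficients are listed, since the constant −t lives in GF(p)(t) and does not affect the derivative): every coefficient of the derivative of x^p picks up a factor i ≡ 0 (mod p).

```python
# The formal derivative of x^p vanishes in characteristic p.
p = 5

def deriv_mod_p(coeffs, p):
    # formal derivative, with coefficient arithmetic mod p
    return [(i * k) % p for i, k in enumerate(coeffs)][1:] or [0]

f = [0] * (p + 1)
f[p] = 1                                  # coefficient list of x^p

assert deriv_mod_p(f, p) == [0] * p       # f'(x) = p*x^(p-1) = 0 in char p
```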
16.2 Perfect Fields
We now consider when a ﬁeld 1 is perfect. First we show that in general any ﬁeld
of characteristic 0 is perfect. In particular the rationals Q are perfect and hence any
extension of the rationals is separable.
Theorem 16.2.1. Each ﬁeld 1 of characteristic zero is perfect.
Proof. Suppose that K is a field with char(K) = 0 and that f(x) is a nonconstant polynomial in K[x]. Then f'(x) ≠ 0. If f(x) is irreducible then f(x) is separable from Lemma 16.1.4. Therefore by definition each nonconstant polynomial f(x) ∈ K[x] is separable.
We remark that in the original motivation for Galois theory the ground field was the rationals Q. Since Q has characteristic zero it is perfect and all of its extensions are separable. Hence the question of separability did not arise until extensions of fields of prime characteristic were considered.

Corollary 16.2.2. Any finite extension of the rationals Q is separable.
We now consider the case of prime characteristic.

Theorem 16.2.3. Let K be a field with char(K) = p ≠ 0. If f(x) is a nonconstant polynomial in K[x] then the following are equivalent:
(1) f'(x) = 0.
(2) f(x) is a polynomial in x^p, that is, there is a g(x) ∈ K[x] with f(x) = g(x^p).
If in (1) and (2) f(x) is irreducible, then f(x) is not separable over K if and only if f(x) is a polynomial in x^p.
Proof. Let f(x) = ∑_{i=0}^{n} a_i x^i. Then f'(x) = 0 if and only if p|i for all i with a_i ≠ 0. But this is equivalent to

f(x) = a_0 + a_p x^p + a_{2p} x^{2p} + ··· ,

that is, f(x) = g(x^p) for some g(x) ∈ K[x]. If f(x) is irreducible then f(x) is not separable if and only if f'(x) = 0 from Lemma 16.1.4.
Theorem 16.2.4. Let K be a field with char(K) = p ≠ 0. Then the following are equivalent:
(1) K is perfect.
(2) Each element in K has a pth root in K.
(3) The Frobenius homomorphism x ↦ x^p is an automorphism of K.
Proof. First we show that (1) implies (2). Suppose that K is perfect and a ∈ K. Then x^p − a is separable over K. Let g(x) ∈ K[x] be an irreducible factor of x^p − a. Let L be the splitting field of g(x) over K and b a zero of g(x) in L. Then b^p = a. Further

x^p − b^p = (x − b)^p ∈ L[x]

since the characteristic of K is p. Hence g(x) = (x − b)^s, and s must equal 1 since g(x) is irreducible. Therefore b ∈ K and b is a pth root of a.

Now we show that (2) implies (3). Recall that the Frobenius homomorphism τ : x ↦ x^p is injective (see Theorem 1.8.8). We must show that it is also surjective. Let a ∈ K and let b be a pth root of a, so that a = b^p. Then τ(b) = b^p = a and τ is surjective.

Finally we show that (3) implies (1). Let τ : x ↦ x^p be surjective. It follows that each a ∈ K has a pth root in K. Now let f(x) ∈ K[x] be irreducible and assume that f(x) is not separable. From Theorem 16.2.3 there is a g(x) ∈ K[x] with f(x) = g(x^p), that is,

f(x) = a_0 + a_1 x^p + ··· + a_n x^{np}.

Let b_i ∈ K with a_i = b_i^p. Then

f(x) = b_0^p + b_1^p x^p + ··· + b_n^p x^{np} = (b_0 + b_1 x + ··· + b_n x^n)^p.

However this is a contradiction since f(x) is irreducible. Therefore f(x) is separable, completing the proof.
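The last displayed identity, the "freshman's dream" for pth powers, can be checked directly in a small case (a sketch, not from the text): over GF(3), cubing 1 + 2x + x^2 moves each coefficient's cube to the exponent multiplied by 3.

```python
# (b_0 + b_1 x + ... + b_n x^n)^p = b_0^p + b_1^p x^p + ... in characteristic p,
# verified for p = 3 and f(x) = 1 + 2x + x^2 over GF(3).
p = 3

def mul(f, g):                       # polynomial product, coefficients mod p
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return out

f = [1, 2, 1]                        # 1 + 2x + x^2
cube = mul(mul(f, f), f)             # f(x)^3 mod 3

expected = [0] * (p * (len(f) - 1) + 1)
for i, b in enumerate(f):            # b_i^p placed at exponent i*p
    expected[i * p] = pow(b, p, p)

assert cube == expected              # (1 + 2x + x^2)^3 = 1 + 2x^3 + x^6 mod 3
```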
Theorem 16.2.5. Let K be a field with char(K) = p ≠ 0. Then each element of K has at most one pth root in K.

Proof. Suppose that b_1, b_2 ∈ K with b_1^p = b_2^p = a. Then

0 = b_1^p − b_2^p = (b_1 − b_2)^p.

Since K has no zero divisors it follows that b_1 = b_2.
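On the prime field GF(p) both facts are easy to watch concretely (a hypothetical numerical check, not part of the text): the Frobenius map is a bijection, and by Fermat's little theorem each element is in fact its own unique pth root.

```python
# Frobenius x -> x^p on GF(p) = Z/pZ is a bijection, so p-th roots
# exist and (by Theorem 16.2.5) are unique.
p = 13
frob = {x: pow(x, p, p) for x in range(p)}

assert sorted(frob.values()) == list(range(p))     # bijective
# x^p = x on GF(p) (Fermat), so each x is its own p-th root
assert all(frob[x] == x for x in range(p))
```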
16.3 Finite Fields
In this section we consider finite fields. In particular we show that if K is a finite field then |K| = p^m for some prime p and natural number m > 0. Further we show that if K_1, K_2 are finite fields with |K_1| = |K_2| then K_1 ≅ K_2. Hence there is a unique finite field for each possible order.

Notice that if K is a finite field then by necessity char K = p ≠ 0. We first show that in this case K is always perfect.
Theorem 16.3.1. A finite field is perfect.

Proof. Let K be a finite field of characteristic p > 0. Then the Frobenius map τ : x ↦ x^p is surjective, since it is injective and K is finite. Therefore K is perfect from Theorem 16.2.4.
Next we show that each finite field has order p^m for some prime p and natural number m > 0.

Lemma 16.3.2. Let K be a finite field. Then |K| = p^m for some prime p and natural number m > 0.
Proof. Let K be a finite field with characteristic p > 0. Then K can be considered as a vector space over GF(p), and of finite dimension since |K| < ∞. If α_1, ..., α_m is a basis then each f ∈ K can be written as f = c_1 α_1 + ··· + c_m α_m with each c_i ∈ GF(p). Hence there are p choices for each c_i and therefore p^m elements in K.
In Theorem 9.5.16 we proved that any finite subgroup of the multiplicative group of a field is cyclic. If K is a finite field then its multiplicative group K* is finite and hence cyclic.

Lemma 16.3.3. Let K be a finite field. Then its multiplicative group K* is cyclic.

If K is a finite field of order p^m then its multiplicative group K* has order p^m − 1. From Lagrange's theorem each nonzero element raised to the power p^m − 1 is the identity. Therefore we have the following result.
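For a small case such as GF(7) the generators of the multiplicative group can be found by brute force (our own sketch; "order" below means the multiplicative order of an element):

```python
# The multiplicative group of GF(7) is cyclic of order 6: a generator
# is an element whose powers run through all of 1..6.
p = 7

def order(a):
    k, x = 1, a
    while x != 1:
        x = (x * a) % p
        k += 1
    return k

generators = [a for a in range(1, p) if order(a) == p - 1]
assert generators == [3, 5]      # 3 and 5 generate GF(7)*
# every nonzero a satisfies a^(p-1) = 1 (Lagrange)
assert all(pow(a, p - 1, p) == 1 for a in range(1, p))
```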
Lemma 16.3.4. Let K be a field of order p^m. Then each α ∈ K is a zero of the polynomial x^{p^m} − x. In particular if α ≠ 0 then α is a zero of x^{p^m − 1} − 1.
If K is a finite field of order p^m, it is a finite extension of GF(p). Since the multiplicative group K* is cyclic we must have K = GF(p)(α) for some α ∈ K. From this we obtain that for a given possible finite order there is only one finite field up to isomorphism.
Theorem 16.3.5. Let K_1, K_2 be finite fields with |K_1| = |K_2|. Then K_1 ≅ K_2.

Proof. Let |K_1| = |K_2| = p^m. From the remarks above K_1 = GF(p)(α) where α has order p^m − 1 in K_1*. Similarly K_2 = GF(p)(β) where β also has order p^m − 1 in K_2*. Hence GF(p)(α) ≅ GF(p)(β) and therefore K_1 ≅ K_2.
In Lemma 16.3.2 we saw that if K is a finite field then |K| = p^n for some prime p and positive integer n. We now show that given a prime power p^n there does exist a finite field of that order.

Theorem 16.3.6. Let p be a prime and n > 0 a natural number. Then there exists a field K of order p^n.
Proof. Given a prime p consider the polynomial g(x) = x^{p^n} − x ∈ GF(p)[x]. Let K be the splitting field of this polynomial over GF(p). Since a finite field is perfect, K is a separable extension of GF(p) and hence all the zeros of g(x) are distinct in K.

Let J be the set of the p^n distinct zeros of g(x) within K. Let a, b ∈ J. Since

(a ± b)^{p^n} = a^{p^n} ± b^{p^n} and (ab)^{p^n} = a^{p^n} b^{p^n},

it follows that J forms a subfield of K. However J contains all the zeros of g(x), and since K is the smallest extension of GF(p) containing all the zeros of g(x), we must have K = J. Since J has p^n elements it follows that the order of K is p^n.
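The construction in the proof can be mirrored concretely (a hypothetical sketch, not the book's): realize GF(9) as GF(3)[x]/(x^2 + 1), writing elements as pairs (a, b) standing for a + bi with i^2 = −1, and check that all 9 elements are zeros of x^{p^n} − x.

```python
# GF(9) built as GF(3)[x]/(x^2 + 1); x^2 + 1 is irreducible over GF(3)
# since -1 is not a square mod 3.
p, n = 3, 2

def mul(u, v):                      # (a + b*i)(c + d*i) with i^2 = -1, mod 3
    a, b = u; c, d = v
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def power(u, k):
    r = (1, 0)
    for _ in range(k):
        r = mul(r, u)
    return r

elements = [(a, b) for a in range(p) for b in range(p)]
assert len(elements) == p ** n
# every element is a zero of g(x) = x^9 - x
assert all(power(u, p ** n) == u for u in elements)
```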
Combining Theorems 16.3.5 and 16.3.6 we get the following summary result, indicating that up to isomorphism there exists one and only one finite field of order p^n.

Theorem 16.3.7. Let p be a prime and n > 0 a natural number. Then up to isomorphism there exists a unique finite field of order p^n.
16.4 Separable Extensions
In this section we consider some properties of separable extensions.
Theorem 16.4.1. Let K be a field, A an algebraically closed field with K ⊂ A, and let α : K → A be a monomorphism. Further let a be algebraic over K. Then the number of monomorphisms β : K(a) → A with β|_K = α is equal to the number of pairwise distinct zeros in A of the minimal polynomial m_a of a over K.
Proof. Let β be as in the statement of the theorem. Then β is uniquely determined by β(a), and β(a) is a zero of the polynomial β(m_a(x)) = α(m_a(x)). Now let a' be a zero of α(m_a(x)) in A. Then there exists a β : K(a) → A with β(a) = a' from Theorem 7.1.4. Therefore α has exactly as many extensions β as α(m_a(x)) has pairwise distinct zeros in A. The number of pairwise distinct zeros of α(m_a(x)) is equal to the number of pairwise distinct zeros of m_a(x). This can be seen as follows. Let L_0 be a splitting field of m_a(x) and L_1 ⊂ A a splitting field of α(m_a(x)). From Theorems 8.1.5 and 8.1.6 there is an isomorphism ψ : L_0 → L_1 which maps the zeros of m_a(x) onto the zeros of α(m_a(x)).
Lemma 16.4.2. Let L|K be a finite extension and A an algebraically closed field with L ⊂ A. In particular L = K(a_1, ..., a_n) where the a_i are algebraic over K. Let k_i be the number of pairwise distinct zeros in A of the minimal polynomial m_{a_i} of a_i over K(a_1, ..., a_{i−1}). Then there are exactly k_1 ··· k_n monomorphisms β : L → A with β|_K = 1_K.

Proof. From Theorem 16.4.1 there are exactly k_1 monomorphisms α : K(a_1) → A with α|_K equal to the identity on K. Each such α has exactly k_2 extensions to K(a_1, a_2). We now continue in this manner.
Theorem 16.4.3. Let L|K be a field extension with M an intermediate field. If a ∈ L is separable over K then it is also separable over M.

Proof. This follows directly from the fact that the minimal polynomial of a over M divides the minimal polynomial of a over K.
Theorem 16.4.4. Let L|K be a field extension. Then the following are equivalent.
(1) L|K is finite and separable.
(2) There are finitely many elements a_1, ..., a_n, separable over K, with L = K(a_1, ..., a_n).
(3) L|K is finite, and if A is an algebraically closed field with L ⊂ A then there are exactly [L : K] monomorphisms α : L → A with α|_K = 1_K.
Proof. That (1) implies (2) follows directly from the definitions. We show then that (2) implies (3). Let L = K(a_1, ..., a_n) where a_1, ..., a_n are separable elements over K. The extension L|K is finite (see Theorem 5.3.4). Let k_i be the number of pairwise distinct zeros in A of the minimal polynomial m_{a_i}(x) = f_i(x) of a_i over K(a_1, ..., a_{i−1}). Then k_i ≤ deg(f_i) = [K(a_1, ..., a_i) : K(a_1, ..., a_{i−1})]. In fact k_i = deg(f_i(x)), since a_i is separable over K(a_1, ..., a_{i−1}) from Theorem 16.4.3. Therefore

[L : K] = k_1 ··· k_n

is equal to the number of monomorphisms α : L → A with α|_K the identity on K.
Finally we show that (3) implies (1). Suppose then the conditions of (3). Since L|K is finite there are finitely many a_1, ..., a_n ∈ L with L = K(a_1, ..., a_n). Let k_i and f_i(x) be as in the proof above, so that k_i ≤ deg(f_i(x)). By assumption we have

[L : K] = k_1 ··· k_n

equal to the number of monomorphisms α : L → A with α|_K the identity on K. Also

[L : K] = k_1 ··· k_n ≤ deg(f_1(x)) ··· deg(f_n(x)) = [L : K].

Hence k_i = deg(f_i(x)). Therefore by definition each a_i is separable over K(a_1, ..., a_{i−1}).

To complete the proof we must show that L|K is separable. Inductively it suffices to prove that K(a_1)|K is separable whenever a_1 is separable over K and not in K.
This is clear if char(K) = 0 because K is then perfect. Suppose then that char(K) = p > 0. First we show that K(a_1^p) = K(a_1). Certainly K(a_1^p) ⊂ K(a_1). Assume that a_1 ∉ K(a_1^p). Then g(x) = x^p − a_1^p is the minimal polynomial of a_1 over K(a_1^p). This follows from the fact that x^p − a_1^p = (x − a_1)^p, and hence there can be no irreducible factor of x^p − a_1^p of the form (x − a_1)^m with m < p and m|p.

However it follows then in this case that g'(x) = 0, contradicting the separability of a_1. Therefore K(a_1) = K(a_1^p).

Let E = K(a_1); then also E = K(E^p), where E^p is the field generated by the pth powers of E. Now let b ∈ E = K(a_1). We must show that the minimal polynomial of b, say m_b(x), is separable over K.
Assume that m_b(x) is not separable over K. Then

m_b(x) = ∑_{i=0}^{k} b_i x^{pi},  b_i ∈ K, b_k = 1,

from Theorem 16.2.3. We have

b_0 + b_1 b^p + ··· + b_k b^{pk} = 0.

Therefore the elements 1, b^p, ..., b^{pk} are linearly dependent over K. Since K(a_1) = E = K(E^p) we find that 1, b, ..., b^k are also linearly dependent over K, since if they were independent their pth powers would also be independent. However this is not possible since k < deg(m_b(x)). Therefore m_b(x) is separable over K and hence K(a_1)|K is separable. Altogether L|K is then finite and separable, completing the proof.
Theorem 16.4.5. Let L|K be a field extension and let M be an intermediate field. Then the following are equivalent.
(1) L|K is separable.
(2) L|M and M|K are separable.

Proof. We first show that (1) implies (2): if L|K is separable then L|M is separable by Theorem 16.4.3, and M|K is separable.

Now suppose (2) and let M|K and L|M be separable. Let a ∈ L and let

m_a(x) = f(x) = b_0 + ··· + b_{n−1} x^{n−1} + x^n

be the minimal polynomial of a over M. Then f(x) is separable. Let

M' = K(b_0, ..., b_{n−1}).

We have K ⊂ M' ⊂ M, and hence M'|K is separable since M|K is separable. Further a is separable over M' since f(x) is separable and f(x) ∈ M'[x]. From Theorem 16.4.1 each monomorphism α : M' → M̄, with M̄ the algebraic closure of M', has exactly n = deg(f(x)) = [M'(a) : M'] extensions to M'(a).

Since M'|K is separable and finite there are [M' : K] monomorphisms α : M' → M̄ with α|_K the identity on K, from Theorem 16.4.4. Altogether there are [M'(a) : K] monomorphisms M'(a) → M̄ whose restriction to K is the identity on K. Therefore M'(a)|K is separable from Theorem 16.4.4. Hence a is separable over K and then L|K is separable. Therefore (2) implies (1).
Theorem 16.4.6. Let L|K be a field extension and let S ⊂ L be such that all elements of S are separable over K. Then K(S)|K is separable and K[S] = K(S).

Proof. Let W be the set of finite subsets of S and let T ∈ W. From Theorem 16.4.4 we obtain that K(T)|K is separable. Since each element of K(S) is contained in some K(T) we have that K(S)|K is separable. Since all elements of S are algebraic we have that K[S] = K(S).

Theorem 16.4.7. Let L|K be a field extension. Then there exists in L a uniquely determined maximal field M with the property that M|K is separable. If a ∈ L is separable over M then a ∈ M. M is called the separable hull of K in L.

Proof. Let S be the set of all elements in L which are separable over K. Define M = K(S). Then M|K is separable from Theorem 16.4.6. Now let a ∈ L be separable over M. Then M(a)|M is separable from Theorem 16.4.4. Further M(a)|K is separable from Theorem 16.4.5. It follows that a ∈ M.
16.5 Separability and Galois Extensions
We now completely characterize Galois extensions L|K as finite, normal, separable extensions.

Theorem 16.5.1. Let L|K be a field extension. Then the following are equivalent.
(1) L|K is a Galois extension.
(2) L is the splitting field of a separable polynomial in K[x].
(3) L|K is finite, normal and separable.

Therefore we may characterize the Galois extensions of a field K as the finite, normal and separable extensions of K.
Proof. Recall from Theorem 8.2.2 that an extension L|K is normal if
(1) L|K is algebraic and
(2) each irreducible polynomial f(x) ∈ K[x] that has a zero in L splits into linear factors in L[x].

Now suppose that L|K is a Galois extension. Then L|K is finite from Theorem 15.4.1. Let L = K(b_1, ..., b_n) and let m_{b_i}(x) = f_i(x) be the minimal polynomial of b_i over K. Let a_{i1}, ..., a_{in_i} be the pairwise distinct elements of

H_i = {α(b_i) : α ∈ Aut(L|K)}.

Define

g_i(x) = (x − a_{i1}) ··· (x − a_{in_i}) ∈ L[x].

If α ∈ Aut(L|K) then α(g_i) = g_i, since α permutes the elements of H_i. This means that the coefficients of g_i(x) are in Fix(L, Aut(L|K)) = K, so g_i(x) ∈ K[x]. Further, because b_i is one of the a_{ij}, we have f_i(x)|g_i(x). The group Aut(L|K) acts transitively on {a_{i1}, ..., a_{in_i}} by the choice of the a_{ij}. Therefore each g_i(x) is irreducible (see Theorem 15.2.4). It follows that f_i(x) = g_i(x). Now f_i(x) has only simple zeros in L, that is, no zero has multiplicity ≥ 2, and f_i(x) splits over L. Therefore L is a splitting field of f(x) = f_1(x) ··· f_n(x), and f(x) is separable by definition. Hence (1) implies (2).

Now suppose that L is the splitting field of a separable polynomial f(x) ∈ K[x]. Then L|K is finite. From Theorem 16.4.4 we get that L|K is separable, since L = K(a_1, ..., a_n) with each a_i a zero of f(x) and hence separable over K. Further L|K is normal from Definition 8.2.1. Hence (2) implies (3).

Finally suppose that L|K is finite, normal and separable. Since L|K is finite and separable, from Theorem 16.4.4 there exist exactly [L : K] monomorphisms α : L → K̄, with K̄ the algebraic closure of K and α|_K the identity on K. Since L|K is normal these monomorphisms are already automorphisms of L from Theorem 8.2.2. Hence [L : K] ≤ |Aut(L|K)|. Further |Aut(L|K)| ≤ [L : K] from Theorem 15.4.3. Combining these we have [L : K] = |Aut(L|K)| and hence L|K is a Galois extension from Theorem 15.4.9. Therefore (3) implies (1), completing the proof.
Recall that any field of characteristic 0 is perfect and therefore any finite extension of it is separable. Applying this to Q implies that the Galois extensions of the rationals are precisely the splitting fields of polynomials.

Corollary 16.5.2. The Galois extensions of the rationals are precisely the splitting fields of polynomials in Q[x].
Theorem 16.5.3. Let L|K be a finite, separable field extension. Then there exists an extension field M of L such that M|K is a Galois extension.

Proof. Let L = K(a_1, ..., a_n) with all a_i separable over K. Let f_i(x) be the minimal polynomial of a_i over K. Then each f_i(x), and hence also f(x) = f_1(x) ··· f_n(x), is separable over K. Let M be the splitting field of f(x) over L; then M is also the splitting field of f(x) over K, and M|K is a Galois extension from Theorem 16.5.1.
Example 16.5.4. Let K = Q be the rationals and let f(x) = x^4 − 2 ∈ Q[x]. From Chapter 8 we know that L = Q(2^{1/4}, i) is a splitting field of f(x). By the Eisenstein criterion f(x) is irreducible, and [L : Q] = 8. Moreover

2^{1/4}, i·2^{1/4}, −2^{1/4}, −i·2^{1/4}

are the zeros of f(x). Since the rationals are perfect, f(x) is separable, and L|K is a Galois extension by Theorem 16.5.1. From the calculations in Chapter 15 we have

|Aut(L|K)| = |Aut(L)| = [L : K] = 8.

Let

G = Aut(L|K) = Aut(L|Q) = Aut(L).

We want to determine the subgroup lattice of the Galois group G. We show G ≅ D_4, the dihedral group of order 8. Since there are 4 zeros of f(x) and G permutes these, G is isomorphic to a subgroup of S_4, and since its order is 8, G is a 2-Sylow subgroup of S_4. From this we have that

G = ⟨(2, 4), (1, 2, 3, 4)⟩.

If we let τ = (2, 4) and σ = (1, 2, 3, 4) we get the isomorphism between G and D_4. From Theorem 14.1.1 we know that D_4 = ⟨r, f : r^4 = f^2 = (rf)^2 = 1⟩.
This can also be seen in the following manner. Let

a_1 = 2^{1/4}, a_2 = i·2^{1/4}, a_3 = −2^{1/4}, a_4 = −i·2^{1/4}.

Let α ∈ G. Then α is determined once we know α(2^{1/4}) and α(i). The possibilities for α(i) are i or −i, that is, the zeros of x^2 + 1. The possibilities for α(2^{1/4}) are the 4 zeros of f(x) = x^4 − 2. Hence we have 8 possibilities for α, and these are exactly the elements of the group G. We have δ, τ ∈ G with

δ(2^{1/4}) = i·2^{1/4}, δ(i) = i

and

τ(2^{1/4}) = 2^{1/4}, τ(i) = −i.

It is straightforward to show that δ has order 4, τ has order 2 and δτ has order 2. These define a group of order 8 isomorphic to D_4, and since G has 8 elements this must be all of G.
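The order computations for δ and τ can be verified mechanically (a sketch with our own encoding, not from the text): an automorphism is the pair (k, s) acting by 2^{1/4} ↦ i^k·2^{1/4} and i ↦ i^s with s ∈ {1, 3}.

```python
# The Galois group of x^4 - 2 over Q as pairs (k, s):
# r = 2^(1/4) -> i^k * r and i -> i^s.
def compose(f, g):                      # f ∘ g (apply g, then f)
    k1, s1 = f; k2, s2 = g
    return ((k1 + s1 * k2) % 4, (s1 * s2) % 4)

e     = (0, 1)                          # identity
delta = (1, 1)                          # r -> i*r,  i -> i
tau   = (0, 3)                          # r -> r,    i -> -i

def power(f, n):
    out = e
    for _ in range(n):
        out = compose(out, f)
    return out

assert power(delta, 4) == e and power(delta, 2) != e    # delta has order 4
assert power(tau, 2) == e and tau != e                  # tau has order 2
assert power(compose(delta, tau), 2) == e               # (delta*tau)^2 = 1

# closure under composition gives exactly 8 automorphisms: G ≅ D_4
G = {e}
while True:
    bigger = {compose(f, g) for f in G | {delta, tau} for g in G | {delta, tau}}
    if bigger <= G:
        break
    G |= bigger
assert len(G) == 8
```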
We now look at the subgroup lattice of G and then the corresponding field lattice. Let δ and τ be as above. Then G has 5 subgroups of order 2:

{1, δ^2}, {1, τ}, {1, δτ}, {1, δ^2 τ}, {1, δ^3 τ}.

Of these only {1, δ^2} is normal in G. G has 3 subgroups of order 4:

{1, δ, δ^2, δ^3}, {1, δ^2, τ, τδ^2}, {1, δ^2, δτ, δ^3 τ},

and all are normal since they all have index 2.
Hence we have the following subgroup lattice:
From this we construct the lattice of fields and intermediate fields. Since G has 10 subgroups, from the fundamental theorem of Galois theory there are 10 fields between Q and L (inclusive), namely the fixed fields Fix(L, H) where H is a subgroup of G. In the identification, the field corresponding to the whole group G is the ground field Q (recall that the lattice of fields is the inverted lattice of the subgroups), while the field corresponding to the identity subgroup is the whole field L. We now consider the other proper subgroups. Let δ, τ be as before.
(1) Consider M_1 = Fix(L, {1, τ}). Now {1, τ} fixes Q(2^{1/4}) elementwise, so that Q(2^{1/4}) ⊂ M_1. Further [L : M_1] = |{1, τ}| = 2 and also [L : Q(2^{1/4})] = 2. Therefore M_1 = Q(2^{1/4}).
(2) Consider M_2 = Fix(L, {1, τδ}). We have

τδ(2^{1/4}) = τ(i·2^{1/4}) = −i·2^{1/4}
τδ(i·2^{1/4}) = τ(−2^{1/4}) = −2^{1/4}
τδ(−2^{1/4}) = τ(−i·2^{1/4}) = i·2^{1/4}
τδ(−i·2^{1/4}) = τ(2^{1/4}) = 2^{1/4}.

It follows that τδ fixes (1 − i)·2^{1/4} and hence M_2 = Q((1 − i)·2^{1/4}).
(3) Consider M_3 = Fix(L, {1, τδ^2}). The map τδ^2 interchanges a_1 and a_3 and fixes a_2 and a_4. Therefore M_3 = Q(i·2^{1/4}).
In an analogous manner we can then consider the other 5 proper subgroups and the corresponding intermediate fields. If we let b_1 = (1 − i)·2^{1/4} and b_2 = (1 + i)·2^{1/4} we get the following lattice of fields and subfields.
16.6 The Primitive Element Theorem
In this section we describe finite separable field extensions as simple extensions. It follows that a Galois extension is always a simple extension.

Theorem 16.6.1 (primitive element theorem). Let L = K(γ_1, ..., γ_n) and suppose that each γ_i is separable over K. Then there exists a γ_0 ∈ L such that L = K(γ_0). The element γ_0 is called a primitive element.
Proof. Suppose first that K is a finite field. Then L is also a finite field and therefore L* = ⟨γ_0⟩ is cyclic for some γ_0 ∈ L. Therefore L = K(γ_0) and the theorem is proved if K is a finite field.

Now suppose that K is infinite. Inductively it suffices to prove the theorem for n = 2. Hence let α, β ∈ L be separable over K. We must show that there exists a γ ∈ L with K(α, β) = K(γ).
Let E be the splitting field of the polynomial m_α(x)m_β(x) over L, where m_α(x), m_β(x) are respectively the minimal polynomials of α, β over K. In E[x] we have

m_α(x) = (x − α_1)(x − α_2) ··· (x − α_s) with α = α_1,
m_β(x) = (x − β_1)(x − β_2) ··· (x − β_t) with β = β_1.

By assumption the α_i and the β_j are respectively pairwise distinct.
For each pair (i, j) with 1 ≤ i ≤ s, 2 ≤ j ≤ t the equation

α_1 + zβ_1 = α_i + zβ_j

has exactly one solution z ∈ E, since β_j − β_1 ≠ 0 if j ≥ 2. Since K is infinite there exists a c ∈ K with

α_1 + cβ_1 ≠ α_i + cβ_j

for all i, j with 1 ≤ i ≤ s, 2 ≤ j ≤ t. With such a value c ∈ K we define

γ = α + cβ = α_1 + cβ_1.
We claim that K(α, β) = K(γ). It suffices to show that β ∈ K(γ), for then α = γ − cβ ∈ K(γ). This implies that K(α, β) ⊂ K(γ), and since γ ∈ K(α, β) it follows that K(α, β) = K(γ).

To show that β ∈ K(γ) we first define f(x) = m_α(γ − cx) and let d(x) = gcd(f(x), m_β(x)). We may assume that d(x) is monic. We show that d(x) = x − β. Then β ∈ K(γ) since d(x) ∈ K(γ)[x].

Assume first that d(x) = 1. Then gcd(f(x), m_β(x)) = 1 and f(x) and m_β(x) are also relatively prime in E[x]. This is a contradiction since f(x) and m_β(x) have the common zero β ∈ E and hence the common divisor x − β. Therefore d(x) ≠ 1, so deg(d(x)) ≥ 1.

The polynomial d(x) is a divisor of m_β(x) and hence d(x) splits in E[x] into linear factors of the form x − β_j, 1 ≤ j ≤ t. The proof is completed if we can show that no linear factor of the form x − β_j with 2 ≤ j ≤ t is a divisor of f(x). That is, we must show that f(β_j) ≠ 0 in E if j ≥ 2.

Now f(β_j) = m_α(γ − cβ_j) = m_α(α_1 + cβ_1 − cβ_j). Suppose that f(β_j) = 0 for some j ≥ 2. This would imply that α_i = α_1 + cβ_1 − cβ_j for some i, that is, α_1 + cβ_1 = α_i + cβ_j with j ≥ 2. This contradicts the choice of the value c. Therefore f(β_j) ≠ 0 if j ≥ 2, completing the proof.
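As a concrete instance of the proof (a hypothetical numerical check, not in the text): for α = √2, β = √3 the value c = 1 already avoids all the forbidden equalities, and γ = √2 + √3 generates both radicals, since γ³ = 11√2 + 9√3.

```python
# gamma = sqrt(2) + sqrt(3) is a primitive element of Q(sqrt(2), sqrt(3)):
# sqrt(2) = (g^3 - 9g)/2 and sqrt(3) = (11g - g^3)/2 are polynomials in g.
from math import sqrt, isclose

g = sqrt(2) + sqrt(3)          # c = 1 works as the value c from the proof

assert isclose((g**3 - 9*g) / 2, sqrt(2))
assert isclose((11*g - g**3) / 2, sqrt(3))
```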
In the above theorem it is sufficient to assume that n − 1 of γ_1, ..., γ_n are separable over K. The proof is similar: we only need that β_1, ..., β_t are pairwise distinct, which holds if β is separable over K, to show that K(α, β) = K(γ) for some γ ∈ L.

If K is a perfect field then every finite extension is separable. Therefore we get the following corollary.

Corollary 16.6.2. Let L|K be a finite extension with K a perfect field. Then L = K(γ) for some γ ∈ L.
Corollary 16.6.3. Let L|K be a finite extension with K a perfect field. Then there exist only finitely many intermediate fields E with K ⊂ E ⊂ L.

Proof. Since K is a perfect field we have L = K(γ) for some γ ∈ L. Let m_γ(x) ∈ K[x] be the minimal polynomial of γ over K and let M be the splitting field of m_γ(x) over K. Then M|K is a Galois extension and hence there are only finitely many intermediate fields between K and M. Therefore there are also only finitely many between K and L.

Suppose that L|K is algebraic. Then in general L = K(γ) for some γ ∈ L if and only if there exist only finitely many intermediate fields E with K ⊂ E ⊂ L. This condition on intermediate fields implies that L|K is finite if L|K is algebraic. Hence we have proved this result in the case that K is perfect. The general case is discussed in the book of S. Lang [8].
16.7 Exercises
1. Let f(x) = x^4 − 8x^3 + 24x^2 − 32x + 14 ∈ Q[x] and let v ∈ C be a zero of f. Let α := v(4 − v) and let L be a splitting field of f over Q. Show:
(i) f is irreducible over Q and f(x) = f(4 − x).
(ii) There is exactly one automorphism σ of Q(v) with σ(v) = 4 − v.
(iii) E := Q(α) is the fixed field of σ and [E : Q] = 2.
(iv) Determine the minimal polynomial of α over Q and determine α.
(v) [Q(v) : E] = 2; determine the minimal polynomial of v over E and determine v and all other zeros of f(x).
(vi) Determine the degree [L : Q].
(vii) Determine the structure of Aut(L|Q).
2. Let L|K be a field extension and f ∈ K[x] a separable polynomial. Let Z be a splitting field of f over L and Z_0 a splitting field of f over K. Show that Aut(Z|L) is isomorphic to a subgroup of Aut(Z_0|K).
3. Let L|K be a field extension and v ∈ L. Show that K(v + c) = K(v) for each element c ∈ K, and that K(cv) = K(v) for c ≠ 0.
4. Let v = √2 + √3 and let L = Q(v). Show that √2 and √3 can be expressed as Q-linear combinations of 1, v, v^2, v^3. Conclude that L = Q(√2, √3).
5. Let L be the splitting field of x^3 − 5 over Q in C. Determine a primitive element t of L over Q.
Chapter 17
Applications of Galois Theory
17.1 Applications of Galois Theory
As we mentioned in Chapter 1, Galois theory was originally developed as part of the proof that polynomial equations of degree 5 or higher over the rationals cannot be solved by formulas in terms of radicals. In this chapter we first carry this out and prove the insolvability of the quintic by radicals. To do this we must examine in detail what we call radical extensions.

We then return to some geometric material we started in Chapter 6. There, using general field extensions, we proved the impossibility of certain geometric compass and straightedge constructions. Here we use Galois theory to consider constructible n-gons.

Finally we will use Galois theory to present a proof of the fundamental theorem of algebra, which says essentially that the complex number field C is algebraically closed.
17.2 Field Extensions by Radicals
We would like to use Galois theory to prove the insolvability by radicals of polynomial
equations of degree 5 or higher. To do this we must introduce extensions by radicals
and solvability by radicals.
Deﬁnition 17.2.1. Let 1[1 be a ﬁeld extension.
(1) Each zero of a polynomial .
n
÷a ÷ 1Œ. in 1 is called a radical (over 1). We
denote it by
n
_
a (if a more detailed identiﬁcation is not necessary).
(2) 1is called a simple extension of 1 by a radical if 1 = 1(
n
_
a) for some a ÷ 1.
(3) 1 is called an extension of 1 by radicals if there is a chain of ﬁelds
1 = 1
0
Ç 1
1
Ç Ç 1
n
= 1
such that each 1
i
is a simple extension of 1
i1
by a radical for each i =
1. . . . . m.
(4) Let ¡(.) ÷ 1Œ.. Then the equation ¡(.) = 0 is solvable by radicals or just
solvable if the splitting ﬁeld of ¡(.) over 1 is contained in an extension of 1
by radicals.
In proving the insolvability of the quintic we will look for necessary and sufficient conditions for the solvability of polynomial equations. Our main result will be that if f(x) ∈ K[x] then f(x) = 0 is solvable over K if and only if the Galois group of the splitting field of f(x) over K is a solvable group (see Chapter 11).

In the remainder of this section we assume that all fields have characteristic zero. The next theorem gives a characterization of simple extensions by radicals.
Theorem 17.2.2. Let L|K be a field extension and n ∈ N. Assume that the polynomial x^n − 1 splits into linear factors in K[x], so that K contains all the nth roots of unity. Then L = K(ⁿ√a) for some a ∈ K if and only if L|K is a Galois extension and Aut(L|K) ≅ Z/mZ for some m ∈ N with m|n.
Proof. The nth roots of unity, that is, the zeros of the polynomial x^n − 1 ∈ K[x], form a cyclic multiplicative group U ⊂ K* of order n, since each finite subgroup of the multiplicative group K* of K is cyclic and |U| = n. We call an nth root of unity ω primitive if U = ⟨ω⟩.

Now let L = K(ⁿ√a) with a ∈ K, that is, L = K(β) with β^n = a ∈ K. Let ω be a primitive nth root of unity. With this β the elements ωβ, ω^2 β, ..., ω^n β = β are zeros of x^n − a. Hence the polynomial x^n − a splits into linear factors over L, and L = K(β) is a splitting field of x^n − a over K. It follows that L|K is a Galois extension.

Let σ ∈ Aut(L|K). Then σ(β) = ω^ν β for some 0 < ν ≤ n. The element ω^ν is uniquely determined by σ and we may write ω^ν = ω_σ.

Consider the map φ : Aut(L|K) → U given by σ ↦ ω_σ, where ω_σ is defined as above by σ(β) = ω_σ β. If τ, σ ∈ Aut(L|K) then

στ(β) = σ(ω_τ)σ(β) = ω_τ ω_σ β

because ω_τ ∈ K. Therefore φ(στ) = φ(σ)φ(τ) and hence φ is a homomorphism. The kernel ker(φ) consists of the K-automorphisms σ of L with σ(β) = β. However, since L = K(β) it follows that ker(φ) contains only the identity. The Galois group Aut(L|K) is therefore isomorphic to a subgroup of U. Since U is cyclic of order n we have that Aut(L|K) is cyclic of order m for some m|n, completing one direction of the theorem.
Conversely first suppose that L|K is a Galois extension with Aut(L|K) ≅ ℤ/nℤ a cyclic group of order n. Let σ be a generator of Aut(L|K). This is equivalent to

Aut(L|K) = {σ, σ², ..., σ^n = 1}.

Let ω be a primitive nth root of unity. Then by assumption ω ∈ K, σ(ω) = ω and Γ = {ω, ω², ..., ω^n = 1}. Further the pairwise distinct automorphisms σ^v, v = 1, 2, ..., n, of L are linearly independent, that is, there exists an η ∈ L such that

(ω, η) = Σ_{v=1}^{n} ω^v σ^v(η) ≠ 0.

The element (ω, η) is called the Lagrange resolvent of ω by η. We fix such an element η ∈ L. Then we get, since σ(ω) = ω,

σ((ω, η)) = Σ_{v=1}^{n} ω^v σ^{v+1}(η) = ω^{−1} Σ_{v=1}^{n} ω^{v+1} σ^{v+1}(η) = ω^{−1} Σ_{v=2}^{n+1} ω^v σ^v(η)
= ω^{−1} Σ_{v=1}^{n} ω^v σ^v(η) = ω^{−1}(ω, η).

Further σ^μ((ω, η)) = ω^{−μ}(ω, η) for μ = 1, 2, ..., n. Hence the only K-automorphism of L which fixes (ω, η) is the identity. Therefore Aut(L|K((ω, η))) = {1} and hence L = K((ω, η)) by the fundamental theorem of Galois theory.

Further

σ((ω, η)^n) = (σ((ω, η)))^n = (ω^{−1}(ω, η))^n = ω^{−n}(ω, η)^n = (ω, η)^n.

Therefore (ω, η)^n ∈ Fix(L, Aut(L|K)) = K, again from the fundamental theorem of Galois theory. If a = (ω, η)^n then first a ∈ K and second L = K(ⁿ√a) = K((ω, η)). This proves the result in the case where m = n. We now use this to prove it in general.

Suppose that L|K is a Galois extension with Aut(L|K) ≅ ℤ/mℤ a cyclic group of order m where n = qm for some q ≥ 1. Then L = K(ᵐ√b) for some b ∈ K by the above argument. Hence L = K(β) with β^m ∈ K. Then certainly a = β^n = (β^m)^q ∈ K and therefore L = K(β) = K(ⁿ√a) for some a ∈ K, completing the general case.
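As a quick numerical illustration of the Lagrange resolvent (our own sketch, not part of the text), take K = ℚ(ω) with ω a primitive third root of unity and L = K(³√2), so that Aut(L|K) is cyclic of order 3 with generator σ : ³√2 ↦ ω·³√2. The choice η = ³√4 is one for which the resolvent does not vanish, and (ω, η)³ = 108 indeed lies in K (here even in ℚ):

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)   # primitive third root of unity
eta = 4 ** (1 / 3)                 # eta = cbrt(4) = cbrt(2)**2, an element of L

# sigma(cbrt(2)) = w * cbrt(2), hence sigma**v applied to eta gives w**(2*v) * eta
resolvent = sum(w**v * w**(2 * v) * eta for v in (1, 2, 3))

# (w, eta) = 3 * cbrt(4) is nonzero, and its cube is rational:
assert abs(resolvent - 3 * eta) < 1e-9
assert abs(resolvent**3 - 108) < 1e-6
```

Note that the more obvious choice η = ³√2 gives resolvent 0, which is why the proof only asserts that *some* η works.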
We next show that every extension by radicals is contained in a Galois extension by radicals.

Theorem 17.2.3. Each extension L of K by radicals is contained in a Galois extension L̄ of K by radicals. This means that there is an extension L̄ of K by radicals with L ⊆ L̄ and L̄|K is a Galois extension.
Proof. We use induction on the degree m = [L : K]. Suppose first that m = 1 or, more generally, that L = K(ⁿ√a) is a simple extension of K by a radical. If ω is a primitive nth root of unity define K̄ = K(ω) and L̄ = K̄(ⁿ√a). We then get the chain K ⊆ K̄ ⊆ L̄ with L ⊆ L̄, and L̄|K is a Galois extension. This last statement is due to the fact that L̄ is the splitting field of the polynomial x^n − a ∈ K[x] over K. Hence the theorem is true if m = 1.

Now suppose that m ≥ 2 and suppose that the theorem is true for all extensions F of K by radicals with [F : K] < m.

Since m ≥ 2, by the definition of an extension by radicals there exists a simple extension by a radical at the top of the chain. That is, there exists a field E with

K ⊆ E ⊆ L, [L : E] ≥ 2

and L = E(ⁿ√a) for some a ∈ E, n ∈ ℕ. Now [E : K] < m, so by the inductive hypothesis there exists a Galois extension by radicals Ē of K with E ⊆ Ē. Let G = Aut(Ē|K) and let L̄ be the splitting field of the polynomial f(x) = m_a(x^n) ∈ K[x] over Ē, where m_a(x) is the minimal polynomial of a over K. We show that L̄ has the desired properties.

Now ⁿ√a ∈ L is a zero of the polynomial f(x) and K ⊆ Ē ⊆ L̄. Therefore L̄ contains a K-isomorphic image of L = E(ⁿ√a) and hence we may consider L̄ as an extension of L.

Since Ē is a Galois extension of K, the polynomial m_a(x), which has the zero a ∈ E ⊆ Ē, splits into linear factors over Ē, and so f(x) may be factored as

f(x) = (x^n − α_1) ··· (x^n − α_s)

with α_i ∈ Ē for i = 1, ..., s. All zeros of f(x) in L̄ are radicals over Ē. Therefore L̄ is an extension by radicals of Ē. Since Ē is also an extension by radicals of K we obtain that L̄ is an extension by radicals of K.

Since Ē is a Galois extension of K we have that Ē is a splitting field of a polynomial g(x) ∈ K[x]. Further L̄ is a splitting field of f(x) ∈ K[x] over Ē. Altogether then we have that L̄ is a splitting field of f(x)g(x) ∈ K[x] over K. Therefore L̄ is a Galois extension of K, completing the proof.
We will eventually show that a polynomial equation is solvable by radicals if and
only if the corresponding Galois group is a solvable group. We now begin to ﬁnd
conditions where the Galois group is solvable.
Lemma 17.2.4. Let K = L_0 ⊆ L_1 ⊆ ··· ⊆ L_r = L be a chain of fields such that the following hold:

(i) L is a Galois extension of K.

(ii) L_j is a Galois extension of L_{j−1} for j = 1, ..., r.

(iii) G_j = Aut(L_j|L_{j−1}) is abelian for j = 1, ..., r.

Then G = Aut(L|K) is solvable.

Proof. We prove the lemma by induction on r. If r = 0 then G = {1} and there is nothing to prove. Suppose then that r ≥ 1 and assume that the lemma holds for all such chains of fields with a length r′ < r. Since L_1|K is a Galois extension, Aut(L|L_1) is a normal subgroup of G by the fundamental theorem of Galois theory and further

G_1 = Aut(L_1|K) ≅ G / Aut(L|L_1).

Since G_1 is an abelian group it is solvable, and by the inductive assumption applied to the chain L_1 ⊆ ··· ⊆ L_r = L the group Aut(L|L_1) is solvable. Therefore G is solvable (see Theorem 12.2.4).
Lemma 17.2.5. Let L|K be a field extension. Let K̄ and L̄ be the splitting fields of the polynomial x^n − 1 ∈ K[x] over K and L respectively. Since K ⊆ L we have K̄ ⊆ L̄. Then the following hold:

(1) If σ ∈ Aut(L̄|L) then σ|_K̄ ∈ Aut(K̄|K) and the map

Aut(L̄|L) → Aut(K̄|K) given by σ ↦ σ|_K̄

is an injective homomorphism.

(2) Suppose that in addition L|K is a Galois extension. Then L̄|K is also a Galois extension. If further σ ∈ Aut(L̄|K̄) then σ|_L ∈ Aut(L|K) and

Aut(L̄|K̄) → Aut(L|K) given by σ ↦ σ|_L

is an injective homomorphism.
Proof. (1) Let ω be a primitive nth root of unity. Then K̄ = K(ω) and L̄ = L(ω). Each σ ∈ Aut(L̄|L) maps ω onto a primitive nth root of unity and fixes K ⊆ L elementwise. Hence from σ ∈ Aut(L̄|L) we get that σ|_K̄ ∈ Aut(K̄|K). Certainly the map σ ↦ σ|_K̄ defines a homomorphism Aut(L̄|L) → Aut(K̄|K). Let σ|_K̄ = 1 with σ ∈ Aut(L̄|L). Then σ(ω) = ω and therefore we have already that σ = 1 since L̄ = L(ω).

(2) If L is the splitting field of a polynomial g(x) over K then L̄ is the splitting field of g(x)(x^n − 1) over K. Hence L̄|K is a Galois extension. Therefore K ⊆ L ⊆ L̄ and L|K, L̄|K and L̄|L are all Galois extensions. Therefore from the fundamental theorem of Galois theory

Aut(L|K) = {σ|_L : σ ∈ Aut(L̄|K)}.

In particular σ|_L ∈ Aut(L|K) if σ ∈ Aut(L̄|K̄). Certainly the map Aut(L̄|K̄) → Aut(L|K) given by σ ↦ σ|_L is a homomorphism. From σ ∈ Aut(L̄|K̄) we get that σ(ω) = ω, where as above ω is a primitive nth root of unity. Therefore if σ|_L = 1 then already σ = 1 since L̄ = L(ω). Hence the map is injective.
17.3 Cyclotomic Extensions

Very important in the solvability by radicals problem are the splitting fields of the polynomials x^n − 1 over ℚ. These are called cyclotomic fields.

Definition 17.3.1. The splitting field of the polynomial x^n − 1 ∈ ℚ[x] with n ≥ 2 is called the nth cyclotomic field, denoted k_n.

We have k_n = ℚ(ω) where ω is a primitive nth root of unity, for example ω = e^{2πi/n}. k_n|ℚ is a Galois extension and the Galois group Aut(k_n|ℚ) is the set of automorphisms σ_m : ω ↦ ω^m with 1 ≤ m ≤ n and gcd(m, n) = 1.

To understand this group G we need the following concept. A prime residue class mod n is a residue class a + nℤ with gcd(a, n) = 1. The set of the prime residue classes mod n is just the set of invertible elements with respect to multiplication of the ring ℤ/nℤ. This forms a multiplicative group that we denote by (ℤ/nℤ)* = P_n. We have |P_n| = φ(n) where φ(n) is the Euler phi-function.

If G = Aut(k_n|ℚ) then clearly G ≅ P_n under the map σ_m ↦ m + nℤ.
If n = p is a prime number then G = Aut(k_p|ℚ) is cyclic with |G| = p − 1.

If n = p² then |G| = |Aut(k_{p²}|ℚ)| = p(p − 1) since

(x^{p²} − 1)/(x^p − 1) = x^{p(p−1)} + x^{p(p−2)} + ··· + x^p + 1,

a polynomial of degree p(p − 1), and each primitive p²th root of unity is a zero of this polynomial.
Lemma 17.3.2. Let K be a field and K̄ be the splitting field of x^n − 1 over K. Then Aut(K̄|K) is abelian.

Proof. We apply Lemma 17.2.5 to the field extension K|ℚ. This can be done since the characteristic of K is zero and ℚ is the prime field of K. It follows from part (1) of Lemma 17.2.5 that Aut(K̄|K) is isomorphic to a subgroup of Aut(ℚ̄|ℚ), where ℚ̄ is the splitting field of x^n − 1 over ℚ. But ℚ̄ = k_n and hence Aut(ℚ̄|ℚ) ≅ P_n is abelian. Therefore Aut(K̄|K) is abelian.
17.4 Solvability and Galois Extensions

In this section we prove that solvability by radicals is equivalent to the solvability of the Galois group.

Theorem 17.4.1. Let L|K be a Galois extension of K by radicals. Then G = Aut(L|K) is a solvable group.
Proof. Since L|K is an extension of K by radicals we have a chain of fields

K = L_0 ⊆ L_1 ⊆ ··· ⊆ L_r = L

such that L_j = L_{j−1}(a_j^{1/n_j}) for some a_j ∈ L_{j−1}. Let n = n_1 ··· n_r and let L̄_j be the splitting field of the polynomial x^n − 1 ∈ K[x] over L_j for each j = 0, 1, ..., r. Then L̄_j = L̄_{j−1}(a_j^{1/n_j}) and we get the chain

K ⊆ K̄ = L̄_0 ⊆ L̄_1 ⊆ ··· ⊆ L̄_r = L̄.

From part (2) of Lemma 17.2.5 we get that L̄|K is a Galois extension. Further L̄_j|L̄_{j−1} is a Galois extension with Aut(L̄_j|L̄_{j−1}) cyclic from Theorem 17.2.2, since L̄_{j−1} contains all the n_jth roots of unity. In particular Aut(L̄_j|L̄_{j−1}) is abelian. The group Aut(K̄|K) is abelian from Lemma 17.3.2. Therefore we may apply Lemma 17.2.4 to the chain

K ⊆ K̄ = L̄_0 ⊆ ··· ⊆ L̄_r = L̄.

Therefore Ḡ = Aut(L̄|K) is solvable. The group G = Aut(L|K) is a homomorphic image of Ḡ from the fundamental theorem of Galois theory. Since homomorphic images of solvable groups are still solvable (see Theorem 12.2.3) it follows that G is solvable.
Lemma 17.4.2. Let L|K be a Galois extension and suppose that G = Aut(L|K) is solvable. Assume further that K contains all qth roots of unity for each prime divisor q of m = [L : K]. Then L is an extension of K by radicals.

Proof. We prove the result by induction on m. If m = 1 then L = K and the result is clear. Now suppose that m ≥ 2 and assume that the result holds for all Galois extensions L′|K′ satisfying the corresponding hypotheses with [L′ : K′] < m. Now G = Aut(L|K) is solvable and G is nontrivial since m ≥ 2. Let q be a prime divisor of m. From Lemma 12.2.2 and Theorem 13.3.5 it follows that there is a normal subgroup H of G with G/H cyclic of order q. Let E = Fix(L, H). From the fundamental theorem of Galois theory E|K is a Galois extension with Aut(E|K) ≅ G/H and hence Aut(E|K) is cyclic of order q. From Theorem 17.2.2, E|K is a simple extension of K by a radical. The proof is completed if we can show that L is an extension of E by radicals.

The extension L|E is a Galois extension and the group Aut(L|E) is solvable since it is a subgroup of G = Aut(L|K). Each prime divisor p of [L : E] is also a prime divisor of m = [L : K] by the degree formula. Hence, as an extension of K, the field E contains all the pth roots of unity. Finally

[L : E] = [L : K] / [E : K] = m/q < m.

Therefore L is an extension of E by radicals from the inductive assumption, completing the proof.
17.5 The Insolvability of the Quintic

We are now able to prove the insolvability of the quintic. This is one of the most important applications of Galois theory. As we mentioned, we do this by equating the solvability of a polynomial equation by radicals to the solvability of the Galois group of the splitting field of this polynomial.

Theorem 17.5.1. Let K be a field of characteristic 0 and let f(x) ∈ K[x]. Suppose that L is the splitting field of f(x) over K. Then the polynomial equation f(x) = 0 is solvable by radicals if and only if Aut(L|K) is solvable.
Proof. Suppose first that f(x) = 0 is solvable by radicals. Then L is contained in an extension L′ of K by radicals. Hence L is contained in a Galois extension L̄ of K by radicals from Theorem 17.2.3. The group Ḡ = Aut(L̄|K) is solvable from Theorem 17.4.1. Further L|K is a Galois extension, so by the fundamental theorem of Galois theory restriction gives a surjection Ḡ → Aut(L|K). Therefore the Galois group Aut(L|K) is solvable as a homomorphic image of Ḡ.

Conversely suppose that the group Aut(L|K) is solvable. Let q_1, ..., q_r be the prime divisors of m = [L : K] and let n = q_1 ··· q_r. Let K̄ and L̄ be the splitting fields of the polynomial x^n − 1 ∈ K[x] over K and L respectively. We have K̄ ⊆ L̄. From part (2) of Lemma 17.2.5 we have that L̄|K is a Galois extension and Aut(L̄|K̄) is isomorphic to a subgroup of Aut(L|K); in particular Aut(L̄|K̄) is solvable. From this we first obtain that [L̄ : K̄] = |Aut(L̄|K̄)| is a divisor of [L : K] = |Aut(L|K)|. Hence each prime divisor q of [L̄ : K̄] is also a prime divisor of [L : K], and K̄ contains all the qth roots of unity for each such q. Therefore L̄ is an extension by radicals of K̄ by Lemma 17.4.2. Since K̄ = K(ω), where ω is a primitive nth root of unity and hence a radical over K, we obtain that L̄ is also an extension of K by radicals. Therefore L is contained in an extension L̄ of K by radicals and therefore f(x) = 0 is solvable by radicals.
Corollary 17.5.2. Let K be a field of characteristic 0 and let f(x) ∈ K[x] be a polynomial of degree m with 1 ≤ m ≤ 4. Then the equation f(x) = 0 is solvable by radicals.

Proof. Let L be the splitting field of f(x) over K. The Galois group Aut(L|K) is isomorphic to a subgroup of the symmetric group S_m. Now the group S_4 is solvable via the chain

{1} ⊆ ℤ_2 ⊆ D_2 ⊆ A_4 ⊆ S_4

where ℤ_2 is the cyclic group of order 2 and D_2 is the Klein 4-group, which is isomorphic to ℤ_2 × ℤ_2. Because S_m ⊆ S_4 for 1 ≤ m ≤ 4 it follows that Aut(L|K) is solvable. From Theorem 17.5.1 the equation f(x) = 0 is solvable by radicals.
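The solvability of S_4 can also be verified by machine: the derived series of a solvable group must terminate in the trivial group, and for S_4 it passes through A_4 and the Klein 4-group. A small sketch of this computation (our own illustration; permutations are tuples, composed as (p∘q)(i) = p[q[i]]):

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def closure(gens):
    """Subgroup generated by a finite set of permutations."""
    n = len(next(iter(gens)))
    elems = {tuple(range(n))}
    queue = list(gens)
    while queue:
        g = queue.pop()
        if g in elems:
            continue
        elems.add(g)
        for h in list(elems):
            for x in (compose(g, h), compose(h, g)):
                if x not in elems:
                    queue.append(x)
    return elems

def derived(group):
    """Commutator subgroup, generated by all a b a^-1 b^-1."""
    comms = {compose(compose(a, b), compose(inverse(a), inverse(b)))
             for a in group for b in group}
    return closure(comms)

S4 = set(permutations(range(4)))
series = [S4]
while len(series[-1]) > 1:
    series.append(derived(series[-1]))

# S4 > A4 > Klein 4-group > {1}: the derived series witnesses solvability
assert [len(g) for g in series] == [24, 12, 4, 1]
```

The same computation applied to S_5 (120 elements) stalls at A_60 ... rather, it reaches A_5 and stays there, since A_5 is its own commutator subgroup; this is exactly the failure of solvability exploited below.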
Corollary 17.5.2 uses the general theory to show that any polynomial equation of degree less than or equal to 4 is solvable by radicals. This however does not provide explicit formulas for the solutions. We present these below.

Let K be a field of characteristic 0 and let f(x) ∈ K[x] be a polynomial of degree m with 1 ≤ m ≤ 4.

Case (1): If deg(f(x)) = 1 then f(x) = ax + b with a, b ∈ K and a ≠ 0. A zero is then given by k = −b/a.

Case (2): If deg(f(x)) = 2 then f(x) = ax² + bx + c with a, b, c ∈ K and a ≠ 0. The zeros are then given by the quadratic formula

k = (−b ± √(b² − 4ac)) / (2a).

We note that the quadratic formula holds over any field of characteristic not equal to 2. Whether there is a solution within the field K then depends on whether b² − 4ac has a square root within K.
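Over ℂ the quadratic formula can be evaluated directly; a minimal sketch (our own, using `cmath` so that negative discriminants are handled):

```python
import cmath

def quadratic_roots(a, b, c):
    """Zeros of a*x**2 + b*x + c (a != 0) over the complex numbers."""
    root = cmath.sqrt(b * b - 4 * a * c)
    return (-b + root) / (2 * a), (-b - root) / (2 * a)
```

For x² + 1 the discriminant −4 has no square root in ℝ, so the zeros ±i lie outside ℝ, matching the remark above about square roots in K.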
For the cases of degrees 3 and 4 we have the general forms of what are known as Cardano's formulas.

Case (3): If deg(f(x)) = 3 then f(x) = ax³ + bx² + cx + d with a, b, c, d ∈ K and a ≠ 0. Dividing through by a we may assume without loss of generality that a = 1.

By the substitution x = y − b/3 the polynomial is transformed into

g(y) = y³ + py + q ∈ K[y].

Let L be the splitting field of g(y) over K and let α ∈ L be a zero of g(y) so that

α³ + pα + q = 0.

If p = 0 then α = ³√(−q), so that g(y) has the three zeros

³√(−q), ω ³√(−q), ω² ³√(−q)

where ω is a primitive third root of unity, that is, ω³ = 1 with ω ≠ 1.
Now let p ≠ 0 and let β be a zero of x² − αx − p/3 in a suitable extension L′ of L. We have β ≠ 0 since p ≠ 0. Hence α = β − p/(3β). Putting this into the transformed cubic equation

α³ + pα + q = 0

we get

β³ − p³/(27β³) + q = 0.

Define γ = β³ and δ = (−p/(3β))³ so that

γ + δ + q = 0.

Then

γ² + qγ − (p/3)³ = 0, and from −p³/(27δ) + δ + q = 0 also δ² + qδ − (p/3)³ = 0.

Hence the zeros of the polynomial

x² + qx − (p/3)³

are

γ, δ = −q/2 ± √((q/2)² + (p/3)³).

If we have γ = δ then both are equal to −q/2 and

√((q/2)² + (p/3)³) = 0.

From the definitions of γ, δ we have γ = β³ and δ = (−p/(3β))³. From above α = β − p/(3β). Therefore we get α by finding the cube roots of γ and δ.

There are certain possibilities and combinations with these cube roots, but because of the conditions the cube roots of γ and δ are not independent. We must satisfy the condition

³√γ · ³√δ = β · (−p/(3β)) = −p/3.

Therefore we get the final result:

The zeros of g(y) = y³ + py + q with p ≠ 0 are

u + v, ωu + ω²v, ω²u + ωv

where ω is a primitive third root of unity,

u = ³√(−q/2 + √((q/2)² + (p/3)³)) and v = ³√(−q/2 − √((q/2)² + (p/3)³)),

and the cube roots are chosen so that uv = −p/3.
The above is known as the cubic formula or Cardano’s formula.
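Cardano's formula can be checked numerically. The sketch below (our own) evaluates u as a principal complex cube root and then enforces the side condition uv = −p/3 by setting v = −p/(3u); it assumes p ≠ 0, the case treated above:

```python
import cmath

def cardano_roots(p, q):
    """Zeros of y**3 + p*y + q for p != 0, via Cardano's formula."""
    w = cmath.exp(2j * cmath.pi / 3)               # primitive third root of unity
    disc = (q / 2) ** 2 + (p / 3) ** 3
    u = (-q / 2 + cmath.sqrt(disc)) ** (1 / 3)     # one cube root of gamma
    v = -p / (3 * u)                               # chosen so that u * v = -p/3
    return u + v, w * u + w * w * v, w * w * u + w * v
```

For example y³ − 6y − 9 = (y − 3)(y² + 3y + 3) has the real zero 3, which the formula recovers along with the two nonreal zeros.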
Case (4): If deg(f(x)) = 4 then f(x) = ax⁴ + bx³ + cx² + dx + e with a, b, c, d, e ∈ K and a ≠ 0. Dividing through by a we may assume without loss of generality that a = 1.

By the substitution x = y − b/4 the polynomial f(x) is transformed into

g(y) = y⁴ + py² + qy + r.

The zeros of g(y) are

y_1 = ½(√α_1 + √α_2 + √α_3)
y_2 = ½(√α_1 − √α_2 − √α_3)
y_3 = ½(−√α_1 + √α_2 − √α_3)
y_4 = ½(−√α_1 − √α_2 + √α_3)

where α_1, α_2, α_3 are the zeros of the cubic polynomial

h(z) = z³ + 2pz² + (p² − 4r)z − q² ∈ K[z]

and the square roots are chosen so that √α_1 · √α_2 · √α_3 = −q. The polynomial h(z) is called the cubic resolvent of g(y). For a detailed proof of the case where m = 4 see [8].
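As a numeric sanity check of these formulas (our own illustration, not from the text), take g(y) = y⁴ − 7y² + 6y = y(y − 1)(y − 2)(y + 3), so p = −7, q = 6, r = 0. Its cubic resolvent has the integer zeros 1, 4, 9, and the sign choice √α₁√α₂√α₃ = −q recovers the four roots:

```python
from itertools import product
from math import isclose, sqrt

p, q, r = -7, 6, 0                      # g(y) = y**4 + p*y**2 + q*y + r

def h(z):                               # cubic resolvent of g
    return z**3 + 2 * p * z**2 + (p * p - 4 * r) * z - q * q

alphas = [z for z in range(100) if h(z) == 0]
assert alphas == [1, 4, 9]

# pick square roots s_i of the alphas with s1*s2*s3 == -q
for signs in product((1, -1), repeat=3):
    s1, s2, s3 = (sg * sqrt(a) for sg, a in zip(signs, alphas))
    if isclose(s1 * s2 * s3, -q):
        break

ys = [(s1 + s2 + s3) / 2, (s1 - s2 - s3) / 2,
      (-s1 + s2 - s3) / 2, (-s1 - s2 + s3) / 2]
assert sorted(ys) == [-3.0, 0.0, 1.0, 2.0]
assert all(isclose(y**4 + p * y**2 + q * y + r, 0, abs_tol=1e-9) for y in ys)
```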
The following theorem is due to Abel and shows the insolvability of the general degree 5 polynomial over the rationals ℚ.

Theorem 17.5.3. Let L be the splitting field of the polynomial f(x) = x⁵ − 2x⁴ + 2 ∈ ℚ[x] over ℚ. Then Aut(L|ℚ) ≅ S_5, the symmetric group on 5 letters. Since S_5 is not solvable, the equation f(x) = 0 is not solvable by radicals.
Proof. The polynomial f(x) is irreducible over ℚ by the Eisenstein criterion (with the prime 2). Further f(x) has five zeros in the complex numbers ℂ by the fundamental theorem of algebra (see Section 17.7). We claim that f(x) has exactly 3 real zeros and 2 nonreal zeros, which then necessarily are complex conjugates. In particular the 5 zeros are pairwise distinct.

To see the claim notice first that f(x) has at least 3 real zeros from the intermediate value theorem. As a real function f(x) is continuous and f(−1) = −1 < 0 and f(0) = 2 > 0, so it must have a real zero between −1 and 0. Further f(3/2) = −17/32 < 0 and f(2) = 2 > 0. Hence there must be distinct real zeros between 0 and 3/2 and between 3/2 and 2. Suppose that f(x) has more than 3 real zeros. Then f′(x) = x³(5x − 8) has at least 3 pairwise distinct real zeros from Rolle's theorem. But f′(x) clearly has only 2 real zeros, so this is not the case. Therefore f(x) has exactly 3 real zeros and hence 2 nonreal zeros that are complex conjugates.

Let L be the splitting field of f(x). The field L lies in ℂ and the restriction to L of the map δ : z ↦ z̄ of ℂ maps the set of zeros of f(x) onto itself. Therefore δ restricts to an automorphism of L. The map δ fixes the 3 real zeros and transposes the 2 nonreal zeros. From this we now show that Aut(L|ℚ) = Aut(L) = G ≅ S_5, the full symmetric group on 5 symbols. Clearly G ⊆ S_5 since G acts as a permutation group on the 5 zeros of f(x).

Since δ transposes the 2 nonreal roots, G (as a permutation group) contains at least one transposition. Since f(x) is irreducible, G acts transitively on the zeros of f(x). Let x_0 be one of the zeros of f(x) and let G_{x_0} be the stabilizer of x_0. Since G acts transitively, x_0 has five images under G and therefore the index of the stabilizer must be 5 (see Chapter 10):

5 = [G : G_{x_0}],

which by Lagrange's theorem must divide the order of G. Therefore from the Sylow theorems G contains an element of order 5, that is, a 5-cycle. Hence G contains a 5-cycle and a transposition and therefore by Theorem 11.4.3 it follows that G = S_5. Since S_5 is not solvable it follows that f(x) = 0 cannot be solved by radicals.
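The concrete facts used in this proof — the Eisenstein conditions at 2, the quoted sign changes, and the count of exactly three real zeros — are easy to confirm numerically (our own sketch; the grid bounds are ad hoc but contain all real zeros of f):

```python
def f(x):
    return x**5 - 2 * x**4 + 2

# Eisenstein criterion at P = 2: coefficients listed constant term first
coeffs = [2, 0, 0, 0, -2, 1]
P = 2
assert coeffs[-1] % P != 0                      # P does not divide the leading coefficient
assert all(c % P == 0 for c in coeffs[:-1])     # P divides all other coefficients
assert coeffs[0] % (P * P) != 0                 # P**2 does not divide the constant term

# sign changes quoted in the proof
assert f(-1) < 0 and f(0) > 0 and f(1.5) < 0 and f(2) > 0

# count real zeros by sign changes on a fine grid over [-2, 3]
xs = [i / 1000 for i in range(-2000, 3001)]
real_zeros = sum(1 for a, b in zip(xs, xs[1:]) if f(a) * f(b) < 0)
assert real_zeros == 3
```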
Since Abel's theorem shows that there exists a degree 5 polynomial that cannot be solved by radicals, it follows that there can be no formula like Cardano's formula in terms of radicals for degree 5.

Corollary 17.5.4. There is no general formula for solving by radicals a fifth degree polynomial over the rationals.
We now show that this result can be extended to any degree n ≥ 5.

Theorem 17.5.5. For each n ≥ 5 there exist polynomials f(x) ∈ ℚ[x] of degree n for which the equation f(x) = 0 is not solvable by radicals.

Proof. Let f(x) = x^{n−5}(x⁵ − 2x⁴ + 2) and let L be the splitting field of f(x) over ℚ. Then Aut(L|ℚ) = Aut(L) contains a subgroup that is isomorphic to S_5. Since subgroups of solvable groups are solvable, it follows that Aut(L) is not solvable and therefore the equation f(x) = 0 is not solvable by radicals.

This immediately implies the following.

Corollary 17.5.6. There is no general formula for solving by radicals polynomial equations over the rationals of degree 5 or greater.
17.6 Constructibility of Regular n-Gons

In Chapter 6 we considered certain geometric material related to field extensions. There, using general field extensions, we proved the impossibility of certain geometric compass and straightedge constructions. In particular there were four famous insolvable (to the Greeks) construction problems. The first is the squaring of the circle. This problem is, given a circle, to construct using straightedge and compass a square having area equal to that of the given circle. The second is the doubling of the cube. This problem is, given a cube of given side length, to construct, using a straightedge and compass, a side of a cube having double the volume of the original cube. The third problem is the trisection of an angle. This problem is to trisect a given angle using only a straightedge and compass. The final problem is the construction of a regular n-gon. This problem asks which regular n-gons can be constructed using only straightedge and compass. In Chapter 6 we proved the impossibility of the first 3 problems. Here we use Galois theory to consider constructible n-gons.

Recall that a Fermat number is a positive integer of the form

F_n = 2^{2^n} + 1, n = 0, 1, 2, 3, ....

If a particular F_n is prime it is called a Fermat prime.

Fermat believed that all the numbers in this sequence were primes. In fact F_0, F_1, F_2, F_3, F_4 are all prime but F_5 is composite and divisible by 641 (see exercises). It is still an open question whether or not there are infinitely many Fermat primes. It has been conjectured that there are only finitely many. On the other hand, if a number of the form 2^n + 1 is a prime for some integer n then it must be a Fermat prime, that is, n must be a power of 2.
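Exercise 8 below asks for the primality facts just quoted; they can be confirmed by direct trial division (our own sketch — F_5 ≈ 4.3 × 10⁹, so naive trial division up to its square root is still fast):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

fermat = [2 ** (2 ** k) + 1 for k in range(6)]   # F_0, ..., F_5

assert fermat[:5] == [3, 5, 17, 257, 65537]
assert all(is_prime(F) for F in fermat[:5])
assert not is_prime(fermat[5]) and fermat[5] % 641 == 0
```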
We first need the following.

Theorem 17.6.1. Let p = 2^n + 1, n = 2^s with s ≥ 0, be a Fermat prime. Then there exists a chain of fields

ℚ = K_0 ⊆ K_1 ⊆ ··· ⊆ K_n = k_p

where k_p is the pth cyclotomic field, such that

[K_j : K_{j−1}] = 2

for j = 1, ..., n.

Proof. The extension k_p|ℚ is a Galois extension and [k_p : ℚ] = p − 1. Further Aut(k_p|ℚ) is cyclic of order p − 1 = 2^n. Hence there is a chain of subgroups

{1} = U_n ⊆ U_{n−1} ⊆ ··· ⊆ U_0 = Aut(k_p|ℚ)

with [U_{j−1} : U_j] = 2 for j = 1, ..., n. From the fundamental theorem of Galois theory the fields K_j = Fix(k_p, U_j) with j = 0, ..., n have the desired properties.
The following corollaries describe completely the constructible n-gons, tying them to Fermat primes.

Corollary 17.6.2. Consider the numbers 0, 1, that is, a unit line segment or a unit circle. A regular p-gon with p ≥ 3 prime is constructible from {0, 1} using a straightedge and compass if and only if p = 2^{2^s} + 1, s ≥ 0, is a Fermat prime.

Proof. From Theorem 6.3.13 we have that if a regular p-gon is constructible with a straightedge and compass then p must be a Fermat prime. The sufficiency follows from Theorem 17.6.1.
We now extend this to general n-gons. Let m, n ∈ ℕ. Assume that we may construct from {0, 1} a regular n-gon and a regular m-gon. In particular this means that we may construct the real numbers cos(2π/n), sin(2π/n), cos(2π/m) and sin(2π/m). If gcd(m, n) = 1 then we may construct from {0, 1} a regular mn-gon.

To see this notice that

cos(2π/n + 2π/m) = cos(2(n + m)π/(nm)) = cos(2π/n)cos(2π/m) − sin(2π/n)sin(2π/m)

and

sin(2π/n + 2π/m) = sin(2(n + m)π/(nm)) = sin(2π/n)cos(2π/m) + cos(2π/n)sin(2π/m).

Therefore we may construct from {0, 1} the numbers cos(2π/(nm)) and sin(2π/(nm)) because gcd(n + m, mn) = 1. Therefore we may construct from {0, 1} a regular mn-gon.

Now let p ≥ 3 be a prime. Then [k_{p²} : ℚ] = p(p − 1), which is not a power of 2. Therefore from {0, 1} it is not possible to construct a regular p²-gon. Hence altogether we have the following.
Corollary 17.6.3. Consider the numbers 0, 1, that is, a unit line segment or a unit circle. A regular n-gon with n ∈ ℕ is constructible from {0, 1} using a straightedge and compass if and only if

(i) n = 2^m, m ≥ 0, or

(ii) n = 2^m p_1 p_2 ··· p_r, m ≥ 0, where the p_i are pairwise distinct Fermat primes.

Proof. Certainly we may construct a 2^m-gon. Further if r, s ∈ ℕ with gcd(r, s) = 1 and we can construct a regular rs-gon then clearly we may construct a regular r-gon and a regular s-gon. Together with the results above this proves the corollary.
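Corollary 17.6.3 translates into a short test (our own sketch; it uses the five known Fermat primes, which suffices for any n one is likely to try):

```python
KNOWN_FERMAT_PRIMES = (3, 5, 17, 257, 65537)

def constructible_ngon(n):
    """Regular n-gon (n >= 3) is constructible iff the odd part of n is a
    product of pairwise distinct Fermat primes."""
    if n < 3:
        return False
    while n % 2 == 0:          # strip the power of 2
        n //= 2
    for p in KNOWN_FERMAT_PRIMES:
        if n % p == 0:
            n //= p
            if n % p == 0:     # repeated Fermat prime factor, e.g. the 9-gon
                return False
    return n == 1
```

For example the 17-gon (Gauss's famous construction) is constructible, while the 7-gon and the 9-gon are not.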
17.7 The Fundamental Theorem of Algebra
The fundamental theorem of algebra is one of the most important algebraic results.
This says that any nonconstant complex polynomial must have a complex zero. In
the language of ﬁeld extensions this says that the ﬁeld of complex numbers C is
algebraically closed. There are many distinct and completely different proofs of this
result. In [3] twelve proofs were given covering a wide area of mathematics. In this
section we use Galois theory to present a proof. Before doing this we brieﬂy mention
some of the history surrounding this theorem.
The first mention of the fundamental theorem of algebra, in the form that every polynomial equation of degree n has exactly n roots, was given by Peter Roth of Nürnberg in 1608. However its conjecture is generally credited to Girard, who also stated the result in 1629. It was then more clearly stated by Descartes in 1637, who also distinguished between real and imaginary roots. The first published proof of the
fundamental theorem of algebra was then given by D’Alembert in 1746. However
there were gaps in D’Alembert’s proof and the ﬁrst fully accepted proof was that
given by Gauss in 1797 in his Ph.D. thesis. This was published in 1799. Interestingly
enough, in reviewing Gauss’ original proof, modern scholars tend to agree that there
are as many holes in this proof as in D’Alembert’s proof. Gauss, however, published
three other proofs with no such holes. He published second and third proofs in 1816
while his ﬁnal proof, which was essentially another version of the ﬁrst, was presented
in 1849.
Theorem 17.7.1. Each nonconstant polynomial f(x) ∈ ℂ[x], where ℂ is the field of complex numbers, has a zero in ℂ. Therefore ℂ is an algebraically closed field.

Proof. Let f(x) ∈ ℂ[x] be a nonconstant polynomial and let L be the splitting field of f(x) over ℂ. Since the characteristic of the complex numbers ℂ is zero this will be a Galois extension of ℂ. Since ℂ is a finite extension of ℝ this field L is also a Galois extension of ℝ. The fundamental theorem of algebra asserts that L must be ℂ itself, and hence the fundamental theorem of algebra is equivalent to the fact that any Galois extension of ℂ must be ℂ.

Let L be any finite extension of ℝ with [L : ℝ] = 2^m q, gcd(2, q) = 1. If m = 0, then L is an odd-degree extension of ℝ. Since L is separable over ℝ, from the primitive element theorem it is a simple extension, and hence L = ℝ(α), where the minimal polynomial m_α(x) over ℝ has odd degree. However, odd-degree real polynomials always have a real root, and therefore m_α(x) is irreducible only if its degree is one. But then α ∈ ℝ and L = ℝ. Therefore, if L is a nontrivial finite extension of ℝ of degree 2^m q we must have m > 0. This shows more generally that there are no nontrivial odd-degree finite extensions of ℝ.

Suppose that L is a degree 2 extension of ℂ. Then L = ℂ(α) with deg m_α(x) = 2, where m_α(x) is the minimal polynomial of α over ℂ. But from the quadratic formula complex quadratic polynomials always have roots in ℂ, a contradiction. Therefore ℂ has no degree 2 extensions.

Now let L be a Galois extension of ℂ. Then L is also Galois over ℝ. Suppose [L : ℝ] = 2^m q, gcd(2, q) = 1. From the argument above we must have m > 0. Let G = Gal(L/ℝ) be the Galois group. Then |G| = 2^m q, m > 0, gcd(2, q) = 1. Thus G has a 2-Sylow subgroup of order 2^m and index q (see Theorem 13.3.4). This corresponds to an intermediate field E with [L : E] = 2^m and [E : ℝ] = q. However, then E is an odd-degree finite extension of ℝ. It follows that q = 1 and E = ℝ. Therefore [L : ℝ] = 2^m and |G| = 2^m.

Now [L : ℂ] = 2^{m−1}; let G_1 = Gal(L/ℂ). This is a 2-group. If it were not trivial, then from Theorem 13.4.1 there would exist a subgroup of order 2^{m−2} and index 2. This would correspond to an intermediate field of degree 2 over ℂ. However, from the argument above ℂ has no degree 2 extensions. It follows then that G_1 is trivial, that is |G_1| = 1, so [L : ℂ] = 1 and L = ℂ, completing the proof.
The fact that ℂ is algebraically closed limits the possible algebraic extensions of the reals.

Corollary 17.7.2. Let L be a finite field extension of the real numbers ℝ. Then L = ℝ or L = ℂ.

Proof. Since [L : ℝ] < ∞, by the primitive element theorem L = ℝ(α) for some α ∈ L. Then the minimal polynomial m_α(x) of α over ℝ is in ℝ[x] and hence in ℂ[x]. Therefore from the fundamental theorem of algebra it has a root in ℂ, so we may take α ∈ ℂ. If α ∈ ℝ then L = ℝ; if not then L = ℂ.
17.8 Exercises
1. For f(x) ∈ ℚ[x] with

f(x) = x⁶ − 12x⁴ + 36x² − 50

(respectively f(x) = 4x⁴ − 12x² + 20x − 3)

determine for each complex zero α of f(x) a finite number of radicals γ_i = β_i^{1/m_i}, i = 1, ..., r, and a presentation of α as a rational function in γ_1, ..., γ_r over ℚ such that x^{m_{i+1}} − β_{i+1} is irreducible over ℚ(γ_1, ..., γ_i) and β_{i+1} ∈ ℚ(γ_1, ..., γ_i) for i = 0, ..., r − 1.
2. Let K be a field of prime characteristic p. Let n ∈ ℕ and L_n be the splitting field of x^n − 1 over K. Show that Aut(L_n|K) is cyclic.
3. Let f(x) = x⁴ − x − 1 ∈ ℤ[x]. Show:

(i) f has a real zero.

(ii) f is irreducible over ℚ.

(iii) If u + iv (u, v ∈ ℝ) is a zero of f in ℂ, then g = x³ + 4x − 1 is the minimal polynomial of 4u² over ℚ.

(iv) The Galois group of f over ℚ has an element of order 3.

(v) No zero a ∈ ℂ of f is constructible from the points 0 and 1 with straightedge and compass.
4. Show that each polynomial f(x) over ℝ decomposes into linear factors and quadratic factors (f(x) = d(x − a_1)(x − a_2) ··· (x² + b_1x + c_1)(x² + b_2x + c_2) ···, d ∈ ℝ).
5. Let L be a finite (commutative) field extension of ℝ. Then L ≅ ℝ or L ≅ ℂ.
6. Let n ≥ 1 be a natural number and x an indeterminate over ℂ. Consider the polynomial x^n − 1 ∈ ℤ[x]. In ℂ[x] it decomposes into linear factors:

x^n − 1 = (x − ζ_1)(x − ζ_2) ··· (x − ζ_n),

where the complex numbers

ζ_v = e^{2πiv/n} = cos(2πv/n) + i sin(2πv/n), 1 ≤ v ≤ n,

are all the (distinct) nth roots of unity; in particular ζ_n = 1. These ζ_v form a multiplicative cyclic group G = {ζ_1, ζ_2, ..., ζ_n} generated by ζ_1, and ζ_v = ζ_1^v. An nth root of unity ζ_v is called a primitive nth root of unity if ζ_v is not an mth root of unity for any m < n.

Show that the following are equivalent:

(i) ζ_v is a primitive nth root of unity.

(ii) ζ_v is a generating element of G.

(iii) gcd(v, n) = 1.
7. The polynomial Φ_n(x) ∈ ℂ[x] whose zeros are exactly the primitive nth roots of unity is called the nth cyclotomic polynomial. By Exercise 6,

Φ_n(x) = ∏_{1≤ν≤n, gcd(ν,n)=1} (x − ζ_ν) = ∏_{1≤ν≤n, gcd(ν,n)=1} (x − e^{2πiν/n}).

The degree of Φ_n(x) is the number of integers in {1, …, n} which are coprime to n. Show:
(i) xⁿ − 1 = ∏_{d≥1, d|n} Φ_d(x).
(ii) Φ_n(x) ∈ ℤ[x] for all n ≥ 1.
(iii) Φ_n(x) is irreducible over ℚ (and therefore also over ℤ) for all n ≥ 1.
8. Show that the Fermat numbers F_0, F_1, F_2, F_3, F_4 (F_n = 2^{2ⁿ} + 1) are all prime but F_5 is composite and divisible by 641.
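Exercise 8 can be checked numerically. The following sketch (plain Python integer arithmetic with a naive trial-division primality test, which is my own choice and not part of the text) verifies the claim:

```python
def is_prime(n: int) -> bool:
    """Naive trial-division primality test; fine for small Fermat numbers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# F_0, ..., F_5 with F_n = 2^(2^n) + 1
fermat = [2**(2**n) + 1 for n in range(6)]

print([is_prime(F) for F in fermat[:5]])  # F_0..F_4 are all prime
print(fermat[5] % 641 == 0)               # Euler: 641 divides F_5
```

Trial division is fast enough here because F_4 = 65537 only requires testing divisors up to 256, and for F_5 the factor 641 is found almost immediately.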
Chapter 18
The Theory of Modules
18.1 Modules Over Rings
Recall that a vector space V over a field F is an abelian group V with a scalar multiplication · : F × V → V satisfying

(1) f(v_1 + v_2) = fv_1 + fv_2 for f ∈ F and v_1, v_2 ∈ V.
(2) (f_1 + f_2)v = f_1v + f_2v for f_1, f_2 ∈ F and v ∈ V.
(3) (f_1f_2)v = f_1(f_2v) for f_1, f_2 ∈ F and v ∈ V.
(4) 1·v = v for v ∈ V.
Vector spaces are the fundamental algebraic structures in linear algebra and the
study of linear equations. Vector spaces have been crucial in our study of ﬁelds and
Galois theory since any ﬁeld extension is a vector space over any subﬁeld. In this
context the degree of a ﬁeld extension is just the dimension of the extension ﬁeld as a
vector space over the base ﬁeld.
If we modify the deﬁnition of a vector space to allow scalar multiplication from an
arbitrary ring we obtain a more general structure called a module. We will formally
deﬁne this below. Modules generalize vector spaces but the fact that the scalars do
not necessarily have inverses makes the study of modules much more complicated.
Modules will play an important role in both the study of rings and the study of abelian groups. In fact any abelian group is a module over the integers ℤ, so that modules, besides being generalizations of vector spaces, can also be considered as generalizations of abelian groups.
In this chapter we will introduce the theory of modules. In particular we will extend
to modules the basic algebraic properties such as the isomorphism theorems that have
been introduced earlier for groups, rings and ﬁelds.
In this chapter we restrict ourselves to commutative rings, so that throughout R is always a commutative ring. If R has an identity 1 then we always consider only the case 1 ≠ 0. Throughout this chapter we use letters a, b, c, m, … for ideals in R. For principal ideals we write (a) or aR for the ideal generated by a ∈ R. We note, however, that the definition can be extended to include modules over noncommutative rings; in this case we would speak of left modules and right modules.
Definition 18.1.1. Let R = (R, +, ·) be a commutative ring and M = (M, +) an abelian group. M together with a scalar multiplication · : R × M → M, (α, x) ↦ αx, is called an R-module or a module over R if the following axioms hold:
(M1) (α + β)x = αx + βx,
(M2) α(x + y) = αx + αy and
(M3) (αβ)x = α(βx) for all α, β ∈ R and x, y ∈ M.

If R has an identity 1 then M is called a unitary R-module if in addition

(M4) 1·x = x for all x ∈ M holds.

In the following, R is always a commutative ring. If R contains an identity 1 then M is always a unitary R-module. If R has an identity 1 then we always assume 1 ≠ 0.
As usual we have the rules:

0·x = 0,  α·0 = 0,  −(αx) = (−α)x = α(−x)

for all α ∈ R and for all x ∈ M.
We next present a series of examples of modules.
Example 18.1.2. (1) If R = K is a field then a K-module is a K-vector space.
(2) Let G = (G, +) be an abelian group. If n ∈ ℤ and x ∈ G then nx is defined as usual:

0·x = 0,
nx = x + ⋯ + x  (n times)  if n > 0, and
nx = (−n)(−x)  if n < 0.

Then G is a unitary ℤ-module via the scalar multiplication · : ℤ × G → G, (n, x) ↦ nx.
(3) Let S be a subring of R. Then via (s, r) ↦ sr the ring R itself becomes an S-module.
(4) Let K be a field, V a K-vector space and f : V → V a linear map of V. Let p = ∑_i α_i tⁱ ∈ K[t]. Then p(f) := ∑_i α_i fⁱ defines a linear map of V, and V is a unitary K[t]-module via the scalar multiplication

K[t] × V → V,  (p, v) ↦ pv := p(f)(v).

(5) If R is a commutative ring and a is an ideal in R then a is a module over R.
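Example (2), an abelian group regarded as a ℤ-module, is easy to make concrete. The sketch below (Python, with the cyclic group ℤ/12 written additively; the representation as integers mod 12 is my own choice, not from the text) implements the scalar multiplication n·x and spot-checks the module axioms (M1)–(M3):

```python
MOD = 12  # the abelian group Z/12, written additively

def add(x: int, y: int) -> int:
    """Group operation of Z/12."""
    return (x + y) % MOD

def scalar(n: int, x: int) -> int:
    """n.x = x + ... + x (n times); for n < 0 this agrees with (-n)(-x)."""
    return (n * x) % MOD

# spot-check the module axioms (M1)-(M3) on a small sample
for a in range(-3, 4):
    for b in range(-3, 4):
        for x in range(MOD):
            for y in range(MOD):
                assert scalar(a + b, x) == add(scalar(a, x), scalar(b, x))      # (M1)
                assert scalar(a, add(x, y)) == add(scalar(a, x), scalar(a, y))  # (M2)
                assert scalar(a * b, x) == scalar(a, scalar(b, x))              # (M3)
print("module axioms hold on all samples")
```

Since 1·x = x as well, ℤ/12 is a unitary ℤ-module in the sense of Definition 18.1.1.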
Basic to all algebraic theory is the concept of substructures. Next we define submodules.
Definition 18.1.3. Let M be an R-module. A nonempty subset ∅ ≠ U ⊆ M is called a submodule of M if
(UMI) (U, +) ≤ (M, +) and
(UMII) α ∈ R, u ∈ U ⟹ αu ∈ U, that is, RU ⊆ U.
Example 18.1.4. (1) In an abelian group G, considered as a ℤ-module, the subgroups are precisely the submodules.
(2) The submodules of R, considered as an R-module, are precisely the ideals.
(3) Rx := {αx : α ∈ R} is a submodule of M for each x ∈ M.
(4) Let K be a field, V a K-vector space and f : V → V a linear map of V. Let U be a submodule of V, considered as a K[t]-module as above. Then the following holds:
(a) U ≤ V.
(b) pU = p(f)U ⊆ U for all p ∈ K[t]. In particular αU ⊆ U for p = α ∈ K, and tU = f(U) ⊆ U for p = t; that is, U is an f-invariant subspace. Conversely, p(f)U ⊆ U for all p ∈ K[t] if U is an f-invariant subspace.
We next extend to modules the concept of a generating system. For a single generator, as with groups, the resulting module is called cyclic.

Definition 18.1.5. A submodule U of the R-module M is called cyclic if there exists an x ∈ M with U = Rx.
As in vector spaces, groups and rings, the following constructions are standard, leading us to generating systems.
(1) Let M be an R-module and {U_i : i ∈ I} a family of submodules. Then ⋂_{i∈I} U_i is a submodule of M.
(2) Let M be an R-module. If A ⊆ M then we define

(A) := ⋂ {U : U submodule of M with A ⊆ U}.

(A) is the smallest submodule of M which contains A. If R has an identity 1 then (A) is the set of all linear combinations ∑_i α_i a_i with all α_i ∈ R, all a_i ∈ A. This holds because M is unitary and na = n(1·a) = (n·1)a for n ∈ ℤ and a ∈ A, that is, we may consider the pseudo-product na as a real product in the module. In particular, if R has a unit 1 then aR = ({a}) =: (a).
Definition 18.1.6. Let R have an identity 1. If M = (A) then A is called a generating system of M. M is called finitely generated if there are a_1, …, a_n ∈ M with M = ({a_1, …, a_n}) =: (a_1, …, a_n).
The following is clear.

Lemma 18.1.7. Let U_i be submodules of M, i ∈ I, I an index set. Then

(⋃_{i∈I} U_i) = { ∑_{i∈J} a_i : a_i ∈ U_i, J ⊆ I finite }.

We write (⋃_{i∈I} U_i) =: ∑_{i∈I} U_i and call this submodule the sum of the U_i. A sum ∑_{i∈I} U_i is called a direct sum if for each representation of 0 as 0 = ∑ a_i, a_i ∈ U_i, it follows that all a_i = 0. This is equivalent to U_i ∩ ∑_{j≠i} U_j = 0 for all i ∈ I. Notation: ⨁_{i∈I} U_i; and if I = {1, …, n} then we also write U_1 ⊕ ⋯ ⊕ U_n.
In analogy with our previously defined algebraic structures we extend to modules the concepts of quotient modules and module homomorphisms.

Definition 18.1.8. Let U be a submodule of the R-module M. Let M/U be the factor group. We define a (well-defined) scalar multiplication

R × M/U → M/U,  α(x + U) := αx + U.

With this M/U is an R-module, the factor module or quotient module of M by U. In M/U we have the operations

(x + U) + (y + U) = (x + y) + U  and  α(x + U) = αx + U.
A module M over a ring R can also be considered as a module over a quotient ring of R. The following is straightforward to verify (see exercises).

Lemma 18.1.9. Let a ⊴ R be an ideal in R and M an R-module. The set of all finite sums of the form ∑ α_i x_i, α_i ∈ a, x_i ∈ M, is a submodule of M which we denote by aM. The factor group M/aM becomes an R/a-module via the well-defined scalar multiplication

(α + a)(m + aM) = αm + aM.

If here R has an identity 1 and a is a maximal ideal then M/aM becomes a vector space over the field K = R/a.
We next define module homomorphisms.

Definition 18.1.10. Let R be a ring and M, N be R-modules. A map f : M → N is called an R-module homomorphism (or R-linear) if

f(x + y) = f(x) + f(y)  and  f(αx) = αf(x)

for all α ∈ R and all x, y ∈ M. Endo-, epi-, mono-, iso- and automorphisms are defined analogously via the corresponding properties of the maps. If f : M → N and g : N → P are module homomorphisms then g ∘ f : M → P is also a module homomorphism. If f : M → N is an isomorphism then so is f⁻¹ : N → M.
We define kernel and image in the usual way:

ker(f) := {x ∈ M : f(x) = 0}  and  im(f) := f(M) = {f(x) : x ∈ M}.

ker(f) is a submodule of M and im(f) is a submodule of N. As usual:

f is injective ⟺ ker(f) = {0}.

If U is a submodule of M then the map x ↦ x + U defines a module epimorphism (the canonical epimorphism) from M onto M/U with kernel U.
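As a concrete illustration (a sketch, not from the text: Python, with the ℤ-modules ℤ and ℤ/6), the reduction map x ↦ x mod 6 is ℤ-linear, and its kernel is the submodule 6ℤ, so it is exactly the canonical epimorphism ℤ → ℤ/6ℤ:

```python
N = 6

def f(x: int) -> int:
    """Reduction mod N: a Z-module homomorphism Z -> Z/N."""
    return x % N

# Z-linearity: f(x + y) = f(x) + f(y) and f(a*x) = a*f(x), computed in Z/N
for x in range(-20, 20):
    for y in range(-20, 20):
        assert f(x + y) == (f(x) + f(y)) % N
    for a in range(-5, 6):
        assert f(a * x) == (a * f(x)) % N

# the kernel is the submodule 6Z
kernel = [x for x in range(-20, 21) if f(x) == 0]
print(kernel)  # the multiples of 6 in the sampled range
```

By the first isomorphism theorem below, ℤ/ker(f) = ℤ/6ℤ ≅ im(f).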
There are module isomorphism theorems. The proofs are straightforward extensions of the corresponding proofs for groups and rings.

Theorem 18.1.11 (module isomorphism theorems). Let M, N be R-modules.
(1) If f : M → N is a module homomorphism then

f(M) ≅ M/ker(f).

(2) If U, V are submodules of the R-module M then

U/(U ∩ V) ≅ (U + V)/V.

(3) If U and V are submodules of the R-module M with U ⊆ V ⊆ M then

(M/U)/(V/U) ≅ M/V.

For the proofs, as for groups, just consider the map f : U + V → U/(U ∩ V), u + v ↦ u + (U ∩ V), which is well defined because U ∩ V is a submodule of U; we have ker(f) = V.
Note that α ↦ αρ, ρ ∈ R fixed, defines a module homomorphism R → R if we consider R itself as an R-module.
18.2 Annihilators and Torsion
In this section we define torsion for an R-module and a very important ideal of R called the annihilator.
Definition 18.2.1. Let M be an R-module. For a fixed a ∈ M consider the map λ_a : R → M, λ_a(α) := αa. λ_a is a module homomorphism if we consider R as an R-module. We call ker(λ_a) the annihilator of a, denoted Ann(a); that is,

Ann(a) = {α ∈ R : αa = 0}.

Lemma 18.2.2. Ann(a) is a submodule of R, and the module isomorphism theorem (1) gives R/Ann(a) ≅ Ra.
We next extend the annihilator to whole submodules of M.

Definition 18.2.3. Let U be a submodule of the R-module M. The annihilator Ann(U) is defined to be

Ann(U) := {α ∈ R : αu = 0 for all u ∈ U}.

As for single elements, since Ann(U) = ⋂_{u∈U} Ann(u), Ann(U) is a submodule of R. If ρ ∈ R and u ∈ U then ρu ∈ U; this means that if α ∈ Ann(U) then also αρ ∈ Ann(U), because (αρ)u = α(ρu) = 0. Hence Ann(U) is an ideal in R.
Suppose that G is an abelian group. Then, as mentioned, G is a ℤ-module. An element g ∈ G is a torsion element, or has finite order, if ng = 0 for some n ∈ ℕ. The set Tor(G) consists of all the torsion elements in G. An abelian group is torsion-free if Tor(G) = {0}.

Lemma 18.2.4. Let G be an abelian group. Then Tor(G) is a subgroup of G and G/Tor(G) is torsion-free.
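For instance, in G = ℤ × ℤ/6 the torsion subgroup is {0} × ℤ/6. The following sketch (Python; the representation of elements as integer pairs is my own choice, not from the text) finds the torsion elements by testing the multiples n·g for n up to 6, the exponent of the ℤ/6 factor:

```python
MOD = 6  # G = Z x Z/6, elements represented as pairs (a, b)

def is_torsion(g) -> bool:
    """True if n.g = 0 for some 1 <= n <= MOD; this bound suffices here,
    since a nonzero first coordinate can never be killed by any n >= 1."""
    a, b = g
    return any(n * a == 0 and (n * b) % MOD == 0 for n in range(1, MOD + 1))

sample = [(a, b) for a in range(-2, 3) for b in range(MOD)]
torsion = [g for g in sample if is_torsion(g)]
print(torsion)  # exactly the pairs (0, b) with b in Z/6
```

The quotient G/Tor(G) ≅ ℤ is torsion-free, as Lemma 18.2.4 predicts.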
We extend this concept now to general modules.
Definition 18.2.5. The R-module M is called faithful if Ann(M) = {0}. An element a ∈ M is called a torsion element, or an element of finite order, if Ann(a) ≠ {0}. A module without torsion elements ≠ 0 is called torsion-free. If the R-module M ≠ {0} is torsion-free then R has no zero divisors ≠ 0.

Theorem 18.2.6. Let R be an integral domain and M an R-module (by our agreement M is unitary). Let Tor(M) = T(M) be the set of torsion elements of M. Then Tor(M) is a submodule of M and M/Tor(M) is torsion-free.
Proof. If m ∈ Tor(M), α ∈ Ann(m), α ≠ 0 and β ∈ R, then we get α(βm) = (αβ)m = (βα)m = β(αm) = 0, that is, βm ∈ Tor(M). Let m′ be another element of Tor(M) and 0 ≠ α′ ∈ Ann(m′). Then αα′ ≠ 0 (R is an integral domain) and αα′(m + m′) = αα′m + αα′m′ = α′(αm) + α(α′m′) = 0, that is, m + m′ ∈ Tor(M). Therefore Tor(M) is a submodule.
Now, let m + Tor(M) be a torsion element in M/Tor(M). Let α ∈ R, α ≠ 0, with α(m + Tor(M)) = αm + Tor(M) = Tor(M). Then αm ∈ Tor(M). Hence there exists a β ∈ R, β ≠ 0, with 0 = β(αm) = (βα)m. Since βα ≠ 0 we get that m ∈ Tor(M), and the torsion element m + Tor(M) is trivial.
18.3 Direct Products and Direct Sums of Modules
Let M_i, i ∈ I, I ≠ ∅, be a family of R-modules. On the direct product

P = ∏_{i∈I} M_i = { f : I → ⋃_{i∈I} M_i : f(i) ∈ M_i for all i ∈ I }

we define the module operations

+ : P × P → P  and  · : R × P → P

via

(f + g)(i) := f(i) + g(i)  and  (αf)(i) := αf(i).

Together with these operations P = ∏_{i∈I} M_i is an R-module, the direct product of the M_i. If we identify f with the I-tuple of the images f = (f_i)_{i∈I} then the sum and the scalar multiplication are componentwise. If I = {1, …, n} and M_i = M for all i ∈ I then we write, as usual, Mⁿ = ∏_{i∈I} M_i.
We make the agreement that ∏_{i∈∅} M_i := {0}.
⨁_{i∈I} M_i := { f ∈ ∏_{i∈I} M_i : f(i) = 0 for almost all i } (“for almost all i” means that there are at most finitely many i with f(i) ≠ 0) is a submodule of the direct product, called the direct sum of the M_i. If I = {1, …, n} then we write ⨁_{i∈I} M_i = M_1 ⊕ ⋯ ⊕ M_n. Here ⨁_{i=1}^n M_i = ∏_{i=1}^n M_i for finite I.
Theorem 18.3.1. (1) If π ∈ Per(I) is a permutation of I then

∏_{i∈I} M_i ≅ ∏_{i∈I} M_{π(i)}  and  ⨁_{i∈I} M_i ≅ ⨁_{i∈I} M_{π(i)}.

(2) If I = ⋃̇_{j∈J} I_j, the disjoint union, then

∏_{i∈I} M_i ≅ ∏_{j∈J} ( ∏_{i∈I_j} M_i )  and  ⨁_{i∈I} M_i ≅ ⨁_{j∈J} ( ⨁_{i∈I_j} M_i ).

Proof. (1) Consider the map f ↦ f ∘ π.
(2) Consider the map f ↦ ⋃_{j∈J} f_j, where f_j ∈ ∏_{i∈I_j} M_i is the restriction of f to I_j, and ⋃_{j∈J} f_j is defined on I by (⋃_{j∈J} f_j)(k) := f_j(k) = f(k) for k ∈ I_j.
Let I ≠ ∅. If M = ∏_{i∈I} M_i then we get in a natural manner module homomorphisms π_i : M → M_i via f ↦ f(i); π_i is called the projection onto the ith component. Dually we define module homomorphisms ι_i : M_i → ⨁_{i∈I} M_i ⊆ ∏_{i∈I} M_i via ι_i(m_i) = (n_j)_{j∈I} where n_j = 0 if j ≠ i and n_i = m_i; ι_i is called the ith canonical injection. If I = {1, …, n} then π_i(a_1, …, a_i, …, a_n) = a_i and ι_i(m_i) = (0, …, 0, m_i, 0, …, 0).
Theorem 18.3.2 (universal properties). Let A, M_i, i ∈ I ≠ ∅, be R-modules.
(1) If φ_i : A → M_i, i ∈ I, are module homomorphisms then there exists exactly one module homomorphism φ : A → ∏_{i∈I} M_i such that for each j ∈ I the corresponding diagram commutes, that is, φ_j = π_j ∘ φ, where π_j is the jth projection.
(2) If ψ_i : M_i → A, i ∈ I, are module homomorphisms then there exists exactly one module homomorphism ψ : ⨁_{i∈I} M_i → A such that for each j ∈ I the corresponding diagram commutes, that is, ψ_j = ψ ∘ ι_j, where ι_j is the jth canonical injection.
Proof. (1) If there is such a φ then the jth component of φ(a) equals φ_j(a) because π_j ∘ φ = φ_j. Hence, define φ(a) ∈ ∏_{i∈I} M_i via φ(a)(i) := φ_i(a); then φ is the desired map.
(2) If there is such a ψ with ψ ∘ ι_j = ψ_j then ψ(x) = ψ((x_i)) = ψ(∑_{i∈I} ι_i(x_i)) = ∑_{i∈I} ψ ∘ ι_i(x_i) = ∑_{i∈I} ψ_i(x_i). Hence define ψ((x_i)) = ∑_{i∈I} ψ_i(x_i); then ψ is the desired map (recall that the sum is well defined).
18.4 Free Modules
If V is a vector space over a field F then V always has a basis over F, which may be infinite. Despite the similarity to vector spaces, because the scalars may not have inverses this is not necessarily true for modules.
We now define a basis for a module; a module that has a basis is called free.
Let R be a ring with identity 1, M a unitary R-module and S ⊆ M. Each finite sum ∑ α_i s_i, with the α_i ∈ R and the s_i ∈ S, is called a linear combination in S. Since M is unitary and S ≠ ∅, (S) is exactly the set of all linear combinations in S. In the following we assume that S ≠ ∅. If S = ∅ then (S) = (∅) = {0}, and this case is not interesting. By convention, in the following we always assume m_i ≠ m_j if i ≠ j in a finite sum ∑ α_i m_i with all α_i ∈ R and all m_i ∈ M.
Definition 18.4.1. A finite set {m_1, …, m_n} ⊆ M is called linearly independent or free (over R) if a representation 0 = ∑_{i=1}^n α_i m_i always implies α_i = 0 for all i ∈ {1, …, n}, that is, 0 can be represented only trivially on {m_1, …, m_n}. A nonempty subset S ⊆ M is called free (over R) if each finite subset of S is free.

Definition 18.4.2. Let M be an R-module (as above).
(1) S ⊆ M is called a basis of M if
(a) M = (S) and
(b) S is free (over R).
(2) If M has a basis then M is called a free R-module. If S is a basis of M then M is called free on S or free with basis S.

In this sense we can consider {0} as a free module with basis ∅.
Example 18.4.3. 1. R × R = R², as an R-module, is free with basis {(1, 0), (0, 1)}.
2. More generally, let I ≠ ∅. Then ⨁_{i∈I} R_i with R_i = R for all i ∈ I is free with basis {e_i : I → R : e_i(j) = δ_{ij}, i, j ∈ I}, where

δ_{ij} = 0 if i ≠ j  and  δ_{ij} = 1 if i = j.

Especially, if I = {1, …, n} then Rⁿ = {(a_1, …, a_n) : a_i ∈ R} is free with basis {e_i = (0, …, 0, 1, 0, …, 0) (the 1 in the ith place) : 1 ≤ i ≤ n}.
3. Let G be an abelian group. If G, as a ℤ-module, is free on S ⊆ G, then G is called a free abelian group with basis S. If |S| = n < ∞ then G ≅ ℤⁿ.
Theorem 18.4.4. The R-module M is free on S if and only if each m ∈ M can be written uniquely in the form ∑ α_i s_i with α_i ∈ R, s_i ∈ S. This is exactly the case when M = ⨁_{s∈S} Rs is the direct sum of the cyclic submodules Rs and each Rs is module isomorphic to R.
Proof. If S is a basis then each m ∈ M can be written as m = ∑ α_i s_i because M = (S). This representation is unique, because if ∑ α_i s_i = ∑ β_i s_i then ∑ (α_i − β_i)s_i = 0, that is, α_i − β_i = 0 for all i. If, on the other side, we assume that the representation is unique, then from ∑ α_i s_i = 0 = ∑ 0·s_i we get that all α_i = 0, and therefore M is free on S. The rest of the theorem is essentially a rewriting of the definition. If each m ∈ M can be written as m = ∑ α_i s_i then M = ∑_{s∈S} Rs. If x ∈ Rs′ ∩ ∑_{s∈S, s≠s′} Rs with s′ ∈ S then x = α′s′ = ∑_{s_i∈S, s_i≠s′} α_i s_i, so 0 = α′s′ − ∑_{s_i∈S, s_i≠s′} α_i s_i, and therefore α′ = 0 and α_i = 0 for all i. This gives M = ⨁_{s∈S} Rs. The cyclic modules Rs are isomorphic to R/Ann(s), and Ann(s) = {0} in the free modules. On the other side such modules are free on S.
Corollary 18.4.5. (1) M is free on S ⟺ M ≅ ⨁_{s∈S} R_s, R_s = R for all s ∈ S.
(2) If M is finitely generated and free then there exists an n ∈ ℕ₀ such that M ≅ Rⁿ = R ⊕ ⋯ ⊕ R (n times).
Proof. Part (1) is clear. We prove part (2). Let M = (x_1, …, x_r) and let S be a basis of M. Each x_i is uniquely representable on S as x_i = ∑_{s_j∈S} α_{ij} s_j. Since the x_i generate M we get m = ∑ β_i x_i = ∑_{i,j} β_i α_{ij} s_j for arbitrary m ∈ M, and we need only finitely many s_j to generate M. Hence S is finite.
Theorem 18.4.6. Let R be a commutative ring with identity 1 and M a free R-module. Then any two bases of M have the same cardinality.

Proof. R contains a maximal ideal m, and R/m is a field (see Theorems 2.3.2 and 2.4.2). Then M/mM is a vector space over R/m. From M ≅ ⨁_{s∈S} Rs with basis S we get mM ≅ ⨁_{s∈S} ms and, hence,

M/mM ≅ ( ⨁_{s∈S} Rs )/mM ≅ ⨁_{s∈S} (Rs/ms) ≅ ⨁_{s∈S} R/m.

Hence the R/m-vector space M/mM has a basis of the cardinality of S. This gives the result.

Let R be a commutative ring with identity 1 and M a free R-module. The cardinality of a basis is an invariant of M, called the rank of M or the dimension of M. If rank(M) = n < ∞ then this means M ≅ Rⁿ.
Theorem 18.4.7. Each R-module is a (module-)homomorphic image of a free R-module.

Proof. Let M be an R-module. We consider F := ⨁_{m∈M} R_m with R_m = R for all m ∈ M. F is a free R-module. The map f : F → M, f((α_m)_{m∈M}) = ∑ α_m m, defines a surjective module homomorphism.
Theorem 18.4.8. Let F, M be R-modules, and let F be free. Let f : M → F be a module epimorphism. Then there exists a module homomorphism g : F → M with f ∘ g = id_F, and we have M = ker(f) ⊕ g(F).

Proof. Let S be a basis of F. By the axiom of choice there exists for each s ∈ S an element m_s ∈ M with f(m_s) = s (f is surjective). We define the map g : F → M via s ↦ m_s linearly, that is, g(∑_{s_i∈S} α_i s_i) = ∑_{s_i∈S} α_i m_{s_i}. Since F is free, the map g is well defined. Obviously f ∘ g(s) = f(m_s) = s for s ∈ S, which means f ∘ g = id_F because F is free on S. For each m ∈ M we also have m = g ∘ f(m) + (m − g ∘ f(m)) where g ∘ f(m) = g(f(m)) ∈ g(F), and since f ∘ g = id_F the elements of the form m − g ∘ f(m) are in the kernel of f. Therefore M = g(F) + ker(f). Now let x ∈ g(F) ∩ ker(f). Then x = g(y) for some y ∈ F and 0 = f(x) = f ∘ g(y) = y, and hence x = 0. Therefore the sum is direct: M = g(F) ⊕ ker(f).

Corollary 18.4.9. Let M be an R-module and N a submodule such that M/N is free. Then there is a submodule N′ of M with M = N ⊕ N′.

Proof. Apply the above theorem to the canonical map π : M → M/N with ker(π) = N.
18.5 Modules over Principal Ideal Domains
We now specialize to the case of modules over principal ideal domains. For the remainder of this section R is always a principal ideal domain ≠ {0}. We now use the notation (α) := αR, α ∈ R, for the principal ideal αR.
Theorem 18.5.1. Let M be a free R-module of finite rank over the principal ideal domain R. Then each submodule U is free of finite rank, and rank(U) ≤ rank(M).

Proof. We prove the theorem by induction on n = rank(M). The theorem certainly holds if n = 0. Now let n ≥ 1 and assume that the theorem holds for all free R-modules of rank < n. Let M be a free R-module of rank n with basis {x_1, …, x_n}. Let U be a submodule of M. We represent the elements of U as linear combinations of the basis elements x_1, …, x_n, and we consider the set of coefficients of x_1 for the elements of U:

a = { β ∈ R : βx_1 + ∑_{i=2}^n β_i x_i ∈ U }.

Certainly a is an ideal in R. Since R is a principal ideal domain we have a = (α_1) for some α_1 ∈ R. Let u ∈ U be an element of U which has α_1 as its first coefficient, that is,

u = α_1 x_1 + ∑_{i=2}^n α_i x_i ∈ U.

Let v ∈ U be arbitrary. Then

v = μ(α_1 x_1) + ∑_{i=2}^n μ_i x_i.

Hence v − μu ∈ U′ := U ∩ M′, where M′ is the free R-module with basis {x_2, …, x_n}. By induction, U′ is a free submodule of M′ with a basis {y_1, …, y_t}, t ≤ n − 1. If α_1 = 0 then a = (0) and U = U′, and there is nothing to prove. Now let α_1 ≠ 0. We show that {u, y_1, …, y_t} is a basis of U. v − μu is a linear combination of the basis elements of U′, that is, v − μu = ∑_{i=1}^t μ_i′ y_i uniquely. Hence v = μu + ∑_{i=1}^t μ_i′ y_i and U = (u, y_1, …, y_t). Now let 0 = γu + ∑_{i=1}^t μ_i y_i. We write u and the y_i as linear combinations in the basis elements x_1, …, x_n of M. Only γu contributes an x_1 portion. Hence

0 = γα_1 x_1 + ∑_{i=2}^n μ_i′ x_i.

Therefore first γα_1 = 0, that is, γ = 0 because R has no zero divisors ≠ 0, and further μ_2′ = ⋯ = μ_n′ = 0, which means μ_1 = ⋯ = μ_t = 0.
Let R be a principal ideal domain. Then the annihilator Ann(x) in an R-module M has certain further properties. Let x ∈ M. By definition

Ann(x) = {α ∈ R : αx = 0} ⊴ R, an ideal in R;

hence Ann(x) = (δ_x). If x = 0 then (δ_x) = R. δ_x is called the order of x and (δ_x) the order ideal of x. δ_x is uniquely determined up to units in R (that is, up to elements ρ with ρρ′ = 1 for some ρ′ ∈ R). For a submodule U of M we call Ann(U) = ⋂_{u∈U} (δ_u) = (μ) the order ideal of U. In an abelian group G, considered as a ℤ-module, this order for elements corresponds exactly with the order as group elements if we choose δ_x ≥ 0 for x ∈ G.
Theorem 18.5.2. Let R be a principal ideal domain and M a finitely generated torsion-free R-module. Then M is free.

Proof. Let M = (x_1, …, x_n) be torsion-free and R a principal ideal domain. Each submodule (x_i) = Rx_i is free because M is torsion-free. We call a subset S ⊆ {x_1, …, x_n} free if the submodule (S) is free. Since (x_i) is free, such nonempty subsets exist. Among all free subsets S ⊆ {x_1, …, x_n} we choose one with a maximal number of elements. We may assume that {x_1, …, x_s}, 1 ≤ s ≤ n, is such a maximal set, after possible renaming. If s = n then the theorem holds. Now let s < n. By the choice of s the sets {x_1, …, x_s, x_j} with s < j ≤ n are not free. Hence there are α_j ∈ R and α_i ∈ R, not all 0, with

α_j x_j = ∑_{i=1}^s α_i x_i,  α_j ≠ 0,  s < j ≤ n.

For the product α := α_{s+1} ⋯ α_n ≠ 0 we get αx_j ∈ Rx_1 ⊕ ⋯ ⊕ Rx_s =: F, s < j ≤ n, because αx_i ∈ F for 1 ≤ i ≤ s. Altogether we get αM ⊆ F. αM is a submodule of the free R-module F of rank s. By Theorem 18.5.1, αM is free. Since α ≠ 0 and M is torsion-free, the map M → αM, x ↦ αx, defines a (module) isomorphism, that is, M ≅ αM. Therefore M is also free.
We recall that for an integral domain R the set

Tor(M) = T(M) = {x ∈ M : ∃ α ∈ R, α ≠ 0, with αx = 0}

of torsion elements of an R-module M is a submodule with torsion-free factor module M/T(M).

Corollary 18.5.3. Let R be a principal ideal domain and M a finitely generated R-module. Then M = T(M) ⊕ F with a free submodule F ≅ M/T(M).

Proof. M/T(M) is a finitely generated, torsion-free R-module and hence free. By Corollary 18.4.9 we have M = T(M) ⊕ F, F ≅ M/T(M).
From now on we are interested in the case that M ≠ {0} is a torsion R-module, that is, M = T(M). Let R be a principal ideal domain and M = T(M) an R-module with M ≠ {0} and finitely generated. As above, let δ_x be the order of x ∈ M (unique up to units in R) and (δ_x) = {α ∈ R : αx = 0} the order ideal of x. Let (μ) = ⋂_{x∈M} (δ_x) be the order ideal of M. Since (μ) ⊆ (δ_x) we have δ_x | μ for all x ∈ M. Since principal ideal domains are unique factorization domains, if μ ≠ 0 then there cannot be many essentially different orders (that is, different up to units). Since M ≠ {0} is finitely generated we have in any case μ ≠ 0, because if M = (x_1, …, x_n) and α_i x_i = 0 with α_i ≠ 0, then αM = {0} for α := α_1 ⋯ α_n ≠ 0.
Lemma 18.5.4. Let R be a principal ideal domain and M ≠ {0} an R-module with M = T(M).
(1) If the orders δ_x and δ_y of x, y ∈ M are relatively prime, that is, gcd(δ_x, δ_y) = 1, then (δ_{x+y}) = (δ_x δ_y).
(2) Let δ_z be the order of z ∈ M, z ≠ 0. If δ_z = αβ with gcd(α, β) = 1 then there exist x, y ∈ M with z = x + y and (δ_x) = (α), (δ_y) = (β).
Proof. (1) Since δ_x δ_y (x + y) = δ_x δ_y x + δ_x δ_y y = δ_y (δ_x x) + δ_x (δ_y y) = 0 we get (δ_x δ_y) ⊆ (δ_{x+y}). On the other side, from δ_x x = 0 and δ_{x+y}(x + y) = 0 we get 0 = δ_x δ_{x+y}(x + y) = δ_x δ_{x+y} y, which means δ_x δ_{x+y} ∈ (δ_y) and hence δ_y | δ_x δ_{x+y}. Since gcd(δ_x, δ_y) = 1 we have δ_y | δ_{x+y}. Analogously δ_x | δ_{x+y}. Hence δ_x δ_y | δ_{x+y} and (δ_{x+y}) ⊆ (δ_x δ_y).
(2) Let δ_z = αβ with gcd(α, β) = 1. Then there are μ, σ ∈ R with 1 = μα + σβ. Therefore we get

z = 1·z = μαz (=: y) + σβz (=: x) = y + x = x + y.

Since αx = ασβz = σδ_z z = 0 we get α ∈ Ann(x) = (δ_x), which means δ_x | α. On the other side, from 0 = δ_x x = σβδ_x z we get δ_z | σβδ_x and hence αβ | σβδ_x because δ_z = αβ. Therefore α | σδ_x. From gcd(α, σ) = 1 we get α | δ_x. Therefore α is associated to δ_x, that is, α = δ_x ε with ε a unit in R, and further (α) = (δ_x). Analogously (β) = (δ_y).

In Lemma 18.5.4 we do not need M = T(M). We only need x, y, z ∈ M with δ_x ≠ 0, δ_y ≠ 0 and δ_z ≠ 0, respectively.
Corollary 18.5.5. Let R be a principal ideal domain and M ≠ {0} an R-module with M = T(M).
1. Let x_1, …, x_n ∈ M be pairwise different with pairwise relatively prime orders δ_{x_i} = α_i. Then y = x_1 + ⋯ + x_n has order α := α_1 ⋯ α_n.
2. Let 0 ≠ x ∈ M and let δ_x = ε π_1^{k_1} ⋯ π_n^{k_n} be a prime decomposition of the order δ_x of x (ε a unit in R and the π_i pairwise nonassociate prime elements in R) where n > 0, k_i > 0. Then there exist x_i, i = 1, …, n, with δ_{x_i} associated to π_i^{k_i} and x = x_1 + ⋯ + x_n.
18.6 The Fundamental Theorem for Finitely Generated
Modules
In Section 10.4 we described the following result, called the basis theorem for finite abelian groups (below we give a complete proof in detail; an elementary proof is given in Chapter 19).

Theorem 18.6.1 (Theorem 10.4.1, basis theorem for finite abelian groups). Let G be a finite abelian group. Then G is a direct product of cyclic groups of prime power order.

This allowed us, for a given finite order n, to present a complete classification of abelian groups of order n. In this section we extend this result to general modules over principal ideal domains. As a consequence we obtain the fundamental decomposition theorem for finitely generated (not necessarily finite) abelian groups, which finally proves Theorem 10.4.1. In the next chapter we present a separate proof of this in a slightly different format.
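For the special case R = ℤ the decomposition can be computed from a relation matrix via the Smith normal form. As a sketch (using SymPy's smith_normal_form, an assumption about available tooling rather than part of the text): the abelian group ℤ/4 ⊕ ℤ/6 has relation matrix diag(4, 6), and its invariant factors 2 and 12 give ℤ/4 ⊕ ℤ/6 ≅ ℤ/2 ⊕ ℤ/12 ≅ ℤ/2 ⊕ ℤ/4 ⊕ ℤ/3:

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# relation matrix of Z/4 (+) Z/6 presented as a quotient of Z^2
A = Matrix([[4, 0],
            [0, 6]])

# Smith normal form: diagonal with d1 | d2 (the invariant factors)
D = smith_normal_form(A, domain=ZZ)
invariant_factors = [D[i, i] for i in range(2)]
print(invariant_factors)
```

Here d1 = gcd(4, 6) = 2 and d1·d2 = det(A) = 24 force the invariant factors 2 and 12; splitting 12 = 4·3 recovers the primary decomposition ℤ/2 ⊕ ℤ/4 ⊕ ℤ/3 promised by the basis theorem.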
Definition 18.6.2. Let R be a principal ideal domain and M an R-module. Let π ∈ R be a prime element. M_π := {x ∈ M : ∃ k ≥ 0 with π^k x = 0} is called the π-primary component of M. If M = M_π for some prime element π ∈ R then M is called π-primary.

We certainly have the following:
1. M_π is a submodule of M.
2. The primary components correspond to the p-subgroups in abelian groups.
Theorem 18.6.3. Let R be a principal ideal domain and M ≠ {0} an R-module with M = T(M). Then M is the direct sum of its π-primary components.

Proof. Each x ∈ M has finite order δ_x. Let δ_x = ε π_1^{k_1} ⋯ π_n^{k_n} be a prime decomposition of δ_x. By Corollary 18.5.5 we have x = ∑ x_i with x_i ∈ M_{π_i}. That means M = ∑_{π∈P} M_π, where P is the set of prime elements of R. Let y ∈ M_π ∩ ∑_{σ∈P, σ≁π} M_σ, that is, δ_y = π^k for some k ≥ 0 and y = ∑ x_i with x_i ∈ M_{σ_i}, which means δ_{x_i} = σ_i^{l_i} for some l_i ≥ 0. By Corollary 18.5.5, y has order ∏_{σ_i≁π} σ_i^{l_i}, which means that π^k is associated to ∏_{σ_i≁π} σ_i^{l_i}. Therefore k = l_i = 0 for all i, and the sum is direct.
If R is a principal ideal domain and {0} ≠ M = T(M) a finitely generated torsion R-module then there are only finitely many nontrivial π-primary components, namely those for the prime elements π with π | μ, where (μ) is the order ideal of M.

Corollary 18.6.4. Let R be a principal ideal domain and {0} ≠ M a finitely generated torsion R-module. Then M has only finitely many nontrivial primary components M_{π_1}, …, M_{π_n}, and we have

M = ⨁_{i=1}^n M_{π_i}.

Hence we have a reduction of the decomposition problem to the primary components.
Theorem 18.6.5. Let 1 be a principal ideal domain, ¬ ÷ 1 a prime element and
M = {0} be a 1module with ¬
k
M = {0}; further let m ÷ M with (ı
n
) = (¬
k
).
Then there exists a submodule N Ç M with M = 1m˚N.
Proof. By Zorn’s lemma the set {U : U submodule of M and U ¨ 1m = {0}} has
a maximal element N. This set is nonempty because it contains {0}. We consider
M
t
:= N ˚ 1m Ç M and have to show that M
t
= M. Assume that M
t
= M.
Then there exists a . ÷ M with . … M
t
, especially . … N. Then N is properly
contained in the submodule 1. ÷ N = (.. N). By our choice of N we get · :=
(1. ÷ N) ¨ 1m = {0}. If : ÷ ·, : = 0, then : = jm = ˛. ÷ n with j,
˛ ÷ 1 and n ÷ N. Since : = 0 we have jm = 0; also . = 0 because otherwise
: ÷ 1m¨N = {0}. ˛ is not a unit in 1 because otherwise . = ˛
1
(jm÷n) ÷ M
t
.
Hence we have: If . ÷ M, . … M
t
then there exist ˛ ÷ 1, ˛ = 0, ˛ not a unit in 1,
j ÷ 1 with jm = 0 and n ÷ N such that
˛. = jm÷n. (=)
Especially ˛. ÷ M
t
.
Now let ˛ = c¬
1
. . . ¬
i
be a prime decomposition. We consider one after the
other the elements .. ¬
i
.. ¬
i1
¬
i
.. . . . . c¬
1
. . . ¬
i
. = ˛.. We have . … M
t
but
˛. ÷ M
t
; hence there exists an , … M
t
with ¬
i
, ÷ N ÷1m.
1. ¬
i
=¬, ¬ the prime element in the statement of the theorem. Then gcd(¬
i
. ¬
k
)
= 1, hence there are o, o
t
÷ 1 with o¬
i
÷ o
t
¬
k
= 1, and we get 1m =
(1¬
i
÷ 1¬
k
)m = ¬
i
1m because ¬
k
m = 0. Therefore ¬
i
, ÷ M
t
= N ˚
1m = N ÷¬
i
1m.
2. ¬
i
= ¬. Then we write ¬, as ¬, = n ÷ zm with n ÷ N and z ÷ 1.
This is possible because ¬, ÷ M
t
. Since ¬
k
M = {0} we get 0 = ¬
k1
¬, =
¬
k1
n÷¬
k1
zm. Therefore ¬
k1
n = ¬
k1
zm = 0 because N¨1m = {0}.
Especially we get ¬
k1
z ÷ (ı
n
), that is, ¬
k
[¬
k1
z and, hence, ¬[z. Therefore
¬, = n ÷zm = n ÷¬z
t
m ÷ N ÷¬1m, z
t
÷ 1.
Section 18.6 The Fundamental Theorem for Finitely Generated Modules 281
Hence, in any case we have π_i y ∈ N + π_i Rm, that is, π_i y = n + π_i z with n ∈ N and z ∈ Rm. It follows that π_i (y − z) = n ∈ N.
y − z is not an element of M′ because y ∉ M′. By (∗) there are therefore β, ρ′ ∈ R, β ≠ 0 not a unit in R, and n′ ∈ N with β(y − z) = n′ + ρ′m, ρ′m ≠ 0. We write z′ = ρ′m; then z′ ∈ Rm, z′ ≠ 0, and β(y − z) = n′ + z′. So, we have the equations

    β(y − z) = n′ + z′,  z′ ≠ 0,  and  π_i (y − z) = n.   (∗∗)
We have gcd(β, π_i) = 1 because otherwise π_i | β and, hence, β(y − z) ∈ N and z′ = 0 because N ∩ Rm = {0}, contradicting z′ ≠ 0. Then there exist γ, γ′ with γπ_i + γ′β = 1. In (∗∗) we multiply the first equation with γ′ and the second with γ. Addition gives y − z ∈ N ⊕ Rm = M′, and hence y ∈ M′, which contradicts y ∉ M′. Therefore M = M′.
Theorem 18.6.6. Let R be a principal ideal domain, π ∈ R a prime element and M ≠ {0} a finitely generated π-primary R-module. Then there exist finitely many m_1, ..., m_s ∈ M with M = ⊕_{i=1}^{s} Rm_i.
Proof. Let M = (x_1, ..., x_n). Each x_i has an order π^{k_i}. We may assume that k_1 = max{k_1, k_2, ..., k_n}, possibly after renaming. We have π^{k_i} x_i = 0 for all i. Since π^{k_1} x_i = π^{k_1 − k_i}(π^{k_i} x_i) we have also π^{k_1} M = 0, and also (o_{x_1}) = (π^{k_1}). Then M = Rx_1 ⊕ N for some submodule N ⊆ M by Theorem 18.6.5. Now N ≅ M/Rx_1, and M/Rx_1 is generated by the elements x_2 + Rx_1, ..., x_n + Rx_1. Hence, N is finitely generated by n − 1 elements; and certainly N is π-primary. This proves the result by induction.
Since Rm_i ≅ R/Ann(m_i) and Ann(m_i) = (o_{m_i}) = (π^{k_i}) we get the following extension of Theorem 18.6.6.
Theorem 18.6.7. Let R be a principal ideal domain, π ∈ R a prime element and M ≠ {0} a finitely generated π-primary R-module. Then there exist finitely many k_1, ..., k_s ∈ ℕ with

    M ≅ ⊕_{i=1}^{s} R/(π^{k_i}),

and M is, up to isomorphism, uniquely determined by (k_1, ..., k_s).
Proof. The first part, that is, a description as M ≅ ⊕_{i=1}^{s} R/(π^{k_i}), follows directly from Theorem 18.6.6. Now, let

    M ≅ ⊕_{i=1}^{n} R/(π^{k_i}) ≅ ⊕_{i=1}^{m} R/(π^{l_i}).
We may assume that k_1 ≥ k_2 ≥ ··· ≥ k_n > 0 and l_1 ≥ l_2 ≥ ··· ≥ l_m > 0. We consider first the submodule N := {x ∈ M : πx = 0}. Let M = ⊕_{i=1}^{n} R/(π^{k_i}). If we then write x = Σ (r_i + (π^{k_i})), we have πx = 0 if and only if r_i ∈ (π^{k_i − 1}), that is, N ≅ ⊕_{i=1}^{n} (π^{k_i − 1})/(π^{k_i}) ≅ ⊕_{i=1}^{n} R/(π) because π^{k−1}R/π^k R ≅ R/πR. Since (α + (π))x = αx if πx = 0, we get that N is an R/(π)-module, and, hence, a vector space over the field R/(π). From the decompositions

    N ≅ ⊕_{i=1}^{n} R/(π)  and, analogously,  N ≅ ⊕_{i=1}^{m} R/(π)

we get

    n = dim_{R/(π)} N = m.   (∗∗∗)
Assume that there is an i with k_i < l_i or l_i < k_i. Without loss of generality assume that there is an i with k_i < l_i. Let j be the smallest index for which k_j < l_j. Then (because of the ordering of the k_i)

    M′ := π^{k_j} M ≅ ⊕_{i=1}^{n} π^{k_j} R/π^{k_i} R ≅ ⊕_{i=1}^{j−1} π^{k_j} R/π^{k_i} R,

because if i ≥ j then π^{k_j} R/π^{k_i} R = {0}.
We now consider M′ = π^{k_j} M with respect to the second decomposition, that is, M′ ≅ ⊕_{i=1}^{m} π^{k_j} R/π^{l_i} R. By our choice of j we have k_j < l_j ≤ l_i for 1 ≤ i ≤ j. Therefore, in this second decomposition, the first j summands π^{k_j} R/π^{l_i} R are unequal {0}, that is, π^{k_j} R/π^{l_i} R ≠ {0} if 1 ≤ i ≤ j. The remaining summands are {0} or of the form R/π^s R. Hence, altogether, on the one side M′ is a direct sum of j − 1 nontrivial cyclic submodules and on the other side a direct sum of t ≥ j nontrivial cyclic submodules. But this contradicts the above result (∗∗∗) about the number of direct summands for finitely generated π-primary modules because, certainly, M′ is also finitely generated and π-primary. Therefore k_i = l_i for i = 1, ..., n. This proves the theorem.
Theorem 18.6.8 (fundamental theorem for finitely generated modules over principal ideal domains). Let R be a principal ideal domain and M ≠ {0} be a finitely generated (unitary) R-module. Then there exist prime elements π_1, ..., π_r ∈ R, 0 ≤ r < ∞, and numbers k_1, ..., k_r ∈ ℕ, t ∈ ℕ_0, such that

    M ≅ R/(π_1^{k_1}) ⊕ R/(π_2^{k_2}) ⊕ ··· ⊕ R/(π_r^{k_r}) ⊕ R ⊕ ··· ⊕ R   (t times),

and M is, up to isomorphism, uniquely determined by (π_1^{k_1}, ..., π_r^{k_r}, t).

The prime elements π_i are not necessarily pairwise different (up to units in R); that means it can be π_i = επ_j for i ≠ j where ε is a unit in R.
Proof. The proof is a combination of the preceding results. The free part of M is isomorphic to M/T(M), and the rank of M/T(M), which we call here t, is uniquely determined because two bases of M/T(M) have the same cardinality. Therefore we may restrict ourselves to torsion modules. Here we have a reduction to π-primary modules because in a decomposition M = ⊕_i R/(π_i^{k_i}) the partial sum M_π = ⊕ R/(π_i^{k_i}), taken over those i for which π_i is associated to π, is the π-primary component of M (an isomorphism certainly maps a π-primary component onto a π-primary component). So it is only necessary, now, to consider π-primary modules M. The uniqueness statement now follows from Theorem 18.6.7.

Since abelian groups can be considered as ℤ-modules, and ℤ is a principal ideal domain, we get the following corollary. We will restate this result in the next chapter and prove a different version of it.
Theorem 18.6.9 (fundamental theorem for finitely generated abelian groups). Let {0} ≠ G = (G, +) be a finitely generated abelian group. Then there exist prime numbers p_1, ..., p_r, 0 ≤ r < ∞, and numbers k_1, ..., k_r ∈ ℕ, t ∈ ℕ_0, such that

    G ≅ ℤ/(p_1^{k_1}ℤ) ⊕ ··· ⊕ ℤ/(p_r^{k_r}ℤ) ⊕ ℤ ⊕ ··· ⊕ ℤ   (t times),

and G is, up to isomorphism, uniquely determined by (p_1^{k_1}, ..., p_r^{k_r}, t).
18.7 Exercises
1. Let M and N be isomorphic modules over a commutative ring R. Then End_R(M) and End_R(N) are isomorphic rings. (End_R(M) is the set of all R-module endomorphisms of M.)

2. Let R be an integral domain and M an R-module with M = Tor(M) (torsion module). Show that Hom_R(M, R) = 0. (Hom_R(M, R) is the set of all R-module homomorphisms from M to R.)

3. Prove the isomorphism theorems for modules (1), (2) and (3) in Theorem 18.1.11 in detail.

4. Let M, M′, N be R-modules, R a commutative ring. Show:
(i) Hom_R(M ⊕ M′, N) ≅ Hom_R(M, N) × Hom_R(M′, N)
(ii) Hom_R(N, M × M′) ≅ Hom_R(N, M) ⊕ Hom_R(N, M′).

5. Show that two free modules having bases whose cardinalities are equal are isomorphic.
6. Let M be a unitary R-module (R a commutative ring) and let {m_1, ..., m_s} be a finite subset of M. Show that the following are equivalent:
(i) {m_1, ..., m_s} generates M freely.
(ii) {m_1, ..., m_s} is linearly independent and generates M.
(iii) Every element m ∈ M is uniquely expressible in the form m = Σ_{i=1}^{s} r_i m_i with r_i ∈ R.
(iv) Each Rm_i is torsion-free, and M = Rm_1 ⊕ ··· ⊕ Rm_s.

7. Let R be a principal ideal domain and M ≠ {0} be an R-module with M = T(M).
(i) Let x_1, ..., x_n ∈ M be pairwise different with pairwise relatively prime orders o_{x_i} = α_i. Then y = x_1 + ··· + x_n has order α := α_1 ··· α_n.
(ii) Let 0 ≠ x ∈ M and o_x = επ_1^{k_1} ··· π_n^{k_n} be a prime decomposition of the order o_x of x (ε a unit in R and the π_i pairwise nonassociate prime elements in R) where n > 0, k_i > 0. Then there exist x_i, i = 1, ..., n, with o_{x_i} associated to π_i^{k_i} and x = x_1 + ··· + x_n.
Chapter 19
Finitely Generated Abelian Groups
19.1 Finite Abelian Groups
In Chapter 10 we described the following theorem, which completely determines the structure of finite abelian groups. As we saw in Chapter 18, this result is a special case of a general result on modules over principal ideal domains.
Theorem 19.1.1 (Theorem 10.4.1, basis theorem for ﬁnite abelian groups). Let G be
a ﬁnite abelian group. Then G is a direct product of cyclic groups of prime power
order.
We review two examples that show how this theorem leads to the classiﬁcation of
ﬁnite abelian groups. In particular this theorem allows us, for a given ﬁnite order n,
to present a complete classiﬁcation of abelian groups of order n.
Since all cyclic groups of order n are isomorphic to (ℤ_n, +), ℤ_n = ℤ/nℤ, we will denote a cyclic group of order n by ℤ_n.
Example 19.1.2. Classify all abelian groups of order 60. Let G be an abelian group of order 60. From Theorem 10.4.1, G must be a direct product of cyclic groups of prime power order. Now 60 = 2² · 3 · 5, so the only primes involved are 2, 3 and 5. Hence the cyclic groups involved in the direct product decomposition of G have order 2, 4, 3 or 5 (by Lagrange's theorem they must be divisors of 60). Therefore G must be of the form

    G ≅ ℤ_4 × ℤ_3 × ℤ_5   or   G ≅ ℤ_2 × ℤ_2 × ℤ_3 × ℤ_5.

Hence up to isomorphism there are only two abelian groups of order 60.
Example 19.1.3. Classify all abelian groups of order 180. Let G be an abelian group of order 180. Now 180 = 2² · 3² · 5, so the only primes involved are 2, 3 and 5. Hence the cyclic groups involved in the direct product decomposition of G have order 2, 4, 3, 9 or 5 (by Lagrange's theorem they must be divisors of 180). Therefore G must be of the form

    G ≅ ℤ_4 × ℤ_9 × ℤ_5,
    G ≅ ℤ_2 × ℤ_2 × ℤ_9 × ℤ_5,
    G ≅ ℤ_4 × ℤ_3 × ℤ_3 × ℤ_5,  or
    G ≅ ℤ_2 × ℤ_2 × ℤ_3 × ℤ_3 × ℤ_5.

Hence up to isomorphism there are four abelian groups of order 180.
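Both examples follow the same mechanical recipe: factor n, choose one partition of each prime's exponent, and read off the cyclic factors of prime power order. A small Python sketch of that recipe (the function names are ours, not the book's):

```python
from itertools import product

def partitions(m, max_part=None):
    """All partitions of m as non-decreasing tuples, e.g. 2 -> (1, 1) and (2,)."""
    if max_part is None:
        max_part = m
    if m == 0:
        return [()]
    result = []
    for first in range(1, min(m, max_part) + 1):
        for rest in partitions(m - first, first):
            result.append(rest + (first,))
    return result

def prime_factorization(n):
    """Return n as a dict {prime: exponent} by trial division."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def abelian_groups(n):
    """Every abelian group of order n, as a tuple of prime-power cyclic orders."""
    factors = prime_factorization(n)
    # for each prime p^k, a partition (m_1, ..., m_j) of k gives factors p^{m_i}
    choices = [[tuple(p ** m for m in part) for part in partitions(k)]
               for p, k in factors.items()]
    return [sum(combo, ()) for combo in product(*choices)]
```

For n = 60 this yields the two tuples (2, 2, 3, 5) and (4, 3, 5), matching the groups found above; for n = 180 it yields four tuples.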
The proof of Theorem 19.1.1 involves the following lemmas. We refer back to Chapter 10 or Chapter 18 for the proofs. Notice how these lemmas mirror the results for finitely generated modules over principal ideal domains considered in the last chapter.
Lemma 19.1.4. Let G be a finite abelian group and let p | |G| where p is a prime. Then all the elements of G whose orders are a power of p form a normal subgroup of G. This subgroup is called the p-primary component of G, which we will denote by G_p.
Lemma 19.1.5. Let G be a finite abelian group of order n. Suppose that n = p_1^{e_1} ··· p_k^{e_k} with p_1, ..., p_k distinct primes. Then

    G ≅ G_{p_1} × ··· × G_{p_k}

where G_{p_i} is the p_i-primary component of G.
Theorem 19.1.6 (basis theorem for ﬁnite abelian groups). Let G be a ﬁnite abelian
group. Then G is a direct product of cyclic groups of prime power order.
19.2 The Fundamental Theorem: p-Primary Components
In this section we use the fundamental theorem for finitely generated modules over principal ideal domains to extend the basis theorem for finite abelian groups to the more general case of finitely generated abelian groups. In this section we consider the decomposition into p-primary components, mirroring our result in the finite case. In the next section we present a different form of the basis theorem with a more elementary proof.
In Chapter 18 we proved the following:
Theorem 19.2.1 (fundamental theorem for finitely generated modules over principal ideal domains). Let R be a principal ideal domain and M ≠ {0} be a finitely generated (unitary) R-module. Then there exist prime elements π_1, ..., π_r ∈ R, 0 ≤ r < ∞, and numbers k_1, ..., k_r ∈ ℕ, t ∈ ℕ_0, such that

    M ≅ R/(π_1^{k_1}) ⊕ R/(π_2^{k_2}) ⊕ ··· ⊕ R/(π_r^{k_r}) ⊕ R ⊕ ··· ⊕ R   (t times),

and M is, up to isomorphism, uniquely determined by (π_1^{k_1}, ..., π_r^{k_r}, t).

The prime elements π_i are not necessarily pairwise different (up to units in R); that means it can be π_i = επ_j for i ≠ j where ε is a unit in R.
Since abelian groups can be considered as ℤ-modules, and ℤ is a principal ideal domain, we get the following corollary, which is extremely important in its own right.
Theorem 19.2.2 (fundamental theorem for finitely generated abelian groups). Let {0} ≠ G = (G, +) be a finitely generated abelian group. Then there exist prime numbers p_1, ..., p_r, 0 ≤ r < ∞, and numbers k_1, ..., k_r ∈ ℕ, t ∈ ℕ_0, such that

    G ≅ ℤ/(p_1^{k_1}ℤ) ⊕ ··· ⊕ ℤ/(p_r^{k_r}ℤ) ⊕ ℤ ⊕ ··· ⊕ ℤ   (t times),

and G is, up to isomorphism, uniquely determined by (p_1^{k_1}, ..., p_r^{k_r}, t).
Notice that the number t of infinite components is unique. This is called the rank or Betti number of the abelian group G. This number plays an important role in the study of homology and cohomology groups in topology.

If G = ℤ × ℤ × ··· × ℤ = ℤ^r for some r we call G a free abelian group of rank r. Notice that if an abelian group G is torsion-free then the p-primary components are just the identity. It follows that in this case G is a free abelian group of finite rank. Again using module theory it follows that subgroups of such a group must also be free abelian and of smaller or equal rank. Notice the distinction between free abelian groups and absolutely free groups (see Chapter 14). In the free group case a nonabelian free group of finite rank contains free subgroups of all possible countable ranks. In the free abelian case however the subgroups have smaller or equal rank. We summarize these comments.
Theorem 19.2.3. Let G ≠ {0} be a finitely generated torsion-free abelian group. Then G is a free abelian group of finite rank r, that is, G ≅ ℤ^r. Further, if H is a subgroup of G then H is also free abelian and the rank of H is smaller than or equal to the rank of G.
19.3 The Fundamental Theorem: Elementary Divisors
In this section we present the fundamental theorem of ﬁnitely generated abelian
groups in a slightly different form and present an elementary proof of it.
In the following G is always a finitely generated abelian group. We use the addition "+" for the binary operation, that is,

    + : G × G → G,  (x, y) ↦ x + y.

We also write ng instead of g^n and use 0 as the symbol for the identity element in G, that is, 0 + g = g for all g ∈ G. G = (g_1, ..., g_t), 0 ≤ t < ∞, that is, G is (finitely) generated by g_1, ..., g_t, is equivalent to the fact that each g ∈ G can be written in the form g = n_1 g_1 + n_2 g_2 + ··· + n_t g_t, n_i ∈ ℤ. A relation between the g_i with coefficients n_1, ..., n_t is then each equation of the form n_1 g_1 + ··· + n_t g_t = 0. A relation is called nontrivial if n_i ≠ 0 for at least one i. A system R of relations in G is called a system of defining relations if each relation in G is a consequence of R. The elements g_1, ..., g_t are called integrally linear independent if there are no nontrivial relations between them. A finite generating system {g_1, ..., g_t} of G is called a minimal generating system if there is no generating system with t − 1 elements.

Certainly each finitely generated group has a minimal generating system. In the following we always assume that our finitely generated abelian group G is unequal {0}, that is, G is nontrivial.
As above, we may consider G as a finitely generated ℤ-module, and in this sense the subgroups of G are precisely the submodules. Hence, it is clear what we mean if we call G a direct product G = U_1 × ··· × U_s of its subgroups U_1, ..., U_s, namely, each g ∈ G can be written as g = u_1 + u_2 + ··· + u_s with u_i ∈ U_i and

    U_i ∩ (Σ_{j=1, j≠i}^{s} U_j) = {0}.

To emphasize the little difference between abelian groups and ℤ-modules we here use the notation "direct product" instead of "direct sum". Considered as ℤ-modules, for finite index sets I = {1, ..., s} we have anyway ∏_{i=1}^{s} U_i = ⊕_{i=1}^{s} U_i.
Finally we use the notation ℤ_n instead of ℤ/nℤ, n ∈ ℕ. In general, we write Z_n for an abstract cyclic group of order n.
The aim in this section is to prove the following:
Theorem 19.3.1 (basis theorem for finitely generated abelian groups). Let G ≠ {0} be a finitely generated abelian group. Then G is a direct product

    G ≅ Z_{k_1} × ··· × Z_{k_r} × U_1 × ··· × U_s,  r ≥ 0, s ≥ 0,

of cyclic subgroups with |Z_{k_i}| = k_i for i = 1, ..., r, k_i | k_{i+1} for i = 1, ..., r − 1 and U_j ≅ ℤ for j = 1, ..., s. Here the numbers k_1, ..., k_r, r and s are uniquely determined by G; that means, if k′_1, ..., k′_{r′}, r′ and s′ are the respective numbers for a second analogous decomposition of G, then r = r′, k_1 = k′_1, ..., k_r = k′_r and s = s′.

The numbers k_i are called the elementary divisors of G.

We can have r = 0 or s = 0 (but not both because G ≠ {0}). If s > 0, r = 0 then G is a free abelian group of rank s (exactly the same rank if you consider G as a free ℤ-module of rank s). If s = 0 then G is finite; in fact: s = 0 ⟺ G is finite.
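The elementary divisor chain k_1 | k_2 | ··· | k_r and the prime-power decomposition of Section 19.2 determine each other. One direction can be sketched in Python: group the prime-power orders of the cyclic factors by prime, sort each prime's powers, and multiply the largest powers together to get k_r, the next largest to get k_{r−1}, and so on (the function name is ours; inputs are assumed to be prime powers):

```python
from collections import defaultdict
from math import prod

def elementary_divisors(prime_powers):
    """Combine prime-power cyclic orders into the chain k_1 | k_2 | ... | k_r."""
    by_prime = defaultdict(list)
    for q in prime_powers:
        # smallest divisor > 1 of a prime power is its prime
        p = next(d for d in range(2, q + 1) if q % d == 0)
        by_prime[p].append(q)
    for powers in by_prime.values():
        powers.sort(reverse=True)  # largest power of each prime first
    r = max(len(v) for v in by_prime.values())
    # k_r collects the largest power of every prime, k_{r-1} the next, ...
    chain = [prod(v[i] for v in by_prime.values() if i < len(v))
             for i in range(r)]
    return list(reversed(chain))
```

For example ℤ_4 × ℤ_3 × ℤ_3 × ℤ_5 (order 180) becomes the chain 3 | 60, i.e. Z_3 × Z_60.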
We ﬁrst prove some preliminary results.
Lemma 19.3.2. Let G = (g_1, ..., g_t), t ≥ 2, be an abelian group. Then also G = (g_1 + Σ_{i=2}^{t} m_i g_i, g_2, ..., g_t) for arbitrary m_2, ..., m_t ∈ ℤ.
Lemma 19.3.3. Let G be a finitely generated abelian group. Among all nontrivial relations between elements of minimal generating systems of G we choose one relation

    m_1 g_1 + ··· + m_t g_t = 0   (∗)

with smallest possible positive coefficient, and let this smallest coefficient be m_1. Let

    n_1 g_1 + ··· + n_t g_t = 0   (∗∗)

be another relation between the same generators g_1, ..., g_t. Then
(1) m_1 | n_1 and
(2) m_1 | m_i for i = 1, 2, ..., t.
Proof. (1) Assume m_1 ∤ n_1. Then n_1 = qm_1 + m′_1 with 0 < m′_1 < m_1. If we multiply the relation (∗) with q and subtract the resulting relation from the relation (∗∗), then we get a relation with a coefficient m′_1 < m_1, which contradicts the choice of m_1. Hence m_1 | n_1.

(2) Assume m_1 ∤ m_2. Then m_2 = qm_1 + m′_2 with 0 < m′_2 < m_1. {g_1 + qg_2, g_2, ..., g_t} is a minimal generating system which satisfies the relation m_1(g_1 + qg_2) + m′_2 g_2 + m_3 g_3 + ··· + m_t g_t = 0, and this relation has a coefficient m′_2 < m_1. This again contradicts the choice of m_1. Hence m_1 | m_2 and further m_1 | m_i for i = 1, ..., t.
Lemma 19.3.4 (invariant characterization of k_r for finite abelian groups G). Let G = Z_{k_1} × ··· × Z_{k_r} with Z_{k_i} finite cyclic of order k_i ≥ 2, i = 1, ..., r, and k_i | k_{i+1} for i = 1, ..., r − 1. Then k_r is the smallest natural number n such that ng = 0 for all g ∈ G. k_r is called the exponent or the maximal order of G.
Proof. 1. Let g ∈ G be arbitrary, that is, g = n_1 g_1 + ··· + n_r g_r with g_i ∈ Z_{k_i}. Then k_i g_i = 0 for i = 1, ..., r by the theorem of Fermat. Since k_i | k_r we get k_r g = n_1 k_r g_1 + ··· + n_r k_r g_r = 0.

2. Let a ∈ G with Z_{k_r} = (a). Then the order of a is k_r and, hence, na ≠ 0 for all 0 < n < k_r.
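In other words, the exponent of a direct product of cyclic groups is the least common multiple of the component orders, and the divisor chain k_1 | ··· | k_r collapses that lcm to k_r. A one-line check (Python 3.9+ for the variadic math.lcm):

```python
from math import lcm

def exponent(cyclic_orders):
    """Exponent of Z_{k_1} x ... x Z_{k_r}: the lcm of the component orders."""
    return lcm(*cyclic_orders)
```

For the chain 2 | 4 | 12 the exponent is 12 = k_r; for a non-chain such as orders 4 and 6 the exponent is 12, not one of the listed orders, which is why Lemma 19.3.4 insists on k_i | k_{i+1}.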
Lemma 19.3.5 (invariant characterization of s). Let G = Z_{k_1} × ··· × Z_{k_r} × U_1 × ··· × U_s, s > 0, where the Z_{k_i} are finite cyclic groups of order k_i and the U_j are infinite cyclic groups. Then s is the maximal number of integrally linear independent elements of G; s is called the rank of G.
Proof. 1. Let g_i ∈ U_i, g_i ≠ 0, for i = 1, ..., s. Then g_1, ..., g_s are integrally linear independent because from n_1 g_1 + ··· + n_s g_s = 0, the n_i ∈ ℤ, we get n_1 g_1 ∈ U_1 ∩ (U_2 × ··· × U_s) = {0}, and, hence, n_1 g_1 = 0, that is, n_1 = 0, because g_1 has infinite order. Analogously we get n_2 = ··· = n_s = 0.
2. Let g_1, ..., g_{s+1} ∈ G. We look for integers x_1, ..., x_{s+1}, not all 0, such that a relation Σ_{i=1}^{s+1} x_i g_i = 0 holds. Let Z_{k_i} = (a_i) and U_j = (b_j). Then we may write each g_i as g_i = m_{i1} a_1 + ··· + m_{ir} a_r + n_{i1} b_1 + ··· + n_{is} b_s for i = 1, ..., s + 1, where m_{ij} a_j ∈ Z_{k_j} and n_{il} b_l ∈ U_l.
Case 1: all m_{ij} a_j = 0. Then Σ_{i=1}^{s+1} x_i g_i = 0 is equivalent to

    Σ_{i=1}^{s+1} x_i (Σ_{j=1}^{s} n_{ij} b_j) = Σ_{j=1}^{s} (Σ_{i=1}^{s+1} n_{ij} x_i) b_j = 0.

The system Σ_{i=1}^{s+1} n_{ij} x_i = 0, j = 1, ..., s, of linear equations has at least one nontrivial rational solution (x_1, ..., x_{s+1}) because we have more unknowns than equations. Multiplication with the common denominator gives a nontrivial integral solution (x_1, ..., x_{s+1}) ∈ ℤ^{s+1}. For this solution we get Σ_{i=1}^{s+1} x_i g_i = 0.
Case 2: the m_{ij} a_j arbitrary. Let k ≠ 0 be a common multiple of the orders k_j of the cyclic groups Z_{k_j}, j = 1, ..., r. Then

    kg_i = m_{i1} ka_1 + ··· + m_{ir} ka_r + n_{i1} kb_1 + ··· + n_{is} kb_s = n_{i1} kb_1 + ··· + n_{is} kb_s

for i = 1, ..., s + 1, because the terms m_{ij} ka_j are all 0. By Case 1 the kg_1, ..., kg_{s+1} are integrally linear dependent, that is, we have integers x_1, ..., x_{s+1}, not all 0, with 0 = Σ_{i=1}^{s+1} x_i (kg_i) = Σ_{i=1}^{s+1} (x_i k) g_i, and the x_i k are not all 0. Hence, also g_1, ..., g_{s+1} are integrally linear dependent.
Lemma 19.3.6. Let G := Z_{k_1} × ··· × Z_{k_r} ≅ Z_{k′_1} × ··· × Z_{k′_{r′}} =: G′, the Z_{k_i}, Z_{k′_j} cyclic groups of orders k_i ≠ 1 and k′_j ≠ 1, respectively, with k_i | k_{i+1} for i = 1, ..., r − 1 and k′_j | k′_{j+1} for j = 1, ..., r′ − 1. Then r = r′ and k_1 = k′_1, k_2 = k′_2, ..., k_r = k′_r.
Proof. We prove this lemma by induction on the group order |G| = |G′|. Certainly, Lemma 19.3.6 holds if |G| ≤ 2 because then either G = {0}, and here r = r′ = 0, or G ≅ ℤ_2, and here r = r′ = 1. Now let |G| > 2. Then especially r ≥ 1. Inductively we assume that Lemma 19.3.6 holds for all finite abelian groups of order less than |G|. By Lemma 19.3.4 the number k_r is invariantly characterized, that is, from G ≅ G′ follows k_r = k′_{r′}, that is especially, Z_{k_r} ≅ Z_{k′_{r′}}. Then G/Z_{k_r} ≅ G′/Z_{k′_{r′}}, that is, Z_{k_1} × ··· × Z_{k_{r−1}} ≅ Z_{k′_1} × ··· × Z_{k′_{r′−1}}. Inductively r − 1 = r′ − 1, that is, r = r′, and k_1 = k′_1, ..., k_{r−1} = k′_{r′−1}. This proves Lemma 19.3.6.
We can now present the main result, which we state again, and its proof.
Theorem 19.3.7 (basis theorem for finitely generated abelian groups). Let G ≠ {0} be a finitely generated abelian group. Then G is a direct product

    G ≅ Z_{k_1} × ··· × Z_{k_r} × U_1 × ··· × U_s,  r ≥ 0, s ≥ 0,

of cyclic subgroups with |Z_{k_i}| = k_i for i = 1, ..., r, k_i | k_{i+1} for i = 1, ..., r − 1 and U_j ≅ ℤ for j = 1, ..., s. Here the numbers k_1, ..., k_r, r and s are uniquely determined by G; that means, if k′_1, ..., k′_{r′}, r′ and s′ are the respective numbers for a second analogous decomposition of G, then r = r′, k_1 = k′_1, ..., k_r = k′_r and s = s′.
Proof. (a) We first prove the existence of the given decomposition. Let G ≠ {0} be a finitely generated abelian group. Let t, 0 < t < ∞, be the number of elements in a minimal generating system of G. We have to show that G is decomposable as a direct product of t cyclic groups with the given description. We prove this by induction on t. If t = 1 then the basis theorem certainly is correct. Now let t ≥ 2 and assume that the assertion holds for all abelian groups with less than t generators.

Case 1: There does not exist a minimal generating system of G which satisfies a nontrivial relation. Let {g_1, ..., g_t} be an arbitrary minimal generating system for G. Let U_i = (g_i). Then all U_i are infinite cyclic and we have G = U_1 × ··· × U_t because if, for instance, U_1 ∩ (U_2 + ··· + U_t) ≠ {0}, then we must have a nontrivial relation between the g_1, ..., g_t.
Case 2: There exist minimal generating systems of G which satisfy nontrivial relations. Among all nontrivial relations between elements of minimal generating systems of G we choose one relation

    m_1 g_1 + ··· + m_t g_t = 0   (∗)

with smallest possible positive coefficient. Without loss of generality, let m_1 be this coefficient. By Lemma 19.3.3 we get m_2 = q_2 m_1, ..., m_t = q_t m_1. Now {g_1 + Σ_{i=2}^{t} q_i g_i, g_2, ..., g_t} is a minimal generating system of G by Lemma 19.3.2. Define h_1 = g_1 + Σ_{i=2}^{t} q_i g_i; then m_1 h_1 = 0. If n_1 h_1 + n_2 g_2 + ··· + n_t g_t = 0 is an arbitrary relation between h_1, g_2, ..., g_t then m_1 | n_1 by Lemma 19.3.3, and, hence, n_1 h_1 = 0. Define H_1 := (h_1) and G′ = (g_2, ..., g_t). Then G = H_1 × G′. This we can see as follows: First, each g ∈ G can be written as g = c_1 h_1 + c_2 g_2 + ··· + c_t g_t = c_1 h_1 + g′ with g′ ∈ G′. Also H_1 ∩ G′ = {0} because n_1 h_1 = g′ ∈ G′ implies a relation n_1 h_1 − n_2 g_2 − ··· − n_t g_t = 0, and from this we get, as above, n_1 h_1 = g′ = 0. Now, inductively, G′ = Z_{k_2} × ··· × Z_{k_r} × U_1 × ··· × U_s with Z_{k_i} a cyclic group of order k_i, i = 2, ..., r, k_i | k_{i+1} for i = 2, ..., r − 1, U_j ≅ ℤ for j = 1, ..., s, and (r − 1) + s = t − 1, that is, r + s = t. Further, G = H_1 × G′ where H_1 is cyclic of order m_1. If r ≥ 2, let h_2 be a generator of Z_{k_2}; then we get a nontrivial relation

    m_1 h_1 + k_2 h_2 = 0

(both summands are 0) since k_2 ≠ 0. Again m_1 | k_2 by Lemma 19.3.3. This gives the desired decomposition.
(b) We now prove the uniqueness statement.

Case 1: G is finite abelian. Then the claim follows from Lemma 19.3.6.

Case 2: G is arbitrary finitely generated and abelian. Let T := {x ∈ G : |x| < ∞}, that is, the set of elements of G of finite order. Since G is abelian, T is a subgroup of G, the so-called torsion subgroup of G. If, as above, G = Z_{k_1} × ··· × Z_{k_r} × U_1 × ··· × U_s, then T = Z_{k_1} × ··· × Z_{k_r} because an element b_1 + ··· + b_r + c_1 + ··· + c_s with b_i ∈ Z_{k_i}, c_j ∈ U_j has finite order if and only if all c_j = 0. That means: Z_{k_1} × ··· × Z_{k_r} is, independent of the special decomposition, uniquely determined by G, and hence, so are the numbers r, k_1, ..., k_r by Lemma 19.3.6. Finally the number s, the rank of G, is uniquely determined by Lemma 19.3.5. This proves the basis theorem for finitely generated abelian groups.
As a corollary we get the fundamental theorem for finitely generated abelian groups as given in Theorem 19.2.2.
Theorem 19.3.8. Let {0} ≠ G = (G, +) be a finitely generated abelian group. Then there exist prime numbers p_1, ..., p_r, 0 ≤ r < ∞, and numbers k_1, ..., k_r ∈ ℕ, t ∈ ℕ_0, such that

    G ≅ ℤ_{p_1^{k_1}} × ··· × ℤ_{p_r^{k_r}} × ℤ × ··· × ℤ   (t times),

and G is, up to isomorphism, uniquely determined by (p_1^{k_1}, ..., p_r^{k_r}, t).
Proof. For the existence we only have to show that ℤ_{mn} ≅ ℤ_m × ℤ_n if gcd(m, n) = 1. For this we write U_m = (n + mnℤ) < ℤ_{mn} and U_n = (m + mnℤ) < ℤ_{mn}; then U_m ∩ U_n = {mnℤ} because gcd(m, n) = 1. Further there are h, k ∈ ℤ with 1 = hm + kn. Hence, l + mnℤ = (hlm + mnℤ) + (kln + mnℤ), and therefore ℤ_{mn} = U_m × U_n ≅ ℤ_m × ℤ_n.

For the uniqueness statement we may reduce the problem to the case |G| = p^k for a prime number p and k ∈ ℕ. But here the result follows directly from Lemma 19.3.6.
From this proof we automatically get the Chinese remainder theorem for the case ℤ_n = ℤ/nℤ.
Theorem 19.3.9 (Chinese remainder theorem). Let m_1, ..., m_r ∈ ℕ, r ≥ 2, with gcd(m_i, m_j) = 1 for i ≠ j. Define m := m_1 ··· m_r.

(1) π : ℤ_m → ℤ_{m_1} × ··· × ℤ_{m_r}, a + mℤ ↦ (a + m_1ℤ, ..., a + m_rℤ), defines a ring isomorphism.

(2) The restriction of π to the multiplicative group of the prime residue classes defines a group isomorphism ℤ_m^* → ℤ_{m_1}^* × ··· × ℤ_{m_r}^*.

(3) For given a_1, ..., a_r ∈ ℤ there exists, modulo m, exactly one x ∈ ℤ with x ≡ a_i (mod m_i) for i = 1, ..., r.

Recall that for k ∈ ℕ a prime residue class is defined by a + kℤ with gcd(a, k) = 1. The set of prime residue classes modulo k is certainly a multiplicative group.
Proof. By the proof of Theorem 19.3.8 we get that π is an additive group isomorphism, which can be extended directly to a ring isomorphism via (a + mℤ)(b + mℤ) ↦ (ab + m_1ℤ, ..., ab + m_rℤ). The remaining statements are now obvious.
Let A(n) be the number of nonisomorphic finite abelian groups of order n = p_1^{k_1} ··· p_r^{k_r}, r ≥ 1, with pairwise different prime numbers p_1, ..., p_r and k_1, ..., k_r ∈ ℕ. By Theorem 19.2.2 we have A(n) = A(p_1^{k_1}) ··· A(p_r^{k_r}). Hence, to calculate A(n), we have to calculate A(p^m) for a prime number p and m ∈ ℕ. Again, by Theorem 19.2.2, we get G ≅ ℤ_{p^{m_1}} × ··· × ℤ_{p^{m_k}}, all m_i ≥ 1, if G is abelian of order p^m. If we compare the orders we get m = m_1 + ··· + m_k. We may order the m_i by size. A k-tuple (m_1, ..., m_k) with 0 < m_1 ≤ m_2 ≤ ··· ≤ m_k and m_1 + m_2 + ··· + m_k = m is called a partition of m. From above, each abelian group of order p^m gives a partition (m_1, ..., m_k) of m for some k with 1 ≤ k ≤ m. On the other side, each partition (m_1, ..., m_k) of m gives an abelian group of order p^m, namely ℤ_{p^{m_1}} × ··· × ℤ_{p^{m_k}}. Theorem 19.2.2 shows that different partitions give nonisomorphic groups. If we define p(m) to be the number of partitions of m, then we get the following: A(p^m) = p(m) and A(p_1^{k_1} ··· p_r^{k_r}) = p(k_1) ··· p(k_r).
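The counting function p(m) satisfies the standard recurrence "number of partitions of m with parts ≤ largest = (partitions avoiding a part equal to largest) + (partitions containing one)"; with it, A(n) is a product over the exponents in the factorization of n. A sketch with our own function names:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def num_partitions(m, largest=None):
    """p(m): the number of partitions of m into parts of size <= largest."""
    if largest is None or largest > m:
        largest = m
    if m == 0:
        return 1
    if largest == 0:
        return 0
    # either no part equals `largest`, or remove one part of that size
    return num_partitions(m, largest - 1) + num_partitions(m - largest, largest)

def num_abelian_groups(n):
    """A(n) = p(k_1) * ... * p(k_r) for n = p_1^{k_1} ... p_r^{k_r}."""
    result, d = 1, 2
    while d * d <= n:
        k = 0
        while n % d == 0:
            n //= d
            k += 1
        if k:
            result *= num_partitions(k)
        d += 1
    if n > 1:
        result *= num_partitions(1)
    return result
```

num_abelian_groups(60) and num_abelian_groups(180) reproduce the counts 2 and 4 from Section 19.1, and num_abelian_groups(16) = p(4) = 5.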
19.4 Exercises
1. Let H be a finitely generated abelian group which is the homomorphic image of a torsion-free abelian group of finite rank n. Show that H is the direct sum of ≤ n cyclic groups.

2. Determine (up to isomorphism) all groups of order p² (p prime) and all abelian groups of order ≤ 15.

3. Let G be an abelian group with generating elements a_1, ..., a_4 and defining relations

    5a_1 + 4a_2 + a_3 + 5a_4 = 0
    7a_1 + 6a_2 + 5a_3 + 11a_4 = 0
    2a_1 + 2a_2 + 10a_3 + 12a_4 = 0
    10a_1 + 8a_2 + 4a_3 + 4a_4 = 0.

Express G as a direct product of cyclic groups.

4. Let G be a finite abelian group and u = ∏_{g∈G} g the product of all elements of G. Show: If G has exactly one element a of order 2, then u = a; otherwise u = e. Conclude from this the theorem of Wilson:

    (p − 1)! ≡ −1 (mod p) for each prime p.

5. Let p be a prime and G a finite abelian p-group, that is, the order of each element of G is finite and a power of p. Show that G is cyclic if G has exactly one subgroup of order p. Is the statement still correct if G is not abelian?
Chapter 20
Integral and Transcendental Extensions
20.1 The Ring of Algebraic Integers
Recall that a complex number α is an algebraic number if it is algebraic over the rational numbers ℚ. That is, α is a zero of a polynomial p(x) ∈ ℚ[x]. If α ∈ ℂ is not algebraic then it is a transcendental number.

We will let A denote the totality of algebraic numbers within the complex numbers ℂ, and T the set of transcendentals, so that ℂ = A ∪ T. The set A is the algebraic closure of ℚ within ℂ.

The set A of algebraic numbers forms a subfield of ℂ (see Chapter 5) and the subset A′ = A ∩ ℝ of real algebraic numbers forms a subfield of ℝ. The field A is an algebraic extension of the rationals ℚ; however, the degree is infinite.

Since each rational is algebraic it is clear that there are algebraic numbers. Further there are irrational algebraic numbers, √2 for example, since it is a root of the irreducible polynomial x² − 2 over ℚ. In Chapter 5 we proved that there are uncountably infinitely many transcendental numbers (Theorem 5.5.3). However it is very difficult to prove that any particular real or complex number is actually transcendental. In Theorem 5.5.4 we showed that the real number

    c = Σ_{j=1}^{∞} 1/10^{j!}

is transcendental.
In this section we examine a special type of algebraic number called an algebraic integer. These are the algebraic numbers that are zeros of monic integral polynomials. The set of all such algebraic integers forms a subring of ℂ. The proofs in this section can be found in [35].

After we do this we extend the concept of an algebraic integer to a general context and define integral ring extensions. We then consider field extensions that are nonalgebraic, that is, transcendental field extensions. Finally we will prove that the familiar numbers e and π are transcendental.
Definition 20.1.1. An algebraic integer is a complex number α that is a root of a monic integral polynomial. That is, α ∈ ℂ is an algebraic integer if there exists f(x) ∈ ℤ[x] with f(x) = x^n + b_{n−1}x^{n−1} + ··· + b_0, b_i ∈ ℤ, n ≥ 1, and f(α) = 0.
An algebraic integer is clearly an algebraic number. The following are clear.
Lemma 20.1.2. If α ∈ ℂ is an algebraic integer, then all its conjugates α_1, ..., α_n over ℚ are also algebraic integers.

Lemma 20.1.3. α ∈ ℂ is an algebraic integer if and only if its minimal polynomial satisfies m_α(x) ∈ ℤ[x].
To prove the converse of this lemma we need the concept of a primitive integral polynomial. This is a polynomial p(x) ∈ ℤ[x] such that the GCD of all its coefficients is 1. The following can be proved (see exercises):
(1) If f(x) and g(x) are primitive then so is f(x)g(x).
(2) If f(x) ∈ ℤ[x] is monic then it is primitive.
(3) If f(x) ∈ ℚ[x] then there exists a rational number c such that f(x) = cf_1(x) with f_1(x) primitive.
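Property (1) is Gauss's lemma, and it can be checked numerically: the content (the gcd of the coefficients) of a product is the product of the contents. A small sketch in Python (coefficient lists, lowest degree first; the function names are ours):

```python
from functools import reduce
from math import gcd

def content(coeffs):
    """GCD of the integer coefficients; a polynomial is primitive iff this is 1."""
    return reduce(gcd, coeffs)

def poly_mul(f, g):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    result = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            result[i + j] += a * b
    return result
```

For the primitive polynomials 2 + 3x and 3 + 5x + 2x² the product again has content 1, and in general content(fg) = content(f) · content(g).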
Now suppose f(x) ∈ ℤ[x] is a monic polynomial with f(α) = 0. Let p(x) = m_α(x). Then p(x) divides f(x), so f(x) = p(x)q(x). Let p(x) = c_1 p_1(x) with p_1(x) primitive and let q(x) = c_2 q_1(x) with q_1(x) primitive. Then

    f(x) = c p_1(x) q_1(x).

Since f(x) is monic it is primitive and hence c = 1, so f(x) = p_1(x)q_1(x). Since p_1(x) and q_1(x) are integral and their product is monic, they both must be monic. Since p(x) = c_1 p_1(x) and they are both monic, it follows that c_1 = 1 and hence p(x) = p_1(x). Therefore p(x) = m_α(x) is integral.
When we speak of algebraic integers we will refer to the ordinary integers as rational integers. The next lemma shows the close ties between algebraic integers and rational integers.
Lemma 20.1.4. If α is an algebraic integer and also rational then it is a rational integer.
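This lemma is, in effect, the rational root test: a rational root a/b (in lowest terms) of a monic integer polynomial must have b dividing the leading coefficient 1. A small sketch with exact arithmetic (the helper name is mine):

```python
from fractions import Fraction

def is_root(poly, x):
    """Evaluate a polynomial (coefficient list, constant term first) at x exactly."""
    return sum(c * x ** i for i, c in enumerate(poly)) == 0

# x^2 - x - 6 = (x - 3)(x + 2) is monic: every rational root found below
# is in fact a rational integer, as Lemma 20.1.4 predicts.
monic = [-6, -1, 1]
rational_roots = [Fraction(p, q) for p in range(-10, 11) for q in range(1, 11)
                  if is_root(monic, Fraction(p, q))]
assert all(r.denominator == 1 for r in rational_roots)

# By contrast 2x - 1 (not monic) has the non-integral rational root 1/2,
# so 1/2 is an algebraic number but not an algebraic integer.
assert is_root([-1, 2], Fraction(1, 2))
```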
The following ties algebraic numbers in general to corresponding algebraic integers. Notice that if q ∈ Q then there exists a rational integer n such that nq ∈ Z. This result generalizes this simple idea.
Theorem 20.1.5. If θ is an algebraic number then there exists a rational integer r ≠ 0 such that rθ is an algebraic integer.
We saw that the set A of all algebraic numbers is a subfield of C. In the same manner the set I of all algebraic integers forms a subring of A. First we need an extension of the following result on algebraic numbers.
Lemma 20.1.6. Suppose α_1, ..., α_n are the conjugates over Q of an algebraic integer α. Then any integral symmetric function of α_1, ..., α_n is a rational integer.
Section 20.1 The Ring of Algebraic Integers 297
Theorem 20.1.7. The set I of all algebraic integers forms a subring of A.
We note that A, the ﬁeld of algebraic numbers, is precisely the quotient ﬁeld of the
ring of algebraic integers.
An algebraic number field is a finite extension of Q within C. Since any finite extension of Q is a simple extension each algebraic number field has the form K = Q(θ) for some algebraic number θ.
Let K = Q(θ) be an algebraic number field and let R_K = K ∩ I. Then R_K forms a subring of K called the algebraic integers or just integers of K. An analysis of the proof of Theorem 20.1.5 shows that each β ∈ K can be written as

    β = α/r  with α ∈ R_K and r ∈ Z.
These rings of algebraic integers share many properties with the rational integers. While there may not be unique factorization into primes there is always prime factorization.
Theorem 20.1.8. Let K be an algebraic number field and R_K its ring of integers. Then each α ∈ R_K is either 0, a unit or can be factored into a product of primes.
We stress again that the prime factorization need not be unique. However from the existence of a prime factorization we can mimic Euclid's original proof of the infinitude of primes (see [35]) to obtain:
Corollary 20.1.9. There exist infinitely many primes in R_K for any algebraic number ring R_K.
Just as any algebraic number field is finite dimensional over Q we will see that each R_K is of finite degree over Q. That is, if K has degree n over Q we show that there exist ω_1, ..., ω_n in R_K such that each α ∈ R_K is expressible as

    α = m_1 ω_1 + ··· + m_n ω_n  where m_1, ..., m_n ∈ Z.

Definition 20.1.10. An integral basis for R_K is a set of integers ω_1, ..., ω_t ∈ R_K such that each α ∈ R_K can be expressed uniquely as

    α = m_1 ω_1 + ··· + m_t ω_t  where m_1, ..., m_t ∈ Z.

The finite degree comes from the following result that shows there does exist an integral basis (see [35]).
Theorem 20.1.11. Let R_K be the ring of integers in the algebraic number field K of degree n over Q. Then there exists at least one integral basis for R_K.
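For a concrete instance of these notions (anticipating Exercise 2 below): in K = Q(√5) the element (1 + √5)/2 is an algebraic integer, being a root of the monic integral polynomial x² − x − 1, while (1 + √3)/2 is not. A sketch with exact arithmetic, representing a + b√d as a pair of Fractions (the helper names are mine):

```python
from fractions import Fraction

def quad_mul(u, v, d):
    """(a + b*sqrt(d)) * (a' + b'*sqrt(d)) in Q(sqrt(d)), as a Fraction pair."""
    (a, b), (a2, b2) = u, v
    return (a * a2 + d * b * b2, a * b2 + b * a2)

def check_monic_quadratic(alpha, d):
    """Does alpha = (a, b), meaning a + b*sqrt(d), satisfy a monic x^2 + px + q with p, q in Z?
    For b != 0 the minimal polynomial is x^2 - 2a*x + (a^2 - d*b^2)."""
    a, b = alpha
    p, q = -2 * a, a * a - d * b * b
    return p.denominator == 1 and q.denominator == 1

phi = (Fraction(1, 2), Fraction(1, 2))     # (1 + sqrt(5))/2, the golden ratio
assert check_monic_quadratic(phi, 5)       # algebraic integer: root of x^2 - x - 1
assert quad_mul(phi, phi, 5) == (phi[0] + 1, phi[1])   # indeed phi^2 = phi + 1
assert not check_monic_quadratic((Fraction(1, 2), Fraction(1, 2)), 3)
```

This reflects the dichotomy of Exercise 2: the integral basis of R_K is {1, (1 + √d)/2} when d ≡ 1 (mod 4) (as for d = 5), and {1, √d} otherwise.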
20.2 Integral ring extensions
We now extend the concept of an algebraic integer to general ring extensions. We first need the idea of an R-algebra where R is a commutative ring with identity 1 ≠ 0.
Definition 20.2.1. Let R be a commutative ring with an identity 1 ≠ 0. An R-algebra or algebra over R is a unitary R-module A in which there is an additional multiplication such that
(1) A is a ring with respect to the addition and this multiplication,
(2) (rx)y = x(ry) = r(xy) for all r ∈ R and x, y ∈ A.
As examples of R-algebras first consider R = K where K is a field and let A = M_n(K), the set of all (n × n)-matrices over K. Then M_n(K) is a K-algebra. Further the ring of polynomials K[x] is also a K-algebra.
We now define ring extensions. Let A be a ring, not necessarily commutative, with an identity 1 ≠ 0, and let R be a commutative subring of A which contains 1. Assume that R is contained in the center of A, that is, rx = xr for all r ∈ R and x ∈ A. We then call A a ring extension of R and write A|R. If A|R is a ring extension then A is an R-algebra in a natural manner.
Let A be an R-algebra with an identity 1 ≠ 0. Then we have the canonical ring homomorphism φ : R → A, r ↦ r·1. The image R' := φ(R) is a subring of the center of A, and R' contains the identity element of A. Then A|R' is a ring extension (in the above sense). Hence, if A is an R-algebra with an identity 1 ≠ 0 then we may consider R as a subring of A and A|R as a ring extension.
We now will extend to the general context of ring extensions the ideas of integral elements and integral extensions. As above, let R be a commutative ring with an identity 1 ≠ 0 and let A be an R-algebra.
Definition 20.2.2. An element a ∈ A is said to be integral over R or integrally dependent over R if there is a monic polynomial f(x) = x^n + α_{n-1}x^{n-1} + ··· + α_0 ∈ R[x] of degree n ≥ 1 over R with f(a) = a^n + α_{n-1}a^{n-1} + ··· + α_0 = 0. That is, a is integral over R if it is a root of a monic polynomial of degree ≥ 1 over R.
An equation that an integral element satisfies is called an integral equation of a over R. If A has an identity 1 ≠ 0 then we may write a^0 = 1 and f(a) = Σ_{i=0}^{n} α_i a^i with α_n = 1.
Example 20.2.3. 1. Let L|K be a field extension. a ∈ L is integral over K if and only if a is algebraic over K. If K is the quotient field of an integral domain R and a ∈ L is algebraic over K then there exists an α ∈ R with αa integral over R, because if 0 = α_n a^n + α_{n-1}a^{n-1} + ··· + α_0 then 0 = (α_n a)^n + α_{n-1}(α_n a)^{n-1} + ··· + α_n^{n-1}α_0.
2. The elements of C which are integral over Z are precisely the algebraic integers, that is, the roots of monic polynomials over Z.
Theorem 20.2.4. Let R be as above and A an R-algebra with an identity 1 ≠ 0. If A is, as an R-module, finitely generated then each element of A is integral over R.
Proof. Let {b_1, ..., b_n} be a finite generating system of A as an R-module. We may assume that b_1 = 1, otherwise add 1 to the system. As explained in the preliminaries, without loss of generality, we may assume that R ⊆ A. Let a ∈ A. For each 1 ≤ j ≤ n we have an equation ab_j = Σ_{k=1}^{n} α_{kj} b_k for some α_{kj} ∈ R. In other words:

    Σ_{k=1}^{n} (α_{kj} − δ_{jk} a) b_k = 0    (∗∗)

for j = 1, ..., n, where δ_{jk} = 0 if j ≠ k and δ_{jk} = 1 if j = k.
Define γ_{jk} := α_{kj} − δ_{jk} a and C = (γ_{jk})_{j,k}. C is an (n × n)-matrix over the commutative ring R[a]; recall that R[a] has an identity element. Let C̄ = (γ̄_{jk})_{j,k} be the complementary matrix of C. Then C̄C = (det C)·1_n. From (∗∗) we get

    0 = Σ_{j=1}^{n} γ̄_{ij} (Σ_{k=1}^{n} γ_{jk} b_k) = Σ_{k=1}^{n} (Σ_{j=1}^{n} γ̄_{ij} γ_{jk}) b_k = Σ_{k=1}^{n} (det C) δ_{ik} b_k = (det C) b_i

for all 1 ≤ i ≤ n. Since b_1 = 1 we have necessarily that det C = det(α_{kj} − δ_{jk} a)_{j,k} = 0 (recall that δ_{jk} = δ_{kj}). Hence a is a root of the monic polynomial f(x) = det(δ_{jk} x − α_{kj}) ∈ R[x] of degree n ≥ 1. Hence a is integral over R.
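The determinant argument is constructive: writing down the matrix of multiplication by a on a generating system and expanding det(δ_{jk}x − α_{kj}) yields an explicit monic polynomial with root a. A sketch for a = √2 + √3 acting on the generating system 1, √2, √3, √6 of Z[√2, √3]; the Faddeev–LeVerrier recursion used for the characteristic polynomial is my own choice of method, not the book's:

```python
from fractions import Fraction

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def charpoly(M):
    """Coefficients of det(x*I - M), leading coefficient first (Faddeev-LeVerrier)."""
    n = len(M)
    coeffs = [Fraction(1)]
    Mk = [[Fraction(i == j) for j in range(n)] for i in range(n)]  # starts as I
    for k in range(1, n + 1):
        Mk = matmul(M, Mk)
        c = -Fraction(sum(Mk[i][i] for i in range(n)), k)
        coeffs.append(c)
        for i in range(n):
            Mk[i][i] += c
    return coeffs

# Rows: a*1, a*sqrt2, a*sqrt3, a*sqrt6 in coordinates w.r.t. (1, sqrt2, sqrt3, sqrt6)
M = [[0, 1, 1, 0],
     [2, 0, 0, 1],
     [3, 0, 0, 1],
     [0, 3, 2, 0]]
# det(x*I - M) = x^4 - 10x^2 + 1, a monic integral polynomial with root sqrt2 + sqrt3
assert charpoly(M) == [1, 0, -10, 0, 1]
```

The resulting polynomial x⁴ − 10x² + 1 is exactly the monic integral equation that the proof guarantees, so √2 + √3 is an algebraic integer.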
Definition 20.2.5. A ring extension A|R is called an integral extension if each element of A is integral over R. A ring extension A|R is called finite if A, as an R-module, is finitely generated.
Recall that finite field extensions are algebraic extensions. As an immediate consequence of Theorem 20.2.4 we get the corresponding result for ring extensions.
Theorem 20.2.6. Each finite ring extension A|R is an integral extension.
Theorem 20.2.7. Let A be an R-algebra with an identity 1 ≠ 0. If a ∈ A then the following are equivalent:
(1) a is integral over R.
(2) The subalgebra R[a] is, as an R-module, finitely generated.
(3) There exists a subalgebra A' of A which contains a and which is, as an R-module, finitely generated.
A subalgebra of an algebra over R is a submodule which is also a subring.
Proof. (1) ⇒ (2): We have R[a] = {g(a) : g ∈ R[x]}. Let f(a) = 0 be an integral equation of a over R. Since f is monic, by the division algorithm, for each g ∈ R[x] there are h, r ∈ R[x] with g = hf + r and r = 0 or r ≠ 0 and deg(r) < deg(f) =: n. In either case g(a) = r(a), so we get that {1, a, ..., a^{n-1}} is a generating system for the R-module R[a].
(2) ⇒ (3): Take A' = R[a].
(3) ⇒ (1): Use Theorem 20.2.4 for A'.
For the remainder of this chapter all rings are commutative with an identity 1 ≠ 0.
Theorem 20.2.8. Let A|R and B|A be finite ring extensions. Then B|R is also finite.
Proof. From A = Re_1 + ··· + Re_n and B = Af_1 + ··· + Af_m we get B = Σ_{i=1}^{n} Σ_{j=1}^{m} Re_i f_j.
Theorem 20.2.9. Let A|R be a ring extension. Then the following are equivalent:
(1) There are finitely many elements a_1, ..., a_m in A, each integral over R, such that A = R[a_1, ..., a_m].
(2) A|R is finite.
Proof. (2) ⇒ (1): We only need to take for a_1, ..., a_m a generating system of A as an R-module; the result holds because A = Ra_1 + ··· + Ra_m, and each a_i is integral over R by Theorem 20.2.4.
(1) ⇒ (2): We use induction on m. If m = 0 then there is nothing to prove. Now let m ≥ 1, and assume that (1) holds. Define A' = R[a_1, ..., a_{m-1}]. Then A = A'[a_m], and a_m is integral over A'. A|A' is finite by Theorem 20.2.7. By the induction assumption, A'|R is finite. Then A|R is finite by Theorem 20.2.8.
Definition 20.2.10. Let A|R be a ring extension. Then the subset C = {a ∈ A : a is integral over R} ⊆ A is called the integral closure of R in A.
Theorem 20.2.11. Let A|R be a ring extension. Then the integral closure C of R in A is a subring of A with R ⊆ C.
Proof. R ⊆ C because each α ∈ R is a root of the polynomial x − α. Let a, b ∈ C. We consider the subalgebra R[a, b] of the R-algebra A. R[a, b]|R is finite by Theorem 20.2.9. Hence, by Theorem 20.2.4, all elements of R[a, b] are integral over R, that is, R[a, b] ⊆ C. In particular, a + b, a − b and ab are in C.
We extend to ring extensions the idea of a closure.
Definition 20.2.12. Let A|R be a ring extension. R is called integrally closed in A if R itself is its integral closure in A, that is, R = C, the integral closure of R in A.
Theorem 20.2.13. For each ring extension A|R the integral closure C of R in A is integrally closed in A.
Proof. Let a ∈ A be integral over C. Then a^n + α_{n-1}a^{n-1} + ··· + α_0 = 0 for some α_i ∈ C, n ≥ 1. Then a is also integral over the R-subalgebra A' = R[α_0, ..., α_{n-1}] of C; and A'|R is finite. Further A'[a]|A' is finite. Hence A'[a]|R is finite. By Theorem 20.2.4, then a ∈ A'[a] is already integral over R, that is, a ∈ C.
Theorem 20.2.14. Let A|R and B|A be ring extensions. If A|R and B|A are integral extensions then B|R is also an integral extension (and certainly vice versa).
Proof. Let C be the integral closure of R in B. We have A ⊆ C since A|R is integral. Together with B|A we also have that B|C is integral. By Theorem 20.2.13 we get that C is integrally closed in B. Hence, B = C.
We now consider integrally closed integral domains.
Definition 20.2.15. An integral domain R is called integrally closed if R is integrally closed in its quotient field K.
Theorem 20.2.16. Each unique factorization domain R is integrally closed.
Proof. Let α ∈ K and α = a/b with a, b ∈ R, b ≠ 0. Since R is a unique factorization domain we may assume that a and b are relatively prime. Let α be integral over R. Then we have over R an integral equation α^n + a_{n-1}α^{n-1} + ··· + a_0 = 0 for α. Multiplication with b^n gives a^n + b a_{n-1}a^{n-1} + ··· + b^n a_0 = 0. Hence b is a divisor of a^n. Since a and b are relatively prime in R, we have that b is a unit in R and, hence, α = a/b ∈ R.
Theorem 20.2.17. Let R be an integral domain and K its quotient field. Let L|K be a finite field extension. Let R be integrally closed, and let α ∈ L be integral over R. Then the minimal polynomial g ∈ K[x] of α over K has coefficients only in R.
Proof. Let g ∈ K[x] be the minimal polynomial of α over K (recall that g is monic by definition). Let K̄ be an algebraic closure of K. Then g(x) = (x − α_1) ··· (x − α_n) with α_1 = α over K̄. There are K-isomorphisms σ_i : K(α) → K̄ with σ_i(α) = α_i. Hence all α_i are also integral over R. Since all coefficients of g are polynomial expressions C_j(α_1, ..., α_n) in the α_i we get that all coefficients of g are integral over R (see Theorem 20.2.11). Now g ∈ R[x] because g ∈ K[x] and R is integrally closed.
Theorem 20.2.18. Let R be an integrally closed integral domain and K be its quotient field. Let f, g, h ∈ K[x] be monic polynomials over K with f = gh. If f ∈ R[x] then also g, h ∈ R[x].
Proof. Let L be the splitting field of f over K. Over L we have f(x) = (x − α_1) ··· (x − α_n). Since f is monic all α_k are integral over R (see the proof of Theorem 20.2.17). Since f = gh there are I, J ⊆ {1, ..., n} with g(x) = Π_{i∈I}(x − α_i) and h(x) = Π_{j∈J}(x − α_j). As polynomial expressions in the α_i, i ∈ I, and the α_j, j ∈ J, respectively, the coefficients of g and h, respectively, are integral over R. On the other hand all these coefficients are in K and R is integrally closed. Hence g, h ∈ R[x].
Theorem 20.2.19. Let L|R be an integral ring extension. If L is a field then R is also a field.
Proof. Let α ∈ R \ {0}. The element 1/α ∈ L satisfies an integral equation (1/α)^n + a_{n-1}(1/α)^{n-1} + ··· + a_0 = 0 over R. Multiplication with α^{n-1} gives 1/α = −a_{n-1} − a_{n-2}α − ··· − a_0 α^{n-1} ∈ R. Hence, R is a field.
20.3 Transcendental ﬁeld extensions
Recall that a transcendental number is an element of C that is not algebraic over Q. More generally if L|K is a field extension then an element α ∈ L is transcendental over K if it is not algebraic, that is, it is not a zero of any nonzero polynomial f(x) ∈ K[x]. Since finite extensions are algebraic, clearly L|K can contain transcendental elements only if [L : K] = ∞. However this is not sufficient. The field A of algebraic numbers is algebraic over Q but infinite dimensional over Q. We now extend the idea of a transcendental number to that of a transcendental extension.
Let K ⊆ L be fields, that is, L|K is a field extension. Let M be a subset of L. The algebraic cover of M in L is defined to be the algebraic closure H(M) of K(M) in L, that is, H_{L,K}(M) = H(M) = {α ∈ L : α algebraic over K(M)}. H(M) is a field with K ⊆ K(M) ⊆ H(M) ⊆ L. An element α ∈ L is called algebraically dependent on M (over K) if α ∈ H(M), that is, if α is algebraic over K(M).
The following are clear.
1. M ⊆ H(M),
2. M ⊆ M' ⇒ H(M) ⊆ H(M'), and
3. H(H(M)) = H(M).
Definition 20.3.1. (a) M is said to be algebraically independent (over K) if α ∉ H(M \ {α}) for all α ∈ M, that is, if each α ∈ M is transcendental over K(M \ {α}).
(b) M is said to be algebraically dependent (over K) if M is not algebraically independent.
The proofs of the statements in the following lemma are straightforward.
Lemma 20.3.2. (1) M is algebraically dependent if and only if there exists an α ∈ M which is algebraic over K(M \ {α}).
(2) Let α ∈ M. Then α ∈ H(M \ {α}) ⇒ H(M) = H(M \ {α}).
(3) If α ∉ M and α is algebraic over K(M) then M ∪ {α} is algebraically dependent.
(4) M is algebraically dependent if and only if there is a finite subset of M which is algebraically dependent.
(5) M is algebraically independent if and only if each finite subset of M is algebraically independent.
(6) M is algebraically independent if and only if the following holds: if α_1, ..., α_n are finitely many, pairwise different elements of M then the canonical homomorphism φ : K[x_1, ..., x_n] → L, f(x_1, ..., x_n) ↦ f(α_1, ..., α_n), is injective; or in other words: for all f ∈ K[x_1, ..., x_n] we have that f = 0 if f(α_1, ..., α_n) = 0, that is, there is no nontrivial algebraic relation between α_1, ..., α_n over K.
(7) Let M ⊆ L, α ∈ L. If M is algebraically independent and M ∪ {α} algebraically dependent then α ∈ H(M), that is, α is algebraically dependent on M.
(8) Let M ⊆ L, B ⊆ M. If B is maximal algebraically independent, that is, if for each α ∈ M \ B the set B ∪ {α} is algebraically dependent, then M ⊆ H(B), that is, each element of M is algebraic over K(B).
We will show that any field extension can be decomposed into an algebraic extension on top of a purely transcendental extension. We need the idea of a transcendence basis.
Definition 20.3.3. B ⊆ L is called a transcendence basis of the field extension L|K if the following two conditions are satisfied:
1. L = H(B), that is, the extension L|K(B) is algebraic.
2. B is algebraically independent over K.
Theorem 20.3.4. If B ⊆ L then the following are equivalent:
(1) B is a transcendence basis of L|K.
(2) If B ⊆ M ⊆ L with H(M) = L, then B is a maximal algebraically independent subset of M.
(3) There exists a subset M ⊆ L with H(M) = L which contains B as a maximal algebraically independent subset.
Proof. (1) ⇒ (2): Let α ∈ M \ B. We have to show that B ∪ {α} is algebraically dependent. But this is clear because α ∈ H(B) = L.
(2) ⇒ (3): We just take M = L.
(3) ⇒ (1): We have to show that H(B) = L. Certainly M ⊆ H(B). Hence, L = H(M) ⊆ H(H(B)) = H(B) ⊆ L.
We next show that any ﬁeld extension does have a transcendence basis.
Theorem 20.3.5. Each field extension L|K has a transcendence basis. More concretely: if there is a subset M ⊆ L such that L|K(M) is algebraic and if there is a subset C ⊆ M which is algebraically independent then there exists a transcendence basis B of L|K with C ⊆ B ⊆ M.
Proof. We have to extend C to a maximal algebraically independent subset B of M. By Theorem 20.3.4, such a B is a transcendence basis of L|K. If M is finite then such a B certainly exists. Now let M be infinite. We argue analogously as for the existence of a basis of a vector space, for instance with Zorn's lemma: if a partially ordered, nonempty set S is inductive, then there exist maximal elements in S. Here, a partially ordered, nonempty set S is said to be inductive if every totally ordered subset of S has an upper bound in S. The set N of all algebraically independent subsets of M which contain C is partially ordered with respect to "⊆", and N ≠ ∅ because C ∈ N. Given a nonempty totally ordered subset of N, for instance an ascending chain Y_1 ⊆ Y_2 ⊆ ··· in N, the union U of its members is also algebraically independent and so is an upper bound in N. Hence, there exists a maximal algebraically independent subset B ⊆ M with C ⊆ B.
Theorem 20.3.6. Let L|K be a field extension and M a subset of L for which L|K(M) is algebraic. Let C be an arbitrary subset of L which is algebraically independent over K. Then there exists a subset M' ⊆ M with C ∩ M' = ∅ such that C ∪ M' is a transcendence basis of L|K.
Proof. Take M ∪ C and define M' := B \ C in Theorem 20.3.5.
Theorem 20.3.7. Let B, B' be two transcendence bases of the field extension L|K. Then there is a bijection φ : B → B'. In other words, any two transcendence bases of L|K have the same cardinal number.
Proof. (a) If B is a transcendence basis of L|K and M is a subset of L such that L|K(M) is algebraic then we may write B = ∪_{α∈M} B_α with finite sets B_α. In particular, if B is infinite then the cardinal number of B is not bigger than the cardinal number of M.
(b) Let B and B' be two transcendence bases of L|K. If B and B' are both infinite then B and B' have the same cardinal number by (a) and the theorem of Schroeder–Bernstein [5]. We now prove Theorem 20.3.7 for the case that L|K has a finite transcendence basis. Let B be finite with n elements. Let C be an arbitrary subset of L, algebraically independent over K, with m elements. We show that m ≤ n. So assume m ≥ n and let α_1, ..., α_n be n pairwise different elements of C. We show by induction that for each integer k, 0 ≤ k ≤ n, there are subsets B ⊇ B_1 ⊇ ··· ⊇ B_k of B such that {α_1, ..., α_k} ∪ B_k is a transcendence basis of L|K and {α_1, ..., α_k} ∩ B_k = ∅. For k = 0 we take B_0 = B, and the statement holds. Assume now that the statement is correct for 0 ≤ k < n. By Theorems 20.3.4 and 20.3.5 there is a subset B_{k+1} of {α_1, ..., α_k} ∪ B_k such that {α_1, ..., α_{k+1}} ∪ B_{k+1} is a transcendence basis of L|K and {α_1, ..., α_{k+1}} ∩ B_{k+1} = ∅. Then necessarily B_{k+1} ⊆ B_k. Assume B_k = B_{k+1}. Then on one side, B_k ∪ {α_1, ..., α_{k+1}} is algebraically independent because B_k = B_{k+1}. On the other side, {α_1, ..., α_k} ∪ B_k is already a transcendence basis, so {α_1, ..., α_k} ∪ B_k ∪ {α_{k+1}} is algebraically dependent, which gives a contradiction. Hence, B_{k+1} ⊊ B_k. Now B_k has at most n − k elements, hence B_n = ∅, that is, {α_1, ..., α_n} = {α_1, ..., α_n} ∪ B_n is a transcendence basis of L|K. Because C is algebraically independent, any further element of C would be algebraic over K(α_1, ..., α_n), so we cannot have m > n. Hence m ≤ n; and B and B' have the same number of elements because B' must also be finite.
Since the cardinality of any transcendence basis for a field extension L|K is the same we can define the transcendence degree.
Definition 20.3.8. The transcendence degree trgd(L|K) of a field extension is the cardinal number of one (and hence of each) transcendence basis of L|K. A field extension L|K is called purely transcendental if L|K has a transcendence basis B with L = K(B).
We note the following facts:
(1) If L|K is purely transcendental and B = {α_1, ..., α_n} is a transcendence basis of L|K then L is K-isomorphic to the quotient field of the polynomial ring K[x_1, ..., x_n] of the independent indeterminates x_1, ..., x_n.
(2) K is algebraically closed in L if L|K is purely transcendental.
(3) By Theorem 20.3.4, the field extension L|K has an intermediate field J, K ⊆ J ⊆ L, such that J|K is purely transcendental and L|J is algebraic. Certainly J is not uniquely determined. For example take Q ⊆ J ⊆ Q(i, π); for J we may take J = Q(π) and also J = Q(iπ), for instance.
(4) trgd(R|Q) = trgd(C|Q) = card R, the cardinal number of R. This holds because the set of the algebraic numbers (over Q) is countable.
Theorem 20.3.9. Let L|K be a field extension and J an arbitrary intermediate field, K ⊆ J ⊆ L. Let B be a transcendence basis of J|K and B' a transcendence basis of L|J. Then B ∩ B' = ∅, and B ∪ B' is a transcendence basis of L|K. In particular:

    trgd(L|K) = trgd(L|J) + trgd(J|K).
Proof. (1) Assume α ∈ B ∩ B'. As an element of J, α is then algebraic over J(B' \ {α}). But this gives a contradiction because α ∈ B', and B' is algebraically independent over J.
(2) J|K(B) is an algebraic extension, and so is J(B')|K(B ∪ B') = K(B)(B'). Since the relation "algebraic extension" is transitive, we have that L|K(B ∪ B') is algebraic.
(3) Finally we have to show that B ∪ B' is algebraically independent over K. By Theorems 20.3.5 and 20.3.6 there is a subset B'' of B ∪ B' with B ∩ B'' = ∅ such that B ∪ B'' is a transcendence basis of L|K. We have B'' ⊆ B', and have to show that B' ⊆ B''. Assume that there is an α ∈ B' with α ∉ B''. Then α is algebraic over K(B ∪ B'') = K(B)(B'') and, hence, algebraic over J(B''). Since B'' ⊆ B' \ {α} and B' is algebraically independent over J, the element α is transcendental over J(B''), which gives a contradiction. Hence B'' = B'.
Theorem 20.3.10 (Noether's normalization theorem). Let K be a field and A = K[a_1, ..., a_n]. Then there exist elements u_1, ..., u_m, 0 ≤ m ≤ n, in A with the following properties:
(1) K[u_1, ..., u_m] is K-isomorphic to the polynomial ring K[x_1, ..., x_m] of the independent indeterminates x_1, ..., x_m.
(2) The ring extension A|K[u_1, ..., u_m] is an integral extension, that is, for each a ∈ A \ K[u_1, ..., u_m] there exists a monic polynomial f(x) = x^N + α_{N-1}x^{N-1} + ··· + α_0 ∈ K[u_1, ..., u_m][x] of degree N ≥ 1 with f(a) = a^N + α_{N-1}a^{N-1} + ··· + α_0 = 0. In particular A|K[u_1, ..., u_m] is finite.
Proof. Without loss of generality, let a_1, ..., a_n be pairwise different. We prove the theorem by induction on n. If n = 1 then there is nothing to show. Now, let n ≥ 2, and assume that the statement holds for n − 1. If there is no nontrivial algebraic relation f(a_1, ..., a_n) = 0 over K between the a_1, ..., a_n then there is nothing to show. Hence, let there exist a polynomial f ∈ K[x_1, ..., x_n] with f ≠ 0 and f(a_1, ..., a_n) = 0. Let

    f = Σ_{ν=(ν_1,...,ν_n)} c_ν x_1^{ν_1} ··· x_n^{ν_n}.

Let μ_2, μ_3, ..., μ_n be natural numbers which we specify later. Define b_2 = a_2 − a_1^{μ_2}, b_3 = a_3 − a_1^{μ_3}, ..., b_n = a_n − a_1^{μ_n}. Then a_i = b_i + a_1^{μ_i} for 2 ≤ i ≤ n; hence, f(a_1, b_2 + a_1^{μ_2}, ..., b_n + a_1^{μ_n}) = 0. We write R := K[x_1] and consider the polynomial ring R[y_2, ..., y_n] of the n − 1 independent indeterminates y_2, ..., y_n over R. In R[y_2, ..., y_n] we consider the polynomial f(x_1, y_2 + x_1^{μ_2}, ..., y_n + x_1^{μ_n}). We may rewrite this polynomial as

    Σ_{ν=(ν_1,...,ν_n)} c_ν x_1^{ν_1 + μ_2ν_2 + ··· + μ_nν_n} + g(x_1, y_2, ..., y_n)

with a polynomial g(x_1, y_2, ..., y_n) for which, as a polynomial in x_1 over K[y_2, ..., y_n], the degree in x_1 is smaller than the degree of

    Σ_{ν=(ν_1,...,ν_n)} c_ν x_1^{ν_1 + μ_2ν_2 + ··· + μ_nν_n},

provided that we may choose the μ_2, ..., μ_n in such a way that this really holds. We now specify the μ_2, ..., μ_n. We write μ := (1, μ_2, ..., μ_n) and define the scalar product μν = 1·ν_1 + μ_2ν_2 + ··· + μ_nν_n. Choose p ∈ N with p > deg(f) = max{ν_1 + ··· + ν_n : c_ν ≠ 0}. We now take μ = (1, p, p^2, ..., p^{n-1}). If ν = (ν_1, ..., ν_n) with c_ν ≠ 0 and ν' = (ν'_1, ..., ν'_n) with c_{ν'} ≠ 0 are different n-tuples then indeed μν ≠ μν' because ν_i, ν'_i < p for all i, 1 ≤ i ≤ n. This follows from the uniqueness of the p-adic expansion of a natural number. Hence, we may choose μ_2, ..., μ_n such that f(x_1, y_2 + x_1^{μ_2}, ..., y_n + x_1^{μ_n}) = c·x_1^N + h(x_1, y_2, ..., y_n) with c ∈ K, c ≠ 0, and h ∈ K[y_2, ..., y_n][x_1] of degree < N in x_1. If we divide by c and substitute a_1, b_2, ..., b_n for x_1, y_2, ..., y_n then we get an integral equation for a_1 over K[b_2, ..., b_n]. Therefore, the ring extension A = K[a_1, ..., a_n]|K[b_2, ..., b_n] is integral (see Theorem 20.2.9), since a_i = b_i + a_1^{μ_i} for 2 ≤ i ≤ n. By induction there exist elements u_1, ..., u_m in K[b_2, ..., b_n] with the following properties:
1. K[u_1, ..., u_m] is a polynomial ring in the m independent indeterminates u_1, ..., u_m, and
2. K[b_2, ..., b_n]|K[u_1, ..., u_m] is integral.
Hence, A|K[u_1, ..., u_m] is also integral by Theorem 20.2.14.
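The delicate point in the proof is that the weight vector μ = (1, p, p², ..., p^{n-1}) separates the exponent tuples of f; this is exactly the uniqueness of base-p expansions. A brute-force check for one small parameter choice (n = 3, p = 5 are my choices):

```python
from itertools import product

def weight(nu, p):
    """Scalar product of the exponent tuple nu with mu = (1, p, p^2, ...)."""
    return sum(v * p ** i for i, v in enumerate(nu))

n, p = 3, 5  # any p exceeding the total degree bound on the exponents works
tuples = list(product(range(p), repeat=n))
weights = {weight(nu, p) for nu in tuples}
# Distinct tuples with entries < p receive distinct weights: the map is injective,
# so exactly one monomial of f attains the top degree in x_1 after the substitution.
assert len(weights) == len(tuples)
```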
Corollary 20.3.11. Let L|K be a field extension. If L = K[a_1, ..., a_n] for a_1, ..., a_n ∈ L then L|K is algebraic.
Proof. By Theorem 20.3.10 we have that L contains a polynomial ring K[u_1, ..., u_m], 0 ≤ m ≤ n, of the m independent indeterminates u_1, ..., u_m as a subring for which L|K[u_1, ..., u_m] is integral. We claim that then K[u_1, ..., u_m] is already a field. To prove that, let a ∈ K[u_1, ..., u_m], a ≠ 0. The element a^{-1} ∈ L satisfies an integral equation (a^{-1})^N + α_{N-1}(a^{-1})^{N-1} + ··· + α_0 = 0 over K[u_1, ..., u_m] =: R. Hence, a^{-1} = −α_{N-1} − α_{N-2}a − ··· − α_0 a^{N-1} ∈ R. Therefore R is a field, which proves the claim. This is possible only for m = 0, and then L|K is integral, that is, here, algebraic.
20.4 The transcendence of e and π
Although we have shown that within C there are uncountably many transcendental numbers we have only shown that one particular number is transcendental. In this section we prove that the numbers e and π are transcendental. We start with e.
Theorem 20.4.1. e is a transcendental number, that is, transcendental over Q.
Proof. Let f(x) ∈ R[x] with deg(f) = m ≥ 1. Let z_1 ∈ C, z_1 ≠ 0, and γ : [0, 1] → C, γ(t) = tz_1. Let

    I(z_1) = ∫_γ e^{z_1 − z} f(z) dz = (∫_0^{z_1})_γ e^{z_1 − z} f(z) dz.

By (∫_0^{z_1})_γ we mean the integral from 0 to z_1 along γ. Recall that

    (∫_0^{z_1})_γ e^{z_1 − z} f(z) dz = −f(z_1) + e^{z_1} f(0) + (∫_0^{z_1})_γ e^{z_1 − z} f'(z) dz.

It follows then by repeated partial integration that

    (1) I(z_1) = e^{z_1} Σ_{j=0}^{m} f^{(j)}(0) − Σ_{j=0}^{m} f^{(j)}(z_1).

Let |f|(x) be the polynomial that we get if we replace the coefficients of f(x) by their absolute values. Since |e^{z_1 − z}| ≤ e^{|z_1 − z|} ≤ e^{|z_1|} for z on γ, we get

    (2) |I(z_1)| ≤ |z_1| e^{|z_1|} |f|(|z_1|).

Now assume that e is an algebraic number, that is,

    (3) q_0 + q_1 e + ··· + q_n e^n = 0 for some n ≥ 1 and integers q_0 ≠ 0, q_1, ..., q_n, where the greatest common divisor of q_0, q_1, ..., q_n is equal to 1.

We consider now the polynomial f(x) = x^{p-1}(x − 1)^p ··· (x − n)^p with p a sufficiently large prime number, and we consider I(z_1) with respect to this polynomial. Let

    J = q_0 I(0) + q_1 I(1) + ··· + q_n I(n).

From (1) and (3) we get that

    J = −Σ_{j=0}^{m} Σ_{k=0}^{n} q_k f^{(j)}(k),

where m = (n + 1)p − 1, since (q_0 + q_1 e + ··· + q_n e^n)(Σ_{j=0}^{m} f^{(j)}(0)) = 0.
Now, f^{(j)}(k) = 0 if j < p, k > 0, and also if j < p − 1 and k = 0; hence f^{(j)}(k) is an integer that is divisible by p! for all j, k except for j = p − 1, k = 0. Further, f^{(p-1)}(0) = (p − 1)!(−1)^{np}(n!)^p, and hence, if p > n, then f^{(p-1)}(0) is an integer divisible by (p − 1)! but not by p!.
It follows that J is a nonzero integer that is divisible by (p − 1)! if p > |q_0| and p > n. So let p > n, p > |q_0|, so that |J| ≥ (p − 1)!.
Now, |f|(k) ≤ (2n)^m. Together with (2) we then get that

    |J| ≤ |q_1| e |f|(1) + ··· + |q_n| n e^n |f|(n) ≤ c^p

for a number c independent of p. It follows that

    (p − 1)! ≤ |J| ≤ c^p,

that is,

    1 ≤ |J|/(p − 1)! ≤ c · c^{p-1}/(p − 1)!.

This gives a contradiction, since c^{p-1}/(p − 1)! → 0 as p → ∞. Therefore, e is transcendental.
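The arithmetic heart of this proof, that f^{(j)}(k) is divisible by p! for every pair (j, k) except (p − 1, 0), where it is divisible by (p − 1)! but not p!, can be verified exactly for small parameters. A sketch with exact integer polynomial arithmetic (the choices n = 2, p = 5 and the helper names are mine):

```python
from math import factorial

def poly_mul(f, g):
    prod = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            prod[i + j] += a * b
    return prod

def derivative(f):
    return [i * c for i, c in enumerate(f)][1:]

def evaluate(f, x):
    return sum(c * x ** i for i, c in enumerate(f))

n, p = 2, 5
# f(x) = x^(p-1) * (x - 1)^p * (x - 2)^p as an integer coefficient list
f = [0] * (p - 1) + [1]
for k in range(1, n + 1):
    for _ in range(p):
        f = poly_mul(f, [-k, 1])

m = (n + 1) * p - 1
g = f
for j in range(m + 1):
    for k in range(n + 1):
        v = evaluate(g, k)
        if (j, k) == (p - 1, 0):
            # divisible by (p-1)! but not by p!
            assert v % factorial(p - 1) == 0 and v % factorial(p) != 0
        else:
            assert v % factorial(p) == 0
    g = derivative(g)
```

Here f^{(p-1)}(0) = (p − 1)!(−1)^{np}(n!)^p = 24·32 = 768, which is divisible by 4! but not by 5!, exactly as the proof requires.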
We now move on to the transcendence of π. We first need the following lemma.
Lemma 20.4.2. Suppose α ∈ C is an algebraic number and f(x) = a_n x^n + ··· + a_0, n ≥ 1, a_n ≠ 0, and all a_i ∈ Z (that is, f(x) ∈ Z[x]), with f(α) = 0. Then a_n α is an algebraic integer.
Proof. We have

    a_n^{n-1} f(x) = a_n^n x^n + a_n^{n-1} a_{n-1} x^{n-1} + ··· + a_n^{n-1} a_0
                   = (a_n x)^n + a_{n-1}(a_n x)^{n-1} + ··· + a_n^{n-1} a_0 = g(a_n x) = g(y) ∈ Z[y],

where y = a_n x and g(y) is monic. Then g(a_n α) = 0, and hence a_n α is an algebraic integer.
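The transformation in this proof is completely explicit: from f = a_n x^n + ··· + a_0 one forms the monic g(y) = a_n^{n-1} f(y/a_n), whose coefficient of y^i is a_i a_n^{n-1-i}. A sketch (the helper name is mine; the example f = 2x² − 1 has root α = 1/√2, giving the algebraic integer 2α = √2):

```python
def monic_transform(coeffs):
    """From f = a_n x^n + ... + a_0 (coefficient list, constant term first)
    build g(y) = a_n^(n-1) * f(y / a_n), monic with integer coefficients,
    whose roots are a_n times the roots of f."""
    a = coeffs[-1]
    n = len(coeffs) - 1
    # coefficient of y^i is a_i * a_n^(n-1-i); the leading coefficient is 1
    return [c * a ** (n - 1 - i) for i, c in enumerate(coeffs[:-1])] + [1]

f = [-1, 0, 2]               # f(x) = 2x^2 - 1, root alpha = 1/sqrt(2)
g = monic_transform(f)
assert g == [-2, 0, 1]       # g(y) = y^2 - 2, so 2*alpha = sqrt(2) is an algebraic integer
# numerical check: g(2*alpha) = 0
alpha = 0.5 ** 0.5
assert abs(sum(c * (2 * alpha) ** i for i, c in enumerate(g))) < 1e-12
```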
Theorem 20.4.3. π is a transcendental number, that is, transcendental over Q.
Proof. Assume that π is an algebraic number. Then θ = iπ is also algebraic. Let θ_1 = θ, θ_2, ..., θ_d be the conjugates of θ. Suppose

    p(x) = q_0 + q_1 x + ··· + q_d x^d ∈ Z[x], q_d > 0, gcd(q_0, ..., q_d) = 1,

is the integral minimal polynomial of θ over Q. Then θ_1 = θ, θ_2, ..., θ_d are the zeros of this polynomial. Let t = q_d. Then from Lemma 20.4.2, tθ_i is an algebraic integer for all i. From e^{iπ} + 1 = 0 and θ_1 = iπ we get that

    (1 + e^{θ_1})(1 + e^{θ_2}) ··· (1 + e^{θ_d}) = 0.
The product on the left side can be written as a sum of 2^d terms e^φ, where φ = ε_1θ_1 + ··· + ε_dθ_d, ε_j = 0 or 1. Let n be the number of exponents ε_1θ_1 + ··· + ε_dθ_d that are nonzero. Call these α_1, ..., α_n. We then have an equation

    q + e^{α_1} + ··· + e^{α_n} = 0

with q = 2^d − n > 0. Recall that all tα_i are algebraic integers, and we consider the polynomial

    (4) f(x) = t^{np} x^{p-1} (x − α_1)^p ··· (x − α_n)^p
with p a sufficiently large prime number. We have f(x) ∈ R[x], since the α_i are algebraic numbers and the elementary symmetric polynomials in α_1, ..., α_n are rational numbers.
Let I(z_1) be defined as in the proof of Theorem 20.4.1, and now let

    J = I(α_1) + ··· + I(α_n).

From (1) in the proof of Theorem 20.4.1 and (4) we get

    J = −q Σ_{j=0}^{m} f^{(j)}(0) − Σ_{j=0}^{m} Σ_{k=1}^{n} f^{(j)}(α_k),

with m = (n + 1)p − 1.
Now, Σ_{k=1}^{n} f^{(j)}(α_k) is a symmetric polynomial in tα_1, ..., tα_n with integer coefficients since the tα_i are algebraic integers. It follows from the main theorem on symmetric polynomials that Σ_{j=0}^{m} Σ_{k=1}^{n} f^{(j)}(α_k) is an integer. Further, f^{(j)}(α_k) = 0 for j < p. Hence Σ_{j=0}^{m} Σ_{k=1}^{n} f^{(j)}(α_k) is an integer divisible by p!.
Now, f^{(j)}(0) is an integer divisible by p! if j ≠ p − 1, and f^{(p-1)}(0) = (p − 1)!(−t)^{np}(α_1 ··· α_n)^p is an integer divisible by (p − 1)! but not divisible by p! if p is sufficiently large. In particular, this is true if p > |t^n(α_1 ··· α_n)| and also p > q.
From (2) in the proof of Theorem 20.4.1 we get that
[J[ _ [˛
1
[e
[˛
1
[
[¡ [([˛
1
[) ÷ ÷[˛
n
[e
[˛
n
[
[¡ [([˛
n
[) _ c
]
for some number c independent of ¸.
As in the proof of Theorem 20.4.1, this gives us
(¸ ÷ 1)Š _ [J[ _ c
]
.
that is,
1 _
[J[
(¸ ÷ 1)Š
_ c
c
]1
(¸ ÷ 1)Š
.
This, as before, gives a contradiction, since
c
p1
(]1)Š
÷ 0 as ¸ ÷ o. Therefore, ¬
is transcendental.
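The two divisibility facts for f^{(j)}(0) can be checked in a small concrete instance. The following sketch uses hypothetical sample values t = 1, n = 2, α_1 = 1, α_2 = 2, p = 3 (chosen only for illustration, not taken from the proof) and verifies that f^{(p−1)}(0) = (p − 1)!(−t)^{np}(α_1 α_2)^p = 16, while every other derivative of f at 0 is divisible by p! = 6.

```python
from math import factorial
from sympy import symbols, diff, expand

x = symbols('x')
t, n, p = 1, 2, 3        # hypothetical small sample values
a1, a2 = 1, 2            # stand-ins for alpha_1, alpha_2
f = expand(t**(n*p) * x**(p - 1) * (x - a1)**p * (x - a2)**p)

# f^{(p-1)}(0) = (p-1)! * (-t)^(n*p) * (a1*a2)^p = 2 * 1 * 8 = 16
d2 = diff(f, x, p - 1).subs(x, 0)
print(d2)   # 16: divisible by (p-1)! = 2, but not by p! = 6

# all other derivatives at 0 (up to the degree (n+1)p - 1) are divisible by p!
others = [diff(f, x, j).subs(x, 0) for j in range((n + 1)*p) if j != p - 1]
print(all(v % factorial(p) == 0 for v in others))  # True
```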
20.5 Exercises
1. A polynomial p(x) ∈ ℤ[x] is primitive if the GCD of all its coefficients is 1. Prove the following:
(i) If f(x) and g(x) are primitive then so is f(x)g(x).
(ii) If f(x) ∈ ℤ[x] is monic then it is primitive.
(iii) If f(x) ∈ ℚ[x] then there exists a rational number c such that f(x) = c f_1(x) with f_1(x) primitive.
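Part (i) is Gauss's lemma; a quick sanity check of the multiplicativity of the content (the GCD of the coefficients) can be run with sympy, whose Poly.primitive splits a polynomial into content times primitive part:

```python
from sympy import Poly, symbols

x = symbols('x')
f = Poly(6*x**2 + 10*x + 15, x)   # gcd(6, 10, 15) = 1, so f is primitive
g = Poly(4*x + 6, x)              # content gcd(4, 6) = 2, so g is not primitive
cf, _ = f.primitive()             # content of f
cg, _ = g.primitive()             # content of g
ch, _ = (f * g).primitive()       # content of the product
print(cf, cg, ch)                 # 1 2 2: the content is multiplicative
```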
2. Let d be a squarefree integer and K = ℚ(√d) be a quadratic field. Let R_K be the subring of K of the algebraic integers of K. Show that
(i) R_K = {m + n√d : m, n ∈ ℤ} if d ≡ 2 (mod 4) or d ≡ 3 (mod 4). {1, √d} is an integral basis for R_K.
(ii) R_K = {m + n(1 + √d)/2 : m, n ∈ ℤ} if d ≡ 1 (mod 4). {1, (1 + √d)/2} is an integral basis for R_K.
(iii) If d < 0 then there are only finitely many units in R_K.
(iv) If d > 0 then there are infinitely many units in R_K.
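The dichotomy in (i) and (ii) can be probed by computing minimal polynomials: an element is an algebraic integer exactly when its minimal polynomial over ℚ is monic with integer coefficients. A small sympy check, with d = 5 ≡ 1 (mod 4) and d = 2 ≡ 2 (mod 4) as sample values:

```python
from sympy import sqrt, symbols, minimal_polynomial

y = symbols('y')
# d = 5 = 1 mod 4: (1 + sqrt(5))/2 is an algebraic integer
m1 = minimal_polynomial((1 + sqrt(5))/2, y)
print(m1)   # y**2 - y - 1: monic over Z
# d = 2 = 2 mod 4: (1 + sqrt(2))/2 is not an algebraic integer
m2 = minimal_polynomial((1 + sqrt(2))/2, y)
print(m2)   # 4*y**2 - 4*y - 1: not monic
```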
3. Let K = ℚ(α) with α³ + α + 1 = 0 and R_K the subring of the algebraic integers in K. Show that
(i) {1, α, α²} is an integral basis for R_K.
(ii) R_K = ℤ[α].
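One standard route to (i) and (ii) uses the discriminant: if the discriminant of the minimal polynomial of α is squarefree, then ℤ[α] is already the full ring of integers. For a cubic of the shape x³ + x + 1 (taken here as a sample minimal polynomial) the discriminant is −4·1³ − 27·1² = −31, which is squarefree:

```python
from sympy import symbols, discriminant

x = symbols('x')
# discriminant of x**3 + p*x + q is -4*p**3 - 27*q**2
d = discriminant(x**3 + x + 1, x)
print(d)   # -31: squarefree, so Z[alpha] is the full ring of integers
```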
4. Let A|R be an integral ring extension. If A is an integral domain and R a field then A is also a field.
5. Let A|R be an integral extension. Let P be a prime ideal of A and p be a prime ideal of R such that P ∩ R = p. Show that
(i) If p is maximal in R then P is maximal in A. (Hint: consider A/P.)
(ii) If P′ is another prime ideal of A with P′ ∩ R = p and P′ ⊆ P then P = P′. (Hint: we may assume that A is an integral domain and P ∩ R = {0}; otherwise go to A/P.)
6. Show that for a field extension L|K the following are equivalent:
(i) [L : K(T)] < ∞ for each transcendence basis T of L|K.
(ii) trgd(L|K) < ∞ and [L : K(T)] < ∞ for each transcendence basis T of L|K.
(iii) There is a finite transcendence basis T of L|K with [L : K(T)] < ∞.
(iv) There are finitely many x_1, …, x_n ∈ L with L = K(x_1, …, x_n).
7. Let L|K be a field extension. If L|K is purely transcendental then K is algebraically closed in L.
Chapter 21
The Hilbert Basis Theorem and the Nullstellensatz
21.1 Algebraic Geometry
An extremely important application of abstract algebra and an application central to
all of mathematics is the subject of algebraic geometry. As the name suggests this
is the branch of mathematics that uses the techniques of abstract algebra to study
geometric problems. Classically algebraic geometry involved the study of algebraic
curves which roughly are the sets of zeros of a polynomial or set of polynomials
in several variables over a field. For example, in two variables a real algebraic plane curve is the set of zeros in ℝ² of a polynomial p(x, y) ∈ ℝ[x, y]. The common planar
curves such as parabolas and the other conic sections are all plane algebraic curves.
In actual practice plane algebraic curves are usually considered over the complex
numbers and are projectivized.
The algebraic theory that deals most directly with algebraic geometry is called com
mutative algebra. This is the study of commutative rings, ideals in commutative rings
and modules over commutative rings. A large portion of this book has dealt with
commutative algebra.
Although we will not consider the geometric aspects of algebraic geometry in gen
eral we will close the book by introducing some of the basic algebraic ideas that are
crucial to the subject. These include the concept of an algebraic variety or algebraic
set and its radical. We also state and prove two of the cornerstones of the theory as
applied to commutative algebra – the Hilbert basis theorem and the Nullstellensatz.
In this chapter we consider a fixed field extension C|K and the polynomial ring K[x_1, …, x_n] in the n independent indeterminates x_1, …, x_n. Again, in this chapter we often use letters a, b, m, p, P, A, Q, … for ideals in rings.
21.2 Algebraic Varieties and Radicals
We ﬁrst deﬁne the concept of an algebraic variety.
Definition 21.2.1. If M ⊆ K[x_1, …, x_n] then we define

N(M) = {(α_1, …, α_n) ∈ C^n : f(α_1, …, α_n) = 0 for all f ∈ M}.

α = (α_1, …, α_n) ∈ N(M) is called a zero (Nullstelle) of M in C^n; and N(M) is called the zero set of M in C^n. If we want to mention C then we write N(M) = N_C(M). A subset V ⊆ C^n of the form V = N(M) for some M ⊆ K[x_1, …, x_n] is called an algebraic variety or (affine) algebraic set of C^n over K, or just an algebraic K-set of C^n.
For any subset N of C^n we can reverse the procedure and consider the set of polynomials that vanish on N.
Definition 21.2.2. Suppose that N ⊆ C^n. Then

I(N) = {f ∈ K[x_1, …, x_n] : f(α_1, …, α_n) = 0 for all (α_1, …, α_n) ∈ N}.

Instead of f ∈ I(N) we also say that f vanishes on N (over K). If we want to mention K then we write I(N) = I_K(N).
What is important is that the set I(N) forms an ideal. The proof is straightforward.
Theorem 21.2.3. For any subset N ⊆ C^n the set I(N) is an ideal in K[x_1, …, x_n]; it is called the vanishing ideal of N ⊆ C^n in K[x_1, …, x_n].
The following result examines the relationship between subsets of C^n and their vanishing ideals.
Theorem 21.2.4. The following properties hold:
(1) M ⊆ M′ ⟹ N(M′) ⊆ N(M),
(2) If a = (M) is the ideal in K[x_1, …, x_n] generated by M, then N(M) = N(a),
(3) N ⊆ N′ ⟹ I(N′) ⊆ I(N),
(4) M ⊆ IN(M) for all M ⊆ K[x_1, …, x_n],
(5) N ⊆ NI(N) for all N ⊆ C^n,
(6) If (a_i)_{i∈J} is a family of ideals in K[x_1, …, x_n] then ⋂_{i∈J} N(a_i) = N(∑_{i∈J} a_i). Here ∑_{i∈J} a_i is the ideal in K[x_1, …, x_n] generated by the union ⋃_{i∈J} a_i,
(7) If a, b are ideals in K[x_1, …, x_n] then N(a) ∪ N(b) = N(ab) = N(a ∩ b). Here ab is the ideal in K[x_1, …, x_n] generated by all products fg where f ∈ a and g ∈ b,
(8) N(M) = NIN(M) for all M ⊆ K[x_1, …, x_n],
(9) V = NI(V) for all algebraic K-sets V,
(10) I(N) = INI(N) for all N ⊆ C^n.
Proof. The proofs are straightforward. Hence, we prove only (7), (8) and (9). The rest can be left as an exercise for the reader.
Proof of (7): Since ab ⊆ a ∩ b ⊆ a, b we have by (1) the inclusions N(a) ∪ N(b) ⊆ N(a ∩ b) ⊆ N(ab). Hence, we have to show that N(ab) ⊆ N(a) ∪ N(b).
Let α = (α_1, …, α_n) ∈ C^n be a zero of ab but not a zero of a. Then there is an f ∈ a with f(α) ≠ 0, and hence for all g ∈ b we get f(α)g(α) = (fg)(α) = 0 and, hence, g(α) = 0. Therefore α ∈ N(b).
Proof of (8) and (9): Let M ⊆ K[x_1, …, x_n]. Then, on the one side, M ⊆ IN(M) by (4) and further NIN(M) ⊆ N(M) by (1). On the other side, N(M) ⊆ NIN(M) by (5). Therefore N(M) = NIN(M) for all M ⊆ K[x_1, …, x_n].
Now, the algebraic K-sets of C^n are precisely the sets of the form V = N(M). Hence, V = NI(V).
We make the following agreement: if a is an ideal in K[x_1, …, x_n] then we write a ⊴ K[x_1, …, x_n].
If a ⊴ K[x_1, …, x_n] then we do not have a = IN(a) in general, that is, a is in general not equal to the vanishing ideal of its zero set in C^n. The reason for this is that not each ideal a occurs as a vanishing ideal of some N ⊆ C^n. If a = I(N) then we must have:

f^m ∈ a, m ≥ 1 ⟹ f ∈ a. (∗)

Hence, for instance, if a = (x_1², …, x_n²) ⊴ K[x_1, …, x_n] then a is not of the form a = I(N) for some N ⊆ C^n. We now define the radical of an ideal.
Definition 21.2.5. Let R be a commutative ring, and a ⊴ R be an ideal in R. Then √a = {f ∈ R : f^m ∈ a for some m ∈ ℕ} is an ideal in R. √a is called the radical of a (in R). a is said to be reduced if √a = a.
We note that √0 is called the nilradical of R; it contains exactly the nilpotent elements of R, that is, the elements a ∈ R with a^m = 0 for some m ∈ ℕ.
Let a ⊴ R be an ideal in R and π : R → R/a the canonical mapping. Then √a is exactly the preimage of the nilradical of R/a.
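Membership in an ideal and in its radical can be tested computationally via Gröbner bases; sympy's groebner object provides a contains method for ideal membership. A small sketch with the ideal a = (x_1², x_2²) mentioned above, whose radical is (x_1, x_2):

```python
from sympy import symbols, groebner

x1, x2 = symbols('x1 x2')
# a = (x1**2, x2**2); its radical is (x1, x2)
G = groebner([x1**2, x2**2], x1, x2, order='lex')
print(G.contains(x1))      # False: x1 does not lie in a ...
print(G.contains(x1**2))   # True:  ... but x1**2 does, so x1 lies in the radical
```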
21.3 The Hilbert Basis Theorem
In this section we show that if K is a field then each ideal a ⊴ K[x_1, …, x_n] is finitely generated. This is the content of the Hilbert basis theorem. This has as an important consequence that any algebraic variety of C^n is the zero set of only finitely many polynomials.
The Hilbert basis theorem follows directly from the following Theorem 21.3.2.
Before we state this theorem we need a deﬁnition.
Definition 21.3.1. Let R be a commutative ring with an identity 1 ≠ 0. R is said to be noetherian if each ideal in R is generated by finitely many elements, that is, each ideal in R is finitely generated.
Theorem 21.3.2. Let R be a noetherian ring. Then the polynomial ring R[x] over R is also noetherian.
Proof. For 0 ≠ f ∈ R[x] we denote by deg(f) the degree of f. Let a ⊴ R[x] be an ideal in R[x]. Assume that a is not finitely generated. Then, especially, a ≠ 0. We construct a sequence of polynomials f_k ∈ a such that the highest coefficients a_k generate an ideal in R which is not finitely generated. This then produces a contradiction, and, hence, a is in fact finitely generated. Choose f_1 ∈ a, f_1 ≠ 0, so that deg(f_1) = n_1 is minimal.
If k ≥ 1 then choose f_{k+1} ∈ a, f_{k+1} ∉ (f_1, …, f_k), so that deg(f_{k+1}) = n_{k+1} is minimal for the polynomials in a \ (f_1, …, f_k). This is possible because we assume that a is not finitely generated. We have n_k ≤ n_{k+1} by our construction. Further

(a_1, …, a_k) ⊊ (a_1, …, a_k, a_{k+1}).

Proof of this claim: Assume that (a_1, …, a_k) = (a_1, …, a_k, a_{k+1}). Then a_{k+1} ∈ (a_1, …, a_k). Hence, there are b_i ∈ R with a_{k+1} = ∑_{i=1}^{k} a_i b_i. Let g(x) = ∑_{i=1}^{k} b_i f_i(x) x^{n_{k+1} − n_i}; hence, g ∈ (f_1, …, f_k) and g = a_{k+1} x^{n_{k+1}} + ⋯. Therefore deg(f_{k+1} − g) < n_{k+1} and f_{k+1} − g ∉ (f_1, …, f_k), which contradicts the choice of f_{k+1}. This proves the claim.
Hence (a_1, …, a_k) ⊊ (a_1, …, a_k, a_{k+1}) for all k, which contradicts the fact that R is noetherian. Hence a is finitely generated.
We now have the Hilbert basis theorem.
Theorem 21.3.3 (Hilbert basis theorem). Let K be a field. Then each ideal a ⊴ K[x_1, …, x_n] is finitely generated, that is, a = (f_1, …, f_m) for finitely many f_1, …, f_m ∈ K[x_1, …, x_n].
Corollary 21.3.4. If C|K is a field extension then each algebraic K-set V of C^n is already the zero set of only finitely many polynomials f_1, …, f_m ∈ K[x_1, …, x_n]:

V = {(α_1, …, α_n) ∈ C^n : f_i(α_1, …, α_n) = 0 for i = 1, …, m}.

Further we write V = N(f_1, …, f_m).
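For K[x_1, …, x_n] such a finite generating set can even be computed: a Gröbner basis of an ideal is a finite basis of the kind the corollary guarantees, and it generates the same ideal and hence cuts out the same zero set. A short sympy illustration with a sample ideal:

```python
from sympy import symbols, groebner

x, y = symbols('x y')
# sample ideal (x**2 + y**2 - 1, x - y) in Q[x, y]; its Groebner basis is
# a finite generating set for the same ideal
G = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')
print(list(G.exprs))
```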
21.4 The Hilbert Nullstellensatz
Vanishing ideals of subsets of C^n are necessarily reduced. But for an arbitrary field C the condition

f^m ∈ a, m ≥ 1 ⟹ f ∈ a

is, in general, not sufficient for a ⊴ K[x_1, …, x_n] to be a vanishing ideal of a subset of C^n.
For example let n ≥ 2, K = C = ℝ and a = (x_1² + ⋯ + x_n²) ⊴ ℝ[x_1, …, x_n]. a is a prime ideal in ℝ[x_1, …, x_n] because x_1² + ⋯ + x_n² is a prime element in ℝ[x_1, …, x_n]. Hence, a is reduced. But, on the other side, N(a) = {0} and I({0}) = (x_1, …, x_n). Therefore a is not of the form I(N) for some N ⊆ C^n. If this were the case, then a = I(N) = INI(N) = I({0}) = (x_1, …, x_n) because of Theorem 21.2.4(10), which gives a contradiction.
The Nullstellensatz of Hilbert, which we give in two forms, shows that if a is reduced, that is, a = √a, then IN(a) = a.
Theorem 21.4.1 (Hilbert's Nullstellensatz, first form). Let C|K be a field extension with C algebraically closed. If a ⊴ K[x_1, …, x_n] then IN(a) = √a. Moreover, if a is reduced, that is, a = √a, then IN(a) = a. Therefore N defines a bijective map between the set of reduced ideals in K[x_1, …, x_n] and the set of the algebraic K-sets in C^n; and I defines the inverse map.
The proof follows from:
Theorem 21.4.2 (Hilbert's Nullstellensatz, second form). Let C|K be a field extension with C algebraically closed. Let a ⊴ K[x_1, …, x_n] with a ≠ K[x_1, …, x_n]. Then there exists an α = (α_1, …, α_n) ∈ C^n with f(α) = 0 for all f ∈ a, that is, N_C(a) ≠ ∅.
Proof. Since a ≠ K[x_1, …, x_n] there exists a maximal ideal m ⊴ K[x_1, …, x_n] with a ⊆ m. We consider the canonical map π : K[x_1, …, x_n] → K[x_1, …, x_n]/m. Let β_i = π(x_i) for i = 1, …, n. Then K[x_1, …, x_n]/m = K[β_1, …, β_n] =: L. Since m is maximal, L is a field. Moreover L|K is algebraic by Corollary 20.3.11. Hence there exists a K-homomorphism σ : K[β_1, …, β_n] → C (C is algebraically closed). Let α_i = σ(β_i); we have f(α_1, …, α_n) = 0 for all f ∈ m. Since a ⊆ m this holds also for all f ∈ a. Hence we get a zero (α_1, …, α_n) of a in C^n.
Proof of Theorem 21.4.1. Let a ⊴ K[x_1, …, x_n], and let f ∈ IN(a). We have to show that f^m ∈ a for some m ∈ ℕ. If f = 0 then there is nothing to show.
Now, let f ≠ 0. We consider K[x_1, …, x_n] as a subring of the polynomial ring K[x_1, …, x_n, x_{n+1}] in the n + 1 independent indeterminates x_1, …, x_n, x_{n+1}. In K[x_1, …, x_n, x_{n+1}] we consider the ideal ã = (a, 1 − x_{n+1} f) ⊴ K[x_1, …, x_n, x_{n+1}], generated by a and 1 − x_{n+1} f.
Case 1: ã ≠ K[x_1, …, x_n, x_{n+1}].
Then ã has a zero (β_1, …, β_n, β_{n+1}) in C^{n+1} by Theorem 21.4.2. Hence, for (β_1, …, β_n, β_{n+1}) ∈ N(ã) we have the equations:
(1) g(β_1, …, β_n) = 0 for all g ∈ a and
(2) f(β_1, …, β_n) β_{n+1} = 1.
From (1) we get (β_1, …, β_n) ∈ N(a). Hence, especially, f(β_1, …, β_n) = 0 for our f ∈ IN(a). But this contradicts (2). Therefore Case 1 is not possible. Therefore we have
Case 2: ã = K[x_1, …, x_n, x_{n+1}], that is, 1 ∈ ã. Then there exists a relation of the form

1 = ∑_i h_i g_i + h(1 − x_{n+1} f) for some g_i ∈ a and h_i, h ∈ K[x_1, …, x_n, x_{n+1}]. (3)

The map x_i ↦ x_i for 1 ≤ i ≤ n and x_{n+1} ↦ 1/f defines a homomorphism φ : K[x_1, …, x_n, x_{n+1}] → K(x_1, …, x_n), the quotient field of K[x_1, …, x_n]. From (3) we get a relation 1 = ∑_i h_i(x_1, …, x_n, 1/f) g_i(x_1, …, x_n) in K(x_1, …, x_n). If we multiply this with a suitable power f^m of f we get f^m = ∑_i h̄_i(x_1, …, x_n) g_i(x_1, …, x_n) for some polynomials h̄_i ∈ K[x_1, …, x_n]. Since g_i ∈ a we get f^m ∈ a.
21.5 Applications and Consequences of Hilbert’s Theorems
Theorem 21.5.1. Each nonempty set of algebraic K-sets in C^n contains a minimal element. In other words: For each descending chain

V_1 ⊇ V_2 ⊇ ⋯ ⊇ V_m ⊇ V_{m+1} ⊇ ⋯ (1)

of algebraic K-sets V_i in C^n there exists an integer m such that V_m = V_{m+1} = V_{m+2} = ⋯, or equivalently, every strictly descending chain V_1 ⊋ V_2 ⊋ ⋯ of algebraic K-sets V_i in C^n is finite.
Proof. We apply the operator I, that is, we pass to the vanishing ideals. This gives an ascending chain of ideals

I(V_1) ⊆ I(V_2) ⊆ ⋯ ⊆ I(V_m) ⊆ I(V_{m+1}) ⊆ ⋯. (2)

The union of the I(V_i) is an ideal in K[x_1, …, x_n], and, hence, by Theorem 21.3.3 finitely generated. Hence, there is an m with I(V_m) = I(V_{m+1}) = I(V_{m+2}) = ⋯. Now we apply the operator N and get the desired result because V_i = NI(V_i) by Theorem 21.2.4(9).
Definition 21.5.2. An algebraic K-set V ≠ ∅ in C^n is called irreducible if it is not describable as a union V = V_1 ∪ V_2 of two algebraic K-sets V_i ≠ ∅ in C^n with V_i ≠ V for i = 1, 2. An irreducible algebraic K-set in C^n is also called a K-variety in C^n.
Theorem 21.5.3. An algebraic K-set V ≠ ∅ in C^n is irreducible if and only if its vanishing ideal I_K(V) = I(V) is a prime ideal of R = K[x_1, …, x_n] with I(V) ≠ R.
Proof. (1) Let V be irreducible. Let fg ∈ I(V). Then V = NI(V) ⊆ N(fg) = N(f) ∪ N(g), hence V = V_1 ∪ V_2 with the algebraic K-sets V_1 = N(f) ∩ V and
V_2 = N(g) ∩ V. Now V is irreducible, hence, V = V_1 or V = V_2; say V = V_1. Then V ⊆ N(f), and therefore f ∈ IN(f) ⊆ I(V). Since V ≠ ∅ we have further 1 ∉ I(V), that is, I(V) ≠ R.
(2) Let I(V) ⊴ R with I(V) ≠ R be a prime ideal. Let V = V_1 ∪ V_2, V_1 ≠ V, with algebraic K-sets V_i in C^n. First,

I(V) = I(V_1 ∪ V_2) = I(V_1) ∩ I(V_2) ⊇ I(V_1)I(V_2), (∗)

where I(V_1)I(V_2) is the ideal generated by all products fg with f ∈ I(V_1), g ∈ I(V_2). We have I(V_1) ≠ I(V) because otherwise V_1 = NI(V_1) = NI(V) = V, contradicting V_1 ≠ V. Hence, there is an f ∈ I(V_1) with f ∉ I(V). Now, I(V) ≠ R is a prime ideal, and hence, necessarily I(V_2) ⊆ I(V) by (∗). It follows that V ⊆ V_2, that is, V = V_2, and, hence, V is irreducible.
Note that the affine space K^n is, as the zero set of the zero polynomial 0, itself an algebraic K-set in K^n. If K is infinite then I(K^n) = {0} and, hence, K^n is irreducible by Theorem 21.5.3. Moreover, if K is infinite then K^n cannot be written as a union of finitely many proper algebraic K-subsets. If K is finite then K^n is not irreducible.
Further, each algebraic K-set V in C^n is also an algebraic C-set in C^n. If V is an irreducible algebraic K-set in C^n then, in general, it is not an irreducible algebraic C-set in C^n.
Theorem 21.5.4. Each algebraic K-set V in C^n can be written as a finite union V = V_1 ∪ V_2 ∪ ⋯ ∪ V_r of irreducible algebraic K-sets V_i in C^n. If here V_i ⊄ V_k for all pairs (i, k) with i ≠ k then this presentation is unique, up to the ordering of the V_i; and then the V_i are called the irreducible K-components of V.
Proof. Let A be the set of all algebraic K-sets in C^n which cannot be presented as a finite union of irreducible algebraic K-sets in C^n.
Assume that A ≠ ∅. By Theorem 21.5.1 there is a minimal element V in A. This V is not irreducible, otherwise we would have a presentation as desired. Hence there exists a presentation V = V_1 ∪ V_2 with algebraic K-sets V_i which are strictly smaller than V. By the minimality of V, both V_1 and V_2 have a presentation as desired, and hence V has one, too, which gives a contradiction. Hence, A = ∅.
Now suppose that V = V_1 ∪ ⋯ ∪ V_r = W_1 ∪ ⋯ ∪ W_s are two presentations of the desired form. For each V_i we have a presentation V_i = (V_i ∩ W_1) ∪ ⋯ ∪ (V_i ∩ W_s). Each V_i ∩ W_j is an algebraic K-set (see Theorem 21.2.4). Since V_i is irreducible, we get that there is a W_j with V_i = V_i ∩ W_j, that is, V_i ⊆ W_j. Analogously, for this W_j there is a V_k with W_j ⊆ V_k. Altogether V_i ⊆ W_j ⊆ V_k. But V_p ⊄ V_q if p ≠ q. Hence, from V_i ⊆ W_j ⊆ V_k we get i = k and therefore V_i = W_j; that means, for each V_i there is a W_j with V_i = W_j. Analogously, for each W_k there is a V_l with W_k = V_l. This proves the theorem.
Example 21.5.5. 1. Let M = {gf} ⊆ ℝ[x, y] with g(x, y) = x² + y² − 1 and f(x, y) = x² + y² − 2. Then N(M) = V = V_1 ∪ V_2 where V_1 = N(g) and V_2 = N(f), and V is not irreducible.
2. Let M = {f} ⊆ ℝ[x, y] with f(x, y) = xy − 1; f is irreducible in ℝ[x, y], therefore the ideal (f) is a prime ideal in ℝ[x, y]. Hence V = N(f) is irreducible.
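The decomposition in part 1 mirrors factoring the generator: the zero set of a product is the union of the zero sets of the factors, and sympy's factor recovers them from the expanded polynomial, while xy − 1 stays in one piece:

```python
from sympy import symbols, factor, expand

x, y = symbols('x y')
g = x**2 + y**2 - 1
f = x**2 + y**2 - 2
# N(g*f) = N(g) union N(f): factoring recovers the two circle components
print(factor(expand(g*f)))
# xy - 1 is irreducible, so its zero set (a hyperbola) is irreducible
print(factor(x*y - 1))
```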
Definition 21.5.6. Let V be an algebraic K-set in C^n. The residue class ring K[V] = K[x_1, …, x_n]/I(V) is called the (affine) coordinate ring of V.
K[V] can be identified with the ring of all those functions V → C which are given by polynomials from K[x_1, …, x_n]. As a homomorphic image of K[x_1, …, x_n] we get that K[V] can be described in the form K[V] = K[α_1, …, α_n]; therefore a K-algebra of the form K[α_1, …, α_n] is often called an affine K-algebra. If the algebraic K-set V in C^n is irreducible – we can call V now an (affine) K-variety in C^n – then K[V] is an integral domain with an identity because I(V) is then a prime ideal with I(V) ≠ K[x_1, …, x_n] by Theorem 21.5.3. The quotient field K(V) = Quot(K[V]) is called the field of rational functions on the K-variety V.
We note the following:
1. If C is algebraically closed then V = C^n is a K-variety and K(V) is the field K(x_1, …, x_n) of the rational functions in n variables over K.
2. Let the affine K-algebra A = K[α_1, …, α_n] be an integral domain with an identity 1 ≠ 0. Then A ≅ K[x_1, …, x_n]/p for some prime ideal p ⊴ K[x_1, …, x_n]. Hence, if C is algebraically closed then A is isomorphic to the coordinate ring of the K-variety V = N(p) in C^n (see Hilbert's Nullstellensatz, first form, Theorem 21.4.1).
3. If the affine K-algebra A = K[α_1, …, α_n] is an integral domain with an identity 1 ≠ 0 then we define the transcendence degree trgd(A|K) to be the transcendence degree of the field extension Quot(A)|K, that is, trgd(A|K) = trgd(Quot(A)|K), Quot(A) the quotient field of A.
In this sense trgd(K[x_1, …, x_n]|K) = n. Since Quot(A) = K(α_1, …, α_n) we get trgd(A|K) ≤ n by Theorem 20.3.10.
4. An arbitrary affine K-algebra K[α_1, …, α_n] is, as a homomorphic image of the polynomial ring K[x_1, …, x_n], noetherian (see Theorem 21.3.2 and Theorem 21.3.3).
Example 21.5.7. Let ω_1, ω_2 ∈ ℂ be two elements which are linearly independent over ℝ. An element ω = m_1 ω_1 + m_2 ω_2 with m_1, m_2 ∈ ℤ is called a period. The periods describe an abelian group Ω = {m_1 ω_1 + m_2 ω_2 : m_1, m_2 ∈ ℤ} ≅ ℤ ⊕ ℤ and give a lattice in ℂ.
An elliptic function f (with respect to Ω) is a meromorphic function with period group Ω, that is, f(z + ω) = f(z) for all ω ∈ Ω. The Weierstrass ℘-function,

℘(z) = 1/z² + ∑_{0≠ω∈Ω} ( 1/(z − ω)² − 1/ω² ),

is an elliptic function.
With g_2 = 60 ∑_{0≠ω∈Ω} 1/ω⁴ and g_3 = 140 ∑_{0≠ω∈Ω} 1/ω⁶ we get the differential equation ℘′(z)² − 4℘(z)³ + g_2 ℘(z) + g_3 = 0. The set of elliptic functions is a field E, and each elliptic function is a rational function in ℘ and ℘′ (for details see, for instance, [27]).
The polynomial f(t) = t² − 4s³ + g_2 s + g_3 ∈ ℂ(s)[t] is irreducible over ℂ(s). For the corresponding algebraic ℂ(s)-set V we get K(V) = ℂ(s)[t]/(t² − 4s³ + g_2 s + g_3) ≅ E with respect to t ↦ ℘′, s ↦ ℘.
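The differential equation above can be manipulated formally: treating ℘ as an unspecified function p(z) and differentiating the relation p′² − 4p³ + g₂p + g₃ = 0, the derivative factors through p′, which yields the well-known second-order equation ℘″ = 6℘² − g₂/2. A sympy sketch of this computation:

```python
from sympy import Function, symbols, diff, expand

z, g2, g3 = symbols('z g2 g3')
p = Function('p')(z)                       # stands in for the Weierstrass p-function
rel = diff(p, z)**2 - 4*p**3 + g2*p + g3   # the differential equation from the text
d = diff(rel, z)
# d = p'(z) * (2*p''(z) - 12*p(z)**2 + g2), hence p'' = 6*p**2 - g2/2
quotient = expand(d - diff(p, z)*(2*diff(p, z, 2) - 12*p**2 + g2))
print(quotient)   # 0
```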
21.6 Dimensions
From now on we assume that C is algebraically closed.
Definition 21.6.1. (1) The dimension dim(V) of an algebraic K-set V in C^n is said to be the supremum of all integers m for which there exists a strictly descending chain V_0 ⊋ V_1 ⊋ ⋯ ⊋ V_m of K-varieties V_i in C^n with V_i ⊆ V for all i.
(2) Let A be a commutative ring with an identity 1 ≠ 0. The height h(p) of a prime ideal p ≠ A of A is said to be the supremum of all integers m for which there exists a strictly ascending chain p_0 ⊊ p_1 ⊊ ⋯ ⊊ p_m = p of prime ideals p_i of A with p_i ≠ A. The dimension (Krull dimension) dim(A) of A is said to be the supremum of the heights of all prime ideals ≠ A in A.
Theorem 21.6.2. Let V be an algebraic K-set in C^n. Then dim(V) = dim(K[V]).
Proof. By Theorem 21.4.1 and Theorem 21.5.3 we have a bijective map between the K-varieties W with W ⊆ V and the prime ideals ≠ R = K[x_1, …, x_n] of R which contain I(V) (the bijective map reverses the inclusion). But these prime ideals correspond exactly with the prime ideals ≠ K[V] of K[V] = K[x_1, …, x_n]/I(V), which gives the statement.
Suppose that V is an algebraic K-set in C^n and let V_1, …, V_r be the irreducible components of V. Then dim(V) = max{dim V_1, …, dim V_r} because if V′ is a K-variety with V′ ⊆ V then V′ = (V′ ∩ V_1) ∪ ⋯ ∪ (V′ ∩ V_r). Hence, we may restrict ourselves to K-varieties V.
Consider the special case of the K-variety V = C¹ = C (recall that C is algebraically closed and, hence, especially C is infinite). Then K[V] = K[x], the polynomial ring K[x] in one indeterminate x. Now, K[x] is a principal ideal domain, and hence, each prime ideal ≠ K[x] is either a maximal ideal or the zero ideal {0} of K[x]. The only K-varieties in V = C are therefore V itself and the zero sets of irreducible polynomials in K[x]. Hence, if V = C then dim(V) = dim K[V] = 1 = trgd(K[V]|K).
Theorem 21.6.3. Let A = K[α_1, …, α_n] be an affine K-algebra and let A also be an integral domain. Let {0} = p_0 ⊊ p_1 ⊊ ⋯ ⊊ p_m be a maximal strictly ascending chain of prime ideals in A (such a chain exists since A is noetherian). Then m = trgd(A|K) = dim(A). In other words:
All maximal ideals of A have the same height, and this height is equal to the transcendence degree of A over K.
Corollary 21.6.4. Let V be a K-variety in C^n. Then dim(V) = trgd(K[V]|K).
We prove Theorem 21.6.3 in several steps.
Lemma 21.6.5. Let R be a unique factorization domain. Then each prime ideal p with height h(p) = 1 is a principal ideal.
Proof. p ≠ {0} since h(p) = 1. Hence there is an f ∈ p, f ≠ 0. Since R is a unique factorization domain, f has a decomposition f = p_1 ⋯ p_s with prime elements p_i ∈ R. Now, p is a prime ideal, hence some p_i ∈ p because f ∈ p; say p_1 ∈ p. Then we have the chain {0} ⊊ (p_1) ⊆ p, and (p_1) is a prime ideal of R. Since h(p) = 1 we get (p_1) = p.
Lemma 21.6.6. Let R = K[y_1, …, y_r] be the polynomial ring of the r independent indeterminates y_1, …, y_r over the field K (recall that R is a unique factorization domain). If p is a prime ideal in R with height h(p) = 1 then the residue class ring R̄ = R/p has transcendence degree r − 1 over K.
Proof. By Lemma 21.6.5 we have that p = (q) for some nonconstant polynomial q ∈ K[y_1, …, y_r]. Let the indeterminate y = y_r occur in q, that is, deg_y(q) ≥ 1, the degree in y. If f is a multiple of q then also deg_y(f) ≥ 1. Hence, p ∩ K[y_1, …, y_{r−1}] = {0}. Therefore the residue class mapping R → R̄ = K[ȳ_1, …, ȳ_r] induces an isomorphism K[y_1, …, y_{r−1}] → K[ȳ_1, …, ȳ_{r−1}] of the subring K[y_1, …, y_{r−1}], that is, ȳ_1, …, ȳ_{r−1} are algebraically independent over K. On the other side, q(ȳ_1, …, ȳ_{r−1}, ȳ_r) = 0 is a nontrivial algebraic relation for ȳ_r over K(ȳ_1, …, ȳ_{r−1}). Hence, altogether trgd(R̄|K) = trgd(K(ȳ_1, …, ȳ_r)|K) = r − 1 by Theorem 20.3.9.
Before we describe the last technical lemma we need some preparatory theoretical material.
Let R, A be integral domains (with identity 1 ≠ 0) and let A|R be a ring extension. We first consider only R.
(1) A subset S ⊆ R \ {0} is called a multiplicative subset of R if 1 ∈ S for the identity 1 of R, and if s, t ∈ S then also st ∈ S. (x, s) ~ (y, t) :⟺ xt − ys = 0 defines an equivalence relation on M = R × S. Let x/s be the equivalence class of (x, s) and S⁻¹R the set of all equivalence classes. We call x/s a fraction. If we add and multiply fractions as usual we get that S⁻¹R becomes an integral domain; it is called the ring of fractions of R with respect to S. If, especially, S = R \ {0} then S⁻¹R = Quot(R), the quotient field of R.
Now, back to the general situation. i : R → S⁻¹R, i(r) = r/1, defines an embedding of R into S⁻¹R. Hence, we may consider R as a subring of S⁻¹R. For each s ∈ S ⊆ R \ {0} we have that i(s) is a unit in S⁻¹R, that is, i(s) is invertible, and that each element of S⁻¹R has the form i(s)⁻¹ i(r) with r ∈ R, s ∈ S. Therefore S⁻¹R is uniquely determined up to isomorphism; and we have the following universal property:
If φ : R → R′ is a ring homomorphism (of integral domains) such that φ(s) is invertible for each s ∈ S then there exists exactly one ring homomorphism λ : S⁻¹R → R′ with λ ∘ i = φ. If a ⊴ R is an ideal in R then we write S⁻¹a for the ideal in S⁻¹R generated by i(a). S⁻¹a is the set of all elements of the form a/s with a ∈ a and s ∈ S; further S⁻¹a = (1) ⟺ a ∩ S ≠ ∅.
Vice versa, if A ⊴ S⁻¹R is an ideal in S⁻¹R then we denote the ideal i⁻¹(A) ⊴ R by A ∩ R, too. An ideal a ⊴ R is of the form a = i⁻¹(A) if and only if there is no s ∈ S such that its image in R/a under the canonical map R → R/a is a proper zero divisor in R/a. Under the mappings P ↦ P ∩ R and p ↦ S⁻¹p the prime ideals in S⁻¹R correspond exactly to the prime ideals in R which do not contain an element of S.
We now identify R with i(R).
(2) Now, let p ⊴ R be a prime ideal in R. Then S = R \ p is multiplicative. In this case we write R_p instead of S⁻¹R and call R_p the quotient ring of R with respect to p or the localization of R at p. Put m = pR_p = S⁻¹p. Then 1 ∉ m. Each element of R_p \ m is a unit in R_p and vice versa. In other words: Each ideal a ≠ (1) in R_p is contained in m, or equivalently, m is the only maximal ideal in R_p. A commutative ring with an identity 1 ≠ 0, which has exactly one maximal ideal, is called a local ring. Hence R_p is a local ring. From part (1) we get further: the prime ideals of the local ring R_p correspond bijectively to the prime ideals of R which are contained in p.
(3) Now we consider our ring extension A|R as above. Let q be a prime ideal in R.
Claim: If qA ∩ R = q then there exists a prime ideal Q ⊴ A with Q ∩ R = q (and vice versa).
Proof of the claim: If S = R \ q then qA ∩ S = ∅. Hence qS⁻¹A is a proper ideal in S⁻¹A and hence contained in a maximal ideal m in S⁻¹A; here qS⁻¹A is the ideal in S⁻¹A which is generated by q. Define Q = m ∩ A; Q is a prime ideal in A, and Q ∩ R = q by part (1) because Q ∩ S = ∅ where S = R \ q.
(4) Now let A|R be an integral extension (A, R integral domains as above). Assume that R is integrally closed in its quotient field K. Let P ⊴ A be a prime ideal in A and p = P ∩ R.
Claim: If q ⊴ R is a prime ideal in R with q ⊆ p then qA_P ∩ R = q.
Proof of the claim: An arbitrary β ∈ qA_P has the form β = α/s with α ∈ qA, qA the ideal in A generated by q, and s ∈ S = A \ P. An integral equation for α ∈ qA over R is given in the form α^m + a_{m−1} α^{m−1} + ⋯ + a_0 = 0 with a_i ∈ q. This can be seen as follows: we have certainly a form α = b_1 α_1 + ⋯ + b_n α_n with b_i ∈ q and α_i ∈ A. The subring A′ = R[α_1, …, α_n] is, as an R-module, finitely generated, and αA′ ⊆ qA′. Now, a_i ∈ q follows with the same type of arguments as in the proof of Theorem 20.2.4.
Now, in addition, let β ∈ K. Then, for s = α/β, we have an equation

s^m + (a_{m−1}/β) s^{m−1} + ⋯ + a_0/β^m = 0

over K. But s is integral over R, and, hence, all a_{m−i}/β^i ∈ R.
We are now prepared to prove the last preliminary lemma which we need for the
proof of Theorem 21.6.3.
Lemma 21.6.7 (Krull's going down lemma). Let A|R be an integral ring extension of integral domains and let R be integrally closed in its quotient field. Let p and q be prime ideals in R with q ⊆ p. Further let P be a prime ideal in A with P ∩ R = p. Then there exists a prime ideal Q in A with Q ∩ R = q and Q ⊆ P.
Proof. It is enough to show that there exists a prime ideal Q̃ in A_P with Q̃ ∩ R = q. This can be seen from the preceding preparations. By parts (1) and (2) such a Q̃ has the form Q̃ = Q′A_P with a prime ideal Q′ in A with Q′ ⊆ P and Q̃ ∩ A = Q′. It follows that q = Q′ ∩ R ⊆ P ∩ R = p. And the existence of such a Q̃ follows from parts (3) and (4).
Proof of Theorem 21.6.3. First let m = 0. Then {0} is a maximal ideal in A and,
hence, A = K[α_1, …, α_n] is a field. By Corollary 20.3.11 then A|K is algebraic
and, hence, trgd(A|K) = 0. So, Theorem 21.3.3 holds for m = 0.

Now, let m ≥ 1. We use Noether’s normalization theorem. A has a polynomial
ring R = K[y_1, …, y_r] in the r independent indeterminates y_1, …, y_r as a
subring, and A|R is an integral extension. As a polynomial ring over K the ring R
is a unique factorization domain and hence, certainly, integrally closed (in its
quotient field).
Now, let

    {0} = P_0 ⊂ P_1 ⊂ · · · ⊂ P_m                                   (1)

be a maximal strictly ascending chain of prime ideals in A. If we intersect with R we
get a chain

    {0} = p_0 ⊂ p_1 ⊂ · · · ⊂ p_m                                   (2)

of prime ideals p_i = P_i ∩ R of R. Since A|R is integral, the chain (2) is also a
strictly ascending chain. This follows from Krull’s going up lemma (Lemma 21.6.7)
because if p_i = p_j then P_i = P_j. If P_m is a maximal ideal in A then p_m is also
a maximal ideal in R because A|R is integral (consider A/P_m and use Theorem
17.2.21). If the chain (1) is maximal and strictly ascending then so is the chain (2).

Now, let the chain (1) be maximal and strictly ascending. If we pass to the residue
class rings Ā = A/P_1 and R̄ = R/p_1 then we get the chains of prime ideals
{0} = P̄_1 ⊂ P̄_2 ⊂ · · · ⊂ P̄_m and {0} = p̄_1 ⊂ p̄_2 ⊂ · · · ⊂ p̄_m for the affine
K-algebras Ā and R̄, respectively, but with length one less. By induction, we may
assume that already trgd(Ā|K) = m − 1 = trgd(R̄|K). On the other side, by
construction we have trgd(A|K) = trgd(R|K) = r. To prove Theorem 21.3.3
finally, we have to show that r = m. If we compare both equations then r = m
follows if trgd(R̄|K) = r − 1. But this holds by Lemma 21.6.6.
Theorem 21.6.8. Let V be a K-variety in C^n. Then dim(V) = n − 1 if and only if
V = N(f) for some irreducible polynomial f ∈ K[x_1, …, x_n].

Proof. (1) Let V be a K-variety in C^n with dim(V) = n − 1. The corresponding
ideal (in the sense of Theorem 21.2.4) is by Theorem 21.4.2 a prime ideal p in
K[x_1, …, x_n]. By Theorem 21.3.3 and Corollary 21.3.4 we get h(p) = 1 for the
height of p because dim(V) = n − 1 (see also Theorem 21.3.2). Since
K[x_1, …, x_n] is a unique factorization domain we get that p = (f) is a principal
ideal by Lemma 21.6.5.

(2) Now let f ∈ K[x_1, …, x_n] be irreducible. We have to show that V = N(f)
has dimension n − 1. For that, by Theorem 21.6.3, we have to show that the prime
ideal p = (f) has height h(p) = 1. Assume that this is not the case. Then there
exists a prime ideal q ≠ p with {0} ≠ q ⊂ p. Choose g ∈ q, g ≠ 0. Let

    g = u f^{e_1} p_2^{e_2} · · · p_r^{e_r}

be its prime factorization in K[x_1, …, x_n]. Now g ∈ q and f ∉ q because
q ≠ p. Hence, there is a p_i in q ⊂ p = (f), which is impossible. Therefore
h(p) = 1.
21.7 Exercises
1. Let A = K[a_1, …, a_n] and C|K a field extension with C algebraically closed.
   Show that there is a K-algebra homomorphism K[a_1, …, a_n] → C.

2. Let K[x_1, …, x_n] be the polynomial ring in the n independent indeterminates
   x_1, …, x_n over the algebraically closed field K. The maximal ideals of
   K[x_1, …, x_n] are exactly the ideals of the form
   m(α) = (x_1 − α_1, x_2 − α_2, …, x_n − α_n) with α = (α_1, …, α_n) ∈ K^n.

3. The nil radical √0 of A = K[a_1, …, a_n] coincides with the Jacobson radical
   of A, that is, the intersection of all maximal ideals of A.

4. Let R be a commutative ring with 1 ≠ 0. If each prime ideal of R is finitely
   generated then R is noetherian.

5. Prove the theoretical preparations for Krull’s going up lemma in detail.

6. Let K[x_1, …, x_n] be the polynomial ring in the n independent indeterminates
   x_1, …, x_n. For each ideal a of K[x_1, …, x_n] there exists a natural number m
   with the following property: if f ∈ K[x_1, …, x_n] vanishes on the zero set of a
   then f^m ∈ a.

7. Let K be a field with char K ≠ 2 and a, b ∈ K^*. We consider the polynomial
   f(x, y) = ax^2 + by^2 − 1 ∈ K[x, y], the polynomial ring in the independent
   indeterminates x and y. Let C be the algebraic closure of K(x) and β ∈ C with
   f(x, β) = 0. Show that:

   (i)  f is irreducible over the algebraic closure C_0 of K (in C).

   (ii) trgd(K(x, β)|K) = 1, [K(x, β) : K(x)] = 2, and K is algebraically closed
        in K(x, β).
Chapter 22
Algebraic Cryptography
22.1 Basic Cryptography
As we have mentioned, much of mathematics has been algebraicized, that is, it uses
the methods and techniques of abstract algebra. Throughout this book we have looked
at various applications of these algebraic ideas. Many of these were to other areas of
mathematics, such as the insolvability of the quintic. In this ﬁnal chapter we move
in a slightly different direction and look at applications of algebra to cryptography.
This has become increasingly important because of the extensive use of cryptography
and cryptosystems in modern commerce and communications. We ﬁrst give a brief
introduction to general cryptography and its history.
Cryptography refers to the science and/or art of sending and receiving coded messages.
Coding and hidden ciphering is an old endeavor, used by governments, militaries and
private individuals from ancient times. Recently it has become
even more prominent because of the necessity of sending secure and private informa
tion, such as credit card numbers, over essentially open communication systems.
Traditionally cryptography is the science and/or art of devising and implementing
secret codes or cryptosystems. Cryptanalysis is the science and/or art of breaking
cryptosystems, while cryptology refers to the whole field of cryptography plus
cryptanalysis. In most modern literature cryptography is used synonymously with
cryptology.
Theoretically cryptography uses mathematics, computer science and engineering.
A cryptosystem or code is an algorithm to change a plain message, called the plain
text message, into a coded message, called the ciphertext message. In general both the
plaintext message (uncoded message) and the ciphertext message (coded message)
are written in some N letter alphabet which is usually the same for both plaintext and
code. The method of coding or the encoding algorithm is then a transformation of
the N letters. The most common way to perform this transformation is to consider
the N letters as N integers modulo N and then perform a number theoretical func
tion on them. Therefore most encoding algorithms use modular arithmetic and hence
cryptography is closely tied to number theory. The subject is very broad and, as men-
tioned above, very current, due to the need for publicly transmitted but coded messages.
There are many references to the subject. The book by Koblitz [60] gives an outstand
ing introduction to the interaction between number theory and cryptography. It also
includes many references to other sources. The book by Stinson [68] describes the
whole area.
Modern cryptography is usually separated into classical cryptography also called
symmetric key cryptography and public key cryptography. In the former, both the
encoding and decoding algorithms are supposedly known only to the sender and re
ceiver, usually referred to as Bob and Alice. In the latter, the encryption method is
public knowledge but only the receiver knows how to decode.
The message that one wants to send is written in plaintext and then converted into
code. The coded message is written in ciphertext. The plaintext message and cipher
text message are written in some alphabets that are usually the same. The process
of putting the plaintext message into code is called enciphering or encryption while
the reverse process is called deciphering or decryption. Encryption algorithms break
the plaintext and ciphertext message into message units. These are single letters or
pairs of letters or, more generally, k-vectors of letters. The transformations are done
on these message units and the encryption algorithm is a mapping from the set of
plaintext message units to the set of ciphertext message units. Putting this into a
mathematical formulation we let
P = set of all plaintext message units
C = set of all ciphertext message units.
The encryption algorithm is then the application of an invertible function
    f : P → C.

The function f is the encryption map. The inverse

    f^{-1} : C → P

is the decryption or deciphering map. The triple {P, C, f}, consisting of a set of
plaintext message units, a set of ciphertext message units and an encryption map, is
called a cryptosystem.
Breaking a code is called cryptanalysis. An attempt to break a code is called an
attack. Most cryptanalysis depends on a statistical frequency analysis of the plaintext
language used (see exercises). Cryptanalysis depends also on a knowledge of the form
of the code, that is, the type of cryptosystem used.
We now give some examples of cryptosystems and cryptanalysis.
Example 22.1.1. The simplest type of encryption algorithm is a permutation cipher.
Here the letters of the plaintext alphabet are permuted and the plaintext message is
sent in the permuted letters. Mathematically, if the alphabet has N letters and σ is a
permutation on 1, …, N, the letter i in each message unit is replaced by σ(i). For
example suppose the plaintext language is English, the plaintext word is BOB and
the permutation algorithm is

    a b c d e f g h i j k l m
    b c d f g h j k l n o p r

    n o p q r s t u v w x y z
    s t v w x a e i z m q y u

then BOB → CSC.
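A permutation cipher is just a pair of lookup tables, one for each direction. A minimal Python sketch follows; the function names are ours, and the sample key (the alphabet written backwards) is an arbitrary illustrative choice rather than the table from the example.

```python
import string

def make_permutation_cipher(permuted, alphabet=string.ascii_uppercase):
    """Build encrypt/decrypt functions for a permutation (substitution) cipher."""
    enc = dict(zip(alphabet, permuted))    # i -> sigma(i)
    dec = {v: k for k, v in enc.items()}   # sigma(i) -> i
    encrypt = lambda msg: "".join(enc[c] for c in msg)
    decrypt = lambda msg: "".join(dec[c] for c in msg)
    return encrypt, decrypt

# Sample key: the reversed alphabet (the Atbash permutation mentioned
# later in this chapter).
encrypt, decrypt = make_permutation_cipher(string.ascii_uppercase[::-1])
```

Any of the 26! permutations of the alphabet can serve as the key.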
Example 22.1.2. A very straightforward example of a permutation encryption algo-
rithm is a shift algorithm. Here we consider the plaintext alphabet as the integers
0, 1, …, N − 1 mod N. We choose a fixed integer k and the encryption algorithm is

    f : m → m + k mod N.

This is often known as a Caesar code after Julius Caesar, who supposedly invented
it. It was used by the Union Army during the American Civil War. For example, if
both the plaintext and ciphertext alphabets were English and each message unit was
a single letter then N = 26. Suppose k = 5 and we wish to send the message
ATTACK. If a = 0 then ATTACK is the numerical sequence 0, 19, 19, 0, 2, 10. The
encoded message would then be FYYFHP.
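In code the shift cipher is a one-line map. A sketch, assuming the convention a = 0, …, z = 25 (the function names are ours):

```python
def caesar_encrypt(msg, k, N=26):
    """f : m -> m + k mod N, applied letter by letter (A = 0, ..., Z = 25)."""
    return "".join(chr((ord(c) - ord('A') + k) % N + ord('A')) for c in msg)

def caesar_decrypt(msg, k, N=26):
    """Decryption is the shift by -k."""
    return caesar_encrypt(msg, -k, N)
```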
Any permutation encryption algorithm which goes letter to letter is very simple to
attack using a statistical analysis. If enough messages are intercepted and the plaintext
language is guessed then a frequency analysis of the letters will sufﬁce to crack the
code. For example in the English language the three most commonly occurring letters
are E, T and A with a frequency of occurrence of approximately 13% and 9% and 8%
respectively. By examining the frequency of occurrences of letters in the ciphertext
the letters corresponding to E, T and A can be uncovered.
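For a shift cipher this attack reduces to a single guess: assume the most frequent ciphertext letter is the image of E and read off the key. A sketch (the try-E-first heuristic is our simplification; a real attack would also try T and A and check which key yields sensible text):

```python
from collections import Counter

def guess_shift_key(ciphertext, assumed_plain='E', N=26):
    """Guess the key of a shift cipher: the most frequent ciphertext letter
    is assumed to encrypt the most frequent plaintext letter."""
    counts = Counter(c for c in ciphertext if c.isalpha())
    top = counts.most_common(1)[0][0]
    return (ord(top) - ord(assumed_plain)) % N
```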
Example 22.1.3. A variation on the Caesar code is the Vigenère code. Here message
units are considered as k-vectors of integers mod N from an N letter alphabet. Let
B = (b_1, …, b_k) be a fixed k-vector in Z_N^k. The Vigenère code then takes a
message unit

    (a_1, …, a_k) → (a_1 + b_1, …, a_k + b_k) mod N.
From a cryptanalysis point of view a Vigenère code is no more secure than a Caesar
code and is susceptible to the same type of statistical attack.
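The Vigenère map can be sketched with the key vector repeated cyclically along the message (function names are ours):

```python
def vigenere_encrypt(msg, key, N=26):
    """(a_1, ..., a_k) -> (a_1 + b_1, ..., a_k + b_k) mod N; the key vector
    is repeated cyclically along the message."""
    return "".join(chr((ord(c) - ord('A') + key[i % len(key)]) % N + ord('A'))
                   for i, c in enumerate(msg))

def vigenere_decrypt(msg, key, N=26):
    """Subtracting the key vector undoes the encryption."""
    return vigenere_encrypt(msg, [-b % N for b in key], N)
```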
The Alberti code is a polyalphabetic cipher and can often be used to thwart a
statistical frequency attack. We describe it in the next example.
Example 22.1.4. Suppose we have an N letter alphabet. We then form an N × N
matrix where each row and column is a distinct permutation of the plaintext alpha-
bet, that is, a permutation matrix on the integers 0, …, N − 1. Bob and Alice
decide on a keyword. The keyword is placed above the plaintext message and the
intersection of the keyword letter and the plaintext letter below it will determine which
cipher alphabet to use. We will make this precise with a 9 letter alphabet A, B, C, D,
E, O, S, T, U. Here for simplicity we will assume that each row is just a shift of the
previous row, but any permutation can be used.

           Key Letters
       A  B  C  D  E  O  S  T  U
    A  a  b  c  d  e  o  s  t  u
    B  b  c  d  e  o  s  t  u  a
    C  c  d  e  o  s  t  u  a  b
    D  d  e  o  s  t  u  a  b  c
    E  e  o  s  t  u  a  b  c  d
    O  o  s  t  u  a  b  c  d  e
    S  s  t  u  a  b  c  d  e  o
    T  t  u  a  b  c  d  e  o  s
    U  u  a  b  c  d  e  o  s  t

Suppose the plaintext message is STAB DOC and Bob and Alice have chosen the
keyword BET. We place the keyword repeatedly over the message

    B E T B E T B
    S T A B D O C.

To encode we look at B which lies over S. The intersection of the B key letter and the
S alphabet is a t, so we encrypt the S with T. The next key letter is E which lies over T.
The intersection of the E key letter with the T alphabet is c. Continuing in this manner
and ignoring the space we get the encryption

    STAB DOC → TCTCTDD.
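Because every row of the table above is a shift of the previous one, the entry in row p and column k is simply the alphabet letter at position (index of p + index of k) mod 9, and the whole example can be checked in a few lines. A sketch under that shifted-rows assumption (with an arbitrary permutation table one would store the full matrix instead):

```python
def alberti_encrypt(msg, keyword, alphabet="ABCDEOSTU"):
    """Shifted-row Alberti table: the key letter picks the column, the
    plaintext letter picks the row; row r, column c holds alphabet[(r+c) % N]."""
    idx = {c: i for i, c in enumerate(alphabet)}
    N = len(alphabet)
    return "".join(alphabet[(idx[p] + idx[keyword[i % len(keyword)]]) % N]
                   for i, p in enumerate(msg))
```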
Example 22.1.5. A final example, which is not number theory based, is the so-called
Beale cipher. This has a very interesting history, which is related in the popular
book Archimedes’ Revenge by Paul Hoffman (see [56]). Here letters are encrypted
by numbering the ﬁrst letters of each word in some document like the Declaration of
Independence or the Bible. There will then be several choices for each letter and a
Beale cipher is quite difﬁcult to attack.
Until relatively recent times cryptography was mainly concerned with message con-
fidentiality – that is, sending secret messages so that interceptors or eavesdroppers
cannot decipher them. The discipline was primarily used in military and espionage
situations. This changed with the vast amount of confidential data that has to be
transmitted over public airwaves, so the field has expanded to include many different
types of cryptographic techniques, such as digital signatures and message
authentication.
Cryptography and encryption have a long and celebrated history. In the Bible, in
the book of Jeremiah, what is called an Atbash code is used. In this code the letters
of the alphabet – Hebrew in the Bible, but any alphabet can be used – are permuted
first to last. That is, in the Latin alphabet, Z would go to A and so on.
The Kabbalists, following the Kabbala, believe that the Bible – written in Hebrew, where
each letter also stands for a number – is a code from heaven. They have devised
elaborate ways to decode it. This idea has seeped into popular culture where the book
“The Bible Code” became a bestseller.
In his military campaigns Julius Caesar would send out coded messages. His
method, which we looked at in the last section, is now known as a Caesar code. It
is a shift cipher. That is, each letter is shifted a certain amount to the right. A shift
cipher is a special case of an afﬁne cipher that will be elaborated upon in the next
section. The Caesar code was resurrected and used during the American Civil War.
Coded messages produced by most of the historical methods reveal statistical in
formation about the plaintext. This could be used in most cases to break the codes.
Frequency analysis was discovered by the Arab mathematician Al-Kindi in the ninth
century, and thereafter the basic classical substitution ciphers became more or less
easily breakable. About 1470 Leon Alberti developed a method to thwart statistical
analysis. His innovation was to use a polyalphabetic cipher where different parts of
the message are encrypted with different alphabets. We looked at an example of an
Alberti code in this section.
A different way to thwart statistical attacks is to use blank and neutral letters, that
is meaningless letters within the message. Mary, Queen of Scots, used a random
permutation cipher with neutrals in it, where a neutral was a random meaningless
symbol. Unfortunately for her, her messages were decoded and she was beheaded.
There have been various physical devices and aids used to create codes. Prior
to the widespread use of the computer the most famous cryptographic aid was the
Enigma machine developed and used by the German military during the Second World
War. This was a rotor machine using a polyalphabetic cipher. An early version was
broken by Polish cryptographers early in the war so a larger system was built that was
considered unbreakable. British cryptographers led by Alan Turing broke this and
British knowledge of German secrets had a great effect on the latter part of the war.
The development of digital computers allowed for much more complicated crypto-
systems. Further, computers allowed for the encryption of anything that can be placed
in binary format, whereas historical cryptosystems could only be rendered using
language texts. This has revolutionized cryptography.
In 1976 Diffie and Hellman developed the first usable public key exchange protocol.
This allowed for the transmission of secret data over open airwaves. A year later
Rivest, Shamir and Adleman developed the RSA algorithm, a second public key
protocol. There are now many such systems and we will discuss some of them later.
In 1997 it became known that public key cryptography had been developed earlier by
James Ellis, working for British Intelligence, and that both the Diffie–Hellman and
RSA protocols had been developed earlier by Malcolm Williamson and Clifford Cocks
respectively.
22.2 Encryption and Number Theory
Here we describe some basic number theoretically derived cryptosystems. In applying
a cryptosystem to an N letter alphabet we consider the letters as integers mod N.
The encryption algorithms then apply number theoretic functions and use modular
arithmetic on these integers. One example of this was the shift, or Caesar cipher,
described in the last section. In this encryption method a fixed integer k is chosen and
the encryption map is given by

    f : m → m + k mod N.
The shift algorithm is a special case of an affine cipher. Recall that an affine map
on a ring R is a function f(x) = ax + b with a, b, x ∈ R. We apply such a map to
the ring of integers modulo N, that is, R = Z_N, as the encryption map. Specifically,
again suppose we have an N letter alphabet and we consider the letters as the integers
0, 1, …, N − 1 mod N, that is, as the ring Z_N. We choose integers a, b ∈ Z_N with
(a, N) = 1 and b ≠ 0. The integers a and b are called the keys of the cryptosystem.
The encryption map is then given by

    f : m → am + b mod N.
Example 22.2.1. Using an affine cipher with the English language and keys a = 3,
b = 5, encode the message EAT AT JOE’S. Ignore spaces and punctuation.

The numerical sequence for the message, ignoring the spaces and punctuation, is

    4, 0, 19, 0, 19, 9, 14, 4, 18.

Applying the map f(m) = 3m + 5 mod 26 we get

    17, 5, 62, 5, 62, 32, 47, 17, 59 → 17, 5, 10, 5, 10, 6, 21, 17, 7.

Now rewriting these as letters we get

    EAT AT JOE’S → RFKFKGVRH.
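The affine cipher translates directly into code. A sketch (Python 3.8+ is assumed so that pow(a, -1, N) computes the multiplicative inverse of a mod N):

```python
def affine_encrypt(msg, a, b, N=26):
    """f : m -> a*m + b mod N, letter by letter; requires gcd(a, N) = 1."""
    return "".join(chr((a * (ord(c) - ord('A')) + b) % N + ord('A')) for c in msg)

def affine_decrypt(msg, a, b, N=26):
    """f^{-1} : m -> a^{-1}(m - b) mod N."""
    a_inv = pow(a, -1, N)  # multiplicative inverse of a mod N (Python 3.8+)
    return "".join(chr(a_inv * (ord(c) - ord('A') - b) % N + ord('A')) for c in msg)
```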
Since (a, N) = 1 the integer a has a multiplicative inverse a^{-1} mod N. The de-
cryption map for an affine cipher with keys a, b is then

    f^{-1} : m → a^{-1}(m − b) mod N.
Since an affine cipher, as given above, goes letter to letter it is easy to attack using
a statistical frequency approach. Further, if an attacker can determine two letters and
knows that it is an affine cipher, the keys can be determined and the code broken. To
give better security it is preferable to use k-vectors of letters as message units. The
form of an affine cipher then becomes
    f : P → AP + B

where P and B are k-vectors from Z_N^k and A is an invertible k × k matrix with
entries from the ring Z_N. The computations are then done modulo N. Since P is a
k-vector and A is a k × k matrix, the matrix product AP produces another k-vector
from Z_N^k. Adding the k-vector B again produces a k-vector, so the ciphertext
message unit is again a k-vector. The keys for this affine cryptosystem are the
enciphering matrix A and the shift vector B. The matrix A is chosen to be invertible
over Z_N (equivalent to the determinant of A being a unit in the ring Z_N) so the
decryption map is given by

    C → A^{-1}(C − B).

Here A^{-1} is the matrix inverse over Z_N and C is a k-vector.

A statistical frequency attack on such a cryptosystem requires knowledge, within
a given language, of the statistical frequency of k-strings of letters. This is more
difficult to determine than the statistical frequency of single letters. As for a letter to
letter affine cipher, if k + 1 message units, where k is the message block length, are
discovered, then the code can be broken.
Example 22.2.2. Using an affine cipher with message units of length 2 in the English
language and keys

    A = ( 5 1 )
        ( 8 7 ),    B = (5, 3)

encode the message EAT AT JOE’S. Again ignore spaces and punctuation.

Message units of length 2, that is, 2-vectors of letters, are called digraphs. We first
must place the plaintext message in terms of these message units. The numerical
sequence for the message EAT AT JOE’S, ignoring the spaces and punctuation, is as
before

    4, 0, 19, 0, 19, 9, 14, 4, 18.

Therefore the message units are

    (4, 0), (19, 0), (19, 9), (14, 4), (18, 18),

repeating the last letter to end the message.

The enciphering matrix A has determinant 1 mod 26, which is a unit, and hence A
is invertible, so it is a valid key.
Now we must apply the map f(P) = AP + B mod 26 to each digraph. For example

    A(4, 0) + B = (5 · 4 + 1 · 0, 8 · 4 + 7 · 0) + (5, 3)
                = (20, 32) + (5, 3) ≡ (25, 9) mod 26.

Doing this to the other message units we obtain

    (25, 9), (22, 25), (5, 10), (1, 13), (9, 13).

Now rewriting these as digraphs of letters we get

    (Z, J), (W, Z), (F, K), (B, N), (J, N).

Therefore the coded message is

    EAT AT JOE’S → ZJWZFKBNJN.
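For 2 × 2 matrices the digraph version needs no matrix library. A sketch that reproduces the computation above, padding an odd-length message by repeating its last letter (function names are ours):

```python
def digraph_affine_encrypt(msg, A, B, N=26):
    """v -> A v + B mod N on pairs of letters; A is a 2x2 matrix given as
    nested lists, B a 2-vector."""
    nums = [ord(c) - ord('A') for c in msg]
    if len(nums) % 2:
        nums.append(nums[-1])          # pad by repeating the last letter
    out = []
    for i in range(0, len(nums), 2):
        x, y = nums[i], nums[i + 1]
        out.append((A[0][0] * x + A[0][1] * y + B[0]) % N)
        out.append((A[1][0] * x + A[1][1] * y + B[1]) % N)
    return "".join(chr(n + ord('A')) for n in out)
```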
Example 22.2.3. Suppose we receive the message ZJWZFKBNJN and we wish to
decode it. We know that an affine cipher with message units of length 2 in the English
language and keys

    A = ( 5 1 )
        ( 8 7 ),    B = (5, 3)

is being used.

The decryption map is given by

    C → A^{-1}(C − B),

so we must find the inverse matrix for A. For a 2 × 2 invertible matrix we have

    ( a b )^{-1}       1      (  d  −b )
    ( c d )      =  ———————   ( −c   a ).
                    ad − bc

Therefore in this case, recalling that multiplication is mod 26,

    A = ( 5 1 )    and so    A^{-1} = (  7  −1 )
        ( 8 7 )                       ( −8   5 ).

The message ZJWZFKBNJN in terms of message units is

    (25, 9), (22, 25), (5, 10), (1, 13), (9, 13).

We apply the decryption map to each digraph. For example

    A^{-1}((25, 9) − B) = A^{-1}(20, 6) = (4, 0).

Doing this to each we obtain

    (4, 0), (19, 0), (19, 9), (14, 4), (18, 18)

and rewriting in terms of letters

    (E, A), (T, A), (T, J), (O, E), (S, S).

This gives us

    ZJWZFKBNJN → EATATJOESS.
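Decryption only needs the 2 × 2 inverse formula computed mod N. A sketch (again assuming Python 3.8+ for the modular inverse via pow):

```python
def digraph_affine_decrypt(msg, A, B, N=26):
    """w -> A^{-1}(w - B) mod N on pairs of letters, using the 2x2 inverse
    formula (a b; c d)^{-1} = (ad - bc)^{-1} (d -b; -c a)."""
    (a, b), (c, d) = A
    det_inv = pow((a * d - b * c) % N, -1, N)   # inverse of det(A) mod N
    Ainv = [[det_inv * d % N, det_inv * -b % N],
            [det_inv * -c % N, det_inv * a % N]]
    nums = [ord(ch) - ord('A') for ch in msg]
    out = []
    for i in range(0, len(nums), 2):
        x, y = (nums[i] - B[0]) % N, (nums[i + 1] - B[1]) % N
        out.append((Ainv[0][0] * x + Ainv[0][1] * y) % N)
        out.append((Ainv[1][0] * x + Ainv[1][1] * y) % N)
    return "".join(chr(n + ord('A')) for n in out)
```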
Modern cryptography is done via a computer. Hence all messages, both plaintext
and ciphertext, are actually presented as binary strings. Important in this regard is
the concept of a hash function.

A cryptographic hash function is a deterministic function

    h : S → {0, 1}^n

that returns for each arbitrary block of data, called a message, a fixed size bit string.
It should have the property that a change in the data will change the hash value. The
hash value is called the digest.

An ideal cryptographic hash function has the following properties:

(1) It is easy to compute the hash value for any given message.

(2) It is infeasible to find a message that has a given hash value (preimage resistant).

(3) It is infeasible to modify a message without changing its hash.

(4) It is infeasible to find two different messages with the same hash (collision
    resistant).

A cryptographic hash function can serve as a digital signature.
Hash functions can also be used with encryption. Suppose that Bob and Alice want
to communicate openly. They have exchanged a secret key K that supposedly only
they know. Let f_K be an encryption function or encryption algorithm based on the
key K. Alice wants to send the message m to Bob, where m is given as a binary bit
string. Alice sends to Bob

    f_K(m) ⊕ h(K)

where ⊕ is addition modulo 2.

Bob knows the key K and hence its hash value h(K). He now computes

    f_K(m) ⊕ h(K) ⊕ h(K).

Since addition modulo 2 has order 2 we have

    f_K(m) ⊕ h(K) ⊕ h(K) = f_K(m).

Bob now applies the decryption algorithm f_K^{-1} to decode the message.
Alice could have just as easily sent f_K(m) ⊕ K. However sending the hash has
two benefits. Usually the hash is shorter than the key, and from the properties of hash
functions it gives another level of security. As we will see, tying the secret key to
the actual encryption in this manner is the basis for the ElGamal and elliptic curve
cryptographic methods.
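The key-hash construction can be sketched as follows. Here h is SHA-256, and the simple XOR keystream standing in for the symmetric cipher f_K is a toy of our own devising, for illustration only; it is not a secure cipher.

```python
import hashlib

def _stream(key, tag, n):
    """Deterministic byte stream derived from the key (illustrative only)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + tag + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_bytes(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

def alice_send(m, key):
    """Transmit f_K(m) XOR h(K), the hash value repeated to the message length."""
    f_K_m = xor_bytes(m, _stream(key, b"enc", len(m)))   # plays the role of f_K(m)
    h_K = _stream(key, b"hash", len(m))                  # plays the role of h(K)
    return xor_bytes(f_K_m, h_K)

def bob_receive(t, key):
    """XOR with h(K) again (XOR has order 2), then apply f_K^{-1}."""
    f_K_m = xor_bytes(t, _stream(key, b"hash", len(t)))
    return xor_bytes(f_K_m, _stream(key, b"enc", len(t)))
```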
The encryption algorithm f_K is usually a symmetric key encryption so that any-
one knowing K can encrypt and decrypt easily. However it should be resistant to
plaintext-ciphertext attacks. That is, if an attacker gains some knowledge of a piece
of plaintext together with the corresponding ciphertext it should not compromise the
whole system.

The encryption algorithm can be either a block cipher or a stream cipher. In the
former, blocks of fixed length k are transformed into blocks of fixed length n and
there is a method to tie the encrypted blocks together. In the latter, a stream cipher,
bits are transformed one by one into new bit strings by some procedure.
In 2001 the National Institute of Standards and Technology adopted a block ci-
pher, now called AES for Advanced Encryption Standard, as the industry standard
for symmetric key encryption. Although not universally used it is the most widely
used. This block cipher was a standardization of the Rijndael cipher, named after its
inventors Rijmen and Daemen.

AES replaced DES, the Data Encryption Standard, which had previously been the
standard. Parts of DES were found to be insecure. AES proceeds with several rounds
of encrypting blocks and then mixing blocks. The mathematics in AES is done over
the finite field GF(2^8).
22.3 Public Key Cryptography
Presently there are many instances where secure information must be sent over open
communication lines. These include for example banking and ﬁnancial transactions,
purchasing items via credit cards over the Internet and similar things. This led to the
development of public key cryptography. Roughly, in classical cryptography only the
sender and receiver know the encoding and decoding methods. Further it is a feature
of such cryptosystems, such as the ones that we’ve looked at, that if the encrypting
method is known then the decryption can be carried out. In public key cryptogra
phy the encryption method is public knowledge but only the receiver knows how to
decode. More precisely in a classical cryptosystem once the encrypting algorithm is
known the decryption algorithm can be implemented in approximately the same order
of magnitude of time. In a public key cryptosystem, developed ﬁrst by Difﬁe and
Hellman, the decryption algorithm is much more difficult to implement. This
difficulty depends on the type of computing machinery used, and as computers get
better, new and more secure public key cryptosystems become necessary.
The basic idea in a public key cryptosystem is to have a one-way function or trap-
door function, that is, a function which is easy to implement but very hard to invert.
Hence it becomes simple to encrypt a message but very hard, unless you know the
inverse, to decrypt.
The standard model for public key systems is the following. Alice wants to send a
message to Bob. The encrypting map f_A for Alice is public knowledge as well as the
encrypting map f_B for Bob. On the other hand the decryption algorithms f_A^{-1}
and f_B^{-1} are secret and known only to Alice and Bob respectively. Let P be the
message Alice wants to send to Bob. She sends f_B(f_A^{-1}(P)). To decode, Bob
applies first f_B^{-1}, which only he knows. This gives him
f_B^{-1}(f_B(f_A^{-1}(P))) = f_A^{-1}(P). He then looks up f_A, which is publicly
available, and applies this: f_A(f_A^{-1}(P)) = P, to obtain the message. Why not
just send f_B(P)? Bob is the only one who can decode this. The idea is
authentication, that is, being certain from Bob’s point of view that the message
really came from Alice. Suppose P is Alice’s verification: signature, social security
number, etc. If Bob receives f_B(P) it could be sent by anyone since f_B is public.
On the other hand, since only Alice supposedly knows f_A^{-1}, getting a reasonable
message from f_A(f_B^{-1}(f_B(f_A^{-1}(P)))) would verify that it is from Alice.
Applying f_B^{-1} alone should result in nonsense.
Getting a reasonable one-way function can be a formidable task. The most widely
used (at present) public key systems are based on difficult-to-invert number theoretic
functions. The original public key system was developed by Diffie and Hellman in
1976. It was followed closely by a second public key system developed by Rivest,
Shamir and Adleman, known as the RSA system. Although at present there are many
different public key systems in use, most are variations of these original two. The
variations are attempts to make the systems more secure. We will discuss four such
systems.
22.3.1 The Diffie–Hellman Protocol

Diffie and Hellman in 1976 developed the original public key idea using the discrete
log problem. In modular arithmetic it is easy to raise an element to a power but
difficult to determine, given an element, whether it is a power of another element.
Specifically, if G is a finite group, such as the cyclic multiplicative group of Z_p
where p is a prime, and h = g^k for some k, then the discrete log of h to the base g
is any integer t with h = g^t.
The rough form of the Diffie–Hellman public key system is as follows. Bob and
Alice will use a classical cryptosystem based on a key k with 1 < k < q − 1 where
q is a prime. It is the key k that Alice must share with Bob. Let g be a multiplicative
generator of Z_q^*, the multiplicative group of Z_q. The generator g is public. It is
known that this group is cyclic if q is a prime.

Alice chooses an a ∈ Z_q^* with 1 < a < q − 1. She makes public g^a. Bob
chooses a b ∈ Z_q^* and makes public g^b. The secret key is g^{ab}. Both Bob and
Alice, but presumably no one else, can discover this key. Alice knows her secret
power a and the value g^b is public from Bob. Hence she can compute the key
g^{ab} = (g^b)^a. The analogous situation holds for Bob. An attacker however only
knows g^a, g^b and g. Unless the attacker can solve the discrete log problem the key
exchange is secure.

Given q, g, g^a, g^b the problem of determining the secret key g^{ab} is called the
Diffie–Hellman problem. At present the only known solution is to solve the discrete
log problem, which appears to be very hard. In choosing the prime q and the generator
g it is assumed that the prime q is very large so that the order of g is very large. There
are algorithms to solve the discrete log problem if q is too small.
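The exchange rests on fast modular exponentiation, which Python provides as pow(g, a, q). A sketch with the classic toy parameters q = 23, g = 5 (far too small for real use, where q has hundreds of digits):

```python
def dh_public(g, secret, q):
    """The published value g^secret mod q."""
    return pow(g, secret, q)

def dh_shared(other_public, secret, q):
    """(g^b)^a = g^(ab) mod q, computable by each side from its own secret."""
    return pow(other_public, secret, q)

q, g = 23, 5   # toy parameters: 5 generates the cyclic group Z_23^*
a, b = 6, 15   # Alice's and Bob's secret exponents
```

Alice publishes pow(g, a, q) = 8 and Bob publishes pow(g, b, q) = 19; both then arrive at the shared key 2, while an eavesdropper sees only g, q, 8 and 19.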
One attack on the Difﬁe–Hellman key exchange is a man in the middle attack. Since
the basic protocol involves no authentication an attacker can pretend to be Bob and
get information from Alice and then pretend to be Alice and get information from
Bob. In this way the attacker could get the secret shared key. To prevent this, digital
signatures are often used (see [60] for a discussion of these).
The decision Diffie–Hellman problem is: given a prime q together with g^a mod q,
g^b mod q and g^c mod q, determine whether g^c = g^{ab}.
In 1997 it became known that the ideas of public key cryptography were developed
by British Intelligence Services prior to Difﬁe and Hellman.
22.3.2 The RSA Algorithm
In 1977 Rivest, Adelman and Shamir developed the RSA algorithm, which is presently
(in several variations) the most widely used public key cryptosystem. It is based on
the difﬁculty of factoring large integers and in particular on the fact that it is easier to
test for primality than to factor very large integers.
In basic form the RSA algorithm works as follows. Alice chooses two large primes
p_A, q_A and an integer e_A relatively prime to φ(p_A q_A) = (p_A − 1)(q_A − 1), where φ
is the Euler phi-function. It is assumed that these integers are chosen randomly to
minimize attack. Primality tests arise in the following manner. Alice first randomly
chooses a large odd integer m and tests it for primality. If it is prime it is used. If
not, she tests m + 2, m + 4 and so on until she gets her first prime p_A. She then
repeats the process to get q_A. Similarly she chooses another odd integer m and tests
until she gets an e_A relatively prime to φ(p_A q_A). The primes she chooses should be
quite large. Originally RSA used primes of approximately 100 decimal digits, but as
computing and attacks have become more sophisticated, larger primes have had to be
utilized. Presently keys with 400 decimal digits are not uncommon. Once Alice has
obtained p_A, q_A, e_A she lets n_A = p_A q_A and computes d_A, the multiplicative inverse
of e_A modulo φ(n_A). That is, d_A satisfies e_A d_A ≡ 1 mod (p_A − 1)(q_A − 1). She makes
338 Chapter 22 Algebraic Cryptography
public the enciphering key E_A = (n_A, e_A) and the encryption algorithm known to all
is
f_A(P) = P^{e_A} mod n_A
where P ∈ Z_{n_A} is a message unit. It can be shown that if (e_A, (p_A − 1)(q_A − 1)) = 1
and e_A d_A ≡ 1 mod (p_A − 1)(q_A − 1) then P^{e_A d_A} ≡ P mod n_A (see exercises).
Therefore the decryption algorithm is
f_A^{-1}(C) = C^{d_A} mod n_A.
Notice then that f_A^{-1}(f_A(P)) = P^{e_A d_A} ≡ P mod n_A, so it is the inverse.
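The key generation and the inverse property can be checked with a short Python sketch; the primes p_A = 61, q_A = 53 and exponent e_A = 17 are toy illustrative choices, far too small for real use (Python 3.8+ is assumed for the modular-inverse form of pow):

```python
# A sketch of basic RSA with toy primes; real moduli have
# hundreds of digits.

p_A, q_A = 61, 53                  # Alice's secret primes (toy values)
n_A = p_A * q_A                    # public modulus, 3233
phi = (p_A - 1) * (q_A - 1)        # phi(n_A) = 3120

e_A = 17                           # public exponent, gcd(e_A, phi) = 1
d_A = pow(e_A, -1, phi)            # secret inverse: e_A*d_A = 1 mod phi

def f_A(P):
    # public encryption algorithm f_A(P) = P^e_A mod n_A
    return pow(P, e_A, n_A)

def f_A_inv(C):
    # private decryption algorithm C^d_A mod n_A
    return pow(C, d_A, n_A)

P = 1234                           # a message unit in Z_n_A
assert f_A_inv(f_A(P)) == P        # P^(e_A d_A) = P mod n_A
```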
Now Bob makes the same type of choices to obtain p_B, q_B, e_B. He lets n_B =
p_B q_B and makes public his key E_B = (n_B, e_B).
If Alice wants to send a message to Bob that can be authenticated to be from Alice
she sends f_B(f_A^{-1}(P)). An attack then requires factoring n_A or n_B, which is much
more difficult than obtaining the primes p_A, q_A, p_B, q_B.
In practice suppose there is an N-letter alphabet which is to be used for both plain-
text and ciphertext. The plaintext message is to consist of k-vectors of letters and the
ciphertext message of l-vectors of letters with k < l. Each of the k plaintext letters
in a message unit P is then considered as an integer mod N and the whole plaintext
message is considered as a k-digit integer written to the base N (see example below).
The transformed message is then written as an l-digit integer mod N and the
digits are then considered as integers mod N, from which encrypted letters are found.
To ensure that the range of plaintext messages and ciphertext messages is the same,
k < l are chosen so that
N^k < n_U < N^l
for each user U, where n_U = p_U q_U. In this case any plaintext message P is an
integer less than N^k considered as an element of Z_{n_U}. Since n_U < N^l the image
under the power transformation corresponds to an l-digit integer written to the base
N and hence to an l-letter block. We give an example with relatively small primes. In
real-world applications the primes would be chosen to have over a hundred digits and
the computations and choices must be done using good computing machinery.
Example 22.3.1. Suppose N = 26, k = 2 and l = 3. Suppose further that Alice
chooses p_A = 29, q_A = 41, e_A = 13. Here n_A = 29 · 41 = 1189 so she makes
public the key E_A = (1189, 13). She then computes the multiplicative inverse d_A
of 13 mod 1120 = 28 · 40. Now suppose we want to send her the message TABU.
Since k = 2 the message units in plaintext are 2-vectors of letters so we separate the
message into TA BU. We show how to send TA. First the numerical sequence for the
letters TA mod 26 is (19, 0). We then use these as the digits of a 2-digit number to the
base 26. Hence
TA = 19 · 26 + 0 · 1 = 494.
Section 22.3 Public Key Cryptography 339
We now compute the power transformation using her e_A = 13 to evaluate
f(19, 0) = 494^13 mod 1189.
This is evaluated as 320. Now we write 320 to the base 26. By our choices of k, l this
can be written with a maximum of 3 digits to this base. Then
320 = 0 · 26^2 + 12 · 26 + 8.
The letters in the encoded message then correspond to (0, 12, 8) and therefore the
encryption of TA is AMI.
To decode the message Alice knows d_A and applies the inverse transformation.
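The computations of Example 22.3.1 can be verified directly in Python (3.8+ is assumed for the modular-inverse form of pow):

```python
# Reproducing Example 22.3.1: N = 26, k = 2, l = 3,
# p_A = 29, q_A = 41, e_A = 13.

N = 26
n_A, e_A = 29 * 41, 13                   # n_A = 1189
d_A = pow(e_A, -1, (29 - 1) * (41 - 1))  # inverse of 13 mod 1120

# Encode the 2-letter block TA as a 2-digit base-26 integer.
letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
P = letters.index("T") * 26 + letters.index("A")   # 19*26 + 0 = 494

# Apply the power transformation.
C = pow(P, e_A, n_A)                     # 494^13 mod 1189 = 320

# Write C with 3 digits to the base 26 and read off the letters.
digits = [(C // 26**2) % 26, (C // 26) % 26, C % 26]
cipher = "".join(letters[d] for d in digits)        # "AMI"

# Alice decodes with d_A and recovers the numerical block for TA.
assert pow(C, d_A, n_A) == P
```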
Since we have assumed that k < l, this seems to restrict the direction in which
messages can be sent. In practice, to allow messages to go between any two users the
following is done. Suppose Alice is sending an authenticated message to Bob. The
keys k_A = (n_A, e_A), k_B = (n_B, e_B) are public. If n_A < n_B Alice sends f_B(f_A^{-1}(P)).
On the other hand if n_A > n_B she sends f_A^{-1}(f_B(P)).
There have been attacks on RSA for special types of primes so care must be taken
in choosing the primes.
The computations and choices used in real-world implementations of the RSA algo-
rithm must be done with computers. Similarly, attacks on RSA are done via comput-
ers. As computing machinery gets stronger and factoring algorithms get faster, RSA
becomes less secure and larger and larger primes must be used. In order to combat
this, other public key methods are in various stages of ongoing development. RSA,
Diffie–Hellman and many related public key cryptosystems use properties of abelian
groups. In recent years a great deal of work has been done to encrypt and decrypt us-
ing certain nonabelian groups such as linear groups or braid groups. We will discuss
these later in the chapter.
22.3.3 The ElGamal Protocol
The ElGamal cryptosystem is a method to use the Diffie–Hellman key exchange
method to do encryption. The method works as follows and uses the fact that hash
functions can also be used with encryption. Suppose that Bob and Alice want to
communicate openly. They have exchanged a secret key k that supposedly only they
know. Let f_k be an encryption function or encryption algorithm based on the key k.
Alice wants to send the message m to Bob and m is given as a binary bit string. Alice
sends to Bob
f_k(m) ⊕ h(k)
where ⊕ is addition modulo 2 and h is a hash function.
Bob knows the key k and hence its hash value h(k). He now computes
f_k(m) ⊕ h(k) ⊕ h(k).
Since addition modulo 2 has order 2 we have
f_k(m) ⊕ h(k) ⊕ h(k) = f_k(m).
Bob now applies the decryption algorithm f_k^{-1} to decode the message.
Hence if k is a publicly exchanged secret key and f_k is a cryptosystem based on
k then the above format allows an encryption algorithm to go with the key exchange.
The ElGamal system does this with the Difﬁe–Hellman key exchange protocol.
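The masking format above can be sketched as follows. The hash h is taken here to be truncated SHA-256 and the encryption f_k is a stand-in toy XOR stream cipher; both are illustrative choices for the sketch, not part of the text's protocol:

```python
import hashlib

def h(key: bytes, length: int) -> int:
    # Hash of the key, truncated to the message length, read as an integer.
    return int.from_bytes(hashlib.sha256(key).digest()[:length], "big")

def f_k(m: int, key: bytes, length: int) -> int:
    # Stand-in encryption based on k: a toy XOR stream (self-inverse),
    # keyed off a domain-separated hash of k.
    return m ^ h(key + b"|enc", length)

k = b"shared secret from Diffie-Hellman"
msg = b"attack at dawn"
L = len(msg)
m = int.from_bytes(msg, "big")

sent = f_k(m, k, L) ^ h(k, L)       # Alice transmits f_k(m) XOR h(k)

# Bob knows k and hence h(k); since XOR has order 2 the mask cancels:
fkm = sent ^ h(k, L)                # f_k(m) XOR h(k) XOR h(k) = f_k(m)
assert f_k(fkm, k, L) == m          # applying f_k^{-1} (= f_k here) recovers m
```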
Suppose that Bob and Alice want to communicate openly. Alice chooses a prime
q and a generator g of the multiplicative group Z_q^*. q should be large enough to
thwart the known discrete logarithm algorithms. Alice then chooses an integer a with
1 < a < q − 1. She then computes
A = g^a mod q.
Her public key is then (q, g, A). Bob wants to send a message M to Alice. He first
encodes the message as an integer m mod q. For Bob to now send an encrypted message
m to Alice he chooses a random integer b with 1 < b < q − 2 and computes
T = g^b mod q.
Bob then sends to Alice the integer
c = A^b m mod q,
that is, Bob encrypts the whole message by multiplying it by the Diffie–Hellman
shared key. The complete ElGamal ciphertext is then the pair (T, c).
How does Alice decode the message? Given the integer m she knows how to
reconstruct the plaintext message M, so she must recover the mod q integer m. As in
the Diffie–Hellman key exchange she can compute the shared key A^b = T^a. She can
then divide c by this Diffie–Hellman key g^ab to obtain m. To avoid having to find the
inverse of T^a mod q, which can be difficult, she computes the exponent x = q − 1 − a.
The inverse is then T^x mod q.
For each new ElGamal encryption a new exponent b is chosen, so that there is a
random component of ElGamal which improves the security.
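The full round trip can be sketched in Python; the prime q, generator g and the exponents are illustrative toy values, far below secure sizes:

```python
# A sketch of ElGamal over Z_q* with a toy prime.

q, g = 467, 2

# Alice's key pair: secret a, public key (q, g, A).
a = 153
A = pow(g, a, q)

# Bob encrypts m with a fresh random exponent b: ciphertext is (T, c).
m, b = 331, 197
T = pow(g, b, q)                 # T = g^b mod q
c = (pow(A, b, q) * m) % q       # c = A^b * m mod q

# Alice recovers m. Instead of inverting T^a mod q she uses the
# exponent x = q - 1 - a, so T^x = (T^a)^(-1) by Fermat's little theorem.
x = q - 1 - a
recovered = (c * pow(T, x, q)) % q
assert recovered == m
```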
Breaking the ElGamal system is as difficult as breaking the Diffie–Hellman pro-
tocol and hence is based on the difficulty of the discrete log problem. However,
ElGamal has the advantage that the choice of primes is random. As mentioned, the
primes should be chosen large enough to not be susceptible to known discrete log
algorithms. Presently the primes should be of binary length at least 512.
22.3.4 Elliptic Curves and Elliptic Curve Methods
A very powerful approach which has had wide-ranging applications in cryptography
is to use elliptic curves. If F is a field of characteristic not equal to 2 or 3 then an
elliptic curve over F is the locus of points (x, y) ∈ F × F satisfying the equation
y^2 = x^3 + ax + b with 4a^3 + 27b^2 ≠ 0.
We denote by O a single point at infinity and let
E(F) = {(x, y) ∈ F × F : y^2 = x^3 + ax + b} ∪ {O}.
The important thing about elliptic curves from the viewpoint of cryptography is that
a group structure can be placed on E(F). In particular we define the operation + on
E(F) by
1. O + P = P for any point P ∈ E(F).
2. If P = (x, y) then −P = (x, −y) and −O = O.
3. P + (−P) = O for any point P ∈ E(F).
4. If P_1 = (x_1, y_1), P_2 = (x_2, y_2) with P_1 ≠ −P_2 then
P_1 + P_2 = (x_3, y_3) with
x_3 = m^2 − (x_1 + x_2), y_3 = −m(x_3 − x_1) − y_1
and
m = (y_2 − y_1)/(x_2 − x_1) if x_2 ≠ x_1
and
m = (3x_1^2 + a)/(2y_1) if x_2 = x_1.
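The operation can be sketched as a short Python routine over a prime field; the curve y^2 = x^3 + 2x + 3 over F_97 and the point (3, 6) are illustrative choices, not cryptographic parameters:

```python
# The group operation on E(F_p) for a toy curve.
p, a, b = 97, 2, 3          # 4a^3 + 27b^2 = 275 is nonzero mod 97
O = None                    # the point at infinity

def on_curve(P):
    if P is O:
        return True
    x, y = P
    return (y * y - (x ** 3 + a * x + b)) % p == 0

def neg(P):
    if P is O:
        return O
    x, y = P
    return (x, (-y) % p)                  # -P = (x, -y)

def add(P1, P2):
    if P1 is O:
        return P2                         # O + P = P
    if P2 is O:
        return P1
    x1, y1 = P1
    x2, y2 = P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                          # P + (-P) = O
    if P1 == P2:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (m * m - x1 - x2) % p
    y3 = (-(m * (x3 - x1) + y1)) % p      # y3 = -m(x3 - x1) - y1
    return (x3, y3)

P = (3, 6)                  # 6^2 = 36 = 3^3 + 2*3 + 3 mod 97
assert on_curve(P)
assert add(P, neg(P)) is O
assert on_curve(add(P, P))  # doubling stays on the curve
```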
This operation has a very nice geometric interpretation if F = R, the real numbers.
It is known as the chord and tangent method. If P_1 ≠ P_2 are two points on the curve
then the line through P_1, P_2 intersects the curve at another point P_3. If we reflect P_3
through the x-axis we get P_1 + P_2. If P_1 = P_2 we take the tangent line at P_1.
With this operation E(F) becomes an abelian group (due to Cassels) whose struc-
ture can be worked out.
Theorem 22.3.2. E(F) together with the operation defined above forms an abelian
group. If F is a finite field of order p^k then E(F) is either cyclic or has the structure
E(F) = Z_{m_1} × Z_{m_2} with m_1 | m_2 and m_1 | (p^k − 1).
A comprehensive description and discussion of elliptic curve methods can be found
in Crandall and Pomerance [52].
The groups of elliptic curves can be used for cryptography as developed by Koblitz
and others. If q is a prime and a, b ∈ Z_q then we can form the elliptic curve
E(q; a, b) and the corresponding elliptic curve abelian group. In this group the Diffie–
Hellman key exchange protocol and the corresponding ElGamal encryption system
can be implemented. Care must be taken that the discrete log problem in E(q; a, b) is
difficult. The curve is then called a cryptographically secure elliptic curve.
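A Diffie–Hellman exchange in such a group can be sketched with double-and-add scalar multiplication; the toy curve over F_97, the base point and the secret scalars are illustrative and nowhere near cryptographically secure:

```python
# Diffie-Hellman in the elliptic curve group of y^2 = x^3 + 2x + 3 over F_97.
p, a, b = 97, 2, 3
O = None                                   # point at infinity

def add(P1, P2):
    if P1 is O: return P2
    if P2 is O: return P1
    x1, y1 = P1; x2, y2 = P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return O
    if P1 == P2:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (-(m * (x3 - x1) + y1)) % p)

def mul(k, P):
    # double-and-add: computes k*P in O(log k) group operations
    R = O
    while k > 0:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

G = (3, 6)                 # public base point on the curve
alice_secret, bob_secret = 11, 23
A = mul(alice_secret, G)   # Alice publishes a*G
B = mul(bob_secret, G)     # Bob publishes b*G

# Both compute the same shared point (ab)*G.
assert mul(alice_secret, B) == mul(bob_secret, A)
```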
Elliptic curve public-key cryptosystems are at present the most important commu-
tative alternatives to the use of the RSA algorithm. There are several reasons for this.
They are more efficient in many cases than RSA, and keys in elliptic curve systems are
much smaller than keys in RSA. It is felt that it is important to have good workable
alternatives to RSA in the event that factoring algorithms become strong enough to
compromise RSA encryption.
22.4 Noncommutative Group-based Cryptography
The public key cryptosystems and public key exchange protocols that we have dis-
cussed, such as the RSA algorithm, Diffie–Hellman, ElGamal and elliptic curve
methods, are number theory based and hence depend on the structure of abelian groups.
Although there have been no overall successful attacks on the standard methods, there
is a feeling that the strength of computing machinery has made these techniques theo-
retically susceptible to attack. As a result of this, there has been a recent active line of
research to develop cryptosystems and key exchange protocols using noncommutative
cryptographic platforms. This line of investigation has been given the broad title of
noncommutative algebraic cryptography. Since most of the cryptographic platforms
are groups this is also known a