
"Mastering the contents of Hall's book will lead a student to the frontiers of group theory. He will be well equipped to read any recent literature and start original research himself in this field ... This remarkable book undoubtedly will become a standard text on group theory." — Eugene Guth, *American Scientist*

"This is a book which I wish I could put in the hands of every graduate student who has shown an interest in the elements of group theory. The first 10 chapters would give him an excellent foundation in group theory, and there would still remain 10 chapters for his delight." — Richard Hubert Bruck

Publisher: Dover Publications · Released: Jan 10, 2018 · ISBN: 9780486828244 · Format: book


**1. INTRODUCTION**

A large part of algebra is concerned with systems of elements which, like numbers, may be combined by addition or multiplication or both. We are given a system whose elements are designated by letters *a, b, c*, · · ·. We write *S* = *S*(*a, b, c*, · · ·) for this system. The properties of these systems depend upon which of the following basic laws hold:

*A*0. Closure Law. *a* + *b* = *c* exists and is a unique element of *S*.

*A*1. Associative Law. (*a* + *b*) + *c* = *a* + (*b* + *c*).

*A*2. Commutative Law. *a* + *b* = *b* + *a*.

*A*3. Existence of Zero. An element 0 exists such that *a* + 0 = 0 + *a* = *a* for every *a*.

*A*4. Existence of Negative. For every *a* an element −*a* exists such that *a* + (−*a*) = (−*a*) + *a* = 0.

*M*0. Closure Law. *ab* = *d* exists and is a unique element of *S*.

*M*1. Associative Law. (*ab*)*c* = *a*(*bc*).

*M*2. Commutative Law. *ab* = *ba*.

*M*3. Existence of Unit. An element 1 exists such that *a*1 = 1*a* = *a* for every *a*.

*M*4. Existence of Inverse. For every *a* ≠ 0 an element *a*−1 exists such that *aa*−1 = *a*−1*a* = 1.

*D*1. Distributive Law. *a*(*b* + *c*) = *ab* + *ac*.

*D*2. Distributive Law. (*b* + *c*)*a* = *ba* + *ca*.

These mean that, for every ordered pair of elements, *a, b *of *S, a *+ *b *= *c *exists and is a unique element of *S*, and that also *ab *= *d *exists and is a unique element of *S*.

DEFINITION: *A system satisfying all these laws is called a field. A system satisfying A*0, −1, −2, −3, −4, *M*0, −1, *and D*1, −2 *is called a ring*.

It should be noted that *A*0–*A*4 are exactly parallel to *M*0–*M*4, except for the nonexistence of the inverse of 0 in *M*4. In the distributive laws, however, addition and multiplication behave quite differently. This parallelism between addition and multiplication is exploited in the use of logarithms, where the basic correspondence between them is given by the law:

log (*ab*) = log *a* + log *b*.

In general an *n*-ary operation in a set *S* is a function *f* = *f*(*a*1, · · ·, *an*) of *n* arguments (*a*1, · · ·, *an*) which are elements of *S* and whose value *f*(*a*1, · · ·, *an*) = *b* is a unique element of *S* when *f* is defined for these arguments. If, for every choice of *a*1, · · ·, *an* in *S*, *f*(*a*1, · · ·, *an*) is defined, we say that the operation *f* is *well defined* or that the set *S* is *closed* with respect to the operation *f*.

In a field *F*, addition and multiplication are well-defined binary operations, while the inverse operation *f*(*a*) = *a*−1 is a unary operation defined for every element except zero.

A very fundamental concept of modern mathematics is that of a *mapping *of a set *S *into a set *T*.

DEFINITION: *A mapping α of a set S into a set T is a rule which assigns to each x of the set S a unique y of the set T*. Symbolically we write this in either of the notations:

*y* = (*x*)*α* or *α*: *x* → *y*.

The element *y *is called the *image *of *x *under *α*. If every *y *of the set *T *is the image of at least one *x *in *S*, we say that *α *is a mapping of *S onto T*.

The mappings of a set into (or onto) itself are of particular importance. For example a rotation in a plane may be regarded as a mapping of the set of points in the plane onto itself. Two mappings *α *and *β *of a set *S *into itself may be combined to yield a third mapping of *S *into itself, according to the following definition.

DEFINITION: *Given two mappings α, β, of a set S into itself, we define a third mapping γ of S into itself by the rule: If y *= (*x*)*α and z *= (*y*)*β, then z *= (*x*)*γ. The mapping γ is called the product of α and β, and we write γ *= *αβ*.

Here, since *y *= (*x*)*α *is unique and *z *= (*y*)*β *is unique, *z *= [(*x*)*α*]*β *= (*x*)*γ *is defined for every *x *of *S *and is a unique element of *S*.

THEOREM 1.2.1. *The mappings of a set S into itself satisfy M*0, *M*1, *and M*3 *if multiplication is interpreted to be the product of mappings*.

*Proof:* It has already been noted that *M*0 is satisfied. Let us consider *M*1. Let *α, β, γ* be three given mappings. Take any element *x* of *S* and let *y* = (*x*)*α*, *z* = (*y*)*β*, and *w* = (*z*)*γ*. Then (*x*)[(*αβ*)*γ*] = (*z*)*γ* = *w*, and (*x*)[*α*(*βγ*)] = (*y*)(*βγ*) = *w*. Since both mappings, (*αβ*)*γ* and *α*(*βγ*), give the same image for every *x* in *S*, it follows that (*αβ*)*γ* = *α*(*βγ*).

As for *M*3, let 1 be the mapping such that (*x*)1 = *x *for every *x *in *S*. Then 1 is a unit in the sense that for every mapping *α, α*1 = 1*α *= *α*.

In general, neither *M*2 nor *M*4 holds for mappings. But *M*4 holds for an important class of mappings, namely, the one-to-one mappings of *S *onto itself.

DEFINITION: *A mapping α of a set S onto T is said to be one-to-one* (*which we will frequently write* 1–1) *if every element y of T is the image y* = (*x*)*α of exactly one element x of S*. If there is a one-to-one mapping of a set *S* onto a set *T*, we say that *S* and *T* have the same *cardinal number* of elements.

THEOREM 1.2.2. *The one-to-one mappings of a set S onto itself satisfy M*0, *M*1, *M*3, *and M*4.

*Proof:* Since Theorem 1.2.1 covers *M*0, *M*1, and *M*3, we need only verify *M*4. If *α* is a one-to-one mapping of *S* onto itself, then by definition, for every *y* of *S* there is exactly one *x* of *S* such that *y* = (*x*)*α*. This assignment of a unique *x* to each *y* defines a mapping *τ* of *S* onto itself, (*y*)*τ* = *x*. From the definition of *τ* we see that (*x*)(*ατ*) = *x* for every *x* in *S* and (*y*)(*τα*) = *y* for every *y* in *S*. Hence, *ατ* = *τα* = 1, and *τ* is a mapping satisfying the requirements for *α*−1 in *M*4.

We call a one-to-one mapping of a set onto itself a *permutation* of the set. Note that the product rule for permutations given here is obtained by multiplying from left to right. Some authors define the product so that multiplication is from right to left.
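
The two conventions can be compared directly in code. A minimal sketch in Python (mine, not the book's), writing a permutation of {0, 1, 2} as a tuple `p` with `p[x]` the image (*x*)*p*:

```python
# Permutations of {0, ..., n-1} as tuples: p[x] is the image (x)p.

def product(alpha, beta):
    """The book's left-to-right product: (x)(alpha beta) = ((x)alpha)beta."""
    return tuple(beta[alpha[x]] for x in range(len(alpha)))

def compose_right_to_left(alpha, beta):
    """The opposite convention some authors use: apply beta first, then alpha."""
    return tuple(alpha[beta[x]] for x in range(len(alpha)))

a = (1, 0, 2)   # interchanges 0 and 1
b = (0, 2, 1)   # interchanges 1 and 2

left_right = product(a, b)                 # first a, then b
right_left = compose_right_to_left(a, b)   # first b, then a
```

For these two permutations the conventions disagree, which is exactly why the choice must be fixed once and for all.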

We see that, as single operations, the laws governing addition and multiplication are the same. Of these, all but the commutative law are satisfied by the product rule for the one-to-one mappings of a set onto itself. The laws obeyed by these one-to-one mappings are those which we shall use to define a group.

DEFINITION (FIRST DEFINITION OF A GROUP): *A group G is a set of elements G*(*a, b, c*, · · ·) *and a binary operation called product such that: *

*G*0. Closure Law. *For every ordered pair a, b of elements of G, the product ab *= *c exists and is a unique element of G*.

*G*1. Associative Law. (*ab*)*c *= *a*(*bc*).

*G*2. Existence of Unit. *An element *1 *exists such that *1*a *= *a*1 = *a for every a of G*.

*G*3. Existence of Inverse. *For every a of G there exists an element a*−1 *of G such that a*−1*a *= *aa*−1 = 1.

These laws are redundant. We may omit one-half of *G*2 and *G*3, and replace them by:

*G*2.* *An element *1 *exists such that *1*a *= *a for every a of G*.

*G*3.* *For every a of G there exists an element x of G such that xa *= 1.

We can show that these in turn imply *G*2 and *G*3. For a given *a* let

*xa* = 1 and *yx* = 1,

by *G*3.* Then we have

*ax* = 1(*ax*) = (*yx*)(*ax*) = *y*[*x*(*ax*)] = *y*[(*xa*)*x*] = *y*(1*x*) = *yx* = 1,

so that *G*3 is satisfied. Also,

*a*1 = *a*(*xa*) = (*ax*)*a* = 1*a* = *a*,

so that *G*2 is satisfied.

The uniqueness of the unit 1 and of an inverse *a*−1 are readily established (see Ex. 13). We could, of course, also replace *G*2 and *G*3 by the assumption of the existence of 1 and *x* such that *a*1 = *a* and *ax* = 1. But if we assume that they satisfy *a*1 = *a* and *xa* = 1, the situation is slightly different.

There are a number of ways of bracketing an ordered sequence *a*1*a*2 · · · *an *to give it a value by calculating a succession of binary products. For *n *= 3 there are just two ways of bracketing, namely, (*a*1*a*2)*a*3 and *a*1(*a*2*a*3), and the associative law asserts the equality of these two products. An important consequence of the associative law is the *generalized associative law*.

*All ways of bracketing an ordered sequence a*1*a*2 · · · *an to give it a value by calculating a succession of binary products yield the same value*.

It is a simple matter, using induction on *n*, to prove that the generalized associative law is a consequence of the associative law (see Ex. 1).

Another definition may be given which does not explicitly postulate the existence of the unit.

DEFINITION (SECOND DEFINITION OF A GROUP): *A group G is a set of elements G*(*a, b*, · · ·) *such that *

1) *For every ordered pair a, b of elements of G, a binary product ab is defined such that ab *= *c is a unique element of G*.

2) *For every element a of G a unary operation of inverse is defined such that a*−1 *is a unique element of G*.

3) *Associative Law*. (*ab*)*c *= *a*(*bc*).

4) *Inverse Laws*. *a*−1(*ab*) = *b *= (*ba*)*a*−1.

It is an easy task to show that any set which satisfies the axioms of the first definition also satisfies those of the second. To show the converse, assume the axioms of the second definition and consider the relation:

*bb*−1 = *a*−1[*a*(*bb*−1)] = *a*−1[(*ab*)*b*−1] = *a*−1*a*.

When *a* = *b*, we see that *a*−1*a* = *aa*−1, and consequently the element *a*−1*a* = *aa*−1 is the same for every *a* in *G*. Let us call this element 1,

*a*−1*a* = *aa*−1 = 1 for every *a* of *G*,

so that *G*3 is satisfied. Also,

1*b* = (*a*−1*a*)*b* = *a*−1(*ab*) = *b*,

and

*b*1 = *b*(*aa*−1) = (*ba*)*a*−1 = *b*,

and *G*2 is satisfied. Therefore the two definitions of a group are equivalent.

There is a third definition of a group as follows:

DEFINITION (THIRD DEFINITION OF A GROUP): *A group G is a set of elements G*(*a, b*, · · ·) *and a binary operation a/b such that: *

*L*0. *For every ordered pair a, b of elements of G, a/b is defined such that a/b *= *c is a unique element of G*.

*L*1. *a/a *= *b/b*.

*L*2. *a/*(*b/b*) = *a*.

*L*3. (*a/a*)/(*b/c*) = *c/b*.

*L*4. (*a/c*)/(*b/c*) = *a/b*.

In terms of this operation, let us define a unary operation of inverse *b*−1 by the rule

*b*−1 = (*b*/*b*)/*b*.

Here

(*b*−1)−1 = (*b*/*b*)/[(*b*/*b*)/*b*] = *b*/(*b*/*b*) = *b*,

using in turn *L*3 and *L*2. We now define a binary operation of product by the rule

*ab* = *a*/*b*−1.

Then *a*/*b* = *a*/(*b*−1)−1 = *ab*−1. Let us write 1 for the common value of *a*/*a* = *b*/*b* as given by *L*1. Then *L*1 becomes *aa*−1 = 1, whence also for any *a*, 1 = *a*−1(*a*−1)−1 = *a*−1*a*. Thus *G*3 of the first definition holds. In *b*−1 = (*b*/*b*)/*b*, put *b* = 1, whence 1−1 = 11−1, and so 1 = 1/1 = 11−1 = 1−1. *L*2 now becomes *a*1−1 = *a*1 = *a*. By definition *b*−1 = 1/*b* = 1*b*−1, and with *b* = *a*−1, this gives (*a*−1)−1 = 1(*a*−1)−1, or *a* = 1*a*. Thus *G*2 of the first definition holds. *L*3 now becomes 1(*bc*−1)−1 = *cb*−1, whence (*bc*−1)−1 = *cb*−1. In *L*4, put *a* = *x*, *b* = 1, *c* = *y*−1; whence (*xy*)(1*y*)−1 = *x*1−1 = *x* or (*xy*)*y*−1 = *x*. Now, for any *x, y, z*, put *a* = *xy*, *b* = *z*−1, *c* = *y*. Then *ac*−1 = (*xy*)*y*−1 = *x*, and *L*4 becomes (*ac*−1)(*bc*−1)−1 = *ab*−1, whence (*ac*−1)(*cb*−1) = *ab*−1. But in terms of *x, y, z* this becomes *x*(*yz*) = (*xy*)*z*, the associative law *G*1. Thus this definition of group implies the first definition. But in terms of the first definition if we put *ab*−1 = *a*/*b*, we easily find that the laws *L*0–*L*4 are satisfied, and therefore the definitions are equivalent.
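
The laws *L*0–*L*4 can also be verified mechanically for a concrete group. A sketch in Python (mine, not the book's), taking *G* to be all permutations of three symbols under the left-to-right product and defining *a*/*b* = *ab*−1:

```python
from itertools import permutations, product as cartesian

def prod(p, q):
    # left-to-right product: first p, then q
    return tuple(q[p[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

def over(a, b):
    """The third definition's operation a/b = a b^{-1}."""
    return prod(a, inv(b))

G = list(permutations(range(3)))
ok_L1 = all(over(a, a) == over(b, b) for a in G for b in G)
ok_L2 = all(over(a, over(b, b)) == a for a in G for b in G)
ok_L3 = all(over(over(a, a), over(b, c)) == over(c, b)
            for a, b, c in cartesian(G, repeat=3))
ok_L4 = all(over(over(a, c), over(b, c)) == over(a, b)
            for a, b, c in cartesian(G, repeat=3))
```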

There are systems which satisfy some but not all the axioms for a group. The following are the main types:

DEFINITION: *A quasi-group Q is a system of elements Q*(*a, b, c*, · · ·) *in which a binary operation of product ab is defined such that, in ab *= *c, any two of a, b, c determine the third uniquely as an element of Q*.

DEFINITION: *A loop is a quasi-group with a unit *1 *such that *1*a *= *a*1 = *a for every element a*.

DEFINITION: *A semi-group is a system S*(*a, b, c*, · · ·) *of elements with a binary operation of product ab such that* (*ab*)*c* = *a*(*bc*).

A group clearly satisfies all these definitions. We may, with Kurosch, further define a group as a set which is both a semi-group and a quasi-group. As a semi-group *G*0 and *G*1 are satisfied. Let *t *be the unique element such that *tb *= *b *for some particular *b*, and let *y *be determined by *b *and *a *so that *by *= *a*. Then (*tb*)*y *= *by *and *t*(*by*) = *by*, or *ta *= *a *for any *a*, and *G*2* is satisfied. In a quasigroup *G*3* is also satisfied. But we have already shown that these properties define a group.

We call a system with a binary product and unary inverse satisfying

*a*−1(*ab*) = *b*, (*ab*)*b*−1 = *a*,

a quasi-group with the inverse property, this law being the inverse property. We must show that the product defines a quasi-group. If *ab* = *c*, we find *b* = *a*−1(*ab*) = *a*−1*c*, and *a* = (*ab*)*b*−1 = *cb*−1. Thus *a* and *b* determine *c* uniquely; and also given *c* and *a*, there is at most one *b*, and given *c* and *b*, there is at most one *a*. Write *a*(*a*−1*c*) = *w*. Then *a*−1[*a*(*a*−1*c*)] = *a*−1*w*, whence *a*−1*c* = *a*−1*w*. Then (*a*−1)−1(*a*−1*c*) = (*a*−1)−1(*a*−1*w*), whence *c* = *w*. Hence *a*(*a*−1*c*) = *c*, and similarly, (*cb*−1)*b* = *c*, and the system is a quasi-group. We note that an inverse quasi-group need not be a loop. With three elements *a, b, c* and relations *a*² = *a, ab* = *ba* = *c, b*² = *b, bc* = *cb* = *a, c*² = *c, ca* = *ac* = *b*, we find that each element is its own inverse, and we have a quasi-group with inverse property but no unit.
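
The three-element system just described is small enough to check exhaustively; a sketch (the table below transcribes the relations in the text, the code around it is mine):

```python
# a^2 = a, ab = ba = c, b^2 = b, bc = cb = a, c^2 = c, ca = ac = b
table = {
    ('a', 'a'): 'a', ('a', 'b'): 'c', ('a', 'c'): 'b',
    ('b', 'a'): 'c', ('b', 'b'): 'b', ('b', 'c'): 'a',
    ('c', 'a'): 'b', ('c', 'b'): 'a', ('c', 'c'): 'c',
}
elts = ['a', 'b', 'c']
mul = lambda x, y: table[(x, y)]
inv = lambda x: x  # each element is its own inverse

# Quasi-group: every row and every column is a permutation of the elements.
latin = (all(sorted(mul(x, y) for y in elts) == elts for x in elts) and
         all(sorted(mul(x, y) for x in elts) == elts for y in elts))

# Inverse property: x^{-1}(xy) = y and (xy)y^{-1} = x.
inverse_property = all(mul(inv(x), mul(x, y)) == y and mul(mul(x, y), inv(y)) == x
                       for x in elts for y in elts)

# No unit: no e with ex = xe = x for every x.
has_unit = any(all(mul(e, x) == x == mul(x, e) for x in elts) for e in elts)
```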

A subset of the elements of a group *G *may itself form a group with respect to the product as defined in *G*. Such a set of elements *H *is called a *subgroup*.

In any group *G *the unit 1 satisfies 1² = 1. Conversely, if *x *is an element of *G *such that *x*² = *x*, then *x *= *x*−1(*x*²) = *x*−1*x *= 1. Thus the unit of a subgroup *H*, since it satisfies *x*² = *x*, must be the same as the unit of the whole group *G*.

THEOREM 1.4.1. *A non-empty subset H of a group G is a subgroup if the two following conditions hold: *

*S*1. *If a and b belong to H, then ab belongs to H*.

*S*2. *If a belongs to H, then a*−1 *belongs to H*.

*Proof: *The two properties given guarantee the validity of *G*0, *G*2, *G*3 in *H*. And since products in *H *agree with those in *G, G*1 is also satisfied in *H*.
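
Theorem 1.4.1 translates directly into a finite membership test. A sketch (mine) for subsets of the permutations of three symbols:

```python
from itertools import permutations

def prod(p, q):
    return tuple(q[p[i]] for i in range(len(p)))  # left-to-right product

def inv(p):
    r = [0] * len(p)
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

def is_subgroup(H):
    """S1: ab in H for all a, b in H; S2: a^{-1} in H for all a in H; H non-empty."""
    H = set(H)
    s1 = all(prod(a, b) in H for a in H for b in H)
    s2 = all(inv(a) in H for a in H)
    return bool(H) and s1 and s2

S3 = set(permutations(range(3)))
identity = (0, 1, 2)
order_two = {identity, (1, 0, 2)}     # closed under products and inverses
not_closed = {identity, (1, 2, 0)}    # (1, 2, 0) is a 3-cycle: its square is missing
```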

There are various relationships between pairs of groups which are worth considering. The first such relationship is that of *isomorphism*.

DEFINITION: *A one-to-one mapping α of the elements of a group G onto those of a group H is called an isomorphism if whenever a*1 → *b*1 *and a*2 → *b*2, *then a*1*a*2 → *b*1*b*2.

EXAMPLE 1. Since all the permutations of a set form a group (Theorem 1.2.2), any set of permutations satisfying *S*1 and *S*2 will form a group which is a subgroup of the full group of permutations. For example, let us consider the following two such subgroups:

If we map *xi *of *G*1 onto *yi *of *G*2, we find that products correspond in every instance. Hence *G*1 and *G*2 are isomorphic.

More generally we may have a mapping (usually many to one) of the elements of one group *G *onto those of another group *H*, which we call a *homomorphism *if the mapping preserves products.

DEFINITION: *A mapping G *→ *H of the elements of a group G onto those of a group H is called a homomorphism if whenever g*1 → *h*1 *and g*2 → *h*2 *then g*1*g*2 → *h*1*h*2.

In the homomorphism *G *→ *H *let 1 be the identity of *G *and let 1 → *e*, where *e *is in *H*. Then 1² → *e*². Since 1² = 1, then *e*² = *e*. We see that *e *is therefore the identity of *H*. Also if *g *→ *h *and *g*−1 → *k*, then *gg*−1 → *hk*, and so 1 → *hk *= *e*. Therefore *k *= *h*−1 and the mapping takes inverses into inverses. We may observe that a one-to-one homomorphism is an isomorphism.
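
These two facts are easy to watch on a concrete homomorphism. A sketch (my example, not the text's): reduction modulo 3 maps the additive group of integers mod 6 onto the additive group of integers mod 3.

```python
# The homomorphism phi: Z_6 -> Z_3, x -> x mod 3 (additive groups).
phi = lambda x: x % 3

# Products (here: sums) are preserved ...
preserves_products = all(phi((x + y) % 6) == (phi(x) + phi(y)) % 3
                         for x in range(6) for y in range(6))
# ... the identity goes to the identity ...
identity_to_identity = (phi(0) == 0)
# ... and inverses go to inverses: -x in Z_6 maps to -phi(x) in Z_3.
inverses_to_inverses = all(phi((6 - x) % 6) == (3 - phi(x)) % 3 for x in range(6))
```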

EXAMPLE 2. If *G*1 is the permutation group above and *H *is the multiplicative group of the two real numbers 1, −1, then we have a homomorphism:

Not only are permutation groups of interest in themselves, but also every group is isomorphic to a permutation group.

THEOREM 1.4.2 (CAYLEY). *Every group G is isomorphic to a permutation group of its own elements*.

*Proof:* For every *g* of *G*, define the mapping *R*(*g*): *x* → *xg*, *x* ranging over *G*. For a fixed *g* this is a mapping of the elements of *G* onto themselves, since for a given *y*, *yg*−1 → (*yg*−1)*g* = *y*. It is also one-to-one, since from *x*1*g* = *x*2*g* it follows that *x*1 = *x*2. Thus *R*(*g*) is a permutation for each *g*. The mapping *R*(*g*1)*R*(*g*2) takes *x* into (*xg*1)*g*2 = *x*(*g*1*g*2), and so, *R*(*g*1)*R*(*g*2) = *R*(*g*1*g*2). Moreover, in *R*(*g*) the image of 1 is *g*. Hence if *g*1 ≠ *g*2, then *R*(*g*1) ≠ *R*(*g*2), and the correspondence *g* ↔ *R*(*g*) is an isomorphism. We observe in addition that *R*(1) = *I*, the identical mapping, and that *R*(*g*−1)*R*(*g*) = *I*, so that *R*(*g*−1) = [*R*(*g*)]−1.
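
The right regular representation can be built explicitly for a small group. A sketch (mine) with *G* the permutations of three symbols, so that each *R*(*g*) is itself a permutation of the six group elements:

```python
from itertools import permutations

def prod(p, q):
    return tuple(q[p[i]] for i in range(len(p)))  # left-to-right product

G = sorted(permutations(range(3)))       # S3 in a fixed order
index = {g: i for i, g in enumerate(G)}

def R(g):
    """Right multiplication x -> xg, encoded as a permutation of {0, ..., 5}."""
    return tuple(index[prod(x, g)] for x in G)

# R(g1)R(g2) = R(g1 g2), and distinct g give distinct R(g),
# so g <-> R(g) is an isomorphism onto a permutation group.
homomorphic = all(prod(R(g1), R(g2)) == R(prod(g1, g2)) for g1 in G for g2 in G)
faithful = len({R(g) for g in G}) == len(G)
```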

The mappings *R*(*g*) are called the *right regular representation* of *G*; similarly, the mappings *L*(*g*): *x* → *gx* are called the *left regular representation* of *G*. We find that the set of mappings *L*(*g*) is anti-isomorphic to *G*. This means that the correspondence *g* ↔ *L*(*g*) is one-to-one and that it *reverses* multiplication, i.e., *L*(*g*1*g*2) = *L*(*g*2)*L*(*g*1).

If we have a set of subgroups *Hi* of *G*, where *i* ranges over a system of indices *I*, then the set of elements of *G*, each of which belongs to every *Hi*, will satisfy *S*1 and *S*2 and so be a subgroup *H* called the *intersection* of the *Hi*. Moreover, the set of all finite products, *g*1*g*2 · · · *gs*, where each *gi* belongs to some *Hj*, also satisfies *S*1 and *S*2. This set forms a subgroup *T* called the *union* of the *Hi*. For the intersection and union of two subgroups *H* and *K* we write *H* ∩ *K* and *H* ∪ *K*, respectively. This notation is in agreement with that of lattice theory and will be considered more fully in Chap. 8.

An arbitrary set of elements in a group is called a *complex*. If *A *and *B *are two complexes in a group *G*, we write *AB *for the complex consisting of all elements *ab*, and call *AB *the product of *A *and *B*. We easily verify the associative law (*AB*)*C *= *A*(*BC*) for the multiplication of complexes.

If *K *is any complex in a group *G*, we designate by {*K*} the subgroup consisting of all finite products *x*1 · · · *xn*, where each *xi *is an element of *K *or the inverse of an element of *K*. We say that {*K*} is *generated *by *K*. It is easy to see that {*K*} is contained in any subgroup of *G *which contains *K*.

Given a group *G* and a subgroup *H*, the set of elements *hx*, where *h* ranges over *H* and *x* is fixed, is called a *left coset* of *H*, and we write *Hx* to designate this set. Similarly, the set of elements *xh* is called a *right coset xH* of *H*.

THEOREM 1.5.1. *Two left *(*right*) *cosets of H in G are either disjoint or identical sets of elements. A left *(*right*) *coset of H contains the same cardinal number of elements as H*.

*Proof:* If cosets *Hx* and *Hy* have a common element *z*, then *z* = *h*1*x* = *h*2*y*. Here *x* = *h*1−1*h*2*y* and *hx* = *hh*1−1*h*2*y* = *h′y*, whence *Hx* ⊆ *Hy*. Similarly, *hy* = *hh*2−1*h*1*x* = *h″x*, whence *Hy* ⊆ *Hx*. Hence *Hx* = *Hy*. The one-to-one correspondences *h* ↔ *hx* and *h* ↔ *xh* show that *H, Hx*, and *xH* contain the same cardinal number of elements.

The element *x* = *x*1 = 1*x* belongs to the cosets *xH* and *Hx* and is called the *representative* of the coset. From Theorem 1.5.1 any element *u* of *Hx* may be taken as the representative, since *Hu* = *Hx*. Thus *H* = *H*1 = 1*H* is one of its own cosets, and it is usually convenient (and under certain conventions compulsory) to take the identity as the representative of a subgroup regarded as one of its own cosets. We write

(1.5.1) *G* = *H* + *Hx*2 + · · · + *Hxr*

to indicate that the cosets *H, Hx*2, · · ·, *Hxr* are disjoint and exhaust *G*. Here the indicated addition is only a convenient notation and not to be regarded as an operation.

Since (*Hx*)−1 (the set of inverses of the elements of the form *hx*) is equal to *x*−1*H* and (*yH*)−1 = *Hy*−1, there is a one-to-one correspondence between left and right cosets of *H*. Thus, from (1.5.1),

*G* = *H* + *x*2−1*H* + · · · + *xr*−1*H*.

The cardinal number *r *of right or left cosets of a subgroup H in a group *G *is called the *index *of *H *in *G *and is written [*G*:*H*]. The *order *of a group *G *is the cardinal number of elements in *G*. The identity alone is a subgroup, and its cosets consist of single elements. Thus the order of a group is the index of the identity subgroup.

THEOREM 1.5.2 (THEOREM OF LAGRANGE). *The order of a group G is the product of the order of a subgroup H and the index of H in G*.

*Proof: *Each of the *r *= [*G*:*H*] disjoint cosets of *H *in *G *contains the same number of elements as *H*, which is the order of *H*.
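
The coset decomposition behind the proof is easy to compute. A sketch (mine) for *G* the permutations of three symbols and *H* a subgroup of order 2, using the text's convention that *Hx* is a left coset:

```python
from itertools import permutations

def prod(p, q):
    return tuple(q[p[i]] for i in range(len(p)))  # left-to-right product

G = set(permutations(range(3)))   # order 6
H = {(0, 1, 2), (1, 0, 2)}        # subgroup of order 2

# The distinct left cosets Hx.
cosets = {frozenset(prod(h, x) for h in H) for x in G}

disjoint_and_exhaustive = (sum(len(c) for c in cosets) == len(G) and
                           set().union(*cosets) == G)
lagrange = (len(G) == len(H) * len(cosets))   # |G| = |H| [G:H]
```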

If *H* is a subgroup of *G*, and *K* is a subgroup of *H*, let

*H* = *Ky*1 + *Ky*2 + · · · + *Kyr*, *G* = *Hx*1 + *Hx*2 + · · · + *Hxs*.

Then every *g* of *G* is of the form *g* = *hxj* in a unique way, and *h* = *kyi* uniquely. Thus the cosets of *K* in *G* are given by *Kyixj*, *i* = 1, · · ·, *r*, *j* = 1, · · ·, *s*. For two such cosets to be equal, they would have to belong to the same coset of *H* and so have the same *xj*. Multiplying by *xj*−1 on the right, we see that they would also have to have the same *yi*. Thus the cosets *Kyixj* of *K* in *G* are all different. We have thus proved the theorem:

THEOREM 1.5.3. *If G *⊇ *H *⊇ *K, then *[*G*:*K*] = [*G*:*H*][*H*:*K*].

A group *G *is *cyclic *if every element in it is a power *bi *of some fixed element *b*. If we write (*b*−1)*r *= *b*−*r*, then by the associative law and induction we can show *bmbt *= *bm*+*t *for any integral exponents *m, t*. If all powers of *b *are distinct, then the cyclic group is of infinite order and is isomorphic with the additive group of all integers, these being the exponents of the generator *b*. If not all powers are distinct, let *bm *= *bt *with *m *> *t*. Then *bm*−*t *= 1, with *m *− *t *positive. Let *n *> 0 be the least positive integer, with *bn *= 1. Then we readily see that the elements of the group are 1, *b*, · · ·, *bn*−1 and that with 0 ≤ *r, s *< *n, brbs *= *br*+*s *if *r *+ *s *< *n*, while *brbs *= *br*+*s*−*n *if *r *+ *s *≥ *n*. From this we may verify directly that for each positive *n *there is, to within isomorphism, a unique cyclic group of order *n*. This is also the additive group of integers modulo *n*. Thus, for a cyclic group generated by an element *b*, its order will either be infinite or some positive integer *n*, in which case *n *is the smallest positive integer such that *bn *= 1. We define the *order of an element b *as the order of the cyclic group {*b*} which it generates.
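
The order of an element is computed by the obvious loop. A sketch (mine) in the additive group of integers modulo *n*, where the "power" *bᵏ* is the sum *kb*:

```python
def order(b, n):
    """Order of b in the additive group of integers mod n:
    the least k > 0 with k*b congruent to 0 (mod n)."""
    x, k = b % n, 1
    while x != 0:
        x = (x + b) % n
        k += 1
    return k

# In Z_12: the element 4 generates the cyclic subgroup {4, 8, 0} of order 3,
# while 1 and 5 each generate all of Z_12.
```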

The nature and number of subgroups of a group *G *are surely of great value in describing *G *itself. But if *G *contains no subgroup except itself and the identity, then there are no proper subgroups which describe its structure. In this case we can give a very simple direct description of *G*.

THEOREM 1.5.4. *Let G be a group, not the identity alone*. *Then G has no subgroup except itself and the identity if, and only if, G is a finite cyclic group of prime order*.

*Proof: *Under the hypothesis if *b *≠ 1 is an element of *G*, then the cyclic group generated by *b *is not the identity and must be the entire group *G*. If *b *is of infinite order, then *b*² generates a proper subgroup, the elements *b*²*j*. Hence *b *is of finite order, *n*, and *bn *= 1. If *n *is not a prime, then *n *= *uv *with *u *> 1, *v *> 1. Here the powers of *bu *generate a proper subgroup of order *v*. Hence *n *is a prime and *G *is a cyclic group of prime order. But from the Theorem of Lagrange a group of prime order cannot contain a subgroup different from the identity and the whole group.

There is a basic relation on indices of subgroups.

THEOREM 1.5.5 (INEQUALITY ON INDICES). [*A* ∪ *B*:*B*] ≥ [*A*:*A* ∩ *B*].

*Proof:* Call *A* ∩ *B* = *D* and let *A* = *D*1 + *Dx*2 + · · · + *Dxr*. Then we assert that the cosets *B*1, *Bx*2, · · ·, *Bxr* are all distinct in *A* ∪ *B*. For if *Bxi* = *Bxj*, *j* ≠ *i*, then *xj* = *bxi* for some *b* of *B*. But here *xi* and *xj* both belong to *A*, and so for this *b*, *b* = *xjxi*−1 belongs to *A* ∩ *B* = *D*; so the cosets *Dxj* and *Dxi* have in common the element *xj* = *bxi*, contrary to assumption. Hence there are at least as many distinct cosets of *B* in *A* ∪ *B* as there are of *A* ∩ *B* in *A*, proving the inequality.

THEOREM 1.5.6 (EQUALITY OF INDICES). *If* [*A* ∪ *B*:*B*] *and* [*A* ∪ *B*:*A*] *are finite and relatively prime, then* [*A* ∪ *B*:*B*] = [*A*:*A* ∩ *B*] *and* [*A* ∪ *B*:*A*] = [*B*:*A* ∩ *B*].

*Proof:* By Theorem 1.5.3,

[*A* ∪ *B*:*A* ∩ *B*] = [*A* ∪ *B*:*B*][*B*:*A* ∩ *B*] = [*A* ∪ *B*:*A*][*A*:*A* ∩ *B*].

By Theorem 1.5.5, [*A* ∪ *B*:*B*] ≥ [*A*:*A* ∩ *B*], but also from the above relation [*A* ∪ *B*:*B*] divides [*A*:*A* ∩ *B*], since it is relatively prime to [*A* ∪ *B*:*A*]. Hence [*A* ∪ *B*:*B*] = [*A*:*A* ∩ *B*], and similarly, [*A* ∪ *B*:*A*] = [*B*:*A* ∩ *B*].

Let *G *be a group and *S *any set of elements in *G*. Then the set *S′ *of elements of the form *x*−1*sx*, *x *fixed, is called the *transform *of *S *by *x *and is written in either of the forms *S′ *= *x*−1*Sx *or *S′ *= *Sx*.

LEMMA 1.6.1. *S and Sx contain the same number of elements*.

*Proof:* The correspondence *s* ↔ *x*−1*sx* is a 1–1 correspondence, since *s* → *x*−1*sx* = *s′* is a mapping and so is *s′* → *xs′x*−1 = *x*(*x*−1*sx*)*x*−1 = *s*.

If *S* and *S′* are two sets in *G*, *H* is some subgroup of *G*, and an element *x* of *H* exists such that *S′* = *x*−1*Sx*, we say that *S* and *S′* are *conjugate under H*. If *S′* = *x*−1*Sx*, then *S* = (*x*−1)−1*S′x*−1. Moreover, if *S″* = *y*−1*S′y*, then *S″* = *y*−1*x*−1*Sxy* = (*xy*)−1*S*(*xy*). Since trivially *S* = 1−1*S*1, we see that the relation of being conjugate under *H* is an equivalence relation, being reflexive, symmetric, and transitive. We call the set of all *S′* conjugate to a given *S* a *class* of conjugates. From (*x*−1*sx*)−1 = *x*−1*s*−1*x* and *x*−1*s*1*x*·*x*−1*s*2*x* = *x*−1(*s*1*s*2)*x*, we deduce:

LEMMA 1.6.2. *Any set conjugate to a subgroup is also a subgroup*.

If *x*−1*Sx* = *S*, then *S* = *xSx*−1. If also *y*−1*Sy* = *S*, then *S* = (*xy*)−1*S*(*xy*). Thus the set of elements *x* of *H* such that *Sx* = *S* is a subgroup of *H* which we shall call the *normalizer of S in H*, and we designate this as *NH*(*S*). The set of elements *x* of *H* such that *x*−1*sx* = *s* for every *s* of *S* may similarly be shown to be a subgroup of *H* which we call the *centralizer of S in H* and designate *CH*(*S*) [or *ZH*(*S*) if we follow the German spelling]. Note that if *S* consists of a single element, the centralizer and normalizer are identical; moreover, always *CH*(*S*) ⊆ *NH*(*S*). When *H* = *G* it is customary to speak merely of the normalizer or centralizer of *S*. The centralizer *Z* of *G* in *G* is called the *center of G*.

THEOREM 1.6.1. *The number of conjugates of S under H is the index in H of the normalizer of S in H*, [*H*:*NH*(*S*)].

*Proof:* Write *NH*(*S*) = *D* for brevity and let

*H* = *D*1 + *Dx*2 + · · · + *Dxr*.

Then *x*−1*Sx* = *y*−1*Sy*, for *x*, *y* of *H*, if, and only if, *S* = (*yx*−1)−1*S*(*yx*−1), i.e., if, and only if, *yx*−1 belongs to *D*. Hence two conjugates of *S* under *H* are the same if, and only if, the transforming elements belong to the same left coset of *D*. Hence the number of distinct conjugates is the index of *D* in *H*, as was to be shown.

If *S* consists of a single element *s*, the conjugates under *G* form a *class*. Thus the classes of elements in *G* are a partitioning of the elements of *G*, and we write

*G* = *C*1 + *C*2 + *C*3 + · · ·,

the *Ci* being disjoint classes and every element being in exactly one class. The identity 1 is always a class. From Theorem 1.6.1 the number of elements in a class *Ci* is the index of a subgroup and hence a divisor of the order of the group.
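
Both the class partition and Theorem 1.6.1 (for *S* a single element, where the normalizer is the centralizer) can be confirmed on a small group; a sketch (mine) for the permutations of three symbols:

```python
from itertools import permutations

def prod(p, q):
    return tuple(q[p[i]] for i in range(len(p)))  # left-to-right product

def inv(p):
    r = [0] * len(p)
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

G = set(permutations(range(3)))

def conj_class(s):
    """The class of s: all x^{-1} s x for x in G."""
    return frozenset(prod(prod(inv(x), s), x) for x in G)

classes = {conj_class(s) for s in G}
partition = (sum(len(c) for c in classes) == len(G))

def centralizer(s):
    return {x for x in G if prod(x, s) == prod(s, x)}

# |class of s| = [G : C_G(s)], a divisor of |G|.
sizes_match = all(len(conj_class(s)) == len(G) // len(centralizer(s)) for s in G)
```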

Given a group *G *and two subgroups *H *and *K*, not necessarily distinct, the set of elements *HxK*, where *x *is some fixed element of *G*, is called a *double coset*. As with ordinary cosets, we may prove:

LEMMA 1.7.1. *Two double cosets HxK and HyK are either disjoint or identical*.

*Proof:* If *HxK* and *HyK* have a common element *z* = *h*1*xk*1 = *h*2*yk*2, then *hxk* = *hh*1−1*h*2*yk*2*k*1−1*k* belongs to *HyK*, whence *HxK* ⊆ *HyK*, and similarly, *HyK* ⊆ *HxK*.

A double coset *HxK *contains all left cosets of *H *of the form *Hxk *and all right cosets of *K *of the form *hxK*. Moreover, it is clear that *HxK *consists of complete left cosets of *H *and of complete right cosets of *K*.

THEOREM 1.7.1. *The number of left cosets of H in HxK is *[*K*:*K *∩ *x*−1*Hx*], *and the number of right cosets of K in HxK is *[*x*−1*Hx*:*K *∩ *x*−1*Hx*].

*Proof:* The correspondence *u* ↔ *x*−1*u* puts the elements of *HxK* into a 1–1 correspondence with the elements of *x*−1*HxK*. This correspondence gives a 1–1 correspondence between the left cosets *Hxk* of *H* in *HxK* and the left cosets *x*−1*Hx*·*k* of *x*−1*Hx* in *x*−1*HxK*, and also between the right cosets *hxK* of *K* in *HxK* and the right cosets *x*−1*hxK* of *K* in *x*−1*HxK*. Let us write *x*−1*Hx* = *A*, and *A* ∩ *K* = *D*. Now if *A* = 1·*D* + *u*2*D* + · · · + *urD*, *r* = [*A*:*D*], then *K, u*2*K*, · · ·, *urK* are right cosets of *K* in *AK*. They are distinct, since if *uiK* = *ujK*, then *uj* = *uik* for some *k* of *K*; but since *ui* and *uj* belong to *A*, this *k* belongs to *A* ∩ *K* = *D*, and thus *uiD* = *ujD*, contrary to assumption. Every right coset of *K* in *AK* is of the form *aK*, where *a*, as an element of *A*, is of the form *uid* with *d* in *D*. But *uidK* = *uiK*. Thus the number of right cosets of *K* in *AK* is [*A*:*D*] = [*x*−1*Hx*:*x*−1*Hx* ∩ *K*], and by the 1–1 correspondence, this is the number of right cosets of *K* in *HxK*. In the same way it may be shown that the number of left cosets of *A* in *AK* is [*K*:*D*] = [*K*:*x*−1*Hx* ∩ *K*] and this, by the 1–1 correspondence, is the number of left cosets of *H* in *HxK*.
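
Theorem 1.7.1 and Lemma 1.7.1 can both be checked numerically. A sketch (mine) with *H* and *K* two subgroups of order 2 in the permutations of three symbols:

```python
from itertools import permutations

def prod(p, q):
    return tuple(q[p[i]] for i in range(len(p)))  # left-to-right product

def inv(p):
    r = [0] * len(p)
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

G = set(permutations(range(3)))
H = {(0, 1, 2), (1, 0, 2)}
K = {(0, 1, 2), (0, 2, 1)}

def double_coset(x):
    return frozenset(prod(prod(h, x), k) for h in H for k in K)

# Lemma 1.7.1: the distinct double cosets partition G.
dcosets = {double_coset(x) for x in G}
partition = (sum(len(c) for c in dcosets) == len(G))

def left_coset_count_matches(x):
    """Number of left cosets Hxk in HxK equals [K : K intersect x^{-1}Hx]."""
    n_left = len({frozenset(prod(prod(h, x), k) for h in H) for k in K})
    A = {prod(prod(inv(x), h), x) for h in H}   # x^{-1} H x
    return n_left == len(K) // len(A & K)

all_match = all(left_coset_count_matches(x) for x in G)
```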

Many of the theorems on groups do not involve the issue as to whether or not the groups are finite. But in some instances the facts are essentially different for finite and infinite groups, and occasionally when the facts are similar, the methods of proof differ.

An infinite group *G *may have certain finite properties. Some important properties of this kind are:

1) *G *is finitely generated.

2) *G *is *periodic*, that is, the elements of *G *are of finite order.

3) *G *satisfies the *maximal condition: *Every ascending chain of distinct subgroups *A*1 ⊂ *A*2 ⊂ *A*3 ⊂ · · · is necessarily finite.

4) *G *satisfies the *minimal condition: *Every descending chain of distinct subgroups *A*1 ⊃ *A*2 ⊃ *A*3 ⊃ · · · is necessarily finite.

An infinite group *G *is said to have a property *locally *if this property holds for every finitely generated subgroup. A family *Hi *of homomorphic images of a group *G *is said to be a *residual family *for *G*, if for every *g *≠ 1 of *G *there is at least one *Hi *in which the image of this *g *is not the identity. We say that *G *has a property *residually *if there is a residual family for *G *of homomorphic images all having the property.

THEOREM 1.8.1. *A group G satisfies the maximal condition if, and only if, G and every subgroup of G are finitely generated*.

*Proof:* Let *H* be a subgroup of *G* which is not finitely generated. We may construct recursively an infinite ascending chain of distinct subgroups of *H*, {*h*1} ⊂ {*h*1, *h*2} ⊂ · · · ⊂ {*h*1, · · ·, *hi*} ⊂ · · ·, by choosing *h*1 arbitrarily, and recursively *hi* an element of *H* not in {*h*1, · · ·, *hi*−1}. Such an *hi* always exists, since *H* cannot be the finitely generated group {*h*1, · · ·, *hi*−1}. Conversely, suppose that *G* and all its subgroups are finitely generated. Then let *B*1 ⊆ *B*2 ⊆ *B*3 ⊆ · · · be an ascending chain of subgroups in *G*. We shall show that after a certain point in this chain all subgroups are equal, and so there is not an infinite ascending chain of distinct subgroups. The set of all elements *b*, such that *b* belongs to some *Bi* in the chain, forms a subgroup *B* of *G*; for if *b* belongs to *Bi* and *b′* to *Bj*, then both *b* and *b′* belong to any *Bk* with *k* ≥ *i*, *k* ≥ *j*, and so also their product and their inverses are in *Bk*.

By hypothesis *B* is finitely generated, say, by elements *x*1, · · ·, *xn*. Let *Bj*1 be the first *Bi* containing *x*1 and generally *Bjk* be the first *Bi* containing *xk*, for *k* = 1, · · ·, *n*. Then if *m* is the largest of *j*1, · · ·, *jn*, *Bm* will contain all of *x*1, · · ·, *xn*, and so *B* = *Bm* = *Bm*+1 = · · ·, and all further groups in the chain are equal to *B*. We shall see later that there are groups which are finitely generated but which have subgroups that are not finitely generated.

THEOREM 1.8.2. *A group G which satisfies the minimal condition is periodic*.

*Proof:* If *G* contains an element *b* of infinite order, then {*b*²} ⊃ {*b*⁴} ⊃ {*b*⁸} ⊃ · · · ⊃ {*b*^(2^*i*)} ⊃ · · · is an infinite descending chain of distinct subgroups.

In an infinite group we cannot use finite induction on its order, and so some substitute is needed to replace this method of proof which is so valuable for finite groups. One way to make this replacement is to appeal to certain very general axioms on sets and ordering. Suppose that we have an ordering relation *a *≤ *b *on the elements of a set *S *of objects (*a, b, c*, · · ·}. The ordering may satisfy some of the following axioms:

*O*1) If *a *≤ *b, and b *≤ *a, then a *= *b*.

*O*2) If *a *≤ *b, and b *≤ *c, then a *≤ *c*.

*O*3) *Either a *≤ *b or b *≤ *a for any two a, b*.

*O*4) *Any nonempty subset T of S has a first element x*1, *i.e., an element x*1 *such that x*1 ≤ *t for every t of T*.

If the first two axioms hold, we say that the ordering is a *partial ordering*. If the first three axioms hold, we say that the ordering is a *simple ordering*. If all four axioms hold, we say that the ordering is a *well-ordering*. We may appeal to the axiom of well-ordering: *Every set S may be well-ordered*. Let us write *a *< *b *to mean *a *≤ *b *but *a *≠ *b*.
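For finite sets these axioms can be tested exhaustively. A Python sketch (an illustration only): divisibility on {1, · · ·, 12} satisfies *O*1 and *O*2 but not *O*3, while the usual ≤ well-orders any finite set of integers:

```python
from itertools import combinations

def is_partial(S, leq):            # O1 (antisymmetry) and O2 (transitivity)
    antisym = all(a == b for a in S for b in S if leq(a, b) and leq(b, a))
    trans = all(leq(a, c) for a in S for b in S for c in S
                if leq(a, b) and leq(b, c))
    return antisym and trans

def is_simple(S, leq):             # adds O3 (any two elements comparable)
    return is_partial(S, leq) and all(leq(a, b) or leq(b, a)
                                      for a in S for b in S)

def is_well(S, leq):               # adds O4: every nonempty subset has a
    return is_simple(S, leq) and all(   # first element
        any(all(leq(x, t) for t in T) for x in T)
        for r in range(1, len(S) + 1)
        for T in combinations(S, r))

S = range(1, 13)
print(is_partial(S, lambda a, b: b % a == 0))  # divisibility: partial ordering
print(is_simple(S, lambda a, b: b % a == 0))   # not simple: 2, 3 incomparable
print(is_well(S, lambda a, b: a <= b))         # usual <= well-orders a finite set
```

For a finite set every simple ordering is automatically a well-ordering; the distinction between *O*3 and *O*4 only appears for infinite sets.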

In a well-ordered set we may prove propositions by the method of *transfinite induction*. This proceeds as follows: Designate the first element of *S *as 1. Then, if *P*(*a*) is a proposition about the elements of *S*, and if *P*(1) is true, and if the truth of *P*(*x*) for all *x *< *a *implies the truth of *P*(*a*), we conclude that *P*(*b*) is true for every *b *in *S*. For let *T *be the subset of *S *consisting of those elements *t *for which *P*(*t*) is false. If *T *is nonempty, it contains a first element *c*. But then either *c *= 1 or *P*(*x*) is true for all *x *< *c*. In either event this would lead to the truth of *P*(*c*), contrary to the choice of *c *in *T*. Hence *T *must be empty, and *P*(*b*) is true for every *b *in *S*. We note in passing that in a well-ordered set any descending sequence *a*1 > *a*2 > *a*3 > · · · is necessarily finite, since the set of its terms must contain a first element.

Another axiom, logically equivalent to the axiom of well-ordering, is Zorn’s lemma. This again deals with ordering in sets.

LEMMA 1.8.1. (ZORN’S LEMMA). *Given a partially ordered set S. Suppose that every simply ordered subset of S has an upper bound *(*lower bound*) *in S. Then S has a maximal *(*minimal*) *element*. Here if *U *is a subset of *S*, then an *upper bound b *of *U *is an element of *S *such that *b *≥ *u *for every *u *in *U*. A *maximal element w *is an element such that *w *≤ *s *implies *s *= *w*. Reversing the inequalities, we similarly define *lower bound *and *minimal element*.

Suppose we consider subgroups of a group *G *partially ordered by inclusion *A *⊆ *B *if *A *is a subgroup of *B*. Then the set of all elements in a simply ordered subset of subgroups will itself form a subgroup, since if *g*1 is in one of the groups and *g*2 is in another, then both *g*1 and *g*2 are in the greater of the two subgroups and so also are their product and their inverses. For this reason Zorn’s lemma is well suited to proofs in group theory or to abstract algebra in general.
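The closure argument can be checked mechanically in a small case. A minimal Python sketch (an illustration, not part of the text), using a nested chain of subgroups of the integers mod 24:

```python
# The elements of a simply ordered (nested) chain of subgroups of Z_24
# (addition mod 24) together form a subgroup: any two elements of the
# union already lie in the larger of the two groups containing them.
N = 24
def subgroup(gen):
    return {gen * k % N for k in range(N)}

chain = [subgroup(8), subgroup(4), subgroup(2)]   # <8> ⊆ <4> ⊆ <2>
union = set().union(*chain)

closed = all((a + b) % N in union for a in union for b in union)
has_inverses = all(-a % N in union for a in union)
print(closed and has_inverses)   # the union is itself a subgroup
```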

Both the Axiom of Well-Ordering and Zorn’s lemma are logically equivalent to:

AXIOM OF CHOICE. *For any family F of subsets *{*Si*} *of a set S, there is a choice function f*(*Si*) *defined for the subsets of F whose values are elements of S, such that f*(*Si*) ∈ *Si*, *the subsets Si not being void*.

In certain arguments the Axiom of Choice appears to lead to paradoxes, and for this reason it is suspect. All three principles are surely valid if the set *S *is countable, i.e., if its objects may be put into a one-to-one correspondence with the natural numbers 1, 2, 3, · · ·. Presumably, they are valid for other sets *S*, and possibly for all well-defined sets, though it might be remarked that as yet no one has actually constructed a well-ordering of the set of all real numbers. When using any one of these principles in this book, it is to be understood that by "Every set *S*" we mean "Every set *S *for which the axiom is valid."

A useful application of these methods is the following:

THEOREM 1.8.3. *Let g be an element of a group G and H a subgroup of G which does not contain g. Then there is a subgroup M containing H which is maximal with respect to the property of not containing g*.

*Proof: *We use Zorn’s lemma. The subgroups containing *H *but not containing *g *form a partially ordered set under inclusion. The elements of a simply ordered set of these subgroups themselves form a group which contains *H *but not *g*, and this group is an upper bound for the simply ordered set. Hence, by Zorn’s lemma, a maximal group *M *exists containing *H *but not *g*.

Using this theorem, we easily derive the following:

THEOREM 1.8.4. *Let G be a finitely generated group and H a proper subgroup of G. Then there exists a maximal subgroup M of G containing H*.

*Proof: *Let *G *be generated by *x*1, · · ·, *xm*, and let *y*1 be the first of *x*1, · · ·, *xm *not contained in *H*. Let *M*1 ⊇ *H*, where *M*1 is maximal with respect to the property of not containing *y*1. Then any subgroup of *G *containing *M*1 properly contains *y*1, and so also {*M*1, *y*1} = *H*1. If *H*1 = *G*, then *M*1 is the maximal subgroup sought. If not, choose *M*2 ⊇ *H*1, where *M*2 is maximal with respect to the property of not containing *y*2, the first of *x*1, · · ·, *xm *not contained in *H*1. Since *G *= {*x*1, · · ·, *xm*}, by continuing this process we must reach an *Mi *⊇ *Hi*−1 ⊇ · · · ⊇ *H*, where {*Mi*, *yi*} = *G*, and *Mi *= *M *is the maximal subgroup sought.
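The process of this proof can be traced in a small finite case. A brute-force Python sketch (a hypothetical illustration, not the author's construction), using the cyclic group Z₁₂ under addition mod 12, where every subgroup is generated by a divisor of 12:

```python
# Sketch of Theorem 1.8.4 in Z_12 (addition mod 12).  H = {0, 6} is a
# proper subgroup; y = 1 is a generator of Z_12 not in H.  A subgroup M
# containing H and maximal with respect to not containing y must be a
# maximal subgroup: anything properly above M contains y, hence all of G.
N = 12
def sg(d):
    return frozenset(d * k % N for k in range(N))

subgroups = [sg(d) for d in (1, 2, 3, 4, 6, 12)]   # all subgroups of Z_12

def maximal_avoiding(H, g):
    """A subgroup containing H and not g, maximal with that property."""
    cands = [S for S in subgroups if H <= S and g not in S]
    return next(S for S in cands if not any(S < T for T in cands))

def generated(elts):
    """Closure of a set of elements under addition mod N."""
    S = set(elts)
    while True:
        new = {(a + b) % N for a in S for b in S} - S
        if not new:
            return S
        S |= new

H = sg(6)                       # the proper subgroup {0, 6}
y = 1                           # first generator of Z_12 not in H
M = maximal_avoiding(H, y)
print(sorted(M), generated(M | {y}) == set(range(N)))
```

Since *M *together with *y *generates the whole group, any subgroup properly containing *M *contains *y *and hence is all of Z₁₂, so *M *is a maximal subgroup containing *H*, as the theorem asserts.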

The one-to-one mappings of a set onto itself which preserve some property usually form a group. Many of the most interesting groups arise naturally in this way. The symmetries of a geometric figure are of this kind. These are the congruent (i.e., distance-preserving) mappings of the figure onto itself. The first two examples below are groups of symmetries.

EXAMPLE 1. THE DIHEDRAL GROUP. The symmetries of a regular polygon of *n *sides form a group of order 2*n*. These are determined entirely by the way in which the vertices are mapped onto themselves. Let the vertices be numbered 1, 2, · · ·, *n *in a clockwise manner. The vertex 1 may be mapped onto any vertex 1, 2, · · ·, *n*, and the remaining vertices placed in either a clockwise or a counter-clockwise direction. All symmetries are generated by the rotation

*a*: *i *→ *i *+ 1 (mod *n*)

and the reflection

*b*: *i *→ 2 − *i *(mod *n*).

Here *a*^*n *= 1, *b*² = 1, *ba *= *a*⁻¹*b*. Moreover, these relations determine the group completely, since every element generated by *a *and *b *is of the form *a*^(*i*1)*b*^(*j*1) · · · *a*^(*i*r)*b*^(*j*r), and since *ba*^*i *= *a*^(−*i*)*b*, as we may show from the last relation, every element can be put into the form *a*^*i *or *a*^*i*​*b *with *i *= 0, 1, · · ·, *n *− 1; these are the 2*n *different elements of the group. These relations also define a group for *n *= 2 which is of order 4. This is called the *four group*.
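The relations *a*^*n *= 1, *b*² = 1, *ba *= *a*⁻¹*b *can be checked directly on permutations. A minimal Python sketch (an illustration, not part of the text), with vertices labeled 0, · · ·, *n *− 1 so that vertex 0 plays the role of vertex 1 above:

```python
# The dihedral group of order 2n as permutations of vertices 0..n-1:
# a is the rotation i -> i+1 and b the reflection i -> -i (fixing
# vertex 0), both taken mod n.  A permutation p is stored as a tuple
# with p[i] the image of i.
n = 5
a = tuple((i + 1) % n for i in range(n))     # rotation
b = tuple((-i) % n for i in range(n))        # reflection fixing vertex 0
a_inv = tuple((i - 1) % n for i in range(n))
identity = tuple(range(n))

def compose(p, q):
    """(p * q)(i) = p(q(i)): apply q first, then p."""
    return tuple(p[q[i]] for i in range(n))

# Generate the whole group from a and b by closing under composition.
G = {identity}
while True:
    new = {compose(g, s) for g in G for s in (a, b)} - G
    if not new:
        break
    G |= new

print(len(G) == 2 * n)                       # exactly 2n elements
print(compose(b, a) == compose(a_inv, b))    # the relation ba = a^(-1)b
```

The count 2*n *confirms that the relations allow no further collapse: the forms *a*^*i *and *a*^*i*​*b *really are 2*n *distinct elements.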

EXAMPLE 2. SYMMETRIES OF THE CUBE. The symmetries of the cube are determined by the mappings of the eight vertices onto themselves. Let these be numbered as in the figure. The symmetries include the rotation *a*

and the reflection *b*.

Fig. 1. Symmetries of the cube.

The elements *a *and *b *generate a group *G*1 which takes every vertex into every other vertex. We may see this from a diagram in which an arrow labeled *x *from *i *to *j *means that the element *x *takes *i *into *j*.
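Transitivity on the vertices can also be verified computationally. Since the figure and the explicit permutations are not reproduced here, the sketch below assumes a hypothetical labeling (top face 1, 2, 3, 4 clockwise, bottom face 5, 6, 7, 8 directly beneath) and takes for *a *a quarter-turn about the vertical axis and for *b *the reflection in the horizontal mid-plane; these need not be the *a *and *b *of the text, but they likewise generate a group taking every vertex into every other:

```python
# Hypothetical cube labeling: top face 1,2,3,4 clockwise, bottom face
# 5,6,7,8 directly beneath.  a is a quarter-turn about the vertical
# axis, b the reflection in the horizontal mid-plane.
a = {1: 2, 2: 3, 3: 4, 4: 1, 5: 6, 6: 7, 7: 8, 8: 5}
b = {1: 5, 2: 6, 3: 7, 4: 8, 5: 1, 6: 2, 7: 3, 8: 4}

# Orbit of vertex 1 under the group generated by a and b: repeatedly
# apply the generators until no new vertex appears.
orbit = {1}
while True:
    new = {g[v] for v in orbit for g in (a, b)} - orbit
    if not new:
        break
    orbit |= new

print(sorted(orbit))   # all eight vertices: the action is transitive
```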
