UNIT I
LESSON 1
GROUP THEORY
CONTENTS:
1.0 Aims and Objectives
1.1 Introduction
1.2 Equivalence Relation
1.3 Conjugacy Relation
1.4 Cauchy's Theorem
1.5 Let us sum up
1.6 Lesson-end Activities
1.7 References

1.0 AIMS AND OBJECTIVES
In this lesson we introduce a conjugacy relation in a group in order to derive the class equation of a finite group. We verify the equation in the case of S₃, the permutation group on three symbols. As an application of the class equation, we prove an important theorem due to Cauchy.
After going through this lesson, you will be able to:
(i) Define the conjugacy relation in a group.
(ii) Prove that the conjugacy relation is an equivalence relation.
(iii) Obtain the class equation of a finite group.
(iv) Prove Cauchy's theorem.
(v) Define a partition of a positive integer n.
(vi) Obtain the conjugate classes of Sₙ.
1.1 INTRODUCTION
The counting principle is a powerful tool for deriving certain theorems in algebra. The process of counting the elements of a set in two different ways and then comparing the two counts yields the desired conclusions. We define a conjugacy relation in a group to derive the class equation of a finite group. As an application of this equation, we prove Cauchy's theorem, which asserts the existence of an element of order p in G whenever p | o(G), for a given prime p. The number of distinct conjugate classes of Sₙ is obtained as the number of partitions of n.
1.2 EQUIVALENCE RELATION

Definition
A relation ~ on a set A is called an equivalence relation if, for all a, b, c ∈ A,
(1) a ~ a (reflexivity),
(2) a ~ b implies b ~ a (symmetry),
(3) a ~ b and b ~ c imply a ~ c (transitivity).

Definition
If ~ is an equivalence relation on A and a ∈ A, the equivalence class of a is the set
C(a) = { x ∈ A : x ~ a }.

Theorem
Any two equivalence classes on a set are either disjoint or identical.

Theorem
If ~ is an equivalence relation on a set A, then A is the union of its distinct equivalence classes.
The proofs of the above theorems are left as exercise.
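As a concrete check of these two theorems, the following sketch (our own illustration; the function name and the mod-3 relation are not from the text) partitions a finite set into equivalence classes and exhibits that the classes are disjoint and cover the set:

```python
def equivalence_classes(elements, related):
    """Group `elements` into classes under `related`.

    Assumes `related` is an equivalence relation, so each element
    belongs to exactly one class C(a) = {x : x ~ a}.
    """
    classes = []
    for a in elements:
        for cls in classes:
            if related(a, cls[0]):   # a ~ representative => same class
                cls.append(a)
                break
        else:
            classes.append([a])      # a starts a new class
    return classes

# Congruence mod 3 on {0,...,8}: an equivalence relation with 3 classes.
classes = equivalence_classes(range(9), lambda x, y: (x - y) % 3 == 0)
print(classes)  # [[0, 3, 6], [1, 4, 7], [2, 5, 8]]
```

The classes printed are pairwise disjoint and their union is the whole set, as the two theorems assert.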
1.3 CONJUGACY RELATION

Definition
Let G be a group and a, b ∈ G. We say a is a conjugate of b if there exists an element g ∈ G such that a = g⁻¹bg. We write a ~ b.

Lemma
In a group G, the conjugacy relation is an equivalence relation.

Proof :
Reflexivity : Since a = e⁻¹ae, where e is the identity element of G, a ~ a.
Symmetry : If a ~ b, then there exists x ∈ G such that b = x⁻¹ax.
Then x b x⁻¹ = x(x⁻¹ax)x⁻¹ = (xx⁻¹)a(xx⁻¹) = a.
∴ a = (x⁻¹)⁻¹ b x⁻¹, so b ~ a.
Transitivity : Let a ~ b and b ~ c. Then
b = x⁻¹ax, for some x ∈ G, and
c = y⁻¹by, for some y ∈ G.
∴ c = y⁻¹(x⁻¹ax)y = (y⁻¹x⁻¹)a(xy) = (xy)⁻¹a(xy).
∴ a ~ c.
Therefore conjugacy is an equivalence relation.
Definition
If a belongs to the group G, the equivalence class of a under the conjugacy relation is C(a) = { x ∈ G : x ~ a }. It is called the conjugate class of a. If G is a finite group, then the number of elements in C(a) is denoted by c_a.

Lemma
If a belongs to a group G, then N(a) = { x ∈ G : xa = ax } is a subgroup of G, called the normaliser of a.

Proof : Let x, y ∈ N(a). Then
xa = ax  — (1)
ya = ay  — (2)
Therefore,
(xy)a = x(ya)   by associativity
      = x(ay)   by (2)
      = (xa)y   by associativity
      = (ax)y   by (1)
      = a(xy)   by associativity.
Therefore xy ∈ N(a).
From (1), x⁻¹(xa)x⁻¹ = x⁻¹(ax)x⁻¹,
i.e., (x⁻¹x)(ax⁻¹) = (x⁻¹a)(xx⁻¹),
i.e., ax⁻¹ = x⁻¹a.
Therefore x⁻¹ ∈ N(a).
Hence N(a) is a subgroup of G.
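The normaliser can be computed by brute force in a small group. In the sketch below (our own illustration; the tuple encoding of permutations is an assumption, not from the text), we find N(a) for a transposition a in S₃ and check closure under multiplication:

```python
from itertools import permutations

# Permutations of {0,1,2} as tuples of images; (p*q)(i) = p(q(i)).
S3 = list(permutations(range(3)))

def mult(p, q):
    return tuple(p[q[i]] for i in range(3))

def normalizer(a):
    """N(a) = {x in S3 : xa = ax}, found by checking every element."""
    return [x for x in S3 if mult(x, a) == mult(a, x)]

a = (1, 0, 2)            # the transposition (1,2) in the text's cycle notation
N = normalizer(a)
print(len(N))            # 2: N(a) = {e, a}
# Closure under multiplication: N(a) is a subgroup of the finite group S3.
assert all(mult(x, y) in N for x in N for y in N)
```

Note that o(N(a)) = 2 divides o(S₃) = 6, consistent with Lagrange's theorem.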
Theorem
If G is a finite group, then c_a = o(G)/o(N(a)).

Proof : First we show that if x and y are in the same right coset of N(a) in G, then they give rise to the same conjugate of a.
Let x, y ∈ N(a)g, where g ∈ G. Then for some n₁, n₂ ∈ N(a),
x = n₁g and y = n₂g.
Then
x⁻¹ax = (n₁g)⁻¹a(n₁g) = g⁻¹n₁⁻¹an₁g = g⁻¹(n₁⁻¹an₁)g = g⁻¹ag,
since n₁ ∈ N(a) gives n₁⁻¹an₁ = a. Similarly y⁻¹ay = g⁻¹ag.
∴ x⁻¹ax = y⁻¹ay.
Next we show that if x and y are in different right cosets of N(a) in G, then they give rise to different conjugates of a, that is, x⁻¹ax ≠ y⁻¹ay.
Suppose x⁻¹ax = y⁻¹ay.
Then y(x⁻¹ax)x⁻¹ = y(y⁻¹ay)x⁻¹,
i.e., yx⁻¹a = ayx⁻¹.
∴ yx⁻¹ ∈ N(a).
∴ y ∈ N(a)x. Writing x = n₁g with n₁ ∈ N(a), we get y = nx = nn₁g = n′g, where n′ = nn₁ ∈ N(a).
Hence x and y are in the same right coset of N(a) in G. This is a contradiction.
Therefore, if x and y are in different right cosets of N(a) in G, they yield different conjugates of a.
Hence there is a one-one correspondence between the conjugates of a and the right cosets of N(a). The number of right cosets of N(a) is o(G)/o(N(a)). Hence
c_a = o(G)/o(N(a)).

Corollary
If G is a finite group, then
o(G) = Σ o(G)/o(N(a)),
where the sum runs over one element a from each conjugate class.

Proof : Since the conjugate classes are disjoint and their union is G,
o(G) = Σ c_a,
where the sum runs over one element a from each conjugate class. But by the above theorem, c_a = o(G)/o(N(a)). Hence we get
o(G) = Σ o(G)/o(N(a)).
This is the class equation of G.
Example
We know that Sₙ, the set of all permutations of n symbols, forms a group under composition of maps. This group is called the symmetric group on n symbols.

Now consider S₃ and the permutation which maps 1 → 2, 2 → 1, 3 → 3; in cycle notation it is (1,2).
The conjugate class of (1,2) is
C(1,2) = { g⁻¹(1,2)g : g ∈ S₃ } = { (1,2), (1,3), (2,3) }.
Similarly,
C(1,2,3) = { (1,2,3), (1,3,2) }, and C(e) = { e }.
We note that the conjugate classes C(e), C(1,2), C(1,2,3) are pairwise disjoint and their union is S₃.
Also
N(e) = S₃,
N(1,2) = { e, (1,2) },
N(1,2,3) = { e, (1,2,3), (1,3,2) }.
Hence
c_e = o(S₃)/o(N(e)) = 6/6 = 1,
c_(1,2) = o(S₃)/o(N(1,2)) = 6/2 = 3,
c_(1,2,3) = o(S₃)/o(N(1,2,3)) = 6/3 = 2,
and the class equation o(S₃) = 1 + 3 + 2 = 6 is verified.
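The class equation of S₃ can also be verified by exhaustive computation. The following sketch (our own illustration, with our own helper names) enumerates the conjugate classes of S₃ and sums their sizes:

```python
from itertools import permutations

S3 = list(permutations(range(3)))

def mult(p, q):
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    q = [0] * 3
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

def conj_class(a):
    """C(a) = { g^-1 a g : g in S3 }."""
    return {mult(mult(inv(g), a), g) for g in S3}

classes, seen = [], set()
for a in S3:
    if a not in seen:
        c = conj_class(a)
        classes.append(c)
        seen |= c

sizes = sorted(len(c) for c in classes)
print(sizes, sum(sizes))   # [1, 2, 3] 6 -- the class equation o(S3) = 1 + 2 + 3
```

Each class size divides 6, as the formula c_a = o(G)/o(N(a)) requires.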
Algebra
Note : If G is a group then Z, the set of all elements of G which commute with all the
elements of the group G, is a subgroup and is called center of G.
Lemma
Let Z be the center of the group G. Then a
Z. iff N(a) = G.
 (1)
 (2)
Theorem
If o(G) = pⁿ, where p is a prime number, then Z(G) ≠ { e }.

Proof : Let a ∈ G. Then N(a) is a subgroup of G, so by Lagrange's theorem o(N(a)) divides o(G) = pⁿ. Therefore
o(N(a)) = p^(n_a),
where n_a is a non-negative integer, and n_a = n exactly when N(a) = G, i.e., by the previous lemma, exactly when a ∈ Z(G).
By the class equation,
o(G) = Σ_{N(a)=G} o(G)/o(N(a)) + Σ_{N(a)≠G} o(G)/o(N(a)) = o(Z(G)) + Σ_{N(a)≠G} o(G)/o(N(a)).
Therefore,
o(Z(G)) = o(G) − Σ_{N(a)≠G} o(G)/o(N(a)) = pⁿ − Σ_{N(a)≠G} pⁿ/p^(n_a).  — (1)
If N(a) ≠ G, then n_a < n, so p divides each term pⁿ/p^(n_a) = p^(n − n_a) of the sum, and hence p divides the sum; p also divides pⁿ.
∴ p divides the R.H.S. of (1), and hence p | o(Z(G)).
Since e ∈ Z(G), o(Z(G)) ≥ 1; together with p | o(Z(G)) this gives o(Z(G)) ≥ p. Hence Z(G) ≠ { e }.
Corollary
If o(G) = p², where p is a prime number, then G is abelian.

Proof : First we note that G is abelian iff Z(G) = G.
Since o(G) = p², by the previous theorem Z(G) ≠ { e }, and hence o(Z(G)) must be either p or p².
If o(Z(G)) = p², then o(Z(G)) = o(G), so Z(G) = G and G is abelian.
Suppose o(Z(G)) = p. Choose a ∈ G with a ∉ Z(G).
Then N(a) is a subgroup of G with Z(G) ⊆ N(a), and a ∈ N(a) but a ∉ Z(G).
∴ Z(G) is properly contained in N(a), so o(N(a)) > p. Since o(N(a)) divides p², we get o(N(a)) = p².
∴ N(a) = G, so a ∈ Z(G), which is a contradiction.
Hence o(Z(G)) ≠ p, so o(Z(G)) = p².
∴ G = Z(G).
∴ G is abelian.
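The theorem that Z(G) ≠ {e} for a group of prime-power order can be checked on a concrete non-abelian example. Below (our own illustration; the vertex labelling and generator choice are assumptions) we build the dihedral group of order 8 = 2³ inside S₄ by closure and compute its center:

```python
# D4 inside S4: r = rotation of the square, s = a reflection.
def mult(p, q):
    return tuple(p[q[i]] for i in range(4))

r = (1, 2, 3, 0)   # the 4-cycle (0,1,2,3)
s = (0, 3, 2, 1)   # reflection fixing vertices 0 and 2

# Generate the subgroup <r, s> by repeated right-multiplication.
G = {(0, 1, 2, 3)}
while True:
    new = G | {mult(g, x) for g in G for x in (r, s)}
    if new == G:
        break
    G = new

center = {z for z in G if all(mult(z, g) == mult(g, z) for g in G)}
print(len(G), len(center))   # 8 2 -- Z(D4) = {e, r^2} is non-trivial
```

Here o(G) = 2³ and o(Z(G)) = 2, so p = 2 divides o(Z(G)), exactly as the proof predicts.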
1.4 CAUCHY'S THEOREM

Theorem (Cauchy)
If p is a prime number and p | o(G), then G has an element of order p.

Proof : The proof is by induction on o(G): we assume the theorem to be true for all groups of order smaller than o(G).
If G has a proper subgroup H with p | o(H), then by the induction hypothesis H contains an element of order p, and hence so does G. We may therefore assume that p ∤ o(N(a)) for every a ∉ Z(G) (for such a, N(a) ≠ G is a proper subgroup). Then p divides o(G)/o(N(a)) for every a ∉ Z(G).
By the class equation,
o(Z(G)) = o(G) − Σ_{N(a)≠G} o(G)/o(N(a)).  — (1)
p divides each term of the sum and hence p divides the sum; p also divides o(G).
∴ p divides o(Z(G)).
Since Z(G) is abelian and p | o(Z(G)), Cauchy's theorem for abelian groups gives an element of order p in Z(G), and hence in G.
Definition
A partition of a positive integer n is an expression of n as a sum n = n₁ + n₂ + ... + n_k of positive integers with n₁ ≤ n₂ ≤ ... ≤ n_k. The number of distinct partitions of n is denoted by p(n).

Example
We have 4 = 4, 4 = 1+3, 4 = 2+2, 4 = 1+1+2, 4 = 1+1+1+1.
∴ p(4) = 5.
Similarly p(1) = 1, p(2) = 2, p(3) = 3, p(5) = 7, p(6) = 11.
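The values of p(n) listed above can be generated mechanically. A minimal sketch (our own illustration, using a standard coin-change-style recurrence rather than anything from the text):

```python
def partition_count(n):
    """p(n): count partitions of n by dynamic programming over allowed parts."""
    # ways[k] = number of partitions of k using the parts considered so far
    ways = [1] + [0] * n
    for part in range(1, n + 1):
        for k in range(part, n + 1):
            ways[k] += ways[k - part]
    return ways[n]

print([partition_count(n) for n in range(1, 7)])  # [1, 2, 3, 5, 7, 11]
```

The output matches the table in the example: p(4) = 5 and p(6) = 11.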
Theorem
The number of distinct conjugate classes of Sₙ is p(n).

Proof : Let σ ∈ Sₙ. Decompose σ into disjoint cycles:
σ = (a₁, a₂, ..., a_{n₁})(b₁, b₂, ..., b_{n₂}) ... (g₁, g₂, ..., g_{n_k}),
where every one of the n symbols appears in exactly one cycle, so that n₁ + n₂ + ... + n_k = n. Arranging the lengths so that n₁ ≤ n₂ ≤ ... ≤ n_k, σ determines a partition of n.
Let θ ∈ Sₙ. For the cycle c₁ = (a₁, a₂, ..., a_{n₁}),
θc₁θ⁻¹ = (θ(a₁), θ(a₂), ..., θ(a_{n₁})).
Therefore if c₁ is of length n₁, then θc₁θ⁻¹ is also of length n₁. Also, since c₁, c₂, ..., c_k are disjoint, θc₁θ⁻¹, θc₂θ⁻¹, ..., θc_kθ⁻¹ are also disjoint.
∴ σ and θσθ⁻¹ define the same partition of n.
Conversely, if σ and τ determine the same partition, the permutation θ which maps each entry of each cycle of σ to the corresponding entry of the corresponding cycle of τ satisfies τ = θσθ⁻¹, so σ and τ are conjugate.
∴ Corresponding to each conjugate class of Sₙ there is a unique partition of n, and conversely.
∴ The number of distinct conjugate classes of Sₙ is p(n).
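The correspondence between conjugate classes and partitions can be checked computationally. The sketch below (our own illustration; `cycle_type` is our helper, not the text's notation) counts the distinct cycle types in S₄ and recovers p(4) = 5:

```python
from itertools import permutations

def cycle_type(p):
    """Sorted tuple of cycle lengths of a permutation given as a tuple of images."""
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, n = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                n += 1
            lengths.append(n)
    return tuple(sorted(lengths))

types = {cycle_type(p) for p in permutations(range(4))}
print(len(types))   # 5 = p(4): partitions 4, 1+3, 2+2, 1+1+2, 1+1+1+1
```

Since two permutations are conjugate exactly when they share a cycle type, 5 is also the number of conjugate classes of S₄.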
Example
The number of distinct conjugate classes of S₃ is p(3) = 3, since 3 = 3, 3 = 1+2, 3 = 1+1+1.

1.5 LET US SUM UP
In this lesson we have discussed the following concepts:
(i) the conjugacy relation in a group and its equivalence classes;
(ii) the normaliser N(a) of an element and the formula c_a = o(G)/o(N(a));
(iii) the class equation of a finite group;
(iv) the center of a group and groups of prime-power order;
(v) Cauchy's Theorem;
(vi) partitions of n and the conjugate classes of Sₙ.

1.6 LESSON-END ACTIVITIES
1.7 REFERENCES
1) A First Course in Abstract Algebra by J.B. Fraleigh, Narosa Publishing House, New Delhi, 1988.
2) Topics in Algebra by I.N. Herstein, Second Edition.
UNIT I
GROUP THEORY
LESSON 2
SYLOW'S THEOREM
CONTENTS:
2.0 Aims and Objectives
2.1 Introduction
2.2 Sylow's Theorem
2.3 Let us sum up
2.4 Lesson-end activities
2.5 References

2.0 AIMS AND OBJECTIVES

2.1 INTRODUCTION
According to Lagrange's Theorem, the order of a subgroup always divides the order of a finite group. However, the converse need not be true. Sylow's Theorem asserts the existence of subgroups of prescribed prime-power order in arbitrary finite groups. We introduce p-Sylow subgroups of a finite group and devise a method for finding the number of p-Sylow subgroups of a finite group.
2.2 SYLOW'S THEOREM

Theorem (Sylow)
If p is a prime number and p^α | o(G), then G has a subgroup of order p^α.

Proof : Recall that the binomial coefficient
nCk = n(n−1)...(n−k+1) / k(k−1)...3·2·1.
Let o(G) = n = p^α m, where p^r | m and p^(r+1) ∤ m, and let k = p^α. Then
(p^α m)C(p^α) = p^α m (p^α m − 1)(p^α m − 2) ... (p^α m − i) ... (p^α m − p^α + 1) / p^α (p^α − 1) ... (p^α − i) ... 1.
Now we show that the power of p dividing p^α m − i in the numerator is the same as the power of p dividing p^α − i in the denominator. Let p^k be the exact power of p dividing p^α − i; then p^α − i = a·p^k, where p ∤ a and k < α. Then
i = p^α − a·p^k,
so
p^α m − i = p^α m − p^α + a·p^k = (m − 1)p^α + a·p^k = p^k [ (m − 1)p^(α−k) + a ].
∴ p^k | (p^α m − i). Conversely, the same identity shows p^k | (p^α m − i) implies p^k | (p^α − i).
Therefore all the powers of p in the numerator and the denominator cancel out, except the power of p which divides m.
∴ As p^r | m and p^(r+1) ∤ m, we get p^r | (p^α m)C(p^α) but p^(r+1) ∤ (p^α m)C(p^α).

Let M be the set of all subsets of G which have p^α elements. Then M has (p^α m)C(p^α) elements. Define, for M₁, M₂ ∈ M, M₁ ~ M₂ if there exists g ∈ G such that M₁ = M₂g.
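The divisibility fact just proved, that p^r is the exact power of p dividing (p^α m)C(p^α), can be spot-checked numerically (a sketch of our own; the sample values p = 3, α = 2, m = 18 are arbitrary):

```python
from math import comb

def vp(n, p):
    """Exponent of the prime p in the integer n > 0."""
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

# With n = p^a * m and p^r the exact power of p in m,
# exactly p^r should divide C(p^a * m, p^a).
p, a, m = 3, 2, 18          # m = 18 = 2 * 3^2, so r = 2
r = vp(m, p)
print(vp(comb(p**a * m, p**a), p), r)   # both 2
```

Trying other values of p, α, m gives the same agreement, consistent with the cancellation argument above.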
Since p^(r+1) does not divide the number of elements of M, and M is the union of its disjoint equivalence classes under ~, there is at least one equivalence class { M₁, M₂, ..., M_t } such that p^(r+1) ∤ t.
Let H = { g ∈ G : M₁g = M₁ }. Then H is a subgroup of G. We claim that t equals the number of right cosets of H in G. Indeed,
M₁g₁ = M₁g₂ ⟺ M₁g₁g₂⁻¹ = M₁ ⟺ g₁g₂⁻¹ ∈ H ⟺ Hg₁ = Hg₂.
Therefore our claim is proved.
Now,
o(G)/o(H) = number of right cosets of H in G = number of elements in { M₁, M₂, ..., M_t } = t.
∴ o(G) = t·o(H) = p^α m.
Since p^(r+1) ∤ t and p^(α+r) | p^α m = t·o(H), it must follow that p^α | o(H), and so
o(H) ≥ p^α.  — (1)
For all h ∈ H, M₁h = M₁, by the definition of H. Therefore, fixing m₁ ∈ M₁, the o(H) distinct elements m₁h (h ∈ H) all lie in M₁, so M₁ has at least o(H) distinct elements. Therefore
o(M₁) ≥ o(H).
But o(M₁) = p^α,
∴ o(H) ≤ p^α.  — (2)
From (1) and (2),
o(H) = p^α.
Thus H is a subgroup of G of order p^α, and the theorem is proved.
Corollary
If p^m | o(G) and p^(m+1) ∤ o(G), then G has a subgroup of order p^m.
This is a special case of the above theorem and is usually known as Sylow's theorem.

Definition
A subgroup of G of order p^m, where p^m | o(G) but p^(m+1) ∤ o(G), is called a p-Sylow subgroup of G.
Lemma
Let n(k) be defined by p^n(k) | (p^k)! but p^(n(k)+1) ∤ (p^k)!.
Then
n(k) = 1 + p + p² + ... + p^(k−1).  — (1)

Proof : If k = 1, (p^k)! = p! = 1·2·3·...·(p−1)·p.
Then p divides p! but p² ∤ p!. Therefore n(1) = 1.
Now
(p^k)! = 1·2·...·p·(p+1)·...·2p·(2p+1)·...·3p·(3p+1)·...·p^(k−1)·p.
The factors in this expansion of (p^k)! that contribute to the power of p dividing (p^k)! are p, 2p, 3p, ..., p^(k−1)·p.
Therefore p^n(k) is the power of p which divides
p·(2p)·(3p)·...·(p^(k−1)·p) = p^(p^(k−1)) · (p^(k−1))!.
Therefore
n(k) = p^(k−1) + n(k−1).
Therefore,
n(k) − n(k−1) = p^(k−1),
n(k−1) − n(k−2) = p^(k−2),
...
n(2) − n(1) = p,
n(1) = 1.
Adding these up with cross cancellation,
n(k) = 1 + p + p² + ... + p^(k−1).
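The formula for n(k) can be confirmed against Legendre's formula for the exponent of a prime in a factorial (the function below is our own sketch; the choice p = 3 is arbitrary):

```python
def vp_factorial(n, p):
    """Exponent of the prime p in n!, by Legendre's formula sum(n // p^i)."""
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

# n(k) = exponent of p in (p^k)! should equal 1 + p + ... + p^(k-1).
p = 3
for k in range(1, 5):
    lhs = vp_factorial(p**k, p)
    rhs = sum(p**i for i in range(k))
    print(k, lhs, rhs)   # lhs == rhs for every k
```

For p = 3 this prints 1, 4, 13, 40 on both sides, matching 1, 1+3, 1+3+9, 1+3+9+27.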
Lemma
S_(p^k) has a p-Sylow subgroup.

Proof : The proof is by induction on k.
If k = 1, then the element (1, 2, 3, ..., p) is in S_p and is of order p, and so generates a subgroup of order p. Since n(1) = 1, the result is proved for k = 1.
Suppose that the result is true for k − 1. We show that the result is true for k.
Divide the integers 1, 2, ..., p^k into p clumps, each with p^(k−1) elements, as follows:
{1, 2, ..., p^(k−1)}, {p^(k−1)+1, p^(k−1)+2, ..., 2p^(k−1)}, ..., {(p−1)p^(k−1)+1, ..., p^k}.
Then the permutation σ defined by the product of cycles
σ = (1, p^(k−1)+1, ..., (p−1)p^(k−1)+1)
    (2, p^(k−1)+2, ..., (p−1)p^(k−1)+2)
    ...
    (j, p^(k−1)+j, ..., (p−1)p^(k−1)+j)
    ...
    (p^(k−1), 2p^(k−1), ..., p^k)
has the following properties: (1) σ^p = e; (2) if τ is a permutation that leaves all the elements i fixed for i > p^(k−1) (and hence affects only 1, 2, ..., p^(k−1)), then σ^(−j) τ σ^j moves only jp^(k−1)+1, jp^(k−1)+2, ..., (j+1)p^(k−1).
Consider A = { τ ∈ S_(p^k) : τ(i) = i if i > p^(k−1) }. Let τ₁, τ₂ ∈ A.
Then τ₁(i) = i and τ₂(i) = i for all i > p^(k−1), so τ₁τ₂(i) = τ₁(i) = i for all i > p^(k−1).
∴ τ₁τ₂ ∈ A.
Therefore A is a subgroup of S_(p^k), and the elements of A can carry out any permutation on 1, 2, ..., p^(k−1).
∴ A ≅ S_(p^(k−1)).
By the induction hypothesis, A ≅ S_(p^(k−1)) has a p-Sylow subgroup P₁ of order p^(n(k−1)). Let
T = P₁ (σ^(−1)P₁σ) (σ^(−2)P₁σ²) ... (σ^(−(p−1))P₁σ^(p−1)) = P₁P₂ ... P_p,
where P_(i+1) = σ^(−i)P₁σ^i. By property (2) of σ, the subgroups P₁, P₂, ..., P_p move disjoint sets of symbols, so their elements commute and T is a subgroup of S_(p^k) with
o(T) = (p^(n(k−1)))^p = p^(p·n(k−1)).
Since σ^p = e and σ^(−i)P₁σ^i = P_(i+1), we have σ^(−1)Tσ = T.
Let P = { σ^j t : t ∈ T, 0 ≤ j ≤ p−1 }. Since σ ∉ T and σ^(−1)Tσ = T, P is a subgroup of S_(p^k), and
o(P) = p·o(T) = p·p^(p·n(k−1)) = p^(p·n(k−1)+1).
But p·n(k−1) + 1 = p(1 + p + ... + p^(k−2)) + 1 = 1 + p + ... + p^(k−1) = n(k).
∴ o(P) = p^(n(k)), so P is a p-Sylow subgroup of S_(p^k), and the induction is complete.
Double cosets : Let A and B be subgroups of G. For x, y ∈ G define x ~ y if y = a x b for some a ∈ A, b ∈ B.

Lemma
The relation ~ defined above is an equivalence relation, and the equivalence class of x ∈ G is the set
AxB = { a x b : a ∈ A, b ∈ B },
called a double coset of A and B.
Proof :
(i) Since x = e x e for all x ∈ G, x ~ x for all x ∈ G.
(ii) If x ~ y, then y = a x b for some a ∈ A, b ∈ B. Then x = a⁻¹ y b⁻¹, so y ~ x.
(iii) If x ~ y and y ~ z, then
y = a x b for some a ∈ A, b ∈ B, and
z = a₁ y b₁ for some a₁ ∈ A, b₁ ∈ B.
Then
z = a₁(a x b)b₁ = (a₁a) x (b b₁),
where a₁a ∈ A and bb₁ ∈ B.
∴ x ~ z.
∴ ~ is an equivalence relation on G. The equivalence class of x is { y : y = a x b for some a ∈ A, b ∈ B } = AxB.
Lemma
If A and B are finite subgroups of G, then
o(AxB) = o(A)·o(B) / o(A ∩ xBx⁻¹).

Proof : Define T : AxB → A(xBx⁻¹) by
(a x b)T = a x b x⁻¹.
First we show T is one-one. Let
(a x b)T = (a₁ x b₁)T.
Then a x b x⁻¹ = a₁ x b₁ x⁻¹,
∴ a x b = a₁ x b₁.
∴ T is one-one.
Next we show that T is onto. Let a x b x⁻¹ ∈ A(xBx⁻¹). Then a x b ∈ AxB and (a x b)T = a x b x⁻¹. Therefore T is onto.
∴ o(AxB) = o(A(xBx⁻¹)).  — (1)
But as A and xBx⁻¹ are subgroups of G,
o(A(xBx⁻¹)) = o(A)·o(xBx⁻¹) / o(A ∩ xBx⁻¹).  — (2)
The map ψ : B → xBx⁻¹ defined by
ψ(b) = x b x⁻¹ for all b ∈ B
is one-one and onto.
∴ o(xBx⁻¹) = o(B).  — (3)
From (1), (2), (3) we get
o(AxB) = o(A)·o(B) / o(A ∩ xBx⁻¹).
Definition
Two subgroups A and B of a group G are said to be conjugate if there exists x ∈ G such that A = xBx⁻¹.

Theorem (Second Part of Sylow's Theorem)
If G is a finite group, p a prime, and p^n | o(G) but p^(n+1) ∤ o(G), then any two subgroups of G of order p^n are conjugate.

Proof : Let A and B be subgroups of G, each of order p^n.
G is the union of disjoint double cosets of A and B:
G = ∪ AxB.
By the Lemma,
o(AxB) = o(A)·o(B) / o(A ∩ xBx⁻¹).
If A ≠ xBx⁻¹ for every x ∈ G, then
o(A ∩ xBx⁻¹) = p^m, where m < n, so
o(AxB) = o(A)·o(B)/p^m = p^(2n)/p^m = p^(2n−m), and 2n − m ≥ n + 1.
Since p^(n+1) | o(AxB) for every x, and since o(G) = Σ o(AxB), we get the contradiction p^(n+1) | o(G). Therefore
A = gBg⁻¹ for some g ∈ G.
The theorem is proved.
Lemma
The number of p-Sylow subgroups of G equals o(G)/o(N(P)), where P is any p-Sylow subgroup of G. In particular, this number is a divisor of o(G).

Proof : The normaliser of a subgroup H of G, defined by
N(H) = { x ∈ G : xHx⁻¹ = H } = { x ∈ G : xH = Hx },
is a subgroup of G. The number of distinct conjugates xHx⁻¹ of H in G is the index of N(H) in G. Since any two p-Sylow subgroups are conjugate, the p-Sylow subgroups of G are exactly the conjugates of P, so their number is o(G)/o(N(P)), which divides o(G).

Theorem (Third Part of Sylow's Theorem)
The number of p-Sylow subgroups of G is of the form 1 + kp.

Proof : Let P be a p-Sylow subgroup of G, o(P) = p^n. Decompose G into disjoint double cosets of P and P:
G = ∪ PxP.
By the Lemma,
o(PxP) = o(P)·o(P)/o(P ∩ xPx⁻¹) = [o(P)]²/o(P ∩ xPx⁻¹).
Therefore, if P ∩ xPx⁻¹ ≠ P, then o(P ∩ xPx⁻¹) = p^m with m < n, and p^(n+1) | o(PxP), where o(P) = p^n.
If x ∈ N(P), then xPx⁻¹ = P, so PxP = P(Px) = P²x = Px, and hence o(PxP) = p^n.
Now
o(G) = Σ_{x∈N(P)} o(PxP) + Σ_{x∉N(P)} o(PxP),  — (1)
where each sum runs over one element from each double coset. However, if x ∈ N(P), since PxP = Px and the distinct cosets Px with x ∈ N(P) fill out N(P),
Σ_{x∈N(P)} o(PxP) = Σ_{x∈N(P)} o(Px) = o(N(P)).  — (2)
Since p^(n+1) divides each term of the second sum, that sum equals p^(n+1)·u for some non-negative integer u. Therefore
o(G) = o(N(P)) + p^(n+1)·u,
so
o(G)/o(N(P)) = 1 + p^(n+1)·u / o(N(P)).  — (3)
Now o(G)/o(N(P)) is an integer, hence so is p^(n+1)·u/o(N(P)). Write o(N(P)) = p^n·w with p ∤ w (for P ⊆ N(P) gives p^n | o(N(P)), while o(N(P)) | o(G) gives p^(n+1) ∤ o(N(P))). Then p^(n+1)·u/o(N(P)) = p·u/w; since this is an integer and p ∤ w, w | u, so
p^(n+1)·u / o(N(P)) = kp, where k = u/w is an integer.
Therefore (3) becomes
o(G)/o(N(P)) = 1 + kp.
Since o(G)/o(N(P)) is the number of p-Sylow subgroups, the theorem is proved.
2.3 LET US SUM UP
In this lesson we have discussed the following concepts:
(i) If G is a finite group, p is a prime and p^n | o(G), then G has a subgroup of order p^n (Sylow's theorem).
(ii) Any two p-Sylow subgroups of a finite group G are conjugate.
(iii) The number of p-Sylow subgroups of G is o(G)/o(N(P)) and is of the form 1 + kp.

2.4 LESSON-END ACTIVITIES

2.5 REFERENCES
1) Topics in Algebra by I.N. Herstein, Second Edition, Chapter 2.
UNIT I
GROUP THEORY
LESSON 3
DIRECT PRODUCTS
CONTENTS:
3.0 Aims and Objectives
3.1 Introduction
3.2 External and Internal Direct Products
3.3 Let us sum up
3.4 Lesson-end activities
3.5 References

3.0 AIMS AND OBJECTIVES
In this lesson we introduce two different products of groups and then we prove that they are isomorphic.
After going through this lesson, you will be able to:
(i) Define the external direct product of groups.
(ii) Define the internal direct product of normal subgroups.
(iii) Prove that the two products are isomorphic.

3.1 INTRODUCTION
Given two groups A and B, we obtain a new group G = A×B as the external direct product of A and B. We show that G can also be obtained by an internal construction: we obtain G as the internal direct product of two normal subgroups of G. Finally we prove the isomorphism between the two products.

3.2 EXTERNAL AND INTERNAL DIRECT PRODUCTS
Theorem
Let A and B be any two groups. In A×B = {(a, b) : a ∈ A, b ∈ B} we define a binary operation by
(a₁, b₁)(a₂, b₂) = (a₁a₂, b₁b₂) for all (a₁, b₁), (a₂, b₂) ∈ A×B.
Then A×B is a group under this binary operation.

Proof : Let (a₁, b₁), (a₂, b₂), (a₃, b₃) ∈ A×B.
Then (a₁, b₁)[(a₂, b₂)(a₃, b₃)] = (a₁, b₁)(a₂a₃, b₂b₃) = (a₁(a₂a₃), b₁(b₂b₃)) = ((a₁a₂)a₃, (b₁b₂)b₃) = [(a₁, b₁)(a₂, b₂)](a₃, b₃).
∴ Associativity is satisfied.
Let e and f be the identity elements of A and B respectively. Then (e, f) ∈ A×B. If (a, b) ∈ A×B, then
(a, b)(e, f) = (ae, bf) = (a, b), by the identity property in A and B.
Similarly (e, f)(a, b) = (a, b). Therefore (e, f) is the identity element of A×B.
Let (a, b) ∈ A×B, where a ∈ A and b ∈ B. Let a⁻¹ be the inverse of a in A and b⁻¹ the inverse of b in B. Then (a⁻¹, b⁻¹) ∈ A×B and
(a, b)(a⁻¹, b⁻¹) = (e, f); also (a⁻¹, b⁻¹)(a, b) = (e, f).
Therefore (a⁻¹, b⁻¹) is the inverse of (a, b).
∴ A×B is a group.
It is called the external direct product of the groups A and B.
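The componentwise operation can be exercised on a small example. The sketch below (our own illustration) forms the external direct product of Z₂ and Z₃ (integers under addition mod n) and checks the group axioms numerically:

```python
# External direct product of Z_2 and Z_3 with componentwise addition.
A = range(2)
B = range(3)
G = [(a, b) for a in A for b in B]

def op(x, y):
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 3)

# (0, 0) is the identity, every element has an inverse, o(G) = o(A) * o(B).
assert all(op(x, (0, 0)) == x for x in G)
assert all(any(op(x, y) == (0, 0) for y in G) for x in G)
print(len(G))   # 6

# The element (1, 1) has order lcm(2, 3) = 6, so Z_2 x Z_3 is cyclic.
x, n = (0, 0), 0
while True:
    x = op(x, (1, 1))
    n += 1
    if x == (0, 0):
        break
print(n)        # 6
```

The order-6 generator illustrates the familiar isomorphism Z₂ × Z₃ ≅ Z₆.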
Definition
Let G₁, G₂, G₃, ..., Gₙ be any n groups. Let G = G₁ × G₂ × ... × Gₙ = {(g₁, g₂, ..., gₙ) : g_i ∈ G_i}. Define a product in G by
(g₁, g₂, ..., gₙ)(g₁′, g₂′, ..., gₙ′) = (g₁g₁′, g₂g₂′, ..., gₙgₙ′).
Then G is a group, called the external direct product of G₁, ..., Gₙ.

Definition
Let G be a group and N₁, N₂, ..., Nₙ normal subgroups of G such that
G = N₁N₂ ... Nₙ = { h₁h₂ ... hₙ : h_i ∈ N_i, i = 1, 2, ..., n }.
If every g ∈ G can be expressed as g = m₁m₂ ... mₙ, with m_i ∈ N_i, in a unique way, then we say G is the internal direct product of N₁, N₂, ..., Nₙ.
Theorem
If G is the external direct product of the groups A and B, then G is the internal direct product of
Ā = { (a, f) : a ∈ A } and B̄ = { (e, b) : b ∈ B },
e, f being the identity elements of A and B respectively, and A ≅ Ā, B ≅ B̄.

Proof : Define φ : A → Ā by φ(a) = (a, f), for all a ∈ A.
Then
φ(a₁) = φ(a₂) ⟹ (a₁, f) = (a₂, f) ⟹ a₁ = a₂.
Therefore φ is one-one.
Let (a, f) ∈ Ā. Then a ∈ A and φ(a) = (a, f). Therefore φ is onto. Also,
if a₁, a₂ ∈ A, φ(a₁a₂) = (a₁a₂, f) = (a₁, f)(a₂, f) = φ(a₁)φ(a₂).
Therefore φ is a homomorphism.
∴ A ≅ Ā. Similarly B ≅ B̄.
To see that Ā is normal in G, let (a, f) ∈ Ā and (a₁, b₁) ∈ G. Then
(a₁, b₁)(a, f)(a₁, b₁)⁻¹ = (a₁aa₁⁻¹, b₁fb₁⁻¹) = (a₁aa₁⁻¹, f) ∈ Ā.
Similarly B̄ is normal in G. Every (a, b) ∈ G can be written as (a, b) = (a, f)(e, b), with (a, f) ∈ Ā and (e, b) ∈ B̄, and this expression is clearly unique. Hence G is the internal direct product of Ā and B̄.
Lemma
Suppose G is the internal direct product of N₁, N₂, ..., Nₙ. Then for i ≠ j, N_i ∩ N_j = {e}, and if a ∈ N_i, b ∈ N_j, then ab = ba.

Proof : Since G is the internal direct product of N₁, N₂, ..., Nₙ, the N_i are normal subgroups of G and every element of G has a unique expression as a product m₁m₂ ... mₙ with m_i ∈ N_i. Let i ≠ j, say i < j, and let x ∈ N_i ∩ N_j.
As x ∈ N_i, x = e ... e x e ... e, with x in the i-th place.  — (1)
As x ∈ N_j, x = e ... e x e ... e, with x in the j-th place.  — (2)
Comparing (1) and (2) and using the uniqueness of such expressions, x = e. Hence N_i ∩ N_j = {e}.
Now let a ∈ N_i, b ∈ N_j, i ≠ j. Since N_j is normal, aba⁻¹ ∈ N_j, so the commutator aba⁻¹b⁻¹ ∈ N_j.  — (3)
Similarly, since N_i is normal, ba⁻¹b⁻¹ ∈ N_i, so aba⁻¹b⁻¹ ∈ N_i.  — (4)
From (3) and (4), aba⁻¹b⁻¹ ∈ N_i ∩ N_j = {e}, so aba⁻¹b⁻¹ = e, i.e., ab = ba.
Theorem
Let G be a group which is the internal direct product of N₁, N₂, ..., Nₙ, and let T = N₁ × N₂ × ... × Nₙ be the external direct product of N₁, N₂, ..., Nₙ. Then G and T are isomorphic.

Proof :
G = { b₁b₂ ... bₙ : b_i ∈ N_i },
T = { (b₁, b₂, ..., bₙ) : b_i ∈ N_i }.
Define ψ : T → G by
ψ((b₁, b₂, ..., bₙ)) = b₁b₂ ... bₙ, where each b_i ∈ N_i.
If y ∈ G, then y = b₁b₂ ... bₙ, b_i ∈ N_i. Let x = (b₁, b₂, ..., bₙ). Then x ∈ T and
ψ(x) = ψ((b₁, b₂, ..., bₙ)) = b₁b₂ ... bₙ = y.
∴ ψ is onto.
Let ψ(x) = ψ(y), where x = (a₁, a₂, ..., aₙ) and y = (b₁, b₂, ..., bₙ). Then
ψ((a₁, a₂, ..., aₙ)) = ψ((b₁, b₂, ..., bₙ))
∴ a₁a₂ ... aₙ = b₁b₂ ... bₙ.
By the uniqueness of such expressions in an internal direct product,
a₁ = b₁, a₂ = b₂, ..., aₙ = bₙ.
∴ x = y.
∴ ψ is one-one.
Also,
ψ(xy) = ψ((a₁b₁, a₂b₂, ..., aₙbₙ))
      = a₁b₁a₂b₂ ... aₙbₙ
      = a₁a₂ ... aₙ b₁b₂ ... bₙ   (as a_ib_j = b_ja_i for i ≠ j, by the previous lemma)
      = ψ(x)ψ(y).
∴ ψ is a homomorphism, and hence G ≅ T.
3.3 LET US SUM UP
1. If A and B are groups, then A×B = {(a, b) : a ∈ A, b ∈ B}, with componentwise multiplication, is a group, called the external direct product of A and B.
2. If G is a group and N₁, N₂ are normal subgroups of G such that G = N₁N₂ = { h₁h₂ : h₁ ∈ N₁, h₂ ∈ N₂ } and every element of G can be uniquely expressed as a product of elements of N₁ and N₂, then G is the internal direct product of N₁ and N₂.
3. The external direct product of groups is isomorphic to the internal direct product of the corresponding normal subgroups.

3.4 LESSON-END ACTIVITIES

3.5 REFERENCES
1) Topics in Algebra by I.N. Herstein, Second Edition.
LESSON 4
EUCLIDEAN RINGS
CONTENTS
4.0 Aims and Objectives
4.1 Introduction
4.2 Euclidean Rings
4.3 A particular Euclidean Ring
4.4 Let us sum up
4.5 Lesson end activities.
4.6 References
4.1 INTRODUCTION
The class of rings we propose to study now is motivated by several existing examples: the ring of integers, the Gaussian integers, and polynomial rings. Euclidean rings constitute a special category of rings, in which a non-negative integer d-value is defined for each non-zero element of the ring. After introducing some important properties of Euclidean rings, we shall prove the Unique Factorization Theorem, describe the structure of maximal ideals, and prove Fermat's Theorem.
4.2 EUCLIDEAN RINGS

Definition: An integral domain R is said to be a Euclidean ring if for every a ≠ 0 in R there is defined a non-negative integer d(a) such that
1. for all a, b ∈ R, both non-zero, d(a) ≤ d(ab);
2. for any a, b ∈ R, both non-zero, there exist t, r ∈ R such that a = tb + r, where either r = 0 or d(r) < d(b).

Examples :
1. Any field F, with d(a) = 1 for all a ∈ F − {0}, is a Euclidean domain.
For a, b ∈ F − {0}, d(a) = 1 and d(ab) = 1, so that d(a) ≤ d(ab).
Let a, b ∈ F with b ≠ 0. Then a = (ab⁻¹)b + 0, so we may take t = ab⁻¹ and r = 0.
∴ F is a Euclidean domain.
2. The ring of Gaussian integers Z[i] = { a + ib : a, b ∈ Z }, with d(a + ib) = a² + b², is a Euclidean domain.
(i) For non-zero x, y ∈ Z[i], d(x) ≥ 1 and d(xy) = d(x)·d(y), so d(x) ≤ d(xy).
(ii) Let x, y ∈ Z[i] with y ≠ 0. We show that there exist t, r ∈ Z[i] such that x = ty + r, where either r = 0 or d(r) < d(y).
Write x = a + ib and y = m + in (≠ 0). Then
x/y = (a + ib)/(m + in) = (a + ib)(m − in)/(m² + n²) = p + iq,
where
p = (am + bn)/(m² + n²) and q = (bm − an)/(m² + n²).
Choose integers λ and μ with
|p − λ| ≤ 1/2 and |q − μ| ≤ 1/2.  — (1)
Put t = λ + iμ and r = x − ty.
Then t ∈ Z[i], r ∈ Z[i], and x = ty + r.
To prove that either r = 0 or d(r) < d(y): if r ≠ 0, then
d(r) = d(x − ty) = d((p − λ) + i(q − μ)) · d(y)
     = [ (p − λ)² + (q − μ)² ] (m² + n²)
     ≤ (1/4 + 1/4)(m² + n²), from (1),
     < m² + n² = d(y).
Hence Z[i] is a Euclidean domain.
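The rounding construction above translates directly into code. A minimal sketch (our own, using Python's built-in complex numbers with integer components; `round` supplies the nearest integers λ and μ):

```python
# Division algorithm in Z[i], following the nearest-Gaussian-integer argument.
def gauss_divmod(x, y):
    """Return (t, r) with x = t*y + r and d(r) < d(y), where d(a+ib) = a^2 + b^2."""
    q = x / y                                  # exact complex quotient p + iq
    t = complex(round(q.real), round(q.imag))  # nearest Gaussian integer
    r = x - t * y
    return t, r

def d(z):
    return int(z.real) ** 2 + int(z.imag) ** 2

x, y = complex(27, 23), complex(8, 1)
t, r = gauss_divmod(x, y)
assert x == t * y + r
assert r == 0 or d(r) < d(y)
print(t, r)   # t = 4+2j, r = -3+3j
```

Since |p − λ| and |q − μ| are at most 1/2, the remainder's norm is at most half of d(y), so the bound d(r) < d(y) always holds.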
Definition: An integral domain R with unit element is a principal ideal ring if every ideal A in R is of the form A = (a₀) = { a₀x : x ∈ R } for some a₀ ∈ R.

THEOREM: Let R be a Euclidean ring and let A be an ideal of R. Then there exists an element a₀ ∈ A such that A consists exactly of all a₀x as x ranges over R. (Every Euclidean ring is a principal ideal ring.)

PROOF: If A = {0}, take a₀ = 0. Otherwise choose a₀ ≠ 0 in A with d(a₀) minimal, say d(a₀) = d₀ = min { d(t) : t ∈ A − {0} }.
Let a ∈ A. By the division algorithm there exist q, r ∈ R with
a = q a₀ + r, where r = 0 or d(r) < d₀.
Now a ∈ A and q a₀ ∈ A (∵ A is an ideal),
∴ r = a − q a₀ ∈ A.
If r ≠ 0, then d(r) < d₀, contradicting the minimality of d₀. Thus r = 0.
∴ a = q a₀ ∈ (a₀), and hence A ⊆ (a₀).
Conversely, a₀ ∈ A and A an ideal give a₀x ∈ A for every x ∈ R, so (a₀) ⊆ A.
∴ A = (a₀).

COROLLARY: A Euclidean ring possesses a unit element.

PROOF: R itself is an ideal of R, so by the theorem R = (u₀) for some u₀ ∈ R. In particular u₀ = u₀c for some c ∈ R. For any a ∈ R, a = xu₀ for some x, so ac = xu₀c = xu₀ = a. Hence c is a unit element of R.
Definition: An integral domain R is called a Principal Ideal Domain (P.I.D.) if every ideal of R is a principal ideal.

Examples:
1. Every field is a P.I.D. For, the only ideals of a field F are {0} and F itself.
2. Since every ideal of Z is of the form nZ for some n ∈ Z, each ideal is a principal ideal. ∴ Z is a P.I.D.

Definition: If a, b ∈ R, then d ∈ R is said to be a greatest common divisor of a and b if
1. d | a and d | b;
2. whenever c | a and c | b, then c | d.
LEMMA: Let R be a Euclidean ring. Then any two elements a and b in R have a greatest common divisor d. Moreover, d = λa + μb for some λ, μ ∈ R.

PROOF: Let A = { ra + sb : r, s ∈ R }. A is an ideal of R: if x = r₁a + s₁b and y = r₂a + s₂b are in A, then x − y = (r₁ − r₂)a + (s₁ − s₂)b ∈ A, and ux = (ur₁)a + (us₁)b ∈ A for every u ∈ R. Since R is a Euclidean ring, A = (d) for some d ∈ A, so d = λa + μb for some λ, μ ∈ R. As a, b ∈ A = (d), d | a and d | b. If c | a and c | b, then c | λa + μb = d. Hence d is a greatest common divisor of a and b.
Note: A unit in a ring is an element whose inverse is also in the ring. Also note that a unit is different from the unity.

LEMMA: Let R be an integral domain with unit element and suppose that for a, b ∈ R both a | b and b | a are true. Then a = ub, where u is a unit in R.

PROOF: Since a | b, b = xa for some x ∈ R; since b | a, a = yb for some y ∈ R. Then b = xyb. If b = 0, then a = 0 and the result is trivial; otherwise, cancelling b in the integral domain R, xy = 1, so y is a unit, and a = yb as required. Two such elements a and b are called associates.

LEMMA: Let R be a Euclidean ring and a, b ∈ R, both non-zero. If b is not a unit in R, then d(a) < d(ab).

PROOF: Consider the ideal A = (a) = { xa : x ∈ R }. By property 1 of a Euclidean ring, d(a) ≤ d(xa) for x ≠ 0 in R. Thus the d-value of a is the minimum d-value of any non-zero element in A, i.e., d(a) = min { d(t) : t ∈ A − {0} }.
Now ab ∈ A. If d(a) = d(ab), then ab is an element of minimal d-value in A, so, as in the proof of the preceding theorem, A = (ab); in particular a = abx for some x ∈ R, whence bx = 1 and b is a unit, a contradiction. Hence d(a) < d(ab).

Definition: A non-unit π ≠ 0 in R is a prime element of R if whenever π = ab with a, b ∈ R, one of a or b is a unit.

LEMMA: Every non-zero element in a Euclidean ring R is either a unit in R or can be written as the product of a finite number of prime elements of R.

PROOF:
Step 1: For every a ≠ 0 in R, d(1·a) = d(a) gives d(1) ≤ d(a).  — (1)
Step 2: If d(a) = d(1), then a is a unit. For, by the division algorithm, 1 = qa + r with r = 0 or d(r) < d(a) = d(1); by (1) the latter is impossible.
∴ r = 0, (i.e.) 1 = qa, and hence a is a unit.
Step 3: Let a be a non-zero element of R. The proof is by induction on d(a).
If d(a) = d(1), then a is a unit (by Step 2), and the lemma is true.
Let d(a) > d(1), and assume the lemma to be true for all elements x in R such that d(x) < d(a).
If a is a prime element, we are done. Otherwise a = bc, where neither b nor c is a unit. By the previous lemma, d(b) < d(bc) = d(a) and d(c) < d(bc) = d(a). By the induction hypothesis,
b = p₁p₂ ... pₙ, c = p₁′p₂′ ... p_m′, where the p's and p′'s are prime elements of R.
∴ a = bc = (p₁p₂ ... pₙ)(p₁′ ... p_m′),
a product of a finite number of prime elements.
This completes the proof.
Definition: In the Euclidean ring R, a and b in R are said to be relatively prime if their greatest common divisor is a unit in R. Since an associate of a greatest common divisor is again one, we may then take the greatest common divisor to be 1 and write (a, b) = 1.

LEMMA: Let R be a Euclidean ring. Suppose that for a, b, c ∈ R, a | bc but (a, b) = 1. Then a | c.

PROOF: By the previous lemma, the greatest common divisor of a and b can be written in the form λa + μb. Here the g.c.d. is 1, so
λa + μb = 1.
Multiplying this relation by c,
λac + μbc = c.
Since a | bc, we have bc = xa for some x ∈ R.
∴ c = λac + μxa = a(λc + μx), and hence a | c.
LEMMA : If π is a prime element in the Euclidean ring R and π | ab, where a, b ∈ R, then π divides at least one of a or b.

PROOF: Suppose π does not divide a. Since π is a prime element and π ∤ a, π and a are relatively prime:
(π, a) = 1.
Then by the previous lemma, π | ab and (π, a) = 1 give π | b.

COROLLARY: If π is a prime element in the Euclidean ring R and π | a₁a₂ ... aₙ, then π divides at least one of a₁, a₂, ..., aₙ.

PROOF: By induction on n. When n = 2, π | a₁a₂ implies π | a₁ or π | a₂, by the lemma. For n > 2, if π ∤ a₁, then (π, a₁) = 1, so π | a₂a₃ ... aₙ, and by the induction hypothesis π divides some a_i (i ≥ 2).
THEOREM (Unique Factorization Theorem): Let R be a Euclidean ring and a ≠ 0 a non-unit in R. Suppose that
a = p₁p₂ ... pₙ = p₁′p₂′ ... p_m′,
where the p_i and p_j′ are prime elements of R. Then n = m, and each p_i is an associate of some p_j′ and each p_j′ is an associate of some p_i.

PROOF: Since p₁ | p₁′p₂′ ... p_m′, by the lemma p₁ must divide some p_j′; renumbering, say p₁ | p₁′. Since p₁ and p₁′ are both prime elements of R and p₁ | p₁′, they must be associates, and p₁′ = u₁p₁, where u₁ is a unit in R.
∴ p₁p₂ ... pₙ = u₁p₁p₂′ ... p_m′,
and cancelling p₁ (R is an integral domain),
p₂ ... pₙ = u₁p₂′ ... p_m′.
Repeating the argument,
p₃ ... pₙ = u₁u₂p₃′ ... p_m′, and so on.
If m > n, then after n steps the L.H.S. becomes 1 while the R.H.S. reduces to a product of units and a certain number of p′'s. But a product of some units and a non-empty set of prime elements cannot be equal to one, since prime elements are not units. Thus n ≥ m, and by symmetry m ≥ n, so n = m.
Also, in the above process we have shown that every p_i is an associate of some p_j′ and, by symmetry, every p_j′ is an associate of some p_i.
LEMMA : The ideal A = (a₀) is a maximal ideal of the Euclidean ring R if and only if a₀ is a prime element of R.

PROOF :
Step 1 : Suppose a₀ is not a prime element. If a₀ is a unit, then A = R and A is not maximal. Otherwise a₀ = bc, where neither b nor c is a unit. Let B = (b). Then a₀ ∈ B, so A ⊆ B. If B = A, then b = xa₀ for some x ∈ R, so a₀ = bc = xa₀c, giving xc = 1 and making c a unit, a contradiction; hence A ≠ B. If B = R, then 1 = xb for some x, making b a unit, a contradiction; hence B ≠ R.
∴ There is an ideal B of R with A ⊊ B ⊊ R, so A cannot be a maximal ideal of R.
Step 2 : Conversely, suppose a₀ is a prime element of R, and let U be an ideal of R with A ⊆ U ⊆ R. We must show that A = U or U = R. Being an ideal of the Euclidean ring R, U = (u₀) for some u₀ ∈ R. Since a₀ ∈ A ⊆ U,
a₀ is a multiple of u₀, say a₀ = xu₀.  — (1)
As a₀ is a prime element, either x or u₀ is a unit.
Case 1: u₀ is a unit in R. Then u₀⁻¹ ∈ R, and u₀⁻¹u₀ = 1 ∈ U (∵ U is an ideal), so U = R.
Case 2: x is a unit in R. Then u₀ = x⁻¹a₀ ∈ A, so U = (u₀) ⊆ A. Also A ⊆ U, so A = U.
∴ A is a maximal ideal of R. Hence the proof.
THEOREM : J[i] = { a + ib : a, b ∈ Z }, the ring of Gaussian integers, is a Euclidean ring.

PROOF :
Step 1: J[i] is an integral domain under the usual addition and multiplication of complex numbers.
Let a₁ + ib₁, a₂ + ib₂ ∈ J[i]. Then
(a₁ + ib₁) + (a₂ + ib₂) = (a₁ + a₂) + i(b₁ + b₂) ∈ J[i],
(a₁ + ib₁)(a₂ + ib₂) = (a₁a₂ − b₁b₂) + i(a₁b₂ + a₂b₁) ∈ J[i].
∴ J[i] is closed under addition and multiplication. Since J[i] is a subset of the complex numbers, + and · are associative and commutative, and · distributes over +. Also 1 ∈ J[i] is the unity, and, a product of non-zero complex numbers being non-zero, J[i] has no zero divisors. Hence J[i] is an integral domain.
Step 2: For x = a + ib ∈ J[i], define d(x) = a² + b². Then
(i) d(x) ≥ 0 for all x ∈ J[i]; in fact d(x) ≥ 1 for x ≠ 0, since a, b are integers;
(ii) d(xy) = d(x)·d(y) for all x, y ∈ J[i];
(iii) d(x) ≤ d(xy) for y ≠ 0 (∵ d(y) ≥ 1).
Step 3 (division algorithm): Let x, y ∈ J[i] with y ≠ 0.
Case 1: Suppose y = n is a positive ordinary integer, and write x = c + id. Applying the division algorithm in the ring of integers to c and n, there exist integers u and v such that c = nu + v, where |v| ≤ n/2; similarly d = nu₁ + v₁ with |v₁| ≤ n/2.
∴ x = c + id = n(u + iu₁) + (v + iv₁),
so x = qn + r, where q = u + iu₁ and r = v + iv₁.
Here d(r) = d(v + iv₁) = v² + v₁² ≤ n²/4 + n²/4 = n²/2 < n² = d(n).
Case 2: Let y be arbitrary, y ≠ 0. Then n = yȳ = d(y) is a positive integer. Applying Case 1 to xȳ and n, there exist q, r ∈ J[i] with
xȳ = qn + r = q(yȳ) + r, where r = 0 or d(r) < d(n) = n².
Then r = xȳ − qyȳ = (x − qy)ȳ, so
d(x − qy)·d(ȳ) = d(r) < n² = d(y)·d(ȳ).
∴ d(x − qy) < d(y).
Putting r′ = x − qy, we have x = qy + r′ with r′ = 0 or d(r′) < d(y). Hence J[i] is a Euclidean ring.

Note that 1, −1, i, −i are the only units in J[i], and that Z is a subring of J[i].
LEMMA: If the prime number p, as an element of J[i], can be written p = (a + ib)(λ + iγ), where neither factor is a unit, then p = a² + b².

PROOF: Taking d-values,
p² = d(p) = d(a + ib)·d(λ + iγ) = (a² + b²)(λ² + γ²).
Since neither factor is a unit, a² + b² ≠ 1 and λ² + γ² ≠ 1. Hence a² + b² = p.

LEMMA: If p is a prime number of the form 4n + 1, then there is an integer x with x² ≡ −1 (mod p).

PROOF: Let x = 1·2·3·...·(p−1)/2. Since p = 4n + 1, (p−1)/2 = 2n, so this product has an even number of factors.
Modulo p,
p − 1 ≡ −1, p − 2 ≡ −2, ..., p − (p−1)/2 ≡ −(p−1)/2 (mod p).  — (1)
Hence
(p − 1)(p − 2)...((p+1)/2) ≡ (−1)^((p−1)/2) · 1·2·...·(p−1)/2 ≡ x (mod p),
since the exponent (p−1)/2 is even. Therefore
x² ≡ [1·2·...·(p−1)/2]·[((p+1)/2)·...·(p−2)(p−1)] = (p − 1)! (mod p).
By Wilson's theorem, (p − 1)! ≡ −1 (mod p).
Thus x² ≡ −1 (mod p). Hence the proof.
THEOREM (FERMAT): If p is a prime number of the form 4n + 1, then p = a² + b² for some integers a, b.

PROOF: By the Lemma there exists an integer x such that x² ≡ −1 (mod p). We can choose x so that 0 ≤ x ≤ p − 1.
If x > p/2, put y = p − x. Since y ≡ −x (mod p), y² ≡ x² ≡ −1 (mod p), and y < p/2. So we may assume 0 ≤ x ≤ p/2 and x² ≡ −1 (mod p).
∴ x² + 1 is a multiple of p:
x² + 1 = mp, m an integer.
Since x ≤ p/2, x² + 1 ≤ p²/4 + 1 < p², and so m < p.
In J[i], mp = x² + 1 = (x + i)(x − i). If p were a prime element of J[i], then p would divide x + i or x − i; but x + i = p(u + iv) would give pv = 1, which is impossible, and similarly for x − i. Hence p is not a prime element of J[i], so p = (a + ib)(λ + iγ), with neither factor a unit.
∴ By the first lemma, there exist integers a, b such that p = a² + b².
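Fermat's theorem is easy to witness numerically. The following sketch (a hypothetical helper of our own; the theorem's proof is constructive via Z[i], but a direct search suffices for small primes) finds the two squares for several primes congruent to 1 mod 4:

```python
def two_squares(p):
    """For a prime p = 4n + 1, find (a, b) with a^2 + b^2 = p by direct search."""
    a = 0
    while a * a <= p:
        b2 = p - a * a
        b = int(b2 ** 0.5)
        if b * b == b2:
            return a, b
        a += 1
    return None

for p in (5, 13, 17, 29, 97):        # primes congruent to 1 mod 4
    a, b = two_squares(p)
    print(p, a, b)
    assert a * a + b * b == p
```

For instance 13 = 2² + 3² and 97 = 4² + 9². A prime of the form 4n + 3 would make the search fail, consistent with the hypothesis of the theorem.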
4.6 REFERENCES
1) Topics in Algebra by I.N. Herstein Second Edition Chapter 3.
LESSON 5
POLYNOMIAL RINGS
CONTENTS :
5.0 Aims and Objectives
5.1 Introduction
5.2 Polynomial Rings
5.3 Polynomials over the Rational Field
5.4 Let us sum up
5.5 Lesson-end activities
5.6 References
5.0 AIMS AND OBJECTIVES
In this lesson we introduce another class of rings, namely polynomial rings. We prove some important properties of polynomial rings, and we conclude the lesson with an important theorem for determining the irreducibility of polynomials.
After going through this lesson, you will be able to:
(1) Define the structure of polynomial rings.
(2) Prove that a polynomial ring over a field is a Euclidean ring.
(3) Prove the Eisenstein Criterion for irreducibility of polynomials.

5.1 INTRODUCTION
Let F be a field. By the ring of polynomials in the indeterminate x we mean the set of all symbols a₀ + a₁x + ... + a_mx^m, where m can be any non-negative integer and the coefficients a₀, a₁, ..., a_m are all in F. This ring is denoted by F[x]. We introduce the basic algebraic operations to establish that F[x] is a ring. After proving the Division Algorithm in F[x], we will show that F[x] is a Euclidean ring. Finally, by proving Gauss's Lemma, we shall establish a powerful tool for verifying irreducibility (factorization) of a polynomial.
5.2 POLYNOMIAL RINGS

Definition: If p(x) = a₀ + a₁x + ... + a_mx^m and q(x) = b₀ + b₁x + ... + b_nx^n are in F[x], then p(x) = q(x) if and only if, for every integer i ≥ 0, a_i = b_i.

Definition: If p(x) = a₀ + a₁x + ... + a_mx^m and q(x) = b₀ + b₁x + ... + b_nx^n are in F[x], then p(x)q(x) = c₀ + c₁x + ... + c_kx^k + ..., where
c_k = a_kb₀ + a_(k−1)b₁ + ... + a₀b_k.

Example:
Let p(x) = 1 + x − x², q(x) = 2 + x² + x³.
Here a₀ = 1, a₁ = 1, a₂ = −1, a₃ = a₄ = ... = 0,
b₀ = 2, b₁ = 0, b₂ = 1, b₃ = 1, b₄ = b₅ = ... = 0.
Thus
c₀ = a₀b₀ = 1·2 = 2,
c₁ = a₁b₀ + a₀b₁ = 1·2 + 1·0 = 2,
c₂ = a₂b₀ + a₁b₁ + a₀b₂ = (−1)·2 + 1·0 + 1·1 = −1,
c₃ = a₃b₀ + a₂b₁ + a₁b₂ + a₀b₃ = 0 + 0 + 1·1 + 1·1 = 2,
c₄ = a₄b₀ + a₃b₁ + a₂b₂ + a₁b₃ + a₀b₄ = 0 + 0 + (−1)·1 + 1·1 + 0 = 0,
c₅ = a₅b₀ + a₄b₁ + a₃b₂ + a₂b₃ + a₁b₄ + a₀b₅ = (−1)·1 = −1,
c₆ = c₇ = ... = 0.
∴ p(x)q(x) = 2 + 2x − x² + 2x³ − x⁵.
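The convolution formula c_k = Σ a_i b_(k−i) is mechanical enough to code directly. A minimal sketch (our own; coefficient lists indexed by power):

```python
def poly_mul(a, b):
    """Multiply polynomials given as coefficient lists (index = power),
    using c_k = sum over i of a_i * b_(k-i)."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# p(x) = 1 + x - x^2 and q(x) = 2 + x^2 + x^3, as in the example above.
print(poly_mul([1, 1, -1], [2, 0, 1, 1]))  # [2, 2, -1, 2, 0, -1]
```

The printed coefficients agree with the c₀, ..., c₅ computed by hand in the example.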
Definition: If f(x) = a₀ + a₁x + ... + a_mx^m with a_m ≠ 0, then the degree of f(x), written deg f(x), is m.

LEMMA: If f(x), g(x) are two non-zero elements of F[x], then
deg (f(x)g(x)) = deg f(x) + deg g(x).

PROOF:
Suppose f(x) = a₀ + a₁x + ... + a_mx^m, a_m ≠ 0, and g(x) = b₀ + b₁x + ... + b_nx^n, b_n ≠ 0, so deg f(x) = m and deg g(x) = n. The coefficient of x^k in f(x)g(x) is
c_k = a_kb₀ + a_(k−1)b₁ + ... + a₀b_k.
For k = m + n, c_(m+n) = a_mb_n ≠ 0, while for k > m + n every product a_ib_(k−i) vanishes, so c_k = 0.
∴ f(x)g(x) = c₀ + c₁x + ... + c_(m+n)x^(m+n), with c_(m+n) ≠ 0, and hence
deg (f(x)g(x)) = m + n = deg f(x) + deg g(x).

COROLLARY:
If f(x), g(x) are non-zero elements of F[x], then deg f(x) ≤ deg (f(x)g(x)).

PROOF:
Since deg g(x) ≥ 0, deg (f(x)g(x)) = deg f(x) + deg g(x) ≥ deg f(x).
THEOREM: F[x] is an integral domain.

PROOF: If f(x) = Σ_(k=0)^n a_kx^k and g(x) = Σ_(k=0)^m b_kx^k are in F[x], define
f(x) + g(x) = Σ_k (a_k + b_k)x^k,
f(x)g(x) = Σ_(k=0)^(n+m) c_kx^k, where c_k = Σ_(i=0)^k a_ib_(k−i).
Since all but a finite number of the a_k's and b_k's are zero, it follows that only a finite number of the a_k + b_k can be non-zero and only a finite number of the c_k's can be non-zero.
Hence f(x) + g(x) ∈ F[x] and f(x)g(x) ∈ F[x].
With addition and multiplication so defined on F[x], we show that F[x] is a commutative ring with unity.
One can verify that [f(x)g(x)]h(x) = f(x)[g(x)h(x)] and [f(x) + g(x)]h(x) = f(x)h(x) + g(x)h(x) for any f(x), g(x), h(x) ∈ F[x].
F[x] inherits commutativity from F, and 1 ∈ F, treated as a constant polynomial, is the unity in F[x]. Thus F[x] is a commutative ring with unity.
Now suppose f(x)g(x) = 0, f(x), g(x) ∈ F[x].
If both f(x) and g(x) are constant polynomials, then from the fact that F is an integral domain it follows that either f(x) = 0 or g(x) = 0.
If f(x) ≠ 0 and g(x) ≠ 0, then by the degree lemma f(x)g(x) has a non-zero leading coefficient, so f(x)g(x) ≠ 0, a contradiction. Hence f(x)g(x) = 0 forces f(x) = 0 or g(x) = 0.
Thus F[x] has no zero divisors, and hence F[x] is an integral domain.
LEMMA (The Division Algorithm): Given two polynomials f(x) and g(x) ≠ 0 in F[x], there exist two polynomials t(x) and r(x) in F[x] such that f(x) = t(x)g(x) + r(x), where r(x) = 0 or deg r(x) < deg g(x).

PROOF:
1. If deg f(x) < deg g(x), take t(x) = 0, r(x) = f(x), and we get the result.
2. Assume deg f(x) ≥ deg g(x). The proof is by induction on m = deg f(x).
Let f(x) = a₀ + a₁x + ... + a_mx^m, a_m ≠ 0, and g(x) = b₀ + b₁x + ... + b_nx^n, b_n ≠ 0, with m ≥ n. Consider
f₁(x) = f(x) − a_mb_n⁻¹x^(m−n)g(x).
The coefficient of x^m in f₁(x) is a_m − a_mb_n⁻¹b_n = 0, so f₁(x) = 0 or deg f₁(x) ≤ m − 1 < m.
If f₁(x) = 0 or deg f₁(x) < deg g(x), take t₁(x) = 0; otherwise apply the induction hypothesis to f₁(x). In either case
f₁(x) = t₁(x)g(x) + r(x), where r(x) = 0 or deg r(x) < deg g(x).
∴ f(x) = [a_mb_n⁻¹x^(m−n) + t₁(x)]g(x) + r(x) = t(x)g(x) + r(x), where r(x) = 0 or deg r(x) < deg g(x).
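The leading-term elimination used in the proof is exactly long division. A minimal sketch over F = Q (our own implementation, with coefficient lists indexed by power and exact rational arithmetic):

```python
from fractions import Fraction

def poly_divmod(f, g):
    """Division algorithm in Q[x]: return (t, r) with f = t*g + r and
    r = [] or deg r < deg g. Polynomials are coefficient lists, index = power."""
    f = [Fraction(c) for c in f]
    t = [Fraction(0)] * max(len(f) - len(g) + 1, 1)
    while len(f) >= len(g) and any(f):
        shift = len(f) - len(g)
        coef = f[-1] / Fraction(g[-1])   # a_m * b_n^-1
        t[shift] = coef
        # subtract coef * x^shift * g(x) to kill the leading term of f
        for i, gi in enumerate(g):
            f[shift + i] -= coef * Fraction(gi)
        while f and f[-1] == 0:
            f.pop()
    return t, f

# Divide f = x^3 + 2x + 1 by g = x^2 + 1: quotient x, remainder x + 1.
t, r = poly_divmod([1, 2, 0, 1], [1, 0, 1])
print(t, r)
```

Here t = [0, 1] (the polynomial x) and r = [1, 1] (the polynomial 1 + x), and indeed x·(x² + 1) + (x + 1) = x³ + 2x + 1.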
THEOREM: F[x] is a Euclidean ring.
PROOF:
For every non-zero f(x) ∈ F[x], define d(f(x)) = deg f(x). Then d(f(x)) is a non-negative
integer, and for non-zero f(x), g(x), d(f(x)) = deg f(x) ≤ deg f(x) g(x) = d(f(x) g(x)).
By the division algorithm, given f(x) and g(x) ≠ 0 in F[x], there exist q(x), r(x) ∈ F[x] such that
f(x) = q(x) g(x) + r(x), where either r(x) = 0 or deg r(x) < deg g(x),
(i.e.) either r(x) = 0 or d[r(x)] < d[g(x)].
Thus d satisfies all the properties of a Euclidean ring.
LEMMA 2.9.4: Given two polynomials f(x), g(x) in F[x], they have a greatest common
divisor d(x) which can be realized as d(x) = λ(x) f(x) + μ(x) g(x).
PROOF:
Let d(x) be a polynomial of least degree of the form λ(x) f(x) + μ(x) g(x), with λ(x), μ(x) ∈ F[x].
Now by the division algorithm ∃ q(x), r(x) ∈ F[x] such that f(x) = q(x) d(x) + r(x),
where either r(x) = 0 or deg r(x) < deg d(x).
If r(x) ≠ 0, then r(x) = f(x) − q(x) d(x) is again of the form λ(x) f(x) + μ(x) g(x) and is of degree
smaller than that of d(x), contradicting the choice of d(x). Hence r(x) = 0 and d(x) divides f(x);
similarly d(x) divides g(x).
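The polynomials λ(x) and μ(x) of the lemma can be found constructively by the extended Euclidean algorithm in Q[x]. The following Python sketch is our own illustration (not part of the original text); it is self-contained, so it repeats a long-division helper.

```python
from fractions import Fraction

def pdivmod(f, g):
    # long division in Q[x]; coefficient lists, lowest degree first; g trimmed, nonzero
    q = [Fraction(0)] * max(len(f) - len(g) + 1, 1)
    r = [Fraction(c) for c in f]
    while True:
        while r and r[-1] == 0:
            r.pop()
        if len(r) < len(g):
            return q, r
        s = len(r) - len(g)
        c = r[-1] / Fraction(g[-1])
        q[s] = c
        for i, gc in enumerate(g):
            r[i + s] -= c * Fraction(gc)

def padd(a, b):
    n = max(len(a), len(b))
    return [Fraction(a[i] if i < len(a) else 0) + Fraction(b[i] if i < len(b) else 0)
            for i in range(n)]

def pmul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += Fraction(x) * Fraction(y)
    return out

def ext_gcd(f, g):
    """Return (d, lam, mu) with d monic and d = lam*f + mu*g in Q[x]."""
    r0, r1 = [Fraction(c) for c in f], [Fraction(c) for c in g]
    l0, l1 = [Fraction(1)], [Fraction(0)]
    m0, m1 = [Fraction(0)], [Fraction(1)]
    while any(r1):
        q, r = pdivmod(r0, r1)
        r0, r1 = r1, r
        l0, l1 = l1, padd(l0, [-c for c in pmul(q, l1)])
        m0, m1 = m1, padd(m0, [-c for c in pmul(q, m1)])
    while r0 and r0[-1] == 0:
        r0.pop()
    lead = r0[-1]
    return [c / lead for c in r0], [c / lead for c in l0], [c / lead for c in m0]

# gcd(x^2 - 1, x^2 - 3x + 2) = x - 1, realized as (1/3)f(x) - (1/3)g(x)
d, lam, mu = ext_gcd([-1, 0, 1], [2, -3, 1])
print(d, lam, mu)
```

The loop maintains the invariant r_k(x) = λ_k(x) f(x) + μ_k(x) g(x), so the last non-zero remainder is automatically of the form guaranteed by the lemma.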
LEMMA: Any non-constant polynomial f(x) ∈ F[x] can be written as a product of irreducible polynomials over F.
PROOF: The proof is by induction on n = deg f(x).
If n = 1, then f(x) is irreducible over F and hence there is nothing to prove.
Assume that the lemma is true for any non-constant polynomial of degree less than n.
Let f(x) be a polynomial of degree n.
If f(x) itself is irreducible over F, then there is nothing to prove. Otherwise let f(x) = g(x) h(x),
where 1 ≤ deg g(x) < deg f(x) = n and 1 ≤ deg h(x) < n. By the induction hypothesis each of
g(x) and h(x) is a product of irreducible polynomials over F, and hence so is f(x).
∴ p_1(x) p_2(x) ⋯ p_r(x) = u_1 p_1(x) q_2(x) ⋯ q_s(x).
Clearly, J ⊆ F[x].
Suppose J ≠ I; we prove J = F[x].
F[x] is a P.I.D. and J is an ideal in F[x].
∴ J is a principal ideal, J = (f(x)) (say) for some f(x) ∈ F[x] with f(x) ∉ I (if f(x) ∈ I,
then J = I).
Now I ⊆ J ⇒ p(x) ∈ J, so f(x) divides p(x). Since p(x) is irreducible and f(x) is not an
associate of p(x) (as f(x) ∉ I), f(x) must be a unit. Hence J = F[x].
Suppose f(x) g(x) is not primitive; then all its coefficients are divisible by some prime number p.
Since f(x) is primitive, p does not divide some coefficient
a_i. Let a_j be the first coefficient of f(x) which is not divisible by p. Similarly let b_k be the
first coefficient of g(x) which p does not divide. In f(x) g(x), the coefficient of x^{j+k} is c_{j+k},
which is given by
c_{j+k} = a_j b_k + (a_{j+1} b_{k−1} + a_{j+2} b_{k−2} + … + a_{j+k} b_0)
    + (a_{j−1} b_{k+1} + a_{j−2} b_{k+2} + … + a_0 b_{j+k}) ______ (1)
By the choice of b_k, we have p | b_{k−1}, p | b_{k−2}, …, p | b_0,
∴ p | (a_{j+1} b_{k−1} + a_{j+2} b_{k−2} + … + a_{j+k} b_0).
Similarly, by the choice of a_j, we have p | a_{j−1}, p | a_{j−2}, …, p | a_0,
∴ p | (a_{j−1} b_{k+1} + a_{j−2} b_{k+2} + … + a_0 b_{j+k}).
Since p | c_{j+k}, it follows from (1) that p | a_j b_k, so p | a_j or p | b_k, a contradiction.
Hence f(x) g(x) is primitive.
Thus b f(x) = a λ(x) μ(x), where λ(x) and μ(x) are primitive. Since f(x) is primitive, the content of the left-hand side is
b. Since λ(x) and μ(x) are primitive, the content of the right-hand side is a. Therefore a = b and
hence f(x) = λ(x) μ(x), where λ(x) and μ(x) have integer coefficients.
Definition: A polynomial is said to be integer monic if all its coefficients are integers and its
highest coefficient is 1.
Note: An integer monic polynomial is of the form x^n + a_1 x^{n−1} + … + a_n, where the a_i's
are integers. An integer monic polynomial is always primitive.
Theorem (The Eisenstein criterion)
Let f(x) = a_0 + a_1 x + … + a_n x^n be a polynomial with integer coefficients. Suppose that
for some prime number p, p | a_0, p | a_1, …, p | a_{n−1}, p ∤ a_n and p² ∤ a_0. Then f(x) is
irreducible over the rationals.
Proof: We may assume that f(x) is primitive (otherwise f(x) = d g(x), where g(x) is
primitive, and we can prove the result for g(x)). If f(x) can be factored as a product of two
rational polynomials, then by Gauss' lemma it can be factored as the product of two
polynomials having integer coefficients. Thus, if we assume that f(x) is reducible, then
f(x) = (b_0 + b_1 x + … + b_r x^r)(c_0 + c_1 x + … + c_s x^s),
where the b's and c's are integers and where r > 0 and s > 0. On comparing the coefficients we
get a_0 = b_0 c_0, a_1 = b_0 c_1 + b_1 c_0, and in general
a_k = b_k c_0 + b_{k−1} c_1 + … + b_0 c_k, etc.
Since p | a_0, p must divide one of b_0 or c_0; since p² ∤ a_0, p cannot divide both of them.
Suppose p | b_0 but p ∤ c_0. Since f(x) is primitive, not all its coefficients are divisible by p, and
hence not all the coefficients b_0, b_1, b_2, …, b_r are divisible by p.
Let b_k be the first coefficient which is not divisible by p; then p | b_{k−1}, p | b_{k−2}, …, p | b_0.
Now a_k = b_k c_0 + b_{k−1} c_1 + … + b_0 c_k. Since k ≤ r < n, the hypothesis of the theorem
gives p | a_k, and by the choice of b_k, p | (b_{k−1} c_1 + b_{k−2} c_2 + … + b_0 c_k), so that p | b_k c_0.
But p ∤ b_k and p ∤ c_0; this contradiction proves that f(x) is irreducible.
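The hypotheses of the criterion are easy to check mechanically. The following Python sketch (our own illustration, not part of the original text) tests them for a polynomial given by its integer coefficient list; the first example is x⁵ − 3, an instance of the lesson-end exercise that x^n − p is irreducible.

```python
def is_prime(p):
    return p > 1 and all(p % k for k in range(2, int(p ** 0.5) + 1))

def eisenstein(coeffs, p):
    """coeffs = [a0, a1, ..., an]. Check Eisenstein's criterion at the prime p:
    p | a_i for all i < n, p does not divide a_n, and p^2 does not divide a0."""
    a0, an = coeffs[0], coeffs[-1]
    return (is_prime(p)
            and all(c % p == 0 for c in coeffs[:-1])
            and an % p != 0
            and a0 % (p * p) != 0)

print(eisenstein([-3, 0, 0, 0, 0, 1], 3))   # x^5 - 3 at p = 3: True
print(eisenstein([1, 2, 1], 2))             # x^2 + 2x + 1 = (x+1)^2: False
```

Note that the criterion is sufficient but not necessary: a polynomial may be irreducible even when no prime satisfies the three conditions.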
5.4 LET US SUM UP

Exercise: If p is a prime number, prove that the polynomial x^n − p is irreducible over the rationals.
UNIT III
FIELD THEORY
LESSON 6
EXTENSION FIELDS
CONTENTS:
6.0 Aims and Objectives
6.1 Introduction
6.2 Extension of Fields
6.3 Let us Sum up
6.4 Lesson-end Activities
6.5 References
6.0 AIMS AND OBJECTIVES

In this lesson we study a field containing a given field, which is called an
extension field. Field extensions play an important role in the theory of equations and the
theory of numbers.
After going through this lesson, you will be able to :
(1) Define an extension field and the degree [K : F] of an extension.
(2) Prove that if L is a finite extension of K and K is a finite extension of F, then [L : F] = [L : K] [K : F].
(3) Define algebraic elements and the minimal polynomial of an algebraic element.
(4) Prove that a ∈ K is algebraic over F if and only if F(a) is a finite extension of F.
6.1
INTRODUCTION
Any field K which contains the given field F is called an extension field of F. The
extension field K can be regarded as a vector space over F. If the dimension of K over F is
finite then we define K as a finite extension over F. We also introduce algebraic extension
over F.
6.2
EXTENSION FIELDS
Definition
Let F be a field. A field K is said to be an extension of F if F ⊆ K. Equivalently,
a field K is an extension of a field F if F is a subfield of K.
Note : Throughout this lesson, F will denote a given field and K, a field extension of F.
Example
We know that R, the set of reals, Q, the set of rationals, and C, the set of complex
numbers, are fields under addition + and multiplication ·, and Q ⊆ R ⊆ C. So R is an extension
of Q and C is an extension of R.
Note: If K is an extension of F, then under the field operations in K, K is a vector space
over F.
Definition
The degree of an extension K over F is the dimension of K as a vector space over F
and is denoted by [K : F]
Definition
If [K:F] is finite, then K is said to be a finite extension of F.
Example
Q(√2) = {a + b√2 : a, b ∈ Q}, where Q is the set of rationals, is an extension of Q. The
set {1, √2} is a basis of Q(√2) over Q. Therefore [Q(√2) : Q] = 2. Hence Q(√2) is a finite
extension of Q.
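Arithmetic in Q(√2) can be carried out entirely in the coordinates with respect to the basis {1, √2}. The following Python sketch (our own illustration, not part of the original text) stores a + b√2 as the pair (a, b) of rationals and verifies that the inverse formula keeps us inside Q(√2):

```python
from fractions import Fraction as Fr

# a + b*sqrt(2) in Q(sqrt 2) is stored as the pair (a, b) of rationals,
# i.e. its coordinates in the basis {1, sqrt 2}
def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def mul(u, v):
    # (a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2, using (√2)^2 = 2
    a, b = u
    c, d = v
    return (a * c + 2 * b * d, a * d + b * c)

def inv(u):
    # (a + b√2)^{-1} = (a - b√2)/(a^2 - 2b^2); the denominator is nonzero
    # unless a = b = 0, since √2 is irrational
    a, b = Fr(u[0]), Fr(u[1])
    n = a * a - 2 * b * b
    return (a / n, -b / n)

x = (Fr(1), Fr(1))        # the element 1 + √2
y = mul(x, inv(x))        # should be the unity (1, 0)
print(y)
```

This is exactly the vector-space point of view of the Note above: the field operations of K = Q(√2) are computed on coordinates over F = Q.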
Theorem:
If K is a finite extension of F and L is a finite extension of K, then L is a finite extension
of F and [L : F] = [L : K] [K : F].
Proof: Let [L : K] = m and [K : F] =n. We have to prove that [L : F] =mn.
Let {v_1, v_2, …, v_m} be a basis of L over K and {w_1, w_2, …, w_n} be a basis of K
over F.
The theorem will be proved if we show that the mn elements v_i w_j, i = 1, 2, …, m,
j = 1, 2, …, n, form a basis of L over F.
For this we have to show that these mn elements are linearly independent over F and
every element of L is a linear combination of these mn elements.
Let t ∈ L. As {v_1, v_2, …, v_m} is a basis of L over K,
t = k_1 v_1 + k_2 v_2 + … + k_m v_m     … (1)
where k_1, k_2, …, k_m belong to K.
As {w_1, w_2, …, w_n} is a basis of K over F, every element in K is a linear
combination of the elements w_1, w_2, …, w_n. Let
k_1 = f_{11} w_1 + f_{12} w_2 + … + f_{1n} w_n
k_2 = f_{21} w_1 + f_{22} w_2 + … + f_{2n} w_n
. . . . . . . . . . . . . . . . . . . . . . . .     … (2)
k_i = f_{i1} w_1 + f_{i2} w_2 + … + f_{in} w_n
. . . . . . . . . . . . . . . . . . . . . . . .
k_m = f_{m1} w_1 + f_{m2} w_2 + … + f_{mn} w_n
where the f_{ij} are in F.
Substituting for k_1, k_2, …, k_i, …, k_m from (2) in (1), we get
t = (f_{11} w_1 + f_{12} w_2 + … + f_{1n} w_n) v_1
  + (f_{21} w_1 + f_{22} w_2 + … + f_{2n} w_n) v_2
  + . . . . . . . . . . . . . . . . . . . . . . . .
  + (f_{i1} w_1 + f_{i2} w_2 + … + f_{in} w_n) v_i
  + . . . . . . . . . . . . . . . . . . . . . . . .
  + (f_{m1} w_1 + f_{m2} w_2 + … + f_{mn} w_n) v_m
  = f_{11} v_1 w_1 + f_{12} v_1 w_2 + … + f_{1n} v_1 w_n
  + f_{21} v_2 w_1 + f_{22} v_2 w_2 + … + f_{2n} v_2 w_n
  + . . . . . . . . . . . . . . . . . . . . . . . .
  + f_{i1} v_i w_1 + f_{i2} v_i w_2 + … + f_{in} v_i w_n
  + . . . . . . . . . . . . . . . . . . . . . . . .
  + f_{m1} v_m w_1 + f_{m2} v_m w_2 + … + f_{mn} v_m w_n
Therefore t ∈ L is a linear combination of the v_i w_j, i = 1, 2, …, m, j = 1, 2, …, n, over F.
Next, we prove that the elements v_i w_j, i = 1, 2, …, m, j = 1, 2, …, n, are linearly
independent.
Suppose
f_{11} v_1 w_1 + f_{12} v_1 w_2 + … + f_{1n} v_1 w_n
+ f_{21} v_2 w_1 + f_{22} v_2 w_2 + … + f_{2n} v_2 w_n + …
+ f_{i1} v_i w_1 + f_{i2} v_i w_2 + … + f_{in} v_i w_n + …
+ f_{m1} v_m w_1 + f_{m2} v_m w_2 + … + f_{mn} v_m w_n = 0     … (3)
where the f_{ij} are in F.
Regrouping (3) we get
(f_{11} w_1 + f_{12} w_2 + … + f_{1n} w_n) v_1
+ (f_{21} w_1 + f_{22} w_2 + … + f_{2n} w_n) v_2 + …
+ (f_{i1} w_1 + f_{i2} w_2 + … + f_{in} w_n) v_i + …
+ (f_{m1} w_1 + f_{m2} w_2 + … + f_{mn} w_n) v_m = 0     … (4)
Let f_{i1} w_1 + f_{i2} w_2 + … + f_{in} w_n = k_i     … (5)
for i = 1, 2, …, m. Then the k_i, i = 1, 2, …, m, are in K, and from (4) and (5) we get
k_1 v_1 + k_2 v_2 + … + k_i v_i + … + k_m v_m = 0.
But v_1, v_2, …, v_m are linearly independent over K.
Therefore k_i = 0 for i = 1, 2, …, m.
That is,
f_{i1} w_1 + f_{i2} w_2 + … + f_{in} w_n = 0     … (6)
for i = 1, 2, …, m.
But {w_1, w_2, …, w_n} is a basis of K over F and hence the w_j are linearly independent.
Hence (6) gives
f_{i1} = 0, f_{i2} = 0, …, f_{in} = 0
for i = 1, 2, …, m.
That is,
f_{ij} = 0 for i = 1, 2, …, m, j = 1, 2, …, n.
Hence the v_i w_j, i = 1, 2, …, m, j = 1, 2, …, n, are linearly independent, and hence
form a basis of L over F.
Therefore [L : F] = mn.
That is, [L : F] = [L : K] [K : F].
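The product basis {v_i w_j} of the proof can be seen concretely for F = Q, K = Q(√2), L = Q(√2, √3): with {1, √2} a basis of K over F and {1, √3} a basis of L over K, the four products are {1, √2, √3, √6}, and every multiplication stays inside their span. A small Python sketch of this (our own illustration, not part of the original text):

```python
from fractions import Fraction as Fr

# (c0, c1, c2, c3) represents c0 + c1·√2 + c2·√3 + c3·√6: the four basis
# products v_i w_j from {1, √2} (basis of Q(√2)/Q) and {1, √3} (basis of
# Q(√2,√3)/Q(√2)); note [L : F] = 2 · 2 = 4.
def mul(u, v):
    a0, a1, a2, a3 = (Fr(c) for c in u)
    b0, b1, b2, b3 = (Fr(c) for c in v)
    # uses (√2)² = 2, (√3)² = 3, (√6)² = 6, √2·√3 = √6, √2·√6 = 2√3, √3·√6 = 3√2
    return (a0 * b0 + 2 * a1 * b1 + 3 * a2 * b2 + 6 * a3 * b3,
            a0 * b1 + a1 * b0 + 3 * (a2 * b3 + a3 * b2),
            a0 * b2 + a2 * b0 + 2 * (a1 * b3 + a3 * b1),
            a0 * b3 + a3 * b0 + a1 * b2 + a2 * b1)

print(mul((1, 1, 0, 0), (1, 0, 1, 0)))   # (1+√2)(1+√3) -> (1, 1, 1, 1)
print(mul((0, 0, 0, 1), (0, 0, 0, 1)))   # (√6)² -> (6, 0, 0, 0)
```

Closure of multiplication on this 4-dimensional coordinate space is exactly what makes L a vector space of dimension [L : K][K : F] over F.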
Corollary
If L is a finite extension of F and K is a subfield of L which contains F then [K : F]
divides [L : F].
Proof: Here L, K, F are fields and L ⊇ K ⊇ F. Let [L : F] be finite.
Let u_1, u_2, …, u_r in L be linearly independent over K. Then we claim that
u_1, u_2, …, u_r are linearly independent over F also. For, if possible, let them be linearly
dependent over F. Then scalars f_1, f_2, …, f_r, not all zero, exist in F such that
f_1 u_1 + f_2 u_2 + … + f_r u_r = 0.
But as F ⊆ K, f_1, f_2, …, f_r are in K also. Therefore u_1, u_2, …, u_r are
linearly dependent over K also. This is a contradiction. Therefore u_1, u_2, …, u_r in L are
linearly independent over F if they are linearly independent over K.
Now, [L : F] is finite. By the above claim, any set of elements of L linearly independent over K
is linearly independent over F, and hence contains at most [L : F] elements. Therefore [L : K] is
also finite. And [K : F] is finite, as K is a subspace of L regarded as a vector space over F.
By the previous theorem,
[L : F] = [L : K] [K : F].
Therefore [L : F] / [K : F] = [L : K] is a positive integer; that is, [K : F] divides [L : F].
Definition
Let K be an extension field of F. Then an element a ∈ K is said to be algebraic over F
if there exist elements a_0, a_1, …, a_n in F, not all zero, such that a_0 a^n + a_1 a^{n−1} + … + a_n = 0. That is,
a ∈ K is algebraic over F if there exists a nonzero polynomial p(x) ∈ F[x] such that p(a) = 0.
Definition
Let K be an extension of F and let a ∈ K. Let M be the collection of all subfields of
K each of which contains both F and a. M is nonempty since K itself is an element of M. The
intersection of any number of subfields of K is again a subfield of K. Thus the intersection of
all those subfields of K which are members of M is a subfield of K. It is denoted by F(a).
F(a) contains both F and a. Every subfield of K in M contains F(a). Hence F(a) is the
smallest subfield of K containing both F and a. F(a) is called the subfield of K obtained by
adjoining a to F.
Definition
Let S be a subset of a field K. Then a subfield K_1 of K is said to be generated by S if
(i) S ⊆ K_1 and (ii) for any subfield L of K, S ⊆ L implies K_1 ⊆ L.
Notation: The subfield generated by S will be denoted by <S>.
The subfield generated by S is the intersection of all subfields of K which contain S.
Now let K be a field extension of F and S be any subset of K. Then the subfield of K generated
by F ∪ S is said to be the subfield of K generated by S over F, and this subfield is denoted by
F(S). However, if S is a finite set {a_1, a_2, …, a_n}, we write F(S) = F(a_1, a_2, …, a_n).
Definition
An extension K of F is said to be finitely generated over F if there exists a finite number of
elements a_1, a_2, …, a_n in K such that K = F(a_1, a_2, …, a_n).
Definition
If K is generated by a single element over F, then K is called a simple extension of F.
Note: We had already seen that if K is an extension of the field F and a K, then F(a) is the
smallest subfield of K containing both F and a . Thus F(a) is a simple extension of F.
Definition
A nonzero polynomial f(x) in F[x] is said to be a monic polynomial over F if the
coefficient of the highest power of x in f(x) is equal to 1, the unity of F.
Example: f(x) = x² − 2x + 3 is a monic polynomial in Q[x].
Theorem
If an element a ∈ K is algebraic over F, then there exists a unique monic polynomial
p(x) of positive degree over F such that (1) p(a) = 0 and (2) if f(a) = 0 for any f(x) ∈ F[x], then
p(x) divides f(x).
Proof: Since a is algebraic over F, a is a root of some nonzero polynomial t(x) =
a_0 + a_1 x + a_2 x² + … + a_n x^n with the a_i ∈ F and a_n ≠ 0. Without loss of generality, let us assume
that t(x) is a nonzero polynomial over F of smallest degree such that t(a) = 0. Now the degree
of t(x) is n and a_n ≠ 0; hence a_n^{−1} ∈ F. We write
p(x) = a_n^{−1} t(x) = x^n + a_n^{−1} a_{n−1} x^{n−1} + a_n^{−1} a_{n−2} x^{n−2} + … + a_n^{−1} a_1 x + a_n^{−1} a_0.
As t(a) = 0, p(a) = 0. Therefore p(x) is the required monic nonzero polynomial of
smallest degree n such that p(a) = 0. We have to prove that p(x) divides f(x) if f(a) = 0. By
division algorithm we can find polynomials q(x), r(x), in F[x] such that
f(x) = p(x) q(x) + r(x)
where r(x) = 0 or deg r(x) < deg p(x). Then f(a) = p(a) q(a) + r(a). As f(a) = 0 and p(a) = 0,
we get r(a) = 0.
∴ r(x) = 0, for if r(x) ≠ 0 we get a contradiction to the fact that p(x) is a polynomial of
minimum degree satisfied by a. Next we prove the uniqueness of p(x).
Let, if possible, p(x) and p_1(x) be two nonzero minimal monic polynomials of a ∈ K
over F.
Then p(a) = 0 and p_1(a) = 0. Also p(x) divides p_1(x) and p_1(x) divides p(x), by the
result just proved. As p(x) and p_1(x) are nonzero monic polynomials, we conclude that p(x) =
p_1(x).
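Property (2) of the theorem — every polynomial vanishing at a is a multiple of the minimal polynomial — can be checked in a concrete case. The following Python sketch (our own illustration, not part of the original text) takes a = 1 + √2, whose minimal polynomial over Q is p(x) = x² − 2x − 1, and a polynomial f(x) = (x + 3) p(x) with f(a) = 0, and verifies that p(x) divides f(x):

```python
from fractions import Fraction

def poly_rem(f, g):
    # remainder of f by g in Q[x]; coefficient lists, lowest degree first; g monic here
    r = [Fraction(c) for c in f]
    while True:
        while r and r[-1] == 0:
            r.pop()
        if len(r) < len(g):
            return r
        c = r[-1] / Fraction(g[-1])
        s = len(r) - len(g)
        for i, gc in enumerate(g):
            r[i + s] -= c * Fraction(gc)

p = [-1, -2, 1]       # p(x) = x² - 2x - 1, monic, with p(1 + √2) = 0
f = [-3, -7, 1, 1]    # f(x) = (x + 3) p(x) = x³ + x² - 7x - 3, so f(1 + √2) = 0 too
a = 1 + 2 ** 0.5
print(abs(sum(c * a ** i for i, c in enumerate(f))))   # ≈ 0 (floating point)
print(poly_rem(f, p))                                   # []: p(x) divides f(x)
```

An empty remainder list stands for the zero polynomial, i.e. exact divisibility in Q[x].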
Definition
Let K be an extension field of the field F. If an element a ∈ K is algebraic over F, then
the unique monic polynomial of smallest degree over F satisfied by a is called the minimal
polynomial of a over F.
If the degree of the minimal polynomial of a ∈ K over F is n, then a is said to be algebraic over
F of degree n.
Theorem
The minimal polynomial p(x) of a ∈ K over F is irreducible over F.
Proof: Let, if possible, the minimal polynomial p(x) of a ∈ K over F be reducible over F. Then
p(x) can be factored into nontrivial factors. Let p(x) = f(x) g(x), where f(x), g(x) ∈ F[x] and
each of f(x), g(x) is of degree less than that of p(x).
We have p(a) = f(a) g(a),
i.e. 0 = f(a) g(a).
Therefore f(a) = 0 or g(a) = 0, since K is a field and hence an integral domain. Therefore a
satisfies f(x) or g(x), each of which is of degree less than that of p(x). This is a contradiction,
since p(x) is the minimal polynomial of a ∈ K over F.
Therefore the minimal polynomial p(x) of a ∈ K over F is irreducible over F.
Theorem
An element a ∈ K is algebraic over F if and only if F(a) is a finite extension of F.
Proof: First we prove the 'if' part of the theorem. Let F(a) be a finite extension of F; that is,
[F(a) : F] is finite. Let [F(a) : F] = m. Then every set of m + 1 elements of F(a) must be
linearly dependent over F. In particular, 1, a, a², …, a^m must be linearly dependent over F. There
exist elements a_0, a_1, …, a_m in F, not all 0, such that
a_0 + a_1 a + a_2 a² + … + a_m a^m = 0.
Hence a satisfies a nonzero polynomial over F; that is, a is algebraic over F.
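The 'if' argument is constructive: a linear dependence among the powers 1, a, a², …, a^m is a polynomial satisfied by a. As an illustration (ours, not part of the original text), take a = √2 in F(a) = Q(√2) with m = 2: writing each power in the basis {1, √2} and solving a tiny linear system over Q recovers the dependence a² − 2 = 0.

```python
from fractions import Fraction as Fr

# coordinates of 1, a, a² in the basis {1, √2} of Q(√2), where a = √2
powers = [(Fr(1), Fr(0)),   # 1
          (Fr(0), Fr(1)),   # a = √2
          (Fr(2), Fr(0))]   # a² = 2

# look for (c0, c1, c2), not all zero, with c0·1 + c1·a + c2·a² = 0; here the
# dependence c0 = -2, c1 = 0, c2 = 1 is visible by inspection: a² - 2 = 0
c = (Fr(-2), Fr(0), Fr(1))
combo = tuple(sum(c[k] * powers[k][j] for k in range(3)) for j in range(2))
print(combo)   # (0, 0): the dependence holds, so √2 satisfies x² - 2
```

Three vectors in a 2-dimensional space over Q must be dependent, which is exactly the counting step ([F(a) : F] = m forces dependence among m + 1 elements) used in the proof.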
Next we prove the 'only if' part. Let a ∈ K be algebraic over F and let u(x) be its minimal
polynomial over F, of degree n say. Let V = <u(x)> be the ideal of F[x] generated by u(x).
Since u(x) is irreducible over F, V is a maximal ideal, so F[x]/V is a field, and F[x]/V
is isomorphic to F(a).
Any element of F[x]/V can be written as p(x) + V where, by the division algorithm, we may
take deg p(x) < n, say p(x) = r_0 + r_1 x + r_2 x² + … + r_{n−1} x^{n−1}. Then
p(x) + V = (r_0 + r_1 x + r_2 x² + … + r_{n−1} x^{n−1}) + V
= (r_0 + V) + (r_1 + V)(x + V) + (r_2 + V)(x² + V) + … + (r_{n−1} + V)(x^{n−1} + V)
= (r_0 + V) + (r_1 + V)(x + V) + (r_2 + V)(x + V)² + … + (r_{n−1} + V)(x + V)^{n−1}
= r_0 (1 + V) + r_1 (x + V) + r_2 (x + V)² + … + r_{n−1} (x + V)^{n−1},
identifying r_i + V with r_i ∈ F.
Thus any element p(x) + V of F[x]/V is a linear combination of 1 + V, x + V, (x + V)²,
…, (x + V)^{n−1} with coefficients in F.
Next we show that these elements are linearly independent.
Suppose
r_0 (1 + V) + r_1 (x + V) + r_2 (x + V)² + … + r_{n−1} (x + V)^{n−1} = V,
the zero element of F[x]/V.
Then r_0 + r_1 x + r_2 x² + … + r_{n−1} x^{n−1} + V = V.
Hence r_0 + r_1 x + r_2 x² + … + r_{n−1} x^{n−1} belongs to V.
But V is the ideal generated by the polynomial u(x), whose degree is n.
Therefore u(x) | (r_0 + r_1 x + r_2 x² + … + r_{n−1} x^{n−1}).
This is possible only if all the coefficients r_0, r_1, …, r_{n−1} are zero.
Thus 1 + V, x + V, (x + V)², …, (x + V)^{n−1} are linearly independent over F.
Hence 1 + V, x + V, (x + V)², …, (x + V)^{n−1} form a basis of F[x]/V over F.
Therefore F[x]/V is of finite dimension n over F.
But F[x]/V is isomorphic with F(a). Therefore F(a) is of finite dimension over F, that is, a
finite extension of F.
In the course of the proof of the above theorem we had also proved the following
theorem.
Theorem
If a K is algebraic of degree n over F, then [F(a) : F] = n.
Proof: The proof is contained in the proof of the 'only if' part of the previous theorem.
Theorem
If a, b ∈ K are algebraic over F, then a ± b, ab and a/b (if b ≠ 0) are all algebraic over
F. In other words, the elements of K which are algebraic over F form a subfield of K.
Proof: Let a be algebraic of degree m over F and let b be algebraic of degree n over F.
Let T = F(a). Then T contains both F and a. By the second theorem, [T : F] = m.
Since b is algebraic of degree n over F and since F ⊆ T, b is algebraic over T also, and
b is algebraic of degree at most n over T. Let W = T(b).
W is a subfield of K and [W : T] ≤ n, by the third theorem.
Now by the first theorem, [M(u) : F] = [M(u) : M] [M : F]. As the R.H.S. is finite, the
L.H.S. is finite. Hence M(u) is a finite extension of F. Hence u ∈ M(u) is algebraic over F, by
the second theorem. Hence the theorem is proved.
Definition Algebraic Number
A complex number is said to be an algebraic number if it is algebraic over Q, the field
of rational numbers.
For example, 2 + 3i is an algebraic number since it satisfies the polynomial x² − 4x + 13
over Q.
Definition Transcendental Number
A complex number which is not an algebraic number is called a transcendental number.
e and π are transcendental numbers.
If a is a complex number which is transcendental, and if f(a) = 0 where f(x) ∈ Q[x], then
f(x) = 0, the zero polynomial. In this case F(a) is called a simple transcendental extension of F.
6.3 LET US SUM UP
(1) A field K containing F is an extension of F; its degree [K : F] is the dimension of K as a
vector space over F.
(2) If K is a finite extension of F and L is a finite extension of K, then [L : F] = [L : K] [K : F].
(3) An element a ∈ K is algebraic over F if and only if F(a) is a finite extension of F.
(4) The minimal polynomial of an algebraic element a ∈ K over F is irreducible over F.
(5) The elements of K which are algebraic over F form a subfield of K.

6.4 LESSON-END ACTIVITIES
6.5 REFERENCES
1) Topics in Algebra by I.N. Herstein, Second Edition, Chapter 5.
LESSON 7
ROOTS OF POLYNOMIALS
CONTENTS:
7.0 Aims and Objectives
7.1 Introduction
7.2 Roots of Polynomials
7.3 Let us Sum up
7.4 Lesson-end Activities
7.5 References
7.0 AIMS AND OBJECTIVES

In this lesson we construct an extension field which contains all the roots of a given
polynomial over the base field F.
After going through this lesson, you will be able to:
(1) Prove that if p(x) ∈ F[x] is an irreducible polynomial over F of degree n ≥ 1,
then there is an extension E of F such that [E : F] = n in which p(x) has a root.
(2) Prove that if f(x) ∈ F[x] is of degree n, then there exists an extension of F of degree at most
n! in which f(x) has all its roots.
(3) Define the splitting field of a polynomial over F.
(4) Prove that any two splitting fields of the same polynomial over F are isomorphic.
7.1
INTRODUCTION
In the previous lesson we discussed algebraic elements in an extension field K
of F, i.e., elements which satisfy polynomials in F[x]. Now, given a polynomial p(x) ∈ F[x],
we wish to find a field K which is an extension of F in which p(x) has a root. We conclude this
lesson by constructing the splitting field (a minimal extension of F containing all the roots of
the given polynomial in F[x]).
7.2
ROOTS OF POLYNOMIALS
Definition
Let p(x) ∈ F[x]. Then an element a lying in some extension of F is called a root of
p(x) if p(a) = 0.
Note
If a ∈ K is a root of p(x) ∈ F[x] of multiplicity m, we will count a as m roots.
Lemma
A polynomial of degree n over a field has at most n roots in any extension field.
Proof
Let F be a field and p(x) ∈ F[x]. We prove the lemma by induction on n, the degree
of p(x).
If p(x) is of degree 1, then it must be of the form αx + β, where α, β ∈ F and α ≠ 0.
If p(a) = 0 for some a, then αa + β = 0. Hence a = −β/α. Hence p(x) has the unique root
−β/α. Therefore the lemma is true when n = 1.
Next we assume that the theorem is true for all polynomials in F[x] of degree less than
n. We also assume that the degree of p(x) is n.
If p(x) has no roots in an extension field K, then the number of roots of p(x) in K is
0 ≤ n, and the lemma holds. However, if p(x) has a root a ∈ K of
multiplicity m, then (x − a)^m divides p(x). Hence m ≤ n.
Let p(x) = (x − a)^m q(x), where q(x) ∈ K[x], deg q(x) = n − m and (x − a) does not divide q(x).
Next, if b ≠ a and b is a root in K of p(x), then p(b) = 0. Hence (b − a)^m q(b) = 0. Since
b − a ≠ 0 and K has no zero divisors, we have q(b) = 0. That is, every root of p(x) in K other
than a must be a root of q(x).
As deg q(x) = n − m < n, by our induction hypothesis q(x) has at most n − m roots in K.
As p(x) = (x − a)^m q(x), we see that the roots of p(x) are a, with multiplicity m, together with
the roots of q(x), which are at most n − m in number.
Therefore p(x) has at most m + (n − m) = n roots in K. This proves the lemma.
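The field hypothesis in the lemma is essential: the key step "K has no zero divisors" fails in a ring like Z/8. A quick Python check (ours, not part of the original text) counts the roots of x² − 1 in Z/7 (a field) and in Z/8 (not a field):

```python
# roots of x² - 1 in Z/n: at most 2 when Z/n is a field, but possibly more otherwise
def roots_of_x2_minus_1(n):
    return [a for a in range(n) if (a * a - 1) % n == 0]

print(roots_of_x2_minus_1(7))   # [1, 6]: two roots, as the lemma predicts
print(roots_of_x2_minus_1(8))   # [1, 3, 5, 7]: four roots of a degree-2 polynomial
```

In Z/8 the factorization (a − 1)(a + 1) ≡ 0 does not force either factor to vanish, e.g. 3 − 1 = 2 and 3 + 1 = 4 with 2 · 4 ≡ 0 (mod 8).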
Theorem
Let F be a field. If p(x) is a polynomial in F[x] of degree n ≥ 1 and is irreducible over
F, then there is an extension E of F such that [E : F] = n, in which p(x) has a root.
Proof
As F is a field, F[x] is the ring of polynomials in the indeterminate x over F.
Let V = <p(x)> be the ideal of F[x] generated by p(x). As p(x) is irreducible over F, V
is a maximal ideal of F[x].
So E = F[x]/V is a field and hence a ring. This E will be shown to satisfy the conclusions of
the present theorem.
Let F̄ = {α + V : α ∈ F}. Define a mapping ψ from F[x] into E = F[x]/V by
ψ(f(x)) = f(x) + V.
First we show that ψ is a homomorphism. Let f(x) and g(x) belong to F[x]. Then f(x)
+ g(x) ∈ F[x] and f(x) g(x) ∈ F[x], as F[x] is a ring. Now
ψ[f(x) + g(x)] = f(x) + g(x) + V
= [f(x) + V] + [g(x) + V]
= ψ[f(x)] + ψ[g(x)].
Next, ψ[f(x) g(x)] = f(x) g(x) + V
= [f(x) + V] [g(x) + V]
= ψ[f(x)] ψ[g(x)].
Therefore ψ is a homomorphism.
Consider the restriction ψ_F of ψ to F.
That is, ψ_F : F → F̄ is defined by
ψ_F(α) = α + V for all α ∈ F.
Then we can easily verify that ψ_F is a homomorphism. Now we show that ψ_F is one-one. Let
α ∈ Ker(ψ_F). Then ψ_F(α) = the zero element of E = V.
∴ α + V = V.
∴ α ∈ V. That is, α ∈ <p(x)>.
Hence α = p(x) q(x) for some q(x) ∈ F[x].
In this equation the L.H.S. is a polynomial of degree zero and, if q(x) ≠ 0, the R.H.S. is a
polynomial of degree ≥ n. This is possible only when q(x) = 0. Hence α = 0. Hence ψ_F is an
isomorphism of F onto F̄, and F is isomorphic to F̄. In this way (identifying F with F̄) E is an
extension of F, and E is a vector space over F.
We now prove that the n elements 1 + V, x + V, (x + V)² = x² + V, (x + V)³ = x³ + V, …,
(x + V)^{n−1} = x^{n−1} + V of E form a basis of E over F.
First we show that they are linearly independent over F. If possible, let them be
linearly dependent. Then scalars α_0, α_1, …, α_{n−1}, not all zero, exist such that
α_0 (1 + V) + α_1 (x + V) + α_2 (x² + V) + … + α_{n−1} (x^{n−1} + V) = V.
That is,
α_0 + α_1 x + … + α_{n−1} x^{n−1} + V = V.
∴ α_0 + α_1 x + … + α_{n−1} x^{n−1} ∈ V = <p(x)>.
∴ α_0 + α_1 x + … + α_{n−1} x^{n−1} = p(x) q(x) for some q(x) ∈ F[x]. Here the degree of the
L.H.S. is ≤ n − 1 and, if q(x) ≠ 0, the degree of the R.H.S. is ≥ n. This is possible only when
q(x) = 0. Therefore
α_0 + α_1 x + … + α_{n−1} x^{n−1} = 0 as a polynomial. This is possible only when α_0 = α_1 = …
= α_{n−1} = 0.
Hence 1 + V, x + V, x² + V, …, x^{n−1} + V of E are linearly independent over F.
Next we show that these n elements generate E.
Let f(x) + V ∈ E = F[x]/V. By the division algorithm, f(x) = q(x) p(x) + r(x), where r(x) = 0
or deg r(x) < n. Since p(x) + V = V, we get f(x) + V = r(x) + V, which, as in (2) above, is a
linear combination of 1 + V, x + V, …, x^{n−1} + V with coefficients in F.
Hence these n elements form a basis of E over F, and [E : F] = n.
Finally, let a = x + V ∈ E. Then
p(a) = p(x) + V = V, and V, the zero element of E, is identified with 0 ∈ F. Hence p(a) = 0 and
a is a root of p(x). Hence E is the required field. Hence the theorem.
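The construction E = F[x]/⟨p(x)⟩ is completely computable: cosets are represented by remainders mod p(x), and coset multiplication is polynomial multiplication followed by reduction. A Python sketch of this (ours, not part of the original text) for F = Q and p(x) = x² + 1, where the resulting field is Q(i):

```python
from fractions import Fraction

P = [1, 0, 1]   # p(x) = x² + 1, irreducible over Q; V = <p(x)>, E = Q[x]/V

def reduce_mod_p(f):
    # canonical representative of the coset f(x) + V: remainder of f by p(x)
    r = [Fraction(c) for c in f]
    while True:
        while r and r[-1] == 0:
            r.pop()
        if len(r) < len(P):
            return r
        c, s = r[-1], len(r) - len(P)   # P is monic, so no division is needed
        for i, pc in enumerate(P):
            r[i + s] -= c * pc

def coset_mul(f, g):
    prod = [Fraction(0)] * (len(f) + len(g) - 1)
    for i, x in enumerate(f):
        for j, y in enumerate(g):
            prod[i + j] += Fraction(x) * Fraction(y)
    return reduce_mod_p(prod)

a = [0, 1]                   # the coset a = x + V in E
print(coset_mul(a, a))       # a² = -1 + V, so a² + 1 = V: a is a root of p(x) in E
print(reduce_mod_p(P))       # p(x) + V = V, the zero element of E
```

This is exactly the final step of the proof: the coset a = x + V satisfies p(a) = p(x) + V = V.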
Corollary
If f(x) ∈ F[x], then there is a finite extension E of F in which f(x) has a root.
Moreover, [E : F] ≤ deg f(x).
Proof
Let p(x) be an irreducible factor of f(x). Any root of p(x) is a root of f(x). By the
theorem there is an extension E of F with [E : F] = deg p(x) ≤ deg f(x) in which p(x), and so
f(x), has a root.
Theorem
Let p(x) ∈ F[x] be of degree n ≥ 1. Then there is an extension E of F of degree at most
n! in which p(x) has n roots (and so a full complement of roots).
Proof
The proof is by induction on n, the degree of p(x). A root of p(x) of multiplicity m is
counted m times.
When n = 1, let p(x) = a_0 + a_1 x, where a_0, a_1 ∈ F and a_1 ≠ 0. Then the only root of p(x)
is −a_0 / a_1 ∈ F. In this case we take E = F and we get [E : F] = 1 ≤ 1!.
The proof follows from the fact that the polynomials in F′[t] are added and multiplied in the
same way as the corresponding polynomials in F[x] are added and multiplied. Since τ is onto,
τ* is onto. Any element α of F, considered as a polynomial of degree zero in F[x], is mapped
into α′ = τ(α) under τ*. Hence the lemma.
Lemma
There is an isomorphism τ** of F[x]/(f(x)) onto F′[t]/(f′(t)) with the property that for every
α ∈ F, τ**(α) = α′.
Proof
As in the first theorem, we can identify F as a subfield of the ring F[x]/(f(x)) and F′ as a
subfield of the ring F′[t]/(f′(t)).
Define τ** : F[x]/(f(x)) → F′[t]/(f′(t))
by
τ**(p(x) + (f(x))) = p′(t) + (f′(t)), where p(x) ∈ F[x], p′(t) ∈ F′[t] and p′(t) = τ*(p(x)).
One verifies that τ** is well defined and is an isomorphism of F[x]/(f(x)) onto F′[t]/(f′(t)).
Next,
τ**(α) = τ**(α + (f(x))) = α′ + (f′(t)) = α′.
Hence the lemma is proved.
Theorem
If p(x) is irreducible in F[x] and if v is a root of p(x), then F(v) is isomorphic to F′(w),
where w is a root of p′(t). Moreover, this isomorphism σ can be so chosen that
1) σ(v) = w
2) σ(α) = α′ for every α ∈ F.
Proof
Let v be a root of the irreducible polynomial p(x) in some extension K of F.
Then, as we saw in the proof of the earlier theorem,
F[x]/(p(x)) ≅ F(v) ⊆ K,
and similarly
F′[t]/(p′(t)) ≅ F′(w).
Let θ : F[x]/(p(x)) → F(v) and ψ : F′[t]/(p′(t)) → F′(w) denote these isomorphisms, and let
τ** : F[x]/(p(x)) → F′[t]/(p′(t)) be the isomorphism of the previous lemma.
Since θ is an isomorphism, θ^{−1} exists and is an isomorphism F(v) → F[x]/(p(x)).
Define σ = ψ ∘ τ** ∘ θ^{−1} : F(v) → F′(w). As a composition of isomorphisms, σ is an
isomorphism. Then
σ(v) = (ψ ∘ τ** ∘ θ^{−1})(v)
= ψ(τ**(x + (p(x))))
= ψ(t + (p′(t)))
= w.
Further,
σ(α) = (ψ ∘ τ** ∘ θ^{−1})(α)
= ψ(τ**(α + (p(x))))
= ψ(α′ + (p′(t))), by the previous lemma,
= α′.
Hence σ is the required isomorphism mentioned in the theorem.
Hence the proof.
Corollary
If p(x) F[x] is irreducible and if a and b are two roots of p(x) then F(a) is
isomorphic to F(b) by an isomorphism which takes a into b and keeps every element of F
fixed.
Proof
In the above theorem, we take the two fields F and F′ as identical and τ as the
identity automorphism of F, so that α′ = α for all α ∈ F. Then the polynomial p(x) ∈ F[x] and
the corresponding polynomial p′(t) ∈ F′[t] are identical, and we can take a and b as roots of
both p(x) and p′(t). Hence by the above theorem there is an isomorphism σ : F(a) → F(b)
such that σ(a) = b and σ(α) = α ∀ α ∈ F.
Theorem
Any two splitting fields E and E′ of the polynomials f(x) ∈ F[x] and f′(t) ∈ F′[t]
are isomorphic by an isomorphism φ with the property that
φ(α) = α′ for every α ∈ F.
In particular, any two splitting fields of the same polynomial over a given field F are
isomorphic by an isomorphism leaving every element of F fixed.
Proof
The proof is by induction on the degree of the splitting field over the initial field.
Let f′(t) be the polynomial in F′[t] corresponding to the polynomial f(x) in F[x], let
E be the splitting field of f(x) over F and let E′ be the splitting field of f′(t) over F′.
Let [E : F] = 1. Then E = F, and f(x) splits completely into linear factors
over F. Therefore f′(t) splits completely into linear factors over F′.
Hence E′ = F′ and [E′ : F′] = 1. But then φ = τ provides us with an
isomorphism of E onto E′ coinciding with τ on F. Hence the theorem is true when the degree
of the splitting field E over the initial field F is 1.
Let us assume that the theorem is true for any field F_0 and for any polynomial g(x)
over F_0 whenever the degree of the splitting field E_0 of g(x) over F_0 is less than n, the
degree of E over F.
Now n > 1; hence E ≠ F. Therefore, when f(x) is factorised over F, there exists at least
one non-linear irreducible factor, say p(x). Let the degree of p(x) be r; then 1 < r ≤ n. Let p′(t)
be the corresponding irreducible factor of f′(t) when it is factored over F′. Then the degree of
p′(t) is also r.
Now f(x) splits completely into linear factors over E. Therefore p(x), which is a factor
of f(x), must also split into linear factors over E. Hence there is a root v ∈ E of p(x), and
[F(v) : F] = r. Similarly there exists a root w ∈ E′ of p′(t), and [F′(w) : F′] = r.
Now by the last theorem there exists an isomorphism
σ : F(v) → F′(w)
such that σ(v) = w and σ(α) = α′ ∀ α ∈ F.
Now [E : F(v)] [F(v) : F] = [E : F], so
[E : F(v)] r = n. Therefore [E : F(v)] = n/r < n. We claim that E is the splitting
field of f(x) considered as a polynomial over F_0 = F(v), for no proper subfield of E containing
F_0, and hence F, can split f(x), since E is a splitting field of f(x) over F. Similarly
E′ is a splitting field of f′(t) over F′_0 = F′(w).
By the induction hypothesis (applied to F_0, F′_0 and the isomorphism σ) there exists an
isomorphism φ of E onto E′ such that φ(a) = σ(a) for all a ∈ F_0. In particular, for α ∈ F ⊆ F_0,
φ(α) = σ(α) = α′. This completes the induction, and the theorem is proved.
Now we give the proof of the particular case. Take the two fields F and F′ as
identical and the isomorphism τ as the identity automorphism. Under these assumptions the
two polynomials f(x) and f′(t) are one and the same, and E and E′ are splitting fields of the
same polynomial over F. Hence any two splitting fields of a given polynomial over a given
field F are isomorphic by an isomorphism leaving every element of F fixed.
7.3 LET US SUM UP
(1) A polynomial of degree n over a field has at most n roots in any extension field.
(2) If p(x) ∈ F[x] is irreducible over F of degree n ≥ 1, then there is an extension E of F with
[E : F] = n in which p(x) has a root.
(3) If f(x) ∈ F[x] is of degree n ≥ 1, then there is an extension of F of degree at most n! in
which f(x) has all its roots.
(4) Any two splitting fields E and E′ of the polynomials f(x) ∈ F[x] and f′(t) ∈ F′[t] are
isomorphic.
7.4 LESSON-END ACTIVITIES
(1) Let F be a field. If p(x) is a polynomial in F[x] of degree n ≥ 1 and is irreducible over
F, show that there is an extension E of F such that [E : F] = n, in which p(x) has a root.
(2) If p(x) is irreducible in F[x] and if v is a root of p(x), then show that F(v) is isomorphic
to F′(w), where w is a root of p′(t).
7.5 REFERENCES
1) Topics in Algebra by I.N. Herstein, Second Edition.
LESSON 8
CONTENTS:
8.0 Aims and Objectives
8.1 Introduction
8.2
8.3 Let us Sum up
8.4 Lesson-end Activities
8.5 References
8.0 AIMS AND OBJECTIVES

In the previous lesson we saw the answers to several questions relating to the roots of
polynomials. This lesson is devoted to multiple roots of polynomials.
After going through this lesson, you will be able to:
(1) Define the derivative of a polynomial and prove its formal properties.
(2) Prove that an irreducible polynomial over a field of characteristic 0 has no multiple roots.
(3) Prove that if a and b are algebraic over F, then there is an element c such that F(a, b) = F(c).
(4) Define a simple extension of a field.
8.1
INTRODUCTION
Definition
If f(x) = a_0 x^n + a_1 x^{n−1} + … + a_i x^{n−i} + … + a_{n−1} x + a_n is in F[x], then the
derivative of f(x), denoted by f′(x), is defined as
f′(x) = n a_0 x^{n−1} + (n − 1) a_1 x^{n−2} + … + (n − i) a_i x^{n−i−1} + … + a_{n−1}.
Lemma
If F is any field and f(x), g(x) are in F[x] and α ∈ F, then
(1) [f(x) + g(x)]′ = f′(x) + g′(x),
(2) [α f(x)]′ = α f′(x),
(3) [f(x) g(x)]′ = f′(x) g(x) + f(x) g′(x).
Proof
(i) Let
f(x) = a_0 + a_1 x + a_2 x² + … + a_n x^n,
g(x) = b_0 + b_1 x + b_2 x² + … + b_m x^m,
where the a's and b's are in F. We assume m ≤ n without loss of generality.
Then
f(x) + g(x) = (a_0 + b_0) + (a_1 + b_1) x + (a_2 + b_2) x² + … + (a_m + b_m) x^m
+ a_{m+1} x^{m+1} + … + a_n x^n.
Then
[f(x) + g(x)]′ = (a_1 + b_1) + 2(a_2 + b_2) x + … + m(a_m + b_m) x^{m−1}
+ (m + 1) a_{m+1} x^m + … + n a_n x^{n−1}
= f′(x) + g′(x).
(ii) With f(x) as above, α f(x) = α a_0 + α a_1 x + … + α a_n x^n, so
[α f(x)]′ = α a_1 + 2 α a_2 x + … + n α a_n x^{n−1} = α f′(x).
(iii) Write f(x) = Σ_{i=0}^{n} f_i(x) and g(x) = Σ_{j=0}^{m} g_j(x), where f_i(x) = a_i x^i and
g_j(x) = b_j x^j. For these monomials, f_i(x) g_j(x) = a_i b_j x^{i+j}, so
[f_i(x) g_j(x)]′ = (i + j) a_i b_j x^{i+j−1}
= (i a_i x^{i−1})(b_j x^j) + (a_i x^i)(j b_j x^{j−1})
= f_i′(x) g_j(x) + f_i(x) g_j′(x).
Hence, using (1),
[f(x) g(x)]′ = [Σ_i Σ_j f_i(x) g_j(x)]′
= Σ_i Σ_j [f_i′(x) g_j(x) + f_i(x) g_j′(x)]
= [Σ_i f_i′(x)] [Σ_j g_j(x)] + [Σ_i f_i(x)] [Σ_j g_j′(x)]
= f′(x) g(x) + f(x) g′(x).
Lemma
If f(x) ∈ F[x] is irreducible, then: (1) if the characteristic of F is 0, f(x) has no multiple
roots; (2) if the characteristic of F is p ≠ 0, f(x) has a multiple root only if it is of the form
f(x) = g(x^p).
Proof
Since f(x) is irreducible, its only factors in F[x] are 1 and f(x).
If f(x) has a multiple root, then by the lemma, f(x) and f′(x) have a nontrivial
common factor. Hence f(x) divides f′(x). Since the degree of f′(x) is less than the degree of
f(x), this is possible only when f′(x) = 0. In characteristic 0, this implies f(x) is constant,
which has no root. Therefore f(x) has no multiple roots.
In characteristic p, f′(x) = 0 forces f(x) = g(x^p).
Corollary
If F is a field of characteristic p ≠ 0, then the polynomial x^{p^n} − x ∈ F[x], for n ≥ 1,
has distinct roots.
Proof
The derivative of x^{p^n} − x is p^n x^{p^n − 1} − 1 = −1, since F is of characteristic p.
Therefore x^{p^n} − x and its derivative are relatively prime. Therefore, by the above lemma,
x^{p^n} − x has no multiple roots; that is, all the roots of the polynomial x^{p^n} − x are distinct.
Simple Extension
Definition
The extension K of a field F is called a simple extension if K = F(α) for some α ∈ K.
Theorem
If F is of characteristic 0 and a, b are algebraic over F, then there exists an element
c ∈ F(a, b) such that F(a, b) = F(c).
Proof
Let f(x) and g(x) be the irreducible polynomials over F satisfied by a and b, of degrees m and n respectively.
Let K be a field extension of F in which both f(x) and g(x) split completely. Since the characteristic of F is zero, all the roots of f(x) are distinct by corollary 1 of the lemma; likewise the roots of g(x) are all distinct.
Let the roots of f(x) be a_1, a_2, ..., a_m and the roots of g(x) be b_1, b_2, ..., b_n, where a_1 = a and b_1 = b.
If j ≠ 1, then b_j ≠ b_1 = b. Then the equation a_i + λ b_j = a_1 + λ b_1 = a + λ b has only one solution λ, namely
λ_{ij} = (a_i − a) / (b − b_j), i = 1, 2, ..., m, j = 2, 3, ..., n.
The λ_{ij} are finite in number. F is of characteristic 0, so it has an infinite number of elements. Therefore, avoiding the elements λ_{ij}, we can find an element r ∈ F such that
r ≠ (a_i − a) / (b − b_j) for all i and j.
Set c = a + rb. We claim F(a, b) = F(c). Clearly F(c) ⊆ F(a, b). Now b satisfies g(x) and also h(x) = f(c − rx), both of which are in F(c)[x]. By the choice of r, c − r b_j ≠ a_i for every i whenever j ≠ 1, so b is the only common root of g(x) and h(x). Since g(x) has no multiple roots, the greatest common divisor of g(x) and h(x) over F(c) is x − b; hence b ∈ F(c). Then a = c − rb ∈ F(c), so F(a, b) ⊆ F(c) and F(a, b) = F(c).
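The construction can be tried on a small case with sympy (an assumption of this sketch; `minimal_polynomial` is sympy's, not anything in the text): take a = √2, b = √3 and r = 1, which avoids the finitely many excluded ratios here.

```python
from sympy import sqrt, Symbol, minimal_polynomial, expand

x = Symbol('x')
# a = sqrt(2) and b = sqrt(3) each have degree 2 over the rationals;
# with r = 1 the primitive element is c = a + r*b = sqrt(2) + sqrt(3)
c = sqrt(2) + sqrt(3)
m = minimal_polynomial(c, x)
# c has degree 4 = [Q(sqrt2, sqrt3) : Q], so Q(c) = Q(sqrt2, sqrt3)
assert expand(m - (x**4 - 10*x**2 + 1)) == 0
```

Since the degree of c equals the degree of the whole extension, F(c) can be no proper subfield of F(a, b), which is the content of the theorem in this instance.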
Example
Let F be the field of rational numbers. Then f(x) = x^3 − 2 belongs to F[x]. Now
f(x) = x^3 − (2^{1/3})^3 = (x − 2^{1/3}) (x^2 + 2^{1/3} x + 2^{2/3}).
The roots of x^2 + 2^{1/3} x + 2^{2/3} = 0 are given by
x = [−2^{1/3} ± (2^{2/3} − 4·2^{2/3})^{1/2}] / 2 = 2^{1/3} (−1 ± i√3) / 2.
Hence the roots of f(x) = x^3 − 2 are 2^{1/3}, 2^{1/3} ω and 2^{1/3} ω^2, where ω = (−1 + i√3)/2.
Now F(2^{1/3}) is a subfield of the field of reals and hence does not contain ω and ω^2. Hence f(x) does not split into linear factors in F(2^{1/3}). Let E be the splitting field of f(x) over F. Then [E : F] ≤ 3! = 6. But [E : F] = [E : F(2^{1/3})] [F(2^{1/3}) : F] and [F(2^{1/3}) : F] = 3. Hence [E : F(2^{1/3})]·3 ≤ 6, so [E : F(2^{1/3})] ≤ 2. But [E : F(2^{1/3})] > 1, therefore [E : F(2^{1/3})] = 2.
Hence [E : F] = 6.
We could, of course, get this result by making the two extensions F_1 = F(2^{1/3}) and E = F_1(ω), and showing that ω satisfies an irreducible quadratic equation over F_1.
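Parts of this example can be confirmed with sympy and ordinary complex arithmetic (both assumptions of this sketch, not tools used by the text):

```python
from sympy import Rational, Symbol, minimal_polynomial, expand
import cmath

x = Symbol('x')
cbrt2 = 2 ** Rational(1, 3)
# [F(2^(1/3)) : F] = 3: the minimal polynomial of 2^(1/3) over Q is x^3 - 2
assert expand(minimal_polynomial(cbrt2, x) - (x**3 - 2)) == 0

# the other two roots 2^(1/3)*w and 2^(1/3)*w^2 are genuinely complex
u = 2 ** (1 / 3)
w = cmath.exp(2j * cmath.pi / 3)
for root in (u, u * w, u * w**2):
    assert abs(root**3 - 2) < 1e-9        # each is a root of x^3 - 2
assert abs((u * w).imag) > 0.5            # so the real field F(2^(1/3)) misses them
```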
Example
Let F be the field of rational numbers and let
f(x) = x^4 + x^2 + 1 ∈ F[x].
Now x^4 + x^2 + 1 = (x^2 + x + 1) (x^2 − x + 1).
In any extension E of F, if a is a root of x^2 + x + 1, then −a is a root of x^2 − x + 1 in that extension. Consequently any extension E that splits x^2 + x + 1 also splits x^4 + x^2 + 1. Therefore the splitting field of x^4 + x^2 + 1 is the same as the splitting field of x^2 + x + 1.
In the field of complex numbers the roots of x^2 + x + 1 are ω and ω^2.
Since x^2 + x + 1 is irreducible over F and of degree 2, in any extension of F of degree less than 2 it cannot have a root. So if E is its splitting field, then [E : F] > 1, while [E : F] ≤ 2. So we must have [E : F] = 2.
F(ω) contains the root ω of x^2 + x + 1, and since ω ∈ F(ω), also ω^2 ∈ F(ω). Hence F(ω) contains both the roots ω, ω^2, so F(ω) splits x^2 + x + 1. As x^2 + x + 1 is irreducible over F of degree 2 and ω is a root of it, ω is algebraic over F of degree 2, so [F(ω) : F] = 2. Therefore F(ω) is the splitting field of f(x) over F.
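The factorisation and the root correspondence used above can be checked with sympy (an assumption of this sketch):

```python
from sympy import Symbol, expand, factor, solve

x = Symbol('x')
# the factorisation used in the example
assert expand(factor(x**4 + x**2 + 1) - (x**2 + x + 1) * (x**2 - x + 1)) == 0

# if a is a root of x^2 + x + 1, then -a is a root of x^2 - x + 1
for a in solve(x**2 + x + 1, x):
    assert expand((-a)**2 - (-a) + 1) == 0
```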
8.3
LET US SUM UP
(1)
A Polynomial f(x) F[x] has a multiple root in some extension of F if and only if f(x)
and its derivative f ' (x) have a nontrivial common factor.
(2)
8.4
LESSON-END ACTIVITIES
(1)
If F is a field and f(x) and g(x) are in F[x], then show that
(a) [f(x) + g(x)]′ = f′(x) + g′(x)
(b) [a f(x)]′ = a f′(x)
(c) [f(x) g(x)]′ = f′(x) g(x) + f(x) g′(x).
(2)
8.5 REFERENCES
1) Topics in Algebra by I.N. Herstein (Second Edition), Chapter 5.
UNIT IV
LESSON 9
GALOIS THEORY
CONTENTS :
9.0
9.1
Introduction
9.2
Automorphism
9.3
Galois Group
9.4
Let us Sum up
9.5
9.6
References
9.0
In this lesson we prove the fundamental theorem of Galois theory which is useful for
the solvability by radicals of the roots of a polynomial.
After going through this lesson, you will be able to :
(1)
(3)
(4)
Define a normal extension, and find the size of the fixed field of any subgroup of G(K,F).
(5)
9.1
INTRODUCTION
Galois theory is concerned with finding the conditions under which a given polynomial p(x) over a field of characteristic 0 can be solved by means of radicals only. We associate with each polynomial p(x) a group, called the Galois group of p(x). The Galois group of p(x) is a group of permutations of the roots of p(x), which is introduced using the splitting field of p(x). We define the normal extension of a field. If K is a normal extension of F, we show that [K : F] = O(G(K,F)), sharpening the earlier inequality. Finally, the fundamental theorem of Galois theory sets up a one-to-one correspondence between the subfields of the splitting field of f(x) and the subgroups of its Galois group, and gives a criterion for a subfield of a normal
extension of F to be itself a normal extension of F. This fundamental theorem will be used to derive the conditions for the solvability by radicals of the roots of a polynomial.
9.2
AUTOMORPHISMS
Definition
If K is a field, then a map σ : K → K which is onto and satisfies σ(a + b) = σ(a) + σ(b) and σ(ab) = σ(a) σ(b) for all a, b ∈ K is called an automorphism of K.
Definition
Two automorphisms σ and τ of K are said to be distinct if σ(a) ≠ τ(a) for some a in K.
Theorem
If K is a field and if σ_1, σ_2, ..., σ_n are distinct automorphisms of K, then it is impossible to find elements a_1, a_2, ..., a_n, not all 0, in K such that a_1 σ_1(u) + a_2 σ_2(u) + ⋯ + a_n σ_n(u) = 0 for all u ∈ K.
Proof
Let σ_1, σ_2, ..., σ_n be distinct automorphisms of K. Suppose it is possible to find elements a_1, a_2, ..., a_n, not all 0, in K such that
a_1 σ_1(u) + a_2 σ_2(u) + ⋯ + a_n σ_n(u) = 0 for all u ∈ K. (1)
Then we can find a relation of the form (1) having as few nonzero terms as possible. On renumbering, we can assume that one such minimal relation is
a_1 σ_1(u) + a_2 σ_2(u) + ⋯ + a_m σ_m(u) = 0, (2)
where a_1, a_2, ..., a_m are all different from 0.
If m = 1, then a_1 σ_1(u) = 0 for all u ∈ K; taking u = 1 gives a_1 = 0, which contradicts our assumption. Therefore m > 1.
Since σ_1 ≠ σ_m, there exists c ∈ K such that σ_1(c) ≠ σ_m(c). Since cu ∈ K for all u ∈ K, (2) holds for cu. Hence we have
a_1 σ_1(cu) + a_2 σ_2(cu) + ⋯ + a_m σ_m(cu) = 0.
Since σ_1, σ_2, ..., σ_m are automorphisms, this relation gives us
a_1 σ_1(c) σ_1(u) + a_2 σ_2(c) σ_2(u) + ⋯ + a_m σ_m(c) σ_m(u) = 0. (3)
Multiplying (2) by σ_1(c),
a_1 σ_1(c) σ_1(u) + a_2 σ_1(c) σ_2(u) + ⋯ + a_m σ_1(c) σ_m(u) = 0. (4)
(3) − (4) gives
a_2 [σ_2(c) − σ_1(c)] σ_2(u) + ⋯ + a_m [σ_m(c) − σ_1(c)] σ_m(u) = 0. (5)
Let b_j = a_j [σ_j(c) − σ_1(c)], j = 2, 3, ..., m. Then (5) gives
b_2 σ_2(u) + b_3 σ_3(u) + ⋯ + b_m σ_m(u) = 0, (6)
where b_m = a_m [σ_m(c) − σ_1(c)] ≠ 0. The number of nonzero terms in (6) is less than the number of terms in (2). Hence we get a contradiction to the fact that (2) has the minimum number of terms among the relations of the form (1). Therefore no relation (1) exists, and the theorem is proved.
Theorem
If K is a finite extension of F, then G(K,F) is a finite group and its order O(G(K,F)) satisfies O(G(K,F)) ≤ [K : F].
Proof
Let [K : F] = n and let u_1, u_2, ..., u_n be a basis of K over F.
Suppose the theorem is false. Then O(G(K,F)) > n, so there exist n+1 distinct automorphisms σ_1, σ_2, ..., σ_{n+1} in G(K,F). Consider the following n homogeneous equations in the n+1 unknowns x_1, x_2, ..., x_{n+1}:
σ_1(u_1) x_1 + σ_2(u_1) x_2 + ⋯ + σ_{n+1}(u_1) x_{n+1} = 0
σ_1(u_2) x_1 + σ_2(u_2) x_2 + ⋯ + σ_{n+1}(u_2) x_{n+1} = 0
. . .
σ_1(u_n) x_1 + σ_2(u_n) x_2 + ⋯ + σ_{n+1}(u_n) x_{n+1} = 0.
Because the number of equations here is less than the number of unknowns, we know that these equations have a nontrivial solution, say x_1 = a_1, x_2 = a_2, ..., x_{n+1} = a_{n+1}. Then
a_1 σ_1(u_i) + a_2 σ_2(u_i) + ⋯ + a_{n+1} σ_{n+1}(u_i) = 0 for i = 1, 2, 3, ..., n. (1)
Let t ∈ K. Then, as u_1, u_2, ..., u_n is a basis of K over F,
t = α_1 u_1 + ⋯ + α_n u_n with α_1, α_2, ..., α_n ∈ F.
Since each σ_k leaves every element of F fixed, σ_k(α_i u_i) = α_i σ_k(u_i). Then
a_1 σ_1(t) + a_2 σ_2(t) + ⋯ + a_{n+1} σ_{n+1}(t)
= a_1 σ_1(α_1 u_1 + ⋯ + α_n u_n) + a_2 σ_2(α_1 u_1 + ⋯ + α_n u_n) + ⋯ + a_{n+1} σ_{n+1}(α_1 u_1 + ⋯ + α_n u_n)
= α_1 [a_1 σ_1(u_1) + a_2 σ_2(u_1) + ⋯ + a_{n+1} σ_{n+1}(u_1)]
+ α_2 [a_1 σ_1(u_2) + a_2 σ_2(u_2) + ⋯ + a_{n+1} σ_{n+1}(u_2)] + ⋯
+ α_n [a_1 σ_1(u_n) + a_2 σ_2(u_n) + ⋯ + a_{n+1} σ_{n+1}(u_n)]
= 0, using (1).
Thus a_1 σ_1(t) + ⋯ + a_{n+1} σ_{n+1}(t) = 0 for all t ∈ K with the a_i not all 0. This contradicts the result of the previous theorem. Hence the theorem is proved.
Note : If p(x_1, x_2, ..., x_n) and q(x_1, x_2, ..., x_n) are two polynomials in the n variables x_1, x_2, ..., x_n over F, then
r(x_1, ..., x_n) = p(x_1, x_2, ..., x_n) / q(x_1, x_2, ..., x_n)
is a rational function, where the denominator polynomial is a nonzero polynomial.
Let σ ∈ S_n. We can make σ act on r(x_1, x_2, ..., x_n) and obtain another rational function
σ(r(x_1, x_2, ..., x_n)) = r(x_{σ(1)}, x_{σ(2)}, ..., x_{σ(n)}) = p(x_{σ(1)}, x_{σ(2)}, ..., x_{σ(n)}) / q(x_{σ(1)}, x_{σ(2)}, ..., x_{σ(n)}).
A rational function is called symmetric if it is left unchanged by every σ ∈ S_n. The elementary symmetric functions in x_1, x_2, ..., x_n are
a_1 = x_1 + x_2 + ⋯ + x_n = Σ_i x_i,
a_2 = Σ_{i<j} x_i x_j,
a_3 = Σ_{i<j<k} x_i x_j x_k,
...
a_n = x_1 x_2 ⋯ x_n.
Theorem
Let F be a field and let F(x_1, x_2, ..., x_n) be the field of rational functions in x_1, x_2, ..., x_n over F. Let S be the field of symmetric rational functions. Then
(1) [F(x_1, x_2, ..., x_n) : S] = n!
(2) G(F(x_1, x_2, ..., x_n), S) = S_n, the symmetric group of degree n.
(3) If a_1, a_2, ..., a_n are the elementary symmetric functions in x_1, x_2, ..., x_n, then S = F(a_1, a_2, ..., a_n).
(4) F(x_1, x_2, ..., x_n) is the splitting field over F(a_1, a_2, ..., a_n) = S of the polynomial
t^n − a_1 t^{n−1} + a_2 t^{n−2} − ⋯ + (−1)^n a_n.
Proof: Every permutation in the group S_n gives an automorphism of F(x_1, x_2, ..., x_n).
Example
Let K be the field of complex numbers and F the field of real numbers. Let us determine G(K,F).
Let σ be an automorphism in G(K,F). Then
[σ(i)]^2 = σ(i^2) = σ(−1) = −1, so σ(i) = ± i.
If a + bi ∈ K where a and b are real, σ(a + bi) = σ(a) + σ(b) σ(i) = a ± bi.
Let σ_1(a + bi) = a + bi and σ_2(a + bi) = a − bi. Then σ_1 is the identity map on K and σ_2 is the conjugation map. Therefore G(K,F) is a group of order 2.
Now let us compute the fixed field of G(K,F). The fixed field of G(K,F) certainly contains F.
Let a + ib be in the fixed field of G(K,F). Then σ_2(a + ib) = a + ib. But σ_2(a + ib) = a − ib. Therefore a + ib = a − ib, so b = 0 and a + ib is real, that is, a + ib ∈ F.
Therefore the fixed field of G(K,F) is F.
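The computation can be mirrored numerically: complex conjugation preserves sums and products (so it is an automorphism of C leaving R fixed), and it fixes exactly the reals. Random spot checks, with tolerances for floating point:

```python
import random

def sigma2(z):
    # the conjugation map a + bi -> a - bi
    return z.conjugate()

random.seed(0)
for _ in range(100):
    a = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    b = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    assert abs(sigma2(a + b) - (sigma2(a) + sigma2(b))) < 1e-12   # preserves +
    assert abs(sigma2(a * b) - (sigma2(a) * sigma2(b))) < 1e-12   # preserves *

# z is fixed by sigma2 exactly when its imaginary part vanishes
assert sigma2(complex(3, 0)) == complex(3, 0)
assert sigma2(complex(3, 1)) != complex(3, 1)
```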
Example
Let F_0 be the field of rational numbers and let K = F_0(2^{1/3}), where 2^{1/3} is the real cube root of 2. Then every element of K is of the form a_0 + a_1 2^{1/3} + a_2 (2^{1/3})^2, where a_0, a_1, a_2 are rational numbers.
Any automorphism σ ∈ G(K, F_0) must send 2^{1/3} to a root of x^3 − 2 lying in K. Since K is a subfield of the field of reals, the only such root is 2^{1/3} itself, so σ(2^{1/3}) = 2^{1/3} and σ is the identity. Hence G(K, F_0) consists of the identity alone, and its fixed field is all of K, which is larger than F_0.
Theorem
If H is a finite subgroup of the group of automorphisms of a field K and K_H is the fixed field of H, then [K : K_H] = O(H).
Proof
Let O(H) = h and let H = {σ_1, σ_2, ..., σ_h}, where σ_1 is the identity. For a ∈ K, form the elementary symmetric functions of σ_1(a), σ_2(a), ..., σ_h(a):
α_1 = σ_1(a) + σ_2(a) + ⋯ + σ_h(a),
α_2 = Σ_{i<j} σ_i(a) σ_j(a),
...
α_h = σ_1(a) σ_2(a) ⋯ σ_h(a).
We prove that each α_i is invariant under every σ ∈ H.
Let σ be any element of H. Consider the h products σσ_1, σσ_2, ..., σσ_h. Since H is a group, all these are elements of H by closure. Also they are distinct, since σσ_i = σσ_j implies σ_i = σ_j. Hence σσ_1, σσ_2, ..., σσ_h are just σ_1, σ_2, ..., σ_h in some order. Therefore
σ(α_1) = σσ_1(a) + σσ_2(a) + ⋯ + σσ_h(a) = α_1,
σ(α_2) = Σ_{i<j} σ[σ_i(a)] σ[σ_j(a)] = Σ_{i<j} σσ_i(a) σσ_j(a) = α_2,
and similarly we can show that α_3, α_4, ..., α_h are invariant under σ. Hence α_1, α_2, ..., α_h belong to the fixed field K_H of H.
Now consider the polynomial
p(x) = [x − σ_1(a)] [x − σ_2(a)] ⋯ [x − σ_h(a)] (5)
= x^h − α_1 x^{h−1} + α_2 x^{h−2} − ⋯ + (−1)^h α_h.
We see that p(x) is a nonzero polynomial of degree h over K_H, and from (5) we see that a = σ_1(a) is a root of p(x). Hence every element of K satisfies a nontrivial polynomial of degree at most h over K_H. Since the characteristic is 0, K is a simple extension K = K_H(b) of K_H, and b, being of degree at most h over K_H, gives
[K : K_H] ≤ h = O(H). (6)
On the other hand, H is a subgroup of G(K, K_H), so by the previous theorem
O(H) ≤ O(G(K, K_H)) ≤ [K : K_H]. (3), (4)
From (4) and (6), we get
O(H) = [K : K_H].
Putting O(H) = [K : K_H] in (3), O(H) ≤ O(G(K, K_H)) ≤ O(H), so O(H) = O(G(K, K_H)). Now H is a subgroup of G(K, K_H), so O(H) = O(G(K, K_H)) implies that H = G(K, K_H).
Now in the proof of the theorem take H = G(K,F) itself, where K is a normal extension of F. Then K_H = the fixed field of H = the fixed field of G(K,F) = F. Hence we get the result
O(G(K,F)) = [K : F].
Lemma
Let K be the splitting field of f(x) in F[x] and let p(x) be an irreducible factor of f(x) in F[x]. If the roots of p(x) are a_1, a_2, ..., a_r, then for each i there exists an automorphism σ_i in G(K,F) such that σ_i(a_1) = a_i.
Proof : It is given that K is the splitting field of f(x) ∈ F[x] and p(x) is an irreducible factor of f(x) in F[x].
Since p(x) is a factor of f(x), every root of p(x) is also a root of f(x), and so every root of p(x) must be in K.
Theorem
K is a normal extension of F if and only if K is the splitting field of some polynomial over F.
Proof
First let K be a normal extension of F. Being a finite extension of a field of characteristic 0, K is a simple extension, say K = F(a). Consider the polynomial p(x) = [x − σ_1(a)] [x − σ_2(a)] ⋯ [x − σ_n(a)], where σ_1, σ_2, ..., σ_n are all the elements of G(K,F). Expanding p(x),
p(x) = x^n − α_1 x^{n−1} + α_2 x^{n−2} − ⋯ + (−1)^n α_n,
where α_1, α_2, ..., α_n are the elementary symmetric functions in a = σ_1(a), σ_2(a), ..., σ_n(a). But these are each invariant with respect to every σ ∈ G(K,F), and hence, by the normality of K over F, they must all be in F. Therefore K splits the polynomial p(x) ∈ F[x] into a product of linear factors. Since a is a root of p(x) and since a generates K over F, a can be in no proper subfield of K which contains F. Thus K is the splitting field of p(x) over F. This proves one half of the theorem.
Now we shall prove the other half.
Let K be the splitting field of the polynomial f(x) ∈ F[x]. Then we have to prove that K is a normal extension of F. Since every splitting field is a finite extension, K is a finite extension of F.
Let [K : F] = n. The proof is by induction on n.
If n = 1, then K = F, and K is a normal extension of F since F is a normal extension of F.
Now we assume that if K_1 is the splitting field over F_1 of a polynomial in F_1[x] and [K_1 : F_1] = n_1 < n, then K_1 is a normal extension of F_1 (induction hypothesis).
Let [K : F] = n > 1. K is the splitting field over F of f(x) ∈ F[x]. If f(x) splits into linear factors over F, then K = F and consequently [K : F] = 1, contrary to our assumption that [K : F] = n > 1. So f(x) has an irreducible factor p(x) ∈ F[x] of degree r > 1. Since the characteristic of F is 0 and p(x) is irreducible, p(x) cannot have multiple roots. Further, p(x) is a factor of f(x) and K is the splitting field of f(x); therefore p(x) has a full complement of roots in K. Let a_1, a_2, ..., a_r ∈ K be the r distinct roots of p(x).
Since p(x) is irreducible over F and the degree of p(x) is r, [F(a_1) : F] = r.
But as K is a normal extension of F(a_1), F(a_1) is the fixed field of G(K, F(a_1)). Therefore q is in F(a_1).
So we can write
q = λ_0 + λ_1 a_1 + λ_2 a_1^2 + ⋯ + λ_{r−1} a_1^{r−1}, (1)
where λ_0, λ_1, ..., λ_{r−1} ∈ F.
Now, by the above lemma, for each i = 1, 2, ..., r there is an automorphism σ_i ∈ G(K,F) such that σ_i(a_1) = a_i. Since q is left fixed by every automorphism in G(K,F), σ_i(q) = q.
Also, since λ_0, λ_1, ..., λ_{r−1} ∈ F, σ_i(λ_0) = λ_0, σ_i(λ_1) = λ_1, ..., σ_i(λ_{r−1}) = λ_{r−1}.
Operating with σ_i on both sides of (1), we get
σ_i(q) = σ_i(λ_0 + λ_1 a_1 + λ_2 a_1^2 + ⋯ + λ_{r−1} a_1^{r−1})
= σ_i(λ_0) + σ_i(λ_1) σ_i(a_1) + σ_i(λ_2) [σ_i(a_1)]^2 + ⋯ + σ_i(λ_{r−1}) [σ_i(a_1)]^{r−1},
that is,
q = λ_0 + λ_1 a_i + λ_2 a_i^2 + ⋯ + λ_{r−1} a_i^{r−1}. (2)
Therefore
λ_{r−1} a_i^{r−1} + λ_{r−2} a_i^{r−2} + ⋯ + λ_1 a_i + λ_0 − q = 0 for i = 1, 2, 3, ..., r.
Since a_1, a_2, ..., a_r are all distinct, from (2) we see that the polynomial λ_{r−1} x^{r−1} + ⋯ + λ_1 x + (λ_0 − q), of degree at most r − 1, has r distinct roots. This is possible only if all its coefficients are 0; in particular λ_0 − q = 0, so q = λ_0 ∈ F.
9.3
GALOIS GROUP
Theorem (fundamental theorem of Galois theory)
Let f(x) be a polynomial in F[x], F of characteristic 0, K its splitting field over F and G(K,F) its Galois group. For any subfield T of K which contains F, let G(K,T) = {σ ∈ G(K,F) : σ(t) = t for every t ∈ T}, and for any subgroup H of G(K,F), let
K_H = {x ∈ K : σ(x) = x for all σ ∈ H}.
Then the association of T with G(K,T) sets up a one-to-one correspondence of the set of subfields of K which contain F onto the set of subgroups of G(K,F) such that
(1) T = K_{G(K,T)}
(2) H = G(K, K_H)
(3) [K : T] = O(G(K,T)), [T : F] = index of G(K,T) in G(K,F)
(4) T is a normal extension of F if and only if G(K,T) is a normal subgroup of G(K,F)
(5) when T is a normal extension of F, G(T,F) is isomorphic to G(K,F)/G(K,T).
Proof: Since K is the splitting field of f(x) over F, it is also the splitting field of f(x) over any subfield T which contains F. Therefore K is a normal extension of T. Then, by the definition of a normal extension, T is the fixed field of G(K,T); that is, T = K_{G(K,T)}, which proves (1). Also, by the theorem on the fixed field of a subgroup, we have H = G(K, K_H). Hence (2) is proved.
Further, this shows that any subgroup of G(K,F) arises in the form G(K,T); so the association of T with G(K,T) maps the set of all subfields of K containing F onto the set of all subgroups of G(K,F). This mapping is one-to-one, for if G(K,T_1) = G(K,T_2), then by part (1), T_1 = K_{G(K,T_1)} = K_{G(K,T_2)} = T_2.
Next, since K is normal over T, [K : T] = O(G(K,T)). Then
O(G(K,F)) = [K : F] = [K : T] [T : F] = O(G(K,T)) · [T : F].
Therefore
[T : F] = O(G(K,F)) / O(G(K,T)) = index of G(K,T) in G(K,F).
This proves part (3).
Parts (4) and (5) pertain to normality. We need the following result to prove normality:
T is a normal extension of F if and only if, for every σ in G(K,F), σ(T) ⊆ T, where K is a normal extension of F and T a subfield of K containing F.
We prove this result first. Since K is a normal extension of F, K is a finite extension of F. Let T be a subfield of K containing F. Then T is also a finite extension of F, which is of characteristic 0. Therefore T is a simple extension of F; so there is an element a in T such that T = F(a).
Proof of one direction of the result: Suppose σ(T) ⊆ T for every σ ∈ G(K,F). Let σ_1, σ_2, ..., σ_n be all the elements of G(K,F), with σ_1 the identity, and consider the polynomial
p(x) = [x − σ_1(a)] [x − σ_2(a)] ⋯ [x − σ_n(a)] = x^n − α_1 x^{n−1} + α_2 x^{n−2} − ⋯ + (−1)^n α_n, (1)
where α_1, ..., α_n are the elementary symmetric functions of σ_1(a), ..., σ_n(a). These are invariant under every σ ∈ G(K,F), and since K is a normal extension of F they lie in F; thus p(x) ∈ F[x]. Since σ_i(a) ∈ σ_i(T) ⊆ T for each i, all the roots of p(x) lie in T, so T splits p(x).
Also σ_1(a) = a is a root of p(x) in T = F(a). Since a generates T over F, a can be in no proper subfield of T which contains F. Thus T is the splitting field of p(x) ∈ F[x]. Therefore T is a normal extension of F.
Now we prove the other direction of the result.
Let T be a normal extension of F. Then we have to prove that σ(T) ⊆ T for all σ ∈ G(K,F).
Since T is a normal extension of F, G(T,F) is a finite group, say of order m. Let ψ_1, ψ_2, ..., ψ_m be the elements of G(T,F) and let ψ_1 be the identity element of G(T,F). Since a ∈ T = F(a), ψ_1(a), ψ_2(a), ..., ψ_m(a) are all in T.
Consider the polynomial
q(x) = [x − ψ_1(a)] [x − ψ_2(a)] ⋯ [x − ψ_m(a)] over T. Expanding, we get
q(x) = x^m − β_1 x^{m−1} + β_2 x^{m−2} − ⋯ + (−1)^m β_m,
where β_1, β_2, ..., β_m are the elementary symmetric functions of ψ_1(a), ..., ψ_m(a). As before, they are invariant under every ψ ∈ G(T,F), and since T is normal over F, the fixed field of G(T,F) is F; hence β_1, ..., β_m ∈ F and q(x) ∈ F[x]. From the form of q(x) we see that q(x) has all its roots in T. Also ψ_1(a) = a is a root of q(x) in T, and hence in K.
Now let σ be any element of G(K,F). σ is an automorphism of K leaving every element of F fixed, and a is a root of q(x) ∈ F[x] in K; hence σ(a) is also a root of q(x). Since all the roots of q(x) are in T, σ(a) must be an element of T.
Now T = F(a). If [T : F] = r, then any t ∈ T can be written as
t = λ_0 + λ_1 a + λ_2 a^2 + ⋯ + λ_{r−1} a^{r−1} with λ_0, λ_1, ..., λ_{r−1} ∈ F.
Then
σ(t) = λ_0 + λ_1 σ(a) + λ_2 [σ(a)]^2 + ⋯ + λ_{r−1} [σ(a)]^{r−1},
since σ leaves every λ_i fixed. As σ(a) ∈ T, this shows σ(t) ∈ T. Therefore σ(T) ⊆ T, and the result is proved.
With this result, parts (4) and (5) follow. When T is a normal extension of F, restriction to T carries each σ ∈ G(K,F) to an automorphism of T leaving F fixed; this restriction map is a homomorphism of G(K,F) into G(T,F) whose kernel is exactly G(K,T). The image of G(K,F) in G(T,F) is therefore isomorphic to G(K,F)/G(K,T), whose order is
O(G(K,F)) / O(G(K,T)) = [T : F] (by part (3)) = O(G(T,F)).
Thus the image of G(K,F) in G(T,F) is all of G(T,F), and so G(T,F) is isomorphic to G(K,F)/G(K,T). Hence part (5) is proved.
Example:
Let K be a field and S a set of automorphisms of K. If S̄ denotes the subgroup of A(K), the group of all automorphisms of K, generated by S, prove that the fixed field of S and that of S̄ are identical.
Solution:
Let F_1 be the fixed field of S and F_2 the fixed field of S̄.
Let a ∈ F_2. Then σ(a) = a for all σ ∈ S̄. Hence σ(a) = a for all σ ∈ S, since S ⊆ S̄. Therefore a ∈ F_1. Hence F_2 ⊆ F_1.
To prove F_1 ⊆ F_2, let x ∈ F_1. Then σ(x) = x for all σ ∈ S. Let φ ∈ S̄. Then
φ = σ_1^{p_1} σ_2^{p_2} ⋯ σ_n^{p_n}
for some σ_1, σ_2, ..., σ_n ∈ S and integers p_1, p_2, ..., p_n. Since σ_i(x) = x implies both σ_i^{p_i}(x) = x and σ_i^{−1}(x) = x, we get
φ(x) = σ_1^{p_1}(σ_2^{p_2}(⋯ σ_n^{p_n}(x))) = x.
Therefore x ∈ F_2, so F_1 ⊆ F_2, and hence F_1 = F_2.
Example
Prove that the Galois group of x^3 − 2 over F_0, the field of rational numbers, is isomorphic to S_3, the symmetric group of permutations of degree 3.
Solution:
The polynomial x^3 − 2 ∈ F_0[x] has a full complement of roots in its splitting field K = F_0(2^{1/3}, ω), namely 2^{1/3}, 2^{1/3} ω and 2^{1/3} ω^2. By the earlier example, [K : F_0] = 6, and since K is a normal extension of F_0, O(G(K, F_0)) = [K : F_0] = 6. Each automorphism in G(K, F_0) permutes the three roots of x^3 − 2 and is completely determined by the permutation it induces; so G(K, F_0) is isomorphic to a subgroup of S_3 of order 6, that is, to S_3 itself.
9.4
LET US SUM UP
(1)
(2)
(3)
If K is a finite extension of F, then G(K,F) is a finite group and its order O(G(K,F)) satisfies O(G(K,F)) ≤ [K : F].
(4)
9.5
(1)
(2)
(3)
9.6 REFERENCES
1) Topics in Algebra by I.N. Herstein (Second Edition), Chapter 5.
LESSON 10
SOLVABILITY BY RADICALS
CONTENTS :
10.0
10.1
Introduction
10.2
Solvable Groups
10.3
Solvability by Radicals
10.4
Let us Sum up
10.5
10.6
References
10.0
(4)
(5)
Prove that if p(x) is solvable by radicals, then the Galois group of p(x) over F is a solvable group.
(6)
10.1
INTRODUCTION
Let F be any field and p(x) = x^2 + a_1 x + a_2, where a_1, a_2 ∈ F. Let F(a_1, a_2) be the field of rational functions in the two variables a_1, a_2 over F. Let w^2 = a_1^2 − 4 a_2. Then the roots of p(x) are in the extension obtained by adjoining w to F(a_1, a_2). The formula for the roots of p(x) is in terms of a_1, a_2 and square roots of rational functions of a_1 and a_2.
The roots of the cubic polynomial over F_0 are given by Cardan's formula. Let
p(x) = x^3 + a_1 x^2 + a_2 x + a_3,
and set
p = a_2 − a_1^2 / 3, q = 2 a_1^3 / 27 − a_1 a_2 / 3 + a_3,
P = [−q/2 + (p^3/27 + q^2/4)^{1/2}]^{1/3} and Q = [−q/2 − (p^3/27 + q^2/4)^{1/2}]^{1/3}.
Then the roots of p(x) are
P + Q − a_1/3, ωP + ω^2 Q − a_1/3 and ω^2 P + ωQ − a_1/3,
where ω ≠ 1 is a cube root of unity. This formula shows that by adjoining a certain square root and then a cube root to F(a_1, a_2, a_3), we obtain a field in which p(x) has its roots.
Next, given a fourth degree polynomial x^4 + a_1 x^3 + a_2 x^2 + a_3 x + a_4, we can use Ferrari's method to solve it: the problem is reduced to that of solving a certain cubic polynomial. Here too a formula can be given expressing the roots in terms of the coefficients and radicals. In this lesson we prove that the general polynomial of degree n ≥ 5 is not solvable by radicals.
10.2
SOLVABLE GROUPS
Definition
A group G is said to be solvable if we can find a chain of subgroups
G = N_0 ⊇ N_1 ⊇ N_2 ⊇ ⋯ ⊇ N_k = {e},
where each N_i is a normal subgroup of N_{i−1} and every factor group N_{i−1}/N_i is abelian.
Example
Every abelian group G is solvable, for we can take N_0 = G and N_1 = {e}. Then G = N_0 ⊇ N_1 = {e}. If g ∈ G, g e g^{−1} = e ∈ N_1, so N_1 is a normal subgroup of N_0 = G, and N_0/N_1 ≅ G is abelian.
Example
The symmetric group S_3 of degree 3 is solvable: take N_0 = S_3 and N_1 = {e, (1,2,3), (1,3,2)}. Here N_1 is a normal subgroup of S_3, and S_3/N_1 and N_1/{e} are both abelian, of orders 2 and 3 respectively.
Example 3
Consider S_4, the symmetric group of degree 4. Let A_4 be the subgroup of even permutations of S_4 and let
V = {e, (1 2)(3 4), (1 3)(2 4), (1 4)(2 3)}.
Then S_4 ⊇ A_4 ⊇ V ⊇ {e} is a chain in which each subgroup is normal in the preceding one; S_4/A_4 and A_4/V are abelian groups of orders 2 and 3 respectively, and V/{e} ≅ V is abelian of order 4. Hence S_4 is solvable.
Definition
Let G be a group and let S = {a^{−1} b^{−1} a b : a, b ∈ G}, the set of commutators of G. Let G′ denote the set of all finite products s_1 s_2 ⋯ s_m of elements of S.
If x = s_1 ⋯ s_m and y = s_{m+1} ⋯ s_n are in G′, then xy = s_1 ⋯ s_n is again a finite product of elements of S, so xy ∈ G′. Also the inverse of a commutator is again a commutator, since (a^{−1} b^{−1} a b)^{−1} = b^{−1} a^{−1} b a; hence
(xy)^{−1} = s_n^{−1} s_{n−1}^{−1} ⋯ s_1^{−1}
is again a product of a finite number of elements of S, so (xy)^{−1} ∈ G′. Therefore G′ is a subgroup of G.
It is called the commutator subgroup of G.
Note : G′ is the smallest subgroup of G containing S: if H is any subgroup of G such that S ⊆ H, then G′ ⊆ H.
Lemma
The commutator subgroup G′ of a group G is a normal subgroup of G.
Proof: Let x ∈ G and a ∈ G′. Then
x a x^{−1} = (x a x^{−1} a^{−1}) a.
As x a x^{−1} a^{−1} ∈ G′ and a ∈ G′, and G′ is a subgroup, (x a x^{−1} a^{−1}) a ∈ G′; i.e. x a x^{−1} ∈ G′.
Therefore G′ is a normal subgroup of G.
Lemma
Let G be a group and G′ the commutator subgroup of G. Then
(1) G/G′ is abelian;
(2) if G/H is abelian, then G′ ⊆ H.
Proof
(i) G′ is the commutator subgroup of G, hence a normal subgroup of G, so the cosets form the group G/G′, with
a G′ · b G′ = ab G′ and b G′ · a G′ = ba G′.
Now
(ab)^{−1}(ba) = b^{−1} a^{−1} b a ∈ G′,
so (ab)^{−1}(ba) G′ = G′, whence ba G′ = ab G′. Therefore
a G′ · b G′ = ab G′ = ba G′ = b G′ · a G′,
and G/G′ is abelian.
(ii) Let G/H be abelian. Then for a, b ∈ G,
a H · b H = b H · a H, so ab H = ba H.
Hence (ba)^{−1} ab ∈ H, i.e. a^{−1} b^{−1} a b ∈ H. Therefore H contains all the elements of the form a^{−1} b^{−1} a b, and so G′ ⊆ H.
Note
We write (G′)′ = G^{(2)}, (G^{(2)})′ = G^{(3)}; in general (G^{(i)})′ = G^{(i+1)}, with G^{(1)} = G′.
Lemma
A group G is solvable if and only if G^{(k)} = {e} for some integer k.
Proof
Let G^{(k)} = {e}. Then
G ⊇ G^{(1)} ⊇ G^{(2)} ⊇ ⋯ ⊇ G^{(k)} = {e}.
By the first lemma, G^{(i+1)} is a normal subgroup of G^{(i)}, and by the second lemma G^{(i)}/G^{(i+1)} is abelian. Take N_0 = G, N_1 = G^{(1)}, N_2 = G^{(2)}, ..., N_k = G^{(k)} = {e}. Then each N_i is a normal subgroup of N_{i−1} and N_{i−1}/N_i is abelian. Therefore G is solvable.
Conversely, let G be solvable. Then there exists a chain
G = N_0 ⊇ N_1 ⊇ N_2 ⊇ ⋯ ⊇ N_k = {e},
where
(1) each N_i is normal in N_{i−1}, and
(2) N_{i−1}/N_i is abelian.
Then by the lemma, the commutator subgroup N′_{i−1} of N_{i−1} must be contained in N_i; i.e. N_i ⊇ N′_{i−1}. Thus
N_1 ⊇ N′_0 = G′ = G^{(1)},
N_2 ⊇ N′_1 ⊇ (G^{(1)})′ = G^{(2)},
N_3 ⊇ N′_2 ⊇ (G^{(2)})′ = G^{(3)},
and continuing,
{e} = N_k ⊇ G^{(k)}.
Therefore G^{(k)} = {e}.
Corollary
If G is a solvable group and if Ḡ is a homomorphic image of G, then Ḡ is solvable.
Proof
Let φ : G → Ḡ be an onto homomorphism. We first claim φ(G′) = Ḡ′.
If a, b ∈ G, then
φ(a^{−1} b^{−1} a b) = φ(a)^{−1} φ(b)^{−1} φ(a) φ(b),
so the image of every commutator of G is a commutator of Ḡ; and since φ is onto, every commutator ā^{−1} b̄^{−1} ā b̄ of Ḡ arises in this way. A product s_1 s_2 ⋯ s_m of commutators of G is carried to φ(s_1) φ(s_2) ⋯ φ(s_m), a product of commutators of Ḡ. Hence φ(G′) = Ḡ′.
Applying this repeatedly, φ(G^{(i)}) = Ḡ^{(i)} for every i. Since G is solvable, G^{(k)} = {e} for some k; therefore
Ḡ^{(k)} = φ(G^{(k)}) = {ē},
and by the lemma Ḡ is solvable.
Lemma
Let G = S_n, where n ≥ 5. Then G^{(k)} for k = 1, 2, 3, ... contains every 3-cycle of S_n.
Proof
First note that if N is a normal subgroup of a group G, then N′, the commutator subgroup of N, is also a normal subgroup of G, since
x (a^{−1} b^{−1} a b) x^{−1} = (x a x^{−1})^{−1} (x b x^{−1})^{−1} (x a x^{−1}) (x b x^{−1})
with x a x^{−1}, x b x^{−1} ∈ N.
Next we claim that if N is a normal subgroup of G = S_n, n ≥ 5, which contains every 3-cycle, then N′ must also contain every 3-cycle. For suppose a = (1, 2, 3) and b = (1, 4, 5) are in N. Then a^{−1} = (1, 3, 2) and b^{−1} = (1, 5, 4), and a direct computation gives
a^{−1} b^{−1} a b = (1, 4, 2),
so the 3-cycle σ = (1, 4, 2) lies in N′.
Since N′ is a normal subgroup of G, for any π ∈ S_n, π^{−1} σ π must also be in N′. Choosing π ∈ S_n (here n ≥ 5 is needed) with π(1) = λ_1, π(4) = λ_2, π(2) = λ_3, we get
π^{−1} σ π = (λ_1, λ_2, λ_3),
and any j outside λ_1, λ_2, λ_3 is left fixed by π^{−1} σ π. Therefore (λ_1, λ_2, λ_3) ∈ N′ for any three distinct symbols λ_1, λ_2, λ_3; that is, N′ contains all 3-cycles.
Letting N = G = S_n, which contains all 3-cycles and is normal in G, we conclude that G′ = N′ contains all 3-cycles. Since G′ contains all 3-cycles and G′ is normal in G, (G′)′ = G^{(2)} contains all 3-cycles.
Continuing this argument, we get that G^{(k)} contains all 3-cycles for any positive integer k.
Theorem
S_n is not solvable for n ≥ 5.
Proof
If G = S_n, by the lemma G^{(k)} contains all 3-cycles in S_n for every positive integer k. Therefore G^{(k)} ≠ {e} for any positive integer k, and by the lemma on solvability, G is not solvable.
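Both the solvability of S_4 and the failure for S_5 can be checked by brute force, computing the derived series G ⊇ G′ ⊇ G^{(2)} ⊇ ⋯ directly. In this sketch (helper names are ours) permutations are tuples, composed as functions:

```python
from itertools import permutations

def compose(p, q):
    # (p * q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def derived(G):
    # commutators a^-1 b^-1 a b, then closure under products
    H = {compose(compose(inverse(a), inverse(b)), compose(a, b))
         for a in G for b in G}
    changed = True
    while changed:
        changed = False
        for a in list(H):
            for b in list(H):
                c = compose(a, b)
                if c not in H:
                    H.add(c)
                    changed = True
    return H

def derived_series_terminates(G, steps=6):
    H = set(G)
    for _ in range(steps):
        H = derived(H)
        if len(H) == 1:
            return True
    return False

S4 = set(permutations(range(4)))
S5 = set(permutations(range(5)))
assert derived_series_terminates(S4)       # S4: A4, then V, then {e}
assert not derived_series_terminates(S5)   # S5: the series stalls at A5
```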
10.3
SOLVABILITY BY RADICALS
Lemma
Suppose that the field F contains all the n-th roots of unity for some particular n, and suppose a ∈ F, a ≠ 0. Let x^n − a ∈ F[x] and let K be its splitting field over F. Then
(1) K = F(u), where u is any root of x^n − a;
(2) the Galois group of x^n − a over F is abelian.
Proof: Since F contains all the n-th roots of unity, it contains ω = e^{2πi/n}. Then ω^n = 1 but ω^m ≠ 1 for 0 < m < n.
Let u be a root of x^n − a. Then u^n − a = 0. (1)
Then
(ω^i u)^n − a = (ω^n)^i u^n − a = u^n − a = 0, by (1).
Therefore ω^i u is a root of x^n − a for i = 0, 1, 2, ..., n−1. That is, u, ωu, ω^2 u, ..., ω^{n−1} u are roots of x^n − a. Also they are distinct, since if
ω^i u = ω^j u for 0 ≤ i < j < n,
then (ω^i − ω^j) u = 0, and as u ≠ 0, ω^i = ω^j, so ω^{j−i} = 1, which is impossible since 0 < j − i < n.
Since ω ∈ F, all of u, ωu, ..., ω^{n−1} u are in F(u). Thus F(u) splits x^n − a.
Since no proper subfield of F(u) which contains F also contains u, no proper subfield of F(u) can split x^n − a.
Thus F(u) is the splitting field of x^n − a, and we have proved K = F(u).
If σ, τ are any two elements in the Galois group of x^n − a, then σ, τ are automorphisms of K = F(u) leaving every element of F fixed.
As u is a root of x^n − a, u^n − a = 0. Therefore σ(u^n − a) = 0, so σ(u^n) − σ(a) = 0, i.e. {σ(u)}^n − a = 0.
Therefore σ(u) is a root of x^n − a. Similarly, τ(u) is a root of x^n − a.
Therefore σ(u) = ω^i u and τ(u) = ω^j u for some i and j. Since σ and τ leave ω ∈ F fixed,
στ(u) = σ(ω^j u) = ω^j σ(u) = ω^j ω^i u = ω^{i+j} u,
τσ(u) = τ(ω^i u) = ω^i τ(u) = ω^i ω^j u = ω^{i+j} u.
Hence στ(u) = τσ(u).
Let x ∈ F(u). Then x = a_0 + a_1 u + a_2 u^2 + ⋯ + a_m u^m, where a_0, a_1, ..., a_m ∈ F, and
στ(x) = a_0 + a_1 στ(u) + a_2 {στ(u)}^2 + ⋯ + a_m {στ(u)}^m
= a_0 + a_1 τσ(u) + a_2 {τσ(u)}^2 + ⋯ + a_m {τσ(u)}^m
= τσ(x).
Therefore στ = τσ, and the Galois group is abelian.
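Both halves of the proof can be illustrated numerically (floating-point complex arithmetic; the choices n = 5, a = 2 are arbitrary): the n roots ω^i u are distinct, and sending u to ω^j u and then multiplying by ω^i gives ω^{i+j} u regardless of the order.

```python
import cmath

n, a = 5, 2.0
u = a ** (1 / n)                   # a real root of x^n - a
w = cmath.exp(2j * cmath.pi / n)   # primitive n-th root of unity

# u, wu, ..., w^(n-1)u are roots of x^n - a, and they are distinct
roots = [w**i * u for i in range(n)]
for r in roots:
    assert abs(r**n - a) < 1e-9
assert len({round(r.real, 6) + 1j * round(r.imag, 6) for r in roots}) == n

# an automorphism acts on u by u -> w^i u; composing two of them gives
# u -> w^(i+j) u in either order, mirroring sigma*tau = tau*sigma
for i in range(n):
    for j in range(n):
        assert abs(w**i * (w**j * u) - w**j * (w**i * u)) < 1e-12
```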
Theorem
If p(x) ∈ F[x] is solvable by radicals over F, then the Galois group of p(x) over F is a solvable group.
Proof: Let K be the splitting field of p(x) over F. The Galois group of p(x) over F is G(K,F). Since p(x) is solvable by radicals, there exists a sequence of fields
F ⊆ F_1 = F(ω_1) ⊆ F_2 = F_1(ω_2) ⊆ ⋯ ⊆ F_k = F_{k−1}(ω_k),
where ω_1^{r_1} ∈ F and ω_i^{r_i} ∈ F_{i−1} for i = 2, ..., k, and K ⊆ F_k.
Note:
(1) The converse of the theorem is also true; that is, if the Galois group of p(x) over F is solvable, then p(x) is solvable by radicals over F.
(2) This theorem and its converse are true even if F does not contain the roots of unity.
10.4
LET US SUM UP
(1)
If p(x) ∈ F[x] is solvable by radicals, then the Galois group of p(x) is a solvable group.
(2)
10.5
(1)
Prove that a group G is solvable if and only if G^{(k)} = {e} for some integer k.
(2)
(3)
10.6 REFERENCES
1) Topics in Algebra by I.N. Herstein (Second Edition) Chapter 5.
LESSON 11
FINITE FIELDS
CONTENTS :
11.1
Introduction
11.2
Finite Fields
11.3
Let us Sum up
11.4
11.5
References
11.0
In this lesson we study finite fields and their structure. We conclude the discussion by proving that any two finite fields having the same number of elements are isomorphic.
After going through this lesson, you will be able to show that:
(1)
If F is a finite field with q elements and F ⊆ K, where K is also a finite field, then K has q^n elements, where n = [K : F].
(2)
If F is a finite field with characteristic p, a prime number, then F has p^m elements for some integer m.
(3)
(4)
For every prime p and every positive integer m there is, up to isomorphism, a unique field having p^m elements.
(5)
11.1
INTRODUCTION
For a given prime p, we know that J_p, the set of integers modulo p, is a field having p elements. We now determine all such finite fields and discuss some of their properties.
11.2
FINITE FIELDS
Definition
A field having finite number of elements is called a finite field.
Lemma
Let F be a finite field having q elements. Let F ⊆ K, where K is also a finite field and [K : F] = n. Then K has q^n elements.
Proof: Since [K:F] = n, the dimension of the vector space K over F is n.
Then K has a basis of n elements over F. Let v_1, v_2, ..., v_n be a basis of K over F.
Then every element v ∈ K can be uniquely written as
v = a_1 v_1 + a_2 v_2 + ⋯ + a_n v_n, where a_1, a_2, ..., a_n ∈ F.
Therefore the number of elements in K is the number of such expressions a_1 v_1 + a_2 v_2 + ⋯ + a_n v_n as the scalars a_1, a_2, ..., a_n range over F.
Since each coefficient can have q values, K must have q^n elements.
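A minimal sketch of the lemma with F = J_2 (q = 2) and n = 2: represent K = F_2[x]/(x^2 + x + 1) by coefficient pairs. There are exactly q^n = 4 of them, and every nonzero element is invertible, so K really is a field. The representation and helper name are ours.

```python
from itertools import product

p = 2
# a pair (a0, a1) stands for a0 + a1*x; the basis over F_2 is {1, x}
elements = list(product(range(p), repeat=2))
assert len(elements) == p ** 2          # q^n elements, as the lemma predicts

def mul(u, v):
    # multiply (u0 + u1 x)(v0 + v1 x) and reduce mod x^2 + x + 1,
    # i.e. use x^2 = x + 1 over F_2
    c0 = u[0] * v[0]
    c1 = u[0] * v[1] + u[1] * v[0]
    c2 = u[1] * v[1]
    return ((c0 + c2) % p, (c1 + c2) % p)

# every nonzero element has a multiplicative inverse
one = (1, 0)
for u in elements:
    if u != (0, 0):
        assert any(mul(u, v) == one for v in elements)
```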
Corollary
Let F be a finite field. Then F has p^m elements, where the prime number p is the characteristic of F.
Proof: F is a field, so under the addition operation in F it is a finite abelian group. Therefore, by the corollary to Lagrange's theorem, if 1 is the multiplicative identity of F, then o(F)·1 = 0; that is, 1 + 1 + ⋯ + 1 (o(F) terms) = 0. Therefore F is of finite characteristic, and since F is a field its characteristic is a prime number p.
Hence F contains a subfield F_0 isomorphic to J_p. Since F_0 has p elements, by the above lemma F has p^m elements, where m = [F : F_0].
Corollary
If the field F has p^m elements, then every a ∈ F satisfies a^{p^m} = a.
Proof: If a = 0 this is clear. The nonzero elements of F form a multiplicative group of order p^m − 1, so a^{p^m − 1} = 1 for every a ≠ 0, whence a^{p^m} = a.
Consequently every element of F is a root of x^{p^m} − x, and since this polynomial of degree p^m can have at most p^m roots, it factors in F[x] as
x^{p^m} − x = Π_{λ∈F} (x − λ).
Corollary
If the field F has p^m elements, then F is the splitting field of the polynomial x^{p^m} − x over its prime field J_p.
Lemma
Any two finite fields F_1 and F_2 having the same number of elements are isomorphic.
Proof: If F_1 and F_2 each have p^m elements, then by the above corollary they are both splitting fields of the polynomial x^{p^m} − x over J_p. Since the splitting fields of a polynomial are isomorphic, F_1 and F_2 are isomorphic.
Lemma
For every prime number p and every positive integer m there exists a field having p^m elements.
Proof: J_p = {0, 1, 2, ..., p−1} is a field under addition modulo p and multiplication modulo p, since p is a prime number.
Consider x^{p^m} − x in J_p[x], and let K be the splitting field of this polynomial. In K, let F = {a ∈ K : a^{p^m} = a}, the set of roots of x^{p^m} − x in K. Since the derivative of x^{p^m} − x is −1, its roots are distinct, so F has exactly p^m elements. Moreover F is a subfield of K: if a, b ∈ F, then
(a + b)^{p^m} = a^{p^m} + (p^m C_1) a^{p^m − 1} b + ⋯ + b^{p^m} = a^{p^m} + b^{p^m} = a + b,
since p^m C_r ≡ 0 (mod p) for 0 < r < p^m and the characteristic is p; also (ab)^{p^m} = a^{p^m} b^{p^m} = ab, and similarly a − b and a b^{−1} (for b ≠ 0) lie in F. Thus F is a field with p^m elements.
Theorem
For every prime number p and for every positive integer m there is, up to isomorphism, a unique field having p^m elements.
Proof follows from above two lemmas.
Lemma
Let G be a finite abelian group with the property that the relation x^n = e is satisfied by at most n elements of G, for every integer n. Then G is a cyclic group.
Proof: First consider the case when the order of G is a power of some prime number q. Let a be an element of G whose order is as large as possible, say o(a) = q^r. By Lagrange's theorem, o(b) divides o(G) for every b ∈ G, and since o(G) is a power of the prime q, o(b) = q^s for some s ≤ r. Then
b^(q^r) = (b^(q^s))^(q^(r−s)) = e^(q^(r−s)) = e,
so every b ∈ G is a solution of x^(q^r) = e. By hypothesis this relation has at most q^r solutions in G, and the q^r distinct powers of a already supply that many; hence every element of G is a power of a.
Therefore G is cyclic.
Next consider the case where o(G) is not a power of some prime number. Then we know that G can be written as
G = S_q1 S_q2 . . . S_qk,
where the q_i are the distinct prime divisors of o(G) and the S_qi are the Sylow subgroups of G. Here every g ∈ G can be uniquely written as
g = s1 s2 . . . sk,
where s_i ∈ S_qi.
Any solution of x^n = e in S_qi is a solution of x^n = e in G. Therefore the relation x^n = e is satisfied by at most n elements of S_qi for every integer n. By the first case, each S_qi is cyclic; let a_i be a generator of S_qi.
Let c = a1 a2 . . . ak. We show that c is a generator of G.
Let o(c) = m. Then c^m = e; that is, a1^m a2^m . . . ak^m = e. Also e = e·e· . . . ·e. Therefore, by the uniqueness of the representation of elements of G as a product of elements of the S_qi, we have a_i^m = e for each i. Hence o(a_i) = o(S_qi) divides m for each i, so o(G) = o(S_q1)·o(S_q2)· . . . ·o(S_qk) divides m. Since m = o(c) divides o(G), we conclude m = o(G), so c generates G and G is cyclic.
Lemma
Let K be a field and let G be a finite subgroup of the multiplicative group of nonzero elements of K. Then G is cyclic.
Proof: For every integer n, the polynomial x^n − 1 has at most n roots in K, and so at most n roots in G; that is, the relation x^n = e is satisfied by at most n elements of G. Therefore by the previous lemma G is cyclic.
Theorem
The multiplicative group of nonzero elements of a finite field is cyclic.
Proof: Let F be a finite field. The nonzero elements of F form a finite group G under multiplication. Taking this group as the group G of the previous lemma, with K = F, we conclude that G is cyclic. Hence the theorem.
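For the prime field J_p the theorem can be checked directly by searching for a generator; a brute-force sketch (the prime 7 is an arbitrary illustrative choice):

```python
# The nonzero elements of J_p form a cyclic group under multiplication:
# some g generates all of {1, ..., p-1} by its powers.
def find_generator(p):
    target = set(range(1, p))
    for g in range(2, p):
        if {pow(g, k, p) for k in range(1, p)} == target:
            return g
    return None

g = find_generator(7)
assert g is not None
print("a generator of the multiplicative group of J_7 is", g)
```

For p = 7 the powers of 3 are 3, 2, 6, 4, 5, 1, so 3 generates the whole group.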
Lemma
If F is a finite field, and a ≠ 0, b ≠ 0 are two elements of F, then we can find elements α and β in F such that 1 + aα² + bβ² = 0.
Proof: Suppose first that the characteristic of F is 2, so that F has 2^n elements. Then every element x of F satisfies x = x^(2^n), so every element of F is a square. In particular a^(−1) = α² for some α ∈ F; taking β = 0 we get 1 + aα² + bβ² = 1 + 1 + 0 = 0, since the characteristic is 2.
If the characteristic of F is an odd prime p, then F has p^n elements, and the squares in F number 1 + (p^n − 1)/2. Hence the two sets {1 + aα² : α ∈ F} and {−bβ² : β ∈ F} each have 1 + (p^n − 1)/2 > p^n/2 elements, so they must intersect; that is, 1 + aα² = −bβ² for some α, β ∈ F, which gives 1 + aα² + bβ² = 0.
11.4 LET US SUM UP
In this lesson we proved:
(1) Any two finite fields having the same number of elements are isomorphic.
(2) For every prime number p and every positive integer m there exists a field having p^m elements.
(3) The multiplicative group of the nonzero elements of a finite field is cyclic.
11.5 LESSON-END ACTIVITIES
(1) Show that any two finite fields having the same number of elements are isomorphic.
(2) Show that for every prime number p and every positive integer m there exists a field having p^m elements.
(3) Show that the multiplicative group of the nonzero elements of a finite field is cyclic.
11.6 REFERENCES
1) Topics in Algebra by I.N. Herstein (Second Edition).
UNIT V
LINEAR TRANSFORMATIONS

LESSON 12
CANONICAL FORMS
CONTENTS:
12.0 Aims and Objectives
12.1 Introduction
12.2 Triangular Form
12.3 Nilpotent Transformations
12.4 Let us Sum up
12.5 Lesson-end Activities
12.6 References
12.0 AIMS AND OBJECTIVES
In this lesson we study the triangular form, which is one of the canonical forms of matrices. We also introduce nilpotent transformations and their related properties.
After going through this lesson, you will be able to:
1. Define Similarity of matrices.
2. Define an invariant subspace.
3. Prove that if T ∈ A(V) has all its characteristic roots in F, then there is a basis of V in which the matrix of T is triangular.
4. State and prove some properties about nilpotent linear transformations.
12.1 INTRODUCTION
One class of linear transformations which have all their characteristic roots in F is the class of nilpotent ones, since every characteristic root of a nilpotent transformation is 0. Hence a nilpotent linear transformation can always be brought to triangular form over F.
Definition:
The subspace W of V is invariant under T ∈ A(V) if WT ⊆ W.
Lemma:
If W ⊆ V is invariant under T, then T induces a linear transformation T̄ on V̄ = V/W, defined by (v + W)T̄ = vT + W. If T satisfies the polynomial q(x) ∈ F[x], then so does T̄. If p1(x) is the minimal polynomial of T̄ over F and p(x) is that of T, then p1(x) | p(x).
Proof:
Let W be an invariant subspace of V under T. Let V̄ = V/W = {v + W : v ∈ V}. Define T̄ : V̄ → V̄ by (v + W)T̄ = vT + W. Let us establish that T̄ is well defined on V̄.
Let v1 + W, v2 + W ∈ V̄ with v1 + W = v2 + W. Then v1 − v2 ∈ W. Since W is invariant under T, we have (v1 − v2)T ∈ W, i.e., v1T − v2T ∈ W, i.e., v1T + W = v2T + W, i.e., (v1 + W)T̄ = (v2 + W)T̄.
Thus T̄ is well defined.
To prove that T̄ is a linear transformation on V̄, let α ∈ F and v1 + W, v2 + W ∈ V̄. Then
(α(v1 + W) + (v2 + W))T̄ = ((αv1 + v2) + W)T̄
= (αv1 + v2)T + W
= α(v1T) + (v2T) + W
= (α(v1T) + W) + ((v2T) + W)
= α(v1T + W) + (v2T + W)
= α(v1 + W)T̄ + (v2 + W)T̄.
Since T̄ is induced by T on V̄, q(T̄) is the transformation induced by q(T) for every polynomial q(x) ∈ F[x]; hence q(T) = 0 implies q(T̄) = 0. In particular, p(T) = 0 gives p(T̄) = 0. By the division algorithm write p(x) = a(x)p1(x) + r(x), where r(x) = 0 or deg r(x) < deg p1(x). If r(x) ≠ 0, then r(T̄) = p(T̄) − a(T̄)p1(T̄) = 0, since p(T̄) = 0 and p1(T̄) = 0; this contradicts the minimality of p1(x). Thus r(x) = 0, so p1(x) | p(x), and the lemma is proved.
12.2 TRIANGULAR FORM
Definition
A matrix A is called triangular if all the entries above the main diagonal are zero (such a matrix is also called lower triangular). Equivalently, if T is a linear transformation on V over F, the matrix of T in the basis v1, v2, . . . , vn is triangular if
v1T = a11v1
v2T = a21v1 + a22v2
. . .
viT = ai1v1 + ai2v2 + . . . + aiivi
. . .
vnT = an1v1 + an2v2 + . . . + annvn,
i.e., if viT is a linear combination only of vi and its predecessors in the basis.
Theorem
If T ∈ A(V) has all its characteristic roots in F, then there is a basis of V in which the matrix of T is triangular.
Proof: The proof goes by induction on the dimension of V over F. If dim V = 1, the matrix of T in any basis is 1 × 1 and hence triangular.
Assume that the theorem is true for all vector spaces over F of dimension n − 1, and let V be a vector space of dimension n over F. Let T be a linear transformation on V having all its characteristic roots in F. Let λ1 ∈ F be a characteristic root of T. There exists a nonzero vector v1 in V such that v1T = λ1v1. Let W = {αv1 : α ∈ F}. W is a one-dimensional subspace of V generated by v1, and is invariant under T, for if x = αv1 ∈ W, we have xT = (αv1)T = α(v1T) = α(λ1v1) = (αλ1)v1 ∈ W. Let V̄ = V/W. We know that dim V̄ = dim V − dim W = n − 1.
By the lemma, T induces a linear transformation T̄ on V̄ whose minimal polynomial over F divides the minimal polynomial of T over F. Thus all the roots of the minimal polynomial of T̄ are roots of the minimal polynomial of T. Since the roots of the minimal polynomial of T are characteristic roots of T, the roots of the minimal polynomial of T̄ must lie in F. Thus the linear transformation T̄ has all its characteristic roots in F and satisfies the hypothesis of the theorem. Since dim V̄ = n − 1, by our induction hypothesis there is a basis {v̄2, v̄3, . . . , v̄n} of V̄ in which the matrix of T̄ is triangular, i.e.,
v̄2T̄ = a22v̄2
v̄3T̄ = a32v̄2 + a33v̄3
. . .
v̄iT̄ = ai2v̄2 + ai3v̄3 + . . . + aiiv̄i
. . .
v̄nT̄ = an2v̄2 + an3v̄3 + . . . + annv̄n.
For i = 2, . . . , n choose vi ∈ V with v̄i = vi + W. We claim that {v1, v2, . . . , vn} is a basis of V.
These vectors span V: if v ∈ V, then v̄ = v + W ∈ V̄ = V/W, so for some a2, . . . , an ∈ F,
v̄ = a2v̄2 + . . . + anv̄n = a2(v2 + W) + . . . + an(vn + W) = (a2v2 + . . . + anvn) + W.
Hence v − (a2v2 + . . . + anvn) ∈ W = {αv1 : α ∈ F}, and so v = a1v1 + a2v2 + . . . + anvn for some a1 ∈ F.
They are linearly independent: if a1v1 + a2v2 + . . . + anvn = 0, then
a2v̄2 + . . . + anv̄n = (a2v2 + . . . + anvn) + W = −a1v1 + W = W,
the zero element of V̄. Since {v̄2, . . . , v̄n} is a basis of V̄, a2 = . . . = an = 0; then a1v1 = 0 and, v1 being nonzero, a1 = 0. Since dim V = n, {v1, v2, . . . , vn} is a basis of V.
Since v̄2T̄ = v2T + W = a22v̄2, we have v2T − a22v2 ∈ W, so v2T = a21v1 + a22v2 for some a21 ∈ F. Similarly viT = ai1v1 + ai2v2 + . . . + aiivi for suitable ai1 ∈ F.
Thus in the basis v1, v2, . . . , vn of V over F, each viT is a linear combination of vi and its predecessors in the basis. Therefore the matrix of T in this basis is triangular. This completes the induction and proves the theorem.
Corollary
If the matrix A ∈ Fn has all its characteristic roots in F, then there is a matrix C ∈ Fn such that CAC⁻¹ is a triangular matrix.
Proof: Suppose that the matrix A ∈ Fn has all its characteristic roots in F. A defines a linear transformation T on F⁽ⁿ⁾ whose matrix in the basis v1 = (1, 0, . . . , 0), v2 = (0, 1, 0, . . . , 0), . . . , vn = (0, 0, . . . , 0, 1) is precisely A. The characteristic roots of T, being equal to those of A, are all in F, and so by the theorem there is a basis of F⁽ⁿ⁾ in which the matrix of T is triangular.
But by a known result, this change of basis merely changes the matrix of T, namely A in the first basis, into CAC⁻¹ for a suitable C ∈ Fn. Thus CAC⁻¹ is triangular.
Theorem
If V is n-dimensional over F and if T ∈ A(V) has all its characteristic roots in F, then T satisfies a polynomial of degree n over F.
Proof: Since T ∈ A(V) has all its characteristic roots in F, by the theorem we can find a basis v1, v2, . . . , vn of V over F in which the matrix of T is triangular:
v1T = λ1v1
v2T = a21v1 + λ2v2
. . .
viT = ai1v1 + . . . + ai,i−1vi−1 + λivi
for i = 1, 2, . . . , n.
The above set of equations can be rewritten in the form
v1(T − λ1) = 0
v2(T − λ2) = a21v1
. . .
vi(T − λi) = ai1v1 + ai2v2 + . . . + ai,i−1vi−1
for i = 1, 2, . . . , n.
Now
v2(T − λ2)(T − λ1) = (v2(T − λ2))(T − λ1) = (a21v1)(T − λ1) = a21(v1(T − λ1)) = a21 · 0 = 0,
and since (T − λ2)(T − λ1) = (T − λ1)(T − λ2), also v1(T − λ2)(T − λ1) = 0. Continuing in this way,
vi(T − λi)(T − λi−1) . . . (T − λ1) = 0 for each i.
For i = n, the transformation S = (T − λn)(T − λn−1) . . . (T − λ1) satisfies v1S = v2S = . . . = vnS = 0; i.e., S annihilates a basis of V, so S must annihilate all of V. Therefore S = 0. Consequently T satisfies the polynomial (x − λ1)(x − λ2) . . . (x − λn) in F[x] of degree n. This completes the proof of the theorem.
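The theorem can be spot-checked numerically: for a triangular matrix T its diagonal entries are its characteristic roots, and the product of the factors (T − λi·I) vanishes. A minimal sketch with an arbitrarily chosen 2 × 2 lower triangular matrix:

```python
# For a triangular T with diagonal entries 3 and 2 (its characteristic
# roots), the product (T - 3I)(T - 2I) is the zero matrix.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_id(c, n):
    return [[c if i == j else 0 for j in range(n)] for i in range(n)]

T = [[3, 0],
     [5, 2]]            # lower triangular; characteristic roots 3 and 2
S = mat_mul(mat_sub(T, scalar_id(3, 2)), mat_sub(T, scalar_id(2, 2)))
assert S == [[0, 0], [0, 0]]
print("(T - 3I)(T - 2I) = 0")
```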
If V = V1 ⊕ V2 ⊕ . . . ⊕ Vk, where each Vi is a subspace of V invariant under T, then in a basis of V obtained by putting together bases of V1, V2, . . . , Vk, the matrix of T takes the block form

A1 O  . . . O
O  A2 . . . O
. . .
O  O  . . . Ak          (2)

where Ai is the matrix of the linear transformation induced by T on Vi.
For, since V = V1 ⊕ V2 ⊕ . . . ⊕ Vk, the set {v1(1), . . . , vn1(1), v1(2), . . . , vn2(2), . . . , vnk(k)} formed from bases of the Vi is a basis of V, and since each Vi is invariant under T, each vj(i)T lies in Vi and so is a linear combination of v1(i), . . . , vni(i) alone.
12.3 NILPOTENT TRANSFORMATIONS
Remark: If S ∈ A(V) is nilpotent with S^r = 0 and a0 ≠ 0 in F, then a0 + S is invertible. For consider
1/a0 − S/a0² + S²/a0³ − . . . + (−1)^(r−1) S^(r−1)/a0^r;
multiplying by a0 + S, the sum telescopes:
(a0 + S)(1/a0 − S/a0² + S²/a0³ − . . . + (−1)^(r−1) S^(r−1)/a0^r) = 1 + (−1)^(r−1) S^r/a0^r = 1,
since S^r = 0.
Now, if T is nilpotent and a0 ≠ 0, then S = a0 + a1T + . . . + amT^m must also be invertible, for a1T + . . . + amT^m is again nilpotent and S = a0 + (a1T + . . . + amT^m).
Notation: M_t will denote the t × t matrix

0 1 0 . . . 0
0 0 1 . . . 0
. . .
0 0 0 . . . 1
0 0 0 . . . 0

all of whose entries are 0 except on the superdiagonal, where they are all 1s.
Definition
If T ∈ A(V) is nilpotent, then k is called the index of nilpotence of T if T^k = 0 but T^(k−1) ≠ 0.
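The matrix M_t is the basic example: its index of nilpotence is exactly t. A small sketch with the illustrative size t = 4:

```python
# The t x t superdiagonal matrix M_t satisfies M_t^t = 0 but
# M_t^(t-1) != 0, so its index of nilpotence is t.

def superdiag(t):
    return [[1 if j == i + 1 else 0 for j in range(t)] for i in range(t)]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, k):
    n = len(A)
    R = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for _ in range(k):
        R = mat_mul(R, A)
    return R

def is_zero(A):
    return all(all(x == 0 for x in row) for row in A)

M = superdiag(4)
assert not is_zero(mat_pow(M, 3))   # M^(t-1) != 0
assert is_zero(mat_pow(M, 4))       # M^t = 0
print("index of nilpotence of M_4 is 4")
```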
Theorem
If T ∈ A(V) is nilpotent, of index of nilpotence n1, then a basis of V can be found such that the matrix of T in this basis has the form

M_n1 O    . . . O
O    M_n2 . . . O
. . .
O    O    . . . M_nr

where n1 ≥ n2 ≥ . . . ≥ nr and n1 + n2 + . . . + nr = dim V.
Proof: Since T has index of nilpotence n1, T^n1 = 0 but T^(n1−1) ≠ 0, so there is a vector v ∈ V with vT^(n1−1) ≠ 0. We claim that v, vT, vT², . . . , vT^(n1−1) are linearly independent over F. Suppose not; let
α1v + α2(vT) + . . . + αn1(vT^(n1−1)) = 0
with not all αi = 0, and let αs be the first nonzero coefficient, so that
αs(vT^(s−1)) + . . . + αn1(vT^(n1−1)) = 0, where s < n1 + 1.
Multiplying on the right by T^(n1−s) kills every term but the first (since T^n1 = 0) and gives αs(vT^(n1−1)) = 0; since αs ≠ 0, this forces vT^(n1−1) = 0.
This contradicts vT^(n1−1) ≠ 0. Thus no such αs exists, and α1 = α2 = . . . = αn1 = 0; i.e., {v, vT, . . . , vT^(n1−1)} is linearly independent over F.
Let V1 be the subspace of V spanned by v1 = v, v2 = vT, . . . , vn1 = vT^(n1−1).
Now V1 is invariant under T. For let x ∈ V1. Then
x = α1v1 + . . . + αn1vn1, and so
xT = (α1v1 + . . . + αn1vn1)T = α1(vT) + α2(vT²) + . . . + αn1−1(vT^(n1−1)) + αn1(vT^n1)
= α1(vT) + α2(vT²) + . . . + αn1−1(vT^(n1−1)) ∈ V1,
since T^n1 = 0.
Thus V1 is invariant under T, and in the above basis the linear transformation induced by T on V1 has matrix M_n1.
Now to continue the proof we are in need of the following two lemmas.
Lemma
If u ∈ V1 is such that uT^(n1−k) = 0, where 0 < k ≤ n1, then u = u0T^k for some u0 ∈ V1.
Proof: Since u ∈ V1 and V1 is the subspace generated by v, vT, vT², . . . , vT^(n1−1), we have
u = α1v + α2(vT) + . . . + αk(vT^(k−1)) + αk+1(vT^k) + . . . + αn1(vT^(n1−1)).
Then
0 = uT^(n1−k) = α1(vT^(n1−k)) + . . . + αk(vT^(n1−1)),
since vT^n1 = 0 kills all the later terms. However, vT^(n1−k), . . . , vT^(n1−1) are linearly independent over F, whence α1 = α2 = . . . = αk = 0, and so
u = αk+1(vT^k) + . . . + αn1(vT^(n1−1)) = (αk+1v + . . . + αn1(vT^(n1−k−1)))T^k = u0T^k,
where u0 ∈ V1, and the proof of the lemma is complete.
Lemma
There exists a subspace W of V, invariant under T, such that V = V1 ⊕ W.
Proof: Let W be a subspace of V of largest possible dimension such that
(i) V1 ∩ W = (0);
(ii) W is invariant under T.
The existence of such a subspace is guaranteed, since the subspace (0) satisfies both conditions.
We claim that V = V1 + W. Suppose not; then there exists an element z ∈ V such that z ∉ V1 + W. Since zT^n1 = 0 ∈ V1 + W, there exists an integer k, 0 < k ≤ n1, such that zT^k ∈ V1 + W while zT^i ∉ V1 + W for i < k.
Thus zT^k = u + w for some u ∈ V1 and w ∈ W. Then
0 = zT^n1 = (zT^k)T^(n1−k) = (u + w)T^(n1−k) = uT^(n1−k) + wT^(n1−k).
Since V1 and W are invariant under T, uT^(n1−k) ∈ V1 and wT^(n1−k) ∈ W; hence uT^(n1−k) = −wT^(n1−k) ∈ V1 ∩ W = (0), which implies uT^(n1−k) = 0.
By the previous lemma, u = u0T^k for some u0 ∈ V1. Therefore zT^k = u + w = u0T^k + w. Let z1 = z − u0. Then z1T^k = zT^k − u0T^k = w ∈ W, and since W is invariant under T, z1T^m ∈ W for all m ≥ k. On the other hand, if i < k, then z1T^i = zT^i − u0T^i ∉ V1 + W, for otherwise zT^i ∈ V1 + W, contradicting the choice of k. In particular z1 ∉ W.
Let W1 be the subspace of V spanned by W and z1, z1T, . . . , z1T^(k−1). Since z1 ∉ W and W ⊆ W1, the dimension of W1 must be larger than that of W. Moreover, since z1T^k ∈ W and W is invariant under T, W1 is invariant under T. By the maximal nature of W, W1 must violate one of the defining properties of W; since W1 is invariant under T, it must be that W1 ∩ V1 ≠ (0).
That is, there exists an element
w0 + α1z1 + α2(z1T) + . . . + αk(z1T^(k−1)) ≠ 0
in W1 ∩ V1, where w0 ∈ W. The scalars α1, α2, . . . , αk are not all zero, for otherwise we would have 0 ≠ w0 ∈ W ∩ V1 = (0), a contradiction.
Let αs be the first nonzero α; then
w0 + z1T^(s−1)(αs + αs+1T + . . . + αkT^(k−s)) ∈ V1.
Since αs ≠ 0 and T is nilpotent, αs + αs+1T + . . . + αkT^(k−s) is invertible, and its inverse R is a polynomial in T. Applying R, we get w0R + z1T^(s−1) ∈ V1R ⊆ V1. Since R is a polynomial in T and W is invariant under T, w0R ∈ W; hence z1T^(s−1) ∈ V1 + W, contradicting z1T^i ∉ V1 + W for i < k.
Therefore V = V1 + W, and since V1 ∩ W = (0), V = V1 ⊕ W, proving the lemma.
By the lemma, V = V1 ⊕ W with W invariant under T. Let T2 be the linear transformation induced by T on W; its index of nilpotence n2 satisfies n2 ≤ n1, since T2^n1 = 0. In a basis of V consisting of the basis v, vT, . . . , vT^(n1−1) of V1 followed by a basis of W, the matrix of T has the form

M_n1 O
O    A2

where A2 is the matrix of T2. Repeating the argument on W, we obtain integers n1 ≥ n2 ≥ . . . ≥ nr and a basis of V in which the matrix of T is

M_n1 O    . . . O
O    M_n2 . . . O
. . .
O    O    . . . M_nr

which proves the theorem.
Lemma
Let dim M = m. If M is cyclic with respect to T, then dim MT^k = m − k for all k ≤ m.
Proof: Since M is of dimension m and cyclic with respect to T, the vectors z, zT, zT², . . . , zT^(m−1) form a basis of M, and zT^m = 0. If x ∈ MT^k, then x = yT^k for some y ∈ M; writing y = α1z + α2(zT) + . . . + αm(zT^(m−1)) gives
x = yT^k = α1(zT^k) + α2(zT^(k+1)) + . . . + αm−k(zT^(m−1)),
the remaining terms vanishing. Thus {zT^k, zT^(k+1), . . . , zT^(m−1)} spans MT^k; these vectors are linearly independent, being part of a basis of M, so they form a basis of MT^k, and MT^k is of dimension m − k.
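The lemma can be modeled concretely: index the basis z, zT, . . . , zT^(m−1) by 0, . . . , m−1, so that T shifts an index up by one and kills index m−1. The dimension of MT^k is then the number of basis vectors surviving k shifts. A sketch with the illustrative dimension m = 6:

```python
# Model of a cyclic subspace: basis vector i maps under T^k to basis
# vector i + k, or to 0 once i + k >= m.  The distinct survivors are
# linearly independent, so counting them gives dim(M T^k) = m - k.

def dim_image(m, k):
    survivors = {i + k for i in range(m) if i + k < m}
    return len(survivors)

m = 6
for k in range(m + 1):
    assert dim_image(m, k) == m - k
print("dim M T^k = m - k for k = 0, ..., m")
```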
Theorem
Two nilpotent linear transformations are similar if and only if they have the same invariants.
Proof: By the theorem, for every nilpotent transformation T ∈ A(V) we can find integers n1 ≥ n2 ≥ . . . ≥ nr and subspaces V1, V2, . . . , Vr of V, cyclic with respect to T, such that V = V1 ⊕ V2 ⊕ . . . ⊕ Vr. The integers n1, n2, . . . , nr are called the invariants of T, and in a suitable basis the matrix of T is
M_n1 O    . . . O
O    M_n2 . . . O
. . .
O    O    . . . M_nr
Suppose now that S and T are nilpotent with the same invariants. Then there are bases {v1, . . . , vn} and {w1, . . . , wn} of V and a matrix (m_ij) such that (m_ij) is the matrix of S in the basis {v_i} as well as the matrix of T in the basis {w_i}; that is,
v_iS = Σ_{j=1}^{n} m_ij v_j and w_iT = Σ_{j=1}^{n} m_ij w_j for i = 1, 2, . . . , n.
Define A ∈ A(V) by v_iA = w_i; A is invertible, since it carries a basis of V onto a basis of V. Then
v_i(AT) = (v_iA)T = w_iT = Σ_{j=1}^{n} m_ij w_j = Σ_{j=1}^{n} m_ij (v_jA) = (Σ_{j=1}^{n} m_ij v_j)A = (v_iS)A = v_i(SA),
i.e., AT = SA, so T = A⁻¹SA.
Hence S and T are similar, and the proof of the theorem is complete.
12.4 LET US SUM UP
(1) The linear transformations S, T ∈ A(V) are said to be similar if there exists an invertible element C ∈ A(V) such that T = CSC⁻¹.
(2) If T ∈ A(V) has all its characteristic roots in F, then there is a basis of V in which the matrix of T is triangular.
(3) Two nilpotent linear transformations are similar if and only if they have the same invariants.

12.5 LESSON-END ACTIVITIES
(1) Show that if T ∈ A(V) has all its characteristic roots in F, then there is a basis of V in which the matrix of T is triangular.
(2) Prove that there exists a subspace W of V, invariant under a nilpotent transformation T of index n1, such that V = V1 ⊕ W, where V1 is the subspace of V spanned by v, vT, . . . , vT^(n1−1) for a vector v with vT^(n1−1) ≠ 0.
(3) Show that two nilpotent linear transformations are similar if and only if they have the same invariants.
12.6 REFERENCES
(1) Topics in Algebra by I.N. Herstein (Second Edition).
LESSON 13
TRACE AND TRANSPOSE
CONTENTS:
13.0 Aims and Objectives
13.1 Introduction
13.2 Trace of a Matrix
13.3 Transpose of a Matrix
13.4 Let us Sum up
13.5 Lesson-end Activities
13.6 References
13.0 AIMS AND OBJECTIVES
In this lesson we study the trace and the transpose of a matrix and their properties.
After going through this lesson, you will be able to:
(1) Define the trace of a matrix and prove its basic properties.
(2) Define the transpose of a matrix and prove its basic properties.
13.1 INTRODUCTION
We introduce the trace of a matrix (the sum of its main diagonal elements) and prove some important properties.

13.2 TRACE OF A MATRIX
Definition
If A = (a_ij) ∈ Fn, the trace of A, written tr A, is the sum of the elements on its main diagonal:
tr A = Σ_{i=1}^{n} a_ii.
Lemma
For A, B ∈ Fn and λ ∈ F,
(1) tr(λA) = λ tr A
(2) tr(A + B) = tr A + tr B
(3) tr(AB) = tr(BA).
Proof:
(1) Let A = (a_ij). Then λA = (λa_ij), so
tr(λA) = Σ_{i=1}^{n} λa_ii = λ Σ_{i=1}^{n} a_ii = λ tr A.
(2) Let A = (a_ij) and B = (b_ij). Then A + B = (a_ij + b_ij), so
tr(A + B) = Σ_{i=1}^{n} (a_ii + b_ii) = Σ_{i=1}^{n} a_ii + Σ_{i=1}^{n} b_ii = tr A + tr B.
(3) Let A = (a_ij) and B = (b_ij). Then AB = (c_ij), where c_ij = Σ_{k=1}^{n} a_ik b_kj, and BA = (d_ij), where d_ij = Σ_{k=1}^{n} b_ik a_kj. Hence
tr(AB) = Σ_{i=1}^{n} c_ii = Σ_{i=1}^{n} Σ_{k=1}^{n} a_ik b_ki.
If we interchange the order of summation in this last sum, we get
tr(AB) = Σ_{k=1}^{n} Σ_{i=1}^{n} b_ki a_ik = Σ_{k=1}^{n} d_kk = tr(BA).
Corollary
If A is invertible, then tr(ACA⁻¹) = tr C.
Proof: Let B = CA⁻¹. Then
tr(ACA⁻¹) = tr(AB) = tr(BA) = tr(CA⁻¹A) = tr C.
This gives us an alternate, basis-free way to speak of the trace of a linear transformation.
Definition
If T ∈ A(V), then the trace of T, written tr T, is the trace of m1(T), where m1(T) is the matrix of T in some basis of V. (By the corollary, this does not depend on the basis chosen.)
Lemma
If T ∈ A(V), then tr T is the sum of the characteristic roots of T (using each characteristic root as often as its multiplicity).
Proof: We may consider T as a matrix in Fn. If K is the splitting field of the minimal polynomial of T over F, then in K_n, T can be brought to its Jordan form J. Now J is a matrix on whose diagonal appear the characteristic roots of T, each root appearing as often as its multiplicity. Thus tr J = the sum of the characteristic roots of T; since J is of the form ATA⁻¹, tr J = tr T, and hence the lemma.
Note: If T is nilpotent, then all its characteristic roots are 0, so by the above lemma tr T = 0. But if T is nilpotent, then so are T², T³, . . .; thus tr T^i = 0 for all i ≥ 1.
Lemma
If F is a field of characteristic 0, and if T ∈ A_F(V) is such that tr T^i = 0 for all i ≥ 1, then T is nilpotent.
Proof: Since T ∈ A_F(V), T satisfies some minimal polynomial p(x) = x^m + α1x^(m−1) + . . . + αm; that is,
T^m + α1T^(m−1) + . . . + αm−1T + αm = 0.
Taking the trace of both sides,
tr T^m + α1 tr T^(m−1) + . . . + αm−1 tr T + tr(αm) = 0.
However, by assumption, tr T^i = 0 for i ≥ 1; thus we get tr(αm) = 0, i.e., αm tr 1 = 0, i.e., nαm = 0, where dim V = n. Since the characteristic of F is 0, it follows that αm = 0. Since the constant term of the minimal polynomial of T is 0, T is singular, and so 0 is a characteristic root of T.
Let us consider T as a matrix in Fn and therefore also as a matrix in K_n, where K is an extension of F which contains all the characteristic roots of T. In K_n we can bring T to triangular form, and since 0 is a characteristic root of T, we can actually bring it to the form
0   0   . . .  0
β2  α2  . . .  0
.   .
.   .
βn  *   . . .  αn

that is, to the block form

( 0  0  )
( β  T2 )

where

T2 = ( α2  0  . . . )
     ( .           )
     ( *  . . . αn )

is an (n−1) × (n−1) matrix (the *'s indicate parts in whose explicit entries we are not interested). Now

T^k = ( 0  0    )
      ( *  T2^k )

hence tr T^k = tr T2^k. But tr T^k = 0, so tr T2^k = 0 for all k ≥ 1. Repeating the argument on T2 (i.e., by induction on n), all the characteristic roots of T2, and hence all the characteristic roots of T, are 0; therefore T is nilpotent.
Jacobson's Lemma
If F is of characteristic 0 and if S and T in A_F(V) are such that ST − TS commutes with S, then ST − TS is nilpotent.
Proof: Let C = ST − TS, so that CS = SC. For any k ≥ 1 we compute C^k:
C^k = C^(k−1)(ST − TS) = C^(k−1)ST − C^(k−1)TS = S(C^(k−1)T) − (C^(k−1)T)S,
using the fact that C^(k−1) commutes with S. Writing B = C^(k−1)T, we have C^k = SB − BS, hence
tr(C^k) = tr(SB) − tr(BS) = 0 for every k ≥ 1.
Since F has characteristic 0, the previous lemma shows that C = ST − TS is nilpotent.
13.3 TRANSPOSE OF A MATRIX
Definition
If A = (a_ij) ∈ Fn, then the transpose of A, written A', is the matrix A' = (a'_ij), where a'_ij = a_ji for each i and j.
Lemma
For all A, B ∈ Fn,
(1) (A')' = A
(2) (A + B)' = A' + B'
(3) (AB)' = B'A'.
Proof:
(1) Let A = (a_ij) and A' = (a'_ij), where a'_ij = a_ji. Then (A')' = (a''_ij), where a''_ij = a'_ji = a_ij; hence (A')' = A.
(2) Let A = (a_ij) and B = (b_ij). Then A + B = (a_ij + b_ij), so the (i, j) entry of (A + B)' is a_ji + b_ji, which is precisely the (i, j) entry of A' + B'.
(3) Let A = (a_ij) and B = (b_ij), and write AB = (c_ij), where c_ij = Σ_{k=1}^{n} a_ik b_kj. The (i, j) entry of (AB)' is c_ji = Σ_{k=1}^{n} a_jk b_ki. On the other hand, the (i, j) entry of B'A' is
Σ_{k=1}^{n} b'_ik a'_kj = Σ_{k=1}^{n} b_ki a_jk = c_ji.
Hence (AB)' = B'A'.
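Part (3), the reversal of order under transposition, is worth a quick numeric check; the matrices below are arbitrary illustrative choices:

```python
# Spot-check: (AB)' = B'A' and (A')' = A for 2x2 matrices.

def transpose(A):
    n = len(A)
    return [[A[j][i] for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert transpose(mat_mul(A, B)) == mat_mul(transpose(B), transpose(A))
assert transpose(transpose(A)) == A
print("(AB)' = B'A' verified")
```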
LESSON 14
HERMITIAN, UNITARY AND NORMAL TRANSFORMATIONS
CONTENTS:
14.0 Aims and Objectives
14.1 Introduction
14.2 Hermitian and Unitary Transformations
14.3 Normal Transformations
14.4 Let us sum up
14.5 Lesson-end Activities

14.0 AIMS AND OBJECTIVES
After going through this lesson, you will be able to:
(1) Understand Hermitian and unitary transformations using inner product space notions and results.
(2) Define normal transformations.
(3) Prove the related properties of normal transformations.
14.1 INTRODUCTION
We introduce special types of transformations on an inner product space and derive some properties which are useful for further studies.
14.2 HERMITIAN AND UNITARY TRANSFORMATIONS
Definition
T ∈ A(V) is said to be unitary if (uT, vT) = (u, v) for all u, v ∈ V.
A unitary transformation preserves length, since (vT, vT) = (v, v) for every v ∈ V. The following lemma answers the converse question, namely whether a T ∈ A(V) which preserves length must be unitary. The answer is yes.
Lemma
If (vT, vT) = (v, v) for all v ∈ V, then T is unitary.
The proof is in the spirit of the following result:
If T ∈ A(V) is such that (vT, v) = 0 for all v ∈ V, then T = 0.
Proof of the result: Since (vT, v) = 0 for all v ∈ V, for u, w ∈ V we have ((u + w)T, u + w) = 0. Expanding the inner product,
(uT, u + w) + (wT, u + w) = 0
(uT, u) + (uT, w) + (wT, u) + (wT, w) = 0
(uT, w) + (wT, u) = 0 . . . (1)
(since (uT, u) = 0 = (wT, w)).
Equation (1) holds for arbitrary u, w ∈ V, so it still holds if we replace w by iw:
(uT, iw) + ((iw)T, u) = 0
−i(uT, w) + i(wT, u) = 0,
and cancelling i,
−(uT, w) + (wT, u) = 0 . . . (2)
Adding (1) and (2) we get (wT, u) = 0 for all w, u ∈ V. In particular, taking u = wT,
(wT, wT) = 0, so wT = 0 for all w ∈ V;
hence T = 0.
Proof of the Lemma:
Since (vT, vT) = (v, v) for all v ∈ V, for u, v ∈ V we have
((u + v)T, (u + v)T) = (u + v, u + v)
(uT + vT, uT + vT) = (u + v, u + v)     (since T ∈ A(V))
(uT, uT) + (uT, vT) + (vT, uT) + (vT, vT) = (u, u) + (u, v) + (v, u) + (v, v)
(uT, vT) + (vT, uT) = (u, v) + (v, u) . . . (1)
(since (uT, uT) = (u, u) and (vT, vT) = (v, v)).
Since (1) is true for all u, v ∈ V, replacing v by iv in (1) we get
(uT, (iv)T) + ((iv)T, uT) = (u, iv) + (iv, u)
−i(uT, vT) + i(vT, uT) = −i(u, v) + i(v, u).
Cancelling i, we get
−(uT, vT) + (vT, uT) = −(u, v) + (v, u) . . . (2)
Adding (1) and (2),
(vT, uT) = (v, u) for all u, v ∈ V.
Thus T is unitary.
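The defining property can be verified numerically for a concrete unitary matrix; the rotation-like matrix U and the complex test vectors below are arbitrary illustrative choices, with the inner product (u, v) = Σ u_i · conj(v_i) as in the text:

```python
# A unitary matrix preserves the inner product, and hence length.
import math

s = 1 / math.sqrt(2)
U = [[s, s], [-s, s]]            # orthonormal rows, hence unitary

def apply(v, M):                 # row vector times matrix, matching the vT notation
    return [sum(v[i] * M[i][j] for i in range(2)) for j in range(2)]

def inner(u, v):
    return sum(a * b.conjugate() for a, b in zip(u, v))

u = [1 + 2j, 3 - 1j]
v = [2 - 1j, 1j]
assert abs(inner(apply(u, U), apply(v, U)) - inner(u, v)) < 1e-12
assert abs(inner(apply(u, U), apply(u, U)) - inner(u, u)) < 1e-12
print("U preserves inner products and lengths")
```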
Theorem
The linear transformation T on V is unitary if and only if it takes an orthonormal basis
of V into an orthonormal basis.
Proof: Let {v1, v2, . . . , vn} be an orthonormal basis of V, so that
(vi, vj) = 0 if i ≠ j, and (vi, vi) = 1.
Suppose first that T is unitary. Then (viT, vjT) = (vi, vj), so the vectors v1T, v2T, . . . , vnT are again orthonormal, and being n in number they form an orthonormal basis of V.
Conversely, suppose that {v1T, v2T, . . . , vnT} is an orthonormal basis of V. Let
u = Σ_{i=1}^{n} ai vi = a1v1 + . . . + anvn and w = Σ_{i=1}^{n} bi vi = b1v1 + . . . + bnvn.
By orthonormality,
(u, w) = Σ_{i=1}^{n} ai b̄i.
Now uT = Σ_{i=1}^{n} ai(viT) and wT = Σ_{i=1}^{n} bi(viT), and since the viT are orthonormal,
(uT, wT) = Σ_{i=1}^{n} ai b̄i = (u, w);
i.e., T is unitary. Hence the proof.
The above theorem states that a change of basis from one orthonormal basis to
another orthonormal basis is accomplished by a unitary linear transformation.
Lemma
If T ∈ A(V), then given any v ∈ V there exists an element w ∈ V, depending on v and T, such that (uT, v) = (u, w) for all u ∈ V. This element w is uniquely determined by v and T.
Proof: Let {u1, u2, . . . , un} be an orthonormal basis of V and let v ∈ V. We define
w = Σ_{i=1}^{n} (v, uiT) ui.
Then (uj, w) = conj((v, ujT)) = (ujT, v) for each j, and since (uT, v) and (u, w) are both linear in u, (uT, v) = (u, w) for all u ∈ V. If w' also satisfies (uT, v) = (u, w') for all u, then (u, w − w') = 0 for all u; taking u = w − w' gives w = w'.
Definition
If T ∈ A(V), the Hermitian adjoint of T, written T*, is defined by (uT, v) = (u, vT*) for all u, v ∈ V. By the lemma, vT* is well defined for every v ∈ V.
T* is a linear transformation on V. For v, w ∈ V and all u ∈ V,
(u, (v + w)T*) = (uT, v + w)
= (uT, v) + (uT, w)
= (u, vT*) + (u, wT*)
= (u, vT* + wT*),
so (v + w)T* = vT* + wT*; and for λ ∈ F,
(u, (λv)T*) = (uT, λv) = λ̄(uT, v) = λ̄(u, vT*) = (u, λ(vT*)),
so (λv)T* = λ(vT*).
Lemma
If S, T ∈ A(V), then
(1) (T*)* = T
(2) (S + T)* = S* + T*
(3) (λS)* = λ̄S*
(4) (ST)* = T*S*.
Proof:
(1) For all u, v ∈ V,
(u, v(T*)*) = (uT*, v) = conj((v, uT*)) = conj((vT, u)) = (u, vT),
which implies v(T*)* = vT for all v ∈ V; hence (T*)* = T.
(2) For all u, v ∈ V,
(u, v(S + T)*) = (u(S + T), v)
= (uS + uT, v)
= (uS, v) + (uT, v)
= (u, vS*) + (u, vT*)
= (u, v(S* + T*)),
so (S + T)* = S* + T*.
(3) (u, v(λS)*) = (u(λS), v) = λ(uS, v) = λ(u, vS*) = (u, v(λ̄S*)), so (λS)* = λ̄S*.
(4) (u, v(ST)*) = (u(ST), v) = ((uS)T, v) = (uS, vT*) = (u, vT*S*), so (ST)* = T*S*.
Lemma
T ∈ A(V) is unitary if and only if TT* = 1.
Proof: Suppose T is unitary. Then for all u, v ∈ V,
(u, vTT*) = (uT, vT) = (u, v),
so that vTT* = v for every v, i.e., TT* = 1.
Conversely, assume that TT* = 1. Then
(uT, vT) = (u, vTT*) = (u, v),
so T is unitary.
Theorem
If {v1, v2, . . . , vn} is an orthonormal basis of V and if the matrix of T ∈ A(V) in this basis is (αij), then the matrix of T* in this basis is (βij), where βij = ᾱji.
Proof: Since the matrices of T and T* in the basis {v1, . . . , vn} are (αij) and (βij), we have
viT = Σ_{j=1}^{n} αij vj and viT* = Σ_{j=1}^{n} βij vj.
Now
(viT*, vj) = (βi1v1 + βi2v2 + . . . + βinvn, vj) = βij(vj, vj) = βij,
since (vk, vj) = 0 for k ≠ j and (vj, vj) = 1. Thus
βij = (viT*, vj) = (vi, vjT) = (vi, Σ_{k=1}^{n} αjk vk) = ᾱji,
as required.
Theorem
If T ∈ A(V) is Hermitian, then all its characteristic roots are real.
Proof: Since T is Hermitian, T = T*. Let λ be a characteristic root of T; then there is a v ≠ 0 such that vT = λv. Now
(vT, v) = (λv, v) = λ(v, v) . . . (1)
Since T = T*,
(vT, v) = (vT*, v) = (v, vT) = (v, λv) = λ̄(v, v) . . . (2)
From (1) and (2), λ(v, v) = λ̄(v, v); since (v, v) ≠ 0, λ = λ̄, so λ is real. Hence the proof.
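For a 2 × 2 Hermitian matrix the realness of the roots can be seen from the quadratic formula: for [[a, b], [conj(b), d]] with a, d real, the characteristic roots are ((a + d) ± sqrt((a − d)² + 4|b|²))/2, and the discriminant is nonnegative. A sketch with arbitrarily chosen entries:

```python
# Characteristic roots of a 2x2 Hermitian matrix are real because the
# discriminant (a - d)^2 + 4|b|^2 is >= 0.
import math

a, d = 2.0, -1.0
b = 1.5 + 2.0j
disc = (a - d) ** 2 + 4 * abs(b) ** 2
assert disc >= 0
roots = [((a + d) + math.sqrt(disc)) / 2, ((a + d) - math.sqrt(disc)) / 2]

# each real root r satisfies det(T - rI) = (a - r)(d - r) - b*conj(b) = 0
for r in roots:
    assert abs((a - r) * (d - r) - b * b.conjugate()) < 1e-9
print("characteristic roots are real:", roots)
```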
Lemma
If S ∈ A(V) and vSS* = 0, then vS = 0.
Proof: Since vSS* = 0, (vSS*, v) = 0. Thus
0 = (vSS*, v) = (vS, v(S*)*) = (vS, vS)     (since (S*)* = S),
which implies vS = 0.
Corollary: If T is Hermitian and vT^k = 0 for some k ≥ 1, then vT = 0.
Proof: First let us show that if vT^(2^m) = 0, then vT = 0. Let S = T^(2^(m−1)). Since T is Hermitian, S* = S, and
SS* = T^(2^(m−1)) · T^(2^(m−1)) = T^(2^m),
so vSS* = vT^(2^m) = 0. By the lemma, 0 = vS = vT^(2^(m−1)). Continuing this way we get vT = 0.
Now if vT^k = 0, choose m with 2^m ≥ k; then vT^(2^m) = (vT^k)T^(2^m − k) = 0, and so vT = 0 by the above.
Definition
N ∈ A(V) is said to be normal if NN* = N*N.
Lemma
If N is normal and vN = 0, then vN* = 0.
Proof:
(vN*, vN*) = (vN*N, v) = (vNN*, v) = (vN, vN) = 0,
using the normality of N; therefore vN* = 0.
Corollary 1: If N is normal and vN = λv, then vN* = λ̄v.
Proof:
(N − λ)(N − λ)* = (N − λ)(N* − λ̄) = NN* − λ̄N − λN* + λλ̄
= N*N − λ̄N − λN* + λλ̄ = (N* − λ̄)(N − λ) = (N − λ)*(N − λ).
Hence N − λ is normal. Moreover, v(N − λ) = 0, so by the lemma v(N − λ)* = 0, i.e., v(N* − λ̄) = 0. Hence vN* = λ̄v.
Corollary 2: If T is unitary and if λ is a characteristic root of T, then |λ| = 1.
Proof: Let vT = λv with v ≠ 0. Since T is unitary it is normal, so vT* = λ̄v by Corollary 1. Thus v = vTT* = λλ̄v, giving λλ̄ = 1, i.e., |λ| = 1.
Theorem
If N is a normal linear transformation on V, then there exists an orthonormal basis, consisting of characteristic vectors of N, in which the matrix of N is diagonal. Equivalently, if N is a normal matrix, then there exists a unitary matrix U such that UNU⁻¹ (= UNU*) is diagonal.
Proof: Let N be normal and let λ1, . . . , λk be the distinct characteristic roots of N. By the corollary of the theorem from the previous lesson, we can decompose V = V1 ⊕ V2 ⊕ . . . ⊕ Vk, where every vi ∈ Vi is annihilated by (N − λi)^(ni). By the corollary to the lemma we have viN = λivi. Thus Vi consists only of characteristic vectors of N corresponding to the characteristic root λi. The inner product of V induces an inner product on Vi, and we can find an orthonormal basis of Vi relative to this product.
By the lemma, elements lying in distinct Vi's are orthogonal. Thus putting together the orthonormal bases of the Vi's provides us with an orthonormal basis of V. This basis consists of characteristic vectors of N, so in this basis the matrix of N is diagonal.
Since every unitary and every Hermitian transformation is normal, we have the following corollaries.
Corollary 1: If T is a unitary transformation, then there is an orthonormal basis in which the matrix of T is diagonal.
Corollary 2: If T is a Hermitian linear transformation, then there exists an orthonormal basis in which the matrix of T is diagonal.
The following lemma finds the conditions under which a normal transformation is unitary or Hermitian.
Lemma
The normal transformation N is
(1) Hermitian if and only if its characteristic roots are real;
(2) unitary if and only if its characteristic roots are all of absolute value 1.
Proof:
(1) If N is Hermitian, then it is normal and all its characteristic roots are real, by the earlier theorem. Suppose, conversely, that N is normal and all its characteristic roots are real. In an orthonormal basis of characteristic vectors the matrix of N is diagonal with real entries λ1, . . . , λn; the matrix of N* in the same basis is its conjugate transpose, which is the same diagonal matrix. Hence N = N*, and N is Hermitian.
(2) If N is unitary, then by Corollary 2 above every characteristic root λ of N satisfies |λ| = 1. Conversely, if N is normal with all characteristic roots of absolute value 1, then in an orthonormal basis of characteristic vectors N is diagonal with entries λi and N* is diagonal with entries λ̄i, so NN* is diagonal with entries λiλ̄i = 1; hence NN* = 1 and N is unitary.
Lemma
The Hermitian linear transformation T is nonnegative if and only if all its characteristic roots are nonnegative.
Proof: Suppose T ≥ 0. If λ is a characteristic root of T, then vT = λv for some v ≠ 0. We have to prove that λ is nonnegative:
0 ≤ (vT, v) = (λv, v) = λ(v, v); since (v, v) > 0, we deduce that λ ≥ 0.
Conversely, suppose that T is Hermitian with nonnegative characteristic roots. Then there is an orthonormal basis {v1, . . . , vn} consisting of characteristic vectors of T, with viT = λivi, where λi ≥ 0, for each i. Given v ∈ V, write v = Σ αivi. Then
vT = Σ αi(viT) = Σ αiλivi,
and so, by orthonormality,
(vT, v) = (Σ λiαivi, Σ αivi) = Σ λiαiᾱi.
Since λi ≥ 0 and αiᾱi ≥ 0, we get (vT, v) ≥ 0; hence T ≥ 0.
Lemma
T ≥ 0 if and only if T = AA* for some A.
Proof: We first show that AA* ≥ 0. Given v ∈ V,
(vAA*, v) = (vA, v(A*)*) = (vA, vA) ≥ 0;
hence AA* ≥ 0, so if T = AA*, then T ≥ 0.
Conversely, suppose that T ≥ 0. We claim T = AA* for some A. Since T is Hermitian, there is an orthonormal basis in which the matrix of T is diagonal; i.e., there exists a unitary matrix U such that
UTU* = diag(λ1, λ2, . . . , λn),
where the λi, being the characteristic roots of T, are all nonnegative. Let
S = U* diag(√λ1, √λ2, . . . , √λn) U.
Then S is Hermitian, and
S² = U* diag(√λ1, . . . , √λn) UU* diag(√λ1, . . . , √λn) U = U* diag(λ1, . . . , λn) U = U*(UTU*)U = T.
Thus T = S² = SS*, since S is Hermitian; taking A = S proves the lemma.
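The first half of the lemma, (vAA*, v) = (vA, vA) ≥ 0, is easy to spot-check; the complex matrix A and the sample vectors below are arbitrary illustrative choices, with the inner product (u, v) = Σ u_i · conj(v_i):

```python
# (vAA*, v) equals (vA, vA), a sum of squared absolute values, hence >= 0.

def conj_transpose(A):
    n = len(A)
    return [[A[j][i].conjugate() for j in range(n)] for i in range(n)]

def apply(v, M):                 # row vector times matrix
    n = len(M)
    return [sum(v[i] * M[i][j] for i in range(n)) for j in range(n)]

def inner(u, v):
    return sum(a * b.conjugate() for a, b in zip(u, v))

A = [[1 + 1j, 2 - 1j], [0.5j, 3.0 + 0j]]
for v in ([1 + 0j, 2 - 1j], [0.5 - 2j, 1j], [0j, 0j]):
    lhs = inner(apply(apply(v, A), conj_transpose(A)), v)
    vA = apply(v, A)
    assert abs(lhs - inner(vA, vA)) < 1e-9
    assert lhs.real >= -1e-12 and abs(lhs.imag) < 1e-9
print("(vAA*, v) = (vA, vA) >= 0 for the sample vectors")
```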