Section 10.3 by: Yusuf Goren, Miguel-Angel Manrique and Rory Laster
Section 10.4 by: Elizabeth Pannell, Toby Stockley, and Wei Yuan
Exercise 10.3.2. Let R be a commutative ring with identity. For all positive integers n and m, Rⁿ ≅ Rᵐ if and only if n = m.
Proof. Let φ : Rⁿ → Rᵐ be an isomorphism of R-modules and let I ⊴ R be a maximal ideal. Then the map φ̄ : Rⁿ → Rᵐ/IRᵐ given by φ̄(α) = φ(α) + IRᵐ is a surjective morphism of R-modules. Moreover, since φ is a bijection,
ker φ̄ = {α ∈ Rⁿ | φ̄(α) = 0} = {α ∈ Rⁿ | φ(α) ∈ IRᵐ} = φ⁻¹(IRᵐ) = IRⁿ.
Therefore by the first isomorphism theorem Rⁿ/IRⁿ ≅ Rᵐ/IRᵐ. We already showed that Rᵐ/IRᵐ ≅ (R/I)ᵐ. Since I is maximal, F := R/I is a field, and we have an isomorphism of F-vector spaces (R/I)ⁿ ≅ (R/I)ᵐ. Hence n = m, since the dimension of a vector space over a field is well defined.
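To see the dimension count in one concrete case, here is a minimal sketch for R = Z and the maximal ideal I = 2Z (the helper name `quotient_size` is ours, not from the text): the quotient Zⁿ/2Zⁿ has exactly 2ⁿ cosets, so an isomorphism Zⁿ ≅ Zᵐ would force 2ⁿ = 2ᵐ and hence n = m.

```python
from itertools import product

# Concrete dimension count for R = Z and the maximal ideal I = 2Z: the cosets
# of 2Z^n in Z^n are represented by vectors with entries in {0, 1}, so
# |Z^n / 2Z^n| = 2^n.  An isomorphism Z^n ≅ Z^m would force 2^n = 2^m, i.e. n = m.

def quotient_size(n):
    """Number of cosets of 2Z^n in Z^n, enumerated by {0,1}-representatives."""
    return len(list(product([0, 1], repeat=n)))

print(quotient_size(2), quotient_size(3))  # 4 8
```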
Exercise 10.3.6. Let R be a ring with identity and let M be a left R-module. If M is a finitely
generated R-module that is generated by n elements, then every quotient of M may be generated by n
or fewer elements. In particular, quotients of cyclic modules are cyclic.
Proof. Assume that M is generated by A = {a1, . . . , an} and let N ⊴ M.¹ Then M/N is generated by Ā = {a1 + N, . . . , an + N}, since for any m ∈ M we have m = r1a1 + · · · + rnan for some r1, . . . , rn ∈ R, and
m + N = r1a1 + · · · + rnan + N = (r1a1 + N) + · · · + (rnan + N) = r1(a1 + N) + · · · + rn(an + N).
Exercise 10.3.7. Let R be a ring with identity and let N be a left R-submodule of M . If both M/N
and N are finitely generated, then M is also.
Proof. Assume that N ⊴ M with generators b1, . . . , bk and that M/N is generated by a1 + N, . . . , an + N. Let m ∈ M be arbitrary. Then there exist r1, . . . , rn ∈ R such that
m + N = r1 (a1 + N ) + · · · + rn (an + N ) = r1 a1 + · · · + rn an + N =⇒ m − (r1 a1 + · · · + rn an ) ∈ N.
Hence there exist rn+1 , . . . , rn+k ∈ R such that
m − (r1 a1 + · · · + rn an ) = rn+1 b1 + · · · + rn+k bk =⇒ m = r1 a1 + · · · + rn an + rn+1 b1 + · · · + rn+k bk .
Therefore M is generated by a1 , . . . , an , b1 , . . . , bk .
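As a small sanity check on this argument, here is a concrete instance (the helper name `generated_by_a1_b1` is ours, for illustration only): take M = Z², N = Z × {0} generated by b1 = (1, 0), so that M/N ≅ Z is generated by the image of a1 = (0, 1). The proof then says {a1, b1} generates M, and indeed (x, y) = y·a1 + x·b1.

```python
# Concrete instance: M = Z^2, N = Z x {0} generated by b1 = (1, 0), and
# M/N ≅ Z generated by the image of a1 = (0, 1).  The proof produces the
# generating set {a1, b1} for M, and indeed (x, y) = y*a1 + x*b1.

def generated_by_a1_b1(x, y):
    """Check that (x, y) is an integer combination of a1 = (0,1) and b1 = (1,0)."""
    a1, b1 = (0, 1), (1, 0)
    r1, r2 = y, x  # coefficients read off as in the proof
    return (r1 * a1[0] + r2 * b1[0], r1 * a1[1] + r2 * b1[1]) == (x, y)

print(all(generated_by_a1_b1(x, y) for x in range(-3, 4) for y in range(-3, 4)))  # True
```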
Definition. Let R be a ring. An R-module M is irreducible if M ≠ 0 and if 0 and M are the only R-submodules of M.
Exercise 10.3.9. Let R be a ring with identity and let M be a left R-module. M is irreducible if and
only if M is a nonzero cyclic module such that any nonzero element of M is a generator.
Proof. Assume M ≠ 0 is irreducible and let 0 ≠ m ∈ M be arbitrary. Then the submodule generated by m is a nonzero submodule of M, hence it must be equal to M. Therefore M is cyclic and generated by any nonzero element.
Conversely, assume that M is a cyclic module generated by any nonzero element, and suppose N ⊴ M is a nonzero submodule. Since N is nonzero, it contains a nonzero element, say n. The submodule generated by n is contained in N, and by assumption Rn = M; hence M ⊆ N, which implies that N = M. So the only submodules of M are 0 and M, i.e. M is irreducible.
¹ I tend to denote subgroups and subrings by "≤"; on the other hand, normal subgroups, ideals, and submodules are denoted by "⊴". The idea is to distinguish the substructures that you can quotient by from the ones that you cannot.
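The characterization in Exercise 10.3.9 can be checked concretely for R = Z (the helper name `generates` is ours): Z/5Z is irreducible since every nonzero element generates it, while Z/6Z is not, since 2 generates only the proper nonzero submodule {0, 2, 4}.

```python
# Checking Exercise 10.3.9 for R = Z: Z/5Z is irreducible (every nonzero
# element generates), while Z/6Z is not (2 generates the proper submodule
# {0, 2, 4}).

def generates(a, n):
    """Does a generate all of Z/nZ as a Z-module?"""
    return {(a * r) % n for r in range(n)} == set(range(n))

print(all(generates(a, 5) for a in range(1, 5)))  # True
print(generates(2, 6))                            # False
```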
Exercise 10.3.11. Let R be a ring with identity and let M1 and M2 be left R-modules. If M1 and M2 are irreducible, then any nonzero R-module homomorphism from M1 to M2 is an isomorphism. Moreover, for every irreducible left R-module M, EndR(M) is a division ring.
Proof. Let φ : M1 → M2 be a nonzero homomorphism. Then ker φ is a submodule of M1 with ker φ ≠ M1, so ker φ = 0 by irreducibility and φ is injective. Likewise im φ is a nonzero submodule of M2, so im φ = M2 and φ is surjective. Hence φ is an isomorphism. In particular, every nonzero element of EndR(M) is invertible, so EndR(M) is a division ring (Schur's lemma).
(Recall from the previous exercise, 10.3.16: the map
f : M → M/A1M × · · · × M/AkM, m ↦ (m + A1M, . . . , m + AkM),
is an R-module homomorphism, and m̃ ∈ ker f exactly when
f(m̃) = (0 + A1M, 0 + A2M, . . . , 0 + AkM),
i.e. ker f = A1M ∩ · · · ∩ AkM.)
Exercise 10.3.17. In the notation of the previous exercise, assume further that R is commutative and that the ideals A1, A2, . . . , Ak are pairwise comaximal. Then
M/(A1 · · · Ak)M ≅ M/A1M × · · · × M/AkM.
Proof. We will first show that for any two comaximal ideals Ai and Aj, we have AiM ∩ AjM = AiAjM. The inclusion AiAjM ⊂ AiM ∩ AjM is clear. For the reverse, suppose x ∈ AiM ∩ AjM, and write ai + aj = 1 with ai ∈ Ai and aj ∈ Aj. Then
x = 1 · x = (ai + aj)x = aix + ajx.
Since x ∈ AjM we have aix ∈ AiAjM, and since x ∈ AiM we have ajx ∈ AjAiM = AiAjM (using commutativity); hence x ∈ AiAjM, as desired.
To extend this to k ideals it suffices to note that A1 · · · Ak−1 and Ak are comaximal: for each i < k write xi + yi = 1 with xi ∈ Ai and yi ∈ Ak, and expand the product 1 = (x1 + y1) · · · (xk−1 + yk−1). The sum of terms involving y's lies in Ak, and x1 · · · xk−1 ∈ A1 · · · Ak−1, whence 1 ∈ A1 · · · Ak−1 + Ak, as desired. By induction, A1M ∩ · · · ∩ AkM = (A1 · · · Ak)M.
Now we will use the first isomorphism theorem for modules to complete the problem. To do so it remains to show that the map f is surjective. Let (m1 + A1M, . . . , mk + AkM) ∈ M/A1M × · · · × M/AkM be arbitrary. Since the A1, . . . , Ak are pairwise comaximal, for i, j ∈ {1, . . . , k} with i ≠ j there exist aij ∈ Ai and bij ∈ Aj so that aij + bij = 1. For l ∈ {1, . . . , k}, define cl = a1l a2l · · · âll · · · akl (the factor all omitted). Also define x = c1m1 + · · · + ckmk. We have
f(x) = ((c1m1 + · · · + ckmk) + A1M, . . . , (c1m1 + · · · + ckmk) + AkM)
= ((c1m1 + A1M) + · · · + (ckmk + A1M), . . . , (c1m1 + AkM) + · · · + (ckmk + AkM)).
Notice cj = (a1j a2j · · · ai−1,j ai+1,j · · · âjj · · · akj)aij. Since aij ∈ Ai, we have cj ∈ Ai whenever i ≠ j, so cjmj + AiM = 0 + AiM for j ≠ i. Also, for n ∈ {1, . . . , k}, each factor ain (i ≠ n) of cn satisfies ain = 1 − bin with bin ∈ An, so cn ≡ 1 (mod An) and hence cnmn + AnM = mn + AnM. Therefore f(x) = (m1 + A1M, . . . , mk + AkM), and f is surjective. By the first isomorphism theorem and the equality ker f = A1M ∩ · · · ∩ AkM = (A1 · · · Ak)M, we conclude M/(A1 · · · Ak)M ≅ M/A1M × · · · × M/AkM.
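The construction of the element x can be made concrete for R = M = Z with the comaximal ideals A1 = (2) and A2 = (3) (the helper name `crt_lift` is ours): writing a12 + b12 = 1 with a12 = 4 ∈ A1, b12 = −3 ∈ A2, and a21 + b21 = 1 with a21 = 3 ∈ A2, b21 = −2 ∈ A1, we get c1 = a21 = 3 and c2 = a12 = 4.

```python
# The surjectivity construction for R = M = Z, A1 = (2), A2 = (3):
# c1 = a21 = 3 (a product over i != 1 of a_{i1} in A_i) and c2 = a12 = 4,
# so x = c1*m1 + c2*m2 hits (m1 mod 2, m2 mod 3), as in the proof.

def crt_lift(m1, m2):
    """Return x = c1*m1 + c2*m2 with x ≡ m1 (mod 2) and x ≡ m2 (mod 3)."""
    c1, c2 = 3, 4
    return c1 * m1 + c2 * m2

x = crt_lift(1, 2)
print(x % 2, x % 3)  # 1 2
```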
Supplemental Problem. (Let R be a commutative ring with 1. Show that for m < n there is no surjective R-module homomorphism Rᵐ → Rⁿ, and for m > n there is no injective R-module homomorphism Rᵐ → Rⁿ.)
Both maps are impossible. The easier part is that if m < n, then it is impossible to have a surjection Rᵐ → Rⁿ. To see this, we can choose any maximal ideal I ⊂ R, and note that a surjection Rᵐ → Rⁿ would yield, upon reduction modulo I, a surjection (R/I)ᵐ → (R/I)ⁿ of vector spaces over the field R/I. This is impossible by basic linear algebra (for instance, by the rank-nullity theorem).
Unfortunately, this technique of reducing to a field does not work directly for the other part of the
problem, since modding out by I does not preserve injectivity (in the more advanced language from a
future section, this is saying that tensoring with R/I over R is right exact, but not left exact).
We therefore give a direct proof. Suppose we have an R-module injection Rᵐ → Rⁿ with m > n. This is equivalent to the existence of m elements of Rⁿ that are R-linearly independent. We will prove that given n + 1 elements of Rⁿ, there must exist a non-trivial dependence between them.
Suppose for a contradiction that we have row vectors vi ∈ Rⁿ, for i = 1, . . . , n + 1, that are linearly independent over R. Let M be the (n + 1) × n matrix with entries in R whose rows are the vi. We will prove the following statement for 0 ≤ t ≤ n − 1, by induction on t:
Claim: every (n − t) × (n − t) square submatrix of M (obtained by deleting t + 1 of the n + 1 rows and t of the n columns) has determinant 0.
The claim for t = n − 1 says that every entry of M is zero, i.e. M = 0, which is clearly a contradiction to the linear independence of the rows of M, and completes the proof. It remains to show the claim.
Base case of claim (t = 0): We must show that if we remove one of the rows vi, the determinant of the n × n matrix that remains is equal to 0. Let us call this determinant di. Then we claim that if we let x := Σ_{i=1}^{n+1} (−1)ⁱ di vi ∈ Rⁿ, then x = 0. Indeed, one can check that the jth component of x, namely Σ_{i=1}^{n+1} (−1)ⁱ di vij, is equal to ± the determinant of the (n + 1) × (n + 1) matrix obtained by concatenating another copy of the jth column to M, and expanding by minors along this last column. Since the determinant of a matrix with a replicated column is zero, we obtain that x = 0 as claimed. Now, the R-linear independence of the vi implies that di = 0 for all i. This gives the base case t = 0.
Inductive step of claim: Let t > 0, and delete t rows of the matrix M, yielding an (n + 1 − t) × n matrix N. Let us delete t columns of N, and call the resulting (n + 1 − t) × (n − t) matrix L. For each i = 1, . . . , n + 1 − t, let di denote the determinant of the matrix obtained from L by eliminating the ith row of L. Our goal is to show that each di = 0. We claim that Σ_{i=1}^{n+1−t} (−1)ⁱ di wi = 0, where we have written wi for the ith row of N. To see this, note that the jth component of this sum is ± the determinant of the square matrix obtained by concatenating a copy of the jth column of N to the matrix L. Each of these (n + 1 − t) × (n + 1 − t) matrices either has a duplicated column (and hence has determinant zero) or has determinant zero by the inductive hypothesis (the claim for t − 1). Therefore Σ_{i=1}^{n+1−t} (−1)ⁱ di wi = 0 as claimed, implying that di = 0 for all i since the wi are R-linearly independent (as these are a subset of the original vi). This completes the inductive step, hence the proof of the claim and of the problem.
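The alternating-sum identity driving the base case can be checked numerically for n = 2 (this is just the cofactor expansion of a 3 × 3 matrix with a repeated column, so it holds for any choice of rows):

```python
# Base case (t = 0) for n = 2: three rows v_1, v_2, v_3 in Z^2.  Let d_i be
# the determinant of the 2x2 matrix left after deleting row i.  Then
# sum_i (-1)^i d_i v_i = 0 (cofactor expansion along a repeated column).

def det2(a, b):
    return a[0] * b[1] - a[1] * b[0]

v = [(2, 5), (7, 1), (4, 9)]  # arbitrary rows
d = [det2(v[1], v[2]), det2(v[0], v[2]), det2(v[0], v[1])]  # delete row i
x = [sum((-1) ** (i + 1) * d[i] * v[i][j] for i in range(3)) for j in range(2)]
print(x)  # [0, 0]
```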
Exercise 10.4.3.
Clearly C is both a left and right R-module, so by the construction of the tensor product in Section 10.4, C ⊗R C is an R-module. Similarly, C ⊗C C is a C-module, and since R ⊂ C, C ⊗C C is an R-module as well.
Now, since {1, i} is a basis for C over R, {1 ⊗ 1, 1 ⊗ i, i ⊗ 1, i ⊗ i} is a basis for C ⊗R C over R. Also, C ⊗C C ≅ C (via a ⊗ b ↦ ab), so C ⊗C C has dimension 2 over R.
So C ⊗R C has dimension 4 while C ⊗C C has dimension 2. Therefore, C ⊗R C and C ⊗C C are not
isomorphic as R-modules.
Exercise 10.4.7.
Suppose x ∈ Q ⊗R N. Then x can be written as a finite sum of simple tensors, x = Σ_{i=1}^{k} (ai/bi) ⊗ ni. Let d = lcm{bi} and di = d/bi. Then
x = Σ_{i=1}^{k} (ai/bi) ⊗ ni = Σ_{i=1}^{k} (ai di/d) ⊗ ni = Σ_{i=1}^{k} (1/d) ⊗ (ai di ni) = (1/d) ⊗ (Σ_{i=1}^{k} ai di ni),
with Σ_{i=1}^{k} ai di ni ∈ N since ai di ∈ R and N is a left R-module.
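This manipulation can be made concrete for R = Z and N = Z, where Q ⊗Z Z ≅ Q identifies q ⊗ n with q·n: the sum (1/2) ⊗ n1 + (1/3) ⊗ n2 collapses to the single simple tensor (1/6) ⊗ (3n1 + 2n2).

```python
from fractions import Fraction

# For R = Z, N = Z, the identification Q ⊗_Z Z ≅ Q sends q ⊗ n to q*n, so we
# can verify that (1/2)⊗n1 + (1/3)⊗n2 equals (1/6)⊗(3*n1 + 2*n2) numerically.

n1, n2 = 5, 7
lhs = Fraction(1, 2) * n1 + Fraction(1, 3) * n2  # sum of two simple tensors
rhs = Fraction(1, 6) * (3 * n1 + 2 * n2)         # one tensor, d = lcm(2, 3) = 6
print(lhs == rhs)  # True
```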
Exercise 10.4.8.
First note the misprint: "(u, n) ∼ (u′, n) if and only if u′n = un′ in N" should read "(u, n) ∼ (u′, n′) if and only if xu′n = xun′ in N for some x ∈ U." (The relation given in the book is not transitive, hence this fix is needed.)
(a) We show that U⁻¹N is an abelian group, with addition given by (u1, n1) + (u2, n2) = (u1u2, u2n1 + u1n2). Suppose (u, n), (u1, n1), (u2, n2), and (u3, n3) ∈ U⁻¹N. For associativity, we calculate:
((u1, n1) + (u2, n2)) + (u3, n3) = (u1u2, u2n1 + u1n2) + (u3, n3) = (u1u2u3, u3u2n1 + u3u1n2 + u1u2n3) = (u1, n1) + ((u2, n2) + (u3, n3)).
The class (1, 0) is an additive identity, since
(1, 0) + (u, n) = (1 · u, u · 0 + 1 · n) = (u, n),
and (u, −n) is an additive inverse for (u, n). For commutativity, note:
(u1, n1) + (u2, n2) = (u1u2, u2n1 + u1n2) = (u2u1, u1n2 + u2n1) = (u2, n2) + (u1, n1).
Next, with the R-action r(u, n) = (u, rn), the module axioms hold:
1. (r + s)(u, n) = (u, (r + s)n) = (u, rn + sn) = (u, rn) + (u, sn) = r(u, n) + s(u, n), where the third equality holds because (u, rn) + (u, sn) = (u², u(rn + sn)) ∼ (u, rn + sn).
2. (rs)(u, n) = (u, (rs)n) = (u, r(sn)) = r(u, sn) = r(s(u, n)).
3. r((u1, n1) + (u2, n2)) = (u1u2, r(u2n1 + u1n2)) = (u1u2, u2(rn1) + u1(rn2)) = r(u1, n1) + r(u2, n2), using the commutativity of R.
4. 1(u, n) = (u, 1n) = (u, n).
Therefore U⁻¹N is an R-module.
(b) Let h : Q × N → U⁻¹N be the map that sends (a/b, n) to (b, an). We will show that h is an R-balanced map. For a/b, c/d ∈ Q, m, n ∈ N, and r ∈ R we have:
1. h(a/b + c/d, n) = h((ad + bc)/(bd), n) = (bd, (ad + bc)n) = (b, an) + (d, cn) = h(a/b, n) + h(c/d, n), since (b, an) + (d, cn) = (bd, d(an) + b(cn)) = (bd, (ad + bc)n).
2. h(a/b, m + n) = (b, a(m + n)) = (b, am + an) = (b, am) + (b, an) = h(a/b, m) + h(a/b, n), since (b, am) + (b, an) = (b², b(am + an)) ∼ (b, am + an).
3. h((a/b)r, m) = h(ar/b, m) = (b, (ar)m) = (b, a(rm)) = h(a/b, rm).
Therefore h is an R-balanced map, and hence by the universal property of tensor products there is a unique homomorphism f : Q ⊗R N → U⁻¹N such that f(q ⊗ n) = h(q, n).
Next we claim that the map g : U⁻¹N → Q ⊗R N given by g((u, n)) = (1/u) ⊗ n is well defined. To see this, suppose (u1, n1) = (u2, n2) in U⁻¹N. Then there exists an x ∈ U such that xu1n2 = xu2n1. We have:
(1/u1) ⊗ n1 = (xu2/(xu1u2)) ⊗ n1 = (1/(xu1u2)) ⊗ (xu2n1) = (1/(xu1u2)) ⊗ (xu1n2) = (xu1/(xu1u2)) ⊗ n2 = (1/u2) ⊗ n2,
so g is well defined. Moreover g is inverse to f:
f(g((u, n))) = f((1/u) ⊗ n) = h(1/u, n) = (u, 1 · n) = (u, n),
and
g(f((a/b) ⊗ n)) = g((b, an)) = (1/b) ⊗ an = (a/b) ⊗ n.
Therefore, U⁻¹N is isomorphic to Q ⊗R N as an R-module.
(c) Since U⁻¹N is isomorphic to Q ⊗R N, we conclude that every element of Q ⊗R N can be written as a simple tensor (1/d) ⊗ n, and that (1/d) ⊗ n = 0 if and only if rn = 0 for some nonzero r ∈ R: indeed, (d, n) = (1, 0) in U⁻¹N exactly when xn = 0 for some x ∈ U.
(d) Suppose A is an abelian group. We use the results of parts (a), (b), and (c), with Q = Q, R = Z, and N = A. We see that every element of Q ⊗Z A can be written (1/d) ⊗ a for some a ∈ A, and this tensor is 0 if and only if na = 0 for some n ∈ Z, n ≠ 0. In other words, Q ⊗Z A = 0 if and only if A is a torsion group.
Exercise 10.4.11.
Suppose there exist v, w ∈ V such that e1 ⊗ e2 + e2 ⊗ e1 = v ⊗ w. Write v = ae1 + be2 and w = ce1 + de2 for some a, b, c, d ∈ R. Then we calculate
(ae1 + be2) ⊗ (ce1 + de2) = (ae1) ⊗ (ce1) + (ae1) ⊗ (de2) + (be2) ⊗ (ce1) + (be2) ⊗ (de2)
= (ac)(e1 ⊗ e1) + (ad)(e1 ⊗ e2) + (bc)(e2 ⊗ e1) + (bd)(e2 ⊗ e2).
Comparing coefficients with respect to the basis {e1 ⊗ e1, e1 ⊗ e2, e2 ⊗ e1, e2 ⊗ e2} gives ac = 0, ad = 1, bc = 1, and bd = 0. But ad = 1 and bc = 1 force a, b, c, d all to be nonzero, contradicting ac = 0. Hence e1 ⊗ e2 + e2 ⊗ e1 is not a simple tensor.
Now let V be a vector space over a field F and let v, v′ ∈ V be nonzero; we show that v ⊗ v′ = v′ ⊗ v in V ⊗F V if and only if v = av′ for some a ∈ F. For the reverse direction, if v = av′ then
v ⊗ v′ = (av′) ⊗ v′ = v′ ⊗ (av′) = v′ ⊗ v.
For the forward direction, note that since v and v′ are nonzero and there exists no scalar a ∈ F such that v = av′, it follows that v and v′ are linearly independent. We can therefore choose linear functionals ℓ, ℓ′ : V → F such that ℓ(v) = 1, ℓ(v′) = 0 and ℓ′(v) = 0, ℓ′(v′) = 1.
The fact that such functionals exist relies essentially on the fact that we are dealing with vector spaces over a field; for example, we may complete {v, v′} to a basis of V and let ℓ, ℓ′ be the corresponding elements of the "dual basis." One easily checks that the map V × V → F given by (a, b) ↦ ℓ(a)ℓ′(b) is an F-bilinear map, and hence induces an F-vector space map φ : V ⊗F V → F. The image of v ⊗ v′ under φ is 1, whereas the image of v′ ⊗ v under φ is 0. Therefore, v ⊗ v′ ≠ v′ ⊗ v in V ⊗F V.
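For finite-dimensional V the claim can also be checked numerically, using the standard identification of v ⊗ w in V ⊗ V with the outer-product matrix [vi wj] (the helper name `outer` is ours): the outer products v w^T and w v^T agree exactly when v and w are proportional.

```python
# Numeric companion to Exercise 10.4.11 over F = Q: identifying v ⊗ w with the
# outer product [v_i * w_j], the tensor v ⊗ v' equals v' ⊗ v exactly when
# v and v' are proportional.

def outer(v, w):
    return [[vi * wj for wj in w] for vi in v]

v, vp = [1, 2], [3, 1]               # linearly independent vectors
print(outer(v, vp) == outer(vp, v))  # False: v ⊗ v' != v' ⊗ v

w = [2, 4]                           # w = 2v is proportional to v
print(outer(v, w) == outer(w, v))    # True
```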
References
[DF] Dummit, David S., and Richard M. Foote. Abstract Algebra, 3rd edition. John Wiley & Sons, 2004.