
AlgebraicStructures https://www.cs.yale.edu/homes/aspnes/pinewiki/AlgebraicStructures.html

AlgebraicStructures

Note: You are looking at a static copy of the former PineWiki site, used for class notes by
James Aspnes from 2003 to 2012. Many mathematical formulas are broken, and there are
likely to be other bugs as well. These will most likely not be fixed. You may be able to
find more up-to-date versions of some of these notes at http://www.cs.yale.edu/homes/aspnes/#classes.

Contents
1. What algebras are
2. Why we care
3. Cheat sheet: axioms for algebras (and some not-quite algebras)
4. Classification of algebras with a single binary operation (with perhaps some other operations sneaking in later)
   1. Magmas
   2. Semigroups
   3. Monoids
   4. Groups
   5. Abelian groups
5. Operations on algebras
   1. Subalgebras
   2. Homomorphisms
   3. Free algebras
      1. Applications of free algebras
   4. Product algebras
   5. Congruences and quotient algebras
6. Algebraic structures with more binary operations
   1. Rings
      1. Examples of rings
      2. Properties of rings
      3. Invertible elements and zero divisors
      4. Subrings
      5. Ideals and quotient rings
      6. Ring homomorphisms
   2. Semirings
   3. Fields
      1. Subfields and homomorphisms
   4. Vector spaces
      1. Homomorphisms of vector spaces
      2. Subspaces

What algebras are


An algebra is a set S (called the carrier) together with zero or more operations, each of
which is a function from S^k to S for some k. The value k is the number of arguments to the
operation, and is called the arity of the operation. Most operations that one encounters are
either unary (one argument) or binary (two arguments); examples are negation (unary)
and addition (binary). Some programming languages provide ternary operations with three
arguments, like C's notorious ?: operator. Some algebras also include 0-ary operations:
most people call these constants. The collection of operations provided by an algebra is
called the algebra's signature.

For example, the integers with addition and negation have as carrier the set ℤ and signature
(+, 0, -), where + is a binary operation, 0 is a 0-ary operation (i.e., constant), and - is a
unary operation. Other operations (e.g. binary -) can be defined using operations in the
signature.

Often we use the same letter to refer to both an algebra and its carrier when this will not
cause confusion.
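The signature idea can be made concrete with a small Python sketch (names are mine, not from the notes): each operation of (ℤ, +, 0, -) is tagged with its arity, and binary subtraction is then definable from the signature's operations.

```python
# A sketch of the algebra (Z, +, 0, -): each operation is a pair (arity, function).
add = (2, lambda x, y: x + y)   # binary operation +
zero = (0, lambda: 0)           # 0-ary operation (constant) 0
neg = (1, lambda x: -x)         # unary operation -

signature = {'+': add, '0': zero, '-': neg}

# Binary subtraction is definable from the signature: x - y = x + (-y).
def sub(x, y):
    return add[1](x, neg[1](y))

print(sub(7, 3))  # 4
```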

Why we care
The various standard algebras are abstractions that allow us to reason about lots of
different situations at the same time. From the perspective of computer science, they are
like base classes high up in an object-oriented type hierarchy, that provide minimal
features common to all the classes that inherit from them without specifying the details. So
for example in a monoid, where the single operation is associative and there is an identity
e that satisfies the equations ae = a and ea = a, we can prove that there can't be a second
identity e', since if ae' = a for all a we have ee' = e (from ae' = a with a = e) and ee' = e'
(from ea = a with a = e'), giving e = e'. This tells us that we can't extend ℤ by adding a
second copy of 0, or the set of n×n matrices by adding a second copy of I; but we didn't
need to know anything specific about either structure to know this.

Similarly we can sometimes recognize that our proofs (or later, our algorithms) don't
require the detailed structure of ℚ, or ℝ, or ℤ, but in fact work in any algebra of the
appropriate type. For example, in computing the length of a path through a graph, we take
the sum of the edge weights, and in computing the shortest path, we take the min over all
such paths. If we know a lot about semirings, we might recognize this as a sum over
products in the (min,+) semiring, and realize that many of our shortest-path algorithms
might work equally well in any other semiring.
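The shortest-path observation can be sketched directly. Below is matrix "multiplication" written over an arbitrary semiring (the function and variable names are mine); plugging in (min, +) turns repeated squaring into an all-pairs shortest-path computation.

```python
INF = float('inf')

def mat_mul(A, B, plus, times, zero):
    """Matrix product with + and * replaced by arbitrary semiring operations."""
    n = len(A)
    C = [[zero] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = zero
            for k in range(n):
                acc = plus(acc, times(A[i][k], B[k][j]))
            C[i][j] = acc
    return C

# Edge weights: W[i][j] = weight of edge i -> j, INF = no edge, 0 on the diagonal.
W = [[0, 3, INF],
     [INF, 0, 1],
     [7, INF, 0]]

# In the (min, +) semiring, the "sum over products" over paths is the
# shortest-path weight; INF is the additive (min) identity.
D = W
for _ in range(2):  # squaring twice covers all simple paths in a 3-vertex graph
    D = mat_mul(D, D, min, lambda a, b: a + b, INF)
print(D[0][2])  # 4 (the path 0 -> 1 -> 2 with weight 3 + 1)
```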

Cheat sheet: axioms for algebras (and some not-quite algebras)
Axioms                           Name                             Anything that satisfies this
                                                                  and above is a
-----------------------------------------------------------------------------------------------
0 binary operations                                               set
1 binary operation +                                              magma
x+(y+z) = (x+y)+z                Associativity                    semigroup
0+x = x+0 = x                    Identity                         monoid
x+(-x) = (-x)+x = 0              Inverse                          group
x+y = y+x                        Commutativity                    abelian group
2 binary operations + and ⋅
x(y+z) = xy+xz, (y+z)x = yx+zx   Distributivity
x(yz) = (xy)z                    Associativity (multiplicative)   rng (ring without the
                                                                  multiplicative identity)¹
1x = x1 = x                      Identity (multiplicative)        ring
xy = yx                          Commutativity (multiplicative)   commutative ring
xx⁻¹ = x⁻¹x = 1 when x ≠ 0       Inverse (multiplicative)         field

Semirings don't quite fit into this progression; they act like rings, but the + operation gives
a commutative monoid instead of an abelian group. An example is the (max,+) semiring,
where max acts as the addition operator and + (which distributes over max) acts as
multiplication.

See below for examples.

Classification of algebras with a single binary operation (with perhaps some other operations sneaking in later)
The simplest interesting algebras consist of a set together with a single binary operation (*,
say). These algebras are classified depending on what axioms * satisfies. Each of the
classes below adds an additional constraint to the previous one.

Magmas
If * satisfies closure (which just says that its codomain is S), then (S,*) is a magma (or
sometimes groupoid, but groupoid is more often used to mean something else that is more
useful: see WikiPedia: Groupoid). There really isn't very much to say about magmas, but
we can define one interesting magma: the free magma on a set A has as its elements (1)
all the elements of A, and (2) (x*y) for any x and y in the magma. This effectively
generates all binary trees whose leaves are elements of A.
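The free magma can be sketched as nested pairs (the representation is mine, for illustration): the operation builds a pair and never simplifies, so distinct groupings stay distinct trees.

```python
# Elements of the free magma on A: atoms of A, or pairs (x, y) built by *.
def star(x, y):
    return (x, y)

a, b, c = 'a', 'b', 'c'
t1 = star(a, star(b, c))   # a*(b*c)
t2 = star(star(a, b), c)   # (a*b)*c
print(t1 == t2)  # False: a magma has no associativity axiom
```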

Semigroups
A magma (S,*) becomes a semigroup if its operator is associative, that is, if (x*y)*z =
x*(y*z) for all x, y, and z in S. Semigroups show up quite often in ComputerScience.
Examples are:

Let A be a set, and S be a set of functions from A to A that is closed under
composition. Then (S,∘) is a semigroup, where ∘ is the composition operation defined
by (f∘g)(x) = f(g(x)).
Let A+ be the set of nonempty finite sequences of elements of some alphabet A.
Then (A+,+) is a semigroup, where + is the concatenation operation: abc+def =
abcdef. This semigroup is called the free semigroup on A.
Let ℤ+ be the positive integers. Then (ℤ+,+) is a semigroup, which is isomorphic
(see below) to (A+,+) if A has only one element.

The empty set Ø and the empty function from Ø²→Ø together make the empty
semigroup.
Let S be a set and let x be an element of S. Define y*z = x for all y, z in S. Then
(S,*) is a semigroup.

Most magmas are not semigroups and can't easily be turned into semigroups without
destroying information.

Monoids
A semigroup (S,*) is a monoid if it has an identity element e, that is, if there is an element
e such that e*x = x and x*e = x for all x. In algebraic terms, we usually think of the
identity element as being provided by a 0-ary operation, also known as a constant.

If there is an identity element, it is easy to prove that it is unique: Let e and e' be identities.
Then e*e' = e' (by the e*x=x rule) and e*e' = e (by the x*e'=e' rule). Thus e=e'.

The semigroup (S,∘) from the previous section is a monoid if S contains the identity
function f (for which f(x)=x for all x).
Let A* be the set of all finite sequences of elements of some alphabet A. Then
(A*,+) is a monoid (the free monoid on A), whose identity is the empty string <>.
(ℕ,+) and (ℕ,*), where + and * are the usual addition and multiplication operations,
are both monoids. Note that (ℤ+,+) is not a monoid, because it doesn't contain the
required identity element 0.
Let A be a set and let S be the power set of A. Then (S,∪) and (S,∩) are both
monoids (with identity elements Ø and A, respectively).
Let S = { x } and let x*x=x. Then (S,*) is a monoid.
Any semigroup can be turned into a monoid by adding a new identity element (if it
doesn't have one already).
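The last bullet can be sketched in Python (names mine): adjoin a fresh element E to a semigroup and extend the operation so that E acts as the identity.

```python
# Adjoin a fresh identity E to a semigroup (S, *).
E = object()  # a new element, guaranteed distinct from everything in S

def adjoin_identity(op):
    def new_op(x, y):
        if x is E:
            return y
        if y is E:
            return x
        return op(x, y)
    return new_op

# Example: the "constant" semigroup y*z = 'x' from the semigroup section.
op = adjoin_identity(lambda y, z: 'x')
print(op(E, 'q'), op('q', E), op('q', 'r'))  # q q x
```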

Groups
A monoid (S,*) is a group if for each a and b in S, there are solutions x and y to the
equations ax=b and ya=b. This fact is equivalent to the existence of a unary inverse
operation taking x to x⁻¹ (or -x when the binary operation is written as +), where
x⁻¹x = xx⁻¹ = e. The existence of an inverse operation can be derived from the more
fundamental definition by proving a sequence of properties of groups that are interesting in
their own right:

1. Let y∈S. If there exists any x∈S such that xy=x or yx=x, then y=e. (In other words,
any y that ever acts like the identity is the identity; this is not true in general in
monoids.) Proof: Suppose xy=x; the other case is symmetric. We will show that y =
ey = e. Let q be a solution to qx=e. Then y = ey = (qx)y = q(xy) = qx = e.
2. If xy = e, then yx = e. (This says that if y is a right inverse of x, it is also a left
inverse.) Proof: Let xy = e, and observe that yxy = ye = y. Now let q be a solution to
yq=e. Then (yxy)q = yq, but (yxy)q = (yx)(yq) = yx and yq = e, so we have yx = e.
3. If xy = e and xy' = e, then y = y'. (I.e., right inverses are unique.) Proof: Consider
yxy'. Grouping the operations as y(xy') we get yxy' = y(xy') = ye = y. Grouping them
the other way gives yxy' = (yx)y' = ey' = y'. So we have y = y'.

It follows that if xy=e, then yx=e and y is uniquely defined by x. Thus the inverse
operation x⁻¹ is well-defined (it identifies a unique element of S satisfying the required
property xx⁻¹ = x⁻¹x = e). The inverse of the identity is always the identity, since e⁻¹ =
e⁻¹e = e.

Note that if we start with an inverse operation, we can easily solve ax=b or ya=b by
computing x = a⁻¹ax = a⁻¹b or y = yaa⁻¹ = ba⁻¹.
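Solving one of these equations can be tried concretely in the additive group ℤ_12 (a minimal Python sketch, names mine; in additive notation the inverse is negation).

```python
# Solving a + x = b in the group (Z_12, +) using the inverse operation.
m = 12
def add(x, y): return (x + y) % m
def inv(x): return (-x) % m     # additive inverse mod m

a, b = 9, 4
x = add(inv(a), b)   # x = (-a) + b, the additive analogue of x = a^-1 b
print(x, add(a, x))  # 7 4 -- indeed a + x = b
```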

Examples of groups:

ℤ, ℚ, and ℝ are all groups with the binary operation +, but not with the binary
operation * (0 has no inverse). ℚ-{0} and ℝ-{0} (sometimes written ℚ* and ℝ* in
this context) are groups with respect to *; ℤ-{0} with multiplication is still not a
group (only 1 and -1 have inverses). This is a special case of a general method for
extracting a group from a monoid (see below).
Let ℤ_m be the quotient set of the equivalence relation a~b if a=b (mod m), and
define an addition operation on ℤ_m by the rule <a> + <b> = <a+b>. Then ℤ_m is a
group, called the integers mod m.
Let ℤ_m* be the elements of ℤ_m whose representatives are relatively prime to m. Then
(ℤ_m*,*) is a group, where * is multiplication mod m.
Let A be a set, and let S be the set of all bijections from A to A. Then (S,∘) is a
group, called the permutation group on A.
Let A be a set of symbols, and let S consist of all finite sequences s in which each
position has the value x for some x in A, or x⁻¹ for some x in A, and neither x,x⁻¹
nor x⁻¹,x appear consecutively in s. Define a group operation by concatenation
followed by the removal of consecutive xx⁻¹ and x⁻¹x pairs. The resulting algebra is
a group, called the free group on A.
Let S = {x} and let x*x=x. Then (S,*) is a group, with identity x and x⁻¹ = x.
The set of n×m matrices over ℝ (in general, over any field—see below for the
definition of a field) is a group under addition. The set of invertible n×n matrices is a
group under multiplication. If we don't require invertibility, we only get a monoid.
Given any monoid M, the set M* of invertible elements of M is a group, where an
element x is invertible if there exists some element x⁻¹ such that xx⁻¹ = x⁻¹x = e.
Examples include ℚ* = ℚ - { 0 }, ℝ* = ℝ - { 0 }, and, for each n, the group of
invertible n×n matrices. A less interesting example is ℤ* = { -1, 1 }. The proof that
the invertible elements form a group depends on the fact that if a and b are both
invertible, so is ab, since ab(b⁻¹a⁻¹) = (b⁻¹a⁻¹)ab = e. This doesn't work in general in
magmas with an identity, because without associativity we can't necessarily regroup
ab(b⁻¹a⁻¹) = a(bb⁻¹)a⁻¹ = (ae)a⁻¹ = aa⁻¹ = e. Note that we didn't need to use anything
in the proof that ab has an inverse besides the monoid axioms and the invertibility of a
and b; so this is an example of the sort of very general proof that can be done by using
powerful enough abstractions.
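Extracting the group of units from a monoid can be checked mechanically for (ℤ_10, ×) (a small Python sketch, names mine): the invertible elements are exactly those relatively prime to 10, and their products stay invertible.

```python
# The group of invertible elements of the monoid (Z_10, *).
m = 10
def mul(x, y): return (x * y) % m

# x is invertible iff some y satisfies xy = 1 (mod m).
units = [x for x in range(m) if any(mul(x, y) == 1 for y in range(m))]
print(units)  # [1, 3, 7, 9] -- exactly the elements relatively prime to 10

# Closure, as in the proof: the product of two units is again a unit.
print(all(mul(a, b) in units for a in units for b in units))  # True
```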

Groups are about the weakest algebraic structures that allow calculations similar to what we
expect with ordinary arithmetic: we can add and subtract, solve for unknown variables, etc.
Note that the operation in a group does not have to be commutative: it may be that xy ≠ yx
for some x and y. For example, in the permutation group on three elements, the function
that swaps the first two elements and the function that swaps the last two elements do not
commute: 123 → 213 → 231 does not give the same result as 123 → 132 → 312.
Similarly, matrix multiplication is in general not commutative.

Abelian groups
If a group is commutative, which means that xy=yx for all x and y, then it is an abelian
group (after Niels Henrik Abel, a 19th-century Norwegian mathematician---but by
convention spelled with a lowercase "a" despite its etymology).

All of the groups listed previously except the permutation group (on three or more
elements) and the free group are abelian. By convention, the binary operation in an abelian
group is often (but not always) written as +.

For more on the structure of groups and abelian groups, see GroupTheory.

Operations on algebras
Like sets or graphs, we can define operations that connect one algebra to another or allow
us to construct new algebras. When considering multiple algebras, we generally assume
that all share the same set of operations (or at least operations with the same names and
arities). Most operations on an algebra are defined in terms of operations on the carrier (the
set of elements of the algebra).

Subalgebras
A subalgebra of an algebra A is obtained by taking a subset S' of the carrier that is closed
under all operations of the algebra; i.e. for any n-ary operation f, f(x1, ..., xn) is in S' if all
the xi are. If B is a subalgebra of A, we write B⊆A, as with subsets.

For specific classes of algebras (such as semigroups, monoids, groups, or abelian groups),
any axioms satisfied by the parent algebra are inherited by its subalgebras: e.g., if xy = yx
in A, then xy = yx in B⊆A.

One important way to get a subalgebra is the subalgebra generated by a particular
element or set of elements. This is the smallest subalgebra of a given algebra that includes
the specified elements (which are called generators); formally, this is defined by taking
the intersection of all subalgebras that contain the generators. (The proof that taking the
intersection of subalgebras gives a subalgebra is left as an exercise.)

Examples:

Let A and B be sets, which we can think of as algebras with no operations. Then B is
a subalgebra of A iff B is a subset of A.
Let A be the semigroup (ℕ,+), and let B = { x | x > 137 and x = 3k for some k in ℕ
}. Then B is a subsemigroup of A.
Let A be the monoid (ℤ,*), and let B be the monoid (ℤ+,*). Then B is a submonoid
of A. Note that 1 has to be present in B for it to be a submonoid, or else B is not
closed under the (0-ary) identity operation.
Let A be the group (ℤ,+), and let S' be the set of all even integers. Then B = (S',+) is
a subgroup of A.
Let A be the group (ℤ,+), and let B be the subalgebra of A generated by 2. Then B is
precisely the subgroup described in the previous example.
Let A be the free monoid over { a, b, c }, and let B be the subalgebra generated by
aaa. Then B consists of all strings of k a's where k is a non-negative multiple of 3.
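The subgroup of (ℤ, +, 0, -) generated by 2 can be computed by brute-force closure (a sketch with names of my own choosing; the iteration is truncated to a finite window of integers so that it terminates).

```python
def generate(gens, window=range(-20, 21)):
    """Close gens under +, unary -, and the constant 0, within a finite window."""
    S = {0} | set(gens)  # 0 comes from the 0-ary identity operation
    while True:
        new = {x + y for x in S for y in S} | {-x for x in S}
        new = {z for z in new if z in window} - S
        if not new:
            return S
        S |= new

evens = generate({2})
print(sorted(evens) == list(range(-20, 21, 2)))  # True: the even integers
```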

Homomorphisms
A function from the carrier of A to the carrier of B is a homomorphism, written f:A→B,
if for any n-ary operation g, g(f(x1), ..., f(xn)) = f(g(x1, ..., xn)). Note that the g on the left-
hand side is B's version of g and the g on the right-hand side is A's. If this equation holds
for a particular operation g, f is said to preserve g. A homomorphism preserves all
operations.

Examples:

Any function f:A→B between sets (viewed as algebras with no operations) is a
homomorphism, since there are no operations to preserve.
The composition g∘f of two homomorphisms f:A→B and g:B→C is a
homomorphism from A to C.
Let A be the free magma over a set S and let B be the free semigroup over the same
set S. Define f:A→B by f(x) = x whenever x is in S, and f(xy) = f(x)f(y) otherwise.
Then f is a homomorphism (immediately from the definition). The effect of f is to
"flatten" trees into an ordered list of their leaves.
Let A be the free semigroup over S and let f(x) = 1 for x in S and f(xy) = f(x)+f(y)
otherwise. Then f is a homomorphism from A to (ℤ+,+).
Consider the function f:ℤ→ℤ_m defined by f(x) = x mod m. Then f is a
homomorphism from the additive group of the integers (ℤ, +, 0, -) to ℤ_m.
Consider the function f:ℤ→ℤ defined by f(x)=cx for some constant c. Then f(x+y) =
c(x+y) = cx + cy = f(x) + f(y), f(0) = c*0 = 0, and f(-x) = c*(-x) = -(cx) = -f(x): f is a
group homomorphism from ℤ to ℤ.
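The mod-m example above can be checked mechanically on a finite sample of inputs (evidence rather than proof; the sketch and its names are mine).

```python
# Checking that f(x) = x mod m preserves +, 0, and -, i.e., is a group
# homomorphism from (Z, +, 0, -) to (Z_m, +, 0, -), on a sample of integers.
m = 5
f = lambda x: x % m

sample = range(-10, 11)
print(all(f(x + y) == (f(x) + f(y)) % m for x in sample for y in sample))  # True
print(f(0) == 0 and all(f(-x) == (-f(x)) % m for x in sample))             # True
```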

In the last case the image of f, defined as f(A) = { f(x) | x in A }, is a subgroup of the
codomain ℤ. This turns out always to be the case for images of algebra homomorphisms:

Theorem 1
Let f:A→B be an algebra homomorphism. Then f(A) is a subalgebra of B.
Proof
To show that f(A) = { f(x) | x in A } is a subalgebra of B, we need to show that it is
closed under every operation of B. Let g be an n-ary operation of B, and consider
g(y1, ..., yn) where each yi is in f(A). Because each yi is in f(A), there exists some xi
such that yi = f(xi). So we can rewrite g(y1, ..., yn) as g(f(x1), ..., f(xn)) = f(g(x1, ...,
xn)) ∈ f(A). (The equality holds because f is a homomorphism.)

As with graphs, homomorphisms on algebras give rise to isomorphisms (bijective
homomorphisms, the inverses of which are also homomorphisms) and
automorphisms (isomorphisms between an algebra and itself). Every algebra has at least
one automorphism, the identity homomorphism that maps every element to itself. An
injective homomorphism is called an embedding (sometimes imbedding); any embedding
f of A into B gives an isomorphism between A and B's subalgebra f(A).

More examples:

Recall that the free magma on S consists of all binary trees with leaves in S. Let's
call this free magma T (for tree). Consider the magma of nonempty
WikiPedia: LISP lists with atoms in S and the cons operation, defined by x cons
(y1 ... yk) = (x y1 ... yk); we'll call this one L (for LISP). Define f:T→L by the rule
f(s) = (s) when s∈S and f(T1*T2) = f(T1) cons f(T2). Then f is clearly a
homomorphism. But we can show that it is in fact an isomorphism, by defining
f⁻¹((x)) = x (for length-1 lists) and f⁻¹((x y1 ... yk)) = f⁻¹(x)*f⁻¹((y1 ... yk)) (for
length-2 and longer lists). It's not hard to verify that f⁻¹(f(t)) = t for any tree t. If
we allow empty binary trees and empty lists, we similarly get an isomorphism
between two slightly larger magmas.
Let A be the free monoid on a single-element set S = {a}, and define f(a^n) = n. Then f is a
monoid isomorphism between A and (ℕ,+); its inverse is f⁻¹(n) = a^n. Conversely, if
M is a free monoid and g(n) = a^n for any nonidentity element a of M, then g is an
embedding of (ℕ,+) into M.
Let f:ℕ→ℕ be defined by f(n) = 3^n. Then f is a monoid embedding of (ℕ,+) into
(ℕ,*), since (a) f is a homomorphism: f(a+b) = 3^(a+b) = 3^a 3^b = f(a)f(b) and f(0) = 3^0 =
1; and (b) f is injective. It's not an isomorphism because it isn't surjective, so it doesn't
have an inverse defined on all of ℕ.
Let f:ℝ→ℝ+ be defined by f(x) = 3^x, where ℝ+ = { x∈ℝ | x > 0 }. Now f is an
isomorphism between the groups (ℝ,+) and (ℝ+,*). Proof: f(a+b) = 3^(a+b) = 3^a 3^b =
f(a)f(b), f(0) = 3^0 = 1, f(-a) = 3^(-a) = (3^a)⁻¹ = f(a)⁻¹, and log₃(3^x) = x shows that log₃
gives an inverse for f.

Free algebras
We've seen examples of various kinds of algebras that have been called "free", such as free
magmas, free semigroups, etc. There is in fact a single definition of a free algebra (from a
particular class of algebras) that produces each of these specific free algebras.

The essential idea of a free algebra A over a set S is that (a) it contains all the elements of
S, (b) it contains g(x1, ..., xn) for any n-ary operation g whenever x1,...,xn are in A (which
is just the closure requirement), and (c) for particular xi and x'i, g(x1,...,xn) = g(x'1,...,x'n)
only if required by the axioms.

So in a free magma, x(yz) is never equal to (xy)z, no matter what values x, y, and z have.
In a free semigroup, x(yz) = (xy)z, always, because of the associativity axiom, but xy is
never equal to x for any x and y, xy is never equal to yx, etc. In a free monoid, xe = ex = x,
but xy is never equal to x unless y = e, and so forth.

The formal definition that yields these properties is this:

Let C be a class of algebras, where the names and arities of the operations are the same for
all algebras in the class, and let S be a set. Then a free algebra in C over S, written F(S) or
FC(S), is an algebra in C such that (a) S is a subset of the carrier of F(S), and (b) for any
algebra B in C, and any function from S to the carrier of B, there is a unique
homomorphism f* from F(S) to B such that f*(x) = f(x) for any x in S. (In this case f is said
to lift to f*; another way of expressing this is that f is a subset of f* when both are viewed
as sets of ordered pairs.)

How does this definition yield all the specific cases of free algebras? Let's take free
monoids as an example. Recall that a free monoid F(S) over S consists of all finite
sequences of elements of S, with + = concatenation and 0 = the empty sequence. It's
trivially the case that S is a subset of the carrier of F(S) (assuming we identify single
elements with one-element sequences). Now let's show that any function f from S to the
carrier of some monoid B lifts to a unique f* from F(S) to B.

Define f*(x1...xk) = f(x1)f(x2)...f(xk). It is easy to see that f* is a (monoid) homomorphism,
and that f*(x) = f(x) when x is in S. Now let g be any other monoid homomorphism from
F(S) to B such that g(x) = f(x) for any x in S. We'll show that for sequence s=x1...xk in
F(S), g(s) = f*(s), by induction on the length k of s. The cases k=0 and k=1 are the base
cases. For k=0, g(<>) = f*(<>) = the identity of B because both g and f* are monoid
homomorphisms (and therefore must preserve the identity). For k=1, we have g(x1) = f(x1)
= f*(x1) by the requirement that g(x) = f(x) when x in S. For k > 1, we have g(x1...xk-1xk)
= g(x1...xk-1)g(xk) = f*(x1...xk-1)f*(xk) = f*(x1...xk-1xk).
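The lift f* is just a fold of f over the sequence using the target monoid's operation and identity. A Python sketch (the target monoid is passed in as an operation plus an identity; the names are mine):

```python
from functools import reduce

def lift(f, op, identity):
    """Lift f on generators to f*(x1...xk) = f(x1) op f(x2) op ... op f(xk)."""
    return lambda s: reduce(op, (f(x) for x in s), identity)

# Lift f(c) = 1 from the alphabet into (N, +, 0): f* measures string length.
f_star = lift(lambda c: 1, lambda a, b: a + b, 0)
print(f_star("abcab"), f_star(""))  # 5 0
```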

Note that there may be more than one way to define a free algebra. For example, we could
define the free monoid over S by setting + to be reverse concatenation: xyz+abc = abcxyz.
This again gives sequences of elements of S, but to translate them back to the previous
definition we have to reverse the sequences. Fortunately this reversal operation is an
isomorphism between the two monoids, and in general we can show that any free algebra
for a given class with a given base set is unique up to isomorphism, which means that
while there may be more than one such algebra, there is only one isomorphism
equivalence class.

Theorem 2
If F and G are free algebras over S in a class C, then F and G are isomorphic.
Proof
The proof is by repeated application of the uniqueness of f*:F(S)→A for any
f:S→A.
1. Let f:S→G be defined by f(x)=x. Then there is a unique homomorphism
f*:F→G.
2. Similarly, let g:S→F be defined by g(x) = x. Then there is a unique
homomorphism g*:G→F.
3. We will now show that g* = (f*)⁻¹. Consider the homomorphism h=g*∘f*. For
any x in S, we have h(x) = g*(f*(x)) = g(f(x)) = x. So h is a homomorphism
from F to F, such that for any x in S, h(x)=x. Since F is a free algebra, h is
unique among all such homomorphisms: if somebody shows up with another
homomorphism h':F→F with h'(x) = x for all x in S, then h'=h. Now out of the
box let us pull I_F:F→F, the identity homomorphism on F defined by I_F(x)=x
for all x in F. Clearly I_F(x)=x for all x in S, so I_F=h. It follows that g*∘f* is the
identity on F; a symmetric argument shows that f*∘g* is the identity on G---i.e.,
that g* = (f*)⁻¹. Since f* is invertible and thus bijective, it's an
isomorphism, and F and G are isomorphic.

The nice thing about this proof is that at no time did we have to talk about specific
operations of the algebras; everything follows from the definition of a free algebra in terms
of homomorphisms.

Applications of free algebras

The selling point for free algebras is that we can define homomorphisms just by defining
what happens to elements of the underlying set S. This operation is done all the time.
Some examples:

Simple Roman numerals (without the IV = V-I rule). Define f(I) = 1, f(V) = 5, f(X) =
10, etc., and assert f is a homomorphism from the free semigroup on {I,V,X,...,M} to
(ℤ+,+).

Flattening trees (homomorphism from the free magma to the free semigroup
mapping each element of S to itself), counting the number of leaves in a tree
(homomorphism from the free magma on S to (ℤ+,+) mapping each element of S to
1), or counting the total weight of the leaves of a tree (homomorphism from the free
magma on S to (ℝ,+) mapping each element of S to its weight). Indeed, just about
any recursive function on binary trees is an example of a magma homomorphism.
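The Roman-numeral example can be written out directly (a sketch, names mine): define f on the generators and sum over the string, which is exactly the lifted homomorphism f* into (ℤ+,+).

```python
# The simple Roman-numeral homomorphism from the free semigroup on
# {I, V, X, L, C, D, M} to (Z+, +).
values = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman(s):                       # f* lifted from f = values[.]
    return sum(values[c] for c in s)

print(roman("MDCLXVI"))  # 1666
print(roman("VIIII"))    # 9 (no subtractive IV rule in this version)
```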

Product algebras
Let F and G be algebras with the same signatures. Then F×G is an algebra whose elements
are pairs of one element from F and one from G (i.e., the carrier is the usual cartesian
product F×G), and each operation acts componentwise, carrying out the corresponding F
and G operations on the respective sides of the ordered pairs.

Examples:

Consider the monoids F=(ℤ,+,0) and G=(ℤ,*,1). Then elements of F×G are pairs of
integers, and the binary operation (which we'll call *) adds the first elements of each
pair and multiplies the second: (2,3)*(-6,7) = (-4,21). The algebra F×G is a monoid
with identity (0,1), which is itself the pair of the identities of F and G.
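The componentwise operation of this product monoid can be sketched in a few lines of Python (names mine):

```python
# The product of the monoids (Z, +, 0) and (Z, *, 1): the operation acts
# componentwise, and the identity is the pair of identities.
def op(p, q):
    return (p[0] + q[0], p[1] * q[1])

e = (0, 1)  # identity of the product monoid
print(op((2, 3), (-6, 7)))      # (-4, 21), as in the example above
print(op((2, 3), e) == (2, 3))  # True
```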

Congruences and quotient algebras
A congruence is an equivalence relation on elements of an algebra that respects the
structure of the algebra. In particular, a relation ~ is a congruence on A if

1. It is an equivalence relation on elements of A, and
2. for any n-ary operation g of A, if x1 ~ y1, x2 ~ y2, ..., xn ~ yn, then g(x1, x2, ..., xn) ~
g(y1, y2, ..., yn).

An example of a congruence is the relation x~y if x and y are both even or x and y are both
odd, where x and y are elements of the additive group of the integers (ℤ,+,0,-).

Given a congruence ~ on an algebra A, we can define a quotient algebra A/~ whose
elements are the equivalence classes C of ~, and whose operations are defined by letting
g(C1...Cn) be the equivalence class of g(x1...xn), where each xi is some element of the
corresponding Ci. The requirements for ~ to be a congruence guarantee that this defines a
unique equivalence class no matter which representatives xi we pick.

For example, the odd-even congruence defined previously yields a quotient group ℤ/~ =
ℤ_2, the integers mod 2. Associated with this congruence is a homomorphism that maps
each integer to one of the two elements 0 or 1 of ℤ_2, depending on whether it is even (0) or
odd (1). This is not surprising; in fact, there is always such a homomorphism from any
algebra to any of its quotient algebras.
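The even-odd quotient can be sketched in Python (names mine): the class map is the canonical homomorphism, and the quotient operation is computed on representatives, which the congruence property makes well-defined.

```python
# The quotient of (Z, +) by the even/odd congruence, with classes labeled 0, 1.
def cls(x):                 # the canonical homomorphism Z -> Z/~
    return x % 2

def add_classes(c1, c2):
    return cls(c1 + c2)     # pick representatives and add; well-defined by congruence

print(add_classes(cls(3), cls(7)))                # 0: odd + odd = even
print(cls(3 + 7) == add_classes(cls(3), cls(7)))  # True: cls is a homomorphism
```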

Theorem 3
If ~ is a congruence on A, then the function f mapping each x in A to its equivalence
class in A/~ is a homomorphism.
Proof
We need to show that g(f(x1)...f(xn)) = f(g(x1...xn)) for each n-ary operation g and
sequence of arguments {xi}. For each i let Ci = f(xi) be the equivalence class of xi
under ~; by the definition of operations in A/~, g(C1...Cn) is the equivalence
class of g(x1...xn), which is f(g(x1...xn)).

The preceding theorem tells us we can map A onto A/~ via a homomorphism. There is a
sense in which we can go the other way: given any homomorphism f:A→B, we can define
a congruence ~ on A by x~y if f(x)=f(y). This congruence is called the kernel of f. (Note:
in GroupTheory, the kernel is defined differently, as the subgroup Ker f of A given by
f⁻¹(e). The relation between the kernel-as-subgroup and the kernel-as-congruence is that in
a group we can define a congruence a~b iff ab⁻¹ is in Ker f, which turns out to be
equivalent to the definition f(a)=f(b) for algebras in general.)

Theorem 4
Let f:A→B be a homomorphism, and let ~ be the kernel of f. Then ~ is a
congruence.
Proof
It is easy to see that ~ is an equivalence relation, so we will concentrate on showing
that xi~yi implies g(x1...xn)~g(y1...yn) for any {xi}, {yi}, and g. Pick some particular
xi, yi, and g; then we have f(xi)=f(yi) (from the definition of the kernel), and so
f(g(x1...xn)) = g(f(x1)...f(xn)) = g(f(y1)...f(yn)) = f(g(y1...yn)), which implies
g(x1...xn)~g(y1...yn).

We have previously observed that the image of a homomorphism is a subalgebra of its
codomain. This subalgebra turns out to be isomorphic to the quotient algebra of the
domain by the kernel of the homomorphism:

Theorem 5 (First Isomorphism Theorem)
Let f:A→B be a homomorphism, and let ~ be the kernel of f. Then A/~ is
isomorphic to f(A).
Proof
Define h:(A/~)→f(A) by setting h(C) = f(x), where x is any element of C. Note that h
is well-defined because each C consists precisely of all x for which f(x) is some
particular element y of f(A). We can also define an inverse h⁻¹ by letting h⁻¹(y) = { x |
f(x) = y }, so h is a bijection. It remains only to show that it is a homomorphism. Let
g be an n-ary operation of A/~, and let x1...xn be representatives of equivalence
classes C1...Cn. Then h(g(C1...Cn)) = f(g(x1...xn)) = g(f(x1)...f(xn)) =
g(h(C1)...h(Cn)).

In the case that f is surjective, the theorem gives an isomorphism between A/~ and B. This
is a fundamental tool in many branches of algebra.

Algebraic structures with more binary operations
All of the structures we have considered so far had only a single binary operation, which
we usually wrote as either multiplication or addition. We now consider structures that have
more binary operations. The simplest of these, rings and fields, are the natural
generalization of the ways that real numbers, integers, etc. support both an addition and a
multiplication operation.

Rings
A ring is a set S together with two binary operations, + and ×, called addition and
multiplication (as elsewhere, they are called this because of the symbol used, even if they
really do something else). Considering each of the operations separately, we require:

That (S,+) is an abelian group. In particular, there is an additive identity (usually
written 0) such that x+0=0+x=x for all x in S, and for each x in S, there is an
additive inverse -x such that x + (-x) = 0.
That (S,×) is a monoid. The multiplication operation is associative, and there is a
multiplicative identity (usually written 1) such that x×1=1×x=x for all x in S.

In addition, a distributive law relates the two operations. For all x, y, and z, we have:

x×(y+z) = (x×y)+(x×z) and (y+z)×x = (y×x)+(z×x).

Note the need for both equations since multiplication is not necessarily commutative.

In writing about rings, we adopt the usual convention that multiplication binds more
tightly than addition, and we drop the multiplication operator, so that we can write, for
example, yx+zx for (y×x)+(z×x).

Examples of rings

The integers with the usual addition and multiplication operations. These form a
commutative ring, in which multiplication is commutative.
The integers mod m, with addition and multiplication mod m
The dyadic numbers---rational numbers like 27/1, 53/2, or 91/128 where the
denominator is a power of two.
The rationals.
The reals.
The ring of polynomials in some variable x with coefficients in some commutative
ring R, which is written R[x]. Addition is the usual polynomial addition, e.g. (x+3) +
(x²+12) = (x²+x+15), and multiplication is the usual polynomial multiplication, e.g.
(x+3) × (x²+12) = (x³+3x²+12x+36). See Polynomials.
Square matrices of fixed size with coefficients in some field. We'll see more of these
in AbstractLinearAlgebra, but for the moment let us just observe that they are an
example of a non-commutative ring.
Formal power series of the form ∑ᵢ aᵢxⁱ, as used in the theory of
GeneratingFunctions, where addition and multiplication are as defined there. These
differ from ordinary polynomials in that ordinary polynomials can only have a finite
number of terms.
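The polynomial arithmetic in R[x] above can be checked with a short sketch: represent a polynomial by its list of coefficients, add termwise, and multiply by convolution. This is a minimal illustration, not a full implementation of R[x] (helper names like poly_add are my own, not a standard API):

```python
# Polynomials over a commutative ring as coefficient lists
# [a0, a1, a2, ...] meaning a0 + a1*x + a2*x^2 + ...
# (a hypothetical minimal sketch, not a general R[x] implementation)

def poly_add(p, q):
    # Pad the shorter list with zeros and add coefficients termwise.
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    # The coefficient of x^k in the product is the sum of a_i*b_j with i+j=k.
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (x+3) + (x^2+12) = x^2 + x + 15
print(poly_add([3, 1], [12, 0, 1]))   # → [15, 1, 1]
# (x+3) * (x^2+12) = x^3 + 3x^2 + 12x + 36
print(poly_mul([3, 1], [12, 0, 1]))   # → [36, 12, 3, 1]
```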

Properties of rings

Though rings vary widely in their properties, some properties follow from the definition
and thus hold in all rings. Two important theorems about rings are that multiplying by 0
has the effect we expect from our experience with the integers, rationals, etc., and that
negation commutes with multiplication.

Theorem
Let R be a ring and let x be any element of R. Then 0×x=x×0=0.


Proof
We'll prove that 0×x=0; the other case is symmetric. The proof uses both the
distributive law and the existence of 1, the multiplicative identity. Compute 0x+x =
0x+1x = (0+1)x = 1x = x = 0+x. But since addition in a ring is a group, if 0x+x =
0+x then 0x = 0.

The familiar rule that (-x)y = -(xy), (-x)(-y) = xy, etc. follows immediately from the
preceding theorem:

Corollary
Let R be a ring and x and y any elements of R. Then (-x)y = x(-y) = -(xy).
Proof
Again we will prove only the first case (-x)y = -(xy). It is enough for this to show
that (-x)y + xy = 0. Compute (-x)y + xy = (-x + x)y = 0y = 0. Here the first step uses
the distributive law, the second uses the definition of the additive inverse, and the
last uses the theorem---just proved---that 0x = 0 for all x.

Invertible elements and zero divisors

An element of a ring is invertible if it has a multiplicative inverse. The element 1 is
always invertible, since 1×1=1, and so is -1 (although -1 may be the same element as 1, as
in ℤ₂). The set of invertible elements is closed under multiplication: if a is invertible and b
is invertible, then so is ab (its inverse is b⁻¹a⁻¹). Since it also contains the multiplicative
identity and inverses, it's a group, called the multiplicative group of the ring. So the
multiplicative groups ℤ*ₘ of integers relatively prime to m are just a special case of this
definition.

One way to show that an element is not invertible is to show that it's a zero divisor. An
element x is a zero divisor if it divides 0, i.e. if there is some element y≠0 such that xy = 0
or yx=0. For example, in ℤ₆, the element 2 is a zero divisor (2*3=0). A zero divisor can't
be invertible because if xy=0 and x⁻¹ existed, then y = x⁻¹xy = x⁻¹·0 = 0. However, in many
rings there are elements that are not invertible without being zero divisors, e.g. 2 in ℤ or x
in ℤ[x].
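A brute-force sketch makes the classification concrete for ℤ₆, the example above: an element is a unit if some b gives ab ≡ 1 (mod m), and a zero divisor if some nonzero b gives ab ≡ 0 (the function name classify is my own):

```python
# Classify the elements of Z_m as invertible or zero divisors by brute force
# (a hypothetical sketch; m = 6 is the example from the text).

def classify(m):
    units = {a for a in range(m)
             if any(a * b % m == 1 for b in range(m))}
    zero_divisors = {a for a in range(m)
                     if a != 0 and any(b != 0 and a * b % m == 0
                                       for b in range(m))}
    return units, zero_divisors

units, zd = classify(6)
print(units)   # → {1, 5}: exactly the elements relatively prime to 6
print(zd)      # → {2, 3, 4}: none of these can be invertible
```

Note that the two sets are disjoint and, together with 0, exhaust ℤ₆; in general a ring may also contain elements that are neither.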

Subrings

A subring S of a ring R is just a subset of R that is also a ring. The main thing we need to
prove in order to show that this is true is that S is closed under all the ring operations.

Examples:

ℤ is a subring of ℚ, which is a subring of ℝ, which is a subring of ℂ. Proof: 0 and 1
are in all of them; a+b is an integer if a and b are both integers, a rational if a and b
are both rationals (proof: m/n + m'/n' = (mn'+nm')/(nn')), and a real if a and b are
both reals; ab is similarly an integer/rational/real if a and b both are.
The even integers (more generally, any set of the form mℤ = { mn | n∈ℤ }) is a
subring of ℤ.
The dyadics are a subring of ℚ.
For any set of primes P, the set ℚ - { m/n | (m,n) = 1 and p|n for some p∈P } is a
subring of ℚ. Proof: It contains 0 and 1; and given m/n and m'/n' in the set, both their
sum (mn'+m'n)/(nn') and their product (mm')/(nn') have denominators dividing nn'. If
some p∈P divided such a denominator, it would divide nn' and hence one of n or n',
so the set is closed under addition and multiplication.


Ideals and quotient rings

An ideal of a ring R is a subring S such that whenever x is in S, xy and yx are in S for any
y in R. An example is the even integers, which are an ideal of the ring of integers. An
example of a subring that is not an ideal is the dyadics: (1/2)(1/3) = 1/6, which is not a
dyadic.

The main reason for looking at ideals is that they have the same role in rings that normal
subgroups have in groups: the additive cosets in R of an ideal I yield a quotient ring R/I.
This follows from the fact that as an additive subgroup, an ideal is always a normal
subgroup (since addition is commutative), so (I+a)+(I+b)=I+(a+b) is well-defined; and that
when we multiply (I+a)(I+b), we get II + Ib + aI + ab = I + ab since the first three terms all
collapse into I. One way to remember the required properties of an ideal is that the ideal
must act like 0 in the quotient ring it yields: it's an additive identity (normal subgroup) but
a multiplicative annihilator (drags products into itself).

Some examples of quotient rings:

ℤ/mℤ is isomorphic to ℤₘ.
ℚ/ℤ, the rationals mod 1, is obtained from the rationals by keeping only the
fractional part after each operation. For example, in ℚ/ℤ, 3/5 + 4/7 = (21+20)/35 =
41/35 = 1 + 6/35 = 6/35.
ℤ₂[x]/(1+x+x²) as given below in the construction of a finite field of order 4. This is
shorthand for ℤ₂[x]/(1+x+x²)ℤ₂[x], where the ideal is the set of all multiples of the
polynomial (1+x+x²); the second ℤ₂[x] disappears because of the usual notational
laziness.
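The ℚ/ℤ example can be sketched with Python's Fraction, which reduces to lowest terms automatically and supports the mod-1 reduction directly (the helper name add_mod1 is my own):

```python
from fractions import Fraction

# Arithmetic in the quotient ring Q/Z: do the rational operation, then
# keep only the fractional part.  Fraction supports % with an integer
# modulus, which does exactly this reduction.

def add_mod1(a, b):
    return (a + b) % 1

# 3/5 + 4/7 = 41/35, which reduces to 6/35 mod 1, as in the text.
print(add_mod1(Fraction(3, 5), Fraction(4, 7)))   # → 6/35
```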

Ring homomorphisms

A ring homomorphism is a function that preserves all the ring operations (this is just a
special case of the usual definition of an algebra homomorphism). So in particular, given a
function f:R→S, we need f(0ᵣ) = 0ₛ, f(1ᵣ) = 1ₛ, f(a+b) = f(a)+f(b), and f(ab) = f(a)f(b).
These are pretty strong requirements, so ring homomorphisms are much rarer than, say,
group homomorphisms.

Some examples of ring homomorphisms:

If R/S is a quotient ring, there is a homomorphism from R to R/S (this is true for all
quotient algebras). So for example, the map f(x) = x mod m is a homomorphism
from ℤ to ℤₘ ≈ ℤ/mℤ.
There is a homomorphism from ℚ - { m/n | (m,n) = 1 and p|n } to ℤₚ, given by
f(m/n) = (m mod p)(n mod p)⁻¹ when m/n is expressed in lowest terms. The tricky
part here is showing that addition and multiplication are preserved. For convenience,
it may help to show that reducing to lowest terms is only needed to get rid of
extraneous copies of p; if p∤a, then (am mod p)(an mod p)⁻¹ = ama⁻¹n⁻¹ = mn⁻¹ =
f(am/an) [with all computations in ℤₚ]. So f(m/n+m'/n') = f((mn'+nm')/nn') =
(mn'+nm')(nn')⁻¹ = mn'n⁻¹(n')⁻¹ + nm'n⁻¹(n')⁻¹ = mn⁻¹ + m'(n')⁻¹ = f(m/n) + f(m'/n')
and f((m/n)(m'/n')) = f(mm'/nn') = mm'n⁻¹(n')⁻¹ = (mn⁻¹)(m'(n')⁻¹) = f(m/n)f(m'/n'),
as desired.
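This homomorphism is easy to experiment with: Fraction normalizes to lowest terms on construction, and pow(n, -1, p) computes the inverse mod p (Python 3.8+). A sketch checking that f preserves addition and multiplication on sample inputs (the name f follows the text; the sample values are my own):

```python
from fractions import Fraction

# f(m/n) = (m mod p)(n mod p)^-1 in Z_p, defined on rationals whose
# lowest-terms denominator is not divisible by p.

def f(q, p):
    q = Fraction(q)   # Fraction automatically reduces to lowest terms
    return q.numerator * pow(q.denominator, -1, p) % p

p = 7
a, b = Fraction(3, 5), Fraction(4, 3)
# Spot-check that f preserves addition and multiplication:
print(f(a + b, p) == (f(a, p) + f(b, p)) % p)   # → True
print(f(a * b, p) == f(a, p) * f(b, p) % p)     # → True
```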

Semirings


A semiring is like a ring, except that the addition group is replaced by a commutative
monoid; alternatively, we can think of a semiring as a ring without additive inverses. The
theorem in rings that 0*x=x*0=0 becomes an additional axiom (the annihilation axiom)
for semirings—without inverses it is impossible to prove it from the other axioms.

An example of a semiring is the tropical semiring or max-plus semiring on ℝ∪{-∞},
where addition is max, multiplication is +, the additive identity is -∞, and the
multiplicative identity is 0. This gives a commutative monoid for both addition (max) and
multiplication (+), since max and + both commute and max(x,-∞) = x, but neither
operation yields a group (for the multiplication operation +, -∞ has no inverse). It also
satisfies the distributive law: x+max(y,z) = max(x+y, x+z); and the annihilation
axiom: -∞ + x = -∞. The tropical semiring is useful for studying scheduling problems: for
example, if it takes 2 years to graduate after taking CPSC 202 and 3 more years after
taking MATH 112, then my graduation time is at least max(t₂₀₂ + 2, t₁₁₂ + 3), and the
ability to manipulate complicated expressions involving max and + may help me plan out
more difficult tasks.

For more examples of semirings see WikiPedia: Semiring.
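The max-plus axioms can be spot-checked directly; a minimal sketch with hypothetical helper names t_add and t_mul:

```python
import math

# Max-plus (tropical) semiring sketch: "addition" is max, "multiplication"
# is ordinary +, additive identity -inf, multiplicative identity 0.

NEG_INF = -math.inf

def t_add(x, y):
    return max(x, y)

def t_mul(x, y):
    return x + y

x, y, z = 2.0, 3.0, 5.0
print(t_mul(x, t_add(y, z)) == t_add(t_mul(x, y), t_mul(x, z)))  # distributivity → True
print(t_add(x, NEG_INF) == x)        # additive identity → True
print(t_mul(x, 0) == x)              # multiplicative identity → True
print(t_mul(NEG_INF, x) == NEG_INF)  # annihilation axiom → True
```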

Fields
If the nonzero elements of a commutative ring form an abelian group, the ring is a field.
An equivalent definition is that 1≠0 (so the group is nonempty) and every nonzero element
is invertible. The most common examples in ordinary mathematics are ℚ, ℝ, and ℂ, with
ℤₚ showing up in NumberTheory. A more exotic example is the field of rational
functions F(x₁,...,xₙ): functions of the form p/q where p and q are both polynomials with
variables x₁ through xₙ and coefficients in F, with q≠0.

For ComputerScience, the most useful fields are the finite fields, since (being finite) they
fit inside computers. There is a unique (up to isomorphism) finite field of size pᵏ for every
prime p and every positive integer k. These fields are known as the Galois fields after
WikiPedia: Evariste_Galois (the guy who died from staying up all night doing his
homework) and are written as GF(pᵏ). When k=1, they have a particularly simple
construction; for larger values of k the Galois fields are more complicated.

The field GF(p) is just ℤₚ with multiplication defined mod p. We've already seen (in
NumberTheory) that every nonzero element of ℤₚ has a multiplicative inverse; everything
else follows from ℤₚ being a ring.

For n > 0, there is a standard construction of GF(pⁿ) based on polynomials. The idea is to
start with all polynomials over ℤₚ, and then take remainders mod some particular
polynomial to get rid of any high-degree terms. We'll do GF(2²) as an example.

Step 1: Consider all polynomials in ℤ₂[x]: 0, 1, x, 1+x, 1+x+x³, etc. Note that the only
coefficients are 0 and 1, since those are the only elements we have in ℤ₂.

Step 2: Choose an irreducible polynomial r; for polynomials of degree 2 or 3, this is
equivalent to the equation r(x) = 0 having no solutions in ℤ₂. The degree (highest
exponent) of this polynomial will determine the n in GF(pⁿ). We'll pick r(x) = 1+x+x².
It is easy to check that r(0) = 1+0+0 = 1 and r(1) = 1+1+1 = 1, so it's irreducible.


Step 3: Let the elements of GF(pⁿ) be the residue classes mod r, i.e. the equivalence
classes of polynomials where p₁~p₂ if there is some polynomial q such that p₁ = p₂ + qr. In
GF(2²) this means that whenever we multiply two polynomials together and get a
polynomial with an x² in it, we subtract r to get rid of the x². In this way we take as
representatives of the classes the polynomials that have degree at most n-1; in GF(2²) this
will be the four polynomials 0, 1, x, and 1+x.

Addition of elements of GF(2²) is done term by term, e.g. (1+x) + (x) = 1. (The x's cancel
since 1+1=0 in ℤ₂.) Multiplication is more complicated. Here's the full multiplication
table:

×    0    1    x    1+x
0    0    0    0    0
1    0    1    x    1+x
x    0    x    1+x  1
1+x  0    1+x  1    x

Most of the entries in the table are straightforward (we know, for example, that 0*y = 0 for
all y). The tricky ones are in the bottom right corner. Let's look at x*x first. This is x²,
which is too big, so we'll take the remainder mod x²+x+1, which in this case means
computing x²-(x²+x+1)=x+1 (remember we are in ℤ₂ where 0-1=1). Similarly we can
compute (1+x)*(1+x) = 1 + 2x + x² = 1+x² = 1+x²-(1+x+x²) = x. For x(1+x) we have
x(1+x) = x+x² = x+x²-(1+x+x²) = 1.
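The table can also be reproduced mechanically: encode each element of GF(2²) as a two-bit mask (bit i holds the coefficient of xⁱ), multiply as polynomials over ℤ₂, and reduce using x² = x+1. A sketch under that encoding (the representation and function names are my own):

```python
# GF(2^2) with elements as bitmasks: 0b00 = 0, 0b01 = 1, 0b10 = x,
# 0b11 = 1+x.  Reduction is mod the irreducible polynomial 1+x+x^2 (0b111).

def gf4_add(a, b):
    return a ^ b   # coefficientwise addition mod 2 is XOR

def gf4_mul(a, b):
    # Schoolbook multiplication of the two degree-<=1 polynomials...
    prod = 0
    for i in range(2):
        if (b >> i) & 1:
            prod ^= a << i
    # ...then eliminate any x^3 or x^2 term using x^2 = x + 1, i.e.
    # subtract (XOR) a shifted copy of 1 + x + x^2.
    for i in (3, 2):
        if (prod >> i) & 1:
            prod ^= 0b111 << (i - 2)
    return prod

elts = [0b00, 0b01, 0b10, 0b11]
for a in elts:
    print([gf4_mul(a, b) for b in elts])
# The rows reproduce the multiplication table above:
# e.g. x*x = 1+x (0b11) and (1+x)*(1+x) = x (0b10).
```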

This construction is a special case of a more general technique called field extension,
where one makes an existing field (ℤ₂ in this case) bigger by tacking on an extra member
(x) that satisfies some polynomial equation not satisfied by any element of the old field
(1+x+x²=0). Another example of the same technique is the construction of the complex
numbers from the reals by taking ℝ and adding a new element i that satisfies 1+i²=0. Our
operation of replacing x² with x+1 in GF(2²) is the equivalent in that field of replacing i²
with -1 in the complex numbers. The fact that we can get away with this in each case can
be justified formally by sufficient handwaving about free rings and quotients.

Note that the structure of GF(2²) is very different from ℤ₄, which isn't a field (since 2*2 =
0 mod 4). Even the additive group of GF(2²) is not the same as ℤ₄; there is no single
element, for example, that generates the entire additive group. This is typical of GF(pⁿ),
whose additive group in general will look like the product of n copies of ℤₚ, and whose
multiplicative group will be terribly tangled up.

For more on finite fields see http://mathworld.wolfram.com/FiniteField.html or


Chapter 23 of BiggsBook.

Exercise
Which of the examples of rings given previously are also fields?

Subfields and homomorphisms

Subfields are fairly common. For example, ℚ is a subfield of ℝ, which is in turn a subfield
of ℂ. Among finite fields, GF(p) is a subfield of GF(pⁿ) for any n (consider the set of
polynomials with only a constant term).

Field homomorphisms are much less common; indeed, we can prove that all
homomorphisms between fields are injective (giving an embedding—a bijection between
the domain and some subfield of the codomain). Proof: Let f:X→Y be a field
homomorphism and suppose that f is not injective; i.e., that f(x)=f(y) for some x≠y. Then
f(x-y) = f(x)-f(y) = 0. But since x≠y, x-y has an inverse (x-y)⁻¹, and so f(1) = f((x-y)(x-
y)⁻¹) = f(x-y)f((x-y)⁻¹) = 0⋅f((x-y)⁻¹) = 0 ≠ 1, contradicting the requirement that f maps 1 to
1. It follows that every field homomorphism is injective.

Vector spaces
A vector space is not, strictly speaking, an algebraic structure of the sort we have defined
previously, as it involves two classes of objects: scalars, which are elements of some field,
and vectors, which are elements of some abelian group, written additively. In addition to
their own field and group operations, scalars and vectors are connected by an operation
called (rather confusingly) scalar multiplication that multiplies a scalar by a vector to
produce another vector. Scalar multiplication must satisfy the following axioms, where
normal letters represent scalars and boldfaced letters represent vectors:

a(bx) = (ab)x: scalar multiplication is associative with multiplication of scalars.
1x = x: the multiplicative identity 1 in the scalar field is also an identity for scalar
multiplication.
a(x+y) = ax+ay: scalar multiplication distributes over vector addition.
(a+b)x = ax+bx: scalar multiplication distributes over scalar addition.

The distributive laws imply further properties like 0x = 0 or (-1)x = -x; the proofs are
essentially the same as in a field. Note that the two zeroes and the two negations in these
formulas are different; on the left-hand side we are in the scalar field (or its additive
group), and on the right-hand side we are in the vector space (thinking of it as a group).

When writing about vector spaces, the vectors have the starring role, and we think of a
vector space V as consisting of the set of vectors, with the field of scalars often assumed to
be some standard field like the reals or the complex numbers. (If we want to emphasize
that the scalar field for some particular vector space V is F, we say that V is a vector space
over F.) The original motivation for vectors was directions in (real, Euclidean, physical)
space: in pirate terms, vector x might represent walking one pace North, vector y might
represent walking three paces East, and vector 9x+12y might represent walking 9 paces
North then 12*3=36 paces East in order to find the buried treasure. The assumption that
vector addition is abelian says that it doesn't matter if we walk the 9x vector or the 12y
vector first---this assumption may or may not hold in the real world of treasure islands
full of traps and pitfalls, but we should get to the same place either way in an idealized
space without obstacles. The properties of scalar multiplication follow from the intuition
that multiplying by a scalar is "scaling" the vector---stretching it or shrinking it in its
original direction by some factor determined by the particular scalar we choose.

If we replace the scalar field with a scalar ring, we get a more general structure called a
module. We won't talk about these much.

For calculational purposes, we often think of vectors as sequences of coordinates, where
each coordinate is an element of the field the vector space is defined over. Adding two
vectors involves adding their coordinates componentwise, e.g. (1,2,3) + (30,20,10) =
(31,22,13), and scalar multiplication also multiplies componentwise, e.g. 3(1,2,3) =
(3,6,9). The length of each sequence is the dimension of the vector space. This approach
can be taken in any vector space that has a finite basis: details are given in
AbstractLinearAlgebra.
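The componentwise operations above translate directly into code; a minimal sketch, with ordinary Python numbers standing in for field elements:

```python
# Coordinate view of a vector space: vectors are tuples of field
# elements, added componentwise and scaled componentwise.

def vec_add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scalar_mul(c, v):
    return tuple(c * a for a in v)

print(vec_add((1, 2, 3), (30, 20, 10)))   # → (31, 22, 13)
print(scalar_mul(3, (1, 2, 3)))           # → (3, 6, 9)
```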

Some other examples of vector spaces:

Any field F is a vector space over itself. The vectors are identical to the scalars, and
scalar multiplication is just the usual multiplication operation in the field.
The set of functions from some fixed set S to a field F is a vector space over F, with
f+g defined by (f+g)(x) = f(x)+g(x), af by (af)(x) = a*f(x), etc. Special cases of this
for restricted classes of functions include random variables with values in F,
continuous functions from the reals to the reals, etc.
The ring of polynomials F[x], where F is a field, is a vector space over F with the
obvious interpretation of scalar multiplication.

Homomorphisms of vector spaces

A homomorphism between two vector spaces over the same field is a homomorphism
between the groups of vectors that also preserves scalar multiplication. In particular, this
means that

For any x and y, f(x+y) = f(x)+f(y).
For any x, f(-x) = -f(x).
f(0) = 0.
For any x and scalar a, f(ax) = a*f(x).

Such homomorphisms are called linear transformations or linear maps. Linear
transformations between finite-dimensional vector spaces can be represented by matrices
(more on this in AbstractLinearAlgebra).
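As a concrete sketch (with an arbitrary example matrix of my own choosing, not anything from the text), one can check the linearity conditions for the map v ↦ Mv:

```python
# A matrix applied to a coordinate vector gives a linear map: it preserves
# vector addition and scalar multiplication.  Rows of M are tuples.

def mat_apply(M, v):
    # Each output coordinate is the dot product of a row of M with v.
    return tuple(sum(m * x for m, x in zip(row, v)) for row in M)

M = ((1, 2), (3, 4))
u, v = (1, 0), (0, 1)
fu, fv = mat_apply(M, u), mat_apply(M, v)
# f(u + v) = f(u) + f(v):
print(mat_apply(M, (1, 1)) == tuple(a + b for a, b in zip(fu, fv)))  # → True
# f(2u) = 2 f(u):
print(mat_apply(M, (2, 0)) == tuple(2 * a for a in fu))              # → True
```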

Subspaces

A subspace of a vector space is just a subgroup of the group of vectors that is closed under
scalar multiplication, i.e. a set V'⊆V such that x+y, -x, and ax are in V' whenever x and y
are, where a is any scalar. Subspaces also have an important role in AbstractLinearAlgebra
and will be discussed further there.

CategoryMathNotes

1. Also called pseudo-rings. You do not need to remember this definition.

2014-06-17 11:57
