
MAPS BETWEEN SPACES: ISOMORPHISM AND HOMOMORPHISM
Lecturer: Askarbekkyzy Aknur
Maps Between Spaces
Definition 1. Let 𝐴 and 𝐵 be arbitrary nonempty sets. Suppose that to each element 𝑎 ∈ 𝐴 there is assigned a unique element of 𝐵, called the image of 𝑎. The collection 𝑓 of such assignments is called a mapping (a map) from 𝐴 to 𝐵, and it is denoted by 𝑓 ∶ 𝐴 → 𝐵.
The set 𝐴 is called the domain of the mapping, and 𝐵 is called the target set (codomain).
Notations:
◦ A map 𝑓 from 𝐴 to 𝐵 is written 𝑓 ∶ 𝐴 → 𝐵.
◦ 𝑓(𝑎) = 𝑏 is also written 𝑓 ∶ 𝑎 ⟼ 𝑏.
Definition 2. Let 𝑉 and 𝑈 be vector spaces over 𝕂. A mapping 𝐹 ∶ 𝑉 → 𝑈 is called a linear mapping or homomorphism if it satisfies the following two conditions:
1. for any vectors 𝑣₁, 𝑣₂ ∈ 𝑉, 𝐹(𝑣₁ + 𝑣₂) = 𝐹(𝑣₁) + 𝐹(𝑣₂);
2. for any scalar 𝜆 ∈ 𝕂 and any vector 𝑣 ∈ 𝑉, 𝐹(𝜆𝑣) = 𝜆𝐹(𝑣).
In other words, 𝐹 preserves the operations of addition and scalar multiplication.
Note that the two conditions above can be combined into a single condition: for any vectors 𝑣₁, 𝑣₂ ∈ 𝑉 and any scalars 𝜆, 𝜇 ∈ 𝕂,

𝐹(𝜆𝑣₁ + 𝜇𝑣₂) = 𝜆𝐹(𝑣₁) + 𝜇𝐹(𝑣₂).


Proposition 1. If 𝐹 ∶ 𝑉 → 𝑈 is a linear mapping,
then it sends the zero vector into the zero vector,
that is, 𝐹 (𝟎) = 𝟎.
Example 1. a) Let 𝐹 ∶ ℝ³ → ℝ² be defined by 𝐹 ∶ (𝑥, 𝑦, 𝑧) ⟼ (𝑥, 𝑦). Let 𝑣, 𝑣₁, 𝑣₂ ∈ ℝ³ with 𝑣 = (𝑥, 𝑦, 𝑧), 𝑣₁ = (𝑥₁, 𝑦₁, 𝑧₁), 𝑣₂ = (𝑥₂, 𝑦₂, 𝑧₂), and let 𝜆 ∈ ℝ. Then

𝐹(𝑣₁ + 𝑣₂) = 𝐹((𝑥₁, 𝑦₁, 𝑧₁) + (𝑥₂, 𝑦₂, 𝑧₂)) = 𝐹(𝑥₁ + 𝑥₂, 𝑦₁ + 𝑦₂, 𝑧₁ + 𝑧₂)
= (𝑥₁ + 𝑥₂, 𝑦₁ + 𝑦₂) = (𝑥₁, 𝑦₁) + (𝑥₂, 𝑦₂) = 𝐹(𝑣₁) + 𝐹(𝑣₂).

Similarly, 𝐹(𝜆𝑣) = 𝐹(𝜆𝑥, 𝜆𝑦, 𝜆𝑧) = (𝜆𝑥, 𝜆𝑦) = 𝜆(𝑥, 𝑦) = 𝜆𝐹(𝑣), so 𝐹 is linear.
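The computation above can be spot-checked numerically on sample vectors. A minimal Python sketch for the projection of Example 1 a) (the helper names `add` and `scale` are ours, not from the lecture):

```python
# Projection F : R^3 -> R^2, F(x, y, z) = (x, y), as in Example 1 a).
def F(v):
    x, y, z = v
    return (x, y)

def add(u, v):
    """Componentwise vector addition."""
    return tuple(a + b for a, b in zip(u, v))

def scale(lam, v):
    """Scalar multiplication of a vector."""
    return tuple(lam * a for a in v)

v1, v2, lam = (1.0, 2.0, 3.0), (-4.0, 0.5, 7.0), 2.5

# Additivity: F(v1 + v2) == F(v1) + F(v2)
assert F(add(v1, v2)) == add(F(v1), F(v2))
# Homogeneity: F(lam * v1) == lam * F(v1)
assert F(scale(lam, v1)) == scale(lam, F(v1))
print("F passed the linearity spot-check on these samples")
```

A passing check on samples does not prove linearity, but a single failing sample disproves it.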
b) Let 𝐹 ∶ 𝑀₂,₂ → ℝ be given by 𝐹 ∶ [𝑎 𝑏; 𝑐 𝑑] ⟼ 𝑎 + 𝑑 (we write a 2×2 matrix as [𝑎 𝑏; 𝑐 𝑑], with rows separated by a semicolon). Let 𝑣, 𝑣₁, 𝑣₂ ∈ 𝑀₂,₂ with 𝑣 = [𝑎 𝑏; 𝑐 𝑑], 𝑣₁ = [𝑎₁ 𝑏₁; 𝑐₁ 𝑑₁], 𝑣₂ = [𝑎₂ 𝑏₂; 𝑐₂ 𝑑₂], and let 𝜆 ∈ ℝ.
We have

𝐹(𝑣₁ + 𝑣₂) = 𝐹([𝑎₁ 𝑏₁; 𝑐₁ 𝑑₁] + [𝑎₂ 𝑏₂; 𝑐₂ 𝑑₂]) = 𝐹([𝑎₁ + 𝑎₂  𝑏₁ + 𝑏₂; 𝑐₁ + 𝑐₂  𝑑₁ + 𝑑₂])
= 𝑎₁ + 𝑎₂ + 𝑑₁ + 𝑑₂ = (𝑎₁ + 𝑑₁) + (𝑎₂ + 𝑑₂) = 𝐹(𝑣₁) + 𝐹(𝑣₂),

and

𝐹(𝜆𝑣) = 𝐹(𝜆[𝑎 𝑏; 𝑐 𝑑]) = 𝐹([𝜆𝑎 𝜆𝑏; 𝜆𝑐 𝜆𝑑]) = 𝜆𝑎 + 𝜆𝑑 = 𝜆(𝑎 + 𝑑) = 𝜆𝐹([𝑎 𝑏; 𝑐 𝑑]) = 𝜆𝐹(𝑣).

Hence 𝐹 is a linear mapping.
c) Let 𝐹 ∶ ℝ² → ℝ² be defined by 𝐹 ∶ (𝑥, 𝑦) ⟼ (𝑥 + 2, 𝑦 − 3).
We note that

𝐹(𝟎) = 𝐹(0, 0) = (0 + 2, 0 − 3) = (2, −3) ≠ (0, 0) = 𝟎.

So 𝐹 does not send the zero vector into the zero vector; therefore, by Proposition 1, 𝐹 is not linear.
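Proposition 1 gives a quick negative test: if 𝐹(𝟎) ≠ 𝟎, the map cannot be linear. A minimal sketch of that test for the map in c):

```python
# Candidate map F : R^2 -> R^2, F(x, y) = (x + 2, y - 3), from Example 1 c).
def F(v):
    x, y = v
    return (x + 2, y - 3)

zero = (0, 0)
print(F(zero))           # F(0) = (2, -3)
# F fails the necessary condition F(0) = 0, so F is not linear.
assert F(zero) != zero
```

Note the converse does not hold: 𝐹(𝟎) = 𝟎 alone does not make a map linear.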
d) Let 𝑉 = 𝑃(𝑡) be the space of polynomials in 𝑡. Let
𝐷 ∶ 𝑃(𝑡) → 𝑃(𝑡) (the derivative mapping)
be defined by
𝐷 ∶ 𝑣 ⟼ 𝑑𝑣/𝑑𝑡
for any 𝑣 = 𝑣(𝑡) ∈ 𝑃(𝑡). Then, by the properties of the derivative from Calculus, one can easily show that 𝐷 is a linear mapping.
e) Again consider a mapping on the space of polynomials,
𝐽 ∶ 𝑃(𝑡) → 𝑃(𝑡) (an integral mapping),
defined by
𝐽 ∶ 𝑣 ⟼ ∫₀ᵗ 𝑣 𝑑𝑡.
Using the properties of integrals, one can show that 𝐽 is linear.
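If a polynomial 𝑎₀ + 𝑎₁𝑡 + ⋯ + 𝑎ₙ𝑡ⁿ is encoded as its coefficient list [𝑎₀, 𝑎₁, …, 𝑎ₙ], both 𝐷 and 𝐽 become simple list operations, and their linearity can be spot-checked. A minimal sketch under that encoding (function names are ours, not from the lecture):

```python
def D(p):
    """Derivative: the coefficient of t^k becomes k*a_k at degree k-1."""
    return [k * a for k, a in enumerate(p)][1:] or [0]

def J(p):
    """Integral from 0 to t: a_k moves to degree k+1 with coefficient a_k/(k+1)."""
    return [0] + [a / (k + 1) for k, a in enumerate(p)]

def add(p, q):
    """Add polynomials, padding the shorter coefficient list with zeros."""
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

p, q, lam = [1, 2, 3], [0, 5], 4      # p = 1 + 2t + 3t^2, q = 5t

# Additivity and homogeneity of D on these samples
assert D(add(p, q)) == add(D(p), D(q))
assert D([lam * a for a in p]) == [lam * a for a in D(p)]
# D(J(p)) recovers p, illustrating how D and J interact on P(t)
assert D(J(p)) == p
```

The same spot-checks pass for `J`, mirroring the linearity of integration.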
f) Define 𝐹 ∶ 𝑉 → 𝑈 by 𝐹(𝑣) = 𝟎 and 𝐼 ∶ 𝑉 → 𝑉 by 𝐼(𝑣) = 𝑣 for any 𝑣 ∈ 𝑉. It is easy to show that both are linear mappings; 𝐹 is called the zero mapping and 𝐼 is called the identity mapping.
Theorem 1. Let 𝑉 and 𝑈 be vector spaces over 𝕂. Let {𝑣₁, . . . , 𝑣ₙ} be a basis of 𝑉 and {𝑢₁, . . . , 𝑢ₙ} be a basis of 𝑈. Then there exists a unique linear mapping 𝐹 ∶ 𝑉 → 𝑈 such that 𝐹(𝑣₁) = 𝑢₁, . . . , 𝐹(𝑣ₙ) = 𝑢ₙ.
Example 2. Let 𝐹 ∶ ℝ² → ℝ² be the linear mapping for which
𝐹(1, 2) = (2, 3) and 𝐹(0, 1) = (1, 4).
Now let us find a formula for 𝐹(𝑎, 𝑏). By the theorem above such an 𝐹 exists and is unique, since {(1, 2), (0, 1)} and {(2, 3), (1, 4)} are bases of ℝ² (it is easy to check that they are bases of ℝ²).
First write (𝑎, 𝑏) as a linear combination of (1, 2) and (0, 1):

(𝑎, 𝑏) = 𝑥(1, 2) + 𝑦(0, 1) = (𝑥, 2𝑥 + 𝑦).

Then 𝑎 = 𝑥 and 𝑏 = 2𝑥 + 𝑦, from which we derive 𝑥 = 𝑎 and 𝑦 = −2𝑎 + 𝑏. Then

𝐹(𝑎, 𝑏) = 𝐹(𝑥(1, 2) + 𝑦(0, 1)) = 𝑥𝐹(1, 2) + 𝑦𝐹(0, 1)
= 𝑎(2, 3) + (−2𝑎 + 𝑏)(1, 4) = (𝑏, −5𝑎 + 4𝑏).

Hence

𝐹(𝑎, 𝑏) = (𝑏, −5𝑎 + 4𝑏).
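The derived formula can be sanity-checked against the two defining values of 𝐹; a minimal sketch:

```python
def F(a, b):
    # Formula derived in Example 2: F(a, b) = (b, -5a + 4b)
    return (b, -5 * a + 4 * b)

# The two defining conditions of the linear map
assert F(1, 2) == (2, 3)
assert F(0, 1) == (1, 4)

# Linearity spot-check: F(2*(1,2) + 3*(0,1)) == 2*F(1,2) + 3*F(0,1)
a, b = 2 * 1 + 3 * 0, 2 * 2 + 3 * 1          # the combined input (2, 7)
assert F(a, b) == (2 * 2 + 3 * 1, 2 * 3 + 3 * 4)
```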
Definition 3. Let 𝐹 ∶ 𝑉 → 𝑈 be a linear mapping. The kernel of 𝐹, written 𝐾𝑒𝑟 𝐹, is the set of vectors in 𝑉 that map into the zero vector 𝟎 in 𝑈, that is,
𝐾𝑒𝑟 𝐹 = {𝑣 ∈ 𝑉 | 𝐹(𝑣) = 𝟎}.
The image of 𝐹, written 𝐼𝑚 𝐹, is the set of images of vectors in 𝑉, that is,
𝐼𝑚 𝐹 = {𝐹(𝑣) ∈ 𝑈 | 𝑣 ∈ 𝑉}.
Proposition 2. Let 𝐹 ∶ 𝑉 → 𝑈 be a linear
mapping. Then 𝐾𝑒𝑟𝐹 is a subspace of 𝑉, and
𝐼𝑚𝐹 is a subspace of 𝑈.
Proposition 3. Suppose 𝑣₁, . . . , 𝑣ₘ span a vector space 𝑉 and suppose 𝐹 ∶ 𝑉 → 𝑈 is linear. Then 𝐹(𝑣₁), . . . , 𝐹(𝑣ₘ) span 𝐼𝑚 𝐹.
Example 3. a) Let 𝐷 be the derivative map from 𝑃₃(𝑡) to 𝑃₃(𝑡). Then

𝐷(𝑎₀ + 𝑎₁𝑡 + 𝑎₂𝑡² + 𝑎₃𝑡³) = 𝑎₁ + 2𝑎₂𝑡 + 3𝑎₃𝑡².

We note that
𝐼𝑚 𝐷 = {𝑏₀ + 𝑏₁𝑡 + 𝑏₂𝑡² | 𝑏₀, 𝑏₁, 𝑏₂ ∈ ℝ} = 𝑃₂(𝑡) and 𝐾𝑒𝑟 𝐷 = ℝ (the constant polynomials).
b) Let 𝐹 ∶ ℝ³ → ℝ³ be defined by
𝐹(𝑥, 𝑦, 𝑧) = (𝑥, 𝑦, 0).
Then 𝐼𝑚 𝐹 = {(𝑎, 𝑏, 𝑐) | 𝑐 = 0} = the 𝑥𝑦-plane and 𝐾𝑒𝑟 𝐹 = {(𝑎, 𝑏, 𝑐) | 𝑎 = 𝑏 = 0} = the 𝑧-axis.
Definition 4. Let 𝐹 ∶ 𝑉 → 𝑈 be a linear mapping. The rank of 𝑭 is defined to be the dimension of its image, and the nullity of 𝑭 is defined to be the dimension of its kernel; namely,

𝑟𝑎𝑛𝑘(𝐹) = 𝑑𝑖𝑚(𝐼𝑚 𝐹) and 𝑛𝑢𝑙𝑙𝑖𝑡𝑦(𝐹) = 𝑑𝑖𝑚(𝐾𝑒𝑟 𝐹).
Example 4. a) Let 𝐷 be the derivative map from 𝑃₃(𝑡) to 𝑃₃(𝑡). We already know that 𝐼𝑚 𝐷 = {𝑏₀ + 𝑏₁𝑡 + 𝑏₂𝑡² | 𝑏₀, 𝑏₁, 𝑏₂ ∈ ℝ} = 𝑃₂(𝑡) and 𝐾𝑒𝑟 𝐷 = ℝ.
Then 𝑟𝑎𝑛𝑘(𝐷) = 3 and 𝑛𝑢𝑙𝑙𝑖𝑡𝑦(𝐷) = 1; note that 𝑑𝑖𝑚 𝑃₃(𝑡) = 3 + 1 = 4.
b) Let 𝐹 ∶ ℝ³ → ℝ³ be defined by 𝐹(𝑥, 𝑦, 𝑧) = (𝑥, 𝑦, 0).
We already know that
𝐼𝑚 𝐹 = {(𝑎, 𝑏, 𝑐) | 𝑐 = 0} = the 𝑥𝑦-plane, so 𝑟𝑎𝑛𝑘(𝐹) = 2, and
𝐾𝑒𝑟 𝐹 = {(𝑎, 𝑏, 𝑐) | 𝑎 = 𝑏 = 0} = the 𝑧-axis, so 𝑛𝑢𝑙𝑙𝑖𝑡𝑦(𝐹) = 1. Note that 𝑑𝑖𝑚 ℝ³ = 2 + 1 = 3.
c) Let 𝐹 ∶ ℝ⁴ → ℝ³ be the linear map

𝐹(𝑥, 𝑦, 𝑧, 𝑡) = (𝑥 − 𝑦 + 𝑧 + 𝑡, 2𝑥 − 2𝑦 + 3𝑧 + 4𝑡, 3𝑥 − 3𝑦 + 4𝑧 + 5𝑡).

We want to find
◦ a basis and the dimension of 𝐼𝑚 𝐹;
◦ a basis and the dimension of 𝐾𝑒𝑟 𝐹.
A basis and dimension of 𝐼𝑚 𝐹: First we find the images of the standard basis vectors of ℝ⁴:

𝐹(1, 0, 0, 0) = (1, 2, 3), 𝐹(0, 1, 0, 0) = (−1, −2, −3),
𝐹(0, 0, 1, 0) = (1, 3, 4), 𝐹(0, 0, 0, 1) = (1, 4, 5).
As we know, these image vectors span 𝐼𝑚 𝐹. To obtain a basis of 𝐼𝑚 𝐹 we need to select the independent vectors among them. To do so, we write the image vectors as the rows of a matrix (as we did in Lecture 4) and find its echelon form.
[1 2 3; −1 −2 −3; 1 3 4; 1 4 5] ~ [1 2 3; 0 0 0; 0 1 1; 0 2 2]
~ [1 2 3; 0 0 0; 0 1 1; 0 0 0] ~ [1 2 3; 0 1 1].
So there are 2 independent vectors which span 𝐼𝑚 𝐹. Hence
{(1, 2, 3), (0, 1, 1)}
is a basis of 𝐼𝑚 𝐹, and 𝑑𝑖𝑚(𝐼𝑚 𝐹) = 𝑟𝑎𝑛𝑘(𝐹) = 2.
A basis and dimension of 𝐾𝑒𝑟 𝐹: Set 𝐹(𝑣) = 𝟎 for some 𝑣 = (𝑥, 𝑦, 𝑧, 𝑡). From the definition of 𝐹 we have

𝐹(𝑣) = (𝑥 − 𝑦 + 𝑧 + 𝑡, 2𝑥 − 2𝑦 + 3𝑧 + 4𝑡, 3𝑥 − 3𝑦 + 4𝑧 + 5𝑡) = (0, 0, 0),

that is,

𝑥 − 𝑦 + 𝑧 + 𝑡 = 0
2𝑥 − 2𝑦 + 3𝑧 + 4𝑡 = 0
3𝑥 − 3𝑦 + 4𝑧 + 5𝑡 = 0

⟺

𝑥 − 𝑦 + 𝑧 + 𝑡 = 0
𝑧 + 2𝑡 = 0
𝑧 + 2𝑡 = 0

⟺

𝑥 − 𝑦 + 𝑧 + 𝑡 = 0
𝑧 + 2𝑡 = 0
The system has free unknowns 𝑦 and 𝑡. Then

(𝑥, 𝑦, 𝑧, 𝑡) = (𝑦 + 𝑡, 𝑦, −2𝑡, 𝑡) = 𝑦(1, 1, 0, 0) + 𝑡(1, 0, −2, 1),

where 𝑦 and 𝑡 are any numbers. Thus,
{(1, 1, 0, 0), (1, 0, −2, 1)}
is a basis of 𝐾𝑒𝑟 𝐹, and 𝑑𝑖𝑚(𝐾𝑒𝑟 𝐹) = 𝑛𝑢𝑙𝑙𝑖𝑡𝑦(𝐹) = 2. As expected,
𝑑𝑖𝑚(𝐼𝑚 𝐹) + 𝑑𝑖𝑚(𝐾𝑒𝑟 𝐹) = 2 + 2 = 4 = 𝑑𝑖𝑚 ℝ⁴.
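The rank computation in part c) can be mirrored in code: row-reduce the matrix whose rows are the image vectors and apply rank–nullity. A minimal sketch in pure Python (the helper name `row_rank` is ours; floats with a small tolerance stand in for exact arithmetic):

```python
def row_rank(rows, eps=1e-12):
    """Rank of a matrix given as a list of rows, by Gaussian elimination."""
    rows = [list(map(float, r)) for r in rows]
    rank, col, n_cols = 0, 0, len(rows[0])
    while rank < len(rows) and col < n_cols:
        # Find a row with a nonzero entry in this column to use as pivot.
        pivot = next((i for i in range(rank, len(rows))
                      if abs(rows[i][col]) > eps), None)
        if pivot is None:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        # Eliminate the column entries below the pivot.
        for i in range(rank + 1, len(rows)):
            f = rows[i][col] / rows[rank][col]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[rank])]
        rank += 1
        col += 1
    return rank

# Images of the standard basis vectors of R^4 under F, written as rows.
images = [(1, 2, 3), (-1, -2, -3), (1, 3, 4), (1, 4, 5)]
rank = row_rank(images)
nullity = 4 - rank          # rank-nullity: dim Ker F = dim R^4 - rank F
print(rank, nullity)        # 2 2
```

This reproduces 𝑟𝑎𝑛𝑘(𝐹) = 2 and 𝑛𝑢𝑙𝑙𝑖𝑡𝑦(𝐹) = 2 from the hand computation.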
We have seen many types of vector spaces so far. In fact, many of them have "the same" vector space structure. For example, 𝑀₂,₂ and 𝑃₃(𝑡) are "the same" as vector spaces; they just have vectors of a different nature. Another example is 𝑀₂,₂ and 𝕂⁴. To make this precise we define a new notion, the isomorphism ("same structure") of vector spaces. Furthermore, we will see that any 𝑛-dimensional vector space 𝑉 over 𝕂 is isomorphic to 𝕂ⁿ.
Definition 5. A mapping 𝐹 ∶ 𝑉 → 𝑈 is called an isomorphism if
1. 𝐹 is linear;
2. 𝐹 is bijective, that is, one-to-one and onto.
If 𝐹 is an isomorphism, then we say 𝑉 and 𝑈 are isomorphic and write 𝑉 ≅ 𝑈.
Recall the definitions of one-to-one and onto mappings from Discrete Mathematics. A mapping 𝐹 ∶ 𝑉 → 𝑈 is said to be one-to-one if 𝐹(𝑣₁) = 𝐹(𝑣₂) implies 𝑣₁ = 𝑣₂. A mapping 𝐹 ∶ 𝑉 → 𝑈 is said to be onto if for any 𝑢 ∈ 𝑈 there exists some 𝑣 ∈ 𝑉 such that 𝐹(𝑣) = 𝑢. In other words, 𝐹 ∶ 𝑉 → 𝑈 is onto if 𝐼𝑚 𝐹 = 𝑈.
Example 5. a) Let 𝑃₂(𝑡) be a vector space over ℝ. Define 𝐹 ∶ 𝑃₂(𝑡) → ℝ³ as follows:

𝑎₀ + 𝑎₁𝑡 + 𝑎₂𝑡² ⟼ (𝑎₀, 𝑎₁, 𝑎₂).

One can show that 𝐹 is onto and one-to-one, and it is easy to check that it is linear. Hence the vector space 𝑃₂(𝑡) over ℝ is isomorphic to ℝ³, and 𝑃₂(𝑡) ≅ ℝ³.
b) Let 𝑃₃(𝑡) and 𝑀₂,₂ be vector spaces over ℝ. Define 𝐹 ∶ 𝑀₂,₂ → 𝑃₃(𝑡) as follows:

[𝑎 𝑏; 𝑐 𝑑] ⟼ 𝑎 + 𝑏𝑡 + 𝑐𝑡² + 𝑑𝑡³.

Then we note that 𝐹 is onto and one-to-one, and it is linear. Hence the vector space 𝑀₂,₂ over ℝ is isomorphic to 𝑃₃(𝑡), and 𝑀₂,₂ ≅ 𝑃₃(𝑡).
Theorem 3. Finite-dimensional vector spaces over 𝕂 are isomorphic if and only if they have the same dimension.
Corollary 1. Any vector space of dimension 𝑛 over 𝕂 is isomorphic to 𝕂ⁿ.
The corollary is important when we work with finite-dimensional vector spaces.
Let 𝑉 be a vector space over 𝕂 of dimension 𝑛. Then 𝑉 ≅ 𝕂ⁿ. Suppose 𝑆 = {𝑢₁, . . . , 𝑢ₙ} is a basis of 𝑉. Then each vector 𝑣 ∈ 𝑉 can be written as a linear combination of the vectors in 𝑆. Suppose 𝑣 = 𝜆₁𝑢₁ + . . . + 𝜆ₙ𝑢ₙ for some 𝜆₁, . . . , 𝜆ₙ ∈ 𝕂.
Then 𝜆₁, . . . , 𝜆ₙ are called the coordinates of 𝑣 with respect to 𝑆. In this case, we write

[𝑣]_𝑆 = (𝜆₁, . . . , 𝜆ₙ).
We note that [𝑢 + 𝑣]_𝑆 = [𝑢]_𝑆 + [𝑣]_𝑆 and [𝜆𝑣]_𝑆 = 𝜆[𝑣]_𝑆 for all 𝑢, 𝑣 ∈ 𝑉 and 𝜆 ∈ 𝕂. If we consider (𝜆₁, . . . , 𝜆ₙ) as a vector of 𝕂ⁿ, then the correspondence 𝑣 ⟼ [𝑣]_𝑆 defines an isomorphism 𝑉 ≅ 𝕂ⁿ.
Example 6. Let 𝑉 = 𝑀₂,₂ over ℝ. As a basis of 𝑀₂,₂ we take

𝑆 = {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]}.

Since

[𝑎 𝑏; 𝑐 𝑑] = 𝑎[1 0; 0 0] + 𝑏[0 1; 0 0] + 𝑐[0 0; 1 0] + 𝑑[0 0; 0 1],

we have [[𝑎 𝑏; 𝑐 𝑑]]_𝑆 = (𝑎, 𝑏, 𝑐, 𝑑). Then 𝐹 ∶ 𝑀₂,₂ → ℝ⁴ defined by [𝑎 𝑏; 𝑐 𝑑] ⟼ (𝑎, 𝑏, 𝑐, 𝑑) is an isomorphism, and we have 𝑀₂,₂ ≅ ℝ⁴.
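The coordinate map of Example 6 and its inverse can be written out explicitly, which makes the bijectivity visible; a minimal sketch (function names are ours, not from the lecture):

```python
def coords(m):
    """[[a, b], [c, d]] -> (a, b, c, d): coordinates w.r.t. the basis S."""
    (a, b), (c, d) = m
    return (a, b, c, d)

def from_coords(v):
    """Inverse map (a, b, c, d) -> [[a, b], [c, d]]."""
    a, b, c, d = v
    return [[a, b], [c, d]]

m = [[1, 2], [3, 4]]
# One-to-one and onto: the map has a two-sided inverse.
assert from_coords(coords(m)) == m

# The map respects addition: [u + v]_S = [u]_S + [v]_S.
m2 = [[5, 6], [7, 8]]
s = [[x + y for x, y in zip(r1, r2)] for r1, r2 in zip(m, m2)]
assert coords(s) == tuple(x + y for x, y in zip(coords(m), coords(m2)))
```

An analogous check works for scalar multiplication, giving the isomorphism 𝑀₂,₂ ≅ ℝ⁴ in code form.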
Let 𝐹 ∶ 𝑉 → 𝑈 and 𝐺 ∶ 𝑉 → 𝑈 be linear mappings. We define the sum 𝐹 + 𝐺 and the scalar product 𝜆𝐹, where 𝜆 ∈ 𝕂, as follows:

(𝐹 + 𝐺)(𝑣) ∶= 𝐹(𝑣) + 𝐺(𝑣) and (𝜆𝐹)(𝑣) ∶= 𝜆𝐹(𝑣).
Proposition 4. Let 𝑉 and 𝑈 be vector spaces over 𝕂, and let 𝐹 ∶ 𝑉 → 𝑈 and 𝐺 ∶ 𝑉 → 𝑈 be linear mappings. Then 𝐹 + 𝐺 and 𝜆𝐹 are linear mappings.
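These operations can be expressed directly on functions; a minimal sketch on 𝑉 = 𝑈 = ℝ² with two illustrative linear maps of our choosing:

```python
def F(v):               # F(x, y) = (x, 0), a projection (linear)
    return (v[0], 0)

def G(v):               # G(x, y) = (y, x), a coordinate swap (linear)
    return (v[1], v[0])

def map_sum(F, G):
    """(F + G)(v) := F(v) + G(v), defined pointwise."""
    return lambda v: tuple(a + b for a, b in zip(F(v), G(v)))

def map_scale(lam, F):
    """(lam * F)(v) := lam * F(v), defined pointwise."""
    return lambda v: tuple(lam * a for a in F(v))

H = map_sum(F, G)
assert H((1, 2)) == (3, 1)      # F(1,2) + G(1,2) = (1,0) + (2,1)
K = map_scale(3, F)
assert K((1, 2)) == (3, 0)

# Spot-check of Proposition 4: the sum of linear maps is again additive.
u, w = (1, 2), (4, -1)
uw = tuple(a + b for a, b in zip(u, w))
assert H(uw) == tuple(a + b for a, b in zip(H(u), H(w)))
```

With these operations, the set of linear maps 𝑉 → 𝑈 itself forms a vector space, as Proposition 5 states.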
Proposition 5. Let 𝑉 and 𝑈 be vector spaces over
𝕂. Then the collection of all linear mappings
from 𝑉 into 𝑈 with the above operations forms a
vector space over 𝕂.
We write 𝐻𝑜𝑚(𝑉, 𝑈) for the space of linear mappings from 𝑉 into 𝑈.
Proposition 6. Suppose 𝑑𝑖𝑚𝑉 = 𝑚 and 𝑑𝑖𝑚𝑈 =
𝑛. Then 𝑑𝑖𝑚 𝐻𝑜𝑚(𝑉, 𝑈) = 𝑚𝑛.
