
MA 353

Problem Set III


Caden Matthews
To Verify a Vector Space Homomorphism (show that 𝐿 is linear):
Show that 𝐿(𝑎1𝑣1 + 𝑎2𝑣2) = 𝑎1𝐿(𝑣1) + 𝑎2𝐿(𝑣2) for all 𝑎1, 𝑎2 ∈ 𝐹 and for all 𝑣1, 𝑣2 ∈ 𝑉.

When I mention an “ij” entry of a matrix, I am referring to the general entry of the form 𝑎𝑖𝑗.
So even though I am saying the “ij” entry, I am speaking generally of every entry.
22.
a. We wish to show that 𝐿+(𝑐1𝐴 + 𝑐2𝐵) = 𝑐1𝐿+(𝐴) + 𝑐2𝐿+(𝐵). We note that the “ij” entry of
𝑐1𝐴 + 𝑐2𝐵 is equal to 𝑐1𝑎𝑖𝑗 + 𝑐2𝑏𝑖𝑗, and the “ji” entry is 𝑐1𝑎𝑗𝑖 + 𝑐2𝑏𝑗𝑖. Thus, we may conclude
that the “ij” entry of 𝐿+(𝑐1𝐴 + 𝑐2𝐵) is (1/2)[𝑐1(𝑎𝑖𝑗 + 𝑎𝑗𝑖) + 𝑐2(𝑏𝑖𝑗 + 𝑏𝑗𝑖)]. However, we note
that the “ij” entry of 𝐿+(𝐴) is (1/2)(𝑎𝑖𝑗 + 𝑎𝑗𝑖), and thus the “ij” entry of 𝑐1𝐿+(𝐴) is
(𝑐1/2)(𝑎𝑖𝑗 + 𝑎𝑗𝑖). Similarly, the “ij” entry of 𝑐2𝐿+(𝐵) is (𝑐2/2)(𝑏𝑖𝑗 + 𝑏𝑗𝑖). Using the
distributive property, we may conclude that 𝐿+(𝑐1𝐴 + 𝑐2𝐵) = 𝑐1𝐿+(𝐴) + 𝑐2𝐿+(𝐵).
Therefore, 𝐿+ is a vector space homomorphism.
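The linearity computation in part (a) can be spot-checked numerically. A minimal Python sketch using exact rationals, taking 𝐿+ to be 𝐴 ↦ (1/2)(𝐴 + 𝐴^𝑇) as in this problem (the helper names transpose, add, scale, and Lplus are my own):

```python
from fractions import Fraction

def transpose(A):
    """Transpose of a matrix given as a list of rows."""
    return [list(row) for row in zip(*A)]

def add(A, B):
    """Entrywise sum of two matrices."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(c, A):
    """Scalar multiple of a matrix."""
    return [[c * x for x in row] for row in A]

def Lplus(A):
    """L+(A) = (1/2)(A + A^T), the symmetric part of A."""
    return scale(Fraction(1, 2), add(A, transpose(A)))

# Check L+(c1*A + c2*B) == c1*L+(A) + c2*L+(B) on a concrete example.
A = [[1, 2], [3, 4]]
B = [[0, 5], [-1, 7]]
c1, c2 = Fraction(3), Fraction(-2)
lhs = Lplus(add(scale(c1, A), scale(c2, B)))
rhs = add(scale(c1, Lplus(A)), scale(c2, Lplus(B)))
print(lhs == rhs)  # True
```

Exact rationals are used so the equality check is not disturbed by floating-point rounding.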

b. We wish to show that 𝐿−(𝑐1𝐴 + 𝑐2𝐵) = 𝑐1𝐿−(𝐴) + 𝑐2𝐿−(𝐵). We note that the “ij” entry
of 𝑐1𝐴 + 𝑐2𝐵 is equal to 𝑐1𝑎𝑖𝑗 + 𝑐2𝑏𝑖𝑗, and the “ji” entry is 𝑐1𝑎𝑗𝑖 + 𝑐2𝑏𝑗𝑖. Thus, we may
conclude that the “ij” entry of 𝐿−(𝑐1𝐴 + 𝑐2𝐵) is (1/2)[𝑐1(𝑎𝑖𝑗 − 𝑎𝑗𝑖) + 𝑐2(𝑏𝑖𝑗 − 𝑏𝑗𝑖)]. However,
we note that the “ij” entry of 𝐿−(𝐴) is (1/2)(𝑎𝑖𝑗 − 𝑎𝑗𝑖), and thus the “ij” entry of 𝑐1𝐿−(𝐴) is
(𝑐1/2)(𝑎𝑖𝑗 − 𝑎𝑗𝑖). Similarly, the “ij” entry of 𝑐2𝐿−(𝐵) is (𝑐2/2)(𝑏𝑖𝑗 − 𝑏𝑗𝑖). Using the
distributive property, we may conclude that 𝐿−(𝑐1𝐴 + 𝑐2𝐵) = 𝑐1𝐿−(𝐴) + 𝑐2𝐿−(𝐵). Therefore, 𝐿− is a
vector space homomorphism.

c. Given that 𝑎𝑖𝑗 is the “ij” entry of 𝐴, the “ij” entry of 𝐴 + 𝐴^𝑇 must be 𝑎𝑖𝑗 + 𝑎𝑗𝑖. Since these
entries are real numbers, their addition commutes. Thus, this equals 𝑎𝑗𝑖 + 𝑎𝑖𝑗, our “ji” entry. This
shows that 𝐴 + 𝐴^𝑇 is symmetric. Since the subspace of symmetric matrices is closed under
scalar multiplication, (1/2)(𝐴 + 𝐴^𝑇) is symmetric. Thus, 𝐿+(𝐴) is a symmetric matrix.

d. Given that 𝑎𝑖𝑗 is the “ij” entry of 𝐴, the “ij” entry of 𝐴 − 𝐴^𝑇 must be 𝑎𝑖𝑗 − 𝑎𝑗𝑖. We note that
the additive inverse of this entry is 𝑎𝑗𝑖 − 𝑎𝑖𝑗, our “ji” entry. This shows that 𝐴 − 𝐴^𝑇 is
skew-symmetric. Since the subspace of skew-symmetric matrices is closed under scalar
multiplication, (1/2)(𝐴 − 𝐴^𝑇) is skew-symmetric. Thus, 𝐿−(𝐴) is a skew-symmetric matrix.
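Parts (c) and (d) can likewise be checked on a concrete matrix. The sketch below (helper names my own) also verifies the related observation that 𝐿+(𝐴) + 𝐿−(𝐴) = 𝐴:

```python
from fractions import Fraction

def transpose(A):
    """Transpose of a matrix given as a list of rows."""
    return [list(row) for row in zip(*A)]

def half_sum(A, B, sign):
    """(1/2)(A + sign*B), entrywise, with exact rational arithmetic."""
    return [[Fraction(a + sign * b, 2) for a, b in zip(ra, rb)]
            for ra, rb in zip(A, B)]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
S = half_sum(A, transpose(A), +1)   # L+(A), the symmetric part
K = half_sum(A, transpose(A), -1)   # L-(A), the skew-symmetric part

is_symmetric = S == transpose(S)
is_skew = K == [[-x for x in row] for row in transpose(K)]
recomposes = all(S[i][j] + K[i][j] == A[i][j]
                 for i in range(3) for j in range(3))
print(is_symmetric, is_skew, recomposes)  # True True True
```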
e. Assume 𝐿+(𝐴) = 𝐴. Since 𝐿+(𝐴) must be symmetric and is equal to 𝐴, 𝐴 must be
symmetric.
Assume 𝐴 is symmetric. Then, 𝑎𝑖𝑗 = 𝑎𝑗𝑖. The “ij” entry of 𝐿+(𝐴) is
(1/2)(𝑎𝑖𝑗 + 𝑎𝑗𝑖) = (1/2)(𝑎𝑖𝑗 + 𝑎𝑖𝑗) = 𝑎𝑖𝑗. Since the “ij” entry of 𝐴 is equal to the “ij” entry of
𝐿+(𝐴), we may conclude that 𝐿+(𝐴) = 𝐴.
We may now conclude the “if and only if.”

f. Assume 𝐿−(𝐴) = 𝐴. Since 𝐿−(𝐴) must be skew-symmetric and is equal to 𝐴, 𝐴 must be
skew-symmetric.
Assume 𝐴 is skew-symmetric. Then, 𝑎𝑖𝑗 + 𝑎𝑗𝑖 = 0, so 𝑎𝑗𝑖 = −𝑎𝑖𝑗. The “ij” entry of 𝐿−(𝐴) is
(1/2)(𝑎𝑖𝑗 − 𝑎𝑗𝑖) = (1/2)(𝑎𝑖𝑗 + 𝑎𝑖𝑗) = 𝑎𝑖𝑗. Since the “ij” entry of 𝐴 is equal to the “ij” entry of
𝐿−(𝐴), we may conclude that 𝐿−(𝐴) = 𝐴.
We may now conclude the “if and only if.”

g. We wish to find all 𝐴 such that 𝐿+(𝐴) is the zero matrix. Therefore, the “ij” entry of 𝐿+(𝐴)
must be zero. Thus, (1/2)(𝑎𝑖𝑗 + 𝑎𝑗𝑖) = 0. Therefore, 𝑎𝑖𝑗 + 𝑎𝑗𝑖 = 0. We note this indicates that
𝐴 must be a skew-symmetric matrix. The kernel of 𝐿+ is the subspace of all
skew-symmetric matrices.

h. We wish to find all 𝐴 such that 𝐿−(𝐴) is the zero matrix. Therefore, the “ij” entry of 𝐿−(𝐴)
must be zero. Thus, (1/2)(𝑎𝑖𝑗 − 𝑎𝑗𝑖) = 0. Therefore, 𝑎𝑖𝑗 − 𝑎𝑗𝑖 = 0. Finally, 𝑎𝑖𝑗 = 𝑎𝑗𝑖. We note
this indicates that 𝐴 must be a symmetric matrix. The kernel of 𝐿− is the subspace of all
symmetric matrices.
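A quick numerical check of the kernel claims in (g) and (h), assuming the definitions 𝐿+(𝐴) = (1/2)(𝐴 + 𝐴^𝑇) and 𝐿−(𝐴) = (1/2)(𝐴 − 𝐴^𝑇); the example matrices are my own:

```python
from fractions import Fraction

def transpose(A):
    """Transpose of a matrix given as a list of rows."""
    return [list(row) for row in zip(*A)]

def Lplus(A):
    """(1/2)(A + A^T), entrywise."""
    return [[Fraction(a + b, 2) for a, b in zip(ra, rb)]
            for ra, rb in zip(A, transpose(A))]

def Lminus(A):
    """(1/2)(A - A^T), entrywise."""
    return [[Fraction(a - b, 2) for a, b in zip(ra, rb)]
            for ra, rb in zip(A, transpose(A))]

skew = [[0, 2, -3], [-2, 0, 5], [3, -5, 0]]   # a skew-symmetric matrix
sym  = [[1, 4, 7], [4, 2, 9], [7, 9, 3]]      # a symmetric matrix
zero = [[Fraction(0)] * 3 for _ in range(3)]

# Skew matrices are sent to zero by L+; symmetric matrices by L-.
print(Lplus(skew) == zero, Lminus(sym) == zero)  # True True
```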
23. Since 𝑆 and 𝑆' are subspaces, they are nonempty. Thus there exists some 𝑠 ∈ 𝑆, 𝑠' ∈ 𝑆'.
We then know (𝑠, 𝑠') ∈ 𝑆 × 𝑆', so 𝑆 × 𝑆' is nonempty. We will also note now that 𝑆 × 𝑆' is
a subset of 𝑉 × 𝑊 as 𝑆 ⊆ 𝑉 and 𝑆' ⊆ 𝑊. We now wish to show that 𝑆 × 𝑆' is closed under
vector addition. Consider (𝑠, 𝑠') ∈ 𝑆 × 𝑆' and (𝑡, 𝑡') ∈ 𝑆 × 𝑆'. We note that
(𝑠, 𝑠') + (𝑡, 𝑡') = (𝑠 + 𝑡, 𝑠' + 𝑡'). Since 𝑆 and 𝑆' are closed under vector addition,
𝑠 + 𝑡 ∈ 𝑆 and 𝑠' + 𝑡' ∈ 𝑆'. Thus, (𝑠 + 𝑡, 𝑠' + 𝑡') ∈ 𝑆 × 𝑆'. We may now conclude that
𝑆 × 𝑆' is closed under vector addition. We now wish to show that 𝑆 × 𝑆' is closed under
scalar multiplication. Consider (𝑠, 𝑠') ∈ 𝑆 × 𝑆' and a scalar 𝑎 ∈ 𝐹. We note that 𝑎 * (𝑠, 𝑠') = (𝑎 * 𝑠, 𝑎 * 𝑠').
Since 𝑆 and 𝑆' are closed under scalar multiplication, 𝑎 * 𝑠 ∈ 𝑆 and 𝑎 * 𝑠' ∈ 𝑆'. Therefore,
(𝑎 * 𝑠, 𝑎 * 𝑠') ∈ 𝑆 × 𝑆'. We may then conclude that 𝑆 × 𝑆' is closed under scalar
multiplication. We have now shown that 𝑆 × 𝑆' ≤ 𝑉 × 𝑊.
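The closure argument can be illustrated with concrete subspaces; in this sketch the particular subspaces 𝑆 = {(𝑡, 2𝑡)} ≤ 𝑅² and 𝑆' = {(𝑢, 𝑢)} ≤ 𝑅² and all helper names are my own invented examples, not from the problem:

```python
from fractions import Fraction
import random

def in_S(v):
    """Membership in S = {(t, 2t)}."""
    return v[1] == 2 * v[0]

def in_Sp(v):
    """Membership in S' = {(u, u)}."""
    return v[0] == v[1]

def in_product(p):
    """Membership in S x S', where p = (s, s')."""
    return in_S(p[0]) and in_Sp(p[1])

def add_pair(p, q):
    """Componentwise vector addition in V x W."""
    return (tuple(x + y for x, y in zip(p[0], q[0])),
            tuple(x + y for x, y in zip(p[1], q[1])))

def scale_pair(a, p):
    """Componentwise scalar multiplication in V x W."""
    return (tuple(a * x for x in p[0]), tuple(a * x for x in p[1]))

random.seed(0)
ok = True
for _ in range(100):
    t1, t2, u1, u2, a = (Fraction(random.randint(-9, 9)) for _ in range(5))
    p = ((t1, 2 * t1), (u1, u1))
    q = ((t2, 2 * t2), (u2, u2))
    ok = ok and in_product(add_pair(p, q)) and in_product(scale_pair(a, p))
print(ok)  # True
```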

24. We will first show that 𝐷 is a vector space homomorphism. We must show that
𝐷(𝑐1𝑃1 + 𝑐2𝑃2) = 𝑐1𝐷(𝑃1) + 𝑐2𝐷(𝑃2). We will write 𝑃1 as ∑_{𝑗=0}^{𝑛} 𝑎𝑗𝑥^𝑗 and 𝑃2 as
∑_{𝑗=0}^{𝑚} 𝑏𝑗𝑥^𝑗. We note 𝑐1𝑃1 = 𝑐1 ∑_{𝑗=0}^{𝑛} 𝑎𝑗𝑥^𝑗 = ∑_{𝑗=0}^{𝑛} 𝑐1𝑎𝑗𝑥^𝑗, and a similar
equality holds for 𝑐2𝑃2. We may write 𝑐1𝑃1 + 𝑐2𝑃2 as ∑_{𝑗=0}^{max(𝑛,𝑚)} (𝑐1𝑎𝑗 + 𝑐2𝑏𝑗)𝑥^𝑗,
taking any missing coefficients to be zero. We then note that
𝐷(𝑐1𝑃1 + 𝑐2𝑃2) = ∑_{𝑗=1}^{max(𝑛,𝑚)} 𝑗(𝑐1𝑎𝑗 + 𝑐2𝑏𝑗)𝑥^{𝑗−1}. We note that
𝑐1𝐷(𝑃1) + 𝑐2𝐷(𝑃2) = 𝑐1 ∑_{𝑗=1}^{𝑛} 𝑗𝑎𝑗𝑥^{𝑗−1} + 𝑐2 ∑_{𝑗=1}^{𝑚} 𝑗𝑏𝑗𝑥^{𝑗−1}, which equals
∑_{𝑗=1}^{𝑛} 𝑐1𝑗𝑎𝑗𝑥^{𝑗−1} + ∑_{𝑗=1}^{𝑚} 𝑐2𝑗𝑏𝑗𝑥^{𝑗−1} = ∑_{𝑗=1}^{max(𝑛,𝑚)} 𝑗(𝑐1𝑎𝑗 + 𝑐2𝑏𝑗)𝑥^{𝑗−1}.
We can now clearly see that 𝐷(𝑐1𝑃1 + 𝑐2𝑃2) = 𝑐1𝐷(𝑃1) + 𝑐2𝐷(𝑃2). Thus, 𝐷 is a vector
space homomorphism.
We now wish to show that 𝐷 is surjective. Consider some 𝑄 ∈ 𝑃𝑜𝑙𝑦(𝑅). We wish to find
some 𝑃 such that 𝐷(𝑃) = 𝑄. Let 𝑄 equal ∑_{𝑗=1}^{𝑛} 𝑎𝑗𝑥^{𝑗−1}. (We note that any
polynomial may be represented in this way.) Now consider 𝑃 = ∑_{𝑗=0}^{𝑛} 𝑏𝑗𝑥^𝑗 where
𝑏𝑗 = 𝑎𝑗/𝑗 for 𝑗 ≠ 0, and 𝑏0 = 4. (Though we note that the choice of 𝑏0 is completely
arbitrary.) We note 𝐷(𝑃) = ∑_{𝑗=1}^{𝑛} 𝑗𝑏𝑗𝑥^{𝑗−1} = ∑_{𝑗=1}^{𝑛} 𝑎𝑗𝑥^{𝑗−1} = 𝑄. (This is
because 𝑗 ≠ 0 and 𝑗𝑏𝑗 = 𝑎𝑗.) We also note that 𝑃 ∈ 𝑃𝑜𝑙𝑦(𝑅), as all coefficients are real.
We may now conclude that 𝐷 is surjective.
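Representing a polynomial by its coefficient list [𝑎0, 𝑎1, …, 𝑎𝑛], both the map 𝐷 and the preimage construction from the surjectivity proof can be sketched in a few lines (the function names are my own; 𝑏0 = 4 matches the arbitrary choice above):

```python
from fractions import Fraction

def D(p):
    """Formal derivative of a polynomial given as a coefficient list [a0, ..., an]."""
    return [j * p[j] for j in range(1, len(p))]

def preimage(q, b0=Fraction(4)):
    """Build P with D(P) = Q as in the proof: b_j = a_{j-1}/j for j >= 1, b0 arbitrary."""
    return [b0] + [Fraction(q[j - 1], j) for j in range(1, len(q) + 1)]

Q = [3, 0, 5]            # Q(x) = 3 + 5x^2
P = preimage(Q)          # P(x) = 4 + 3x + (5/3)x^3
print(D(P) == Q)         # True
```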
25. We will first show that 𝐼 is a vector space homomorphism. We must show that
𝐼(𝑐1𝑃1 + 𝑐2𝑃2) = 𝑐1𝐼(𝑃1) + 𝑐2𝐼(𝑃2). We will write 𝑃1 as ∑_{𝑗=0}^{𝑛} 𝑎𝑗𝑥^𝑗 and 𝑃2 as
∑_{𝑗=0}^{𝑚} 𝑏𝑗𝑥^𝑗. We note 𝑐1𝑃1 = 𝑐1 ∑_{𝑗=0}^{𝑛} 𝑎𝑗𝑥^𝑗 = ∑_{𝑗=0}^{𝑛} 𝑐1𝑎𝑗𝑥^𝑗, and a similar
equality holds for 𝑐2𝑃2. We may write 𝑐1𝑃1 + 𝑐2𝑃2 as ∑_{𝑗=0}^{max(𝑛,𝑚)} (𝑐1𝑎𝑗 + 𝑐2𝑏𝑗)𝑥^𝑗.
We then note that 𝐼(𝑐1𝑃1 + 𝑐2𝑃2) = ∑_{𝑗=0}^{max(𝑛,𝑚)} (𝑐1𝑎𝑗 + 𝑐2𝑏𝑗)𝑥^{𝑗+1}/(𝑗+1). We note
that 𝑐1𝐼(𝑃1) + 𝑐2𝐼(𝑃2) = 𝑐1 ∑_{𝑗=0}^{𝑛} 𝑎𝑗𝑥^{𝑗+1}/(𝑗+1) + 𝑐2 ∑_{𝑗=0}^{𝑚} 𝑏𝑗𝑥^{𝑗+1}/(𝑗+1),
which equals ∑_{𝑗=0}^{max(𝑛,𝑚)} (𝑐1𝑎𝑗 + 𝑐2𝑏𝑗)𝑥^{𝑗+1}/(𝑗+1). We can now clearly see that
𝐼(𝑐1𝑃1 + 𝑐2𝑃2) = 𝑐1𝐼(𝑃1) + 𝑐2𝐼(𝑃2). Thus, 𝐼 is a vector space homomorphism.
We now wish to show that 𝐼 is injective. Consider some 𝑃(𝑥) and 𝑄(𝑥) in 𝑃𝑜𝑙𝑦(𝑅) such that
𝑃(𝑥) ≠ 𝑄(𝑥). We write 𝑃(𝑥) as ∑_{𝑗=0}^{𝑛} 𝑎𝑗𝑥^𝑗 and 𝑄(𝑥) as ∑_{𝑗=0}^{𝑚} 𝑏𝑗𝑥^𝑗. For
𝑃(𝑥) ≠ 𝑄(𝑥) to hold, we know that for some 𝑗, 𝑎𝑗 ≠ 𝑏𝑗. Let’s call this 𝑗 value 𝑘. That is,
𝑎𝑘 ≠ 𝑏𝑘. We will now apply 𝐼 to 𝑃(𝑥) and 𝑄(𝑥): 𝐼(𝑃(𝑥)) = ∑_{𝑗=0}^{𝑛} 𝑎𝑗𝑥^{𝑗+1}/(𝑗+1) and
𝐼(𝑄(𝑥)) = ∑_{𝑗=0}^{𝑚} 𝑏𝑗𝑥^{𝑗+1}/(𝑗+1). Consider the term of degree 𝑘 + 1 of each of these:
we have 𝑎𝑘𝑥^{𝑘+1}/(𝑘+1) and 𝑏𝑘𝑥^{𝑘+1}/(𝑘+1). We also know that for any non-zero real 𝑟
and reals 𝑎, 𝑏 such that 𝑎 ≠ 𝑏, we have 𝑎𝑟 ≠ 𝑏𝑟. Since 1/(𝑘+1) can never be zero for any
non-negative integer 𝑘, we know that 𝑎𝑘/(𝑘+1) ≠ 𝑏𝑘/(𝑘+1). Since the degree-(𝑘+1) terms
of 𝐼(𝑃(𝑥)) and 𝐼(𝑄(𝑥)) are not equal, 𝐼(𝑃(𝑥)) ≠ 𝐼(𝑄(𝑥)). Thus, 𝐼 is injective.
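On coefficient lists, 𝐼 and the degree-(𝑘+1) comparison from the injectivity argument look like this (a sketch with my own names; exact rationals avoid floating-point issues):

```python
from fractions import Fraction

def I(p):
    """Formal antiderivative with zero constant term, on coefficient lists."""
    return [Fraction(0)] + [Fraction(p[j], j + 1) for j in range(len(p))]

P = [1, 2, 3]            # 1 + 2x + 3x^2
Q = [1, 2, 4]            # same, except the degree-2 coefficient (so k = 2)
# The degree-(k+1) = degree-3 coefficients of the images differ: 3/3 vs 4/3.
print(I(P)[3], I(Q)[3])
print(I(P) == I(Q))      # False
```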
26. 𝑓, 𝑔: 𝑋 → 𝑋, 𝑔 ◦ 𝑓 = 𝐼𝑑𝑋.
1: Suppose 𝑓 is surjective. We wish to show 𝑔 is injective. Suppose 𝑔(𝑎) = 𝑔(𝑏) for some
𝑎, 𝑏 ∈ 𝑋. Since 𝑓 is surjective, there exist 𝑥, 𝑦 ∈ 𝑋 with 𝑓(𝑥) = 𝑎 and 𝑓(𝑦) = 𝑏. Then
𝑥 = (𝑔 ◦ 𝑓)(𝑥) = 𝑔(𝑎) = 𝑔(𝑏) = (𝑔 ◦ 𝑓)(𝑦) = 𝑦, and therefore 𝑎 = 𝑓(𝑥) = 𝑓(𝑦) = 𝑏.
Thus 𝑔 is injective.

2: Suppose 𝑓 is injective. We wish to show 𝑔 is surjective. Consider any 𝑥 ∈ 𝑋. Since
𝑔 ◦ 𝑓 = 𝐼𝑑𝑋, we have 𝑔(𝑓(𝑥)) = 𝑥, so 𝑥 is in the image of 𝑔. Since 𝑥 was arbitrary,
𝑔(𝑋) = 𝑋, and 𝑔 is surjective. (We note that the injectivity of 𝑓 is not actually needed
here; 𝑔 ◦ 𝑓 = 𝐼𝑑𝑋 alone suffices.)

27. Consider some 𝑃(𝑥) ∈ 𝑃𝑜𝑙𝑦(𝑅), with 𝑃(𝑥) = ∑_{𝑗=0}^{𝑛} 𝑎𝑗𝑥^𝑗. We then apply 𝐼 to get
∑_{𝑗=0}^{𝑛} 𝑎𝑗𝑥^{𝑗+1}/(𝑗+1). We opt to rewrite this as ∑_{𝑗=0}^{𝑛+1} 𝑏𝑗𝑥^𝑗 where 𝑏0 = 0
and, for all other 𝑗, 𝑏𝑗 = 𝑎_{𝑗−1}/𝑗. We may now apply 𝐷 to get ∑_{𝑗=1}^{𝑛+1} 𝑗𝑏𝑗𝑥^{𝑗−1}.
Since all 𝑗 in this summation are non-zero, we may rewrite this as
∑_{𝑗=1}^{𝑛+1} 𝑗(𝑎_{𝑗−1}/𝑗)𝑥^{𝑗−1} = ∑_{𝑗=1}^{𝑛+1} 𝑎_{𝑗−1}𝑥^{𝑗−1}. We then note this equals
∑_{𝑗=0}^{𝑛} 𝑎𝑗𝑥^𝑗 = 𝑃(𝑥). This shows that
𝐷 ◦ 𝐼 equals the identity function of 𝑃𝑜𝑙𝑦(𝑅).
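The identity 𝐷 ◦ 𝐼 = 𝐼𝑑 can be confirmed on coefficient lists; the sketch below (my own helper names) also shows that the reverse composition 𝐼 ◦ 𝐷 is not the identity, since the constant term is lost:

```python
from fractions import Fraction

def D(p):
    """Formal derivative on coefficient lists [a0, a1, ...]."""
    return [j * p[j] for j in range(1, len(p))]

def I(p):
    """Formal antiderivative with zero constant term."""
    return [Fraction(0)] + [Fraction(p[j], j + 1) for j in range(len(p))]

P = [2, -1, 0, 7]         # 2 - x + 7x^3
print(D(I(P)) == P)       # True: D o I is the identity
print(I(D(P)) == P)       # False: I o D forgets the constant term
```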
28.
The kernel of 𝐷 is the set of all constant polynomials. That is, all polynomials of the form
𝑎0𝑥^0 (where 𝑎0 is a real number). We note that ∑_{𝑗=1}^{𝑛} 𝑗𝑎𝑗𝑥^{𝑗−1} is zero precisely
when 𝑎𝑗 = 0 for all 𝑗 > 0. All polynomials that satisfy this condition are of the form given
above.
Since 𝐷 is surjective, 𝐷(𝑃𝑜𝑙𝑦(𝑅)) = 𝑃𝑜𝑙𝑦(𝑅).
𝐷^{−1}(1) is the set of all polynomials of the form 𝑎0𝑥^0 + 𝑎1𝑥^1 where 𝑎0 is a real number
and 𝑎1 = 1. For ∑_{𝑗=1}^{𝑛} 𝑗𝑎𝑗𝑥^{𝑗−1} to be one, 𝑎𝑗 must be zero for all 𝑗 > 1, and 𝑎1 = 1.
We note this matches the form given as the answer.
𝐷^{−1}(𝑥) is the set of all polynomials of the form 𝑎0𝑥^0 + 𝑎1𝑥^1 + 𝑎2𝑥^2 where 𝑎0 is a real
number, 𝑎1 = 0, and 𝑎2 = 1/2. For ∑_{𝑗=1}^{𝑛} 𝑗𝑎𝑗𝑥^{𝑗−1} to be 𝑥, we need 𝑎1 to be zero,
2𝑎2 to be one, and 𝑎𝑗 to be zero for all 𝑗 greater than two. This matches the form given.
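The preimage descriptions for 𝐷^{−1}(1) and 𝐷^{−1}(𝑥) can be checked directly for a few choices of the free constant 𝑎0 (a sketch with my own names):

```python
from fractions import Fraction

def D(p):
    """Formal derivative on coefficient lists [a0, a1, ...]."""
    return [j * p[j] for j in range(1, len(p))]

# Any a0 + x lies in D^-1(1); any a0 + (1/2)x^2 lies in D^-1(x).
for a0 in (0, 5, -3):
    assert D([a0, 1]) == [1]
    assert D([a0, 0, Fraction(1, 2)]) == [0, 1]
print("all preimages check out")
```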

29.
The kernel of 𝐼 is the zero polynomial alone. For ∑_{𝑗=0}^{𝑛} 𝑎𝑗𝑥^{𝑗+1}/(𝑗+1) to be zero, 𝑎𝑗
must be zero for all 𝑗. The only polynomial that satisfies this is the zero polynomial.
𝐼(𝑃𝑜𝑙𝑦(𝑅)) is all polynomials in 𝑃𝑜𝑙𝑦(𝑅) except those with a non-zero constant term, i.e.
those with a term 𝑎0𝑥^0 where 𝑎0 is non-zero. The reasoning for this is shown in the next
part: no non-zero constant term exists in ∑_{𝑗=0}^{𝑛} 𝑎𝑗𝑥^{𝑗+1}/(𝑗+1), a general element of
𝐼(𝑃𝑜𝑙𝑦(𝑅)).
It is impossible for ∑_{𝑗=0}^{𝑛} 𝑎𝑗𝑥^{𝑗+1}/(𝑗+1) to be 1, as no non-zero constant term exists
in that polynomial. Therefore, 𝐼^{−1}(1) is empty. This is not shocking, as 𝐼 was not shown
to be surjective.
For ∑_{𝑗=0}^{𝑛} 𝑎𝑗𝑥^{𝑗+1}/(𝑗+1) to be 𝑥, 𝑎𝑗 must be zero for all 𝑗 > 0, and 𝑎0/(0+1) must
be one. We note the only polynomial satisfying this is 𝑓(𝑥) = 1. Thus, 𝐼^{−1}(𝑥) = {1𝑥^0}.
30.
a. We know from results in the text that the image of a vector space homomorphism is a
vector space. Therefore, we need only show that 𝐿^{𝑘+1}(𝑉) ⊆ 𝐿^𝑘(𝑉). We note that
𝐿^{𝑘+1} = 𝐿^𝑘 ◦ 𝐿. (We can see this by decomposing 𝐿^{𝑘+1} into the composition of 𝑘 + 1
copies of 𝐿; ignoring the rightmost 𝐿, the 𝑘 copies on the left compose to 𝐿^𝑘 by
definition.) Thus if we consider some element of 𝐿^{𝑘+1}(𝑉), we know that for some 𝑣 ∈ 𝑉,
𝐿^{𝑘+1}(𝑣) is equal to said element. Thus, 𝐿^𝑘(𝐿(𝑣)) equals this element, so this element
must be in 𝐿^𝑘(𝑉) since 𝐿(𝑣) ∈ 𝑉. We may now conclude that 𝐿^{𝑘+1}(𝑉) ⊆ 𝐿^𝑘(𝑉).
Therefore, 𝐿^{𝑘+1}(𝑉) ≤ 𝐿^𝑘(𝑉).

b. Consider some element 𝑣 of 𝐾𝑒𝑟(𝐿^𝑘). By definition, 𝐿^𝑘(𝑣) = 0𝑉. We know by definition
that 𝐿^{𝑘+1} = 𝐿 ◦ 𝐿^𝑘. Thus, we know that 𝐿^{𝑘+1}(𝑣) = 𝐿(𝐿^𝑘(𝑣)) = 𝐿(0𝑉). While I would
typically assert that homomorphisms map identity to identity, I will demonstrate this. We
know 𝐿(𝑣1 + 𝑣2) = 𝐿(𝑣1) + 𝐿(𝑣2). This means 𝐿(𝑣) = 𝐿(𝑣 + 0𝑉) = 𝐿(𝑣) + 𝐿(0𝑉). We
may then conclude from 𝐿(𝑣) = 𝐿(𝑣) + 𝐿(0𝑉) that 𝐿(0𝑉) is the identity element of the
codomain of the vector space homomorphism. Since our particular homomorphism is from
𝑉 to 𝑉, 𝐿(0𝑉) = 0𝑉. Therefore, 𝐿^{𝑘+1}(𝑣) = 0𝑉. Thus, by definition, 𝑣 ∈ 𝐾𝑒𝑟(𝐿^{𝑘+1}).
Therefore, 𝐾𝑒𝑟(𝐿^𝑘) ⊆ 𝐾𝑒𝑟(𝐿^{𝑘+1}). We know from results in the text that kernels of
vector space homomorphisms are themselves vector spaces. Thus, we may conclude
𝐾𝑒𝑟(𝐿^𝑘) ≤ 𝐾𝑒𝑟(𝐿^{𝑘+1}).

c. Consider some element of 𝐿^𝑘(𝑉). Let 𝑣 ∈ 𝑉 be such that 𝐿^𝑘(𝑣) is said element. We note
that 𝐿^𝑛 = 𝐿^{𝑛−𝑘} ◦ 𝐿^𝑘. Thus, 𝐿^𝑛(𝑣) = 𝐿^{𝑛−𝑘}(𝐿^𝑘(𝑣)). We know that 𝐿^𝑛(𝑣) = 0𝑉.
Thus, 𝐿^{𝑛−𝑘}(𝐿^𝑘(𝑣)) = 0𝑉. Therefore, 𝐿^𝑘(𝑣) is in the kernel of 𝐿^{𝑛−𝑘}. Thus,
𝐿^𝑘(𝑉) ⊆ 𝐾𝑒𝑟(𝐿^{𝑛−𝑘}). By results previously mentioned, we may then conclude
𝐿^𝑘(𝑉) ≤ 𝐾𝑒𝑟(𝐿^{𝑛−𝑘}).

d. Since 𝐿^𝑛(𝑣) = 0𝑉 for all 𝑣 ∈ 𝑉, we have 𝑉 ⊆ 𝐾𝑒𝑟(𝐿^𝑛). Since the domain of 𝐿^𝑛 is 𝑉,
𝐾𝑒𝑟(𝐿^𝑛) ⊆ 𝑉. Therefore, we may conclude 𝑉 = 𝐾𝑒𝑟(𝐿^𝑛).
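The chains in parts (a)–(d) can be illustrated with a concrete nilpotent map; below I use the shift map 𝐿(𝑎, 𝑏, 𝑐) = (0, 𝑎, 𝑏) on 𝑅^3, my own example, which satisfies 𝐿^3 = 0 (so 𝑛 = 3):

```python
def L(v):
    """Shift map on R^3: L(a, b, c) = (0, a, b). Nilpotent with L^3 = 0."""
    return (0, v[0], v[1])

def power(k, v):
    """Apply L to v, k times."""
    for _ in range(k):
        v = L(v)
    return v

v = (1, 2, 3)
# Part d: L^3(v) = 0 for every v, so Ker(L^3) is all of V.
print(power(3, v))                 # (0, 0, 0)
# Part c with k = 2, n = 3: anything in L^2(V) is killed by L^(n-k) = L.
print(L(power(2, v)))              # (0, 0, 0)
# Part b: a vector killed by L^1 is also killed by L^2.
w = (0, 0, 5)
print(power(1, w), power(2, w))    # (0, 0, 0) (0, 0, 0)
```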
