
Invertibility of Linear Transformations - 1
• Definition: A function f from V into W is said
to be invertible if there exists a function g from W
into V such that g∘f is the identity function on V
and f∘g is the identity function on W.
• Observation 1: If f is invertible, then the
function g is unique; it is called the inverse of f,
denoted by f⁻¹.
• Observation 2: A function f is invertible if and
only if f is injective (old terminology: 1:1 or one-
to-one) and surjective (old terminology: onto, i.e.
the range of f is all of W), i.e. bijective.
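• Aside: a minimal Python sketch (my own illustration, not from the notes) of the definition and the two observations: for a bijective function f given as an explicit finite table, reversing its pairs produces the unique g with g∘f = id on V and f∘g = id on W. All sets and values below are made up for illustration.

# f: a bijective function from V = {0, 1, 2} into W = {'a', 'b', 'c'},
# given as an explicit table (illustrative values only).
f = {0: 'a', 1: 'b', 2: 'c'}

# Because f is bijective, reversing its pairs gives g: W -> V.
g = {w: v for v, w in f.items()}

# g∘f is the identity on V and f∘g is the identity on W,
# so f is invertible and g = f⁻¹.
assert all(g[f[v]] == v for v in f)   # g(f(v)) = v for every v in V
assert all(f[g[w]] == w for w in g)   # f(g(w)) = w for every w in W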
Invertibility of Linear Transformations - 2

• Proposition 37: If T: V → W is an invertible linear
transformation, its inverse function T⁻¹: W → V is also a
linear transformation.
• Proof: See notes.
• Remark: In our earlier terminology, an invertible linear
transformation is an isomorphism. We can now use the
above result to obtain a nice corollary.
• Corollary 37.1: Isomorphism is an equivalence relation
on the set of all vector spaces over a given field F.
• Outline of Proof: Reflexive property is obvious,
symmetric property follows from Proposition 37, and
transitive property can be derived from Proposition 33.
Singular and Non-Singular Linear Transformations

• Definition: A linear transformation T from V into W is
said to be non-singular if the null space of T is {0}, i.e.
Tv = 0 implies v = 0.
• Remark: This is equivalent to saying that T is injective
(we had already noted this when we initially defined the
null space or kernel).
• Proposition 38: Let T be a linear transformation from V
into W. Then T is non-singular if and only if T carries
every linearly independent subset of V into a linearly
independent subset of W.
• Proof: Left as an exercise.
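• Aside: a small numerical illustration of Proposition 38 (my own numpy sketch, not part of the notes): a matrix with trivial null space carries a linearly independent pair to a linearly independent pair, with independence checked via matrix rank.

import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])          # T: R^2 -> R^3 with null space {0}

v1 = np.array([1.0, 0.0])
v2 = np.array([1.0, 1.0])           # {v1, v2} is linearly independent

# A finite set is independent iff the matrix whose columns are
# the vectors has full column rank.
assert np.linalg.matrix_rank(np.column_stack([v1, v2])) == 2
assert np.linalg.matrix_rank(np.column_stack([A @ v1, A @ v2])) == 2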
Invertibility of Linear Transformations - 3
• Proposition 39: Let V and W be finite-
dimensional spaces with dim V = dim W. Let T be
a linear transformation from V into W. Then the
following are equivalent:
a. T is invertible
b. T is non-singular
c. T is surjective, i.e. the range of T is W
d. T carries every basis of V into a basis of W
• Proof is left as an exercise.
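• Aside: an illustrative numpy sketch (not from the notes) of Proposition 39 for V = W = R³, where a linear operator is a 3×3 matrix and conditions (a)-(d) all come down to the matrix having full rank. The matrix T below is an arbitrary example.

import numpy as np

T = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])     # an illustrative operator on R^3

# (b) non-singular: null space is {0}, i.e. full column rank;
# (c) surjective: full row rank -- the same condition for a square matrix;
# (d) the columns of T (the images of the standard basis e1, e2, e3)
#     are linearly independent, hence a basis of R^3.
assert np.linalg.matrix_rank(T) == 3

# (a) invertible: the inverse exists and satisfies T⁻¹T = I.
T_inv = np.linalg.inv(T)
assert np.allclose(T_inv @ T, np.eye(3))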
Invertibility of Linear Transformations - 4
• Remark: The essential point in the above Proposition 39 is
that for finite-dimensional spaces with equal dimension,
if the linear transformation is non-singular (i.e. injective)
then it must be surjective, and if it is surjective, then it
must be injective. However, this holds only for finite-
dimensional spaces.
• For infinite-dimensional spaces V, it is possible to find a
linear operator T: V → V which is surjective but not
injective. Similarly, it is possible to find a linear operator
T: V → V which is injective but not surjective. (Left as an
exercise; a concrete sketch follows below.)
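• Aside: the classical examples on R[t] make the exercise concrete. A hedged Python sketch of my own, with a polynomial stored as its coefficient list [a0, a1, …]: differentiation is surjective but not injective, while multiplication by t is injective but not surjective.

def D(p):
    # Differentiation on R[t]: surjective (every polynomial has a
    # polynomial antiderivative) but not injective (constants map to 0).
    return [k * p[k] for k in range(1, len(p))]

def M(p):
    # Multiplication by t: injective, but not surjective -- nothing
    # with a nonzero constant term lies in the range.
    return [0.0] + p

assert D([5.0]) == [] == D([7.0])         # two constants, one image: not injective
assert D([0.0, 3.0, 1.0]) == [3.0, 2.0]   # D(3t + t^2) = 3 + 2t
assert M([1.0, 2.0]) == [0.0, 1.0, 2.0]   # M(1 + 2t) = t + 2t^2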
Infinite-Dimensional Vector Spaces
• Remark: We had earlier seen that the space R[t] of all polynomials with real
coefficients is infinite-dimensional. We had also discussed the case of the
space C[0,1] of continuous functions, and by using Proposition 18, we could
see that it is also infinite-dimensional. We would like to extend the concepts of
linear dependence/independence and bases to infinite-dimensional spaces. We
therefore make the following definitions:
• Definition: A (possibly infinite) set S of vectors in a vector space V is said to
be linearly independent if every finite subset of S is linearly independent.
• Definition: If S is a subset of V, then Span S = smallest subspace of V which
contains S. This definition covers the case of infinite subsets S and coincides
with our earlier (alternative) definition for finite subsets.
• Remark: Actually, it can be seen that Span S is nothing but the set of all
possible (finite) linear combinations of vectors in S; i.e. Span S = {c₁v₁ + c₂v₂ + … +
cₚvₚ : vᵢ ∈ S, cᵢ ∈ F}. This also coincides with our earlier (initial) definition in
the case that S is finite. You may try this as an exercise.
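• Aside: a quick numerical check of the remark (an illustrative numpy sketch of my own, not from the notes): a vector w lies in Span{v1, v2} exactly when adjoining w to the spanning set does not increase the rank.

import numpy as np

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
w  = 2.0 * v1 - 3.0 * v2            # a linear combination, so w is in the span

S = np.column_stack([v1, v2])
# Adjoining a vector already in the span leaves the rank unchanged.
assert np.linalg.matrix_rank(S) == np.linalg.matrix_rank(np.column_stack([S, w]))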
Infinite-Dimensional Vector Spaces - 2

• Definition: A subset S of a space V is said to be a basis of V if S is linearly
independent and Span S = V.
• Example: The set B = {1, t, t², t³, …} = {tⁿ : n ≥ 0} is a basis for the
space R[t] of all polynomials with real coefficients.
• Note: If we consider the subset Bₙ = {1, t, t², t³, …, tⁿ} of Rₙ[t], then we
can easily see that it is both linearly independent and a spanning set for Rₙ[t]. It
follows that Bₙ is a basis for Rₙ[t] and hence dim Rₙ[t] = n + 1. We have
informally used this earlier; it is now stated explicitly for the sake of
completeness.

• Then B is the union of all the Bₙ. We can show without too much difficulty
that B is both linearly independent and a spanning set for R[t], using the
definitions on the previous slide. Thus B is a basis for R[t] (a small
numerical spot-check follows below).
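• Aside: by the finite-subset definition, independence of the infinite set B reduces to independence of each finite Bₙ. A hedged numerical spot-check of my own: the Vandermonde matrix of {1, t, …, tⁿ} at n + 1 distinct points has full rank.

import numpy as np

n = 4
pts = np.linspace(0.0, 1.0, n + 1)   # n + 1 distinct sample points
V = np.vander(pts, n + 1)            # row i: [x_i^n, ..., x_i, 1]

# Full rank: the only polynomial of degree <= n vanishing at all
# n + 1 points is the zero polynomial, so {1, t, ..., t^n} is
# linearly independent.
assert np.linalg.matrix_rank(V) == n + 1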
Infinite-Dimensional Vector Spaces - 3
• Theorem 4 (Basis Theorem or Fundamental Theorem of
Linear Algebra): Every vector space V has a basis; more
precisely, if v ∈ V is a non-zero vector, then there exists a basis B
of V such that v ∈ B.
• Remark 1: The proof of the above requires advanced concepts
from set theory and is usually not given in elementary linear
algebra textbooks. Moreover, it is a pure existence proof; it
does not provide any technique for constructing a basis.
• Remark 2: The space R[t] of polynomials is exceptional amongst
infinite-dimensional spaces in that we can actually exhibit a basis.
For other interesting spaces such as R and C[a,b], it has not been
possible to provide a construction for a basis.
