Some Definitions
➤ Ring
Definition: A non-empty set R with two binary operations + and • is called a ring if (R, +) is an abelian group, • is associative, and the distributive laws a(b + c) = ab + ac and (a + b)c = ac + bc hold ∀ a, b, c ∈ R.
➤ Field
Definition: A non-empty set F is called a field if
1. F is an abelian group under addition,
2. F - {0} is an abelian group under multiplication,
3. the right distributive law holds in F,
i.e. ∀ a, b, c ∈ F,
(a + b)c = ac + bc
Example
(i) (ℝ, +, •) is a field
(ii) (ℂ, +, •) is a field
(iii) (ℚ, +, •) is a field
(iv) (ℤ, +, •) is not a field, as (ℤ - {0}, •) is not a group under multiplication.
Vector Space
Similarly V(ℂ) is called the Complex Vector Space and V(ℚ) is called the Rational Vector Space. The set of three-dimensional vectors of geometry is called the Three Dimensional Vector Space and is written as V₃(F) or simply V₃.
Important Remarks:
Also a ∈ H, x ∈ F ⇒ a ∈ F and x ∈ F ⇒ ax ∈ F.
Hence the usual multiplication in F can be taken as the external composition.
Now we verify the various postulates of a vector space in F(H):
V1. (F, +) is an abelian group .
Therefore Fⁿ(F) is closed for the above-defined addition and scalar multiplication operations.
(ab)α = ... = a(bα)
Therefore scalar multiplication is associative in Fⁿ.
V5. Let 1 be the unity element of the field F; then for α = (a₁, a₂, ..., aₙ) ∈ Fⁿ,
1α = (1a₁, 1a₂, ..., 1aₙ)
= (a₁, a₂, ..., aₙ)
= α
From the above discussion, it is clear that Fⁿ satisfies all the axioms of a vector space; therefore Fⁿ(F) is a vector space.
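To make the Fⁿ operations concrete, here is a small numeric spot-check, taking F = ℝ and n = 4; the particular vectors and scalars are illustrative choices, not taken from the text.

```python
# Spot-check of the F^n operations with F = R and n = 4 (illustrative values only).
import numpy as np

alpha = np.array([1.0, -2.0, 0.5, 3.0])
beta = np.array([4.0, 1.0, -1.0, 2.0])
a, b = 2.0, -3.0

print(alpha + beta)     # componentwise vector addition in R^4
print(a * alpha)        # scalar multiplication

# Two of the axioms verified numerically: associativity of scalar
# multiplication and V5 (multiplication by the unity element).
assert np.allclose((a * b) * alpha, a * (b * alpha))
assert np.allclose(1.0 * alpha, alpha)
```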
Remarks:
Let M = {[aij]m×n | aij ∈ ℝ}. If A, B ∈ M,
where A = [aij]m×n, B = [bij]m×n, and a ∈ ℝ, then matrix addition and scalar multiplication in M are defined as follows:
A + B = [aij + bij]m×n and aA = [a·aij]m×n
[by (1)]
From the above discussion, it is clear that P(x) satisfies all the
axioms for the vector space, therefore P(x) is a vector space.
Therefore ℝ+ is closed for the above defined vector addition (and
scalar multiplication).
Verification of the vector space axioms in ℝ⁺(ℝ):
V1. It can be easily seen that (ℝ⁺, ⊕) is an abelian group,
for 1 ⊕ x = x ⊕ 1 = x·1 = x [∵ x·1 = x, ∀ x ∈ ℝ⁺]
and for every x ∈ ℝ⁺, there exists 1/x ∈ ℝ⁺ such that x ⊕ (1/x) = x·(1/x) = 1.
V5.
From the above discussion, it is clear that ℝ+ satisfies all the
axioms for the vector space, therefore ℝ+(ℝ) is a vector space.
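A quick numerical check of this construction is sketched below. The excerpt does not display the operations explicitly, so the sketch assumes the standard choice for this example: vector addition x ⊕ y = xy and scalar multiplication a ⊙ x = xᵃ.

```python
# Numeric spot-check of R+ as a vector space over R, assuming
# x (+) y = x*y and a (.) x = x**a (the usual operations for this example).
import math

def vadd(x, y):          # "vector addition" on R+
    return x * y

def smul(a, x):          # "scalar multiplication", a in R, x in R+
    return x ** a

x, y, a, b = 2.5, 4.0, 3.0, -1.5

assert math.isclose(vadd(1.0, x), x)            # 1 acts as the zero vector
assert math.isclose(vadd(x, 1.0 / x), 1.0)      # 1/x is the additive inverse of x
assert math.isclose(smul(a + b, x), vadd(smul(a, x), smul(b, x)))       # (a+b).x = a.x (+) b.x
assert math.isclose(smul(a, vadd(x, y)), vadd(smul(a, x), smul(a, y)))  # a.(x (+) y) = a.x (+) a.y
assert math.isclose(smul(1.0, x), x)            # V5
print("all checks passed")
```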
(i) a0 = 0, ∀ a ∈ F
(ii) 0α = 0, ∀ α ∈ V
(iii) a(-α) = -(aα), ∀ a ∈ F, α ∈ V
(iv) (-a)α = -(aα), ∀ a ∈ F, α ∈ V
(v) a(α - β) = aα - aβ, ∀ a ∈ F, α, β ∈ V
(vi) For any a ∈ F, α ∈ V
aα = 0 ⇒ a = 0 or α = 0
Proof:
(i) ∵ 0 is the zero vector in V, therefore 0 + 0 = 0
⇒ a(0 + 0) = a0, ∀ a ∈ F
⇒ a0 + a0 = a0 ...... [by V2]
⇒ a0 + a0 = a0 + 0 ...... [∵ 0 is zero element in V]
⇒ a0 = 0 ...(1) [by cancellation law in V]
(ii) ∵ 0 ∈ F is the additive identity in F, therefore
0 + 0 = 0
⇒ (0 + 0)α = 0α, ∀ α ∈ V
⇒ 0α + 0α = 0α ...... [by V3]
⇒ 0α + 0α = 0α + 0 ...... [∵ 0 is zero element in V]
⇒ 0α = 0 [by cancellation law in V]
(iii) For any a ∈ F and α ∈ V,
a[α + (-α)] = aα + a(-α) ...... [by V2]
⇒ a0 = aα + a(-α)
⇒ 0 = aα + a(-α) ...... [by (1)]
⇒ a(-α) is the additive inverse of aα
⇒ a(-α) = -(aα)
= aα - aβ
(vi) Let aα = 0. If a ≠ 0, then a⁻¹ exists in F, and
a⁻¹(aα) = a⁻¹0 = 0
⇒ (a⁻¹a)α = 0
⇒ 1α = 0
⇒ α = 0 ......[by V5]
Hence aα = 0 ⇒ a = 0 or α = 0
(i) If α ≠ 0, then aα = bα ⇒ a = b
(ii) If a ≠ 0, then aα = aβ ⇒ α = β
(i) aα = bα ⇒ aα - bα = 0
⇒ [a + (-b)]α = 0
⇒ (a - b)α = 0 .... [by V3]
⇒ a - b = 0 [∵ α ≠ 0]
⇒ a = b
(ii) aα = aβ ⇒ aα - aβ = 0
⇒ a(α - β) = 0 .... [by V2]
⇒ α - β = 0 [∵ a ≠ 0]
⇒ α = β
(iii)
(a) It can be easily seen that the additive identity for the defined vector addition does not exist. Suppose some element (c, d) ∈ V is taken as the additive identity for vector addition; then
(a, b) + (c, d) = (0, b + d) ≠ (a, b)
Therefore (V, +) is not an abelian group.
Space Kⁿ
Let K be an arbitrary field. The notation Kⁿ is frequently used to denote the set of all n-tuples of elements in K. Here Kⁿ is a vector space over K using the following operations:
(i) Vector Addition: (a₁, a₂, ..., aₙ) + (b₁, b₂, ..., bₙ) = (a₁ + b₁, a₂ + b₂, ..., aₙ + bₙ)
(ii) Scalar Multiplication: k(a₁, a₂, ..., aₙ) = (ka₁, ka₂, ..., kaₙ)
Polynomial Space P(t)
Let P(t) denote the set of all polynomials of the form p(t) = a₀ + a₁t + a₂t² + ... + aₛtˢ, where the coefficients aᵢ belong to a field K.
Vector Subspaces
Let V(F) be a vector space and W ⊂ V. We have also seen that the set W is closed for the binary operation + if, for any α, β ∈ W, α + β ∈ W.
Proof.
The condition is necessary (⇒) :
Let W be a subspace of V(F). Then W itself is also a vector space
wrt the operations defined for V. Therefore (W, +) is also an abelian
group.
Consequently,
[by V5]
Therefore W is closed for the vector addition.
Now taking a = -1, b = 0, then again by the given condition
Conversely:
Let W₁ ∪ W₂ be a subspace of the vector space V(F).
Therefore p, q ∈ F and α, β ∈ W ⇒ pα + qβ ∈ W
Therefore W is a subspace of the vector space V 3(F).
∵ V₃(ℝ) = {(a, b, c) | a, b, c ∈ ℝ}
and W = {(a, b, c) | a - 3b + 4c = 0; a, b, c ∈ ℝ}, therefore clearly W ⊂ V₃(ℝ)
Let α = (a₁, b₁, c₁) ∈ W and β = (a₂, b₂, c₂) ∈ W, so that
a₁ - 3b₁ + 4c₁ = 0 and a₂ - 3b₂ + 4c₂ = 0 .....(1)
Now if p, q ∈ ℝ, then pα + qβ = (pa₁ + qa₂, pb₁ + qb₂, pc₁ + qc₂), where
(pa₁ + qa₂) - 3(pb₁ + qb₂) + 4(pc₁ + qc₂) = p(a₁ - 3b₁ + 4c₁) + q(a₂ - 3b₂ + 4c₂) = 0 + 0 = 0 [by (1)]
so that pα + qβ ∈ W.
Therefore p, q ∈ ℝ and α, β ∈ W ⇒ pα + qβ ∈ W
∴ W is a subspace of the vector space V3(ℝ).
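The closure condition can also be spot-checked numerically; the following short sketch uses two arbitrary vectors satisfying a - 3b + 4c = 0 and arbitrary scalars p, q.

```python
# Numeric check that p*alpha + q*beta stays in W = {(a, b, c) : a - 3b + 4c = 0}.
import numpy as np

def in_W(v):
    a, b, c = v
    return np.isclose(a - 3 * b + 4 * c, 0.0)

alpha = np.array([3.0, 1.0, 0.0])     # 3 - 3 + 0 = 0, so alpha is in W
beta = np.array([-4.0, 0.0, 1.0])     # -4 - 0 + 4 = 0, so beta is in W
p, q = 2.5, -7.0

assert in_W(alpha) and in_W(beta)
assert in_W(p * alpha + q * beta)     # closure under the linear combination
print("p*alpha + q*beta lies in W")
```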
(b) Let
where x₁, x₂ ∈ ℝ. If a, b ∈ ℝ, then
Therefore W₂ is a subspace of the vector space V₃(ℝ).
(c) Let α = (3, 5, 4) ∈ W₃; then for a = √2 ∈ ℝ
and p ∈ F, then
= a·0 + b·0 [by the properties of the field F]
= 0 + 0 = 0 ...(3)
Therefore by (2) and (3),
aα + bβ = (ax₁ + by₁, ax₂ + by₂, ax₃ + by₃) ∈ W
Therefore W is a subspace of V₃(F).
Let f(x), g(x) ∈ S, so that f(x) and g(x) are polynomials over F of degree ≤ n. Now if a and b are any scalars in F, then af(x) + bg(x) will also be a polynomial of degree ≤ n.
If (x₁, y₁, z₁) and (x₂, y₂, z₂) are solutions of the given equations,
hence if (x₁, y₁, z₁) ∈ V and (x₂, y₂, z₂) ∈ V, then (x₁ + x₂, y₁ + y₂, z₁ + z₂) is also in V.
Similarly a(αx₁) + b(αy₁) + c(αz₁) = α(ax₁ + by₁ + cz₁) = α·0 = 0
(a) Let
Again let α = (a₁, a₂, ..., aₙ) ∈ W₁ and β = (b₁, b₂, ..., bₙ) ∈ W₁,
then by the given condition a₁ ≥ 0, b₁ ≥ 0.
Now for a, b ∈ ℝ
But (aa₁ + bb₁)² is not necessarily equal to aa₂ + bb₂.
For example, when a₂ = 4, a₁ = 2, b₂ = 9, b₁ = 3, a = 2, b = -2,
then aa₂ + bb₂ = 8 - 18 = -10, whereas (aa₁ + bb₁)² = (4 - 6)² = 4.
Hence W₃ is not a subspace of ℝⁿ.
(d) Let
Again let α = (a₁, a₂, ..., aₙ) ∈ W₄ and β = (b₁, b₂, ..., bₙ) ∈ W₄.
Then by the given condition a₁a₂ = 0 and b₁b₂ = 0.
Now for a, b ∈ ℝ
(e) Let
Again let α = (a₁, a₂, ..., aₙ) ∈ W₅ and β = (b₁, b₂, ..., bₙ) ∈ W₅.
Then by the given condition a₂ and b₂ are rational numbers.
Now for a, b ∈ ℝ,
for example,
if we take a = √2, b = √3, a₂ = 3, b₂ = 4, then
aa₂ + bb₂ = 3√2 + 4√3, which is not rational.
Therefore aα + bβ ∉ W₅. Hence W₅ is not a subspace of ℝⁿ.
Linear Combination
Spanning Set
Let V be a vector space over K. Vectors u₁, u₂, ..., uₘ in V are said to span V, or to form a spanning set of V, if every v in V is a linear combination of the vectors u₁, u₂, ..., uₘ, that is, if there exist scalars a₁, a₂, ..., aₘ in K such that
v = a₁u₁ + a₂u₂ + ... + aₘuₘ
The following remarks follow directly from the definition.
Remark
1. Suppose u₁, u₂, ..., uₘ span V. Then, for any vector w, the set w, u₁, u₂, ..., uₘ also spans V.
2. Suppose u₁, u₂, ..., uₘ span V, and suppose uₖ is a linear combination of some of the other u's. Then the u's without uₖ also span V.
3. Suppose u₁, u₂, ..., uₘ span V, and suppose one of the u's is the zero vector. Then the u's without the zero vector also span V.
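In ℝⁿ the spanning condition reduces to a rank computation: u₁, ..., uₘ span ℝⁿ exactly when the matrix having them as rows has rank n. A small sketch (the first test set is the one used in Example 6 later in this section; the second is an arbitrary illustration):

```python
# Rank-based spanning test for R^n.
import numpy as np

def spans_Rn(vectors, n):
    return np.linalg.matrix_rank(np.array(vectors, dtype=float)) == n

# The four vectors of Example 6 span R^3 ...
print(spans_Rn([(1, 0, 0), (1, 1, 0), (1, 1, 1), (0, 1, 0)], 3))   # True
# ... while two vectors can never span R^3.
print(spans_Rn([(1, 2, 1), (3, 1, 5)], 3))                          # False
```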
where
and
Hence aα + bβ ∈ L(S)
Remark:
Again
Therefore a, b ∈ F
and
∴ (W1 + W2) is also a subspace of V(F).
.... (4)
But we know that L(W₁ ∪ W₂) is the smallest subspace of V(F) containing W₁ ∪ W₂, and by (3)
therefore
.... (5)
(4) and (5) ⇒ L(W₁ ∪ W₂) = W₁ + W₂.
Theorem: If S and T are subsets of a vector space V(F) then:
(a) S ⊂ T ⇒ L(S) ⊂ L(T)
(b) S ⊂ L(T) ⇒ L(S) ⊂ L(T)
(c) L(S ∪ T) = L(S) + L(T)
(d) S is a subspace of V ⇔ L(S) = S
(e) L{L(S)} = L(S)
Proof:
Therefore
Therefore
Let
Now (2, -5, 3) = a₁(1, -3, 2) + a₂(2, -4, -1) + a₃(1, -5, 7)
and
(1) and (3)
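The relation above asks whether (2, -5, 3) lies in the span of (1, -3, 2), (2, -4, -1), (1, -5, 7). Since the worked computation is not fully reproduced in this excerpt, here is a numeric version of the standard consistency test (compare the ranks of the coefficient and augmented matrices):

```python
# Does a1*(1,-3,2) + a2*(2,-4,-1) + a3*(1,-5,7) = (2,-5,3) have a solution?
import numpy as np

A = np.array([[1, 2, 1],
              [-3, -4, -5],
              [2, -1, 7]], dtype=float)      # columns are the three given vectors
b = np.array([2, -5, 3], dtype=float)

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
print(rank_A, rank_Ab)        # 2 and 3 here, so the system is inconsistent
print(rank_A == rank_Ab)      # False: (2,-5,3) is not a linear combination of the three vectors
```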
Example 3: In the vector space V₃(ℝ), let α₁ = (1, 2, 1), α₂ = (3, 1, 5), α₃ = (3, -4, 7). Then prove that the subspaces spanned by S = {α₁, α₂} and T = {α₁, α₂, α₃} are the same.
Since the linear span L(T) of T is the set of LC of the vectors α₁, α₂, α₃, therefore let
Let
then
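For Example 3, the equality L(S) = L(T) comes down to showing that α₃ already lies in the span of α₁ and α₂. A least-squares solve confirms this numerically (a check, not the book's own argument):

```python
# Check that alpha3 is a linear combination of alpha1 and alpha2.
import numpy as np

a1 = np.array([1.0, 2.0, 1.0])
a2 = np.array([3.0, 1.0, 5.0])
a3 = np.array([3.0, -4.0, 7.0])

coeffs, *_ = np.linalg.lstsq(np.column_stack([a1, a2]), a3, rcond=None)
print(coeffs)                                              # approximately [-3.  2.]
assert np.allclose(coeffs[0] * a1 + coeffs[1] * a2, a3)    # alpha3 = -3*alpha1 + 2*alpha2
```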
⇒ α₁, α₂, ..., αₙ is LD (∵ -1 ≠ 0)
⇒ S is LD
Therefore let there exist scalars a₁, a₂, ..., aₖ₋₁ ∈ F such that
Conversely: Let S be LD.
Therefore by (1),
..... (2)
where bᵢ ∈ F (i = 1, 2, ..., k) are scalars such that at least one of them is non-zero. But aₖ ≠ 0, because if aₖ = 0 then the set {α₁, α₂, ..., αₖ₋₁} will be LD, which is contrary to our earlier assumption.
Therefore by (2)
α₁ = (1, 3, 2); α₂ = (1, -7, -8); α₃ = (2, 1, -1)
⇒ Sn is LI.
⇒ Every finite subset of S is LI
⇒ S is also LI.
.... (1)
But α1, α2, α3 are LI, therefore by (1)
Consequently the vectors α 1 + α2, α2 + α3, α3 + α1 are also LI.
(b) Again let b1, b2, b3 ∈ ℂ such that
But α1, α2, α3 are LI, therefore by (2)
but
Let {α₁ + a₁α₂ + a₂α₃, α₂, α₃} be a LD set; then there exist a, b, c ∈ F (not all zero) such that
Now, if the coefficients of the vectors α₁, α₂, α₃ in the above relation (1) are not all zero, then the set {α₁, α₂, α₃} will also be LD.
If a ≠ 0, then for any value of b and c, the set {α₁, α₂, α₃} will be LD.
If a = 0, then at least one of b and c will not be zero
(because if all the three are zero, then the other set will not be LD).
Consequently, at least one of the coefficients (aa₁ + b) and (aa₂ + c) will not be zero.
Therefore the set {α₁, α₂, α₃} will be LD.
Example: (a) Let u = (1, 1, 0), v = (1, 3, 2), w = (4, 9, 5). Then u, v, w are linearly dependent, because 3u + 5v - 2w = 3(1, 1, 0) + 5(1, 3, 2) - 2(4, 9, 5) = (0, 0, 0) = 0.
(b) We show that the vectors u = (1, 2, 3), v = (2, 5, 7), w = (1, 3, 5) are linearly independent. We form the vector equation xu + yv + zw = 0, where x, y, z are unknown scalars. This yields the homogeneous system
x + 2y + z = 0, 2x + 5y + 3z = 0, 3x + 7y + 5z = 0.
Back-substitution yields x = 0, y = 0, z = 0. We have shown that xu + yv + zw = 0 implies x = 0, y = 0, z = 0.
Accordingly, u, v, w are linearly independent.
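The same conclusion can be reached numerically: u, v, w are independent exactly when the matrix having them as columns has rank 3 (equivalently, a nonzero determinant).

```python
# Numeric confirmation that u = (1,2,3), v = (2,5,7), w = (1,3,5) are independent.
import numpy as np

M = np.array([[1, 2, 3], [2, 5, 7], [1, 3, 5]], dtype=float).T   # columns u, v, w
print(np.linalg.matrix_rank(M))    # 3, so the vectors are linearly independent
print(np.linalg.det(M))            # nonzero (equals 1 for this matrix)
```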
Lemma: Suppose two or more nonzero vectors v 1, v2, ......, v m are
linearly dependent. Then one of the vectors is a linear combination
of the preceding vectors; that is, there exists k > 1 such that
vₖ = c₁v₁ + c₂v₂ + ... + cₖ₋₁vₖ₋₁,
Theorem: The nonzero rows of a matrix in echelon form are
linearly independent.
Proof: Every non-zero row of a matrix in reduced row-echelon form
contains a leading 1, and the other entries in that column are
zeroes. Then any linear combination of those other non-zero rows
must contain a zero in that position, so the original non-zero row
cannot be a linear combination of those other rows. This is true no
matter which non-zero row you start with, so the non-zero rows of
the matrix must be linearly independent.
Remark:
Examples of Bases
Example 1: (Finite Basis) Let Vₙ(F) be a vector space; then S = {e₁, e₂, ..., eₙ} is a basis of Vₙ(F), where e₁ = (1, 0, 0, ..., 0); e₂ = (0, 1, 0, ..., 0); ...; eₙ = (0, 0, ..., 0, 1).
Earlier we have already proved that S is LI.
Again, for every vector α = (a₁, a₂, ..., aₙ) of Vₙ,
there exist a₁, a₂, ..., aₙ ∈ F
such that α = a₁e₁ + a₂e₂ + ... + aₙeₙ
⇒ each vector of Vₙ is a LC of elements of S.
Remarks:
Theorem: (Replacement theorem):
Let V(F) be a vector space which is generated by a finite set S =
{v1, v2,......vn} of V, then any LI set of V contains not more than n
elements.
Proof: Let S = {v 1, v2,......vn} generates the vector space V.
In particular, u₁ ∈ S' ⊂ V
⇒ u₁ is a LC of v₁, v₂, ..., vₙ
⇒ {u₁, v₁, v₂, ..., vₙ} (= S₁, say) is LD and L(S₁) = V (since the set S₁ is a spanning set for V)
Let us remove this vector v k from S1 and denote the remaining set
by S2 i.e.
Therefore remove this vector from the set S₃ and denote the remaining set by S₄, which generates V.
Proceeding in this manner, we find that each step consists in the exclusion of a v and the inclusion of a u, and the resulting set after each step generates V.
If m > n, then after n steps we obtain a set {u₁, u₂, ..., uₙ} which generates V, and therefore uₙ₊₁ is a LC of the preceding vectors, leading us to the conclusion that the set S' is LD,
which contradicts the hypothesis that S' is LI.
Hence m ≯ n, i.e. m ≤ n.
To prove m = n:
Now S₁ is a basis ⇒ S₁ is LI and L(S₁) = V ...(i)
and S₂ is a basis ⇒ S₂ is LI and L(S₂) = V ...(ii)
(i) and (ii) ⇒ L(S₁) = V and S₂ is LI
Therefore, by the above result, m ≤ n ...(1)
Also, when L(S₂) = V and S₁ is LI,
n ≤ m ...(2)
(1) and (2) ⇒ m = n.
Obviously, L(B”) = V.
Now if B” is LI, then this will be a basis of V and is the required
extension set.
If B” is LD, then we repeat the process till, after a finite number of steps, we obtain a LI set containing α₁, α₂, ..., αₘ and generating V, i.e. a basis of V.
Since dim V = n, therefore every basis of V will contain n elements.
Thus exactly (n - m) elements of B will be adjoined to S.
so as to form a basis of V
which is the extended form of S. Hence either S is already a basis
(when n = m) or it can be extended (when m < n) by adjoining (n -
m) elements of B to form the basis of V.
Another form: "Any LI subset of a finite dimensional vector space (FDVS) V is a part of a basis."
Theorem: In an n-dimensional vector space V(F), any set of (n + 1)
or more vectors of V is LD.
Proof: Let V(F) be a vector space and dim V = n.
Therefore every basis of V will contain exactly n elements.
Let S be a subset of V containing (n + 1) or more vectors.
Let, if possible, S be LI; then either it is already a basis or it can be extended to form a basis of V.
Thus in both the cases, the basis will contain (n + 1) or more than
(n + 1) vectors which contradicts the hypothesis that V is n-
dimensional.
Therefore S is LD and so is every superset of the same.
Dimension of a Subspace
⇒ dim W = dim V.
Let V(F) and V'(F) be two vector spaces; then the product set V × V' = {(α, α') : α ∈ V, α' ∈ V'}
Disjoint subspaces
Two subspaces W 1 and W2 of the vector space V(F) are said to be
disjoint if W1 ∩ W2 = {0} = zero space.
Theorem: The necessary and sufficient conditions for a vector space V(F) to be the direct sum of its two subspaces W₁ and W₂ are that (i) V = W₁ + W₂ and (ii) W₁ ∩ W₂ = {0}.
Proof: The conditions are necessary (⇒):
Let V = W₁ ⊕ W₂.
By definition of direct sum, each element α ∈ V is uniquely expressed as α = α₁ + α₂, where α₁ ∈ W₁ and α₂ ∈ W₂, so that V = W₁ + W₂. ...(1)
Let, if possible, α ∈ W₁ ∩ W₂.
Evidently α = α + 0 = 0 + α, where α ∈ W₁, 0 ∈ W₂ and also 0 ∈ W₁, α ∈ W₂.
Since the sum for α is unique, hence α = 0. But α ∈ W₁ ∩ W₂ is arbitrary,
∴ W₁ ∩ W₂ = {0}. ...(2)
The conditions are sufficient (⇐):
Let V = W 1 + W2 and W1 ∩ W2 = {0}
Let α ∈ V be arbitrary.
being subspaces of V
(given)
Hence V = W1 ⊕ W2
By theorem we have
...(3)
Here we have W₁ + W₂ =
Example 1: Prove that the set S = {(1, 2, 1), (2, 1, 0), (1, -1, 2)} forms a basis of the vector space V₃(ℝ).
To show that S is LI :
...(1)
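Since the detailed elimination is not reproduced here, a quick numeric check: S is LI (and hence a basis of the 3-dimensional space V₃(ℝ)) provided the determinant of the matrix formed by its vectors is nonzero.

```python
# Determinant test for Example 1: S = {(1,2,1), (2,1,0), (1,-1,2)}.
import numpy as np

S = np.array([[1, 2, 1],
              [2, 1, 0],
              [1, -1, 2]], dtype=float)
print(np.linalg.det(S))     # -9.0, nonzero, so the three vectors are LI and S is a basis
```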
To show that S’ is LI :
...(1)
Let
Example 6: Show that the set S = {(1, 0, 0), (1, 1, 0), (1, 1, 1), (0, 1, 0)} spans the vector space V₃(ℝ) but is not a basis set.
...(A)
Therefore the set S spans V₃(ℝ), but it is not a basis set. Hence proved.
since
➤ Examples of Bases
➤ Theorem on Bases
The following three theorems will be used frequently.
It is clear that :
otherwise the elements u₁, u₂, ..., uₙ would be linearly dependent.
Thus, we obtain from (1)
➤ Quotient Space
Definition :
Let V(F) be a vector space and W be its subspace. Then for any α ∈ V, the set W + α = {w + α | w ∈ W} is called the right coset of W wrt α in V.
Also the set α + W = {α + w | w ∈ W} is called the left coset of W wrt α in V.
where
⇒ V/W = L(C)
∴ C is a basis of V/W.
⇒ dim(V/W) = n - m = dim V - dim W.
Example 1: Show that S = {(1, 1, 1), (0, 1, 1), (0, 0, 1)} is a basis for the space V₃(ℝ). Also find the coordinates of α = (3, 1, -4) ∈ V₃(ℝ) relative to this basis.
Clearly S ⊂ V₃(ℝ).
S is LI: Let there exist a₁, a₂, a₃ ∈ ℝ such that
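The coordinate computation itself is a small linear solve; the sketch below finds the scalars a₁, a₂, a₃ with a₁(1,1,1) + a₂(0,1,1) + a₃(0,0,1) = (3, 1, -4).

```python
# Coordinates of alpha = (3, 1, -4) relative to S = {(1,1,1), (0,1,1), (0,0,1)}.
import numpy as np

B = np.array([[1, 1, 1], [0, 1, 1], [0, 0, 1]], dtype=float).T   # basis vectors as columns
alpha = np.array([3.0, 1.0, -4.0])
print(np.linalg.solve(B, alpha))    # [ 3. -2. -5.], i.e. coordinates (3, -2, -5)
```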
Some definitions
= a0 + b0
= at(α) + bt(α)
This is called Zero linear mapping.
Example: Let V3(F) and V2(F) be two vector spaces over the field
F, then
Proof:
(a) Let α ∈ V. Since 0 is zero element in V, therefore
α+0=α
⇒ t(α + 0) = t(α)
⇒ t(α) + t(0) = t(α) [∵ t is a linear transformation]
⇒ t(α) + t(0) = t(α) + 0' [∵ 0' is the zero element in V']
⇒ t(0) = 0' [by cancellation law, (V', +) being an abelian group]
Proof: Let V(F) and V'(F) be the two vector spaces and 0 and 0' be their zero vectors respectively. Let t be a linear transformation from V to V' and K be the kernel of t, i.e. Ker t = K = {α ∈ V | t(α) = 0' ∈ V'}.
To show that K is a subspace of
V:
⇒ aα + bβ ∈ K
Therefore a, b ∈ F and α, β ∈ K ⇒ aα + bβ ∈ K
∴ K is a subspace of V.
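When a linear transformation is given by a matrix A, its kernel is the null space of A and can be computed exactly. A small sketch with an illustrative matrix (not one from the text):

```python
# Kernel (null space) of a matrix map from R^3 to R^2, computed exactly with sympy.
import sympy as sp

A = sp.Matrix([[1, 2, 0],
               [0, 0, 1]])          # illustrative 2x3 matrix: t(x) = A x
kernel_basis = A.nullspace()
print(kernel_basis)                 # [Matrix([[-2], [1], [0]])], so Ker t = span{(-2, 1, 0)}
```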
(ii) onto and
Let α = (a₁, b₁) and β = (a₂, b₂) be any two elements of V₂(ℝ) and a, b ∈ ℝ, then
Let α = (a₁, b₁, c₁) and β = (a₂, b₂, c₂) be any two elements of the space V₃(ℝ) and a, b ∈ ℝ, then
Let α = (a₁, a₂, a₃) and β = (b₁, b₂, b₃) be any two elements of the space V₃(F) and a, b ∈ F, then
Kernel: Let Ker f = K; then K will be the set of all those vectors of V₃ which map onto the zero vector (0, 0) of V₂, i.e.
and
be any two elements of P[x], where a₀, a₁, ..., aₙ; b₀, b₁, ..., bₙ are all real numbers. Then
[∵ a ≠ 0]
∴ f is one-one.
Onto: For every α = (a₁, a₂, a₃) ∈ V₃(ℝ) there
∴ f is onto.
f is a linear transformation: For any a’, b’ ∈ ℝ
∴ f is a linear transformation.
Consequently, f is an isomorphism on V3(ℝ)
Example 6: Show that the mapping f : V₂(ℝ) → V₂(ℝ), where f(x, y) = (x cos θ - y sin θ, x sin θ + y cos θ), is an isomorphism on V₂(ℝ).
∴ f is one-one.
Onto: Since for every (x cos θ - y sin θ, x sin θ + y cos θ) ∈ V₂(ℝ) there exists (x, y) ∈ V₂(ℝ) such that f(x, y) = (x cos θ - y sin θ, x sin θ + y cos θ)
∴ f is onto.
f is a linear transformation : For any a, b ∈ ℝ
∴ f is a linear transformation.
Consequently, f is an isomorphism on V 2(ℝ).
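In matrix form the rotation map is given by an orthogonal matrix, which makes the one-one and onto properties easy to check numerically; a brief sketch with an arbitrary angle:

```python
# The rotation f(x, y) = (x cos t - y sin t, x sin t + y cos t) as a matrix.
import numpy as np

t = 0.7                                       # arbitrary angle
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t), np.cos(t)]])
v = np.array([2.0, -1.0])

assert np.allclose(R.T @ (R @ v), v)          # R^T inverts R, so f is one-one and onto
print(np.linalg.det(R))                       # 1.0: f is invertible, i.e. an isomorphism
```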
is a linear transformation.
Let A, B ∈ V, then
f(A + B) = (A + B)M + M(A + B)
= AM + BM + MA + MB [by distributivity of matrix multiplication over addition]
= (AM + MA) + (BM + MB) [by commutativity and associativity of matrix addition]
= f(A) + f(B) ...(1)
Again, for any a ∈ F
f(aA) = (aA)M + M(aA)
= a(AM) + a(MA)
= a(AM + MA) [by scalar multiplication of matrices]
= af(A) ...(2)
From (1) and (2), f is a linear transformation on V(F).
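A numeric spot-check of (1) and (2) for f(A) = AM + MA on 2 × 2 real matrices; M and the test matrices below are arbitrary choices.

```python
# Check additivity and homogeneity of f(A) = AM + MA on 2x2 matrices.
import numpy as np

M = np.array([[0.0, 1.0], [2.0, 3.0]])
f = lambda A: A @ M + M @ A

A = np.array([[1.0, -1.0], [0.0, 2.0]])
B = np.array([[4.0, 0.0], [1.0, 1.0]])
a = 2.5

assert np.allclose(f(A + B), f(A) + f(B))     # relation (1): f(A + B) = f(A) + f(B)
assert np.allclose(f(a * A), a * f(A))        # relation (2): f(aA) = a f(A)
print("f is linear on the sampled inputs")
```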
[by (2)]
...(3)
and for any
...(4)
From (3) and (4), it is clear that t⁻¹ is also a linear transformation.
f(α) = W + α
∴ f is onto i.e. f(V) = V/W
Therefore f : V → V/W is onto homomorphism
⇒ V/W is homomorphic image of V.
(b) Ker f = {α ∈ V | f(α) = W} [∵ W is the zero element of V/W]
= {α ∈ V | W + α = W}
= {α ∈ V | α ∈ W} = W
Proof: Let Ker f = K.
∴ ϕ is well defined.
ϕ is one-one.
ϕ is onto: ∵ f is onto, therefore for every α’ ∈ V', there exist α ∈ V
such that
[by (1)]
Therefore for α’ ∈ V there exist K + α ∈ V/K such that
∴ ϕ is onto
ϕ is a linear transformation : For any a, b ∈ F
∴ ϕ is a linear transformation.
Proof: Let V(F) and V’(F) be the two vector spaces and dim V = n.
Firstly, let V ≌ V', then there exist a map f from V to V' which is
one-one onto linear transformation.
Let B = {α₁, α₂, ..., αₙ} be a basis of the space V.
if
then to show that B’ is the basis of V’.
Therefore
∴ f is one-one.
f is onto: Let be any element of V’, then
there exist such that
∴ f is onto.
f is a linear transformation: For any a, b ∈ F
[by (1)]
∴ f is a linear transformation.
Consequently, V ≅ V'.
Hence
∴ f is one-one.
∴ f is onto.
Therefore
∴ R(t) is a vector subspace of V’.
Hence t is one-one.
∴ t is an isomorphism.
Let B = {v₁, v₂, ..., vₙ} be a LI subset of U; then its image set under t is
[t is linear]
[t is nonsingular]
[B is LI]
Hence the image B' of B under t is LI.
[Note]
S, = {t(v)},
Therefore,
Consider, say, a 3 × 4 matrix A and the usual basis {e₁, e₂, e₃, e₄} of K⁴ (written as columns):
Definition
The transpose of the above matrix of coefficients, denoted by mₛ(T) or [T]ₛ, is called the matrix representation of T relative to the basis S, or simply the matrix of T in the basis S.
Example: Let F: ℝ2 → ℝ2 be the linear operator defined by F(x, y)
= (2x + 3y, 4x - 5y).
1. First find F(u₁), and then write it as a linear combination of the basis vectors u₁ and u₂. (For notational convenience, we use column vectors.) We have
So
Solve the system to get x = 129, y = -55. Thus F(u₂) = 129u₁ - 55u₂.
Now write the coordinates of F(u₁) and F(u₂) as columns to obtain the matrix
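The same computation can be carried out as a change of basis: if A is the matrix of F in the standard basis and P has the basis vectors as columns, then [F]ₛ = P⁻¹AP. The basis u₁ = (1, 2), u₂ = (2, 5) used below is an assumption made for illustration; it is consistent with the coefficients 129 and -55 quoted above, but the excerpt does not display the basis itself.

```python
# Matrix of F(x, y) = (2x + 3y, 4x - 5y) relative to an assumed basis u1 = (1,2), u2 = (2,5).
import numpy as np

A = np.array([[2.0, 3.0], [4.0, -5.0]])        # F in the standard basis
P = np.column_stack([[1.0, 2.0], [2.0, 5.0]])  # columns u1, u2 (assumed basis)
F_S = np.linalg.inv(P) @ A @ P                 # matrix of F relative to {u1, u2}
print(F_S)                                     # [[ 52. 129.] [-22. -55.]] with this assumed basis
```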
Change of Basis
Let V be an n-dimensional vector space over a field K. We have
shown that once we have selected a basis S of V, every vector v ∈
V can be represented by means of an n-tuple [v]ₛ in Kⁿ, and every linear operator T in A(V) can be represented by an n × n matrix over K.
Definition
Let S = {u 1, u2, ...., u n} be a basis of a vector space V, and let S’ =
{v1, v2, ..., vn} be another basis. (For reference, we will call S the
“old” basis and S’ the “new" basis.) Because S is a basis, each
vector in the “new” basis S' can be written uniquely as a linear
combination of the vectors in S; say,
Remark : Because the vectors v 1, v2, ..., vn in the new basis S’ are
linearly independent, the matrix P is invertible. Similarly, Q is
invertible. In fact, we have the following proposition.
(a) Find the change-of-basis matrix P from S to the “new” basis S'.
Write each of the new basis vectors of S' as a linear combination of the original basis vectors u₁ and u₂ of S. We have
Thus
Note that the coordinates of v₁ and v₂ are the columns, not rows, of the change-of-basis matrix P.
Rank-Nullity Theorem
∴ ϕ is one-one.
ϕ is onto: Let K + α ∈ V/K; then there exists t(α) ∈ R(t) such that
ϕ[t(α)] = K + α
Therefore a preimage of each element of V/K exists in R(t).
∴ ϕ is onto.
ϕ is a linear transformation: For any a, b ∈ F and t(α), t(β) ∈ R(t)
ϕ[at(α) + bt(β)] = ϕ[t(aα + bβ)] [∵ t is a linear transformation]
∴ ϕ is a linear transformation.
Hence ϕ is an isomorphism from R(t) to V/K.
⇒ R(t) ≅ V/K
⇒ dim R(t) = dim(V/K)
⇒ dim R(t) = dim V - dim K
⇒ dim R(t) + dim K = dim V
⇒ Rank(t) + Nullity(t) = dim V.
Example 1: If
Consequently, there are two free variables, x₃ = t₁ and x₄ = t₂, so that
x₂ = 7t₁ + 7t₂, x₁ = -9t₁ - 10t₂.
Hence,
nullspace (A)
Since the two vectors in this spanning set are not proportional, they are linearly independent. Consequently, a basis for nullspace(A) is {(-9, 7, 1, 0), (-10, 7, 0, 1)}, so that nullity(A) = 2. In this problem, A is a 3 × 4 matrix, and so, in the Rank-Nullity Theorem, n = 4. Further, from the foregoing row-echelon form of the augmented matrix of the system Ax = 0, we see that rank(A) = 2. Hence, rank(A) + nullity(A) = 2 + 2 = 4 = n, and the Rank-Nullity Theorem is verified.
Thus, nullity (A) = 2 .
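The matrix A itself is not reproduced in this excerpt, so the following sketch uses a hypothetical 3 × 4 matrix whose null space matches the basis {(-9, 7, 1, 0), (-10, 7, 0, 1)} found above, purely to illustrate checking the Rank-Nullity Theorem by computation.

```python
# Rank-nullity check for a hypothetical matrix consistent with the null space above.
import sympy as sp

A = sp.Matrix([[1, 0, 9, 10],
               [0, 1, -7, -7],
               [1, 1, 2, 3]])        # hypothetical: third row = first row + second row
print(A.rank())                      # 2
print(len(A.nullspace()))            # 2, with basis {(-9,7,1,0), (-10,7,0,1)}
print(A.rank() + len(A.nullspace()) == A.cols)   # True: rank + nullity = n = 4
```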
is
We see that rank(A) = 2 (2 leading 1’s). Therefore nullity (A) = 5 - 2
= 3.