
Online Education by

Department of Mathematics
M K Bhavnagar University
For M.Sc. (Mathematics) Sem-2
Paper No.: 108
Linear Algebra
Instructor: Dr. P. I. Andharia
Syllabus: Unit-1
• System of Linear Equations
• Rank of Matrices
• Vector Spaces over Fields
• Subspaces
• Bases and Dimension
System of linear equations
Definition
Suppose x1, x2, … , xn ∈ ℝ satisfy the conditions

a11 x1 + a12 x2 + … + a1n xn = b1
a21 x1 + a22 x2 + … + a2n xn = b2
.        .        .        .
.        .        .        .        (1)
.        .        .        .
am1 x1 + am2 x2 + … + amn xn = bm

where b1, b2, … , bm and aij, 1 ≤ i ≤ m, 1 ≤ j ≤ n, are
given constants in ℝ. The collection in Equation (1) is called a
system of m linear equations in n unknowns. Any n-tuple
(x1, x2, … , xn) is called a solution of this system if it satisfies
each of the equations in (1).
 A system of linear equations can have infinitely many
solutions, exactly one solution or no solutions at all.
 A system is called consistent if it has a solution. Otherwise, it is
called inconsistent.
 If b1, b2, … , bm are all zero then the system is a
homogeneous system of linear equations; otherwise it is a non-
homogeneous system of linear equations.
 Every homogeneous system is consistent, since x1 = 0, x2 =
0, … , xn = 0 is always a solution. This solution is called the
trivial solution; any other solution is called nontrivial.
 In a homogeneous system of m linear equations in n unknowns,
if m < n (fewer equations than unknowns), then the system has
a non-trivial solution.
Matrix notations for a system of linear equations:
The system of linear equations (1) can be written in matrix form
as AX = B, where

    ⎡a11 ⋯ a1n⎤       ⎡x1⎤         ⎡b1⎤
A = ⎢ ⋮   ⋱   ⋮ ⎥ , X = ⎢x2⎥ and B = ⎢b2⎥ .
    ⎣am1 ⋯ amn⎦       ⎢ ⋮⎥         ⎢ ⋮⎥
                      ⎣xn⎦         ⎣bm⎦
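The matrix form AX = B can be illustrated with a short Python sketch using exact fractions (the helper name mat_vec and the small 2 × 2 system are illustrative, not from the notes):

```python
from fractions import Fraction as F

def mat_vec(A, x):
    """Multiply an m x n matrix A (list of rows) by an n-vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# A hypothetical 2 x 2 system: x1 + 2*x2 = 5, 3*x1 + 4*x2 = 11
A = [[F(1), F(2)], [F(3), F(4)]]
B = [F(5), F(11)]
x = [F(1), F(2)]            # candidate solution (x1, x2) = (1, 2)

assert mat_vec(A, x) == B   # x satisfies AX = B
```

Checking a candidate solution this way is exactly "substituting into each equation of (1)".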
Elementary row operations on a matrix


Let A be any m × n matrix. An elementary row operation on the
matrix A is one of the following:
(i) Multiplication of the i-th row of the matrix A by a non-zero
constant c. This is denoted by cRi.
(ii) Replacement of the i-th row of the matrix A by (i-th row) + (c-
times j-th row), which is denoted by Ri + cRj.
(iii) Interchange of the i-th row and j-th row of the matrix A. This
operation is denoted by Ri ↔ Rj.
Note: Ri + (−1)Rj can also be written as Ri − Rj.
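The three elementary row operations can be sketched in Python (the function names are my own; this is an illustration, not part of the notes):

```python
from fractions import Fraction as F

def scale(A, i, c):
    """cRi : multiply row i by a non-zero constant c (in place)."""
    A[i] = [c * a for a in A[i]]

def add_multiple(A, i, j, c):
    """Ri + cRj : replace row i by row i plus c times row j."""
    A[i] = [a + c * b for a, b in zip(A[i], A[j])]

def swap(A, i, j):
    """Ri <-> Rj : interchange rows i and j."""
    A[i], A[j] = A[j], A[i]

A = [[F(2), F(4)], [F(1), F(3)]]
scale(A, 0, F(1, 2))          # (1/2)R1 -> [1, 2]
add_multiple(A, 1, 0, F(-1))  # R2 - R1 -> [0, 1]
assert A == [[1, 2], [0, 1]]
```

Each operation is invertible, which is why row-equivalent matrices have the same solution sets.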
Definition
Let A and B be two matrices of the same order m × n. We say that A
is row-equivalent to B if B can be obtained from A by a finite
sequence of elementary row operations.
If A and B are row-equivalent matrices of order m × n, then the
homogeneous systems of linear equations AX = O and BX = O
have the same solutions. Here O is the zero column matrix.
Example 1
Solve the following homogeneous system of linear equations
using the elementary row operations:
x1 + 6x2 − 18x3 = 0
−4x1 + 5x3 = 0
−3x1 + 6x2 − 13x3 = 0
−7x1 + 6x2 − 8x3 = 0
Solution
The given homogeneous system of linear equations can be
written in matrix form as
AX = O
where

    ⎡ 1  6 −18⎤       ⎡x1⎤         ⎡0⎤
A = ⎢−4  0   5⎥ , X = ⎢x2⎥ and O = ⎢0⎥ .
    ⎢−3  6 −13⎥       ⎣x3⎦         ⎢0⎥
    ⎣−7  6  −8⎦                    ⎣0⎦
Using elementary row operations on the matrix A, it can be
reduced as follows:

    ⎡ 1  6 −18⎤
A = ⎢−4  0   5⎥
    ⎢−3  6 −13⎥
    ⎣−7  6  −8⎦

Applying R3 − R2 and then R4 − R3:
    ⎡ 1  6 −18⎤
    ⎢−4  0   5⎥
    ⎢ 1  6 −18⎥
    ⎣−8  0  10⎦

Applying 2R2, then R1 − R3 and R2 − R4:
    ⎡ 0  0   0⎤
    ⎢ 0  0   0⎥
    ⎢ 1  6 −18⎥
    ⎣−8  0  10⎦

Applying (1/8)R4, then R3 + R4, (1/6)R3 and (−1)R4:
    ⎡0  0    0  ⎤
    ⎢0  0    0  ⎥
    ⎢0  1 −67/24⎥
    ⎣1  0  −5/4 ⎦

The above sequence of elementary row operations shows that the
final matrix is row-equivalent to the matrix A.
This means that the solutions of the given system of linear
equations
x1 + 6x2 − 18x3 = 0
−4x1 + 5x3 = 0
−3x1 + 6x2 − 13x3 = 0
−7x1 + 6x2 − 8x3 = 0
and of the system
x2 − (67/24)x3 = 0
x1 − (5/4)x3 = 0
are the same.
In the second system it is clear that if we assign any constant
value c to x3, we obtain a solution ((5/4)c, (67/24)c, c).
Note: The method described in Example 1 to obtain a solution of a
system of linear equations is called the method of elimination.
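As a check, the solution family found in Example 1 can be substituted back into the original system; the sketch below (helper names are illustrative) verifies that AX is the zero vector:

```python
from fractions import Fraction as F

A = [[F(1), F(6), F(-18)],
     [F(-4), F(0), F(5)],
     [F(-3), F(6), F(-13)],
     [F(-7), F(6), F(-8)]]

def residual(A, x):
    """Compute AX; for a solution of the homogeneous system this is zero."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# solution family from the elimination: x1 = (5/4)c, x2 = (67/24)c, x3 = c
c = F(24)                        # any c works; c = 24 keeps the entries integral
x = [F(5, 4) * c, F(67, 24) * c, c]
assert residual(A, x) == [0, 0, 0, 0]
```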
Rank of a matrix
Definition
A matrix is said to be in row echelon form if it satisfies:
(i) The leading entry (the first non-zero element) in each row
is 1,
(ii) each leading entry is in a column to the right of the leading
entry in the previous row,
(iii) rows with all zero elements, if any, are below rows having a
non-zero element.
Method to transform a matrix into its row echelon form:
Use a series of row operations:
(1) Pivot the matrix:
(i) Find the pivot (the first non-zero entry in the first column
of the matrix).
(ii) If the pivot is not in the first row, interchange rows to
move the pivot row to the first row.
(iii) Multiply each element in the pivot row by the inverse
of the pivot, so the pivot equals 1.
(iv) Add multiples of the pivot row to each of the lower
rows, so every element in the pivot column of the
lower rows equals 0.
(2) Repeat the procedure of step (1), ignoring previous pivot
rows.
(3) Continue until there are no more pivots to be processed.
Note: The process of reducing a matrix to a row-echelon form
discussed above is known as Gaussian elimination.
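The pivoting steps above can be sketched as a small Python routine (a minimal illustration using exact fractions; the notes themselves give no code):

```python
from fractions import Fraction as F

def row_echelon(A):
    """Reduce a matrix (list of Fraction rows) to row echelon form
    by the pivoting steps described above. Returns a new matrix."""
    A = [row[:] for row in A]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        # find a pivot in column c at or below row r
        p = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if p is None:
            continue                         # no pivot in this column
        A[r], A[p] = A[p], A[r]              # move the pivot row up
        A[r] = [a / A[r][c] for a in A[r]]   # scale so the pivot equals 1
        for i in range(r + 1, rows):         # clear the entries below the pivot
            A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        r += 1
    return A

E = row_echelon([[F(2), F(-1)], [F(4), F(1)]])
assert E == [[1, F(-1, 2)], [0, 1]]
```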
Definition
Let A be any matrix. The number of non-zero rows in the row
echelon form of A is called the rank of A.
Example 3
Find the rank of the matrix:
    ⎡2 −1  3⎤
A = ⎢1  0  1⎥
    ⎢0  2 −1⎥
    ⎣1  1  4⎦
Solution
(i) To obtain the row echelon form of A:

    ⎡2 −1  3⎤
A = ⎢1  0  1⎥
    ⎢0  2 −1⎥
    ⎣1  1  4⎦

Applying (1/2)R1:
    ⎡1 −1/2 3/2⎤
    ⎢1   0   1 ⎥
    ⎢0   2  −1 ⎥
    ⎣1   1   4 ⎦

Applying R2 − R1 and R4 − R1:
    ⎡1 −1/2  3/2⎤
    ⎢0  1/2 −1/2⎥
    ⎢0   2   −1 ⎥
    ⎣0  3/2  5/2⎦

Applying 2R2:
    ⎡1 −1/2  3/2⎤
    ⎢0   1   −1 ⎥
    ⎢0   2   −1 ⎥
    ⎣0  3/2  5/2⎦

Applying R3 − 2R2 and R4 − (3/2)R2:
    ⎡1 −1/2 3/2⎤
    ⎢0   1  −1 ⎥
    ⎢0   0   1 ⎥
    ⎣0   0   4 ⎦

Applying R4 − 4R3:
    ⎡1 −1/2 3/2⎤
    ⎢0   1  −1 ⎥
    ⎢0   0   1 ⎥
    ⎣0   0   0 ⎦

The final matrix is the row echelon form of A. In this matrix
the number of non-zero rows is 3.
Hence the rank of A is 3.
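The rank computation can be double-checked with a short Python sketch that counts the non-zero rows left by Gaussian elimination (an illustration, not part of the notes):

```python
from fractions import Fraction as F

def rank(A):
    """Rank = number of non-zero rows after Gaussian elimination."""
    A = [[F(a) for a in row] for row in A]
    rows, cols, r = len(A), len(A[0]), 0
    for c in range(cols):
        p = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if p is None:
            continue
        A[r], A[p] = A[p], A[r]              # pivot row up
        A[r] = [a / A[r][c] for a in A[r]]   # pivot = 1
        for i in range(r + 1, rows):         # clear below the pivot
            A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

# the matrix of Example 3
assert rank([[2, -1, 3], [1, 0, 1], [0, 2, -1], [1, 1, 4]]) == 3
```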
Vector
 A scalar has magnitude only, while a vector is a geometrical
object that has magnitude and direction.
 Mathematically a vector can be represented as an array. For
example, v = (v1, v2, v3) is a 3-dimensional vector.
 If v1, v2, v3 ∈ ℝ then the vector v = (v1, v2, v3) ∈ ℝ × ℝ × ℝ =
ℝ3.
 If all components of a vector are zero then the vector is
known as the zero vector; otherwise it is a non-zero vector.
Vector addition
Standard addition of two vectors u = (u1, u2, u3) and v =
(v1, v2, v3) is defined as
u + v = (u1 + v1, u2 + v2, u3 + v3)
Scalar multiplication
Standard multiplication of a scalar α with a vector v = (v1, v2, v3)
is defined as αv = (αv1, αv2, αv3)
Definition: Vector space
A non-empty set V is said to be a vector space over F (the field of
scalars) if there exist a vector addition map + : V × V → V defined
by +(u, v) = u + v and a scalar multiplication map ∙ : F × V → V
defined by ∙(α, v) = α ∙ v, which satisfy the following
properties for every u, v, w ∈ V and α, β ∈ F:
(i) Addition is commutative: u + v = v + u.
(ii) Addition is associative: u + (v + w) = (u + v) + w.
(iii) Existence of additive identity:
There exists a unique vector θ ∈ V (the zero vector) such that
v + θ = v = θ + v.
(iv) Existence of additive inverse: For each vector v ∈ V,
there exists w ∈ V such that
v + w = θ = w + v.
In this case w is denoted by −v.
(v) α ∙ (u + v) = α ∙ u + α ∙ v.
(vi) (α + β) ∙ v = α ∙ v + β ∙ v.
(vii) (αβ) ∙ v = α ∙ (β ∙ v).
(viii) 1 ∙ v = v.
Remarks:
[1] u + (−v) is written as u − v.
[2] For α ∈ ℝ and v ∈ V we write αv instead of α ∙ v.
[3] Elements of a vector space are called vectors.
[4] If not specified, we will consider a vector space over the field
of real numbers ℝ.
Example 1
Show that ℝ3 is a vector space over ℝ under standard vector
addition and scalar multiplication.
Solution
Let u = (u1, u2, u3), v = (v1, v2, v3) and w = (w1, w2, w3) be three
vectors in ℝ3 and let α, β be scalars in ℝ.
We know that the standard vector addition is defined as
u + v = (u1 + v1, u2 + v2, u3 + v3) and the standard scalar
multiplication is defined as αu = (αu1, αu2, αu3).
(i) u + v = (u1 + v1, u2 + v2, u3 + v3)
= (v1 + u1, v2 + u2, v3 + u3)
= v + u
Hence addition is commutative.
(ii) u + (v + w) = (u1, u2, u3) + [(v1, v2, v3) + (w1, w2, w3)]
= (u1, u2, u3) + (v1 + w1, v2 + w2, v3 + w3)
= (u1 + v1 + w1, u2 + v2 + w2, u3 + v3 + w3)
Similarly
(u + v) + w = (u1 + v1 + w1, u2 + v2 + w2, u3 + v3 + w3)
Hence addition is associative.
(iii) θ = (0, 0, 0) is the additive identity in ℝ3 as
∀ u = (u1, u2, u3) ∈ ℝ3,
u + θ = (u1, u2, u3) + (0, 0, 0)
= (u1 + 0, u2 + 0, u3 + 0)
= (u1, u2, u3)
= u
(iv) ∀ u = (u1, u2, u3) ∈ ℝ3, the additive inverse is the vector
−u = (−u1, −u2, −u3) of ℝ3 as
u + (−u) = (u1, u2, u3) + (−u1, −u2, −u3)
= (u1 − u1, u2 − u2, u3 − u3) = (0, 0, 0) = θ
(v) α(u + v) = α(u1 + v1, u2 + v2, u3 + v3)
= (α(u1 + v1), α(u2 + v2), α(u3 + v3))
= (αu1 + αv1, αu2 + αv2, αu3 + αv3)
and αu + αv = (αu1, αu2, αu3) + (αv1, αv2, αv3)
= (αu1 + αv1, αu2 + αv2, αu3 + αv3)
Hence α(u + v) = αu + αv.
(vi) (α + β)u = (α + β)(u1, u2, u3)
= ((α + β)u1, (α + β)u2, (α + β)u3)
= (αu1 + βu1, αu2 + βu2, αu3 + βu3)
and αu + βu = (αu1, αu2, αu3) + (βu1, βu2, βu3)
= (αu1 + βu1, αu2 + βu2, αu3 + βu3)
Hence (α + β)u = αu + βu.
(vii) (αβ)u = (αβ)(u1, u2, u3)
= (αβu1, αβu2, αβu3)
= α(βu1, βu2, βu3) = α(βu)
(viii) 1 ∙ u = 1 ∙ (u1, u2, u3) = (u1, u2, u3) = u
Thus, ℝ3 satisfies all the conditions of a vector space. Hence ℝ3 is
a vector space over ℝ.
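Axioms like these can be spot-checked numerically for particular vectors; the sketch below (with illustrative helper names) tests properties (i), (v), (vi) and (vii) for sample data — a check of instances, not a proof:

```python
from fractions import Fraction as F

def add(u, v):
    """Standard vector addition in R^3."""
    return tuple(a + b for a, b in zip(u, v))

def smul(c, u):
    """Standard scalar multiplication in R^3."""
    return tuple(c * a for a in u)

u, v = (F(1), F(2), F(3)), (F(-1), F(0), F(5))
a, b = F(2), F(3, 2)

assert add(u, v) == add(v, u)                             # (i) commutativity
assert smul(a, add(u, v)) == add(smul(a, u), smul(a, v))  # (v) distributivity
assert smul(a + b, u) == add(smul(a, u), smul(b, u))      # (vi) distributivity
assert smul(a * b, u) == smul(a, smul(b, u))              # (vii) associativity
```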
Example 2
Let X be a non-empty set and V = {f : X → ℝ}. For α ∈ ℝ and
f, g ∈ V, if we define f + g : X → ℝ as (f + g)(x) = f(x) +
g(x) and αf : X → ℝ as (αf)(x) = α(f(x)) for every x ∈ X,
then V is a vector space over ℝ.
Hint: Let f : X → ℝ, g : X → ℝ and h : X → ℝ be three members
of V and α be a scalar in ℝ. Then for every x ∈ X, we have
(i) (f + g)(x) = f(x) + g(x) = g(x) + f(x) = (g + f)(x)
This is true for every x ∈ X.
∴ f + g = g + f. Hence addition is commutative.
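The function-space operations can be modelled in Python with closures (an illustrative sketch; the particular functions chosen are arbitrary):

```python
def f_add(f, g):
    """(f + g)(x) = f(x) + g(x) -- pointwise addition of functions."""
    return lambda x: f(x) + g(x)

def f_smul(c, f):
    """(c f)(x) = c * f(x) -- pointwise scalar multiplication."""
    return lambda x: c * f(x)

f = lambda x: x * x        # a sample member of V
g = lambda x: 2 * x + 1    # another sample member of V

h = f_add(f_smul(3, f), g)  # h = 3f + g, again a function X -> R
assert h(2) == 3 * 4 + 5    # 3*f(2) + g(2) = 12 + 5 = 17
```

The point of the construction is that the sum and scalar multiple of functions are again functions X → ℝ, so V is closed under both operations.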
Theorem 1
In a vector space V over a field ℝ, we have
(i) 0 ∙ v = θ for all v ∈ V
(ii) the additive identity in V is unique
(iii) the additive inverse in V is unique
(iv) (−1) ∙ v = −v for all v ∈ V
(v) α ∙ θ = θ for all α ∈ ℝ
(vi) if v ∈ V with α ∙ v = θ, then either α = 0
or v = θ.
Proof
(i) Let v ∈ V.
0 ∙ v = (0 + 0) ∙ v = 0 ∙ v + 0 ∙ v (1)
Now −(0 ∙ v) is an additive inverse of 0 ∙ v.
∴ θ = 0 ∙ v + (−(0 ∙ v))
= (0 ∙ v + 0 ∙ v) + (−(0 ∙ v)) by using Equation (1)
= 0 ∙ v + (0 ∙ v + (−(0 ∙ v))) Associativity
= 0 ∙ v + θ
= 0 ∙ v
(ii) Suppose θ and θ′ are two additive identities in V.
θ is an additive identity, therefore v + θ = v for all v ∈ V.
In particular, if we take v = θ′ then we have
θ′ + θ = θ′ (2)
Again θ′ is an additive identity, therefore v + θ′ = v for all v ∈ V.
In particular, if we take v = θ then we have
θ + θ′ = θ (3)
Comparing Equation (2) and Equation (3) (addition is
commutative), we get θ′ = θ.
Hence the additive identity is unique.
(iii) For v ∈ V, suppose w and w′ are two additive inverses of v.
Then v + w = θ and v + w′ = θ.
Now v + w′ = θ ⟹ (v + w′) + w = θ + w
⟹ v + (w′ + w) = w Associativity
⟹ v + (w + w′) = w Commutativity
⟹ (v + w) + w′ = w Associativity
⟹ θ + w′ = w
⟹ w′ = w
Hence the additive inverse is unique.
(iv) (−1) ∙ v + v = (−1) ∙ v + 1 ∙ v
= (−1 + 1) ∙ v
= 0 ∙ v
= θ
So (−1) ∙ v is the additive inverse of v. But the additive inverse of v
is denoted by −v.
∴ (−1) ∙ v = −v
(v) α ∙ θ = α ∙ (θ + θ) = α ∙ θ + α ∙ θ (4)
Now −(α ∙ θ) is an additive inverse of α ∙ θ.
∴ θ = α ∙ θ + (−(α ∙ θ))
= (α ∙ θ + α ∙ θ) + (−(α ∙ θ)) by using Equation (4)
= α ∙ θ + (α ∙ θ + (−(α ∙ θ))) Associativity
= α ∙ θ + θ
= α ∙ θ
(vi) Given v ∈ V with α ∙ v = θ.
If α = 0, there is nothing to prove.
Suppose α ≠ 0; then α⁻¹ exists and α⁻¹α = 1.
Multiplying both sides of α ∙ v = θ by α⁻¹, we get
α⁻¹(α ∙ v) = α⁻¹ ∙ θ
⟹ (α⁻¹α) ∙ v = θ
⟹ 1 ∙ v = θ
⟹ v = θ
Hence either α = 0 or v = θ.
Note: From now on we will write αv instead of α ∙ v.
Definition: Vector Subspace
A nonempty subset W of a vector space V is said to be a vector
subspace, or simply a subspace, of V if W itself is a vector space under
the operations + and ∙ induced from V. Alternatively, a nonempty subset
W of a vector space V is said to be a vector subspace of V if
(i) θ ∈ W
(ii) w1 + w2 ∈ W ∀ w1, w2 ∈ W
(iii) αw ∈ W, ∀ w ∈ W and every scalar α.
Note: For a vector space V, the subspace V itself and the subspace
consisting of the zero vector of V are called the trivial subspaces of V.
Theorem:
Let V be a vector space over a field F. A nonempty subset W of V is a
subspace of V if and only if for w1, w2 ∈ W and α1, α2 ∈ F we have
α1 w1 + α2 w2 ∈ W.
Proof:
Let W be a subspace of V. Then by the definition of a vector subspace,
(i) θ ∈ W
(ii) w1 + w2 ∈ W ∀ w1, w2 ∈ W
(iii) αw ∈ W, ∀ w ∈ W and every scalar α.
So for w1, w2 ∈ W and α1, α2 ∈ F, by (iii) we have
α1 w1 ∈ W and α2 w2 ∈ W,
and by (ii) we have α1 w1 + α2 w2 ∈ W.
Conversely, assume that α1 w1 + α2 w2 ∈ W for w1, w2 ∈ W and
α1, α2 ∈ F.
Take α1 = 0, α2 = 0: we get 0(w1) + 0(w2) ∈ W, i.e. θ ∈ W.
Take α1 = 1, α2 = 1: we get 1(w1) + 1(w2) ∈ W, i.e. w1 + w2 ∈ W.
Take α2 = 0: we get α1 w1 + 0(w2) ∈ W, i.e. α1 w1 + θ ∈ W, or
α1 w1 ∈ W.
Thus, W is a vector subspace of V.
Example:
Show that the set W = {(x, y, z) ∈ ℝ3 / 3x − 5y = 4z} is a vector
subspace of ℝ3.
Solution:
Let w1 = (x1, y1, z1) ∈ W, w2 = (x2, y2, z2) ∈ W and α, β ∈ ℝ.
To show: αw1 + βw2 ∈ W.
w1 ∈ W ⟹ 3x1 − 5y1 = 4z1
w2 ∈ W ⟹ 3x2 − 5y2 = 4z2
Now, αw1 + βw2 = (αx1 + βx2, αy1 + βy2, αz1 + βz2)
And 3(αx1 + βx2) − 5(αy1 + βy2) = α(3x1 − 5y1) + β(3x2 − 5y2)
= α(4z1) + β(4z2)
= 4(αz1 + βz2)
It shows that αw1 + βw2 ∈ W. Hence W is a subspace of ℝ3.
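The closure argument can be checked numerically for sample vectors (an illustrative sketch; the particular vectors and scalars are arbitrary choices):

```python
from fractions import Fraction as F

def in_W(w):
    """Membership test for W = {(x, y, z) : 3x - 5y = 4z}."""
    x, y, z = w
    return 3 * x - 5 * y == 4 * z

w1 = (F(3), F(1), F(1))     # 9 - 5 = 4,     so w1 is in W
w2 = (F(0), F(4), F(-5))    # 0 - 20 = -20,  so w2 is in W
assert in_W(w1) and in_W(w2)

a, b = F(7), F(-2)
combo = tuple(a * p + b * q for p, q in zip(w1, w2))
assert in_W(combo)          # closure under linear combinations
```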
Theorem:
Intersection of two vector subspaces is again a vector subspace.
Proof:
Let V be a vector space and W1 and W2 be two vector subspaces of V.
We shall show that W1 ∩ W2 is a vector subspace of V.
Let α, β be two scalars and w1, w2 ∈ W1 ∩ W2; then
w1, w2 ∈ W1 and w1, w2 ∈ W2.
Since W1 is a vector subspace, we have αw1 + βw2 ∈ W1.
And since W2 is a vector subspace, we have αw1 + βw2 ∈ W2.
So we get αw1 + βw2 ∈ W1 ∩ W2.
It shows that W1 ∩ W2 is a vector subspace of V.
Definition:
Let V be a vector space and v1, v2, … , vn be vectors in V. A linear
combination of v1, v2, … , vn is a vector of the form
α1v1 + α2v2 + … + αnvn, where the αi's are scalars.
Thus, a vector v is a linear combination of v1, v2, … , vn if there
exist scalars α1, α2, … , αn such that v = α1v1 + α2v2 + … +
αnvn.
Definition:
Let V be a vector space and S = {v1, v2, … , vn} be a subset of V.
We say S is
(i) linearly dependent if there exist scalars αi, 1 ≤ i ≤ n, not all
zero, such that α1v1 + α2v2 + … + αnvn = θ;
(ii) linearly independent if α1v1 + α2v2 + … + αnvn = θ implies
αi = 0 for all 1 ≤ i ≤ n.
Example:
Determine whether the set S = {(1, 0), (2, 0), (0, 1)} in ℝ2 is linearly
dependent or linearly independent.
Solution:
Since 2(1, 0) + (−1)(2, 0) + 0(0, 1) = (0, 0), we get scalars
α1, α2, α3, not all zero, such that α1(1, 0) + α2(2, 0) + α3(0, 1) =
(0, 0).
Hence the given set is a linearly dependent set.
Example:
Let S = {v1, v2} be a subset of a vector space V. If S is linearly
independent then show that the set S′ = {v1 + v2, v1 − v2} is also
linearly independent.
Solution: To show that S′ is a linearly independent set, first assume
α(v1 + v2) + β(v1 − v2) = θ; our aim is to show α = 0, β = 0.
By our assumption we have (α + β)v1 + (α − β)v2 = θ. But S is a
linearly independent set, so the two coefficients α + β and α − β must
be zero, i.e. α + β = 0 and α − β = 0, which gives α = 0, β = 0.
Hence S′ is a linearly independent set.
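Linear dependence of a finite set in ℝⁿ can also be tested mechanically: the set is dependent exactly when the matrix whose rows are the vectors has rank less than the number of vectors. A Python sketch of that criterion (my own helper, not from the notes):

```python
from fractions import Fraction as F

def dependent(vectors):
    """A finite set in R^n is linearly dependent iff the matrix whose
    rows are the vectors has rank < number of vectors."""
    A = [[F(a) for a in v] for v in vectors]
    rows, cols, r = len(A), len(A[0]), 0
    for c in range(cols):
        p = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if p is None:
            continue
        A[r], A[p] = A[p], A[r]
        for i in range(r + 1, rows):
            A[i] = [a - A[i][c] / A[r][c] * b for a, b in zip(A[i], A[r])]
        r += 1
    return r < rows

assert dependent([(1, 0), (2, 0), (0, 1)])     # the example above
assert not dependent([(1, 1), (1, -1)])        # the {v1+v2, v1-v2} pattern in R^2
```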
Definition: Let V be a vector space and S be any subset of V. The
collection of all finite linear combinations of elements of S is called the
span of S, and it is denoted by span(S), L(S) or [S].
So, [S] = {α1v1 + α2v2 + … + αnvn : n ∈ ℕ, vi ∈ S, αi ∈ ℝ}.
Theorem:
Let V be a vector space and S be any nonempty subset of V. Then [S] is
the smallest vector subspace of V containing S.
Proof:
We know that [S] is the collection of all finite linear combinations of
elements of S.
∴ [S] = {α1v1 + α2v2 + … + αnvn : n ∈ ℕ, αi ∈ ℝ, vi ∈ S}
Since θ = 0v1 (for any v1 ∈ S), which is a finite linear combination,
we have θ ∈ [S].
Let u1, u2 ∈ [S].
Then there exist αi ∈ ℝ, vi ∈ S, i = 1, 2, … , n such that
u1 = α1v1 + α2v2 + … + αnvn
and there exist βi ∈ ℝ, vi′ ∈ S, i = 1, 2, … , m such that
u2 = β1v1′ + β2v2′ + … + βmvm′.
So u1 + u2 = α1v1 + α2v2 + … + αnvn + β1v1′ + β2v2′ +
… + βmvm′,
which is again a finite linear combination of elements of S and
therefore u1 + u2 ∈ [S].
Let α ∈ ℝ; then αu1 = α(α1v1 + α2v2 + … + αnvn)
= (αα1)v1 + (αα2)v2 + … + (ααn)vn,
which is a finite linear combination of elements of S and therefore
αu1 ∈ [S].
Thus we have proved that
u1, u2 ∈ [S] ⟹ u1 + u2 ∈ [S]
α ∈ ℝ, u1 ∈ [S] ⟹ αu1 ∈ [S]
So [S] is a subspace of V.
Now we will show that [S] contains S, i.e. S ⊂ [S].
Let v ∈ S. Now v = 1 ∙ v where 1 ∈ ℝ.
So v is a linear combination of elements of S and hence v ∈ [S].
∴ S ⊂ [S].
Thus [S] is a subspace of V containing S.
It remains to show that [S] is the smallest such subspace.
Suppose T is another vector subspace of V with S ⊂ T. We will show
that [S] ⊂ T.
Let v ∈ [S]; then v is a finite linear combination of elements of S.
So there exist α1, α2, … , αn ∈ ℝ and v1, v2, … , vn ∈ S such that
v = α1v1 + α2v2 + … + αnvn
We have S ⊂ T.
∴ v1, v2, … , vn ∈ S ⟹ v1, v2, … , vn ∈ T
Also T is a vector subspace,
∴ α1, α2, … , αn ∈ ℝ, v1, v2, … , vn ∈ T
⟹ α1v1 + α2v2 + … + αnvn ∈ T
or v ∈ T. So we have proved that each element of [S] is also an
element of T. ∴ [S] ⊂ T
Note: For a vector space V and a nonempty subset S of V, [S] is called
the subspace of V generated by S.
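Membership in [S] can be tested with Gaussian elimination: v ∈ [S] exactly when appending v as an extra row does not raise the rank of the matrix built from S. A Python sketch of that criterion (helper names are my own, not from the notes):

```python
from fractions import Fraction as F

def rank(A):
    """Rank via Gaussian elimination with exact fractions."""
    A = [[F(a) for a in row] for row in A]
    rows, cols, r = len(A), len(A[0]), 0
    for c in range(cols):
        p = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if p is None:
            continue
        A[r], A[p] = A[p], A[r]
        for i in range(r + 1, rows):
            A[i] = [a - A[i][c] / A[r][c] * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def in_span(S, v):
    """v is in [S] iff adding v as an extra row does not raise the rank."""
    return rank(list(S) + [list(v)]) == rank(list(S))

S = [[1, 0, 1], [0, 1, 1]]
assert in_span(S, [2, 3, 5])      # 2*(1,0,1) + 3*(0,1,1) = (2,3,5)
assert not in_span(S, [0, 0, 1])
```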
Theorem:
Let V be a vector space and S = {v1, v2, … , vn} be a subset of V.
Then S is linearly dependent if and only if one of the vi is a linear
combination of the other vj's.
Proof:
Given that the set S = {v1, v2, … , vn} is linearly dependent.
So there exist scalars α1, α2, … , αn, not all zero, such that
α1v1 + α2v2 + … + αnvn = θ
Here not all α1, α2, … , αn are zero, i.e. αi ≠ 0 for
some i with 1 ≤ i ≤ n.
Thus,
α1v1 + α2v2 + … + αi−1vi−1 + αivi + αi+1vi+1 + … + αnvn
= θ, where αi ≠ 0
∴ −αivi
= α1v1 + α2v2 + … + αi−1vi−1 + αi+1vi+1 + … + αnvn
∴ vi = (−α1/αi)v1 + (−α2/αi)v2 + … + (−αi−1/αi)vi−1
+ (−αi+1/αi)vi+1 + … + (−αn/αi)vn
If we set βj = −αj/αi for each j = 1, 2, … , i−1, i+1, … , n, then
vi = β1v1 + β2v2 + … + βi−1vi−1 + βi+1vi+1 + … + βnvn
Hence one of the vi is a linear combination of the other vj's.
Conversely, suppose one of the vi is a linear combination of the other
vj's, that is
vi = β1v1 + β2v2 + … + βi−1vi−1 + βi+1vi+1 + … + βnvn
where β1, β2, … , βi−1, βi+1, … , βn are scalars.
∴ β1v1 + β2v2 + … + βi−1vi−1 − vi + βi+1vi+1 + …
+ βnvn = θ
∴ β1v1 + β2v2 + … + βi−1vi−1 + (−1)vi + βi+1vi+1 + …
+ βnvn = θ
This is a relation β1v1 + … + βnvn = θ with βi = −1 ≠ 0.
This shows that the set {v1, v2, … , vn} is linearly dependent.
Hence the theorem.

Definition: Basis of a vector space:
Let V be a vector space and B = {v1, v2, … , vn} be a subset of V.
B is said to be a basis for V if every vector of V can be uniquely expressed
as a linear combination of elements of B, i.e. every v ∈ V can be
expressed as v = α1v1 + α2v2 + … + αnvn where each αi ∈ ℝ, and if
v = α1v1 + … + αnvn = β1v1 + … + βnvn
then αi = βi for all i = 1, 2, … , n.
Note: There can be more than one basis for a vector space, but each
basis has the same number of elements.
Definition: Dimension of a vector space
Let V be a vector space. We say the dimension of V is n, or V is an
n-dimensional vector space, if it has a basis consisting of n elements. In
this case we write dim V = n.
Theorem:
A basis of a vector space is a linearly independent set.
Proof:
Let V be a vector space and B = {v1, v2, … , vn} be a basis for V.
Then every member of V can be uniquely expressed as a linear
combination of elements of B.
Now V has a zero vector θ.
So there exist unique α1, α2, … , αn ∈ ℝ such that
θ = α1v1 + α2v2 + … + αnvn.
But θ = 0v1 + 0v2 + … + 0vn.
∴ α1 = 0, α2 = 0, … , αn = 0
Thus, α1v1 + α2v2 + … + αnvn = θ implies αi = 0 for all 1 ≤ i ≤ n.
Hence B is a linearly independent set.
Alternative definition of Basis:
A subset B of a vector space V is said to be a basis for V if B is linearly
independent and [B] = V.
Example:
Verify that the set B = {(1, 0), (0, 1)} is a basis for the vector space ℝ2.
Solution:
First we will check whether B is a linearly independent set or not.
Let α, β ∈ ℝ.
Take α(1, 0) + β(0, 1) = (0, 0); then
(α, β) = (0, 0)
∴ α = 0, β = 0
Hence B is a linearly independent set.
Now ℝ2 = {(x, y) | x, y ∈ ℝ}.
Since (x, y) = x(1, 0) + y(0, 1), every vector (x, y) of ℝ2 can be
expressed as a linear combination of members of B.
∴ [B] = ℝ2.
Hence B is a basis for ℝ2.
Example:
In general, if ei is the vector of the vector space ℝn whose i-th component
is 1 and all other components are zero, then B = {e1, e2, … , en} is a
basis for ℝn. This basis is called the standard basis of ℝn.
Theorem:
If a vector space V has a basis of n elements then any subset of V with
more than n vectors is a linearly dependent set.
Proof:
Let B = {v1, v2, … , vn} be a basis for a vector space V and let
S = {u1, u2, … , um} be a subset of V with m > n.
Since B is a basis for V, every vector of V can be expressed as a linear
combination of members of B.
So there exist aij ∈ ℝ such that
ui = ai1v1 + ai2v2 + … + ainvn, 1 ≤ i ≤ m. (1)
Now we have to show that the set S is linearly dependent. For this,
suppose β1, β2, … , βm are scalars with
β1u1 + β2u2 + … + βmum = θ (2)
We will show that Equation (2) has a solution in which not all of the
scalars β1, β2, … , βm are zero.
Using Equation (1) in Equation (2), we get
β1(a11v1 + a12v2 + … + a1nvn)
+ β2(a21v1 + a22v2 + … + a2nvn) + …
+ βm(am1v1 + am2v2 + … + amnvn) = θ
∴ (β1a11 + β2a21 + … + βmam1)v1 + (β1a12 + β2a22
+ … + βmam2)v2 + … + (β1a1n + β2a2n + …
+ βmamn)vn = θ
But B is a linearly independent set, so the coefficients of v1, v2, … , vn
must be zero:
β1a11 + β2a21 + … + βmam1 = 0
β1a12 + β2a22 + … + βmam2 = 0
…………….
β1a1n + β2a2n + … + βmamn = 0
This is a homogeneous system with more unknowns (m) than
equations (n). Hence it has a non-trivial solution. That is, not all of
β1, β2, … , βm are zero. Thus, S is a linearly dependent set.
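For a concrete instance of the theorem, four vectors in ℝ³ must admit a non-trivial relation; the sketch below exhibits one for a particular choice of vectors (coefficients found by hand, purely illustrative):

```python
from fractions import Fraction as F

# four vectors in R^3 (m = 4 > n = 3), so a non-trivial relation must exist
u1, u2, u3, u4 = (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)
betas = [F(1), F(1), F(1), F(-1)]   # solves the homogeneous coefficient system

# the combination beta1*u1 + ... + beta4*u4, computed componentwise
combo = [sum(b * u[i] for b, u in zip(betas, (u1, u2, u3, u4)))
         for i in range(3)]
assert combo == [0, 0, 0]            # the combination is the zero vector
assert any(b != 0 for b in betas)    # and the relation is non-trivial
```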
Theorem:
If V is an n-dimensional vector space then any linearly independent
subset of V with n vectors is a basis for V.
Proof:
Let V be a vector space with dim V = n and let S = {v1, v2, … , vn}
be a linearly independent subset of V. We have to prove that S is a
basis for V. Given that S is a linearly independent set, it only remains to
show that [S] = V.
Let v0 be any arbitrary member of V. Then
S′ = {v0, v1, v2, … , vn} becomes a subset of V with more than n
vectors. Hence S′ is a linearly dependent set.
Therefore, there exist n + 1 scalars αi, 0 ≤ i ≤ n, not all zero, such
that
α0v0 + α1v1 + … + αnvn = θ (1)
We claim that α0 ≠ 0.
Since not all αi are zero, if α0 = 0 then one of αi, 1 ≤ i ≤ n, is
nonzero.
Putting α0 = 0 in Equation (1),
α1v1 + … + αnvn = θ, where αi ≠ 0 for some i, 1 ≤ i ≤ n
∴ S = {v1, v2, … , vn} is a linearly dependent set.
Which is a contradiction.
So we must have α0 ≠ 0.
Therefore, from Equation (1) we have
v0 = (−1/α0)(α1v1 + … + αnvn)
If we set βi = −αi/α0 for each i = 1, 2, … , n, then
v0 = β1v1 + β2v2 + … + βnvn
So every member of V can be expressed as a linear combination of
members of S. That is, [S] = V.
Hence S is a basis for V.
Theorem:
Let V be a vector space with dim V = n and let W be a subspace of V.
Then any basis of W can be extended to a basis of V.
Proof:
Let B = {w1, w2, … , wm} be a basis for W; then B is a linearly
independent set and [B] = W.
Also W is a subspace of V. Therefore, W ⊆ V.
If W = V then B is a basis for V.
If W ≠ V, then there exists v1 ∈ V with v1 ∉ W.
We claim that the set B1 = {w1, w2, … , wm, v1} is a linearly
independent set.
Suppose α1w1 + … + αmwm + β1v1 = θ, where α1, … , αm, β1 are
scalars.
Then β1 = 0, because if β1 ≠ 0 then
v1 = (−1/β1)(α1w1 + … + αmwm) ∈ [B] = W,
which is a contradiction.
Therefore, α1w1 + … + αmwm = θ.
But B is a linearly independent set, so αi = 0 for all 1 ≤ i ≤ m.
Thus, α1w1 + … + αmwm + β1v1 = θ ⟹ αi = 0 for all i, β1 = 0.
That is, the set B1 = {w1, w2, … , wm, v1} is a linearly independent
set.
Now, if [B1] = V then B1 is a basis for V.
If [B1] ≠ V then there exists v2 ∈ V such that v2 ∉ [B1] = W1 (say).
With the same argument as above, we can say that the set
B2 = {w1, w2, … , wm, v1, v2} is a linearly independent set.
If [B2] = V then B2 is a basis for V; otherwise continue the above
process.
Since V is a finite dimensional vector space, this process must end after a
finite number of steps.
Note: Every finite dimensional vector space has a basis.