
DEPARTMENT OF MATHEMATICS

STEWART SCIENCE COLLEGE, CUTTACK-753001


ODISHA

A CASE STUDY ON LINEAR ALGEBRA


PROJECT SUBMITTED FOR THE PARTIAL FULFILMENT OF
THE REQUIREMENT FOR THE DEGREE OF BACHELOR OF
SCIENCE IN MATHEMATICS
(2020-2023)

SUBMITTED BY:
PALLAVI JOSHI
ROLL NO: 2002010620020142
UNDER THE GUIDANCE OF
DR.(MRS) KASTURI RAY
LECTURER
DEPARTMENT OF MATHEMATICS

CERTIFICATE

This is to certify that Miss Pallavi Joshi, bearing University Roll No. 2002010620060142 of +3 3rd year Degree Science of Stewart Science College, Cuttack, has successfully completed her final semester project work entitled "A Case Study on Linear Algebra" for the Department of Mathematics, Stewart Science College, Cuttack, Odisha, under the guidance of Dr. (Mrs) Kasturi Ray, as a partial fulfilment of the academic curriculum. She has worked on this project for the period from 15th February to 30th April 2023.

External Examiner Internal Examiner

Date: Date:

ACKNOWLEDGEMENT

With great pleasure, I offer my heartfelt gratitude and indebtedness to my esteemed supervisor, Dr. (Mrs) Kasturi Ray, Lecturer in Mathematics, Stewart Science College, for her learned guidance and supervision. She has been a perennial source of inspiration all through this piece of investigation and the completion of the dissertation.

I sincerely express my thanks to Mr. Prashanta Kumar Mohanty, Head of the Department of Mathematics, Stewart Science College, for the facilities rendered by him for the present research and for his encouragement in the successful completion of the study. I also place on record my sincere gratitude to my revered teachers.

I would like to thank my parents for their blessings, and I cannot thank my friends enough for supporting me throughout the investigation. I thank all the members of the Department of Mathematics who have provided me steady support throughout the completion of the investigation.

Last, but not the least, I would like to thank all those who are directly or indirectly involved in this academic endeavour.

PALLAVI JOSHI

CERTIFICATE FROM H.O.D

This is to certify that Miss Pallavi Joshi, bearing University Roll No. 2002010620060142 of +3 3rd year Degree Science of Stewart Science College, has done a project work entitled "A Case Study on Linear Algebra" during the academic session 2022-2023 under my guidance and has completed it successfully.

The project work submitted by her as partial fulfilment of the B.Sc. degree syllabus in Mathematics encompasses the detailed project work carried out by her and is entirely her own. I wish her all success in life.

H. O. D. OF MATHEMATICS

BONAFIDE CERTIFICATE

This is to certify that Miss Pallavi Joshi, bearing University Roll No. 2002010620060142, of +3 3rd year Degree Science of Stewart Science College, Cuttack, has done the project work entitled "A Case Study on Linear Algebra" during the academic session 2022-2023 under my guidance and supervision. Her project work is original and genuine.

I find it complete and worthwhile for submission.

Academic Guide

Date:

DECLARATION

I do hereby declare that the project work entitled "A Case Study on Linear Algebra", submitted by me under the guidance of Dr. (Mrs) Kasturi Ray to Utkal University, Odisha, in partial fulfilment of the requirements for the Degree of Bachelor of Science, is my own work and has not been submitted to any other university.

Place: Cuttack Pallavi Joshi

Date: +3 3rd Year Science

Roll No.2002010620060142

Department Of Mathematics,

Stewart Science College, Cuttack

ABSTRACT

Linear algebra is a branch of mathematics that deals with the study of linear
equations and their properties. It involves the use of matrices, vectors, and
linear transformations to solve problems in a wide range of fields such as
physics, engineering, economics, and computer science.

One of the main objectives of linear algebra is to understand the structure of


linear equations and how they can be represented using matrix operations. This
involves the study of concepts such as systems of linear equations, matrix
algebra, determinants, and eigenvalues and eigenvectors.

Linear algebra has a wide range of applications in modern science and


technology. For example, it is used in the development of algorithms for
machine learning and computer vision, in the analysis of data sets in statistics
and data science, and in the simulation of physical systems in engineering and
physics.

Overall, the study of linear algebra provides a powerful set of tools for solving
complex problems in a variety of disciplines, making it an essential area of
study for anyone interested in applied mathematics and its applications.

CONTENTS

1. Chapter no. 01
   1.1. Introduction
   1.2. Aims and objectives of linear algebra
   1.3. Scope of study of linear algebra
   1.4. History of linear algebra

2. Chapter no. 02
   2.1. Definition of vector space
   2.2. n-tuples
   2.3. Pointwise addition and scalar multiplication in the set of all functions from S to F
   2.4. Examples of vector spaces
   2.5. Subspaces
   2.6. Sum and direct sum
   2.7. Range space and null space

3. Chapter no. 03
   3.1. Linear combination
   3.2. Linear dependence
   3.3. Linear independence
   3.4. Some important theorems

4. Chapter no. 04
   4.1. Applications of vector space
   4.2. Conclusion
   4.3. References

CHAPTER NO. 01
1.1: INTRODUCTION
Linear algebra is central to almost all areas of mathematics, such as geometry and functional analysis. Its concepts are a crucial prerequisite for understanding the theory behind machine learning. Most machine learning models can be expressed in matrix form, and a dataset itself is often represented as a matrix. Linear algebra is used in data processing, data transformation and model evaluation. It is also used to study financial trading strategies and expectations: financial conditions are examined via matrix equations using rank, column space and null space arguments. It is a computer-friendly form of mathematics, since the discrete representation of systems in matrix-vector form is compatible with the discrete, digital architecture of electronic devices. It is the primary mathematical computation tool in artificial intelligence and in many other areas of science and engineering. Many real-life applications of linear algebra involve the calculation of speed, distance or time. It is used for projecting a three-dimensional view onto a two-dimensional plane, handled by linear maps. It is used to create ranking algorithms in search engines such as Google. The linear regression model is used to predict data related to decision making, medical diagnosis, statistical inference, etc.

We have studied R2 and R3. We have also defined the two operations of vector addition and scalar multiplication on them, along with certain properties. This can be done in a more general way, i.e., we may start with any set V (in place of R2 and R3) and convert V into a vector space by introducing 'addition' and 'scalar multiplication' in such a way that they have all the basic properties which vector addition and scalar multiplication have in R2 and R3. We will prove a number of results about the general vector space V. These results will be true for all vector spaces, no matter what the elements are.

1.2: AIMS AND OBJECTIVES OF LINEAR ALGEBRA


The aims and objectives of linear algebra include:

1. To study the properties of vector spaces: Linear algebra aims to understand


the properties of vector spaces, which are collections of objects that can be
added and multiplied by scalars. These properties include the dimension, basis,
and subspaces of a vector space.

2. To study linear transformations: Linear algebra aims to understand the
behavior of linear transformations, which are functions that preserve vector
addition and scalar multiplication. This includes the concepts of eigenvalues
and eigenvectors, which are used to analyze the behavior of linear
transformations.

3. To solve systems of linear equations: Linear algebra aims to develop methods


for solving systems of linear equations, which are used in a wide range of
applications, from physics and engineering to economics and social sciences.

4. To study matrices and determinants: Linear algebra aims to understand the


properties of matrices and determinants, which are used to represent linear
transformations and solve systems of linear equations.

5. To develop applications in various fields: Linear algebra aims to develop


applications in various fields, including physics, engineering, computer science,
and data analysis. This includes the development of algorithms for numerical
linear algebra and the use of linear algebra in machine learning and computer
graphics.

Overall, the aims and objectives of linear algebra are to develop a deep
understanding of the mathematical properties of linear equations and
transformations, and to apply this understanding to a wide range of real-world
problems.

1.3: SCOPE OF STUDY OF LINEAR ALGEBRA


Linear algebra is a branch of mathematics that deals with linear equations, linear
functions, and their representations in vector spaces and matrices. The scope of
study of linear algebra includes:

1. Vector Spaces: Linear algebra studies vector spaces, which are collections of
objects that can be added together and multiplied by scalars. Vector spaces are
used to represent physical quantities such as velocity, acceleration, and force.

2. Matrices: Linear algebra deals with matrices, which are rectangular arrays of
numbers that can be added, subtracted, and multiplied. Matrices are used to
represent systems of linear equations and transformations.

3. Linear Transformations: Linear algebra studies linear transformations, which


are functions that preserve vector addition and scalar multiplication. Linear

transformations are used to describe geometric transformations such as rotation,
reflection, and scaling.

4. Eigenvalues and Eigenvectors: Linear algebra deals with eigenvalues and


eigenvectors, which are used to study the behavior of linear transformations.
Eigenvalues and eigenvectors are used to analyze the stability and convergence
of systems of differential equations.

5. Applications in Computer Science: Linear algebra has many applications in


computer science, including computer graphics, machine learning, and
cryptography.

Overall, linear algebra is a fundamental branch of mathematics that has


applications in many fields, including physics, engineering, economics, and
computer science.

1.4: HISTORY OF LINEAR ALGEBRA


The history of linear algebra dates back to ancient times, with the use of
systems of linear equations in ancient China and Greece.

However, the modern development of linear algebra as a mathematical


discipline can be traced back to the 19th century.

In the 1800s, mathematicians began to develop the theory of matrices and


determinants, which are central to the study of linear algebra. In 1844,
Augustin-Louis Cauchy introduced the concept of determinants in his work on
linear equations. A few years later, James Joseph Sylvester introduced the term
"matrix" to refer to rectangular arrays of numbers. In the late 1800s, William
Rowan Hamilton developed the theory of quaternions, which are a type of
hypercomplex number system that extends the concepts of vectors and matrices.

The early 20th century saw the development of abstract algebra, which provided
a rigorous foundation for linear algebra. In 1901, David Hilbert introduced the
concept of abstract vector spaces, which are used to study linear
transformations. The work of Emmy Noether and others in the early 20th
century laid the groundwork for the development of modern algebraic
structures, including groups, rings, and fields.

During World War II, linear algebra became an important tool for solving
systems of equations in physics and engineering. The development of electronic

computers in the postwar era led to the widespread use of linear algebra in
scientific computing and data analysis. In the latter half of the 20th century,
linear algebra became a core subject in mathematics and computer science
curricula, and it continues to be an active area of research and application today.

CHAPTER NO. 02
2.1: DEFINITION OF VECTOR SPACE
Let V be a set on which we have defined two operations: (i) vector addition (denoted by '+'), which combines two elements of V, and (ii) scalar multiplication (denoted by '.'), which combines a scalar from a field F with an element of V. Then V, together with these two operations, is a vector space over the field F if the following ten properties hold.

The elements of the field F are called 'scalars' and the elements of the vector space V are called 'vectors'.

• u + v ∈ V ∀ u, v ∈ V. [Closure Property of Addition]
• αu ∈ V ∀ α ∈ F and u ∈ V. [Closure Property of Scalar Multiplication]
• u + v = v + u ∀ u, v ∈ V. [Commutative Property of Addition]
• u + (v + w) = (u + v) + w ∀ u, v, w ∈ V. [Associative Property of Addition]
• There exists 0 ∈ V such that u + 0 = u ∀ u ∈ V; 0 is called the zero vector. [Existence of Additive Identity]
• For each element u ∈ V, there exists an element u' ∈ V such that u + u' = 0 = u' + u. The element u' is called the additive inverse of u and is written as -u. [Existence of Additive Inverse]
• α(βu) = (αβ)u ∀ α, β ∈ F and u ∈ V. [Associative Property of Scalar Multiplication]
• α(u + v) = αu + αv ∀ α ∈ F and u, v ∈ V. [Distributive Property over Vector Addition]
• (α + β)u = αu + βu ∀ α, β ∈ F and u ∈ V. [Distributive Property over Scalar Addition]
• 1.u = u ∀ u ∈ V. [Multiplicative Identity]

2.2: n-tuples:
An object of the form (a1,a2,a3,.....an), where the entries a1,a2,a3,.....an are
elements of a field F, is called an n-tuple with entries from F. The elements
a1,a2,a3,.....an are called entries or the components of the n-tuple.

Equality of n-tuple: Two n-tuples (a1,a2,a3,.....an) and (b1,b2,b3,.....bn) with
entries from a field F are called equal if ai=bi ∀ i=1,2,3,.....n.

Notation of tuples: The set of all n-tuples with entries from a field F is
denoted by Fn or Vn. Vn is called real vector space if it is defined over R, the
set of real numbers.

The vectors in Fn may be written as column vectors.

Coordinate-wise addition and scalar multiplication in Fn:

• If u = (a1, a2, a3, ....., an) ∈ Fn and v = (b1, b2, b3, ....., bn) ∈ Fn, then
u + v = (a1+b1, a2+b2, a3+b3, ....., an+bn). This is called coordinate-wise addition.
• If c ∈ F and u ∈ Fn, then cu = (ca1, ca2, ca3, ....., can). This is called coordinate-wise scalar multiplication.

For example,
• (2,3,7,4,8) + (8,-5,-7,2,3) = (2+8, 3+(-5), 7+(-7), 4+2, 8+3) = (10,-2,0,6,11)
• 3(8,9,1,2) = (24,27,3,6)

2.3:POINTWISE ADDITION AND SCALAR


MULTIPLICATION IN THE SET OF ALL FUNCTIONS
FROM S TO F:
Let S be any non-empty set and F be any field.
Let G(S, F) denote the set of all functions from S to F.
If f and g are in G(S, F), then f and g are equal if f(s) = g(s) for each s ∈ S.
Pointwise addition: If f, g ∈ G(S, F), then f + g is the function defined by
(f + g)(s) = f(s) + g(s) ∀ s ∈ S.
Scalar multiplication: For α ∈ F and f ∈ G(S, F), let αf be the function given by
(αf)(s) = α·f(s) ∀ s ∈ S.

2.4:EXAMPLES OF VECTOR SPACE:


1. Prove that the set G(S, F) is a vector space with the operations of
addition and scalar multiplication.

Solution. Since S is non-empty, G(S, F) is non-empty.

We now need to verify that G(S, F) satisfies all the conditions of a vector space.

VS 1: Let f, g ∈ G(S, F). Then ∀ s ∈ S,
(f + g)(s) = f(s) + g(s)
= g(s) + f(s)
= (g + f)(s)
Hence f + g = g + f.
VS 2: Let f, g, h ∈ G(S, F).
To show that (f + g) + h = f + (g + h).
Now, for all s ∈ S,
((f + g) + h)(s) = (f + g)(s) + h(s) = (f(s) + g(s)) + h(s) and
(f + (g + h))(s) = f(s) + (g + h)(s)
= f(s) + (g(s) + h(s))
But f(s), g(s), h(s) are scalars in the field F, where addition of scalars is associative.
Hence (f(s) + g(s)) + h(s) = f(s) + (g(s) + h(s))
Accordingly, (f + g) + h = f + (g + h).
VS 3: Let 0 denote the zero function, i.e., 0(s) = 0 ∀ s ∈ S.
Then for any function f ∈ G(S, F), we have
(f + 0)(s) = f(s) + 0(s)
= f(s) + 0
= f(s) ∀ s ∈ S
Hence f + 0 = f, and 0 is the zero vector (additive identity) in G(S, F).
VS 4: For any function f ∈ G(S, F),
let -f be the function defined by (-f)(s) = -[f(s)] ∀ s ∈ S.
Then (f + (-f))(s) = f(s) + (-f)(s)
= f(s) - f(s)
= 0
= 0(s) ∀ s ∈ S
Hence f + (-f) = 0, so the additive inverse of f is -f.
VS 5: Let f ∈ G(S, F).
Then, for 1 ∈ F, (1f)(s) = 1·f(s)
= f(s) ∀ s ∈ S
Hence 1f = f.
VS 6: Let f ∈ G(S, F) and a, b ∈ F.
Then ∀ s ∈ S,
((ab)f)(s) = (ab)f(s) = a(bf(s))
= a((bf)(s))
= (a(bf))(s)
Hence (ab)f = a(bf).
VS 7: Let f, g ∈ G(S, F) and a ∈ F.
Then ∀ s ∈ S,
(a(f + g))(s) = a((f + g)(s))
= a(f(s) + g(s))
= af(s) + ag(s)
= (af)(s) + (ag)(s)
= (af + ag)(s)
Hence a(f + g) = af + ag.
VS 8: Let f ∈ G(S, F) and a, b ∈ F.
Then ∀ s ∈ S,
((a + b)f)(s) = (a + b)f(s)
= af(s) + bf(s)
= (af)(s) + (bf)(s)
= (af + bf)(s)
Hence (a + b)f = af + bf.
Since all the conditions are satisfied, G(S, F) is a vector space.

2. Show that P(F), the set of all polynomials with coefficients from a field F, is a vector space.

Solution. Let f(x)=anxn+an-1xn-1+…..+a1x+a0,


g(x)= bmxm+bm-1xm-1+…..+b1x+b0 and
h(x)= ckxk+ck-1xk-1+…..+c1x+c0, where k≤m≤n.
If m≤n, define, bm+1=bm+2=…..=bn=0
If k≤m, define, ck+1=ck+2=…..=cm=0
Then g(x) can be written as g(x)= bnxn+bn-1xn-1+…..+b1x+b0 and h(x) can be
written as h(x)= cnxn+cn-1xn-1+…..+c1x+c0
VS 1:Since addition operation on P(F) is commutative, therefore f + g = g + f
VS 2:Since addition operation on P(F) is associative, therefore (f+g)+h=f+(g+h)
VS 3: The polynomial 0 is the additive identity, i .e.,f+0=f
VS 4: g(x) = - f(x)=-anxn-an-1xn-1+…..-a1x-a0, is the additive inverse of f(x).
So, f(x) + g(x) = 0
i.e., f+g=0  g = - f .
VS 5: 1f(x) = f(x)

VS 6: If a,b  F,
then, (ab) f(x)=(ab)( anxn+an-1xn-1+…..+a1x+a0)
=a[(ban)xn+(ban-1)xn-1+…..+(ba1)x+(ba0)]
=a(bf(x))
VS 7:∀ a  F ,
a(f(x)+g(x))=a[(an+ bn)xn +(an-1 +bn-1 )xn-1 +...+(a1+b1)x+(a0+b0)]
= [a(anxn)+a(an-1xn-1 )+…..+a(a1x)+aa0 ]+
[a(bnxn) + a(bn-1xn-1)+…..+a(b1x)+ab0]
= af(x) + ag(x)
VS 8: ∀a ,b  F
(a + b)f(x) = af(x) + bf(x)
Hence, properties VS 1 to VS 8 hold for P(F).
Hence, P(F) is a vector space.

3. Show that the set V of all sequences in a field F is a vector space.

Solution. Let V consist of all sequences {an} in F that have only a finite number of nonzero terms.
If {an} and {bn} are in V and t ∈ F,
we define {an} + {bn} = {an + bn} and t{an} = {tan}.
We now verify that properties VS 1 to VS 8 hold for V.
VS 1: {an} + {bn} = {an+ bn}
= {bn+an} = {bn} + {an}
VS 2: If {an}, {bn}, {cn} are in V, we have
({an} + {bn}) + {cn} = {an + bn} + {cn}
= {(an + bn) + cn}
= {an + (bn + cn)}
= {an} + {bn + cn}
= {an} + ({bn} + {cn})
VS 3: The zero sequence {0} is the additive identity i.e,{an}+{0}={an+0}= {an}
VS 4: For every sequence {an} , there exists a sequence {- an} ,
such that, {an} + {- an} = {an + (- an)} = {0}
VS 5: 1{an} = {1an} = {an}
VS 6: ∀ a,b  F,
(ab){an}= {(ab)an} = {a(ban)} = a{ban}
VS 7:∀ a  F,
a({an}+{bn})=a({an+bn})=a{an+bn}={aan+abn}={aan}+{abn}
=a{an}+a{bn}

VS 8: ∀ a,b  F,
(a + b){an} = {(a+b)an}
={aan+ban}
={aan}+{ban}
=a{an}+b{an}
Since VS 1 to VS 8 hold, therefore V is a vector space.

4. Let S = {(x1, x2) : x1, x2 ∈ R}. For (x1, x2), (y1, y2) ∈ S and c ∈ R, define
(x1, x2) + (y1, y2) = (x1 + y1, x2 - y2) and c(x1, x2) = (cx1, cx2). Is S a vector space? If not, mention the properties which do not hold.

Solution:
VS 1: Let x1, x2, y1, y2 ∈ R
⇒ (x1, x2) ∈ S, (y1, y2) ∈ S
(x1, x2) + (y1, y2) = (x1 + y1, x2 - y2) and
(y1, y2) + (x1, x2) = (y1 + x1, y2 - x2),
⇒ (x1 + y1, x2 - y2) ≠ (y1 + x1, y2 - x2) in general,
since x2 - y2 ≠ y2 - x2. Thus, VS 1 (commutativity) fails to hold.
VS 2: Let (x1, x2), (y1, y2), (z1, z2) ∈ S.
{(x1, x2) + (y1, y2)} + (z1, z2)
= (x1 + y1, x2 - y2) + (z1, z2)
= ((x1 + y1) + z1, (x2 - y2) - z2)
= (x1 + (y1 + z1), x2 - (y2 + z2))
and (x1, x2) + {(y1, y2) + (z1, z2)} = (x1, x2) + (y1 + z1, y2 - z2)
= (x1 + (y1 + z1), x2 - (y2 - z2))
Now (x1 + (y1 + z1), x2 - (y2 + z2)) ≠ (x1 + (y1 + z1), x2 - (y2 - z2)) in general,
since x2 - (y2 + z2) ≠ x2 - (y2 - z2).
VS 2 (associativity) fails to hold.
VS 8: Let a, b ∈ R.
⇒ (a + b)(x1, x2) = ((a + b)x1, (a + b)x2), by definition,
= (ax1 + bx1, ax2 + bx2), and
a(x1, x2) + b(x1, x2) = (ax1, ax2) + (bx1, bx2) = (ax1 + bx1, ax2 - bx2), by definition.
So (a + b)(x1, x2) ≠ a(x1, x2) + b(x1, x2) in general.
VS 8 fails to hold.
Hence, S is not a vector space.

2.5: SUBSPACES:
A non-empty subset W of a vector space V over a field F is called a subspace of V if W is itself a vector space over F with the operations of addition and scalar multiplication defined on V.

• In a vector space V, both V and {0} are subspaces.
• {0} is called the zero subspace of V.
• A subset W of a vector space V is a subspace of V if and only if the following four properties hold:
1. x + y ∈ W ∀ x, y ∈ W [W is closed under addition]
2. cx ∈ W ∀ c ∈ F, x ∈ W [W is closed under scalar multiplication]
3. 0 ∈ W, i.e., W contains the zero vector.
4. Each vector in W has an additive inverse in W.

EXAMPLES OF SUBSPACE:
1. Determine whether the following sets are subspaces of R3 under the operations of addition and scalar multiplication defined on R3:
(a) W1 = {(a1, a2, a3) ∈ R3 : a1 = 3a2 and a3 = -a2}
(b) W2 = {(a1, a2, a3) ∈ R3 : 2a1 - 7a2 + a3 = 0}

Solution:
(a) W1 = {(3a2, a2, -a2) : a2 ∈ R}
Taking a2 = 0, we see that (0, 0, 0) ∈ W1.
So W1 ≠ ∅.
Let x ∈ W1, y ∈ W1,
x = (3a2, a2, -a2) and y = (3b2, b2, -b2) for some a2, b2 ∈ R.
x + y = (3a2 + 3b2, a2 + b2, -a2 - b2)
= (3(a2 + b2), (a2 + b2), -(a2 + b2))
= (3c2, c2, -c2) ∈ W1, where c2 = (a2 + b2) ∈ R.
So x ∈ W1, y ∈ W1 ⇒ x + y ∈ W1.
Further, let c ∈ R, x ∈ W1.
cx = c(3a2, a2, -a2)
= (3ca2, ca2, -ca2)
= (3t1, t1, -t1) ∈ W1, where t1 = ca2 ∈ R.
Hence W1 is a subspace of R3.

(b) W2 = {(a1, a2, a3) ∈ R3 : 2a1 - 7a2 + a3 = 0}
Taking a1 = a2 = a3 = 0, we see that (0, 0, 0) ∈ W2.
Let x = (a1, a2, a3) ∈ W2 and y = (b1, b2, b3) ∈ W2, so that
2a1 - 7a2 + a3 = 0 and 2b1 - 7b2 + b3 = 0.
x + y = (a1 + b1, a2 + b2, a3 + b3)
Now, 2(a1 + b1) - 7(a2 + b2) + (a3 + b3)
= (2a1 - 7a2 + a3) + (2b1 - 7b2 + b3) = 0 + 0 = 0
⇒ x + y ∈ W2.
Further, if c ∈ R, then cx = (ca1, ca2, ca3).
Now, 2ca1 - 7ca2 + ca3 = c(2a1 - 7a2 + a3) = c·0 = 0
⇒ cx ∈ W2.
Hence, W2 is a subspace of R3.
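The closure argument for W2 can also be illustrated numerically. The sketch below is only an added illustration (it assumes Python with NumPy, which is not part of the original text, and checks a couple of sample vectors; it is not a proof):

import numpy as np

def in_W2(x):
    # Membership test for W2 = {(a1, a2, a3) : 2*a1 - 7*a2 + a3 = 0}
    return np.isclose(2 * x[0] - 7 * x[1] + x[2], 0.0)

# Two sample vectors in W2, built from a3 = 7*a2 - 2*a1
x = np.array([1.0, 1.0, 5.0])      # 2 - 7 + 5 = 0
y = np.array([3.0, 2.0, 8.0])      # 6 - 14 + 8 = 0

print(in_W2(x), in_W2(y))          # True True
print(in_W2(x + y))                # closed under addition -> True
print(in_W2(-4.5 * x))             # closed under scalar multiplication -> True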

2. Let V be the vector space of all functions from the field R into R.
If W= {f: f (7) =2+f (1)}, then is W a subspace of V?

Solution: Suppose f, g  W
Then f (7) =2+f(1) and
g (7)=2+g(1)
Then, (f+g) (7) = f (7) +g (7)
= 2 + f (1) + 2 + g(1)
= 4 + f(1) +g(1)
= 4 + (f + g)(1)
≠2+ (f+g) (1)
Hence, f +g is not in W
So, W is not a subspace of V.

3. Show that {0} is a subspace of the vector space V over F.

Solution. Let W = {0}.

W is non-empty.
0 + 0 = 0 ∈ {0} and c·0 = 0 ∈ {0} for every scalar c.
Thus, {0} is a subspace of V.

2.6: SUM AND DIRECT SUM:
Let S1 and S2 be non-empty subsets of a vector space V. Then the sum of S1 and S2, denoted by S1 + S2, is the set {x + y : x ∈ S1, y ∈ S2}.
Let W1 and W2 be two subspaces of a vector space V. Then V is called the direct sum of W1 and W2 if W1 + W2 = V and W1 ∩ W2 = {0}.
We denote that V is the direct sum of W1 and W2 by writing V = W1 ⊕ W2.

2.7: THE RANGE SPACE AND THE NULL SPACE:


Let V and W be vector spaces over a field F, and let T : V → W be a linear transformation.
The range of T, denoted by R(T), is the set {T(x) : x ∈ V}.
The null space (or kernel) of T, denoted by ker T or N(T), is the set {x ∈ V : T(x) = 0}.
Note: R(T) ⊆ W and N(T) ⊆ V.
The rank of T is the dimension of the range of T, if R(T) is finite dimensional. It is denoted by r(T), rank(T) or ρ(T).
The nullity of T is the dimension of the kernel of T, if N(T) is finite dimensional. It is denoted by n(T) or nullity(T).

EXAMPLES OF RANGE SPACE AND NULL SPACE:

1. Let T : R3 → R3 defined by T(a1, a2, a3) = (a1 - a2 + 2a3, 2a1 + a2, -a1 - 2a2 + 2a3) be a linear transformation. Find R(T) and N(T).

Solution. To find R(T), we must find condition on y1, y2, y3  R,


so that (y1, y2, y3)  R(T)
i.e., we must find some (a1, a2, a3)R3 ,
so that (y1, y2, y3) = T(a1, a2, a3)
= (a1 –a2 +2a3, 2a1 +a2, -a1 -2a2 +2a3)
Thus,
a1 –a2 +2a3=y1, ……….(1)
2a1 +a2=y2, ……….(2)
-a1 -2a2 +2a3=y3 ……….(3)
From equation (1), (2), and (3), we observe that equation (2) + equation (3),
gives equation (1).
i.e., y2+y3=y1
[Otherwise, equation (2) - 2·equation (1)
⇒ 3a2 - 4a3 = y2 - 2y1 ……….(4)
and equation (1) + equation (3) gives -3a2 + 4a3 = y1 + y3 ……….(5)
equation (4) + equation (5) gives
y2 - 2y1 + y1 + y3 = 0
⇒ y2 + y3 = y1 ]
If y2 + y3 = y1, we can choose a3 = 0, a2 = (1/3)(y2 - 2y1) and
a1 = y1 + (1/3)(y2 - 2y1) = (1/3)(y1 + y2)
Then we have T(a1, a2, a3) = (y1, y2, y3).
Thus, y2 + y3 = y1 ⇒ (y1, y2, y3) ∈ R(T)
Hence,
R(T) = {(y1, y2, y3) ∈ R3 : y2 + y3 = y1}
Now,
(a1, a2, a3)  N(T) if and only if the following equations are true.
a1- a2+ 2a3 = 0 ……….(6)
2a1+ a2 = 0 ……….(7)
-a1- 2a2+ 2a3=0 ……….(8)
 a1=0, a2=0, a3=0 is a solution.
To check whether the solution is unique, we have
Equation (7) - 2·equation (6) gives
3a2 - 4a3 = 0 ⇒ a3 = (3/4)a2
Equation (6) - equation (8) gives
2a1 + a2 = 0 ⇒ a1 = -(1/2)a2
⇒ N(T) = {(-(1/2)a2, a2, (3/4)a2) : a2 ∈ R}
= {(-(1/2)α, α, (3/4)α) : α ∈ R}, taking a2 = α.
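The same conclusions can be reached numerically from the standard matrix of T. The following sketch is an added illustration (assuming Python with NumPy, not part of the original text); it recovers rank 2, so R(T) is the plane y1 = y2 + y3, and a one-dimensional null space in the direction (-1/2, 1, 3/4):

import numpy as np

# Standard matrix of T(a1, a2, a3) = (a1 - a2 + 2a3, 2a1 + a2, -a1 - 2a2 + 2a3)
A = np.array([[ 1, -1, 2],
              [ 2,  1, 0],
              [-1, -2, 2]], dtype=float)

print(np.linalg.matrix_rank(A))            # 2, so dim R(T) = 2

# A basis for N(T) from the SVD: rows of Vt belonging to zero singular values
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
v = Vt[rank:][0]                            # the single null-space direction
print(v / v[1])                             # approx. (-0.5, 1, 0.75) = (-1/2, 1, 3/4)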
2. Let R R be the linear transformation T (a1, a2, a3) =(a1- a2, 2a3). Then
3 2

find N(T) and R(T).

Solution: Let (a1, a2, a3) ∈ N(T)

⇒ T(a1, a2, a3) = (0, 0) ∈ R2
⇒ (a1 - a2, 2a3) = (0, 0)
⇒ a1 - a2 = 0 and 2a3 = 0
⇒ a1 = a2 and a3 = 0
Let a1 = a (any arbitrary value)
⇒ a2 = a1 = a and a3 = 0

N(T) = {(a, a, 0) : a ∈ R}
Now to find R(T).
For this, we must find conditions on b1, b2 ∈ R so that (b1, b2) ∈ R(T),
i.e., we must find some (a1, a2, a3) ∈ R3
so that (b1, b2) = T(a1, a2, a3) = (a1 - a2, 2a3),
i.e., a1 - a2 = b1 and 2a3 = b2.
Taking a2 = 0, we get a1 = b1 and a3 = (1/2)b2.
⇒ T(a1, a2, a3) = (a1 - a2, 2a3) = (b1 - 0, 2·(1/2)b2) = (b1, b2)
R(T) = {(b1, b2) : b1, b2 ∈ R} = R2

3. Let T be the zero transformation. Find N(T) and R(T). Does 1 ∈ R(T)?

Solution: Let T : V → W be the zero transformation.

By definition,
T(v) = 0 ∀ v ∈ V
⇒ N(T) = {x ∈ V : T(x) = 0} = V.
R(T) = {T(x) : x ∈ V} = {0}
⇒ 1 ∉ R(T)

4. Prove that T : R2 → R3 defined by T(a1, a2) = (a1 + a2, 0, 2a1 - a2) is a linear transformation, and find bases of both N(T) and R(T). Then compute the nullity and rank of T.

Solution. First we show that T is linear.

Let c ∈ F, x = (a1, a2), y = (b1, b2), x, y ∈ R2.
T(cx + y) = T(ca1 + b1, ca2 + b2)
= (ca1 + b1 + ca2 + b2, 0, 2ca1 + 2b1 - ca2 - b2)
= (c(a1 + a2), 0, c(2a1 - a2)) + ((b1 + b2), 0, 2b1 - b2)
= cT(x) + T(y)
So T is linear.
Basis for N(T):
We have to find the set of all vectors (a1, a2) in R2 such that T(a1, a2) = (0, 0, 0),
i.e., (a1 + a2, 0, 2a1 - a2) = (0, 0, 0)
⇒ a1 + a2 = 0, 2a1 - a2 = 0
⇒ a1 = 0, a2 = 0
So N(T) = {(0, 0)}, the zero subspace, whose basis is the empty set.
Basis for R(T):
A basis for R2 is β = {(1, 0), (0, 1)}.

Under T, the images of the elements of β generate the range R(T) of T,
i.e., the generators of R(T) are
T(1, 0) = (1, 0, 2) and T(0, 1) = (1, 0, -1)
R(T) = span(T(β))
= span{T(1, 0), T(0, 1)}
= span{(1, 0, 2), (1, 0, -1)}
These two vectors are linearly independent, so they form a basis for R(T).
Nullity(T) = dim(N(T)) = 0
Rank(T) = dim(R(T)) = 2

CHAPTER NO. 03
3.1: LINEAR COMBINATION:
Let V be a vector space and S be a non-empty subset of V.
A vector v in V is called a linear combination of vectors of S if there exist a finite number of vectors u1, u2, u3, ….., un ∈ S and scalars α1, α2, α3, ….., αn ∈ F such that
v = α1u1 + α2u2 + α3u3 + ….. + αnun

EXAMPLES OF LINEAR COMBINATION:


1. Let us examine whether the vector (4, 11, 13, 22) can be expressed as a
linear combination of u1= (1, 1, 2, 2),u2 = (2, 3, 5, 6) and u3 = (- 3, 1, - 4, 2)

Solution. We have to determine scalars α1, α2, α3 such that


(4, 11, 13, 22) =α1 (1, 1, 2, 2) + α2 (2, 3, 5, 6) + α3 (- 3, 1, - 4, 2)
=(α1 ,α1 ,2α1 ,2α1 )+(2α2 ,3α2 ,5α2 ,6α2 )+(-3α3 ,α3 ,-4α3 ,2α3 )
=(α1 +2α2 -3α3 ,α1 +3α2 +α3 ,2α1 +5α2 -4α3 ,2α1 +6α2 +2α3 )
α1 +2α2 -3α3 =4, α1 +3α2 +α3 =11, 2α1 +5α2 -4α3 =13, 2α1 +6α2 +2α3 =22
Solving the above four equations ,we get: α1=1, α2=3, α3=1
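The four equations form an overdetermined but consistent linear system, so the coefficients can also be recovered numerically. A small added sketch (assuming Python with NumPy, not part of the original text):

import numpy as np

# Columns are u1, u2, u3; we solve M @ [a1, a2, a3] = v
M = np.array([[1, 2, -3],
              [1, 3,  1],
              [2, 5, -4],
              [2, 6,  2]], dtype=float)
v = np.array([4, 11, 13, 22], dtype=float)

coeffs, residual, rank, _ = np.linalg.lstsq(M, v, rcond=None)
print(np.round(coeffs, 6))           # [1. 3. 1.]
print(np.allclose(M @ coeffs, v))    # True: (4, 11, 13, 22) = 1*u1 + 3*u2 + 1*u3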

2. For each of the following vectors in R3, determine whether the first vector can be expressed as a linear combination of the other two:
(a) (-2, 0, 3), (1, 3, 0), (2, 4, -1)
(b) (3, 4, 1), (1, -2, 1), (-2, -1, 1)

Solution:
(a) Let v=a1u1+a2u2 where a1 ,a2  R, where v=(- 2,0,3),u1=(1,3,0) ,u2 =(2,4,-1)
i. e .,(-2,0,3)=a1 (1,3,0)+a2 (2,4,-1)
= (a1 +2a2, 3a1+4a2, -a2)

Here (-2, 0, 3) can be expressed as a linear combination of u1 and u2 if and only
if there is an ordered pair (a1 , a2 ) of scalars satisfying the system of linear
equations
a1 +2a2=-2, 3a1+4a2 =0, -a2=3
Solving system, we obtain : a1 = 4, a2 = - 3
Therefore, (- 2, 0, 3) = 4(1, 3, 0) + (- 3)(2, 4, - 1) so that, (-2, 0, 3) is a linear
combination of (1, 3, 0) and (2, 4, -1)

(b) Let v=a1u1+a2u2 where a1 ,a2  R


i.e., (3, 4, 1) = a1(1, -2, 1) + a2(-2, -1, 1)
= (a1 - 2a2, -2a1 - a2, a1 + a2)
Now we examine whether there exist scalars a1, a2
such that a1 - 2a2 = 3, -2a1 - a2 = 4, a1 + a2 = 1.
The system of equations has no solution: different pairs of equations force different values of a2, so the system is inconsistent.
Hence (3, 4, 1) can't be expressed as a linear combination of the vectors (1, -2, 1) and (-2, -1, 1).

3.2:LINEAR DEPENDENCE:
A subset S of a vector space V is called linearly dependent if there exist a finite
number of distinct vectors u1 ,u2 ,u3 ,…..un in S and scalars a1 ,a2 ,a3 ,…..an not
all zero, such that a1 u1 +a2 u2 +a3 u3 +…..+an un = 0, in this case we say that the
vectors are linearly dependent.

EXAMPLES OF LINEAR DEPENDENCE VECTORS:


1. Decide which of the following are linearly dependent:
(a)S1={(1,-1,2),(1,-2,1),(1,1,4)} in R3
(b)S2={(1,-2,1),(2,1,-1),(7,-4,1)} in R3

Solution:

(a) Set a linear combination of the vectors u1 = (1,-1,2), u2 = (1,-2,1), u3 = (1,1,4) equal to the zero vector, for unknown scalars α, β, γ:
αu1 + βu2 + γu3 = 0
α(1,-1,2) + β(1,-2,1) + γ(1,1,4) = (0,0,0)
⇒ (α, -α, 2α) + (β, -2β, β) + (γ, γ, 4γ) = (0,0,0)
⇒ (α+β+γ, -α-2β+γ, 2α+β+4γ) = (0, 0, 0)
⇒ α+β+γ = 0, -α-2β+γ = 0, 2α+β+4γ = 0
Solving, we get (α, β, γ) = (-3γ, 2γ, γ).
In particular, setting γ = 1, the triple (α, β, γ) = (-3, 2, 1) is a non-zero solution.
Therefore, -3(1,-1,2)+2(1,-2,1)+1(1,1,4)=(0,0,0)
This shows that the system has a non-zero solution.
So, the set S1 is linearly dependent.

(b)Let α, β, γ be scalars such that


α(1,-2,1)+β(2,1,-1)+γ(7,-4,1)=(0,0,0)
 (α, -2α, α)+(2β, β, -β)+(7γ, -4γ, γ)=(0,0,0)
 (α+2β+7γ, -2α+β-4γ, α-β+γ)=(0,0,0)
 α+2β+7γ=0, -2α+β-4γ=0, α-β+γ=0
Solving the above equations, we get: α=-3, β=-2, γ=1
Thus, -3(1,-2,1)-2(2,1,-1)+(7,-4,1)=(-3-4+7, 6-2-4, -3+2+1)=(0,0,0)=0
So, the set S2 is linearly dependent.
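Both conclusions can be checked by a rank computation: a set of n vectors is linearly dependent exactly when the matrix formed from them has rank less than n. An added sketch, assuming Python with NumPy (not part of the original text):

import numpy as np

def is_dependent(vectors):
    # A finite set of vectors is linearly dependent iff rank < number of vectors
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) < len(vectors)

S1 = [(1, -1, 2), (1, -2, 1), (1, 1, 4)]
S2 = [(1, -2, 1), (2, 1, -1), (7, -4, 1)]
print(is_dependent(S1), is_dependent(S2))   # True True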

2. Show that the set {x3 + 4x2 - 2x + 3, x3 + 6x2 – x + 4, 3x3+8x2-8x+7 } in


P3(R) is linearly dependent.
Solution: Suppose there exist scalars a, b, c such that
a(x3 + 4x2 - 2x + 3)+b(x3 + 6x2 – x + 4)+c(3x3+8x2-8x+7)=0
(a+b+3c)x3 + (4a+6b+8c)x2 +(-2a-b-8c)x+(3a+4b+7c)=0
Setting the coefficients of the powers of x each equal to 0, we have
(a+b+3c)=0, (4a+6b+8c)=0, (-2a-b-8c)=0, (3a+4b+7c)=0
Solving the above equations, we get: a = -5c, b = 2c, with c arbitrary.
Taking c = 1, i.e., a = -5, b = 2, c = 1, we have
-5(x3 + 4x2 - 2x + 3)+2(x3 + 6x2 – x + 4)+(3x3+8x2-8x+7)=0
This shows that the given set is linearly dependent.

3.3: LINEAR INDEPENDENCE:
A subset S of a vector space V is called linearly independent if, for any finite number of distinct vectors u1, u2, u3, ….., un in S and scalars a1, a2, a3, ….., an,
a1u1 + a2u2 + a3u3 + ….. + anun = 0
⇒ a1 = a2 = a3 = ….. = an = 0

EXAMPLES OF LINEARLY INDEPENDENT VECTORS:


1.Show that the set {x4-x3+5x2-8x+6, -x4+x3-5x2+5x-3, x4+3x2-3x+5,
2x4+3x3+4x2-x+1, x3-x+2 } in P4(R) is linearly independent.
Solution: Suppose there exist scalars a, b, c, d, e such that
a (x4-x3+5x2-8x+6) + b (-x4+x3-5x2+5x-3) + c (x4+3x2-3x+5) +
d(2x4+3x3+4x2-x+1) + e(x3-x+2)=0

(a-b+c+2d)x4 + (-a+b+3d+e)x3 + (5a-5b+3c+4d)x2 +(-8a+5b-3c-d-e)x +
(6a-3b+5c+d+2e)=0
Setting the coefficients of the powers of x each equal to 0, we have
(a-b+c+2d)=0, (-a+b+3d+e)=0, (5a-5b+3c+4d)=0, (-8a+5b-3c-d-e)=0,
(6a-3b+5c+d+2e)=0
Solving the system of equations, we get: a=b=c=d=e=0.
Hence, the given set is linearly independent in P4(R).
2. Let V be a vector space over a field F of characteristics not equal to 2.
(a) If u and v be distinct vectors in V. show that {u, v} is linearly
independent if and only if { u+v , u-v } is linearly independent.
(b) If u, v and w be distinct vectors in V, show that { u, v, w} is linearly
independent if and only if { u+v, u+w, v+w } is linearly independent.
Solution:
(a) Suppose
a(u+v) + b(u-v) = 0, a, b ∈ F
⇒ (a+b)u + (a-b)v = 0
⇒ (a+b) = 0, (a-b) = 0 [since {u, v} is linearly independent]
⇒ a = 0, b = 0
⇒ {u+v, u-v} is linearly independent.
Conversely, let {u+v, u-v} be linearly independent.
Now, let
au + bv = α(u+v) + β(u-v), where α, β ∈ F
⇒ α+β = a, α-β = b
⇒ α = (a+b)/2 and β = (a-b)/2
⇒ au + bv = ((a+b)/2)(u+v) + ((a-b)/2)(u-v)
So, if au + bv = 0,
⇒ (a+b)/2 = 0, (a-b)/2 = 0 [since {u+v, u-v} is linearly independent]
⇒ a = 0 = b
⇒ {u, v} is linearly independent.

(b) Let a(u+v) + b(u+w) + c(v+w) = 0, a, b, c ∈ F

⇒ (a+b)u + (a+c)v + (b+c)w = 0
⇒ a+b = 0, a+c = 0, b+c = 0 [since {u, v, w} is linearly independent]
Adding these in pairs gives 2a = 0, 2b = 0, 2c = 0, so a = 0 = b = c.
So, {(u+v), (u+w), (v+w)} is linearly independent.
Conversely, suppose {u+v, u+w, v+w} is linearly independent. We have
au + bv + cw
= ((a+b-c)/2)(u+v) + ((a-b+c)/2)(u+w) + ((-a+b+c)/2)(v+w)
So, if au + bv + cw = 0,
⇒ (a+b-c)/2 = 0 = (a-b+c)/2 = (-a+b+c)/2 [since {u+v, u+w, v+w} is linearly independent]
Adding the first two gives a = 0; similarly b = 0 and c = 0.
⇒ {u, v, w} is linearly independent.
3. Let S= {(1, 1, 0),(1, 0, 1),(0,1,1)} be a subset of the vector space F3.
(a) Prove that if F=R, then S is linearly independent
(b) Prove that if F has characteristic 2, then S is linearly dependent
Solution:
(a)Let a(1, 1, 0) + b(1, 0, 1) +c(0,1,1)=(0, 0, 0), where a, b, c  F
 (a+b, a+c, b+c)=(0,0,0)
 a+b=0, a+c=0, b+c=0
 a=b=c=0
S is linearly independent
(b) If a = b = c = 1, then
a(1, 1, 0) + b(1, 0, 1) + c(0, 1, 1)
= (1, 1, 0) + (1, 0, 1) + (0, 1, 1)
= (2, 2, 2) = (0, 0, 0), since 2 = 0 in a field of characteristic 2.
⇒ {(1, 1, 0), (1, 0, 1), (0, 1, 1)} is linearly dependent.

3.4: SOME IMPORTANT THEOREMS:


Theorem-1. (Cancellation law for vector addition)
If x, y, z ∈ V, where V is a vector space, and x + z = y + z, then x = y.

Proof. By the properties of a vector space, we know there exists a vector v ∈ V
such that z + v = 0 ……….(1)
x=x+0
=x+(z+v) [By equation (1)]
=(x+z) +v [By Associative Property of Vector space]
=(y+z) +v [By assumption]
=y+ (z+v) [By Associative Property of Vector space]
=y+0 [By equation (1)]
=y [Identity Property of Vector spaces]

Theorem-2. In any vector space V, the following statements are true:


(a) 0x = 0 ∀ x ∈ V
(b) (-a)x = -(ax) = a(-x) ∀ a ∈ F, x ∈ V
In particular, (-1)x = -x ∀ x ∈ V
(c) a0 = 0 ∀ a ∈ F

Proof. (a) By the properties of a vector space, we know (a+b)x = ax + bx ∀ a, b ∈ F, x ∈ V.
In particular,
0x = (0+0)x = 0x + 0x
⇒ 0 + 0x = 0x + 0x [by the identity property of a vector space]
⇒ 0x = 0 [by the cancellation law: x + z = y + z ⇒ x = y]

(b) We have
[a + (-a)]x = ax + (-a)x ……….(1)
Again,
[a + (-a)]x = 0x = 0 ……….(2)
(1) and (2) give
ax + (-a)x = 0
Hence (-a)x = -(ax), since the additive inverse of ax is unique.
In particular, (-1)x = -x.
Further, a(-x) = a[(-1)x]
= [a(-1)]x
= (-a)x

(c)a(x+y)= ax+ay
In particular, a(0 + 0)=a0 + a0
But 0 + 0 = 0
Hence, a.0 + a.0 = a.0
 a.0 + a.0 = a.0 + 0
a.0 = 0 [By cancellation law]

Theorem-3. Let V be a vector space and W be a subset of V. Then W is a subspace of V if and only if the following conditions are satisfied for the operations defined on V:
(a) 0 ∈ W
(b) x + y ∈ W whenever x ∈ W and y ∈ W
(c) cx ∈ W whenever c ∈ F and x ∈ W
Proof. Let W be a subspace of V. Then W is a vector space under the same operations as those of V. Hence, conditions (b) and (c) hold. For (a), W has a zero vector 0' such that
x + 0' = x ∀ x ∈ W
But we know, by the identity property of the vector space V, that
x + 0 = x
⇒ x + 0' = x + 0
⇒ 0' = 0 [by the cancellation law]
So condition (a) holds.
Conversely, let W be a non-empty subset of V satisfying conditions (a), (b) and (c).
In order to show that W is a subspace of V, we only need to prove that 0 ∈ W and that the additive inverse of each vector in W lies in W.
By (a), 0 ∈ W.
Further, if x ∈ W, then (-1)x ∈ W, by condition (c).
But -x = (-1)x
⇒ -x ∈ W
Hence, W is a subspace of V.

Theorem-4. Any intersection of subspaces of a vector space V is a subspace


of V.

Proof. Let C be a collection of subspaces of V and let W denote the intersection of the subspaces in C.
1. Since every subspace contains the zero vector, 0 ∈ W.
2. Let x, y ∈ W.
Then x and y are contained in each subspace in C.
Since each subspace in C is closed under addition, x + y is contained in each subspace in C.
⇒ x + y ∈ W whenever x, y ∈ W.
3. Let a ∈ F, x ∈ W.
Since each subspace in C is closed under scalar multiplication, ax is contained in each subspace in C.
⇒ ax ∈ W whenever a ∈ F, x ∈ W.
Hence, W is a subspace of V.

Theorem-5. The vector space V is the direct sum of its subspaces W1 and W2 if and only if (i) V = W1 + W2 and (ii) W1 ∩ W2 = {0}.

Proof. Suppose V = W1 ⊕ W2.

Then any v ∈ V can be uniquely written in the form
v = w1 + w2, w1 ∈ W1 and w2 ∈ W2.
Thus, in particular, V = W1 + W2.
Now suppose v ∈ W1 ∩ W2.
Then,
(i) v = v + 0, where v ∈ W1 and 0 ∈ W2, and
(ii) v = 0 + v, where 0 ∈ W1 and v ∈ W2.
Since such a sum for v must be unique, v = 0. Hence W1 ∩ W2 = {0}.
On the other hand, suppose
V = W1 + W2 and W1 ∩ W2 = {0}.
Let v ∈ V.

Since V = W1 + W2, there exist w1 ∈ W1 and w2 ∈ W2 such that v = w1 + w2.
We have to show that such a sum is unique.
Suppose there is another representation
v = w1' + w2', where w1' ∈ W1 and w2' ∈ W2.
Then w1 + w2 = w1' + w2'
⇒ w1 - w1' = w2' - w2
But w1 - w1' ∈ W1 and w2' - w2 ∈ W2.
Hence, w1 - w1' = w2' - w2 ∈ W1 ∩ W2 = {0}
⇒ w1 - w1' = 0 and w2' - w2 = 0
⇒ w1 = w1' and w2' = w2
Thus such a sum for v ∈ V is unique and V = W1 ⊕ W2.

Theorem-6. The span of any subset S of a vector space V is a subspace of V.


Further, any subspace of V that contains S must also contain the span of S.

Proof. We write [S] for the span of S. If S = ∅, then [S] = {0}, which is a subspace contained in every subspace of V.
If S ≠ ∅, then S contains a vector v.
So 0v = 0 ∈ [S].
Now we have to prove that [S] is closed under addition and scalar multiplication.
Let x, y ∈ [S].
Then,
x=a1u1 + a2u2 +……….+anun, for some scalars ai, some ui’s  S and a positive
integer n.
y=b1v1 + b2v2 +……….+bmvm, for some scalars bi, some vi’s  S and a positive
integer m.
Then, x+y = a1u1 + a2u2 +……….+anun+ b1v1 + b2v2 +……….+bmvm, which is a
finite linear combination of the vectors in S.
So, x+y  [S]
For any scalar α,
αx= (αa1)u1 + (αa2)u2 +……….+ (αan)un,
which is clearly a linear combination of the vectors in S.
So, αx  [S].
Hence, [S] is a subspace of V, which proves the first part.
For the second part, let T denote any subspace of V that contains S.
If t ∈ [S], then t = c1t1 + c2t2 + ………. + cktk, for some vectors t1, t2, ………., tk in S and some scalars c1, c2, ……., ck.
Since S ⊆ T and T is a subspace, with ti ∈ T ∀ i,
c1t1 + c2t2 + ………. + cktk ∈ T, i.e., t ∈ T.
We have proved that t ∈ [S] ⇒ t ∈ T.
Hence, [S] ⊆ T.

This completes the proof of the theorem.

Theorem-7. If S1 and S2 are subsets of a vector space V such that S1 ⊆ S2, then span(S1) ⊆ span(S2).

Moreover, if S1 ⊆ S2 and span(S1) = V, then span(S2) = V.

Proof. Given S1 ⊆ S2.
Let x ∈ span(S1)
⇒ x = a1u1 + a2u2 + ………. + anun,
for some u1, u2, ………., un ∈ S1 and scalars a1, a2, ………., an.
Since S1 ⊆ S2, we have u1, u2, ………., un ∈ S2, so
x = a1u1 + a2u2 + ………. + anun ∈ span(S2).
So, span(S1) ⊆ span(S2).
This proves the first part.
For the second part, suppose S1 ⊆ S2 and span(S1) = V.
By the first part, span(S1) ⊆ span(S2).
Also span(S2) ⊆ V, since V is a vector space containing S2.
Hence V = span(S1) ⊆ span(S2) ⊆ V, so span(S2) = V.
This completes the proof of the theorem.

Theorem-8. If 0 ∈ {u1, u2, ………., un}, a subset of the vector space V, then the set {u1, u2, ………., un} is linearly dependent.

Proof: Since 0 ∈ {u1, u2, ………., un}, 0 is one of the ui's.

We may assume that u1 = 0.
Then, 1·u1 + 0·u2 + ………. + 0·un
= 0 + 0 + ………. + 0 = 0
This expresses 0 as a linear combination of u1, u2, ………., un in which not all the scalars are zero (the coefficient of u1 is 1).
Hence, the set {u1, u2, ………., un} is linearly dependent.

Theorem-9. Let V be a vector space and let S1 ⊆ S2 ⊆ V. If S1 is linearly dependent, then S2 is linearly dependent.

Proof: Let S1 = {u1, u2, ………., uk} and S1 ⊆ S2 ⊆ V.

We want to show that S2 is linearly dependent.
If S1 = S2, there is nothing to prove.
Otherwise,
let S2 = S1 ∪ {v1, v2, ………., vm}, where m > 0,
= {u1, u2, ………., uk, v1, v2, ………., vm}.
S1 being linearly dependent, there exist scalars α1, α2, ....., αk, not all zero, such that α1u1 + α2u2 + ………. + αkuk = 0.
But then, α1u1 + α2u2 + ………. + αkuk + 0·v1 + 0·v2 + ………. + 0·vm = 0, with some αi ≠ 0.
Thus S2 is linearly dependent.
Now, what happens when one of the vectors in a set can be written as a linear combination of the other vectors in the set? The next theorem states that such a set is linearly dependent.

Theorem-10. Let S = {u1, u2, ………., un} be a subset of a vector space V.

Then S is linearly dependent if and only if some vector of S is a linear combination of the remaining vectors of S.
Proof: We have to prove that
(i) if some ui, say u1, is a linear combination of u2, u3, ………., un, then S is linearly dependent, and
(ii) if S is linearly dependent, then some ui is a linear combination of the remaining ui's.
Let us prove (i).
Suppose u1 is a linear combination of u2, u3, ………., un,
say u1 = α2u2 + α3u3 + ………. + αnun, for some scalars α2, α3, ....., αn.
Then u1 - α2u2 - α3u3 - ………. - αnun = 0 is a non-trivial linear relation (the coefficient of u1 is 1 ≠ 0).
⇒ S is linearly dependent.
We now prove (ii).
Since S is linearly dependent, there exist scalars α1, α2, ....., αn, not all zero, such that α1u1 + α2u2 + ………. + αnun = 0.
Since some αi ≠ 0, suppose αk ≠ 0, k ≤ n.
Then we have
αkuk = -α1u1 - α2u2 - ………. - αk-1uk-1 - αk+1uk+1 - ………. - αnun
⇒ uk = -(α1/αk)u1 - ………. - (αn/αk)un (the term with i = k omitted)
= β1u1 + ………. + βk-1uk-1 + βk+1uk+1 + ………. + βnun,
where βi = -(αi/αk), i = 1, 2, ………., n and i ≠ k.
Thus uk is a linear combination of u1, u2, ………., uk-1, uk+1, ………., un.

Theorem-11. Let V be a vector space and let S1 ⊆ S2 ⊆ V. If S2 is linearly independent, then S1 is linearly independent.

Proof. Suppose S2 ⊆ V is linearly independent and S1 ⊆ S2.

If possible, suppose S1 is not linearly independent.
Then S1 is linearly dependent; but then, by Theorem 9,
S2 is also linearly dependent, since S1 ⊆ S2.
This is a contradiction.
Hence, our supposition is wrong.
i.e., S1 is linearly independent.

Theorem-12. Let S be a linearly independent subset of a vector space V and let u be a vector in V which is not in S.

Then S ∪ {u} is linearly dependent if and only if u ∈ span(S).

Proof. Let S = {u1, u2, ………., un} and T = S ∪ {u}.

If T is linearly dependent, then there exist scalars α, α1, α2, ....., αn, not all zero, such that αu + α1u1 + α2u2 + ………. + αnun = 0.
Now if α = 0, this implies that there exist scalars α1, α2, ....., αn, not all zero, such that α1u1 + α2u2 + ………. + αnun = 0.
But this is impossible, as S is linearly independent.
Hence α ≠ 0.
But then
u = -(α1/α)u1 - (α2/α)u2 - ………. - (αn/α)un,
i.e., u is a linear combination of u1, u2, ………., un,
i.e., u ∈ span(S).
Conversely, let u ∈ span(S). Then there exist vectors u1, u2, ………., un in S and scalars b1, b2, ....., bn such that u = b1u1 + b2u2 + ………. + bnun.
Hence, 0 = b1u1 + b2u2 + ………. + bnun + (-1)u.
Since u ≠ ui for i = 1, 2, ………., n, this is a non-trivial relation among distinct vectors (the coefficient of u, namely -1, is not zero).
So {u1, u2, ………., un, u} is linearly dependent.
Therefore T = S ∪ {u} is linearly dependent, by Theorem 9.

Theorem-13. Let V and W be vector spaces and T : V → W be linear. Then N(T) and R(T) are subspaces of V and W respectively.

Proof. Let x, y ∈ N(T) and c ∈ F.
Let 0v and 0w denote the zero vectors of V and W respectively.
Since T(0v) = 0w, we have 0v ∈ N(T).
Now, T(x + y) = T(x) + T(y) = 0w + 0w = 0w
⇒ x + y ∈ N(T)
and T(cx) = cT(x) = c0w = 0w
⇒ cx ∈ N(T)
Hence, N(T) is a subspace of V.
Now to establish that R(T) is a subspace of W.
Since T(0v) = 0w, we have 0w ∈ R(T).
Now, let x, y ∈ R(T) and c ∈ F.
By definition of R(T), there exist vectors v and w in V
such that T(v) = x and T(w) = y.
So, x + y = T(v) + T(w) = T(v + w)
and cx = cT(v) = T(cv).
Hence, x + y ∈ R(T) and cx ∈ R(T)
⇒ R(T) is a subspace of W.

Theorem-14. Let V and W be vector spaces and let T : V → W be linear.

If β = {v1, v2, ………., vn} is a basis for V, then
R(T) = span(T(β)) = span({T(v1), T(v2), ………., T(vn)})

Proof. We have T(vi) ∈ R(T) for each i.

Since R(T) is a subspace of W, therefore
span{T(v1), T(v2), ………., T(vn)} = span(T(β)) ⊆ R(T) ……….(1)
[since any subspace that contains a set must also contain the span of that set]
Now suppose w ∈ R(T).
Then w = T(v), for some v ∈ V.
Since β = {v1, v2, ………., vn} is a basis for V, we have
v = a1v1 + a2v2 + ………. + anvn, ai ∈ F.
So, w = T(v) = a1T(v1) + a2T(v2) + ………. + anT(vn) ∈ span(T(β)).
So, R(T) ⊆ span(T(β)) ……….(2)
From (1) and (2), we have
R(T) = span(T(β))
= span{T(v1), T(v2), ………., T(vn)}

Theorem-15. (Dimension theorem)
Let V and W be two vector spaces and let T : V → W be linear, with V finite dimensional. Then nullity(T) + rank(T) = dim(V).
This theorem is also known as the Rank-Nullity theorem.
Proof. We have seen that N(T) is a subspace of V.

Since V is finite dimensional, N(T) is finite dimensional.
Let dim(N(T)) = k = nullity(T).
Let β1 = {v1, v2, ………., vk} be a basis for N(T).
Since {v1, v2, ………., vk} is a linearly independent subset of V,
we can extend it to form a basis for V.
Let dim(V) = n, n ≥ k.
Let β2 = {v1, v2, ……, vk, vk+1, …., vn} be a basis for V.
Consider the set B = {T(vk+1), T(vk+2), …., T(vn)}.
We claim that B is a basis for R(T).
(i) First we shall prove that span(B) = R(T).
Since span(β2) = V,
R(T) = span{T(v1), T(v2), ……, T(vk), T(vk+1), …., T(vn)} [by Theorem 14]
But vi ∈ N(T) for i = 1, 2, ……, k
⇒ T(vi) = 0, i.e., T(v1) = 0 = T(v2) = …… = T(vk)
⇒ R(T) = span{T(vk+1), T(vk+2), …., T(vn)} = span(B).
(ii) Now to show that B is linearly independent.
Let αk+1, αk+2, ………., αn be scalars such that
αk+1T(vk+1) + αk+2T(vk+2) + …. + αnT(vn) = 0 ……….(1)
⇒ T(αk+1vk+1 + αk+2vk+2 + …. + αnvn) = 0 (since T is linear)
⇒ αk+1vk+1 + αk+2vk+2 + …. + αnvn ∈ N(T)
⇒ αk+1vk+1 + αk+2vk+2 + …. + αnvn = c1v1 + c2v2 + …. + ckvk
(since each element of N(T) is a linear combination of the elements of β1)
⇒ c1v1 + c2v2 + …. + ckvk + (-αk+1)vk+1 + (-αk+2)vk+2 + …. + (-αn)vn = 0
Since β2 is a basis for V, β2 is linearly independent.
⇒ c1 = c2 = …. = ck = 0 and αk+1 = αk+2 = …. = αn = 0
⇒ B is linearly independent.
Hence B is a basis for R(T), and as the number of elements in B is n - k,
rank(T) = n - k.
Since β1 is a basis for N(T) and nullity(T) = k,
rank(T) = n - nullity(T)
⇒ nullity(T) + rank(T) = n = dim(V).
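The dimension theorem can be verified numerically for any linear map given by a matrix: the rank plus the dimension of the null space equals the number of columns, i.e. the dimension of the domain. An added sketch (assuming Python with NumPy, not part of the original text) with a randomly chosen matrix:

import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 6)).astype(float)   # a linear map T : R^6 -> R^4

rank = np.linalg.matrix_rank(A)
_, s, _ = np.linalg.svd(A)
nullity = A.shape[1] - int(np.sum(s > 1e-10))          # dim N(T)

print(rank, nullity, rank + nullity == A.shape[1])     # nullity(T) + rank(T) = dim V = 6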

CHAPTER NO. 04
4.1: APPLICATIONS OF VECTOR SPACE:
Here are some of the applications of linear algebra:

1. CRYPTOGRAPHY:
Cryptography is the study of encoding and decoding secret messages. In electronic transactions and communications, strong encryption methods are applied; these methods often involve modular arithmetic to encode and decode the messages, while simpler encoding methods can be built from matrix transformations.
As mentioned above, linear algebra can be used to encode and decode a secret message (a cryptographic method).

Let us understand it with an example.

Assign a specific number to each letter of the alphabet to encode a short secret message.
Then, the sequence of numbers for the text is organized in square matrix form (call it A).
[Note: if the message has fewer letters than the matrix has entries, fill the remaining entries of the matrix with zeros.]

Assume a nonsingular square matrix B. To encode the message, multiply matrix B with matrix A (i.e., form the product BA). Take the matrix B as:

B = ( 2  0  1
      1  0  1
      0  1  0 )

Here the text "BILA KOCKA" (a white cat) is changed into a matrix A, and to encode the text we compute:

Z = BA = ( 19  19  14
           12  15  11
            8  11   6 )
Now, to decode the message, multiply Z on the left by the inverse of B:

(  1  -1  0 ) ( 19  19  14 )
(  0   0  1 ) ( 12  15  11 ) = A
( -1   2  0 ) (  8  11   6 )

As matrix multiplication is not commutative, the order of the factors must be kept to recover the original message. If you multiply Z and B inverse in the opposite order, you obtain instead:

( 19  19  14 ) (  1  -1  0 )   ( 5   9  19 )
( 12  15  11 ) (  0   0  1 ) = ( 1  10  15 )
(  8  11   6 ) ( -1   2  0 )   ( 2   4  11 )

With the same letter assignment, this matrix decodes to a different message, "CERNY PSIK" (a black dog), rather than the original text.
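The encoding and decoding steps above amount to multiplying by B and then by B inverse on the left. The sketch below is only an added illustration (it assumes Python with NumPy, which is not part of the original text); the 3×3 message matrix A used here is a made-up stand-in, since the original text does not reproduce the actual letter matrix:

import numpy as np

B = np.array([[2, 0, 1],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)       # nonsingular key matrix from the text

# Hypothetical message matrix (letter codes arranged 3x3), for illustration only
A = np.array([[2,  9, 12],
              [1, 11, 15],
              [3, 11,  1]], dtype=float)

Z = B @ A                                     # encode
A_back = np.linalg.inv(B) @ Z                 # decode: multiply by B^-1 on the left
print(np.allclose(A_back, A))                 # True

# Order matters: Z @ inv(B) generally does NOT recover A
print(np.allclose(Z @ np.linalg.inv(B), A))   # False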

2. GAME THEORY
Game theory is another application of linear algebra. It is the mathematical study of the options available to players and of the choices they make during a game. Psychologists use the related theory of social interaction to consider a player's options against the other players in a competition.
Although game theory focuses on cards, board games, and other competitive games, it also applies to military strategy used in wars.

Let us understand it with an example.

➢ Rock, Paper, Scissors

This is one of the simplest examples of a zero-sum game. A payoff matrix is used, similar to the payoff matrix of the Prisoner's Dilemma.
How?
Suppose you need to count the two players' scores over several games. A point is added to a player's score with every win, and a point is subtracted with each loss. For a tie, no point is added to or subtracted from either score.
Then the payoff matrix will look as:

n = m = 3
P1 = P2 = {Rock, Paper, Scissors}

The payoff matrix on P1 × P2 is

(  0   1  -1 )
( -1   0   1 )
(  1  -1   0 )

This payoff matrix is skew-symmetric, which reflects the fact that the game is symmetric.
If a player wins, the other one loses; if both tie, neither player gains or loses a point.
That is what makes it a zero-sum game.
[Each positive entry is a gain for one player and an equal loss for the other.]
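The zero-sum structure can be read off from the payoff matrix itself: it is skew-symmetric (P equals the negative of its transpose), so whatever one player gains the other loses. A one-line check, added for illustration and assuming Python with NumPy (not part of the original text):

import numpy as np

# Payoff matrix from the text (rows and columns indexed by Rock, Paper, Scissors)
P = np.array([[ 0,  1, -1],
              [-1,  0,  1],
              [ 1, -1,  0]])

print(np.array_equal(P, -P.T))   # True: P is skew-symmetric (symmetric zero-sum game)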

3. APPLICATIONS OF LINEAR ALGEBRA IN REAL LIFE:

WHERE IS IT USED?
Linear algebra is widely used in the fields of mathematics, science, and engineering. Basically, it plays a vital role in determining unknown quantities. Below are some of the linear algebra concepts that are used in real life.

• Linear algebra is used to check the distribution of microwave energy in a microwave oven.
• It is used to create ranking algorithms in search engines such as Google, Yahoo, etc.
• It is used to recover codes that have been tampered with during processing or transmission.
• It is used for space studies.
• It is used for projecting a three-dimensional view onto a two-dimensional plane, handled by linear maps.
• It is used to examine digital signals and encode or decode them. These can be audio or video signals.
• It is used for optimization in the field of linear programming.
• It is used to check the energy levels of atoms.
If you are interested in computer science and want to know where linear algebra is used there, the next section lists six linear algebra applications that appear in computer science.

4. APPLICATIONS OF LINEAR ALGEBRA IN COMPUTER
SCIENCE:
Linear algebra is essential for things like:

• Pattern recognition.
• Graph theory (social graphs, for example).
• Data classification and clustering.
• Singular value decomposition for recommendation systems.
• Graphics programming.
• Various forms of artificial intelligence (AI).

5. APPLICATIONS OF LINEAR ALGEBRA IN


ENGINEERING:
Here are some of the applications of linear algebra in engineering:

• Solving systems of linear equations to model real-world engineering problems.
• Applying matrix operations to analyze and design control systems.
• Employing least squares regression to fit experimental data and make predictions (see the sketch after this list).
• Utilizing linear transformations to manipulate images and signals in digital signal processing.
• Applying linear algebra to design and analyze communication networks.
• Using linear algebra to optimize resource allocation and scheduling in project management.
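As a concrete illustration of the least-squares item above, the following added sketch (assuming Python with NumPy, with made-up data points, not part of the original text) fits a straight line y ≈ m·x + c to experimental data:

import numpy as np

# Made-up experimental data (x, y), for illustration only
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Design matrix for the model y = m*x + c
A = np.column_stack([x, np.ones_like(x)])
(m, c), *_ = np.linalg.lstsq(A, y, rcond=None)

print(round(m, 3), round(c, 3))   # fitted slope and intercept
print(A @ [m, c])                 # predictions at the observed x values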
6. HERE, SOME OTHER APPLICATIONS OF LINEAR
ALGEBRA ARE GIVEN AS:

• Ranking in Search Engines – One of the most important applications of linear algebra is in the creation of Google: its complex page-ranking algorithm is built with the help of linear algebra.
• Signal Analysis – It is massively used in encoding, analyzing and manipulating signals, which can be audio, video or images.
• Linear Programming – Optimization is an important application of linear algebra, widely used in the field of linear programming.
• Error-Correcting Codes – Linear algebra is used in coding theory. If encoded data is tampered with slightly, it can be recovered with the help of linear algebra. One important error-correcting code is the Hamming code.
• Prediction – Predictions are made using linear models that are developed using linear algebra.
• Facial Recognition – An automated facial recognition technique that uses linear algebra is called principal component analysis.
• Graphics – An important part of graphics is projecting a 3-dimensional scene onto a 2-dimensional screen, which is handled by linear maps, explained by linear algebra.

4.2: CONCLUSION:
In conclusion, linear algebra is a fundamental branch of mathematics that
studies linear equations and transformations, vector spaces, matrices, and
determinants. It has a wide range of applications in various fields, including
physics, engineering, economics, computer science, and data analysis.

Linear algebra provides a powerful set of tools for solving systems of linear
equations and representing geometric transformations. It also plays a central
role in the development of abstract algebraic structures, such as groups, rings,
and fields.

The development of electronic computers in the postwar era has led to the
widespread use of linear algebra in scientific computing and data analysis. The
development of algorithms for numerical linear algebra has made it possible to
solve large-scale systems of equations and perform complex calculations with
ease.

Overall, linear algebra is an essential part of modern mathematics and has


numerous applications in science, engineering, and technology. It continues to
be an active area of research and development, with new applications and
discoveries being made all the time.

4.3: REFERENCES:
Here are some references for a linear algebra project:

1. "Linear Algebra: A Modern Introduction" by David Poole

2. "Linear Algebra and Its Applications" by Gilbert Strang

3. "Introduction to Linear Algebra" by Serge Lang

4. "Linear Algebra Done Right" by Sheldon Axler

5. "Matrix Analysis and Applied Linear Algebra" by Carl Meyer

6. "Linear Algebra: A Geometric Approach" by Theodore Shifrin and Malcom
Adams

7. "Linear Algebra" by Georgi E. Shilov

8. "Linear Algebra and Geometry" by Irving Kaplansky

9. "Linear Algebra: A First Course with Applications" by Jeffery Holt

10. "Linear Algebra" by Kenneth Hoffman and Ray Kunze

11. "Applied Linear Algebra" by Peter J. Olver and Chehrzad Shakiban

12. "Linear Algebra and Its Applications" by Peter D. Lax

13. "Linear Algebra for Engineers and Scientists" by Kenneth Hardy

14. "Linear Algebra with Applications" by W. Keith Nicholson

15. "Linear Algebra: A Pure Mathematical Approach" by Harvey E. Rose

16. "A First Course in Linear Algebra" by Robert A. Beezer

17. "Linear Algebra: Theory and Applications" by Ward Cheney and David
Kincaid

18. "Linear Algebra: An Introduction to Abstract Mathematics" by Robert J.


Valenza

19. "Linear Algebra and Its Applications" by Richard Bronson and Gabriel
Costa

20. "Linear Algebra and Its Applications" by Peter Lax and Maria Shea Terrell

21. "Elementary Linear Algebra" by Howard Anton and Chris Rorres

These references can serve as a starting point for a linear algebra project and
provide a solid foundation for further exploration of the subject.

