Clear explanation of vector spaces

Uploaded by Mahesh Abnave


R2 is represented by the usual x-y plane; the two components of the vector become the x and y

coordinates of the corresponding point. The three components of a vector in R3 give a point in

three-dimensional space. The one-dimensional space R1 is a line.

Within all vector spaces, two operations are possible:

o We can add any two vectors, and

o we can multiply all vectors by scalars.

In other words, we can take linear combinations.

A real vector space is a set of vectors together with rules for vector addition and multiplication

by real numbers. Addition and multiplication must produce vectors in the space, and they must

satisfy the eight conditions:

o 𝑥+𝑦 =𝑦+𝑥

o 𝑥 + (𝑦 + 𝑧) = (𝑥 + 𝑦) + 𝑧

o There is a unique “zero vector” such that 𝑥 + 0 = 𝑥 for all 𝑥

o For each 𝑥 there is a unique vector −𝑥 such that 𝑥 + (−𝑥) = 0

o 1𝑥 = 𝑥

o (𝑐₁𝑐₂)𝑥 = 𝑐₁(𝑐₂𝑥)

o 𝑐(𝑥 + 𝑦) = 𝑐𝑥 + 𝑐𝑦

o (𝑐₁ + 𝑐₂)𝑥 = 𝑐₁𝑥 + 𝑐₂𝑥
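As a quick illustration (not part of the definition), a few of the eight axioms can be spot-checked numerically for vectors in R2. The helper names `add` and `scale` below are illustrative choices for the two operations, not standard library functions:

```python
# Spot-check a few of the eight vector-space axioms for R^2, using plain
# Python tuples as vectors. add/scale are illustrative helper names.

def add(x, y):
    return tuple(a + b for a, b in zip(x, y))

def scale(c, x):
    return tuple(c * a for a in x)

x, y = (1.0, 2.0), (3.0, -4.0)
c1, c2 = 2.0, 5.0
zero = (0.0, 0.0)

assert add(x, y) == add(y, x)                                   # x + y = y + x
assert add(x, zero) == x                                        # x + 0 = x
assert scale(c1 * c2, x) == scale(c1, scale(c2, x))             # (c1 c2)x = c1(c2 x)
assert scale(c1, add(x, y)) == add(scale(c1, x), scale(c1, y))  # c(x+y) = cx + cy
assert scale(c1 + c2, x) == add(scale(c1, x), scale(c2, x))     # (c1+c2)x = c1x + c2x
```

A numeric check like this is only a sanity test on particular vectors; the axioms themselves are statements about all vectors and all scalars.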

Three important examples of vector spaces are as follows:

o The infinite-dimensional space R∞

o The space of 3 by 2 matrices. In this case the “vectors” are matrices! This space is

almost the same as R6. (The six components are arranged in a rectangle instead of a

column.)

o The space of functions f(x). Here we admit all functions f that are defined on a fixed interval, say 0 ≤ x ≤ 1. The space includes f(x) = x², g(x) = sin x, their sum (f+g)(x) = x² + sin x, and all multiples like 3x² and −sin x. The vectors are functions, and the dimension is somehow a larger infinity than for R∞.

Subspace

A subspace of a vector space is a nonempty subset that satisfies the requirements for a vector

space: Linear combinations stay in the subspace.

i. If we add any vectors x and y in the subspace, x+y is in the subspace.

ii. If we multiply any vector x in the subspace by any scalar c, cx is in the subspace.

Thus, a subspace is a subset that is “closed” under addition and scalar multiplication.

Example: Geometrically, think of the usual three-dimensional space R3 and choose any plane through the origin. That plane is a vector space in its own right. If we multiply a vector in the plane by 3, or −3, or any other scalar, we get a vector in the same plane. If we add two vectors in the plane, their sum stays in the plane. This plane through (0,0,0) illustrates one of the most fundamental ideas in linear algebra; it is a subspace of the original space R3.
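To make the plane example concrete, here is a small numeric sketch. A plane through the origin in R3 can be written as all x with n · x = 0; the particular normal vector n below is an arbitrary choice for illustration:

```python
import numpy as np

# A plane through the origin in R^3: all x with n . x = 0.
# The normal vector n is an arbitrary example choice.

n = np.array([1.0, -2.0, 3.0])

def in_plane(x, tol=1e-12):
    return abs(n @ x) < tol

# two vectors that lie in the plane (n . x = 0 for both)
x = np.array([2.0, 1.0, 0.0])
y = np.array([3.0, 0.0, -1.0])
assert in_plane(x) and in_plane(y)

assert in_plane(x + y)                        # the sum stays in the plane
assert in_plane(3 * x) and in_plane(-3 * x)   # scalar multiples stay too
```

The same two checks, applied to arbitrary x, y, and c, are exactly requirements (i) and (ii) above.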

Note: the zero vector belongs to every subspace. That comes from rule (ii): choose the scalar c = 0. This means that if a subspace takes the form of a line, the line must pass through the origin.

The smallest subspace Z contains only one vector, the zero vector. It is a “zero dimensional

space,” containing only the point at the origin. Rules (i) and (ii) are satisfied, since the sum 0+0

is in this one-point space, and so are all multiples c0.

The smallest subspace is the one containing only the zero vector, since the empty set is not allowed. The largest subspace is the whole of the original space.

Not a subspace: an example

Consider all vectors in R2 whose components are positive or zero. This is the first quadrant of the xy-plane. It is not a subspace: although the sum of any two such vectors stays in the quadrant, scalar multiplication does not: −1·[1, 1] = [−1, −1].

If we include the third quadrant along with the first, scalar multiplication is all right, but addition still fails: [1, 2] + [−2, −1] = [−1, 1], which is not in either quadrant.
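The counterexample can be checked mechanically. The membership helpers below are illustrative names written for this sketch, not standard functions:

```python
import numpy as np

# The first quadrant of R^2 is closed under addition but not under
# scalar multiplication, so it fails requirement (ii).

def in_first_quadrant(v):
    return bool(np.all(v >= 0))

v = np.array([1.0, 1.0])
assert in_first_quadrant(v + v)          # addition stays in the quadrant
assert not in_first_quadrant(-1 * v)     # but -1*[1,1] = [-1,-1] leaves it

# Adding the third quadrant fixes scalar multiples but breaks addition:
def in_union(v):
    return bool(np.all(v >= 0) or np.all(v <= 0))

s = np.array([1.0, 2.0]) + np.array([-2.0, -1.0])   # = [-1, 1]
assert not in_union(s)                   # the sum escapes both quadrants
```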

Column space

The column space of an m×n matrix A contains all linear combinations of the columns of A, i.e. all attainable right-hand sides b. It is a subspace of Rm, since each vector in the column space has as many components as A has rows.

The system Ax = b is solvable if and only if the vector b can be expressed as a combination of the columns of A, that is, if and only if b is in the column space.

Proof that C(A) is a subspace:

i. Suppose b and b′ lie in the column space, so that Ax = b for some x and Ax′ = b′ for some x′. Then A(x + x′) = b + b′, so b + b′ is also a combination of the columns. The column space of all attainable vectors b is closed under addition.

ii. If b is in the column space C(A), so is any multiple cb. If some combination of the columns produces b (say Ax = b), then multiplying that combination by c produces cb. In other words, A(cx) = cb.
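The two closure steps can be replayed numerically for any concrete matrix; the particular A, x, and x′ here are arbitrary examples:

```python
import numpy as np

# Numeric illustration of the closure proof: if Ax = b and Ax' = b',
# then A(x + x') = b + b' and A(cx) = cb.

A = np.array([[1.0, 0.0],
              [5.0, 4.0],
              [2.0, 4.0]])                   # any m x n matrix works

x, xprime = np.array([1.0, 2.0]), np.array([-1.0, 3.0])
b, bprime = A @ x, A @ xprime                # two vectors known to be in C(A)

assert np.allclose(A @ (x + xprime), b + bprime)   # so b + b' is in C(A)
assert np.allclose(A @ (7.0 * x), 7.0 * b)         # and so is cb
```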

The column space always contains n + 1 obvious vectors:

i. Each of the n columns itself: set that column's unknown to 1 and all the others to zero.

ii. The zero vector: set all unknowns to zero.

The smallest possible column space (one vector only) comes from the zero matrix A = 0. The only combination of its columns is b = 0.

The largest possible column space is all of Rn: for any n×n nonsingular matrix A, the column space is the whole of Rn, since Ax = b can be solved by Gaussian elimination for every b.

For an easy example, take A to be the 5×5 identity matrix. Then C(I) is the whole of R5; the five columns of I combine to produce any 5-dimensional vector b.

Nullspace

In Ax = b, the right hand side b = 0 always allows the solution x = 0, but there may be infinitely

many solutions.

There are always infinitely many solutions when there are more unknowns than equations, n > m. (To pin down all n unknowns, at least n independent equations are required. Fewer than n equations leave some unknowns unrestricted; these free unknowns can take any value, giving infinitely many solutions.)

The nullspace N(A) of an m×n matrix A consists of all vectors x such that Ax = 0. It is a subspace of Rn (because each vector in the nullspace has n components, one per column of A), just as the column space is a subspace of Rm.

The column space and nullspace of a matrix with independent columns

For an invertible matrix, the nullspace contains only x = 0 (multiply Ax = 0 by A⁻¹), and the column space is the whole space (Ax = b has a solution for every b).

A rectangular matrix cannot be invertible, but the same idea applies when its columns are independent. Consider the 3×2 matrix

A = [ 1 0 ]
    [ 5 4 ]
    [ 2 4 ]

in the system

[ 1 0 ] [ u ]   [ 0 ]
[ 5 4 ] [ v ] = [ 0 ]
[ 2 4 ]         [ 0 ]

The first equation gives u = 0, and the second equation then forces v = 0. The nullspace contains only the vector (0, 0). The column space is not all of R3: two columns can span at most a two-dimensional subspace, so C(A) is the plane in R3 spanned by the columns (1, 5, 2) and (0, 4, 4).
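A quick numeric check of this example. Note that with only two columns, the column space of a 3×2 matrix is at most a two-dimensional plane inside R3, never all of R3:

```python
import numpy as np

# The 3 x 2 example matrix: independent columns, so N(A) = {0},
# and C(A) is only a two-dimensional plane inside R^3.

A = np.array([[1.0, 0.0],
              [5.0, 4.0],
              [2.0, 4.0]])

assert np.linalg.matrix_rank(A) == 2     # full column rank -> nullspace is {0}

# the least-squares solution of Ax = 0 is the zero vector
x, *_ = np.linalg.lstsq(A, np.zeros(3), rcond=None)
assert np.allclose(x, 0.0)

# a right-hand side off the column plane, e.g. b = column1 x column2,
# is orthogonal to both columns and cannot be reached exactly
b = np.cross(A[:, 0], A[:, 1])
assert not np.allclose(A @ np.linalg.lstsq(A, b, rcond=None)[0], b)
```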

The column space and nullspace of a noninvertible / singular matrix

For a singular matrix, the nullspace contains more than the zero vector and/or the column space contains fewer than all vectors b.

This has two implications:

o Any vector xn in the nullspace can be added to a particular solution xp. The solutions to all linear equations have this form, x = xp + xn:

Complete solution: Axp = b and Axn = 0 produce A(xp + xn) = b.

o When the column space does not contain every b in Rm, we need the conditions on b that make Ax = b solvable.

Example (1×1 system)

The 1 by 1 system 0x = b, one equation and one unknown, shows both possibilities:

o 0x = b has no solution unless b = 0. The column space of the 1 by 1 zero matrix contains only b = 0.

o 0x = 0 has infinitely many solutions. The nullspace contains all x. A particular solution is xp = 0, and the complete solution is x = xp + xn = 0 + (any x).

Example (2×2 system)

Consider the singular matrix

A = [ 1 1 ]
    [ 2 2 ]

in the system of equations

[ 1 1 ] x = [ b1 ]
[ 2 2 ]     [ b2 ]

o There is no solution unless b2 = 2b1. Thus the column space contains only those b which are multiples of (1, 2).

o The nullspace of A contains (−1, 1) and all its multiples xn = (−c, c).

When b2 = 2b1, there are infinitely many solutions. The particular solution to x + y = 2 and 2x + 2y = 4 is xp = (1, 1), and the complete solution is

x = xp + xn = [ 1 ] + c [ −1 ] = [ 1 − c ]
              [ 1 ]     [  1 ]   [ 1 + c ]
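These claims check out numerically; this sketch uses the specific solvable right-hand side b = (2, 4):

```python
import numpy as np

# Verify the complete solution x = xp + c*xn for the singular 2x2 system.

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
b = np.array([2.0, 4.0])          # satisfies b2 = 2*b1, so Ax = b is solvable

xp = np.array([1.0, 1.0])         # particular solution
xn = np.array([-1.0, 1.0])        # nullspace direction

assert np.allclose(A @ xp, b)     # xp solves Ax = b
assert np.allclose(A @ xn, 0.0)   # xn solves Ax = 0

for c in (-2.0, 0.0, 0.5, 3.0):
    assert np.allclose(A @ (xp + c * xn), b)   # every xp + c*xn solves Ax = b
```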

Example

Let us add another column (the sum of the first two) to the matrix in the earlier example, to make it singular:

[ 1 0 1 ] [  c ]   [ 0 ]
[ 5 4 9 ] [  c ] = [ 0 ]
[ 2 4 6 ] [ −c ]   [ 0 ]

Thus the nullspace of the new matrix contains the vector (1, 1, −1) and automatically contains any multiple (c, c, −c). The nullspace is a line through the origin, as any subspace must be.
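A quick verification of the dependent third column and the resulting nullspace line:

```python
import numpy as np

# The enlarged matrix: third column = column1 + column2, so the vector
# (c, c, -c) is sent to zero and the nullspace is the line through (1, 1, -1).

A = np.array([[1.0, 0.0, 1.0],
              [5.0, 4.0, 9.0],
              [2.0, 4.0, 6.0]])

assert np.allclose(A[:, 2], A[:, 0] + A[:, 1])          # dependent column
assert np.allclose(A @ np.array([1.0, 1.0, -1.0]), 0.0) # (1,1,-1) in N(A)
for c in (2.0, -3.5):
    assert np.allclose(A @ np.array([c, c, -c]), 0.0)   # so is every multiple
assert np.linalg.matrix_rank(A) == 2                    # one free column -> a line
```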

Echelon form

During elimination on a square matrix, if a pivot position turns out to be zero (and no row exchange can repair it), we conclude that the matrix is singular and the process stops. With a rectangular matrix, a zero in a pivot position does not make the matrix "singular", and elimination continues.

The echelon form has the following properties:

o The pivots are the first nonzero entries in their rows.

o Below each pivot is a column of zeroes, obtained by elimination.

o Each pivot lies to the right of the pivot in the row above. This produces the staircase

pattern.

o Zero rows come last.

The only difference between the two forms is that in echelon form the entries above the pivots may or may not be zero, while in reduced row echelon form the entries above the pivots are strictly zero.

We can simplify a row echelon form U to the reduced row echelon form R by dividing each row by its pivot (making the pivot equal to 1) and then using each pivot row to produce zeros above its pivot.

Below we convert a rectangular row echelon matrix to reduced row echelon form:

[ 1 3 3 2 ]    [ 1 3 3 2 ]    [ 1 3 0 −1 ]
[ 0 0 3 3 ] → [ 0 0 1 1 ] → [ 0 0 1  1 ] = R
[ 0 0 0 0 ]    [ 0 0 0 0 ]    [ 0 0 0  0 ]

The reduced row echelon form of a square invertible matrix is the identity matrix. There is a full set of pivots, all equal to 1, with zeros above and below.
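The reduction U → R can be sketched in code. This is a minimal reduced-row-echelon routine written for this document (not a library function), with a simple largest-entry pivot choice standing in for exact arithmetic:

```python
import numpy as np

# Minimal reduced-row-echelon sketch: scale each pivot to 1, then clear
# the entries both below and above it.

def rref(A, tol=1e-12):
    R = A.astype(float)
    m, n = R.shape
    row = 0
    for col in range(n):
        if row == m:
            break
        pivot = max(range(row, m), key=lambda r: abs(R[r, col]))
        if abs(R[pivot, col]) < tol:
            continue                        # no pivot here: a free column
        R[[row, pivot]] = R[[pivot, row]]   # move the pivot row up
        R[row] /= R[row, col]               # make the pivot equal to 1
        for r in range(m):                  # zero out the rest of the column
            if r != row:
                R[r] -= R[r, col] * R[row]
        row += 1
    return R

U = np.array([[1, 3, 3, 2],
              [0, 0, 3, 3],
              [0, 0, 0, 0]])
R = rref(U)   # rows become [1, 3, 0, -1], [0, 0, 1, 1], [0, 0, 0, 0]
```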

IMP: Finding the nullspace from the pivots in the reduced row echelon form R of A

Steps

1. After reaching Rx = 0, identify the pivot variables and free variables.

2. Give one free variable the value 1, set the other free variables to 0, and solve Rx=0 for the pivot

variables. This x is a special solution.

3. Every free variable produces its own “special solution” by step 2. The combinations of special

solutions form the nullspace—all solutions to Ax = 0.

From R, we can quickly find the nullspace of A. Rx = 0 has the same solutions as Ux = 0 and Ax =

0.

In the reduced system Rx = 0, the free variables can take any values, while the values of the pivot variables are determined by the free variables.

Thus, to find the most general solution to Rx=0 (or, equivalently, to Ax=0) we may assign

arbitrary values to the free variables and then find the corresponding values of the pivot

variables.

Example

Consider the reduced matrix R from above, acting on the unknowns (u, v, w, y):

R = [ 1 3 0 −1 ]
    [ 0 0 1  1 ]
    [ 0 0 0  0 ]

The pivot variables are u and w; the free variables are v and y. The values of the pivot variables can be determined in terms of the free variables as follows:

Rx = 0:  u + 3v − y = 0  yields  u = −3v + y
         w + y = 0       yields  w = −y

There is a “double infinity” of solutions, with v and y free and independent.

The complete solution is a combination of two "special solutions":

Nullspace contains             [ −3v + y ]     [ −3 ]     [  1 ]
all combinations           x = [    v    ] = v [  1 ] + y [  0 ]
of special solutions           [   −y    ]     [  0 ]     [ −1 ]
                               [    y    ]     [  0 ]     [  1 ]

The special solution (−3, 1, 0, 0) has free variables v = 1, y = 0. The other special solution (1, 0, −1, 1) has v = 0 and y = 1. All solutions in the nullspace are linear combinations of these two.

The systems corresponding to the two special solutions are:

Putting v = 1, y = 0 gives x = (−3, 1, 0, 0):

[ 1 3 0 −1 ] [ −3 ]   [ 0 ]
[ 0 0 1  1 ] [  1 ] = [ 0 ]
[ 0 0 0  0 ] [  0 ]   [ 0 ]
             [  0 ]

Putting v = 0, y = 1 gives x = (1, 0, −1, 1):

[ 1 3 0 −1 ] [  1 ]   [ 0 ]
[ 0 0 1  1 ] [  0 ] = [ 0 ]
[ 0 0 0  0 ] [ −1 ]   [ 0 ]
             [  1 ]

Or we can collect the special solutions as the columns of a nullspace matrix

N = [ −3  1 ]
    [  1  0 ]
    [  0 −1 ]
    [  0  1 ]
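Both special solutions, and hence all their combinations, can be verified against R at once:

```python
import numpy as np

# Every column of the nullspace matrix N solves Rx = 0,
# so every combination N @ (v, y) does too.

R = np.array([[1.0, 3.0, 0.0, -1.0],
              [0.0, 0.0, 1.0,  1.0],
              [0.0, 0.0, 0.0,  0.0]])
N = np.array([[-3.0,  1.0],
              [ 1.0,  0.0],
              [ 0.0, -1.0],
              [ 0.0,  1.0]])

assert np.allclose(R @ N, 0.0)                            # both special solutions
assert np.allclose(R @ (N @ np.array([2.0, -5.0])), 0.0)  # and any combination
```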

Dimension of the nullspace

Within the four-dimensional space of all possible vectors x, the solutions to Ax = 0 form a two-dimensional subspace, the nullspace of A. With one free variable the nullspace would be a line; with two it is a plane, and with additional free variables it becomes an even larger subspace of n-dimensional space.

Thus the nullspace has the same "dimension" as the number of free variables and special solutions.

The number of columns of the nullspace matrix N equals the number of free variables (one special solution per free variable), and the number of rows of N equals n, the number of unknowns. To fill in the pivot-variable entries of the special solutions, reverse the signs of the entries in the free (non-pivot) columns of R.

With x = (u, v, w, y):

Rx = [ 1 3 0 −1 ] x = 0   and   N = [ −3  1 ]  u: not free
     [ 0 0 1  1 ]                   [  1  0 ]  v: free
     [ 0 0 0  0 ]                   [  0 −1 ]  w: not free
                                    [  0  1 ]  y: free

If there are more unknowns than equations, there are more solutions than the trivial one

Suppose a matrix has more columns than rows, n > m. Since m rows can hold at most m pivots, there must be at least n − m free variables. There will be even more free variables if some rows of R reduce to zero; but no matter what, at least one variable must be free. Each free variable can be assigned any value, producing its own special solution. This leads to the following conclusion:

If Ax = 0 has more unknowns than equations (n > m), it has at least one special solution: there are more solutions than the trivial x = 0.
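A quick numeric illustration with a 2×3 matrix. The SVD is just one convenient way to produce a nullspace vector; any method works:

```python
import numpy as np

# A 2 x 3 example (n = 3 unknowns > m = 2 equations): at least
# n - m = 1 free variable, hence a nonzero solution to Ax = 0.

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# the last right-singular vector spans the nullspace of this full-rank-2 matrix
x = np.linalg.svd(A)[2][-1]

assert np.allclose(A @ x, 0.0)     # a genuine solution of Ax = 0
assert np.linalg.norm(x) > 0.5     # and it is not the trivial x = 0
```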

