
Linear Transformations

4.1

whether these vectors are linearly independent. This amounts to asking whether

x1 v1 + ... + xm vm = 0    (4.1.1)

has a non-trivial solution (the trivial solution is x1 = ... = xm = 0; anything else is a non-trivial solution). Let A be the n × m matrix whose column vectors are v1, ..., vm. Then, the above question is equivalent to asking whether

Ax = 0,  x = (x1, ..., xm)^T    (4.1.2)

has a non-trivial solution. This is nothing other than (3.5.1). Theorem 8 tells us exactly when there are non-trivial solutions. If the rank r is equal to m, then there is only the trivial solution, x = 0, and thus the vectors v1, ..., vm are linearly independent. Otherwise, the vectors are linearly dependent.

We state the above observation as a proposition.

Proposition 2. Let v1, ..., vm be vectors in R^n, and let A be the n × m matrix whose column vectors are v1, ..., vm. If the rank r of A is equal to m, then the vectors v1, ..., vm are linearly independent. If not, the vectors are linearly dependent.
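The rank test of Proposition 2 is easy to carry out numerically. A minimal sketch using NumPy, with a hypothetical set of three vectors in R^3 chosen for illustration (not taken from the text):

```python
import numpy as np

# Hypothetical example vectors in R^3 (illustration only).
v1, v2, v3 = [1, 0, 2], [0, 1, 1], [1, 1, 3]

# Stack the vectors as the columns of an n x m matrix A.
A = np.column_stack([v1, v2, v3])
m = A.shape[1]

# Proposition 2: independent if and only if rank(A) = m.
independent = np.linalg.matrix_rank(A) == m
print(independent)  # False, since v3 = v1 + v2
```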


Proposition 3. It is impossible to have more than n linearly independent vectors in R^n.

Proof. Suppose we have m > n linearly independent vectors. This implies that the n × m matrix A formed by these vectors must have rank m, by Proposition 2. But the row echelon form of A is also an n × m matrix, and can therefore have at most n pivot columns. Its rank is thus at most n. Since m > n, this is a contradiction.

Another consequence of Proposition 2 is the following.

Theorem 9. For an n × n matrix A, the following statements are equivalent.

1. A is invertible.

2. The column vectors of A are linearly independent.

3. The row vectors of A are linearly independent.

Proof. Item (2) is equivalent to the statement that

Ax = 0

(4.1.3)

has only the trivial solution. The equivalence of item 1 and item 2 thus

follows from Theorem 6. Since the row vectors of A are the column vectors

of AT , item 3 is equivalent to AT being invertible. So we have only to show

that the invertibility of A is equivalent to the invertibility of AT . Suppose

A is invertible. Then, there is a matrix B such that

AB = BA = I.

(4.1.4)

Let us now take the transpose of the above, and use (1.4.3).

B T AT = AT B T = I T = I.

(4.1.5)

Since the transpose of AT is A, we can repeat the same argument to show

that the invertibility of AT implies the invertibility of A.
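A quick numerical check of the three equivalent conditions in Theorem 9, on a hypothetical 3 × 3 matrix (a NumPy sketch; the matrix is not from the text):

```python
import numpy as np

# A hypothetical invertible 3 x 3 matrix.
A = np.array([[2., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
n = A.shape[0]

invertible = abs(np.linalg.det(A)) > 1e-12    # item 1
cols_indep = np.linalg.matrix_rank(A) == n    # item 2
rows_indep = np.linalg.matrix_rank(A.T) == n  # item 3
print(invertible, cols_indep, rows_indep)     # all three agree

# Taking transposes in AB = BA = I shows (A^T)^{-1} = (A^{-1})^T:
assert np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T)
```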

MATH 2574H, Yoichiro Mori

Now, suppose the rank r of A in (4.1.2) is smaller than m. Let us look at the situation a little further. As we did in Section 3.4, we label the columns with pivots as i1 < i2 < ... < ir and the columns without pivots as j1 < j2 < ... < j_{m-r}. According to Theorem 8, the solution to (4.1.2) can be written as:

x = c1 a1 + ... + c_{m-r} a_{m-r}.    (4.1.6)

Take any aℓ, ℓ = 1, ..., m-r. x = aℓ is a solution to (4.1.2), and this implies that the vector v_{jℓ} can be written as a linear combination of the v_{iq} with iq < jℓ. On the other hand, the vectors v_{i1}, ..., v_{ir} are linearly independent. Indeed, if v_{i1}, ..., v_{ir} were linearly dependent, there would be a nontrivial solution to (4.1.2) such that x_{jℓ} = 0 for all jℓ. But x_{jℓ} = 0 for all ℓ implies cℓ = 0 in (4.1.6), a contradiction. Let us put this into a proposition.

Proposition 4. Suppose we have an n × m matrix A whose rank is r, and suppose we reduce A to row echelon form R. The r column vectors of A corresponding to the pivot columns of R are linearly independent, and the rest of the column vectors of A can be written as linear combinations of these r vectors.

Example 8. Consider the column vectors of the 4 × 5 matrix (3.5.5). The rank of this matrix is 3, as can be seen from (3.5.6). Looking at the row echelon form (3.5.6) of this matrix, we see that column vectors 1, 2 and 5 are linearly independent:

v1 = (1, 1, 2, 0)^T,  v2 = (2, 3, 2, 3)^T,  v5 = (1, 3, 2, 3)^T.    (4.1.7)

The column vectors v3 and v4 are expressed as:

v3 = v1 + v2,  v4 = v1 - v2.    (4.1.8)
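The pivot-column computation behind Proposition 4 can be done with SymPy, whose rref() returns the pivot column indices directly. A sketch using stand-in columns that mimic the relations of Example 8 (assumed values; the text's matrix (3.5.5) is not reproduced here):

```python
from sympy import Matrix

# Stand-in columns: v3 and v4 are built from v1 and v2 by construction,
# mirroring the relations v3 = v1 + v2 and v4 = v1 - v2 of Example 8.
v1 = Matrix([1, 1, 2, 0])
v2 = Matrix([2, 3, 2, 3])
v5 = Matrix([1, 3, 2, 3])
v3 = v1 + v2
v4 = v1 - v2
A = Matrix.hstack(v1, v2, v3, v4, v5)

# rref() returns the reduced row echelon form and the pivot column indices.
R, pivots = A.rref()
print(pivots)  # (0, 1, 4): columns 1, 2 and 5 are linearly independent
```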

Definition 4 (Subspace of R^n). A subspace V of R^n is a subset of R^n with the following two properties.

1. For a vector v ∈ V and an arbitrary scalar c, cv also belongs to V.

2. For two vectors v, w ∈ V, v + w is also in V.

Given m vectors v1, ..., vm, the set of all vectors:

c1 v1 + ... + cm vm    (4.1.9)

forms a subspace of R^n. If a subspace V consists of all linear combinations of vectors v1, ..., vm, we say that the vectors v1, ..., vm span V.


Proposition 5. Every subspace V of R^n is spanned by a set of linearly independent vectors.

Proof. If the subspace consists of just the 0 vector, there is nothing to prove. Suppose otherwise. Pick a non-zero vector v1 that is in V. Consider the span:

c1 v1,  c1 ∈ R.    (4.1.10)

If this spans all of V, we are done. If not, there must be a vector v2 in V that cannot be expressed in the above form. Therefore, v1 and v2 are linearly independent, and every vector of the form

c1 v1 + c2 v2,  c1, c2 ∈ R    (4.1.11)

must belong to V. If these span all of V, we are done. If not, we add another vector v3 not expressible as above, which is thus linearly independent of the previous vectors. This process has to stop before we add the (n+1)st vector, since there are at most n linearly independent vectors in R^n, according to Proposition 3.
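The proof above is in effect a greedy algorithm: scan candidate vectors and keep one only when it enlarges the span. A minimal sketch of that procedure (NumPy, with hypothetical input vectors):

```python
import numpy as np

def greedy_basis(vectors):
    """Keep a vector only if appending it increases the rank,
    i.e. only if it lies outside the span of those kept so far."""
    basis = []
    for v in vectors:
        candidate = basis + [v]
        if np.linalg.matrix_rank(np.column_stack(candidate)) == len(candidate):
            basis.append(v)
    return basis

# Hypothetical spanning set of a plane in R^3.
vectors = [np.array([1., 2., 4.]),
           np.array([2., 4., 8.]),   # parallel to the first: skipped
           np.array([1., 1., 0.])]
basis = greedy_basis(vectors)
print(len(basis))  # 2
```

As in the proof, the loop necessarily stops with at most n vectors kept, by Proposition 3.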

Definition 5 (Basis). Suppose a subspace V of R^n is spanned by linearly independent vectors v1, ..., vm. We say that such vectors are a set of basis vectors of V.

Proposition 5 thus states that every subspace has a basis.

Example 9. Consider the following subset V of R^3:

c1 (1, 2, 4)^T + c2 (1, 1, 0)^T,  c1, c2 ∈ R.    (4.1.12)

This is the subspace spanned by (1, 2, 4)^T and (1, 1, 0)^T. The two vectors are linearly independent, and therefore, the two vectors form a basis of V and the dimension of V is 2. It is also possible to express the same subspace as:

c1 (2, 3, 4)^T + c2 (1, 1, 0)^T.    (4.1.13)

There are thus many different ways of expressing the same subspace. It is also true that V can be expressed as:

c1 (1, 2, 4)^T + c2 (2, 3, 4)^T + c3 (1, 1, 0)^T,    (4.1.14)

but in this case, the three vectors are not linearly independent.
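One can check numerically that two spanning sets describe the same subspace: the spans coincide exactly when stacking the two sets of columns together adds no rank to either. A sketch, assuming the spanning sets (1, 2, 4)^T, (1, 1, 0)^T and (2, 3, 4)^T, (1, 1, 0)^T of (4.1.12) and (4.1.13):

```python
import numpy as np

# The spanning sets of (4.1.12) and (4.1.13) as matrix columns.
B1 = np.column_stack([[1, 2, 4], [1, 1, 0]])
B2 = np.column_stack([[2, 3, 4], [1, 1, 0]])

def same_span(P, Q):
    # Column spans coincide iff putting all the columns together
    # does not increase the rank of either matrix.
    r = np.linalg.matrix_rank
    return r(P) == r(Q) == r(np.hstack([P, Q]))

print(same_span(B1, B2))  # True: (2, 3, 4) = (1, 2, 4) + (1, 1, 0)
```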

As we have seen above, there are various choices of basis vectors for a subspace, but the number of basis vectors is always the same.

Proposition 6. Two sets of basis vectors of the same subspace always have the same number of vectors.

Proof. Suppose otherwise. Then, there are basis vectors v1, ..., vm and w1, ..., wq with m ≠ q. Suppose m < q. Then, each vector wk can be written as:

wk = a1k v1 + a2k v2 + ... + amk vm,    (4.1.15)

where the ajk are scalar constants. To examine the linear independence of the wk, we must examine the expression

Σ_{k=1}^{q} xk wk = Σ_{j=1}^{m} ( Σ_{k=1}^{q} ajk xk ) vj = 0.    (4.1.16)

Since the vj are linearly independent, this amounts to the system of equations

Σ_{k=1}^{q} ajk xk = 0 for j = 1, ..., m    (4.1.17)

in the q unknowns x1, ..., xq. Since m < q, by Proposition 1 there is a non-trivial solution. This contradicts the assumption that w1, ..., wq were linearly independent. The case q < m can be handled in exactly the same manner.

The above proposition allows us to define the dimension of a subspace.

Definition 6. The dimension of a subspace V of R^n is the number of vectors in a basis of the subspace.

In particular, this means that subspaces of R^n can be classified by their dimension. Subspaces of R^2 are:

Dimension 0: the origin.

Dimension 1: lines through the origin.

Dimension 2: the whole plane.

Subspaces of R^3 are:

Dimension 0: the origin.

Dimension 1: lines through the origin.

Dimension 2: planes through the origin.

Dimension 3: the whole space.

A similar classification is possible for R^n.

Example 10. Consider the vectors:

v1 = (1, 2, 3)^T,  v2 = (4, 5, 6)^T,  v3 = (7, 8, 9)^T.    (4.1.18)

The span of these three vectors forms a subspace of R^3. To find the dimension of the subspace, form the matrix whose column vectors are these three vectors:

    1 4 7
A = 2 5 8    (4.1.19)
    3 6 9

The row echelon form is:

    1 0 -1
R = 0 1  2    (4.1.20)
    0 0  0

This shows that the space spanned by the three vectors is two dimensional, and is spanned by the vectors v1 and v2, with

v3 = 2 v2 - v1.    (4.1.21)

v1 and v2 are not the only vectors that form a basis of this subspace. Indeed, one can find any number of bases. For example, v1 and v3 also form a basis of the same subspace.
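The rank computation of Example 10 in code (a sketch; NumPy's matrix_rank uses the singular value decomposition rather than row reduction, but yields the same rank):

```python
import numpy as np

# The three vectors of Example 10 as the columns of A, as in (4.1.19).
A = np.column_stack([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# The dimension of the span equals the rank of A.
dim = np.linalg.matrix_rank(A)
print(dim)  # 2

# The dependency v3 = 2 v2 - v1:
assert np.array_equal(2 * A[:, 1] - A[:, 0], A[:, 2])
```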

4.2

Linear Transformations

An n × n matrix A maps each vector x ∈ R^n to the vector Ax ∈ R^n; such a map is called a linear transformation. We define two important concepts for a linear transformation.

The kernel of A, written kerA, is the set of vectors v ∈ R^n that satisfy:

Av = 0.    (4.2.1)

The image of A is the set of vectors v ∈ R^n for which

Ax = v    (4.2.2)

has a solution x. The image of A is written as ImA.

Both the kernel and the image are subspaces of R^n. This can be seen as follows. Suppose v and w are in kerA. Then,

A(cv) = cA(v) = 0,  A(v + w) = Av + Aw = 0,    (4.2.3)

so cv and v + w are also in kerA. Next, suppose v and w are in the image of A. This means that there are vectors x and y such that

Ax = v,  Ay = w.    (4.2.4)

Therefore, we have

A(cx) = cAx = cv,  A(x + y) = Ax + Ay = v + w,    (4.2.5)

so cv and v + w are also in the image of A.

Since the kernel and the image are both subspaces of R^n, we can consider their dimensions.

Proposition 7. Let A be an n × n matrix. The dimension of the kernel is equal to n - r, where r is the rank of the matrix.

Proof. Finding the kernel is the same as solving the equation:

Ax = 0,  x ∈ R^n.    (4.2.6)

We know from item 1 of Theorem 8 that the solution to the above is written as a linear combination of n - r linearly independent vectors. This is nothing other than the statement that the kernel has dimension n - r.

We now turn to the image.

Proposition 8. Let A be an n × n matrix. The dimension of the image is equal to the rank r of the matrix A.


Proof. The image of the matrix A consists of all vectors of the form:

Ax = x1 v1 + ... + xn vn,    (4.2.7)

where v1, ..., vn are the column vectors of A and x = (x1, ..., xn)^T. Therefore, the image is spanned by the column vectors. We know from Proposition 4 that the r column vectors that correspond to the pivots are linearly independent and that the rest are written as linear combinations of these r vectors. The image is therefore spanned by r linearly independent vectors, so its dimension is r.

Thus follows the main result of this section.

Theorem 10. Suppose A is an n × n square matrix. Then,

rankA + dim kerA = n.    (4.2.8)
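Theorem 10 can be checked numerically: SymPy computes the rank and a basis of the kernel directly. A sketch on a hypothetical 4 × 4 matrix of rank 2 (not from the text):

```python
from sympy import Matrix

# Hypothetical 4 x 4 matrix: rows 3 and 4 are combinations of rows 1 and 2.
A = Matrix([[1, 0, 1, 2],
            [0, 1, 1, 1],
            [1, 1, 2, 3],
            [2, 1, 3, 5]])

r = A.rank()             # rank of A
kernel = A.nullspace()   # list of basis vectors of ker A
print(r, len(kernel))    # 2 2

# Theorem 10: rank A + dim ker A = n.
assert r + len(kernel) == A.shape[1]
```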

Example 11. [Equations (4.2.9) through (4.2.14) of this example were garbled in extraction and could not be reconstructed; the example computes bases for the kernels and images of specific matrices via their row echelon forms.]

4.3

Exercises

1. Determine whether the following sets of vectors are linearly independent. If linearly dependent, find a set of linearly independent vectors and express the other vectors in terms of them.

(a) (1, 3, 1)T , (1, 0, 1)T , (1, 0, 1)T , (3, 3, 1)T .

(b) (2, 1, 0)T , (0, 1, 2)T , (1, 0, 1)T , (1, 1, 1)T .

(c) (3, 0, 0, 3)T , (1, 0, 1, 0)T , (0, 1, 0, 0)T .

2. Consider the xy plane (the plane z = 0) in the three-dimensional space

R3 . Find two different sets of basis vectors for the xy plane.

3. Let x = (x, y, z, w)^T ∈ R^4. Consider the set of all vectors in R^4 with w = 0.

(a) Show that this set is a subspace of R^4.

(b) Find a set of basis vectors for this subspace. What is its dimension?

4. Argue why the following subsets of R^2 are not subspaces.

(a) The inside of a circle in R^2 centered at the origin.

(b) A line in R^2 that does not go through the origin.

(c) The first quadrant of R^2.

(d) The first and third quadrants of R^2 combined (including the x and y axes).

5. Consider two n × n matrices A and B. The matrix A has rank n and B has rank r. What is the rank of the matrix AB? What about BA? Can you say anything about the rank of A + B?


6. [The matrices of this exercise were garbled in extraction and could not be reconstructed.]

7. Consider the matrix:

        1 1 1
A = 1/3 1 1 1
        1 1 1

(b) Let v be a vector on the line spanned by (1, 1, 1)T . Where does

v get mapped to?

(c) Let w be a vector perpendicular to the vector (1, 1, 1)T . Where

does w get mapped to?

(d) Geometrically describe what kind of linear transformation A is.

(e) Show that A^2 = A, and hence, (I - A)^2 = (I - A).

(f) Geometrically describe what kind of linear transformation I - A is.
