Linear Transformations


D Joyce, Fall 2012

One of the principles of modern mathematics is that functions between objects are as important as the objects themselves. The objects we're looking at are vector spaces, and the functions that preserve the structure of vector spaces are called linear transformations. The structure of a vector space V over a field F is its addition and scalar multiplication, or, if you prefer, its linear combinations.

Definition 1. A linear transformation from one vector space V to another W is a function T that preserves vector addition and scalar multiplication. In more detail, a linear transformation T : V → W is a function such that

    T(v + w) = T(v) + T(w)   and   T(cv) = cT(v)

for all vectors v and w in V and every scalar c. The vector space V is called the domain of T, while the vector space W is called the codomain of T. The term linear operator is also used when W = V, especially when the elements of the vector spaces are functions.

We've already discussed isomorphisms. An isomorphism T : V → W is a linear transformation which is also a bijection; its inverse function T⁻¹ : W → V is also an isomorphism.

Properties of linear transformations. A few important properties follow directly from the definition. For instance, every linear transformation sends 0 to 0. Also, linear transformations preserve subtraction, since subtraction can be written in terms of vector addition and scalar multiplication. A more general property is that linear transformations preserve linear combinations. For example, if v is a certain linear combination of other vectors s, t, and u, say v = 3s + 5t − 2u, then T(v) is the same linear combination of the images of those vectors, that is, T(v) = 3T(s) + 5T(t) − 2T(u). This property can be stated as the identity

    T(c1 v1 + c2 v2 + ··· + ck vk) = c1 T(v1) + c2 T(v2) + ··· + ck T(vk).

The property of preserving linear combinations is so important that in some texts a linear transformation is defined as a function that preserves linear combinations.

Example 2.
Let V and W both be R[x], the vector space of polynomials with real coefficients. You're familiar with some linear operators on it. Differentiation d/dx : R[x] → R[x] is a linear operator since (1) the derivative of a polynomial is another polynomial, (2) the derivative of a sum is the sum of derivatives, and (3) the derivative of a constant times a polynomial is the constant times the derivative of the polynomial.

Integrals are also linear operators. The definite integral ∫ₐˣ over the interval [a, x], where a is any constant, is also a linear operator. The Fundamental Theorem of Calculus relates these two operators by composition. Let D stand for d/dx, and let I stand for ∫ₐˣ. Then the FTC says IDf = f − f(a), while the inverse FTC says DIf = f. These don't quite say that D and I are inverse linear operators, since IDf is f − f(a) instead of f itself, but they're almost inverse operators.

Example 3 (Projection and inclusion functions). The projection functions from the product of two vector spaces to those vector spaces are linear transformations. They pick out the coordinates. The first projection π1 : V × W → V is defined by π1(v, w) = v, and the second projection π2 : V × W → W is defined by π2(v, w) = w. Since the operations on the product V × W were defined coordinatewise in the first place, these projections will be linear transformations.

There are also inclusion functions from the vector spaces to their product. ι1 : V → V × W is defined by ι1(v) = (v, 0), and ι2 : W → V × W is defined by ι2(w) = (0, w).

There are some interesting connections among the projection and inclusion functions. For one thing, the composition π1 ∘ ι1 is the identity transformation 1_V : V → V on V, and the composition π2 ∘ ι2 is the identity transformation 1_W : W → W on W. The other compositions are 0, that is to say, π2 ∘ ι1 : V → W and π1 ∘ ι2 : W → V are both 0 transformations. Furthermore, summing the reverse compositions, (ι1 ∘ π1) + (ι2 ∘ π2) : V × W → V × W gives the identity function 1_{V×W} on V × W.
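The linearity of differentiation in Example 2 can be checked concretely by representing a polynomial as its list of coefficients. A minimal sketch (the coefficient-list encoding and the helpers deriv and lincomb are illustrative choices, not from the notes):

```python
# Represent a0 + a1*x + ... + an*x^n by its coefficient list [a0, a1, ..., an].

def deriv(p):
    """Differentiate a polynomial given as a coefficient list."""
    return [k * p[k] for k in range(1, len(p))]

def lincomb(c1, p, c2, q):
    """Form c1*p + c2*q, padding the shorter coefficient list with zeros."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [c1 * a + c2 * b for a, b in zip(p, q)]

p = [1, 0, 3]      # 1 + 3x^2
q = [0, 2, 0, 4]   # 2x + 4x^3

# Linearity: D(3p + 5q) should equal 3*D(p) + 5*D(q).
lhs = deriv(lincomb(3, p, 5, q))
rhs = lincomb(3, deriv(p), 5, deriv(q))
print(lhs == rhs)  # True
```

This is exactly the preserve-linear-combinations property from the definition, applied to the operator D with the combination 3p + 5q.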

[Diagram: the projections π1, π2 and inclusions ι1, ι2 relating V, W, and V × W, with the identities 1_V and 1_W around the outside.]
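These composition identities can be spot-checked on a concrete direct sum. A minimal sketch with V = R², W = R³, and elements of V × W encoded as pairs of tuples (the encoding and the helper names pi1, iota1, plus are illustrative choices, not from the notes):

```python
# V = R^2, W = R^3; an element of V x W is a pair (v, w) of tuples.

def pi1(vw): return vw[0]                   # projection V x W -> V
def pi2(vw): return vw[1]                   # projection V x W -> W
def iota1(v): return (v, (0.0, 0.0, 0.0))   # inclusion V -> V x W
def iota2(w): return ((0.0, 0.0), w)        # inclusion W -> V x W

def plus(vw1, vw2):
    """Coordinatewise addition on V x W."""
    (v1, w1), (v2, w2) = vw1, vw2
    return (tuple(a + b for a, b in zip(v1, v2)),
            tuple(a + b for a, b in zip(w1, w2)))

v = (1.0, 2.0)
w = (3.0, 4.0, 5.0)
x = (v, w)

print(pi1(iota1(v)) == v)                       # True: pi1 . iota1 = 1_V
print(pi2(iota1(v)) == (0.0, 0.0, 0.0))         # True: pi2 . iota1 = 0
print(plus(iota1(pi1(x)), iota2(pi2(x))) == x)  # True: iota1 pi1 + iota2 pi2 = 1_{VxW}
```

Each printed check is one of the three kinds of identities in Example 3: a projection after its own inclusion is the identity, a projection after the other inclusion is 0, and the summed reverse compositions reassemble the whole pair.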

The symmetry of the equations and diagrams suggests that the inclusion and projection functions are two halves of a larger structure. They are, and later we'll see how they're dual to each other. Because of that, we'll use a different term and different notation for products of vector spaces. We'll call the product V × W the direct sum of the vector spaces V and W and denote it V ⊕ W from now on.

The standard matrix for a linear transformation T : Rⁿ → Rᵐ. Our vector spaces usually have coordinates. We can treat any vector space of dimension n as if it were Rⁿ (or Fⁿ if our scalar field is something other than R). What do our linear transformations look like then?

Each vector is a linear combination of the standard basis vectors: v = (v1, ..., vn) = v1 e1 + ··· + vn en. Therefore, T(v) = v1 T(e1) + ··· + vn T(en). Thus, when you know where T sends the basis vectors, you know where T sends all the rest of the vectors. Suppose that

    T(e1) = (a11, a21, ..., am1)
    T(e2) = (a12, a22, ..., am2)
    ...
    T(en) = (a1n, a2n, ..., amn)

Then knowing all the scalars aij tells you not only where T sends the basis vectors, but where it sends all the vectors. We'll put all these scalars aij in a matrix, but we'll do it column by column. We already have a notation for columns, so let's use it:

                [ a11 ]               [ a12 ]                      [ a1n ]
    [T(e1)]ε =  [ a21 ],  [T(e2)]ε =  [ a22 ],  ...,  [T(en)]ε =  [ a2n ]
                [  .  ]               [  .  ]                      [  .  ]
                [ am1 ]               [ am2 ]                      [ amn ]

where ε denotes the standard basis of Rᵐ. Now place them in a rectangular m × n matrix:

                                               [ a11 a12 ... a1n ]
    A = [ [T(e1)]ε  [T(e2)]ε  ...  [T(en)]ε ] = [ a21 a22 ... a2n ]
                                               [  .   .  ...  .  ]
                                               [ am1 am2 ... amn ]
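The column-by-column construction of A can be sketched directly: feed each standard basis vector through T and record the result as a column. The particular T used here, rotation of R² by 90 degrees, is just an illustrative choice:

```python
def T(v):
    """A sample linear map R^2 -> R^2: rotation by 90 degrees."""
    x, y = v
    return (-y, x)

n = 2
basis = [(1, 0), (0, 1)]           # standard basis e1, e2 of R^2

# Column j of the standard matrix A is T(e_j); store A as a list of rows.
columns = [T(e) for e in basis]
A = [[columns[j][i] for j in range(n)] for i in range(n)]
print(A)  # [[0, -1], [1, 0]]
```

Reading the result column by column: the first column (0, 1) is T(e1), and the second column (-1, 0) is T(e2), exactly as in the definition of A.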

Thus, the matrix A determines the transformation T. This matrix A, whose jth column is the m-vector [T(ej)]ε, is called the standard matrix representing the linear transformation T. We'll also denote it [T]εε, where two ε's are used since we used one standard basis ε for Rⁿ and another standard basis ε for Rᵐ. Thus, a linear transformation T : Rⁿ → Rᵐ determines an m × n matrix A, and conversely, an m × n matrix A determines a linear transformation T : Rⁿ → Rᵐ.

Evaluation. Since the matrix A holds all the information needed to determine the transformation T, we'll want to see how to use it to evaluate T at a vector v. Given A and v, how do you compute T(v)? Well, what are v and T(v) in terms of coordinates? Let v have coordinates (v1, ..., vn). Then

    T(v) = v1 T(e1) + v2 T(e2) + ··· + vn T(en)
         = v1 (a11, a21, ..., am1) + v2 (a12, a22, ..., am2) + ··· + vn (a1n, a2n, ..., amn)
         = (v1 a11 + v2 a12 + ··· + vn a1n,  v1 a21 + v2 a22 + ··· + vn a2n,  ...,  v1 am1 + v2 am2 + ··· + vn amn)

For various reasons, both historical and for simplicity, we're writing our vectors v and T(v) as column vectors. Then

           [ v1 ]                 [ v1 a11 + v2 a12 + ··· + vn a1n ]
    [v]ε = [  . ]  and  [T(v)]ε = [ v1 a21 + v2 a22 + ··· + vn a2n ]
           [  . ]                 [                .               ]
           [ vn ]                 [ v1 am1 + v2 am2 + ··· + vn amn ]

Our job is to take the matrix A and the column vector for v and produce the column vector for T(v). Let's see what the equation Av = T(v), or more properly A[v]ε = [T(v)]ε, looks like in terms of matrices:

    [ a11 a12 ... a1n ] [ v1 ]   [ v1 a11 + v2 a12 + ··· + vn a1n ]
    [ a21 a22 ... a2n ] [  . ] = [ v1 a21 + v2 a22 + ··· + vn a2n ]
    [  .   .  ...  .  ] [  . ]   [                .               ]
    [ am1 am2 ... amn ] [ vn ]   [ v1 am1 + v2 am2 + ··· + vn amn ]

Aha! You see it! If you take the elements in the ith row of A and multiply them in order with the elements in the column of v, then add those n products together, you get the ith element of T(v). With this as our definition of multiplication of an m × n matrix by an n × 1 column vector, we have Av = T(v). So far we've only looked at the case when the second matrix of a product is a column vector. Later on we'll look at the general case.

Linear operators on Rⁿ, eigenvectors, and eigenvalues. Very often we are interested in the case when m = n. A linear transformation T : Rⁿ → Rⁿ is also called a linear transformation on Rⁿ or a linear operator on Rⁿ. The standard matrix for a linear operator on Rⁿ is a square n × n matrix. One particularly important square matrix is the identity matrix I, whose ijth entry is δij, where δii = 1 but δij = 0 if i ≠ j. In other words, the identity matrix I has 1s down the main diagonal and 0s elsewhere. It's important because it's the matrix that represents the identity transformation Rⁿ → Rⁿ.

When the domain and codomain of a linear operator are the same, there are more questions you can ask of it. In particular, there may be some fixed points, that is, vectors v such that T(v) = v. Or there may be points sent to their negations, T(v) = −v.
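The row-times-column rule just described is a direct implementation exercise. A minimal sketch, with a matrix stored as a list of rows (the helper name matvec and the sample matrix are illustrative choices):

```python
def matvec(A, v):
    """Multiply an m x n matrix A (list of rows) by an n-vector v.
    The i-th entry of the result is the dot product of row i with v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, 2, 3],
     [4, 5, 6]]          # a sample 2 x 3 matrix
v = [1, 0, -1]

print(matvec(A, v))      # [1*1 + 2*0 + 3*(-1), 4*1 + 5*0 + 6*(-1)] = [-2, -2]

# The identity matrix represents the identity transformation: I v = v.
I = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1]]
print(matvec(I, v) == v)  # True
```

Note the shapes: a 2 × 3 matrix times a 3 × 1 column vector gives a 2 × 1 column vector, matching the picture of T : R³ → R² above.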
When a linear operator T sends a vector to a scalar multiple of itself, that vector is called an eigenvector of T, and the scalar is called the corresponding eigenvalue. All the eigenvectors associated to a specific eigenvalue form an eigenspace for that eigenvalue. For instance, fixed points are eigenvectors with eigenvalue 1, and the eigenspace for 1 consists of the subspace of fixed points. We'll study eigenvectors and eigenvalues in detail later on, but we'll mention them as we come across them. They tell you something about the geometry of the linear operator.

Math 130 Home Page at http://math.clarku.edu/~djoyce/ma130/
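An eigenvector can be verified with the same matrix-vector product. For reflection of R² across the line y = x, vectors along the line are fixed points (eigenvalue 1) and vectors perpendicular to it are negated (eigenvalue −1). The reflection matrix and helper below are illustrative choices, not from the notes:

```python
def matvec(A, v):
    """Multiply a square matrix A (list of rows) by a vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

R = [[0, 1],
     [1, 0]]            # reflection of R^2 across the line y = x

fixed = [1, 1]          # on the line y = x: a fixed point, eigenvalue 1
flipped = [1, -1]       # perpendicular to the line: negated, eigenvalue -1

print(matvec(R, fixed) == [1 * c for c in fixed])      # True: R v = 1 * v
print(matvec(R, flipped) == [-1 * c for c in flipped]) # True: R v = -1 * v
```

The eigenspace for eigenvalue 1 here is the whole line y = x, matching the remark that the fixed points of an operator form the eigenspace for 1.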
