LINEAR ALGEBRA

Vector Spaces
Paul Dawkins

Linear Algebra

Table of Contents
Preface............................................................................................................................................. ii Vector Spaces ................................................................................................................................. 3
Introduction ................................................................................................................................................ 3 Vector Spaces ............................................................................................................................................. 5 Subspaces ................................................................................................................................................. 15 Span .......................................................................................................................................................... 25 Linear Independence ................................................................................................................................ 34 Basis and Dimension ................................................................................................................................ 45 Change of Basis........................................................................................................................................ 61 Fundamental Subspaces ........................................................................................................................... 74 Inner Product Spaces ................................................................................................................................ 85 Orthonormal Basis ................................................................................................................................... 93 Least Squares ......................................................................................................................................... 105 QR-Decomposition ................................................................................................................................ 113 Orthogonal Matrices ............................................................................................................................... 121

© 2007 Paul Dawkins

i

http://tutorial.math.lamar.edu/terms.aspx

Linear Algebra

Preface
Here are my online notes for my Linear Algebra course that I teach here at Lamar University. Despite the fact that these are my “class notes”, they should be accessible to anyone wanting to learn Linear Algebra or needing a refresher. These notes do assume that the reader has a good working knowledge of basic Algebra. This set of notes is fairly self contained but there is enough Algebra type problems (arithmetic and occasionally solving equations) that can show up that not having a good background in Algebra can cause the occasional problem. Here are a couple of warnings to my students who may be here to get a copy of what happened on a day that you missed. 1. Because I wanted to make this a fairly complete set of notes for anyone wanting to learn Linear Algebra I have included some material that I do not usually have time to cover in class and because this changes from semester to semester it is not noted here. You will need to find one of your fellow class mates to see if there is something in these notes that wasn’t covered in class. 2. In general I try to work problems in class that are different from my notes. However, with a Linear Algebra course while I can make up the problems off the top of my head there is no guarantee that they will work out nicely or the way I want them to. So, because of that my class work will tend to follow these notes fairly close as far as worked problems go. With that being said I will, on occasion, work problems off the top of my head when I can to provide more examples than just those in my notes. Also, I often don’t have time in class to work all of the problems in the notes and so you will find that some sections contain problems that weren’t worked in class due to time restrictions. 3. Sometimes questions in class will lead down paths that are not covered here. I try to anticipate as many of the questions as possible in writing these notes up, but the reality is that I can’t anticipate all the questions. Sometimes a very good question gets asked in class that leads to insights that I’ve not included here. You should always talk to someone who was in class on the day you missed and compare these notes to their notes and see what the differences are. 4. This is somewhat related to the previous three items, but is important enough to merit its own item. THESE NOTES ARE NOT A SUBSTITUTE FOR ATTENDING CLASS!! Using these notes as a substitute for class is liable to get you in trouble. As already noted not everything in these notes is covered in class and often material or insights not in these notes is covered in class.

© 2007 Paul Dawkins

ii

http://tutorial.math.lamar.edu/terms.aspx

aspx .lamar. Here is a listing of the topics in this chapter. Orthonormal Basis – In this section we will develop and use the Gram-Schmidt process for constructing an orthogonal/orthonormal basis for an inner product space. With all that said a good many of our examples will be examples from ¡ n since that is a setting that most people are familiar with and/or can visualize. Towards the end of the chapter we’ll take a quick look at inner product spaces. A vector space is nothing more than a collection of vectors (whatever those now are…) that satisfies a set of axioms. Inner Product Spaces – We will be looking at a special kind of vector spaces in this section as well as define the inner product. Nor does a vector have to represent the vectors we looked at in ¡ n . Change of Basis – In this section we will see how to change the set of basis vectors for a vector space. As we’ll soon see a vector can be a matrix or a function and that’s only a couple of possibilities for vectors. Vector Spaces – In this section we’ll formally define vectors and vector spaces. including the row space. The main idea of study in this chapter is that of a vector space. column space and null space. Basis and Dimension – We’ll be looking at the idea of a set of basis vectors and the dimension of a vector space. is a much more general concept and it doesn’t necessarily have to represent a directed line segment in ¡ 2 or ¡ 3 .edu/terms. Linear Independence – Here we will take a look at what it means for a set of vectors to be linearly independent or linearly dependent.Linear Algebra Vector Spaces Introduction In the previous chapter we looked at vectors in Euclidean n-space and while in ¡ 2 and ¡ 3 we thought of vectors as directed line segments. Fundamental Subspaces – Here we will take a look at some of the fundamental subspaces of a matrix. Once we get the general definition of a vector and a vector space out of the way we’ll look at many of the important ideas that come with vector spaces. A vector however. We will however try to include the occasional example that does not lie in ¡ n . Subspaces – Here we will be looking at vector spaces that live inside of other vector spaces. Least Squares – In this section we’ll take a look at an application of some of the ideas that we will be discussing in this chapter. © 2007 Paul Dawkins 3 http://tutorial.math. Span – The concept of the span of a set of vectors will be investigated in this section.

in this section.lamar.math. © 2007 Paul Dawkins 4 http://tutorial. Orthogonal Matrices – We will take a look at a special kind of matrix.Linear Algebra QR-Decomposition – Here we will take a look at the QR-Decomposition for a matrix and how it can be used in the least squares process.edu/terms. the orthogonal matrix.aspx .

Linear Algebra Vector Spaces As noted in the introduction to this chapter vectors do not have to represent directed line segments in space.edu/terms. but the reality is that most of the examples will be in ¡ n . such that u . © 2007 Paul Dawkins 5 http://tutorial. (f) For every u in V there is another object in V. however. but there will be the occasional example where we won’t. the first two axioms may seem a little strange at first glance. For the most part we will be doing addition and scalar multiplication in a fairly standard way. do not get too locked into the “standard” ways of defining addition and scalar multiplication. with all that out of the way let’s go ahead and get the definition of a vector and a vector space out of the way. So. When we first start looking at many of the concepts of a vector space we usually start with the directed line segment idea and their natural extension to vectors in ¡ n because it is something that most people can visualize and get their hands on. (a) u + v is in V – This is called closed under addition. In order for something to be a vector space it simply must have an addition and scalar multiplication that meets the above axioms and it doesn’t matter how strange the addition or scalar multiplication might be. we will see at least one example in this section of a set that is not closed under a particular scalar multiplication. Definition 1 Let V be a set on which addition and scalar multiplication are defined (this means that if u and v are objects in V and c is a scalar then we’ve defined u + v and cu in some way).u = u + ( -u ) = 0 . So. It might seem like these two will be trivially true for any definition of addition or scalar multiplication. denoted 0 and called the zero vector. (b) cu is in V – This is called closed under scalar multiplication. v. such that for all u in V we have u + 0 = 0 + u = u .aspx .lamar. However. (g) c ( u + v ) = cu + cv (h) ( c + k ) u = cu + ku (i) c ( ku ) = ( ck ) u (j) 1u = u We should make a couple of comments about these axioms at this point.math. First. the first thing that we need to do in this chapter is to define just what a vector space is and just what vectors really are. If the following axioms are true for all objects u. denoted -u and called the negative of u. and w in V and all scalars c and k then V is called a vector space and the objects in V are called vectors. before we actually do that we should point out that because most people can visualize directed line segments most of our examples in these notes will revolve around vectors in ¡ n . Next. We will try to always include an example or two with vectors that aren’t in ¡ n just to make sure that we don’t forget that vectors are more general objects. (c) u + v = v + u (d) u + ( v + w ) = ( u + v ) + w (e) There is a special object in V.

math. In this case because we’re dealing with the standard addition all the axioms involving the addition of objects from V (a. they are axioms. They could be complex numbers. However. they don’t have to be real numbers. the scalars are real numbers. g. and j). c ( u1 . Example 1 If n is any positive integer then the set V = ¡ n with the standard addition and scalar multiplication as defined in the Euclidean n-space section is a vector space. We will see at least one example later in this section of a set that is not closed under scalar multiplication as we’ll define it there.Linear Algebra Finally. Showing that something is not a vector space can be tricky because it’s completely possible that only one of the axioms fails. i. however that was done in Theorem 1 from the Euclidean n-space section and so we won’t do that for this example. c. However. First we should at least show that the set meets axiom (b) and this is easy enough to show. Given a definition of addition and scalar multiplication we’ll simply need to verify that the above axioms are satisfied by our definitions. © 2007 Paul Dawkins 6 http://tutorial. When we restrict the scalars to real numbers we generally call the vector space a real vector space and when we allow the scalars to be complex numbers we generally call the vector space a complex vector space. in this case of all the axioms involving the scalar multiplication (b. We should also make a quick comment about the scalars that we’ll be using here. Also. Some of these will be fairly standard vector spaces while others may seem a little strange at first but are fairly important to other areas of mathematics. in this case they aren’t properties. Axioms are simply the rules under which we’re going to operate when we work with vector spaces. d. All of these axioms were in one of the theorems from the discussion on vectors and/or Euclidean nspace in the previous chapter. with the exception of the first two these axioms should all seem familiar to you. Example 2 The set V = ¡ 2 with the standard vector addition and scalar multiplication defined as. do not get used to this happening.edu/terms. Again. only (h) is not valid. in that we can see that the result of the scalar multiplication is again a point in ¡ 2 and so the set is closed under scalar multiplication. We should now look at some examples of vector spaces and at least a couple of examples of sets that aren’t vector spaces. and f) will be valid. and in all the examples we’ll be looking at in the future. cu2 ) is NOT a vector space. h. We will be working exclusively with real vector spaces and from this point on when we see vector space it is to be understood that we mean a real vector space. We’ll show this in a bit. u2 ) = ( u1 .lamar. e.aspx . Note that from this point on when we refer to the standard vector addition and standard vector scalar multiplication we are referring to that we defined in the Euclidean n-space section. To this point. Technically we should show that the axioms are all met here. What that means is that they aren’t to be proven. but the point needs to be made here that only one of the axioms will fail in this case and that is enough for this set under this definition of addition and multiplication to not be a vector space.

( u1 . u2 ) = ( v1 + 2u1 . 0. there is a single axiom that fails in this case. v2 ) = ( u1 + 2v1 . you really should see all the detail that needs to go into actually showing that a set along with a definition of addition and scalar multiplication is a vector space. 1u = 1( u1 . In this case it is the last axiom. cu2 + ku2 ) So. u2 ) = ( u1 . We’ll leave it to you to verify that the others hold. It is possible for a set under the standard scalar multiplication and addition to fail to be a vector space as we’ll see in a bit. This means that axiom (h) is not valid for this definition of scalar multiplication. it’s possible for a set of this form to have a non-standard scalar multiplication and/or addition and still be a vector space. However. u2 ) + ( v1 . we’ve now seen three examples of sets of the form V = ¡ n that are NOT vector spaces so hopefully it is clear that there are sets out there that aren’t vector spaces.edu/terms. We’ll leave those to you. ( c + k ) u2 ) = ( u1 . cu2 ) + ( u1 . In fact. u3 ) ¹ ( u1 . This is probably going to be the only example that we’re going to go through and do in excruciating detail in this section. c ( u1 . u2 + v2 ) Is NOT a vector space. Second. we can see that ( c + k ) u ¹ cu + ku because the first components are not the same. In each case we had to change the definition of scalar multiplication or addition to make the set fail to be a vector space. because only the first component of the second point listed gets multiplied by 2 we can see that u + v ¹ v + u and so this is not a vector space. So.math. u2 ) + k ( u1 .lamar. u2 . v2 + u2 ) So. To see that this is not a vector space let’s take a look at the axiom (c). u2 . cu2 + ku2 ) cu + ku = c ( u1 .Linear Algebra Now. u3 ) = ( 0. Again. Likewise. u + v = ( u1 . (j). All you need to do is compute both sides of the equal sign and show that you get the same thing on each side. don’t read too much into that. 0. First. to show that (h) is not valid we’ll need to compute both sides of the equality and show that they aren’t equal. that fails as the following work shows. v2 ) + ( u1 . u3 ) = ( 0. let’s take a look at the following example. u3 ) = u Example 4 The set V = ¡ 2 with the standard scalar multiplication and addition defined as. ku2 ) = ( 2u1 . u2 ) = ( u1 . We’re doing this for two reasons. 0. u2 ) + ( v1 . ( c + k ) u = ( c + k )( u1 . our © 2007 Paul Dawkins 7 http://tutorial. u2 . We’ll not verify that the remaining scalar multiplication axioms are valid for this definition of scalar multiplication.aspx . cu3 ) is NOT a vector space. Example 3 The set V = ¡ 3 with the standard vector addition and scalar multiplication defined as. u2 + v2 ) v + u = ( v1 . You should go through the other axioms and determine if they are valid or not for the practice. v2 ) = ( u1 + 2v1 . (1) u3 ) = ( 0.

We’ll now verify (d). we’ll make it clear how we’re going about each step with this one. (a) and (b). First notice that we’re taking V to be only a portion of ¡ . x > 0 ) with addition and scalar multiplication defined as follows. y. In fact in this case it can’t be zero if for no other reason than the fact that the number zero isn’t in the set V ! We need to find an element that is in V so that under our definition of addition we have. they are two positive real numbers).e. Example 5 Suppose that the set V is the set of positive real numbers (i.lamar.edu/terms. First assume that x and y are any two elements of V (i. and z are any three elements of V. Next we’ll verify (c).aspx . Again. x+0 = 0+ x = x It looks like we should define the “zero vector” in this case as : 0=1. Since x is positive then for any c x c is a positive real number and so V is closed under scalar multiplication. Next.e. We’ll do this one with some detail pointing out how we do each step. and we need to be careful here. Since by x and y are positive numbers their product xy is a positive real number and so the V is closed under addition. In other words the zero vector for this set will be the number 1! Let’s see how that works and remember that our © 2007 Paul Dawkins 8 http://tutorial. let’s go through each of the axioms and verify that they are valid. Assume that x. If we took it to be all of ¡ we would not have a vector space. First let’s take a look at the closure axioms. We use 0 to denote the zero vector but it does NOT have to be the number zero.math. x + y = xy cx = x c This set under this addition and scalar multiplication is a vector space. Okay. Even though they are not addition and scalar multiplication as we think of them we are still going to call them the addition and scalar multiplication operations for this vector space. do not get excited about the definitions of “addition” and “scalar multiplication” here. 0. Next we need to find the zero vector.Linear Algebra definitions are NOT going to be standard here and it would be easy to get confused with the details if you had to go through them on your own.

Also. under our definition of addition and x 1 =1= 0 x x + (-x) = x × Therefore. x + 0 = x ×1 = x & 0 + x = 1× x = x Sure enough that does what we want it to do. x So. If x and y are any two elements of V and c is any scalar then.x with “minus x”. at this point we’ve taken care of the closure and addition axioms we now just need to deal with the axioms relating to scalar multiplication.lamar. this is just the notation we use to denote the negative of x.aspx .x . it looks like we’ve verified (g).x = 1 . In our case we need an element of V (so it can’t be minus x since that isn’t in V) such that x + (-x) = 0 and remember that 0=1 in our case! Given an x in V we know that x is strictly positive and so positive (since x is positive) and therefore the zero vector we have. We next need to define the negative. Let’s now verify (h).edu/terms. As with the zero vector to not confuse .math. If x is any element of V.Linear Algebra “addition” here is really multiplication and remember to substitute the number 1 in for 0. for the set V the negative of x is . . If x is any element of V and c and k are any two scalars then. for each element x that is in V. © 2007 Paul Dawkins 9 http://tutorial. 1 is defined (since x isn’t zero) and is x 1 is in V. We’ll do this one in some detail so you can see what we’re doing at each step. So. We’ll start with (g).

that was a lot of work and we’re not going to be showing that much work in the remainder of the examples that are vector spaces.Linear Algebra So. 1x = x1 = x Just remember that 1x is the notation for scalar multiplication and NOT multiplication of x by the number 1. Now. Let’s first show that V is closed under addition. Example 6 Let the set V be the points on a line through the origin in ¡ 2 with the standard addition and scalar multiplication. y2 ) . If x is any element of V and c and k are any two scalars then. The set V is all the points that are on some line through the origin in ¡ 2 . This amounts to showing that this point satisfies the equation of the line and that’s fairly simple to do. and we’ll need to show that u + v = ( x1 + x2 . y1 ) and v = ( x2 . For these examples you should check the other axioms to see if they are valid or fail. First. We’ve now shown that V is closed under addition. For those examples that aren’t a vector space we’ll show the details on at least one of the axioms that fails. Also note that a and b are fixed constants and aren’t allowed to change. We’ll leave that up to you to check most of the axioms now that you’ve seen one done completely out. To do this we’ll need the sum of two random points from V.edu/terms. ax + by = 0 for some a and some b. we know that the line must have the equation. and hence in V. In other words we are always on the same line. provided it satisfies the equation above. y1 ) will be on the line.aspx . let’s verify (i). after some rearranging and using the fact that both u and v were both in V (and so satisfied the equation of the line) we see that the sum also satisfied the line and so is in V. just plug the coordinates into the equation and verify we get zero. To show that V is closed under scalar multiplication we’ll need to show that for any u from V and © 2007 Paul Dawkins 10 http://tutorial. a point ( x1 .math. let’s think about just what V is. at least one not zero. So.lamar. a ( x1 + x2 ) + b ( y1 + y2 ) = ( ax1 + by1 ) + ( ax2 + by2 ) = 0 + 0 = 0 So. We’ve got the final axiom to go here and that’s a fairly simple one to verify. Now. say u = ( x1 . this axiom is verified. Okay. Then V is a vector space. We’ll show that V is closed under addition and scalar multiplication and leave it to you to verify the remaining axioms. y1 + y2 ) is also in V.

the cu = ( cx1 .math. cu is on the line and hence in V. This is done pretty much as we did closed under addition. does not satisfy the equation and hence is not in V and so V is not closed under addition.x1 . Showing that this set is a vector space can be a little difficult if you don’t know the equation of a line in ¡ n however. 0 ) -u = . but because this doesn’t satisfy the equation it is not in V and so axiom (e) is also not satisfied. that because we’re working with the standard addition that the zero vector and negative are the standard zero vector and negative that we’re used to dealing with. y2 ) be any two points from V (and so they satisfy the equation above). and so we won’t show the work here. We’ll leave it to you to verify that this particular V is not closed under scalar multiplication. Example 7 Let the set V be the points on a line that does NOT go through the origin in ¡ 2 with the standard addition and scalar multiplication. V is therefore closed under scalar multiplication. y1 + y2 ) .lamar. © 2007 Paul Dawkins 11 http://tutorial. 0 = ( 0. a ( cx1 ) + b ( cy1 ) = c ( ax1 + by1 ) = c ( 0 ) = 0 So.Linear Algebra any scalar. Then. c. 0 ) . u + v = ( x1 + x2 . Before moving on we should note that prior to this example all the sets that have not been vector spaces we’ve not been operating under the standard addition and/or scalar multiplication. ax + by = c for fixed constants a. In this example we’ve now seen that for some sets under the standard addition and scalar multiplication will not be vector spaces either. cy1 ) is also in V. and c where at least one of a and b is non-zero and c is not zero. Note however. b.edu/terms. y1 ) = ( . This set is not closed under addition or scalar multiplication. Also. note that since we are working on a set of points from ¡ 2 with the standard addition then the zero vector must be 0 = ( 0. In order for V to be a vector space it must contain the zero vector 0! You should go through the remaining axioms and see if there are any others that fail. Here is the work showing that it’s not closed under addition. Then V is not a vector space. Again we’ll leave it to you to verify the remaining axioms. a ( x1 + x2 ) + b ( y1 + y2 ) = ( ax1 + by1 ) + ( ax2 + by2 ) = c + c = 2c ¹ c So the sum.aspx . as many of you probably don’t.( x1 . In this case the equation of the line will be.y1 ) Note that we can extend this example to a line through the origin in ¡ n and still have a vector space. y1 ) and v = ( x2 . Let u = ( x1 . .

M n m is a vector space. to be the matrix -A then the properties of matrix arithmetic will show that the remainder of the axioms are valid. 0. Next. b ] . and c are fixed constants and at least one is not zero.math. A. etc. we’ve seen quite a few examples to this point. and the scalar multiple. we can define the negative.lamar. cu. M 59 . are also n ´ m matrices and hence are in M n m . ( f + g )( x ) = f ( x ) + g ( x ) Under these operations F [ a. the set of 2 ´ 2 matrices. but they’ve all involved sets that were some or all of ¡ n and so we now need to see a couple of examples of vector spaces whose elements (and hence the “vectors” of the set) are not points in ¡ n . Note that this example now gives us a whole host of new vector spaces. b ] and any scalar c define addition and scalar multiplication as. If we let c be any scalar and let the “vectors” u and v represent any two n ´ m matrices (i. b. Also. is a vector space and the set of all 5 ´ 9 matrices. b ] be the set of all real valued functions that are defined on the interval [ a. The equation of a plane through the origin in ¡ 3 is. -u . M 22 .Linear Algebra Example 8 Let the set V be the points on a plane through the origin in ¡3 with the standard addition and scalar multiplication.e. b ] is a vector space.edu/terms. Okay. Then M n m is a vector space. Then given any two “vectors”. Example 10 Let F [ a. Then V is a vector space. So M n m is closed under addition and scalar multiplication. they are both objects in M n m ) then we know from our work in the first chapter that the sum. ax + by + cz = 0 where a. u + v . if we define the zero vector. the “vectors” in this vector space are really matrices! Here’s another important example that may appear to be even stranger yet. to be the n ´ m zero matrix and if the “vector” u is some n ´ m . © 2007 Paul Dawkins 12 ( cf )( x ) = cf ( x ) http://tutorial. For instance.aspx . Example 9 Let n and m be fixed numbers and let M n m represent the set of all n ´ m matrices. Given the equation you can (hopefully) see that this will work in pretty much the same manner as the Example 6 and so we won’t show any work here. Therefore. from F [ a. Also let addition and scalar multiplication on M n m be the standard matrix addition and standard matrix scalar multiplication. is a vector space. f = f ( x ) and g = g ( x ) .

0.10] but not in F [ -3. and the negative of the “vector” f is the “vector” -f = .f ( x ) . b ) which is the set of all real valued functions that are defined on ( a. It can mean a lot of different things depending upon what type of vector space we’re working with.math. depending upon the interval we choose to work with we may have a different set of 1 would be in F [ 2. First. There is one final example that we need to look at in this section. For instance. Example 11 Let V consist of a single object. The “zero vector”. Both of the vector spaces from Examples 9 and 10 are fairly important vector spaces and as we’ll look at them again in the next section where we’ll see some examples of some related vector spaces. (a) 0u = 0 (b) c0 = 0 (c) ( -1) u = -u (d) If c u = 0 then either c = 0 or u = 0 © 2007 Paul Dawkins 13 http://tutorial. 6] x In this case the “vectors” are now functions so again we need to be careful with the term vector. the function because of division by zero. the function that is zero for all x.edu/terms.e. The last thing that we need to do in this section before moving on is to get a nice set of facts that fall pretty much directly from the axioms and will be true for all vector spaces. We should make a couple of quick comments about this vector space before we move on. functions in the set. for both addition and scalar multiplication we just going to plug x into both f ( x ) and/or g ( x ) and both of these are defined and so the sum or the product with a scalar will also be defined and so this space is closed under addition and scalar multiplication. Then. We could also look at the set F ( a. b ] . denoted by 0.lamar.e. b ) ( a < x < b . no endpoints) or F ( -¥. ¥ ) and we’ll still have a vector space. the set of all real valued functions defined on Also. and define 0+0 = 0 c0 = 0 Then V is a vector space and is called the zero vector space. recall that the [ a. Then. for F [ a. b ] is the zero function. we include the endpoints). Theorem 1 Suppose that V is a vector space. ¥ ) ( -¥.aspx . u is a vector in V and c is any scalar. b ] represents the interval a £ x £ b (i. i.Linear Algebra By assumption both f and g are real valued and defined on [ a.

u + ( -1) u = 0 © 2007 Paul Dawkins 14 http://tutorial. in essence. This isn’t too hard to show. Add the negative to both sides and then use axiom (f) again to say that 0u + ( -0u ) = 0 0u + ( -0u ) = 0u + 0u + ( -0u ) 0 = 0u + 0 Finally. We’ll give the proof for two of them and you should try and prove the other two on your own. 0 = 0u u + ( -1) u = 1u + ( -1) u Next. it is in the vector space and so we know from axiom (f) that it has a negative which we’ll denote by -0u. use axiom (h) on the right side and then a nice property of real numbers. Coming up with them on your own can be difficult sometimes.aspx . u + ( -1) u = (1 + ( -1) ) u = 0u Finally.edu/terms. and we’ve proven (a). or in other words that ( -1) u = -u . use axiom (e) on the right side to get. (c) In this case if we can show that u + ( -1) u = 0 then from axiom (f) we’ll know that ( -1) u is the negative of u.lamar. We’ll start with 0u and use the fact that we can always write 0 = 0 + 0 and then we’ll use axiom (h). 0u = ( 0 + 0 ) u = 0u + 0u This may have seemed like a silly and/or strange step. but it was required.Linear Algebra The proofs of these are really quite simple. Proof : (a) Now. while we don’t know just what 0u is as a vector. We’ll start with u + ( -1) u and use axiom (j) to rewrite the first u as follows. be using the fact that 0u = 0 and that’s what we’re trying to prove! So. but each of these steps will come straight from a property of real numbers or one of the axioms above. We couldn’t just add a 0u onto one side because this would. but they only appear that way after you’ve seen them. this can seem tricky.math. use part (a) of this theorem on the right side and we get.

under the addition and scalar multiplication that is defined on V.math. technically if we wanted to show that a subset W of a vector space V was a subspace we’d need to show that all 10 of the axioms from the definition of a vector space are valid. We want to investigate this idea in more detail and we’ll start off with the following definition. the only two axioms that we really need to worry about are the two closure axioms. In and of itself that isn’t important. To see an example of this see Example 7 from the previous section.aspx .Linear Algebra Subspaces Let’s go back to the previous section for a second and examine Example 1 and Example 6. and j) deal with how addition and scalar multiplication work. Note that the sum and scalar multiple will be in V we just don’t know if it will be in W. In Example 1 we saw that ¡ n was a vector space with the standard addition and scalar multiplication for any positive integer n. Many of the axioms (c. In Example 6 we saw that the set of points on a line through the origin in ¡ 2 with the standard addition and vector space multiplication was also a vector space. Now. As the following theorem shows however. © 2007 Paul Dawkins 15 http://tutorial. given a line we can find points in ¡ 2 that aren’t on the line. Definition 1 Suppose that V is a vector space and W is a subset of V. all of which require something to be in the subset W. Therefore. it is possible to take certain subsets of the original vector space and as long as we retain the definition of addition and scalar multiplication we will get a new vector space. the set of points in the vector space of Example 6 are also in the set of points in the vector space of Example 1. i. it is possible for some subsets to not be a new vector space. h. just what is so important about these two examples? Well first notice that they both are using the same addition and scalar multiplication.edu/terms. at least for some vector spaces. So. What we’ve seen here is that. we will get the zero vector and negative vector for free. We also need to verify that the zero vector (axiom e) is in W and that each element of W has a negative that is also in W (axiom f). however. but W is inheriting the definition of addition and scalar multiplication from V.lamar. in particular ¡ 2 is a vector space with the standard addition and scalar multiplication. If. in reality that doesn’t need to be done. since elements of W are also elements of V the six axioms listed above are guaranteed to be valid on W. d. Next. While it’s not important to the discussion here note that the opposite isn’t true. In that example we’ve got a subset of ¡ 2 with the standard addition and scalar multiplication and yet it’s not a vector space. but it will be important for the end result of what we want to discuss here. So. W is also a vector space then we call W a subspace of V. g. The first two (a. The only ones that we really need to worry about are the remaining four. Of course. and b) are the closure axioms that require that the sum of any two elements from W is back in W and that the scalar multiple of any element from W will be back in W. Once we have those two axioms valid.

aspx .edu/terms. even though we didn’t explicitly use it in the proof it was required in order to guarantee that we have a vector space.e. W is closed under addition). However.e. We only need to verify that assuming the two closure condition we get axioms e and f as well. d. Therefore. W is closed under scalar multiplication).math. Because V can be thought of as a subset of itself we can also think of it as a subspace of itself. W = {0} is a subset of V and is a vector space in its own right and so will be a subspace of V. the zero space which is the vector space consisting only of the zero vector. At this point we should probably take a look at some examples. Be careful with this proof. and j are true simply based on the fact that W is a subset of V and it uses the same addition and scalar multiplication and so we get these for free. From the second condition above we see that we are assuming that W is closed under scalar multiplication and so both 0u and ( -1) u must be in W. Also. Fact Every vector space. (b) If u is in W and c is any scalar then cu is also in W (i. 0u = 0 ( -1) u = -u But this means that the zero vector and the negative of u must be in W and so we’re done. in order for W to be a vector space it must be closed under addition and so without that first condition we can’t know whether or not W is in fact a vector space. Where the definition of addition and scalar multiplication on W are the same as on V.e. h. has at least two subspaces.Linear Algebra Theorem 1 Suppose that W is a non-empty (i. (a) If u and v are in W then u + v is also in W (i. As we discussed above the axioms c. Namely. © 2007 Paul Dawkins 16 http://tutorial. V. Next we should acknowledge the following fact.lamar. On the surface it may look like we never used the first condition of closure under addition and we didn’t use that to show that axioms e and f were valid. at least one element in it) subset of the vector space V then W will be a subspace if the following two conditions are true. Proof : To prove this theorem all we need to do is show that if we assume the two closure axioms are valid the other 8 axioms will be given to us for free. In all of these examples we assume that the standard addition and scalar multiplication are being used in each case unless otherwise stated. i. g. V itself and W = {0} (the zero space). but from Theorem 1 from the previous section we know that.

(a) Let W be the set of all points. y ) . x3 ) . x3 ) .Linear Algebra Example 1 Determine if the given set is a subspace of the given vector space.edu/terms. ( x. cx3 ) So. Therefore. This set is closed under neither addition nor scalar multiplication. x3 ) . y1 + y2 ) and since x1 . x2 + y2 . Is this a subspace of ¡ 3 ? This one is fairly simple to check a point will be in W if the first component is zero. x3 ) . x2 ³ 0 we also have x1 + x2 ³ 0 and so the resultant point is back in W. c ( x. y2 ) = ( x1 + x2 . y3 ) be any two points in W and let c be any scalar then. x2 . ( x1 . y2 . Is this a subspace of (b) Let W be the set of all points from ¡ 3 of the form ( 0. y3 ) = ( 0. from ¡ 2 in which x ³ 0 . x2 . x2 . W is not a subspace of V. (a) Let W be the set of all points. let x = ( 0. x2 . x + y = ( 0.math. Is this a subspace of ¡ 2 ? [Solution] ¡3 ? [Solution] ¡3 ? [Solution] ¡2 ? This set is closed under addition because. x3 ) and y = ( 0. both x + y and cx are in W and so W is closed under addition and scalar multiplication and so W is a subspace. y1 ) + ( x2 . However. In order for points to be in W in this © 2007 Paul Dawkins 17 http://tutorial. [Return to Problems] (b) Let W be the set of all points from ¡ 3 of the form ( 0. from ¡ 2 in which x ³ 0 . y ) . y2 .lamar. x2 . Is this a subspace of Solution In each of these cases we need to show either that the set is closed under addition and scalar multiplication or it is not closed for at least one of those. [Return to Problems] (c) Let W be the set of all points from ¡ 3 of the form (1. ( x. this set is not closed under scalar multiplication. Let c be any negative scalar and further assume that x > 0 then. Is this a subspace of ¡ 3 ? This one is here just to keep us from making any assumptions based on the previous part.aspx . x3 ) + ( 0. cy ) Then because x > 0 and c < 0 we must have cx < 0 and so the resultant point is not in W because the first component is neither zero nor positive. Is this a subspace of (c) Let W be the set of all points from ¡ 3 of the form (1. x2 . cx2 . y ) = ( cx. x3 + y3 ) cx = ( 0. So.

math. y3 ) = ( 2. cx3 ) Neither of which is in W and so W is not a subspace. y2 . x2 . (a) Let W be the set of diagonal matrices of size n ´ n .aspx .Linear Algebra case the first component must be a 1. x2 . Is this a subspace of M n n ? [Solution] é0 ê (b) Let W be the set of matrices of the form a21 ê ê a31 ë [Solution] a12 ù a22 ú . x2 + y2 . Is this a subspace of M n n ? Let u and v be any two n ´ n diagonal matrices and c be any scalar then. 0 é u1 0 L 0 ù é v1 0 L 0 ù é u1 + v1 ê 0 u L 0ú ê 0 v L 0ú ê 0 u2 + v2 2 2 ú+ê ú=ê u+v = ê ê M M O Mú ê M M O Mú ê M M ê ú ê ú ê 0 ë 0 0 L un û ë 0 0 L vn û ë 0 0 L 0ù é c u1 ê 0 cu L 0ú 2 ê ú cu = ê M M O Mú ê ú 0 L c un û ë 0 L O L L ù 0 ú ú M ú ú un + vn û 0 Both u + v and cu are also diagonal n ´ n matrices and so W is closed under addition and scalar multiplication and so is a subspace of M n n . © 2007 Paul Dawkins 18 http://tutorial.lamar. x3 ) and y = (1. [Return to Problems] Example 2 Determine if the given set is a subspace of the given vector space.edu/terms. x + y = (1.Is this a subspace of M 22 ? a22 ú û (c) Let W be the set of matrices of the form ê [Solution] é2 ë0 Solution (a) Let W be the set of diagonal matrices of size n ´ n . y3 ) be any two points in W and let c be any scalar other than 1 we get. y2 . cx2 . x3 ) + (1. if x = (1. [Return to Problems] é0 ê (b) Let W be the set of matrices of the form a21 ê ê a31 ë a12 ù a22 ú .Is this a subspace of M 32 ? ú a32 ú û Let u and v be any two matrices from W and c be any scalar then. However. x3 + y3 ) cx = ( c.Is this a subspace of M 32 ? ú a32 ú û a12 ù .

W is not closed under addition. In general the set of upper triangular n ´ n matrices (without restrictions. b ] ? (d) Let W be the set of all functions such that f ( 6 ) = 10 . b ] .Is this a subspace of M 22 ? a22 ú û 4 0 u12 + v12 ù u22 + v22 ú û So. b ] . You should also verify for yourself that W is not closed under scalar multiplication either.lamar. Is this a subspace of F [ a.math. Is this a subspace of (c) Let W be the set of all polynomials of degree exactly n. [Solution] (b) Let Pn be the set of all polynomials of degree n or less. the set of all real valued functions on the interval [ a. é2 ë0 a12 ù . Therefore. [Return to Problems] é2 u+v = ê ë0 u12 ù é 2 + u22 ú ê 0 û ë v12 ù é = v22 ú ê û ë Do not read too much into the result from part (c) of this example. Is this a subspace of F [ a. Is this a subspace of F [ a. u + v isn’t in W since the entry in the first row and first column isn’t a 2. b ] be the set of all continuous functions on the interval [ a. You should verify this for the practice. b ] where we have a £ 6 £ b ? [Solution] [Solution] F [ a. Example 3 Determine if the given set is a subspace of the given vector space. (a) Let C [ a. unlike part (c) from above) is a subspace of M n n and the set of lower triangular n ´ n matrices is also a subspace of M n n .aspx . In either case W is not a subspace of M 22 . b ] ? [Solution] © 2007 Paul Dawkins 19 http://tutorial. [Return to Problems] (c) Let W be the set of matrices of the form ê Let u and v be any two matrices from W then. b ] .edu/terms.Linear Algebra é 0 u12 ù é 0 v12 ù é 0 u + v = ê u21 u22 ú + ê v21 v22 ú = ê u21 + v21 ê ú ê ú ê ê u31 u32 ú ê v31 v32 ú ê u31 + v31 ë û ë û ë cu12 ù é 0 ê cu c u = ê 21 cu22 ú ú ê cu31 cu32 ú ë û u12 + v12 ù u22 + v22 ú ú u32 + v32 ú û Both u + v and cu are also in W and so W is closed under addition and scalar multiplication and hence is a subspace of M 32 .

Is this a subspace of F [ a. but with the sum. b ] . if you’ve not had Calculus you may not know what a continuous function is. b ] ? First recall that a polynomial is said to have degree n if its largest exponent is n. Okay. You can sketch the graph of the function from a to b without ever picking up your pencil of pen.lamar. Okay.ax 2 + dx + e where a is not zero. So. [Return to Problems] (b) Let Pn be the set of all polynomials of degree n or less. u + v = ( an + bn ) x n + L + ( a1 + b1 ) x + a0 + b0 cu = can x n + L + ca1 x + ca0 In both cases the degree of the new polynomial is not greater than n. b ] . C [ a. u = ax 2 + bx + c v = .math. b ] . b ] . b ] if there are no holes or breaks in the graph. To see this let’s take a look at the n = 2 case to keep things simple (the same argument will work for other values of n) and consider the following two polynomials. the set of all real valued functions on the interval [ a. So. b ] . A fact from Calculus (which if you haven’t had please just believe this) is that the sum of two continuous functions is continuous and multiplying a continuous function by a constants will give a new continuous function. Is this a subspace of F [ a. Put in another way. b ] be the set of all continuous functions on the interval [ a. Of course in the case of scalar multiplication it will remain degree n. Is this a subspace of F [ a.Linear Algebra Solution (a) Let C [ a. The point is that Pn is closed under addition and scalar multiplication and so will be a subspace of F [ a. The other constants may or may not be zero. Both are polynomials of exactly degree 2 (since a is not zero) and if we add them we get.edu/terms.aspx . Then. it is possible that some of the coefficients cancel out to zero and hence reduce the degree of the polynomial. b ] ? n this case W is not closed under addition. A quick and dirty definition of continuity (not mathematically correct. what this fact tells us is that the set of continuous functions is closed under standard function addition and scalar multiplication and that is what we’re working with here. b ] is a subspace of F [ a. u + v = (b + d ) x + c + e 20 © 2007 Paul Dawkins http://tutorial. [Return to Problems] (c) I Let W be the set of all polynomials of degree exactly n. we know this is true because each polynomial must have degree 2. but useful if you haven’t had Calculus) is that a function is continuous on [ a. let u = an x n + L + a1 x + a0 and v = bn x n + L + b1 x + b0 and let c be any scalar.

if we evaluate the sum or the scalar multiple at 6 we’ll get a result of 10. C [ a. Finally. [Return to Problems] (d) Let W be the set of all functions such that f ( 6 ) = 10 . b ] . this won’t happen. so we will assume that a £ 6 £ b . b ] . This means that f ( 6 ) = 10 and g ( 6 ) = 10 . and so the scalar is not in W either. b ] with a £ 6 £ b ? Go back and think about how we did the work for that part and that should show you © 2007 Paul Dawkins 21 http://tutorial. In other words.Linear Algebra So. Pn . was a subspace of F [ a. The sum is. f = f ( x ) and g = g ( x ) . First. is a fairly important vector space in its own right to many areas of mathematical study. In this case suppose that we have two elements from W. Likewise. In order for W to be a subspace we’ll need to show that the sum and a scalar multiple will also be in W. Next. Therefore for n = 2 W is not closed under addition.math. Let’s take a more general look at this. b ] with a £ 6 £ b . if c is any scalar that isn’t 1 we’ll have. However. we should just point out that the set of all continuous functions on the interval [ a.aspx .lamar. For some fixed number k let W be the set of all real valued functions for which f ( 6 ) = k . We could just have easily done the work for general n and we’d get the same result and so W is not a subspace. Let’s take a look at the sum. subspaces can have subspaces themselves. We looked at n = 2 only to make it somewhat easier to write down the two example polynomials. b ] . [Return to Problems] Before we move on let’s make a couple of observations about some of the sets we looked at in this example.edu/terms. Are there any values of k for which W will be a subspace of F [ a. the sum had degree 1 and so is not in W. ( f + g )( 6 ) = f ( 6 ) + g ( 6 ) = 10 + 10 = 20 ¹ 10 ( c f )( 6 ) = cf ( 6 ) = c (10 ) ¹ 10 and so the sum will not be in W. Is this a subspace of F [ a. if you’ve had Calculus you’ll know that polynomials are continuous and so Pn can also be thought of as a subspace of C [ a. here is something for you to think about. In the last part we saw that the set of all functions for which f ( 6 ) = 10 was not a subspace of F [ a. However. b ] as well. we saw that the set of all polynomials of degree less than or equal to n. In other words. b ] where we have a £ 6 £ b ? First notice that if we don’t have a £ 6 £ b then this problem makes no sense. Therefore W is not closed under addition or scalar multiplication and so is not a subspace.

math.aspx . Let’s see some examples of null spaces that are easy to find. In equation form it is easy to see that the only solution is x1 = x2 = 0 . é 2 0ù (a) A = ê ú [Solution] ë -4 10 û é 1 -7 ù (b) B = ê ú [Solution] ë -3 21û é0 0 ù (c) 0 = ê ú [Solution] ë0 0 û Solution (a) A = ê é 2 0ù ú ë -4 10 û é 2 0 ù é x1 ù é0 ù ê -4 10 ú ê x ú = ê0 ú ë ûë 2û ë û 2 x1 = 0 -4 x1 + 10 x2 = 0 To find the null space of A we’ll need to solve the following system of equations. © 2007 Paul Dawkins 22 http://tutorial.Linear Algebra that there is one value of k (and only one) for which W will be a subspace. In terms of vectors from ¡ 2 the solution consists of the single vector {0} and hence the null space of A is {0} . Example 4 Determine the null space of each of the following matrices. If you need a refresher on solutions to system take a look at the first section of the first chapter.7 x2 = 0 -3 x1 + 21x2 = 0 t is any real number Here is the system that we need to solve for this part. Þ We’ve given this in both matrix form and equation form.lamar. [Return to Problems] (b) B = ê é 1 -7 ù ú ë -3 21û é 1 -7 ù é x1 ù é0 ù ê -3 21ú ê x ú = ê0 ú ë û ë 2û ë û x1 = 7t x2 = t x1 . we can see that these two equations are in fact the same equation and so we know there will be infinitely many solutions and that they will have the form. Can you figure out what that number has to be? We now need to look at a fairly important subspace of ¡ m that we’ll be seeing in future sections.edu/terms. The null space of A is the set of all x in ¡ m such that Ax = 0 . Þ Now. Definition 2 Suppose A is an n ´ m matrix.

x2 ) from ¡ 2 that are in the form. The null space for the first matrix was {0} . Theorem 2 Suppose that A is an n ´ m matrix then the null space of A will be a subspace of ¡m . in ¡ m will be a solution to this system and so we know that the null space is not empty.math. if you think about it. Thinking back to the early parts of this section we can see that all of these are in fact subspaces of ¡2 . this will always be the case as the following theorem shows. First. Therefore. the null space of B will consist of all the vectors x = ( x1 . x = ( 7t . every vector x in ¡ 2 will be a solution to this system since we are multiplying x by the zero matrix. rather than vectors in ¡ 2 .7 x2 = 0 . [Return to Problems] To see some examples of a more complicated null space check out Example 7 from the section on Basis and Example 2 in the Fundamental Subspace section.edu/terms. Hence the null space of 0 is all of ¡ 2 . In fact. the since the null space of B consists of all the solutions to Bx = 0 .1) t is any real number We’ll see a better way to write this answer in the next section. Both of these examples have more going on in them. This is a good thing since a vector space (subspace or not) must contain at least one element.aspx . © 2007 Paul Dawkins 23 http://tutorial. Proof : We know that the subspace of A consists of all the solution to the system Ax = 0 . but the first step is to write down the null space of a matrix so you can check out the first step of the examples and then ignore the remainder of the examples. 0. For the second matrix the null space was the line through the origin given by x1 . let’s note that the null space of B will be all of the points that are on the equation through the origin give by x1 . In terms of equations. let’s go back and take a look at all the null spaces that we saw in the previous example. we should point out that the zero vector. [Return to Problems] (c) 0 = ê é0 0 ù ú ë0 0 û In this case we’re going to be looking for solutions to é0 0 ù é x1 ù é0ù ê0 0 ú ê x ú = ê0ú ë ûë 2û ë û However.lamar. Now. t ) = t ( 7. The null space for the zero matrix was all of ¡ 2 .7 x2 = 0 .Linear Algebra So.

let’s take a look at the scalar multiple. Therefore the null space is a subspace of ¡ m . The null space is therefore closed under addition. A ( x + y ) = Ax + Ay = 0 + 0 = 0 The sum.Linear Algebra Now that we know that the null space is not empty let x and y be two elements from the null space and let c be any scalar. Let’s start with the sum. x + y is a solution to Ax = 0 and so is in the null space. © 2007 Paul Dawkins 24 http://tutorial.aspx .lamar. A ( c x ) = c Ax = c 0 = 0 The scalar multiple is also in the null space and so the null space is closed under scalar multiplication. Next.math. We just need to show that the sum and scalar multiple of these are also in the null space and we’ll be done.edu/terms.

n e1 = (1. 20 ) a linear combination of v1 = ( 2. v n . We will be revisiting this idea again in a couple of sections. if there are scalars c1 . u = u1e1 + u2e 2 + L + une n Example 1 Determine if the vector is a linear combination of the two given vectors.K . t ) = t ( 7. e 2 .K .10 ) and v 2 = ( -3. 2 ) and v 2 = ( 4. but we can think of it as that if we need to. c2 . we can see that the null space we were looking at above is in fact all the linear combinations of the vector (7. u2 .1) Or. 0 ) e2 = ( 0. 0.lamar. It may seem strange to talk about linear combinations of a single vector since that is really scalar multiplication.all from V.aspx . 0. e1 .1).1) t is any real number We would like a more compact way of stating this result and by the end of this section we’ll have that. (a) Is w = ( -12. Let’s first revisit an idea that we saw quite some time ago. In that example we saw that the null space of the given matrix consisted of all the vectors of the form x = ( 7t . -15 ) ? (c) Is w = (1. In the section on Matrix Arithmetic we looked at linear combinations of matrices and columns of matrices. e n . -15 ) ? [Solution] [Solution] [Solution] © 2007 Paul Dawkins 25 http://tutorial. Definition 1 We say the vector w from the vector space V is a linear combination of the vectors v1 . We can also talk about linear combinations of vectors.K . 0 ) L e n = ( 0. Let’s take a look at an example or two. 0.10 ) and v 2 = ( -3. The standard basis vectors for ¡ n were defined as. We saw that we could take any vector u = ( u1 .edu/terms. 20 ) a linear combination of v1 = ( -1. 0. cn so that w can be written w = c1 v1 + c2 v 2 + L + cn v n So. but the point here is simply that we’ve seen linear combinations of vectors prior to us actually discussing them here.K . 0. The null space above was not the first time that we’ve seen linear combinations of vectors however. we could write u as a linear combination of the standard basis vectors.Linear Algebra Span In this section we will cover a topic that we’ll see off and on over the course of this chapter.1.K . un ) from ¡ and write it as. -6 ) ? (b) Is w = ( 4. v 2 . in other words.K .K . When we were looking at Euclidean n-space we introduced these things called the standard basis vectors. -4 ) a linear combination of v1 = ( 2. Let’s start off by going back to part (b) of Example 4 from the previous section.math.

There are just a few of the possibilities.3c2 = 4 10c1 . has at least one solution then w is a linear combination of the two vectors. ( -12.Linear Algebra Solution (a) Is w = ( -12. [Return to Problems] © 2007 Paul Dawkins 26 http://tutorial.e. unlike the previous part there are literally an infinite number of ways in which we can write the linear combination. 2 ) + c2 ( 4. 20 ) = c1 ( -1. any of the following combinations would work for instance. So. -4 ) a linear combination of v1 = ( 2. 20 ) a linear combination of v1 = ( 2.10 ) and v 2 = ( -3.v 2 3 w = . We’ll leave it to you to verify that the solution to this system is c1 = 4 and c2 = -2 .10 ) and v 2 = ( -3. w = c1 v1 + c2 v 2 -c1 + 4c2 = -12 2c1 . -6 ) ? In each of these cases we’ll need to set up and solve the following equation. Therefore. [Return to Problems] (b) Is w = ( 4. However. w = 2 v1 + ( 0 ) v 2 w = 8 v1 + 4 v 2 4 w = ( 0 ) v1 . -15 ) ? Here is the system we’ll need to solve for this part. 2c1 . w is a linear combination of v1 and v 2 and we can write w = 4 v1 . (c) Is w = (1.15c2 = 20 The solution to this system is. If there is no solution then w is not a linear combination of the two vectors. -15 ) ? For this part we’ll need to the same kind of thing so here is the system.2 v 2 . 20 ) a linear combination of v1 = ( -1.edu/terms.lamar. -6 ) Then set coefficients equal to arrive at the following system of equations.v1 . 3 c1 = 2 + t 2 c2 = t t is any real number This means w is linear combination of v1 and v 2 .6c2 = 20 If the system is consistent (i. 2c1 . 2 ) and v 2 = ( 4.aspx .math.3c2 = 1 10c1 .15c2 = -4 This system does not have a solution and so w is not a linear combination of v1 and v 2 .2 v 2 [Return to Problems] There are of course many more.

K . Now that we’ve seen how linear combinations work and how to tell if a vector is a linear combination of a set of other vectors we need to move into the real topic of this section. Proof : (a) So. v n } be a set of vectors in a vector space V and let W be the set of all linear combinations of the vectors v1 .K . u = c1 v1 + c2 v 2 + L + cn v n and w = k1 v1 + k2 v 2 + L + kn v n Now. we need to show that W is closed under addition and scalar multiplication. v n . v n be vectors in a vector space V and let their span be W = span { v1 . It is now time to give that notation. it is kind of strange to be talking about linear combinations of a single vector). Now. v n span W. there are scalars c1 . The set W is the span of the vectors v1 .K . let’s take a look at the sum. cn and k1 .K .K . but if we add in more components and/or more vectors to the set the problem will work in essentially the same manner. © 2007 Paul Dawkins 27 http://tutorial. As pointed out at the time we’re after a more compact notation for denoting this. v 2 .aspx . v 2 . v 2 . v 2 .K . We can now see that the null space from that example is nothing more than all the linear combinations of the vector (7. is a linear combination of the vectors v1 . v 2 .edu/terms. v 2 . Definition 2 Let S = { v1 . k2 . kn so that.K . v n .lamar. span { ( 7.K . So. v 2 .1) (and again.K . since W is the set of all linear combinations of v1 . v 2 .K .Linear Algebra So. So.K . In the opening of this section we recalled a null space that we’d looked at in the previous section.1) } Before we move on to some examples we should get a nice theorem out of the way. u + w .math.K . v n and is denoted by W = span ( S ) OR W = span { v1 . (b) W is the smallest subspace of V that contains all of the vectors v1 . with this notation we can now see that the null space that we examined at the start of this section is now nothing more than. (a) W is a subspace of V. Theorem 1 Let v1 . v n } We also say that the vectors v1 . Let u and w be any two vectors from W. v 2 . c2 . v 2 . this example was kept fairly simple. v n and hence must be in W and so W is closed under addition. v n that means that both u and w must be a linear combination of these vectors. v n } then. u + w = ( c1 + k1 ) v1 + ( c2 + k2 ) v 2 + L + ( cn + kn ) v n So the sum.

v i = 0 v1 + 0 v 2 + L + 1v i + L + 0 v n Now. a v1 + b v 2 = ( a. v 2 . k u = ( k c1 ) v1 + ( k c2 ) v 2 + L + ( k cn ) v n As we can see the scalar multiple. ci v i . 0 ) for any choices of a and b.K .K . Now.K . u is in W and so must be a linear combination of v1 .1. v 2 . u = c1 v1 + c2 v 2 + L + cn v n Each of the terms in this sum. (b) v1 = (1. 0 ) and v 2 = ( 0. 0 ) = ( a. let’s start this off by noticing that W does in fact contain each of the v i ’s since. b. Example 2 Describe the span of each of the following sets of vectors. let k be any scalar and let’s take a look at. 0. 0 ) and v 2 = ( 0.lamar. But this means that u is the sum of a bunch of vectors that are in W ¢ which is closed under addition and so that means that u must in fact be in W ¢ . Now. v n and hence must be in W and so W is closed under scalar multiplication. 0. 0 ) So. 0.math. v n . b.K . let’s take a look at some examples of spans. v 2 . If we can show that u must also be in W ¢ then we’ll have shown that W ¢ contains a copy of W since it will contain all the vectors in W. is the set of all linear combinations and we can write down a general linear combination for these two vectors. is a linear combination of the vectors v1 . let W ¢ be a vector space that contains v1 .aspx . v 2 . 0. is a scalar multiple of a vector that is in W ¢ and since W ¢ is a vector space it must be closed under scalar multiplication and so each ci v i is in W ¢ .1.Linear Algebra Now. 0 ) + ( 0.K . ku. © 2007 Paul Dawkins 28 http://tutorial. v n and consider any vector u from W. span { v1 . (b) In these cases when we say that W is the smallest vector space that contains the set of vectors v1 . v 2 } . it looks like span { v1 .1. v n then it will also contain a complete copy of W as well. v 2 } will be all of the vectors from ¡ 3 that are in the form ( a. W must be a vector space. b. So. v 2 . We’ve now shown that W ¢ contains every vector from W and so must contain W itself.edu/terms. 0 ) . -1) Solution (a) The span of this set of vectors. v n we’re really saying that if W ¢ is also a vector space that contains v1 . (a) v1 = (1. Therefore.

We’ll need to find a set of vectors so that the span of that set will be exactly the space given. (b) A general linear combination in this case is. However. é 1 0ù é0 0 ù (a) v1 = ê ú and v 2 = ê0 1ú ë0 0 û ë û (b) v1 = 1 . Example 4 Determine a set of vectors that will exactly span each of the following vector spaces.edu/terms.lamar. b. So just what do we need to do to mathematically show that two sets are equal? Let’s suppose that we want to show that A and B are equal sets. The only difference is that this time we aren’t working in ¡ n for this example. a. and v 3 = x3 Solution These work exactly the same as the previous set of examples worked. -b ) So. a. before we start this let’s think about just what we need to show here. -b ) = ( a. a v1 + b v 2 + c v3 = a + bx + cx3 In this case span { v1 . we need to show that the span of our proposed set of vectors is in fact the same set as the vector space. (a) ¡ n [Solution] (b) M 22 [Solution] (c) Pn [Solution] Solution Okay. What we’ll need in each of these examples is a set of vectors with which we can write a general vector from the space as a linear combination of the vectors in the set. 0. a v1 + b v 2 = ( a. span { v1 . v 2 . v 3 } will be all the polynomials from P3 that do not have a quadratic term. Likewise.math. -b ) for any choices of a and b. a. b. Now. é a 0ù é0 0 ù é a 0 ù a v1 + b v 2 = ê ú+ê ú=ê ú ë 0 0û ë0 b û ë 0 b û Here it looks like span { v1 . b. we’ll need to show that each b in B will be in A and in doing that we’ll have shown that A will contain all of B. 0 ) + ( 0. A general linear combination will look like.Linear Algebra (b) This one is fairly similar to the first one. v 2 } will be all the diagonal matrices in M 22 . (a) Here is a general linear combination of these “vectors”. v 2 = x . 0.aspx . In other words. To so this we’ll need to show that each a in A will be in B and in doing so we’ll have shown that B will at the least contain all of A. the only way that A can contain all of B and B can contain all of A is for A and © 2007 Paul Dawkins 29 http://tutorial. let’s see if we can determine a set of vectors that will span some of the common vector spaces that we’ve seen. v 2 } will be all the vectors from ¡ 4 of the form ( a. Example 3 Describe the span of each of the following sets of “vectors”.

(a) ¡ n We’ve pretty much done this one already. v 3 = ê0 0 ú . Therefore. In that example we saw a set of matrices that would span all the diagonal matrices in M 22 and so we can do a natural extension to get a set that will span all of M 22 . [Return to Problems] (c) Pn We can use Example 3(b) to help with this one. Next we’ll need to show that each vector in our span will also be in the vector space. given any matrix from M 22 . e n and so at the least the span of the standard basis vectors will contain all of ¡ n . © 2007 Paul Dawkins 30 http://tutorial.K . é a cù A=ê ú = a v1 + b v 2 + c v 3 + d v 4 ëb dû and so M 22 must be contained in the span of these vectors and so these vectors will span M 22 . It looks like the following set should do it. e 2 . e1 . However. é a cù A=ê ú ëb dû we can write it as the following linear combination of these “vectors”.math. Likewise. v 4 = ê0 1ú ë0 0 û ë û ë û ë û Clearly any linear combination of these four matrices will be a 2 ´ 2 matrix and hence in M 22 and so the span of these matrices must be contained in M 22 . Earlier in the section we showed that any vector from ¡ n can be written as a linear combination of the standard basis vectors. v 2 = ê 1 0 ú .aspx .edu/terms. since any linear combination of the standard basis vectors is going to be a vector in ¡ n we can see that ¡ n must also contain the span of the standard basis vectors. the span of the standard basis vectors must be ¡ n . First recall that Pn is the set of all polynomials of degree n or less. Using Example 3(b) as a guide it looks like the following set of “vectors” will work for us. for our example we’ll need to determine a possible set of spanning vectors show that every vector from our vector space is in the span of our set of vectors. (b) M 22 We can use result of Example 3(a) above as a guide here.lamar. [Return to Problems] é 1 0ù é0 0 ù é0 1ù é0 0 ù v1 = ê ú .Linear Algebra B to be the same set. So.

c2 + c3 = u1 If we set components equal we arrive at the following system of equations. 2. 3c2 + c3 = u2 c1 + 4c2 . 4 ) + c3 (1.aspx .3. -1. © 2007 Paul Dawkins 31 http://tutorial. Okay let’s think about how we’ve got to approach this. It should be clear (hopefully) that a linear combination of these is a polynomial of degree n or less and so will be in Pn .1) .math. Choose a general vector from ¡ 3 . [Solution] Solution (a) v1 = ( 2. and v 3 = (1. 4 ) . -2 ) 2c1 .1. So to answer that question here we’ll do the following. u3 ) = c1 ( 2. and v 3 = (1. [Return to Problems] There is one last idea about spans that we need to discuss and its best illustrated with an example. in this case its not so clear.1.2c3 = u3 In matrix form this is. and determine if we can find scalars c1 .edu/terms.Linear Algebra v 0 = 1. c2 . Or. 4 ) . -1) . (a) v1 = ( 2.0. v 2 = ( -1.3. Likewise. v 2 = ( -1.3.lamar. v1 = x. v 2 = ( 3. and c3 so that u is a linear combination of the given vectors. u2 . span { v1 . Therefore the span of these vectors will be contained in Pn . [Solution] (b) v1 = (1.1. Clearly the span of these vectors will be in ¡ 3 since they are vectors from ¡ 3 . v 2 .8. In the previous example our set of vectors contained vectors that we could easily show this.1) . u3 ) . and v 3 = ( -3.1) + c2 ( -1. However. v n = x n Note that used subscripts that matched the degree of the term and so started at v 0 instead of the usual v1 . -2 ) . -5 ) . we can write a general polynomial of degree n or less. 0. u = ( u1 . v 3 } . u2 . v 2 = x 2 . The real question is whether or not ¡ 3 will be contained in the span of these vectors. as the following linear combination p = a0 + a1 x + L + an x n p = a0 v 0 + a1 v1 + L + an v n Therefore Pn is contained in the span of these vectors and this means that the span of these vectors is exactly Pn . 0. ( u1 .K .1) . Example 5 Determine if the following sets of vectors will span ¡3 . -2 ) .

aspx © 2007 Paul Dawkins . As with the first part. Note that there are in fact infinitely many choices of u = ( u1 . u2 . u2 . u2 . v 3 } is contained in ¡ 3 . We’ll leave it to you to verify that the matrix form of this system is. u2 . u3 ) if the coefficient matrix. é 1 3 -3ù é c1 ù é u1 ù ê 2 -1 8 ú ê c ú = ê u ú ê úê 2ú ê 2ú ê -1 1 -5ú ê c3 ú ê u3 ú ë ûë û ë û This system will have a solution for every choice of u = ( u1 . u2 . 2.math. have at least one solution) for every possible choice of u = ( u1 .edu/terms. and v 3 = ( -3. v 3 } and so the span of these three [Return to Problems] 32 http://tutorial. v 2 . u3 ) cannot be written as a linear combination of these three vectors. u3 ) provided the coefficient matrix is invertible and we can check that be doing a quick determinant computation. u2 . v 3} = ¡3 [Return to Problems] (b) v1 = (1.e. Therefore the coefficient matrix is invertible and so this system will have a solution for every choice of u = ( u1 . This theorem tells us that this system will be consistent for every choice of u = ( u1 . u2 . -5 ) . u3 ) for which this system will not have a solution and so u = ( u1 .8. This in turn tells us that span { v1 .1) . let’s choose a general vector u = ( u1 . We’ll do this one a little quicker. v 2 . u2 . we know that span { v1 .lamar. least one vector from ¡ 3 that is not contained in span { v1 . v 2 . u3 ) form ¡3 and form up the system that we need to solve. A. but we’ve just shown that there is at vectors will not be all of ¡ 3 . is invertible. v 2 = ( 3. -1) . -1. v 2 . This in turn tells us that there is at least one choice of u = ( u1 . u3 ) . However. Nicely enough this is very easy to do if you recall Theorem 9 from the section on Determinant Properties. v 3 } is contained in ¡ 3 and so we’ve now shown that span { v1 .Linear Algebra é 2 -1 1ù é c1 ù é u1 ù ê 0 3 1ú êc2 ú = êu2 ú ê úê ú ê ú ê 1 4 -2 ú ê c3 ú ê u3 ú ë ûë û ë û What we need to do is to determine if this system will be consistent (i. if we denote the coefficient matrix as A we’ll leave it to you to verify that det ( A ) = -24 . u3 ) . So. u3 ) that will not yield solutions! Now. in this case we have det ( A ) = 0 (you should verify this) and so the coefficient matrix is not invertible.

This tells us that the set of vectors that will span a vector space are not unique. © 2007 Paul Dawkins 33 http://tutorial.lamar. Secondly.edu/terms. we’ve now seen at least two different sets of vectors that will span ¡ 3 . First. In other words. This is an idea we’re going to be looking at in much greater detail in the next couple of sections.aspx .math.Linear Algebra This example has shown us two things. There are the three vectors from Example 5(a) as well as the standard basis vectors for ¡ 3 . we can have more that one set of vectors span the same vector space. it has shown us that we can’t just write down any set of three vectors and expect to get those three vectors to span ¡ 3 .

1. 2 ) . Definition 1 Suppose S = { v1 . and v 3 = ( 0. S = { v1 . 2 ) . -5. [Solution] (c) v1 = (1. In this section we’d like to start looking at when it will be possible to express a given vector from a vector space as exactly one linear combinations of the set S. Recall that by span we mean that every vector in the space can be written as a linear combination of the vectors in S. However. 4 ) . v 2 . 0. This solution is called the trivial solution. v n } . © 2007 Paul Dawkins 34 http://tutorial. (a) v1 = ( 3. -2. c1 v1 + c2 v 2 + L + cn v n = 0 This equation has at least one solution. [Solution] Solution To answer the question here we’ll need to set up the equation for each part.1) . We’ll start this section off with the following definition. If there is another solution then the vectors in the set S are called linearly dependent and the set is called a linearly dependent set. If we only get the trivial solution the vectors will be linearly independent and if we get more than one solution the vectors will be linearly dependent. 0 ) . v 2 . namely. 0). At this point we’ve got a system of equations that we can solve. 4 ) .edu/terms. We’ll do this one in detail and then do the remaining parts quicker. -8 ) and v 2 = ( -9. Example 1 Determine if each of the following sets of vectors are linearly independent or linearly dependent. Let’s take a look at some examples. 0.math. as we saw in Example 1(b) of that section there is sometimes more than one linear combination of the same set of vectors can be used for a given vector. can span a vector space. We also saw in the previous section that some sets of vectors. combine the left side into a single vector and the set all the components of the vector equal to zero (since it must be the zero vector. and v 3 = ( 0.aspx c1 v1 + c2 v 2 + L + cn v n = 0 . v 2 = ( 0.K .1. v 2 = ( 3. -1) and v 2 = ( -2. c1 = 0 . 0 ) . [Solution] (d) v1 = ( 2. We’ll first set up the equation and get the left side combined into a single vector.1) . … . c2 = 0 . cn = 0 . v n } is a non-empty set of vectors and form the vector equation. [Solution] (b) v1 = (12.Linear Algebra Linear Independence In the previous section we saw several examples of writing a particular vector as a linear combination of other vectors.lamar. If the trivial solution is the only solution to this equation then the vectors in the set S are called linearly independent and the set is called a linearly independent set. (a) v1 = ( 3.K . -1) and v 2 = ( -2.6 ) .

0. -8 ) + c2 ( -9.1. 0 ) + c3 ( 0. 3 c1 = t 4 c2 = t t is any real number We’ve got more than the trivial solution (note however that the trivial solution IS still a solution.aspx .math. Here is the vector equation for this part. (b) v1 = (12.9c2 = 0 -8c1 + 6c2 = 0 The system of equations that we’ll need to solve is. [Return to Problems] (c) v1 = (1. -8 ) and v 2 = ( -9. 0 ) + c2 ( 0. set each of the components equal to zero to arrive at the following system of equations. -c1 + 2c2 ) = ( 0. The only difference between this one and the previous two are the fact that we now have three vectors out of ¡ 3 . there’s just more than that this time) and so these vectors are linearly dependent.lamar. 0 ) . c1 = 0 c2 = 0 [Return to Problems] The trivial solution is the only solution and so these two vectors are linearly independent. and the solution to this system is. 6 ) = 0 12c1 . not much solving to do this time.6 ) . 0 ) .0. -1) + c2 ( -2. 0.2c2 . Here is the vector equation we need to solve.1) = 0 c1 = 0 c2 = 0 c3 = 0 The system of equations to solve for this part is. It is clear that the only solution will be the trivial solution and so these vectors are linearly independent. c1 (12. 3c1 . So.edu/terms.2c2 = 0 -c1 + 2c2 = 0 Solving this system gives to following solution (we’ll leave it to you to verify this).Linear Algebra c1 ( 3. 0 ) Now. 2 ) = 0 ( 3c1 . and v 3 = ( 0. 0. [Return to Problems] © 2007 Paul Dawkins 35 http://tutorial. v 2 = ( 0.1) . c1 (1.1.

1) = 0 2c1 + 3c2 = 0 -2c1 .lamar. 4 ) . i.5c2 + c3 = 0 4c1 + 4c2 + c3 = 0 The system of equations that we’ll need to solve here is. Theorem 9 from the Properties of the Determinant section can help us answer this question without solving the system.t 4 1 c2 = t 2 c3 = t t is any real number We’ve got more than just the trivial solution and so these vectors are linearly dependent. If the coefficient matrix is not square then we can’t take the determinant and so we’ll not have a choice but to solve the system. 4 ) + c3 ( 0.aspx . 0. 4 ) . -2. The solution to this system is. We should work a couple of examples that does not fit this mold to make sure that you understand that we don’t need to have the same number of vectors as components. two vectors from ¡ 2 or three vectors from ¡ 3 . 0. In fact the standard basis vectors for ¡ n . e 2 = ( 0. -5. 0. that the actual solution to the system isn’t ever important as we’ll see towards the end of the section. it can be shown that if the determinant is zero then the system will have infinitely many solutions.Linear Algebra (d) v1 = ( 2. e1 = (1. 0 ) . © 2007 Paul Dawkins 36 http://tutorial.e. once the system is set up if the coefficient matrix is square all we really need to do is take the determinant of the coefficient matrix and if it is non-zero the set of vectors will be linearly independent and if the determinant is zero then the set of vectors will be linearly dependent.K .1) . The vectors in the previous example all had the same number of components as vectors. Therefore. -5. 4 ) + c2 ( 3. 0. Likewise. namely the trivial solution. 3 c1 = . All we were interested in it was whether or not the system had only the trivial solution or if there were more solutions in addition to the trivial solution. This theorem tells us that if the determinant of the coefficient matrix is non-zero then the system will have exactly one solution.1. c1 ( 2.1. -2.edu/terms. v 2 = ( 3. This does not mean however. Here is the vector equation for this final part.K . and v 3 = ( 0.1.K . Before proceeding on we should point out that the vectors from part (c) of this were actually the standard basis vectors for ¡ 3 .K .1) will be linearly independent. e n = ( 0. [Return to Problems] Note that we didn’t really need to solve any of the systems above if we didn’t want to. 0 ) . 0.math.

-2 ) . Here is the vector equation we need to solve. -2 ) . 2 ) + c3 ( 4.3. [Solution] (b) v1 = ( -2. -1) = 0 ( c1 . c1 ( -2. 0. -8.c3 = 0 and this has the solution. -3) + c2 ( -2. we’ll not be showing the details of solving the systems of equations so you should verify all the solutions for yourself. c1 (1. 4. Here is the vector equation for this part. technically we don’t need to solve this system for the same reason we really didn’t need to solve the system in the previous part. -2 ) . c1 . -2 ) = 0 -2c1 . -1) .1) + c2 ( -1.1) . -2.2c2 + 4c3 .3c2 . Again. (a) v1 = (1.3. -1) .c3 ) = ( 0. Note that we didn’t really need to solve this system to know that they were linearly dependent. v 2 = ( -1. -2. [Return to Problems] (b) v1 = ( -2. 0 ) The system of equations that we need to solve is. we’ll do the first part in some detail and then leave it to you to verify the details in the remaining parts.c2 + 4c3 = 0 c1 . -3) . -3) .1. (a) v1 = (1. -1. 2 ) and v 3 = ( 4. Also. v 2 = ( -2. 2 ) and v 3 = ( 4. [Solution] (c) v1 = (1.3c1 + 2c2 .1) . v 2 = ( 2. [Solution] (d) v1 = (1.edu/terms. . 2 ) and v 3 = (1. 2 ) . -3) and v 3 = ( 4. -3) and v 3 = ( 4.lamar.math.3. 3 c1 = t 2 c2 = 11 t 4 c3 = t t is any real number We’ve got more than the trivial solution and so these vectors are linearly dependent. Now. From Theorem 2 in the solving systems of equations section we know that if there are more unknowns than equations in a homogeneous system then we will have infinitely many solutions. v 2 = ( -1. -2. -4 ) . [Solution] Solution These will work in pretty much the same manner as the previous set of examples worked. There are more unknowns than equations so the system © 2007 Paul Dawkins 37 http://tutorial.Linear Algebra Example 2 Determine if the following sets of vectors are linearly independent or linearly dependent. v 2 = ( -1.1.aspx .2c3 = 0 The system of equations we’ll need to solve is. -1) .2c2 + 4c3 = 0 -3c1 + 2c2 . 2 ) and v 3 = ( 2. -3) + c3 ( 4. v 2 = ( -2.

c3 = 0 The system of equations that we’ll need to solve this time is. -2. 4.1. -1) . -2.math. -8.edu/terms. [Return to Problems] (d) v1 = (1.Linear Algebra will have infinitely many solutions (so more than the trivial solution) and therefore the vectors will be linearly dependent. We still have solutions other than the trivial solution and so these vectors are linearly dependent.3. -2. 2 ) and v 3 = ( 2. -2.1. [Return to Problems] We should make one quick remark about part (b) of this problem. 0. In this case we had a set of three vectors and one of the scalars was zero. v 2 = ( 2. 2 ) and v 3 = (1.8c3 = 0 -c1 + 3c3 = 0 2c1 + 2c2 . -8.3. [Return to Problems] (c) v1 = (1. c1 = 2t c2 = 0 c3 = t t is any real number In this case one of the scalars was zero. -1. -2. -2 ) . Here is the vector equation for this part.3. v 2 = ( -1. -1.2c3 = 0 -4c1 + 2c2 . c1 (1. 2 ) . -4 ) . 4.1. However. -2 ) = 0 c1 . -2. Note that was it does say however.3.lamar. Here is the solution.3. let’s solve anyway since there is an important idea we need to see in this part.2c3 = 0 This system has only the trivial solution and so these three vectors are linearly independent.c2 + c3 = 0 -2c1 + 3c2 + c3 = 0 3c1 + 4c2 .3. The system of equations is. is that v1 and v 3 are linearly dependent themselves regardless of v 2 . c1 (1. 2 ) + c3 ( 2.1. c1 = 3t 5 c2 = . 0. 2 ) + c2 ( 2.aspx . -4 ) + c2 ( -1.2c2 . The vector equation for this part is. There is nothing wrong with this. -1) = 0 c1 + 2c2 + 2c3 = 0 c1 . This will happen on occasion and as noted this only means that the vectors with the zero scalars are not really required in order to make the set © 2007 Paul Dawkins 38 http://tutorial. 2 ) + c3 (1. The solution to this system is.t 2 c3 = t t is any real number We’ve got more solutions than the trivial solution and so these three vectors are linearly dependent.

v1 . 0. v k } is a set of vectors in ¡ n . In fact. We’re not going to prove this one but we will outline the basic proof. This part has shown that if you have a set of vectors and a subset is linearly dependent then the whole set will be linearly dependent. é 1 0 0ù é0 0 ú . is in fact the zero matrix of the same size as the vectors in the given set. the basic process here is pretty much the same as the previous set of examples.math. To this point we’ve only seen examples of linear independence/dependence with sets of vectors in ¡ n . c1 v1 + c2 v 2 + L + ck v k = 0 Example 3 Determine if the following sets of vectors are linearly independent or linearly dependent. Proof : This is a fairly simply proof. We’ll need to remember that this time the zero vector. If we set up the system of equations corresponding to the equation. and v 3 = ê0 1 0 ú . 1( 0 ) + 0 v1 + 0 v 2 + L + 0 v n = 0 We can see from this that we have a non-trivial solution to this equation and so the set of vectors is linearly dependent. [Solution] 0ú û Solution Okay. Theorem 1 A finite set of vectors that contains the zero vector will be linearly dependent. v 2 . If k > n then the set of vectors is linearly dependent. It just may not appear that way at first however. Let S = {0. v 2 . Often the only way to determine if a set of vectors is linearly independent or linearly dependent is to set up a system as above and solve it. However.K . Theorem 2 Suppose that S = { v1 .edu/terms. v 2 = ê0 0 ë0 0 0 û ë é 1 2ù é 4 (b) v1 = ê ú and v 2 = ê 0 ë 0 -1û ë é 8 -2 ù é -12 (c) v1 = ê ú and v1 = ê -15 ë 10 0 û ë (a) v1 = ê 1ù é0 0 0 ù ú . [Solution] 0û ë û 1ù . We can then set up the following equation. The vectors will therefore be linearly dependent. We should now take a look at some examples of vectors from some other vector spaces. there are a couple of cases were we can get the answer just be looking at the set of vectors. © 2007 Paul Dawkins 39 http://tutorial.K . we saw how to prove this theorem in parts (a) and (b) from Example 2.aspx .lamar. we will get a system of equation that has more unknowns than equations (you should verify this) and this means that the system will have infinitely many solutions. [Solution] -3ú û 3ù . v n } be any set of vectors that contains the zero vector as shown.Linear Algebra linearly dependent.

c1 = 0 c2 = 0 c3 = 0 Of course this isn’t really much of a system as it tells us that we must have the trivial solution and so these matrices (or vectors if you want to be exact) are linearly independent. and v 3 = ê0 1 0 ú . c1 + 4c2 = 0 2c1 + c2 = 0 -c1 .Linear Algebra (a) v1 = ê é 1 0 0ù é 0 0 1ù é0 0 0 ù ú . [Return to Problems] © 2007 Paul Dawkins 40 http://tutorial.) on the left into a single matrix using basic matrix scalar multiplication and addition. 1ù é0 0 ù é 1 2ù é 4 c1 ê + c2 ê ú ú=ê ú ë 0 -1û ë 0 -3û ë0 0 û The system of equations we need to solve here is. ë 0 -1û ë û So we can see that for the most part these problems work the same way as the previous problems did. they’re matrices so let’s call them that…. é c1 0 c2 ù é 0 0 0 ù = ê0 c 0ú ê0 0 0ú 3 û ë û ë Now.3c2 = 0 We’ll leave it to you to verify that the only solution to this system is the trivial solution and so these matrices are linearly independent. This gives the following system of equations. Next.edu/terms.math. Here is the vector equation we need to solve for this part. This means that the three entries in the matrix on the left that are not already zero need to be set equal to zero. We just need to set up a system of equations and solve. ë0 0 0 û ë û ë û c1 v1 + c2 v 2 + c3 v 3 = 0 é 1 0 0ù é0 0 1ù é0 0 0ù é0 0 0ù c1 ê ú + c2 ê0 0 0ú + c3 ê0 1 0ú = ê0 0 0ú ë0 0 0 û ë û ë û ë û We’ll first need to set of the “vector” equation. combine the “vectors” (okay. For the remainder of these problems we’ll not put in the detail that did in the first part. [Return to Problems] (b) v1 = ê 1ù é 1 2ù é 4 ú and v 2 = ê 0 -3ú .lamar. we need both sides to be equal.aspx . v 2 = ê 0 0 0 ú .

15c3 = 0 The solution to this system is. 0ú û 3ù é0 0 ù = 0 ú ê0 0 ú û ë û Here is the vector equation for this part é 8 -2 ù é -12 c1 ê ú + c2 ê -15 ë 10 0 û ë and the system of equations is. the only solution to the vector equation will be the trivial solution and so these polynomials (or vectors if you want to be precise) are linearly independent.aspx . So. [Return to Problems] © 2007 Paul Dawkins 41 http://tutorial. and p3 = x 2 in P2 . we could set up a system of equations here. In order for these two second degree polynomials to be equal the coefficient of each term must be equal. and p3 = x 2 in P2 . however we don’t need to.12c2 = 0 -2c1 + 3c2 = 0 10c1 .x + 7 . we’ve got solutions other than the trivial solution and so these vectors are linearly dependent.Linear Algebra (c) v1 = ê é 8 -2 ù é -12 ú and v1 = ê -15 ë 10 0 û ë 3ù . is the zero function. At this point is it should be pretty clear that the polynomial on the left will only equal if all the coefficients of the polynomial on the left are zero. Let’s first set up the equation that we need to solve. (a) p1 = 1 . and p3 = x 2 . p 2 = x .math. [Return to Problems] Example 4 Determine if the following sets of vectors are linearly independent or linearly dependent. and p3 = x 2 + 1 in P2 . Since we’re actually working in P2 for all these parts we can think of this as the following polynomial. (a) p1 = 1 . these will work in essentially the same manner as the previous problems. [Solution] (b) p1 = x .edu/terms. 8c1 . p 2 = x 2 + 2 x . 3 c1 = t 2 c2 = t t is any real number So.3 . p 2 = x . c1 p1 + c2 p 2 + c3 p3 = 0 c1 (1) + c2 x + c3 x 2 = 0 + 0 x + 0 x 2 Now.lamar. [Solution] Solution Again. 0 = 0 + 0 x + 0 x2 In other words a second degree polynomial with zero coefficients. In this problem set the zero vector. [Solution] (c) p1 = 2 x 2 .2 x + 4 in P2 . p 2 = x 2 + 4 x + 2 . 0.

t 4 © 2007 Paul Dawkins c3 = t t is any real number http://tutorial. 4 ) .3) + c2 ( x 2 + 2 x ) + c3 ( x 2 + 1) = 0 + 0 x + 0 x 2 ( c2 + c3 ) x 2 + ( c1 + 2c2 ) x + ( -3c1 + c3 ) = 0 + 0 x + 0 x 2 Now. 2c1 + c2 + c3 = 0 -c1 + 4c2 .1. Let’s go back and examine the results of the very first example that we worked in this section and in particular let’s start with the final part. In this part the vector equation is. In this part we looked at the vectors v1 = ( 2. [Return to Problems] Now that we’ve seen quite a few examples of linearly independent and linearly dependent vectors we’ve got one final topic that we want to discuss in this section.1) = 0 1 c2 = t 2 42 and found that it had the solution. 2 c1 = .2c3 ) x + ( 7c1 + 2c2 + 4c3 ) = 0 + 0 x + 0 x 2 The system of equation we need to solve is. The vector equation for this part is. p 2 = x 2 + 4 x + 2 .1. c1 ( x . [Return to Problems] (c) p1 = 2 x 2 . -2. 4 ) + c3 ( 0.edu/terms. 3 c1 = .lamar. This leads to the following system of equations. -5. We did this by solving the vector equation.2 x + 4 in P2 . -2.x + 7 ) + c2 ( x 2 + 4 x + 2 ) + c3 ( x 2 .aspx . -5. c1 ( 2. v 2 = ( 3.t 3 1 c2 = t 3 c3 = t t is any real number So. and v 3 = ( 0.2c3 = 0 7c1 + 2c2 + 4c3 = 0 The solution to this system is. we have more solutions than the trivial solution and so these polynomials are linearly dependent.math. p 2 = x 2 + 2 x . as with the previous part the coefficients of each term on the left must be zero in order for this polynomial to be the zero vector.Linear Algebra (b) p1 = x . c1 ( 2 x 2 . 4 ) + c2 ( 3. 4 ) .3 .x + 7 . c2 + c3 = 0 c1 + 2c2 = 0 -3c1 + c3 = 0 The only solution to this system is the trivial solution and so these polynomials are linearly independent. and p3 = x 2 . and p3 = x 2 + 1 in P2 .2 x + 4 ) = 0 + 0 x + 0 x 2 ( 2c1 + c2 + c3 ) x 2 + ( -c1 + 4c2 .1) and determined that they were linearly dependent.

lamar.edu/terms. Let’s see if we can do this with the three vectors from the third part of this example.2 ( 0. 0 ) = c1 ( 0. we were able to write one of the vectors as a linear combination of the other two.1) or. there is no way we can write the first vector as a linear combination of the other two. -5. (1. © 2007 Paul Dawkins 43 http://tutorial.math. and v 3 = ( 0. Say. With a set of linearly dependent vectors we were able to write at least one of them as a linear combination of the other vectors in the set and with a set of linearly independent vectors we were not able to do this for any of the vectors. ( 3.Linear Algebra We knew that the vectors were linearly dependent because there were solutions to the equation other than the trivial solution.1) = 0 2 Now. c1 = - 3 2 c2 = 1 c3 = 2 In fact. Notice as well that we could have just as easily written v1 and a linear combination of v 2 and v 3 or v 3 as a linear combination of v1 and v 2 if we’d wanted to. So.1) and in that part we determined that these vectors were linearly independent. 0 ) . So. -2.1.aspx . let’s plug these values into the vector equation above.1. 0. -5. to see this take a look at Example 2(b).1) = ( 0. 0 ) + c2 ( 0. 4 ) . You should also verify we also do this in any of the other combinations either.1. 2 So. what have we seen here with these two examples. In this part we were looking at the three vectors v1 = (1. In this example we determined that the vectors v1 = ( -2. 0. This will always be the case. 4 ) + ( 3. - 3 ( 2.2 v 3 . if we rearrange this a little we arrive at. On the other hand. In the example of linearly dependent vectors we were looking at above we could write any of the vectors as a linear combination of the others. With a set of linearly independent vectors we will never be able to write one of the vectors as a linear combination of the other vectors in the set. while we can find values of c1 and c2 that will make the second and third entries zero as we need them to we’re in some pretty serious trouble with the first entry. This will not always be the case. if we have a set of linearly dependent vectors then at least one of them can be written as a linear combination of the remaining vectors. 0. -2.1) . v 2 = ( 0. 0. 4 ) = ( 2. 4 ) + 2 ( 0. Let’s see if we can write v1 and a linear combination of v 2 and v 3 . In the vector on the left we’ve got a 1 in the first entry and in the vector on the right we’ve got a 0 in the first entry.1. Let’s take a look at one of them. in a little more compact form : v 2 = 3 2 3 v1 . c1 . c2 ) Now. If we can we’ll be able to find constants c1 and c2 that will make the following equation true. 0 ) .

However. Simply pick a value of t and the rearrange as you need to. This can be a useful idea about linearly independent/dependent vectors on occasion. Doing this in our case we see that we can do one of the following. ( 4.lamar. We also saw that the solution to the equation. -2 ) were linearly dependent. was given by c1 ( -2. -2 ) = -2 ( -2.( 0 )( -1. -2 ) It’s easy in this case to write the first or the third vector as a combination of the other vectors.( 0 )( -1.1) = . there is no way that we can write the second vector as a linear combination of the first and third vectors. -3) 1 2 ( -2. -3) + c3 ( 4.( 4. because the coefficient of the second vector is zero.Linear Algebra v 2 = ( -1. © 2007 Paul Dawkins 44 http://tutorial. What that means here is that the first and third vectors are linearly dependent by themselves (as we pointed out in that example) but the first and second are linearly independent vectors as are the second and third if we just look at them as a pair of vectors (you should verify this). -2 ) = 0 c1 = 2t c2 = 0 c3 = t t is any real number and as we saw above we can always use this to determine how to write at least one of the vectors as a linear combination of the remaining vectors.math.edu/terms. -3) .1) .aspx . -3) and v 3 = ( 4.1) + c2 ( -1.

lamar. 0.1.1) . 0. in matrix form. u3 ) As we saw in the section on Span all we need to do is convert this to a system of equations. v 2 = ( 0. 0 ) and v 2 = ( -1. -2 ) and v 3 = ( -1. [Solution] (c) v1 = (1. -1. 4. -1. 0.1) + c2 ( 0. -4 ) .K .e. [Solution] (b) v1 = (1. Then S is called a basis (plural is bases) for V if both of the following conditions hold.1. v 2 = ( -1. [Solution] (d) v1 = (1.math. and c3 so that a general vector u = ( u1 .1) . v n } is a set of vectors from the vector space V. Definition 1 Suppose S = { v1 . -1. i. Recall as well that if the determinant of the coefficient matrix is non-zero then there will be exactly one solution to this system for each u. First. (a) span ( S ) = V . 2 ) + c3 ( 3. -1) .edu/terms.Linear Algebra Basis and Dimension In this section we’re going to take a look at an important idea in the study of vector spaces. v 2 . (a) v1 = (1. 0 ) and v 3 = ( 0. 0 ) . Now. Let’s take a look at some examples. -1.aspx . (b) S is a linearly independent set of vectors. c2 . c1 (1. and then determine if the coefficient matrix has a non-zero determinant or not. If the determinant of the coefficient matrix is non-zero then the set will span the given vector space and if the determinant of the coefficient matrix is zero then it will not span the given vector space. 2 ) and v 3 = ( 3. u2 . v 2 = ( 0. 0 ) . u2 . 0. S spans the vector space V.1.1) . Example 1 Determine if each of the sets of vectors will be a basis for ¡3 . We will also be drawing heavily on the ideas from the previous two sections and so make sure that you are comfortable with the ideas of span and linear independence. -1) = ( u1 . -1) . 2. [Solution] Solution (a) v1 = (1. v 2 = ( 0. The matrix form of the system is. 0.1. We’ll start this section off with the following definition. é 1 ê -1 ê ê 1 ë © 2007 Paul Dawkins 3ù é c1 ù é u1 ù 1 0 ú êc2 ú = êu2 ú úê ú ê ú 2 -1ú ê c3 ú ê u3 ú ûë û ë û 0 45 http://tutorial. 0. we’ll need to show that these vectors span ¡ 3 and from the section on Span we know that to do this we need to determine if we can find scalars c1 .1) . let’s see what we’ve got to do here to determine whether or not this set of vectors will be a basis for ¡ 3 . 2 ) and v 3 = ( 3. u3 ) from ¡ 3 can be expressed as a linear combination of these three vectors or.1.

é 1 ê -1 ê ê 1 ë 3ù é c1 ù é0 ù 1 0 ú ê c2 ú = ê0 ú úê ú ê ú 2 -1ú ê c3 ú ê0 ú ûë û ë û 0 Also.1) . if these vectors will span ¡ 3 then there will be exactly one solution to the system for each u. 0.1. that this is really just a specific case of the system that we need to solve for the span question. Likewise. -1.1) + c2 ( 0. In Example 4(a) of the section on Span we determined that the standard basis vectors (Interesting name isn’t it? We’ll come back to this in a bit) e1 . Namely here we need to solve. Note however. these vectors will form a basis for ¡ 3 .edu/terms. However. (b) v1 = (1. On the other hand. if the determinant is zero then the vectors will not span ¡3 and will not be linearly independent and so they won’t be a basis for ¡3 .math. © 2007 Paul Dawkins 46 http://tutorial. we’ve done all the work for this one in previous sections. 2 ) + c3 ( 3. 0 ) If this system has only the trivial solution the vectors will be linearly independent and if it has solutions other than the trivial solution then the vectors will be linearly dependent.lamar. e2 and e3 will span ¡ 3 . 0.aspx . as noted above. [Return to Problems] Now. Notice that while we’ve changed the notation a little just for this problem we are working with the standard basis vectors here and so we know that they will span ¡ 3 .1. c1 (1. all that we need to do here is compute the determinant of the coefficient matrix and if it is non-zero then the vectors will both span ¡ 3 and be linearly independent and hence the vectors will be a basis for ¡ 3 . in this case. In this case we know that the trivial solution will be a solution. 0 ) . our only question is whether or not it is the only solution.Linear Algebra Before we get the determinant of the coefficient matrix let’s also take a look at the other condition that must be met in order for this set to be a basis for ¡ 3 . 0. So. é 1 A = ê -1 ê ê 1 ë 3ù 1 0ú ú 2 -1ú û 0 Þ det ( A) = -10 ¹ 0 So. 0 ) and v 3 = ( 0. here is the determinant of the coefficient matrix for this problem. So. From the section on Linear Independence we know that to determine this we need to solve the following equation. -1) = 0 = ( 0. we could use a similar path for this one as we did earlier. 0. v 2 = ( 0. In order for these vectors to be a basis for ¡ 3 then they must be linearly independent. in Example 1(c) from the section on Linear Independence we saw that these vectors are linearly independent.

that these two vectors are linearly independent (you should verify that). We should discuss the name a little more at this point and we’ll do it a little more generally than in ¡ 3 . we can see right away that there will be problems here. 2.math. u3 ) é 1 -1 -1ù é c1 ù é0 ù ê -1 2 4 ú ê c ú = ê 0 ú ê úê 2ú ê ú ê 1 -2 -4 ú ê c3 ú ê0 ú ë ûë û ë û and the matrix form of this is. 4. if we choose u = ( u1 . -4 ) = ( u1 . u2 . 0 ) and v 2 = ( -1. 4.1) + c2 ( -1. u2 . So. c1 (1.Linear Algebra Hence based on all this previous work we know that these three vectors will form a basis for ¡ 3 . 0. Either of which will mean that these three vectors are not a basis for ¡3 . (c) v1 = (1. (d) v1 = (1. The third component of the each of these vectors is zero and hence the linear combination will never have any non-zero third component. -1. u3 ) to be any vector in ¡ 3 with u3 ¹ 0 we will not be able to find scalars c1 and c2 to satisfy the equation above. As we pointed out at the time the three vectors we were looking at were the standard basis vectors for ¡3 . u2 . The vectors © 2007 Paul Dawkins 47 http://tutorial. -4 ) [Return to Problems] In this case we’ve got three vectors with three components and so we can use the same method that we did in the first part.1. -2 ) + c3 ( -1. u3 ) in ¡ 3 there must be scalars c1 and c2 so that. 0 ) . 2. let’s just start this out be checking to see if these two vectors will span ¡ 3 . c1 (1.1) . If these two vectors will span ¡ 3 then for each u = ( u1 . v 2 = ( -1. the vectors are still not a basis for ¡ 3 since they do not span ¡ 3 . [Return to Problems] We can’t use the method from part (a) here because the coefficient matrix wouldn’t be square and so we can’t take the determinant of it. u2 . u3 ) However. 0.1. We’ll leave it to you to verify that det ( A ) = 0 and so these three vectors do not span ¡ 3 and are not linearly independent. Therefore. 0 ) + c2 ( -1.lamar. Therefore. The general equation that needs solved here is. [Return to Problems] Before we move on let’s go back and address something we pointed out in Example 1(b).edu/terms.aspx . these two vectors do not span ¡ 3 and hence cannot be a basis for ¡ 3 . Despite this however. Note however. 0 ) = ( u1 . -2 ) and v 3 = ( -1. -1.

1. In other words. p 2 = x 2 . n e1 = (1.K .math. 0. v 2 = ê 1 0 ú . © 2007 Paul Dawkins 48 http://tutorial. é 1 0ù é0 0 ù é0 1ù é0 0 ù Example 3 The set v1 = ê ú . v 3 = ê0 0ú . this set of vectors is in fact a basis for Pn . 0.K . Example 2 The set p 0 = 1 . p n = x n is a basis for Pn and is usually called the standard basis for Pn . this equation has only the trivial solution and so the matrices are linearly independent. So. but you should be able to modify this appropriately to arrive at a set of standard basis vector for M n m in general. 0 ) e2 = ( 0. Note that we only looked at the standard basis vectors for M 22 . A similar argument can be used for the general case here and we’ll leave it to you to go through that argument. and v 4 = ê0 1ú is a basis for ë0 0 û ë û ë û ë û M 22 and is usually called the standard basis for M 22 . We have yet so show that they are linearly in dependent however. 0 ) L e n = ( 0. 0. p1 = x . Let’s take a look at each of them. 0. This combined with the fact that they span M 22 shows that they are in fact a basis for M 22 . So.K .Linear Algebra will span ¡ as we saw in the section on Span and it is fairly simple to show that these vectors are linearly independent (you should verify this) and so they form a basis for ¡ n . In Example 4(c) of the section on Span we showed that this set will span Pn .1) We also have a set of standard basis vectors for a couple of the other vector spaces we’ve been looking at occasionally. é 1 0ù é0 0 ù é0 1ù é0 0 ù é0 0 ù c1 ê ú + c2 ê 1 0 ú + c3 ê0 0 ú + c4 ê0 1ú = ê0 0 ú ë0 0 û ë û ë û ë û ë û é c1 c3 ù é0 0 ù êc c ú = ê0 0 ú û ë 2 4û ë So. 0. Next let’s take a look at the following theorem that gives us one of the reasons for being interested in a set of basis vectors. In Example 4(b) of the section on Span we showed that this set will span M 22 .aspx .lamar. the only way the matrix on the left can be the zero matrix is for all the scalars to be zero. In some way this set of vectors is the simplest (we’ll see this in a bit) and so we call them the standard basis vectors for ¡ n . following the procedure from the last section we know that we need to set up the following equation. In Example 4(a) of the section on Linear Independence we shows that for n = 2 these form a linearly independent set in P2 .edu/terms. … .

c2 = k2 . 0 = u . Here are the dimensions of some of the vector spaces we’ve been dealing with to this point.e.k1 = 0. since we know that the vectors in S are a basis for V then for any vector u in V we can write it as a linear combination as follows.k2 ) v 2 + L + ( cn . © 2007 Paul Dawkins 49 http://tutorial. By assumption the set is linearly independent and by definition V is the span of S and so the set must be a basis for V. let’s suppose that it is also possible to write it as the following linear combination. v n } . cn .aspx . If S contains a finite number of vectors. v 2 . That means that this equation can only have the trivial solution. and so the two linear combinations were in fact the same linear combination. but we’ll call it that anyway. we must have.edu/terms. is n (i. Or. If V is not a finite dimensional vector space (so S does not have a finite number of vectors) then we call it an infinite dimensional vector space.K .lamar.K . We now need to take a look at the following definition.kn ) v n However. the vector space consisting solely of the zero vector) is zero. v n } is a set of linearly independent vectors then S is a basis for the vector space V = span ( S ) The proof here is so simple that we’re not really going to give it.k1 ) v1 + ( c2 . Theorem 2 Suppose that S = { v1 . By definition the dimension of the zero vector space (i. then we call V a finite dimensional vector space and we say that the dimension of V. because the vectors in S are a basis they are linearly independent. u = k1 v1 + k2 v 2 + L + kn v n If we take the difference of these two linear combinations we get. in other words. K .e. It probably doesn’t really rise to the level of a theorem. v 2 .math.u = ( c1 . the number of basis elements in S). u = c1 v1 + c2 v 2 + L + cn v n Now.k2 = 0. cn = kn But this means that. Proof : First. v n } is a basis for the vector space V then every vector u from V can be expressed as a linear combination of the vectors from S in exactly one way. c1 .Linear Algebra Theorem 1 Suppose that the set S = { v1 . say S = { v1 . Definition 2 Suppose that V is a non-zero vector space and that S is a set of vectors from V that form a basis for V. c2 .K . denoted by dim (V ) .kn = 0 c1 = k1 . We also have the following fact. K . v 2 .

p0 = 1 p1 = x p2 = x2 L pn = xn (c) dim M 2 2 = ( 2 )( 2 ) = 4 since the standard basis vectors for M 22 are. This set will be in either of the two vector spaces above and in the following theorem we can show that there will be no finite basis set for these vector spaces.K . are infinite dimensional vector spaces. 0 ) e2 = ( 0. 0 ) L e n = ( 0. 0.K . w1 = a11v1 + a21 v 2 + L + an1 v n M w 2 = a12 v1 + a22 v 2 + L + an 2 v n w m = a1m v1 + a2 m v 2 + L + an m v n © 2007 Paul Dawkins 50 http://tutorial.math.lamar. e1 = (1. If we take all the polynomials (of all degrees) then we can form a set (see part (b) above for elements of that set) that does not have a finite number of elements in it and yet is linearly independent. (a) If a set has more than n vectors then it is linearly dependent. (a) dim ( ¡ n ) = n since the standard basis vectors for ¡ n are.Linear Algebra Example 4 Dimensions of some vector spaces. This follows from the natural extension of the previous part. F [ a. We now need to take a look at several important theorems about vector spaces. 0. The set of standard basis vectors will be a set of vectors that are zero in all entries except one entry which is a 1. There are nm possible positions of the 1 and so there must be nm basis vectors (e) The set of real valued functions on a interval.edu/terms.aspx . b ] . w m } and suppose that m > n . and the set of continuous ( ) functions on an interval. v 2 .K . 0. but here is something to think about.1) (b) dim ( Pn ) = n + 1 since the standard basis vectors for Pn are. (b) If a set has fewer than n vectors then it does not span V. w 2 . v n } is any basis for V. Proof : (a) Let R = {w1 . One of the more important uses of these two theorems is constructing a set of basis vectors as we’ll see eventually. 0. Theorem 3 Suppose that V is a vector space and that S = { v1 . 0. ( ) é 1 0ù v1 = ê ú ë0 0 û é0 0ù v2 = ê ú ë 1 0û é0 1ù v3 = ê ú ë0 0 û é0 0ù v4 = ê ú ë 0 1û (d) dim M n m = nm .K . Since S is a basis of V every vector in R can be written as a linear combination of vectors from S as follows. The first couple of theorems will give us some nice ideas about linearly independent/dependent sets and spans. b ] . C [ a.K . This is not easy to show at this point.1.

as we have here. but if we assume for a second that R does span V we’ll see that we’ll run into some problems with our basis set S. k1w1 + k2 w 2 + L + km w m = 0 v1 = a11w1 + a21w 2 + L + am1w m M v 2 = a12 w1 + a22 w 2 + L + am 2 w m v n = a1n w1 + a2 n w 2 + L + am n w m Let’s now look at the equation. However.aspx . will have more solutions than the trivial solution and so the vectors in R must be linearly dependent. because m > n there are more unknowns than equations and so by Theorem 2 in the solving systems of equations section we know that if there are more unknowns than equations in a homogeneous system. It’s not so easy to show directly that R will not span V. Therefore the equation. So. a11k1 + a12 k2 + L + a1m km = 0 M a21k1 + a22 k2 + L + a2 m km = 0 an1k1 + an 2 k2 + L + an m km = 0 Now. (b) The proof of this part is very similar to the previous part. So. we want to show that the vectors in R are linearly dependent. © 2007 Paul Dawkins 51 http://tutorial. k1w1 + k2 w 2 + L + km w m = 0 If we plug in the set of linear combinations above for the w i ’s in this equation and collect all the coefficients of the v j ’s we arrive at. ( a11k1 + a12 k2 + L + a1m km ) v1 + ( a21k1 + a22 k2 + L + a2 m km ) v 2 + L + ( an1k1 + an 2 k2 + L + an m km ) v n = 0 Now. the v j ’s are linearly independent and so we know that the coefficients of each of the v j in this equation must be zero. Let’s start with the set R = {w1 . we’ll need to show that there are more solutions than just the trivial solution to the following equation.lamar.K . we’ll assume that R will span V. w 2 . This means that all the vectors in S can be written as a linear combination of the vectors in R or.Linear Algebra Now. w m } and this time we’re going to assume that m < n .edu/terms.math. in this system the ai j ’s are known scalars from the linear combinations above and the ki ’s are unknowns. there will be infinitely many solutions. So we can see that there are n equations and m unknowns. This is called a proof by contradiction. We’ll assume the opposite of what we want to prove and show that this will lead to a contradiction of something that we know is true (in this case that S is a basis for V). This gives the following system of equations.

a11k1 + a12 k2 + L + a1n kn = 0 M a21k1 + a22 k2 + L + a2 n kn = 0 am1k1 + am 2 k2 + L + am n kn = 0 Again. if we substitute the linear combinations of the v i ’s into this. c1 v1 + c2 v 2 + L + cn v n + cn +1u = 0 Now. there are more unknowns than equations here and so there are infinitely many solutions. (a) If S is linearly independent and u is any vector in V that is not in span ( S ) then the set R = S È {u} (i.edu/terms. v 2 . So. our original assumption that R spans V must be wrong. contradicts the fact that u is not in span ( S ) . Therefore R will not span V. let’s form the equation. This contradicts the fact that we know the only solution to the equation is the trivial solution. However. Theorem 4 Suppose S is a non-empty set of vectors in a vector space V. because S is a basis we know that the v i ’s must be linearly independent and so the only solution to this must be the trivial solution.lamar. Then. v 2 . S and S . u} is linearly independent. if cn +1 is not zero we will be able to write u as a linear combination of the v i ’s but this is now.aspx © 2007 Paul Dawkins . we’ve shown that the only solution to c1 v1 + c2 v 2 + L + cn v n + cn +1u = 0 52 http://tutorial. span ( S ) = span ( R ) In other words. Proof : (a) If S = { v1 . (b) If u is any vector in S that can be written as a linear combination of the other vectors in S let R = S .K . v n } we need to show that the set R = { v1 .e. v n .{u} be the set we get by removing u from S.{u} will span the same space.Linear Algebra k1 v1 + k2 v 2 + L + kn v n = 0 Now.math. the set of S and u) is also a linearly independent set.K . Therefore we must have cn +1 = 0 and our equation c1 v1 + c2 v 2 + L + cn v n = 0 c1 = 0 c2 = 0 L cn = 0 But the vectors in S are linearly independent and so the only solution to this is the trivial solution. k1v1 + k2 v 2 + L + kn v n = 0 So. rearrange as we did in part (a) and then setting all the coefficients equal to zero gives the following system of equations. So.

So. v 2 . Now. v n } is a basis for V. Theorem 3 also tells us that if R contains less than n elements then it won’t span V and hence can’t be a basis for V. v 2 . w can be written as a linear combination of all the vectors in S or.K .edu/terms. Then by Theorem 3 above if R contains more than n elements it can’t be a linearly independent set and so can’t be a basis. (b) Let’s suppose that our set is S = { v1 .math. v n .lamar. be written as a linear combination of vectors from S and so is also in span ( S ) .Linear Algebra is c1 = 0 c2 = 0 L cn = 0 cn +1 = 0 Therefore. v 2 . Since we’ve shown that span ( S ) must be contained in span ( R ) and that every vector in span ( R ) must also be contained in span ( S ) this can only be true if span ( S ) = span ( R ) . Finally. by assumption u is a linear combination of the remaining vectors in S or.K . we know that. However. let R be any other basis for V. Proof : Suppose that S = { v1 . w is a linear combination of vectors only in R and so at the least every vector that is in span ( S ) must also be in span ( R ) . but since these are also vectors in S we see that w can also.aspx . u} and so we have R = { v1 . v n } . the vectors in R are linearly independent. if w is any vector in span ( R ) then it can be written as a linear combination of vectors from R. Therefore the only possibility is that R must contain exactly n elements. by default. © 2007 Paul Dawkins 53 http://tutorial. Now plug in the expression for u above to get. u = c1 v1 + c2 v 2 + L + cn v n Next let w be any vector in span ( S ) . So. at the least R can’t contain more than n elements. Theorem 5 Suppose that V is a vector space then all the bases for V contain the same number of vectors. w = k1 v1 + k2 v 2 + L + kn v n + kn+1u w = k1 v1 + k2 v 2 + L + kn v n + kn +1 ( c1v1 + c2 v 2 + L + cn v n ) = ( k1 + kn+1c1 ) v1 + ( k2 + kn +1c2 ) v 2 + L + ( kn + kn+1cn ) v n So. We’ve just shown that every vector in span ( R ) must also be in span ( S ) . First.K . We can use the previous two theorems to get some nice ideas about the basis of a vector space.

math. let’s suppose that S is linearly independent. Also suppose that S is a set that contains exactly n vectors. Theorem 3(a) tells us that any set with more vectors than the basis (i. Now. If R now spans V we’ve got a basis for V and if not add another element to form the new linearly independent set R¢ . greater than n) can’t be linearly independent.Linear Algebra Theorem 6 Suppose that V is a vector space and that dim (V ) = n . If S is linearly dependent then there must be some vector u in S that can be written as a linear combination of other vectors in S and so by Theorem 4(b) we can remove u from S and our new set of n . Therefore. So. Let R be the set that results from removing u from S. Then by Theorem 4(a) the set R is still linearly independent. Then by Theorem 4(b) R will still span V. Theorem 3(b) tells us that any set with fewer vectors than a basis (i.1 vectors will still span V. Okay we should probably see some examples of some of these theorems in action.e. (b) If S is linearly independent but is not a basis for V then it can be enlarged to a basis for V by adding in certain vectors from V. If S does not span V then there must be a vector u that is not in span ( S ) . If R is linearly independent then we have a basis for V and if it is still linearly dependent we can remove another element to form a new set R¢ that will still span V. there is some vector u in S that can be written as a linear combination of the other vectors in S.aspx . (a) If S spans V but is not a basis for V then it can be reduced to a basis for V by removing certain vectors from S. there is a vector u that is not in span ( S ) . If we add u to S the resulting set with n + 1 vectors must be linearly independent by Theorem 4(a). Therefore.lamar. So. S will be a basis for V if either V = span {S } or S is linearly independent. S must be linearly independent and hence S is a basis for V.edu/terms. On the other hand. Theorem 7 Suppose that V is a finite dimensional vector space with dim (V ) = n and that S is any finite set of vectors from V. (b) If S is linearly independent but not a basis then it must not span V. Proof : (a) If S spans V but is not a basis for V then it must be a linearly dependent set. less than n in this case) can’t span V. Therefore.e. add u to S to form the new set R. We continue in this way until we’ve reduced S down to a set of linearly independent vectors and at that point we will have a basis of V. However. S must span V and hence S is a basis for V. Proof : First suppose that V = span {S } . Continue in this fashion until we reach a set with n vectors and then by Theorem 6 this set must be a basis for V. © 2007 Paul Dawkins 54 http://tutorial.

Again. that is what we need to look for. This means that we can remove any of these since we could write any one of them as a linear combination of the other two. First. [Solution] (b) p 0 = 2 . v 2 = ( 0. 0 ) . 4. let’s remove v 3 for no other reason that the entries in this vector are larger than the others. p3 = 2 x + 7 and p 4 = 5 x 2 . 0 ) for ¡ 3 . 0. 0 ) . c2 = 6t c3 = -2t c4 = t t is any real number we can see that these in fact are linearly dependent vectors. it looks like p3 is a linear combination of p 0 and p1 (you should verify this) and so we can remove p3 and the set will still span P2 . First. p1 = -4x . So. Also. you should verify that the set of vectors does indeed span P2 . remember that each vector we remove must be a linear combination of some of the other vectors. Theorem 4(b) tells us that if we remove a vector that is a linear combination of some of the other vectors we won’t change the span of the set. (a) v1 = (1. v 3 = ( 0. 2. [Solution] Solution First. 2. 0.1 for P2 . (b) p 0 = 2 .math. -1) v 4 = ( 0.edu/terms. For instance if we removed v1 the set would no longer span ¡3 . 0. 0 ) v 2 = ( 0. -3 ) and v 4 = ( 0. v 2 = ( 0. 2. p3 = 2 x + 7 and p 4 = 5 x 2 . p 2 = x 2 + x + 1 . © 2007 Paul Dawkins 55 http://tutorial. because dim ( P2 ) = 3 we know that we’ll need to remove two of the vectors. it looks like the last three vectors are probably linearly dependent so if we set up the following equation c2 v 2 + c3 v 3 + c4 v 4 = 0 and solve it. notice that provided each of these sets of vectors spans the given vector space Theorem 7(a) tells us that this can in fact be done. 0 ) For the practice you should verify that this set does span ¡ 3 and is linearly independent. [Return to Problems] We’ll go through this one a little faster. The following set the still spans ¡ 3 and has exactly 3 vectors and so by Theorem 6 it must be a basis for ¡ 3 . we can’t just remove any of the vectors. v1 = (1. p 2 = x 2 + x + 1 . -1) .aspx . 4. However. -1) . This leaves us with the following set of vectors. 0 ) for ¡ 3 . We will leave it to you to verify that this set of vectors does indeed span ¡ 3 and since we know that dim ¡ 3 = 3 we can see that we’ll need to remove one vector from the list in order to get ( ) down to a basis. -3 ) and v 4 = ( 0. You should verify this.Linear Algebra Example 5 Reduce each of the following sets of vectors to obtain a basis for the given vector space. Now. So.1 for P2 .1. (a) v1 = (1. v 3 = ( 0.1.1. p1 = -4x .lamar. but you can also quickly see that only v1 has a non-zero first component and so will be required for the vectors to span ¡ 3 .

0. it looks like p 2 can easily be written as a linear combination of the remaining vectors (again.1) Since this set is still linearly independent and now has 4 vectors by Theorem 6 this set is a basis for ¡ 4 (you should verify this). Now. [Solution] (b) v1 = ê é 1 0ù é 2 ú and v 2 = ê -1 ë0 0 û ë 0ù in M 22 . 0 ) . v1 = (1. We now have the following set. (a) v1 = (1. 0.lamar. Also. Therefore.1. (a) v1 = (1.1. 0 ) v 3 = (1.1) The last one seems to be in keeping with the pattern of the original three vectors so we’ll use that one to get the following set of four vectors. [Return to Problems] © 2007 Paul Dawkins 56 http://tutorial. v 2 = (1.1) ( 0.1. 0 ) .1. v 2 . 0. 0. 0 ) . all that we need to do is take any vector that has a non-zero fourth component and we’ll have a vector that is outside span { v1 . v 2 .0. v 2 = (1.1. 0. v 3 = (1. 0 ) v 2 = (1. 0 ) .1.math. v 3 = (1. Here are some possible vectors we could use. [Return to Problems] Example 6 Expand each of the following sets of vectors into a basis for the given vector space.edu/terms.1 which has 3 vectors and will span P2 and so it must be a basis for P2 by Theorem 6. we need to find a vector that is not in the span of the given vectors. 2 ) ( 6. -4. -3. 0.1. please verify this) and so we can remove that one as well. We’ll leave it to you to verify that these vectors are linearly independent. 0. 0.0 ) in ¡ 4 . 0.1 Now. v 3 } .1.1. dim ¡ 4 = 4 and so it looks like we’ll just need to add in a single vector to get a basis. -1) (1.1.aspx .1. This will in turn give us a set of 4 linearly independent vectors and so by Theorem 6 will have to be a basis for ¡ 4 . This means that all the vectors that are in span { v1 .1. 0 ) v 4 = (1. ( ) ( 0. This is easy to do provided you notice that all of the vectors have a zero in the fourth component. Theorem 4(a) tells us that provided the vector we add in is not in the span of the original vectors we can retain the linear independence of the vectors.1. 0. 2.0 ) in ¡ 4 . 0. v 3 } will have a zero in the fourth component. p0 = 2 p1 = -4 x p4 = 5x2 .Linear Algebra p0 = 2 p1 = -4 x p2 = x2 + x + 1 p4 = 5x2 . [Solution] 0ú û Solution Theorem 7(b) tells us that this is possible to do provided the sets are linearly independent.

let’s again note that because of our choice of v 3 all the vectors in span { v1 . v 2 } will have zeroes in the second column so anything that doesn’t have a zero in at least one entry in the second column will work for v 3 . v 2 . 0ù 2ú û é0 1ù v3 = ê ú ë0 1û é -3 v4 = ê ë 0 0ù 2ú û é 1 0ù v1 = ê ú ë0 0 û space with dimension of 4. é 2 v2 = ê ë -1 0ù 0ú û and they will be a basis for M 22 since these are four linearly independent vectors in a vector [Return to Problems] We’ll close out this section with a couple of theorems and an example that will relate the dimensions of subspaces of a vector space to the dimension of the vector space itself. Now. We’ll choose the following for v 3 . Again. first notice that all the vectors in span { v1 . In this case we’ll use. v 2 } and the second vector we add cannot be in ( ) span { v1 . v 3 } .Linear Algebra (b) v1 = ê é 1 0ù é 2 ú and v 2 = ê -1 ë0 0 û ë 0ù in M 22 . v 2 . Here is the list of vectors that we’ve got to this point. © 2007 Paul Dawkins 57 http://tutorial. 0ú û The two vectors here are linearly independent (verify this) and dim M 2 2 = 4 and so we’ll need to add in two vectors to get a basis.aspx . we’ll go with something that is probably not the best choice if we had to work with this basis. v 3 } where v 3 is the new vector we added in the first step. There are.lamar. v 2 . However. but let’s not get too locked into always taking the easy choice. on occasion.edu/terms. It would have been easier to choose something that had a zero in one of the entries of the second column. So. The first vector we add cannot be in span { v1 . v 3 } will have identical numbers in both entries of the second column and so we can chose any new vector that does not have identical entries in the second column and we’ll have something that is outside of span { v1 . We will have to do this in two steps however. é 1 0ù v1 = ê ú ë0 0 û é 2 v2 = ê ë -1 0ù 0ú û é0 1ù v3 = ê ú ë0 1û Now. é0 v3 = ê ë0 1ù 1ú û Note that this is probably not the best choice since its got non-zero entries in both entries of the second column.math. we need to find a fourth vector and it needs to be outside of span { v1 . if we don’t do that this will allow us make a point about choosing the second vector. v 3 } . v 2 . reason to choose vectors other than the “obvious” and easy choices. é -3 v4 = ê ë 0 This gives us the following set of vectors.

S must be a basis for V as well. Since we’ve assumed that W is not finite dimensional we know that S will not have a finite number of vectors in it. Therefore W must be finite dimensional as well. We should probably work one quick example illustrating this theorem. In this case S will be a set of n linearly independent vectors in a vector space of dimension n (since dim (W ) = dim (V ) ) and so by Theorem 6. since S is a basis for W we know that they must be linearly independent and we also know that they must be vectors in V. The only way for this to be true is if we have W =V . means that we’ve got a set of more than n vectors that is linearly independent and this contradicts the results of Theorem 3(a). Proof : Suppose that dim (V ) = n . So. Let’s also suppose that W is not finite dimensional and suppose that S is a basis for W. Proof : By Theorem 8 we know that W must be a finite dimensional vector space and so let’s suppose that S = {w1 . © 2007 Paul Dawkins 58 http://tutorial. since S is also a basis for W this means that u must also be in W.K . If S is a basis for V then by Theorem 5 we have that dim (V ) = dim (W ) = n . Now.lamar.aspx . However. However. Theorem 9 Suppose that W is a subspace of a finite dimensional vector space V then dim (W ) £ dim (V ) and if dim (W ) = dim (V ) then in fact we have W = V . So. let’s just assume that all we know is that dim (W ) = dim (V ) . w n } is a basis for W. This however.Linear Algebra Theorem 8 Suppose that W is a subspace of a finite dimensional vector space V then W is also finite dimensional. On the other hand. we’ve shown that in every case we must have dim (W ) £ dim (V ) . if S is not a basis for V by Theorem 7(b) (the vectors of S must be linearly independent since they form a basis for W) it can be expanded into a basis for V and so we then know that dim (W ) < dim (V ) . This means that any vector u from V can be written as a linear combination of vectors from S.edu/terms. and because W is a subspace of V we know that every vector in W is also in V. we’ve just shown that every vector in V must also be in W. w 2 .math. S is either a basis for V or it isn’t a basis for V. We can actually go a step further here than this theorem. Now.

One that contains only terms with a t in them and one that contains only term with an s in them. 2 1 x1 = t + s 3 3 x4 = t x5 = s x2 = 0 1 8 x3 = t + s 3 3 s and t are any numbers Now. 1÷ 3 è3 ø and so the null space of A is spanned by these two vectors. This is actually easier to find than you might think. 1. x2 . recall that the null space of an n ´ m matrix will be a subspace of ¡ m so the null space of this matrix must be a subspace of ¡ 5 and so its dimension should be 5 or less. 1 8 æ2 ö æ1 ö x = ç t . 0. To verify this we’ll need the basis for the null space. t . t . t . 0. . s ÷ 3 3 è3 3 ø Now. 0.edu/terms. The null space will consist of all vectors in ¡ 5 that have the form. Then factor the t and s out of the vectors. . 0 ÷ 3 è3 ø 8 æ1 ö v 2 = ç . 0. 0. 0. 0. 0 ÷ + s ç . x = ( x1 . . 1. 0 ÷ + ç s. You should also verify that these two vectors are linearly independent and so they in fact form a basis for the null space of A. t + s.Linear Algebra Example 7 Determine a basis and dimension for the null space of 2 -2 -4 3ù é 7 ê -3 -3 0 2 A=ê 1ú ú ê 4 -1 -8 0 20 ú ë û Solution First recall that to find the null space of a matrix we need to solve the following system of equations. 0. 0. x5 ) 1 8 æ2 1 ö = ç t + s.math. © 2007 Paul Dawkins 59 http://tutorial. é x1 ù 2 -2 -4 3ù ê x2 ú é0 ù é 7 ê ú ê -3 -3 0 2 1ú ê x3 ú = ê0 ú ê úê ú ê ú ê 4 -1 -8 0 20 ú ê x4 ú ê0 ú ë û ë û ê x5 ú ë û We solved a similar system back in Example 7 of the Solving Systems of Equation section so we’ll leave it to you to verify that the solution is. 0. 1÷ 3 3 è3 ø è3 ø So. x3 . split this up into two vectors. s ÷ 3 3 è3 ø è3 ø 1 8 æ2 ö æ1 ö = t ç .aspx . s. we can see that the null space is the space that is the set of all vectors that are a linear combination of 1 æ2 ö v1 = ç . x4 .lamar. This also means that the null space of A has a dimension of 2 which is less than 5 as Theorem 9 suggests it should be. .

edu/terms.aspx .Linear Algebra © 2007 Paul Dawkins 60 http://tutorial.math.lamar.

The following definition will help with this.1. Since u is a vector in V it can be expressed as a linear combination of the vectors from S as follows. -1. 0 ) .5.1) The standard basis for any vector space is generally the easiest to work with. 0. © 2007 Paul Dawkins 61 http://tutorial. 0 ) e2 = ( 0. u = c1 v1 + c2 v 2 + L + cn v n The scalars c1 . although not all as we’ll see shortly. for example the e1 = (1. v 2 . v n } is a basis for a vector space V and that u is any vector from V.K . This means that every vector in ¡3 . It is very easy to confuse the coordinate vector/matrix with the vector itself if we aren’t paying attention. To start this section off we’re going to first need a way to quickly distinguish between the various linear combinations we get from each basis. 0. In this section we’re going to take a look at a way to move between two different bases for a vector space and see how to write a general vector as a linear combination of the vectors from each basis.K . The coordinate vector of u relative to S is denoted by ( u ) S and defined to be the following vector in ¡ n . v 3 = ( 3. c2 .math. so be careful.edu/terms. Definition 1 Suppose that S = { v1 . cn are called the coordinates of u relative to the basis S. 0.Linear Algebra Change of Basis In Example 1 of the previous section we saw that the vectors v1 = (1. c2 . can be written as a linear combination of these three vectors. cn ) Note that by Theorem 1 of the previous section we know that the linear combination of vectors from the basis will be unique for u and so the coordinate vector ( u ) S will also be unique. In most cases. not the least of which is the standard basis for ¡ 3 .aspx . ( u ) S = ( c1 .K . Also. on occasion it will be convenient to think of the coordinate vector as a matrix. Of course this is not the only basis for ¡ 3 . -1) formed a basis for ¡3 . the coordinate vector/matrix is NOT the vector itself that we’re after.1) . 2 ) and vector x = (10. There are many other bases for ¡ 3 out there in the world.lamar. [ u ]S é c1 ù êc ú = ê 2ú êMú ê ú ëcn û At this point we should probably also give a quick warning about the coordinate vectors. It is nothing more than the coefficients of the basis vectors that we need in order to write the given vector as a linear combination of the basis vectors. 0 ) e3 = ( 0. In these cases we will call it the coordinate matrix of u relative to S. but unfortunately there are times when we need to work with other bases. v 2 = ( 0. The coordinate matrix will be denoted and defined as follows.1.

e3 } . 0 ) = 10e1 + 5e 2 + 0e3 and so the coordinate vector for x relative to the standard basis vectors for ¡ 3 is. v 3 } where. -1. Now. We saw how to solve this kind of vector equation in both the section on Span and the section on Linear Independence. in this case we’ll have a little work to do.1) . [Solution] [Solution] (b) The basis A = { v1 .1. -1. c1 + 3c3 = 10 -c1 + c2 = 5 c1 + 2c2 . 2 ) and v 3 = ( 3.c3 = 0 We’ll leave it to you to verify that the solution to this system is. We need to set up the following system of equations.Linear Algebra Let’s see some examples of coordinate vectors. 0. -1.edu/terms.1) . In this case the linear combination is simple to write down.5.3. v 2 .5.5. 4 ) [Return to Problems] © 2007 Paul Dawkins 62 http://tutorial. (b) The basis A = { v1 . We’ll first need to set up the following vector equation. e3 } . in the case of the standard basis vectors we’ve got that. (a) The standard basis vectors for ¡ 3 . x = (10. 0 ) = c1 (1. v 2 . Example 1 Determine the coordinate vector of x = (10. (a) The standard basis vectors for ¡ 3 . of course.5. v 2 = ( 0.aspx .5. 0 ) relative to the following bases. 2 ) and v 3 = ( 3. 2 ) + c3 ( 3. S = {e1 . ( x )S = (10. v 2 = ( 0. c2 and c3 .1. 0 ) as a linear combination of the given basis vectors.lamar. e 2 . -1) and we’ll need to determine the scalars c1 . ( x ) A = ( -2. S = {e1 . The coordinate vectors relative to the standard basis vectors is just the vector itself. what makes the standard basis vectors so nice to work with.5. 0. e 2 . v1 = (1. 0 ) So. v1 = (1. c1 = -2 c2 = 3 c3 = 4 The coordinate vector for x relative to A is then. v 3 } where. -1) . ( x )S = (10. Solution In each case we’ll need to determine who to write x = (10. 0.1) + c2 ( 0.1. 0 ) = x [Return to Problems] this is. -1) . (10.math.

aspx . [Solution] (b) The basis for P2 .1 . but they are a little different so we can expect the coordinate vector to change. p 2 = -4x . A = {p1 . c2 and c3 for the following linear combination. { } So. Here is the solution. [Return to Problems] (b) The basis for P2 . the coordinate vector for p relative to the standard basis vectors is. Okay. -2. x. this set is similar to the standard basis vectors. c1 = 23 10 c2 = 1 2 c3 = 3 5 The coordinate vector for p relative to this basis is then.edu/terms. We’ll need to find scalars c1 . (a) The standard basis for P2 S = 1.2 x + 3 x 2 relative to the following bases. x 2 . Note as well that we proved in Example 5(b) of the previous section that this set is a basis. p 2 . it’s already written in that way. and p3 = 5 x 2 . 4 . p 2 . and p3 = 5 x 2 . x. ( p )S = ( 4.1 . p3 } . However. p 2 = -4x . [Solution] { } Solution (a) The standard basis for P2 S = 1. The will mean solving the following system of equations.2 x + 3x 2 = c1p1 + c2p 2 + c3p3 = c1 ( 2 ) + c2 ( -4 x ) + c3 ( 5 x 2 . where p1 = 2 . we need to write p as a linear combination of the standard basis vectors in this case.c3 = 4 5c3 = 3 -4c2 = -2 This is not a terribly difficult system to solve. where p1 = 2 . p3 } . (p ) A = æ ç 23 1 3 ö . So. ÷ è 10 2 5 ø [Return to Problems] © 2007 Paul Dawkins 63 http://tutorial.Linear Algebra As always we should do an example or two in a vector space other than ¡ n . A = {p1 . . x 2 . Example 2 Determine the coordinate vector of p = 4 .math.3) The ease with which we can write down this vector is why this set of vectors is standard basis vectors for P2 .1) 2c1 .lamar.

ê ú. 0.edu/terms. ë0 0 û ë û é0 1ù é -3 0 ù v3 = ê ú . c3 and c4 for the following linear combination.1. v 2 = ê -1 0 ú . -4 ) [Return to Problems] (b) The basis for M 22 . Here is the solution. v 3 . v 3 . é -1 0 ù é 1 0ù é 2 ê 1 -4 ú = c1 ê0 0 ú + c2 ê -1 ë û ë û ë 0ù é0 ú + c3 ê 0 0û ë 1ù é -3 ú + c4 ê 0 1û ë 0ù 2ú û Adding the matrices on the right into a single matrix and setting components equal gives the following system of equations that will need to be solved.Linear Algebra é -1 0 ù Example 3 Determine the coordinate vector of v = ê ú relative to the following bases. v 2 . v 2 = ê -1 ë0 0 û ë 0ù . c1 + 2c2 . It is very easy to write any 2 ´ 2 matrix as a linear combination of these vectors. as usual.3c4 = -1 -c2 = 1 c3 = 0 c3 + 2c4 = -4 Not a bad system to solve. Here it is for this case. [Solution] ë0 1û ë û Solution (a) The standard basis of M 22 . but won’t be too bad. 0ú û é0 v3 = ê ë0 1ù é -3 ú .ê ú ý . ý. and v 4 = ê 0 2 ú . S = í ê î ë0 ì é 1 0 ù é0 0 ù é0 1ù é0 0 ù ü . ë 1 -4 û ì é 1 0 ù é0 0 ù é0 1ù é0 0ù ü (a) The standard basis of M 22 . v 2 .lamar. c2 . We’ll need to find scalars c1 . 0 ú ê 1 0 ú ê0 0 ú ê0 1ú þ û ë û ë û ë û As with the previous two examples the standard basis is called that for a reason. .math. A = { v1 . . c1 = -5 © 2007 Paul Dawkins c2 = -1 64 c3 = 0 c4 = -2 http://tutorial. 2ú û This one will be a little work. A = { v1 . v 4 } where v1 = ê é 1 0ù é 2 ú .aspx . and v 4 = ê 0 1û ë 0ù . v 4 } where v1 = ê ú .ê ú. [Solution] î ë0 0 û ë 1 0 û ë0 0 û ë0 1û þ é 1 0ù é 2 0ù (b) The basis for M 22 . é -1 0 ù é 1 0ù é0 0 ù é0 1ù é0 0 ù ê 1 -4 ú = ( -1) ê0 0 ú + (1) ê 1 0 ú + ( 0 ) ê0 0 ú + ( -4 ) ê0 1ú ë û ë û ë û ë û ë û The coordinate vector for v relative to the standard basis is then. S = í ê ú. ( v )S = ( -1.

Recall that the coordinate vector is nothing more than the scalars in the linear combination and so all we need to do is reform the linear combination and then multiply and add everything out to determine the vector. v 2 = ( 0. but they are relative to different orderings of the basis vectors. Example 4 The vectors v1 = (1. Actually. -1. the second scalar in the coordinate vector is the coefficient for the second vector listed. This is called a change of basis. -1) = ( 27. 0.1. it will be easier to convert the coordinate matrix for a vector.1) = ( 5. these are both the same coordinate vector. 2 ) and v 3 = ( 3. -1. (a) ( x ) A = ( 3. etc. -4. The “old” basis. v1} be different orderings of these vectors and determine the vector in ¡ 3 that has the following coordinate vectors. but these are essentially the same thing as the coordinate vectors so if we can convert one we can convert the other. -5. © 2007 Paul Dawkins 65 http://tutorial.15 ) (b) And here is the work for this part. x = 3 (1. Let A = { v1 . -1.Linear Algebra The coordinate vector for v relative to this basis is then. and the “new” basis. -1) form a basis for ¡3 . 2 ) + ( 8 )( 3. ( v ) A = ( -5.edu/terms.1. 0. So let’s start off and assume that V is a vector space and that dim (V ) = 2 . Determining the vector in ¡ 3 for each is a simple thing to do. -7 ) x = 3 ( 0.1) . Let’s also suppose that we have two bases for V. Now that we’ve got the coordinate vectors out of the way we want to find a quick and easy way to convert between the coordinate vectors from one basis to a different basis.1) + ( -1)( 0. we clearly get different vectors simply by rearranging the order of the vectors in our basis. The first scalar is the coefficient of the first vector listed in the set.aspx B = { v1 . -1.math. -1. to see this let’s take a look at the following example. The one thing that we need to be careful of order however. -1. So.lamar.8 ) Solution So. 0. We will develop the method for vectors in a 2-dimensional space (not necessarily ¡ 2 ) and in the process we will see how to do this for any vector space.8 ) (b) ( x ) B = ( 3. (a) Here is the work for this part. 0. -1) + ( 8 )(1. -2 ) [Return to Problems] Before we move on we should point out that the order in which we list our basis elements is important. v 2 } . v 3 . v 2 . v 3 } and B = { v 2 . 2 ) + ( -1)( 3.1.

We’re converting to a new basis C and yet we’ve found a way to instead find the coordinate matrix for u relative to the old basis B and © 2007 Paul Dawkins 66 http://tutorial. [u ]B = P [u]C Note that this may seem a little backwards at this point. In terms of the new basis. [u ]B = êb c 1 + d c2 ú = ê ë 1 2 éac +cc ù û é a c ù é c1 ù é a c ù úê ú = ê ú [u ]C ë b d û ëc2 û ë b d û So. This gives. C. we can write u as. we can convert the coordinate matrix for u relative to the new basis C into a coordinate matrix for u relative to the old basis B as follows. [ w 1 ]B = ê éa ù ú ëbû u = c1w1 + c2 w 2 [ w 2 ]B = ê é cù ú ëd û Next.math. we know how to write the basis vectors from C as linear combinations of the basis vectors from B so substitute these into the linear combination for u above. [u ]B = êb c 1 + d c2 ú û We can now do a little rewrite as follows. let u be any vector in V. because B is a basis for V we can write each of the basis vectors from C as a linear combination of the vectors from B. [u ]C = êc1 ú ë 2û Now.lamar. if we define P to be the matrix. éc ù u = c1 ( a v1 + b v 2 ) + c2 ( c v1 + d v 2 ) u = ( a c1 + c c2 ) v1 + ( b c1 + d c2 ) v 2 éac +cc ù ë 1 2 Rearranging gives the following equation. and so its coordinate matrix relative to C is. w1 = a v1 + b v 2 w 2 = c v1 + d v 2 This means that the coordinate matrices of the vectors from C relative to the basis B are. We now know the coordinate matrix of u is relative to the “old” basis B.aspx .Linear Algebra C = {w 1 . Namely.edu/terms. é a cù P=ê ú ëb dû where the columns of P are the coordinate matrices for the basis vectors of C relative to B. w 2 } Now.

edu/terms.K . -1.3. B = {e1 . However. When we go to work with them we’ll need to convert to them to a coordinate matrix. [u ]B = P [u]C We should probably take a look at an example or two at this point. 4 ) . v1 = (1. When the basis we’re going to (B in this case) is the standard basis vectors for the vector space computing the transition matrix is generally pretty simple. Recall that the columns of P are just the coordinate matrices of the vectors in C relative to B.1. relative to B. -8 ) . is then related to the coordinate matrix of u relative to C by the following equation. -1) . and the basis C = { v1 . Therefore. v 3 } where. the coordinate matrix in this case is.1) . P = é [ w1 ]B ë [ w 2 ]B L [ w n ]B ù û where the ith column of P is the coordinate matrix of w i relative to B. when B is the standard basis vectors we saw in Example 1 above that the coordinate vector (and hence the coordinate matrix) is simply the vector itself. [Solution] (d) Use the result of part (a) to compute [u ]B given ( u )C = ( 9. e3} . v 2 . Also. 0 ) .math. 2 ) and v 3 = ( 3. (a) Find the transition matrix from C to B. Definition 2 Suppose that V is a n-dimensional vector space and further suppose that B = { v1 . as we’ll see we can use this process to go the other way around. [Solution] (f) Use the result of part (b) to compute [u ]C given ( u ) B = ( -6. 0. [Solution] (e) Use the result of part (b) to compute [u ]C given ( u ) B = (10.aspx . é 1 P = ê -1 ê ê 1 ë 3ù 1 0ú ú 2 -1ú û 0 [Return to Problems] © 2007 Paul Dawkins 67 http://tutorial. [Solution] (c) Use the result of part (a) to compute [u ]B given ( u )C = ( -2. it could be that we have a coordinate matrix for a vector relative to the new basis and we need to determine what the coordinate matrix relative to the old basis will be and this will allow us to do that.5. (a) Find the transition matrix from C to B. [Solution] (b) Find the transition matrix from B to C. v 2 = ( 0. e 2 . w n } are two bases for V. 2 ) . [Solution] Solution Note as well that we gave the coordinate vector in the last four parts of the problem statement to conserve on space. The transition matrix from C to B is defined to be. 7.lamar. -1. v 2 . The coordinate matrix of a vector u in V. However. Example 5 Consider the standard basis for ¡3 . v n } and C = {w1 . Here is the formal definition of how to perform a change of basis between two basis sets.K .Linear Algebra not the other way around. w 2 .

[Return to Problems] (c) Use the result of part (a) to compute [u ]B given ( u )C = ( -2. we’ve done most of the work for this problem. [Return to Problems] © 2007 Paul Dawkins 68 http://tutorial. [ u ]B é 1 = ê -1 ê ê 1 ë 3ù é -2 ù é10 ù 1 0 ú ê 3ú = ê 5 ú úê ú ê ú 2 -1ú ê 4 ú ê 0 ú ûë û ë û 0 Sure enough we got the coordinate matrix for the point that we converted to get ( u )C = ( -2. To find this transition matrix we need the coordinate matrices of the standard basis vectors relative to C. do not make the mistake of thinking that the transition matrix here will be the same as the transition matrix from part (a).3. First. notice we used a slightly different notation for the transition matrix to make sure that we can keep the two transition matrices separate for this problem.Linear Algebra (b) Find the transition matrix from B to C.lamar. It won’t be.v3 10 10 10 e1 = The coordinate matrices for each of this is then. We will leave it to you to verify the following linear combinations. Here is the matrix multiplication for this part. 1 é 10 ù 1 = ê 10 ú ê ú 3 ê 10 ú ë û [e1 ]C [e2 ]C é P¢ = ê ê ê ë 1 10 1 10 3 10 é. 1 1 3 v1 + v 2 + v 3 10 10 10 3 2 1 e 2 = .3.v1 + v 2 + v 3 5 5 5 3 3 1 e3 = v1 + v 2 .ú û 3 10 3 10 1 10 3 é 10 ù 3 = ê 10 ú ê ú 1 ê . a significantly different matrix as suggested at the start of this problem. This means that we need to write each of the standard basis vectors as linear combinations of the basis vectors from C.math. Okay.3 ù 5 ê 2ú = ê 5ú ê 1ú ë 5û -3 5 2 5 1 5 [e3 ]C ù ú ú . Note as well that we already know what the answer to this is from Example 1 above. 4 ) . The remaining steps are just doing some matrix multiplication. Also. 4 ) from Example 1.10 ú ë û The transition matrix from B to C is then.aspx .edu/terms. So.

÷ . 2 ) . since B is è 5 5 5ø the standard basis vectors we know that the vector from ¡ 3 that we’re starting with is ( -6. Remember that the coordinate matrix/vector is not the vector itself. [Return to Problems] (f) Use the result of part (b) to compute [u ]C given ( u ) B = ( -6. -8 ) . However. 2 ) .ú ê 0ú ê 4 ú ûë û ë û 3 10 3 10 1 10 And again.3 ú û ë û ë 5û 3 10 3 10 1 10 So what does this give us? We’ll first we know that ( u )C = ç - Recall that when dealing with the standard basis vectors for ¡ 3 the coordinate matrix/vector just © 2007 Paul Dawkins 69 æ 21 14 3 ö .aspx . [Return to Problems] (e) Use the result of part (b) to compute [u ]C given ( u ) B = (10. Also.21 ù 5 ú ê 7 ú = ê 14 ú úê ú ê 5 ú . what have we learned here? Well. we were given the coordinate vector of a point relative to C. Again. . -10. In this case the vector is. 7.Linear Algebra (d) Use the result of part (a) to compute [u ]B given ( u )C = ( 9. we really did need to determine the coordinate matrix of the vector relative to the “old’ basis here since that allowed us to quickly determine just what the vector was. we got the result that we would expect to get. here we are really just verifying the result of Example 1 in this part. Here is the matrix multiplication for this part. The matrix multiplication for this part is. Since the vectors in C are not the standard basis vectors we don’t really have a frame of reference for what this vector might actually look like. with this computation we know now the coordinates of the vectors relative to the standard basis vectors and this means that we actually know what the vector is.ú ê 2ú ê . Here is the matrix multiplication for this part. -1. only the coefficients for the linear combination of the basis vectors.5. 0 ) . 7.15) So.lamar. 3 -5 2 5 1 5 [u ]C é =ê ê ê ë 1 10 1 10 3 10 ù é -6 ù é . u = ( -15. [ u ]B é 1 = ê -1 ê ê 1 ë 3ù é 9 ù é -15ù 1 0 ú ê -1ú = ê -10 ú úê ú ê ú 2 -1ú ê -8ú ê 15ú ûë û ë û 0 So. even though we’re considering C to be the “new” basis here. http://tutorial. as you can see.math.edu/terms. . [u ]C é =ê ê ê ë 1 10 1 10 3 10 3 -5 2 5 1 5 ù é10ù é -2 ù ú ê 5 ú = ê 3ú úê ú ê ú .

( -6. 2 ) = - 21 14 3 v1 + v 2 . Doing this gives. [Solution] (b) Determine the polynomial that has the coordinate vector ( p )C = ( -4. We know what the coordinates of the polynomial are relative to C. So. Now. it would be somewhat illustrative to use the transition matrix to answer this question. the second row will be the coefficient of x from each basis vector and the third column will be the coefficient of x 2 from each basis vector. 7.edu/terms.1 . x. x 2 } and the basis C = {p1 .Linear Algebra also happens to be the vector itself. where p1 = 2 .aspx . B = {1. However. [p ]B = P [p ]C é 2 0 = ê 0 -4 ê ê 0 0 ë -1ù é -4 ù é -19 ù 0 ú ê 3ú = ê -12 ú úê ú ê ú 5ú ê 11ú ê 55ú ûë û ë û So. and p3 = 5 x 2 . Again. p 3} . p 2 . we need to find [p ]B and luckily we’ve got the correct transition matrix to do that for us. The first row will be the constant terms from each basis vector. The coordinate matrix/vector that we just found tells us how to write the vector as a linear combination of vectors from the basis C. One way to get the solution is to just form up the linear combination with the coordinates as the scalars in the linear combination and compute it. (a) Find the transition matrix from C to B. the coordinate vector for u relative to the standard basis vectors is © 2007 Paul Dawkins 70 http://tutorial. [Solution] Solution (a) Find the transition matrix from C to B. p 2 = -4x .v 3 5 5 5 [Return to Problems] Example 6 Consider the standard basis for P2 .3. do not always expect this to happen. since B (the matrix we’re going to) is the standard basis vectors writing down the transition matrix will be easy this time.11) .math. [Return to Problems] (b) Determine the polynomial that has the coordinate vector ( p )C = ( -4. but this is not the standard basis and so it is not really clear just what the polynomial is.11) .3. é 2 0 P = ê 0 -4 ê ê 0 0 ë -1ù 0ú ú 5ú û Each column of P will be the coefficients of the vectors from C since those will also be the coordinates of each of those vectors relative to the standard basis vectors.lamar. All we need to do is to do the following matrix multiplication.

We just wanted to point out the alternate method of working this problem. v 3 . To find this we need to write v 4 as a linear combination of the standard basis vectors. © 2007 Paul Dawkins 71 http://tutorial. as mentioned above we can also do this problem as follows. é -3 ê 0 ë [ v 4 ]B é -3ù ê 0ú =ê ú ê 0ú ê ú ë 2û So. 0ù é 1 0ù é0 0ù é 0 1ù é0 0 ù ú = ( -3) ê0 0 ú + ( 0 ) ê 1 0ú + ( 0 ) ê 0 0ú + ( 2 ) ê0 1ú 2û ë û ë û ë û ë û So. but it won’t always be less work to do it this way. Let’s look at the fourth column. and ë0 0 û ë û ë û é -3 0 ù v4 = ê ú. the polynomial is.3.12 x + 55 x 2 Note that.aspx . p ( x ) = -19 .ê úý .1) = -19 . -2 ) . as with the previous couple of problems. [Return to Problems] ì é 1 0 ù é0 0 ù é 0 1ù é0 0 ù ü Example 7 Consider the standard basis for M 22 . [Solution] Solution (a) Find the transition matrix from C to B. v 2 = ê -1 0 ú .ê ú. î ë0 0 û ë 1 0 û ë 0 0 û ë0 1û þ é 1 0ù é 2 0ù é0 1ù and the basis C = { v1 . v 3 = ê0 1ú .Linear Algebra ( p ) B = ( -19.5. each column will be the entries from the v i ’s and with the standard basis vectors in the order that we’ve using them here. B is the standard basis vectors but this time let’s be a little careful. Let’s find one of the columns of the transition matrix in detail to make sure we can quickly write down the remaining columns. the coordinate matrix for v 4 relative to B and hence the fourth column of P is. Here is the transition matrix for this problem.math. p ( x ) = -4p1 + 3p 2 + 11p3 = -4 ( 2 ) + 3 ( -4 x ) + 11( 5 x 2 .55 ) Therefore. v 2 . This is fairly simple to do. Now. the first two entries is the first column of the v i and the last two entries will be the second column of v i .lamar. -12.ê ú.edu/terms. ë 0 2û (a) Find the transition matrix from C to B.12 x + 55 x 2 The same answer with less work. [Solution] (b) Determine the matrix that has the coordinate vector ( v )C = ( -8. B = í ê ú. v 4 } where v1 = ê ú .

B = {(1. Example 8 Consider the two bases for ¡ 2 . Let’s work one more example and this time we’ll avoid the standard basis vectors.1) = 2 (1. © 2007 Paul Dawkins 72 http://tutorial. -1) + ( 0. [Solution] (b) Find the transition matrix from B to C. [ v ]B é ê =ê ê ê ë 1 2 0 -1 0 0 0 0 0 -3ù é -8ù é 4 ù 0 0ú ê 3ú ê -3ú úê ú =ê ú 1 0 ú ê 5ú ê 5ú úê ú ê ú 1 2û ë -2 û ë 1û 5ù 1ú û [Return to Problems] Now that we’ve got the coordinates for v relative to the standard basis we can write down v. Here are those linear combinations. ( -1. As with the previous problem we could just write down the linear combination of the vectors from C and compute it directly.math. In this example we’ll just find the transition matrices. -1) + ( 0. ( 0.5. -1) . just as with the previous problem we have the coordinate vector.Linear Algebra é ê P=ê ê ê ë 1 2 0 -1 0 0 0 0 0 -3ù 0 0ú ú 1 0ú ú 1 2û [Return to Problems] (b) Determine the matrix that has the coordinate vector ( v )C = ( -8. [Solution] Solution Note that you should verify for yourself that these two sets of vectors really are bases for ¡ 2 as we claimed them to be. 4 )} . 6 ) 2 ( 2.aspx . -2 ) . To do this we’ll need to write the vectors from C as linear combinations of the vectors from B. (a) Find the transition matrix from C to B. but that is for the nonstandard basis vectors and so it’s not readily apparent what the matrix will be. 1 2 1 ( -1. but let’s go ahead and use the transition matrix.(1. So.edu/terms.lamar.1) . (a) Find the transition matrix from C to B. 6 ) The two coordinate matrices are then.3. é 4 v=ê ë -3 To this point we’ve only worked examples where one of the bases was the standard basis vectors. 4 ) = . 6 )} and C = {( 2.

Theorem 1 Suppose that V is a finite dimensional vector space and that P is the transition matrix from C to B then.1) + ( -1.( -1. You should go back to Examples 5 and 8 above and verify that the two transition matrices are in fact inverses of each other. we’ll need to do pretty much the same thing here only this time we need to write the vectors from B as linear combinations of the vectors from C. Also. -1) = ( 2. 4 ) û B = ê 1 ú ë 2û é 2 -1ù P=ê 1 1ú ë 2 2û [Return to Problems] (b) Find the transition matrix from B to C. There is another way of computing the second transition matrix from the first and we will close out this section with the theorem that tells us how to do that. é -1ù é ù ë( -1.1) û B = ê 1 ú ë2û and the transition matrix is then. 6 ) ù C = ê 3 ú ë û 4 ë3û é 1 3 P¢ = ê 1 -3 ë 2 3 4 3 The coordinate matrices are.aspx .1) .lamar. note that due to the difficulties sometimes present in finding the inverse of a matrix it might actually be easier to compute the second transition matrix as we did above. (1. -1) û C = ê 1 ú ù ë . (b) P -1 is the transition matrix from B to C.3û ë The transition matrix is. Here are the linear combinations. ù ú û [Return to Problems] In Examples 5 and 8 above we computed both transition matrices for each direction. (a) P is invertible and. © 2007 Paul Dawkins 73 http://tutorial. 4 ) 3 3 é2ù é( 0. 6 ) = ( 2.Linear Algebra é2ù é ù ë( 2.math.edu/terms. 4 ) 1 1 3 3 2 4 ( 0. Okay. é 1ù 3 é(1.

Example 1 Write down the row vectors and column vectors for é -1 5 ù ê 0 -4 ú ú A=ê ê 9 2ú ê ú ë 3 -7 û Solution The row vectors are. In other words. The column vectors (again we called them column matrices at the time) are the vectors in ¡ n that are formed out of the columns of A.aspx . they really are row/column matrices and so we may as well denote them as such and second in this way we can keep the “orientation” of each to remind us whether or not they are row vectors or column vectors. previously although we will give its definition here again for the sake of completeness. First. row vectors are listed horizontally and column vectors are listed vertically. In fact they are so important that they are often called the fundamental subspaces of a matrix. We’ve actually already seen one of the fundamental subspaces. Given an n ´ m matrix é a11 êa 21 A=ê ê M ê ê an1 ë a12 a22 M an 2 L L L a1m ù a2 m ú ú M ú ú an m ú û The row vectors (we called them row matrices at the time) are the vectors in R m formed out of the rows of A. Because we’ll be using the matrix notation for the row and column vectors we’ll be using matrix notation for vectors in general in this section so we won’t be mixing and matching the notations too much. © 2007 Paul Dawkins 74 http://tutorial.math. the null space. r1 = [ -1 5] r2 = [ 0 -4] é -1ù ê 0ú c1 = ê ú ê 9ú ê ú ë 3û r3 = [ 9 2] é 5ù ê -4 ú c2 = ê ú ê 2ú ê ú ë -7 û r4 = [ 3 -7 ] The column vectors are Note that despite the fact that we’re calling them vectors we are using matrix notation for them.Linear Algebra Fundamental Subspaces In this section we want to take a look at some important subspaces that are associated with matrices. The reason is twofold.lamar.edu/terms. Before we give the formal definitions of the fundamental subspaces we need to quickly review a concept that we first saw back when we were looking at matrix arithmetic.

2 x5 . x1 = -t + r x4 = t 1 1 x2 = .aspx . let’s go ahead and give the notation for the null space. s. Because we’ve already seen how to find the basis for the null space (Example 4(b) in the Subspaces section and Example 7 of the Basis section) we’ll do one example at this point and then devote the remainder of the discussion on basis/dimension of these subspaces to finding the basis/dimension for the row and column space. (a) The subspace of R m that is spanned by the row vectors of A is called the row space of A.4 x2 . So.edu/terms. 2 x1 . (b) The subspace of R n that is spanned by the column vectors of A is called the column space of A. Definition 2 The dimension of the null space of A is called the nullity of A and is denoted by nullity ( A) . to find the null space we need to solve the following system of equations. We should work an example at this point.2 x3 + 4 x4 . At this point we can give the notation for the dimension of the null space.x1 + 2 x2 + x5 .x6 = 0 10 x1 .math. We are going to be particularly interested in the basis for each of these subspaces and that in turn means that we’re going to be able to discuss the dimension of each of them. 1 2 -2 -3ù é 2 -4 A = ê -1 2 0 0 1 -1ú ê ú ê 10 -4 -2 4 -2 4 ú ë û Solution So. r are any real numbers In matrix form the solution can be written as.s + r x3 = -2t + 5r 2 2 x5 = s x6 = r t . © 2007 Paul Dawkins 75 http://tutorial. Note that we will see an example or two later in this section of null spaces as well. The reason for the delay will be apparent once we reach that point.Linear Algebra Here then are the definitions of the three fundamental subspaces that we’ll be investigating in this section. but we’ll need to wait a bit before we do so for the row and column spaces.4 x2 + x3 + 2 x4 . (c) The set of all x in ¡ m such that Ax = 0 (which is a subspace of ¡ m by Theorem 2 from the Subspaces section) is called the null space of A.3 x6 = 0 .2 x5 + 4 x6 = 0 We’ll leave it to you to verify that the solution is given by.t .lamar. Definition 1 Suppose that A is an n ´ m matrix. Example 2 Determine a basis for the null space of the following matrix.

the solution can be written as a linear combination of the three linearly independent vectors (verify the linearly independent claim!) é -1ù ê.1 ú ê 1ú 2 ê 2 ú ê 2ú ê 2ú ê ú ê -2t + 5r ú ê -2 ú ê 0ú ê 5ú x=ê ú =tê ú+sê ú+rê ú t ê ú ê 1ú ê 0ú ê0ú ê ú ê 0ú ê 1ú ê0ú s ê ú ê ú ê ú ê ú r ë û ë 0û ë 0û ë 1û So.1 t . The first theorem tells us how to find the basis for a matrix that is in row-echelon form. remember that we’ll be using matrix notation for vectors in this section. now that we’ve gotten an example of the basis for the null space taken care of we need to move onto finding bases (and hence the dimensions) for the row and column spaces of a matrix. The row vectors containing leading 1’s (so the non-zero row vectors) will form a basis for the row space of U. before we do that we first need a couple of theorems out of the way. Theorem 1 Suppose that the matrix U is in row-echelon form.1 ú ê. r1 = [ 1 5 -2 3 5] r2 = [ 0 1] 0 1 -1 0] r3 = [ 0 0 0 0 © 2007 Paul Dawkins 76 http://tutorial. Note that this also means that the null space has a dimension of 3 since there are three basis vectors for the null space and so we can see that nullity ( A) = 3 Again. The column vectors that contain the leading 1’s from the row vectors will form a basis for the column space of U.edu/terms. Okay. So. Example 3 Find the basis for the row and column space of the following matrix.aspx . the basis for the row space is simply all the row vectors that contain a leading 1.1 ú ê 2ú ê -2 ú x1 = ê ú ê 1ú ê 0ú ê ú ë 0û é 0ù ê.1 s + r ú ê.lamar. 3 5ù é 1 5 -2 ê 0 0 1 -1 0 ú ê ú U= ê 0 0 0 0 1ú ê ú ë 0 0 0 0 0û Solution Okay. However. for this matrix the basis for the row space is.Linear Algebra é -t + r ù é -1ù é 0ù é 1ù ê.1 ú ê 2ú ê 0ú x2 = ê ú ê 0ú ê 1ú ê ú ë 0û é 1ù ê 1ú ê ú ê 5ú x3 = ê ú ê0 ú ê0 ú ê ú ë 1û and so these three vectors then form the basis for the null space since they span the null space and are linearly independent.math.

How does this theorem help us to find a basis for the column space of a general matrix? We’ll let’s start with a matrix A and reduce it to row-echelon form. From our theorems above we know that to find a basis for both the row and column space of a matrix A we first need to reduce it to row-echelon form and we can get a basis for the row and column space from that.lamar. what about a basis for the column space? That’s not quite as straight forward.Linear Algebra We can also see that the dimension of the row space will be 3. Now. suppose the columns 1. as we know. to find a basis for the row space of a matrix A we’ll need to reduce it to rowechelon form. The basis for the column space will be the columns that contain leading 1’s and so for this matrix the basis for the column space will be.edu/terms. © 2007 Paul Dawkins 77 http://tutorial. These columns will probably not be a basis for the column space of A however. However. 2. but is almost as simple. Also note that the dimension of the column space is 3 as well. Therefore. So. how does this theorem help us? Well if the matrix A and U have the same row space then if we know a basis for one of them we will have a basis for the other. Now. 5 and 8 from U form a basis for the column space of U then columns 1. most matrices will not be in row-echelon form. Next. We will generally do that for these problems. For example. 2. Notice as well that we assumed the matrix U is in row-echelon form and we do know how to find a basis for its row space. (which we’ll need for a basis for the row space anyway). Theorem 2 Suppose that A is a matrix and U is a matrix in row-echelon form that has been obtained by performing row operations on A. but that is the same as the row space of A and so that set of vectors will also be a basis for the row space of A. So.math. Then the row space of A and the row space of U are the same space. The following two theorems will tell us how to find the basis for the row and column space of a general matrix. what Theorem 3 tells us is that corresponding columns from A will form a basis for the columns space of A.aspx . because we arrived at U by applying row operations to A we know that A and U are row equivalent. Once in row-echelon form we can write down a basis for the row space of U. all of this is fine provided we have a matrix in row-echelon form. U. Before we work an example we can now talk about the dimension of the row and column space of a matrix A. Theorem 3 Suppose that A and B are two row equivalent matrices (so we got from one to the other by row operations) then a set of column vectors from A will be a basis for the column space of A if and only if the corresponding columns from B will form a basis for the column space of B. 5 and 8 from A will form a basis for the column space of A. from Theorem 1 we know how to identify the columns from U that will form a basis for the column space of U. é 1ù ê0ú c1 = ê ú ê0ú ê ú ë0û é -2 ù ê 1ú c3 = ê ú ê 0ú ê ú ë 0û é 5ù ê0 ú c5 = ê ú ê 1ú ê ú ë0 û Note that we subscripted the vectors here with the column that each came out of.

Now. We call this common dimension the rank of A and denote it by rank ( A ) . Think about this for a second. Note that if A is an n ´ m matrix we know that the row space will be a subspace of R m and hence have a dimension of m or less and that the column space will be a subspace of R n and hence have a dimension of n or less. because we know that the dimension of the row and column space must be the same we have the following upper bound for the rank of a matrix.aspx . a basis for the row space of the matrix will be every row that contains a leading 1 (all of them in this case). recall that there is more than one possible row-echelon form for a given matrix. This will always happen and this is the reason that we delayed talking about the dimension of the row and column space above. For example. We will leave it to you to verify that the following is one possible row echelon form for the matrix from Example 2 above. Theorem 4 Suppose that A is a matrix then the row space of A and the column space of A will have the same dimension. m ) We should now work an example. If there are k leading 1’s in a row-echelon matrix then there will be k row vectors in a basis for the row space and so the row space will have a dimension of k. According to this theorem the rows with leading 1’s will form a basis for the row space and the columns that containing the same leading 1’s will form a basis for the column space.lamar. rank ( A ) £ min ( n. If you need a refresher on how to reduce a matrix to row-echelon form you can go back to the section on Solving Systems of Equations for a refresher.math. © 2007 Paul Dawkins 78 http://tutorial. Determine the rank of the matrix. However. Example 4 Find a basis for the row and column space of the matrix from Example 2 above. since each of the leading 1’s will be in separate columns there will also be k column vectors that will form a basis for the column space and so the column space will also have a dimension of k. We needed to get a couple of theorems out of the way so we could give the following theorem/definition. A basis for the row space is then. Also. the first thing that we need to do is get the matrix into row-echelon form.edu/terms. there are a fixed number of leading 1’s and each leading 1 will be in a separate column.Linear Algebra Let’s go back and take a look at Theorem 1 in a little more detail. Then. 0 é 1 -2 ê 0 U =ê 1 -1 8 ê 0 0 1 ë 0 1 4 -1 1 2 2 1ù 3 . there won’t be two leading 1’s in the second column because that would mean that the upper 1 (one) would not be a leading 1. Solution Before starting this example let’s note that by the upper bound for the rank above we know that the largest that the rank can be is 3 since that is the smaller of the number of rows and columns in A. So.8ú ú 0 -5 ú û So.

The fact that they were the same won’t always happen as we’ll see shortly and so isn’t all that important. To do this we’ll need to solve the following system of equations.8ù û Next. Theorem 5 Suppose that A is an n ´ m matrix. as Theorem 4 suggested both the row space and the column space of A have dimension 3 and so we have that rank ( A) = 3 Before going on to another example let’s stop for a bit and take a look at the results of Examples 2 and 4. . Therefore the first three columns of A will form a basis for the column space of A. é 2ù c1 = ê -1ú ê ú ê10 ú ë û é -4 ù c2 = ê 2 ú ê ú ê -4 ú ë û é 1ù c3 = ê 0 ú ê ú ê -2 ú ë û Now.x3 + 5 x4 + 6 x5 = 0 4 x1 . rank ( A ) + nullity ( A ) = m Let’s take a look at a couple more examples now. We’ll find the null space first since that was the first thing asked for.2 x4 + 4 x5 = 0 -3x1 + x2 + 7 x3 .6 x3 . é -1 ê 4 A=ê ê 2 ê ë -3 2 -4 0 1 -1 5 -4 -12 -6 -2 7 -2 6ù -8ú ú 4ú ú 12û Solution Before we get started we can notice that the rank can be at most 4 since that is smaller of the number of rows and number of columns.edu/terms.lamar. Example 5 Find a basis for the null space.math. the first three columns of U will form a basis for the column space of U since they all contain the leading 1’s.12 x4 .Linear Algebra r1 = [ 1 -2 0 0 -1 r3 = [ 0 1] 0 1 r2 = é 0 1 -1 8 ë 2 0 -5] 1 4 1 2 3 .8 x5 = 0 2 x1 .4 x2 . Determine the rank and nullity of the matrix.4 x3 . Then. This in fact will always be the case. row space and column space of the following matrix.2 x4 + 12 x5 = 0 You should verify that the solution is. From these two examples we saw that the rank and nullity of the matrix used in those examples were both 3. What is important to note is that 3 + 3 = 6 and there were 6 columns in this matrix. This gives the following basis for the column space of A.aspx . © 2007 Paul Dawkins 79 http://tutorial.x1 + 2 x2 .

rank ( A ) = # columns . é ê U =ê ê ê ë r1 = [ 1 -2 1 -2 1 -5 -6 ù 0 1 -2 2 4 ú ú 0 0 0 1 -2 ú ú 0 0 0 0 0û r2 = [ 0 1 -2 2 4] The rows containing leading 1’s will form a basis for the row space of A and so this basis is. According to this theorem the rank must be. At this point we know the rank of A by Theorem 5 above. let’s get a basis for the row space and the column space.aspx . Here is that basis.lamar. We now know that each should contain three vectors. second and fourth columns of U contain leading 1’s and so will form a basis for the column space of U and this tells us that the first. the first.nullity ( A ) = 5 .edu/terms. é 3t ù é 3ù é 0ù ê 2t . Speaking of which.2 = 3 This will give us a nice check when we find a basis for the row space and the column space. second and fourth columns of A will form a basis for the column space of A. We’ll need to reduce A to row-echelon form first.math. © 2007 Paul Dawkins 80 http://tutorial. é 3ù ê 2ú ê ú x1 = ê 1ú ê ú ê 0ú ê 0ú ë û é 0ù ê -8 ú ê ú x2 = ê 0ú ê ú ê 2ú ê 1ú ë û Therefore we now know that nullity ( A ) = 2 . We’ll leave it to you to verify that a possible row-echelon form for A is. 1 -5 -6] r3 = [ 0 0 0 1 -2 ] Next.Linear Algebra x1 = 3t x2 = 2t .8s ú ê 2 ú ê -8 ú ê ú ê ú ê ú x = ê t ú = t ê 1ú + s ê 0 ú ê ú ê ú ê ú ê 2s ú ê 0 ú ê 2ú ê s ú ê0ú ê 1ú ë û ë û ë û and so we can see that a basis for the null space is.8s x3 = t x4 = 2 s x5 = s s and t are any real numbers The null space is then given by.

© 2007 Paul Dawkins 81 http://tutorial. The basis for the column space of A is then. r1 = [ 1 . x1 = 0 x2 = 0 This is actually the point to this problem. 6 x1 . Therefore. é 6 -3 ù A = ê -2 3ú ê ú ê -8 4 ú ë û Solution In this case we can notice that the rank of this matrix can be at most 2 since that is the minimum of the number of rows and number of columns.aspx .edu/terms. To find the null space we’ll need to solve the following system of equations. é 1 . 0. There is only a single solution to the system above.1 ] 2 é 6ù c1 = ê -2 ú ê ú ê -8ú ë û r2 = [ 0 1] and since both columns of U form a basis for the column space of U both columns from A will form a basis for the column space of A.math. You should verify that one possible row-reduced form for A is. This also tells us that the rank of this matrix must be 2 by Theorem 5.lamar. Determine the nullity and rank of this matrix. vector spaces consisting solely of the zero vectors are defined to have a dimension of zero. both have dimension of 2 as we knew they should from our use of Theorem 5 above. é -3ù c 2 = ê 3ú ê ú ê 4ú ë û Once again. Let’s now find a basis for the row space and the column space. the nullity of this matrix is zero.1ù 2 ê 0 U =ê 1ú ú ê 0 0ú ë û A basis for the row space of A is then.3x2 = 0 -2 x1 + 3 x2 = 0 -8 x1 + 4 x2 = 0 We’ll leave it to you to verify that the solution to this system is.Linear Algebra é -1ù ê 4ú c1 = ê ú ê 2ú ê ú ë -3 û é 2ù ê -4 ú c2 = ê ú ê 0ú ê ú ë 1û é 5ù ê -12 ú ú c4 = ê ê -2 ú ê ú ë -2 û Note that the dimension of each of these is 3 as we noted it should be above. Also. Example 6 Find a basis for the null space. namely the zero vector. Therefore the null space consists solely of the zero vector and vector spaces that consist solely of the zero vector do not have a basis and so we can’t give one. row space and column space of the following matrix.

Example 7 Find a basis for the row space of the matrix in Example 5 that consists of rows from the original matrix. 1 -4 -2 3ù 0 1 1 . when this is all said and done by finding a basis for the column space of AT we will also be finding a basis for the row space of A and it will be in terms of rows from A and not rows from the row-echelon form of the matrix. In doing so the rows of A will become the columns of AT . Also note that we can’t necessarily use the same idea we used to get a basis for the column space to get a basis for the row space.math. So. So. in this case they won’t. In this case the third row is twice the first row added onto the second row and so the first three rows are not linearly independent (which you’ll recall is required for a set of vectors to be a basis). So. 4 é -1 ê 2 -4 ê AT = ê -1 -4 ê ê 5 -12 ê 6 -8 ë é ê ê U =ê ê ê ê ë 2 0 -6 -2 4 -3ù 1ú ú 7ú ú -2 ú 12 ú û Here is a possible row-echelon form of the transpose (you should verify this). but that does not mean that the first three rows of A will also form a basis for the row space.Linear Algebra In all of the examples that we’ve worked to this point in finding a basis for the row space and the column space we should notice that the basis we found for the column space consisted of columns from the original matrix while the basis we found for the row space did not consist of rows from the original matrix.aspx .5ú 4ú 0 0 0 1ú ú 0 0 0 0ú 0 0 0 0ú û The first. © 2007 Paul Dawkins 82 http://tutorial. but the columns of AT are the rows of A and the column space of AT is the row space of A. The first three rows of U formed a basis for the row space. For example let’s go back and take a look at Example 5. In fact. Therefore. we’ll be finding a basis for the column space of AT in terms of the columns of AT . Solution The first thing that we’ll do is take the transpose of A.edu/terms. Recall as well that we find a basis for the column space in terms of columns from the original matrix ( AT in this case). what do we do if we do want rows from the original matrix to form our basis? The answer to this is surprisingly simple. second and fourth columns of U form a basis for the column space of U and so a basis for the column space of AT is. This means that the row space of A will become the column space of AT .lamar. here is the transpose of A.

but there are only n rows (columns) in A and so all the rows (columns) of A must be in the basis. This should be clear from application of the Theorem above. the column space of AT is nothing more than the row space of A and so these three column are rows from A and will also form a basis for the row space. the row (column) space is a subspace of ¡ n which also has a dimension of n. (a) A is invertible. r1 = [ -1 2 -1 5 6] r2 = [ 4 -4 -4 -12 -8] r3 = [ -3 1 7 -2 12] Next we want to give a quick theorem that gives a relationship between the solution to a system of equations and the column space of the coefficient matrix. have at least one solution) if and only if b is in the column space of A. (b) The null space of A is {0} . let’s change notation a little to make it clear that we’re dealing with a basis for the row space and we’ll be done. We will point our however that if the rank of an n ´ n matrix is n then a basis for the row (column) space must contain n vectors. rank and nullity so we’re not going to give it here. (f) The row vectors of A form a basis for ¡ n . The following statements are equivalent.e. (c) nullity ( A ) = 0 . Theorem 6 The system of linear equations Ax = b will be consistent (i. We’ll close out this section with a couple of theorems relating the invertibility of a square matrix A to some of the ideas in this section. So. Theorem 7 Let A be an n ´ n matrix. These ideas are helpful in showing that (d) will imply either (e) or (f). Here is a basis for the row space of A in terms of rows from A itself. Also. This theorem can be useful on occasion. however. i. just the zero vector. (d) rank ( A ) = n . but that means that it can be written as a linear combination of the basis vectors for the column space of A.Linear Algebra é -1ù ê 2ú ê ú c1 = ê -1ú ê ú ê 5ú ê 6ú ë û é 4ù ê -4 ú ê ú c 2 = ê -4 ú ê ú ê -12 ú ê -8 ú ë û é -3 ù ê 1ú ê ú c4 = ê 7 ú ê ú ê -2 ú ê 12 ú ë û Again.lamar. (e) The columns vectors of A form a basis for ¡ n .aspx . © 2007 Paul Dawkins 83 http://tutorial. This theorem tells us that b must be in the column space of A.math. Note that since the basis for the column space of a matrix is given in terms of the certain columns of A this means that a system of equations will be consistent if and only if b can be written as a linear combination of at least some of the columns of A. The proof of this theorem follows directly from Theorem 9 in the Properties of Determinants section and from the definitions of null space.edu/terms.e.

edu/terms. (g) det ( A ) ¹ 0 (h) The null space of A is {0} . The following statements are equivalent. (b) The only solution to the system Ax = 0 is the trivial solution.Linear Algebra Finally. (d) A is expressible as a product of elementary matrices.e. (c) A is row equivalent to I n . this was also a theorem listing many equivalent statements on the invertibility of a matrix. Theorem 8 Let A be an n ´ n matrix. (f) Ax = b is consistent for every n ´ 1 matrix b.math. (i) nullity ( A ) = 0 .aspx . (j) rank ( A ) = n .lamar. speaking of Theorem 9 in the Properties of Determinant section. (l) The row vectors of A form a basis for ¡ n . © 2007 Paul Dawkins 84 http://tutorial. (k) The columns vectors of A form a basis for ¡ n . just the zero vector. (e) Ax = b has exactly one solution for every n ´ 1 matrix b. We can merge that theorem with Theorem 7 above into the following theorem. (a) A is invertible. i.

Linear Algebra

Inner Product Spaces
If you go back to the Euclidean n-space chapter where we first introduced the concept of vectors you’ll notice that we also introduced something called a dot product. However, in this chapter, where we’re dealing with the general vector space, we have yet to introduce anything even remotely like the dot product. It is now time to do that. However, just as this chapter is about vector spaces in general, we are going to introduce a more general idea and it will turn out that a dot product will fit into this more general idea. Here is the definition of this more general idea.

Definition 1 Suppose u, v, and w are all vectors in a vector space V and c is any scalar. An
inner product on the vector space V is a function that associates with each pair of vectors in V, say u and v, a real number denoted by u, v that satisfies the following axioms. (a) u, v = v, u (b) u + v, w = u, w + v, w (c) c u, v = c u, v (d) u, u ³ 0 and u, u = 0 if and only if u = 0 A vector space along with an inner product is called an inner product space. Note that we are assuming here that the scalars are real numbers in this definition. In fact we probably should have been using the terms “real vector space” and “real inner product space” in this definition to make it clear. If we were to allow the scalars to be complex numbers (i.e. dealing with a complex vector space) the axioms would change slightly. Also, in the rest of this section if we say that V is an inner product space we are implicitly assuming that it is a vector space and that some inner product has been defined on it. If we do not explicitly give the inner product then the exact inner product that we are using is not important. It will only be important in these cases that there has been an inner product defined on the vector space.

Example 1 The Euclidean inner product as defined in the Euclidean n-space section is an inner
product. For reference purposes here is the Euclidean inner product. Given two vectors in ¡ n , u = ( u1 , u2 ,K , un ) and v = ( v1 , v2 ,K , vn ) , the Euclidean inner product is defined to be,

u, v = ug v = u1v1 + u2v2 + L + un vn

By Theorem 2 from the Euclidean n-space section we can see that this does in fact satisfy all the axioms of the definition. Therefore, ¡ n is an inner product space. Here are some more examples of inner products.

© 2007 Paul Dawkins

85

http://tutorial.math.lamar.edu/terms.aspx

Linear Algebra

Example 2 Suppose that u = ( u1 , u2 ,K , un ) and v = ( v1 , v2 ,K , vn ) are two vectors in ¡ n and
that w1 , w2 , … , wn are positive real numbers (called weights) then the weighted Euclidean inner product is defined to be,

u, v = w1u1v1 + w2u2v2 + L + wnun vn

It is fairly simple to show that this is in fact an inner product. All we need to do is show that it satisfies all the axioms from Definition 1. So, suppose that u, v, and a are all vectors in ¡ n and that c is a scalar. First note that because we know that real numbers commute with multiplication we have,

u, v = w1u1v1 + w2u2 v2 + L + wn un vn = w1v1u1 + w2v2u2 + L + wn vnun = v, u

So, the first axiom is satisfied. To show the second axiom is satisfied we just need to run through the definition as follows,

u + v, a = w1 ( u1 + v1 ) a1 + w2 ( u2 + v2 ) a2 + L + wn ( un + vn ) an = u, a + v , a

= ( w1u1a1 + w2u2 a2 + L + wnun an ) + ( w1v1a1 + w2 v2 a2 + L + wn vn an )

and the second axiom is satisfied. Here’s the work for the third axiom.

c u, v = w1c u1v1 + w2 c u2 v2 + L + wn c un vn = c ( w1u1v1 + w2u2 v2 + L + wnun vn ) = c u, v

Finally, for the fourth axiom there are two things we need to check. Here’s the first,
2 2 u, u = w1u12 + w2u2 + L + wnun ³ 0

Note that this is greater than or equal to zero because the weights w1 , w2 , … , wn are positive numbers. If we hadn’t made that assumption there would be no way to guarantee that this would be positive. Now suppose that u, u = 0 . Because each of the terms above is greater than or equal to zero the only way this can be zero is if each of the terms is zero itself. Again, however, the weights are positive numbers and so this means that

ui2 = 0

Þ

ui = 0,

i = 1, 2,K , n

We therefore must have u = 0 if u, u = 0 . Likewise if u = 0 then by plugging in we can see that we must also have u, u = 0 and so the fourth axiom is also satisfied.

© 2007 Paul Dawkins

86

http://tutorial.math.lamar.edu/terms.aspx

Linear Algebra

é a a3 ù é b1 b3 ù Example 3 Suppose that A = ê 1 ú and B = êb b ú are two matrices in M 22 . An ë a2 a4 û ë 2 4û inner product on M 22 can be defined as, A, B = tr ( AT B )
where tr ( C ) is the trace of the matrix C. We will leave it to you to verify that this is in fact an inner product. This is not difficult once you show (you can do a direct computation to show this) that

tr ( AT B ) = tr ( BT A ) = a1b1 + a2b2 + a3b3 + a4b4

This formula is very similar to the Euclidean inner product formula and so showing that this is an inner product will be almost identical to showing that the Euclidean inner product is an inner product. There are differences, but for the most part it is pretty much the same. The next two examples require that you’ve had Calculus and so if you haven’t had Calculus you can skip these examples. Both of these however are very important inner products in some areas of mathematics, although we’re not going to be looking at them much here because of the Calculus requirement.

Example 4 Suppose that f = f ( x ) and g = g ( x ) are two continuous functions on the interval

[ a, b ] .

In other words, they are in the vector space C [ a, b ] . An inner product on C [ a, b ] can

be defined as,

f , g = ò f ( x ) g ( x ) dx
a

b

Provided you remember your Calculus, showing this is an inner product is fairly simple. Suppose that f, g, and h are continuous functions in C [ a, b ] and that c is any scalar. Here is the work showing the first axiom is satisfied.

f , g = ò f ( x ) g ( x ) dx = ò g ( x ) f ( x ) dx = g, f
a a

b

b

The second axiom is just as simple,

f + g, h = ò

b a b

( f ( x ) + g ( x ) ) h ( x ) dx
b a

= ò f ( x ) h ( x ) dx + ò g ( x ) h ( x ) dx = f , h + g, h
a

Here’s the third axiom.

c f , g = ò c f ( x ) g ( x ) dx = c ò f ( x ) g ( x ) dx = c f , g
a a

b

b

Finally, the fourth axiom. This is the only one that really requires something that you may not remember from a Calculus class. The previous examples all used properties of integrals that you
© 2007 Paul Dawkins 87 http://tutorial.math.lamar.edu/terms.aspx

Linear Algebra

should remember. First, we’ll start with the following,
b b

f , f = ò f ( x ) f ( x ) dx = ò f 2 ( x ) dx
a a

Now, recall that if you integrate a continuous function that is greater than or equal to zero then the integral must also be greater than or equal to zero. Hence,

f ,f ³ 0
Next, if f = 0 then clearly we’ll have f , f = 0 . Likewise, if we have f , f = then we must also have f = 0 .

òa

b

f 2 ( x ) dx = 0

Example 5 Suppose that f = f ( x ) and g = g ( x ) are two vectors in C [ a, b ] and further
suppose that w ( x ) > 0 is a continuous function called a weight. A weighted inner product on

C [ a, b ] can be defined as, f , g = ò f ( x ) g ( x ) w ( x ) dx
a b

We’ll leave it to you to verify that this is an inner product. It should be fairly simple if you’ve had calculus and you followed the verification of the weighted Euclidean inner product. The key is again the fact that the weight is a strictly positive function on the interval [ a, b ] . Okay, once we have an inner product defined on a vector space we can define both a norm and distance for the inner product space as follows.

Definition 2 Suppose that V is an inner product space. The norm or length of a vector u in V is
defined to be,

u = u, u

1 2

Definition 3 Suppose that V is an inner product space and that u and v are two vectors in V. The distance between u and v, denoted by d ( u, v ) is defined to be, d ( u, v ) = u - v
We’re not going to be working many examples with actual numbers in them in this section, but we should work one or two so at this point let’s pause and work an example. Note that part (c) in the example below requires Calculus. If you haven’t had Calculus you should skip that part.

© 2007 Paul Dawkins

88

http://tutorial.math.lamar.edu/terms.aspx

(a) u = ( 2.v = ( -1.lamar. -1. (b) u = ( 2.2 ç ÷= 5 è5ø So.v = ( -1. 1 . w2 = 6 and w3 = 1 . -3. 4 ) and v = ( 3. [Solution] 5 u. v ) = u .aspx . Note however.math. v = ( 2 )( 3) + ( -1)( 2 ) + ( 4 )( 0 ) = 4 1 2 u = u.edu/terms.2 ÷= 5 è5ø 2 d ( u. v . [Return to Problems] © 2007 Paul Dawkins 89 http://tutorial. Note that in under this weighted norm u is “smaller” in some way than under the standard Euclidean norm and the distance between u and v is “larger” in some way than under the standard Euclidean norm. 4 ) and v = ( 3. w2 = 6 and w3 = [Solution] [Solution] (c) u = x and v = x 2 in C [ 0. 4 ) and v = ( 3. 2. u 1 2 = ( 2 ) ( 2 ) + ( -1) ( 6 ) + ( 42 ) æ ç 2 2 2 2 1ö 86 = 17. -1. v ) = u . 0 ) in ¡ 3 with the weighed Euclidean inner product using the weights w1 = 2 . 0 ) in ¡ 3 with the standard Euclidean inner product. æ1ö u. 5 Again.Linear Algebra Example 6 For each of the following compute u. 4 ) = ( -1) + ( -3) + ( 4 ) = 26 [Return to Problems] (b) u = ( 2. 2. 0 ) in ¡ 3 with the standard Euclidean inner product. that even though we’ve got the same vectors as part (a) we should expect to get different results because we are now working with a weighted inner product. u = ( 2 ) + ( -1) 2 2 + ( 4 2 ) = 21 2 2 2 d ( u. There really isn’t much to do here other than go through the formulas. -1. 2. u and d ( u. we did get different answers here.1] using the inner product defined in Example 4. -1. v = ( 2 )( 3)( 2 ) + ( -1)( 2 )( 6 ) + ( 4 )( 0 ) ç ÷ = 0 è5ø u = u. 4 ) = ( -1) ( 2 ) + ( -3) ( 6 ) + ( 4 ) 296 æ1ö = 59. 0 ) in ¡ 3 with the weighed Euclidean inner product using the weights w1 = 2 . Solution (a) u = ( 2. 4 ) and v = ( 3. -3. 2. v ) for the given pair of vectors and inner product. not a lot to do other than formula work.

v.Usually called the Triangle Inequality © 2007 Paul Dawkins 90 http://tutorial. v = c u. 0 = 0.x 4 + x 3 ÷ = 2 3 ø0 30 è5 [Return to Problems] Now.1] using the inner product defined in Example 4.v. again if you haven’t had Calculus this part won’t make much sense and you should skip it. w = u. (a) u. (a) u ³ 0 (b) u = 0 if and only if u=0.v = x . v . (c) c u = c u (d) u + v £ u + v . v (e) u. w (c) u. v + w = u.Linear Algebra (c) u = x and v = x 2 in C [ 0.x = 1 1 ö 1 æ1 dx = ç x5 .lamar. w .math.w = u. We’ll list them all here for reference purposes and so you can see them with the updated inner product notation. v . If you have had Calculus this should be a fairly simple example. Then. v £ u v Theorem 3 Suppose u and v are two vectors in an inner product space and that c is a scalar then. norm and distance that we had for the dot product back in the Euclidean n-space section. Theorem 1 Suppose u. w (d) c u.v. and w are vectors in an inner product space and c is any scalar.edu/terms. w (b) u .x ) 1 2 2 1 1 2 x 0 d ( u. v = ò x ( x ) dx = ò 1 2 0 1 3 x 0 1 1 dx = x 4 = 4 0 4 1 3 1 dx = x = 3 0 3 1 1 1 u = u. Okay. u = 0 Theorem 2 Cauchy-Schwarz Inequality : Suppose u and v are two vectors in an inner product space then u. u. u 1 2 = 2 ò 0 x ( x ) dx = ò ò0(x .u. v + u.aspx . we also have all the same properties for the inner product. v ) = u . The proofs for these theorems are practically identical to their dot product counterparts and so aren’t shown here.

There are also subspaces that will only arise if we are working with an inner product space. © 2007 Paul Dawkins 91 http://tutorial. w2 = 6 and w3 = We saw the computations for these back in Example 6. Example 7 The two vectors u = ( 2. 4 ) and v = ( 3. 1 .Usually called the Triangle Inequality There was also an important concept that we saw back in the Euclidean n-space section that we’ll need in the next section. Note that whether or not two vectors are orthogonal will depend greatly on the inner product that we’re using. They are said to be orthogonal if u. and w are vectors in an inner product space then. u ) (d) d ( u. Now that we have the definition of orthogonality out of the way we can give the general version of the Pythagorean Theorem of an inner product space. 5 Theorem 5 Suppose that u and v are two orthogonal vectors in an inner product space then.Linear Algebra Theorem 4 Suppose u. Definition 5 Suppose that W is a subspace of an inner product space V.edu/terms. (a) d ( u. v ) . In previous sections we spent quite a bit of time talking about subspaces of a vector space.math. v = 0 . although they will show up on occasion. Here are a couple of nice theorems pertaining to orthogonal complements. v ) £ d ( u.aspx . We just wanted to acknowledge that there are subspaces that are only going to be found in inner product spaces. 2 2 2 u+v = u + v There is one final topic that we want to briefly touch on in this section. The following definition gives one such subspace. (c) d ( u. 0 ) in ¡3 are not orthogonal with respect to the standard Euclidean inner product. We say that W and W ^ are orthogonal complements. v. but are orthogonal with respect to the weighted Euclidean inner product with weights w1 = 2 . v ) = d ( v. w ) + d ( w . v ) ³ 0 (b) d ( u. We’re not going to be doing much with the orthogonal complement in these notes. 2. Here is the definition for this concept in terms of inner product spaces. -1. We say that a vector u from V is orthogonal to W if it is orthogonal to every vector in W.lamar. Two vectors may be orthogonal with respect to one inner product defined on a vector space. Definition 4 Suppose that u and v are two vectors in an inner product space. v ) = 0 if and only if u=v. The set of all vectors that are orthogonal to W is called the orthogonal complement of W and is denoted by W ^ . but not orthogonal with respect to a second inner product defined on the same vector space.

(c) W ^ ( ) ^ = W . is common to both W and W ^ . Then.Linear Algebra Theorem 6 Suppose W is a subspace of an inner product space V.math. the orthogonal complement of W ^ is W. Theorem 7 If A is an n ´ m matrix then.aspx . Here is a nice theorem that relates some of the fundamental subspaces that we were discussing in the previous section. (a) W ^ is a subspace of V. (b) Only the zero vector. (a) The null space of A and the row space of A are orthogonal complements in ¡ m with respect to the standard Euclidean inner product. 0. Or in other words. © 2007 Paul Dawkins 92 http://tutorial.lamar.edu/terms. (b) The null space of AT and the column space of A are orthogonal complements in ¡ n with respect to the standard Euclidean inner product.

0.edu/terms. (a) If each pair of distinct vectors from S is orthogonal then we call S an orthogonal set. Definition 1 Suppose that S is a set of vectors in an inner product space. To show that they don’t form an orthonormal set we just need to show that at least one of them does not have a norm of 1.math. -1. -1) . but the other two don’t and so they are not an orthonormal set of vectors. and yes it does require an inner product space to construct. We are going to be looking at a special kind of basis in this section that can arise in an inner product space. v 2 = ( 2 )( 0 ) + ( 0 )( -1) + ( -1)( 0 ) = 0 v1 . [Return to Problems] © 2007 Paul Dawkins 93 http://tutorial. v 3 = ( 2 )( 2 ) + ( 0 )( 0 ) + ( -1)( 4 ) = 0 v 2 . 0. v 2 = ( 0. v1 . they do form an orthogonal set.aspx . before we do that we’re going to need to get some preliminary topics out of the way first. 4 ) in ¡3 answer each of the following. We’ll first need to get a set of definitions out of way. (b) If S is an orthogonal set and each of the vectors in S also has a norm of 1 then we call S an orthonormal set. 0 ) and v 3 = ( 2. However. v 3 = ( 0 )( 2 ) + ( -1)( 0 ) + ( 0 )( 4 ) = 0 So. v1 = v2 = v3 = ( 2 ) + ( 0 ) + ( -1) 2 2 2 2 2 = 5 =1 ( 0 ) + ( -1) + ( 0 ) ( 2) + ( 0) + ( 4) 2 2 2 2 = 20 = 2 5 So. For the practice we’ll compute all the norms. [Solution] (b) Turn them into a set of vectors that will form an orthonormal set of vectors under the standard Euclidean inner product for ¡ 3 .Linear Algebra Orthonormal Basis We now need to come back and revisit the topic of basis. (a) Show that they form an orthogonal set under the standard Euclidean inner product for ¡3 but not an orthonormal set. Let’s take a quick look at an example. one of them has a norm of 1.lamar. [Solution] Solution (a) Show that they form an orthogonal set under the standard Euclidean inner product for ¡3 but not an orthonormal set. All we need to do here to show that they form an orthogonal set is to compute the inner product of all the possible pairs and show that they are all zero. Example 1 Given the three vectors v1 = ( 2.

-1. 2. 0. vi = 0. v n } is an orthogonal set of non-zero vectors in an inner product space. cn = 0 .K . -1) = ç . c1 v1 + c2 v 2 + L + cn v n = 0 c1 v1 . c2 = 0 . u3 = u 2 . and we’ll need to show that the only scalars that work here are c1 = 0 . -1. 1 1 1 ö æ 2 v1 = ( 2. 0.÷ v1 5 5ø è 5 1 1 u2 = v 2 = ( 0. v i + c2 v 2 . v 2 . We’ve actually done most of the work here for this part of the problem already. vi + c2 v 2 .math. v i = 0 c1 v1 + c2 v 2 + L + cn v n . we can do this in a single step. 0.K . v i . Proof : Note that we need the vectors to be non-zero vectors because the zero vector could be in a set of orthogonal vectors and yet we know that if a set includes the zero vector it will be linearly dependent. u1 . n .Linear Algebra (b) Turn them into a set of vectors that will form an orthonormal set of vectors under the standard Euclidean inner product for ¡ 3 . 0 ) = ( 0. All we need to do is take the inner product of both sides with respect to v i . Back when we were working in ¡ n we saw that we could turn any vector v into a vector with norm 1 by dividing by its norm as follows. [Return to Problems] We have the following very nice fact about orthogonal sets. i = 1. We’ll leave it to you to verify that. and then use the properties of inner products to rearrange things a little. we can turn each of the vectors above into a set of vectors with norm 1. 0. 0 ) v2 1 u1 = u3 = 1 1 2 ö æ 1 v3 = ( 2. So. then S is also a set of linearly independent vectors. … . ÷ v3 2 5 5ø è 5 All that remains is to show that this new set of vectors is still orthogonal. now that we know there is a chance that these vectors are linearly independent (since we’ve excluded the zero vector) let’s form the equation. So.edu/terms. 1 v v This new vector will have a norm of 1. vi = 0 © 2007 Paul Dawkins 94 http://tutorial. Theorem 1 Suppose S = { v1 .lamar. 4 ) = ç . u 2 = u1 .aspx c1v1 . v i + L + cn v n . . v i + L + cn v n . In fact. u3 = 0 and so we have turned the three vectors into a set of vectors that form an orthonormal set.

K . v j = 0 if i ¹ j and so this reduced down to. ci v i . v 2 . v i = c1v1 + c2 v 2 + L + cn v n . u. v i > 0 and so the only way that this can be zero is if ci = 0 . v i = c1 v1 . v 2 v2 2 v2 + L + u. … . So. v 2 .K . The standard basis vectors for ¡ n are an orthonormal basis. 2. The following fact gives us one of the very nice properties about orthogonal/orthonormal basis. … .math. Theorem 2 Suppose that S = { v1 . v n } is an orthogonal basis for an inner product space and that u is any vector from the inner product space then.Linear Algebra Now. because we know the vectors in S are orthogonal we know that v i . u = u. v 2 v 2 + L + u. n . cn so that. v n v n Proof : We’ll just show that the first formula holds. given u we need to find scalars c1 . vi © 2007 Paul Dawkins 95 http://tutorial. since we know that the vectors are all non-zero we have v i . we are now ready to move into the main topic of this section. Definition 2 Suppose that S = { v1 . So. v i = 0 Next. (b) If S is also an orthonormal set then we call S an orthonormal basis. v1 v1 + u. Okay. (a) If S is also an orthogonal set then we call S an orthogonal basis. v1 v1 2 v1 + u. u= u. c2 = 0 .K . vi + c2 v 2 . cn = 0 and so these vectors are linearly independent.edu/terms. i = 1.lamar. v i + L + cn v n .aspx . Once we have that the second will follow directly from the fact that all the vectors in an orthonormal set have a norm of 1. Note that we’ve been using an orthonormal basis already to this point. we’ve shown that we must have c1 = 0 . u = c1 v1 + c2 v 2 + L + cn v n To find these scalars simply take the inner product of both sides with respect to v i . v n } is a basis for an inner product space. v n vn 2 vn If in addition S is in fact an orthonormal basis then. c2 . Since a set of orthogonal vectors are also linearly independent if they just happen to span the vector space we are working on they will also form a basis for the vector space.

vi > 0 . © 2007 Paul Dawkins 96 http://tutorial. with an orthogonal/orthonormal basis.aspx .edu/terms.projW u This theorem is not really the one that we need to construct an orthonormal basis.math. but we needed it more to acknowledge that we could do projections and to get the notation out of the way. We will use portions of this theorem. The following two theorems will help us to do that. vi u. Also note that projW ^ u can be easily computed from projW u by. v i = ci v i . This then gives us. it is very easy to write down the linear combination of basis vectors for that vector. u = projW u + projW ^ u where projW u is a vector that is in W and is called the orthogonal projection of u on W and projW ^ u is a vector in W ^ (the orthogonal complement of W) and is called the component of u orthogonal to W. because v i is a basis vector we know that it isn’t the zero vector and so we also know that v i . from the definition of the norm we see that we can also write this as. v i vi . v i Also. v j = 0 if i ¹ j and so this reduces to. The following theorem is the one that will be the main workhorse of the process. ci = and so we’re done. We would like to be able to construct an orthogonal/orthonormal basis for a finite dimensional vector space given any basis of that vector space. In other words.Linear Algebra Now.lamar. u. Then u can be written as. ci = u. projW ^ u = u . v i vi 2 However. we don’t need to go through all the work to find the linear combinations that we were doing in earlier sections. Note that this theorem is really an extension of the idea of projections that we saw when we first introduced the concept of the dot product. Theorem 3 Suppose that W is a finite dimensional subspace of an inner product space V and further suppose that u is any vector in V. since we have an orthogonal basis we know that v i . What this theorem is telling us is that for any vector in an inner product space.

{u1 . Further suppose that W has an orthogonal basis S = { v1 . called the Gram-Schmidt process.projW1 v 2 = v 2 - v 2 . projW u = u. However. u 2 is the portion of v 2 that is 1 orthogonal to u1 ). v n vn 2 vn If in addition S is in fact an orthonormal basis then. Step 1 : Let u1 = v1 . To see that u 2 ¹ 0 assume for a second that we do have u 2 = 0 .aspx . So. Finally. Using the result of Theorem 3 and the formula from Theorem 4 gives us the following formula for u 2 . u 2 .lamar. we need to verify that u 2 ¹ 0 because the zero vector cannot be a basis vector. just how does this theorem help us to construct an orthogonal/orthonormal basis? The following process. let’s observe an interesting consequence of how we found u 2 .math. v 2 v2 2 v2 + L + u. this isn’t terribly useful from a computational standpoint. v 2 .Linear Algebra Theorem 4 Suppose that W is a finite dimensional subspace of an inner product space V. v2 = v 2 . Step 2 : Let W1 = span {u1} and then define u 2 = projW ^ v 2 (i. projW u = u. This would give us.K . u1 u1 2 u1 Next. u 2 ¹ 0 . will construct an orthogonal/orthonormal basis for a finite dimensional inner product space given any basis. Technically. Gram-Schmidt Process Suppose that V is a finite dimensional inner product space and that { v1 . v 2 .K .K . v n v n So. Both u1 and u 2 are orthogonal and so are linearly independent by Theorem 1 above and this means that they are a © 2007 Paul Dawkins 97 http://tutorial. v 2 v 2 + L + u. u1 u1 2 v1 since u1 = v1 But this tells us that v 2 is a multiple of v1 which we can’t have since they are both basis vectors and are hence linearly independent. u n } . this is all there is to step 2 (once we show that u 2 ¹ 0 anyway) since u 2 will be orthogonal to u1 because it is in W1^ . v1 v1 + u. v n } is a basis for V.edu/terms. The following process will construct an orthogonal basis for V. u 2 = v 2 . We’ll also be able to develop some very nice facts about the basis that we’re going to be constructing as we go through the construction process. u1 u1 2 u1 = v 2 . v n } and that u is any vector in V then. v1 v1 2 v1 + u.e. To find an orthonormal basis simply divide the ui ’s by their norms.

Then because v1 . can be a little daunting and can make the process look more complicated than it in fact is. You should probably go through the steps of verifying the claims made here for the practice. We can compute u3 as follows. In particular u3 must be orthogonal to v1 and v 2 . u 2 } = span { v1 . span. define W2 = span {u1 . Therefore. Step 3 : This step is really an extension of Step 2 and so we won’t go into quite the detail here as we did in Step 2. With this step we can also note that because u3 is in the orthogonal complement of W2 (by construction) and because we know that.math. u 2 } = span { v1 . u3 = v3 . and v 3 are linearly independent we know that we must have u3 ¹ 0 . Finally. v3 } Step 4 : Continue in this fashion until we’ve found u n . u 2 } and this subspace has dimension of 2. v 2 } from the previous step we know as well that u3 must be orthogonal to all vectors in W2 .lamar. following an argument similar to that in Step 2 we get that. and v 3 . v 2 } which also has dimension 2. v1 and v 2 . First. Let’s summarize the process before we go onto a couple of examples. However. by Theorem 9 from the section on Basis we can see that we must in fact have. W2 = span {u1 . v 2 . both u1 and u 2 are linear combinations of v1 and v 2 and so u3 can be thought of as a linear combination of v1 . u 2 } and then define u3 = projW ^ v 3 and so u3 2 will be the portion of v 3 that is orthogonal to both u1 and u 2 . v 2 . span {u1 . u3 } = span { v1 . © 2007 Paul Dawkins 98 http://tutorial.edu/terms. v 2 } So. v 2 . with all the explanation as we provided.projW2 v3 = v3 - v 3 . u2 u2 2 u2 Next. will in fact span the same subspace as the two original vectors.aspx . span {u1 . There is the Gram-Schmidt process. u1 u1 2 u1 - v3 . Going through the process above. they are also linear combinations of v1 and v 2 and so W2 is a subspace of span { v1 . the two new vectors. This is a nice consequence of the Gram-Schmidt process.Linear Algebra basis for the subspace W2 = span {u1 . u1 and u 2 . u 2 .

u1 = 2 The second vector is then. 0 ) . let’s go through a couple of examples here.aspx .K n . -1. v 2 = (1. 0. u2 u2 2 v3 . u1 u1 2 u1 2 v 2 . u 2 .Linear Algebra Gram-Schmidt Process Suppose that V is a finite dimensional inner product space and that { v1 . 7.K . u n } .math. 0 ) The remaining two steps are going to involve a little more work. u2 u2 2 u2 . v 2 .K . span {u1 .K . u k } = span { v1 .K . u1 u1 u1 2 u1 u1 M u1 v3 . u1 2 u2 v n . -1) - 2 1 2 ( 2. u1 = v1 = ( 2. v n } is a basis for V then an orthogonal basis for V. but won’t be all that bad. 0 ) = æ . due to the construction process we have and uk will be orthogonal to span { v1 . u2 = v 2 and here is all the quantities that we’ll need. 0. . v k } k = 1. we’ll need to go through the Gram-Schmidt process a couple of times.lamar. v 2 . v k -1} for k = 2. Here is the formula for the second vector in our orthogonal basis. u1 u1 2 vn . -1) is a basis of ¡3 and assuming that we’re working with the standard Euclidean inner product construct an orthogonal basis for ¡ 3 . -1. The first step is easy.3. {u1 . -1) . Now.L - v n . Okay. n Example 2 Given that v1 = ( 2. u n -1 u n-1 2 u n -1 To convert the basis to an orthonormal basis simply divide all the new basis vectors by their norm. u 2 . u1 =5 u 2 = (1. Solution You should verify that the set of vectors above is in fact a basis for ¡ 3 . and v 3 = ( 3. v 2 .K . can be found using the following process. Also. u1 = v1 u2 = v 2 u3 = v3 M un = v n v 2 . -1.edu/terms. v 2 . 2.K . -1ö ç ÷ 5 è5 5 ø © 2007 Paul Dawkins 99 http://tutorial.

. u1 = -1 The third vector is then. u3 = v3 - v 3 .math. This will have two effects. There are two ways to approach this. . . -1. Let’s do it this way and see what we get. v 3 . The second way is to go through the Gram-Schmidt process and this time divide by the norm as we find each new vector.aspx . note that this is almost the same problem as the previous one except this time we’re looking for an orthonormal basis instead of an orthogonal basis. -1. u1 u1 2 u1 - v3 . ÷ è 6 6 6ø Okay that’s the first way to do it. . it will put a fair amount of roots into the vectors that we’ll need to work with.Linear Algebra The formula for the third (and final vector) is. © 2007 Paul Dawkins 100 http://tutorial. The first is often the easiest way and that is to acknowledge that we’ve got a orthogonal basis and we can turn that into an orthonormal basis simply by dividing by the norms of each of the vectors. -1) . ÷ è5 5 ø è 3 3 3ø æ 8 16 8 ö u3 = ç . 0 ) æ1 2 ö u 2 = ç . v 2 = (1.. 0. ÷ è3 3 3ø So. . 0 ) . -1) is a basis of ¡3 and assuming that we’re working with the standard Euclidean inner product construct an orthonormal basis for ¡ 3 . 7.0÷ 5 ø è 5 2 5 ö æ 1 w2 = ç . -1. u1 = 5 u2 = 30 5 u3 = 8 6 3 Note that in order to eliminate as many square roots as possible we rationalized the denominators of the fractions here. First. . -1 ÷ è5 5 ø You should verify that these do in fact form an orthogonal set. Example 3 Given that v1 = ( 2. because we are turning the new vectors into vectors with length one the norm in the Gram-Schmidt formula will also be 1 and so isn’t needed. 1 æ 2 ö w1 = ç . Dividing by the norms gives the following set of vectors. u2 u2 2 u2 6 5 and here are the quantities that we need for this step.lamar. -1) - -1 ( 2. and v 3 = ( 3. Second. v3 .edu/terms. u1 = ( 2. 0 ) 5 æ1 2 ö æ 8 16 8 ö ç . 7. the orthogonal basis that we’ve constructed is. Here are the norms of the vectors from the previous example.÷ 30 ø è 30 30 æ 1 2 1 ö w3 = ç . -1 ÷ = ç . Solution First. u 2 = 22 5 22 5 6 5 u1 2 =5 u2 2 = u3 = ( 3.

as we saw in the previous example there are two ways to get an orthonormal basis from any given basis. 0 ÷ = ç . The first new vector will be.v 2 .lamar. u 2 u 2 Finally. u1 = Here is the new orthogonal vector. 7. 1 5 v3 . -1÷ ç 5è 5 5 ø è5 5 ø Notice that this is the same as the second vector we found in Example 2.v 3 .. v 3 .edu/terms. to get the second vector we first need to compute. -1) .. this is the third orthogonal vector that we found in Example 2.0÷ . u3 = 1 3 æ 8 16 8 ö æ 1 2 1 ö w= . -1) - 2 æ 2 1 ö æ1 2 ö . 0 ) = ç . 0. So. . ÷=ç ÷ w 8 6 è3 3 3ø è 6 6 6 ø So.v3 . v 2 . w = v2 - u1 = v 2 . . Here is the dot product that we need for this step.÷ç ç ÷=ç . u1 u1 2 Now. ç .Linear Algebra Let’s go through this once just to show you the differences. -1. Of course that is something that we should expect to happen here. 0 ÷ v1 5 5 ø è 5 v 2 . u2 = 1 5 æ1 2 2 5 ö ö æ 1 w= . . and again we’ve acknowledged that the norms of the first two vectors will be 1 and so aren’t needed in this formula. .aspx . . Also note that we’ve acknowledged that the norm of u1 is 1 and so we don’t need that in the formula. ÷ 5 øè 5 5 ø 30 è 30 30 30 ø è 3 3 3 ø è Again. u1 = 1 1 1 æ 2 ö v1 = ( 2. u1 u1 ..ç . In this case we’ll need to divide by its norm to get the vector that we want in this case. -1 ÷ = ç ÷ w 30 è 5 5 30 ø ø è 30 30 w = v 3 . . Here is the final step to get our third orthonormal vector for this problem. u 2 = 22 30 1 2 5 ö æ 8 16 8 ö æ 1 öæ 2 ö 22 æ 1 w = ( 3. Each has its pros and cons and you’ll need to decide which method to use. u1 u1 however we won’t call it u 2 yet since we’ll need to divide by it’s norm once we’re done.math. . If we © 2007 Paul Dawkins 101 http://tutorial. Here are the dot products that we’ll need. . for the third orthogonal vector the formula will be. u1 = The orthogonal vector is then. we got exactly the same vectors as if we did when we just used the results of Example 2.ç . 2 5 w = (1.

. u 4 = (1.ö .1) æ1 1 1 3ö u2 = ç . 0.3 æ .1. we’ll need their norms so we can turn this set into an orthonormal basis. . . 0. Again. 0. u3 = u3 2 1 3 =4 = 3 4 = 2 3 and the fourth vector in out new orthogonal basis is. 0 ) . v 3 = (1. . v3 .1.1.1.1) 2 Here’s the dot product and norm we need for the second vector. 0 ) - 1 2 1 1 1 3 1 1 2 (1. . v 2 . 0 ) and v 4 = (1. . Example 4 Given that v1 = (1..1) . we’re looking for an orthonormal basis and so we’ve got our two options on how to proceed here. u1 =4 u 2 = (1. . the orthogonal basis is then. 0 ö ÷ 2ç ÷ ç ÷ 3 ç 4 4ø 3 è3 3 3 ø è2 2 ø 4 è4 4 4 Okay. u1 = 2 and the third orthogonal vector is.aspx . .1) = æ . .lamar.1. Solution Now. 0 ) - 3 1 1 1 3 (1. 0 ) is a basis of ¡ 4 and assuming that we’re working with the standard Euclidean inner product construct an orthonormal basis for ¡ 4 . . The first vector is.1. . .. © 2007 Paul Dawkins 102 http://tutorial.. . .1. 0 ÷ è3 3 3 ø æ1 1 ö u 4 = ç . .1. u1 = 3 The second orthogonal vector is then. however we do need to compute norms that we won’t need otherwise. 0 ö = æ . u1 = (1.4 æ . for the fourth orthogonal vector we’ll need.1.÷ è4 4 4 4ø æ1 1 2 ö u3 = ç . u2 = u2 2 1 4 v 4 . 0. .. 0. ..ö = æ . . v 2 = (1. u1 = 1 u1 2 v4 . 0. . 0. u1 = v1 = (1.1.Linear Algebra first compute the orthogonal basis and the divide all of them at then end by their norms we don’t have to work much with square roots. 0 ) - 1 1 1 1 1 1 3 1 1 2 1 1 (1. In this case we’ll construct an orthogonal basis and then convert that into an orthonormal basis at the very end.1.1.edu/terms. 0 ö ÷ ç ÷ 3 ç 4 4ø è3 3 3 ø 4 è4 4 4 Finally. u 2 = u1 2 =4 u2 2 = u3 = (1.2 æ .1.1. it will be up to you to determine what the best method for you to use is. .1) .1.ö ç ÷ 4 è4 4 4 4ø 1 2 3 4 For the third vector we’ll need the following dot products and norms v 3 .1. v 4 . 0 ÷ è2 2 ø Next.1) .math.1. 0.1.

. To get an orthogonal basis we would need to perform Gram-Schmidt on the set. 0 ) (1. © 2007 Paul Dawkins 103 http://tutorial. To make our life at least somewhat easier with the work let’s add in the fourth on to get the set of vectors. 4 ) into an orthogonal basis for ¡3 and assume that we’re working with the standard Euclidean inner product. ÷ u1 è2 2 2 2ø 1 1 1 3 ö æ 1 . 0 ÷ u4 = ç u4 2 è 2 ø Now. we saw how to expand a linearly independent set of vectors into a basis for a vector space.Linear Algebra u1 = 2 u2 = 3 2 u3 = 6 3 u4 = 2 2 The orthonormal basis is then. ( 0. However. Next. So.1) If we used the first one we’d actually have an orthogonal set without any work. We can do the same thing here with orthogonal sets of vectors and the Gram-Schmidt process. 0 ) ( 0. v1 = ( 2.0. since they are already orthogonal that will simplify some of the work. Since both of these vectors have a zero in the second term we can add in any of the following to the set.1.1. However.edu/terms.math. . Now.1) Now. we know these are linearly independent and since there are three vectors by Theorem 6 from the section on Basis we know that they form a basis for ¡ 3 . . they don’t form an orthogonal basis. -1) v 2 = ( 2. recall that in order to expand a linearly independent set into a basis for a vector space we need to add in a vector that is not in the span of the original vectors.. but that would be boring and defeat the purpose of the example.. Example 5 Expand the vectors v1 = ( 2.0. .0÷ u3 = ç u3 6 ø è 6 6 1 1 æ 1 ö .lamar. Solution First notice that the two vectors are already orthogonal and linearly independent. w1 = w2 = w3 = w4 = 1 æ1 1 1 1ö u1 = ç .1) (1.1. 4 ) v 3 = ( 0.1. 0.0.1. -1) and v 2 = ( 2. .u2 = ç ÷ u2 è2 3 2 3 2 3 2 3ø 1 2 æ 1 1 ö . since the first two vectors are already orthogonal performing Gram-Schmidt would not have any affect (you should verify this).aspx . Since they are linearly independent and we know that a basis for ¡ 3 will contain 3 vectors we know that we’ll only need to add in one more vector. 0. Doing so will retain the linear independence of the set. let’s just rename the first two vectors as.

Linear Algebra

u1 = ( 2, 0, -1)

u 2 = ( 2,0, 4 )
2 2

and then just perform Gram-Schmidt for the third vector. Here are the dot products and norms that we’ll need.

v 3 , u1 = -1
The third vector will then be,

v3 , u 2 = 4

u1

=5

u2

= 20

u3 = ( 0,1,1) -

-1 4 ( 2, 0, -1) - ( 2, 0, 4 ) = ( 0,1, 0 ) 5 20

© 2007 Paul Dawkins

104

http://tutorial.math.lamar.edu/terms.aspx

Linear Algebra

Least Squares
In this section we’re going to take a look at an important application of orthogonal projections to inconsistent systems of equations. Recall that a system is called inconsistent if there are no solutions to the system. The natural question should probably arise at this point of just why we would care about this. Let’s take a look at the following examples that we can use to motivate the reason for looking into this.

Example 1 Find the equation of the line that runs through the four points (1, -1) , ( 4,11) ,

( -1, -9 ) and ( -2, -13) .
Solution So, what we’re looking for are the values of m and b for which the line, y = mx + b will run through the four points given above. If we plug these points into the line we arrive at the following system of equations.

m + b = -1 4m + b = 11 - m + b = -9 -2m + b = -13

The corresponding matrix form of this system is,

é 1 ê 4 ê ê -1 ê ë -2
m=4

1ù é -1ù ú m ê 11ú 1ú é ù ê ú = 1ú ê b ú ê -9 ú ë û ú ê ú 1û ë -13û
b = -5

Solving this system (either the matrix form or the equations) gives us the solution, So, the line y = 4 x - 5 will run through the four points given above. Note that this makes this a consistent system.

Example 2 Find the equation of the line that runs through the four points ( -3, 70 ) , (1, 21) ,

( -7,110 )

and ( 5, -35 ) .

Solution So, this is essentially the same problem as in the previous example. Here are the system of equations and matrix form of the system of equations that we need to solve for this problem.

-3m + b = 70 m + b = 21 -7m + b = 110 5m + b = -35

é -3 ê 1 ê ê -7 ê ë 5

1ù é 70 ù ú m ê 21ú 1ú é ù ê ú = 1ú ê b ú ê 110 ú ë û ú ê ú 1û ë -35û

Now, try as we might we won’t find a solution to this system and so this system is inconsistent.
© 2007 Paul Dawkins 105 http://tutorial.math.lamar.edu/terms.aspx

Linear Algebra

The previous two examples were asking for pretty much the same thing and in the first example we were able to answer the question while in the second we were not able to answer the question. It is the second example that we want to look at a little closer. Here is a graph of the points given in this example.

We can see that these points do almost fall on a line. Without the reference line that we put into the sketch it would not be clear that these points did not fall onto a line and so asking the question that we did was not totally unreasonable. Let’s further suppose that the four points that we have in this example came from some experiment and we know for some physical reason that the data should all lie on a straight line. However, inaccuracies in the measuring equipment caused some (or all) of the numbers to be a little off. In light of this the question in Example 2 is again not unreasonable and in fact we may still need to answer it in some way. That is the point of this section. Given this set of data, can we find the equation of a line that will as closely as possible (whatever this means…) approximate each of the data points. Or more generally, given an inconsistent system of equations, Ax = b , can we find a vector, let’s call it x , so that Ax will be as close to b as possible (again, what ever this means…). To answer this question let’s step back a bit and take a look at the general situation. So, we will suppose that we have an inconsistent system of n equations in m unknowns, Ax = b , so the coefficient matrix, A, will have size n ´ m . Let’s rewrite the system a little and make the following definition.

e = b - Ax

We will call e the error vector and we’ll call e = b - Ax the error since it will measure the distance between Ax and b for any vector x in ¡ m (there are m unknowns and so x will be in ¡ m ). Note that we’re going to be using the standard Euclidean inner product to compute the norm in these cases. The least squares problem is then the following problem. Least Square Problem Given an inconsistent system of equations, Ax = b , we want to find a vector, x , from ¡ m so that the error e = b - A x is the smallest possible error. The vector x is called the least squares solution.
© 2007 Paul Dawkins 106 http://tutorial.math.lamar.edu/terms.aspx

lamar. the component of u orthogonal to W. upon dropping the middle term. just what does this theorem do for us? Well for any vector x in ¡ m we know that Ax will be a linear combination of the column vectors from A. Therefore projW u .projW u < u . The first thing that we’ll want to do is look at a more general situation. So.Ax © 2007 Paul Dawkins 107 http://tutorial. The following theorem will be useful in solving the least squares problem.projW u + projW u .projW u is in fact projW ^ u .w > u .projW u are orthogonal vectors.w > u . So. u . so that e is smaller than (i.w 2 2 2 Or. Theorem 1 Suppose that W is a finite dimensional subspace of an inner product space V and that u is any vector in V. Then Ax will not only be in W (since it’s a linear combination of the column vectors) but as we let x range over all possible vectors in ¡ m Ax will range over all of W. e < e .w and u . 2 2 2 2 > 0 and so if we drop this u .projW u This is equivalent to. Now. and so is orthogonal to any vector in W. smaller norm) than all other possible values of e .math. ¡ n since each column of A has n entries) that is spanned by the column vectors of A.edu/terms.w is a difference of vectors in W and hence must also be in W. Now. u .w = ( u .projW u and so we’re done. Likewise. we’re calling it x . the least squares problem is asking us to find the vector x in ¡ m . if we have w ¹ projW u then we know that projW u . If we plug in for the definition of the errors we arrive at. b .w Finally. 2 2 u .w ) = u .w Proof : For any vector w in W we can write.e. let W be the subspace of ¡ n (yes. By best approximation we mean that for every w (that is not projW u ) in W we will have.projW u ) + ( projW u .w = u .projW u + projW u . by the Pythagorean Theorem we have. u .Linear Algebra Solving this problem is actually easier than it might look at first. i.e.w = ( u . The best approximation to u from W is then projW u .projW u ) + ( projW u .w term we get.w ) Notice that projW u . 2 u . u .Ax < b .aspx .

Before we give that theorem however. Theorem 2 Suppose that A is an n ´ m matrix with linearly independent columns. AT Ax = AT b Further if A has linearly independent columns then there is a unique least squares solution given by. Ax = x1c1 + x2c2 + L + xmc m Then using Ax = 0 we also know that. We could first compute projW b and then solve Ax = projW b for x and we’d have the solution that we’re after. Ax = x1c1 + x2c2 + L + xmc m = 0 However. So. Ax = projW b Of course we are actually after x and not Ax but this does give us one way to find x . AT A is an invertible matrix. The Ax range over all possible vectors in W and we want the one that is closed to some vector b in ¡ n . Theorem 3 Given the system of equations Ax = b . this is exactly the type of situation that Theorem 1 is telling us how to solve.lamar. x = 0 . let’s suppose that AT Ax = 0 . a least squares solution to the system denoted by x . The following theorem will now give us a better method for finding the least squares solution to a system of equations. This tells us that Ax is in the null space of AT . If c1 .aspx . Therefore AT Ax = 0 has only the trivial solution and so AT A is an invertible matrix. Theorem 7 from the section on Inner Product Spaces tells us that these two spaces are orthogonal complements and Theorem 6 from the same section tells us that the only vector in common to both must be the zero vector and so we know that Ax = 0 . c2 . x = ( AT A ) AT b -1 © 2007 Paul Dawkins 108 http://tutorial. Then.edu/terms. since the columns of A are linearly independent this equations can only have the trivial solution. cm are the columns of A then we know that Ax can be written as. However. There is however a better way of doing this.math. but we also know that Ax is in the column space of A. … . Proof : From Theorem 8 in the Fundamental Subspaces section we know that if AT Ax = 0 has only the trivial solution then AT A will be an invertible matrix. we’ll need a quick fact.Linear Algebra With the least squares problem we are looking for the closest that we can get Ax to b. will also be a solution to the associated normal system. Theorem 1 tells us that the one that we’re after is.

Ax ) = 0 AT Ax = AT b Or. So. let’s consider. -35 ) . For the second part we don’t have much to do.lamar. b .projW b ) = AT ( b . ( -7. é -3 ê 1 ê ê -7 ê ë 5 1ù é 70 ù ú m ê 21ú 1ú é ù ê ú = 1ú ê b ú ê 110 ú ë û ú ê ú 1û ë -35û © 2007 Paul Dawkins 109 http://tutorial.Ax = b .projW b is in the orthogonal complement of W. AT Ax = AT b and we will have a least squares solution. (1. Ax = projW b Now. So.projW b However as pointed out in the proof of Theorem 1 we know that b .Linear Algebra Proof : Let’s suppose that x is a least squares solution and so. To find the unique solution we just need to multiply both sides by the inverse of AT A . Example 3 Use least squares to find the equation of the line that will best approximate the points ( -3. b .edu/terms. with a little rewriting we arrive at. AT ( b . and so we see that x must also be a solution to the normal system of equations. Now we should work a couple of examples. Next. to find a least squares solution to Ax = b all we need to do is solve the normal system of equations. Solution The system of equations that we need to solve from Example 2 is. by Theorem 8 in the Fundamental Subspaces section this means that AT Ax = AT b has a unique solution. However. 70 ) . 21) . W is the column space of A and by Theorem 7 from the section on Inner Product Spaces we know that the orthogonal complement of the column space of A is in fact the null space of AT and so.110 ) and ( 5.aspx . If the columns of A are linearly independent then AT A is invertible by Theorem 2 above. We’ll start with Example 2 from above. we must then have.projW b must be in the null space of AT .math.

1 10 y = -12. m=- 121 = -12.math. Here are the various matrices that we’ll need here. we have.4 5 So.aspx .1x + 29.lamar.4 b= 147 = 29. the line that best approximates all the points above is given by. The sketch of the line and points after Example 2 above shows this line in relation to the points. Example 4 Find the least squares solution to the following system of equations. © 2007 Paul Dawkins 110 http://tutorial.edu/terms. é 2 -1 1ù ê 1 -5 2 ú ú A=ê ê -3 1 -4 ú ê ú ë 1 -1 1û 1 -3 1ù é 2 ê -1 -5 T A =ê 1 -1ú ú ê 1 2 -4 ú 1û ë é -22 ù A b = ê 0ú ê ú ê -21ú ë û T é -4 ù ê 2ú b=ê ú ê 5ú ê ú ë -1û é 15 -11 17 ù A A = ê -11 28 -16 ú ê ú ê 17 -16 22 ú ë û T The normal system if then. é 2 -1 1ù é -4 ù ê 1 -5 2 ú é x1 ù ê 2 ú ê ú ê x2 ú = ê ú ê -3 1 -4 ú ê ú ê 5 ú ê ú ê x3 ú ê ú ë û ë 1 -1 1û ë -1û Solution Okay there really isn’t much to do here other than run through the formula.Linear Algebra So. é -3 ê 1 ë 1 -7 1 1 é -3 5ù ê 1 ê 1ú ê -7 û ê ë 5 1ù 1ú é m ù é -3 ú = 1ú ê b ú ê 1 ë û ë ú 1û 1 -7 1 1 é 84 -4 ù é m ù é -1134ù ê -4 4ú ê b ú = ê 166ú ë ûë û ë û This is a fairly simple system to solve and upon doing so we get. é -3 ê 1 A=ê ê -7 ê ë 5 1ù 1ú ú 1ú ú 1û é -3 AT = ê ë 1 1 -7 1 1 5ù 1ú û é 70 ù ê 21ú ú b=ê ê 110 ú ê ú ë -35û é 70 ù 5ù ê 21ú ê ú 1ú ê 110 ú û ê ú ë -35û The normal system that we need to solve is then.

lamar. each measure just how close each possible choice of m and b will get us to the exact answer (which is given by the components of b). Then if we plug in the points that we’ve been given we’ll see that this formula is nothing more than the components of the error vector. the x coordinate or our point.( m + b ) ú êe 2 ú 1ú é ù ê = =ê ú 1ú ê b ú ê110 . the components of the error vector.Linear Algebra é 15 -11 17 ù é x1 ù é -22 ù ê -11 28 -16 ú ê x ú = ê 0 ú ê ú ê 2ú ê ú ê 17 -16 22ú ê x3 ú ê -21ú ë ûë û ë û This system is a little messier to solve than the previous example.Ax e = b . We then defined.( 5m + b ) ú ëe 4 û ë û So. é . e = b . x1 = - 18 7 x2 = - 151 210 x3 = 107 210 In vector form the least squares solution is then. in the case of our example we were looking for. é 70 ù é -3 ê 21ú ê 1 ú-ê e = b . Okay. but upon solving we get.( mxi + b ) is as small as possible (in some way that we’re trying to figure out here) for all the points that we’ve been given.151 ú x = ê 210 ú ê 107 ú ë 210 û We need to address one more issue before we move on to the next section.math. e .( -7m + b ) ú êe 3 ú ë û ê ú ê ú ú 1û ê -35 . and stated that what we meant by as close to b as possible was that we wanted to find the x for which.edu/terms. For this example the general formula for e is.( -3m + b ) ù é e1 ù 1ù ú ú m ê 21 . We’ve been given a set of points ( xi . yi ) and we want to determine an m and a b so that when we plug xi . e i = yi . Now.Ax = ê ê 110 ú ê -7 ê ú ê ë -35û ë 5 é 70 . this is all fine in terms of mathematically defining what we mean by “as close as possible”. We can also think about this in terms of the equation of the line. so that Ax will be as close to b as possible in some way.Ax e < e for all x ¹ x . into mx + b the error. © 2007 Paul Dawkins 111 http://tutorial.aspx . denoted x . When we opened this discussion up we said that we were after a solution.18 ù 7 ê . but in practical terms just what are we asking for here? Let’s go back to Example 3.

Ax is as small as possible.4 ) = 4.3 e 2 = 21 .9 On a side note. Example 5 Compute the error for the solution from Example 3.2 = 8.1ú 1ú ë û ú ê ú 1û ë -3.7 ) + ( -4. will be the value of x for which.3 ù ú -12.9û 2 e 2 = ( 4.1 e 4 = -35 .1( 5 ) + 29.4 ) = -3.edu/terms.math.( -12.( -12. x . or in other words is smaller than all other possible choices of x.0125. 1ù é 4. y = -12. the line that we found using least squares is.4 ú = ê -4. we could have just as easily computed these by doing the following matrix work.4 ) = 3.1( -7 ) + 29. The solution we’re after is the value that will give the least value of the sum of the squares of the errors.4 We can compute the errors for each of the points by plugging in the given x value into this line and then taking the difference of the result form the equation and the known y value.1) + ( -3. © 2007 Paul Dawkins 112 http://tutorial.2 2 2 2 Þ e = 64. according to our discussion above this means that if we choose any other value of m and b and compute the error we will arrive at a value that is larger than 8. Here are the error computations for each of the four points in Example 3. Solution First. let’s compute the following.lamar. e 2 2 2 = e12 + e 22 + e 32 + e 42 < e12 + e 2 + e 32 + e 4 = e 2 and hence the name “least squares”.9 ) = 64.aspx .Linear Algebra émù x=ê ú ëb û so that. e 1 = 70 .( -12.1 ê 3.1(1) + 29.1x + 29.7 e 3 = 110 .0125 Now. We can now answer just what we mean by “as small as possible”. é 70 ù é -3 ê 21ú ê 1 ú-ê e =ê ê 110 ú ê -7 ê ú ê ë -35û ë 5 The square of the error and the error is then.4 ) = -4. First. e 2 2 2 = e12 + e 2 + e 32 + e 4 The least squares solution.7 ú 1ú é ù ê ú ê 29. e = b .3) + ( 3.1( -3) + 29.( -12.

c2 . define R (and yes. u 2 . u2 M cm . u m . u1 ê c . u m u m Next. We can then write A and Q as. u m in the following manner. Next. u m u m c m = c m . u 2 M c2 . QR. define Q (yes. u1 c2 . u 2 u 2 + L + c 2 . because each of the ci ’s are in span {u1 . … . c 2 . u2 M c2 .edu/terms. u1 cm . Okay. é c1 . cm . u 2 M cm . Theorem 1 Suppose that A is an n ´ m matrix with linearly independent columns then A can be factored as. u1 u1 + c m . A = QR where Q is an n ´ m matrix with orthonormal columns and R is an invertible m ´ m upper triangular matrix. u1 u1 + c 2 . u 2 . Proof : The proof here will consist of actually constructing Q and R and showing that they in fact do multiply to give A. u1 u1 + c1 .math. Also suppose that we perform the Gram-Schmidt process on these vectors and arrive at a set of orthonormal vectors u1 .Linear Algebra QR-Decomposition In this section we’re going to look at a way to “decompose” or “factor” an n ´ m matrix as follows. this will eventually be the R from the theorem statement) to be the m ´ m matrix defined as. um L L O L ù ú ú ú ú ú û c m . A = [ c1 c2 L cm ] Q = [ u1 u2 L um ] Next. u 2 . um L L O L c 2 . the Q in the theorem statement) to be the n ´ m matrix whose columns are u1 . u 2 . u m cm . u m ù ú ú ú ú ú û QR = [ u1 u2 é c1 . u m u m M M c 2 = c 2 .u R=ê 1 2 ê M ê ê c1 . u1 cm .lamar. let’s examine the product.u L um ] ê 1 2 ê M ê ê c1 . u 2 u 2 + L + c1 . let’s start with A and suppose that it’s columns are given by c1 . … . u m ë Now. u 2 u 2 + L + c m . c1 = c1 . u1 ê c .aspx . u1 c2 . u m and so Q will be a matrix with orthonormal columns. u m ë 113 © 2007 Paul Dawkins http://tutorial. … .K u m } we know from Theorem 2 of the previous section that we can write each ci as a linear combination of u1 . … .

are non-zero. u1 u1 .ci . u 2 u 2 . … . u2 M c2 .Linear Algebra From the section on Matrix Arithmetic we know that the jth column of this product is simply Q times the jth column of R.L . we know from Theorem 2 from the Special Matrices section that a triangular matrix will be invertible if the main diagonal entries. Here is the general formula for ui from the Gram-Schmidt process. Now. the diagonal entries of R are non-zero and hence R must be invertible. ui = ui + ci . ci . u 2 M cm . ui -1 u i -1 . 2.ci . u1 u1 .math. u j with i < j . u i ¹ 0 ui . u i + ci . ui + ci . u 2 L um ] ê ê M ê ê c1 . u1 ê c1 . we can factor A as a product of Q and R and Q has the correct form. u i -1 ui -1 . u m ù ú ú ú ú ú û So. u1 u1 + ci . ui = ci .edu/terms. So. ck -1 . ui = 0 j = 1. Now. c2 . QR = [ u1 = [ c1 =A u2 c2 é c1 . u i = u i . u 2 u 2 + L + ci . u m L L O L c m . ci .lamar. ui -1 ui -1 Let’s look at the diagonal entries of R.ci . ui = u i .K i . ui However the ui are orthonormal basis vectors and so we know that u j . solving this for ci gives.1 ci . Now all that we need to do is to show that R is an invertible upper triangular matrix and we’ll be done. This is fairly easy to show. if you work through a couple of these you’ll see that when we multiply Q times the jth column of R we arrive at the formula for c j that we’ve got above. from the Gram-Schmidt process we know that uk is orthogonal to c1 . u1 c2 . In other words. This means that all the inner products below the main diagonal must be zero since they are all of the form ci . ui . u1 cm . However. ui ¹ 0 Using these we see that the diagonal entries are nothing more than. ci = ui + ci .aspx . u1 u1 + ci . We’ll plug in the formula for ci into the inner product and do some rewriting using the properties of the inner product. u 2 u 2 + L + ci . First. ui -1 ui -1 Recall that we’re assuming that we found the orthonormal ui ’s and so each of these will have a norm of 1 and so the norms are not needed in the formula. u m ë L cm ] c 2 . ui + L + ci . u 2 u 2 . © 2007 Paul Dawkins 114 http://tutorial.

é 2 1 3ù A = ê -1 0 7 ú ê ú ê 0 -1 -1ú ë û Solution The columns from A are. é c1 . u1 c3 . u2 0 c3 . u 3 ù é 5 ú ê ú=ê 0 ú ê 0 û ê ë ùé 5 úê 2 úê 0 6 úê 1 6úê ûë 0 1 6 2 5 6 30 - 1 5 22 30 16 6 0 ù ú ú ú ú û ù ú ú ú ú û So. é 25 ù ê ú u1 = ê . now that we’ve gotten the proof out of the way let’s work an example.1 ê ú ê 5 ê 0 -1 -1ú ê 0 ë û ê ë 1 30 2 30 5 30 2 5 6 30 - 1 5 22 30 16 6 0 We’ll leave it to you to verify that this multiplication does in fact give A. u 2 c3 .aspx . So. u1 ê R=ê 0 ê 0 ë c2 . é 2ù c1 = ê -1ú ê ú ê 0ú ë û é 1ù c2 = ê 0ú ê ú ê -1ú ë û é 3ù c3 = ê 7 ú ê ú ê -1ú ë û We performed Gram-Schmidt on these vectors in Example 3 of the previous section. © 2007 Paul Dawkins 115 http://tutorial. the orthonormal vectors that we’ll use for Q are. 2 é 2 1 3ù é 5 ê -1 0 7 ú = ê .Linear Algebra So. There is a nice application of the QR-Decomposition to the Least Squares Process that we examined in the previous section. the QR-Decomposition for this matrix is.math.edu/terms. é ê u2 = ê ê êë é 2 ê 5 Q = ê .15 ê ê 0 ë 1 30 2 30 5 30 1 30 ù ú 2 ú 30 ú 5 30 ú û ù ú 2 ú 6 ú 1 6ú û 1 6 é ê u3 = ê ê ê ë 1 6 ù ú 2 ú 6 ú 1 6ú û The matrix R is.15 ú ê ú ë 0û and the matrix Q is. u1 c2 .lamar. To see this however. Example 1 Find the QR-decomposition for the matrix. we will first need to prove a quick theorem.

Linear Algebra Theorem 2 If Q is an n ´ m matrix with n ³ m then the columns of Q are an orthonormal set of vectors in ¡ n with the standard Euclidean inner product if and only if QT Q = I m . qT (the transposes are needed to turn the column vectors. i T é q1 ù ê Tú q T Q =ê 2ú ê M ú ê Tú êq m ú ë û Now. é q1 gq1 ê q gq T Q Q=ê 2 1 ê M ê ë q m gq1 q 2 gq 2 M q m gq 2 q1 gq 2 q1 gq m ù q 2 gq m ú ú M ú ú q m gq m û © 2007 Paul Dawkins 116 http://tutorial. q m be the columns of Q. Because the columns are vectors in ¡ n and we know from Theorem 1 in the Orthonormal Basis section that a set of orthogonal vectors will also be linearly independent. … . v . there is some work that we can do that we’ll need in both parts so let’s do that first.math. q 2 . Proof : Now recall that to prove an “if and only if” theorem we need to assume each part and show that this implies the other part. Entries in the product will be rows of QT times columns of Q and so the product will be.aspx . into 2 m row vectors. ug v . qi .lamar. the T rows of QT are q1 . qT . T é q1 q1 ê T q q QT Q = ê 2 1 ê M ê T ê q mq1 ë T q1 q 2 qT q 2 2 M T q mq 2 L L L L L L T q1 q m ù ú qT q m ú 2 M ú ú qT q m ú m û Recalling that ug v = vT u we see that we can also write the product as. Note that the only way Q can have orthonormal columns in ¡ n is to require that n ³ m . Therefore. Q = [ q1 q2 L qm ] For the transpose we take the columns of Q and turn them into the rows of QT . because we want to make it clear that we’re using the standard Euclidean inner product we will go back to the dot product notation. However. However. Also.edu/terms. u. Let q1 . instead of the usual inner product notation. qT ) and. So. let’s take a look at the product QT Q . from Theorem 2 in the Linear Independence section we know that if m > n the column vectors will be linearly dependent. … .

let’s actually do the proof. 0 1 M 0 0ù 0ú ú=I m Mú ú 1û ( Ü) Here assume that QT Q = I m and we’ll need to show that this means that the columns of Q are orthogonal.K m Therefore the product is. qi gq j = 0 i ¹ j qi gqi = 1 L L L i. qi gq j = 0 i ¹ j i. 2. We’ll start with the normal system for Ax = b . The following theorem can be used. Rx = QT b Proof : There really isn’t much to do here other than plug formulas in. j = 1.math.Linear Algebra Now.lamar.edu/terms. AT Ax = AT b Now. we’re assuming that. ( Þ ) Assume that the columns of Q are orthogonal and show that this means that we must have QT Q = I m . é q1 gq1 ê q gq T Q Q=ê 2 1 ê M ê ë q m gq1 q 2 gq 2 M q m gq 2 q1 gq 2 L L L q1 gq m ù é 1 q 2 gq m ú ê 0 ú=ê M ú êM ú ê q m gq m û ë 0 qi gqi = 1 0 1 M 0 L L L 0ù 0ú ú = Im Mú ú 1û However. 2. © 2007 Paul Dawkins 117 http://tutorial. to significantly reduce the amount of work required for the least squares problem. é1 ê0 T Q Q=ê êM ê ë0 So we’re done with this part. on occasion. simply by setting entries in these two matrices equal we see that. Then the normal system associated with Ax = b can be written as. Since we are assuming that the columns of Q are orthonormal we know that. A has linearly independent columns we know that it has a QR-Decomposition for A so let’s plug the decomposition into the normal system and using properties of transposes we’ll rewrite things a little. Theorem 3 Suppose that A has linearly independent columns.K m and this is exactly what it means for the columns to be orthogonal so we’re done. So. j = 1.aspx .

So. é 2 -1 1ù é -4 ù ê 1 -5 2 ú é x1 ù ê 2 ú ê ú ê x2 ú = ê ú ê -3 1 -4 ú ê ú ê 5 ú ê ú ê x3 ú ê ú ë û ë 1 -1 1û ë -1û Solution First.Linear Algebra ( QR ) T QRx = ( QR ) b T RT QT QRx = RT QT b Now.3ú ê ú ë 1û é -1ù ê -5ú c2 = ê ú ê 1ú ê ú ë -1û é 1ù ê 2ú c3 = ê ú ê -4 ú ê ú ë 1û Now. we’ll need to perform Gram-Schmidt on these to get them into a set of orthonormal vectors. The first step is. Example 2 Find the least squares solution to the following system of equations.math.edu/terms.lamar. é 2ù ê 1ú u1 = c1 = ê ú ê -3ú ê ú ë 1û Here’s the inner product and norm that we’ll need for the second step. u1 = -11 The second vector is then. we’ll leave it to you to verify that the columns of A are linearly independent. we’ll also multiply both sides by RT ( ) -1 .aspx . Upon doing all this we arrive at. just how is this supposed to help us with the Least Squares Problem? We’ll since R is upper triangular this will be a very easy system to solve. Here are the columns of A. é 2ù ê 1ú c1 = ê ú ê . u1 2 = 15 © 2007 Paul Dawkins 118 http://tutorial. c2 . Rx = QT b So. since the columns of Q are orthonormal we know that QT Q = I m by Theorem 2 above. Let’s rework the last example from the previous section only this time we’ll use the QR-Decomposition method. Also. we know that R is an invertible matrix and so know that RT is also an invertible matrix. It can however take some work to get down to this system.

6 ú u1 5 ê ú ê ú ê 4ú -1û 1û ë .-11 ê 1ú = ê 15 ú u2 = c2 u1 = 2 ê 1ú 15 ê -3ú ê . u2 = 4485 15 u3 = 3 20930 299 é ê ê w1 = ê êê ë ù ú 1 15 ú 3 ú 15 ú 1 ú 15 û 2 15 7 é 4485 ù ê 64 ú ê . If we divide each of them by their norms we will get the orthonormal vectors that we need for the decomposition.64 ú c2 .6 ú ê . c3 . these are the orthogonal vectors.20930 ù ê 11 ú ê 20930 ú w 3 = ê 81 ú ê .lamar.4 ú ë 4485 û 118 é .math.15 ê 1 ë 15 Finally. R is given by. u1 = 17 c3 .20930 ú 18 .299 ê 15 ú = ê 299 ú 243 ê -4ú 15 ê -3ú 15 ê . and final orthogonal vector is then.64 ú . u1 ê ú . u1 u1 2 u1 - u2 7 é 15 ù é .299 ú 5 ê 4 ú ê 54 ú ê ú ê ú ë 1û ë 1û ë . u 2 = - 53 15 c3 .18 ú ë 20930 û We can now write down Q for the decomposition.15 17 = ê ú .20930 ú ê .354 ù é 1ù é 2ù 299 ê 33 ú ê 2ú ê 1ú 53 ê .4485 ú w 2 = ê 18 ú ê .ê ú .edu/terms. u1 = 15 The orthonormal vectors are then. é 2 15 ê 1 ê 15 Q=ê 3 ê . u 2 u2 2 u1 = 15 2 u2 2 = 299 15 The third.4485 ú ê.20930 ú û 118 20930 © 2007 Paul Dawkins 119 http://tutorial. The norms are.aspx .15 û ë ë The final step will require the following inner products and norms. u 3 = c3 - c3 .299 û Okay. 7 4485 - - 64 4485 18 4485 4 4485 ù ú 11 20930 ú 81 ú .Linear Algebra 7 é -1ù é 2 ù é 15 ù ê -5ú ê ú ê .15 û ë .

After all. 15 x1 - 11 17 22 x2 + x3 = 15 15 15 299 53 242 x2 x3 = 4485 4485 4485 210 107 x3 = 20930 20930 Þ Þ Þ These are the same values that we received in the previous section.edu/terms.4485 ú ú 18 . u1 c2 .math.4485 ú ê x2 ú = ê .4485 ú ú 210 20930 ú û 17 15 Okay.lamar. The answer is that by hand. u 3 ù é ú ê ú=ê ú ê û ê ë 15 0 0 - 11 15 299 4485 0 ù ú 53 .20930 ú ê ú û ë -1û 1 15 15 0 0 - 11 15 299 4485 0 ù é x ù é .Linear Algebra é c1 .4485 ú ê ú ú ê 5ú 18 . however. this may not be the best way for doing these problems.20930 ú û 1 15 The normal system can then be written as.aspx . u 2 c3 .4485 ú ê x2 ú = ê ú êx ú ê 210 20930 ú ë û ë 3 û ê17 15 2 15 7 4485 118 20930 17 15 1 15 - 3 15 18 4485 81 20930 - 64 4485 11 20930 ù é -4 ù ú ê 2ú 4 . © 2007 Paul Dawkins 120 http://tutorial. we can now proceed with the Least Squares process. é ê ê ê ê ë 15 0 0 - 11 15 299 4485 0 é ê ê ê ê ë ùéx ù é ú ê 1ú ê 53 . u1 ê R=ê 0 ê 0 ë c2 . é ê QT = ê ê êë 2 15 7 4485 118 20930 1 15 - 3 15 18 4485 81 20930 - 64 4485 11 20930 ù ú 4 . it was a lot of work and some of the numbers were downright awful. At this point you are probably asking yourself just why this method is better than the method we used in the previous section.22 ù ú ê 1 ú ê 15 ú 53 . u1 c3 . if you are going to program the least squares method into a computer all of the steps here are very easy to program and so this method is a very nice method for programming the Least Squares process.242 ú 4485 ú ê x ú ê 107 ú 210 20930 ú ë û û ë 3 û ê 20930 ú 18 7 151 x2 = 210 107 x3 = 210 x1 = - This corresponds to the following system of equations. First we’ll need QT . u2 0 c3 .

Definition 1 Let Q be a square matrix and suppose that Q -1 = Q T then we call Q an orthogonal matrix.lamar. Theorem 2 Suppose that Q is an n ´ n matrix. but this promptly tells us that. Before we see any examples of some orthogonal matrices (and we have already seen at least one orthogonal matrix) let’s get a couple of theorems out of the way. The next theorem gives us an easier check for a matrix being orthogonal.math. This is also going to be a fairly short section (at least in relation to many of the other sections in this chapter anyway) to close out the chapter. (a) Q is orthogonal.aspx . in this case if we prove ( a ) Û ( b ) and ( a ) Û ( c ) we can get the above loop of equivalences by default and it will be much easier to prove the two equivalences as we’ll see. since this is exactly what is needed to show that we have an inverse we can see that Q -1 = QT and so Q is orthogonal. © 2007 Paul Dawkins 121 http://tutorial. then the following are all equivalent. We’ll start with the following definition. Proof : This is a really simple proof that falls directly from the definition of what it means for a matrix to be orthogonal. ( Þ ) In this direction we’ll assume that Q is orthogonal and so we know that Q -1 = QT . The equivalence ( a ) Û ( b ) is directly given by Theorem 2 from the previous section since that theorem is in fact a more general version of this equivalence. Theorem 1 Suppose that Q is a square matrix then Q is orthogonal if and only if QQT = QT Q = I . However. (b) The columns of Q are an orthonormal set of vectors in ¡ n under the standard Euclidean inner product. Notice that because we need to have an inverse for Q in order for it to be orthogonal we are implicitly assuming that Q is a square matrix here. Normally in this kind of theorem we’d prove a loop of equivalences such as ( a ) Þ ( b ) Þ ( c ) Þ ( a ) . (c) The rows of Q are an orthonormal set of vectors in ¡ n under the standard Euclidean inner product. Proof : We’ve actually done most of this proof already.Linear Algebra Orthogonal Matrices In this section we’re going to be talking about a special kind of matrix called an orthogonal matrix. QQT = QT Q = I ( Ü ) In this direction we’ll assume that QQT = QT Q = I .edu/terms.

Example 1 Here are the QR-Decompositions that we performed in the previous section. From Example 1 2 é 2 1 3ù é 5 ê A = ê -1 0 7 ú = ê . This may seem odd given that we call the matrix “orthogonal” when “orthonormal” would probably be a better name for the matrix. In the previous section we were finding QR-Decompositions and if you recall the matrix Q had columns that were a set of orthonormal vectors and so if Q is a square matrix then it will also be an orthogonal matrix.3 ê ú ê 1 15 ë 1 -1 1û ë 15 7 4485 - 64 4485 18 4485 4 4485 ù úé 11 ê 20930 ú ê 81 ú .20930 ú ê ê 18 . As noted above.math.20930 ú ë û 118 20930 - 15 0 0 - 11 15 299 4485 0 ù ú 53 . in order for a matrix to be an orthogonal matrix it must be square. but traditionally this kind of matrix has been called orthogonal and so we’ll keep up with tradition. Since it is much easier to verify that the columns/rows of a matrix are orthonormal than it is to check Q -1 = QT in general this theorem will be useful for identifying orthogonal matrices. but does have orthonormal columns will not be orthogonal. while if it isn’t square then it won’t be an orthogonal matrix. At this point we should probably do an example or two.aspx .Linear Algebra The proof of the equivalence ( a ) Û ( c ) is nearly identical to the proof of Theorem 2 from the previous section and so we’ll leave it to you to fill in the details. Also. So a matrix that is not square.lamar. In the second case the matrix Q is.edu/terms.15 ê ê 0 ë 1 30 2 30 5 30 ù ú 2 ú 6 ú 1 6ú û 1 6 and by construction this matrix has orthonormal columns and since it is a square matrix it is an orthogonal matrix. é 2 ê 5 Q = ê . © 2007 Paul Dawkins 122 http://tutorial.4485 ú = QR ú 210 20930 ú û 17 15 In the first case the matrix Q is.15 ê ú ê 0 -1 -1ú ê 0 ë û ê ë 1 30 2 30 5 30 ùé 5 úê 2 úê 0 6 úê 1 6úê ûë 0 1 6 2 5 6 30 - 1 5 22 30 16 6 0 ù ú ú = QR ú ú û From Example 2 2 é 2 -1 1ù é 15 ê 1 -5 2 ú ê 1 ê ú = ê 15 A=ê ê -3 1 -4 ú ê . note that we did mean to say that the columns are orthonormal.

the columns of Q are.20930 ú û 118 20930 Again.a + b + c = 0 3 3 3 From the first dot product we can see that. However. Let’s start with the two dot products and see what we get. c = a . 5 q1 gq 3 = Now. by construction this matrix has orthonormal columns. b. b.20930 ú 18 .aspx . Plugging this into the second dot product 2 gives us. In other words we need to © 2007 Paul Dawkins 123 http://tutorial.5 ú ë û é. é 0ù ê ú q1 = ê 15 ú ê 2ú ê. b 2c =0 5 5 2 2 1 q 2 gq3 = . é 0 -2 aù 3 ê ú 2 Q = ê 15 bú 3 ê 2 ú 1 cú ê.edu/terms.math.lamar. using the above work we now know that in order for the third column to be orthogonal (since we haven’t even touched orthonormal yet) it must be in the form. since it is not a square matrix it is NOT an orthogonal matrix. we need to make sure that then third column has norm of 1.2 ù 3 ê 2ú q2 = ê 3 ú ê 1ú ë 3û éa ù q3 = ê b ú ê ú ê cú ë û We will leave it to you to verify that q1 = 1 . and c for which we will have q3 = 1 . Example 2 Find value(s) for a.15 ê 1 ë 15 7 4485 - - 64 4485 18 4485 4 4485 ù ú 11 20930 ú 81 ú . and c for which the following matrix will be orthogonal. q 2 = 1 and q1 gq 2 = 0 and so all we need to do if find a. éaù w = ê 4 aú ê5 ú ê 2 aú ë5 û Finally.Linear Algebra é 2 15 ê 1 ê 15 Q=ê 3 ê . b = 2c . Using the fact that we now know what c is in terms of a and plugging this into 5 4 b = 2c we can see that b = a . q1 gq 3 = 0 and q 2 gq3 = 0 .5 3 ë û Solution So.

Qy == Q ( x + y ) .y = xg y 4 4 which is what we were after in this case. set it equal to one and see what we get.Q ( x . Theorem 3 If Q is an n ´ n matrix then the following are all equivalent. let’s compute w . (c) Qx g Qy = xgy for all x and all y in ¡ n . So. This is often called preserving norms.math. Qx = ( QxgQx ) 2 However. 1 = w = a2 + 2 16 2 4 2 45 2 a + a = a 25 25 25 Þ a=± This gives us two possible values of a that we can use and this in turn means that we could used either of the following two vectors for q3 é ê q3 = ê ê ê ë ù ú 4 ú 45 ú 2 45 ú û 5 45 OR éê q3 = ê ê êë 5 45 ù ú 4 ú 45 ú 2 45 ú û A natural question is why do we care about orthogonal matrices? The following theorem gives some very nice properties of orthogonal matrices.edu/terms. we know that we can write the dot product as.lamar.Linear Algebra require that w = 1 . ( b ) Þ ( c ) : We’ll assume that Qx g Qy = two vectors in ¡ n .aspx . or we can require that w 2 2 = 1 since we know that the norm must be a 5 45 positive quantity here. (b) Qx = x for all x in ¡ n . (a) Q is orthogonal. 2 2 1 1 1 1 2 2 Qx + Qy . 1 Qx = ( xg Q Qx ) T 1 1 2 Now we can use the fact that Q is orthogonal to write QT Q = I .x .Qx . This is often called preserving dot products. © 2007 Paul Dawkins 124 http://tutorial. Proof : We’ll prove this set of statements in the order : ( a ) Þ ( b ) Þ ( c ) Þ ( a ) ( a ) Þ ( b ) : We’ll start off by assuming that Q is orthogonal and let’s write down the norm.y are in ¡ and so by assumption and a use of Theorem 8 again we Qx g Qy = 1 1 2 2 x + y . Then using Theorem 8 from the section on Euclidean n-space we have. Using this gives.y ) 4 4 4 4 n Next both x + y and x . Let’s assume that x and y are any have. Qx = ( xgx ) 2 = x which is what we were after. Qx = x for all x in ¡ n .

y ) = 0 x g ( QT Q . (Q Q . (a) A-1 is an orthogonal matrix.I ) y = 0 Now. (c) Either det ( A ) = 1 or det ( A ) = -1 Proof : The proof of all three parts follow pretty much from basic properties of orthogonal matrices.edu/terms. x g QT Qy = xgy x g QT Qy . Now. QT Q . This then gives. Now.xgy = 0 x g ( QT Qy . (a) Since A is orthogonal then its column vectors form an orthogonal (in fact they are orthonormal. by the definition of orthogonal matrices. we have A-1 = AT . rearrange things a little and we can arrive at the following.math. (Q Q .I = 0 Þ QT Q = I Finally.lamar.Linear Algebra (c) Þ (a) : In this case we’ll assume that Qx g Qy = xgy for all x and all y in ¡ n . The second and third statement for this theorem are very useful since they tell us that we can add or take out an orthogonal matrix from a norm or a dot product at will and we’ll preserve the result.aspx © 2007 Paul Dawkins . (b) AB is an orthogonal matrix. this must hold for a x in ¡ n and so let x = QT Q .I ) y g (Q Q .I ) y = 0 T and this must be true for all y in ¡ n . As we did in the first part of this proof we’ll rewrite the dot product on the left. That can only happen if the coefficient matrix of this system is the zero matrix or.I ) y = 0 T T ( ) Theorem 2(e) from the Euclidean n-space section tells us that we must then have. But this means that the rows of A-1 are nothing more than the columns of A and so are an orthogonal set of vectors and so by Theorem 2 above A-1 is an orthogonal matrix. As a final theorem for this section here are a couple of other nice properties of orthogonal matrices. ABx = A ( Bx ) 125 http://tutorial. but we only need orthogonal for this) set of vectors. by Theorem 1 above Q must be orthogonal. (b) In this case let’s start with the following norm.I y . Theorem 4 Suppose that A and B are two orthogonal n ´ n matrices then.

ABx = A ( Bx ) = Bx ABx = Bx = x Now we can use the fact that B is also orthogonal and so will preserve norms a well. (c) In this case we’ll start with fact that since A is orthogonal we know that AAT = I and let’s take the determinant of both sides. In other words we must have.edu/terms. © 2007 Paul Dawkins 126 http://tutorial. the product AB also preserves norms and hence by Theorem 3 must be orthogonal. This gives. we get the result. Therefore. But A is orthogonal and so by Theorem 3 above must preserve norms.Linear Algebra where x is any vector from ¡ n . det ( AAT ) = det ( I ) = 1 Next use Theorem 3 and Theorem 6 from the Properties of Determinants section to rewrite this as.aspx . det ( A) det ( AT ) = 1 det ( A ) det ( A) = 1 édet ( A ) ù = 1 ë û 2 Þ det ( A ) = ±1 So.math.lamar.

math.edu/terms.lamar.Calculus I © 2007 Paul Dawkins 127 http://tutorial.aspx .

Sign up to vote on this title
UsefulNot useful