
Introduction

Many math structures look different at first sight, but on a deeper look the resemblance is astonishing. The benefit of studying an abstract structure is that all its properties apply to every representation of that structure. The concept of a 'real vector space' is such an abstract structure.

Concept

We start with a set V and the field of real numbers R. We define the concept 'vector space' by means of postulates. We say V is a vector space if and only if:

1. There is an addition '+' in V such that V,+ is a commutative group.
2. Any element v in V and any r in R determine a scalar product rv in V. This scalar product has the following properties for all r,s in R and all v,w in V:
3. r(sv) = (rs)v
4. r(v + w) = rv + rw
5. (r + s)v = rv + sv
6. 1v = v

Any element of a vector space is called a vector. The identity element of the group V,+ is called the vector 0. The symmetric element of v is called the opposite vector -v. The subtraction v - v' is defined by v - v' = v + (-v'). Examples of real vector spaces are:

- The ordinary vectors in the plane or in space.
- The couples of real numbers.
- The complex numbers.
- The real numbers.
- The n-tuples (a,b,c,...,l), with a,b,... in R.
- Real 2x2 matrices.
- Polynomials in x of third degree or lower, with real coefficients.

Calculation rules

From the postulates of a vector space one can deduce the following calculation rules. They hold for all vectors u, v in V and for all r, s in R.

u + u + u + ... + u = n.u   (n terms on the left side)
0u = 0
r0 = 0
(-r)u = r(-u) = -(ru)
r(u - v) = ru - rv
(r - s)u = ru - su
ru = 0 <=> (r = 0 or u = 0)
(ru = rv with r not zero) => u = v
(ru = su with u not zero) => r = s

Subspaces

definition

M is a subspace of a vector space V if and only if

- M is a non-void subset of V
- M is itself a real vector space (for the same addition and scalar product)

Criterion

Theorem: A non-void subset M of a vector space V is a vector space if and only if rx + sy is in M for all r,s in R and all x,y in M.

Part 1: First we prove that if M is a vector space, then rx + sy is in M for all r,s in R and all x,y in M. Well, if M is a vector space then, from the postulates, rx and sy are in M and therefore rx + sy is in M.

Part 2: We prove that if rx + sy is in M for all r,s in R and all x,y in M, then M is a vector space.

- Since rx + sy is in M, choose r = s = 1. So x + y is in M.
- Since associativity holds in V, it holds in M.
- Since rx + sy is in M, choose r = 1, s = -1 and y = x. So 0 is in M.
- Since rx + sy is in M, choose r = -1, s = 0. So -x is in M.
- Since commutativity holds in V, it holds in M.
- Since rx + sy is in M, choose s = 0. So rx is in M.
- The properties of scalar multiplication hold because they hold in V.

Q.E.D.

Example 1
V is the vector space of all polynomials in x. M is the set of all polynomials in x of second degree or lower. We investigate whether or not M is a subspace of V. To this end we choose r, s at random in R, and ax2+bx+c and dx2+ex+f are random elements of M.
M is a subspace of V
<=> r(ax2+bx+c) + s(dx2+ex+f) is in M
<=> (ra+sd)x2 + (rb+se)x + (rc+sf) is in M
Since the last claim is true, M is a subspace of V.

Example 2
V is the vector space of all polynomials in x. M is the set of all polynomials in x divisible by (x-2). We investigate whether or not M is a subspace of V. To this end we choose r, s at random in R, and (x-2)p(x) and (x-2)q(x) are random elements of M.
M is a subspace of V
<=> r(x-2)p(x) + s.(x-2)q(x) is in M
<=> (x-2).( r.p(x) + s.q(x) ) is in M
Since the last claim is true, M is a subspace of V.

Example 3
V is the vector space of all couples of real numbers. M = { (x,y) | x,y in R and x + y = 1 }. One can now follow the method as in the previous examples, but it can be shorter: since (0,0) is not in M, M is not a vector space, so M is not a subspace of V.

Example 4
V is the vector space of all 2x2 matrices. M is the set of all regular 2x2 matrices. We investigate whether or not M is a subspace of V. Choose r, s at random in R, and let A and B be regular 2x2 matrices.
M is a subspace of V <=> rA + sB is in M
The last claim is false, because for r = s = 0 we have 0.A + 0.B = 0, and the 0-matrix is not regular, hence not in M. M is not a subspace of V.

Intersection of two spaces

Theorem: The intersection of two subspaces M and N of V is itself a subspace of V.
Proof: Since 0 is in M and in N, 0 is in the intersection, so the intersection is non-void. Take two vectors x, y in the intersection of M and N, and any r, s in R. Since x and y are in the subspace M, rx + sy is in M; since x and y are in the subspace N, rx + sy is in N. So we can state: (rx + sy is in M) and (rx + sy is in N), so it is in the intersection. Appealing on the previous criterion, the intersection of M and N is a subspace of V.

Example
V is the vector space of all polynomials in x. M is the subspace of all polynomials in x divisible by (x-2). N is the subspace of all polynomials in x divisible by (x-1). The intersection I of M and N is the set of all polynomials in x divisible by (x-1)(x-2). I is a subspace of V.

Generators of a vector space

Linear combinations of vectors

Take from a vector space V a fixed set of vectors D = { a, b, c, ..., l }. Take just as many real numbers r, s, t, ..., z. Then we call ra + sb + tc + ... + zl a linear combination of the vectors a, b, c, ..., l.

Generating a vector space

Let D = { a, b, c, ..., l } be a fixed set of vectors from V. Let M be the set of all possible linear combinations of the vectors of D. We'll show that M is a vector space. Take two vectors u, v from M. For any r, s in R, we have that ru + sv is a linear combination of two linear combinations of a, b, ..., l. So ru + sv is itself a linear combination of a, b, ..., l, and therefore ru + sv is in M. Appealing on the previous criterion, M is a vector space.

Conclusions and definitions: All linear combinations of vectors of D = { a, b, ..., l } generate a vector space M. M is called the vector space spanned by D. The vector space spanned by the vectors a, b, ..., l is denoted span(D). The elements of D are called generators of M. M is the 'smallest' vector space generated by D, since each vector space containing the vectors a, b, ..., l must contain each linear combination of these vectors. It is the smallest vector space containing the set D.

Example 1
V is the vector space R3 = R x R x R. D = { (2,3,0) , (-1,4,0) }.
M = span(D) = { (x,y,0) | x,y in R }.

Example 2
V is the vector space of all row matrices [a,b,c,d] with a,b,c,d in R. D = { [1,0,0,1] }.
M = span(D) = { [r,0,0,r] | r in R }.

Properties of a generating set

Say D is a subset of vector space V and M = span(D). It is easy to see that:
- If we add a vector from M to the set D, then still M = span(D).
- If we multiply a vector from D with a real number (not 0), then still M = span(D).
- If we multiply a vector from D with a real number, and add that result to another vector of D, then still M = span(D).
- Suppose there is a vector in D that is a linear combination of the other vectors in D. If we remove that vector from D, then still M = span(D).

Examples: V is the vector space of all row matrices [a,b,c,d] with a,b,c,d in R.
M = span( [1,1,0,0] , [0,1,0,0] , [2,3,4,0] , [0,0,1,0] )
[2,3,4,0] is a linear combination of the other vectors in D because
[2,3,4,0] = 2 [1,1,0,0] + 1 [0,1,0,0] + 4 [0,0,1,0]
So,
M = span( [1,1,0,0] , [0,1,0,0] , [0,0,1,0] )

Linear dependent

Linear dependent vectors

A set D of vectors is called dependent if and only if there is at least one vector in D that can be written as a linear combination of the other vectors of D. A set of one vector is called dependent if and only if it is the vector 0.

Example: V is the vector space of all row matrices [a,b,c] with a,b,c in R.
D = { [1,1,0] , [0,1,0] , [2,0,0] , [-2,3,0] }
The vectors in D are dependent because
[-2,3,0] = 0.[1,1,0] + 3.[0,1,0] - 1.[2,0,0]

Linear independent vectors

A set D of vectors is called independent if and only if that set is not a dependent set. Such a set is called a free set of vectors.

Criterion for linear dependent vectors

Theorem: Take a set D = {a,b,c,...,l} of (more than one) vectors from a vector space V. That set D is linear dependent if and only if there is a suitable set of real numbers r,s,t,...,z, not all zero, such that
ra + sb + tc + ... + zl = 0
Proof:
Part 1: If the set D is dependent, there is at least one vector in D, say b, which is a linear combination of the other vectors of D. Then
b = ra + tc + ... + zl
<=> ra + (-1)b + tc + ... + zl = 0
So, there is a suitable set of real numbers r,s,t,...,z, not all zero (here s = -1), such that ra + sb + tc + ... + zl = 0.
Part 2: If there is a suitable set of real numbers r,s,t,...,z, not all zero, such that ra + sb + tc + ... + zl = 0, then we can choose a nonzero coefficient, say s. Then
-sb = ra + tc + ... + zl
Dividing both sides by (-s), we see that b is a linear combination of the other vectors of D. So, D is a dependent set.

Example: The vectors [-12, 17, 14], [10, -7, 8] and [-11, 3, 24] are linear dependent
<=> There are real numbers r, s, t, not all zero, such that
r [-12, 17, 14] + s [10, -7, 8] + t [-11, 3, 24] = [0, 0, 0]
<=> There are real numbers r, s, t, not all zero, such that
-12 r + 10 s - 11 t = 0
17 r - 7 s + 3 t = 0
14 r + 8 s + 24 t = 0
<=> (relying on the theory of systems)
| -12  10  -11 |
|  17  -7    3 |  = 0
|  14   8   24 |
<=> -3930 = 0
Since the last statement is false, the vectors are not linear dependent; they form a free set.

Corollary: Take a set D = {a,b,c,...,l} of (more than one) vectors from a vector space V. That set D is linear independent if and only if
ra + sb + tc + ... + zl = 0 => r = s = t = ... = z = 0

Second criterion for linear dependent vectors

Take an ordered set D = {a,b,c,...,l} of (more than one) vectors from a vector space V. That set D is linear dependent if and only if there is at least one vector which is a linear combination of the PREVIOUS vectors in D.
Proof:
Part 1: If the set D is linear dependent, we know from the first criterion that there is a suitable set of real numbers r,s,t,...,z, not all zero, such that
ra + sb + tc + ... + vi + wj + ... + zl = 0
Say w is the last non-zero coefficient. Then
ra + sb + tc + ... + vi + wj = 0
<=> -wj = ra + sb + tc + ... + vi
Dividing both sides by (-w), we see that vector j is a linear combination of the PREVIOUS vectors of D.
Part 2: If a vector of D is a linear combination of the PREVIOUS vectors of D, then it is a linear combination of the other vectors of D (with coefficients 0 for the following vectors). So, D is a linear dependent set.

Example: We investigate whether the vectors [1, 0, -13], [2, 17, 0], [12, 7, 0] are independent. The second vector is not a linear combination of the previous one. The third vector is not a linear combination of the previous vectors because
r [1, 0, -13] + s [2, 17, 0] = [12, 7, 0]
<=>
r + 2s = 12
17s = 7
-13r = 0
We see immediately that there is no solution for that system. Thus, the three vectors are not linear dependent.

The three vectors are linear independent.

Basis and dimension of a vector space

Minimal generating set and basis

Say M = span(D). D is a generating set of M. We know that, if there is a vector in D that is a linear combination of the other vectors in D, and if we remove that vector from D, then still M = span(D). Now remove, one after another, the vectors from D which are a linear combination of the others. For the remaining part D' still holds M = span(D'), but the vectors in D' are now linear independent. D' is a free set that spans M. Such a minimal generating set of M is called a basis of M. In this introduction, we restrict the theory to vector spaces with a finite basis.

Coordinates in a vector space

Say D = (a,b,c,...,l) is an ordered basis of M. Each vector v in M can be written as a linear combination of the vectors in D. Assume v can be written in two ways as a linear combination of the vectors in D; then we have
v = ra + sb + tc + ... + zl = r'a + s'b + t'c + ... + z'l
and then
(r - r')a + (s - s')b + (t - t')c + ... + (z - z')l = 0
But, appealing on the criterion of linear independent vectors, all the coefficients must be 0. So r = r', s = s', ...
Conclusion: Each vector v of M is uniquely expressible as a linear combination of the vectors of the ordered basis D. The unique coefficients are called the coordinates of v with respect to D. We write co(v) = (r,s,t,...,z) or v(r,s,...,z). Mind the difference: v(2,4,-3) is the vector v with coordinates (2,4,-3), but v = (2,4,-3) says that the vector v is equal to the vector (2,4,-3).

Properties of coordinates

It is easy to verify that
co(a + b) = co(a) + co(b)
co(r.a) = r.co(a)
with a, b in M and r in R.

Two bases of V have exactly the same number of elements

Proof: Suppose there are two bases, B1 and B2, with a different number of elements. Assume B1 = {a,b,c,d} and B2 = {u,v,w}. We have V = span{a,b,c,d} and span(B2) = V.
The ordered set {u,a,b,c,d} is a linear dependent set, because u is a linear combination of the basis B1. It contains at least one vector which is a linear combination of the previous vectors. That vector can't be u. We can omit this vector (say a), and then V = span{u,b,c,d}.
Again, {v,u,b,c,d} is a linear dependent set. It contains at least one vector which is a linear combination of the previous vectors. That vector can't be v or u, because v and u are independent (as a part of the basis B2). We can omit this vector (say b), and then V = span{v,u,c,d}.
Again, {w,v,u,c,d} is a linear dependent set. It contains at least one vector which is a linear combination of the previous vectors. That vector can't be w, v or u. We can omit this vector (say c), and then V = span{w,v,u,d}.
But span{w,v,u} = span(B2) = V, so d is a linear combination of w, v, u. This is impossible because d belongs to the basis B1. From all this we see that it is impossible that two bases, B1 and B2, have a different number of elements.

Dimension of a vector space

Since a vector space has a constant number of vectors in a basis, that number n is characteristic for that space and is called the dimension of that space. We write dim(V) = n.

Corollary
If dim(V) = n, then every set that spans V has at least n vectors; every free set has at most n vectors; each free set of n vectors is a basis; each set of n vectors that spans V is a basis. Note that if D spans V, the linear independent vectors of D form a basis of V.

Example
Let V = the vector space R3. An obvious basis is ((1,0,0), (0,1,0), (0,0,1)). Dim(V) = 3. Each basis consists of three vectors, but three random vectors do not always constitute a basis. Take the three vectors ( (2+m, m, m), (n, 2, n), (2, 1, -4) ). We search the necessary and sufficient condition for m and n such that these three vectors are not a basis of R3.

(2+m, m, m), (n, 2, n), (2, 1, -4) are not a basis
<=> (2+m, m, m), (n, 2, n), (2, 1, -4) are linear dependent
<=> There are r, s and t, not all zero, such that r(2+m, m, m) + s(n, 2, n) + t(2, 1, -4) = 0
<=> The following system has a solution different from (0,0,0):
(2+m)r + n s + 2t = 0
m r + 2 s + t = 0
m r + n s - 4t = 0
<=> (relying on the theory of systems)
| 2+m  n   2 |
| m    2   1 |  = 0
| m    n  -4 |
<=> 6mn - 12m - 2n - 16 = 0
The three vectors are not a basis of V if and only if the latter condition is fulfilled.

Example
(1,2,5) and (-1,1,3) are two vectors of R3. Choose another vector from R3 such that the three vectors form a basis of R3. We try with the simple vector (1,0,0).

(1,0,0), (1,2,5) and (-1,1,3) constitute a basis
<=> (1,0,0), (1,2,5) and (-1,1,3) are linear independent
<=>
|  1  0  0 |
|  1  2  5 |  is not zero
| -1  1  3 |
When we unfold the determinant following the first row, we see immediately that the determinant is 1. So, (1,0,0), (1,2,5) and (-1,1,3) constitute a basis.

Vector spaces and matrices

Row space of a matrix

Say A is a m x n matrix. The rows of that matrix can be viewed as a set D of vectors of the vector space of all n-tuples of real numbers. The space generated by D is called the row space of A. The rows of A are a generating set of the row space. The dimension of the row space is the number of independent rows of A. From the properties of generating sets, we have: The row space of A does not change if we
- interchange two rows
- multiply a row with a real number (not zero)
- add a real multiple of a row to another row
So, such a row transformation does not change the row space of A.

Dimension of a row space

We know that it is possible to transform a matrix A, by suitable row transformations, to a row canonic matrix. Then the non-zero rows are linear independent and form a basis of the row space. But the number of non-zero rows is the rank of A. Hence, we can say that the rank of A is the dimension of the row space of A.
Corollary: the rank of A is the number of linear independent rows.
It can be proved that the non-zero rows of the canonic matrix form a unique basis for the row space.

Example: We'll find the row space of a matrix A and the unique basis for that row space.

A = [1 0 2 3]
    [1 2 0 1]
    [1 0 1 0]

The rank of A is 3. There are 3 linear independent rows. In this example the three rows of A form a basis of the row space. The row space is a three dimensional space with basis ((1 0 2 3), (1 2 0 1), (1 0 1 0)). It is a subspace of R4. Now, we simplify the matrix A, by means of row transformations, until we reach the canonic matrix.

R2 - R1:
[1 0 2 3]
[0 2 -2 -2]
[1 0 1 0]

(1/2)R2:
[1 0 2 3]
[0 1 -1 -1]
[1 0 1 0]

R3 - R1:
[1 0 2 3]
[0 1 -1 -1]
[0 0 -1 -3]

(-1)R3:
[1 0 2 3]
[0 1 -1 -1]
[0 0 1 3]

R1 - 2.R3:
[1 0 0 -3]
[0 1 -1 -1]
[0 0 1 3]

R2 + R3:
[1 0 0 -3]
[0 1 0 2]
[0 0 1 3]

Now we have the unique basis of the row space: ((1 0 0 -3), (0 1 0 2), (0 0 1 3)).

Column space of a matrix

Say A is a m x n matrix. The columns of that matrix can be viewed as a set D of vectors of the vector space of all m-tuples of real numbers. The space generated by D is called the column space of A. The columns of A are a generating set of the column space. The dimension of the column space is the number of independent columns of A. From the properties of generating sets, we have: The column space of A does not change if we
- interchange two columns
- multiply a column with a real number (not zero)
- add a real multiple of a column to another column
So, such a column transformation does not change the column space of A.

Dimension of a column space

We know that it is possible to transform a matrix A, by suitable column transformations, to a column canonic matrix. Then the non-zero columns are linear independent and form a basis of the column space. But the number of non-zero columns is the rank of A. Hence, we can say that the rank of A is the dimension of the column space of A.
Corollary: the rank of A is the number of linear independent columns. Thus, the column space of A and the row space of A have the same dimension.
It can be proved that the non-zero columns of the canonic matrix form a unique basis for the column space.
Exercise: Take the matrix A from the previous example and find the unique basis of the column space.

Example: Find the m-values such that (the dimension of the row space of A) = 3.

A = [ m  1  2 ]
    [ 3  1  0 ]
    [ 1 -2  1 ]

The dimension of the row space of A = rank A. The rank of A is 3 if and only if the determinant of A is not zero. The determinant of A is m - 17.
Conclusion: (the dimension of the row space of A) = 3 if and only if m is different from 17.

Coordinates and changing a basis

We'll show the properties in a vector space with dimension 3, but they can be extended to vector spaces with dimension n. Take an ordered basis (u,v,w) of V. Then each vector s has coordinates (x,y,z) with respect to this basis. If we take another basis (u',v',w'), then s has other coordinates (x',y',z') with respect to that new basis. Now we'll investigate the link between these two ordered sets of coordinates. The vectors of the new basis (u',v',w') also have coordinates with respect to the old basis (u,v,w):

co(u') = (a,b,c) => u' = au + bv + cw
co(v') = (d,e,f) => v' = du + ev + fw
co(w') = (g,h,i) => w' = gu + hv + iw

We know that s = xu + yv + zw = x'u' + y'v' + z'w'. Then

s = x'(au + bv + cw) + y'(du + ev + fw) + z'(gu + hv + iw)
  = (ax' + dy' + gz')u + (bx' + ey' + hz')v + (cx' + fy' + iz')w

but from above we also have s = xu + yv + zw. Therefore, the relation between the coordinates is

x = ax' + dy' + gz'
y = bx' + ey' + hz'
z = cx' + fy' + iz'

These relations can be written in matrix notation:

[x]   [a d g] [x']
[y] = [b e h].[y']
[z]   [c f i] [z']

[a d g]
[b e h]
[c f i]

is called the transformation matrix. The columns of the transformation matrix are the coordinates of the new basis with respect to the old basis.

Example: Say V is the vector space of the ordinary three dimensional space. In that space we take a standard basis e1, e2, e3. They are the unit vectors along x-axis, y-axis and z-axis. We rotate the three basis vectors, around the z-axis, by an angle of 90 degrees. Thus we get a new basis u1, u2, u3. The link between old and new base is

u1 = e2
u2 = -e1
u3 = e3

co(u1) = (0,1,0)
co(u2) = (-1,0,0)
co(u3) = (0,0,1)

The transformation matrix is

[0 -1 0]
[1  0 0]
[0  0 1]

(x,y,z) are the coordinates of a vector v with respect to the old basis; (x',y',z') are the coordinates of the vector v with respect to the new basis. The connection is

[x]   [0 -1 0] [x']
[y] = [1  0 0].[y']
[z]   [0  0 1] [z']

Vector spaces and systems of linear equations

Vector spaces and homogeneous systems

Take a homogeneous system of linear equations in n unknowns. Each solution of that system can be viewed as a vector from the vector space V of all real n-tuples. Each real multiple of a solution is a solution too, and the sum of two solutions is a solution too. Therefore, all the solutions of the system form a subspace M of V. It is called the solution space of the system.

Basis of a solution space

By means of an example, we show how a basis of a solution space can be found.

/ 2x + 3y - z + t = 0
\ x - y + 2z - t = 0

This is a system of the second kind. x and y can be taken as main unknowns; z and t are the side unknowns. The solutions are

x = -z + (2/5)t
y = z - (3/5)t

The set of solutions can be written as

( -z + (2/5)t , z - (3/5)t , z , t ) with z and t in R
<=> z(-1,1,1,0) + t(2/5,-3/5,0,1) with z and t in R

Hence, all solutions are linear combinations of the linear independent vectors (-1,1,1,0) and (2/5,-3/5,0,1). These two vectors constitute a basis of the solution space.

Solutions of a non homogeneous system

We can denote such a system shortly as AX = B, with coefficient matrix A, the column matrix B of the known terms, and the column matrix X of the unknowns. Consider also the corresponding homogeneous system AX = 0, with the same A and X as above.

If X' is a fixed solution of AX = B then AX' = B.
If X" is an arbitrary solution of AX = 0 then AX" = 0.
Then AX' + AX" = B <=> A(X' + X") = B <=> X' + X" is a solution of AX = B.
Conclusion: If we add an arbitrary solution of AX = 0 to a fixed solution of AX = B, then X' + X" is a solution of AX = B.

Furthermore:
If X' is a fixed solution of AX = B then AX' = B.
If X" is an arbitrary solution of AX = B then AX" = B.
Then AX" - AX' = 0 <=> A(X" - X') = 0 <=> X" - X' is a solution of AX = 0 <=> X" = X' + (a solution of AX = 0).
Conclusion: Any arbitrary solution of AX = B is the sum of a fixed solution of AX = B and a solution of AX = 0.

So, if we have a fixed solution of AX = B and we add to this solution all the solutions of the corresponding homogeneous system one after another, then we get all solutions of AX = B.

Example:
/ 2x + 3y - z + t = 5
\ x - y + 2z - t = 0
has a solution (1,1,0,0).
/ 2x + 3y - z + t = 0
\ x - y + 2z - t = 0
Above we have seen that the solutions of this homogeneous system are z(-1,1,1,0) + t(2/5,-3/5,0,1). So all solutions of the first system are (1,1,0,0) + z(-1,1,1,0) + t(2/5,-3/5,0,1).

Sum of two vector spaces

Say A and B are subspaces of a vector space V. We define the sum of A and B as the set { a + b with a in A and b in B }. We write this sum as A + B.

The sum as subspace

Theorem: The sum A+B, as defined above, is a subspace of V.
Proof: For all a1 and a2 in A, all b1 and b2 in B and all r, s in R we have
r(a1 + b1) + s(a2 + b2) = (r a1 + s a2) + (r b1 + s b2)
and this is in A + B.

Direct sum of two vector spaces

The sum A+B, as defined above, is a direct sum if and only if the vector 0 is the only vector common to A and B.

Example
In the space R3,
A = span{ (3,2,1) , (-2,-1,-4) }
B = span{ (0,1,3) }
Investigate if A+B is a direct sum. Say r, s, t are real numbers; then each vector in space A is of the form r.(3,2,1) + s.(-2,-1,-4), and each vector in space B is of the form t.(0,1,3). For each common vector, there is a suitable r, s, t such that
r.(3,2,1) + s.(-2,-1,-4) = t.(0,1,3)
<=>
/ 3r - 2s = 0
| 2r - s - t = 0
\ r - 4s - 3t = 0
Since
| 3 -2  0 |
| 2 -1 -1 |  is not 0,
| 1 -4 -3 |
the previous system has only the solution r = s = t = 0.

So, the vector (0,0,0) is the only common vector of A and B. A+B is a direct sum.

Property of direct sum

If A + B is a direct sum, then each vector v in A+B can be written, in just one way, as the sum of an element of A and an element of B.
Proof: Suppose v = a1 + b1 = a2 + b2, with ai in A and bi in B. Then a1 - a2 = b2 - b1, and a1 - a2 is in A and b2 - b1 is in B. Therefore a1 - a2 = b2 - b1 is a common element of A and B. But the only common element is 0. Thus
a1 - a2 = 0 and b2 - b1 = 0
a1 = a2 and b2 = b1

Supplementary vector spaces

Say that vector space V is the direct sum of A and B. Then A and B are supplementary vector spaces with respect to V. A is the supplementary vector space of B with respect to V, and B is the supplementary vector space of A with respect to V.

Basis and direct sum

Theorem: Say V is the direct sum of the spaces M and N. If {a,b,c,...,l} is a basis of M and {a',b',c',...,l'} is a basis of N, then {a,b,c,...,l,a',b',c',...,l'} is a basis of M+N.
Proof: Each vector v of V can be written as m + n, with m in M and n in N. Each element m of M can be written as ra + sb + tc + ... + zl, and each element n of N can be written as r'a' + s'b' + t'c' + ... + z'l', with r,s,t,...,z and r',s',t',...,z' real coefficients. Thus each vector
v = ra + sb + tc + ... + zl + r'a' + s'b' + t'c' + ... + z'l'
and the set {a,b,...,l,a',b',...,l'} generates V.
Now we show that these vectors are linear independent. If
ra + sb + tc + ... + zl + r'a' + s'b' + t'c' + ... + z'l' = 0
then
ra + sb + tc + ... + zl = -r'a' - s'b' - t'c' - ... - z'l'
The left side is a vector m of M; the right side is a vector of N. So it is a common vector of M and N. The only common vector is 0, so
ra + sb + tc + ... + zl = 0 and r'a' + s'b' + t'c' + ... + z'l' = 0
Since all vectors in these expressions are linear independent, all the coefficients must be 0. Therefore the set {a,b,...,l,a',b',...,l'} is linear independent, and it is a basis of M+N.

Dimension of a direct sum

From the previous theorem it follows that dim(A+B) = dim(A) + dim(B).

Converse theorem

If {a,b,c,...,l} is a basis of M, {a',b',c',...,l'} is a basis of N, and {a,b,...,l,a',b',...,l'} are linear independent, then M+N is a direct sum.
Proof: For a common element of M and N we have
ra + sb + tc + ... + zl = r'a' + s'b' + t'c' + ... + z'l'
<=> ra + sb + tc + ... + zl - r'a' - s'b' - t'c' - ... - z'l' = 0
Since all vectors are linear independent, all coefficients must be 0. So the only common vector is 0, and thus M+N is a direct sum.

Direct sum criterion

From the two previous theorems we deduce that, if {a,b,c,...,l} is a basis of M and {a',b',c',...,l'} is a basis of N:
M+N is a direct sum <=> {a,b,...,l,a',b',...,l'} are linear independent.

Projection in a vector space

Choose two supplementary subspaces M and N with respect to the space V. Each vector v of V can be written in exactly one way as the sum of an element m of M and an element n of N. Then v = m + n. Now we can define the transformation
p : V --> V : v --> m
We define this transformation as the projection of V on M with respect to N.

Projection, example

V is the space of all polynomials with a degree not greater than 3. We define two supplementary subspaces
M = span { 1, x }
N = span { x2, x3 }
Each vector of V is the sum of exactly one vector of M and of N, e.g.
2x3 - x2 + 4x - 7 = (2x3 - x2) + (4x - 7)
Say p is the projection of V on M with respect to N; then p(2x3 - x2 + 4x - 7) = 4x - 7.
Say q is the projection of V on N with respect to M; then q(2x3 - x2 + 4x - 7) = 2x3 - x2.
To create the matrix of a projection see chapter: linear transformations.

Similarity transformation of a vector space

Let r = any constant real number. In a vector space V we define the transformation
h : V --> V : v --> r.v
We say that h is a similarity transformation of V with factor r. Important special values of r are 0, 1 and -1.

Reflection in a vector space

Choose two supplementary subspaces M and N with respect to the space V. Each vector v of V is the sum of exactly one vector m of M and n of N. Now we define the transformation
s : V --> V : v --> m - n
We say that s is the reflection of V in M with respect to N. This definition is a generalization of the ordinary reflection in a plane. Indeed, if you take the ordinary vectors in a plane and if M and N are one dimensional supplementary subspaces, then you'll see that, with the previous definition, s becomes the ordinary reflection in M with respect to the direction given by N. (This is left as an exercise.)

Example of a reflection

Take V = R4.
M = span{ (0,0,0,1) , (1,3,2,0) }
N = span{ (0,3,3,0) , (3,0,0,0) }
It is easy to show that M and N have only the vector 0 in common. So, M and N are supplementary subspaces. Now we'll calculate the image of the reflection of vector v = (4,3,2,1) in M with respect to N. First we write v as the sum of exactly one vector m of M and n of N:
(4,3,2,1) = x.(0,0,0,1) + y.(1,3,2,0) + z.(0,3,3,0) + t.(3,0,0,0)
The solution of this system gives x = 1, y = 1, z = 0, t = 1. The unique representation of v is
(4,3,2,1) = (1,3,2,1) + (3,0,0,0)
The image of the reflection of vector v = (4,3,2,1) in M with respect to N is vector
v' = (1,3,2,1) - (3,0,0,0) = (-2,3,2,1)
To create the matrix of a reflection see chapter: linear transformations.
