ELEG 593: Advanced Linear Algebra and Its Applications

HOMEWORK ASSIGNMENT # 1
Due: Wednesday, 18th February, 2015
1. (3 points) Verify that the set of all (2 × 3) matrices with real entries is a real vector space.

2. (3 points) Prove: If V is a vector space, then the zero vector, θ, is unique.

3. (3 points) Let V be the vector space of all real (2 × 2) matrices, and let W be the subset of V specified by

W = { A ∈ V : A = \begin{bmatrix} 0 & a \\ b & 0 \end{bmatrix}, a and b any real numbers }.

Verify that W is a subspace of V.

4. (3 points) Let W be the subset of C[a, b] defined by

W = { f ∈ C[a, b] : … }.

Show that W is a subspace of C[a, b].

5. (3 points) Let V be the vector space of all real (2 × 2) matrices, and let W be the subset of V specified by

W = { A ∈ V : … }.

Show that W is not a subspace of V.

6. (5 points) Let W be the set of all (3 × 3) skew-symmetric matrices. Show that W is a subspace of V, the vector space of all (3 × 3) matrices, and exhibit a spanning set for W.

7. (6 points) Let V be the vector space of all (2 × 2) matrices, and let the subset S of V be defined by S = { A1, A2, A3, A4 }, where A1, A2, A3, and A4 are given (2 × 2) matrices. Obtain a basis for Sp(S).

8. (4 points) In P4, consider the set of vectors S = { p1(x), p2(x), p3(x), p4(x), p5(x) }, where p1(x), …, p5(x) are given polynomials. Is S a basis for P4?

9. (4 points) Let the inner product on P2 be defined by

\langle p, q \rangle = \int_0^1 p(x) q(x) \, dx.

Starting with the natural basis {1, x, x^2}, use Gram-Schmidt orthogonalization to obtain an orthogonal basis for P2.

10. (5 points) Let {p0, p1, p2} be the orthogonal basis for P2, relative to \langle \cdot, \cdot \rangle, obtained in Problem 9. Find the coordinates of x^2 relative to this basis.

Solutions

Problem 1
Let A and B be any (2 × 3) matrices. Then A + B and aA are defined by

A + B = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix} + \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \end{bmatrix} = \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} & a_{13}+b_{13} \\ a_{21}+b_{21} & a_{22}+b_{22} & a_{23}+b_{23} \end{bmatrix},

aA = a \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix} = \begin{bmatrix} aa_{11} & aa_{12} & aa_{13} \\ aa_{21} & aa_{22} & aa_{23} \end{bmatrix}.

From these definitions it is obvious that both the sum A + B and the scalar multiple aA are again (2 × 3) matrices, so the closure properties of Definition 1 hold. The commutativity and associativity of matrix addition are proved in Theorems 8 and 9 of Section 1.6. For emphasis, we recall that the zero element in this vector space is the (2 × 3) zero matrix

O = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix},

and clearly A + O = A for any (2 × 3) matrix A. We further observe that (-1)A is the additive inverse for A, because A + (-1)A = O; that is, (-1)A is a matrix we can add to A to produce the zero element O. The property 1A = A is immediate from the definition of scalar multiplication, and the remaining properties of scalar multiplication follow from Theorem 7 of Section 1.6. Therefore the set of all (2 × 3) matrices with real entries is a real vector space, and a repetition of these arguments shows that, for any m and n, the set of all (m × n) matrices with real entries is a real vector space.
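As a quick numerical sanity check of the closure and identity arguments above (a spot check on random matrices, not a proof), the following Python sketch can be run with numpy; the random seed and the scalar value are arbitrary choices, not part of the assignment.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((2, 3))
a = 2.5

assert (A + B).shape == (2, 3)                       # closure under addition
assert (a * A).shape == (2, 3)                       # closure under scalar multiplication
assert np.allclose(A + B, B + A)                     # commutativity of addition
assert np.allclose(1 * A, A)                         # 1A = A
assert np.allclose(A + (-1) * A, np.zeros((2, 3)))   # (-1)A is the additive inverse
print("all spot checks passed")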

Problem 2
[We prove property 1, the uniqueness of the zero vector, and leave the remaining properties as exercises.] Suppose that θ′ is a vector in V such that v + θ′ = v for all v in V. Then, setting v = θ, we have

θ + θ′ = θ.    (1)

By the commutativity of addition in Definition 1, we know also that

θ + θ′ = θ′ + θ.    (2)

But θ is a zero vector for V, so

θ′ + θ = θ′.    (3)

Using Eqs. (1), (2), and (3), we conclude that θ′ = θ′ + θ = θ + θ′ = θ; that is, the zero vector is unique.

Problem 3
The zero vector for V is the (2 × 2) zero matrix O, and O is in W since it satisfies the defining relationships of W. If A and B are any two vectors in W, then A and B have the form

A = \begin{bmatrix} 0 & a_{12} \\ a_{21} & 0 \end{bmatrix}, B = \begin{bmatrix} 0 & b_{12} \\ b_{21} & 0 \end{bmatrix}.

Thus A + B and aA have the form

A + B = \begin{bmatrix} 0 & a_{12}+b_{12} \\ a_{21}+b_{21} & 0 \end{bmatrix}, aA = \begin{bmatrix} 0 & aa_{12} \\ aa_{21} & 0 \end{bmatrix}.

Therefore A + B and aA are again in W, and we conclude that W is a subspace of V, the set of all real (2 × 2) matrices.
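A minimal symbolic sketch of the Problem 3 closure computation, assuming (as the solution above does) that W consists of the (2 × 2) matrices with zero diagonal entries; sympy is used only to carry the symbols.

import sympy as sp

a12, a21, b12, b21, c = sp.symbols('a12 a21 b12 b21 c')
A = sp.Matrix([[0, a12], [a21, 0]])
B = sp.Matrix([[0, b12], [b21, 0]])

S = A + B
M = c * A
# Both the sum and the scalar multiple again have zero diagonal entries,
# so they stay in W.
assert S[0, 0] == 0 and S[1, 1] == 0
assert M[0, 0] == 0 and M[1, 1] == 0
print(S, M)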

Problem 4
The zero vector in C[a, b] is the zero function θ, defined by θ(x) = 0 for all x in the interval [a, b]. Since θ satisfies the defining condition for W, θ is in W. Now let f(x) and g(x) be any two functions that are in W. The sum of f and g is the function s(x) defined by s(x) = f(x) + g(x); to see that s(x) is in W, note that f and g each satisfy the defining condition for W, and hence so does s. Similarly, if c is a scalar, then (cf)(x) = c f(x) again satisfies the defining condition, so cf is in W. Theorem 2 now implies that W is a subspace of C[a, b]. We note in passing that P_n, as well as being a real vector space in its own right, can be considered a subspace of C[a, b]; this assertion follows directly from Definition 2, since any polynomial is continuous on any interval [a, b].

Problem 5
It is straightforward to show that W satisfies the other properties listed in Theorem 1; thus, to demonstrate that W is not a subspace of V, we must show that W is not closed under addition. For example, define A and B by

A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} and B = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}.

Then A and B are in W, but A + B is not, since

A + B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.

In particular, W is not a subspace of V.

Problem 6
Let O denote the (3 × 3) zero matrix. Then O^T = O = -O, so O is skew symmetric and O is in W. If A and B are in W, then A^T = -A and B^T = -B, so

(A + B)^T = A^T + B^T = -A - B = -(A + B);

it follows that A + B is skew symmetric, and hence A + B is in W. Likewise, if c is a scalar, then

(cA)^T = cA^T = -cA,

so cA is in W. By Theorem 2, W is a subspace of V, the vector space of all (3 × 3) matrices. Moreover, the preceding remarks imply that W can be described by

W = { A : A = \begin{bmatrix} 0 & a & b \\ -a & 0 & c \\ -b & -c & 0 \end{bmatrix}, a, b, c any real numbers }.

From this description it is easily seen that a natural spanning set for W is the set {A1, A2, A3}, where

A1 = \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, A2 = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{bmatrix}, A3 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{bmatrix}.

We note that in Definition 3 it is implicitly assumed that spanning sets are finite. This is not a required assumption: frequently Sp(Q) is defined as the set of all finite linear combinations of vectors from Q, where Q may be either a finite or an infinite set. We do not need this full generality, and we explore the idea no further, other than to note that one contrast between R^n and a general vector space V is that V might not possess a finite spanning set. An example of a vector space whose most natural spanning set is infinite is the vector space P, consisting of all polynomials (we place no upper limit on the degree); P_n is a subspace of P for each n, n = 1, 2, 3, …, and a natural spanning set for P (in the generalized sense described earlier) is the infinite set {1, x, x^2, …}.
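The counterexample and spanning claims above are easy to spot-check by machine. The sketch below verifies, with sympy, that the Problem 5 matrices A and B sum to the (2 × 2) identity (which the solution states is not in W), and that a*A1 + b*A2 + c*A3 reproduces a general (3 × 3) skew-symmetric matrix; it is an illustration, not a replacement for the proofs.

import sympy as sp

# Problem 5: A and B (as given in the solution) are in W, but A + B is the
# (2x2) identity matrix, which is not.
A = sp.Matrix([[1, 0], [0, 0]])
B = sp.Matrix([[0, 0], [0, 1]])
assert A + B == sp.eye(2)

# Problem 6: a*A1 + b*A2 + c*A3 reproduces the general skew-symmetric matrix.
a, b, c = sp.symbols('a b c')
A1 = sp.Matrix([[0, 1, 0], [-1, 0, 0], [0, 0, 0]])
A2 = sp.Matrix([[0, 0, 1], [0, 0, 0], [-1, 0, 0]])
A3 = sp.Matrix([[0, 0, 0], [0, 0, 1], [0, -1, 0]])
combo = a * A1 + b * A2 + c * A3
assert combo == sp.Matrix([[0, a, b], [-a, 0, c], [-b, -c, 0]])
assert combo.T == -combo              # the combination is skew symmetric
print("checks passed")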

Problem 7
If B = {E11, E12, E21, E22} is the natural basis for V, then each matrix in S has a coordinate vector in R^4 relative to B; for example,

[A1]_B = (1, 2, -1, 3)^T.

Forming the coordinate vectors of A2, A3, and A4 in the same way, let T = { [A1]_B, [A2]_B, [A3]_B, [A4]_B }. Several techniques for obtaining a basis for Sp(T) were illustrated in Section 2.4.
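Passing from a (2 × 2) matrix to its coordinate vector relative to B is just a matter of listing the entries in the order E11, E12, E21, E22. A minimal sketch of that bookkeeping follows; the helper name coord_vector is ours, not the textbook's, and the example matrix is A1 as recovered in this solution.

import sympy as sp

def coord_vector(A):
    # [A]_B as a column vector, entries listed in the order E11, E12, E21, E22
    return sp.Matrix([A[0, 0], A[0, 1], A[1, 0], A[1, 1]])

A1 = sp.Matrix([[1, 2], [-1, 3]])     # A1 as recovered in the solution above
print(coord_vector(A1).T)             # [1, 2, -1, 3]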

Following the method demonstrated in Example 7 of Section 2.4, we form the matrix C whose columns are the coordinate vectors in T. Reducing C (row-reducing C^T and transposing back) yields the matrix

D = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 2 & 1 & 0 & 0 \\ -1 & 1 & 2 & 0 \\ 3 & 4 & 1 & 0 \end{bmatrix},

and the nonzero columns of D constitute a basis for Sp(T). Denoting the nonzero columns of D by W1, W2, and W3, respectively, we have

W1 = (1, 2, -1, 3)^T, W2 = (0, 1, 1, 4)^T, W3 = (0, 0, 2, 1)^T.

Now let B1, B2, and B3 be the (2 × 2) matrices such that [B1]_B = W1, [B2]_B = W2, and [B3]_B = W3. If B1 = E11 + 2E12 - E21 + 3E22, then clearly [B1]_B = W1; B2 and B3 are obtained in the same fashion, so that

B1 = \begin{bmatrix} 1 & 2 \\ -1 & 3 \end{bmatrix}, B2 = \begin{bmatrix} 0 & 1 \\ 1 & 4 \end{bmatrix}, B3 = \begin{bmatrix} 0 & 0 \\ 2 & 1 \end{bmatrix}.

It follows from Theorem 5 that {B1, B2, B3} is a basis for Sp(S).
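As a consistency check on the basis just obtained, the sketch below confirms that the columns W1, W2, W3 listed above are linearly independent and rebuilds B1, B2, B3 from them; the numerical entries are the ones recovered above and should be treated as illustrative.

import sympy as sp

W = sp.Matrix([[1, 0, 0],
               [2, 1, 0],
               [-1, 1, 2],
               [3, 4, 1]])              # columns are W1, W2, W3
print(W.rank())                         # 3, so the three columns are independent

# Rebuild B1, B2, B3 from the coordinate vectors (entry order E11, E12, E21, E22)
for k in range(3):
    Bk = sp.Matrix(2, 2, list(W.col(k)))
    print(Bk)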

  1  0   p3 B  3 1     0 . Hence. the coordinate vectors in T are  p1 B  4  2    0 . we can use theorem 5 to pass from a problem in to an analogous problem in . the columns of A are linearly independent.. we cannot use theorem 5. Problem 8 * . once we know that is a basis. we form the matrix A whose columns are the vectors in T and use Matlab function (rref(A)) to reduce A to echelon form. S is a *. Dr. Ahmed Al-Durra 7 . For example.   0 1   p4 B 2  1     0 . . we have p1  x   x  p0 . By the corollary to theorem 5. Therefore. in .. x p0 . To check whether T is linearly independent. and B3     1 4  2 1  1 2 B1   . As can be seen from the results below. The next step of Gram-Schmitdt orthogonalization process is to form ELEG 593: Advanced Linear Algebra and Its Applications. . HW #1.   1  1   p5 B 0 0    1    0 1  Since has dimension 5 and T contains 5 vectors. p0   dx  1 0 . however. x   xdx  0 so ( ) 1 and 2 1 p0 . S is a basis for rref(A)=Identity( ) Problem 9 If we let * + denote the orthogonal basis..    3 1   p2 B 1 5     1 . T will be a basis for if T is a linearly independent set. In order to obtain the first basis B. T is a basis for .0 1 0 0 B2   . Spring 2015.. p0 ( ) and find ( ) from p0  x  we calculate 1 p0 .  1 3 Although theorem 5 shows that questions regarding the span or the linear dependence/independence of a subset of V can be translated to an equivalent problem in . we do * + need one basis for V as a point of reference. . basis for if and only if T is a basis for +. .+ In particular. where Let B denote the standard basis for . .

Problem 9
Let {p0(x), p1(x), p2(x)} denote the orthogonal basis to be constructed. The Gram-Schmidt process starts from the first vector of the natural basis, so p0(x) = 1. The second vector is

p1(x) = x - (\langle p0, x \rangle / \langle p0, p0 \rangle) p0(x).

We calculate

\langle p0, p0 \rangle = \int_0^1 dx = 1 and \langle p0, x \rangle = \int_0^1 x \, dx = 1/2,

so p1(x) = x - 1/2. The next step of the Gram-Schmidt orthogonalization process is to form

p2(x) = x^2 - (\langle p1, x^2 \rangle / \langle p1, p1 \rangle) p1(x) - (\langle p0, x^2 \rangle / \langle p0, p0 \rangle) p0(x).

The required constants are

\langle p1, p1 \rangle = \int_0^1 (x - 1/2)^2 \, dx = 1/12,
\langle p0, x^2 \rangle = \int_0^1 x^2 \, dx = 1/3,
\langle p1, x^2 \rangle = \int_0^1 (x^3 - (1/2)x^2) \, dx = 1/12.

Therefore p2(x) = x^2 - (x - 1/2) - 1/3 = x^2 - x + 1/6. We can easily check that \langle p0, p1 \rangle = \langle p0, p2 \rangle = \langle p1, p2 \rangle = 0, so {1, x - 1/2, x^2 - x + 1/6} is an orthogonal basis for P2 with respect to this inner product.

Problem 10
By Theorem 10, x^2 = a0 p0(x) + a1 p1(x) + a2 p2(x), where

a0 = \langle p0, x^2 \rangle / \langle p0, p0 \rangle, a1 = \langle p1, x^2 \rangle / \langle p1, p1 \rangle, a2 = \langle p2, x^2 \rangle / \langle p2, p2 \rangle.

The necessary calculations are

\langle p0, p0 \rangle = \int_0^1 dx = 1,
\langle p1, p1 \rangle = \int_0^1 (x^2 - x + 1/4) \, dx = 1/12,
\langle p2, p2 \rangle = \int_0^1 (x^2 - x + 1/6)^2 \, dx = 1/180,
\langle p0, x^2 \rangle = \int_0^1 x^2 \, dx = 1/3,
\langle p1, x^2 \rangle = \int_0^1 (x^3 - (1/2)x^2) \, dx = 1/12,
\langle p2, x^2 \rangle = \int_0^1 (x^4 - x^3 + (1/6)x^2) \, dx = 1/180.

Thus a0 = 1/3, a1 = 1, and a2 = 1, so that

x^2 = (1/3) p0(x) + p1(x) + p2(x),

and the coordinate vector of x^2 relative to {p0, p1, p2} is (1/3, 1, 1)^T.
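The Gram-Schmidt computation and the Problem 10 coordinates can be reproduced symbolically. The sketch below uses sympy with the inner product defined above and recovers p1 = x - 1/2, p2 = x^2 - x + 1/6, and the coordinates (1/3, 1, 1) of x^2.

import sympy as sp

x = sp.symbols('x')
ip = lambda p, q: sp.integrate(p * q, (x, 0, 1))   # <p, q> = integral over [0, 1]

# Gram-Schmidt applied to the natural basis {1, x, x^2}
p0 = sp.Integer(1)
p1 = x - ip(x, p0) / ip(p0, p0) * p0
p2 = x**2 - ip(x**2, p1) / ip(p1, p1) * p1 - ip(x**2, p0) / ip(p0, p0) * p0
print(sp.expand(p1), sp.expand(p2))        # x - 1/2,  x**2 - x + 1/6

# Coordinates of q(x) = x^2 relative to {p0, p1, p2} (Problem 10)
q = x**2
a = [ip(q, p) / ip(p, p) for p in (p0, p1, p2)]
print(a)                                   # [1/3, 1, 1]
assert sp.expand(a[0]*p0 + a[1]*p1 + a[2]*p2 - q) == 0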