5. $\begin{pmatrix} 8 & 3 & 1 \\ 3 & 0 & 0 \\ 3 & 0 & 0 \end{pmatrix} + \begin{pmatrix} 9 & 1 & 4 \\ 3 & 0 & 0 \\ 1 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 17 & 4 & 5 \\ 6 & 0 & 0 \\ 4 & 1 & 0 \end{pmatrix}$.
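The entrywise addition above can be double-checked numerically; a minimal sketch in Python using plain lists (no external libraries):

```python
# Entrywise sum of two 3x3 matrices, verifying the addition in solution 5.
A = [[8, 3, 1], [3, 0, 0], [3, 0, 0]]
B = [[9, 1, 4], [3, 0, 0], [1, 1, 0]]
S = [[A[i][j] + B[i][j] for j in range(3)] for i in range(3)]
print(S)  # [[17, 4, 5], [6, 0, 0], [4, 1, 0]]
```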
6. $M = \begin{pmatrix} 4 & 2 & 1 & 3 \\ 5 & 1 & 1 & 4 \\ 3 & 1 & 2 & 6 \end{pmatrix}$. Since all the entries have been doubled, the matrix 2M describes the inventory in June. Then the matrix 2M − A describes the list of sold items, and the total number of sold items is the sum of all entries of 2M − A, which equals 24.
7. It’s enough to check f (0) + g(0) = 2 = h(0) and f (1) + g(1) = 6 = h(1).
8. By (VS 7) and (VS 8), we have (a + b)(x + y) = a(x + y) + b(x + y) =
ax + ay + bx + by.
9. For two zero vectors 00 and 01, by Thm 1.1 we have that 00 + x = x = 01 + x implies 00 = 01, where x is an arbitrary vector. If a vector x has two inverse vectors y0 and y1, then x + y0 = 0 = x + y1 implies y0 = y1. Finally, we have 0a + 1a = (0 + 1)a = 1a = 0 + 1a, and so 0a = 0.
10. The sum of two differentiable real-valued functions and the product of a scalar with a differentiable real-valued function are again functions of the same kind. The function f = 0 is the zero vector of this space. Of course, here the field should be the real numbers.
11. All the conditions are easy to check because there is only one element.
12. If f and g are both even functions, then (f + g)(−t) = f(−t) + g(−t) = f(t) + g(t) = (f + g)(t) and (cf)(−t) = cf(−t) = cf(t) = (cf)(t). Furthermore, f = 0 is the zero vector. And the field here should be the real numbers.
13. No. If it were a vector space, then 0(a1, a2) = (0, a2) would be the zero vector. But since a2 is arbitrary, this contradicts the uniqueness of the zero vector.
14. Yes. All the conditions hold when the field is the real numbers.
15. No. The scalar product of a real-valued vector with a complex number is not always a real-valued vector.
16. Yes. All the conditions hold when the field is the rational numbers.
17. No. Here 0(a1, a2) = (a1, 0) would have to be the zero vector, but this makes the zero vector not unique, so it cannot be a vector space.
18. No. We have ((a1 , a2 ) + (b1 , b2 )) + (c1 , c2 ) = (a1 + 2b1 + 2c1 , a2 + 3b2 + 3c2 )
but (a1 , a2 ) + ((b1 , b2 ) + (c1 , c2 )) = (a1 + 2b1 + 4c1 , a2 + 3b2 + 9c2 ).
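The two expansions above come from the operation (a1, a2) + (b1, b2) = (a1 + 2b1, a2 + 3b2). The failure of associativity can be checked with sample numbers; a minimal sketch in Python (the function name and test values are ours):

```python
def add(u, v):
    # The nonstandard addition of this exercise: (a1 + 2*b1, a2 + 3*b2).
    return (u[0] + 2 * v[0], u[1] + 3 * v[1])

a, b, c = (1, 1), (1, 1), (1, 1)
left = add(add(a, b), c)   # (a1 + 2b1 + 2c1, a2 + 3b2 + 3c2) = (5, 7)
right = add(a, add(b, c))  # (a1 + 2b1 + 4c1, a2 + 3b2 + 9c2) = (7, 13)
print(left, right)  # (5, 7) (7, 13)
```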
19. No. Because (c + d)(a1, a2) = ((c + d)a1, a2/(c + d)) may not equal c(a1, a2) + d(a1, a2) = (ca1 + da1, a2/c + a2/d).
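The failed distributive law can likewise be checked with sample numbers; a minimal sketch in Python (the function names are ours, and the c = 0 case is handled as in the exercise):

```python
def smul(c, v):
    # The nonstandard scalar multiplication of this exercise:
    # c(a1, a2) = (c*a1, a2/c) for c != 0, and (0, 0) for c = 0.
    return (c * v[0], v[1] / c) if c != 0 else (0, 0)

def add(u, v):
    # Ordinary coordinatewise addition.
    return (u[0] + v[0], u[1] + v[1])

x, c, d = (1.0, 1.0), 1.0, 1.0
lhs = smul(c + d, x)               # ((c+d)*a1, a2/(c+d)) = (2.0, 0.5)
rhs = add(smul(c, x), smul(d, x))  # (c*a1 + d*a1, a2/c + a2/d) = (2.0, 2.0)
print(lhs, rhs)  # (2.0, 0.5) (2.0, 2.0)
```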
21. Let 0V and 0W be the zero vectors in V and W respectively. Then (0V, 0W) is the zero vector in Z. The other conditions can also be checked carefully. This space is called the direct product of V and W.
22. Since each entry can be 1 or 0 and there are m × n entries, there are 2^{mn} vectors in that space.
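The count can be verified by brute-force enumeration for small m and n; a sketch in Python (the choice m = 2, n = 3 is ours):

```python
from itertools import product

# Enumerate all m x n matrices with entries in {0, 1}, encoding each
# matrix as a tuple of its m*n entries, and compare with 2**(m*n).
m, n = 2, 3
matrices = list(product([0, 1], repeat=m * n))
print(len(matrices), 2 ** (m * n))  # 64 64
```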
1.3 Subspaces
1. (a) No. The definition should also require that the field and the operations of V and W are the same. Otherwise, take for example V = R and W = Q. Then W is a vector space over Q but not over R, and hence not a subspace of V.
(b) No. Every subspace must contain 0.
(c) Yes. We can choose W = {0}.
(d) No. Let V = R, E0 = {0} and E1 = {1}. Then we have E0 ∩ E1 = ∅ is
not a subspace.
(e) Yes. Only entries on diagonal could be nonzero.
(f) No. The trace is the sum of the diagonal entries, not the product.
(g) No. But such a correspondence is called an isomorphism; that is, the two spaces are the same from the viewpoint of structure.
2. (a) $\begin{pmatrix} -4 & 5 \\ 2 & -1 \end{pmatrix}$ with tr = −5.
(b) $\begin{pmatrix} 0 & 3 \\ 8 & 4 \\ -6 & 7 \end{pmatrix}$.
(c) $\begin{pmatrix} -3 & 0 & 6 \\ 9 & -2 & 1 \end{pmatrix}$.
(d) $\begin{pmatrix} 10 & 2 & -5 \\ 0 & -4 & 7 \\ -8 & 3 & 6 \end{pmatrix}$ with tr = 12.
(e) $\begin{pmatrix} 1 \\ -1 \\ 3 \\ 5 \end{pmatrix}$.
(f) $\begin{pmatrix} -2 & 7 \\ 5 & 0 \\ 1 & 1 \\ 4 & -6 \end{pmatrix}$.
(g) ( 5 6 7 ).
(h) $\begin{pmatrix} -4 & 0 & 6 \\ 0 & 1 & -3 \\ 6 & -3 & 5 \end{pmatrix}$ with tr = 2.
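The reported traces can be spot-checked with a few lines of Python (a sketch using plain lists; the variable names are ours):

```python
def trace(M):
    # Sum of the diagonal entries of a square matrix.
    return sum(M[i][i] for i in range(len(M)))

D = [[10, 2, -5], [0, -4, 7], [-8, 3, 6]]   # the matrix from part (d)
H = [[-4, 0, 6], [0, 1, -3], [6, -3, 5]]    # the matrix from part (h)
print(trace(D), trace(H))  # 12 2
```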
3. Let M = aA + bB and N = aA^t + bB^t. Then we have Mij = aAij + bBij = Nji, and so M^t = N.
4. We have (A^t)ij = Aji, and so ((A^t)^t)ij = (A^t)ji = Aij; hence (A^t)^t = A.
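Both transpose identities, (aA + bB)^t = aA^t + bB^t and (A^t)^t = A, can be spot-checked numerically; a sketch with arbitrary sample matrices (the helper names are ours):

```python
def transpose(M):
    # Swap rows and columns.
    return [list(row) for row in zip(*M)]

def lincomb(a, A, b, B):
    # The entrywise linear combination aA + bB.
    return [[a * x + b * y for x, y in zip(r, s)] for r, s in zip(A, B)]

A = [[1, 2, 3], [4, 5, 6]]
B = [[7, 8, 9], [0, 1, 2]]
a, b = 2, -3
assert transpose(lincomb(a, A, b, B)) == lincomb(a, transpose(A), b, transpose(B))
assert transpose(transpose(A)) == A
```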
14. It is closed under addition since the set of nonzero points of f + g is contained in the union of the sets of nonzero points of f and g, which is finite. It is closed under scalar multiplication since the set of nonzero points of cf is contained in that of f. And the zero function is in the set.
15. Yes. The sum of two differentiable functions and the product of a scalar with a differentiable function are again differentiable, and the zero function is differentiable.
16. Let f^(n) and g^(n) be the nth derivatives of f and g. Then f^(n) + g^(n) is the nth derivative of f + g, and it is continuous if both f^(n) and g^(n) are continuous. Similarly, cf^(n) is the nth derivative of cf and it is continuous. This space has the zero function as its zero vector.
17. Only one condition differs from those in Theorem 1.3. If W is a subspace, then 0 ∈ W implies W ≠ ∅. Conversely, if W is a subset satisfying the conditions of this question, then we can pick x ∈ W since W is not empty, and the other conditions ensure that 0x = 0 is an element of W.
18. We may compare the conditions here with those in Theorem 1.3. First, let W be a subspace. Then cx is contained in W, and so is cx + y, whenever x and y are elements of W. Second, let W be a subset satisfying the conditions of this question. Then by picking a = 1 or y = 0 we get the conditions in Theorem 1.3.
19. Sufficiency is easy: if W1 ⊂ W2 or W2 ⊂ W1, then the union of W1 and W2 is W2 or W1, which is of course a subspace. For necessity, assume that neither W1 ⊂ W2 nor W2 ⊂ W1 holds; then we can find some x ∈ W1 ∖ W2 and y ∈ W2 ∖ W1. By the closure condition of a subspace, x + y is a vector in W1 or in W2, say W1. But then y = (x + y) − x must be in W1, contradicting the original hypothesis that y ∈ W2 ∖ W1.
20. We have that ai wi ∈ W for all i. And we can get the conclusion that a1 w1 ,
a1 w1 + a2 w2 , a1 w1 + a2 w2 + a3 w3 are in W inductively.
21. In a calculus course it is proven that {an + bn} and {can} converge. And the zero sequence, that is, the sequence with all entries zero, is the zero vector.
22. The fact that these sets are closed under the operations was proved in the previous exercise. And the zero function is both an even function and an odd function.
23. (a) We have (x1 + x2 ) + (y1 + y2 ) = (x1 + y1 ) + (x2 + y2 ) ∈ W1 + W2 and
c(x1 + x2 ) = cx1 + cx2 ∈ W1 + W2 if x1 , y1 ∈ W1 and x2 , y2 ∈ W2 . And
we have 0 = 0 + 0 ∈ W1 + W2 . Finally W1 = {x + 0 ∶ x ∈ W1 , 0 ∈ W2 } ⊂
W1 + W2 and it’s similar for the case of W2 .
(b) If U is a subspace containing both W1 and W2, then x + y must be a vector in U for all x ∈ W1 and y ∈ W2.
24. It’s natural that W1 ∩ W2 = {0}. And we have Fn = {(a1 , a2 , . . . , an ) ∶ ai ∈
F} = {(a1 , a2 , . . . , an−1 , 0) + (0, 0, . . . , an ) ∶ ai ∈ F} = W1 ⊕ W2 .
25. This is similar to Exercise 1.3.24.
But there is one more thing that should be checked, namely whether it is well-defined. This is Exercise 1.3.31(c).
5. (a) Yes.
(b) No.
(c) No.
(d) Yes.
(e) Yes.
(f) No.
(g) Yes.
(h) No.
6. For every (x1 , x2 , x3 ) ∈ F3 we may assume
11. To prove sufficiency we can use Theorem 1.5, which tells us W = span(W) is a subspace. To prove necessity we can also use Theorem 1.5: since W is a subspace containing W, we have span(W) ⊂ W. On the other hand, it is natural that span(W) ⊃ W.
12. To prove span(S1) ⊂ span(S2), let v ∈ span(S1). Then we can write v = a1x1 + a2x2 + ⋯ + anxn, where each xi is an element of S1 and hence of S2. But this means v is a linear combination of elements of S2, and we complete the proof. If span(S1) = V, we know span(S2) is a subspace containing span(S1). So it must be V.
13. We prove span(S1 ∪ S2) ⊂ span(S1) + span(S2) first. For v ∈ span(S1 ∪ S2) we have $v = \sum_{i=1}^{n} a_i x_i + \sum_{j=1}^{m} b_j y_j$ with xi ∈ S1 and yj ∈ S2. Since the first summation is in span(S1) and the second summation is in span(S2), we have v ∈ span(S1) + span(S2). For the converse, let u + v ∈ span(S1) + span(S2) with u ∈ span(S1) and v ∈ span(S2). We can write $u + v = \sum_{i=1}^{n} a_i x_i + \sum_{j=1}^{m} b_j y_j$ with xi ∈ S1 and yj ∈ S2, and this means u + v ∈ span(S1 ∪ S2).
14. For v ∈ span(S1 ∩ S2) we may write $v = \sum_{i=1}^{n} a_i x_i$ with xi ∈ S1 and xi ∈ S2. So v is an element of both span(S1) and span(S2), and hence an element of span(S1) ∩ span(S2). For examples: if S1 = S2 = {(1, 0)}, the two sides are the same; if S1 = {(1, 0), (0, 1)} and S2 = {(1, 1)}, the left-hand side is span(∅) = {0}, the set containing only the zero vector, while the right-hand side is R² ∩ span({(1, 1)}) = span({(1, 1)}), so the containment is proper.
2. (a) Linearly dependent. We have $-2 \begin{pmatrix} 1 & -3 \\ -2 & 4 \end{pmatrix} = \begin{pmatrix} -2 & 6 \\ 4 & -8 \end{pmatrix}$. In general, to check linear dependency we find a nontrivial solution of the equation a1x1 + a2x2 + ⋯ + anxn = 0; here x1 and x2 are the two matrices above.
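The dependence relation −2x1 = x2 can be confirmed entrywise; a minimal sketch in Python:

```python
# Verify that scaling the first matrix by -2 gives the second one.
x1 = [[1, -3], [-2, 4]]
x2 = [[-2, 6], [4, -8]]
scaled = [[-2 * e for e in row] for row in x1]
print(scaled == x2)  # True
```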
(b) Linearly independent.
(c) Linearly independent.
(d) Linearly dependent.
(e) Linearly dependent.
(f) Linearly independent.
(g) Linearly dependent.
(h) Linearly independent.
(i) Linearly independent.
(j) Linearly dependent.
3. Let M1 , M2 , . . . , M5 be those matrices. We have M1 +M2 +M3 −M4 −M5 = 0.