Apostol, Calculus, vol. 1, assigned to doctoral students in the years 2002–2003
Andrea Battinelli
dipartimento di scienze matematiche e informatiche “R. Magari” dell’università di Siena
via del Capitano 15, 53100 Siena
tel: +390577233769/02 fax: /01/30
email: battinelli@unisi.it
web: http//www.batman vai li
December 12, 2005
Contents
I Volume 1 1
1 Chapter 1 3
2 Chapter 2 5
3 Chapter 3 7
4 Chapter 4 9
5 Chapter 5 11
6 Chapter 6 13
7 Chapter 7 15
8 Chapter 8 17
9 Chapter 9 19
10 Chapter 10 21
11 Chapter 11 23
12 Vector algebra 25
12.1 Historical introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 25
12.2 The vector space of n-tuples of real numbers . . . . . . . . . . . . . . 25
12.3 Geometric interpretation for n ≤ 3 . . . . . . . . . . . . . . . . . . . 25
12.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
12.4.1 n. 1 (p. 450) . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
12.4.2 n. 2 (p. 450) . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
12.4.3 n. 3 (p. 450) . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
12.4.4 n. 4 (p. 450) . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
12.4.5 n. 5 (p. 450) . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
12.4.6 n. 6 (p. 451) . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
12.4.7 n. 7 (p. 451) . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
12.4.8 n. 8 (p. 451) . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
12.4.9 n. 9 (p. 451) . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
12.4.10 n. 10 (p. 451) . . . . . . . . . . . . . . . . . . . . . . . . . . 30
12.4.11 n. 11 (p. 451) . . . . . . . . . . . . . . . . . . . . . . . . . . 30
12.4.12 n. 12 (p. 451) . . . . . . . . . . . . . . . . . . . . . . . . . . 32
12.5 The dot product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
12.6 Length or norm of a vector . . . . . . . . . . . . . . . . . . . . . . . . 33
12.7 Orthogonality of vectors . . . . . . . . . . . . . . . . . . . . . . . . . 33
12.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
12.8.1 n. 1 (p. 456) . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
12.8.2 n. 2 (p. 456) . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
12.8.3 n. 3 (p. 456) . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
12.8.4 n. 5 (p. 456) . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
12.8.5 n. 6 (p. 456) . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
12.8.6 n. 7 (p. 456) . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
12.8.7 n. 10 (p. 456) . . . . . . . . . . . . . . . . . . . . . . . . . . 36
12.8.8 n. 13 (p. 456) . . . . . . . . . . . . . . . . . . . . . . . . . . 36
12.8.9 n. 14 (p. 456) . . . . . . . . . . . . . . . . . . . . . . . . . . 37
12.8.10 n. 15 (p. 456) . . . . . . . . . . . . . . . . . . . . . . . . . . 39
12.8.11 n. 16 (p. 456) . . . . . . . . . . . . . . . . . . . . . . . . . . 39
12.8.12 n. 17 (p. 456) . . . . . . . . . . . . . . . . . . . . . . . . . . 39
12.8.13 n. 19 (p. 456) . . . . . . . . . . . . . . . . . . . . . . . . . . 40
12.8.14 n. 20 (p. 456) . . . . . . . . . . . . . . . . . . . . . . . . . . 40
12.8.15 n. 21 (p. 457) . . . . . . . . . . . . . . . . . . . . . . . . . . 40
12.8.16 n. 22 (p. 457) . . . . . . . . . . . . . . . . . . . . . . . . . . 41
12.8.17 n. 24 (p. 457) . . . . . . . . . . . . . . . . . . . . . . . . . . 42
12.8.18 n. 25 (p. 457) . . . . . . . . . . . . . . . . . . . . . . . . . . 42
12.9 Projections. Angle between vectors in n-space . . . . . . . . . . . . . 43
12.10 The unit coordinate vectors . . . . . . . . . . . . . . . . . . . . . . . 43
12.11 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
12.11.1 n. 1 (p. 460) . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
12.11.2 n. 2 (p. 460) . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
12.11.3 n. 3 (p. 460) . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
12.11.4 n. 5 (p. 460) . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
12.11.5 n. 6 (p. 460) . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
12.11.6 n. 8 (p. 460) . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
12.11.7 n. 10 (p. 461) . . . . . . . . . . . . . . . . . . . . . . . . . . 46
12.11.8 n. 11 (p. 461) . . . . . . . . . . . . . . . . . . . . . . . . . . 47
12.11.9 n. 13 (p. 461) . . . . . . . . . . . . . . . . . . . . . . . . . . 48
12.11.10 n. 17 (p. 461) . . . . . . . . . . . . . . . . . . . . . . . . . . 48
12.12 The linear span of a ﬁnite set of vectors . . . . . . . . . . . . . . . . 50
12.13 Linear independence . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
12.14 Bases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
12.15 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
12.15.1 n. 1 (p. 467) . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
12.15.2 n. 3 (p. 467) . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
12.15.3 n. 5 (p. 467) . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
12.15.4 n. 6 (p. 467) . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
12.15.5 n. 7 (p. 467) . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
12.15.6 n. 8 (p. 467) . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
12.15.7 n. 10 (p. 467) . . . . . . . . . . . . . . . . . . . . . . . . . . 52
12.15.8 n. 12 (p. 467) . . . . . . . . . . . . . . . . . . . . . . . . . . 53
12.15.9 n. 13 (p. 467) . . . . . . . . . . . . . . . . . . . . . . . . . . 55
12.15.10 n. 14 (p. 468) . . . . . . . . . . . . . . . . . . . . . . . . . . 56
12.15.11 n. 15 (p. 468) . . . . . . . . . . . . . . . . . . . . . . . . . . 56
12.15.12 n. 17 (p. 468) . . . . . . . . . . . . . . . . . . . . . . . . . . 56
12.15.13 n. 18 (p. 468) . . . . . . . . . . . . . . . . . . . . . . . . . . 57
12.15.14 n. 19 (p. 468) . . . . . . . . . . . . . . . . . . . . . . . . . . 57
12.15.15 n. 20 (p. 468) . . . . . . . . . . . . . . . . . . . . . . . . . . 58
12.16 The vector space Vn(C) of n-tuples of complex numbers . . . . . . . 59
12.17 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
13 Applications of vector algebra to analytic geometry 61
13.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
13.2 Lines in n-space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
13.3 Some simple properties of straight lines . . . . . . . . . . . . . . . . . 61
13.4 Lines and vector-valued functions . . . . . . . . . . . . . . . . . . . . 61
13.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
13.5.1 n. 1 (p. 477) . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
13.5.2 n. 2 (p. 477) . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
13.5.3 n. 3 (p. 477) . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
13.5.4 n. 4 (p. 477) . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
13.5.5 n. 5 (p. 477) . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
13.5.6 n. 6 (p. 477) . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
13.5.7 n. 7 (p. 477) . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
13.5.8 n. 8 (p. 477) . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
13.5.9 n. 9 (p. 477) . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
13.5.10 n. 10 (p. 477) . . . . . . . . . . . . . . . . . . . . . . . . . . 65
13.5.11 n. 11 (p. 477) . . . . . . . . . . . . . . . . . . . . . . . . . . 66
13.5.12 n. 12 (p. 477) . . . . . . . . . . . . . . . . . . . . . . . . . . 67
13.6 Planes in euclidean n-spaces . . . . . . . . . . . . . . . . . . . . . . . 67
13.7 Planes and vector-valued functions . . . . . . . . . . . . . . . . . . . 67
13.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
13.8.1 n. 2 (p. 482) . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
13.8.2 n. 3 (p. 482) . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
13.8.3 n. 4 (p. 482) . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
13.8.4 n. 5 (p. 482) . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
13.8.5 n. 6 (p. 482) . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
13.8.6 n. 7 (p. 482) . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
13.8.7 n. 8 (p. 482) . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
13.8.8 n. 9 (p. 482) . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
13.8.9 n. 10 (p. 483) . . . . . . . . . . . . . . . . . . . . . . . . . . 73
13.8.10 n. 11 (p. 483) . . . . . . . . . . . . . . . . . . . . . . . . . . 74
13.8.11 n. 12 (p. 483) . . . . . . . . . . . . . . . . . . . . . . . . . . 75
13.8.12 n. 13 (p. 483) . . . . . . . . . . . . . . . . . . . . . . . . . . 75
13.8.13 n. 14 (p. 483) . . . . . . . . . . . . . . . . . . . . . . . . . . 75
13.9 The cross product . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
13.10 The cross product expressed as a determinant . . . . . . . . . . . . . 76
13.11 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
13.11.1 n. 1 (p. 487) . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
13.11.2 n. 2 (p. 487) . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
13.11.3 n. 3 (p. 487) . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
13.11.4 n. 4 (p. 487) . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
13.11.5 n. 5 (p. 487) . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
13.11.6 n. 6 (p. 487) . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
13.11.7 n. 7 (p. 488) . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
13.11.8 n. 8 (p. 488) . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
13.11.9 n. 9 (p. 488) . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
13.11.10 n. 10 (p. 488) . . . . . . . . . . . . . . . . . . . . . . . . . . 80
13.11.11 n. 11 (p. 488) . . . . . . . . . . . . . . . . . . . . . . . . . . 80
13.11.12 n. 12 (p. 488) . . . . . . . . . . . . . . . . . . . . . . . . . . 82
13.11.13 n. 13 (p. 488) . . . . . . . . . . . . . . . . . . . . . . . . . . 82
13.11.14 n. 14 (p. 488) . . . . . . . . . . . . . . . . . . . . . . . . . . 83
13.11.15 n. 15 (p. 488) . . . . . . . . . . . . . . . . . . . . . . . . . . 84
13.12 The scalar triple product . . . . . . . . . . . . . . . . . . . . . . . . 85
13.13 Cramer’s rule for solving systems of three linear equations . . . . . . 85
13.14 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
13.15 Normal vectors to planes . . . . . . . . . . . . . . . . . . . . . . . . 85
13.16 Linear cartesian equations for planes . . . . . . . . . . . . . . . . . . 85
13.17 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
13.17.1 n. 1 (p. 496) . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
13.17.2 n. 2 (p. 496) . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
13.17.3 n. 3 (p. 496) . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
13.17.4 n. 4 (p. 496) . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
13.17.5 n. 5 (p. 496) . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
13.17.6 n. 6 (p. 496) . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
13.17.7 n. 8 (p. 496) . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
13.17.8 n. 9 (p. 496) . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
13.17.9 n. 10 (p. 496) . . . . . . . . . . . . . . . . . . . . . . . . . . 89
13.17.10 n. 11 (p. 496) . . . . . . . . . . . . . . . . . . . . . . . . . . 90
13.17.11 n. 13 (p. 496) . . . . . . . . . . . . . . . . . . . . . . . . . . 90
13.17.12 n. 14 (p. 496) . . . . . . . . . . . . . . . . . . . . . . . . . . 90
13.17.13 n. 15 (p. 496) . . . . . . . . . . . . . . . . . . . . . . . . . . 90
13.17.14 n. 17 (p. 497) . . . . . . . . . . . . . . . . . . . . . . . . . . 91
13.17.15 n. 20 (p. 497) . . . . . . . . . . . . . . . . . . . . . . . . . . 91
13.18 The conic sections . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
13.19 Eccentricity of conic sections . . . . . . . . . . . . . . . . . . . . . . 91
13.20 Polar equations for conic sections . . . . . . . . . . . . . . . . . . . . 91
13.21 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
13.22 Conic sections symmetric about the origin . . . . . . . . . . . . . . . 92
13.23 Cartesian equations for the conic sections . . . . . . . . . . . . . . . 92
13.24 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
13.25 Miscellaneous exercises on conic sections . . . . . . . . . . . . . . . . 92
14 Calculus of vector-valued functions 93
15 Linear spaces 95
15.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
15.2 The deﬁnition of a linear space . . . . . . . . . . . . . . . . . . . . . 95
15.3 Examples of linear spaces . . . . . . . . . . . . . . . . . . . . . . . . 95
15.4 Elementary consequences of the axioms . . . . . . . . . . . . . . . . . 95
15.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
15.5.1 n. 1 (p. 555) . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
15.5.2 n. 2 (p. 555) . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
15.5.3 n. 3 (p. 555) . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
15.5.4 n. 4 (p. 555) . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
15.5.5 n. 5 (p. 555) . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
15.5.6 n. 6 (p. 555) . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
15.5.7 n. 7 (p. 555) . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
15.5.8 n. 11 (p. 555) . . . . . . . . . . . . . . . . . . . . . . . . . . 98
15.5.9 n. 13 (p. 555) . . . . . . . . . . . . . . . . . . . . . . . . . . 98
15.5.10 n. 14 (p. 555) . . . . . . . . . . . . . . . . . . . . . . . . . . 99
15.5.11 n. 16 (p. 555) . . . . . . . . . . . . . . . . . . . . . . . . . . 99
15.5.12 n. 17 (p. 555) . . . . . . . . . . . . . . . . . . . . . . . . . . 101
15.5.13 n. 18 (p. 555) . . . . . . . . . . . . . . . . . . . . . . . . . . 101
15.5.14 n. 19 (p. 555) . . . . . . . . . . . . . . . . . . . . . . . . . . 102
15.5.15 n. 22 (p. 555) . . . . . . . . . . . . . . . . . . . . . . . . . . 102
15.5.16 n. 23 (p. 555) . . . . . . . . . . . . . . . . . . . . . . . . . . 102
15.5.17 n. 24 (p. 555) . . . . . . . . . . . . . . . . . . . . . . . . . . 102
15.5.18 n. 25 (p. 555) . . . . . . . . . . . . . . . . . . . . . . . . . . 102
15.5.19 n. 26 (p. 555) . . . . . . . . . . . . . . . . . . . . . . . . . . 102
15.5.20 n. 27 (p. 555) . . . . . . . . . . . . . . . . . . . . . . . . . . 103
15.5.21 n. 28 (p. 555) . . . . . . . . . . . . . . . . . . . . . . . . . . 103
15.6 Subspaces of a linear space . . . . . . . . . . . . . . . . . . . . . . . . 103
15.7 Dependent and independent sets in a linear space . . . . . . . . . . . 103
15.8 Bases and dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
15.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
15.9.1 n. 1 (p. 560) . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
15.9.2 n. 2 (p. 560) . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
15.9.3 n. 3 (p. 560) . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
15.9.4 n. 4 (p. 560) . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
15.9.5 n. 5 (p. 560) . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
15.9.6 n. 6 (p. 560) . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
15.9.7 n. 7 (p. 560) . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
15.9.8 n. 8 (p. 560) . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
15.9.9 n. 9 (p. 560) . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
15.9.10 n. 10 (p. 560) . . . . . . . . . . . . . . . . . . . . . . . . . . 105
15.9.11 n. 11 (p. 560) . . . . . . . . . . . . . . . . . . . . . . . . . . 105
15.9.12 n. 12 (p. 560) . . . . . . . . . . . . . . . . . . . . . . . . . . 106
15.9.13 n. 13 (p. 560) . . . . . . . . . . . . . . . . . . . . . . . . . . 106
15.9.14 n. 14 (p. 560) . . . . . . . . . . . . . . . . . . . . . . . . . . 106
15.9.15 n. 15 (p. 560) . . . . . . . . . . . . . . . . . . . . . . . . . . 107
15.9.16 n. 16 (p. 560) . . . . . . . . . . . . . . . . . . . . . . . . . . 107
15.9.17 n. 22 (p. 560) . . . . . . . . . . . . . . . . . . . . . . . . . . 108
15.9.18 n. 23 (p. 560) . . . . . . . . . . . . . . . . . . . . . . . . . . 108
15.10 Inner products. Euclidean spaces. Norms . . . . . . . . . . . . . . . 111
15.11 Orthogonality in a euclidean space . . . . . . . . . . . . . . . . . . . 111
15.12 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
15.12.1 n. 9 (p. 567) . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
15.12.2 n. 11 (p. 567) . . . . . . . . . . . . . . . . . . . . . . . . . . 112
15.13 Construction of orthogonal sets. The Gram-Schmidt process . . . . . 115
15.14 Orthogonal complements. Projections . . . . . . . . . . . . . . . . . 115
15.15 Best approximation of elements in a euclidean space by elements in a finite-dimensional subspace . . . . . . . . . . . . . . . . . . . . . . . . 115
15.16 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
15.16.1 n. 1 (p. 576) . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
15.16.2 n. 2 (p. 576) . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
15.16.3 n. 3 (p. 576) . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
15.16.4 n. 4 (p. 576) . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
16 Linear transformations and matrices 121
16.1 Linear transformations . . . . . . . . . . . . . . . . . . . . . . . . . . 121
16.2 Null space and range . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
16.3 Nullity and rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
16.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
16.4.1 n. 1 (p. 582) . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
16.4.2 n. 2 (p. 582) . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
16.4.3 n. 3 (p. 582) . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
16.4.4 n. 4 (p. 582) . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
16.4.5 n. 5 (p. 582) . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
16.4.6 n. 6 (p. 582) . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
16.4.7 n. 7 (p. 582) . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
16.4.8 n. 8 (p. 582) . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
16.4.9 n. 9 (p. 582) . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
16.4.10 n. 10 (p. 582) . . . . . . . . . . . . . . . . . . . . . . . . . . 124
16.4.11 n. 16 (p. 582) . . . . . . . . . . . . . . . . . . . . . . . . . . 125
16.4.12 n. 17 (p. 582) . . . . . . . . . . . . . . . . . . . . . . . . . . 125
16.4.13 n. 23 (p. 582) . . . . . . . . . . . . . . . . . . . . . . . . . . 126
16.4.14 n. 25 (p. 582) . . . . . . . . . . . . . . . . . . . . . . . . . . 126
16.4.15 n. 27 (p. 582) . . . . . . . . . . . . . . . . . . . . . . . . . . 127
16.5 Algebraic operations on linear transformations . . . . . . . . . . . . . 128
16.6 Inverses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
16.7 One-to-one linear transformations . . . . . . . . . . . . . . . . . . . . 128
16.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
16.8.1 n. 15 (p. 589) . . . . . . . . . . . . . . . . . . . . . . . . . . 128
16.8.2 n. 16 (p. 589) . . . . . . . . . . . . . . . . . . . . . . . . . . 128
16.8.3 n. 17 (p. 589) . . . . . . . . . . . . . . . . . . . . . . . . . . 128
16.8.4 n. 27 (p. 590) . . . . . . . . . . . . . . . . . . . . . . . . . . 129
16.9 Linear transformations with prescribed values . . . . . . . . . . . . . 129
16.10 Matrix representations of linear transformations . . . . . . . . . . . . 129
16.11 Construction of a matrix representation in diagonal form . . . . . . . 129
16.12 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
16.12.1 n. 3 (p. 596) . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
16.12.2 n. 4 (p. 596) . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
16.12.3 n. 5 (p. 596) . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
16.12.4 n. 7 (p. 597) . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
16.12.5 n. 8 (p. 597) . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
16.12.6 n. 16 (p. 597) . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Part I
Volume 1
Chapter 1
Chapter 2
Chapter 3
Chapter 4
Chapter 5
Chapter 6
Chapter 7
Chapter 8
Chapter 9
Chapter 10
Chapter 11
Chapter 12
VECTOR ALGEBRA
12.1 Historical introduction
12.2 The vector space of n-tuples of real numbers
12.3 Geometric interpretation for n ≤ 3
12.4 Exercises
12.4.1 n. 1 (p. 450)
(a) a +b = (5, 0, 9).
(b) a −b = (−3, 6, 3).
(c) a +b −c = (3, −1, 4).
(d) 7a −2b −3c = (−7, 24, 21).
(e) 2a +b −3c = (0, 0, 0).
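The five results above can be checked mechanically. A small sketch in Python; the data vectors a, b, c are inferred from the stated answers (they are not restated in this chunk), so treat them as an assumption:

```python
# Vectors inferred from the answers of exercise 12.4.1, not copied from
# Apostol's text: a, b from (a) and (b), then c from (c).
a = (1, 3, 6)
b = (4, -3, 3)
c = (2, 1, 5)

def comb(*pairs):
    """Linear combination sum(k * v) over (scalar, vector) pairs."""
    return tuple(sum(k * v[i] for k, v in pairs) for i in range(3))

assert comb((1, a), (1, b)) == (5, 0, 9)               # (a)
assert comb((1, a), (-1, b)) == (-3, 6, 3)             # (b)
assert comb((1, a), (1, b), (-1, c)) == (3, -1, 4)     # (c)
assert comb((7, a), (-2, b), (-3, c)) == (-7, 24, 21)  # (d)
assert comb((2, a), (1, b), (-3, c)) == (0, 0, 0)      # (e)
```
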
12.4.2 n. 2 (p. 450)
The seven points to be drawn are the following:
(7/3, 2), (5/2, 5/2), (11/4, 13/4), (3, 4), (4, 7), (1, −2), (0, −5)
The purpose of the exercise is achieved by drawing, as required, a single picture containing all the points (including the starting points A and B, I would say).
It can be intuitively seen that, by letting t vary over all of R, the straight line through point A with direction given by the vector b ≡ −→OB is obtained.
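The collinearity can be verified numerically. A sketch with exact rational arithmetic, assuming A = (2, 1) and B = (1, 3); these values are inferred from the listed points (each has the form A + tB) and are not stated in this chunk:

```python
from fractions import Fraction as F

# Hypothetical data, inferred from the listed points of exercise 12.4.2.
A = (F(2), F(1))
B = (F(1), F(3))

points = [(F(7, 3), F(2)), (F(5, 2), F(5, 2)), (F(11, 4), F(13, 4)),
          (F(3), F(4)), (F(4), F(7)), (F(1), F(-2)), (F(0), F(-5))]

# Each point P satisfies P = A + t*B for some scalar t, so all of them
# lie on the line through A with direction B.
for P in points:
    t = (P[0] - A[0]) / B[0]
    assert P[1] == A[1] + t * B[1]
```
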
12.4.3 n. 3 (p. 450)
The seven points this time are the following:
µ
5
3
,
10
3
¶
,
µ
2,
7
2
¶
,
µ
5
2
,
15
4
¶
, (3, 4) , (5, 5) , (−1, 2) , (−3, 1)
It can be intuitively seen that, by letting t vary over all of R, the straight line through B with direction given by the vector a ≡ −→OA is obtained.
12.4.4 n. 4 (p. 450)
(a) The seven points to be drawn are the following:
(3/2, 2), (5/4, 5/2), (4/3, 7/3), (3, −1), (4, −3), (1/2, 4), (0, 5)
The whole purpose of this part of the exercise is achieved by drawing a single picture containing all the points (including the starting points A and B, I would say). This is made clear, it seems to me, by the question immediately following.
(b) It is hard not to notice that all points belong to the same straight line; indeed, as will become clear after the second lecture, all the combinations are affine.
(c) If the value of x is fixed at 0 and y varies in [0, 1], the segment OB is obtained; the same construction with the value of x fixed at 1 yields the segment AD, where −→OD = −→OA + −→OB, and hence D is the vertex of the parallelogram with three vertices at O, A, and B. Similarly, when x = 1/2 the segment obtained joins the midpoints of the two sides OA and BD; and it is enough to repeat the construction a few more times to convince oneself that the set
{xa + yb : (x, y) ∈ [0, 1]²}
is matched by the set of all points of the parallelogram OADB. The picture below is made with the value of x fixed at 0, 1/4, 1/2, 3/4, 1, 5/4, 3/2, and 2.
[figure: the segments xa + yb for the listed values of x]
(d) All the segments in the above construction are replaced by straight lines, and the resulting set is the (infinite) strip bounded by the lines containing the sides OB and AD, of equations 3x − y = 0 and 3x − y = 5 respectively.
(e) The whole plane.
12.4.5 n. 5 (p. 450)
x (2, 1) + y (1, 3) = (c₁, c₂)

I: 2x + y = c₁
II: x + 3y = c₂
3I − II: 5x = 3c₁ − c₂
2II − I: 5y = 2c₂ − c₁

(c₁, c₂) = ((3c₁ − c₂)/5) (2, 1) + ((2c₂ − c₁)/5) (1, 3)
12.4.6 n. 6 (p. 451)
(a)
d = x (1, 1, 1) + y (0, 1, 1) + z (1, 1, 0) = (x + z, x + y + z, x + y)

(b)
I: x + z = 0
II: x + y + z = 0
III: x + y = 0
From I, x = −z; substituting in II, y = 0; then III gives x = 0, and I gives z = 0.

(c)
I: x + z = 1
II: x + y + z = 2
III: x + y = 3
II − I: y = 1
II − III: z = −1
Substituting in I: x = 2.
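The elimination in (b) and (c) can be cross-checked by solving the same linear system numerically. A sketch using NumPy, which is an added dependency, not part of the text:

```python
import numpy as np

# Columns are the vectors (1,1,1), (0,1,1), (1,1,0) of exercise 12.4.6,
# so M @ (x, y, z) = x(1,1,1) + y(0,1,1) + z(1,1,0).
M = np.array([[1, 0, 1],
              [1, 1, 1],
              [1, 1, 0]], dtype=float)

# (b): the matrix is invertible, so the homogeneous system has only the
# zero solution, as found by substitution above.
assert abs(np.linalg.det(M)) > 1e-12

# (c): with d = (1, 2, 3), elimination gave (x, y, z) = (2, 1, -1).
sol = np.linalg.solve(M, np.array([1.0, 2.0, 3.0]))
assert np.allclose(sol, [2.0, 1.0, -1.0])
```
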
Exercises 29
12.4.7 n. 7 (p. 451)
(a)
d = x (1, 1, 1) + y (0, 1, 1) + z (2, 1, 1) = (x + 2z, x + y + z, x + y + z)

(b)
I: x + 2z = 0
II: x + y + z = 0
III: x + y + z = 0
From I, x = −2z; substituting in II, y = z. Setting z = 1 yields the nontrivial solution (−2, 1, 1).

(c)
I: x + 2z = 1
II: x + y + z = 2
III: x + y + z = 3
III − II: 0 = 1, so the system has no solution.
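Both conclusions can be cross-checked numerically; a sketch (NumPy assumed available, as before):

```python
import numpy as np

# Columns are the vectors (1,1,1), (0,1,1), (2,1,1) of exercise 12.4.7.
M = np.array([[1, 0, 2],
              [1, 1, 1],
              [1, 1, 1]], dtype=float)

# (b): (-2, 1, 1) is a nontrivial solution of the homogeneous system.
assert np.allclose(M @ np.array([-2.0, 1.0, 1.0]), 0.0)

# (c): d = (1, 2, 3) is not in the span of the three columns; appending
# d raises the rank, matching the contradiction 0 = 1 found above.
aug = np.column_stack([M, np.array([1.0, 2.0, 3.0])])
assert np.linalg.matrix_rank(aug) == np.linalg.matrix_rank(M) + 1
```
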
12.4.8 n. 8 (p. 451)
(a)
d = x (1, 1, 1, 0) + y (0, 1, 1, 1) + z (1, 1, 0, 0) = (x + z, x + y + z, x + y, y)

(b)
I: x + z = 0
II: x + y + z = 0
III: x + y = 0
IV: y = 0
From IV, y = 0; then III gives x = 0, and I gives z = 0. Check on II: 0 = 0.

(c)
I: x + z = 1
II: x + y + z = 5
III: x + y = 3
IV: y = 4
From IV, y = 4; then III gives x = −1, and I gives z = 2. Check on II: −1 + 4 + 2 = 5.

(d)
I: x + z = 1
II: x + y + z = 2
III: x + y = 3
IV: y = 4
II − I − IV: 0 = −3, so the system has no solution.
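As in the previous two exercises, the consistent case (c) and the inconsistent case (d) can be cross-checked numerically; a sketch with NumPy:

```python
import numpy as np

# Columns are the 4-vectors (1,1,1,0), (0,1,1,1), (1,1,0,0) of 12.4.8.
M = np.array([[1, 0, 1],
              [1, 1, 1],
              [1, 1, 0],
              [0, 1, 0]], dtype=float)

# (c): d = (1, 5, 3, 4) is attained at (x, y, z) = (-1, 4, 2).
assert np.allclose(M @ np.array([-1.0, 4.0, 2.0]), [1, 5, 3, 4])

# (d): d = (1, 2, 3, 4) is not attainable; appending it raises the rank,
# matching the contradiction 0 = -3 found above.
aug = np.column_stack([M, np.array([1.0, 2.0, 3.0, 4.0])])
assert np.linalg.matrix_rank(aug) == np.linalg.matrix_rank(M) + 1
```
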
12.4.9 n. 9 (p. 451)
Let the two vectors u and v both be parallel to the vector w. According to the definition at page 450 (just before the beginning of § 12.4), this means that there are two real nonzero numbers α and β such that u = αw and v = βw. Then
u = αw = α (v/β) = (α/β) v
that is, u is parallel to v.
12.4.10 n. 10 (p. 451)
Assumptions:
I: c = a + b
II: ∃k ∈ R ∼ {0}, a = kd

Claim:
(∃h ∈ R ∼ {0}, c = hd) ⇔ (∃l ∈ R ∼ {0}, b = ld)

I present two possible lines of reasoning (among others, probably). If you look carefully, they differ only in the phrasing.

1.
∃h ∈ R ∼ {0}, c = hd
⇔ (by I) ∃h ∈ R ∼ {0}, a + b = hd
⇔ (by II) ∃h ∈ R ∼ {0}, ∃k ∈ R ∼ {0}, kd + b = hd
⇔ ∃h ∈ R ∼ {0}, ∃k ∈ R ∼ {0}, b = (h − k) d (with h ≠ k, since b ≠ 0)
⇔ (setting l ≡ h − k) ∃l ∈ R ∼ {0}, b = ld

2. (⇒) Since (by I) we have b = c − a, if c is parallel to d, then (by II) b is the difference of two vectors which are both parallel to d; it follows that b, which is nonnull, is parallel to d too.
(⇐) Since (by I) we have c = a + b, if b is parallel to d, then (by II) c is the sum of two vectors which are both parallel to d; it follows that c, which is nonnull, is parallel to d too.
12.4.11 n. 11 (p. 451)
(b) Here is an illustration of the first distributive law
(α + β) v = αv + βv
with v = (2, 1), α = 2, β = 3. The vectors v, αv, βv, αv + βv are displayed by means of representative oriented segments from left to right, in black, red, blue, and red-blue colour, respectively. The oriented segment representing the vector (α + β) v is above, in violet. The dotted lines are there just to make it clearer that the two oriented segments representing αv + βv and (α + β) v are congruent, that is, the vectors αv + βv and (α + β) v are the same.
[figure: tails are marked with a cross, heads with a diamond]
An illustration of the second distributive law
α(u +v) = αu +αv
is provided by means of the vectors u = (1, 3), v = (2, 1), and the scalar α = 2. The
vectors u and αu are represented by means of blue oriented segments; the vectors
v and αv by means of red ones; u + v and α(u +v) by green ones; αu + αv is in
violet. The original vectors u, v, u + v are on the left; the “rescaled” vectors αu,
αv, α(u +v) on the right. Again, the black dotted lines are there just to emphasize
congruence.
32 Vector algebra
[figure: tails are marked with a cross, heads with a diamond]
12.4.12 n. 12 (p. 451)
The statement to be proved is better understood if written in homogeneous form, with all vectors represented by oriented segments:
−→OA + (1/2) −→AC = (1/2) −→OB (12.1)
Since A and C are opposed vertices, the same holds for O and B; this means that the oriented segment OB covers precisely one diagonal of the parallelogram OABC, and AC precisely covers the other (in the rightward-downwards orientation), hence what needs to be proved is the following:
−→OA + (1/2) (−→OC − −→OA) = (1/2) (−→OA + −→OC)
which is now seen to be an immediate consequence of the distributive properties.
In the picture below, −→OA is in red, −→OC in blue, −→OB = −→OA + −→OC in green, and −→AC = −→OC − −→OA in violet.
The geometrical theorem expressed by equation (12.1) is the following:
Theorem 1 The two diagonals of every parallelogram intersect at a point which divides each of them into two segments of equal length. In other words, the intersection point of the two diagonals is the midpoint of both.
Indeed, the left-hand side of (12.1) is the vector represented by the oriented segment OM, where M is the midpoint of diagonal AC (violet), whereas the right-hand side is the vector represented by the oriented segment ON, where N is the midpoint of diagonal OB (green). More explicitly, the movement from O to M is described as the composition of a first movement from O to A with a second movement from A towards C, which stops halfway (red plus half the violet); whereas the movement from O to N is described as a single movement from O toward B, stopping halfway (half the green). Since (12.1) asserts that −→OM = −→ON, the equality of M and N follows, and hence a proof of (12.1) is a proof of the theorem.
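Identity (12.1) can also be checked on a concrete parallelogram. A minimal sketch, with O at the origin and B = A + C; the coordinates of A and C are a hypothetical example, not taken from the text:

```python
# Parallelogram OABC with O at the origin: B = A + C.
# A and C are arbitrary sample coordinates (an assumption for the check).
A = (3.0, 1.0)
C = (1.0, 2.0)
B = (A[0] + C[0], A[1] + C[1])

# Left-hand side of (12.1): OA + (1/2) AC; right-hand side: (1/2) OB.
lhs = (A[0] + (C[0] - A[0]) / 2, A[1] + (C[1] - A[1]) / 2)
rhs = (B[0] / 2, B[1] / 2)

# The two diagonals AC and OB share their midpoint.
assert lhs == rhs
```
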
12.5 The dot product
12.6 Length or norm of a vector
12.7 Orthogonality of vectors
12.8 Exercises
12.8.1 n. 1 (p. 456)
(a) ⟨a, b⟩ = −6
(b) ⟨b, c⟩ = 2
(c) ⟨a, c⟩ = 6
(d) ⟨a, b + c⟩ = 0
(e) ⟨a − b, c⟩ = 4
12.8.2 n. 2 (p. 456)
(a) ⟨a, b⟩ c = (2 · 2 + 4 · 6 + (−7) · 3) (3, 4, −5) = 7 (3, 4, −5) = (21, 28, −35)
(b) ⟨a, b + c⟩ = 2 · (2 + 3) + 4 · (6 + 4) + (−7) (3 − 5) = 64
(c) ⟨a + b, c⟩ = (2 + 2) · 3 + (4 + 6) · 4 + (−7 + 3) · (−5) = 72
(d) a ⟨b, c⟩ = (2, 4, −7) (2 · 3 + 6 · 4 + 3 · (−5)) = (2, 4, −7) 15 = (30, 60, −105)
(e) a/⟨b, c⟩ = (2, 4, −7)/15 = (2/15, 4/15, −7/15)
12.8.3 n. 3 (p. 456)
The statement is false. Indeed,
⟨a, b⟩ = ⟨a, c⟩ ⇔ ⟨a, b − c⟩ = 0 ⇔ a ⊥ (b − c)
and the difference b − c may well be orthogonal to a without being the null vector. The simplest example that I can conceive is in R²:
a = (1, 0), b = (1, 1), c = (1, 2)
See also exercise 1, question (d).
12.8.4 n. 5 (p. 456)
The required vector, of coordinates (x, y, z), must satisfy the two conditions
⟨(2, 1, −1), (x, y, z)⟩ = 0
⟨(1, −1, 2), (x, y, z)⟩ = 0
that is,
I: 2x + y − z = 0
II: x − y + 2z = 0
I + II: 3x + z = 0
2I + II: 5x + y = 0
Thus the set of all such vectors can be represented in parametric form as follows:
{(α, −5α, −3α) : α ∈ R}
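The parametric family can be checked directly against both orthogonality conditions; a brief sketch:

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Every vector of the family (alpha, -5*alpha, -3*alpha) is orthogonal
# to both (2, 1, -1) and (1, -1, 2), for any sample value of alpha.
for alpha in (-2, 1, 3):
    v = (alpha, -5 * alpha, -3 * alpha)
    assert dot((2, 1, -1), v) == 0
    assert dot((1, -1, 2), v) == 0
```
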
12.8.5 n. 6 (p. 456)
⟨c, b⟩ = 0 and c = xa + yb
⇒ ⟨xa + yb, b⟩ = 0
⇔ x ⟨a, b⟩ + y ⟨b, b⟩ = 0
⇔ 7x + 14y = 0
Let (x, y) ≡ (−2, 1). Then c = (1, 5, −4) ≠ 0 and ⟨(1, 5, −4), (3, 1, 2)⟩ = 0.
12.8.6 n. 7 (p. 456)
1 (solution with explicit mention of coordinates) From the last condition, for some α to be determined,
c = (α, 2α, −2α)
Substituting in the first condition,
d = a − c = (2 − α, −1 − 2α, 2 + 2α)
Thus the second condition becomes
1 · (2 − α) + 2 · (−1 − 2α) − 2 · (2 + 2α) = 0
that is,
−4 − 9α = 0, that is, α = −4/9
and hence
c = (1/9) (−4, −8, 8)
d = (1/9) (22, −1, 10)

2 (same solution, with no mention of coordinates) From the last condition, for some α to be determined,
c = αb
Substituting in the first condition,
d = a − c = a − αb
Thus the second condition becomes
⟨b, a − αb⟩ = 0
that is,
⟨b, a⟩ − α ⟨b, b⟩ = 0, that is, α = ⟨b, a⟩/⟨b, b⟩ = (2 · 1 + (−1) · 2 + 2 · (−2)) / (1² + 2² + (−2)²) = −4/9
and hence
c = (1/9) (−4, −8, 8)
d = (1/9) (22, −1, 10)
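This is the familiar orthogonal decomposition of a along b, and it can be verified with exact arithmetic. A sketch, assuming a = (2, −1, 2) and b = (1, 2, −2) as read off from solution 2 above (an inference, since the exercise data are not restated here):

```python
from fractions import Fraction as F

# a and b inferred from the computation of alpha in solution 2 above.
a = (F(2), F(-1), F(2))
b = (F(1), F(2), F(-2))

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

alpha = dot(b, a) / dot(b, b)           # projection coefficient, -4/9
c = tuple(alpha * x for x in b)         # component of a parallel to b
d = tuple(x - y for x, y in zip(a, c))  # component of a orthogonal to b

assert alpha == F(-4, 9)
assert c == (F(-4, 9), F(-8, 9), F(8, 9))
assert d == (F(22, 9), F(-1, 9), F(10, 9))
assert dot(b, d) == 0                   # d really is orthogonal to b
```
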
12.8.7 n. 10 (p. 456)
(a) b = (1, −1) or b = (−1, 1).
(b) b = (−1, −1) or b = (1, 1).
(c) b = (−3, −2) or b = (3, 2).
(d) b = (b, −a) or b = (−b, a).
12.8.8 n. 13 (p. 456)
If b is the required vector, the following conditions must be satisfied:
⟨a, b⟩ = 0, ‖b‖ = ‖a‖
If a = 0, then ‖a‖ = 0 and hence b = 0 as well. If a ≠ 0, let the coordinates of a be given by the couple (α, β), and the coordinates of b by the couple (x, y). The above conditions take the form
αx + βy = 0, x² + y² = α² + β²
Either α or β must be different from 0. Suppose α is nonzero (the situation is completely analogous if β ≠ 0 is assumed). Then the first equation gives
x = −(β/α) y (12.2)
and substitution in the second equation yields
(β²/α² + 1) y² = α² + β²
that is,
y² = (α² + β²) / ((α² + β²)/α²) = α²
and hence
y = ±α
Substituting back in (12.2) gives
x = ∓β
Thus there are only two solutions to the problem, namely, b = (−β, α) and b = (β, −α). In particular,
(a) b = (−2, 1) or b = (2, −1).
(b) b = (2, −1) or b = (−2, 1).
(c) b = (−2, −1) or b = (2, 1).
(d) b = (−1, 2) or b = (1, −2).
12.8.9 n. 14 (p. 456)
1 (right angle in C; solution with explicit mention of coordinates) Let (x, y, z) be the coordinates of C. If the right angle is in C, the two vectors →CA and →CB must be orthogonal. Thus
⟨(2, −1, 1) − (x, y, z), (3, −4, −4) − (x, y, z)⟩ = 0
that is,
⟨(2, −1, 1), (3, −4, −4)⟩ + ⟨(x, y, z), (x, y, z)⟩ − ⟨(2, −1, 1) + (3, −4, −4), (x, y, z)⟩ = 0
or
x² + y² + z² − 5x + 5y + 3z + 6 = 0
The above equation, when rewritten in a more perspicuous way by "completion of the squares",
(x − 5/2)² + (y + 5/2)² + (z + 3/2)² = 35/4
is seen to define the sphere of center P = (5/2, −5/2, −3/2) and radius √35/2.
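The completion of squares can be double-checked with exact rational arithmetic; this small sketch confirms that the quadric is the sphere centered at the midpoint of A = (2, −1, 1) and B = (3, −4, −4), with squared radius 35/4 = ‖AB‖²/4:

```python
# Check that the quadric equation and its completed-squares form agree,
# and that the squared radius is |AB|^2 / 4 = 35/4.
from fractions import Fraction as F

def quadric(x, y, z):            # left-hand side of the equation in the text
    return x*x + y*y + z*z - 5*x + 5*y + 3*z + 6

def completed(x, y, z):          # completed-squares form, minus 35/4
    return (x - F(5, 2))**2 + (y + F(5, 2))**2 + (z + F(3, 2))**2 - F(35, 4)

for px in range(-2, 3):
    for py in range(-2, 3):
        for pz in range(-2, 3):
            assert quadric(F(px), F(py), F(pz)) == completed(F(px), F(py), F(pz))

A, B = (2, -1, 1), (3, -4, -4)
assert sum((p - q)**2 for p, q in zip(A, B)) == 35   # so the radius is sqrt(35)/2
```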
2 (right angle in C; same solution, with no mention of coordinates) Let
a ≡ →OA,  b ≡ →OB,  x ≡ →OC
(stressing that the point C, hence the vector →OC, is unknown). Then, orthogonality of the two vectors
→CA = a − x,  →CB = b − x
is required; that is,
⟨a − x, b − x⟩ = 0
or
⟨a, b⟩ − ⟨a + b, x⟩ + ⟨x, x⟩ = 0
Equivalently,
‖x‖² − 2⟨(a + b)/2, x⟩ + ‖(a + b)/2‖² = ‖(a + b)/2‖² − ⟨a, b⟩
‖x − (a + b)/2‖² = ‖(a − b)/2‖²
The last characterization of the solution to our problem shows that the solution set is the locus of all points having fixed distance (‖(a − b)/2‖) from the midpoint of the segment AB. Indeed, if π is any plane containing AB, and C is any point of the circle of π having AB as diameter, it is known from elementary geometry that the triangle ACB has a right angle in C.
3 (right angle in B; solution with explicit mention of coordinates) With this approach, the vectors required to be orthogonal are →BA and →BC. Since →BA = (−1, 3, 5) and →BC = (x − 3, y + 4, z + 4), the following must hold:
0 = ⟨→BA, →BC⟩ = 3 − x + 3y + 12 + 5z + 20
that is,
x − 3y − 5z = 35
The solution set is the plane π through B and orthogonal to (1, −3, −5).
4 (right angle in B; same solution, with no mention of coordinates) Proceeding as in the previous point, with the notation of point 2, the condition to be required is
0 = ⟨a − b, x − b⟩
that is,
⟨a − b, x⟩ = ⟨a − b, b⟩
Thus the solution plane π is seen to pass through B and to be orthogonal to the segment connecting point B to point A.
5 (right angle in A) It should be clear at this stage that the solution set in this case is the plane π′ through A and orthogonal to AB, of equation
⟨b − a, x⟩ = ⟨b − a, a⟩
It is also clear that π and π′ are parallel.
12.8.10 n. 15 (p. 456)
I        c₁ − c₂ + 2c₃ = 0
II       2c₁ + c₂ − c₃ = 0
I + II   3c₁ + c₃ = 0
I + 2II  5c₁ + c₂ = 0
c = (−1, 5, 3)
12.8.11 n. 16 (p. 456)
p = (3α, 4α)
q = (4β, −3β)
I         3α + 4β = 1
II        4α − 3β = 2
4II + 3I  25α = 11
4I − 3II  25β = −2
(α, β) = (1/25)(11, −2)
p = (1/25)(33, 44),  q = (1/25)(−8, 6)
12.8.12 n. 17 (p. 456)
The question is identical to the one already answered in exercise 7 of this section. Recalling solution 2 to that exercise, we have that p ≡ →OP must be equal to αb = α→OB, with
α = ⟨a, b⟩/⟨b, b⟩ = 10/4
Thus
p = (5/2, 5/2, 5/2, 5/2),  q = (−3/2, −1/2, 1/2, 3/2)
12.8.13 n. 19 (p. 456)
It has been quickly seen in class that
‖a + b‖² = ‖a‖² + ‖b‖² + 2⟨a, b⟩
Substituting −b for b, I obtain
‖a − b‖² = ‖a‖² + ‖b‖² − 2⟨a, b⟩
By subtraction,
‖a + b‖² − ‖a − b‖² = 4⟨a, b⟩
as required. You should notice that the above identity has already been used at the end of point 2 in the solution to exercise 14 of the present section.
Concerning the geometrical interpretation of the special case of the above identity,
‖a + b‖² = ‖a − b‖² if and only if ⟨a, b⟩ = 0
it is enough to notice that orthogonality of a and b is equivalent to the parallelogram OACB (in the given order around the perimeter, that is, with vertex C opposite the vertex in the origin O) being a rectangle; and in such a rectangle ‖a + b‖ and ‖a − b‖ measure the lengths of the two diagonals.
12.8.14 n. 20 (p. 456)
‖a + b‖² + ‖a − b‖² = ⟨a + b, a + b⟩ + ⟨a − b, a − b⟩
= ⟨a, a⟩ + 2⟨a, b⟩ + ⟨b, b⟩ + ⟨a, a⟩ − 2⟨a, b⟩ + ⟨b, b⟩
= 2‖a‖² + 2‖b‖²
The geometric theorem expressed by the above identity can be stated as follows:
Theorem 2 In every parallelogram, the sum of the squares of the four sides equals the sum of the squares of the diagonals.
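A quick numeric sanity check of the parallelogram law, with arbitrary sample vectors (chosen for illustration only):

```python
# Parallelogram law: |a+b|^2 + |a-b|^2 = 2|a|^2 + 2|b|^2.
def norm_sq(v):
    return sum(x * x for x in v)

a, b = (1, -2, 3), (4, 0, -1)
a_plus_b = tuple(x + y for x, y in zip(a, b))
a_minus_b = tuple(x - y for x, y in zip(a, b))
assert norm_sq(a_plus_b) + norm_sq(a_minus_b) == 2 * norm_sq(a) + 2 * norm_sq(b)
```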
12.8.15 n. 21 (p. 457)
Let A, B, C, and D be the four vertices of the quadrilateral, starting from the left in clockwise order, and let M and N be the midpoints of the diagonals →AC and →DB. In order to simplify the notation, let
u ≡ →AB,  v ≡ →BC,  w ≡ →CD,  z ≡ →DA = −(u + v + w)
Then
c ≡ →AC = u + v,  d ≡ →DB = u + z = −(v + w)
→MN = →AN − →AM = u − (1/2)d − (1/2)c
2→MN = 2u + (v + w) − (u + v) = w + u
4‖→MN‖² = ‖w‖² + ‖u‖² + 2⟨w, u⟩
‖z‖² = ‖u‖² + ‖v‖² + ‖w‖² + 2⟨u, v⟩ + 2⟨u, w⟩ + 2⟨v, w⟩
‖c‖² = ‖u‖² + ‖v‖² + 2⟨u, v⟩
‖d‖² = ‖v‖² + ‖w‖² + 2⟨v, w⟩
We are now in the position to prove the theorem.
(‖u‖² + ‖v‖² + ‖w‖² + ‖z‖²) − (‖c‖² + ‖d‖²) = ‖z‖² − ‖v‖² − 2⟨u, v⟩ − 2⟨v, w⟩
= ‖u‖² + ‖w‖² + 2⟨u, w⟩
= 4‖→MN‖²
12.8.16 n. 22 (p. 457)
Orthogonality of xa + yb and 4ya − 9xb amounts to
0 = ⟨xa + yb, 4ya − 9xb⟩
= −9⟨a, b⟩x² + 4⟨a, b⟩y² + (4‖a‖² − 9‖b‖²)xy
Since the above must hold for every couple (x, y), choosing x = 0 and y = 1 gives ⟨a, b⟩ = 0; thus the condition becomes
(4‖a‖² − 9‖b‖²)xy = 0
and choosing now x = y = 1 gives
4‖a‖² = 9‖b‖²
Since ‖a‖ is known to be equal to 6, it follows that ‖b‖ is equal to 4.
Finally, since a and b have been shown to be orthogonal,
‖2a + 3b‖² = ‖2a‖² + ‖3b‖² + 2⟨2a, 3b⟩
= 2² · 6² + 3² · 4² + 2 · 2 · 3 · 0
= 2⁵ · 3²
and hence
‖2a + 3b‖ = 12√2
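With any concrete pair of orthogonal vectors of norms 6 and 4 (here a = (6, 0) and b = (0, 4), chosen purely for illustration), the final value can be confirmed:

```python
# Confirm |2a + 3b| = 12*sqrt(2) for orthogonal a, b with |a| = 6, |b| = 4.
import math

a, b = (6, 0), (0, 4)                          # orthogonal sample pair
v = tuple(2*x + 3*y for x, y in zip(a, b))     # 2a + 3b = (12, 12)
norm_sq = sum(x * x for x in v)
assert norm_sq == 288                          # = 2**5 * 3**2
assert math.isclose(math.sqrt(norm_sq), 12 * math.sqrt(2))
```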
12.8.17 n. 24 (p. 457)
This is once again the question raised in exercises 7 and 17, in the general context of the linear space Rⁿ (where n ∈ N is arbitrary). Since the coordinate-free version of the solution procedure is completely independent of the number of coordinates of the vectors involved, the full answer to the problem has already been seen to be
c = (⟨b, a⟩/⟨a, a⟩) a,  d = b − (⟨b, a⟩/⟨a, a⟩) a
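The decomposition is easy to test on sample vectors (the ones below are illustrative, not taken from the exercise): the parallel part c and the residual d = b − c always satisfy ⟨d, a⟩ = 0 and c + d = b.

```python
# Orthogonal decomposition of b along a, in exact rational arithmetic.
from fractions import Fraction as F

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a = (F(1), F(2), F(2), F(1))     # sample vectors, not from the text
b = (F(3), F(-1), F(4), F(0))
c = tuple(dot(b, a) / dot(a, a) * x for x in a)
d = tuple(x - y for x, y in zip(b, c))
assert dot(d, a) == 0                                # d is orthogonal to a
assert tuple(x + y for x, y in zip(c, d)) == b       # c + d recovers b
```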
12.8.18 n. 25 (p. 457)
(a) For every x ∈ R,
‖a + xb‖² = ‖a‖² + x²‖b‖² + 2x⟨a, b⟩
= ‖a‖² + x²‖b‖²  if a ⊥ b
≥ ‖a‖²  if a ⊥ b
(b) Since the norm of any vector is nonnegative, the following biconditional is true:
(∀x ∈ R, ‖a + xb‖ ≥ ‖a‖) ⇔ (∀x ∈ R, ‖a + xb‖² ≥ ‖a‖²)
Moreover, by pure computation, the following biconditional is true:
(∀x ∈ R, ‖a + xb‖² − ‖a‖² ≥ 0) ⇔ (∀x ∈ R, x²‖b‖² + 2x⟨a, b⟩ ≥ 0)
If ⟨a, b⟩ is positive, the quadratic x²‖b‖² + 2x⟨a, b⟩ is negative for all x in the open interval (−2⟨a, b⟩/‖b‖², 0); if ⟨a, b⟩ is negative, the quadratic is negative in the open interval (0, −2⟨a, b⟩/‖b‖²). It follows that the quadratic can be nonnegative for every x ∈ R only if ⟨a, b⟩ is zero, that is, only if it reduces to the second degree term x²‖b‖². In conclusion, I have proved that the conditional
(∀x ∈ R, ‖a + xb‖ ≥ ‖a‖) ⇒ ⟨a, b⟩ = 0
is true.
12.9 Projections. Angle between vectors in n-space
12.10 The unit coordinate vectors
12.11 Exercises
12.11.1 n. 1 (p. 460)
⟨a, b⟩ = 11,  ‖b‖² = 9
The projection of a along b is
(11/9) b = (11/9, 22/9, 22/9)
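Assuming the data of Apostol's exercise, a = (1, 2, 3) and b = (1, 2, 2) (an assumption, since the statement is not reproduced here), the numbers check out:

```python
# Projection of a along b: (<a,b>/<b,b>) * b, in exact arithmetic.
from fractions import Fraction as F

a, b = (1, 2, 3), (1, 2, 2)      # assumed exercise data
dot = lambda u, v: sum(x * y for x, y in zip(u, v))
assert dot(a, b) == 11 and dot(b, b) == 9
projection = tuple(F(11, 9) * x for x in b)
assert projection == (F(11, 9), F(22, 9), F(22, 9))
```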
12.11.2 n. 2 (p. 460)
⟨a, b⟩ = 10,  ‖b‖² = 4
The projection of a along b is
(10/4) b = (5/2, 5/2, 5/2, 5/2)
12.11.3 n. 3 (p. 460)
(a)
cos ∠(a, i) = ⟨a, i⟩ / (‖a‖ ‖i‖) = 6/7
cos ∠(a, j) = ⟨a, j⟩ / (‖a‖ ‖j‖) = 3/7
cos ∠(a, k) = ⟨a, k⟩ / (‖a‖ ‖k‖) = −2/7
(b) There are just two vectors as required, the unit direction vector u of a, and its opposite:
u = a/‖a‖ = (6/7, 3/7, −2/7),  −a/‖a‖ = (−6/7, −3/7, 2/7)
12.11.4 n. 5 (p. 460)
Let
A ≡ (2, −1, 1),  B ≡ (1, −3, −5),  C ≡ (3, −4, −4)
p ≡ →BC = (2, −1, 1),  q ≡ →CA = (−1, 3, 5),  r ≡ →AB = (−1, −2, −6)
Then
cos ∠A = ⟨−q, r⟩ / (‖−q‖ ‖r‖) = 35/(√35 √41) = (√35 √41)/41
cos ∠B = ⟨p, −r⟩ / (‖p‖ ‖−r‖) = 6/(√6 √41) = (√6 √41)/41
cos ∠C = ⟨−p, q⟩ / (‖−p‖ ‖q‖) = 0/(√6 √35) = 0
There is some funny occurrence in this exercise, which makes a wrong solution apparently correct (or almost correct), if one looks only at numerical results. The angles at A, B, C, as implicitly argued above, are more precisely described as ∠BAC, ∠CBA, ∠ACB; that is, as the three angles of triangle ABC. If some confusion is made between points and vectors, and/or angles, one may be led to operate directly with the coordinates of points A, B, C in place of the coordinates of vectors →BC, →AC, →AB, respectively. This amounts, as a matter of fact, to working with the vectors →OA, →OB, →OC, therefore computing
⟨→OB, →OC⟩ / (‖→OB‖ ‖→OC‖) = cos ∠BOC,  an angle of triangle OBC
⟨→OC, →OA⟩ / (‖→OC‖ ‖→OA‖) = cos ∠COA,  an angle of triangle OCA
⟨→OA, →OB⟩ / (‖→OA‖ ‖→OB‖) = cos ∠AOB,  an angle of triangle OAB
instead of cos ∠BAC, cos ∠CBA, cos ∠ACB, respectively. Up to this point, there is nothing funny in doing that; it is only a mistake, and a fairly bad one, being of conceptual type. The funny thing is in the numerical data of the exercise: it so happens that
1. points A, B, C are coplanar with the origin O; more than that,
2. →OA = →BC and →OB = →AC; more than that,
3. ∠AOB is a right angle.
Point 1 already singles out a somewhat special situation, but point 2 makes OACB a parallelogram, and point 3 makes it even a rectangle.
[Figure: →OA red, →OB blue, →OC green, AB violet]
It turns out, therefore, that
‖→OA‖ = ‖→BC‖ = ‖p‖
‖→OB‖ = ‖→AC‖ = ‖−q‖ = ‖q‖
‖→OC‖ = ‖→AB‖ = ‖r‖
∠COA = ∠CBA,  ∠BOC = ∠BAC,  ∠AOB = ∠ACB
and such a special circumstance leads a wrong solution to yield the right numbers.
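The three cosines, and the coincidences just described, can be confirmed numerically:

```python
# Angles of triangle ABC for A=(2,-1,1), B=(1,-3,-5), C=(3,-4,-4).
import math

A, B, C = (2, -1, 1), (1, -3, -5), (3, -4, -4)

def sub(u, v):
    return tuple(x - y for x, y in zip(u, v))

def cos_angle(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x*x for x in u)) * math.sqrt(sum(x*x for x in v)))

cosA = cos_angle(sub(B, A), sub(C, A))   # angle at A
cosB = cos_angle(sub(A, B), sub(C, B))   # angle at B
cosC = cos_angle(sub(A, C), sub(B, C))   # angle at C

assert abs(cosA - math.sqrt(35 * 41) / 41) < 1e-12
assert abs(cosB - math.sqrt(6 * 41) / 41) < 1e-12
assert abs(cosC) < 1e-12                       # right angle at C
assert sum(x * y for x, y in zip(A, B)) == 0   # and OA is orthogonal to OB
```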
12.11.5 n. 6 (p. 460)
Since
‖a + c ± b‖² = ‖a + c‖² + ‖b‖² ± 2⟨a + c, b⟩
from
‖a + c + b‖ = ‖a + c − b‖
it is possible to deduce
⟨a + c, b⟩ = 0
This is certainly true, as a particular case, if c = −a, which immediately implies
∠(a, c) = π,  ∠(b, c) = π − ∠(a, b) = (7/8)π
Moreover, even if a + c = 0 is not assumed, the same conclusion holds. Indeed, from
⟨c, b⟩ = −⟨a, b⟩
and
‖c‖ = ‖a‖
it is easy to check that
cos ∠(b, c) = ⟨b, c⟩ / (‖b‖ ‖c‖) = −⟨a, b⟩ / (‖b‖ ‖a‖) = −cos ∠(a, b)
and hence that
∠(b, c) = π ± ∠(a, b) = (7/8)π, (9/8)π (the second value being superfluous).
12.11.6 n. 8 (p. 460)
We have
‖a‖ = √n,  ‖bₙ‖ = √(n(n + 1)(2n + 1)/6),  ⟨a, bₙ⟩ = n(n + 1)/2
cos ∠(a, bₙ) = (n(n + 1)/2) / (√n · √(n(n + 1)(2n + 1)/6)) = √( (n²(n + 1)²/4) / (n²(n + 1)(2n + 1)/6) ) = √( (3/2) · (n + 1)/(2n + 1) )
lim_{n→+∞} cos ∠(a, bₙ) = √3/2,  lim_{n→+∞} ∠(a, bₙ) = π/6
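A numerical check with a = (1, …, 1) and bₙ = (1, 2, …, n) confirms both the closed form and the limit angle π/6 (since cos(π/6) = √3/2):

```python
# cos of the angle between a = (1,...,1) and b_n = (1,2,...,n).
import math

def cos_angle(n):
    a = [1.0] * n
    b = [float(k) for k in range(1, n + 1)]
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(n) * math.sqrt(sum(x * x for x in b)))

for n in (10, 1000, 10**5):
    closed_form = math.sqrt(1.5 * (n + 1) / (2 * n + 1))
    assert abs(cos_angle(n) - closed_form) < 1e-7

assert abs(cos_angle(10**5) - math.sqrt(3) / 2) < 1e-3
assert abs(math.acos(math.sqrt(3) / 2) - math.pi / 6) < 1e-12
```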
12.11.7 n. 10 (p. 461)
(a)
⟨a, b⟩ = cos ϑ sin ϑ − cos ϑ sin ϑ = 0
‖a‖² = ‖b‖² = cos² ϑ + sin² ϑ = 1
(b) The system
[ cos ϑ    sin ϑ ] [x]   [x]
[ −sin ϑ   cos ϑ ] [y] = [y]
that is,
[ cos ϑ − 1   sin ϑ     ] [x]   [0]
[ −sin ϑ      cos ϑ − 1 ] [y] = [0]
has the trivial solution as its unique solution if
det [ cos ϑ − 1, sin ϑ; −sin ϑ, cos ϑ − 1 ] ≠ 0
The computation gives
1 + cos² ϑ + sin² ϑ − 2 cos ϑ ≠ 0
2(1 − cos ϑ) ≠ 0
cos ϑ ≠ 1
ϑ ∉ {2kπ : k ∈ Z}
Thus if
ϑ ∈ (2kπ, 2(k + 1)π) for some k ∈ Z
the only vector satisfying the required condition is (0, 0). On the other hand, if ϑ ∈ {2kπ : k ∈ Z}, the coefficient matrix of the above system is the identity matrix, and every vector in R² satisfies the required condition.
12.11.8 n. 11 (p. 461)
Let OABC be a rhombus, which I have taken (without loss of generality) with one vertex in the origin O and with vertex B opposite O. Let
a ≡ →OA (red),  b ≡ →OB (green),  c ≡ →OC (blue)
Since OABC is a parallelogram, the oriented segments AB (blue, dashed) and OC are congruent, and the same is true for CB (red, dashed) and OA. Thus
→AB = c,  →CB = a
From elementary geometry (see exercise 12 of section 4), the intersection point M of the two diagonals OB (green) and AC (violet) is the midpoint of both.
The assumption that OABC is a rhombus is expressed by the equality
‖a‖ = ‖c‖    (rhombus)
The statement to be proved is orthogonality of the diagonals,
⟨→OB, →AC⟩ = 0
that is, since b = a + c and →AC = c − a,
⟨a + c, c − a⟩ = 0
or
‖c‖² − ‖a‖² + ⟨a, c⟩ − ⟨c, a⟩ = 0
and hence (by commutativity)
‖c‖² = ‖a‖²
The last equality is an obvious consequence of (rhombus). As a matter of fact, since norms are nonnegative real numbers, the converse is true, too. Thus a parallelogram has orthogonal diagonals if and only if it is a rhombus.
12.11.9 n. 13 (p. 461)
The equality to be proved is straightforward. The "law of cosines" is often called
Theorem 3 (Carnot) In every triangle, the square of each side is the sum of the squares of the other two sides, minus twice their product multiplied by the cosine of the angle they form.
The equality under examination can be readily interpreted according to the theorem's statement, since in every parallelogram ABCD with a = →AB and b = →AC the diagonal vector →CB is equal to a − b, so that the triangle ABC has side vectors a, b, and a − b. The theorem specializes to Pythagoras' theorem when a ⊥ b.
12.11.10 n. 17 (p. 461)
(a) That the function
Rⁿ → R,  a ↦ ∑_{i∈n} |aᵢ|
is positive can be seen by the same arguments used for the ordinary norm; nonnegativity is obvious, and strict positivity still relies on the fact that a sum of concordant numbers can only be zero if all the addends are zero. Homogeneity is clear, too:
∀a ∈ Rⁿ, ∀α ∈ R,
‖αa‖ = ∑_{i∈n} |αaᵢ| = ∑_{i∈n} |α| |aᵢ| = |α| ∑_{i∈n} |aᵢ| = |α| ‖a‖
Finally, the triangle inequality is much simpler to prove for the present norm (sometimes referred to as "the taxicab norm") than for the euclidean norm:
∀a ∈ Rⁿ, ∀b ∈ Rⁿ,
‖a + b‖ = ∑_{i∈n} |aᵢ + bᵢ| ≤ ∑_{i∈n} (|aᵢ| + |bᵢ|) = ∑_{i∈n} |aᵢ| + ∑_{i∈n} |bᵢ| = ‖a‖ + ‖b‖
(b) The subset of R² to be described is
S ≡ {(x, y) ∈ R² : |x| + |y| = 1} = S₊₊ ∪ S₋₊ ∪ S₋₋ ∪ S₊₋
where
S₊₊ ≡ {(x, y) ∈ R²₊ : x + y = 1}  (red)
S₋₊ ≡ {(x, y) ∈ R₋ × R₊ : −x + y = 1}  (green)
S₋₋ ≡ {(x, y) ∈ R²₋ : −x − y = 1}  (violet)
S₊₋ ≡ {(x, y) ∈ R₊ × R₋ : x − y = 1}  (blue)
Once the lines whose equations appear in the definitions of the four sets above are drawn, it is apparent that S is a square, with sides parallel to the quadrant bisectrices.
[Figure: the square S with vertices (1, 0), (0, 1), (−1, 0), (0, −1)]
(c) The function
f : Rⁿ → R,  a ↦ |∑_{i∈n} aᵢ|
is nonnegative, but not positive (e.g., for n = 2, f(x, −x) = 0 for all x ∈ R). It is homogeneous:
∀a ∈ Rⁿ, ∀α ∈ R,
f(αa) = |∑_{i∈n} αaᵢ| = |α ∑_{i∈n} aᵢ| = |α| |∑_{i∈n} aᵢ| = |α| f(a)
Finally, f is subadditive, too (this is another way of saying that the triangle inequality holds). Indeed,
∀a ∈ Rⁿ, ∀b ∈ Rⁿ,
f(a + b) = |∑_{i∈n} (aᵢ + bᵢ)| = |∑_{i∈n} aᵢ + ∑_{i∈n} bᵢ| ≤ |∑_{i∈n} aᵢ| + |∑_{i∈n} bᵢ| = f(a) + f(b)
12.12 The linear span of a finite set of vectors
12.13 Linear independence
12.14 Bases
12.15 Exercises
12.15.1 n. 1 (p. 467)
x(i − j) + y(i + j) = (x + y, y − x)
(a) x + y = 1 and y − x = 0 yield (x, y) = (1/2, 1/2).
(b) x + y = 0 and y − x = 1 yield (x, y) = (−1/2, 1/2).
(c) x + y = 3 and y − x = −5 yield (x, y) = (4, −1).
(d) x + y = 7 and y − x = 5 yield (x, y) = (1, 6).
12.15.2 n. 3 (p. 467)
I            2x + y = 2
II           −x + 2y = −11
III          x − y = 7
I + III      3x = 9
II + III     y = −4
III (check)  3 + 4 = 7
The solution is (x, y) = (3, −4).
12.15.3 n. 5 (p. 467)
(a) If there exists some α ∈ R ∖ {0} such that∗ a = αb, then the linear combination 1a − αb is nontrivial and it generates the null vector; a and b are linearly dependent.
(b) The argument is best formulated by counterposition, proving that if a and b are linearly dependent, then they are parallel. Let a and b nontrivially generate the null vector: αa + βb = 0 (according to the definition, at least one of α and β is nonzero; but in the present case they are both nonzero, since a and b have been assumed both different from the null vector; indeed, αa + βb = 0 with α = 0 and β ≠ 0 implies b = 0, and αa + βb = 0 with β = 0 and α ≠ 0 implies a = 0). Thus αa = −βb, and hence a = −(β/α)b (or b = −(α/β)a, if you prefer).
12.15.4 n. 6 (p. 467)
Linear independence of the vectors (a, b) and (c, d) has been seen in the lectures (proof of Steinitz' theorem, part 1) to be equivalent to the nonexistence of nontrivial solutions to the system
ax + cy = 0
bx + dy = 0
The system is linear and homogeneous, with unknowns and equations in equal number (hence part 2 of the proof of Steinitz' theorem does not apply). I argue by the principle of counterposition. If a nontrivial solution (x, y) exists, both (a, c) and (b, d) must be proportional to (y, −x) (see exercise 10, section 8), and hence to each other. Then, from (a, c) = h(b, d) it is immediate to derive ad − bc = 0. The converse is immediate, too.
12.15.5 n. 7 (p. 467)
By the previous exercise, it is enough to require
(1 + t)² − (1 − t)² ≠ 0
4t ≠ 0
t ≠ 0
12.15.6 n. 8 (p. 467)
(a) The linear combination
1i + 1j + 1k + (−1)(i + j + k)
is nontrivial and spans the null vector.
(b) Since, for every (α, β, γ) ∈ R³,
αi + βj + γk = (α, β, γ)
it is clear that
∀(α, β, γ) ∈ R³, (αi + βj + γk = 0) ⇒ α = β = γ = 0
so that the triple (i, j, k) is linearly independent.
(c) Similarly, for every (α, β, γ) ∈ R³,
αi + βj + γ(i + j + k) = (α + γ, β + γ, γ)
and hence from
αi + βj + γ(i + j + k) = 0
it follows that
α + γ = 0,  β + γ = 0,  γ = 0
that is,
α = β = γ = 0
showing that the triple (i, j, i + j + k) is linearly independent.
(d) The last argument can be repeated almost verbatim for the triples (i, i + j + k, k) and (i + j + k, j, k), taking into account that
αi + β(i + j + k) + γk = (α + β, β, β + γ)
α(i + j + k) + βj + γk = (α, α + β, α + γ)
∗ Even if parallelism is defined more broadly (see the footnote in exercise 9 of section 4), α cannot be zero in the present case, because both a and b have been assumed different from the null vector.
12.15.7 n. 10 (p. 467)
(a) Again from the proof of Steinitz' theorem, part 1, consider the system
I    x + y + z = 0
II       y + z = 0
III         3z = 0
It is immediate to derive that its unique solution is the trivial one, and hence that the given triple is linearly independent.
(b) We need to consider the following two systems:
I    x + y + z = 0        I    x + y + z = 0
II       y + z = 1        II       y + z = 0
III         3z = 0        III         3z = 1
It is again immediate that the unique solutions to the systems are (−1, 1, 0) and (0, −1/3, 1/3) respectively.
(c) The system to study is the following:
I    x + y + z = 2
II       y + z = −3
III         3z = 5
III          z = 5/3
(↑) → II     y = −14/3
(↑) → I      x = 5
(d) For an arbitrary triple (a, b, c), the system
I    x + y + z = a
II       y + z = b
III         3z = c
has the (unique) solution (a − b, b − c/3, c/3). Thus the given triple spans R³, and it is linearly independent (as seen at (a)).
12.15.8 n. 12 (p. 467)
(a) Let xa + yb + zc = 0, that is,
x + z = 0
x + y + z = 0
x + y = 0
y = 0
then y, x, and z are in turn seen to be equal to 0, from the fourth, third, and first equation, in that order. Thus (a, b, c) is a linearly independent triple.
(b) Any nontrivial linear combination d of the given vectors makes (a, b, c, d) a linearly dependent quadruple. For example, d ≡ a + b + c = (2, 3, 2, 1).
(c) Let e ≡ (−1, −1, 1, 3), and suppose xa + yb + zc + te = 0, that is,
x + z − t = 0
x + y + z − t = 0
x + y + t = 0
y + 3t = 0
Then subtracting the first equation from the second gives y = 0, and hence t, x, and z are in turn seen to be equal to 0, from the fourth, third, and first equation, in that order. Thus (a, b, c, e) is linearly independent.
(d) The coordinates of x with respect to {a, b, c, e}, which has just been seen to be a basis of R⁴, are given as the solution (s, t, u, v) to the following system:
s + u − v = 1
s + t + u − v = 2
s + t + v = 3
t + 3v = 4
It is more direct and orderly to work just with the table formed by the (extended) system matrix (A | x), and to perform elementary row operations and column exchanges.
 s  t  u  v | x
 1  0  1 −1 | 1
 1  1  1 −1 | 2
 1  1  0  1 | 3
 0  1  0  3 | 4
Exchanging columns 1 and 3, and replacing row 2 by row 2 minus row 1:
 u  t  s  v | x
 1  0  1 −1 | 1
 0  1  0  0 | 1
 0  1  1  1 | 3
 0  1  0  3 | 4
Exchanging columns 2 and 3, and rows 2 and 3:
 u  s  t  v | x
 1  1  0 −1 | 1
 0  1  1  1 | 3
 0  0  1  0 | 1
 0  0  1  3 | 4
Replacing row 4 by row 4 minus row 3:
 u  s  t  v | x
 1  1  0 −1 | 1
 0  1  1  1 | 3
 0  0  1  0 | 1
 0  0  0  3 | 3
Thus the system has been given the following form:
u + s − v = 1
s + t + v = 3
t = 1
3v = 3
which is easily solved from the bottom to the top: (s, t, u, v) = (1, 1, 1, 1).
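Reading the four vectors off the systems above (a = (1,1,1,0), b = (0,1,1,1), c = (1,1,0,0), e = (−1,−1,1,3), x = (1,2,3,4)), the solution can be confirmed directly:

```python
# Verify that x = 1*a + 1*b + 1*c + 1*e for the basis vectors of the exercise.
a = (1, 1, 1, 0)
b = (0, 1, 1, 1)
c = (1, 1, 0, 0)
e = (-1, -1, 1, 3)
x = (1, 2, 3, 4)

s, t, u, v = 1, 1, 1, 1
combination = tuple(s*ai + t*bi + u*ci + v*ei
                    for ai, bi, ci, ei in zip(a, b, c, e))
assert combination == x
```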
12.15.9 n. 13 (p. 467)
(a)
α(√3, 1, 0) + β(1, √3, 1) + γ(0, 1, √3) = (0, 0, 0)
⇔
I    √3α + β = 0
II   α + √3β + γ = 0
III  β + √3γ = 0
⇔
I                √3α + β = 0
√3II − III − I   β = 0
III              β + √3γ = 0
⇔ (α, β, γ) = (0, 0, 0)
and the three given vectors are linearly independent.
(b)
α(√2, 1, 0) + β(1, √2, 1) + γ(0, 1, √2) = (0, 0, 0)
⇔
I    √2α + β = 0
II   α + √2β + γ = 0
III  β + √2γ = 0
This time the sum of equations I and III, multiplied by √2, is the same as twice equation II, and a linear dependence in the equation system suggests that it may well have nontrivial solutions. Indeed, (√2, −2, √2) is such a solution. Thus the given triple of vectors is linearly dependent.
(c)
α(t, 1, 0) + β(1, t, 1) + γ(0, 1, t) = (0, 0, 0)
⇔
I    tα + β = 0
II   α + tβ + γ = 0
III  β + tγ = 0
It is clear that t = 0 makes the triple linearly dependent (the first and third vectors coincide in this case). Let us suppose, then, t ≠ 0. From I and III, as already noticed, I deduce that α = γ. Equations II and III then become
tβ + 2γ = 0
β + tγ = 0
a 2 × 2 homogeneous system whose coefficient matrix has determinant t² − 2. Such a system has nontrivial solutions for t ∈ {√2, −√2}. In conclusion, the given triple is linearly dependent for t ∈ {0, √2, −√2}.
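Part (c) can also be settled with a single determinant: the triple is dependent exactly when det = t³ − 2t vanishes, which happens at t ∈ {0, √2, −√2}. A short sketch:

```python
# The three vectors are dependent iff the 3x3 determinant vanishes.
import math

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def M(t):
    # rows are the three vectors (t,1,0), (1,t,1), (0,1,t)
    return [(t, 1, 0), (1, t, 1), (0, 1, t)]

# det = t**3 - 2t, zero exactly at t in {0, sqrt(2), -sqrt(2)}
for t in (0.0, math.sqrt(2), -math.sqrt(2)):
    assert abs(det3(M(t))) < 1e-12
for t in (1.0, -1.0, 2.0):
    assert abs(det3(M(t)) - (t**3 - 2*t)) < 1e-12
    assert abs(t**3 - 2*t) > 0.5          # nonzero away from the three roots
```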
12.15.10 n. 14 (p. 468)
Call as usual u, v, w, and z the four vectors given in each case, in the order.
(a) It is clear that v = u + w, so that v can be dropped. Moreover, every linear combination of u and w has the form (x, y, x, y), and cannot be equal to z. Thus (u, w, z) is a maximal linearly independent triple.
(b) Notice that
(1/2)(u + z) = e⁽¹⁾,  (1/2)(u − v) = e⁽²⁾,  (1/2)(v − w) = e⁽³⁾,  (1/2)(w − z) = e⁽⁴⁾
Since (u, v, w, z) spans the four canonical vectors, it is a basis of R⁴. Thus (u, v, w, z) is maximal linearly independent.
(c) Similarly,
u − v = e⁽¹⁾,  v − w = e⁽²⁾,  w − z = e⁽³⁾,  z = e⁽⁴⁾
and (u, v, w, z) is maximal linearly independent.
12.15.11 n. 15 (p. 468)
(a) Since the triple (a, b, c) is linearly independent,
α(a + b) + β(b + c) + γ(a + c) = 0
⇔ (α + γ)a + (α + β)b + (β + γ)c = 0
⇔
I    α + γ = 0
II   α + β = 0
III  β + γ = 0
⇔
I + II − III    2α = 0
−I + II + III   2β = 0
I − II + III    2γ = 0
and the triple (a + b, b + c, a + c) is linearly independent, too.
(b) On the contrary, choosing the nontrivial coefficient triple (α, β, γ) ≡ (1, 1, −1),
(a − b) + (b + c) − (a + c) = 0
it is seen that the triple (a − b, b + c, a + c) is linearly dependent.
12.15.12 n. 17 (p. 468)
Let a ≡ (0, 1, 1) and b ≡ (1, 1, 1); I look for two possible alternative choices of a vector c ≡ (x, y, z) such that the triple (a, b, c) is linearly independent. Since
αa + βb + γc = (β + γx, α + β + γy, α + β + γz)
my choice of x, y, and z must be such as to make the following conditional statement true for each (α, β, γ) ∈ R³:
I    β + γx = 0
II   α + β + γy = 0
III  α + β + γz = 0
⇒ α = β = γ = 0
Subtracting equation III from equation II, I obtain
γ(y − z) = 0
Thus any choice of c with y ≠ z makes γ = 0 a consequence of II − III; in such a case, I yields β = 0 (independently of the value assigned to x), and then either II or III yields α = 0. Conversely, if y = z, equations II and III are the same, and the system formed by I, II, III has infinitely many nontrivial solutions
α = −γy − β,  β = −γx,  γ free
provided either x or y (hence z) is different from zero.
As an example, possible choices for c are (0, 0, 1), (0, 1, 0), (1, 0, 1), (1, 1, 0).
12.15.13 n. 18 (p. 468)
The first example of a basis containing the two given vectors is in point (c) of exercise 14 of this section, since the vectors u and v there coincide with the present ones. Keeping the same notation, a second example is (u, v, w + z, w − z).
12.15.14 n. 19 (p. 468)
(a) It is enough to prove that each element of T belongs to lin S, since each element of lin T is a linear combination of elements of T, and lin S, as any subspace of a vector space, is closed with respect to the formation of linear combinations. Let u, v, and w be the three elements of S (in the given order), and let a and b be the two elements of T (still in the given order); it is then readily checked that
a = u + w,  b = 2w
(b) The converse inclusion holds as well. Indeed,
v = a − b,  w = (1/2)b,  u = v + w = a − (1/2)b
Thus
lin S = lin T
Similarly, if c and d are the two elements of U,
c = u + v,  d = u + 2v
which proves that lin U ⊆ lin S. Inverting the above formulas,
u = 2c − d,  v = d − c
It remains to be established whether or not w is an element of lin U. Notice that
αc + βd = (α + β, 2α + 3β, 3α + 5β)
It follows that w is an element of lin U if and only if there exists (α, β) ∈ R² such that
I    α + β = 1
II   2α + 3β = 0
III  3α + 5β = −1
The above system has the unique solution (3, −2), which proves that lin S ⊆ lin U. In conclusion,
lin S = lin T = lin U
12.15.15 n. 20 (p. 468)
(a) The claim has already been proved in the last exercise, since from A ⊆ B I can infer A ⊆ lin B, and hence lin A ⊆ lin B.
(b) By the last result, from A ∩ B ⊆ A and A ∩ B ⊆ B I infer
lin(A ∩ B) ⊆ lin A,  lin(A ∩ B) ⊆ lin B
which yields
lin(A ∩ B) ⊆ lin A ∩ lin B
(c) It is enough to define
A ≡ {a, b},  B ≡ {a + b}
where the couple (a, b) is linearly independent. Indeed,
B ⊆ lin A
and hence, by part (a) of the last exercise,
lin B ⊆ lin lin A = lin A,  lin A ∩ lin B = lin B
On the other hand,
A ∩ B = ∅,  lin(A ∩ B) = {0}
12.16 The vector space Vₙ(C) of n-tuples of complex numbers
12.17 Exercises
Chapter 13
APPLICATIONS OF VECTOR ALGEBRA TO ANALYTIC GEOMETRY
13.1 Introduction
13.2 Lines in n-space
13.3 Some simple properties of straight lines
13.4 Lines and vector-valued functions
13.5 Exercises
13.5.1 n. 1 (p. 477)
A direction vector for the line L is →PQ = (4, 0), showing that L is horizontal. Thus a point belongs to L if and only if its second coordinate is equal to 1. Among the given points, (b), (d), and (e) belong to L.
13.5.2 n. 2 (p. 477)
A direction vector for the line L is v ≡ (1/2)→PQ = (−2, 1). The parametric equations for L are
x = 2 − 2t
y = −1 + t
If t = 1 I get point (a) (the origin). Points (b), (d) and (e) have the second coordinate equal to 1, which requires t = 2. This gives x = −2, showing that of the three points only (e) belongs to L. Finally, point (c) does not belong to L, because y = 2 requires t = 3, which yields x = −4 ≠ 1.
13.5.3 n. 3 (p. 477)
The parametric equations for L are
x = −3 +h
y = 1 −2h
z = 1 + 3h
The following points belong to L:
(c) (h = 1) (d) (h = −1) (e) (h = 5)
13.5.4 n. 4 (p. 477)
The parametric equations for L are
x = −3 + 4h
y = 1 + h
z = 1 + 6h
The following points belong to L:
(b) (h = −1)  (e) (h = 1/2)  (f) (h = 1/3)
13.5.5 n. 5 (p. 477)
I solve each case in a different way.
(a)
→PQ = (2, 0, −2),  →QR = (−1, −2, 2)
The two vectors are not parallel, hence the three points do not belong to the same line.
(b) Testing affine dependence,
I            2h + 2k + 3l = 0
II           −2h + 3k + l = 0
III          −6h + 4k + l = 0
I − 2II + III   2l = 0
I + II          5k + 4l = 0
2I − III        10h + 5l = 0
the only combination equal to the null vector is the trivial one. The three points do not belong to the same line.
(c) The line through P and R has equations
x = 2 + 3k
y = 1 − 2k
z = 1
Trying to solve for k with the coordinates of Q, I get k = −4/3 from the first equation and k = −1 from the second; Q does not belong to L(P, R).
13.5.6 n. 6 (p. 477)
The question is easy, but it must be answered by following some orderly path, in order to achieve some economy of thought and computation (there are C(8, 2) = 28 different oriented segments joining two of the eight given points). First, all the eight points have their third coordinate equal to 1, and hence they belong to the plane π of equation z = 1. We can concentrate only on the first two coordinates and, as a matter of fact, we can get a very good hint on the situation by drawing a two-dimensional picture, to be considered as a picture of π.
[Figure: the eight points plotted in the plane z = 1, with both axes ranging from −15 to 15]
Since the first two components of →AB are (4, −2), I check that among the two-dimensional projections of the oriented segments connecting A with points D to H,
p_xy →AD = (−4, 2),  p_xy →AE = (−1, 1),  p_xy →AF = (−6, 3)
p_xy →AG = (−15, 8),  p_xy →AH = (12, −7)
only the first and third are parallel to p_xy →AB. Thus all elements of the set P₁ ≡ {A, B, C, D, F} belong to the same (black) line L_ABC, and hence no two of them can both belong to a different line. Therefore, it only remains to be checked whether or not the lines through the couples of points (E, G) (red), (G, H) (blue), (H, E) (green) coincide, and whether or not any elements of P₁ belong to them. Direction vectors for these three lines are
(1/2) p_xy →EG = (−7, 4),  (1/3) p_xy →GH = (9, −5),  p_xy →HE = (−13, 7)
and as normal vectors for them I may take
n_EG ≡ (4, 7),  n_GH ≡ (5, 9),  n_HE ≡ (7, 13)
By requiring point E to belong to the first and last, and point H to the second, I end up with their equations as follows:
L_EG: 4x + 7y = 11,  L_GH: 5x + 9y = 16,  L_HE: 7x + 13y = 20
All these three lines are definitely not parallel to L_ABC, hence each of them intersects it at exactly one point. It is seen by direct inspection that, within the set P₁, C belongs to L_EG, F belongs to L_GH, and neither A, nor B, nor D belongs to L_HE.
Thus there are three (maximal) sets of at least three collinear points, namely
P₁ ≡ {A, B, C, D, F},  P₂ ≡ {C, E, G},  P₃ ≡ {F, G, H}
13.5.7 n. 7 (p. 477)
The coordinates of the intersection point are determined by the following equation system:
I    1 + h = 2 + 3k
II   1 + 2h = 1 + 8k
III  1 + 3h = 13k
Subtracting the third equation from the sum of the first two, I get k = 1; substituting this value in any of the three equations, I get h = 4. The three equations are consistent, and the two lines intersect at the point of coordinates (5, 9, 13).
13.5.8 n. 8 (p. 477)
(a) The coordinates of the intersection point of L(P; a) and L(Q; b) are determined by the vector equation
P + ha = Q + kb    (13.1)
which gives
P − Q = kb − ha
that is,
→PQ ∈ span{a, b}
13.5.9 n. 9 (p. 477)
X(t) = (1 + t, 2 − 2t, 3 + 2t)
(a)
d(t) ≡ ‖Q − X(t)‖² = (2 − t)² + (1 + 2t)² + (−2 − 2t)² = 9t² + 8t + 9
(b) The graph of the function t ↦ d(t) ≡ 9t² + 8t + 9 is a parabola with the point (t₀, d(t₀)) = (−4/9, 65/9) as vertex. The minimum squared distance is 65/9, and the minimum distance is √65/3.
(c)
X(t₀) = (5/9, 26/9, 19/9)
Q − X(t₀) = (22/9, 1/9, −10/9)
⟨Q − X(t₀), A⟩ = (22 − 2 − 20)/9 = 0
that is, the point on L of minimum distance from Q is the orthogonal projection of Q on L.
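Here Q = (3, 3, 1), as implied by the expression for d(t) (an inference, since the exercise statement is not reproduced). Exact arithmetic confirms the minimum and the orthogonality of the residual:

```python
# Minimum-distance check for the line X(t) = (1+t, 2-2t, 3+2t) and Q = (3,3,1).
from fractions import Fraction as F

Q = (3, 3, 1)                    # inferred from d(t) in the text (assumption)

def X(t):
    return (1 + t, 2 - 2*t, 3 + 2*t)

def dist_sq(t):
    return sum((q - x)**2 for q, x in zip(Q, X(t)))

t0 = F(-4, 9)
assert dist_sq(t0) == F(65, 9)
# nearby parameter values give strictly larger squared distance
assert dist_sq(t0 + F(1, 10)) > F(65, 9)
assert dist_sq(t0 - F(1, 10)) > F(65, 9)
# the residual Q - X(t0) is orthogonal to the direction vector (1, -2, 2)
r = tuple(q - x for q, x in zip(Q, X(t0)))
assert sum(a * b for a, b in zip(r, (1, -2, 2))) == 0
```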
13.5.10 n. 10 (p. 477)
(a) Let A ≡ (α, β, γ), P ≡ (λ, µ, ν) and Q ≡ (ρ, σ, τ). The points of the line L through P with direction vector →OA are represented in parametric form (the generic point of L is denoted X(t)). Then

X(t) ≡ P + At = (λ + αt, µ + βt, ν + γt)

f(t) ≡ ‖Q − X(t)‖² = ‖Q − P − At‖²
 = ‖Q‖² + ‖P‖² + ‖A‖² t² − 2 ⟨Q, P⟩ + 2 ⟨P − Q, A⟩ t
 = at² + bt + c

where

a ≡ ‖A‖² = α² + β² + γ²
b/2 ≡ ⟨P − Q, A⟩ = α(λ − ρ) + β(µ − σ) + γ(ν − τ)
c ≡ ‖Q‖² + ‖P‖² − 2 ⟨Q, P⟩ = ‖P − Q‖² = (λ − ρ)² + (µ − σ)² + (ν − τ)²

The above quadratic polynomial has a second degree term with a positive coefficient; its graph is a parabola with vertical axis and vertex in the point of coordinates

(−b/2a, −∆/4a) = ( ⟨Q − P, A⟩/‖A‖² , [‖A‖² ‖P − Q‖² − ⟨P − Q, A⟩²]/‖A‖² )
66 Applications of vector algebra to analytic geometry
Thus the minimum value of this polynomial is achieved at

t₀ ≡ ⟨Q − P, A⟩/‖A‖² = [α(ρ − λ) + β(σ − µ) + γ(τ − ν)] / (α² + β² + γ²)

and it is equal to

[‖A‖² ‖P − Q‖² − ‖A‖² ‖P − Q‖² cos² ϑ] / ‖A‖² = ‖P − Q‖² sin² ϑ

where

ϑ ≡ ∠(→OA, →QP)

is the angle formed by the direction vector of the line and the vector carrying the point Q not on L to the point P on L.
(b)

Q − X(t₀) = Q − (P + At₀) = (Q − P) − [⟨Q − P, A⟩/‖A‖²] A
⟨Q − X(t₀), A⟩ = ⟨Q − P, A⟩ − [⟨Q − P, A⟩/‖A‖²] ⟨A, A⟩ = 0
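The general formula for t₀ can be sanity-checked against the data of exercise 9 above, reading A = (1, −2, 2) off the parametric equations and Q = (3, 3, 1) off part (a). A plain-Python sketch (function names mine):

```python
from fractions import Fraction

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def t0(A, P, Q):
    # t0 = <Q - P, A> / ||A||^2, the minimizer derived above
    QP = tuple(q - p for q, p in zip(Q, P))
    return Fraction(dot(QP, A), dot(A, A))

# Data of exercise 13.5.9: the minimizer there was t0 = -4/9.
print(t0((1, -2, 2), (1, 2, 3), (3, 3, 1)))
```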
13.5.11 n. 11 (p. 477)
The vector equation for the coordinates of an intersection point of L(P; a) and
L(Q; a) is
P +ha = Q+ka (13.2)
It gives
P −Q = (h −k) a
and there are two cases: either

→PQ ∉ span {a}

and equation (13.2) has no solution (the two lines are parallel), or

→PQ ∈ span {a}

that is,

∃c ∈ R, Q − P = ca

and equation (13.2) is satisfied by all couples (h, k) such that k − h = c (the two lines intersect in infinitely many points, i.e., they coincide). The two expressions P + ha and Q + ka are seen to provide alternative parametrizations of the same line. For each given point of L, the transformation h ↦ k(h) ≡ h + c identifies the parameter change which allows one to shift from the first parametrization to the second.
13.5.12 n. 12 (p. 477)
13.6 Planes in euclidean n-spaces
13.7 Planes and vector-valued functions
13.8 Exercises
13.8.1 n. 2 (p. 482)
We have:
→PQ = (2, 2, 3)    →PR = (2, −2, −1)
so that the parametric equations of the plane are
x = 1 + 2s + 2t (13.3)
y = 1 + 2s −2t
z = −1 + 3s −t
(a) Equating (x, y, z) to (2, 2, 1/2) in (13.3) we get

2s + 2t = 1
2s − 2t = 1
3s − t = 3/2

yielding s = 1/2 and t = 0, so that the point with coordinates (2, 2, 1/2) belongs to the plane.
(b) Similarly, equating (x, y, z) to (4, 0, 3/2) in (13.3) we get

2s + 2t = 3
2s − 2t = −1
3s − t = 1/2

yielding s = 1/2 and t = 1, so that the point with coordinates (4, 0, 3/2) belongs to the plane.
(c) Again, proceeding in the same way with the triple (−3, 1, −1), we get
2s + 2t = −4
2s −2t = 0
3s −t = −2
yielding s = t = −1, so that the point with coordinates (−3, 1, −1) belongs to the
plane.
(d) The point of coordinates (3, 1, 5) does not belong to the plane, because the system
2s + 2t = 2
2s −2t = 0
3s −t = 4
is inconsistent (from the ﬁrst two equations we get s = t = 1, contradicting the third
equation).
(e) Finally, the point of coordinates (0, 0, 2) does not belong to the plane, because the system

2s + 2t = −1
2s − 2t = −1
3s − t = 1

is inconsistent (the first two equations yield s = −1/2 and t = 0, contradicting the third equation).
13.8.2 n. 3 (p. 482)
(a)
x = 1 +t
y = 2 +s +t
z = 1 + 4t
(b)
u = (1, 2, 1) −(0, 1, 0) = (1, 1, 1)
v = (1, 1, 4) −(0, 1, 0) = (1, 0, 4)
x = s +t
y = 1 +s
z = s + 4t
13.8.3 n. 4 (p. 482)
(a) Solving for s and t with the coordinates of the first point, I get

I s − 2t + 1 = 0
II s + 4t + 2 = 0
III 2s + t = 0
I + II − III t + 3 = 0
2III − I − II 2s − 3 = 0
check on I 3/2 + 6 + 1 ≠ 0
and (0, 0, 0) does not belong to the plane M. The second point is directly seen to
belong to M from the parametric equations
(1, 2, 0) +s (1, 1, 2) +t (−2, 4, 1) (13.4)
Finally, with the third point I get
I s − 2t + 1 = 2
II s + 4t + 2 = −3
III 2s + t = −3
I + II − III t + 3 = 2
2III − I − II 2s − 3 = −5
check on I −1 + 2 + 1 = 2
check on II −1 − 4 + 2 = −3
check on III −2 − 1 = −3
and (2, −3, −3) belongs to M.
(b) The answer has already been given by writing equation (13.4):
P ≡ (1, 2, 0) a = (1, 1, 2) b = (−2, 4, 1)
13.8.4 n. 5 (p. 482)
This exercise is a replica of material already presented in class and inserted in the
notes.
(a) If p + q + r = 1, then

pP + qQ + rR = P − (1 − p) P + qQ + rR
 = P − (q + r) P + qQ + rR
 = P + q (Q − P) + r (R − P)

and the coordinates of pP + qQ + rR satisfy the parametric equations of the plane M through P, Q, and R.
(b) If S is a point of M, there exist real numbers q and r such that
S = P +q (Q−P) +r (R −P)
Then, deﬁning p ≡ 1 −(q +r),
S = (1 −q −r) P +qQ+rR
= pP +qQ +rR
13.8.5 n. 6 (p. 482)
I use three diﬀerent methods for the three cases.
(a) The parametric equations for the first plane π₁ are

I x = 2 + 3h − k
II y = 3 + 2h − 2k
III z = 1 + h − 3k

Eliminating the parameter h (I − II − III) I get

4k = x − y − z + 2

Eliminating the parameter k (I + II − III) I get

4h = x + y − z − 4

Substituting in III (or, better, in 4 · III),

4z = 4 + x + y − z − 4 − 3x + 3y + 3z − 6

I obtain a cartesian equation for π₁

x − 2y + z + 3 = 0
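One can verify, for a sample of parameter values, that the points of π₁ indeed satisfy the cartesian equation just found (a plain-Python sketch with exact arithmetic):

```python
from fractions import Fraction
import random

random.seed(0)
for _ in range(100):
    # random rational parameter values
    h = Fraction(random.randint(-30, 30), random.randint(1, 7))
    k = Fraction(random.randint(-30, 30), random.randint(1, 7))
    x = 2 + 3 * h - k
    y = 3 + 2 * h - 2 * k
    z = 1 + h - 3 * k
    # every point of pi_1 should satisfy x - 2y + z + 3 = 0
    assert x - 2 * y + z + 3 == 0
print("ok")
```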
(b) I consider the four coefficients (a, b, c, d) of the cartesian equation of π₂ as unknown, and I require that the three given points belong to π₂; I obtain the system

2a + 3b + c + d = 0
−2a − b − 3c + d = 0
4a + 3b − c + d = 0

which I solve by elimination
2 3 1 1
−2 −1 −3 1
4 3 −1 1

(second row ← ½(second row + first row); third row ← third row − 2(first row))

2 3 1 1
0 1 −1 1
0 −3 −3 −1

(third row ← ½(third row + 3(second row)))

2 3 1 1
0 1 −1 1
0 0 −3 1
From the last equation (which reads −3c + d = 0) I assign values 1 and 3 to c and d, respectively; from the second (which reads b − c + d = 0) I get b = −2, and from the first (which is unchanged) I get a = 1. The cartesian equation of π₂ is

x − 2y + z + 3 = 0

(notice that π₁ and π₂ coincide)
(c) This method requires the vector product (called cross product by Apostol), which is presented in the subsequent section; however, I have thought it better to show its application here already. The plane π₃ and the given plane being parallel, they have the same normal direction. Such a normal is

n = (2, 0, −2) × (1, 1, 1) = (2, −4, 2)

Thus π₃ coincides with π₂, since it has the same normal and has a common point with it. At any rate (just to show how the method applies in general), a cartesian equation for π₃ is

x − 2y + z + d = 0

and the coefficient d is determined by the requirement that π₃ contains (2, 3, 1)

2 − 6 + 1 + d = 0

yielding

x − 2y + z + 3 = 0
13.8.6 n. 7 (p. 482)
(a) Only the ﬁrst two points belong to the given plane.
(b) I assign the values −1 and 1 to y and z in the cartesian equation of the plane M, in order to obtain the third coordinate of a point of M, and I obtain P = (1, −1, 1) (I may have as well taken one of the first two points of part a). Any two vectors which are orthogonal to the normal n = (3, −5, 1) and form a linearly independent couple can be chosen as direction vectors for M. Thus I assign values arbitrarily to d₂ and d₃ in the orthogonality condition

3d₁ − 5d₂ + d₃ = 0

say, (0, 3) and (3, 0), and I get

d′ = (−1, 0, 3)    d″ = (5, 3, 0)

The parametric equations for M are

x = 1 − s + 5t
y = −1 + 3t
z = 1 + 3s
13.8.7 n. 8 (p. 482)
The question is formulated in a slightly insidious way (perhaps Tom did it on purpose...), because the parametric equations of the two planes M and M′ are written using the same names for the two parameters. This may easily lead to an incorrect attempt to solve the problem by setting up a system of three equations in the two unknowns s and t, which is likely to have no solutions, thereby suggesting the wrong answer that M and M′ are parallel. On the contrary, the equation system for the coordinates of points in M ∩ M′ has four unknowns:

I 1 + 2s − t = 2 + h + 3k
II 1 − s = 3 + 2h + 2k
III 1 + 3s + 2t = 1 + 3h + k
(13.5)
I take all the variables on the left-hand side, and the constant terms on the right-hand one, and I proceed by elimination:

s t h k | const.
2 −1 −1 −3 | 1
−1 0 −2 −2 | 2
3 2 −3 −1 | 0

(first row ← first row + 2(second row); third row ← third row + 3(second row); first row ↔ second row)

s t h k | const.
−1 0 −2 −2 | 2
0 −1 −5 −7 | 5
0 2 −9 −7 | 6

(third row ← third row + 2(second row))

s t h k | const.
−1 0 −2 −2 | 2
0 −1 −5 −7 | 5
0 0 −19 −21 | 16
A handy solution to the last equation (which reads −19h − 21k = 16) is obtained by assigning values 8 and −8 to h and k, respectively. This is already enough to get the first point from the parametric equation of M′, which is apparent (though incomplete) from the right-hand side of (13.5). Thus Q = (−14, 3, 17). However, just to check on the computations, I proceed to finish up the elimination, which leads to t = 11 from the second row, and to s = −2 from the first. Substituting in the left-hand side of (13.5), which is a trace of the parametric equations of M, I do get indeed (−14, 3, 17). Another handy solution to the equation corresponding to the last row of the final elimination table is (h, k) = (−2/5, −2/5). This gives R = (2/5, 7/5, −3/5), and the check by means of (s, t) = (−2/5, −1/5) is all right.
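The parameter pairs found above can be checked by evaluating both parametrizations (a plain-Python sketch; M and M′ are as in (13.5)):

```python
def M(s, t):
    return (1 + 2 * s - t, 1 - s, 1 + 3 * s + 2 * t)

def M1(h, k):  # M' of the text
    return (2 + h + 3 * k, 3 + 2 * h + 2 * k, 1 + 3 * h + k)

# (s, t) = (-2, 11) and (h, k) = (8, -8) should both give Q = (-14, 3, 17).
print(M(-2, 11), M1(8, -8))
```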
13.8.8 n. 9 (p. 482)
(a) A normal vector for M is
n = (1, 2, 3) × (3, 2, 1) = (−4, 8, −4)
whereas the coefficient vector in the equation of M′ is (1, −2, 1). Since the latter is parallel to the former, and the coordinates of the point P ∈ M do not satisfy the equation of M′, the two planes are parallel.
(b) A cartesian equation for M is

x − 2y + z + d = 0

From substitution of the coordinates of P, the coefficient d is seen to be equal to 3. The coordinates of the points of the intersection line L ≡ M ∩ M″ satisfy the system

x − 2y + z + 3 = 0
x + 2y + z = 0

By sum and subtraction, the line L can be represented by the simpler system

2x + 2z = −3
4y = 3

as the intersection of two different planes π and π′, with π parallel to the y-axis, and π′ parallel to the xz-plane. Coordinates of points on L are now easy to produce, e.g., Q = (−3/2, 3/4, 0) and R = (0, 3/4, −3/2)
13.8.9 n. 10 (p. 483)
The parametric equations for the line L are
x = 1 + 2r
y = 1 −r
z = 1 + 3r
and the parametric equations for the plane M are
x = 1 + 2s
y = 1 +s +t
z = −2 + 3s +t
The coordinates of a point of intersection between L and M must satisfy the system
of equations
2r −2s = 0
r +s +t = 0
3r −3s −t = −3
The first equation yields r = s, from which the third and second equations give t = 3, r = s = −3/2. Thus L ∩ M consists of a single point, namely, the point of coordinates (−2, 5/2, −7/2).
13.8.10 n. 11 (p. 483)
The parametric equations for L are
x = 1 + 2t
y = 1 −t
z = 1 + 3t
and a direction vector for L is v = (2, −1, 3).
(a) L is parallel to the given plane (which I am going to denote π_a) if v is a linear combination of (2, 1, 3) and (3/4, 1, 1). Thus I consider the system

2x + (3/4) y = 2
x + y = −1
3x + y = 3

where subtraction of the second equation from the third gives 2x = 4, hence x = 2; this yields y = −3 in the last two equations, contradicting the first.
Alternatively, computing the determinant of the matrix having v, a, b as columns

det [ (2, 2, 3/4); (−1, 1, 1); (3, 3, 1) ] = det [ (4, 0, −5/4); (−1, 1, 1); (6, 0, −2) ] = det [ (4, −5/4); (6, −2) ] = −1/2

shows that {v, a, b} is a linearly independent set.
With either reasoning, it is seen that L is not parallel to π_a.
(b) By computing two independent directions for the plane π_b, e.g., →PQ = (3, 5, 2) − (1, 1, −2) and →PR = (2, 4, −1) − (1, 1, −2), I am reduced to the previous case. The system to be studied now is

2x + y = 2
4x + 3y = −1
4x + y = 3

and the three equations are again inconsistent, because subtraction of the third equation from the second gives y = −2, whereas subtraction of the first from the third yields x = 1/2, these two values contradicting all equations. L is not parallel to π_b.
(c) Since a normal vector to π_c has for components the coefficients of the unknowns in the equation of π_c, it suffices to check orthogonality between v and (1, 2, 3):

⟨(2, −1, 3), (1, 2, 3)⟩ = 9 ≠ 0

L is not parallel to π_c.
13.8.11 n. 12 (p. 483)
Let R be any point of the given plane π, other than P or Q. A point S then belongs to M if and only if there exist real numbers q and r such that

S = P + q (Q − P) + r (R − P) (13.6)
If S belongs to the line through P and Q, there exists a real number p such that
S = P +p (Q−P)
Then, by deﬁning
q ≡ p r ≡ 0
it is immediately seen that condition (13.6) is satisﬁed.
13.8.12 n. 13 (p. 483)
Since every point of L belongs to M, M contains the points having coordinates equal
to (1, 2, 3), (1, 2, 3) +t (1, 1, 1) for each t ∈ R, and (2, 3, 5). Choosing, e.g., t = 1, the
parametric equations for M are
x = 1 +r +s
y = 2 +r +s
z = 3 +r + 2s
It is possible to eliminate the two parameters at once, by subtracting the first equation from the second, obtaining

x − y + 1 = 0
13.8.13 n. 14 (p. 483)
Let d be a direction vector for the line L, and let Q ≡ (x_Q, y_Q, z_Q) be a point of L. A plane containing L and the point P ≡ (x_P, y_P, z_P) is the plane π through Q with direction vectors d and →QP

x = x_Q + h d₁ + k (x_P − x_Q)
y = y_Q + h d₂ + k (y_P − y_Q)
z = z_Q + h d₃ + k (z_P − z_Q)
L belongs to π because the parametric equation of π reduces to that of L when k is
assigned value 0, and P belongs to π because the coordinates of P are obtained from
the parametric equation of π when h is assigned value 0 and k is assigned value 1.
Since any two distinct points on L and P determine a unique plane, π is the only
plane containing L and P.
13.9 The cross product
13.10 The cross product expressed as a determinant
13.11 Exercises
13.11.1 n. 1 (p. 487)
(a) A×B = −2i + 3j −k.
(b) B ×C = 4i −5j + 3k.
(c) C ×A = 4i −4j + 2k.
(d) A× (C ×A) = 8i + 10j + 4k.
(e) (A×B) ×C = 8i + 3j −7k.
(f) A× (B ×C) = 10i + 11j + 5k.
(g) (A×C) ×B = −2i −8j −12k.
(h) (A+B) × (A−C) = 2i −2j.
(i) (A×B) × (A×C) = −2i + 4k.
13.11.2 n. 2 (p. 487)
(a) A×B/‖A×B‖ = (−4i + 3j + k)/√26 or −A×B/‖A×B‖ = (4i − 3j − k)/√26.
(b) A×B/‖A×B‖ = (−41i − 18j + 7k)/√2054 or −A×B/‖A×B‖ = (41i + 18j − 7k)/√2054.
(c) A×B/‖A×B‖ = (−i − 2j − k)/√6 or −A×B/‖A×B‖ = (i + 2j + k)/√6.
13.11.3 n. 3 (p. 487)
(a)

→AB × →AC = (2, −2, −3) × (3, 2, −2) = (10, −5, 10)
area ABC = ½ ‖→AB × →AC‖ = 15/2

(b)

→AB × →AC = (3, −6, 3) × (3, −1, 0) = (3, 9, 15)
area ABC = ½ ‖→AB × →AC‖ = 3√35/2
(c)

→AB × →AC = (0, 1, 1) × (1, 0, 1) = (1, 1, −1)
area ABC = ½ ‖→AB × →AC‖ = √3/2
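The three cross products and areas can be recomputed with a small helper (plain Python; `cross` and `norm_sq` are my names):

```python
def cross(a, b):
    # component formula for the vector product in R^3
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm_sq(v):
    return sum(x * x for x in v)

# (a), (b), (c): cross product and its squared norm
# (the area is half the norm, so the squared area is norm_sq / 4)
for u, v in [((2, -2, -3), (3, 2, -2)),
             ((3, -6, 3), (3, -1, 0)),
             ((0, 1, 1), (1, 0, 1))]:
    w = cross(u, v)
    print(w, norm_sq(w))
```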
13.11.4 n. 4 (p. 487)
→CA × →AB = (−i + 2j − 3k) × (2j + k) = 8i + j − 2k
13.11.5 n. 5 (p. 487)
Let a ≡ (l, m, n) and b ≡ (p, q, r) for the sake of notational simplicity. Then

‖a × b‖² = (mr − nq)² + (np − lr)² + (lq − mp)²
 = m²r² + n²q² − 2mnrq + n²p² + l²r² − 2lnpr + l²q² + m²p² − 2lmpq
‖a‖² ‖b‖² = (l² + m² + n²)(p² + q² + r²)
 = l²p² + l²q² + l²r² + m²p² + m²q² + m²r² + n²p² + n²q² + n²r²

(‖a × b‖ = ‖a‖ ‖b‖) ⇔ (‖a × b‖² = ‖a‖² ‖b‖²)
 ⇔ (lp + mq + nr)² = 0
 ⇔ ⟨a, b⟩ = 0
13.11.6 n. 6 (p. 487)
(a)

⟨a, b + c⟩ = ⟨a, b × a⟩ = 0

because b × a is orthogonal to a.
(b)

⟨b, c⟩ = ⟨b, (b × a) − b⟩ = ⟨b, b × a⟩ + ⟨b, −b⟩ = −‖b‖²

because b × a is orthogonal to b. Thus

⟨b, c⟩ < 0    cos ∠bc = −‖b‖/‖c‖ < 0

and ∠bc ∈ (π/2, π]. Moreover, ∠bc = π is impossible, because

‖c‖² = ‖b × a‖² + ‖b‖² − 2 ‖b × a‖ ‖b‖ cos ∠((b × a), b)
 = ‖b × a‖² + ‖b‖² > ‖b‖²

since (a, b) is linearly independent and ‖b × a‖ > 0.
(c) By the formula above, ‖c‖² = 2² + 1², and ‖c‖ = √5.
13.11.7 n. 7 (p. 488)
(a) Since ‖a‖ = ‖b‖ = 1 and ⟨a, b⟩ = 0, by Lagrange's identity (theorem 13.12.f)

‖a × b‖² = ‖a‖² ‖b‖² − ⟨a, b⟩² = 1

so that a × b is a unit vector as well. The three vectors a, b, a × b are mutually orthogonal either by assumption or by the properties of the vector product (theorem 13.12.d-e).
(b) By Lagrange's identity again,

‖c‖² = ‖a × b‖² ‖a‖² − ⟨a × b, a⟩² = 1

(c) And again,

‖(a × b) × b‖² = ‖a × b‖² ‖b‖² − ⟨a × b, b⟩² = 1
‖(a × b) × a‖² = ‖c‖² = 1

Since the direction which is orthogonal to (a × b) and to b is spanned by a, (a × b) × b is either equal to a or to −a. Similarly, (a × b) × a is either equal to b or to −b. Both the right-hand rule and the left-hand rule yield now

(a × b) × a = b    (a × b) × b = −a
(d)

(a × b) × a = ( (a₃b₁ − a₁b₃) a₃ − (a₁b₂ − a₂b₁) a₂ ,
               (a₁b₂ − a₂b₁) a₁ − (a₂b₃ − a₃b₂) a₃ ,
               (a₂b₃ − a₃b₂) a₂ − (a₃b₁ − a₁b₃) a₁ )
            = ( (a₂² + a₃²) b₁ − a₁ (a₃b₃ + a₂b₂) ,
               (a₁² + a₃²) b₂ − a₂ (a₁b₁ + a₃b₃) ,
               (a₁² + a₂²) b₃ − a₃ (a₁b₁ + a₂b₂) )

Since a and b are orthogonal,

(a × b) × a = ( (a₂² + a₃²) b₁ + a₁ (a₁b₁) ,
               (a₁² + a₃²) b₂ + a₂ (a₂b₂) ,
               (a₁² + a₂²) b₃ + a₃ (a₃b₃) )

Since a is a unit vector,

(a × b) × a = (b₁, b₂, b₃)

The proof that (a × b) × b = −a is identical.
13.11.8 n. 8 (p. 488)
(a) From a × b = 0, either there exists some h ∈ R such that a = hb, or there exists some k ∈ R such that b = ka. Then either ⟨a, b⟩ = h ‖b‖² or ⟨a, b⟩ = k ‖a‖², that is, either h ‖b‖² = 0 or k ‖a‖² = 0. In the first case, either h = 0 (which means a = 0), or ‖b‖ = 0 (which is equivalent to b = 0). In the second case, either k = 0 (which means b = 0), or ‖a‖ = 0 (which is equivalent to a = 0). Geometrically, suppose that a ≠ 0. Then both the projection [⟨a, b⟩/‖a‖²] a of b along a and the projecting vector (of length ‖b × a‖/‖a‖) are null, which can only happen if b = 0.
(b) From the previous point, the hypotheses a ≠ 0, a × (b − c) = 0, and ⟨a, b − c⟩ = 0 imply that b − c = 0.
13.11.9 n. 9 (p. 488)
(a) Let ϑ ≡ ∠ab, and observe that a and c are orthogonal. From ‖a × b‖ = ‖a‖ ‖b‖ sin ϑ, in order to satisfy the condition

a × b = c

the vector b must be orthogonal to c, and its norm must depend on ϑ according to the relation

‖b‖ = ‖c‖ / (‖a‖ sin ϑ) (13.7)

Thus ‖c‖/‖a‖ ≤ ‖b‖ < +∞. In particular, b can be taken orthogonal to a as well, in which case it has to be equal to (a × c)/‖a‖² or to (c × a)/‖a‖². Thus two solutions to the problem are

±(7/9, −8/9, −11/9)
(b) Let (p, q, r) be the coordinates of b; the conditions a × b = c and ⟨a, b⟩ = 1 are:

I −2q − r = 3
II 2p − 2r = 4
III p + 2q = −1
IV 2p − q + 2r = 1

Standard manipulations yield

2II + 2IV + III 9p = 9
substituting into II 2 − 2r = 4
substituting into I −2q + 1 = 3
check on IV 2 + 1 − 2 = 1
check on III 1 − 2 = −1

that is, the unique solution is

b = (1, −1, −1)
The solution to this exercise given by Apostol at page 645 is wrong.
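Reading a = (2, −1, 2) and c = (3, 4, −1) off the four scalar equations above (these values are inferred from the system, since the solution does not restate them), the answer b = (1, −1, −1) can be checked directly (plain-Python sketch):

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a = (2, -1, 2)    # inferred from equations I-IV
c = (3, 4, -1)    # inferred from the right-hand sides
b = (1, -1, -1)   # the solution found above

print(cross(a, b), dot(a, b))  # should be c and 1
```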
13.11.10 n. 10 (p. 488)
Replacing b with c in the first result of exercise 7,

(a × c) × a = c

Therefore, by skew-symmetry of the vector product, it is seen that c × a is a solution to the equation

a × x = c (13.8)

However, c × a is orthogonal to a, and hence it does not meet the additional requirement

⟨a, x⟩ = 1 (13.9)

We are bound to look for other solutions to (13.8). If x and z both solve (13.8),

a × (x − z) = a × x − a × z = c − c = 0

which implies that x − z is parallel to a. Thus the set of all solutions to equation (13.8) is

{x ∈ R³ : ∃α ∈ R, x = c × a + αa}

From condition (13.9),

⟨a, c × a + αa⟩ = 1

that is,

α = 1/‖a‖²

Thus the unique solution to the problem is

b ≡ c × a + a/‖a‖²
13.11.11 n. 11 (p. 488)
(a) Let

u ≡ →AB = (−2, 1, 0)
v ≡ →BC = (3, −2, 1)
w ≡ →CA = (−1, 1, −1)

Each side of the triangle ABC can be one of the two diagonals of the parallelogram to be determined.
If one of the diagonals is BC, the other is AD, where →AD = →AB + →AC = u − w. In this case

D = A + u − w = "B + C − A" = (0, 0, 2)

If one of the diagonals is CA, the other is BE, where →BE = →BC + →BA = v − u. In this case

E = B + v − u = "A + C − B" = (4, −2, 2)

If one of the diagonals is AB, the other is CF, where →CF = →CA + →CB = w − v. In this case

F = C + w − v = "A + B − C" = (−2, 2, 0)
(b)

u × w = (1 · (−1) − 0 · 1, −((−2) · (−1) − 0 · (−1)), (−2) · 1 − 1 · (−1)) = (−1, −2, −1)

‖u × w‖ = √6    area(ABC) = ½ ‖u × w‖ = √6/2
13.11.12 n. 12 (p. 488)
b + c = 2 (a × b) − 2b
⟨a, b + c⟩ = −2 ⟨a, b⟩ = −4
cos ∠ab = ⟨a, b⟩ / (‖a‖ ‖b‖) = 1/2
‖a × b‖² = ‖a‖² ‖b‖² sin² ∠ab = 12
‖c‖² = 4 ‖a × b‖² − 12 ⟨a × b, b⟩ + 9 ‖b‖² = 192
‖c‖ = 8√3
⟨b, c⟩ = 2 ⟨b, a × b⟩ − 3 ‖b‖² = −48
cos ∠bc = ⟨b, c⟩ / (‖b‖ ‖c‖) = −√3/2
13.11.13 n. 13 (p. 488)
(a) It is true that, if the couple (a, b) is linearly independent, then the triple

(a + b, a − b, a × b)

is linearly independent, too. Indeed, in the first place a and b are nonnull, since every n-tuple having the null vector among its components is linearly dependent. Secondly, a × b is nonnull, too, because (a, b) is linearly independent. Third, a × b is orthogonal to both a + b and a − b (as well as to any vector in lin {a, b}), because it is orthogonal to a and b; in particular, a × b ∉ lin {a, b}, since the only vector which is orthogonal to itself is the null vector. Fourth, suppose that

x (a + b) + y (a − b) + z (a × b) = 0 (13.10)

Then z cannot be different from zero, since otherwise

a × b = −(x/z)(a + b) − (y/z)(a − b) = −[(x + y)/z] a + [(y − x)/z] b

would belong to lin {a, b}. Thus z = 0, and (13.10) becomes

x (a + b) + y (a − b) = 0

which is equivalent to

(x + y) a + (x − y) b = 0

By linear independence of (a, b),

x + y = 0 and x − y = 0

and hence x = y = 0.
(b) It is true that, if the couple (a, b) is linearly independent, then the triple

(a + b, a + a × b, b + a × b)

is linearly independent, too. Indeed, let (x, y, z) be such that

x (a + b) + y (a + a × b) + z (b + a × b) = 0

which is equivalent to

(x + y) a + (x + z) b + (y + z) a × b = 0

Arguing as in the fourth part of the previous point, y + z must be null, and this in turn implies that both x + y and x + z are also null. The system

x + y = 0
x + z = 0
y + z = 0

has the trivial solution as the only solution.
(c) It is true that, if the couple (a, b) is linearly independent, then the triple

(a, b, (a + b) × (a − b))

is linearly independent, too. Indeed,

(a + b) × (a − b) = a × a − a × b + b × a − b × b = −2 a × b

and the triple (a, b, a × b) is linearly independent when the couple (a, b) is so.
13.11.14 n. 14 (p. 488)
(a) The cross product →AB × →AC equals the null vector if and only if the couple (→AB, →AC) is linearly dependent. In such a case, if →AC is null, then A and C coincide and it is clear that the line through them and B contains all the three points. Otherwise, if →AC is nonnull, then in the nontrivial null combination

x →AB + y →AC = 0

x must be nonnull, yielding

→AB = −(y/x) →AC

that is,

B = A − (y/x) →AC

which means that B belongs to the line through A and C.
(b) By the previous point, the set

{P : →AP × →BP = 0}

is the set of all points P such that A, B, and P belong to the same line, that is, the set of all points P belonging to the line through A and B.
13.11.15 n. 15 (p. 488)
(a) From the assumption (p × b) + p = a,

⟨b, p × b⟩ + ⟨b, p⟩ = ⟨b, a⟩
⟨b, p⟩ = 0

that is, p is orthogonal to b. This gives

‖p × b‖ = ‖p‖ ‖b‖ = ‖p‖ (13.11)

and hence

1 = ‖a‖² = ‖p × b‖² + ‖p‖² = 2 ‖p‖²

that is, ‖p‖ = √2/2.
(b) Since p, b, and p × b are pairwise orthogonal, (p, b, p × b) is linearly independent, and {p, b, p × b} is a basis of R³.
(c) Since p is orthogonal both to p × b (definition of vector product) and to b (point a), there exists some h ∈ R such that

(p × b) × b = hp

Thus, taking into account (13.11),

|h| ‖p‖ = ‖p × b‖ ‖b‖ |sin π/2| = ‖p‖

and h ∈ {1, −1}. Since the triples (p, b, p × b) and (p × b, p, b) define the same orientation, and (p × b, b, p) defines the opposite one, h = −1.
(d) Still from the assumption (p × b) + p = a,

⟨p, p × b⟩ + ⟨p, p⟩ = ⟨p, a⟩
⟨p, a⟩ = ‖p‖² = 1/2
p × (p × b) + p × p = p × a
‖p × a‖ = ‖p‖² ‖b‖ = 1/2
Thus

p = ½ a + ½ q (13.12)

where q = 2p − a is a vector in the plane π generated by p and a, which is orthogonal to a. Since a normal vector to π is b, there exists some k ∈ R such that

q = k (b × a)

Now

‖q‖² = 4 ‖p‖² + ‖a‖² − 4 ‖p‖ ‖a‖ cos ∠pa = 2 + 1 − 4 · (√2/2) · (√2/2) = 1
‖q‖ = 1    |k| = 1

If on the plane orthogonal to b the mapping u ↦ b × u rotates counterclockwise, the vectors a = p + b × p, b × p, and b × a form angles of π/4, π/2, and 3π/4, respectively, with p. On the other hand, the decomposition of p obtained in (13.12) requires that the angle formed with p by q is −π/4, so that q is discordant with b × a. It follows that k = −1. I have finally obtained

p = ½ a − ½ (b × a)
13.12 The scalar triple product
13.13 Cramer’s rule for solving systems of three linear equations
13.14 Exercises
13.15 Normal vectors to planes
13.16 Linear cartesian equations for planes
13.17 Exercises
13.17.1 n. 1 (p. 496)
(a) Out of the infinitely many vectors satisfying the requirement, a distinguished one is

n ≡ (2i + 3j − 4k) × (j + k) = 7i − 2j + 2k

(b)

⟨n, (x, y, z)⟩ = 0    or    7x − 2y + 2z = 0

(c)

⟨n, (x, y, z)⟩ = ⟨n, (1, 2, 3)⟩    or    7x − 2y + 2z = 9
13.17.2 n. 2 (p. 496)
(a)

n/‖n‖ = (1/3, 2/3, −2/3)

(b) The three intersection points are:

X-axis: (−7, 0, 0)    Y-axis: (0, −7/2, 0)    Z-axis: (0, 0, 7/2)

(c) The distance from the origin is 7/3.
(d) Intersecting with π the line through the origin which is directed by the normal to π

x = h
y = 2h
z = −2h
x + 2y − 2z + 7 = 0

yields

h + 4h + 4h + 7 = 0
h = −7/9
(x, y, z) = (−7/9, −14/9, 14/9)
13.17.3 n. 3 (p. 496)
A cartesian equation for the plane which passes through the point P ≡ (1, 2, −3) and
is parallel to the plane of equation
3x −y + 2z = 4
is
3 (x −1) −(y −2) + 2 (z + 3) = 0
or
3x −y + 2z + 5 = 0
The distance between the two planes is

|4 − (−5)| / √(9 + 1 + 4) = 9√14/14
13.17.4 n. 4 (p. 496)
π₁: x + 2y − 2z = 5
π₂: 3x − 6y + 3z = 2
π₃: 2x + y + 2z = −1
π₄: x − 2y + z = 7

(a) π₂ and π₄ are parallel because n₂ = 3n₄; π₁ and π₃ are orthogonal because ⟨n₁, n₃⟩ = 0.
(b) The straight line through the origin having direction vector n₄ has equations

x = t
y = −2t
z = t

and intersects π₂ and π₄ at points C and D, respectively, which can be determined by the equations

3t_C − 6(−2t_C) + 3t_C = 2
t_D − 2(−2t_D) + t_D = 7

Thus t_C = 1/9, t_D = 7/6, →CD = (7/6 − 1/9) n₄, and

‖→CD‖ = (19/18) ‖n₄‖ = (19/18) √6

Alternatively, rewriting the equation of π₄ with normal vector n₂

3x − 6y + 3z = 21

dist(π₂, π₄) = |d₂ − d₄| / ‖n₂‖ = 19/√54
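The two ways of computing the distance agree numerically (a plain-Python sketch):

```python
import math

# First way: |CD| = (19/18) * ||n4||, with n4 = (1, -2, 1), ||n4|| = sqrt(6)
d1 = (19 / 18) * math.sqrt(6)

# Second way: |d2 - d4| / ||n2||, with the two planes written as
# 3x - 6y + 3z = 2 and 3x - 6y + 3z = 21, so ||n2|| = sqrt(54)
d2 = abs(2 - 21) / math.sqrt(54)

print(d1, d2)
```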
13.17.5 n. 5 (p. 496)
(a) A normal vector for the plane π through the points P ≡ (1, 1, −1), Q ≡ (3, 3, 2), R ≡ (3, −1, −2) is

n ≡ →PQ × →QR = (2, 2, 3) × (0, −4, −4) = (4, 8, −8)

(b) A cartesian equation for π is

x + 2y − 2z = 5

(c) The distance of π from the origin is 5/3.
13.17.6 n. 6 (p. 496)
Proceeding as in points (a) and (b) of the previous exercise, a normal vector for the plane through the points P ≡ (1, 2, 3), Q ≡ (2, 3, 4), R ≡ (−1, 7, −2) is

n ≡ →PQ × →RQ = (1, 1, 1) × (3, −4, 6) = (10, −3, −7)

and a cartesian equation for it is

10x − 3y − 7z + 17 = 0
13.17.7 n. 8 (p. 496)
A normal vector to the plane is given by any direction vector for the given line
n ≡ (2, 4, 12) −(1, 2, 3) = (1, 2, 9)
A cartesian equation for the plane is

(x − 2) + 2 (y − 3) + 9 (z + 7) = 0

or

x + 2y + 9z = −55
13.17.8 n. 9 (p. 496)
A direction vector for the line is just the normal vector of the plane. Thus the
parametric equations are
x = 2 + 4h y = 1 −3h z = −3 +h
13.17.9 n. 10 (p. 496)
(a) The position of the point at time t can be written as follows:
x = 1 −t y = 2 −3t z = −1 + 2t
which are just the parametric equations of a line.
(b) The direction vector of the line L is
d = (−1, −3, 2)
(c) The hitting instant is determined by substituting the coordinates of the moving
point into the equation of the given plane π
2 (1 −t) + 3 (2 −3t) + 2 (−1 + 2t) + 1 = 0
−7t + 7 = 0
yielding t = 1. Hence the hitting point is (0, −1, 1).
(d) At time t = 3, the moving point has coordinates (−2, −7, 5). Substituting,
2 · (−2) + 3 · (−7) + 2 · 5 +d = 0
d = 15
A cartesian equation for the plane π′ which is parallel to π and contains (−2, −7, 5) is

2x + 3y + 2z + 15 = 0
(e) At time t = 2, the moving point has coordinates (−1, −4, 3). A normal vector for the plane π″ which is orthogonal to L and contains (−1, −4, 3) is −d = (1, 3, −2). Thus

1 · (−1) + 3 · (−4) − 2 · 3 + d = 0
d = 19

A cartesian equation for π″ is

(x + 1) + 3 (y + 4) − 2 (z − 3) = 0

or

x + 3y − 2z + 19 = 0
13.17.10 n. 11 (p. 496)
From

∠ni = π/3    ∠nj = π/4    ∠nk = π/3

we get

n/‖n‖ = ½ (1, √2, 1)

A cartesian equation for the plane in consideration is

(x − 1) + √2 (y − 1) + (z − 1) = 0

or

x + √2 y + z = 2 + √2
13.17.11 n. 13 (p. 496)
First I find a vector of arbitrary norm which satisfies the given conditions:

l + 2m − 3n = 0
l − m + 5n = 0

Assigning value 1 to n, the other values are easily obtained: (l, m) = (−7/3, 8/3). Then the required vector is

(−7/√122, 8/√122, 3/√122)
13.17.12 n. 14 (p. 496)
A normal vector for the plane π which is parallel to both vectors i +j and j +k is
n ≡ (i +j) × (j +k) = i −j +k
Since the intercept of π with the Xaxis is (2, 0, 0), a cartesian equation for π is
x −y +z = 2
13.17.13 n. 15 (p. 496)
I work directly on the coefficient matrix for the equation system:

3 1 1 | 5
3 1 5 | 7
1 −1 3 | 3

With obvious manipulations

0 4 −8 | −4
0 0 4 | 2
1 −1 3 | 3

I obtain a unique solution (x, y, z) = (3/2, 0, 1/2).
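The solution can be verified by substitution into the original system (exact-arithmetic sketch):

```python
from fractions import Fraction

x, y, z = Fraction(3, 2), Fraction(0), Fraction(1, 2)

# the three left-hand sides should give 5, 7, 3
print(3 * x + y + z, 3 * x + y + 5 * z, x - y + 3 * z)
```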
13.17.14 n. 17 (p. 497)
If the line ℓ under consideration is parallel to the two given planes, which have as normal vectors n₁ ≡ (1, 2, 3) and n₂ ≡ (2, 3, 4), a direction vector for ℓ is

d ≡ (2, 3, 4) × (1, 2, 3) = (1, −2, 1)

Since ℓ goes through the point P ≡ (1, 2, 3), parametric equations for ℓ are the following:

x = 1 + t    y = 2 − 2t    z = 3 + t
13.17.15 n. 20 (p. 497)
A cartesian equation for the plane under consideration is
2x −y + 2z +d = 0
The condition of equal distance from the point P ≡ (3, 2, −1) yields

|6 − 2 − 2 + d| / 3 = |6 − 2 − 2 + 4| / 3

that is,

|2 + d| = 6

The two solutions of the above equation are d₁ = 4 (corresponding to the plane already given) and d₂ = −8. Thus the required equation is

2x − y + 2z = 8
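Both planes obtained for d are indeed equidistant from P (a plain-Python sketch; `dist` is my name):

```python
from fractions import Fraction

def dist(a, b, c, d, p):
    # distance from p to the plane ax + by + cz + d = 0;
    # here ||(a, b, c)|| = ||(2, -1, 2)|| = 3
    x, y, z = p
    return Fraction(abs(a * x + b * y + c * z + d), 3)

P = (3, 2, -1)
print(dist(2, -1, 2, 4, P), dist(2, -1, 2, -8, P))  # both should be 2
```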
13.18 The conic sections
13.19 Eccentricity of conic sections
13.20 Polar equations for conic sections
13.21 Exercises
13.22 Conic sections symmetric about the origin
13.23 Cartesian equations for the conic sections
13.24 Exercises
13.25 Miscellaneous exercises on conic sections
Chapter 14
CALCULUS OF VECTOR-VALUED FUNCTIONS
Chapter 15
LINEAR SPACES
15.1 Introduction
15.2 The deﬁnition of a linear space
15.3 Examples of linear spaces
15.4 Elementary consequences of the axioms
15.5 Exercises
15.5.1 n. 1 (p. 555)
The set of all real rational functions is a real linear space. Indeed, let P, Q, R, and S be any four real polynomials, and let

f : x ↦ P(x)/Q(x)    g : x ↦ R(x)/S(x)

Then for every two real numbers α and β

αf + βg : x ↦ [α P(x) S(x) + β Q(x) R(x)] / [Q(x) S(x)]

is a well defined real rational function. This shows that the set in question is closed with respect to the two linear space operations of function sum and function multiplication by a scalar. From this the other two existence axioms (of the zero element and of negatives) also follow as particular cases, for (α, β) = (0, 0) and for (α, β) = (0, −1) respectively. Of course it can also be seen directly that the identically null function x ↦ 0 is a rational function, as the quotient of the constant polynomials P : x ↦ 0 and Q : x ↦ 1. The remaining linear space axioms are immediately seen to hold, taking into account the general properties of the operations of function sum and function multiplication by a scalar.
15.5.2 n. 2 (p. 555)
The set of all real rational functions having numerator of degree not exceeding the degree of the denominator is a real linear space. Indeed, taking into account exercise 1, and using the same notation, it only needs to be proved that if deg P ≤ deg Q and deg R ≤ deg S, then

deg [αPS + βQR] ≤ deg QS

This is clear, since for every two real numbers α and β

deg [αPS + βQR] ≤ max {deg PS, deg QR}
deg PS = deg P + deg S ≤ deg Q + deg S
deg QR = deg Q + deg R ≤ deg Q + deg S

so that the closure axioms hold. It may also be noticed that the degree of both polynomials P and Q occurring in the representation of the identically null function as a rational function is zero.
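The degree inequality can be checked on concrete polynomials; the coefficient-list representation and the sample choices of P, Q, R, S below are assumptions for illustration, not part of the text:

```python
# Sanity check: for alpha*P/Q + beta*R/S with deg P <= deg Q and
# deg R <= deg S, the combined numerator alpha*P*S + beta*Q*R has
# degree <= deg(Q*S). Polynomials are coefficient lists, lowest degree first.

def poly_mul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_add(a, b):
    n = max(len(a), len(b))
    a = a + [0.0] * (n - len(a))
    b = b + [0.0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def degree(p):
    # degree of the zero polynomial is taken as -1 here
    d = -1
    for i, c in enumerate(p):
        if abs(c) > 1e-12:
            d = i
    return d

P = [1.0, 2.0]            # 1 + 2t        (deg 1)
Q = [1.0, 0.0, 1.0]       # 1 + t^2       (deg 2)
R = [0.0, 0.0, 3.0]       # 3t^2          (deg 2)
S = [2.0, 0.0, 0.0, 1.0]  # 2 + t^3       (deg 3)
alpha, beta = 5.0, -7.0

num = poly_add([alpha * c for c in poly_mul(P, S)],
               [beta * c for c in poly_mul(Q, R)])
den = poly_mul(Q, S)
assert degree(num) <= degree(den)
```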
15.5.3 n. 3 (p. 555)
The set of all real valued functions which are defined on a fixed domain containing 0 and 1, and which have the same value at 0 and 1, is a real linear space. Indeed, for every two real numbers α and β,

(αf + βg)(0) = αf(0) + βg(0) = αf(1) + βg(1) = (αf + βg)(1)

so that the closure axioms hold. Again, the two existence axioms follow as particular cases; and it is anyway clear that the identically null function x ↦ 0 achieves the same value at 0 and 1. Similarly, that the other linear space axioms hold is a straightforward consequence of the general properties of the operations of function sum and function multiplication by a scalar.
15.5.4 n. 4 (p. 555)
The set of all real valued functions which are defined on a fixed domain containing 0 and 1, and which achieve at 0 the double of the value they achieve at 1, is a real linear space. Indeed, for every two real numbers α and β,

(αf + βg)(0) = αf(0) + βg(0) = 2αf(1) + 2βg(1) = 2(αf + βg)(1)

so that the closure axioms hold. A final remark concerning the other axioms, of a type which has become usual at this point, allows us to conclude.
15.5.5 n. 5 (p. 555)
The set of all real valued functions which are defined on a fixed domain containing 0 and 1, and whose value at 1 exceeds the value at 0 by 1, is not a real linear space. Indeed, let α and β be any two real numbers such that α + β ≠ 1. Then

(αf + βg)(1) = αf(1) + βg(1) = α[1 + f(0)] + β[1 + g(0)]
             = α + β + αf(0) + βg(0) = α + β + (αf + βg)(0)
             ≠ 1 + (αf + βg)(0)

and the two closure axioms fail to hold (the above shows, however, that the set in question is an affine subspace of the real linear space of all real valued functions defined on the same domain). The other failing axioms are: existence of the zero element (the identically null function has the same value at 0 and at 1), and existence of negatives:

f(1) = 1 + f(0) ⇒ (−f)(1) = −1 + (−f)(0) ≠ 1 + (−f)(0)
15.5.6 n. 6 (p. 555)
The set of all real valued step functions which are defined on [0, 1] is a real linear space. Indeed, let f and g be any two such functions, so that for some nonnegative integers m and n, some increasing (m + 1)-tuple (σ_r)_{r∈{0}∪m} of elements of [0, 1] with σ_0 = 0 and σ_m = 1, some increasing (n + 1)-tuple (τ_s)_{s∈{0}∪n} of elements of [0, 1] with τ_0 = 0 and τ_n = 1, some (m + 1)-tuple (γ_r)_{r∈{0}∪m} of real numbers, and some (n + 1)-tuple (δ_s)_{s∈{0}∪n} of real numbers*,

f ≡ γ_0 χ_{{0}} + Σ_{r∈m} γ_r χ_{I_r}        g ≡ δ_0 χ_{{0}} + Σ_{s∈n} δ_s χ_{J_s}

where for each subset C of [0, 1], χ_C is the characteristic function of C

χ_C ≡ x ↦ (1 if x ∈ C, 0 if x ∉ C)
and (I_r)_{r∈m}, (J_s)_{s∈n} are the partitions of (0, 1] associated to (σ_r)_{r∈{0}∪m} and (τ_s)_{s∈{0}∪n}

I_r ≡ (σ_{r−1}, σ_r]   (r ∈ m)
J_s ≡ (τ_{s−1}, τ_s]   (s ∈ n)

Then, for any two real numbers α and β,

αf + βg = (αγ_0 + βδ_0) χ_{{0}} + Σ_{(r,s)∈m×n} (αγ_r + βδ_s) χ_{K_rs}

* I am using here the following slight abuse of notation for degenerate intervals: (σ, σ] ≡ {σ}; this is necessary in order to allow for “point steps”.
where (K_rs)_{(r,s)∈m×n} is the “meet” partition of (I_r)_{r∈m} and (J_s)_{s∈n}

K_rs ≡ I_r ∩ J_s        ((r, s) ∈ m × n)

which shows that αf + βg is a step function too, with no more† than m + n steps on [0, 1].

Thus the closure axioms hold, and a final, usual remark concerning the other linear space axioms applies. It may also be noticed independently that the identically null function [0, 1] → R, x ↦ 0 is indeed a step function, with just one step (m = 1, γ_0 = γ_1 = 0), and that the opposite of a step function is a step function too, with the same number of steps.
15.5.7 n. 7 (p. 555)
The set of all real valued functions which are defined on R and convergent to 0 at +∞ is a real linear space. Indeed, by classical theorems in the theory of limits, for every two such functions f and g, and for any two real numbers α and β,

lim_{x→+∞} (αf + βg)(x) = α lim_{x→+∞} f(x) + β lim_{x→+∞} g(x) = 0

so that the closure axioms hold. The usual remark concerning the other linear space axioms applies. It may be noticed independently that the identically null function R → R, x ↦ 0 indeed converges to 0 at +∞ (and, for that matter, at any x_0 ∈ R ∪ {−∞}).
15.5.8 n. 11 (p. 555)
The set of all real valued and increasing functions of a real variable is not a real linear
space. The ﬁrst closure axiom holds, because the sum of two increasing functions is an
increasing function too. The second closure axiom does not hold, however, because
the function αf is decreasing if f is increasing and α is a negative real number.
The axiom of existence of the zero element holds or fails, depending on whether
monotonicity is meant in the weak or in the strict sense, since the identically null
function is weakly increasing (and, for that matter, weakly decreasing too, as every
constant function) but not strictly increasing. Thus, in the former case, the set of all
real valued and (weakly) increasing functions of a real variable is a convex cone. The
axiom of existence of negatives fails, too. All the other linear space axioms hold, as
in the previous examples.
15.5.9 n. 13 (p. 555)
The set of all real valued functions which are defined and integrable on [0, 1], with the integral over [0, 1] equal to zero, is a real linear space. Indeed, for any two such functions f and g, and any two real numbers α and β,

∫₀¹ (αf + βg)(x) dx = α ∫₀¹ f(x) dx + β ∫₀¹ g(x) dx = 0
† Some potential steps may “collapse” if, for some (r, s) ∈ m × n, σ_{r−1} = τ_{s−1} and αγ_r + βδ_s = αγ_{r−1} + βδ_{s−1}.
15.5.10 n. 14 (p. 555)
The set of all real valued functions which are defined and integrable on [0, 1], with nonnegative integral over [0, 1], is a convex cone, but not a linear space. For any two such functions f and g, and any two nonnegative real numbers α and β,

∫₀¹ (αf + βg)(x) dx = α ∫₀¹ f(x) dx + β ∫₀¹ g(x) dx ≥ 0

It is clear that if α < 0 and ∫₀¹ f(x) dx > 0, then ∫₀¹ (αf)(x) dx < 0, so that axioms 2 and 6 fail to hold.
15.5.11 n. 16 (p. 555)
First solution. The set of all real Taylor polynomials of degree less than or equal to n (including the zero polynomial) is a real linear space, since it coincides with the set of all real polynomials of degree less than or equal to n, which is already known to be a real linear space. The discussion of the above statement is a bit complicated by the fact that nothing is said concerning the point where our Taylor polynomials are to be centered. If the center is taken to be 0, the issue is really easy: every polynomial is the Taylor polynomial centered at 0 of itself (considered, as it is, as a real function of a real variable). This is immediate, if one thinks of the motivation for the definition of Taylor polynomials: best n-degree polynomial approximation of a given function. At any rate, if one takes the standard formula as a definition

Taylor_n (f) at 0 ≡ x ↦ Σ_{j=0}^{n} f^{(j)}(0)/j! · x^j

here are the computations. Let

P : x ↦ Σ_{i=0}^{n} p_i x^{n−i}
Then

P′ : x ↦ Σ_{i=0}^{n−1} (n − i) p_i x^{n−1−i}                          P′(0) = p_{n−1}
P′′ : x ↦ Σ_{i=0}^{n−2} (n − i)(n − (i + 1)) p_i x^{n−2−i}            P′′(0) = 2 p_{n−2}
⋮
P^{(j)} : x ↦ Σ_{i=0}^{n−j} (n − i)!/(n − i − j)! · p_i x^{n−j−i}     P^{(j)}(0) = j! p_{n−j}
⋮
P^{(n−1)} : x ↦ n! p_0 x + (n − 1)! p_1                               P^{(n−1)}(0) = (n − 1)! p_1
P^{(n)} : x ↦ n! p_0                                                  P^{(n)}(0) = n! p_0
and hence

Σ_{j=0}^{n} P^{(j)}(0)/j! · x^j = Σ_{j=0}^{n} (j! p_{n−j})/j! · x^j = Σ_{i=0}^{n} p_i x^{n−i} = P(x)
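The computation above (the Taylor coefficients of a polynomial at 0 reproduce the polynomial itself) can be sketched numerically; the coefficient-list representation, stored lowest degree first rather than the text's p_i x^{n−i} ordering, is an assumption made for the sketch:

```python
# Check: the Taylor polynomial of degree n centered at 0 of a polynomial P
# of degree <= n is P itself. Coefficients are lowest degree first, so
# P(x) = sum(c[j] * x**j), and P^(j)(0) = j! * c[j].
from math import factorial

def derivative(c):
    # coefficients of P'
    return [j * c[j] for j in range(1, len(c))]

def taylor_at_zero(c, n):
    coeffs = []
    d = list(c)
    for j in range(n + 1):
        p_j_at_0 = d[0] if d else 0   # P^(j)(0)
        coeffs.append(p_j_at_0 / factorial(j))
        d = derivative(d)
    return coeffs

P = [5, 0, -3, 2]   # 5 - 3x^2 + 2x^3, degree 3
assert taylor_at_zero(P, 3) == [5.0, 0.0, -3.0, 2.0]
```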
On the other hand, if the Taylor polynomials are meant to be centered at some x_0 ≠ 0

Taylor_n (f) at x_0 ≡ x ↦ Σ_{j=0}^{n} f^{(j)}(x_0)/j! · (x − x_0)^j

it must be shown by more lengthy arguments that for each polynomial P the following holds:

Σ_{j=0}^{n} P^{(j)}(x_0)/j! · (x − x_0)^j = Σ_{i=0}^{n} p_i x^{n−i}
Second solution. The set of all real Taylor polynomials (centered at x_0 ∈ R) of degree less than or equal to n (including the zero polynomial) is a real linear space. Indeed, let P and Q be any two such polynomials; that is, let f and g be two real functions of a real variable which are m and n times differentiable at x_0, and let P ≡ Taylor_m (f) at x_0, Q ≡ Taylor_n (g) at x_0, that is,

P : x ↦ Σ_{i=0}^{m} f^{(i)}(x_0)/i! · (x − x_0)^i        Q : x ↦ Σ_{j=0}^{n} g^{(j)}(x_0)/j! · (x − x_0)^j
Suppose first that m = n. Then for any two real numbers α and β

αP + βQ = Σ_{k=0}^{m} { α [f^{(k)}(x_0)/k!] + β [g^{(k)}(x_0)/k!] } (x − x_0)^k
        = Σ_{k=0}^{m} (αf + βg)^{(k)}(x_0)/k! · (x − x_0)^k
        = Taylor_m (αf + βg) at x_0
Second, suppose (without loss of generality) that m > n. In this case, however,

αP + βQ = Σ_{k=0}^{n} [αf^{(k)}(x_0) + βg^{(k)}(x_0)]/k! · (x − x_0)^k + Σ_{k=n+1}^{m} αf^{(k)}(x_0)/k! · (x − x_0)^k

a polynomial which can be legitimately considered the Taylor polynomial of degree m at x_0 of the function αf + βg only if all the derivatives of g of order from n + 1 to m are null at x_0. This is certainly true if g is itself a polynomial of degree n. In fact, this is true only in such a case, as can be seen by repeated integration. It is hence necessary, in addition, to state and prove the result asserting that each Taylor polynomial of any degree and centered at any x_0 ∈ R can be seen as the Taylor polynomial of itself.
15.5.12 n. 17 (p. 555)
The set S of all solutions of a linear second-order homogeneous differential equation

∀x ∈ (a, b),  y′′(x) + P(x) y′(x) + Q(x) y(x) = 0

where P and Q are given everywhere continuous real functions of a real variable, and (a, b) is some open interval to be determined together with the solution, is a real linear space. First of all, it must be noticed that S is nonempty, and that its elements are indeed real functions which are everywhere defined (that is, with (a, b) = R), due to the main existence and solution continuation theorems in the theory of differential equations. Second, the operator

L : D² → R^R,  y ↦ y′′ + Py′ + Qy

where R^R is the set of all the real functions of a real variable, and D² is the subset of R^R of all the functions having a second derivative which is everywhere defined, is linear:

∀y ∈ D², ∀z ∈ D², ∀α ∈ R, ∀β ∈ R,
L(αy + βz) = (αy + βz)′′ + P(αy + βz)′ + Q(αy + βz)
           = α(y′′ + Py′ + Qy) + β(z′′ + Pz′ + Qz)
           = αL(y) + βL(z)
Third, D² is a real linear space, since

∀y ∈ D², ∀z ∈ D², ∀α ∈ R, ∀β ∈ R,  αy + βz ∈ D²

by standard theorems on the derivative of a linear combination of differentiable functions, and the usual remark concerning the other linear space axioms. Finally, S = ker L is a linear subspace of D², by the following standard argument:

L(y) = 0 ∧ L(z) = 0 ⇒ L(αy + βz) = αL(y) + βL(z) = 0

and the usual remark.
15.5.13 n. 18 (p. 555)
The set of all bounded real sequences is a real linear space. Indeed, for every two real sequences x ≡ (x_n)_{n∈N} and y ≡ (y_n)_{n∈N} and every two real numbers α and β, if x and y are bounded, so that for some positive numbers ε and η and for every n ∈ N the following holds:

|x_n| < ε        |y_n| < η

then the sequence αx + βy is also bounded, since for every n ∈ N

|αx_n + βy_n| ≤ |α| |x_n| + |β| |y_n| < |α| ε + |β| η

Thus the closure axioms hold, and the usual remark concerning the other linear space axioms applies.
15.5.14 n. 19 (p. 555)
The set of all convergent real sequences is a real linear space. Indeed, for every two real sequences x ≡ (x_n)_{n∈N} and y ≡ (y_n)_{n∈N} and every two real numbers α and β, if x and y converge to the limits x̄ and ȳ respectively, then the sequence αx + βy converges to αx̄ + βȳ. Thus the closure axioms hold, and the usual remark concerning the other linear space axioms applies.
15.5.15 n. 22 (p. 555)
The set U of all elements of R³ with their third component equal to 0 is a real linear space. Indeed, under linear combination the third component remains equal to 0; U is the kernel of the linear function R³ → R, (x, y, z) ↦ z.
15.5.16 n. 23 (p. 555)
The set W of all elements of R³ with their second or third component equal to 0 is not a linear subspace of R³. For example, (0, 1, 0) and (0, 0, 1) are in the set, but their sum (0, 1, 1) is not, so the first closure axiom fails. The second closure axiom and the existence of negatives axiom hold, as do the other axioms. W is not an affine subspace, nor is it convex; however, it is a cone (indeed, it is closed under multiplication by arbitrary real numbers).
15.5.17 n. 24 (p. 555)
The set π of all elements of R³ whose second component is equal to the first multiplied by 5 is a real linear space, being the kernel of the linear function R³ → R, (x, y, z) ↦ 5x − y.
15.5.18 n. 25 (p. 555)
The set ℓ of all elements (x, y, z) of R³ such that 3x + 4y = 1 and z = 0 (the line through the point P ≡ (−1, 1, 0) with direction vector v ≡ (4, −3, 0)) is an affine subspace of R³, hence a convex set, but not a linear subspace of R³, hence not a linear space itself. Indeed, for any two triples (x, y, z) and (u, v, w) of R³, and any two real numbers α and β,

3x + 4y = 1,  z = 0,  3u + 4v = 1,  w = 0  ⇒  3(αx + βu) + 4(αy + βv) = α + β,  αz + βw = 0

Thus both closure axioms fail for ℓ. Both existence axioms also fail, since neither the null triple, nor the opposite of any triple in ℓ, belongs to ℓ. The associative and distributive laws have a defective status too, since they make sense in ℓ only under quite restrictive assumptions on the elements of R³ or the real numbers appearing in them.
15.5.19 n. 26 (p. 555)
The set r of all elements (x, y, z) of R³ which are scalar multiples of (1, 2, 3) (the line through the origin having (1, 2, 3) as direction vector) is a real linear space. For any two elements (h, 2h, 3h) and (k, 2k, 3k) of r, and for any two real numbers α and β, the linear combination

α(h, 2h, 3h) + β(k, 2k, 3k) = (αh + βk)(1, 2, 3)

belongs to r. Moreover, r is the kernel of the linear function R³ → R², (x, y, z) ↦ (2x − y, 3x − z).
15.5.20 n. 27 (p. 555)
The set of solutions of the linear homogeneous system of equations

A(x, y, z)ᵀ = 0

is the kernel of the linear function (x, y, z) ↦ A(x, y, z)ᵀ, and hence a real linear space.
15.5.21 n. 28 (p. 555)
The subset of Rⁿ of all the linear combinations of two given vectors a and b is a vector subspace of Rⁿ, namely span {a, b}. It is immediate to check that every linear combination of linear combinations of a and b is again a linear combination of a and b.
15.6 Subspaces of a linear space
15.7 Dependent and independent sets in a linear space
15.8 Bases and dimension
15.9 Exercises
15.9.1 n. 1 (p. 560)
The set S_1 of all elements of R³ with their first coordinate equal to 0 is a linear subspace of R³ (see exercise 22, p. 555, above). S_1 is the coordinate YZ-plane; its dimension is 2. A standard basis for it is {(0, 1, 0), (0, 0, 1)}. It is clear that the two vectors are linearly independent, and that for every two real numbers y and z

(0, y, z) = y(0, 1, 0) + z(0, 0, 1)
15.9.2 n. 2 (p. 560)
The set S_2 of all elements of R³ with the first and second coordinates summing up to 0 is a linear subspace of R³ (S_2 is the plane containing the Z-axis and the bisectrix of the even-numbered quadrants of the XY-plane). A basis for it is {(1, −1, 0), (0, 0, 1)}; its dimension is 2. It is clear that the two vectors are linearly independent, and that for every three real numbers x, y and z such that x + y = 0

(x, y, z) = x(1, −1, 0) + z(0, 0, 1)
15.9.3 n. 3 (p. 560)
The set S_3 of all elements of R³ with the coordinates summing up to 0 is a linear subspace of R³ (S_3 is the plane through the origin with normal vector n ≡ (1, 1, 1)). A basis for it is {(1, −1, 0), (0, −1, 1)}; its dimension is 2. It is clear that the two vectors are linearly independent, and that for every three real numbers x, y and z such that x + y + z = 0

(x, y, z) = x(1, −1, 0) + z(0, −1, 1)
15.9.4 n. 4 (p. 560)
The set S_4 of all elements of R³ with the first two coordinates equal is a linear subspace of R³ (S_4 is the plane containing the Z-axis and the bisectrix of the odd-numbered quadrants of the XY-plane). A basis for it is {(1, 1, 0), (0, 0, 1)}; its dimension is 2. It is clear that the two vectors are linearly independent, and that for every three real numbers x, y and z such that x = y

(x, y, z) = x(1, 1, 0) + z(0, 0, 1)
15.9.5 n. 5 (p. 560)
The set S_5 of all elements of R³ with all the coordinates equal is a linear subspace of R³ (S_5 is the line through the origin with direction vector d ≡ (1, 1, 1)). A basis for it is {d}; its dimension is 1.
15.9.6 n. 6 (p. 560)
The set S_6 of all elements of R³ with the first coordinate equal either to the second or to the third is not a linear subspace of R³ (S_6 is the union of the plane S_4 of exercise 4 and the plane containing the Y-axis and the bisectrix of the odd-numbered quadrants of the XZ-plane). For example, (1, 1, 0) and (1, 0, 1) both belong to S_6, but their sum (2, 1, 1) does not.
15.9.7 n. 7 (p. 560)
The set S_7 of all elements of R³ with the first and second coordinates having identical square is not a linear subspace of R³ (S_7 is the union of the two planes containing the Z-axis and one of the bisectrices of the odd- and even-numbered quadrants of the XY-plane). For example, (1, −1, 0) and (1, 1, 0) both belong to S_7, but their sum (2, 0, 0) and their half-sum (1, 0, 0) do not. S_7 is not an affine subspace of R³, and it is not even convex. However, S_7 is a cone, and it is even closed with respect to the external product (multiplication by arbitrary real numbers).
15.9.8 n. 8 (p. 560)
The set S_8 of all elements of R³ with the first and second coordinates summing up to 1 is not a linear subspace of R³ (S_8 is the vertical plane containing the line through the points P ≡ (1, 0, 0) and Q ≡ (0, 1, 0)). For example, for every α ≠ 1, α(1, 0, 0) and α(0, 1, 0) do not belong to S_8. S_8 is an affine subspace of R³, since

∀(x, y, z) ∈ R³, ∀(u, v, w) ∈ R³, ∀α ∈ R,
(x + y = 1) ∧ (u + v = 1) ⇒ [(1 − α)x + αu] + [(1 − α)y + αv] = 1
15.9.9 n. 9 (p. 560)
See exercise 26, p.555.
15.9.10 n. 10 (p. 560)
The set S_10 of all elements (x, y, z) of R³ such that

x + y + z = 0
x − y − z = 0

is a line containing the origin, a subspace of R³ of dimension 1. A basis for it is {(0, 1, −1)}. If (x, y, z) is any element of S_10, then

(x, y, z) = y(0, 1, −1)
15.9.11 n. 11 (p. 560)
The set S_11 of all polynomials of degree not exceeding n and taking value 0 at 0 is a linear subspace of the set of all polynomials of degree not exceeding n, and hence a linear space. If P is any polynomial of degree not exceeding n,

P : t ↦ p_0 + Σ_{h∈n} p_h t^h        (15.1)

then P belongs to S_11 if and only if p_0 = 0. It is clear that any linear combination of polynomials in S_11 takes value 0 at 0. A basis for S_11 is

B ≡ {t ↦ t^h}_{h∈n}

Indeed, by the general principle of polynomial identity (which is more or less explicitly proved in example 6, p. 558), a linear combination of B is the null polynomial if and only if all its coefficients are null. Moreover, every polynomial belonging to S_11 is a linear combination of B. It follows that dim S_11 = n.
15.9.12 n. 12 (p. 560)
The set S_12 of all polynomials of degree not exceeding n, with first derivative taking value 0 at 0, is a linear subspace of the set of all polynomials of degree not exceeding n. Indeed, for any two such polynomials P and Q, and any two real numbers α and β,

(αP + βQ)′(0) = αP′(0) + βQ′(0) = α · 0 + β · 0 = 0

A polynomial P as in (15.1) belongs to S_12 if and only if p_1 = 0. A basis for S_12 is

B ≡ {(t ↦ t^{h+1})_{h∈n−1}, (t ↦ 1)}

That B is a basis can be seen as in the previous exercise. Thus dim S_12 = n.
15.9.13 n. 13 (p. 560)
The set S_13 of all polynomials of degree not exceeding n, with second derivative taking value 0 at 0, is a linear subspace of the set of all polynomials of degree not exceeding n. This is proved exactly as in the previous exercise (just replace first derivatives with second derivatives). A polynomial P as in (15.1) belongs to S_13 if and only if p_2 = 0. A basis for S_13 is

B ≡ {(t ↦ t^{h+2})_{h∈n−2}, (t ↦ 1), (t ↦ t)}

That B is a basis can be seen as in exercise 11. Thus dim S_13 = n.
15.9.14 n. 14 (p. 560)
The set S_14 of all polynomials of degree not exceeding n, with first derivative taking value at 0 which is the opposite of the polynomial value at 0, is a linear subspace of the set of all polynomials of degree not exceeding n. A polynomial P as in (15.1) belongs to S_14 if and only if p_0 + p_1 = 0. If P and Q are any two polynomials in S_14, and α and β are any two real numbers,

(αP + βQ)(0) + (αP + βQ)′(0) = αP(0) + βQ(0) + αP′(0) + βQ′(0)
                             = α[P(0) + P′(0)] + β[Q(0) + Q′(0)]
                             = α · 0 + β · 0

A basis for S_14 is

B ≡ {t ↦ 1 − t, (t ↦ t^{h+1})_{h∈n−1}}
Indeed, if α is the n-tuple of coefficients of a linear combination R of B, then

R : t ↦ α_1 − α_1 t + Σ_{h∈n−1} α_{h+1} t^{h+1}

By the general principle of polynomial identity, the n-tuple α of coefficients of any linear combination of B spanning the null vector must satisfy the conditions

α_1 = 0        α_{h+1} = 0   (h ∈ n − 1)

that is, the combination must be the trivial one. If P is any polynomial such that p_0 + p_1 = 0, then P belongs to span B, since by setting

α_1 = p_0 = −p_1        α_{h+1} = p_{h+1}   (h ∈ n − 1)

the linear combination of B with α as n-tuple of coefficients is equal to P. It follows that B is a basis of S_14, and hence that dim S_14 = n.
15.9.15 n. 15 (p. 560)
The set S_15 of all polynomials of degree not exceeding n and taking the same value at 0 and at 1 is a linear subspace of the set of all polynomials of degree not exceeding n, and hence a linear space. This can be seen exactly as in exercise 3, p. 555. A polynomial P belongs to S_15 if and only if

p_0 = p_0 + Σ_{h∈n} p_h

that is, if and only if

Σ_{h∈n} p_h = 0

A basis for S_15 is

B ≡ {(t ↦ 1), (t ↦ (1 − t) t^h)_{h∈n−1}}

and the dimension of S_15 is n.
15.9.16 n. 16 (p. 560)
The set S_16 of all polynomials of degree not exceeding n and taking the same value at 0 and at 2 is a linear subspace of the set of all polynomials of degree not exceeding n, and hence a linear space. This can be seen exactly as in exercise 3, p. 555. A polynomial P belongs to S_16 if and only if

p_0 = p_0 + Σ_{h∈n} 2^h p_h

that is, if and only if

Σ_{h∈n} 2^h p_h = 0

A basis for S_16 is

B ≡ {(t ↦ 1), (t ↦ (2 − t) t^h)_{h∈n−1}}

and the dimension of S_16 is n.
15.9.17 n. 22 (p. 560)
(b) It has already been shown in the notes that

lin S ≡ ⋂_{W subspace of V, S ⊆ W} W = { v : ∃n ∈ N, ∃u ≡ (u_i)_{i∈n} ∈ S^n, ∃a ≡ (α_i)_{i∈n} ∈ F^n, v = Σ_{i∈n} α_i u_i }

hence if T is a subspace and it contains S, then T contains lin S.
(c) If S = lin S, then of course S is a subspace of V, because lin S is one. Conversely, if S is a subspace of V, then S is one among the subspaces appearing in the definition of lin S, and hence lin S ⊆ S; since S ⊆ lin S always, it follows that S = lin S.
(e) If S and T are subspaces of V, then by point (c) above lin S = S and lin T = T, and hence

S ∩ T = lin S ∩ lin T

Since the intersection of any family of subspaces is a subspace, S ∩ T is a subspace of V.
(g) Let V ≡ R², S ≡ {(1, 0), (0, 1)}, T ≡ {(1, 0), (1, 1)}. Then

S ∩ T = {(1, 0)}        L(S) = L(T) = R²        L(S ∩ T) = {(x, y) : y = 0}
15.9.18 n. 23 (p. 560)
(a) Let

f : R → R, x ↦ 1
g : R → R, x ↦ e^{ax}
h : R → R, x ↦ e^{bx}
0 : R → R, x ↦ 0

and let (u, v, w) ∈ R³ be such that

uf + vg + wh = 0

In particular, for x = 0, x = 1, and x = 2, we have

u + v + w = 0
u + e^a v + e^b w = 0        (15.2)
u + e^{2a} v + e^{2b} w = 0
The determinant of the coefficient matrix of the above system is

det [1, 1, 1; 1, e^a, e^b; 1, e^{2a}, e^{2b}] = det [1, 1, 1; 0, e^a − 1, e^b − 1; 0, e^{2a} − 1, e^{2b} − 1]
= (e^a − 1)(e^b − 1)[(e^b + 1) − (e^a + 1)]
= (e^a − 1)(e^b − 1)(e^b − e^a)

Since by assumption a ≠ b, at most one of the two numbers can be equal to zero. If none of them is null, system (15.2) has only the trivial solution, the triple (f, g, h) is linearly independent, and dim lin {f, g, h} = 3. If a = 0, then g = f; similarly, if b = 0, then h = f. Thus if either a or b is null then (f, g, h) is linearly dependent (it suffices to take (u, v, w) ≡ (1, −1, 0) or (u, v, w) ≡ (1, 0, −1), respectively).
It is convenient to state and prove the following (very) simple

Lemma 4. Let X be a set containing at least two distinct elements, and let f : X → R be a nonnull constant function. For any function g : X → R, the couple (f, g) is linearly dependent if and only if g is constant, too.

Proof. If g is constant, let Im f ≡ {y_f} and Im g ≡ {y_g}. Then the function y_g f − y_f g is the null function. Conversely, if (f, g) is linearly dependent, let (u, v) ∈ R² \ {(0, 0)} be such that uf + vg is the null function. Thus

∀x ∈ X, u y_f + v g(x) = 0

Since f is nonnull, y_f ≠ 0. This yields

v = 0 ⇒ u = 0

and hence v ≠ 0. Then

∀x ∈ X, g(x) = −(u/v) y_f

and g is constant, too.

It immediately follows from the above lemma (and from the assumption a ≠ b) that if either a = 0 or b = 0, then dim lin {f, g, h} = 2.
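A quick numeric cross-check of the determinant factorization used above (the sample values of a and b are assumed, not from the text):

```python
# Check: the 3x3 determinant of system (15.2) equals
# (e^a - 1)(e^b - 1)(e^b - e^a), so it vanishes only if a = 0 or b = 0
# (given a != b).
from math import exp, isclose

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

for a, b in [(1.0, 2.0), (0.5, -1.0), (0.0, 3.0)]:
    m = [[1, 1, 1],
         [1, exp(a), exp(b)],
         [1, exp(2 * a), exp(2 * b)]]
    factored = (exp(a) - 1) * (exp(b) - 1) * (exp(b) - exp(a))
    assert isclose(det3(m), factored, rel_tol=1e-9, abs_tol=1e-9)
```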
(b) The two functions

f : x ↦ e^{ax}        g : x ↦ x e^{ax}

are linearly independent, since

[∀x ∈ R, α e^{ax} + β x e^{ax} = 0] ⇔ [∀x ∈ R, (α + βx) e^{ax} = 0]
                                   ⇔ [∀x ∈ R, α + βx = 0]
                                   ⇔ α = β = 0

Notice that the argument holds even in the case a = 0. Thus dim lin {f, g} = 2.
(c) Arguing as in point (a), let (u, v, w) ∈ R³ be such that

uf + vg + wh = 0

where

f : x ↦ 1        g : x ↦ e^{ax}        h : x ↦ x e^{ax}

In particular, for x = 0, x = 1, and x = −1 (note that h(0) = 0), we have

u + v = 0
u + e^a v + e^a w = 0        (15.3)
u + e^{−a} v − e^{−a} w = 0

a homogeneous system of equations whose coefficient matrix has determinant

det [1, 1, 0; 1, e^a, e^a; 1, e^{−a}, −e^{−a}] = det [1, 1, 0; 0, e^a − 1, e^a; 0, e^{−a} − 1, −e^{−a}]
= −e^{−a}(e^a − 1) − e^a(e^{−a} − 1)
= e^a + e^{−a} − 2

Thus if a ≠ 0 the above determinant is different from zero (it equals (e^{a/2} − e^{−a/2})² > 0), yielding (u, v, w) = (0, 0, 0). The triple (f, g, h) is linearly independent, and dim lin {f, g, h} = 3. On the other hand, if a = 0 then f = g and h = Id_R, so that dim lin {f, g, h} = 2.
(f) The two functions

f : x ↦ sin x        g : x ↦ cos x

are linearly independent. Indeed, for every two real numbers α and β which are not both null,

∀x ∈ R, α sin x + β cos x = 0
⇕
∀x ∈ R, (α/√(α² + β²)) sin x + (β/√(α² + β²)) cos x = 0
⇕
∀x ∈ R, sin(γ + x) = 0

where

γ ≡ (sign β) arccos(α/√(α² + β²))

Since the sine function is not identically null, the last condition cannot hold. Thus dim lin {x ↦ sin x, x ↦ cos x} = 2.
(h) From the trigonometric addition formulas,

∀x ∈ R, cos 2x = cos²x − sin²x = 1 − 2 sin²x

Let then

f : x ↦ 1        g : x ↦ cos 2x        h : x ↦ sin²x

The triple (f, g, h) is linearly dependent, since

f − g − 2h = 0

By the lemma discussed at point (a), dim lin {f, g, h} = 2.
15.10 Inner products. Euclidean spaces. Norms
15.11 Orthogonality in a euclidean space
15.12 Exercises
15.12.1 n. 9 (p. 567)
⟨u_1, u_2⟩ = ∫_{−1}^{1} t dt = [t²/2]_{−1}^{1} = 1/2 − 1/2 = 0
⟨u_2, u_3⟩ = ∫_{−1}^{1} (t + t²) dt = 0 + [t³/3]_{−1}^{1} = 1/3 − (−1/3) = 2/3
⟨u_3, u_1⟩ = ∫_{−1}^{1} (1 + t) dt = [t]_{−1}^{1} + 0 = 1 − (−1) = 2

‖u_1‖² = ∫_{−1}^{1} 1 dt = 2,  so ‖u_1‖ = √2
‖u_2‖² = ∫_{−1}^{1} t² dt = 2/3,  so ‖u_2‖ = √(2/3)
‖u_3‖² = ∫_{−1}^{1} (1 + 2t + t²) dt = 2 + 2/3 = 8/3,  so ‖u_3‖ = √(8/3)

cos(u_2, u_3) = (2/3) / (√(2/3) √(8/3)) = 1/2        cos(u_1, u_3) = 2 / (√2 √(8/3)) = √3/2

so that the angle between u_2 and u_3 is π/3 and the angle between u_1 and u_3 is π/6.
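The inner products, norms, and angles just computed can be verified exactly, since all integrands are polynomials; the coefficient-list representation of u_1 = 1, u_2 = t, u_3 = 1 + t is an assumption made for the sketch:

```python
# Exact polynomial inner products on [-1, 1]: integral of t^k is 0 for k odd
# and 2/(k+1) for k even. Vectors are coefficient lists, lowest degree first.
from math import acos, isclose, pi, sqrt

def inner(p, q):
    # <p, q> = integral over [-1, 1] of p(t) q(t) dt, computed exactly
    total = 0.0
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            k = i + j
            if k % 2 == 0:            # odd powers integrate to 0
                total += a * b * 2 / (k + 1)
    return total

u1, u2, u3 = [1], [0, 1], [1, 1]
assert isclose(inner(u1, u2), 0.0, abs_tol=1e-12)
assert isclose(inner(u2, u3), 2 / 3)
assert isclose(inner(u3, u1), 2)
angle23 = acos(inner(u2, u3) / sqrt(inner(u2, u2) * inner(u3, u3)))
angle13 = acos(inner(u1, u3) / sqrt(inner(u1, u1) * inner(u3, u3)))
assert isclose(angle23, pi / 3) and isclose(angle13, pi / 6)
```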
15.12.2 n. 11 (p. 567)
(a) Let, for each n ∈ N,

I_n ≡ ∫_0^{+∞} e^{−t} t^n dt

Then

I_0 = ∫_0^{+∞} e^{−t} dt = lim_{x→+∞} ∫_0^x e^{−t} dt = lim_{x→+∞} [−e^{−t}]_0^x = lim_{x→+∞} (−e^{−x} + 1) = 1
and, for each n ∈ N,

I_{n+1} = ∫_0^{+∞} e^{−t} t^{n+1} dt = lim_{x→+∞} ∫_0^x e^{−t} t^{n+1} dt
        = lim_{x→+∞} { [−e^{−t} t^{n+1}]_0^x + (n + 1) ∫_0^x e^{−t} t^n dt }
        = lim_{x→+∞} (−e^{−x} x^{n+1} − 0) + (n + 1) lim_{x→+∞} ∫_0^x e^{−t} t^n dt
        = 0 + (n + 1) ∫_0^{+∞} e^{−t} t^n dt
        = (n + 1) I_n

The integral involved in the definition of I_n is always convergent, since for each n ∈ N

lim_{x→+∞} e^{−x} x^n = 0

It follows that

∀n ∈ N, I_n = n!
Let now

f : t ↦ α_0 + Σ_{i∈m} α_i t^i        g : t ↦ β_0 + Σ_{j∈n} β_j t^j

be two real polynomials, of degree m and n respectively. Then the product fg is a real polynomial of degree m + n, containing for each k ∈ m + n a monomial of degree k of the form α_i t^i β_j t^j = α_i β_j t^{i+j} whenever i + j = k. Thus

fg = α_0 β_0 + Σ_{k∈m+n} ( Σ_{i∈m, j∈n, i+j=k} α_i β_j ) t^k
and the scalar product of f and g results in the sum of m + n + 1 converging integrals

⟨f, g⟩ = ∫_0^{+∞} e^{−t} [ α_0 β_0 + Σ_{k∈m+n} ( Σ_{i∈m, j∈n, i+j=k} α_i β_j ) t^k ] dt
       = α_0 β_0 ∫_0^{+∞} e^{−t} dt + Σ_{k∈m+n} ( Σ_{i∈m, j∈n, i+j=k} α_i β_j ) ∫_0^{+∞} e^{−t} t^k dt
       = α_0 β_0 I_0 + Σ_{k∈m+n} ( Σ_{i∈m, j∈n, i+j=k} α_i β_j ) I_k
       = α_0 β_0 + Σ_{k∈m+n} ( Σ_{i∈m, j∈n, i+j=k} α_i β_j ) k!
(b) If for each n ∈ N

x_n : t ↦ t^n

then for each m ∈ N and each n ∈ N

⟨x_m, x_n⟩ = ∫_0^{+∞} e^{−t} t^m t^n dt = I_{m+n} = (m + n)!
(c) If

f : t ↦ (t + 1)²        g : t ↦ t² + 1

then

fg : t ↦ t⁴ + 2t³ + 2t² + 2t + 1

⟨f, g⟩ = ∫_0^{+∞} e^{−t} (t⁴ + 2t³ + 2t² + 2t + 1) dt
       = I_4 + 2I_3 + 2I_2 + 2I_1 + I_0
       = 4! + 2 · 3! + 2 · 2! + 2 · 1! + 1
       = 43
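Using I_n = n! from part (a), the value 43 reduces to a one-line factorial sum; the coefficient list of fg is taken from the expansion above:

```python
# Check: with I_n = integral of e^-t t^n over [0, inf) equal to n!, the inner
# product <f, g> for f(t) = (t+1)^2, g(t) = t^2 + 1 is a sum of factorials.
from math import factorial

fg = [1, 2, 2, 2, 1]   # coefficients of f*g = 1 + 2t + 2t^2 + 2t^3 + t^4
inner_fg = sum(c * factorial(k) for k, c in enumerate(fg))
assert inner_fg == 43
```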
(d) If

f : t ↦ t + 1        g : t ↦ αt + β

then

fg : t ↦ αt² + (α + β)t + β

⟨f, g⟩ = α I_2 + (α + β) I_1 + β I_0 = 3α + 2β

and

⟨f, g⟩ = 0 ⇔ (α, β) = (2γ, −3γ)   (γ ∈ R)

that is, the set of all polynomials of degree less than or equal to 1 which are orthogonal to f is

P_1 ∩ perp {f} = {g_γ : t ↦ 2γt − 3γ}_{γ∈R}
15.13 Construction of orthogonal sets. The Gram-Schmidt process
15.14 Orthogonal complements. Projections
15.15 Best approximation of elements in a euclidean space by elements in a finite-dimensional subspace
15.16 Exercises
15.16.1 n. 1 (p. 576)
(a) By direct inspection of the coordinates of the three given vectors, it is seen that they all belong to the plane π of equation x − z = 0. Since no two of them are parallel, lin {x_1, x_2, x_3} = π and dim lin {x_1, x_2, x_3} = 2. A unit vector in π is given by

v ≡ (1/3)(2, 1, 2)

Every vector belonging to the line π ∩ perp {v} must have the form

(x, −4x, x)

with norm 3√2 |x|. Thus a second unit vector which together with v spans π, and which is orthogonal to v, is

w ≡ (√2/6)(1, −4, 1)

The required orthonormal basis for lin {x_1, x_2, x_3} is {v, w}.
(b) The answer is identical to the one just given for case (a).
15.16.2 n. 2 (p. 576)
(a) First solution. It is easily checked that

x_2 ∉ lin {x_1}        x_3 ∉ lin {x_1, x_2}        0 = x_1 − x_2 + x_3 − x_4

Thus dim W ≡ dim lin {x_1, x_2, x_3, x_4} = 3, and three vectors are required for any basis of W. The following vectors form an orthonormal one:

y_1 ≡ x_1 / ‖x_1‖ = (√2/2)(1, 1, 0, 0)

y_2 ≡ (x_2 − ⟨x_2, y_1⟩ y_1) / ‖x_2 − ⟨x_2, y_1⟩ y_1‖
    = [(0, 1, 1, 0) − (√2/2)(√2/2)(1, 1, 0, 0)] / ‖(0, 1, 1, 0) − (√2/2)(√2/2)(1, 1, 0, 0)‖
    = (−1/2, 1/2, 1, 0) / √(1/4 + 1/4 + 1)
    = (√6/6)(−1, 1, 2, 0)

y_3 ≡ (x_3 − ⟨x_3, y_1⟩ y_1 − ⟨x_3, y_2⟩ y_2) / ‖x_3 − ⟨x_3, y_1⟩ y_1 − ⟨x_3, y_2⟩ y_2‖
    = [(0, 0, 1, 1) − 0 − (2√6/6)(√6/6)(−1, 1, 2, 0)] / ‖(0, 0, 1, 1) − 0 − (2√6/6)(√6/6)(−1, 1, 2, 0)‖
    = (1/3, −1/3, 1/3, 1) / √(1/9 + 1/9 + 1/9 + 1)
    = (√3/6)(1, −1, 1, 3)
Second solution. More easily, it suffices to realize that

lin {x_1, x_2, x_3, x_4} = H_{(1,−1,1,−1),0} ≡ {(x, y, z, t) ∈ R⁴ : x − y + z − t = 0}

is the hyperplane of R⁴ through the origin with (1, −1, 1, −1) as normal vector, and to directly exhibit two mutually orthogonal unit vectors in lin {x_1, x_2, x_3, x_4}:

u ≡ (√2/2)(1, 1, 0, 0)        v ≡ (√2/2)(0, 0, 1, 1)
It remains to find a unit vector in H_{(1,−1,1,−1),0} ∩ perp {u, v}. The following equations characterize H_{(1,−1,1,−1),0} ∩ perp {u, v}:

x − y + z − t = 0
x + y = 0
z + t = 0

yielding (by addition) 2(x + z) = 0, and hence

w ≡ (1/2)(1, −1, −1, 1)   or   w ≡ −(1/2)(1, −1, −1, 1)

Thus an orthonormal basis for lin {x_1, x_2, x_3, x_4} is {u, v, w}.
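The computations of part (a) can be reproduced by a minimal Gram-Schmidt sketch; the function names and the numeric tolerance are assumptions made for the sketch:

```python
# Gram-Schmidt orthonormalization reproducing the basis of part (a)
# for x1 = (1,1,0,0), x2 = (0,1,1,0), x3 = (0,0,1,1) in R^4.
from math import isclose, sqrt

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        # subtract the projections on the vectors already in the basis
        w = list(v)
        for u in basis:
            c = dot(w, u)
            w = [wi - c * ui for wi, ui in zip(w, u)]
        norm = sqrt(dot(w, w))
        if norm > 1e-12:               # skip linearly dependent vectors
            basis.append([wi / norm for wi in w])
    return basis

xs = [(1, 1, 0, 0), (0, 1, 1, 0), (0, 0, 1, 1)]
ys = gram_schmidt(xs)
for i, u in enumerate(ys):
    for j, v in enumerate(ys):
        assert isclose(dot(u, v), 1.0 if i == j else 0.0, abs_tol=1e-9)
# the third vector matches (sqrt(3)/6) * (1, -1, 1, 3) found above
expected = [sqrt(3) / 6 * c for c in (1, -1, 1, 3)]
assert all(isclose(a, b, abs_tol=1e-9) for a, b in zip(ys[2], expected))
```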
(b) It is easily checked that

    x_2 ∉ lin {x_1}        0 = 2x_1 − x_2 − x_3

Thus dim W ≡ dim lin {x_1, x_2, x_3} = 2, and two vectors are required for any basis of
W. The following vectors form an orthonormal one:

    y_1 ≡ x_1 / ‖x_1‖ = (√3/3)(1, 1, 0, 1)

    y_2 ≡ (x_2 − ⟨x_2, y_1⟩ y_1) / ‖x_2 − ⟨x_2, y_1⟩ y_1‖
        = [(1, 0, 2, 1) − 2(√3/3)(√3/3)(1, 1, 0, 1)] / ‖(1, 0, 2, 1) − 2(√3/3)(√3/3)(1, 1, 0, 1)‖
        = (1/3, −2/3, 2, 1/3) / √(1/9 + 4/9 + 4 + 1/9)
        = (√42/42)(1, −2, 6, 1)
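As a cross-check on the Gram–Schmidt computations of this exercise, the following sketch (assuming NumPy is available; x_1 = (1,1,0,0), x_2 = (0,1,1,0), x_3 = (0,0,1,1) are read off the solution of part (a), and x_4 = x_1 − x_2 + x_3 from the dependence relation) reproduces the orthonormal basis {y_1, y_2, y_3}:

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    """Orthonormalize `vectors` in order, dropping any vector that is
    linearly dependent on the ones already processed."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, b) * b for b in basis)  # subtract projections
        norm = np.linalg.norm(w)
        if norm > tol:
            basis.append(w / norm)
    return basis

# Exercise data as read off the solution of part (a); x4 = x1 - x2 + x3.
x1 = np.array([1.0, 1.0, 0.0, 0.0])
x2 = np.array([0.0, 1.0, 1.0, 0.0])
x3 = np.array([0.0, 0.0, 1.0, 1.0])
x4 = x1 - x2 + x3

ys = gram_schmidt([x1, x2, x3, x4])
print(len(ys))  # 3: x4 is dropped, matching dim W = 3
```

The three surviving vectors agree with y_1, y_2, y_3 of the first solution.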
15.16.3 n. 3 (p. 576)
    ⟨y_0, y_0⟩ = ∫_0^π (1/√π)(1/√π) dt = π (1/π) = 1

    ⟨y_n, y_n⟩ = ∫_0^π √(2/π) cos nt · √(2/π) cos nt dt = (2/π) ∫_0^π cos² nt dt

Let

    I_n ≡ ∫_0^π cos² nt dt = ∫_0^π cos nt cos nt dt

Integrating by parts,

    I_n = [cos nt (sin nt)/n]_0^π − ∫_0^π (−n sin nt)(sin nt / n) dt
        = ∫_0^π sin² nt dt = ∫_0^π (1 − cos² nt) dt
        = π − I_n

Thus

    I_n = π/2        ⟨y_n, y_n⟩ = 1

and every function y_n has norm equal to one.
To check mutual orthogonality, let us compute (dropping the positive factor 2/π, which
does not affect vanishing)

    ⟨y_0, y_n⟩ = ∫_0^π (1/√π) cos nt dt = (1/√π) [sin nt / n]_0^π = 0

    ⟨y_m, y_n⟩ = ∫_0^π cos mt cos nt dt
               = [cos mt (sin nt)/n]_0^π − ∫_0^π (−m sin mt)(sin nt / n) dt
               = (m/n) ∫_0^π sin mt sin nt dt
               = (m/n) [sin mt (−cos nt / n)]_0^π − (m/n) ∫_0^π m cos mt (−cos nt / n) dt
               = (m/n)² ⟨y_m, y_n⟩

Since m and n are distinct positive integers, (m/n)² is different from 1, and the equation

    ⟨y_m, y_n⟩ = (m/n)² ⟨y_m, y_n⟩

can hold only if ⟨y_m, y_n⟩ = 0.
That the set {y_n}_{n=0}^{+∞} generates the same space as the set {x_n}_{n=0}^{+∞}
is trivial, since, for each n, y_n is a multiple of x_n.
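The orthonormality relations just proved can be spot-checked by numerical quadrature; a minimal sketch, assuming NumPy is available:

```python
import numpy as np

def inner(f, g, a=0.0, b=np.pi, steps=200_000):
    """Midpoint-rule approximation of <f, g> = ∫_a^b f(t) g(t) dt."""
    t = a + (np.arange(steps) + 0.5) * (b - a) / steps
    return np.sum(f(t) * g(t)) * (b - a) / steps

def y(n):
    """y_0 = 1/√π;  y_n = √(2/π) cos(nt) for n ≥ 1."""
    if n == 0:
        return lambda t: np.full_like(t, 1 / np.sqrt(np.pi))
    return lambda t: np.sqrt(2 / np.pi) * np.cos(n * t)

print(round(inner(y(3), y(3)), 6))       # 1.0
print(round(abs(inner(y(2), y(5))), 6))  # 0.0
```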
15.16.4 n. 4 (p. 576)
We have

    ⟨y_0, y_0⟩ = ∫_0^1 1 dt = 1

    ⟨y_1, y_1⟩ = ∫_0^1 3 (4t² − 4t + 1) dt = 3 [ (4/3)t³ − 2t² + t ]_0^1 = 1

    ⟨y_2, y_2⟩ = ∫_0^1 5 (36t⁴ − 72t³ + 48t² − 12t + 1) dt
               = 5 [ (36/5)t⁵ − 18t⁴ + 16t³ − 6t² + t ]_0^1 = 1

which proves that the three given functions are unit vectors with respect to the given
inner product.
Moreover,

    ⟨y_0, y_1⟩ = ∫_0^1 √3 (2t − 1) dt = √3 [ t² − t ]_0^1 = 0

    ⟨y_0, y_2⟩ = ∫_0^1 √5 (6t² − 6t + 1) dt = √5 [ 2t³ − 3t² + t ]_0^1 = 0

    ⟨y_1, y_2⟩ = ∫_0^1 √15 (12t³ − 18t² + 8t − 1) dt = √15 [ 3t⁴ − 6t³ + 4t² − t ]_0^1 = 0

which proves that the three given functions are mutually orthogonal. Thus {y_0, y_1, y_2}
is an orthonormal set, and hence linearly independent.
Finally,

    x_0 = y_0
    x_1 = (1/2) y_0 + (√3/6) y_1
    x_2 = (1/3) y_0 + (√3/6) y_1 + (√5/30) y_2

which proves that

    lin {y_0, y_1, y_2} = lin {x_0, x_1, x_2}
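These inner products can also be computed exactly with polynomial arithmetic; the sketch below (NumPy assumed, and taking x_2(t) = t², as the expansion above suggests) recovers the expansion coefficients of x_2:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def inner(f, g):
    """Exact <f, g> = ∫_0^1 f(t) g(t) dt for polynomials given as
    coefficient arrays in increasing powers of t."""
    F = P.polyint(P.polymul(f, g))
    return P.polyval(1.0, F) - P.polyval(0.0, F)

s3, s5 = np.sqrt(3), np.sqrt(5)
y0 = [1.0]
y1 = [-s3, 2 * s3]          # √3 (2t − 1)
y2 = [s5, -6 * s5, 6 * s5]  # √5 (6t² − 6t + 1)

x2 = [0.0, 0.0, 1.0]        # x2(t) = t² (assumed exercise datum)
coeffs = [inner(x2, y) for y in (y0, y1, y2)]
print(np.allclose(coeffs, [1/3, s3/6, s5/30]))  # True
```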
Chapter 16
LINEAR TRANSFORMATIONS AND MATRICES
16.1 Linear transformations
16.2 Null space and range
16.3 Nullity and rank
16.4 Exercises
16.4.1 n. 1 (p. 582)
T is linear, since for every (x, y) ∈ R², for every (u, v) ∈ R², for every α ∈ R, and for
every β ∈ R,

    T [α(x, y) + β(u, v)] = T [(αx + βu, αy + βv)]
                          = (αy + βv, αx + βu)
                          = α(y, x) + β(v, u)
                          = αT [(x, y)] + βT [(u, v)]

The null space of T is the trivial subspace {(0, 0)}, the range of T is R², and hence

    rank T = 2        nullity T = 0

T is the orthogonal symmetry with respect to the bisectrix r of the first and third
quadrants, since, for every (x, y) ∈ R², T(x, y) − (x, y) is orthogonal to r, and the
midpoint of [(x, y), T(x, y)], which is ((x + y)/2, (x + y)/2), lies on r.
16.4.2 n. 2 (p. 582)
T is linear, since for every (x, y) ∈ R², for every (u, v) ∈ R², for every α ∈ R, and for
every β ∈ R,

    T [α(x, y) + β(u, v)] = T [(αx + βu, αy + βv)]
                          = (αx + βu, −(αy + βv))
                          = α(x, −y) + β(u, −v)
                          = αT [(x, y)] + βT [(u, v)]

The null space of T is the trivial subspace {(0, 0)}, the range of T is R², and hence

    rank T = 2        nullity T = 0

T is the orthogonal symmetry with respect to the X-axis.
16.4.3 n. 3 (p. 582)
T is linear, since for every (x, y) ∈ R², for every (u, v) ∈ R², for every α ∈ R, and for
every β ∈ R,

    T [α(x, y) + β(u, v)] = T [(αx + βu, αy + βv)]
                          = (αx + βu, 0)
                          = α(x, 0) + β(u, 0)
                          = αT [(x, y)] + βT [(u, v)]

The null space of T is the Y-axis, the range of T is the X-axis, and hence

    rank T = 1        nullity T = 1

T is the orthogonal projection on the X-axis.
16.4.4 n. 4 (p. 582)
T is linear, since for every (x, y) ∈ R², for every (u, v) ∈ R², for every α ∈ R, and for
every β ∈ R,

    T [α(x, y) + β(u, v)] = T [(αx + βu, αy + βv)]
                          = (αx + βu, αx + βu)
                          = α(x, x) + β(u, u)
                          = αT [(x, y)] + βT [(u, v)]

The null space of T is the Y-axis, the range of T is the bisectrix of the first and third
quadrants, and hence

    rank T = nullity T = 1
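For each of the maps in this section, rank and nullity can also be read off the representing matrix; a minimal sketch (NumPy assumed) for the present T(x, y) = (x, x):

```python
import numpy as np

# Matrix of T(x, y) = (x, x) w.r.t. the canonical basis
A = np.array([[1.0, 0.0],
              [1.0, 0.0]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank          # rank-nullity theorem
print(rank, nullity)  # 1 1
```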
16.4.5 n. 5 (p. 582)
T is not a linear function, since, e.g., for x and u both different from 0,

    T (x + u, 0) = (x² + 2xu + u², 0)
    T (x, 0) + T (u, 0) = (x² + u², 0)
16.4.6 n. 6 (p. 582)
T is not linear; indeed, for every (x, y) ∈ R², for every (u, v) ∈ R², for every α ∈ R,
and for every β ∈ R,

    T [α(x, y) + β(u, v)] = T [(αx + βu, αy + βv)]
                          = (e^(αx+βu), e^(αy+βv))
                          = ((e^x)^α · (e^u)^β, (e^y)^α · (e^v)^β)

    αT [(x, y)] + βT [(u, v)] = α(e^x, e^y) + β(e^u, e^v)
                              = (αe^x + βe^u, αe^y + βe^v)

so that, e.g., when x = y = 0 and u = v = α = β = 1,

    T [(0, 0) + (1, 1)] = (e, e)
    T [(0, 0)] + T [(1, 1)] = (1 + e, 1 + e)
16.4.7 n. 7 (p. 582)
T is affine, but not linear. Indeed, for every (x, y) ∈ R², for every (u, v) ∈ R², for
every α ∈ R, and for every β ∈ R,

    T [α(x, y) + β(u, v)] = T [(αx + βu, αy + βv)] = (αx + βu, 1)
    αT [(x, y)] + βT [(u, v)] = α(x, 1) + β(u, 1) = (αx + βu, α + β)
16.4.8 n. 8 (p. 582)
T is affine, but not linear; indeed, for every (x, y) ∈ R², for every (u, v) ∈ R², for
every α ∈ R, and for every β ∈ R,

    T [α(x, y) + β(u, v)] = T [(αx + βu, αy + βv)] = (αx + βu + 1, αy + βv + 1)
    αT [(x, y)] + βT [(u, v)] = α(x + 1, y + 1) + β(u + 1, v + 1)
                              = (αx + βu + α + β, αy + βv + α + β)
16.4.9 n. 9 (p. 582)
T is linear, since for every (x, y) ∈ R², for every (u, v) ∈ R², for every α ∈ R, and for
every β ∈ R,

    T [α(x, y) + β(u, v)] = T [(αx + βu, αy + βv)]
                          = (αx + βu − αy − βv, αx + βu + αy + βv)
                          = α(x − y, x + y) + β(u − v, u + v)
                          = αT [(x, y)] + βT [(u, v)]

The null space of T is the trivial subspace {(0, 0)}, the range of T is R², and hence

    rank T = 2        nullity T = 0

The matrix representing T is

    A ≡ [ 1  −1 ]  =  √2 [ √2/2  −√2/2 ]  =  [ √2   0 ] [ cos π/4  −sin π/4 ]
        [ 1   1 ]        [ √2/2   √2/2 ]     [  0  √2 ] [ sin π/4   cos π/4 ]

Thus T is the composition of a counterclockwise rotation by an angle of π/4 with a
homothety of modulus √2.
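The factorization of A can be verified numerically; a sketch, assuming NumPy is available:

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [1.0,  1.0]])

theta = np.pi / 4
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
homothety = np.sqrt(2) * np.eye(2)

# A factors as the homothety composed with the rotation
print(np.allclose(A, homothety @ rotation))  # True
```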
16.4.10 n. 10 (p. 582)
T is linear, since for every (x, y) ∈ R², for every (u, v) ∈ R², for every α ∈ R, and for
every β ∈ R,

    T [α(x, y) + β(u, v)] = T [(αx + βu, αy + βv)]
                          = (2(αx + βu) − (αy + βv), (αx + βu) + (αy + βv))
                          = α(2x − y, x + y) + β(2u − v, u + v)
                          = αT [(x, y)] + βT [(u, v)]

The null space of T is the trivial subspace {(0, 0)}, the range of T is R², and hence

    rank T = 2        nullity T = 0

The matrix representing T is

    A ≡ [ 2  −1 ]
        [ 1   1 ]

the characteristic polynomial of A is

    λ² − 3λ + 3

and the eigenvalues of A are

    λ_1 ≡ (3 + √3 i)/2        λ_2 ≡ (3 − √3 i)/2

An eigenvector associated to λ_1 is

    z ≡ ( 2, 1 − √3 i )

The matrix representing T with respect to the basis

    {Im z, Re z} = { (0, −√3), (2, 1) }

is

    B ≡ [ 3/2   −√3/2 ]  =  √3 [ √3/2  −1/2 ]  =  [ √3   0 ] [ cos π/6  −sin π/6 ]
        [ √3/2    3/2 ]        [ 1/2   √3/2 ]     [  0  √3 ] [ sin π/6   cos π/6 ]

Thus T is the composition of a counterclockwise rotation by an angle of π/6 with a
homothety of modulus √3.
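A numerical check of the similarity B = P⁻¹AP and of the factorization (NumPy assumed):

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [1.0,  1.0]])

# Columns of P: the basis {Im z, Re z} = {(0, −√3), (2, 1)}
P = np.array([[0.0,          2.0],
              [-np.sqrt(3),  1.0]])
B = np.linalg.inv(P) @ A @ P

theta = np.pi / 6
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
print(np.allclose(B, np.sqrt(3) * rotation))  # True
```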
16.4.11 n. 16 (p. 582)
T is linear, since for every (x, y, z) ∈ R³, for every (u, v, w) ∈ R³, for every α ∈ R,
and for every β ∈ R,

    T [α(x, y, z) + β(u, v, w)] = T [(αx + βu, αy + βv, αz + βw)]
                                = (αz + βw, αy + βv, αx + βu)
                                = α(z, y, x) + β(w, v, u)
                                = αT [(x, y, z)] + βT [(u, v, w)]

The null space of T is the trivial subspace {(0, 0, 0)}, the range of T is R³, and hence

    rank T = 3        nullity T = 0
16.4.12 n. 17 (p. 582)
T is linear (as every projection on a coordinate hyperplane), since for every (x, y, z) ∈
R³, for every (u, v, w) ∈ R³, for every α ∈ R, and for every β ∈ R,

    T [α(x, y, z) + β(u, v, w)] = T [(αx + βu, αy + βv, αz + βw)]
                                = (αx + βu, αy + βv, 0)
                                = α(x, y, 0) + β(u, v, 0)
                                = αT [(x, y, z)] + βT [(u, v, w)]

The null space of T is the Z-axis, the range of T is the XY-plane, and hence

    rank T = 2        nullity T = 1
16.4.13 n. 23 (p. 582)
T is linear, since for every (x, y, z) ∈ R³, for every (u, v, w) ∈ R³, for every α ∈ R,
and for every β ∈ R,

    T [α(x, y, z) + β(u, v, w)] = T [(αx + βu, αy + βv, αz + βw)]
                                = (αx + βu + αz + βw, 0, αx + βu + αy + βv)
                                = α(x + z, 0, x + y) + β(u + w, 0, u + v)
                                = αT [(x, y, z)] + βT [(u, v, w)]

The null space of T is the axis of central symmetry of the (+, −, −)-orthant and of
the (−, +, +)-orthant, of parametric equations

    x = t        y = −t        z = −t

the range of T is the XZ-plane, and hence

    rank T = 2        nullity T = 1
16.4.14 n. 25 (p. 582)
Let D_1 (−1, 1) or, more shortly, D_1 be the linear space of all real functions of a real
variable which are defined and everywhere differentiable on (−1, 1). If

    T : D_1 → R^(−1,1),    f ↦ (x ↦ x f′(x))

then T is a linear operator. Indeed,

    T (f + g) = (x ↦ x [f + g]′(x))
              = (x ↦ x [f′(x) + g′(x)])
              = (x ↦ x f′(x) + x g′(x))
              = (x ↦ x f′(x)) + (x ↦ x g′(x))
              = T (f) + T (g)

Moreover,

    ker T = {f ∈ D_1 : ∀x ∈ (−1, 1), x f′(x) = 0}
          = {f ∈ D_1 : ∀x ∈ (−1, 0) ∪ (0, 1), f′(x) = 0}

By an important though sometimes overlooked theorem in calculus, every function f
which is differentiable in (−1, 0) ∪ (0, 1) and continuous in 0 is differentiable in 0 as
well, provided lim_{x→0} f′(x) exists and is finite, in which case

    f′(0) = lim_{x→0} f′(x)

Thus

    ker T = {f ∈ D_1 : f′ = 0}

(here 0 is the identically null function defined on (−1, 1)). If f belongs to ker T, by
the classical theorem due to Lagrange,

    ∀x ∈ (−1, 1), ∃ϑ_x ∈ (0, x),    f(x) = f(0) + x f′(ϑ_x)

and hence f is constant on (−1, 1). It follows that a basis of ker T is given by the
constant function (x ↦ 1), and nullity T = 1. Since T(D_1) contains, e.g., all the
restrictions to (−1, 1) of the polynomials with vanishing zero-degree monomial, and
it is already known that the linear space of all polynomials has an infinite basis, the
dimension of T(D_1) is infinite.
16.4.15 n. 27 (p. 582)
Let D_2 be the linear space of all real functions of a real variable which are defined
and everywhere twice differentiable on R. If

    L : D_2 → R^R,    y ↦ y″ + P y′ + Q y

where P and Q are real functions of a real variable which are continuous on R, it has
been shown in the solution to exercise 17, p. 555, that L is a linear operator. Thus
ker L is the set of all solutions to the linear differential equation of the second order

    ∀x ∈ R,    y″(x) + P(x) y′(x) + Q(x) y(x) = 0        (16.1)

By the uniqueness theorem for Cauchy problems in the theory of differential equations,
for each (y_0, y_0′) ∈ R² there exists a unique solution to equation (16.1) such
that y(0) = y_0 and y′(0) = y_0′. Hence the function

    ϕ : ker L → R²,    y ↦ (y(0), y′(0))

is injective and surjective. Moreover, let u be the solution to equation (16.1) such
that (u(0), u′(0)) = (1, 0), and let v be the solution to equation (16.1) such that
(v(0), v′(0)) = (0, 1). Then for each (y_0, y_0′) ∈ R², since ker L is a linear subspace of
D_2, the function

    y : x ↦ y_0 u(x) + y_0′ v(x)

is a solution to equation (16.1), and by direct inspection it is seen that y(0) = y_0 and
y′(0) = y_0′. In other words, y = ϕ⁻¹((y_0, y_0′)), and

    ker L = span {u, v}

Finally, u and v are linearly independent; indeed, since u(0) = 1 and v(0) = 0,
αu(x) + βv(x) = 0 for each x ∈ R can only hold (by evaluating at x = 0) if α = 0;
in such a case, from βv(x) = 0 for each x ∈ R and v′(0) = 1 it is easily deduced that
β = 0 as well. Thus nullity L = 2.
16.5 Algebraic operations on linear transformations
16.6 Inverses
16.7 One-to-one linear transformations
16.8 Exercises
16.8.1 n. 15 (p. 589)
T is injective (or one-to-one), since

    T (x, y, z) = T (x′, y′, z′)  ⇔  { x = x′, 2y = 2y′, 3z = 3z′ }  ⇔  (x, y, z) = (x′, y′, z′)

and

    T⁻¹(u, v, w) = (u, v/2, w/3)
16.8.2 n. 16 (p. 589)
T is injective, since

    T (x, y, z) = T (x′, y′, z′)  ⇔  { x = x′, y = y′, x + y + z = x′ + y′ + z′ }
                                  ⇔  (x, y, z) = (x′, y′, z′)

and

    T⁻¹(u, v, w) = (u, v, w − u − v)
16.8.3 n. 17 (p. 589)
T is injective, since

    T (x, y, z) = T (x′, y′, z′)  ⇔  { x + 1 = x′ + 1, y + 1 = y′ + 1, z − 1 = z′ − 1 }
                                  ⇔  (x, y, z) = (x′, y′, z′)

and

    T⁻¹(u, v, w) = (u − 1, v − 1, w + 1)
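The three inverses found in this section can be verified mechanically; a plain-Python sketch:

```python
# The three maps of this section, with the inverses found above.
T1, T1_inv = lambda x, y, z: (x, 2*y, 3*z), lambda u, v, w: (u, v/2, w/3)
T2, T2_inv = lambda x, y, z: (x, y, x + y + z), lambda u, v, w: (u, v, w - u - v)
T3, T3_inv = lambda x, y, z: (x + 1, y + 1, z - 1), lambda u, v, w: (u - 1, v - 1, w + 1)

pt = (2.0, -6.0, 9.0)
for T, T_inv in [(T1, T1_inv), (T2, T2_inv), (T3, T3_inv)]:
    # T_inv undoes T and vice versa on the sample point
    assert T_inv(*T(*pt)) == pt and T(*T_inv(*pt)) == pt
print("all inverses verified")
```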
16.8.4 n. 27 (p. 590)
Let

    p = x ↦ p_0 + p_1 x + p_2 x² + ··· + p_{n−1} x^{n−1} + p_n x^n

Then

    DT (p) = D [T (p)] = D [ x ↦ ∫_0^x p(t) dt ] = x ↦ p(x)

the last equality being a consequence of the fundamental theorem of integral calculus.

    TD (p) = T [D (p)] = T [ x ↦ p_1 + 2p_2 x + ··· + (n − 1) p_{n−1} x^{n−2} + n p_n x^{n−1} ]
           = x ↦ ∫_0^x ( p_1 + 2p_2 t + ··· + (n − 1) p_{n−1} t^{n−2} + n p_n t^{n−1} ) dt
           = x ↦ [ p_1 t + p_2 t² + ··· + p_{n−1} t^{n−1} + p_n t^n ]_0^x
           = x ↦ p_1 x + p_2 x² + ··· + p_{n−1} x^{n−1} + p_n x^n
           = p − p_0

Thus TD acts as the identity map only on the subspace W of V containing all
polynomials having the zero-degree monomial (p_0) equal to zero. Im TD is equal to
W, and ker TD is the set of all constant polynomials, i.e., zero-degree polynomials.
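The identities DT(p) = p and TD(p) = p − p_0 can be checked on coefficient arrays with NumPy's polynomial helpers (a sketch on one sample polynomial, NumPy assumed):

```python
import numpy as np
from numpy.polynomial import polynomial as P

def D(p):
    """Differentiation on coefficient arrays (increasing powers of x)."""
    return P.polyder(p)

def T(p):
    """Integration from 0: p ↦ (x ↦ ∫_0^x p(t) dt); constant term is 0."""
    return P.polyint(p)

p = np.array([4.0, -1.0, 3.0, 2.0])    # 4 − x + 3x² + 2x³

DT = D(T(p))    # should equal p
TD = T(D(p))    # should equal p with its constant term removed
print(np.allclose(DT, p), np.allclose(TD, [0.0, -1.0, 3.0, 2.0]))  # True True
```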
16.9 Linear transformations with prescribed values
16.10 Matrix representations of linear transformations
16.11 Construction of a matrix representation in diagonal form
16.12 Exercises
16.12.1 n. 3 (p. 596)
(a) Since T (i) = i + j and T (j) = 2i − j,

    T (3i − 4j) = 3T (i) − 4T (j) = 3(i + j) − 4(2i − j) = −5i + 7j
    T²(3i − 4j) = T (−5i + 7j) = −5T (i) + 7T (j) = −5(i + j) + 7(2i − j) = 9i − 12j

(b) The matrix of T with respect to the basis {i, j} is

    A ≡ [ T (i)  T (j) ] = [ 1   2 ]
                           [ 1  −1 ]

and the matrix of T² with respect to the same basis is

    A² = [ 3  0 ]
         [ 0  3 ]

(c) First solution (matrix for T). If e_1 = i − j and e_2 = 3i + j, the matrix

    P ≡ [ e_1  e_2 ] = [  1  3 ]
                       [ −1  1 ]

transforms the (canonical) coordinates (1, 0) and (0, 1) of e_1 and e_2 with respect to
the basis {e_1, e_2} into their coordinates (1, −1) and (3, 1) with respect to the basis
{i, j}; hence P is the matrix of coordinate change from basis {e_1, e_2} to basis {i, j},
and P⁻¹ is the matrix of coordinate change from basis {i, j} to basis {e_1, e_2}. The
operation of the matrix B representing T with respect to the basis {e_1, e_2} can be
described as the combined effect of the following three actions: 1) coordinate change
from coordinates w.r.t. basis {e_1, e_2} into coordinates w.r.t. the basis {i, j} (that
is, multiplication by matrix P); 2) transformation according to T as expressed by
the matrix representing T w.r.t. the basis {i, j} (that is, multiplication by A); 3)
coordinate change from coordinates w.r.t. the basis {i, j} into coordinates w.r.t.
basis {e_1, e_2} (that is, multiplication by P⁻¹). Thus

    B = P⁻¹AP = (1/4) [ 1  −3 ] [ 1   2 ] [  1  3 ]
                      [ 1   1 ] [ 1  −1 ] [ −1  1 ]

              = (1/4) [ −2  5 ] [  1  3 ]
                      [  2  1 ] [ −1  1 ]

              = (1/4) [ −7  −1 ]
                      [  1   7 ]
Second solution (matrix for T). The matrix B representing T with respect to the
basis {e_1, e_2} is

    B ≡ [ T (e_1)  T (e_2) ]

provided T (e_1) and T (e_2) are meant as coordinate vectors (α, β) and (γ, δ) with
respect to the basis {e_1, e_2}. Since, on the other hand, with respect to the basis {i, j}
we have

    T (e_1) = T (i − j) = T (i) − T (j) = (i + j) − (2i − j) = −i + 2j
    T (e_2) = T (3i + j) = 3T (i) + T (j) = 3(i + j) + (2i − j) = 5i + 2j

it suffices to find the {e_1, e_2}-coordinates (α, β) and (γ, δ) which correspond to the
{i, j}-coordinates (−1, 2) and (5, 2). Thus we want to solve the two equation systems
(in the unknowns (α, β) and (γ, δ), respectively)

    α e_1 + β e_2 = −i + 2j
    γ e_1 + δ e_2 = 5i + 2j

that is,

    { α + 3β = −1        { γ + 3δ = 5
    { −α + β = 2         { −γ + δ = 2

The (unique) solutions are (α, β) = (1/4)(−7, 1) and (γ, δ) = (1/4)(−1, 7), so that

    B = (1/4) [ −7  −1 ]
              [  1   7 ]

Continuation (matrix for T²). Since T² is represented by a scalar diagonal matrix
with respect to the initially given basis, it is represented by the same matrix with
respect to every basis (indeed, since scalar diagonal matrices commute with every
matrix of the same order, P⁻¹DP = D for every scalar diagonal matrix D).
16.12.2 n. 4 (p. 596)
T : (x, y)  (reflection w.r.t. the Y-axis)  ↦  (−x, y)  (length doubling)  ↦  (−2x, 2y)

Thus T may be represented by the matrix

    A_T ≡ [ −2  0 ]
          [  0  2 ]

and hence T² by the matrix

    A_T² ≡ [ 4  0 ]
           [ 0  4 ]
16.12.3 n. 5 (p. 596)
a)

    T (i + 2j + 3k) = T (k) + T (j + k) + T (i + j + k)
                    = (2i + 3j + 5k) + i + (j − k)
                    = 3i + 4j + 4k

The three image vectors T (k), T (j + k), T (i + j + k) form a linearly independent
triple. Indeed, the linear combination

    x T (k) + y T (j + k) + z T (i + j + k) = x(2i + 3j + 5k) + y i + z(j − k)
                                            = (2x + y) i + (3x + z) j + (5x − z) k

spans the null vector if and only if

    2x + y = 0
    3x + z = 0
    5x − z = 0

which yields (II + III) x = 0, and hence (by substitution in I and II) y = z = 0.
Thus the range space of T is R³, and rank T is 3. It follows that the null space of T
is the trivial subspace {(0, 0, 0)}, and nullity T is 0.
b) The matrix of T with respect to the basis {i, j, k} is obtained by aligning as
columns the coordinates w.r.t. {i, j, k} of the image vectors T (i), T (j), T (k). The
last one is given in the problem statement, but the first two need to be computed.

    T (j) = T (j + k − k) = T (j + k) − T (k) = −i − 3j − 5k
    T (i) = T (i + j + k − j − k) = T (i + j + k) − T (j + k) = −i + j − k

    A_T ≡ [ −1  −1  2 ]
          [  1  −3  3 ]
          [ −1  −5  5 ]
16.12.4 n. 7 (p. 597)
(a)

    T (4i − j + k) = 4T (i) − T (j) + T (k) = (0, −2)

Since T (i) = (0, 0) and {T (j), T (k)} is a linearly independent set,

    ker T = lin {i}        rank T = 2

(b)

    A_T = [ 0  1   1 ]
          [ 0  1  −1 ]

(c) Let C = (w_1, w_2). Then

    T (i) = 0w_1 + 0w_2        T (j) = 1w_1 + 0w_2        T (k) = −(1/2) w_1 + (3/2) w_2

    (A_T)_C = [ 0  1  −1/2 ]
              [ 0  0   3/2 ]

(d) Let B ≡ {j, k, i} and C = {w_1, w_2} = {(1, 1), (1, −1)}. Then

    (A_T)_{B,C} = [ 1  0  0 ]
                  [ 0  1  0 ]
16.12.5 n. 8 (p. 597)
(a) I shall distinguish the canonical unit vectors of R² and R³ by marking the former
with an underbar. Since T (i̲) = i + k and T (j̲) = i − k,

    T (2i̲ − 3j̲) = 2T (i̲) − 3T (j̲) = 2(i + k) − 3(i − k) = −i + 5k

For any two real numbers x and y,

    T (xi̲ + yj̲) = xT (i̲) + yT (j̲) = x(i + k) + y(i − k) = (x + y) i + (x − y) k

It follows that

    Im T = lin {i, k}        rank T = 2        nullity T = 2 − 2 = 0
(b) The matrix of T with respect to the bases {i̲, j̲} and {i, j, k} is

    A ≡ [ T (i̲)  T (j̲) ] = [ 1   1 ]
                           [ 0   0 ]
                           [ 1  −1 ]
(c) First solution. If w_1 = i̲ + j̲ and w_2 = i̲ + 2j̲, the matrix

    P ≡ [ w_1  w_2 ] = [ 1  1 ]
                       [ 1  2 ]

transforms the (canonical) coordinates (1, 0) and (0, 1) of w_1 and w_2 with respect to
the basis {w_1, w_2} into their coordinates (1, 1) and (1, 2) with respect to the basis
{i̲, j̲}; hence P is the matrix of coordinate change from basis {w_1, w_2} to basis {i̲, j̲},
and P⁻¹ is the matrix of coordinate change from basis {i̲, j̲} to basis {w_1, w_2}. The
operation of the matrix B of T with respect to the bases {w_1, w_2} and {i, j, k} can be
described as the combined effect of the following two actions: 1) coordinate change
from coordinates w.r.t. basis {w_1, w_2} into coordinates w.r.t. the basis {i̲, j̲} (that
is, multiplication by matrix P); 2) transformation according to T as expressed by the
matrix representing T w.r.t. the bases {i̲, j̲} and {i, j, k} (that is, multiplication by
A). Thus

    B = AP = [ 1   1 ] [ 1  1 ]  =  [ 2   3 ]
             [ 0   0 ] [ 1  2 ]     [ 0   0 ]
             [ 1  −1 ]              [ 0  −1 ]
Second solution (matrix for T). The matrix B representing T with respect to the
bases {w_1, w_2} and {i, j, k} is

    B ≡ [ T (w_1)  T (w_2) ]

where

    T (w_1) = T (i̲ + j̲) = T (i̲) + T (j̲) = (i + k) + (i − k) = 2i
    T (w_2) = T (i̲ + 2j̲) = T (i̲) + 2T (j̲) = (i + k) + 2(i − k) = 3i − k

Thus

    B = [ 2   3 ]
        [ 0   0 ]
        [ 0  −1 ]
(c) The matrix C representing T w.r.t. bases {u_1, u_2} and {v_1, v_2, v_3} is diagonal,
that is,

    C = [ γ_11   0  ]
        [  0   γ_22 ]
        [  0    0   ]

if and only if the following holds:

    T (u_1) = γ_11 v_1        T (u_2) = γ_22 v_2

There are indeed very many ways to achieve that. In the present situation the simplest
way seems to me to be given by defining

    u_1 ≡ i̲        v_1 ≡ T (u_1) = i + k
    u_2 ≡ j̲        v_2 ≡ T (u_2) = i − k
    v_3 ≡ any vector such that {v_1, v_2, v_3} is linearly independent

thereby obtaining

    C = [ 1  0 ]
        [ 0  1 ]
        [ 0  0 ]
16.12.6 n. 16 (p. 597)
We have

    D (sin) = cos                    D²(sin) = D (cos) = −sin
    D (cos) = −sin                   D²(cos) = D (−sin) = −cos
    D (id · sin) = sin + id · cos    D²(id · sin) = D (sin + id · cos) = 2 cos − id · sin
    D (id · cos) = cos − id · sin    D²(id · cos) = D (cos − id · sin) = −2 sin − id · cos

and hence the matrices representing the differentiation operator D and its square D²,
acting on lin {sin, cos, id · sin, id · cos}, with respect to the basis
{sin, cos, id · sin, id · cos} are

    A = [ 0  −1  1   0 ]        A² = [ −1   0   0  −2 ]
        [ 1   0  0   1 ]             [  0  −1   2   0 ]
        [ 0   0  0  −1 ]             [  0   0  −1   0 ]
        [ 0   0  1   0 ]             [  0   0   0  −1 ]
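That A² is indeed the matrix of D² can be confirmed by squaring A; a one-line check with NumPy:

```python
import numpy as np

# Matrix of D on lin {sin, cos, id·sin, id·cos}, basis in that order
A = np.array([[0., -1., 1.,  0.],
              [1.,  0., 0.,  1.],
              [0.,  0., 0., -1.],
              [0.,  0., 1.,  0.]])

A2 = np.array([[-1.,  0.,  0., -2.],
               [ 0., -1.,  2.,  0.],
               [ 0.,  0., -1.,  0.],
               [ 0.,  0.,  0., -1.]])

print(np.allclose(A @ A, A2))  # True
```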
560) . . . .6 n. . . . . . . . . . . . . . 26 (p. . . .9. . . . . . . . . 4 (p. . . . . . . . 16 (p. . . . . . . . . . . . .15 n. . . . . . . . . . . . . . . . .4 n. . . 15. . . 560) . . . . .12. . . . .21 n. . . . . . 27 (p. . 24 (p.14 Orthogonal complements. . . . . . . . . . . . . . . . . Euclidean spaces. . . . . . . . . . . . . . . .9. . . . . . . . . . 15. 567) . . . 15.9 n. . . . . . 15.1 n. 560) .4 n. . . . . . . . . 567) . . .16 n. . 15. . . . . 10 (p.5 n. . . . 560) . 6 (p. . 560) . . . . . . 14 (p. 15. .17 n.5. . 2 (p. . . . . . . . . . . . . 28 (p. . . 560) . . . . . . 22 (p. . . . . . . . . . . . . . . . 560) . . . . . . . . . . . . . . . . . . . . . . 560) . . . 102 102 102 102 103 103 103 103 103 103 103 104 104 104 104 104 104 105 105 105 105 106 106 106 107 107 108 108 111 111 112 112 112 115 115 115 115 115 116 117 118 . 15. . . . . . . . . . . . . . . . . . . . 15. . . . . . .9. . .9. . .6 Subspaces of a linear space .10 n. . . 1 (p. . . . . . 15. . 8 (p. . 15. 2 (p. 15. . . . . . . 560) . . . projections . . . . . . . . . . . . . . 560) . . . .9 Exercises . . 560) . . . . . .16 Exercises . . .13 Construction of orthogonal sets. 12 (p. . . . 9 (p.9. . . . . . . . . . . . . . . . . . . . . . .15 Best approximation of elements in a euclidean space by elements in a ﬁnitedimensional subspace . . . . . . . . . 15. . . . . . . . . . .
. .4. . . . . 16. . . . . 582) . . . . . . . . . 25 (p. .4 n. . . .4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 n. . . . . . .1 n. . . . . . . . . . . . . . . . . . . . .7 n. . . 8 (p. . . . . . . . . . .6 n.11 n. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 n. . 17 (p.5 n. . 7 (p.4. . . . . . . . 582) . . . . . . . . . . . . .4. . . . . . . . . . . . 10 (p. . . . . . . . 16. . . . . . . . . . .6 n. . . . . . 16. . . . .8. . . 3 (p.4.CONTENTS 9 16 Linear transformations and matrices 16. .1 n. . . . . 589) . . . .4. . . . . . . .8. 582) . 8 (p. . . . . . . . . . . . . .14 n. . 17 (p. . 16. . . . . . . . .12 n. . . .2 Null space and range . . . . form . 582) . . .4. . . . . . . . 596) . . . . . . . . . . 3 (p. . . . . . . . . . . . . 16 (p. . . . .12. . . . . . . . . . . . . . . . . . 16. 6 (p. . . . .1 n. . . . . . . . . . . . . 16. . . . . 16. . 582) . . 16 (p. . . . . . . . . . . . . . . . 16. . . . . . . .4. . . . . . . 16. 16. . . 27 (p. 16. .8. . . . . . . 582) . . . . . . . .12. 596) . . . . 5 (p. . . . . . . . . . . . . . . . . . . . . . . .12 Exercises . . . . . . . . . . . . .11 Construction of a matrix representation in diagonal 16. . . .3 n. . . . . . . . . . . . . . . . 597) . . . . . . . . . . . . . . . . 597) . . . . . . . . . . .2 n. . . . 4 (p. . . . . 582) . . . . . . . . . . . . 16. . . . . 590) . . . . . . . . . . . . . .8. . 16.12. . . . . . . . .5 n. . .4. . . . . . .3 n. 16. . . . 16. . 27 (p.4 n. .13 n. . . . . . . . . .4 n. . . . . 1 (p. . . . . . . . . . . . . . . . . . .12. . . 582) . . . 16. . . . . . . .2 n. . . . . 16. . . . . . . . . . . 16. . . 582) . . . . .9 Linear transformations with prescribed values .5 Algebraic operations on linear transformations . . . . . . . .4. . . . . . 582) . . . 16. . 15 (p. . . . . . . . . . 16. . . . . . . . . . . . . . .12. . . . .4. . . . 16. . . . 16. . . . . . . . . . . . . . . . . . . . 582) . 
16. . .6 Inverses . . . . . . . . . . . . 597) . . . . . . . . . . . . . . . . . . . . . . 16. . . .4. . . . . . . . . . . . . . 16. . . . . .10 n. . . . . . 9 (p. . . . . . . .4. . . .10 Matrix representations of linear transformations . . . . . . . . . 16. . . . . . . . 16.7 Onetoone linear transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 (p. . 582) . . . . . . 121 121 121 121 121 121 122 122 122 123 123 123 123 124 124 125 125 126 126 127 128 128 128 128 128 128 128 129 129 129 129 129 129 131 131 132 133 135 . . .8 Exercises . . . . .9 n. . . . . . 589) . .1 Linear transformations . . . . . . 5 (p. .4. . 16. . . . . . 16. . . . . . . . . . . .4. . . . . . . . . . . . 16. . . . . 7 (p. . . 16. .3 Nullity and rank . . . . . . . . . . . . . . . . . .15 n.4. . . . . . . . 4 (p. . 16. . . . . . . . . . . 16 (p. . . . . . . 582) . .12. . . . . . . . . . . . . 596) . . . . . . . . . . 2 (p. . 16. 589) . . . . . 16. . . 16. .8 n. . . . . 582) . . . . . . . . . . . . . . . . . . . 582) . .
Part I

Volume 1
Chapter 1

Chapter 2

Chapter 3

Chapter 4

Chapter 5

Chapter 6

Chapter 7

Chapter 8

Chapter 9

Chapter 10

Chapter 11
Chapter 12

VECTOR ALGEBRA

12.1 Historical introduction

12.2 The vector space of n-tuples of real numbers

12.3 Geometric interpretation for n ≤ 3

12.4 Exercises

12.4.1 n. 1 (p. 450)

(a) a + b = (5, 0, 9)
(b) a − b = (−3, 6, 3)
(c) a + b − c = (3, −1, 4)
(d) 7a − 2b − 3c = (−7, 24, 21)
(e) 2a + b − 3c = (0, 0, 0)
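Since the five answers above determine the data uniquely, the computations can be re-run mechanically. A minimal sketch, assuming a = (1, 3, 6), b = (4, −3, 3), c = (2, 1, 5) (the only triple consistent with all five answers listed):

```python
# Sketch: verifying the linear combinations of exercise 12.4.1 numerically.
# The data a = (1, 3, 6), b = (4, -3, 3), c = (2, 1, 5) are an assumption,
# inferred from the answers above.

def lin_comb(coeffs, vectors):
    """Return sum_i coeffs[i] * vectors[i], componentwise."""
    n = len(vectors[0])
    return tuple(sum(k * v[i] for k, v in zip(coeffs, vectors)) for i in range(n))

a, b, c = (1, 3, 6), (4, -3, 3), (2, 1, 5)

print(lin_comb((1, 1), (a, b)))          # a + b
print(lin_comb((7, -2, -3), (a, b, c)))  # 7a - 2b - 3c
```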
12.4.2 n. 2 (p. 450)

The seven points to be drawn are the following:

(7/3, 2), (5/2, 5/2), (11/4, 13/4), (3, 4), (4, 7), (1, −2), (0, −5)

The purpose of the exercise is achieved by drawing a single picture containing all the points (included the starting points A and B, as required, I would say). It can be intuitively seen that, by letting t vary in all R, the straight line through point A with direction given by the vector b ≡ OB is obtained.

12.4.3 n. 3 (p. 450)

The seven points this time are the following:

(5/3, 10/3), (2, 7/2), (5/2, 15/4), (3, 4), (5, 5), (−1, 2), (−3, 1)

It can be intuitively seen that, by letting t vary in all R, the straight line through B with direction given by the vector a ≡ OA is obtained.
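The collinearity observed in nn. 2-3 can be checked in exact rational arithmetic. A sketch, assuming A = (2, 1), B = (1, 3) and the parameter values t ∈ {1/3, 1/2, 3/4, 1, 2, −1, −2} (both inferred from the listed coordinates):

```python
# Sketch: checking that the seven points of nn. 2-3 are collinear.
# A = (2, 1) and B = (1, 3) are an assumption, inferred from the
# reconstructed coordinate lists.
from fractions import Fraction as F

A, B = (2, 1), (1, 3)
ts = [F(1, 3), F(1, 2), F(3, 4), F(1), F(2), F(-1), F(-2)]

def add_scaled(p, t, d):
    """Return p + t*d."""
    return (p[0] + t * d[0], p[1] + t * d[1])

def collinear(points):
    """True iff all points lie on one straight line (2-D cross-product test)."""
    (x0, y0), (x1, y1) = points[0], points[1]
    return all((x1 - x0) * (y - y0) == (y1 - y0) * (x - x0) for x, y in points[2:])

ex2 = [add_scaled(A, t, B) for t in ts]  # line through A with direction b
ex3 = [add_scaled(B, t, A) for t in ts]  # line through B with direction a
print(collinear(ex2), collinear(ex3))
```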
12.4.4 n. 4 (p. 450)

(a) The seven points to be drawn are the following:

(0, 5), (1, 3), (5/4, 5/2), (3/2, 2), (7/4, 3/2), (2, 1), (3, −1)

The whole purpose of this part of the exercise is achieved by drawing a single picture containing all the points (included the starting points A and B, as it is going to be clear, I would say, by the question immediately following).

(b) It is hard not to notice that all points belong to the same straight line; indeed, all the combinations are affine.

(c) If the value of x is fixed at 0 and y varies in [0, 1], the segment OB is obtained. Similarly, the same construction with the value of x fixed at 1 yields the segment AD, where OD = OA + OB, and hence D is the vertex of the parallelogram with three vertices at O, A, and B; when x = 1/2 the segment obtained joins the midpoints of the two sides OA and BD. It is enough to repeat the construction a few more times to convince oneself that the set

{xa + yb : (x, y) ∈ [0, 1]²}

is matched by the set of all points of the parallelogram OADB. The picture below is made with the value of x fixed at a few points of [0, 1].
(d) All the segments in the above construction are substituted by straight lines, and the resulting set is the (infinite) stripe bounded by the lines containing the sides OB and AD, of equations 3x − y = 0 and 3x − y = 5 respectively.

(e) The whole plane.

12.4.5 n. 5 (p. 450)

x (2, 1) + y (1, 3) = (c1, c2)

I: 2x + y = c1
II: x + 3y = c2
3I − II: 5x = 3c1 − c2
2II − I: 5y = 2c2 − c1

Thus

(c1, c2) = [(3c1 − c2)/5] (2, 1) + [(2c2 − c1)/5] (1, 3)

12.4.6 n. 6 (p. 451)

d = x (1, 1, 1) + y (0, 1, 1) + z (1, 1, 0) = (x + z, x + y + z, x + y)

(a) d = (0, 0, 0):

I: x + z = 0
II: x + y + z = 0
III: x + y = 0

II − I: y = 0; II − III: z = 0; then I: x = 0.

(c) d = (1, 2, 3):

I: x + z = 1
II: x + y + z = 2
III: x + y = 3

II − I: y = 1; II − III: z = −1; then I: x = 2.
12.4.7 n. 7 (p. 451)

d = x (1, 1, 1, 0) + y (0, 1, 1, 1) + z (1, 1, 0, 0) = (x + z, x + y + z, x + y, y)

(a) d = (0, 0, 0, 0):

I: x + z = 0; II: x + y + z = 0; III: x + y = 0; IV: y = 0

IV → III: x = 0; then I: z = 0; II (check): 0 = 0.

(c) d = (1, 2, 3, 4):

I: x + z = 1; II: x + y + z = 2; III: x + y = 3; IV: y = 4

II − I − IV: 0 = −3; no solution.

(d) d = (1, 5, 3, 4):

I: x + z = 1; II: x + y + z = 5; III: x + y = 3; IV: y = 4

IV → III: x = −1; then I: z = 2; II (check): −1 + 4 + 2 = 5.

12.4.8 n. 8 (p. 451)

d = x (1, 1, 1) + y (0, 1, 1) + z (2, 1, 1) = (x + 2z, x + y + z, x + y + z)

(a) d = (0, 0, 0):

I: x + 2z = 0; II: x + y + z = 0; III: x + y + z = 0

I: x = −2z; then II: y = z; with z ← 1, a generating solution is (−2, 1, 1).

(c) d = (1, 2, 3):

I: x + 2z = 1; II: x + y + z = 2; III: x + y + z = 3

III − II: 0 = 1; no solution.
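The small systems of nn. 6-8 can be decided mechanically by row reduction. A sketch over exact rationals; the three vectors of n. 8 are those read off the text here, so treat them as an assumption:

```python
# Sketch: deciding whether d is a linear combination of given vectors by
# Gauss-Jordan elimination over exact rationals.
from fractions import Fraction as F

def solve(vectors, d):
    """Solve sum_j x_j * vectors[j] = d; return a solution tuple or None."""
    rows = [[F(v[i]) for v in vectors] + [F(d[i])] for i in range(len(d))]
    n = len(vectors)
    piv, r = [], 0
    for c in range(n):
        k = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if k is None:
            continue
        rows[r], rows[k] = rows[k], rows[r]
        rows[r] = [e / rows[r][c] for e in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                rows[i] = [a - rows[i][c] * b for a, b in zip(rows[i], rows[r])]
        piv.append(c)
        r += 1
    if any(all(e == 0 for e in row[:-1]) and row[-1] != 0 for row in rows):
        return None  # an equation 0 = nonzero: inconsistent system
    x = [F(0)] * n  # free variables set to 0
    for i, c in enumerate(piv):
        x[c] = rows[i][-1]
    return tuple(x)

vs = [(1, 1, 1), (0, 1, 1), (2, 1, 1)]
print(solve(vs, (0, 0, 0)))  # the trivial solution
print(solve(vs, (1, 2, 3)))  # None: 0 = 1 is inconsistent
```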
12.4.9 n. 9 (p. 451)

Assumptions:

I: c = a + b
II: ∃k ∈ R ∼ {0}, a = kd

Claim:

(∃h ∈ R ∼ {0}, c = hd) ⇔ (∃l ∈ R ∼ {0}, b = ld)

I present two possible lines of reasoning (among others, probably).

1. (⇒) Since (by I) we have b = c − a, if c is parallel to d, then (by II) b is the difference of two vectors which are both parallel to d; it follows that b, which is nonnull, is parallel to d too. (⇐) Since (by I) we have c = a + b, if b is parallel to d, then (by II) c is the sum of two vectors which are both parallel to d; it follows that c, which is nonnull, is parallel to d too.

2. ∃h ∈ R ∼ {0}, c = hd
⇔ (by I) ∃h ∈ R ∼ {0}, kd + b = hd
⇔ ∃h ∈ R ∼ {0}, b = (h − k) d
⇔ (since b ≠ 0 forces h ≠ k; set l ≡ h − k) ∃l ∈ R ∼ {0}, b = ld

12.4.10 n. 10 (p. 451)

Let the two vectors u and v be both parallel to the vector w, which is nonnull. According to the definition at page 450 (just before the beginning of § 12.4), this means that there are two real nonzero numbers α and β such that u = αw and v = βw. Then

u = αw = α (v/β) = (α/β) v

that is, u is parallel to v.

12.4.11 n. 11 (p. 451)

(b) Here is an illustration of the first distributive law

(α + β) v = αv + βv
with v = (2, 1), α = 2, β = 3. The vectors v, αv, βv, αv + βv are displayed by means of representative oriented segments from left to right, in black, red, blue, and red-blue colour, respectively. The oriented segment representing vector (α + β) v is above, in violet. The dotted lines are there just to make it clearer that the two oriented segments representing αv + βv and (α + β) v are congruent, that is, the vectors αv + βv and (α + β) v are the same.
tails are marked with a cross, heads with a diamond
An illustration of the second distributive law
α (u + v) = αu + αv
is provided by means of the vectors u = (1, 3), v = (2, 1), and the scalar α = 2. The vectors u and αu are represented by means of blue oriented segments; the vectors v and αv by means of red ones; u + v and α (u + v) by green ones; αu + αv is in violet. The original vectors u, v, u + v are on the left; the “rescaled” vectors αu, αv, α (u + v) on the right. Again, the black dotted lines are there just to emphasize congruence.
tails are marked with a cross, heads with a diamond

12.4.12 n. 12 (p. 451)

The statement to be proved is better understood if written in homogeneous form, with all vectors represented by oriented segments:

OA + (1/2) AC = (1/2) OB    (12.1)

Since A and C are opposed vertices, the same holds for O and B; this means that the oriented segment OB covers precisely one diagonal of the parallelogram OABC, and AC precisely covers the other (in the rightward-downwards orientation), hence what needs to be proved is the following:

OA + (1/2) (OC − OA) = (1/2) (OA + OC)

which is now seen to be an immediate consequence of the distributive properties. In the picture below, OA is in red, OC in blue, OB = OA + OC in green, and AC = OC − OA in violet.
The geometrical theorem expressed by equation (12.1) is the following:

Theorem 1 The two diagonals of every parallelogram intersect at a point which divides them in two segments of equal length. In other words, the intersection point of the two diagonals is the midpoint of both.

Indeed, the left-hand side of (12.1) is the vector represented by the oriented segment OM, where M is the midpoint of diagonal AC (violet), whereas the right-hand side is the vector represented by the oriented segment ON, where N is the midpoint of diagonal OB (green). More explicitly, the movement from O to M is described as achieved by the composition of a first movement from O to A with a second movement from A towards C, which stops halfway (red plus half the violet); whereas the movement from O to N is described as a single movement from O toward B, stopping halfway (half the green). Since (12.1) asserts that OM = ON, equality between M and N follows, and hence a proof of (12.1) is a proof of the theorem.
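Identity (12.1) is easy to confirm numerically for an arbitrary choice of parallelogram. A minimal sketch (the coordinates of A and C below are illustrative, not Apostol's):

```python
# Sketch: numerical check of identity (12.1): going to A and then half-way
# along AC lands at the same point as going half-way along the diagonal OB,
# for a parallelogram OABC with O at the origin.
from fractions import Fraction as F

def midpoint_via_A(OA, OC):
    """OM = OA + (1/2)(OC - OA): to A, then half-way along AC."""
    return tuple(a + F(1, 2) * (c - a) for a, c in zip(OA, OC))

def midpoint_via_B(OA, OC):
    """ON = (1/2)(OA + OC): half-way along the diagonal OB."""
    return tuple(F(1, 2) * (a + c) for a, c in zip(OA, OC))

OA, OC = (F(3), F(1)), (F(1), F(4))  # illustrative data
print(midpoint_via_A(OA, OC) == midpoint_via_B(OA, OC))  # → True
```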
12.5 The dot product

12.6 Length or norm of a vector

12.7 Orthogonality of vectors

12.8 Exercises
12.8.1 n. 1 (p. 456)

(a) ⟨a, b⟩ = −6
(b) ⟨b, c⟩ = 2
(c) ⟨a, c⟩ = 6
(d) ⟨a, b + c⟩ = 0
(e) ⟨a − b, c⟩ = 4

12.8.2 n. 2 (p. 456)

(a) ⟨a, b⟩ c = (2 · 2 + 4 · 6 + (−7) · 3) (3, 4, −5) = 7 (3, 4, −5) = (21, 28, −35)
(b) ⟨a, b + c⟩ = 2 · (2 + 3) + 4 · (6 + 4) + (−7) (3 − 5) = 64
(c) ⟨a + b, c⟩ = (2 + 2) · 3 + (4 + 6) · 4 + (−7 + 3) · (−5) = 72
(d) a ⟨b, c⟩ = (2, 4, −7) (2 · 3 + 6 · 4 + 3 · (−5)) = (2, 4, −7) · 15 = (30, 60, −105)
(e) a/⟨b, c⟩ = (2, 4, −7)/15 = (2/15, 4/15, −7/15)

12.8.3 n. 3 (p. 456)

The statement is false. Indeed,

⟨a, b⟩ = ⟨a, c⟩ ⇔ ⟨a, b − c⟩ = 0 ⇔ a ⊥ (b − c)

and the difference b − c may well be orthogonal to a without being the null vector. The simplest example that I can conceive is in R²: a = (1, 0), b = (1, 1), c = (1, 0). Then b − c = (0, 1) ≠ 0 and ⟨(1, 0), (0, 1)⟩ = 0. See also exercise 1, question (d).

12.8.4 n. 4 (p. 456)

⟨c, b⟩ = 0, c = xa + yb ⇒ ⟨xa + yb, b⟩ = 0 ⇔ x ⟨a, b⟩ + y ⟨b, b⟩ = 0 ⇔ 7x + 14y = 0

Let (x, y) ≡ (−2, 1). Then c = −2a + b.

12.8.5 n. 5 (p. 456)

The required vector, of coordinates (x, y, z), must satisfy the two conditions

⟨(2, 1, −1), (x, y, z)⟩ = 0
⟨(1, −1, 2), (x, y, z)⟩ = 0

that is,

I: 2x + y − z = 0
II: x − y + 2z = 0
I + II: 3x + z = 0
2I + II: 5x + y = 0

Thus the set of all such vectors can be represented in parametric form as follows:

{(α, −5α, −3α)}, α ∈ R
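The parametric family of n. 5 can be verified wholesale. A sketch checking several members against both given vectors:

```python
# Sketch: every (a, -5a, -3a) is orthogonal to both (2, 1, -1) and
# (1, -1, 2), as claimed in n. 5.

def dot3(u, v):
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def family(alpha):
    """A member of the parametric solution set {(a, -5a, -3a)}."""
    return (alpha, -5 * alpha, -3 * alpha)

checks = [dot3(family(t), w)
          for t in (-2, 1, 7)
          for w in ((2, 1, -1), (1, -1, 2))]
print(checks)  # → [0, 0, 0, 0, 0, 0]
```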
12.8.6 n. 7 (p. 456)

1 (solution with explicit mention of coordinates) From the last condition, for some α to be determined,

c = (α, 2α, −2α)
d = a − c = (2 − α, −1 − 2α, 2 + 2α)

Thus the second condition becomes

1 · (2 − α) + 2 · (−1 − 2α) − 2 · (2 + 2α) = 0

that is, −4 − 9α = 0, and hence

α = −4/9
c = (1/9) (−4, −8, 8)
d = (1/9) (22, −1, 10)

2 (same solution, with no mention of coordinates) From the last condition, for some α to be determined, c = αb, d = a − c = a − αb. Thus the second condition becomes ⟨b, a − αb⟩ = 0, that is, ⟨b, a⟩ − α ⟨b, b⟩ = 0, and hence

α = ⟨b, a⟩/⟨b, b⟩ = (2 · 1 + (−1) · 2 + 2 · (−2)) / (1² + 2² + (−2)²) = −4/9
c = (1/9) (−4, −8, 8)
d = (1/9) (22, −1, 10)
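The coordinate-free computation of n. 7 is really a reusable projection routine. A sketch run on a = (2, −1, 2) and b = (1, 2, −2):

```python
# Sketch: split a into c parallel to b plus d orthogonal to b (a = c + d),
# exactly as in solution 2 of n. 7.
from fractions import Fraction as F

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def decompose(a, b):
    """Return (c, d) with c = alpha*b, d = a - c, and <b, d> = 0."""
    alpha = F(dot(b, a), dot(b, b))
    c = tuple(alpha * x for x in b)          # component of a along b
    d = tuple(x - y for x, y in zip(a, c))   # component orthogonal to b
    return c, d

c, d = decompose((2, -1, 2), (1, 2, -2))
print(c)  # (1/9)(-4, -8, 8)
print(d)  # (1/9)(22, -1, 10)
```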
12.8.7 n. 10 (p. 456)

If b is the required vector, the following conditions must be satisfied:

⟨a, b⟩ = 0
‖b‖ = ‖a‖

If a = 0, then ‖a‖ = 0 and hence b = 0 as well. If a ≠ 0, let the coordinates of a be given by the couple (α, β), and the coordinates of b by the couple (x, y). The above conditions take the form

αx + βy = 0
x² + y² = α² + β²

Either α or β must be different from 0. Suppose α is nonzero (the situation is completely analogous if β ≠ 0 is assumed). Then the first equation gives

x = −(β/α) y    (12.2)

and substitution in the second equation yields

(β²/α² + 1) y² = α² + β²

that is, y² = α², and hence y = ±α. Substituting back in (12.2) gives x = ∓β. Thus there are only two solutions to the problem, namely

b = (−β, α) and b = (β, −α)

In particular:

(a) b = (−2, 1) or b = (2, −1)
(b) b = (−1, 2) or b = (1, −2)
(c) b = (−3, 1) or b = (3, −1)
(d) b = (−1, 1) or b = (1, −1)

12.8.8 n. 13 (p. 456)

For a = (a, b), the general solution of the previous exercise gives

b = (b, −a) or b = (−b, a)
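The two-solution formula just derived, b = (−a₂, a₁) or (a₂, −a₁), can be packaged and checked against both defining conditions. A sketch:

```python
# Sketch: the two plane vectors orthogonal to a and of the same length,
# as derived in n. 10.

def perps_same_length(a):
    """Return the two solutions b = (-a2, a1) and b = (a2, -a1)."""
    a1, a2 = a
    return (-a2, a1), (a2, -a1)

def dot2(u, v):
    return u[0] * v[0] + u[1] * v[1]

a = (1, 2)
for b in perps_same_length(a):
    # orthogonality and equality of squared lengths
    print(b, dot2(a, b), dot2(b, b) == dot2(a, a))
```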
12.8.9 n. 14 (p. 456)

1 (right angle in C, solution with explicit mention of coordinates) Let (x, y, z) be the coordinates of C. The two vectors CA and CB must be orthogonal, that is,

⟨[(2, −1, 1) − (x, y, z)], [(3, −4, −4) − (x, y, z)]⟩ = 0

that is,

⟨(2, −1, 1), (3, −4, −4)⟩ − ⟨(2, −1, 1) + (3, −4, −4), (x, y, z)⟩ + ⟨(x, y, z), (x, y, z)⟩ = 0

or

x² + y² + z² − 5x + 5y + 3z + 6 = 0

The above equation, when rewritten in a more perspicuous way by "completion of the squares"

(x − 5/2)² + (y + 5/2)² + (z + 3/2)² = 35/4

is seen to define the sphere of center P = (5/2, −5/2, −3/2) and radius √35/2.

2 (right angle in C, same solution, with no mention of coordinates) Let

a ≡ OA, b ≡ OB, x ≡ OC

(stressing that the point C, hence the vector OC, is unknown). Then orthogonality of the two vectors

CA = a − x, CB = b − x

is required, that is,

0 = ⟨a − x, b − x⟩ = ⟨a, b⟩ − ⟨a + b, x⟩ + ⟨x, x⟩

Equivalently,

‖x‖² − 2 ⟨(a + b)/2, x⟩ + ‖(a + b)/2‖² = ‖(a + b)/2‖² − ⟨a, b⟩

that is,

‖x − (a + b)/2‖² = ‖(a − b)/2‖²
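The sphere found in n. 14 can be spot-checked: any of its points should see A and B under a right angle. A sketch, assuming A = (2, −1, 1) and B = (3, −4, −4) as reconstructed above:

```python
# Sketch: a point C on the sphere of n. 14 sees AB under a right angle.
# A = (2, -1, 1) and B = (3, -4, -4) are the data as reconstructed here.
from fractions import Fraction as F

A, B = (2, -1, 1), (3, -4, -4)

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def on_sphere(C):
    """True iff C satisfies x^2 + y^2 + z^2 - 5x + 5y + 3z + 6 = 0."""
    x, y, z = (F(t) for t in C)
    return x * x + y * y + z * z - 5 * x + 5 * y + 3 * z + 6 == 0

def right_angle_at(C):
    """True iff the vectors CA and CB are orthogonal."""
    CA = tuple(a - c for a, c in zip(A, C))
    CB = tuple(b - c for b, c in zip(B, C))
    return dot(CA, CB) == 0

C = (5, -2, 0)  # one handy rational point of the sphere
print(on_sphere(C), right_angle_at(C))  # → True True
```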
The last characterization of the solution to our problem shows that the solution set is the locus of all points having fixed distance (‖(a − b)/2‖) from the midpoint of the segment AB, that is, the sphere having AB as a diameter. Indeed, if π is any plane containing AB, and C is any point of the circle of π having AB as diameter, it is known by elementary geometry that the triangle ACB is rectangle in C.

3 (right angle in B, solution with explicit mention of coordinates) With this approach, the vectors required to be orthogonal are BA and BC. Since BA = (−1, 3, 5) and BC = (x − 3, y + 4, z + 4), the following must hold:

0 = ⟨BA, BC⟩ = 3 − x + 3y + 12 + 5z + 20

that is,

x − 3y − 5z = 35

The solution set is the plane π through B and orthogonal to (1, −3, −5).

4 (right angle in B, same solution, with no mention of coordinates) Proceeding as in the previous point, the condition to be required is

0 = ⟨a − b, x − b⟩

that is,

⟨a − b, x⟩ = ⟨a − b, b⟩

Thus the solution plane π is seen to be through B and orthogonal to the segment connecting point B to point A.
5 (right angle in A) It should be clear at this stage that the solution set in this case is the plane π′ through A and orthogonal to AB, of equation

⟨b − a, x⟩ = ⟨b − a, a⟩

It is also clear that π and π′ are parallel.

12.8.10 n. 15 (p. 456)

p = (3α, 4α), q = (4β, −3β)

I: 3α + 4β = 1
II: 4α − 3β = 2
4II + 3I: 25α = 11
4I − 3II: 25β = −2

(α, β) = (1/25) (11, −2)

p = (1/25) (33, 44), q = (1/25) (−8, 6)

12.8.11 n. 16 (p. 456)

The question is identical to the one already answered in exercise 7 of this section. Recalling solution 2 to that exercise, we have that p ≡ OP must be equal to αb = αOB, with

α = ⟨a, b⟩/⟨b, b⟩ = 10/4 = 5/2

Thus

p = (5/2, 5/2, 5/2, 5/2), q = (−3/2, −1/2, 1/2, 3/2)

12.8.12 n. 17 (p. 456)

I: c1 − c2 + 2c3 = 0
II: 2c1 + c2 − c3 = 0
I + II: 3c1 + c3 = 0
I + 2II: 5c1 + c2 = 0

c = (−1, 5, 3)
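The 2 × 2 system of n. 15 is a natural candidate for Cramer's rule. A sketch in exact rationals (the data are those read off the text: p along (3, 4), q along (4, −3), p + q = (1, 2)):

```python
# Sketch: n. 15 as a 2x2 linear system solved by Cramer's rule.
from fractions import Fraction as F

def cramer2(a11, a12, a21, a22, b1, b2):
    """Solve [[a11, a12], [a21, a22]] (x, y)^T = (b1, b2)^T."""
    det = a11 * a22 - a12 * a21
    return F(b1 * a22 - b2 * a12, det), F(a11 * b2 - a21 * b1, det)

alpha, beta = cramer2(3, 4, 4, -3, 1, 2)
p = (3 * alpha, 4 * alpha)
q = (4 * beta, -3 * beta)
print(alpha, beta)  # 11/25 and -2/25
```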
12.8.13 n. 19 (p. 456)

It has been quickly seen in class that

‖a + b‖² = ‖a‖² + ‖b‖² + 2 ⟨a, b⟩

Substituting −b to b, I obtain

‖a − b‖² = ‖a‖² + ‖b‖² − 2 ⟨a, b⟩

By subtraction,

‖a + b‖² − ‖a − b‖² = 4 ⟨a, b⟩

as required. You should notice that the above identity has been already used at the end of point 2 in the solution to exercise 14 of the present section. Concerning the geometrical interpretation of the special case of the above identity,

‖a + b‖² = ‖a − b‖² if and only if ⟨a, b⟩ = 0

it is enough to notice that orthogonality of a and b is equivalent to the property for the parallelogram OACB (in the given order around the perimeter, with vertex C opposed to the vertex in the origin O) to be a rectangle, and that in such a rectangle ‖a + b‖ and ‖a − b‖ measure the lengths of the two diagonals.

12.8.14 n. 20 (p. 456)

‖a + b‖² + ‖a − b‖² = ⟨a + b, a + b⟩ + ⟨a − b, a − b⟩
= ⟨a, a⟩ + 2 ⟨a, b⟩ + ⟨b, b⟩ + ⟨a, a⟩ − 2 ⟨a, b⟩ + ⟨b, b⟩
= 2 ‖a‖² + 2 ‖b‖²

The geometric theorem expressed by the above identity can be stated as follows:

Theorem 2 In every parallelogram, the sum of the squares of the four sides equals the sum of the squares of the diagonals.

12.8.15 n. 21 (p. 457)

Let A, B, C, and D be the four vertices of the quadrilateral, and let M and N be the midpoints of the diagonals AC and DB. In order to simplify the notation, let

u ≡ AB, v ≡ BC, w ≡ CD, z ≡ DA = −(u + v + w)

Then

c ≡ AC = u + v
d ≡ DB = u + z = −(v + w)

MN = AN − AM = u − (1/2) d − (1/2) c
2MN = 2u + (v + w) − (u + v) = w + u
4 ‖MN‖² = ‖w‖² + ‖u‖² + 2 ⟨w, u⟩

Moreover,

‖z‖² = ‖u‖² + ‖v‖² + ‖w‖² + 2 ⟨u, v⟩ + 2 ⟨u, w⟩ + 2 ⟨v, w⟩
‖c‖² = ‖u‖² + ‖v‖² + 2 ⟨u, v⟩
‖d‖² = ‖v‖² + ‖w‖² + 2 ⟨v, w⟩

We are now in the position to prove the theorem:

(‖u‖² + ‖v‖² + ‖w‖² + ‖z‖²) − (‖c‖² + ‖d‖²) = ‖z‖² − ‖v‖² − 2 ⟨u, v⟩ − 2 ⟨v, w⟩
= ‖u‖² + ‖w‖² + 2 ⟨u, w⟩
= 4 ‖MN‖²

12.8.16 n. 22 (p. 457)

Orthogonality of xa + yb and 4ya − 9xb amounts to

0 = ⟨xa + yb, 4ya − 9xb⟩ = −9 ⟨a, b⟩ x² + 4 ⟨a, b⟩ y² + (4 ‖a‖² − 9 ‖b‖²) xy

Since the above must hold for every couple (x, y), choosing x = 0 and y = 1 gives ⟨a, b⟩ = 0; thus the condition becomes

(4 ‖a‖² − 9 ‖b‖²) xy = 0

and choosing now x = y = 1 gives 4 ‖a‖² = 9 ‖b‖². Since ‖a‖ is known to be equal to 6, it follows that ‖b‖ is equal to 4. Finally, since a and b have been shown to be orthogonal,

‖2a + 3b‖² = ‖2a‖² + ‖3b‖² + 2 ⟨2a, 3b⟩ = 2² · 6² + 3² · 4² + 2 · 2 · 3 · 0 = 288

and hence ‖2a + 3b‖ = 12√2.
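Both identities (the subtraction identity of n. 19 and the parallelogram law of n. 20) hold for arbitrary vectors, so a numerical check on sample data is cheap. A sketch (the sample vectors are illustrative):

```python
# Sketch: numerical check of
#   |a+b|^2 - |a-b|^2 = 4<a,b>          (n. 19)
#   |a+b|^2 + |a-b|^2 = 2|a|^2 + 2|b|^2 (n. 20)
# on integer vectors, where == comparisons are exact.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def norm_sq(u):
    return dot(u, u)

def check_identities(a, b):
    s = tuple(x + y for x, y in zip(a, b))  # a + b
    d = tuple(x - y for x, y in zip(a, b))  # a - b
    subtraction = norm_sq(s) - norm_sq(d) == 4 * dot(a, b)
    parallelogram = norm_sq(s) + norm_sq(d) == 2 * norm_sq(a) + 2 * norm_sq(b)
    return subtraction and parallelogram

print(check_identities((2, -1, 3), (1, 4, -2)))  # → True
```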
12.8.17 n. 24 (p. 457)

(a) For every x ∈ R,

‖a + xb‖² = ‖a‖² + x² ‖b‖² + 2x ⟨a, b⟩
= ‖a‖² + x² ‖b‖² if a ⊥ b
≥ ‖a‖² if a ⊥ b

(b) Since the norm of any vector is nonnegative, the following biconditional is true:

(∀x ∈ R, ‖a + xb‖ ≥ ‖a‖) ⇔ (∀x ∈ R, ‖a + xb‖² ≥ ‖a‖²)

Moreover,

(∀x ∈ R, ‖a + xb‖² − ‖a‖² ≥ 0) ⇔ (∀x ∈ R, x² ‖b‖² + 2x ⟨a, b⟩ ≥ 0)

If ⟨a, b⟩ is positive, the trinomial x² ‖b‖² + 2x ⟨a, b⟩ is negative for all x in the open interval (−2 ⟨a, b⟩/‖b‖², 0); if ⟨a, b⟩ is negative, the trinomial is negative in the open interval (0, −2 ⟨a, b⟩/‖b‖²). It follows that the trinomial can be nonnegative for every x ∈ R only if ⟨a, b⟩ is zero, that is, only if it reduces to the second degree term x² ‖b‖². In conclusion, I have proved that the conditional

(∀x ∈ R, ‖a + xb‖ ≥ ‖a‖) ⇒ ⟨a, b⟩ = 0

is true.

12.8.18 n. 25 (p. 457)

This is once again the question raised in exercises 7 and 17. Since the coordinate-free version of the solution procedure is completely independent from the number of coordinates of the vectors involved, the full answer to the problem has already been seen to be

c = (⟨b, a⟩/⟨a, a⟩) a
d = b − (⟨b, a⟩/⟨a, a⟩) a

in the general context of the linear space Rⁿ (where n ∈ N is arbitrary).
12.9 Projections. Angle between vectors in n-space

12.10 The unit coordinate vectors

12.11 Exercises

12.11.1 n. 1 (p. 460)

⟨a, b⟩ = 10, ‖b‖² = 4

The projection of a along b is (10/4) b = (5/2) b.

12.11.2 n. 2 (p. 460)

⟨a, b⟩ = 11, ‖b‖² = 9

The projection of a along b is (11/9) b = (11/9, 22/9, 22/9).

12.11.3 n. 3 (p. 460)

(a)

cos(a, i) = ⟨a, i⟩ / (‖a‖ ‖i‖) = 6/7
cos(a, j) = ⟨a, j⟩ / (‖a‖ ‖j‖) = 3/7
cos(a, k) = ⟨a, k⟩ / (‖a‖ ‖k‖) = −2/7

(b) There are just two vectors as required, the unit direction vector u of a, and its opposite:

u = a/‖a‖ = (6/7, 3/7, −2/7)
−a/‖a‖ = (−6/7, −3/7, 2/7)

12.11.4 n. 5 (p. 460)

Let

A ≡ (2, −1, 1), B ≡ (1, −3, −5), C ≡ (3, −4, −4)

Then

p ≡ BC = (2, −1, 1), q ≡ CA = (−1, 3, 5), r ≡ AB = (−1, −2, −6)

cos Â = ⟨−q, r⟩ / (‖−q‖ ‖r‖) = 35 / (√35 √41) = √35/√41
cos B̂ = ⟨p, −r⟩ / (‖p‖ ‖−r‖) = 6 / (√6 √41) = √6/√41
cos Ĉ = ⟨−p, q⟩ / (‖−p‖ ‖q‖) = 0

There is some funny occurrence in this exercise, which makes a wrong solution apparently correct (or almost correct), if one looks only at numerical results. If some confusion is made between points and vectors, one may be led into operating directly with the coordinates of points A, B, C in place of the coordinates of vectors BC, AC, AB, therefore computing

⟨OB, OC⟩ / (‖OB‖ ‖OC‖) = cos B̂OC (an angle of triangle OBC)
⟨OC, OA⟩ / (‖OC‖ ‖OA‖) = cos CÔA (an angle of triangle OCA)
⟨OA, OB⟩ / (‖OA‖ ‖OB‖) = cos AÔB (an angle of triangle OAB)

instead of cos B̂AC, cos C B̂A, cos AĈB, respectively, as the three angles of triangle ABC. Up to this point, there is nothing funny in doing that; it's only a mistake, and a fairly bad one, being of conceptual type. The funny thing is in the numerical data of the exercise: it so happens that

1. points A, B, C are coplanar with the origin O;
2. OA = BC and OB = AC, that is, as implicitly argued above, OACB is a parallelogram;
3. AÔB is a right angle.

Point 1 already singles out a somewhat special situation, but point 2 makes OACB a parallelogram, and point 3 makes it even a rectangle, which immediately implies AÔB = AĈB. In the picture below, OA is red, OB blue, OC green, and AB violet. It turns out that

‖OA‖ = ‖BC‖ = ‖p‖, ‖OB‖ = ‖AC‖ = ‖−q‖ = ‖q‖, ‖OC‖ = ‖AB‖ = ‖r‖
C ÔA = C B̂A, B̂OC = B̂AC

and such a special circumstance leads a wrong solution to yield the right numbers.
ci c =− = − cos ab kbk kck kbk kak c c 8 8 and hence that bc = π ± ac = 7 π. 9 π (the second value being superﬂuous). the same conclusion holds.7 (a) n. Indeed. bn i = kak = 6 2 v u n2 (n+1)2 r n(n+1) u 3 n+1 4 cos [ = √ q 2 a bn = t n2 (n+1)(2n+1) = 2 2n + 1 n n(n+1)(2n+1) 6 6 √ 3 π lim [ = lim cos [ = a bn a bn n→+∞ n→+∞ 2 3 12. even if a + c = 0 is not assumed. ci ha. bi and kck = kak it is easy to check that c cos bc = hb.11. µ cos ϑ − 1 sin ϑ − sin ϑ cos ϑ − 1 ¶µ x y ¶ = µ 0 0 ¶ cos ϑ sin ϑ − sin ϑ cos ϑ ¶µ x y ¶ = µ x y ¶ has the trivial solution as its unique solution if ¯ ¯ cos ϑ − 1 sin ϑ ¯ ¯ − sin ϑ cos ϑ − 1 ¯ ¯ ¯ 6= 0 ¯ .46 Vector algebra Moreover. bi = − ha. 461) ha.11. from hc.6 n. 10 (p. 8 (p. bi = cos ϑ sin ϑ − cos ϑ sin ϑ = 0 kak2 = kbk2 = cos2 ϑ + sin2 ϑ = 1 (b) The system µ that is. 460) We have r √ n (n + 1) n (n + 1) (2n + 1) n kbn k = ha. 12.
461) Let OABC be a rhombus. On the other hand.Exercises 47 The computation gives 1 + cos2 ϑ + sin2 ϑ − 2 cos ϑ 2 (1 − cos ϑ) cos ϑ ϑ Thus if ϑ ∈ {(−2kπ. dashed) and OB are congruent. The assumption that OABC is a rhombus is expressed by the equality kak = kck ((rhombus)) . which I have taken (without loss of generality) with one vertex in the origin O and with vertex B opposed to O.11. 11 (p. Thus − → AB = b − → CB = a 6= 6= 6= ∈ / 0 0 1 {2kπ}k∈Z From elementary geometry (see exercise 12 of section 4) the intersection point M of the two diagonals OB (green) and AC (violet) is the midpoint of both. 12. the oriented segments AB (blue. if ϑ ∈ {2kπ}k∈Z the coeﬃcient matrix of the above system is the identity matrix. 0). and the same is true for CB (red. Let − → − → − → a ≡ OA (red) b ≡ OB (green) c ≡ OC (blue) Since OABC is a parallelogram.8 n. (2k + 1) π)}k∈Z the only vector satisfying the required condition is (0. and every vector in R2 satisﬁes the required condition. dashed) and OA.
since in every parallelogram ABCD with a = AB and b = AC the − → diagonal vector CB is equal to a − b.9 n. c − ai = 0 or kck2 − kak2 + ha. 461) (a) That the function X Rn → R. 461) The equality to be proved is straightforward. The theorem specializes to Pythagoras’ theorem when a ⊥ b. 13 (p. ai = 0 and hence (by commutativity) kck2 = kak2 The last equality is an obvious consequence of ((rhombus)). minus their double product multiplied by the cosine of the angle they form. The “law of cosines” is often called Theorem 3 (Carnot) In every triangle. too: ∀a ∈ Rn . ci − hc.11. and a − b. As a matter of fact. 17 (p. Homogeneity is clear. 12. The statement to be proved is orthogonality between the diagonals D E − − → → OB. Thus a parallelogram has orthogonal diagonals if and only if it is a rhombus. a 7→ ai  i∈n is positive can be seen by the same arguments used for the ordinary norm.48 Vector algebra that is.11. since norms are nonnegative real numbers. AC = 0 ha + c.10 n. the converse is true. nonnegativity is obvious. so that the triangle ABC has side vectors a. b. ∀α ∈ R. the square of each side is the sum of the squares of the other two sides. X X X kαak = αai  = α ai  = α ai  = α kak i∈n i∈n i∈n . The equality in exam can be readily interpreted according to the theorem’s − → − → statement. and strict positivity still relies on the fact that a sum of concordant numbers can only be zero if all the addends are zero. 12. too.
Finally, the triangle inequality is much simpler to prove for the present norm (sometimes referred to as "the taxicab norm") than for the euclidean norm: ∀a ∈ R^n, ∀b ∈ R^n,

‖a + b‖ = Σ_{i∈n} |ai + bi| ≤ Σ_{i∈n} (|ai| + |bi|) = Σ_{i∈n} |ai| + Σ_{i∈n} |bi| = ‖a‖ + ‖b‖

(b) The subset of R² to be described is

S ≡ {(x, y) ∈ R² : |x| + |y| = 1} = S₊₊ ∪ S₋₊ ∪ S₋₋ ∪ S₊₋

where

S₊₊ ≡ {(x, y) ∈ R₊ × R₊ : x + y = 1}   (red)
S₋₊ ≡ {(x, y) ∈ R₋ × R₊ : −x + y = 1}   (green)
S₋₋ ≡ {(x, y) ∈ R₋ × R₋ : −x − y = 1}   (violet)
S₊₋ ≡ {(x, y) ∈ R₊ × R₋ : x − y = 1}   (blue)

Once the lines whose equations appear in the definitions of the four sets above are drawn, it is apparent that S is a square, with sides parallel to the quadrant bisectrices.

[Figure: the square S, with vertices on the coordinate axes.]

(c) The function f : R^n → R, a ↦ |Σ_{i∈n} ai| is nonnegative, but not positive (e.g., for n = 2, f (x, −x) = 0 for all x ∈ R). It is homogeneous: ∀a ∈ R^n, ∀α ∈ R,

f (αa) = |Σ_{i∈n} α ai| = |α| |Σ_{i∈n} ai| = |α| f (a)
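The two functions just discussed are easy to experiment with numerically. The following sketch (the helper names `taxicab_norm` and `f` are mine, not part of the text) checks the triangle inequality for the taxicab norm and the failure of positivity for f:

```python
# Minimal sketch (not from the original text) of the two functions of n. 17:
# the taxicab norm of part (a), and the function f of part (c).

def taxicab_norm(a):
    """Taxicab norm: sum of the absolute values of the components."""
    return sum(abs(x) for x in a)

def f(a):
    """Absolute value of the sum of the components: homogeneous but not positive."""
    return abs(sum(a))

a = (1.0, -2.0, 3.0)
b = (4.0, 0.5, -1.0)

# Triangle inequality for the taxicab norm.
lhs = taxicab_norm(tuple(x + y for x, y in zip(a, b)))
assert lhs <= taxicab_norm(a) + taxicab_norm(b)

# f vanishes on the nonzero vector (x, -x), so it is not a norm.
assert f((5.0, -5.0)) == 0.0
```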
Finally, f is subadditive, too (this is another way of saying that the triangle inequality holds). Indeed, ∀a ∈ R^n, ∀b ∈ R^n,

f (a + b) = |Σ_{i∈n} (ai + bi)| = |Σ_{i∈n} ai + Σ_{i∈n} bi| ≤ |Σ_{i∈n} ai| + |Σ_{i∈n} bi| = f (a) + f (b)

12.12 The linear span of a finite set of vectors

12.13 Linear independence

12.14 Bases

12.15 Exercises

12.15.1 n. 1 (p. 467)

x (i − j) + y (i + j) = (x + y, y − x)

(a) x + y = 1 and y − x = 0 yield (x, y) = (1/2, 1/2).
(b) x + y = 0 and y − x = 1 yield (x, y) = (−1/2, 1/2).
(c) x + y = 3 and y − x = −5 yield (x, y) = (4, −1).
(d) x + y = 7 and y − x = 5 yield (x, y) = (1, 6).

12.15.2 n. 3 (p. 467)

I    2x + y = 2
II   −x + 2y = −11
III  x − y = 7

I + III:   3x = 9
II + III:  y = −4
III (check): 3 + 4 = 7

The solution is (x, y) = (3, −4).
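Each of the four parts of n. 1 reduces to the same 2×2 system x + y = p, y − x = q, solved by x = (p − q)/2, y = (p + q)/2. A quick sketch (the helper name is mine):

```python
# Coordinates (x, y) of a vector (p, q) with respect to the basis (i - j, i + j):
# solving x + y = p, y - x = q gives x = (p - q)/2, y = (p + q)/2.
from fractions import Fraction

def coords(p, q):
    p, q = Fraction(p), Fraction(q)
    return ((p - q) / 2, (p + q) / 2)

assert coords(1, 0) == (Fraction(1, 2), Fraction(1, 2))   # part (a)
assert coords(0, 1) == (Fraction(-1, 2), Fraction(1, 2))  # part (b)
assert coords(3, -5) == (4, -1)                           # part (c)
assert coords(7, 5) == (1, 6)                             # part (d)
```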
12.15.3 n. 5 (p. 467)

(a) The linear combination 1i + 1j + 1k + (−1) (i + j + k) is nontrivial and spans the null vector.

(b) Since, for every (α, β, γ) ∈ R³,

αi + βj + γk = (α, β, γ)

the only linear combination of i, j, k generating the null vector is the trivial one.

12.15.4 n. 6 (p. 467)

(a) If there exists some α ∈ R ∖ {0} such that* a = αb, then the linear combination 1a − αb is nontrivial and it generates the null vector; a and b are linearly dependent.

(b) The argument is best formulated by the principle of counterposition, by proving that if a and b are linearly dependent, then they are parallel. Let a and b nontrivially generate the null vector: αa + βb = 0. According to the definition, at least one between α and β is nonzero; indeed, αa + βb = 0 with α = 0 and β ≠ 0 implies b = 0, and αa + βb = 0 with β = 0 and α ≠ 0 implies a = 0, but in the present case they are both nonzero, because both a and b have been assumed different from the null vector. Thus αa = −βb, and hence a = −(β/α) b (or b = −(α/β) a, if you prefer).

* Even if parallelism is defined more broadly − see the footnote in exercise 9 of section 4 − α cannot be zero in the present case, since a and b have been assumed both different from the null vector.

12.15.5 n. 7 (p. 467)

Linear independence of the vectors (a, b) and (c, d) has been seen in the lectures (proof of Steinitz' theorem, section 8, part 1) to be equivalent to the non-existence of nontrivial solutions to the system

ax + cy = 0
bx + dy = 0

The system is linear and homogeneous, and has unknowns and equations in equal number (hence part 2 of the proof of Steinitz' theorem does not apply). If a nontrivial solution (x, y) exists, both (a, c) and (b, d) must be proportional to (y, −x), and hence to each other; thus, from (a, c) = h (b, d), it is immediate to derive ad − bc = 0. The converse is immediate, too.

12.15.6 n. 8 (p. 467)

By the previous exercise, it is enough to require

(1 + t)² − (1 − t)² ≠ 0
4t ≠ 0
t ≠ 0
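The criterion of n. 7 − the pair (a, b), (c, d) is linearly dependent exactly when ad − bc = 0 − is immediate to check numerically, and in n. 8 it yields the quantity (1 + t)² − (1 − t)² = 4t. A sketch (the helper name, and the reading of n. 8's vectors as (1 + t, 1 − t) and (1 − t, 1 + t), are mine):

```python
def dependent_2d(a, b, c, d):
    """(a, b) and (c, d) are linearly dependent iff the determinant ad - bc vanishes."""
    return a * d - b * c == 0

# n. 7: proportional pairs are dependent, non-proportional ones are not.
assert dependent_2d(1, 2, 3, 6)          # (3, 6) = 3 * (1, 2)
assert not dependent_2d(1, 2, 3, 5)

# n. 8 (assumed vectors (1+t, 1-t), (1-t, 1+t)): dependent only for t = 0.
for t in (-2, -1, 1, 2):
    assert not dependent_2d(1 + t, 1 - t, 1 - t, 1 + t)
assert dependent_2d(1, 1, 1, 1)          # t = 0
```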
12.15.7 n. 10 (p. 467)

(a) It is clear that, for every (α, β, γ) ∈ R³,

αi + βj + γk = (α, β, γ)

so that (αi + βj + γk = 0) ⇒ α = β = γ = 0, and the triple (i, j, k) is linearly independent. Again from the proof of Steinitz' theorem, part 1, one may instead consider the system

I    x + y + z = 0
II   y + z = 0
III  3z = 0

It is immediate to derive that its unique solution is the trivial one, and hence that the given triple is linearly independent.

(b) We need to consider the following two systems:

I    x + y + z = 0        I    x + y + z = 0
II   y + z = 1            II   y + z = 0
III  3z = 0               III  3z = 1

It is again immediate that the unique solutions to the systems are (−1, 1, 0) and (0, −1/3, 1/3), respectively.

(c) Similarly, taking into account that, for every (α, β, γ) ∈ R³,

α (i + j + k) + βj + γk = (α, α + β, α + γ)

from α (i + j + k) + βj + γk = 0 it follows α = β = γ = 0, showing that the triple (i + j + k, j, k) is linearly independent.

(d) The last argument can be repeated almost verbatim for the triples (i, i + j + k, k) and (i, j, i + j + k); since

αi + β (i + j + k) + γk = (α + β, β, β + γ)
αi + βj + γ (i + j + k) = (α + γ, β + γ, γ)

from αi + βj + γ (i + j + k) = 0 it follows

α + γ = 0
β + γ = 0
γ = 0

that is, α = β = γ = 0, showing that the triple (i, j, i + j + k) is linearly independent.
12.15.8 n. 12 (p. 467)

(a) Let xa + yb + zc = 0, that is,

x + z = 0
x + y + z = 0
x + y = 0
y = 0

Then y, x, and z are in turn seen to be equal to 0, from the fourth, third, and first equation in that order. Thus (a, b, c) is a linearly independent triple.

(b) Any nontrivial linear combination d of the given vectors makes (a, b, c, d) a linearly dependent quadruple. For example,

d ≡ a + b + c = (2, 3, 2, 1)

(c) Let e ≡ (−1, −1, 1, 3), and suppose xa + yb + zc + te = 0, that is,

x + z − t = 0
x + y + z − t = 0
x + y + t = 0
y + 3t = 0

Then, subtracting the first equation from the second, y = 0; and hence t, x, and z are in turn seen to be equal to 0, from the fourth, third, and first equation in that order. Thus (a, b, c, e) is linearly independent, that is, a basis of R⁴.

(d) The coordinates of x with respect to {a, b, c, e}, which has just been seen to be a basis of R⁴, are given as the solution (s, t, u, v) to the following system:

s + u − v = 1
s + t + u − v = 2
s + t + v = 3
t + 3v = 4
It is more direct and orderly to work just with the table formed by the (extended) matrix (A | x), and to perform elementary row operations and column exchanges:

 s  t  u  v | x
 1  0  1 −1 | 1
 1  1  1 −1 | 2
 1  1  0  1 | 3
 0  1  0  3 | 4

(2nd row ← 2nd − 1st; 3rd row ← 3rd − 1st; then suitable row and column exchanges.) Thus the system has been given the following triangular form:

u + s − v = 1
s + t + v = 3
t = 1
3v = 3

which is easily solved from the bottom to the top: (s, t, u, v) = (1, 1, 1, 1).
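The coordinates found by elimination are easy to verify directly. The sketch below (mine) reads the basis vectors off the systems displayed above − a = (1, 1, 1, 0), b = (0, 1, 1, 1), c = (1, 1, 0, 0), e = (−1, −1, 1, 3), x = (1, 2, 3, 4) − and checks the answer:

```python
# Check (not from the text): x = 1*a + 1*b + 1*c + 1*e for the basis of n. 12.
# The vectors are reconstructed from the coefficient patterns of the systems above.
a = (1, 1, 1, 0)
b = (0, 1, 1, 1)
c = (1, 1, 0, 0)
e = (-1, -1, 1, 3)
s, t, u, v = 1, 1, 1, 1

combo = tuple(s * ai + t * bi + u * ci + v * ei
              for ai, bi, ci, ei in zip(a, b, c, e))
assert combo == (1, 2, 3, 4)
```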
12.15.9 n. 13 (p. 467)

(a) α (√3, 1, 0) + β (1, √3, 1) + γ (0, 1, √3) = (0, 0, 0)

I    √3 α + β = 0
II   α + √3 β + γ = 0
III  β + √3 γ = 0

√3 II − III − I gives 2β = 0, and then I and III give α = γ = 0:

(α, β, γ) = (0, 0, 0)

and the three given vectors are linearly independent.

(b) α (√2, 1, 0) + β (1, √2, 1) + γ (0, 1, √2) = (0, 0, 0)

I    √2 α + β = 0
II   α + √2 β + γ = 0
III  β + √2 γ = 0

This time the sum of equations I and III is the same as equation II multiplied by √2, and a linear dependence in the equation system suggests that it may well have nontrivial solutions. Indeed, (1, −√2, 1) is such a solution. Thus the given triple of vectors is linearly dependent.

(c) α (t, 1, 0) + β (1, t, 1) + γ (0, 1, t) = (0, 0, 0)

I    tα + β = 0
II   α + tβ + γ = 0
III  β + tγ = 0

It is clear that t = 0 makes the triple linearly dependent (the first and third vector coincide in this case). Let us suppose, then, t ≠ 0. From I and III, I deduce that α = γ. Equations II and III then become

tβ + 2γ = 0
β + tγ = 0

a 2 by 2 homogeneous system with determinant of coefficient matrix equal to t² − 2. Such a system has nontrivial solutions for t ∈ {√2, −√2}. In conclusion, the given triple is linearly dependent for t ∈ {0, √2, −√2}.
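The conclusion of part (c) can be rechecked with a 3×3 determinant: the triple is dependent exactly where the determinant of the matrix with columns (t, 1, 0), (1, t, 1), (0, 1, t) vanishes, and that determinant is t³ − 2t = t (t² − 2). A sketch (helper names are mine):

```python
# Sketch (not from the text): dependence of the triple in n. 13 (c) as a
# vanishing 3x3 determinant.
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def dep_det(t):
    # The matrix is symmetric, so rows and columns coincide.
    return det3([(t, 1, 0), (1, t, 1), (0, 1, t)])

# det = t^3 - 2t: zero at t = 0 and t = +-sqrt(2), nonzero elsewhere.
assert dep_det(0) == 0
assert abs(dep_det(2 ** 0.5)) < 1e-12
assert dep_det(1) != 0
```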
12.15.10 n. 14 (p. 468)

(a) Since the triple (a, b, c) is linearly independent,

α (a + b) + β (b + c) + γ (a + c) = 0 ⇔ (α + γ) a + (α + β) b + (β + γ) c = 0

⇔ I: α + γ = 0,  II: α + β = 0,  III: β + γ = 0

I + II − III:   2α = 0
−I + II + III:  2β = 0
I − II + III:   2γ = 0

and the triple (a + b, b + c, a + c) is linearly independent, too.

(b) On the contrary, choosing as nontrivial coefficient triple (α, β, γ) ≡ (1, 1, −1), since

(a − b) + (b + c) − (a + c) = 0

it is seen that the triple (a − b, b + c, a + c) is linearly dependent.

12.15.12 n. 17 (p. 468)

Call as usual u, v, w, and z the four vectors given in each case.

(a) It is clear that v = u + w, so that v can be dropped. Moreover, every linear combination of u and w has the same pattern of repeated coordinates, and cannot be equal to z. Thus (u, w, z) is a maximal linearly independent triple.

(b) Notice that

(1/2) (u + z) = e(1)   (1/2) (u − v) = e(2)   (1/2) (v − w) = e(3)   (1/2) (w − z) = e(4)

Since (u, v, w, z) spans the four canonical vectors, it is a basis of R⁴, and in particular (u, v, w, z) is maximal linearly independent.

(c) Similarly,

u − v = e(1)   v − w = e(2)   w − z = e(3)   z = e(4)

and (u, v, w, z) again spans the four canonical vectors; hence it is a basis of R⁴, and a maximal linearly independent quadruple.

12.15.11 n. 15 (p. 468)

Let a ≡ (0, 1, 1) and b ≡ (1, 1, 1). I look for two possible alternative choices of a vector c ≡ (x, y, z) such that the triple (a, b, c) is linearly independent. Since

αa + βb + γc = (β + γx, α + β + γy, α + β + γz)
the following conditional statement must be true for each (α, β, γ) ∈ R³:

I    β + γx = 0
II   α + β + γy = 0      ⇒   α = 0, β = 0, γ = 0
III  α + β + γz = 0

Subtracting equation III from equation II, I obtain γ (y − z) = 0, whatever my choice of x. Thus any choice of c with y ≠ z makes γ = 0 a consequence of II − III; then I yields β = 0 (independently of the value assigned to x), and then either II or III yields α = 0. As an example, possible choices for c are (0, 0, 1) and (1, 1, 0).

Conversely, if y = z, equations II and III are the same, and system I-III has infinitely many nontrivial solutions

α = −γy − β
β = −γx
γ free

provided either x or y (hence z) is different from zero.

12.15.13 n. 18 (p. 468)

(a) It is enough to prove that each element of T belongs to lin S. Let u, v, and w be the three elements of S (in the given order), and let a and b be the two elements of T (still in the given order); it is then readily checked that

a = u + w   b = 2w

(b) The converse inclusion holds as well, since each element of lin T is a linear combination in T, and lin S, as any subspace of a vector space, is closed with respect to formation of linear combinations. Inverting the above formulas,

w = (1/2) b   v = a − b   u = v + w = a − (1/2) b

Thus lin S = lin T.

12.15.14 n. 19 (p. 468)

The first example of basis containing the two given vectors is in point (c) of exercise 14 in this section, since the vectors u and v there coincide with the present ones. Keeping the same notation, a second example is (u, v, w + z, w − z).
Indeed, if c and d are the two elements of U,

c = u + v   d = u + 2v

which proves that lin U ⊆ lin S. Inverting the above formulas,

u = 2c − d   v = d − c

which proves that lin S ⊆ lin U. It remains to be established whether or not w is an element of lin U. Notice that

αc + βd = (α + β, 2α + 3β, 3α + 5β)

It follows that w is an element of lin U if and only if there exists (α, β) ∈ R² such that

I    α + β = 1
II   2α + 3β = 0
III  3α + 5β = −1

The above system has the unique solution (3, −2). In conclusion,

lin S = lin T = lin U

12.15.15 n. 20 (p. 468)

(a) The claim has already been proved in the last exercise: since from A ⊆ B I can infer A ⊆ lin B, and lin B is closed with respect to formation of linear combinations, lin A ⊆ lin B.

(b) By the last result, from A ∩ B ⊆ A and A ∩ B ⊆ B I infer

lin A ∩ B ⊆ lin A   and   lin A ∩ B ⊆ lin B

which yields

lin A ∩ B ⊆ lin A ∩ lin B

(c) It is enough to define

A ≡ {a, b}   B ≡ {a + b}

where the couple (a, b) is linearly independent. On the one hand,

A ∩ B = ∅   lin A ∩ B = {0}

On the other hand, B ⊆ lin A, and hence

lin B ⊆ lin lin A = lin A   lin A ∩ lin B = lin B

which contains the nonzero vector a + b; thus the inclusion of part (b) may well be proper.
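The membership of w in lin U is easy to verify numerically. From the display αc + βd = (α + β, 2α + 3β, 3α + 5β) one reads off c = (1, 2, 3) and d = (1, 3, 5), and the right-hand side of the system gives w = (1, 0, −1); the sketch below (mine, with those reconstructed vectors) checks the solution (3, −2):

```python
# Check (not from the text): w = 3c - 2d, so w is in lin U.
c = (1, 2, 3)
d = (1, 3, 5)
w = (1, 0, -1)
alpha, beta = 3, -2   # the unique solution found above

assert tuple(alpha * ci + beta * di for ci, di in zip(c, d)) == w
```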
12.16 The vector space Vn(C) of n-tuples of complex numbers

12.17 Exercises
Chapter 13

APPLICATIONS OF VECTOR ALGEBRA TO ANALYTIC GEOMETRY

13.1 Introduction

13.2 Lines in n-space

13.3 Some simple properties of straight lines

13.4 Lines and vector-valued functions

13.5 Exercises

13.5.1 n. 1 (p. 477)

A direction vector for the line L is v ≡ (1/2) PQ = (−2, 1). The parametric equations for L are

x = 2 − 2t
y = −1 + t

If t = 1 I get point (a) (the origin). The second coordinate of each remaining point determines the value of t to be tried in the first equation: a second coordinate equal to 1 requires t = 2, which gives x = −2; and y = 2 requires t = 3, which yields x = −4 ≠ 1, so that point (c) does not belong to L. Of the three points, only (e) belongs to L.

13.5.2 n. 2 (p. 477)

A direction vector for the line L is PQ = (4, 0), showing that L is horizontal. Thus a point belongs to L if and only if its second coordinate is equal to 1. Among the given points, (b), (d), and (e) have the second coordinate equal to 1, and hence belong to L.
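The membership test of n. 1 amounts to checking that the displacement from P is a multiple of the direction vector. A sketch (mine), with P = (2, −1) and v = (−2, 1) as above:

```python
# Sketch (not from the text): a point lies on the line through p with direction
# v iff the displacement from p is parallel to v (vanishing 2x2 determinant).
def on_line(point, p, v):
    dx, dy = point[0] - p[0], point[1] - p[1]
    return dx * v[1] - dy * v[0] == 0

P, v = (2, -1), (-2, 1)
assert on_line((0, 0), P, v)       # point (a), reached at t = 1
assert on_line((-2, 1), P, v)      # t = 2
assert not on_line((1, 2), P, v)   # y = 2 forces t = 3, but then x = -4
```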
3 (p.4 n.5.5 n.62 Applications of vector algebra to analytic geometry 13. hence the three points do not belong to the same line. 477) The parametric equations for L are x = −3 + h y = 1 − 2h z = 1 + 3h The following points belong to L: (c) (h = 1) (d) (h = −1) (e) (h = 5) 13. 13. 4 (p.3 n. (d). Thus a point belongs to L if and only if its second coordinate is equal to 1. I 2h + 2k + 3l = 0 II −2h + 3k + l = 0 III −6h + 4k + l = 0 I − 2II + III 2l = 0 I + II 5k + 4l = 0 2I − III 10h + 5l = 0 the only combination which is equal to the null vector is the trivial one. (b) Testing aﬃne dependence. 5 (p. 0). 0.5.5. Among the given points. (b). −2. 477) I solve each case in a diﬀerent a way. The three points do not belong to the same line. 477) The parametric equations for L are x = −3 + 4k y = 1+k z = 1 + 6h The following points belong to L: (b) (h = −1) (e) µ 1 h= 2 ¶ (f ) µ 1 h= 3 ¶ − → A direction vector for the line L is P Q = (4. −2) − → QR = (−1. and (e) belong to L. showing that L is horizontal. (a) − → P Q = (2. . 2) The two vectors are not parallel.
(c) The line through P and R has equations

x = 2 + 3k
y = 1 − 2k
z = 1

Trying to solve for k with the coordinates of Q, I get k = −4/3 from the first equation and k = −1 from the second. Q does not belong to L (P, R), and hence the three points do not belong to the same line.

13.5.6 n. 6 (p. 477)

The question is easy, but it must be answered by following some orderly path, in order to achieve some economy of thought and of computations (there are (8 choose 2) = 28 different oriented segments joining two of the eight given points). First, all the eight points have their third coordinate equal to 1, and hence they belong to the plane π of equation z = 1. We can concentrate on the first two coordinates only, and we can get a very good hint on the situation by drawing a two-dimensional picture, to be considered as a picture of the plane π.

[Figure: the eight points plotted in the plane π, both axes running from −15 to 15.]

Since the first two components of AB are (4, −2), I check that, among the two-dimensional projections of the oriented segments connecting A with the points D to H,

pxy AD = (−4, 2)   pxy AE = (−1, 0)   pxy AF = (−6, 3)
pxy AG = (−15, 8)   pxy AH = (12, −7)
only the first and third are parallel to pxy AB. Thus all elements of the set P1 ≡ {A, B, C, D, F} belong to the same (black) line LABC, and it only remains to be checked whether or not the lines through the couples of points (E, G) (red), (G, H) (blue), (H, E) (green) coincide, and whether or not any elements of P1 belong to them. Direction vectors for these three lines are

(1/2) pxy EG = (−7, 4)   (1/3) pxy GH = (9, −5)   pxy HE = (−13, 7)

and as normal vectors for them I may take

nEG ≡ (4, 7)   nGH ≡ (5, 9)   nHE ≡ (7, 13)

By requiring point E to belong to the first and last, and point H to the second, I end up with their equations as follows:

LEG : 4x + 7y = 11
LGH : 5x + 9y = 16
LHE : 7x + 13y = 20

All these three lines are definitely not parallel to LABC, hence each of them intersects it at exactly one point. It is seen by direct inspection that, within the set P1, C belongs to LEG, F belongs to LGH, and neither A nor B, nor D belong to LHE; hence no two of them can both belong to a different line. Thus there are three (maximal) sets of at least three collinear points, namely

P1 ≡ {A, B, C, D, F}   P2 ≡ {C, E, G}   P3 ≡ {F, G, H}

13.5.7 n. 7 (p. 477)

The coordinates of the intersection point are determined by the following equation system:

I    1 + h = 2 + 3k
II   1 + 2h = 1 + 8k
III  1 + 3h = 13k

Subtracting the third equation from the sum of the first two, I get k = 1; substituting this value in any of the three equations, I get h = 4. The three equations are consistent, and the two lines intersect at the point of coordinates (5, 9, 13).

13.5.8 n. 8 (p. 477)

(a) The coordinates of the intersection point of L (P, a) and L (Q, b) are determined by the vector equation

P + ha = Q + kb

which gives

P − Q = kb − ha

that is,

PQ ∈ span {a, b}    (13.1)
13.5.9 n. 9 (p. 477)

X (t) = (1 + t, 2 − 2t, 3 + 2t)

(a) d (t) ≡ ‖Q − X (t)‖² = (2 − t)² + (1 + 2t)² + (−2 − 2t)² = 9t² + 8t + 9

(b) The graph of the function t ↦ d (t) ≡ 9t² + 8t + 9 is a parabola with the point (t0, d (t0)) = (−4/9, 65/9) as vertex. The minimum squared distance is 65/9, and the minimum distance is √65 / 3.

(c)

X (t0) = (5/9, 26/9, 19/9)   Q − X (t0) = (22/9, 1/9, −10/9)

⟨Q − X (t0), A⟩ = (22 − 2 − 20) / 9 = 0

that is, the point on L of minimum distance from Q is the orthogonal projection of Q on L.

13.5.10 n. 10 (p. 477)

(a) Let A ≡ (α, β, γ), P ≡ (λ, µ, ν), and Q ≡ (ϱ, σ, τ). The points of the line L through P with direction vector OA are represented in parametric form (the generic point of L is denoted X (t)):

X (t) ≡ P + At = (λ + αt, µ + βt, ν + γt)

f (t) ≡ ‖Q − X (t)‖² = ‖Q − P − At‖² = ‖A‖² t² + 2 ⟨P − Q, A⟩ t + ‖P − Q‖² = at² + bt + c

where

a ≡ ‖A‖² = α² + β² + γ²
b ≡ 2 ⟨P − Q, A⟩ = 2 [α (λ − ϱ) + β (µ − σ) + γ (ν − τ)]
c ≡ ‖Q‖² + ‖P‖² − 2 ⟨Q, P⟩ = ‖P − Q‖² = (λ − ϱ)² + (µ − σ)² + (ν − τ)²

The above quadratic polynomial has a second degree term with a positive coefficient; its graph is a parabola with vertical axis and vertex in the point of coordinates

(−b / 2a, −∆ / 4a) = ( ⟨Q − P, A⟩ / ‖A‖², ‖P − Q‖² − ⟨Q − P, A⟩² / ‖A‖² )
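The computations of n. 9 can be rechecked exactly with rational arithmetic. In the sketch below (mine), the line is X(t) = (1 + t, 2 − 2t, 3 + 2t), so P = X(0) = (1, 2, 3) and A = (1, −2, 2), and Q = (3, 3, 1) as reconstructed from the expansion of d(t):

```python
# Sketch (not from the text): orthogonal projection of Q on the line of n. 9.
from fractions import Fraction as F

A = (1, -2, 2)
P = (1, 2, 3)
Q = (3, 3, 1)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# d(t) = |A|^2 t^2 + 2<P-Q, A> t + |P-Q|^2 is minimized at t0 = <Q-P, A>/|A|^2.
QP = tuple(qi - pi for pi, qi in zip(P, Q))
t0 = F(dot(QP, A), dot(A, A))
X0 = tuple(pi + ai * t0 for pi, ai in zip(P, A))
residual = tuple(qi - xi for qi, xi in zip(Q, X0))

assert t0 == F(-4, 9)
assert X0 == (F(5, 9), F(26, 9), F(19, 9))
assert dot(residual, A) == 0                  # Q - X(t0) orthogonal to the line
assert dot(residual, residual) == F(65, 9)    # minimum squared distance
```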
Thus the minimum value of this polynomial is achieved at

t0 ≡ −b / 2a = ⟨Q − P, A⟩ / ‖A‖² = [α (ϱ − λ) + β (σ − µ) + γ (τ − ν)] / (α² + β² + γ²)

and it is equal to

[‖A‖² ‖P − Q‖² − ‖A‖² ‖P − Q‖² cos² ϑ] / ‖A‖² = ‖P − Q‖² sin² ϑ

where ϑ is the angle formed by the direction vector OA of the line and the vector QP carrying the point Q not on L to the point P on L.

(b)

Q − X (t0) = Q − (P + At0) = (Q − P) − [⟨Q − P, A⟩ / ‖A‖²] A

⟨Q − X (t0), A⟩ = ⟨Q − P, A⟩ − [⟨Q − P, A⟩ / ‖A‖²] ⟨A, A⟩ = 0

i.e., the point on L of minimum distance from Q is the orthogonal projection of Q on L.

13.5.11 n. 11 (p. 477)

The vector equation for the coordinates of an intersection point of L (P, a) and L (Q, a) is

P + ha = Q + ka    (13.2)

It gives

P − Q = (k − h) a

and there are two cases: either

PQ ∉ span {a}

and equation (13.2) has no solution (the two lines are parallel); or

PQ ∈ span {a},   that is,   ∃c ∈ R, P − Q = ca

and equation (13.2) is satisfied by all couples (h, k) such that k − h = c (the two lines intersect in infinitely many points, that is, they coincide). The two expressions P + ha and Q + ka are seen to provide alternative parametrizations of the same line; for each given point of L, the transformation h ↦ k (h) ≡ h + c identifies the parameter change which allows to shift from the first parametrization to the second.
13.5.12 n. 12 (p. 477)

13.6 Planes in euclidean n-spaces

13.7 Planes and vector-valued functions

13.8 Exercises

13.8.1 n. 2 (p. 482)

We have:

PQ = (2, 2, 3)   PR = (2, −2, −1)

so that the parametric equations of the plane are

x = 1 + 2s + 2t
y = 1 + 2s − 2t    (13.3)
z = −1 + 3s − t

(a) Equating (x, y, z) to (4, 0, −1/2) in (13.3) we get

2s + 2t = 3
2s − 2t = −1
3s − t = 1/2

yielding s = 1/2 and t = 1, so that the point with coordinates (4, 0, −1/2) belongs to the plane.

(b) Similarly, equating (x, y, z) to (2, 2, 1/2) in (13.3) we get

2s + 2t = 1
2s − 2t = 1
3s − t = 3/2

yielding s = 1/2 and t = 0, so that the point with coordinates (2, 2, 1/2) belongs to the plane.

(c) Again, proceeding in the same way with the triple (−3, 1, −3), we get

2s + 2t = −4
2s − 2t = 0
3s − t = −2
yielding s = t = −1, so that the point with coordinates (−3, 1, −3) belongs to the plane.

(d) The point of coordinates (3, 1, 3) does not belong to the plane, because the system

2s + 2t = 2
2s − 2t = 0
3s − t = 4

is inconsistent (from the first two equations we get s = t = 1/2, contradicting the third equation).

(e) Finally, the point of coordinates (0, 0, 0) does not belong to the plane, because the system

2s + 2t = −1
2s − 2t = −1
3s − t = 1

is inconsistent (the first two equations yield s = −1/2 and t = 0, contradicting the third equation).

13.8.2 n. 3 (p. 482)

(a)

x = 1 + t
y = 2 + s + t
z = 1 + 4t

(b)

u = (1, 1, 4) − (0, 1, 0) = (1, 0, 4)   v = (1, 2, 1) − (0, 1, 0) = (1, 1, 1)

x = s + t
y = 1 + s
z = s + 4t
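The consistency test of n. 2 is mechanical: the first two equations force s and t, and the third either holds or does not. A sketch (mine), for the plane through (1, 1, −1) with direction vectors (2, 2, 3) and (2, −2, −1):

```python
# Sketch (not from the text): membership in the plane (13.3) of n. 2.
# System: 2s+2t = x-1, 2s-2t = y-1, 3s-t = z+1.
from fractions import Fraction as F

def in_plane(x, y, z):
    s = F((x - 1) + (y - 1), 4)     # sum of the first two equations: 4s
    t = F((x - 1) - (y - 1), 4)     # their difference: 4t
    return 3 * s - t == z + 1       # the third equation must then hold

assert in_plane(4, 0, F(-1, 2))     # part (a)
assert in_plane(2, 2, F(1, 2))      # part (b)
assert in_plane(-3, 1, -3)          # part (c)
assert not in_plane(3, 1, 3)        # part (d)
assert not in_plane(0, 0, 0)        # part (e)
```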
13.8.3 n. 4 (p. 482)

The plane M has parametric equations

x = 1 + s − 2t
y = 2 + s + 4t    (13.4)
z = 2s + t

(a) Solving for s and t with the coordinates of the first point, I get

I    s − 2t + 1 = 0
II   s + 4t + 2 = 0
III  2s + t = 0

I + II − III:   t + 3 = 0
2III − I − II:  2s − 3 = 0
check on I:     3/2 + 6 + 1 ≠ 0

and (0, 0, 0) does not belong to the plane M. The second point is directly seen to belong to M from the parametric equations (13.4). Finally, with the third point I get

I    s − 2t + 1 = 2
II   s + 4t + 2 = −3
III  2s + t = −3

I + II − III:   t + 3 = 2
2III − I − II:  2s − 3 = −5

check on I:     −1 + 2 + 1 = 2
check on II:    −1 − 4 + 2 = −3
check on III:   −2 − 1 = −3

so that the third point belongs to M.

(b) The answer has already been given by writing equation (13.4):

P ≡ (1, 2, 0)   a = (1, 1, 2)   b = (−2, 4, 1)

13.8.4 n. 5 (p. 482)

This exercise is a replica of material already presented in class and inserted in the notes.

(a) If p + q + r = 1, then

pP + qQ + rR = P − (q + r) P + qQ + rR = P + q (Q − P) + r (R − P)

and the coordinates of pP + qQ + rR satisfy the parametric equations of the plane M through P, Q, and R.

(b) If S is a point of M, there exist real numbers q and r such that

S = P + q (Q − P) + r (R − P)

Then, defining p ≡ 1 − (q + r),

S = (1 − q − r) P + qQ + rR = pP + qQ + rR
13.8.5 n. 6 (p. 482)

I use three different methods for the three cases.

(a) The parametric equations for the first plane π1 are

I    x = 2 + 3h − k
II   y = 3 + 2h − 2k
III  z = 1 + h − 3k

Eliminating the parameter h (I − II − III) I get

4k = x − y − z + 2

Eliminating the parameter k (I + II − III) I get

4h = x + y − z − 4

Substituting in III (or, better, in 4 · III),

4z = 4 + x + y − z − 4 − 3x + 3y + 3z − 6

I obtain a cartesian equation for π1:

x − 2y + z + 3 = 0

(b) I consider the four coefficients (a, b, c, d) of the cartesian equation of π2 as unknowns, and I require that the three given points belong to π2:

2a + 3b + c + d = 0
−2a − b − 3c + d = 0
4a + 3b − c + d = 0

which I solve by elimination:

  2  3  1  1        2  3  1  1        2  3  1  1
 −2 −1 −3  1   →    0  1 −1  1   →    0  1 −1  1
  4  3 −1  1        0 −3 −3 −1        0  0 −3  1

(2nd row ← (1/2)(2nd + 1st); 3rd row ← 3rd − 2 · 1st; then 3rd row ← (1/2)(3rd + 3 · 2nd).)
From the last equation (which reads −3c + d = 0) I assign values 1 and 3 to c and d, respectively; from the second (which reads b − c + d = 0) I get b = −2, and from the first (which is unchanged) I get a = 1. The cartesian equation of π2 is

x − 2y + z + 3 = 0

(notice that π1 and π2 coincide).

(c) This method requires the vector product (called cross product by Apostol), which is presented in the subsequent section; however, I have thought it better to show its application here already. The plane π3 and the given plane being parallel, they have the same normal direction; such a normal is n = (1, −2, 1). A cartesian equation for π3 is therefore

x − 2y + z + d = 0

and the coefficient d is determined by the requirement that π3 contains (2, 3, 1) (I may have as well taken one of the first two points of part a):

2 − 6 + 1 + d = 0

yielding

x − 2y + z + 3 = 0

Thus π3 coincides with π2, since it has the same normal and has a common point with it.

13.8.6 n. 7 (p. 482)

(a) Only the first two points belong to the given plane.

(b) I assign the values −1 and 1 to y and z in the cartesian equation of the plane M, in order to obtain the remaining coordinate of a point of M, and I obtain P = (1, −1, 1). Any two vectors which are orthogonal to the normal n = (3, −5, 1) and form a linearly independent couple can be chosen as direction vectors for M. Thus I assign values arbitrarily to d2 and d3 in the orthogonality condition

3d1 − 5d2 + d3 = 0

and I get

dI = (−1, 0, 3)   dII = (5, 3, 0)

The parametric equations for M are

x = 1 − s + 5t
y = −1 + 3t
z = 1 + 3s
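The cross-product route of n. 6 (c) is easy to verify numerically: the plane π1 passes through (2, 3, 1) with direction vectors (3, 2, 1) and (−1, −2, −3), read off from its parametric equations. A sketch (mine):

```python
# Sketch (not from the text): recovering the cartesian equation of pi_1
# from a cross product of its two direction vectors.
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

p = (2, 3, 1)
n = cross((3, 2, 1), (-1, -2, -3))
assert n == (-4, 8, -4)            # proportional to (1, -2, 1)

# Equation <n, (x, y, z) - p> = 0, i.e. x - 2y + z + 3 = 0 after dividing by -4.
d = -sum(ni * pi for ni, pi in zip(n, p))
assert (n[0] / -4, n[1] / -4, n[2] / -4, d / -4) == (1.0, -2.0, 1.0, 3.0)
```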
13.8.7 n. 8 (p. 482)

The question is formulated in a slightly insidious way (perhaps Tom did it on purpose...), because the parametric equations of the two planes M and M′ are written using the same names for the two parameters. This may easily lead to an incorrect attempt to solve the problem by setting up a system of three equations in the two unknowns s and t, which is likely to have no solutions, thereby suggesting the wrong answer that M and M′ are parallel. On the contrary, the equation system for the coordinates of points in M ∩ M′ has four unknowns:

I    1 + 2s − t = 2 + h + 3k
II   1 − s = 3 + 2h + 2k    (13.5)
III  1 + 3s + 2t = 1 + 3h + k

I take all the variables on the left-hand side, and I proceed by elimination:

  s  t  h  k | const.        s  t   h   k | const.
  2 −1 −1 −3 |  1           −1  0  −2  −2 |  2
 −1  0 −2 −2 |  2     →      0 −1  −5  −7 |  5
  3  2 −3 −1 |  0            0  0 −19 −21 | 16

(1st row ← 1st + 2 · 2nd; 3rd row ← 3rd + 3 · 2nd; 1st ↔ 2nd; then 3rd row ← 3rd + 2 · 2nd.)

A handy solution to the last equation (which reads −19h − 21k = 16) is obtained by assigning values 8 and −8 to h and k. This is already enough to get the first point from the parametric equations of M′: Q = (−14, 3, 17). However, just to check on the computations, I proceed to finish up the elimination, which leads to t = 11 from the second row, and to s = −2 from the first; substituting (s, t) = (−2, 11) in the left-hand side of (13.5), I do get indeed (−14, 3, 17). Another handy solution to the equation corresponding to the last row of the final elimination table is (h, k) = (−2/5, −2/5), which leads to t = −1/5 from the second row and to s = −2/5 from the first. This gives R = (2/5, 7/5, −3/5), and the check by means of (s, t) = (−2/5, −1/5) is all right.

13.8.8 n. 9 (p. 482)

(a) A normal vector for M is

n = (1, 2, 3) × (3, 2, 1) = (−4, 8, −4)
whereas the coefficient vector in the equation of M′ is (1, −2, 1). Since the latter is parallel to the former, and the coordinates of the point P ∈ M do not satisfy the equation of M′, the two planes are parallel.

(b) A cartesian equation for M is

x − 2y + z + d = 0

From substitution of the coordinates of P, the coefficient d is seen to be equal to 3. The coordinates of the points of the intersection line L ≡ M ∩ M″ satisfy the system

x − 2y + z + 3 = 0
x + 2y + z = 0

By sum and subtraction, the line L can be represented by the simpler system

2x + 2z = −3
4y = 3

as the intersection of two different planes π and π′, with π parallel to the y-axis, and π′ parallel to the xz-plane. Coordinates of points on L are now easy to produce, e.g.

Q = (−3/2, 3/4, 0)   R = (0, 3/4, −3/2)

13.8.9 n. 10 (p. 483)

The parametric equations for the line L are

x = 1 + 2r
y = 1 − r
z = 1 + 3r

and the parametric equations for the plane M are

x = 1 + 2s
y = 1 + s + t
z = −2 + 3s + t

The coordinates of a point of intersection between L and M must satisfy the system of equations

2r − 2s = 0
r + s + t = 0
3r − 3s − t = −3

The first equation yields r = s, from which the third and second equations give t = 3 and r = s = −3/2. Thus L ∩ M consists of a single point, namely, the point of coordinates (−2, 5/2, −7/2).
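The single intersection point of n. 10 can be confirmed by substituting the parameter values back into both parametrizations. A sketch (mine):

```python
# Sketch (not from the text): the line L(r) = (1+2r, 1-r, 1+3r) meets the
# plane M(s, t) = (1+2s, 1+s+t, -2+3s+t) at r = s = -3/2, t = 3.
from fractions import Fraction as F

r = s = F(-3, 2)
t = 3

line_pt = (1 + 2 * r, 1 - r, 1 + 3 * r)
plane_pt = (1 + 2 * s, 1 + s + t, -2 + 3 * s + t)

assert line_pt == plane_pt == (-2, F(5, 2), F(-7, 2))
```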
13.8.10 n. 11 (p. 483)

The parametric equations for L are

x = 1 + 2t
y = 1 − t
z = 1 + 3t

and a direction vector for L is v = (2, −1, 3).

(a) L is parallel to the given plane (which I am going to denote πa) if v is a linear combination of a ≡ (2, 1, 3) and b ≡ (3, 1, 1). Thus I consider the system

2x + 3y = 2
x + y = −1
3x + y = 3

where subtraction of the second equation from the third gives 2x = 4, hence x = 2; this yields y = −3 in the last two equations, contradicting the first. Alternatively, computing the determinant of the matrix having v, a, b as columns

| 2  2  3 |   | 4  0  1 |
| −1 1  1 | = | −1 1  1 | = | 4  1 | = −14
| 3  3  1 |   | 6  0 −2 |   | 6 −2 |

shows that {v, a, b} is a linearly independent set. With either reasoning, it is seen that L is not parallel to πa.

(b) By computing two independent directions for the plane πb, e.g. PQ = (2, 4, 4) and PR = (1, 3, 1), I am reduced to the previous case. The system to be studied now is

2x + y = 2
4x + 3y = −1
4x + y = 3

where subtraction of the second equation from the third gives y = −2, whereas subtraction of the first from the third yields x = 1/2, these two values contradicting all equations. The three equations are again inconsistent, and L is not parallel to πb.

(c) Since a normal vector to πc has for components the coefficients of the unknowns in the equation of πc, it suffices to check orthogonality between v and (1, 2, 3):

⟨(2, −1, 3), (1, 2, 3)⟩ = 9 ≠ 0

L is not parallel to πc.
13.8.11 n. 12 (p. 483)

Let d be a direction vector for the line L, and let Q ≡ (xQ, yQ, zQ) be a point of L. A plane containing L and the point P ≡ (xP, yP, zP) is the plane π through Q with direction vectors d and QP:

x = xQ + h d1 + k (xP − xQ)
y = yQ + h d2 + k (yP − yQ)
z = zQ + h d3 + k (zP − zQ)

L belongs to π, because the parametric equation of π reduces to that of L when k is assigned value 0; and P belongs to π, because the coordinates of P are obtained from the parametric equation of π when h is assigned value 0 and k is assigned value 1. Since any two distinct points on L, together with P, determine a unique plane, π is the only plane containing L and P.

13.8.12 n. 13 (p. 483)

Since every point of L belongs to M, M contains the points having coordinates equal to (1, 2, 3) + t (1, 1, 1) for each t ∈ R; choosing, e.g., t = 1, M contains the points (1, 2, 3) and (2, 3, 4). Thus the parametric equations for M are

x = 1 + r + s
y = 2 + r + s
z = 3 + r + 2s

It is possible to eliminate the two parameters at once, e.g., by subtracting the first equation from the second, obtaining

x − y + 1 = 0

13.8.13 n. 14 (p. 483)

Let R be any point of the given plane π, other than P or Q. A point S then belongs to M if and only if there exist real numbers q and r such that

S = P + q (Q − P) + r (R − P)    (13.6)

If S belongs to the line through P and Q, there exists a real number p such that

S = P + p (Q − P)

Then, by defining

q ≡ p   r ≡ 0

it is immediately seen that condition (13.6) is satisfied.
13.9 The cross product

13.10 The cross product expressed as a determinant

13.11 Exercises

13.11.1 n. 1 (p. 487)

(a) A × B = −2i + 3j − k.
(b) B × C = 4i − 5j + 3k.
(c) C × A = 4i − 4j + 2k.
(d) A × (C × A) = 8i + 10j + 4k.
(e) (A × B) × C = 8i + 3j − 7k.
(f) A × (B × C) = 10i + 11j + 5k.
(g) (A × C) × B = −2i − 8j − 12k.
(h) (A + B) × (A − C) = 2i − 2j.
(i) (A × B) × (A × C) = −2i + 4k.

13.11.2 n. 2 (p. 487)

(a) area ABC = (1/2) ‖AB × AC‖ = 15/2

(b) area ABC = (1/2) ‖AB × AC‖ = 3√35 / 2

13.11.3 n. 3 (p. 487)

(a) (A × B) / ‖A × B‖ = −(4/√26) i + (3/√26) j + (1/√26) k, or the opposite, (4/√26) i − (3/√26) j − (1/√26) k.

(b) (A × B) / ‖A × B‖ = −(41/√2054) i − (18/√2054) j + (7/√2054) k, or the opposite, (41/√2054) i + (18/√2054) j − (7/√2054) k.

(c) (A × B) / ‖A × B‖ = −(1/√6) i − (2/√6) j − (1/√6) k, or the opposite, (1/√6) i + (2/√6) j + (1/√6) k.
ci = hb. r) for the sake of notational simplicity. (b × a)i + hb. n) and b ≡ (p. 487) 13.π bc 2 c . bc = −π is impossible. (b × a) − bi = hb.4 − → − → CA × AB = (−i + 2j − 3k) × (2j + k) = 8i − j − 2k 13.5 n. (b) hb.11. m. 5 (p. 4 (p.11. and kck = 5. ci < 0 kbk c cos bc = − <0 kck i i c ∈ π. 1. 487) ha. b × ai = 0 because b × a is orthogonal to a. q. b + ci = ha. Thus hb. Then ka × bk2 = (mr − nq)2 + (np − lr)2 + (lq − mp)2 = m2 r2 + n2 q 2 − 2mnrq + n2 p2 + l2 r2 −2lnpr + l2 q 2 + m2 p2 − 2lmpq ¡2 ¢¡ ¢ kak2 kbk2 = l + m2 + n2 p2 + q2 + r2 = l2 p2 + l2 q 2 + l2 r2 + m2 p2 + m2 q 2 +m2 r2 + n2 p2 + n2 q 2 + n2 r2 ¢ ¡ (ka × bk = kak kbk) ⇔ ka × bk2 = kak2 kbk2 ⇔ (lp + mq + nr)2 = 0 ⇔ ha. 487) Let a ≡ (l.6 (a) n. because \ kck2 = kb × ak2 + kbk2 − 2 kb × ak kbk cos (b × a) b = kb × ak2 + kbk2 > kbk2 since (a. 0. 1) = (1. −bi = − kbk2 because b × a is orthogonal to b. 6 (p. √ (c) By the formula above. kck2 = 22 + 12 .11. −1) ° √3 1 °− → ° → − ° area ABC = °AB × AC ° = 2 2 n. . Moreover. 1) × (1. b) is linearly independent and kb × ak > 0. bi = 0 13.Exercises 77 (c) − → − → AB × AC = (0. 1.
bi = 0.11. bi2 = 1 so that a × b is a unit vector as well.78 Applications of vector algebra to analytic geometry 13. Both the righthand rule and the lefthand rule yield now (a × b) × a = b (d) (a3 b1 − a1 b3 ) a3 − (a1 b2 − a2 b1 ) a2 (a × b) × a = (a1 b2 − a2 b1 ) a1 − (a2 b3 − a3 b2 ) a3 (a2 b3 − a3 b2 ) a2 − (a3 b1 − a1 b3 ) a1 2 (a2 + a2 ) b1 − a1 (a3 b3 + a2 b2 ) 3 = (a2 + a2 ) b2 − a2 (a1 b1 + a3 b3 ) 1 3 (a2 + a2 ) b3 − a3 (a1 b1 + a2 b2 ) 1 2 (a2 + a2 ) b1 + a1 (a1 b1 ) 2 3 (a × b) × a = (a2 + a2 ) b2 + a2 (a2 b2 ) 1 3 (a2 + a2 ) b3 + a3 (a3 b3 ) 1 2 b1 (a × b) × a = b2 b3 (a × b) × b = −a Since a and b are orthogonal. The proof that (a × b) × b = −a is identical. (a × b) × a is either equal to b or to −b. 488) (a) Since kak = kbk = 1 and ha. Since a is a unit vector. by Lagrange’s identity (theorem 13. a × b are mutually orthogonal either by assumption or by the properties of the vector product (theorem 13.de). ai2 = 1 (c) And again. 7 (p.7 n. b. bi2 = 1 k(a × b) × ak2 = kck2 = 1 Since the direction which is orthogonal to (a × b) and to b is spanned by a.12.f) ka × bk2 = kak2 kbk2 − ha. (a × b)×b is either equal to a or to −a. k(a × b) × bk2 = ka × bk2 kbk2 − ha × b. . The three vectors a. Similarly. kck2 = ka × bk2 kak2 − ha × b.12. (b) By Lagrange’s identity again.
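The identities of n. 7 can be checked numerically. The following sketch uses an arbitrary orthonormal pair a, b (an assumption for illustration, not data from the exercise) and verifies Lagrange's identity, (a × b) × a = b, and (a × b) × b = −a.

```python
# Verify: for orthonormal a, b one has |a x b| = 1, (a x b) x a = b,
# and (a x b) x b = -a.
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

a = (1/math.sqrt(2), 1/math.sqrt(2), 0.0)   # arbitrary orthonormal pair
b = (1/math.sqrt(2), -1/math.sqrt(2), 0.0)
assert abs(dot(a, b)) < 1e-12

c = cross(a, b)
# Lagrange's identity: |a x b|^2 = |a|^2 |b|^2 - <a,b>^2 = 1
assert abs(dot(c, c) - 1) < 1e-12
assert all(abs(x - y) < 1e-12 for x, y in zip(cross(c, a), b))
assert all(abs(x + y) < 1e-12 for x, y in zip(cross(c, b), a))
```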
13.11.8 n. 8 (p. 488)
(a) From a × b = 0, either there exists some h ∈ R such that a = hb, or there exists some k ∈ R such that b = ka. Then either ⟨a, b⟩ = h ‖b‖² or ⟨a, b⟩ = k ‖a‖², so that from ⟨a, b⟩ = 0, either h ‖b‖ = 0 or k ‖a‖ = 0. In the first case, either h = 0 (which means a = 0) or ‖b‖ = 0 (which is equivalent to b = 0). In the second case, either k = 0 (which means b = 0) or ‖a‖ = 0 (which is equivalent to a = 0). Geometrically, suppose that a ≠ 0. Then both the projection (⟨a, b⟩/‖a‖²) a of b along a and the projecting vector (of length ‖b × a‖/‖a‖) are null, which can only happen if b = 0.
(b) From the previous point, the hypotheses a ≠ 0, a × (b − c) = 0, and ⟨a, b − c⟩ = 0 imply that b − c = 0.

13.11.9 n. 9 (p. 488)
(a) Let ϑ ≡ ab (the angle between a and b). From ‖a × b‖ = ‖a‖ ‖b‖ sin ϑ, in order to satisfy the condition a × b = c, the vector b must be orthogonal to c, and its norm must depend on ϑ according to the relation
‖b‖ = ‖c‖/(‖a‖ sin ϑ)    (13.7)
Thus ‖c‖/‖a‖ ≤ ‖b‖ < +∞. From the previous point, b can be taken orthogonal to a as well, in which case it has to be equal to (c × a)/‖a‖² or to (a × c)/‖a‖². Thus two solutions to the problem are
± (7/9, −8/9, −11/9)
(b) Let (p, q, r) be the coordinates of b. The conditions a × b = c and ⟨a, b⟩ = 1 are:
I:   −2q − r = 3
II:  2p − 2r = 4
III: p + 2q = −1
IV:  2p − q + 2r = 1
Standard manipulations (2II + 2IV + III) yield the unique solution
9p = 9,  2 − 2r = 4,  −2q + 1 = 3
(check on IV: 2 + 1 − 2 = 1; check on III: 1 − 2 = −1); that is,
b = (1, −1, −1)
The solution to this exercise given by Apostol at page 645 is wrong.
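The system I–IV above corresponds to a = (2, −1, 2) and c = (3, 4, −1) (values read off the reconstructed equations, so treat them as an assumption); the following sketch confirms that b = (1, −1, −1) satisfies a × b = c and ⟨a, b⟩ = 1.

```python
# Verify the unique solution of n. 9(b): a x b = c and <a, b> = 1.
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

a, b, c = (2, -1, 2), (1, -1, -1), (3, 4, -1)
assert cross(a, b) == c
assert sum(ai * bi for ai, bi in zip(a, b)) == 1
```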
13.11.10 n. 10 (p. 488)
Replacing b with c in the first result of exercise 7, and observing that a and c are orthogonal,
(a × c) × a = ‖a‖² c
so that, by skew-symmetry of the vector product, (c × a)/‖a‖² is a solution to the equation
a × x = c    (13.8)
If x and z both solve (13.8),
a × (x − z) = a × x − a × z = c − c = 0
which implies that x − z is parallel to a. Thus the set of all solutions to equation (13.8) is
{x ∈ R³ : ∃α ∈ R, x = (c × a)/‖a‖² + αa}
However, c × a is orthogonal to a, and hence it does not by itself meet the additional requirement
⟨a, x⟩ = 1    (13.9)
We are bound to look for other solutions to (13.8). From condition (13.9), ⟨a, (c × a)/‖a‖² + αa⟩ = 1, that is, α = 1/‖a‖². Thus the unique solution to the problem is
b ≡ (c × a + a)/‖a‖²

13.11.11 n. 11 (p. 488)
(a)
u ≡ AB = (−2, 1, 0)
v ≡ BC = (3, −2, 1)
w ≡ CA = (−1, 1, −1)
Each side of the triangle ABC can be one of the two diagonals of the parallelogram to be determined.
If one of the diagonals is BC, the other is AD, where AD = AB + AC = u − w. In this case D = A + u − w = "B + C − A" = (0, 0, 2).
If one of the diagonals is CA, the other is BE, where BE = BC + BA = v − u. In this case E = B + v − u = "A + C − B" = (4, −2, 2).
If one of the diagonals is AB, the other is CF, where CF = CA + CB = w − v. In this case F = C + w − v = "A + B − C" = (−2, 2, 0).
(b)
u × w = ( |1 0; 1 −1|, −|−2 0; −1 −1|, |−2 1; −1 1| ) = (−1, −2, −1)
‖u × w‖ = √6
area (ABC) = ½ ‖u × w‖ = √6/2
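A check of the area computation in n. 11(b), assuming u = AB = (−2, 1, 0) and w = CA = (−1, 1, −1) as reconstructed above:

```python
# u x w = (-1, -2, -1), so area(ABC) = |u x w| / 2 = sqrt(6)/2.
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

u, w = (-2, 1, 0), (-1, 1, -1)
uxw = cross(u, w)
assert uxw == (-1, -2, -1)
area = math.sqrt(sum(x * x for x in uxw)) / 2
assert abs(area - math.sqrt(6) / 2) < 1e-12
```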
13.11.12 n. 12 (p. 488)
Since c = 2 (a × b) − 3b,
b + c = 2 (a × b) − 2b
Moreover,
cos ab = ⟨a, b⟩/(‖a‖ ‖b‖) = 1/2
‖a × b‖² = ‖a‖² ‖b‖² sin² ab = 12
‖c‖² = 4 ‖a × b‖² − 12 ⟨a × b, b⟩ + 9 ‖b‖² = 48 + 144 = 192
‖c‖ = 8√3
⟨b, c⟩ = 2 ⟨b, a × b⟩ − 3 ‖b‖² = −48
cos bc = ⟨b, c⟩/(‖b‖ ‖c‖) = −√3/2

13.11.13 n. 13 (p. 488)
(a) It is true that, if the couple (a, b) is linearly independent, then the triple (a + b, a − b, a × b) is linearly independent, too. Indeed, suppose that
x (a + b) + y (a − b) + z (a × b) = 0    (13.10)
In the first place, a and b are nonnull, since every n-tuple having the null vector among its components is linearly dependent. Secondly, a × b is nonnull, too, because (a, b) is linearly independent. Third, a × b is orthogonal to both a + b and a − b (as well as to any vector in lin {a, b}), because it is orthogonal to a and b; in particular, a × b ∉ lin {a, b}, since the only vector which is orthogonal to itself is the null vector. Fourth, z cannot be different from zero, since otherwise
a × b = −(x/z) (a + b) − (y/z) (a − b) = −((x + y)/z) a + ((y − x)/z) b
would belong to lin {a, b}. Thus z = 0, and (13.10) becomes
x (a + b) + y (a − b) = 0
which is equivalent to
(x + y) a + (x − y) b = 0
By linear independence of (a, b),
x + y = 0 and x − y = 0
and hence x = y = 0.
(b) It is true that, if the couple (a, b) is linearly independent, then the triple (a + b, a + a × b, b + a × b) is linearly independent, too. Indeed, let (x, y, z) be such that
x (a + b) + y (a + a × b) + z (b + a × b) = 0
which is equivalent to
(x + y) a + (x + z) b + (y + z) a × b = 0
Arguing as in the fourth part of the previous point, y + z must be null, and this in turn implies that both x + y and x + z are also null. The system
x + y = 0,  x + z = 0,  y + z = 0
has the trivial solution as its only solution.
(c) It is true that, if the couple (a, b) is linearly independent, then the triple (a, b, (a + b) × (a − b)) is linearly independent, too. Indeed,
(a + b) × (a − b) = a × a − a × b + b × a − b × b = −2 a × b
and the triple (a, b, a × b) is linearly independent when the couple (a, b) is so.

13.11.14 n. 14 (p. 488)
(a) The cross product AB × AC equals the null vector if and only if the couple (AB, AC) is linearly dependent. If AC is null, then A and C coincide, and it is clear that the line through them and B contains all the three points. Otherwise, if AC is nonnull, then in the nontrivial null combination
x AB + y AC = 0
x must be nonnull, yielding
AB = −(y/x) AC
that is,
B = A − (y/x) AC
which means that B belongs to the line through A and C.
(b) By the previous point, the set
{P : AP × BP = 0}
is the set of all points P such that A, B, and P belong to the same line, that is, the set of all points P belonging to the line through A and B.

13.11.15 n. 15 (p. 488)
(a) From the assumption (p × b) + p = a,
⟨b, p × b⟩ + ⟨b, p⟩ = ⟨b, a⟩ = 0
that is, since p × b is orthogonal to b, ⟨p, b⟩ = 0: p is orthogonal to b. This gives
‖p × b‖ = ‖p‖ ‖b‖ = ‖p‖
and hence
1 = ‖a‖² = ‖p × b‖² + ‖p‖² = 2 ‖p‖²    (13.11)
that is, ‖p‖ = √2/2.
(b) Since p, b, and p × b are pairwise orthogonal, {p, b, p × b} is a basis of R³.
(c) Since p is orthogonal both to p × b (definition of vector product) and to b (point a), there exists some h ∈ R such that
(p × b) × b = hp
Thus, taking into account (13.11),
|h| ‖p‖ = ‖p × b‖ ‖b‖ |sin π/2| = ‖p‖
and h ∈ {1, −1}. Since the triples (p, b, p × b) and (p × b, p, b) define the same orientation, and (p × b, b, p) defines the opposite one, h = −1.
(d) Still from the assumption (p × b) + p = a,
⟨p, a⟩ = ⟨p, p × b⟩ + ⟨p, p⟩ = ‖p‖² = 1/2
p × (p × b) + p × p = p × a
‖p × a‖ = ‖p‖² ‖b‖ = 1/2
Thus
p = ½ a + ½ q    (13.12)
where q ≡ 2p − a is a vector in the plane π generated by p and a. Since a normal vector to π is b, there exists some k ∈ R such that
q = k (b × a)
Now
‖q‖² = 4 ‖p‖² + ‖a‖² − 4 ‖p‖ ‖a‖ cos pa = 2 + 1 − 4 · (√2/2) · (√2/2) = 1
so that ‖q‖ = 1 and |k| = 1. If on the plane orthogonal to b the mapping u ↦ b × u rotates counterclockwise, the vectors a = p + (p × b), p × b, and b × a form angles of π/4, π/2, and 3π/4 with p, respectively. On the other hand, the decomposition of p obtained in (13.12) requires that the angle formed with p by q is −π/4, so that q is discordant with b × a. It follows that k = −1, and I have finally obtained
p = ½ a − ½ (b × a)

13.12 The scalar triple product

13.13 Cramer's rule for solving systems of three linear equations

13.14 Exercises

13.15 Normal vectors to planes

13.16 Linear cartesian equations for planes
13.17 Exercises

13.17.1 n. 1 (p. 496)
(a) Out of the infinitely many vectors satisfying the requirement, a distinguished one is
n ≡ (2i + 3j − 4k) × (j + k) = 7i − 2j + 2k
(b) ⟨n, (x, y, z)⟩ = ⟨n, (1, 2, 3)⟩, or 7x − 2y + 2z = 9
(c) ⟨n, (x, y, z)⟩ = 0, or 7x − 2y + 2z = 0

13.17.2 n. 2 (p. 496)
(a) A normal vector to the plane π of equation x + 2y − 2z + 7 = 0 is n = (1, 2, −2), with unit normal n/‖n‖ = (1/3, 2/3, −2/3).
(b) The three intersection points are:
X-axis: (−7, 0, 0)
Y-axis: (0, −7/2, 0)
Z-axis: (0, 0, 7/2)
(c) The distance from the origin is 7/3.
(d) Intersecting with π the line through the origin which is directed by the normal to π,
x = h,  y = 2h,  z = −2h
x + 2y − 2z + 7 = 0
yields
h + 4h + 4h + 7 = 0
h = −7/9
(x, y, z) = (−7/9, −14/9, 14/9)
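The distance and foot-of-perpendicular computation of n. 2(c)–(d) can be verified exactly with rational arithmetic:

```python
# For the plane x + 2y - 2z + 7 = 0: the nearest point to the origin is
# h*n with h = -7/9, i.e. (-7/9, -14/9, 14/9), at distance 7/3.
from fractions import Fraction as F

n = (F(1), F(2), F(-2))
d = F(7)
h = -d / sum(c * c for c in n)        # -7/9, since |n|^2 = 9
foot = tuple(h * c for c in n)
assert foot == (F(-7, 9), F(-14, 9), F(14, 9))
assert sum(x * x for x in foot) == F(7, 3) ** 2
```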
13.17.3 n. 3 (p. 496)
π1 : x + 2y − 2z = 5
π2 : 3x − 6y + 3z = 2
π3 : 2x + y + 2z = −1
π4 : x − 2y + z = 7
(a) π2 and π4 are parallel because n2 = 3n4; π1 and π3 are orthogonal because ⟨n1, n3⟩ = 0.
(b) The straight line through the origin having direction vector n4 has equations
x = t,  y = −2t,  z = t
and intersects π2 and π4 at points C and D, respectively, which can be determined by the equations
3tC − 6 (−2tC) + 3tC = 2
tD − 2 (−2tD) + tD = 7
Thus tC = 1/9, tD = 7/6, and
CD = (7/6 − 1/9) n4
‖CD‖ = (19/18) ‖n4‖ = (19/18) √6
Alternatively, rewriting the equation of π4 with normal vector n2,
3x − 6y + 3z = 21
dist (π2, π4) = |d2 − d4| / ‖n2‖ = 19/√54

13.17.4 n. 4 (p. 496)
A cartesian equation for the plane which passes through the point P ≡ (1, 2, −3) and is parallel to the plane of equation 3x − y + 2z = 4 is
3 (x − 1) − (y − 2) + 2 (z + 3) = 0
or
3x − y + 2z + 5 = 0
The distance between the two planes is
|4 − (−5)| / √(9 + 1 + 4) = 9√14/14
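The two routes to the distance between the parallel planes in n. 3(b) agree, as the following sketch confirms:

```python
# Distance between 3x - 6y + 3z = 2 and x - 2y + z = 7 (the latter rewritten
# as 3x - 6y + 3z = 21): |21 - 2| / |n2| = 19/sqrt(54) = (19/18)*sqrt(6).
import math

n2 = (3, -6, 3)
dist = abs(21 - 2) / math.sqrt(sum(c * c for c in n2))
assert abs(dist - 19 / math.sqrt(54)) < 1e-12
assert abs(dist - (19 / 18) * math.sqrt(6)) < 1e-12
```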
13.17.5 n. 5 (p. 496)
(a) A normal vector for the plane π through the points P ≡ (1, 1, −1), Q ≡ (3, 3, 2), R ≡ (3, −1, −2) is
n ≡ PQ × QR = (2, 2, 3) × (0, −4, −4) = (4, 8, −8)
(b) A cartesian equation for π is x + 2y − 2z = 5.
(c) The distance of π from the origin is 5/3.

13.17.6 n. 6 (p. 496)
Proceeding as in points (a) and (b) of the previous exercise, a normal vector for the plane through the points P ≡ (1, 2, 3), Q ≡ (2, 3, 4), R ≡ (−1, 7, −2) is
n ≡ PQ × RQ = (1, 1, 1) × (3, −4, 6) = (10, −3, −7)
and a cartesian equation for it is
10x − 3y − 7z + 17 = 0

13.17.7 n. 8 (p. 496)
A normal vector to the plane is given by any direction vector for the given line,
n ≡ (2, 4, 12) − (1, 2, 3) = (1, 2, 9)
A cartesian equation for the plane is
(x − 2) + 2 (y − 3) + 9 (z + 7) = 0
or
x + 2y + 9z + 55 = 0

13.17.8 n. 9 (p. 496)
A direction vector for the line is just the normal vector of the plane. Thus the parametric equations are
x = 2 + 4h
y = 1 − 3h
z = −3 + h
13.17.9 n. 10 (p. 496)
(a) The position of the point at time t can be written as follows:
x = 1 − t
y = 2 − 3t
z = −1 + 2t
which are just the parametric equations of a line.
(b) The direction vector of the line L is d = (−1, −3, 2).
(c) The hitting instant is determined by substituting the coordinates of the moving point into the equation of the given plane π:
2 (1 − t) + 3 (2 − 3t) + 2 (−1 + 2t) + 1 = 0
−7t + 7 = 0
yielding t = 1. Hence the hitting point is (0, −1, 1).
(d) At time t = 3, the moving point has coordinates (−2, −7, 5). Substituting,
2 · (−2) + 3 · (−7) + 2 · 5 + d = 0
d = 15
A cartesian equation for the plane π′ which is parallel to π and contains (−2, −7, 5) is
2x + 3y + 2z + 15 = 0
(e) At time t = 2, the moving point has coordinates (−1, −4, 3). A normal vector for the plane π″ which is orthogonal to L and contains (−1, −4, 3) is −d. Thus
−1 · (−1) − 3 · (−4) + 2 · 3 + d = 0
d = −19
A cartesian equation for π″ is
(x + 1) + 3 (y + 4) − 2 (z − 3) = 0
or
x + 3y − 2z + 19 = 0
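The substitution in n. 10(c) can be checked by evaluating the plane's left-hand side along the trajectory:

```python
# The moving point (1-t, 2-3t, -1+2t) hits 2x + 3y + 2z + 1 = 0 at t = 1,
# at the point (0, -1, 1); the left-hand side is -7t + 7.
def pos(t):
    return (1 - t, 2 - 3 * t, -1 + 2 * t)

def plane(p):
    x, y, z = p
    return 2 * x + 3 * y + 2 * z + 1

assert plane(pos(0)) == 7 and plane(pos(2)) == -7   # linear in t, slope -7
assert plane(pos(1)) == 0
assert pos(1) == (0, -1, 1)
```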
13.17.10 n. 11 (p. 496)
I work directly on the coefficient matrix for the equation system:
3  1  1 | 5
3  1  5 | 7
1 −1  3 | 3
With obvious manipulations,
0  4 −8 | −4
0  0  4 |  2
1 −1  3 |  3
I obtain a unique solution (x, y, z) = (3/2, 0, 1/2).

13.17.11 n. 13 (p. 496)
First I find a vector of arbitrary norm which satisfies the given conditions:
l + 2m − 3n = 0
l − m + 5n = 0
Assigning value 1 to n, the other values are easily obtained: (l, m) = (−7/3, 8/3). Then the required vector is
± (−7/√122, 8/√122, 3/√122)

13.17.12 n. 14 (p. 496)
From
∠(n, i) = π/3,  ∠(n, j) = π/4,  ∠(n, k) = π/3
we get
n/‖n‖ = ½ (1, √2, 1)
A cartesian equation for the plane in consideration is
(x − 1) + √2 (y − 1) + (z − 1) = 0
or
x + √2 y + z = 2 + √2

13.17.13 n. 15 (p. 496)
A normal vector for the plane π which is parallel to both vectors i + j and j + k is
n ≡ (i + j) × (j + k) = i − j + k
Since the intercept of π with the X-axis is (2, 0, 0), a cartesian equation for π is
x − y + z = 2
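Both results of n. 11 and n. 13 can be confirmed exactly:

```python
# (3/2, 0, 1/2) solves the 3x3 system of n. 11, and (-7, 8, 3)/sqrt(122)
# is a unit vector orthogonal to (1, 2, -3) and (1, -1, 5) (n. 13).
from fractions import Fraction as F

x, y, z = F(3, 2), F(0), F(1, 2)
assert 3 * x + y + z == 5
assert 3 * x + y + 5 * z == 7
assert x - y + 3 * z == 3

v = (-7, 8, 3)
assert sum(c * c for c in v) == 122          # so v/sqrt(122) is a unit vector
assert sum(c * d for c, d in zip(v, (1, 2, -3))) == 0
assert sum(c * d for c, d in zip(v, (1, -1, 5))) == 0
```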
13.17.14 n. 17 (p. 497)
A cartesian equation for the plane under consideration is
2x − y + 2z + d = 0
The condition of equal distance from the point P ≡ (3, 2, −1) yields
|6 − 2 − 2 + 4| / 3 = |6 − 2 − 2 + d| / 3
that is,
|2 + d| = 6
The two solutions of the above equation are d1 = 4 (corresponding to the plane already given) and d2 = −8. Thus the required equation is
2x − y + 2z = 8

13.17.15 n. 20 (p. 497)
If the line ` under consideration is parallel to the two given planes, which have as normal vectors n1 ≡ (1, 2, 3) and n2 ≡ (2, 3, 4), a direction vector for ` is
d ≡ (2, 3, 4) × (1, 2, 3) = (1, −2, 1)
Since ` goes through the point P ≡ (1, 2, 3), parametric equations for ` are the following:
x = 1 + t
y = 2 − 2t
z = 3 + t

13.18 The conic sections

13.19 Eccentricity of conic sections

13.20 Polar equations for conic sections

13.21 Exercises
13.22 Conic sections symmetric about the origin

13.23 Cartesian equations for the conic sections

13.24 Exercises

13.25 Miscellaneous exercises on conic sections
Chapter 14 CALCULUS OF VECTOR-VALUED FUNCTIONS
Chapter 15 LINEAR SPACES

15.1 Introduction

15.2 The definition of a linear space

15.3 Examples of linear spaces

15.4 Elementary consequences of the axioms

15.5 Exercises

15.5.1 n. 1 (p. 555)
The set of all real rational functions is a real linear space. Indeed, let P, Q, R, and S be any four real polynomials, and let
f : x ↦ P(x)/Q(x)    g : x ↦ R(x)/S(x)
Then for every two real numbers α and β,
αf + βg : x ↦ (α P(x) S(x) + β Q(x) R(x)) / (Q(x) S(x))
is a well-defined real rational function. This shows that the set in question is closed with respect to the two linear space operations of function sum and function multiplication by a scalar. From this the other two existence axioms (of the zero element and of negatives) also follow as particular cases, for (α, β) = (0, 0) and (α, β) = (0, −1) respectively. Of course it can also be seen directly that the identically null function x ↦ 0 is a rational function, as the quotient of the constant polynomials P : x ↦ 0
and Q : x ↦ 1. The remaining linear space axioms are immediately seen to hold, as a straightforward consequence of the general properties of the operations of function sum and function multiplication by a scalar. It may be also noticed that the degree of both polynomials P and Q occurring in the representation of the identically null function as a rational function is zero.

15.5.2 n. 2 (p. 555)
The set of all real rational functions having numerator of degree not exceeding the degree of the denominator is a real linear space. Indeed, taking into account exercise 1, it only needs to be proved that if deg P ≤ deg Q and deg R ≤ deg S, then
deg [αPS + βQR] ≤ deg QS
This is clear, since for every two real numbers α and β
deg [αPS + βQR] ≤ max {deg PS, deg QR}
deg PS = deg P + deg S ≤ deg Q + deg S
deg QR = deg Q + deg R ≤ deg Q + deg S
so that the closure axioms hold. Again, the two existence axioms follow as particular cases.

15.5.3 n. 3 (p. 555)
The set of all real valued functions which are defined on a fixed domain containing 0 and 1, and which have the same value at 0 and 1, is a real linear space. Indeed, for every two real numbers α and β,
(αf + βg)(0) = α f(0) + β g(0) = α f(1) + β g(1) = (αf + βg)(1)
so that the closure axioms hold. A final remark concerning the other axioms, of a type which has become usual at this point, allows to conclude; it is anyway clear that the identically null function x ↦ 0 achieves the same value at 0 and 1.

15.5.4 n. 4 (p. 555)
The set of all real valued functions which are defined on a fixed domain containing 0 and 1, and which achieve at 0 the double of the value they achieve at 1, is a real linear space. Indeed, for every two real numbers α and β,
(αf + βg)(0) = α f(0) + β g(0) = α 2f(1) + β 2g(1) = 2 (αf + βg)(1)
so that the closure axioms hold. Similarly, taking into account exercise 1, the two existence axioms follow as particular cases, and the other linear space axioms hold by the general properties of the operations of function sum and function multiplication by a scalar.
15.5.5 n. 5 (p. 555)
The set of all real valued functions which are defined on a fixed domain containing 0 and 1, and which have value at 1 exceeding the value at 0 by 1, is not a real linear space. Indeed, let f and g be any two such functions, and let α and β be any two real numbers such that α + β ≠ 1. Then
(αf + βg)(1) = α f(1) + β g(1) = α [1 + f(0)] + β [1 + g(0)]
= α + β + α f(0) + β g(0) = α + β + (αf + βg)(0) ≠ 1 + (αf + βg)(0)
and the two closure axioms fail to hold (the above shows, however, that the set in question is an affine subspace of the real linear space). The other failing axioms are: existence of the zero element (the identically null function has the same value at 0 and at 1), and existence of negatives:
f(1) = 1 + f(0) ⇒ (−f)(1) = −1 + (−f)(0) ≠ 1 + (−f)(0)

15.5.6 n. 6 (p. 555)
The set of all real valued step functions which are defined on [0, 1] is a real linear space. Indeed, let f and g be any two such functions, so that for some nonnegative integers m and n, some increasing (m + 1)-tuple (σr)r∈{0}∪m of elements of [0, 1] with σ0 = 0 and σm = 1, some increasing (n + 1)-tuple (τs)s∈{0}∪n of elements of [0, 1] with τ0 = 0 and τn = 1, some (m + 1)-tuple (γr)r∈{0}∪m of real numbers, and some (n + 1)-tuple (δs)s∈{0}∪n of real numbers∗,
f ≡ γ0 χ{0} + Σr∈m γr χIr
g ≡ δ0 χ{0} + Σs∈n δs χJs
where, for each subset C of [0, 1], χC is the characteristic function of C,
χC ≡ x ↦ 1 if x ∈ C, 0 if x ∉ C
and (Ir)r∈m, (Js)s∈n are the partitions of (0, 1] associated to (σr)r∈{0}∪m and (τs)s∈{0}∪n:
Ir ≡ (σr−1, σr]    (r ∈ m)
Js ≡ (τs−1, τs]    (s ∈ n)
Then, for any two real numbers α and β,
αf + βg = (αγ0 + βδ0) χ{0} + Σ(r,s)∈m×n (αγr + βδs) χKrs

∗ I am using here the following slight abuse of notation for degenerate intervals: (σ, σ] ≡ {σ}. This is necessary in order to allow for "point steps".
where (Krs)(r,s)∈m×n is the "meet" partition of (Ir)r∈m and (Js)s∈n,
Krs ≡ Ir ∩ Js    ((r, s) ∈ m × n)
which shows that αf + βg is a step function too, with no more† than m + n steps on [0, 1]. Thus the closure axioms hold. It may be noticed independently that the identically null function [0, 1] → R, x ↦ 0 is indeed a step function, with just one step (m = 1, γ0 = γ1 = 0), and that the opposite of a step function is a step function too, with the same number of steps. The usual, final remark concerning the other linear space axioms applies.

† Some potential steps may "collapse" if, for some (r, s) ∈ m × n, σr−1 = τs−1 and ασr + βτs = ασr−1 + βτs−1.

15.5.7 n. 7 (p. 555)
The set of all real valued and increasing functions of a real variable is not a real linear space. The first closure axiom holds, because the sum of two increasing functions is an increasing function too. The second closure axiom does not hold, because the function αf is decreasing if f is increasing and α is a negative real number. The axiom of existence of the zero element holds or fails, depending on whether monotonicity is meant in the weak or in the strict sense, since the identically null function is weakly increasing (and, for that matter, weakly decreasing too, as every constant function) but not strictly increasing. The axiom of existence of negatives fails. All the other linear space axioms hold. Thus, the set of all real valued and (weakly) increasing functions of a real variable is a convex cone, but not a linear space.

15.5.8 n. 13 (p. 555)
The set of all real valued functions which are defined on R and convergent to 0 at +∞ is a real linear space. Indeed, by classical theorems in the theory of limits, for any two such functions f and g, and for any two real numbers α and β,
lim_{x→+∞} (αf + βg)(x) = α lim_{x→+∞} f(x) + β lim_{x→+∞} g(x) = 0
so that the closure axioms hold. It may be noticed independently that the identically null function x ↦ 0 indeed converges to 0 at +∞ (and, for that matter, at any x0 ∈ R ∪ {−∞}). The final, usual remark concerning the other linear space axioms applies.
15.5.9 n. 11 (p. 555)
The set of all real valued functions which are defined and integrable on [0, 1], with the integral over [0, 1] equal to zero, is a real linear space. Indeed, for any two such functions f and g, and for any two real numbers α and β,
∫₀¹ (αf + βg)(x) dx = α ∫₀¹ f(x) dx + β ∫₀¹ g(x) dx = 0
Thus the closure axioms hold, and the final, usual remark concerning the other linear space axioms applies.

15.5.10 n. 14 (p. 555)
The set of all real valued functions which are defined and integrable on [0, 1], with nonnegative integral over [0, 1], is a convex cone, but not a linear space. For any two such functions f and g, and any two nonnegative real numbers α and β,
∫₀¹ (αf + βg)(x) dx = α ∫₀¹ f(x) dx + β ∫₀¹ g(x) dx ≥ 0
It is clear that if α < 0 and ∫₀¹ f(x) dx > 0, then ∫₀¹ (αf)(x) dx < 0, so that axioms 2 and 6 fail to hold.

15.5.11 n. 16 (p. 555)
First solution. The set of all real Taylor polynomials of degree less than or equal to n (including the zero polynomial) is a real linear space. This is immediate, if one thinks at the motivation for the definition of Taylor polynomials: best n-degree polynomial approximation of a given function. The discussion of the above statement is a bit complicated by the fact that nothing is said concerning the point where our Taylor polynomials are to be centered. If the center is taken to be 0, the issue is really easy: every polynomial is the Taylor polynomial centered at 0 of itself (considered, as it is, as a real function of a real variable), so that the set in question coincides with the set of all real polynomials of degree less than or equal to n, which is already known to be a real linear space. At any rate, if one takes the standard formula as a definition,
Taylorₙ(f) at 0 ≡ x ↦ Σⱼ₌₀..ₙ f⁽ʲ⁾(0)/j! · xʲ
here are the computations. Let
P : x ↦ Σᵢ₌₀..ₙ pᵢ xⁿ⁻ⁱ
Then
P′ : x ↦ Σᵢ₌₀..ₙ₋₁ (n − i) pᵢ xⁿ⁻¹⁻ⁱ
P″ : x ↦ Σᵢ₌₀..ₙ₋₂ (n − i)(n − i − 1) pᵢ xⁿ⁻²⁻ⁱ
...
P⁽ʲ⁾ : x ↦ Σᵢ₌₀..ₙ₋ⱼ (n − i)!/(n − i − j)! pᵢ xⁿ⁻ʲ⁻ⁱ
...
P⁽ⁿ⁻¹⁾ : x ↦ n! p₀ x + (n − 1)! p₁
P⁽ⁿ⁾ : x ↦ n! p₀
so that
P′(0) = pₙ₋₁,  P″(0) = 2pₙ₋₂,  ...,  P⁽ʲ⁾(0) = j! pₙ₋ⱼ,  ...,  P⁽ⁿ⁻¹⁾(0) = (n − 1)! p₁,  P⁽ⁿ⁾(0) = n! p₀
and hence
Σⱼ₌₀..ₙ P⁽ʲ⁾(0)/j! · xʲ = Σⱼ₌₀..ₙ (j! pₙ₋ⱼ)/j! · xʲ = Σᵢ₌₀..ₙ pᵢ xⁿ⁻ⁱ = P(x)
On the other hand, if the Taylor polynomials are meant to be centered at some x₀ ≠ 0,
Taylorₙ(f) at x₀ ≡ x ↦ Σⱼ₌₀..ₙ f⁽ʲ⁾(x₀)/j! · (x − x₀)ʲ
it must be shown by more lengthy arguments that for each polynomial P the following holds:
Σⱼ₌₀..ₙ P⁽ʲ⁾(x₀)/j! · (x − x₀)ʲ = Σᵢ₌₀..ₙ pᵢ xⁿ⁻ⁱ
Second solution. The set of all real Taylor polynomials (centered at x₀ ∈ R) of degree less than or equal to n (including the zero polynomial) is a real linear space. Indeed, let P and Q be any two such polynomials; that is, let f and g be two real functions of a real variable which are m and n times differentiable in x₀, and let P ≡ Taylorₘ(f) at x₀ and Q ≡ Taylorₙ(g) at x₀, that is,
P : x ↦ Σᵢ₌₀..ₘ f⁽ⁱ⁾(x₀)/i! · (x − x₀)ⁱ
Q : x ↦ Σⱼ₌₀..ₙ g⁽ʲ⁾(x₀)/j! · (x − x₀)ʲ
Suppose first that m = n. Then for any two real numbers α and β,
αP + βQ = Σₖ₌₀..ₘ [α f⁽ᵏ⁾(x₀)/k! + β g⁽ᵏ⁾(x₀)/k!] (x − x₀)ᵏ
         = Σₖ₌₀..ₘ (αf + βg)⁽ᵏ⁾(x₀)/k! · (x − x₀)ᵏ
         = Taylorₘ(αf + βg) at x₀
Second, suppose (without loss of generality) that m > n. In this case, however,
αP + βQ = Σₖ₌₀..ₙ [α f⁽ᵏ⁾(x₀) + β g⁽ᵏ⁾(x₀)]/k! · (x − x₀)ᵏ + Σₖ₌ₙ₊₁..ₘ α f⁽ᵏ⁾(x₀)/k! · (x − x₀)ᵏ
a polynomial which can be legitimately considered the Taylor polynomial of degree m at x₀ of the function αf + βg only if all the derivatives of g of order from n + 1 to m are null at x₀. This is certainly true if g is itself a polynomial of degree n; in fact, this is true only in such a case, as it can be seen by repeated integration. It is hence necessary, in addition, to state and prove the result asserting that each Taylor polynomial of any degree and centered at any x₀ ∈ R can be seen as the Taylor polynomial of itself.
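The coefficient identity P⁽ʲ⁾(0)/j! = pₙ₋ⱼ used in the first solution can be sketched computationally; the polynomial below is an arbitrary example, not data from the exercise.

```python
# For a polynomial P, the j-th Taylor coefficient at 0, P^(j)(0)/j!,
# reproduces the coefficient of x^j, so Taylor_n(P) at 0 is P itself.
from math import factorial

def derivative(coeffs):
    # coeffs[i] is the coefficient of x^i in the polynomial
    return [i * c for i, c in enumerate(coeffs)][1:]

P = [5, -1, 0, 2, 7]          # P(x) = 5 - x + 2x^3 + 7x^4 (arbitrary example)
d, taylor = P, []
for j in range(len(P)):
    taylor.append(d[0] // factorial(j))   # P^(j)(0) / j!
    d = derivative(d)
assert taylor == P
```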
15.5.12 n. 17 (p. 555)
The set S of all solutions of a linear second-order homogeneous differential equation
∀x ∈ (a, b),  y″(x) + P(x) y′(x) + Q(x) y(x) = 0
where P and Q are given everywhere continuous real functions of a real variable, and (a, b) is some open interval to be determined together with the solution, is a real linear space. First of all, it must be noticed that S is nonempty, and that its elements are indeed real functions which are everywhere defined (that is, with (a, b) = R), due to the main existence and solution continuation theorems in the theory of differential equations. Second, the operator
L : D² → Rᴿ,  y ↦ y″ + Py′ + Qy
where Rᴿ is the set of all the real functions of a real variable, and D² is the subset of Rᴿ of all the functions having a second derivative which is everywhere defined, is linear: ∀y ∈ D², ∀z ∈ D², ∀α ∈ R, ∀β ∈ R,
L(αy + βz) = (αy + βz)″ + P (αy + βz)′ + Q (αy + βz)
           = α (y″ + Py′ + Qy) + β (z″ + Pz′ + Qz)
           = α L(y) + β L(z)
Third, D² is a real linear space, since
∀y ∈ D², ∀z ∈ D², ∀α ∈ R, ∀β ∈ R,  αy + βz ∈ D²
by standard theorems on the derivative of a linear combination of differentiable functions, and the usual remark concerning the other linear space axioms. Finally, S = ker L is a linear subspace of D², by the following standard argument:
L(y) = 0 ∧ L(z) = 0 ⇒ L(αy + βz) = α L(y) + β L(z) = 0
and the usual remark.

15.5.13 n. 18 (p. 555)
The set of all bounded real sequences is a real linear space. Indeed, for every two real sequences x ≡ (xₙ)ₙ∈N and y ≡ (yₙ)ₙ∈N and every two real numbers α and β, if x and y are bounded, so that for some positive numbers ε and η and for every n ∈ N the following holds:
|xₙ| < ε,  |yₙ| < η
then the sequence αx + βy is also bounded, since for every n ∈ N
|αxₙ + βyₙ| ≤ |α| |xₙ| + |β| |yₙ| < |α| ε + |β| η
Thus the closure axioms hold, and the usual remark concerning the other linear space axioms applies.
15.5.14 n. 19 (p. 555)
The set of all convergent real sequences is a real linear space. Indeed, for every two real sequences x ≡ (xₙ)ₙ∈N and y ≡ (yₙ)ₙ∈N and every two real numbers α and β, if x and y are convergent, with limits ξ and ζ respectively, then the sequence αx + βy converges to αξ + βζ. Thus the closure axioms hold. The usual remark concerning the other linear space axioms applies.

15.5.15 n. 22 (p. 555)
The set U of all elements of R³ with their third component equal to 0 is a real linear space. Indeed, by linear combination the third component remains equal to 0; U is the kernel of the linear function R³ → R, (x, y, z) ↦ z.
15.5.16 n. 23 (p. 555)
The set W of all elements of R³ with their second or third component equal to 0 is not a linear subspace of R³. For example, (0, 1, 0) and (0, 0, 1) are in the set, but their sum (0, 1, 1) is not, so the first closure axiom fails. The second closure axiom and the existence-of-negatives axiom hold, as do the other axioms. W is not even an affine subspace, nor is it convex; however, it is a cone.

15.5.17 n. 24 (p. 555)
The set π of all elements of R³ with their second component equal to the first multiplied by 5 is a real linear space, being the kernel of the linear function R³ → R, (x, y, z) ↦ 5x − y.
15.5.18 n. 25 (p. 555)
The set ` of all elements (x, y, z) of R³ such that 3x + 4y = 1 and z = 0 (the line through the point P ≡ (−1, 1, 0) with direction vector v ≡ (4, −3, 0)) is an affine subspace of R³, hence a convex set, but not a linear subspace of R³, hence not a linear space itself. Indeed, for any two triples (x, y, z) and (u, v, w) of R³, and any two real numbers α and β,
{3x + 4y = 1, z = 0} and {3u + 4v = 1, w = 0} ⇒ {3 (αx + βu) + 4 (αy + βv) = α + β, αz + βw = 0}
Thus both closure axioms fail for ` (as soon as α + β ≠ 1). Both existence axioms also fail, since neither the null triple, nor the opposite of any triple in `, belongs to `. The associative and distributive laws have a defective status too, since they make sense in ` only under quite restrictive assumptions on the elements of R³ or the real numbers appearing in them.
15.5.19 n. 26 (p. 555) The set r of all elements (x, y, z) of R3 which are scalar multiples of (1, 2, 3) (the line through the origin having (1, 2, 3) as direction vector) is a real linear space. For any
two elements (h, 2h, 3h) and (k, 2k, 3k) of r, and for any two real numbers α and β, the linear combination
α (h, 2h, 3h) + β (k, 2k, 3k) = (αh + βk) (1, 2, 3)
belongs to r. Alternatively, r is the kernel of the linear function R³ → R², (x, y, z) ↦ (2x − y, 3x − z), and hence a real linear space.

15.5.20 n. 27 (p. 555)
The set of solutions of the linear homogeneous system of equations A (x, y, z)′ = 0 is the kernel of the linear function (x, y, z) ↦ A (x, y, z)′, and hence a real linear space.

15.5.21 n. 28 (p. 555)
The subset of Rⁿ of all the linear combinations of two given vectors a and b is a vector subspace of Rⁿ, namely span {a, b}. It is immediate to check that every linear combination of linear combinations of a and b is again a linear combination of a and b.

15.6 Subspaces of a linear space

15.7 Dependent and independent sets in a linear space

15.8 Bases and dimension

15.9 Exercises

15.9.1 n. 1 (p. 560)
The set S1 of all elements of R³ with their first coordinate equal to 0 is a linear subspace of R³ (see exercise 22 above). S1 is the coordinate YZ-plane. A standard basis for it is {(0, 1, 0), (0, 0, 1)}: it is clear that the two vectors are linearly independent, and that for every real numbers y and z,
(0, y, z) = y (0, 1, 0) + z (0, 0, 1)
Its dimension is 2.
15.9.2 n. 2 (p. 560) The set S2 of all elements of R³ with the first and second coordinate summing up to 0 is a linear subspace of R³ (S2 is the plane containing the Z-axis and the bisectrix of the even-numbered quadrants of the XY-plane). A basis for it is {(1, −1, 0), (0, 0, 1)}. It is clear that the two vectors are linearly independent, and that for every three real numbers x, y and z such that x + y = 0

 (x, y, z) = x(1, −1, 0) + z(0, 0, 1)

Thus its dimension is 2.

15.9.3 n. 3 (p. 560) The set S3 of all elements of R³ with the coordinates summing up to 0 is a linear subspace of R³ (S3 is the plane through the origin with normal vector n ≡ (1, 1, 1)). A basis for it is {(1, 0, −1), (0, 1, −1)}. It is clear that the two vectors are linearly independent, and that for every three real numbers x, y and z such that x + y + z = 0

 (x, y, z) = x(1, 0, −1) + y(0, 1, −1)

Thus its dimension is 2.

15.9.4 n. 4 (p. 560) The set S4 of all elements of R³ with the first two coordinates equal is a linear subspace of R³ (S4 is the plane containing the Z-axis and the bisectrix of the odd-numbered quadrants of the XY-plane). A basis for it is {(1, 1, 0), (0, 0, 1)}. It is clear that the two vectors are linearly independent, and that for every three real numbers x, y and z such that x = y

 (x, y, z) = x(1, 1, 0) + z(0, 0, 1)

Thus its dimension is 2.

15.9.5 n. 5 (p. 560) The set S5 of all elements of R³ with all the coordinates equal is a linear subspace of R³ (S5 is the line through the origin with direction vector d ≡ (1, 1, 1)). A basis for it is {d}, and its dimension is 1.

15.9.6 n. 6 (p. 560) The set S6 of all elements of R³ with the first coordinate equal either to the second or to the third is not a linear subspace of R³ (S6 is the union of the plane S4 of exercise 4 and the plane containing the Y-axis and the bisectrix of the odd-numbered quadrants of the XZ-plane). For example, (1, 1, 0) and (1, 0, 1) both belong to S6, but their sum (2, 1, 1) does not.

15.9.7 n. 7 (p. 560) The set S7 of all elements of R³ with the first and second coordinates having identical square is not a linear subspace of R³ (S7 is the union of the two planes containing the Z-axis and one of the bisectrices of the odd- and even-numbered quadrants of the XY-plane). For example, (1, 1, 0) and (1, −1, 0) both belong to S7, but their sum (2, 0, 0) and their semisum (1, 0, 0) do not. Thus S7 is not an affine subspace of R³, and it is not even convex. However, S7 is a cone, since it is closed with respect to the external product (multiplication by arbitrary real numbers).

15.9.8 n. 8 (p. 560) The set S8 of all elements of R³ with the first and second coordinates summing up to 1 is not a linear subspace of R³ (S8 is the vertical plane containing the line through the points P ≡ (1, 0, 0) and Q ≡ (0, 1, 0)). For example, for every α ≠ 1, α(1, 0, 0) and α(0, 1, 0) do not belong to S8. However, S8 is an affine subspace of R³, since

 ∀(x, y, z) ∈ R³, ∀(u, v, w) ∈ R³, ∀α ∈ R,
 (x + y = 1) ∧ (u + v = 1) ⇒ [(1 − α)x + αu] + [(1 − α)y + αv] = 1

15.9.9 n. 9 (p. 560) See exercise 26, p. 555.

15.9.10 n. 10 (p. 560) The set S10 of all elements (x, y, z) of R³ such that

 x + y + z = 0, x − y − z = 0

is a line containing the origin, a subspace of R³ of dimension 1, and hence a linear space. A base for it is {(0, 1, −1)}.

15.9.11 n. 11 (p. 560) The set S11 of all polynomials of degree not exceeding n, and taking value 0 at 0, is a linear subspace of the set of all polynomials of degree not exceeding n, and hence a linear space. It is clear that any linear combination of polynomials in S11 takes value 0 at 0. If P is any polynomial of degree not exceeding n,

 P : t ↦ p0 + Σ_{h∈n} p_h t^h   (15.1)

then P belongs to S11 if and only if p0 = 0. A basis for S11 is

 B ≡ {t ↦ t^h}_{h∈n}

Indeed, every polynomial belonging to S11 is a linear combination of B; moreover, by the general principle of polynomial identity (which is more or less explicitly proved in example 6, p. 558), a linear combination of B is the null polynomial if and only if all its coefficients are null. It follows that dim S11 = n.
15.9.12 n. 12 (p. 560) The set S12 of all polynomials of degree not exceeding n, with first derivative taking value 0 at 0, is a linear subspace of the set of all polynomials of degree not exceeding n. Indeed, for any two such polynomials P and Q, and any two real numbers α and β,

 (αP + βQ)′(0) = αP′(0) + βQ′(0) = α·0 + β·0 = 0

A polynomial P as in (15.1) belongs to S12 if and only if p1 = 0. A basis for S12 is

 B ≡ {t ↦ 1} ∪ {t ↦ t^{h+1}}_{h∈n−1}

That B is a basis can be seen as in exercise 11. Thus dim S12 = n.

15.9.13 n. 13 (p. 560) The set S13 of all polynomials of degree not exceeding n, with second derivative taking value 0 at 0, is a linear subspace of the set of all polynomials of degree not exceeding n. This is proved exactly as in the previous exercise (just replace the first derivative with the second). A polynomial P as in (15.1) belongs to S13 if and only if p2 = 0. A basis for S13 is

 B ≡ {t ↦ 1, t ↦ t} ∪ {t ↦ t^{h+2}}_{h∈n−2}

That B is a basis can be seen as in exercise 11. Thus dim S13 = n.

15.9.14 n. 14 (p. 560) The set S14 of all polynomials of degree not exceeding n, with first derivative taking value at 0 which is the opposite of the polynomial value at 0, is a linear subspace of the set of all polynomials of degree not exceeding n. Indeed, for any two such polynomials P and Q, and any two real numbers α and β,

 (αP + βQ)(0) + (αP + βQ)′(0) = αP(0) + βQ(0) + αP′(0) + βQ′(0)
  = α[P(0) + P′(0)] + β[Q(0) + Q′(0)] = α·0 + β·0 = 0

A polynomial P as in (15.1) belongs to S14 if and only if p0 + p1 = 0. A basis for S14 is

 B ≡ {t ↦ 1 − t} ∪ {t ↦ t^{h+1}}_{h∈n−1}

Indeed, if α is the n-tuple of coefficients of a linear combination R of B, then

 R : t ↦ α1 − α1 t + Σ_{h∈n−1} α_{h+1} t^{h+1}

By the general principle of polynomial identity, the n-tuple α of coefficients of any linear combination of B spanning the null vector must satisfy the conditions

 α1 = 0, α_{h+1} = 0 (h ∈ n − 1)

that is, the combination must be the trivial one. If P is any polynomial such that p0 + p1 = 0, P belongs to span B, since by the position

 α1 = p0 = −p1, α_{h+1} = p_{h+1} (h ∈ n − 1)

the linear combination of B with α as n-tuple of coefficients is equal to P. It follows that B is a basis of S14, and hence that dim S14 = n.

15.9.15 n. 15 (p. 560) The set S15 of all polynomials of degree not exceeding n, and taking the same value at 0 and at 1, is a linear subspace of the set of all polynomials of degree not exceeding n, and hence a linear space. This can be seen exactly as in exercise 3, p. 555. A polynomial P as in (15.1) belongs to S15 if and only if

 p0 = p0 + Σ_{h∈n} p_h

that is, if and only if

 Σ_{h∈n} p_h = 0

A basis for S15 is

 B ≡ {t ↦ 1} ∪ {t ↦ (1 − t)t^h}_{h∈n−1}

and the dimension of S15 is n.

15.9.16 n. 16 (p. 560) The set S16 of all polynomials of degree not exceeding n, and taking the same value at 0 and at 2, is a linear subspace of the set of all polynomials of degree not exceeding n, and hence a linear space. This can be seen exactly as in exercise 3, p. 555. A polynomial P as in (15.1) belongs to S16 if and only if

 p0 = p0 + Σ_{h∈n} 2^h p_h

that is, if and only if

 Σ_{h∈n} 2^h p_h = 0

A basis for S16 is

 B ≡ {t ↦ 1} ∪ {t ↦ (2 − t)t^h}_{h∈n−1}

and the dimension of S16 is n.
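A small sketch (function names are mine) double-checking the membership conditions for S15 and S16 with exact rational arithmetic:

```python
from fractions import Fraction

# A polynomial is a coefficient list [p0, p1, ..., pn]; evaluate by Horner.
def peval(p, t):
    acc = Fraction(0)
    for c in reversed(p):
        acc = acc * t + c
    return acc

n = 4
# Claimed basis of S15: the constant 1 and t -> (1 - t) t^h = t^h - t^{h+1}.
basis15 = [[1]] + [[0] * h + [1, -1] for h in range(1, n)]
assert all(peval(b, 0) == peval(b, 1) for b in basis15)  # same value at 0 and 1

# S16 membership: sum over h >= 1 of 2^h p_h must vanish.
p = [7, 2, 1, -1, 0]   # 7 + 2t + t^2 - t^3; here 2*2 + 4*1 - 8*1 = 0
cond = sum(Fraction(2) ** h * c for h, c in enumerate(p) if h >= 1)
print(cond == 0, peval(p, 0) == peval(p, 2))   # True True
```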
15.9.17 n. 22 (p. 560) (b) It has already been shown in the notes that

 lin S ≡ ∩ { W : W subspace of V, S ⊆ W }
  = { v : ∃n ∈ N, ∃u ≡ (u_i)_{i∈n} ∈ Vⁿ, ∃a ≡ (α_i)_{i∈n} ∈ Fⁿ, v = Σ_{i∈n} α_i u_i }

In particular, if T is a subspace of V and it contains S, then T contains lin S.
(c) If S = lin S, then of course S is a subspace of V, because lin S is so. Conversely, if S is a subspace of V, then S is one among the subspaces appearing in the definition of lin S, and hence lin S ⊆ S; since S ⊆ lin S always, it follows that S = lin S.
(e) If S and T are subspaces of V, then by point c above lin S = S and lin T = T, and hence S ∩ T = lin S ∩ lin T. Since the intersection of any family of subspaces is a subspace, S ∩ T is a subspace of V.
(g) Let V ≡ R², S ≡ {(1, 0), (0, 1)}, T ≡ {(1, 0), (1, 1)}. Then S ∩ T = {(1, 0)},

 L(S) = L(T) = R², L(S ∩ T) = {(x, y) : y = 0}

15.9.18 n. 23 (p. 560) (a) Let

 f : R → R, x ↦ 1
 g : R → R, x ↦ e^{ax}
 h : R → R, x ↦ e^{bx}
 0 : R → R, x ↦ 0

and let (u, v, w) ∈ R³ be such that uf + vg + wh = 0.
In particular, for x = 0, x = 1, and x = 2, we have

 u + v + w = 0
 u + e^a v + e^b w = 0
 u + e^{2a} v + e^{2b} w = 0

The determinant of the coefficient matrix of the above system is

 |1 1 1; 1 e^a e^b; 1 e^{2a} e^{2b}| = |1 1 1; 0 e^a − 1 e^b − 1; 0 e^{2a} − 1 e^{2b} − 1|
  = (e^a − 1)(e^b − 1)[(e^b + 1) − (e^a + 1)]
  = (e^a − 1)(e^b − 1)(e^b − e^a)   (15.2)

Since by assumption a ≠ b, at most one of the two numbers can be equal to zero. It is convenient to state and prove the following (very) simple

Lemma 4 Let X be a set containing at least two distinct elements, and let f : X → R be a nonnull constant function, with Im f ≡ {y_f}. For any function g : X → R, the couple (f, g) is linearly dependent if and only if g is constant.

Proof. If g is constant, let Im g ≡ {y_g}. Then the function y_g f − y_f g is the null function. Conversely, let (u, v) ∈ R² ∼ (0, 0) be such that uf + vg is the null function. Then

 ∀x ∈ X, u y_f + v g(x) = 0

Since f is nonnull, y_f ≠ 0. This yields v = 0 ⇒ u = 0, and hence v ≠ 0. Thus

 ∀x ∈ X, g(x) = −(u/v) y_f

and g is constant.

It immediately follows from the above lemma (and from the assumption a ≠ b) that if either a = 0 or b = 0, the triple (f, g, h) is linearly dependent: if a = 0, then g = f, and if b = 0, then h = f (it suffices to take (u, v, w) ≡ (1, −1, 0) or (u, v, w) ≡ (1, 0, −1), respectively). Thus if either a or b is null, dim lin{f, g, h} = 2. If none of them is null, then, since a ≠ b, each factor of (15.2) is different from zero; the system above has only the trivial solution, the triple (f, g, h) is linearly independent, and dim lin{f, g, h} = 3.

(b) The two functions

 f : x ↦ e^{ax}, g : x ↦ xe^{ax}

are linearly independent, since

 [∀x ∈ R, αe^{ax} + βxe^{ax} = 0] ⇔ [∀x ∈ R, (α + βx)e^{ax} = 0] ⇔ [∀x ∈ R, α + βx = 0] ⇔ α = β = 0

Notice that the argument holds even in the case a = 0. Thus dim lin{f, g} = 2.

(c) Arguing as in point a, let (u, v, w) ∈ R³ be such that uf + vg + wh = 0, where

 f : x ↦ 1, g : x ↦ e^{ax}, h : x ↦ xe^{ax}

In particular, for x = 0, x = 1, and x = −1, we have

 u + v = 0
 u + e^a v + e^a w = 0
 u + e^{−a} v − e^{−a} w = 0

a homogeneous system of equations whose coefficient matrix has determinant

 |1 1 0; 1 e^a e^a; 1 e^{−a} −e^{−a}| = e^a + e^{−a} − 2 = (e^{a/2} − e^{−a/2})²

Thus if a ≠ 0 the above determinant is different from zero, yielding (u, v, w) = (0, 0, 0); the triple (f, g, h) is linearly independent, and dim lin{f, g, h} = 3. On the other hand, if a = 0 then f = g and h = Id_R, so that dim lin{f, g, h} = 2.

(f) The two functions

 f : x ↦ sin x, g : x ↦ cos x   (15.3)

are linearly independent. Indeed, for every two real numbers α and β which are not both null,

 [∀x ∈ R, α sin x + β cos x = 0]
 ⇔ [∀x ∈ R, (α/√(α² + β²)) sin x + (β/√(α² + β²)) cos x = 0]
 ⇔ [∀x ∈ R, sin(γ + x) = 0]

where

 γ ≡ sign(β) arccos(α/√(α² + β²))

Since the sin function is not identically null, the last condition cannot hold. Thus dim lin{x ↦ sin x, x ↦ cos x} = 2.

(h) From the trigonometric addition formulas,

 ∀x ∈ R, cos 2x = cos²x − sin²x = 1 − 2 sin²x

Let then

 f : x ↦ 1, g : x ↦ cos 2x, h : x ↦ sin²x

The triple (f, g, h) is linearly dependent, since

 f − g − 2h = 0

By the lemma discussed at point a, dim lin{f, g} = 2, and hence dim lin{f, g, h} = 2.

15.10 Inner products. Euclidean spaces. Norms

15.11 Orthogonality in a euclidean space
15.12 Exercises

15.12.1 n. 9 (p. 567) With the inner product ⟨f, g⟩ ≡ ∫_{−1}^{1} f(t)g(t) dt, and with u1 : t ↦ 1, u2 : t ↦ t, u3 : t ↦ 1 + t,

 ⟨u1, u2⟩ = ∫_{−1}^{1} t dt = [t²/2]_{−1}^{1} = 1/2 − 1/2 = 0
 ⟨u2, u3⟩ = ∫_{−1}^{1} (t + t²) dt = 0 + [t³/3]_{−1}^{1} = 1/3 − (−1/3) = 2/3
 ⟨u1, u3⟩ = ∫_{−1}^{1} (1 + t) dt = [t]_{−1}^{1} + 0 = 1 − (−1) = 2
 ∥u1∥ = ( ∫_{−1}^{1} 1 dt )^{1/2} = √2
 ∥u2∥ = ( ∫_{−1}^{1} t² dt )^{1/2} = √(2/3)
 ∥u3∥ = ( ∫_{−1}^{1} (1 + 2t + t²) dt )^{1/2} = √(2 + 2/3) = √(8/3)

Hence

 ∠(u1, u2) = π/2
 cos ∠(u2, u3) = (2/3) / (√(2/3) √(8/3)) = 1/2, so ∠(u2, u3) = π/3
 cos ∠(u1, u3) = 2 / (√2 √(8/3)) = √3/2, so ∠(u1, u3) = π/6
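The angle computations can be double-checked by computing each inner product directly from polynomial coefficients (a sketch; the helper names and the coefficient encoding of u1, u2, u3 are mine):

```python
from math import sqrt

# <f, g> = ∫_{-1}^{1} f g dt; for t^k the integral is 2/(k+1) if k is even, 0 if odd.
def ip(f, g):  # f, g are coefficient lists [c0, c1, ...]
    prod = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            prod[i + j] += a * b
    return sum(2 * c / (k + 1) for k, c in enumerate(prod) if k % 2 == 0)

u1, u2, u3 = [1], [0, 1], [1, 1]   # 1, t, 1 + t
print(ip(u1, u2) == 0)                                                   # angle pi/2
print(abs(ip(u2, u3) / sqrt(ip(u2, u2) * ip(u3, u3)) - 0.5) < 1e-12)     # angle pi/3
print(abs(ip(u1, u3) / sqrt(ip(u1, u1) * ip(u3, u3)) - sqrt(3)/2) < 1e-12)  # pi/6
```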
15.12.2 n. 11 (p. 567) (a) For each n ∈ N let

 I_n ≡ ∫_0^{+∞} e^{−t} t^n dt

The integral involved in the definition of I_n is always convergent, since for each n ∈ N

 lim_{x→+∞} e^{−x} x^n = 0

Now

 I_0 = ∫_0^{+∞} e^{−t} dt = lim_{x→+∞} [−e^{−t}]_0^x = lim_{x→+∞} (−e^{−x} + 1) = 1

and, integrating by parts,

 I_{n+1} = ∫_0^{+∞} e^{−t} t^{n+1} dt
  = lim_{x→+∞} { [−e^{−t} t^{n+1}]_0^x + (n + 1) ∫_0^x e^{−t} t^n dt }
  = −lim_{x→+∞} e^{−x} x^{n+1} + (n + 1) ∫_0^{+∞} e^{−t} t^n dt
  = (n + 1) I_n

It follows that

 ∀n ∈ N, I_n = n!

Let now

 f : t ↦ α_0 + Σ_{i∈m} α_i t^i, g : t ↦ β_0 + Σ_{j∈n} β_j t^j

be two real polynomials, of degree m and n respectively. The product fg is a real polynomial of degree m + n, containing for each k ∈ m + n a monomial of degree k of the form α_i β_j t^{i+j} = α_i β_j t^k whenever i + j = k. Thus

 fg = α_0 β_0 + Σ_{k∈m+n} Σ_{i∈m, j∈n, i+j=k} α_i β_j t^k

and the scalar product of f and g results in the sum of m + n + 1 converging integrals:

 ⟨f, g⟩ = ∫_0^{+∞} e^{−t} ( α_0 β_0 + Σ_{k∈m+n} Σ_{i+j=k} α_i β_j t^k ) dt
  = α_0 β_0 I_0 + Σ_{k∈m+n} Σ_{i+j=k} α_i β_j I_k
  = α_0 β_0 + Σ_{k∈m+n} Σ_{i∈m, j∈n, i+j=k} α_i β_j k!

(b) If for each n ∈ N we set x_n : t ↦ t^n, then for each m ∈ N and each n ∈ N

 ⟨x_m, x_n⟩ = ∫_0^{+∞} e^{−t} t^m t^n dt = I_{m+n} = (m + n)!

(c) If

 f : t ↦ (t + 1)², g : t ↦ t² + 1

then

 fg : t ↦ t⁴ + 2t³ + 2t² + 2t + 1

and

 ⟨f, g⟩ = I_4 + 2I_3 + 2I_2 + 2I_1 + I_0 = 4! + 2·3! + 2·2! + 2·1! + 1 = 43
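Since ⟨f, g⟩ reduces to Σ_k c_k k! for the coefficients c_k of the product polynomial, the value 43 of part (c) can be re-derived mechanically; a minimal sketch (helper names are mine):

```python
from math import factorial

# <f, g> = ∫_0^∞ e^{-t} f(t) g(t) dt on polynomials, using I_k = k!.

def polymul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def inner(f, g):
    return sum(c * factorial(k) for k, c in enumerate(polymul(f, g)))

f = [1, 2, 1]   # (t + 1)^2
g = [1, 0, 1]   # t^2 + 1
print(inner(f, g))             # 43, as computed in the text
print(inner([0, 1], [0, 1]))   # <x_1, x_1> = (1 + 1)! = 2
```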
(d) If

 f : t ↦ t + 1, g : t ↦ αt + β

then

 fg : t ↦ αt² + (α + β)t + β

and

 ⟨f, g⟩ = αI_2 + (α + β)I_1 + βI_0 = 2α + (α + β) + β = 3α + 2β

so that

 ⟨f, g⟩ = 0 ⇔ (α, β) = (2γ, −3γ) (γ ∈ R)

that is, the set of all polynomials of degree less than or equal to 1 which are orthogonal to f is

 P1 ∩ perp{f} = {g_γ : t ↦ 2γt − 3γ}_{γ∈R}

15.13 Construction of orthogonal sets. The Gram-Schmidt process

15.14 Orthogonal complements. Projections

15.15 Best approximation of elements in a euclidean space by elements in a finite-dimensional subspace

15.16 Exercises

15.16.1 n. 1 (p. 576) (a) By direct inspection of the coordinates of the three given vectors, it is seen that they all belong to the plane π of equation x − z = 0. Since no two of them are parallel, lin{x1, x2, x3} = π and dim lin{x1, x2, x3} = 2. A unit vector in π is given by

 v ≡ (1/3)(2, 1, 2)

Every vector belonging to the line π ∩ perp{v} must have the form (x, −4x, x), with norm 3√2 |x|. Thus a second unit vector which together with v spans π, and which is orthogonal to v, is

 w ≡ (1/(3√2))(1, −4, 1)

The required orthonormal basis for lin{x1, x2, x3} is {v, w}.
(b) The answer is identical to the one just given for case a.

15.16.2 n. 2 (p. 576) (a) First solution. It is easily checked that

 x2 ∉ lin{x1}, x3 ∉ lin{x1, x2}, 0 = x1 − x2 + x3 − x4

Thus dim W ≡ dim lin{x1, x2, x3, x4} = 3, and three vectors are required for any basis of W; an orthonormal one is obtained by applying the Gram-Schmidt process to (x1, x2, x3):

 y1 ≡ x1/∥x1∥
 y2 ≡ (x2 − ⟨x2, y1⟩y1) / ∥x2 − ⟨x2, y1⟩y1∥
 y3 ≡ (x3 − ⟨x3, y1⟩y1 − ⟨x3, y2⟩y2) / ∥x3 − ⟨x3, y1⟩y1 − ⟨x3, y2⟩y2∥

Second solution. More easily, it suffices to realize that

 lin{x1, x2, x3, x4} = H_{(1,−1,1,−1),0} ≡ {(x, y, z, t) ∈ R⁴ : x − y + z − t = 0}

the hyperplane of R⁴ through the origin with (1, −1, 1, −1) as normal vector, and to directly exhibit two mutually orthogonal unit vectors in it:

 u ≡ (√2/2)(1, 1, 0, 0), v ≡ (√2/2)(0, 0, 1, 1)

It remains to find a unit vector in H_{(1,−1,1,−1),0} ∩ perp{u, v}. The following equations characterize it:

 x − y + z − t = 0, x + y = 0, z + t = 0

yielding (by addition) 2(x + z) = 0, and hence

 w ≡ (1/2)(1, −1, −1, 1)  or  w ≡ −(1/2)(1, −1, −1, 1)

Thus an orthonormal basis for lin{x1, x2, x3, x4} is {u, v, w}.
(b) It is easily checked that

 x2 ∉ lin{x1}, 0 = 2x1 − x2 − x3

Thus dim W ≡ dim lin{x1, x2, x3} = 2, and two vectors are required for any basis of W. The following vectors form an orthonormal one:

 y1 ≡ x1/∥x1∥ = (√3/3)(1, 1, 0, 1)
 y2 ≡ (x2 − ⟨x2, y1⟩y1) / ∥x2 − ⟨x2, y1⟩y1∥ = [(1, 0, 2, 1) − (2/3)(1, 1, 0, 1)] / ∥(1/3, −2/3, 2, 1/3)∥ = (√42/42)(1, −2, 6, 1)
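A sketch of the Gram-Schmidt computation of part (b), assuming (as reconstructed above) x1 = (1, 1, 0, 1) and x2 = (1, 0, 2, 1); the helper names are mine:

```python
from math import sqrt

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def scale(c, a): return [c * x for x in a]
def sub(a, b): return [x - y for x, y in zip(a, b)]
def unit(a): return scale(1 / sqrt(dot(a, a)), a)

x1 = [1, 1, 0, 1]
x2 = [1, 0, 2, 1]
y1 = unit(x1)
y2 = unit(sub(x2, scale(dot(x2, y1), y1)))   # Gram-Schmidt step

# Rescaling y2 by sqrt(42) should recover the integer vector (1, -2, 6, 1).
print([round(v, 6) for v in scale(sqrt(42), y2)])   # [1.0, -2.0, 6.0, 1.0]
```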
15.16.3 n. 3 (p. 576) Let

 y_0 : t ↦ 1/√π, y_n : t ↦ √(2/π) cos nt  (n ≥ 1)

Then

 ⟨y_0, y_0⟩ = ∫_0^π (1/√π)(1/√π) dt = (1/π)·π = 1
 ⟨y_n, y_n⟩ = ∫_0^π √(2/π) cos nt · √(2/π) cos nt dt = (2/π) ∫_0^π cos² nt dt

Let

 I_n ≡ ∫_0^π cos² nt dt

Integrating by parts,

 I_n = ∫_0^π cos nt cos nt dt
  = [cos nt (sin nt)/n]_0^π − ∫_0^π (−n sin nt)(sin nt)/n dt
  = ∫_0^π sin² nt dt
  = ∫_0^π (1 − cos² nt) dt
  = π − I_n
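The identity I_n = π/2 can be confirmed by numerical quadrature (a sketch; the midpoint rule and step count are my choices):

```python
from math import cos, pi

# Composite-midpoint check that I_n = ∫_0^π cos²(nt) dt = π/2 for n >= 1,
# so that each y_n = sqrt(2/π) cos(nt) has unit norm.

def I(n, steps=10000):
    h = pi / steps
    return h * sum(cos(n * (i + 0.5) * h) ** 2 for i in range(steps))

for n in (1, 2, 5):
    print(abs(I(n) - pi / 2) < 1e-9)   # True for each n
```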
Thus

 I_n = π/2, ⟨y_n, y_n⟩ = 1

and every function y_n has norm equal to one. To check mutual orthogonality, let us compute

 ⟨y_0, y_n⟩ = (1/√π)√(2/π) ∫_0^π cos nt dt = (√2/π) [sin nt / n]_0^π = 0

and, for distinct positive integers m and n, integrating by parts twice,

 ∫_0^π cos mt cos nt dt = [cos mt (sin nt)/n]_0^π + (m/n) ∫_0^π sin mt sin nt dt
  = (m/n) { [−sin mt (cos nt)/n]_0^π + (m/n) ∫_0^π cos mt cos nt dt }
  = (m/n)² ∫_0^π cos mt cos nt dt

Since m and n are distinct positive integers, (m/n)² is different from 1, and the equation

 ⟨y_m, y_n⟩ = (m/n)² ⟨y_m, y_n⟩

can hold only if ⟨y_m, y_n⟩ = 0. That the set {y_n}_{n=0}^{+∞} generates the same space generated by the set {x_n}_{n=0}^{+∞} is trivial, since, for each n, y_n is a multiple of x_n.

15.16.4 n. 4 (p. 576) We have

 ⟨y_0, y_0⟩ = ∫_0^1 1 dt = 1
 ⟨y_1, y_1⟩ = 3 ∫_0^1 (4t² − 4t + 1) dt = 3 [ (4/3)t³ − 2t² + t ]_0^1 = 1
 ⟨y_2, y_2⟩ = 5 ∫_0^1 (36t⁴ − 72t³ + 48t² − 12t + 1) dt = 5 [ (36/5)t⁵ − 18t⁴ + 16t³ − 6t² + t ]_0^1 = 1
which proves that the three given functions are unit vectors with respect to the given inner product. Moreover,

 ⟨y_0, y_1⟩ = √3 ∫_0^1 (2t − 1) dt = √3 [t² − t]_0^1 = 0
 ⟨y_0, y_2⟩ = √5 ∫_0^1 (6t² − 6t + 1) dt = √5 [2t³ − 3t² + t]_0^1 = 0
 ⟨y_1, y_2⟩ = √15 ∫_0^1 (12t³ − 18t² + 8t − 1) dt = √15 [3t⁴ − 6t³ + 4t² − t]_0^1 = 0

which proves that the three given functions are mutually orthogonal. Thus {y_0, y_1, y_2} is an orthonormal set, and hence linearly independent. Finally,

 x_0 = y_0
 x_1 = (1/2) y_0 + (√3/6) y_1
 x_2 = (1/3) y_0 + (√3/6) y_1 + (√5/30) y_2

which proves that lin{y_0, y_1, y_2} = lin{x_0, x_1, x_2}.
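The orthonormality relations can be re-checked exactly: since ∫_0^1 t^k dt = 1/(k + 1), every inner product is rational once the radicals are factored out. A sketch (names mine), writing q0 = 1, q1 = 2t − 1, q2 = 6t² − 6t + 1 so that y_i = √(2i + 1)·q_i:

```python
from fractions import Fraction

def integral01(p):   # p = [c0, c1, ...]; ∫_0^1 of the polynomial
    return sum(Fraction(c, k + 1) for k, c in enumerate(p))

def polymul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

q0, q1, q2 = [1], [-1, 2], [1, -6, 6]
print(integral01(polymul(q0, q1)))       # 0  (orthogonality)
print(integral01(polymul(q0, q2)))       # 0
print(integral01(polymul(q1, q2)))       # 0
print(3 * integral01(polymul(q1, q1)))   # 1  (||y1||^2 = 3 ∫ q1^2)
print(5 * integral01(polymul(q2, q2)))   # 1
```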
Chapter 16

LINEAR TRANSFORMATIONS AND MATRICES

16.1 Linear transformations

16.2 Null space and range

16.3 Nullity and rank

16.4 Exercises

16.4.1 n. 1 (p. 582) T : (x, y) ↦ (y, x) is linear, since for every (x, y) ∈ R², for every (u, v) ∈ R², for every α ∈ R, and for every β ∈ R,

 T[α(x, y) + β(u, v)] = T[(αx + βu, αy + βv)] = (αy + βv, αx + βu) = α(y, x) + β(v, u) = αT[(x, y)] + βT[(u, v)]

The null space of T is the trivial subspace {(0, 0)}, and the range of T is R²; hence

 rank T = 2, nullity T = 0

T is the orthogonal symmetry with respect to the bisectrix r of the first and third quadrant, since for every (x, y) ∈ R², T(x, y) − (x, y) = (y − x, x − y) is orthogonal to r, and the midpoint of [(x, y), T(x, y)], which is ((x + y)/2, (x + y)/2), lies on r.

16.4.2 n. 2 (p. 582) T : (x, y) ↦ (x, −y) is linear, since for every (x, y), (u, v) ∈ R² and every α, β ∈ R,

 T[α(x, y) + β(u, v)] = (αx + βu, −(αy + βv)) = α(x, −y) + β(u, −v) = αT[(x, y)] + βT[(u, v)]

The null space of T is the trivial subspace {(0, 0)}, and the range of T is R²; hence

 rank T = 2, nullity T = 0

T is the orthogonal symmetry with respect to the X-axis.

16.4.3 n. 3 (p. 582) T : (x, y) ↦ (x, 0) is linear, since for every (x, y), (u, v) ∈ R² and every α, β ∈ R,

 T[α(x, y) + β(u, v)] = (αx + βu, 0) = α(x, 0) + β(u, 0) = αT[(x, y)] + βT[(u, v)]

The null space of T is the Y-axis, and the range of T is the X-axis; hence

 rank T = 1, nullity T = 1

T is the orthogonal projection on the X-axis.

16.4.4 n. 4 (p. 582) T : (x, y) ↦ (x, x) is linear, since for every (x, y), (u, v) ∈ R² and every α, β ∈ R,

 T[α(x, y) + β(u, v)] = (αx + βu, αx + βu) = α(x, x) + β(u, u) = αT[(x, y)] + βT[(u, v)]

The null space of T is the Y-axis, and the range of T is the bisectrix of the first and third quadrant; hence

 rank T = nullity T = 1

16.4.5 n. 5 (p. 582) T : (x, y) ↦ (x², y²) is not a linear function; indeed,

 T(x + u, 0) = ((x + u)², 0) = (x² + 2xu + u², 0), whereas T(x, 0) + T(u, 0) = (x² + u², 0)

and for x and u both different from 0 the two results differ.

16.4.6 n. 6 (p. 582) T : (x, y) ↦ (e^x, e^y) is not linear; indeed, for every (x, y), (u, v) ∈ R² and every α, β ∈ R,

 T[α(x, y) + β(u, v)] = (e^{αx+βu}, e^{αy+βv}) = ((e^x)^α (e^u)^β, (e^y)^α (e^v)^β)

whereas

 αT[(x, y)] + βT[(u, v)] = α(e^x, e^y) + β(e^u, e^v) = (αe^x + βe^u, αe^y + βe^v)

so that, e.g., when x = y = 0 and u = v = α = β = 1,

 T[(0, 0) + (1, 1)] = (e, e), whereas T[(0, 0)] + T[(1, 1)] = (1 + e, 1 + e)

T is not an affine function either.

16.4.7 n. 7 (p. 582) T : (x, y) ↦ (x, 1) is affine, but not linear: for every (x, y), (u, v) ∈ R² and every α, β ∈ R,

 T[α(x, y) + β(u, v)] = (αx + βu, 1), whereas αT[(x, y)] + βT[(u, v)] = α(x, 1) + β(u, 1) = (αx + βu, α + β)

16.4.8 n. 8 (p. 582) T : (x, y) ↦ (x + 1, y + 1) is affine, but not linear: for every (x, y), (u, v) ∈ R² and every α, β ∈ R,

 T[α(x, y) + β(u, v)] = (αx + βu + 1, αy + βv + 1)

whereas

 αT[(x, y)] + βT[(u, v)] = α(x + 1, y + 1) + β(u + 1, v + 1) = (αx + βu + α + β, αy + βv + α + β)
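The failures of additivity in n. 5 and n. 6 can be exhibited on concrete points (a sketch; the test points and names are my choices):

```python
from math import exp

# Additivity T(p + q) = T(p) + T(q) fails for both maps.
def T5(x, y): return (x * x, y * y)
def T6(x, y): return (exp(x), exp(y))
def add(p, q): return (p[0] + q[0], p[1] + q[1])

p, q = (1.0, 0.0), (2.0, 0.0)
print(T5(*add(p, q)), add(T5(*p), T5(*q)))    # (9.0, 0.0) vs (5.0, 0.0)
print(T6(*add(p, q)) == add(T6(*p), T6(*q)))  # False: e^3 != e^1 + e^2
```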
16.4.9 n. 9 (p. 582) T : (x, y) ↦ (x − y, x + y) is linear, since for every (x, y), (u, v) ∈ R² and every α, β ∈ R,

 T[α(x, y) + β(u, v)] = T[(αx + βu, αy + βv)] = (αx + βu − αy − βv, αx + βu + αy + βv) = α(x − y, x + y) + β(u − v, u + v) = αT[(x, y)] + βT[(u, v)]

The null space of T is the trivial subspace {(0, 0)}, and the range of T is R²; hence

 rank T = 2, nullity T = 0

The matrix representing T is

 A ≡ [1 −1; 1 1] = √2 [√2/2 −√2/2; √2/2 √2/2] = √2 [cos π/4 −sin π/4; sin π/4 cos π/4]

Thus T is the composition of a counterclockwise rotation by an angle of π/4 with a homothety of modulus √2.

16.4.10 n. 10 (p. 582) T : (x, y) ↦ (2x − y, x + y) is linear, since for every (x, y), (u, v) ∈ R² and every α, β ∈ R,

 T[α(x, y) + β(u, v)] = (2(αx + βu) − (αy + βv), (αx + βu) + (αy + βv)) = α(2x − y, x + y) + β(2u − v, u + v) = αT[(x, y)] + βT[(u, v)]

The null space of T is the trivial subspace {(0, 0)}, and the range of T is R²; hence

 rank T = 2, nullity T = 0

The matrix representing T is

 A ≡ [2 −1; 1 1]

and the characteristic polynomial of A is λ² − 3λ + 3.
The eigenvalues of A are

 λ1 ≡ (3 + √3 i)/2, λ2 ≡ (3 − √3 i)/2

An eigenvector associated to λ1 is

 z ≡ (2, 1 − √3 i)

The matrix representing T with respect to the basis {Im z, Re z} = {(0, −√3), (2, 1)} is

 B ≡ [√3 0; 0 √3] [cos π/6 −sin π/6; sin π/6 cos π/6] = √3 [√3/2 −1/2; 1/2 √3/2] = [3/2 −√3/2; √3/2 3/2]

Thus T is the composition of a counterclockwise rotation by an angle of π/6 with a homothety of modulus √3.

16.4.11 n. 16 (p. 582) T : (x, y, z) ↦ (z, y, x) is linear, since for every (x, y, z), (u, v, w) ∈ R³ and every α, β ∈ R,

 T[α(x, y, z) + β(u, v, w)] = T[(αx + βu, αy + βv, αz + βw)] = (αz + βw, αy + βv, αx + βu) = α(z, y, x) + β(w, v, u) = αT[(x, y, z)] + βT[(u, v, w)]

The null space of T is the trivial subspace {(0, 0, 0)}, and the range of T is R³; hence

 rank T = 3, nullity T = 0

16.4.12 n. 17 (p. 582) T : (x, y, z) ↦ (x, y, 0) is linear (as every projection on a coordinate hyperplane), since for every (x, y, z), (u, v, w) ∈ R³ and every α, β ∈ R,

 T[α(x, y, z) + β(u, v, w)] = (αx + βu, αy + βv, 0) = α(x, y, 0) + β(u, v, 0) = αT[(x, y, z)] + βT[(u, v, w)]

The null space of T is the Z-axis, and the range of T is the XY-plane; hence

 rank T = 2, nullity T = 1
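The rotation-homothety factorizations obtained in n. 9 and n. 10 can be verified numerically (a sketch; helper names are mine):

```python
from math import cos, sin, pi, sqrt, isclose

# A = sqrt(2) R(pi/4) for n. 9, and B = sqrt(3) R(pi/6) for n. 10
# (B in the basis {Im z, Re z}).

def rot(t): return [[cos(t), -sin(t)], [sin(t), cos(t)]]
def scale(c, M): return [[c * e for e in row] for row in M]
def close(M, N):
    return all(isclose(M[i][j], N[i][j]) for i in range(2) for j in range(2))

A = [[1, -1], [1, 1]]
B = [[3/2, -sqrt(3)/2], [sqrt(3)/2, 3/2]]
print(close(A, scale(sqrt(2), rot(pi / 4))))   # True
print(close(B, scale(sqrt(3), rot(pi / 6))))   # True
```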
16.4.13 n. 23 (p. 582) Let D1 be the linear space of all real functions of a real variable which are defined and everywhere differentiable on (−1, 1). If

 T : D1 → R^(−1,1), f ↦ (x ↦ xf′(x))

then T is a linear operator. Indeed,

 T(f + g) = (x ↦ x[f + g]′(x)) = (x ↦ x[f′(x) + g′(x)]) = (x ↦ xf′(x)) + (x ↦ xg′(x)) = T(f) + T(g)

and similarly T(αf) = αT(f). Moreover,

 ker T = {f ∈ D1 : ∀x ∈ (−1, 1), xf′(x) = 0} = {f ∈ D1 : ∀x ∈ (−1, 0) ∪ (0, 1), f′(x) = 0}

By an important though sometimes overlooked theorem in calculus, every function f which is differentiable in (−1, 0) ∪ (0, 1) and continuous in 0 is differentiable in 0 as well, provided lim_{x→0} f′(x) exists and is finite, in which case

 f′(0) = lim_{x→0} f′(x)

Thus

 ker T = {f ∈ D1 : f′ = 0}

(here 0 is the identically null function defined on (−1, 1)); indeed, by the classical theorem due to Lagrange, for every x ∈ (−1, 1) there exists ϑx between 0 and x such that

 f(x) = f(0) + xf′(ϑx)

and hence every f in ker T is constant on (−1, 1). It follows that a basis of ker T is given by the constant function (x ↦ 1), and nullity T = 1. Since T(D1) contains all the restrictions to (−1, 1) of the polynomials with vanishing zero-degree monomial, and it is already known that the linear space of all polynomials has an infinite basis, the dimension of T(D1) is infinite.

16.4.14 n. 25 (p. 582) T : (x, y, z) ↦ (x + z, 0, x + y) is linear, since for every (x, y, z), (u, v, w) ∈ R³ and every α, β ∈ R,

 T[α(x, y, z) + β(u, v, w)] = (αx + βu + αz + βw, 0, αx + βu + αy + βv) = [α(x + z), 0, α(x + y)] + [β(u + w), 0, β(u + v)] = αT[(x, y, z)] + βT[(u, v, w)]

The null space of T is the line of parametric equations

 x = t, y = −t, z = −t

(the axis of central symmetry of the (+, −, −)-orthant and of the (−, +, +)-orthant), and the range of T is the XZ-plane; hence

 rank T = 2, nullity T = 1

16.4.15 n. 27 (p. 582) Let D2 be the linear space of all real functions of a real variable which are defined and everywhere twice differentiable on R, and let

 L : D2 → R^R, y ↦ y″ + Py′ + Qy

where P and Q are real functions of a real variable which are continuous on R. It has been shown in the solution to exercise 17 that L is a linear operator; thus ker L, the set of all solutions to the linear differential equation of the second order

 ∀x ∈ R, y″(x) + P(x)y′(x) + Q(x)y(x) = 0   (16.1)

is a linear subspace of D2, and hence a linear space. By the uniqueness theorem for Cauchy's problems in the theory of differential equations, for each (y0, y0′) ∈ R² there exists a unique solution to equation (16.1) such that y(0) = y0 and y′(0) = y0′. Let u be the solution to equation (16.1) such that (u(0), u′(0)) = (1, 0), and let v be the solution to equation (16.1) such that (v(0), v′(0)) = (0, 1). Then u and v are linearly independent: the identity αu(x) + βv(x) = 0 for each x ∈ R can only hold (by evaluating at x = 0) if α = 0, and then, since v′(0) = 1, from βv(x) = 0 for each x ∈ R it is easily deduced that β = 0 as well. Moreover, for each (y0, y0′) ∈ R², the function

 y : x ↦ y0 u(x) + y0′ v(x)

is a solution to equation (16.1), and by direct inspection y(0) = y0 and y′(0) = y0′. In other words, the function

 φ : ker L → R², y ↦ (y(0), y′(0))

is injective and surjective, with φ⁻¹((y0, y0′)) = y0 u + y0′ v. Hence

 ker L = span{u, v}

and nullity L = 2.
16.5 Algebraic operations on linear transformations

16.6 Inverses

16.7 One-to-one linear transformations

16.8 Exercises

16.8.1 n. 15 (p. 589) T : (x, y, z) ↦ (x, y, x + y + z) is injective (or one to one), since

 T(x, y, z) = T(x′, y′, z′) ⇔ { x = x′, y = y′, x + y + z = x′ + y′ + z′ } ⇔ (x, y, z) = (x′, y′, z′)

Its inverse is

 T⁻¹(u, v, w) = (u, v, w − u − v)

16.8.2 n. 16 (p. 589) T : (x, y, z) ↦ (x, 2y, 3z) is injective, since

 T(x, y, z) = T(x′, y′, z′) ⇔ { x = x′, 2y = 2y′, 3z = 3z′ } ⇔ (x, y, z) = (x′, y′, z′)

Its inverse is

 T⁻¹(u, v, w) = (u, v/2, w/3)

16.8.3 n. 17 (p. 589) T : (x, y, z) ↦ (x + 1, y + 1, z − 1) is injective, since

 T(x, y, z) = T(x′, y′, z′) ⇔ { x + 1 = x′ + 1, y + 1 = y′ + 1, z − 1 = z′ − 1 } ⇔ (x, y, z) = (x′, y′, z′)

Its inverse is

 T⁻¹(u, v, w) = (u − 1, v − 1, w + 1)
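A direct sketch checking that each stated inverse undoes the corresponding map (the pairing of formulas with exercise numbers follows the reconstruction above):

```python
# n. 15-17: T composed with its stated inverse is the identity.
T15  = lambda x, y, z: (x, y, x + y + z)
T15i = lambda u, v, w: (u, v, w - u - v)
T16  = lambda x, y, z: (x, 2 * y, 3 * z)
T16i = lambda u, v, w: (u, v / 2, w / 3)
T17  = lambda x, y, z: (x + 1, y + 1, z - 1)
T17i = lambda u, v, w: (u - 1, v - 1, w + 1)

p = (2.0, -3.0, 6.0)   # test point chosen so all divisions are exact
for T, Ti in ((T15, T15i), (T16, T16i), (T17, T17i)):
    print(Ti(*T(*p)) == p and T(*Ti(*p)) == p)   # True for each pair
```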
16.9 Linear transformations with prescribed values

16.10 Matrix representations of linear transformations

16.11 Construction of a matrix representation in diagonal form

16.12 Exercises

16.12.1 n. 3 (p. 596) (a) Since T(i) = i + j and T(j) = 2i − j,

 T(3i − 4j) = 3T(i) − 4T(j) = 3(i + j) − 4(2i − j) = −5i + 7j
 T²(3i − 4j) = T(−5i + 7j) = −5T(i) + 7T(j) = −5(i + j) + 7(2i − j) = 9i − 12j

(b) The matrix of T with respect to the basis {i, j} is

 A ≡ [T(i) T(j)] = [1 2; 1 −1]

and the matrix of T² with respect to the same basis is

 A² = [3 0; 0 3]

(c) First solution (matrix for T). If e1 = i − j and e2 = 3i + j, the matrix

 P ≡ [e1 e2] = [1 3; −1 1]

transforms the (canonical) coordinates (1, 0) and (0, 1) of e1 and e2 with respect to the basis {e1, e2} into their coordinates (1, −1) and (3, 1) with respect to the basis {i, j}; hence P is the matrix of coordinate change from basis {e1, e2} to basis {i, j}, and P⁻¹ is the matrix of coordinate change from basis {i, j} to basis {e1, e2}. The operation of the matrix B representing T with respect to the basis {e1, e2} can be described as the combined effect of the following three actions: 1) coordinate change from coordinates w.r. to {e1, e2} into coordinates w.r. to {i, j} (that is, multiplication by P); 2) transformation according to T as expressed by the matrix representing T w.r. to {i, j} (that is, multiplication by A); 3) coordinate change from coordinates w.r. to {i, j} into coordinates w.r. to {e1, e2} (that is, multiplication by P⁻¹). Thus

 B = P⁻¹AP = (1/4)[1 −3; 1 1] [1 2; 1 −1] [1 3; −1 1] = (1/4)[−2 5; 2 1] [1 3; −1 1] = (1/4)[−7 −1; 1 7]
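The change-of-basis product B = P⁻¹AP can be recomputed exactly (a sketch using rational arithmetic; helper names are mine):

```python
from fractions import Fraction

# B = P^{-1} A P for A = [[1, 2], [1, -1]] and P = [[1, 3], [-1, 1]].

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):   # inverse of a 2x2 matrix via the adjugate
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 2], [1, -1]]
P = [[1, 3], [-1, 1]]
B = matmul(inv2(P), matmul(A, P))
print(B)   # entries -7/4, -1/4 in the first row and 1/4, 7/4 in the second
```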
Second solution (matrix for T). With respect to the basis {i, j},

 T(e1) = T(i − j) = T(i) − T(j) = (i + j) − (2i − j) = −i + 2j
 T(e2) = T(3i + j) = 3T(i) + T(j) = 3(i + j) + (2i − j) = 5i + 2j

It suffices to find the {e1, e2}-coordinates (α, β) and (γ, δ) which correspond to the {i, j}-coordinates (−1, 2) and (5, 2) of T(e1) and T(e2). Thus we want to solve the two equation systems (in the unknowns (α, β) and (γ, δ), respectively)

 αe1 + βe2 = −i + 2j  and  γe1 + δe2 = 5i + 2j

that is,

 α + 3β = −1, −α + β = 2  and  γ + 3δ = 5, −γ + δ = 2

The (unique) solutions are (α, β) = (1/4)(−7, 1) and (γ, δ) = (1/4)(−1, 7), so that

 B = (1/4)[−7 −1; 1 7]

Continuation (matrix for T²). Since T² is represented by a scalar diagonal matrix with respect to the initially given basis, it is represented by the same matrix with respect to every basis (indeed, since scalar diagonal matrices commute with every matrix of the same order, P⁻¹DP = D for every scalar diagonal matrix D). Thus the matrix of T² with respect to {e1, e2} is again [3 0; 0 3].

16.12.2 n. 4 (p. 596) T is the composition of

 (x, y) ↦ (−x, y)  (reflection w.r. to the Y-axis)

and

 (x, y) ↦ (2x, 2y)  (length doubling)

Thus T may be represented by the matrix

 A_T ≡ [−2 0; 0 2]

and hence T² by the matrix

 A_T² = [4 0; 0 4]

16.12.3 n. 5 (p. 596)

 T(i + 2j + 3k) = T(k) + T(j + k) + T(i + j + k) = (2i + 3j + 5k) + i + (j − k) = 3i + 4j + 4k
The three image vectors T(k), T(j + k), T(i + j + k) form a linearly independent triple. Indeed, the linear combination
\[
xT(k) + yT(j + k) + zT(i + j + k) = x(2i + 3j + 5k) + yi + z(j - k) = (2x + y)i + (3x + z)j + (5x - z)k
\]
spans the null vector if and only if
\[
\begin{cases} 2x + y = 0 \\ 3x + z = 0 \\ 5x - z = 0 \end{cases}
\]
which yields (II + III) x = 0, and hence (by substitution in I and II) y = z = 0. Thus {T(i), T(j), T(k)} is a linearly independent set, the range space of T is R^3, and rank T is 3. It follows that the null space of T is the trivial subspace {(0, 0, 0)}, and nullity T is 0.

(b) The matrix of T with respect to the basis {i, j, k} is obtained by aligning as columns the coordinates w.r. to {i, j, k} of the image vectors T(i), T(j), T(k). The last one is given in the problem statement, but the first two need to be computed:
\begin{align*}
T(j) &= T(j + k - k) = T(j + k) - T(k) = i - (2i + 3j + 5k) = -i - 3j - 5k \\
T(i) &= T(i + j + k - j - k) = T(i + j + k) - T(j + k) = (j - k) - i = -i + j - k
\end{align*}
\[
A_T \equiv \begin{pmatrix} -1 & -1 & 2 \\ 1 & -3 & 3 \\ -1 & -5 & 5 \end{pmatrix}
\]

16.12.4 n. 7 (p. 597)

(a)
\[
T(4i - j + k) = 4T(i) - T(j) + T(k) = 4(0, 0) - (1, 1) + (1, -1) = (0, -2)
\]

(b)
\[
A_T = \begin{pmatrix} 0 & 1 & 1 \\ 0 & 1 & -1 \end{pmatrix}
\]
Since {T(j), T(k)} is a linearly independent set, rank T = 2; and since T(i) = (0, 0), ker T = lin{i} and nullity T = 1.
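Both rank claims above can be verified numerically. The sketch below is my own check (helper names like `det3` are illustrative): the 3×3 matrix of n. 5 has nonzero determinant, so its rank is 3, and the image vector of n. 7 is recomputed from the given values of T on the basis vectors.

```python
# n. 5: A_T is nonsingular, so rank T = 3 and nullity T = 0
A5 = [[-1, -1, 2],
      [ 1, -3, 3],
      [-1, -5, 5]]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)

assert det3(A5) == -8  # nonzero

# n. 7: T(i) = (0, 0), T(j) = (1, 1), T(k) = (1, -1)
Ti, Tj, Tk = (0, 0), (1, 1), (1, -1)
image = tuple(4*a - b + c for a, b, c in zip(Ti, Tj, Tk))
assert image == (0, -2)  # T(4i - j + k)
```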
(c) Let C = {w_1, w_2}. Since
\begin{align*}
T(i) &= 0w_1 + 0w_2 \\
T(j) &= 1w_1 + 0w_2 \\
T(k) &= -\tfrac{1}{2}w_1 + \tfrac{3}{2}w_2
\end{align*}
we have
\[
(A_T)_C = \begin{pmatrix} 0 & 1 & -\frac{1}{2} \\ 0 & 0 & \frac{3}{2} \end{pmatrix}
\]

(d) Let B \equiv {j, k, i} and C = {w_1, w_2} = {(1, 1), (1, -1)}. Then
\[
(A_T)_{B,C} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}
\]

16.12.5 n. 8 (p. 597)

(a) I shall distinguish the canonical unit vectors of R^2 and R^3 by marking the former with an underbar. Since T(\underline{i}) = i + k and T(\underline{j}) = i - k,
\[
T(2\underline{i} - 3\underline{j}) = 2T(\underline{i}) - 3T(\underline{j}) = 2(i + k) - 3(i - k) = -i + 5k
\]
For any two real numbers x and y,
\[
T(x\underline{i} + y\underline{j}) = xT(\underline{i}) + yT(\underline{j}) = x(i + k) + y(i - k) = (x + y)i + (x - y)k
\]
It follows that
\[
\operatorname{Im} T = \operatorname{lin}\{i, k\} \qquad \operatorname{rank} T = 2 \qquad \operatorname{nullity} T = 2 - 2 = 0
\]

(b) The matrix of T with respect to the bases {\underline{i}, \underline{j}} and {i, j, k} is
\[
A \equiv \begin{pmatrix} T(\underline{i}) & T(\underline{j}) \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 0 & 0 \\ 1 & -1 \end{pmatrix}
\]

(c) First solution. If w_1 = \underline{i} + \underline{j} and w_2 = \underline{i} + 2\underline{j}, the matrix
\[
P \equiv \begin{pmatrix} w_1 & w_2 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}
\]
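The closed-form formula T(x i̲ + y j̲) = (x + y)i + (x − y)k derived above can be sketched as a small function and checked against the given values (my own illustration; the tuple encodes {i, j, k}-coordinates):

```python
def T(x, y):
    """Image of x*i_ + y*j_ in {i, j, k}-coordinates: (x + y) i + (x - y) k."""
    return (x + y, 0, x - y)

assert T(1, 0) == (1, 0, 1)    # T(i_) = i + k
assert T(0, 1) == (1, 0, -1)   # T(j_) = i - k
assert T(2, -3) == (-1, 0, 5)  # T(2 i_ - 3 j_) = -i + 5k
```

The vanishing middle coordinate for every (x, y) is exactly the statement that Im T = lin{i, k}.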
transforms the (canonical) coordinates (1, 0) and (0, 1) of w_1 and w_2 with respect to the basis {w_1, w_2} into their coordinates (1, 1) and (1, 2) with respect to the basis {\underline{i}, \underline{j}}; hence P is the matrix of coordinate change from basis {w_1, w_2} to basis {\underline{i}, \underline{j}}, and P^{-1} is the matrix of coordinate change from basis {\underline{i}, \underline{j}} to basis {w_1, w_2}. The operation of the matrix B of T with respect to the bases {w_1, w_2} and {i, j, k} can be described as the combined effect of the following two actions:

1) coordinate change from coordinates w.r. to the basis {w_1, w_2} into coordinates w.r. to the basis {\underline{i}, \underline{j}} (that is, multiplication by matrix P);
2) transformation according to T as expressed by the matrix representing T w.r. to the bases {\underline{i}, \underline{j}} and {i, j, k} (that is, multiplication by A).

Thus
\[
B = AP = \begin{pmatrix} 1 & 1 \\ 0 & 0 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} = \begin{pmatrix} 2 & 3 \\ 0 & 0 \\ 0 & -1 \end{pmatrix}
\]

Second solution (matrix for T). The matrix B representing T with respect to the bases {w_1, w_2} and {i, j, k} is
\[
B \equiv \begin{pmatrix} T(w_1) & T(w_2) \end{pmatrix}
\]
where
\begin{align*}
T(w_1) &= T(\underline{i} + \underline{j}) = T(\underline{i}) + T(\underline{j}) = (i + k) + (i - k) = 2i \\
T(w_2) &= T(\underline{i} + 2\underline{j}) = T(\underline{i}) + 2T(\underline{j}) = (i + k) + 2(i - k) = 3i - k
\end{align*}
Thus
\[
B = \begin{pmatrix} 2 & 3 \\ 0 & 0 \\ 0 & -1 \end{pmatrix}
\]

(d) The matrix C representing T w.r. to bases {u_1, u_2} and {v_1, v_2, v_3} is diagonal, that is,
\[
C = \begin{pmatrix} \gamma_{11} & 0 \\ 0 & \gamma_{22} \\ 0 & 0 \end{pmatrix}
\]
if and only if the following holds:
\[
T(u_1) = \gamma_{11}v_1 \qquad T(u_2) = \gamma_{22}v_2
\]
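That the two solutions agree can be confirmed by carrying out the product B = AP with integer arithmetic; the columns of the result should be the coordinate vectors of T(w₁) and T(w₂). A quick check of my own (`matmul` is an illustrative helper, not from the text):

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 1], [0, 0], [1, -1]]   # T w.r.t. the bases {i_, j_} and {i, j, k}
P = [[1, 1], [1, 2]]            # columns: w1, w2 in {i_, j_}-coordinates
B = matmul(A, P)
assert B == [[2, 3], [0, 0], [0, -1]]  # columns: T(w1) = 2i and T(w2) = 3i - k
```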
There are indeed very many ways to achieve that. In the present situation the simplest way seems to me to be given by defining
\[
u_1 \equiv \underline{i} \qquad u_2 \equiv \underline{j}
\]
\[
v_1 \equiv T(u_1) = i + k \qquad v_2 \equiv T(u_2) = i - k \qquad v_3 \equiv \text{any vector such that } \{v_1, v_2, v_3\} \text{ is linearly independent}
\]
thereby obtaining
\[
C = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}
\]

16.12.6 n. 16 (p. 597)

We have
\begin{align*}
D(\sin) &= \cos & D^2(\sin) &= D(\cos) = -\sin \\
D(\cos) &= -\sin & D^2(\cos) &= D(-\sin) = -\cos \\
D(\operatorname{id}\cdot\sin) &= \sin + \operatorname{id}\cdot\cos & D^2(\operatorname{id}\cdot\sin) &= D(\sin + \operatorname{id}\cdot\cos) = 2\cos - \operatorname{id}\cdot\sin \\
D(\operatorname{id}\cdot\cos) &= \cos - \operatorname{id}\cdot\sin & D^2(\operatorname{id}\cdot\cos) &= D(\cos - \operatorname{id}\cdot\sin) = -2\sin - \operatorname{id}\cdot\cos
\end{align*}
and hence the matrices representing the differentiation operator D and its square D^2, acting on lin{sin, cos, id·sin, id·cos}, with respect to the basis {sin, cos, id·sin, id·cos}, are
\[
A = \begin{pmatrix} 0 & -1 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \end{pmatrix}
\qquad
A^2 = \begin{pmatrix} -1 & 0 & 0 & -2 \\ 0 & -1 & 2 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}
\]
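Since D² is represented by the square of the matrix representing D, the matrix A² above can be double-checked by squaring A directly, without redoing any differentiation. Again a sketch of my own, with the same illustrative `matmul` helper:

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# D w.r.t. the basis {sin, cos, id*sin, id*cos}
A = [[0, -1, 1,  0],
     [1,  0, 0,  1],
     [0,  0, 0, -1],
     [0,  0, 1,  0]]

A2 = matmul(A, A)
assert A2 == [[-1,  0,  0, -2],
              [ 0, -1,  2,  0],
              [ 0,  0, -1,  0],
              [ 0,  0,  0, -1]]
```

The agreement confirms that the four D² computations and the matrix A are mutually consistent.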