# SESA2021 Engineering Analysis: Vector Spaces and Linear Algebra Lecture Notes 2009/2010.

Lecturer: Dr A. A. Shah, School of Engineering Sciences, University of Southampton, Room 1027, Building 25, e-mail: A.Shah@soton.ac.uk

Copyright © 2010 University of Southampton

## Contents

1 Introduction and example applications
2 Basic definitions and examples
3 Subspaces of vector spaces
4 Linear Transformations
5 Span
6 Linear independence
7 Basis and dimension
8 Changing the basis
9 Fundamental subspaces
10 Square matrices and systems of linear equations
11 Inner product spaces and orthogonality
12 Orthogonal and orthonormal bases
13 Orthogonal projections
14 The Gram-Schmidt process
15 Least squares approximations

Figure 1: A mechanical system with 2 masses and 3 springs (vibration example 1.1).

## 1 Introduction and example applications

In this course we will be concerned primarily with solving systems of linear equations (including eigenvalue problems), which are difficult to avoid in any aspect of engineering. These systems, which can be very large, can be written as equations involving matrices and vectors. Let's look at some examples in which vectors, matrices and eigenvalues arise.

Example 1.1 Consider the system with 2 masses and 3 springs shown in Figure 1. We can use Newton's second law along with Hooke's law to write down a system of equations for the displacements, x1 and x2, of the two masses

$$m\ddot{x}_1 + k_1 x_1 + k_2 (x_1 - x_2) = 0 \qquad m\ddot{x}_2 + k_1 x_2 + k_2 (x_2 - x_1) = 0 \tag{1}$$

where k1 and k2 are the spring constants. These equations can be written as



$$\ddot{x}_1 = -\frac{k_1 + k_2}{m}\, x_1 + \frac{k_2}{m}\, x_2 \qquad \ddot{x}_2 = \frac{k_2}{m}\, x_1 - \frac{k_1 + k_2}{m}\, x_2 \tag{2}$$

We could now write the system in matrix form by first introducing a "vector" form of the solution: x = (x1, x2). Then

$$\underbrace{\begin{pmatrix} \ddot{x}_1 \\ \ddot{x}_2 \end{pmatrix}}_{\ddot{x}} = \underbrace{\begin{pmatrix} -\beta & \alpha \\ \alpha & -\beta \end{pmatrix}}_{A} \underbrace{\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}}_{x} \tag{3}$$

where β = (k1 + k2)/m and α = k2/m. The obvious thing to do is look for oscillatory solutions that are of the form

$$x = v e^{i\omega t} = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} e^{i\omega t} \tag{4}$$

where ω is the vibration frequency. The new vector v contains just constants, v1 and v2, which we would want to find. The variable part is in the e^(iωt). Substituting (4) into (3) and cancelling the e^(iωt) terms on both sides gives us a new system of equations for v

$$-\omega^2 \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} -\beta & \alpha \\ \alpha & -\beta \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}$$

or

$$A v = -\omega^2 v \tag{5}$$
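Equation (5) can be checked numerically. The sketch below uses illustrative parameter values m = 1, k1 = 2, k2 = 1 (assumed for this example, not from the notes); numpy finds the eigenvalues of A, and each eigenvalue equals −ω², giving the two vibration frequencies:

```python
import numpy as np

# Illustrative parameter values (assumed for this sketch, not from the notes)
m, k1, k2 = 1.0, 2.0, 1.0
beta = (k1 + k2) / m   # beta = 3
alpha = k2 / m         # alpha = 1

A = np.array([[-beta, alpha],
              [alpha, -beta]])

# Eigenvalues lambda of A satisfy A v = lambda v, so lambda = -omega^2
lam, v = np.linalg.eig(A)
omega = np.sort(np.sqrt(-lam))  # the two vibration frequencies
print(omega)  # approximately sqrt(2) and 2
```

For these values the eigenvalues of A are −2 and −4, so the two frequencies are √2 and 2, one per degree of freedom, as the text below explains.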

This is an example of an eigenvalue problem, i.e., something of the form: "a transformation (in this case a matrix A) acting on an object (the vector v) and giving us a constant (in this case −ω²) times the object". We may now ask how many solutions there are and what they look like. It turns out that there are 2 frequencies because there are two degrees of freedom. In this present case we would be most interested in the frequencies of vibration and the corresponding solutions (normal modes).

Example 1.2 Suppose you have run an experiment and collected some data that you would like to fit to a line or curve. Let's say you've taken measurements of temperature against time and expect a linear rise in temperature, but due to experimental error not all points will fall nicely onto a straight line, as seen in Figure 2.

Figure 2: Output from an experiment in which temperature is measured with time.

The objective is to get the best straight line fit to the data. Let's say there are 4 temperature measurements T1 to T4 taken at times t1 to t4, and we want to represent temperature as T = a + bt. We need to find a and b. If we take the two points T1 and T2 at times t1 and t2 we have

$$T_1 = a + b t_1 \qquad T_2 = a + b t_2$$

which we can rearrange to find a and b. The problem is that if we use another two points we will get different values of a and b. We need to find one value for a and one for b. Let's define the "vector" (T1, T2, T3, T4). The matrix equation we need to solve is

$$\begin{pmatrix} T_1 \\ T_2 \\ T_3 \\ T_4 \end{pmatrix} = \begin{pmatrix} a + b t_1 \\ a + b t_2 \\ a + b t_3 \\ a + b t_4 \end{pmatrix} = \begin{pmatrix} 1 & t_1 \\ 1 & t_2 \\ 1 & t_3 \\ 1 & t_4 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} \tag{6}$$

Notice, however, that we have more equations than variables (a and b)! This is an example of an "overdetermined system". How do we solve the system? Well, we can't solve it exactly, but what we can do is find the "best fit" in some sense. One method we will look at to do this is called "least squares".

Example 1.3 Partial differential equations (conservation laws) from aerodynamics are usually solved on a computer. The numerical solutions are constructed first by "discretising" the equations using the finite difference, finite volume or finite element methods in space, together with a time-stepping procedure if the equations are unsteady. This means that you approximate the solutions at discrete points in time and space and try to find the solutions at those points. By discretising the equations this way you will end up with large matrix systems. The more points (finer mesh) you choose and the higher the dimension, the larger the systems become. There are a great many ways of solving these systems, depending on the accuracy and speed required and the stability of the methods. To understand these methods and to choose the most appropriate (in, for example, a CFD code) for a given problem you need to understand some linear algebra (theory of matrix systems). We first need to develop the ideas of "vectors" and "transformations" (for our purposes these are matrices) so that you are familiar with the language used to describe matrix systems.
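Least squares is covered properly later in the notes, but as a preview, numpy can already find the best-fit a and b for the overdetermined system (6). The four measurements below are made-up illustrative numbers, not data from the notes:

```python
import numpy as np

# Made-up measurements (illustrative only): roughly linear with some scatter
t = np.array([0.0, 1.0, 2.0, 3.0])
T = np.array([1.0, 2.1, 2.9, 4.2])

# The 4x2 matrix from equation (6): each row is [1, t_i]
M = np.column_stack([np.ones_like(t), t])

# Least-squares solution of the overdetermined system M (a, b)^T = T
(a, b), residuals, rank, sv = np.linalg.lstsq(M, T, rcond=None)
print(a, b)  # best-fit intercept and slope
```

Four equations, two unknowns: `lstsq` returns the (a, b) minimising the sum of squared errors rather than solving the system exactly, which is exactly the "best fit in some sense" described above.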

## 2 Basic definitions and examples

A vector is a quite general object. It doesn't have to be a directed line segment in space or in the plane; we can also treat functions and even more abstract objects as vectors (more on this below). When we look at a particular set of vectors we will call it a vector space and give it a symbol like V. The individual vectors will be given symbols like u or v. So what is a "vector space" and what isn't? Let's look at a familiar example.

Example 2.1 The Euclidean n spaces, denoted Rn, are vector spaces (we will see why in a second). There are two that you are familiar with: R2 (two-dimensional space) and R3 (three-dimensional space). Vectors in R2 and R3 can be represented graphically, as shown in Figure 3.

Figure 3: Vectors in the plane R2 (left) and in space R3 (right).

An example of a vector in R2 is u = (2, 3), which is sometimes written as 2e1 + 3e2. The numbers 2 and 3 are called the "coordinates" of the vector u = (2, 3) in the "standard basis vectors" e1 = (1, 0) and e2 = (0, 1). More generally we can define vectors that have n coordinates; these are vectors in the vector space Rn. For example, the vector (T1, T2, T3, T4) in example 1.2 is in R4, and (1, 2, 4, −1, 0, 0, 1) is in R7. For R2 and R3 we can represent vectors graphically, but in higher dimensions this is obviously not possible: we are not able to visualise these in a graph. In this course we will deal almost exclusively with these vector spaces.

To construct a vector space we basically take a bunch (set) of vectors and define ways of adding them together and multiplying them by numbers (scalars). Let's recall some basic facts about the familiar vectors in the Euclidean 2 and 3 spaces R2 and R3. These will help us to understand what a vector space is precisely. We will look at these concepts in detail later on.

(1) On R2 we can "add" two vectors as follows:

$$(1, -1) + (2, 5) = (1 + 2, -1 + 5) = (3, 4) \tag{7}$$

i.e., we just add the individual coordinates. We can add vectors in any Rn space in this way. Note that we have chosen to define "addition" in this way; we could instead choose another way. By doing it as above, we have made sure that the sum of two vectors in R2 is another vector in R2.

(2) On R2 we can multiply a vector by a scalar (number) as follows:

$$2(1, 3) = (2 \times 1, 2 \times 3) = (2, 6) \tag{8}$$

where we just multiply each coordinate by the scalar (number) 2. We can multiply vectors in any Rn space by a scalar in this way. Again, we have defined "multiplication by a scalar" in a certain way; we could instead choose another way. By doing it as above, we have made sure that multiplying a vector in R2 by a scalar gives another vector in R2.

(3) Now that we have defined a way of adding vectors in Rn (add individual components) and of multiplying them by scalars (multiply each component by the scalar), it doesn't matter which way round we add vectors in Rn or which way round we multiply them by scalars. There are some obvious rules, such as

(i) u + v = v + u, e.g. (2, −1) + (1, 0) = (1, 0) + (2, −1)

(ii) (c + k)u = cu + ku, e.g. (3 + 2)(2, −1) = 3(2, −1) + 2(2, −1)   (9)

for any three vectors u, v, w in Rn and any scalars c and k.

(4) In R2 we have a zero vector 0, i.e. (0, 0). When we add 0 to any vector, the vector doesn't change, e.g. (2, −1) + (0, 0) = (2, −1).

In a general vector space V we have to define the way we add vectors and multiply them by scalars. When constructing these definitions, we have to make sure that the rules above for the familiar way of doing things in Rn are preserved. For V to be a vector space:

• The way we "add" vectors in V has to lead to other vectors in V. We say that V is closed with respect to addition if this is true.

• When a vector in V is multiplied by a scalar, the answer must be another vector in V. We say that V is closed with respect to scalar multiplication if this is true.

• V has to have a zero vector, and adding it to any vector should not change that vector.

• The way we define addition and scalar multiplication of the vectors in V has to preserve rules (9) and other similar rules.

If just one of these requirements is not satisfied, V will NOT be a vector space.

Example 2.2 Let's define vector addition in R2 in the usual way (add individual components), but instead of the usual scalar multiplication we will use

$$cu = c(u_1, u_2) = (u_1, c u_2) \tag{10}$$

i.e., we only multiply the second coordinate. Let's try to satisfy the last rule in equations (9) with any vector in R2 and any two scalars:

$$2 \times (1, 1) + 3 \times (1, 1) = (1, 2) + (1, 3) = (2, 5) \quad \text{but} \quad (2 + 3) \times (1, 1) = 5 \times (1, 1) = (1, 5)$$

So, defining scalar multiplication this way does NOT lead to a vector space.

We can also treat functions and even more abstract objects as vectors in vector spaces, which are usually referred to as "function spaces". In this course, however, we will not consider these types of spaces.

One last bit of notation. If V consists of vectors v1, v2, v3, ..., vn, we use curly brackets as follows: V = {v1, v2, v3, ..., vn} to represent this set of vectors. For example, if we have a set of vectors consisting of v1 = (1, 0) and v2 = (0, 1), we write V = {(1, 0), (0, 1)}.

## 3 Subspaces of vector spaces

For some vector spaces it is possible to take a subset W (i.e., some of the vectors) of the original space V and obtain a new vector space using the same rules for addition and scalar multiplication. We call W a subspace of V. There are some very important subspaces we will meet later on.

It turns out that to be a subspace, we only need to make sure that the subspace is closed with respect to addition and scalar multiplication, i.e., when we add vectors in W or multiply a vector in W by a scalar we get another vector in W.

Example 3.1 Let W be the set of all vectors in R3 that are of the form (0, u2, u3), i.e., the first coordinate is zero. Is this a subspace of R3 with the usual rules for addition and scalar multiplication? To find out we need to verify that addition and scalar multiplication of vectors (0, u2, u3) lead to vectors of the form (0, u2, u3). This is the case (Exercise: Check that it is), so the space consisting of such vectors is a subspace of R3.

Example 3.2 Let W be the set of all vectors (u1, u2) in R2 such that u1 ≥ 0, i.e., the first coordinate is non-negative. Is this a subspace of R2 with the usual rules for addition and scalar multiplication? Multiply (u1, u2) by c < 0: we get (cu1, cu2), where the first coordinate is negative, i.e., a vector that is not in W. This space is not closed with respect to scalar multiplication (multiplication by a negative scalar will give a vector that is not in W). Therefore, W is NOT a subspace of R2.

## 4 Linear Transformations

The idea of a transformation (or map) is that it takes a vector, say u in a space V, and "transforms" or "maps" it into another vector Au in a space W, which may or may not be the same as V. This is like a function f(x) taking a number x and giving us another number y = f(x).

When we are dealing with the Euclidean n spaces, we can write a transformation as a matrix. In what sense is a matrix a transformation? Let's take a look at an example.

Example 4.1 Consider the following 3 × 3 matrix

$$A = \begin{pmatrix} 2 & 0 & 1 \\ 1 & 2 & -1 \\ 3 & -2 & 5 \end{pmatrix} \tag{11}$$

Let's take the vector u = (1, 3, 0) in R3 and "transform" it with A (i.e., left multiply it by A) into another vector b in R3:

$$\underbrace{\begin{pmatrix} 2 & 0 & 1 \\ 1 & 2 & -1 \\ 3 & -2 & 5 \end{pmatrix}}_{A} \underbrace{\begin{pmatrix} 1 \\ 3 \\ 0 \end{pmatrix}}_{u} = \underbrace{\begin{pmatrix} 2 \\ 7 \\ -3 \end{pmatrix}}_{b} \tag{12}$$

In this example, A takes a vector in R3 and transforms it by multiplication into another vector in R3. We write A : R3 → R3 to signify this. This is pronounced "A maps R3 to R3". The output vector b is called the image of u under A. In the general case, an n × m (n rows and m columns) matrix Anm takes any vector in Rm and transforms it by multiplication into a vector in Rn, i.e., Anm : Rm → Rn.
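Matrix-vector products like (12) are a single line in numpy. This sketch uses the matrix and vector of example 4.1 (entries as reconstructed here from the notes):

```python
import numpy as np

# The matrix and vector of example 4.1
A = np.array([[2, 0, 1],
              [1, 2, -1],
              [3, -2, 5]])
u = np.array([1, 3, 0])

b = A @ u  # the image of u under A
print(b)   # the vector (2, 7, -3)
```

The `@` operator is matrix multiplication, so `A @ u` is exactly the "left multiply by A" described in the text.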

Example 4. Clearly. so A : R3 → R4 . which leads to another vector b  1  −1 5    0 −4 3    9 2 1   3 −7 −3 A   2     2      −7       1  =           19       −1 2 u in     (13) R3 b in R4 In this example we multiply a column vector in R3 (the domain) by A and get a column vector in R4 . • The range of Anm is the set of all possible outputs (images) in Rn . Let’s take a look at some more examples: Example 4. which is Rm . The set of possible outputs in R4 is the range of A.2 Consider the following multiplication (transformation) of a vector u by a 4 × 3 matrix A. b is in the range of A (it is one of the possible outputs).3 Consider the following multiplication of a vector by a 3 × 2 matrix A     3  7    −1 1   4 −3 A    1     1      =  −3          −2 10 u in R2 b in R3 (14) Page 13 .Engineering Analysis SESA 2021 • The domain of Anm is the set of inputs.

This time we multiply a column vector in R2 (the domain) by A and get a column vector in R3, so A : R2 → R3. The set of possible outputs in R3 is the range of A; b is in the range of A.

From the rules of matrix multiplication, we know that for any matrix A and vectors u and v

$$A(u + v) = Au + Av \quad \text{and} \quad A(cu) = cAu \tag{15}$$

where c is any number (scalar). These rules tell us that A preserves linear combinations.

Example 4.4 Let

$$A = \begin{pmatrix} 2 & 3 \\ -1 & 1 \end{pmatrix} \qquad u = \begin{pmatrix} 1 \\ -1 \end{pmatrix} \qquad v = \begin{pmatrix} -1 \\ 0 \end{pmatrix} \tag{16}$$

Then

$$Au + Av = \begin{pmatrix} 2 & 3 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ -1 \end{pmatrix} + \begin{pmatrix} 2 & 3 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} -1 \\ 0 \end{pmatrix} = \begin{pmatrix} -1 \\ -2 \end{pmatrix} + \begin{pmatrix} -2 \\ 1 \end{pmatrix} = \begin{pmatrix} -3 \\ -1 \end{pmatrix} \tag{17}$$

and

$$A(u + v) = A\begin{pmatrix} 1 - 1 \\ -1 + 0 \end{pmatrix} = \begin{pmatrix} 2 & 3 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 0 \\ -1 \end{pmatrix} = \begin{pmatrix} -3 \\ -1 \end{pmatrix} \tag{18}$$

Exercise: Check that A(5u) = 5Au, i.e., that A times 5u is the same as 5 times Au.

Because matrices satisfy the rules (15) we call them linear transformations (or linear maps).

## 5 Span

We want to be able to write all vectors in a space V as sums of some special 'fundamental' vectors. We will build up to this slowly over the next few sections. Without perhaps knowing it, you have already done this in the Euclidean spaces using the standard basis vectors in example 2.1. Let's look at an example.

Example 5.1 The standard basis vectors in R3 are

$$e_1 = (1, 0, 0) \qquad e_2 = (0, 1, 0) \qquad e_3 = (0, 0, 1)$$

You can write any vector in R3 as a linear combination of e1, e2 and e3. This means that any vector is a constant times e1 plus a constant times e2 plus a constant times e3. For example:

$$(2, 3, 1) = 2 \times e_1 + 3 \times e_2 + 1 \times e_3 = 2(1, 0, 0) + 3(0, 1, 0) + 1(0, 0, 1) = (2, 3, 1) \tag{19}$$

and this holds in all of the Euclidean spaces. We can generalise this idea of linear combinations to general vector spaces.

We say that a vector w in V is a linear combination of vectors v1, v2, v3, ..., vn (all in V) if it can be written as

$$w = c_1 v_1 + c_2 v_2 + \dots + c_n v_n \tag{20}$$

for some scalars c1, c2, ..., cn.

Example 5.2 Is u = (−12, 20) in R2 a linear combination of v1 = (−1, 2) and v2 = (4, −6)? If it is, then

$$(-12, 20) = c_1(-1, 2) + c_2(4, -6)$$

or

$$-c_1 + 4c_2 = -12 \quad \text{and} \quad 2c_1 - 6c_2 = 20 \tag{21, 22}$$

The solution to these equations is c1 = 4 and c2 = −2. So u = 4v1 − 2v2, i.e., u is a linear combination of v1 and v2.

Example 5.3 Is u = (1, −4) in R2 a linear combination of v1 = (2, 10) and v2 = (−3, −15)? If it is, then

$$(1, -4) = c_1(2, 10) + c_2(-3, -15)$$

or

$$2c_1 - 3c_2 = 1 \qquad 10c_1 - 15c_2 = -4 \;\Rightarrow\; 2c_1 - 3c_2 = -\frac{4}{5} \tag{23, 24}$$

The second equation contradicts the first, so there is no solution. u is NOT a linear combination of v1 and v2.

Example 5.4 Any vector in R3 can be written as a linear combination of the standard basis vectors e1, e2 and e3. We say that these vectors "span R3". Remember that we write a set of vectors inside curly brackets, so the set of basis vectors in R3 is written {e1, e2, e3}. To signify that this set spans R3 we write

$$R^3 = \text{span}\,\{e_1, e_2, e_3\}$$

We now generalise the idea of a span to general vector spaces:

Let S = {v1, v2, ..., vn} be a set of vectors in a space V and let W be the set of all linear combinations of the vectors in S. The set W is called the span of the vectors v1, v2, ..., vn and we write

$$W = \text{span}\, S = \text{span}\,\{v_1, v_2, \dots, v_n\}$$

Example 5.5 Do the following vectors span R3?

$$v_1 = (2, 0, 1) \qquad v_2 = (-1, 3, 4) \qquad v_3 = (1, 1, -2)$$

If they do, then any vector in R3, say u = (u1, u2, u3), can be written as a linear combination of v1, v2 and v3:

$$u = (u_1, u_2, u_3) = c_1 v_1 + c_2 v_2 + c_3 v_3 \tag{25}$$

i.e.

$$(u_1, u_2, u_3) = c_1(2, 0, 1) + c_2(-1, 3, 4) + c_3(1, 1, -2)$$

or

$$2c_1 - c_2 + c_3 = u_1 \qquad 3c_2 + c_3 = u_2 \qquad c_1 + 4c_2 - 2c_3 = u_3 \tag{26}$$

We need to be able to find values for c1, c2 and c3. These equations can be written in matrix form as

$$\underbrace{\begin{pmatrix} 2 & -1 & 1 \\ 0 & 3 & 1 \\ 1 & 4 & -2 \end{pmatrix}}_{A} \underbrace{\begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix}}_{c} = \underbrace{\begin{pmatrix} u_1 \\ u_2 \\ u_3 \end{pmatrix}}_{u} \tag{27}$$

To have a solution c, the matrix A has to be invertible, i.e., have an inverse, for then c = A⁻¹u. To have an inverse, the determinant of A has to be non-zero. Exercise: check that the determinant of A is −24. So we can find values for c1, c2 and c3. Therefore

$$\text{span}\,\{v_1, v_2, v_3\} = R^3$$
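The determinant exercise from example 5.5 is a one-line check in numpy:

```python
import numpy as np

# The matrix A from equation (27); its columns are v1, v2, v3
A = np.array([[2, -1, 1],
              [0, 3, 1],
              [1, 4, -2]])

d = np.linalg.det(A)
print(round(d))  # -24: non-zero, so v1, v2, v3 span R3
```

Note that `np.linalg.det` works in floating point, so the result is rounded before comparing with the exact value −24.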

## 6 Linear independence

We want to know when a set of vectors S will span the whole of a vector space V, i.e., when we can write all vectors in V as linear combinations of the vectors in S. There are two things we have to make sure of: (i) there are enough vectors in S to describe all of V, and (ii) there are no redundant vectors in S, so we can write each vector in V as a linear combination in only one way. By a redundant vector, we mean one that is a linear combination of the other vectors in the set, so we don't really need it. In order to reach this goal, we need firstly to identify when vectors in a set are "independent" of each other, by which again we mean that none of them is a linear combination of the others.

Example 6.1 Consider the vectors v1 = (2, −2, 4), v2 = (3, −5, 4) and v3 = (0, 1, 1) in R3. If these vectors are "dependent" we can form linear combinations, i.e., we should be able to get

$$c_1 v_1 + c_2 v_2 + c_3 v_3 = 0 \quad \text{or} \quad c_1 v_1 = -c_2 v_2 - c_3 v_3 \tag{28}$$

where the scalars c1, c2 and c3 cannot all be zero (otherwise it is not possible to form a linear combination and the vectors are independent). Substituting, we get

$$c_1(2, -2, 4) + c_2(3, -5, 4) + c_3(0, 1, 1) = (0, 0, 0)$$

which leads to a system of equations in matrix form

$$\underbrace{\begin{pmatrix} 2 & 3 & 0 \\ -2 & -5 & 1 \\ 4 & 4 & 1 \end{pmatrix}}_{A} \underbrace{\begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix}}_{c} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \tag{29}$$

For equations of the form Ac = 0, there is only the trivial solution (c = 0) if A is invertible. Otherwise, there will be a non-trivial solution (at least one of c1, c2 and c3 will not be zero). Exercise: check that det(A) = 0. So we can find at least one non-trivial solution c to equation (28). Thus, the vectors v1, v2 and v3 are not independent. This leads us on to the definition of "linear independence", which is just a generalisation of the "dependence" concept above.

Let S = {v1, v2, ..., vn} be a set of vectors in some vector space V. If the equation c1v1 + c2v2 + ... + cnvn = 0 is only satisfied when c1 = c2 = ... = cn = 0, we say that the vectors v1, v2, ..., vn are linearly independent. Otherwise, we say that the vectors are linearly dependent.

Let's look at some more examples.

Example 6.2 Are the vectors v1 = (3, −1) and v2 = (−2, 2) in R2 linearly independent? Let's set up the equation:

$$c_1 v_1 + c_2 v_2 = 0 \;\Rightarrow\; c_1(3, -1) + c_2(-2, 2) = (0, 0)$$

which leads to a system of equations

$$3c_1 - 2c_2 = 0 \qquad -c_1 + 2c_2 = 0$$

the only solution to which is c1 = c2 = 0 (the trivial solution). Therefore, v1 and v2 are linearly independent.

Example 6.3 The standard basis vectors in R3, e1, e2 and e3, are linearly independent. Try to find numbers c1, c2 and c3, not all zero, such that c1e1 + c2e2 + c3e3 = 0. It's impossible!

## 7 Basis and dimension

To this point, we've been using the term "standard basis" in the Euclidean n spaces without really knowing what the "basis" part of this expression means. Earlier, I said we were working towards writing all vectors in a space V as sums (linear combinations) of some special 'fundamental' vectors; we call such a set a "basis for V". There should be enough vectors to span the whole of V. At the same time, there should be no redundant (linearly dependent) vectors because the linear combinations should be unique. These two requirements basically lead to the special set of vectors we are looking for. First we will tackle the issue of "basis". In R2 and R3 the "dimension" is usually thought of geometrically as the number of axes, typically labelled x, y and z; the more general concept of dimension will reduce to this definition in these so-called "n-dimensional" spaces.

Let S = {v1, v2, ..., vn} be a set of vectors in some vector space V. If V = span S = span {v1, v2, ..., vn} (any vector in V can be obtained from a linear combination of the vectors in S), and v1, v2, ..., vn are linearly independent, we call S a basis for V.

Example 7.1 The standard basis vectors e1, e2 and e3 form a basis for R3 (hence the name).

(i) We already know from examples 5.1 and 5.4 that R3 = span {e1, e2, e3}.

(ii) From example 6.3 we know that the standard basis vectors are linearly independent.

Example 7.2 Determine if the vectors v1 = (1, −1, 1), v2 = (0, 1, 2) and v3 = (3, 0, −1) form a basis for R3. First we have to check whether these vectors are linearly dependent, i.e., can we find c1, c2 and c3 (not all zero) such that c1v1 + c2v2 + c3v3 = 0?

$$c_1(1, -1, 1) + c_2(0, 1, 2) + c_3(3, 0, -1) = (0, 0, 0)$$

or in matrix form

$$\underbrace{\begin{pmatrix} 1 & 0 & 3 \\ -1 & 1 & 0 \\ 1 & 2 & -1 \end{pmatrix}}_{A} \underbrace{\begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix}}_{c} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \tag{30}$$

Also, to have R3 = span {v1, v2, v3}, any vector (u1, u2, u3) in R3 has to be a linear combination of v1, v2 and v3:

$$C_1(1, -1, 1) + C_2(0, 1, 2) + C_3(3, 0, -1) = (u_1, u_2, u_3)$$

or in matrix form

$$\underbrace{\begin{pmatrix} 1 & 0 & 3 \\ -1 & 1 & 0 \\ 1 & 2 & -1 \end{pmatrix}}_{A} \underbrace{\begin{pmatrix} C_1 \\ C_2 \\ C_3 \end{pmatrix}}_{C} = \underbrace{\begin{pmatrix} u_1 \\ u_2 \\ u_3 \end{pmatrix}}_{u} \tag{31}$$

If det(A) is not zero, then (31) has a unique solution C and the only solution to equation (30) is the trivial solution c = 0. Exercise: Check that det(A) = −10. Therefore, v1, v2 and v3 are linearly independent and they span R3. Thus, they form a basis for R3.

We now come to the concept of "dimension". Suppose that S = {v1, v2, ..., vn} is a basis for a vector space V. If the number of vectors in S is finite, say n, we say that V is finite dimensional with dimension n. We write dim(V) = n. Otherwise, the space is said to be infinite dimensional. It turns out, importantly, that

All bases of V contain the same number of vectors

Example 7.3 All the spaces Rn are finite dimensional with dimension n. For example, R3 has dimension 3. All bases for R3 will have 3 vectors. If there are fewer, they will not span R3. If there are more, they will not be linearly independent.
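The determinant checks left as exercises in examples 6.1, 6.2 and 7.2 can all be confirmed with numpy; a zero determinant means the column vectors are dependent, a non-zero one means they are independent:

```python
import numpy as np

# Example 6.1: columns are v1 = (2, -2, 4), v2 = (3, -5, 4), v3 = (0, 1, 1)
d1 = np.linalg.det(np.array([[2, 3, 0], [-2, -5, 1], [4, 4, 1]]))

# Example 6.2: columns are v1 = (3, -1), v2 = (-2, 2)
d2 = np.linalg.det(np.array([[3, -2], [-1, 2]]))

# Example 7.2: columns are v1 = (1, -1, 1), v2 = (0, 1, 2), v3 = (3, 0, -1)
d3 = np.linalg.det(np.array([[1, 0, 3], [-1, 1, 0], [1, 2, -1]]))

print(round(d1), round(d2), round(d3))  # 0 (dependent), 4 and -10 (independent)
```

In floating point d1 comes out as a tiny number rather than exactly zero, which is why the results are rounded before reading them off.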

## 8 Changing the basis

We've already seen through examples that a basis for a vector space is not unique. For example, the standard basis in R3 and the set of vectors {v1, v2, v3} in example 7.2 are both bases in R3. The standard basis in Rn is generally the easiest one to work with, but there may be cases in which an alternative basis is preferable. Therefore, we need to find a way to convert between different bases. Let's look at an example to sort out some terminology.

Example 8.1 Using the standard basis in R3 we can write the vector (3, 5, 2) as

$$(3, 5, 2) = 3(1, 0, 0) + 5(0, 1, 0) + 2(0, 0, 1) = 3e_1 + 5e_2 + 2e_3$$

The numbers multiplying the basis vectors, 3, 5 and 2, are called the "coordinates" of the vector. It is clear that the coordinates will change depending on the basis. For the standard basis, the coordinates are simple to find: they are just the numbers in the vector itself. For other bases, you have to think a bit more. We now generalise the idea of coordinates.

Let S = {v1, v2, ..., vn} be a basis for a vector space V. Since S is a basis, we can express any vector u in V as a linear combination of the vectors in S:

$$u = c_1 v_1 + c_2 v_2 + \dots + c_n v_n$$

The numbers c1, c2, ..., cn are called the coordinates of u with respect to the basis S. The coordinates for a vector with respect to a basis S can themselves be written as a vector in Rn, which we call a coordinate vector:

$$(u)_S = (c_1, c_2, \dots, c_n)$$

The subscript S makes it clear that the coordinates are with respect to S. For the standard bases in Rn, the coordinate vector (u)S is exactly the same as the vector u itself, as seen in the example above.

Example 8.2 Determine the coordinate vector (u)S of the vector u = (10, 5, 0) relative to the following bases.

(i) The standard basis in R3. In this case u = 10e1 + 5e2 + 0e3, so the coordinates are 10, 5 and 0, and the coordinate vector is simply (u)S = (10, 5, 0) = u.

(ii) S = {v1, v2, v3}, where v1 = (1, −1, 1), v2 = (0, 1, 2) and v3 = (3, 0, −1). In this case, we have to find the coordinates c1, c2 and c3 such that

$$c_1(1, -1, 1) + c_2(0, 1, 2) + c_3(3, 0, -1) = (10, 5, 0)$$

This is equivalent to the system of equations

$$c_1 + 3c_3 = 10 \qquad -c_1 + c_2 = 5 \qquad c_1 + 2c_2 - c_3 = 0$$

The answer is c1 = −2, c2 = 3 and c3 = 4 (Exercise: Check this result). Therefore, (u)S = (−2, 3, 4).

Now onto how to change bases. We will work in R2 to demonstrate the procedure. Suppose we have two bases for the space R2:

$$\underbrace{B = \{v_1, v_2\}}_{\text{Basis 1}} \qquad \underbrace{C = \{w_1, w_2\}}_{\text{Basis 2}} \tag{32}$$

Now because B is a basis for R2, each of the basis vectors in C can be written as a linear combination of the basis vectors in B:

$$w_1 = a v_1 + b v_2 \qquad w_2 = c v_1 + d v_2 \tag{33}$$

This means that the coordinate vectors for w1 and w2 relative to the basis B are (w1)B = (a, b) and (w2)B = (c, d). Unfortunately, we now have to introduce a new notation for writing these coordinate vectors. Instead of ( )B we are going to write them using [ ]B and call them coordinate matrices, written as columns:

$$[w_1]_B = \begin{pmatrix} a \\ b \end{pmatrix} \qquad [w_2]_B = \begin{pmatrix} c \\ d \end{pmatrix} \tag{34}$$

They are basically the same as the coordinate vectors. Next, let u be any vector in V. In terms of the basis C, we can write u as

$$u = c_1 w_1 + c_2 w_2 \tag{35}$$

The coordinate matrix of u relative to C is:

$$[u]_C = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} \tag{36}$$

Equation (33) tells us how to write the basis vectors in C as linear combinations of the basis vectors B. Substituting equation (33) into equation (35), we get

$$u = c_1 w_1 + c_2 w_2 = c_1(a v_1 + b v_2) + c_2(c v_1 + d v_2) = (a c_1 + c c_2) v_1 + (b c_1 + d c_2) v_2 \tag{37}$$

This gives us the coordinate matrix of u relative to the basis B

$$[u]_B = \begin{pmatrix} a c_1 + c c_2 \\ b c_1 + d c_2 \end{pmatrix} \tag{38}$$

Let us re-write this as

$$[u]_B = \begin{pmatrix} a c_1 + c c_2 \\ b c_1 + d c_2 \end{pmatrix} = \underbrace{\begin{pmatrix} a & c \\ b & d \end{pmatrix}}_{P} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = P\,[u]_C \tag{39}$$

The matrix P is called the transition matrix from C to B: given the coordinate matrix of a vector relative to the basis C, we can use it to find the coordinate matrix relative to the basis B. Notice that its columns are the coordinate matrices for the basis vectors C relative to B, [w1]B and [w2]B. We can therefore write P compactly as

$$P = [[w_1]_B \;\; [w_2]_B]$$

Equation (39) can then be written compactly as

$$[u]_B = P\,[u]_C = [[w_1]_B \;\; [w_2]_B]\,[u]_C$$

We can now generalise this result. Suppose we have two bases for the vector space V:

$$\underbrace{B = \{v_1, v_2, \dots, v_n\}}_{\text{Basis 1}} \qquad \underbrace{C = \{w_1, w_2, \dots, w_n\}}_{\text{Basis 2}}$$

The transition matrix from C to B is defined as

$$P = [[w_1]_B \;\; [w_2]_B \;\; \dots \;\; [w_n]_B] \tag{40}$$

where the ith column of P is the coordinate matrix of wi relative to B. The coordinate matrix of a vector u in V relative to B is then related to the coordinate matrix of u relative to C by

$$[u]_B = P\,[u]_C \tag{41}$$

Example 8.3 Consider the standard basis B = {e1, e2, e3} and the basis C = {v1, v2, v3} for R3, where v1 = (1, −1, 1), v2 = (0, 1, 2) and v3 = (3, 0, −1).

(i) Find the transition matrix from C to B
(ii) Find the transition matrix from B to C

(i) Recall that the columns of the transition matrix are coordinate matrices for the basis vectors of C relative to B. In other words, we have to find the coordinates of the basis vectors v1, v2 and v3 when they are written as linear combinations of e1, e2 and e3. We know from examples 8.1 and 8.2 that in the standard basis, the coordinate vector (and therefore the coordinate matrix) is simply the vector itself. Thus

[v1]B = [  1 ]     [v2]B = [ 0 ]     [v3]B = [  3 ]
        [ -1 ]             [ 1 ]             [  0 ]
        [  1 ]             [ 2 ]             [ -1 ]             (42)

From equation (40), the transition matrix from C to B is then

P = [ [v1]B  [v2]B  [v3]B ] = [  1  0  3 ]
                              [ -1  1  0 ]
                              [  1  2 -1 ]                      (43)

(ii) To find the transition matrix from B to C we need the coordinate matrices of the standard basis vectors relative to C. In other words, we have to find the coordinates of e1, e2 and e3 when they are written as linear combinations of v1, v2 and v3. This requires more work (for you!). Exercise: Verify that

e1 = (1/10) v1 + (1/10) v2 + (3/10) v3
e2 = -(3/5) v1 + (2/5) v2 + (1/5) v3
e3 = (3/10) v1 + (3/10) v2 - (1/10) v3

Therefore, the coordinate matrices of the standard basis vectors relative to C are

[e1]C = [ 1/10 ]     [e2]C = [ -3/5 ]     [e3]C = [  3/10 ]
        [ 1/10 ]             [  2/5 ]             [  3/10 ]
        [ 3/10 ]             [  1/5 ]             [ -1/10 ]     (44)

and the transition matrix from B to C is

P' = [ [e1]C  [e2]C  [e3]C ] = [ 1/10  -3/5   3/10 ]
                               [ 1/10   2/5   3/10 ]
                               [ 3/10   1/5  -1/10 ]            (45)

Example 8.4 Using the results of the previous example, compute
(i) [u]B given (u)C = (-2, 3, 4)
(ii) [u]C given (u)B = (10, 5, 0)

(i) All we need to do now is use equation (41), i.e. some matrix multiplication:

  10   −2           5  =  3          4 0 (47) There is one ﬁnal observation to make The transition matrix from the basis B to C is the inverse of the transition matrix from C to B. Once we have the transition matrix.2(ii). Page 34 . since we’re going from B to C. Exercise: Check that P ′ is the inverse of P in example 8.4. we swap the bases and use the transition matrix P ′ instead of P .       1/10 −3/5 3/10   ′ [u]C = P [u]B =  1/10 2/5 3/10   3/10 1/5 −1/10 as expected from part (i). (ii) This time. we can perform this computation quickly and easily for many vectors. we can see that this is the right result.Engineering Analysis SESA 2021  1 0 3   [u]B = P [u]C =  −1 1 0   1 2 −1     −2   10           3  =  5          4 0    (46) Looking back at example 8.

9 Fundamental subspaces

There are some very important subspaces of Rn that we will be interested in. These subspaces are associated with matrices. Let's look at a general n × m matrix

Anm = [ a11  a12  ...  a1m ]
      [ a21  a22  ...  a2m ]
      [  .    .         .  ]
      [ an1  an2  ...  anm ]                                    (48)

It has n rows and m columns. The row vectors are the vectors formed out of the rows of Anm (these are in Rm) and the column vectors are the vectors formed out of the columns of Anm (these are in Rn).

Example 9.1 Consider the 4 × 2 matrix

A = [ -1   5 ]
    [  0  -4 ]
    [  9   2 ]
    [  3  -7 ]                                                  (49)

The row vectors are

r1 = (-1, 5)    r2 = (0, -4)    r3 = (9, 2)    r4 = (3, -7)     (50)

which are vectors in R2 (there are m = 2 columns) and the column vectors are

c1 = [ -1 ]        c2 = [  5 ]
     [  0 ]             [ -4 ]
     [  9 ]             [  2 ]
     [  3 ]             [ -7 ]                                  (51)

which are vectors in R4 (there are n = 4 rows).

First let's recall that a matrix Anm is a linear transformation that takes any column vector in Rm and transforms it by multiplication into a column vector in Rn. We write Anm : Rm -> Rn. The domain of Anm is Rm (the set of inputs) and the range of Anm is the set of all possible outputs ("images") in Rn (which is generally not all of Rn, just a subspace of it).

Now onto the fundamental subspaces of Anm. There are three important subspaces of Rn and Rm associated with a matrix Anm. We call them the fundamental subspaces of Anm.

(1) The first subspace is related to the zero vector in Rn. The set of all vectors u in the domain Rm that give

Anm u = 0                                                       (52)

is called the null space or kernel of Anm. In other words, those vectors in the domain (inputs) that when operated on by Anm give us the zero vector in Rn. We write the null space of a matrix A as null(A) or ker(A).

(2) The span of the row vectors of Anm, i.e. the set of all linear combinations of the row vectors, is called the row space of Anm. Because the row vectors are in Rm, the row space is a subspace of Rm. We write the row space of a matrix A as row(A).

(3) The span of the column vectors of Anm, i.e. the set of all linear combinations of the column vectors, is called the column space of Anm. Because the column vectors are in Rn, the column space is a subspace of Rn. We write the column space of a matrix A as col(A).

We will be interested in finding bases for each of these spaces. First another example.

Example 9.2 Find the null space ker(A) of the following matrix

A = [  1  -7 ]
    [ -3  21 ]                                                  (53)

To find the null space, we use equation (52). Let's assume that (u1, u2) is a vector in ker(A). Then equation (52) leads to

A u = [  1  -7 ] [ u1 ] = [ 0 ]
      [ -3  21 ] [ u2 ]   [ 0 ]                                 (54)

which can be written as a system of linear equations

u1 - 7u2 = 0
-3u1 + 21u2 = 0   =>   -u1 + 7u2 = 0                            (55)

The two equations are equivalent, and are satisfied when (u1, u2) = (7t, t) for any number t. Therefore, ker(A) consists of all vectors of the form (7t, t) for any number t, of which there are infinitely many.

Now, this is one way of finding the null space and a basis for it. However, we want to be able to find bases for all the fundamental subspaces of more complicated matrices using just one procedure. This procedure is described through another example. Before we move onto the example, we first have to review the concepts of augmented matrices and reduced echelon forms, which you have covered in your first year maths modules.

Suppose we have a linear system of homogeneous (right hand sides are zero) equations:

-u1 + 2u2 - u3 + 5u4 + 6u5 = 0
4u1 - 4u2 - 4u3 - 12u4 - 8u5 = 0
2u1 - 6u3 - 2u4 + 4u5 = 0
-3u1 + u2 + 7u3 - 2u4 + 12u5 = 0                                (56)

We can write this in matrix form as
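Example 9.2 can be reproduced with SymPy, which returns a null-space basis directly (a sketch; SymPy sets the free variable to 1, i.e. it returns the t = 1 member of the family (7t, t)):

```python
import sympy as sp

A = sp.Matrix([[1, -7],
               [-3, 21]])

basis = A.nullspace()                  # basis for ker(A)
assert basis == [sp.Matrix([7, 1])]    # the vector (7, 1)
assert A * basis[0] == sp.zeros(2, 1)  # it really is in the null space
```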

[ -1   2  -1    5   6 ] [ u1 ]   [ 0 ]
[  4  -4  -4  -12  -8 ] [ u2 ]   [ 0 ]
[  2   0  -6   -2   4 ] [ u3 ] = [ 0 ]
[ -3   1   7   -2  12 ] [ u4 ]   [ 0 ]
                        [ u5 ]                                  (57)

A convenient way of writing this system of equations is by forming the augmented matrix

[ -1   2  -1    5   6 | 0 ]
[  4  -4  -4  -12  -8 | 0 ]
[  2   0  -6   -2   4 | 0 ]
[ -3   1   7   -2  12 | 0 ]                                     (58)

The entries to the left of the line represent the coefficients of u1 to u5 in equations (56) and (57). The zeros to the right of the line represent the terms on the right hand sides of the '=' signs in equations (56) and (57). Now, in the system of equations (56) we can multiply or divide any equation by a constant, we can add or subtract equations, or we can swap the equations around without altering the solutions. You do this, e.g., when you solve 2 linear simultaneous equations.

Aside. Solve the following system and make a note of the steps required:

u1 - 2u2 = 2
3u1 + u2 = -2

The same is true, therefore, of the augmented matrix (58), which represents the system of equations (56). We can

• Interchange 2 rows
• Multiply or divide a row by a non-zero number
• Add a multiple of one row to another.

These are called elementary row operations. They are equivalent to adding equations (56), multiplying them by constants and interchanging them. The augmented matrix is just a more compact way of doing it. We also have to be careful about the right hand sides when we perform the operations. However, for the homogeneous system above they are zero, so they do not affect the row operations.

We now want to find the reduced row echelon form of the matrix. We get this by performing elementary row operations until the augmented matrix satisfies the following properties:

• In each row, the first non-zero entry from the left is 1. This is called the leading 1.
• The leading 1 in each row is to the right of the leading 1 in the row above.

• All rows consisting entirely of zeros are at the bottom of the matrix.

Exercise: go through the following steps on the augmented matrix (58): (1) row 2 + 4 × row 1; (2) row 3 + 2 × row 1; (3) row 2 ÷ 4; (4) row 1 × -1; (5) row 3 ÷ 4; (6) row 4 + 3 × row 1; (7) row 3 - row 2; (8) row 4 + 5 × row 2; (9) row 4 ↔ row 3; (10) row 3 ÷ -7; to confirm that the reduced row echelon form is

U = [ 1  -2   1  -5  -6 | 0 ]
    [ 0   1  -2   2   4 | 0 ]
    [ 0   0   0   1  -2 | 0 ]
    [ 0   0   0   0   0 | 0 ]                                   (59)

We now move onto the example.

Example 9.3 Determine a basis for the null space of the following 4 × 5 matrix
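The row-reduction exercise can be checked with SymPy (a sketch; note that SymPy's rref also clears the entries above each leading 1, so it goes one step further than the form (59), but the leading-1 columns are the same):

```python
import sympy as sp

A = sp.Matrix([[-1, 2, -1, 5, 6],
               [4, -4, -4, -12, -8],
               [2, 0, -6, -2, 4],
               [-3, 1, 7, -2, 12]])

R, pivots = A.rref()        # fully reduced row echelon form
assert pivots == (0, 1, 3)  # leading 1's in columns 1, 2 and 4
assert R.row(3) == sp.Matrix([[0, 0, 0, 0, 0]])  # zero row at the bottom
```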

A = [ -1   2  -1    5   6 ]
    [  4  -4  -4  -12  -8 ]
    [  2   0  -6   -2   4 ]
    [ -3   1   7   -2  12 ]                                     (60)

To find the null space we need to solve equation (52) for u = (u1, u2, u3, u4, u5) in R5. This is the same as equation (57) above. We put it into the augmented matrix, which is given by matrix (58). Again, we have done this already. Now we need the reduced row echelon form of the matrix. The answer is given by matrix (59):

U = [ 1  -2   1  -5  -6 | 0 ]
    [ 0   1  -2   2   4 | 0 ]
    [ 0   0   0   1  -2 | 0 ]
    [ 0   0   0   0   0 | 0 ]                                   (61)

Thus, we only have 3 equations (the top 3 rows), but 5 unknowns. Let's set u5 = s, where s is any number. The third equation (row) gives

u4 = 2u5 = 2s

Now set u3 = t for any number t. The second equation (row) gives

u2 = 2u3 - 2u4 - 4u5 = 2t - 8s

Finally, the first equation (row) gives

u1 = 2u2 - u3 + 5u4 + 6u5 = 3t

The full solution is

u = [ 3t      ]     [ 3 ]     [  0 ]
    [ 2t - 8s ]     [ 2 ]     [ -8 ]
    [ t       ] = t [ 1 ] + s [  0 ] = t u1 + s u2
    [ 2s      ]     [ 0 ]     [  2 ]
    [ s       ]     [ 0 ]     [  1 ]                            (62)

for any numbers t and s. There are infinitely many solutions because the number of unknowns is greater than the number of equations. So, the null space consists of all vectors of the form c1 u1 + c2 u2. However, we haven't quite answered the question: we still haven't specified a basis! It looks like the vectors u1 and u2 could form a basis. They certainly span the whole of the null space, but are they linearly independent? Yes, they are (Exercise: check that they are). So, they satisfy the two properties required to be a basis.

We now come to the main reason for solving the system by finding the reduced row echelon form.

Let Anm be an n × m matrix.

• The vectors found for the null space of the reduced echelon form of Anm are always linearly independent. They form a basis for the null space of the reduced echelon matrix and for the null space of the original matrix. The dimension of the null space (i.e. the number of basis vectors) is called the nullity of Anm, written nullity(Anm).

• The row vectors containing the leading 1's in the reduced row echelon form of Anm form a basis for the row space of the reduced echelon matrix and for the row space of the original matrix Anm.

• The column vectors containing the leading 1's in the reduced echelon form of Anm form a basis for the column space of the reduced echelon form. Suppose that these column vectors correspond to column numbers m1, m2, ..., mk. The column vectors of the original matrix Anm corresponding to column numbers m1, m2, ..., mk form a basis for the column space of the original matrix.

Example 9.4 Let's look again at the matrix A in example 9.3. The reduced row echelon form U is given by equation (59):

U = [ 1  -2   1  -5  -6 | 0 ]
    [ 0   1  -2   2   4 | 0 ]
    [ 0   0   0   1  -2 | 0 ]
    [ 0   0   0   0   0 | 0 ]                                   (63)

We found that there are two vectors in the basis for the null space. Therefore nullity(A) = 2 (all bases have the same number of vectors).

Rows 1, 2 and 3 of U contain the leading 1's. Therefore, a basis for the row space of both A and U is given by

r1 = (1, -2, 1, -5, -6)
r2 = (0, 1, -2, 2, 4)
r3 = (0, 0, 0, 1, -2)

with dim(row(A)) = 3.

Columns 1, 2 and 4 of U contain the leading 1's. Therefore, a basis for the column space of U is given by the 1st, 2nd and 4th column vectors of U:

c1' = (1, 0, 0, 0)
c2' = (-2, 1, 0, 0)
c4' = (-5, 2, 1, 0)

A basis for the column space of A is therefore given by the 1st, 2nd and 4th

column vectors of A:

c1 = (-1, 4, 2, -3)
c2 = (2, -4, 0, 1)
c4 = (5, -12, -2, -2)

with dim(col(A)) = 3.

Notice in this example that dim(row(A)) = dim(col(A)), i.e. the column and row spaces have the same dimension. This is always true.

The row space and column space of a general n × m matrix Anm have the same dimension. We call this dimension the rank of Anm, written rank(Anm).

The second thing to notice from the example above is that nullity(A) + rank(A) = 2 + 3 = 5, i.e. the number of columns. This again is always true.

For a general n × m matrix Anm (m columns): nullity(Anm) + rank(Anm) = m      (64)
For an n × n matrix A: nullity(A) + rank(A) = n
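The dimensions in example 9.4 and relation (64) can be confirmed in a couple of lines (a sketch using SymPy):

```python
import sympy as sp

A = sp.Matrix([[-1, 2, -1, 5, 6],
               [4, -4, -4, -12, -8],
               [2, 0, -6, -2, 4],
               [-3, 1, 7, -2, 12]])

rank = A.rank()                   # dim(row(A)) = dim(col(A))
nullity = len(A.nullspace())      # number of null-space basis vectors

assert rank == 3 and nullity == 2
assert rank + nullity == A.cols   # equation (64): rank + nullity = m = 5
```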

10 Square matrices and systems of linear equations

The concepts of rank and nullity are important. Let's consider a square n × n matrix A : Rn -> Rn. A typical problem in many applications of engineering is to find a solution u in Rn to the equation

A u = b                                                         (65)

where the vector b in Rn is known. We will look at certain aspects of this problem with an example.

Example 10.1 Consider the matrix

A = [  1  -2   1 ]
    [  2   1  -2 ] = (c1 c2 c3)
    [ -3   0   2 ]                                              (66)

where c1, c2 and c3 are the column vectors of A:

c1 = [  1 ]     c2 = [ -2 ]     c3 = [  1 ]
     [  2 ]          [  1 ]          [ -2 ]
     [ -3 ]          [  0 ]          [  2 ]                     (67)

Now consider the procedure for multiplying a vector u = (u1, u2, u3) by A:

A u = [  1  -2   1 ] [ u1 ]   [ 1 × u1 + (-2) × u2 + 1 × u3 ]
      [  2   1  -2 ] [ u2 ] = [ 2 × u1 + 1 × u2 + (-2) × u3 ]
      [ -3   0   2 ] [ u3 ]   [ (-3) × u1 + 0 × u2 + 2 × u3 ]

    = u1 [  1 ] + u2 [ -2 ] + u3 [  1 ]
         [  2 ]      [  1 ]      [ -2 ] = u1 c1 + u2 c2 + u3 c3
         [ -3 ]      [  0 ]      [  2 ]                         (68)

i.e. any matrix multiplication leads to a linear combination of the column vectors, i.e. a vector in the column space. From the above example we can see that if we want to solve equation (65), the vector b has to be in the column space of A. It also shows that all output vectors (i.e. the range of A) are in the column space of A:

The range of a square matrix is its column space.

Next, let's consider the nullity and rank. What happens when the rank of an n × n matrix is less than n? From the definition of rank, we know that if rank(A) < n, some of the column and row vectors will be linearly dependent: they can be obtained from the other rows (or columns) by forming linear combinations and are, therefore, redundant. If we were to set up the matrix system (65) with some vector b and look for a solution u, then we would not have enough equations or some equations would contradict each other. Therefore, a solution will not exist

at all, or there will be infinitely many solutions => A will not have an inverse:

The rank of A is less than n <=> A is not invertible

A square n × n matrix A with rank(A) = n is said to have full rank (obviously the rank cannot be any bigger!). If rank(A) < n, the matrix A is said to be rank deficient. We can restate the above as:

A is rank deficient <=> A is not invertible

By definition, if a matrix A is rank deficient some of the rows are linearly dependent. By performing elementary row operations (adding multiples of rows to other rows) we can get a new matrix B that will have a row of zeros. The determinants of A and B will differ only by a constant. Therefore, since det(B) = 0, we have det(A) = 0, which means that A will not have an inverse:

A is rank deficient <=> det(A) = 0

Another way to look at nullity and rank is by considering the solutions to

A v = 0                                                         (69)

The solution to this equation clearly gives us the null space ker(A), and the nullity of A is the number of vectors in the basis for ker(A). If there are non-zero solutions to equation (69), then nullity(A) > 0. Equation (64) then tells us that rank(A) < n. In this case, if u is a solution of equation (65) and v is a non-zero solution of equation (69), we can write

A(u + v) = A u + A v = b + 0 = b

What does this tell us? It tells us that if u is a solution to equation (65) then so is u + v, and there may be an infinite number of the v. This suggests that if a solution to equation (65) exists, it will not be unique.

A is rank deficient <=> no unique solution to Au = b
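These statements can be illustrated with the singular matrix from example 9.2; the right hand side b below is my own choice, picked so that it lies in col(A):

```python
import numpy as np

A = np.array([[1.0, -7.0],
              [-3.0, 21.0]])
assert abs(np.linalg.det(A)) < 1e-12   # rank deficient <=> det(A) = 0

u = np.array([2.0, 0.0])   # one particular solution
b = A @ u                  # b = (2, -6), a vector in col(A)
v = np.array([7.0, 1.0])   # a non-zero vector in ker(A), equation (69)

# u + v solves Au = b as well, so the solution is not unique
assert np.allclose(A @ (u + v), b)
```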

11 Inner product spaces and orthogonality

There is a special class of spaces that we are going to look at. The basic idea is to introduce generalisations of the familiar "dot product" and "magnitude" of a vector in R2 or R3. What we would like to do, as in Rn, is measure the (i) magnitude (or "length") of a vector and (ii) angles and distances between vectors. In R2 and R3 you can visualise these, but in higher dimensions you can't. The Euclidean spaces fall into this category.

Example 11.1 The dot product in R2 and R3 is defined as follows:

u · v = (u1, u2, u3) · (v1, v2, v3) = u1 v1 + u2 v2 + u3 v3

where we multiply the first, second, etc. coordinate of the first vector by the first, second, etc. coordinate of the second vector and add the results. The dot product has a geometric interpretation

u · v = |u||v| cos θ

where |u| = √(u1^2 + u2^2 + u3^2) and |v| = √(v1^2 + v2^2 + v3^2) are the "magnitudes" of the vectors and θ is the angle between the vectors in the plane that contains them both. Notice also that

√(u · u) = √(u1^2 + u2^2 + u3^2) = |u|

and that

u · v = v · u
(u + v) · w = u · w + v · w
(c u) · v = c (u · v)   for any scalar c                        (70)
u · u = u1^2 + u2^2 + u3^2 >= 0
u · u = 0   if and only if   u = 0

In general Rn spaces we can define the same dot product (multiply individual respective components)

u · v = u1 v1 + u2 v2 + ... + un vn

and the length of a vector in Rn is given by

|u| = √(u1^2 + u2^2 + ... + un^2) = √(u · u)

Now let's look at a general vector space V. We want similar measures of "angles" and "magnitudes".

• What we do is extend the idea of the dot product and call it an inner product.
• Like the dot product of two vectors, the inner product of two vectors gives us a number.
• We write ⟨u, v⟩ to represent the inner product of two vectors.
• As with the dot product, we will be able to use the inner product to measure "angles" and "magnitudes".

Example 11.2 The dot product on Euclidean spaces is an example of an inner product. It is called the standard inner product on these spaces.

Example 11.3 Let u = (1, -2, 4), v = (-2, 0, 1) and w = (3, -2, 2). With the standard inner product (i.e. just the dot product):

⟨u, v⟩ = ⟨(1, -2, 4), (-2, 0, 1)⟩ = 1 × (-2) + (-2) × 0 + 4 × 1 = 2
⟨v, u⟩ = ⟨(-2, 0, 1), (1, -2, 4)⟩ = (-2) × 1 + 0 × (-2) + 1 × 4 = 2 = ⟨u, v⟩
⟨c u, v⟩ = ⟨(c, -2c, 4c), (-2, 0, 1)⟩ = -2c + 0 + 4c = 2c = c ⟨u, v⟩
⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩     Exercise: Check this           (71)
√⟨u, u⟩ = √(1^2 + (-2)^2 + 4^2) = √21 = |u|     Exercise: Check this

• The properties demonstrated in this example always hold.
• The property ⟨u, v⟩ = ⟨v, u⟩ is called "symmetry".
• The third property is termed "linearity in the first argument" (the two "arguments" are the vectors on either side of the comma).

Exercise: Show that ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩ for the vectors in the example above. This means that the inner product is "linear in the second argument" as well as the first. It is, therefore, bilinear.

Hardish exercise (used later on): Show that ("additivity" property)

⟨v1 + v2 + ... + vn, w⟩ = ⟨v1, w⟩ + ⟨v2, w⟩ + ... + ⟨vn, w⟩     (72)

HINT: We can write v1 + v2 + ... + vn = v1 + (v2 + ... + vn). The sum s = v2 + ... + vn is just a single vector when we perform the addition, so we can apply the third rule in (71). Repeat the procedure by taking out v2 from the sum s to form a new sum s2 = v3 + ... + vn. Keep going until the new sum has only the term vn.

Example 11.4 We can define other inner products on the Rn spaces. To fix ideas, let's take vectors u = (u1, u2, u3) and v = (v1, v2, v3) in R3. The following defines an inner product:

New:       ⟨u, v⟩ = w1 u1 v1 + w2 u2 v2 + w3 u3 v3
Standard:  ⟨u, v⟩ = u1 v1 + u2 v2 + u3 v3
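A quick numerical check that the weighted product of example 11.4 behaves like an inner product. The weights w = (2, 1, 3) are an arbitrary choice of mine (they must be positive for ⟨u, u⟩ > 0 to hold):

```python
import numpy as np

w = np.array([2.0, 1.0, 3.0])   # positive weights (my choice)

def inner(u, v):
    """Weighted inner product <u, v> = w1*u1*v1 + w2*u2*v2 + w3*u3*v3."""
    return float(np.sum(w * u * v))

u = np.array([1.0, -2.0, 4.0])
v = np.array([-2.0, 0.0, 1.0])
x = np.array([3.0, -2.0, 2.0])

assert inner(u, v) == inner(v, u)                              # symmetry
assert np.isclose(inner(u + v, x), inner(u, x) + inner(v, x))  # additivity
assert np.isclose(inner(5 * u, v), 5 * inner(u, v))            # homogeneity
assert inner(u, u) > 0                                         # positivity
```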

The new and standard inner products are the same except for the numbers w1, w2 and w3 multiplying the first, second and third terms in the sum respectively. These numbers are called weights. This is an example of a "weighted inner product".

• A vector space on which we can define an inner product is called an inner product space.
• The inner product has to satisfy the rules (70) when we swap the dot product for the inner product.
• We are mainly interested in the vector spaces Rn with the inner product defined by the standard inner product, i.e. the dot product.

So how do we measure the "magnitude" of a vector? In the last computation in example 11.3 you saw that √⟨u, u⟩ is the magnitude of u. Before we go on to define the magnitude in general we are going to rename it. We will not say the "magnitude of u" but will instead say the "norm of u". Moreover, we will not write the norm (magnitude) as |u|, but instead we will write it as ||u||. A norm can be defined without reference to an inner product. However, we are interested in inner product spaces, and the inner product allows us to define a norm as

||u|| = √⟨u, u⟩

Example 11.5 In the Euclidean spaces with the standard inner product, the norm induced by the inner product is

||u|| = √⟨u, u⟩ = √(u1^2 + u2^2 + ... + un^2)

Note that for this space the norm ||u|| is identical to the magnitude |u|.

Example 11.6 Find the norms of the vectors u = (3, 4) and v = (2, -1, 2, -3) using the standard inner product:

||u|| = √⟨u, u⟩ = √(3^2 + 4^2) = 5                              (73)
||v|| = √⟨v, v⟩ = √(2^2 + (-1)^2 + 2^2 + (-3)^2) = √18 = 3√2

Example 11.7 In the Euclidean spaces, the norm induced by the standard inner product satisfies certain properties. For example, for all vectors u = (u1, u2, u3) in R3:

||u|| = √⟨u, u⟩ = √(u1^2 + u2^2 + u3^2) >= 0,  and  ||u|| = 0 if and only if u = 0    (74)
||c u|| = |c| ||u||   for any scalar c

Exercise: For u = (1, -2, 2), check that ||2u|| = 2||u|| = 6

All norms must satisfy these properties. Next we must find a way to compute "distances" between vectors. In R2 and R3, the distance between u and v is given by |u - v|, i.e. the magnitude of the difference. For a general inner product space we have:

The distance between two vectors u and v is given by the metric (also called distance function)

d(u, v) = ||u - v|| = √⟨u - v, u - v⟩

Example 11.8 In the Euclidean spaces with the standard inner product, the metric is

d(u, v) = ||u - v|| = √⟨u - v, u - v⟩ = √((u1 - v1)^2 + (u2 - v2)^2 + ... + (un - vn)^2)

Note that for this space, ||u - v|| is identical to |u - v|.

Exercise: Try to show that d(u, v) = d(v, u). HINT: (a - b)^2 = (b - a)^2 for any scalars a and b.

Example 11.9 Calculate the metric for u = (3, 4, 1, -1) and v = (2, -1, 2, -3):

d(u, v) = ||u - v|| = √((3 - 2)^2 + (4 + 1)^2 + (1 - 2)^2 + (-1 + 3)^2) = √31

Exercise: Check that d(u, v) = d(v, u), i.e. that ||u - v|| = ||v - u||.

Now, recall that in R2 and R3, two vectors are at right angles if u · v = 0, because u · v = |u||v| cos θ. In direct analogy, for a general inner product space, we say that u and v are orthogonal if

⟨u, v⟩ = 0

Example 11.10 The standard basis vectors in Rn are orthogonal to each other with the standard inner product. For example, ⟨(1, 0, 0), (0, 1, 0)⟩ = 0, ⟨(0, 1, 0), (0, 0, 1)⟩ = 0 and so on (remember these are just dot products). We say that these vectors are orthogonal.

Next, suppose that W is a subspace of an inner product space V. We say that a vector u from V is orthogonal to W if it is orthogonal to every vector in W. The set of all vectors that are orthogonal to W is called the orthogonal complement of W and is denoted by W⊥ ("W perp").
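If W is spanned by finitely many vectors, W⊥ can be computed by stacking those spanning vectors as the rows of a matrix: a vector is orthogonal to every vector in W exactly when it is orthogonal to each row, i.e. when it lies in the null space of that matrix. A sketch for W = the xy plane in R3:

```python
import sympy as sp

# Rows span W, the xy plane in R^3
M = sp.Matrix([[1, 0, 0],
               [0, 1, 0]])

perp = M.nullspace()                   # basis for the orthogonal complement
assert perp == [sp.Matrix([0, 0, 1])]  # W-perp = span{e3}
```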

Example 11.11 Consider the space R3 with the standard basis. Let W be the subspace of R3 consisting of all vectors that lie in the xy plane, i.e. of the form q = (q1, q2, 0) for any scalars q1 and q2. The orthogonal complement of W will be all vectors u = (u1, u2, u3) in R3 that are orthogonal to every vector in W, that is

⟨u, q⟩ = ⟨(u1, u2, u3), (q1, q2, 0)⟩ = u1 q1 + u2 q2 = 0

For this to be true for any choice of q1 and q2, we must have u1 = u2 = 0. It doesn't matter what u3 is, because the third component of q is always zero. So, we are looking at vectors of the form (0, 0, c) for any c. These are vectors in the direction of e3, which means vectors of the form c e3 = (0, 0, c). The span of e3 is all linear combinations of e3. Therefore

W⊥ = span{e3}

Armed with the definition of orthogonal complement, let's briefly revisit the fundamental subspaces of a matrix. There is actually another one. It is associated with the transpose of the matrix.

Example 11.12 Consider the 3 × 3 matrix A and its transpose AT:

A = [ a11  a12  a13 ]        AT = [ a11  a21  a31 ]
    [ a21  a22  a23 ]             [ a12  a22  a32 ]
    [ a31  a32  a33 ]             [ a13  a23  a33 ]             (75)

To get AT, we swap the columns for the rows. The 3 column vectors of A are

c1 = (a11, a21, a31)    c2 = (a12, a22, a32)    c3 = (a13, a23, a33)

These are also the 3 row vectors of AT. It follows that:

Finding a basis for the column space of A is equivalent to finding a basis for the row space of AT.

Now consider the procedure for multiplying a vector u = (u1, u2, u3) by AT:

AT u = [ a11  a21  a31 ] [ u1 ]   [ a11 u1 + a21 u2 + a31 u3 ]   [ ⟨u, c1⟩ ]
       [ a12  a22  a32 ] [ u2 ] = [ a12 u1 + a22 u2 + a32 u3 ] = [ ⟨u, c2⟩ ]
       [ a13  a23  a33 ] [ u3 ]   [ a13 u1 + a23 u2 + a33 u3 ]   [ ⟨u, c3⟩ ]    (76)

Suppose that the vector u is in the null space of AT, i.e. ker(AT). Then AT u = 0 which, looking at equation (76), means that ⟨u, ci⟩ = 0 for i = 1, 2, 3 (u is orthogonal to every one of the column vectors of A). Let v be any vector in col(A), i.e. the set of all linear combinations of c1, c2 and c3. Then v has the form v = a1 c1 + a2 c2 + a3 c3 for some numbers a1, a2 and a3. This gives

⟨u, v⟩ = ⟨u, a1 c1 + a2 c2 + a3 c3⟩
       = a1 ⟨u, c1⟩ + a2 ⟨u, c2⟩ + a3 ⟨u, c3⟩ = 0 + 0 + 0 = 0   (77)

which means that u is orthogonal to any vector v in col(A). We have demonstrated that if u is in ker(AT), it must also be in the orthogonal complement of col(A), written as col(A)⊥.

Now suppose that u is in col(A)⊥. Then it is orthogonal to every vector in col(A), in particular to the individual column vectors c1, c2 and c3. From equation (76) we then see that AT u = 0, so u is in ker(AT). We have demonstrated that if u is in col(A)⊥, it must also be in ker(AT).

Combining this with the previous result, we conclude that col(A)⊥ and ker(AT) are the same thing! We also know that col(A) and the range of A are the same. Therefore

ker(AT) = col(A)⊥ = range(A)⊥

(HARD) Exercise: Use similar arguments to show that ker(A) = row(A)⊥.

ker(AT) is the fourth fundamental subspace, called the left null space or cokernel.
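A numerical illustration of ker(AT) = col(A)⊥ with a small rank-1 matrix of my own choosing: every basis vector of ker(AT) is orthogonal to every column of A:

```python
import sympy as sp

A = sp.Matrix([[1, 2],
               [2, 4],
               [3, 6]])        # rank 1: col(A) = span{(1, 2, 3)}

left_null = A.T.nullspace()    # basis for ker(A^T), the left null space
assert len(left_null) == 2     # nullity(A^T) = 3 - rank(A) = 2

for u in left_null:
    for j in range(A.cols):
        assert u.dot(A.col(j)) == 0   # u is orthogonal to col(A)
```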

12 Orthogonal and orthonormal bases

We now come back to the issue of basis. Recall that B is a basis for a vector space V if every vector in V can be written as a linear combination of the vectors in B and the vectors in B are linearly independent (none of them is a linear combination of the others). If B is a basis for V and, furthermore, the space V has an inner product (i.e. it is an inner product space), we can turn B into a special type of basis. This new basis will have important and very useful properties. Before showing you how to construct it, you will need to understand a few basic concepts.

• Let S be a set of vectors in an inner product space. If each distinct pair of vectors is orthogonal we call S an orthogonal set.
• If S is an orthogonal set and each vector in S has a norm of 1, then S is called an orthonormal set.

Example 12.1 Given the vectors v1 = (2, 0, -1), v2 = (0, -1, 0) and v3 = (2, 0, 4) in R3: (a) Show that they form an orthogonal set with the standard inner product but do not form an orthonormal set. (b) Turn them into an orthonormal set u1, u2 and u3.

(a) To show that they form an orthogonal set, we have to demonstrate that each distinct pair is orthogonal:

⟨v1, v2⟩ = 2 × 0 + 0 × (-1) + (-1) × 0 = 0
⟨v1, v3⟩ = 2 × 2 + 0 × 0 + (-1) × 4 = 0
⟨v2, v3⟩ = 0 × 2 + (-1) × 0 + 0 × 4 = 0                         (78)

Exercise: Why didn't we compute ⟨v2, v1⟩, ⟨v3, v1⟩ and ⟨v3, v2⟩?

Now, to be an orthonormal set, the norms (magnitudes) of v1, v2 and v3 have to be 1. Let's compute them:

||v1|| = √⟨v1, v1⟩ = √(2^2 + 0^2 + (-1)^2) = √5
||v2|| = √(0^2 + (-1)^2 + 0^2) = 1                              (79)
||v3|| = √(2^2 + 0^2 + 4^2) = √20 = 2√5

(b) Most of the work is done. All we have to do is divide each vector by its norm:

u1 = v1/||v1|| = (1/√5)(2, 0, -1) = (2/√5, 0, -1/√5)
u2 = v2/||v2|| = (0, -1, 0)
u3 = v3/||v3|| = (1/(2√5))(2, 0, 4) = (1/√5, 0, 2/√5)           (80)

Exercise: Verify that the norms of these vectors are 1 and that they are orthogonal.

Example 12.2 The standard basis vectors in Rn form an orthonormal set with the standard inner product, e.g. e1 = (1, 0, 0), e2 = (0, 1, 0) and e3 =
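The normalisation step in example 12.1(b) is one division per vector (a sketch; NumPy's norm is the one induced by the standard inner product):

```python
import numpy as np

vs = [np.array([2.0, 0.0, -1.0]),
      np.array([0.0, -1.0, 0.0]),
      np.array([2.0, 0.0, 4.0])]

us = [v / np.linalg.norm(v) for v in vs]   # divide each vector by its norm

for i, u in enumerate(us):
    assert np.isclose(np.linalg.norm(u), 1.0)   # now unit norm
    for j in range(i + 1, len(us)):
        assert np.isclose(u @ us[j], 0.0)       # still orthogonal
```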

1) in R3 ... Let’s now take the inner product of both sides of (81) with any of the vectors. which gives us a way to simplify the left hand side of equation (82) Page 64 ... c2 .. let’s say v1 (c1 v1 + c2 v2 + .. vn } be the set of vectors in question.. Let’s recall the deﬁnition of linear independence: The vectors v1 .. vn are linearly independent if the only way to get c1 v1 + c2 v2 + . v1 = 0. We know they are orthogonal.. There is a special property of orthogonal/orthonormal sets that will come in very handy If S is an orthogonal set of vectors in an inner product space..cn equal to zero. .. + cn vn = 0 (81) is by having all the numbers c1 . Exercise: Compute the norms of these vectors and their pairwise inner products to show that they form an orthonormal set.. + cn vn ). . v2 . This is equivalent to saying that no vector can be a linear combination of the others.Engineering Analysis SESA 2021 (0... then S is also a set of linearly independent vectors How can we show this? Let S = {v1 .. 0. v1 (82) The inner product has to satisfy equation (72) (called “additivity”). . v2 ..

⟨c1 v1 + c2 v2 + ... + cn vn, v1⟩ = c1 ⟨v1, v1⟩ + c2 ⟨v2, v1⟩ + ... + cn ⟨vn, v1⟩
                                  = c1 ⟨v1, v1⟩                 (83)

What happened to all the terms after c1 ⟨v1, v1⟩? Remember that the set S = {v1, v2, ..., vn} is orthogonal. Therefore, the inner product of two distinct vectors is zero, so the only nonzero term on the right hand side of (83) is c1 ⟨v1, v1⟩. The right hand side of equation (82) is obviously zero. Therefore

c1 ⟨v1, v1⟩ = 0

Now ⟨v1, v1⟩ = ||v1||^2 > 0 unless v1 is the zero vector, which it isn't. Therefore, we must have c1 = 0. If we perform the same procedure with v2 instead of v1, we will get c2 = 0, and so on with all the other scalars. Therefore, the set S is linearly independent.

The great thing about having an orthogonal/orthonormal basis for a space V is that we can easily find the coordinates of any vector in V with respect to this basis. Remember that the coordinates are the numbers multiplying the basis vectors in the linear combination: if S = {v1, v2, ..., vn} is the orthogonal/orthonormal basis for V, then any vector u (in V) can be written as

u = c1 v1 + c2 v2 + ... + cn vn

Let's take the inner product of both sides with v1 (same as the procedure above):

⟨u, v1⟩ = ⟨c1 v1 + c2 v2 + ... + cn vn, v1⟩ = c1 ⟨v1, v1⟩ = c1 ‖v1‖²    (84)

using the definition of the norm. Since we know u and we know v1, we can find c1:

c1 = ⟨u, v1⟩/‖v1‖²

Similarly,

c2 = ⟨u, v2⟩/‖v2‖²,  c3 = ⟨u, v3⟩/‖v3‖²,  ...,  cn = ⟨u, vn⟩/‖vn‖²

Therefore, we can write the vector u as

u = (⟨u, v1⟩/‖v1‖²) v1 + (⟨u, v2⟩/‖v2‖²) v2 + ... + (⟨u, vn⟩/‖vn‖²) vn    (85)

If {v1, v2, ..., vn} is an orthonormal basis, then every ‖vi‖ = 1 and

u = ⟨u, v1⟩ v1 + ⟨u, v2⟩ v2 + ... + ⟨u, vn⟩ vn    (86)
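Equation (85) is easy to check numerically. Here is a minimal NumPy sketch; the particular vectors are our own illustrative choice (any orthogonal basis of R³ would do):

```python
import numpy as np

# An illustrative orthogonal basis for R^3 (pairwise dot products are zero)
v1 = np.array([2.0, -1.0, 0.0])
v2 = np.array([1.0, 2.0, -5.0])
v3 = np.array([1.0, 2.0, 1.0])
basis = [v1, v2, v3]

u = np.array([3.0, 7.0, -1.0])  # the vector whose coordinates we want

# Equation (85): c_i = <u, v_i> / ||v_i||^2
coords = [np.dot(u, v) / np.dot(v, v) for v in basis]

# Rebuilding u from its coordinates recovers the original vector
u_rebuilt = sum(c * v for c, v in zip(coords, basis))
print(np.allclose(u, u_rebuilt))  # True
```

Note that no system of equations had to be solved: each coordinate comes from a single inner product, which is exactly the convenience the text describes.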

13 Orthogonal projections

We now introduce the idea of "orthogonal projections". What is this exactly? Basically, what we do is drop a straight line from the point P in Figure 4 to the xy plane, landing at a point Q. The vector OQ is the orthogonal projection. We write it as proj_W u. Let's look at a simple example.

Figure 4: The orthogonal projection of a vector u = (2, 2, 1) in R³ on the xy plane W (example 13.1): proj_W u = OQ = (2, 2, 0) and v = (0, 0, 1).

Example 13.1 Let's take the vector u = (2, 2, 1) = 2e1 + 2e2 + e3 in R³. We can define a subspace W of R³ as the space of all vectors of the form q = (q1, q2, 0), where q1 and q2 are any scalars. This is nothing more than those vectors in R³ that lie in the xy plane; they are linear combinations of e1 and e2. The vector v joining Q to P has to be perpendicular to the xy plane, i.e., it is parallel to the z axis. The only possibility for this is v = (0, 0, 1). The orthogonal projection of u on the xy plane W is the vector proj_W u = (2, 2, 0). Notice that it lies in W (the xy plane),

and

v + proj_W u = (0, 0, 1) + (2, 2, 0) = (2, 2, 1) = u

What we have managed to do is decompose u into two parts, one in W and the other in W⊥. The two parts are orthogonal to each other. Why do we call it orthogonal? Well, there is the obvious reason that the line PQ we drop is perpendicular (orthogonal) to the xy plane. Notice also that the vector v is orthogonal to every vector q = (q1, q2, 0) in W (the xy plane):

⟨v, q⟩ = ⟨(0, 0, 1), (q1, q2, 0)⟩ = 0

Therefore, v is in the orthogonal complement W⊥ of W (see example 11.11). The orthogonal projection proj_W u, on the other hand, is in W. We can get these two parts by splitting the linear combination of orthogonal basis vectors:

u = 2e1 + 2e2 + e3,  where proj_W u = 2e1 + 2e2 is in W and v = e3 is in W⊥

Finally, we can see from Figure 4 that ‖v‖ is the shortest distance between P and the plane W. If we wanted to approximate the vector u using only the basis vectors in W (e1 and e2), proj_W u would be the best approximation.

Now this is all well and good, but what if we have a vector in a general Rⁿ space and we want to approximate it by a vector in a general subspace of Rⁿ? For instance, rather than choosing the subspace as the xy

plane, we could have chosen another plane through the origin, such as 2x + 3y − z = 0. We would then have to approximate the vector u by a linear combination of basis vectors that describe this plane in order to obtain the orthogonal projection. Here is the general statement.

Let u be a vector in Rⁿ endowed with the standard inner product, and let W be a subspace of Rⁿ with an orthogonal basis {v1, v2, ..., vk}, where k ≤ n. The orthogonal projection of u on W is given by

proj_W u = (⟨u, v1⟩/‖v1‖²) v1 + (⟨u, v2⟩/‖v2‖²) v2 + ... + (⟨u, vk⟩/‖vk‖²) vk    (87)

If {v1, v2, ..., vk} is an orthonormal basis for W, then

proj_W u = ⟨u, v1⟩ v1 + ⟨u, v2⟩ v2 + ... + ⟨u, vk⟩ vk    (88)

• proj_W u is in W and the vector v = (u − proj_W u) is in W⊥. The vectors proj_W u and v are, therefore, orthogonal.

• Of all the vectors in the subspace W, the vector proj_W u is the best approximation to u.

• The shortest "distance" between the vector u and the subspace W is the norm (magnitude) of v: ‖v‖ = shortest distance between u and W.

Most of these facts are suggested by example 13.1, but we haven't quite shown that they hold in the general case. Let's start with the claim that the vectors v = (u − proj_W u) and proj_W u are orthogonal. To simplify the notation
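Formula (87) translates directly into code. A small NumPy sketch, using the xy-plane example from earlier in this section (the helper name proj is ours, not a NumPy function):

```python
import numpy as np

def proj(u, basis):
    """Orthogonal projection of u onto span(basis), per equation (87).
    The vectors in `basis` must be mutually orthogonal and nonzero."""
    return sum((np.dot(u, v) / np.dot(v, v)) * v for v in basis)

# W = xy plane in R^3, spanned by the orthogonal vectors e1 and e2
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
u = np.array([2.0, 2.0, 1.0])

p = proj(u, [e1, e2])   # the projection of u on W
v = u - p               # the component of u in the orthogonal complement

print(p)                             # [2. 2. 0.]
print(np.dot(v, e1), np.dot(v, e2))  # 0.0 0.0 -- v is orthogonal to W
```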

let's assume that the basis {v1, v2, ..., vk} is orthonormal, i.e., all the vi's have ‖vi‖ = 1. Then, using (88),

⟨v, proj_W u⟩ = ⟨u − proj_W u, proj_W u⟩
             = ⟨u, proj_W u⟩ − ⟨proj_W u, proj_W u⟩
             = ⟨u, ⟨u, v1⟩ v1 + ... + ⟨u, vk⟩ vk⟩ − ‖proj_W u‖²
             = ⟨u, v1⟩² + ... + ⟨u, vk⟩² − (⟨u, v1⟩² + ... + ⟨u, vk⟩²)
             = 0    (89)

(the last step uses ‖proj_W u‖² = ⟨u, v1⟩² + ... + ⟨u, vk⟩², which follows from the orthonormality of the vi's), so they are indeed orthogonal. Exercise: Repeat this procedure for an orthogonal (but not orthonormal) basis for W.

How do we show that v is in W⊥? If it is, then v is orthogonal to every vector in W. Note that proj_W u clearly lies in W by the way it is defined (a linear combination of the basis vectors in W). Since every vector in W is a linear combination of the vectors in {v1, v2, ..., vk}, we just need to show that v is orthogonal to each of these basis vectors (why?). Again, let's assume they

are orthonormal. We choose any one of them, say v1, and take the inner product:

⟨v, v1⟩ = ⟨u − proj_W u, v1⟩ = ⟨u, v1⟩ − ⟨proj_W u, v1⟩
        = ⟨u, v1⟩ − ⟨⟨u, v1⟩ v1 + ⟨u, v2⟩ v2 + ... + ⟨u, vk⟩ vk, v1⟩
        = ⟨u, v1⟩ − ⟨u, v1⟩ ⟨v1, v1⟩ − ⟨u, v2⟩ ⟨v2, v1⟩ − ... − ⟨u, vk⟩ ⟨vk, v1⟩
        = ⟨u, v1⟩ − ⟨u, v1⟩ = 0    (90)

We can do the same with all the basis vectors. Exercise: Repeat this procedure for an orthogonal (but not orthonormal) basis for W.

Now onto the statements about "shortest distance" and "best approximation". In R² and R³ (as you can see in Figure 4), the vector v takes us from the point P to the closest point on the subspace W, because the shortest distance between two points is a straight line! It is essentially this concept that we want to generalise to higher dimensions. Let's restate clearly what we want to do: find the vector in W that gives us the best approximation to a general vector u in Rⁿ. We are claiming that this

vector is proj_W u. To proceed, we look at a familiar concept: the Pythagorean theorem for a right triangle, demonstrated in Figure 5.

Figure 5: Illustration of the Pythagorean theorem: c² = a² + b².

For two vectors a and b in R² or R³ that are at right angles (orthogonal), the Pythagorean theorem becomes |a + b|² = |a|² + |b|². For two orthogonal vectors a and b in a general inner product space, the equivalent theorem is

‖a + b‖² = ‖a‖² + ‖b‖²

Let's start by choosing any vector w in W that is NOT the same as proj_W u. We can write (a simple mathematical trick that will help us)

u − w = (u − proj_W u) + (proj_W u − w)

The vector (proj_W u − w) is a combination of (basis) vectors in W and so belongs to W itself. We already know that the vector v = u − proj_W u is in W⊥. Therefore (u − proj_W u) and (proj_W u − w) are orthogonal.

Putting a = (u − proj_W u) and b = (proj_W u − w) in this formula, we get

‖u − w‖² = ‖u − proj_W u‖² + ‖proj_W u − w‖² > ‖u − proj_W u‖²

because w ≠ proj_W u. Therefore

‖u − proj_W u‖ < ‖u − w‖ for all vectors w in W except proj_W u

In turn, this means that

• The shortest "distance" between the vector u and W is the norm (magnitude) of the vector v = u − proj_W u.

• proj_W u gives us the "best approximation" to u by a vector in W.

We have shown that, given any subspace W of Rⁿ for which we have an orthogonal basis, we can write any vector in Rⁿ as a sum of a vector in W and a vector in W⊥. We say, therefore, that Rⁿ is the direct sum of W and W⊥. This is written as

Rⁿ = W ⊕ W⊥

One final note. The dimension of Rⁿ has to be n. Therefore the dimensions (numbers of basis vectors) of W and W⊥ have to sum to n:

dim W + dim W⊥ = n

We have essentially partitioned Rⁿ into the two spaces W and W⊥.
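The best-approximation property can also be tested numerically: no vector in W should come closer to u than proj_W u. A sketch with an arbitrarily chosen subspace of R⁴ (the vectors and trial count are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# W = span{v1, v2}, a two-dimensional subspace of R^4 (v1 and v2 are orthogonal)
v1 = np.array([1.0, 1.0, 0.0, 0.0])
v2 = np.array([1.0, -1.0, 2.0, 0.0])
u = rng.normal(size=4)

# proj_W u, via equation (87)
p = sum((np.dot(u, v) / np.dot(v, v)) * v for v in (v1, v2))

# Any other vector w in W is at least as far from u as p is
for _ in range(1000):
    a, b = rng.normal(size=2)
    w = a * v1 + b * v2
    assert np.linalg.norm(u - p) <= np.linalg.norm(u - w) + 1e-12

print("proj_W u beat 1000 random vectors from W")
```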

14 The Gram-Schmidt process

The first important application of orthogonal projections is the Gram-Schmidt process. We will meet another in the next section. Suppose we have an arbitrary (non-orthogonal) basis for Rⁿ. The basis vectors are linearly independent, but we would prefer them to be orthogonal too. It turns out that this is always possible: given n linearly independent vectors for Rⁿ, we can turn them into an orthogonal basis. The way we do this is called the Gram-Schmidt process. We will illustrate the procedure using two examples. Throughout this section, the standard inner product on Rⁿ will be assumed.

Figure 6: An illustration of the Gram-Schmidt process in R² (see example 14.1). The new, orthogonal basis u1 = v1, u2 = v2 − proj_W v2 is constructed from the old basis v1, v2.

Example 14.1 Consider the basis S = {v1, v2} for R², where v1 = (3, 1) and v2 = (2, 2). This may not be a particularly convenient basis, unlike the standard basis, where the vectors e1 and e2 are perpendicular (orthogonal). We've seen

how easy it is in that case to write down the coordinates of a general vector. What if we could turn these linearly independent vectors into an orthogonal basis {u1, u2}? It would then resemble the standard basis, but with a different orientation. In fact we can! We could do this by simply looking at Figure 6 and making the simple observation that, in order for u2 to be orthogonal to u1 = v1, it must be of the form u2 = t(1, −3) for some number t. However, we want a systematic way of doing it, because there are generally many more than two basis vectors.

We start by putting u1 = v1. This will be our first orthogonal basis vector. We now need to construct a second vector that is orthogonal to u1. Let's form the subspace W = span{u1} of R²; W is the set of all linear combinations (in this case multiples) of u1. In the previous section we saw that any vector u in Rⁿ can be written as a sum of two vectors: (i) the projection proj_W u of u onto a subspace W (which has an orthogonal basis), and (ii) the vector u − proj_W u in W⊥. We can apply this information here by putting u = v2 and W = span{u1}, i.e., we project the vector v2 from the original basis S onto W. This orthogonal projection, proj_W v2, is shown in Figure 6. It is in the space W because it is a multiple of u1; it is nothing more than the component of v2 that points in the direction of u1. The vector v2 − proj_W v2 is orthogonal to every vector in W = span{u1}, in particular to u1. So we set u2 = v2 − proj_W v2 to get a vector orthogonal to u1 = v1. The formula for the projection is, by equation (87),

proj_W v2 = (⟨v2, u1⟩/‖u1‖²) u1    (only one basis vector u1 in W)    (91)

This gives

u2 = v2 − (⟨v2, u1⟩/‖u1‖²) u1 = (2, 2) − (8/10)(3, 1) = (−2/5, 6/5) = (2/5)(−1, 3)    (92)

Exercise: Check that u1 and u2 are orthogonal.

Example 14.2 Given the basis v1 = (2, −1, 0), v2 = (1, 0, −1) and v3 = (3, 7, −1), find an orthogonal basis {u1, u2, u3} for R³.

As in the last example, we set u1 = v1 and form the subspace W1 = span{u1} of R³. Then we project v2 on W1 to get the component of v2 in the direction of u1; the component of v2 in W1⊥ then gives us u2. With ⟨v2, u1⟩ = 2 and ‖u1‖² = 5,

u2 = v2 − proj_W1 v2 = v2 − (⟨v2, u1⟩/‖u1‖²) u1 = v2 − (2/5) u1 = (1/5, 2/5, −1) = (1/5)(1, 2, −5)    (93)

Now what? We repeat the previous steps. We want a vector orthogonal to both u1 and u2. The subspace W2 = span{u1, u2} of R³ consists of all linear combinations of u1 and u2. Our task is, therefore, to find a vector u3 that lies in W2⊥, the subspace of all vectors that are orthogonal to both u1 and u2. So let's project v3 on W2 to get a vector proj_W2 v3 in the subspace W2. Then the vector v = v3 − proj_W2 v3 lies in W2⊥. The vector proj_W2 v3 is again given by equation

(87), so u3 is

u3 = v3 − proj_W2 v3 = v3 − (⟨v3, u1⟩/‖u1‖²) u1 − (⟨v3, u2⟩/‖u2‖²) u2
   = v3 + (1/5) u1 − ((22/5)/(6/5)) u2 = (8/3)(1, 2, 1)    (94)

Exercise: Check that u1, u2 and u3 are mutually orthogonal.

Exercise: How do we know that the orthogonal set of vectors we have constructed in this example, {u1, u2, u3}, is actually a basis for R³? In other words, does this set of vectors span the whole of R³? HINT: Think about (i) the number of basis vectors required to span Rⁿ, and (ii) the relationship between linear independence and orthogonality.

In the two examples above we have developed a procedure for turning a general basis into an orthogonal basis. We now summarise the procedure.

Let v1, v2, ..., vn be a set of linearly independent vectors for Rⁿ. Then an orthogonal basis u1, u2, ..., un for Rⁿ can be found by the following Gram-Schmidt process:

Step 1. u1 = v1

Step 2. u2 = v2 − (⟨v2, u1⟩/‖u1‖²) u1

Step 3. u3 = v3 − (⟨v3, u1⟩/‖u1‖²) u1 − (⟨v3, u2⟩/‖u2‖²) u2

...

Step n. un = vn − (⟨vn, u1⟩/‖u1‖²) u1 − (⟨vn, u2⟩/‖u2‖²) u2 − ... − (⟨vn, un−1⟩/‖un−1‖²) un−1
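The steps above translate almost line for line into code. A minimal NumPy sketch (gram_schmidt is our own helper name), checked against example 14.2:

```python
import numpy as np

def gram_schmidt(vs):
    """Turn linearly independent vectors into an orthogonal basis by
    subtracting, from each v, its projections onto the u's built so far."""
    us = []
    for v in vs:
        u = v - sum((np.dot(v, ui) / np.dot(ui, ui)) * ui for ui in us)
        us.append(u)
    return us

# The basis from example 14.2
v1 = np.array([2.0, -1.0, 0.0])
v2 = np.array([1.0, 0.0, -1.0])
v3 = np.array([3.0, 7.0, -1.0])

u1, u2, u3 = gram_schmidt([v1, v2, v3])
# u1 = (2, -1, 0), u2 = (1/5)(1, 2, -5), u3 = (8/3)(1, 2, 1),
# and all pairwise dot products vanish (up to rounding)
```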

Note that to obtain an orthonormal basis from the new orthogonal basis, we simply divide each new member of the orthogonal basis by its norm.

Example 14.3 Convert the orthogonal basis found in example 14.2 into an orthonormal basis {w1, w2, w3}. The orthogonal basis is

u1 = (2, −1, 0),  u2 = (1/5)(1, 2, −5),  u3 = (8/3)(1, 2, 1)

We compute

‖u1‖ = √5,  ‖u2‖ = √30/5,  ‖u3‖ = 8√6/3

which yields

w1 = u1/‖u1‖ = (1/√5)(2, −1, 0)
w2 = u2/‖u2‖ = (1/√30)(1, 2, −5)
w3 = u3/‖u3‖ = (1/√6)(1, 2, 1)

Exercise: (a) Choose four linearly independent vectors v1, v2, v3, v4 in R⁴ and use the Gram-Schmidt process to construct an orthogonal basis for R⁴. (b) Convert the orthogonal basis found into an orthonormal basis.
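In code, the normalisation step is a one-liner; a short sketch using the orthogonal basis from example 14.2:

```python
import numpy as np

# Orthogonal basis from example 14.2
u1 = np.array([2.0, -1.0, 0.0])
u2 = np.array([1.0, 2.0, -5.0]) / 5.0
u3 = np.array([8.0, 16.0, 8.0]) / 3.0

# Divide each vector by its norm to obtain an orthonormal basis
ws = [u / np.linalg.norm(u) for u in (u1, u2, u3)]

print([round(float(np.linalg.norm(w)), 6) for w in ws])  # [1.0, 1.0, 1.0]
```

The vectors remain mutually orthogonal, since scaling a vector does not change the directions involved.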


15 Least squares approximations

We now come to a second important application of orthogonal projections. Remember the temperature data example 1.2, in which we wanted to fit a line to the data but ended up with a system of equations that had more equations than unknowns? This type of system is called “overdetermined”. The other way round, when we have more unknowns than equations, the system is called “underdetermined”. Such systems typically have no exact solution, i.e., they are inconsistent. Suppose that we have an inconsistent system of n equations in m unknowns. In matrix form, the system is:

Au = b

for an n × m matrix A and a vector b in Rⁿ. There is no solution u (in Rᵐ) to this equation, i.e., there is no vector u such that Au = b. Perhaps, however, we can look for a vector u such that Au will be close to b. To this end, let's define a residual r as follows:

r = b − Au

(95)

r is obviously a vector (both b and Au are vectors). It is a measure of how close a vector u will be to satisfying the equation. What we do is look for the vector u that makes the norm (magnitude) of r as small as possible. This leads to the least squares solution.



Given an inconsistent system Au = b, the vector u_l that makes ‖r‖ = ‖b − Au_l‖ as small as possible is called the least squares solution.

Okay, this has given us some sort of criterion, but how exactly do we find this vector u_l? Recall example 10.1, in which it was shown that the multiplication of a vector by a matrix results in a linear combination of the matrix column vectors, i.e., all outputs Au are in col(A), the column space. Now put W = col(A). Au will be in W for any u. Indeed, the set of outputs Au for all choices of u in Rᵐ will span W; any linear combination of the column vectors is possible for the right choice of u = (u1, u2, ..., um). Therefore:

range(A) = col(A) = W

At this point let's state the least squares problem in a different way: Given an inconsistent system Au = b for some b in Rⁿ, find the vector Au_l in W = col(A) that is the closest approximation to b, i.e., ‖b − Au_l‖ ≤ ‖b − Au‖ for all possible choices of Au.

Let's restate a result from section 13 on orthogonal projections. Suppose W is a subspace of Rⁿ and x is a vector in Rⁿ. The closest approximation to x by a vector in W is given by proj_W x, the orthogonal projection of x


on W. proj_W x is in W by definition and x − proj_W x is in W⊥. Let's put W = col(A); the candidate vectors in W are exactly the outputs Au. Then the closest approximation to b by a vector in W is proj_W b. This is the vector Au_l that we want:

Au_l = proj_W b

We could find Au_l this way and then solve Au_l = proj_W b for u_l, but there is a better way to solve the problem. The least squares solution u_l to the problem Au = b also satisfies the normal system:

AᵀA u_l = Aᵀ b

(96)

This system is always consistent. If the equation Ax = 0 has only the trivial solution x = 0 (so that AᵀA is invertible), the least squares problem has the unique solution

u_l = (AᵀA)⁻¹ Aᵀ b
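As a numerical sanity check, solving the normal system should agree with a library least squares routine. A NumPy sketch on a made-up overdetermined system (the sizes and data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 3))   # 6 equations, 3 unknowns: overdetermined
b = rng.normal(size=6)

# Normal system (96): A^T A u = A^T b
u_normal = np.linalg.solve(A.T @ A, A.T @ b)

# np.linalg.lstsq minimises ||b - Au|| directly
u_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(u_normal, u_lstsq))  # True
```

In practice, library routines based on orthogonal factorisations are preferred over forming AᵀA explicitly, since AᵀA can be poorly conditioned; for the small systems in these notes either route works.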

Before we move on to an example, let's see why the above statements are true. We've determined that Au_l = proj_W b, which is a vector in W = col(A). We can always find u_l = (u1, u2, ..., um), with the right choice of coordinates, such that Au_l gives us the projection vector we want. This means we always have a solution.


The residual, given by equation (95), satisfies

r = b − Au_l = b − proj_W b

The vector on the right-hand side is in W⊥ = col(A)⊥, as stated above. In example 11.12 we showed that for a matrix A, col(A)⊥ is the same as ker(Aᵀ), the null space of Aᵀ. So, for the least squares solution, the residual is in ker(Aᵀ), which means that Aᵀr = 0, i.e.,

Aᵀr = Aᵀ(b − Au_l) = 0

or

Aᵀb = AᵀA u_l

What about uniqueness? If we had two solutions u_l¹ and u_l², then we would have

Au_l¹ = Au_l²

or

A(u_l¹ − u_l²) = 0

But this has only the solution (u_l¹ − u_l²) = 0, so u_l¹ = u_l². In other words, we have contradicted the assumption that there were two different solutions, which means we can't have two solutions: the least squares solution is unique.

Example 15.1 Use a least squares approximation to find the equation of the line that will best approximate the points (x, y) = (−2, 65), (1, 20), (−7, 105) and (5, −34). The line will have the form y = ax + b. If we put each of the x and y values into y = ax + b, we will get 4 equations for the two unknowns a and b (clearly too many!). The system is overdetermined. It is written in matrix form as follows:


A u = b, i.e.,

[ −2  1 ] [ a ]   [  65 ]
[  1  1 ] [ b ] = [  20 ]
[ −7  1 ]         [ 105 ]
[  5  1 ]         [ −34 ]    (97)

The normal system (96) for the least squares solution is given by multiplying both sides by the transpose of A,

Aᵀ = [ −2  1  −7  5 ]
     [  1  1   1  1 ]

giving AᵀA u = Aᵀ b,    (98)

which leads to a much simpler equation:

[ 79  −3 ] [ a ]   [ −1015 ]
[ −3   4 ] [ b ] = [   156 ]    (99)

This is an easy system to solve. The answer is a = −3592/307 ≈ −11.70 and b = 9279/307 ≈ 30.22.

Exercise: Find the least squares solution to the following system:

[  2  −1   1 ] [ a ]   [ −4 ]
[  1  −5   2 ] [ b ] = [  2 ]
[ −3   1  −4 ] [ c ]   [  5 ]
[  1  −1   1 ]         [ −1 ]    (100)
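Returning to example 15.1, its arithmetic is easy to verify in code; the sketch below rebuilds the normal system (99) from the data points and solves it (you can adapt it to check your answer to the exercise):

```python
import numpy as np

# Data points from example 15.1
x = np.array([-2.0, 1.0, -7.0, 5.0])
y = np.array([65.0, 20.0, 105.0, -34.0])

A = np.column_stack([x, np.ones_like(x)])  # columns of A: x-values and 1s

lhs = A.T @ A   # [[79, -3], [-3, 4]], as in equation (99)
rhs = A.T @ y   # [-1015, 156]
a, b = np.linalg.solve(lhs, rhs)

print(round(a, 2), round(b, 2))  # -11.7 30.22
```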