# Matrices and Linear Algebra in Control Applications

Published by D.Viswanath on Dec 23, 2012
Copyright: Attribution Non-commercial

Vector spaces, matrices and determinants, differential equations, and linear transformations involving matrices and linear algebra are vital to understanding how a controlled system behaves under feedback, and they help in designing feedback parameters so as to achieve system stability with the desired output.
# Linear Algebra and Control Applications

Notes

Synopsis
The modern control approach, based on the state space method, requires extensive use of matrices and a deep understanding of linear algebra.


Contents

1 Introduction
  1.1 Overview
  1.2 Literature Survey
  1.3 Linear Algebra Basics
    1.3.1 Field
    1.3.2 Vector Space
  1.4 Linear Combination and Vector Spaces

2 Matrix Algebra
  2.1 Introduction
  2.2 Matrices
  2.3 Matrices and Linear Equations
  2.4 Linear Independence and Matrices
    2.4.1 Linear Dependence/Independence of Vectors and Determinants
  2.5 Vector Span and Matrices
    2.5.1 Column Space and Row Space
  2.6 Basis and Matrices
    2.6.1 Pivot Rows and Columns of a Matrix and Basis
  2.7 Dimension, Rank and Matrices
    2.7.1 Dimension of Whole Matrix
    2.7.2 Dimension of Upper Triangular Matrix
    2.7.3 Dimension of Diagonal Matrix
    2.7.4 Dimension of Symmetric Matrix
  2.8 The Null Space and the Four Subspaces of a Matrix
  2.9 The Complete Solution to Rx = 0
    2.9.1 Row Space
    2.9.2 Column Space
    2.9.3 Dimension of Null Space
    2.9.4 The Complete Solution to Rx = 0
  2.10 The Complete Solution to Ax = 0
  2.11 The Complete Solution to Ax = b
    2.11.1 Full Row Rank (r = m), Full Column Rank (r = n)
    2.11.2 Full Row Rank (r = m), Column Rank Less (r < n)
    2.11.3 Full Column Rank (r = n), Row Rank Less (r < m)
    2.11.4 Row Rank and Column Rank Less (r < m, r < n)
  2.12 Orthogonality of the Four Subspaces
  2.13 Projections
    2.13.1 Projection Onto a Line
    2.13.2 Projection with Trigonometry
    2.13.3 Projection Onto a Subspace
  2.14 Least Squares Solution to Ax = b

3 Matrices and Determinants
  3.1 Introduction
  3.2 Determinant of a Square Matrix

4 Solution to Dynamic Problems with Eigen Values and Eigen Vectors
  4.1 Introduction
  4.2 Linear Differential Equations and Matrices

5 Matrices and Linear Transformations
  5.1 Introduction
  5.2 Linear Transformations

## Chapter 1: Introduction

### 1.1 Overview

Control systems engineering can be divided into two types of approaches, namely the classical or traditional approach, wherein transfer functions (ratios of Laplace transforms of output to input) are used, and the modern control approach, wherein state space methods are used. The state space approach has several advantages over the transfer function approach. Chief among them is that state space methods are computationally straightforward and simple to use in multivariable control problems as well as non-linear control problems. State space methods use matrices extensively and require a thorough understanding of linear algebra.

### 1.2 Literature Survey

The books Linear Algebra and its Applications and Introduction to Linear Algebra by Gilbert Strang, together with his video lectures on the MIT website, give a very good insight into linear algebra theory. These notes are based on the above books and video lectures.

### 1.3 Linear Algebra Basics

In the study of linear systems of equations, the vector space over a field is an important definition and stems from linear algebra.

#### 1.3.1 Field

A field, F, is a set of elements called scalars, together with the two operations of addition and multiplication, for which the following axioms hold:

(1) For any pair of elements a, b ∈ F, there is a unique sum a + b ∈ F and a unique product a·b ∈ F. Further, by the law of commutativity, a + b = b + a and a·b = b·a.

(2) For any three elements a, b and c ∈ F, the associative laws a + (b + c) = (a + b) + c and a(bc) = (ab)c, as well as the distributive law a(b + c) = ab + ac, hold good.

(3) F contains the zero element, denoted by 0, and the unity element, denoted by 1, such that a + 0 = a and a·1 = a for every a ∈ F.

(4) F contains the additive inverse, i.e., for every a ∈ F there exists an element b ∈ F such that a + b = 0.

(5) F contains the multiplicative inverse, i.e., for every non-zero a ∈ F there exists an element b ∈ F such that a·b = 1.

#### 1.3.2 Vector Space

A vector space (or linear vector space, or linear space) over the field F is a set V of elements called vectors (also denoted V(F)), together with two operations of vector addition and scalar multiplication, for which the following axioms hold:

(1) For any pair of vectors x, y ∈ V, there is a unique sum x + y ∈ V. Further, by the law of commutativity, x + y = y + x.

(2) For any vector x ∈ V and scalar α ∈ F, there is always a unique product αx ∈ V.

(3) For any three vectors x, y and z ∈ V, the associative law x + (y + z) = (x + y) + z holds good.

(4) For any two vectors x and y ∈ V and scalar α ∈ F, the distributive law α(x + y) = αx + αy holds good.

(5) For any two scalars α and β ∈ F and vector x ∈ V, the associative law α(βx) = (αβ)x and the distributive law (α + β)x = αx + βx hold good.

(6) V contains the zero or null vector, denoted by 0, such that x + 0 = x for every x ∈ V.

(7) F contains the unity scalar, denoted by 1, such that x·1 = x for every x ∈ V.

(8) For every x ∈ V, there exists an element −x ∈ V such that x + (−x) = 0.

### 1.4 Linear Combination and Vector Spaces

Linear algebra is based on two operations on vectors which define the vector space: vector addition and scalar multiplication. If c and d are scalars ∈ F and u and v are two vectors ∈ V, then the linear combination of u and v is defined as cu + dv.
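The two defining operations can be illustrated numerically. A minimal sketch with numpy (the vectors and scalars here are arbitrary illustrative values, not taken from the notes):

```python
import numpy as np

# Two arbitrary vectors in R^3 and two scalars (illustrative values only)
u = np.array([1.0, -1.0, 0.0])
v = np.array([0.0, 1.0, -1.0])
c, d = 2.0, 3.0

# The linear combination cu + dv uses only the two vector-space operations:
# scalar multiplication and vector addition.
w = c * u + d * v
print(w)  # the combination stays inside the same space R^3
```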

## Chapter 2: Matrix Algebra

### 2.1 Introduction

In this chapter, an understanding of matrices is developed from linear algebra basics.

### 2.2 Matrices

We can form linear combinations of vectors using matrices. For example, let three vectors u, v and w be given as

$$u = \begin{bmatrix}1\\-1\\0\end{bmatrix}, \quad v = \begin{bmatrix}0\\1\\-1\end{bmatrix}, \quad w = \begin{bmatrix}0\\0\\1\end{bmatrix}$$

Their linear combination in three-dimensional space can be given by cu + dv + ew, i.e.,

$$c\begin{bmatrix}1\\-1\\0\end{bmatrix} + d\begin{bmatrix}0\\1\\-1\end{bmatrix} + e\begin{bmatrix}0\\0\\1\end{bmatrix} = \begin{bmatrix}c\\d-c\\e-d\end{bmatrix} \quad (2.1)$$

The above linear combination can be rewritten using matrices as

$$\begin{bmatrix}1&0&0\\-1&1&0\\0&-1&1\end{bmatrix}\begin{bmatrix}c\\d\\e\end{bmatrix} = \begin{bmatrix}c\\d-c\\e-d\end{bmatrix} \quad (2.2)$$

i.e., the matrix times the vector can be given as

$$Ax = \begin{bmatrix}u & v & w\end{bmatrix}\begin{bmatrix}c\\d\\e\end{bmatrix} = cu + dv + ew \quad (2.3)$$

where the scalars c, d and e are defined as the components of the vector x. Thus the rewriting of the linear combination in matrix form has brought about a crucial change in viewpoint, explained as follows:

(a) At first, the scalars c, d and e were multiplying the vectors u, v and w to form the linear combination cu + dv + ew.

(b) In the matrix form, the matrix A acts on the vector x. The result of the matrix multiplication Ax can be defined as the column vector b, expressed as

$$b = Ax \quad (2.4)$$

Linear combinations are the key to linear algebra, and the output Ax is a linear combination of the columns of A. An m × n matrix can be said to be made up of m row vectors of n elements each, or n column vectors of m elements each. Thus matrices can be said to be made up of row vectors and column vectors.
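This change in viewpoint is easy to check numerically. A small sketch with numpy, using the matrix A of the example above (the test values for c, d, e are arbitrary):

```python
import numpy as np

# Matrix A whose columns are the vectors u, v, w of the example
A = np.array([[1, 0, 0],
              [-1, 1, 0],
              [0, -1, 1]], dtype=float)
x = np.array([2.0, 3.0, 4.0])  # c, d, e (arbitrary test values)

# Ax computed two ways: as a matrix-vector product, and as the
# linear combination c*u + d*v + e*w of the columns of A.
b_matrix = A @ x
b_combo = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]
print(b_matrix, b_combo)  # identical results
```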

### 2.3 Matrices and Linear Equations

Matrices and linear algebra concepts can be very useful in solving a system of linear equations. For the example discussed above, where

$$Ax = \begin{bmatrix}u & v & w\end{bmatrix}\begin{bmatrix}c\\d\\e\end{bmatrix} = b \quad (2.6)$$

we were interested in computing the linear combination cu + dv + ew to find b. From the linear algebra point of view, we will consider c, d and e to represent the elements x1, x2 and x3 of the column vector x, and consider the problem to be that of finding which combination of u, v and w produces a particular vector b, i.e., finding the input x that gives the desired output b = Ax. Thus Ax = b can be seen as a system of linear equations that has to be solved for x1, x2 and x3.

In the case of linear equations, the columns of a matrix A can be considered as column vectors. If x = [x1 x2 ... xn]ᵀ, then

$$A = \begin{bmatrix}v_1 & v_2 & \dots & v_n\end{bmatrix} = \begin{bmatrix}a_{11} & a_{12} & \dots & a_{1n}\\ a_{21} & a_{22} & \dots & a_{2n}\\ \vdots & & & \vdots\\ a_{n1} & a_{n2} & \dots & a_{nn}\end{bmatrix} \quad (2.8)$$

### 2.4 Linear Independence and Matrices

A set of vectors v1, v2, ..., vn is said to be linearly independent if their linear combination α1v1 + α2v2 + ... + αnvn, where α1, α2, ..., αn are scalars, equals the zero vector if and only if α1 = α2 = ... = αn = 0. If even one αi, i = 1, ..., n, is non-zero, then the set is linearly dependent.

Thus v1, v2, ..., vn are linearly independent if x1v1 + x2v2 + ... + xnvn = 0 holds if and only if x1 = x2 = ... = xn = 0. In matrix terms, the columns of A are linearly independent when the only solution to Ax = 0 is x = 0.

#### 2.4.1 Linear Dependence/Independence of Vectors and Determinants

The linear dependence or independence of the columns of a matrix is an important property, and it can be verified by calculating the determinant of the matrix. If the determinant |A| is not zero, i.e., matrix A is invertible, then the column vectors are linearly independent: Ax = 0 with |A| ≠ 0 implies x = 0. If the determinant |A| is zero, i.e., matrix A is singular, then the column vectors are linearly dependent: Ax = 0 has non-zero solutions x ≠ 0.

### 2.5 Vector Span and Matrices

A set of vectors is said to span a vector space or linear space if their linear combinations fill that space. In other words, if the given set of vectors is enough to produce the rest of the vectors in that space, then that set is said to span its vector space. The set of vectors may be dependent or independent.

#### 2.5.1 Column Space and Row Space

The columns of a matrix span its column space, i.e., the columns are considered as vectors, and the column space is the vector space consisting of all linear combinations of these column vectors. In a similar manner, the rows of a matrix span its row space. The row space of the matrix A is the column space of its transpose, Aᵀ.

### 2.6 Basis and Matrices

A basis for a vector space is a set of vectors with two properties:

(a) The basis vectors are linearly independent.

(b) They span the vector space.
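The determinant test can be sketched numerically. The two matrices below are illustrative choices, not from the notes: B's second column is twice its first, so B is singular:

```python
import numpy as np

# Invertible matrix: nonzero determinant, so its columns are independent
A = np.array([[1.0, 0.0],
              [2.0, 3.0]])
# Singular matrix: column 2 = 2 * column 1, so determinant is zero
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])

det_A = np.linalg.det(A)   # nonzero -> columns of A independent
det_B = np.linalg.det(B)   # zero    -> columns of B dependent
print(det_A, det_B)
```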

This combination of properties is fundamental to linear algebra. If the set of vectors is exactly the number required to produce the rest of the vectors in the space (neither too many nor too few), then they form a basis. Every vector, say v, in the space is a combination of the basis vectors. Most important is that the combination of basis vectors that forms v is unique, since the basis vectors v1, v2, ..., vn are independent and span the space. Thus there is one and only one way to write v as a combination of the basis vectors.

The columns of every invertible n × n matrix form a basis for ℝⁿ. In other words, the vectors v1, v2, ..., vn are a basis of ℝⁿ exactly when they are the columns of an invertible n × n matrix. Thus ℝⁿ has infinitely many different bases. The columns of the n × n identity matrix give the standard basis for ℝⁿ.

#### 2.6.1 Pivot Rows and Columns of a Matrix and Basis

The pivot columns of matrix A are a basis for its column space. The pivot rows of A are a basis for its row space. The pivot rows and columns of the echelon form R (reduced form) also form a basis for its respective row space and column space.

### 2.7 Dimension, Rank and Matrices

The dimension of a vector space is the number of basis vectors in that space. The rank of a matrix is the dimension of its column space. The rank is also the dimension of the row space of that matrix.

#### 2.7.1 Dimension of Whole Matrix

The dimension of the whole n × n matrix space is n². For a 2 × 2 matrix, consider the basis

$$A_1 = \begin{bmatrix}1&0\\0&0\end{bmatrix},\ A_2 = \begin{bmatrix}0&1\\0&0\end{bmatrix},\ A_3 = \begin{bmatrix}0&0\\1&0\end{bmatrix},\ A_4 = \begin{bmatrix}0&0\\0&1\end{bmatrix} \quad (2.9)$$

For the example basis in eqn. (2.9) above, the dimension of the whole 2 × 2 matrix space is 4.

#### 2.7.2 Dimension of Upper Triangular Matrix

The dimension of the subspace of upper triangular matrices is ½n² + ½n. For the example basis in eqn. (2.9) above, A1, A2 and A4 form a basis for upper triangular matrices, and hence the dimension is 3.

#### 2.7.3 Dimension of Diagonal Matrix

The dimension of the subspace of diagonal matrices is n. For the example basis in eqn. (2.9) above, A1 and A4 form a basis for diagonal matrices. Hence the dimension is 2.

#### 2.7.4 Dimension of Symmetric Matrix

The dimension of the subspace of symmetric matrices is ½n² + ½n. For the example basis in eqn. (2.9) above, A1, A4 and A2 + A3 form a basis for symmetric matrices. Hence the dimension is 3.

### 2.8 The Null Space and the Four Subspaces of a Matrix

The null space of a given matrix A consists of all solutions to Ax = 0. It is denoted by N(A). The m × n matrix A can be square or rectangular. One immediate solution to Ax = 0 is x = 0. For invertible matrices, this is the only solution. For other matrices that are not invertible, i.e., singular matrices, there are non-zero solutions to Ax = 0. Each solution x belongs to the null space.

The solution vector x has n components in this case; thus the solutions are vectors in ℝⁿ, and so the null space N(A) is a subspace of ℝⁿ.

For the m × n matrix A, the row space, denoted by C(Aᵀ), is a subspace of ℝⁿ; it is the column space of the transpose Aᵀ. The column space of A, denoted by C(A), is a subspace of ℝᵐ. The left null space consists of all solutions to Aᵀy = 0. It is denoted by N(Aᵀ). The left null space is a subspace of ℝᵐ, since the solution vector y has m components in this case. The matrices A and Aᵀ are usually different; their column spaces and null spaces are also different.

### 2.9 The Complete Solution to Rx = 0

The matrix A can generally be reduced to its row echelon form R, so that the four subspaces can be easily identified. This is because the dimensions of the four subspaces are the same for A and R. The basis for each subspace is found, and its dimension is checked, as follows. Consider the following example of a 3 × 5 matrix in reduced form R:

$$R = \begin{bmatrix}1&2&5&0&7\\0&0&0&1&3\\0&0&0&0&0\end{bmatrix} \quad (2.10)$$

The pivot rows in the above matrix are 1 and 2, and the pivot columns are 1 and 4. Thus the rank of this matrix R is 2, since there are two pivots, and the two pivots form a 2 × 2 identity matrix.

#### 2.9.1 Row Space

The row space of the above matrix R contains combinations of all three rows. However, the third row contains only zeros. The first two non-zero rows are independent and form a basis, since they span the row space C(Rᵀ). The dimension of the row space, and hence its rank, is 2.
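The rank of the example matrix R can be confirmed directly with numpy:

```python
import numpy as np

# The reduced matrix R of eqn (2.10)
R = np.array([[1, 2, 5, 0, 7],
              [0, 0, 0, 1, 3],
              [0, 0, 0, 0, 0]], dtype=float)

rank = np.linalg.matrix_rank(R)
print(rank)  # matches the number of pivots
```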

#### 2.9.2 Column Space

The column space of the above matrix R contains combinations of all five columns. However, the pivot columns, 1 and 4, form a basis for the column space C(R). From inspection of the matrix R, it is found that:

(a) Columns 1 and 4 are linearly independent and form a 2 × 2 identity matrix when placed together. Hence they are independent and form a basis for the column space C(R). Also, no combination of columns 1 and 4 gives a zero column.

(b) Column 2 is two times column 1.

(c) Column 3 is five times column 1.

(d) Column 5 is seven times column 1 plus three times column 4.

All other columns are thus linear combinations of these two columns, so columns 1 and 4 span the column space. The dimension of the column space is the rank, 2. This is the same as the dimension of the row space, since both the pivot rows and the pivot columns form a 2 × 2 identity matrix whose rank is 2.

#### 2.9.3 Dimension of Null Space

In the above example, the number of columns (n) is 5 and the rank (r) is 2. Here the columns 2, 3 and 5 are free, so there are three free variables, which yield three special solutions to Rx = 0. The null space therefore has dimension n − r = 5 − 2 = 3. In general, the null space N(R) has dimension n − r.

The solution vector satisfying Rx = 0 corresponding to free column 2 is (−2, 1, 0, 0, 0); this is the special solution for x2 = 1. The solution vector corresponding to free column 3 is (−5, 0, 1, 0, 0); this is the special solution for x3 = 1. The solution vector corresponding to free column 5 is (−7, 0, 0, −3, 1); this is the special solution for x5 = 1.
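The three special solutions can be verified numerically; each one, and any combination of them, must satisfy Rx = 0:

```python
import numpy as np

R = np.array([[1, 2, 5, 0, 7],
              [0, 0, 0, 1, 3],
              [0, 0, 0, 0, 0]], dtype=float)

# The three special solutions, one per free column (2, 3 and 5)
s2 = np.array([-2, 1, 0, 0, 0], dtype=float)
s3 = np.array([-5, 0, 1, 0, 0], dtype=float)
s5 = np.array([-7, 0, 0, -3, 1], dtype=float)

# Each special solution solves Rx = 0 ...
for s in (s2, s3, s5):
    print(R @ s)  # zero vector each time
# ... and so does any linear combination of them (coefficients arbitrary)
combo = 2 * s2 - 1 * s3 + 4 * s5
print(R @ combo)  # still the zero vector
```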

(e) Thus the null space consists of the above three special solution vectors, i.e.,

$$N(R):\quad \begin{bmatrix}-2\\1\\0\\0\\0\end{bmatrix},\ \begin{bmatrix}-5\\0\\1\\0\\0\end{bmatrix},\ \begin{bmatrix}-7\\0\\0\\-3\\1\end{bmatrix} \quad (2.11)$$

These solution vectors are independent and hence form a basis. All solutions to Rx = 0 are linear combinations of these three vectors.

#### 2.9.4 The Complete Solution to Rx = 0

The complete solution of Rx = 0 for the given example matrix R can be calculated in two steps:

(i) If only the pivot columns 1 and 4 of the matrix R, which are independent, are considered, the solution vector is unique and is given by x1 = x2 = ... = x5 = 0.

(ii) For the column vectors 2, 3 and 5, which are dependent (linear combinations of columns 1 and/or 4), there can be an infinite number of solutions. The three solutions which form a basis for satisfying Rx = 0 are found as follows:

(aa) If x2 = 1 is chosen, then x1 = −2 and x3 = x4 = x5 = 0. Thus all linear combinations which satisfy x1 = −2x2 and x3 = x4 = x5 = 0 are solutions to Rx = 0. This corresponds to

$$x = \begin{bmatrix}-2\\1\\0\\0\\0\end{bmatrix} \quad (2.12)$$

(ab) If x3 = 1 is chosen, then x1 = −5 and x2 = x4 = x5 = 0. Thus all linear combinations which satisfy x1 = −5x3 and x2 = x4 = x5 = 0 are solutions to Rx = 0. This corresponds to

$$x = \begin{bmatrix}-5\\0\\1\\0\\0\end{bmatrix} \quad (2.13)$$

(ac) If x5 = 1 is chosen, then x1 = −7, x4 = −3 and x2 = x3 = 0. Thus all linear combinations which satisfy x1 = −7x5, x4 = −3x5 and x2 = x3 = 0 are solutions to Rx = 0. This corresponds to

$$x = \begin{bmatrix}-7\\0\\0\\-3\\1\end{bmatrix} \quad (2.14)$$

### 2.10 The Complete Solution to Ax = 0

The dimensions of the four subspaces for a matrix A are the same as those of its reduced form R. A can be reduced to R through an elimination matrix E. This invertible elimination matrix E is a product of elementary matrices, such that R = EA and A = E⁻¹R. For the reduced form R given by

$$R = \begin{bmatrix}1&2&5&0&7\\0&0&0&1&3\\0&0&0&0&0\end{bmatrix} \quad (2.15)$$

the matrix A can be chosen such that it reduces to R as

$$A = \begin{bmatrix}1&2&5&0&7\\0&0&0&1&3\\2&4&10&1&17\end{bmatrix} \quad (2.16)$$

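The factorization R = EA can be checked numerically. The notes do not give E explicitly; the E below is one valid choice, derived by hand from the observation that the third row of A equals 2·(row 1) + (row 2):

```python
import numpy as np

A = np.array([[1, 2, 5, 0, 7],
              [0, 0, 0, 1, 3],
              [2, 4, 10, 1, 17]], dtype=float)
R = np.array([[1, 2, 5, 0, 7],
              [0, 0, 0, 1, 3],
              [0, 0, 0, 0, 0]], dtype=float)

# One valid elimination matrix (hand-derived, an assumption of this sketch):
# subtracting 2*(row 1) + (row 2) from row 3 zeroes the third row.
E = np.array([[1, 0, 0],
              [0, 1, 0],
              [-2, -1, 1]], dtype=float)

print(np.allclose(E @ A, R))                  # R = EA
print(np.allclose(np.linalg.inv(E) @ R, A))   # A = E^{-1} R
```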
It can be seen from the above matrices A and R that the following properties hold:

(a) A has the same row space as R. This is because every row of A is a combination of the rows of R: elimination changes rows, but not row spaces. Thus the dimension is the same and the basis is the same.

(b) The matrix A may not have the same column space as R. This is because columns of R may end in zeros while the corresponding columns of A may not. However, the number of pivot columns (which are independent) is the same for A and R, so the dimensions of the two column spaces are the same even though the spaces themselves differ. The pivot columns of A are a basis for its column space.

(c) The matrix A has the same null space as R, and hence the same dimension and the same basis. This is because the elimination steps that reduce A to R do not change the solutions of Ax = 0.

(d) The left null space of A, i.e., the null space of Aᵀ, has dimension m − r, the same as the left null space of R, since the column spaces of A and R have the same dimension r.

For the m × n matrix A, if the column space has dimension, say, r, then the dimension of its null space is n − r. Thus the whole space is ℝⁿ, such that r + (n − r) = n. Similarly, for the transpose of A, i.e., the n × m matrix Aᵀ, the column space has the same dimension r, while the dimension of its null space is m − r. Thus the whole space is ℝᵐ, such that r + (m − r) = m. For every matrix, "the number of independent rows equals the number of independent columns."

The Fundamental Theorem of Linear Algebra for an m × n matrix A is given as follows: "The column space and row space both have dimension r, for A as well as Aᵀ. The null spaces have dimensions n − r for A and m − r for Aᵀ."

### 2.11 The Complete Solution to Ax = b

The solution of Ax = b when b = 0 was dealt with in the previous sections. The elimination operation reduced the matrix A to its reduced form R and converted the problem to solving Rx = 0.

The free variables were assigned the values one and zero, so that the pivot variables were found by back substitution. Since b = 0, the solution x was in the null space of A.

When the right-hand side b of Ax = b is not zero, the solution can be separated into two parts as x = xp + xn, where x is known as the complete solution, xp is known as the particular solution, and xn is known as the null space solution. There are four possibilities for the complete solution of Ax = b, depending on the rank r. They are discussed as follows, with the consideration that A is an m × n non-zero matrix.

#### 2.11.1 Full Row Rank (r = m), Full Column Rank (r = n)

This condition occurs when A is a non-singular square matrix. Thus A is invertible, and Ax = b has exactly one solution.

Example 1. Let

$$A = \begin{bmatrix}1&0&0\\0&2&0\\0&1&1\end{bmatrix} \quad (2.17)$$

and

$$b = \begin{bmatrix}1\\6\\1\end{bmatrix} \quad (2.18)$$

A is a non-singular matrix, since its determinant |A| = 2 ≠ 0. Thus the solution can be found from x = A⁻¹b as

$$x = \begin{bmatrix}1&0&0\\0&0.5&0\\0&-0.5&1\end{bmatrix}\begin{bmatrix}1\\6\\1\end{bmatrix} = \begin{bmatrix}1\\3\\-2\end{bmatrix} \quad (2.19)$$

Thus the solution is unique (the only solution), given by x = [1 3 −2]ᵀ. The column vector b is a linear combination of one or more column vectors of the matrix A. The null space N(A) contains the zero vector only, since x = 0 is the only solution when b = 0; hence the complete solution is x = xp + 0.
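Example 1 can be reproduced with numpy's linear solver:

```python
import numpy as np

A = np.array([[1, 0, 0],
              [0, 2, 0],
              [0, 1, 1]], dtype=float)
b = np.array([1, 6, 1], dtype=float)

# |A| = 2 != 0, so A is invertible and the solution is unique
print(np.linalg.det(A))
x = np.linalg.solve(A, b)
print(x)  # the unique solution of eqn (2.19)
```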

#### 2.11.2 Full Row Rank (r = m), Column Rank Less (r < n)

In this case, Ax = b has an infinite number of solutions. Every matrix A with full row rank, i.e., r = m, satisfies the following properties:

(a) All rows have pivots, and the reduced form R has no zero rows.

(b) There is a solution to Ax = b for any and every right-hand side b.

(c) The null space N(A) contains the zero vector x = 0 and also n − r = n − m special solution vectors such that Ax = 0. These n − m non-zero vectors form a basis of the null space, and their linear combinations give an infinite number of solution vectors.

Example 2. Let

$$A = \begin{bmatrix}1&5&0&3\\0&0&1&2\\1&5&1&5\end{bmatrix} \quad (2.20)$$

and

$$b = \begin{bmatrix}1\\3\\4\end{bmatrix} \quad (2.21)$$

To solve this easily, the system of equations Ax = b is reduced to a simpler system Rx = d by using the augmented matrix [A b] (wherein b is added as an extra column to the matrix A) and applying elimination steps to this augmented matrix. For the above system, the augmented matrix is given by

$$[A\ b] = \begin{bmatrix}1&5&0&3&1\\0&0&1&2&3\\1&5&1&5&4\end{bmatrix} \quad (2.22)$$

Applying the elimination step R3 → R3 − R1 gives

$$[\hat{R}\ \hat{b}] = \begin{bmatrix}1&5&0&3&1\\0&0&1&2&3\\0&0&1&2&3\end{bmatrix} \quad (2.23)$$

Applying the elimination step R3 → R3 − R2 gives

$$[R\ d] = \begin{bmatrix}1&5&0&3&1\\0&0&1&2&3\\0&0&0&0&0\end{bmatrix} \quad (2.24)$$

Let the solution vector be x = [x1 x2 x3 x4]ᵀ. The pivot columns of R are 1 and 3, while the free columns are 2 and 4. Selecting the variables against the free columns, x2 and x4, to be equal to zero, back substitution gives x3 = 3 and x1 = 1. Thus the particular solution of Ax = b for the given b is xp = [1 0 3 0]ᵀ.

However, since the rank of the matrix A is 2 while the number of columns is 4, the null space N(A) contains n − r = 4 − 2 = 2 special solution vectors apart from the zero vector. For the given Ax = b, the null space is found to contain the following vectors, which form a basis, obtained by reversing the signs of the entries 5, 3 and 2 in the free columns of R:

$$(x_2)_n = \begin{bmatrix}-5\\1\\0\\0\end{bmatrix}, \qquad (x_4)_n = \begin{bmatrix}-3\\0\\-2\\1\end{bmatrix} \quad (2.25)$$

These two vectors form the basis of the null space, and their linear combinations give an infinite number of solutions to Ax = b. Thus the complete solution of Ax = b in this case is given by x = xp + xn, where xp is the particular solution which solves Axp = b, and xn comprises the n − r special solutions which solve Axn = 0. For the given example, the complete solution is

$$x = \begin{bmatrix}1\\0\\3\\0\end{bmatrix} + x_2\begin{bmatrix}-5\\1\\0\\0\end{bmatrix} + x_4\begin{bmatrix}-3\\0\\-2\\1\end{bmatrix} \quad (2.26)$$

Geometrically, Ax = b comprises three planes in the x1x2x3x4 space. The first
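The structure of the complete solution in eqn (2.26) can be verified numerically: every choice of the free variables x2 and x4 must produce a vector satisfying Ax = b (the sample values below are arbitrary):

```python
import numpy as np

A = np.array([[1, 5, 0, 3],
              [0, 0, 1, 2],
              [1, 5, 1, 5]], dtype=float)
b = np.array([1, 3, 4], dtype=float)

xp = np.array([1, 0, 3, 0], dtype=float)    # particular solution
n1 = np.array([-5, 1, 0, 0], dtype=float)   # special solutions
n2 = np.array([-3, 0, -2, 1], dtype=float)  # spanning N(A)

# Any choice of the free variables x2, x4 gives a solution of Ax = b
for x2, x4 in [(0.0, 0.0), (1.0, -2.0), (3.5, 7.0)]:
    x = xp + x2 * n1 + x4 * n2
    print(np.allclose(A @ x, b))  # True for every choice
```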

and second planes are not parallel; they intersect along a line. Adding the null space vectors xn moves the solution along such lines, and hence within the plane containing them.

#### 2.11.3 Full Column Rank (r = n), Row Rank Less (r < m)

In this case, Ax = b has nil or one solution. Every matrix A with full column rank, i.e., r = n, satisfies the following properties:

(a) All columns of the matrix A are pivot columns.

(b) Thus the null space contains the zero vector x = 0 only.

(c) If Ax = b has a solution, then it has only one solution, and x = xp + xn with xn = 0 gives the complete solution. Otherwise there is no solution.

Example 3. TO BE CONTINUED.

#### 2.11.4 Row Rank and Column Rank Less (r < m, r < n)

In this case, Ax = b has nil solution or an infinite number of solutions. In practice, this situation rarely occurs.

### 2.12 Orthogonality of the Four Subspaces

Two vectors are said to be orthogonal to each other if their dot product is zero, i.e., v·w = 0, or vᵀw = 0. If v and w are the sides of a right-angled triangle, then

$$\|v\|^2 + \|w\|^2 = \|v + w\|^2$$

The right-hand side of the above expression can also be written as

$$\|v + w\|^2 = (v + w)^T(v + w)$$

This equation will equal

$$\|v + w\|^2 = v^Tv + w^Tw$$

if and only if vᵀw = wᵀv = 0, which gives the proof of orthogonality.

The row space of A is perpendicular to the null space of A, N(A), while the column space is perpendicular to the null space of Aᵀ, N(Aᵀ). Thus the row space C(Aᵀ) and the null space N(A) are orthogonal subspaces of ℝⁿ, while the column space C(A) and the null space of Aᵀ, N(Aᵀ), are orthogonal subspaces of ℝᵐ.

The Fundamental Theorem of Linear Algebra consists of two parts:

(a) Part I: The row and column spaces have the same dimension r, and the two null spaces N(A) and N(Aᵀ) have the remaining dimensions n − r and m − r.

(b) Part II: The row space is perpendicular to the null space of A, and the column space is perpendicular to the null space of Aᵀ.

When b is not in the column space of A, we cannot get a direct solution of Ax = b. In that case, the null space of Aᵀ is considered for computing the least squares solution of Ax = b, since the error e = b − Ax, which lies in the null space of Aᵀ, is used for computing this least squares solution.

### 2.13 Projections

The projection of a vector, say b, onto a line is the part of b along that line. Similarly, the projection of a vector b onto a plane is the part of b in that plane. If p is the projection vector, then it is given by p = Pb, where P is the projection matrix. For example, considering a standard three-dimensional space with axes x, y and z, if a point is described by the vector b = (x1, y1, z1), then the projection of b along the z-axis can be found with the help of the projection matrix given by

$$P = \begin{bmatrix}0&0&0\\0&0&0\\0&0&1\end{bmatrix} \quad (2.27)$$
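As a quick check of the projection matrix in eqn (2.27), applied to an arbitrary point:

```python
import numpy as np

# Projection matrix onto the z-axis, as in eqn (2.27)
P = np.array([[0, 0, 0],
              [0, 0, 0],
              [0, 0, 1]], dtype=float)
b = np.array([2.0, -3.0, 5.0])  # an arbitrary point (x1, y1, z1)

p = P @ b
print(p)  # only the z-component survives the projection
```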

The projection of b onto the xy plane can be found with the help of the projection matrix

$$P = \begin{bmatrix}1&0&0\\0&1&0\\0&0&0\end{bmatrix} \quad (2.28)$$

Every subspace of ℝᵐ has its own m × m projection matrix. The projection matrix of a subspace is best described by the subspace's basis: the basis vectors are collected as the columns of a matrix. For example,

$$\begin{bmatrix}0\\0\\1\end{bmatrix} \quad (2.29)$$

gives a basis for the z-axis. Similarly, in the case of the two-dimensional xy plane,

$$\begin{bmatrix}1&0\\0&1\\0&0\end{bmatrix} \quad (2.30)$$

gives a basis. The z-axis (a line) and the xy plane are orthogonal complements, since their dimensions add up to three, and the sum of their projection matrices P1 and P2 (eqns. (2.27) and (2.28)) is the identity matrix: b = P1b + P2b = (P1 + P2)b = Ib, i.e., P1 + P2 = I.

#### 2.13.1 Projection Onto a Line

In the case of a line, the dimension is one, and the basis matrix has only one column. The projection p of any vector b onto the line through another vector a is given by the product of a scalar x̂ and a, i.e., p is a multiple of a: p = x̂a. The key to projection is orthogonality, wherein the perpendicular from b to the line through a determines the error vector e, given by

$$e = b - \hat{x}a \quad (2.31)$$

Since e is perpendicular to a, we can determine the scalar x̂ from the fact that their dot product is zero:

$$a \cdot e = 0 \;\Rightarrow\; a \cdot (b - \hat{x}a) = 0 \;\Rightarrow\; a \cdot b - \hat{x}\,a \cdot a = 0 \;\Rightarrow\; \hat{x} = \frac{a \cdot b}{a \cdot a} = \frac{a^Tb}{a^Ta} \quad (2.32)$$

Thus if b = a, then x̂ = 1, and the projection of a onto a is itself. If b is perpendicular to a, then aᵀb = 0, x̂ = 0, and the projection is p = 0.

Example 1. Problem: Project the vector b = [1 1 1]ᵀ onto another vector given by a = [1 2 2]ᵀ.

Solution: Here we need to find the projection of b onto a, given by p = x̂a. From the formula given above,

$$\hat{x} = \frac{a^Tb}{a^Ta} = \frac{5}{9}$$

Hence the projection vector p is given by

$$p = \hat{x}a = \frac{5}{9}a = \begin{bmatrix}5/9\\10/9\\10/9\end{bmatrix} \quad (2.33)$$

The error vector between b and p is e = b − p. Thus

$$e = \begin{bmatrix}1\\1\\1\end{bmatrix} - \begin{bmatrix}5/9\\10/9\\10/9\end{bmatrix} = \begin{bmatrix}4/9\\-1/9\\-1/9\end{bmatrix} \quad (2.34)$$
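Example 1 can be reproduced numerically, including the check that the error e is perpendicular to a:

```python
import numpy as np

b = np.array([1.0, 1.0, 1.0])
a = np.array([1.0, 2.0, 2.0])

x_hat = (a @ b) / (a @ a)  # a^T b / a^T a = 5/9
p = x_hat * a              # projection of b onto the line through a
e = b - p                  # error vector

print(x_hat)
print(p)      # [5/9, 10/9, 10/9], as in eqn (2.33)
print(a @ e)  # zero (up to rounding): e is perpendicular to a
```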

This error e must be perpendicular to a. This can be proved as

$$e^Ta = \begin{bmatrix}\tfrac{4}{9} & -\tfrac{1}{9} & -\tfrac{1}{9}\end{bmatrix}\begin{bmatrix}1\\2\\2\end{bmatrix} = \frac{4}{9} - \frac{2}{9} - \frac{2}{9} = 0$$

#### 2.13.2 Projection with Trigonometry

As shown in the above example, the vector b has been split into two parts:

(a) the component along the line through a, given by p;

(b) the component perpendicular to the line through a, given by e.

These two components form the sides of a right-angled triangle having lengths ||b|| cos θ and ||b|| sin θ. Thus, from trigonometry, the components of b are given as

$$\|p\| = \|b\|\cos\theta \quad \text{and} \quad \|e\| = \|b\|\sin\theta$$

These formulas involve calculating square roots. Hence the better way to arrive at the projection is through p = x̂a, with

$$\hat{x} = \frac{a^Tb}{a^Ta}$$

The projection matrix can now be formulated from p = ax̂ as

$$p = a\,\frac{a^Tb}{a^Ta} = \frac{aa^T}{a^Ta}\,b = Pb$$

Thus the projection matrix P onto the line through a can be given as

$$P = \frac{aa^T}{a^Ta}$$

Example 2

For Example 1, calculate the projection matrix P.

P = a aᵀ / aᵀa = (1/9) [1 2 2; 2 4 4; 2 4 4]   (2.35)

This matrix projects any vector b onto a, as can be verified with the b of Example 1:

p = P b = (1/9) [1 2 2; 2 4 4; 2 4 4] [1 1 1]ᵀ = [5/9 10/9 10/9]ᵀ   (2.36)

Note:-
- If the vector a is doubled, the matrix P stays the same; it still projects onto the same line.
- If the matrix is squared, P² equals P: projecting a second time changes nothing. P is also a symmetric matrix.
- When P projects onto one subspace, I − P projects onto the perpendicular subspace; applied to b it produces the error vector e.
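The three remarks above can all be confirmed numerically; a small check with NumPy, using the a and b of Example 1:

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([1.0, 1.0, 1.0])

P = np.outer(a, a) / (a @ a)                              # P = a a^T / a^T a
P_doubled = np.outer(2 * a, 2 * a) / ((2 * a) @ (2 * a))  # same line, doubled a

print(np.allclose(P_doubled, P))      # doubling a leaves P unchanged
print(np.allclose(P @ P, P))          # P^2 = P: projecting twice changes nothing
print(np.allclose(P, P.T))            # P is symmetric

e = (np.eye(3) - P) @ b               # I - P projects onto the perpendicular subspace
print(np.allclose(e, b - P @ b))      # and produces the error vector e
```

Every check prints True.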

2.13.3 Projection Onto a Subspace

The above results are for the one dimensional problem of projection onto a line, i.e. p = x̂a is the projection of any vector b on the line passing through a. In the case of an n-dimensional subspace we start with n vectors a1, a2, ..., an in Rᵐ, where all the a's are independent. The projection onto this n-dimensional subspace is given by

p = x̂1 a1 + x̂2 a2 + ... + x̂n an = A x̂   (2.37)

This projection, which is closest to b, is calculated from

Aᵀ (b − A x̂) = 0

which gives

AᵀA x̂ = Aᵀ b

The symmetric matrix AᵀA is an n × n matrix and is invertible since the a's are independent: its null space contains only the zero vector. The solution is given by

x̂ = (AᵀA)⁻¹ Aᵀ b

Thus the projection p can now be given as

p = A x̂ = A (AᵀA)⁻¹ Aᵀ b

Hence if p = P b is the projection of b onto the column space of A, the projection matrix is given by the formula

P = A (AᵀA)⁻¹ Aᵀ

If the matrix A is rectangular, it has no inverse matrix. However, when A has independent columns, AᵀA is invertible.

Proof:- AᵀA has the same null space as A if A has linearly independent columns. For every matrix A, if x is in the null space of A then Ax = 0, and multiplying by Aᵀ gives AᵀAx = 0; hence x is also in the null space of AᵀA. To prove that AᵀA has the same null space as A, we still need to deduce Ax = 0 from AᵀAx = 0.

Multiplying AᵀAx = 0 by xᵀ gives xᵀAᵀAx = 0, or (Ax)ᵀ(Ax) = 0, or ||Ax||² = 0. Thus if AᵀAx = 0 then Ax has length zero, so Ax = 0, and every vector x in the null space of AᵀA is also in the null space of A. Hence proved.

Note:-

1. The projection p of b onto the column space of A is given by p = x̂1 a1 + x̂2 a2 + ... + x̂n an = A x̂, with x̂ = (AᵀA)⁻¹ Aᵀ b.
2. The error is given by e = b − p = b − A x̂.
3. The projection matrix P has two properties, namely

Pᵀ = P   (2.38)

and

P² = P   (2.39)

2.14 Least Squares Solution to Ax = b

If x is an exact solution of Ax = b, then the error defined by e = b − Ax will be equal to zero. This is the case when the matrix A is invertible, or more generally when b is in the column space of A; a solution is possible if and only if b is in the column space of A. Quite often, however, Ax = b has no solution. Simply put, there are more equations than unknowns (m > n): the matrix has more rows than columns, with full column rank (r = n) but rank less than the number of rows (r < m).
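The subspace formulas, x̂ = (AᵀA)⁻¹Aᵀb and P = A(AᵀA)⁻¹Aᵀ, together with the properties Pᵀ = P and P² = P, can be exercised on any matrix with independent columns. A sketch with NumPy; the particular A and b here are arbitrary choices for illustration:

```python
import numpy as np

# Projection onto the column space of a matrix with independent columns.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

AtA = A.T @ A                             # n x n, invertible: columns of A independent
x_hat = np.linalg.solve(AtA, A.T @ b)     # x_hat = (A^T A)^{-1} A^T b
p = A @ x_hat                             # projection of b onto C(A)
P = A @ np.linalg.inv(AtA) @ A.T          # projection matrix P = A (A^T A)^{-1} A^T

print(np.allclose(P @ b, p))              # P b reproduces the projection
print(np.allclose(P.T, P))                # symmetric
print(np.allclose(P @ P, P))              # idempotent
```

All three checks print True for any A with independent columns.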

y1 ). However. we get y1 = D. calculus or by setting the derivative of error to zero (diﬀerential equation).42) (2. If b is not in the column space of A. x is a least squares solution. algebra. if the measurements include noise or external disturbances or uncertainties.Let y = Cx + D be the equation of the straight line. For the given problem. In this case. 0). Hence the problem will now be to minimise this error e to as small value ˆ as possible. 0) and (2. x2 = 1. let the points be (x1 . When the length of e is as small as possible. Thus we have the special case of least squares solution where x is the least squares solution such that the length e is as small as possible. Example 3 Find the closest line to the points (0.43) (2. then b will be in the column space of A. Thus x1 = 0. if all the measurements are perfect.In most practical cases of a control problem. Solution:. x3 . y2 ) and (x3 . then b is outside the column space of A. Substituting for x1 . y2 = C + D. then we consider the projection p of b on x which is connected by p = Aˆ . ˆ When Ax = b has no solution. 6). x3 = 2. in y = Ax + B. y3 ).41) ˆ It can be proved that x is the least squares solution or the best solution for Ax = b which minimiuses the error e = b−Ax to as small as possible through geometry. x2 . (x2 . ˆ Thus if we consider e = b − Ax. then e is zero when x is an exact solution of Ax = b or in other words b is in the column space oa A. the error e will not be zero. y3 = 2C + D 26 . (1. then e is not zero. This projection p which is closest to b is calculated from x AT (b − Aˆ ) = 0 x or AT Aˆ = AT b x Thus the solution is given by ˆ x = (AT A)−1 AT b (2.

  0 1   ˆ 1 1 x = 2 1 5 3 3 3 ˆ x =   6   0 0 C D (2.44) (2.Since y1 = 6..46) 27 .e. we solve AT Aˆ = AT b x i. Expressing the above system of equations in matrix form we get     0 1 6   C   = 0 1 1 D 2 1 0 Considering   0 1   A = 1 1 2 1 x= and   6   b = 0 0 then Ax = b is not solvable since b is not in the column space of A.45) 0 1 2 1 1 1 0 1 2 1 1 1 0 6 (2. y2 = y3 = 0. we get the set of equations D = 6 C +D = 0 2C + D = 0 The above system of equations does not have a solution. To ﬁnd the least squares solution.

The projection p of b onto the column space of A is thus given by p = Aˆ x   0 1   p = 1 1 2 1   5   =  2 −1 Thus the error e = b − p is given as     5 6     e = 0 −  2  0 −1   1   = −2 1 The length of e for this solution is  ||e2 || = eT e = 1 −2 1 1  (2. y = −3 x + 5 is the equation of the line closest to the three given points.Thus ˆ x = = 5 3 3 3 −3 5 −1 0 6 (2.47) Thus with C = −3 and D = 5.48) −3 5 (2.49)   −2 = 6 1 28 .

To check and verify:

(1) e must be perpendicular to both columns of A:

eᵀ a1 = [1 −2 1] [0; 1; 2] = 0  and  eᵀ a2 = [1 −2 1] [1; 1; 1] = 0   (2.50)

which verifies the same.

(2) p = P b. To verify this, first we find P:

P = A (AᵀA)⁻¹ Aᵀ
  = [0 1; 1 1; 2 1] · (1/6) [3 −3; −3 5] · [0 1 2; 1 1 1]
  = (1/6) [5 2 −1; 2 2 2; −1 2 5]   (2.51)

Hence, calculating p = P b:

p = (1/6) [5 2 −1; 2 2 2; −1 2 5] [6; 0; 0] = [5; 2; −1]   (2.52)

which was the earlier result.

Chapter 3

Matrices and Determinants

3.1 Introduction

In this chapter, properties of determinants are discussed.

3.2 Determinant of a Square Matrix

The determinant of a square matrix is a scalar which can immediately tell us whether the given matrix is invertible or not. A square matrix is invertible if and only if its determinant is not equal to zero. If the determinant is zero, the matrix is singular, i.e. it is not invertible.

Chapter 4

Solution to Dynamic Problems with Eigen Values and Eigen Vectors

4.1 Introduction

In this chapter, eigen values and eigen vectors are discussed.

4.2 Linear Differential Equations and Matrices

Steady state problems can be expressed by linear equations of the form Ax = b. Dynamic problems are those of the form du/dt = Au. Their solutions change with time: they can be decaying with time, growing with time, or oscillating. Eigen values and eigen vectors help us in arriving at the solution of these differential equations, expressed in matrix form, in simple and easy steps.
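As a concrete preview of the chapter's method, the solution of du/dt = Au can be assembled from the eigenvalues and eigenvectors of A: expanding u(0) = c1 x1 + ... + cn xn in eigenvectors gives u(t) = Σ ci e^(λi t) xi. A sketch with NumPy; the matrix A and initial condition u(0) are assumptions chosen for illustration:

```python
import numpy as np

# du/dt = A u solved through eigenvalues and eigenvectors.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])    # eigenvalues -1 and -2: a decaying solution
u0 = np.array([1.0, 0.0])

lam, X = np.linalg.eig(A)       # columns of X are the eigenvectors
c = np.linalg.solve(X, u0)      # expand u(0) = X c in eigenvectors

def u(t):
    # u(t) = sum_i c_i * exp(lam_i * t) * x_i
    return (X * np.exp(lam * t)) @ c

# Check that u really satisfies du/dt = A u, via a central difference
t, h = 0.5, 1e-6
du_dt = (u(t + h) - u(t - h)) / (2 * h)
print(np.allclose(du_dt, A @ u(t), atol=1e-6))
```

Because both eigenvalues are negative, u(t) decays to zero, which in a control context corresponds to a stable system.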

Chapter 5

Matrices and Linear Transformations

5.1 Introduction

In this chapter, linear transformations and their applications using matrices are discussed.

5.2 Linear Transformations

A transformation is like a function. In the case of a function, for every input x the output is expressed as f(x). Similarly, if v is an input vector in a vector space V, then a transformation T assigns an output T(v) to each input vector v. A linear transformation is a transformation which satisfies the two conditions of homogeneity and superposition.
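Every matrix A defines a linear transformation T(v) = Av, and the two conditions are straightforward to check numerically; a small sketch with NumPy, using an arbitrary matrix chosen for illustration:

```python
import numpy as np

# T(v) = A v is a linear transformation
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

def T(v):
    return A @ v

v = np.array([1.0, -2.0])
w = np.array([4.0, 0.5])
c = 3.0

print(np.allclose(T(c * v), c * T(v)))      # homogeneity: T(cv) = c T(v)
print(np.allclose(T(v + w), T(v) + T(w)))   # superposition: T(v+w) = T(v) + T(w)
```

Both conditions hold for any matrix A and any vectors v, w and scalar c, which is why matrices and linear transformations go hand in hand.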
