
# Computational Linear Algebra

Syllabus

NUMERICAL ANALYSIS: Linear Systems of Equations and Matrix Computations

Module 1: Direct methods for solving linear systems of equations. Simple Gaussian elimination method, Gauss elimination method with partial pivoting, determinant evaluation, Gauss-Jordan method, LU decompositions, Doolittle's LU decomposition, Doolittle's method with row interchange.

Module 2: Iterative methods for solving linear systems of equations. Iterative methods for the solution of systems of equations, Jacobi iteration, Gauss-Seidel method, successive over-relaxation (SOR) method.

Module 3: Eigenvalues and Eigenvectors. An introduction, eigenvalues and eigenvectors, similar matrices, Hermitian matrices, Gram-Schmidt orthonormalization, vector and matrix norms.

Module 4: Computation of eigenvalues. Computation of the eigenvalues of a real symmetric matrix, determination of the eigenvalues of a real symmetric tridiagonal matrix, tridiagonalization of a real symmetric matrix, Jacobi iteration for finding the eigenvalues of a real symmetric matrix, the QR decomposition, the QR algorithm.

V1/1-4-04/1

## Computational Linear Algebra

Lecture Plan

Module 1: Direct methods for solving linear systems of equations.
Learning units: 1. Simple Gaussian elimination method. 2. Gauss elimination method with partial pivoting. 3. Determinant evaluation. 4. Gauss-Jordan method. 5. LU decompositions. 6. Doolittle's LU decomposition. 7. Doolittle's method with row interchange.

Module 2: Iterative methods for solving linear systems of equations.
Learning units: 8. Iterative methods for the solution of systems of equations. 9. Jacobi iteration. 10. Gauss-Seidel method. 11. Successive over-relaxation (SOR) method.

Module 3: Eigenvalues and eigenvectors.
Learning units: 12. An introduction. 13. Eigenvalues and eigenvectors. 14. Similar matrices. 15. Hermitian matrices. 16. Gram-Schmidt orthonormalization. 17. Vector and matrix norms.

Module 4: Computation of eigenvalues.
Learning units: 18. Computation of eigenvalues. 19. Computation of the eigenvalues of a real symmetric matrix. 20. Determination of the eigenvalues of a real symmetric tridiagonal matrix. 21. Tridiagonalization of a real symmetric matrix. 22. Jacobi iteration for finding the eigenvalues of a real symmetric matrix. 23. The QR decomposition.

Lecture Notes

## 1. DIRECT METHODS FOR SOLVING LINEAR SYSTEMS OF EQUATIONS

1.1. SIMPLE GAUSSIAN ELIMINATION METHOD
Consider a system of n equations in n unknowns,

a11 x1 + a12 x2 + ... + a1n xn = y1
a21 x1 + a22 x2 + ... + a2n xn = y2
.....
an1 x1 + an2 x2 + ... + ann xn = yn

We shall assume that this system has a unique solution, and proceed to describe the simple Gaussian elimination method for finding the solution. The method reduces the system to an upper triangular system using elementary row operations (ERO). Let A(1) denote the coefficient matrix A.

Let

A(1) = [ a(1)11  a(1)12  ......  a(1)1n ]
       [ a(1)21  a(1)22  ......  a(1)2n ]
       [ ......  ......  ......  ...... ]
       [ a(1)n1  a(1)n2  ......  a(1)nn ]     where a(1)ij = aij,

and

y(1) = [ y(1)1 ]
       [ y(1)2 ]
       [   :   ]
       [ y(1)n ]     where y(1)i = yi.

We assume a(1)11 ≠ 0. Then by EROs of the type

Ri → Ri + m(1)i1 R1,   where m(1)i1 = - a(1)i1 / a(1)11,  i > 1,

applied to A(1), we reduce all entries below a(1)11 to zero. Let the resulting matrix be denoted by A(2). Note that A(2) is of the form

VittalRao/IISc, Bangalore

M1/L1and L2/V1/May2004/1

## Numerical Analysis/ Direct methods for solving linear system of equation


A(2) = [ a(1)11  a(1)12  ...  ...  a(1)1n ]
       [ 0       a(2)22  ...  ...  a(2)2n ]
       [ 0       a(2)32  ...  ...  a(2)3n ]
       [ ...     ...     ...  ...  ...    ]
       [ 0       a(2)n2  ...  ...  a(2)nn ]

Notice that the above row operations on A(1) can be effected by premultiplying A(1) by M(1), where

M(1) = [ 1        0  ...  0 ]
       [ m(1)21             ]
       [ m(1)31     In-1    ]
       [ ...                ]
       [ m(1)n1             ]

i.e. M(1) A(1) = A(2). Let y(2) = M(1) y(1), i.e. y(1) is carried to y(2) by the same operations Ri → Ri + m(1)i1 R1. Then the system Ax = y is equivalent to A(2)x = y(2).

Next we assume a(2)22 ≠ 0 and reduce all entries below it to zero by the EROs

Ri → Ri + m(2)i2 R2,   where m(2)i2 = - a(2)i2 / a(2)22,  i > 2,

carrying A(2) to A(3). Here

M(2) = [ 1  0        0  ...  0 ]
       [ 0  1        0  ...  0 ]
       [ 0  m(2)32             ]
       [ 0  m(2)42      In-2   ]
       [ .  ...                ]
       [ 0  m(2)n2             ]

M(2) y(2) = y(3) ;  M(2) A(2) = A(3) ;  and A(3) is of the form

A(3) = [ a(1)11  a(1)12  a(1)13  ...  a(1)1n ]
       [ 0       a(2)22  a(2)23  ...  a(2)2n ]
       [ 0       0       a(3)33  ...  a(3)3n ]
       [ ...     ...     ...     ...  ...    ]
       [ 0       0       a(3)n3  ...  a(3)nn ]

We next assume a(3)33 ≠ 0 and proceed to make the entries below it zero. We thus get M(1), M(2), ...., M(r), where M(r) is the identity matrix with the multipliers m(r)r+1,r, m(r)r+2,r, ..., m(r)n,r inserted in the r-th column below the diagonal:

M(r) = [ 1  .  0                     ]
       [ .  .  .            0        ]
       [ 0  .  1                     ]
       [ 0  .  m(r)r+1,r             ]
       [ 0  .  m(r)r+2,r     In-r    ]
       [ .  .  ....                  ]
       [ 0  .  m(r)n,r               ]

(with the rxr identity block in the top left), and M(r) A(r) = A(r+1), where

A(r+1) = [ a(1)11  ...  ...      ...             ...  a(1)1n      ]
         [ 0    a(2)22  ...      ...             ...  a(2)2n      ]
         [ :       .    a(r)rr   ...             ...  a(r)rn      ]
         [ :       .    0        a(r+1)r+1,r+1   ...  a(r+1)r+1,n ]
         [ :       .    :        :                    :           ]
         [ 0    ...  .  0        a(r+1)n,r+1     ...  a(r+1)n,n   ]


M(r) y(r) = y(r+1). At each stage we assume a(r)rr ≠ 0. Proceeding thus we get M(1), M(2), ...., M(n-1) such that

M(n-1) M(n-2) .... M(1) A(1) = A(n) ;  M(n-1) M(n-2) .... M(1) y(1) = y(n),

where

A(n) = [ a(1)11  a(1)12  ...  a(1)1n ]
       [         a(2)22  ...  a(2)2n ]
       [                 ...         ]
       [                     a(n)nn  ]

which is an upper triangular matrix, and the given system is equivalent to A(n)x = y(n). Since this is upper triangular it can be solved by backward substitution, and hence the system can be solved easily.

Note further that each M(r) is a lower triangular matrix with all diagonal entries equal to 1. Thus det M(r) is 1 for every r. Now A(n) = M(n-1) .... M(1) A(1), so

det A(n) = det M(n-1) det M(n-2) .... det M(1) det A(1),

i.e. det A(n) = det A(1) = det A, since A = A(1). Now A(n) is an upper triangular matrix and hence its determinant is a(1)11 a(2)22 .... a(n)nn. Thus det A is given by

det A = a(1)11 a(2)22 .... a(n)nn.

Thus the simple GEM can be used to solve the system Ax = y and also to evaluate det A, provided a(i)ii ≠ 0 for each i. Further note that M(1), M(2), ...., M(n-1) are lower triangular and nonsingular, as their determinant is 1 ≠ 0. They are therefore all invertible, and their inverses are all lower triangular; i.e., if L = M(n-1) M(n-2) .... M(1), then L is lower triangular and nonsingular, and L-1 is also lower triangular. Now

LA = LA(1) = M(n-1) M(n-2) .... M(1) A(1) = A(n).
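The whole procedure (forward elimination, the determinant as the product of the pivots, and backward substitution) can be sketched in Python with NumPy. This is our own illustrative code, not part of the original notes, and the function name `simple_gem` is ours:

```python
import numpy as np

def simple_gem(A, y):
    """Simple Gaussian elimination without pivoting.

    Applies Ri <- Ri + m(r)ir Rr with m(r)ir = -a(r)ir / a(r)rr to
    reach an upper triangular system, then back-substitutes.
    Fails (by design) if some pivot a(r)rr is zero.
    """
    A = np.array(A, dtype=float)
    z = np.array(y, dtype=float)
    n = len(z)
    for r in range(n - 1):
        if A[r, r] == 0.0:
            raise ZeroDivisionError("a(r)rr = 0: simple GEM cannot proceed")
        for i in range(r + 1, n):
            m = -A[i, r] / A[r, r]      # the multiplier m(r)ir
            A[i, r:] += m * A[r, r:]    # Ri <- Ri + m Rr
            z[i] += m * z[r]
    det = float(np.prod(np.diag(A)))    # det A = a(1)11 a(2)22 ... a(n)nn
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):      # backward substitution
        x[i] = (z[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x, det

# the worked example that follows in the notes
x, det = simple_gem([[1, 1, 2], [2, -1, 1], [1, 2, 0]], [4, 2, 3])
```

For the example system this sketch returns the solution (1, 1, 1) and det A = 9, matching the hand computation in the notes.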


Therefore A = L-1 A(n). Now L-1 is lower triangular, which we denote by l, and A(n) is upper triangular, which we denote by u. We thus get the so-called lu decomposition A = lu of a given matrix A as a product of a lower triangular matrix with an upper triangular matrix. This is another application of the simple GEM.

REMEMBER: IF AT ANY STAGE WE GET a(r)rr = 0, WE CANNOT PROCEED FURTHER WITH THE SIMPLE GEM.

EXAMPLE: Consider the system

x1 + x2 + 2x3 = 4
2x1 - x2 + x3 = 2
x1 + 2x2      = 3

Here

A = [ 1  1  2 ]        y = [ 4 ]
    [ 2 -1  1 ]            [ 2 ]
    [ 1  2  0 ]            [ 3 ]

A(1) = [ 1  1  2 ]  --(R2 - 2R1, R3 - R1)-->  [ 1  1  2 ]
       [ 2 -1  1 ]                            [ 0 -3 -3 ] = A(2)
       [ 1  2  0 ]                            [ 0  1 -2 ]

a(1)11 = 1 ≠ 0 ;  m(1)21 = -2 ;  m(1)31 = -1 ;  a(2)22 = -3 ≠ 0

M(1) = [  1  0  0 ]
       [ -2  1  0 ]
       [ -1  0  1 ]

y(2) = M(1) y(1) = [  4 ]
                   [ -6 ]
                   [ -1 ]


A(2) --(R3 + (1/3) R2)--> [ 1  1  2 ]
                          [ 0 -3 -3 ] = A(3)
                          [ 0  0 -3 ]

a(3)33 = -3 ;  m(2)32 = 1/3

M(2) = [ 1   0   0 ]
       [ 0   1   0 ]
       [ 0  1/3  1 ]

y(3) = M(2) y(2) = [  4 ]
                   [ -6 ]
                   [ -3 ]

Therefore the given system is equivalent to A(3)x = y(3):

x1 + x2 + 2x3 = 4
    -3x2 - 3x3 = -6
         - 3x3 = -3

Backward substitution:

-3x3 = -3, so x3 = 1 ;
-3x2 - 3 = -6, i.e. -3x2 = -3, so x2 = 1 ;
x1 + 1 + 2 = 4, so x1 = 1.

Thus the solution of the given system is

x = [ x1 ]   [ 1 ]
    [ x2 ] = [ 1 ]
    [ x3 ]   [ 1 ]

The determinant of the given matrix A is a(1)11 a(2)22 a(3)33 = (1)(-3)(-3) = 9. Now

(M(1))-1 = [ 1  0  0 ]        (M(2))-1 = [ 1   0    0 ]
           [ 2  1  0 ]                   [ 0   1    0 ]
           [ 1  0  1 ]                   [ 0  -1/3  1 ]


L = M(2) M(1) ;  L-1 = (M(2) M(1))-1 = (M(1))-1 (M(2))-1

  = [ 1  0  0 ] [ 1   0    0 ]   [ 1   0    0 ]
    [ 2  1  0 ] [ 0   1    0 ] = [ 2   1    0 ] = l
    [ 1  0  1 ] [ 0  -1/3  1 ]   [ 1  -1/3  1 ]

u = A(n) = A(3) = [ 1  1  2 ]
                  [ 0 -3 -3 ]
                  [ 0  0 -3 ]

Therefore A = lu, i.e.

[ 1  1  2 ]   [ 1   0    0 ] [ 1  1  2 ]
[ 2 -1  1 ] = [ 2   1    0 ] [ 0 -3 -3 ]
[ 1  2  0 ]   [ 1  -1/3  1 ] [ 0  0 -3 ]

is the lu decomposition of the given matrix.

We observed that in order to apply the simple GEM we need a(r)rr ≠ 0 at each stage r. This may not always be satisfied. So we have to modify the simple GEM in order to overcome this situation. Further, even if the condition a(r)rr ≠ 0 is satisfied at each stage, simple GEM may not be a very accurate method to use. What do we mean by this? Consider, as an example, the following system:

(0.000003) x1 + (0.213472) x2 + (0.332147) x3 = 0.235262
(0.215512) x1 + (0.375623) x2 + (0.476625) x3 = 0.127653
(0.173257) x1 + (0.663257) x2 + (0.625675) x3 = 0.285321

Let us do the computations to 6 significant digits. Here,

A(1) = [ 0.000003  0.213472  0.332147 ]        y(1) = [ 0.235262 ]
       [ 0.215512  0.375623  0.476625 ]               [ 0.127653 ]
       [ 0.173257  0.663257  0.625675 ]               [ 0.285321 ]

a(1)11 = 0.000003 ≠ 0

m(1)21 = - a(1)21 / a(1)11 = - 0.215512 / 0.000003 = -71837.3
m(1)31 = - a(1)31 / a(1)11 = - 0.173257 / 0.000003 = -57752.3

M(1) = [    1      0  0 ]
       [ -71837.3  1  0 ]
       [ -57752.3  0  1 ]

y(2) = M(1) y(1), and

A(2) = M(1) A(1) = [ 0.000003   0.213472  0.332147 ]
                   [ 0         -15334.9   a(2)23   ]
                   [ 0         -12327.8   a(2)33   ]

a(2)22 = -15334.9 ≠ 0

m(2)32 = - a(2)32 / a(2)22 = - (-12327.8) / (-15334.9) = -0.803905

M(2) = [ 1   0         0 ]
       [ 0   1         0 ]
       [ 0  -0.803905  1 ]

y(3) = M(2) y(2), and

A(3) = M(2) A(2) = [ 0.000003   0.213472  0.332147 ]
                   [ 0         -15334.9   a(2)23   ]
                   [ 0          0         a(3)33   ]

Thus the given system is equivalent to the upper triangular system A(3)x = y(3).

Back substitution yields

x1 = 0.400000 ;  x2 = 0.479723 ;  x3 = -1.33333.

This compares poorly with the correct answers (to 10 digits), given by

x1 = -0.9912894252 ;  x2 = 0.0532039339 ;  x3 = 0.6741214694.

Thus we see that the simple Gaussian elimination method needs modification in order to handle situations that may lead to a(r)rr = 0 for some r, or situations such as the one arising in the above example. In order to do this we introduce the idea of Partial Pivoting.

The idea of partial pivoting is the following: at the r-th stage we shall be trying to reduce all the entries below the r-th diagonal to zero. Before we do this, we look at the entries on the r-th diagonal and below it, pick the one that has the largest absolute value, bring it to the r-th diagonal position by a row interchange, and then reduce the entries below the r-th diagonal to zero. When we incorporate this idea at each stage of the Gaussian elimination process we get the GAUSS ELIMINATION METHOD WITH PARTIAL PIVOTING.

We now illustrate this with a few examples.

Example:

x1 + x2 + 2x3 = 4
2x1 - x2 + x3 = 2
x1 + 2x2      = 3

We have the augmented matrix

Aaug = [ 1  1  2 | 4 ]
       [ 2 -1  1 | 2 ]
       [ 1  2  0 | 3 ]

1st Stage: The pivot has to be chosen as 2 as this is the largest absolute valued entry in the first column. Therefore we do


Aaug --(R12)--> [ 2 -1  1 | 2 ]
                [ 1  1  2 | 4 ]
                [ 1  2  0 | 3 ]

Therefore we have

M(1) = [ 0  1  0 ]
       [ 1  0  0 ]
       [ 0  0  1 ]

and

M(1) A(1) = A(2) = [ 2 -1  1 ]        M(1) y(1) = y(2) = [ 2 ]
                   [ 1  1  2 ]                           [ 4 ]
                   [ 1  2  0 ]                           [ 3 ]

Next we have

A(2)aug --(R2 - (1/2)R1, R3 - (1/2)R1)--> [ 2  -1    1   | 2 ]
                                          [ 0  3/2   3/2 | 3 ]
                                          [ 0  5/2  -1/2 | 2 ]

Here

M(2) = [  1    0  0 ]
       [ -1/2  1  0 ]
       [ -1/2  0  1 ]

M(2) A(2) = A(3) = [ 2  -1    1   ]        M(2) y(2) = y(3) = [ 2 ]
                   [ 0  3/2   3/2 ]                           [ 3 ]
                   [ 0  5/2  -1/2 ]                           [ 2 ]

Now at the next stage the pivot is 5/2, since this is the entry with the largest absolute value in the 1st column of the next submatrix. So we have to do another row interchange:

A(3)aug --(R23)--> [ 2  -1    1   | 2 ]
                   [ 0  5/2  -1/2 | 2 ]
                   [ 0  3/2   3/2 | 3 ]

M(3) = [ 1  0  0 ]
       [ 0  0  1 ]
       [ 0  1  0 ]

M(3) A(3) = A(4) ;  M(3) y(3) = y(4) = [ 2 ]
                                       [ 2 ]
                                       [ 3 ]

Next we have

A(4)aug --(R3 - (3/5)R2)--> [ 2  -1    1   | 2   ]
                            [ 0  5/2  -1/2 | 2   ]
                            [ 0  0     9/5 | 9/5 ]

Here

M(4) = [ 1   0    0 ]
       [ 0   1    0 ]
       [ 0  -3/5  1 ]

M(4) A(4) = A(5) = [ 2  -1    1   ]        M(4) y(4) = y(5) = [ 2   ]
                   [ 0  5/2  -1/2 ]                           [ 2   ]
                   [ 0  0     9/5 ]                           [ 9/5 ]

This completes the reduction, and we have that the given system is equivalent to the system A(5)x = y(5), i.e.

2x1 - x2 + x3 = 2
(5/2)x2 - (1/2)x3 = 2
(9/5)x3 = 9/5

We now get the solution by back substitution:


From (9/5)x3 = 9/5 we get x3 = 1. Then

(5/2)x2 - (1/2)(1) = 2,  i.e.  (5/2)x2 = 5/2,

and hence x2 = 1. Using the values of x2 and x3 in the first equation we get 2x1 - 1 + 1 = 2, giving x1 = 1. Thus we get the solution of the system as x1 = 1, x2 = 1, x3 = 1; the same as we had obtained with the simple Gaussian elimination method earlier.

Example 2: Let us now apply the Gaussian elimination method with partial pivoting to the following system:

(0.000003) x1 + (0.213472) x2 + (0.332147) x3 = 0.235262
(0.215512) x1 + (0.375623) x2 + (0.476625) x3 = 0.127653
(0.173257) x1 + (0.663257) x2 + (0.625675) x3 = 0.285321,

the system to which we had earlier applied the simple GEM and had obtained solutions far away from the correct ones. Note that

A = [ 0.000003  0.213472  0.332147 ]        y = [ 0.235262 ]
    [ 0.215512  0.375623  0.476625 ]            [ 0.127653 ]
    [ 0.173257  0.663257  0.625675 ]            [ 0.285321 ]

We observe that at the first stage we must choose 0.215512 as the pivot. So we have

A(1) = A --(R12)--> A(2) = [ 0.215512  0.375623  0.476625 ]
                           [ 0.000003  0.213472  0.332147 ]
                           [ 0.173257  0.663257  0.625675 ]

y(1) = y --(R12)--> y(2) = [ 0.127653 ]
                           [ 0.235262 ]
                           [ 0.285321 ]

M(1) = [ 0  1  0 ]
       [ 1  0  0 ]
       [ 0  0  1 ]

Next, with

m21 = - a21/a11 = - 0.000003/0.215512 = -0.000014
m31 = - a31/a11 = - 0.173257/0.215512 = -0.803932

we apply R2 + m21 R1 and R3 + m31 R1 to A(2):

M(2) = [  1         0  0 ]
       [ -0.000014  1  0 ]
       [ -0.803932  0  1 ]

A(3) = M(2) A(2) = [ 0.215512  0.375623  0.476625 ]
                   [ 0         0.213467  0.332140 ]
                   [ 0         0.361282  0.242501 ]

y(3) = M(2) y(2) = [ 0.127653 ]
                   [ 0.235260 ]
                   [ 0.182697 ]

In the next stage we observe that we must choose 0.361282 as the pivot. Thus we have to interchange the 2nd and 3rd rows. We get

A(3) --(R23)--> A(4) = [ 0.215512  0.375623  0.476625 ]
                       [ 0         0.361282  0.242501 ]
                       [ 0         0.213467  0.332140 ]

M(3) = [ 1  0  0 ]
       [ 0  0  1 ]
       [ 0  1  0 ]

y(4) = M(3) y(3) = [ 0.127653 ]
                   [ 0.182697 ]
                   [ 0.235260 ]

Now we reduce the entry below the 2nd diagonal to zero. With

m32 = - 0.213467/0.361282 = -0.590859,

R3 + m32 R2 gives

M(4) = [ 1   0         0 ]
       [ 0   1         0 ]
       [ 0  -0.590859  1 ]

A(5) = M(4) A(4) = [ 0.215512  0.375623  0.476625 ]
                   [ 0         0.361282  0.242501 ]
                   [ 0         0         0.188856 ]

y(5) = M(4) y(4) = [ 0.127653 ]
                   [ 0.182697 ]
                   [ 0.127312 ]

Thus the given system is equivalent to A(5)x = y(5), which is an upper triangular system and can be solved by back substitution to get

x3 = 0.674122 ;  x2 = 0.053205 ;  x1 = -0.991291,

which compares well with the 10-digit accurate solution given earlier. Notice that while we got very bad errors in the solution when using simple GEM, we have got around this difficulty by using partial pivoting.
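The same steps can be carried out programmatically. The sketch below is our own code (the name `gem_partial_pivot` is not from the notes); it implements elimination with partial pivoting in double precision and recovers the accurate solution of this system:

```python
import numpy as np

def gem_partial_pivot(A, y):
    """Gaussian elimination with partial pivoting, then back substitution.

    At stage r, the entry of largest absolute value on or below the
    r-th diagonal is brought to the diagonal by a row interchange.
    """
    A = np.array(A, dtype=float)
    z = np.array(y, dtype=float)
    n = len(z)
    for r in range(n - 1):
        p = r + int(np.argmax(np.abs(A[r:, r])))   # pivot row
        if p != r:
            A[[r, p]] = A[[p, r]]                  # row interchange
            z[[r, p]] = z[[p, r]]
        for i in range(r + 1, n):
            m = -A[i, r] / A[r, r]
            A[i, r:] += m * A[r, r:]
            z[i] += m * z[r]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                 # back substitution
        x[i] = (z[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = [[0.000003, 0.213472, 0.332147],
     [0.215512, 0.375623, 0.476625],
     [0.173257, 0.663257, 0.625675]]
y = [0.235262, 0.127653, 0.285321]
x = gem_partial_pivot(A, y)   # close to (-0.991289, 0.053205, 0.674121)
```

Run on the first example system, the same function returns (1, 1, 1), agreeing with the hand computation.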


DETERMINANT EVALUATION

Notice that even in the partial pivoting method we get matrices M(k), M(k-1), ...., M(1) such that M(k) M(k-1) .... M(1) A is upper triangular, and therefore

det M(k) det M(k-1) .... det M(1) det A = product of the diagonal entries of the final upper triangular matrix.

Now det M(i) = 1 if M(i) refers to the process of reducing the entries below a diagonal to zero, and det M(i) = -1 if it refers to a row interchange needed for partial pivoting. Therefore

det M(k) .... det M(1) = (-1)^m,

where m is the number of row interchanges effected in the reduction. Therefore

det A = (-1)^m × (product of the diagonals of the final upper triangular matrix).

In Example 1 above we had M(1), M(2), M(3), M(4), of which M(1) and M(3) referred to row interchanges. There were therefore two row interchanges, and hence

det A = (-1)^2 (2)(5/2)(9/5) = 9.

In Example 2 also we had M(1), M(3) as row interchange matrices, and therefore

det A = (-1)^2 (0.215512)(0.361282)(0.188856) = 0.014704.

LU decomposition: Notice that the M matrices corresponding to row interchanges are no longer lower triangular (see M(1) and M(3) in the two examples). Thus M(k) M(k-1) .... M(1) is not a lower triangular matrix in general, and hence using partial pivoting we cannot get an LU decomposition in general.
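This determinant rule is easy to mechanize. A minimal sketch (our own code) that tracks the number m of interchanges alongside the elimination:

```python
import numpy as np

def det_by_pivoting(A):
    """det A = (-1)^m * product of the diagonals of the final upper
    triangular matrix, where m counts the row interchanges."""
    A = np.array(A, dtype=float)
    n = len(A)
    m = 0
    for r in range(n - 1):
        p = r + int(np.argmax(np.abs(A[r:, r])))
        if A[p, r] == 0.0:
            return 0.0                  # whole column is zero: A is singular
        if p != r:
            A[[r, p]] = A[[p, r]]
            m += 1                      # one more row interchange
        for i in range(r + 1, n):
            A[i, r:] -= (A[i, r] / A[r, r]) * A[r, r:]
    return (-1) ** m * float(np.prod(np.diag(A)))
```

On the matrix of Example 1 this performs two interchanges and returns (-1)^2 (2)(5/2)(9/5) = 9.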


GAUSS JORDAN METHOD

This is just the method of reducing Aaug to (AR | yR), where AR = In is the row reduced echelon (RRE) form of A (in the case A is nonsingular). We could also do the reduction here by partial pivoting.

Remark: If in the reduction process at some stage we get arr = ar+1,r = .... = anr = 0, then even partial pivoting does not bring any nonzero entry to the r-th diagonal, because there is no nonzero entry available. In such a case A is a singular matrix, and we proceed to the RRE form to get the general solution of the system.

As observed earlier, in the case A is nonsingular, the Gauss-Jordan method leads to AR = In, and the product of the corresponding M(i) gives us A-1.
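For a nonsingular A, the Gauss-Jordan reduction of (A | In) yields (In | A-1). The following sketch (our own code, using partial pivoting) illustrates this:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce the augmented matrix (A | I) to (I | A^-1)."""
    A = np.array(A, dtype=float)
    n = len(A)
    aug = np.hstack([A, np.eye(n)])
    for r in range(n):
        p = r + int(np.argmax(np.abs(aug[r:, r])))   # partial pivoting
        if aug[p, r] == 0.0:
            raise ValueError("A is singular")
        aug[[r, p]] = aug[[p, r]]
        aug[r] /= aug[r, r]              # scale the pivot row to make the pivot 1
        for i in range(n):
            if i != r:                   # clear the rest of column r
                aug[i] -= aug[i, r] * aug[r]
    return aug[:, n:]

A = np.array([[1, 1, 2], [2, -1, 1], [1, 2, 0]], dtype=float)
Ainv = gauss_jordan_inverse(A)
```

Multiplying the result by A recovers the identity matrix, confirming the inverse.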


LU DECOMPOSITIONS

We shall now consider the LU decomposition of matrices. Suppose A is an nxn matrix. If L and U are lower and upper triangular nxn matrices respectively such that A = LU, we say that this is an LU decomposition of A. Note that an LU decomposition is not unique. For example, if A = LU is a decomposition, then A = L'U' is also an LU decomposition, where L' = αL and U' = (1/α)U for any scalar α ≠ 0.

Suppose we have an LU decomposition A = LU. Then the system Ax = y can be solved as follows. Set

Ux = z ..... (1)

Then the system Ax = y can be written as LUx = y, i.e.,

Lz = y ..... (2)

Now (2) is a triangular system (in fact lower triangular) and hence we can solve it by forward substitution to get z. Substituting this z in (1), we get an upper triangular system for x, which can be solved by back substitution.

Further, if A = LU is an LU decomposition, then det A can be calculated as

det A = det L . det U = l11 l22 .... lnn u11 u22 .... unn,

where lii are the diagonal entries of L and uii are the diagonal entries of U. Also, A-1 can be obtained from an LU decomposition as A-1 = U-1 L-1. Thus an LU decomposition helps to break a system into triangular systems, to find the determinant, and to find the inverse of a matrix.

We shall now give methods to find an LU decomposition of a matrix. Basically, we shall consider three cases: first, the decomposition of a tridiagonal matrix; secondly, Doolittle's method for a general matrix; and thirdly, Cholesky's method for a symmetric matrix.
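The two triangular solves can be sketched as follows. This is our own illustrative code; the factors used are those computed for the first worked example:

```python
import numpy as np

def solve_lu(L, U, y):
    """Solve Ax = y given A = LU: forward substitution for Lz = y (2),
    then backward substitution for Ux = z (1)."""
    L, U, y = (np.array(M, dtype=float) for M in (L, U, y))
    n = len(y)
    z = np.zeros(n)
    for i in range(n):                    # forward substitution
        z[i] = (y[i] - L[i, :i] @ z[:i]) / L[i, i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):        # backward substitution
        x[i] = (z[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# the lu factors found earlier for the first worked example
L = [[1, 0, 0], [2, 1, 0], [1, -1/3, 1]]
U = [[1, 1, 2], [0, -3, -3], [0, 0, -3]]
x = solve_lu(L, U, [4, 2, 3])
```

As expected, the solution is again (1, 1, 1).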


TRIDIAGONAL MATRIX

Let

A = [ b1  a2  0   ....  0     0  ]
    [ c1  b2  a3  ....  0     0  ]
    [ 0   c2  b3  ....  0     0  ]
    [ ....          ....         ]
    [ 0   0   .... cn-2  bn-1  an ]
    [ 0   0   .... 0     cn-1  bn ]

be an nxn tridiagonal matrix, so that A(i-1,i) = ai, A(i,i) = bi, A(i+1,i) = ci. We seek an LU decomposition for this. First we give some preliminaries. Let δi denote the determinant of the ith principal minor of A:

δi = det [ b1  a2  ....  0  ]
         [ c1  b2  ....  :  ]
         [ ....     .... ai ]
         [ 0  .... ci-1  bi ]

Expanding by the last row we get

δi = bi δi-1 - ci-1 ai δi-2 ;  i = 2, 3, 4, ....  ..... (I)

with δ1 = b1, and we define δ0 = 1. From (I), assuming that the δi are all nonzero, we get

δi / δi-1 = bi - ci-1 ai / (δi-1 / δi-2).

Setting ki = δi / δi-1, this can be written as

bi = ki + ci-1 ai / ki-1  ..... (II)

Now we seek a decomposition of the form A = LU, where

L = [ 1   0   0  ....  0 ]        U = [ u1  γ2  0   ....  0  ]
    [ w1  1   0  ....  0 ]            [ 0   u2  γ3  ....  0  ]
    [ 0   w2  1  ....  0 ]            [ ....        ....  γn ]
    [ ....       ....    ]            [ 0   0   ....  0   un ]
    [ 0   ....  wn-1   1 ]

i.e. we need the lower and upper triangular factors also to be bidiagonal. Note that if A = (Aij), then because A is tridiagonal, Aij is nonzero only when i and j differ by at most 1; i.e. only Ai-1,i, Aii, Ai+1,i are nonzero. In fact,

Ai-1,i = ai ;  Aii = bi ;  Ai+1,i = ci  ..... (III)

In the case of L and U we have

Li+1,i = wi ;  Lii = 1 ;  Lij = 0 if j > i, or if j < i and i - j ≥ 2  ..... (IV)

Ui,i+1 = γi+1 ;  Uii = ui ;  Uij = 0 if i > j, or if i < j and j - i ≥ 2  ..... (V)

Now A = LU is what is needed. Therefore

Aij = Σ (k=1 to n) Lik Ukj  ..... (VI)

In particular,

Ai-1,i = Σ (k=1 to n) Li-1,k Uki.

Using (III), (IV) and (V) we get

ai = Li-1,i-1 Ui-1,i = γi,

so that

γi = ai  ..... (VII)

This straight away gives us the off-diagonal entries of U. From (VI) we also get

Aii = Σ (k=1 to n) Lik Uki,

so that

bi = wi-1 γi + ui  ..... (VIII)

From (VI) we get further

Ai+1,i = Σ (k=1 to n) Li+1,k Uki = wi ui.

Thus

ci = wi ui  ..... (IX)

Using (IX) in (VIII) we get (also using γi = ai)

bi = (ci-1 / ui-1) ai + ui,

i.e.

bi = ui + ci-1 ai / ui-1  ..... (X)

Comparing (X) with (II) we get

ui = ki = δi / δi-1  ..... (XI)

Using this in (IX) we get

wi = ci / ui = ci δi-1 / δi  ..... (XII)

From (VII) we get

γi = ai  ..... (XIII)

(XI), (XII) and (XIII) completely determine the matrices L and U, and hence we get the LU decomposition.

Note: We can apply this method only when the δi are all nonzero, i.e. all the principal minors have nonzero determinant.

Example:

Let

A = [  2  -2   0   0   0 ]
    [ -2   1   1   0   0 ]
    [  0  -2   5  -2   0 ]
    [  0   0   9  -3   1 ]
    [  0   0   0   3  -1 ]

We have

b1 = 2, b2 = 1, b3 = 5, b4 = -3, b5 = -1 ;
a2 = -2, a3 = 1, a4 = -2, a5 = 1 ;
c1 = -2, c2 = -2, c3 = 9, c4 = 3.

Let us now find the LU decomposition as above. We have δ0 = 1, δ1 = 2, and

δ2 = b2 δ1 - c1 a2 δ0 = (1)(2) - (-2)(-2)(1) = 2 - 4 = -2
δ3 = b3 δ2 - c2 a3 δ1 = (5)(-2) - (-2)(1)(2) = -10 + 4 = -6
δ4 = b4 δ3 - c3 a4 δ2 = (-3)(-6) - (9)(-2)(-2) = 18 - 36 = -18
δ5 = b5 δ4 - c4 a5 δ3 = (-1)(-18) - (3)(1)(-6) = 18 + 18 = 36.

Note that δ1, δ2, δ3, δ4, δ5 are all nonzero, so we can apply the above method. Therefore by (XI) we get


u1 = δ1/δ0 = 2 ;  u2 = δ2/δ1 = -2/2 = -1 ;  u3 = δ3/δ2 = -6/(-2) = 3 ;
u4 = δ4/δ3 = -18/(-6) = 3 ;  u5 = δ5/δ4 = 36/(-18) = -2.

From (XII) we get

w1 = c1/u1 = -2/2 = -1 ;  w2 = c2/u2 = -2/(-1) = 2 ;
w3 = c3/u3 = 9/3 = 3 ;  w4 = c4/u4 = 3/3 = 1.

From (XIII) we get

γ2 = a2 = -2 ;  γ3 = a3 = 1 ;  γ4 = a4 = -2 ;  γ5 = a5 = 1.

Thus

L = [  1  0  0  0  0 ]        U = [ 2  -2  0   0  0 ]
    [ -1  1  0  0  0 ]            [ 0  -1  1   0  0 ]
    [  0  2  1  0  0 ]            [ 0   0  3  -2  0 ]
    [  0  0  3  1  0 ]            [ 0   0  0   3  1 ]
    [  0  0  0  1  1 ]            [ 0   0  0   0 -2 ]
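Recurrences (XI), (XII) and (XIII) can be sketched directly in code. This is our own illustration; the arrays are 0-based, unlike the 1-based indices of the notes:

```python
import numpy as np

def tridiag_lu(a, b, c):
    """LU factors of a tridiagonal matrix via ui = delta_i/delta_{i-1},
    wi = ci/ui, gamma_i = ai.

    b[0..n-1]: diagonal; a[0..n-2]: superdiagonal (a2..an of the notes);
    c[0..n-2]: subdiagonal (c1..c_{n-1} of the notes).
    """
    a, b, c = (np.array(v, dtype=float) for v in (a, b, c))
    n = len(b)
    u = np.zeros(n)
    u[0] = b[0]
    for i in range(1, n):                 # recurrence equivalent to (XI)
        u[i] = b[i] - c[i - 1] * a[i - 1] / u[i - 1]
    w = c / u[:-1]                        # (XII)
    L = np.eye(n) + np.diag(w, -1)
    U = np.diag(u) + np.diag(a, 1)        # (XIII): gamma_i = a_i
    return L, U

# the 5x5 example from the notes
b = [2, 1, 5, -3, -1]                     # diagonal
a = [-2, 1, -2, 1]                        # superdiagonal
c = [-2, -2, 9, 3]                        # subdiagonal
L, U = tridiag_lu(a, b, c)
A = np.diag(b) + np.diag(a, 1) + np.diag(c, -1)
```

Multiplying the computed factors recovers A, and the diagonal of U is (2, -1, 3, 3, -2), as found by hand above.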

In the above method we made all the diagonal entries of L equal to 1. This facilitates solving the triangular system Lz = y (equation (2)). However, by choosing these diagonals as 1 it may happen that the ui, the diagonal entries of U, are small, thus creating problems in the backward substitution for the system Ux = z (equation (1)). In order to avoid this situation, Wilkinson suggests that in any triangular decomposition one choose the diagonal entries of L and U to be of the same magnitude. This can be achieved as follows. We seek A = LU, where


L = [ l1             ]        U = [ u1  γ2  0   ....  0  ]
    [ w1  l2         ]            [ 0   u2  γ3  ....  0  ]
    [     ....       ]            [ ....        ....  γn ]
    [      wn-1   ln ]            [ 0   ....   0      un ]

so that now Lii = li, Li+1,i = wi, Lij = 0 otherwise; and Uii = ui, Ui,i+1 = γi+1, Uij = 0 otherwise.

Now (VII), (VIII) and (IX) change as follows:

ai = Ai-1,i = Σ (k=1 to n) Li-1,k Uki = Li-1,i-1 Ui-1,i = li-1 γi,

so that

ai = li-1 γi  ..... (VII')

bi = Aii = Σ (k=1 to n) Lik Uki = Li,i-1 Ui-1,i + Lii Uii = wi-1 γi + li ui  ..... (VIII')

ci = Ai+1,i = Σ (k=1 to n) Li+1,k Uki = Li+1,i Uii = wi ui,

so that

ci = wi ui  ..... (IX')

From (VIII') we get, using (VII') and (IX'),

bi = (ci-1 / ui-1)(ai / li-1) + li ui = ai ci-1 / (li-1 ui-1) + li ui,

i.e.

bi = ai ci-1 / pi-1 + pi  ..... (X')

where pi = li ui. Comparing (X') with (II) we get

pi = ki = δi / δi-1,

therefore li ui = δi / δi-1. We choose

li = sqrt( |δi / δi-1| )  ..... (XIV)

ui = sgn(δi / δi-1) sqrt( |δi / δi-1| )  ..... (XV)

Thus li and ui have the same magnitude. These can then be used to get wi and γi from (IX') and (VII'). We get, finally,

li = sqrt( |δi / δi-1| ) ;  ui = sgn(δi / δi-1) sqrt( |δi / δi-1| )  ..... (XI')

wi = ci / ui  ..... (XII')

γi = ai / li-1  ..... (XIII')

These are the generalizations of formulae (XI), (XII) and (XIII). Let us apply this to our example matrix above.

We have δ0 = 1, δ1 = 2, δ2 = -2, δ3 = -6, δ4 = -18, δ5 = 36, with

b1 = 2, b2 = 1, b3 = 5, b4 = -3, b5 = -1 ;  a2 = -2, a3 = 1, a4 = -2, a5 = 1 ;  c1 = -2, c2 = -2, c3 = 9, c4 = 3.

We get

δ1/δ0 = 2 ;  δ2/δ1 = -1 ;  δ3/δ2 = 3 ;  δ4/δ3 = 3 ;  δ5/δ4 = -2.

Thus from (XI') we get

l1 = √2,  l2 = 1,  l3 = √3,  l4 = √3,  l5 = √2 ;
u1 = √2,  u2 = -1,  u3 = √3,  u4 = √3,  u5 = -√2.

From (XII') and (XIII') we get

w1 = c1/u1 = -2/√2 = -√2 ;      γ2 = a2/l1 = -2/√2 = -√2 ;
w2 = c2/u2 = -2/(-1) = 2 ;      γ3 = a3/l2 = 1/1 = 1 ;
w3 = c3/u3 = 9/√3 = 3√3 ;       γ4 = a4/l3 = -2/√3 ;
w4 = c4/u4 = 3/√3 = √3 ;        γ5 = a5/l4 = 1/√3.

Thus we have the LU decomposition

A = [  2  -2   0   0   0 ]   [  √2   0    0    0   0  ] [ √2  -√2   0     0     0   ]
    [ -2   1   1   0   0 ]   [ -√2   1    0    0   0  ] [ 0   -1    1     0     0   ]
    [  0  -2   5  -2   0 ] = [  0    2    √3   0   0  ] [ 0    0    √3  -2/√3   0   ]
    [  0   0   9  -3   1 ]   [  0    0   3√3   √3  0  ] [ 0    0    0    √3   1/√3 ]
    [  0   0   0   3  -1 ]   [  0    0    0    √3  √2 ] [ 0    0    0    0    -√2  ]

in which L and U have corresponding diagonal elements of the same magnitude.
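Formulae (XI'), (XII') and (XIII') translate into code just as easily. The sketch below (our own code, 0-based indices) also checks that the two diagonals indeed agree in magnitude:

```python
import numpy as np

def tridiag_lu_scaled(a, b, c):
    """Wilkinson-style tridiagonal LU with |l_i| = |u_i| = sqrt|p_i|,
    where p_i = delta_i / delta_{i-1} (formulae (XI')-(XIII'))."""
    a, b, c = (np.array(v, dtype=float) for v in (a, b, c))
    n = len(b)
    l = np.zeros(n)
    u = np.zeros(n)
    p = None
    for i in range(n):
        # p_i = b_i - a_i c_{i-1} / p_{i-1}, with p_1 = b_1 (cf. (X'))
        p = b[i] if i == 0 else b[i] - a[i - 1] * c[i - 1] / p
        l[i] = np.sqrt(abs(p))            # (XI')
        u[i] = np.sign(p) * l[i]
    w = c / u[:-1]                        # (XII')
    g = a / l[:-1]                        # (XIII'): gamma_i = a_i / l_{i-1}
    L = np.diag(l) + np.diag(w, -1)
    U = np.diag(u) + np.diag(g, 1)
    return L, U

b = [2, 1, 5, -3, -1]
a = [-2, 1, -2, 1]
c = [-2, -2, 9, 3]
L, U = tridiag_lu_scaled(a, b, c)
A = np.diag(b) + np.diag(a, 1) + np.diag(c, -1)
```

The product of the computed factors recovers A, and each |lii| equals the corresponding |uii|.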


DOOLITTLE'S LU DECOMPOSITION

We shall now consider the LU decomposition of a general matrix. The method we describe is due to Doolittle. Let A = (aij). We seek, as in the case of a tridiagonal matrix, an LU decomposition in which the diagonal entries lii of L are all 1. Let L = (lij), U = (uij). Since L is a lower triangular matrix, we have lij = 0 if j > i, and by our choice lii = 1. Similarly, since U is an upper triangular matrix, we have uij = 0 if i > j. We determine L and U as follows.

The 1st row of U and 1st column of L are determined as follows:

a1j = Σ (k=1 to n) l1k ukj = l11 u1j   (since l1k = 0 for k > 1)
    = u1j                              (since l11 = 1).

Hence

u1j = a1j  ..... (I)

Thus the first row of U is the same as the first row of A. The first column of L is determined as follows:

aj1 = Σ (k=1 to n) ljk uk1 = lj1 u11   (since uk1 = 0 for k > 1),

so that

lj1 = aj1 / u11  ..... (II)

Note: u11 has already been obtained from (I). Thus (I) and (II) determine respectively the first row of U and the first column of L. The other rows of U and columns of L are determined recursively as given below. Suppose we have determined the first i-1 rows of U and the first i-1 columns of L. We now describe how one then determines the ith row of U and the ith column of L. Since the first i-1 rows of U have been determined, the ukj are all known for 1 ≤ k ≤ i-1, 1 ≤ j ≤ n. Similarly, since the first i-1 columns of L are known, the ljk are all known for 1 ≤ j ≤ n, 1 ≤ k ≤ i-1. Now

aij = Σ (k=1 to n) lik ukj = Σ (k=1 to i) lik ukj   (since lik = 0 for k > i)

    = Σ (k=1 to i-1) lik ukj + uij   (since lii = 1),

so that

uij = aij - Σ (k=1 to i-1) lik ukj  ..... (III)

Note that on the RHS we have aij, which is known from the given matrix. Also, the sum on the RHS involves lik for 1 ≤ k ≤ i-1, which are all known because they involve entries in the first i-1 columns of L; and it involves ukj, 1 ≤ k ≤ i-1, which are also known since they involve only the entries in the first i-1 rows of U. Thus (III) determines the ith row of U in terms of the given matrix and quantities determined up to the previous stage. Now we describe how to get the ith column of L:

aji = Σ (k=1 to n) ljk uki = Σ (k=1 to i) ljk uki   (since uki = 0 for k > i)

    = Σ (k=1 to i-1) ljk uki + lji uii,

so that

lji = (1/uii) [ aji - Σ (k=1 to i-1) ljk uki ]  ..... (IV)

Once again we note that the RHS involves uii, which has been determined using (III); aji, which is from the given matrix; ljk, 1 ≤ k ≤ i-1, and hence only entries in the first i-1 columns of L; and uki, 1 ≤ k ≤ i-1, and hence only entries in the first i-1 rows of U. Thus the RHS in (IV) is completely known, and hence the lji, the entries in the ith column of L, are completely determined by (IV).


Summarizing, Doolittle's procedure is as follows:

Step 1 (1st row of U and 1st column of L): lii = 1 ; the 1st row of U = the 1st row of A ; lj1 = aj1/u11.

For i ≥ 2 we determine

uij = aij - Σ (k=1 to i-1) lik ukj ;   j = i, i+1, ...., n   (note: for j < i we have uij = 0),

lji = (1/uii) [ aji - Σ (k=1 to i-1) ljk uki ] ;   j = i+1, ...., n   (note: for j < i we have lji = 0),

alternately: the ith row of U, then the ith column of L. We observe that the method fails if uii = 0 for some i.

Example: Let

A = [  2   1  -1   3 ]
    [ -2   2   6  -4 ]
    [  4  14  19   4 ]
    [  6   0  -6  12 ]

Let us determine the Doolittle decomposition of this matrix.

First step:


1st row of U: same as the 1st row of A:

u11 = 2 ;  u12 = 1 ;  u13 = -1 ;  u14 = 3.

1st column of L:

l11 = 1 ;  l21 = a21/u11 = -2/2 = -1 ;  l31 = a31/u11 = 4/2 = 2 ;  l41 = a41/u11 = 6/2 = 3.

Second step:

2nd row of U: u21 = 0 (because U is upper triangular), and

u22 = a22 - l21 u12 = 2 - (-1)(1) = 3
u23 = a23 - l21 u13 = 6 - (-1)(-1) = 5
u24 = a24 - l21 u14 = -4 - (-1)(3) = -1

2nd column of L: l12 = 0 (because L is lower triangular), l22 = 1, and

l32 = (a32 - l31 u12)/u22 = [14 - (2)(1)]/3 = 4
l42 = (a42 - l41 u12)/u22


= [0 - (3)(1)]/3 = -1.

Third step:

3rd row of U: u31 = 0, u32 = 0, and

u33 = a33 - l31 u13 - l32 u23 = 19 - (2)(-1) - (4)(5) = 1
u34 = a34 - l31 u14 - l32 u24 = 4 - (2)(3) - (4)(-1) = 2

3rd column of L: l13 = 0, l23 = 0, l33 = 1, and

l43 = (a43 - l41 u13 - l42 u23)/u33 = [-6 - (3)(-1) - (-1)(5)]/1 = 2.

Fourth step:

4th row of U: u41 = 0, u42 = 0, u43 = 0, and

u44 = a44 - l41 u14 - l42 u24 - l43 u34 = 12 - (3)(3) - (-1)(-1) - (2)(2) = -2.


4th column of L: l14 = l24 = l34 = 0 (because L is lower triangular), and l44 = 1. Thus

L = [  1   0  0  0 ]        U = [ 2  1  -1   3 ]
    [ -1   1  0  0 ]            [ 0  3   5  -1 ]
    [  2   4  1  0 ]            [ 0  0   1   2 ]
    [  3  -1  2  1 ]            [ 0  0   0  -2 ]

and A = LU. This gives us the LU decomposition of the given A by Doolittle's method.

As we observed in the case of the LU decomposition of a tridiagonal matrix, it may be preferable not to choose the lii as 1, but to choose them in such a way that the diagonal entries of L and the corresponding diagonal entries of U are of the same magnitude. We describe this procedure as follows. Once again the 1st row of U and 1st column of L are our first concern.

Step 1:

a11 = l11 u11  ..... (V)

Choose

l11 = sqrt(|a11|) ;  u11 = sgn(a11) sqrt(|a11|).

Next,

a1j = Σ (k=1 to n) l1k ukj = l11 u1j,   as l1k = 0 for k > 1,

so that

u1j = a1j / l11.

Thus note that the u1j have now been scaled, as compared with what we did earlier. Similarly,

lj1 = aj1 / u11.

These determine the first row of U and the first column of L. Suppose we have determined the first i-1 rows of U and the first i-1 columns of L. We determine the ith row of U and the ith column of L as follows:

aii = Σ (k=1 to n) lik uki = Σ (k=1 to i) lik uki   (since lik = 0 for k > i)

    = Σ (k=1 to i-1) lik uki + lii uii,

so that

lii uii = aii - Σ (k=1 to i-1) lik uki = pi, say.

Choose
VittalRao/IISc, Bangalore

M1/L6/V1/May 2004/8

## Numerical Analysis/Direct methods for solving linear system of equation

Lecture notes

u ii = sgn pi
n i

pi

a ij = l ik u kj = l ik u kj Q l ik = 0 fork > i
k 1 k =1

l
k =1

i 1

ik

u kj + l ii u ij
lii

i 1 u ij = a ij l ik u kj k =1

## determining the ith row of U.

n

a ji =

l
k =1

jk

u ki

l
k =1

jk

u ki Q u ki = 0 ifk > i

i 1

k =1

l jk u ki + l ji u ii

i 1 l ji = a ji l jk u ki k =1

uii ,

thus determining the ith column of L. Let us now apply this to matrix A in the example in page 30.
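Before the worked example, the recursion above can be sketched directly in code. This is a minimal illustration of my own (the function name is an assumption, and the matrix in the usage note is reconstructed from the worked example's factors, not quoted from the notes):

```python
import math

def scaled_doolittle(A):
    """LU decomposition with |l_ii| = |u_ii| for every i (the scaled
    Doolittle variant described above).  At stage i:
        p_i  = a_ii - sum_{k<i} l_ik * u_ki
        l_ii = sqrt(|p_i|),  u_ii = sgn(p_i) * sqrt(|p_i|)
    No pivoting is done, so the routine fails if some p_i is zero.
    """
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        p = A[i][i] - sum(L[i][k] * U[k][i] for k in range(i))
        L[i][i] = math.sqrt(abs(p))
        U[i][i] = math.copysign(math.sqrt(abs(p)), p)
        for j in range(i + 1, n):   # i-th row of U
            U[i][j] = (A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))) / L[i][i]
        for j in range(i + 1, n):   # i-th column of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U
```

Applied to A = [[2,1,-1,3],[2,4,4,2],[4,14,19,4],[6,0,-6,12]] (the worked example's matrix as reconstructed from its factors) it gives l11 = u11 = sqrt(2), l22 = u22 = sqrt(3), l44 = sqrt(2), u44 = -sqrt(2), matching the computation below.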


First Step:

l11 u11 = a11 = 2, so l11 = sqrt(2), u11 = sqrt(2);

u12 = a12/l11 = 1/sqrt(2);  u13 = a13/l11 = -1/sqrt(2);  u14 = a14/l11 = 3/sqrt(2);

l21 = a21/u11 = 2/sqrt(2) = sqrt(2);
l31 = a31/u11 = 4/sqrt(2) = 2 sqrt(2);
l41 = a41/u11 = 6/sqrt(2) = 3 sqrt(2).

Second Step:

l22 u22 = a22 - l21 u12 = 4 - sqrt(2).(1/sqrt(2)) = 3, so l22 = sqrt(3), u22 = sqrt(3);

u23 = (a23 - l21 u13)/l22 = [4 - sqrt(2).(-1/sqrt(2))]/sqrt(3) = 5/sqrt(3);
u24 = (a24 - l21 u14)/l22 = [2 - sqrt(2).(3/sqrt(2))]/sqrt(3) = -1/sqrt(3);

therefore u21 = 0, u22 = sqrt(3), u23 = 5/sqrt(3), u24 = -1/sqrt(3).

l32 = (a32 - l31 u12)/u22 = [14 - 2 sqrt(2).(1/sqrt(2))]/sqrt(3) = 12/sqrt(3) = 4 sqrt(3);
l42 = (a42 - l41 u12)/u22 = [0 - 3 sqrt(2).(1/sqrt(2))]/sqrt(3) = -sqrt(3);

therefore l12 = 0, l22 = sqrt(3), l32 = 4 sqrt(3), l42 = -sqrt(3).

Third Step:

l33 u33 = a33 - l31 u13 - l32 u23 = 19 - 2 sqrt(2).(-1/sqrt(2)) - 4 sqrt(3).(5/sqrt(3)) = 19 + 2 - 20 = 1,

so l33 = 1, u33 = 1.

u34 = (a34 - l31 u14 - l32 u24)/l33 = [4 - 2 sqrt(2).(3/sqrt(2)) - 4 sqrt(3).(-1/sqrt(3))]/1 = 4 - 6 + 4 = 2.

Therefore u31 = 0, u32 = 0, u33 = 1, u34 = 2.

l43 = (a43 - l41 u13 - l42 u23)/u33 = [-6 - 3 sqrt(2).(-1/sqrt(2)) - (-sqrt(3))(5/sqrt(3))]/1 = -6 + 3 + 5 = 2.

Therefore l13 = 0, l23 = 0, l33 = 1, l43 = 2.

Fourth Step:

l44 u44 = a44 - l41 u14 - l42 u24 - l43 u34
        = 12 - 3 sqrt(2).(3/sqrt(2)) - (-sqrt(3))(-1/sqrt(3)) - (2)(2) = 12 - 9 - 1 - 4 = -2,

so l44 = sqrt(2), u44 = -sqrt(2).

Therefore u41 = 0, u42 = 0, u43 = 0, u44 = -sqrt(2); and l14 = 0, l24 = 0, l34 = 0, l44 = sqrt(2).

Thus we get the LU decomposition

L = | sqrt(2)        0         0      0    |      U = | sqrt(2)  1/sqrt(2)  -1/sqrt(2)  3/sqrt(2) |
    | sqrt(2)    sqrt(3)       0      0    |          |    0      sqrt(3)   5/sqrt(3)  -1/sqrt(3)|
    | 2 sqrt(2)  4 sqrt(3)     1      0    |          |    0         0          1          2     |
    | 3 sqrt(2)  -sqrt(3)      2   sqrt(2) |          |    0         0          0      -sqrt(2)  |

in which |lii| = |uii|, i.e. the corresponding diagonal entries of L and U have the same magnitude.

Note: Compare this with the L and U of page 32. What is the difference? The U on page 36 can be obtained from the U of page 32 by (1) replacing each number on the diagonal of that U by the square root of its magnitude, keeping the same sign — thus the first diagonal 2 is replaced by sqrt(2), the 2nd diagonal 3 by sqrt(3), the third diagonal 1 by 1, and the 4th diagonal -2 by -sqrt(2); these give the diagonals of the U on page 36 — and (2) dividing each entry to the right of a diagonal in the U of page 32 by the replaced diagonal of its row.


Thus the 1st, 2nd, 3rd and 4th rows change to the corresponding rows of the U on page 36. This gives the U of page 36 from that of page 32. The L on page 36 can be obtained from the L of page 32 as follows: (1) replace the diagonals of L by the magnitudes of the corresponding diagonals of the U on page 36; (2) multiply each entry below a diagonal of L by the new diagonal entry of its column. This changes the L of page 32 into the L of page 36.


DOOLITTLE'S METHOD WITH ROW INTERCHANGES

We have seen that the Doolittle factorization of a matrix A may fail the moment we encounter, at stage i, a uii which is zero. This corresponds to the occurrence of a zero pivot at the ith stage of the simple Gaussian elimination method. Just as we avoided this problem in Gaussian elimination by introducing partial pivoting, we can adopt the same idea in a modified Doolittle procedure. The Doolittle factorization A = LU is used to reduce the system Ax = y to the two triangular systems Lz = y, Ux = z, as already mentioned on page 17. Thus, instead of actually looking for a factorization of A itself, we shall look for an equivalent system A*x = y* for which A* has an LU decomposition. The basic idea is: at each stage, calculate all the uii that one can get by permutation of rows of the matrix, and choose the row order which gives the maximum absolute value for uii. As an example consider the system Ax = y where

A = | 3   1  -2  -1 |       y = |  3 |
    | 2  -2   2   3 |           | -8 |
    | 1   5  -4  -1 |           |  3 |
    | 3   1   2   3 |           | -1 |

We keep lii = 1.

Stage 1:

We want LU decomposition for some matrix that is obtained from A by row interchanges.

1st diagonal of U: by the Doolittle decomposition, u11 = a11 = 3.


If we interchange the 2nd, 3rd or 4th row with the 1st row and then find u11 for the new matrix, we get respectively u11 = 2, 1 or 3. Thus interchange of rows gives no advantage at this stage, as we have already got 3 for u11 without a row interchange. So we keep the matrix as it is and calculate the 1st row of U by Doolittle's method.

l11 = 1;  l21 = a21/u11 = 2/3;  l31 = a31/u11 = 1/3;  l41 = a41/u11 = 3/3 = 1.

Thus L is of the form

| 1    0  0  0 |        and U is of the form   | 3  1  -2  -1 |
| 2/3  1  0  0 |                               | 0  *   *   * |
| 1/3  *  1  0 |                               | 0  0   *   * |
| 1    *  *  1 |                               | 0  0   0   * |

Stage 2:

u22 = a22 - l21 u12 = -2 - (2/3)(1) = -8/3.

Suppose we interchange the 2nd row with the 3rd row of A and calculate u22: our new a22 is 5. But note that in L the 1st-column entries of the 2nd and 3rd rows are also interchanged, so the new l21 is 1/3, and the new u22 = 5 - (1/3)(1) = 14/3. Suppose instead we interchange the 2nd row with the 4th row of A: the new a22 = 1 and the new l21 = 1, so the new u22 = 1 - (1)(1) = 0. Of the three candidates -8/3, 14/3 and 0, the value 14/3 has the largest absolute value, so we prefer it. Therefore we interchange the 2nd and 3rd rows:

NewA = | 3   1  -2  -1 |       NewY = |  3 |
       | 1   5  -4  -1 |              |  3 |
       | 2  -2   2   3 |              | -8 |
       | 3   1   2   3 |              | -1 |


NewL = | 1    0  0  0 |       NewU = | 3   1    -2  -1 |
       | 1/3  1  0  0 |              | 0  14/3   *   * |
       | 2/3  *  1  0 |              | 0   0     *   * |
       | 1    *  *  1 |              | 0   0     0   * |

Now we do the Doolittle calculation for this new matrix to get 2nd row of U and 2nd column of L.

u23 = a23 - l21 u13 = (-4) - (1/3)(-2) = -10/3

u24 = a24 - l21 u14 = (-1) - (1/3)(-1) = -2/3

2nd column of L:

l32 = [a32 - l31 u12]/u22 = [-2 - (2/3)(1)]/(14/3) = -4/7

l42 = [a42 - l41 u12]/u22 = [1 - (1)(1)]/(14/3) = 0

Therefore the new L has the form

| 1     0    0  0 |      and the new U has the form   | 3   1     -2    -1   |
| 1/3   1    0  0 |                                   | 0  14/3  -10/3  -2/3 |
| 2/3  -4/7  1  0 |                                   | 0   0      *     *   |
| 1     0    *  1 |                                   | 0   0      0     *   |
This completes the 2nd stage of our computation. Note: we had three choices of u22, namely -8/3, 14/3 and 0, before we chose 14/3. It appears that we are doing more work than in Doolittle's method, but this is not really so: observe that the rejected u22 values, -8/3 and 0, when divided by the chosen u22 = 14/3, give exactly the entries of L below the second diagonal (-4/7 and 0).


Stage 3:

u33 = a33 - l31 u13 - l32 u23 = 2 - (2/3)(-2) - (-4/7)(-10/3) = 10/7.

Suppose we interchange the 3rd and 4th rows of the NewA obtained in the 2nd stage. We get new a33 = 2; but in L the computed columns also get their 3rd and 4th rows interchanged, so new l31 = 1 and new l32 = 0. Therefore

new u33 = a33 - l31 u13 - l32 u23 = 2 - (1)(-2) - (0)(-10/3) = 4.

Of these two choices of u33, 4 has the larger magnitude. So we interchange the 3rd and 4th rows of the matrix of the 2nd stage to get

NewA = | 3   1  -2  -1 |       NewY = |  3 |
       | 1   5  -4  -1 |              |  3 |
       | 3   1   2   3 |              | -1 |
       | 2  -2   2   3 |              | -8 |

NewL = | 1     0    0  0 |       NewU = | 3   1     -2    -1   |
       | 1/3   1    0  0 |              | 0  14/3  -10/3  -2/3 |
       | 1     0    1  0 |              | 0   0      4     *   |
       | 2/3  -4/7  *  1 |              | 0   0      0     *   |

Now for this set-up we calculate the 3rd-stage entries as in Doolittle's method:

u34 = a34 - l31 u14 - l32 u24 = 3 - (1)(-1) - (0)(-2/3) = 4

l43 = (a43 - l41 u13 - l42 u23)/u33 = [2 - (2/3)(-2) - (-4/7)(-10/3)]/4 = (10/7)/4 = 5/14.

NewL = | 1     0     0     0 |       NewU = | 3   1     -2    -1   |
       | 1/3   1     0     0 |              | 0  14/3  -10/3  -2/3 |
       | 1     0     1     0 |              | 0   0      4     4   |
       | 2/3  -4/7   5/14  1 |              | 0   0      0     *   |

4th Stage:

Note: the rejected u33 (= 10/7) divided by the chosen u33 (= 4) gives l43 = 5/14.

u44 = a44 - l41 u14 - l42 u24 - l43 u34 = 3 - (2/3)(-1) - (-4/7)(-2/3) - (5/14)(4) = 13/7.

Thus

NewA = A* = | 3   1  -2  -1 |       NewY = Y* = |  3 |
            | 1   5  -4  -1 |                   |  3 |
            | 3   1   2   3 |                   | -1 |
            | 2  -2   2   3 |                   | -8 |

New L = L* = | 1     0     0     0 |       New U = U* = | 3   1     -2    -1   |
             | 1/3   1     0     0 |                    | 0  14/3  -10/3  -2/3 |
             | 1     0     1     0 |                    | 0   0      4     4   |
             | 2/3  -4/7   5/14  1 |                    | 0   0      0    13/7 |

and A* = L*U*. The given system Ax = y is equivalent to the system A*x = y* and hence can be split into the triangular systems

L*z = y*,  U*x = z.

Now L*z = y* gives by forward substitution: z1 = 3.

(1/3) z1 + z2 = 3, so z2 = 3 - 1 = 2.

z1 + z3 = -1, so z3 = -1 - z1 = -4.

(2/3) z1 - (4/7) z2 + (5/14) z3 + z4 = -8, i.e. 2 - 8/7 - 10/7 + z4 = -8, so z4 = -52/7.

Therefore z = (3, 2, -4, -52/7)T, and U*x = z gives by back-substitution:

(13/7) x4 = -52/7, therefore x4 = -4.

4 x3 + 4 x4 = -4, i.e. x3 + x4 = -1, so x3 = -1 - x4 = 3.

(14/3) x2 - (10/3) x3 - (2/3) x4 = 2, i.e. (14/3) x2 - 10 + 8/3 = 2, so x2 = 2.

3 x1 + x2 - 2 x3 - x4 = 3, i.e. 3 x1 + 2 - 6 + 4 = 3, so x1 = 1.

Therefore the solution of the given system is


x = (1, 2, 3, -4)T.
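The whole procedure of this lecture — pick, at each stage, the row giving the pivot of largest magnitude, factor, then solve the two triangular systems — can be sketched as follows. This is a minimal illustration of my own (the function names are assumptions) using standard partial pivoting, which makes the same row choices as the worked example above:

```python
def lu_partial_pivot(A):
    """Doolittle LU with row interchanges: at stage i the row whose pivot
    u_ii has the largest absolute value is swapped into place.
    Returns (perm, L, U): row i of the permuted A is A[perm[i]],
    L is unit lower triangular, U is upper triangular."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[0.0] * n for _ in range(n)]
    perm = list(range(n))
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(U[r][i]))  # best pivot row
        U[i], U[p] = U[p], U[i]
        L[i], L[p] = L[p], L[i]      # L rows only filled in columns < i
        perm[i], perm[p] = perm[p], perm[i]
        L[i][i] = 1.0
        for r in range(i + 1, n):
            m = U[r][i] / U[i][i]
            L[r][i] = m
            for c in range(i, n):
                U[r][c] -= m * U[i][c]
    return perm, L, U

def solve(A, y):
    """Solve Ax = y: L z = (permuted y) by forward substitution,
    then U x = z by back substitution."""
    n = len(A)
    perm, L, U = lu_partial_pivot(A)
    yp = [y[p] for p in perm]
    z = [0.0] * n
    for i in range(n):
        z[i] = yp[i] - sum(L[i][k] * z[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (z[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x
```

On the example system (A and y as reconstructed here) this swaps rows 2,3 and then rows 3,4, exactly as above, and returns x close to (1, 2, 3, -4).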

Some Remarks: The factorization of a matrix A as the product of lower and upper triangular matrices is by no means unique. In fact, the diagonal elements of one or the other factor can be chosen arbitrarily; all the remaining elements of the upper and lower triangular matrices may then be uniquely determined, as in Doolittle's method, which is the case when we choose all the diagonal entries of L as 1. The name of Crout is often associated with triangular decomposition methods; in Crout's method the diagonal elements of U are all chosen as unity. Apart from this there is little distinction, as regards procedure or accuracy, between the two methods. As already mentioned, Wilkinson's suggestion is to get an LU decomposition in which |lii| = |uii|, 1 <= i <= n.

We finally look at the Cholesky decomposition of a symmetric matrix. Let A be symmetric and let A = LU be an LU decomposition. Then AT = UT LT. Now UT is lower triangular and LT is upper triangular, so UT LT is a decomposition of AT as a product of lower and upper triangular matrices. But AT = A since A is symmetric; therefore LU = UT LT. We ask whether we can choose L as UT, so that A = UT U (the same as L LT). Determining U then automatically gives L = UT. We now apply the Doolittle idea to this; note that it is enough to determine the rows of U.

Stage 1 (1st row of U):


a11 = sum_{k=1}^{n} l1k uk1 = sum_{k=1}^{n} uk1^2   (since l1k = uk1, L being UT)
    = u11^2   (since uk1 = 0 for k > 1, U being upper triangular).

Hence u11 = sqrt(a11). Next,

a1i = sum_{k=1}^{n} l1k uki = sum_{k=1}^{n} uk1 uki = u11 u1i   (since uk1 = 0 for k > 1).

So

u11 = sqrt(a11),  u1i = a1i / u11

determine the first row of U, and hence the first column of L.


Having determined the first i-1 rows of U, we determine the ith row of U as follows:

aii = sum_{k=1}^{n} lik uki = sum_{k=1}^{n} uki^2   (since lik = uki)
    = sum_{k=1}^{i} uki^2   (since uki = 0 for k > i)
    = sum_{k=1}^{i-1} uki^2 + uii^2,

so

uii^2 = aii - sum_{k=1}^{i-1} uki^2,  i.e.  uii = sqrt(aii - sum_{k=1}^{i-1} uki^2).

(Note: the uki are known for k <= i-1, since the first i-1 rows have already been obtained.) Next, for j > i,

aij = sum_{k=1}^{n} lik ukj = sum_{k=1}^{n} uki ukj = sum_{k=1}^{i} uki ukj   (because uki = 0 for k > i)
    = sum_{k=1}^{i-1} uki ukj + uii uij.

Therefore

uij = (aij - sum_{k=1}^{i-1} uki ukj) / uii.

Thus

uii = sqrt(aii - sum_{k=1}^{i-1} uki^2),   uij = (aij - sum_{k=1}^{i-1} uki ukj) / uii

determine the ith row of U in terms of the previous rows. Thus we get U, and L is UT. This is called the CHOLESKY decomposition.
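The row-by-row recursion just derived translates directly into code. A minimal sketch (the function name is mine; A is assumed symmetric positive definite so that every square root exists):

```python
import math

def cholesky_rows(A):
    """Cholesky factor U (upper triangular) with A = U^T U, computed one
    row at a time:
        u_ii = sqrt(a_ii - sum_{k<i} u_ki^2)
        u_ij = (a_ij - sum_{k<i} u_ki * u_kj) / u_ii   for j > i
    """
    n = len(A)
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        U[i][i] = math.sqrt(A[i][i] - sum(U[k][i] ** 2 for k in range(i)))
        for j in range(i + 1, n):
            U[i][j] = (A[i][j] - sum(U[k][i] * U[k][j] for k in range(i))) / U[i][i]
    return U
```

For the symmetric matrix of the worked example below (as reconstructed here), A = [[1,1,1,1],[1,5,3,-3],[1,3,3,1],[1,-3,1,10]], this returns U with rows (1,1,1,1), (0,2,1,-2), (0,0,1,2), (0,0,0,1).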

Example: Let

A = | 1   1  1   1 |
    | 1   5  3  -3 |
    | 1   3  3   1 |
    | 1  -3  1  10 |

This is a symmetric matrix. Let us find the Cholesky decomposition.

1st row of U:  u11 = sqrt(a11) = 1;  u12 = a12/u11 = 1;  u13 = 1;  u14 = 1.

2nd row of U:  u22 = sqrt(a22 - u12^2) = sqrt(5 - 1) = 2;
               u23 = (a23 - u12 u13)/u22 = (3 - 1)/2 = 1;
               u24 = (a24 - u12 u14)/u22 = (-3 - 1)/2 = -2.

3rd row of U:  u33 = sqrt(a33 - u13^2 - u23^2) = sqrt(3 - 1 - 1) = 1;
               u34 = (a34 - u13 u14 - u23 u24)/u33 = (1 - (1)(1) - (1)(-2))/1 = 2.

4th row of U:  u44 = sqrt(a44 - u14^2 - u24^2 - u34^2) = sqrt(10 - 1 - 4 - 4) = 1.

Thus

U = | 1  1  1   1 |        L = UT = | 1   0  0  0 |
    | 0  2  1  -2 |                 | 1   2  0  0 |
    | 0  0  1   2 |                 | 1   1  1  0 |
    | 0  0  0   1 |                 | 1  -2  2  1 |

and A = LU = L LT = UT U.

## Numerical Analysis / Iterative methods for solving linear systems of equations

Lecture notes

ITERATIVE METHODS FOR THE SOLUTION OF SYSTEMS OF EQUATIONS

In general an iterative scheme is as follows: we have an nxn matrix M and we want to get the solution of the system

x = Mx + y  ..(1)

We obtain the solution x as the limit of a sequence of vectors {x(k)}, which are obtained as follows: we start with any initial vector x(0), and calculate x(k) from

x(k) = Mx(k-1) + y  ..(2)

for k = 1, 2, 3, ... successively. A necessary and sufficient condition for the sequence of vectors x(k) to converge to the solution x of (1) is that the spectral radius of the iterating matrix M be less than 1, i.e. ||M||sp < 1; it is sufficient that ||M|| < 1 for some matrix norm.

We shall now consider some iterative schemes for solving systems of linear equations,

Ax = y  ..(3)

We write this system in detail as

a11 x1 + a12 x2 + ..... + a1n xn = y1
a21 x1 + a22 x2 + ..... + a2n xn = y2
......                               ..(4)
an1 x1 + an2 x2 + ..... + ann xn = yn

We have A = (aij), the nxn matrix of coefficients  ..(5). We denote by D, L, U the matrices

D = diag(a11, a22, ....., ann), the diagonal part of A  ..(6)

L = the strictly lower triangular part of A (entries aij for i > j, zero elsewhere)  ..(7)

U = the strictly upper triangular part of A (entries aij for i < j, zero elsewhere)  ..(8)

Note that A = D + L + U. We assume that aii != 0, i = 1, 2, ....., n, so that D^{-1} exists.

We now describe two important iterative schemes, below, for solving the system (3).


Jacobi Iteration. We rewrite (4) as

a11 x1 = -a12 x2 - a13 x3 - ..... - a1n xn + y1
a22 x2 = -a21 x1 - a23 x3 - ..... - a2n xn + y2
......                                            ..(11)
ann xn = -an1 x1 - ..... - a(n,n-1) x(n-1) + yn

We start with an initial vector

x(0) = (x1(0), x2(0), ....., xn(0))T  ..(12)

substitute this vector for x on the RHS of (11) and calculate x1, x2, ....., xn; this vector is called x(1). We then substitute x(1) on the RHS of (11) to calculate x1, x2, ....., xn again, call this new vector x(2), and continue this procedure to generate the sequence x(k). We can describe this briefly as follows: equation (11) can be written as Dx = -(L + U)x + y, which gives

x = -D^{-1}(L + U)x + D^{-1}y  ..(13)

i.e.

x = Jx + y^,  where J = -D^{-1}(L + U) and y^ = D^{-1}y  ..(14)

and

x(k) = Jx(k-1) + y^;  k = 1, 2, .....  ..(16)

as the iterating scheme. This is of the form (2) with iterating matrix M = J; J is called the Jacobi iteration matrix. The scheme will converge to the solution x of our system if ||J||sp < 1. We shall see an easier sufficient condition below.

We have

J = -D^{-1}(L + U) = |    0       -a12/a11  .....  -a1n/a11 |
                     | -a21/a22      0      .....  -a2n/a22 |
                     |   ....      ....     .....    ....   |
                     | -an1/ann  -an2/ann   .....     0     |

Now, therefore, the ith absolute row sum of J is

Ri = sum_{j != i} |aij|/|aii| = (|ai1| + |ai2| + .... + |a(i,i-1)| + |a(i,i+1)| + .... + |ain|) / |aii|.

Hence the row-sum norm of J is < 1 if, for each i,

|ai1| + |ai2| + ..... + |a(i,i-1)| + |a(i,i+1)| + ..... + |ain| < |aii|,

i.e. in each row of A the sum of the absolute values of the nondiagonal entries is dominated by the absolute value of the diagonal entry, i.e. A is strictly row diagonally dominant. Thus the Jacobi iteration scheme for the system (3) converges if A is strictly row diagonally dominant. (Of course this condition may not be satisfied, and still the Jacobi iteration scheme may converge, provided ||J||sp < 1.)
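Both the Jacobi matrix J and the dominance test are mechanical to form; the following is a small sketch (the function names are mine, not from the notes):

```python
def jacobi_matrix(A):
    """J = -D^{-1}(L + U): zero diagonal, -a_ij / a_ii off the diagonal."""
    n = len(A)
    return [[0.0 if i == j else -A[i][j] / A[i][i] for j in range(n)]
            for i in range(n)]

def strictly_row_diagonally_dominant(A):
    """True when |a_ii| exceeds the sum of |a_ij|, j != i, in every row."""
    return all(abs(A[i][i]) > sum(abs(v) for j, v in enumerate(A[i]) if j != i)
               for i in range(len(A)))
```

For instance, the matrix [[8,2,-2],[1,-8,3],[2,1,9]] used in Example 2 below is strictly row diagonally dominant, so its Jacobi scheme is guaranteed to converge.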


Example: Consider the system

x1 + 2x2 - 2x3 = 1
x1 +  x2 +  x3 = 0        ..(I)
2x1 + 2x2 + x3 = 0

Let us apply the Jacobi iteration scheme with the initial vector

x(0) = (0, 0, 0)T  ..(II)

We have

A = | 1  2  -2 |     y = | 1 |     D = | 1  0  0 |     L + U = | 0  2  -2 |
    | 1  1   1 |         | 0 |         | 0  1  0 |             | 1  0   1 |
    | 2  2   1 |         | 0 |         | 0  0  1 |             | 2  2   0 |

J = -D^{-1}(L + U) = |  0  -2   2 |        y^ = D^{-1}y = | 1 |
                     | -1   0  -1 |                       | 0 |
                     | -2  -2   0 |                       | 0 |

Thus the Jacobi scheme (16) becomes

x(k) = Jx(k-1) + y^,  k = 1, 2, ......

x(1) = Jx(0) + y^ = y^ = (1, 0, 0)T.


x(2) = Jx(1) + y^ = (0, -1, -2)T + (1, 0, 0)T = (1, -1, -2)T

x(3) = Jx(2) + y^ = (-2, 1, 0)T + (1, 0, 0)T = (-1, 1, 0)T

x(4) = Jx(3) + y^ = (-2, 1, 0)T + (1, 0, 0)T = (-1, 1, 0)T = x(3)

Hence x(4) = x(5) = x(6) = .... = x(3), so x(k) converges to x(3). The solution is

x = lim x(k) = x(3) = (-1, 1, 0)T.

One can easily check that this is the exact solution. Here there is no convergence problem at all.


Example 2: Consider

8x1 + 2x2 - 2x3 = 8
 x1 - 8x2 + 3x3 = 19
2x1 +  x2 + 9x3 = 30

Let us apply the Jacobi iteration scheme starting with x(0) = (0, 0, 0)T. We have

D = | 8   0  0 |     D^{-1} = | 1/8    0    0  |
    | 0  -8  0 |              |  0   -1/8   0  |
    | 0   0  9 |              |  0     0   1/9 |

J = -D^{-1}(L + U) = |    0      -0.25     0.25 |        y^ = D^{-1}y = |  1       |
                     |  0.125      0      0.375 |                       | -2.375   |
                     | -0.22222  -0.11111   0   |                       |  3.33333 |

Further,

|a11| = 8 and |a12| + |a13| = 2 + 2 = 4, so |a11| > |a12| + |a13|;
|a22| = 8 and |a21| + |a23| = 1 + 3 = 4, so |a22| > |a21| + |a23|;
|a33| = 9 and |a31| + |a32| = 2 + 1 = 3, so |a33| > |a31| + |a32|.

Thus we have a strictly row diagonally dominant matrix A. Hence the Jacobi iteration scheme will converge. The scheme is,

x(0) = (0, 0, 0)T;  x(k) = Jx(k-1) + y^;

x(1) = y^ = (1, -2.375, 3.33333)T.

We continue the iteration until the components of x(k) and x(k+1) differ by at most, say, 3x10^{-5}, i.e. until ||x(k+1) - x(k)|| <= 3x10^{-5} (maximum absolute component). We get ||x(1) - x(0)|| = 3.33333, so we continue:

x(2)  = Jx(1)  + y^ = ( 2.42708, -1.00000, 3.37500)T;   ||x(2)  - x(1)||  = 1.42708
x(3)  = Jx(2)  + y^ = ( 2.09375, -0.80599, 2.90509)T;   ||x(3)  - x(2)||  = 0.46991
x(4)  = Jx(3)  + y^ = ( 1.92777, -1.02387, 2.95761)T;   ||x(4)  - x(3)||  = 0.21788
x(5)  = Jx(4)  + y^ = ( 1.99537, -1.02493, 3.01870)T;   ||x(5)  - x(4)||  = 0.06760
x(6)  = Jx(5)  + y^ = ( 2.01091, -0.99356, 3.00380)T;   ||x(6)  - x(5)||  = 0.03136
x(7)  = Jx(6)  + y^ = ( 1.99934, -0.99721, 2.99686)T;   ||x(7)  - x(6)||  = 0.01157
x(8)  = Jx(7)  + y^ = ( 1.99852, -1.00126, 2.99984)T;   ||x(8)  - x(7)||  = 0.00405
x(9)  = Jx(8)  + y^ = ( 2.00027, -1.00025, 3.00047)T;   ||x(9)  - x(8)||  = 0.00176
x(10) = Jx(9)  + y^ = ( 2.00018, -0.99979, 2.99997)T;   ||x(10) - x(9)||  = 0.00050
x(11) = Jx(10) + y^ = ( 1.99994, -0.99999, 2.99994)T;   ||x(11) - x(10)|| = 0.00024
x(12) = Jx(11) + y^ = ( 1.99998, -1.00003, 3.00001)T;   ||x(12) - x(11)|| = 0.00008
x(13) = Jx(12) + y^ = ( 2.00001, -1.00000, 3.00001)T;   ||x(13) - x(12)|| = 0.00003 <= 3x10^{-5}.

SOLUTION IS x1 = 2.00001, x2 = -1.00000, x3 = 3.00001 (the exact solution is x1 = 2, x2 = -1, x3 = 3).
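The iteration carried out above, with the same componentwise form and the same stopping rule, can be sketched as follows (a minimal illustration; the names are mine):

```python
def jacobi_solve(A, y, tol=3e-5, max_iter=100):
    """Jacobi iteration x^(k) = J x^(k-1) + D^{-1} y, started from x^(0) = 0,
    stopping when successive iterates differ by at most tol in every
    component.  Returns (x, number of iterations used)."""
    n = len(A)
    x = [0.0] * n
    for k in range(1, max_iter + 1):
        new = [(y[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
               for i in range(n)]
        diff = max(abs(new[i] - x[i]) for i in range(n))
        x = new
        if diff <= tol:
            return x, k
    raise ArithmeticError("Jacobi iteration did not converge")
```

On the system of Example 2 this reproduces the run above, returning x close to (2, -1, 3).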


Gauss Seidel Method

Once again we consider the system Ax = y  ..(I). In the Jacobi scheme we used the values x2(k), x3(k), ....., xn(k) obtained in the kth iteration, in place of x2, x3, ....., xn in the first equation, to calculate x1(k+1) from

a11 x1(k+1) = -a12 x2(k) - a13 x3(k) - ..... - a1n xn(k) + y1.

Similarly, in the ith equation we used the values x1(k), ....., x(i-1)(k), x(i+1)(k), ....., xn(k) to calculate xi(k+1) from

aii xi(k+1) = -ai1 x1(k) - ..... - a(i,i-1) x(i-1)(k) - a(i,i+1) x(i+1)(k) - ..... - ain xn(k) + yi.

What Gauss-Seidel suggests is: having obtained x1(k+1) from the first equation, use this value for x1 in the second equation to calculate x2(k+1) from

a22 x2(k+1) = -a21 x1(k+1) - a23 x3(k) - ..... - a2n xn(k) + y2,

and use x1(k+1), x2(k+1) in the 3rd equation to calculate x3(k+1), and so on. Thus in the ith equation use x1(k+1), ....., x(i-1)(k+1) to calculate xi(k+1) from

aii xi(k+1) = -ai1 x1(k+1) - ..... - a(i,i-1) x(i-1)(k+1) - a(i,i+1) x(i+1)(k) - ..... - ain xn(k) + yi.

In matrix notation we can write this as

Dx(k+1) = -Lx(k+1) - Ux(k) + y,

which can be rewritten as (D + L)x(k+1) = -Ux(k) + y, and hence

x(k+1) = -(D + L)^{-1} U x(k) + (D + L)^{-1} y.

Thus we get the Gauss-Seidel iteration scheme as: x(0) an initial guess;

x(k+1) = Gx(k) + y^  ..(II)

where G = -(D + L)^{-1} U is the Gauss-Seidel iteration matrix and y^ = (D + L)^{-1} y. The scheme will converge if ||G||sp < 1; of course it suffices that ||G|| < 1 in some matrix norm. But ||G|| >= 1 in some particular norm does not mean that the scheme will diverge: the acid test for convergence is ||G||sp < 1. We shall now consider some examples.

Example 3: Let us consider the system

x1 + 2x2 - 2x3 = 1
x1 +  x2 +  x3 = 0
2x1 + 2x2 + x3 = 0

considered on page 5, for which the Jacobi scheme gave the exact solution in the 3rd iteration (see page 6). We shall now try to apply the Gauss-Seidel scheme to this system. We have,
sp

A = | 1  2  -2 |     y = | 1 |     D + L = | 1  0  0 |     U = | 0  2  -2 |
    | 1  1   1 |         | 0 |             | 1  1  0 |         | 0  0   1 |
    | 2  2   1 |         | 0 |             | 2  2  1 |         | 0  0   0 |

(D + L)^{-1} = |  1   0  0 |
               | -1   1  0 |
               |  0  -2  1 |

Thus the Gauss-Seidel iteration matrix is

G = -(D + L)^{-1} U = | 0  -2   2 |
                      | 0   2  -3 |
                      | 0   0   2 |

Since G is triangular we get its eigenvalues immediately, as its diagonal entries: lambda1 = 0, lambda2 = 2, lambda3 = 2. Therefore

||G||sp = 2 > 1.

Hence the Gauss-Seidel scheme for this system will not converge. Thus for this system the Jacobi scheme converges so rapidly — giving the exact solution in the third iteration itself — whereas the Gauss-Seidel scheme does not converge.

Example 4: Consider the system

   x1 - (1/2)x2 - (1/2)x3 = 1
   x1 +      x2 +      x3 = 0
-(1/2)x1 - (1/2)x2 +   x3 = 0

Let us apply the Gauss-Seidel scheme to this system. We have

A = |  1    -1/2  -1/2 |     y = | 1 |     D + L = |  1     0    0 |
    |  1     1     1   |         | 0 |             |  1     1    0 |
    | -1/2  -1/2   1   |         | 0 |             | -1/2  -1/2  1 |

(D + L)^{-1} = |  1   0    0 |       U = | 0  -1/2  -1/2 |
               | -1   1    0 |           | 0   0     1   |
               |  0   1/2  1 |           | 0   0     0   |

Thus

G = -(D + L)^{-1} U = | 0   1/2   1/2 |
                      | 0  -1/2  -3/2 |     ..........(*)
                      | 0   0    -1/2 |

is the Gauss-Seidel matrix for this system. The Gauss-Seidel scheme is

x(k+1) = Gx(k) + y^,  x(0) = (0, 0, 0)T,

where y^ = (D + L)^{-1} y = (1, -1, 0)T and G is given by (*).


Notice that G is upper triangular, and hence we readily get the eigenvalues of G as its diagonal entries: lambda1 = 0, lambda2 = -1/2, lambda3 = -1/2. Hence ||G||sp = 1/2 < 1, and the Gauss-Seidel scheme will converge.

Let us now carry out a few steps of the Gauss-Seidel iteration, since we have now been assured of convergence. (We shall first do some exact calculations.)

x(1) = Gx(0) + y^ = y^ = (1, -1, 0)T

x(2) = Gx(1) + y^ = (1 - 1/2, -1 + 1/2, 0)T

x(3) = Gx(2) + y^ = (1 - 1/2 + 1/4, -(1 - 1/2 + 1/4), 0)T

If we continue this process we get

x(k) = (1 - 1/2 + 1/4 - ..... + (-1/2)^{k-1}, -[1 - 1/2 + 1/4 - ..... + (-1/2)^{k-1}], 0)T.

Clearly, by summing up the geometric series, as k grows

x(k) -> (2/3, -2/3, 0)T,

which is the exact solution. Of course, here we knew a priori that the series was going to sum up neatly for each component, and so we did the exact calculation. If we had not noticed this we would still have carried out the computations as follows:
x(1)  = Gx(0)  + y^ = (1, -1, 0)T   as before
x(2)  = Gx(1)  + y^ = (0.5,      -0.5,      0)T
x(3)  = Gx(2)  + y^ = (0.75,     -0.75,     0)T
x(4)  = Gx(3)  + y^ = (0.625,    -0.625,    0)T
x(5)  = Gx(4)  + y^ = (0.6875,   -0.6875,   0)T
x(6)  = Gx(5)  + y^ = (0.65625,  -0.65625,  0)T;   ||x(6)  - x(5)||  = 0.03125
x(7)  = Gx(6)  + y^ = (0.671875, -0.671875, 0)T;   ||x(7)  - x(6)||  = 0.015625
x(8)  = Gx(7)  + y^ = (0.664062, -0.664062, 0)T;   ||x(8)  - x(7)||  = 0.007813
x(9)  = Gx(8)  + y^ = (0.667969, -0.667969, 0)T;   ||x(9)  - x(8)||  = 0.003906
x(10) = Gx(9)  + y^ = (0.666016, -0.666016, 0)T;   ||x(10) - x(9)||  = 0.001953
x(11) = Gx(10) + y^ = (0.666992, -0.666992, 0)T;   ||x(11) - x(10)|| = 0.000977

(Since the error is now < 10^{-3} we may stop here and take x(11) as our solution for the system. Or we may improve the accuracy by doing more iterations, to get

x(15) = (0.666687, -0.666687, 0)T;   ||x(15) - x(14)|| = 0.000061 < 10^{-4},

and hence we can take x(15) as our solution within error 10^{-4}.) Let us now try to apply the Jacobi scheme for this system. We have


A = |  1    -1/2  -1/2 |      and therefore     J = -D^{-1}(L + U) = |  0   1/2   1/2 |
    |  1     1     1   |                                             | -1    0    -1  |
    | -1/2  -1/2   1   |                                             | 1/2  1/2    0  |

The characteristic polynomial is

|lambda I - J| = lambda^3 + (3/4) lambda + 1/2 = (lambda + 1/2)(lambda^2 - (1/2) lambda + 1).

Thus the eigenvalues of J are

lambda1 = -1/2;  lambda2 = 1/4 + i sqrt(15)/4;  lambda3 = 1/4 - i sqrt(15)/4,

so that

|lambda1| = 1/2;  |lambda2| = |lambda3| = sqrt(1/16 + 15/16) = 1.

Hence ||J||sp = 1, which is not less than 1. Thus the Jacobi scheme for this system will not converge.
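A Gauss–Seidel sweep differs from a Jacobi sweep only in using each new component as soon as it is computed. A minimal sketch (the names are mine; the usage applies it to the system of Example 4 as reconstructed here, where Jacobi fails but Gauss–Seidel converges):

```python
def gauss_seidel_solve(A, y, tol=1e-6, max_iter=1000):
    """Gauss-Seidel iteration from x^(0) = 0: within each sweep, updated
    components are used immediately in the later equations."""
    n = len(A)
    x = [0.0] * n
    for k in range(1, max_iter + 1):
        diff = 0.0
        for i in range(n):
            new = (y[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            diff = max(diff, abs(new - x[i]))
            x[i] = new
        if diff <= tol:
            return x, k
    raise ArithmeticError("Gauss-Seidel iteration did not converge")
```

For example, gauss_seidel_solve([[1, -0.5, -0.5], [1, 1, 1], [-0.5, -0.5, 1]], [1, 0, 0]) converges to (2/3, -2/3, 0), in agreement with the exact summation above.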

Thus, in Example 3 we had a system for which the Jacobi scheme converged but the Gauss-Seidel scheme did not converge, whereas in Example 4 above we have a system for which the Jacobi scheme does not converge but the Gauss-Seidel scheme converges. These two examples demonstrate that, in general, it is not correct to say that one scheme is better than the other. Let us now consider another example.

Example 5:

 2x1 -  x2             = y1
 -x1 + 2x2 -  x3       = y2
       -x2 + 2x3 -  x4 = y3
             -x3 + 2x4 = y4

Here

A = |  2  -1   0   0 |
    | -1   2  -1   0 |
    |  0  -1   2  -1 |
    |  0   0  -1   2 |

The Jacobi matrix for this scheme is

J = |  0   1/2   0    0  |
    | 1/2   0   1/2   0  |
    |  0   1/2   0   1/2 |
    |  0    0   1/2   0  |

The characteristic equation is

16 lambda^4 - 12 lambda^2 + 1 = 0  ..(CJ)

Set lambda^2 = mu. Then

16 mu^2 - 12 mu + 1 = 0  ..(CJ1)

and each eigenvalue lambda of J is a square root of a root of (CJ1). Thus the eigenvalues of J are +-0.3090, +-0.8090. Hence ||J||sp = 0.8090 < 1, so the Jacobi scheme converges.

The Gauss-Seidel matrix for the system is found as follows:


D + L = |  2   0   0  0 |       U = | 0  -1   0   0 |
        | -1   2   0  0 |           | 0   0  -1   0 |
        |  0  -1   2  0 |           | 0   0   0  -1 |
        |  0   0  -1  2 |           | 0   0   0   0 |

(D + L)^{-1} = | 1/2    0    0    0  |
               | 1/4   1/2   0    0  |
               | 1/8   1/4  1/2   0  |
               | 1/16  1/8  1/4  1/2 |

G = -(D + L)^{-1} U = | 0  1/2    0    0  |
                      | 0  1/4   1/2   0  |
                      | 0  1/8   1/4  1/2 |
                      | 0  1/16  1/8  1/4 |

|lambda I - G| = 0, which becomes in this case

16 lambda^4 - 12 lambda^3 + lambda^2 = 0  ..(CG)

This can be factored as

lambda^2 (16 lambda^2 - 12 lambda + 1) = 0.

Thus the eigenvalues of G are the roots of lambda^2 = 0 and of

16 lambda^2 - 12 lambda + 1 = 0  ..(CG1)

So one eigenvalue of G is 0 (repeated twice), and two eigenvalues of G are the roots of (CG1). Notice that the roots of (CG1) are the same as those of (CJ1): the nonzero eigenvalues of G are the squares of the eigenvalues of J. The nonzero eigenvalues of G are 0.0955 and 0.6545. Thus

||G||sp = 0.6545 < 1.

Thus the Gauss-Seidel scheme also converges. Observe that

||G||sp = ||J||sp^2,  so  ||G||sp < ||J||sp.

Thus the Gauss-Seidel scheme converges faster than the Jacobi scheme. In many classes of problems where both schemes converge it is the Gauss-Seidel scheme that converges faster. We shall not go into any further details of this aspect.


SUCCESSIVE OVERRELAXATION METHOD (SOR METHOD)

We shall now consider the SOR method for the system Ax = y  ..(I). We take a parameter w != 0 and multiply both sides of (I) by w to get the equivalent system

wAx = wy  ..(II)

Now wA = w(D + L + U). We write (II) as

w(D + L + U)x = wy,
i.e.  (wD + wL)x = -wUx + wy,
i.e.  (D + wL)x + (w - 1)Dx = -wUx + wy,
i.e.  (D + wL)x = -[(w - 1)D + wU]x + wy,
i.e.  x = -(D + wL)^{-1} [(w - 1)D + wU]x + w(D + wL)^{-1} y.

We thus get the SOR scheme as

x(0) an initial guess;  x(k+1) = Mw x(k) + y^w,

where

Mw = -(D + wL)^{-1} [(w - 1)D + wU]   and   y^w = w(D + wL)^{-1} y.

Mw is the SOR matrix for the system.


Notice that if w = 1 we get the Gauss-Seidel scheme. The strategy is to choose w such that ||Mw||sp < 1, and is as small as possible, so that the scheme converges as rapidly as possible. This is easier said than achieved. How does one choose w? It can be shown that convergence cannot be achieved if w >= 2 (we assume w > 0). Usually w is chosen between 1 and 2. Of course, one must analyse ||Mw||sp as a function of w and find the value w0 for which this is minimum, and work with this value w0. Let us consider an example of this aspect.

Example 6: Consider the system given in Example 5. For that system,

Mw = -(D + wL)^{-1} [(w - 1)D + wU],

and a direct computation gives the characteristic equation

16 (lambda + w - 1)^4 - 12 lambda w^2 (lambda + w - 1)^2 + lambda^2 w^4 = 0  ..(CM)

Thus the eigenvalues of Mw are the roots of the above equation. Now when is lambda = 0 a root? Setting lambda = 0 in (CM) gives 16(w - 1)^4 = 0, i.e. w = 1 — the Gauss-Seidel case. So let us take w != 1; then lambda = 0 is not a root, and we can divide (CM) by lambda^2 w^4 to get

16 [(lambda + w - 1)^2 / (lambda w^2)]^2 - 12 [(lambda + w - 1)^2 / (lambda w^2)] + 1 = 0.

Setting

mu^2 = (lambda + w - 1)^2 / (lambda w^2)

we get

16 mu^4 - 12 mu^2 + 1 = 0,

which is the same as (CJ). Thus mu = +-0.3090, +-0.8090, and

(lambda + w - 1)^2 = lambda w^2 mu^2,  with mu^2 = 0.0955 or 0.6545  ..(*)

Solving this quadratic in lambda,

lambda = (1 - w) + (1/2) w^2 mu^2 +- w mu sqrt((1 - w) + (1/4) w^2 mu^2),

which gives the eigenvalues of Mw. With w = 1.2, and using the two values of mu^2 in (*), we get

lambda = 0.4545, 0.0880, -0.1312 +- (0.1509)i

as the eigenvalues. The modulus of each complex root is 0.2. Thus

||M1.2||sp = 0.4545,

compared with ||J||sp = 0.8090 and ||G||sp = 0.6545 computed in Example 5. Thus for this system, SOR with w = 1.2 is faster than the Jacobi and Gauss-Seidel schemes. We can show that in this example, when w = w0 = 1.2596, the spectral radius ||Mw0||sp is smaller than ||Mw||sp for any other w. We have

||M1.2596||sp = 0.2596.

Thus the SOR scheme with w = 1.2596 will be the method which converges fastest.

Note: We had ||M1.2||sp = 0.4545

and

‖M₁.₂₅₉₆‖_sp = 0.2596.

Thus a small change in the value of ω brings about a significant change in the spectral radius.
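The sweep over ω described above can be sketched numerically. The coefficient matrix of Example 5 is not shown in this excerpt; the sketch below assumes it is the 4×4 tridiagonal matrix with 2 on the diagonal and −1 beside it, which reproduces the spectral radii quoted above.

```python
import numpy as np

def sor_iteration_matrix(A, omega):
    """Build M_w = -(D + w L)^{-1} [(w - 1) D + w U] for the splitting A = D + L + U."""
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    return -np.linalg.inv(D + omega * L) @ ((omega - 1) * D + omega * U)

def spectral_radius(M):
    """Largest eigenvalue modulus of M."""
    return max(abs(np.linalg.eigvals(M)))

# Assumed system matrix for Example 5 (hypothetical reconstruction).
A = np.array([[2., -1, 0, 0],
              [-1, 2, -1, 0],
              [0, -1, 2, -1],
              [0, 0, -1, 2]])

rho_gs  = spectral_radius(sor_iteration_matrix(A, 1.0))     # Gauss-Seidel (w = 1)
rho_12  = spectral_radius(sor_iteration_matrix(A, 1.2))
rho_opt = spectral_radius(sor_iteration_matrix(A, 1.2596))  # near-optimal w
```

Under this assumption the sweep gives ρ ≈ 0.6545 at ω = 1, ρ ≈ 0.4545 at ω = 1.2, and a minimum near ω = 1.2596.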


## Numerical analysis /Eigenvalues and Eigenvectors


EIGENVALUES AND EIGENVECTORS

Let A be an n×n matrix. A scalar λ is called an eigenvalue of A if there exists a nonzero n×1 vector x such that Ax = λx.

Example: Let

A = | −9   4   4 |
    | −8   3   4 |
    | −16  8   7 |

Let λ = −1 and x = (1, 2, 0)ᵀ. We have

Ax = | −9   4   4 | |1|   |−1|
     | −8   3   4 | |2| = |−2| = (−1)x = λx.
     | −16  8   7 | |0|   | 0|

Thus λ = −1 is an eigenvalue of A.

Similarly, if we take λ = 3 and x = (1, 1, 2)ᵀ, we find that Ax = 3x. Thus λ = 3 is also an eigenvalue of A.

Let λ be an eigenvalue of A. Then any nonzero x such that Ax = λx is called an eigenvector of A corresponding to λ. Let
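The two eigenpair checks above can be reproduced numerically (a quick sketch using the matrix of the example):

```python
import numpy as np

# The matrix from the example above.
A = np.array([[-9., 4, 4],
              [-8., 3, 4],
              [-16., 8, 7]])

x1 = np.array([1., 2, 0])   # claimed eigenvector for lambda = -1
x2 = np.array([1., 1, 2])   # claimed eigenvector for lambda = 3

check1 = np.allclose(A @ x1, -1 * x1)   # Ax = (-1) x
check2 = np.allclose(A @ x2, 3 * x2)    # Ax = 3 x
```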

W_λ = { x ∈ Cⁿ : Ax = λx }.

Then: (i) W_λ is nonempty, since θₙ ∈ W_λ.


(ii) If x, y ∈ W_λ, then Ax = λx and Ay = λy, so A(x + y) = λ(x + y); hence x + y ∈ W_λ.

(iii) For any constant α: if x ∈ W_λ, then A(αx) = αAx = α(λx) = λ(αx); hence αx ∈ W_λ.

Thus W_λ is a subspace of Cⁿ. It is called the characteristic subspace or the eigensubspace corresponding to the eigenvalue λ.

Example: Consider the matrix A in the example on page 1. We have seen that λ = −1 is an eigenvalue. What is W₋₁, the eigensubspace corresponding to −1? We want to find all x such that Ax = −x, i.e. (A + I)x = θ; that is, all solutions of the homogeneous system Mx = θ, where

M = A + I = | −8   4   4 |
            | −8   4   4 |
            | −16  8   8 |

We can now use row reduction to find the general solution of the system:

M  →(R2 − R1, R3 − 2R1)  | −8  4  4 |   →(−⅛ R1)  | 1  −½  −½ |
                          |  0  0  0 |              | 0   0   0 |
                          |  0  0  0 |              | 0   0   0 |

Thus x₁ = ½x₂ + ½x₃.

Thus the general solution of (A + I)x = θ is


x = ( ½x₂ + ½x₃, x₂, x₃ )ᵀ = (x₂/2)(1, 2, 0)ᵀ + (x₃/2)(1, 0, 2)ᵀ = A₁(1, 2, 0)ᵀ + A₂(1, 0, 2)ᵀ,

where A₁ and A₂ are arbitrary constants. Thus W₋₁ consists of all vectors of the form

A₁(1, 2, 0)ᵀ + A₂(1, 0, 2)ᵀ.
Note: The vectors (1, 2, 0)ᵀ and (1, 0, 2)ᵀ form a basis for W₋₁, and therefore dim W₋₁ = 2.

What is W₃, the eigensubspace corresponding to the eigenvalue 3 for the above matrix? We need to find all solutions of Ax = 3x, i.e. (A − 3I)x = θ, i.e. Nx = θ, where

N = A − 3I = | −12  4  4 |
             | −8   0  4 |
             | −16  8  4 |

Again we use row reduction.


N  →(R2 − ⅔R1, R3 − 4/3 R1)  | −12   4     4  |   →(R3 + R2)  | −12   4     4  |
                               |  0   −8/3  4/3 |                |  0   −8/3  4/3 |
                               |  0    8/3 −4/3 |                |  0    0     0  |

Thus

−12x₁ + 4x₂ + 4x₃ = 0 ;  −(8/3)x₂ + (4/3)x₃ = 0,

so x₃ = 2x₂, and −12x₁ = −4x₂ − 4x₃ = −12x₂ gives x₁ = x₂. Hence

x₂ = x₁ ; x₃ = 2x₂ = 2x₁.
The general solution is

x = (x₁, x₁, 2x₁)ᵀ = x₁(1, 1, 2)ᵀ.

Thus W₃ consists of all vectors of the form α(1, 1, 2)ᵀ, where α is an arbitrary constant.

Note: The vector (1, 1, 2)ᵀ forms a basis for W₃ and hence dim W₃ = 1.

Now, when can a scalar λ be an eigenvalue of a matrix A? We shall now investigate this question. Suppose λ is an eigenvalue of A.
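Both eigensubspace computations can be spot-checked by verifying that the basis vectors solve their homogeneous systems:

```python
import numpy as np

A = np.array([[-9., 4, 4],
              [-8., 3, 4],
              [-16., 8, 7]])
I = np.eye(3)

# Basis of the eigensubspace for lambda = -1: solutions of (A + I)x = 0.
b1, b2 = np.array([1., 2, 0]), np.array([1., 0, 2])
# Basis of the eigensubspace for lambda = 3: solutions of (A - 3I)x = 0.
b3 = np.array([1., 1, 2])

ok_minus1 = np.allclose((A + I) @ b1, 0) and np.allclose((A + I) @ b2, 0)
ok_3 = np.allclose((A - 3 * I) @ b3, 0)
```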

Then there is a nonzero vector x such that Ax = λx, i.e.

(A − λI)x = θ, with x ≠ θ.

Thus the system (A − λI)x = θ has a nonzero solution, so

nullity (A − λI) ≥ 1  ⟹  rank (A − λI) < n  ⟹  (A − λI) is singular  ⟹  det (A − λI) = 0.

Thus, λ is an eigenvalue of A ⟹ det (A − λI) = 0.

Conversely, suppose λ is a scalar such that det (A − λI) = 0. Then

(A − λI) is singular ⟹ rank (A − λI) < n ⟹ nullity (A − λI) ≥ 1,

so the system (A − λI)x = θ has a nonzero solution, i.e. λ is an eigenvalue of A.

Combining the two we get: λ is an eigenvalue of A ⟺ det (A − λI) = 0 ⟺ det (λI − A) = 0.

Now let C(λ) = det (λI − A). Thus we see that the eigenvalues of a matrix A are precisely the roots of C(λ) = det (λI − A).


C(λ) = | λ − a₁₁   −a₁₂   .....   −a₁ₙ  |
       |  −a₂₁   λ − a₂₂  .....   −a₂ₙ  |
       |  .....    .....  .....   ..... |
       |  −aₙ₁    −aₙ₂    .....  λ − aₙₙ |

= λⁿ − (a₁₁ + ..... + aₙₙ)λⁿ⁻¹ + ..... + (−1)ⁿ det A.
Thus C(λ) is a polynomial of degree n. Note that the leading coefficient of C(λ) is 1; we say C(λ) is a monic polynomial of degree n. This is called the CHARACTERISTIC POLYNOMIAL of A. The roots of the characteristic polynomial are the eigenvalues of A. The equation C(λ) = 0 is called the characteristic equation.

The sum of the roots of C(λ) = sum of the eigenvalues of A = a₁₁ + ...... + aₙₙ, and this is called the TRACE of A. The product of the roots of C(λ) = product of the eigenvalues of A = det A. In our example on page 1 we have

A = | −9   4   4 |
    | −8   3   4 |
    | −16  8   7 |

C(λ) = det (λI − A) = | λ + 9   −4    −4   |
                      |   8    λ − 3  −4   |
                      |  16    −8   λ − 7  |

Adding columns 2 and 3 to column 1 (C₁ → C₁ + C₂ + C₃) makes every entry of the first column λ + 1:

= | λ + 1   −4    −4   |
  | λ + 1  λ − 3  −4   |
  | λ + 1   −8   λ − 7 |


= (λ + 1) | 1   −4    −4   |   →(R2 − R1, R3 − R1)   = (λ + 1) | 1    −4    −4   |
          | 1  λ − 3  −4   |                                   | 0   λ + 1   0   |
          | 1   −8   λ − 7 |                                   | 0    −4   λ − 3 |

= (λ + 1)(λ + 1)(λ − 3) = (λ + 1)²(λ − 3).

Thus the characteristic polynomial is

C(λ) = (λ + 1)²(λ − 3).

The eigenvalues are −1 (repeated twice) and 3.

Sum of eigenvalues = (−1) + (−1) + 3 = 1 = trace A = sum of the diagonal entries.
Product of eigenvalues = (−1)(−1)(3) = 3 = det A.

Thus, if A is an n×n matrix, we define the CHARACTERISTIC POLYNOMIAL as

C(λ) = | λI − A | .............(1)

and observe that this is a monic polynomial of degree n. When we factorize this as

C(λ) = (λ − λ₁)^{a₁} (λ − λ₂)^{a₂} ...... (λ − λₖ)^{aₖ} ........(2)

where λ₁, λ₂, ....., λₖ are the distinct roots, these distinct roots are the distinct eigenvalues of A and the multiplicities a₁, a₂, ....., aₖ of these roots are called the algebraic multiplicities of these eigenvalues. For the matrix in the example on page 1 we found the characteristic polynomial on page 6 as

C(λ) = (λ + 1)²(λ − 3).
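The characteristic polynomial, trace and determinant relations above can be checked numerically (C(λ) = λ³ − λ² − 5λ − 3 when expanded):

```python
import numpy as np

A = np.array([[-9., 4, 4],
              [-8., 3, 4],
              [-16., 8, 7]])

coeffs = np.poly(A)                      # coefficients of det(lambda I - A), leading 1
eigs = np.sort(np.linalg.eigvals(A).real)

trace_ok = np.isclose(eigs.sum(), np.trace(A))          # sum of eigenvalues = trace
det_ok = np.isclose(np.prod(eigs), np.linalg.det(A))    # product of eigenvalues = det A
```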


Thus the distinct eigenvalues of this matrix are λ₁ = −1 and λ₂ = 3, and their algebraic multiplicities are respectively a₁ = 2, a₂ = 1.

If λᵢ is an eigenvalue of A, the characteristic subspace corresponding to λᵢ is defined as

W_{λᵢ} = { x : Ax = λᵢx },

and the dimension of W_{λᵢ} is called the GEOMETRIC MULTIPLICITY of the eigenvalue λᵢ, denoted by gᵢ. Again for the matrix on page 1, we found on pages 3 and 4 respectively that dim W₋₁ = 2 and dim W₃ = 1. Thus the geometric multiplicities of the eigenvalues λ₁ = −1 and λ₂ = 3 are respectively g₁ = 2, g₂ = 1. Notice that in this example a₁ = g₁ = 2 and a₂ = g₂ = 1. In general this may not be so. It can be shown that for any matrix A having C(λ) as in (2),

1 ≤ gᵢ ≤ aᵢ ; 1 ≤ i ≤ k,

i.e., for any eigenvalue of A, 1 ≤ geometric multiplicity ≤ algebraic multiplicity.

We shall study the properties of the eigenvalues and eigenvectors of a matrix. We start with a preliminary remark on Lagrange interpolation polynomials. Let μ₁, μ₂, ......, μₛ be distinct scalars (i.e., μᵢ ≠ μⱼ if i ≠ j). Consider

pᵢ(λ) = [(λ − μ₁)(λ − μ₂) ... (λ − μᵢ₋₁)(λ − μᵢ₊₁) ... (λ − μₛ)] / [(μᵢ − μ₁)(μᵢ − μ₂) ... (μᵢ − μᵢ₋₁)(μᵢ − μᵢ₊₁) ... (μᵢ − μₛ)] ......(3)

= ∏_{1≤j≤s, j≠i} (λ − μⱼ)/(μᵢ − μⱼ),  for i = 1, 2, ......, s. ......(4)

Then the pᵢ(λ) are all polynomials of degree s − 1. Further, notice that

pᵢ(μ₁) = ... = pᵢ(μᵢ₋₁) = pᵢ(μᵢ₊₁) = ... = pᵢ(μₛ) = 0 ;  pᵢ(μᵢ) = 1.


Thus the pᵢ(λ) are all polynomials of degree s − 1 such that

pᵢ(μⱼ) = δᵢⱼ. ..........(5)

We call these the Lagrange interpolation polynomials. If p(λ) is any polynomial of degree ≤ s − 1, then it can be written as a linear combination of p₁(λ), p₂(λ), ..., pₛ(λ) as follows:

p(λ) = p(μ₁)p₁(λ) + p(μ₂)p₂(λ) + ..... + p(μₛ)pₛ(λ) = Σᵢ₌₁ˢ p(μᵢ)pᵢ(λ). ....(6)

With this preliminary, we now proceed to study the properties of the eigenvalues and eigenvectors of an n×n matrix A. Let λ₁, ....., λₖ be the distinct eigenvalues of A, and let φ₁, φ₂, ....., φₖ be eigenvectors corresponding to these eigenvalues respectively; i.e., the φᵢ are nonzero vectors such that

Aφᵢ = λᵢφᵢ.

From this it follows that

A²φᵢ = A(Aφᵢ) = A(λᵢφᵢ) = λᵢAφᵢ = λᵢ²φᵢ,
A³φᵢ = A(A²φᵢ) = A(λᵢ²φᵢ) = λᵢ²Aφᵢ = λᵢ³φᵢ,

and by induction we get

Aᵐφᵢ = λᵢᵐφᵢ for any integer m ≥ 0. ...........(7)

(We interpret A⁰ as I). Now let

p(λ) = a₀ + a₁λ + ...... + aₛλˢ

be any polynomial. We define p(A) as the matrix

p(A) = a₀I + a₁A + ...... + aₛAˢ.

Now

p(A)φᵢ = (a₀I + a₁A + ...... + aₛAˢ)φᵢ = a₀φᵢ + a₁λᵢφᵢ + ...... + aₛλᵢˢφᵢ = (a₀ + a₁λᵢ + ...... + aₛλᵢˢ)φᵢ = p(λᵢ)φᵢ.

Thus, if λᵢ is any eigenvalue of A and φᵢ is an eigenvector corresponding to λᵢ, then for any polynomial p(λ) we have

p(A)φᵢ = p(λᵢ)φᵢ.

Now, are the eigenvectors φ₁, φ₂, ....., φₖ corresponding to the distinct eigenvalues λ₁, λ₂, ....., λₖ of A linearly independent? In order to establish this linear independence, we must show that

C₁φ₁ + C₂φ₂ + ...... + Cₖφₖ = θₙ  ⟹  C₁ = C₂ = ...... = Cₖ = 0. ....(8)

Now if in (4) and (5) we take s = k and μᵢ = λᵢ, we get the Lagrange interpolation polynomials

pᵢ(λ) = ∏_{1≤j≤k, j≠i} (λ − λⱼ)/(λᵢ − λⱼ) ;  i = 1, 2, ..., k, ......(9)

with

pᵢ(λⱼ) = δᵢⱼ. ......(10)

Now suppose

C₁φ₁ + C₂φ₂ + .... + Cₖφₖ = θₙ.

For 1 ≤ i ≤ k, applying pᵢ(A),

pᵢ(A)[C₁φ₁ + C₂φ₂ + .... + Cₖφₖ] = pᵢ(A)θₙ = θₙ
⟹ C₁pᵢ(A)φ₁ + C₂pᵢ(A)φ₂ + .... + Cₖpᵢ(A)φₖ = θₙ
⟹ C₁pᵢ(λ₁)φ₁ + C₂pᵢ(λ₂)φ₂ + .... + Cₖpᵢ(λₖ)φₖ = θₙ  (by the property on page 10)
⟹ Cᵢφᵢ = θₙ, 1 ≤ i ≤ k  ⟹  Cᵢ = 0, 1 ≤ i ≤ k.

Thus eigenvectors corresponding to distinct eigenvalues of A are linearly independent.
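The independence of eigenvectors for distinct eigenvalues can be illustrated on the running example (one eigenvector per distinct eigenvalue, −1 and 3):

```python
import numpy as np

A = np.array([[-9., 4, 4],
              [-8., 3, 4],
              [-16., 8, 7]])

# One eigenvector each for the distinct eigenvalues -1 and 3.
V = np.column_stack([np.array([1., 2, 0]), np.array([1., 1, 2])])

rank = np.linalg.matrix_rank(V)   # full column rank => linearly independent
```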


SIMILAR MATRICES

We shall now introduce the idea of similar matrices and study the properties of similar matrices.
DEFINITION

An n×n matrix A is said to be similar to an n×n matrix B if there exists a nonsingular n×n matrix P such that P⁻¹AP = B. We then write A ∼ B.
Properties of Similar Matrices

(1) Since I⁻¹AI = A, it follows that A ∼ A.

(2) A ∼ B ⟹ there is a nonsingular P such that P⁻¹AP = B ⟹ A = PBP⁻¹ = Q⁻¹BQ, where Q = P⁻¹ is nonsingular ⟹ B ∼ A. Thus A ∼ B ⟹ B ∼ A.

(3) Similarly, we can show that A ∼ B, B ∼ C ⟹ A ∼ C.

(4) Properties (1), (2) and (3) show that similarity is an equivalence relation on the set of all n×n matrices.

(5) Let A and B be similar matrices. Then there exists a nonsingular matrix P such that A = P⁻¹BP. Now let C_A(λ) and C_B(λ) be the characteristic polynomials of A and B respectively. We have

C_A(λ) = | λI − A | = | λI − P⁻¹BP | = | P⁻¹(λI)P − P⁻¹BP | = | P⁻¹(λI − B)P |

= | P⁻¹ | | λI − B | | P | = | λI − B |   (since |P⁻¹||P| = 1)

= C_B(λ).

Thus SIMILAR MATRICES HAVE THE SAME CHARACTERISTIC POLYNOMIAL.

(6) Let A and B be similar matrices. Then there exists a nonsingular matrix P such that A = P⁻¹BP. Now for any positive integer k, we have

Aᵏ = (P⁻¹BP)(P⁻¹BP) ..... (P⁻¹BP)   (k times)

= P⁻¹BᵏP.

Therefore Aᵏ = Oₙ ⟺ P⁻¹BᵏP = Oₙ ⟺ Bᵏ = Oₙ. Thus if A and B are similar matrices then Aᵏ = Oₙ ⟺ Bᵏ = Oₙ.

Now let p(λ) = a₀ + a₁λ + ..... + aₖλᵏ be any polynomial. Then

p(A) = a₀I + a₁A + ..... + aₖAᵏ
     = a₀I + a₁P⁻¹BP + a₂P⁻¹B²P + ..... + aₖP⁻¹BᵏP
     = P⁻¹(a₀I + a₁B + a₂B² + ..... + aₖBᵏ)P
     = P⁻¹p(B)P.

Thus

p(A) = Oₙ ⟺ P⁻¹p(B)P = Oₙ ⟺ p(B) = Oₙ.

Thus IF A AND B ARE SIMILAR MATRICES THEN FOR ANY POLYNOMIAL p(λ): p(A) = Oₙ ⟺ p(B) = Oₙ.

(7) Let A be any matrix. By A(A) we denote the set of all polynomials p(λ) such that p(A) = Oₙ, i.e.


A(A) = { p(λ) : p(A) = Oₙ }.

Now from (6) it follows that IF A AND B ARE SIMILAR MATRICES THEN A(A) = A(B). The set A(A) is called the set of ANNIHILATING POLYNOMIALS of A. Thus similar matrices have the same set of annihilating polynomials. We shall discuss more about annihilating polynomials later.

We now investigate the following question: given an n×n matrix A, when is it similar to a simple matrix? What are simple matrices?

The simplest matrix we know is the zero matrix Oₙ. Now A ∼ Oₙ ⟺ there is a nonsingular matrix P such that A = P⁻¹OₙP = Oₙ. Thus THE ONLY MATRIX SIMILAR TO Oₙ IS ITSELF.

The next simple matrix we know is the identity matrix Iₙ. Now A ∼ Iₙ ⟺ there is a nonsingular P such that A = P⁻¹IₙP = Iₙ. Thus THE ONLY MATRIX SIMILAR TO Iₙ IS ITSELF.

The next class of simple matrices are the DIAGONAL MATRICES. So we now ask: which n×n matrices are similar to diagonal matrices? Suppose A is an n×n matrix similar to a diagonal matrix

D = diag(λ₁, λ₂, ....., λₙ)

(the λᵢ not necessarily distinct). Then there exists a nonsingular matrix P such that

P⁻¹AP = D, i.e. AP = PD. ........(1)

Let

P = | P₁₁  P₁₂  .....  P₁ₙ |    A = | a₁₁  .....  a₁ₙ |    Pᵢ = | P₁ᵢ |
    | P₂₁  P₂₂  .....  P₂ₙ |        | a₂₁  .....  a₂ₙ |         | P₂ᵢ |
    |  ⋮    ⋮           ⋮  |        | .....      ..... |        |  ⋮  |
    | Pₙ₁  Pₙ₂  .....  Pₙₙ |        | aₙ₁  .....  aₙₙ |         | Pₙᵢ |

Now the ith column of AP is

| a₁₁P₁ᵢ + a₁₂P₂ᵢ + ..... + a₁ₙPₙᵢ |
| a₂₁P₁ᵢ + a₂₂P₂ᵢ + ..... + a₂ₙPₙᵢ |
| ................................ |
| aₙ₁P₁ᵢ + aₙ₂P₂ᵢ + ..... + aₙₙPₙᵢ |

which equals APᵢ. Thus the ith column of AP, the L.H.S. of (1), is APᵢ. Now the ith column of PD is

| P₁ᵢλᵢ |
| P₂ᵢλᵢ | = λᵢPᵢ.
|   ⋮   |
| Pₙᵢλᵢ |

Thus the ith column of PD, the R.H.S. of (1), is λᵢPᵢ. Since L.H.S. = R.H.S., by (1) we have

APᵢ = λᵢPᵢ ; i = 1, 2, ....., n. ........(2)

Note that since P is nonsingular no column of P can be the zero vector; thus none of the column vectors Pᵢ is zero. Thus we conclude:

IF A IS SIMILAR TO A DIAGONAL MATRIX D, THEN THE DIAGONAL ENTRIES OF D MUST BE THE EIGENVALUES OF A, AND IF P⁻¹AP = D THEN THE ith COLUMN OF P MUST BE AN EIGENVECTOR CORRESPONDING TO THE EIGENVALUE WHICH IS THE ith DIAGONAL ENTRY OF D.

Note:


The n columns of P must be linearly independent since P is nonsingular, and thus these n columns give us n linearly independent eigenvectors of A. Thus the above result can be restated as follows: if A is similar to a diagonal matrix D with P⁻¹AP = D, then A has n linearly independent eigenvectors, namely the columns of P, and the ith diagonal entry of D is the eigenvalue corresponding to the ith column.

Conversely, it is now obvious that if A has n linearly independent eigenvectors then A is similar to a diagonal matrix: if P is the matrix whose ith column is the ith eigenvector, then D = P⁻¹AP is diagonal and the ith diagonal entry of D is the eigenvalue corresponding to the ith eigenvector.

When does a matrix have n linearly independent eigenvectors? It can be shown that a matrix A has n linearly independent eigenvectors ⟺ the algebraic multiplicity of each eigenvalue of A is equal to its geometric multiplicity. Thus

A IS SIMILAR TO A DIAGONAL MATRIX ⟺ FOR EVERY EIGENVALUE OF A, THE ALGEBRAIC MULTIPLICITY IS EQUAL TO ITS GEOMETRIC MULTIPLICITY.

RECALL: if C(λ) = (λ − λ₁)^{a₁} (λ − λ₂)^{a₂} ..... (λ − λₖ)^{aₖ}, where λ₁, λ₂, ....., λₖ are the distinct eigenvalues of A, then aᵢ is called the algebraic multiplicity of the eigenvalue λᵢ. Further, let

W_{λᵢ} = { x : Ax = λᵢx }

be the eigensubspace corresponding to λᵢ. Then gᵢ = dim W_{λᵢ} is called the geometric multiplicity of λᵢ. Therefore we have:

If A is an n×n matrix with C(λ) = (λ − λ₁)^{a₁} ..... (λ − λₖ)^{aₖ}, where λ₁, ....., λₖ are the distinct eigenvalues of A, then A is similar to a diagonal matrix ⟺ aᵢ = gᵢ (= dim W_{λᵢ}) for 1 ≤ i ≤ k.

Example: Let us now consider

A = | −9   4   4 |
    | −8   3   4 |
    | −16  8   7 |


On page 6 we found the characteristic polynomial of A as C(λ) = (λ + 1)²(λ − 3). Thus

λ₁ = −1, a₁ = 2 ;  λ₂ = 3, a₂ = 1.

On pages 3 and 4 we found:

W₋₁ = eigensubspace corresponding to λ = −1 = { x : x = A₁(1, 2, 0)ᵀ + A₂(1, 0, 2)ᵀ },
W₃ = eigensubspace corresponding to λ = 3 = { x : x = k(1, 1, 2)ᵀ }.

Thus dim W₋₁ = 2 and dim W₃ = 1, so that

a₁ = 2 = g₁ ;  a₂ = 1 = g₂,

and hence A must be similar to a diagonal matrix. How do we get P such that P⁻¹AP is a diagonal matrix? Recall that the columns of P must be linearly independent eigenvectors. From W₋₁ we get two linearly independent eigenvectors, namely (1, 2, 0)ᵀ and (1, 0, 2)ᵀ; and from W₃ we get a third, (1, 1, 2)ᵀ. Thus if we take these as columns and write

P = | 1  1  1 |
    | 2  0  1 |
    | 0  2  2 |


then

P⁻¹ = |  1   0  −½ |
      |  2  −1  −½ |
      | −2   1   1 |

and it can be verified that

P⁻¹AP = |  1   0  −½ | | −9   4   4 | | 1  1  1 |   | −1   0  0 |
        |  2  −1  −½ | | −8   3   4 | | 2  0  1 | = |  0  −1  0 |
        | −2   1   1 | | −16  8   7 | | 0  2  2 |   |  0   0  3 |

a diagonal matrix. Thus we can conclude: A is similar to a diagonal matrix, i.e. P⁻¹AP = D, ⟺ A has n linearly independent eigenvectors, namely the n columns of P.

Conversely, if A has n linearly independent eigenvectors, then P⁻¹AP is a diagonal matrix, where the columns of P are taken to be the n linearly independent eigenvectors. We shall now see a class of matrices for which it is easy to decide whether they are similar to a diagonal matrix, and for which P⁻¹ is easy to compute. But we shall first introduce some preliminaries.
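The diagonalization just carried out can be verified in a few lines:

```python
import numpy as np

A = np.array([[-9., 4, 4],
              [-8., 3, 4],
              [-16., 8, 7]])
# Columns: two eigenvectors for lambda = -1, one for lambda = 3.
P = np.array([[1., 1, 1],
              [2., 0, 1],
              [0., 2, 2]])

D = np.linalg.inv(P) @ A @ P    # should be diag(-1, -1, 3)
```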

If x = (x₁, x₂, ....., xₙ)ᵀ and y = (y₁, y₂, ....., yₙ)ᵀ are any two vectors in Cⁿ, we define the INNER PRODUCT of x with y (denoted (x, y)) as

(x, y) = x₁ȳ₁ + x₂ȳ₂ + ..... + xₙȳₙ = Σᵢ₌₁ⁿ xᵢȳᵢ.

Example 1: If x = (i, 2 + i, −1)ᵀ and y = (1, 1 − i, i)ᵀ, then

(x, y) = i(1) + (2 + i)(1 + i) + (−1)(−i)

= i + (1 + 3i) + i = 1 + 5i,

whereas

(y, x) = 1(−i) + (1 − i)(2 − i) + i(−1) = −i + (1 − 3i) − i = 1 − 5i.

We now observe some of the properties of the inner product.

(1) For any vector x in Cⁿ, we have

We now observe some of the properties of the inner product, below: (1) For any vector x in Cn, we have

(x, x) = Σᵢ₌₁ⁿ xᵢx̄ᵢ = Σᵢ₌₁ⁿ |xᵢ|²,

which is real and ≥ 0. Further,

(x, x) = 0 ⟺ xᵢ = 0 for 1 ≤ i ≤ n ⟺ x = θₙ.

Thus (x, x) is real and ≥ 0, and (x, x) = 0 ⟺ x = θₙ.

(2)

(x, y) = Σᵢ₌₁ⁿ xᵢȳᵢ = conj( Σᵢ₌₁ⁿ yᵢx̄ᵢ ) = conj((y, x)).

Thus (x, y) = conj((y, x)).

(3) For any complex number α, we have

(αx, y) = Σᵢ₌₁ⁿ (αxᵢ)ȳᵢ = α Σᵢ₌₁ⁿ xᵢȳᵢ = α(x, y).

Thus

(αx, y) = α(x, y) for any complex number α. We also note

(x, αy) = conj((αy, x)) = conj(α(y, x)) = conj(α) · conj((y, x)) = conj(α) · (x, y).

(4)

(x + y, z) = Σᵢ₌₁ⁿ (xᵢ + yᵢ)z̄ᵢ = Σᵢ₌₁ⁿ xᵢz̄ᵢ + Σᵢ₌₁ⁿ yᵢz̄ᵢ = (x, z) + (y, z).

Thus (x + y, z) = (x, z) + (y, z), and similarly (x, y + z) = (x, y) + (x, z).

We say that two vectors x and y are ORTHOGONAL if (x, y) = 0.
Example:

(1) If x = (1, i, −i)ᵀ and y = (−1, i, 0)ᵀ, then

(x, y) = 1(−1) + i(−i) + (−i)(0) = −1 + 1 + 0 = 0.

Thus x and y are orthogonal.

(2) If x = (1, −i, 1)ᵀ and y = (1, a, −i)ᵀ, then

(x, y) = 1(1) + (−i)ā + 1(i) = (1 + i) − iā.

x, y orthogonal ⟺ (1 + i) − iā = 0 ⟺ ā = (1 + i)/i = −i(1 + i) = 1 − i ⟺ a = 1 + i.
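The inner product defined above (first slot linear, conjugate on the second slot) can be written directly, reproducing Example 1:

```python
import numpy as np

def inner(x, y):
    """Inner product (x, y) = sum_i x_i * conj(y_i), as defined above."""
    return np.sum(x * np.conj(y))

x = np.array([1j, 2 + 1j, -1])
y = np.array([1, 1 - 1j, 1j])

v = inner(x, y)   # 1 + 5i
w = inner(y, x)   # the conjugate, 1 - 5i
```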


HERMITIAN MATRICES

Let A = (aᵢⱼ) be an n×n matrix. We define the Hermitian conjugate A* of A as A* = (a*ᵢⱼ), where a*ᵢⱼ = āⱼᵢ; that is, A* is the conjugate of the transpose of A.

Example 1:

A = | 1  i |    transpose of A = | 1  i |    A* = |  1  −i |
    | i  i |                     | i  i |          | −i  −i |

Example 2:

A = |  1  i |    transpose of A = | 1  −i |    A* = |  1  i |
    | −i  2 |                     | i   2 |          | −i  2 |

Observe that in Example 1, A* ≠ A, whereas in Example 2, A* = A.

DEFINITION: An n×n matrix A is said to be HERMITIAN if A* = A.

We now state some properties of Hermitian matrices.

(1) If A = (aᵢⱼ), A* = (a*ᵢⱼ) and A = A*, then aᵢᵢ = a*ᵢᵢ = āᵢᵢ. Thus THE DIAGONAL ENTRIES OF A HERMITIAN MATRIX ARE REAL.


(2) Let x = (x₁, ....., xₙ)ᵀ and y = (y₁, ....., yₙ)ᵀ be any two vectors in Cⁿ, with

(Ax)ᵢ = Σⱼ₌₁ⁿ aᵢⱼxⱼ ;  (Ay)ⱼ = Σᵢ₌₁ⁿ aⱼᵢyᵢ.

Now

(Ax, y) = Σᵢ₌₁ⁿ (Ax)ᵢȳᵢ = Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ aᵢⱼxⱼȳᵢ
        = Σⱼ₌₁ⁿ xⱼ ( Σᵢ₌₁ⁿ aᵢⱼȳᵢ )
        = Σⱼ₌₁ⁿ xⱼ conj( Σᵢ₌₁ⁿ āᵢⱼyᵢ )
        = Σⱼ₌₁ⁿ xⱼ conj( Σᵢ₌₁ⁿ aⱼᵢyᵢ )   (since āᵢⱼ = aⱼᵢ, as A = A*)
        = Σⱼ₌₁ⁿ xⱼ conj((Ay)ⱼ)
        = (x, Ay).

Thus IF A IS HERMITIAN THEN

(Ax, y) = (x, Ay)

FOR ANY TWO VECTORS x, y.

(3) Let λ be any eigenvalue of A. Then there is an x ∈ Cⁿ, x ≠ θₙ, such that Ax = λx. Now,

λ(x, x) = (λx, x) = (Ax, x) = (x, Ax)   (A is Hermitian)
        = (x, λx) = λ̄(x, x).

Hence (λ − λ̄)(x, x) = 0. But (x, x) ≠ 0 since x ≠ θₙ; hence λ − λ̄ = 0, i.e. λ is real.

THUS THE EIGENVALUES OF A HERMITIAN MATRIX ARE ALL REAL.

(4) Let λ, μ be two different eigenvalues of A and x, y corresponding eigenvectors. We have

Ax = λx and Ay = μy,

and λ, μ are real by (3). Now,

λ(x, y) = (λx, y) = (Ax, y) = (x, Ay)   (by (2))
        = (x, μy) = μ̄(x, y) = μ(x, y)   (since μ is real).

Hence (λ − μ)(x, y) = 0. But λ ≠ μ; therefore (x, y) = 0, i.e. x and y are orthogonal.

THUS IF A IS A HERMITIAN MATRIX, THEN EIGENVECTORS CORRESPONDING TO DISTINCT EIGENVALUES ARE ORTHOGONAL.
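Properties (3) and (4) can be observed numerically on the Hermitian matrix of Example 2 (its two eigenvalues are distinct, so its eigenvectors must be orthogonal):

```python
import numpy as np

# The Hermitian matrix of Example 2 above.
A = np.array([[1, 1j], [-1j, 2]])

herm = np.allclose(A, A.conj().T)       # A* = A

lam, V = np.linalg.eig(A)

real_eigs = np.allclose(lam.imag, 0)                   # eigenvalues are real
orthogonal = np.isclose(np.vdot(V[:, 0], V[:, 1]), 0)  # eigenvectors orthogonal
```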


## Gramm Schmidt Orthonormalization

We shall now discuss the Gramm Schmidt orthonormalization process. Let U₁, U₂, ....., Uₖ be k linearly independent vectors in Cⁿ, and let W be the subspace they span. The Gramm Schmidt process is a method to get an orthonormal set φ₁, φ₂, ....., φₖ such that the subspace spanned by U₁, ....., Uₖ is the same as the subspace spanned by φ₁, ....., φₖ, thus providing an orthonormal basis for W. The process goes as follows. Let

ψ₁ = U₁ ;  φ₁ = ψ₁/‖ψ₁‖, where ‖ψ₁‖ = √(ψ₁, ψ₁).

Note ‖φ₁‖ = 1. Next, let

ψ₂ = U₂ − (U₂, φ₁)φ₁.

Note that

(ψ₂, φ₁) = (U₂, φ₁) − ((U₂, φ₁)φ₁, φ₁) = (U₂, φ₁) − (U₂, φ₁)(φ₁, φ₁) = 0   (since (φ₁, φ₁) = 1),

so ψ₂ ⊥ φ₁. Let

φ₂ = ψ₂/‖ψ₂‖.

Clearly ‖φ₂‖ = 1, ‖φ₁‖ = 1, (φ₁, φ₂) = 0. Also, if

x = α₁U₁ + α₂U₂,

then

x = α₁‖ψ₁‖φ₁ + α₂[‖ψ₂‖φ₂ + (U₂, φ₁)φ₁],


so x = β₁φ₁ + β₂φ₂, where

β₁ = α₁‖ψ₁‖ + α₂(U₂, φ₁) ;  β₂ = α₂‖ψ₂‖.

Thus x ∈ subspace spanned by U₁, U₂ ⟺ x ∈ subspace spanned by φ₁, φ₂, and φ₁, φ₂ is an orthonormal basis for the subspace [U₁, U₂].

Having defined φ₁, φ₂, ....., φᵢ₋₁, we define φᵢ as follows:

ψᵢ = Uᵢ − Σₚ₌₁^{i−1} (Uᵢ, φₚ)φₚ.

Clearly

(ψᵢ, φₚ) = 0 for 1 ≤ p ≤ i − 1,

and we set

φᵢ = ψᵢ/‖ψᵢ‖.

Obviously ‖φᵢ‖ = 1 and (φᵢ, φⱼ) = 0 for 1 ≤ j ≤ i − 1, and x ∈ [U₁, U₂, ....., Uᵢ] ⟺ x ∈ [φ₁, ....., φᵢ]; thus φ₁, φ₂, ....., φᵢ is an orthonormal basis for [U₁, ....., Uᵢ]. At the kth stage we get an orthonormal basis φ₁, ....., φₖ for [U₁, ....., Uₖ].

Example:

Let

U₁ = (1, 1, 1, 0)ᵀ ;  U₂ = (1, 1, −1, 0)ᵀ ;  U₃ = (2, 1, 1, 0)ᵀ

be linearly independent vectors in R⁴. Let us find an orthonormal basis for the subspace spanned by U₁, U₂, U₃ using the Gramm Schmidt process.

ψ₁ = U₁ = (1, 1, 1, 0)ᵀ ;  ‖ψ₁‖ = √3 ;  φ₁ = ψ₁/‖ψ₁‖


= (1/√3, 1/√3, 1/√3, 0)ᵀ.

ψ₂ = U₂ − (U₂, φ₁)φ₁ = (1, 1, −1, 0)ᵀ − (1/√3)(1/√3, 1/√3, 1/√3, 0)ᵀ
   = (1, 1, −1, 0)ᵀ − (1/3, 1/3, 1/3, 0)ᵀ = (2/3, 2/3, −4/3, 0)ᵀ,

and

‖ψ₂‖ = √(4/9 + 4/9 + 16/9) = (2/3)√6,

so

φ₂ = ψ₂/‖ψ₂‖ = (1/√6, 1/√6, −2/√6, 0)ᵀ.


Thus

φ₂ = (1/√6, 1/√6, −2/√6, 0)ᵀ.

Finally,

ψ₃ = U₃ − (U₃, φ₁)φ₁ − (U₃, φ₂)φ₂
   = (2, 1, 1, 0)ᵀ − (4/√3)(1/√3, 1/√3, 1/√3, 0)ᵀ − (1/√6)(1/√6, 1/√6, −2/√6, 0)ᵀ
   = (2, 1, 1, 0)ᵀ − (4/3, 4/3, 4/3, 0)ᵀ − (1/6, 1/6, −1/3, 0)ᵀ = (1/2, −1/2, 0, 0)ᵀ,

‖ψ₃‖ = √(1/4 + 1/4) = 1/√2,

φ₃ = ψ₃/‖ψ₃‖ = (1/√2, −1/√2, 0, 0)ᵀ.


Thus the required orthonormal basis for W, the subspace spanned by U₁, U₂, U₃, is φ₁, φ₂, φ₃, where

φ₁ = (1/√3, 1/√3, 1/√3, 0)ᵀ ;  φ₂ = (1/√6, 1/√6, −2/√6, 0)ᵀ ;  φ₃ = (1/√2, −1/√2, 0, 0)ᵀ.
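The process above can be sketched as a short routine (real case; for complex vectors the conjugating inner product defined earlier would replace `np.dot`). It reproduces the φᵢ of this example:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors (classical Gram-Schmidt)."""
    basis = []
    for u in vectors:
        # Subtract the projections onto the already-built orthonormal vectors.
        psi = u - sum(np.dot(u, phi) * phi for phi in basis)
        basis.append(psi / np.linalg.norm(psi))
    return basis

U1 = np.array([1., 1, 1, 0])
U2 = np.array([1., 1, -1, 0])
U3 = np.array([2., 1, 1, 0])

phi1, phi2, phi3 = gram_schmidt([U1, U2, U3])
G = np.array([phi1, phi2, phi3])   # rows are orthonormal
```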

Note that these φᵢ are mutually orthogonal and each has length one.

We now get back to Hermitian matrices. We have seen that the eigenvalues of a Hermitian matrix are all real, and that eigenvectors corresponding to distinct eigenvalues are mutually orthogonal. We can further show the following (we shall not give a proof here, but illustrate with an example). Let A be any n×n Hermitian matrix and let

C(λ) = (λ − λ₁)^{a₁} (λ − λ₂)^{a₂} ..... (λ − λₖ)^{aₖ}

be its characteristic polynomial, where λ₁, λ₂, ....., λₖ are its distinct eigenvalues and a₁, ....., aₖ are their algebraic multiplicities. If Wᵢ is the characteristic subspace corresponding to the eigenvalue λᵢ, that is,

Wᵢ = { x : Ax = λᵢx },

then it can be shown that dim Wᵢ = aᵢ. We then choose any basis for Wᵢ and orthonormalize it by the G-S process to get an orthonormal basis for Wᵢ. If we now take all these orthonormal basis vectors for W₁, ....., Wₖ and write them as the columns of a matrix P, then P*AP will be a diagonal matrix.

Example:

A = | 6   −2   2 |
    | −2   3  −1 |
    | 2   −1   3 |

Notice A* = Āᵀ = Aᵀ = A, since A is real and symmetric. Thus the matrix A is Hermitian.


Characteristic polynomial of A:

| λI − A | = | λ − 6    2    −2   |
             |   2    λ − 3   1   |
             |  −2      1   λ − 3 |

R1 → R1 + 2R2:

= | λ − 2  2(λ − 2)   0   |              = (λ − 2) |  1    2     0   |
  |   2     λ − 3     1   |                        |  2  λ − 3   1   |
  |  −2       1     λ − 3 |                        | −2    1   λ − 3 |

R2 → R2 − 2R1, R3 → R3 + 2R1:

= (λ − 2) | 1     2      0   |
          | 0   λ − 7    1   |
          | 0     5    λ − 3 |

= (λ − 2)[(λ − 7)(λ − 3) − 5] = (λ − 2)(λ² − 10λ + 16) = (λ − 2)(λ − 2)(λ − 8).

Thus

C(λ) = (λ − 2)²(λ − 8),  with  λ₁ = 2, a₁ = 2 ;

λ₂ = 8, a₂ = 1.

The characteristic subspaces:

W₁ = { x : Ax = 2x } = { x : (A − 2I)x = θ }.

We have to solve (A − 2I)x = θ, i.e.

| 4   −2   2 | |x₁|   |0|
| −2   1  −1 | |x₂| = |0|
| 2   −1   1 | |x₃|   |0|

⟹ 2x₁ − x₂ + x₃ = 0 ⟹ x₃ = −2x₁ + x₂

⟹ x = (x₁, x₂, −2x₁ + x₂)ᵀ ;  x₁, x₂ arbitrary.

W₁ = { x : x = α(1, 0, −2)ᵀ + β(0, 1, 1)ᵀ ;  α, β scalars }.

A basis for W₁ is

U₁ = (1, 0, −2)ᵀ ;  U₂ = (0, 1, 1)ᵀ.
We now orthonormalize this:

ψ₁ = U₁ = (1, 0, −2)ᵀ ;  ‖ψ₁‖ = √5 ;  φ₁ = ψ₁/‖ψ₁‖ = (1/√5, 0, −2/√5)ᵀ.

ψ₂ = U₂ − (U₂, φ₁)φ₁ = (0, 1, 1)ᵀ − (−2/√5)(1/√5, 0, −2/√5)ᵀ = (0, 1, 1)ᵀ + (2/5, 0, −4/5)ᵀ = (2/5, 1, 1/5)ᵀ,

‖ψ₂‖ = √(4/25 + 1 + 1/25) = √(30/25) = √30/5,

φ₂ = ψ₂/‖ψ₂‖ = (2/√30, 5/√30, 1/√30)ᵀ.

φ₁, φ₂ is an orthonormal basis for W₁.

W₂ = { x : Ax = 8x } = { x : (A − 8I)x = θ }.

So we have to solve (A − 8I)x = θ, i.e.

| −2   −2    2 | |x₁|   |0|
| −2   −5   −1 | |x₂| = |0|
|  2   −1   −5 | |x₃|   |0|

This yields x₁ = −2x₂ = 2x₃, and therefore the general solution is

x = α(2, −1, 1)ᵀ.

Basis: U₃ = (2, −1, 1)ᵀ. Orthonormalize (only one step):

ψ₃ = U₃ = (2, −1, 1)ᵀ ;  ‖ψ₃‖ = √6 ;  φ₃ = ψ₃/‖ψ₃‖ = (2/√6, −1/√6, 1/√6)ᵀ.

If

P = | 1/√5   2/√30    2/√6 |
    |  0     5/√30   −1/√6 |
    | −2/√5  1/√30    1/√6 |

then P* = P⁻¹ and

P*AP = P⁻¹AP = | 2  0  0 |
               | 0  2  0 |
               | 0  0  8 |

a diagonal matrix.
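The orthogonality of P and the diagonalization P*AP = diag(2, 2, 8) can be verified directly (P is real here, so P* = Pᵀ):

```python
import numpy as np

A = np.array([[6., -2, 2],
              [-2., 3, -1],
              [2., -1, 3]])

s5, s30, s6 = np.sqrt(5), np.sqrt(30), np.sqrt(6)
P = np.array([[1/s5, 2/s30,  2/s6],
              [0,    5/s30, -1/s6],
              [-2/s5, 1/s30, 1/s6]])

orthogonal = np.allclose(P.T @ P, np.eye(3))   # P* = P^{-1}
D = P.T @ A @ P                                # should be diag(2, 2, 8)
```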


## VECTOR AND MATRIX NORMS

Consider the space

R² = { x = (x₁, x₂)ᵀ : x₁, x₂ ∈ R },

our usual two-dimensional plane, with the usual length

‖x‖ = √(x₁² + x₂²).

We observe that

(i) ‖x‖ ≥ 0 for every vector x in R², and ‖x‖ = 0 if and only if x = θ;
(ii) ‖αx‖ = |α|‖x‖ for any scalar α and any vector x;
(iii) ‖x + y‖ ≤ ‖x‖ + ‖y‖ for any two vectors x and y (the triangle inequality).

We now generalize this idea to define the concept of a norm on Cⁿ or Rⁿ. A norm on a vector space V is a rule which associates with each vector x in V a real number ‖x‖ satisfying

(i) ‖x‖ ≥ 0 for every x ∈ V, and ‖x‖ = 0 if and only if x = θ;
(ii) ‖αx‖ = |α|‖x‖ for every scalar α and every vector x in V;
(iii) ‖x + y‖ ≤ ‖x‖ + ‖y‖ for every x, y in V.

Let x = (x₁, x₂, ....., xₙ)ᵀ be any vector in Cⁿ (or Rⁿ).


We can define various norms as follows:

(1) ‖x‖₂ = ( |x₁|² + |x₂|² + ..... + |xₙ|² )^{1/2} = ( Σᵢ₌₁ⁿ |xᵢ|² )^{1/2}

(2) ‖x‖₁ = |x₁| + |x₂| + ..... + |xₙ| = Σᵢ₌₁ⁿ |xᵢ|

(3) ‖x‖_p = ( Σᵢ₌₁ⁿ |xᵢ|^p )^{1/p},  p ≥ 1.

If we set p = 2 in (3) we get ‖x‖₂ as in (1), and if we set p = 1 in (3) we get ‖x‖₁ as in (2).

(4) ‖x‖_∞ = max { |x₁|, |x₂|, ....., |xₙ| }.

All these can be verified to satisfy the conditions (i), (ii) and (iii) required of a norm. Thus these give several types of norms on Cⁿ and Rⁿ.

Example:

(1) Let x = (1, −2, 1)ᵀ in R³. Then

‖x‖₁ = 1 + 2 + 1 = 4
‖x‖₂ = (1 + 4 + 1)^{1/2} = √6
‖x‖_∞ = max { 1, 2, 1 } = 2
‖x‖₄ = ( 1⁴ + 2⁴ + 1⁴ )^{1/4} = 18^{1/4}
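These values can be checked with `np.linalg.norm`, which implements exactly the p-norms defined above:

```python
import numpy as np

x = np.array([1., -2, 1])

n1   = np.linalg.norm(x, 1)        # 4
n2   = np.linalg.norm(x, 2)        # sqrt(6)
ninf = np.linalg.norm(x, np.inf)   # 2
n4   = np.linalg.norm(x, 4)        # 18 ** (1/4)
```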


(2) Let x = (1, i, 2i)ᵀ in C³. Then

‖x‖₁ = 1 + 1 + 2 = 4
‖x‖₂ = (1 + 1 + 4)^{1/2} = √6
‖x‖_∞ = max { 1, 1, 2 } = 2
‖x‖₃ = ( 1 + 1 + 8 )^{1/3} = 10^{1/3}

Consider a sequence { x^(k) } of vectors in Cⁿ (or Rⁿ),

x^(k) = ( x^(k)₁, x^(k)₂, ....., x^(k)ₙ )ᵀ,

and suppose x = (x₁, x₂, ....., xₙ)ᵀ ∈ Cⁿ (or Rⁿ).

DEFINITION: We say that the sequence { x^(k) } converges to x if, for every i = 1, 2, ....., n, the sequence of numbers x^(k)ᵢ converges to xᵢ as k → ∞.


Example: Let

x^(k) = ( 1/k, 1 − 2/k, 1/(k² + 1) )ᵀ

be a sequence of vectors in R³, and let x = (0, 1, 0)ᵀ. Here

x^(k)₁ = 1/k → 0 = x₁
x^(k)₂ = 1 − 2/k → 1 = x₂
x^(k)₃ = 1/(k² + 1) → 0 = x₃

so x^(k)ᵢ → xᵢ for i = 1, 2, 3, i.e. x^(k) → x.

If { x^(k) } is a sequence of vectors such that, in some norm, the sequence of real numbers ‖x^(k) − x‖ converges to 0, then we say that the sequence converges to x with respect to this norm. We then write x^(k) → x.

For example, consider the sequence x^(k) = (1/k, 1 − 2/k, 1/(k² + 1))ᵀ in R³ as before, and x = (0, 1, 0)ᵀ. We have


x^(k) − x = ( 1/k, −2/k, 1/(k² + 1) )ᵀ.

Now

‖x^(k) − x‖₁ = 1/k + 2/k + 1/(k² + 1) → 0, so x^(k) → x in ‖·‖₁.

Similarly,

‖x^(k) − x‖_∞ = max { 1/k, 2/k, 1/(k² + 1) } = 2/k → 0, so x^(k) → x in ‖·‖_∞;

‖x^(k) − x‖₂ = ( 1/k² + 4/k² + 1/(k² + 1)² )^{1/2} → 0, so x^(k) → x in ‖·‖₂.

Also,

‖x^(k) − x‖_p = ( 1/kᵖ + 2ᵖ/kᵖ + 1/(k² + 1)ᵖ )^{1/p} → 0, so x^(k) → x in ‖·‖_p for 1 ≤ p < ∞.
IF A SEQUENCE { x^(k) } OF VECTORS IN Cⁿ (or Rⁿ) CONVERGES TO A VECTOR x IN Cⁿ (or Rⁿ) WITH RESPECT TO ONE VECTOR NORM, THEN THE SEQUENCE CONVERGES TO x WITH RESPECT TO ALL VECTOR NORMS, AND ALSO CONVERGES TO x ACCORDING TO THE DEFINITION ON PAGE 40. CONVERSELY, IF A SEQUENCE CONVERGES TO x AS PER THE DEFINITION ON PAGE 40, THEN IT CONVERGES WITH RESPECT TO ALL VECTOR NORMS.

Thus when we want to check the convergence of a sequence of vectors, we can choose whichever norm is convenient for that sequence.

MATRIX NORMS

Let M be the set of all n×n matrices (real or complex). A matrix norm is a rule which associates a real number ‖A‖ with each matrix A, satisfying

(i) ‖A‖ ≥ 0, and ‖A‖ = 0 if and only if A = Oₙ;
(ii) ‖αA‖ = |α|‖A‖ for every scalar α and every matrix A;
(iii) ‖A + B‖ ≤ ‖A‖ + ‖B‖ for all matrices A and B;
(iv) ‖AB‖ ≤ ‖A‖‖B‖ for all matrices A and B.

Before we give examples of matrix norms, we shall see a method of getting a matrix norm starting from a vector norm. Suppose ‖·‖ is a vector norm. For an n×n matrix A and x ≠ θₙ, the ratio

‖Ax‖/‖x‖

gives an idea of the proportion by which the matrix A has distorted the length of x. Taking the maximum distortion as x varies over all nonzero vectors, we get a real number, and we define

‖A‖ = max_{x ≠ θₙ} ‖Ax‖/‖x‖.

We can show this is a matrix norm; it is called the matrix norm subordinate to the vector norm ‖·‖. We can also show that


$$\|A\| = \max_{x \neq \theta_n} \frac{\|Ax\|}{\|x\|} = \max_{\|x\| = 1} \|Ax\|.$$

For example,

$$\|A\|_1 = \max_{\|x\|_1 = 1} \|Ax\|_1, \qquad \|A\|_2 = \max_{\|x\|_2 = 1} \|Ax\|_2, \qquad \|A\|_\infty = \max_{\|x\|_\infty = 1} \|Ax\|_\infty, \qquad \|A\|_p = \max_{\|x\|_p = 1} \|Ax\|_p.$$

How hard or easy is it to compute these matrix norms? We shall give some idea of computing ‖A‖₁, ‖A‖∞ and ‖A‖₂ for a matrix A.

Let

a 11 a A = 21 ..... a n1

a 12 a 22 ..... an2

## ..... ..... ..... .....

a1n a 2n ..... a nn

The sum of the absolute values of the entries in the ith column is called the absolute column sum and is denoted by Ci. We have

## C1 = a11 + a 21 + a31 + ..... + a n1 = a i1

C 2 = a12 + a 22 + a32 + ..... + a n 2 = ai 2
i =1

i =1 n

.. .. .. .. .. .. ..


$$C_j = \sum_{i=1}^{n} |a_{ij}|, \qquad 1 \le j \le n.$$

Define

$$C = \max\{C_1, C_2, \ldots, C_n\}.$$

This is called the maximum absolute column sum (MACS). We can show that

$$\|A\|_1 = C = \max_{1 \le j \le n} \sum_{i=1}^{n} |a_{ij}|.$$

For example, if

$$A = \begin{pmatrix} 1 & 2 & 3 \\ 1 & 0 & 1 \\ 3 & 2 & 4 \end{pmatrix},$$

then

$$C_1 = 1 + 1 + 3 = 5; \qquad C_2 = 2 + 0 + 2 = 4; \qquad C_3 = 3 + 1 + 4 = 8,$$

and C = max{5, 4, 8} = 8, so

$$\|A\|_1 = 8.$$
Similarly, we denote by Rᵢ the sum of the absolute values of the entries in the ith row:

$$R_1 = |a_{11}| + |a_{12}| + \cdots + |a_{1n}| = \sum_{j=1}^{n} |a_{1j}|,$$

$$R_2 = |a_{21}| + |a_{22}| + \cdots + |a_{2n}| = \sum_{j=1}^{n} |a_{2j}|,$$

and in general

$$R_i = |a_{i1}| + |a_{i2}| + \cdots + |a_{in}| = \sum_{j=1}^{n} |a_{ij}|;$$

and we define R, the maximum absolute row sum (MARS), as

$$R = \max\{R_1, \ldots, R_n\}.$$

It can be shown that

$$\|A\|_\infty = R = \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}|.$$

For example, for the matrix

$$A = \begin{pmatrix} 1 & 2 & 3 \\ 1 & 0 & 1 \\ 3 & 2 & 4 \end{pmatrix}$$

we have

$$R_1 = 1 + 2 + 3 = 6; \qquad R_2 = 1 + 0 + 1 = 2; \qquad R_3 = 3 + 2 + 4 = 9,$$

and R = max{6, 2, 9} = 9, so

$$\|A\|_\infty = 9.$$
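Both norms are immediate to compute from their definitions. A minimal sketch (plain Python; the helper names `macs` and `mars` are mine), using the 3×3 example above:

```python
def macs(A):
    """||A||_1: maximum absolute column sum."""
    n = len(A)
    return max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))

def mars(A):
    """||A||_inf: maximum absolute row sum."""
    return max(sum(abs(x) for x in row) for row in A)

A = [[1, 2, 3],
     [1, 0, 1],
     [3, 2, 4]]
print(macs(A), mars(A))   # 8 9
```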
The computation of ‖A‖₁ and ‖A‖∞ for a matrix is thus fairly easy. However, the computation of ‖A‖₂ is not very easy; it is somewhat easier in the case of a Hermitian matrix.

Let A be any n×n matrix, and let

$$C(\lambda) = (\lambda - \lambda_1)^{a_1} \cdots (\lambda - \lambda_k)^{a_k}$$

be its characteristic polynomial, where λ₁, λ₂, …, λₖ are the distinct characteristic values of A. Let

$$P = \max\{|\lambda_1|, |\lambda_2|, \ldots, |\lambda_k|\}.$$

This is called the spectral radius of A, and is also denoted by ‖A‖_sp.


For a Hermitian matrix A we can show that

$$\|A\|_2 = P = \|A\|_{sp}.$$

For example, for

$$A = \begin{pmatrix} 6 & 2 & 2 \\ 2 & 3 & 1 \\ 2 & 1 & 3 \end{pmatrix},$$

which is Hermitian, we found on page 33 the distinct eigenvalues λ₁ = 2, λ₂ = 8. Hence

$$\|A\|_{sp} = P = \max\{2, 8\} = 8,$$

and so ‖A‖₂ = ‖A‖_sp = 8.

If A is any general n×n matrix (not Hermitian), then let B = A*A. Then B* = A*A = B, and hence B is Hermitian; its eigenvalues are real, and in fact nonnegative. Let the distinct eigenvalues of B be μ₁, μ₂, …, μᵣ, and let

$$\mu = \max\{\mu_1, \mu_2, \ldots, \mu_r\}.$$

We can show that

$$\|A\|_2 = \sqrt{\mu}.$$
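For the Hermitian example above, ‖A‖₂ = 8 can be confirmed numerically. The notes do not prescribe a computational method here; the sketch below uses power iteration — a standard technique, swapped in by me — to estimate the dominant |eigenvalue| of a symmetric matrix, which for a Hermitian matrix equals ‖A‖₂:

```python
import math

def power_iteration(A, iters=200):
    """Estimate the dominant |eigenvalue| of a symmetric matrix A.
    For a symmetric (Hermitian) A this equals ||A||_2."""
    x = [1.0] * len(A)
    lam = 0.0
    for _ in range(iters):
        y = [sum(a * xi for a, xi in zip(row, x)) for row in A]  # y = A x
        lam = math.sqrt(sum(c * c for c in y))                   # ||A x||_2 with ||x||_2 = 1
        x = [c / lam for c in y]                                 # renormalize
    return lam

A = [[6, 2, 2],
     [2, 3, 1],
     [2, 1, 3]]
print(power_iteration(A))   # -> 8.0, the spectral radius found above
```

The starting vector must not be orthogonal to the dominant eigenvector; (1, 1, 1) works here.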
It follows from the definition of the matrix norm subordinate to a vector norm that

$$\|A\| = \max_{x \neq \theta_n} \frac{\|Ax\|}{\|x\|}.$$

For any x in Cⁿ or Rⁿ we have, if x ≠ θₙ,

$$\frac{\|Ax\|}{\|x\|} \le \max_{x \neq \theta_n} \frac{\|Ax\|}{\|x\|} = \|A\|,$$

and therefore

$$\|Ax\| \le \|A\| \, \|x\| \qquad \text{for all } x \neq \theta_n.$$

But this is obvious for x = θₙ. Thus, if ‖A‖ is a matrix norm subordinate to the vector norm ‖x‖, then

$$\|Ax\| \le \|A\| \, \|x\|$$

for every vector x in Cⁿ (or Rⁿ).


## Numerical analysis/ Computations of eigenvalues


COMPUTATION OF EIGENVALUES

In this section we shall discuss some standard methods for computing the eigenvalues of an n×n matrix. We shall also briefly discuss some methods for computing the eigenvectors corresponding to the eigenvalues. We first discuss some results regarding the general location of the eigenvalues.

Let A = (a_ij) be an n×n matrix, and let λ₁, λ₂, …, λₙ be its eigenvalues (including multiplicities). We defined the spectral radius

$$P = \|A\|_{sp} = \max\{|\lambda_1|, |\lambda_2|, \ldots, |\lambda_n|\}.$$

Thus if we draw a circle of radius P about the origin in the complex plane, all the eigenvalues of A lie on or inside this closed disc. Thus we have:

(A) If A is an n×n matrix, then all the eigenvalues of A lie in the closed disc {λ : |λ| ≤ P} in the complex plane.

This result gives us a disc inside which all the eigenvalues of A are located. However, to locate this disc we need P, and to find P we need the eigenvalues — so the result is not practically useful. From a theoretical point of view, though, it suggests the possibility of locating all the eigenvalues in some disc. We shall now look for other discs which can be easily located and inside which all the eigenvalues can be trapped.

Let ‖A‖ be any matrix norm. Then it can be shown that P ≤ ‖A‖. Thus if we draw a disc of radius ‖A‖ with the origin as centre, this disc is at least as big as the disc of (A) and hence traps all the eigenvalues. The idea, then, is to use a matrix norm that is easy to compute — for example ‖A‖∞ or ‖A‖₁, which are easily computed as MARS and MACS respectively. Thus we have:

(B) If A is an n×n matrix, then all its eigenvalues are trapped in the closed disc {λ : |λ| ≤ ‖A‖∞} and in the disc {λ : |λ| ≤ ‖A‖₁}.

(The idea is to use ‖A‖∞ if it is smaller than ‖A‖₁, and ‖A‖₁ if it is smaller than ‖A‖∞.)

COROLLARY (C): If A is Hermitian, all its eigenvalues are real, and hence all the eigenvalues lie in the intervals

$$\{\lambda \text{ real} : -P \le \lambda \le P\} \qquad \text{by (A)},$$


$$\{\lambda \text{ real} : -\|A\|_\infty \le \lambda \le \|A\|_\infty\}, \qquad \{\lambda \text{ real} : -\|A\|_1 \le \lambda \le \|A\|_1\} \qquad \text{by (B)}.$$

Example 1: Let

$$A = \begin{pmatrix} 1 & 1 & 2 \\ 1 & 2 & 3 \\ 1 & 2 & 0 \end{pmatrix}.$$

The absolute row sums are R₁ = 4, R₂ = 6, R₃ = 3, so

$$\|A\|_\infty = \text{MARS} = 6,$$

and all the eigenvalues lie in the disc {λ : |λ| ≤ 6}. The column sums are C₁ = 3, C₂ = 5, C₃ = 5, so

$$\|A\|_1 = \text{MACS} = 5.$$

The eigenvalues are therefore all in the disc {λ : |λ| ≤ 5}, which is smaller than the disc of radius ‖A‖∞ = 6; hence we use ‖A‖₁ and get the smaller disc.

The above results locate all the eigenvalues in one disc. The next set of results tries to isolate the eigenvalues, to some extent, in smaller discs. These results are due to GERSCHGORIN.

Let A = (a_ij) be an n×n matrix, with diagonal entries a₁₁, a₂₂, …, aₙₙ. Let Pᵢ denote the sum of the absolute values of the off-diagonal entries of A in the ith row:

$$P_i = |a_{i1}| + |a_{i2}| + \cdots + |a_{i,i-1}| + |a_{i,i+1}| + \cdots + |a_{in}|.$$


Now consider the discs:

G₁ : centre a₁₁, radius P₁ : {λ : |λ − a₁₁| ≤ P₁}

G₂ : centre a₂₂, radius P₂ : {λ : |λ − a₂₂| ≤ P₂}

and in general

Gᵢ : centre aᵢᵢ, radius Pᵢ : {λ : |λ − aᵢᵢ| ≤ Pᵢ}.

Thus we get n discs G₁, G₂, …, Gₙ. These are called the GERSCHGORIN DISCS of the matrix A. The first result of Gerschgorin is the following:

(D) Every eigenvalue of A lies in one of the Gerschgorin discs.

Example 2: Let

$$A = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 4 & 1 \\ 3 & 1 & -5 \end{pmatrix}.$$

The Gerschgorin discs are found as follows. The centres are a₁₁ = 1, a₂₂ = 4, a₃₃ = −5, and the radii are P₁ = 1, P₂ = 1, P₃ = 4:

G₁ : centre (1, 0), radius 1
G₂ : centre (4, 0), radius 1
G₃ : centre (−5, 0), radius 4.
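The discs are immediate to compute from the matrix. A small sketch (the helper name `gerschgorin_discs` is mine), using the matrix of Example 2:

```python
def gerschgorin_discs(A):
    """Return (centre, radius) for each Gerschgorin disc:
    centre a_ii, radius = sum of |off-diagonal entries| in row i."""
    n = len(A)
    return [(A[i][i], sum(abs(A[i][j]) for j in range(n) if j != i))
            for i in range(n)]

A = [[1, 1,  0],
     [0, 4,  1],
     [3, 1, -5]]
print(gerschgorin_discs(A))   # [(1, 1), (4, 1), (-5, 4)]
```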


(Figure: the three Gerschgorin discs G₁ centred at (1, 0), G₂ at (4, 0) and G₃ at (−5, 0) in the complex plane.)

Thus every eigenvalue of A must lie in one of these three discs.

Example 3:

Let A be a 3×3 matrix with diagonal entries a₁₁ = 10, a₂₂ = 10, a₃₃ = 20 and off-diagonal absolute row sums P₁ = 5, P₂ = 1.5, P₃ = 4.5. (It can be shown that the eigenvalues of the matrix considered here are exactly λ₁ = 8, λ₂ = 12, λ₃ = 20.) Thus we have the three Gerschgorin discs

$$G_1 = \{\lambda : |\lambda - 10| \le 5\}, \qquad G_2 = \{\lambda : |\lambda - 10| \le 1.5\}, \qquad G_3 = \{\lambda : |\lambda - 20| \le 4.5\}.$$


(Figure: the discs G₁ and G₃ in the complex plane; G₂ lies inside G₁.)

Thus all the eigenvalues of A are in these discs. But notice that the exact eigenvalues are 8, 12 and 20. Thus no eigenvalue lies in G₂; one eigenvalue lies in G₃ (namely 20) and two lie in G₁ (namely 8 and 12).

Example 4: Let

$$A = \begin{pmatrix} 1 & 0 & 1 \\ 1 & 2 & 0 \\ 1 & 0 & 5 \end{pmatrix}.$$

The centres are a₁₁ = 1, a₂₂ = 2, a₃₃ = 5, and the radii are P₁ = 1, P₂ = 1, P₃ = 1, so the Gerschgorin discs are

$$G_1 = \{\lambda : |\lambda - 1| \le 1\}, \qquad G_2 = \{\lambda : |\lambda - 2| \le 1\}, \qquad G_3 = \{\lambda : |\lambda - 5| \le 1\}.$$


(Figure: the discs G₁ centred at (1, 0) and G₂ at (2, 0) overlap; G₃ at (5, 0) is isolated.)

Thus every eigenvalue of A must lie in one of these three discs.

In Example 2 all the Gerschgorin discs were isolated, while in Examples 3 and 4 some discs intersected and others were isolated. The next Gerschgorin result identifies the location of the eigenvalues in such cases.

(E) If m of the Gerschgorin discs intersect to form a common connected region, and the remaining discs are isolated from this region, then exactly m eigenvalues lie in this common region. In particular, if a Gerschgorin disc is isolated from all the rest, then exactly one eigenvalue lies in that disc.

Thus in Example 2 all three discs are isolated, and each disc traps exactly one eigenvalue. In Example 3, G₁ and G₂ intersect to form a connected (shaded) region isolated from G₃; the shaded region contains two eigenvalues and G₃ contains one. In Example 4, G₁ and G₂ intersect to form a connected region (shaded portion) isolated from G₃; the shaded portion contains two eigenvalues and G₃ contains one.

REMARK: In the case of Hermitian matrices, since all the eigenvalues are real, the Gerschgorin discs Gᵢ = {λ : |λ − aᵢᵢ| ≤ Pᵢ} can be replaced by the Gerschgorin intervals

$$G_i = \{\lambda \text{ real} : a_{ii} - P_i \le \lambda \le a_{ii} + P_i\}.$$
Example 5: Let

$$A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 5 & 0 \\ 1 & 0 & -1/2 \end{pmatrix}.$$

Note that A is Hermitian (in fact real symmetric). Here the centres are a₁₁ = 1, a₂₂ = 5, a₃₃ = −1/2, and the radii are P₁ = 2, P₂ = 1, P₃ = 1. Thus the Gerschgorin intervals are

G₁ : −1 ≤ λ ≤ 3, G₂ : 4 ≤ λ ≤ 6, G₃ : −3/2 ≤ λ ≤ 1/2.

(Figure: the intervals G₃ and G₁ overlap on the real line; G₂ is isolated.)

Note that G₁ and G₃ intersect and give a connected region −3/2 ≤ λ ≤ 3, which is isolated from G₂ : 4 ≤ λ ≤ 6. Thus there will be two eigenvalues in −3/2 ≤ λ ≤ 3 and one eigenvalue in 4 ≤ λ ≤ 6.

All the above results (A), (B), (C), (D) and (E) locate the eigenvalues inside some discs; if the radii of these discs are small, their centres give good approximations of the eigenvalues. However, if the discs have large radius, we have to improve these approximations substantially. We shall now discuss this aspect of computing the eigenvalues more accurately, beginning with the problem of computing the eigenvalues of a real symmetric matrix.


COMPUTATION OF THE EIGENVALUES OF A REAL SYMMETRIC MATRIX

We shall first discuss the method of reducing the given matrix to a similar tridiagonal matrix, and then computing the eigenvalues of a real symmetric tridiagonal matrix. Thus the process of determining the eigenvalues of a real symmetric matrix A = (a_ij) involves two steps:

STEP 1: Find a real symmetric tridiagonal matrix T which is similar to A.

STEP 2: Find the eigenvalues of T. (The eigenvalues of A are the same as those of T, since A and T are similar.)

We shall first discuss Step 2.


EIGENVALUES OF A REAL SYMMETRIC TRIDIAGONAL MATRIX

Let

$$T = \begin{pmatrix} a_1 & b_1 & 0 & \cdots & 0 \\ b_1 & a_2 & b_2 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & b_{n-2} & a_{n-1} & b_{n-1} \\ 0 & \cdots & 0 & b_{n-1} & a_n \end{pmatrix}$$

be a real symmetric tridiagonal matrix. Let us find

$$P_n(\lambda) = \det[T - \lambda I] = \begin{vmatrix} a_1 - \lambda & b_1 & 0 & \cdots & 0 \\ b_1 & a_2 - \lambda & b_2 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & b_{n-2} & a_{n-1} - \lambda & b_{n-1} \\ 0 & \cdots & 0 & b_{n-1} & a_n - \lambda \end{vmatrix}.$$

The eigenvalues of T are precisely the roots of Pₙ(λ) = 0. (Without loss of generality we assume bᵢ ≠ 0 for all i; for if bᵢ = 0 for some i, the determinant splits into two diagonal blocks of the same type, and the problem reduces to problems of the same type involving smaller matrices.)

We define Pᵢ(λ) to be the ith principal minor of the above determinant. We have

$$P_0(\lambda) = 1, \qquad P_1(\lambda) = a_1 - \lambda,$$

$$P_i(\lambda) = (a_i - \lambda)\, P_{i-1}(\lambda) - b_{i-1}^2\, P_{i-2}(\lambda), \qquad i = 2, \ldots, n. \qquad \text{(I)}$$

What we are interested in is finding the zeros of Pₙ(λ). To do this we analyse the polynomials P₀(λ), P₁(λ), …, Pₙ(λ). Let C be any real number, and compute P₀(C), P₁(C), …, Pₙ(C) (which can be calculated recursively by (I)). Let N(C) denote the number of agreements in sign between consecutive terms in

the above sequence of values P₀(C), P₁(C), …, Pₙ(C). [If for some i, Pᵢ(C) = 0, we take its sign to be the same as that of Pᵢ₋₁(C).] Then we have:

(F) There are exactly N(C) eigenvalues of T that are ≥ C.

Example: Suppose an 8×8 real symmetric tridiagonal matrix T gives, at C = 1, the values

$$P_0(1) = 1, \quad P_1(1) = 2, \quad P_2(1) = -3, \quad P_3(1) = -2, \quad P_4(1) = 6,$$
$$P_5(1) = -1, \quad P_6(1) = 0, \quad P_7(1) = 4, \quad P_8(1) = -2.$$

Here

P₀(1), P₁(1) agree in sign;
P₂(1), P₃(1) agree in sign;
P₅(1), P₆(1) agree in sign

(since P₆(1) = 0, we take its sign to be that of P₅(1)). Thus three pairs of sign agreements occur, so N(1) = 3: there are 3 eigenvalues of T that are ≥ 1, and the remaining 5 eigenvalues are < 1.

It is this result (F), combined with (A), (B), (C), (D) and (E), and clever repeated applications of (F), that locate the eigenvalues of T. We now explain this by means of an example.
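The recurrence (I) and the sign-agreement count N(C) are straightforward to program. A sketch (the function name `sturm_count` is mine; the test matrix is a 4×4 real symmetric tridiagonal matrix with diagonal (1, −1, 2, 3) and off-diagonal (2, 4, 1)):

```python
def sturm_count(a, b, c):
    """N(c): number of sign agreements between consecutive terms of the
    sequence P0(c), ..., Pn(c); by result (F) this equals the number of
    eigenvalues of T that are >= c.  a: diagonal of T, b: off-diagonal."""
    seq = [1.0, a[0] - c]                      # P0(c), P1(c)
    for i in range(1, len(a)):                 # P_i = (a_i - c) P_{i-1} - b_{i-1}^2 P_{i-2}
        seq.append((a[i] - c) * seq[-1] - b[i - 1] ** 2 * seq[-2])
    count, prev = 0, 1                         # P0 = 1 is positive
    for p in seq[1:]:
        sign = prev if p == 0 else (1 if p > 0 else -1)   # a zero takes the previous sign
        if sign == prev:
            count += 1
        prev = sign
    return count

a, b = [1.0, -1.0, 2.0, 3.0], [2.0, 4.0, 1.0]
print(sturm_count(a, b, 0.0))   # 3  ->  three eigenvalues >= 0
```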


Example 7: Let

$$T = \begin{pmatrix} 1 & 2 & 0 & 0 \\ 2 & -1 & 4 & 0 \\ 0 & 4 & 2 & 1 \\ 0 & 0 & 1 & 3 \end{pmatrix}.$$

Here the absolute row sums are 3, 7, 7 and 4, and therefore

$$\|T\|_\infty = \text{MARS} = 7.$$

(Note that since T is symmetric we have MARS = MACS, and therefore ‖T‖₁ = ‖T‖∞.) Thus by result (C), all the eigenvalues lie in the interval −7 ≤ λ ≤ 7. Now the Gerschgorin intervals are as follows:

G₁ : centre 1, radius 2 : [−1, 3]
G₂ : centre −1, radius 6 : [−7, 5]
G₃ : centre 2, radius 5 : [−3, 7]
G₄ : centre 3, radius 1 : [2, 4]

(Figure: the four Gerschgorin intervals on the real line; their union is [−7, 7].)

We see that G₁, G₂, G₃ and G₄ all intersect to form one single connected region [−7, 7]. Thus by (E) there will be 4 eigenvalues in [−7, 7] — the same information as obtained above using (C). So far, then, we know all the eigenvalues are in [−7, 7]. Now we shall see how to use (F) to locate the eigenvalues.

First of all, let us see how many eigenvalues are ≥ 0. Let C = 0; find N(0), and the number of eigenvalues ≥ 0 will be N(0). Now

$$T - \lambda I = \begin{pmatrix} 1-\lambda & 2 & 0 & 0 \\ 2 & -1-\lambda & 4 & 0 \\ 0 & 4 & 2-\lambda & 1 \\ 0 & 0 & 1 & 3-\lambda \end{pmatrix},$$

and we have

$$P_0(\lambda) = 1, \qquad P_1(\lambda) = 1 - \lambda,$$
$$P_2(\lambda) = (-1 - \lambda)\, P_1(\lambda) - 4 P_0(\lambda),$$
$$P_3(\lambda) = (2 - \lambda)\, P_2(\lambda) - 16 P_1(\lambda),$$
$$P_4(\lambda) = (3 - \lambda)\, P_3(\lambda) - P_2(\lambda).$$

Hence

$$P_0(0) = 1, \quad P_1(0) = 1, \quad P_2(0) = -5, \quad P_3(0) = -26, \quad P_4(0) = -73.$$


We have P₀(0), P₁(0); P₂(0), P₃(0); and P₃(0), P₄(0) as three consecutive pairs having sign agreements, so

N(0) = 3.

There are 3 eigenvalues ≥ 0 and one eigenvalue < 0; i.e. there are 3 eigenvalues in [0, 7] and 1 eigenvalue in [−7, 0).

(Fig. 1: one eigenvalue in [−7, 0); three eigenvalues in [0, 7].)

Let us take C = −1 and calculate N(C). We have

$$P_0(-1) = 1, \quad P_1(-1) = 2, \quad P_2(-1) = -4, \quad P_3(-1) = -48, \quad P_4(-1) = -188.$$

Again we have N(−1) = 3: there are 3 eigenvalues ≥ −1. Comparing this with Fig. 1, we get:


(Fig. 2: one eigenvalue in [−7, −1); three eigenvalues in [−1, 7] — in fact, since N(0) = 3, in [0, 7].)

Let us take the midpoint of [−7, −1], in which the negative eigenvalue lies. So let C = −4:

$$P_0(-4) = 1, \quad P_1(-4) = 5, \quad P_2(-4) = 11, \quad P_3(-4) = -14, \quad P_4(-4) = -109.$$

Again there are three pairs of sign agreements, so N(−4) = 3: there are 3 eigenvalues ≥ −4. Comparing with Fig. 2, we get that the negative eigenvalue is in [−7, −4]. (*)

Let us try the midpoint C = −5.5. We have

$$P_0(-5.5) = 1, \quad P_1(-5.5) = 6.5, \quad P_2(-5.5) = 25.25, \quad P_3(-5.5) = 85.375, \quad P_4(-5.5) = 683.4375.$$

So N(−5.5) = 4: all 4 eigenvalues are ≥ −5.5. Combining this with (*) and Fig. 2, we get that the negative eigenvalue is in [−5.5, −4]. We again take the midpoint C, calculate N(C), locate in which half of the interval the negative eigenvalue lies, and continue this bisection process until we trap the negative eigenvalue in as small an interval as necessary.

Now let us look at the eigenvalues ≥ 0. From Fig. 2 there are three eigenvalues in [0, 7]. Take C = 1:

$$P_0(1) = 1, \quad P_1(1) = 0, \quad P_2(1) = -4, \quad P_3(1) = -4, \quad P_4(1) = -4.$$

N(1) = 3, so all three of these eigenvalues are ≥ 1. (**)

Take C = 2:

$$P_0(2) = 1, \quad P_1(2) = -1, \quad P_2(2) = -1, \quad P_3(2) = 16, \quad P_4(2) = 17.$$

N(2) = 2: there are two eigenvalues ≥ 2. Combining this with (**), we get one eigenvalue in [1, 2) and two in [2, 7].

Take C = 3:

$$P_0(3) = 1, \quad P_1(3) = -2, \quad P_2(3) = 4, \quad P_3(3) = 28, \quad P_4(3) = -4.$$

N(3) = 1: one eigenvalue is ≥ 3. Combining with the above observations, we get one eigenvalue in [1, 2), one eigenvalue in [2, 3) and one eigenvalue in [3, 7).

Let us locate the eigenvalue in [3, 7] a little better. Take C = midpoint = 5:

$$P_0(5) = 1, \quad P_1(5) = -4, \quad P_2(5) = 20, \quad P_3(5) = 4, \quad P_4(5) = -28.$$

N(5) = 1, so this eigenvalue is ≥ 5; it is in [5, 7]. Take the midpoint C = 6:

$$P_0(6) = 1, \quad P_1(6) = -5, \quad P_2(6) = 31, \quad P_3(6) = -44, \quad P_4(6) = 101.$$

N(6) = 0: no eigenvalue is ≥ 6, so the eigenvalue is in [5, 6).

Thus, combining all of the above, we have one eigenvalue in each of [−5.5, −4), [1, 2), [2, 3) and [5, 6). Each one of these locations can be further narrowed down by the bisection process applied to each of these intervals.

We shall now discuss the method of obtaining a real symmetric tridiagonal T similar to a given real symmetric matrix A.
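The whole bisection procedure above can be mechanized. A sketch (function names `count_ge` and `bisect_eigenvalue` are mine; `count_ge` is the Sturm count N(C) of result (F), applied here to the matrix T of this example):

```python
def count_ge(a, b, c):
    """N(c): Sturm sign-agreement count = number of eigenvalues of T >= c.
    a: diagonal of T, b: off-diagonal."""
    seq = [1.0, a[0] - c]
    for i in range(1, len(a)):
        seq.append((a[i] - c) * seq[-1] - b[i - 1] ** 2 * seq[-2])
    n, prev = 0, 1
    for p in seq[1:]:
        s = prev if p == 0 else (1 if p > 0 else -1)
        if s == prev:
            n += 1
        prev = s
    return n

def bisect_eigenvalue(a, b, lo, hi, k, tol=1e-10):
    """Trap the k-th largest eigenvalue of T, assumed to lie in [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_ge(a, b, mid) >= k:   # at least k eigenvalues >= mid
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# T of this example: diagonal (1, -1, 2, 3), off-diagonal (2, 4, 1)
a, b = [1.0, -1.0, 2.0, 3.0], [2.0, 4.0, 1.0]
print(bisect_eigenvalue(a, b, 5.0, 6.0, 1))     # the eigenvalue trapped in [5, 6)
print(bisect_eigenvalue(a, b, -5.5, -4.0, 4))   # the negative eigenvalue
```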


TRIDIAGONALIZATION OF A REAL SYMMETRIC MATRIX

Let A = (a_ij) be a real symmetric n×n matrix. Our aim is to get a real symmetric tridiagonal matrix T such that T is similar to A. The process of obtaining this T is called the Givens–Householder scheme. The idea is to first find a reduction that annihilates the off-tridiagonal entries in the first row and first column of A, and then to use this idea repeatedly. We shall first see some preliminaries.

Let

$$U = \begin{pmatrix} U_1 \\ U_2 \\ \vdots \\ U_n \end{pmatrix}$$

be a real n×1 vector (U ≠ θₙ). Then H = UUᵗ is an n×n real symmetric matrix. Let α be a real number (which we shall suitably choose) and consider

$$P = I - \alpha H = I - \alpha UU^t. \qquad \text{(I)}$$

We shall choose α such that P is its own inverse. (Note that Pᵗ = P.) So we need P² = I, i.e.

$$(I - \alpha UU^t)(I - \alpha UU^t) = I,$$
$$I - 2\alpha\, UU^t + \alpha^2\, UU^t UU^t = I.$$

So we choose α such that α² UUᵗUUᵗ = 2α UUᵗ. Obviously we choose α ≠ 0, because otherwise we get P = I and no new transformation. We need


$$\alpha\, UU^t UU^t = 2\, UU^t.$$

But UᵗU = U₁² + U₂² + ⋯ + Uₙ² is a real number ≠ 0, and UUᵗUUᵗ = U(UᵗU)Uᵗ = (UᵗU)\,UUᵗ, and thus we have

$$\alpha\,(U^t U)\, UU^t = 2\, UU^t \quad \Longrightarrow \quad \alpha = \frac{2}{U^t U}. \qquad \text{(II)}$$

Thus if U is an n×1 vector different from θₙ and α is as in (II), then P defined as

$$P = I - \alpha UU^t \qquad \text{(III)}$$

is such that

$$P = P^t = P^{-1}. \qquad \text{(IV)}$$

Now we go back to our problem of tridiagonalization of A. Our first aim is to find a P of the form (IV) such that PᵗAP = PAP has zero off-tridiagonal entries in the 1st row and 1st column. We can choose P as follows. Let

$$s^2 = a_{21}^2 + a_{31}^2 + \cdots + a_{n1}^2 \qquad \text{(V)}$$

(the sum of the squares of the entries below the 1st diagonal entry in the first column of A), and let s be the nonnegative square root of s². Let

$$U = \begin{pmatrix} 0 \\ a_{21} + s \cdot \operatorname{sgn} a_{21} \\ a_{31} \\ \vdots \\ a_{n1} \end{pmatrix}. \qquad \text{(VI)}$$

Thus U is the same as the 1st column of A, except that the 1st component is taken as 0 and the second component is a variation of the second component of the 1st column of A; all the others are the same as in the 1st column of A. Then


$$\frac{U^t U}{2} = \left[ (a_{21} + s \cdot \operatorname{sgn} a_{21})^2 + a_{31}^2 + a_{41}^2 + \cdots + a_{n1}^2 \right] / 2$$

$$= \left[ a_{21}^2 + s^2 + 2 s\,|a_{21}| + a_{31}^2 + \cdots + a_{n1}^2 \right] / 2$$

$$= \left[ a_{21}^2 + a_{31}^2 + \cdots + a_{n1}^2 + s^2 + 2 s\,|a_{21}| \right] / 2 = s^2 + s\,|a_{21}|,$$

and hence

$$\alpha = \frac{2}{U^t U} = \frac{1}{s^2 + s\,|a_{21}|}. \qquad \text{(VII)}$$

Thus if α is as in (VII) and U is as in (VI), where s is as in (V), then P = I − αUUᵗ satisfies P = Pᵗ = P⁻¹, and it can be shown that A₂ = PA₁P = PAP (taking A₁ = A) is similar to A and has zero off-tridiagonal entries in the 1st row and 1st column.

Now we apply this procedure to the matrix obtained by ignoring the 1st row and 1st column of A₂. Thus we now choose

$$s^2 = a_{32}^2 + a_{42}^2 + \cdots + a_{n2}^2$$

(where now the a_ij denote the entries of A₂; i.e. s² is the sum of the squares of the entries below the second diagonal entry of A₂), with s the positive square root of s², and


$$U = \begin{pmatrix} 0 \\ 0 \\ a_{32} + s \cdot \operatorname{sgn} a_{32} \\ a_{42} \\ \vdots \\ a_{n2} \end{pmatrix}, \qquad \alpha = \frac{1}{s^2 + s\,|a_{32}|}, \qquad P = I - \alpha UU^t.$$

Then A₃ = PA₂P has zero off-tridiagonal entries in the 1st and 2nd rows and columns. We proceed similarly, annihilate all the off-tridiagonal entries, and obtain T, real symmetric tridiagonal and similar to A.

Note: For an n×n matrix we obtain the tridiagonalization in n − 2 steps.

Example:

$$A = \begin{pmatrix} 5 & 4 & 1 & 1 \\ 4 & 5 & 1 & 1 \\ 1 & 1 & 4 & 2 \\ 1 & 1 & 2 & 4 \end{pmatrix}.$$

A is a real symmetric 4×4 matrix; thus we get the tridiagonalization after 4 − 2 = 2 steps.

Step 1:

$$s^2 = 4^2 + 1^2 + 1^2 = 18, \qquad s = \sqrt{18} = 4.24264,$$

$$\alpha = \frac{1}{s^2 + s\,|a_{21}|} = \frac{1}{18 + (4.24264)(4)} = \frac{1}{34.97056} = 0.02860.$$


$$U = \begin{pmatrix} 0 \\ a_{21} + s \cdot \operatorname{sgn} a_{21} \\ a_{31} \\ a_{41} \end{pmatrix} = \begin{pmatrix} 0 \\ 4 + 4.24264 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 8.24264 \\ 1 \\ 1 \end{pmatrix}.$$

With this α and U, P = I − αUUᵗ, and we get

$$A_2 = PAP = \begin{pmatrix} 5 & -4.24264 & 0 & 0 \\ -4.24264 & 6 & -1 & -1 \\ 0 & -1 & 3.5 & 1.5 \\ 0 & -1 & 1.5 & 3.5 \end{pmatrix}.$$

Step 2:

$$s^2 = (-1)^2 + (-1)^2 = 2, \qquad s = \sqrt{2} = 1.41421,$$

$$U = \begin{pmatrix} 0 \\ 0 \\ a_{32} + s \cdot \operatorname{sgn} a_{32} \\ a_{42} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ -1 - 1.41421 \\ -1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ -2.41421 \\ -1 \end{pmatrix}.$$


$$P = I - \alpha UU^t = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -0.70711 & -0.70711 \\ 0 & 0 & -0.70711 & 0.70711 \end{pmatrix},$$

$$A_3 = PA_2P = \begin{pmatrix} 5 & -4.24264 & 0 & 0 \\ -4.24264 & 6 & 1.41421 & 0 \\ 0 & 1.41421 & 5 & 0 \\ 0 & 0 & 0 & 2 \end{pmatrix},$$

which is tridiagonal.

Thus the Givens–Householder scheme for finding the eigenvalues involves two steps, namely:

STEP 1: Find a real symmetric tridiagonal T similar to A (by the method described above).

STEP 2: Find the eigenvalues of T (by the method of Sturm sequences and bisection described earlier).

However, it must be mentioned that this method is used mostly to calculate the eigenvalue of largest modulus, or to sharpen calculations done by some other method. If one wants to calculate all the eigenvalues at the same time, one uses the Jacobi iteration, which we now describe.
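The two Householder steps above generalize directly to the following sketch (pure Python; the function name is mine, and the update A ← PAP is applied in the standard rank-two form A − Uwᵗ − wUᵗ — an algebraic rearrangement, not something spelled out in the notes):

```python
import math

def householder_tridiagonalize(A):
    """Reduce a real symmetric matrix to a similar tridiagonal matrix
    using n-2 transformations P = I - alpha * U * U^t as described above."""
    n = len(A)
    A = [row[:] for row in A]                         # work on a copy
    for k in range(n - 2):
        s2 = sum(A[i][k] ** 2 for i in range(k + 1, n))
        s = math.sqrt(s2)
        if s == 0.0:
            continue                                  # column already tridiagonal
        sgn = 1.0 if A[k + 1][k] >= 0.0 else -1.0
        U = [0.0] * n
        U[k + 1] = A[k + 1][k] + s * sgn              # as in (VI)
        for i in range(k + 2, n):
            U[i] = A[i][k]
        alpha = 1.0 / (s2 + s * abs(A[k + 1][k]))     # as in (VII)
        # A <- P A P = A - U w^t - w U^t, where p = alpha*A*U and
        # w = p - (alpha/2)(p^t U) U  (rank-two form of the symmetric update)
        p = [alpha * sum(A[i][j] * U[j] for j in range(n)) for i in range(n)]
        c = 0.5 * alpha * sum(pi * ui for pi, ui in zip(p, U))
        w = [pi - c * ui for pi, ui in zip(p, U)]
        for i in range(n):
            for j in range(n):
                A[i][j] -= U[i] * w[j] + w[i] * U[j]
    return A

A = [[5, 4, 1, 1],
     [4, 5, 1, 1],
     [1, 1, 4, 2],
     [1, 1, 2, 4]]
T = householder_tridiagonalize(A)
```

For this A the result reproduces A₃ above: diagonal (5, 6, 5, 2) and off-diagonal (−4.24264, 1.41421, 0).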


JACOBI ITERATION FOR FINDING EIGENVALUES OF A REAL SYMMETRIC MATRIX

Some preliminaries: Let

$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{12} & a_{22} \end{pmatrix}$$

be a real symmetric matrix, and let

$$P = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}, \qquad P^t = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}.$$

Note that P is an orthogonal matrix. Now A₁ = PᵗAP has entries

$$(A_1)_{11} = a_{11}\cos^2\theta + 2a_{12}\sin\theta\cos\theta + a_{22}\sin^2\theta,$$
$$(A_1)_{12} = (A_1)_{21} = -(a_{11} - a_{22})\sin\theta\cos\theta + a_{12}(\cos^2\theta - \sin^2\theta),$$
$$(A_1)_{22} = a_{11}\sin^2\theta - 2a_{12}\sin\theta\cos\theta + a_{22}\cos^2\theta.$$

Thus if we choose θ such that

$$-(a_{11} - a_{22})\sin\theta\cos\theta + a_{12}(\cos^2\theta - \sin^2\theta) = 0, \qquad \text{(I)}$$

we get zero entries in the (1,2) and (2,1) positions of A₁. (I) can be written as

$$-\frac{a_{11} - a_{22}}{2}\sin 2\theta + a_{12}\cos 2\theta = 0,$$

which gives

$$a_{12}\cos 2\theta = \frac{a_{11} - a_{22}}{2}\sin 2\theta,$$


so that

$$\tan 2\theta = \frac{2a_{12}}{a_{11} - a_{22}} = \frac{2a_{12}\operatorname{sgn}(a_{11} - a_{22})}{|a_{11} - a_{22}|} = \frac{\beta}{\gamma}, \qquad \text{(II)}$$

where

$$\beta = 2a_{12}\operatorname{sgn}(a_{11} - a_{22}), \qquad \text{(III)}$$
$$\gamma = |a_{11} - a_{22}|. \qquad \text{(IV)}$$

From (II),

$$\sec^2 2\theta = 1 + \tan^2 2\theta = 1 + \frac{\beta^2}{\gamma^2} = \frac{\gamma^2 + \beta^2}{\gamma^2},$$

so

$$\cos^2 2\theta = \frac{\gamma^2}{\gamma^2 + \beta^2}, \qquad \cos 2\theta = \frac{\gamma}{\sqrt{\gamma^2 + \beta^2}},$$

and since 2cos²θ − 1 = cos 2θ,

$$\cos\theta = \sqrt{\frac{1}{2}\left(1 + \frac{\gamma}{\sqrt{\gamma^2 + \beta^2}}\right)}. \qquad \text{(V)}$$

Also,

$$2\sin\theta\cos\theta = \sin 2\theta = \frac{\beta}{\sqrt{\gamma^2 + \beta^2}},$$

so

$$\sin\theta = \frac{\beta}{2\cos\theta\sqrt{\gamma^2 + \beta^2}}. \qquad \text{(VI)}$$


With these values of cos θ and sin θ in

$$P = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},$$

PᵗAP = A₁ has zero (2,1) and (1,2) entries.

We now generalize this idea. Let A = (a_ij) be an n×n real symmetric matrix, and let 1 ≤ q < p ≤ n. (Instead of the (1,2) position above, we choose the (q, p) position.) Consider

$$\beta = 2a_{qp}\operatorname{sgn}(a_{qq} - a_{pp}), \qquad \text{(A)}$$
$$\gamma = |a_{qq} - a_{pp}|, \qquad \text{(B)}$$
$$\cos\theta = \sqrt{\frac{1}{2}\left(1 + \frac{\gamma}{\sqrt{\gamma^2 + \beta^2}}\right)}, \qquad \text{(C)}$$
$$\sin\theta = \frac{\beta}{2\cos\theta\sqrt{\gamma^2 + \beta^2}}. \qquad \text{(D)}$$


Let P be the identity matrix modified in the (q,q), (q,p), (p,q) and (p,p) positions, which hold cos θ, −sin θ, sin θ and cos θ respectively (a plane rotation in the (q, p) coordinate plane). Then A₁ = PᵗAP has zero entries in the (q, p) and (p, q) positions. In fact, A₁ differs from A only in the qth and pth rows and the qth and pth columns, and it can be shown that the new entries are

$$a^1_{qi} = a_{qi}\cos\theta + a_{pi}\sin\theta, \qquad a^1_{pi} = -a_{qi}\sin\theta + a_{pi}\cos\theta, \qquad i \neq q, p \quad (\text{qth, pth rows}), \qquad \text{(E)}$$

$$a^1_{iq} = a_{iq}\cos\theta + a_{ip}\sin\theta, \qquad a^1_{ip} = -a_{iq}\sin\theta + a_{ip}\cos\theta, \qquad i \neq q, p \quad (\text{qth, pth columns}), \qquad \text{(F)}$$

$$a^1_{qq} = a_{qq}\cos^2\theta + 2a_{qp}\sin\theta\cos\theta + a_{pp}\sin^2\theta,$$
$$a^1_{pp} = a_{qq}\sin^2\theta - 2a_{qp}\sin\theta\cos\theta + a_{pp}\cos^2\theta,$$
$$a^1_{qp} = a^1_{pq} = 0. \qquad \text{(G)}$$

Now the Jacobi iteration is as follows. Let A = (a_ij) be an n×n real symmetric matrix.


Choose (q, p) such that |a_qp| is largest among the absolute values of all the off-diagonal entries of A. For this q, p find P as above, and let A₁ = PᵗAP. A₁ can be obtained as follows: all rows and columns of A₁ are the same as those of A except the qth row, pth row, qth column and pth column, which are obtained from (E), (F), (G). Now A₁ has 0 in the (q, p) and (p, q) positions. Replace A by A₁ and repeat the process. The process converges to a diagonal matrix, the diagonal entries of which give the eigenvalues of A.

Example: Let

$$A = \begin{pmatrix} 7 & 3 & 2 & 1 \\ 3 & 9 & -2 & 4 \\ 2 & -2 & -4 & 2 \\ 1 & 4 & 2 & 3 \end{pmatrix}.$$

The largest off-diagonal entry in absolute value is |a₂₄| = 4, so q = 2, p = 4:

$$\beta = 2\operatorname{sgn}(a_{qq} - a_{pp})\, a_{qp} = 2\operatorname{sgn}(a_{22} - a_{44})\, a_{24} = (2)(1)(4) = 8,$$
$$\gamma = |a_{qq} - a_{pp}| = |9 - 3| = 6,$$
$$\gamma^2 + \beta^2 = 100, \qquad \sqrt{\gamma^2 + \beta^2} = 10,$$

$$\cos\theta = \sqrt{\frac{1}{2}\left(1 + \frac{6}{10}\right)} = \sqrt{0.8} = 0.89442.$$


$$\sin\theta = \frac{\beta}{2\cos\theta\sqrt{\gamma^2 + \beta^2}} = \frac{8}{2(0.89442)(10)} = 0.44721,$$

$$P = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0.89442 & 0 & -0.44721 \\ 0 & 0 & 1 & 0 \\ 0 & 0.44721 & 0 & 0.89442 \end{pmatrix}.$$

A₁ = PᵗAP will have a¹₂₄ = a¹₄₂ = 0. The other entries that differ from those of A are a¹₂₁, a¹₂₂, a¹₂₃ and a¹₄₁, a¹₄₃, a¹₄₄ (and, by symmetry, the corresponding reflected entries also change). We have

$$a^1_{21} = a_{21}\cos\theta + a_{41}\sin\theta = 3.1305, \qquad a^1_{41} = -a_{21}\sin\theta + a_{41}\cos\theta = -0.44721,$$
$$a^1_{23} = a_{23}\cos\theta + a_{43}\sin\theta = -0.89443, \qquad a^1_{43} = -a_{23}\sin\theta + a_{43}\cos\theta = 2.68328,$$
$$a^1_{22} = a_{22}\cos^2\theta + 2a_{24}\sin\theta\cos\theta + a_{44}\sin^2\theta = 11,$$
$$a^1_{44} = a_{22}\sin^2\theta - 2a_{24}\sin\theta\cos\theta + a_{44}\cos^2\theta = 1,$$

so

$$A_1 = \begin{pmatrix} 7 & 3.1305 & 2 & -0.44721 \\ 3.1305 & 11 & -0.89443 & 0 \\ 2 & -0.89443 & -4 & 2.68328 \\ -0.44721 & 0 & 2.68328 & 1 \end{pmatrix}.$$

Now we repeat the process with this matrix. The largest off-diagonal absolute value is at the (1, 2) position, so q = 1, p = 2.


$$\gamma = |a_{qq} - a_{pp}| = |a_{11} - a_{22}| = |7 - 11| = 4,$$
$$\beta = 2a_{qp}\operatorname{sgn}(a_{qq} - a_{pp}) = 2(3.1305)(-1) = -6.2610,$$
$$\gamma^2 + \beta^2 = 55.200121, \qquad \sqrt{\gamma^2 + \beta^2} = 7.42968,$$

$$\cos\theta = \sqrt{\frac{1}{2}\left(1 + \frac{\gamma}{\sqrt{\gamma^2 + \beta^2}}\right)} = 0.87704, \qquad \sin\theta = \frac{\beta}{2\cos\theta\sqrt{\gamma^2 + \beta^2}} = -0.48043.$$

The entries that change are:

$$a^1_{12} = a^1_{21} = 0,$$
$$a^1_{13} = a_{13}\cos\theta + a_{23}\sin\theta = 2.18378, \qquad a^1_{23} = -a_{13}\sin\theta + a_{23}\cos\theta = 0.17641,$$
$$a^1_{14} = a_{14}\cos\theta + a_{24}\sin\theta = -0.39222, \qquad a^1_{24} = -a_{14}\sin\theta + a_{24}\cos\theta = -0.21485,$$
$$a^1_{11} = a_{11}\cos^2\theta + 2a_{12}\sin\theta\cos\theta + a_{22}\sin^2\theta = 5.28516,$$
$$a^1_{22} = a_{11}\sin^2\theta - 2a_{12}\sin\theta\cos\theta + a_{22}\cos^2\theta = 12.71484,$$

and the new matrix is

$$\begin{pmatrix} 5.28516 & 0 & 2.18378 & -0.39222 \\ 0 & 12.71484 & 0.17641 & -0.21485 \\ 2.18378 & 0.17641 & -4 & 2.68328 \\ -0.39222 & -0.21485 & 2.68328 & 1 \end{pmatrix}.$$

Now we repeat with q = 3, p = 4, and so on. At the 12th step we get the diagonal matrix

$$\begin{pmatrix} 5.78305 & 0 & 0 & 0 \\ 0 & 12.71986 & 0 & 0 \\ 0 & 0 & -5.60024 & 0 \\ 0 & 0 & 0 & 2.09733 \end{pmatrix},$$

giving the eigenvalues of A as 5.78305, 12.71986, −5.60024, 2.09733.

Note: At each stage, when we choose the (q, p) position and apply the above transformation to get the new matrix A₁, the sum of the squares of the off-diagonal entries of A₁ is less than that of A by 2a²_qp.
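The full iteration is easy to code from (A)–(D) and (E)–(G). A sketch (pure Python; the function name and the largest-entry double loop are mine):

```python
import math

def jacobi_eigenvalues(A, max_rotations=100, tol=1e-12):
    """Jacobi iteration: repeatedly annihilate the largest off-diagonal
    entry with the plane rotation (A)-(D); the diagonal converges to the
    eigenvalues of the real symmetric matrix A."""
    n = len(A)
    A = [row[:] for row in A]                     # work on a copy
    for _ in range(max_rotations):
        q, p, big = 0, 1, 0.0                     # locate largest |a_qp|, q < p
        for i in range(n):
            for j in range(i + 1, n):
                if abs(A[i][j]) > big:
                    q, p, big = i, j, abs(A[i][j])
        if big < tol:
            break
        beta = 2.0 * A[q][p] * (1.0 if A[q][q] >= A[p][p] else -1.0)   # (A)
        gamma = abs(A[q][q] - A[p][p])                                  # (B)
        root = math.hypot(beta, gamma)
        c = math.sqrt(0.5 * (1.0 + gamma / root))                       # (C)
        s = beta / (2.0 * c * root)                                     # (D)
        for i in range(n):                        # rows/columns q and p, (E)-(F)
            if i != q and i != p:
                aqi, api = A[q][i], A[p][i]
                A[q][i] = A[i][q] = aqi * c + api * s
                A[p][i] = A[i][p] = -aqi * s + api * c
        aqq, app, aqp = A[q][q], A[p][p], A[q][p]  # diagonal update, (G)
        A[q][q] = aqq * c * c + 2.0 * aqp * s * c + app * s * s
        A[p][p] = aqq * s * s - 2.0 * aqp * s * c + app * c * c
        A[q][p] = A[p][q] = 0.0
    return sorted(A[i][i] for i in range(n))

A = [[7, 3, 2, 1],
     [3, 9, -2, 4],
     [2, -2, -4, 2],
     [1, 4, 2, 3]]
print(jacobi_eigenvalues(A))   # ~ [-5.60024, 2.09733, 5.78305, 12.71986]
```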


THE QR DECOMPOSITION

Let A be an n×n real nonsingular matrix. Then we can find an orthogonal matrix Q and an upper triangular matrix R (with rᵢᵢ > 0) such that A = QR; this is called the QR decomposition of A. Q and R are found as follows. Let

a⁽¹⁾, a⁽²⁾, …, a⁽ⁿ⁾ be the columns of A;
q⁽¹⁾, q⁽²⁾, …, q⁽ⁿ⁾ be the columns of Q;
r⁽¹⁾, r⁽²⁾, …, r⁽ⁿ⁾ be the columns of R.

Note: since Q is orthogonal, its columns are orthonormal:

$$\|q^{(1)}\|_2 = \|q^{(2)}\|_2 = \cdots = \|q^{(n)}\|_2 = 1, \qquad \text{(A)}$$

$$(q^{(i)}, q^{(j)}) = 0 \quad \text{if } i \neq j. \qquad \text{(B)}$$

Since R is upper triangular, r⁽ⁱ⁾ has zeros below its ith entry, and hence

$$Q r^{(i)} = r_{1i}\, q^{(1)} + r_{2i}\, q^{(2)} + \cdots + r_{ii}\, q^{(i)}. \qquad \text{(D)}$$

We want A = QR. Comparing first columns on both sides we get

$$a^{(1)} = Q r^{(1)} = r_{11}\, q^{(1)} \qquad \text{by (D)},$$

so

$$\|a^{(1)}\|_2 = r_{11} \|q^{(1)}\|_2 = r_{11}, \qquad \text{since } r_{11} > 0 \text{ and } \|q^{(1)}\|_2 = 1 \text{ by (A)}.$$


Thus

$$r_{11} = \|a^{(1)}\|_2 \qquad \text{and} \qquad q^{(1)} = \frac{1}{r_{11}}\, a^{(1)}, \qquad \text{(E)}$$

giving the 1st columns of R and Q. Next, comparing second columns on both sides we get

$$a^{(2)} = Q r^{(2)} = r_{12}\, q^{(1)} + r_{22}\, q^{(2)}. \qquad (*)$$

Taking the inner product of (*) with q⁽¹⁾ and using (A) and (B),

$$(a^{(2)}, q^{(1)}) = r_{12}\,(q^{(1)}, q^{(1)}) + r_{22}\,(q^{(2)}, q^{(1)}) = r_{12}. \qquad \text{(F)}$$

Then (*) gives

$$r_{22}\, q^{(2)} = a^{(2)} - r_{12}\, q^{(1)}, \qquad \text{so} \qquad r_{22} = \|a^{(2)} - r_{12}\, q^{(1)}\|_2 \qquad \text{(G)}$$

and

$$q^{(2)} = \frac{1}{r_{22}} \left( a^{(2)} - r_{12}\, q^{(1)} \right). \qquad \text{(H)}$$

(F), (G), (H) give the 2nd columns of Q and R. We can proceed thus: having got the first i − 1 columns of Q and R, we get the ith columns of Q and R as follows:

$$r_{1i} = (a^{(i)}, q^{(1)}), \quad r_{2i} = (a^{(i)}, q^{(2)}), \quad \ldots, \quad r_{i-1,i} = (a^{(i)}, q^{(i-1)}),$$

$$r_{ii} = \|a^{(i)} - r_{1i}\, q^{(1)} - r_{2i}\, q^{(2)} - \cdots - r_{i-1,i}\, q^{(i-1)}\|_2,$$

$$q^{(i)} = \frac{1}{r_{ii}} \left( a^{(i)} - r_{1i}\, q^{(1)} - r_{2i}\, q^{(2)} - \cdots - r_{i-1,i}\, q^{(i-1)} \right).$$


Example:

$$A = \begin{pmatrix} 1 & 2 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}.$$

1st column of Q and R:

$$r_{11} = \|a^{(1)}\|_2 = \sqrt{1^2 + 1^2} = \sqrt{2}, \qquad q^{(1)} = \frac{1}{r_{11}}\, a^{(1)} = \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \\ 0 \end{pmatrix}.$$

2nd column of Q and R:

$$r_{12} = (a^{(2)}, q^{(1)}) = \left( \begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \\ 0 \end{pmatrix} \right) = \frac{2}{\sqrt{2}} = \sqrt{2},$$

$$a^{(2)} - r_{12}\, q^{(1)} = \begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix} - \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}, \qquad r_{22} = \|a^{(2)} - r_{12}\, q^{(1)}\|_2 = \sqrt{3},$$

$$q^{(2)} = \frac{1}{\sqrt{3}} \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1/\sqrt{3} \\ -1/\sqrt{3} \\ 1/\sqrt{3} \end{pmatrix}.$$


3rd column of Q and R:

$$r_{13} = (a^{(3)}, q^{(1)}) = \left( \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \\ 0 \end{pmatrix} \right) = \frac{2}{\sqrt{2}} = \sqrt{2}, \qquad r_{23} = (a^{(3)}, q^{(2)}) = \frac{1}{\sqrt{3}},$$

$$a^{(3)} - r_{13}\, q^{(1)} - r_{23}\, q^{(2)} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} - \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} - \begin{pmatrix} 1/3 \\ -1/3 \\ 1/3 \end{pmatrix} = \begin{pmatrix} -1/3 \\ 1/3 \\ 2/3 \end{pmatrix},$$

$$r_{33} = \sqrt{\frac{1}{9} + \frac{1}{9} + \frac{4}{9}} = \sqrt{\frac{2}{3}},$$

and

$$q^{(3)} = \frac{1}{r_{33}} \left( a^{(3)} - r_{13}\, q^{(1)} - r_{23}\, q^{(2)} \right) = \begin{pmatrix} -1/\sqrt{6} \\ 1/\sqrt{6} \\ 2/\sqrt{6} \end{pmatrix}.$$

Thus

$$R = \begin{pmatrix} \sqrt{2} & \sqrt{2} & \sqrt{2} \\ 0 & \sqrt{3} & 1/\sqrt{3} \\ 0 & 0 & \sqrt{2/3} \end{pmatrix}, \qquad Q = \begin{pmatrix} 1/\sqrt{2} & 1/\sqrt{3} & -1/\sqrt{6} \\ 1/\sqrt{2} & -1/\sqrt{3} & 1/\sqrt{6} \\ 0 & 1/\sqrt{3} & 2/\sqrt{6} \end{pmatrix},$$

and

$$QR = \begin{pmatrix} 1 & 2 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix} = A,$$

giving us the QR decomposition of A.
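The column-by-column process above translates directly into code. A sketch (the function name is mine; matrices are stored as lists of columns, which matches the column-wise derivation):

```python
import math

def qr_decompose(A):
    """QR by the column-by-column (Gram-Schmidt) process above.
    A is a list of columns; returns Q (list of orthonormal columns)
    and R (list of rows, upper triangular with positive diagonal)."""
    n = len(A)
    Q, R = [], [[0.0] * n for _ in range(n)]
    for i in range(n):
        v = A[i][:]
        for j in range(i):
            R[j][i] = sum(a * q for a, q in zip(A[i], Q[j]))   # r_ji = (a^(i), q^(j))
            v = [vk - R[j][i] * qk for vk, qk in zip(v, Q[j])]
        R[i][i] = math.sqrt(sum(c * c for c in v))             # r_ii = remaining length
        Q.append([c / R[i][i] for c in v])
    return Q, R

A = [[1, 1, 0], [2, 0, 1], [1, 1, 1]]    # columns of the example matrix
Q, R = qr_decompose(A)
print(R[0][0], R[1][1])                   # sqrt(2), sqrt(3)
```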


THE QR ALGORITHM

Let A be any nonsingular n×n matrix. Let A = A₁ = Q₁R₁ be its QR decomposition, and let A₂ = R₁Q₁. Then find the QR decomposition of A₂, say A₂ = Q₂R₂, and define A₃ = R₂Q₂; find the QR decomposition of A₃ as A₃ = Q₃R₃, and keep repeating the process. Thus

A₁ = Q₁R₁, A₂ = R₁Q₁,

and the ith step is

Aᵢ = Rᵢ₋₁ Qᵢ₋₁, Aᵢ = Qᵢ Rᵢ.

Then Aᵢ converges to an upper triangular matrix, exhibiting the eigenvalues of A along its diagonal.
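A sketch of the iteration (the function names are mine; matrices are stored as lists of columns, with a Gram–Schmidt QR factorization as in the previous section). Applied to the symmetric tridiagonal matrix of Example 7, the diagonal settles into the four intervals found there by bisection:

```python
import math

def qr_decompose(A):
    """Gram-Schmidt QR; A and Q stored as lists of columns, R as rows."""
    n = len(A)
    Q, R = [], [[0.0] * n for _ in range(n)]
    for i in range(n):
        v = A[i][:]
        for j in range(i):
            R[j][i] = sum(a * q for a, q in zip(A[i], Q[j]))
            v = [vk - R[j][i] * qk for vk, qk in zip(v, Q[j])]
        R[i][i] = math.sqrt(sum(c * c for c in v))
        Q.append([c / R[i][i] for c in v])
    return Q, R

def qr_algorithm(A, steps=200):
    """Iterate A_{i+1} = R_i Q_i; return the final diagonal."""
    n = len(A)
    A = [col[:] for col in A]
    for _ in range(steps):
        Q, R = qr_decompose(A)
        # next iterate: (R Q) entry at (row k, col j) = sum_m R[k][m] * Q[j][m]
        A = [[sum(R[k][m] * Q[j][m] for m in range(n)) for k in range(n)]
             for j in range(n)]
    return [A[i][i] for i in range(n)]

T = [[1, 2, 0, 0], [2, -1, 4, 0], [0, 4, 2, 1], [0, 0, 1, 3]]  # symmetric: columns = rows
print(sorted(qr_algorithm(T)))
```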
