P. 1
Numerical Analysis

Numerical Analysis

|Views: 4|Likes:
Published by Muhammed Ammachandy
Linear Systems of Equations and Matrix Computations
Linear Systems of Equations and Matrix Computations

More info:

Published by: Muhammed Ammachandy on Mar 07, 2013
Copyright:Attribution Non-commercial

Availability:

Read on Scribd mobile: iPhone, iPad and Android.
download as PDF, TXT or read online from Scribd
See more
See less

11/07/2015

pdf

text

original

Computational Linear Algebra

Syllabus

NUMERICAL ANALYSIS Linear Systems of Equations and Matrix Computations Module 1: Direct methods for solving linear system of equation Simple Gaussian elimination method, gauss elimination method with partial pivoting, determinant evaluation, gauss Jordan method, L U decompositions Doolittle’s lu decomposition, Doolittle’s method with row interchange. Module 2: Iterative methods for solving linear systems of equations Iterative methods for the solution of systems equation, Jacobin iteration, gauss – seidel method, successive over relaxation method (sort method). Module 3: Eigenvalues and Eigenvectors An introduction, eigenvalues and eigenvectors, similar matrices, hermitian matrices, gramm – Schmidt orthonormalization, vector and matrix norms. Module 4: Computations of eigenvaues Computation of eigenvalues of a real symmetric matrix, determination of the eigenvalues of a real symmetric tridiagonal matrix, tridiagonalization of a real symmetric matrix, Jacobin iteration for finding eigenvalues of a real symmetric matrix, the q r decomposition, the Q-R algorithm.

Vittal Rao/IISc, Bangalore

V1/1-4-04/1

Computational Linear Algebra

Syllabus

Lecture Plan Modules 1. Direct methods for solving linear system of equation. Learning Units 1. Simple Gaussian elimination method 2. Gauss elimination method with partial pivoting. 3. Determinant evaluation 4. Gauss Jordan method 5. L U decompositions 6. Doolittle’s LU Decomposition 7. Doolittle’s method with row interchange. 2. Iterative methods for solving linear systems of equations. 8. Iterative methods for the solution of systems equation 9. Jacobi iteration. 10. Gauss – Seidel method 11. Successive over relaxation method (sort method). 3. Eigenvalues 12. An introduction. and Eigenvectors 13. Eigenvalues and eigenvectors, 14. Similar matrices, 15. Hermitian matrices. 16. Gramm – Schmidt orthonormalization, 17. Vector and matrix norms. 4. Computations of eigenvalues. 18. Computation of eigenvalues 19. Computation of eigenvalues of a real symmetric matrix. 20. Determination of the eigenvalues of a real symmetric tridiagonal matrix, 21. Tridiagonalization of a real symmetric matrix 22. Jacobian iteration for finding eigenvalues of a real symmetric matrix 23. The Q R decomposition 1 1 1 2 2 3 2 2 2 1 2 2 1 2 1 2 2 2 1 1 1 11 9 9 Hours per Topics 1 2 Total Hours 10

Vittal Rao/IISc, Bangalore

V1/1-4-04/2

Computational Linear Algebra

Syllabus

24. The Q-R algorithm.

2

Vittal Rao/IISc, Bangalore

V1/1-4-04/3

. .. ⎟ ⎟ Where a(1)ij = aij ........ ⎟ ⎟ a ( 1 ) nn ⎟ ⎠ ⎛ y (1 )1 ⎜ (1 ) ⎜ y 2 (1) y = ⎜ M ⎜ (1 ) ⎜ y n ⎝ ⎞ ⎟ ⎟ ⎟ ⎟ ⎟ ⎠ Where y(1)i = yi We assume a(1)11 ≠ 0 Then by ERO of type applied to A(1) reduce all entries below a(1)11 to zero... ...Numerical Analysis/ Direct methods for solving linear system of equation Lecture Notes 1. + annxn = yn We shall assume that this system has a unique solution and proceed to describe the simple Gaussian elimination method for finding the solution. ....... + a1nxn = y1 a21x1 + a22x2 + ….. DIRECT METHODS FOR SOLVING LINEAR SYSTEMS OF EQUATIONS 1... A(1) ⎯ ⎯ → ⎯ ⎯ Ri +m(1)i1R1 A(2) Where m (1) i1 a (1) i1 = − (1) ....1.... The method reduces the system to an upper triangular system using elementary row operations (ERO).... . SIMPLE GAUSSIAN ELIMINATION METHOD Consider a system of n equations in n unknowns... Let A(1) denote the coefficient matrix A... Let the resulting matrix be denoted by A(2)... a (1 ) n1 a ( 1 ) 12 a ( 1 ) 22 .. .... a11x1 + a12x2 + ….. Note A(2) is of the form VittalRao/IISc... Bangalore M1/L1and L2/V1/May2004/1 . ... + a2nxn = y2 … … … … … an1x1 + an2x2 + ….. a (1 ) n2 . a (1 ) 1 n ⎞ ⎟ a (1 ) 2 n ⎟ .... ⎛ ⎜ ⎜ ⎜ (1) A = ⎜ ⎜ ⎜ ⎜ ⎝ Let a ( 1 ) 11 a ( 1 ) 21 . a 11 i > 1.

. ...... 0 0 . a(2) 3n ⎟ .. a 22 i > 2... .. M(1) A(1) = A(2) Let y(2) = M(1) y(1) A(2)x = y(2) Next we assume a(2)22 ≠ 0 and reduce all entries below this to zero by ERO A(2) Here 1 0 0 M(2) = 0 1 m(2)32 0 m(2)42 In-2 M1/L1and L2/V1/May2004/2 R i + 1 R1 y (1) ⎯ ⎯ m i⎯→ y ( 2 ) ⎯ i... ⎜ ⎝ 0 a(1)12 ..e. .... 0 .. ... . Bangalore .. ⎟ ⎟ a( 2) n2 .. a(1)1n ⎞ ⎟ a(2) 22 . m ( 2) i2 a ( 2) i 2 = − ( 2) ... 0 VittalRao/IISc...e Then the system Ax = y is equivalent to ⎯ ⎯→ ⎯⎯ Ri +m( 2)i 2 A(3) . a(2) 2n ⎟ ⎟ a(2) 32 ..Numerical Analysis/ Direct methods for solving linear system of equation Lecture Notes A(2) = ⎛ a(1)11 ⎜ ⎜ 0 ⎜ ⎜ 0 ⎜ . . a( 2) nn ⎠ Notice that the above row operations on A(1) can be effected by premultiplying A(1) by M(1) where M(1) ⎛ 1 ⎜ (1) ⎜ m21 (1 = ⎜ m31) ⎜ ⎜ M ⎜ m (1) ⎝ n1 0 0 I n −1 0 0⎞ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎠ i.. ....

m(r)nr …… m(r)r+1r 1 0 0 m(r)r+2r In-r M (r ) A (r ) =A ( r +1) ⎛ a (1)11 . a ( r +1) r +1r +1 .. a (1)1n a ( 2) 2 n a ( r ) rn VittalRao/IISc. a (3) 3n ⎟ ... a ( 2) 23 a (3) 33 M a ( 3) n 3 . a ( 2) 2 n ⎟ ⎟ ... 0 1 …. .. Bangalore M1/L1and L2/V1/May2004/3 .. 0 m(2)n2 M(2) y(2) = y(3) .... a ( r +1) nr +1 ⎞ ⎟ ⎟ ⎟ ⎟ ( r +1) . a ( r ) rr 0 M 0 .. M(r) where 1 0 ….. a (3) nn ⎠ We next assume a(3)33 ≠ 0 and proceed to make entries below this as zero.. ⎜ a ( 2) 22 ⎜ 0 ⎜ 0 M =⎜ ⎜ M M ⎜ M ⎜ M ⎜ 0 0 ⎝ .... M ⎟ ⎟ .. We thus get M(1).... a ( r +1) nn ⎟ ⎠ .. M(2).. and M(2) A(2) = A(3) ...Numerical Analysis/ Direct methods for solving linear system of equation Lecture Notes ... a (1)1n ⎞ ⎟ .. . a r +1n ⎟ ⎟ .. .. .. 0 M(r) = rxr . ...... …. .. . .. .. ⎟ ... and A(3) is of the form A ( 3) ⎛ a (1)11 a (1)12 ⎜ a ( 2) 22 ⎜ 0 =⎜ 0 0 ⎜ ⎜ M M ⎜ 0 ⎝ 0 .

a(2)2n where A(n) = . . . M(1) then L is lower triangular. Proceeding thus we get. Now. . Bangalore M1/L1and L2/V1/May2004/4 . det M(1) det A(1) det A(n) = det A(1) = det A since A = A(1) Now A(n) is an upper triangular matrix and hence its determinant is a(1)11 a(2)22 …. and nonsingular as their det = 1 ≠ 0. i. a(n)nn.Numerical Analysis/ Direct methods for solving linear system of equation Lecture Notes M(r) y(r) = y(r+1) At each stage we assume a(r)rr ≠ 0. …. a(1)1n a(2)22 . . . Thus let M(r) is 1 for every r. They are all therefore invertible and their inverses are all lower triangular. . and hence the system can be solved easily Note further that each M(r) is a lower triangular matrix with all diagonal entries as 1. . A(n) = M(n-1) …. M(2). and nonsingular and L-1 is also lower triangular. M(1) A(1) = A(n) . M(1) y(1) = y(n) a(1)11 a(1)12 . M(n-1) are lower triangular. . M(2). . M(1) A(1) Thus det A(n) = det M(n-1) det M(n-2) …. M(1) A(1) = A(n) . VittalRao/IISc. Thus det A is given by det A = a(1)11 a(2)22 …. …. M(n-1) M(n-2) …. Now LA = LA(1) = M(n-1) M(n-2) …. this can be solved by backward substitution. a(n)nn which is an upper triangular matrix and the given system is equivalent to A(n)x = y(n) and since this is an upper triangular. M(1). Further note that M(1). if L = M(n-1) M(n-2) …. . a(n)nn Thus the simple GEM can be used to solve the system Ax = y and also to evaluate det A provided a(i)ii ≠ 0 for each i.e. M(n-1) such that M(n-1) M(n-2) ….

REMEMBER IF AT ANY STAGE WE GET a(1)ii = 0 WE CANNOT PROCEED FURTHER WITH THE SIMPLE GSM.Numerical Analysis/ Direct methods for solving linear system of equation Lecture Notes Therefore A = L(-1) A(n) Now L(-1) is lower triangular which we denote by α and A(n) is upper triangular which we denote by u. and we thus get the so called αu decomposition A = αu of a given matrix A – as a product of a lower triangular matrix with an upper triangular matrix. Bangalore M1/L1and L2/V1/May2004/5 .x2 + x3 = 2 x1 + 2x2 Here =3 ⎛ 1 1 2⎞ ⎜ ⎟ A = ⎜ 2 −1 1⎟ ⎜ 1 2 0⎟ ⎝ ⎠ ⎛4⎞ ⎜ ⎟ y = ⎜2⎟ ⎜3⎟ ⎝ ⎠ A (1 ) ⎛1 ⎜ = ⎜2 ⎜1 ⎝ 1 −1 2 2 ⎞ R −2R ⎛ 1 1 ⎜ ⎟ 2 1⎟ → ⎜0 R −R 0⎟ 3 1 ⎜0 ⎝ ⎠ 1 −3 1 2 ⎞ ⎟ − 3 ⎟ = A (2) − 2⎟ ⎠ a(1)11 = 1 ≠ 0 m(1)21 = -2 m(1)31 = -1 a(2)22 = -3 ≠ 0 M (1) ⎛ 1 0 0⎞ ⎜ ⎟ = ⎜ − 2 1 0⎟ ⎜ − 1 0 1⎟ ⎝ ⎠ y (1 ) ⎛4⎞ ⎛ 4 ⎞ ⎜ ⎟ ⎜ ⎟ = ⎜ 2 ⎟ → ⎜ − 6 ⎟ = y (2) = M ⎜3⎟ ⎜ −1⎟ ⎝ ⎠ ⎝ ⎠ (1 ) y (1 ) VittalRao/IISc. This is another application of the simple GEM. EXAMPLE: Consider the system x1 + x2 + 2x3 = 4 2x1 .

Numerical Analysis/ Direct methods for solving linear system of equation Lecture Notes A (2) 1 R3 + R 2 3 → ⎛1 ⎜ ⎜0 ⎜0 ⎝ 1 −3 1 2 ⎞ ⎟ − 3 ⎟ = A (3) − 2⎟ ⎠ a(3)33 = -3 M(2)31 = 1/3 ⎛ ⎜1 = ⎜0 ⎜ ⎜0 ⎝ ⎞ 0 0⎟ 1 0⎟ ⎟ 1 1⎟ 3 ⎠ M ( 2) y (3) = M (2) y (2) ⎛ 4 ⎞ ⎜ ⎟ = ⎜− 6⎟ ⎜− 3⎟ ⎝ ⎠ Therefore the given system is equivalent to A(3)x = y(3) x1 + x2 + 2x3 = 4 -3x2 . Bangalore M1/L1and L2/V1/May2004/6 .3 = . ⎛ x1 ⎜ x = ⎜ x2 ⎜x ⎝ 3 ⎞ ⎛1⎞ ⎟ ⎜ ⎟ ⎟ = ⎜1⎟ ⎟ ⎜1⎟ ⎠ ⎝ ⎠ The determinant of the given matrix A is a(1)11 a(2)22 a(3)33 = (1) (-3) (-3) = 9. Now M1 ( −1) ⎛ 1 0 0⎞ ⎟ ⎜ = ⎜ 2 1 0⎟ ⎜ 1 0 1⎟ ⎠ ⎝ ⎞ ⎛ ⎜ 1 0 0⎟ = ⎜ 0 1 0⎟ ⎟ ⎜ 1 ⎜0 − 1⎟ 3 ⎠ ⎝ M2 ( −1) VittalRao/IISc.3x3 = -6 .6 ⇒ -3x2 = -3 ⇒ x2 = 1 x1 + 1 + 2 = 4 ⇒ x1 = 1 Thus the solution of the given system is.3x3 = -3 Backward Substitution x3 = 1 -3x2 .

e. simple GEM may not be a very accurate method to use. ⎛1 0 0⎞ ⎛ 1 1 2⎞ ⎟ ⎜ ⎜ ⎟ 2 −1 1⎟ = ⎜ 2 1 0⎟ ⎜ ⎟ ⎜ ⎜ 1 2 0⎟ ⎜ 1 − 1 1⎟ ⎝ ⎠ 3 ⎠ ⎝ 2 ⎞ ⎛1 1 ⎜ ⎟ ⎜ 0 − 3 − 3⎟ ⎜ 0 0 − 3⎟ ⎝ ⎠ is the lu decomposition of the given matrix. Bangalore M1/L1and L2/V1/May2004/7 .375623 0 .Numerical Analysis/ Direct methods for solving linear system of equation Lecture Notes L = M(2) M(-1) L-1 = (M(2) M(1))-1 = (M(1))-1 (M(2))-1 ⎞ ⎛ ⎛ 1 0 0⎞ ⎜ 1 0 0⎟ ⎟⎜ ⎜ = ⎜ 2 1 0⎟ 0 1 0⎟ ⎟ ⎜ ⎜ 1 0 1⎟ ⎜ 0 − 1 1⎟ ⎠ ⎝ 3 ⎠ ⎝ ⎞ ⎛ ⎜ 1 0 0⎟ L = L(-1) = ⎜ 2 1 0 ⎟ ⎟ ⎜ 1 ⎜1 − 1⎟ 3 ⎠ ⎝ (n) (3) u=A =A 2 ⎞ ⎛1 1 ⎜ ⎟ = ⎜ 0 − 3 − 3⎟ ⎜ 0 0 − 3⎟ ⎝ ⎠ Therefore A = lu i. We observed that in order to apply simple GEM we need a(r)rr ≠ 0 for each stage r.000003) x1 + (0.476625) x3 = 0.285321 Let us do the computations to 6 significant digits.215512) x1 + (0.127653 (0.332147) x3 = 0. ⎛ 0 .000003 ⎜ A(1) = ⎜ 0 . This may not be satisfied always.663257) x2 + (0.332147 ⎞ ⎟ 0 . even if the condition a(r)rr ≠ 0 is satisfied at each stage.235262 (0.213472) x2 + (0.375623) x2 + (0.213472 0 . Here.173257) x1 + (0.215512 ⎜ 0 . So we have to modify the simple GEM in order to overcome this situation.173257 ⎝ 0 .476625 ⎟ 0 .663257 0 . Further..625675) x3 = 0. as an example. the following system: (0. What do we mean by this? Consider.625675 ⎟ ⎠ VittalRao/IISc.

127653 ⎜ 0 .0 ⎟ − 12327 .9 ≠ 0 M ( 2 ) 32 = − 0. 1⎟ ⎠ y (2) =M (1) y (1) = ⎛ 0.213472 a ( 2 ) 32 − 12327.000003 ≠ 0 ⎟ ⎠ M (1) 21 = − 0.0 ⎟ − 0.5 ⎟ ⎜ − 13586.8 − 19181 . Bangalore M1/L1and L2/V1/May2004/8 . 3 ⎜ − 57752 .9 − 23860 . 5 ⎟ ⎜ − 0 .3 (1) 0.3 (1) 0. 285321 ⎝ ⎞ ⎟ (1) ⎟ a 11 = 0.8 = −0.235262 ⎞ ⎜ ⎟ ⎜ − 16900.6 ⎟ ⎝ ⎠ ⎛ 0.803905 1 ⎟ ⎠ ⎝ y (3) =M (2) y (2) ⎛ 0 .Numerical Analysis/ Direct methods for solving linear system of equation Lecture Notes y (1) ⎛ 0 . 3 ⎝ 0 1 0 0⎞ ⎟ 0⎟ .332147 ⎞ ⎟ − 23860 . 235262 ⎜ = ⎜ 0 .215512 a (1) 21 =− = −71837.15334.000003 ⎜ 0 A(2) = M(1) A(1) = ⎜ ⎜ 0 ⎝ a(2)22 = .000003 ⎜ 0 A(3) = M(2) A(2) = ⎜ ⎜ 0 ⎝ A(3)x = y(3) 0.332147 ⎞ ⎟ − 15334 . 20000 ⎟ ⎝ ⎠ ⎛ 0.173257 =− = −57752.000003 a 11 M (1) 1 ⎛ ⎜ = ⎜ − 71837 .50000 ⎟ ⎠ Thus the given system is equivalent to the upper triangular system VittalRao/IISc.000003 a 11 M (1) 31 = − a (1) 31 0.9 a 22 0 0⎞ ⎛1 ⎟ ⎜ 1 0⎟ M(2) = ⎜ 0 ⎜ 0 − 0.9 0 0. 235262 ⎞ ⎜ ⎟ = ⎜ − 16900 .213472 − 15334 .803905 =− ( 2) − 15334.7 ⎟ ⎠ 0.

In order to do this we introduce the idea of Partial Pivoting. Bangalore M1/L1and L2/V1/May2004/9 .05 32 03 93 39.99 12 89 42 52 Thus we see that the simple Gaussian Elimination method needs modification in order to handle the situations that may lead to a(r)rr = 0 for some r or situations as arising in the above example. The idea of partial pivoting is the following: At the r th stage we shall be trying to reduce all the entries below the r th diagonal as zero. Before we do this we look at the entries in the r th diagonal and below it and then pick the one that has the largest absolute value and we bring it to the r th diagonal position by a row interchange.33 33 3 This compares poorly with the correct answers (to 10 digits) given by x1 = 0. x1 = 0.1 x3 = -0.47 97 23 x3 = -1. We now illustrate this with a few examples: Example: x1 + x2 + 2 x3 = 4 2x1 – x2 + x3 = 2 x1 + 2x2 We have Aavg = =3 1 2 1 1 −1 2 2 4 1 2 0 3 1st Stage: The pivot has to be chosen as 2 as this is the largest absolute valued entry in the first column.40 00 00 x2 = 0. When we incorporate this idea at each stage of the Gaussian elimination process we get the GAUSS ELIMINATION METHOD WITH PARTIAL PIVOTING.Numerical Analysis/ Direct methods for solving linear system of equation Lecture Notes Back substitution yields. Therefore we do VittalRao/IISc. and then reduce the entries below the r th diagonal as zero.67 41 21 46 9 x2 = 0.

Now at the next stage the pivot is Therefore 2 R23 A(3)avg ⎯⎯→ -1 5/2 1 2 0 -1/2 2 M1/L1and L2/V1/May2004/10 VittalRao/IISc. Bangalore .Numerical Analysis/ Direct methods for solving linear system of equation Lecture Notes Aavg ⎯→ ⎯ R 12 2 1 1 −1 1 2 1 2 2 4 0 3 Therefore we have ⎛0 ⎜ M(1) = ⎜ 1 ⎜0 ⎝ M A (1) (1) 1 0 0 (2) 0⎞ ⎟ 0 ⎟ and M(1) A(1) = A(2) = 1⎟ ⎠ ⎛2⎞ ⎜ ⎟ = ⎜4⎟ ⎜3⎟ ⎝ ⎠ ⎛2 ⎜ ⎜1 ⎜1 ⎝ −1 1⎞ ⎟ 1 2⎟ 2 0⎟ ⎠ =y Next we have R2 – ½ R1 A(2)avg Here ⎛ ⎜ ⎜ 1 (2) M = ⎜− 1 ⎜ 2 ⎜ 1 ⎜− 2 ⎝ 0 1 0 2 3/2 3/2 0 ⎞ ⎟ 0⎟ 0⎟ ⎟ ⎟ 1⎟ ⎠ 2 (2) -1 3 5/2 1 2 0 R3 – ½ R1 -1/2 3 . So we have to do another row interchange. ⎛2 ⎜ (2) (2) (3) M A =A = ⎜0 ⎜ ⎜0 ⎝ −1 3 2 5 2 1 ⎞ ⎟ 3 ⎟ 2 ⎟ −1 ⎟ 2⎠ M y =y (3) ⎛2⎞ ⎜ ⎟ = ⎜3⎟ ⎜2⎟ ⎝ ⎠ 5 since this is the entry with the largest absolute value 2 in the 1st column of the next sub matrix.

2x1 – x2 + x3 = 2 5 1 x2 .x3 = 2 2 2 9 9 x3 = 5 5 We now get the solution by back substitution: VittalRao/IISc. Bangalore M1/L1and L2/V1/May2004/11 .Numerical Analysis/ Direct methods for solving linear system of equation Lecture Notes 0 3/2 3/2 3 ⎛1 ⎜ (3) M = ⎜0 ⎜0 ⎝ 0 0 1 0⎞ ⎟ 1⎟ 0⎟ ⎠ ⎛2 ⎜ M(3) A(3) = A(4) = ⎜ 0 ⎜ ⎜0 ⎝ −1 5 2 3 2 1 ⎞ ⎟ −1 ⎟ 2⎟ 3 ⎟ 2 ⎠ M (3) y (3) =y (4) ⎛2⎞ ⎜ ⎟ = ⎜2⎟ ⎜3⎟ ⎝ ⎠ Next we have 2 A(4)avg ⎯⎯5⎯→ ⎯ Here ⎛ ⎜1 (4) M = ⎜0 ⎜ ⎜0 ⎝ 0 1 3 − 5 ⎞ 0⎟ 0⎟ ⎟ 1⎟ ⎠ 3 R3 − R2 -1 0 0 1 5/2 0 2 -1/2 2 9/5 9/5 ⎞ ⎟ 1 ⎟ 1 − ⎟ 2⎟ 9 ⎟ ⎟ 5 ⎠ ⎛ ⎜ ⎜2 M(4) A(4) = A(5) = ⎜ 0 ⎜ ⎜ ⎜0 ⎝ −1 5 2 0 ⎛ 2 ⎞ ⎟ ⎜ M(4) y(4) = y(5) = ⎜ 2 ⎟ ⎜9 ⎟ ⎟ ⎜ ⎝ 5⎠ This completes the reduction and we have that the given system is equivalent to the system A(5)x = y(5) i.e.

625675) x3 = 0.332147 ⎞ ⎟ 0 .Numerical Analysis/ Direct methods for solving linear system of equation Lecture Notes The 3rd equation gives. x2 = 1. the system to which we had earlier applied the simple GEM and had obtained solutions which were for away from the correct solutions.663257 0 .000003 ⎜ 0 .000003)x1 + (0.285321.213472 0 . 235262 ⎜ y = ⎜ 0 .235262 (0. 127653 ⎜ 0 . the same as we had obtained with the simple Gaussian elimination method earlier.215512 ⎜ 0 .000003 ⎜ A = ⎜ 0 . Bangalore M1/L1and L2/V1/May2004/12 . 285321 ⎝ ⎞ ⎟ ⎟ ⎟ ⎠ 0 .173257 ⎝ 0 .215512 ⎜ = ⎜ 0 .476625 ⎞ ⎟ 0 .625675 ⎟ ⎠ VittalRao/IISc. Note that ⎛ 0 .663257)x2 + (0.375623 0 .173257)x1 + (0.375623 0 . 5 5 x2 = 2 2 Using the values of x1 and x2 in the first equation we get 2x1 – 1 + 1 = 2 giving x1 = 1 Thus we get the solution of the system as x1 = 1. Example 2: Let us now apply the Gaussian elimination method with partial pivoting to the following example: (0. So we have A (1) = A ⎯⎯→ A R12 (2) ⎛ 0 .215512)x1 + (0. x3 = 1 using this in second equation we get 5 1 x2 .332147 ⎟ 0 .213472 0 .625675 ⎟ ⎠ We observe that at the first stage we must choose 0.375623)x2 + (0.173257 ⎝ ⎛ 0 .127653 (0.213472)x2 + (0.476625) x3 = 0. x3 = 1.= 2 2 2 giving and hence x2 = 1.332147) x3 = 0.663257 0 .476625 ⎟ 0 .215512 as the pivot.

476625 0 0.127653 ⎞ ⎜ ⎟ y(4) = M(3) y(3) = ⎜ 0 .Numerical Analysis/ Direct methods for solving linear system of equation Lecture Notes y (1) = y ⎯⎯→ y R12 (2) ⎛ 0 .803932 0 1 ⎟ ⎝ ⎠ y (2) =M (2) y (2) In the next stage we observe that we must choose 0.215512 a31 0.000014 a11 0. 127653 ⎜ = ⎜ 0 .235260 ⎟ ⎝ ⎠ Now reduce the entry below 2nd diagonal as zero R3 + ⎯⎯M 32⎯→ ⎯ R2 A(4) ⎛ 0. 182697 ⎝ ⎞ ⎟ ⎟ ⎟ ⎠ 1 0 0⎞ ⎛ ⎜ ⎟ M(2) = ⎜ − 0.235260 ⎜ 0 .332140 0.361282 as the pivot.476625 ⎞ ⎟ 0.0.361282 0.361282 0 0.173257 == .182697 ⎟ ⎜ 0 .242501 A (3) R3 + m21R1 Where m21 = m31 = - a 21 0. Thus we have to interchange 2nd and 3rd row.375623 0.215512 ⎜ 0 A5 = ⎜ ⎜ 0 ⎝ 0.215512 0. Bangalore .213467 0.375623 0.0.213467 0 0.803932 a11 0.215512 ⎛ 0 . 285321 ⎝ ⎞ ⎟ ⎟ ⎟ ⎠ ⎛0 ⎜ M = ⎜1 ⎜0 ⎝ (1) 1 0 0 0⎞ ⎟ 0⎟ 1⎟ ⎠ Next stage we make all entries below 1st diagonal as zero R2 + m21R1 A (2) 0. 235262 ⎜ 0 . ⎜ − 0.215512 ⎜ R 0 ⎯⎯→ A(4) = ⎜ ⎜ 0 ⎝ 23 0.127653 ⎜ = ⎜ 0 .361282 0.332140 ⎟ ⎠ ⎛1 ⎜ M(3) = ⎜ 0 ⎜0 ⎝ 0 0 1 0⎞ ⎟ 1⎟ 0⎟ ⎠ ⎛ 0 . We get.242501 ⎟ 0.242501 ⎟ 0.000003 == .000014 1 0 ⎟ .188856 ⎟ ⎠ M1/L1and L2/V1/May2004/13 VittalRao/IISc.375623 0. A(3) ⎛ 0.476625 ⎞ ⎟ 0.

.213467 = .Numerical Analysis/ Direct methods for solving linear system of equation Lecture Notes M32 = - 0.59086 0⎞ ⎟ 0⎟ 1⎟ ⎠ y (5) =M (4) y (4) ⎛ 0 . Notice that while we got very bad errors in the solutions while using simple GEM whereas we have come around this difficulty by using partial pivoting.991291 which compares well with the 10 decimal accurate solution given at the end of page 9.590860 0. VittalRao/IISc.0.127653 ⎞ ⎜ ⎟ = ⎜ 0 . Bangalore M1/L1and L2/V1/May2004/14 .674122 x2 = 0.182697 ⎟ ⎜ 0 .127312 ⎟ ⎝ ⎠ Thus the given system is equivalent to which is an upper triangular system and can be solved by back substitution to get x3 = 0.361282 ⎛1 ⎜ M(4) = ⎜ 0 ⎜0 ⎝ A(5) x = y(5) 0 1 − .053205 x1 = 0.

. .) Thus. M(k-1) …. M(2). M(1) is not a lower triangular matrix in general and hence using partial pivoting we cannot get LU decomposition in general. . (See M(1) & M(3) in the two examples.188856) = 0.215512) (0. Bangalore M1/L3/V1/May 2004/1 . Now det M(i) = 1 if it refers to the process of nullifying entries below a diagonal to zero.Numerical analysis/Direct methods for solving linear system of equation Lecture notes DETERMINANT EVALUATION Notice that even in the partial pivoting method we get Matrices M(k). det M(1) = (-1)m where m is the number of row inverses effected in the reduction. M(3). VittalRao/IISc. . det M(k-1) ….013608 LU decomposition: Notice that the M matrices corresponding to row interchanges are no longer lower triangular. M(k-1) …. Therefore det A = (-1)m product of the diagonals in the final upper triangular matrix. Therefore det M(k) …. we had M(1). M(1) such that M(k).361282) (0. M(1) A is upper triangular and therefore det M(k). and det M(i) = 1 if it refers to a row interchange necessary for a partial pivoting. det M(1) det A = Product of the diagonal entries in the final upper triangular matrix. M(3) as row interchange matrices and therefore det A = (-1)2 (0. In our example 1 above. M(k) M(k-1) . Thus therefore there were to row interchanges and hence det A = (-1)2 (2)( 5 9 )( ) = 9. 2 5 In example 2 also we had M(1). M(4) of which M(1) and M(3) referred to row interchanges.

As observed earlier. Gauss-Jordan Method leads to AR = In and the product of corresponding M(i) give us A-1. Remark: In case in the reduction process at some stage if we get arr = ar+1r = . Bangalore M1/L3/V1/May 2004/1 . then even partial pivoting does not being any nonzero entry to rth diagonal because there is no nonzero entry available. . in the case A is singular. We could also do the reduction here by partial pivoting. VittalRao/IISc. . In such a case A is singular matrix and we proceed to the RRE form to get the general solution of the system.Numerical Analysis/Direct methods for solving linear system of equation Lecture notes GAUSS JORDAN METHOD This is just the method of reducing Aavg to (AR / yR ) where AR = In is the Row Reduced Echelon Form of A (in the case A is nonsingular). . = ar+1n = 0.

Also A-1 can be obtained from an LU decomposition as A-1 = U-1 L-1. VittalRao/IISc. i. For example if A = LU is a decomposition then A = Lα Uα is also a LU decomposition where α ≠ 0 is any scalar and Lα = α L and Uα = 1/α U.. First. we shall consider the decomposition Tridiagonal matrix. and thirdly the Cholesky’s method for a symmetric matrix. Thus an LU decomposition helps to break a system into Triangular system.unn Where lii are the diagonal entries of L and uii are the diagonal entries of U. …………… (1) Then the system Ax = y can be written as. A can be calculated as det. We shall now give methods to find LU decomposition of a matrix. U = l11 l22 ….Numerical Analysis / Direct methods for solving linear system of equation Lecture notes LU decompositions We shall now consider the LU decomposition of matrices.lnn u11u22 …. Suppose A is an nxn matrix. We say that this is a LU decomposition of A. If L and U are lower and upper triangular nxn matrices respectively such that A = LU. Bangalore M1/L5/V1/May 2004/1 . the system. Basically. Substituting this z in (1) we get an upper triangular system for x and this can be solved by back substitution.(2) Now (2) is a triangular system – infact lower triangular and hence we can solve it by forward substitution to get z. secondly the Doolittles’s method for a general matrix. can be solved as follows: Set Ux = z LUx = y. det. Ax = y.. we shall be considering three cases. Note that LU decomposition is not unique. Further if A = LU is a LU decomposition then det. and to find the inverse of a matrix.. Lz = y ……………. Then. L .e. to find the determinant. Suppose we have a LU decomposition A = LU. A = det.

Numerical Analysis / Direct methods for solving linear system of equation Lecture notes 1 TRIDIAGONAL MATRIX Let ⎛ b1 ⎜ ⎜ c1 ⎜0 ⎜ A = ⎜ . bi −1 c i −1 0 0 .... ........ I = 2. a1 bi δi = ..... bi −1 c i −1 0⎞ ⎟ 0⎟ 0⎟ ⎟ . 0 0 ... ...... ........ Let δi denote the determinant of the ith principal minor of A b1 c1 a2 b2 0 a3 .( II ) M1/L5/V1/May 2004/2 VittalRao/IISc..... δi = bi δi-1 – ci-1 ai δi-2 ... 0 .. First we shall give some preliminaries......... . ⎟ ⎟ ai ⎟ ⎟ b ⎟ ⎠ be an nxn tridiagonal matrix...4.. ……...... . . ……...(I) δi = b1 We define δi = 1 From (I) assuming that δi are all nonzero we get δi δ = bi − c i −1 a i i − 2 δ i −1 δ i −1 setting δi = ki δ i −1 ai this can be written as bi = k i + ci −1 k i −1 .. .......... ........ ⎟ ....... ... ⎜ ...... ...... ci−2 0 ....... ⎜ ⎜0 ⎜ ⎜0 ⎝ a2 b2 c2 ... We seek a LU decomposition for this... .. Expanding by the last row we get.. 0 a3 b3 . ... Bangalore ...... .. ... ........ 0 0 a4 . ci−2 0 ... ..... .3..... ..

..Numerical Analysis / Direct methods for solving linear system of equation Lecture notes Now we seek a decomposition of the form A = LU where. ⎟ ⎜ ⎟ ⎜ 0 0 .. .... .. .... . . (IV) AND (V) we get VittalRao/IISc........ ⎛ u1 α 2 0 . u n −1 α n ⎟ ⎜ .. ........... Aii. i... . 0 ⎟ ... U = ⎜ . ... 0 ⎟ ⎜ ⎟ L = ⎜ 0 w2 1 0 .... 0 ⎟ ⎜ ⎜ w1 1 0 ..... Ai-1i = ai Aii = bi Ai+1i = ci In the case of L and U we have Li + 1i = wi Lii Lij =1 = 0 if j>i or j<i and i-j ≥ 2... ...e..... w ⎟ 1⎠ n ⎠ n −1 ⎝ ⎝ i. Aij is nonzero only when i and j differ by 1. ..... Bangalore M1/L5/V1/May 2004/3 .... (IV) …………….... …………….... Now A = LU is what is needed.... 0 ⎞ ⎛1 ⎜ ⎟ ⎜ ⎟ 0 u 2 α 3 .. we need the lower triangular and upper triangular parts also to be ‘tridiagonal’ triangular. Note that if A = (Aij) then because A is tridiagonal. ……………………. only Ai-1i.e.. Therefore.... ... (III) Uii+1 = αi+1 Uii = ui Uij = 0 if i>j or i<j and j-I ≥ 2. Aii+1 are nonzero....( VI ) Ai −1i = ∑ Li −1kU ki k =1 n Using (III).... 0 u ⎟ ⎜ 0 0 . 0 ⎞ 0 .... ... (V) Aij = Therefore ∑L k =1 n ik U kj . In fact.⎟ ⎜ 0 0 .... .. ..

..( X ) u i −1 Therefore bi = u i + Comparing (X) with (II) we get ui = ki = δi . ...... From (VI) we also get Aii = ∑ LikU ki k =1 = Lii−1U i−1i + LiiU ii Therefore b i = w i − 1α i + u i ............ ... ....Numerical Analysis / Direct methods for solving linear system of equation Lecture notes Therefore ai = Li −1i −1U i −1i = α i Therefore αi = ai n …………………....( VIII ) From (VI) we get further............................( XI ) δ i −1 M1/L5/V1/May 2004/4 VittalRao/IISc.. (VII) This straight away gives us the off diagonal entries of U. A i +1i = ∑ n k =1 L i +1k U ki = Li +1iU ii + Li +1i +1U i +1i ci = W iu i Thus ci = W i u i ………………… (IX) Using (IX) in (VIII) we get (also using αI = ai) bi = c i −1 a i + ui u i −1 c i −1 a i . Bangalore .

.. So we can apply the above method.....δ5 are all nonzero.. Therefore by (XI) we get VittalRao/IISc.. (XII) and (XIII) completely determine the matrices L and U and hence we get the LU decomposition..e..δ2.δ4... all the principal minors have nonzero determinant.( XII ) ui δi ………………....(XIII) From (VII) we get αI = ai (XI).... Note : We can apply this method only when δI are all nonzero... Example: ⎛ 2 ⎜ ⎜− 2 ⎜ Let A = ⎜ 0 ⎜ 0 ⎜ 0 ⎝ We have b1 = 2 c1 = -2 a2 = -2 We have δ0 = 1 δ1 = 2 b2 = 1 −2 1 −2 0 0 0 1 5 9 0 0 0 −2 −3 3 ⎞ ⎟ ⎟ ⎟ ⎟ ⎟ − 1⎟ ⎠ 0 0 0 1 Let us now find the LU decomposition as above.... Note δ1.δ3. Bangalore M1/L5/V1/May 2004/5 .. b3 = 5 c3 = 9 a4 = -2 b4 = -3 c4 = 3 a5 = 1 b5 = -1 c2 = -2 a3 = 1 δ2 = b2 δ1 – a2 c1 δ0 = 2-4 = -2 δ3 = b3 δ2 – a3 c2 δ1 = (-10) – (-2) (2) = -6 δ4 = b4 δ3 – a4 c3 δ2= (-3) (-6) – (-18) (-2) = -18 δ5 = b5 δ4 – a5 c4 δ3 = (-1) (-18) – (3) (-6)= 36.Numerical Analysis / Direct methods for solving linear system of equation Lecture notes using this in (IX) we get wi = ci ci δ i −1 = . i...

In order to avoid this situation Wilkinson suggests that in any triangular decomposition choose the diagonal entries of L and U to be of the same magnitude. = δ3 − 6 and u5 = δ 5 36 = = −2 δ 4 − 18 From (XIII) we get From (XII) we get w1 = w2 = c1 − 2 = = −1 u1 2 c2 − 2 = =2 u2 − 1 c3 9 = = 3 u3 3 c4 3 = =1 u4 3 0 1 2 0 0 0 0 1 3 0 0 0 0 1 1 0⎞ ⎟ 0⎟ 0⎟ ⎟ 0⎟ 1⎟ ⎠ α 2 = a2 = −2 α 3 = a3 = 1 α 4 = a4 = −2 w3 = w4 = Thus. u 2 = 2 = = −1.Numerical Analysis / Direct methods for solving linear system of equation Lecture notes u1 = u4 = δ δ1 δ −2 −6 = 2. u 3 = 3 = =3 δ0 δ1 δ2 − 2 2 δ 4 − 18 =3 . This will facilitate solving the triangular system LZ = y (equation (2)) in page 17. However by choosing these diagonals as 1 it may be that the ui. In the above method we had made all the diagonal entries of L as 1. α 5 = a5 = 1 ⎛2 ⎜ ⎜0 U = ⎜0 ⎜ ⎜0 ⎜0 ⎝ −2 −1 0 0 0 0 1 3 0 0 0 0 −2 3 0 0 ⎞ ⎟ 0 ⎟ 0 ⎟ ⎟ 1 ⎟ − 2⎟ ⎠ ⎛ 1 ⎜ ⎜−1 L=⎜ 0 ⎜ ⎜ 0 ⎜ 0 ⎝ . Bangalore M1/L5/V1/May 2004/6 . This can be achieved as follows: We seek A = LU where VittalRao/IISc. the diagonal entries in U are small thus creating problems in backward substitution for the system Ux = z (equation (1) on page 17).

⎟ . ⎜ 0 ...... ... . ⎜0 ⎜0 0 ...... ......Numerical Analysis / Direct methods for solving linear system of equation Lecture notes l1 L = w1 l2 . ⎟ ⎟ αn ⎟ un ⎟ ⎠ if j>i or j<i and i-j ≥ 2 if i>j..... .. Bangalore M1/L5/V1/May 2004/7 ... . (IX`) From (VIII`) we get using (VII`) and (IX`) VittalRao/IISc.. wn-1ln ⎛ u1 α 2 0 ⎜ ⎜ 0 u2 α 3 U = ⎜ . (VIII) and (IX) change as follows: ai = Ai −1i Therefore = ∑ Li −1k U ki k −1 = Li −1i −1U i −1i = l i −1α i ai = li-1 αI ………………. or j>i and j-i ≥ 2 n Now (VII)............ ⎝ Lii = li Now Li+1i = wi Lij = 0 Uii = ui Uii+1 = αi+1 Uij = 0 ..... (VII`) bi = Aii = ∑L k −1 n ik U ki = Lii −1U i −1i + LiiU ii = Wi −1α i + li u i ∴ bi = W i −1α i + li u i ... u n −1 0 0 ⎞ ⎟ ..( VIII `) ci = Ai +1i = ∑ Li +1k U ki = Li +1iU ii = w i u i n k −1 ci = wi ui ………………....

..... . ... . .. i ⎟ i ⎜ δ i −1 . . ..( X `) pi −1 bi = where pi = li ui Comparing (X`) with (II) we get pi = k i = therefore δi δ i −1 li u i = δi δ i −1 li = we choose δi δ i −1 δi δ i −1 ………………. . . . . . Bangalore M1/L5/V1/May 2004/8 . . . . . ... . Let us apply this to our example matrix (on page 21). . . (XIV) δ ⎞ u i = ⎛ sgn i ⎜ δ i −1 ⎟ ⎠ ⎝ ………… (XV) Thus li and ui have same magnitude... . . . . (XII`) . . . . . li = wi = Ci δi δ i −1 . . + li u i u i −1 l i −1 = a i c i −1 + li u i l i −1 u i −1 ai ci −1 + pi .. ⎛ δ ⎞ δ u i = ⎜ sgn . . . . . .. .. . . . . .. . .Numerical Analysis / Direct methods for solving linear system of equation Lecture notes bi = c i −1 a i . .(XI`) δ i −1 ⎟ ⎝ ⎠ ui i −1 . These then can be need to get wi and αi from (VII`) and (IX`). .... We get finally. . . . . ... (XII) and (XIII). . . . . . . . .. VittalRao/IISc. . . . ..(XIII`) α i = ai l These are the generalizations of formulae (XI).

VittalRao/IISc. Bangalore M1/L5/V1/May 2004/9 . u1 2 C3 9 = =3 3. δ5/δ4 = -2 Thus from (XI`) we get l1 = √2 l2 = 1 l3 = √3 l4 = √3 l5 = √2 u1 = √2 u2 = -1 u3 = √3 u4 = √3 u5 = -√2 From (XII`) we get w1 = w3 = C1 − 2 = =− 2. l1 2 a4 − 2 = . l2 1 From (XIII`) we get α2 = α4 = α3 = α5 = a5 1 = l4 3 Thus. δ0 = 1 b1 = 2 c1 = -2 a1 = -2 δ1 = 2 b2 = 1 c2 = -2 a3 = 1 δ2 = -2 b3 = 5 c3 = 9 a4 = -2 δ3 = -6 b4 = -3 c4 = 3 a5 = 1 δ4 = -18 b5 = -1 δ5 = 36 We get δ1/δ0 = 2 . δ2/δ1 = -1 . u3 3 a2 − 2 = =− 2. u2 −1 C4 3 = = 3 u4 3 a3 1 = =1 . we have LU decomposition. δ3/δ2 = 3 . l3 3 w2 = w4 = C2 − 2 = = 2.Numerical Analysis / Direct methods for solving linear system of equation Lecture notes We get. δ4/δ3 = 3 .

Numerical Analysis / Direct methods for solving linear system of equation Lecture notes 0⎞ ⎛ 2 ⎛ 2 −2 0 0 ⎜ ⎟ ⎜ 0 ⎟ ⎜− 2 ⎜− 2 1 1 0 A=⎜ 0 −2 5 −2 0 ⎟=⎜ 0 ⎜ ⎟ ⎜ 0 9 −3 1 ⎟ ⎜ 0 ⎜ 0 ⎜ ⎜ 0 0 0 3 − 1⎟ ⎝ 0 ⎝ ⎠ 0 0 1 0 2 3 0 3 3 0 0 0 0 0 3 3 ⎛ 0 ⎞⎜ ⎟⎜ 0 ⎟⎜ ⎟ 0 ⎟⎜ ⎜ 0 ⎟⎜ ⎟⎜ 2 ⎠⎜ ⎝ U 2 0 0 0 0 − 2 −1 0 0 0 0 1 3 0 0 0 0 −2 3 3 0 0 ⎞ ⎟ 0 ⎟ ⎟ 0 ⎟ ⎟ 1 ⎟ 3 ⎟ − 2⎟ ⎠ L in which the L and U have corresponding diagonal elements having the same magnitude. VittalRao/IISc. Bangalore M1/L5/V1/May 2004/10 .

We seek as in the case of a tridiagonal matrix. Since L is a lower triangular matrix. We determine L and U as follows : The 1st row of U and 1st column of L are determined as follows : a 11 = ∑ n k =1 l1 k u k 1 = l11 u11 Since l1k = 0 for k>1 = u11 Since l11 = 1. The method we describe is due to Doolittle. Bangalore M1/L6/V1/May 2004/1 . Let L = (lii) . we have uij = 0 if i > j. a1 j = ∑ n k =1 l 1 k u kj = l11 u11 Since l1k = 0 for k>1 = u1j Since l11 = 1. VittalRao/IISc. and by our choice. Let A = (aij). since U is an upper triangular matrix.Numerical Analysis/Direct methods for solving linear system of equation Lecture notes DOOLITTLE’S LU DECOMPOSITION We shall now consider the LU decomposition of a general matrix. ∴u11 =1. Similarly. lij =1. In general. an LU decomposition in which the diagonal entries lii of L are all 1. U = (uij). we have lij = 0 if j > i .

Now we proceed to describe how one then determines the ith row of U and ith column of L. since first i-1 columns are known for L. . . Since first i-1 rows of U have been determined. (II) Note : u11 is already obtained from (I). Now a ij = = = i ∑ k =1 n k =1 l ik u kj kj ∑ i −1 k =1 l ik u Since lik = 0 for k>i ∑l ik u kj + l ii u ij VittalRao/IISc. . 1 ≤ j ≤ n. . Similarly. . . . ukj . .Numerical Analysis/Direct methods for solving linear system of equation Lecture notes ⇒ u1j = a1j . lik are all known for 1 ≤ i ≤ n . Thus (I) and (II) determine respectively the first row of U and first column of L. Bangalore M1/L6/V1/May 2004/2 . . . 1 ≤ k ≤ i-1. (I) Thus the first row of U is the same as the first row of A. . . this means. are all known for 1 ≤ k ≤ i-1 . The first column of L is determined as follows: a j1 = ∑ n k =1 l jk u k 1 = lj1 u11 Since uk1 = 0 if k>1 ⇒ lj1 = aj1/u11 . this means. . . . . The other rows of U and columns of L are determined recursively as given below: Suppose we have determined the first i-1 rows of U and the first i-1 columns of L.

Numerical Analysis/Direct methods for solving linear system of equation Lecture notes = ∑l k =1 i −1 ik u kj + u ij since lii = 1. Thus RHS in (IV) is completely known and hence lji. . . 1 ≤ k ≤ i-1 and hence only entries in the first i-1 columns of L. VittalRao/IISc. . Thus (III) determines the ith row of U in terms of the known given matrix and quantities determined upto the previous stage.(III) Note that on the RHS we have aij which is known from the given matrix..(IV) Once again we note the RHS involves uii.. aij which is from the given matrix. Now we describe how to get the ith column of L : a ji = i ∑ l n l k =1 jk u ki = = ∑ i −1 k =1 k =1 jk u ki Since uki = 0 if k>i ∑l ji jk u ki + l ji u ii ⇒ l 1 ⎡ = ⎢a u ii ⎣ ji − ∑ i−1 l k =1 jk u ki ⎤ ⎥ ⎦ …. and they also involve ukj . which has been determined using (III). 1 ≤ k ≤ i-1 and hence only entries in the first i-1 rows of U. . Also the sum on the RHS involves lik for 1 ≤ k ≤ i-1 which are all known because they involve entries in the first i-1 columns of L . the entries in the ith column of L are completely determined by (IV).. and uki. . . ⇒ u ij = a ij − ∑ i−1 k =1 l ik u kj . ljk. . Bangalore M1/L6/V1/May 2004/3 . . 1 ≤ k ≤ i-1 which are also known since they involve only the entries in the first i-1 rows of U. .

i+2. i+1. Doolittle’s procedure is as follows: lii = 1. lj1 = aj1/u11 For i ≥ 2. Example: Let ⎛ 2 1 −1 3 ⎞ ⎜ ⎟ ⎜ − 2 2 6 − 4⎟ A=⎜ 4 14 19 4 ⎟ ⎜ ⎟ ⎜ 6 0 − 6 12 ⎟ ⎝ ⎠ Let us determine the Doolittle decomposition for this matrix. 1st row U = 1st row of A .. ……. u ij = a ij − ∑ ji i−1 k =1 l ik u i −1 kj . j = i..n (Note for j<i we have ljj = 0) We observe that the method fails if uii = 0 for some i. we determine Step 1 determining 1st row of U and 1st column of L.….Numerical Analysis/Direct methods for solving linear system of equation Lecture notes Summarizing. i+2..n (Note for j<i we have uij = 0) l ji 1 ⎡ = ⎢a u ii ⎣ − ∑ l k =1 jk u ki ⎤ ⎥ ⎦ . Bangalore M1/L6/V1/May 2004/4 . First step: VittalRao/IISc. j = i. i+1.

L42 = (a42 – l41 u12) /u22 VittalRao/IISc. u12 = 1 . l21 = a21/u11 = -2/2 = -1. Second step: 2nd row of U : u12 = 0 (Because upper triangular) u22 = a22 – l21 u12 = 2 – (-1) (1) = 3. u24 = a24 – l21 u14 = . u23 = a23 – l21 u13 = 6 – (-1) (-1) = 5. l32 = (a32 – l31 u12) /u22 = [14 – (2)(1)]/3 = 4. l31 = a31/u11 = 4/2 = 2.Numerical Analysis/Direct methods for solving linear system of equation Lecture notes 1st row of U : Same as 1st row of A. u13 = -1 . l41 = a41/u11 = 6/2 = 3.4 – (-1) (3) = -1. 2nd column of L : l12 = 0 (Because lower triangular) l22 = 1. Bangalore M1/L6/V1/May 2004/5 . ∴u11 = 2 . u14 = 3 1st column of L: l11 = 1.

Third Step: 3rd row of U: u31 = 0 u32 = 0 u33 = a33 – l31 u13 – l32 u23 = 19 – (2) (-1) – (4)(5) = 1. Fourth Step: 4th row of U: u41 = 0 u42 = 0 u43 = 0 u44 = a44 – l41 u14 – l42 u24 – l43 u34 = 12 – (3) (3) – (-1) (-1) – (2) (2) = -2. Bangalore M1/L6/V1/May 2004/6 Because upper triangular Because lower triangular Because upper triangular . VittalRao/IISc. u34 = a34 – l31 u14 – l32 u24 = 4 – (2) (3) – (4)(-1) = 2.Numerical Analysis/Direct methods for solving linear system of equation Lecture notes = [0 – (3)(1)]/3 = -1. 3rd column of L : l13 = 0 l23 = 0 l33 =1 l43 = (a43 – l41 u13 – l42 u23)/ u33 = [-6 – (3) (-1) – (-1) (5)]/1 = 2.

0 ⎛1 ⎜ ⎜−1 1 L=⎜ 2 4 ⎜ ⎜ 3 −1 ⎝ and A = LU. As we observed in the case of the LU decomposition of a tridiagonal matrix. . . . . Thus. This gives us the LU decomposition by Doolittle’s method for the given A. . .(V) 0 1 2 ⎟ ⎟ 0 0 − 2⎟ ⎠ Choose l11 = a11 .a11 ) a11 n Next aij = ∑ l1k u kj = l11u1 j asl1k = 0 fork > 1 k =1 VittalRao/IISc. u11 = (sgn . . Bangalore M1/L6/V1/May 2004/7 .Numerical Analysis/Direct methods for solving linear system of equation Lecture notes 4th column of L : l14 = 0 = l24 = l34 Because lower triangular l44 = 1. We describe this procedure as follows: Once again 1st row and 1st column of U & L respectively is our first concern: Step 1: a11 = l11 u11 0 0 1 2 0⎞ ⎟ 0⎟ . . it is not advisable to choose the lii as 1. . but to choose in such a way that the diagonal entries of L and the corresponding diagonal entries of U are of the same magnitude. 0⎟ ⎟ 1⎟ ⎠ ⎛2 ⎜ ⎜0 U =⎜ 0 ⎜ ⎜0 ⎝ 1 −1 3 ⎞ ⎟ 3 5 −1⎟ . .

say i −1 Choose l ii = pi = a ii − ∑ l ik u ki k =1 VittalRao/IISc. We determine now the ith row of U and ith column of L as follows: n a ii = ∑ ∑ n k =1 l ik u ki = k =1 l ik u ki for lik = 0 if k>i = ∑ i −1 k =1 l ik u ki + l ii u ii ∴ l ii u ii = a ii − ∑l k =1 i −1 ik u ki = p i . l j1 a j1 u 11 These determine the first row of U and first column of L. Similarly. Bangalore M1/L6/V1/May 2004/8 .Numerical Analysis/Direct methods for solving linear system of equation Lecture notes ⇒ u ij a1 j l 11 Thus note that u1j have been scaled now as compared to what we did earlier. Suppose we have determined the first I-1 rows of U and first I-1 columns of L.

Bangalore M1/L6/V1/May 2004/9 . thus determining the ith column of L. VittalRao/IISc. Let us now apply this to matrix A in the example in page 30.Numerical Analysis/Direct methods for solving linear system of equation Lecture notes u ii = − sgn pi n i pi a ij = ∑ l ik u kj = ∑ l ik u kj Q l ik = 0 fork > i k −1 k =1 = ∑l k =1 i −1 ik u kj + l ii u ij lii i −1 ⎛ ⎞ ⇒ u ij = ⎜ a ij − ∑ l ik u kj ⎟ k =1 ⎝ ⎠ determining the ith row of U. n a ji = ∑l k =1 jk u ki = ∑l k =1 i jk u ki Q u ki = 0 ifk > i = ∑ i −1 k =1 l jk u ki + l ji u ii i −1 ⎛ ⎞ ⇒ l ji = ⎜ a ji − ∑ l jk u ki ⎟ k =1 ⎝ ⎠ uii .

Numerical Analysis/Direct methods for solving linear system of equation Lecture notes First Step: l 11 u 11 = a 11 = 2 ∴ l 11 = u12 = 2 . u14 = 2 2 2 u11 = 2 . u14 = 14 = u13 = 13 = − l11 l11 l11 2 2 2 1 1 3 . u12 = l 21 = a 21 u 11 =− 2 2 =− 2 l 31 = a 31 4 = =2 2 u 11 2 l 41 = a 41 6 = =3 2 u 11 2 therefore l11 = 2 l 21 = − 2 l 31 = 2 2 l 41 = 3 2 Second Step: l 22 u 22 = a 22 − l 21 u12 VittalRao/IISc. u13 = − . u 11 = 2 a a a12 3 1 1 = . Bangalore M1/L6/V1/May 2004/10 .

u 22 = 3. Bangalore M1/L6/V1/May 2004/11 . u 22 = 3 u 23 = (a 23 − l 21u13 ) l22 ⎡ ⎛ 1 ⎞⎤ = ⎢6 − − 2 ⎜ − ⎟⎥ / 3 = 5 3 2 ⎠⎦ ⎝ ⎣ ( ) u 24 = [a 24 − l 21u14 ] l22 ⎡ ⎛ 3 ⎞⎤ = ⎢(− 4 ) − − 2 ⎜ ⎟⎥ / 3 = − 1 3 ⎝ 2 ⎠⎦ ⎣ therefore ( ) ∴ u 21 = 0.Numerical Analysis/Direct methods for solving linear system of equation Lecture notes ⎛ 1 ⎞ = 2− − 2 ⎜ ⎟=3 ⎝ 2⎠ ( ) ∴ l 22 = 3. u 24 = − 1 3 l32 = (a32 − l31u12 ) / u 22 ⎡ ⎛ 1 ⎞⎤ = ⎢14 − 2 2 ⎜ ⎟⎥ / ⎝ 2 ⎠⎦ ⎣ ( ) 3 =4 3 VittalRao/IISc. u 23 = 5 3 .

u 33 = 1 u34 = (a34 − l31u14 − l32u 24 ) / l33 VittalRao/IISc. Bangalore M1/L6/V1/May 2004/12 .Numerical Analysis/Direct methods for solving linear system of equation Lecture notes l 42 = (a42 − l 41u12 ) / u 22 ⎛ ⎛ 1 ⎞⎞ = ⎜0 − 3 2 ⎜ ⎟⎟ / 3 ⎜ 2 ⎠⎟ ⎝ ⎝ ⎠ ( ) =− 3 therefore l12 = 0 l 22 = 3 l32 = 4 3 l 42 = 3 Third Step: l33u 33 = a33 − l31u13 − l32 u 23 ⎛ 5 ⎞ ⎛ 1 ⎞ = 19 − 2 2 ⎜ − ⎟ ⎟− 4 3 ⎜ ⎜ ⎟ 2⎠ ⎝ ⎝ 3⎠ =1 ( ) ( ) ∴ l33 = 1.

u34 = 2 l 43 = [a 43 − l 41u13 − l 42 u 23 ] / u 33 ⎛ ⎛ 5 ⎞⎞ ⎛ 1 ⎞ ⎟⎟ /1 = ⎜− 6 − 3 2 ⎜− ⎟− − 3 ⎜ ⎜ ⎟ ⎜ 2⎠ 3 ⎠⎟ ⎝ ⎝ ⎝ ⎠ =2 therefore ⎡ l 13 ⎢l ⎢ 23 ⎢ l 33 ⎢ ⎣ l 43 Fourth Step: = 0⎤ = 0⎥ ⎥ = 1 ⎥ ⎥ = 2⎦ ( ) ( ) l44u44 = a44 − l41u14 − l42u24 − l43u34 ⎛ 1 ⎞ ⎛ 3 ⎞ = 12 − 3 2 ⎜ ⎟ − (2 )(2 ) ⎟ − − 3 ⎜− ⎜ ⎟ 3⎠ ⎝ 2⎠ ⎝ = -2 ( ) ( ) VittalRao/IISc. u32 = 0. u33 = 1. Bangalore M1/L6/V1/May 2004/13 .Numerical Analysis/Direct methods for solving linear system of equation Lecture notes ⎡⎛ ⎛ 1 ⎞ ⎞⎤ ⎛ 3 ⎞ = ⎢⎜ 4 − 2 2 ⎜ ⎟ ⎟⎥ / 1 ⎟ − 4 3 ⎜− ⎜ ⎟ ⎜ 3 ⎠ ⎟⎥ ⎝ 2⎠ ⎢⎝ ⎝ ⎠⎦ ⎣ ( ) ( ) =2 ∴u31 = 0.

the corresponding diagonal entries of L and U have the same Note: Compare this with the L and U of page 32. u 42 = 0. These then give the diagonals of the U in page 36. u 43 = 0. u 44 = − 2 ⎡ l 14 ⎢l ⎢ 24 ⎢ l 34 ⎢ ⎢ l 44 ⎣ = 0 ⎤ = 0 ⎥ ⎥ = 0 ⎥ ⎥ 2⎥ = ⎦ Thus we get the LU decompositions. i. ⎛ ⎜ ⎜− L=⎜ ⎜2 ⎜3 ⎝ 2 2 2 2 0 3 0 0 4 3 1 − 3 2 0 ⎞ ⎟ 0 ⎟ ⎟ 0 ⎟ 2⎟ ⎠ . third diagonal1 by 1 and 4th diagonal –2 by . The U in page 36 can be obtained from the U of page 32 by (1) replacing the ‘numbers’ in the diagonal of that U and keeping the same sign. VittalRao/IISc. Bangalore M1/L6/V1/May 2004/14 . 2nd diagonal 3 is replaced by 3 .e.2 . u 44 = − 2 ∴ u 41 = 0. ⎛ ⎜ ⎜ ⎜ U =⎜ ⎜ ⎜ ⎜ ⎝ 2 0 0 0 1 2 3 0 0 − 1 2 5 3 1 0 3 ⎞ ⎟ 2 ⎟ 1 ⎟ − 3⎟ ⎟ 2 ⎟ − 2⎟ ⎠ in which magnitude. (2) Divide each entry to the right of a diagonal in the U of page 32 by these replaced diagonals. Thus the first diagonal 2 is replaced by 2 .Numerical Analysis/Direct methods for solving linear system of equation Lecture notes ∴ l 44 = 2. What is the difference. lii = u ii .

Numerical Analysis/Direct methods for solving linear system of equation Lecture notes Thus 1st row changes to 1st row of U in page 36 2nd row changes to 2nd row of U in page 36 3rd row changes to 3rd row of U in page 36 4th row changes to 4th row of U in page 36 This gives the U of page 36 from that of page 32. (2) Multiply each entry below the diagonal of L by this new diagonal entry. Bangalore M1/L6/V1/May 2004/15 . VittalRao/IISc. We get the L of page 32 changing to the L of page 36. The L in page 36 can be obtained from the L of page 32 as follows: (1) Replace the diagonals in L by magnitude of the diagonals in U of page 36.

absolute value for uii. Thus instead of actually looking for a factorization A = LU we shall be looking for a system. The Doolittle’s method which is used to factorize A as LU is used from the point of view of reducing the system Ax = y To two triangular systems Lz = y Ux = z as already mentioned in page 17. u11 = a11 = 3 VittalRao/IISc. We illustrate this by the following example: The basic idea is at each stage calculate all the uii that one can get by the permutation of rows of the matrix and choose that matrix which gives the max. Bangalore M1/L7/V1/May 2004/1 . 1st diagonal of U. A*x = y* and for which A* has LU decomposition. By Doolittle decomposition.Numerical Analysis/ Direct methods for solving linear system of equation Lecture notes DOOLITTLE’S METHOD WITH ROW INTERCHANGES We have seen that Doolittle factorization of a matrix A may fail the moment at stage i we encounter a uii which is zero. As an example consider the system Ax = y where ⎛ 3 1 − 2 − 1⎞ ⎟ ⎜ 3⎟ ⎜2 − 2 2 A=⎜ 1 5 − 4 − 1⎟ ⎟ ⎜ ⎜3 1 2 3⎟ ⎠ ⎝ We keep lii = 1. Just as we avoided this problem in the Gaussian elimination method by introducing partial pivoting we can adopt this procedure in the modified Doolittle’s procedure. This occurrence corresponds to the occurrence of zero pivot at the ith stage of simple Gaussian elimination method. Stage 1: ⎛ 3 ⎞ ⎜ ⎟ ⎜ − 8⎟ y= ⎜ ⎟ 3 ⎜ ⎟ ⎜ − 1⎟ ⎝ ⎠ We want LU decomposition for some matrix that is obtained from A by row interchanges.

l31 = 31 = . l 41 = 41 = = 1. Suppose instead of above we interchange 2nd row with 4th row of A: New a22 = 1 and new l21 = 1 and therefore new u22 = 1 – (1) (1) = 0 Of these 14/3 has largest absolute value. l11 = 1. 0 * *⎟ ⎟ 0 0 *⎟ ⎠ We now calculate the second diagonal of U: By Doolittle’s method we have u 22 = a22 − l 21u12 = −2 − ⎛ 2 ⎞(1) = − 8 ⎜ ⎟ ⎝3⎠ 3 Suppose we interchange 2nd row with 3rd row of A and calculate u22 : our new a22 is 5. Therefore new l21 is1/3. A and Y remaining unchanged. 3 u11 3 u11 3 u11 0 1 * * 0 0 1 * 0⎞ ⎟ 0⎟ ⎟ . l 21 = Thus a a a 21 2 3 1 = . and 0⎟ ⎟ *⎟ ⎠ ⎛1 ⎜2 ⎜ 3 L is of the form ⎜ 1 ⎜ ⎜3 ⎜1 ⎝ ⎛3 ⎜ ⎜0 U is of the form ⎜ 0 ⎜ ⎜0 ⎝ Stage 2 1 − 2 − 1⎞ ⎟ * * *⎟ . Therefore we interchange 2nd and 3rd row. So we keep the matrix as it is and calculate 1st row of U. So we prefer this. ⎛3 ⎜ ⎜1 NewA = ⎜ 2 ⎜ ⎜3 ⎝ 1 5 −2 1 −2 −4 2 2 − 1⎞ ⎛ 3 ⎞ ⎟ ⎜ ⎟ − 1⎟ ⎜ 3 ⎟ .Numerical Analysis/ Direct methods for solving linear system of equation Lecture notes If we interchange 2nd or 3rd or 4th rows with 1st row and then find the u11 for the new matrix we get respectively u11 = 2 or 1 or 3. by Doolittle’s method. Thus interchange of rows does not give any advantage at this stage as we have already got 3 without row interchange for u11. Newy = ⎜ 3 ⎟ − 8⎟ ⎟ ⎜ ⎟ ⎜ − 1⎟ 3 ⎟ ⎠ ⎝ ⎠ VittalRao/IISc. But note that the L gets in the 1st column 2nd and 3rd row interchanged. Bangalore M1/L7/V1/May 2004/2 .

Bangalore M1/L7/V1/May 2004/3 . VittalRao/IISc. 14/3. namely –8/3. For. observe. 0 before we chose 14/3.Numerical Analysis/ Direct methods for solving linear system of equation Lecture notes ⎛ 1 ⎜ ⎜ 13 NewL = ⎜ 2 ⎜ 3 ⎜ 1 ⎝ 0 0 0⎞ ⎛3 1 ⎜ ⎟ 14 1 0 0⎟ ⎜0 3 ⎟. NewU = ⎜ * 1 0⎟ 0 0 ⎜ ⎜0 0 * * 1⎟ ⎠ ⎝ − 2 − 1⎞ ⎟ * *⎟ ⎟ * *⎟ 0 *⎟ ⎠ Now we do the Doolittle calculation for this new matrix to get 2nd row of U and 2nd column of L. u 23 = a 23 − l 21u13 = (− 4) − ⎛ 1 ⎞(− 2) = − 10 ⎜ ⎟ ⎝ 3⎠ 3 u 24 = a 24 − l21u14 2nd column of L: 2 ⎛1⎞ = (− 1) − ⎜ ⎟ (− 1) = − 3 ⎝3⎠ ⎡ = ⎢ (− 2 ) − ⎣ 4 ⎛ 2 ⎞ ⎤ 14 = − ⎜ ⎟ (1 )⎥ ÷ 7 3 ⎝3⎠ ⎦ l 32 = [a 32 − l 31u12 ] ÷ u 22 l42 = [a42 − l41u12 ] ÷ u11 = [3 − (1 )(1 )] ÷ ⎛ ⎜ ⎜ Therefore new L has form ⎜ ⎜ ⎜ ⎜ ⎝ ⎛3 ⎜ ⎜0 New U has form ⎜ ⎜0 ⎜0 ⎝ 1 14 3 0 0 14 =0 3 1 1 3 2 3 1 −2 − 10 3 * 0 0 1 − 4 7 0 −1 ⎞ −2⎟ ⎟ 3 ⎟ * ⎟ * ⎟ ⎠ 0 0 1 * 0⎞ ⎟ 0⎟ ⎟ 0⎟ ⎟ 1⎟ ⎠ This completes the 2nd stage of our computation. that the rejected u22 namely – 8/3 and 0 when divided by the chosen u22 namely 14/3 give the entries of L below the second diagonal. But this is not really so. It appears that we are doing more work than Doolittle. Note: We had three choices of u22 to be calculated.

⎝ 3⎠ Of these two choices of u33 we have 4 has the larges magnitude. So we interchange 3rd and 4th rows of the matrix of 2nd stage to get ⎛ 3 ⎞ ⎛ 3 1 − 2 − 1⎞ ⎜ ⎟ ⎜ ⎟ 1 5 − 4 − 1⎟ ⎜ 3 ⎟ ⎜ NewA = ⎜ NewY = ⎜ ⎟ −1 3 1 2 3⎟ ⎜ ⎟ ⎜ ⎟ ⎜ − 8⎟ ⎜2 − 2 2 3⎟ ⎝ ⎠ ⎝ ⎠ ⎛1 ⎜1 ⎜ NewL = ⎜ 3 ⎜1 ⎜2 ⎜ ⎝3 0 1 0 4 − 7 0 0 1 * 0⎞ ⎛3 ⎟ ⎜ 0⎟ ⎜0 ⎟.Numerical Analysis/ Direct methods for solving linear system of equation Lecture notes 3rd Stage: 3rd diagonal of U: u 33 = a33 − l31u13 − l32 u 23 ⎛ 4 ⎞⎛ 10 ⎞ 10 ⎛ 2⎞ = 2 − ⎜ ⎟(− 2 ) − ⎜ − ⎟⎜ − ⎟ = 7 ⎝ 7 ⎠⎝ 3 ⎠ ⎝ 3⎠ Suppose we interchange 3rd row and 4th row of new A obtained in 2nd stage. Bangalore M1/L7/V1/May 2004/4 . We get new a33 = 2. But in L also the second column gets 3rd and 4th row interchanges Therefore new l31 = 1 and new l32 = 0 ⎛ 10 ⎞ Therefore new u33 = a33 – l31 u13 – l32 u23 = 2 − (1)(− 2 ) + (0 )⎜ − ⎟ = 4. NewU = ⎜ 0⎟ ⎜0 ⎟ ⎜0 1⎟ ⎝ ⎠ −2 10 − 3 4 0 −1 ⎞ 2⎟ − ⎟ 3⎟ * ⎟ * ⎟ ⎠ 1 14 3 0 0 Now for this set up we calculate the 3rd stage entries as in Doolittle’s method: u 34 = a34 − l31u14 − l32 u 24 ⎛ 2⎞ = 3 − (1)(− 1) − (0 )⎜ − ⎟ = 4 ⎝ 3⎠ l 43 = (a 43 − l 41u13 − l 42 u 23 ) ÷ u 33 VittalRao/IISc.

New U = U* 0⎞ ⎛3 1 ⎜ ⎟ 14 0⎟ ⎜0 3 ⎟. 4 ⎟ 13 ⎟ ⎟ 7 ⎠ New L = L* . L*z = y* U*x = z Now L*z = y* gives by forward substitution: Z1 =3 VittalRao/IISc.Numerical Analysis/ Direct methods for solving linear system of equation Lecture notes ⎡ ⎛ 2⎞ ⎛ 4 ⎞⎛ 10 ⎞⎤ = ⎢2 − ⎜ ⎟(− 2) − ⎜ − ⎟⎜ − ⎟⎥ ÷ 4 = 5/14. ⎝ 7 ⎠⎝ 3 ⎠⎦ ⎣ ⎝ 3⎠ ⎛1 ⎜1 ⎜ ∴ NewL = ⎜ 3 ⎜1 ⎜2 ⎜ ⎝3 4th Stage 0 1 0 4 − 7 0 0 1 5 14 0⎞ ⎛3 1 ⎟ ⎜ 14 0⎟ ⎜0 ⎟. Bangalore M1/L7/V1/May 2004/5 . u 44 = [a 44 − l 41u14 − l 42 u 24 − l 43 u 34 ] ⎛ 2⎞ ⎛ 4 ⎞⎛ 2 ⎞ ⎛ 5 ⎞ = 3 − ⎜ ⎟(− 1) − ⎜ − ⎟⎜ − ⎟ − ⎜ ⎟(4 ) = 13/7. ⎝ 3⎠ ⎝ 7 ⎠⎝ 3 ⎠ ⎝ 14 ⎠ ⎛3 ⎜ ⎜1 * ∴ NewA = A = ⎜ 3 ⎜ ⎜2 ⎝ ⎛1 ⎜1 ⎜ L* = ⎜ 3 ⎜1 ⎜2 ⎜ ⎝3 0 1 0 4 − 7 0 0 1 5 14 1 5 1 −2 −2 −4 2 2 − 1⎞ ⎛ 3 ⎞ ⎜ ⎟ ⎟ − 1⎟ ⎜ 3 ⎟ * NewY = Y = ⎜ ⎟ 3⎟ −1 ⎜ ⎟ ⎟ ⎜ − 8⎟ 3⎟ ⎝ ⎠ ⎠ −2 10 − 3 4 0 −1 ⎞ 2⎟ − ⎟ 3 ⎟. U * = ⎜ 0⎟ 0 0 ⎜ ⎜0 0 ⎟ 1⎟ ⎜ ⎝ ⎠ and A* = L*U* The given system Ax=y is equivalent to the system A*x=y* and hence can be split into the triangular systems. NewU = ⎜ 3 0⎟ 0 0 ⎜ ⎜0 0 1⎟ ⎟ ⎝ ⎠ −2 10 − 3 4 0 −1 ⎞ 2⎟ − ⎟ 3⎟ 4 ⎟ * ⎟ ⎠ Note: The rejected u33 divided by chosen u33 gives l43.

Bangalore M1/L7/V1/May 2004/6 .Numerical Analysis/ Direct methods for solving linear system of equation Lecture notes 1 z1 + z 2 = 3 ⇒ z 2 = 3 − 1 = 2 3 z1 + z 3 = − 1 ⇒ z 3 = − 1 − z1 = − 4 2 4 5 z1 − z 2 + z 3 + z 4 = −8 3 7 14 ⎛2⎞ ⎛4⎞ ⎛ 5 ⎞ ⎜ ⎟ (3 ) − ⎜ ⎟ (2 ) + ⎜ ⎟ (− 4 ) + z 4 = − 8 ⎝3⎠ ⎝7⎠ ⎝ 14 ⎠ ⇒ z4 = − 52 7 ⎛ 3 ⎞ ⎟ ⎜ ⎜ 2 ⎟ ∴z =⎜ −4 ⎟ ⎜ 52 ⎟ ⎟ ⎜− ⎝ 7 ⎠ Therefore U*x = z gives by back-substitution. 13 52 x4 = − 7 7 therefore x4 = -4. 4x3 + 4x4 = −4 ⇒ x3 + x4 = −1 ⇒ x3 = −1− x4 = 3 therefore x3 = 3 14 10 2 x2 − x3 − x 4 = 2 3 3 3 14 ⎛ 10 ⎞ ⎛ 2 ⎞ x 2 − ⎜ ⎟(3)⎜ − ⎟(− 4 ) = 2 3 ⎝ 3 ⎠ ⎝ 3⎠ ⇒ x2 = 2 3x1 + x 2 − 2 x3 − x 4 = 3 ⇒ 3 x1 + 2 − 6 + 4 − 3 ⇒ x1 = 1 Therefore the solution of the given system is VittalRao/IISc.

between the two methods. The name of Crout is often associated with triangular decomposition methods. which is the case when we choose all the diagonal entries of L as 1. But A1 = A since A is symmetric. Apart from this. as regards procedure or accuracy. Bangalore M1/L7/V1/May 2004/7 . Stage 1: 1st row of U: VittalRao/IISc. so that A = U1U (or same as LL1) Now therefore determining U automatically gets L = U1 We now do the Doolittle method for this. Let A = LU be a LU decomposition Then A1 = U1 L1 U1 is also lower triangular L1 is upper triangular Therefore U1L1 is a decomposition of A1 as product of lower and upper triangular matrices.1 ≤ i ≤ n . there is little distinction. all the remaining elements of the upper and lower triangular matrices may then be uniquely determined as in Doolittle’s method. Wilkinson’s suggestion is to get a LU decomposition in which l ii = u ii . Therefore LU = U1L1 We ask the question whether we can choose L as U1. We finally look at the cholesky decomposition for a symmetric matrix: Let A be a symmetric matrix.Numerical Analysis/ Direct methods for solving linear system of equation Lecture notes ⎛ 1 ⎞ ⎜ ⎟ ⎜ 2 ⎟ x = ⎜ 3 ⎟ ⎜ ⎟ ⎜− 4⎟ ⎝ ⎠ Some Remarks: The factorization of a matrix A as the product of lower and upper triangular matrices is by no means unique. Note that it is enough to determine the rows of U. the diagonal elements of one or the other factor can be chosen arbitrarily. In fact. and in crout’s method the diagonal elements of U are all chosen as unity. As already mentioned.

Numerical Analysis/ Direct methods for solving linear system of equation Lecture notes a11 = ∑ l1 k u k 1 = k =1 n ∑u k =1 n 2 k1 Q l1k = uk1 Q L = U1 = u 211 Q u 1 k = 0 for k>1 since U is upper triangular ∴ u 11 = a11 We finally look at the cholesky decomposition for a symmetric matrix: Let A be a symmetric matrix. so that A = U1U (or same as LL1) Now therefore determining U automatically gets L = U1 We now do the Doolittle method for this. Bangalore determines first row of U. Note that it is enough to determine the rows of U. ∴ u 11 = n a11 n a1i = ∑ l1k u ki = ∑ u k1u ki k =1 k =1 = u 11 u 1 i Q u k1 = 0 fork > 1 u11 = a11 ∴ u1i = a1i / u11 VittalRao/IISc. Therefore LU = U1L1 We ask the question whether we can choose L as U1. M1/L7/V1/May 2004/8 . But A1 = A since A is symmetric. and hence first column of L. Stage 1: 1st row of U: n n a11 = ∑l1k uk1 = ∑u 2 k1 k =1 k =1 Q l1k = uk1 Q L = U1 = u211 Q u k 1 = 0 for k>1 since U is upper triangular. Let A = LU be a LU decomposition Then A1 = U1 L1 U1 is also lower triangular L1 is upper triangular Therefore U1L1 is a decomposition of A1 as product of lower and upper triangular matrices.

Note: uki are known for k ≤ i -1. Bangalore M1/L7/V1/May 2004/9 . we determine the ith row of U as follows: a ii = ∑ n k =1 l ik u ki = ∑ n u 2 ki k =1 Q l ik = u ki = = ∑ i −1 k =1 i u 2 ki 2 ki k =1 Q u ki = 0 for k > i ∑u 2 ii + u 2 ii ∴ u = a ii − ∑ u 2 i −1 u 2 ki k =1 ∴ u ii = a ii − ∑ n i −1 ki k =1 . 1st i-1 rows have already been obtained. a ij = i ∑l k =1 n ik u kj = ∑u k =1 ki u kj Now we need uij for j > i = ∑ u ki u kj k =1 Because uki = 0 for k > i = ∑u k =1 i −1 ki u kj + u ii u ij Therefore u ij ⎡ = ⎢ a ij − ⎣ ∑u k =1 i −1 ki ⎤ u kj ⎥ ÷ u ii ⎦ VittalRao/IISc.Numerical Analysis/ Direct methods for solving linear system of equation Lecture notes Having determined the 1st i-1 rows of U.

This is called CHOLESKY decomposition.Numerical Analysis/ Direct methods for solving linear system of equation Lecture notes i −1 ⎧ 2 ⎪ u ii = a ii − ∑ u ki ⎪ k =1 ∴⎨ i −1 ⎤ ⎪u = ⎡ a − ∑1 u ki u kj ⎥ ÷ u ij ij ij ⎢ ⎪ k= ⎣ ⎦ ⎩ determines the ith row of U in terms of the previous rows. ⎧u 11 ⎪ ⎪u 12 ⎨ ⎪u 13 ⎪u ⎩ 14 = a11 = 1 = a12 ÷ u 11 = − 1 = a13 ÷ u 11 = 1 = a14 ÷ u 11 = 1 2nd row of U VittalRao/IISc. Thus we get U and L is U1. Bangalore 2004/10 M1/L7/V1/May . Let us find the Cholesky decomposition. Example: ⎛ 1 −1 ⎜ ⎜−1 5 Let A = ⎜ 1 −3 ⎜ ⎜ 1 3 ⎝ 1st row of U 1 −3 3 1 1⎞ ⎟ 3⎟ 1⎟ ⎟ 10 ⎟ ⎠ This is a symmetric matrix.

Numerical Analysis/ Direct methods for solving linear system of equation

Lecture notes

⎧u = a − u 212 = 5 − 1 = 2 22 ⎪ 22 ⎪ ⎨u 23 = (a 23 − u12 u13 ) ÷ u 22 = (− 3 − (− 1)(1)) ÷ 2 = −1 ⎪u = (a − u u ) ÷ u = (3 − (− 1)(1)) ÷ 2 = 2 24 12 14 22 ⎪ 24 ⎩
3rd row of U

⎧u = a − u 213 − u 2 23 = 3 − 1 − 1 = 1 ⎪ 33 33 ⎨ ⎪u 34 = (a34 − u13u14 − u 23u 24 ) ÷ u 33 = (1 − (1)(1) − (− 1)(2 )) ÷ 1 = 2 ⎩
4th row of U

u 44 = a44 − u 214 − u 2 24 − u 2 34 = 10 − 1 − 4 − 4 = 1
⎛1 −1 1 ⎜ ⎜0 2 −1 ∴U = ⎜ 0 0 1 ⎜ ⎜0 0 0 ⎝
A = LU = LL1 = U1U

1⎞ ⎟ 2⎟ 2⎟ ⎟ 1⎟ ⎠

⎛1 0 ⎜ ⎜ −1 2 1 ∴U = L = ⎜ 1 −1 ⎜ ⎜1 2 ⎝

0 0⎞ ⎟ 0 0⎟ and 1 0⎟ ⎟ 2 1⎟ ⎠

VittalRao/IISc, Bangalore 2004/11

M1/L7/V1/May

Numerical Analysis / Iterative methods for solving linear systems of equations

Lecture notes

ITERATIVE METHODS FOR THE SOLUTION OF SYSTEMS EQUATION In general an iterative scheme is as follows: We have an nxn matrix M and we want to get the solution of the systems x = Mx + y ……………………..(1)
k We obtain the solution x as the limit of a sequence of vectors, x which are obtained as follows:

{ }

We start with any initial vector x(0), and calculate x(k) from, x(k) = Mx(k-1) + y ……………….(2) for k = 1,2,3, ….. successively. A necessary and sufficient condition for the sequence of vectors x(k) to converge to M sp solution x of (1) is that the spectral radius of the iterating matrix M is less than 1 or M <1 if for some matrix norm. We shall now consider some iterative schemes for solving systems of linear equations, Ax = y …………….(3) We write this system in detail as

a11 x1 + a12 x 2 + ..... + a1n x n = y1

a21 x1 + a22 x2 + ..... + a2 n xn = y 2
...... ...... ......

. . . . . . . .(4)

an1 x1 + an 2 x2 + ..... + ann xn = y n

⎛ a11 ⎜ ⎜a WehaveA = ⎜ 21 K ⎜ ⎜a ⎝ n1

a12 a 22 K an 2

K K K K

a1n ⎞ ⎟ a2n ⎟ K ⎟ . . . . . . . . . . . (5) ⎟ a nn ⎟ ⎠

We denote by D, L, U the matrices

VittalRao/IISc, Bangalore

M2/L1/V1/May 2004/1

Numerical Analysis / Iterative methods for solving linear systems of equations

Lecture notes

⎛ a11 ⎜ ⎜ 0 D=⎜ 0 ⎜ ⎜ ... ⎜ 0 ⎝

0 a 22 0 ... 0

... ... a 33 ... ...

... ... ... ... ...

0 ⎞ ⎟ 0 ⎟ 0 ⎟.......... .......... ......( 6) ⎟ ... ⎟ a nn ⎟ ⎠

the diagonal part of A; and

⎛ 0 ⎜ ⎜ a 21 L = ⎜ a31 ⎜ ⎜K ⎜a ⎝ n1

0 0 a32

K K 0

K K K

K K K a n 2 K a n −1

0⎞ ⎟ 0⎟ 0 ⎟..................................(7) ⎟ K⎟ 0⎟ ⎠

the lower triangular part of A; and

⎛ 0 u12 ⎜ ⎜0 0 U =⎜ ... ... ⎜ ⎜0 0 ⎝
Note that, A=D+L+U

... ... u1n ⎞ ⎟ u 23 ... u 2 n ⎟ ........................................(8) ... ... ... ⎟ ⎟ 0 ... 0 ⎟ ⎠

the upper triangular part of A. ……………………… (9). …………(10)

We assume that aii ≠ 0 ; i = 1, 2, ……, n So that D-1 exists.

We now describe two important iterative schemes, below, for solving the system (3).

VittalRao/IISc, Bangalore

M2/L1/V1/May 2004/2

... . J is called the Jacobi Iteration Matrix... − ann−1 xn−1 + y n We start with an initial vector.. ... x (0 ) ⎛ x (0 )1 ⎞ ⎜ (0 ) ⎟ ⎜x 2⎟ .. k = 1. (13) ˆ x = Jx + y where ……………… (14) J = -D-1 (L + U) ……………. …. We now substitute this vector on the RHS of (11) to calculate again x1..(15) and. We shall see an easier condition below: VittalRao/IISc..........(11) ann xn = − an1 x1 − an 2 x2 ... giving ………………….... . …...... ...(12 ) =⎜ M ⎟ ⎟ ⎜ ⎜ x (0 ) n ⎟ ⎠ ⎝ and substitute this vector for x on the RHS of (11) and calculate x1.. .... we get x(0) starting vector ……………. ... x2..x2.Numerical Analysis/Iterative methods for solving linear system of equation Lecture notes JACOBI ITERATION We write the system as in (4) as a11 x1 = − a12 x 2 − a13 x3 .. Dx = . as the iterating scheme... ..(16) ˆ x(k) = Jx(k−1) + y... . . We can describe this briefly as follows: The equation (11) can be written as.. Bangalore M2/L2/V1/May 2004/1 . xn and call this new vector as x(2) and continue this procedure to calculate the sequence x(k).........(L + U) x + y which we can write as x = -D-1 (L+U) x +D-1 y... . . This is similar to (2) with the iterating matrix M as J = -D-1 (L + U)..... − a1n x n + y1 a22 x2 = −a21 x1 − a23 x3 .2. xn and this vector is called x(1).. The scheme will converge to the solution x of our system if J sp < 1 .. − a2 n xn + y 2 .. ..

Rn } < 1 and we have convergence.. ........…. . ⎟ 0 ⎟ ⎟ ⎠ − Now therefore the ith Absolute row sum for J is Ri = ∑ j ≠i a ij a ii = ( a i1 + a i 2 + . Thus the Jacobi iteration scheme for the system (3) converges if A is strictly row diagonally dominant (Of course this condition may not be satisfied) and still Jacobi iteration scheme may converge if J sp < 1...Numerical Analysis/Iterative methods for solving linear system of equation Lecture notes We have 1/a11 D-1 = 1/a22 1/ann and therefore ⎛ ⎜ 0 ⎜ ⎜ a21 − −1 J = − D (L + U ) = ⎜ a ⎜ 22 ⎜ ...3. + a ii −1 + a ii +1 + .. Now Ri < 1 means ai1 + ai 2 + .......... i. + aii −1 + aii +1 + ....... + a in ) / a ii ∴ If Ri <1 for i =1. A is ‘strictly row diagonally dominant’...... .. − ...2. + ain < aii i... a − nn−1 ann a1n ⎞ ⎟ a11 ⎟ a ⎟ − 2n ⎟ a22 ⎟ ... Bangalore M2/L2/V1/May 2004/2 . ⎜ − an1 ⎜ a ⎝ nn − a12 a11 0 .. in each row of A the sum of the absolute x values of the nondiagonal entries is dominated by the absolute value of the diagonal entry.e....n then J ∞ = max{R1 . a − n2 ann a13 a11 a − 23 a22 .. VittalRao/IISc.e..

Numerical Analysis/Iterative methods for solving linear system of equation Lecture notes Example: Consider the system x1 + 2x2 – 2x3 = 1 x1 + x2 + x3 =0 ………….. 0 ⎟ ⎠ ⎛ 0 − 2 + 2⎞ ⎜ ⎟ J = − D −1 (L + U ) = ⎜ − 1 0 − 1 ⎟ ⎜− 2 − 2 0 ⎟ ⎝ ⎠ Thus the Jacobi scheme (16) becomes .(I) 2x1 + 2x2 + x3 = 0 Let us apply the Jacobi iteration scheme with the initial vector as x (0) ⎛0⎞ ⎜ ⎟ = θ = ⎜ 0 ⎟ ………….(II) ⎜0⎟ ⎝ ⎠ We ⎛1 ⎜ A = ⎜1 ⎜2 ⎝ 2 1 2 − 2⎞ ⎟ 1 ⎟ 1 ⎟ ⎠ . ⎛1 ⎜ D = ⎜0 ⎜0 ⎝ ⎛1⎞ ⎜ ⎟ y = ⎜0⎟ ⎜0⎟ ⎝ ⎠ 0 1 0 0⎞ ⎟ 0⎟ 1⎟ ⎠ ⎛0 ⎜ L +U = ⎜1 ⎜2 ⎝ 2 0 2 − 2⎞ ⎟ 1 ⎟ ... k =1.. Bangalore ..2.. ⎛1⎞ ⎜ ⎟ ˆ y = D y = ⎜ 0⎟ ⎜ 0⎟ ⎝ ⎠ −1 x (0 ) ⎛ 0⎞ ⎜ ⎟ = ⎜ 0⎟ ⎜ 0⎟ ⎝ ⎠ ˆ x(k) = Jx(k−1) + y. ∴ x (1) ⎛ 1⎞ ⎜ ⎟ ˆ ˆ ˆ = Jx(0 ) + y = Jθ + y = y = ⎜ 0 ⎟ ⎜ 0⎟ ⎝ ⎠ M2/L2/V1/May 2004/3 VittalRao/IISc.

Bangalore M2/L2/V1/May 2004/4 . Here. there is no convergence problem at all. = x(3) ∴ x(k) = x(3) and x(k) converges to x(3) ∴ The solution is x = lim k →∞ x (k ) =x (3 ) ⎛ − 1⎞ ⎜ ⎟ =⎜ 1 ⎟ ⎜0⎟ ⎝ ⎠ Can easily check that this is the exact solution.Numerical Analysis/Iterative methods for solving linear system of equation Lecture notes x (2 ) = Jx (1) ⎛ 0 − 2 + 2 ⎞⎛ 1 ⎞ ⎛ 1 ⎞ ⎟⎜ ⎟ ⎜ ⎟ ⎜ ˆ + y = ⎜ −1 0 − 1 ⎟⎜ 0 ⎟ + ⎜ 0 ⎟ ⎜ − 2 − 2 0 ⎟⎜ 0 ⎟ ⎜ 0 ⎟ ⎠⎝ ⎠ ⎝ ⎠ ⎝ ⎛ 0 ⎞ ⎛1⎞ ⎛ 1 ⎞ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ = ⎜ − 1⎟ + ⎜0⎟ = ⎜ − 1⎟ ⎜ − 2⎟ ⎜0⎟ ⎜ − 2⎟ ⎝ ⎠ ⎝ ⎠ ⎝ ⎠ x (3 ) ⎛ 0 − 2 2 ⎞⎛ 1 ⎞ ⎛ 1 ⎞ ⎜ ⎟⎜ ⎟ ⎜ ⎟ ˆ = Jx (2 ) + y = ⎜ − 1 0 − 1⎟⎜ − 1 ⎟ + ⎜ 0 ⎟ ⎜ − 2 − 2 0 ⎟⎜ − 2 ⎟ ⎜ 0 ⎟ ⎝ ⎠⎝ ⎠ ⎝ ⎠ ⎛ − 2 ⎞ ⎛ 1 ⎞ ⎛ − 1⎞ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ = ⎜ 1 ⎟ + ⎜ 0⎟ = ⎜ 1 ⎟ ⎜ 0 ⎟ ⎜ 0⎟ ⎜ 0 ⎟ ⎝ ⎠ ⎝ ⎠ ⎝ ⎠ x (4 ) ⎛ 0 − 2 2 ⎞⎛ − 1⎞ ⎛ 1 ⎞ ⎜ ⎟⎜ ⎟ ⎜ ⎟ ˆ = Jx (3 ) + y = ⎜ − 1 0 − 1⎟⎜ 1 ⎟ + ⎜ 0 ⎟ ⎜ − 2 − 2 0 ⎟⎜ 0 ⎟ ⎜ 0 ⎟ ⎝ ⎠⎝ ⎠ ⎝ ⎠ ⎛ − 2 ⎞ ⎛ 1 ⎞ ⎛ − 1⎞ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ = ⎜ 1 ⎟ + ⎜ 0 ⎟ = ⎜ 1 ⎟ = x (3 ) ⎜ 0 ⎟ ⎜ 0⎟ ⎜ 0 ⎟ ⎠ ⎝ ⎠ ⎝ ⎠ ⎝ ∴ x(4) = x(5) = x(6) = ………. VittalRao/IISc.

375 ⎟ ⎜ 3 .125 ⎜ − 0.25 ⎞ ⎟ 0 0. VittalRao/IISc.8x2 + 3x3 = 19 2x1 + x2 + 9x3 = 30 Let us apply Jacobi iteration scheme starting with x (0 ) ⎛ 0⎞ ⎜ ⎟ = ⎜ 0⎟ ⎜ 0⎟ ⎝ ⎠ We have ⎛ 8 0 0⎞ ⎟ ⎜ −1 D = ⎜ 0 − 8 0⎟ ∴ D ⎜ 0 0 9⎟ ⎠ ⎝ ⎛1 ⎜ ⎜8 = ⎜0 ⎜ ⎜ ⎜0 ⎝ − 0.33333 ⎟ ⎝ ⎠ −1 Now the matrix is such that a11 = 8and a12 + a13 = 2 + 2 = 4 ∴ a11 > a12 + a13 a 22 = 8and a 21 + a 23 = 1 + 3 = 4. Hence the Jacobi iteration scheme will converge.22222 ⎝ −1 + 0. The scheme is.Numerical Analysis/Iterative methods for solving linear system of equation Lecture notes Example 2: 8x1 + 2x2 – 2x3 = 8 x1 . Bangalore M2/L2/V1/May 2004/5 .375 ⎟ 0 ⎟ − 0. ∴ a 22 > a 21 + a 23 a33 = 9and a31 + a32 = 2 + 1 = 3 ∴ a 33 > a 31 + a 32 Thus we have strict row diagonally dominant matrix A.25 0 − 1 8 0 ⎞ 0⎟ ⎟ 0⎟ ⎟ 1⎟ ⎟ 9⎠ 0 ⎛ ⎜ J = − D (L + U ) = ⎜ + 0.11111 ⎠ 1 ⎛ ⎞ ⎜ ⎟ ˆ y = D y = ⎜ − 2 .

02387 ⎜ 2 . x 5 − x 4 ⎟ ⎠ ∞ = 0 .375 ⎟ x ( k −1) + y ⎜ − 0. x 4 − x 3 ⎟ ⎠ ⎞ ⎟ ( ) ( ) ⎟. x ( k + 1 ) − x ( k ) ≤ 3 x10 − 5 we get x (1 ) − x (0 ) = 3 .09375 ⎞ ⎜ ⎟ ˆ + y = ⎜ − 0 .Numerical Analysis/Iterative methods for solving linear system of equation Lecture notes ⎛0⎞ ⎜ ⎟ x = ⎜0⎟ ⎜0⎟ ⎝ ⎠ 0 x (k ) = Jx ( k −1) ˆ +y − 0. 3x10-5 = ∈ i .125 0 0.46991 ≥∈ x (4 ) ⎛ 1 .42708 ≥∈ x (3 ) = Jx (2 ) x (3 ) − x ( 2 ) ∞ = 0 .22222 − 0. ⎜ 2 .21788 ≥∈ x (5 ) ⎛ 1 .25 ⎞ ⎛ ⎜ ⎟ ˆ = ⎜ 0.80599 ⎟.42708 ⎞ ⎜ ⎟ ˆ + y = ⎜ − 1.95761 ⎝ = Jx (4 ) ⎞ ⎟ ( ) ( ) ⎟.375 ⎟ ⎜ 3 .90509 ⎟ ⎝ ⎠ x (2 ) − x (1) ∞ = 1 .33333 ⎟ ⎝ ⎠ We continue the iteration until the components of x(k) and x(k+1) differ by at most.01870 ⎝ ∞ = 0 .00380 ⎝ ⎞ ⎟ ⎟.92777 ⎜ ˆ = Jx (3 ) + y = ⎜ − 1 .e .11111 0 ⎟ ⎝ ⎠ x (1 ) 1 ⎛ ⎞ ⎜ ⎟ ˆ = y = ⎜ − 2 .01091 ⎜ ˆ = Jx (5 ) + y = ⎜ − 0 . So we ∞ ∞ continue x (2 ) = Jx (1) ⎛ 2.00000 ⎟ ⎜ 3.99537 ⎜ ˆ + y = ⎜ − 1 .25 0 0. ⎟ ⎠ x ( 6 ) − x (5 ) ∞ = 0 . Bangalore M2/L2/V1/May 2004/6 .99356 ⎜ 3 .06760 ≥∈ x (6 ) ⎛ 2 .03136 ≥∈ VittalRao/IISc.02492 ⎜ 3 .37500 ⎟ ⎝ ⎠ ⎛ 2 .33333 . say.

x3 =3).Numerical Analysis/Iterative methods for solving linear system of equation Lecture notes x (7 ) = Jx (6 ) ⎛ 1 .99686 ⎟ ⎝ ⎠ ⎞ ⎟ ( ) ( ) ⎟.99984 ⎝ ⎛ 2 .00047 ⎝ ∞ = 0 . VittalRao/IISc.99721 ⎟.00018 ⎜ ˆ = Jx (9 ) + y = ⎜ − 0 .00001 ⎝ ⎞ ⎟ ⎟. Bangalore M2/L2/V1/May 2004/7 .00001 = x3 (Exact solution is x1 = 1.99994 ⎝ ⎞ ⎟ ⎟. ⎟ ⎠ x (11 ) − x (10 ) ∞ = 0 . x (7 ) − x (6 ) ⎜ 2 .00025 ⎜ 3 .00008 ≥∈ x (13 ) = Jx (12 ) x (13 ) − x (12 ) ∞ = 0 .00003 ⎟.00024 ≥∈ x (12 ) ⎛ 1 .01157 ≥∈ x (8 ) ⎛ 1 .00405 ≥∈ x (9 ) = Jx (8 ) ∞ = 0 .99997 ⎝ x (10 ) − x (9 ) ∞ = 0 . x 9 − x 8 ⎟ ⎠ ⎞ ⎟ ⎟.99979 ⎜ 2 .00000 ⎜ 3 .00003 =∈ ∴ SOLUTION IS 2 = x1 .00050 ≥∈ x (11 ) = Jx (10 ) ⎛ 1 .99999 ⎜ 2 .00176 ≥∈ x (10 ) ⎛ 2 . -1 = x2.99852 ⎜ ˆ = Jx (7 ) + y = ⎜ − 1 . ⎟ ⎠ ∞ = 0 .00126 ⎜ 2 .00001 ⎜ ˆ + y = ⎜ − 1 . ⎟ ⎠ x (12 ) − x (11 ) ∞ = 0 .99998 ⎞ ⎜ ⎟ ˆ = Jx (11 ) + y = ⎜ − 1 .00001 ⎟ ⎝ ⎠ ⎛ 2 . 3. ⎜ 3 .00027 ⎜ ˆ + y = ⎜ − 1 . x 8 − x 7 ⎟ ⎠ ⎞ ⎟ ( ) ( ) ⎟. x2 = -2.99934 ⎞ ⎜ ⎟ ˆ + y = ⎜ − 0 .99994 ⎜ ˆ + y = ⎜ − 0 .

. …. a ii x (k +1) i = − a i1 x (k )1 − a i 2 x (k ) 2 − ....... xn in the first equation...(II) M2/L3/V1/May 2004/1 .... x(k)1.... in the 3rd equation to calculate x(k+1)3. x(0) initial guess ˆ x (k +1) = Gx (k ) + y VittalRao/IISc.. x(k+1)i-1 to calculate x(k+1)i from aii x (k +1)i = − ai1 x (k +1)1 − ai 2 x (k +1) 2 − . a11 x1 + a12 x2 + . − a1n x (k ) n + y1 Similarly. and so on. in place of x1.... (D + L )x (k +1) = −Ux (k ) + y .. xi-1... in the ith equation we used the values.Numerical Analysis/ Iterative methods for solving linear system of equation Lecture notes Gauss – Seidel Method Once again we consider the system Ax = y ……………. x(k)n. x(k)2..... …. − aii −1 x (k +1)i −1 − aii +1 x (k )i +1 − aii +2 x (k )i + 2 . Thus in the equation use x(k+1)1. x(k+1)2. ….. ….... ….. x(k)i+1. Dx (k +1) = − Lx (k +1) − Ux (k ) + y which can be rewritten as. in place of x2... x(k)n obtained in the k the iteration.. …. xn to calculate x(k+1)i from − aii +1 x (k )i +1 − .. + a1n xn = y1 to calculate x(k+1)1 from a11 x (k +1)1 = − a12 x (k ) 2 − a13 x (k ) 3 . − ain x (k )n + yi . x(k)3. − ain x (k )n + yi In matrix notation we can write this as.... − a ii −1 x (k ) i −1 What Gauss – Seidel suggest is that having obtained x(k+1)1from the first equation use this value for x1 in the second equation to calculate x(k+1)2 from a 22 x (k +1) 2 = − a 21 x (k +1)1 − a 23 x (k )3 − .. and hence x (k +1) = − (D + L ) Ux k + (D + L ) y −1 −1 Thus we get the Gauss – Seidel iteration scheme as. − a 2 n x (k )n + y 2 and use these values of x(k+1)1.. x3. ….. x2... (I) In the Jacobi scheme we used the values of x(k)2... x(k)i-1. xi+1. Bangalore …….

sp ⎛1 ⎜ A = ⎜1 ⎜2 ⎝ 2 1 2 − 2⎞ ⎟ 1 ⎟ 1 ⎟ ⎠ 0 1 2 . We shall now try to apply the Gauss – Seidel scheme for this system. But some matrix norm. and −1 ˆ y = (D + L ) y The scheme converges if and only if G sp < 1. −1 0 ⎛ 1 ⎜ = ⎜−1 1 ⎜ 0 −2 ⎝ VittalRao/IISc. G ≥ 1 does not mean that the scheme will diverge. Bangalore M2/L3/V1/May 2004/2 . Example 3: Let us consider the system x1 + 2x2 – 2x3 = 1 x1 + x2 + x3 =0 2x1 + 2x2 + x3 = 0 considered on page 5. We have. The acid test for convergence is G We shall now consider some examples. G < 1 in some matrix norm. G = -(D+L)-1U is the Gauss – Seidel iteration matrix.Numerical Analysis/ Iterative methods for solving linear system of equation Lecture notes where. (see page 6). Of course. ⎛1⎞ ⎜ ⎟ y = ⎜0⎟ ⎜0⎟ ⎝ ⎠ ⎛1 ⎜ D + L = ⎜1 ⎜2 ⎝ 0⎞ ⎟ 0⎟ . 1⎟ ⎠ ⎛0 ⎜ − u = ⎜0 ⎜0 ⎝ 0⎞ ⎟ 0⎟ 1⎟ ⎠ −2 0 0 2 ⎞ ⎟ − 1⎟ 0 ⎟ ⎠ (D + L ) Thus. the scheme will converge if < 1. and for which the Jacobi scheme gave the exact solution in the 3rd iteration.

Bangalore M2/L3/V1/May 2004/3 . ⎟ 1⎟ ⎠ VittalRao/IISc. Gauss – Seidel iteration matrix is. ⎛0 ⎜ G = ⎜0 ⎜0 ⎝ −2 2 0 2 ⎞ ⎟ − 3⎟ 2 ⎟ ⎠ Since G is triangular we get its eigenvalues immediately. ⎟ 1⎟ ⎠ ⎛1⎞ ⎜ ⎟ y = ⎜0⎟ ⎜0⎟ ⎝ ⎠ ⎛ ⎜ 1 D+L=⎜ 1 ⎜ 1 ⎜− ⎝ 2 0 1 1 − 2 (D + L )−1 ⎛ ⎜ 1 = ⎜−1 ⎜ ⎜ 0 ⎝ 0 1 1 2 ⎞ 0⎟ 0⎟ . ⎛ ⎜ 1 ⎜ A=⎜ 1 ⎜− 1 ⎜ 2 ⎝ 1 2 1 1 − 2 − − 1⎞ ⎟ 2⎟ 1 ⎟ . as its diagonal entries. We have. Thus for this system the Jacobi scheme converges so rapidly giving the exact solution in the third iteration itself whereas the Gauss – Seidel scheme does not converge. Example 4: Consider the system 1 1 x2 − x3 = 1 2 2 x1 + x 2 + x 3 = 0 x1 − − 1 1 x1 − x 2 + x 3 = 0 2 2 Let us apply the Gauss – Seidel scheme to this system. G sp = 2 >1 Hence the Gauss – Seidel scheme for this system will not converge. Therefore. λ2 = 2. λ3 = 2 are the three eigenvalues. Thus λ1 = 0. 1 ⎟ ⎟ ⎠ ⎞ 0⎟ 0⎟ .Numerical Analysis/ Iterative methods for solving linear system of equation Lecture notes G = − (D + L ) −1 0 0 ⎞⎛ 0 − 2 2 ⎞ ⎛ 0 − 2 2 ⎞ ⎛ 1 ⎜ ⎟⎜ ⎟ ⎜ ⎟ u = ⎜ − 1 1 0 ⎟⎜ 0 0 − 1⎟ = ⎜ 0 2 − 3 ⎟ ⎜ 0 − 2 1 ⎟⎜ 0 0 2 ⎟ 0 ⎟ ⎜0 0 ⎝ ⎠⎝ ⎠ ⎝ ⎠ Thus.

.(*) 2⎟ 1⎟ − ⎟ 2⎠ is the Gauss – Seidel matrix for this sytem. ⎛ ⎜ 1 −1 G = − (D + L ) u = ⎜ − 1 ⎜ ⎜ 0 ⎝ 0 1 1 2 ⎞⎛ 0 ⎟⎜ 0 ⎜ 0 ⎟⎜ 0 ⎟ 1 ⎟⎜ 0 ⎠⎜ ⎝ 1 2 0 0 1 ⎞ ⎟ 2 ⎟ − 1⎟ 0 ⎟ ⎟ ⎠ ⎛ ⎜0 ⎜ ∴G = ⎜0 ⎜ ⎜ ⎜0 ⎝ 1 2 1 − 2 0 1 ⎞ ⎟ 2 ⎟ 3 − ⎟...... Bangalore 0 1 1 2 ⎞ 0 ⎟⎛ 1 ⎞ ⎛ 1 ⎞ ⎜ ⎟ ⎜ ⎟ 0 ⎟⎜ 0 ⎟ = ⎜ − 1⎟..Numerical Analysis/ Iterative methods for solving linear system of equation Lecture notes ⎛ ⎜0 ⎜ − u = ⎜0 ⎜0 ⎜ ⎝ 1 2 0 0 1 ⎞ ⎟ 2 ⎟ − 1⎟ ... and ⎟ 1 ⎟⎜ 0 ⎟ ⎜ 0 ⎟ ⎝ ⎠ ⎝ ⎠ ⎠ M2/L3/V1/May 2004/4 . The Gauss – Seidel scheme is ˆ x (k +1 ) = Gx (k ) + y x (0 ) ⎛0⎞ ⎜ ⎟ = ⎜0⎟ ⎜0⎟ ⎝ ⎠ where ⎛ ⎜ 1 −1 ˆ y = (D + L ) y = ⎜ − 1 ⎜ ⎜ 0 ⎝ where G is given (*). VittalRao/IISc. 0 ⎟ ⎟ ⎠ Thus...

Hence the Gauss – Seidel scheme will converge.Numerical Analysis/ Iterative methods for solving linear system of equation Lecture notes Notice that G is upper triangular and hence we readily get the eigenvalues of G as its diagonal entries. λ1 = 0. Thus the eigenvalues of G are. since we have now been assured of convergence. Bangalore M2/L3/V1/May 2004/5 . 2 Let us now carry out a few steps of the Gauss – Seidel iteration. (1 ) x = Gx (0 ) ⎛ 0⎞ ⎛ 1 ⎞ ⎛ 1 ⎞ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ˆ + y = G ⎜ 0 ⎟ + ⎜ − 1⎟ = ⎜ − 1⎟ ⎜ 0⎟ ⎜ 0 ⎟ ⎜ 0 ⎟ ⎝ ⎠ ⎝ ⎠ ⎝ ⎠ ⎛ 1 ⎞ ⎛ 1 ⎞ ⎜ ⎟ ⎜ ⎟ ˆ + y = G ⎜ − 1⎟ + ⎜ − 1⎟ ⎜ 0 ⎟ ⎜ 0 ⎟ ⎝ ⎠ ⎝ ⎠ ⎛ ⎜0 ⎜ = ⎜0 ⎜ ⎜ ⎜0 ⎝ 1 2 1 − 2 0 1 2 3 − 2 1 − 2 ⎞ ⎟ ⎟⎛ 1 ⎞ ⎛ 1 ⎞ ⎟⎜ − 1⎟ + ⎜ − 1⎟ ⎟ ⎟ ⎜ ⎟⎜ ⎜ 0 ⎟ ⎜ 0 ⎟ ⎟⎝ ⎠ ⎠ ⎝ ⎟ ⎠ x (2 ) = Gx (1 ) 1 ⎞ ⎛ ⎜ 1− ⎟ 2 ⎟ ⎜ 1⎞ ⎛ = ⎜ − ⎜1 − ⎟ ⎟ ⎜ ⎝ 2 ⎠⎟ ⎜ ⎟ 0 ⎜ ⎟ ⎝ ⎠ x (3 ) = Gx (2 ) ⎛ 1− 1 + 1 2 ⎜ 2 2 ⎜− 1− 1 + 1 ˆ +y= 2 ⎜ 22 ⎜ 0 ⎝ ( ) ⎞ ⎟ ⎟ ⎟ ⎟ ⎠ If we continue this process we get VittalRao/IISc. (We shall first do some exact calculations). Hence G sp = 1 < 1 . λ3 = -1/2. λ2 = -1/2.

. + ⎟ 2 2 ⎜ ⎟ k −1 ⎞⎟ ⎜ − ⎛ 1 − 1 + 1 − ....Numerical Analysis/ Iterative methods for solving linear system of equation Lecture notes x (k ) ⎛ (− 1)k −1 k −1 ⎞ ⎜ 1 − 1 2 + 1 2 − .5 ⎞ ⎟ ⎜ ˆ = Gx (1) + y = ⎜ − 0..5 ⎟ ⎜ 0 ⎟ ⎠ ⎝ = Gx (2 ) x (3 ) ⎛ 0 ..625 ⎟ ⎜ 0 ⎟ ⎠ ⎝ ⎛ 0 ..6875 ⎟ ⎟ ⎜ 0 ⎠ ⎝ M2/L3/V1/May 2004/6 x (4 ) = Gx (3 ) VittalRao/IISc.625 ⎞ ⎟ ⎜ ˆ + y = ⎜ − 0 .6875 ⎞ ⎟ ⎜ ˆ + y = ⎜ − 0... ⎞ ⎟ 23 2 1 2 − 1 3 ..... x (k ) ⎛ 2 3⎞ ⎟ ⎜ → ⎜ 2 3⎟ ⎜ 0 ⎟ ⎠ ⎝ which is the exact solution... + (− 1) = ⎜ ⎟ 2 ⎜ ⎝ 2 k −1 ⎠ ⎟ 22 ⎜ ⎟ 0 ⎜ ⎟ ⎝ ⎠ Clearly. x (k ) ⎛1 − 1 + 1 2 ⎜ 2 2 ⎜ − 1− 1 + → 2 ⎜ ⎜ ⎝ ( + 1 4 . Of course. ⎟ ⎟ 2 2 ⎟ 0 ⎠ − 1 ) and by summing up the geometric series we get. here ‘we’ knew ‘a priori’ that the sequence is going to sum up neatly for each component and so we did exact calculation. If we had not noticed this we still would have carried out the computations as follows: (1 ) x = Gx (0 ) ⎛ 1 ⎞ ⎜ ⎟ ˆ + y = ⎜ − 1⎟ ⎜ 0 ⎟ ⎝ ⎠ as before x (2 ) ⎛ 0 . Bangalore ..

666016 ⎜ 0 ⎝ ⎞ ⎟ ⎟ . x ⎟ ⎟ ⎜ 0 ⎠ ⎝ ⎞ ⎟ ⎟ ⎟ ⎠ x (14 ) − x (13 ) ⎞ ⎛ 0 .003907 x (9 ) ⎛ 0 . Let us now try to apply the Jacobi scheme for this system. 666748 ⎜ 0 ⎝ ⎛ 0 . 666656 ⎜ 0 ⎝ ⎞ ⎛ 0 . 666687 . ⎜ ⎟ 0 ⎝ ⎠ x ( 6 ) − x (5 ) ∞ = 0 .03125 x (6 ) ⎛ 0 . Or we may improve our accuracy by doing more iterations. x (11 ) ⎛ 0 . 666626 . We have VittalRao/IISc.65625 ⎞ ⎟ ⎜ ˆ + y = ⎜ − 0. 664062 ⎜ 0 ⎝ ⎞ ⎟ ⎟ .007813 x (8 ) ⎛ 0 . ⎟ ⎜ 0 ⎠ ⎝ x (5 ) − x ( 4 ) ∞ = 0 . 664062 ⎜ = ⎜ − 0 . Bangalore M2/L3/V1/May 2004/7 .65625 ⎟ . 666016 ⎜ = ⎜ − 0 . 666504 ⎜ 0 ⎝ ⎞ ⎟ ⎟. 667969 ⎜ 0 ⎝ ⎞ ⎟ ⎟ .671875 ⎟ . 666626 ⎟ ⎜ (12 ) = ⎜ − 0 .671875 ⎞ ⎜ ⎟ ˆ = Gx (5 ) + y = ⎜ − 0. 666687 ⎟ ⎜ (13 ) = ⎜ − 0 . ⎟ ⎠ x (9 ) − x (8 ) ∞ = 0 . ⎟ ⎠ x (8 ) − x (7 ) ∞ = 0 .000031 < 10 −4 and hence we can take x(14) as our solution within error 10-4. to get. ⎟ ⎠ x (10 ) − x (9 ) ∞ = 0 .025625 x (7 ) ⎛ 0 . 666748 ⎜ = ⎜ − 0 .001953 x (10 ) ⎛ 0 . x ⎟ ⎟ ⎜ 0 ⎠ ⎝ ⎞ ⎟ ⎟ ⎟ ⎠ x (14 ) ∞ = 0 .000488 (Since now error is < 10-3 we may stop here and take x(10) as our solution for the system. ⎟ ⎠ x (7 ) − x (6 ) ∞ = 0 . 666656 ⎜ = ⎜ − 0 . 667969 ⎜ = ⎜ − 0 .Numerical Analysis/ Iterative methods for solving linear system of equation Lecture notes x (5 ) = Gx (4 ) ⎛ 0. 666504 ⎜ = ⎜ − 0 .

where in example 4 above we have a system for which the Jacobi scheme does not converge. Thus. λ 2 = λ3 = 2 ∴ J sp = 2 which is >1. in example 3 we had a system for which the Jacobi scheme converged but Gauss – Seidel scheme did not converge. Thus. Let us now consider another example. Bangalore M2/L3/V1/May 2004/8 . these two examples demonstrate that. Thus the Jacobi scheme for this system will not converge. 1 ⎟ ⎟ ⎠ 1 ⎞ ⎟ 2 ⎟ − 1⎟ 0 ⎟ ⎟ ⎠ We have the characteristic polynomial of J as λ λI − J = + 1 −1 2 −1 λ −1 2 2 −1 2 ⎛ +1 = ⎜λ + ⎝ λ 1 ⎞⎛ 2 λ ⎞ ⎟⎜ λ − + 1⎟ 2 ⎠⎝ 2 ⎠ Thus the eigenvalues of J are λ1 = − . Example 5: 2x1 – x2 =y1 -x1 + 2x2 – x3 = y2 -x2 + 2x3 –x4 =y3 -x3 + 2x4 = y4 VittalRao/IISc. in general. and therefore.Numerical Analysis/ Iterative methods for solving linear system of equation Lecture notes ⎛ ⎜ 1 ⎜ A=⎜ 1 ⎜− 1 ⎜ 2 ⎝ ⎛ ⎜ 0 ⎜ J = ⎜−1 ⎜ 1 ⎜ 2 ⎝ 1 2 1 1 − 2 − 1 2 0 1 2 − 1⎞ ⎟ 2⎟ 1 ⎟ . λ3 = − i 15 4 4 2 2 1 15 + = 16 = 2 4 4 4 1 . it is not ‘correct’ to say that one scheme is better than the other. λ2 = ∴ λ1 = 1 2 1 1 + i 15 . but the Gauss – Seidel scheme converges.

and the Jacobi scheme will converge.3090.8090. The Jacobi matrix for this scheme is ⎛ ⎜ ⎜ ⎜ J = ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎝ 0 1 2 0 0 1 2 0 1 2 0 0 1 2 0 1 2 ⎞ 0 ⎟ ⎟ 0 ⎟ ⎟ 1 ⎟ ⎟ 2 ⎟ 0 ⎟ ⎟ ⎠ The characteristic equation is.12α + 1 = 0 ………………(CJ1) ∴ λ is the square root of the roots of (CJ1).8090 . The Gauss – Seidel matrix for the system is found as follows: VittalRao/IISc.Numerical Analysis/ Iterative methods for solving linear system of equation Lecture notes Here 0 ⎞ ⎛ 2 −1 0 ⎜ ⎟ ⎜−1 2 −1 0 ⎟ A=⎜ .12 λ 2 + 1 = 0 ………………(CJ) Set λ 2 = α Therefore 16α2 . ± 0. Thus the eigenvalues of J are ± 0. Hence J sp = 0. Bangalore M2/L3/V1/May 2004/9 . 16 λ 4 . 0 − 1 2 − 1⎟ ⎜ ⎟ ⎜ 0 0 −1 2 ⎟ ⎝ ⎠ is a symmetric tridiagonal matrix.

........ Bangalore M2/L3/V1/May 2004/10 . which becomes in this case 16λ4 − 12λ3 + λ2 = 0.........(C G ) VittalRao/IISc.Numerical Analysis/ Iterative methods for solving linear system of equation Lecture notes 0 0 ⎛ 2 ⎜ −1 2 0 (D + L ) = ⎜ ⎜ 0 −1 2 ⎜ ⎜ 0 0 −1 ⎝ 0⎞ ⎟ 0⎟ 0⎟ ⎟ 2⎟ ⎠ ⎛0 ⎜ ⎜0 −U = ⎜ 0 ⎜ ⎜0 ⎝ 1 0 0 0 0 1 0 0 1 2 1 4 1 8 1 16 0⎞ ⎟ 0⎟ 1⎟ ⎟ 0⎟ ⎠ 0 1 2 1 4 1 8 0 0 1 2 1 4 ⎞ 0 ⎟ ⎟ 0 ⎟ ⎟ ⎟ 0 ⎟ ⎟ 1 ⎟ ⎟ 2 ⎠ (D + L )− 1 ⎛ ⎜ ⎜ ⎜ ⎜ = ⎜ ⎜ ⎜ ⎜ ⎜ ⎝ ⎛ 1 ⎜ ⎜ 2 ⎜ 1 −1 G = − (D + L ) U = ⎜ 4 ⎜ 1 ⎜ ⎜ 8 ⎜ 1 ⎜ ⎝ 16 ⎛ ⎜ ⎜ ⎜ = ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎝ 0 0 0 0 0 1 2 1 4 1 8 0 0 1 2 1 4 1 2 1 4 1 8 1 16 ⎞ 0⎟ ⎟⎛ 0 0 ⎟⎜ 0 ⎟⎜ ⎟⎜ 0 ⎟⎜ 0 ⎜ ⎟ 0 1 ⎟⎝ ⎟ 2⎠ 0 1 2 1 4 1 8 1 0 0 0 0 1 0 0 0⎞ ⎟ 0⎟ 1⎟ ⎟ 0⎟ ⎠ ⎞ 0 ⎟ ⎟ 0 ⎟ ⎟ 1 ⎟ ⎟ 2 ⎟ 1 ⎟ ⎟ 4 ⎠ The characteristic equation of G is λI − G = 0 ....

VittalRao/IISc.12λ + 1 = 0 …………. Thus nonzero eigenvalues of G are squares of eigenvalues of J. We shall not go into any further details of this aspect. 0. G sp < J sp Thus the Gauss – Seidel scheme converges faster than the Jacobi scheme.6545. and 16λ2 .Numerical Analysis/ Iterative methods for solving linear system of equation Lecture notes This can be factored as λ 2 (16λ 2 − 12λ + 1) = 0 Thus the eigenvalues of G are roots of λ2 = 0 .6545 < 1 Thus the Gauss – Seidel scheme also converges. In many class of problems where both schemes converge it is the Gauss – Seidel scheme that converges faster. 0. G sp = 0. ∴ the nonzero eigenvalues of G are. Thus. Observe that G sp = J 2 sp . Notice that roots of (CG1) are same as those of (CJ1).0955. and two eigenvalues of G are roots of (CG1). Bangalore M2/L3/V1/May 2004/11 .(CG1) Thus one of the eigenvalues of G is 0 (repeated twice).

VittalRao/IISc. i.ωUx + ωy i.[(ω – 1)D + ωU]x + ωy i. initial guess ……………(III) M ω = −(D + ωL) and −1 [(ω −1)D + ωu] −1 ˆ y = (D + ω L ) ω y Mω is the SOR matrix for the system.e.ωUx +ωy i.e.e. We thus get the SOR scheme as ˆ x ( k +1 ) = M ω x ( k ) + y x (0 ) = θ . (D + ωL)x + (ω-1) Dx = . where. ωAx = ωy ………………(II) Now A = (D + L + U ) We write (II) as (ωD + ωL + ωU)x = ωy.e. (D + ωL)x = . x = . (ωD + ωL) = .(I) We take a parameter ω ≠ 0 and multiply both sides of (I) to get an equivalent system..(D + ωL)-1 [(ω-1)D + ωU]x + ω [D + ωL]-1y.Numerical Analysis ( Iterative methods for solving linear systems of equations) Lecture notes SUCCESSIVE OVERRELAXATION METHOD (SOR METHOD) We shall now consider SOR method for the system Ax = y ………. Bangalore M2/L4/V1/May 2004/1 .

(D +ω L)-1 [(ω-1) D +ωU] ⎛ 1−ω ⎜ ⎜ ⎜ 1ω − 1ω2 ⎜ 2 =⎜ 2 1 2 1 3 ⎜ ω − ω 4 ⎜4 1 3 1 4 ⎜ ω − ω ⎜ 8 ⎝8 4 1 ω 2 1 1−ω + ω 2 4 1 1 2 1 3 ω− ω + ω 2 2 8 1 2 1 3 1 4 ω − ω + ω 4 4 16 0 1 ω 2 1 1−ω + ω 2 4 1 1 2 1 3 ω− ω + ω 2 2 8 ⎞ ⎟ ⎟ ⎟ 0 ⎟ ⎟ 1 ω ⎟ 2 ⎟ 1 2⎟ 1−ω + ω ⎟ 4 ⎠ 0 and the characteristic equation is 16 (ω − 1 + λ ) − 12ω 2 λ (ω − 1 + λ ) + ω 4 λ 2 = 0. so λ = 0 is not a root. Now when is λ = 0 a root? If λ = 0 we get from (CMω).. How does one choose ω? It can be shown that convergence cannot be achieved if ω ≥ 2. So we can divide the above equation (CMω) by ω4λ2 to get ⎡ (ω − 1 + λ )2 ⎤ (ω − 1 + λ )2 + 1 = 0 16⎢ ⎥ − 12 ω 2λ ⎦ ω 2λ ⎣ 2 Setting VittalRao/IISc... ‘Usually’ ω is chosen between 1 and 2. 16(ω-1)4 = 0 ⇒ ω = 1.(C Mω ) 2 Thus the eigenvalues of Mω are roots of the above equation. Mω = . For that system... The strategy is to choose ω such that M ω sp is < 1... Of course..e. . one must analyse M ω Let us consider an example of this aspect.Numerical Analysis ( Iterative methods for solving linear systems of equations) Lecture notes Notice that if ω = 1 we get the Gauss – Seidel scheme. Example 6: Consider the system given in example 5.. Bangalore M2/L4/V1/May 2004/2 .... So let us take ω ≠ 1. (We assume ω > 0). i. This is easier said than achieved. and is al small as possible so that the scheme converges as rapidly as possible.. in the Gauss – Seidel case... sp as a function of ω and find that value ω0 of ω for which this is minimum and work with this value of ω0.

± 0 .1509).1312 ± i (0.(*) Thus. 3090 . λ = 0. With ω = 1. -0.2 is faster than Jacobi and Gauss – Seidel scheme. Thus for this system.0955 or 0. the spectral radius M ω 0 is smaller than M ω for any other ω. 0.6545 ……….8090 and G sp = 0. Note: We had M 1 .4545 sp which is less that J = 0. We can show that in this example when ω = ω0 = 1.2596 Thus the SOR scheme with ω = 1. as the eigenvalues.2 Thus M ω sp 1 when ω = 1.2 is 0.2596. We have M 1. The modulus of the complex roots is 0.2596 = 0. Bangalore .4545.4545 M2/L4/V1/May 2004/3 VittalRao/IISc.2 sp = 0. SOR with ω = 1.6545 computed in Examples.Numerical Analysis ( Iterative methods for solving linear systems of equations) Lecture notes (ω − 1+ λ )2 µ = 2 2 ω λ we get 16 µ 4 − 12 µ 2 + 1 = 0 which is the same as (CJ).2 and using the two values of µ2 in (*) we get. 8090 . Now (ω − 1 + λ )2 = µ 2 = 2 ω λ 0. Thus µ = ± 0 .2596 will be the method which converges fastest.0880. this can be simplified as 1 ⎧1 ⎫2 λ = µ 2ω 2 − (ω − 1) ± µω ⎨ µ 2ω 2 − (ω − 1)⎬ 2 ⎩4 ⎭ as the eigenvalues of Mω.

Numerical Analysis ( Iterative methods for solving linear systems of equations) Lecture notes And M 1.2596 Thus a small change in the value of ω brings about a significant change in the spectral ω sp . Bangalore M2/L4/V1/May 2004/4 .2596 radius M sp = 0. VittalRao/IISc.

We have ⎜ − 16 8 7 ⎟⎜ 0 ⎟ ⎜ 0 ⎟ ⎜ 0⎟ ⎜ 0⎟ ⎝ ⎠⎝ ⎠ ⎝ ⎠ ⎝ ⎠ ⎝ ⎠ = (− 1)x = αx ∴α = −1 is such that there exists a nonzero vector x such that Ax = αx. } Then : (i) ωα is nonempty. α = 3 is also an eigenvalue of A.Bangalore = θ n ∈ ωα M3/L2/V1/May2004/1 . Let. Thus α is an ⎛ 1⎞ ⎜ ⎟ x = ⎜ 2 ⎟ we find that Similarly. Let α be an eigenvalue of A. A scalar α is called an eigenvalue of A if there exists a nonzero nx1 vector x such that Ax = αx Example: Let ⎛ − 9 ⎜ A = ⎜ −8 ⎜ − 16 ⎝ 4 3 8 4⎞ ⎟ 4⎟ 7⎟ ⎠ Let α = − 1 Consider ⎛ − 9 4 4 ⎞⎛ 1 ⎞ ⎛ − 1 ⎞ ⎛ 1⎞ ⎛1⎞ ⎜ ⎟⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ Ax = ⎜ − 8 3 4 ⎟⎜ 2 ⎟ = ⎜ − 2 ⎟ = −1⎜ 2 ⎟ x = ⎜ 2 ⎟ . ω α = {x ∈ C Let α be an eigenvalue of A. ⎜ 0⎟ ⎝ ⎠ Ax = αx. Then any nonzero x such that Ax = αx is called an eigenvector of A.Numerical analysis /Eigenvalues and Eigenvectors Lecture notes EIGENVALUES AND EIGENVECTORS Let A be an nxn matrix. if we take α = 3. Thus. n : Ax = α x . eigenvalue of A. Q x Vittal rao/IISc.

we want to find all solutions of the homogeneous system Mx = θ . the eigensubspace corresponding to –1? We want to find all x such that Ax = -x i. i. We have sum α = -1 is an eigenvalue.Bangalore M3/L2/V1/May2004/2 . ⎛− 8 R 2 − R1 ⎜ M → ⎜ 0 R 3 − 2 R1 ⎜ ⎝ 0 Thus.e.. Ay = αy ⇒ A( x + y) = α ( x + y) For any constant κ .e.. n Example: Consider the A in the example on page 1. where ⎛ −8 ⎜ M = A+ I = ⎜ −8 ⎜ − 16 ⎝ 4 4 8 4⎞ ⎟ 4⎟ 7⎟ ⎠ We now can use our row reduction to find the general solution of the system. y ∈ ωα ⇒ Ax = αx. This is called the characteristic subspace or the eigensubspace corresponding to the eigenvalue α. (A+I)x = θ. What is ω-1. x1 = 1 1 x 2 + x3 2 2 4 0 0 4⎞ 1 − R1 ⎟ 8 0⎟ ⎯⎯ → ⎯ 0⎟ ⎠ ⎛ ⎜1 ⎜ ⎜0 ⎜0 ⎜ ⎝ − 0 0 1 2 − ⎞ ⎟ ⎟ ⎟ 0 ⎟ ⎟ ⎠ 1 2 0 Thus the general solution of (A+I) x = θ is Vittal rao/IISc. κ Ax = κα x = α ( κ x ) ⇒ A (κ x ) = α (κ x ) ⇒ x + y ∈ ωα (iii) ⇒ κx ∈ ωα Thus ωα is a subspace of C .Numerical analysis /Eigenvalues and Eigenvectors Lecture notes (ii) x.

Numerical analysis /Eigenvalues and Eigenvectors

Lecture notes

1 ⎛1 ⎞ x3 ⎟ ⎜ x2 + ⎛1⎞ ⎛1⎞ 2 ⎜ ⎟ 1 ⎜ ⎟ ⎜2 ⎟ 1 = x2 x2 ⎜ 2⎟ + x3 ⎜ 0 ⎟ ⎜ ⎟ 2 ⎜0⎟ 2 ⎜2⎟ ⎜ ⎟ x3 ⎝ ⎠ ⎝ ⎠ ⎜ ⎟ ⎝ ⎠
⎛1⎞ ⎛1⎞ ⎜ ⎟ ⎜ ⎟ = A1 ⎜ 2 ⎟ + A 2 ⎜ 0 ⎟ ⎜2⎟ ⎜0⎟ ⎝ ⎠ ⎝ ⎠ where A1 and A2 are arbitrary constants. Thus ω-1 consists of all vectors of the form

⎛1⎞ ⎛1⎞ ⎜ ⎟ ⎜ ⎟ A1 ⎜ 2 ⎟ + A2 ⎜ 0 ⎟ . ⎜ 2⎟ ⎜0⎟ ⎝ ⎠ ⎝ ⎠
⎛1⎞ ⎛1⎞ ⎜ ⎟ ⎜ ⎟ Note: The vectors ⎜ 2 ⎟ , ⎜ 0 ⎟ form a basis for ω-1 and therefore ⎜0⎟ ⎜2⎟ ⎝ ⎠ ⎝ ⎠ dim ω-1 = 2. What is ω3 the eigensubspace corresponding to the eigenvalue 3 for the above matrix We need to find all solutions of Ax = 3x, i.e., Ax – 3x = θ i.e., Nx = θ Where

⎛ − 12 ⎜ N = A − 3I = ⎜ − 8 ⎜ − 16 ⎝
Again we use row reduction

4 0 8

4⎞ ⎟ 4⎟ 4⎟ ⎠

Vittal rao/IISc.Bangalore

M3/L2/V1/May2004/3

Numerical analysis /Eigenvalues and Eigenvectors

Lecture notes

⎛ ⎜ 2 R 2 − R 1 ⎜ − 12 3 N → ⎜ 0 ⎜ 4 R 3 − R1 ⎜ 3 ⎜ 0 ⎝

4 8 − 3 8 3

⎞ ⎟ ⎛ − 12 4 ⎟ 4 ⎟ R3 + R4 ⎜ ⎯ ⎯⎯ → ⎜ 0 3 ⎟ ⎜ ⎜ 0 4⎟ ⎝ − ⎟ 3⎠

4 8 − 3 0

4⎞ 4⎟ ⎟ 3⎟ 0⎟ ⎠

∴ 12 x1 = 4 x 2 + 4 x 3
8 4 x2 = x3 3 3
∴ x3 = 2 x2

∴12 x1 = 4 x 2 + 8 x 2 = 12 x 2 ∴ x 2 = x1

∴ x 2 = x1 ; x 3 = 2 x 2 = 2 x1
∴ The general solution is

⎛ x1 ⎜ ⎜ x1 ⎜2x 1 ⎝

⎛1⎞ ⎞ ⎜ ⎟ ⎟ ⎟ = x1 ⎜ 1 ⎟ ⎜2⎟ ⎟ ⎝ ⎠ ⎠

Thus ω3 consists of all vectors of the form

⎛1⎞ ⎜ ⎟ κ ⎜1⎟ ⎜2⎟ ⎝ ⎠
Where κ is an arbitrary constant. ⎛ 1⎞ ⎜ ⎟ Note: The vector ⎜ 1 ⎟ forms a basis for ω3 and hence ⎜ 2⎟ ⎝ ⎠ dim. ω3 = 1. Now When can a scalar α be an eigenvalue of a matrix A? We shall now investigate this question. Suppose α is an eigenvalue of A.
Vittal rao/IISc.Bangalore M3/L2/V1/May2004/4

Numerical analysis /Eigenvalues and Eigenvectors

Lecture notes

This

There is a nonzero vector x such that Ax = αx.

⇒ ( A − α I ) x = θ ; andx ≠ θ .
The system

(A − αI )x = θ

has at least one nonzero solution.

nullity (A - αI) ≥ 1 rank (A - αI) < n

(A - αI) is singular
det. (A - αI) = 0 Thus, α is an eigenvalue of A det. (A - αI) = 0. Conversely, α is a scalar such that det. (A - αI) = 0. This

(A - αI) is singular
rank (A - αI) < n nullity (A - αI) ≥ 1 The system

(A − αI )x = θ

has nonzero solution.

α is an eigenvalue of A. Thus, α is a scalar such that det. (A - αI) = 0 Combining the two we get, α is an eigenvalue of A det. (A - αI) = 0 det. (αI - A) = 0 Now let C(λ) = det. (λI - A) Thus we see that, “The eigenvalues of a matrix A are precisely the roots of C(λ) = det. (λI - A)”. α is an eigenvalue.

Vittal rao/IISc.Bangalore

M3/L2/V1/May2004/5

. We say C(λ) is a ‘monic’ polynomial of degree n. and this is called the TRACE of A. + ann . . Product of the roots of C(λ) = Product of the eigenvalues of A = det.Bangalore M3/L2/V1/May2004/6 . The equation C(λ) = 0 is called the characteristic equation.( λ I − A ) = λ +9 8 16 −4 λ −3 −8 −4 −4 λ −7 λ +1 − 4 −4 ⎯C + C +⎯→ λ + 1 λ − 3 − 4 ⎯ ⎯C λ +1 − 8 λ − 7 1 2 3 Vittal rao/IISc. A. Sum of the roots of C(λ) = Sum of the eigenvalues of A = a11 + .Numerical analysis /Eigenvalues and Eigenvectors Lecture notes λ − a 11 C (λ ) = − a 21 K K − a 12 λ − a 22 K K K K K K K − a 1n − a 2n K K − a n1 − a n2 λ − a nn n = λ n − (a11 + K + a nn )λ n −1 + K + (− 1) det . . This is called CHARACTERISTIC POLYNOMIAL of A. In our example in page 1 we have ⎛ −9 ⎜ A = ⎜ −8 ⎜ − 16 ⎝ 4 3 8 4⎞ ⎟ 4⎟ 7⎟ ⎠ ∴ C (λ ) = det . A Thus . . C(λ) is a polynomial of degree n. . Note the ‘leading’ coefficient of C(λ) is 1. The roots of the characteristic polynomial are the eigenvalues of A.

. . . λ2. . . . .Bangalore M3/L2/V1/May2004/7 . . . . . . . . ak. When we factorize this as. . . Thus.. λk are the distinct roots. these distinct roots are the distinct eigenvalues of A and the multiplicities of these roots are called the algebraic multiplicities of these eigenvalues of A. . a2. A. . λk and the algebraic multiplicities of these eigenvalues are respectively. . the distinct eigenvalues are λ1. . Sum of eigenvalues = (-1) + (-1) + 3 = 1 Trace A = Sum of diagonal entries. . . . Product of eigenvalues = (-1) (-1) (3) = 3 = det.Numerical analysis /Eigenvalues and Eigenvectors Lecture notes 1 = (λ + 1 )1 1 R 2 − R1 R 3 − R1 − 4 λ − 3 − 8 − 4 − 4 λ − 7 1 = ( λ + 1) 0 0 −4 −4 0 → λ +1 −4 λ −3 = (λ + 1 )(λ + 1 )(λ − 3 ) = (λ + 1 ) (λ − 3 ) 2 Thus the characteristic polynomial is C ( λ ) = (λ + 1 ) (λ − 3 ) 2 The eigenvalues are –1 (repeated twice) and 3. . .(1) and observe that this is a monic polynomial of degree n. C(λ) = (λ − λ1 ) 1 (λ − λ2 ) 2 KK(λ − λk ) k . . . For the matrix in Example in page 1 we have found the characteristic polynomial on page 6 as C ( λ ) = (λ + 1 ) (λ − 3 ) 2 Vittal rao/IISc. . a1. . . C (λ ) = λI − A . . . . we define the CHARACTERISTIC POLYNOMIAL as. . Thus when C(λ) is as in (2). . if A is an nxn matrix. .. . λ2.(2) a a a Where λ1.

. . Notice that in this example a1 = g1 = 2 . s . . . . . .. .2. . . If λi is an eigenvalues of A the characteristic subspace corresponding to λi is defined as ωλ i and is ω λ = {x : Ax = λ i x} i The dimension of ωλ i is called the GEOMETRIC MULTIPLICITY of the eigenvalue λi and is denoted by gi.e. a2 = 1.Numerical analysis /Eigenvalues and Eigenvectors Lecture notes Thus the distinct eigenvalues of this matrix are λ1 = -1 .. 1 ≤ i ≤ k i.. . Again for the matrix on page 1. g2 = 1. and λ2 = 3 and their algebraic multiplicities are respectively a1 = 2 . ω3 = 1. Further notice that pi (α1 ) = K= pi (αi−1 ) = pi (αi+1 ) = K= pi (αs ) = 0 pi (αi ) = 1 Vittal rao/IISc. . and a2 = g2 = 1. . . We shall study the properties of the eigenvalues and eigenvectors of a matrix. for any eigenvalue of A. .(3) p i (λ ) = (λ − α 1 )(λ − α 2 ) K (λ − α i −1 )(λ − α i +1 ) K (λ − α s ) (α i − α 1 )(α i − α 2 ) K (α i − α i −1 )(α i − α i +1 ) K (α i − α s ) = ∏ (λ − α j ) (α i − α j ) for i = 1. and dim. dim ω1 = 2 . (4) 1≤ j ≤ s j≠i Then pi(λ) are all polynomials of degree s-1. 1 ≤ gi ≤ ai . . . . . .. αi ≠ αj if i ≠ j ). It can be shown that for any matrix A having C(λ) as in (2). . . 1 ≤ geometric multiplicity ≤ algebraic multiplicity. .. . In general this may not be so. . Consider. . . we have found on pages 3 and 4 respectively that. . We shall start with a preliminary remark on Lagrange Interpolation polynomials : Let α1. α2. (i. . .e. .Bangalore M3/L2/V1/May2004/8 . . αs be a distinct scalars. . Thus the geometric multiplicities of the eigenvalues λ1 = -1 and λ2 = 3 are respectively g1 = 2 .

. Let λ1. . φk be eigenvectors corresponding to these eigenvalues respectively . Let φ1. . . . . . . . . φi are nonzero vectors such that Aφi = λiφi From (6) it follows that . .(7) p ( λ ) = a 0 + a1λ + K K + a s λ s be any polynomial. . . .Numerical analysis /Eigenvalues and Eigenvectors Lecture notes Thus pi(λ)are all polynomials of degree s-1 such that. (5) We call these the Lagrange Interpolation polynomials. i. . .(6) A 2φ i = A( A φ i ) = A (λ iφ i ) = λ i A φ i = λ 2 iφ i A 3φ i = A ( A 2φ i ) = A ( λ 2 iφ i ) = λ2 i A φ i = λ3 iφ i and by induction we get A mφ i = λ m iφ i (We interpret A0 as I). . . . . . φ2. . . If as follows: p(λ) is any polynomial of degree ≤ s-1 then it can be written as a linear combination of p1(λ).Bangalore M3/L2/V1/May2004/9 . We define p(A) as the matrix. . . . . . . . for any integer m ≥ 0 . Now let. pi (α j ) = δij if j ≠ i . . . . . we now proceed to study the properties of the eigenvalues and eigenvectors of an nxn matrix A. ps(λ) p (λ ) = p (α 1 ) p1 (λ ) + p (α 2 ) p 2 (λ ) + L + p (α s ) p s (λ ) . .e.. . . . p ( A ) = a 0 I + a1 A + K K + a s A s Vittal rao/IISc. . λk be the distinct eigenvalues of A. . . . . . .. (6) = ∑ p (α ) p (λ ) i =1 i i s With this preliminary.p2(λ).

. .. . k …………(9) 1≤ j ≤ k and p i (λ j ) = δ ij if j ≠ i …………(10) Now.. φ2. . + Ckφk = θ n For 1≤ i ≤ k.. λ2. Thus. . φk corresponding to the distinct eigenvalues λ1. i = 1. . . .Numerical analysis /Eigenvalues and Eigenvectors Lecture notes Now p ( A ) φ i = ( a 0 I + a 1 A + K K + a s A s )φ i = a 0φ i + a 1 A φ i + K K + a s A sφ i = a 0φ i + a 1 λ iφ i + K K + a s λ s iφ i = ( a 0 + a 1 λ i + K K + a s λ s i )φ i = p ( λ i )φ i . . Now are the eigenvectors.Bangalore M3/L2/V1/May2004/10 . αi = λi then we get the Lagrange Interpolation polynomials as p i (λ ) = ∏ j≠i (λ − λ j ) (λ i − λ j ) .. λk of A. C1φ1 + C2φ2 + .. . Vittal rao/IISc.2. we must show that C1φ1 + C2φ2 + K+ CKφK = θ n ⇒ C1 = C2 = K = CK = 0 . . linearly independent ? In order to establish this linear independence. φ1. . (8) Now if in (4) & (5) we take s = k . If λi is any eigenvalue of A and φi is an eigenvector corresponding to λi then for any polynomial p(λ) we have by (6) p ( A )φ i = p ( λ i )φ i .….

(by property I on page 10) ⇒ Ciφi = θ .1 ≤ i ≤ k ..Bangalore M3/L2/V1/May2004/11 .. + C k pi ( A)φ k = θ n ⇒ C1 pi (λ1 )φ1 + C 2 pi (λ2 )φ 2 + . = C n = 0 proving (8). + C k φ k = θ n ⇒ C1 = C 2 = ... + C k φ k ] = pi ( A)θ n = θ n ⇒ C1 pi ( A)φ1 + C 2 pi ( A)φ 2 + ... + C k pi (λk )φ k = θ n . ⇒ Ci = 0.. Thus we have Eigen vectors corresponding to distinct eigenvalues of A are linearly independent. Vittal rao/IISc.....1 ≤ i ≤ k Thus by (10) since φi are nonzero vectors C1φ1 + C 2φ 2 + ...Numerical analysis /Eigenvalues and Eigenvectors Lecture notes pi ( A)[C1φ1 + C 2φ 2 + ...

let CA(λ) and CB (λ) be the characteristic polynomials of A and B respectively. Then there exists a nonsingular matrix P such that A = P-1 B P Now. (5) Let A and B be similar matrices.. C A (λ ) = λI − A = λI − P −1 BP = λP −1 P − P −1 BP = P −1 (λI − B )P = P −1 λ I − B P Vittal rao/IISc. P-1 A P = B We then write. where Q = P-1 is nonsingular ∃ nonsingular Q show that Q-1 B Q = A B∼A Thus A∼ B B∼A A ∼ C. (2) and (3) above show that similarity is an equivalence relation on the set of all nxn matrices. P-1 A P = B A = Q-1 B P. A∼ B Properties of Similar Matrices (1) Since I-1 A I = A (2) A ∼ B A = P B P-1 it follows that A ∼ A ∃ P. We have.Numerical analysis /Eigenvalues and Eigenvectors Lecture notes SIMILAR MATRICES We shall now introduce the idea of similar matrices and study the properties of similar matrices.Bangalore M3/L3/V1/May2004/1 . B ∼ C (4) Properties (1). DEFINITION An nxn matrix A is said to be similar to a nxn matrix B if there exists a nonsingular nxn matrix P such that. (3) Similarly. nonsingular show that. we can show that A ∼ B.

i.. P-1 Bk P = On p ( A) = a 0 I + a1 A + .. + a k A k = a 0 I + a1 P −1 BP + a 2 P −1 B 2 P + .. P −1 BP 14444244443 ktimes ( )( ) ( ) = P-1 Bk P Therefore. A = P-1 B P Now for any positive integer k.. Now let p(λ) = a0 + a1λ + ….. p (A) = On p (B) = On ”....Bangalore M3/L3/V1/May2004/2 .e.. + a k P −1 B k P = P −1 a 0 I + a1 B + a 2 B 2 + .Numerical analysis /Eigenvalues and Eigenvectors Lecture notes = λI − B sin ce P −1 P = 1 = CB ( λ ) Thus “ SIMILAR POLYNOMIALS ”... Vittal rao/IISc. (7) Let A be any matrix.... + akλ be any polynomial. + a k B k P [ ] = P −1 p(B )P Thus p( A) = On ⇔ P −1 p(B )P = On ⇔ p (B ) = O n Thus “ IF A and B ARE SIMILAR MATRICES THEN FOR ANY POLYNOMIAL p (λ).. we have MATRICES HAVE THE SAME CHARACTERISTIC (6) Let A and B be similar matrices. By A(A) we denote the set of all polynomials p(λ) such that p(A) = On.. Then there exists a nonsingular matrix P such that A k = P −1 BP P −1 BP .. Ak = On Bk = On “ Thus if A and B are similar matrices then Ak = On Then Bk = On ”.

The next simple matrix we know is the identity matrix In.... P1n ⎞ ⎟ .. We shall discuss more about annihilating polynomials later.(1) there is a ⎛ P11 ⎜ ⎜P LetP = ⎜ 21 M ⎜ ⎜P ⎝ n1 P12 P22 M Pn 2 .. “ IF A AND B ARE SIMILAR MATRICES THEN A(A) = A (B) ”....... Pni . λ1 D= λ2 λn (λI not necessarily distinct).. Then there exists a nonsingular matrix P such that P-1 A P = D AP = PD ……….. P2 n ⎟ M M ⎟ ⎟ .Bangalore M3/L3/V1/May2004/3 .. P2i M M . Then set A (A) is called the set “ ANNIHILATING POLYNOMIALS OF A ”.... Pnn ⎟ ⎠ Vittal rao/IISc.... Thus “THE ONLY MATRIX SIMILAR TO In IS ITSELF ”. P1i . and A is similar to a diagonal matrix. Thus similar matrices have the same set of annihilating polynomials.Numerical analysis /Eigenvalues and Eigenvectors Lecture notes A (A) = {p(λ) : p(A) = On} Now from (6) it follows that. ∴ “ THE ONLY MATRIX SIMILAR TO On IS ITSELF ”.... We now investigate the following question? Given an nxn matrix A when is it similar to a “simple matrix”? What are simple matrices? The simplest matrix we know is the zero matrix On.. Now A ∼ On . Now A ∼ In nonsingular P such that A = P-1 In P A = In. There is a nonsingular matrix P such that A = P-1 On P = On. So we now ask the question “ Which type of nxn matrices are similar to diagonal matrices”? Suppose now A is an nxn matrix.. The next class of simple matrices are the DIAGONAL MATRICES.

+ a1n Pni ⎞ ⎟ ⎜ ⎜ a 21 P1i + a 22 P2 i + . i = 1... Now the ith column of P D is ⎛ P1i λ i ⎞ ⎛ P1i ⎞ ⎜ ⎟ ⎜ ⎟ ⎜ P2 i λ i ⎟ ⎜P ⎟ = λ i ⎜ 2 i ⎟ = λ i Pi ⎜ M ⎟ M ⎜ ⎟ ⎜ ⎟ ⎜P λ ⎟ ⎜P ⎟ ⎝ ni i ⎠ ⎝ ni ⎠ Thus the ith column of P D.... 2.S... n ……………. is A Pi. Thus the ith column of A P..H. ..Numerical analysis /Eigenvalues and Eigenvectors Lecture notes ⎛ a11 ⎜ ⎜a A = ⎜ 21 .(2) Note that since P is nonsingular no column of P can be zero vector.H........ ...... .. an2 .... “IF A IS SIMILAR TO A DIAGONAL MATRIX D THEN THE DIAGONAL ENTRIES OF D MUST BE THE EIGENVALUES OF A AND IF P-1AP = D THEN THE ith COLUMN VECTOR MUST BE AN EIGENVECTOR CORRESPON DING TO THE EIGENVALUE WHICH IS THE ith DIAGONAL ENTRY OF D”.S.... = R... the R..... .. + a P ⎟ n 2 2i nn ni ⎠ ⎝ n1 1i which is equal to APi. ⎟ ⎟ a nn ⎟ ⎠ ⎛ P1 i ⎜ ⎜P LetP i = ⎜ 2 i M ⎜ ⎜P ⎝ ni ⎞ ⎟ ⎟ th ⎟ denote the i column of P.. + a 2 n Pni ⎟ ⎜ ....... Note: Vittal rao/IISc.Bangalore M3/L3/V1/May2004/4 . Thus none of the column vectors Pi are zero..... Since L.... ⎟ ⎟ ⎜ ⎜ a P + a P + .. of (1).H... . of (1)....... Thus we conclude that.S. a1 n ⎞ ⎟ a 2n ⎟ ...... .S.. ⎟ ⎟ ⎠ Now the ith column of AP is ⎛ a11 P1i + a12 P2 i + ..... is λI Pi.....H.. the L..... by (1) we have APi = λi Pi .. . ….. ⎜ ⎜a ⎝ n1 a12 a 22 ...

ALGEBRAIC MULTIPLICITY IS EQUAL TO ITS GEOMETRIC MULTPLICITY”. 1≤ i ≤ k”. Conversely. it is now obvious that if A has n linearly independent eigenvectors then A is similar to a diagonal matrix D and if P is the matrix whose ith column is the eigenvector. then A is similar to a diagonal matrix ai = gi (=dimωi) ... let 1 2 k ( ) ( )a ( )a ( )a ω i = {x : Ax = λi x} be the eigensubspace corresponding to λ i. …. When does then a matrix have n linearly independent eigenvectors’.. we have. Therefore. if C λ = λ − λ1 the distinct eigenvalues of A.. Thus A IS SIMILAR TO A DIAGONAL MATRIX FOR EVERY EIGENVALUE OF A. then ai is called the algebraic multiplicity of the eigenvalue λ i. then D is P-1 A P and ith diagonal of D is the eigenvalue corresponding to the ith eigenvector.λ 1)a1…… ( λ . It can be shown that a matrix A has n linearly independent eigenvectors the algebraic multiplicity of each eigenvalue of A is equal to its geometric multiplicity. “ If A is an nxn matrix with C( λ ) = ( λ . Further. λ 2.λ k) ak where λ 1. Example: Let us now consider ⎛ − 9 4 4⎞ ⎟ ⎜ A = ⎜ − 8 3 4⎟ ⎜ − 16 8 7 ⎟ ⎠ ⎝ Vittal rao/IISc. Then gi = dim ωi is called the geometric multiplicity of λ i. λ k are RECALL. …. λ k are the district eigenvalues of A. taking these as the columns of P we get P-1 A P we get D where the ith diagonal entry of D is the eigenvalue corresponding to the ith eigenvector. λ − λ2 . λ − λk where λ 1..Bangalore M3/L3/V1/May2004/5 ....Numerical analysis /Eigenvalues and Eigenvectors Lecture notes The n columns of P must be linearly independent since P is nonsingular and thus these n columns give us n linearly independent eigenvectors of A Thus the above result can be restated as follows: A is similar to a diagonal matrix D and P-1 A P = D A has n linearly independent eigenvectors.

ω1 = eigensubspace corresponding to λ = -1 ⎧ ⎛1⎞ ⎛ 1 ⎞⎫ ⎜ ⎟ ⎜ ⎟⎪ ⎪ = ⎨ x : x = A1 ⎜ 2 ⎟ + A2 ⎜ 0 ⎟ ⎬ ⎜ 0⎟ ⎜ 2 ⎟⎪ ⎪ ⎝ ⎠ ⎝ ⎠⎭ ⎩ ω2 = eigensubspace corresponding to λ = 3 ⎧ ⎛ 1 ⎞⎫ ⎜ ⎟⎪ ⎪ = ⎨ x : x = k ⎜ 1 ⎟⎬ ⎜ 2 ⎟⎪ ⎪ ⎝ ⎠⎭ ⎩ Thus dim ω1 = 2 dim ω2 = 2 Thus a1 = 2 = g1 a 2 = 1 = g2 to a diagonal matrix. ∴ g1 = 2 ∴ g2 = 1 and ence A must be similar. ⎜ 2 ⎟ and ⎜ 0 ⎟ . How do we get P such that P-1AP is a diagonal matrix? Recall the columns of P must be linearly independent eigenvectors.Numerical analysis /Eigenvalues and Eigenvectors Lecture notes On page 6. namely.3) Thus λ 1 = -1 . From ω1 we get two linearly ⎛1⎞ ⎛1⎞ ⎛1⎞ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ eigenvectors. ⎛ 1 1 1⎞ ⎜ ⎟ P = ⎜ 2 0 1⎟ ⎜ 0 2 2⎟ ⎝ ⎠ Vittal rao/IISc. and from ω2 we get third as ⎜ 1 ⎟ . we found the characteristic polynomial of A as C( λ ) = ( λ +1)2 ( λ .Bangalore M3/L3/V1/May2004/6 . a2 = 1 On pages 3 and 4 we found. ⎜0⎟ ⎜ 2⎟ ⎜ 2⎟ ⎝ ⎠ ⎝ ⎠ ⎝ ⎠ Thus if we take these as columns and write. a1 = 2 λ 2 = 3 .

we define the INNER PRODUCT OF M M ⎜ ⎟ ⎜ ⎟ ⎜x ⎟ ⎜y ⎟ ⎝ n⎠ ⎝ n⎠ x with y (which is denoted by (x.y)) as. (x .. Conversely. y = ⎜ ⎟ are any two vectors in Cn. and in which case the P-1 is easy to compute. P-1 AP = D linearly independent eigenvectors namely the n columns of AP. ⎛ x1 ⎞ ⎛ y1 ⎞ ⎜ ⎟ ⎜ ⎟ ⎜ x2 ⎟ ⎜ y2 ⎟ If x = ⎜ ⎟ .e.Numerical analysis /Eigenvalues and Eigenvectors Lecture notes Then P −1 ⎛ 1 ⎜ =⎜ 2 ⎜ ⎜− 2 ⎝ 0 −1 1 0 −1 1 − 1 ⎞ 2⎟ 1 ⎟ − 2 ⎟ . then. y ) = i . We shall now see a class of matrices for which it is easy to decide whether they are similar to a diagonal matrix. and it can be verified that 1 ⎟ ⎠ − 1 ⎞⎛ − 9 2 ⎟⎜ − 1 ⎟⎜ − 8 2⎟ 1 ⎟ ⎜ − 16 ⎠⎝ 4 3 8 4 ⎞⎛ 1 ⎟⎜ 4 ⎟⎜ 2 7 ⎟⎜ 0 ⎠⎝ 1 0 2 1⎞ ⎟ 1⎟ 2⎟ ⎠ ⎛ 1 ⎜ −1 P AP = ⎜ 2 ⎜ ⎜− 2 ⎝ ⎛−1 ⎜ =⎜ 0 ⎜ 0 ⎝ 0 −1 0 0⎞ ⎟ 0 ⎟ a diagonal matrix. 3⎟ ⎠ A has n Thus we can conclude that A is similar to a diagonal matrix. But we shall first introduce some preliminaries. y = ⎜ 1 − i ⎟ . i. y ) = Example 1: x1 y 1 + x 2 y 2 + K + x n y n = ∑ n i =1 xi y i ⎛ i ⎞ ⎛ 1 ⎞ ⎜ ⎜ ⎟ ⎟ x = ⎜ 2 + i ⎟ .1 + (2 + i )(1 − i ) + (− 1 )(i ) M3/L3/V1/May2004/7 Vittal rao/IISc. A has n linearly independent eigenvectors P-1 AP is a diagonal matrix where the columns of P are taken to be the n linearly eigenvectors. If ⎜ −1 ⎟ ⎜ i ⎟ ⎝ ⎝ ⎠ ⎠ ( x .Bangalore .

Which is real ≥ 0. x ) (3) For any complex number α. (x . we have (x . x ) = ∑ (x . x ) Thus.1 ≤ i ≤ n ⇔ x = θn Thus. (α x . 0 ⇔ ∑ n 2 i = 0 i=1 ⇔ x i = 0 . below: (1) For any vector x in Cn. x ) = n i =1 xi x i = ∑ x n i =1 xi 2 . y ) = ∑ (α x i ) y i i =1 n =α ∑ n i =1 xi y i = α (x . y ) Thus Vittal rao/IISc. we have. Further. x ) = 1 i + ( − i )(2 + i ) + () )(1 + i ) + (− 1 (i )(− 1 ) = )(− i )= 1 + 5 i 1 − 5i We now observe some of the properties of the inner product.Numerical analysis /Eigenvalues and Eigenvectors Lecture notes = i + (2 + i 1 Whereas ( y . y ) = ( y .Bangalore M3/L3/V1/May2004/8 .x) is real and ≥ 0 and = 0 x = θn (2) (x . y ) = ∑ n i =1 xi y i ⎛ = ⎜ ⎜ ⎝ ∑ n i =1 ⎞ yi xi ⎟ ⎟ ⎠ = (y . (x.

z) = (x. y) + (x. x ) = α y .Numerical analysis /Eigenvalues and Eigenvectors Lecture notes (αx. z ) Thus (x + y. ⎛ − 1⎞ ⎛1⎞ ⎜ ⎟ ⎜ ⎟ x = ⎜ i ⎟. y ) ( ) (4) (x + y. x ) by (2) = α ( y . y) = 0. y + z) = (x. z) We say that two vectors x and y are ORTHOGONAL if (x. (x. Example : ⎛ − 1⎞ ⎛1⎞ ⎜ ⎟ ⎜ ⎟ (1) If x = ⎜ i ⎟.y) for any complex number α. x = α (x . y = ⎜ a ⎟ (2) If ⎜1⎟ ⎜− i⎟ ⎝ ⎠ ⎝ ⎠ Vittal rao/IISc.z) and similarly (x.y) = α (x. ⎜0⎟ ⎜− i⎟ ⎝ ⎠ ⎝ ⎠ then. y = ⎜ i ⎟. We note. y ) = 1(− 1) + i(i ) + (− i )(0 ) = -1 + 1 = 0 Thus x and y are orthogonal.z) + (y.Bangalore M3/L3/V1/May2004/9 . y ) + ( x . (x . z ) = ∑ (x i i=1 n + y i )z i = ∑x i =1 n i zi + ∑ yi zi i =1 n = ( x. α y ) = (α y .

y ) = −1 + ai − i ∴ x.Numerical analysis /Eigenvalues and Eigenvectors Lecture notes then (x.Bangalore M3/L3/V1/May2004/10 . y orthogonal ⇔ − (1 + i ) + a i = 0 ⎛1+ i ⎞ ⇔ a = ⎜ ⎟ = − i (1 + i ) = 1 − i i ⎠ ⎝ ⇔ a =1+ i Vittal rao/IISc.

A* = (a*ij) and A = A* then aii = a*ii = aii Thus the DIAGONAL ENTRIES OF A HERMITIAN MATRIX ARE REAL. Vittal rao/IISc. A* ≠ A. A* = (a*ij) where a*ij = aji.Bangalore M3/L4/V1/May 2004/1 . We define the Hermitian conjugate of A. (1) If A = (aij) . We now state some properties of Hermitian matrices. denoted by A* as . A* is the conjugate of the transpose of A. DEFINITION: An nxn matrix A is said to be HERMITIAN if A* = A.Numerical analysis /Eigenvalues and Eigenvectors Lecture notes HERMITIAN MATRICES Let A = (aij). be an nxn matrix. A* = A. Whereas in Example 2. Example 1: ⎛ 1 i⎞ A=⎜ ⎜− i i⎟ ⎟ ⎝ ⎠ Transpose of A = ⎜ ⎜ ⎛1 − i ⎞ ⎟ i i ⎟ ⎝ ⎠ i ⎞ ⎛1 ∴ A* = ⎜ ⎟ ⎜− i − i⎟ ⎠ ⎝ Example 2: ⎛ 1 i⎞ A=⎜ ⎜ − i 2⎟ ⎟ ⎝ ⎠ Transpose of A = ⎜ ⎜ ⎛1 − i ⎞ ⎟ i 2⎟ ⎝ ⎠ ⎛ 1 i⎞ ∴ A* = ⎜ ⎟ ⎜ − i 2⎟ ⎠ ⎝ Observe that in Example 1.

( Ay )j = ∑a i =1 n ji yi. Ay = ⎜ Let Ax = ⎜ M ⎟ M ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ( Ax ) ⎟ ⎜ ( Ay ) ⎟ n ⎠ n ⎠ ⎝ ⎝ We have ( Ax )i Now = ∑a j =1 n ij x j .Numerical analysis /Eigenvalues and Eigenvectors Lecture notes ⎛ x1 ⎞ ⎛ y1 ⎞ ⎜ ⎟ ⎜ ⎟ ⎜ x2 ⎟ ⎜ y2 ⎟ (2) Let x = ⎜ ⎟ .Bangalore M3/L4/V1/May 2004/2 . y = ⎜ ⎟ be any two vectors in Cn. ( Ax . y ) = ∑ ( Ax )i y i i =1 n = ∑ ∑ i =1 n ⎛ ⎜ ⎜ ⎝ n j =1 ⎞ a ij x j ⎟ y ⎟ ⎠ i = ∑ n n j =1 ⎛ n ⎞ x j ⎜ ∑ a ij y i ⎟ ⎝ i =1 ⎠ = ∑ j =1 ⎛ n ⎞ x j ⎜ ∑ a ij y i ⎟ ⎝ i =1 ⎠ = ∑ n j =1 ⎛ n x j⎜∑ a ⎝ i =1 ji ⎞ y i ⎟ (Q aij = a ji sin ceA = A* ) ⎠ Vittal rao/IISc. M M ⎜ ⎟ ⎜ ⎟ ⎜x ⎟ ⎜y ⎟ ⎝ n⎠ ⎝ n⎠ ⎛ ( Ay )1 ⎞ ⎛ ( Ax )1 ⎞ ⎜ ⎟ ⎜ ⎟ ⎜ ( Ay )2 ⎟ ⎜ ( Ax )2 ⎟ .

x ) = ( x . y corresponding eigenvectors. µ be two different eigenvalues of A and x. (3) Let λ be any eigenvalue of A. But (x .Bangalore M3/L4/V1/May 2004/3 . x ) = 0 . x ) = (λ x . THUS THE EIGENVALUES OF A HERMITIAN MATRIX ARE ALL REAL.Numerical analysis /Eigenvalues and Eigenvectors Lecture notes = ∑ n x j =1 j (Ay ) j = (x. = λ (x . y) = (x. y ) Vittal rao/IISc. Ay) Thus IF A IS HERMITIAN THEN (Ax. λ x ) A is Hermitian. x ) ≠ 0Q x ≠ θ n ∴ λ − λ = 0∴ λ = λ ∴ λ is real. λ ( x . Now. Now. Then there is an x ∈ Cn. Ax ) = (x . We have. x ≠ θn such that Ax = λx. y. x ) ∴ λ − λ ( )(x . Ax = λx and Ay = µy and λ. y ) = (λ x . Ay) FOR ANY TWO VECTORS x. (4) Let λ. λ ( x . x ) = ( Ax . µ are real by (3).

Numerical analysis /Eigenvalues and Eigenvectors

Lecture notes

= ( Ax , y ) = ( x , Ay )by ( 2 ) = (x , µ y ) = µ (x , y ) = µ ( x , y )Q µ isreal .

∴ (λ − µ

)( x , y ) =

0 . But λ ≠ µ

∴ (x,y) = 0

x and y are orthogonal.

THUS IF A IS A HERMITIAL MATRIX THEN THE EIGENVECTORS CORRESPONDING TO DISTINCT EIGENVALUES ARE ORTHOGONAL.

Vittal rao/IISc.Bangalore

M3/L4/V1/May 2004/4

Numerical analysis /Eigenvalues and Eigenvectors

Lecture notes

Gramm – Schmidt Orthonormalization
We shall now discuss the Gramm – Schmidt Orthonormalization process: Let U1, U2, …., Uk be k linearly independent vectors in Cn. The Gramm – Schmidt process is the method to get an orthonormal set φ1 , φ 2 ,....., φ k show that the subspace ω spanned by U1, ….., Uk is the same as the subspace spanned by φ1 ,....., φ k thus providing an orthonormal basis for ω. The process goes as follows: Let ψ 1 = U 1 ;

φ1 =
Next, let,

ψ1 = ψ1

ψ1 Note φ 1 = 1 (ψ 1 ,ψ 1 )

ψ 2 = U 2 − (U 2φ1 )φ1
Note that

(ψ 2 φ 1 )
= (U 2 ,φ1 ) − ((U 2φ2 )φ1 ,φ1 ) = (U 2 , φ1 ) − (U 2φ 2 )(φ1φ1 )
= (U 2 , φ1 ) − (U 2φ1 )Q (φ1φ1 ) = 1
∴ψ 2 ⊥ φ1 .
Let

φ2 =
Also

ψ2 ; ψ2

clearly

φ 2 = 1, φ1 = 1, (φ1 , φ 2 ) = 0

x = α1 U1 + α2 U2 then

⇔ x = α 1 ψ 1 φ1 + α 2 [ ψ 2 φ 2 + (U 2 ,φ1 )φ1 ]
Vittal rao/IISc.Bangalore M3/L5/V1/May 2004/1

⇔ x = α1ψ 1 + α 2 (ψ 2 + (U 2 , φ1 )φ1 )

Numerical analysis /Eigenvalues and Eigenvectors

Lecture notes

⇔ x = β 1φ1 + β 2φ 2 where

β1 = α1 ψ 1 + α 2 (U 2 ,φ1 )

β2 = α2 ψ 2
Thus xε subspace spanned by U1, U2 xε subspace spanned by φ1, φ2. Thus φ1, φ2 is an orthonormal basis for the subspace [U1,U2]. Having defined φ1, φ2,….., φi-1 we define φi as follows:
i −1

ψ i = U i − ∑ (U i , φ i )φ i
p =1

Clearly

(ψ , φ ) = 0
i p

1≤ p ≤ i-1

and

φi =

ψ ψ

i i

Obviously φi = 1and φi ,φ j = 0 for1 ≤ j ≤ i − 1 and xε [U1, U2, ….., Ui] xε [φ1, ….., φi]

(

)

and thus φ1, φ2, ….., φi is an orthonormal basis for [U1, ….., Uk]. Thus at the kth stage we get an orthonormal basis φ1, …., φk for [U1, ….., Uk]. Example:

⎛ ⎞ ⎛ 2⎞ ⎛1⎞ ⎜ 1 ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ 3⎟ ⎜1⎟ Let U 1 = ⎜ ⎟;U 2 = ⎜ 1 ⎟;U 3 = ⎜ ⎟ 1 1 ⎜ − 1⎟ ⎜ ⎟ ⎜ ⎟ ⎜0⎟ ⎜ 0⎟ ⎜0 ⎟ ⎝ ⎠ ⎝ ⎠ ⎝ ⎠
be l.i. Vectors in R4. Let us find an orthonormal basis for the subspace ω spanned by U1, U2, U3 using the Gramm – Schmidt process.

⎛1⎞ ⎜ ⎟ ⎜1⎟ ψ 1 = U 1 = ⎜ ⎟; 1 ⎜ ⎟ ⎜0⎟ ⎝ ⎠
Vittal rao/IISc.Bangalore

φ1 =

ψ1 (ψ 1 ,ψ 1 )

⎛1⎞ ⎜ ⎟ 1 ⎜1⎟ = 3 ⎜1⎟ ⎜ ⎟ ⎜0⎟ ⎝ ⎠
M3/L5/V1/May 2004/2

φ1 )φ1 ⎛ ⎜ ⎛ 1 ⎞ ⎜ ⎜ ⎟ ⎜ 1 1 ⎞⎜ ⎜ 1 ⎟ ⎛ 1 =⎜ ⎟−⎜ + − ⎟ ⎟ −1 ⎜ 3 3 3 ⎠⎜ ⎝ ⎜ ⎟ ⎜ ⎜ 0 ⎟ ⎝ ⎠ ⎜ ⎜ ⎝ ⎛ 1 ⎞ ⎛ 1 ⎞ ⎜ 3⎟ ⎟ ⎜ 1 ⎟ ⎜ ⎜ 1 ⎟ ⎜ 3⎟ − = ⎜ − 1⎟ ⎜ 1 ⎟ ⎟ ⎜ ⎜ 0 ⎟ ⎜ 3 ⎟ ⎠ ⎜ 0 ⎟ ⎝ ⎠ ⎝ 1 ⎞ ⎟ 3⎟ 1 ⎟ 3⎟ ⎟ 1 ⎟ 3⎟ 0 ⎟ ⎠ ⎛ 2 ⎞ ⎜ 3 ⎟ ⎜ 2 ⎟ =⎜ 3 ⎟ ⎜− 4 ⎟ 3⎟ ⎜ 0 ⎠ ⎝ and ψ 2 = 4 4 16 2 6 + + = 9 9 9 3 ∴φ2 = ψ2 ψ2 ⎛ 2 ⎞ ⎛ 1 6 ⎞ ⎟ ⎜ 3 ⎟ ⎜ ⎜ 1 ⎟ 3 ⎜ 2 ⎟ ⎜ 6 ⎟ = ⎜ 3 ⎟= ⎟ 2 6 ⎜− 4 ⎟ ⎜− 2 3⎟ ⎜ ⎜ 6⎟ 0 ⎠ ⎜ 0 ⎟ ⎝ ⎝ ⎠ Vittal rao/IISc.Numerical analysis /Eigenvalues and Eigenvectors Lecture notes ∴ ⎛ ⎜ ⎜ ⎜ = ⎜ ⎜ ⎜ ⎜ ⎜ ⎝ 1 ⎞ ⎟ 3 ⎟ 1 ⎟ 3 ⎟ ⎟ 1 ⎟ 3 ⎟ 0 ⎟ ⎠ φ1 ψ 2 = U 2 − (U 2 .Bangalore M3/L5/V1/May 2004/3 .

φ1 )φ1 − (U 3 .Numerical analysis /Eigenvalues and Eigenvectors Lecture notes Thus ⎛ 1 ⎞ ⎜ 6 ⎟ ⎜ 1 ⎟ ⎜ 6 ⎟ φ2 = ⎜− 2 ⎟ ⎜ 6⎟ ⎜ 0 ⎟ ⎝ ⎠ Finally. φ 2 )φ 2 ⎛ 1 ⎞ ⎛ 1 ⎞ ⎜ ⎜ ⎛ 2⎞ 6 ⎟ 3⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ 3 ⎟ ⎛ 6 ⎞⎜ 1 ⎟ ⎛ 3 ⎞⎜ 1 6 ⎟ 3 −⎜ ⎟ ⎟ =⎜ ⎟−⎜ ⎟⎜ ⎜ ⎟ ⎟ ⎜ 6 ⎟⎜ − 2 1 ⎠ ⎜ ⎟ ⎝ 3 ⎠⎜ 1 ⎟ ⎝ ⎜ ⎜0⎟ 6⎟ 3 ⎝ ⎠ ⎜ 0 ⎟ ⎜ 0 ⎟ ⎝ ⎠ ⎝ ⎠ ⎛ 2⎞ ⎛ 2⎞ ⎛ 12 ⎞ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ 3⎟ ⎜ 2⎟ ⎜ 1 ⎟ ⎜ = ⎜ ⎟ − ⎜ ⎟ − ⎜ 2⎟ 2 1 ⎜ ⎟ ⎜ ⎟ ⎜ − 1⎟ ⎜0⎟ ⎜0⎟ ⎜ ⎝ ⎠ ⎝ ⎠ ⎝ 0 ⎟ ⎠ ⎛− 1 ⎞ ⎜ 2⎟ ⎜ 1 ⎟ =⎜ 2 ⎟ ⎜ 0 ⎟ ⎜ 0 ⎟ ⎝ ⎠ ψ3 = 1 +1 = 4 4 1 = 1 2 2 ∴φ3 = ψ3 = ψ3 ⎞ ⎛− 1 ⎞ ⎛− 1 2⎟ ⎜ 2⎟ ⎜ ⎟ ⎜ 1 ⎟ ⎜ 1 2⎜ 2 ⎟ = ⎜ 2 ⎟ 0 ⎟ ⎜ 0 ⎟ ⎜ ⎟ ⎜ 0 ⎟ ⎜ ⎠ ⎜ 0 ⎟ ⎝ ⎠ ⎝ Vittal rao/IISc. ψ 3 = U 3 − (U 3 .Bangalore M3/L5/V1/May 2004/4 .

U3 is φ1.Numerical analysis /Eigenvalues and Eigenvectors Lecture notes Thus the required orthonormal basis for ω. φ3. each. the subspace spanned by U1. . φ = ⎜ 6 3 2 2 ⎟ ⎟ 3 ⎜ ⎜ 2 ⎟ 1 0 ⎟ ⎟ ⎜− ⎟ ⎟ ⎜ 6 3 ⎜ 0 ⎟ ⎜ 0 ⎟ ⎠ ⎝ 0 ⎟ ⎠ ⎝ ⎠ 1 Note that these φi are mutually orthogonal and have.Bangalore M3/L5/V1/May 2004/5 . ….. …. Let be its characteristic polynomial. φ = ⎜ ⎟... We had seen that the eigenvalues of a Hermitian matrix are all real. Let C (λ ) = (λ − λ1 ) (λ − λ 2 ) . where λ1. where ⎛ ⎜ ⎜ φ1 = ⎜ ⎜ ⎜ ⎜ ⎝ ⎞ ⎛ 1 ⎞ ⎛− 1 ⎞ ⎜ ⎜ 6 ⎟ 3⎟ 2⎟ ⎟ ⎜ 1 ⎟ ⎜ 1 1 ⎟ ⎟.. Example : ⎛ 6 ⎜ A = ⎜− 2 ⎜ 2 ⎝ Notice A* = A1 = A1 = A. −2 3 −1 2 ⎞ ⎟ − 1⎟ 3 ⎟ ⎠ Thus the matrix A is Hermitian.. If ωi is the characteristic subspace corresponding to the eigen value λi . We can further show the following: (We shall not give a proof here.. that is..(λ − λ k ) a1 a2 A be any nxn Hermitian ak matrix. but illustrate with an example). ωk and write them as the columns of a matrix P then P*AP Will be a diagonal matrix.. . λk are its distinct eigenvalues and a1. Vittal rao/IISc. and that the eigenvectors corresponding to district eigenvalues are mutually orthogonal. ω i = {x : Ax = λ i x } then it can be shown that dim is ωi = ai. We then choose any basis for ωi and orthonormalize it by G-S process and get an orthonormal basis for ωi. If we now take all these orthonormal basis vectors for ω1. .U2.. We now get back to Hermitian matrices. ‘length’ one. ak are their algebraic multiplicities. φ2. λ2.

Bangalore .Numerical analysis /Eigenvalues and Eigenvectors Lecture notes Characteristic Polynomial of A: λ −6 λI − A = 2 −2 2 −2 1 λ −3 λ −3 1 λ −2 ⎯⎯ ⎯ → ⎯ R1 + 2 R 2 2 −2 2 (λ − 2 ) λ −3 1 0 1 λ −3 = (λ − 2 ) 1 2 − 2 2 λ − 3 1 0 1 λ − 3 R 2 − 2 R1 R 3 + 2 R1 → = (λ − 2 ) 0 0 1 2 0 1 λ −7 5 λ −3 = (λ − 2)[(λ − 7)(λ − 3) − 5] = (λ − 2) λ2 − 10λ + 16 = (λ − 2) (λ − 8) 2 [ ] = (λ − 2)(λ − 2)(λ − 8) Thus C (λ ) = (λ − 2 ) (λ − 8) 2 ∴ λ1 = 2 a1 = 2 M3/L5/V1/May 2004/6 Vittal rao/IISc.

e.Numerical analysis /Eigenvalues and Eigenvectors Lecture notes λ2 = 8 a2 = 1 The characteristic subspaces: ω1 = {x : Ax = 2 x} = {x : ( A − 2 I )x = θ } i.Bangalore M3/L5/V1/May 2004/7 .e.2x1 + x2 x1 ⎞ ⎛ ⎟ ⎜ ∴x =⎜ x2 ⎟. x1 . β scalars ⎟ ⎠ ⎫ ⎪ ⎬ ⎪ ⎭ ⎛ 1 ⎞ ⎛0⎞ ⎜ ⎟ ⎜ ⎟ U 1 = ⎜ 0 ⎟. α . ⎜ − 2 ⎜ 2 ⎝ −2 1 −1 2 ⎞ ⎛ x1 ⎞ ⎛ 0 ⎞ ⎟⎜ ⎟ ⎜ ⎟ − 1⎟⎜ x 2 ⎟ = ⎜ 0 ⎟ 1 ⎟⎜ x 3 ⎟ ⎜ 0 ⎟ ⎠⎝ ⎠ ⎝ ⎠ 2x1 – x2 + x3 = 0 x3 = . x 2 arbitrary ⎜ − 2x + x ⎟ 1 2⎠ ⎝ ⎧ α ⎛ ⎜ ⎪ ∴ ω1 = ⎨x : x = ⎜ β ⎜ − 2α + β ⎪ ⎝ ⎩ ∴ A basis for ωi is ⎞ ⎟ ⎟ . We have to solve (A – 2I) x = θ ⎛ 4 ⎜ i. U 2 = ⎜ 1 ⎟ ⎜ − 2⎟ ⎜1⎟ ⎝ ⎠ ⎝ ⎠ We now orthonormalize this: ⎛ 1 ⎞ ⎟ ⎜ ψ 1 = U1 = ⎜ 0 ⎟ ⎜ − 2⎟ ⎠ ⎝ ψ1 = 5 φ1 = ψ ψ 1 1 Vittal rao/IISc.

Numerical analysis /Eigenvalues and Eigenvectors Lecture notes ⎛ ⎜ ⎜ ∴ φ1 = ⎜ ⎜ ⎜− ⎝ 1 ⎞ ⎟ 5 ⎟ 0 ⎟ 2 ⎟ 5⎟ ⎠ ψ 2 = U 2 − (U 2 .e. φ1 )φ1 ⎛ ⎜ ⎛ 0⎞ ⎜ ⎟ ⎛ 2 ⎞⎜ = ⎜1⎟ − ⎜ − ⎟⎜ ⎜ ⎟ 5 ⎠⎜ ⎝ ⎜1⎟ ⎝ ⎠ ⎜− ⎝ ⎛ 2 ⎞ ⎛0⎞ ⎜ 5 ⎟ ⎜ ⎟ ⎜ ⎟ = ⎜1⎟ + ⎜ 0 ⎟ ⎜1⎟ ⎜ − 4 ⎟ ⎝ ⎠ ⎜ ⎟ ⎝ 5⎠ 1 ⎞ ⎟ 5 ⎟ 0 ⎟ 2 ⎟ 5⎟ ⎠ ⎛2 ⎞ ⎜ 5⎟ =⎜ 1 ⎟ ⎜1 ⎟ ⎜ ⎟ ⎝ 5⎠ ψ2 = 4 1 +1+ = 25 25 30 = 25 30 5 ∴φ2 = ψ2 = ψ2 ⎞ ⎛ 2 ⎟ ⎛2 ⎞ ⎜ 30 ⎟ ⎜ 5⎟ ⎜ 5 ⎜ 1 ⎟=⎜ 5 ⎟ 30 ⎟ 30 ⎜ 1 ⎟ ⎜ ⎜ ⎟ ⎟ ⎝ 5⎠ ⎜ 1 30 ⎠ ⎝ ∴ φ1. ω 2 = {x : Ax = 8 x} = {x : ( A − 8I )x = θ } So we have to solve (A-8I) x = θ i. φ2 is an orthonormal basis for ω1.Bangalore M3/L5/V1/May 2004/8 . Vittal rao/IISc.

adiagonal matrix.Bangalore .Numerical analysis /Eigenvalues and Eigenvectors Lecture notes ⎛− 2 ⎜ ⎜− 2 ⎜ 2 ⎝ −2 −5 −1 2 ⎞⎛ x1 ⎞ ⎛ 0 ⎞ ⎟⎜ ⎟ ⎜ ⎟ − 1 ⎟⎜ x 2 ⎟ = ⎜ 0 ⎟ − 5 ⎟⎜ x 3 ⎟ ⎜ 0 ⎟ ⎠⎝ ⎠ ⎝ ⎠ and therefore the general solution is This yields x1 = -2x2 = 2x3 ⎛ γ ⎜ ⎜−γ ⎜ 2 ⎜ γ ⎜ ⎝ 2 ⎞ ⎟ ⎛ 2 ⎞ ⎟ = γ 1 ⎜ − 1⎟ ⎜ ⎟ ⎟ ⎜ 1 ⎟ ⎟ ⎝ ⎠ ⎟ ⎠ ⎛2⎞ ⎜ ⎟ ∴ Basis : U 3 = ⎜ − 1⎟ ⎜1⎟ ⎝ ⎠ ∴ Orthonormalize: only one step: ⎛2⎞ ⎜ ⎟ ψ 3 = U 3 = ⎜ − 1⎟ ⎜1⎟ ⎝ ⎠ ψ3 ψ3 ⎛ 2 ⎞ ⎟ 2⎞ ⎜ ⎛ 6 ⎟ ⎜ 1 ⎜ ⎟ = ⎜ − 1⎟ = ⎜ − 1 ⎟ 6⎟ 6⎜ ⎟ ⎜ 1 ⎠ ⎜ 1 ⎝ ⎟ 6 ⎠ ⎝ 2 30 5 30 1 30 2 6 1 − 1 6 ⎞ ⎟ ⎟ ⎟ 6⎟ ⎟ ⎟ ⎠ φ3 = ⎛ 1 ⎜ ⎜ 5 ⎜ ∴ If P = ⎜ 0 ⎜ 2 ⎜− 5 ⎝ Then P* = P1 and ⎛2 ⎜ P AP = P AP = ⎜ 0 ⎜0 ⎝ * 1 0⎞ ⎟ 2 0 ⎟. 0 8⎟ ⎠ 0 M3/L5/V1/May 2004/9 Vittal rao/IISc.

Numerical analysis /Eigenvalues and Eigenvectors Lecture notes Vittal rao/IISc.Bangalore M3/L5/V1/May 2004/10 .

for any vector x. a real number x satisfying. ⎜x ⎟ ⎝ 2⎠ ⎭ ⎩ our ‘usual’ two-dimensional plane. x1 . (i) x ≥ 0 for every x ε V and x ≥ 0 if and only if x = θ. (iii) x + y ≤ x + y for any two vectors x and y. (The inequality (iii) is usually referred to as the triangle inequality). The norm on a vector space V is a rule which associates with each vector x in V. ⎛ x1 ⎞ If x = ⎜ ⎟ is any vector in this space we ⎜x ⎟ ⎝ 2⎠ defineits‘usual’ ‘length’ or ‘norm’ as x = x 21 + x 2 2 We observe that (i) x ≥ 0 for every vector x in R2 x ≥ 0 if and only if x is θ. (iii) x + y ≤ x + y for every x. (ii) αx = α x for any scalar α.Bangalore . x2 ∈ R ⎬. (ii) αx = α x for every scalar α and every vector x in V. ⎫ ⎧ ⎛ x1 ⎞ R 2 = ⎨ x = ⎜ ⎟. y in V. We now generalize this idea to define the concept of a norm on Cn or Rn. Examples of Vector Norms on Cn and Rn ⎛ ⎜ ⎜ Let x = ⎜ ⎜ ⎜ ⎝ ⎞ ⎟ ⎟ n n ⎟ be any vector x in C (or R ) ⎟ xn ⎟ ⎠ x1 x2 M M3/L6/V1/May 2004/1 Vittal rao/IISc.Numerical analysis /Eigenvalues and Eigenvectors Lecture notes VECTOR AND MATRIX NORMS Consider the space.

Thus these give several types of norms on Cn and Rn.Bangalore M3/L6/V1/May 2004/2 .{ ... + x n = ∑x i =1 i In general for 1 ≤ p < ∞ we can define..Numerical analysis /Eigenvalues and Eigenvectors Lecture notes We can 2 define 2 various 1 2 2 norms 1 2 as follows:(1) x = x1 + x 2 2 [ + ...{ x1 .1} = 2 1 = 14 + 2 4 + 14 4 ( ) 1 4 1 = 18 4 Vittal rao/IISc... 2 .. x n } All these can be verified to satisfy the conditions (i). (ii) and (iii) required of a norm. (4) x ∞ = max . Example: ⎛ 1 ⎞ ⎜ ⎟ (1) Let x = ⎜ − 2 ⎟ in R3 ⎜ −1⎟ ⎝ ⎠ Then x x x x 1 = 1+ 2 +1 = 4 = (1 + 4 + 1) 1 = 2 6 2 ∞ = max . + x n ] ⎡ n 2⎤ = ⎢∑ x i ⎥ ⎣ i =1 ⎦ n (2) x 1 = x1 + x 2 + .. (3) x p ⎧ n = ⎨∑ x i ⎩ i =1 p ⎫p ⎬ ⎭ 1 If we set p = 2 in (3) we get x 2 as in (1) and if we set p = 1 in (3) we get x 1 as in (2).... x 2 ..

for every i=1. x and x { (k ) n } converges to x x(k)i → xi { (k ) 1 {x } of vectors CONVERGES to the vector x if the } converges to the number x . M3/L6/V1/May 2004/3 As k → ∞.e.{ .Numerical analysis /Eigenvalues and Eigenvectors Lecture notes ⎛ 1 ⎞ ⎟ ⎜ (2) Let x = ⎜ i ⎟ in C3 ⎜ − 2i ⎟ ⎠ ⎝ Then x 1 = 1+ 2 +1 = 4 x 2 x x k =1 ∞ = max . (k ) (k ) 1 2 2 n i.1} = 2 1 = 1 + 2 +1 3 3 = (1 + 4 + 1) = 6 1 2 3 ( 3 ) 1 3 = 10 1 3 (k ) Consider a sequence x { } ∞ of vectors in Cn (or Rn) x (k ) ⎛ x (k )1 ⎞ ⎜ (k ) ⎟ ⎜x 2⎟ =⎜ M ⎟ ⎜ ⎟ ⎜ x (k ) n ⎟ ⎝ ⎠ ⎛ x1 ⎞ ⎜ ⎟ ⎜x ⎟ x = ⎜ 2 ⎟ ∈ C n ( orR n ) M ⎜ ⎟ ⎜x ⎟ ⎝ n⎠ Suppose DEFINITION: We say that the sequence sequence of numbers. Vittal rao/IISc. {x } converges to x . …. ….. 2. 2. n.Bangalore .

2. ⎜ 1 k⎟ ⎜ ⎟ ⎜ 2 ⎟ ⎝ k + 1⎠ Let x ( k ) ⎛ 0⎞ ⎜ ⎟ Let x = ⎜ 1 ⎟ . x ( k ) − x converges to the real number 0 then we say that the sequence of vectors converges to x with respect to this norm. ∴ x (k ) → x If {x (k ) } is a sequence of vectors such that in some norm. We then write.3. x (k ) ⎯ ⎯→ x For example consider the sequence.Numerical analysis /Eigenvalues and Eigenvectors Lecture notes Example: ⎛ ⎞ ⎜ i ⎟ ⎜ k ⎟ = ⎜1 − 2 ⎟ be a sequence of vectors in R3. ⎛ 1 ⎞ ⎟ ⎜ ⎜ k ⎟ 2 = ⎜ 1 − ⎟ in R3 as before and. ⎜ k ⎟ ⎜ 1 ⎟ ⎟ ⎜ 2 ⎝ k + 1⎠ x (k ) ⎛ 0⎞ ⎜ ⎟ x = ⎜ 1⎟ ⎜ 0⎟ ⎝ ⎠ We have Vittal rao/IISc. the sequence of real numbers. ⎜ 0⎟ ⎝ ⎠ (k ) Here x 1 = 1 → 0 = x1 k x (k ) 2 = 1 − 2 → 1 = x2 k x (k )3 = 1 → 0 = x3 k +1 2 ∴ x ( k ) i → x i for I=1.Bangalore M3/L6/V1/May 2004/4 .

⎨ .Bangalore M3/L6/V1/May 2004/5 It can be shown that . x (k ) − x p ⎧ 1 ⎪ 1 ⎛ 2⎞ = ⎨ p +⎜ ⎟ + ⎝k⎠ ⎪k k 2 +1 ⎩ p ( ) ⎫ ⎪ →0 p ⎬ ⎪ ⎭ 1 p p ∴ x (k ) ⎯⎯ → x ⎯ ∀ p . 2 ⎬ = → 0 ⎩ k k k + 1⎭ k ∴ x ( k ) ⎯ ⎯∞ → x ⎯ ⎧1 2 1 ⎫2 ⎪ ⎪ (k ) →0 x −x =⎨ 2 + 2 + 2⎬ 2 k ⎪k k 2 +1 ⎪ ⎭ ⎩ 1 ( ) 2 ∴ x ( k ) ⎯ ⎯→ x Also.1 ≤ p ≤ ∞ “ IF A SEQUENCE {x (k ) }OF VECTORS IN Cn (or Rn) CONVERGES TO A VECTOR x IN Cn (or Rn) WITH RESPECT TO ONE VECTOR NORM THEN THE SEQUENCE CONVERGES TO x WITH RESPECT TO ALL VECTOR NORMS AND ALSO THE Vittal rao/IISc. .Numerical analysis /Eigenvalues and Eigenvectors Lecture notes ⎛ 1 ⎞ ⎟ ⎜ ⎜ k ⎟ 2 x (k ) − x = ⎜ − ⎟ ⎜ k ⎟ ⎜ 1 ⎟ ⎟ ⎜ 2 ⎝ k + 1⎠ Now x (k ) − x 1 = 1 2 1 + + 2 →0 k k k +1 1 ∴ x ( k ) ⎯⎯→ x Similarly x (k ) − x ∞ ⎧1 2 1 ⎫ 2 = max .

We can also show that Vittal rao/IISc. MATRIX NORMS Let M be the set of all nxn matrices (real or complex). Before we give examples of matrix norms we shall see a method of getting a matrix norm starting with a vector norm.Bangalore M3/L6/V1/May 2004/6 . We define A = max Ax x ≠ θn x We can show this is a matrix norm and this matrix norm is called the matrix norm subordinate to the vector norm . consider Ax x (where A is an nxn matrix). (iii) A + B ≤ A + B for all matrices A and B. (iv) AB ≤ A B for all matrices A and B. This given us an idea to by what proportion the matrix A has distorted the length of x. is a vector norm. We get max x ≠ θn Ax x a real number. CONVERSELY IF A SEQUENCE CONVERGES TO x AS PER DEFINITION ON PAGE 40 THEN IT CONVERGES WITH RESPECT TO ALL VECTOR NORMS”. which associates a real number A with each matrix A and satisfying. Suppose . Thus when we want to check the convergence of a sequence of vectors we can choose that norm which is convenient to that sequence. Suppose we take the maximum distortion as we vary x over all vectors. A matrix norm is a rule. (i) A ≥ 0 for all matrices A A = 0 if and only if A = On. Then. for x ≠ θn. (ii) αA = α A for every scalar α and every matrix A.Numerical analysis /Eigenvalues and Eigenvectors Lecture notes SEQUENCE CONVERGES TO x ACCORDING TO DEFINITION ON PAGE 40.

... ⎟ ⎟ a nn ⎟ ⎠ The sum of the absolute values of the entries in the ith column is called the absolute column sum and is denoted by Ci.. Vittal rao/IISc... We have C1 = a11 + a 21 + a31 + ..... …......... . a1n ⎞ ⎟ a 2n ⎟ . …...... A ∞ and A 2 for a matrix A....... ….. + a n1 = ∑ a i1 C 2 = a12 + a 22 + a32 + .. max Ax A1 = x 1 =1 1 A = A = max x 2 =1 Ax 2 max x ∞ =1 Ax Ax ∞ A = max x p =1 p How hard on easy is it to compute these matrix norms? We shall give some idea of computing A 1 . …. an2 ...Bangalore M3/L6/V1/May 2004/7 .Numerical analysis /Eigenvalues and Eigenvectors Lecture notes A = max max Ax Ax = x ≠ θn x x =1 For example. ... ….. Let ⎛ a 11 ⎜ ⎜a A = ⎜ 21 .. ⎜ ⎜a ⎝ n1 a 12 a 22 .. …... + a n 2 = ∑ ai 2 i =1 n i =1 n …... ..

We have

    C_1 = |a_11| + |a_21| + |a_31| + ... + |a_n1| = Σ_{i=1}^{n} |a_i1|
    C_2 = |a_12| + |a_22| + |a_32| + ... + |a_n2| = Σ_{i=1}^{n} |a_i2|

and, in general,

    C_j = Σ_{i=1}^{n} |a_ij|,   1 ≤ j ≤ n.

Thus we have n absolute column sums. Let

    C = max { C_1, C_2, ..., C_n } = max over 1 ≤ j ≤ n of Σ_{i=1}^{n} |a_ij|.

This is called the maximum absolute column sum (MACS). We can show that

    ||A||_1 = C = max { C_1, C_2, ..., C_n }.

For example, if

    A = [  1   2  -3
          -1   0   1
          -3   2  -4 ]

then C_1 = 1 + 1 + 3 = 5, C_2 = 2 + 0 + 2 = 4, C_3 = 3 + 1 + 4 = 8, so C = max {5, 4, 8} = 8 and ||A||_1 = 8.

Similarly, we denote by R_i the sum of the absolute values of the entries in the ith row.

We have

    R_1 = |a_11| + |a_12| + ... + |a_1n| = Σ_{j=1}^{n} |a_1j|
    R_2 = |a_21| + |a_22| + ... + |a_2n| = Σ_{j=1}^{n} |a_2j|

and, in general,

    R_i = |a_i1| + |a_i2| + ... + |a_in| = Σ_{j=1}^{n} |a_ij|.

Define the maximum absolute row sum (MARS) as

    R = max { R_1, R_2, ..., R_n } = max over 1 ≤ i ≤ n of Σ_{j=1}^{n} |a_ij|.

It can be shown that

    ||A||_∞ = R = max { R_1, ..., R_n }.

For example, for the matrix

    A = [  1   2  -3
          -1   0   1
          -3   2  -4 ]

we have R_1 = 1 + 2 + 3 = 6, R_2 = 1 + 0 + 1 = 2, R_3 = 3 + 2 + 4 = 9, so R = max {6, 2, 9} = 9 and ||A||_∞ = 9.

The computations of ||A||_1 and ||A||_∞ for a matrix are thus fairly easy. The computation of ||A||_2, however, is not very easy in general, though it is somewhat easier in the case of a Hermitian matrix.
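Both of the easy norms reduce to column and row sums of the entrywise absolute values; a minimal sketch, assuming NumPy, using the example matrix above:

    import numpy as np

    A = np.array([[ 1, 2, -3],
                  [-1, 0,  1],
                  [-3, 2, -4]])

    macs = np.abs(A).sum(axis=0).max()  # ||A||_1  : maximum absolute column sum
    mars = np.abs(A).sum(axis=1).max()  # ||A||_inf: maximum absolute row sum
    print(macs, mars)                   # 8 9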

Let A be any n x n matrix, and let

    C(λ) = (λ - λ_1)^{a_1} ... (λ - λ_k)^{a_k}

be its characteristic polynomial, where λ_1, λ_2, ..., λ_k are the distinct characteristic values (eigenvalues) of A. Let

    P = max { |λ_1|, |λ_2|, ..., |λ_k| }.

This is called the spectral radius of A and is also denoted by ||A||_sp. It can be shown that, for a Hermitian matrix A,

    ||A||_2 = P = ||A||_sp.

For example, for the matrix

    A = [  6  -2   2
          -2   3  -1
           2  -1   3 ]

which is Hermitian, we found on page 33 the distinct eigenvalues λ_1 = 2, λ_2 = 8. Therefore

    ||A||_2 = ||A||_sp = max {2, 8} = 8.

If A is any general n x n matrix (not Hermitian), then let B = A*A. Then B* = A*A = B, so B is Hermitian; its eigenvalues are real and, in fact, nonnegative. Let the distinct eigenvalues of B be μ_1, μ_2, ..., μ_r, and let μ = max { μ_1, μ_2, ..., μ_r }. We can show that

    ||A||_2 = √μ.

It follows from the definition of the matrix norm subordinate to a vector norm, namely ||A|| = max over x ≠ θ_n of ||Ax|| / ||x||, that for any x ≠ θ_n in C^n or R^n,

    ||Ax|| / ||x|| ≤ max over x ≠ θ_n of ||Ax|| / ||x|| = ||A||,

and therefore ||Ax|| ≤ ||A|| ||x|| for all x ≠ θ_n; and this is obvious for x = θ_n. Thus, if ||A|| is a matrix norm subordinate to the vector norm || ||, then

    ||Ax|| ≤ ||A|| ||x||  for every vector x in C^n (or R^n).
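The recipe ||A||_2 = √μ above is easy to check numerically; a minimal sketch, assuming NumPy (numpy.linalg.norm(A, 2) computes the 2-norm directly, for comparison):

    import numpy as np

    A = np.array([[ 1, 2, -3],
                  [-1, 0,  1],
                  [-3, 2, -4]], dtype=float)

    B = A.T @ A                       # B = A*A is Hermitian (here real symmetric)
    mu = np.linalg.eigvalsh(B).max()  # largest (nonnegative) eigenvalue of B
    print(np.sqrt(mu), np.linalg.norm(A, 2))  # the two values agree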

COMPUTATION OF EIGENVALUES

In this section we shall discuss some standard methods for computing the eigenvalues of an n x n matrix. We shall first discuss some results regarding the general location of the eigenvalues, and we shall also briefly discuss some methods for computing the eigenvectors corresponding to the eigenvalues.

Let A be an n x n matrix and let λ_1, λ_2, ..., λ_n be its eigenvalues (including multiplicities). We defined

    P = ||A||_sp = max { |λ_1|, |λ_2|, ..., |λ_n| }.

Thus if we draw a circle of radius P about the origin in the complex plane, then all the eigenvalues of A will lie on or inside this closed disc. Thus we have:

(A) If A is an n x n matrix, then all the eigenvalues of A lie in the closed disc { λ : |λ| ≤ P } in the complex plane.

However, to locate this disc we need P, and to find P we need the eigenvalues themselves; thus this result is not practically useful. From a theoretical point of view, though, it suggests the possibility of locating all the eigenvalues in some disc. Let ||A|| be any matrix norm. Then it can be shown that P ≤ ||A||. Thus if we draw a disc of radius ||A|| with the origin as centre, this disc will be at least as big as the disc in (A) and hence will trap all the eigenvalues. The idea is to use a matrix norm which is easy to compute, for example ||A||_∞ or ||A||_1, which are easily computed as MARS and MACS respectively. Thus we have:

(B) If A is an n x n matrix, then all its eigenvalues are trapped in the closed disc { λ : |λ| ≤ ||A||_∞ }, or in the disc { λ : |λ| ≤ ||A||_1 }.

(The idea is to use ||A||_∞ if it is smaller than ||A||_1, and ||A||_1 if it is smaller than ||A||_∞.)

COROLLARY (C): If A is Hermitian, then all its eigenvalues are real, and hence all the eigenvalues lie in the interval

    { λ : -P ≤ λ ≤ P }   by (A),

or in the intervals

    { λ : -||A||_∞ ≤ λ ≤ ||A||_∞ }  or  { λ : -||A||_1 ≤ λ ≤ ||A||_1 }   by (B).

Example 1: Let

    A = [  1  -1   2
          -1   2   3
           1   2   0 ]

Here the absolute row sums are 4, 6 and 3, so ||A||_∞ = MARS = 6, and all the eigenvalues lie in the disc { λ : |λ| ≤ 6 }. The absolute column sums are C_1 = 3, C_2 = 5, C_3 = 5, so ||A||_1 = MACS = 5. In this example ||A||_1 = 5 < ||A||_∞ = 6, and hence we use ||A||_1 and get the smaller disc { λ : |λ| ≤ 5 }.

The above results locate all the eigenvalues in a single disc. The next set of results tries to isolate the eigenvalues, to some extent, in smaller discs. These results are due to GERSCHGORIN.

Let A = (a_ij) be an n x n matrix. The diagonal entries are ξ_1 = a_11, ξ_2 = a_22, ..., ξ_n = a_nn. Now let P_i denote the sum of the absolute values of the off-diagonal entries of A in the ith row:

    P_i = |a_i1| + |a_i2| + ... + |a_i,i-1| + |a_i,i+1| + ... + |a_in|.

Now consider the discs:

    G_1 : centre ξ_1, radius P_1 : { λ : |λ - ξ_1| ≤ P_1 }
    G_2 : centre ξ_2, radius P_2 : { λ : |λ - ξ_2| ≤ P_2 }
    ....
    G_i : centre ξ_i, radius P_i : { λ : |λ - ξ_i| ≤ P_i }

Thus we get n discs G_1, G_2, ..., G_n. These are called the GERSCHGORIN DISCS of the matrix A. The first result of Gerschgorin is the following:

(D) Every eigenvalue of A lies in one of the Gerschgorin discs.

Example 2: Let

    A = [ 1  -1   0
          0   4   1
          3   1  -5 ]

The Gerschgorin discs are found as follows:

    ξ_1 = (1, 0),  P_1 = 1 :  G_1 : centre (1, 0),  radius 1
    ξ_2 = (4, 0),  P_2 = 1 :  G_2 : centre (4, 0),  radius 1
    ξ_3 = (-5, 0), P_3 = 4 :  G_3 : centre (-5, 0), radius 4

[Figure: the three Gerschgorin discs of Example 2, centred at (1, 0), (4, 0) and (-5, 0).]

Thus every eigenvalue of A must lie in one of these three discs.

Example 3: Let

    A = [ 10     4    1
           1    10    0.5
           1.5  -3   20  ]

(It can be shown that the eigenvalues are exactly λ_1 = 8, λ_2 = 12, λ_3 = 20.) For this matrix we have

    ξ_1 = (10, 0),  P_1 = 5
    ξ_2 = (10, 0),  P_2 = 1.5
    ξ_3 = (20, 0),  P_3 = 4.5

Thus we have the three Gerschgorin discs

    G_1 = { λ : |λ - 10| ≤ 5 }
    G_2 = { λ : |λ - 10| ≤ 1.5 }
    G_3 = { λ : |λ - 20| ≤ 4.5 }
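The discs are cheap to compute; a minimal sketch, assuming NumPy, for the matrix of Example 3:

    import numpy as np

    A = np.array([[10.0,  4.0,  1.0],
                  [ 1.0, 10.0,  0.5],
                  [ 1.5, -3.0, 20.0]])

    radii = np.abs(A).sum(axis=1) - np.abs(np.diag(A))  # off-diagonal row sums P_i
    for centre, r in zip(np.diag(A), radii):
        print("disc: centre", centre, "radius", r)
    print("eigenvalues:", np.linalg.eigvals(A))         # approx. 8, 12, 20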

[Figure: the discs of Example 3; G_2 lies inside G_1, and G_3 is isolated from G_1.]

Thus all the eigenvalues of A are in these discs. But notice that our exact eigenvalues are 8, 12 and 20. Thus no eigenvalue lies in G_2; one eigenvalue lies in G_3 (namely 20), and two lie in G_1 (namely 8 and 12).

Example 4: Let

    A = [ 1  0  1
          1  2  0
          1  0  5 ]

Now

    ξ_1 = (1, 0),  P_1 = 1
    ξ_2 = (2, 0),  P_2 = 1
    ξ_3 = (5, 0),  P_3 = 1

The Gerschgorin discs are

    G_1 = { λ : |λ - 1| ≤ 1 }
    G_2 = { λ : |λ - 2| ≤ 1 }
    G_3 = { λ : |λ - 5| ≤ 1 }

[Figure: the three Gerschgorin discs of Example 4, centred at (1, 0), (2, 0) and (5, 0); G_1 and G_2 overlap, and G_3 is isolated.]

Thus every eigenvalue of A must lie in one of these three discs.

In Example 2, all the Gerschgorin discs were isolated; in Examples 3 and 4, some discs intersected and others were isolated. The next Gerschgorin result identifies the location of the eigenvalues in such cases:

(E) If m of the Gerschgorin discs intersect to form a common connected region, and the remaining discs are isolated from this region, then exactly m eigenvalues lie in this common region. In particular, if a Gerschgorin disc is isolated from all the rest, then exactly one eigenvalue lies in this disc.

Thus in Example 2 we have three isolated discs, and each disc traps exactly one eigenvalue. In Example 3, G_1 and G_2 intersected to form a connected (shaded) region which is isolated from G_3; thus the shaded region has two eigenvalues and G_3 has one eigenvalue. In Example 4, G_1 and G_2 again intersected to form a connected region isolated from G_3; thus the shaded portion has two eigenvalues and G_3 has one eigenvalue.

REMARK: In the case of Hermitian matrices, since all the eigenvalues are real, the Gerschgorin discs

    G_i = { λ : |λ - a_ii| ≤ P_i } = { λ : |λ - ξ_i| ≤ P_i }

can be replaced by the Gerschgorin intervals

    G_i = { λ : ξ_i - P_i ≤ λ ≤ ξ_i + P_i }.

Example 5: Let

    A = [  1  -1    1
          -1   5    0
           1   0  -1/2 ]

Note that A is Hermitian (in fact, real symmetric). Here

    ξ_1 = (1, 0),    P_1 = 2
    ξ_2 = (5, 0),    P_2 = 1
    ξ_3 = (-1/2, 0), P_3 = 1

Thus the Gerschgorin intervals are

    G_1 : -1 ≤ λ ≤ 3
    G_2 :  4 ≤ λ ≤ 6
    G_3 : -3/2 ≤ λ ≤ 1/2

[Figure: the three intervals on the real line from -2 to 6; G_1 and G_3 overlap, G_2 is isolated.]

Note that G_1 and G_3 intersect and give a connected region, -3/2 ≤ λ ≤ 3, and this is isolated from G_2 : 4 ≤ λ ≤ 6. Thus there will be two eigenvalues in -3/2 ≤ λ ≤ 3 and one eigenvalue in 4 ≤ λ ≤ 6.

All the above results (A), (B), (C), (D) and (E) give us a location of the eigenvalues inside some discs. If the radii of these discs are small, then the centres of the discs give good approximations to the eigenvalues; however, if the discs are of large radius, then we have to improve these approximations substantially. We shall now discuss this aspect of computing the eigenvalues more accurately, and we shall first discuss the problem of computing the eigenvalues of a real symmetric matrix.

COMPUTATION OF THE EIGENVALUES OF A REAL SYMMETRIC MATRIX

We shall first discuss the method of reducing the given matrix to a similar tridiagonal matrix, and then the computation of the eigenvalues of a real symmetric tridiagonal matrix. Thus the process of determining the eigenvalues of a real symmetric matrix A = (a_ij) involves two steps:

STEP 1: Find a real symmetric tridiagonal matrix T which is similar to A. (The eigenvalues of A will be the same as those of T, since A and T are similar.)

STEP 2: Find the eigenvalues of T.

We shall first discuss Step 2.

DETERMINATION OF THE EIGENVALUES OF A REAL SYMMETRIC TRIDIAGONAL MATRIX

Let

    T = [ a_1  b_1
          b_1  a_2  b_2
               b_2  a_3  b_3
                    ...  ...  ...
                         b_{n-2}  a_{n-1}  b_{n-1}
                                  b_{n-1}  a_n     ]

be a real symmetric tridiagonal matrix. Let P_n(λ) = det [T - λI]. The eigenvalues of T are precisely the roots of P_n(λ) = 0. (Without loss of generality we assume b_i ≠ 0 for all i; for if b_i = 0 for some i, then the above determinant reduces to two diagonal blocks of the same type, and the problem reduces to problems of the same type involving smaller matrices.)

We define P_i(λ) to be the ith principal minor of the determinant det [T - λI]. We have the recursion

    P_0(λ) = 1
    P_1(λ) = a_1 - λ
    P_i(λ) = (a_i - λ) P_{i-1}(λ) - b_{i-1}^2 P_{i-2}(λ),   i = 2, ..., n.   ..... (I)

What we are interested in is finding the zeros of P_n(λ). To do this we analyse the polynomials P_0(λ), P_1(λ), ..., P_n(λ). Let C be any real number. Compute P_0(C), P_1(C), ..., P_n(C) (which can be calculated recursively by (I)), and let N(C) denote the number of agreements in sign between two consecutive terms in this sequence of values.

[If for some i we have P_i(C) = 0, we take its sign to be the same as that of P_{i-1}(C).] Then we have:

(F) There are exactly N(C) eigenvalues of T that are ≥ C.

Example: Suppose an 8 x 8 real symmetric tridiagonal matrix T gives rise to

    P_0(1) =  1
    P_1(1) =  2
    P_2(1) = -3
    P_3(1) = -2
    P_4(1) =  6
    P_5(1) = -1
    P_6(1) =  0
    P_7(1) =  4
    P_8(1) = -2

Here P_0(1), P_1(1) agree in sign; P_2(1), P_3(1) agree in sign; and P_5(1), P_6(1) agree in sign (since P_6(1) = 0, we take its sign to be the same as that of P_5(1)). Thus three pairs of sign agreements occur, so N(1) = 3: there are 3 eigenvalues of T greater than or equal to 1, and the remaining 5 eigenvalues are < 1.

It is this idea of result (F), combined with (A), (B), (C), (D) and (E) and with clever repeated applications of (F), that locates the eigenvalues of T. We now explain this by means of an example.
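Result (F) is easy to mechanize; a minimal sketch (the helper name sturm_count is ours), applied here to the matrix of Example 7 below:

    def sturm_count(a, b, c):
        """N(c): the number of eigenvalues >= c of the real symmetric
        tridiagonal matrix with diagonal a (length n) and off-diagonal b
        (length n-1), computed via the recursion (I)."""
        seq = [1.0, a[0] - c]                        # P_0(c), P_1(c)
        for i in range(1, len(a)):
            seq.append((a[i] - c) * seq[-1] - b[i - 1] ** 2 * seq[-2])
        count, prev_sign = 0, 1                      # P_0(c) = 1 is positive
        for p in seq[1:]:
            sign = prev_sign if p == 0 else (1 if p > 0 else -1)  # zero inherits sign
            if sign == prev_sign:
                count += 1
            prev_sign = sign
        return count

    # Example 7 matrix: diagonal [1, -1, 2, 3], off-diagonal [2, 4, -1]
    print(sturm_count([1, -1, 2, 3], [2, 4, -1], 0))  # -> 3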

Example 7: Let

    T = [ 1   2   0   0
          2  -1   4   0
          0   4   2  -1
          0   0  -1   3 ]

Here we have

    absolute row sum 1 = 3
    absolute row sum 2 = 7
    absolute row sum 3 = 7
    absolute row sum 4 = 4

and therefore ||T||_∞ = MARS = 7. (Note that since T is symmetric, MARS = MACS, and therefore ||T||_1 = ||T||_∞ = 7.) Thus by our result (C), all the eigenvalues lie in the interval -7 ≤ λ ≤ 7.

[Figure: the interval [-7, 7] on the number line.]

Now the Gerschgorin intervals are as follows:

    G_1 : centre 1,  radius 2 :  G_1 : [-1, 3]
    G_2 : centre -1, radius 6 :  G_2 : [-7, 5]
    G_3 : centre 2,  radius 5 :  G_3 : [-3, 7]
    G_4 : centre 3,  radius 1 :  G_4 : [2, 4]

[Figure: the four Gerschgorin intervals on the number line from -7 to 7.]

We see that G_1, G_2, G_3 and G_4 all intersect to form one single connected region [-7, 7]. Thus by (E) there will be 4 eigenvalues in [-7, 7]. This therefore gives the same information as we obtained above using (C): so far we know only that all the eigenvalues are in [-7, 7].

Now we shall see how to use (F) to locate the eigenvalues. First of all, let us see how many eigenvalues are ≥ 0: we find N(0), and the number of eigenvalues ≥ 0 will be N(0). Now

    T - λI = [ 1-λ    2     0     0
               2     -1-λ   4     0
               0      4     2-λ  -1
               0      0    -1     3-λ ]

and the recursion (I) reads

    P_0(λ) = 1
    P_1(λ) = 1 - λ
    P_2(λ) = (-1 - λ) P_1(λ) - 4 P_0(λ)
    P_3(λ) = (2 - λ) P_2(λ) - 16 P_1(λ)
    P_4(λ) = (3 - λ) P_3(λ) - P_2(λ)

For C = 0 we get

    P_0(0) = 1,  P_1(0) = 1,  P_2(0) = -5,  P_3(0) = -26,  P_4(0) = -73.

Here P_0(0), P_1(0); P_2(0), P_3(0); and P_3(0), P_4(0) are three consecutive pairs with sign agreements, so N(0) = 3. Therefore there are 3 eigenvalues ≥ 0 and one eigenvalue < 0; that is, there are three eigenvalues in [0, 7] and one eigenvalue in [-7, 0).

[Figure 1: the number line from -7 to 7, with one eigenvalue marked in [-7, 0) and 3 eigenvalues in [0, 7].]

Let us take C = -1 and calculate N(C). We have

    P_0(-1) = 1,  P_1(-1) = 2,  P_2(-1) = -4,  P_3(-1) = -44,  P_4(-1) = -172.

Again N(-1) = 3, so there are 3 eigenvalues ≥ -1. Compare this with Figure 1.

[Figure 2: the number line; the negative eigenvalue lies somewhere in [-7, -1].]

Let us take the mid point of [-7, -1], namely C = -4:

    P_0(-4) = 1,  P_1(-4) = 5,  P_2(-4) = 11,  P_3(-4) = -14,  P_4(-4) = -109.

So N(-4) = 3, and there are 3 eigenvalues ≥ -4. Comparing with Figure 2, we get that the negative eigenvalue is in [-7, -4] .......... (*)

Let us try the mid point, C = -5.5. We have

    P_0(-5.5) = 1,  P_1(-5.5) = 6.5,  P_2(-5.5) = 25.25,  P_3(-5.5) = 85.375,  P_4(-5.5) = 700.4375.

So N(-5.5) = 4, and all 4 eigenvalues are ≥ -5.5. Combining this with (*) and Figure 2, we get that the negative eigenvalue is in [-5.5, -4]. We take the mid point C of this interval, calculate N(C), locate in which half of the interval the negative eigenvalue lies, and continue this bisection process until we trap the negative eigenvalue in as small an interval as necessary.

Now let us look at the eigenvalues ≥ 0. Let us take C = 1:

    P_0(1) = 1,  P_1(1) = 0,

    P_2(1) = -4,  P_3(1) = -4,  P_4(1) = -4.

Again there are three pairs of sign agreements (P_1(1) = 0 takes the sign of P_0(1)), so N(1) = 3, and therefore all three nonnegative eigenvalues are ≥ 1 .......... (**)

C = 2:  P_0(2) = 1,  P_1(2) = -1,  P_2(2) = -1,  P_3(2) = 16,  P_4(2) = 17.

So N(2) = 2: there are two eigenvalues ≥ 2. Combining this with (**), we get one eigenvalue in [1, 2) and two in [2, 7].

C = 3:  P_0(3) = 1,  P_1(3) = -2,  P_2(3) = 4,  P_3(3) = 28,  P_4(3) = -4.

So N(3) = 1: one eigenvalue is ≥ 3. Combining with the above observations, we get one eigenvalue in [1, 2), one eigenvalue in [2, 3) and one eigenvalue in [3, 7].

Let us locate the eigenvalue in [3, 7] a little better. Take the mid point C = 5:

    P_0(5) = 1,  P_1(5) = -4,  P_2(5) = 20,  P_3(5) = 4,  P_4(5) = -28.

So N(5) = 1, and this eigenvalue is ≥ 5, i.e. it is in [5, 7]. Take the mid point C = 6:

    P_0(6) = 1,  P_1(6) = -5,  P_2(6) = 31,  P_3(6) = -44,  P_4(6) = 101.

So N(6) = 0.

Therefore no eigenvalue is ≥ 6, and so the eigenvalue is in [5, 6). Thus, combining all of the above, we have:

    one eigenvalue in [-5.5, -4)
    one eigenvalue in [1, 2)
    one eigenvalue in [2, 3)
    one eigenvalue in [5, 6)

Each one of these locations can be further narrowed down by the bisection process applied to each of these intervals.
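The bisection just described can be wrapped around the sturm_count sketch given earlier; a minimal sketch (the helper bisect_eigenvalue and its calling convention are ours):

    def bisect_eigenvalue(a, b, lo, hi, k, tol=1e-8):
        """Shrink [lo, hi] down onto the kth smallest eigenvalue of the
        tridiagonal matrix (diagonal a, off-diagonal b), using N(c) from
        sturm_count: the kth smallest eigenvalue is >= c exactly when
        N(c) >= n - k + 1."""
        n = len(a)
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if sturm_count(a, b, mid) >= n - k + 1:
                lo = mid          # the kth smallest eigenvalue is >= mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    a, b = [1, -1, 2, 3], [2, 4, -1]
    print(bisect_eigenvalue(a, b, -7.0, 7.0, 1))  # the negative eigenvalue, in [-5.5, -4)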

We shall now discuss the method of obtaining a real symmetric tridiagonal matrix T similar to a given real symmetric matrix A.

TRIDIAGONALIZATION OF A REAL SYMMETRIC MATRIX

Let A = (a_ij) be a real symmetric n x n matrix. Our aim is to get a real symmetric tridiagonal matrix T such that T is similar to A. The process of obtaining this T is called the Givens-Householder scheme. The idea is to first find a reduction process which annihilates the off-tridiagonal entries in the first row and first column of A, and then to use this idea repeatedly. We shall first see some preliminaries.

Let U = ( U_1, U_2, ..., U_n )^t be a real n x 1 vector, U ≠ θ_n. Then H = U U^t is an n x n real symmetric matrix. Let α be a real number (which we shall suitably choose) and consider

    P = I - αH = I - α U U^t.   ..... (I)

(Note that P^t = P.) We shall choose α such that P is its own inverse, i.e. P^2 = I:

    (I - α U U^t)(I - α U U^t) = I
    I - 2α U U^t + α^2 U U^t U U^t = I.

So we choose α such that α^2 U U^t U U^t = 2α U U^t. Obviously we choose α ≠ 0, because otherwise we get P = I and no new transformation. Thus we need

    α U U^t U U^t = 2 U U^t.

But U^t U = U_1^2 + U_2^2 + ... + U_n^2 is a real number ≠ 0, and thus we have α (U^t U) U U^t = 2 U U^t, and hence

    α = 2 / (U^t U).   ..... (II)

Thus if U is an n x 1 vector different from θ_n and α is as in (II), then P defined as

    P = I - α U U^t   ..... (III)

is such that

    P = P^t = P^{-1}.   ..... (IV)

Now we go back to our problem of the tridiagonalization of A. Our first aim is to find a P of the form (IV) such that P^t A P = P A P has the off-tridiagonal entries in the 1st row and 1st column as zero. We can choose the P as follows. Let

    s^2 = a_21^2 + a_31^2 + ... + a_n1^2   ..... (V)

(the sum of the squares of the entries below the 1st diagonal entry in A), and let s be the nonnegative square root of s^2. Let

    U = ( 0, a_21 + s sgn(a_21), a_31, ..., a_n1 )^t.   ..... (VI)

Thus U is the same as the 1st column of A, except that the 1st component is taken as 0 and the second component is a modification of the second component of the 1st column of A; all other components are the same as in the 1st column of A. Then

    α = [ U^t U / 2 ]^{-1}
      = [ ( (a_21 + s sgn(a_21))^2 + a_31^2 + a_41^2 + ... + a_n1^2 ) / 2 ]^{-1}
      = [ ( a_21^2 + s^2 + 2 s |a_21| + a_31^2 + ... + a_n1^2 ) / 2 ]^{-1}
      = [ ( (a_21^2 + a_31^2 + ... + a_n1^2) + s^2 + 2 s |a_21| ) / 2 ]^{-1}
      = 2 / ( 2 s^2 + 2 s |a_21| ),

so

    α = 1 / ( s^2 + s |a_21| ).   ..... (VII)

Thus if α is as in (VII) and U is as in (VI), where s is as in (V), then P = I - α U U^t satisfies P = P^t = P^{-1}, and it can be shown that A_2 = P A_1 P = P A P (where we let A_1 = A) is similar to A and has the off-tridiagonal entries in the 1st row and 1st column equal to 0.

Now we apply this procedure to the matrix obtained by ignoring the 1st row and 1st column of A_2. Thus we now choose

    s^2 = a_32^2 + a_42^2 + ... + a_n2^2

(where now the a_ij denote the entries of A_2, i.e. s^2 is the sum of the squares of the entries below the second diagonal entry of A_2), s = the positive square root of s^2,

    U = ( 0, 0, a_32 + s sgn(a_32), a_42, ..., a_n2 )^t,

    α = 1 / ( s^2 + s |a_32| ),

    P = I - α U U^t.

Then A_3 = P A_2 P has the off-tridiagonal entries in the 1st and 2nd rows and columns as zero. We proceed similarly, annihilate all the off-tridiagonal entries, and get T, real symmetric tridiagonal and similar to A.

Note: For an n x n matrix we get the tridiagonalization in n - 2 steps.

Example:

    A = [ 5  4  1  1
          4  5  1  1
          1  1  4  2
          1  1  2  4 ]

A is a real symmetric 4 x 4 matrix; thus we get the tridiagonalization after 4 - 2 = 2 steps.

Step 1:

    s^2 = 4^2 + 1^2 + 1^2 = 18,  s = √18 = 4.24264

    α = 1 / (s^2 + s |a_21|) = 1 / (18 + (4.24264)(4)) = 1 / 34.97056 = 0.02860

    U = ( 0, a_21 + s sgn(a_21), a_31, a_41 )^t = ( 0, 8.24264, 1, 1 )^t

With this α and U we get

    P = I - α U U^t = [ 1   0         0         0
                        0  -0.94281  -0.23570  -0.23570
                        0  -0.23570   0.97140  -0.02860
                        0  -0.23570  -0.02860   0.97140 ]

    A_2 = P A P = [ 5        -4.24264   0     0
                   -4.24264   6        -1    -1
                    0        -1         3.5   1.5
                    0        -1         1.5   3.5 ]

Step 2:

    s^2 = (-1)^2 + (-1)^2 = 2,  s = √2 = 1.41421

    α = 1 / (s^2 + s |a_32|) = 1 / (2 + (1.41421)(1)) = 1 / 3.41421 = 0.29289

    U = ( 0, 0, a_32 + s sgn(a_32), a_42 )^t = ( 0, 0, -2.41421, -1 )^t

    P = I - α U U^t = [ 1  0   0         0
                        0  1   0         0
                        0  0  -0.70711  -0.70711
                        0  0  -0.70711   0.70711 ]

    A_3 = P A_2 P = [ 5        -4.24264  0        0
                     -4.24264   6        1.41421  0
                      0         1.41421  5        0
                      0         0        0        2 ]

which is tridiagonal.
Thus the Givens-Householder scheme for finding the eigenvalues involves two steps, namely:

STEP 1: Find a tridiagonal T (real symmetric) similar to A (by the method described above).

STEP 2: Find the eigenvalues of T (by the method of Sturm sequences and bisection described earlier).

However, it must be mentioned that this method is used mostly to calculate the eigenvalue of the largest modulus, or to sharpen the calculations done by some other method. If one wants to calculate all the eigenvalues at the same time, then one uses the Jacobi iteration, which we now describe.

JACOBI ITERATION FOR FINDING EIGENVALUES OF A REAL SYMMETRIC MATRIX

Some preliminaries: Let

    A = [ a_11  a_12
          a_12  a_22 ]

be a real symmetric matrix, and let

    P = [ cos θ  -sin θ
          sin θ   cos θ ]

(where we choose |θ| ≤ π/4 for purposes of convergence of the scheme). Note

    P^t = [  cos θ  sin θ
            -sin θ  cos θ ]

and P^t P = P P^t = I; thus P is an orthogonal matrix. Now

    A_1 = P^t A P,

whose entries work out to be

    (1,1) entry: a_11 cos^2 θ + 2 a_12 sin θ cos θ + a_22 sin^2 θ
    (2,2) entry: a_11 sin^2 θ - 2 a_12 sin θ cos θ + a_22 cos^2 θ
    (1,2) = (2,1) entry: (-a_11 + a_22) sin θ cos θ + a_12 (cos^2 θ - sin^2 θ)

Thus if we choose θ such that

    (-a_11 + a_22) sin θ cos θ + a_12 (cos^2 θ - sin^2 θ) = 0,

i.e.

    ((-a_11 + a_22)/2) sin 2θ + a_12 cos 2θ = 0,   ..... (I)

we get the entries in the (1,2) and (2,1) positions of A_1 as zero. (I) gives

    a_12 cos 2θ = ((a_11 - a_22)/2) sin 2θ,

so that

    tan 2θ = 2 a_12 / (a_11 - a_22) = 2 a_12 sgn(a_11 - a_22) / |a_11 - a_22| = α / β,   ..... (II)

where

    α = 2 a_12 sgn(a_11 - a_22),  β = |a_11 - a_22|.   ..... (III)

Hence

    sec^2 2θ = 1 + tan^2 2θ = 1 + α^2/β^2 = (α^2 + β^2)/β^2,   ..... (IV)

so cos^2 2θ = β^2 / (α^2 + β^2), i.e. cos 2θ = β / √(α^2 + β^2). Since 2 cos^2 θ - 1 = cos 2θ,

    cos θ = √( (1/2) [ 1 + β / √(α^2 + β^2) ] ),   ..... (V)

and since 2 sin θ cos θ = sin 2θ = √(1 - cos^2 2θ) = α / √(α^2 + β^2),

    sin θ = α / ( 2 cos θ √(α^2 + β^2) ).   ..... (VI)

(V) and (VI) give cos θ and sin θ, and if we choose

    P = [ cos θ  -sin θ
          sin θ   cos θ ]

with these values, then P^t A P = A_1 has the (2,1) and (1,2) entries as zero.

We now generalize this idea. Let A = (a_ij) be an n x n real symmetric matrix, and let 1 ≤ q < p ≤ n. (Instead of the (1,2) position above, we choose the (q,p) position.) Consider

    α = 2 a_qp sgn(a_qq - a_pp)   ..... (A)
    β = |a_qq - a_pp|   ..... (B)
    cos θ = √( (1/2) [ 1 + β / √(α^2 + β^2) ] )   ..... (C)
    sin θ = α / ( 2 cos θ √(α^2 + β^2) )   ..... (D)

and let P be the identity matrix modified in the qth and pth rows and columns:

    P_qq = cos θ,  P_qp = -sin θ,
    P_pq = sin θ,  P_pp =  cos θ,

with all other diagonal entries 1 and all other off-diagonal entries 0. Then A_1 = P^t A P has the entries in the (q,p) and (p,q) positions as zero. In fact, A_1 differs from A only in the qth and pth rows and columns, and it can be shown that the new entries are

    a1_qi = a_qi cos θ + a_pi sin θ
    a1_pi = -a_qi sin θ + a_pi cos θ      (i ≠ q, p; qth and pth rows)   ..... (E)

    a1_iq = a_iq cos θ + a_ip sin θ
    a1_ip = -a_iq sin θ + a_ip cos θ      (i ≠ q, p; qth and pth columns)   ..... (F)

    a1_qq = a_qq cos^2 θ + 2 a_qp sin θ cos θ + a_pp sin^2 θ
    a1_pp = a_qq sin^2 θ - 2 a_qp sin θ cos θ + a_pp cos^2 θ
    a1_qp = a1_pq = 0   ..... (G)

Now the Jacobi iteration is as follows. Let A = (a_ij) be n x n real symmetric. Find 1 ≤ q < p ≤ n such that |a_qp| is the largest among the absolute values of all the off-diagonal entries of A. For this q, p find P as above, and let A_1 = P^t A P. Then A_1 has 0 in the (q,p) and (p,q) positions; all rows and columns of A_1 other than the qth and pth are the same as those of A, and the qth and pth rows and columns are obtained from (E), (F) and (G). Replace A by A_1 and repeat the process. The process converges to a diagonal matrix, the diagonal entries of which give the eigenvalues of A.

Example:

    A = [ 7   3   2   1
          3   9  -2   4
          2  -2  -4   2
          1   4   2   3 ]

The off-diagonal entry with the largest modulus is at the (2,4) position, so q = 2, p = 4. Then

    α = 2 sgn(a_22 - a_44) a_24 = (2)(1)(4) = 8,  β = |a_22 - a_44| = |9 - 3| = 6,

so α^2 + β^2 = 100 and √(α^2 + β^2) = 10. Therefore

    cos θ = √( (1/2)(1 + 6/10) ) = √0.8 = 0.89443
    sin θ = α / (2 cos θ √(α^2 + β^2)) = 8 / (2 (0.89443)(10)) = 0.44721

    P = [ 1  0        0  0
          0  0.89443  0  -0.44721
          0  0        1  0
          0  0.44721  0  0.89443 ]

A_1 = P^t A P will have a1_24 = a1_42 = 0. The other entries that differ from those of A are a1_21, a1_23, a1_41, a1_43, a1_22, a1_44 (and, by symmetry, the corresponding reflected entries). We have

    a1_21 = a_21 cos θ + a_41 sin θ = 3.13050
    a1_23 = a_23 cos θ + a_43 sin θ = -0.89443
    a1_41 = -a_21 sin θ + a_41 cos θ = -0.44721
    a1_43 = -a_23 sin θ + a_43 cos θ = 2.68328
    a1_22 = a_22 cos^2 θ + 2 a_24 sin θ cos θ + a_44 sin^2 θ = 11
    a1_44 = a_22 sin^2 θ - 2 a_24 sin θ cos θ + a_44 cos^2 θ = 1

so

    A_1 = [  7        3.13050   2        -0.44721
             3.13050  11        -0.89443  0
             2       -0.89443  -4         2.68328
            -0.44721  0         2.68328   1       ]

Now we repeat the process with this matrix. The largest off-diagonal absolute value is 3.13050, at the (1,2) position, so q = 1, p = 2. We have

    β = |a_11 - a_22| = |7 - 11| = 4
    α = 2 a_12 sgn(a_11 - a_22) = 2 (3.13050)(-1) = -6.26100
    α^2 + β^2 = 55.20012,  √(α^2 + β^2) = 7.42968

    cos θ = √( (1/2)[1 + β / √(α^2 + β^2)] ) = 0.87704
    sin θ = α / (2 cos θ √(α^2 + β^2)) = -0.48043

The entries that change are

    a1_12 = a1_21 = 0
    a1_13 = a_13 cos θ + a_23 sin θ = 2.18378
    a1_23 = -a_13 sin θ + a_23 cos θ = 0.17641
    a1_14 = a_14 cos θ + a_24 sin θ = -0.39222
    a1_24 = -a_14 sin θ + a_24 cos θ = -0.21485
    a1_11 = a_11 cos^2 θ + 2 a_12 sin θ cos θ + a_22 sin^2 θ = 5.28516
    a1_22 = a_11 sin^2 θ - 2 a_12 sin θ cos θ + a_22 cos^2 θ = 12.71484

and the new matrix is

    [ 5.28516   0         2.18378  -0.39222
      0         12.71484  0.17641  -0.21485
      2.18378   0.17641  -4         2.68328
     -0.39222  -0.21485   2.68328   1       ]

Now we repeat with q = 3, p = 4, and so on. At the 12th step we get the diagonal matrix

    [ 5.78305   0         0         0
      0         12.71986  0         0
      0         0        -5.60024   0
      0         0         0         2.09733 ]

giving the eigenvalues of A as 5.78305, 12.71986, -5.60024 and 2.09733.

Note: At each stage, when we choose the (q,p) position and apply the above transformation to get the new matrix A_1, the sum of the squares of the off-diagonal entries of A_1 is less than that of A by 2 a_qp^2.
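A minimal NumPy sketch of this iteration (the helper name jacobi_eigenvalues is ours; each pass annihilates the largest off-diagonal entry using formulas (A)-(D), with sgn(0) taken as +1):

    import numpy as np

    def jacobi_eigenvalues(A, tol=1e-10, max_iter=500):
        A = np.array(A, dtype=float)
        n = A.shape[0]
        for _ in range(max_iter):
            off = np.abs(A) - np.diag(np.abs(np.diag(A)))
            q, p = divmod(off.argmax(), n)           # position of largest |a_qp|
            if q > p:
                q, p = p, q                          # ensure q < p
            if off[q, p] < tol:
                break
            d = A[q, q] - A[p, p]
            alpha = 2.0 * A[q, p] * (1.0 if d >= 0 else -1.0)   # (A)
            beta = abs(d)                                       # (B)
            r = np.hypot(alpha, beta)
            c = np.sqrt(0.5 * (1.0 + beta / r))                 # cos(theta), (C)
            s = alpha / (2.0 * c * r)                           # sin(theta), (D)
            P = np.eye(n)
            P[q, q] = P[p, p] = c
            P[q, p], P[p, q] = -s, s
            A = P.T @ A @ P
        return np.diag(A)

    A = [[7, 3, 2, 1], [3, 9, -2, 4], [2, -2, -4, 2], [1, 4, 2, 3]]
    print(np.round(jacobi_eigenvalues(A), 5))  # approx. 5.78305, 12.71986, -5.60024, 2.09733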

THE QR DECOMPOSITION

Let A be an n x n real nonsingular matrix. Then we can find an orthogonal matrix Q and an upper triangular matrix R (with r_ii > 0) such that

    A = Q R,

called the QR decomposition of A. The Q and R are found as follows. Let a^(1), a^(2), ..., a^(n) be the columns of A; let q^(1), q^(2), ..., q^(n) be the columns of Q; and let r^(1), r^(2), ..., r^(n) be the columns of R. Note that, since Q is orthogonal, we have

    ||q^(i)||_2 = 1 for every i,   ..... (A)
    ( q^(i), q^(j) ) = 0 if i ≠ j,   ..... (B)

and, since R is upper triangular, we have

    r^(i) = ( r_1i, r_2i, ..., r_ii, 0, ..., 0 )^t.   ..... (C)

Also, the ith column of QR is Q r^(i), so

    ith column of QR = r_1i q^(1) + r_2i q^(2) + ... + r_ii q^(i).   ..... (D)

We want A = QR. Comparing 1st columns on both sides, we get

    a^(1) = 1st column of QR = Q r^(1) = r_11 q^(1)   by (D),

so ||a^(1)||_2 = r_11 ||q^(1)||_2 = r_11, since r_11 > 0 and ||q^(1)||_2 = 1 by (A). Therefore

    r_11 = ||a^(1)||_2  and  q^(1) = (1/r_11) a^(1),   ..... (E)

giving the 1st columns of R and Q. Next, comparing second columns on both sides, we get

    a^(2) = Q r^(2) = r_12 q^(1) + r_22 q^(2).   ..... (*)

Taking the inner product of (*) with q^(1), we get

    ( a^(2), q^(1) ) = r_12 ( q^(1), q^(1) ) + r_22 ( q^(2), q^(1) ) = r_12,

since ||q^(1)||_2 = 1 by (A) and ( q^(2), q^(1) ) = 0 by (B). Therefore

    r_12 = ( a^(2), q^(1) ).   ..... (F)

Then (*) gives r_22 q^(2) = a^(2) - r_12 q^(1), and hence

    r_22 = ||a^(2) - r_12 q^(1)||_2   ..... (G)

and

    q^(2) = (1/r_22) [ a^(2) - r_12 q^(1) ].   ..... (H)

(F), (G) and (H) give the 2nd columns of Q and R. We can proceed in this way: having got the first i - 1 columns of Q and R, we get the ith columns of Q and R as follows:

    r_1i = ( a^(i), q^(1) ),  r_2i = ( a^(i), q^(2) ),  ...,  r_{i-1,i} = ( a^(i), q^(i-1) ),
    r_ii = || a^(i) - r_1i q^(1) - r_2i q^(2) - ... - r_{i-1,i} q^(i-1) ||_2,
    q^(i) = (1/r_ii) [ a^(i) - r_1i q^(1) - r_2i q^(2) - ... - r_{i-1,i} q^(i-1) ].

Example:

    A = [ 1  2  1
          1  0  1
          0  1  1 ]

1st column of Q and R:

    r_11 = ||a^(1)||_2 = √(1^2 + 1^2) = √2

    q^(1) = (1/r_11) a^(1) = ( 1/√2, 1/√2, 0 )^t

2nd column of Q and R:

    r_12 = ( a^(2), q^(1) ) = ( (2, 0, 1)^t , (1/√2, 1/√2, 0)^t ) = 2/√2 = √2

    a^(2) - r_12 q^(1) = (2, 0, 1)^t - (1, 1, 0)^t = (1, -1, 1)^t

    r_22 = ||a^(2) - r_12 q^(1)||_2 = √3

    q^(2) = (1/√3) (1, -1, 1)^t = ( 1/√3, -1/√3, 1/√3 )^t

3rd column of Q and R:

    r_13 = ( a^(3), q^(1) ) = 2/√2 = √2

    r_23 = ( a^(3), q^(2) ) = (1 - 1 + 1)/√3 = 1/√3

    a^(3) - r_13 q^(1) - r_23 q^(2) = (1, 1, 1)^t - (1, 1, 0)^t - (1/3, -1/3, 1/3)^t = ( -1/3, 1/3, 2/3 )^t

    r_33 = √( 1/9 + 1/9 + 4/9 ) = √(2/3)

    q^(3) = (1/r_33) ( -1/3, 1/3, 2/3 )^t = ( -1/√6, 1/√6, 2/√6 )^t

Therefore

    R = [ √2   √2     √2
          0    √3     1/√3
          0    0      √(2/3) ]

    Q = [ 1/√2   1/√3  -1/√6
          1/√2  -1/√3   1/√6
          0      1/√3   2/√6 ]

and

    Q R = [ 1  2  1
            1  0  1
            0  1  1 ] = A,

giving us the QR decomposition of A.
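The column-by-column construction above is exactly classical Gram-Schmidt; a minimal NumPy sketch (the helper name qr_decompose is ours):

    import numpy as np

    def qr_decompose(A):
        """Build Q and R column by column, with r_ii > 0, as in (E)-(H)."""
        A = np.array(A, dtype=float)
        n = A.shape[0]
        Q = np.zeros((n, n))
        R = np.zeros((n, n))
        for i in range(n):
            v = A[:, i].copy()
            for j in range(i):
                R[j, i] = A[:, i] @ Q[:, j]   # r_ji = (a^(i), q^(j))
                v -= R[j, i] * Q[:, j]
            R[i, i] = np.linalg.norm(v)       # r_ii = || ... ||_2
            Q[:, i] = v / R[i, i]
        return Q, R

    Q, R = qr_decompose([[1, 2, 1], [1, 0, 1], [0, 1, 1]])
    print(np.allclose(Q @ R, [[1, 2, 1], [1, 0, 1], [0, 1, 1]]))  # True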

QR ALGORITHM

Let A be any nonsingular n x n matrix. Let A = A_1 = Q_1 R_1 be its QR decomposition, and let A_2 = R_1 Q_1. Then find the QR decomposition of A_2, namely A_2 = Q_2 R_2, and define A_3 = R_2 Q_2; find the QR decomposition of A_3 as A_3 = Q_3 R_3; and keep repeating the process. Thus

    A_1 = Q_1 R_1,  A_2 = R_1 Q_1,

and the ith step is

    A_i = R_{i-1} Q_{i-1},  A_i = Q_i R_i.

Then A_i 'converges' to an upper triangular matrix, exhibiting the eigenvalues of A along the diagonal.
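A minimal sketch of the iteration, reusing the qr_decompose sketch above (numpy.linalg.qr would serve equally well, up to sign conventions):

    import numpy as np

    A = np.array([[7, 3, 2, 1],
                  [3, 9, -2, 4],
                  [2, -2, -4, 2],
                  [1, 4, 2, 3]], dtype=float)
    for _ in range(500):
        Q, R = qr_decompose(A)
        A = R @ Q                   # A_{i+1} = R_i Q_i is similar to A_i
    print(np.round(np.diag(A), 4))  # approaches the eigenvalues found by Jacobi above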
