
3.2.1 Example of Gaussian elimination

We wish to solve the following matrix equation by Gaussian elimination:

$$
\begin{bmatrix}
11 & 17 & 18 & 16\\
23 & 27 & 25 & 28\\
22 & 32 & 34 & 36\\
12 & 15 & 41 & 36
\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4 \end{bmatrix}
=
\begin{bmatrix} 10\\ 20\\ 30\\ 40 \end{bmatrix}
\tag{3.31}
$$

Towards this goal, we proceed with the different steps as follows:

Step M1

On equation (3.31), the operation R1/A(1, 1) (where A(1, 1) = 11) is performed to yield,

$$
\begin{bmatrix}
1 & 1.5455 & 1.6364 & 1.4545\\
23 & 27 & 25 & 28\\
22 & 32 & 34 & 36\\
12 & 15 & 41 & 36
\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4 \end{bmatrix}
=
\begin{bmatrix} 0.9091\\ 20\\ 30\\ 40 \end{bmatrix}
\tag{3.32}
$$

Step M2

On equation (3.32), the operations (R2 − R1 ∗ A(2, 1)), (R3 − R1 ∗ A(3, 1)) and (R4 − R1 ∗ A(4, 1)) (where A(2, 1) = 23, A(3, 1) = 22 and A(4, 1) = 12) are carried out, and the resulting matrix equation is given by:

$$
\begin{bmatrix}
1 & 1.5455 & 1.6364 & 1.4545\\
0 & -8.5455 & -12.6364 & -5.4545\\
0 & -2.0 & -2.0 & 4.0\\
0 & -3.5455 & 21.3636 & 18.5455
\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4 \end{bmatrix}
=
\begin{bmatrix} 0.9091\\ -0.9091\\ 10\\ 29.0909 \end{bmatrix}
\tag{3.33}
$$

Step M3

On equation (3.33), the operation R2/A(2, 2) (where A(2, 2) = −8.5455) is carried out to get:

$$
\begin{bmatrix}
1 & 1.5455 & 1.6364 & 1.4545\\
0 & 1 & 1.4787 & 0.6383\\
0 & -2.0 & -2.0 & 4.0\\
0 & -3.5455 & 21.3636 & 18.5455
\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4 \end{bmatrix}
=
\begin{bmatrix} 0.9091\\ 0.1064\\ 10\\ 29.0909 \end{bmatrix}
\tag{3.34}
$$

Step M4

On equation (3.34), the operations (R3 − R2 ∗ A(3, 2)) and (R4 − R2 ∗ A(4, 2)) (where A(3, 2) = −2.0 and A(4, 2) = −3.5455) are carried out to obtain:

$$
\begin{bmatrix}
1 & 1.5455 & 1.6364 & 1.4545\\
0 & 1 & 1.4787 & 0.6383\\
0 & 0 & 0.9574 & 5.2766\\
0 & 0 & 26.6065 & 20.8085
\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4 \end{bmatrix}
=
\begin{bmatrix} 0.9091\\ 0.1064\\ 10.2128\\ 29.4681 \end{bmatrix}
\tag{3.35}
$$

Step M5

On equation (3.35), the operation R3/A(3, 3) (where A(3, 3) = 0.9574) is carried out to obtain the matrix equation shown below:

$$
\begin{bmatrix}
1 & 1.5455 & 1.6364 & 1.4545\\
0 & 1 & 1.4787 & 0.6383\\
0 & 0 & 1 & 5.5114\\
0 & 0 & 26.6065 & 20.8085
\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4 \end{bmatrix}
=
\begin{bmatrix} 0.9091\\ 0.1064\\ 10.6672\\ 29.4681 \end{bmatrix}
\tag{3.36}
$$

Step M6

Lastly, on equation (3.36), the operation (R4 − R3 ∗ A(4, 3)) (where A(4, 3) = 26.6065) is carried out to get:

$$
\begin{bmatrix}
1 & 1.5455 & 1.6364 & 1.4545\\
0 & 1 & 1.4787 & 0.6383\\
0 & 0 & 1 & 5.5114\\
0 & 0 & 0 & -125.83
\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4 \end{bmatrix}
=
\begin{bmatrix} 0.9091\\ 0.1064\\ 10.6672\\ -254.3484 \end{bmatrix}
\tag{3.37}
$$

In equation (3.37), the co-efficient matrix has been converted to an upper-triangular matrix. From the last row of this equation, x4 can be calculated as x4 = 254.3484/125.83 = 2.0214. Back substituting this value of x4 in the third row of equation (3.37), one can obtain x3 = 10.6672 − 5.5114 × 2.0214 = −0.4735. Similarly, substitution of the values of x3 and x4 in the second row of equation (3.37) yields the value of x2 as x2 = 0.1064 + 1.4787 × 0.4735 − 0.6383 × 2.0214 = −0.4837. Lastly, substitution of the values of x2, x3 and x4 in the first row of equation (3.37) gives x1 = 0.9091 + 1.5455 × 0.4837 + 1.6364 × 0.4735 − 1.4545 × 2.0214 = −0.5086.
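The whole procedure of steps M1 to M6 followed by back substitution can be reproduced with a short script. The following is a minimal sketch (an illustration added here, not part of the original text): it applies the same normalize-and-eliminate row operations to equation (3.31) without any pivoting, which suffices here because every pivot encountered above is non-zero.

```python
import numpy as np

# Coefficient matrix and right-hand side of equation (3.31).
A = np.array([[11., 17., 18., 16.],
              [23., 27., 25., 28.],
              [22., 32., 34., 36.],
              [12., 15., 41., 36.]])
b = np.array([10., 20., 30., 40.])
n = len(b)

# Forward elimination (steps M1-M6): normalize the pivot row,
# then subtract its multiples from all rows below it.
for k in range(n):
    pivot = A[k, k]
    A[k, :] /= pivot                      # R_k / A(k, k)
    b[k] /= pivot
    for i in range(k + 1, n):
        factor = A[i, k]
        A[i, :] -= factor * A[k, :]       # R_i - R_k * A(i, k)
        b[i] -= factor * b[k]

# Back substitution on the upper-triangular system; the diagonal
# entries are already 1 because each pivot row was normalized.
x = np.zeros(n)
for i in range(n - 1, -1, -1):
    x[i] = b[i] - A[i, i + 1:] @ x[i + 1:]

print(x)  # approximately [-0.5086 -0.4837 -0.4735  2.0214]
```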

3.3 Optimal order of elimination

We have seen that the Gaussian elimination method is quite effective for solving a large set of sparse linear equations without having to invert the co-efficient matrix. Moreover, if the calculations pertaining to Gaussian elimination are carried out using only the non-zero terms, a great saving in the computational burden can be achieved. However, at any stage of the elimination, an original zero element may be converted into a non-zero element; this is normally termed the 'fill-in' phenomenon. On the other hand, if the elimination process is carried out in an appropriate order, instead of following the normal sequence, the occurrence of 'fill-in' can be avoided to a great extent. A simple example given below illustrates this point.

In equation (3.38), an initial co-efficient matrix is shown at the left-hand side (part 'a') and the structure of the co-efficient matrix after step 1 is shown at the right-hand side (part 'b'). It is to be noted that in this equation, only the positions of non-zero terms (denoted by '×') and zero terms (denoted by 'o') are shown, the rows and columns being numbered 1, 2, 3, 4.

$$
\text{a) Initial } A \text{ matrix: }
\begin{bmatrix}
\times & \times & \times & \times\\
\times & \times & o & o\\
\times & o & \times & o\\
\times & o & o & \times
\end{bmatrix}
\qquad
\text{b) } A \text{ matrix after step 1: }
\begin{bmatrix}
1 & \times & \times & \times\\
o & \times & \otimes & \otimes\\
o & \otimes & \times & \otimes\\
o & \otimes & \otimes & \times
\end{bmatrix}
\tag{3.38}
$$

As can be seen in equation (3.38), after step 1 all the original zero elements have been converted to non-zero terms (denoted by '⊗'); a significant level of 'fill-in' has occurred. However, if the original co-efficient matrix shown in part (a) of equation (3.38) is re-arranged as shown in part (a) of equation (3.39), with the rows and columns taken in the order 4, 3, 2, 1, then after step 1 there would be no 'fill-in', as can be observed in part (b) of equation (3.39).

$$
\text{a) Rearranged } A \text{ matrix: }
\begin{bmatrix}
\times & o & o & \times\\
o & \times & o & \times\\
o & o & \times & \times\\
\times & \times & \times & \times
\end{bmatrix}
\qquad
\text{b) Rearranged } A \text{ matrix after step 1: }
\begin{bmatrix}
1 & o & o & \times\\
o & \times & o & \times\\
o & o & \times & \times\\
o & \times & \times & \times
\end{bmatrix}
\tag{3.39}
$$

From the above example it is apparent that, if the rows are eliminated in an 'optimal order' instead of the normal sequence, the number of 'fill-ins' would be minimum. However, an ideal 'optimal order' is very difficult, and perhaps impossible, to develop. As an alternative, various 'near optimal ordering' schemes have been developed. Some of them are discussed below; a short code sketch of scheme 1 follows the descriptions.

Scheme 1

In this scheme, the rows of the co-efficient matrix A are numbered according to their number of non-zero off-diagonal terms before elimination. Thus, the rows with only one non-zero off-diagonal term are numbered first, those with two non-zero off-diagonal terms second, and so on. This scheme does not take into account the changes occurring in the co-efficient matrix during the elimination process. However, it is quite easy and straightforward to implement.

Scheme 2

In this scheme, the rows of the co-efficient matrix A are numbered such that, at each step of the elimination procedure, the row with the fewest non-zero off-diagonal terms is operated upon next. If more than one row meets this criterion, any one of them is chosen. This scheme requires simulation of the elimination procedure to estimate the changes occurring in the co-efficient matrix in advance. Therefore, it takes a longer time than scheme 1 to compute the solution, but is definitely better than scheme 1.

Scheme 3

In this scheme, the rows are numbered in such a way that the row which will introduce the fewest new non-zero off-diagonal terms is operated upon next. If more than one row satisfies this criterion, any one of them is chosen. Again, this scheme also requires simulation of the elimination process to study its effects on the co-efficient matrix in advance. Hence, this method also takes a longer time than scheme 1.
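As an illustration of scheme 1, the sketch below (added here, not from the original text) numbers the rows of the structure in equation (3.38) by their static count of non-zero off-diagonal terms: rows 2 to 4, with one off-diagonal term each, come first, and the full row 1 comes last, which reproduces the re-arrangement of equation (3.39). Schemes 2 and 3 would instead recompute their counts after every elimination step.

```python
import numpy as np

def scheme1_order(A, tol=1e-12):
    """Scheme 1: number the rows once, before elimination, in increasing
    order of their non-zero off-diagonal count (diagonal assumed non-zero)."""
    counts = [(np.abs(A[i]) > tol).sum() - 1 for i in range(A.shape[0])]
    return sorted(range(A.shape[0]), key=lambda i: counts[i])

# Structure of the initial matrix in equation (3.38): 1 marks a non-zero term.
A = np.array([[1, 1, 1, 1],
              [1, 1, 0, 0],
              [1, 0, 1, 0],
              [1, 0, 0, 1]], dtype=float)

order = scheme1_order(A)
print(order)                    # [1, 2, 3, 0]: the dense row is eliminated last
print(A[np.ix_(order, order)])  # re-ordered structure; compare equation (3.39)
```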

3.4 Triangular factorization

Let us now look at another technique for solving a set of linear equations without the need of inverting the co-efficient matrix, namely, triangular factorization or LU decomposition. In the triangular factorization (or decomposition) method, a square matrix A is expressed as a product of two triangular matrices as A = LU, where L is a lower triangular matrix and U is an upper triangular matrix. As an example, let the matrix A be a 4 × 4 (N = 4) matrix. Upon triangular factorization (or 'LU' decomposition), the matrix A is represented as,

$$
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14}\\
a_{21} & a_{22} & a_{23} & a_{24}\\
a_{31} & a_{32} & a_{33} & a_{34}\\
a_{41} & a_{42} & a_{43} & a_{44}
\end{bmatrix}
=
\begin{bmatrix}
\alpha_{11} & 0 & 0 & 0\\
\alpha_{21} & \alpha_{22} & 0 & 0\\
\alpha_{31} & \alpha_{32} & \alpha_{33} & 0\\
\alpha_{41} & \alpha_{42} & \alpha_{43} & \alpha_{44}
\end{bmatrix}
\times
\begin{bmatrix}
\beta_{11} & \beta_{12} & \beta_{13} & \beta_{14}\\
0 & \beta_{22} & \beta_{23} & \beta_{24}\\
0 & 0 & \beta_{33} & \beta_{34}\\
0 & 0 & 0 & \beta_{44}
\end{bmatrix}
\tag{3.40}
$$

With this decomposition, the equation Ax = b can be written as Ax = b, or (LU)x = b, or L(Ux) = b. Or,

$$
\mathbf{L}\mathbf{y} = \mathbf{b}
\tag{3.41}
$$

where y = Ux is an intermediate vector. Expanding equation (3.41) we get,

$$
\begin{bmatrix}
\alpha_{11} & 0 & 0 & 0\\
\alpha_{21} & \alpha_{22} & 0 & 0\\
\alpha_{31} & \alpha_{32} & \alpha_{33} & 0\\
\alpha_{41} & \alpha_{42} & \alpha_{43} & \alpha_{44}
\end{bmatrix}
\begin{bmatrix} y_1\\ y_2\\ y_3\\ y_4 \end{bmatrix}
=
\begin{bmatrix} b_1\\ b_2\\ b_3\\ b_4 \end{bmatrix}
\tag{3.42}
$$

From equation (3.42), the intermediate vector y can be calculated as,

$$
y_1 = \frac{b_1}{\alpha_{11}}
\tag{3.43}
$$

$$
y_i = \frac{1}{\alpha_{ii}}\left[b_i - \sum_{j=1}^{i-1}\alpha_{ij}\, y_j\right], \quad i = 2,\, 3,\, \cdots\, N
\tag{3.44}
$$
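Equations (3.43) and (3.44) amount to a forward substitution down the rows of L. A minimal sketch of such a routine is given below (an illustration added here, with L assumed to be a dense lower-triangular numpy array):

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L y = b for the intermediate vector y, per equations
    (3.43)-(3.44); L is lower triangular with non-zero diagonal."""
    n = len(b)
    y = np.zeros(n)
    y[0] = b[0] / L[0, 0]                           # equation (3.43)
    for i in range(1, n):
        # equation (3.44): subtract the already-computed terms, then scale
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

# Small hypothetical check with L = [[2, 0], [1, 3]] and b = [4, 7].
y = forward_substitution(np.array([[2., 0.], [1., 3.]]), np.array([4., 7.]))
print(y)  # [2.0, 1.6667], since y1 = 4/2 and y2 = (7 - 1*2)/3
```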

Again, expanding the expression y = Ux we get,

$$
\begin{bmatrix}
\beta_{11} & \beta_{12} & \beta_{13} & \beta_{14}\\
0 & \beta_{22} & \beta_{23} & \beta_{24}\\
0 & 0 & \beta_{33} & \beta_{34}\\
0 & 0 & 0 & \beta_{44}
\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4 \end{bmatrix}
=
\begin{bmatrix} y_1\\ y_2\\ y_3\\ y_4 \end{bmatrix}
\tag{3.45}
$$

Now, from equation (3.45), with the knowledge of the intermediate vector y, the solution vector x can be calculated as,

$$
x_N = \frac{y_N}{\beta_{NN}}; \qquad
x_i = \frac{1}{\beta_{ii}}\left[y_i - \sum_{j=i+1}^{N}\beta_{ij}\, x_j\right], \quad i = (N-1),\, (N-2),\, \cdots\, 1
\tag{3.46}
$$

We will now look into the basic procedure of obtaining the 'LU' decomposition in the next lecture.
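Before that, note that equation (3.46) can likewise be transcribed as a back substitution up the rows of U. The sketch below (an added illustration, not the author's code) applies it to the upper-triangular system of equation (3.37) from the worked example and recovers the same solution:

```python
import numpy as np

def back_substitution(U, y):
    """Solve U x = y for the solution vector x, per equation (3.46);
    U is upper triangular with non-zero diagonal entries."""
    n = len(y)
    x = np.zeros(n)
    x[-1] = y[-1] / U[-1, -1]                       # x_N = y_N / beta_NN
    for i in range(n - 2, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Upper-triangular system of equation (3.37).
U = np.array([[1., 1.5455, 1.6364, 1.4545],
              [0., 1.,     1.4787, 0.6383],
              [0., 0.,     1.,     5.5114],
              [0., 0.,     0.,  -125.83]])
y = np.array([0.9091, 0.1064, 10.6672, -254.3484])

print(back_substitution(U, y))  # approximately [-0.5086 -0.4837 -0.4735  2.0214]
```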