Numerical Methods
3.1 Solving Systems of Linear Algebraic Equations
Gauss Elimination
Mechanical Engineering Department
Middle East Technical University
Ankara, Turkey
Ali Karakus (akarakus@metu.edu.tr)
These presentations were adapted from lecture notes of Dr. Cüneyt Sert, Dr. Sezer Özerinç and Altuğ
Özçelikkale.
They cannot be used and/or modified without the permission of the authors.

4.8*1.168552478 - 8.6*0.01268045242 = 5.500000004

- Multiply the 1st eqn. by a21/a11 & subtract it from the 2nd equation. This is the new 2nd eqn.
- Multiply the 1st eqn. by a31/a11 & subtract it from the 3rd equation. This is the new 3rd eqn.
...
- Multiply the 1st eqn. by an1/a11 & subtract it from the nth equation. This is the new nth eqn.


...
(2.n) Substitute all previously calculated x values into the 1st eqn and solve for x1.
Example 2: Solve the following system using Naive Gauss Elimination.
6x1 - 2x2 + 2x3 + 4x4 = 16
12x1 - 8x2 + 6x3 + 10x4 = 26
3x1 - 13x2 + 9x3 + 3x4 = -19
-6x1 + 4x2 + x3 - 18x4 = -34

Naive Gauss Elimination Method
End of Step 1: An upper triangular coefficient matrix is obtained.
6 –2 2 4 | 16
0 –4 2 2 | -6
0 0 2 -5 | -9
0 0 0 -3 | -3
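Step 2 (back substitution) can be checked numerically. Below is a minimal Python sketch (the course exercises use MATLAB; this is only an illustration) that back-substitutes through the upper triangular system shown above:

```python
# Back substitution on the upper triangular system obtained above.
# U is the triangular coefficient matrix, b the modified right-hand side.
U = [[6.0, -2.0, 2.0, 4.0],
     [0.0, -4.0, 2.0, 2.0],
     [0.0, 0.0, 2.0, -5.0],
     [0.0, 0.0, 0.0, -3.0]]
b = [16.0, -6.0, -9.0, -3.0]

n = len(b)
x = [0.0] * n
for i in range(n - 1, -1, -1):                       # start from the last equation
    s = sum(U[i][j] * x[j] for j in range(i + 1, n))  # already-known unknowns
    x[i] = (b[i] - s) / U[i][i]

print(x)  # -> [3.0, 1.0, -2.0, 1.0]
```

This reproduces the solution x1 = 3, x2 = 1, x3 = -2, x4 = 1.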

Step 1 - Forward Elimination
LOOP k = 1 to n-1
    LOOP i = k+1 to n
        FACTOR = A(i,k) / A(k,k)
        LOOP j = k to n
            A(i,j) = A(i,j) - FACTOR * A(k,j)
        ENDLOOP
        B(i) = B(i) - FACTOR * B(k)
    ENDLOOP
ENDLOOP

Step 2 - Back Substitution
X(n) = B(n) / A(n,n)
LOOP i = n-1 down to 1
    SUM = 0
    LOOP j = i+1 to n
        SUM = SUM + A(i,j) * X(j)
    ENDLOOP
    X(i) = (B(i) - SUM) / A(i,i)
ENDLOOP

Note that a division by zero occurs if the pivot element A(k,k) is zero. Naive Gauss
Elimination does not check for this potential problem.
Exercise 3: Implement the above pseudocode in MATLAB. Write a main program and
two functions for the forward elimination and the back substitution.
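The exercise asks for MATLAB; as a companion, here is a Python sketch of the same pseudocode. Example 2 as printed here shows only two of its four equations, so the usage below assumes the full system is the 4 x 4 matrix that reappears in the scaled partial pivoting example later in these notes (its forward elimination reproduces the upper triangular matrix shown above):

```python
# Naive Gauss elimination: forward elimination + back substitution,
# following the pseudocode above (no pivoting, so the pivots must stay nonzero).
def naive_gauss(A, b):
    n = len(b)
    A = [row[:] for row in A]       # work on copies
    b = b[:]
    # Step 1: forward elimination
    for k in range(n - 1):                  # pivot row
        for i in range(k + 1, n):           # rows below the pivot
            factor = A[i][k] / A[k][k]      # fails if the pivot is zero
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # Step 2: back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[6, -2, 2, 4], [12, -8, 6, 10], [3, -13, 9, 3], [-6, 4, 1, -18]]
b = [16, 26, -19, -34]
print(naive_gauss(A, b))  # -> [3.0, 1.0, -2.0, 1.0]
```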

The operation count for forward elimination uses the identities

Σ k² (k = 1 to n) = n(n+1)(2n+1)/6 = n³/3 + O(n²)
Σ k (k = 1 to n) = n(n+1)/2 = n²/2 + O(n)

Summing the additions/subtractions over all elimination steps:

∑±(FE) = n³/3 + O(n)

A similar analysis gives ∑×÷(FE) = n³/3 + O(n²)

Operation Count for Naive GE Method
The total number of FLOPs during forward elimination is:

∑±×÷(FE) = 2n³/3 + O(n²)

The FLOP count of Step 2, back substitution, is simpler to calculate:

∑±×÷(BS) = (∑±) + (∑×÷) = n(n-1)/2 + n(n+1)/2 = n²
• As the size of the system, i.e. n, increases, the computational effort increases rapidly.
• Forward elimination constitutes most of the required FLOPs.
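The two counts above can be checked empirically. The sketch below (my own accounting of the loops in the pseudocode, counting one FLOP per add/subtract and per multiply/divide, right-hand side included) confirms that the totals approach 2n³/3 for forward elimination and n² for back substitution:

```python
# Count FLOPs in naive Gauss elimination and compare with the
# leading-order estimates 2n^3/3 (forward elimination) and n^2 (back substitution).
def flop_count(n):
    fe = 0
    for k in range(n - 1):
        for i in range(k + 1, n):
            # 1 division for the factor, then (n - k) multiply-subtract pairs
            # (columns k+1 .. n of A plus the right-hand side b)
            fe += 1 + 2 * (n - k)
    bs = 0
    for i in range(n - 1, -1, -1):
        # (n - 1 - i) multiplies and adds, then 1 subtraction and 1 division
        bs += 2 * (n - 1 - i) + 2
    return fe, bs

for n in (10, 100, 1000):
    fe, bs = flop_count(n)
    print(n, fe / (2 * n**3 / 3), bs / n**2)   # both ratios approach 1
```

As n grows, the forward-elimination share of the total work dominates, which is the point of the two bullets above.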


• Round-off errors
• Ill-conditioned systems
• Zero pivot elements may be created during the forward elimination step even if they are
not present in the original matrix.
• Pivoting is used to avoid this problem. We interchange rows at each step to put the
coefficient with the largest magnitude on the diagonal.
• In addition to avoiding the division by zero problem, pivoting also reduces round-off
errors since it tends to eliminate divisions by small numbers. Therefore pivoting makes the
solution of ill-conditioned systems easier.
• Complete pivoting uses both row and column interchanging. It is not used frequently,
because column interchanges reorder the unknowns, which must be tracked and undone at
the end, and the larger pivot search rarely pays off.
• Partial pivoting uses only row interchanging. We will use this approach.
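Partial pivoting amounts to one extra search-and-swap before each elimination step. A Python sketch (illustrative; the course uses MATLAB), applied to the example that follows:

```python
# Gauss elimination with partial pivoting: before eliminating column k,
# swap in the row with the largest |A[i][k]| among rows i >= k.
def gauss_pivot(A, b):
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # partial pivoting: largest magnitude in column k, rows k..n-1
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[0, 2, 0, 1], [2, 2, 3, 2], [4, -3, 0, 1], [6, 1, -6, -5]]
b = [0, -2, -7, 6]
x = gauss_pivot(A, b)
print(x)  # approximately [-0.5, 1.0, 0.3333, -2.0]
```

Note that naive elimination would fail immediately on this system, since a11 = 0.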

Partial Pivoting
Example: Solve the following system using Gauss Elimination with pivoting.
Original System:
2x2 + x4 = 0
2x1 + 2x2 + 3x3 + 2x4 = -2
4x1 - 3x2 + x4 = -7
6x1 + x2 - 6x3 - 5x4 = 6

Step 0: Form the augmented matrix.
0  2  0  1 | 0
2  2  3  2 | -2
4 -3  0  1 | -7
6  1 -6 -5 | 6

(1.1) a11 = 0, so eliminating x1 with it would divide by zero. Interchange the 1st and
4th rows (6 has the largest magnitude in the 1st column):
6  1 -6 -5 | 6
2  2  3  2 | -2
4 -3  0  1 | -7
0  2  0  1 | 0

Eliminate x1 from the 2nd, 3rd and 4th eqns:
6  1       -6  -5      | 6
0  1.6667   5   3.6667 | -4
0 -3.6667   4   4.3333 | -11
0  2        0   1      | 0

Partial Pivoting
(1.2) Eliminate x2 from the 3rd and 4th eqns. There is no division by zero problem but pivoting will
still be performed to reduce round-off errors. Interchange the 2nd and 3rd rows. Complete pivoting
would have interchanged 2nd and 3rd columns.
Before pivoting:
6  1       -6  -5      | 6
0  1.6667   5   3.6667 | -4
0 -3.6667   4   4.3333 | -11
0  2        0   1      | 0

After pivoting:
6  1       -6  -5      | 6
0 -3.6667   4   4.3333 | -11
0  1.6667   5   3.6667 | -4
0  2        0   1      | 0

Eliminate x2:
6  1       -6       -5      | 6
0 -3.6667   4        4.3333 | -11
0  0        6.8182   5.6364 | -9.0001
0  0        2.1818   3.3636 | -5.9999

(1.3) The 3rd-row pivot candidate 6.8182 already has the largest magnitude, so no
interchange is needed. Eliminate x3 from the 4th eqn.

Partial Pivoting
The upper triangular matrix is obtained:
6  1       -6       -5      | 6
0 -3.6667   4        4.3333 | -11
0  0        6.8182   5.6364 | -9.0001
0  0        0        1.5600 | -3.1199
Scaling
• Scaling is normalizing the equations so that the maximum coefficient in every row is equal to 1. That is,
divide the elements of each row by the largest-magnitude element in that row.
• While checking a system for being ill-conditioned, the system should be scaled first. For example:
System 1: 2x1 - 3x2 = 5 and 3.98x1 - 6x2 = 7
System 2: 20x1 - 30x2 = 50 and 39.8x1 - 60x2 = 70
They are actually the same system: [A2] = 10[A1], {b2} = 10{b1}.
• Determinant of the 1st system is 2(-6) – (-3)(3.98) = -0.06 , which is close to zero.
• Determinant of the 2nd system is 20(-60) – (-30)(39.8) = -6 , which is not that close to zero.
• Scale the system (do not consider the right-hand-side vector while looking for the largest magnitude):
-0.6667 x1 + x2 = -1.6667
-0.6633 x1 + x2 = -1.1667
• Now the determinant is -0.6667*1 - 1*(-0.6633) = -0.0034 for both systems, which is still very close to zero. After scaling, both systems give the same small value, so this is a better measure of the system's condition: the system really is ill-conditioned.
• There are other ways to determine a system’s condition (See Section 10.3 in Chapra and Canale).
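The scale-then-check idea can be demonstrated in a few lines of Python (illustrative sketch, 2 x 2 case only):

```python
# Scale each row by its largest-magnitude coefficient (ignoring the RHS),
# then compare determinants before and after, for the 2x2 system above.
def scaled(A):
    out = []
    for row in A:
        m = max(abs(v) for v in row)
        out.append([v / m for v in row])
    return out

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

A = [[2.0, -3.0], [3.98, -6.0]]
print(det2(A))           # -0.06: looks close to zero, but that depends on scale
print(det2(scaled(A)))   # about -0.0033: still close to zero, so ill-conditioned
```

Scaling the second system (10 times the first) gives the same scaled matrix, hence the same small determinant.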
Scaling
Remarks
• Scaling helps determine whether pivoting is necessary. But scaling also
introduces additional round-off errors.
• Use scaling to decide pivoting, then use the original coefficients for
forward elimination.
Scaling
Example: Solve the following system using Gauss Elimination with scaled partial pivoting.
Keep numbers as fractions of integers to eliminate round-off errors.
3 -13 9 3 | -19
-6 4 1 -18 | -34
6 -2 2 4 | 16
12 -8 6 10 | 26
Generate a scale vector. It stores the largest coefficient (in magnitude) from each row.
SV = {13 18 6 12}
This scale vector will be updated each time we interchange rows during pivoting.
Scaling
(1.1) Compare the scaled coefficients 3/13, 6/18, 6/6, 12/12. Rows 3 and 4 tie for the
largest; take row 3. Interchange rows 1 and 3:
6  -2  2   4 | 16
-6  4  1 -18 | -34
3 -13  9   3 | -19
12 -8  6  10 | 26
Eliminate x1.
Subtract (-6/6) times row 1 from row 2.
Subtract (3/6) times row 1 from row 3.
Subtract (12/6) times row 1 from row 4.
Resulting system is
6 -2 2 4 | 16
0 2 3 -14 | -18
0 -12 8 1 | -27
0 -4 2 2 | -6
Scaling
SV = {6 18 13 12}
(1.2) Compare scaled coefficients 2/18, 12/13, 4/12. Second one is the largest. Interchange rows 2
and 3.
6  -2  2   4 | 16
0   2  3 -14 | -18
0 -12  8   1 | -27
0  -4  2   2 | -6

After interchanging rows 2 and 3:
6  -2  2   4 | 16
0 -12  8   1 | -27
0   2  3 -14 | -18
0  -4  2   2 | -6

Eliminate x2 (subtract 2/(-12) times row 2 from row 3, and (-4)/(-12) times row 2
from row 4):
6  -2   2      4    | 16
0 -12   8      1    | -27
0   0  13/3  -83/6  | -45/2
0   0  -2/3    5/3  | 3
Scaling
SV = {6 13 18 12}
(1.3) Compare scaled coefficients (13/3)/18, (2/3)/12. First one is larger. No need for pivoting.
6 -2 2 4 | 16
0 -12 8 1 | -27
0 0 13/3 -83/6 | -45/2
0 0 -2/3 5/3 | 3
Eliminate x3: Subtract ((-2/3)/(13/3)) times row 3 from row 4. Resulting system is
6 -2 2 4 | 16
0 -12 8 1 | -27
0 0 13/3 -83/6 | -45/2
0 0 0 -6/13 | -6/13
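The whole procedure of this example can be sketched in Python (illustrative; the course uses MATLAB), keeping the numbers as exact fractions just as the example does:

```python
# Gauss elimination with scaled partial pivoting, using exact fractions.
from fractions import Fraction

def scaled_pivot_gauss(A, b):
    n = len(b)
    A = [[Fraction(v) for v in row] for row in A]
    b = [Fraction(v) for v in b]
    sv = [max(abs(v) for v in row) for row in A]   # scale vector
    for k in range(n - 1):
        # choose the row with the largest scaled coefficient |A[i][k]| / sv[i]
        p = max(range(k, n), key=lambda i: abs(A[i][k]) / sv[i])
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
            sv[k], sv[p] = sv[p], sv[k]            # keep scale vector in sync
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[3, -13, 9, 3], [-6, 4, 1, -18], [6, -2, 2, 4], [12, -8, 6, 10]]
b = [-19, -34, 16, 26]
print(scaled_pivot_gauss(A, b))
```

Back substitution on the triangular system above gives the exact solution x1 = 3, x2 = 1, x3 = -2, x4 = 1, which is what the script prints (as Fractions).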
ill-Conditioned Systems
• They have almost singular coefficient matrices, but they still have a unique solution.
• Calculating the determinant: forward elimination provides an upper triangular matrix. For such
matrices the determinant is the product of the diagonal elements. If we perform pivoting m times:
det[A] = (-1)^m * a11 * a'22 * a''33 * ... 
Note: If any aii becomes zero during forward elimination, then the system is singular.
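This gives a practical way to compute determinants. A Python sketch (illustrative), applied to the 4 x 4 system used in the partial pivoting example:

```python
# Determinant via forward elimination: det(A) = (-1)^m * (product of the
# diagonal of the triangular matrix), where m = number of row interchanges.
def det_gauss(A):
    n = len(A)
    A = [row[:] for row in A]
    m = 0                                           # row interchange count
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0:
            return 0.0                              # whole pivot column is zero: singular
        if p != k:
            A[k], A[p] = A[p], A[k]
            m += 1
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
    d = (-1) ** m
    for i in range(n):
        d *= A[i][i]
    return d

A = [[0, 2, 0, 1], [2, 2, 3, 2], [4, -3, 0, 1], [6, 1, -6, -5]]
print(det_gauss(A))   # approximately -234 (two interchanges occurred)
```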
Gauss-Jordan Method
Gauss-Jordan is a variation of the Gauss Elimination technique. The differences are:
1) All rows are normalized by dividing them to their pivot element.
2) The unknowns are eliminated from all other equations, not just the subsequent ones.
3) Gauss-Jordan method yields an identity matrix as a result of elimination (as opposed to an upper
triangular one in the Gauss Elimination).
4) There is no need for back substitution. Right-hand-side vector becomes the solution.
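The four differences above can be sketched compactly in Python (illustrative; partial pivoting included):

```python
# Gauss-Jordan: normalize each pivot row, then eliminate that unknown from
# ALL other rows; the augmented column ends up holding the solution.
def gauss_jordan(A, b):
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]    # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))   # partial pivoting
        M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]              # 1) normalize the pivot row
        for i in range(n):
            if i != k:                              # 2) eliminate from all other rows
                f = M[i][k]
                M[i] = [vi - f * vk for vi, vk in zip(M[i], M[k])]
    return [M[i][n] for i in range(n)]              # 4) RHS column is the solution

A = [[0, 2, 0, 1], [2, 2, 3, 2], [4, -3, 0, 1], [6, 1, -6, -5]]
b = [0, -2, -7, 6]
x = gauss_jordan(A, b)
print(x)  # approximately [-0.5, 1.0, 0.3333, -2.0]
```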
Example: Solve the following 4 x 4 system using G-J method with pivoting. This is the same system that
we used to demonstrate Gauss Elimination with pivoting.
0  2  0  1 | 0
2  2  3  2 | -2
4 -3  0  1 | -7
6  1 -6 -5 | 6

After pivoting (interchange rows 1 and 4):
6  1 -6 -5 | 6
2  2  3  2 | -2
4 -3  0  1 | -7
0  2  0  1 | 0
Normalize the first row and then eliminate x1 from the 2nd, 3rd and 4th equations.
Gauss-Jordan Method
Interchange rows 2 and 3:

Before pivoting:
1  0.1667  -1  -0.8333 | 1
0  1.6667   5   3.6667 | -4
0 -3.6667   4   4.3333 | -11
0  2        0   1      | 0

After pivoting:
1  0.1667  -1  -0.8333 | 1
0 -3.6667   4   4.3333 | -11
0  1.6667   5   3.6667 | -4
0  2        0   1      | 0
Normalize the second row and eliminate x2 from the 1st, 3rd and 4th equations.
No pivoting is required. Normalize the third row and eliminate x3 from the 1st, 2nd and 4th equations.
Gauss-Jordan Method
• Normalize the last row:

Before:
1  0  0   0.04   | -0.58
0  1  0  -0.28   | 1.56
0  0  1   0.8267 | -1.32
0  0  0   1.5599 | -3.12

After normalization:
1  0  0   0.04   | -0.58
0  1  0  -0.28   | 1.56
0  0  1   0.8267 | -1.32
0  0  0   1      | -2

• Finally, eliminate x4 from the 1st, 2nd and 3rd equations; the right-hand-side column
becomes the solution: x1 = -0.5, x2 = 1, x3 = 0.3333, x4 = -2.
• Gauss-Jordan requires almost 50% more operations than Gauss Elimination (n³ instead of 2n³/3).
• [A] usually indicates the interactions between the components of the system (such as the "knxn" spring terms in a spring-mass system).
• {B} usually indicates the external effects on the system (such as external forces, “mg”).
• If the same system is going to be analyzed for different external conditions, then [A] remains the same and
{B} changes. Calculating [A]-1 once and using it repeatedly for obtaining the solution {xi} is an effective
approach in such cases.
• [A]-1 also provides insight to the relationship between the external effects and the response of the system
components to these effects.
• To take the inverse of [A], augment it with an identity matrix and apply the Gauss-Jordan method.
1 -1 2
[A] = 3 0 1 [A]-1 = ?
1 0 2
• Augment [A] with a 3 x 3 identity matrix and apply the Gauss-Jordan method to the system.
• The 3 x 3 matrix on the right is [A]-1. The inverse can now be used to solve a system with the
coefficient matrix [A] and the right-hand-side vector {b}: {x} = [A]-1 {b}
• LU decomposition can be used for the same purpose and it is more efficient (Section 10.2 in Chapra and
Canale 2010).
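The augment-and-reduce procedure can be sketched in Python (illustrative; MATLAB's `inv` or the backslash operator would be used in practice), applied to the 3 x 3 matrix above:

```python
# Inverse by Gauss-Jordan: augment [A] with the identity matrix and reduce
# [A] to the identity; the right half of the augmented matrix is then inv(A).
def gj_inverse(A):
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))   # partial pivoting
        M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]
        for i in range(n):
            if i != k:
                f = M[i][k]
                M[i] = [vi - f * vk for vi, vk in zip(M[i], M[k])]
    return [row[n:] for row in M]

A = [[1.0, -1.0, 2.0], [3.0, 0.0, 1.0], [1.0, 0.0, 2.0]]
Ainv = gj_inverse(A)
# Ainv is approximately [[0, 0.4, -0.2], [-1, 0, 1], [0, -0.2, 0.6]]
```

Multiplying A by Ainv recovers the identity matrix, which is the standard check.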
LU Decomposition
• LU decomposition is the decomposition of [A] into a lower triangular matrix [L], and an upper triangular
matrix [U], such that [A] = [L][U].
• [U] is the upper triangular matrix that is obtained from the forward elimination step of
Gauss elimination. For a 4 x 4 system:

        a11   a12    a13    a14
[U] =   0     a'22   a'23   a'24
        0     0      a''33  a''34
        0     0      0      a'''44

Each prime denotes a modification to aij during elimination.
• fij values are the factors that the pivot rows are multiplied with during forward elimination. The
superscript (j-1) indicates the number of times aij is modified before fij is calculated.
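The decomposition can be sketched in Python (illustrative; without pivoting): [U] is exactly what forward elimination leaves behind, and [L] collects the factors fij below a unit diagonal.

```python
# LU decomposition (Doolittle form): L stores the elimination factors f_ij
# below a unit diagonal, U is the matrix left by forward elimination.
def lu_decompose(A):
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = U[i][k] / U[k][k]      # factor used in forward elimination
            L[i][k] = f                # saved below the diagonal of L
            for j in range(k, n):
                U[i][j] -= f * U[k][j]
    return L, U

A = [[6.0, -2.0, 2.0, 4.0],
     [12.0, -8.0, 6.0, 10.0],
     [3.0, -13.0, 9.0, 3.0],
     [-6.0, 4.0, 1.0, -18.0]]
L, U = lu_decompose(A)
# Check: multiplying L by U recovers A.
```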