
ME 310

Numerical Methods
3.1 Solving Systems of Linear Algebraic Equations
Gauss Elimination
Mechanical Engineering Department
Middle East Technical University
Ankara, Turkey
Ali Karakus (akarakus@metu.edu.tr)

These presentations were adapted from lecture notes of Dr. Cüneyt Sert, Dr. Sezer Özerinç and Altuğ
Özçelikkale.
They cannot be used and/or modified without the permission of the authors.

Elimination of Unknowns Method


Example 1: Given a 2-equation system:

2.5 x1 + 6.2 x2 = 3.0
4.8 x1 - 8.6 x2 = 5.5

• Multiply the 1st eqn by 8.6 and the 2nd eqn by 6.2:

21.50 x1 + 53.32 x2 = 25.8
29.76 x1 - 53.32 x2 = 34.1

• Add these equations: 51.26 x1 + 0 x2 = 59.9

• Solve for x1: x1 = 59.9/51.26 = 1.168552478

• Use the 1st eqn to solve for x2: x2 = (3.0 - 2.5*1.168552478)/6.2 = 0.01268045242

• Check whether the solution satisfies the 2nd eqn:
4.8*1.168552478 - 8.6*0.01268045242 = 5.500000004
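The elimination steps above can be sketched in a few lines of Python (a minimal illustration; the variable names are mine, not from the slides):

```python
# Elimination of unknowns for the 2x2 system of Example 1.
a11, a12, b1 = 2.5, 6.2, 3.0      # eqn 1
a21, a22, b2 = 4.8, -8.6, 5.5     # eqn 2

# Multiply eqn 1 by 8.6 and eqn 2 by 6.2 so the x2 terms cancel on addition.
c1, d1 = a11 * 8.6, b1 * 8.6      # 21.50 x1 + 53.32 x2 = 25.8
c2, d2 = a21 * 6.2, b2 * 6.2      # 29.76 x1 - 53.32 x2 = 34.1

x1 = (d1 + d2) / (c1 + c2)        # add the equations, solve for x1
x2 = (b1 - a11 * x1) / a12        # back-substitute into eqn 1

print(x1, x2)                     # x1 ≈ 1.168552478, x2 ≈ 0.01268045242
print(a21 * x1 + a22 * x2)        # check eqn 2: ≈ 5.5
```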

Naive Gauss Elimination Method


• It is a formalized version of the elimination-of-unknowns technique from the previous slide. Consider the following system of n equations.

a11x1 + a12x2 + ... + a1nxn = b1     (1)
a21x1 + a22x2 + ... + a2nxn = b2     (2)
...
an1x1 + an2x2 + ... + annxn = bn     (n)

Step 0 (optional): Form the augmented matrix of [A|B].

Step 1 Forward Elimination: Reduce the system to an upper triangular system.

(1.1) First eliminate x1 from 2nd to nth equations.

- Multiply the 1st eqn. by a21/a11 & subtract it from the 2nd equation. This is the new 2nd eqn.

- Multiply the 1st eqn. by a31/a11 & subtract it from the 3rd equation. This is the new 3rd eqn.

...

- Multiply the 1st eqn. by an1/a11 & subtract it from the nth equation. This is the new nth eqn.



Naive Gauss Elimination Method


The modified system is

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ 0 & a'_{22} & a'_{23} & \cdots & a'_{2n} \\ 0 & a'_{32} & a'_{33} & \cdots & a'_{3n} \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & a'_{n2} & a'_{n3} & \cdots & a'_{nn} \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{Bmatrix} = \begin{Bmatrix} b_1 \\ b'_2 \\ b'_3 \\ \vdots \\ b'_n \end{Bmatrix}$$

where a prime (′) indicates the elements that have been modified once.

(1.2) Now eliminate x2 from 3rd to nth equations.


The modified system is

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ 0 & a'_{22} & a'_{23} & \cdots & a'_{2n} \\ 0 & 0 & a''_{33} & \cdots & a''_{3n} \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & a''_{n3} & \cdots & a''_{nn} \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{Bmatrix} = \begin{Bmatrix} b_1 \\ b'_2 \\ b''_3 \\ \vdots \\ b''_n \end{Bmatrix}$$

where a double prime (″) indicates the elements that have been modified twice.

Repeat (1.1) and (1.2) until ( 1.(n-1) ).

At the end of Step 1, we will get an upper triangular system (primes are removed for clarity):

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ 0 & a_{22} & a_{23} & \cdots & a_{2n} \\ 0 & 0 & a_{33} & \cdots & a_{3n} \\ 0 & 0 & 0 & \ddots & \vdots \\ 0 & 0 & 0 & 0 & a_{nn} \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{Bmatrix} = \begin{Bmatrix} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_n \end{Bmatrix}$$

Naive Gauss Elimination Method


Step 2 Back substitution: Find the unknowns starting from the last equation.
(2.1) The last equation (the nth) involves only xn. Solve for it.
(2.2) Substitute xn into the (n-1)th equation and solve for xn-1.
...
(2.n) Substitute all previously calculated x values into the 1st eqn and solve for x1.
Example 2: Solve the following system using Naive Gauss Elimination.

 6x1 -  2x2 + 2x3 +  4x4 =  16
12x1 -  8x2 + 6x3 + 10x4 =  26
 3x1 - 13x2 + 9x3 +  3x4 = -19
-6x1 +  4x2 +  x3 - 18x4 = -34

Step 0: Form the augmented matrix

 6   -2   2    4 |  16
12   -8   6   10 |  26
 3  -13   9    3 | -19
-6    4   1  -18 | -34









Naive Gauss Elimination Method


Step 1: Forward elimination

(1.1) Eliminate x1. The pivot equation is the 1st row; the pivot element is 6.

6   -2   2    4 |  16
0   -4   2    2 |  -6
0  -12   8    1 | -27
0    2   3  -14 | -18

(1.2) Eliminate x2. The first two rows do not change; the pivot element is -4.

6   -2   2    4 |  16
0   -4   2    2 |  -6
0    0   2   -5 |  -9
0    0   4  -13 | -21

(1.3) Eliminate x3. The first three rows do not change; the pivot element is 2.

6   -2   2    4 |  16
0   -4   2    2 |  -6
0    0   2   -5 |  -9
0    0   0   -3 |  -3

































Naive Gauss Elimination Method
End of Step 1: An upper triangular coefficient matrix is obtained.

6  -2   2    4 |  16
0  -4   2    2 |  -6
0   0   2   -5 |  -9
0   0   0   -3 |  -3

Step 2: Back substitution.

(2.1) Find x4: x4 = (-3)/(-3) = 1
(2.2) Find x3: x3 = (-9 + 5*1)/2 = -2
(2.3) Find x2: x2 = (-6 - 2*(-2) - 2*1)/(-4) = 1
(2.4) Find x1: x1 = (16 + 2*1 - 2*(-2) - 4*1)/6 = 3









Naive Gauss Elimination Method


For a general n x n system [A]{x} = {B}:

Forward Elimination:

LOOP k from 1 to n-1
    LOOP i from k+1 to n
        FACTOR = A(i,k) / A(k,k)
        LOOP j from k+1 to n
            A(i,j) = A(i,j) - FACTOR * A(k,j)
        END LOOP
        B(i) = B(i) - FACTOR * B(k)
    END LOOP
END LOOP

Back Substitution:

X(n) = B(n) / A(n,n)
LOOP i from n-1 down to 1
    SUM = 0
    LOOP j from i+1 to n
        SUM = SUM + A(i,j) * X(j)
    END LOOP
    X(i) = (B(i) - SUM) / A(i,i)
END LOOP

Note that a division by zero may occur if the pivot element is zero. Naive Gauss Elimination does not check for this potential problem.

Exercise 3: Implement the above pseudocode in MATLAB. Write a main program and two functions for the forward elimination and the back substitution.
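The pseudocode above can also be sketched in Python (the exercise asks for MATLAB; this version only illustrates the same two-function structure):

```python
# A Python sketch of naive Gauss elimination: forward elimination followed
# by back substitution, with no zero-pivot check (that is what "naive" means).

def forward_elimination(A, b):
    """Reduce [A|b] in place to an upper triangular system."""
    n = len(b)
    for k in range(n - 1):                  # pivot row k
        for i in range(k + 1, n):           # rows below the pivot
            factor = A[i][k] / A[k][k]      # fails if the pivot is zero
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]

def back_substitution(A, b):
    """Solve the upper triangular system, starting from the last equation."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Example 2 from the slides:
A = [[6, -2, 2, 4], [12, -8, 6, 10], [3, -13, 9, 3], [-6, 4, 1, -18]]
b = [16, 26, -19, -34]
forward_elimination(A, b)
x = back_substitution(A, b)
print(x)  # [3.0, 1.0, -2.0, 1.0]
```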




















Operation Count for Naive GE Method


Let's count the FLOPs (floating-point operations) during Step 1, forward elimination.

Table from Chapra and Canale 2010

Total number of addition/subtraction FLOPs during FE:

$$\sum_{FE} \pm \;=\; \sum_{k=1}^{n-1} (n-k)(n+1-k) \;=\; \sum_{k=1}^{n-1} \left[ n(n+1) - k(2n+1) + k^2 \right]$$

$$\sum_{FE} \pm \;=\; n(n+1)\sum_{k=1}^{n-1} 1 \;-\; (2n+1)\sum_{k=1}^{n-1} k \;+\; \sum_{k=1}^{n-1} k^2$$

Useful formulae:

$$\sum_{k=1}^{n} k = \frac{n(n+1)}{2} = \frac{n^2}{2} + O(n)\,, \qquad \sum_{k=1}^{n} k^2 = \frac{n(n+1)(2n+1)}{6} = \frac{n^3}{3} + O(n^2)$$

$$\sum_{FE} \pm \;=\; \left[ n^3 + O(n) \right] \;-\; \left[ n^3 + O(n^2) \right] \;+\; \left[ \frac{n^3}{3} + O(n^2) \right] \;=\; \frac{n^3}{3} + O(n)$$

(The O(n²) parts of the last two brackets cancel.) A similar analysis gives

$$\sum_{FE} \times\div \;=\; \frac{n^3}{3} + O(n^2)$$





Operation Count for Naive GE Method
The total number of FLOPs during forward elimination is:

$$\sum_{FE} \pm\times\div \;=\; \frac{2n^3}{3} + O(n^2)$$

The FLOP count of Step 2, back substitution, is simpler to calculate:

$$\sum_{BS} \pm\times\div \;=\; \left(\sum_{BS} \pm\right) + \left(\sum_{BS} \times\div\right) \;=\; \frac{n(n-1)}{2} + \frac{n(n+1)}{2} \;=\; n^2$$

• As the size of the system, i.e. n, increases, the computational effort increases rapidly.
• Forward elimination constitutes most of the required FLOPs.

Table from Chapra and Canale 2010

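One way to check the 2n³/3 estimate is to count the operations directly. This is an illustrative sketch; `fe_flop_count` is a hypothetical helper that tallies one FLOP per +, -, *, / in the forward-elimination pseudocode:

```python
# Count the FLOPs performed by naive forward elimination on an n x n system
# and compare with the 2n^3/3 + O(n^2) estimate derived above.

def fe_flop_count(n):
    flops = 0
    for k in range(1, n):                # elimination steps k = 1 .. n-1
        for i in range(k + 1, n + 1):    # rows below the pivot
            flops += 1                   # FACTOR = A(i,k)/A(k,k)
            flops += 2 * (n - k)         # one * and one - per column j = k+1..n
            flops += 2                   # B(i) update: one * and one -
    return flops

for n in (10, 100, 400):
    exact = fe_flop_count(n)
    estimate = 2 * n**3 / 3
    print(n, exact, round(exact / estimate, 3))  # ratio tends to 1 as n grows
```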

Pitfalls of Elimination Methods


• Division by zero
  - Caused by a pivot element being zero.
  - Solution: Pivoting (exchanging rows).

• Round-off errors
  - Each FLOP introduces additional round-off error.
  - The number of FLOPs is proportional to n³, therefore round-off errors become increasingly important as the system size increases.
  - Solution: Use double precision, and apply scaling combined with pivoting.

• Ill-conditioned systems
  - Systems whose solutions are very sensitive to the coefficients of [A].
  - Ill-conditioning amplifies the effect of round-off errors.

Pitfalls of Elimination Methods


• In Naive Gauss Elimination, a division by zero occurs if the pivot element is zero.

• Zero pivot elements may be created during the forward elimination step even if they are
not present in the original matrix.

• Pivoting is used to avoid this problem. We interchange rows at each step to put the
coefficient with the largest magnitude on the diagonal.

• In addition to avoiding the division by zero problem, pivoting also reduces round-off
errors since it tends to eliminate divisions by small numbers. Therefore pivoting makes the
solution of ill-conditioned systems easier.

• Complete pivoting uses both row and column interchanging. It is not used
frequently. WHY NOT?

• Partial pivoting uses only row interchanging. We will use this approach.

Partial Pivoting
Example: Solve the following system using Gauss Elimination with pivoting.

Original System:
       2x2       +  x4 =  0
2x1 + 2x2 + 3x3 + 2x4 = -2
4x1 - 3x2       +  x4 = -7
6x1 +  x2 - 6x3 - 5x4 =  6

Step 0: Form the augmented matrix

0   2   0   1 |  0
2   2   3   2 | -2
4  -3   0   1 | -7
6   1  -6  -5 |  6

Step 1: Forward Elimination

(1.1) Eliminate x1. But the pivot element is 0. We have to interchange the 1st row with one of the rows below it. Interchange it with the 4th row because 6 is the largest possible pivot. After pivoting:

6   1  -6  -5 |  6
2   2   3   2 | -2
4  -3   0   1 | -7
0   2   0   1 |  0

After eliminating x1:

6   1       -6   -5      |   6
0   1.6667   5    3.6667 |  -4
0  -3.6667   4    4.3333 | -11
0   2        0    1      |   0










Partial Pivoting
(1.2) Eliminate x2 from the 3rd and 4th eqns. There is no division-by-zero problem, but pivoting will still be performed to reduce round-off errors. Interchange the 2nd and 3rd rows. (Complete pivoting would have interchanged the 2nd and 3rd columns.) After pivoting:

6   1       -6   -5      |   6
0  -3.6667   4    4.3333 | -11
0   1.6667   5    3.6667 |  -4
0   2        0    1      |   0

After eliminating x2:

6   1       -6       -5      |   6
0  -3.6667   4        4.3333 | -11
0   0        6.8182   5.6364 | -9.0001
0   0        2.1818   3.3636 | -5.9999

(1.3) Eliminate x3. 6.8182 > 2.1818, therefore no pivoting is necessary. After eliminating x3:

6   1       -6       -5      |   6
0  -3.6667   4        4.3333 | -11
0   0        6.8182   5.6364 | -9.0001
0   0        0        1.5600 | -3.1199

Partial Pivoting
The upper triangular matrix is obtained:

6   1       -6       -5      |   6
0  -3.6667   4        4.3333 | -11
0   0        6.8182   5.6364 | -9.0001
0   0        0        1.5600 | -3.1199

Step 2: Back substitution

x4 = -3.1199 / 1.5600 = -1.9999
x3 = [-9.0001 - 5.6364*(-1.9999)] / 6.8182 = 0.33325
x2 = [-11 - 4.3333*(-1.9999) - 4*0.33325] / (-3.6667) = 1.0000
x1 = [6 - (-5)*(-1.9999) - (-6)*0.33325 - 1*1.0000] / 6 = -0.50000

The exact solution is {x} = [-0.5  1  1/3  -2]T.

Use more than 5 significant figures to reduce round-off errors.
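The procedure above can be sketched as one routine (an illustrative sketch; the function name is mine):

```python
# Gauss elimination with partial pivoting (row interchanges only),
# applied to the example system above.

def gauss_partial_pivoting(A, b):
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: pick the row with the largest |A[i][k]|, i >= k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # Back substitution on the upper triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[0, 2, 0, 1], [2, 2, 3, 2], [4, -3, 0, 1], [6, 1, -6, -5]]
b = [0, -2, -7, 6]
print(gauss_partial_pivoting(A, b))  # close to [-0.5, 1, 1/3, -2]
```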


Scaling
• Scaling is normalizing the equations so that the maximum coefficient in every row is equal to 1. That is, divide the elements of each row by the largest-magnitude element in that row.

• While checking a system for being ill-conditioned, the system should be scaled first. For example:

2x1 - 3x2 = 5            20x1 - 30x2 = 50
3.98x1 - 6x2 = 7         39.8x1 - 60x2 = 70

These are actually the same system: [A2] = 10[A1], {b2} = 10{b1}.

• Determinant of the 1st system is 2(-6) - (-3)(3.98) = -0.06, which is close to zero.

• Determinant of the 2nd system is 20(-60) - (-30)(39.8) = -6, which is not that close to zero.

• So is this system ill-conditioned or not?

• Scale the system (do not consider the right-hand-side vector while looking for the largest magnitude):

-0.6667 x1 + x2 = -1.6667
-0.6633 x1 + x2 = -1.1667

• Now the determinant is (-0.6667)(1) - (1)(-0.6633) = -0.0034, which is still close to zero even though the coefficients are of order one. The scaled determinant is a better measure of the system's condition because it does not change when the equations are multiplied by a constant; here it shows that the system is indeed ill-conditioned.

• Note that "How close to zero?" is still an open question.

• There are other ways to determine a system's condition (see Section 10.3 in Chapra and Canale).
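The determinant comparison can be checked numerically (a quick sketch; `det2` is a small helper defined here, not from the slides):

```python
# 2x2 determinants before and after scaling, for the example above.
def det2(a11, a12, a21, a22):
    return a11 * a22 - a12 * a21

print(det2(2, -3, 3.98, -6))      # ≈ -0.06  (original system)
print(det2(20, -30, 39.8, -60))   # ≈ -6.0   (same system multiplied by 10)

# Divide each row by its largest-magnitude coefficient (-3 and -6):
scaled = det2(2 / -3, -3 / -3, 3.98 / -6, -6 / -6)
print(scaled)                     # ≈ -0.0033: still close to zero, so the
                                  # system is ill-conditioned
```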


Scaling
Remarks
• Scaling helps determine whether pivoting is necessary. But scaling also
introduces additional round-off errors.
• Use scaling to decide pivoting, then use the original coefficients for
forward elimination.

Solve the following set of equations using Gauss elimination with scaled partial
pivoting.

4x1 + 4x2 + 7x3 = 1

2x1 + 1x2 + 3x3 = 1

2x1 + 5x2 + 9x3 = 3


Scaling
Example: Solve the following system using Gauss Elimination with scaled partial pivoting.
Keep numbers as fractions of integers to eliminate round-off errors.
3 -13 9 3 | -19
-6 4 1 -18 | -34
6 -2 2 4 | 16
12 -8 6 10 | 26

Generate a scale vector. It stores the largest coefficient (in magnitude) from each row.
SV = {13 18 6 12}
This scale vector will be updated each time we interchange rows during pivoting.

Step 1: Forward Elimination


(1.1) Compare scaled coefficients 3/13, 6/18, 6/6, 12/12. Third one is the largest (fourth
one is the same but we use the first occurrence). Interchange rows 1 and 3.
6 -2 2 4 | 16
-6 4 1 -18 | -34
3 -13 9 3 | -19
12 -8 6 10 | 26
Update the scale vector: SV = {6 18 13 12}

Scaling
6 -2 2 4 | 16
-6 4 1 -18 | -34
3 -13 9 3 | -19
12 -8 6 10 | 26

Eliminate x1.
Subtract (-6/6) times row 1 from row 2.
Subtract (3/6) times row 1 from row 3.
Subtract (12/6) times row 1 from row 4.

Resulting system is

6 -2 2 4 | 16
0 2 3 -14 | -18
0 -12 8 1 | -27
0 -4 2 2 | -6

Scaling
SV = {6 18 13 12}
(1.2) Compare scaled coefficients 2/18, 12/13, 4/12. Second one is the largest. Interchange rows 2
and 3.

6 -2 2 4 | 16 6 -2 2 4 | 16
0 2 3 -14 | -18 0 -12 8 1 | -27
0 -12 8 1 | -27 0 2 3 -14 | -18
0 -4 2 2 | -6 0 -4 2 2 | -6

Update the scale vector SV = {6 13 18 12}.


Eliminate x2.
Subtract (2/(-12)) times row 2 from row 3.
Subtract ((-4)/(-12)) times row 2 from row 4.
Resulting system is

6 -2 2 4 | 16
0 -12 8 1 | -27
0 0 13/3 -83/6 | -45/2
0 0 -2/3 5/3 | 3

Scaling
SV = {6 13 18 12}
(1.3) Compare scaled coefficients (13/3)/18, (2/3)/12. First one is larger. No need for pivoting.
6 -2 2 4 | 16
0 -12 8 1 | -27
0 0 13/3 -83/6 | -45/2
0 0 -2/3 5/3 | 3
Eliminate x3: Subtract ((-2/3)/(13/3)) times row 3 from row 4. Resulting system is

6 -2 2 4 | 16
0 -12 8 1 | -27
0 0 13/3 -83/6 | -45/2
0 0 0 -6/13 | -6/13

Step 2: Back substitution


Equation 4 → x4 = 1
Equation 3 → x3 = -2
Equation 2 → x2 = 1
Equation 1 → x1 = 3
These are exact results: round-off error is eliminated because the numbers are kept as exact fractions rather than floating-point approximations.
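The scaled-partial-pivoting procedure above can be reproduced in exact rational arithmetic (an illustrative sketch; the function name is mine, and `fractions.Fraction` stands in for the slide's hand-kept fractions):

```python
# Gauss elimination with scaled partial pivoting, done with exact fractions
# so that no round-off error occurs, mirroring the worked example above.
from fractions import Fraction

def scaled_pivoting_solve(A, b):
    n = len(b)
    A = [[Fraction(v) for v in row] for row in A]
    b = [Fraction(v) for v in b]
    sv = [max(abs(v) for v in row) for row in A]   # scale vector
    for k in range(n - 1):
        # Pick the row whose scaled pivot |A[i][k]| / sv[i] is largest;
        # max() returns the first row among ties, matching the slides.
        p = max(range(k, n), key=lambda i: abs(A[i][k]) / sv[i])
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
            sv[k], sv[p] = sv[p], sv[k]            # update the scale vector
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[3, -13, 9, 3], [-6, 4, 1, -18], [6, -2, 2, 4], [12, -8, 6, 10]]
b = [-19, -34, 16, 26]
x = scaled_pivoting_solve(A, b)
print([str(v) for v in x])  # ['3', '1', '-2', '1'] -- exact
```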




Ill-Conditioned Systems
• They have almost singular coefficient matrices, but they still have a unique solution.

• Consider the following 2 x 2 system:

1st system:     x1 + 2x2 = 10
              1.1x1 + 2x2 = 10.4

The solution of this system is [x]T = [4  3]T.

• Change a21 from 1.1 to 1.05:

2nd system:     x1 + 2x2 = 10
             1.05x1 + 2x2 = 10.4

• The new solution is [x]T = [8  1]T.

(Figure: the two equations plot as nearly parallel lines in the x1-x2 plane.)

• This is a typical ill-conditioned system.
• Its solution is very sensitive to changes in the coefficient matrix and the right-hand-side vector.
• Ill-conditioned systems are very sensitive to round-off errors.
• Double precision and scaled pivoting must be used to reduce round-off errors as much as possible.
• Fortunately not many engineering problems result in an ill-conditioned system.

Other Uses of Gauss Elimination


• Calculating LU decomposition: [L][U] = [A] where [L] and [U] are triangular matrices. LU
decomposition will be described later.

• Calculating the determinant: Forward elimination provides an upper triangular matrix. For such matrices the determinant is the product of the diagonal elements. If we perform pivoting m times:

det(A) = |A| = (-1)^m * a11 * a22 * ... * ann

Remember the example we used to describe pivoting. [A] was:

0   2   0   1
2   2   3   2
4  -3   0   1
6   1  -6  -5

After the forward elimination step we found the following upper triangular matrix:

6   1       -6       -5
0  -3.6667   4        4.3333
0   0        6.8182   5.6364
0   0        0        1.5600

During elimination we used pivoting and interchanged rows twice. Therefore:

|A| = (-1)^2 * 6 * (-3.6667) * 6.8182 * 1.5600 = -234.0028

Note: If any aii becomes zero during forward elimination, then the system is singular.
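This determinant calculation can be sketched directly from the elimination routine, tracking the number m of row interchanges (an illustrative sketch; the function name is mine):

```python
# Determinant from forward elimination with partial pivoting: the product of
# the final diagonal elements times (-1)^m, where m counts row interchanges.

def det_by_elimination(A):
    A = [row[:] for row in A]          # work on a copy
    n = len(A)
    m = 0                              # number of row interchanges
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0:
            return 0.0                 # a zero pivot means [A] is singular
        if p != k:
            A[k], A[p] = A[p], A[k]
            m += 1
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
    det = (-1) ** m
    for i in range(n):
        det *= A[i][i]
    return det

A = [[0, 2, 0, 1], [2, 2, 3, 2], [4, -3, 0, 1], [6, 1, -6, -5]]
print(det_by_elimination(A))  # about -234 (exact value of this determinant)
```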

Gauss-Jordan Method
Gauss-Jordan is a variation of the Gauss Elimination technique. The differences are:
1) All rows are normalized by dividing them by their pivot element.
2) The unknowns are eliminated from all other equations, not just the subsequent ones.
3) The Gauss-Jordan method yields an identity matrix as a result of the elimination (as opposed to an upper triangular one in Gauss Elimination).
4) There is no need for back substitution. The right-hand-side vector becomes the solution.

Example: Solve the following 4 x 4 system using the G-J method with pivoting. This is the same system that we used to demonstrate Gauss Elimination with pivoting.

Pivoting (interchange rows 1 and 4):

0   2   0   1 |  0        6   1  -6  -5 |  6
2   2   3   2 | -2   →    2   2   3   2 | -2
4  -3   0   1 | -7        4  -3   0   1 | -7
6   1  -6  -5 |  6        0   2   0   1 |  0

Normalize the first row and then eliminate x1 from the 2nd, 3rd and 4th equations:

1   0.1667  -1  -0.8333 |   1
0   1.6667   5   3.6667 |  -4
0  -3.6667   4   4.3333 | -11
0   2        0   1      |   0




Gauss-Jordan Method
Interchange rows 2 and 3:

1   0.1667  -1  -0.8333 |   1
0  -3.6667   4   4.3333 | -11
0   1.6667   5   3.6667 |  -4
0   2        0   1      |   0

Normalize the second row and eliminate x2 from the 1st, 3rd and 4th equations:

1   0   -0.8182  -0.6364 |  0.5
0   1   -1.0909  -1.1818 |  3
0   0    6.8182   5.6364 | -9
0   0    2.1818   3.3636 | -6

No pivoting is required. Normalize the third row and eliminate x3 from the 1st, 2nd and 4th equations:

1   0   0   0.04   | -0.58
0   1   0  -0.280  |  1.56
0   0   1   0.8267 | -1.32
0   0   0   1.5599 | -3.12

Gauss-Jordan Method
• Normalize the last row:

1   0   0   0.04   | -0.58
0   1   0  -0.280  |  1.56
0   0   1   0.8267 | -1.32
0   0   0   1      | -2

• Eliminate x4 from the 1st, 2nd and 3rd equations:

1   0   0   0 | -0.5
0   1   0   0 |  1.0001
0   0   1   0 |  0.3333
0   0   0   1 | -2

• The right-hand-side vector is the solution. No back substitution is required.

• Gauss-Jordan requires almost 50% more operations than Gauss Elimination (n³ instead of 2n³/3).

Study Problem: Show that the arithmetic complexity of the Gauss-Jordan method is n³.
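The whole Gauss-Jordan procedure can be sketched compactly (an illustrative sketch; the function name is mine):

```python
# Gauss-Jordan with partial pivoting: each pivot row is normalized and the
# unknown is eliminated from ALL other rows, so [A] becomes the identity and
# the right-hand side becomes the solution (no back substitution).

def gauss_jordan(A, b):
    n = len(b)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]     # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # partial pivoting
        M[k], M[p] = M[p], M[k]
        pivot = M[k][k]
        M[k] = [v / pivot for v in M[k]]                  # normalize pivot row
        for i in range(n):
            if i != k:                                    # eliminate everywhere
                f = M[i][k]
                M[i] = [vi - f * vk for vi, vk in zip(M[i], M[k])]
    return [row[n] for row in M]                          # rhs = solution

A = [[0, 2, 0, 1], [2, 2, 3, 2], [4, -3, 0, 1], [6, 1, -6, -5]]
b = [0, -2, -7, 6]
print(gauss_jordan(A, b))  # close to [-0.5, 1, 1/3, -2]
```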


Uses of Matrix Inversions


• A common engineering problem:

{x1} = [A]^-1 {B1}
{x2} = [A]^-1 {B2}
...
{xn} = [A]^-1 {Bn}

• [A] usually indicates the interactions between the components of the system (such as the "k_n x_n" terms in a system of spring-connected bodies).

• {B} usually indicates the external effects on the system (such as external forces, "mg").

• If the same system is going to be analyzed for different external conditions, then [A] remains the same and {B} changes. Calculating [A]^-1 once and using it repeatedly to obtain the solutions {xi} is an effective approach in such cases.

• [A]^-1 also provides insight into the relationship between the external effects and the response of the system components to these effects.



Calculating the Inverse of a Matrix using GJ Method


• Calculating the inverse of a matrix is useful if we would like to solve many systems with the same coefficient matrix but different right-hand-side vectors.

• To take the inverse of [A], augment it with an identity matrix and apply the Gauss-Jordan method.

Example: Find the inverse of the following matrix.

      1  -1   2
[A] = 3   0   1
      1   0   2

• Augment [A] with a 3 x 3 identity matrix and apply the Gauss-Jordan method to the system:

1  -1   2 | 1   0   0          1   0   0 |  0    0.4  -0.2
3   0   1 | 0   1   0    G-J   0   1   0 | -1    0     1
1   0   2 | 0   0   1    -->   0   0   1 |  0   -0.2   0.6

• The 3 x 3 matrix on the right is [A]^-1. The inverse can now be used to solve a system with the coefficient matrix [A] and any right-hand-side vector {b}: {x} = [A]^-1 {B}

• LU decomposition can be used for the same purpose and it is more efficient (Section 10.2 in Chapra and Canale 2010).
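The augment-with-identity idea can be sketched in code (an illustrative sketch; the function name is mine):

```python
# Invert a matrix by augmenting [A | I] and running Gauss-Jordan with
# partial pivoting; the right half of the reduced matrix is A^-1.

def invert(A):
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]                  # build [A | I]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # partial pivoting
        M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]                # normalize pivot row
        for i in range(n):
            if i != k:                                # eliminate everywhere else
                f = M[i][k]
                M[i] = [vi - f * vk for vi, vk in zip(M[i], M[k])]
    return [row[n:] for row in M]                     # right half is A^-1

Ainv = invert([[1, -1, 2], [3, 0, 1], [1, 0, 2]])
for row in Ainv:
    print(row)
# Rows match the slide's result [0, 0.4, -0.2], [-1, 0, 1], [0, -0.2, 0.6]
# up to round-off.
```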

LU Decomposition
• LU decomposition is the decomposition of [A] into a lower triangular matrix [L] and an upper triangular matrix [U], such that [A] = [L][U].

• [U] is the upper triangular matrix that is obtained from the forward elimination step of Gauss elimination. For a 4 x 4 system:

$$[U] = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ 0 & a'_{22} & a'_{23} & a'_{24} \\ 0 & 0 & a''_{33} & a''_{34} \\ 0 & 0 & 0 & a'''_{44} \end{bmatrix}$$

where each prime marks a modification to a_ij during elimination.

• [L] is a lower triangular matrix with unit values on the main diagonal and the elimination factors in the lower elements. For a 4 x 4 system:

$$[L] = \begin{bmatrix} 1 & 0 & 0 & 0 \\ f_{21} & 1 & 0 & 0 \\ f_{31} & f_{32} & 1 & 0 \\ f_{41} & f_{42} & f_{43} & 1 \end{bmatrix}, \qquad f_{ij} = \frac{a_{ij}^{(j-1)}}{a_{jj}^{(j-1)}}$$

• The f_ij values are the factors that the pivot rows are multiplied with during forward elimination. The superscript (j-1) indicates the number of times a_ij has been modified before f_ij is calculated (e.g. f_21 = a_21/a_11 with no prior modification, f_32 = a'_32/a'_22 after one).

See Section 10.1.2 in Chapra and Canale 2010 for details.
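Storing the elimination factors while doing forward elimination yields [L] and [U] directly. This is a sketch without pivoting, valid for matrices on which naive elimination succeeds (Example 2's matrix is reused here); the function name is mine:

```python
# Doolittle LU decomposition built from the forward-elimination factors:
# [U] is the eliminated matrix, [L] holds the factors f_ij below a unit
# diagonal, and [L][U] reproduces [A].

def lu_decompose(A):
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = U[i][k] / U[k][k]
            L[i][k] = f                     # store the elimination factor
            for j in range(k, n):
                U[i][j] -= f * U[k][j]
    return L, U

A = [[6, -2, 2, 4], [12, -8, 6, 10], [3, -13, 9, 3], [-6, 4, 1, -18]]
L, U = lu_decompose(A)
# Check: multiplying L and U reproduces A.
prod = [[sum(L[i][k] * U[k][j] for k in range(4)) for j in range(4)]
        for i in range(4)]
print(prod == [[float(v) for v in row] for row in A])  # True
```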


Solving Linear Systems with LU Decomposition


