
Numerical Methods

(CENG-2084)
Chapter 3
LINEAR ALGEBRAIC EQUATIONS

Copyright © 2006 The McGraw-Hill Companies, Inc. Permission required for reproduction or display.
Linear Equation
• A system of n equations in n variables:

f1(x1, x2, x3, ..., xn) = 0
f2(x1, x2, x3, ..., xn) = 0
...
fn(x1, x2, x3, ..., xn) = 0

• When every equation is linear, the system takes the form:

a11x1 + a12x2 + … + a1nxn = b1
a21x1 + a22x2 + … + a2nxn = b2
.
.
an1x1 + an2x2 + … + annxn = bn

Solving Systems of Equations
• A linear equation in n variables:
a1x1 + a2x2 + … + anxn = b

• For small systems (n ≤ 3), linear algebra provides several tools to solve such systems of linear equations:

– Graphical method
– Cramer’s rule
– Method of elimination

• Nowadays, easy access to computers makes the solution of very large sets of linear algebraic equations possible.
3
Copyright © 2006 The McGraw-Hill Companies, Inc. Permission required for reproduction or display.
Graphical Method

(Plot each equation as a line; the solution is the point where the lines intersect. Practical only for systems of two or three unknowns.)
Determinants and Cramer’s Rule
Ax  b [A] : coefficient matrix

a11 a12 a13  a11 a12 a13   x1  b1 


  a a a   x   b 
A  a21 a22 a23   21 22 23   2   2 
a31 a32 a33  a31 a32 a33   x3  b3 

b1 a12 a13 a11 b1 a13 a11 a12 b1


b2 a22 a23 a21 b2 a23 a21 a22 b2
b3 a32 a33 a31 b3 a33 a31 a32 b3
x1  x2  x3 
D D D
D : Determinant of A matrix
Computing the Determinant

    | a11 a12 a13 |
D = | a21 a22 a23 |
    | a31 a32 a33 |

Minors along the first row:

      | a22 a23 |
D11 = | a32 a33 | = a22 a33 - a32 a23

      | a21 a23 |
D12 = | a31 a33 | = a21 a33 - a31 a23

      | a21 a22 |
D13 = | a31 a32 | = a21 a32 - a31 a22

Determinant of A:  D = a11 D11 - a12 D12 + a13 D13
Gauss Elimination

• Solve Ax = b
• Consists of two phases:
  - Forward elimination
  - Back substitution

• Forward elimination reduces Ax = b to an upper triangular system Tx = b':

  | a11 a12 a13 | b1 |      Forward        | a11 a12  a13  | b1  |
  | a21 a22 a23 | b2 |   ------------->    |  0  a'22 a'23 | b'2 |
  | a31 a32 a33 | b3 |    Elimination      |  0   0  a''33 | b''3|

• Back substitution can then solve Tx = b' for x:

  x3 = b''3 / a''33
  x2 = (b'2 - a'23 x3) / a'22
  x1 = (b1 - a13 x3 - a12 x2) / a11
Forward Elimination (Gaussian Elimination)

         x1 -  x2 +  x3 = 6                 x1 - x2 + x3 = 6
-(3/1)  3x1 + 4x2 + 2x3 = 9                     7x2 - x3 = -9
-(2/1)  2x1 +  x2 +  x3 = 7        -(3/7)       3x2 - x3 = -5

                                            x1 - x2 + x3 = 6
                                                7x2 - x3 = -9
                                            -(4/7)x3 = -(8/7)

Solve using BACK SUBSTITUTION: x3 = 2, x2 = -1, x1 = 3
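The complete procedure on the system above can be sketched in Python. This is a minimal illustration without pivoting; the function name `gauss_solve` is my own, not from the slides:

```python
def gauss_solve(a, b):
    """Solve Ax = b by naive Gauss elimination (no pivoting)."""
    n = len(b)
    a = [row[:] for row in a]          # work on copies
    b = b[:]
    # Phase 1 -- forward elimination: zero out entries below each pivot.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]      # multiplier, e.g. 3/1 and 2/1 above
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    # Phase 2 -- back substitution on the upper triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x

# The 3x3 example above; solution x1 = 3, x2 = -1, x3 = 2.
x = gauss_solve([[1, -1, 1], [3, 4, 2], [2, 1, 1]], [6, 9, 7])
```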

(As elimination proceeds, zeros fill in below the diagonal, row by row.)
Back Substitution

1x0 + 1x1 - 1x2 + 4x3 = 8
    - 2x1 - 3x2 + 1x3 = 5
            2x2 - 3x3 = 0
                  2x3 = 4    ->    x3 = 2

Back Substitution

1x0 + 1x1 - 1x2 = 0
    - 2x1 - 3x2 = 3
            2x2 = 6    ->    x2 = 3

Back Substitution

1x0 + 1x1 = 3
    - 2x1 = 12    ->    x1 = -6

Back Substitution

1x0 = 9    ->    x0 = 9

Back Substitution
(* Pseudocode *)

for i ← n down to 1 do
    /* calculate xi */
    x[i] ← b[i] / a[i,i]
    /* substitute x[i] in the equations above */
    for j ← 1 to i-1 do
        b[j] ← b[j] - x[i] × a[j,i]
    endfor
endfor

Time Complexity: O(n^2)
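The pseudocode translates directly to Python (0-based indices; `back_substitute` is an illustrative name). Applied to the triangular system worked through on the previous slides:

```python
def back_substitute(a, b):
    """Column-oriented back substitution, following the pseudocode above.
    a is an upper triangular n x n matrix, b the right-hand side."""
    n = len(b)
    b = b[:]                        # keep the caller's b intact
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # for i <- n down to 1
        x[i] = b[i] / a[i][i]       # calculate x_i
        for j in range(i):          # substitute x_i in the rows above
            b[j] -= x[i] * a[j][i]
    return x

# Triangular system from the previous slides; solution x0=9, x1=-6, x2=3, x3=2.
x = back_substitute(
    [[1,  1, -1,  4],
     [0, -2, -3,  1],
     [0,  0,  2, -3],
     [0,  0,  0,  2]],
    [8, 5, 0, 4])
```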


Forward Elimination
(Gaussian Elimination)

To eliminate a_ji (the element below the pivot a_ii), subtract (a_ji / a_ii) times row i from row j. The pivot-column element becomes:

  a'_ji = a_ji - (a_ji / a_ii) a_ii = 0

Forward Elimination

MULTIPLIERS
            4x0 +  6x1 + 2x2 - 2x3 = 8
-(2/4)      2x0        + 5x2 - 2x3 = 4
-(-4/4)    -4x0 -  3x1 - 5x2 + 4x3 = 1
-(8/4)      8x0 + 18x1 - 2x2 + 3x3 = 40

Forward Elimination

MULTIPLIERS
            4x0 + 6x1 + 2x2 - 2x3 = 8
                - 3x1 + 4x2 - 1x3 = 0
-(3/-3)         + 3x1 - 3x2 + 2x3 = 9
-(6/-3)         + 6x1 - 6x2 + 7x3 = 24

Forward Elimination

MULTIPLIER
            4x0 + 6x1 + 2x2 - 2x3 = 8
                - 3x1 + 4x2 - 1x3 = 0
                        1x2 + 1x3 = 9
??                      2x2 + 5x3 = 24

Forward Elimination

            4x0 + 6x1 + 2x2 - 2x3 = 8
                - 3x1 + 4x2 - 1x3 = 0
                        1x2 + 1x3 = 9
                              3x3 = 6
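The stages above can be reproduced programmatically. A minimal sketch (the name `forward_eliminate` is my own) that also records the multipliers as they are applied:

```python
def forward_eliminate(a, b):
    """Forward elimination without pivoting; returns the triangular
    system and the list of multipliers used at each step."""
    n = len(b)
    a = [row[:] for row in a]
    b = b[:]
    multipliers = []
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]          # e.g. 2/4, -4/4, 8/4 above
            multipliers.append(m)
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    return a, b, multipliers

u, c, mult = forward_eliminate(
    [[4, 6, 2, -2],
     [2, 0, 5, -2],
     [-4, -3, -5, 4],
     [8, 18, -2, 3]],
    [8, 4, 1, 40])
# The first multipliers are 2/4, -4/4, 8/4, and the last row reduces to 3*x3 = 6.
```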

Operation count in Forward Elimination (Gaussian Elimination)

Eliminating one row below a pivot takes about 2n operations (n multiplications and n subtractions across the row and the right-hand side).

1st column: 2n operations for each of the (n-1) rows below the pivot -> 2n(n-1) ≈ 2n^2
Later columns: 2(n-1)^2, 2(n-2)^2, ...

TOTAL # of operations for FORWARD ELIMINATION:

2n^2 + 2(n-1)^2 + ... + 2(2)^2 + 2(1)^2 = 2 Σ(i=1..n) i^2 = 2 n(n+1)(2n+1)/6 = O(n^3)
Pitfalls of Elimination Methods
Division by zero
It is possible that during both elimination and back-substitution phases a
division by zero can occur.

For example:
2x2 + 3x3 = 8                 | 0  2  3 |
4x1 + 6x2 + 7x3 = -3      A = | 4  6  7 |
2x1 + x2 + 6x3 = 5            | 2  1  6 |

Solution: pivoting (to be discussed later)
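The hazard can be demonstrated directly: with a zero pivot the first elimination step fails, while swapping two rows first avoids the division. A minimal sketch (the function name is illustrative):

```python
def eliminate_first_column(a, b):
    """One forward-elimination step: zero out column 0 below the pivot.
    Divides by the pivot a[0][0], so a zero pivot is fatal."""
    n = len(b)
    for i in range(1, n):
        m = a[i][0] / a[0][0]      # raises ZeroDivisionError if a[0][0] == 0
        for j in range(n):
            a[i][j] -= m * a[0][j]
        b[i] -= m * b[0]

# The system above: the first equation has no x1 term, so a11 = 0.
a = [[0, 2, 3], [4, 6, 7], [2, 1, 6]]
b = [8, -3, 5]
try:
    eliminate_first_column(a, b)
    failed = False
except ZeroDivisionError:
    failed = True                  # naive elimination breaks down

# Swapping the first two rows (pivoting) puts 4 in the pivot position.
a = [[4, 6, 7], [0, 2, 3], [2, 1, 6]]
b = [-3, 8, 5]
eliminate_first_column(a, b)       # now succeeds
```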

Pitfalls (cont.)
Round-off errors

• Because computers carry only a limited number of significant figures, round-off errors will occur, and they propagate from one step to the next.

• This problem is especially important when large numbers of equations (100 or more) are to be solved.

• Always use double-precision arithmetic: it is slower, but the extra precision is needed for accuracy.

• It is also a good idea to substitute your results back into the original equations and check whether a substantial error has occurred.

Pitfalls (cont.)
Ill-conditioned systems: small changes in coefficients result in large changes in the solution. Alternatively, a wide range of answers can approximately satisfy the equations.
(Well-conditioned systems: small changes in coefficients result in small changes in the solution.)

Problem: since round-off errors can induce small changes in the coefficients, these changes can lead to large solution errors in ill-conditioned systems.

Example:

x1 + 2x2 = 10
1.1x1 + 2x2 = 10.4

     | 10   2 |
     | 10.4 2 |    2(10) - 2(10.4)
x1 = ---------- = ---------------- = 4        x2 = 3
         D         1(2) - 2(1.1)

With one coefficient changed slightly (1.1 -> 1.05):

x1 + 2x2 = 10
1.05x1 + 2x2 = 10.4

     | 10   2 |
     | 10.4 2 |    2(10) - 2(10.4)
x1 = ---------- = ---------------- = 8        x2 = 1
         D         1(2) - 2(1.05)
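The two systems above can be checked with a small 2x2 Cramer's-rule helper (an illustrative sketch; `solve2` is not from the slides):

```python
def solve2(a11, a12, a21, a22, b1, b2):
    """Cramer's rule for a 2x2 system."""
    d = a11 * a22 - a12 * a21          # determinant D
    x1 = (b1 * a22 - a12 * b2) / d
    x2 = (a11 * b2 - b1 * a21) / d
    return x1, x2

# Original system:  x1 + 2x2 = 10,  1.1x1 + 2x2 = 10.4
x1a, x2a = solve2(1, 2, 1.1, 2, 10, 10.4)    # x1 = 4, x2 = 3
# One coefficient perturbed by ~5% (1.1 -> 1.05):
x1b, x2b = solve2(1, 2, 1.05, 2, 10, 10.4)   # x1 = 8, x2 = 1
```

A tiny change in a single coefficient doubles x1, reproducing the ill-conditioning shown on the slide.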

Ill-conditioned systems (cont.)

• Surprisingly, substituting the erroneous values x1 = 8 and x2 = 1 into the original equations does not clearly reveal that they are incorrect:
  x1 + 2x2 = 10          8 + 2(1) = 10          (the same!)
  1.1x1 + 2x2 = 10.4     1.1(8) + 2(1) = 10.8   (close!)

IMPORTANT OBSERVATION:
An ill-conditioned system is one with a determinant close to zero.

• If the determinant D = 0, the system is singular: there is no unique solution (either none or infinitely many).

• Scaling (multiplying the coefficients by the same value) does not change the equations, but it changes the value of the determinant significantly. However, it does not change the ill-conditioned state of the equations!
  DANGER! It may hide the fact that the system is ill-conditioned!!

How can we find out whether a system is ill-conditioned?
• Not easy! Luckily, most engineering systems yield well-conditioned results.
• One way to find out: change the coefficients slightly, then recompute and compare.

COMPUTING THE DETERMINANT OF A MATRIX
USING GAUSSIAN ELIMINATION

The determinant of a matrix can be found using Gaussian elimination.

Here are the rules that apply:


• Interchanging two rows changes the sign of the determinant.
• Multiplying a row by a scalar multiplies the determinant by that scalar.
• Replacing any row by the sum of that row and the multiple of any other row
does NOT change the determinant.
• The determinant of a triangular matrix (upper or lower triangular) is the product of the diagonal elements, i.e. D = t11 t22 t33:

      | t11 t12 t13 |
  D = |  0  t22 t23 | = t11 t22 t33
      |  0   0  t33 |
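These rules can be combined into a determinant routine: eliminate with partial pivoting, track the sign flips from row interchanges, and multiply the diagonal. A minimal sketch (`det_by_elimination` is an illustrative name):

```python
def det_by_elimination(a):
    """Determinant via Gauss elimination with partial pivoting."""
    a = [row[:] for row in a]
    n = len(a)
    sign = 1.0
    for k in range(n):
        # Bring the largest entry in column k up to the pivot row.
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        if p != k:
            a[k], a[p] = a[p], a[k]
            sign = -sign              # a row interchange flips the sign
        if a[k][k] == 0:
            return 0.0                # singular matrix
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]     # row replacement: determinant unchanged
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
    d = sign
    for k in range(n):
        d *= a[k][k]                  # product of the triangular diagonal
    return d

# Determinant of the earlier 3x3 example matrix.
d = det_by_elimination([[1, -1, 1], [3, 4, 2], [2, 1, 1]])
# d = -4
```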

Techniques for Improving Solutions
• Use of more significant figures – double precision arithmetic

• Pivoting
  If a pivot element is zero, the normalization step leads to division by zero. The same problem may arise when the pivot element is close to zero. The problem can be avoided by:
  - Partial pivoting
    Switching the rows below so that the largest element becomes the pivot element.
    (Go over the solution in: CHAP9e-Problem-11.doc)
  - Complete pivoting
    • Searching for the largest element in all rows and columns, then switching.
    • Rarely used, because switching columns changes the order of the x's and adds significant complexity and overhead -> costly.

• Scaling
  - Used to reduce round-off errors and improve accuracy.
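Partial pivoting as described above can be added directly to the elimination loop. A minimal sketch (`gauss_solve_pivot` is an illustrative name), applied to the zero-pivot system from the earlier slide:

```python
def gauss_solve_pivot(a, b):
    """Gauss elimination with partial pivoting: before each elimination
    step, swap in the row with the largest pivot-column magnitude."""
    n = len(b)
    a = [row[:] for row in a]
    b = b[:]
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        if p != k:
            a[k], a[p] = a[p], a[k]    # switch rows so the largest
            b[k], b[p] = b[p], b[k]    # element becomes the pivot
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n                      # back substitution
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x

# The system with a11 = 0 is now handled transparently.
x = gauss_solve_pivot([[0, 2, 3], [4, 6, 7], [2, 1, 6]], [8, -3, 5])
```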

Gauss-Jordan Elimination

Elimination is applied above as well as below each pivot, reducing A to a diagonal matrix; each equation then yields its unknown directly.

| a11  0   0  . . .  0  | b1 |
|  0  a22  0  . . .  0  | b2 |
|  0   0  a33 . . .  0  | b3 |
|  .   .   .         .  | .  |
|  0   0   0  . . . ann | bn |

Gauss-Jordan Elimination: Example

|  1  1  2 | | x1 |   |  8 |                       |  1  1  2 |  8 |
| -1 -2  3 | | x2 | = |  1 |    Augmented matrix:  | -1 -2  3 |  1 |
|  3  7  4 | | x3 |   | 10 |                       |  3  7  4 | 10 |

R2 ← R2 - (-1)R1   | 1  1  2 |   8 |   Scaling R2:    | 1  1  2 |   8 |
R3 ← R3 - ( 3)R1   | 0 -1  5 |   9 |   R2 ← R2/(-1)   | 0  1 -5 |  -9 |
                   | 0  4 -2 | -14 |                  | 0  4 -2 | -14 |

R1 ← R1 - (1)R2    | 1  0  7 |  17 |   Scaling R3:    | 1  0  7 |  17  |
R3 ← R3 - (4)R2    | 0  1 -5 |  -9 |   R3 ← R3/18     | 0  1 -5 |  -9  |
                   | 0  0 18 |  22 |                  | 0  0  1 | 11/9 |

R1 ← R1 - ( 7)R3   | 1  0  0 |  8.444 |   RESULT:
R2 ← R2 - (-5)R3   | 0  1  0 | -2.889 |   x1 ≈ 8.44, x2 ≈ -2.89, x3 ≈ 1.22
                   | 0  0  1 |  1.222 |

Time Complexity: O(n^3)
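The row operations above (normalize each pivot row, then eliminate that column in all other rows) can be sketched as follows; `gauss_jordan` is an illustrative name, and no pivoting is included:

```python
def gauss_jordan(a, b):
    """Gauss-Jordan: scale each pivot row, then eliminate the pivot
    column in ALL other rows, leaving the identity matrix on the left."""
    n = len(b)
    a = [row[:] for row in a]
    b = b[:]
    for k in range(n):
        pivot = a[k][k]
        for j in range(n):           # scaling: R_k <- R_k / pivot
            a[k][j] /= pivot
        b[k] /= pivot
        for i in range(n):           # eliminate column k everywhere else
            if i != k:
                m = a[i][k]
                for j in range(n):
                    a[i][j] -= m * a[k][j]
                b[i] -= m * b[k]
    return b                         # b now holds the solution

x = gauss_jordan([[1, 1, 2], [-1, -2, 3], [3, 7, 4]], [8, 1, 10])
# x1 = 76/9 ≈ 8.44, x2 = -26/9 ≈ -2.89, x3 = 11/9 ≈ 1.22
```

The returned right-hand side is the solution vector, since the coefficient matrix has been reduced to the identity.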
Systems of Nonlinear Equations

• Locate the roots of a set of simultaneous nonlinear equations:

f1(x1, x2, x3, ..., xn) = 0
f2(x1, x2, x3, ..., xn) = 0
...
fn(x1, x2, x3, ..., xn) = 0

Example:
x1^2 + x1x2 = 10     ->   f1(x1, x2) = x1^2 + x1x2 - 10 = 0
x2 + 3x1x2^2 = 57    ->   f2(x1, x2) = x2 + 3x1x2^2 - 57 = 0
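For this example system, the Newton-Raphson approach developed on the following slides can be sketched with finite-difference partial derivatives (the slides derive analytic Jacobians; `newton_system`, the step h, and the starting guess (1.5, 3.5) are my assumptions):

```python
def newton_system(f1, f2, x1, x2, h=1e-6, tol=1e-10, max_iter=50):
    """Newton-Raphson for two equations f1 = f2 = 0, using
    finite-difference approximations of the partial derivatives."""
    for _ in range(max_iter):
        # Jacobian entries df_i/dx_j by forward differences.
        j11 = (f1(x1 + h, x2) - f1(x1, x2)) / h
        j12 = (f1(x1, x2 + h) - f1(x1, x2)) / h
        j21 = (f2(x1 + h, x2) - f2(x1, x2)) / h
        j22 = (f2(x1, x2 + h) - f2(x1, x2)) / h
        d = j11 * j22 - j12 * j21
        # Solve J * dx = -F by Cramer's rule.
        dx1 = (-f1(x1, x2) * j22 + j12 * f2(x1, x2)) / d
        dx2 = (-j11 * f2(x1, x2) + j21 * f1(x1, x2)) / d
        x1, x2 = x1 + dx1, x2 + dx2
        if abs(dx1) + abs(dx2) < tol:
            break
    return x1, x2

# The example above: x1^2 + x1*x2 = 10 and x2 + 3*x1*x2^2 = 57.
r1, r2 = newton_system(lambda x1, x2: x1**2 + x1 * x2 - 10,
                       lambda x1, x2: x2 + 3 * x1 * x2**2 - 57,
                       1.5, 3.5)
# should converge to the root x1 = 2, x2 = 3
```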
• First-order Taylor series expansion of a function of more than one variable:

f1,i+1 = f1,i + (∂f1,i/∂x1)(x1,i+1 - x1,i) + (∂f1,i/∂x2)(x2,i+1 - x2,i) = 0
f2,i+1 = f2,i + (∂f2,i/∂x1)(x1,i+1 - x1,i) + (∂f2,i/∂x2)(x2,i+1 - x2,i) = 0

• The root estimate occurs at the values of x1 and x2 where f1,i+1 = 0 and f2,i+1 = 0.
  Rearrange to solve for x1,i+1 and x2,i+1:

(∂f1,i/∂x1) x1,i+1 + (∂f1,i/∂x2) x2,i+1 = -f1,i + x1,i (∂f1,i/∂x1) + x2,i (∂f1,i/∂x2)
(∂f2,i/∂x1) x1,i+1 + (∂f2,i/∂x2) x2,i+1 = -f2,i + x1,i (∂f2,i/∂x1) + x2,i (∂f2,i/∂x2)

In matrix form:

| ∂f1,i/∂x1  ∂f1,i/∂x2 | | x1,i+1 |     | f1,i |   | ∂f1,i/∂x1  ∂f1,i/∂x2 | | x1,i |
|                      | |        | = - |      | + |                      | |      |
| ∂f2,i/∂x1  ∂f2,i/∂x2 | | x2,i+1 |     | f2,i |   | ∂f2,i/∂x1  ∂f2,i/∂x2 | | x2,i |

• Since x1,i, x2,i, f1,i, and f2,i are all known at the ith iteration, this is a set of two linear equations in the two unknowns x1,i+1 and x2,i+1.
• You may use several techniques to solve these equations.
General Representation for the Solution of Nonlinear Systems of Equations

n equations:                        Jacobian matrix:

f1(x1, x2, x3, ..., xn) = 0             | ∂f1/∂x1  ∂f1/∂x2  ...  ∂f1/∂xn |
f2(x1, x2, x3, ..., xn) = 0         J = | ∂f2/∂x1  ∂f2/∂x2  ...  ∂f2/∂xn |
...                                     |   ...      ...           ...   |
fn(x1, x2, x3, ..., xn) = 0             | ∂fn/∂x1  ∂fn/∂x2  ...  ∂fn/∂xn |

Solve this set of linear equations at each iteration:

[J_i]{X_i+1} = -{F_i} + [J_i]{X_i}

where [J_i] and {F_i} = {f1, ..., fn} are evaluated at the current iterate {X_i}, and {X_i+1} = {x1, ..., xn} is the new iterate.

{X}_i+1 values are found by solving this system of linear equations.
These iterations continue until convergence is reached, which happens when the approximate relative error (defined on the vector of values) falls below the desired error tolerance ε_tol.
Solution of Nonlinear Systems of Equations

Solve this set of linear equations at each iteration:  [J_i]{X_i+1} = -{F_i} + [J_i]{X_i}

Rearrange:  [J_i]{X_i+1 - X_i} = -{F_i}
            [J_i]{ΔX_i+1} = -{F_i}

| ∂f1/∂x1  ...  ∂f1/∂xn |   | Δx1 |        | f1(x1, x2, x3, ..., xn) |
|   ...           ...   |   | ... |   = -  |          ...            |
| ∂fn/∂x1  ...  ∂fn/∂xn |_i | Δxn |_i+1    | fn(x1, x2, x3, ..., xn) |_i
Notes on the Solution of Nonlinear Systems of Equations

n nonlinear equations,              Jacobian matrix:
n unknowns to be solved:
                                        | ∂f1/∂x1  ∂f1/∂x2  ...  ∂f1/∂xn |
f1(x1, x2, x3, ..., xn) = 0         J = | ∂f2/∂x1  ∂f2/∂x2  ...  ∂f2/∂xn |
f2(x1, x2, x3, ..., xn) = 0             |   ...      ...           ...   |
...                                     | ∂fn/∂x1  ∂fn/∂x2  ...  ∂fn/∂xn |
fn(x1, x2, x3, ..., xn) = 0

• This set of linear equations,

  [J_i]{ΔX_i+1} = -{F_i}

  must be solved in each and every iteration of the solution of the original nonlinear system of equations.

• For n unknowns, the Jacobian matrix has n^2 entries, and the time it takes to solve the linear system of equations (one iteration of the nonlinear system) is proportional to O(n^3).
• There is no guarantee of convergence for the nonlinear system; convergence may also be slow.
• In summary, finding the solution set of a nonlinear system of equations is an extremely compute-intensive task.
