INSTRUCTOR
DR. ÖZGÜR SELİMOĞLU
CONTENTS
c) Matrix Inversion
d) Iterative Methods
Matrix Inversion
Computing the inverse of a matrix and solving simultaneous equations are related tasks!
This method can be applied only when the coefficient matrix is square (n × n) and non-singular.
Consider the matrix equation
AX = I
where A is a square, non-singular matrix. Since A is non-singular, A⁻¹ exists and A⁻¹A = AA⁻¹ = I.
Pre-multiplying both sides of AX = I by A⁻¹, we get A⁻¹(AX) = A⁻¹I. That is, (A⁻¹A)X = A⁻¹I.
Hence, we get X = A⁻¹.
Assuming that LU decomposition is employed in the solution, the solution phase (forward and back substitution)
must be repeated n times, once for each column bᵢ of the identity matrix. Since computation is proportional to n³
for the decomposition phase and n² for each vector of the solution phase, inversion is considerably more expensive
than the solution of Ax = b with a single constant vector b.
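The procedure above can be sketched in Python with NumPy. A full implementation would factor A once with LU and reuse the factors for the n substitution passes; here `np.linalg.solve` stands in for that solve step to keep the sketch self-contained.

```python
import numpy as np

def invert(A):
    """Invert A by solving A x = e_i for each column e_i of the identity.

    An implementation exposing the LU factors would decompose A once
    (cost ~ n^3) and reuse the factors for the n forward/back substitution
    passes (cost ~ n^2 each); np.linalg.solve is used here only to keep
    the sketch self-contained.
    """
    n = A.shape[0]
    return np.column_stack([np.linalg.solve(A, e) for e in np.eye(n)])

# Hypothetical non-singular 3 x 3 test matrix.
A = np.array([[2.0, -1.0, 3.0],
              [1.0,  1.0, 1.0],
              [1.0, -1.0, 1.0]])
print(np.allclose(invert(A) @ A, np.eye(3)))  # True
```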
Example: Solve the following linear equation by inversion method.
2x-y+3z = 9
x+y+z = 6
x-y+z =2
Solution: First we write the given system in the form AX = B, where X is the vector of unknowns,
A is the matrix of coefficients, and B is the vector of constants.
Here

A = | 2  −1  3 |        | 9 |
    | 1   1  1 |,   B = | 6 |
    | 1  −1  1 |        | 2 |

Expanding the determinant of A along its first row (see https://www.mathsisfun.com/algebra/matrix-determinant.html):

|A| = 2[(1×1) − (−1×1)] − (−1)[(1×1) − (1×1)] + 3[(1×(−1)) − (1×1)]
    = 2[2] + 1[0] + 3[−2]
    = 4 + 0 − 6
    = −2 ≠ 0
Steps to find Adj A: form the minor matrix, then the cofactor matrix, then transpose the cofactor matrix.
Carrying out these steps gives

Adj A = |  2  −2  −4 |
        |  0  −1   1 |,    A⁻¹ = (1/|A|) Adj A = −(1/2) Adj A
        | −2   1   3 |

and hence X = A⁻¹B = (1, 2, 3)ᵀ, i.e. x = 1, y = 2, z = 3.
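The example can be checked numerically in a few lines of NumPy, computing the determinant and then X = A⁻¹B:

```python
import numpy as np

# A and B from the example above.
A = np.array([[2.0, -1.0, 3.0],
              [1.0,  1.0, 1.0],
              [1.0, -1.0, 1.0]])
B = np.array([9.0, 6.0, 2.0])

det_A = np.linalg.det(A)     # should be -2, so A is non-singular
X = np.linalg.inv(A) @ B     # X = A^(-1) B

print(round(float(det_A), 6))  # -2.0
print(np.round(X, 6))          # [1. 2. 3.]
```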
Iterative Methods:
In contrast to direct methods, which obtain the solution in a finite number of operations, iterative methods start with an initial guess for the solution x and repeatedly refine it until a prescribed convergence criterion is met.
Gauss-Seidel Method
The Gauss-Seidel iteration is obtained by solving the ith equation of Ax = b for xᵢ:

xᵢ ← (1/Aᵢᵢ)(bᵢ − Σⱼ≠ᵢ Aᵢⱼxⱼ),   i = 1, 2, . . . , n        (1)

We start by choosing the starting vector x. If a good guess for the solution is not available, x can be chosen randomly. Equation (1)
is then used to recompute each element of x, always using the latest available values of xⱼ. This completes one iteration cycle. The
procedure is repeated until the changes in x between successive iteration cycles become sufficiently small.
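The iteration cycle described above can be sketched as follows; the diagonally dominant test system is an illustrative assumption, not taken from the text.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-9, max_iter=500):
    """Gauss-Seidel iteration per Eq. (1): each x[i] is recomputed using
    the latest available values of the other components."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]       # sum of A[i, j] * x[j], j != i
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:     # change between cycles small?
            return x
    return x

# Illustrative diagonally dominant system (an assumption, not from the text).
A = np.array([[ 4.0, -1.0,  1.0],
              [-1.0,  4.0, -2.0],
              [ 1.0, -2.0,  4.0]])
b = np.array([12.0, -1.0, 5.0])
x = gauss_seidel(A, b)
print(np.allclose(A @ x, b))  # True
```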
Convergence of the Gauss-Seidel method can be improved by a technique known as relaxation. The aim is to take the new value of
xᵢ as a weighted average of its previous value and the value predicted by Eq. (1). The corresponding iterative formula is

xᵢ ← (ω/Aᵢᵢ)(bᵢ − Σⱼ≠ᵢ Aᵢⱼxⱼ) + (1 − ω)xᵢ,   i = 1, 2, . . . , n        (2)

where the weight ω is called the relaxation factor. It can be seen that if ω = 1, no relaxation takes place, because Eqs. (1) and (2)
produce the same result. If ω < 1, Eq. (2) represents interpolation between the old xᵢ and the value given by Eq. (1). This is called
under-relaxation. In cases where ω > 1, we have extrapolation, or over-relaxation.
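A minimal sketch of one relaxed iteration cycle, Eq. (2); the test system is again an illustrative assumption. With omega = 1 it reduces to plain Gauss-Seidel.

```python
import numpy as np

def sor_step(A, b, x, omega=1.0):
    """One cycle of Eq. (2): Gauss-Seidel with relaxation factor omega."""
    for i in range(len(b)):
        s = A[i] @ x - A[i, i] * x[i]             # sum over j != i
        gs = (b[i] - s) / A[i, i]                 # value predicted by Eq. (1)
        x[i] = omega * gs + (1.0 - omega) * x[i]  # weighted average
    return x

# Illustrative symmetric positive-definite system (an assumption).
A = np.array([[ 4.0, -1.0,  1.0],
              [-1.0,  4.0, -2.0],
              [ 1.0, -2.0,  4.0]])
b = np.array([12.0, -1.0, 5.0])

x = np.zeros(3)
for _ in range(100):
    x = sor_step(A, b, x, omega=1.1)   # mild over-relaxation
print(np.allclose(A @ x, b))  # True
```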
Let Δx⁽ᵏ⁾ = |x⁽ᵏ⁾ − x⁽ᵏ⁻¹⁾| be the magnitude of the change in x during the kth iteration (carried out without relaxation, i.e., with ω = 1). If k is
sufficiently large (say k ≥ 5), it can be shown that an approximation of the optimal value of ω is

ω_opt ≈ 2 / (1 + √(1 − (Δx⁽ᵏ⁺ᵖ⁾/Δx⁽ᵏ⁾)^(1/p)))

where p is a positive integer.
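This estimate can be sketched in code: run k + p plain (ω = 1) Gauss-Seidel cycles, record the change Δx in each cycle, and apply the formula. The choices k = 10 and p = 1 below are illustrative assumptions.

```python
import math
import numpy as np

def estimate_omega(A, b, k=10, p=1):
    """Estimate the optimal relaxation factor: run k + p cycles with
    omega = 1, record the change in x per cycle, then apply
    omega_opt ~ 2 / (1 + sqrt(1 - (dx_(k+p) / dx_k) ** (1 / p)))."""
    x = np.zeros(len(b))
    deltas = []
    for _ in range(k + p):
        x_old = x.copy()
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (b[i] - s) / A[i, i]
        deltas.append(np.linalg.norm(x - x_old))
    ratio = deltas[k + p - 1] / deltas[k - 1]   # dx^(k+p) / dx^(k)
    return 2.0 / (1.0 + math.sqrt(1.0 - ratio ** (1.0 / p)))

# Illustrative diagonally dominant system (an assumption).
A = np.array([[ 4.0, -1.0,  1.0],
              [-1.0,  4.0, -2.0],
              [ 1.0, -2.0,  4.0]])
b = np.array([12.0, -1.0, 5.0])
omega = estimate_omega(A, b)   # a value between 1 and 2 for this system
```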
After five more iterations the results would agree with the exact solution x₁ = 3, x₂ = x₃ = 1 within five decimal places.
REFERENCES
• Jaan Kiusalaas, “Numerical Methods in Engineering with Python 3”, 3rd ed., Cambridge University Press, NY, 2013
• S.C. Chapra and R.P. Canale, “Numerical Methods for Engineers”, 6th ed., McGraw-Hill, NY, 2010
• www.python.org