
Module 1

Systems of Linear
Equations
Eduardo E. Descalsota, Jr.
Email: descaltronix@gmail.com
Course Website: http://descaltronix.ucoz.com
Subtopics
1.1 Methods of Solution
1.2 Matrix Inversion Method
1.3 Gauss Elimination Method
1.4 Gauss-Jordan Method
1.5 LU Decomposition Methods
1.6 Jacobi’s Iteration Method
1.7 Gauss-Seidel Iteration Method
Notations of Linear Equations
Matrix Notations

• the system can be written compactly in matrix form, or simply Ax = b
Augmented Matrix
• obtained by adjoining the constant vector b to the
coefficient matrix A
Uniqueness of Solution
A system of n linear equations in n unknowns has a
unique solution, provided that:
• the coefficient matrix is nonsingular, i.e., its
determinant |A| ≠ 0
• rows and columns of a nonsingular matrix are
linearly independent
– no row (or column) is a linear combination of other rows
(or columns)
1.1 Methods of Solution
• direct methods
–transform the original equations into equivalent
equations that can be solved more easily
–transformation is carried out by applying certain
operations
Methods of Solution (cont’d.)
• indirect or iterative methods
–start with a guess of the solution x
–then repeatedly refine the solution until a certain
convergence criterion is reached
–less efficient than direct methods due to the large
number of operations or iterations required
Direct Methods
1. Matrix Inverse Method
2. Gauss Elimination Method
3. Gauss-Jordan Method
4. LU Decomposition Methods
Advantages and Drawbacks
• do not introduce any truncation error
• round-off error is introduced by floating-point
operations
Indirect or Iterative Methods
1. Jacobi’s Iteration Method
2. Gauss-Seidel Iteration Method
Advantages and Drawbacks
• more useful for solving ill-conditioned systems of
equations
• round-off errors (or even arithmetic mistakes) in
one iteration cycle are corrected in subsequent
cycles
• introduce truncation error
• do not always converge to the solution
– when the method does converge, the initial guess affects
only the number of iterations required for convergence
1.2 Matrix Inversion Method
• the inverse of a matrix is obtained by dividing its
adjoint matrix by its determinant: A⁻¹ = adj(A) / |A|
Requirements for obtaining a unique
inverse of a matrix
1. The matrix is a square matrix.
2. The determinant of the matrix is not zero (the
matrix is non-singular)
– if |A| = 0, then the elements of A⁻¹ approach infinity

The inverse of a matrix is also defined by the
relationship:
A⁻¹A = I
Solving Linear Equations
Consider a set of three simultaneous linear algebraic equations:
a11x1 + a12x2 + a13x3 = b1
a21x1 + a22x2 + a23x3 = b2
a31x1 + a32x2 + a33x3 = b3
can be expressed in the matrix form:
Ax = b
we obtain the solution of x as:
x = A⁻¹b
Example 1:
• Solve the following simultaneous linear equations:
x + 3y = 5
4x – y = 12
Example 2:
• Solve the following simultaneous linear equations:
x – y + 3z = 5
4x + 2y – z = 0
x + 3y + z = 5
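As a check on Example 2, the matrix inversion method x = A⁻¹b can be sketched in Python with NumPy (an illustrative aside; the course itself uses Matlab):

```python
import numpy as np

# Coefficient matrix and constant vector of Example 2:
# x - y + 3z = 5,  4x + 2y - z = 0,  x + 3y + z = 5
A = np.array([[1.0, -1.0, 3.0],
              [4.0,  2.0, -1.0],
              [1.0,  3.0,  1.0]])
b = np.array([5.0, 0.0, 5.0])

# The system has a unique solution only if |A| != 0 (A is nonsingular)
assert abs(np.linalg.det(A)) > 1e-12

x = np.linalg.inv(A) @ b   # x = A^{-1} b
print(x)                   # approximately [0, 1, 2]
```

In practice np.linalg.solve(A, b) is preferred over forming A⁻¹ explicitly, since it uses Gauss elimination and is both cheaper and more accurate.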
Example 3:
• Solve the following simultaneous linear equations:
w + x = 7
2w + 3x – y = 9
4x + 2y + 3z = 10
2y – 4z = 12

1.3 Gauss Elimination Method
• a popular technique for solving simultaneous linear
algebraic equations (Ax = b)
• reduces the coefficient matrix into an upper triangular
matrix (Ux = c)
• consists of two parts:
– elimination phase
– solution phase
• Initial Form: Ax = b
• Final Form: Ux = c
Gauss Elimination Operations
1. Multiplication of one equation by a non-zero constant.
2. Addition of a multiple of one equation to another
equation.
3. Interchange of two equations.

• Ax = b and Ux = c are equivalent if the sequence of
operations produces the new system Ux = c
• A is invertible if U is invertible
Gauss Elimination
Process
1. Eliminate x1 from the second and
third equations assuming a11 ≠ 0.
2. Eliminate x2 from the third row
assuming a'22 ≠ 0.
3. Apply back substitution:
x3 from a''33x3 = b''3
x2 from a'22x2 + a'23x3 = b'2
x1 from a11x1 + a12x2 + a13x3 = b1
Pivoting
• Gauss elimination method fails if any one of the
pivots becomes zero
• What if pivot is zero?
– Solution: interchange the equation with one of the
equations below it so that the pivot is nonzero
Example:
Solve by Gauss elimination.
Solution: By back substitution:
Problem:
Use the method of Gaussian elimination to solve the
following system of linear equations:
x1 + x2 + x3 – x4 = 2
4x1 + 4x2 + x3 + x4 = 11
x1 – x2 – x3 + 2x4 = 0
2x1 + x2 + 2x3 – 2x4 = 2
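A minimal Python/NumPy sketch of the two phases (elimination with row interchanges for pivoting, then back substitution), applied to the problem above; the function name gauss_solve is illustrative:

```python
import numpy as np

def gauss_solve(A, b):
    """Gauss elimination with partial pivoting, then back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Elimination phase: reduce A to upper triangular form U
    for k in range(n - 1):
        # interchange rows so the pivot is the largest available entry
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Solution phase: back substitution on Ux = c
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[1, 1, 1, -1],
              [4, 4, 1, 1],
              [1, -1, -1, 2],
              [2, 1, 2, -2]])
b = np.array([2, 11, 0, 2])
print(gauss_solve(A, b))  # approximately [1, 2, -1, 0]
```

Choosing the largest available pivot (partial pivoting) not only avoids zero pivots but also improves numerical stability.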
1.4 Gauss-Jordan Method
• Ax = b is reduced to a diagonal set Ix = b'
where: I = a unit matrix or identity matrix
Ix = b' is equivalent to x = b', where b' is the solution vector
• implements the same series of row operations as
Gauss elimination, except that elimination is applied
above as well as below the main diagonal
– all off-diagonal elements are reduced to zero
– all main diagonal elements become 1
Gauss-Jordan Process
1. Determine if the pivot is non-zero. If it is zero, swap the
row with a succeeding row that has a non-zero element in
that column.
2. Divide the pivot row by the pivot to make the pivot equal to 1.
3. Eliminate all other elements on that column where pivot
is located.
4. Go to the next row, and repeat all steps until reaching
the last row.
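The four steps above can be sketched as a short Python/NumPy routine (gauss_jordan is an illustrative name), applied here to the system of Example 1 (x + 3y = 5, 4x – y = 12):

```python
import numpy as np

def gauss_jordan(A, b):
    """Reduce the augmented matrix [A | b] to [I | x]."""
    n = len(b)
    M = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
    for k in range(n):
        # Step 1: if the pivot is zero, swap with a lower row
        # that has a nonzero element in this column
        if abs(M[k, k]) < 1e-12:
            for r in range(k + 1, n):
                if abs(M[r, k]) > 1e-12:
                    M[[k, r]] = M[[r, k]]
                    break
        # Step 2: divide the pivot row so the pivot becomes 1
        M[k] /= M[k, k]
        # Step 3: eliminate all other entries in the pivot column
        # (above as well as below the diagonal)
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]
    return M[:, -1]  # the last column is the solution vector b'

A = np.array([[1, 3], [4, -1]])
b = np.array([5, 12])
x = gauss_jordan(A, b)
print(x)  # approximately [3.1538, 0.6154], i.e., x = 41/13, y = 8/13
```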
Example:
Solve the following equations by the Gauss-Jordan method.
Problem 1:
Solve the following system of linear equations using
the Gauss-Jordan method.
Problem 2:
Solve the following system of linear equations using the
Gauss-Jordan method.
Problem 3:
Solve the following system of equations:

using:
(a) Gaussian elimination and
(b) Gauss-Jordan elimination
1.5 LU Decomposition Methods
• expressing the matrix as the multiplication of a
lower triangular matrix L and an upper triangular
matrix U
• A = LU
• Doolittle’s Method
• Crout’s Method
• Choleski’s Method
LU Decomposition
• aka LU Factorization
• process of computing L and U for a given A
• expressed as a product of a lower triangular matrix L and
an upper triangular matrix U
Constraints
• LU decomposition is not unique unless certain constraints
are placed on L or U
Doolittle’s Decomposition Method
• transforms Ax = b to LUx = b
Example:
• Use Doolittle’s decomposition method to solve the
equations Ax = b, where
Decomposition Phase: A = LU
A = | 1   4   1 |    L = | 1    0    0 |    U = | 1   4   1 |
    | 1   6  -1 |        | 1    1    0 |        | 0   2  -2 |
    | 2  -1   2 |        | 2  -4.5   1 |        | 0   0  -9 |
Solution Phase:
Forward substitution, Ly = b:
y1 = 7
y1 + y2 = 13  →  y2 = 13 – 7 = 6
2y1 – 4.5y2 + y3 = 5  →  y3 = 5 – 2(7) + 4.5(6) = 18
Backward substitution, Ux = y:
x1 + 4x2 + x3 = 7
2x2 – 2x3 = 6
-9x3 = 18  →  x3 = -2
2x2 = 6 + 2x3 = 6 + 2(-2) = 2  →  x2 = 1
x1 = 7 – 4x2 – x3 = 7 – 4(1) + 2 = 5
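A minimal Python/NumPy sketch of Doolittle’s factorization; A and b below are reconstructed from the substitution steps of the worked example, so treat them as an illustration:

```python
import numpy as np

def doolittle(A):
    """Doolittle factorization: A = LU with a unit diagonal on L."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for k in range(n):
        # row k of U
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]
        # column k of L (below the diagonal)
        L[k+1:, k] = (A[k+1:, k] - L[k+1:, :k] @ U[:k, k]) / U[k, k]
    return L, U

# System reconstructed from the worked example's substitution steps
A = np.array([[1.0, 4.0, 1.0],
              [1.0, 6.0, -1.0],
              [2.0, -1.0, 2.0]])
b = np.array([7.0, 13.0, 5.0])

L, U = doolittle(A)
y = np.linalg.solve(L, b)   # forward substitution: Ly = b
x = np.linalg.solve(U, y)   # backward substitution: Ux = y
print(x)                    # approximately [5, 1, -2]
```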
Problem:
• Solve AX = B with Doolittle’s decomposition and compute
|A|, where
Crout’s Decomposition Method
• A = LU, where L is a general lower triangular matrix and
U is an upper triangular matrix with a unit diagonal
Example: Solve the following set of equations by
Crout’s method:
2x + y + 4z =12
8x – 3y + 2z =20
4x + 11y – z =33

Decomposition Phase (column operations):
c2 = c2 – 0.5c1, c3 = c3 – 2c1
c3 = c3 – 2c2
Solution Phase:
Forward substitution, Ly = b:
2y1 = 12  →  y1 = 6
8y1 – 7y2 = 20  →  -7y2 = 20 – 8(6)  →  y2 = -28/-7 = 4
4y1 + 9y2 – 27y3 = 33  →  -27y3 = 33 – 4(6) – 9(4)  →  y3 = -27/-27 = 1
Backward substitution, Ux = y:
z = y3 = 1
y + 2z = y2  →  y = 4 – 2(1) = 2
x + ½y + 2z = y1  →  x = 6 – ½(2) – 2(1) = 3
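Crout’s factorization can be sketched the same way (unit diagonal on U instead of L); applied to the example system it reproduces the factors used in the solution phase above:

```python
import numpy as np

def crout(A):
    """Crout factorization: A = LU with a unit diagonal on U."""
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)
    for k in range(n):
        # column k of L (diagonal and below)
        L[k:, k] = A[k:, k] - L[k:, :k] @ U[:k, k]
        # row k of U (right of the diagonal)
        U[k, k+1:] = (A[k, k+1:] - L[k, :k] @ U[:k, k+1:]) / L[k, k]
    return L, U

# Example system: 2x + y + 4z = 12, 8x - 3y + 2z = 20, 4x + 11y - z = 33
A = np.array([[2.0, 1.0, 4.0],
              [8.0, -3.0, 2.0],
              [4.0, 11.0, -1.0]])
b = np.array([12.0, 20.0, 33.0])

L, U = crout(A)
y = np.linalg.solve(L, b)   # forward substitution
x = np.linalg.solve(U, y)   # backward substitution
print(x)                    # approximately [3, 2, 1]
```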
Problem:
• Solve the following set of equations by using the Crout’s
method:
2x1 + x2 + x3 = 7
x1 + 2x2 + x3 = 8
x1 + x2 + 2x3 = 9
Choleski’s Decomposition
• A = LLᵀ, where U = Lᵀ
• Limitations:
–requires A to be symmetric since the matrix
product LLT is symmetric
–involves taking square roots of certain
combinations of the elements of A
– square roots of negative numbers can be avoided only
if A is positive definite
Looking at Choleski’s A = LLᵀ
Example:
• Compute the Choleski’s decomposition of matrix A and
solve x by using the constant vector b.
Solution:
Using Matlab (chol returns the upper triangular factor, so U = chol(A) and L = Uᵀ):
>> A = [1 1 1; 1 2 2; 1 2 3];
>> b = [1 3/2 3]’;
>> U = chol(A), L = U’
U =
     1     1     1
     0     1     1
     0     0     1
L =
     1     0     0
     1     1     0
     1     1     1
>> x = U\(L\b)
x =
    0.5000
   -1.0000
    1.5000
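The same computation can be sketched in Python; note that np.linalg.cholesky returns the lower triangular factor L with A = LLᵀ, whereas Matlab’s chol returns the upper one:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 2.0],
              [1.0, 2.0, 3.0]])
b = np.array([1.0, 1.5, 3.0])

# NumPy's cholesky returns the LOWER triangular L with A = L @ L.T
L = np.linalg.cholesky(A)
y = np.linalg.solve(L, b)     # forward substitution: Ly = b
x = np.linalg.solve(L.T, y)   # backward substitution: L^T x = y
print(x)                      # approximately [0.5, -1, 1.5]
```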
Problem:
• Solve the equation Ax = b by Choleski’s decomposition
method, where
Additional Problem:
• Given the LU decomposition A = LU, determine A and
|A|.
1.6 Jacobi’s Iteration Method
Consider the equation:
3x + 1 = 0
which can be cast into an iterative scheme by rewriting it as:
2x = -x – 1  or  x = (-x – 1)/2
which can be expressed as:
xk+1 = (-xk – 1)/2
where x0 is the initial guess.
Will this converge?
Another iterative scheme rewrites the same equation as:
x = -2x – 1
xk+1 = -2xk – 1
Will it converge?
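The behaviour of the two schemes can be checked numerically with a few lines of Python (a small illustration):

```python
# Fixed-point iteration for 3x + 1 = 0 (root x = -1/3), comparing the
# scheme x = (-x - 1)/2 with the scheme x = -2x - 1.
def iterate(g, x0, n):
    x = x0
    for _ in range(n):
        x = g(x)
    return x

x_conv = iterate(lambda x: (-x - 1) / 2, 0.0, 30)
x_div = iterate(lambda x: -2 * x - 1, 0.0, 30)
print(x_conv)      # approaches -1/3: the multiplier |-1/2| < 1
print(abs(x_div))  # grows without bound: the multiplier |-2| > 1
```

The first scheme converges because the magnitude of the multiplier of x is less than 1; the second diverges because it is greater than 1.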
Jacobi’s Iteration Method
• aka the method of simultaneous displacements
• applicable to predominantly diagonal systems
• Consider the system of linear equations:
a11x1 + a12x2 + a13x3 = b1
a21x1 + a22x2 + a23x3 = b2
a31x1 + a32x2 + a33x3 = b3
where a11, a22, and a33 are the largest coefficients
• Unknowns are solved using the equations:
x1 = (b1 – a12x2 – a13x3)/a11
x2 = (b2 – a21x1 – a23x3)/a22
x3 = (b3 – a31x1 – a32x2)/a33
Approximations and Iterations
• Let the initial approximations be x10, x20, and x30, respectively
(the trailing index denotes the iteration number)
• it is a general practice to assume x10 = x20 = x30 = 0
Iteration 1:
x11 = (b1 – a12x20 – a13x30)/a11
x21 = (b2 – a21x10 – a23x30)/a22
x31 = (b3 – a31x10 – a32x20)/a33
Iteration 2:
x12 = (b1 – a12x21 – a13x31)/a11
x22 = (b2 – a21x11 – a23x31)/a22
x32 = (b3 – a31x11 – a32x21)/a33
• the iteration process is continued until the values of x1, x2,
and x3 are found to a pre-assigned degree of accuracy
Example:
Solve the following equations by Jacobi’s method:
15x + 3y – 2z = 85
2x + 10y + z = 51
x – 2y + 8z = 5
Iterative scheme:
xk+1 = (85 – 3yk + 2zk)/15
yk+1 = (51 – 2xk – zk)/10
zk+1 = (5 – xk + 2yk)/8
Let x0 = y0 = z0 = 0:
x1 = 85/15 = 17/3, y1 = 51/10, z1 = 5/8
Iteration 2:
x2 = [85 – 3(51/10) + 2(5/8)]/15 = 4.73
y2 = [51 – 2(17/3) – 5/8]/10 = 3.904
z2 = [5 – 17/3 + 2(51/10)]/8 = 1.192
cont’d…
Iteration 3:
x3 = [85 – 3(3.904) + 2(1.192)]/15 = 5.045
y3 = [51 – 2(4.73) – 1(1.192)]/10 = 4.035
z3 = [5 – 1(4.73) + 2(3.904)]/8 = 1.010
Iteration 4:
x4 = [85 – 3(4.035) + 2(1.010)]/15 = 4.994
y4 = [51 – 2(5.045) – 1(1.010)]/10 = 3.990
z4 = [5 – 1(5.045) + 2(4.035)]/8 = 1.003
Iteration 5:
x5 = [85 – 3(3.990) + 2(1.003)]/15 = 5.002
y5 = [51 – 2(4.994) – 1(1.003)]/10 = 4.001
z5 = [5 – 1(4.994) + 2(3.990)]/8 = 0.998
Continuing the whole process of iteration:
 k     x       y       z
 1   5.667   5.100   0.625
 2   4.730   3.904   1.192
 3   5.045   4.035   1.010
 4   4.994   3.990   1.003
 5   5.002   4.001   0.998
 6   5.000   4.000   1.000
 7   5.000   4.000   1.000
To check, substitute x = 5, y = 4, z = 1:
15x + 3y – 2z = 85:  15(5) + 3(4) – 2(1) = 75 + 12 – 2 = 85
2x + 10y + z = 51:   2(5) + 10(4) + 1(1) = 10 + 40 + 1 = 51
x – 2y + 8z = 5:     1(5) – 2(4) + 8(1) = 5 – 8 + 8 = 5
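A minimal Python/NumPy sketch of Jacobi’s method applied to the example system (jacobi is an illustrative name; every component is updated from the previous iterate, hence "simultaneous displacements"):

```python
import numpy as np

def jacobi(A, b, tol=1e-6, max_iter=100):
    """Jacobi iteration: each component is updated from the PREVIOUS iterate."""
    n = len(b)
    x = np.zeros(n)  # common starting guess x0 = 0
    for _ in range(max_iter):
        x_new = np.empty(n)
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]   # sum of off-diagonal terms
            x_new[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x_new - x)) < tol:  # pre-assigned accuracy reached
            return x_new
        x = x_new
    return x

# Diagonally dominant system from the worked example
A = np.array([[15.0, 3.0, -2.0],
              [2.0, 10.0, 1.0],
              [1.0, -2.0, 8.0]])
b = np.array([85.0, 51.0, 5.0])
print(jacobi(A, b))  # converges to approximately [5, 4, 1]
```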
Problem 1:
Use the Jacobi iterative scheme to obtain the solutions of
the system of equations correct to three decimal places.
Problem 2:
Use Jacobi iterative scheme to obtain the solution of the
system of equations correct to two decimal places.
1.7 Gauss-Seidel Iteration Method
• aka the method of successive approximations
• applicable to predominantly diagonal systems
• absolute value of the diagonal element in each case is
larger than or equal to the sum of the absolute values of
the other elements in that row
Gauss-Seidel vs. Jacobi
• Each iteration of Jacobi method updates the whole set of
N variables at a time
• Gauss-Seidel can speed up the convergence by using all
the most recent values of variables for updating each
variable even in the same iteration
Gauss-Seidel generalization formula
xi(k+1) = (bi – Σ j<i aij·xj(k+1) – Σ j>i aij·xj(k)) / aii
Gauss-Seidel Iterations
Iteration 1:
x11 = (b1 – a12x20 – a13x30)/a11
x21 = (b2 – a21x11 – a23x30)/a22
x31 = (b3 – a31x11 – a32x21)/a33
Iteration 2:
x12 = (b1 – a12x21 – a13x31)/a11
x22 = (b2 – a21x12 – a23x31)/a22
x32 = (b3 – a31x12 – a32x22)/a33
Iteration 3:
x13 = (b1 – a12x22 – a13x32)/a11
x23 = (b2 – a21x13 – a23x32)/a22
x33 = (b3 – a31x13 – a32x23)/a33
Iteration 4:
x14 = (b1 – a12x23 – a13x33)/a11
x24 = (b2 – a21x14 – a23x33)/a22
x34 = (b3 – a31x14 – a32x24)/a33
Example:
Solve the following equations by the Gauss-Seidel method:
4x1 + x2 – x3 = 4
x1 – 8x2 + 3x3 = -4
2x1 + x2 + 9x3 = 12
Let x10 = x20 = x30 = 0.
Iteration 1:
x11 = [4 – 1(0) + 1(0)]/4 = 1
x21 = [-4 – 1(1) – 3(0)]/(-8) = 5/8
x31 = [12 – 2(1) – 1(5/8)]/9 = 1.042
Iteration 2:
x12 = [4 – 1(5/8) + 1(1.042)]/4 = 1.104
x22 = [-4 – 1(1.104) – 3(1.042)]/(-8) = 1.029
x32 = [12 – 2(1.104) – 1(1.029)]/9 = 0.974
Iteration 3:
x13 = [4 – 1(1.029) + 1(0.974)]/4 = 0.986
x23 = [-4 – 1(0.986) – 3(0.974)]/(-8) = 0.988
x33 = [12 – 2(0.986) – 1(0.988)]/9 = 1.004

 i     x1      x2      x3
 0   0.000   0.000   0.000
 1   1.000   0.625   1.042
 2   1.104   1.029   0.974
 3   0.986   0.988   1.004
 4   1.004   1.002   0.999
 5   0.999   0.999   1.000
 6   1.000   1.000   1.000
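A Gauss-Seidel sketch differs from the Jacobi one only in updating the solution vector in place, so each component uses the newest available values; A and b below are inferred from the worked example’s substitution steps:

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-6, max_iter=100):
    """Gauss-Seidel: each component update uses the newest available values."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]  # uses already-updated components
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# System inferred from the example's substitution steps
A = np.array([[4.0, 1.0, -1.0],
              [1.0, -8.0, 3.0],
              [2.0, 1.0, 9.0]])
b = np.array([4.0, -4.0, 12.0])
print(gauss_seidel(A, b))  # converges to approximately [1, 1, 1]
```

Because each row's diagonal dominates the other coefficients, both Jacobi and Gauss-Seidel converge here, but Gauss-Seidel reaches the tolerance in fewer iterations.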
Problem:
Solve the following equations by the Gauss-Seidel method.
1) 4x – y + z = 12
-x + 4y – 2z = -1
x – 2y + 4z = 5

2) 2x – y + 3z = 4
x + 9y – 2z = -8
4x – 8y + 11z = 15
Matlab Functions
• x = A\b
– returns the solution x of Ax = b
– obtained by Gauss elimination
• [L,U] = lu(A)
– returns an upper triangular matrix in U and a permuted lower triangular matrix L
• U = chol(A)
– Choleski’s decomposition; chol returns the upper
triangular factor U with A = UᵀU
• B = inv(A)
– returns B as the inverse of A
• c = cond(A)
– returns the condition number of the matrix A
Matlab Functions (cont’d.)
• A = spdiags(B,d,n,n)
– creates an n×n sparse matrix from the columns of matrix B by placing the
columns along the diagonals specified by d
• A = full(S)
– converts the sparse matrix S into a full matrix A
• S = sparse(A)
– converts the full matrix A into a sparse matrix S
• x = lsqr(A,b)
– LSQR, a conjugate-gradient-type method for solving Ax = b
• spy(S)
– draws a map of the nonzero elements of S
Exercises: Set 1
Solve the following set of simultaneous linear equations by the matrix
inverse method.
(a)

(b)
Exercises: Set 2
Solve the following systems using Gaussian elimination
and Gauss-Jordan process:
a)

b)
Exercises: Set 3
1) Solve using appropriate LU method.
a)

b)
Exercises: Set 4
Solve using Jacobi’s and Gauss-Seidel iteration methods:
a)

b)
