
Solving linear equations

A system of n linear algebraic equations in n unknowns has the general form

A_{11}x_1 + A_{12}x_2 + \cdots + A_{1n}x_n = b_1
A_{21}x_1 + A_{22}x_2 + \cdots + A_{2n}x_n = b_2
\vdots
A_{n1}x_1 + A_{n2}x_2 + \cdots + A_{nn}x_n = b_n

The unknowns are x1 to xn. "Linear" means that each unknown appears only to the first power: there are no powers such as xi^2 and no cross products such as xi*xj. In matrix-vector form, the above equations can be written as Ax = b. This system of n linear equations with n unknowns has a unique solution provided the coefficient matrix A is nonsingular, i.e., its determinant is nonzero (|A| ≠ 0). Equivalently, the rows and columns of a nonsingular matrix are linearly independent.
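Such a system can be solved directly with NumPy (a minimal sketch; the matrix and vector below are taken from the Gauss elimination example later in these notes):

import numpy as np

# Coefficient matrix A and right-hand-side vector b
A = np.array([[ 4.0, -2.0,  1.0],
              [-2.0,  4.0, -2.0],
              [ 1.0, -2.0,  4.0]])
b = np.array([11.0, -16.0, 17.0])

x = np.linalg.solve(A, b)   # solves Ax = b
print(x)                    # [ 1. -2.  3.]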

Singularity and ill conditioning


We have singularity when some of the rows (or columns) of the coefficient matrix are linearly
dependent. For example, the set of equations x + 3y = 4 and 2x + 6y = 8 is singular, because the second
equation is obtained by multiplying the first one by 2. In this case there are infinitely many solutions.
If the right-hand side of the second equation is not 2 × 4 = 8, we still have a problem. For example, if 2x + 6y = 1,
the two equations contradict each other, and no solution exists.
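A quick numerical check (a NumPy sketch) confirms this: the determinant of the coefficient matrix is zero, and the direct solver rejects the system.

import numpy as np

A = np.array([[1.0, 3.0],
              [2.0, 6.0]])   # second row = 2 × first row
print(np.linalg.det(A))      # 0.0, so A is singular

try:
    np.linalg.solve(A, np.array([4.0, 8.0]))
except np.linalg.LinAlgError as err:
    print(err)               # "Singular matrix"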

A nonsingular matrix can still be ill-conditioned, for example when its determinant is very small relative
to its norm, i.e. |A| << ‖A‖. Here the norm of A can be taken as the Euclidean norm

\|A\|_e = \sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n} A_{ij}^2}

or the infinity (row-sum) norm

\|A\|_\infty = \max_{1 \le i \le n} \sum_{j=1}^{n} |A_{ij}|

A formal measure of this conditioning is the matrix condition number, defined as
cond(A) = ‖A‖·‖A^{-1}‖. If cond(A) ≈ 1, the matrix is well conditioned. A greater degree of ill conditioning
corresponds to a greater condition number, and a singular matrix has an infinite condition number.
Example of ill conditioning:
The equations 2x + y = 3 and 2x + 1.001y = 0 have the solution x = 1501.5, y = −3000. They are ill
conditioned because the determinant |A| = 2(1.001) − 2(1) = 0.002 is much smaller than the norm of A.
The effect of the ill conditioning can be verified by modifying the second equation to 2x + 1.002y = 0.
The solution of the new set of equations is x = 751.5, y = −1500. That is, a 0.1% change in the
coefficient of y changed the solution by a factor of two.
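This sensitivity can be reproduced numerically (a sketch; np.linalg.cond returns the condition number, by default in the 2-norm):

import numpy as np

A1 = np.array([[2.0, 1.0], [2.0, 1.001]])
A2 = np.array([[2.0, 1.0], [2.0, 1.002]])   # 0.1% change in one coefficient
b  = np.array([3.0, 0.0])

print(np.linalg.cond(A1))       # ~5000, far from 1: ill conditioned
print(np.linalg.solve(A1, b))   # [ 1501.5 -3000. ]
print(np.linalg.solve(A2, b))   # [  751.5 -1500. ]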

Before solving a set of linear equations, we should also review the basic matrix operations: addition,
subtraction, multiplication (by scalars or matrices), transpose, determinant, and inversion. If you
are unfamiliar with them, consult https://www.cuemath.com/algebra/matrix-operations/ .
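All of these operations are also available in NumPy; as a minimal reference sketch:

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

print(A + B)             # addition
print(A - B)             # subtraction
print(2 * A)             # multiplication by a scalar
print(A @ B)             # matrix-matrix multiplication
print(A.T)               # transpose
print(np.linalg.det(A))  # determinant (-2.0)
print(np.linalg.inv(A))  # inverse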
There are two main classes of methods for solving linear equations: direct (elimination) methods and
iterative methods.
Elimination methods
1. Gauss elimination (consisting of the elimination phase and backward substitution phase)
Example: Solve the system of the following 3 equations.
4x1 − 2x2 + x3 = 11 (a)
−2x1 + 4x2 − 2x3 = −16 (b)
x1 − 2x2 + 4x3 = 17 (c)
Solution:
Equation (a)/2 + Equation (b) => 3x2 − 1.5x3 = −10.5 (b’)
−Equation (a)/4 + Equation (c) => −1.5x2 + 3.75x3 = 14.25 (c’)
Equation (b’) + 2 × Equation (c’) => 6x3 = 18 (c’’)
Thus, the new set of equations is
4x1 − 2x2 + x3 = 11 (a)
3x2 – 1.5x3 = −10.5 (b’)
6x3 = 18 (c’’)
Next, perform backward substitution.
From (c’’) we get x3 = 3. Then from (b’) we get 3x2 = 1.5x3 −10.5 = -6. Thus, x2 = -2. Finally, from (a)
we get x1 = 1.
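The same procedure can be coded directly. Below is a minimal sketch of Gauss elimination with backward substitution (no pivoting, so it assumes the diagonal entries remain nonzero, as in this example):

import numpy as np

def gauss_elimination(A, b):
    """Solve Ax = b by Gauss elimination followed by backward substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Elimination phase: zero out the entries below the diagonal, column by column
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]          # elimination multiplier
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Backward substitution phase
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[4.0, -2.0, 1.0], [-2.0, 4.0, -2.0], [1.0, -2.0, 4.0]])
b = np.array([11.0, -16.0, 17.0])
print(gauss_elimination(A, b))   # [ 1. -2.  3.]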

2. LU decomposition method


A square matrix A can, whenever Gauss elimination can be carried out on it without row interchanges,
be expressed as a product of a lower triangular matrix L and an upper triangular matrix U. That is, A = LU.
The equations Ax = b can then be rewritten as LUx = b. If we let y = Ux, then we get Ly = b, which can
be solved for y by forward substitution. Then Ux = y yields x by back substitution.
Several decomposition methods exist. Here we introduce Doolittle’s decomposition, because it is
closely related to Gauss elimination. In Doolittle’s decomposition the diagonal elements of L are all 1.
For a 3 × 3 matrix, assume that A can be decomposed as A = LU with

L = \begin{bmatrix} 1 & 0 & 0 \\ L_{21} & 1 & 0 \\ L_{31} & L_{32} & 1 \end{bmatrix}, \qquad
U = \begin{bmatrix} U_{11} & U_{12} & U_{13} \\ 0 & U_{22} & U_{23} \\ 0 & 0 & U_{33} \end{bmatrix}

Since A = LU, we get the following expression for matrix A:

A = \begin{bmatrix}
U_{11} & U_{12} & U_{13} \\
L_{21}U_{11} & L_{21}U_{12} + U_{22} & L_{21}U_{13} + U_{23} \\
L_{31}U_{11} & L_{31}U_{12} + L_{32}U_{22} & L_{31}U_{13} + L_{32}U_{23} + U_{33}
\end{bmatrix}
Perform Gauss elimination by applying the following operations to row 2 and row 3: multiply row 1 by
−L21 and add it to row 2, and multiply row 1 by −L31 and add it to row 3. A new matrix A’ is obtained:

A' = \begin{bmatrix}
U_{11} & U_{12} & U_{13} \\
0 & U_{22} & U_{23} \\
0 & L_{32}U_{22} & L_{32}U_{23} + U_{33}
\end{bmatrix}

Apply one more operation to row 3 of A’ (i.e. multiply row 2 by −L32 and add it to row 3), and we get

\begin{bmatrix}
U_{11} & U_{12} & U_{13} \\
0 & U_{22} & U_{23} \\
0 & 0 & U_{33}
\end{bmatrix} = U
The above process reveals that:

1. Matrix U is the upper triangular matrix that results from Gauss elimination.
2. The lower off-diagonal elements of L are the negatives of the multipliers used during Gauss elimination.
Example: Use Doolittle’s decomposition to factor the matrix

A = \begin{bmatrix} 1 & 4 & 1 \\ 1 & 6 & -1 \\ 2 & -1 & 2 \end{bmatrix}

Sol: To obtain the upper triangular matrix U, we multiply row 1 by (−1) and add it to row 2, and multiply
row 1 by (−2) and add it to row 3. Thus L21 = 1 and L31 = 2. The new row 2’ is [0 2 −2]; row 3’ is [0 −9 0].
Next, multiply row 2’ by 4.5 and add it to row 3’, which gives row 3’’ = [0 0 −9]. Thus L32 = −4.5. This
completes the decomposition, which gives L and U as

L = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 2 & -4.5 & 1 \end{bmatrix}, \qquad
U = \begin{bmatrix} 1 & 4 & 1 \\ 0 & 2 & -2 \\ 0 & 0 & -9 \end{bmatrix}

Since only the lower off-diagonal elements of L are needed, we can store L and U simultaneously as

[L \backslash U] = \begin{bmatrix} 1 & 4 & 1 \\ 1 & 2 & -2 \\ 2 & -4.5 & -9 \end{bmatrix}
After the decomposition process (A=LU), vector y and x can be obtained by forward and backward
substitution, respectively.
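A minimal sketch of Doolittle’s decomposition and the two-stage solve (again without pivoting, so it assumes the decomposition exists; it is checked against the example above):

import numpy as np

def doolittle(A):
    """Decompose A = LU, where L has 1s on its diagonal (Doolittle)."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]    # multiplier, stored as L's off-diagonal element
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def lu_solve(L, U, b):
    """Solve LUx = b: forward substitution for Ly = b, then backward for Ux = y."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]                     # L[i, i] = 1
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

A = np.array([[1.0, 4.0, 1.0], [1.0, 6.0, -1.0], [2.0, -1.0, 2.0]])
L, U = doolittle(A)
print(L)   # [[1, 0, 0], [1, 1, 0], [2, -4.5, 1]]
print(U)   # [[1, 4, 1], [0, 2, -2], [0, 0, -9]]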
Iterative methods
Gauss-Seidel method
The system of linear equations Ax = b can be written in the following scalar form:

x_i = \frac{1}{A_{ii}} \left( b_i - \sum_{j=1,\, j \ne i}^{n} A_{ij} x_j \right), \qquad i = 1, 2, \ldots, n
That is, we can start from an initial guess of the solution, and solve for xi iteratively. The procedure is
repeated until the changes in x between successive iteration cycles become sufficiently small.
To speed up the process of convergence, a technique called relaxation can be used. That is, the new
value of xi is calculated as a weighted average of its previous value and the value predicted by
the above equation. The corresponding iterative formula is

x_i \leftarrow \omega \hat{x}_i + (1 - \omega) x_i

where \hat{x}_i denotes the value predicted by the Gauss-Seidel formula above and ω is the relaxation
factor. Clearly, ω = 1 means no relaxation. The iteration is under-relaxed if 0 < ω < 1 and over-relaxed
if ω > 1; it is over-relaxation (typically 1 < ω < 2) that accelerates convergence.
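A minimal sketch of Gauss-Seidel iteration with relaxation (the tolerance and iteration limit are arbitrary choices; convergence is assumed, as for diagonally dominant systems such as the 3 × 3 example above):

import numpy as np

def gauss_seidel(A, b, omega=1.0, tol=1e-9, max_iter=500):
    """Iterate the Gauss-Seidel formula with relaxation factor omega."""
    n = len(b)
    x = np.zeros(n)                                        # initial guess
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Sum over j != i, always using the latest available values of x
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x_hat = (b[i] - s) / A[i, i]                   # Gauss-Seidel prediction
            x[i] = omega * x_hat + (1.0 - omega) * x[i]    # relaxation
        if np.linalg.norm(x - x_old) < tol:                # converged
            break
    return x

A = np.array([[4.0, -2.0, 1.0], [-2.0, 4.0, -2.0], [1.0, -2.0, 4.0]])
b = np.array([11.0, -16.0, 17.0])
print(gauss_seidel(A, b, omega=1.1))   # ≈ [ 1. -2.  3.]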

Tridiagonal systems
A tridiagonal matrix is a kind of sparse matrix (a matrix with a lot of zeros) with the following form:

A = \begin{bmatrix}
d_1 & u_1 & 0 & \cdots & 0 \\
l_2 & d_2 & u_2 & \cdots & 0 \\
0 & l_3 & d_3 & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & u_{n-1} \\
0 & \cdots & 0 & l_n & d_n
\end{bmatrix}

Although the elimination methods we have learned can be used to solve this kind of system, they
are not recommended: they waste storage on the zero entries and consequently take longer to
compute. Two methods that require storing only the nonzero terms are introduced here.

Modified Gauss elimination method

Gauss elimination transforms the tridiagonal matrix into an upper triangular matrix.
That is, multiplying the 1st row by −l2/d1 and adding it to the 2nd row turns the 2nd row into
[0, (d2 − u1l2/d1), u2, 0, …], while the right-hand side becomes b2 − b1l2/d1.
The same calculation is applied to the remaining rows in turn.
After the upper triangular matrix has been obtained, backward substitution can be used to solve for xn down to x1.
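A minimal sketch, storing only the three nonzero diagonals as vectors (d = diagonal, u = superdiagonal, l = subdiagonal, indexed so that l[i-1] is the subdiagonal entry of row i; the small test system is illustrative, not taken from these notes):

import numpy as np

def tridiag_gauss(d, u, l, b):
    """Solve a tridiagonal system by modified Gauss elimination."""
    d = d.astype(float).copy()
    b = b.astype(float).copy()
    n = len(d)
    # Elimination: row i <- row i - (l_i / d_{i-1}) * row (i-1); u is unchanged
    for i in range(1, n):
        m = l[i-1] / d[i-1]
        d[i] -= m * u[i-1]
        b[i] -= m * b[i-1]
    # Backward substitution on the resulting upper bidiagonal system
    x = np.zeros(n)
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - u[i] * x[i+1]) / d[i]
    return x

d = np.array([2.0, 2.0, 2.0, 2.0])      # diagonal
u = np.array([-1.0, -1.0, -1.0])        # superdiagonal
l = np.array([-1.0, -1.0, -1.0])        # subdiagonal
b = np.array([1.0, 0.0, 0.0, 1.0])
print(tridiag_gauss(d, u, l, b))        # [1. 1. 1. 1.]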
Example: Solve the system of linear equations Ax = b, where

Thomas’ algorithm
Writing out the equations of the tridiagonal system row by row, we have

d_1 x_1 + u_1 x_2 = b_1
l_i x_{i-1} + d_i x_i + u_i x_{i+1} = b_i, \qquad i = 2, \ldots, n-1
l_n x_{n-1} + d_n x_n = b_n

The coefficient of each diagonal entry is then set to 1. That is, divide everything by d1 in the first
equation and let u1 ← u1/d1 and b1 ← b1/d1, so the first equation becomes: x1 + u1x2 = b1. Similarly,
eliminating l_i from equation i and normalizing its diagonal coefficient gives

u_i \leftarrow \frac{u_i}{d_i - l_i u_{i-1}}, \qquad b_i \leftarrow \frac{b_i - l_i b_{i-1}}{d_i - l_i u_{i-1}}, \qquad i = 2, \ldots, n-1

In every stage, the latest values of the elements are used. In this way bn can be calculated as

b_n \leftarrow \frac{b_n - l_n b_{n-1}}{d_n - l_n u_{n-1}}

after which x_n = b_n, and the remaining unknowns follow by back substitution: x_i = b_i − u_i x_{i+1} for i = n−1, …, 1.
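A minimal sketch of Thomas’ algorithm in the same vector notation (d, u, l, b as in the previous sketch; it assumes no zero pivots arise):

import numpy as np

def thomas(d, u, l, b):
    """Thomas' algorithm: normalize each pivot to 1, then back-substitute."""
    n = len(d)
    u = u.astype(float).copy()
    b = b.astype(float).copy()
    u[0] /= d[0]
    b[0] /= d[0]
    for i in range(1, n - 1):
        denom = d[i] - l[i-1] * u[i-1]
        u[i] = u[i] / denom
        b[i] = (b[i] - l[i-1] * b[i-1]) / denom
    b[n-1] = (b[n-1] - l[n-2] * b[n-2]) / (d[n-1] - l[n-2] * u[n-2])
    # Back substitution: x_n = b_n, then x_i = b_i - u_i * x_{i+1}
    x = b
    for i in range(n - 2, -1, -1):
        x[i] = b[i] - u[i] * x[i+1]
    return x

d = np.array([2.0, 2.0, 2.0, 2.0])
u = np.array([-1.0, -1.0, -1.0])
l = np.array([-1.0, -1.0, -1.0])
b = np.array([1.0, 0.0, 0.0, 1.0])
print(thomas(d, u, l, b))   # [1. 1. 1. 1.]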