The unknowns are x1 to xn. A linear equation contains no products of unknowns (no xi*xj terms) and no powers of unknowns (no xi*xi*…*xi terms). In matrix-vector format, the above equations can be written as Ax = b. This system of n linear equations with n unknowns has a unique solution if the coefficient matrix A is nonsingular, i.e. its determinant is nonzero (|A| ≠ 0). Equivalently, the rows (and columns) of a nonsingular matrix are linearly independent.
A nonsingular matrix can still be ill-conditioned, for example when its determinant is very small relative to its norm, i.e. |A| << ‖A‖. Here the norm ‖A‖ can be the Euclidean norm ‖A‖e = sqrt(Σi Σj Aij^2), or the infinity norm (maximum absolute row sum) ‖A‖∞ = max_i Σj |Aij|.
A formal measure of this conditioning is the matrix condition number, defined as
cond(A) = ‖A‖*‖A−1‖. If cond(A) ≈ 1, the matrix is well conditioned; a greater condition number corresponds to a greater degree of ill conditioning. A singular matrix has a condition number of infinity.
Example of ill conditioning:
The equations 2x + y = 3 and 2x + 1.001y = 0 have the solution x = 1501.5, y = −3000. They are ill
conditioned because the determinant |A| = 2(1.001) − 2(1) = 0.002 is much smaller than the norm.
The effect of ill conditioning can be verified by modifying the second equation to be 2x + 1.002y = 0.
The solution to the new set of equations is x = 751.5, y = −1500. That is, a 0.1% change in the
coefficient of y changed the solution for y by a factor of two.
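The determinant, norm, condition number, and the sensitivity of the solution can all be checked numerically; here is a quick sketch of the example above using NumPy:

```python
import numpy as np

# Coefficient matrix of the ill-conditioned example above
A = np.array([[2.0, 1.0],
              [2.0, 1.001]])

print(np.linalg.det(A))           # determinant: ~0.002
print(np.linalg.norm(A, np.inf))  # infinity norm (max row sum): 3.001

# cond(A) = ||A|| * ||A^-1||, here in the infinity norm
print(np.linalg.cond(A, np.inf))  # ~6002: strongly ill conditioned

# Solutions of the original and the slightly perturbed system
print(np.linalg.solve(A, [3.0, 0.0]))   # ~[1501.5, -3000]
A2 = np.array([[2.0, 1.0],
               [2.0, 1.002]])
print(np.linalg.solve(A2, [3.0, 0.0]))  # ~[751.5, -1500]
```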
Before solving a set of linear equations, we should also review basic matrix operations: Addition,
subtraction, multiplication (with scalars or matrices), transpose, determinant, and inversion. If you
are unfamiliar with them, consult https://www.cuemath.com/algebra/matrix-operations/ .
There are two main classes of methods for solving linear equations: direct (elimination) methods
and iterative methods.
Elimination methods
1. Gauss elimination (consisting of the elimination phase and backward substitution phase)
Example: Solve the following system of 3 equations.
4x1 − 2x2 + x3 = 11 (a)
−2x1 + 4x2 − 2x3 = −16 (b)
x1 − 2x2 + 4x3 = 17 (c)
Solution:
Equation (a)/2 + equation (b) => 3x2 − 1.5x3 = −10.5 (b’)
−Equation (a)/4 + equation (c) => −1.5x2 + 3.75x3 = 14.25 (c’)
(b’) + (c’)*2 => 6x3 = 18 (c’’)
Thus, the new set of equations is
4x1 − 2x2 + x3 = 11 (a)
3x2 − 1.5x3 = −10.5 (b’)
6x3 = 18 (c’’)
Next, perform backward substitution.
From (c’’) we get x3 = 3. Then from (b’) we get 3x2 = 1.5x3 −10.5 = -6. Thus, x2 = -2. Finally, from (a)
we get x1 = 1.
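The two phases can be sketched as a short Python routine (a minimal version without pivoting, so it assumes the pivots never become zero):

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by Gauss elimination with backward substitution.
    Minimal sketch: no pivoting, assumes nonzero pivots."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Elimination phase: zero out the entries below the diagonal
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Backward substitution phase
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# The 3x3 system solved by hand above
A = np.array([[4, -2, 1], [-2, 4, -2], [1, -2, 4]])
b = np.array([11, -16, 17])
print(gauss_solve(A, b))  # [ 1. -2.  3.]
```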
2. LU decomposition
Example: Perform Gauss elimination on a 3×3 matrix A by applying the following operations to row2 and row3, this time recording the multipliers so that A = LU.
Sol: To obtain the upper-triangular matrix U, we multiply row1 by (−1) and add it to row2, and multiply
row1 by (−2) and add it to row3. Thus L21 = 1 and L31 = 2. The new row2’ is [0 2 −2]; row3’ is [0 −9 0].
Next, multiply row2’ by 4.5 and add it to row3’, which gives row3’’ = [0 0 −9]. Thus L32 = −4.5. This
completes the decomposition: U consists of the (unchanged) first row of A followed by row2’ and row3’’, and
L = [ 1    0    0
      1    1    0
      2  −4.5   1 ]
Since only the lower off-diagonal elements of L are needed (its diagonal entries are all 1), we can store L and U simultaneously in a single matrix: the multipliers L21, L31, and L32 are written into the positions below the diagonal of U that were zeroed during elimination.
After the decomposition (A = LU), vectors y and x are obtained by solving Ly = b with forward substitution and then Ux = y with backward substitution, respectively.
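A minimal sketch of the decomposition and the two substitution passes, storing L and U in one matrix as described (illustrated on the system from the Gauss elimination example above):

```python
import numpy as np

def lu_decomp(A):
    """Doolittle decomposition A = LU, storing U and the multipliers
    of L (below the diagonal) in a single matrix. No pivoting."""
    a = A.astype(float).copy()
    n = len(a)
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = a[i, k] / a[k, k]
            a[i, k + 1:] -= m * a[k, k + 1:]
            a[i, k] = m  # store the multiplier L[i,k] in the zeroed slot
    return a

def lu_solve(a, b):
    """Given the combined L/U matrix, solve Ly = b by forward
    substitution, then Ux = y by backward substitution."""
    n = len(b)
    y = b.astype(float).copy()
    for i in range(1, n):              # forward: L has unit diagonal
        y[i] -= a[i, :i] @ y[:i]
    x = y
    for i in range(n - 1, -1, -1):     # backward
        x[i] = (x[i] - a[i, i + 1:] @ x[i + 1:]) / a[i, i]
    return x

a = lu_decomp(np.array([[4, -2, 1], [-2, 4, -2], [1, -2, 4]]))
print(lu_solve(a, np.array([11.0, -16.0, 17.0])))  # [ 1. -2.  3.]
```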
Iterative methods
Gauss-Seidel method
The system of linear equations Ax = b can be written in the following scalar form:
xi = (bi − Σj≠i Aij xj) / Aii,   i = 1, 2, ..., n
That is, we can start from an initial guess of the solution, and solve for xi iteratively. The procedure is
repeated until the changes in x between successive iteration cycles become sufficiently small.
To speed up the process of convergence, a technique called relaxation can be used. That is, the new
value of xi can be calculated as a weighted average of its previous value and the value predicted by
the above equation. The corresponding iterative formula is
xi ← ω (bi − Σj≠i Aij xj) / Aii + (1 − ω) xi
Here ω is the relaxation factor; ω = 1 means no relaxation. The iteration is under-relaxed if
0 < ω < 1 and over-relaxed if ω > 1.
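A minimal sketch of the iteration with relaxation, reusing the system from the Gauss elimination example (which is diagonally dominant, so the iteration converges):

```python
import numpy as np

def gauss_seidel(A, b, omega=1.0, tol=1e-9, max_iter=500):
    """Gauss-Seidel iteration with relaxation factor omega.
    omega = 1 gives plain Gauss-Seidel; 0 < omega < 1 under-relaxes."""
    n = len(b)
    x = np.zeros(n)  # initial guess
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Values updated earlier in this sweep are used immediately
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x_new = (b[i] - s) / A[i, i]
            x[i] = omega * x_new + (1.0 - omega) * x[i]
        # Stop when successive iterates barely change
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

A = np.array([[4.0, -2.0, 1.0], [-2.0, 4.0, -2.0], [1.0, -2.0, 4.0]])
b = np.array([11.0, -16.0, 17.0])
print(gauss_seidel(A, b))  # ≈ [ 1. -2.  3.]
```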
Tridiagonal systems
A tridiagonal matrix is a kind of sparse matrix (one with many zero entries) whose nonzero entries lie only on the main diagonal and the two diagonals adjacent to it:
d1 u1 0  ...       0
c2 d2 u2 ...       0
0  c3 d3 ...       0
         ...    u(n−1)
0  ...      cn    dn
so that equation i reads ci x(i−1) + di xi + ui x(i+1) = bi (with c1 and un absent).
Although the elimination methods we have learned can be used for solving this kind of system, they
are not recommended because they waste storage on the zero entries and consequently take longer
to compute. Two methods which only require the information of the nonzero terms are introduced here.
Thomas’ algorithm
From the above equations, the first equation is d1 x1 + u1 x2 = b1.
The coefficient of the diagonal entry is then set to be 1. That is, divide everything by d1 in the first
equation and let u1 = u1/d1 and b1 = b1/d1; the first equation becomes x1 + u1 x2 = b1. Similarly,
for i = 2, 3, ..., n, the sub-diagonal term ci is eliminated and the row is normalized by updating
ui = ui / (di − ci u(i−1))
bi = (bi − ci b(i−1)) / (di − ci u(i−1))
In every stage, the latest values of the elements are used. In this way bn can be calculated as
bn = (bn − cn b(n−1)) / (dn − cn u(n−1))
and backward substitution then gives xn = bn, followed by xi = bi − ui x(i+1) for i = n−1, ..., 1.
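With ci the sub-diagonal, di the diagonal, and ui the super-diagonal, Thomas' algorithm can be sketched as follows; the small symmetric example in the code is an illustration, not from the notes:

```python
import numpy as np

def thomas(c, d, u, b):
    """Solve the tridiagonal system ci*x(i-1) + di*xi + ui*x(i+1) = bi.
    c: sub-diagonal (c[0] unused), d: diagonal, u: super-diagonal
    (u[-1] unused), b: right-hand side."""
    n = len(d)
    u = u.astype(float).copy()
    b = b.astype(float).copy()
    # Normalize the first row: x1 + u1*x2 = b1
    u[0] /= d[0]
    b[0] /= d[0]
    # Eliminate c[i] using the latest values of u and b
    for i in range(1, n):
        denom = d[i] - c[i] * u[i - 1]
        if i < n - 1:
            u[i] /= denom
        b[i] = (b[i] - c[i] * b[i - 1]) / denom
    # Backward substitution: xn = bn, then xi = bi - ui*x(i+1)
    x = b
    for i in range(n - 2, -1, -1):
        x[i] -= u[i] * x[i + 1]
    return x

# Example: 2x1 - x2 = 1, -x1 + 2x2 - x3 = 0, -x2 + 2x3 = 1
c = np.array([0.0, -1.0, -1.0])
d = np.array([2.0, 2.0, 2.0])
u = np.array([-1.0, -1.0, 0.0])
b = np.array([1.0, 0.0, 1.0])
print(thomas(c, d, u, b))  # [1. 1. 1.]
```

Only the four 1-D arrays are stored, so the cost is O(n) in both storage and operations, compared with O(n^2) storage and O(n^3) operations for full Gauss elimination.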