Systems of Linear Equations
Cholesky’s method
One of the most popular approaches involves Cholesky factorization (also called Cholesky decomposition). This algorithm is based on the fact that a symmetric positive definite matrix can be decomposed, as in

Eq 1:  [A] = [U]^T [U]

That is, the resulting triangular factors are the transpose of each other.
Course Module
The following are necessary (but not sufficient) conditions for a Hermitian matrix A (which by definition has real diagonal elements a_ii) to be positive definite: every diagonal element must be positive (a_ii > 0), and every off-diagonal element must satisfy |a_ij|^2 < a_ii a_jj for i ≠ j.
The factorization can be generated efficiently by recurrence relations. For the ith row:
Eq 2:  u_ii = sqrt( a_ii − Σ_{k=1}^{i−1} u_ki^2 )

Eq 3:  u_ij = ( a_ij − Σ_{k=1}^{i−1} u_ki u_kj ) / u_ii,   for j = i + 1, …, n
Example
Compute the Cholesky factorization for the symmetric matrix

        [  6   15   55 ]
[A] =   [ 15   55  225 ]
        [ 55  225  979 ]
Solution
For the first row (i = 1), Eq 2 is employed to compute

u_11 = sqrt(a_11) = sqrt(6) = 2.44949

Then, Eq 3 can be used to determine

u_12 = a_12 / u_11 = 15 / 2.44949 = 6.123724
u_13 = a_13 / u_11 = 55 / 2.44949 = 22.45366
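As a check on the hand computation, the recurrences of Eqs 2 and 3 can be sketched in Python. This is a minimal version for illustration, not the module's official code; it assumes the input is symmetric positive definite and does no error checking.

```python
import math

def cholesky(a):
    """Compute the Cholesky factorization A = U^T U of a symmetric
    positive definite matrix, returning the upper triangular factor U."""
    n = len(a)
    u = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # Diagonal entry (Eq 2): u_ii = sqrt(a_ii - sum of u_ki^2)
        u[i][i] = math.sqrt(a[i][i] - sum(u[k][i] ** 2 for k in range(i)))
        # Remaining entries of row i (Eq 3)
        for j in range(i + 1, n):
            u[i][j] = (a[i][j]
                       - sum(u[k][i] * u[k][j] for k in range(i))) / u[i][i]
    return u

A = [[6, 15, 55], [15, 55, 225], [55, 225, 979]]
U = cholesky(A)
# First row of U is approximately [2.44949, 6.123724, 22.45366]
```

The remaining rows follow from the same recurrences, with the inner sums no longer empty.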
A symmetric tridiagonal system has the form

Eq 4:
[ d_1  c_1                            ] [ x_1     ]   [ b_1     ]
[ c_1  d_2  c_2                       ] [ x_2     ]   [ b_2     ]
[      c_2  d_3  c_3                  ] [ x_3     ]   [ b_3     ]
[           ⋱    ⋱    ⋱               ] [ ⋮       ] = [ ⋮       ]
[        c_{i−1}  d_i  c_i            ] [ x_i     ]   [ b_i     ]
[                 ⋱    ⋱              ] [ ⋮       ]   [ ⋮       ]
[      c_{n−2}  d_{n−1}  c_{n−1}      ] [ x_{n−1} ]   [ b_{n−1} ]
[               c_{n−1}  d_n          ] [ x_n     ]   [ b_n     ]
The routine to be described now is called procedure Tri. It is designed to solve a system of
n linear equations in n unknowns, as shown in Eq 4. Both the forward elimination phase
and the back substitution phase are incorporated in the procedure, and no pivoting is used;
that is, the pivot equations are those given by the natural ordering {1, 2, . . . , n}. Thus, naive
Gaussian elimination is used.
A general tridiagonal system (not necessarily symmetric) has the form
Eq 5:
[ d_1  c_1                            ] [ x_1     ]   [ b_1     ]
[ a_1  d_2  c_2                       ] [ x_2     ]   [ b_2     ]
[      a_2  d_3  c_3                  ] [ x_3     ]   [ b_3     ]
[           ⋱    ⋱    ⋱               ] [ ⋮       ] = [ ⋮       ]
[        a_{i−1}  d_i  c_i            ] [ x_i     ]   [ b_i     ]
[                 ⋱    ⋱              ] [ ⋮       ]   [ ⋮       ]
[      a_{n−2}  d_{n−1}  c_{n−1}      ] [ x_{n−1} ]   [ b_{n−1} ]
[               a_{n−1}  d_n          ] [ x_n     ]   [ b_n     ]
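A procedure of this kind can be sketched in Python as follows. The function name tri and the argument layout are assumptions for illustration, not the textbook's actual listing.

```python
def tri(a, d, c, b):
    """Solve the tridiagonal system of Eq 5 by forward elimination
    followed by back substitution (no pivoting).
    a: subdiagonal (length n-1), d: diagonal (length n),
    c: superdiagonal (length n-1), b: right-hand side (length n)."""
    n = len(d)
    d = d[:]  # work on copies so the caller's data is untouched
    b = b[:]
    # Forward elimination: zero out the subdiagonal
    for i in range(1, n):
        m = a[i - 1] / d[i - 1]
        d[i] -= m * c[i - 1]
        b[i] -= m * b[i - 1]
    # Back substitution
    x = [0.0] * n
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - c[i] * x[i + 1]) / d[i]
    return x
```

For example, tri([-1, -1], [2, 2, 2], [-1, -1], [1, 0, 1]) returns (approximately) [1.0, 1.0, 1.0].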
Since procedure Tri does not involve pivoting, it is natural to ask whether it is likely to fail.
Simple examples can be given to illustrate failure because of attempted division by zero
even though the coefficient matrix in Eq 5 is nonsingular. On the other hand, it is not easy
to give the weakest possible conditions on this matrix to guarantee the success of the
algorithm. We content ourselves with one property that is easily checked and commonly
encountered. If the tridiagonal coefficient matrix is diagonally dominant, then procedure
Tri will not encounter zero divisors.
A matrix is strictly diagonally dominant if, in every row, the magnitude of the diagonal element exceeds the sum of the magnitudes of all other elements in that row:

|a_ii| > Σ_{j=1, j≠i}^{n} |a_ij|    (1 ≤ i ≤ n)

In the case of the tridiagonal system of Eq 5, strict diagonal dominance means simply that |d_i| > |a_{i−1}| + |c_i| for 1 ≤ i ≤ n (with a_0 = c_n = 0).
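The diagonal dominance test is easy to automate; here is a small sketch (the function name is an assumption):

```python
def is_strictly_diagonally_dominant(a):
    """Return True if |a_ii| > sum of |a_ij| over j != i for every row i."""
    n = len(a)
    return all(
        abs(a[i][i]) > sum(abs(a[i][j]) for j in range(n) if j != i)
        for i in range(n)
    )
```

As the text notes, this is a sufficient condition for the algorithm to succeed, not the weakest possible one.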
The Gauss-Seidel method, also known as the Liebmann method or the method of successive displacement, is the most commonly used iterative method. Assume that we are given a set of n equations:
[A]{X} = {B}
There are two important characteristics of the Gauss-Seidel method that should be noted. First, the computations appear to be serial: since each component of the new iterate depends upon all previously computed components, the updates cannot be done simultaneously as in the Jacobi method. Second, the new iterate x^(k) depends upon the order in which the equations are examined; if this ordering is changed, the components of the new iterate (and not just their order) will also change.
Suppose that for conciseness we limit ourselves to a 3 × 3 set of equations. If the diagonal elements are all nonzero, the first equation can be solved for x_1, the second for x_2, and the third for x_3 to yield:

Eq 6a:  x_1 = (b_1 − a_12 x_2 − a_13 x_3) / a_11

Eq 6b:  x_2 = (b_2 − a_21 x_1 − a_23 x_3) / a_22

Eq 6c:  x_3 = (b_3 − a_31 x_1 − a_32 x_2) / a_33
As each new x value is computed for the Gauss-Seidel method, it is immediately used in the
next equation to determine another x value. Thus, if the solution is converging, the best
available estimates will be employed. An alternative approach, called Jacobi iteration,
utilizes a somewhat different tactic. Rather than using the latest available x’s, this
technique uses Eq 6 to compute a set of new x’s on the basis of a set of old x’s. Thus, as new
values are generated, they are not immediately used but rather are retained for the next
iteration.
Now, we can start the solution process by choosing guesses for the x’s. A simple way to
obtain initial guesses is to assume that they are all zero. These zeros can be substituted into
Eq 6a, which can be used to calculate a new value for x1 = b1/a11. Then, we substitute this
new value of x1 along with the previous guess of zero for x3 into Eq 6b to compute a new
value for x2. The process is repeated for Eq 6c to calculate a new estimate for x3. Then we
return to the first equation and repeat the entire procedure until our solution converges
closely enough to the true values.
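The update scheme just described can be sketched in Python. This is a bare-bones version with a fixed number of sweeps; a practical code would instead stop when successive iterates agree to within a tolerance.

```python
def gauss_seidel(a, b, x0=None, iterations=10):
    """Gauss-Seidel iteration: each newly computed component is used
    immediately in the remaining updates of the same sweep (Eq 6)."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            # Sum over the other unknowns, using the latest values of x
            s = sum(a[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / a[i][i]
    return x

x = gauss_seidel([[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]],
                 [7.85, -19.3, 71.4])
# x is approximately [3.0, -2.5, 7.0]
```

Changing the inner loop to read all values from the previous sweep, rather than updating x in place, would turn this into Jacobi iteration.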
Example:
Use the Gauss-Seidel method to obtain the solution of the same system
3x1 − 0.1x2 − 0.2x3 = 7.85
0.1x1 + 7x2 − 0.3x3 = −19.3
0.3x1 − 0.2x2 + 10x3 = 71.4
Since this is an iterative method, it is best to test whether the solution will converge. Strict diagonal dominance provides a convenient sufficient condition: in each row, the absolute value of the diagonal element should be greater than the sum of the absolute values of the off-diagonal elements in that row.
In our example,
|3| > |−0.1| + |−0.2|  ✓
|7| > |0.1| + |−0.3|  ✓
|10| > |0.3| + |−0.2|  ✓
Since the condition is satisfied for every row, the solution will converge.
Recall that the true solution is x1 = 3, x2=−2.5, and x3 = 7.
Solution:
First, solve each of the equations for its unknown on the diagonal, using the initial guesses x2 = x3 = 0:

x1 = (7.85 + 0.1(0) + 0.2(0)) / 3 = 2.616667

For x2, we use the recently acquired value; this time we already have a new value of x1:

x2 = (−19.3 − 0.1(2.616667) + 0.3(0)) / 7 = −2.794524

The first iteration is completed by substituting the calculated values for x1 and x2:

x3 = (71.4 − 0.3(2.616667) + 0.2(−2.794524)) / 10 = 7.005610
The method is, therefore, converging on the true solution. Additional iterations could be
applied to improve the answers.
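The sweep above can be reproduced with a few lines of plain arithmetic, mirroring Eq 6 with zeros standing in for the not-yet-computed unknowns:

```python
# One Gauss-Seidel sweep for the example system, starting from x2 = x3 = 0
x1 = (7.85 + 0.1 * 0 + 0.2 * 0) / 3        # approximately 2.616667
x2 = (-19.3 - 0.1 * x1 + 0.3 * 0) / 7      # approximately -2.794524
x3 = (71.4 - 0.3 * x1 + 0.2 * x2) / 10     # approximately 7.005610
```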