Diego Fernando Gómez Páez
Faculty of Physical-Chemical Sciences, School of Petroleum Engineering
July 2010

SYSTEMS OF LINEAR EQUATIONS

1. INTRODUCTION [1]

In mathematics, a system of linear equations (or linear system) is a collection of linear equations involving the same set of variables. For example,

\begin{aligned}
3x + 2y - z &= 1 \\
2x - 2y + 4z &= -2 \\
-x + \tfrac{1}{2}y - z &= 0
\end{aligned}

is a system of three equations in the three variables x, y, and z. A solution to a linear system is an assignment of numbers to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by

x = 1, \quad y = -2, \quad z = -2,

since it makes all three equations valid. In mathematics, the theory of linear systems is a branch of linear algebra, a subject which is fundamental to modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and such methods play a prominent role in engineering, physics, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system.

A general system of m linear equations with n unknowns can be written as

\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\;\;\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m.
\end{aligned}

Here x_1, x_2, \ldots, x_n are the unknowns, a_{11}, a_{12}, \ldots, a_{mn} are the coefficients of the system, and b_1, b_2, \ldots, b_m are the constant terms.

Often the coefficients and unknowns are real or complex numbers, but integers and rational numbers are also seen, as are polynomials and elements of an abstract algebraic structure.

Vector equation

One extremely helpful view is that each unknown is a weight for a column vector in a linear combination:

x_1\begin{bmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{bmatrix} + x_2\begin{bmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{bmatrix} + \cdots + x_n\begin{bmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}.

This allows all the language and theory of vector spaces (or, more generally, modules) to be brought to bear. For example, the collection of all possible linear combinations of the vectors on the left-hand side is called their span, and the equations have a solution just when the right-hand vector is within that span. If every vector within that span has exactly one expression as a linear combination of the given left-hand vectors, then any solution is unique. In any event, the span has a basis of linearly independent vectors that do guarantee exactly one expression; and the number of vectors in that basis (its dimension) cannot be larger than m or n, but it can be smaller. This is important because if we have m independent vectors a solution is guaranteed regardless of the right-hand side, and otherwise not guaranteed.

Matrix equation

The vector equation is equivalent to a matrix equation of the form

A\mathbf{x} = \mathbf{b},

where A is an m×n matrix, x is a column vector with n entries, and b is a column vector with m entries.

The number of vectors in a basis for the span is now expressed as the rank of the matrix.
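As a quick illustration (not part of the original sources), the following Python sketch builds the introductory 3×3 system as a matrix equation with numpy, checks the rank of A, and verifies a solution:

    import numpy as np

    # The 3x3 introductory system written as A x = b.
    A = np.array([[ 3.0,  2.0, -1.0],
                  [ 2.0, -2.0,  4.0],
                  [-1.0,  0.5, -1.0]])
    b = np.array([1.0, -2.0, 0.0])

    # The rank of A is the number of vectors in a basis for the column span.
    print(np.linalg.matrix_rank(A))   # 3, so the solution is unique

    # Solve A x = b and confirm that x satisfies the matrix equation.
    x = np.linalg.solve(A, b)
    print(x)                          # [ 1. -2. -2.]
    print(np.allclose(A @ x, b))      # True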

2. SOLUTION OF SYSTEMS OF LINEAR EQUATIONS

2.1. DIRECT METHODS

2.1.1. Cramer's Rule [2]

Given a system of linear equations, Cramer's Rule is a handy way to solve for just one of the variables without having to solve the whole system of equations. They don't usually teach Cramer's Rule this way, but this is supposed to be the point of the Rule: instead of solving the entire system of equations, you can use Cramer's to solve for just one single variable. Let's use the following system of equations:

2x + y + z = 3
x - y - z = 0
x + 2y + z = 0

We have the left-hand side of the system with the variables (the "coefficient matrix") and the right-hand side with the answer values. Let D be the determinant of the coefficient matrix of the above system, and let Dx be the determinant formed by replacing the x-column values with the answer-column values:

D = \begin{vmatrix} 2 & 1 & 1 \\ 1 & -1 & -1 \\ 1 & 2 & 1 \end{vmatrix}, \qquad D_x = \begin{vmatrix} 3 & 1 & 1 \\ 0 & -1 & -1 \\ 0 & 2 & 1 \end{vmatrix}.

Writing the system with every coefficient explicit (2x + 1y + 1z = 3, 1x - 1y - 1z = 0, 1x + 2y + 1z = 0) makes the entries easier to read off. Similarly, Dy and Dz would then be:

D_y = \begin{vmatrix} 2 & 3 & 1 \\ 1 & 0 & -1 \\ 1 & 0 & 1 \end{vmatrix}, \qquad D_z = \begin{vmatrix} 2 & 1 & 3 \\ 1 & -1 & 0 \\ 1 & 2 & 0 \end{vmatrix}.

Evaluating each determinant, we get:

D = 3, \quad D_x = 3, \quad D_y = -6, \quad D_z = 9.


Cramer's Rule says that x = Dx ÷ D, y = Dy ÷ D, and z = Dz ÷ D. That is:

x = 3/3 = 1, \quad y = -6/3 = -2, \quad z = 9/3 = 3.

Example: Given the following system of equations, find the value of z.

2x + y + z = 1
x - y + 4z = 0
x + 2y - 2z = 3

To solve only for z, I first find the coefficient determinant:

D = \begin{vmatrix} 2 & 1 & 1 \\ 1 & -1 & 4 \\ 1 & 2 & -2 \end{vmatrix} = -3.

Then I form Dz by replacing the third column of values with the answer column:

D_z = \begin{vmatrix} 2 & 1 & 1 \\ 1 & -1 & 0 \\ 1 & 2 & 3 \end{vmatrix} = -6.

Then I form the quotient and simplify:

z = \frac{D_z}{D} = \frac{-6}{-3} = 2.

The point of Cramer's Rule is that you don't have to solve the whole system to get the one value you need.
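As a minimal sketch of the rule in Python (the function name and the zero tolerance are our own choices; numpy's det plays the role of the hand-evaluated determinants):

    import numpy as np

    def cramer_single(A, b, k):
        # Solve for unknown k only: x_k = det(A_k) / det(A), where A_k is A
        # with its k-th column replaced by the answer column b.
        D = np.linalg.det(A)
        if abs(D) < 1e-12:
            raise ValueError("coefficient determinant is zero; the rule fails")
        Ak = A.copy()
        Ak[:, k] = b
        return np.linalg.det(Ak) / D

    # The worked example above: solve only for z (index k = 2).
    A = np.array([[2.0, 1.0, 1.0],
                  [1.0, -1.0, 4.0],
                  [1.0, 2.0, -2.0]])
    b = np.array([1.0, 0.0, 3.0])
    print(cramer_single(A, b, 2))   # 2.0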

2.1.2. Inverse Matrix Method [3]

DEFINITION: Assuming we have a square matrix A which is non-singular (i.e. det(A) does not equal zero), there exists an n×n matrix A^{-1}, called the inverse of A, such that this property holds: AA^{-1} = A^{-1}A = I, where I is the identity matrix. The inverse of a general n×n matrix A can be found by using the following equation:

A^{-1} = \frac{\operatorname{adj}(A)}{\det(A)},

where adj(A) denotes the adjoint (or adjugate) of A. It can be calculated by the following method: given the n×n matrix A, define B = [b_ij] to be the matrix whose coefficients are found by taking the determinant of the (n−1)×(n−1) matrix obtained by deleting the ith row and jth column of A. The terms of B are the minors of A. Define the matrix C, where c_ij = (−1)^{i+j} b_ij; these signed minors are known as the cofactors of A. The transpose of C (i.e. C^T) is called the adjoint of matrix A. Lastly, to find the inverse of A, divide the matrix C^T by the determinant of A.

Method

DEFINITION: The inverse matrix method uses the inverse of a matrix to help solve a system of equations, such as the above Ax = b. Pre-multiplying both sides of this equation by A^{-1} gives:

A^{-1}Ax = A^{-1}b,

or alternatively

x = A^{-1}b.
So by calculating the inverse of the matrix and multiplying it by the vector b, we can find the solution to the system of equations directly. And from earlier we found that the inverse is given by

A^{-1} = \frac{\operatorname{adj}(A)}{\det(A)}.

From the above it is clear that the existence of a solution depends on the value of the determinant of A. There are three cases:

1. If det(A) does not equal zero, then a unique solution exists, given by x = A^{-1}b.
2. If det(A) is zero and b ≠ 0, then the solution is not unique or does not exist.
3. If det(A) is zero and b = 0, then the solution can be x = 0 but, as with case 2, it is not unique or does not exist.
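In code, the method amounts to one inverse and one matrix-vector product. A hedged sketch follows (numpy's inv factorizes the matrix internally rather than forming the adjugate, but the result is the same A^{-1}):

    import numpy as np

    A = np.array([[2.0, 1.0, 1.0],
                  [1.0, -1.0, 4.0],
                  [1.0, 2.0, -2.0]])
    b = np.array([1.0, 0.0, 3.0])

    if abs(np.linalg.det(A)) > 1e-12:   # case 1: det(A) != 0
        x = np.linalg.inv(A) @ b        # x = A^{-1} b
        print(x)                        # [-3.  5.  2.]
    else:                               # cases 2 and 3: det(A) = 0
        print("no unique solution")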

2.1.3. Gauss Method [4]

Gaussian elimination is a method for solving matrix equations of the form

A\mathbf{x} = \mathbf{b}.

To perform Gaussian elimination starting with the system of equations

\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix},

compose the "augmented matrix equation"

\left[\begin{array}{cccc|c} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & & \ddots & \vdots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} & b_n \end{array}\right].

Here, the column vector of the variables x is carried along for labeling the matrix rows. Now, perform elementary row operations to put the augmented matrix into the upper triangular form

\left[\begin{array}{cccc|c} a'_{11} & a'_{12} & \cdots & a'_{1n} & b'_1 \\ 0 & a'_{22} & \cdots & a'_{2n} & b'_2 \\ \vdots & & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & a'_{nn} & b'_n \end{array}\right].

Solve the equation of the nth row for x_n, then substitute back into the equation of the (n−1)st row to obtain a solution for x_{n−1}, etc., according to the formula

x_i = \frac{1}{a'_{ii}}\left(b'_i - \sum_{j=i+1}^{n} a'_{ij}x_j\right).


A matrix that has undergone Gaussian elimination is said to be in echelon form.

Example: Consider the matrix equation

\begin{bmatrix} 9 & 3 & 4 \\ 4 & 3 & 4 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 7 \\ 8 \\ 3 \end{bmatrix}.

In augmented form, this becomes

\left[\begin{array}{ccc|c} 9 & 3 & 4 & 7 \\ 4 & 3 & 4 & 8 \\ 1 & 1 & 1 & 3 \end{array}\right].

Switching the first and third rows (without switching the elements in the right-hand column vector of variables) gives

\left[\begin{array}{ccc|c} 1 & 1 & 1 & 3 \\ 4 & 3 & 4 & 8 \\ 9 & 3 & 4 & 7 \end{array}\right].

Subtracting 9 times the first row from the third row gives

\left[\begin{array}{ccc|c} 1 & 1 & 1 & 3 \\ 4 & 3 & 4 & 8 \\ 0 & -6 & -5 & -20 \end{array}\right].

Subtracting 4 times the first row from the second row gives

\left[\begin{array}{ccc|c} 1 & 1 & 1 & 3 \\ 0 & -1 & 0 & -4 \\ 0 & -6 & -5 & -20 \end{array}\right].

Finally, adding −6 times the second row to the third row gives

\left[\begin{array}{ccc|c} 1 & 1 & 1 & 3 \\ 0 & -1 & 0 & -4 \\ 0 & 0 & -5 & 4 \end{array}\right].

Restoring the transformed matrix equation gives

\begin{bmatrix} 1 & 1 & 1 \\ 0 & -1 & 0 \\ 0 & 0 & -5 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 3 \\ -4 \\ 4 \end{bmatrix},

which can be solved immediately to give z = −4/5, back-substituting to obtain y = 4 (which actually follows trivially in this example, since the second row reads −y = −4), and then again back-substituting to find x = 3 − y − z = −1/5.
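The whole procedure, row operations down to upper triangular form followed by back-substitution, is short in Python. This sketch adds partial pivoting, the same kind of row switching used in the example above, for numerical safety:

    import numpy as np

    def gauss_solve(A, b):
        A = A.astype(float).copy()
        b = b.astype(float).copy()
        n = len(b)
        for k in range(n - 1):
            p = k + np.argmax(np.abs(A[k:, k]))   # row with the largest pivot
            A[[k, p]] = A[[p, k]]                 # switch rows k and p
            b[[k, p]] = b[[p, k]]
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]
                A[i, k:] -= m * A[k, k:]          # elementary row operation
                b[i] -= m * b[k]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):            # back-substitution formula
            x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        return x

    A = np.array([[9, 3, 4], [4, 3, 4], [1, 1, 1]])
    b = np.array([7, 8, 3])
    print(gauss_solve(A, b))   # [-0.2  4.  -0.8]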

2.1.4. Gauss-Jordan Method [5] [6]

The Gauss-Jordan elimination method works with the augmented matrix in order to solve the system of equations. The goal of the Gauss-Jordan elimination method is to convert the matrix into this form (a 4×4 system is used for demonstration purposes):

1 0 0 0 | r1
0 1 0 0 | r2
0 0 1 0 | r3
0 0 0 1 | r4

r1, r2, r3, r4 represent the results of each equation (the constant terms). Once you have the matrix in this form, your problem is solved. The only thing you have to figure out is how to convert the matrix into this form.

Requirements for a unique solution using the Gauss-Jordan elimination method

The requirements for a unique solution to a system of linear equations using the Gauss-Jordan elimination method are that the number of unknowns must equal the number of equations. When the number of equations and the number of unknowns are the same, you will obtain an augmented matrix where the number of columns is equal to the number of rows plus 1. For example, if you have a system of 4 linear equations in 4 unknowns, then the number of rows must be equal to 4 and the number of columns must be equal to 5 (4 columns for the coefficients and 1 column for the results).

The vertical line between the coefficients part of the matrix and the results part of the matrix (the constant terms are in the results part of the matrix) does not count as a column; it's there for display purposes only, if you have the capability to display it. When you create your matrix, just make sure that the number of columns is equal to the number of rows plus 1.

Note that it is possible to get a unique solution to a system of linear equations where the number of equations is greater than the number of unknowns. An example of this would be 5 lines in a plane all intersecting in the same point: there is a unique solution for the x and y variables that makes all the equations in the system true. This type of situation, however, is not conducive to solving with the Gauss-Jordan elimination method, since that method requires the number of equations and the number of unknowns to be the same. Note also that it is not possible to get a unique solution to a system of linear equations where the number of equations is less than the number of unknowns. If you use the Gauss-Jordan elimination method, just make sure that the number of equations is equal to the number of unknowns and the method will work just fine. The procedure, step by step, is as follows.

1. Write the augmented matrix [A|b] for the system of linear equations.
2. Use elementary row operations on the augmented matrix [A|b] to transform A into diagonal form.
3. By dividing the diagonal element and the right-hand-side element in each row by the diagonal element in that row, make each diagonal element equal to one.

Hence the matrix reaches the form [I|x] shown earlier, and the right-hand column holds the solution directly.
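A minimal Python sketch of this procedure (no pivoting, so it assumes every diagonal element it meets is non-zero):

    import numpy as np

    def gauss_jordan(A, b):
        # Reduce [A|b] until A becomes the identity; the last column is then x.
        M = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
        n = len(b)
        for k in range(n):
            M[k] /= M[k, k]                 # make the diagonal element 1
            for i in range(n):
                if i != k:
                    M[i] -= M[i, k] * M[k]  # clear the rest of column k
        return M[:, -1]

    A = np.array([[2, 1, 1], [1, -1, 4], [1, 2, -2]])
    b = np.array([1, 0, 3])
    print(gauss_jordan(A, b))   # [-3.  5.  2.]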

2.1.5. LU Decomposition [7]

The LU-decomposition method first "decomposes" matrix A into A = L·U, where L and U are lower triangular and upper triangular matrices, respectively. More precisely, if A is an n×n matrix, L and U are also n×n matrices with forms like the following:

L = \begin{bmatrix} l_{11} & 0 & \cdots & 0 \\ l_{21} & l_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & \cdots & l_{nn} \end{bmatrix}, \qquad U = \begin{bmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ 0 & u_{22} & \cdots & u_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & u_{nn} \end{bmatrix}.

The lower triangular matrix L has zeros in all entries above its diagonal, and the upper triangular matrix U has zeros in all entries below its diagonal. If the LU-decomposition A = L·U is found, the original equation B = A·X becomes B = (L·U)·X. This equation can be rewritten as B = L·(U·X). Since L and B are known, solving B = L·Y gives Y, where Y = U·X. Then, since U and Y are known, solving Y = U·X for X yields the desired result. In this way, the original problem of solving for X from B = A·X is decomposed into two steps:

1. Solving for Y from B = L·Y
2. Solving for X from Y = U·X

Forward Substitution

How easy are these two steps? It turns out to be very easy. Consider the first step. Expanding B = L·Y gives

\begin{bmatrix} l_{11} & 0 & \cdots & 0 \\ l_{21} & l_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & \cdots & l_{nn} \end{bmatrix} Y = B.

It is not difficult to verify that column j of matrix B is the product of matrix L and column j of matrix Y. Therefore, we can solve one column of Y at a time. Writing y and b for a single column of Y and its corresponding column of B, this is shown below:

\begin{bmatrix} l_{11} & 0 & \cdots & 0 \\ l_{21} & l_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & \cdots & l_{nn} \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}.

This equation is equivalent to the following:

\begin{aligned}
l_{11}y_1 &= b_1 \\
l_{21}y_1 + l_{22}y_2 &= b_2 \\
l_{31}y_1 + l_{32}y_2 + l_{33}y_3 &= b_3 \\
&\;\;\vdots \\
l_{n1}y_1 + l_{n2}y_2 + \cdots + l_{nn}y_n &= b_n.
\end{aligned}

From the above equations, we see that y_1 = b_1/l_{11}. Once we have y_1 available, the second equation yields y_2 = (b_2 − l_{21}y_1)/l_{22}. Now we have y_1 and y_2; from equation 3, we have y_3 = (b_3 − (l_{31}y_1 + l_{32}y_2))/l_{33}. Thus, we compute y_1 from the first equation and substitute it into the second to compute y_2. Once y_1 and y_2 are available, they are substituted into the third equation to solve for y_3. Repeating this process, when we reach equation i we will have y_1, y_2, ..., y_{i−1} available. Then they are substituted into equation i to solve for y_i using the following formula:

y_i = \frac{b_i - \sum_{j=1}^{i-1} l_{ij}y_j}{l_{ii}}.
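This formula transcribes directly into Python (a sketch with plain lists and no error checking):

    def forward_substitution(L, b):
        # Solve L y = b for a lower triangular L, one entry at a time:
        # y_i = (b_i - sum_{j<i} l_ij * y_j) / l_ii
        n = len(b)
        y = [0.0] * n
        for i in range(n):
            s = sum(L[i][j] * y[j] for j in range(i))
            y[i] = (b[i] - s) / L[i][i]
        return y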

Because the values of the y_i's are substituted to solve for the next value of y, this process is referred to as forward substitution. We can repeat the forward substitution process for each column of Y and its corresponding column of B. The result is the solution Y.

Backward Substitution

After Y becomes available, we can solve for X from Y = U·X. Expanding this equation and only considering a particular column y of Y and the corresponding column x of X yields the following:

\begin{bmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ 0 & u_{22} & \cdots & u_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & u_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}.

This equation is equivalent to the following:

\begin{aligned}
u_{11}x_1 + u_{12}x_2 + \cdots + u_{1n}x_n &= y_1 \\
u_{22}x_2 + \cdots + u_{2n}x_n &= y_2 \\
&\;\;\vdots \\
u_{n-1,n-1}x_{n-1} + u_{n-1,n}x_n &= y_{n-1} \\
u_{nn}x_n &= y_n.
\end{aligned}

Now, x_n is immediately available from equation n, because x_n = y_n/u_{nn}. Once x_n is available, plugging it into equation n−1,

u_{n-1,n-1}x_{n-1} + u_{n-1,n}x_n = y_{n-1},

and solving for x_{n−1} yields x_{n−1} = (y_{n−1} − u_{n−1,n}x_n)/u_{n−1,n−1}. Now we have x_n and x_{n−1}. Plugging them into equation n−2,

u_{n-2,n-2}x_{n-2} + u_{n-2,n-1}x_{n-1} + u_{n-2,n}x_n = y_{n-2},

and solving for x_{n−2} yields x_{n−2} = (y_{n−2} − (u_{n−2,n−1}x_{n−1} + u_{n−2,n}x_n))/u_{n−2,n−2}.

From x_n, x_{n−1} and x_{n−2}, we can solve for x_{n−3} from equation n−3. In general, after x_n, x_{n−1}, ..., x_{i+1} become available, we can solve for x_i from equation i using the following relation:

x_i = \frac{y_i - \sum_{j=i+1}^{n} u_{ij}x_j}{u_{ii}}.

Repeat this process until x1 is computed. Then, all unknown x's are available and the system of linear equations is solved.
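Backward substitution is the mirror image of the forward pass, and together the two steps solve the original system once a decomposition is available. The sketch below pairs our forward_substitution with scipy's lu, which factors with row pivoting (A = P·L·U), so the permutation is applied to b first:

    import numpy as np
    from scipy.linalg import lu

    def backward_substitution(U, y):
        # Solve U x = y for an upper triangular U, from the last row upward:
        # x_i = (y_i - sum_{j>i} u_ij * x_j) / u_ii
        n = len(y)
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            s = sum(U[i][j] * x[j] for j in range(i + 1, n))
            x[i] = (y[i] - s) / U[i][i]
        return x

    A = np.array([[9.0, 3.0, 4.0], [4.0, 3.0, 4.0], [1.0, 1.0, 1.0]])
    b = np.array([7.0, 8.0, 3.0])
    P, L, U = lu(A)                        # A = P @ L @ U
    y = forward_substitution(L, P.T @ b)   # step 1: L y = P^T b
    x = backward_substitution(U, y)        # step 2: U x = y
    print(x)                               # approximately [-0.2, 4.0, -0.8]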

2.1.6. Thomas Algorithm [8]

In numerical linear algebra, the tridiagonal matrix algorithm (TDMA), also known as the Thomas algorithm (named after Llewellyn Thomas), is a simplified form of Gaussian elimination that can be used to solve tridiagonal systems of equations. This is the case of tridiagonal matrices A, that is, A has non-zero entries only on the diagonal, the super-diagonal, and the sub-diagonal. Such matrices arise frequently in the study of numerical differential equations.

Because of the structure of A, we expect L to have non-zero entries only on the diagonal and sub-diagonal, and U to have non-zero entries only on the diagonal and super-diagonal. In fact, a little thought will tell you that the diagonal of either L or U could be chosen to be all 1's; we will choose U to have this form. Thus, writing a_i, d_i, and c_i for the sub-diagonal, diagonal, and super-diagonal entries of A,

\begin{bmatrix} d_1 & c_1 & & \\ a_2 & d_2 & c_2 & \\ & \ddots & \ddots & \ddots \\ & & a_n & d_n \end{bmatrix} = \begin{bmatrix} l_1 & & & \\ a_2 & l_2 & & \\ & \ddots & \ddots & \\ & & a_n & l_n \end{bmatrix} \begin{bmatrix} 1 & u_1 & & \\ & 1 & u_2 & \\ & & \ddots & \ddots \\ & & & 1 \end{bmatrix}.

Multiplying out and matching entries yields:

l_1 = d_1, \qquad u_{i-1} = \frac{c_{i-1}}{l_{i-1}}, \qquad l_i = d_i - a_i u_{i-1} \quad (i = 2, \ldots, n).


The solution is now easy, consisting of forward and backward substitution with these bidiagonal factors.
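A compact Python sketch of the whole sweep (the usual TDMA formulation, with the notation a_i, d_i, c_i as above; it runs in O(n) operations rather than the O(n³) of full Gaussian elimination, and assumes no pivoting is needed, e.g. a diagonally dominant A):

    def thomas(a, d, c, b):
        # a: sub-diagonal (a[0] unused), d: diagonal,
        # c: super-diagonal (c[n-1] unused), b: right-hand side.
        n = len(d)
        cp = [0.0] * n   # the u_i of the factorization
        bp = [0.0] * n   # the y_i of the forward substitution
        cp[0] = c[0] / d[0]
        bp[0] = b[0] / d[0]
        for i in range(1, n):
            m = d[i] - a[i] * cp[i - 1]          # the l_i of the factorization
            cp[i] = c[i] / m if i < n - 1 else 0.0
            bp[i] = (b[i] - a[i] * bp[i - 1]) / m
        x = [0.0] * n                            # backward substitution
        x[-1] = bp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = bp[i] - cp[i] * x[i + 1]
        return x

    # A small diagonally dominant example; the exact solution is [1, 1, 1].
    print(thomas(a=[0, 1, 1], d=[4, 4, 4], c=[1, 1, 0], b=[5, 6, 5]))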

2.2. ITERATIVE METHODS

2.2.1. Jacobi Method [9]

The Jacobi method is easily derived by examining each of the n equations in the linear system Ax = b in isolation. If in the ith equation

\sum_{j=1}^{n} a_{ij}x_j = b_i,

we solve for the value of x_i while assuming the other entries of x remain fixed, we obtain

x_i = \frac{b_i - \sum_{j \ne i} a_{ij}x_j}{a_{ii}}.

This suggests an iterative method defined by

x_i^{(k)} = \frac{b_i - \sum_{j \ne i} a_{ij}x_j^{(k-1)}}{a_{ii}},

which is the Jacobi method. Note that the order in which the equations are examined is irrelevant, since the Jacobi method treats them independently. For this reason, the Jacobi method is also known as the method of simultaneous displacements, since the updates could in principle be done simultaneously.
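In Python the simultaneous nature of the update is visible: each sweep uses only the previous iterate. A sketch follows (fixed iteration count for simplicity; a real implementation would test convergence):

    import numpy as np

    def jacobi(A, b, iters=50):
        x = np.zeros(len(b))
        D = np.diag(A)               # the diagonal entries a_ii
        R = A - np.diagflat(D)       # the off-diagonal part of A
        for _ in range(iters):
            # x_i^{(k)} = (b_i - sum_{j != i} a_ij x_j^{(k-1)}) / a_ii,
            # computed for all i at once from the old iterate.
            x = (b - R @ x) / D
        return x

    # A diagonally dominant example; the exact solution is [1, 1, 1].
    A = np.array([[4.0, 1.0, 1.0], [1.0, 4.0, 1.0], [1.0, 1.0, 4.0]])
    b = np.array([6.0, 6.0, 6.0])
    print(jacobi(A, b))   # approximately [1. 1. 1.]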

2.2.2. Gauss-Seidel Method [10]

The Gauss-Seidel method (called Seidel's method by Jeffreys and Jeffreys 1988, p. 305) is a technique for solving the equations of the linear system Ax = b one at a time in sequence, using previously computed results as soon as they are available:

x_i^{(k)} = \frac{b_i - \sum_{j<i} a_{ij}x_j^{(k)} - \sum_{j>i} a_{ij}x_j^{(k-1)}}{a_{ii}}.

There are two important characteristics of the Gauss-Seidel method that should be noted. Firstly, the computations appear to be serial: since each component of the new iterate depends upon all previously computed components, the updates cannot be done simultaneously as in the Jacobi method. Secondly, the new iterate depends upon the order in which the equations are examined; if this ordering is changed, the components of the new iterate (and not just their order) will also change. In terms of matrices, the definition of the Gauss-Seidel method can be expressed as

x^{(k)} = (D - L)^{-1}\left(U x^{(k-1)} + b\right),

where the matrices D, −L, and −U represent the diagonal, strictly lower triangular, and strictly upper triangular parts of A, respectively.

The Gauss-Seidel method is applicable to strictly diagonally dominant or symmetric positive definite matrices A.
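A matching sketch, differing from the Jacobi code only in that each new component overwrites x immediately and is used by the components computed after it:

    import numpy as np

    def gauss_seidel(A, b, iters=50):
        n = len(b)
        x = np.zeros(n)
        for _ in range(iters):
            for i in range(n):       # sequential sweep, in equation order
                s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
                x[i] = (b[i] - s) / A[i, i]   # new x[:i] already in use
        return x

    A = np.array([[4.0, 1.0, 1.0], [1.0, 4.0, 1.0], [1.0, 1.0, 4.0]])
    b = np.array([6.0, 6.0, 6.0])
    print(gauss_seidel(A, b))   # approximately [1. 1. 1.]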

3. REFERENCES

[1] http://en.wikipedia.org/wiki/System_of_linear_equations
[2] http://www.purplemath.com/modules/cramers.htm

[3] http://www.maths.surrey.ac.uk/explore/emmaspages/option1.html
[4] http://mathworld.wolfram.com/GaussianElimination.html
[5] http://www.algebra.com/algebra/homework/Linear-equations/THEO-2010.lesson
[6] http://ceee.rice.edu/Books/CS/chapter2/linear44.html
[7] http://www.cs.mtu.edu/~shene/COURSES/cs3621/NOTES/INT-APP/CURVE-linear-system.html
[8] http://www.math.buffalo.edu/~pitman/courses/mth437/na2/node3.html
[9] http://www.netlib.org/linalg/html_templates/node12.html#eqnjacobipointwise
[10] http://mathworld.wolfram.com/Gauss-SeidelMethod.html
