Adenegan and Aluko 97

Journal of Science and Science Education, Ondo Vol. 3(1), pp. 97 – 105, 19th November, 2012.
Available online at http://www.josseo.org
ISSN 0795135-3 ©2012

GAUSS AND GAUSS-JORDAN ELIMINATION METHODS FOR SOLVING SYSTEM OF LINEAR EQUATIONS: COMPARISONS AND APPLICATIONS

Adenegan, Kehinde Emmanuel* and Aluko, Tope Moses

Department of Mathematics, Adeyemi College of Education, Ondo.

Abstract: This paper examines the comparison between the Gauss and Gauss-Jordan methods for solving systems of linear equations. Various terminologies are discussed, and some problems are considered in conjunction with the afore-mentioned methods of solving systems of linear equations. It was noted very remarkably from the solved problems that both the Gaussian and Gauss-Jordan methods gave the same answers. Equally, when the same system is pivoted or partially pivoted, the same answers are readily obtained. This implies that, even though rearranging a system of linear equations transforms its matrix form as the rows' elements change, the resultant solutions remain the same. The paper also explicitly shows that the Gauss/Gaussian and Gauss-Jordan elimination methods can be applied to systems of linear equations arising in fields of study like Physics, Business, Economics, Chemistry, etc.

Keywords: Matrix coefficient, augmented matrix, echelon, upper triangular matrix, underdetermined system, degenerate equation.

INTRODUCTION

In linear algebra, Gaussian elimination is an algorithm for solving systems of linear equations, finding the rank of a matrix, and calculating the inverse of an invertible square matrix. Gaussian elimination is considered the workhorse of computational science for the solution of systems of linear equations. Carl Friedrich Gauss, a great 19th century mathematician, suggested this elimination method as part of his proof of a particular theorem. Computational scientists use this "proof" as a direct computational method.

Gaussian elimination is a systematic application of elementary row operations to a system of linear equations in order to convert the system to upper triangular form. Once the coefficient matrix is in upper triangular form, we use back substitution to find a solution. Gaussian elimination places zeros below each pivot in the matrix, starting with the top row and working downwards. Matrices containing zeros below each pivot are said to be in row echelon form.

The process of Gaussian elimination has two parts. The first part (forward elimination) reduces a given system to either triangular or echelon form, or results in a degenerate equation, indicating that the system has no solution. This is accomplished through the use of elementary row operations. The second part uses back substitution to find the solution of the system of linear equations. Another point of view, which turns out to be very useful for analyzing the algorithm, is that Gaussian elimination computes a matrix decomposition. The three elementary row operations used in Gaussian elimination (multiplying rows, switching rows, and adding multiples of rows to other rows) amount to multiplying the original matrix by an invertible matrix from the left. The first part of the algorithm computes an LU decomposition (a decomposition into a lower and an upper triangular matrix), while the second part writes the original matrix as the product of a uniquely determined invertible matrix and a uniquely determined reduced row echelon matrix.

*Corresponding Author. E-mail:akehinde2012@yahoo.com
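The decomposition viewpoint described above can be made concrete with a short sketch (illustrative code on an arbitrary example matrix, not taken from the paper): the multipliers m_ik recorded during forward elimination fill in a unit lower triangular matrix L, the triangularized matrix is U, and A = LU.

```python
# Illustrative sketch (not from the paper): the first part of Gaussian
# elimination on a 3 x 3 matrix, recording the multipliers m_ik.
# The multipliers assemble a unit lower triangular L with A = L * U.

def lu_decompose(A):
    """Forward elimination without pivoting; returns (L, U) with A = L * U."""
    n = len(A)
    U = [row[:] for row in A]  # work on a copy; it becomes upper triangular
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]   # multiplier used to zero out U[i][k]
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

A = [[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]]
L, U = lu_decompose(A)
# Multiplying back reproduces A, confirming the decomposition viewpoint.
product = [[sum(L[i][k] * U[k][j] for k in range(3)) for j in range(3)]
           for i in range(3)]
print(product == A)  # True
```

Each elimination step is a left-multiplication by an invertible elementary matrix, which is why the factor L exists and is itself invertible.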



The Gauss-Jordan method is a modification of Gaussian elimination. It is named after Carl Friedrich Gauss and Wilhelm Jordan, because it is a variation of Gaussian elimination as Jordan described in 1887. While Gaussian elimination places zeros below each pivot in the matrix, starting with the top row and working downwards, the Gauss-Jordan elimination method goes a step further by placing zeros above and below each pivot. Every matrix has a reduced row echelon form, and Gauss-Jordan elimination is guaranteed to find it.

The paper aims at investigating the methods of solving systems of linear equations using the Gauss and Gauss-Jordan elimination methods, comparing and contrasting the two methods, and finding applications of the methods in other fields of study.

LITERATURE REVIEW

It is not surprising that the beginnings of matrices and determinants should arise through the study of linear systems. The Babylonians studied problems that led to simultaneous linear equations, and some of these are preserved in clay tablets that survive. The Chinese, between 200 BC and 100 BC, came much closer to matrices than the Babylonians. Indeed, it is fair to say that the text Nine Chapters on the Mathematical Art, written during the Han Dynasty, gives the first known example of matrix methods.

Cardan, in Ars Magna (1545), gives a rule for solving a system of two linear equations which he called regula de modo and which is called the mother of rules. This rule gives what is essentially Cramer's rule for solving a 2 x 2 system. The idea of a determinant appeared in Japan and Europe at almost exactly the same time, although Seki in Japan certainly published first. In 1683, Seki wrote Method of Solving the Dissimulated Problems, which contains matrix methods written as tables in exactly the way the Chinese methods described above were constructed. Without having any word that corresponds to "determinant", Seki still introduced determinants and gave general methods for calculating them based on examples. Using his "determinants", Seki was able to find determinants of 2 x 2, 3 x 3, 4 x 4 and 5 x 5 matrices and applied them to solve equations, but not systems of linear equations.

In the 1730s, Maclaurin wrote Treatise of Algebra, although it was not published until 1748, two years after his death. It contains the first published results on determinants, proving Cramer's rule for 2 x 2 and 3 x 3 systems and indicating how the 4 x 4 case would work. Cramer gave the general rule for n x n systems in a paper, Introduction to the Analysis of Algebraic Curves (1750). It arose out of a desire to find the equation of a plane curve passing through a number of given points.

In 1764, Bezout gave methods of calculating determinants, as did Vandermonde in 1771. In 1772, Laplace claimed that the methods introduced by Cramer and Bezout were impractical, and in a paper where he studied the orbits of the inner planets, he discussed the solution of systems of linear equations without actually calculating it, by using determinants. Rather surprisingly, Laplace used the word "resultant" for what we now call the determinant; surprising since it is the same word as used by Leibniz, yet Laplace must have been unaware of Leibniz's work. Laplace gave the expansion of a determinant that is now named after him.

Jacques Sturm gave a generalization of the eigenvalue problem in the context of solving systems of ordinary differential equations. In fact, the concept of an eigenvalue had appeared 80 years earlier, again in work on systems of linear differential equations, by d'Alembert studying the motion of a string with masses attached to it at various points.

The first to use the term "matrix" was Sylvester in 1850. Sylvester defined a matrix to be an oblong arrangement of terms and saw it as something that led to various determinants from square arrays contained within it. After leaving America and returning to England in 1851, Sylvester became a lawyer and met Cayley, a fellow lawyer who shared his interest in mathematics. Cayley quickly saw the significance of the matrix concept, and by 1853 Cayley had published a note giving, for the first time, the inverse of a matrix.

Frobenius, in 1878, wrote an important work on matrices, On Linear Substitutions and Linear Forms, although he seemed unaware of Cayley's
work. Frobenius in his paper dealt with coefficients of forms and does not use the term matrix. However, he proved important results on canonical matrices as representatives of equivalence classes of matrices. He cites Kronecker and Weierstrass as having considered special cases of his results in 1874 and 1868 respectively. Frobenius also proved the general result that a matrix satisfies its characteristic equation. This 1878 paper by Frobenius also contained the definition of the rank of a matrix, which he used in his work on canonical forms, and the definition of orthogonal matrices.

The method of Gaussian elimination appears in Chapter Eight, Rectangular Arrays, of the Chinese mathematics text Jiuzhang Suanshu, the Nine Chapters on the Mathematical Art. Its use is illustrated in eighteen problems with two to five equations. The first reference to the book by this title is dated to 179 AD, but parts of it were written as early as approximately 150 BC. It was commented on by Liu Hui in the 3rd century. The method in Europe stems from the notes of Isaac Newton. In 1670, he wrote that all algebra books known to him lacked a lesson for solving simultaneous equations, which Newton then supplied.

Cambridge University eventually published the notes as Arithmetica Universalis in 1707, long after Newton had left academic life. The notes were widely imitated, which made (what is now called) Gaussian elimination a standard lesson in algebra textbooks by the end of the 18th century. Carl Friedrich Gauss in 1810 devised a notation for symmetric elimination that was adopted in the 19th century by professional hand computers to solve the normal equations of least squares problems.

Gauss developed Gaussian elimination around 1810 and used it to solve least squares problems in celestial computations and later in computations to measure the earth and its surface (the branch of applied mathematics concerned with measuring or determining the shape of the earth, or with locating points exactly on the earth's surface, is called geodesy). Even so, Gauss's name became associated with this technique for successively eliminating variables from systems of linear equations. For years, Gaussian elimination was considered part of the development of geodesy, not mathematics. The first appearance of Gauss-Jordan elimination in print was in a handbook on geodesy written by Wilhelm Jordan.

In Matrix Analysis and Applied Linear Algebra, Carl D. Meyer (2000) writes: "Although there has been some confusion as to which Jordan should receive credit for this algorithm, it seems clear that the method was in fact introduced by a geodesist named Wilhelm Jordan (1842-1899) and not by the more well known mathematician Marie Ennemond Camille Jordan (1838-1922), whose name is often mistakenly associated with the technique, but who is otherwise correctly credited with other important topics in matrix analysis, the Jordan canonical form being the most notable". Having identified the right Jordan is not the end of the problem, for A. S. Householder writes, in The Theory of Matrices in Numerical Analysis (1964, p. 141): "The Gauss-Jordan method, so called, seems to have been described first by Clasen (1888). Since it can be regarded as a modification of Gaussian elimination, the name Gauss is properly applied, but that of Jordan seems to be due to an error, since the method was described only in the third edition of his Handbuch der Vermessungskunde, prepared after his death". These claims were examined by S. C. Althoen and R. McLaughlin (1987), American Mathematical Monthly, 94, 130-142. They concluded that Householder was correct about Clasen and his 1888 publication, but mistaken about Jordan, who was very much alive when the third edition of his book appeared in 1888. They added that the "germ of the idea" was already present in the second edition of 1877 (this entry was contributed by John Aldrich).

The process of Gaussian elimination has two parts. The first part (forward elimination) reduces a given system to either triangular or echelon form, or results in a degenerate equation, indicating that the system has no solution. This is accomplished through the use of elementary row operations. The second part uses back substitution to find the solution of the system of linear equations.

Stated equivalently for matrices, the first part reduces a matrix to row echelon form using elementary row operations, while the second reduces it to reduced row echelon form. Another point of view, which turns out to be very useful for analyzing the algorithm, is that Gaussian elimination
computes a matrix decomposition. The three elementary row operations used in Gaussian elimination (multiplying rows, switching rows, and adding multiples of rows to other rows) amount to multiplying the matrix by an invertible matrix from the left.

THE GAUSS AND GAUSS-JORDAN ELIMINATION METHODS OF SOLVING SYSTEM OF LINEAR EQUATIONS

A system of linear equations (or linear system) is a collection of linear equations involving the same set of variables. A solution of a linear system is an assignment of values to the variables x1, x2, x3, ..., xn (equivalently, xi for i = 1, ..., n) such that each of the equations is satisfied. The set of all possible solutions is called the solution set. A linear system may behave in any one of three possible ways:
1. The system has infinitely many solutions.
2. The system has a single unique solution.
3. The system has no solution.
For three variables, each linear equation determines a plane in three-dimensional space, and the solution set is the intersection of these planes. Thus, the solution set may be a line, a single point or the empty set.

For n variables, each linear equation determines a hyperplane in n-dimensional space. The solution set is the intersection of these hyperplanes, which may be a flat of any dimension. The solution set for two equations in three variables is usually a line.

In general, the behaviour of a linear system is determined by the relationship between the number of equations and the number of unknowns. Usually, a system with fewer equations than unknowns has infinitely many solutions; such a system is also known as an underdetermined system. A system with the same number of equations and unknowns typically has a single unique solution. A system with more equations than unknowns usually has no solution. There are many methods of solving linear systems, which include substitution methods, elimination methods, matrix inversion methods, graphical methods, Cramer's method, Gaussian elimination methods, Gauss-Jordan elimination methods, etc. In this paper, the Gauss and Gauss-Jordan elimination methods shall be considered.

GAUSSIAN/GAUSS ELIMINATION METHOD

Gaussian elimination is a systematic application of elementary row operations to a system of linear equations in order to convert the system to upper triangular form. Once the coefficient matrix is in upper triangular form, we use back substitution to find a solution. The general procedure for Gaussian elimination can be summarized in the following steps:
- Write the augmented matrix [A|b] for the system of linear equations.
- Use elementary row operations on [A|b] to transform A into upper triangular form. If a zero is located on the diagonal, switch rows until a non-zero is in that place. If you are unable to do so, stop; the system has either infinitely many solutions or no solution.
- Use back substitution to find the solution of the problem.
Consider the system of equations in matrix form below:

[ a11  a12  a13 ] [ x1 ]   [ b1 ]
[ a21  a22  a23 ] [ x2 ] = [ b2 ]
[ a31  a32  a33 ] [ x3 ]   [ b3 ]

Step 1: Write the above as an augmented matrix:

[ a11  a12  a13 | b1 ]
[ a21  a22  a23 | b2 ]
[ a31  a32  a33 | b3 ]

Step 2: Eliminate x1 from the 2nd and 3rd equations by subtracting the multiples m21 = a21/a11 and m31 = a31/a11 of row 1 from rows 2 and 3, producing the equivalent system (the superscript (2) denotes entries updated at this stage):

[ a11  a12      a13     | b1     ]
[ 0    a22^(2)  a23^(2) | b2^(2) ]
[ 0    a32^(2)  a33^(2) | b3^(2) ]

Step 3: Eliminate x2 from the 3rd equation by subtracting the multiple m32 = a32^(2)/a22^(2) of row 2 from row 3, producing the matrix system:
[ a11  a12      a13     | b1     ]
[ 0    a22^(2)  a23^(2) | b2^(2) ]
[ 0    0        a33^(3) | b3^(3) ]

For an illustration, using the steps above, solve the system

x1 + 2x2 + 3x3 = 3
2x1 - x2 + x3 = 11
3x1 + 2x2 - x3 = -5        (3.1)

This can be written as

[ 1  2  3 ] [ x1 ]   [  3 ]
[ 2 -1  1 ] [ x2 ] = [ 11 ]
[ 3  2 -1 ] [ x3 ]   [ -5 ]

The augmented matrix becomes

[ 1  2  3 |  3 ]
[ 2 -1  1 | 11 ]
[ 3  2 -1 | -5 ]

Now subtract 2/1 times the first row from the second row, and 3/1 times the first row from the third row, which gives

[ 1  2   3 |   3 ]
[ 0 -5  -5 |   5 ]
[ 0 -4 -10 | -14 ]

Now subtract (-4)/(-5), i.e. 4/5, times the second row from the third row. Then we have

[ 1  2  3 |   3 ]
[ 0 -5 -5 |   5 ]
[ 0  0 -6 | -18 ]

Note that as a result of these steps, the matrix of coefficients of x has been reduced to a triangular matrix. Finally, we detach the right-hand column back to its original position:

[ 1  2  3 ] [ x1 ]   [   3 ]
[ 0 -5 -5 ] [ x2 ] = [   5 ]
[ 0  0 -6 ] [ x3 ]   [ -18 ]

Then by "back substitution", starting from the bottom row, we get x1 = 2, x2 = -4, x3 = 3.

The same example and others can also be considered using the Gaussian elimination method without pivoting (as directly solved above) and with partial pivoting, by considering the magnitudes of the elements in the first column, i.e. |3| > |2| > |1|, and changing the row positions according to magnitude; we will still obtain the same answer.
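The procedure above can be sketched in code (an illustrative sketch, not the authors' implementation; the zero-pivot row switch from the general procedure is included for completeness), and run on the worked example:

```python
# Illustrative sketch (not from the paper): Gaussian elimination with
# back substitution, applied to the worked example above.

def gauss_solve(A, b):
    """Solve A x = b by forward elimination followed by back substitution."""
    n = len(A)
    # Build the augmented matrix [A | b] so row operations act on b as well.
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    # Forward elimination: place zeros below each pivot.
    for k in range(n):
        if M[k][k] == 0:  # zero on the diagonal: switch with a row below
            swap = next(i for i in range(k + 1, n) if M[i][k] != 0)
            M[k], M[swap] = M[swap], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]          # multiple m_ik = a_ik / a_kk
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    # Back substitution, starting from the bottom row.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# The 3 x 3 system (3.1) solved above.
A = [[1, 2, 3], [2, -1, 1], [3, 2, -1]]
b = [3, 11, -5]
print(gauss_solve(A, b))   # [2.0, -4.0, 3.0]
```

The printed solution agrees with the hand computation above.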
GAUSS-JORDAN ELIMINATION

Gauss-Jordan elimination is a modification of Gaussian elimination. Again we are transforming the coefficient matrix into another matrix that is much easier to solve, and the system represented by the new augmented matrix has the same solution set as the original system of linear equations. In Gauss-Jordan elimination, the goal is to transform the coefficient matrix into a diagonal matrix, and the zeros are introduced into the matrix one column at a time. We work to eliminate the elements both above and below the diagonal element of a given column in one pass through the matrix. The general procedure for Gauss-Jordan elimination can be summarized in the following steps:
i. Write the augmented matrix [A|b] for the system of linear equations.
ii. Use elementary row operations on [A|b] to transform A into diagonal form. If a zero is located on the diagonal, switch rows until a non-zero is in that place. If you are unable to do so, stop; the system has either infinitely many solutions or no solution.
iii. By dividing the diagonal element and the right-hand-side element in each row by the diagonal element of that row, make each diagonal element equal to one.
Given a system of equations as a 4 x 4 matrix of the form

[ a11  a12  a13  a14 ] [ x1 ]   [ b1 ]
[ a21  a22  a23  a24 ] [ x2 ] = [ b2 ]
[ a31  a32  a33  a34 ] [ x3 ]   [ b3 ]
[ a41  a42  a43  a44 ] [ x4 ]   [ b4 ]

Step 1: Write the above as an augmented matrix:

[ a11  a12  a13  a14 | b1 ]
[ a21  a22  a23  a24 | b2 ]
[ a31  a32  a33  a34 | b3 ]
[ a41  a42  a43  a44 | b4 ]

Step 2: Eliminate x1 from the 2nd, 3rd and 4th equations by subtracting the multiples m21 = a21/a11, m31 = a31/a11 and m41 = a41/a11 of row 1 from rows 2, 3 and 4 respectively, producing

[ a11  a12      a13      a14     | b1     ]
[ 0    a22^(2)  a23^(2)  a24^(2) | b2^(2) ]
[ 0    a32^(2)  a33^(2)  a34^(2) | b3^(2) ]
[ 0    a42^(2)  a43^(2)  a44^(2) | b4^(2) ]

Step 3: Eliminate x2 from the 1st, 3rd and 4th equations by subtracting the multiples m12 = a12/a22^(2), m32 = a32^(2)/a22^(2) and m42 = a42^(2)/a22^(2) of row 2 from rows 1, 3 and 4 to produce

[ a11^(3)  0        a13^(3)  a14^(3) | b1^(3) ]
[ 0        a22^(2)  a23^(2)  a24^(2) | b2^(2) ]
[ 0        0        a33^(3)  a34^(3) | b3^(3) ]
[ 0        0        a43^(3)  a44^(3) | b4^(3) ]

Step 4: Eliminate x3 from the 1st, 2nd and 4th equations by subtracting the multiples m13 = a13^(3)/a33^(3), m23 = a23^(3)/a33^(3) and m43 = a43^(3)/a33^(3) of row 3 from rows 1, 2 and 4, producing

[ a11^(4)  0        0        a14^(4) | b1^(4) ]
[ 0        a22^(4)  0        a24^(4) | b2^(4) ]
[ 0        0        a33^(3)  a34^(3) | b3^(3) ]
[ 0        0        0        a44^(4) | b4^(4) ]

From this we finally solve for x1, x2, x3 and x4 from the resulting simultaneous equations.

For the purpose of comparison, let us solve the problem given in (3.1) using the Gauss-Jordan elimination method, i.e.

[ 1  2  3 ] [ x1 ]   [  3 ]
[ 2 -1  1 ] [ x2 ] = [ 11 ]
[ 3  2 -1 ] [ x3 ]   [ -5 ]

The augmented matrix becomes

[ 1  2  3 |  3 ]
[ 2 -1  1 | 11 ]
[ 3  2 -1 | -5 ]

We now eliminate x1 from the 2nd and 3rd equations by subtracting the multiples m21 = a21/a11 = 2 and m31 = a31/a11 = 3 of row 1 from rows 2 and 3 respectively, producing

[ 1  2   3 |   3 ]
[ 0 -5  -5 |   5 ]
[ 0 -4 -10 | -14 ]

Eliminate x2 from the 1st and 3rd equations by subtracting the multiples m12 = a12/a22 = -2/5 and m32 = a32/a22 = 4/5 of row 2 from rows 1 and 3. We have

[ 1  0  1 |   5 ]
[ 0 -5 -5 |   5 ]
[ 0  0 -6 | -18 ]

Eliminate x3 from the 1st and 2nd equations by subtracting the multiples m13 = a13/a33 = -1/6 and m23 = a23/a33 = 5/6 of row 3 from rows 1 and 2:

[ 1  0  0 |   2 ]
[ 0 -5  0 |  20 ]
[ 0  0 -6 | -18 ]
This finally gives, after dividing each row by its diagonal element,

[ x1 ]   [  2 ]
[ x2 ] = [ -4 ]
[ x3 ]   [  3 ]

For more illustration, consider the following system of equations using Gauss-Jordan elimination (i) without pivoting and (ii) with partial pivoting:

[ 1  1  1  1 ] [ x1 ]   [  1 ]
[ 1 -1 -1  1 ] [ x2 ] = [  2 ]
[ 2 -4  3 -5 ] [ x3 ]   [ -2 ]
[ 3 -1  1 -1 ] [ x4 ]   [ -1 ]

(i) Without pivoting, we represent the above in augmented matrix form:

[ 1  1  1  1 |  1 ]
[ 1 -1 -1  1 |  2 ]
[ 2 -4  3 -5 | -2 ]
[ 3 -1  1 -1 | -1 ]

We now eliminate x1 from the 2nd, 3rd and 4th equations by subtracting the multiples m21 = a21/a11 = 1, m31 = a31/a11 = 2 and m41 = a41/a11 = 3 of row 1 from rows 2, 3 and 4 respectively, producing

[ 1  1  1  1 |  1 ]
[ 0 -2 -2  0 |  1 ]
[ 0 -6  1 -7 | -4 ]
[ 0 -4 -2 -4 | -4 ]

Eliminate x2 from equations (1), (3) and (4) by subtracting the multiples m12 = a12/a22 = -1/2, m32 = a32/a22 = 3 and m42 = a42/a22 = 2 of row 2 from rows 1, 3 and 4 respectively, producing

[ 1  0  0  1 | 3/2 ]
[ 0 -2 -2  0 |  1  ]
[ 0  0  7 -7 | -7  ]
[ 0  0  2 -4 | -6  ]

Eliminate x3 from equations (1), (2) and (4) by subtracting the multiples m13 = a13/a33 = 0/7 = 0, m23 = a23/a33 = -2/7 and m43 = a43/a33 = 2/7 of row 3 from rows 1, 2 and 4 to produce

[ 1  0  0  1 | 3/2 ]
[ 0 -2  0 -2 | -1  ]
[ 0  0  7 -7 | -7  ]
[ 0  0  0 -2 | -4  ]

Eliminate x4 from equations (1), (2) and (3) by subtracting the multiples m14 = a14/a44 = -1/2, m24 = a24/a44 = 1 and m34 = a34/a44 = 7/2 of row 4 from rows 1, 2 and 3, producing

[ 1  0  0  0 | -1/2 ]
[ 0 -2  0  0 |  3   ]
[ 0  0  7  0 |  7   ]
[ 0  0  0 -2 | -4   ]

Scaling the rows by R2 -> -1/2 R2, R3 -> 1/7 R3 and R4 -> -1/2 R4 gives

[ 1  0  0  0 | -1/2 ]
[ 0  1  0  0 | -3/2 ]
[ 0  0  1  0 |  1   ]
[ 0  0  0  1 |  2   ]

so x1 = -1/2, x2 = -3/2, x3 = 1 and x4 = 2. Hence

[ x1 ]   [ -1/2 ]
[ x2 ] = [ -3/2 ]
[ x3 ]   [  1   ]
[ x4 ]   [  2   ]

(ii) With partial pivoting: since |3| > |2| > |1| = |1|, we rearrange the rows in order of magnitude and, following the same process, obtain the same answer.
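Both variants can be sketched together (illustrative code, not the authors' implementation): a Gauss-Jordan routine with an optional partial-pivoting switch, run on the 4 x 4 example above; both runs return the same solution, x1 = -1/2, x2 = -3/2, x3 = 1, x4 = 2.

```python
# Illustrative sketch (not from the paper): Gauss-Jordan elimination,
# optionally with partial pivoting, checked on the 4 x 4 example above.

def gauss_jordan_solve(A, b, partial_pivoting=False):
    """Reduce [A | b] to diagonal form by eliminating above and below
    each pivot, then divide through by the diagonal."""
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for k in range(n):
        if partial_pivoting:
            # Switch in the row with the largest magnitude in column k.
            p = max(range(k, n), key=lambda i: abs(M[i][k]))
            M[k], M[p] = M[p], M[k]
        # Eliminate column k both above and below the pivot in one pass.
        for i in range(n):
            if i != k:
                m = M[i][k] / M[k][k]
                for j in range(k, n + 1):
                    M[i][j] -= m * M[k][j]
    # Make each diagonal element equal to one.
    return [M[i][n] / M[i][i] for i in range(n)]

A = [[1, 1, 1, 1], [1, -1, -1, 1], [2, -4, 3, -5], [3, -1, 1, -1]]
b = [1, 2, -2, -1]
# Both runs print a vector close to [-0.5, -1.5, 1.0, 2.0].
print(gauss_jordan_solve(A, b))
print(gauss_jordan_solve(A, b, partial_pivoting=True))
```

In exact arithmetic the row order cannot change the solution set; in floating-point arithmetic, partial pivoting mainly improves numerical stability.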
 1 0 0 1 3 
 2 APPLICATIONS OF GAUSS AND GAUSS-
 0 2 2 0 1  JORDAN METHODS OF SOLVING
0 0 7 7  SYSTEM OF LINEAR EQUATIONS
 7 
Gauss and Gauss-Jordan methods of solving
 0 0 2 4 6 
  system of linear equations can be used in various
Eliminate x3 from equation (1), (2) and (4) by subject areas and are used to solve for unknowns.
With n equations, the highest possible number of
subtracting multiples
unknowns that can be solved for is n.
complications occur when the number of
unknowns is unequal to the number of equations in the system.

Linear systems with more or fewer equations than unknowns occur quite often. When there are fewer equations than unknowns, it is impossible to solve for all the unknowns algebraically. Because of this, there are usually infinitely many solutions (it is hard to narrow them down). When there are more equations than unknowns (an overdetermined system), there are usually no solutions. This occurs because there may be solutions that satisfy two of the three equations, but not necessarily all three. The only way there will be a solution to an overdetermined system (for example, one with three equations and two unknowns) is if all the equations are equivalent, or if two of the three equations are equivalent and the remaining equation intersects them at that same single point.

Some of the areas where the Gauss and Gauss-Jordan methods can be applied are Business and Economics, Physics, Biology, Chemistry, Engineering, etc.

Application in Business
Gauss and Gauss-Jordan methods of solving linear systems can be used in Business Education; as a matter of fact, Business and Economics are interrelated.

Bolade Nig-Limited, a garment industry, manufactures three shirt styles. Each shirt requires the services of three departments, as shown below.

                  STYLES
              A      B      C
Cutting      0.2    0.4    0.3
Sewing       0.3    0.5    0.4
Packaging    0.1    0.2    0.1

The cutting, sewing and packaging departments have available a maximum of 1160, 1560 and 480 hours respectively. You are required to:
i. Formulate the information into linear equation form.
ii. Use the Gaussian and Gauss-Jordan elimination methods to find the number of shirts that must be produced each week for the plant to operate at full capacity.
Let x represent style A, y represent style B and z represent style C. Then

0.2x + 0.4y + 0.3z = 1160    (i)
0.3x + 0.5y + 0.4z = 1560    (ii)
0.1x + 0.2y + 0.1z = 480     (iii)

Solving yields x = 1200, y = 800 and z = 2000 (all in units).

Application in Economics
Gauss and Gauss-Jordan methods can be used to find the equilibrium price and quantity to be supplied in a given market. Let us say there are two interrelated products: orange juice and water. Let P1 and q1 represent the price and quantity demanded respectively for product 1 (orange juice), and P2 and q2 represent the same for product 2 (water).

                Demand                       Supply
Product 1:  P1 = 2000 - 3q1 - 2q2    P1 = 100 + 2q1 + q2
Product 2:  P2 = 2800 - q1 - 4q2     P2 = 200 + 3q1 + 2q2

For equilibrium to be achieved, both price expressions must be equal, so the following equations are obtained:

Product 1:  2000 - 3q1 - 2q2 = 100 + 2q1 + q2
Product 2:  2800 - q1 - 4q2 = 200 + 3q1 + 2q2

The above expressions, when rearranged and solved, give q1 = 200 and q2 = 300. Hence P1 = N800 and P2 = N1,400.
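Both application systems can be checked with a small Gauss-Jordan routine in exact rational arithmetic (an illustrative check, not from the paper; the economics equilibrium conditions are entered in the rearranged form 5q1 + 3q2 = 1900, 4q1 + 6q2 = 2600):

```python
# Illustrative check (not from the paper) of the two application systems,
# using exact rational arithmetic and a small Gauss-Jordan routine.
from fractions import Fraction

def solve(rows):
    """Gauss-Jordan on an augmented matrix; returns the solution vector."""
    M = [[Fraction(v) for v in row] for row in rows]
    n = len(M)
    for k in range(n):
        for i in range(n):
            if i != k:
                m = M[i][k] / M[k][k]
                for j in range(k, n + 1):
                    M[i][j] -= m * M[k][j]
    return [M[i][n] / M[i][i] for i in range(n)]

# Business example: department-hours equations at full capacity.
x, y, z = solve([["0.2", "0.4", "0.3", 1160],
                 ["0.3", "0.5", "0.4", 1560],
                 ["0.1", "0.2", "0.1", 480]])
print(x, y, z)  # 1200 800 2000

# Economics example: the equilibrium conditions rearranged as
# 5*q1 + 3*q2 = 1900 and 4*q1 + 6*q2 = 2600.
q1, q2 = solve([[5, 3, 1900], [4, 6, 2600]])
print(q1, q2)                    # 200 300
print(2000 - 3 * q1 - 2 * q2)    # P1 = 800
print(2800 - q1 - 4 * q2)        # P2 = 1400
```

Fractions avoid the rounding that decimal coefficients such as 0.2 would otherwise introduce, so the printed answers are exact.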
Finding the equilibrium price and quantity in a market is very important to an economist, since beyond these values nothing will be sold at a profit.

Applications in Physics
Problems similar to those above can occur or be formulated, and both the Gauss and Gauss-Jordan methods can be readily applied.

CONCLUSION
The study has concluded that, when performing calculations by hand, the Gauss-Jordan method is preferable to the Gaussian elimination version because it avoids the need for back substitution. Besides, it was noted very remarkably that both the Gauss elimination method and the Gauss-Jordan elimination method gave the same answers for each worked example, and both with pivoting and with partial pivoting equally gave the
same answers. This necessarily implies that, since the system of linear equations remains the same even when its equations are rearranged (so that its matrix form is transformed as the rows' elements change), the resultant solutions are still the same. The importance of the Gauss and Gauss-Jordan elimination methods cannot be overemphasized, due to their relevance to different fields of study (pure science, i.e. Physics, Biology, Chemistry, Mathematics, etc., and social science, i.e. Economics, Geography, Business Education, etc.).

REFERENCES
Adeola, T.A. (2009): An Investigation into Solution of System of Linear Equations, an Unpublished Project Work, Adeyemi College of Education, Ondo.
Anderson, J.P. (1982): Mathematical Analysis and Applications to Business and Economics, 3rd Ed., PEP, New York.
Anton, H. (2005): Elementary Linear Algebra (Application Version, 9th Edition), Wiley International, London.
http://www.purplemath.com/./systlin6.htm (2010).
http://www.matrixanalysis.com/downloadchapter.html (2007).
Ilori, S.A.; Akinyele, O. (1986): Elementary Abstract and Linear Algebra, Ibadan University Press, Ibadan.
Mayor, C.D. (2001): Engineering Mathematics, 5th Ed., Anthony Rowe Ltd., Chippenham, Wiltshire.
Seymour, L. (1981): Theory and Problems of Linear Algebra, ISBN 0-07-99012-3.
Webber, J.P. (1982): Mathematical Analysis, 4th Ed., Harpir Institute, London.
