TOPIC: GAUSSIAN ELIMINATION.

SUBMITTED TO: JAGANJOT KAUR (SUBJECT TEACHER)

SUBMITTED BY:

GURMUKH SINGH SECTION: E4001 ROLL NO.: B42 REGD.NO:-11006705

ACKNOWLEDGEMENT
Thanksgiving is a sacred part of Indian culture. So, first of all, I would like to thank our subject teacher, MISS JAGANJOT KAUR, for her humble support and encouragement, which carried me through this project. Her interest in my topic uplifted my spirit, which in turn helped me throughout.

Secondly, I express my gratitude to my parents for being a continuous source of encouragement and for the financial aid given to me. Thirdly, I would like to thank my beloved friends, who helped me in their own little ways. They were a great help and support, contributing their innovative ideas and helping me complete my work on time.

Finally, I would like to thank God for showering his blessings upon me.

GURMUKH SINGH

CONTENTS
1. INTRODUCTION
2. HISTORY
3. ALGORITHM OVERVIEW
4. EXAMPLES
5. APPLICATIONS
6. SYSTEM OF LINEAR EQUATIONS
7. GAUSS-JORDAN METHOD
8. EXAMPLES
9. CONCLUSION
REFERENCES

INTRODUCTION
In linear algebra, Gaussian elimination is an algorithm for solving systems of linear equations, finding the rank of a matrix, and calculating the inverse of an invertible square matrix. Elementary row operations are used to reduce a matrix to row echelon form. Gauss–Jordan elimination, an extension of this algorithm, reduces the matrix further to reduced row echelon form. Gaussian elimination alone is sufficient for many applications and is cheaper than the Gauss–Jordan version. Gaussian elimination is named after the German mathematician and scientist Carl Friedrich Gauss, which makes it an example of Stigler's law.

History
The method of Gaussian elimination appears in Chapter Eight, Rectangular Arrays, of the important Chinese mathematical text Jiuzhang suanshu or The Nine Chapters on the Mathematical Art. Its use is illustrated in eighteen problems, with two to five equations. The first reference to the book by this title is dated to 179 CE, but parts of it were written as early as approximately 150 BCE.[1] It was commented on by Liu Hui in the 3rd century.

The method in Europe stems from the notes of Isaac Newton.[2] In 1670, he wrote that all the algebra books known to him lacked a lesson for solving simultaneous equations, which Newton then supplied. Cambridge University eventually published the notes as Arithmetica Universalis in 1707, long after Newton had left academic life. The notes were widely imitated, which made (what is now called) Gaussian elimination a standard lesson in algebra textbooks by the end of the 18th century. Carl Friedrich Gauss in 1810 devised a notation for symmetric elimination that was adopted in the 19th century by professional hand computers to solve the normal equations of least-squares problems. The algorithm that is taught in high school was named for Gauss only in the 1950s, as a result of confusion over the history of the subject.

Algorithm overview
The process of Gaussian elimination has two parts. The first part (forward elimination) reduces a given system to either triangular or echelon form, or results in a degenerate equation with no solution, indicating the system has no solution. This is accomplished through the use of elementary row operations. The second part uses back substitution to find the solution of the reduced system.

Stated equivalently for matrices, the first part reduces a matrix to row echelon form using elementary row operations, while the second reduces it to reduced row echelon form, or row canonical form.

Another point of view, which turns out to be very useful for analyzing the algorithm, is that Gaussian elimination computes a matrix decomposition. The three elementary row operations used in Gaussian elimination (multiplying rows, switching rows, and adding multiples of rows to other rows) amount to multiplying the original matrix with invertible matrices from the left. The first part of the algorithm computes an LU decomposition, while the second part writes the original matrix as the product of a uniquely determined invertible matrix and a uniquely determined reduced row-echelon matrix.

Example: Suppose the goal is to find and describe the solution(s), if any, of the following system of linear equations. The algorithm is as follows: eliminate x from all equations below L1, and then eliminate y from all equations below L2. This will put the system into triangular form. Then, using back-substitution, each unknown can be solved for. In the example, x is eliminated from L2 by adding a suitable multiple of L1 to L2; x is then eliminated from L3 by adding L1 to L3. Formally:
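To make the two-part process concrete, here is a minimal sketch in Python (my own illustration, not part of the algorithm's formal statement). It performs forward elimination on an augmented matrix and then back-substitution, assuming a square system and that no zero pivot is encountered; pivoting is discussed in the Analysis section.

def gaussian_elimination(A, b):
    # Solve A x = b, where A is an n x n list of lists and b is a list of length n.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]      # augmented matrix [A | b]
    # Forward elimination: create zeros below each pivot, column by column.
    for k in range(n):
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]                # multiple of row k to subtract
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    # Back-substitution: solve for the unknowns from the last row upwards.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Illustrative system: 2x + y - z = 8, -3x - y + 2z = -11, -2x + y + 2z = -3
print(gaussian_elimination([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]], [8, -11, -3]))   # [2.0, 3.0, -1.0]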

The result is a new system. Now y is eliminated from L3 by adding -4 L2 to L3. The result is a system of linear equations in triangular form, and so the first part of the algorithm is complete.

The last part, back-substitution, consists of solving for the unknowns in reverse order. From the last equation the value of z can be read off. Next, z can be substituted into L2, which can then be solved to obtain y. Then, z and y can be substituted into L1, which can be solved to obtain x. The system is solved.

Some systems cannot be reduced to triangular form, yet still have at least one valid solution: for example, if y had not occurred in L2 and L3 after the first step above, the algorithm would have been unable to reduce the system to triangular form.

However, it would still have reduced the system to echelon form. In this case the system does not have a unique solution. The solution set can then be expressed parametrically (that is, in terms of the free variables), so that if values for the free variables are chosen, a solution will be generated.

In practice, one does not usually deal with the systems in terms of equations, but instead makes use of the augmented matrix (which is also suitable for computer manipulations). The Gaussian elimination algorithm applied to the augmented matrix begins with the original augmented matrix, which, at the end of the first part of the algorithm (Gaussian elimination, zeros only under the leading 1), is in row echelon form. At the end of the algorithm, if the Gauss–Jordan elimination (zeros under and above the leading 1) is applied, it is in reduced row echelon form, or row canonical form.
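As a sketch of this augmented-matrix point of view, the reduced row echelon form can be computed with SymPy (assuming that library is available; the matrix below is my own illustration of a system with one free variable, not one taken from this report):

from sympy import Matrix
aug = Matrix([[1, 2, 3, 6],        # augmented matrix [A | b]
              [2, 4, 7, 14]])
rref_matrix, pivot_cols = aug.rref()
print(rref_matrix)                 # Matrix([[1, 2, 0, 0], [0, 0, 1, 2]])
print(pivot_cols)                  # (0, 2): column 1 has no pivot, so the second unknown is free

Reading the reduced form off, x = -2y and z = 2 with y free; choosing any value for y generates a solution, exactly as described above.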

APPLICATIONS
Finding the inverse of a matrix
Suppose A is a square matrix and you need to calculate its inverse. The identity matrix is augmented to the right of A, forming the block matrix B = [A, I]. Through application of elementary row operations and the Gaussian elimination algorithm, the left block of B can be reduced to the identity matrix I, which leaves the inverse of A in the right block of B. If the algorithm is unable to reduce A to triangular form, then A is not invertible.

General algorithm to compute ranks and bases
The Gaussian elimination algorithm can be applied to any matrix A. If we get "stuck" in a given column, we move to the next column. In this way, for example, some matrices can be transformed to a matrix that has a reduced row echelon form like the one shown (the *'s are arbitrary entries). This echelon matrix T contains a wealth of information about A: the rank of A is 5, since there are 5 non-zero rows in T; the vector space spanned by the columns of A has a basis consisting of the first, third, fourth, seventh and ninth columns of A (the columns of the ones in T); and the *'s tell you how the other columns of A can be written as linear combinations of the basis columns.
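Returning to the inverse computation described at the start of this section, the [A, I] procedure can be sketched in a few lines of Python with NumPy (assumed here purely for array handling; pivoting is omitted to stay close to the description above):

import numpy as np

def inverse_via_gauss_jordan(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    B = np.hstack([A, np.eye(n)])       # block matrix B = [A | I]
    for k in range(n):
        B[k] = B[k] / B[k, k]           # scale so the pivot becomes 1
        for i in range(n):
            if i != k:
                B[i] -= B[i, k] * B[k]  # clear the rest of column k
    return B[:, n:]                     # the right block now holds the inverse of A

A = np.array([[2.0, 1.0], [5.0, 3.0]])
print(inverse_via_gauss_jordan(A))      # [[ 3. -1.] [-5.  2.]]
print(np.linalg.inv(A))                 # same answer from the library routine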

Analysis
Gaussian elimination to solve a system of n equations for n unknowns requires n(n+1)/2 divisions, (2n^3 + 3n^2 - 5n)/6 multiplications, and (2n^3 + 3n^2 - 5n)/6 subtractions, for a total of approximately 2n^3/3 operations; so it has a complexity of O(n^3).

This algorithm can be used on a computer for systems with thousands of equations and unknowns. However, the cost becomes prohibitive for systems with millions of equations. These large systems are generally solved using iterative methods. Specific methods exist for systems whose coefficients follow a regular pattern (see system of linear equations). Gaussian elimination can be performed over any field.

Gaussian elimination is numerically stable for diagonally dominant or positive-definite matrices. For general matrices, Gaussian elimination is usually considered to be stable in practice if partial pivoting is used, as sketched below, even though there are examples for which it is unstable.
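Partial pivoting means the following: before eliminating in column k, swap into the pivot position the row whose entry in that column is largest in absolute value. A small sketch (the function name is my own):

import numpy as np

def forward_elimination_with_partial_pivoting(A, b):
    M = np.hstack([np.asarray(A, float), np.asarray(b, float).reshape(-1, 1)])
    n = M.shape[0]
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))    # row with the largest pivot candidate
        M[[k, p]] = M[[p, k]]                  # swap it into the pivot position
        for i in range(k + 1, n):
            M[i, k:] -= (M[i, k] / M[k, k]) * M[k, k:]
    return M                                   # upper triangular augmented matrix

# Without the swap, this system would divide by the zero in the (1, 1) position:
print(forward_elimination_with_partial_pivoting([[0.0, 1.0], [1.0, 1.0]], [1.0, 2.0]))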

Higher order tensors
Gaussian elimination does not generalize in any simple way to higher order tensors (matrices are order-2 tensors); even computing the rank of a tensor of order greater than 2 is a difficult problem.

Gaussian elimination is a method for solving matrix equations of the form A x = b (1). To perform Gaussian elimination starting with the system of equations (2), compose the "augmented matrix equation" (3). Here, the column vector in the variables is carried along for labeling the matrix rows. Now, perform elementary row operations to put the augmented matrix into the upper triangular form (4). Solve the equation of the last row for its unknown, then substitute back into the equation of the row above to obtain a solution for the next unknown, and so on, according to the formula (5).

In Mathematica, RowReduce performs a version of Gaussian elimination, with the equation being solved by

GaussianElimination[m_?MatrixQ, v_?VectorQ] := Last /@ RowReduce[Flatten /@ Transpose[{m, v}]]

LU decomposition of a matrix is frequently used as part of a Gaussian elimination process for solving a matrix equation. A matrix that has undergone Gaussian elimination is said to be in echelon form.

For example, consider the matrix equation (6). In augmented form, this becomes (7). Switching the first and third rows (without switching the elements in the right-hand column vector) gives (8). Subtracting 9 times the first row from the third row gives (9). Subtracting 4 times the first row from the second row gives (10). Finally, adding a suitable multiple of the second row to the third row gives (11). Restoring the transformed matrix equation gives (12), which can be solved immediately for the last unknown, back-substituting to obtain the second unknown (which actually follows trivially in this example), and then back-substituting again to find the first.

Systems of Linear Equations: Gaussian Elimination
It is quite hard to solve non-linear systems of equations, while linear systems are quite easy to study. There are numerical techniques which help to approximate nonlinear systems with linear ones, in the hope that the solutions of the linear systems are close enough to the solutions of the nonlinear systems. We will not discuss this here. Instead, we will focus our attention on linear systems.

Definition. The equation

ax + by + cz + dw = h,

where a, b, c, d, and h are known numbers and x, y, z, and w are unknown numbers, is called a linear equation. If h = 0, the linear equation is said to be homogeneous. A linear system is a set of linear equations, and a homogeneous linear system is a set of homogeneous linear equations. For the sake of simplicity, we will restrict ourselves to three, at most four, unknowns; the reader interested in the case of more unknowns may easily extend the following ideas. For example, the first two systems shown are linear systems, while

the third is a nonlinear system (because of y2). The system shown after these is a homogeneous linear system.

Matrix Representation of a Linear System
Matrices are helpful in rewriting a linear system in a very simple form, and the algebraic properties of matrices may then be used to solve systems. First, consider the linear system and set the matrices A, X, and C accordingly. Using matrix multiplication, we can rewrite the linear system above as the matrix equation A X = C.
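For completeness, the matrix form A X = C can be written and solved directly with NumPy (an assumed library; the numbers are only an illustration, not the system discussed above):

import numpy as np
A = np.array([[1.0, 2.0],          # matrix of coefficients
              [3.0, 4.0]])
C = np.array([5.0, 6.0])           # right-hand side
X = np.linalg.solve(A, C)          # the unknowns
print(X)                           # [-4.   4.5]
print(np.allclose(A @ X, C))       # True: A X = C is satisfied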

As you can see, this is far nicer than writing out the equations. The matrix A is called the matrix coefficient of the linear system, the matrix X is the unknown matrix (its entries are the unknowns of the linear system), and the matrix C is called the nonhomogeneous term. When C = 0, the linear system is homogeneous. The augmented matrix associated with the system is the matrix [A|C]. In general, if the linear system has n equations with m unknowns, then the matrix coefficient will be an n x m matrix and the augmented matrix an n x (m+1) matrix. But sometimes it is worth solving the system directly, without going through the matrix form.

Now we turn our attention to the solutions of a system.

Definition. Two linear systems with n unknowns are said to be equivalent if and only if they have the same set of solutions.

This definition is important, since the idea behind solving a system is to find an equivalent system which is easy to solve. You may wonder how we will come up with such a system. Easy: we do that through elementary operations. Indeed, it is clear that if we interchange two equations, the new system is still equivalent to the old one. If we multiply an equation by a nonzero number, we obtain a new system still equivalent to the old one. And finally, replacing one equation with the sum of two equations, we again obtain an equivalent system. These operations are called elementary operations on systems. Let us see how they work in a particular case.

Example. Consider the linear system

The idea is to keep the first equation and work on the last two. In doing that, we will try to kill one of the unknowns and solve for the other two. For example, if we keep the first and second equations, and subtract the first one from the last one, we get an equivalent system. Next we keep the first and the last equations, and we subtract the first from the second. We get another equivalent system.

Now we focus on the second and the third equations. We repeat the same procedure: try to kill one of the two unknowns (y or z). Indeed, we keep the first and second equations, and we add the second to the third after multiplying it by 3. We get a system which obviously implies z = -2. From the second equation, we get y = -2, and finally from the first equation we get x = 4. Therefore the linear system has one solution:

x = 4,  y = -2,  z = -2.
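The three kinds of step used in the example above (swapping equations, scaling an equation, adding a multiple of one equation to another) can be written as tiny helper functions on an augmented matrix; this is only an illustrative sketch with names of my own choosing:

def swap_rows(M, i, j):
    M[i], M[j] = M[j], M[i]

def scale_row(M, i, c):                        # c must be nonzero
    M[i] = [c * v for v in M[i]]

def add_multiple(M, i, j, c):                  # row i := row i + c * row j
    M[i] = [vi + c * vj for vi, vj in zip(M[i], M[j])]

# e.g. add 3 times the second row to the third row (0-based indices):
M = [[1.0, 1.0, 1.0, 0.0], [0.0, 1.0, -3.0, 2.0], [0.0, -3.0, 1.0, 5.0]]
add_multiple(M, 2, 1, 3.0)
print(M[2])                                    # [0.0, 0.0, -8.0, 11.0]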

Going from the last equation to the first while solving for the unknowns is called backsolving.

Keep in mind that linear systems for which the matrix coefficient is upper-triangular are easy to solve. This is particularly true if the matrix is in echelon form. So the trick is to perform elementary operations to transform the initial linear system into another one for which the coefficient matrix is in echelon form.

Using our knowledge about matrices, is there any way we can rewrite what we did above in matrix form, which will make our notation (or representation) easier? Indeed, consider the augmented matrix, and let us perform some elementary row operations on it. If we keep the first and second rows, and subtract the first one from the last one, we get a new matrix. Next we keep the first and the last rows, and we subtract the first from the second. Then we keep the first and second rows, and we add the second to the third after multiplying it by 3, to get

a triangular matrix which is not in echelon form. The linear system for which this matrix is an augmented one is, as you can see, the same system as we obtained before. In fact, we followed the same elementary operations performed above: in every step the new matrix was exactly the augmented matrix associated to the new system. This shows that, instead of writing the systems over and over again, it is easy to play around with the elementary row operations and, once we obtain a triangular matrix, write the associated linear system and then solve it. This is known as Gaussian Elimination.

GAUSS-JORDAN METHOD:
The Gauss-Jordan method is a version of Gaussian elimination for solving systems of linear equations.

Gauss-Jordan Elimination
Gauss-Jordan elimination goes the extra step of using such operations to eliminate variables above the diagonal as well. The variables' coefficients, instead of merely being reduced to a triangular shape, are reduced to a diagonal. As a result, one can just read off the solution, for example that x1 = -1, x2 = 2, and so on. This eliminates the need for successive substitution, allowing one to just read off the solutions. The need for back-substitution to solve for each variable is therefore eliminated.
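Continuing the earlier Python sketch, the extra Gauss-Jordan step can be written as follows (again my own illustration, without pivoting): every pivot is scaled to 1 and the entries both below and above it are cleared, so the last column of the augmented matrix ends up holding the solution.

def gauss_jordan(A, b):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]     # augmented matrix [A | b]
    for k in range(n):
        pivot = M[k][k]
        M[k] = [v / pivot for v in M[k]]             # make the pivot equal to 1
        for i in range(n):
            if i != k:
                factor = M[i][k]
                M[i] = [vi - factor * vk for vi, vk in zip(M[i], M[k])]
    return [row[n] for row in M]                     # read the solution straight off

# Same illustrative system as before; no back-substitution is needed.
print(gauss_jordan([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]], [8, -11, -3]))   # [2.0, 3.0, -1.0]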

Difference from Gaussian Elimination
The additional operations Gauss-Jordan performs to put the variables into diagonal form triple the number of computations required. The gain, however, is in being able to read answers off immediately, for example (4, 1, -3, 4) above, even compared with Gaussian elimination and its back-substitution operations.

Disadvantages
1. The additional operations of Gauss-Jordan add to rounding error and computer time.
2. A disadvantage of both Gaussian and Gauss-Jordan elimination is that they require the right vector to be known. If these numbers are to be learned later, a method called matrix factorization can prepare a triangular shape in advance, as in Gaussian elimination, for easy calculation once the vector is known. If the vector changes, the effort spent in factorizing has saved time as well (see the sketch below).

Elimination Method for Solving Systems of Linear Equations
A technique for solving systems of linear equations of any size is the Gauss-Jordan Elimination Method. This method uses a sequence of operations on a system of linear equations to obtain an equivalent system at each stage. An equivalent system is a system having the same solution as the original system. The operations of the Gauss-Jordan method are:
1. Interchange any two equations.
2. Replace an equation by a nonzero constant multiple of itself.
3. Replace an equation by the sum of that equation and a constant multiple of any other equation.
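The factorization idea mentioned above can be sketched with SciPy's LU routines (an assumption on my part; any equivalent routine would do): the expensive elimination work is done once, and each new right-hand side is then solved cheaply.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
lu_piv = lu_factor(A)                       # do the O(n^3) elimination work once

for b in ([9.0, 8.0], [5.0, 5.0]):          # several right-hand sides, reusing the factorization
    print(lu_solve(lu_piv, np.array(b)))    # [2. 3.] and then [1. 2.]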

EXAMPLES:
1. Let's solve the system

2x1 - 3x2 = -6
5x1 + 4x2 = 31

We form the matrix (A|b) and reduce it with the following row operations:

Row 1 := 1/2 x row 1
Row 2 := row 2 - 5 x row 1
Row 2 := 2/23 x row 2
Row 1 := row 1 + 3/2 x row 2

The reduced matrix represents the system

1x1 + 0x2 = 3
0x1 + 1x2 = 4

which has the unique solution (x1, x2) = (3, 4). This is also the unique solution to the original system.
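The row operations listed above can be replayed with NumPy (assumed only for convenience) to check that they really lead to the stated solution:

import numpy as np
M = np.array([[2.0, -3.0, -6.0],     # augmented matrix (A|b) of example 1
              [5.0,  4.0, 31.0]])
M[0] = 0.5 * M[0]                    # Row 1 := 1/2 x row 1
M[1] = M[1] - 5.0 * M[0]             # Row 2 := row 2 - 5 x row 1
M[1] = (2.0 / 23.0) * M[1]           # Row 2 := 2/23 x row 2
M[0] = M[0] + 1.5 * M[1]             # Row 1 := row 1 + 3/2 x row 2
print(M)                             # [[1. 0. 3.] [0. 1. 4.]]  ->  (x1, x2) = (3, 4)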

2. Let's solve the system

x1 + x2 + x3 = 5
x1 + 2x2 + 3x3 = 8

We form the matrix (A|b) and reduce it:

Row 2 := row 2 - row 1   (the matrix is now in echelon form)
Row 1 := row 1 - row 2   (the matrix is now reduced)

The reduced matrix represents the system

1x1 + 0x2 - 1x3 = 2
0x1 + 1x2 + 2x3 = 3

which has infinitely many solutions:

x1 = 2 + x3,  x2 = 3 - 2x3,  x3 = any number.

We wrote the solution above as two equations: x1 = 2 + x3 and x2 = 3 - 2x3 (x3 in R). Notice that these two equations have exactly the same meaning as one vector equation. When we write the equation this way, it looks clear that the solution set is the line through the point (2, 3, 0) that is parallel to the vector (1, -2, 1).
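A quick numerical check of the parametric solution (plain Python, with values of x3 chosen arbitrarily): every point of the form (2 + x3, 3 - 2*x3, x3) should satisfy both equations.

for x3 in (-1.0, 0.0, 2.5):
    x1, x2 = 2 + x3, 3 - 2 * x3
    print(x1 + x2 + x3, x1 + 2 * x2 + 3 * x3)    # always prints 5.0 8.0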

CONCLUSION
Gaussian elimination is a method of solving a linear system (consisting of m equations in n unknowns) by bringing the augmented matrix to triangular echelon form. In the Gaussian Elimination Method, Elementary Row Operations (E.R.O.'s) are applied in a specific order to transform an augmented matrix into triangular echelon form as efficiently as possible.

This is the essence of the method. Given a system of m equations in n variables or unknowns, pick the first equation and subtract suitable multiples of it from the remaining m-1 equations. In each case choose the multiple so that the subtraction cancels, or eliminates, the same variable, say x1. The result is that the remaining m-1 equations contain only n-1 unknowns (x1 no longer appears). Now set aside the first equation and repeat the above process with the remaining m-1 equations in n-1 unknowns. Continue repeating the process. Each cycle reduces the number of variables and the number of equations. The process stops when either:

- There remains one equation in one variable. In that case, there is a unique solution and back-substitution is used to find the values of the other variables.
- There remain equations but no variables (i.e. the lowest row(s) of the augmented matrix contain only zeros on the left side of the vertical line). This indicates that the system of equations is either inconsistent or redundant. In the case of inconsistency, the information contained in the equations is contradictory. In the case of redundancy, there may still be a unique solution, and back-substitution can be used to find the values of the other variables.
- There remain variables but no equations. In that case there is no unique solution.

REFERENCES
1. S. CHAND MATHEMATICS FOR CLASS XII, S. CHAND PUBLICATIONS.
2. MODERN APPROACH TO MATHEMATICS BY N.K. NAG.
3. HIGHER ENGINEERING MATHEMATICS BY B.V. RAMANA.
4. www.google.com
