
COMPUTING TECHNIQUE

Presented By: Netra Bahadur Katuwal
MATRIX OPERATIONS IN GEOMATICS ENGINEERING PROBLEMS

▪ Solution of linear equations

• A system of linear equations consists of two or more linear equations made up of two or more variables, such that all equations in the system are considered simultaneously.
• To find the unique solution to a system of linear equations, we must find a numerical value for each variable that satisfies all equations in the system at the same time.
• Some linear systems have no solution, and others have infinitely many solutions. For a linear system to have a unique solution, there must be at least as many equations as there are variables. Even so, this does not guarantee a unique solution.
• The two lines represent equation 1 and equation 2 respectively. The intersection at the unique point (2, 4) is the solution that satisfies both equations.
TYPES OF SOLUTIONS FOR LINEAR EQUATIONS

• A system of linear equations can have 3 types of solutions.

1. An independent system has exactly one solution pair. The point where the two lines intersect is the only solution. It is also called a consistent independent system.

2. An inconsistent system has no solution. The two lines are parallel and will never intersect.

3. A dependent system has infinitely many solutions. The lines are coincident: they are the same line, so every coordinate pair on the line is a solution to both equations. It is also called a consistent dependent system.
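These three cases can also be distinguished programmatically. Below is a minimal sketch (assuming NumPy is available) based on comparing the rank of the coefficient matrix with the rank of the augmented matrix, a criterion not stated on the slide:

```python
import numpy as np

def classify(A, b):
    """Classify a linear system A x = b by comparing matrix ranks."""
    aug = np.column_stack([A, b])               # augmented matrix [A | b]
    r_A = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(aug)
    if r_A < r_aug:
        return "inconsistent (no solution)"
    if r_A < A.shape[1]:
        return "dependent (infinitely many solutions)"
    return "independent (unique solution)"

# Intersecting, parallel, and coincident lines:
print(classify(np.array([[1, 1], [1, -1]]), np.array([4, 2])))  # independent
print(classify(np.array([[1, 1], [1, 1]]), np.array([4, 2])))   # inconsistent
print(classify(np.array([[1, 1], [2, 2]]), np.array([4, 8])))   # dependent
```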
HOW TO FIND THE SOLUTION OF A LINEAR EQUATION?

• Solutions for linear equations in one variable

▪ Consider the equation 2x + 4 = 8.

• To find the value of x, we first remove 4 from the L.H.S. by subtracting 4 from both sides of the equation: 2x + 4 - 4 = 8 - 4.
• Simplifying, we get 2x = 4.
• Now we remove 2 from the L.H.S. to isolate x, so we divide both sides by 2: 2x/2 = 4/2, x = 2.

▪ Hence, the solution of the equation 2x + 4 = 8 is x = 2.
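The two steps above (undo the addition, then undo the multiplication) can be mirrored in a few lines of Python; `solve_linear` is a hypothetical helper name introduced here for illustration:

```python
def solve_linear(a, b, c):
    """Solve a*x + b = c for x (requires a != 0), mirroring the manual steps."""
    # Step 1: subtract b from both sides -> a*x = c - b
    rhs = c - b
    # Step 2: divide both sides by a -> x = (c - b) / a
    return rhs / a

print(solve_linear(2, 4, 8))  # 2x + 4 = 8  ->  x = 2.0
```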



• Solutions for linear equations in two variables

▪ The following methods can be used to find the solutions of linear equations in two variables.
1. Substitution Method
2. Elimination Method
3. Graphical Method
• Substitution Method
Consider the following pair of linear equations:
x + y = 4 and x - y = 2
• Let's rearrange the first equation to express y in terms of x: x + y = 4, so y = 4 - x.
• This expression for y can now be substituted into the second equation, so that we are left with an equation in x alone: x - y = 2, x - (4 - x) = 2, 2x - 4 = 2, 2x = 6, x = 6/2, x = 3.
• Once we have the value of x, we can plug it back into either of the two equations to find y. Let's plug it into the first equation: x + y = 4, (3) + y = 4, y = 4 - 3 = 1.
• The final solution is: x = 3, y = 1.
It should be clear why this process is called substitution: we express one variable in terms of the other using one of the pair of equations and substitute that expression into the second equation.
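As a cross-check, the same pair of equations can be handed to SymPy's `solve`, which carries out this algebra symbolically (a minimal sketch, assuming SymPy is available):

```python
import sympy as sp

# Solve x + y = 4 and x - y = 2 simultaneously.
x, y = sp.symbols("x y")
solution = sp.solve([sp.Eq(x + y, 4), sp.Eq(x - y, 2)], [x, y])
print(solution)  # x = 3, y = 1
```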
• Elimination Method
Consider the following pair of linear equations:
2x + 3y - 11 = 0, 3x + 2y - 9 = 0
The coefficients of x in the two equations are 2 and 3 respectively. Let us multiply the first equation by 3 and the second equation by 2, so that the coefficients of x in the two equations become equal:
• 3 × {2x + 3y - 11 = 0} → 6x + 9y - 33 = 0
• 2 × {3x + 2y - 9 = 0} → 6x + 4y - 18 = 0
Now, let us subtract the second equation from the first, which means that we subtract the left-hand sides of the two equations and the right-hand sides of the two equations, and the equality will still be preserved:
(6x + 9y - 33) - (6x + 4y - 18) = 0, 0 + 5y - 15 = 0, 5y = 15, y = 3
Note how x gets eliminated, and we are left with an equation in y alone. Once we have the value of y, we proceed as earlier: we plug it into either of the two equations. Let us put it into the first equation:
2x + 3y - 11 = 0, 2x + 3(3) - 11 = 0, 2x + 9 - 11 = 0, 2x = 2, x = 1
Thus, the solution is: x = 1, y = 3.
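The same system can be solved numerically with NumPy, which performs an elimination of this kind (via LU factorization) internally; a minimal sketch:

```python
import numpy as np

# Coefficient matrix and right-hand side for 2x + 3y = 11, 3x + 2y = 9.
A = np.array([[2.0, 3.0], [3.0, 2.0]])
b = np.array([11.0, 9.0])

sol = np.linalg.solve(A, b)  # elimination-based solver
print(sol)  # [1. 3.] -> x = 1, y = 3
```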
• Graphical Method
As an example, let's solve the following pair of linear equations: x - y + 2 = 0, 2x + y - 5 = 0. We draw the corresponding lines on the same axes.

The point of intersection is (1, 3), which means that x = 1, y = 3 is a solution to the pair of linear equations given above. In fact, it is the only solution to the pair, as two non-parallel lines cannot intersect at more than one point.
• What is Echelon Form?

Echelon form means that the matrix is in one of two states:

• Row echelon form.
• Reduced row echelon form.

This means that the matrix meets the following three requirements:
1. The first non-zero number in each row (called the leading coefficient) is 1.
(Note: some authors don't require that the leading coefficient is a 1; it could be any number.)
2. Every leading 1 is to the right of the leading 1 in the row above it.
3. Any non-zero rows are always above rows consisting entirely of zeros.
• The following examples are not in echelon form:

• Matrix A does not have all of its all-zero rows below its non-zero rows.
• Matrix B has a 1 in the 2nd position of the third row. For row echelon form, it needs to be to the right of the leading coefficient in the row above it; in other words, it should be in the fourth position, in place of the 3.
• Matrix C has a 2 as a leading coefficient instead of a 1.
• Matrix D has a -1 as a leading coefficient instead of a 1.
▪ Uniqueness and echelon forms

• The row echelon form of a matrix isn't unique: different sequences of row operations can produce different (equally valid) echelon forms of the same matrix.

• Reduced row echelon form is at the other end of the spectrum; it is unique, which means that row reduction of a matrix will produce the same answer no matter which sequence of row operations is used.
▪ What is Row Echelon Form?

• A matrix is in row echelon form if it meets the following requirements:

1. The first non-zero number from the left (the "leading coefficient") is always to the right of the first non-zero number in the row above.
2. Rows consisting of all zeros are at the bottom of the matrix.

Note: in a generic row echelon form, "a" can represent any number.

Technically, the leading coefficient can be any number. However, the majority of linear algebra textbooks do state that the leading coefficient must be the number 1.

Note: To find the rank of a matrix, simply transform the matrix to its row echelon form and count the number of non-zero rows.
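This rank-counting rule can be checked with SymPy; a minimal sketch, using an illustrative matrix (not from the slides) whose second row is a multiple of the first:

```python
import sympy as sp

M = sp.Matrix([[1, 2, 3],
               [2, 4, 6],   # 2 x row 1, so it reduces to a zero row
               [1, 0, 1]])

echelon = M.echelon_form()
# Rank = number of non-zero rows of the row echelon form.
rank = sum(1 for i in range(echelon.rows) if any(echelon.row(i)))
print(rank, M.rank())  # the two counts agree: rank 2
```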
▪ Transformation of a Matrix to Row Echelon Form
GAUSS METHOD, THE GAUSS-JORDAN METHOD
▪ What is Reduced Row Echelon Form?

• Reduced row echelon form is a type of matrix used to solve systems of linear equations. Reduced row echelon form has four requirements:

1. The first non-zero number in the first row (the leading entry) is the number 1.
2. The second row also starts with the number 1, which is further to the right than the leading entry in the first row. For every subsequent row, the leading 1 must be further to the right.
3. The leading entry in each row must be the only non-zero number in its column.
4. Any rows consisting entirely of zeros are placed at the bottom of the matrix.

Note: The matrix is said to be in reduced row-echelon form when all of the leading coefficients equal 1, and every column containing a leading coefficient has zeros elsewhere. This final form is unique; that means it is independent of the sequence of row operations used.
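SymPy's `rref` method produces exactly this form; a minimal sketch with an illustrative 3 × 4 augmented matrix (chosen here, not from the slides):

```python
import sympy as sp

# Augmented matrix for: x + 2y - z = -4, 2x + 3y - z = -11, -2x - 3z = 22.
A = sp.Matrix([[ 1, 2, -1,  -4],
               [ 2, 3, -1, -11],
               [-2, 0, -3,  22]])

rref_matrix, pivot_cols = A.rref()
print(rref_matrix)  # last column reads off the solution x = -8, y = 1, z = -2
print(pivot_cols)   # (0, 1, 2): every variable column has a leading 1
```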
▪ Transformation of a Matrix to Reduced Row Echelon Form

▪ What is the Gauss elimination method?

• In mathematics, the Gaussian elimination method is known as the row reduction algorithm for solving systems of linear equations.
• It consists of a sequence of operations performed on the corresponding matrix of coefficients. We can also use this method to compute any of the following:
1. The rank of the given matrix
2. The determinant of a square matrix
3. The inverse of an invertible matrix (a square matrix A for which a matrix A⁻¹ exists such that AA⁻¹ = A⁻¹A = I)
• To perform row reduction on a matrix, we complete a sequence of elementary row operations to transform the matrix until we get 0s (i.e., zeros) in the lower left-hand corner of the matrix as much as possible.
(That means the obtained matrix should be an upper triangular matrix.)
• There are three types of elementary row operations:
1. Swapping two rows, which can be expressed using the notation ↔, for example R2 ↔ R3
2. Multiplying a row by a nonzero number, for example R1 → kR1, where k is some nonzero number
3. Adding a multiple of one row to another row, for example R2 → R2 + 3R1
• The obtained matrix will be in row echelon form.
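The three elementary row operations can be combined into a short row-reduction routine. The sketch below uses a hypothetical helper name `gauss_eliminate` and adds partial pivoting (a standard numerical-stability refinement not mentioned on the slides):

```python
import numpy as np

def gauss_eliminate(A):
    """Reduce A to row echelon form using elementary row operations."""
    A = A.astype(float).copy()
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        if r >= rows:
            break
        # Partial pivoting: swap in the row with the largest entry in column c.
        pivot = r + np.argmax(np.abs(A[r:, c]))
        if np.isclose(A[pivot, c], 0.0):
            continue                            # no pivot in this column
        A[[r, pivot]] = A[[pivot, r]]           # row swap: R_r <-> R_pivot
        for i in range(r + 1, rows):
            A[i] -= (A[i, c] / A[r, c]) * A[r]  # R_i -> R_i - m * R_r
        r += 1
    return A

# Augmented matrix for the earlier system 2x + 3y = 11, 3x + 2y = 9.
U = gauss_eliminate(np.array([[2.0, 3.0, 11.0], [3.0, 2.0, 9.0]]))
print(U)  # upper triangular; back-substitution then gives x = 1, y = 3
```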
▪ Gauss-Jordan Method:
• The Gauss-Jordan method is similar to Gaussian elimination, except that the entries both above and below each pivot are targeted (zeroed out).

▪ After performing Gaussian elimination on a matrix, the result is in row echelon form.

▪ After the Gauss-Jordan method, the result is in reduced row echelon form.
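The contrast can be seen by applying SymPy's `rref` (a Gauss-Jordan style reduction) to the augmented matrix of the earlier elimination example; a minimal sketch:

```python
import sympy as sp

# Gauss-Jordan on 2x + 3y = 11, 3x + 2y = 9: rref zeroes entries
# both above and below each pivot, not just below.
aug = sp.Matrix([[2, 3, 11], [3, 2, 9]])
R, pivots = aug.rref()
print(R)  # [[1, 0, 1], [0, 1, 3]] -> x = 1, y = 3 read off directly
```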
▪ Example 1: Gauss Elimination
▪ Example 1: Gauss-Jordan Method
EIGENVALUES AND EIGENVECTORS

Now we know the eigenvalues, let us find their matching eigenvectors.
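In practice, both steps are done numerically; a minimal sketch with NumPy, using an illustrative 2 × 2 matrix (assumed here, not necessarily the one from the slides):

```python
import numpy as np

# A = [[-6, 3], [4, 5]] has characteristic polynomial (λ + 7)(λ - 6),
# so its eigenvalues are -7 and 6.
A = np.array([[-6.0, 3.0], [4.0, 5.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)

# Each column v of `eigenvectors` satisfies the defining equation A v = λ v.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```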
DIFFERENTIATION OF MATRICES AND QUADRATIC FORMS

Applications:
Matrix calculus is used for deriving optimal stochastic estimators, often involving the use of Lagrange multipliers. This includes the derivation of:
• The Kalman filter
• The Wiener filter
• The expectation-maximization algorithm for Gaussian mixture models
• Gradient descent
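As a small illustration of the kind of identity these derivations rely on, the gradient of the quadratic form f(x) = xᵀAx is (A + Aᵀ)x. The sketch below (with an illustrative matrix chosen here) verifies this against a finite-difference gradient:

```python
import numpy as np

# Gradient of the quadratic form f(x) = x^T A x is (A + A^T) x.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
x = np.array([1.0, -2.0])

analytic = (A + A.T) @ x

# Check against a central finite-difference gradient of f.
eps = 1e-6
f = lambda v: v @ A @ v
numeric = np.array([
    (f(x + eps * e) - f(x - eps * e)) / (2 * eps)
    for e in np.eye(2)
])
print(analytic, numeric)  # the two gradients agree
```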
