
Republic of the Philippines

NEGROS ORIENTAL STATE UNIVERSITY


COLLEGE OF ENGINEERING AND ARCHITECTURE
Main Campus II, Bajumpandan, Dumaguete City

COMPILATION OF ALGORITHMS
ENM 331 - NUMERICAL METHODS AND ANALYSIS

SUBMITTED BY:
UMBAC, NICETTE ANNE D.

SUBMITTED TO:
ENGR. IRISMAY JUMAWAN
TABLE OF CONTENTS
I. INTRODUCTION
Numerical Methods
Algorithm
Symbol for Different Operations
Examples of Algorithm
Types of Algorithm

II. SOLVING SYSTEM OF LINEAR EQUATIONS


A. Small Scale
Graphical Method
Cramer's Rule
B. Large Scale
Gauss Elimination Method
Gauss Elimination with row pivoting
LU Decomposition
Iteration Method
Jacobi Method
Gauss-Seidel Method

III. SOLVING SYSTEMS OF NON-LINEAR EQUATIONS


A. Open Method
Fixed Point Iteration
Secant Method
Newton Raphson Method
B. Bracketing Method
Bisection Method
False Position Method

IV. CURVE FITTING AND INTERPOLATION


A. Least Square Regression
Simple Linear Regression
Polynomial Regression

V. NUMERICAL INTEGRATION
A. Euler's Method
B. Trapezoidal Rule
C. Simpson's Rule
D. Runge-Kutta Methods
VI. REFERENCES
I. INTRODUCTION
Open Problem
In science and mathematics, an open problem or an open question is a known problem which can
be accurately stated, and which is assumed to have an objective and verifiable solution, but
which has not yet been solved (no solution for it is known).
Algorithm

- In mathematics and computer science, an algorithm is a finite sequence of well-defined


instructions, typically used to solve a class of specific problems or to perform a computation.
Algorithms are used as specifications for performing calculations, data processing, automated
reasoning, automated decision-making and other tasks.
- In basic terms, an algorithm is a set of well-defined steps or rules that you need to follow
to obtain a pre-determined result.

SYMBOL FOR DIFFERENT OPERATIONS


“+” Addition
“-“ Subtraction
“*” Multiplication
“/” Division
“^” Power

EXAMPLES OF ALGORITHM

Find the area of a circle.


Input: Radius (r)
Output: Area of the circle
Algorithm: Step 1: Read/input the radius r
Step 2: Area = Pi * r^2
Step 3: Print the area
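For illustration, the three steps above translate directly into a short Python sketch (the function name circle_area is only an illustrative choice):

import math

def circle_area(r):
    """Step 2: compute the area from the radius."""
    return math.pi * r ** 2

r = float(input("Enter the radius r: "))   # Step 1: read the input
area = circle_area(r)                      # Step 2: Area = pi * r^2
print("Area of the circle:", area)         # Step 3: print the area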

Other simple examples of algorithms are a recipe for baking a cake, the method we use in
solving long division, the process of doing laundry, and the way a search engine works.

TYPES OF ALGORITHM
- Sequence
- Branching (if-then)
- Loop - doing a sequence of steps repeatedly
NUMERICAL METHODS
A numerical method is an approach for solving mathematical or physical equations using
computers. This is done by converting differential equations defined in continuous space and
time into a large system of equations on a discretized domain.

There are a number of unique characteristics of numerical solution methods in engineering
analysis. The following are just a few obvious ones:
1) Numerical solutions are available only at selected (discrete) solution points, not at all
points covered by the functions, as is the case with analytical solution methods.
2) Numerical methods are essentially "trial-and-error" processes. Typically, users need to
estimate an initial solution with a selected increment of the variable over which the intended
solution will range. Unstable numerical solutions may result from an improper selection of
step sizes (the incremental steps), with solutions either in the form of "wild oscillation"
or becoming unbounded in the trend of values.
3) Most numerical solution methods result in errors in the solutions. There are two types of
error that are inherent in numerical solutions:
(a) Truncation errors - Because of the approximate nature of numerical solutions, they often
consist of lower-order terms and higher-order terms. The latter terms are often dropped in the
computations for the sake of computational efficiency, resulting in error in the solution.
(b) Round-off errors - Most digital computers handle numbers with either 7 or 14 decimal
digits in numerical solutions. In the case of a 32-bit computer with double precision (i.e.,
numbers 14 decimal digits long), anything after the 14th decimal digit is dropped. This may
not sound like a big deal, but if a huge number of operations are involved in the computation,
such errors can accumulate and result in significant error in the end results.
Both of these errors are accumulative in nature. Consequently, the error in a numerical
solution tends to grow as the number of computational steps increases.
II. SOLVING SYSTEM OF LINEAR EQUATIONS
A. Small Scale
Graphical Method

A system of equations is a set of two or more equations that are to be solved simultaneously.
A solution of a system of two equations in two variables is an ordered pair of numbers that
makes both equations true. The numbers in the ordered pair correspond to the variables in
alphabetical order.

When two lines are graphed on the same coordinate plane:

The graphs intersect at one point. The system is consistent and has one solution. Since neither
equation is a multiple of the other, they are independent.

The graphs are parallel. The system is inconsistent because there is no solution. Since the
equations are not equivalent, they are independent.

Equations have the same graph. The system is consistent and has an infinite number of
solutions. The equations are dependent since they are equivalent.
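As an illustration of the graphical method, the short Python sketch below (assuming numpy and matplotlib are available) plots the two lines of Example 3 further down; the solution of the system is read off at the point where the lines cross:

import numpy as np
import matplotlib.pyplot as plt

# The two equations of Example 3, each solved for y: 2x + 3y = 5 and 4x - y = 10
x = np.linspace(-6, 6, 200)
y1 = (5 - 2 * x) / 3      # from 2x + 3y = 5
y2 = 4 * x - 10           # from 4x - y = 10

plt.plot(x, y1, label="2x + 3y = 5")
plt.plot(x, y2, label="4x - y = 10")
plt.axhline(0, color="gray", linewidth=0.5)
plt.legend()
plt.title("Graphical method: the solution is the intersection of the two lines")
plt.show()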

EXAMPLE 1.  2x^2 + x - 1 = 0

x      f(x)
-2      5
-1      0
 0     -1
 1      2
 2      9
 3     20

[Plot of f(x) over -3 <= x <= 4; the roots are where the curve crosses the x-axis.]
EXAMPLE 2.  4x^3 + 2x^2 - x + 1 = 0

x      f(x)
-3     -86
-2     -21
-1       0
 0       1
 1       6
 2      39
 3     124
 4     285
 5     546

[Plot of f(x) over -4 <= x <= 6.]

EXAMPLE 3.  2x1 + 3x2 = 5
            4x1 - x2 = 10

x      f(x)    g(x)
-5     -30     -25
-4     -25     -22
-3     -20     -19
-2     -15     -16
-1     -10     -13
 0      -5     -10
 1       0      -7
 2       5      -4
 3      10      -1
 4      15       2
 5      20       5

[Plot of f(x) and g(x) over -6 <= x <= 6; the solution of the system is the intersection of the two lines.]
CRAMER'S RULE
Cramer's rule is used to find the solution for any number of variables (x, y, z, ...) given the same
number of equations.
In solving by Cramer's Rule:  a1x + b1y = c1
                              a2x + b2y = c2

Matrix: AX=B A= the coefficient matrix which is a square matrix


X= the column matrix with variables
B= the column matrix with the constants ( right side of
equations)

D  = det [a1 b1; a2 b2] = a1*b2 - b1*a2

Dx = det [c1 b1; c2 b2] = c1*b2 - b1*c2        x = Dx / D

Dy = det [a1 c1; a2 c2] = a1*c2 - c1*a2        y = Dy / D
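Before the worked examples, here is a minimal Python sketch of Cramer's rule for a 2-by-2 system; the function name cramer_2x2 is an illustrative choice, and the sample call reproduces Example 1 below:

def cramer_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    d  = a1 * b2 - b1 * a2          # D
    if d == 0:
        raise ValueError("D = 0: Cramer's rule gives no unique solution")
    dx = c1 * b2 - b1 * c2          # Dx (constants replace the x-column)
    dy = a1 * c2 - c1 * a2          # Dy (constants replace the y-column)
    return dx / d, dy / d

# Example 1 below: 2x + 6y = 10, 5x - 4y = 8
x, y = cramer_2x2(2, 6, 10, 5, -4, 8)
print(round(x, 4), round(y, 4))     # 2.3158 0.8947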

Limitations in using Cramer's Rule:


● If there are n variables and n equations, we have to compute (n + 1) determinants.
● This rule can give the solutions only when D ≠ 0.
● If D = 0, the system has either an infinite number of solutions or no solutions.
● We cannot find solutions by using this rule when the system has an infinite number of 
solutions.
EXAMPLE 1.  2x + 6y = 10
            5x - 4y = 8

Matrix form AX = B:
A = [2 6; 5 -4],  X = [x; y],  B = [10; 8]

D  = det [2  6;  5 -4] = -38
Dx = det [10 6;  8 -4] = -88        x = Dx/D = 2.3158
Dy = det [2 10;  5  8] = -34        y = Dy/D = 0.8947

Check by substituting the values of x and y into the equations:

2(2.3158) + 6(0.8947) = 10
10 = 10

5(2.3158) - 4(0.8947) = 8
8 = 8
EXAMPLE 2.  12x - 4y = 20
            5x + 3y = 15

Matrix form AX = B:
A = [12 -4; 5 3],  X = [x; y],  B = [20; 15]

D  = det [12 -4;  5  3] = 56
Dx = det [20 -4; 15  3] = 120       x = Dx/D = 2.1429
Dy = det [12 20;  5 15] = 80        y = Dy/D = 1.4286

Check by substituting the values of x and y into the equations:

12(2.1429) - 4(1.4286) = 20
20 = 20

5(2.1429) + 3(1.4286) = 15
15 = 15
EXAMPLE 3.  4x - 2y + z = 8
            2x + 6y - 3z = 10
            2x + 4y + 2z = 6

Matrix form AX = B:
A = [4 -2 1; 2 6 -3; 2 4 2],  X = [x; y; z],  B = [8; 10; 6]

D  = det [4 -2 1;  2  6 -3;  2 4 2] = 112

Dx = det [8 -2 1; 10  6 -3;  6 4 2] = 272       x = Dx/D =  2.4286
Dy = det [4  8 1;  2 10 -3;  2 6 2] =  64       y = Dy/D =  0.5714
Dz = det [4 -2 8;  2  6 10;  2 4 6] = -64       z = Dz/D = -0.5714

Check by substituting the values of x, y, and z into the equations:
4(2.4286) - 2(0.5714) + (-0.5714) = 8
8 = 8
2(2.4286) + 6(0.5714) - 3(-0.5714) = 10
10 = 10
2(2.4286) + 4(0.5714) + 2(-0.5714) = 6
6 = 6

B. Large Scale
GAUSSIAN ELIMINATION

It is a standard numerical algorithm for solving a system of linear equations.

● To perform Gaussian elimination, we form an Augmented Matrix by combining the 
matrix A with the column vector b:

AUGMENTED MATRIX: AX = B
Example: A B
2 4 2 10
1 5 3 6
4 6 8 12

● Row reduction is then performed on this matrix. Allowed operations are:
(1) multiply any row by a constant
(2) add multiple of one row to another row
(3) interchange the order of any rows
The goal is to convert the original matrix into an upper-triangular matrix.
A B
2 4 2 10
0 5 3 6
0 0 8 12
● The resulting equations can be determined from the matrix, and by back substituting 
we can get the values of x.

● To check, substitute the values to the problem equation. If the result is equal then 
the answer is correct.
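A minimal Python sketch of the procedure described above (forward elimination to an upper-triangular augmented matrix followed by back substitution) is given below; it is a naive version without pivoting, and the sample call corresponds to Example 1 that follows:

def gauss_eliminate(A, b):
    """Naive Gaussian elimination (no pivoting) with back substitution.
    A is an n x n list of lists, b a list of length n."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    # forward elimination to upper-triangular form
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Example 1 below: 2x1 + 6x2 - 4x3 = 8, x1 + 4x2 + 2x3 = 6, 4x1 + 2x2 - x3 = 12
print(gauss_eliminate([[2, 6, -4], [1, 4, 2], [4, 2, -1]], [8, 6, 12]))
# approximately [2.766, 0.638, 0.340]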
EXAMPLE 1.  2x1 + 6x2 - 4x3 = 8
            x1 + 4x2 + 2x3 = 6
            4x1 + 2x2 - x3 = 12

AUGMENTED MATRIX [A | B]:
2   6  -4 |  8
1   4   2 |  6
4   2  -1 | 12

Apply 2R2 - R1 and R3 - 2R1:
2   6  -4 |  8
0   2   8 |  4
0 -10   7 | -4

Apply R3 + 5R2:
2   6  -4 |  8
0   2   8 |  4
0   0  47 | 16

Back substitute to find the values of x:
47x3 = 16            ->  x3 = 0.3404
2x2 + 8x3 = 4        ->  x2 = 0.6383
2x1 + 6x2 - 4x3 = 8  ->  x1 = 2.7660

To check, substitute the values into the equations:

2(2.7660) + 6(0.6383) - 4(0.3404) = 8
8 = 8
(2.7660) + 4(0.6383) + 2(0.3404) = 6
6 = 6
4(2.7660) + 2(0.6383) - (0.3404) = 12
12 = 12

EXAMPLE 2.  x1 + 3x2 - 2x3 = 10
            x1 - 4x2 + 2x3 = 4
            6x1 + 4x2 - 2x3 = 12

AUGMENTED MATRIX [A | B]:
1   3  -2 | 10
1  -4   2 |  4
6   4  -2 | 12

Apply R2 - R1 and R3 - 6R1:
1   3  -2 |  10
0  -7   4 |  -6
0 -14  10 | -48

Apply R3 - 2R2:
1   3  -2 |  10
0  -7   4 |  -6
0   0   2 | -36

Back substitute to find the values of x:
2x3 = -36             ->  x3 = -18.0000
-7x2 + 4x3 = -6       ->  x2 = -9.4286
x1 + 3x2 - 2x3 = 10   ->  x1 = 2.2857

To check, substitute the values into the equations:

(2.2857) + 3(-9.4286) - 2(-18.0000) = 10
10 = 10
(2.2857) - 4(-9.4286) + 2(-18.0000) = 4
4 = 4
6(2.2857) + 4(-9.4286) - 2(-18.0000) = 12
12 = 12

EXAMPLE 3.  2x1 + 3x2 - x3 = 6
            2x1 - x2 + 4x3 = 12
            6x1 + 4x2 - 2x3 = 16

AUGMENTED MATRIX [A | B]:
2   3  -1 |  6
2  -1   4 | 12
6   4  -2 | 16

Apply R2 - R1 and R3 - 3R1:
2   3  -1 |  6
0  -4   5 |  6
0  -5   1 | -2

Apply 4R3 - 5R2:
2   3   -1 |   6
0  -4    5 |   6
0   0  -21 | -38

Back substitute to find the values of x:
-21x3 = -38          ->  x3 = 1.8095
-4x2 + 5x3 = 6       ->  x2 = 0.7619
2x1 + 3x2 - x3 = 6   ->  x1 = 2.7619

To check, substitute the values into the equations:

2(2.7619) + 3(0.7619) - (1.8095) = 6
6 = 6
2(2.7619) - (0.7619) + 4(1.8095) = 12
12 = 12
6(2.7619) + 4(0.7619) - 2(1.8095) = 16
16 = 16

GAUSSIAN ELIMINATION METHOD WITH ROW PIVOTING


- Gauss elimination with partial pivoting is a direct method for solving a system of linear
equations.
- This method still uses the steps done in Gauss elimination, but it adds partial pivoting,
where a pivot element is chosen and rows are interchanged before the matrix is reduced to an upper
triangular matrix.

NOTE: The pivot element is the entry with the largest (absolute) value in the first column; interchange the pivot row
with the first row.
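A minimal Python sketch of elimination with partial (row) pivoting is shown below; it assumes the common convention of picking the pivot as the entry of largest absolute value in the current column, which may differ slightly from the tabular procedure in the examples:

def gauss_partial_pivot(A, b):
    """Gaussian elimination with partial (row) pivoting and back substitution."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for k in range(n - 1):
        # pivot: bring the row with the largest |entry| in column k to row k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Example 1 below gives the same solution as plain elimination:
print(gauss_partial_pivot([[2, 6, -4], [1, 4, 2], [4, 2, -1]], [8, 6, 12]))
# approximately [2.766, 0.638, 0.340]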

For the example, we will use the same problem in the Gaussian Elimination;
EXAMPLE 1.  2x1 + 6x2 - 4x3 = 8
            x1 + 4x2 + 2x3 = 6
            4x1 + 2x2 - x3 = 12

AUGMENTED MATRIX [A | B]:
2   6  -4 |  8
1   4   2 |  6         M12 = 1/2 = 0.5
4   2  -1 | 12         M13 = 4/2 = 2

2   6  -4 |  8
0   1   4 |  2
0 -10   7 | -4         M23 = -10/1 = -10

2   6  -4 |  8
0   1   4 |  2
0   0  47 | 16

Back substitute to find the values of x:
47x3 = 16            ->  x3 = 0.3404
x2 + 4x3 = 2         ->  x2 = 0.6383
2x1 + 6x2 - 4x3 = 8  ->  x1 = 2.7660

To check, substitute the values into the equations:
2(2.7660) + 6(0.6383) - 4(0.3404) = 8
8 = 8
(2.7660) + 4(0.6383) + 2(0.3404) = 6
6 = 6
4(2.7660) + 2(0.6383) - (0.3404) = 12
12 = 12

EXAMPLE 2.  x1 + 3x2 - 2x3 = 10
            x1 - 4x2 + 2x3 = 4
            6x1 + 4x2 - 2x3 = 12

AUGMENTED MATRIX [A | B]:
1   3  -2 | 10
1  -4   2 |  4         M12 = 1/1 = 1
6   4  -2 | 12         M13 = 6/1 = 6

1   3  -2 |  10
0  -7   4 |  -6
0 -14  10 | -48        M23 = -14/-7 = 2

1   3  -2 |  10
0  -7   4 |  -6
0   0   2 | -36

Back substitute to find the values of x:
2x3 = -36            ->  x3 = -18.0000
-7x2 + 4x3 = -6      ->  x2 = -9.4286
x1 + 3x2 - 2x3 = 10  ->  x1 = 2.2857

To check, substitute the values into the equations:
(2.2857) + 3(-9.4286) - 2(-18.0000) = 10
10 = 10
(2.2857) - 4(-9.4286) + 2(-18.0000) = 4
4 = 4
6(2.2857) + 4(-9.4286) - 2(-18.0000) = 12
12 = 12
EXAMPLE 3.  2x1 + 3x2 - x3 = 6
            2x1 - x2 + 4x3 = 12
            6x1 + 4x2 - 2x3 = 16

AUGMENTED MATRIX [A | B]:
2   3  -1 |  6
2  -1   4 | 12         M12 = 2/2 = 1
6   4  -2 | 16         M13 = 6/2 = 3

2   3  -1 |  6
0  -4   5 |  6
0  -5   1 | -2         M23 = -5/-4 = 1.25

2   3    -1    |    6
0  -4     5    |    6
0   0  -5.25   | -9.5

Back substitute to find the values of x:
-5.25x3 = -9.5       ->  x3 = 1.8095
-4x2 + 5x3 = 6       ->  x2 = 0.7619
2x1 + 3x2 - x3 = 6   ->  x1 = 2.7619

To check, substitute the values into the equations:
2(2.7619) + 3(0.7619) - (1.8095) = 6
6 = 6
2(2.7619) - (0.7619) + 4(1.8095) = 12
12 = 12
6(2.7619) + 4(0.7619) - 2(1.8095) = 16
16 = 16
LU DECOMPOSITION
LU decomposition of a matrix is the factorization of a given square matrix into two triangular
matrices, one upper triangular matrix and one lower triangular matrix, such that the product of
these two matrices gives the original matrix. It was introduced by Alan Turing in 1948, who also
created the Turing machine.

This method of factorizing a matrix as a product of two triangular matrices has various
applications such as a solution of a system of equations, which itself is an integral part of many
applications such as finding current in a circuit and solution of discrete dynamical system
problems; finding the inverse of a matrix and finding the determinant of the matrix.

Steps in solving LU Decomposition:


1. Given a set of linear equations, first convert them into matrix form A X = C where A is the
coefficient matrix, X is the variable matrix and C is the matrix of numbers on the right-hand side
of the equations.
2. Now, reduce the coefficient matrix A, i.e., the matrix obtained from the coefficients of
variables in all the given equations such that for ‘n’ variables we have an nXn matrix, to row 
echelon form using Gauss Elimination Method. The matrix so obtained is U.
3. To find L, we have two methods. The first one is to assume the remaining elements as some
artificial variables, make equations using A = L U and solve them to find those artificial
variables.
4. The other method is that the remaining elements are the multiplier coefficients because of
which the respective positions became zero in the U matrix. (This method is a little tricky to
understand by words but would get clear in the example below)
5. Now, we have A (the nXn coefficient matrix), L (the nXn lower triangular matrix), U (the
nXn upper triangular matrix), X (the nX1 matrix of variables) and C (the nX1 matrix of numbers
on the right-hand side of the equations).
6. The given system of equations is A X = C. We substitute A = L U. Thus, we have L U X = C.
7. We put Z = U X, where Z is a matrix of artificial variables, and solve L Z = C first; then
solve U X = Z to find X, i.e., the values of the variables, as required.
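The procedure in steps 1-7 can be sketched in Python as follows (a Doolittle-style factorization in which L has a unit diagonal; the sample call corresponds to Example 1 below):

def lu_solve(A, b):
    """Doolittle LU factorization (L has unit diagonal) followed by
    forward substitution (L z = b) and back substitution (U x = z)."""
    n = len(b)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m                       # store the multiplier in L
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    # forward substitution: L z = b
    z = [0.0] * n
    for i in range(n):
        z[i] = b[i] - sum(L[i][j] * z[j] for j in range(i))
    # back substitution: U x = z
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (z[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# Example 1 below: 6x1 + 2x2 - 12x3 = 6, 2x1 - 4x2 + 3x3 = 8, 4x1 + x2 - 2x3 = 10
print(lu_solve([[6, 2, -12], [2, -4, 3], [4, 1, -2]], [6, 8, 10]))
# approximately [2.948, 0.234, 1.013]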
EXAMPLE 1.  6x1 + 2x2 - 12x3 = 6
            2x1 - 4x2 + 3x3 = 8
            4x1 + x2 - 2x3 = 10

AUGMENTED MATRIX [A | B]:
6   2  -12 |  6
2  -4    3 |  8        M12 = 2/6 = 0.3333
4   1   -2 | 10        M13 = 4/6 = 0.6667

6   2       -12 |  6
0  -4.6667    7 |  6
0  -0.3333    6 |  6   M23 = -0.3333/-4.6667 = 0.0714

6   2       -12  |  6
0  -4.6667    7  |  6
0   0        5.5 |  5.5714

Upper triangular matrix U:          Lower triangular matrix L:
6   2       -12                     1        0        0
0  -4.6667    7                     0.3333   1        0
0   0        5.5                    0.6667   0.0714   1

Solve L y = B with B = (6, 8, 10):
y1 = 6
y2 = 8 - 0.3333(6) = 6
y3 = 10 - 0.6667(6) - 0.0714(6) = 5.5714

Solve U x = y:
6   2       -12     x1     6
0  -4.6667    7   X x2  =  6
0   0        5.5    x3     5.5714

x1 = 2.9481
x2 = 0.2338
x3 = 1.0130

To check, substitute the values x1, x2, and x3:

6(2.9481) + 2(0.2338) - 12(1.0130) = 6
6 = 6
2(2.9481) - 4(0.2338) + 3(1.0130) = 8
8 = 8
4(2.9481) + (0.2338) - 2(1.0130) = 10
10 = 10

EXAMPLE 2.  5x1 + 2x2 - x3 = 12
            2x1 + 4x2 + 3x3 = 8
            3x1 + 6x2 - 2x3 = 4

AUGMENTED MATRIX [A | B]:
5   2  -1 | 12
2   4   3 |  8         M12 = 2/5 = 0.4000
3   6  -2 |  4         M13 = 3/5 = 0.6000

5   2    -1   | 12
0   3.2   3.4 |  3.2
0   4.8  -1.4 | -3.2   M23 = 4.8/3.2 = 1.5000

5   2    -1   | 12
0   3.2   3.4 |  3.2
0   0    -6.5 | -8.0

Upper triangular matrix U:          Lower triangular matrix L:
5   2    -1                         1        0        0
0   3.2   3.4                       0.4000   1        0
0   0    -6.5                       0.6000   1.5000   1

Solve L y = B with B = (12, 8, 4):
y1 = 12
y2 = 8 - 0.4(12) = 3.2
y3 = 4 - 0.6(12) - 1.5(3.2) = -8.0

Solve U x = y:
5   2    -1     x1     12
0   3.2   3.4 X x2  =  3.2
0   0    -6.5   x3    -8

x1 = 2.7692
x2 = -0.3077
x3 = 1.2308

To check, substitute the values x1, x2, and x3:

5(2.7692) + 2(-0.3077) - (1.2308) = 12
12 = 12
2(2.7692) + 4(-0.3077) + 3(1.2308) = 8
8 = 8
3(2.7692) + 6(-0.3077) - 2(1.2308) = 4
4 = 4
EXAMPLE 3.  12x1 + 2x2 - 3x3 = 8
            5x1 + x2 + 4x3 = 6
            3x1 + 4x2 - 6x3 = 10

AUGMENTED MATRIX [A | B]:
12   2  -3 |  8
 5   1   4 |  6        M12 = 5/12 = 0.4167
 3   4  -6 | 10        M13 = 3/12 = 0.2500

12   2       -3    |  8
 0   0.1667   5.25 |  2.6667
 0   3.5     -5.25 |  8        M23 = 3.5/0.1667 = 21.0000

12   2        -3     |   8
 0   0.1667    5.25  |   2.6667
 0   0      -115.5   | -48.0

Upper triangular matrix U:          Lower triangular matrix L:
12   2        -3                    1        0         0
 0   0.1667    5.25                 0.4167   1         0
 0   0      -115.5                  0.2500   21.0000   1

Solve L y = B with B = (8, 6, 10):
y1 = 8
y2 = 6 - 0.4167(8) = 2.6667
y3 = 10 - 0.25(8) - 21(2.6667) = -48.0

Solve U x = y:
12   2        -3       x1     8
 0   0.1667    5.25  X x2  =  2.6667
 0   0      -115.5     x3   -48

x1 = 0.2857
x2 = 2.9091
x3 = 0.4156

To check, substitute the values x1, x2, and x3:

12(0.2857) + 2(2.9091) - 3(0.4156) = 8
8 = 8
5(0.2857) + (2.9091) + 4(0.4156) = 6
6 = 6
3(0.2857) + 4(2.9091) - 6(0.4156) = 10
10 = 10

ITERATION METHOD
JACOBI METHOD

The Jacobi method is a method of solving a matrix equation on a matrix that has no zeros along
its main diagonal (Bronshtein and Semendyayev 1997, p. 892). Each diagonal element is solved
for, and an approximate value plugged in. The process is then iterated until it converges. This
algorithm is a stripped-down version of the Jacobi transformation method of matrix
diagonalization.
The Jacobi method is easily derived by examining each of the n equations in the linear system of
equations Ax=b in isolation.
In this method, the order in which the equations are examined is irrelevant, since the Jacobi
method treats them independently.

This method makes 2 assumptions:

Assumption 1: The given system of equations has a unique solution.


Assumption 2: The coefficient matrix A has no zeros on its main diagonal, namely, a11, a22,…, 
ann, are non-zeros

If any of the diagonal entries a11, a22,…, ann are zero, then we should interchange the rows or 
columns to obtain a coefficient matrix that has nonzero entries on the main diagonal. Follow the
steps given below to get the solution of a given system of equations.

NOTE:
The Jacobi method uses only the values obtained in the previous iteration step.
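A minimal Python sketch of the Jacobi iteration described above is given below; the iteration count and the starting vector of zeros are illustrative assumptions, and the sample system is Example 1 that follows:

def jacobi(A, b, iterations=25, x0=None):
    """Jacobi iteration: every component of the new vector is computed
    from the values of the previous iteration only."""
    n = len(b)
    x = list(x0) if x0 else [0.0] * n
    for _ in range(iterations):
        x_new = []
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new.append((b[i] - s) / A[i][i])
        x = x_new
    return x

# Example 1 below (a diagonally dominant system):
A = [[12, 4, -3, 1], [2, 6, -1, 2], [1, -2, 8, 4], [2, -1, 3, 7]]
b = [6, -7, 4, 10]
print([round(v, 3) for v in jacobi(A, b)])   # about [0.884, -1.976, -0.702, 1.194]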

EXAMPLE 1. 12x1+4x2-3x3+x4 = 6 x1 = (6-4x2+3x3-x4)/12


2x1+6x2-x3+2x4 = -7 x2 = (-7-2x1+x3-2x4)/6
x1-2x2+8x3+4x4 = 4 x3 = (4-x1+2x2-4x4)/8
2x1-x2+3x3+7x4 = 10 x4 = (10-2x1+x2-3x3)/7

Augmented Matrix:
A X B
12>4+3+1 12 4 -3 1 x1 6
6>2+1+2 2 6 -1 2 x2 -7
8>1+2+4 1 -2 8 4 x3 4
7>2+1+3 2 -1 3 7 x4 10

k X1 X2 X3 X4
0 0 0 0 0
1 0.500 -1.167 0.500 1.429
2 0.895 -1.726 -0.568 0.905
3 0.858 -1.861 -0.496 1.170
4 0.899 -1.925 -0.658 1.130
5 0.883 -1.953 -0.659 1.178
6 0.888 -1.964 -0.688 1.180
7 0.884 -1.970 -0.692 1.189
8 0.885 -1.973 -0.698 1.191
9 0.884 -1.975 -0.699 1.193
10 0.884 -1.976 -0.701 1.194
11 0.884 -1.976 -0.701 1.194
12 0.884 -1.976 -0.702 1.194
13 0.884 -1.976 -0.702 1.194
14 0.884 -1.976 -0.702 1.194
To check, substitute the values x1, x2, x3, x4 to the equation.
12x1+4x2-3x3+x4 = 6 = 6
2x1+6x2-x3+2x4 = -7 = -7
x1-2x2+8x3+4x4 = 4 = 4
2x1-x2+3x3+7x4 = 10 = 10

EXAMPLE 2. 12x1+4x2-3x3+x4 = 6 x1 = (6-4x2+3x3-x4)/12


2x1+6x2-x3+2x4 = -7 x2 = (-7-2x1+x3-2x4)/6
x1-2x2+8x3+4x4 = 4 x3 = (12-x1+2x2-4x4)/8
2x1-x2+3x3+7x4 = 10 x4 = (10-2x1+x2-3x3)/7

Augmented Matrix:
A X B
12>4+3+1 12 4 -3 1 x1 6
6>2+1+2 2 6 -1 2 x2 -7
8>1+2+4 1 -2 8 4 x3 4
7>2+1+3 2 -1 3 7 x4 10

k X1 X2 X3 X4
0 0 0 0 0
1 0.500 -1.167 0.500 1.429
2 0.895 -1.726 -0.104 0.905
3 0.974 -1.784 0.009 0.971
4 1.016 -1.814 -0.072 0.892
5 1.012 -1.815 -0.044 0.910
6 1.018 -1.815 -0.058 0.899
7 1.016 -1.815 -0.052 0.903
8 1.017 -1.815 -0.055 0.901
9 1.016 -1.815 -0.053 0.902
10 1.017 -1.815 -0.054 0.902
11 1.016 -1.815 -0.054 0.902
12 1.016 -1.815 -0.054 0.902
To check, substitute the values of x1, x2, x3, and x4 to the equation:
12x1+4x2-3x3+x4 = 6 = 6
2x1+6x2-x3+2x4 = -7 = -7
x1-2x2+8x3+4x4 = 4 = 4
2x1-x2+3x3+7x4 = 10 = 10

EXAMPLE 2. 11x1+3x2-2x3+x4 = 12 x1 = (12-3x2+2x3-x4)/11


x1+7x2-2x3+3x4 = 8 x2 = (8-x1+2x3-3x4)/7
5x1-2x2+10x3+x4 = 9 x3 = (9-5x1+2x2-x4)/10
3x1-2x2+x3+8x4 = 10 x4 = (10-3x1+2x2-x3)/8

Augmented Matrix:
A X B
11>3+2+1 11 3 -2 1 x1 12
7>1+2_+3 1 7 -2 3 x2 8
10>5+2+1 5 -2 10 1 x3 9
8>3+2+1 3 -2 1 8 x4 10

k X1 X2 X3 X4
0 0 0 0 0
1 1.091 1.143 0.900 1.250
2 0.829 0.708 0.458 1.014
3 0.889 0.721 0.526 1.059
4 0.894 0.712 0.494 1.031
5 0.893 0.714 0.492 1.031
6 0.892 0.714 0.493 1.032
7 0.892 0.714 0.494 1.032
8 0.892 0.714 0.494 1.032
To check, substitute the values of x1, x2, x3, and x4 to the equation:
11x1+3x2-2x3+x4 = 12 = 12
x1+7x2-2x3+3x4 = 8 = 8
5x1-2x2+10x3+x4 = 9 = 9
3x1-2x2+x3+8x4 = 10 = 10

EXAMPLE 3. 8x1+4x2-x3+2x4 = 11 x1 = (11-4x2+x3-2x4)/8


2x1+12x2-5x3+x4 = 6 x2 = (6-2x1+5x3-x4)/12
x1-2x2+6x3+2x4 = 13 x3 = (13-x1+2x2-2x4)/6
4x1-2x2+5x3+14x4 = 8 x4 = (8-4x1+2x2-5x3)/14

Augmented Matrix:
A X B
8>4+1+2 8 4 -1 2 x1 11
12>2+5+1 2 12 -5 1 x2 6
6>1+2+2 1 -2 6 2 x3 13
14>4+2+5 4 -2 5 14 x4 8

k X1 X2 X3 X4
0 0 0 0 0
1 1.375 0.500 2.167 0.571
2 1.253 1.126 1.914 -0.524
3 1.182 1.132 2.508 -0.309
4 1.200 1.374 2.450 -0.500
5 1.119 1.363 2.591 -0.450
6 1.130 1.431 2.584 -0.479
7 1.103 1.428 2.615 -0.470
8 1.105 1.445 2.616 -0.473
9 1.098 1.445 2.622 -0.472
10 1.098 1.449 2.623 -0.472
11 1.096 1.449 2.624 -0.472
12 1.096 1.450 2.624 -0.472
13 1.096 1.450 2.625 -0.472
14 1.096 1.450 2.625 -0.472
To check, substitute the values of x1, x2, x3, and x4 to the equation:
8x1+4x2-x3+2x4 = 11 = 11
2x1+12x2-5x3+x4 = 6 = 6
x1-2x2+6x3+2x4 = 13 = 13
4x1-2x2+5x3+14x4 = 8 = 8
GAUSS-SEIDEL METHOD

It is an iterative technique for solving a square system of n linear equations Ax = b with
unknown x, one equation at a time, in sequence. This method is applicable to strictly
diagonally dominant or symmetric positive definite matrices A.

The Gauss-Seidel method is an improved form of the Jacobi method, also known as the successive
displacement method. This method is named after Carl Friedrich Gauss (Apr. 1777-Feb. 1855)
and Philipp Ludwig von Seidel (Oct. 1821-Aug. 1896). Again, we assume that the starting
values of the unknowns are zero. The difference between the Gauss-Seidel and Jacobi methods is that
the Jacobi method uses the values obtained from the previous step, while the Gauss-Seidel
method always applies the latest updated values during the iterative procedure. The reason the
Gauss-Seidel method is commonly known as the successive displacement method is that the second
unknown is determined from the first unknown in the current iteration, the third unknown is
determined from the first and second unknowns, and so on.

NOTE: The Gauss-Seidel method always applies the latest updated values during the iterative procedure.
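A minimal Python sketch of the Gauss-Seidel iteration is given below; note that, unlike the Jacobi sketch earlier, each component is overwritten immediately so the newest values are used within the same sweep (the iteration count is an illustrative assumption):

def gauss_seidel(A, b, iterations=15, x0=None):
    """Gauss-Seidel iteration: each component is updated in place, so the
    newest available values are used immediately within the same sweep."""
    n = len(b)
    x = list(x0) if x0 else [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Example 1 below, the same system as in the Jacobi example:
A = [[12, 4, -3, 1], [2, 6, -1, 2], [1, -2, 8, 4], [2, -1, 3, 7]]
b = [6, -7, 4, 10]
print([round(v, 3) for v in gauss_seidel(A, b)])  # about [0.884, -1.976, -0.702, 1.194]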

We will use the same examples in the Jacobi Method to compare its convergence speed.

EXAMPLE 1. 12x1+4x2-3x3+x4 = 6 x1 = (6-4x2+3x3-x4)/12


2x1+6x2-x3+2x4 = -7 x2 = (-7-2x1+x3-2x4)/6
x1-2x2+8x3+4x4 = 4 x3 = (4-x1+2x2-4x4)/8
2x1-x2+3x3+7x4 = 10 x4 = (10-2x1+x2-3x3)/7

Augmented Matrix:
A X B
12>4+3+1 12 4 -3 1 x1 6
6>2+1+2 2 6 -1 2 x2 -7
8>1+2+4 1 -2 8 4 x3 4
7>2+1+3 2 -1 3 7 x4 10
k X1 X2 X3 X4
0 0 0 0 0
1 0.500 -1.333 0.104 1.051
2 0.883 -1.794 -0.584 1.170
3 0.854 -1.939 -0.677 1.197
4 0.877 -1.971 -0.701 1.197
5 0.882 -1.976 -0.703 1.195
6 0.884 -1.977 -0.702 1.195
7 0.884 -1.977 -0.702 1.195
8 0.884 -1.976 -0.702 1.195
9 0.884 -1.976 -0.702 1.194
To check, substitute the values x1, x2, x3, x4 to the equation.
12x1+4x2-3x3+x4 = 6 = 6
2x1+6x2-x3+2x4 = -7 = -7
x1-2x2+8x3+4x4 = 4 = 4
2x1-x2+3x3+7x4 = 10 = 10

EXAMPLE 2. 11x1+3x2-2x3+x4 = 12 x1 = (12-3x2+2x3-x4)/11


x1+7x2-2x3+3x4 = 8 x2 = (8-x1+2x3-3x4)/7
5x1-2x2+10x3+x4 = 9 x3 = (9-5x1+2x2-x4)/10
3x1-2x2+x3+8x4 = 10 x4 = (10-3x1+2x2-x3)/8

Augmented Matrix:
A X B
11>3+2+1 11 3 -2 1 x1 12
7>1+2_+3 1 7 -2 3 x2 8
10>5+2+1 5 -2 10 1 x3 9
8>3+2+1 3 -2 1 8 x4 10

k X1 X2 X3 X4
0 0 0 0 0
1 1.091 0.987 0.552 1.019
2 0.829 0.745 0.532 1.059
3 0.888 0.714 0.493 1.034
4 0.892 0.713 0.493 1.032
5 0.892 0.714 0.493 1.032
6 0.892 0.714 0.494 1.032
7 0.892 0.714 0.494 1.032
To check, substitute the values of x1, x2, x3, and x4 to the equation:
11x1+3x2-2x3+x4 = 12 = 12
x1+7x2-2x3+3x4 = 8 = 8
5x1-2x2+10x3+x4 = 9 = 9
3x1-2x2+x3+8x4 = 10 = 10

EXAMPLE 3. 8x1+4x2-x3+2x4 = 11 x1 = (11-4x2+x3-2x4)/8


2x1+12x2-5x3+x4 = 6 x2 = (6-2x1+5x3-x4)/12
x1-2x2+6x3+2x4 = 13 x3 = (13-x1+2x2-2x4)/6
4x1-2x2+5x3+14x4 = 8 x4 = (8-4x1+2x2-5x3)/14
Augmented Matrix:
A X B
8>4+1+2 8 4 -1 2 x1 11
12>2+5+1 2 12 -5 1 x2 6
6>1+2+2 1 -2 6 2 x3 13
14>4+2+5 4 -2 5 14 x4 8

k X1 X2 X3 X4
0 0 0 0 0
1 1.375 0.271 2.028 -0.507
2 1.620 1.117 2.438 -0.603
3 1.272 1.354 2.607 -0.530
4 1.156 1.438 2.630 -0.493
5 1.108 1.452 2.630 -0.477
6 1.097 1.453 2.627 -0.473
7 1.095 1.452 2.626 -0.472
8 1.095 1.451 2.625 -0.472
9 1.096 1.450 2.625 -0.472
10 1.096 1.450 2.625 -0.472
To check, substitute the values of x1, x2, x3, and x4 to the equation:
8x1+4x2-x3+2x4 = 11 = 11
2x1+12x2-5x3+x4 = 6 = 6
x1-2x2+6x3+2x4 = 13 = 13
4x1-2x2+5x3+14x4 = 8 = 8

As we can see from the examples above, the Gauss-Seidel method converges faster
than the Jacobi method.

III. SOLVING SYSTEMS OF NONLINEAR EQUATIONS


A. Open Method
FIXED POINT ITERATION
Fixed point : A point, say, s is called a fixed point if it satisfies the equation x = g(x).
Fixed point Iteration : The transcendental equation f(x) = 0 can be converted algebraically into
the form x = g(x) and then using the iterative scheme with the recursive relation
xi+1= g(xi), i = 0, 1, 2, . . .,

with some initial guess x0 is called the fixed point iterative scheme.

Algorithm - Fixed Point Iteration Scheme


Given an equation f(x) = 0
Convert f(x) = 0 into the form x = g(x)
Let the initial guess be x0
Do
xi+1= g(xi)
while (none of the convergence criterion C1 or C2 is met)
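The scheme above can be sketched in Python as follows (the tolerance and iteration limit are illustrative assumptions; the sample call reproduces Example 1 below):

def fixed_point(g, x0, tol=1e-6, max_iter=100):
    """Fixed-point iteration x_{i+1} = g(x_i), stopping when the relative
    change |x_new - x_old| / |x_new| falls below tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if x_new != 0 and abs((x_new - x) / x_new) < tol:
            return x_new
        x = x_new
    return x

# Example 1 below: 2x^2 - x - 2 = 0 rearranged as x = 2 / (2x - 1)
print(round(fixed_point(lambda x: 2 / (2 * x - 1), 0.0), 4))   # about -0.7808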

EXAMPLE 1.  2x^2 - x - 2 = 0
            2x^2 - x = 2
            x(2x - 1) = 2
Formula:    x = 2/(2x - 1)

Relative error:  e = (x_new - x_old) / x_new * 100%

I X g(x) e
1 0.0000 -2.0000
2 -2.0000 -0.4000 100%
3 -0.4000 -1.1111 -400%
4 -1.1111 -0.6207 64%
5 -0.6207 -0.8923 -79%
6 -0.8923 -0.7182 30%
7 -0.7182 -0.8209 -24%
8 -0.8209 -0.7571 13%
9 -0.7571 -0.7955 -8%
10 -0.7955 -0.7719 5%
11 -0.7719 -0.7862 -3%
12 -0.7862 -0.7775 2%
13 -0.7775 -0.7828 -1%
14 -0.7828 -0.7795 1%
15 -0.7795 -0.7815 0%
16 -0.7815 -0.7803 0%
17 -0.7803 -0.7811 0%
18 -0.7811 -0.7806 0%
19 -0.7806 -0.7809 0%
20 -0.7809 -0.7807 0%
21 -0.7807 -0.7808 0%
EXAMPLE 2.  x^2 + 3x + 1 = 0
            x(x + 3) = -1
Formula:    x = -1/(x + 3)

Relative error:  e = (x_new - x_old) / x_new * 100%

k x g(x) e
1 4.0000 -0.1429
2 -0.1429 -0.3500 2900%
3 -0.3500 -0.3774 59%
4 -0.3774 -0.3813 7%
5 -0.3813 -0.3819 1%
6 -0.3819 -0.3820 0%
7 -0.3820 -0.3820 0%

NEWTON RAPHSON METHOD

"The Newton - Raphson Method" uses one initial approximation to solve a given equation y =
f(x).In this method the function f(x) , is approximated by a tangent line, whose equation is found
from the value of f(x) and its first derivative at the initial approximation.
The tangent line then intersects the X - Axis at second point. This second point is again used as
next approximation to find the third point.

The Newton-Raphson method (also known as Newton's method) is a way to quickly find a good
approximation for the root of a real-valued function f(x) = 0f(x)=0. It uses the idea that a
continuous and differentiable function can be approximated by a straight line tangent to it.

Formula: 𝑓(𝑥𝑜𝑙𝑑 )
𝑥𝑛𝑒𝑤 = 𝑥𝑜𝑙𝑑 −
𝑓′(𝑥𝑜𝑙𝑑 )

𝑥𝑛𝑒𝑤 − 𝑥𝑜𝑙𝑑
𝑒= × 100%
𝑥𝑛𝑒𝑤
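A minimal Python sketch of the Newton-Raphson iteration using the formula above (the tolerance and iteration limit are illustrative assumptions; the sample call reproduces Example 1 below):

def newton_raphson(f, df, x0, tol=1e-6, max_iter=50):
    """Newton-Raphson: x_new = x_old - f(x_old) / f'(x_old)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if x_new != 0 and abs((x_new - x) / x_new) < tol:
            return x_new
        x = x_new
    return x

# Example 1 below: f(x) = 2x^2 - x - 2, f'(x) = 4x - 1, starting at x0 = 0
root = newton_raphson(lambda x: 2 * x**2 - x - 2, lambda x: 4 * x - 1, 0.0)
print(round(root, 4))   # about -0.7808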
EXAMPLE 1. f(x) = 2x^2-x-2
f'(x) = 4x - 1

k x f(x) f'(x) e
1 0.0000 -2.0000 -1.0000
2 -2.0000 8.0000 -9.0000 100.00%
3 -1.1111 1.5802 -5.4444
4 -0.8209 0.1685 -4.2834 35.36%
5 -0.7815 0.0031 -4.1261
6 -0.7808 0.0000 -4.1231 0.10%
7 -0.7808 0.0000 -4.1231
8 -0.7808 0.0000 -4.1231 0.00%
9 -0.7808 0.0000 -4.1231
10 -0.7808 0.0000 -4.1231 0.00%

EXAMPLE 2. f(x) = x^2+3x+1


f'(x) = 2x+3

k x f(x) f'(x) e
1 4.0000 29.0000 11.0000
2 1.3636 6.9504 5.7273 193.33%
3 0.1501 1.4727 3.3001
4 -0.2962 0.1992 2.4076 150.67%
5 -0.3789 0.0068 2.2422
6 -0.3820 0.0000 2.2361 0.80%
7 -0.3820 0.0000 2.2361
8 -0.3820 0.0000 2.2361 0.00%
9 -0.3820 0.0000 2.2361
10 -0.3820 0.0000 2.2361 0.00%
SECANT METHOD

The secant method is also a recursive method for finding a root of a polynomial by successive
approximation.
In this method, the root is approximated using a secant line (chord) to the
function f(x). An additional advantage of this method is that we do not need to differentiate the given
function f(x), as we do in the Newton-Raphson method.

Note: To start the solution, two initial guesses x0 and x1 are required; they are usually chosen so that
f(x0) and f(x1) have opposite signs.

Advantages of the Secant Method:
The speed of convergence of the secant method is faster than that of the Bisection and Regula Falsi
methods.
It uses the two most recent approximations of the root to find the new approximation, instead of using
only those approximations which bound the interval enclosing the root.

Disadvantages of the Secant Method:


Convergence of the secant method is not always assured.
The method may fail at any stage of the iteration (for example, when f(xa) - f(xb) becomes zero).
Since convergence is not guaranteed, a limit should be placed on the maximum number of
iterations when implementing this method on a computer.

Formula:  x_new = xa - f(xa)(xa - xb) / (f(xa) - f(xb))

          e = (x_new - x_old) / x_new * 100%
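A minimal Python sketch of the secant iteration using the formula above (the stopping test is an illustrative assumption; the sample call reproduces Example 1 below, which converges to the positive root):

def secant(f, xa, xb, tol=1e-6, max_iter=50):
    """Secant method: Newton's derivative is replaced by the slope of the
    chord through the two most recent points (xa, f(xa)) and (xb, f(xb))."""
    for _ in range(max_iter):
        fa, fb = f(xa), f(xb)
        x_new = xa - fa * (xa - xb) / (fa - fb)
        if abs(x_new - xb) < tol * max(1.0, abs(x_new)):
            return x_new
        xa, xb = xb, x_new            # keep the two most recent approximations
    return x_new

# Example 1 below: f(x) = 2x^2 - x - 2 with starting guesses xa = 0, xb = 4
print(round(secant(lambda x: 2 * x**2 - x - 2, 0.0, 4.0), 4))   # about 1.2808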
EXAMPLE 1. f(x) = 2x^2-x-2
Let:
Xa = 0
Xb = 4

k Xa Xb f(Xa) f(Xb) e
1 0.0000 4.0000 -2.0000 26.0000
2 4.0000 0.2857 26.0000 -2.1224 1300.00%
3 0.2857 0.5660 -2.1224 -1.9252 49.52%
4 0.5660 3.3027 -1.9252 16.5127 82.86%
5 3.3027 0.8518 16.5127 -1.4007 287.73%
6 0.8518 1.0434 -1.4007 -0.8659 18.37%
7 1.0434 1.3538 -0.8659 0.3115 22.92%
8 1.3538 1.2716 0.3115 -0.0375 6.46%
9 1.2716 1.2805 -0.0375 -0.0013 0.69%
10 1.2805 1.2808 -0.0013 0.0000 0.02%
11 1.2808 1.2808 0.0000 0.0000 0.00%
12 1.2808 1.2808 0.0000 0.0000 0.00%

EXAMPLE 2. f(x) = x^2+3x+1


Let:
Xa = 2
Xb = 4

k Xa Xb f(Xa) f(Xb) e
1 2.0000 4.0000 11.0000 29.0000
2 4.0000 0.7778 29.0000 3.9383 414.29%
3 0.7778 0.2714 3.9383 1.8880 186.55%
4 0.2714 -0.1948 1.8880 0.4535 239.32%
5 -0.1948 -0.3422 0.4535 0.0904 43.07%
6 -0.3422 -0.3789 0.0904 0.0068 9.69%
7 -0.3789 -0.3819 0.0068 0.0001 0.78%
8 -0.3819 -0.3820 0.0001 0.0000 0.01%
9 -0.3820 -0.3820 0.0000 0.0000 0.00%
10 -0.3820 -0.3820 0.0000 0.0000 0.00%
B. Bracketing Method
BISECTION METHOD
The Bisection Method, also called the interval halving method, the binary search method, or the
dichotomy method, is based on Bolzano's theorem for continuous functions.

The Bisection Method looks to find the value c for which the plot of the function f crosses the x-
axis. The c value is in this case is an approximation of the root of the function f(x). How close
the value of c gets to the real root depends on the value of the tolerance we set for the algorithm.

For a given function f(x),the Bisection Method algorithm works as follows:


1. two values a and b are chosen for which f(a) > 0 and f(b) < 0 (or the other way
around)
2. interval halving: a midpoint c is calculated as the arithmetic mean between a and b,
c = (a + b) / 2
3. if f(c) = 0 means that we found the root of the function, which is c
4. if f(c) ≠ 0 we check the sign of f(c):
- if f(c) has the same sign as f(a) we replace a with c and we keep the
same value for b
- if f(c) has the same sign as f(b), we replace b with c and we keep the
same value for a

The algorithm ends when the value of |f(c)| is less than a defined tolerance (e.g. 0.001). In this
case we say that c is close enough to be the root of the function, for which f(c) ~= 0.

Midpoint:  ck = (ak + bk) / 2
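A minimal Python sketch of the bisection algorithm described in steps 1-4 above (the tolerance is an illustrative assumption; the sample call corresponds to Example 1 below):

def bisection(f, a, b, tol=1e-3, max_iter=100):
    """Bisection: repeatedly halve [a, b], keeping the half on which f changes sign."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2.0                 # midpoint
        if f(c) == 0 or abs(f(c)) < tol:
            return c
        if f(a) * f(c) > 0:               # root lies in [c, b]
            a = c
        else:                             # root lies in [a, c]
            b = c
    return c

# Example 1 below: f(x) = x^3 + 3x - 5 on the interval (-5, 5)
print(round(bisection(lambda x: x**3 + 3 * x - 5, -5, 5), 3))   # about 1.154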
EXAMPLE 1.  f(x) = x^3 + 3x - 5,  interval (-5, 5)

x      f(x)
0      -5
1      -1
2       9
3      31
4      71
5     135
6     229

[Plot of f(x); the root lies where the curve crosses the x-axis.]

k    left endpoint ak    midpoint ck    right endpoint bk    f(ak)    f(ck)    f(bk)
0 -5.000 0.000 5.000 -145.000 -5.000 135.000
1 0.000 2.500 5.000 -5.000 18.125 135.000
2 0.000 1.250 2.500 -5.000 0.703 18.125
3 0.000 0.625 1.250 -5.000 -2.881 0.703
4 0.625 0.938 1.250 -2.881 -1.364 0.703
5 0.938 1.094 1.250 -1.364 -0.410 0.703
6 1.094 1.172 1.250 -0.410 0.125 0.703
7 1.094 1.133 1.172 -0.410 -0.148 0.125
8 1.133 1.152 1.172 -0.148 -0.013 0.125
9 1.152 1.162 1.172 -0.013 0.056 0.125
10 1.152 1.157 1.162 -0.013 0.021 0.056
11 1.152 1.155 1.157 -0.013 0.004 0.021
12 1.152 1.154 1.155 -0.013 -0.004 0.004
13 1.154 1.154 1.155 -0.004 0.000 0.004
14 1.154 1.154 1.154 -0.004 -0.002 0.000
15 1.154 1.154 1.154 -0.002 -0.001 0.000
16 1.154 1.154 1.154 -0.001 -0.001 0.000
17 1.154 1.154 1.154 -0.001 0.000 0.000
18 1.154 1.154 1.154 0.000 0.000 0.000
19 1.154 1.154 1.154 0.000 0.000 0.000
20 1.154 1.154 1.154 0.000 0.000 0.000
EXAMPLE 2.  f(x) = x^2 - 2x - 1,  interval (-10, 0)

x      f(x)
-2      7
-1      2
 0     -1
 1     -2
 2     -1
 3      2
 4      7
 5     14
 6     23
 7     34
 8     47

[Plot of f(x).]

k    left endpoint ak    midpoint ck    right endpoint bk    f(ak)    f(ck)    f(bk)
0 -10.0000 -5.0000 0.0000 119.0000 34.0000 -1.0000
1 -5.0000 -2.5000 0.0000 34.0000 10.2500 -1.0000
2 -2.5000 -1.2500 0.0000 10.2500 3.0625 -1.0000
3 -1.2500 -0.6250 0.0000 3.0625 0.6406 -1.0000
4 -0.6250 -0.3125 0.0000 0.6406 -0.2773 -1.0000
5 -0.6250 -0.4688 -0.3125 0.6406 0.1572 -0.2773
6 -0.4688 -0.3906 -0.3125 0.1572 -0.0662 -0.2773
7 -0.4688 -0.4297 -0.3906 0.1572 0.0440 -0.0662
8 -0.4297 -0.4102 -0.3906 0.0440 -0.0115 -0.0662
9 -0.4297 -0.4199 -0.4102 0.0440 0.0162 -0.0115
10 -0.4199 -0.4150 -0.4102 0.0162 0.0023 -0.0115
11 -0.4150 -0.4126 -0.4102 0.0023 -0.0046 -0.0115
12 -0.4150 -0.4138 -0.4126 0.0023 -0.0011 -0.0046
13 -0.4150 -0.4144 -0.4138 0.0023 0.0006 -0.0011
14 -0.4144 -0.4141 -0.4138 0.0006 -0.0003 -0.0011
15 -0.4144 -0.4143 -0.4141 0.0006 0.0002 -0.0003
16 -0.4143 -0.4142 -0.4141 0.0002 0.0000 -0.0003
17 -0.4142 -0.4142 -0.4141 0.0000 -0.0001 -0.0003
18 -0.4142 -0.4142 -0.4142 0.0000 -0.0001 -0.0001
19 -0.4142 -0.4142 -0.4142 0.0000 -0.0001 -0.0001
20 -0.4142 -0.4142 -0.4142 0.0000 -0.0001 -0.0001
FALSE POSITION METHOD

The Regula–Falsi Method is a numerical method for estimating the roots of a polynomial f(x).   
A value x replaces the midpoint in the Bisection Method and serves as the new approximation of
a root of f(x). The objective is to make convergence faster. Assume that f(x) is continuous.

Algorithm for the Regula–Falsi Method: Given a continuous function f(x)
● Find points a and b such that a < b and f(a) * f(b) < 0.

● Take the interval [a, b] and determine the next value of x1.
● If f(x1) = 0 then x1 is an exact root, else if f(x1) * f(b) < 0 
then let a = x1, else if f(a) * f(x1) < 0 then let b = x1.
● Repeat steps 2 & 3 until f(xi) = 0 or |f(xi)| <= DOA, where
DOA stands for the degree of accuracy.

Note that the line segment drawn from f(a)


to f(b) is called the interpolation line.

Graphically, if the root is in [ a, xi ], then the next interpolation line is drawn between ( a, f(a) ) and ( xi,
f(xi) ); otherwise, if the root is in [ xi, b ], then the next interpolation line is drawn between ( xi, f(xi) ) and
(b, f(b)).
ck = b - f(b)(b - a) / (f(b) - f(a))
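A minimal Python sketch of the Regula-Falsi algorithm described above (the tolerance is an illustrative assumption; the sample call corresponds to Example 1 below):

def false_position(f, a, b, tol=1e-4, max_iter=100):
    """Regula-Falsi: the interpolation line through (a, f(a)) and (b, f(b))
    replaces the bisection midpoint; the sub-interval with a sign change is kept."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = b - f(b) * (b - a) / (f(b) - f(a))    # x-intercept of the chord
        if abs(f(c)) < tol:
            return c
        if f(a) * f(c) < 0:
            b = c                                 # root lies in [a, c]
        else:
            a = c                                 # root lies in [c, b]
    return c

# Example 1 below: f(x) = x^3 - 3x + 1 on the interval (0, 1)
print(round(false_position(lambda x: x**3 - 3 * x + 1, 0.0, 1.0), 3))   # about 0.347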

EXAMPLE 1.  f(x) = x^3 - 3x + 1,  interval (0, 1)

x      f(x)
0        1
1       -1
2        3
3       19
4       53
5      111

[Plot of f(x); there is a sign change between x = 0 and x = 1.]

k    left endpoint ak    interpolation point ck    right endpoint bk    f(ak)    f(ck)    f(bk)
0 0 0.500 1.000 1.000 -0.375 -1.000
1 0 0.364 0.500 1.000 -0.043 -0.375
2 0 0.349 0.364 1.000 -0.004 -0.043
3 0 0.347 0.349 1.000 0.000 -0.004
4 0.347 0.347 0.349 0.000 0.000 -0.004

EXAMPLE 2.  f(x) = x^2 - 2x - 1,  interval (-10, 0)

x      f(x)
-2      7
-1      2
 0     -1
 1     -2
 2     -1
 3      2
 4      7
 5     14
 6     23
 7     34
 8     47

[Plot of f(x).]
k    left endpoint ak    interpolation point ck    right endpoint bk    f(ak)    f(ck)    f(bk)
0 -10.000 -5.000 0.000 119.000 34.000 -1.000
1 -5.000 -2.500 0.000 34.000 10.250 -1.000
2 -2.500 -1.250 0.000 10.250 3.063 -1.000
3 -1.250 -0.625 0.000 3.063 0.641 -1.000
4 -0.625 -0.313 0.000 0.641 -0.277 -1.000
5 -0.625 -0.469 -0.313 0.641 0.157 -0.277
6 -0.469 -0.391 -0.313 0.157 -0.066 -0.277
7 -0.469 -0.430 -0.391 0.157 0.044 -0.066
8 -0.430 -0.410 -0.391 0.044 -0.011 -0.066
9 -0.430 -0.420 -0.410 0.044 0.016 -0.011
10 -0.420 -0.415 -0.410 0.016 0.002 -0.011
11 -0.415 -0.413 -0.410 0.002 -0.005 -0.011
12 -0.415 -0.414 -0.413 0.002 -0.001 -0.005
13 -0.415 -0.414 -0.414 0.002 0.001 -0.001
14 -0.414 -0.414 -0.414 0.001 0.000 -0.001
15 -0.414 -0.414 -0.414 0.001 0.000 0.000
16 -0.414 -0.414 -0.414 0.000 0.000 0.000
17 -0.414 -0.414 -0.414 0.000 0.000 0.000
18 -0.414 -0.414 -0.414 0.000 0.000 0.000
19 -0.414 -0.414 -0.414 0.000 0.000 0.000
20 -0.414 -0.414 -0.414 0.000 0.000 0.000

IV. CURVE FITTING AND INTERPOLATION


A. Least Square Regression
Regression analysis is a helpful statistical tool for studying the correlation between
two sets of events, or, statistically speaking, variables ― between a dependent 
variable and one or more independent variables.
SIMPLE LINEAR REGRESSION
Simple linear regression
This type of regression model allows you
to estimate the linear correlation between two
variables.
Linear regression is a commonly used predictive analysis. It contains one explanatory
variable and a dependent variable. It attempts to model the relationship between the two
variables by fitting a linear equation to the observed data. A scatterplot is a useful tool for
determining the strength of the relationship between the two variables.

Formula for simple linear regression:

y = a + bx

where x and y are the two variables on the regression line,
b = slope of the line,
a = y-intercept of the line,
x = values of the first data set,
y = values of the second data set,
n = number of data points.

a (intercept) = (Σy * Σx^2 - Σx * Σxy) / (n * Σx^2 - (Σx)^2)

b (slope)     = (n * Σxy - Σx * Σy) / (n * Σx^2 - (Σx)^2)
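The two formulas above translate directly into a short Python sketch (the function name linear_regression is an illustrative choice; the sample call reproduces Example 1 below):

def linear_regression(x, y):
    """Least-squares slope b and intercept a for y = a + b*x."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    b = (n * sxy - sx * sy) / (n * sxx - sx ** 2)        # slope
    a = (sy * sxx - sx * sxy) / (n * sxx - sx ** 2)      # intercept
    return a, b

# Example 1 below: x = 5, 10, 12, 16 and y = 6, 12, 16, 20
a, b = linear_regression([5, 10, 12, 16], [6, 12, 16, 20])
print(round(a, 4), round(b, 4))    # about -0.4622 and 1.2988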

EXAMPLE 1:
n      x      y      x^2      xy
1      5      6       25      30
2     10     12      100     120
3     12     16      144     192
4     16     20      256     320
Σ     43     54      525     662

b (slope)     = (n * Σxy - Σx * Σy) / (n * Σx^2 - (Σx)^2) = 1.2988

a (intercept) = (Σy * Σx^2 - Σx * Σxy) / (n * Σx^2 - (Σx)^2) = -0.4622

The linear regression line is given by:

y = a + bx
y = -0.4622 + 1.2988x
EXAMPLE 2:
X      Y      (X-X̄)    (Y-Ȳ)    (X-X̄)(Y-Ȳ)    (X-X̄)^2    (Y-Ȳ)^2
24 56 -4.90 -5.10 24.99 24.01 26.01
16 53 -12.90 -8.10 104.49 166.41 65.61
25 45 -3.90 -16.10 62.79 15.21 259.21
23 55 -5.90 -6.10 35.99 34.81 37.21
43 62 14.10 0.90 12.69 198.81 0.81
32 61 3.10 -0.10 -0.31 9.61 0.01
27 65 -1.90 3.90 -7.41 3.61 15.21
26 69 -2.90 7.90 -22.91 8.41 62.41
33 71 4.10 9.90 40.59 16.81 98.01
40 74 11.10 12.90 143.19 123.21 166.41
Mean:  X̄ = 28.9,  Ȳ = 61.1        Sums:  Σ(X-X̄)(Y-Ȳ) = 394.10,  Σ(X-X̄)^2 = 600.9,  Σ(Y-Ȳ)^2 = 730.9

Pearson correlation coefficient:

r = Σ((x - x̄)(y - ȳ)) / sqrt( Σ(x - x̄)^2 * Σ(y - ȳ)^2 )

Sy = sqrt( Σ(y - ȳ)^2 / (n - 1) )        Sx = sqrt( Σ(x - x̄)^2 / (n - 1) )

r = 0.5947        Sy = 9.0117        Sx = 8.1711

b = r (Sy / Sx) = 0.6558
a = ȳ - b x̄ = 42.1459
y = a + bx, at x = 23:  y = 57.2305

[Scatter plot of the data with the fitted line y = 0.6558x + 42.146, R² = 0.3536]
POLYNOMIAL REGRESSION
Polynomial regression is one of several methods of curve fitting. With polynomial regression,
the data is approximated using a polynomial function. A polynomial is a function that takes the
form f(x) = c0 + c1 x + c2 x^2 + ... + cn x^n, where n is the degree of the polynomial.
When to Use Polynomial Regression:
We use polynomial regression when the relationship between a predictor
and response variable is nonlinear.
1. Create a Scatterplot. It shows the relationship of explanatory variable(x)
and response variable (y).
2. Create a residuals vs. fitted plot.
3. Calculate the R2 of the model.

Formula for polynomial regression:

y = b0 + b1 x + b2 x^2 + b3 x^3 + ... + bn x^n
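As a sketch of how such a fit can be computed, the Python snippet below builds and solves the normal equations for a degree-m polynomial, following the tabular procedure used in Example 1 further down; the data in the sample call are illustrative and not taken from the text:

import numpy as np

def poly_regression(x, y, m):
    """Least-squares polynomial fit of degree m obtained by building and
    solving the normal equations."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    # A[i][j] = sum(x^(i+j)), rhs[i] = sum(x^i * y)
    A = np.array([[np.sum(x ** (i + j)) for j in range(m + 1)] for i in range(m + 1)])
    rhs = np.array([np.sum((x ** i) * y) for i in range(m + 1)])
    return np.linalg.solve(A, rhs)      # b0, b1, ..., bm

# Illustrative data (not from the text): y roughly quadratic in x
x = [0, 1, 2, 3, 4, 5]
y = [2.1, 2.9, 6.2, 11.1, 17.8, 27.2]
print(poly_regression(x, y, 2))         # fitted b0, b1, b2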

Advantages of using Polynomial Regression:


► Polynomial provides the best approximation of the relationship between the
dependent and independent variable.
► A Broad range of function can be fit under it.
► Polynomial basically fits a wide range of curvature.

Disadvantages of using Polynomial Regression


► The presence of one or two outliers in the data can seriously affect the
results of the nonlinear analysis.
► These are too sensitive to the outliers.
► In addition, there are unfortunately fewer model validation tools for the
detection of outliers in nonlinear regression than there are for linear
regression.
EXAMPLE 1:

n xk yk (xk)2 (xk)3
1 1 2 1 1
2 2 4 4 8
3 3 6 9 27
4 3 8 9 27
5 2 10 4 8
6 1 12 1 1

sum 12 42 28 72
mean 2 7.000

(xk)^4    (xk)(yk)    (xk^2)(yk)    Sr = (yk - b0 - b1 xk - b2 xk^2)^2    (yk - ȳ)^2
1 2 2 0.819 25.000
16 8 16 5.444 9.000
81 18 54 33.200 1.000
81 24 72 14.152 1.000
16 20 40 13.444 9.000
1 12 12 82.723 25.000

sum 196 84 196 149.782 70.000

Normal Equations
b0 b1 b2
6 + 12 + 28 = 42
12 + 28 + 72 = 84
28 + 72 + 196 = 196

6 12 28 b0 42
12 28 72 b1 = 84
28 72 196 b2 196
Using triangularization followed by back substition method.
b0 b1 b2 B
6 12 28 42
M12 = 2 12 28 72 84
M13 = 4.667 28 72 196 196

6 12 28 42
0 -28.000 -72.000 -84
M22 = 2.6 0 -72.000 -196.000 -196.000

6 12 28 42
0 -28.000 -72.000 -84
0 0 196.000 196.000
Therefore, y=1.476+0.429x+1.000x^2

Back Substitution:  m (degree) = 2,  n = 6

b2 = 1.000
b1 = 0.429
b0 = 1.476

Coefficient of determination:
r^2 = ( Σ(yk - ȳ)^2 - Sr ) / Σ(yk - ȳ)^2 = -113.97%

Standard error:
S = sqrt( Sr / (n - (m + 1)) ) = 7.066
V. NUMERICAL INTEGRATION
EULER'S METHOD
The Euler method is the most straightforward method to integrate a differential
equation. Consider the first-order differential equation
x' = f(t, x),
with the initial condition x(0) = x0. Define tn = n*Δt and xn = x(tn). A Taylor
series expansion of xn+1 results in
xn+1 = x(tn + Δt)
     = x(tn) + Δt x'(tn) + O(Δt^2)
     = x(tn) + Δt f(tn, xn) + O(Δt^2).
The Euler method is therefore written as
xn+1 = x(tn) + Δt f(tn, xn).
We say that the Euler method steps forward in time using a time step Δt, starting
from the initial value x0 = x(0). The local error of the Euler method is O(Δt^2).
The global error, however, incurred when integrating to a time T, is a factor of 1/Δt
larger and is given by O(Δt). It is therefore customary to call the Euler method a
first-order method.
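A minimal Python sketch of the Euler step xn+1 = xn + Δt f(tn, xn) is given below; the sample call uses the derivative y' = 4x + 3 from Example 1 that follows, and the starting value 2.0 is an assumption chosen because it reproduces the approximation column of that example:

def euler(f, t0, x0, h, n_steps):
    """Explicit Euler method: x_{n+1} = x_n + h * f(t_n, x_n)."""
    t, x = t0, x0
    history = [(t, x)]
    for _ in range(n_steps):
        x = x + h * f(t, x)
        t = t + h
        history.append((t, x))
    return history

# Derivative from Example 1 below, y' = 4x + 3, step h = 0.2, five steps
for t, x in euler(lambda t, x: 4 * t + 3, 0.0, 2.0, 0.2, 5):
    print(round(t, 1), round(x, 4))    # reproduces 2.0, 2.6, 3.36, 4.28, 5.36, 6.6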

EXAMPLE 1:  y = 2x^2 + 3x - 5,  y' = 4x + 3,  y(0) = 1,  step size h = 0.2

n    Xn      Yn (Euler approximation)    Exact value    Percentage error
0 0.0000 2.0000 -5.0000 -140%
1 0.2000 2.6000 -4.2400 -161%
2 0.4000 3.3600 -3.1600 -206%
3 0.6000 4.2800 -1.7600 -343%
4 0.8000 5.3600 -0.0400 -13500%
5 1.0000 6.6000 2.0000 230%

[Chart: EULER'S METHOD, h = 0.2 - exact values compared with the Euler approximations]
TRAPEZOIDAL METHOD

Trapezoidal Rule is a rule that evaluates the area under the curves by dividing the total
area into smaller trapezoids rather than using rectangles. This integration works by
approximating the region under the graph of a function as a trapezoid, and it
calculates the area. This rule takes the average of the left and the right sum.

We suppose that the function f(x) is known at the n+1 points labeled x0, x1, . . . , xn,
with the endpoints given by x0 = a and xn = b. Define
fi = f(xi),  hi = xi+1 - xi,

where the last equality arises from the change of variables s = x - xi. Applying the
trapezoidal rule to the integral, we have

If the points are not evenly spaced, say because the data are experimental values,
then the hi may differ for each value of i and are used directly.
However, if the points are evenly spaced, say because f(x) can be computed, we
have hi = h, independent of i. We can then define
xi = a + ih,  i = 0, 1, . . . , n;
and since the end point b satisfies b = a + nh, we have
h = (b - a)/n.
The composite trapezoidal rule for evenly spaced points then becomes

The first and last terms have a multiple of one; all other terms have a multiple of
two; and the entire sum is multiplied by h/2.
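A minimal Python sketch of the composite trapezoidal rule for evenly spaced samples (the sample call uses the y-values of Example 1 below):

def trapezoid(y, h):
    """Composite trapezoidal rule for equally spaced samples y0..yn with spacing h:
    T = (h/2) * [y0 + 2*y1 + ... + 2*y_{n-1} + yn]."""
    return (h / 2.0) * (y[0] + 2 * sum(y[1:-1]) + y[-1])

# Example 1 below: y-values 0, 3, 5, 7, 4, 9, 2 with spacing delta x = 2
print(trapezoid([0, 3, 5, 7, 4, 9, 2], 2))   # 58.0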
EXAMPLE 1:  Δx = 2,  n = 6

x     y
-4    0
-2    3
 0    5
 2    7
 4    4
 6    9
 8    2

∫[a,b] f(x) dx ≈ Tn = (Δx/2) [ f(x0) + 2f(x1) + 2f(x2) + ... + 2f(x_{n-1}) + f(xn) ]

T6 = (Δx/2) [ f(x0) + 2f(x1) + 2f(x2) + 2f(x3) + 2f(x4) + 2f(x5) + f(x6) ]

A ≈ T6 = (2/2) [ 0 + 2(3) + 2(5) + 2(7) + 2(4) + 2(9) + 2 ]
A ≈ 58

SIMPSON'S RULE
Simpson's rule is a numerical method that can be used to evaluate a definite integral,
i.e., to approximate the integral. The antiderivative technique is applied in this
numerical integration.

We here consider the composite Simpson’s rule for evenly space points. We apply
Simpson’s rule over intervals of 2h, starting from a and ending at b:

Note that n must be even for this scheme to work. Combining terms, we have

The first and last terms have a multiple of one; the even indexed terms have a
multiple of 2; the odd indexed terms have a multiple of 4; and the entire sum is
multiplied by h/3.
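A minimal Python sketch of the composite Simpson's 1/3 rule with the weights 1, 4, 2, ..., 4, 1 described above; the sample calls use the integrand 1/x, which appears to match the f(x) values tabulated in the examples below (an assumption about the intended integrand):

def simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule with an even number of subintervals n:
    weights 1, 4, 2, 4, ..., 4, 1 multiplied by h/3."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return total * h / 3.0

print(simpson(lambda x: 1.0 / x, 1.0, 3.0, 12))   # close to ln 3 ≈ 1.0986
print(simpson(lambda x: 1.0 / x, 2.0, 4.0, 10))   # close to ln 2 ≈ 0.6931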
EXAMPLE 1:
xs f(x) coefficient c*f(x) a b n
1.0000 1.0000 1 1.0000 1 3 12
1.1667 0.8571 3 2.5714
1.3333 0.7500 1 0.7500
1.5000 0.6667 3 2.0000 delta x
1.6667 0.6000 1 0.6000 0.1667
1.8333 0.5455 3 1.6364
2.0000 0.5000 1 0.5000
2.1667 0.4615 3 1.3846 0.7503
2.3333 0.4286 1 0.4286
2.5000 0.4000 3 1.2000
2.6667 0.3750 1 0.3750
2.8333 0.3529 3 1.0588
13.5048

EXAMPLE 2:
xs f(x) coefficient c*f(x) a b n
2.0000 0.5000 2 1.0000 2 4 10
2.2000 0.4545 4 1.8182
2.4000 0.4167 2 0.8333
2.6000 0.3846 4 1.5385 delta x
2.8000 0.3571 2 0.7143 0.2000
3.0000 0.3333 4 1.3333
3.2000 0.3125 2 0.6250
3.4000 0.2941 4 1.1765 0.7098
3.6000 0.2778 2 0.5556
3.8000 0.2632 4 1.0526
10.6473

RUNGE-KUTTA METHOD
There are two common types of Runge-Kutta method: the 2nd-order and the
4th-order Runge-Kutta methods. Both are integration
methods for approximating the solutions of differential equations. The method, in
several versions, was developed around the 1900s by the German mathematicians C. Runge
and M. W. Kutta.
The essence of the Runge-Kutta methods involves numerically
integrating the function in the differential equation by using a trial step at the mid-point of an
interval, e.g., within a step Δx or h, using numerical integration techniques such as the
trapezoidal or Simpson rules. The numerical integration allows the
cancellation of low-order error terms for more accurate solutions.

SECOND ORDER RUNGE-KUTTA METHOD


This is the simplest form of the Runge-Kutta method, with formulations for the
solution of a first-order differential equation of the following form:
y'(x) = f(x, y)
with a specified solution point corresponding to one specific condition for the
equation. The solution points of this differential equation can be expressed as:

where O(h^3) is the order of error of the step h^3, and

where,
FOURTH ORDER RUNGE-KUTTA METHOD
This is the most popular version of the Runge-Kutta method for solving differential
equations in initial value problems. The formulation of this method is
similar to that of the second-order method.

where,
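Since the formulas themselves were not reproduced in this compilation, the following is a minimal Python sketch of the classical fourth-order Runge-Kutta step in its standard textbook form (the sample problem y' = 4t + 3 with y(0) = 2 is an illustrative assumption, reusing the Euler example):

def rk4(f, t0, y0, h, n_steps):
    """Classical fourth-order Runge-Kutta method for y' = f(t, y)."""
    t, y = t0, y0
    out = [(t, y)]
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t = t + h
        out.append((t, y))
    return out

# Illustrative use on y' = 4t + 3, y(0) = 2, h = 0.2:
for t, y in rk4(lambda t, y: 4 * t + 3, 0.0, 2.0, 0.2, 5):
    print(round(t, 1), round(y, 4))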

VI. REFERENCES
https://www.sciencedirect.com/topics/engineering/numerical-method
onlinelearningmath.com
https://www.google.com/amp/s/www.geeksforgeeks.org/l-u-decomposition-system-linear-equations/amp
https://mathworld.wolfram.com/JacobiMethod.html
https://www.easycalculation.com/algebra/gauss-seidel-method.php
https://www.sciencedirect.com/topics/engineering/gauss-seidel-method
https://math.iitm.ac.in/public_html/sryedida/caimna/transcendental/iteration%20methods/fixed-point/iteration.html
https://www.google.com/amp/s/www.geeksforgeeks.org/secant-method-of-numerical-analysis/amp
https://www.mathworks.com/matlabcentral/fileexchange/68885-the-newton-raphson-method
https://brilliant.org/wiki/newton-raphson-method/
https://x-engineer.org/bisection-method
http://www2.lv.psu.edu/ojj/courses/cmpsc-201/numerical/regula.html