
Student Name: Shara Hiwa Zahir

Class: C

Course Title: Computer Programming & Numerical Analysis

Department: Civil Engineering

College of Engineering
Salahaddin University-Erbil
Academic Year 2019-2020
ABSTRACT:
This report focuses on the numerical methods involved in solving systems of
nonlinear equations. First, we study the bisection method for solving
nonlinear equations; second, we examine Newton's method, which for systems
of equations involves the Jacobian matrix. Solving nonlinear equations and
systems is one of the most important problems of applied mathematics, from
both a theoretical and a practical point of view, and it arises in many
branches of science and engineering: physics, computing, astronomy,
finance, and more. A look at the bibliography and the list of great
mathematicians who have worked on this issue reveals a high level of
contemporary interest in it. Although the rapid development of digital
computers has led to the effective implementation of many numerical
methods, in practice it is necessary to analyze different problems, such as
computational efficiency based on the time spent by the processor, the
design of iterative methods that converge rapidly to the desired solution,
the control of rounding error, information about the error bounds of the
approximate solution obtained, the initial conditions that guarantee safe
convergence, etc. These problems are the starting point for this work. The
overall objective of this report is to design efficient iterative methods
to solve an equation or a system of nonlinear equations. The best-known
scheme for solving nonlinear equations is Newton's method, whose
generalization to systems of equations was proposed by Ostrowski. In recent
years, as shown by an extensive literature, the construction of iterative
methods has increased considerably, both one-point and multipoint, in order
to achieve optimal order of convergence and better computational
efficiency. In general, in this report we have used the technique of weight
functions to design methods for solving equations and systems, both
derivative-free and with derivatives in their iterative expression.
TABLE OF CONTENTS:

1. Abstract
2. Introduction
3. Historical Background
4. Bisection Method
5. Solve and Calculate the Actual Error in the Bisection Method
6. Advantages and Disadvantages of the Bisection Method
7. Newton-Raphson Method
8. Formula of the Newton-Raphson Method
9. Advantages and Disadvantages of the Newton-Raphson Method
10. Regula-Falsi Method
11. Method of Iteration
12. Conclusion
13. References
INTRODUCTION:
In a nonlinear system of equations, the equation(s) to be solved cannot be
written as a linear combination of the unknown variables or functions that
appear in them. A system can be nonlinear even if known linear functions
appear in some of its equations.

Nonlinear equations, as the name says, are any functions that are not
linear; for example, quadratic, circle, and exponential functions. Below,
we will learn how to graph nonlinear equations, and then determine whether
they are a function or not. The easiest way to verify whether an equation
is a function, no matter whether it is linear or nonlinear, is the vertical
line test.

A system of nonlinear equations is a system of two or more equations in two
or more variables containing at least one equation that is not linear.
Recall that a linear equation can take the form Ax + By + C = 0. Any
equation that cannot be written in this form is nonlinear. If one equation
in a system is nonlinear, you can use substitution.

In this situation, you can solve for one variable in the linear equation
and substitute this expression into the nonlinear equation, because solving
for a variable in a linear equation is easy. Any time you can solve for one
variable easily, you can substitute that expression into the other equation
to solve for the other one.

However, there still exist some problems in solving systems of nonlinear
equations. For most traditional numerical methods, such as Newton's method,
the convergence and performance characteristics can be highly sensitive to
the initial guess of the solution, and it is very difficult to select a
reasonable initial guess for most nonlinear equations. The algorithm may
fail, or the results may be improper, if the initial guess of the solution
is unreasonable. Many different combinations of traditional numerical
methods and intelligent algorithms have been applied to solve systems of
nonlinear equations; these can overcome the problem of selecting a
reasonable initial guess, but the algorithms are too complicated or
expensive to compute when there are many systems of nonlinear equations to
solve. Many improved intelligent algorithms, such as the particle swarm
algorithm and the genetic algorithm, have been proposed to solve systems of
nonlinear equations. Though they overcome the problem of selecting a
reasonable initial guess of the solution, they lack sophisticated local
search capabilities, which may lead to convergence stagnation.

To solve a system in which one equation is linear (a worked example follows
the two lists below):

1. Solve the linear equation for one variable.

2. Substitute that expression into the nonlinear equation.

3. Solve the nonlinear equation for the variable.

4. Substitute the solution(s) into either equation to solve for the other
variable.

To solve a nonlinear system when both equations are nonlinear:

1. Solve for x^2 or y^2 in one of the given equations.

2. Substitute the value from Step 1 into the other equation.

3. Solve the resulting quadratic equation.

4. Substitute the value(s) from Step 3 into either equation to solve for the
other variable.
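
As a worked illustration of substitution (the system here is invented for
illustration, not taken from the text): consider the linear equation
y = x + 1 together with the nonlinear equation x^2 + y^2 = 25. Substituting
y = x + 1 into the second equation gives x^2 + (x + 1)^2 = 25, i.e.
2x^2 + 2x - 24 = 0, or x^2 + x - 12 = (x + 4)(x - 3) = 0. Hence x = -4 or
x = 3, and substituting back gives y = -3 or y = 4, so the solutions are
(-4, -3) and (3, 4).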
Historical Background:

Numerical algorithms are at least as old as the Egyptian Rhind papyrus (c.
1650 BC), which describes a root-finding method for solving a simple
equation. Ancient Greek mathematicians made many further advancements
in numerical methods. In particular, Eudoxus of Cnidus (c. 400–350 BC)
created and Archimedes (c. 285–212/211 BC) perfected the method of
exhaustion for calculating lengths, areas, and volumes of geometric
figures. When used as a method to find approximations, it is in much the
spirit of modern numerical integration; and it was an important precursor
to the development of calculus by Isaac Newton (1642–1727) and
Gottfried Leibniz (1646–1716).

Solution of nonlinear equations: methods

To solve a transcendental equation such as x - sin(x) = 4 or tan(x) = x,
there are many methods to approximate the solution:

1. Bisection method

2. Newton-Raphson method

3. Regula-falsi method

4. Method of iteration

5. Other methods

All of these methods are important, but we focus on the bisection method
and Newton's method, because these two are better known and more widely
used than the others.
1. Bisection Method:

The history of the bisection method: although there is little concrete
knowledge of its development, we can infer that the bisection method was
developed a short while after the Intermediate Value Theorem was first
proven by Bernard Bolzano in 1817 (Edwards 1979). The bisection method is
used to find the roots of an equation. It brackets and subdivides the
interval in which the root of the equation lies. The principle behind the
method is the intermediate value theorem for continuous functions: it works
by narrowing the gap between the positive and negative endpoints until it
closes in on the correct answer, taking the average of the two at each
step. It is a simple method, and it is relatively slow. The bisection
method is also known as the interval halving method, the binary search
method, or the dichotomy method, and it is based on Bolzano's theorem for
continuous functions.

Theorem (Bolzano): If a function f(x) is continuous on an interval [a, b]
and f(a)·f(b) < 0, then a value c ∈ (a, b) exists for which f(c) = 0.

The main way bisection fails is if the root is a double root, i.e. the
function keeps the same sign except for reaching zero at one point. The
bisection method is an iterative algorithm used to find roots of continuous
functions. Its main advantages are that it is guaranteed to converge if the
initial interval is chosen appropriately, and that it is relatively simple
to implement.

The major disadvantage, however, is that convergence is slower than with
most other methods. You typically choose the method for tricky situations
that cause difficulties for other methods. For example, if your choices are
bisection and Newton-Raphson, then bisection will be useful if the
function's derivative is equal to zero at some iteration, since that
condition causes Newton's method to fail. It is not uncommon to develop
hybrid algorithms that use bisection for some iterations and faster methods
for others. Because of this, the bisection method is often used to obtain a
rough approximation to a solution, which is then used as a starting point
for a more rapidly converging method.
Solve and calculate the actual error in the bisection method:

For a given function f(x), the bisection method algorithm works as follows:

1. Two values a and b are chosen for which f(a) > 0 and f(b) < 0 (or the
other way around).

2. Interval halving: a midpoint c is calculated as the arithmetic mean
between a and b, c = (a + b) / 2.

3. The function f is evaluated at the value c.

4. If f(c) = 0, we have found the root of the function, which is c.

5. If f(c) ≠ 0, we check the sign of f(c):

 if f(c) has the same sign as f(a), we replace a with c and keep
the same value for b;

 if f(c) has the same sign as f(b), we replace b with c and keep
the same value for a.

6. We go back to step 2 and recalculate c with the new value of a or b.

If you knew what the actual error was, then you would have the true root
and there would have been no reason to use the method. In any case, you can
bound the error very easily: at the first iteration, the error can be at
worst half of the length of the interval; the next iteration's error will
be at worst half of that; and so on. So, given an interval [a, b], the
error at the n-th iteration is within (b - a)/2^n.
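
The steps above translate directly into code. Below is a minimal C++ sketch
of the algorithm (the function f(x) = x^2 - 2 on [1, 2] is an assumed
illustration, not taken from the report); the number of iterations needed
for a tolerance eps follows from the error bound above, since
(b - a)/2^n <= eps gives n >= log2((b - a)/eps).

// Minimal bisection sketch. Assumed example: f(x) = x*x - 2 on [1, 2].
#include <iostream>
#include <cmath>
using namespace std;

double f(double x) { return x * x - 2; }

int main()
{
    double a = 1, b = 2;   // bracket: f(a) and f(b) have opposite signs
    double eps = 1e-8;     // required accuracy
    // error bound (b - a)/2^n <= eps  =>  n >= log2((b - a)/eps) steps
    int n = (int)ceil(log2((b - a) / eps));
    for (int k = 0; k < n; k++)
    {
        double c = (a + b) / 2;                   // step 2: midpoint
        if (f(c) == 0) { a = c; b = c; break; }   // step 4: exact root found
        if (f(a) * f(c) < 0)                      // step 5: keep the half
            b = c;                                //   containing the sign change
        else
            a = c;
    }
    cout << "approximate root: " << (a + b) / 2 << endl; // ~1.4142136
    return 0;
}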
Advantages and disadvantages of the bisection method

Advantages:

The bisection method is always convergent: since the method brackets the
root, it is guaranteed to converge. As iterations are conducted, the
interval gets halved, so one can guarantee a bound on the error in the
solution of the equation. Since the bisection method discards 50% of the
current interval at each step, it brackets the root much more quickly than
the incremental search method does. To compare: on average, assuming a root
is somewhere on the interval between 0 and 1, it takes 6-7 function
evaluations to estimate the root to within 0.1 accuracy by incremental
search, while those same 6-7 function evaluations using bisection estimate
the root to within 1/2^4 = 0.0625 to 1/2^5 ≈ 0.031 accuracy.
Disadvantages:

The biggest disadvantage is the slow convergence rate. Typically, bisection
is used to get an initial estimate for much faster methods, such as
Newton-Raphson, that require an initial estimate. Like incremental search,
the bisection method only finds roots where the function crosses the
x-axis; it cannot find roots where the function is tangent to the x-axis.
Like incremental search, the bisection method can be fooled by
singularities in the function, and it cannot find complex roots of
polynomials.

There is also the inability to detect multiple roots. If you are attempting
to find a solution in an interval where the function is continuous, then
interval bisection will never fail, because you are simply taking points on
the left and right sides of the solution. If there is a discontinuity in
the interval, and the function's value has opposite signs on either side of
the discontinuity, then interval bisection will not yield a useful answer;
but in that case a root does not exist anyway.
2. Newton-Raphson Method:

Newton's method is also known as the Newton-Raphson method. Newton's method
was first published in 1685 in A Treatise of Algebra both Historical and
Practical by John Wallis. In 1690, Joseph Raphson published a simplified
description in Analysis aequationum universalis. Raphson viewed Newton's
method purely as an algebraic method and restricted its use to polynomials,
but he described the method in terms of the successive approximations xn
instead of the more complicated sequence of polynomials used by Newton.
Finally, in 1740, Thomas Simpson described Newton's method as an iterative
method for solving general nonlinear equations using calculus, essentially
giving the description above. In the same publication, Simpson also gave
the generalization to systems of two equations and noted that Newton's
method can be used for solving optimization problems by setting the
gradient to zero.

Methods such as the bisection method and the false position method of
finding roots of a nonlinear equation f(x) = 0 require bracketing of the
root by two guesses. Such methods are called bracketing methods. These
methods are always convergent, since they are based on reducing the
interval between the two guesses so as to zero in on the root of the
equation. In the Newton-Raphson method, the root is not bracketed. In fact,
only one initial guess of the root is needed to get the iterative process
started; the method hence falls in the category of open methods.
Convergence in open methods is not guaranteed, but if the method does
converge, it does so much faster than the bracketing methods. Newton's
method is a way to quickly find a good approximation for the root of a
real-valued function f(x) = 0. It uses the idea that a continuous and
differentiable function can be approximated by a straight line tangent to
it. The Newton-Raphson method, or Newton method, is a powerful technique
for solving equations numerically. Like so much of the differential
calculus, it is based on the simple idea of linear approximation. The
Newton method, properly used, usually homes in on a root with devastating
efficiency. The Newton-Raphson method is used to obtain real roots of
linear or nonlinear equations. It is fast, but it has the following
disadvantages: it requires the derivative of f, and if this is complicated,
the method will tend to fail; and it requires a very accurate initial value
or initial guess.
Formula of the Newton-Raphson method:

Suppose you need to find the root of a continuous, differentiable function
f(x), and you know the root you are looking for is near the point x = x0.
Then Newton's method tells us that a better approximation for the root is

x1 = x0 - f(x0)/f′(x0)

This process may be repeated as many times as necessary to get the desired
accuracy. In general, for any value xn, the next value is given by

xn+1 = xn - f(xn)/f′(xn)
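
As a minimal C++ sketch of this formula (the function f(x) = x^2 - 2 and
the starting guess are assumed for illustration, not taken from the
report):

// Minimal Newton-Raphson sketch. Assumed example: f(x) = x*x - 2.
#include <iostream>
#include <cmath>
using namespace std;

double f(double x)  { return x * x - 2; } // function whose root we seek
double fp(double x) { return 2 * x; }     // its derivative f'(x)

int main()
{
    double x = 1;        // initial guess x0, assumed near the root
    double eps = 1e-10;  // stop when successive iterates agree to eps
    for (int k = 0; k < 50; k++)
    {
        double x1 = x - f(x) / fp(x); // xn+1 = xn - f(xn)/f'(xn)
        if (fabs(x1 - x) < eps) { x = x1; break; }
        x = x1;
    }
    cout << "approximate root: " << x << endl; // ~1.4142136 = sqrt(2)
    return 0;
}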


Advantages and disadvantages of the Newton-Raphson method

Unlike the incremental search and bisection methods, the Newton-Raphson
method is not fooled by singularities. Also, it can identify repeated
roots, since it does not look for changes in the sign of f(x) explicitly.
It can find complex roots of polynomials, assuming you start out with a
complex value for x1. For many problems, Newton-Raphson converges more
quickly than either bisection or incremental search.

Advantages of the Newton-Raphson method:

 One of the fastest convergences to the root.

 Converges on the root quadratically.

 Near a root, the number of significant digits approximately doubles
with each step.

 This leads to the ability of the Newton-Raphson method to "polish" a
root obtained from another convergence technique.

Disadvantages: The Newton-Raphson method only works if you have a
functional representation of f′(x); some functions may be difficult or
impossible to differentiate. The Newton-Raphson method is not guaranteed to
find a root. For example, if the starting point x1 is sufficiently far away
from the root for the function f(x) = tan x, the function's small slope
tends to drive the x guesses further and further away from the root. If the
derivative of the function at any tested point xi is sufficiently close to
zero, the next point xi+1 will be very far away; you may still find the
root, but you will be delayed. If the derivative of the function changes
sign near a tested point, the Newton-Raphson method may oscillate around a
point nowhere near the nearest root. Newton-Raphson fails when you evaluate
the iteration at a location where the function value is nonzero and the
gradient is near zero. The Newton iteration is based on the function
divided by its derivative; thus, having a nonzero function value and a
near-zero derivative value at the current root estimate causes very large
Newton steps, and these large steps often lead to divergence.

3. Regula-Falsi Method:

The regula-falsi method is a numerical method for estimating the roots of
an equation f(x) = 0. A value x replaces the midpoint in the bisection
method and serves as the new approximation of a root of f(x); the objective
is to make convergence faster. For simple roots, Anderson-Björck was the
clear winner in Galdino's numerical tests. For multiple roots, no method
was much faster than bisection; in fact, the only methods that were as fast
as bisection were three new methods introduced by Galdino. The regula-falsi
method has a linear rate of convergence. Advantage of regula-falsi with
respect to Newton's method: regula falsi always converges, and often
rapidly, when Newton's method does not converge. Disadvantage of
regula-falsi with respect to Newton's method: Newton's method converges
faster, under conditions favorable to it.
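
For reference (a standard formula, stated here for completeness rather than
taken from the report): the value that replaces the bisection midpoint is
the x-intercept of the secant line through (a, f(a)) and (b, f(b)),

c = (a·f(b) - b·f(a)) / (f(b) - f(a)),

after which the sign test on f(c) updates the bracket exactly as in the
bisection method.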

4. Method of Iteration:

Jacobi's method, the Gauss-Seidel method, and the relaxation method are
iterative methods, while the Gauss-Jordan method is not, since it does not
involve the repetition of a particular set of steps in sequence, which is
what is known as iteration. Iteration is a way of solving equations; you
would usually use it when you cannot solve the equation any other way. An
iteration formula might look like the following: xn+1 = 2 + 1/xn. A new
criterion for terminating iterations when searching for polynomial zeros
has been described; it does not depend on the number of digits in the
mantissa, and moreover it can be used to determine the accuracy of the
resulting zeros. A major advantage of iterative methods is that roundoff
errors are not given a chance to "accumulate," as they are in Gaussian
elimination and the Gauss-Jordan method, because each iteration essentially
creates a new approximation to the solution. One of the main reasons
iteration is important more generally is that it reveals flaws at an early
stage, when the cost of eliminating mistakes is minimal.
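
As a minimal C++ sketch of the iteration formula quoted above (the starting
guess x0 = 2 is assumed for illustration): a fixed point of x = 2 + 1/x
satisfies x^2 - 2x - 1 = 0, so the iterates settle near the positive root
1 + √2 ≈ 2.4142.

// Minimal fixed-point iteration sketch for xn+1 = 2 + 1/xn.
#include <iostream>
#include <cmath>
using namespace std;

int main()
{
    double x = 2;        // assumed initial guess x0
    double eps = 1e-10;  // stop when successive iterates agree to eps
    for (int k = 0; k < 100; k++)
    {
        double x1 = 2 + 1 / x;  // one application of the iteration formula
        if (fabs(x1 - x) < eps) { x = x1; break; }
        x = x1;
    }
    cout << "fixed point: " << x << endl; // ~2.4142136 = 1 + sqrt(2)
    return 0;
}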
Conclusion:

Hopefully this was helpful. The Newton-Raphson method is extremely useful;
however, be careful! There are some cases where this method does not work
very well. For example, if an equation has multiple roots, your initial
guess must be fairly close to the answer you are looking for, or you could
get a completely different root.

The bisection method is the safest, and it always converges: it is the
simplest of all the methods and is guaranteed to converge for a continuous
function, and it is always possible to find the number of steps required
for a given accuracy. New methods can also be developed from the bisection
method, and the bisection method plays a very crucial role in computer
science research. So we feel that Newton's method converges much faster,
but has limitations based upon the derivative of the function in question,
whereas the bisection method is slow, but has no such limitations and will
always get you to the same answer eventually; the difference between the
two is simply what you do with the information once you have it. In the
method of false position (sometimes called regula falsi), we refine our
range so that [z1, z2] always spans the root, as with bisection. In the
iteration method, the initial approximation to the root is obtained from
the intermediate value theorem: if f(a)·f(b) < 0, then the equation
f(x) = 0 has at least one real root in the interval (a, b). The bisection
method is based on the application of the intermediate value theorem and is
not very fast compared to the other methods.

Newton's method is a very good method: when its convergence condition is
satisfied, it converges faster than almost any other iteration scheme based
on converting the original f(x) = 0 into a fixed-point form.
References

1. Atkinson, Kendall E. (1989). "Chapter 2.1". An Introduction to
Numerical Analysis (2nd ed.).
2. Süli, Endre; Mayers, David F. (2003). "Chapter 1.6". An Introduction
to Numerical Analysis (1st ed.).
3. Cîrnu, M.; Badralexi, I. (2010). "New methods for solving algebraic
equations". Journal of Information Systems and Operations Management,
vol. 4, no. 1, May 2010, pp. 138-141.
4. Cîrnu, M. "Generalized Newton type method" (submitted).
//shara hiwa - Class C - Q.2
// Compares bisection and Newton-Raphson on f(x) = 3 sin(2x-1) - 5 cos(1 - 2 sin x).
#include <iostream>
#include <cmath>
#include <iomanip>
using namespace std;

// f(x) = 3 sin(2x - 1) - 5 cos(1 - 2 sin x)
float f(float x)
{
    return (3 * sin((2 * x) - 1)) - (5 * cos(1 - (2 * sin(x))));
}

// f'(x) = 6 cos(2x - 1) - 10 sin(1 - 2 sin x) cos x
float fp(float x)
{
    return 6 * cos((2 * x) - 1) - 10 * sin(1 - (2 * sin(x))) * cos(x);
}

int main()
{
    float xb, a, b, xn, x1n, err, eps;
    int k = 0, kmax;
    cout << "enter the value of a, b, eps, kmax & xn" << endl;
    cin >> a >> b >> eps >> kmax >> xn;
    err = eps + 1;
    xb = (a + b) / 2;    // first bisection midpoint
    cout << setw(20) << "k" << setw(20) << "x-Bisection"
         << setw(20) << "x-Newton Raphson" << endl;
    while ((k <= kmax) && ((err > eps) || (xb - a > eps)))
    {
        // bisection step: keep the half-interval containing the sign change
        if (f(a) * f(xb) < 0)
            b = xb;
        else
            a = xb;
        xb = (a + b) / 2;
        // Newton-Raphson step: x1 = x - f(x)/f'(x)
        x1n = xn - f(xn) / fp(xn);
        err = fabs(x1n - xn);
        xn = x1n;
        k++;
        cout << setprecision(5) << setw(20) << k << setw(20) << xb
             << setw(20) << xn << endl;
    }
    if (k > kmax)
        cout << "No convergence" << endl;
    return 0;
}
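
As a usage note (the values below were computed for illustration and are an
assumption, not part of the original assignment): f(-1) ≈ 4.06 and
f(0) ≈ -5.23 for the f(x) above, so entering a = -1, b = 0 brackets a root,
and a Newton starting guess such as xn = -0.5, with, say, eps = 0.0001 and
kmax = 50, makes a reasonable test run; both columns should then settle
near the same root, at about x ≈ -0.58.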
