class: C
College of Engineering
Salahaddin University-Erbil
Academic Year 2019-2020
ABSTRACT:
This report focuses on the numerical methods involved in solving systems of nonlinear equations. First, we study the bisection method for solving single-variable nonlinear equations; second, we examine Newton's method, whose extension to systems involves the Jacobian matrix. Solving nonlinear equations and systems is one of the most important problems of applied mathematics, from both a theoretical and a practical point of view, and it arises in many branches of science and engineering, such as physics, computing, astronomy, and finance. A look at the bibliography and the list of great mathematicians who have worked on this problem reveals a high level of contemporary interest in it. Although the rapid development of digital computers has led to the effective implementation of many numerical methods, in practice it is necessary to analyze several issues: computational efficiency measured by the time spent by the processor, the design of iterative methods with rapid convergence to the desired solution, rounding-error control, bounds on the error of the approximate solution obtained, initial conditions that guarantee safe convergence, and so on. These problems are the starting point for this work. The overall objective of this work is to design efficient iterative methods to solve an equation or a system of nonlinear equations. The best-known scheme for solving nonlinear equations is Newton's method, whose generalization to systems of equations was proposed by Ostrowski. In recent years, as an extensive literature shows, the construction of iterative methods, both one-point and multipoint, has increased considerably, with the aim of achieving optimal order of convergence and better computational efficiency. In general, in this work we have used the technique of weight functions to design methods for solving equations and systems, both derivative-free methods and methods that use derivatives in their iterative expression.
TABLE OF CONTENTS:
1. Abstract
2. Introduction
5. Solve and calculate the actual error in the bisection method
11. Method of iteration
12. Conclusion
13. References
INTRODUCTION:
A nonlinear system of equations is a system in which at least one equation is not linear in the unknowns. In other words, the equation(s) to be solved cannot be written as a linear combination of the unknown variables or functions that appear in them. A system can be nonlinear even if some linear functions appear among its equations.
Nonlinear equations, as the name says, involve functions that are not linear, for example quadratic, circle, and exponential functions. In this section, we will see how to graph nonlinear equations and then determine whether they are functions or not. The easiest way to check whether an equation defines a function, whether it is linear or nonlinear, is the vertical line test.
When a system combines a linear equation with a nonlinear one, you can solve for one variable in the linear equation and substitute that expression into the nonlinear equation, because solving for a variable in a linear equation is straightforward. Any time you can isolate one variable easily, you can substitute its expression into the other equation and solve for the remaining variable.
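As a minimal sketch of this substitution strategy, consider a hypothetical system (not from the report) with the linear equation y = x + 1 and the nonlinear equation x² + y² = 13; substituting the linear expression reduces the system to a single quadratic in x:

```python
import math

# Hypothetical example system (chosen for illustration):
#   linear:    y = x + 1
#   nonlinear: x**2 + y**2 = 13
# Substituting y = x + 1 gives x**2 + (x + 1)**2 = 13,
# which simplifies to x**2 + x - 6 = 0.
a, b, c = 1.0, 1.0, -6.0
disc = math.sqrt(b * b - 4 * a * c)            # discriminant of the quadratic
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
solutions = [(x, x + 1) for x in roots]        # back-substitute into the linear equation
print(solutions)                               # [(2.0, 3.0), (-3.0, -2.0)]
```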
Numerical algorithms are at least as old as the Egyptian Rhind papyrus (c.
1650 BC), which describes a root-finding method for solving a simple
equation. Ancient Greek mathematicians made many further advancements
in numerical methods. In particular, Eudoxus of Cnidus (c. 400–350 BC)
created and Archimedes (c. 285–212/211 BC) perfected the method of
exhaustion for calculating lengths, areas, and volumes of geometric
figures. When used as a method to find approximations, it is in much the
spirit of modern numerical integration; and it was an important precursor
to the development of calculus by Isaac Newton (1642–1727) and
Gottfried Leibniz (1646–1716).
The main numerical methods for solving nonlinear equations include:
1. Bisection
2. Newton-Raphson method
3. Regula-Falsi
4. Method of iteration
5. Some other methods
All of these methods are important, but we focus on the bisection method and Newton's method, because these two are better known and more widely used than the others.
1. Bisection method:
This method narrows the interval by repeatedly halving it between points where the function takes positive and negative values. It is a simple method, but relatively slow. The bisection method is also known as the interval halving method, binary search method, or dichotomy method. It is based on Bolzano's theorem for continuous functions. Theorem (Bolzano): If a function f(x) is continuous on an interval [a, b] and f(a)·f(b) < 0, then a value c ∈ (a, b) exists for which f(c) = 0. The main way bisection fails is if the root is a double root; i.e., the function keeps the same sign except for reaching zero at one point. The bisection method is an iterative algorithm used to find roots of continuous functions. Its main advantages are that it is guaranteed to converge if the initial interval is chosen appropriately, and that it is relatively simple to implement.
At each step, let c = (a + b)/2 be the midpoint of the current interval:
- if f(c) has the same sign as f(a), we replace a with c and keep the same value for b;
- if f(c) has the same sign as f(b), we replace b with c and keep the same value for a.
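The update rule above can be sketched in Python; this is a minimal illustration, and the test function, interval, and tolerance are choices made for the example:

```python
def bisect(f, a, b, tol=1e-8, max_iter=100):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2.0                  # midpoint of the current interval
        if f(c) == 0.0 or (b - a) / 2.0 < tol:
            return c
        if f(a) * f(c) > 0:                # f(c) has the same sign as f(a): replace a
            a = c
        else:                              # f(c) has the same sign as f(b): replace b
            b = c
    return (a + b) / 2.0

# Example: root of x**2 - 2 in [1, 2], i.e. sqrt(2)
root = bisect(lambda x: x ** 2 - 2, 1.0, 2.0)
print(root)
```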
If you knew the actual error, you would already have the true root and there would be no reason to use the method. In any case, you can bound the error very easily. After the first iteration, the error is at worst half the length of the interval; after the next iteration, at worst half of that, and so on. So, given an interval [a, b], the error at the n-th iteration is at most (b − a)/2^n.
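This bound can be inverted to predict in advance how many iterations a given tolerance requires; a small sketch, where the helper name and the inputs are illustrative assumptions:

```python
import math

def iterations_needed(a, b, tol):
    """Smallest n such that (b - a) / 2**n <= tol."""
    return math.ceil(math.log2((b - a) / tol))

# Example: halving [0, 1] down to a width of 1e-3 takes 10 steps,
# since 1/2**10 ≈ 0.00098 <= 0.001.
print(iterations_needed(0.0, 1.0, 1e-3))
```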
Advantages and disadvantages of the bisection method
Advantages:
The bisection method is always convergent: since the method brackets the root, it is guaranteed to converge. As iterations are conducted, the interval gets halved, so one can bound the error in the solution of the equation. Since the bisection method discards 50% of the current interval at each step, it brackets the root much more quickly than the incremental search method does. To compare: on average, assuming a root is somewhere in the interval between 0 and 1, incremental search takes 6–7 function evaluations to estimate the root to within 0.1 accuracy; the same 6–7 function evaluations using bisection estimate the root to within 1/2^4 = 0.0625 to 1/2^5 ≈ 0.031 accuracy.
Disadvantages:
The biggest disadvantage is the slow convergence rate. Typically bisection is used to obtain an initial estimate for much faster methods, such as Newton-Raphson, that require one. Like incremental search, the bisection method only finds roots where the function crosses the x-axis; it cannot find roots where the function is tangent to the x-axis. Like incremental search, the bisection method can be fooled by singularities in the function, and it cannot find complex roots of polynomials.
There is also the inability to detect multiple roots. If you are attempting to find a solution in an interval where the function is continuous and changes sign, interval bisection will not fail, because you are simply taking points on the left and right sides of the solution. If there is a discontinuity in the interval, and the function's values have opposite signs on either side of it, interval bisection will not yield a useful answer; but in that case a root need not exist anyway.
2. Newton-Raphson method:
Starting from an initial guess x0, Newton's method replaces the current estimate with the point where the tangent line to f at that estimate crosses the x-axis. This process may be repeated as many times as necessary to reach the desired accuracy. In general, for any value xn, the next value is given by
x(n+1) = xn − f(xn)/f′(xn).
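The iteration can be sketched as follows; the stopping rule, test function, and starting point are choices made for this illustration:

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Iterate x(n+1) = xn - f(xn)/f'(xn) until successive iterates agree."""
    x = x0
    for _ in range(max_iter):
        dfx = fprime(x)
        if dfx == 0:
            raise ZeroDivisionError("derivative vanished; Newton's method fails")
        x_new = x - f(x) / dfx             # the Newton-Raphson update
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: root of x**2 - 2 starting from x0 = 1, i.e. sqrt(2)
root = newton(lambda x: x ** 2 - 2, lambda x: 2 * x, 1.0)
print(root)
```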
3. Regula-Falsi method:
The Regula-Falsi (false position) method is a numerical method for estimating the roots of a function f(x). The x-intercept of the secant line through the interval endpoints replaces the midpoint used in the Bisection Method and serves as the new approximation of a root of f(x); the objective is to make convergence faster. For simple roots, Anderson–Björck was the clear winner in Galdino's numerical tests. For multiple roots, no method was much faster than bisection; in fact, the only methods that were as fast as bisection were three new methods introduced by Galdino. The Regula-Falsi method has a linear rate of convergence. Advantage of Regula-Falsi with respect to Newton's method: Regula-Falsi always converges, and often rapidly, even when Newton's method does not converge. Disadvantage of Regula-Falsi with respect to Newton's method: Newton's method converges faster, under conditions favorable to it.
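A sketch of the false-position update, where the secant x-intercept replaces the bisection midpoint; the test function and interval here are illustrative assumptions:

```python
def regula_falsi(f, a, b, tol=1e-10, max_iter=200):
    """False position: keep a sign-changing bracket [a, b], but use the
    x-intercept of the secant through (a, f(a)) and (b, f(b)) as the estimate."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)  # secant x-intercept
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:                    # root lies in [a, c]
            b, fb = c, fc
        else:                              # root lies in [c, b]
            a, fa = c, fc
    return c

# Example: root of x**3 - x - 2 in [1, 2]
root = regula_falsi(lambda x: x ** 3 - x - 2, 1.0, 2.0)
print(root)
```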
4. Method of iteration: