
Id: 202020031

Methods for solving nonlinear equations:


1- Bisection Method
2- False Position Method
3- Newton Raphson Method
4- Iterative method:
In computational mathematics, an iterative method is a mathematical procedure that
uses an initial value to generate a sequence of improving approximate solutions for a
class of problems, in which the n-th approximation is derived from the previous ones. A
specific implementation of an iterative method, including the termination criteria, is an
algorithm of the iterative method. An iterative method is called convergent if the
corresponding sequence converges for given initial approximations.

If an equation can be put into the form f(x) = x, and a solution x is an attractive fixed
point of the function f, then one may begin with a point x_1 in the basin of attraction of x,
let x_{n+1} = f(x_n) for n ≥ 1, and the sequence {x_n}_{n ≥ 1} will converge to the solution x.
Here x_n is the n-th approximation (iterate) of x and x_{n+1} is the next, or (n+1)-st, iterate.
Alternately, superscripts in parentheses are often used in numerical methods, so as not to
interfere with subscripts that carry other meanings (for example, x^{(n+1)} = f(x^{(n)})). If
the function f is continuously differentiable, a sufficient condition for convergence is
that the spectral radius of the derivative is strictly bounded by one in a neighborhood of
the fixed point. If this condition holds at the fixed point, then a sufficiently small
neighborhood (basin of attraction) must exist.
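
A minimal Python sketch of such an iteration (an illustrative addition, not part of the
original notes): it applies x_{n+1} = f(x_n) to f(x) = cos(x), whose fixed point near 0.739
is attractive since |d/dx cos(x)| = |sin(x)| < 1 there, and it uses a tolerance plus an
iteration cap as the termination criterion mentioned above.

    import math

    def iterate(f, x0, tol=1e-10, max_iter=100):
        """Generic iterative method x_{n+1} = f(x_n), stopped once two
        successive iterates differ by less than tol (termination criterion)."""
        x = x0
        for n in range(max_iter):
            x_next = f(x)
            if abs(x_next - x) < tol:
                return x_next, n + 1
            x = x_next
        return x, max_iter

    # x = cos(x) has an attractive fixed point near 0.7390851,
    # because |sin(x)| < 1 in a neighborhood of it.
    approx, steps = iterate(math.cos, 1.0)
    print(approx, steps)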
5- Fixed Point Iteration:
In numerical analysis, fixed-point iteration is a method of computing fixed points of a
function.
More specifically, given a function f defined on the real numbers with real values and
given a point x_0 in the domain of f, the fixed-point iteration is
x_{n+1} = f(x_n),   n = 0, 1, 2, ...,
which gives rise to the sequence x_0, x_1, x_2, ... that is hoped to converge to a fixed
point of f.
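
For instance (an illustrative choice, not taken from the notes), f(x) = (x + 2/x)/2 has the
fixed point sqrt(2), and the first few terms of the sequence can be printed with the short
Python sketch below.

    # Fixed-point iteration x_{n+1} = f(x_n) for f(x) = (x + 2/x) / 2,
    # whose fixed point is sqrt(2) ≈ 1.41421356.
    f = lambda x: 0.5 * (x + 2.0 / x)

    x = 1.0                      # x_0
    for n in range(6):
        x = f(x)                 # x_{n+1} = f(x_n)
        print(n + 1, x)          # the iterates approach 1.41421356...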
6- Roots of polynomials:
Roots of a polynomial are the values of the unknown variable at which the polynomial
evaluates to zero, so finding them amounts to solving the polynomial equation. An
expression of the form
a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0,
where each power of the variable x is accompanied by a constant coefficient, is called a
polynomial of degree n in the variable x. Each part of the expression separated by an
addition or subtraction sign is known as a term, and the degree of the polynomial is the
highest power of the variable that appears.
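
As a small illustration (assuming NumPy is available; the cubic below is a made-up example,
not from the notes), the roots can be computed numerically and substituted back to confirm
that the polynomial evaluates to approximately zero at each of them.

    import numpy as np

    # Coefficients of p(x) = x^3 - 6x^2 + 11x - 6, highest degree first.
    coeffs = [1, -6, 11, -6]

    roots = np.roots(coeffs)          # numerical roots: approx. 3, 2, 1
    print(roots)

    # Substituting the roots back evaluates the polynomial to ~0.
    print(np.polyval(coeffs, roots))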

7- Secant method:
In numerical analysis, the secant method is a root-finding algorithm that uses a
succession of roots of secant lines to better approximate a root of a function f. The
secant method can be thought of as a finite-difference approximation of Newton's
method. However, the secant method predates Newton's method by over 3000 years.
The secant method is defined by the recurrence relation
x_{n+1} = x_n - f(x_n) * (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})).

As can be seen from the recurrence relation, the secant method requires two initial
values, x_0 and x_1, which should ideally be chosen to lie close to the root.
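
A minimal Python sketch of this recurrence (an illustrative addition; the test function
f(x) = x^2 - 2 and the two starting values are assumptions, not from the notes):

    def secant(f, x0, x1, tol=1e-10, max_iter=50):
        """Secant method:
        x_{n+1} = x_n - f(x_n) * (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))."""
        f0, f1 = f(x0), f(x1)
        for _ in range(max_iter):
            if f1 == f0:                      # secant line is flat: stop
                break
            x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
            if abs(x2 - x1) < tol:
                return x2
            x0, f0 = x1, f1
            x1, f1 = x2, f(x2)
        return x1

    # Example: root of f(x) = x^2 - 2 from the two initial values 1 and 2.
    print(secant(lambda x: x * x - 2.0, 1.0, 2.0))   # approx. 1.41421356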

8- Homotopy and Continuation Methods:


Homotopy, or continuation, methods for nonlinear systems embed the problem to be
solved within a collection of problems. Specifically, to solve a problem of the form
F(x) = 0,
which has the unknown solution x∗, we consider a family of problems described using a
parameter λ that assumes values in [0, 1]. A problem with a known solution x(0)
corresponds to the situation when λ = 0, and the problem with the unknown solution
x(1) ≡ x∗ corresponds to λ = 1.
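
A minimal sketch for a single equation (the embedding, the Euler integration, and the test
equation x^3 + 2x - 5 = 0 are all illustrative assumptions, not a prescription from the
referenced texts): embed F(x) = 0 in G(λ, x) = F(x) + (λ - 1) F(x(0)), so that G = 0 has the
known solution x(0) at λ = 0 and the desired root of F at λ = 1; differentiating
G(λ, x(λ)) = 0 gives dx/dλ = -F(x(0)) / F'(x), which is integrated below with Euler steps.

    def continuation(F, dF, x0, steps=200):
        """Continuation sketch for one equation F(x) = 0 via the embedding
        G(lam, x) = F(x) + (lam - 1) * F(x0):
        G = 0 is solved by x0 at lam = 0 and by the root of F at lam = 1.
        Differentiating G(lam, x(lam)) = 0 gives dx/dlam = -F(x0) / F'(x),
        integrated here with plain Euler steps from lam = 0 to lam = 1."""
        x = x0
        Fx0 = F(x0)
        h = 1.0 / steps
        for _ in range(steps):
            x += h * (-Fx0 / dF(x))       # Euler step along the solution path
        return x

    # Illustrative equation: F(x) = x^3 + 2x - 5, with a root near 1.328.
    F = lambda x: x**3 + 2.0 * x - 5.0
    dF = lambda x: 3.0 * x**2 + 2.0
    x_end = continuation(F, dF, x0=0.0)
    print(x_end, F(x_end))                # x_end close to the root, F(x_end) near 0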

9- Steepest Descent Techniques:


The advantage of the Newton and quasi-Newton methods for solving systems of
nonlinear equations is their speed of convergence once a sufficiently accurate
approximation is known.
A weakness of these methods is that an accurate initial approximation to the solution is
needed to ensure convergence.
The Steepest Descent method considered in this section converges only linearly to the
solution, but it will usually converge even for poor initial approximations.
As a consequence, this method is used to find sufficiently accurate starting
approximations for the Newton-based techniques in the same way the Bisection
method is used for a single equation.
The method of Steepest Descent determines a local minimum for a multivariable
function of the form g : R^n → R. The method is valuable quite apart from its application
as a starting method for solving nonlinear systems.
The connection between minimizing a function from R^n to R and solving a system of
nonlinear equations is due to the fact that a system of the form
f_1(x_1, ..., x_n) = 0,  f_2(x_1, ..., x_n) = 0,  ...,  f_n(x_1, ..., x_n) = 0
has a solution at x = (x_1, ..., x_n) precisely when the function g defined by
g(x_1, ..., x_n) = [f_1(x_1, ..., x_n)]^2 + [f_2(x_1, ..., x_n)]^2 + ... + [f_n(x_1, ..., x_n)]^2
has the minimal value 0.
References:
1- Richard L. Burden and J. Douglas Faires, Numerical Analysis.
2- www.mathworld.com
