
Numerical Methods

Numerical methods are mathematical methods that allow us to obtain approximate solutions to various kinds of mathematical problems. We use the simplest numerical methods everywhere; for instance, when extracting a square root on a piece of paper.
There are problems that could not be solved without sufficiently complex numerical
methods. A classic example is the discovery of Neptune from the anomalies in the motion of
Uranus. In modern physics, there are many similar problems. Moreover, it is often necessary to
perform a huge number of computations in a short time, otherwise the answer will not be useful.
For example, the daily weather forecast has to be calculated in a few hours, a correction to the
trajectory of a rocket must be calculated in a few minutes, and the operating mode of a rolling mill
must be corrected within seconds. This is impossible without powerful computers that perform
thousands or even millions of operations per second.
Digital computers are able to perform only arithmetic and logical operations. Therefore,
in addition to developing a mathematical model, it is also necessary to develop an algorithm that
reduces all calculations to a sequence of arithmetic and logical operations. The chosen algorithm
must take into account the speed and memory capacity of the computer. That's why people who employ numerical
methods for solving problems have to worry about the following issues: the rate of convergence
(how long it takes for the method to find the answer), the accuracy (or even validity) of the
answer, and the completeness of the response (whether other solutions exist in addition to the one
found).
Numerical methods are commonly used for computing values of functions, interpolation,
extrapolation, regression, optimization, and other tasks. They allow us to solve differential
equations and systems of them, and to evaluate integrals.
One of the elementary methods for solving ordinary differential equations is the Euler
method. This is the most straightforward way to integrate a differential equation. Consider the
first-order differential equation dx/dt = f(t, x) with the initial condition x(0) = x_0. Define t_n = nΔt and
x_n = x(t_n). A Taylor series expansion of x_{n+1} results in
x_{n+1} = x(t_n + Δt) = x(t_n) + Δt x′(t_n) + O(Δt²) = x_n + Δt f(t_n, x_n) + O(Δt²).
The Euler method is therefore written as
x_{n+1} = x_n + Δt f(t_n, x_n).
We say that the Euler method steps forward in time using a time step Δt, starting from the initial
value x_0 = x(0). The local error of the Euler method is O(Δt²). The global error, however,
incurred when integrating to a time T, is a factor of 1/Δt larger and is given by O(Δt). It is
therefore customary to call the Euler method a first-order method.
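As a minimal sketch, the Euler step x_{n+1} = x_n + Δt f(t_n, x_n) can be written in Python. The function name euler and the test equation dx/dt = x are illustrative choices, not part of the text above.

```python
import math

def euler(f, x0, t0, T, dt):
    """Integrate dx/dt = f(t, x) from t0 to T with step dt using the Euler method."""
    t, x = t0, x0
    while t < T - 1e-12:          # small tolerance guards against float round-off
        x = x + dt * f(t, x)      # x_{n+1} = x_n + dt * f(t_n, x_n)
        t += dt
    return x

# Example: dx/dt = x with x(0) = 1, whose exact solution is x(t) = e^t.
approx = euler(lambda t, x: x, x0=1.0, t0=0.0, T=1.0, dt=0.001)
print(approx, math.e)  # the approximation approaches e as dt shrinks
```

Halving Δt roughly halves the global error at t = T, which is exactly the first-order O(Δt) behavior described above.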
The other problem to solve is finding roots of an equation f(x) = 0 when an explicit
analytical solution is impossible. It can be solved, for example, with the help of Newton's method.
This is the fastest method, but it requires analytical computation of the derivative of f(x). Also, the
method may not always converge to the desired root.
We can derive Newton's method graphically, or by a Taylor series. We want to construct
a sequence x_0, x_1, x_2, . . . that converges to the root x = r. Consider the member x_{n+1} of this
sequence, and Taylor series expand f(x_{n+1}) about the point x_n. We have
f(x_{n+1}) = f(x_n) + (x_{n+1} − x_n) f′(x_n) + . . . .
To determine x_{n+1}, we drop the higher-order terms in the Taylor series and assume f(x_{n+1}) = 0.
Solving for x_{n+1}, we have
x_{n+1} = x_n − f(x_n)/f′(x_n).
Starting Newton's method requires a guess for x_0, hopefully close to the root x = r.
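The iteration x_{n+1} = x_n − f(x_n)/f′(x_n) can be sketched in a few lines of Python. The function name newton and the stopping tolerance are illustrative assumptions; the example finds √2 as the root of f(x) = x² − 2.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Find a root of f via Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:         # stop once the residual is small enough
            return x
        x = x - fx / fprime(x)    # the Newton step derived above
    return x                       # may not have converged; see caveat in the text

# Root of f(x) = x^2 - 2, i.e. sqrt(2), starting from a guess close to the root
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)  # ≈ 1.41421356...
```

With a good starting guess the iteration converges in a handful of steps, while a poor guess (e.g. one where f′(x) ≈ 0) illustrates the non-convergence caveat mentioned above.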
