Final Project
Supervised By
Dr. Kamal Hassan
Table of Contents:
Introduction
Nonlinear Equations
    Bisection Method
    Fixed-Point Method
    Applications
Data
Integration
    Applications
    Romberg's Method
    Applications
Conclusion
References
1. Introduction:
When using numerical methods or algorithms and computing with finite
precision, approximation errors due to rounding and truncation are
introduced. It is important to have a notion of their nature and their order.
A newly developed method is worthless without an error analysis. Neither
does it make sense to use methods which introduce errors with
magnitudes larger than the effects to be measured or simulated. On the
other hand, using a method with very high accuracy might be
computationally too expensive to justify the gain in accuracy.
The overall goal of the field of numerical analysis is the design and
analysis of techniques to give approximate but accurate solutions to hard
problems, the variety of which is suggested by the following:
Iteration tasks:
1.1 Bisection Method:
The bisection method is the simplest of all the numerical schemes for solving
transcendental equations. This scheme is based on the intermediate value theorem for
continuous functions.
Consider a transcendental equation f(x) = 0 which has a zero in the interval [a, b]
with f(a) · f(b) < 0.
The bisection scheme computes the zero, say c, by repeatedly halving the interval [a, b]. That
is, starting with
c = (a + b) / 2
the interval [a, b] is replaced either by [c, b] or by [a, c], depending on the sign of
f(a) · f(c). This process is continued until the zero is obtained. Since the zero is obtained
numerically, the value of c may not match all the decimal places of the
analytical solution of f(x) = 0 in the interval [a, b]. Hence any one of the following
mechanisms can be used to stop the bisection iterations:
C1. Fixing a priori the total number of bisection iterations N; the length of the interval,
and hence the maximum error, after N iterations is less than |b − a| / 2^N.
C2. Testing whether |c_i − c_{i−1}| (where i is the iteration number) is less than some
tolerance limit, say epsilon, fixed a priori.
C3. Testing whether |f(c_i)| is less than some tolerance limit alpha, again fixed a
priori.
Given a function f(x) continuous on an interval [a, b] with f(a) · f(b) < 0:
do
    c = (a + b) / 2
    if f(a) · f(c) < 0 then b = c
    else a = c
while (none of the convergence criteria C1, C2 or C3 is satisfied)
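A minimal MATLAB sketch of this loop (not part of the original report). The function handle f, the bracket [a, b] and the tolerance are placeholders chosen for illustration; f here satisfies f(1) = −5 and f(2) = 14, consistent with the worked example below, but the report itself does not state its function explicitly.

f = @(x) x.^3 + 4*x.^2 - 10;   % sample function (assumed), has a root in [1, 2]
a = 1;  b = 2;  tol = 1e-6;    % bracket and tolerance for criterion C2
c_old = a;
while true
    c = (a + b) / 2;           % midpoint of the current bracket
    if f(a) * f(c) < 0
        b = c;                 % root lies in [a, c]
    else
        a = c;                 % root lies in [c, b]
    end
    if abs(c - c_old) < tol    % stopping criterion C2
        break
    end
    c_old = c;
end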
Simple Example:
Solution
f(1) = −5 and f(2) = 14, so there is a root in the interval [1, 2].

a (f < 0)     b (f > 0)      x_n = (a + b)/2    error % = |x_n − x_{n−1}| / x_n × 100    sign of f(x_n)
1             2              1.5                —                                        +ve
1             1.5            1.25               20                                       −ve
1.25          1.5            1.375              9.090909091                              +ve
1.25          1.375          1.3125             4.761904762                              −ve
1.3125        1.375          1.34375            2.325581395                              −ve
1.34375       1.375          1.359375           1.149425287                              −ve
1.359375      1.375          1.3671875          0.571428571                              +ve
1.359375      1.3671875      1.36328125         0.286532951                              −ve
1.36328125    1.3671875      1.365234375        0.143061516                              +ve
1.36328125    1.365234375    1.364257813        0.071581961                              −ve
1.36328125    1.364257813    1.363769532        0.035803777                              −ve
1.3 Newton-Raphson Method:
History:
The name "Newton's method" is derived from Isaac Newton's description of a special
case of the method in De analysi per aequationes numero terminorum infinitas (written
in 1669, published in 1711 by William Jones) and in De metodis fluxionum et serierum
infinitarum (written in 1671, translated and published as Method of Fluxions in 1736 by
John Colson). However, his method differs substantially from the modern method given
above: Newton applies the method only to polynomials.
He does not compute the successive approximations x_n, but computes a sequence of
polynomials, and only at the end arrives at an approximation for the root x. Finally,
Newton views the method as purely algebraic and makes no mention of the connection
with calculus. Newton may have derived his method from a similar but less precise
method by Vieta. The essence of Vieta's method can be found in the work of the Persian
mathematician Sharaf al-Din al-Tusi, while his successor Jamshīd al-Kāshī used a form of
Newton's method to solve x^P - N = 0 to find roots of N (Ypma 1995).
A special case of Newton's method for calculating square roots was known much earlier
and is often called the Babylonian method.
Newton's method was used by 17th-century Japanese mathematician Seki Kōwa to solve
single-variable equations, though the connection with calculus was missing. Newton's
method was first published in 1685 in
A Treatise of Algebra both Historical and Practical by John Wallis. In 1690, Joseph
Raphson published a simplified description in Analysis aequationum universalis. Raphson
again viewed Newton's method purely as an algebraic method and restricted its use to
polynomials, but he describes the method in terms of the successive approximations xn
instead of the more complicated sequence of polynomials used by Newton.
Finally, in 1740, Thomas Simpson described Newton's method as an iterative method for
solving general nonlinear equations using calculus, essentially giving the description
above. In the same publication, Simpson also gives the generalization to systems of two
equations and notes that Newton's method can be used for solving optimization
problems by setting the gradient to zero.
Arthur Cayley in 1879 in The Newton-Fourier imaginary problem was the first to notice
the difficulties in generalizing Newton's method to complex roots of polynomials with
degree greater than 2 and complex initial values.
Newton-Raphson Iteration:
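The update rule itself did not survive in this copy; the standard Newton-Raphson iteration referred to here is

x_{n+1} = x_n − f(x_n) / f′(x_n),   n = 0, 1, 2, . . .,

starting from an initial guess x_0 where f′(x_0) ≠ 0.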
Idea behind Newton’s method
Simple Example:
1.4 Fixed-Point Method:
To apply the method, the equation f(x) = 0 is first rewritten in the equivalent form x = g(x). The iteration
x_{i+1} = g(x_i),   i = 0, 1, 2, . . .,
with some initial guess x_0 is called the fixed-point iterative scheme.
do
    x_{i+1} = g(x_i)
while (the chosen convergence criterion is not satisfied)
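A minimal MATLAB sketch of this loop (not in the original report). The iteration function, initial guess and tolerance are placeholders; here g(x) = sin x with x_0 = 2 is used because it is the example mentioned just below, keeping in mind that this particular iteration approaches 0 only slowly.

g   = @(x) sin(x);   % iteration function (from the example below)
x   = 2;             % initial guess x0
tol = 1e-4;          % tolerance on successive iterates
for i = 1:1000
    x_new = g(x);            % fixed-point update
    if abs(x_new - x) < tol  % stop when successive iterates agree
        break
    end
    x = x_new;
end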
The fixed-point iteration xn+1 = sin xn with initial value x0 = 2 converges to 0
If g(x) and g′(x) are continuous on an interval J about the root s of the
equation x = g(x), and if |g′(x)| < 1 for all x in J, then the fixed-point
iterative process x_{i+1} = g(x_i), i = 0, 1, 2, . . ., will converge to the
root x = s for any initial approximation x_0 belonging to the interval J.
Simple Example:
1.5 Application (Bisection and Newton-Raphson):
A beam is simply supported at the right end and hinged at the other end, and is
subjected to a linearly increasing distributed load from the left support. The
deflection of the beam is given by:
y = 0.2(−x^5 + 2x^3 − x)
where x is measured along the beam from the right end. Determine the point of
maximum deflection with relative error < 10^−3.
Solution:
y = 0.2(−x^5 + 2x^3 − x)
For maximum deflection, dy/dx = 0, which (after dividing out the factor 0.2) gives
f(x) = −5x^4 + 6x^2 − 1 = 0,   with derivative   f′(x) = −20x^3 + 12x.
a) Newton-Raphson Method:
n     x_n            error % = |x_n − x_{n−1}| / x_n × 100
0     0.25           —
1     0.489825581    48.96142433
2     0.446807192    −9.627953767
3     0.447213596    0.090874758
4     0.447213595    −7.50207E-08
5     0.447213595    2.48253E-14
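A short MATLAB sketch reproducing this iteration (not in the original report); it uses f and f′ from the solution above, the starting value x_0 = 0.25 from the table, and the stated relative-error tolerance of 10^−3.

f  = @(x) -5*x.^4 + 6*x.^2 - 1;   % maximum-deflection condition
df = @(x) -20*x.^3 + 12*x;        % its derivative f'(x)
x  = 0.25;                        % initial guess from the table
for n = 1:20
    x_new = x - f(x)/df(x);             % Newton update
    err   = abs((x_new - x)/x_new);     % relative error
    x     = x_new;
    if err < 1e-3, break, end           % stop at relative error < 10^-3
end
% x converges toward 1/sqrt(5) = 0.4472..., as in the table above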
b) Bisection Method:
a (f < 0)       b (f > 0)   x_n = (a + b)/2   error % = |x_n − x_{n−1}| / x_n × 100   sign of f(x_n)
0               0.5         0.25              —                                       −ve
0.25            0.5         0.375             33.33333333                             −ve
0.375           0.5         0.4375            14.28571429                             −ve
0.4375          0.5         0.46875           6.666666667                             −ve
0.46875         0.5         0.484375          3.225806452                             −ve
0.484375        0.5         0.4921875         1.587301587                             −ve
0.4921875       0.5         0.49609375        0.787401575                             −ve
0.49609375      0.5         0.498046875       0.392156863                             −ve
0.498046875     0.5         0.499023438       0.195694716                             −ve
0.499023438     0.5         0.499511719       0.097751761                             −ve
0.499511719     0.5         0.49975586        0.048851953                             −ve
0.49975586      0.5         0.49987793        0.024420062                             −ve
0.49987793      0.5         0.499938965       0.01220849                              −ve
0.499938965     0.5         0.499969483       0.006103873                             −ve
0.499969483     0.5         0.499984742       0.003051893                             −ve
0.499984742     0.5         0.499992371       0.001525923                             −ve
Introduction:
The idea and practice of interpolation has a long history going back to
antiquity and extending to modern times. We will briefly sketch the early
development of the subject in ancient times and the middle ages through
the 17th century, culminating in the work of Newton. We next draw
attention to a little-known paper of Waring predating Lagrange’s
interpolation formula by 16 years. The rest of the paper deals with a few
selected contributions made after Lagrange till recent times. They include
the computationally more attractive barycentric form of Lagrange’s
formula, the theory of error and convergence based on real-variable and
complex-variable analyses, Hermite and Hermite-Fejér as well as
nonpolynomial interpolation. Applications to numerical quadrature and the
solution of ordinary and partial differential equations are briefly indicated.
As seems appropriate for this auspicious occasion, however, we begin with
Lagrange himself.
But, as can be seen from the construction, each time a node xk changes,
all Lagrange basis polynomials have to be recalculated. A better form of
the interpolation polynomial for practical (or computational) purposes is
the barycentric form of the Lagrange interpolation (see below) or Newton
polynomials.
Lagrange and other interpolation at equally spaced points, as in the
example above, yield a polynomial oscillating above and below the true
function. This behaviour tends to grow with the number of points, leading
to a divergence known as Runge's phenomenon; the problem may be
eliminated by choosing interpolation points at Chebyshev nodes.[2]
Proof:
The function L(x) being sought is a polynomial in x of the least degree that
interpolates the given data set; that is, it assumes the value y_j at the
corresponding node x_j for every data point (x_j, y_j). Observe that:
1. each basis polynomial ℓ_j(x) is a product of k linear factors, so it is a polynomial of degree k, and hence L(x) has degree at most k;
2. ℓ_j(x_i) = 1 when i = j and ℓ_j(x_i) = 0 when i ≠ j, so that L(x_i) = y_i at every data point.
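For reference, since the displayed formulas did not survive in this copy, the Lagrange form being discussed is

L(x) = Σ_{j=0}^{k} y_j · ℓ_j(x),   where   ℓ_j(x) = Π_{m=0, m≠j}^{k} (x − x_m) / (x_j − x_m),

and the two observations above are exactly the properties of these basis polynomials used in the proof.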
Main Idea:

Depth z (m):      0      −1     −2    −3     −4     −5     −6     −7     −8     −9    −10
Temperature T:    19.1   19.1   19    18.8   18.7   18.3   18.2   17.6   11.7   9.9   9.1

Figure 1: Temperature vs. depth of a lake.
Using the given data, we see the largest change in temperature is
between z = −8 m and z = −7 m. Determine the value of the temperature at
z = −7.5 m using Newton's divided-difference interpolation.
Solution:
x = [0 -1 -2 -3 -4 -5 -6 -7 -8 -9 -10];                    % depth z (m), from the table above
y = [19.1 19.1 19 18.8 18.7 18.3 18.2 17.6 11.7 9.9 9.1];  % temperature at each depth
for j = 1:length(x)-1                   % first divided differences
    f(j,1) = (y(j+1) - y(j)) / (x(j+1) - x(j));
end
for k = 1:length(x)-2                   % second divided differences
    f(k,2) = (f(k+1,1) - f(k,1)) / (x(k+2) - x(k));
end
for i = 1:length(x)-3                   % third divided differences
    f(i,3) = (f(i+1,2) - f(i,2)) / (x(i+3) - x(i));
end
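One possible way to finish the problem (not from the original report) is to evaluate the third-order Newton polynomial built on the four nodes nearest z = −7.5, i.e. z = −6, −7, −8, −9; the choice of nodes is an assumption. Continuing from the loops above:

% Newton form anchored at node 7 (x(7) = -6), using the divided differences f(7,:)
z = -7.5;
T = y(7) + f(7,1)*(z - x(7)) ...
         + f(7,2)*(z - x(7))*(z - x(8)) ...
         + f(7,3)*(z - x(7))*(z - x(8))*(z - x(9));
% with this choice of nodes, T evaluates to approximately 14.7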
Definition: "Least squares" means that the overall solution minimizes the
sum of the squares of the errors made in the results of every single
equation. Polynomial least squares describes the variance in a prediction
of the dependent variable as a function of the independent variable and
the deviations from the fitted curve.
Often in the real world one expects to find linear relationships between
variables. For example, the force of a spring linearly depends on the
displacement of the spring: y = kx (here y is the force, x is the
displacement of the spring from rest, and k is the spring constant). To test
the proposed relationship, researchers go to the lab and measure what the
force is for various displacements. Thus they assemble data of the form
(x_n, y_n) for n ∈ {1, . . . , N}; here y_n is the observed force in newtons
when the spring is displaced x_n meters.
How can we pick the coefficients that best fit the line to the data?
Why does the blue line appear to us to fit the trend better?
• Consider the distance between the data and points on the line
• Add up the length of all the red and blue vertical lines
• The one line that provides a minimum error is then the ‘best’ straight
line
1) Squaring the error means that positive and negative deviations (data point
above or below the line) contribute the same amount.
Denote the data values as (x, y) and the points on the fitted line as (x, f(x)),
and sum the error over all of the data points.
The 'best' line is the one with minimum total error between line and data points;
this is called the least squares approach, since we minimize the square of the
error.
Take the derivative of the error with respect to a0 and a1, and set each
derivative to zero. Solving for a0 and a1 so that the previous two equations both
equal zero, we re-write these two equations.
Put these into matrix form
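A reconstruction of that matrix form (the displayed equation did not survive extraction) for the straight-line fit y ≈ a0 + a1 x is

[ n         Σ x_i   ] [ a0 ]     [ Σ y_i     ]
[ Σ x_i     Σ x_i²  ] [ a1 ]  =  [ Σ x_i y_i ]

which is simply the pair of normal equations written as a 2×2 linear system.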
Time (hr):   1     2     3     4     5     6     8     10     12     18     24
S (m):       0     0.16  0.59  0.77  0.98  1.1   1.24  1.44   1.56   2.04   2.15
Linear Model Equations:
Solving the normal equations gives the coefficients of the line S = a0 + a1·t:
a0 = ȳ − a1 x̄                                               (Equation 1)
a1 = (n Σ x_i y_i − Σ x_i Σ y_i) / (n Σ x_i² − (Σ x_i)²)     (Equation 2)
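As a cross-check, a short MATLAB sketch (not part of the original report) that fits the linear model to the tabulated data; the S values are as reconstructed from the garbled table above, so treat them as an assumption.

t = [1 2 3 4 5 6 8 10 12 18 24];                          % time (hr)
S = [0 0.16 0.59 0.77 0.98 1.1 1.24 1.44 1.56 2.04 2.15]; % S (m), reconstructed
A = [ones(length(t),1), t(:)];   % design matrix [1  t]
coef = A \ S(:);                 % least-squares solution of the normal equations
a0 = coef(1);  a1 = coef(2);     % intercept and slope of the fitted line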
Here h = (b − a)/n denotes the step size used in the composite integration rules that follow.
If the interval of integration is in some sense "small", then Simpson's rule will
provide an adequate approximation to the exact integral. Problems arise if the
function being integrated is not smooth over the interval: typically, this means
that either the function is highly oscillatory or it lacks derivatives at certain
points. In these cases, Simpson's rule may give very poor results. One common way
of handling this problem is by breaking up
the interval into a number of small subintervals. Simpson's rule is
then applied to each subinterval, with the results being summed to
produce an approximation for the integral over the entire interval. This
sort of approach is termed the composite Simpson's rule.
Here h = (b − a)/n is the width of each subinterval.
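As an illustration of the composite rule just described, a minimal MATLAB sketch (not from the original report); the integrand, the limits and the even number of subintervals n are placeholders chosen here.

f = @(x) exp(-x.^2);            % example integrand (assumed)
a = 0;  b = 2;  n = 8;          % n must be even for Simpson's rule
h = (b - a) / n;
x = a:h:b;                      % n+1 equally spaced nodes
y = f(x);
% composite Simpson's rule: weight 4 on odd interior nodes, 2 on even interior nodes
I = (h/3) * ( y(1) + y(end) + 4*sum(y(2:2:end-1)) + 2*sum(y(3:2:end-2)) );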
Error in the trapezoidal rule
The error of the composite trapezoidal rule is the difference between the
value of the integral and the numerical result:
E = ∫_a^b f(x) dx − T_n.
There exists a number ξ between a and b such that
E = −((b − a) / 12) h² f″(ξ).
Comparison
In general, Simpson's rule converges faster than the trapezoidal rule
for functions which are twice continuously differentiable, though not in every
specific case. However, for various classes of rougher functions (ones with
weaker smoothness conditions), the trapezoidal rule generally converges faster
than Simpson's rule. When using Simpson's rule the number of intervals n has to
be even. Simpson's rule is the better choice when the function is smoothly curved
rather than made up of straight-line segments.
A1 = (12.8/3) [43.77 + 34.54 + 2(29.81 + 31.97 + 34.26 + 23.35)
     + 4(32.22 + 29.54 + 35.43 + 28.68 + 22.76)] = 3889.536 m²
A2 = (16/3) [36.23 + 45.46 + 2(20.48 + 18.55 + 18.95)
     + 4(20.6 + 20.71 + 21.42 + 22.54)] = 2877.067 m²
A3 = 128 × 80 = 10240 m²
AT = A3 − A2 − A1 = 3473.397 m²
Total tiles = 3473.397 / (1.2 × 1.2) = 2412.08
Total cartons = 2412.08 / 6 = 402.01
Solution
S_EBC = (315 + 413.5 + 321.75) / 2 = 525.125
A_EDC = (45/3) [27.2 + 50.6 + 2(42.9 + 47.7) + 4(46.5 + 51.1 + 48.8)]
        + 0.5 × 50.6 × 45 = 13807.5 m²
A_AFE = (35/2) [33.3 + 35.6 + 2(65.3 + 41.1 + 71.4 + 50.6 + 59.6 + 35.2)]
        = 12517.75 m²
In numerical analysis, Romberg's method (Romberg 1955) is used to
estimate the definite integral.
Using the data:

x:      0      16     32     48     64     80     96     112    128
F(x):   36.2   20.6   20.84  20.71  18.55  21.41  18.95  22.45  45.46

1st iteration = (4 I₂ − I₁) / 3 = (4 × 3800.32 − 5226.24) / 3 = 3325.0133
2nd iteration = (16 I₂ − I₁) / 15 = (16 × 2964.48 − 3325.0133) / 15 = 2940.444
3rd iteration = (64 I₂ − I₁) / 63 = (64 × 2383.644 − 2940.444) / 63 = 2374.806
Number of      Step size   Trapezoidal   1st          2nd          3rd
intervals      (h)         estimate      iteration    iteration    iteration
1              128         5226.24
2              64          3800.32       3325.0133
4              32          3173.44       2964.48      2940.444
8              16          2608.32       2419.9466    2383.644     2374.806
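The extrapolation pattern in this table can be reproduced with a short MATLAB sketch (not part of the original report), starting from the trapezoidal column above.

% Romberg extrapolation from a column of trapezoidal estimates,
% where T(i) was computed with step size h/2^(i-1)
T = [5226.24; 3800.32; 3173.44; 2608.32];   % trapezoidal column from the table
m = length(T);
R = zeros(m);  R(:,1) = T;
for k = 2:m                                  % k-th extrapolation column
    for i = k:m
        R(i,k) = (4^(k-1)*R(i,k-1) - R(i-1,k-1)) / (4^(k-1) - 1);
    end
end
% R(2,2) = 3325.0133, R(3,3) = 2940.444, R(4,4) = 2374.806, matching the table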
Biography:
Leonhard Euler
Mathematician (1707–1783)
Leonhard Euler was an 18th-century physicist and scholar who was
responsible for developing many concepts that are an integral part of
modern mathematics.
Synopsis
Born on April 15, 1707, in Basel, Switzerland, Leonhard Euler was one of
math's most pioneering thinkers, establishing a career as an academy
scholar and contributing greatly to the fields of geometry, trigonometry
and calculus, among many others. He released hundreds of articles and
publications during his lifetime, and continued to publish after losing his
sight. He died on September 18, 1783.
Leonhard Euler was born on April 15, 1707, in Basel, Switzerland. Though
originally slated for a career as a rural clergyman, Euler showed an early
aptitude and propensity for mathematics, and thus, after studying with
Johann Bernoulli, he attended the University of Basel and earned his
master's during his teens. Moving to Russia in 1727, Euler served in the
navy before joining the St. Petersburg Academy as a professor of physics
and later heading its mathematics division.
He wed Katharina Gsell in early 1734, with the couple going on to have
many children, though only five lived past their father. The couple was
married for 39 years until Katharina's death, and Euler remarried in his
later years to her half-sister.
In 1736, he published the first of his many books, Mechanica. By the end of the
decade, having suffered from fevers and overexertion due to cartography
work, Euler was severely hampered in the ability to see from his right eye.
Power series definition
For complex z,
e^z = Σ_{n=0}^{∞} z^n / n! = 1 + z + z²/2! + z³/3! + · · ·
Using the ratio test it is possible to show that this power series has an
infinite radius of convergence, and so defines e^z for all complex z.
Limit definition
For complex z,
e^z = lim_{n→∞} (1 + z/n)^n.
Runge–Kutta methods
See the article on numerical methods for ordinary differential equations for
more background and other methods
Here, y is an unknown function (scalar or vector) of time t which we
would like to approximate; we are told that dy/dt, the rate at
which y changes, is a function of t and of y itself, dy/dt = f(t, y). At the
initial time t₀ the corresponding y-value is y₀. The function f and the data
t₀, y₀ are given.
For n = 0, 1, 2, 3, . . . , the approximation is advanced using [1]:
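The update formulas referred to here did not survive extraction; the standard RK4 equations they point to are

y_{n+1} = y_n + (h/6)(k_1 + 2k_2 + 2k_3 + k_4),   t_{n+1} = t_n + h,
k_1 = f(t_n, y_n),
k_2 = f(t_n + h/2, y_n + (h/2) k_1),
k_3 = f(t_n + h/2, y_n + (h/2) k_2),
k_4 = f(t_n + h,   y_n + h k_3).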
In averaging the four increments, greater weight is given to
the increments at the midpoint. If f is independent of y, so
that the differential equation is equivalent to a simple
integral, then RK4 reduces to Simpson's rule.
Use Euler's method and the Runge-Kutta method of order 2 to solve the following problem:
2.9 Application #2 on Euler and Runge-Kutta Methods:
A submarine is at seabed level and will start to float to the surface; its rise is
governed by dy/dx = f(x, y) = y + x with initial condition y = 1 at x = 0.
Use Euler's method and the Runge-Kutta method of order 2 to solve this application.
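A minimal MATLAB sketch of both schemes for this problem (not from the original report); the step size h = 0.1, the end point x = 1 and the Heun variant of RK2 are assumptions, since they are not recoverable from the text.

% Euler and RK2 (Heun) for dy/dx = f(x, y) = y + x, y(0) = 1
f  = @(x, y) y + x;
h  = 0.1;                          % assumed step size
x  = 0:h:1;                        % assumed integration range
yE = zeros(size(x));  yE(1) = 1;   % Euler solution
yR = zeros(size(x));  yR(1) = 1;   % RK2 (Heun) solution
for n = 1:length(x)-1
    % Euler: single slope at the left end of the step
    yE(n+1) = yE(n) + h * f(x(n), yE(n));
    % RK2 (Heun): average of the slopes at both ends of the step
    k1 = f(x(n), yR(n));
    k2 = f(x(n) + h, yR(n) + h*k1);
    yR(n+1) = yR(n) + (h/2) * (k1 + k2);
end
% exact solution for comparison: y = 2*exp(x) - x - 1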
Conclusion
Both the analytical and numerical methods yielded the same results, which
shows that the numerical methods are effective and can be applied in
different ways. We have tried to show several applications and examples for
all of the methods in order to clarify the differences between them and the
advantages and disadvantages of each.
References: