For the first function, all four methods converge to the solution. The iteration counts are 42, 7, 6, and 7 for the bisection, false position, Newton's, and secant methods, respectively. The following figure shows that Newton's method has the fastest convergence, consistent with its quadratic convergence rate. The secant method is a little slower, at a superlinear rate. Bisection is clearly linear. False position also converges very quickly, which is generally the case compared with the bisection method. Note in the code how the relative error is calculated for the false position method.
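For reference, the comparison above can be sketched in Python (the assignment's own code is MATLAB and is not reproduced here). Since the original f(x) is not included in this excerpt, a hypothetical f(x) = x^3 - x - 2, with a root near 1.5214, stands in, so the iteration counts will differ from those quoted above.

```python
# Sketch of the four root finders compared above.
# f(x) = x^3 - x - 2 is a hypothetical stand-in for the assignment's function.

def f(x):
    return x**3 - x - 2

def fprime(x):
    return 3 * x**2 - 1

def bisection(a, b, tol=1e-10, itmax=200):
    for k in range(1, itmax + 1):
        c = 0.5 * (a + b)
        if f(a) * f(c) <= 0:
            b = c
        else:
            a = c
        if b - a < tol:          # interval width as the error bound
            return c, k
    return c, itmax

def false_position(a, b, tol=1e-10, itmax=200):
    c_old = a
    for k in range(1, itmax + 1):
        c = b - f(b) * (b - a) / (f(b) - f(a))
        # relative error from successive estimates, since the bracket
        # width does not shrink to zero for false position
        if abs(c - c_old) < tol * abs(c):
            return c, k
        if f(a) * f(c) <= 0:
            b = c
        else:
            a = c
        c_old = c
    return c, itmax

def newton(x, tol=1e-10, itmax=100):
    for k in range(1, itmax + 1):
        dx = f(x) / fprime(x)
        x -= dx
        if abs(dx) < tol * abs(x):
            return x, k
    return x, itmax

def secant(x0, x1, tol=1e-10, itmax=100):
    for k in range(1, itmax + 1):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs(x2 - x1) < tol * abs(x2):
            return x2, k
        x0, x1 = x1, x2
    return x1, itmax

for name, (root, its) in {
    "bisection": bisection(1, 2),
    "false position": false_position(1, 2),
    "Newton": newton(1.5),
    "secant": secant(1, 2),
}.items():
    print(f"{name:15s} root={root:.10f} iterations={its}")
```

As in the figure, bisection needs far more iterations than Newton's method for the same tolerance.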
The second function is the square of the first one, so it has the same root but with multiplicity 2. Because the function values are all positive around the root, the closed-domain (bracketing) methods do not work. The figure below shows that both Newton's and the secant method are only linearly convergent. Since we know the multiplicity, we can also easily use the modified Newton's method. The red line in the figure shows that the modified Newton's method recovers the quadratic convergence rate (the modified Newton's method is not required).
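A minimal Python sketch of this effect, again using the hypothetical stand-in F(x) = (x^3 - x - 2)^2 with a double root near 1.5214 (the assignment's actual function is not reproduced in this excerpt): plain Newton converges only linearly on the double root, while the modified step with known multiplicity m = 2 restores quadratic convergence.

```python
# Newton vs. modified Newton on a double root.
# Hypothetical stand-in: F(x) = (x^3 - x - 2)^2.

def F(x):
    return (x**3 - x - 2) ** 2

def Fp(x):
    return 2 * (x**3 - x - 2) * (3 * x**2 - 1)

def newton(x, m=1, tol=1e-12, itmax=100):
    """m is the known root multiplicity; m=1 is plain Newton."""
    for k in range(1, itmax + 1):
        fp = Fp(x)
        if fp == 0:              # landed exactly on the root
            return x, k
        dx = m * F(x) / fp       # modified Newton step: x - m*F/F'
        x -= dx
        if abs(dx) < tol:
            return x, k
    return x, itmax

root_plain, it_plain = newton(1.8)        # linear convergence
root_mod, it_mod = newton(1.8, m=2)       # quadratic convergence
print(it_plain, it_mod)
```

The modified version needs only a handful of iterations where plain Newton needs dozens.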
EP501 Numerical Methods Homework 2 Solution Due Sep 27, 2013 11:59 pm
The attached MATLAB code HW2 p2.m gives the following output for each g function and two different initial values. The initial values are chosen to be close to the roots, identified from the plot of the function, which is also generated by the MATLAB code.
The following is the output with initial value x0 = 1.1.
g1(x) gives a convergent iteration. The convergence speed appears to be linear. The derivative is calculated both at the initial value and at the root; it shows that |g1′(x)| < 1, which is consistent with the convergence criterion.
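The iteration and the criterion check can be sketched in Python. The form g1(x) = (3 + x − 2x²)^(1/4) is inferred from equation (3) later in this solution; the derivative at the fixed point is estimated by a central difference.

```python
# Fixed-point iteration with g1(x) = (3 + x - 2x^2)^(1/4),
# as inferred from equation (3) of this solution.

def g1(x):
    return (3 + x - 2 * x**2) ** 0.25

x = 1.1
for k in range(1, 101):
    x_new = g1(x)
    if abs(x_new - x) < 1e-10:
        break
    x = x_new
print(f"root ~ {x_new:.6f} after {k} iterations")

# convergence criterion: |g1'(alpha)| < 1 (central-difference estimate)
h = 1e-6
g1p = (g1(x_new + h) - g1(x_new - h)) / (2 * h)
print(f"|g1'(alpha)| = {abs(g1p):.4f}")
```

The iteration converges to alpha1 = 1.124123, and the derivative magnitude at the fixed point is below 1, as stated above.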
g2(x) does not give a convergent iteration; the maximum of 50 iteration steps is reached. The derivative at the initial point is |g2′(x0)| = 0.9416 < 1. However, when using the root α1 = 1.124123 found with g1(x), we get |g2′(α1)| = 1.0413 > 1. This explains why g2(x) does not give a convergent iteration.
For g3(x), the convergence rate appears to be linear, and both |g3′(x0)| and |g3′(α1)| are less than 1. They are also smaller than those for g1(x), which explains why g3(x) gives a faster iteration than g1(x).
g4′(α1) is very close to zero at the root. Since the g(x) for Newton's method also has zero derivative at the root, this suggests that g4 is related to Newton's method. For Newton's method, we have

g(x) = x − f(x)/f′(x), so that g′(x) = f(x)f″(x)/[f′(x)]², which vanishes at a simple root.

This confirms that g4(x) actually represents Newton's method, and explains why its convergence is the fastest among the four g functions.
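This can be checked numerically. Assuming f(x) = x⁴ + 2x² − x − 3, which is what x = g1(x) rearranges to for the g1 of equation (3), the derivative of Newton's g at the root is essentially zero:

```python
# For Newton's method g(x) = x - f(x)/f'(x), g'(alpha) = 0 at a simple
# root. f(x) = x^4 + 2x^2 - x - 3 is inferred from the g1 of equation (3).

def f(x):
    return x**4 + 2 * x**2 - x - 3

def fp(x):
    return 4 * x**3 + 4 * x - 1

def g(x):
    return x - f(x) / fp(x)

# refine the root with a few Newton steps starting from alpha1
alpha = 1.124123
for _ in range(5):
    alpha = g(alpha)

h = 1e-5
gp = (g(alpha + h) - g(alpha - h)) / (2 * h)   # central-difference g'(alpha)
print(f"alpha = {alpha:.6f}, |g'(alpha)| = {abs(gp):.2e}")
```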
The following is the output with initial value x0 = −0.9.
----g4 x0=-0.900000 g4d(x0)=0.042770
We start with g4 first because we know it is the best method. Indeed it converges quickly and finds the negative root α2 = −0.8761.
g1(x) gives a linearly convergent iteration, but it still reaches the positive root. Examining the figure of g, we can see that g1 has no intersection with y = x near the negative root. In fact, the expression of g1(x) means that it can only be positive. In other words, x = g1(x) is not equivalent to f(x) = 0 when x < 0. We may consider the negative of g1,
g1∗(x) = −g1(x) = −(3 + x − 2x²)^(1/4), (3)
which is equivalent to f(x) = 0 when x < 0. However, the MATLAB output above shows |g1∗′(α2)| = 1.67 > 1, so this modified g1∗(x) still will not give a convergent iteration to the negative root.
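A quick numerical check of this claim, using the g1∗ of equation (3) and α2 = −0.8761 as given above:

```python
# Checking |g1*'(alpha2)| > 1 numerically, with
# g1*(x) = -(3 + x - 2x^2)^(1/4) from equation (3).

def g1s(x):
    return -((3 + x - 2 * x**2) ** 0.25)

alpha2 = -0.8761
h = 1e-6
deriv = (g1s(alpha2 + h) - g1s(alpha2 - h)) / (2 * h)
print(f"|g1*'(alpha2)| = {abs(deriv):.2f}")

# a few iterates starting near alpha2 drift away from it
x = -0.9
for _ in range(3):
    x = g1s(x)
print(f"after 3 iterations: x = {x:.4f}")
```

The derivative magnitude exceeds 1, and the iterates visibly move away from α2 rather than toward it.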
Similar to g1, g2 has no intersection with y = x near the negative root, so the iteration cannot converge to α2. Also, because the initial condition is now too far away from α1, the iteration does not converge to α1 either. Similarly, the function g2∗(x) = −g2(x) will not give a convergent iteration because |g2∗′(α2)| = 1.0528 > 1.
Again, g3 has no intersection with y = x when x < 0, so it does not converge to the negative root. However, since |g3∗′(α2)| = 0.4836 < 1, if one uses g3∗(x) = −g3(x) for the iteration, it will converge to the negative root α2.
Even though in this code the error threshold is set to eps, the smallest possible, the function values at the first and last few roots are relatively much larger than those in the middle. This is because the derivative of the function at these roots is very large: a small uncertainty on the order of eps in the root location causes a large deviation from zero in the function value.
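The mechanism is just |Δf| ≈ |f′(α)|·|Δx|. A hypothetical one-line example (not the assignment's function) makes the scaling concrete:

```python
# A root uncertainty of order eps yields a residual of order |f'| * eps.
# Hypothetical steep function: root at x = 1, slope 1e9.
import sys

eps = sys.float_info.epsilon        # machine epsilon, ~2.2e-16

def f(x):
    return 1e9 * (x - 1.0)

alpha_perturbed = 1.0 + 10 * eps    # root known only to ~10*eps
print(abs(f(alpha_perturbed)))      # ~1e9 * 2.2e-15, far from zero
```

So even a root accurate to machine precision produces a function value many orders of magnitude above eps when the slope there is steep.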