Solving System of Nonlinear Equations by using Newton Method and Quasi-Newton Method

Technical Report · January 2019
DOI: 10.13140/RG.2.2.20083.27680

Wan Syahimi Afiq Wan Ahlim
Universiti Teknologi MARA


ABSTRACT

Systems of nonlinear equations arise in many engineering, physics and biology applications. In most cases, the resulting system is almost impossible to solve analytically, so numerical methods become the tool of choice. For many years, Newton's method has been regarded as the best method for solving such problems, but several issues arise in practice. For example, the Jacobian matrix J that must be supplied at every iteration requires a large amount of calculation and consumes a lot of time. In this project, the Quasi-Newton method, which uses a matrix B as an approximation to the Jacobian matrix, is applied to solve a system of nonlinear equations. The method shows less computational work, and hence higher time efficiency, compared with Newton's method.

1 INTRODUCTION

1.1 Research Background

Nonlinear problems are of interest to engineers, biologists, physicists, mathematicians, and many other scientists because most systems are inherently nonlinear in nature. Specifically, nonlinear systems of equations arise in many engineering applications, such as pressure-driven analysis of water distribution networks in earth and atmospheric science, heat and thermal conduction in mechanical engineering, power distribution systems in electrical engineering, and so on (Balaji et al., 2017). The solution of a nonlinear system of equations is often the crucial step in the solution of practical problems arising in physics and engineering. These equations can be expressed as the simultaneous zeroing of a set of functions, where the number of functions to be zeroed is equal to the number of independent variables. If the model arises from a well-constructed engineering or physical system, the solution will correspond to a meaningful state of that system (Broyden, 1965).

Consider the system of nonlinear equations:

f_1(x_1, x_2, ..., x_n) = 0,
f_2(x_1, x_2, ..., x_n) = 0,
        ⋮                                                  (1)
f_n(x_1, x_2, ..., x_n) = 0,

where (x_1, x_2, ..., x_n) ∈ R^n and each f_i is a nonlinear real function, i = 1, 2, ..., n. Problem (1) can be written as

F(x) = 0                                                   (2)

where F(x) = (f_1, f_2, ..., f_n)^T and x = (x_1, x_2, ..., x_n)^T. If problem (2) is simple enough,

the solution can be found algebraically using basic methods such as substitution and elimination. However, most problems of the form (2) that arise from physics and engineering applications are almost impossible to solve analytically. Over the years, mathematicians have developed many numerical methods for solving (2), all based on an iterative process started from an initial guess. Examples include the famous Newton's method, with local quadratic convergence when the Jacobian J is a continuous function of x (Nocedal & Wright, 1999); the classical method of steepest descent (Courant, 1943), which is the most obvious choice of search direction in a line search method; and some variants of Newton's method such as the Trapezoidal Newton's method (TN), the Harmonic Newton's method (HN) and the Midpoint Newton's method (MN) discussed by (Cordero & Torregrosa, 2006).

However, not all of these methods perform well, owing to the large number of variables and equations in a nonlinear system. Newton's method and its variants are simple and easy to understand, but several limitations arise in actual applications. The first issue stems from the fact that Newton's method can require a large amount of computational work, since partial derivatives must be computed to obtain the Jacobian matrix, and the Jacobian matrix must be inverted at every iteration, which consumes a lot of time. Moreover, the requirement for the inverse of the Jacobian matrix J becomes a serious problem when J is singular. In the majority of practical problems, F(x) is far too complicated for an analytic Jacobian, and an approximation to the Jacobian matrix must be obtained numerically, which motivates the Quasi-Newton method. The goal of this research is to investigate two different numerical methods for solving systems of nonlinear equations. The first is Newton's method and the second is Broyden's method, often called a Quasi-Newton method, which is derived from the least-change secant update.

1.2 Problem Statement

The classical steepest descent (or gradient descent) method used by (Courant, 1943) tends to converge slowly because it uses only first-derivative information about the objective function. Newton's method, on the other hand, has a number of issues: computing the Jacobian matrix requires the n^2 partial derivatives of the n component functions of F : R^n → R^n, and the Jacobian matrix must be inverted at every iteration, which obviously requires a lot of work (Broyden, 1965). When the Jacobian matrix is singular, Newton's method may not even converge.

Moreover, when the starting point is remote from a solution, Newton's method can behave erratically. Furthermore, the root x* may be degenerate, that is, J(x*) may be singular. An example of a degenerate problem is the scalar function f(x) = x^2, which has a single degenerate root at x* = 0. When started from any nonzero x_0, the method generates the sequence of iterates x_k = x_0 / 2^k, which converges to the solution 0, but only at a linear rate (Nocedal & Wright, 1999).
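For illustration, the degenerate scalar example can be checked with a few lines of MATLAB (a minimal sketch; the starting point x_0 = 1 is chosen arbitrarily):

% Newton's method on the degenerate scalar problem f(x) = x^2,
% whose only root x* = 0 is degenerate because f'(0) = 0.
f  = @(x) x.^2;
df = @(x) 2*x;

x = 1;                        % any nonzero starting point
for k = 1:5
    x = x - f(x)/df(x);       % the Newton step reduces to x_{k+1} = x_k/2
    fprintf('k = %d, x_k = %.6f\n', k, x);
end
% The iterates halve at every step (x_k = x_0/2^k), i.e. only linear convergence.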

1.3 Research Objective

In this study, the application of the Quasi-Newton method, which essentially replaces the Jacobian matrix with an approximation, in solving a nonlinear system of equations will be analyzed. To assess its performance, the method will be compared with Newton's method on a system of nonlinear equations.

The objectives of this project are

1. To solve a nonlinear system of equations using Newton's method.

2. To solve a nonlinear system of equations using the Quasi-Newton method.

3. To analyze the rate of convergence of Newton's method and the Quasi-Newton method.

1.4 Significance of Project

For years, since analytical solutions are almost never available for complex systems of nonlinear equations, numerical analysis has become the standard approach. Even today, mathematicians and physicists cannot single out the best method for solving an arbitrary system of nonlinear equations, even though many methods and formulas have been developed. The goal of this study is to experiment with and analyze some numerical methods for solving systems of nonlinear equations. This project might provide a new lead in the quest to find the best method for solving any system of nonlinear equations.

Pressure-driven analysis of water distribution networks in earth and atmospheric science, heat and thermal conduction in mechanical engineering, power distribution systems in electrical engineering, and so on, are engineering problems that demand the solution of systems of nonlinear equations. For those applications, this project may help determine the best way to solve such problems.

1.5 Scope of Project

In this study, a system of nonlinear equations will be solved using two different numerical methods: the famous Newton's method and the Quasi-Newton method. The system of nonlinear equations from (Burden & Faires, 2011) will be used, since the nonlinear system used by them is suitably complex. The equations involve linear, quadratic, trigonometric and exponential terms, with F(x) = 0 where F = (f_1, f_2, f_3)^T and x = (x_1, x_2, x_3)^T, much like other systems derived from real-life problems.

Newton’s method which originated from Taylor’s theorem has been described as one of

the best method in many numerical analysis books and journals due to it’s rapid convergence

4
rate when the initial guess or the starting point, x0 is closed enough with the solution, x⇤ .

Due to numbers of issue, some modifications for this method has been done which formed the

Quasi-Newton methods. One of the Quasi-Newton method which is proposed by (Broyden,

1965) will be used in this study. Basically, the method will replace the Jacobian matrix from

Newton’s method with an approximation matrix which has been claimed could reduce the cal-

culation time.

Both Newton’s method and Quasi-Newton method will be used to solve the system of non-

linear equations that has been described above. Analysis of convergence will be carried out for

each of the method hence comparison of the performance between these method can be done.

From there, we can determine the best method for solving the nonlinear system of equations.

2 LITERATURE REVIEW

Methods for solving nonlinear systems of equations of the form (2) have undergone changes and improvements over time with modern mathematical research. The famous Newton's method may be the best method for solving F(x) = 0 due to its quadratic convergence property, as mentioned by (Nocedal & Wright, 1999): when the iterate x_k is close to a nondegenerate root x* and the Jacobian J is a continuous function of x, Newton's method converges locally at a superlinear rate. The Newton iteration is given by

x_{k+1} = x_k - J_k^{-1} F(x_k).

However, because the Jacobian matrix J_k must be calculated and inverted at every iteration, the method requires a large amount of computational work and time (Martinez, 2000).

In line search methods, each iteration evaluates a search direction d_k and chooses a step size to move along that direction. The general formula for line search methods is

x_{k+1} = x_k + α_k d_k

where α_k is a scalar called the step size. The success of the method depends on the choice of d_k and α_k. The classical method of steepest descent, whose search direction is the negative gradient of the objective, is the most obvious choice of search direction in a line search method. It is intuitive: among all the directions we could move from x_k, it is the one along which the objective decreases most rapidly (Courant, 1943). Note that the problem F(x) = 0 can be transformed into the minimization of the norm p(x) = (1/2)‖F(x)‖^2, as discussed by (Huang, 2017), where ‖·‖ is the Euclidean norm; the steepest-descent direction is then -∇p(x_k). (Li & Fukushima, 1999) proposed an approximation for α_k by a monotone technique:

p(x_k + α_k d_k) - p(x_k) ≤ -δ_1 ‖α_k d_k‖^2 - δ_2 ‖α_k F_k‖^2 + ε_k ‖F_k‖^2

where F_k = F(x_k), δ_1 > 0 and δ_2 > 0 are positive constants, α_k = ρ^{i_k} with ρ ∈ (0, 1), i_k is the smallest nonnegative integer i satisfying the formula above, and ε_k is such that Σ_{k=0}^∞ ε_k < ∞. From this technique, (Zhu, 2005) proposed a non-monotone technique:

p(x_k + α_k d_k) - p(x_{l(k)}) ≤ β α_k^2 F(x_k)^T d_k

where p(x_{l(k)}) = max_{0 ≤ j ≤ m(k)} p(x_{k-j}), m(0) = 0, m(k) = min(m(k-1) + 1, M) for k ≥ 1, M is a nonnegative integer and β ∈ (0, 1). All these line search methods vary in convergence rate depending on the type and scale of F(x). However, line search methods usually tend to converge slowly, due to their linear convergence, and often exhibit a zigzag phenomenon in practical calculations, which leads to excessive time consumption (Shi & Shen, 2007).
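As a rough illustration of the line search idea, the sketch below applies a generic backtracking rule to the merit function p(x) = (1/2)‖F(x)‖^2 with the steepest-descent direction; it is not the exact Li-Fukushima or Zhu rule, and the function name and constants are illustrative only:

% Generic line search sketch: steepest descent on p(x) = 0.5*||F(x)||^2
% with simple backtracking, stopping when ||F(x)|| drops below tol.
function x = linesearch_sketch(F, J, x, tol, maxit)
    p = @(z) 0.5*norm(F(z))^2;
    for k = 1:maxit
        d = -J(x)'*F(x);                 % steepest-descent direction: -grad p(x)
        a = 1;                           % trial step size
        while p(x + a*d) > p(x) - 1e-4*a*norm(d)^2
            a = a/2;                     % backtrack until sufficient decrease
            if a < 1e-12, break; end
        end
        x = x + a*d;
        if norm(F(x)) < tol, break; end
    end
end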

Some variants of Newton’s method such as Trapezoidal Newton’s method, Harmonic Newton’s

method and Midpoint Newton’s method has been discussed by (Cordero & Torregrosa, 2006)

which basically based on the trapezoidal rules and midpoint quadrature rules. These variants

of Newton’s method lost the quadratic convergence property of the classical Newton’s method

accept for the Midpoint Newton’s method. The Midpoint Newton’s method show faster con-

vergence than classical Newton’s method when the Jacobian matrix near to singularity. While,

when the Jacobian matrix is singular the Harmonic Newton’s method solve the system of non-

linear equations more successfully. Nevertheless, this method still need some improvement

since the convergence could be lost if the rounding errors occur in every iteration.

The Newton’s iteration can required a lot of computational works, since partial derivatives must

be computed and the linear system J(xk )sk = F(xk ) must be solved at every iteration. This fact

motivated the development of quasi-Newton methods, which are defined as the generalizations

of Newton’s method given by

x(k+1) = x(k) Bk 1 F(x(k) )

Secant methods, also known as quasi-Newton methods, do not require calculation of the Ja-

cobian J in each iteration. Instead, they use a matrix B as an approximation to the Jacobian

matrix J. The matrix B will be updated at each iteration so that it mimics the behavior of the

true Jacobian J over the step just taken as described by (Martinez, 2000). The matrix B will

be used in formula above to compute the new search direction. By Sherman-Morrison theo-

1
rem, Bk+1 is obtained from Bk 1 using simple procedures since the linear algebra computational

works involved in the formula is much less than in J(xk )sk = F(xk ). The update method for

1
Bk+1 will be discuss on the next section that has been proposed by (Broyden, 1965) called the

least-change secant update or Broyden’s method. This is because the fundamental equation of

quasi-Newton methods is given by the secant equation, Bk+1 sk = yk = F(x(k) ) F(x(k 1) ). The

formula was used by Broyden on some set of nonlinear system of equations and he found that

the method never failed to converge.

3 METHODOLOGY

3.1 System of nonlinear equations and Jacobian matrix

Consider the following nonlinear system of equations:

F(x) = [ f_1(x_1, x_2, ..., x_n)
         f_2(x_1, x_2, ..., x_n)
                  ⋮
         f_n(x_1, x_2, ..., x_n) ] = 0                     (3)

where (x_1, x_2, ..., x_n) ∈ R^n and each f_i is a nonlinear function, i = 1, 2, ..., n.

Here is an example of a nonlinear system of equations from (Burden & Faires, 2011):

3x_1 - cos(x_2 x_3) - 1/2 = 0
x_1^2 - 81(x_2 + 0.1)^2 + sin(x_3) + 1.06 = 0              (4)
e^{-x_1 x_2} + 20x_3 + (10π - 3)/3 = 0
In this study, the point x* ∈ R^n will frequently be referred to as the solution of equation (3) to the desired accuracy. The Jacobian matrix J(x) refers to the n × n matrix of all first-order partial derivatives of the function F(x), as follows.

J(x) = [ ∂f_1/∂x_1 (x)   ∂f_1/∂x_2 (x)   ···   ∂f_1/∂x_n (x)
         ∂f_2/∂x_1 (x)   ∂f_2/∂x_2 (x)   ···   ∂f_2/∂x_n (x)
               ⋮               ⋮          ⋱          ⋮
         ∂f_n/∂x_1 (x)   ∂f_n/∂x_2 (x)   ···   ∂f_n/∂x_n (x) ]
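For concreteness, the example system (4) and its Jacobian can be written as MATLAB anonymous functions; this is only a sketch of the setup (the code actually used in this study is listed in Appendix A):

% System (4) from Burden & Faires and its Jacobian, as functions of the
% column vector x = [x1; x2; x3].
F = @(x) [ 3*x(1) - cos(x(2)*x(3)) - 1/2;
           x(1)^2 - 81*(x(2) + 0.1)^2 + sin(x(3)) + 1.06;
           exp(-x(1)*x(2)) + 20*x(3) + (10*pi - 3)/3 ];

J = @(x) [ 3,                      x(3)*sin(x(2)*x(3)),   x(2)*sin(x(2)*x(3));
           2*x(1),                -162*(x(2) + 0.1),      cos(x(3));
          -x(2)*exp(-x(1)*x(2)),  -x(1)*exp(-x(1)*x(2)),  20                  ];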

Next, we introduce the two numerical methods that will be used to solve equation (3) through their iterative procedures: Newton's method, used as the benchmark, and Broyden's method, usually known as the Quasi-Newton method.

3.2 Newton’s Method for solving nonlinear system of equations

One of the most famous methods for solving nonlinear systems of equations is Newton's method, which (Burden & Faires, 2011) describe as the most powerful method due to its quadratic convergence property. This numerical method is often used to solve the equation f(x) = 0, that is, to find the root of a function. The method is derived from Taylor's theorem: the function f(x) can be expanded about the point x_1 as

f(x) = f(x_1) + (x - x_1) f'(x_1) + (1/2!)(x - x_1)^2 f''(x_1) + ···

If we take the first two terms of the Taylor series expansion and set them to zero to find the root, we obtain

x = x_2 = x_1 - f(x_1)/f'(x_1)

The procedure above can be converted into vector form so that it applies to multivariate problems. For the nonlinear system of equations we can write

x^{(k+1)} = x^{(k)} - J(x^{(k)})^{-1} F(x^{(k)})           (5)

where k = 0, 1, 2, ... is the iteration index, x ∈ R^n, F is a vector function, and J(x)^{-1} is the inverse of the Jacobian matrix. The iterative formula (5) is used to solve F(x) = 0 in Newton's method by the following steps.

Procedure for Newton’s method:

Step 1:

Choose an initial point, x^{(0)} = (x_1^{(0)}, x_2^{(0)}, ..., x_n^{(0)})^T.

Step 2:

Calculate F(x^{(0)}) and the Jacobian J(x^{(0)}).

Step 3:

Now calculate y^{(0)} by solving the linear system J(x^{(0)}) y^{(0)} = -F(x^{(0)}), where y = (y_1, y_2, ..., y_n)^T. Rearranging the linear system gives y^{(0)} = -J(x^{(0)})^{-1} F(x^{(0)}). Note that we can then simplify (5) to

x^{(k+1)} = x^{(k)} - J(x^{(k)})^{-1} F(x^{(k)}) = x^{(k)} + y^{(k)}

Step 4:

Once y^{(0)} is found, we can proceed to finish the first iteration by solving for x^{(1)}. Using the result from Step 3, we have

x^{(1)} = x^{(0)} + y^{(0)} = (x_1^{(0)}, x_2^{(0)}, ..., x_n^{(0)})^T + (y_1^{(0)}, y_2^{(0)}, ..., y_n^{(0)})^T

Step 5:

Once we have calculated x^{(1)}, we repeat the process until x^{(k)} converges to x*. When F(x*) = 0, we have reached the solution x*.

When the sequence of iterates converges, ‖x^{(k+1)} - x^{(k)}‖ → 0, where

‖x^{(k+1)} - x^{(k)}‖ = sqrt( (x_1^{(k+1)} - x_1^{(k)})^2 + ··· + (x_n^{(k+1)} - x_n^{(k)})^2 )        (6)
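The procedure above can be summarized in a short MATLAB sketch (a minimal illustration that assumes the F and J functions from Section 3.1; the code actually used in this study is listed in Appendix A):

% Newton's method sketch: solve F(x) = 0 from x0, stopping when the step
% norm of equation (6) falls below tol or maxit iterations are reached.
function x = newton_solve(F, J, x0, tol, maxit)
    x = x0;
    for k = 1:maxit
        y = -J(x) \ F(x);        % solve J(x^(k)) y^(k) = -F(x^(k))
        x = x + y;               % x^(k+1) = x^(k) + y^(k)
        if norm(y) <= tol        % ||x^(k+1) - x^(k)|| = ||y^(k)||
            break
        end
    end
end

For the example system (4), a call such as x = newton_solve(F, J, [0.1; 0.1; -0.1], 1e-6, 50) should reproduce the iterations worked out in Section 4.1.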

Newton's method has many limitations, as mentioned in Section 2, which have led to modifications of the method, in particular the Quasi-Newton methods. Next, we introduce the Quasi-Newton method used in this study, namely Broyden's method.

3.3 Quasi-Newton Method for solving nonlinear system of equations

As mentioned earlier, the Quasi-Newton method is basically a modification of Newton's method, so the iterative algorithm for Broyden's method has almost the same form as Newton's method. In Broyden's method, however, instead of using the Jacobian matrix J, a matrix B is used; (Nocedal & Wright, 1999) describe B as an approximation to J that is updated at every iteration. The iterative formula for x^{(k+1)} is then

x^{(k+1)} = x^{(k)} - B_k^{-1} F(x^{(k)})                  (7)

where the iterative formula for B, as described by (Broyden, 1965), is

B_k = B_{k-1} + ((y_k - B_{k-1} s_k) / ‖s_k‖_2^2) s_k^T    (8)

where

y_k = F(x^{(k)}) - F(x^{(k-1)})
s_k = x^{(k)} - x^{(k-1)}.

The iterative formula (7) requires the inverse of the matrix B to be supplied. The formula (8) proposed by Broyden can be rewritten, using the Sherman-Morrison theorem, as

B_k^{-1} = B_{k-1}^{-1} + ((s_k - B_{k-1}^{-1} y_k) s_k^T B_{k-1}^{-1}) / (s_k^T B_{k-1}^{-1} y_k)        (9)

We will use this formula to compute the inverse of the matrix B at each iteration instead of Broyden's original formula.

Procedure for Quasi-Newton method:

Step 1:

Choose an initial point, x^{(0)} = (x_1^{(0)}, x_2^{(0)}, ..., x_n^{(0)})^T.

Step 2:

Calculate F(x^{(0)}).

Step 3:

For the first iteration, J(x^{(0)})^{-1} will be used as the approximation B_0^{-1}, because Broyden's method allows any initial guess for the matrix B_0 (some books use the identity matrix for B_0).

Step 4:

Calculate x^{(1)} as follows:

x^{(1)} = x^{(0)} - B_0^{-1} F(x^{(0)})

Step 5:

Evaluate F(x^{(1)}), then compute

y_1 = F(x^{(1)}) - F(x^{(0)})
s_1 = x^{(1)} - x^{(0)}.

Step 6:

Now compute s_1^T B_0^{-1} y_1 using the results from Steps 3 and 5.

Step 7:

Calculate B_1^{-1} = B_0^{-1} + (1 / (s_1^T B_0^{-1} y_1)) (s_1 - B_0^{-1} y_1) s_1^T B_0^{-1}

Step 8:

Now calculate the second approximation to the solution as follows:

x^{(2)} = x^{(1)} - B_1^{-1} F(x^{(1)})

Step 9:

Repeat the process until the solution is found, that is, until the iterates converge with x^{(k)} ≈ x^{(k-1)} ≈ x* (see the norm in equation (6)).
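A compact MATLAB sketch of this procedure follows (illustrative only; it assumes the F and J functions from Section 3.1 and initializes B_0^{-1} = J(x^{(0)})^{-1} as in Step 3):

% Broyden's (Quasi-Newton) method sketch: the inverse approximation Binv
% is updated with the Sherman-Morrison form (9), so no linear system has
% to be solved after the first iteration.
function x = broyden_solve(F, J, x0, tol, maxit)
    x    = x0;
    Binv = inv(J(x0));                 % B_0^{-1} = J(x^(0))^{-1} (Step 3)
    Fx   = F(x);
    for k = 1:maxit
        s    = -Binv * Fx;             % step s_k = x^(k) - x^(k-1)
        x    = x + s;
        Fnew = F(x);
        y    = Fnew - Fx;              % y_k = F(x^(k)) - F(x^(k-1))
        Binv = Binv + ((s - Binv*y) * (s' * Binv)) / (s' * Binv * y);   % update (9)
        Fx   = Fnew;
        if norm(s) <= tol              % stopping test based on equation (6)
            break
        end
    end
end

A call such as x = broyden_solve(F, J, [0.1; 0.1; -0.1], 1e-6, 50) should reproduce the hand calculation carried out in Section 4.2.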

4 IMPLEMENTATION

In this section we show how Newton's method is carried out, followed by Broyden's method, in solving the system of nonlinear equations below:

3x_1 - cos(x_2 x_3) - 1/2 = 0
x_1^2 - 81(x_2 + 0.1)^2 + sin(x_3) + 1.06 = 0
e^{-x_1 x_2} + 20x_3 + (10π - 3)/3 = 0

4.1 Newton’s Method

Step 1:

We set the initial point x^{(0)} = (0.1, 0.1, -0.1)^T.

Step 2:

Compute F(x^{(0)}) and the Jacobian J(x^{(0)}).

F(x^{(0)}) = ( 0.3 - cos(-0.01) - 1/2,
               0.01 - 3.24 + sin(-0.1) + 1.06,
               e^{-0.01} - 2 + (10π - 3)/3 )^T

           = ( -1.19995, -2.269833417, 8.462025346 )^T

J(x) = [ 3                    x_3 sin(x_2 x_3)    x_2 sin(x_2 x_3)
         2x_1                -162(x_2 + 0.1)      cos(x_3)
        -x_2 e^{-x_1 x_2}    -x_1 e^{-x_1 x_2}    20               ]

J(x^{(0)}) = [ 3               (-0.1) sin(-0.01)   0.1 sin(-0.01)
               0.2            -32.4                cos(-0.1)
              -0.1 e^{-0.01}  -0.1 e^{-0.01}       20             ]

            = [ 3              0.000999983        -0.000999983
                0.2          -32.4                 0.995004165
               -0.099004984  -0.099004984         20             ]

Step 3:

Now solve the system J(x^{(0)}) y^{(0)} = -F(x^{(0)}) by Gaussian elimination:

[  3              0.000999983   -0.000999983 ] [ y_1^{(0)} ]   [  1.19995     ]
[  0.2          -32.4            0.995004165 ] [ y_2^{(0)} ] = [  2.269833417 ]
[ -0.099004984  -0.099004984    20           ] [ y_3^{(0)} ]   [ -8.462025346 ]

Solving the linear system above yields

y^{(0)} = ( 0.39988467, -0.080533147, -0.42152047 )^T

Step 4:

Once y^{(0)} is found, we can proceed to finish the first iteration by solving for x^{(1)}. Using the result from Step 3, we have

x^{(1)} = x^{(0)} + y^{(0)} = ( 0.1, 0.1, -0.1 )^T + ( 0.39988467, -0.080533147, -0.42152047 )^T
        = ( 0.49988467, 0.01946686, -0.52152047 )^T

From here one can repeat the algorithm, using x^{(1)}, to find the next approximation x^{(2)}, and so on until the desired accuracy is reached. A short numerical check of this first iteration is sketched below, after which we show how the Quasi-Newton method is carried out to solve the same system of nonlinear equations.
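As a quick check (a sketch only, reusing the F and J functions from Section 3.1 and the backslash solve from the Newton sketch in Section 3.2):

% Reproduce the first Newton iteration of the worked example.
x0 = [0.1; 0.1; -0.1];
y0 = -J(x0) \ F(x0);      % expected approx. ( 0.39988467; -0.08053315; -0.42152047)
x1 = x0 + y0              % expected approx. ( 0.49988467;  0.01946686; -0.52152047)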

4.2 Quasi-Newton Method

Step 1:

Just as in Newton's method, we set the initial point to x^{(0)} = (0.1, 0.1, -0.1)^T.

Step 2:

Calculate F(x^{(0)}).

F(x^{(0)}) = ( 0.3 - cos(-0.01) - 1/2,
               0.01 - 3.24 + sin(-0.1) + 1.06,
               e^{-0.01} - 2 + (10π - 3)/3 )^T

           = ( -1.19995, -2.269833417, 8.462025346 )^T

Step 3:

For the first iteration, J(x^{(0)})^{-1} will be used as the approximation B_0^{-1}, because Broyden's method allows any initial guess for the matrix B_0 (some books use the identity matrix for B_0).

B_0 = J(x^{(0)}) = [  3              0.000999983   -0.000999983
                      0.2          -32.4            0.995004165
                     -0.099004984  -0.099004984    20           ]

B_0^{-1} = J(x^{(0)})^{-1} = [ 0.3333332           1.023852 × 10^{-5}   1.615701 × 10^{-5}
                               2.108607 × 10^{-3} -3.086883 × 10^{-2}   1.535836 × 10^{-3}
                               1.660520 × 10^{-3} -1.527577 × 10^{-4}   5.000768 × 10^{-2} ]

Step 4:

Calculate x^{(1)} as follows:

x^{(1)} = x^{(0)} - B_0^{-1} F(x^{(0)}) = ( 0.4998697, 0.0194668, -0.5215205 )^T

Step 5:

Evaluate F(x^{(1)}) to find y_1 and s_1.

F(x^{(1)}) = ( -3.394465 × 10^{-4}, -0.3443879, 3.188238 × 10^{-2} )^T

y_1 = F(x^{(1)}) - F(x^{(0)}) = ( 1.199611, 1.925445, -8.430143 )^T

s_1 = x^{(1)} - x^{(0)} = ( 0.3998697, -8.053315 × 10^{-2}, -0.4215204 )^T

Step 6:

Now we compute s_1^T B_0^{-1} y_1, using the results from Steps 3 and 5:

s_1^T B_0^{-1} y_1 = ( 0.3998697, -8.053315 × 10^{-2}, -0.4215204 ) B_0^{-1} ( 1.199611, 1.925445, -8.430143 )^T
                   = 0.3424604

Step 7:

Compute B_1^{-1} = B_0^{-1} + (1 / (s_1^T B_0^{-1} y_1)) (s_1 - B_0^{-1} y_1) s_1^T B_0^{-1}:

B_1^{-1} = B_0^{-1} + (1 / 0.3424604)(s_1 - B_0^{-1} y_1) s_1^T B_0^{-1}

         = [  0.3333781           1.11050 × 10^{-5}    8.967344 × 10^{-6}
             -2.021270 × 10^{-3} -3.094849 × 10^{-2}   2.196906 × 10^{-3}
              1.022214 × 10^{-3} -1.650709 × 10^{-4}   5.010986 × 10^{-2} ]

Step 8:

Now we calculate the second approximation to the solution as follows:

x^{(2)} = x^{(1)} - B_1^{-1} F(x^{(1)}) = ( 0.4999864, 0.0087378, -0.5231746 )^T

From here, one can repeat the process until the desired error ‖x^{(k+1)} - x^{(k)}‖ ≤ ε is achieved. The full results of Newton's method and the Quasi-Newton method are presented and discussed in the next section, where we set the error tolerance to ‖x^{(k+1)} - x^{(k)}‖ ≤ 1.0 × 10^{-16}.

4.3 MATLAB Application

For simplicity, we developed an algorithm for each method in MATLAB. There are two advantages to doing so: first, it lets us determine the solution faster and keeps the calculations free from human error, so the methods can be compared fairly. Second, we want to trace the time MATLAB takes to solve the system of nonlinear equations with each of the two methods. With this strategy we can compare the performance of the methods not only by the number of iterations but also by how long they take to reach the desired accuracy. The time taken by the two methods is discussed in the next section. The MATLAB code we used is presented in Appendix A.
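Besides the profiler's 'Run and Time' report, a rough wall-clock comparison can be made with tic/toc. The sketch below assumes the newton_solve and broyden_solve routines sketched in Section 3 together with the F and J functions of Section 3.1 (the timed code actually used is in Appendix A):

% Rough wall-clock timing of the two solvers on system (4); the profiler's
% 'Run and Time' report remains the more detailed measurement.
x0 = [0.1; 0.1; -0.1];
tic;  xN = newton_solve(F, J, x0, 1e-6, 50);   tN = toc;
tic;  xB = broyden_solve(F, J, x0, 1e-6, 50);  tB = toc;
fprintf('Newton: %.4f s, Quasi-Newton (Broyden): %.4f s\n', tN, tB);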

5 RESULTS AND DISCUSSION

Below is the result obtained for Newton's method after the algorithm was iterated until the error satisfied ‖x^{(k+1)} - x^{(k)}‖ ≤ 1.0 × 10^{-6}.

 k   x_1^{(k)}     x_2^{(k)}     x_3^{(k)}      ‖x^{(k+1)} - x^{(k)}‖_∞
 0   0.1000000     0.1000000     -0.1000000     -
 1   0.4998697     0.0194668     -0.5215205     4.2152 × 10^{-1}
     (0.499983)    (0.009441)    (-0.5231013)   (0.423)
 2   0.5000142     0.0015886     -0.5235570     1.7878 × 10^{-2}
     (0.499996)    (0.000026)    (-0.523363)    (9.4 × 10^{-3})
 3   0.5000001     0.0000124     -0.5235985     1.5761 × 10^{-3}
     (0.500000)    (0.0000123)   (-0.5235981)   (2.3 × 10^{-4})
 4   0.5000000     0.0000000     -0.5235988     1.2444 × 10^{-5}
     (0.500000)    (0.000000)    (-0.5235984)   (1.2 × 10^{-5})
 5   0.5000000     0.0000000     -0.5235988     0
     (0.500000)    (0.000000)    (-0.5235987)

Table 5.1: Newton's method result

The figures in brackets are the results from (Burden & Faires, 2011) with the same initial guess. From the result we can see that Newton's method took five iterations to achieve the error ‖x^{(k+1)} - x^{(k)}‖ = 0. From Table 5.1, the solution x* of the system of nonlinear equations is

x* = ( 0.500000, 0.000000, -0.523599 )^T.

Next we show our result for the Quasi-Newton method in solving the same system of nonlinear equations, again alongside the result from (Burden & Faires, 2011).

Table 5.2 below shows the full result for the Quasi-Newton method, with the results from Burden and Faires in brackets.

 k   x_1^{(k)}     x_2^{(k)}     x_3^{(k)}      ‖x^{(k+1)} - x^{(k)}‖_∞
 0   0.100000      0.100000      -0.100000      -
 1   0.499870      0.019467      -0.521520      4.2152 × 10^{-1}
     (0.499870)    (0.019467)    (-0.521520)    (0.422)
 2   0.499986      0.008738      -0.523175      1.0729 × 10^{-2}
     (0.499986)    (0.008738)    (-0.523175)    (1.1 × 10^{-2})
 3   0.500007      0.000867      -0.523572      7.8706 × 10^{-3}
     (0.500007)    (0.000867)    (-0.523691)    (7.88 × 10^{-3})
 4   0.5000000     0.000040      -0.523598      8.2775 × 10^{-4}
     (0.5000003)   (0.000061)    (-0.5235985)   (8.12 × 10^{-4})
 5   0.5000000     0.0000000     -0.523599      3.9335 × 10^{-5}
     (0.500000)    (-0.000001)   (-0.5235989)   (6.24 × 10^{-5})
 6   0.5000000     0.0000000     -0.523599      0
     (0.500000)    (0.000000)    (-0.5235988)   (1.5 × 10^{-6})

Table 5.2: Quasi-Newton method result

The Quasi-Newton method took six iterations to achieve the error ‖x^{(k+1)} - x^{(k)}‖ = 0, with the solution of the system of nonlinear equations x* = ( 0.500000, 0.000000, -0.523599 )^T, just like the result from Newton's method. The method is also able to maintain a fast, superlinear rate of convergence close to the quadratic convergence of the original Newton's method.

From the results above, we can see that Newton's method required fewer iterations than the Quasi-Newton method in solving the proposed system of nonlinear equations. But is the number of iterations alone sufficient to conclude that Newton's method performs better than the Quasi-Newton method? For each method, we implemented the algorithm in MATLAB to see how long the application takes to solve the system of nonlinear equations, using the 'Run and Time' feature in MATLAB. With this strategy, we can compare the two methods in terms of the time taken to reach the desired accuracy.

Figure 5.1: Newton’s method time usage

Figure 5.2: Quasi-Newton method time usage

Figures 5.1 and 5.2 show the MATLAB reports on time usage for solving the system of nonlinear equations with Newton's method and the Quasi-Newton method, respectively. Taking the grand total of 'Total Time', we have 0.029 seconds for Newton's method and 0.011 seconds for the Quasi-Newton method. This indicates that, even with a lower number of iterations, it took longer to solve the system of nonlinear equations using Newton's method than using the Quasi-Newton method.

There is also a large difference in total 'self time' between the two methods: 0.024 seconds for Newton's method and 0.006 seconds for the Quasi-Newton method. Self time is the time spent in a function excluding the time spent in its child functions; it also includes overhead resulting from the profiling process. In other words, Newton's method uses more CPU time than the Quasi-Newton method in solving the system of nonlinear equations.

6 CONCLUSIONS AND RECOMMENDATIONS

Numerical methods, the Newton’s method and a proposed method, Quasi-Newton method has

been used to solve a system of nonlinear equations. With the same initial guess, Newton’s

method show a less number of iteration required for the algorithm to reach the desired accuracy

for the solution of the system of nonlinear equations compared to the Quasi-Newton algorithm.

This is due to the availability of the Jacobian matrix in each iteration for the Newton Method

while Quasi-Newton method only used the approximation for the Jacobian matrix in its algo-

rithm. Originated from the Taylor Theorem, off course the Newton’s Method algorithm will

arrive to the solution in smaller iteration when the Jacobian for the system can be well defined.

Since the system of nonlinear equations used in this study does not lead to the singularity of the

Jacobian matrix, the Newton method can perform very well in each iteration or precisely with

quadratic convergence rate.

But things are not always what they seem: with MATLAB we are able to record the time taken by each method to solve the system of nonlinear equations. The results show that the Quasi-Newton method performs better in terms of time efficiency. In other words, the Quasi-Newton method requires fewer function computations than Newton's method and solves the problem faster, despite its higher number of iterations. This is because the Jacobian matrix J in Newton's method must be supplied at every iteration, which incurs a lot of computational work. Another contributing factor is that the linear system J(x_k) s_k = -F(x_k) must be solved at every iteration, which also requires a lot of work. Thanks to the Sherman-Morrison formula (9), the matrix B does not need to be inverted at every iteration of the Quasi-Newton formula; B_k^{-1} is obtained directly from the previous B_{k-1}^{-1}. This reduces the time taken to compute B_k^{-1} at every iteration of the method.

In a nutshell, the number of iterations is a poor metric for comparison in this study. Although it needs more iterations, the Quasi-Newton method is superior to Newton's method since it reduces calculation time and function computations; reduced time simply means faster, and faster means better performance. For further study, the methods should be tested on larger systems of nonlinear equations, for example F : R^n → R^n with n ≥ 100. An investigation of the case of a singular Jacobian matrix should also be carried out, with the initial B_0 = I, where I is the identity matrix, as suggested by (Broyden, 1965).

REFERENCES

Balaji, S., Venkataraman, V., Sastry, D., & Raghul, M. (2017). Solution of system of nonlinear equations using integrated RADM and ADM. International Journal of Pure and Applied Mathematics, 117(3), 367–373. doi: 10.12732/ijpam.v117i3.1

Broyden, C. (1965). A Class of Methods for Solving Nonlinear Simultaneous Equations. American Mathematical Society, 19(92), 577–593.

Burden, R., & Faires, J. (2011). Numerical Solutions of Nonlinear Systems of Equations. In M. Julet (Ed.), Numerical analysis (9th ed., pp. 629–670). Belmont: Thomson Brooks/Cole.

Cordero, A., & Torregrosa, J. R. (2006). Variants of Newton's method for functions of several variables. Applied Mathematics and Computation, 183(1), 199–208. doi: 10.1016/j.amc.2006.05.062

Courant, R. (1943). Variational Methods for the Solution of Problems of Equilibrium and Vibrations. Bull. Amer. Math. Soc., 49(1), 1–23. doi: 10.1090/S0002-9904-1943-07818-4

Huang, L. (2017). A quasi-Newton algorithm for large-scale nonlinear equations. Journal of Inequalities and Applications, 2017(1), 1–16. doi: 10.1186/s13660-017-1301-7

Li, D., & Fukushima, M. (1999). A global and superlinear convergent Gauss-Newton-based BFGS method for symmetric nonlinear equations. SIAM Journal on Numerical Analysis, 37(1), 152–172. doi: 10.1137/S0036142998335704

Martinez, J. M. (2000). Practical quasi-Newton methods for solving nonlinear systems. Journal of Computational and Applied Mathematics, 124(1–2), 97–121.

Nocedal, J., & Wright, S. (1999). Numerical optimization. New York: Springer Science & Business Media.

Shi, Z. J., & Shen, J. (2007). Memory gradient method with Goldstein line search. Computers and Mathematics with Applications, 53(1), 28–40.

Zhu, D. (2005). Nonmonotone backtracking inexact quasi-Newton algorithms for solving smooth nonlinear equations. Applied Mathematics and Computation, 161(3), 875–895.
