

Journal of Global Optimization
https://doi.org/10.1007/s10898-022-01213-4

Accelerated Dai-Liao projection method for solving systems of monotone nonlinear equations with application to image deblurring

Branislav Ivanov 1 · Gradimir V. Milovanović 2,3 · Predrag S. Stanimirović 3

Received: 9 December 2021 / Accepted: 6 July 2022
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022

Abstract
A modified Dai-Liao type conjugate gradient method for solving large-scale nonlinear systems of monotone equations is introduced and investigated in this research. The starting point is the Dai-Liao type conjugate gradient method based on the descent Dai-Liao method and the hyperplane projection technique, known as the Dai-Liao projection method (DLPM). Our algorithm, termed MSMDLPM, proposes a novel search direction for the DLPM, which arises from appropriate acceleration parameters obtained after hybridizing the accelerated gradient-descent method MSM with the DLPM. The main goal of the proposed MSMDLPM method is to correlate the MSM and the DLPM. The global convergence and the convergence rate of the MSMDLPM method are investigated theoretically. Numerical results show the efficiency of the proposed method in solving large-scale nonlinear systems of monotone equations. The effectiveness of the method in image restoration is verified by the performed numerical experiments.

Keywords Dai–Liao conjugate gradient method · Nonlinear monotone system of equations · Projection method · Image deblurring problems · Convergence

Mathematics Subject Classification 65K05 · 90C30

Branislav Ivanov, Gradimir V. Milovanović and Predrag S. Stanimirović contributed equally to this work.

✉ Gradimir V. Milovanović
gvm@mi.sanu.ac.rs
Branislav Ivanov
ivanov.branislav@gmail.com
Predrag S. Stanimirović
pecko@pmf.ni.ac.rs
1 Technical Faculty in Bor, University of Belgrade, Vojske Jugoslavije 12, Bor 19210, Serbia
2 Mathematical Institute, Serbian Academy of Sciences and Arts, Kneza Mihaila 35, Belgrade 11000,
Serbia
3 Faculty of Sciences and Mathematics, University of Niš, Višegradska 33, Niš 18000, Serbia


1 Introduction and preliminaries

A system of nonlinear equations (SoNE) is modeled in the general form

F(x) = 0,  x ∈ R^n,  (1.1)

where R^n denotes the space of n-dimensional real vectors and F(x) = (F_1(x), ..., F_n(x))^T : R^n → R^n, such that F_i : R^n → R. The SoNE problem (1.1) often appears
as a subproblem in many scientific areas, so solving such systems is topical. A survey of
real-world applications of SoNE is presented in [45]. It is known that (1.1) can be considered
as an optimization problem, so several optimization techniques have been proposed for its
solution. More than 20 heuristics for solving (1.1) are mentioned in [45]. Notable among them are the continuous variable neighborhood search method [45] and a method based on a continuous global optimization heuristic [27]. Derivative-free methods for solving SoNE were considered
in [25, 26, 47, 50]. Strategy based on modifications of the Broyden method was exploited
in [6, 43, 44]. Various modifications of the conjugate gradient (CG) methods for solving
large-scale SoNE were proposed in [20, 22, 48, 49, 51, 52, 60, 61, 70, 71, 73, 79]. Another
class of SoNE solvers arises from various modifications of quasi-Newton methods [1, 9, 21,
53, 62, 65, 69, 79]. An efficient approximation to the Jacobian matrix with a computational
effort similar to that of matrix-free settings was proposed in [36]. The approach based on the
approximation of the Jacobian inverse by an approximate diagonal matrix was exploited in [63, 64, 66–68]. Furthermore, an iterative scheme based on a modification of the Dai-Liao CG method, classical Newton iterates, and the standard secant equation originated in [72]. A three-step method based on a proper diagonal updating was presented in [57]. Hybrid
methods which combine heuristic techniques with conjugate gradients or Quasi-Newton
variants are considered in [41, 42]. Moreover, improvements of specific iterative methods
are used [17].
In our implementation, it is assumed that F(x) is a continuous mapping that satisfies the
condition of monotonicity
(∀x, y ∈ R^n)  (F(x) − F(y))^T (x − y) ≥ 0.  (1.2)
In the projection method, a line search is used to construct a hyperplane that separates the
current iteration from the solutions of the system. Namely, a line search in a given direction
determines the step length and then generates an auxiliary iteration that serves to construct the
hyperplane that separates the current iteration from the solution of the system of equations.
The next iteration is obtained by projecting the current iteration on a given hyperplane.
In this way, a global convergence is achieved. The monotonicity of the underlying system
(1.1) suggests suitability for the application of a projection method in solving such systems,
because it enables simple globalization.
The projection method for solving monotone systems was introduced in [54], wherein a
hybrid projection inexact Newton procedure was proposed as a combination of the projection
method with the Newton search direction. The main feature of this procedure is that the whole sequence of iterations generated by the algorithm converges globally to a solution of the system without any regularity assumption. In addition, these procedures do not use derivatives of the objective function and are based only on the calculation of the value of F(x). Therefore, they
are suitable for solving non-smooth monotone systems and systems with singular solutions.
Since the projection method is the basis for a whole class of projective procedures, we present the basic idea of using a modified Dai-Liao conjugate gradient (CG) method with an acceleration parameter, in combination with the projection method, for solving systems of monotone nonlinear equations.


Iterative schemes for solving systems of monotone equations have received much attention in recent years. After choosing the starting point x_0 ∈ R^n, the iterative scheme for solving the problem (1.1) has the general form
xk+1 = xk + sk , k = 0, 1, . . . , (1.3)
where sk = αk dk , αk > 0 is the scalar step length obtained via a suitable line search, and the
vector dk is the search direction. The process needs to be accelerated using the monotonicity
condition (1.2) of F(x).
The complexity of the method directly depends on the search direction dk and the method
of calculating the step length α_k, so it is essential to choose the correct line search. Line searches without the use of derivatives, given in [5, 37], are based only on calculating the value of the function F(x), so they are suitable for large systems. The step length α_k > 0 is the largest value α_k = r^{m_k}, r ∈ (0, 1), where m_k is the smallest nonnegative integer such that the following condition is satisfied:

−F(x_k + α_k d_k)^T d_k ≥ σ α_k ‖F(x_k + α_k d_k)‖ ‖d_k‖²,  σ ∈ (0, 1),  (1.4)

and the search direction d_k satisfies the inequality

F(x_k)^T d_k ≤ −c ‖F(x_k)‖²  for c > 0.  (1.5)
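For illustration, the backtracking rule (1.4) can be sketched in a few lines of Python. This is a minimal sketch under our own naming conventions (the function name, F, d, sigma, r, and max_backtracks are assumptions matching the symbols above), not the authors' Matlab code.

```python
import numpy as np

def derivative_free_line_search(F, x, d, sigma=0.01, r=0.6, max_backtracks=50):
    """Backtracking line search (1.4): find the largest alpha = r**m, m = 0, 1, ...,
    with -F(x + alpha*d)^T d >= sigma * alpha * ||F(x + alpha*d)|| * ||d||^2."""
    alpha = 1.0
    d_norm2 = np.dot(d, d)
    for _ in range(max_backtracks):
        Fz = F(x + alpha * d)
        if -np.dot(Fz, d) >= sigma * alpha * np.linalg.norm(Fz) * d_norm2:
            return alpha
        alpha *= r
    return alpha  # fall back to the last trial step
```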
The Dai-Liao family of CG methods exploits the search direction d_k defined by

d_k = −F(x_k),                      k = 0,
d_k = −F(x_k) + β_k^{DL} d_{k−1},   k ≥ 1,   (1.6)

where β_k^{DL} is the Dai–Liao CG parameter [19]

β_k^{DL} = F(x_k)^T y_{k−1} / (y_{k−1}^T d_{k−1}) − t_k F(x_k)^T s_{k−1} / (y_{k−1}^T d_{k−1}),  (1.7)

such that s_k and y_k are determined by s_k = z_k − x_k = α_k d_k and y_k = F(x_{k+1}) − F(x_k).


The DLPM method was proposed in [2] combining the Dai-Liao CG method from [12]
and the hyperplane projection method from [54].
The subsequent particular contributions of this paper are highlighted.
(1) A modified Dai-Liao CG method with an acceleration parameter (marked with
MSMDLPM) for solving (1.1) is presented as a proper combination of the DLPM method
and an adaptation of the quasi-Newton MSM scheme from [31].
(2) Convergence properties of the proposed MSMDLPM method are investigated.
(3) Numerical results obtained using the MSMDLPM method are compared with numerical results obtained using different values of t_k in the DLPM algorithm. In this way, we show that the MSMDLPM method is more efficient than various modifications of the primary DLPM method.
(4) An application of the proposed method to the image deblurring problem is described. Also, numerical results obtained in image deblurring by the MSMDLPM method are compared with other known methods from the literature to show the efficiency of the MSMDLPM method.
The rest of this research is organized as follows. A modified Dai-Liao CG method with an acceleration parameter (termed the MSM Dai–Liao projective method, MSMDLPM) and the corresponding algorithm for solving (1.1) are proposed in Sect. 2.


Main details about the DLPM are presented in Sect. 2.1, while Sect. 2.2 describes the proposed MSMDLPM method. Convergence analysis of the MSMDLPM method is given in Sect. 3. Global convergence analysis is presented in Sect. 3.1, while Sect. 3.2 is devoted to the convergence rate analysis. Section 4 presents and discusses the results of numerical experiments. Comparative numerical results for the standard test problems and analysis of the obtained results are presented in Sect. 4.1, while an application of the presented method to image deblurring is shown in Sect. 4.2.

2 New modified Dai–Liao CG method with acceleration parameter

In this section, we introduce the MSMDLPM method of Dai-Liao CG type for solving (1.1). The method is defined as a proper combination of the DLPM, an appropriate acceleration parameter, and an adaptation of the quasi-Newton MSM scheme from [31]. The leading idea is to improve the parameter t_k from (1.7) by unifying the DLPM and MSM approaches and defining the new iteration x_{k+1} as in the DLPM.

2.1 Dai-Liao projection method (DLPM)

The Dai-Liao projection method (DLPM) was proposed in [2] using the Dai–Liao CG parameter defined by (1.7) and

t_k = p ‖y_{k−1}‖² / (s_{k−1}^T y_{k−1}) − q (s_{k−1}^T y_{k−1}) / ‖s_{k−1}‖²,  p ≥ 1/4,  q ≤ 0.  (2.1)

The CG parameter β_k^{DL} in (1.7) involves two known CG parameters. Namely, if p = 2 and q = 0 in (2.1), then t_k becomes

t_k = 2 ‖y_{k−1}‖² / (y_{k−1}^T s_{k−1}).  (2.2)

Substituting t_k from (2.2) into (1.7) yields the CG parameter β_k (denoted by CG-DESCENT) defined by Hager and Zhang in [24]. Also, if p = 1 and q = 0 in (2.1), then β_k^{DL} in (1.7) is reduced to the conjugate gradient parameter β_k (denoted by DK) defined by Dai and Kou in [18], where t_k is equal to

t_k = ‖y_{k−1}‖² / (y_{k−1}^T s_{k−1}).  (2.3)

Because of the large influence of t_k in the Dai–Liao class of CG methods, there are various approximations of it (see [8, 11, 12, 30, 40, 55], etc.). Several additional variants in defining the t_k parameter are listed in the rest of this section. Babaie-Kafaki and Ghanbari [11] presented two optimal choices of t_k, in the form

t_k = s_{k−1}^T y_{k−1} / ‖s_{k−1}‖² + ‖y_{k−1}‖ / ‖s_{k−1}‖  (2.4)

and

t_k = ‖y_{k−1}‖ / ‖s_{k−1}‖,  (2.5)


while the corresponding Dai-Liao type conjugate gradient methods were marked with M1 and M2. Lotfi and Hosseini in [40] proposed the following expression for determining t_k:

t_k = max{ t_k^*, θ ‖y_{k−1}‖² / (s_{k−1}^T y_{k−1}) },  (2.6)

where

t_k^* = [ (1 − h_k ‖g_{k−1}‖^j ‖s_{k−1}‖) g_k^T s_{k−1} + ( g_k^T y_{k−1} / (y_{k−1}^T s_{k−1}) ) h_k ‖g_{k−1}‖^j ‖s_{k−1}‖² ]
        / [ g_k^T s_{k−1} + ( g_{k−1}^T s_{k−1} / (s_{k−1}^T y_{k−1}) ) h_k ‖g_{k−1}‖^j ‖s_{k−1}‖² ],  (2.7)

h_k = C + max{ −s_{k−1}^T y_{k−1} / ‖s_{k−1}‖², 0 } ‖g_{k−1}‖^{−j},  (2.8)

and θ (> 1/4), C, j are three positive constants.


However, the new iterative point x_{k+1} is defined using the hyperplane projection method from [54] instead of the classical iteration principle (1.3). Based on the direction d_k, it is necessary to determine an auxiliary iteration z_k = x_k + s_k which serves to generate the hyperplane

H_k = { x ∈ R^n : F(z_k)^T (x − z_k) = 0 }.  (2.9)

The point z_k is determined such that the hyperplane H_k generated by that point separates the current iteration x_k from the set of solutions of the system (1.1). However, the way in which the point z_k is determined requires the use of a line search that determines the step length α_k so that the hyperplane H_k generated with z_k = x_k + α_k d_k strictly separates the current iteration from the solution set. In order to achieve this requirement, the condition

F(z_k)^T (x_k − z_k) > 0  (2.10)

is imposed, which guarantees this property of the point z_k. On the other hand, due to the monotonicity of the function F(x), the following is valid for each solution x* of the system (1.1):

F(z_k)^T (x* − z_k) = (F(z_k) − F(x*))^T (x* − z_k) ≤ 0.  (2.11)

In virtue of (2.10) and (2.11), it follows that the hyperplane H_k strictly separates the current iteration x_k from the solution x*. Using this fact, Solodov and Svaiter in [54] proposed the next iteration x_{k+1}, which is obtained by projecting the current iteration x_k onto the hyperplane H_k, as follows:

x_{k+1} = x_k − [ F(z_k)^T (x_k − z_k) / ‖F(z_k)‖² ] F(z_k).  (2.12)

Algorithm 1 is a formal description of the DLPM method proposed by Abubakar and Kumam [2].


Algorithm 1 Descent Dai–Liao projection method (DLPM).

Require: Given x_0 ∈ R^n, r, σ ∈ (0, 1), a stopping tolerance ε > 0, and set k := 0.
1: Compute F(x_k). If ‖F(x_k)‖ ≤ ε, then STOP, else go to Step 2.
2: Compute d_k using (1.6). STOP if d_k = 0.
3: Compute the step length α_k based on (1.4).
4: Compute z_k = x_k + α_k d_k. If F(z_k) = 0, STOP, else go to the next step.
5: Compute x_{k+1} using (2.12).
6: Determine t_{k+1} using (2.1).
7: Compute β_{k+1}^{DL} using (1.7).
8: Set k := k + 1 and go to Step 1.

2.2 The proposed MSMDLPM method

The starting point of our iterative scheme dates back to the MSM scheme defined in [31]. The adaptation of the MSM method to the problem (1.1) becomes

x_{k+1} = x_k − (τ_k + τ_k² − τ_k³)(γ_k^{MSMDLPM})^{−1} F(x_k),  (2.13)

where γ_k^{MSMDLPM} > 0 is an appropriately defined acceleration parameter and τ_k ∈ (0, 1] is an additional parameter. The expression 1 + τ_k − τ_k² ≥ 1 will be shortly denoted by ζ_k and used for additional acceleration. Since τ_k + τ_k² − τ_k³ = τ_k ζ_k, according to this convention the search direction in (2.13) is defined as

d_k = −ζ_k (γ_k^{MSMDLPM})^{−1} F(x_k).  (2.14)

The value τ_k is determined using (1.4), i.e., τ_k = α_k.
Now, the first-order Taylor expansion which approximates F(x_{k+1}) is developed to find γ_{k+1}^{MSMDLPM}:

F(x_{k+1}) ≈ F(x_k) + ∇F(ξ)(x_{k+1} − x_k)
           ≈ F(x_k) + ∇F(ξ) τ_k d_k
           ≈ F(x_k) − ∇F(ξ) τ_k ζ_k (γ_k^{MSMDLPM})^{−1} F(x_k),  ξ ∈ [x_k, x_{k+1}].  (2.15)

Following [31], the approximation

∇F(ξ) ≈ γ_{k+1}^{MSMDLPM} I

is assumed. Then the approximation in (2.15) becomes

F(x_{k+1}) − F(x_k) ≈ −γ_{k+1}^{MSMDLPM} τ_k ζ_k (γ_k^{MSMDLPM})^{−1} F(x_k).  (2.16)

From (2.16) and y_k = F(x_{k+1}) − F(x_k), it follows

y_k = −γ_{k+1}^{MSMDLPM} τ_k ζ_k (γ_k^{MSMDLPM})^{−1} F(x_k).  (2.17)

By multiplying both sides of (2.17) by y_k^T, the acceleration parameter γ_{k+1}^{MSMDLPM} can be expressed from (2.17) in the following way:

γ_{k+1}^{MSMDLPM} = − y_k^T y_k / ( τ_k ζ_k (γ_k^{MSMDLPM})^{−1} y_k^T F(x_k) ).  (2.18)

In order to fulfil the Second-Order Necessary Condition and the Second-Order Sufficient Condition, inappropriate values γ_{k+1}^{MSMDLPM} < 0 will be replaced by γ_{k+1}^{MSMDLPM} = 1 [31, 56].
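As a small illustration, (2.18) together with the safeguard above can be written as the following Python helper (a sketch with our own naming; y, F_x, tau, zeta, gamma_prev correspond to y_k, F(x_k), τ_k, ζ_k, γ_k^{MSMDLPM}). The full method sketch after Algorithm 2 reuses it.

```python
import numpy as np

def acceleration_parameter(y, F_x, tau, zeta, gamma_prev):
    """Acceleration parameter (2.18) with the safeguard gamma < 0 -> gamma = 1."""
    denom = tau * zeta * (1.0 / gamma_prev) * np.dot(y, F_x)
    if denom == 0.0:
        return 1.0  # degenerate case; fall back to the identity scaling
    gamma = -np.dot(y, y) / denom
    return gamma if gamma > 0.0 else 1.0
```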


The unknown parameter t_k in (2.1) can be determined from (1.6), (1.7) and (2.14). Putting (1.7) into (1.6), d_k becomes

d_k = −F(x_k) + β_k^{DL} d_{k−1}
    = −F(x_k) + [ F(x_k)^T y_{k−1} / (d_{k−1}^T y_{k−1}) − t_k F(x_k)^T s_{k−1} / (d_{k−1}^T y_{k−1}) ] d_{k−1}.  (2.19)

In view of (2.14), after the replacement d_k = −ζ_k (γ_k^{MSMDLPM})^{−1} F(x_k) in (2.19) it can be obtained

−ζ_k (γ_k^{MSMDLPM})^{−1} F(x_k) = −F(x_k) + [ F(x_k)^T y_{k−1} / (d_{k−1}^T y_{k−1}) ] d_{k−1}
                                  − t_k [ F(x_k)^T s_{k−1} / (d_{k−1}^T y_{k−1}) ] d_{k−1}.  (2.20)

The following equation is acquired by multiplying (2.20) by F(x_k)^T:

−ζ_k (γ_k^{MSMDLPM})^{−1} F(x_k)^T F(x_k) = −F(x_k)^T F(x_k) + [ F(x_k)^T y_{k−1} / (d_{k−1}^T y_{k−1}) ] F(x_k)^T d_{k−1}
                                           − t_k [ F(x_k)^T s_{k−1} / (d_{k−1}^T y_{k−1}) ] F(x_k)^T d_{k−1}.  (2.21)

Accordingly, on the basis of (2.21), it follows

t_k [ F(x_k)^T s_{k−1} / (d_{k−1}^T y_{k−1}) ] F(x_k)^T d_{k−1}
  = −‖F(x_k)‖² + ζ_k (γ_k^{MSMDLPM})^{−1} ‖F(x_k)‖² + [ F(x_k)^T y_{k−1} / (d_{k−1}^T y_{k−1}) ] F(x_k)^T d_{k−1}
  = ( ζ_k (γ_k^{MSMDLPM})^{−1} − 1 ) ‖F(x_k)‖² + [ F(x_k)^T y_{k−1} / (d_{k−1}^T y_{k−1}) ] F(x_k)^T d_{k−1}.

For the sake of simplicity, denote

R_k(F) := ( ζ_k (γ_k^{MSMDLPM})^{−1} − 1 ) ‖F(x_k)‖².  (2.22)

Then the parameter t_k can be expressed in the form

t_k = [ R_k(F) + ( F(x_k)^T y_{k−1} / (d_{k−1}^T y_{k−1}) ) F(x_k)^T d_{k−1} ]
      / [ ( F(x_k)^T s_{k−1} / (d_{k−1}^T y_{k−1}) ) F(x_k)^T d_{k−1} ].  (2.23)

Since s_{k−1} = α_{k−1} d_{k−1} = τ_{k−1} d_{k−1} and τ_{k−1} ∈ (0, 1], after the substitution d_{k−1} = s_{k−1}/τ_{k−1} in (2.23), and after simplification, it can be obtained

t_k = [ R_k(F) + ( F(x_k)^T y_{k−1} / (s_{k−1}^T y_{k−1}) ) F(x_k)^T s_{k−1} ] / [ ( F(x_k)^T s_{k−1} )² / ( s_{k−1}^T y_{k−1} ) ]
    = [ R_k(F) s_{k−1}^T y_{k−1} + F(x_k)^T y_{k−1} F(x_k)^T s_{k−1} ] / ( F(x_k)^T s_{k−1} )².  (2.24)
To ensure the descent condition in the novel DL method, the definition of t_k in (2.24) is modified using principles from [7, 40] into the final value

t_k^{MSMDLPM} = max{ t_k, θ ‖y_{k−1}‖² / (s_{k−1}^T y_{k−1}) },  θ > 1/4.  (2.25)

Considering t_k := t_k^{MSMDLPM} in the definition of β_k^{DL} in (1.7), we propose a novel improvement of the Dai-Liao CG parameter β_k^{DL}, which is subject to the following rule:

β_k^{MSMDLPM} = F(x_k)^T y_{k−1} / (d_{k−1}^T y_{k−1}) − t_k^{MSMDLPM} F(x_k)^T s_{k−1} / (d_{k−1}^T y_{k−1}).  (2.26)

The iterative method from the general DLPM class determined by β_k^{MSMDLPM} is denoted as the MSM Dai-Liao projective method, or MSMDLPM in short.
Algorithm 2 is a formal description of the proposed MSMDLPM method.

Algorithm 2 MSM Dai–Liao projective method (MSMDLPM).

Require: Choose an initial point x_0 ∈ R^n, constants r, σ ∈ (0, 1), a stopping tolerance ε > 0, and set k := 0.
1: Compute F(x_k). If ‖F(x_k)‖ ≤ ε, then STOP, else go to Step 2.
2: Compute the search direction d_k using (1.6). STOP if d_k = 0.
3: Compute the step length α_k based on (1.4) and put τ_k = α_k.
4: Compute z_k = x_k + τ_k d_k. If F(z_k) = 0, then x_{k+1} = z_k, STOP, else go to the next step.
5: Compute x_{k+1} using (2.12).
6: Compute F(x_{k+1}), s_k = z_k − x_k = τ_k d_k and y_k = F(x_{k+1}) − F(x_k).
7: Determine γ_{k+1}^{MSMDLPM} using (2.18). If γ_{k+1}^{MSMDLPM} < 0, then γ_{k+1}^{MSMDLPM} = 1.
8: Determine t_{k+1}^{MSMDLPM} using (2.25).
9: Compute β_{k+1}^{MSMDLPM} using (2.26).
10: Set k := k + 1 and go to Step 1.
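For readers who want to experiment, the following Python sketch assembles Algorithm 2 from the pieces above: the line search (1.4), the projection (2.12), the acceleration parameter (2.18) via the acceleration_parameter helper, and the safeguarded t_k (2.25) and β_k (2.26). All names are our own assumptions, and this is a minimal sketch, not the authors' Matlab implementation; it reuses derivative_free_line_search and acceleration_parameter from the earlier sketches.

```python
import numpy as np

def msmdlpm(F, x0, r=0.6, sigma=0.01, eps=1e-6, theta=0.26, max_iter=100000):
    """Minimal sketch of Algorithm 2 (MSMDLPM)."""
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    d = -Fx                        # k = 0 case of (1.6)
    gamma = 1.0                    # initial acceleration parameter
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= eps:                                 # Step 1
            return x
        tau = derivative_free_line_search(F, x, d, sigma=sigma, r=r)  # Step 3
        z = x + tau * d                                               # Step 4
        Fz = F(z)
        if np.linalg.norm(Fz) == 0.0:
            return z
        x_new = x - (np.dot(Fz, x - z) / np.dot(Fz, Fz)) * Fz         # Step 5, (2.12)
        F_new = F(x_new)                                              # Step 6
        s, y = z - x, F_new - Fx
        zeta = 1.0 + tau - tau**2
        gamma = acceleration_parameter(y, Fx, tau, zeta, gamma)       # Step 7, (2.18)
        # Step 8: t from (2.24), safeguarded by (2.25). tau_{k+1} is not yet known,
        # so the latest zeta is reused inside R_k(F) -- an implementation choice.
        R = (zeta / gamma - 1.0) * np.dot(F_new, F_new)
        sy, Fs, Fy = np.dot(s, y), np.dot(F_new, s), np.dot(F_new, y)
        t = (R * sy + Fy * Fs) / Fs**2 if Fs != 0.0 else 0.0
        if sy > 0.0:  # monotonicity gives s^T y >= 0; the safeguard needs it positive
            t = max(t, theta * np.dot(y, y) / sy)
        # Step 9: beta from (2.26) and the next direction from (1.6).
        dy = np.dot(d, y)
        beta = (Fy - t * Fs) / dy if dy != 0.0 else 0.0  # restart if d^T y vanishes
        d = -F_new + beta * d
        x, Fx = x_new, F_new
    return x
```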

Our next goal is to show that the MSMDLPM method is more efficient not only than the DLPM method (Algorithm 1) but also than some of its modifications. Many researchers have paid attention to improving the efficiency of the DL method by defining new approximations of t_k [11, 18, 24, 40]. Based on the existing research, we use the DLPM algorithm as a starting point to create different variants of the DLPM algorithm based on different values of the t_k parameter. The numerical results show that the MSMDLPM algorithm achieves better results than the basic DLPM algorithm and various modifications of the DLPM algorithm. The names of the corresponding algorithms are created from the base names related to the conjugate parameter β_k by adding the suffix PM (projection method), because the DLPM method itself is created by an appropriate combination of the DL method and the projection method.
If t_k in Step 6 of Algorithm 1 is replaced with:
• t_k from (2.2), then the newly obtained method is denoted by CG-DESCENT-PM;
• t_k from (2.3), then the newly obtained method is denoted by DKPM;
• t_k from (2.4), then the newly obtained method is denoted by M1PM;
• t_k from (2.6), then the newly obtained method is denoted by MDLPM.

3 Convergence analysis

In this section, we investigate the global convergence and convergence rate of the MSMDLPM
method defined in Algorithm 2.

3.1 Global convergence analysis

The global convergence will be proved in this section under standard assumptions listed in
Assumption 3.1.

Assumption 3.1 (P1) The mapping F(x) is uniformly monotone on R^n, that is,

(F(x) − F(y))^T (x − y) ≥ c ‖x − y‖²,  c > 0,  (3.1)

for all x, y ∈ R^n.
(P2) The mapping F(x) is Lipschitz continuous, that is, there exists a positive real number L such that

‖F(x) − F(y)‖ ≤ L ‖x − y‖,  (3.2)

for all x, y ∈ R^n.

It follows from Assumption (P2) that there exists a positive constant ω such that

‖F(x)‖ ≤ ω.  (3.3)

Lemma 3.1 Let the sequence {x_k} be generated by the MSMDLPM method (Algorithm 2). Then the search direction d_k satisfies the following condition:

(∀k ≥ 0)  F(x_k)^T d_k ≤ −λ ‖F(x_k)‖²,  λ > 0.  (3.4)

Proof For k = 0 and d_0 = −F(x_0), the initial approximation satisfies

F(x_0)^T d_0 = −F(x_0)^T F(x_0) = −‖F(x_0)‖².

Then (3.4) holds for λ = 1. For k ≥ 1, s_{k−1} = τ_{k−1} d_{k−1} and

d_k = −F(x_k) + β_k^{MSMDLPM} d_{k−1},

it follows

F(x_k)^T d_k = −‖F(x_k)‖² + β_k^{MSMDLPM} F(x_k)^T d_{k−1}
  = −‖F(x_k)‖² + [ F(x_k)^T y_{k−1} / (d_{k−1}^T y_{k−1}) − t_k^{MSMDLPM} F(x_k)^T s_{k−1} / (d_{k−1}^T y_{k−1}) ] F(x_k)^T d_{k−1}
  = −‖F(x_k)‖² + [ F(x_k)^T y_{k−1} / (d_{k−1}^T y_{k−1}) ] F(x_k)^T d_{k−1}
    − t_k^{MSMDLPM} τ_{k−1} ( F(x_k)^T d_{k−1} )² / ( d_{k−1}^T y_{k−1} ).  (3.5)


Two cases appear from (2.25).
Case (i): If

t_k^{MSMDLPM} = θ ‖y_{k−1}‖² / (s_{k−1}^T y_{k−1}) = θ ‖y_{k−1}‖² / (τ_{k−1} d_{k−1}^T y_{k−1}),

where θ > 1/4, then the equality (3.5) becomes

F(x_k)^T d_k = −‖F(x_k)‖² + [ F(x_k)^T y_{k−1} / (d_{k−1}^T y_{k−1}) ] F(x_k)^T d_{k−1}
  − θ [ ‖y_{k−1}‖² / (τ_{k−1} d_{k−1}^T y_{k−1}) ] · τ_{k−1} ( F(x_k)^T d_{k−1} )² / ( d_{k−1}^T y_{k−1} ),

i.e.,

F(x_k)^T d_k = [ 1 / (d_{k−1}^T y_{k−1})² ] [ −‖F(x_k)‖² (d_{k−1}^T y_{k−1})²
  + (F(x_k)^T y_{k−1})(d_{k−1}^T y_{k−1})(F(x_k)^T d_{k−1})
  − θ ‖y_{k−1}‖² (F(x_k)^T d_{k−1})² ].  (3.6)

Applying the inequality M^T N ≤ (‖M‖² + ‖N‖²)/2 to the equality (3.6) with

M = (1/√(2θ)) (d_{k−1}^T y_{k−1}) F(x_k)  and  N = √(2θ) (F(x_k)^T d_{k−1}) y_{k−1},

it is obtained

F(x_k)^T d_k ≤ −‖F(x_k)‖² + Q_k(F) / p_k²,

where p_k = d_{k−1}^T y_{k−1} and

Q_k(F) = (1/2) [ ‖ (p_k/√(2θ)) F(x_k) ‖² + ‖ √(2θ) (F(x_k)^T d_{k−1}) y_{k−1} ‖² ] − θ ‖y_{k−1}‖² (F(x_k)^T d_{k−1})²
       = ( p_k² / (4θ) ) ‖F(x_k)‖² + θ (F(x_k)^T d_{k−1})² ‖y_{k−1}‖² − θ ‖y_{k−1}‖² (F(x_k)^T d_{k−1})²
       = ( p_k² / (4θ) ) ‖F(x_k)‖².

Thus,

F(x_k)^T d_k ≤ −‖F(x_k)‖² + (1/(4θ)) ‖F(x_k)‖² = −( 1 − 1/(4θ) ) ‖F(x_k)‖².  (3.7)

Since θ > 1/4, the inequality (3.4) is satisfied for all k ≥ 1 with λ = 1 − 1/(4θ) in (3.7).
Case (ii): Let R_k(F) be defined again as in (2.22). If

t_k^{MSMDLPM} = [ R_k(F) s_{k−1}^T y_{k−1} + F(x_k)^T y_{k−1} F(x_k)^T s_{k−1} ] / ( F(x_k)^T s_{k−1} )²
             = [ R_k(F) τ_{k−1} d_{k−1}^T y_{k−1} + τ_{k−1} F(x_k)^T y_{k−1} F(x_k)^T d_{k−1} ] / ( τ_{k−1}² ( F(x_k)^T d_{k−1} )² )
             = [ R_k(F) d_{k−1}^T y_{k−1} + F(x_k)^T y_{k−1} F(x_k)^T d_{k−1} ] / ( τ_{k−1} ( F(x_k)^T d_{k−1} )² ),  (3.8)

then the equality (3.5) is of the form

F(x_k)^T d_k = −‖F(x_k)‖² + [ F(x_k)^T y_{k−1} F(x_k)^T d_{k−1} ] / ( d_{k−1}^T y_{k−1} )
  − [ R_k(F) d_{k−1}^T y_{k−1} + F(x_k)^T y_{k−1} F(x_k)^T d_{k−1} ] / ( τ_{k−1} ( F(x_k)^T d_{k−1} )² )
    · τ_{k−1} ( F(x_k)^T d_{k−1} )² / ( d_{k−1}^T y_{k−1} )
  = −‖F(x_k)‖² + [ F(x_k)^T y_{k−1} F(x_k)^T d_{k−1} ] / ( d_{k−1}^T y_{k−1} ) − R_k(F)
    − [ F(x_k)^T y_{k−1} F(x_k)^T d_{k−1} ] / ( d_{k−1}^T y_{k−1} )
  = −‖F(x_k)‖² − R_k(F)
  = −ζ_k (γ_k^{MSMDLPM})^{−1} ‖F(x_k)‖².  (3.9)

Since γ_k^{MSMDLPM} > 0 and τ_k ∈ (0, 1], then ζ_k > 0, and the inequality (3.4) is satisfied with λ = ζ_k (γ_k^{MSMDLPM})^{−1} in (3.9). ∎
Lemma 3.2 ([54]) Suppose that Assumption 3.1 holds. If x* is the solution of the system F(x*) = 0 and the sequence {x_k} is generated by Algorithm 2, then

‖x_{k+1} − x*‖² ≤ ‖x_k − x*‖² − ‖x_{k+1} − x_k‖²,  (3.10)

and the sequence {x_k} is bounded. Also, either the sequence {x_k} is finite with the solution of (1.1) being the last iteration, or {x_k} is infinite and

∑_{k=0}^{∞} ‖x_{k+1} − x_k‖² < ∞,  (3.11)

which implies

lim_{k→∞} ‖x_{k+1} − x_k‖ = 0.

Lemma 3.3 Let the sequence {x_k} be generated by MSMDLPM. Then

lim_{k→∞} τ_k ‖d_k‖ = 0.  (3.12)

Proof Using the definition (2.12) of x_{k+1} and the condition (1.4), we obtain

‖x_{k+1} − x_k‖ = ‖ x_k − [ F(z_k)^T (x_k − z_k) / ‖F(z_k)‖² ] F(z_k) − x_k ‖
               = [ |F(z_k)^T (x_k − z_k)| / ‖F(z_k)‖² ] ‖F(z_k)‖ = |F(z_k)^T (x_k − z_k)| / ‖F(z_k)‖,

i.e.,

‖x_{k+1} − x_k‖ = |τ_k F(z_k)^T d_k| / ‖F(z_k)‖ = τ_k |F(z_k)^T d_k| / ‖F(z_k)‖
               ≥ σ τ_k² ‖F(z_k)‖ ‖d_k‖² / ‖F(z_k)‖
               = σ τ_k² ‖d_k‖² ≥ 0.

Then Lemma 3.2 implies

lim_{k→∞} ‖x_{k+1} − x_k‖ = 0.  (3.13)

Finally, the last two relations give (3.12). ∎

Lemma 3.4 Suppose F(x) is Lipschitz continuous and the sequence {x_k} generated by the MSMDLPM method is bounded. Then there exists a constant M > 0 such that

(∀k ∈ N ∪ {0})  ‖d_k‖ ≤ M.  (3.14)

Proof For k = 0, from d_0 = −F(x_0) and (3.3), it follows

‖d_0‖ = ‖−F(x_0)‖ ≤ ω.  (3.15)

For k ≥ 1, from (1.6), (2.26) and s_{k−1} = τ_{k−1} d_{k−1}, it follows

‖d_k‖ = ‖ −F(x_k) + β_k^{MSMDLPM} d_{k−1} ‖
      = ‖ −F(x_k) + [ F(x_k)^T y_{k−1} / (d_{k−1}^T y_{k−1}) ] d_{k−1} − t_k^{MSMDLPM} [ F(x_k)^T s_{k−1} / (d_{k−1}^T y_{k−1}) ] d_{k−1} ‖
      = ‖ −F(x_k) + [ F(x_k)^T y_{k−1} / (τ_{k−1} d_{k−1}^T y_{k−1}) ] τ_{k−1} d_{k−1} − t_k^{MSMDLPM} [ F(x_k)^T s_{k−1} / (τ_{k−1} d_{k−1}^T y_{k−1}) ] τ_{k−1} d_{k−1} ‖
      = ‖ −F(x_k) + [ F(x_k)^T y_{k−1} / (s_{k−1}^T y_{k−1}) ] s_{k−1} − t_k^{MSMDLPM} [ F(x_k)^T s_{k−1} / (s_{k−1}^T y_{k−1}) ] s_{k−1} ‖.  (3.16)

To finish the proof, we use (3.1), (3.2), and (3.3). Two cases appear from (2.25).
Case (i): If t_k^{MSMDLPM} = θ ‖y_{k−1}‖² / (s_{k−1}^T y_{k−1}), then the equality (3.16) initiates

‖d_k‖ = ‖ −F(x_k) + [ F(x_k)^T y_{k−1} / (s_{k−1}^T y_{k−1}) ] s_{k−1} − θ [ ‖y_{k−1}‖² / (s_{k−1}^T y_{k−1}) ] · [ F(x_k)^T s_{k−1} / (s_{k−1}^T y_{k−1}) ] s_{k−1} ‖
      ≤ ‖F(x_k)‖ + [ ‖F(x_k)‖ ‖y_{k−1}‖ / (s_{k−1}^T y_{k−1}) ] ‖s_{k−1}‖ + θ [ ‖y_{k−1}‖² / (s_{k−1}^T y_{k−1}) ] · [ ‖F(x_k)‖ ‖s_{k−1}‖ / (s_{k−1}^T y_{k−1}) ] ‖s_{k−1}‖
      ≤ ‖F(x_k)‖ + [ ‖F(x_k)‖ L ‖s_{k−1}‖ / (c ‖s_{k−1}‖²) ] ‖s_{k−1}‖ + θ [ L² ‖s_{k−1}‖² / (c ‖s_{k−1}‖²) ] · [ ‖F(x_k)‖ ‖s_{k−1}‖ / (c ‖s_{k−1}‖²) ] ‖s_{k−1}‖
      = ‖F(x_k)‖ + (L/c) ‖F(x_k)‖ + θ (L²/c²) ‖F(x_k)‖
      ≤ ( 1 + L/c + θ L²/c² ) ω.  (3.17)
c c
Case (ii): If
Rk (F) sk−1
T y
k−1 + F(x k ) yk−1 F(x k ) sk−1
T T
tkMSMDLPM = , (3.18)
(F(xk )T sk−1 )2
where Rk (F) is given by (2.22), then the equality (3.16) gives

 F(xk )T yk−1

dk  = −F(xk ) + T sk−1
 sk−1 yk−1

123
Journal of Global Optimization

Rk (F) sk−1 
k−1 + F(x k ) yk−1 F(x k ) sk−1 F(x k ) sk−1
T y T T T

−  2 · T sk−1 
F(xk ) sk−1
T s y
k−1 k−1


 F(xk )T yk−1

= −F(xk ) + T sk−1
 sk−1 yk−1

Rk (F) sk−1
T y
k−1 + F(x k ) yk−1 F(x k ) sk−1
T T
sk−1  
− · T 
F(xk )T sk−1 sk−1 yk−1 
F(xk )yk−1 
≤ F(xk ) + T y
sk−1 
sk−1 k−1
Rk (F) sk−1 yk−1  + F(xk )yk−1 F(xk )sk−1  sk−1 
+ · T
F(xk )sk−1  sk−1 yk−1
F(xk )yk−1 
= F(xk ) + T y
sk−1 
sk−1 k−1
 
ζk (γkMSMDLPM )−1 − 1 F(xk )sk−1 yk−1  + yk−1 F(xk )sk−1 
+ T y
,
sk−1 k−1

and further
F(xk ) L sk−1 
dk  ≤ F(xk ) + sk−1 
csk−1 2
 
ζk (γkMSMDLPM )−1 − 1 F(xk )sk−1 Lsk−1  + Lsk−1 F(xk )sk−1 
+
csk−1 2
 
L ζk (γk
MSMDLPM )−1 − 1 LF(xk ) + LF(xk )
= F(xk ) + F(xk ) +
c
c
L ζk (γkMSMDLPM )−1 L
= 1+ + F(xk )
c c

L ζk (γkMSMDLPM )−1 L
≤ 1+ + ω.
c c
(3.19)
If a constant M on the basis of (3.15), (3.17), and (3.19) is defined by
 

L L2 L ζk (γkMSMDLPM )−1 L
M := max ω, 1 + + θ 2 ω, 1 + + ω ,
c c c c

then inequality (3.14) holds for every k ∈ N ∪ {0}.

The following theorem establishes the global convergence of the MSMDLPM method.

Theorem 3.1 If {x_k} and {z_k} are sequences generated by the MSMDLPM method (Algorithm 2), then

lim inf_{k→∞} ‖F(x_k)‖ = 0.  (3.20)

Proof If (3.20) does not hold, then there exists a constant ν > 0 such that

(∀k ≥ 0)  ‖F(x_k)‖ ≥ ν.  (3.21)


We first prove that the sequence {dk } is bounded by given conditions. By the Cauchy-Schwarz
inequality and (3.4), it is obtained

λF(xk )2 ≤ −F(xk )T dk ≤ F(xk )dk . (3.22)

From the inequalities (3.22) and (3.21) we get

(∀k ≥ 0) dk  ≥ λF(xk ) ≥ λν > 0. (3.23)

Therefore, using (3.12) and (3.23), it can be obtained

lim τk = 0. (3.24)
k→∞

Suppose r −1 τk does not satisfy the line search (1.4), i.e.,

−F(xk + r −1 τk dk )T dk < σ r −1 τk F(xk + r −1 τk dk )dk 2 .

Then by (3.4) and (3.3),

λF(xk )2 ≤ −F(xk )T dk (3.25)


−1 −1
= (F(xk + r τk dk ) − F(xk )) dk − F(xk + r
T
τk dk ) dk
T

−1 −1 −1
≤ Lr τk dk  + σ r
2
τk F(xk + r τk dk )dk 2
≤ Lr −1 τk dk 2 + σ r −1 τk ωdk 2
≤ (L + σ ω)r −1 τk dk 2 . (3.26)

From the previous inequality, (3.21) and Lemma 3.4, one obtains

r λF(xk )2 r λν 2
τk dk  ≥ ≥ , (3.27)
(L + σ ω)dk  (L + σ ω)M
which contradicts to (3.12). Thus, (3.20) holds.

3.2 Convergence rate analysis

Based on Theorem 3.1, the sequence {x_k} converges to a solution of the problem (1.1). Thus, assume that x_k → x* as k → ∞, where x* belongs to the solution set Ω of the problem (1.1). In order to analyze the convergence rate of the MSMDLPM method, we also need the following assumption.

Assumption 3.2 For any x* ∈ Ω, there exist positive constants m and δ such that

(∀x ∈ N(x*, δ))  m dist(x, Ω) ≤ ‖F(x)‖²,  (3.28)

where dist(x, Ω) denotes the distance from x to the solution set Ω, and N(x*, δ) := {x ∈ R^n | ‖x − x*‖ ≤ δ}.

The proof of Theorem 3.2 is similar to those in [38, 39].

Theorem 3.2 Suppose that Assumptions 3.1 and 3.2 hold, and the sequence {x_k} generated by the MSMDLPM method (Algorithm 2) converges to x* ∈ Ω. Then the sequence {dist(x_k, Ω)} converges Q-linearly to 0; hence the whole sequence {x_k} converges R-linearly to x*.


Proof Consider the closest solution to x_k, defined by v_k := arg min{ ‖x_k − v‖ | v ∈ Ω }, i.e., ‖x_k − v_k‖ = dist(x_k, Ω).
From the monotonicity of F, the following holds for z_k = x_k + τ_k d_k:

( F(z_k) − F(v_k) )^T (z_k − v_k) = F(z_k)^T (z_k − v_k) ≥ 0.

Then from (1.4) it follows

F(z_k)^T (x_k − v_k) ≥ F(z_k)^T (x_k − z_k)
  = F(z_k)^T ( x_k − x_k − τ_k d_k )
  = −τ_k F(z_k)^T d_k
  ≥ σ τ_k² ‖F(z_k)‖ ‖d_k‖² ≥ 0.  (3.29)

Taking (2.12) into account, one concludes

‖x_{k+1} − v_k‖² = ‖ x_k − [ F(z_k)^T (x_k − z_k) / ‖F(z_k)‖² ] F(z_k) − v_k ‖²
  = ‖ x_k − v_k − [ F(z_k)^T (x_k − z_k) / ‖F(z_k)‖² ] F(z_k) ‖²
  = ‖x_k − v_k‖² − 2 [ F(z_k)^T (x_k − v_k) · F(z_k)^T (x_k − z_k) ] / ‖F(z_k)‖²
    + [ F(z_k)^T (x_k − z_k) · F(z_k)^T (x_k − z_k) ] / ‖F(z_k)‖².  (3.30)

From the first inequality in (3.29) and (3.30), we have

‖x_{k+1} − v_k‖² ≤ ‖x_k − v_k‖² − 2 [ F(z_k)^T (x_k − z_k) · F(z_k)^T (x_k − z_k) ] / ‖F(z_k)‖²
    + [ F(z_k)^T (x_k − z_k) · F(z_k)^T (x_k − z_k) ] / ‖F(z_k)‖²
  = ‖x_k − v_k‖² − [ F(z_k)^T (x_k − z_k) ]² / ‖F(z_k)‖².  (3.31)

From the definition of dist(x_k, Ω) and the inequalities (3.31) and (3.29), it holds that

dist(x_{k+1}, Ω)² ≤ ‖x_{k+1} − v_k‖²
  ≤ ‖x_k − v_k‖² − [ F(z_k)^T (x_k − z_k) ]² / ‖F(z_k)‖²
  ≤ dist(x_k, Ω)² − [ σ τ_k² ‖F(z_k)‖ ‖d_k‖² ]² / ‖F(z_k)‖²
  = dist(x_k, Ω)² − σ² τ_k⁴ ‖F(z_k)‖² ‖d_k‖⁴ / ‖F(z_k)‖²
  = dist(x_k, Ω)² − σ² τ_k⁴ ‖d_k‖⁴.  (3.32)

Then by (3.23) and (3.28), it follows

dist(x_{k+1}, Ω)² ≤ dist(x_k, Ω)² − σ² τ_k⁴ λ⁴ ‖F(x_k)‖⁴
  ≤ dist(x_k, Ω)² − σ² τ_k⁴ λ⁴ m² dist(x_k, Ω)²
  = ( 1 − σ² τ_k⁴ λ⁴ m² ) dist(x_k, Ω)².  (3.33)

The last inequality shows that the sequence {dist(x_k, Ω)} converges Q-linearly to 0. Therefore, the sequence {x_k} converges R-linearly to x*. ∎

4 Numerical experiments

This section is divided into two parts. In the first part, numerical results are given to demonstrate the efficiency of the proposed MSMDLPM method (Algorithm 2) in solving large-scale nonlinear systems of monotone equations in relation to some variants of the DLPM method. An application of the MSMDLPM method to image restoration is presented in the second part of this section.

4.1 Numerical results on test examples

In this section, we present numerical results obtained by comparing the MSMDLPM method, defined in Algorithm 2, the DLPM method [2], and the methods CG-DESCENT-PM, DKPM, M1PM, and MDLPM, generated by using different values of the t_k parameter in the DLPM method.
For all methods we use the same values of the constants σ = 0.01 and r = 0.6 in the line search (1.4). The stopping condition for all algorithms is that the CPU time exceeds 1000 seconds or ‖F(x_k)‖ ≤ 10⁻⁶, i.e., ε = 10⁻⁶. In addition to the above constants, the DLPM method also uses the constants p = 0.8 and q = −0.1. In the MSMDLPM and MDLPM methods, the constant θ has the value 0.26, while the constants j and C in the MDLPM method have the same values as in [40], i.e., j = θ‖F(x_{k−1})‖ and C = 1.
The codes of all the algorithms (methods) investigated in this study were written in Matlab R2017a and run on a personal computer equipped with a 2.0 GHz Intel Core i3 CPU and 8 GB of RAM under the Windows 10 operating system. During the experiment, a collection of eighteen (18) test problems is solved with different dimensions of variables n ∈ {10³, 2 × 10³, 3 × 10³, 5 × 10³, 6 × 10³, 8 × 10³, 10⁴, 1.5 × 10⁴, 3 × 10⁴, 6 × 10⁴}. The initial points are defined within the collection of eighteen test problems given below.

Problem 4.1 (Logarithmic function [2, 77]) The function F(x) is defined as:
F_i(x) = log(x_i + 1) − x_i / n,  i = 1, 2, ..., n.
Initial point: x_0 = (1, 1, ..., 1)^T.

Problem 4.2 (Trigexp function [77]) The function F(x) is defined as:
F_1(x) = 3x_1³ + 2x_2 − 5 + sin(x_1 − x_2) sin(x_1 + x_2),
F_i(x) = −x_{i−1} e^{x_{i−1}−x_i} + x_i(4 + 3x_i²) + 2x_{i+1} + sin(x_i − x_{i+1}) sin(x_i + x_{i+1}) − 8,  i = 2, 3, ..., n − 1,
F_n(x) = −x_{n−1} e^{x_{n−1}−x_n} + 4x_n − 3.
Initial point: x_0 = (10, 10, ..., 10)^T.

Problem 4.3 (Strictly convex 1 function [2, 46, 77]) The function F(x) is defined as:
F_i(x) = e^{x_i} − 1,  i = 1, 2, ..., n.
Initial point: x_0 = (1/n, 2/n, ..., 1)^T.

Problem 4.4 (Discrete boundary value problem [77]) The function F(x) is defined as:
F_1(x) = 2x_1 + 0.5h²(x_1 + h)³ − x_2,
F_i(x) = 2x_i + 0.5h²(x_i + hi)³ − x_{i−1} + x_{i+1},  i = 2, 3, ..., n − 1,
F_n(x) = 2x_n + 0.5h²(x_n + hn)³ − x_{n−1},  where h = 1/(n + 1).
Initial point: x_0 = (h(h − 1), h(2h − 1), ..., h(nh − 1))^T.

Problem 4.5 (Strictly convex 2 function [46]) The function F(x) is defined as:
F_i(x) = (i/10)(e^{x_i} − 1),  i = 1, 2, ..., n.
Initial point: x_0 = (1, 1, ..., 1)^T.

Problem 4.6 ([33]) The function F(x) is defined as:
F_i(x) = 2c(x_i − 1) + 4x_i ( ∑_{j=1}^{n} x_j² − x_i ),  i = 1, 2, ..., n,  where c = 10⁻⁵.
Initial point: x_0 = (0.9, 0.9, 0.9, ..., 0.9)^T.

Problem 4.7 (Strictly convex 3 function [33]) The function F(x) is defined as:
F_i(x) = e^{x_i} − 2,  i = 1, 2, ..., n.
Initial point: x_0 = (1, 1, ..., 1)^T.

Problem 4.8 ([33]) The function F(x) is defined as:
F_1(x) = 2x_1 − x_2 + e^{x_1} − 1,
F_i(x) = −x_{i−1} + 2x_i − x_{i+1} + e^{x_i} − 1,  i = 2, 3, ..., n − 1,
F_n(x) = −x_{n−1} + 2x_n + e^{x_n} − 1.
Initial point: x_0 = (1, 1, ..., 1)^T.

Problem 4.9 (Exponential 1 function [3, 35]) The function F(x) is defined as:
F_1(x) = e^{x_1} − 1,
F_i(x) = e^{x_i} + x_i − 1,  i = 2, 3, ..., n.
Initial point: x_0 = (1, 1/2, 1/3, ..., 1/n)^T.

Problem 4.10 (Strictly convex 4 function [3, 35]) The function F(x) is defined as:
F_i(x) = (i/n) e^{x_i} − 1,  i = 1, 2, ..., n.
Initial point: x_0 = (1, 1, ..., 1)^T.

Problem 4.11 ([3]) The function F(x) is defined as:
F_i(x) = x_i − sin(|x_i − 1|),  i = 1, 2, ..., n.
Initial point: x_0 = (1, 1, ..., 1)^T.

Problem 4.12 ([80, 81]) The function F(x) is defined as:
F(x) = Ax + B, where A is the n × n tridiagonal matrix with 5/2 on the main diagonal and 1 on the sub- and superdiagonals, and B = (1, 1, ..., 1)^T.
Initial point: x_0 = (0.1, 0.1, ..., 0.1)^T.

Problem 4.13 ([34, 59]) The function F(x) is defined as:
F_i(x) = 2x_i − sin(|x_i − 1|),  i = 1, 2, ..., n.
Initial point: x_0 = (1, 1, ..., 1)^T.

Problem 4.14 ([34]) The function F(x) is defined as:
F_i(x) = x_i − sin(|x_i| − 1),  i = 1, 2, ..., n.
Initial point: x_0 = (1, 1, ..., 1)^T.

Problem 4.15 (Broyden tridiagonal function [32]) The function F(x) is defined as:
F_1(x) = (3 − x_1)x_1 − 2x_2 + 1,
F_i(x) = (3 − x_i)x_i − x_{i−1} + 2x_{i+1} + 1,  i = 2, 3, ..., n − 1,
F_n(x) = (3 − x_n)x_n − x_{n−1} + 1.
Initial point: x_0 = (−1, −1, ..., −1)^T.

Problem 4.16 ([76]) The function F(x) is defined as:
F_1(x) = (1/3)x_1³ + (1/2)x_2²,
F_i(x) = −(1/2)x_i² + (i/3)x_i³ + (1/2)x_{i+1}²,  i = 2, ..., n − 1,
F_n(x) = −(1/2)x_n² + (n/3)x_n³.
Initial point: x_0 = (−1, −1, ..., −1)^T.

Problem 4.17 (Exponential 2 function [10]) The function F(x) is defined as:
F_1(x) = e^{x_1} − 1,
F_i(x) = e^{x_i} + x_{i−1} − 1,  i = 2, 3, ..., n.
Initial point: x_0 = (1, 1/2, 1/3, ..., 1/n)^T.
Problem 4.18 ([16]) The function F(x) is defined as:
F(x) = Ax + B, where A is the n × n tridiagonal matrix with 5 on the main diagonal, 2 on the subdiagonal, and 3 on the superdiagonal, and B = (−1, −2, ..., −n)^T.
Initial point: x_0 = (0.1, 0.1, ..., 0.1)^T.
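To indicate how such test problems plug into the solver sketch from Sect. 2.2, the following Python fragment encodes Problems 4.1 and 4.3 and runs the msmdlpm sketch on Problem 4.3. This is a minimal usage example under the same assumed names, not the authors' Matlab test harness.

```python
import numpy as np

def logarithmic(x):
    # Problem 4.1: F_i(x) = log(x_i + 1) - x_i / n
    return np.log(x + 1.0) - x / x.size

def strictly_convex_1(x):
    # Problem 4.3: F_i(x) = exp(x_i) - 1
    return np.exp(x) - 1.0

n = 1000
x0 = np.arange(1, n + 1) / n          # initial point of Problem 4.3
sol = msmdlpm(strictly_convex_1, x0)
print(np.linalg.norm(strictly_convex_1(sol)))  # expected <= 1e-6 on convergence
```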

From the tests, data were collected for the number of iterations, the number of function evaluations, the CPU time, and the norm of the objective function F at the approximate solution x* (Norm).
Tables 3, 4, 5, 6, 7, 8 show the numerical results (the number of iterations, the number of function evaluations, the CPU time, and Norm) for the DLPM, CG-DESCENT-PM, M1PM, DKPM, MDLPM, and MSMDLPM methods. These tables are placed in the Appendix. Figures 1, 2, and 3 were created based on the numerical results shown in the mentioned tables.
Specific conclusions about the effectiveness of particular methods on particular classes of problems are not reliable. For example, MSMDLPM is the best by all criteria and for all dimensions of the test problem 4.1, except dimension n = 30000. Consider Problems 4.13 and 4.14: they define similar systems of equations, but the results of the methods differ. In Problem 4.13, the DKPM method is convincing, while in Problem 4.14 DKPM is ineffective, and MSMDLPM, M1PM, and CG-DESCENT-PM give better results compared to DKPM.
Study of the numerical results collected in Tables 3–8 reveals that the MSMDLPM method solves 89.44% of problems successfully (DLPM – 87.22%, CG-DESCENT-PM – 86.67%, M1PM – 64.44%, DKPM – 86.11%, MDLPM – 83.89%). However, 14 problems (7.78%) were not solved by any of the observed methods. In addition, a summary of the reported results from Tables 3 to 8 is presented in Table 1 to outline the performance of each of the six methods (DLPM, CG-DESCENT-PM, M1PM, DKPM, MDLPM, and MSMDLPM) concerning the number of iterations, the number of function evaluations, and the CPU time. It is observed from Table 1 that the MSMDLPM method solved 38.33% (69 out of 180) of the tested problems in the conducted experiments with the least number of iterations compared to the DLPM, CG-DESCENT-PM, M1PM, DKPM, and MDLPM methods, which recorded 16.67% (30 out of 180), 0.56% (1 out of 180), 0% (0 out of 180), 12.22% (22 out of 180), and 3.89% (7 out of 180), respectively. The results included in Table 1 show that 20.56% (37 out of 180) of the problems were solved with the same number of iterations by either 2, 3, 4, 5, or all six

Fig. 1 Dolan and Moré performance profile with respect to the number of iterations

Fig. 2 Dolan and Moré performance profile with respect to the number of function evaluations

methods involved in the experiments. Table 1 also shows that the MSMDLPM method solved 47.78% (86 out of 180) of all the problems with the minimum number of function evaluations compared to the DLPM, CG-DESCENT-PM, M1PM, DKPM, and MDLPM methods, which recorded 15.56% (28 out of 180), 0.56% (1 out of 180), 0% (0 out of 180), 12.78% (23 out of 180), and 8.89% (16 out of 180), respectively. Besides, the table indicates that 6.67% (12 out of 180) of the problems were solved with an equal number of function evaluations by either 2, 3, 4, 5, or all six methods in the experiments. Finally, it is observed from the summarized results in Table 1 that the MSMDLPM method solved 31.67% (57 out of 180) of all the problems with the least CPU time with regard to the DLPM, CG-DESCENT-PM, M1PM, DKPM, and MDLPM methods, which solved 16.11% (29 out of 180), 1.11% (2 out of 180), 2.78% (5 out of 180), 13.33% (24 out of 180), and 11.11% (20 out of 180), respectively. Table 1 indicates that


Fig. 3 Dolan and Moré performance profile with respect to the CPU time (in seconds)

Table 1 Summary of results from Tables 3–8 displaying the total number of achieved minimal values for Iter, FEv, and CPU time by the six methods

Method          Iter  Percentage   FEv  Percentage   CPU  Percentage
DLPM             30     16.67%      28    15.56%      29    16.11%
CG-DESCENT-PM     1      0.56%       1     0.56%       2     1.11%
M1PM              0      0%          0     0%          5     2.78%
DKPM             22     12.22%      23    12.78%      24    13.33%
MDLPM             7      3.89%      16     8.89%      20    11.11%
MSMDLPM          69     38.33%      86    47.78%      57    31.67%
Undecided        37     20.56%      12     6.67%      29    16.11%

16.11% (29 out of 180) of the problems were solved with an equal CPU time by 2 or more methods in the experiments. The "Undecided" category shows how many times two or more methods achieved the best result, so that there was no single winner for a particular problem.
To compare the numerical results, the performance profiles (in log₂ scale) proposed by Dolan and Moré [23] for the number of iterations, the number of function evaluations, and the CPU time were observed. Denote by n_y (resp. n_x) the number of solvers (resp. the number of test problems). Given a set of solvers S and a set of problems P, the performance of the solver y on the problem x is defined by t_{x,y}. In our case, t_{x,y} stands for the number of iterations, the number of function evaluations, or the CPU time required to solve the problem x by the solver y. For any pair (x, y) of the problem x ∈ P and the solver y ∈ S, the performance ratio is given by

r_{x,y} = t_{x,y} / min{ t_{x,y} : y ∈ S },


where the performance t_{x,y} is compared to the best performance achieved by any included solver on this problem [23]. The performance profile is defined by

ρ_y(τ) = (1/n_x) size{ x ∈ P : r_{x,y} ≤ τ },

where ρ_y(τ) is the probability that a performance ratio r_{x,y} of the solver y ∈ S is within the factor τ ∈ R of the best possible ratio. The function ρ_y(τ) is the cumulative distribution function for the performance ratio [23]. We plot the fractions ρ_y(τ) ≤ 1 for which each method achieves the least number of iterations, function evaluations, and CPU time with the probability τ. On the graphs of the selected performance profile, the upper curve corresponds to the method that shows the best performance.
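The computation behind Figs. 1–3 can be reproduced with a short routine like the following Python sketch; the array names and the failure convention (np.inf marking an unsolved problem) are our own assumptions.

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More profile. T is an (n_x problems) x (n_y solvers) array of costs
    t_{x,y} (iterations, function evaluations, or CPU time; np.inf = failure).
    Returns rho with rho[i, y] = fraction of problems with r_{x,y} <= taus[i]."""
    best = T.min(axis=1, keepdims=True)        # best performance per problem
    ratios = T / best                          # performance ratios r_{x,y}
    n_x = T.shape[0]
    return np.array([(ratios <= tau).sum(axis=0) / n_x for tau in taus])

# Toy example with 3 problems and 2 solvers:
T = np.array([[10.0, 12.0], [5.0, 5.0], [np.inf, 40.0]])
print(performance_profile(T, taus=[1.0, 2.0, 8.0]))
```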
The performance profiles of the DLPM, CG-DESCENT-PM, M1PM, DKPM, MDLPM and MSMDLPM methods are shown in Figs. 1, 2 and 3, which show that the MSMDLPM method is the most efficient in all three cases (number of iterations, number of function evaluations, and CPU time). The MSMDLPM method successfully solved 55.81% (DLPM – 18.60%, CG-DESCENT-PM – 12.21%, M1PM – 11.63%, DKPM – 22.67%, MDLPM – 8.14%) of the problems with the least number of iterations, 56.40% (DLPM – 16.86%, CG-DESCENT-PM – 0.58%, M1PM – 0.00%, DKPM – 20.35%, MDLPM – 9.88%) of the problems with the least number of function evaluations, and 39.66% (DLPM – 24.02%, CG-DESCENT-PM – 4.47%, M1PM – 6.15%, DKPM – 15.08%, MDLPM – 18.99%) of the problems in the shortest time.
It should be emphasized that if the algorithms stopped with the same number of iterations, the same number of function evaluations, or at the same time, then they are all winners for the given test problem.
By analyzing the results shown in Tables 3–8 and in Figs. 1, 2 and 3, we can conclude that the MSMDLPM method achieved the best results. This observation leads us to the final conclusion that the MSMDLPM method is the most efficient of the considered methods for solving a system of monotone nonlinear equations in terms of all three observed parameters: the number of iterations, the number of function evaluations, and the CPU time.

4.2 Application to image restoration problems

The image restoration problem plays an important role in medical sciences, biological engineering, and other areas of science and engineering (see [13, 15], etc.).
In this section, the MSMDLPM algorithm is applied to solve problems arising from compressive sensing, particularly image deblurring problems. Our main goal in applying the MSMDLPM algorithm to the problem of noise reduction and image deblurring is to find an image that matches the original image as closely as possible. The model for this problem, i.e., for achieving our goal, is defined by the following system:

b = Lx + y,  (4.1)

where b ∈ R^m represents the observed image, L is a linear map (the m × n blurring matrix), x ∈ R^n is the unknown image, and y ∈ R^m is the noise. It is well known that regularization methods are used in image restoration problems. The ℓ1-regularization is a powerful tool in image denoising and is given by:

min_{x ∈ R^n} f(x) := (1/2) ‖Lx − b‖₂² + μ ‖x‖₁,  (4.2)


Fig. 4 The original images (first column), the blurred images (second column), and the restored images by the methods MSMDLPM (third column), Algorithm 1 (fourth column), and CGD (last column)

Fig. 5 The original images (first column), the blurred images (second column), and the restored images by the methods MSMDLPM (third column), Algorithm 1 (fourth column), and CGD (last column)

where ‖·‖₂ denotes the Euclidean norm, μ is a positive regularization parameter, and ‖·‖₁ is the ℓ1-regularization term. A proof that the monotone equation (1.1) is a reformulation of (4.2) can be found in any of the articles [28, 29, 74, 75].
To demonstrate the effectiveness of the MSMDLPM algorithm in restoring blurred images, it is compared with Algorithm 1 [4] and one standard CGD algorithm [75], which is more efficient than the SGCS algorithm [74]. When implementing the MSMDLPM algorithm in these experiments, the parameters are assigned the values σ = 10⁻⁴, r = 0.5 and ε = 10⁻³. The parameters for Algorithm 1 and the CGD algorithm are chosen as in [4] and [75], respectively.
To have a correct comparison, each code was started from the same initial point x_0 = A^T b and terminated with the fulfillment of the conditions

‖x_k − x_{k−1}‖ / ‖x_{k−1}‖ < 10⁻³  or  | f(x_k) − f(x_{k−1}) | / | f(x_{k−1}) | < 10⁻³,  (4.3)


where

f(x_k) = (1/2) ‖Lx_k − b‖₂² + μ ‖x_k‖₁.

Five colored images of different sizes are considered in the experiment (first column in Figs. 4 and 5). These images are distorted using Gaussian noise with standard deviation 3.6 (second column in Figs. 4 and 5).
The following metrics are used in the paper for measuring the performance and quality of restoration by each tested algorithm: the number of iterations, the CPU time (sec), the structural similarity index (SSIM) [58], the signal-to-noise ratio (SNR), and the peak signal-to-noise ratio (PSNR) [14].
The experimental results generated by MSMDLPM, Algorithm 1, and CGD are presented in Table 2.

Table 2 The performance of the MSMDLPM, Algorithm 1, and CGD algorithms on image deblurring

Images     Size     MSMDLPM                               Algorithm 1                           CGD
                    Iter  CPU     SSIM    SNR     PSNR    Iter  CPU     SSIM    SNR     PSNR    Iter  CPU     SSIM    SNR     PSNR
Lenna      512×512  17    8.984   0.9563  22.346  27.802  25    11.047  0.9407  19.907  25.366  33    14.453  0.9491  21.172  26.613
Barbara    720×576  18    15.203  0.7069  16.731  23.273  43    28.250  0.6758  15.743  22.347  43    31.453  0.6930  16.243  22.808
Yacht      512×480  24    9.688   0.7529  17.528  23.968  45    15.266  0.7051  15.577  22.083  47    16.031  0.7360  16.431  22.893
Baboon     200×200  18    1.859   0.7035  17.657  23.243  64    4.500   0.6697  16.831  22.439  56    4.297   0.6878  17.272  22.857
Motobikes  494×494  27    12.125  0.6325  12.502  21.240  61    20.125  0.5664  11.183  20.078  57    21.266  0.5995  11.788  20.600

From Table 2, it is evident that for all the test images the restored images produced by MSMDLPM are closer to the originals than those restored by Algorithm 1 and CGD. The MSMDLPM algorithm has lower values than the other two algorithms for the number of iterations and the CPU time, and higher values for the metrics that measure the quality of the restored images: SSIM, SNR, and PSNR. Figures 4 and 5 show the restoration results for different images obtained by the methods MSMDLPM (third column), Algorithm 1 (fourth column), and CGD (last column). As can be observed from these figures, all considered algorithms achieved similar quality of restoration, but MSMDLPM is faster. It is thus concluded that MSMDLPM is the winner. Taken together, the MSMDLPM algorithm provides a valid approach to solving ℓ1-norm minimization problems and image deblurring problems, and its performance is better than the performance of the competitive algorithms.

5 Conclusion

This paper presents an accelerated modified Dai–Liao projection method for solving nonlinear monotone systems of equations with an application to image deblurring problems. The MSM Dai–Liao projection method (MSMDLPM) defined in this paper can be considered an extension of the Dai-Liao method for unconstrained optimization problems in combination with the projection method and an acceleration parameter that represents an approximation of the Hessian matrix. Under mild assumptions, the method has been shown to be globally convergent.
The MSM Dai–Liao projection method was compared with similar methods, and the numerical results confirm the efficiency of the MSMDLPM method in relation to the observed DLPM, CG-DESCENT-PM, M1PM, DKPM, and MDLPM methods for solving nonlinear monotone systems of equations. Also, the MSMDLPM algorithm is successfully applied to image deblurring problems. The numerical results obtained during image deblurring using the MSMDLPM algorithm were compared with the numerical results of Algorithm 1 and the CGD algorithm, and it can be concluded that the MSMDLPM algorithm achieved better results in all experiments according to each criterion (number of iterations, CPU time, SSIM, SNR, and PSNR). The experiments were carried out on various samples of images and the results are recorded in Table 2 and Figs. 4 and 5, which clearly show that the MSMDLPM approach is more effective with respect to the tested methods.
Most generally, the paper shows that hybridization of quasi-Newton methods with a very
popular class of CG methods can initiate efficient optimization methods. Defining hybrid methods between numerous and diverse quasi-Newton methods and CG methods is an open area for future research.
Due to the importance of the image restoration problem, various scientific methods and procedures have been developed. Recently, many researchers have been working on image processing (including image restoration) using artificial neural networks [78].
At the moment, our goal is to apply the image restoration procedure using conjugate gradient methods. Image restoration based on artificial neural networks is a promising topic for further research.

Appendix

Tables 3–8 show the obtained numerical results for Test Problems (TP) 4.1–4.18: the number of iterations (Iter), the number of function evaluations (FEv), the CPU time in seconds, and Norm, for the methods DLPM, CG-DESCENT-PM, M1PM, DKPM, MDLPM, and MSMDLPM. In order to save space, the dimensions n of the TP are given in the second column as n/1000, and the numerical values of the parameters Iter, FEv, CPU, and Norm are presented in the condensed form "Iter/FEv/CPU/Norm". The sign '*' indicates that the corresponding method did not reach convergence.

Table 3 Numerical results in TP 4.1–4.3 of DLPM, CG-DESCENT-PM, M1PM, DKPM, MDLPM and MSMDLPM methods for a number of iterations (Iter), number of function
evaluations (FEv), CPU time (sec), and Norm
n
TP DLPM [2] CG-DESCENT-PM [24] M1PM [11] DKPM [18] MDLPM [40] MSMDLPM
1000
Iter/FEv/CPU/Norm Iter/FEv/CPU/Norm Iter/FEv/CPU/Norm Iter/FEv/CPU/Norm Iter/FEv/CPU/Norm Iter/FEv/CPU/Norm

4.1 1 9/28/0.047/3.00E-07 15/74/0.047/2.96E-07 15/74/0.063/2.96E-07 5/16/0/3.60E-08 29/88/0.125/5.25E-07 5/16/0.047/3.60E-08


2 9/28/0/4.10E-07 15/74/0.016/4.10E-07 15/74/0.094/4.10E-07 5/16/0.031/1.55E-08 30/91/0.125/6.19E-07 5/16/0.031/1.55E-08
Table 3 continued
TP n/1000 DLPM [2] CG-DESCENT-PM [24] M1PM [11] DKPM [18] MDLPM [40] MSMDLPM
(n is given in multiples of 1000; each method entry has the form Iter/FEv/CPU/Norm)
4.1 3 9/28/0.031/4.96E-07 15/74/0.078/4.99E-07 15/74/0.25/4.99E-07 5/16/0/1.01E-08 30/91/0.109/9.88E-07 5/16/0.031/1.01E-08
5 9/28/0.047/6.34E-07 15/74/0.141/6.41E-07 15/74/0.094/6.41E-07 5/16/0/6.26E-09 31/94/0.438/9.04E-07 5/16/0.047/6.26E-09
6 9/28/0.031/6.93E-07 15/74/0.281/7.01E-07 15/74/0.266/7.01E-07 5/16/0/5.37E-09 32/97/0.453/5.74E-07 5/16/0.063/5.37E-09
8 9/28/0.094/7.97E-07 15/74/0.219/8.08E-07 15/74/0.25/8.08E-07 5/16/0.047/4.27E-09 32/97/0.266/7.86E-07 5/16/0.094/4.27E-09
10 9/28/0.141/8.90E-07 15/74/0.328/9.02E-07 15/74/0.219/9.02E-07 5/16/0.109/3.62E-09 33/100/0.391/5.19E-07 5/16/0.078/3.62E-09
15 10/31/0.156/1.09E-07 16/79/0.438/3.09E-07 16/79/0.375/3.09E-07 5/16/0.063/2.74E-09 33/100/0.438/8.23E-07 5/16/0.172/2.74E-09
30 10/32/0.438/5.71E-07 15/74/0.594/3.83E-07 15/74/0.625/3.83E-07 5/17/0.188/2.03E-07 37/113/0.906/7.51E-07 14/58/0.594/9.81E-07
60 11/37/0.859/4.53E-07 17/86/1.125/7.38E-07 17/86/1.453/7.38E-07 6/22/0.688/1.07E-08 39/120/1.719/9.46E-07 6/21/0.344/4.46E-09
4.2 1 32/349/0.078/3.94E-07 41/518/0.219/6.88E-07 41/463/0.125/9.77E-07 33/328/0.078/9.41E-07 55/510/0.266/8.47E-07 24/242/0.063/6.83E-07
2 36/407/0.094/5.60E-07 51/680/0.484/7.77E-07 37/427/0.156/9.73E-07 37/404/0.109/4.85E-07 8/88/0.281/0.00E+00 25/259/0.078/9.50E-07
3 45/537/0.641/9.31E-07 12/150/0.484/0.00E+00 43/491/0.703/9.89E-07 8/103/0.219/0.00E+00 47/412/0.469/9.35E-07 8/103/0.141/0.00E+00
5 33/359/0.516/6.64E-07 50/670/0.859/5.43E-07 14/185/0.625/0.00E+00 35/377/0.797/4.55E-07 12/141/0.328/0.00E+00 26/284/0.547/6.40E-07
6 11/147/0.313/0.00E+00 12/176/0.25/0.00E+00 14/193/0.219/0.00E+00 10/136/0.281/0.00E+00 10/120/0.188/0.00E+00 10/136/0.313/0.00E+00
8 12/165/0.297/0.00E+00 46/594/1.125/6.69E-07 42/509/0.75/9.39E-07 11/154/0.344/0.00E+00 15/205/0.516/0.00E+00 37/394/0.656/9.73E-07
10 40/450/1.391/7.43E-07 43/556/1.984/4.65E-07 47/571/1.453/6.83E-07 12/173/0.563/0.00E+00 10/134/0.453/0.00E+00 37/392/1.219/8.00E-07
15 42/519/1.891/9.58E-07 22/333/1.516/0.00E+00 18/274/1.047/0.00E+00 41/471/1.828/5.37E-07 10/144/0.656/0.00E+00 31/371/1.406/8.46E-07
30 49/658/4.094/3.94E-07 50/693/5.281/6.79E-07 53/693/4.031/9.01E-07 20/315/2.109/0.00E+00 10/153/1.203/0.00E+00 20/315/2.297/0.00E+00
60 57/760/8.422/6.81E-07 63/934/11.328/6.77E-07 62/873/9.797/8.53E-07 59/777/9.656/4.99E-07 10/165/2.266/0.00E+00 55/711/8.734/6.31E-07
4.3 1 19/83/0.016/1.58E-07 30/209/0.094/6.11E-07 29/158/0.016/3.99E-07 20/85/0.063/4.21E-07 28/93/0.031/8.90E-07 18/75/0.031/6.62E-07
2 17/68/0/8.75E-07 26/170/0.063/9.45E-07 23/123/0/8.30E-07 21/94/0/4.93E-07 29/95/0.016/7.18E-07 18/75/0/9.68E-07
3 17/68/0.031/6.04E-07 31/215/0.078/3.60E-07 26/141/0.047/9.76E-07 23/104/0.047/4.49E-07 30/98/0.031/6.52E-07 19/79/0.047/4.25E-07
5 17/68/0.047/1.28E-07 30/205/0.266/6.95E-07 26/138/0.109/7.83E-07 22/96/0.109/4.29E-07 30/98/0.094/9.85E-07 19/79/0.047/5.53E-07
6 22/109/0.172/6.06E-07 26/172/0.203/9.74E-07 28/153/0.188/8.86E-07 23/103/0.156/4.53E-07 31/101/0.203/5.83E-07 19/79/0.078/6.07E-07
8 22/117/0.188/7.96E-07 30/204/0.391/8.10E-07 26/139/0.25/7.29E-07 23/103/0.172/8.98E-07 31/101/0.234/7.56E-07 19/79/0.156/7.03E-07
10 18/74/0.172/1.35E-07 31/229/0.438/9.36E-07 28/150/0.328/7.08E-07 22/96/0.25/5.22E-07 31/101/0.313/9.20E-07 19/79/0.172/7.87E-07
15 17/69/0.172/2.53E-07 26/158/0.375/6.16E-07 32/172/0.563/4.47E-07 25/125/0.375/4.61E-07 32/104/0.406/7.07E-07 19/79/0.281/9.66E-07
30 18/77/0.531/3.73E-07 27/170/0.609/7.99E-07 27/145/0.625/7.83E-07 20/82/0.391/7.29E-07 42/165/0.719/5.50E-07 19/78/0.438/9.20E-07
60 21/91/0.703/7.18E-07 30/197/1.328/7.81E-07 27/144/0.984/6.01E-07 21/86/0.563/4.12E-07 43/155/1.094/5.30E-07 20/82/0.594/4.61E-07
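Each method cell in Tables 3–8 packs four values as Iter/FEv/CPU/Norm, with a star in place of a value when a run did not terminate successfully; the starred CPU entries cluster just above 1000 s, which suggests (as an assumption on our part) a CPU-time cap of roughly 1000 seconds. The following minimal Python sketch, an illustration rather than the authors' code, parses one such cell under that convention:

    from typing import NamedTuple, Optional

    class RunResult(NamedTuple):
        iters: Optional[int]    # number of iterations, None if the run failed
        fevals: Optional[int]   # number of function evaluations
        cpu: Optional[float]    # CPU time in seconds (usually reported even for failures)
        norm: Optional[float]   # final residual norm

    def parse_cell(cell: str) -> RunResult:
        # Cells look like "24/242/0.063/6.83E-07"; failed runs use "*" fields,
        # e.g. "*/*/1000.016/*" or, in a few rows, "*/*/*/*".
        it, fe, cpu, nrm = cell.split("/")
        return RunResult(
            iters=None if it == "*" else int(it),
            fevals=None if fe == "*" else int(fe),
            cpu=None if cpu == "*" else float(cpu),
            norm=None if nrm == "*" else float(nrm),
        )

    print(parse_cell("24/242/0.063/6.83E-07"))
    print(parse_cell("*/*/1000.016/*"))

For instance, parse_cell("24/242/0.063/6.83E-07") recovers the MSMDLPM entry for TP 4.2 with n = 1000, while parse_cell("*/*/1000.016/*") yields a record whose fields are None apart from cpu.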
Table 4 Numerical results in TP 4.4–4.6 of the DLPM, CG-DESCENT-PM, M1PM, DKPM, MDLPM and MSMDLPM methods: number of iterations (Iter), number of function evaluations (FEv), CPU time (sec), and Norm
TP n/1000 DLPM [2] CG-DESCENT-PM [24] M1PM [11] DKPM [18] MDLPM [40] MSMDLPM
(n is given in multiples of 1000; each method entry has the form Iter/FEv/CPU/Norm)
4.4 1 25/187/0.094/9.85E-07 39/400/0.266/9.66E-07 32/238/0.109/7.63E-07 24/178/0.078/9.31E-07 25/126/0.078/8.44E-07 18/121/0.094/7.84E-07
2 18/119/0.234/4.19E-07 31/291/0.375/9.49E-07 28/207/0.281/8.04E-07 22/168/0.344/9.47E-07 27/151/0.25/8.42E-07 17/110/0.188/4.75E-07
3 20/152/0.469/6.72E-07 33/325/0.672/9.88E-07 24/178/0.5/5.68E-07 18/127/0.594/9.34E-07 26/138/0.375/4.88E-07 16/98/0.375/4.97E-07
5 21/144/0.609/6.50E-07 31/296/1.016/7.87E-07 22/158/0.484/9.63E-07 23/175/0.734/7.20E-07 21/110/0.609/8.61E-07 14/93/0.453/7.61E-07
6 21/148/0.641/7.58E-07 27/247/1/8.29E-07 22/158/0.656/8.31E-07 21/153/0.781/7.63E-07 22/123/0.469/5.22E-07 14/85/0.484/6.18E-07
8 18/116/0.734/8.31E-07 28/275/1.531/9.23E-07 */*/1000.016/* 18/135/0.891/9.97E-07 21/106/0.531/9.36E-07 15/87/0.625/5.16E-07
10 18/124/1.078/9.68E-07 27/253/1.953/8.50E-07 24/178/1.484/5.64E-07 16/130/1.172/8.74E-07 22/108/0.875/5.99E-07 16/101/1.109/6.26E-07
15 18/137/1.656/8.04E-07 25/239/2.438/8.45E-07 24/188/2.344/6.03E-07 18/146/1.813/8.44E-07 23/124/1.5/5.03E-07 16/96/1.313/6.50E-07
30 16/107/2.172/8.10E-07 26/256/5.75/9.61E-07 19/141/3.188/8.44E-07 21/159/3.531/7.12E-07 21/110/2.391/7.73E-07 14/79/2.188/6.59E-07
60 19/141/5.406/5.68E-07 24/237/10.406/9.77E-07 20/147/6.375/8.51E-07 16/115/5.063/5.62E-07 18/91/3.547/7.16E-07 12/70/3.25/5.41E-07

Table 4 continued
TP n/1000 DLPM [2] CG-DESCENT-PM [24] M1PM [11] DKPM [18] MDLPM [40] MSMDLPM
(each method entry has the form Iter/FEv/CPU/Norm)
4.5 1 12569/151239/6.016/9.90E-07 11866/166445/6.219/9.65E-07 */*/1000.016/* 20533/256717/10.828/9.99E-07 */*/1000.016/* 1617/18455/1.469/0.00E+00
2 27605/381798/19.359/9.98E-07 24445/367266/22.938/8.67E-07 */*/1000.016/* 23417/328004/15.344/9.98E-07 */*/1000.016/* 19535/247569/15.656/0.00E+00
3 48352/677477/115.156/9.94E-07 35414/567386/101.391/9.55E-07 */*/1000.016/* 35437/531063/100.656/9.98E-07 */*/1000.016/* 6227/83393/16.656/0.00E+00
5 76519/1148405/394.969/9.43E-07 58770/1000015/349.406/9.77E-07 */*/1000.031/* 59604/953014/347/9.96E-07 */*/1000.016/* 11279/164157/68.578/0.00E+00
6 71256/1141045/430.922/9.98E-07 76312/1298277/506.125/8.89E-07 */*/1000.031/* 70839/1134777/462.391/9.78E-07 */*/1000.031/* 60446/913135/400.344/0.00E+00
8 99398/1591475/748.938/9.80E-07 94132/1695131/775.75/9.98E-07 */*/1000.016/* 97668/1655165/756.563/9.97E-07 */*/1000.016/* 99/1738/1.234/0.00E+00
10 */*/1000.031/* */*/1000.031/* */*/1000.031/* */*/1000.031/* */*/1000.031/* 24/435/0.688/0.00E+00
15 */*/1000.016/* */*/1000.016/* */*/1000.031/* */*/1000.047/* */*/1000.031/* 19/373/0.828/0.00E+00
30 */*/1000.016/* */*/1000.047/* */*/1000.047/* */*/1000.031/* */*/1000.031/* 18/365/1.688/0.00E+00
60 */*/1000.047/* */*/1000.016/* */*/1000.078/* */*/1000.031/* */*/1000.016/* */*/1000.047/*
4.6 1 18/359/3.688/5.71E-07 14/293/3.063/4.22E-07 14/293/2.969/4.22E-07 20/400/4.109/5.40E-07 14/241/2.547/3.54E-07 20/400/5.203/5.36E-07
2 19/400/12.625/8.39E-07 8/176/5.609/3.40E-07 8/176/6.891/3.40E-07 16/337/14.719/2.15E-07 8/148/4.359/1.56E-07 16/337/12.172/2.13E-07
3 21/462/44.391/4.05E-07 11/252/26.219/6.57E-07 11/252/25/6.57E-07 17/374/36.547/6.32E-07 11/212/19.156/5.25E-07 17/374/37.938/6.30E-07
5 21/483/107.594/8.71E-07 12/287/60.234/1.73E-07 12/287/62.406/1.73E-07 18/414/94.063/3.65E-07 12/243/54.625/1.35E-07 18/414/96.234/3.65E-07
6 18/415/122.859/2.15E-07 22/548/191.281/7.98E-07 22/548/160.203/7.98E-07 13/300/95.922/9.38E-07 22/464/134.578/4.06E-07 13/300/93.125/9.35E-07
8 25/600/301.5/5.95E-07 14/349/186.109/5.93E-07 14/349/174.891/5.93E-07 21/504/266.438/4.73E-07 14/297/150.438/4.78E-07 21/504/259.531/4.73E-07
10 18/433/332.672/4.62E-07 23/596/668.344/6.32E-07 23/596/453.75/6.32E-07 14/337/276.828/2.47E-07 22/486/373.938/8.73E-07 14/337/280.656/2.47E-07
15 23/576/966.281/4.75E-07 11/286/485.063/1.09E-07 11/286/467.719/1.09E-07 19/476/868.797/2.37E-07 11/246/413.719/6.36E-08 19/476/839.344/2.37E-07
30 */*/1019.469/* */*/1021.094/* */*/1019.813/* */*/1013.516/* */*/1020.922/* */*/1024.953/*
60 */*/1051.016/* */*/1078.938/* */*/1064.469/* */*/1065.578/* */*/1052.219/* */*/1090.625/*

Table 5 Numerical results in TP 4.7–4.9 of the DLPM, CG-DESCENT-PM, M1PM, DKPM, MDLPM and MSMDLPM methods: number of iterations (Iter), number of function evaluations (FEv), CPU time (sec), and Norm
TP n/1000 DLPM [2] CG-DESCENT-PM [24] M1PM [11] DKPM [18] MDLPM [40] MSMDLPM
(each method entry has the form Iter/FEv/CPU/Norm)
4.7 1 16/81/0.047/4.50E-07 9/54/0.031/2.95E-07 9/54/0.047/2.95E-07 13/66/0.031/6.49E-07 16/51/0.031/3.99E-07 8/41/0.016/6.11E-07
2 16/81/0/6.36E-07 9/54/0.047/4.18E-07 9/54/0/4.18E-07 13/66/0/9.18E-07 16/51/0.047/6.58E-07 8/41/0.031/8.64E-07
3 16/81/0.047/7.79E-07 9/54/0.031/5.12E-07 9/54/0.047/5.12E-07 14/71/0.047/3.15E-07 16/51/0.031/9.31E-07 9/46/0/1.21E-07
5 17/86/0.078/3.54E-07 9/54/0.047/6.61E-07 9/54/0.031/6.61E-07 14/71/0.047/4.07E-07 17/54/0.047/4.29E-07 9/46/0.094/1.56E-07
6 17/86/0.094/3.88E-07 9/54/0.031/7.24E-07 9/54/0.141/7.24E-07 14/71/0.031/4.45E-07 17/54/0.109/4.87E-07 9/46/0.031/1.71E-07
8 17/86/0.094/4.48E-07 9/54/0.094/8.36E-07 9/54/0.094/8.36E-07 14/71/0.109/5.14E-07 17/54/0.094/6.03E-07 9/46/0.094/1.97E-07
10 17/86/0.219/5.00E-07 9/54/0.125/9.34E-07 9/54/0.094/9.34E-07 14/71/0.234/5.75E-07 17/54/0.094/7.21E-07 9/46/0.172/2.20E-07
15 17/86/0.375/6.13E-07 10/60/0.25/1.56E-07 10/60/0.234/1.56E-07 14/71/0.25/7.04E-07 18/57/0.281/3.35E-07 9/46/0.188/2.70E-07
30 17/86/0.563/8.67E-07 10/60/0.406/2.20E-07 10/60/0.375/2.20E-07 14/71/0.5/9.96E-07 18/57/0.359/5.34E-07 9/46/0.344/3.82E-07
60 18/91/0.703/4.31E-07 10/60/0.563/3.11E-07 10/60/0.547/3.11E-07 15/76/0.734/3.94E-07 18/57/0.516/9.46E-07 9/46/0.516/5.40E-07
4.8 1 33/252/0.031/4.14E-07 81/785/0.078/6.28E-07 */*/1000.016/* 55/478/0.125/9.15E-07 82/497/0.047/3.88E-07 79/423/0.063/8.25E-07
2 45/379/0.063/4.03E-07 54/600/0.141/8.85E-07 */*/1000.016/* 45/383/0.125/7.91E-07 78/447/0.063/9.19E-07 100/484/0.109/7.33E-07
3 41/346/0.203/6.28E-07 70/639/0.453/6.84E-07 */*/1000.031/* 72/611/0.578/8.04E-07 74/467/0.453/9.83E-07 98/485/0.453/7.22E-07
5 45/307/0.484/4.42E-07 76/874/0.75/7.47E-07 */*/1000.031/* 71/688/0.656/9.62E-07 86/518/0.625/8.80E-07 90/475/0.547/9.04E-07
6 44/319/0.297/9.00E-07 86/1216/1.016/9.93E-07 */*/1000.016/* 47/428/0.344/4.51E-07 62/441/0.344/8.99E-07 92/465/0.453/7.50E-07
8 41/309/0.375/6.33E-07 67/753/0.719/7.13E-07 */*/1000.031/* 53/472/0.578/3.21E-07 89/571/0.641/8.66E-07 91/513/0.625/7.51E-07
10 49/358/0.563/3.89E-07 58/623/1.125/8.29E-07 */*/1000.016/* 56/513/0.938/7.62E-07 80/472/1/9.78E-07 93/559/1.188/7.81E-07
15 52/387/0.922/6.02E-07 67/652/1.547/9.17E-07 */*/1000.047/* 54/510/1.313/5.56E-07 80/501/1.422/8.62E-07 95/585/1.906/7.42E-07
30 43/303/1.172/1.00E-06 73/720/3.078/9.50E-07 */*/1000.047/* 52/470/2.375/8.71E-07 84/521/2.281/5.42E-07 95/553/2.984/7.33E-07
60 61/591/3.969/6.21E-07 80/735/5.031/9.67E-07 */*/1000.047/* 64/585/4.672/5.36E-07 98/592/4.531/5.69E-07 109/565/5.219/9.30E-07
4.9 1 20/119/0.016/4.13E-07 28/239/0.047/5.72E-07 */*/1000.016/* 13/66/0/5.22E-07 22/90/0.078/4.46E-07 24/124/0.063/6.86E-07
2 20/114/0.047/8.03E-07 25/214/0.047/9.50E-07 25/173/0.016/9.06E-07 13/66/0.031/5.22E-07 23/93/0.047/3.05E-07 19/107/0.031/9.29E-07
3 19/108/0.047/5.91E-07 28/240/0.078/6.06E-07 29/205/0.109/8.02E-07 13/66/0/5.22E-07 27/122/0.063/5.27E-07 18/96/0.031/6.23E-07
5 20/113/0.109/4.15E-07 24/194/0.188/9.28E-07 28/192/0.109/6.09E-07 13/66/0.047/5.22E-07 21/97/0.172/7.46E-07 19/101/0.078/3.35E-07
6 20/113/0.094/5.15E-07 28/231/0.234/3.72E-07 23/155/0.078/8.31E-07 13/66/0.109/5.22E-07 22/122/0.188/6.80E-07 19/101/0.172/2.89E-07
8 20/113/0.172/6.72E-07 27/216/0.313/4.17E-07 26/178/0.172/7.01E-07 13/66/0.063/5.22E-07 22/91/0.188/3.80E-07 19/101/0.172/2.63E-07
10 20/113/0.25/7.82E-07 27/224/0.406/7.94E-07 20/132/0.141/8.55E-07 13/66/0.156/5.22E-07 23/94/0.172/8.12E-07 19/101/0.203/2.59E-07
15 20/113/0.313/9.44E-07 29/257/0.594/7.99E-07 27/183/0.281/6.44E-07 13/66/0.188/5.22E-07 21/89/0.313/8.33E-07 19/101/0.297/7.25E-07
30 22/145/0.641/6.54E-07 30/262/0.844/8.25E-07 25/171/0.453/4.38E-07 13/66/0.297/5.22E-07 23/95/0.484/8.40E-07 19/103/0.516/5.37E-07
60 21/123/0.781/7.15E-07 29/259/1.484/5.69E-07 24/160/0.703/4.06E-07 13/66/0.453/5.22E-07 23/96/0.766/2.10E-07 19/105/0.781/6.84E-07
Table 6 Numerical results in TP 4.10–4.12 of the DLPM, CG-DESCENT-PM, M1PM, DKPM, MDLPM and MSMDLPM methods: number of iterations (Iter), number of function evaluations (FEv), CPU time (sec), and Norm
TP n/1000 DLPM [2] CG-DESCENT-PM [24] M1PM [11] DKPM [18] MDLPM [40] MSMDLPM
(each method entry has the form Iter/FEv/CPU/Norm)
4.10 1 18/82/0.031/1.74E-07 32/221/0.078/4.78E-07 28/163/0.125/9.36E-07 26/122/0.047/4.31E-07 31/99/0.063/7.68E-07 21/93/0.031/3.70E-07
2 18/84/0.031/8.31E-07 29/185/0.031/3.49E-07 31/176/0.078/8.08E-07 24/117/0.016/5.80E-07 33/105/0.047/9.42E-07 23/95/0.016/5.52E-07
3 24/140/0.031/4.04E-07 31/203/0.125/8.03E-07 38/220/0.75/5.29E-07 23/99/0.031/9.25E-07 32/107/0.109/9.31E-07 23/102/0.031/4.15E-07
5 18/75/0.063/8.97E-07 33/213/0.328/5.20E-07 33/183/0.328/6.84E-07 24/103/0.125/5.38E-07 34/105/0.156/6.99E-07 24/104/0.156/6.37E-07
6 21/96/0.156/4.92E-07 34/247/0.453/3.58E-07 29/155/0.219/9.14E-07 26/115/0.188/4.22E-07 34/105/0.203/9.65E-07 22/98/0.141/8.84E-07
8 21/90/0.141/3.48E-07 29/180/0.234/9.43E-07 34/187/0.234/1.75E-07 26/116/0.141/4.20E-07 35/108/0.219/7.70E-07 23/103/0.219/3.56E-07
10 23/106/0.25/9.07E-07 36/242/0.547/7.07E-07 27/146/0.344/8.14E-07 23/95/0.25/9.56E-07 38/117/0.328/5.67E-07 25/112/0.438/7.30E-07
15 19/81/0.344/7.42E-07 30/187/0.594/8.27E-07 33/176/0.344/5.79E-07 27/136/0.609/8.12E-07 39/124/0.406/6.48E-07 24/111/0.547/5.42E-07
30 23/99/0.594/7.77E-07 36/239/1.328/7.59E-07 34/186/0.828/5.32E-07 27/134/0.719/9.85E-07 41/156/0.859/8.48E-07 22/90/0.766/7.21E-07
60 24/120/0.969/2.42E-07 34/240/2.063/8.92E-07 32/171/1.359/6.49E-07 25/120/1.078/8.54E-07 42/148/1.109/7.90E-07 23/94/1.469/3.83E-07
4.11 1 19/96/0.031/4.34E-07 11/66/0.031/5.29E-07 11/66/0.031/5.29E-07 16/81/0.047/4.21E-07 18/57/0.016/5.33E-07 10/51/0.016/9.86E-07
2 19/96/0.047/6.14E-07 11/66/0/7.48E-07 11/66/0/7.48E-07 16/81/0/5.96E-07 18/57/0.047/9.13E-07 11/56/0.031/2.38E-07
3 19/96/0/7.52E-07 11/66/0.031/9.17E-07 11/66/0.063/9.17E-07 16/81/0.031/7.30E-07 19/60/0.063/4.32E-07 11/56/0.047/2.91E-07
5 19/96/0.031/9.71E-07 12/72/0.063/2.26E-07 12/72/0.188/2.26E-07 16/81/0.063/9.42E-07 19/60/0.063/6.80E-07 11/56/0.047/3.76E-07
6 20/101/0.063/4.18E-07 12/72/0.109/2.48E-07 12/72/0.203/2.48E-07 17/86/0.094/3.37E-07 19/60/0.125/7.72E-07 11/56/0.047/4.12E-07
8 20/101/0.172/4.83E-07 12/72/0.094/2.86E-07 12/72/0.141/2.86E-07 17/86/0.094/3.89E-07 19/60/0.125/9.44E-07 11/56/0.094/4.76E-07
10 20/101/0.188/5.40E-07 12/72/0.094/3.20E-07 12/72/0.109/3.20E-07 17/86/0.188/4.34E-07 20/63/0.125/3.88E-07 11/56/0.125/5.32E-07
15 20/101/0.313/6.62E-07 12/72/0.25/3.92E-07 12/72/0.234/3.92E-07 17/86/0.234/5.32E-07 20/63/0.203/5.43E-07 11/56/0.125/6.52E-07
30 20/101/0.5/9.36E-07 12/72/0.359/5.54E-07 12/72/0.391/5.54E-07 17/86/0.484/7.52E-07 20/63/0.375/9.33E-07 11/56/0.313/9.22E-07
60 21/106/0.672/5.21E-07 12/72/0.531/7.84E-07 12/72/0.641/7.84E-07 18/91/0.594/3.47E-07 21/66/0.672/5.68E-07 12/61/0.359/2.23E-07
4.12 1 22/160/9.281/8.84E-07 102/887/50.922/7.59E-07 */*/1000.094/* 60/522/27.406/5.56E-07 76/382/19.391/9.37E-07 128/592/36.594/8.51E-07
2 40/261/35.406/9.73E-07 107/876/122.828/9.56E-07 */*/1000.313/* 60/485/69.25/7.41E-07 79/412/52.609/9.58E-07 127/590/99.844/9.42E-07
3 37/257/82.313/9.45E-07 97/875/281.156/8.59E-07 */*/1000.766/* 70/582/192.75/9.70E-07 79/434/137.844/6.81E-07 132/610/238.469/8.81E-07
5 35/222/187.625/8.71E-07 101/861/733.734/9.33E-07 */*/1001.828/* 70/646/559/7.76E-07 71/384/323.422/8.82E-07 132/610/623.828/8.58E-07
6 33/202/244.156/9.13E-07 81/710/865.719/6.54E-07 */*/1002.422/* 68/560/686.281/8.75E-07 77/401/507.563/9.07E-07 129/597/878.188/9.48E-07
8 22/147/356.688/5.92E-07 */*/1006.375/* */*/1004.984/* 28/208/503.594/7.24E-07 */*/1006.375/* */*/1004.938/*
10 27/167/598.156/9.96E-07 */*/1006.859/* */*/1008.297/* */*/1009.844/* */*/1000.172/* */*/1004.656/*
15 */*/1005.297/* */*/1023.219/* */*/1019.672/* */*/1019.609/* */*/1027.531/* */*/1003.375/*
30 */*/1111.703/* */*/1138.688/* */*/1163.75/* */*/1165.438/* */*/1071.734/* */*/1076.234/*
60 */*/1111.703/* */*/1138.688/* */*/1163.75/* */*/1165.438/* */*/1071.734/* */*/1076.234/*
Table 7 Numerical results in TP 4.13–4.15 of the DLPM, CG-DESCENT-PM, M1PM, DKPM, MDLPM and MSMDLPM methods: number of iterations (Iter), number of function evaluations (FEv), CPU time (sec), and Norm
TP n/1000 DLPM [2] CG-DESCENT-PM [24] M1PM [11] DKPM [18] MDLPM [40] MSMDLPM
(each method entry has the form Iter/FEv/CPU/Norm)
4.13 1 9/47/0.25/1.79E-07 15/105/0.047/3.80E-07 15/105/0.063/3.80E-07 7/41/0.156/1.58E-07 14/46/0.031/6.70E-07 15/91/0.031/9.28E-07
2 9/47/0.031/2.54E-07 15/105/0.016/5.37E-07 15/105/0.063/5.37E-07 7/41/0.047/2.24E-07 14/46/0/9.43E-07 16/97/0/3.93E-07
3 9/47/0.047/3.10E-07 15/105/0.031/6.58E-07 15/105/0.031/6.58E-07 7/41/0/2.74E-07 15/49/0.047/3.07E-07 16/97/0.047/4.81E-07
5 9/47/0.063/4.01E-07 15/105/0.141/8.49E-07 15/105/0.109/8.49E-07 7/41/0.109/3.54E-07 15/49/0.063/4.05E-07 16/97/0.094/6.21E-07
6 9/47/0.063/4.39E-07 15/105/0.141/9.30E-07 15/105/0.125/9.30E-07 7/41/0.031/3.87E-07 15/49/0.063/4.49E-07 16/97/0.078/6.80E-07
8 9/47/0.109/5.07E-07 16/112/0.219/3.02E-07 16/112/0.359/3.02E-07 7/41/0.047/4.47E-07 15/49/0.094/5.33E-07 16/97/0.141/7.85E-07
10 9/47/0.031/5.67E-07 16/112/0.188/3.37E-07 16/112/0.25/3.37E-07 7/41/0.125/5.00E-07 15/49/0.109/5.92E-07 16/97/0.25/8.78E-07
15 9/47/0.078/6.94E-07 16/112/0.297/4.13E-07 16/112/0.219/4.13E-07 7/41/0.125/6.13E-07 15/49/0.109/7.20E-07 17/103/0.313/3.22E-07
30 9/47/0.172/9.82E-07 16/112/0.422/5.84E-07 16/112/0.453/5.84E-07 7/41/0.234/8.66E-07 16/52/0.359/2.69E-07 17/103/0.531/4.55E-07
60 10/53/0.344/1.97E-07 17/120/0.953/3.70E-07 17/120/0.906/3.70E-07 9/53/0.328/1.42E-09 16/53/0.672/6.43E-07 18/110/0.703/3.46E-07
4.14 1 18/88/0.031/6.82E-07 12/70/0.031/5.30E-07 12/70/0.063/5.30E-07 17/84/0.094/5.96E-07 20/61/0.031/5.37E-07 12/58/0.016/2.36E-07
2 18/88/0/9.65E-07 12/70/0/7.50E-07 12/70/0/7.50E-07 17/84/0.031/8.42E-07 20/61/0.031/9.32E-07 12/58/0.047/3.33E-07
3 19/93/0.047/4.65E-07 12/70/0.047/9.18E-07 12/70/0.141/9.18E-07 18/89/0/3.36E-07 21/64/0.063/4.43E-07 12/58/0.031/4.08E-07
5 19/93/0.047/6.00E-07 13/76/0.063/2.27E-07 13/76/0.141/2.27E-07 18/89/0.047/4.34E-07 21/64/0.047/7.02E-07 12/58/0.031/5.27E-07
6 19/93/0.031/6.57E-07 13/76/0.094/2.48E-07 13/76/0.109/2.48E-07 18/89/0.047/4.76E-07 21/64/0.094/7.95E-07 12/58/0.047/5.78E-07
8 19/93/0.109/7.59E-07 13/76/0.125/2.87E-07 13/76/0.109/2.87E-07 18/89/0.109/5.49E-07 21/64/0.109/9.73E-07 12/58/0.094/6.67E-07
10 19/93/0.172/8.49E-07 13/76/0.188/3.20E-07 13/76/0.188/3.20E-07 18/89/0.125/6.14E-07 22/67/0.188/4.01E-07 12/58/0.125/7.46E-07
15 20/98/0.25/5.23E-07 13/76/0.234/6.41E-07 13/76/0.203/6.41E-07 19/94/0.313/4.34E-07 24/74/0.297/3.90E-07 14/70/0.25/2.15E-07
30 18/88/0.438/4.16E-07 15/90/0.469/2.94E-07 15/90/0.484/2.94E-07 20/100/0.5/3.91E-07 25/78/0.391/6.58E-07 16/75/0.484/1.77E-07
60 21/105/0.625/7.74E-07 16/98/0.578/3.07E-07 16/98/0.578/3.07E-07 21/106/0.703/4.21E-07 26/81/0.547/3.80E-07 16/75/0.547/2.50E-07
4.15 1 40/329/0.188/8.79E-07 42/417/0.047/8.50E-07 43/369/0.063/9.31E-07 39/342/0.141/7.53E-07 44/267/0.047/9.41E-07 18/127/0.063/4.39E-07
2 32/258/0.031/7.56E-07 38/360/0.031/8.08E-07 43/366/0.094/9.73E-07 26/205/0.047/8.29E-07 44/266/0.016/9.83E-07 18/127/0.031/7.36E-07
3 38/315/0.078/9.72E-07 48/493/0.266/8.23E-07 47/408/0.219/6.54E-07 38/308/0.094/7.17E-07 47/289/0.094/6.38E-07 18/127/0.031/9.76E-07
5 37/306/0.297/6.04E-07 53/569/0.438/6.52E-07 48/418/0.547/8.32E-07 35/289/0.234/5.64E-07 44/277/0.234/9.97E-07 19/134/0.031/4.65E-07
6 31/239/0.203/6.61E-07 50/550/0.438/6.19E-07 44/376/0.281/8.18E-07 40/339/0.297/5.88E-07 46/281/0.359/9.61E-07 19/134/0.156/5.20E-07
8 35/287/0.266/8.97E-07 45/470/0.313/8.16E-07 46/393/0.266/9.42E-07 23/170/0.188/8.47E-07 44/277/0.281/8.91E-07 19/134/0.094/6.18E-07
10 35/284/0.313/7.00E-07 47/495/0.484/9.60E-07 44/377/0.469/8.48E-07 23/163/0.141/7.96E-07 47/288/0.359/5.95E-07 19/134/0.25/7.04E-07
15 36/286/0.391/5.91E-07 47/467/0.625/8.87E-07 46/394/0.625/8.57E-07 38/319/0.375/8.21E-07 51/337/0.5/8.97E-07 19/134/0.313/8.82E-07
30 31/236/0.703/9.42E-07 47/466/1.25/8.44E-07 44/373/0.859/7.75E-07 */*/1000.047/* 46/285/0.75/7.13E-07 20/141/0.563/4.51E-07
60 39/309/1.422/6.37E-07 47/469/1.906/6.64E-07 49/419/2.047/5.59E-07 38/309/1.125/9.80E-07 45/279/1.328/9.33E-07 21/149/0.594/4.02E-07
Table 8 Numerical results in TP 4.16–4.18 of the DLPM, CG-DESCENT-PM, M1PM, DKPM, MDLPM and MSMDLPM methods: number of iterations (Iter), number of function evaluations (FEv), CPU time (sec), and Norm
TP n/1000 DLPM [2] CG-DESCENT-PM [24] M1PM [11] DKPM [18] MDLPM [40] MSMDLPM
(each method entry has the form Iter/FEv/CPU/Norm)
4.16 1 16328/50408/17.688/1.00E-06 7345/22904/9.516/1.00E-06 */*/1000.016/* 14628/44440/16.672/1.00E-06 14750/45154/17.859/1.00E-06 224/1733/1.266/0.00E+00
2 16866/52763/33.391/1.00E-06 7667/25214/15.734/1.00E-06 */*/1000.016/* 15139/47177/27.969/1.00E-06 15185/46632/29.75/1.00E-06 378/3459/3.078/0.00E+00
3 17123/53667/105.859/1.00E-06 7856/26474/53.313/1.00E-06 */*/1000.016/* 15219/47947/100.047/1.00E-06 15450/47856/97.453/1.00E-06 32/520/1.375/0.00E+00
5 17598/56767/203.313/1.00E-06 8072/27654/102.703/1.00E-06 */*/1000.031/* 15771/50578/187.859/1.00E-06 15999/51921/203.375/1.00E-06 45/820/3.281/0.00E+00
6 17635/56552/239.672/1.00E-06 8197/29054/127.234/1.00E-06 */*/1000.016/* 15931/51443/226.266/1.00E-06 15818/49208/227.797/1.00E-06 57/1013/4.453/0.00E+00
8 17997/59692/327.906/1.00E-06 8354/30487/170.969/1.00E-06 */*/1000.016/* 16168/53338/307.188/1.00E-06 16036/50687/300.703/1.00E-06 */*/1000.016/*
10 18218/60572/461.375/1.00E-06 8428/30711/239.578/1.00E-06 */*/1000.047/* 16388/55150/434.266/1.00E-06 16288/50987/418.125/1.00E-06 49/975/7.641/0.00E+00
15 18548/63032/681.359/1.00E-06 8783/34132/380.734/1.00E-06 */*/1000.047/* 16822/58311/653.922/1.00E-06 16864/57110/660.016/1.00E-06 106/2079/24.328/0.00E+00
30 */*/1000.047/* 9662/42909/905.938/1.00E-06 */*/1000.094/* */*/1000.078/* 129/2318/49.063/0.00E+00 258/4979/107.563/0.00E+00
60 */*/1000.125/* */*/1000.094/* */*/1000.172/* */*/1000.078/* */*/1000.094/* 77/1801/74.75/0.00E+00
4.17 1 2638/14402/1.359/9.59E-07 3589/22899/3.078/9.54E-07 */*/1000.016/* 2690/15599/1.469/9.28E-07 3344/11131/1.297/9.64E-07 */*/1000.016/*
2 4987/26711/2.141/9.80E-07 5989/37937/4.172/8.70E-07 */*/1000.016/* 5005/28551/2.094/9.66E-07 6346/20940/1.859/9.96E-07 */*/1000.016/*
3 7230/38616/13.906/9.89E-07 8465/53099/14.5/9.45E-07 */*/1000.016/* 7245/40550/10.547/9.42E-07 9211/30351/9.141/9.38E-07 118/576/0.219/0.00E+00
5 11653/61904/39.375/9.83E-07 13258/82681/50.031/9.76E-07 */*/1000.016/* 11613/64102/36.656/9.99E-07 15101/49400/38.047/9.84E-07 121/542/0.406/0.00E+00
6 13798/72574/48.125/9.80E-07 15545/96930/69.703/9.99E-07 */*/1000.016/* 13829/75384/49.078/9.97E-07 17918/58509/49.594/9.98E-07 39/377/0.328/0.00E+00
8 18148/95128/82.891/9.93E-07 20215/125750/115.844/9.53E-07 */*/1000.016/* 18086/97930/76.141/9.97E-07 23717/77363/86.859/9.65E-07 330/1421/1.625/0.00E+00
10 22506/117909/146.047/9.99E-07 24777/154207/219.594/9.90E-07 */*/1000.047/* 22414/120780/162.375/9.74E-07 29619/96400/146.906/9.93E-07 31/195/0.375/0.00E+00
15 33302/174933/296.563/9.97E-07 36067/223468/443.688/9.11E-07 */*/1000.047/* 33084/177466/338.516/1.00E-06 43995/143002/271.984/9.68E-07 52/295/0.766/0.00E+00
30 */*/1000.047/* */*/1000.047/* */*/1000.047/* */*/1000.031/* 88103/285526/951.375/9.86E-07 62/366/1.594/0.00E+00
60 */*/1000.016/* */*/1000.047/* */*/1000.047/* */*/1000.016/* */*/1000.031/* 221/1034/7.391/0.00E+00
4.18 1 */*/1000.156/* */*/1000.156/* */*/1000.156/* */*/1000.141/* */*/1000.172/* 263/2136/120.172/0.00E+00
2 */*/1000.344/* */*/1000.375/* */*/1000.344/* */*/1000.391/* */*/1000.281/* 474/4047/616.344/0.00E+00
3 */*/1000.938/* */*/1000.766/* */*/1000.672/* */*/1000.672/* */*/1001.031/* */*/1000.75/*
5 */*/1000.578/* */*/1002.219/* */*/1002.047/* */*/1002.219/* */*/1002.563/* */*/1002.531/*
6 */*/1003.125/* */*/1002.828/* */*/1000.922/* */*/1003.375/* */*/1003.375/* */*/1004.031/*
8 */*/1006.828/* */*/1006.094/* */*/1006.719/* */*/1006.766/* */*/1006.656/* */*/1008.484/*
10 */*/1008.641/* */*/1008.734/* */*/1011.25/* */*/1008.141/* */*/1011.719/* */*/1013.766/*
15 */*/1020.313/* */*/1023.609/* */*/1027.359/* */*/1028/* */*/1025.688/* */*/1026.5/*
30 */*/1112.719/* */*/1161.219/* */*/1193.609/* */*/1153.359/* */*/1150.984/* */*/1206.828/*
60 */*/*/* */*/*/* */*/*/* */*/*/* */*/*/* */*/*/*
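Results of this kind are often condensed into the performance profiles of Dolan and Moré [23]: for each problem one forms the ratio of a method's cost (for example, CPU time) to the best cost achieved by any method, and the profile value at τ is the fraction of problems on which that ratio is at most τ. The sketch below is a hedged illustration rather than the authors' evaluation code; it computes such a profile from CPU times, treating starred (failed) runs as infinite cost and assuming at least one method solved each problem:

    import math

    def performance_profile(costs, taus):
        # costs: one dict per test problem, mapping method name -> cost
        # (e.g., CPU time), with math.inf for runs marked "*" in the tables.
        methods = list(costs[0])
        ratios = {m: [] for m in methods}
        for row in costs:
            best = min(row.values())  # finite if at least one method succeeded
            for m in methods:
                ratios[m].append(row[m] / best)
        return {m: [sum(r <= t for r in ratios[m]) / len(costs) for t in taus]
                for m in methods}

    # Toy usage on two rows read off Table 4 (TP 4.5 at n = 1000 and n = 10000).
    data = [
        {"DLPM": 6.016, "MSMDLPM": 1.469},
        {"DLPM": math.inf, "MSMDLPM": 0.688},  # DLPM failed at n = 10000
    ]
    print(performance_profile(data, taus=[1, 2, 5]))

On these two toy rows, MSMDLPM is fastest on both problems, so its profile is 1 already at τ = 1, while DLPM reaches 0.5 only at τ = 5.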

Acknowledgements The work of the second author was supported in part by the Serbian Academy of Sciences and Arts (Project Φ-96). Predrag Stanimirović is supported by the Ministry of Education, Science and Technological Development, Republic of Serbia (No. 451-03-68/2022-14/200124), as well as by the Science Fund of the Republic of Serbia (No. 7750185, Quantitative Automata Models: Fundamental Problems and Applications - QUAM).

References
1. Abdullah, H., Waziri, M.Y., Yusuf, S.O.: A double direction conjugate gradient method for solving large-scale system of nonlinear equations. J. Math. Comput. Sci. 7, 606–624 (2017)
2. Abubakar, A.B., Kumam, P.: A descent Dai-Liao conjugate gradient method for nonlinear equations.
Numer. Algorithms 81, 197–210 (2019)
3. Abubakar, A.B., Kumam, P., Mohammad, H.: A note on the spectral gradient projection method for nonlinear monotone equations with applications. Comput. Appl. Math. 39, Article 129 (2020)
4. Abubakar, A.B., Muangchoo, K., Ibrahim, A.H., Muhammad, A.B., Jolaoso, L.O., Aremu, K.O.: A new
three-term Hestenes-Stiefel type method for nonlinear monotone operator equations and image restoration.
IEEE Access 9, 18262–18277 (2021)
5. Ahookhosh, M., Amini, K., Bahrami, S.: Two derivative-free projection approaches for systems of large-
scale nonlinear monotone equations. Numer. Algor. 64(1), 21–42 (2013)
6. Al-Baali, M., Spedicato, E., Maggioni, F.: Broyden’s Quasi-Newton methods for a nonlinear system of
equations and unconstrained optimization: A review and open problems. Optim. Methods Software 29(5),
937–954 (2014)
7. Aminifard, Z., Babaie-Kafaki, S.: An optimal parameter choice for the Dai-Liao family of conjugate gradient methods by avoiding a direction of the maximum magnification by the search direction matrix. 4OR 17, 317–330 (2019)
8. Andrei, N.: A Dai-Liao conjugate gradient algorithm with clustering of eigenvalues. Numer. Algorithms
77(4), 1273–1282 (2018)
9. Argyros, I.K.: On a class of Newton-like methods for solving nonlinear equations. J. Comput. Appl. Math.
228(1), 115–122 (2009)
10. Awwal, A.M., Wang, L., Kumam, P., Mohammad, H.: A two-step spectral gradient projection method
for system of nonlinear monotone equations and image deblurring problems. Symmetry 12(6) (2020),
Article number: 874
11. Babaie-Kafaki, S., Ghanbari, R.: The Dai-Liao nonlinear conjugate gradient method with optimal param-
eter choices. European J. Oper. Res. 234(3), 625–630 (2014)
12. Babaie-Kafaki, S., Ghanbari, R.: A descent family of Dai-Liao conjugate gradient methods. Optim. Methods Softw. 29(3), 583–591 (2014)
13. Banham, M.R., Katsaggelos, A.K.: Digital image restoration. IEEE Signal Process. Mag. 14(2), 24–41
(1997)
14. Bovik, A.C.: Handbook of Image and Video Processing. Academic, New York, NY, USA (2010)
15. Chan, C.L., Katsaggelos, A.K., Sahakian, A.V.: Image sequence filtering in quantum-limited noise with
applications to low-dose fluoroscopy. IEEE Trans. Med. Imaging 12(3), 610–621 (1993)
16. Cheng, W.: A PRP type method for systems of monotone equations. Math. Comput. Model. 50, 15–20
(2009)
17. Cordero, A., Hueso, J.L., Martínez, E., Torregrosa, J.R.: Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 25(12), 2369–2374 (2012)
18. Dai, Y.H., Kou, C.X.: A nonlinear conjugate gradient algorithm with an optimal property and an improved
Wolfe line search. SIAM J. Optim. 23(1), 296–320 (2013)
19. Dai, Y.H., Liao, L.Z.: New conjugacy conditions and related nonlinear conjugate gradient methods. Appl.
Math. Optim. 43(1), 87–101 (2001)
20. Dauda, M.K., Magaji, A.S., Abdullah, H., Sabi’u, J., Halilu, A.S.: A new search direction via hybrid
conjugate gradient coefficient for solving nonlinear system of equations. Malaysian Journal of Computing
and Applied Mathematics 2, 8–15 (2019)
21. González-Lima, M.D., Montes de Oca, F.: A Newton-like method for nonlinear system of equations. Numer. Algorithms 52(3), 479–506 (2009)
22. Dauda, M.K., Usman, S., Ubale, H., Mamat, M.: An alternative modified conjugate gradient coefficient
for solving nonlinear system of equations. Open Journal of Science and Technology 2, 5–8 (2019)

23. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program.
91, 201–213 (2002)
24. Hager, W.W., Zhang, H.: A new conjugate gradient method with guaranteed descent and an efficient line
search. SIAM J. Optim. 16(1), 170–192 (2005)
25. Halilu, A.S., Waziri, M.Y.: An enhanced matrix-free method via double step length approach for solving systems of nonlinear equations. International Journal of Applied Mathematical Research 6, 147–156 (2017)
26. Halilu, A.S., Waziri, M.Y.: A transformed double step length method for solving large-scale systems of
nonlinear equations. J. Numer. Math. Stoch. 9(1), 20–32 (2017)
27. Hirsch, M.J., Pardalos, P.M., Resende, M.G.C.: Solving systems of nonlinear equations with continuous
GRASP. Nonlinear Anal. Real World Appl. 10, 2000–2006 (2009)
28. Ibrahim, A.H., Kumam, P., Abubakar, A.B., Jirakitpuwapat, W., Abubakar, J.: A hybrid conjugate gradient
algorithm for constrained monotone equations with application in compressive sensing. Heliyon 6(3),
e03466 (2020)
29. Ibrahim, A.H., Kumam, P., Kumam, W.: A family of derivative-free conjugate gradient methods for
constrained nonlinear equations and image restoration. IEEE Access 8, 162714–162729 (2020)
30. Ivanov, B., Stanimirović, P. S., Shaini, B. I., Ahmad, H., Wang, M. K.: A Novel Value for the Parameter
in the Dai-Liao-Type Conjugate Gradient Method. Journal of Function Spaces 2021 (2021), Article ID
6693401, 10 pages
31. Ivanov, B., Stanimirović, P.S., Milovanović, G.V., Djordjević, S., Brajević, I.: Accelerated multiple step-size methods for solving unconstrained optimization problems. Optimization Methods and Software (2019)
32. Koorapetse, M.S., Kaelo, P.: Globally convergent three-term conjugate gradient projection methods for
solving nonlinear monotone equations. Arab. J. Math. (Springer) 7, 289–301 (2018)
33. Koorapetse, M., Kaelo, P.: A new three-term conjugate gradient-based projection method for solving large-scale nonlinear monotone equations. Math. Model. Anal. 24(4), 550–563 (2019)
34. Koorapetse, M., Kaelo, P.: Self adaptive spectral conjugate gradient method for solving nonlinear mono-
tone equations. J. Egyptian Math. Soc. 28(1) (2020), Paper No. 4, 21 pp
35. La Cruz, W., Martínez, J.M., Raydan, M.: Spectral residual method without gradient information for solving large-scale nonlinear systems of equations: theory and experiments. Math. Comp. 75(225), 1429–1448 (2006)
36. Leong, W.J., Hassan, M.A., Yusuf, M.W.: A matrix-free quasi-Newton method for solving large-scale
nonlinear systems. Comput. Math. Appl. 62, 2354–2363 (2011)
37. Li, Q., Li, D.H.: A class of derivative-free methods for large-scale nonlinear monotone equations. IMA
J. Numer. Anal. 31, 1625–1635 (2011)
38. Liu, J., Li, S.: Multivariate spectral DY-type projection method for convex constrained nonlinear monotone
equations. Journal of Industrial & Management Optimization 13(1), 283–295 (2017)
39. Liu, J.K., Li, S.J.: A three-term derivative-free projection method for nonlinear monotone system of
equations. Calcolo 53, 427–450 (2016)
40. Lotfi, M., Hosseini, S.M.: An efficient Dai–Liao type conjugate gradient method by reformulating the
CG parameter in the search direction equation. J. Comput. Appl. Math. 371 (2020), Article 112708
41. Luo, Y.Z., Tang, G.J., Zhou, L.N.: Hybrid approach for solving systems of nonlinear equations using
chaos optimization and quasi-Newton method. Appl. Soft Comput. 8(2), 1068–1073 (2008)
42. Mo, Y., Liu, H., Wang, Q.: Conjugate direction particle swarm optimization solving systems of nonlinear
equations. Comput. Math. Appl. 57(11), 1877–1882 (2009)
43. Muhammad, K., Mamat, M., Waziri, M.Y.: A Broyden’s-like method for solving systems of nonlinear
equations. World Appl. Sci. J. 21, 168–173 (2013)
44. Osinuga, I.A., Yusuff, S.O.: Quadrature based Broyden-like method for systems of nonlinear equations.
Stat. Optim. Inf. Comput. 6, 130–138 (2018)
45. Pei, J., Dražić, Z., Dražić, M., Mladenović, N., Pardalos, P.M.: Continuous Variable Neighborhood Search
(C-VNS) for solving systems of nonlinear equations. INFORMS Journal on Computing, Articles in
advance, pp. 1–16
46. Raydan, M.: The Barzilai and Borwein gradient method for the large scale unconstrained minimization
problem. SIAM J. Optim. 7(1), 26–33 (1997)
47. Sabi’u, J.: Enhanced derivative-free conjugate gradient method for solving symmetric nonlinear equations.
International Journal of Advances in Applied Sciences 5, 50–57 (2016)
48. Sabi’u, J., Shah, A., Waziri, M.Y.: Two optimal Hager-Zhang conjugate gradient methods for solving
monotone nonlinear equations. Appl. Numer. Math. 153, 217–233 (2020)
49. Sabi’u, J., Gadu, A.M.: A Projected hybrid conjugate gradient method for solving large-scale system of
nonlinear equations. Malaysian Journal of Computing and Applied Mathematics 1, 10–20 (2018)

50. Sabi’u, J., Sanusi, U.: An efficient new conjugate gradient approach for solving symmetric nonlinear
equations. Asian Journal of Mathematics and Computer Research 12, 34–43 (2016)
51. Sabi’u, J., Waziri, M.Y.: Effective modified hybrid conjugate gradient method for large-scale symmetric
nonlinear equations. Appl. Appl. Math. 12, 1036–1056 (2017)
52. Sabi’u, J., Waziri, M.Y., Idris, A.: A new hybrid Dai-Yuan and Hestenes-Stiefel conjugate gradient parameter for solving system of nonlinear equations. MAYFEB Journal of Mathematics 1, 44–55 (2017)
53. Sharma, J.R., Guha, R.K.: Simple yet efficient Newton-like method for systems of nonlinear equations.
Calcolo 53(3), 451–473 (2016)
54. Solodov, M.V., Svaiter, B.F.: A globally convergent inexact Newton method for systems of monotone equations. In: Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, Appl. Optim. 22, pp. 355–369 (1998)
55. Stanimirović, P.S., Ivanov, B., Ma, H., Mosić, D.: A survey of gradient methods for solving nonlinear
optimization. Electronic Research Archive 28(4), 1573–1624 (2020)
56. Stanimirović, P.S., Miladinović, M.B.: Accelerated gradient descent methods with line search. Numer.
Algorithms 54, 503–520 (2010)
57. Uba, L.Y., Waziri, M.Y.: Three-step derivative-free diagonal updating method for solving large-scale
systems of nonlinear equations. J. Numer. Math. Stoch. 6, 73–83 (2014)
58. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: From error visibility
to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004)
59. Wang, S., Guan, H.: A scaled conjugate gradient method for solving monotone nonlinear equations with
convex constraints. J. Appl. Math. 2013 (2013), Article ID 286486
60. Waziri, M.Y., Ahmed, K., Sabi’u, J.: A Dai-Liao conjugate gradient method via modified secant equation
for system of nonlinear equations. Arab. J. Math. (Springer) 9, 443–457 (2020)
61. Waziri, M.Y., Ahmed, K., Sabi’u, J.: A family of Hager-Zhang conjugate gradient methods for system of
monotone nonlinear equations. Appl. Math. Comput. 361, 645–660 (2019)
62. Waziri, M.Y., Aisha, H.A.: A diagonal quasi-Newton method for system of nonlinear equations. Appl.
Math. Comput. Sci. 6, 21–30 (2014)
63. Waziri, M.Y., Leong, W.J., Hassan, M.A.: Diagonal Broyden-like method for large-scale systems of
nonlinear equations. Malays. J. Math. Sci. 6, 59–73 (2012)
64. Waziri, M.Y., Leong, W. J., Hassan, M. A., Mamat, M.: A two-step matrix-free secant method for solving
large-scale systems of nonlinear equations. J. Appl. Math. 2012, Art. ID 348654, 9 pp
65. Waziri, M.Y., Leong, W.J., Hassan, M.A., Monsi, M.: Jacobian computation-free Newton’s method for
systems of nonlinear equations. J. Numer. Math. Stoch. 2(1), 54–63 (2010)
66. Waziri, M.Y., Leong, W.J., Mamat, M.: An efficient solver for systems of nonlinear equations with singular
Jacobian via diagonal updating. Appl. Math. Sci. (Ruse) 4(69–72), 3403–3412 (2010)
67. Waziri, M.Y., Leong, W.J., Hassan, M.A., Monsi, M.: A new Newton’s Method with diagonal Jacobian
approximation for systems of nonlinear equations. J. Math. Stat. 6, 246–252 (2010)
68. Waziri, M.Y., Leong, W.J., Mamat, M., Moyi, A.U.: Two-step derivative-free diagonally Newton’s method
for large-scale nonlinear equations. World Appl. Sci. J. 21, 86–94 (2013)
69. Waziri, M.Y., Majid, Z.A.: An improved diagonal Jacobian approximation via a new quasi-Cauchy con-
dition for solving large-scale systems of nonlinear equations. J. Appl. Math. 2013, Art. ID 875935, 6
pp
70. Waziri, M.Y., Sabi’u, J.: A derivative-free conjugate gradient method and its global convergence for
solving symmetric nonlinear equations, Int. J. Math. Math. Sci. 2015, Art. ID 961487, 8 pp
71. Waziri, M.Y., Sabi’u, J.: An alternative conjugate gradient approach for large-scale symmetric nonlinear
equations. J. Math. Comput. Sci. 6, 855–874 (2016)
72. Yakubu, U.A., Mamat, M., Mohamad, M.A., Rivaie, M., Sabi’u, J.: A recent modification on Dai-Liao
conjugate gradient method for solving symmetric nonlinear equations. Far East J. Math. Sci. (FJMS) 103,
1961–1974 (2018)
73. Yan, Q.-R., Peng, X.-Z., Li, D.-H.: A globally convergent derivative-free method for solving large-scale
nonlinear monotone equations. J. Comput. Appl. Math. 234, 649–657 (2010)
74. Xiao, Y.H., Wang, Q.Y., Hu, Q.J.: Non-smooth equations based method for ℓ1-norm problems with applications to compressed sensing. Nonlinear Anal. 74(11), 3570–3577 (2011)
75. Xiao, Y., Zhu, H.: A conjugate gradient method to solve convex constrained monotone equations with
applications in compressive sensing. J. Math. Anal. Appl. 405(1), 310–319 (2013)
76. Yan, Q.R., Peng, X.Z., Li, D.H.: A globally convergent derivative-free method for solving large-scale nonlinear monotone equations. J. Comput. Appl. Math. 234, 649–657 (2010)
77. Yuan, G., Li, T., Hu, W.: A conjugate gradient algorithm for large-scale nonlinear equations and image
restoration problems. Appl. Numer. Math. 147, 129–141 (2020)

78. Zhao, H., Gallo, O., Frosio, I., Kautz, J.: Loss functions for image restoration with neural networks. IEEE
Transactions on Computational Imaging 3(1), 47–57 (2017)
79. Zheng, L., Yang, L., Liang, Y.: A conjugate gradient projection method for solving equations with convex
constraints. J. Comput. Appl. Math. 375 (2020), https://doi.org/10.1016/j.cam.2020.112781.
80. Zhou, W., Li, D.: Limited memory BFGS method for nonlinear monotone equations. J. Comput. Math.
25(1), 89–96 (2007)
81. Zhou, W.J., Li, D.H.: A globally convergent BFGS method for nonlinear monotone equations without any merit functions. Math. Comp. 77(264), 2231–2240 (2008)

Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.
