CONTROL AND OPTIMIZATION
Volume 6, Number 2, June 2016 pp. 161–173
The reviewing process of the paper was handled by Gerhard-Wilhelm Weber and Herman
Mawengkang as Guest Editors.
AHMET SAHINER, GULDEN KAPUSUZ AND NURULLAH YILMAZ
have been investigated [11]. One of the most important tools for solving these types of non-smooth (non-Lipschitz) problems is the smoothing approach. The smoothing approach is based on modifying the objective function or approximating it by smooth functions. The first study on the smoothing approach is Bertsekas's famous paper [4]. In order to improve smoothing approaches, different types of valuable techniques and algorithms have been developed [18, 16, 3, 5]. In recent years, the smoothing approach has maintained its popularity among scientists who face non-smooth optimization problems. Smoothing approaches for exact penalty functions start with [10]; for other important studies we refer to [7, 8, 15, 9, 12, 2].
In this paper, we aim to develop more efficient methods for smoothing each of the exact penalty functions F_1(x, ρ) and F_p(x, ρ), and to solve the corresponding unconstrained optimization problems

(P_1)   min_{x∈R^n} F_1(x, ρ),

and

(P_p)   min_{x∈R^n} F_p(x, ρ),   p ∈ (0, 1),

by using differentiation-based methods.
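For concreteness, the two penalty functions can be sketched in code. This is a minimal sketch under an assumption: the paper's precise definitions of F_1 and F_p are not reproduced in this excerpt, so the standard l1 and lower-order forms F_1(x, ρ) = f(x) + ρ Σ max{g_i(x), 0} and F_p(x, ρ) = f(x) + ρ Σ max{g_i(x), 0}^p are used, and the test problem, along with the names `f` and `gs`, is hypothetical.

```python
def F1(f, gs, x, rho):
    # l1 exact penalty (assumed standard form): f(x) + rho * sum_i max{g_i(x), 0}
    return f(x) + rho * sum(max(g(x), 0.0) for g in gs)

def Fp(f, gs, x, rho, p):
    # lower-order exact penalty (assumed form): f(x) + rho * sum_i max{g_i(x), 0}**p
    return f(x) + rho * sum(max(g(x), 0.0) ** p for g in gs)

# Hypothetical illustrative problem: min (x - 2)^2 subject to g(x) = x - 1 <= 0.
f = lambda x: (x - 2.0) ** 2
gs = [lambda x: x - 1.0]

print(F1(f, gs, 1.0, rho=10.0))         # feasible point: the penalty term vanishes -> 1.0
print(Fp(f, gs, 1.5, rho=10.0, p=0.5))  # infeasible point: penalized by 10 * 0.5**0.5
```

Note the non-smoothness at the constraint boundary: max{·, 0} is not differentiable at 0, and for p ∈ (0, 1) the p-th power is not even Lipschitz there, which is exactly what the smoothing of the following sections addresses.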
The next section is devoted to preliminary knowledge about smoothing approaches. In Section 3, we propose a new smoothing approach and prove some error-estimate results among the optimal objective function values of the smoothed penalty problem, the non-smooth penalty problem, and the original optimization problem. In Section 4, we present minimization algorithms for the problems (P_1) and (P_p) in order to find an approximate solution of the problem (P). In Section 5, we apply the algorithms to important test problems, compare the results obtained from (P_1) and (P_p), and show the convergence of the algorithms. In Section 6, we present some concluding remarks.
together with the exact penalty function method for solving the inequality con-
strained optimization problems.
Theorem 3.4. For a fixed r, let {τ_j} → 0 and x^j be a solution of (P̃_1) for ρ > 0. Assume that x is an accumulation point of {x^j}. Then x is an optimal solution for (P_1).
Proof. The proof follows from Corollary 1 and Theorem 3.3.
Theorem 3.5. For a fixed τ, let {r_j} → ∞ and x^j be a solution of (P̃_1) for ρ > 0. Assume that x is an accumulation point of {x^j}. Then x is an optimal solution for (P_1).
Proof. The proof follows from Corollary 1 and Theorem 3.3.
Theorem 3.6. Let x∗ be an optimal solution for the problem (P_1) and x̄ be an optimal solution for the problem (P̃_1). Then we have the following:

0 ≤ F̃_1(x̄, ρ, r, τ) − F_1(x∗, ρ) ≤ mρτ/2.

Proof. From Theorem 3.3 we have the following:

0 ≤ F̃_1(x̄, ρ, r, τ) − F_1(x̄, ρ) ≤ F̃_1(x̄, ρ, r, τ) − F_1(x∗, ρ) ≤ F̃_1(x∗, ρ, r, τ) − F_1(x∗, ρ) ≤ mρτ/2.
[Figure 1: two panels, x-axis vs. y-axis.]

Figure 1. (a) The green solid curve is the graph of q(x) = max{x, 0}, the red dotted curve is the graph of q̃_4(x, 0.3), and the blue dashed curve is the graph of q̃_4(x, 0.8). (b) The green solid curve is the graph of q(x), the red dotted curve is the graph of q̃_4(x, 0.5), and the blue dashed curve is the graph of q̃_{1.5}(x, 0.5).
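The figure illustrates how q̃_r(x, τ) approaches q(x) = max{x, 0} as its parameters vary. The paper's q̃_r is defined in Section 3 and is not reproduced in this excerpt; as a stand-in, the sketch below uses the well-known Chen–Mangasarian smoothing q̃(x, τ) = τ ln(1 + e^{x/τ}) [5], which obeys the same kind of uniform error bound, 0 ≤ q̃(x, τ) − q(x) ≤ τ ln 2.

```python
import math

def q(x):
    # the plus function q(x) = max{x, 0}
    return max(x, 0.0)

def q_smooth(x, tau):
    # Chen-Mangasarian smoothing of the plus function: tau * ln(1 + exp(x / tau)),
    # written in an overflow-safe form: ln(1 + e^t) = max(t, 0) + ln(1 + e^(-|t|)).
    t = x / tau
    return tau * (max(t, 0.0) + math.log1p(math.exp(-abs(t))))

# The approximation error is uniform in x and shrinks linearly with tau;
# it attains its maximum tau * ln 2 at x = 0.
for tau in (0.8, 0.3, 0.05):
    gap = max(q_smooth(i / 100.0, tau) - q(i / 100.0) for i in range(-300, 301))
    print(round(gap, 6), round(tau * math.log(2.0), 6))
```

The design point mirrored here is that the smoothing parameter controls a uniform error bound, which is what makes error estimates such as Theorem 3.6 possible.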
Proof. Since Σ_{i=1}^m q(g_i(x∗)) = 0, we have

0 ≤ F̃_1(x̄, ρ, r, τ) − F_1(x∗, ρ) = f(x̄) + ρ Σ_{i=1}^m q̃_r(g_i(x̄)) − ( f(x∗) + ρ Σ_{i=1}^m q(g_i(x∗)) ) ≤ mρτ/2

and we have

−ρ Σ_{i=1}^m q̃_r(g_i(x̄)) ≤ f(x̄) − f(x∗) ≤ −ρ Σ_{i=1}^m q̃_r(g_i(x̄)) + mρτ/2.

From Lemma 3.1, we have

−mρτ/2 ≤ f(x̄) − f(x∗) ≤ 0.
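A bound of this type can be checked numerically on a small example. The sketch below is an illustration, not the paper's method: it substitutes the Chen–Mangasarian smoothing τ ln(1 + e^{x/τ}) for the paper's q̃_r, whose uniform error is τ ln 2 rather than τ/2, so the bound verified is the analogous mρτ ln 2. The test problem (with x∗ = 1 and m = 1) is hypothetical.

```python
import math

# Hypothetical test problem: min f(x) = (x - 2)^2  s.t.  g(x) = x - 1 <= 0.
# The constrained minimizer is x* = 1, where the penalty term vanishes.
f = lambda x: (x - 2.0) ** 2
g = lambda x: x - 1.0
rho, tau, m = 10.0, 0.1, 1

def q_smooth(x, t):
    # stand-in smoothing of max{x, 0} with uniform error <= t * ln 2
    s = x / t
    return t * (max(s, 0.0) + math.log1p(math.exp(-abs(s))))

F1_star = f(1.0)  # = F_1(x*, rho), since g(x*) = 0
F_tilde = lambda x: f(x) + rho * q_smooth(g(x), tau)

# Crude 1-D grid minimization of the smoothed penalty (step 1e-4 on [-1, 3]).
x_bar = min((i / 10000.0 for i in range(-10000, 30001)), key=F_tilde)

gap = F_tilde(x_bar) - F1_star
assert 0.0 <= gap <= m * rho * tau * math.log(2.0)            # analogue of Theorem 3.6
assert abs(f(x_bar) - f(1.0)) <= m * rho * tau * math.log(2.0)  # analogue of the f-bound
```

Shrinking τ tightens both bounds, which is exactly why the algorithms of the next section drive τ_j → 0.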
Theorem 4.4. For a fixed r, let {τ_j} → 0 and x^j be a solution of (P̃_p) for ρ > 0. Assume that x is an accumulation point of {x^j}. Then x is an optimal solution for (P_p).
Proof. The proof follows from Corollary 2 and Theorem 4.3.
Theorem 4.5. Let x∗ be an optimal solution of (P_p) and x̄ be an optimal solution of (P̃_p). Then we have

0 ≤ F̃_p(x̄, ρ, r, τ) − F_p(x∗, ρ) ≤ mρτ.

Proof. From Theorem 4.3 we have the following:

0 ≤ F̃_p(x̄, ρ, r, τ) − F_p(x̄, ρ) ≤ F̃_p(x̄, ρ, r, τ) − F_p(x∗, ρ) ≤ F̃_p(x∗, ρ, r, τ) − F_p(x∗, ρ) ≤ mρτ.
[Figure 2: two panels (a) and (b), x-axis vs. y-axis.]

Figure 2. (a) The green solid curve is the graph of f(x) = q^{1/2}(x), the red dotted curve is the graph of q̃_4^{1/2}(x, 0.1), and the blue dashed curve is the graph of q̃_4^{1/2}(x, 0.4). (b) The green solid curve is the graph of q^{1/2}(x), the red dotted curve is the graph of q̃_8^{1/2}(x, 0.5), and the blue dashed curve is the graph of q̃_4^{1/2}(x, 0.5).
Proof. Since Σ_{i=1}^m q(g_i(x∗)) = 0, we have

0 ≤ F̃_p(x̄, ρ, r, τ) − F_p(x∗, ρ) = f(x̄) + ρ Σ_{i=1}^m q̃_r^p(g_i(x̄)) − ( f(x∗) + ρ Σ_{i=1}^m q^p(g_i(x∗)) ) ≤ mρτ

and we have

−ρ Σ_{i=1}^m q̃_r^p(g_i(x̄)) ≤ f(x̄) − f(x∗) ≤ −ρ Σ_{i=1}^m q̃_r^p(g_i(x̄)) + mρτ.
Algorithm I:
Step 1. Determine the initial point x^0. Determine τ_0 > 0, ρ_0 > 0, r_0 > 1, 0 < η < 1, and N > 1; let j = 0 and go to Step 2.
Step 2. Use x^j as the starting point to solve (P̃_1). Let x^{j+1} be the solution.
Step 3. If x^{j+1} is τ-feasible for (P), then stop; x^{j+1} is the optimal solution. If not, set ρ_{j+1} = Nρ_j, τ_{j+1} = ητ_j, r_{j+1} = Nr_j and j = j + 1, then go to Step 2.
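The loop above can be sketched in code, under several stated assumptions: the paper's q̃_r is again replaced by a stand-in smoothing without an r parameter (so r is carried but unused), the smooth unconstrained solver of Step 2 is replaced by a crude 1-D grid search, τ-feasibility is read as g_i(x) ≤ τ for all i, and the test problem is hypothetical.

```python
import math

def q_smooth(x, tau):
    # stand-in smoothing of max{x, 0}; the paper's q~_r is defined in Section 3
    t = x / tau
    return tau * (max(t, 0.0) + math.log1p(math.exp(-abs(t))))

def solve_smooth(F):
    # toy stand-in for the smooth unconstrained solver of Step 2 (1-D grid search)
    return min((i / 10000.0 for i in range(-100000, 100001)), key=F)

def algorithm_I(f, gs, tau0=0.5, rho0=1.0, r0=2.0, eta=0.1, N=5.0, max_iter=20):
    # Step 1: initial parameters; here eta * N = 0.5 < 1, as Theorem 5.1 requires.
    tau, rho, r, x = tau0, rho0, r0, None
    for _ in range(max_iter):
        # Step 2: minimize the smoothed penalty with the current parameters.
        F = lambda y: f(y) + rho * sum(q_smooth(g(y), tau) for g in gs)
        x = solve_smooth(F)
        # Step 3: stop once x is tau-feasible, read here as g_i(x) <= tau for all i.
        if all(g(x) <= tau for g in gs):
            return x
        rho, tau, r = N * rho, eta * tau, N * r  # parameter update of Step 3
    return x

# Hypothetical test problem: min (x - 2)^2 s.t. x - 1 <= 0, whose solution is x = 1.
x_hat = algorithm_I(lambda x: (x - 2.0) ** 2, [lambda x: x - 1.0])
print(abs(x_hat - 1.0) < 0.05)  # True: the iterates approach the solution as tau -> 0
```

Algorithm II below differs only in that it solves (P̃_p) in Step 2 and updates r additively in Step 3.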
In order to guarantee that the algorithm works properly, we have to prove the following theorem.
Theorem 5.1. Assume that for ρ ∈ [ρ_0, ∞), τ ∈ (0, τ_0] and r ∈ (1, r_0] the set

argmin_{x∈R^n} F̃_1(x, ρ, r, τ) ≠ ∅.

Let {x^j} be generated by Algorithm I with ηN < 1. If {x^j} has a limit point, then the limit point of {x^j} is the solution for (P).
Proof. Assume x̄ is a limit point of {x^j}. Then there exists a set J ⊂ N such that x^j → x̄ for j ∈ J. We have to show that x̄ is the optimal solution for (P). Thus, it is sufficient to show (i) x̄ ∈ G_0 and (ii) f(x̄) ≤ inf_{x∈G_0} f(x).
i. Suppose, to the contrary, that x̄ ∉ G_0, i.e., for sufficiently large j ∈ J, there exist δ_0 > 0 and i_0 ∈ {1, 2, . . . , m} such that
Algorithm II:
Step 1. Determine the initial point x^0. Determine τ_0 > 0, ρ_0 > 0, r_0 > 1, 0 < η < 1, and N > 1; let j = 0 and go to Step 2.
Step 2. Use x^j as the starting point to solve (P̃_p). Let x^{j+1} be the solution.
Step 3. If x^{j+1} is τ-feasible for (P), then stop; x^{j+1} is the optimal solution. If not, set ρ_{j+1} = Nρ_j, τ_{j+1} = ητ_j, r_{j+1} = r_j + 2 and j = j + 1, then go to Step 2.
Theorem 5.2. Assume that for ρ ∈ [ρ_0, ∞), τ ∈ (0, τ_0] and r ∈ (1, r_0] the set

argmin_{x∈R^n} F̃_p(x, ρ, r, τ) ≠ ∅.

Let {x^j} be generated by Algorithm II with ηN < 1. If {x^j} has a limit point, then the limit point of {x^j} is the solution for (P).
Proof. The proof is very similar to the proof of Theorem 5.1.
5.1. Numerical Examples. In this section, we apply our algorithms to test problems. The proposed algorithms are programmed in Matlab R2011a. The numerical results show the efficiency of this method. The detailed results are presented in the tables for all problems. In these tables we use some symbols in order to abbreviate the expressions. The meanings of these symbols are as follows:
j : the number of iterations.
x^j : the local minimum point of the jth iteration.
ρ_j : the penalty parameter of the jth iteration.
τ_j and r_j : the smoothing parameters of the jth iteration.
g_i(x^j) : the value of the local minimum point x^j under the constraint functions.
F̃_p(x^j, ρ_j, r_j, τ_j) : the value of the local minimum point x^j under F̃_p.
f(x^j) : the value of the local minimum point x^j under f.
REFERENCES
[1] A. M. Bagirov, A. Al Nuaimat and N. Sultanova, Hyperbolic smoothing functions for nonsmooth minimization, Optimization, 62 (2013), 759–782.
[2] F. S. Bai, Z. Y. Wu and D. L. Zhu, Lower order calmness and exact penalty function, Optimization Methods and Software, 21 (2006), 515–525.
[3] A. Ben-Tal and M. Teboulle, Smoothing technique for nondifferentiable optimization problems, Lecture Notes in Mathematics, 1405, Springer-Verlag, Heidelberg, (1989), 1–11.
[4] D. Bertsekas, Nondifferentiable optimization via approximation, Mathematical Programming
Study, 3 (1975), 1–25.
[5] C. Chen and O. L. Mangasarian, A class of smoothing functions for nonlinear and mixed complementarity problems, Computational Optimization and Applications, 5 (1996), 97–138.
[6] X. Chen, Smoothing methods for nonsmooth, nonconvex minimization, Mathematical Programming, Series B, 134 (2012), 71–99.
[7] S. J. Lian, Smoothing approximation to l1 exact penalty for inequality constrained optimiza-
tion, Applied Mathematics and Computation, 219 (2012), 3113–3121.
[8] B. Liu, On smoothing exact penalty function for nonlinear constrained optimization problem,
Journal of Applied Mathematics and Computing, 30 (2009), 259–270.
[9] Z. Meng, C. Dang, M. Jiang and R. Shen, A smoothing objective penalty function algorithm for inequality constrained optimization problems, Numerical Functional Analysis and Optimization, 32 (2011), 806–820.
[10] M. C. Pinar and S. Zenios, On smoothing exact penalty functions for convex constrained
optimization, SIAM Journal on Optimization, 4 (1994), 468–511.
[11] Z. Y. Wu, H. W. J. Lee, F. S. Bai and L. S. Zhang, Quadratic smoothing approximation to l1 exact penalty function in global optimization, Journal of Industrial and Management Optimization, 53 (2005), 533–547.
[12] Z. Y. Wu, F. S. Bai, X. Q. Yang and L. S. Zhang, An exact lower order penalty function and its smoothing in nonlinear programming, Optimization, 53 (2004), 51–68.
[13] A. E. Xavier, The hyperbolic smoothing clustering method, Pattern Recognition, 43 (2010),
731–737.
[14] A. E. Xavier and A. A. F. D. Oliveira, Optimal covering of plane domains by circles via
hyperbolic smoothing, Journal of Global Optimization, 31 (2005), 493–504.
[15] X. Xu, Z. Meng, J. Sun and R. Shen, A penalty function method based on smoothing lower order penalty function, Journal of Computational and Applied Mathematics, 235 (2011), 4047–4058.
[16] N. Yilmaz and A. Sahiner, A New Global Optimization Technique Based on the Smoothing
Approach for Non-smooth, Non-convex Optimization, Submitted.
[17] N. Yilmaz and A. Sahiner, Smoothing Approach for Non-lipschitz Optimization, Submitted.
[18] I. Zang, A smoothing-out technique for min-max optimization, Mathematical Programming, 19 (1980), 61–77.
[19] W. I. Zangwill, Nonlinear programming via penalty functions, Management Science, 13 (1967), 344–358.
Received December 2015; 1st revision March 2016; final revision May 2016.
E-mail address: ahmetnur32@gmail.com
E-mail address: nurullahyilmaz@sdu.edu.tr
E-mail address: guldenkapusuzz@gmail.com