1. INTRODUCTION
In applied mathematics, several areas provide tools for solving problems arising in science and engineering. One of them is mathematical optimization, which studies how to find the best solution to a given problem among all possible alternatives.
Opt. f(x)
s.to: g_i(x) ≤ 0, ∀i = 1, …, m,   (1.1)
      h_j(x) = 0, ∀j = 1, …, p,
where f: ℝⁿ → ℝ is a given function, Opt. f(x) means minimize or maximize the function f, and g_i: ℝⁿ → ℝ and h_j: ℝⁿ → ℝ are given functions.
A particular and very broad class of model (1.1) is quasiconvex optimization: problems where the objective function f is quasiconvex, that is,
f(λx + (1 − λ)y) ≤ max{f(x), f(y)}, ∀λ ∈ [0,1], ∀x, y ∈ ℝⁿ,
and the functions g_i and h_j defining the constraints are also quasiconvex. This class was studied in [1], motivated by applications to preferences and utilities in consumer theory, and subsequently by various researchers; see for example [2].
On the other hand, location problems are of great importance in science and engineering. They first entered science as mathematical problems, with illustrious mathematicians such as Fermat, Torricelli, Sylvester, and Steiner proposing ingenious methods to solve them.
In the twentieth century, location problems spread to several disciplines, and today they have a wide variety of applications across research areas.
To avoid the cost of solving each subproblem exactly, as in the classical proximal point method, we propose an extension of the method using the iteration:
0 ∈ ∂̂(f(·) + (λ_k/2)‖· − x^{k−1}‖²)(x^k).   (1.4)
Here ∂̂ denotes the Clarke subdifferential; a rigorous definition of this concept is given in Section 2. This iteration is more tractable from both the theoretical and the practical point of view: instead of finding a point x^k that minimizes the function f(·) + (λ_k/2)‖· − x^{k−1}‖², we only need to find a critical point. This reduces the computational cost of each iteration with respect to the classical proximal method.
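As a minimal illustration (not part of the original Matlab implementation), the proximal iteration can be sketched for the simple convex nonsmooth function f(x) = |x|, an assumed example chosen because its proximal subproblem has the well-known closed form of soft-thresholding; iteration (1.4) relaxes this exact step to a critical point, but for this f both coincide:

```python
def prox_abs(z, lam):
    # Closed-form proximal step for f(x) = |x|:
    # argmin_x |x| + (lam/2) * (x - z)^2, i.e. soft-thresholding.
    t = 1.0 / lam
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def proximal_point(x0, lam=1.0, tol=1e-3, max_iter=100):
    # Classical proximal point iteration: x^k solves the regularized subproblem.
    x = x0
    for _ in range(max_iter):
        x_new = prox_abs(x, lam)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(proximal_point(5.0))  # iterates 4, 3, 2, 1, 0 -> minimizer 0.0
```

With lam = 1 each step moves the iterate one unit toward the minimizer x* = 0, so the method stops after a finite number of steps on this example.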
In this work we prove the convergence of the proximal method with iteration (1.4) under the assumption that the function f is locally Lipschitz, quasiconvex, and not necessarily differentiable.
The paper is organized as follows. Section 2 presents the tools used in this work, among them some results from convex analysis and the subdifferentiability of convex functions. Section 3 presents the quasiconvex location model and Clarke's subdifferential theory; see [5] and [6]. Section 4 contains the central part of the work: we propose a proximal method and present its convergence results. Section 5 presents the implementation of the method for some functions. Section 6 gives a brief discussion, and Section 7 presents some conclusions.
2. BASIC DEFINITIONS
Definition 1.2. A function f: ℝⁿ → ℝ ∪ {+∞} is quasiconvex on ℝⁿ if for all x, y ∈ ℝⁿ and all λ ∈ [0,1] it holds that
f(λx + (1 − λ)y) ≤ max{f(x), f(y)}.
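As an illustrative numerical sanity check (an addition, not part of the original text), the inequality in this definition can be tested on a finite sample of points; the two test functions below are assumed examples:

```python
import itertools
import math

def is_quasiconvex_on_samples(f, xs, lambdas):
    # Check f(l*x + (1-l)*y) <= max(f(x), f(y)) on a finite grid of
    # points and convex-combination weights (necessary, not sufficient).
    for x, y in itertools.product(xs, xs):
        for l in lambdas:
            z = l * x + (1 - l) * y
            if f(z) > max(f(x), f(y)) + 1e-12:
                return False
    return True

f = lambda x: math.sqrt(abs(x))  # quasiconvex but not convex
g = lambda x: -x * x             # fails the quasiconvexity inequality
xs = [i / 10 for i in range(-30, 31)]
lambdas = [i / 10 for i in range(11)]
print(is_quasiconvex_on_samples(f, xs, lambdas))  # True
print(is_quasiconvex_on_samples(g, xs, lambdas))  # False
```

Passing such a sampled check does not prove quasiconvexity, but a single violated triple (x, y, λ) does disprove it, as with g above (take x = −1, y = 1, λ = 1/2).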
Definition 1.3. Let f: ℝⁿ → ℝ be a function. We say that x̄ ∈ ℝⁿ is a local minimum of f if there exists ε > 0 such that f(x̄) ≤ f(y), ∀y ∈ B(x̄, ε). We say that x̄ ∈ ℝⁿ is a local maximum of f if there exists ε > 0 such that f(x̄) ≥ f(y), ∀y ∈ B(x̄, ε).
2.1. Subdifferentiation
Example 2.1: Let f(x) = 1 − x if x < 1, and f(x) = x² if x ≥ 1.
The convex subdifferential of f at x is:
∂f_c(x) = {−1} if x < 1, {2x} if x > 1, and [−1, 2] if x = 1.
If C_i, i = 1, …, p, are compact sets with 0 ∈ int(C_i), where int(C_i) denotes the interior of C_i, we define the distance between x and d_i by γ_{C_i}(x − d_i), where γ_{C_i} is the Minkowski functional of the set C_i, that is,
γ_{C_i}(x) = inf{t > 0 : x ∈ tC_i}.
Note that if C_i is the unit ball of ℝⁿ, then γ_{C_i}(x) is the Euclidean distance from x to 0.
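For concrete sets the Minkowski functional has a closed form; the sketch below (an illustrative addition) computes it for the unit Euclidean ball, where it equals the Euclidean norm, and for an axis-aligned box, an assumed second example:

```python
import math

def gauge_euclidean_ball(x):
    # Minkowski functional of the unit Euclidean ball:
    # inf{t > 0 : x in t*B} = ||x||_2.
    return math.sqrt(sum(c * c for c in x))

def gauge_box(x, a):
    # Minkowski functional of the box [-a_1, a_1] x ... x [-a_n, a_n]:
    # inf{t > 0 : |x_i| <= t*a_i for all i} = max_i |x_i| / a_i.
    return max(abs(c) / ai for c, ai in zip(x, a))

print(gauge_euclidean_ball((3.0, 4.0)))   # 5.0
print(gauge_box((3.0, 4.0), (1.0, 2.0)))  # 3.0
```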
To introduce the model, we define the function γ: ℝⁿ → ℝ^p_+ by
γ(x) = (γ_{C_1}(x − d_1), …, γ_{C_p}(x − d_p)),
and we assume that each f_i: ℝᵖ → ℝ, i = 1, …, p, is non-decreasing on ℝ^p_+, that is, if x, y ∈ ℝ^p_+ satisfy x_i ≤ y_i, ∀i = 1, …, p, then f_i(x) ≤ f_i(y).
The model is given by
min{φ(x) : x ∈ ℝⁿ},
where
φ(x) = max_{1≤i≤p} φ_i(x),
with φ_i: ℝⁿ → ℝ defined by φ_i(x) = f_i(γ(x)) for each i = 1, …, p. If each function f_i: ℝᵖ → ℝ is quasiconvex on ℝ^p_+, then it is possible to prove that φ is quasiconvex on ℝⁿ.
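To make the model concrete, here is a sketch (an illustrative addition) that takes each C_i as the unit Euclidean ball, so γ_{C_i}(x − d_i) is the Euclidean distance to the site d_i, and each f_i as the i-th coordinate projection; both choices are assumptions, under which φ becomes the min-max (1-center) location objective:

```python
import math

def euclid(x, d):
    # Distance gamma_{C_i}(x - d_i) when C_i is the unit Euclidean ball.
    return math.sqrt(sum((xi - di) ** 2 for xi, di in zip(x, d)))

def phi(x, sites):
    # phi(x) = max_i f_i(gamma(x)) with f_i the i-th coordinate projection,
    # i.e. the largest distance from x to the sites d_1, ..., d_p.
    return max(euclid(x, d) for d in sites)

sites = [(1.0, 0.0), (0.0, 2.0), (-1.0, 0.0)]
print(phi((0.0, 0.0), sites))  # 2.0 (farthest site is (0, 2))
```

Each φ_i is convex here, so φ, as a maximum of convex functions, is in particular quasiconvex, consistent with the claim above.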
Theorem 4.2.1. If f: ℝⁿ → ℝ ∪ {+∞} is proper, Lipschitz, bounded from below, and lower semicontinuous on ℝⁿ, then the sequence {x^k} given by (4.2) and (4.3) exists.
Assumption A: f: ℝⁿ → ℝ ∪ {+∞} is a function bounded from below.
Assumption B: f: ℝⁿ → ℝ ∪ {+∞} is locally Lipschitz and quasiconvex.
As we are interested in the asymptotic convergence of the method, we also assume that in each
iteration 0 ∉ 𝜕̂𝑓(𝑥 𝑘 ), which implies that 𝑥 𝑘 ≠ 𝑥 𝑘−1 , ∀𝑘.
Proposition 4.2.1 Under hypotheses A and B we have that {𝑓(𝑥 𝑘 )} is decreasing and convergent.
Theorem 4.2.2. Under assumptions A and B, the sequence {x^k} generated by the proximal method is Fejér convergent to Ū.
Proposition 4.2.2. Under assumptions A and B, the following hold:
a. ∀x ∈ Ū, the sequence {‖x − x^k‖} is convergent.
b. lim_{k→+∞} ‖x^k − x^{k−1}‖ = 0.
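Both properties of this proposition can be observed numerically on a toy instance (an illustrative addition, not the paper's general setting): for f(x) = |x| with exact soft-thresholding proximal steps, the distances to the minimizer x* = 0 are non-increasing (Fejér monotonicity) and the successive steps shrink to zero:

```python
def prox_abs(z, lam):
    # argmin_x |x| + (lam/2)(x - z)^2, i.e. soft-thresholding.
    t = 1.0 / lam
    return z - t if z > t else (z + t if z < -t else 0.0)

xs = [5.0]
for _ in range(8):
    xs.append(prox_abs(xs[-1], lam=1.0))

dists = [abs(x - 0.0) for x in xs]              # part a: distance to x* = 0
steps = [abs(a - b) for a, b in zip(xs[1:], xs)]  # part b: ||x^k - x^{k-1}||
print(dists)  # [5.0, 4.0, 3.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0]
print(steps)  # [1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
```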
Theorem 4.2.3. Assume that assumptions A and B are satisfied. If 0 < λ_k < λ̄, where λ̄ is a positive real number, then the sequence {x^k} converges to a point of Ū and lim_{k→∞} g^k = 0 for some g^k ∈ ∂̂f(x^k). Furthermore, {x^k} converges to a critical point of f, that is, to a point x ∈ ℝⁿ such that 0 ∈ ∂̂f(x).
5. NUMERICAL EXPERIMENTS
We implement the algorithm using the software “Matlab 5.3”. We give some academic
quasiconvex functions and each subproblem of the algorithm will be implement using Cuasi-
Newton methods. The starting point is arbitrary and the stop criteria is =0.001.
Example 5.1. Let f: ℝ → ℝ, where f(x) = 1 − x if x < 1 and f(x) = x² if x ≥ 1.
Results: optimum point according to the number of iterations.
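The original Matlab code is not listed; the sketch below is a crude stand-in (an assumption: plain grid search replaces the quasi-Newton inner solver) that runs the proximal iteration on the piecewise function of Example 5.1, minimizing f(·) + (λ/2)(· − x^{k−1})² at each step:

```python
def f(x):
    # Piecewise objective from Example 5.1 (same as Example 2.1).
    return 1.0 - x if x < 1.0 else x * x

def prox_grid(z, lam, lo=-5.0, hi=5.0, n=10001):
    # Approximate argmin of f(x) + (lam/2)(x - z)^2 by grid search
    # (a crude stand-in for the quasi-Newton subproblem solver).
    best_x, best_v = lo, float("inf")
    for i in range(n):
        x = lo + (hi - lo) * i / (n - 1)
        v = f(x) + 0.5 * lam * (x - z) ** 2
        if v < best_v:
            best_x, best_v = x, v
    return best_x

x = -3.0
for _ in range(10):
    x = prox_grid(x, lam=1.0)
print(x)  # iterates -2, -1, 0, ... approach x = 1 from the left
```

Starting from x⁰ = −3 with λ_k = 1, each subproblem shifts the iterate one unit to the right until it settles just below x = 1, where the two pieces of f meet.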
6. DISCUSSION
1. The convergence results obtained in this work are important but still weak for the purpose of obtaining global minimum points of quasiconvex optimization problems: we obtain convergence to a stationary point, not necessarily to a solution of the problem.
2. According to the thesis [8], non-differentiable optimization produces a series of difficulties that in most cases cannot be overcome by traditional algorithms for differentiable problems. Thus, non-differentiable problems need their own techniques. In this investigation we work with quasiconvex non-differentiable functions, and Clarke's subdifferential is used as an alternative for solving a non-differentiable optimization problem.
7. CONCLUSIONS
1. The global convergence of the proximal point method for convex functions, introduced by Martinet (1970) [3], can be extended to quasiconvex minimization problems. In this case the point of convergence is a Clarke critical point when the function is locally Lipschitz.
2. For a specific computational implementation of the proposed algorithm and for software design, one needs to study an appropriate method for solving the subproblems (4.2), as well as a practical characterization of the Clarke subdifferential. We hope these results can be obtained in future work.
8. REFERENCES
[1] Arrow K and Enthoven A 1961 Quasi-Concave Programming. Econometrica 29 779-800.
[2] Kannai Y 1977 Concavifiability and Constructions of Concave Utility Functions. Journal of Mathematical Economics 4 1-56.
[3] Martinet B 1970 Régularisation d'inéquations variationnelles par approximations successives. R.A.I.R.O.
[4] Rockafellar R 1976 Augmented Lagrangians and Applications of the Proximal Point Algorithm in Convex Programming. Math. Oper. Res. 1 97-116.
[5] Clarke F 1975 Generalized Gradients and Applications. Transactions of the American Mathematical Society 205 247-262.
[6] Clarke F 1990 Optimization and Nonsmooth Analysis. New York: Wiley.
[7] Güler O 1991 On the Convergence of the Proximal Point Algorithm for Convex Minimization. SIAM J. Control and Optimization 29 403-419.
[8] Navarro F 2013 Algunas aplicaciones y extensión del método del subgradiente (Lima: Universidad Nacional Mayor de San Marcos).