
A Proximal Method to Solve Quasiconvex Non-differentiable Location Problems

Miguel Angel Cano Lengua1 and Erik Alex Papa Quiroz2,3


1 Universidad Privada Norbert Wiener, Lima, Perú
2 Universidad Privada del Norte, Lima, Perú
3 Universidad Nacional del Callao, Callao, Perú

Abstract. Location problems are of great interest for meeting different location demands in the public or private sector. This kind of problem is usually reduced to a mathematical optimization problem. In this paper we present a proximal method to solve location problems whose objective function is quasiconvex and non-differentiable. We prove that the iterates generated by the method are well defined and, under some assumptions on the objective function, we prove the convergence of the method.

1. INTRODUCTION
In the field of applied mathematics, several areas provide tools to solve problems arising in science and engineering. One of them is mathematical optimization, which studies how to find the best solution to a given problem among all feasible alternatives.

The general optimization model is given by:

Opt. f(x)
s.t.: gᵢ(x) ≤ 0, ∀i = 1, ..., m,    (1.1)
      hⱼ(x) = 0, ∀j = 1, ..., p,

where f: ℝⁿ → ℝ is a given function, Opt. f(x) means minimize or maximize the function f, and gᵢ: ℝⁿ → ℝ and hⱼ: ℝⁿ → ℝ are given functions.

A particular and very broad class of the model (1.1) is quasiconvex optimization: problems where the objective function f is quasiconvex, that is,
f(λx + (1 − λ)y) ≤ max{f(x), f(y)}, ∀λ ∈ [0,1], ∀x, y ∈ ℝⁿ,
and the functions gᵢ and hⱼ that define the constraints are quasiconvex. This class was studied by [1], motivated by applications to preferences and utilities in consumer theory, and subsequently by various researchers, see for example [2].

On the other hand, location problems are of great importance in science and engineering. They were initially introduced as mathematical problems, and illustrious mathematicians such as Fermat, Torricelli, Sylvester, and Steiner proposed ingenious methods to solve them. In the twentieth century location problems spread to several disciplines, and they currently have a wide variety of applications in many research areas.

This work is devoted to solving the following location problem:

min f(x)
s.t.: x ∈ ℝⁿ,    (1.2)

where f: ℝⁿ → ℝ is a non-differentiable quasiconvex function (see Section 3 for the model).
To solve the problem, we extend the proximal point method, introduced by [3] and further studied by [4]. For the case where the function f is convex, it generates a sequence of points from a given starting point x⁰ ∈ ℝⁿ as follows. Given k = 1, 2, 3, ..., if 0 ∈ ∂_c f(xᵏ⁻¹) then stop (where ∂_c f is the convex subdifferential). Otherwise, find xᵏ ∈ ℝⁿ such that
xᵏ = arg min { f(x) + (λₖ/2) ‖x − xᵏ⁻¹‖² : x ∈ ℝⁿ }.    (1.3)
It was shown, see [7], that if f is convex and proper and the sequence {λₖ} satisfies
∑_{k=1}^{+∞} 1/λₖ = +∞,
then the sequence {f(xᵏ)} converges to the infimum of f and, if in addition the set of optimal solutions is nonempty, the sequence {xᵏ} converges to a solution of the problem.
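To make the classical iteration (1.3) concrete, the following sketch (in Python, using SciPy) applies the proximal point method to a simple convex function; the test function f(x) = |x − 2|, the choice λₖ = 1, and the progress-based stopping rule are illustrative assumptions and are not taken from the paper.

# Sketch of the classical proximal point iteration (1.3) for a convex f.
# Illustrative assumptions (not from the paper): f(x) = |x - 2|, lambda_k = 1, n = 1.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return abs(x[0] - 2.0)          # convex, non-differentiable at x = 2

def proximal_point(f, x0, lam=1.0, max_iter=50, tol=1e-4):
    x_prev = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        # Subproblem (1.3): minimize f(x) + (lam/2) * ||x - x_prev||^2
        sub = lambda x: f(x) + 0.5 * lam * np.sum((x - x_prev) ** 2)
        x_new = minimize(sub, x_prev, method="Nelder-Mead").x
        if np.linalg.norm(x_new - x_prev) < tol:    # progress-based stopping rule
            return x_new, k
        x_prev = x_new
    return x_prev, max_iter

x_star, iters = proximal_point(f, x0=[10.0])
print(x_star, iters)    # expected to approach x = 2, the minimizer of f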
For the case when f is quasiconvex (and therefore for the location problem that we want to solve), we must observe that, due to the non-convexity of f, the subproblem (1.3) is not necessarily a convex problem; moreover, it is not even necessarily a quasiconvex problem. For this reason the iteration (1.3) may be more difficult to solve than the original problem (1.2).

To overcome this difficulty, we propose an extension of the proximal point method using the iteration
0 ∈ ∂̂( f(·) + (λₖ/2) ‖· − xᵏ⁻¹‖² )(xᵏ),    (1.4)
where ∂̂ is the Clarke subdifferential; a rigorous definition of this concept is given in Section 3. This iteration is more tractable from both a theoretical and a practical point of view, since instead of finding a point xᵏ that minimizes the function f(·) + (λₖ/2) ‖· − xᵏ⁻¹‖², we only need to find a critical point. Thus we reduce the computational cost of each iteration with respect to the classical proximal method.
In this work we prove the convergence of the proximal method based on the iteration (1.4) under the assumption that the function f is locally Lipschitz, quasiconvex, and not necessarily differentiable.
The organization of this paper is as follows. In Section 2 we present the tools needed for the development of this work, among them some results on convex analysis and the subdifferentiability of convex functions. In Section 3 we present the quasiconvex location model and Clarke's subdifferential theory, see [5] and [6]. In Section 4 we present the central part of the work: we propose a proximal method and present its convergence results. In Section 5 we present the implementation of the method for some functions. In Section 6 a brief discussion is given, and in Section 7 we present some conclusions.

2. BASIC DEFINITIONS
Definition 2.1. A function f: ℝⁿ → ℝ ∪ {+∞} is quasiconvex on ℝⁿ if for each x, y ∈ ℝⁿ and each λ ∈ [0,1] it holds that
f(λx + (1 − λ)y) ≤ max{f(x), f(y)}.
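As a quick illustration of this definition, the following Python sketch numerically checks the quasiconvexity inequality on random pairs of points for f(x) = √|x|, a standard example of a function that is quasiconvex but not convex; the test function and the sampling scheme are illustrative choices, not part of the paper.

# Numerical check of the quasiconvexity inequality
#   f(lam*x + (1 - lam)*y) <= max{f(x), f(y)}
# for the illustrative function f(x) = sqrt(|x|) (quasiconvex, not convex).
import numpy as np

def f(x):
    return np.sqrt(abs(x))

rng = np.random.default_rng(0)
violations = 0
for _ in range(10000):
    x, y = rng.uniform(-10.0, 10.0, size=2)
    lam = rng.uniform(0.0, 1.0)
    if f(lam * x + (1 - lam) * y) > max(f(x), f(y)) + 1e-12:
        violations += 1
print("violations of the quasiconvexity inequality:", violations)   # expected: 0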
Definition 2.2. Let f: ℝⁿ → ℝ be a function. We say that x̄ ∈ ℝⁿ is a local minimum of f if there exists ε > 0 such that f(x̄) ≤ f(y), ∀y ∈ B(x̄, ε). We say that x̄ ∈ ℝⁿ is a local maximum of f if there exists ε > 0 such that f(x̄) ≥ f(y), ∀y ∈ B(x̄, ε).

Definition 2.3. A function f: ℝⁿ → ℝ is locally Lipschitz with constant k > 0 at x ∈ ℝⁿ if there exists ε > 0 such that |f(y) − f(z)| ≤ k‖y − z‖ for all y, z ∈ B(x, ε).

2.1. Subdifferentiation

Definition 2.1.1. The directional derivative of f at x in the direction v ∈ ℝⁿ is defined as
f′(x; v) = lim_{t↓0} [f(x + tv) − f(x)] / t.
If f is differentiable at x, then the directional derivative exists in every direction v ∈ ℝⁿ, f′(x; v) is a linear function of v, and we have the relation
f′(x; v) = ⟨∇f(x), v⟩.

Definition 2.1.2. Let f: ℝⁿ → ℝ be a convex function. Given x ∈ ℝⁿ, we say that s ∈ ℝⁿ is a subgradient of f at x if
f(y) ≥ f(x) + ⟨s, y − x⟩, ∀y ∈ ℝⁿ.
The set of all subgradients is called the convex subdifferential of f at x and is denoted by ∂_c f(x), that is,
∂_c f(x) = {s ∈ ℝⁿ : f(y) ≥ f(x) + ⟨s, y − x⟩, ∀y ∈ ℝⁿ}.

Theorem 2.1.1. If f is convex and continuously differentiable at x, then ∂_c f(x) = {∇f(x)}.

Next, we give an example of a convex function and its subdifferential.

Example 2.1. Let f(x) = 1 − x if x < 1, and f(x) = x² if x ≥ 1. The convex subdifferential of f at x is ∂_c f(x) = {−1} if x < 1, ∂_c f(x) = {2x} if x > 1, and ∂_c f(x) = [−1, 2] if x = 1.

3. QUASI-CONVEX LOCATION MODEL


In this section we give an example of a non-differentiable quasiconvex location problem. A typical instance is, for example, to locate a fire station that can serve, in the shortest possible time, certain demand points such as a church, a school, a hospital, a university, a police station, a recreation center, a bank, a municipality, a convention center, a condominium, etc. We now present a formal model of a non-differentiable quasiconvex location problem.
Let {d₁, d₂, ..., dₚ} ⊂ ℝⁿ, n ≥ 2, be a set of p demand points and let x ∈ ℝⁿ be the location of a facility to be chosen.

If Cᵢ, i = 1, ..., p, are compact sets with 0 ∈ int(Cᵢ), where int(Cᵢ) denotes the interior of Cᵢ, we define the distance between x and dᵢ by γ_Cᵢ(x − dᵢ), where γ_Cᵢ is the Minkowski functional of the set Cᵢ, that is,
γ_Cᵢ(x) = inf{t > 0 : x ∈ tCᵢ}.
Note that if Cᵢ is the unit ball of ℝⁿ, then γ_Cᵢ(x) is the Euclidean distance from x to 0.
To introduce the model, we define the function γ: ℝⁿ → ℝ₊ᵖ by
γ(x) = (γ_C₁(x − d₁), ..., γ_Cₚ(x − dₚ)),
and we assume that each fᵢ: ℝᵖ → ℝ, i = 1, ..., p, is non-decreasing on ℝ₊ᵖ, that is, if x, y ∈ ℝ₊ᵖ satisfy xⱼ ≤ yⱼ, ∀j = 1, ..., p, then fᵢ(x) ≤ fᵢ(y).
The model is given by
min{φ(x) : x ∈ ℝⁿ},
where
φ(x) = max_{1≤i≤p} φᵢ(x),
with φᵢ: ℝⁿ → ℝ defined by φᵢ(x) = fᵢ(γ(x)) for each i = 1, ..., p. If each function fᵢ: ℝᵖ → ℝ is quasiconvex on ℝ₊ᵖ, then it is possible to prove that the function φ is quasiconvex on ℝⁿ.
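To make the model concrete, the sketch below builds the objective φ for a small planar instance in which every Cᵢ is the Euclidean unit ball (so γ_Cᵢ is the Euclidean norm) and fᵢ(u) = wᵢuᵢ with positive weights wᵢ, which is non-decreasing; the demand points, the weights, and these particular choices of Cᵢ and fᵢ are illustrative assumptions. With these choices, φ(x) = max_i wᵢ‖x − dᵢ‖ is the classical weighted minimax location objective, which is quasiconvex.

# Sketch of the location objective phi(x) = max_i f_i(gamma(x)).
# Illustrative assumptions (not from the paper): C_i = Euclidean unit ball,
# so gamma_{C_i}(x - d_i) = ||x - d_i||, and f_i(u) = w_i * u_i (non-decreasing).
import numpy as np

demands = np.array([[0.0, 0.0],     # d_1, ..., d_4: hypothetical demand points
                    [4.0, 0.0],
                    [0.0, 3.0],
                    [5.0, 5.0]])
weights = np.array([1.0, 2.0, 1.5, 1.0])    # hypothetical positive weights w_i

def gamma(x):
    # Vector of gauges gamma_{C_i}(x - d_i); here plain Euclidean distances.
    return np.linalg.norm(np.asarray(x, dtype=float) - demands, axis=1)

def phi(x):
    # Location objective phi(x) = max_i w_i * gamma_{C_i}(x - d_i).
    return np.max(weights * gamma(x))

print(phi([2.0, 2.0]))   # objective value at a candidate facility site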

3.1. Clarke Subdifferential


Definition 3.1.1. Let f: ℝⁿ → ℝ ∪ {+∞} be a function which is locally Lipschitz at the point x ∈ ℝⁿ. The generalized directional derivative of f at x in the direction v ∈ ℝⁿ is defined by
f⁰(x; v) = lim sup_{y→x, t↓0} [f(y + tv) − f(y)] / t.
We note that f⁰(x; v) exists thanks to the locally Lipschitz condition on f.

Definition 3.1.2. Let f: ℝⁿ → ℝ ∪ {+∞} be a function which is locally Lipschitz at the point x ∈ ℝⁿ. The subdifferential, in the sense of Clarke, of f at x is the set
∂̂f(x) = {ξ ∈ ℝⁿ : f⁰(x; v) ≥ ⟨ξ, v⟩, ∀v ∈ ℝⁿ}.
Each element ξ ∈ ∂̂f(x) is called a subgradient of f at x in the sense of Clarke.
Definition 3.1.3. Let f: ℝⁿ → ℝ ∪ {+∞} be a proper function. The set of generalized subgradients (also called the limiting subdifferential) of f at x ∈ ℝⁿ, denoted by ∂f(x), is defined as
∂f(x) = {s ∈ ℝⁿ : ∃ xˡ → x with f(xˡ) → f(x) and ∃ sˡ ∈ ∂̂f(xˡ) with sˡ → s}.
Definition 3.1.4. Let f: ℝⁿ → ℝ ∪ {+∞} be a lower semicontinuous quasiconvex function. If g ∈ ∂̂f(x) is such that ⟨g, y − x⟩ > 0, then f(x) ≤ f(y).
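As a small illustration of these definitions, the sketch below numerically approximates the generalized directional derivative f⁰(x; v) by sampling base points y near x and small step sizes t, and uses it to bracket the Clarke subdifferential of f(x) = |x| at x = 0, which is known to be the interval [−1, 1]; the sampling radii and the test function are assumptions made only for illustration.

# Rough numerical approximation of the generalized directional derivative
#   f0(x; v) = lim sup_{y -> x, t -> 0+} (f(y + t*v) - f(y)) / t
# for the illustrative function f(x) = |x| at x = 0.
import numpy as np

def f(x):
    return abs(x)

def clarke_dir_deriv(f, x, v, radius=1e-3, n_samples=200):
    # Approximate f0(x; v) by maximizing difference quotients over samples.
    rng = np.random.default_rng(1)
    ys = x + rng.uniform(-radius, radius, size=n_samples)    # base points near x
    ts = rng.uniform(1e-8, radius, size=20)                  # small step sizes
    return max((f(y + t * v) - f(y)) / t for y in ys for t in ts)

d_plus = clarke_dir_deriv(f, 0.0, +1.0)    # approximates f0(0; +1) = 1
d_minus = clarke_dir_deriv(f, 0.0, -1.0)   # approximates f0(0; -1) = 1
# In one dimension, xi belongs to the Clarke subdifferential at 0 iff
# xi <= f0(0; +1) and -xi <= f0(0; -1), giving approximately [-1, 1] here.
print("Clarke subdifferential of |x| at 0 is approximately",
      [-d_minus, d_plus])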

4. PROXIMAL POINT METHOD


Consider the problem:
min f(x)
s.t.: x ∈ ℝⁿ,    (4.1)

where f: ℝⁿ → ℝ ∪ {+∞} is a proper, lower semicontinuous function and ℝⁿ is the Euclidean space with norm ‖·‖. The proximal point method (PPM) was introduced by [3] for convex optimization problems and was subsequently studied by [4] to find zeroes of maximal monotone operators.

The PPM generates a sequence {xᵏ} given by an arbitrary starting point x⁰ ∈ ℝⁿ and
xᵏ = arg min { f(x) + (λₖ/2) ‖x − xᵏ⁻¹‖² : x ∈ ℝⁿ },
where λₖ is a positive parameter.
For the case in which f is convex, it is shown that the sequence {f(xᵏ)} converges to the infimum of f and, moreover, if the set of optimal solutions is nonempty, then {xᵏ} converges to a solution of the problem, according to [7].
For the case when f is quasiconvex and non-differentiable we introduce the following method.
Quasi-convex Proximal Optimization Method
Initialization: choose a sequence of positive parameters {λₖ} and a starting point
x⁰ ∈ ℝⁿ.    (4.2)
Stopping criterion: for each k = 1, 2, ..., if 0 ∈ ∂̂f(xᵏ⁻¹), then stop. Otherwise,
Iterative step: find a point xᵏ ∈ ℝⁿ such that
0 ∈ ∂̂( f(·) + (λₖ/2) ‖· − xᵏ⁻¹‖² )(xᵏ).    (4.3)
Set k = k + 1 and return to the stopping criterion.
For the proofs of the results in this section, the interested reader is referred to [3].
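A minimal computational sketch of the method (4.2)-(4.3) is given below. Since a practical routine for the Clarke subdifferential is not specified here, the sketch approximates each iterative step by running a derivative-free local solver on the regularized function f(·) + (λₖ/2)‖· − xᵏ⁻¹‖², whose local minimizers are, in particular, critical points in the sense of (4.3); the test function, the parameter choice λₖ = 1, and the progress-based stopping rule are illustrative assumptions.

# Sketch of the quasiconvex proximal point method (4.2)-(4.3).
# Each iterative step (4.3) is approximated by locally minimizing the
# regularized function with a derivative-free solver (Nelder-Mead); a local
# minimizer is, in particular, a Clarke critical point of that function.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Illustrative test function (not from the paper): with r = ||x - (1, 1)||,
    # f(x) = r / (1 + r) is quasiconvex (its sublevel sets are balls), globally
    # Lipschitz, non-differentiable at (1, 1), and not convex.
    r = np.linalg.norm(np.asarray(x) - np.array([1.0, 1.0]))
    return r / (1.0 + r)

def quasiconvex_ppm(f, x0, lam=1.0, max_iter=100, tol=1e-4):
    x_prev = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        reg = lambda x: f(x) + 0.5 * lam * np.sum((x - x_prev) ** 2)
        x_new = minimize(reg, x_prev, method="Nelder-Mead").x   # step (4.3), approximately
        if np.linalg.norm(x_new - x_prev) < tol:    # surrogate stopping criterion
            return x_new, k
        x_prev = x_new
    return x_prev, max_iter

x_star, iters = quasiconvex_ppm(f, x0=[3.0, 2.0])
print(x_star, iters)    # expected to approach the minimizer (1, 1)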

4.1. Fejér convergence results

Definition 4.1.1. A sequence {yᵏ} ⊂ ℝⁿ is Fejér convergent to a set U ⊆ ℝⁿ, with respect to the Euclidean norm, if
‖yᵏ⁺¹ − u‖ ≤ ‖yᵏ − u‖, ∀u ∈ U, ∀k ≥ 0.    (4.4)
Theorem 4.1.1. If the sequence {yᵏ} is Fejér convergent to a set U ≠ ∅, then {yᵏ} is bounded. If, in addition, an accumulation point ȳ of {yᵏ} belongs to U, then lim_{k→+∞} yᵏ = ȳ.
Remark 4.1. From (4.3) and the differentiability of (λₖ/2) ‖· − xᵏ⁻¹‖², we have
0 ∈ ∂̂f(xᵏ) + λₖ(xᵏ − xᵏ⁻¹),
that is, there exists gᵏ ∈ ∂̂f(xᵏ) such that
gᵏ = λₖ(xᵏ⁻¹ − xᵏ).

4.2. Convergence Results

Theorem 4.2.1. If f: ℝⁿ → ℝ ∪ {+∞} is proper, Lipschitz, bounded from below, and lower semicontinuous on ℝⁿ, then the sequence {xᵏ} given by (4.2) and (4.3) exists.
Assumption A: f: ℝⁿ → ℝ ∪ {+∞} is a function bounded from below.
Assumption B: f: ℝⁿ → ℝ ∪ {+∞} is locally Lipschitz and quasiconvex.
As we are interested in the asymptotic convergence of the method, we also assume that in each iteration 0 ∉ ∂̂f(xᵏ), which implies that xᵏ ≠ xᵏ⁻¹, ∀k.
Proposition 4.2.1. Under assumptions A and B, the sequence {f(xᵏ)} is decreasing and convergent.
Theorem 4.2.2. Under assumptions A and B, the sequence {xᵏ} generated by the proximal method is Fejér convergent to Ū.
Proposition 4.2.2. Under assumptions A and B, the following hold:
a. ∀x ∈ Ū, the sequence {‖x − xᵏ‖} is convergent.
b. lim_{k→+∞} ‖xᵏ − xᵏ⁻¹‖ = 0.
Theorem 4.2.3. Assume that assumptions A and B are satisfied. If 0 < λₖ ≤ λ̄, where λ̄ is a positive real number, then the sequence {xᵏ} converges to a point of Ū and lim_{k→∞} gᵏ = 0, where gᵏ ∈ ∂̂f(xᵏ) is given by Remark 4.1. Furthermore, {xᵏ} converges to a critical point of f, that is, to a point x ∈ ℝⁿ such that 0 ∈ ∂̂f(x).

5. NUMERICAL EXPERIMENTS
We implemented the algorithm using the software Matlab 5.3. We consider some academic quasiconvex functions, and each subproblem of the algorithm is solved using quasi-Newton methods. The starting point is arbitrary and the stopping tolerance is 0.001.
Example 5.1. Let f: ℝ → ℝ be defined by f(x) = 1 − x if x < 1, and f(x) = x² if x ≥ 1.
Results: approximate optimal point and number of iterations.

Table 1: Approximate solution and number of iterations

For λₖ = 1: error criterion satisfied (gradient norm = 0.0000); the approximate point is 1.000000; number of iterations: 4; the problem was solved approximately.
For λₖ = 1/k: error criterion satisfied (gradient norm = 0.0000); the approximate point is 1.000000; number of iterations: 4; the problem was solved approximately.
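The original experiments were run in Matlab 5.3 and are not reproduced here; the following Python sketch runs the same kind of proximal iteration on the function of Example 5.1 with λₖ = 1, solving each subproblem with a derivative-free scalar solver. The solver choice and the handling of the tolerance are assumptions, so the iteration count need not match Table 1 exactly.

# Sketch of the experiment of Example 5.1 with lambda_k = 1 (the scalar solver
# and the tolerance handling are illustrative assumptions; the iteration count
# may differ from Table 1).
import numpy as np
from scipy.optimize import minimize_scalar

def f(x):
    return 1.0 - x if x < 1.0 else x ** 2    # function of Example 5.1

x_prev, lam, tol = 3.0, 1.0, 1e-3            # arbitrary start, tolerance 0.001
for k in range(1, 101):
    reg = lambda x: f(x) + 0.5 * lam * (x - x_prev) ** 2
    x_new = minimize_scalar(reg, bracket=(x_prev - 1.0, x_prev + 1.0)).x
    if abs(x_new - x_prev) < tol:            # progress-based surrogate stopping rule
        break
    x_prev = x_new
print("approximate point:", x_new, "iterations:", k)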

6. DISCUSSION

1. The convergence results obtained in this work are important but still weak (convergence to a stationary point, not necessarily to a solution of the problem) for the purpose of obtaining global minimum points of quasiconvex optimization problems.
2. According to the thesis [8], non-differentiable optimization presents a series of difficulties that in most cases cannot be overcome by traditional algorithms for differentiable problems. Thus, non-differentiable problems need their own techniques. In this work we deal with quasiconvex non-differentiable functions, and Clarke's subdifferential is used as an alternative for solving a non-differentiable optimization problem.

7. CONCLUSIONS

1. The global convergence of the proximal point method for convex functions, introduced by Martinet (1970), can be extended to solve quasiconvex minimization problems. In this case the limit point is a Clarke critical point when the objective function is locally Lipschitz.
2. For a specific computational implementation of the proposed algorithm and for software design, it is necessary to study an appropriate method to solve the subproblems (4.3), as well as to know a practical characterization of the Clarke subdifferential. We hope that these results can be obtained in future works.

8. REFERENCES

[1] Arrow K and Enthoven A 1961 Quasi-concave programming Econometrica 29 779-800.
[2] Kannai Y 1977 Concavifiability and constructions of concave utility functions Journal of Mathematical Economics 4 1-56.
[3] Martinet B 1970 Régularisation d'inéquations variationnelles par approximations successives R.A.I.R.O.
[4] Rockafellar R T 1976 Augmented Lagrangians and applications of the proximal point algorithm in convex programming Math. Oper. Res. 1 97-116.
[5] Clarke F 1975 Generalized gradients and applications Transactions of the American Mathematical Society 205 247-262.
[6] Clarke F 1990 Optimization and Nonsmooth Analysis (New York: Wiley).
[7] Güler O 1991 On the convergence of the proximal point algorithm for convex minimization SIAM Journal on Control and Optimization 29 403-419.
[8] Navarro F 2013 Algunas aplicaciones y extensión del método del subgradiente (Lima: Universidad Nacional Mayor de San Marcos).
