
SIAM J. OPTIM.
Vol. 15, No. 2, pp. 394–408
© 2004 Society for Industrial and Applied Mathematics

ON THE CLASSICAL NECESSARY SECOND-ORDER OPTIMALITY CONDITIONS IN THE PRESENCE OF EQUALITY AND INEQUALITY CONSTRAINTS∗

ABDELJELIL BACCARI† AND ABDELHAMID TRAD‡
Abstract. For nonconvex optimization problems in Rn , a counterexample was recently given by
Anitescu [SIAM J. Optim., 10 (2000), pp. 1116–1135] that showed that the Mangasarian–Fromovitz
constraint qualification (MFCQ) is not sufficient for the classical necessary second-order optimality
conditions to hold. We prove these optimality conditions with the assumptions that the set of
Lagrange multipliers is a bounded line segment and a relaxed strict complementary slackness (SCS)
condition holds. A new constraint qualification is presented in this paper: We assume that the MFCQ
holds and the active constraint rank deficiency is 1. This assumption relaxes the linear independence
constraint qualification and strengthens the MFCQ. Under it, the set of Lagrange multipliers is a
bounded line segment.

Key words. nonconvex optimization, necessary conditions, constraint qualifications, pair of


quadratic forms

AMS subject classifications. 49, 49K

DOI. 10.1137/S105262340342122X

1. Introduction. This paper is concerned with necessary second-order optimal-


ity conditions for nonconvex optimization problems of the form

(P ) min f (x) such that g(x) ≤ 0 , h(x) = 0.

The functions

f : Rn −→ R ; g : Rn −→ Rp ; h : Rn −→ Rq

are twice continuously differentiable. Problem (P ) may have no equality or inequality


constraints. We recall the main notations, definitions, and classical results that will
be used in what follows.
The generalized Lagrangian function of (P) is defined on R^n × R^{p+1}_+ × R^q by

L(x, λ0, λ, µ) = λ0 f(x) + Σ_{i=1}^{p} λi gi(x) + Σ_{j=1}^{q} µj hj(x).

The Lagrangian function of (P) is defined on R^n × R^p_+ × R^q by

L(x, λ, µ) = L(x, 1, λ, µ).

The gradient and the Hessian matrix of L with respect to x are denoted, respectively,
by ∇x L(x, λ, µ) and ∇2xx L(x, λ, µ). The gradient of f with respect to x is the column
vector ∇f (x), and ∇f (x)t is its transpose. The feasible set is

F = {x | g(x) ≤ 0, h(x) = 0}.
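As a concrete illustration of these definitions (our sketch, not part of the paper), the Lagrangian and the stationarity condition ∇x L(x, λ, µ) = 0 can be checked numerically on a toy problem: min x1^2 + x2^2 subject to h(x) = x1 + x2 − 2 = 0, whose solution is x∗ = (1, 1) with multiplier µ∗ = −2.

```python
import numpy as np

# Toy equality-constrained problem (our illustration):
#   min f(x) = x1^2 + x2^2   s.t.   h(x) = x1 + x2 - 2 = 0.
df = lambda x: np.array([2 * x[0], 2 * x[1]])   # gradient of f
dh = lambda x: np.array([1.0, 1.0])             # gradient of h

def grad_L(x, mu):
    """Gradient in x of the Lagrangian L(x, mu) = f(x) + mu * h(x)."""
    return df(x) + mu * dh(x)

x_star, mu_star = np.array([1.0, 1.0]), -2.0
print(grad_L(x_star, mu_star))   # stationarity at the solution: the zero vector
```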


∗ Received by the editors January 13, 2003; accepted for publication (in revised form) March 11,

2004; published electronically December 30, 2004.


http://www.siam.org/journals/siopt/15-2/42122.html
† Ecole Superieure des Sciences et Techniques de Tunis, 5 Avenue Taha Hussein, 1008 Tunis,

Tunisia (Abdjil.Baccari@fst.rnu.tn).
‡ Faculté des Sciences de Tunis, 1060 Tunis, Tunisia (abdelhamid.trad@fst.rnu.tn).


For x ∈ F , the active index set I(x), the critical cone C(x), the set of generalized
Lagrange multipliers Λ0 (x), and the set of Lagrange multipliers Λ(x) are defined as

follows:

I(x) = {i | gi(x) = 0},

C(x) = {d | ∇f(x)^t d ≤ 0, ∇gi(x)^t d ≤ 0, i ∈ I(x), ∇hj(x)^t d = 0, j = 1, 2, . . . , q},

Λ0(x) = {(λ0, λ, µ) ≠ 0 | ∇x L(x, λ0, λ, µ) = 0, (λ0, λ) ∈ R^{p+1}_+, λi gi(x) = 0 ∀i},

Λ(x) = {(λ, µ) | λ0 = 1, (λ0, λ, µ) ∈ Λ0(x)}.

A quadratic form Q on R^n is defined by Q(x) = B(x, x), where B is a symmetric bilinear form on R^n.
Definition 1.1. Let Q be a quadratic form on Rn , and let S be a subset of Rn .
Q is said to be positive semidefinite on S if

Q(s) ≥ 0 ∀s ∈ S.

This can be written as Q ⪰ 0 on S.


Definition 1.2. A nonempty subset L ⊂ R^m is a line segment if there exist
X ∈ R^m, Y ∈ R^m, and an interval J ⊂ R such that

L = X + J.Y = {l = X + θY | θ ∈ J}.

Remark 1.1. If Y ≠ 0 and L is closed and bounded, then J is closed and bounded
and L can be written as L = X + [a, b]Y for some real numbers a ≤ b. Moreover, if
L is not a singleton, then L has exactly two extreme points X + aY and X + bY .
Definition 1.3. A first-order cone K is a cone of the form K = E + R+ d0 ,
where E is a vector subspace in Rn and d0 ∈ Rn .
Note that every vector subspace E ⊂ Rn is a first-order cone. We recall the most
classical constraint qualifications that will be used.
Definition 1.4. A feasible point x∗ ∈ F is said to satisfy the strict complemen-
tary slackness (SCS) condition if Λ(x∗ ) is not empty and, for every i ∈ I(x∗ ), there
exists (λ, µ) ∈ Λ(x∗ ) such that λi > 0.
Remark 1.2. If x∗ ∈ F, the SCS condition holds, and p∗ is the number of active
inequality constraints, then the following hold:
(i) For any i ∈ I(x∗), there exists (λ^i, µ^i) ∈ Λ(x∗) such that λ^i_i > 0,

(λ∗, µ∗) = (1/p∗) Σ_{i∈I(x∗)} (λ^i, µ^i) ∈ Λ(x∗),

and (λ∗, µ∗) satisfies λ∗_i > 0 for all i ∈ I(x∗).
(ii) The critical cone C(x∗) is a vector space. To see this, let d ∈ C(x∗) and
J = {1, 2, . . . , q}; then
(a) ∇x L(x∗, λ∗, µ∗) = 0 =⇒ ∇x L(x∗, λ∗, µ∗)^t d = 0,
(b) ∇gi(x∗)^t d ≤ 0, λ∗_i > 0, ∀i ∈ I(x∗), ∇hj(x∗)^t d = 0, ∀j ∈ J,
(c) ∇f(x∗)^t d ≤ 0 =⇒ ∇gi(x∗)^t d = 0, ∀i ∈ I(x∗).

So

C(x∗) = {d | ∇gi(x∗)^t d = 0, i ∈ I(x∗), ∇hj(x∗)^t d = 0, j ∈ J}.
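The averaging step in (i) can be illustrated numerically (our sketch, not part of the paper), using the multiplier set of Example 5.1 below, Λ(x∗) = {(1 − 3s, s, 2s) : 0 ≤ s ≤ 1/3}: one multiplier per active index, each positive in its own component, averages to a multiplier positive in every active component.

```python
import numpy as np

# Remark 1.2(i) on the multiplier segment of Example 5.1 (see below),
# Lambda(x*) = {(1 - 3s, s, 2s) : 0 <= s <= 1/3}; all three inequality
# constraints are active at x*, so p* = 3.
lams = [np.array([1.0, 0.0, 0.0]),   # s = 0:   component 1 is positive
        np.array([0.0, 1/3, 2/3]),   # s = 1/3: component 2 is positive
        np.array([0.0, 1/3, 2/3])]   # s = 1/3: component 3 is positive

lam_avg = sum(lams) / len(lams)      # the (1/p*)-average of Remark 1.2
print(lam_avg)                       # (1/3, 2/9, 4/9): strictly positive

# The average stays in Lambda(x*): the defining relations still hold.
assert np.isclose(lam_avg.sum(), 1.0) and np.isclose(2 * lam_avg[1], lam_avg[2])
```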



Definition 1.5. A feasible point x∗ ∈ F is said to satisfy the linear independence


constraint qualification (LICQ) if the vectors

∇gi (x∗ ) ∀i ∈ I(x∗ ) , ∇hj (x∗ ), j = 1, 2, . . . , q,

are linearly independent.


The well-known Mangasarian–Fromovitz constraint qualification (MFCQ) is de-
fined as follows [1].
Definition 1.6. We say that the MFCQ holds at a feasible point x∗ ∈ F , if the
two following conditions hold:
(i) The q vectors ∇hj (x∗ ) are linearly independent, and
(ii) there exists a vector d∗ such that

∇gi (x∗ )t d∗ < 0 ∀i ∈ I(x∗ ) ; ∇hj (x∗ )t d∗ = 0 , j = 1, 2, . . . , q.

The following lemma is well known (see, for instance, [2, p. 241]).
Lemma 1.7. The MFCQ holds at x∗ if and only if there is no (λ, µ) ≠ 0 such that

(1.1) λi ≥ 0 ∀i ∈ I(x∗),

(1.2) Σ_{i∈I(x∗)} λi ∇gi(x∗) + Σ_{j=1}^{q} µj ∇hj(x∗) = 0.
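Definition 1.6(ii) can be checked mechanically by linear programming (our sketch; the function name and the LP formulation are ours): maximize t subject to ∇gi(x∗)^t d + t ≤ 0 and ∇hj(x∗)^t d = 0 with box bounds, so that the MFCQ holds exactly when the optimal t is positive. The data below are the active gradients of Example 6.1 at x∗ = 0.

```python
import numpy as np
from scipy.optimize import linprog

def mfcq_direction(G, H=None):
    """Look for an MFCQ direction d with G @ d < 0 and H @ d = 0 by solving
    the LP:  max t  s.t.  G @ d + t <= 0, H @ d = 0, -1 <= d <= 1, 0 <= t <= 1.
    Returns (t, d); the MFCQ holds iff the optimal t is > 0."""
    m, n = G.shape
    c = np.zeros(n + 1)
    c[-1] = -1.0                                      # linprog minimizes, so use -t
    A_ub = np.hstack([G, np.ones((m, 1))])            # G d + t <= 0
    A_eq = None if H is None else np.hstack([H, np.zeros((H.shape[0], 1))])
    b_eq = None if H is None else np.zeros(H.shape[0])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(m), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(-1, 1)] * n + [(0, 1)])
    return res.x[-1], res.x[:-1]

# Active gradients of Example 6.1 at x* = 0: all three equal (0, 0, -1).
G = np.array([[0.0, 0.0, -1.0]] * 3)
t, d = mfcq_direction(G)
print(t)   # t = 1 > 0: the MFCQ holds, and d is a Mangasarian-Fromovitz direction
```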

Remark 1.3. Let x∗ ∈ F satisfy one of the following conditions:


(a) There are no active inequality constraints.
(b) There is only one active inequality constraint.
Then, using (1.2), the MFCQ implies the LICQ.
Definition 1.8. A feasible point x∗ ∈ F satisfies the sufficient second-order
optimality conditions (SC2) if Λ(x∗) is not empty and

(1.3) sup_{(λ,µ)∈Λ(x∗)} d^t ∇^2_xx L(x∗, λ, µ) d > 0 ∀ d ∈ C(x∗), d ≠ 0.

The following result is known as the Karush–Kuhn–Tucker necessary optimality


conditions: Assume that x∗ is a local optimal solution of (P ) and satisfies the LICQ;
then Λ(x∗ ) is a singleton (i.e., Λ(x∗ ) = {(λ, µ)}) and satisfies the following classical
necessary second-order optimality conditions:

(CN 2) d^t ∇^2_xx L(x∗, λ, µ) d ≥ 0 ∀ d ∈ C(x∗).

(CN 2) has the important property that the Lagrange multiplier (λ, µ) is the same for
all critical vectors d ∈ C(x∗). If the LICQ does not hold, Λ(x∗) may fail to be a singleton
and (CN 2) may fail to be satisfied [4]. However, (CN 2) holds if one of the following
conditions is satisfied (see, for instance, [3, p. 211]; [5, p. 230]):
(i) All constraint functions g and h are affine.
(ii) The functions f and g are convex, h is affine, and x∗ satisfies the Slater
condition.

(iii) There exists (λ, µ) ∈ Rp+ × Rq such that (x∗ , λ, µ) is a saddle point for the
Lagrangian function of (P ).
Without any constraint qualification, a local optimal solution x∗ of (P ) satisfies

the following Fritz John necessary first- and second-order optimality conditions (see,
for example, [5, p. 443]):

(1.4) Λ0(x∗) ≠ ∅,

(1.5) ∀d ∈ C(x∗ ) ∃ (λ0 , λ, µ) ∈ Λ0 (x∗ ) : (d)t ∇2xx L(x∗ , λ0 , λ, µ)d ≥ 0.

The necessary second-order optimality condition (1.5) has two drawbacks: The first
component λ0 of (λ0, λ, µ), in (1.5), may vanish, and the multiplier (λ0, λ, µ) in (1.5) is not
necessarily the same for all critical vectors.
Assume that a local optimal solution x∗ of (P ) satisfies the MFCQ; then Λ(x∗ )
is not empty and is bounded [6], convex, and compact. Every (λ0 , λ, µ) ∈ Λ0 (x∗ )
satisfies λ0 > 0 . The condition (1.5) can be written as

(GN 2) max_{(λ,µ)∈Λ(x∗)} d^t ∇^2_xx L(x∗, λ, µ) d ≥ 0 ∀ d ∈ C(x∗),

and (GN 2) is free from the first drawback. For the second drawback, the first example
was given in [4] and showed that the MFCQ does not imply (CN 2). However, in each
of the following cases, the MFCQ implies (CN 2) [7]:
(i) n ≤ 2.
(ii) There are, at most, two active inequality constraints.
A counterexample is given in [7] for n = 3 and three active inequality constraints.
This paper is a continuation of [7] and is devoted to the case in which the set
of Lagrange multipliers is a convex hull of two extreme points (i.e., a bounded line
segment). In this case the “max” in (GN 2) can be written as the “max” of two
quadratic forms. If the critical cone is a vector subspace or a first-order cone, then
classical results on a pair of quadratic forms [2, 8, 9, 10, 11] are generalized and used.
In section 2, two lemmas by Hestenes [2] are generalized to some closed cones (Lemma
2.1). In section 3, it is proved that Yuan’s lemma and its extension (Theorem 3.1)
can be easily deduced from Hestenes’s lemmas. In section 4, we prove (CN 2) for
first-order cones included in the critical cone (Theorem 4.1). The main result of this
paper (Theorem 5.1) is proved in section 5. It states that (CN 2) holds if the SCS
condition, or a weaker assumption (that at most one index i ∈ I(x∗) satisfies λi = 0 for
all (λ, µ) ∈ Λ(x∗)), is assumed. In section 6, the counterexample of [7] is used to show
that (CN 2) may fail if Λ(x∗) is bounded but is not a line segment. In section
7, a criterion for Λ(x∗) to be a line segment is given (Lemma 7.2), a new constraint
qualification (called the modified MFCQ) under which Λ(x∗) is a bounded line segment is
proposed, and Theorem 5.1 is restated (Theorem 7.7). In section 8, some remarks
are made for further progress.
2. Extensions of Hestenes’s lemmas. Let P and Q be two quadratic forms
on Rn , and let K be a closed cone in Rn . Consider the following statements:

(2.1) d ∈ K, P(d) ≤ 0, Q(d) ≤ 0 =⇒ d = 0;

(2.2) P ⪰ 0 or Q ⪰ 0 on K;

(2.3) ∃ (d1, d2) ∈ K × K : P(d1) < 0, Q(d2) < 0;

(2.4) ∃ θ > 0 : P(d) + θQ(d) > 0 ∀d ∈ K, d ≠ 0.

Two lemmas by Hestenes [2, pp. 113–114] show that if K is a vector space, then
(i) (2.1) and (2.2) imply (2.4), and
(ii) (2.1) and (2.3) imply (2.4).
Note that at least one of the statements (2.2) and (2.3) is always true. So, for a vector space
K, (2.1) implies (2.4). We extend Hestenes’s lemmas as follows.
Lemma 2.1.
(a) For every closed cone K in Rn , (2.1) and (2.2) imply (2.4).
(b) If K is a first-order cone, then (2.1) implies (2.4).
Proof. We prove (a): Assume first that Q ⪰ 0 on K. We prove, by contradiction,
that there exists a real number c > 0 such that, for every θ > c, (2.4) holds. Suppose
that, for every n ∈ N∗, there exist θ_n > n and d_n ∈ K − {0} such that
P(d_n) + θ_n Q(d_n) ≤ 0. It follows that P(d_n) ≤ 0, y_n = d_n/||d_n|| ∈ K, (y_n)_n has a
subsequence (y_{n_k})_k convergent to some d ∈ K, ||d|| = 1, P(d) ≤ 0, θ_{n_k} −→ +∞, and

P(d_{n_k}) + θ_{n_k} Q(d_{n_k}) ≤ 0 =⇒ Q(d) ≤ 0.

From Q(d) ≤ 0 and P(d) ≤ 0, condition (2.1) gives d = 0, and this contradicts ||d|| = 1. So c exists
and any θ > c satisfies (2.4). A similar argument applies in the case P ⪰ 0 on K.
We prove (b): From (a) it suffices to prove (b) in the case where (2.2) does
not hold, that is, (2.3) holds. Let E be a vector space and d0 ∈ R^n such that
K = E + R+ d0; then the vector subspace E + Rd0 satisfies (2.1). To see this, suppose
that d = e − rd0, with r > 0 and e ∈ E, satisfies P(d) ≤ 0 and Q(d) ≤ 0. Then
−d = rd0 − e ∈ K satisfies P(−d) ≤ 0 and Q(−d) ≤ 0, so (2.1) gives −d = 0, i.e., d = 0.
Hence we can suppose that
K = E + Rd0 and use the proof of [2, pp. 114–116]. To make the paper self-contained,
we add this proof with some minor modifications.
For d, x, y in R^n and µ ∈ R, let J(d, µ) = P(d) + µQ(d), I(x, y, µ) = (1/2)(J(x +
y, µ) − J(x, µ) − J(y, µ)), BQ(x, y) = (1/2)(Q(x + y) − Q(x) − Q(y)), and
S = {d ∈ K | Q(d) ≤ 0}. We apply (a) of Lemma 2.1 to the pair (P, Q) of quadratic
forms on the closed cone S, and there exists a real number θ > 0 such that

J(d, θ) = P(d) + θQ(d) > 0 ∀d ∈ S, d ≠ 0.

Let

b = sup {θ > 0 | J(d, θ) > 0 ∀d ∈ S, d ≠ 0}.

We have Q(d2) < 0 and d2 ∈ S, so J(d2, θ) = P(d2) + θQ(d2) ≤ 0 for θ large enough; hence b is finite and

J(d, b) ≥ 0 ∀d ∈ S.

The definition of b implies that, for every n ∈ N∗, there exists d_n ∈ S, d_n ≠ 0, such
that J(d_n, b + 1/n) ≤ 0, and (d_n/||d_n||)_n has a subsequence convergent to some d∗ ∈ S
such that ||d∗|| = 1 and J(d∗, b) ≤ 0. So, J(d∗, b) = 0. If Q(d∗) = 0 or b = 0, then
P(d∗) = 0 and, using (2.1), d∗ = 0, which is impossible. We get b > 0, Q(d∗) < 0,
and P(d∗) > 0.
We claim that

J(d, b) ≥ 0 ∀d ∈ K; I(d∗ , d, b) = 0 ∀d ∈ K.

Let d ∈ K. For t ∈ R and |t| small enough, Q(d∗ + td) < 0, d∗ + td ∈ S, and
J(d∗ + td, b) ≥ 0. The function f, defined by f(t) = J(d∗ + td, b) = J(d∗, b) +
2tI(d∗, d, b) + t^2 J(d, b), has a local minimum at t = 0. So f′(0) = 2I(d∗, d, b) = 0 and
f″(0) = 2J(d, b) ≥ 0.
We conclude that there exists b > 0 and d∗ ∈ K such that
(1) ||d∗ || = 1, J(d∗ , b) = P (d∗ ) + bQ(d∗ ) = 0, P (d∗ ) > 0, and Q(d∗ ) < 0.
(2) J(d, b) ≥ 0 for all d ∈ K and I(d∗ , d, b) = 0 for all d ∈ K.
In the same way, replacing Q by P in S, we find c > 0 and d∗∗ ∈ K such that
(i) ||d∗∗ || = 1, cP (d∗∗ ) + Q(d∗∗ ) = 0, P (d∗∗ ) < 0, and Q(d∗∗ ) > 0.
(ii) cP (d) + Q(d) ≥ 0 for all d ∈ K and I(d∗∗ , d, 1/c) = 0 for all d ∈ K.
Let a = 1/c; then a and d∗∗ satisfy
(3) ||d∗∗ || = 1, J(d∗∗ , a) = P (d∗∗ ) + aQ(d∗∗ ) = 0, P (d∗∗ ) < 0, and Q(d∗∗ ) > 0,
(4) J(d, a) ≥ 0 for all d ∈ K and I(d∗∗ , d, a) = 0 for all d ∈ K.
As Q(d∗) < 0 and Q(d∗∗) > 0, the equation in t, Q(td∗ + d∗∗) = t^2 Q(d∗) + 2tBQ(d∗, d∗∗) +
Q(d∗∗) = 0, admits a real solution t0. So w = t0 d∗ + d∗∗ ∈ K − {0} satisfies Q(w) = 0
and J(w, b) = P(w) > 0. It follows that
(iii) J(w, b) = t0^2 J(d∗, b) + 2t0 I(d∗, d∗∗, b) + J(d∗∗, b) = J(d∗∗, b).
(iv) 0 < P(w) = J(w, b) = P(d∗∗) + bQ(d∗∗).
From J(d∗∗ , a) = 0 = P (d∗∗ ) + aQ(d∗∗ ) and (iv), we get (b − a)Q(d∗∗ ) > 0 and a < b.
Let θ ∈]a, b[ and d ∈ K; then

J(d, θ) = J(d, a) + (θ − a)Q(d) ≥ (θ − a)Q(d)

and

J(d, θ) = J(d, b) + (θ − b)Q(d) ≥ −(b − θ)Q(d).

This implies that J(d, θ) ≥ 0. If J(d, θ) = 0, then Q(d) = 0, P(d) = 0, and d = 0. So

J(d, θ) > 0 ∀d ∈ K, d ≠ 0.
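A small numerical illustration of part (a) (ours, not from the paper): on K = R^2, with P(d) = d1^2 − d2^2 and Q(d) = d2^2, conditions (2.1) and (2.2) hold, and P + θQ is positive definite exactly for θ > 1, matching the lemma’s promise of a suitable θ.

```python
import numpy as np

# P(d) = d1^2 - d2^2 and Q(d) = d2^2 on K = R^2: P <= 0 and Q <= 0 force
# d = 0 (condition (2.1)), and Q >= 0 everywhere (condition (2.2)).
A_P = np.diag([1.0, -1.0])
A_Q = np.diag([0.0, 1.0])

def min_eig(theta):
    """Smallest eigenvalue of the matrix of P + theta*Q; positivity on all of
    R^2 is just positive definiteness of this matrix."""
    return np.linalg.eigvalsh(A_P + theta * A_Q).min()

# P + theta*Q = diag(1, theta - 1), definite exactly when theta > 1.
print(min_eig(0.5), min_eig(2.0))   # -0.5 (theta too small), 1.0 (works)
```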

3. Extension of Yuan’s lemma. Let K be a closed cone in Rn , and let P and


Q be two quadratic forms on Rn . Consider the following statements:

(3.1) max(P(d), Q(d)) ≥ 0 ∀d ∈ K;

(3.2) ∃ (t1, t2) ∈ R^2_+, t1 + t2 = 1, such that t1 P(d) + t2 Q(d) ≥ 0 ∀d ∈ K;

(3.3) max(P(d), Q(d)) > 0 ∀d ∈ K, d ≠ 0;

(3.4) ∃ (t1, t2) ∈ R^2_+, t1 + t2 = 1, such that t1 P(d) + t2 Q(d) > 0 ∀d ∈ K, d ≠ 0.

It was shown in [8] that, for K = Rn , (3.1) implies (3.2). In fact, these two statements
are equivalent for K = Rn . For the cone K = Rn+ , (3.1) and (3.2) are not equivalent
[9]. For three quadratic forms and K = Rn , (3.1) and (3.2) are not equivalent [10]. A
recent result in [11] shows that, for n ≥ 3 and K = Rn , (3.3) and (3.4) are equivalent.
A little progress is made, in this section, by proving that (3.3) and (3.4) are equivalent
for a first-order cone K = E + R+ d0 ⊂ R^n with n unrestricted.
The main result of this section is as follows.
Theorem 3.1. For a first-order cone K, (3.3) and (3.4) are equivalent.

Proof. We only prove that (3.3) implies (3.4). Inequality (3.3) is equivalent to
the condition (2.1) and, from (b) of Lemma 2.1, (2.4) holds. So (3.4) is satisfied by

t1 = 1/(1 + θ) and t2 = θ/(1 + θ).


We get Yuan’s lemma as a corollary.
Corollary 3.2. For a first-order cone K, (3.1) and (3.2) are equivalent.
Proof. To prove that (3.1) implies (3.2), let n ∈ N∗. The quadratic forms
P_n(d) = P(d) + (1/n)||d||^2 and Q_n(d) = Q(d) + (1/n)||d||^2 satisfy (3.3). By Theorem
3.1, there exist t1^n ≥ 0 and t2^n ≥ 0 such that t1^n + t2^n = 1 and

t1^n P_n(d) + t2^n Q_n(d) = t1^n P(d) + t2^n Q(d) + (1/n)||d||^2 > 0 ∀d ∈ K, d ≠ 0.

The sequence (t1^n, t2^n)_n has a subsequence (t1^{n_k}, t2^{n_k})_k convergent to (t1, t2), t1 ≥ 0, t2 ≥ 0, t1 + t2 = 1,
and, for every fixed d ∈ K, d ≠ 0,

t1^{n_k} P(d) + t2^{n_k} Q(d) + (1/n_k)||d||^2 > 0 =⇒ t1 P(d) + t2 Q(d) ≥ 0.
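A numerical illustration of Corollary 3.2 on K = R^2 (ours, not from the paper): for P = diag(1, −1) and Q = diag(−1, 1), P(d) + Q(d) = 0 for every d, so max(P(d), Q(d)) ≥ 0 and (3.1) holds; sweeping the convex combinations shows that t1 = t2 = 1/2 is the (here unique) choice delivering (3.2).

```python
import numpy as np

A_P = np.diag([1.0, -1.0])
A_Q = np.diag([-1.0, 1.0])

def min_eig(t1):
    """Smallest eigenvalue of t1*P + (1 - t1)*Q."""
    return np.linalg.eigvalsh(t1 * A_P + (1 - t1) * A_Q).min()

# t1*P + (1-t1)*Q = diag(2*t1 - 1, 1 - 2*t1): PSD only at t1 = 1/2.
ts = np.linspace(0.0, 1.0, 101)
good = [float(t) for t in ts if min_eig(t) >= -1e-12]
print(good)   # [0.5]
```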

4. Abstract main result. Theorem 3.1 and its corollary are used to prove the
following abstract main result.
Theorem 4.1. Let x∗ be a local optimal solution of (P) such that Λ(x∗) is a
bounded line segment; then, for every first-order cone K included in the critical cone
C(x∗ ), there exists (λK , µK ) ∈ Λ(x∗ ) such that

(4.1) d^t ∇^2_xx L(x∗, λK, µK) d ≥ 0 ∀ d ∈ K.

Moreover, if x∗ satisfies the sufficient SC2, then (λK , µK ) can be chosen so that

(4.2) d^t ∇^2_xx L(x∗, λK, µK) d > 0 ∀d ∈ K, d ≠ 0.

Proof. We begin with (4.1): since Λ(x∗) is a nonempty bounded set, x∗ satisfies the
MFCQ [6]; as x∗ is a local optimal solution of (P), (GN 2) holds. If Λ(x∗) is a singleton, then (GN 2) implies (4.1). Suppose
that the closed and bounded line segment Λ(x∗) is not a singleton; then Λ(x∗) has
exactly two extreme points (λ^1, µ^1) ∈ Λ(x∗) and (λ^2, µ^2) ∈ Λ(x∗). Let

P(d) = d^t ∇^2_xx L(x∗, λ^1, µ^1) d; Q(d) = d^t ∇^2_xx L(x∗, λ^2, µ^2) d.

d^t ∇^2_xx L(x∗, λ, µ) d is linear in (λ, µ) and the “max” in (GN 2) is attained at an
extreme point. So we have

0 ≤ max_{(λ,µ)∈Λ(x∗)} d^t ∇^2_xx L(x∗, λ, µ) d = max(P(d), Q(d)) ∀d ∈ K,

and (3.1) holds. From Corollary 3.2, (3.1) implies (3.2); that is, there exist t1 ≥ 0
and t2 ≥ 0 such that t1 + t2 = 1,

t1 P(d) + t2 Q(d) ≥ 0 ∀d ∈ K,

(λK, µK) = t1 (λ^1, µ^1) + t2 (λ^2, µ^2) ∈ Λ(x∗),

t1 P(d) + t2 Q(d) = d^t ∇^2_xx L(x∗, λK, µK) d ≥ 0 ∀ d ∈ K.

To prove (4.2), we use Theorem 3.1 and the same arguments as above.

5. Main result. Classical necessary second-order optimality conditions (CN 2),
without the LICQ, convexity assumptions, or the SCS condition, are not easy to obtain. A
first step in this direction is to consider optimization problems in which the critical cone
is a first-order cone and the set of Lagrange multipliers is a bounded line segment.
The main result of this paper is as follows.
The main result of this paper is as follows.
Theorem 5.1. Let x∗ be a local optimal solution of (P) such that the
following hold:
(i) The set of Lagrange multipliers Λ(x∗) is a bounded line segment.
(ii) There is at most one index i0 ∈ I(x∗) such that

(5.1) (λ, µ) ∈ Λ(x∗ ) =⇒ λi0 = 0;

then there exists a Lagrange multiplier (λ∗ , µ∗ ) ∈ Λ(x∗ ) such that

d^t ∇^2_xx L(x∗, λ∗, µ∗) d ≥ 0 ∀ d ∈ C(x∗).

Moreover, if x∗ satisfies the sufficient second-order optimality condition (SC2), then


(λ∗ , µ∗ ) can be chosen so that

d^t ∇^2_xx L(x∗, λ∗, µ∗) d > 0 ∀ d ∈ C(x∗), d ≠ 0.

Proof. We prove first that C(x∗ ) is a first-order cone. We have the following two
cases:
(1) There is no i0 ∈ I(x∗ ) such that (5.1) holds. This means that x∗ satisfies the
SCS condition and C(x∗ ) is a vector subspace (see Remark 1.2).
(2) There exists only one index i0 ∈ I(x∗ ) such that (5.1) holds. It follows that
d ∈ C(x∗ ) if and only if

∇gi(x∗)^t d = 0 ∀i ∈ I(x∗) − {i0}, ∇gi0(x∗)^t d ≤ 0,

∇hj(x∗)^t d = 0, j = 1, 2, . . . , q.

If C(x∗ ) is not a subspace, there exists d0 ∈ C(x∗ ) such that ∇gi0 (x∗ )t d0 < 0 and,
for every other d ∈ C(x∗ ), there exists r ≥ 0 such that

∇gi0 (x∗ )t d = r∇gi0 (x∗ )t d0 , ∇gi0 (x∗ )t (d − rd0 ) = 0,

∇gi (x∗ )t (d − rd0 ) = 0 ∀i ∈ I(x∗ ) − {i0 }; ∇hj (x∗ )t (d − rd0 ) = 0, j = 1, 2, . . . , q,

and w = d − rd0 is in the vector space

E = {w|∇gi (x∗ )t w = 0, i ∈ I(x∗ ); ∇hj (x∗ )t w = 0, j = 1, 2, . . . , q} ⊂ C(x∗ ).

So C(x∗ ) = E + R+ d0 .
We conclude that (ii) implies that C(x∗ ) is a first-order cone and, from Theorem
4.1, we have the desired result.
Remark 5.1.
(i) For Theorem 5.1, a direct proof, with Hestenes’s lemmas, is possible, but the
use of Yuan’s extended lemma (Theorem 3.1) and the abstract main result (Theorem
4.1) makes the proof clearer.

(ii) If (5.1) holds for more than one active index i ∈ I(x∗), then the critical cone C(x∗)
fails to be a first-order cone and Theorem 4.1 cannot be used to prove Theorem 5.1.
To see this fact, assume, for example, that 1 ∈ I(x∗ ) and 2 ∈ I(x∗ ) are the only active

indexes which satisfy (5.1), and suppose that there exists d1 ∈ C(x∗ ) and d2 ∈ C(x∗ )
such that

∇g1 (x∗ )t d1 < 0, ∇g2 (x∗ )t d1 = 0,

∇g2 (x∗ )t d2 < 0, ∇g1 (x∗ )t d2 = 0.

For any d ∈ C(x∗ ), we can find r1 ≥ 0 and r2 ≥ 0 such that

∇g1 (x∗ )t (d − r1 d1 − r2 d2 ) = ∇g1 (x∗ )t (d − r1 d1 ) = ∇g1 (x∗ )t (d) − r1 ∇g1 (x∗ )t (d1 ) = 0,

∇g2 (x∗ )t (d − r1 d1 − r2 d2 ) = ∇g2 (x∗ )t (d − r2 d2 ) = ∇g2 (x∗ )t (d) − r2 ∇g2 (x∗ )t (d2 ) = 0.

This means that w = d − r1 d1 − r2 d2 is in the vector space

E = {w|∇gi (x∗ )t w = 0, i ∈ I(x∗ ); ∇hj (x∗ )t w = 0, j = 1, 2, . . . , q} ⊂ C(x∗ ),

C(x∗ ) = E + R+ d1 + R+ d2 ,

and C(x∗ ) is not a first-order cone.


Example 5.1. Consider the nonconvex optimization problem, in R^4,

min{−x1 | gi(x) ≤ 0, i = 1, 2, 3},

where

g1(x) = 2x1^2 − x2^2 + 2x3^2 + x1, g2(x) = −x2^2 + x1 − 2x4, g3(x) = −x1^2 + x2^2 − x3^2 + x1 + x4.

For a feasible point x, g1(x) + g2(x) + 2g3(x) = 4x1 ≤ 0, so f(x) = −x1 ≥ 0. It can be seen that x∗ =
(0, 0, 0, 0) is a global optimal solution,

Λ(x∗ ) = {(λ1 , λ2 , λ3 ) ≥ 0 | λ1 + λ2 + λ3 = 1 ; 2λ2 = λ3 }

is a bounded line segment, x∗ satisfies the SCS condition, and C(x∗ ) = {d = (d1 , d2 , d3 ,
d4 )t | d1 = d4 = 0}. λ = (1/4, 1/4, 1/2) is the unique multiplier which satisfies
(CN 2).
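The claims of Example 5.1 can be verified numerically (our script, not part of the paper). At x∗ = 0 the constraint Hessians are diagonal, f = −x1 is linear, and C(x∗) = {d : d1 = d4 = 0}, so (CN 2) amounts to positive semidefiniteness of the (d2, d3) block of ∇^2_xx L; along the segment λ = (1 − 3s, s, 2s) the two diagonal entries are −2 + 8s and 4 − 16s, simultaneously nonnegative only at s = 1/4.

```python
import numpy as np

H = [np.diag([4.0, -2.0, 4.0, 0.0]),    # Hessian of g1 = 2x1^2 - x2^2 + 2x3^2 + x1
     np.diag([0.0, -2.0, 0.0, 0.0]),    # Hessian of g2 = -x2^2 + x1 - 2x4
     np.diag([-2.0, 2.0, -2.0, 0.0])]   # Hessian of g3 = -x1^2 + x2^2 - x3^2 + x1 + x4

def cn2_holds(lam):
    """Is d^t Hess_L d >= 0 on the critical cone {d : d1 = d4 = 0}?"""
    block = sum(l * Hi for l, Hi in zip(lam, H))[1:3, 1:3]   # f is linear
    return np.linalg.eigvalsh(block).min() >= -1e-12

print(cn2_holds((0.25, 0.25, 0.5)))    # True:  s = 1/4 works
print(cn2_holds((0.22, 0.26, 0.52)))   # False: s = 0.26 fails in the d3 entry
print(cn2_holds((0.28, 0.24, 0.48)))   # False: s = 0.24 fails in the d2 entry
```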
A simple criterion for Λ(x∗ ) to be a bounded line segment is given in the following
lemma.
Lemma 5.2. Let x∗ be a local optimal solution of (P ), satisfy the MFCQ, and be
such that the number of active inequality constraints is at most two; then Λ(x∗ ) is a
bounded line segment.
Proof. Let px∗ be the number of active inequality constraints. If px∗ ≤ 1, then
the LICQ holds and Λ(x∗ ) is a singleton. If px∗ = 2, we can suppose, without loss of
generality, that I(x∗ ) = {1, 2}.
Assume that one of the following conditions holds:
(i) There exists (λ^0, µ^0) ∈ Λ(x∗) such that λ^0_1 = 0 = λ^0_2.
(ii) All (λ, µ) ∈ Λ(x∗) satisfy λi > 0, i = 1, 2.
(iii) All (λ, µ) ∈ Λ(x∗) satisfy λ1 = 0.
(iv) All (λ, µ) ∈ Λ(x∗) satisfy λ2 = 0.

We claim that Λ(x∗) is a singleton: Suppose that (i) holds; then

∇f(x∗) + Σ_{j=1}^{q} µ^0_j ∇hj(x∗) = 0,

and for any (λ, µ) ∈ Λ(x∗) we have

∇f(x∗) + λ1 ∇g1(x∗) + λ2 ∇g2(x∗) + Σ_{j=1}^{q} µj ∇hj(x∗) = 0.

The difference between these two equations and Lemma 1.7 yield

λ1 = 0 = λ2, µ = µ^0.

Similar arguments apply in the other cases.


So, if Λ(x∗) is not a singleton, there exist (λ^1, µ^1) ∈ Λ(x∗) and (λ^2, µ^2) ∈ Λ(x∗)
such that

λ^1_1 > 0, λ^1_2 = 0; λ^2_1 = 0, λ^2_2 > 0,

(a) ∇f(x∗) + λ^1_1 ∇g1(x∗) + Σ_{j=1}^{q} µ^1_j ∇hj(x∗) = 0,

(b) ∇f(x∗) + λ^2_2 ∇g2(x∗) + Σ_{j=1}^{q} µ^2_j ∇hj(x∗) = 0.

x∗ satisfies the MFCQ, the vectors of

{∇g1(x∗), ∇hj(x∗), j = 1, 2, . . . , q}

are linearly independent, and (λ^1, µ^1) is unique. Also, (λ^2, µ^2) is unique. Let (λ, µ) ∈
Λ(x∗); then

(c) ∇f(x∗) + λ1 ∇g1(x∗) + λ2 ∇g2(x∗) + Σ_{j=1}^{q} µj ∇hj(x∗) = 0.

If λ1 = 0 or λ2 = 0, then (λ, µ) = (λ^2, µ^2) or (λ, µ) = (λ^1, µ^1). Suppose that λ1 > 0
and λ2 > 0. For t = λ1/λ^1_1, (c) − t(a) gives

(d) (1 − t)∇f(x∗) + λ2 ∇g2(x∗) + Σ_{j=1}^{q} (µj − tµ^1_j)∇hj(x∗) = 0.

If t = 1, then

λ2 ∇g2(x∗) + Σ_{j=1}^{q} (µj − µ^1_j)∇hj(x∗) = 0,

λ2 = 0, and this is impossible. If t > 1, then (d) can be written as

(e) ∇f(x∗) + (λ2/(1 − t))∇g2(x∗) + Σ_{j=1}^{q} ((µj − tµ^1_j)/(1 − t))∇hj(x∗) = 0.

By the uniqueness of (λ^2, µ^2), λ^2_2 = λ2/(1 − t) < 0, and this is impossible, too. So 0 < t < 1, λ^2_2 = λ2/(1 − t),
µ^2_j = (µj − tµ^1_j)/(1 − t), (λ^2, µ^2) = [(λ, µ) − t(λ^1, µ^1)]/(1 − t), and

(λ, µ) = t(λ^1, µ^1) + (1 − t)(λ^2, µ^2).

This means that every (λ, µ) ∈ Λ(x∗) is a convex combination of (λ^1, µ^1) and (λ^2, µ^2)
and Λ(x∗) is a bounded line segment.
Remark 5.2. We list some cases in which Theorem 5.1 holds without (ii):
(i) The working space is R or R2 [7].
(ii) The number of active inequality constraints is, at most, two [7].
(iii) The critical cone is included in a one-dimensional vector space (use (GN 2)).
6. Theorem 5.1 does not hold without the line segment assumption. The
counterexample in [7] can be used to show that condition (i) of Theorem 5.1 cannot
be replaced by the MFCQ.
The optimization problem, in R3 , is as follows.
Example 6.1.

min {x3 | gi (x) ≤ 0, i = 1, 2, 3},

where

g1(x) = 2√3 x1x2 − 2x2^2 − x3, g2(x) = x2^2 − 3x1^2 − x3, g3(x) = −2√3 x1x2 − 2x2^2 − x3.

It can be seen that x∗ = (0, 0, 0) is a global optimal solution and satisfies the
MFCQ. The set of Lagrange multipliers

Λ(x∗ ) = {λ ∈ R3 | λi ≥ 0, i = 1, 2, 3; λ1 + λ2 + λ3 = 1}

is not a line segment. The critical cone

C(x∗ ) = {d = (d1 , d2 , d3 )t | d3 = 0}

is a vector space. For the critical vectors d1 = (1, 0, 0)t and d2 = (0, 1, 0)t , there is no
λ ∈ Λ(x∗ ) such that (see [7])

(d1 )t ∇2xx L(x∗ , λ)d1 ≥ 0 and (d2 )t ∇2xx L(x∗ , λ)d2 ≥ 0.
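This failure can be confirmed numerically (our script, not part of the paper). Since f = x3 and every constraint is linear in x3, only the (x1, x2) blocks of the constraint Hessians matter on C(x∗); sweeping the whole simplex Λ(x∗) shows that no single multiplier makes the Hessian of the Lagrangian nonnegative on both d1 and d2.

```python
import numpy as np

r3 = np.sqrt(3.0)
H = [np.array([[ 0.0,  2 * r3], [ 2 * r3, -4.0]]),   # (x1, x2) block of Hess g1
     np.array([[-6.0,  0.0   ], [ 0.0,     2.0]]),   # (x1, x2) block of Hess g2
     np.array([[ 0.0, -2 * r3], [-2 * r3, -4.0]])]   # (x1, x2) block of Hess g3

d1, d2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def ok(lam):
    """Does lam give d^t Hess_L d >= 0 for both critical vectors d1 and d2?"""
    HL = sum(l * Hi for l, Hi in zip(lam, H))
    return d1 @ HL @ d1 >= 0 and d2 @ HL @ d2 >= 0

# d1 requires -6*lam2 >= 0, forcing lam2 = 0; but then d2 gives -4 < 0.
grid = np.linspace(0.0, 1.0, 201)
found = [(a, b) for a in grid for b in grid if a + b <= 1 and ok((a, b, 1 - a - b))]
print(found)   # []: no multiplier satisfies (CN 2) for both critical vectors
```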

7. A new constraint qualification. Let x∗ be a feasible point. Without loss


of generality, we suppose that there exists an integer p∗ ≤ p such that gi(x∗) = 0, i =
1, 2, . . . , p∗, and gi(x∗) < 0, i = p∗ + 1, . . . , p.
Definition 7.1. We say that a feasible point x∗ satisfies the rank condition (RC)
if the family of vectors

{∇gi(x∗), i = 1, 2, . . . , p∗; ∇hj(x∗), j = 1, 2, . . . , q}

contains p∗ + q − 1 linearly independent vectors (i.e., its rank is at least p∗ + q − 1).


Lemma 7.2. Let x∗ be a feasible point such that

(i) Λ(x∗ ) is not empty, and


(ii) x∗ satisfies the RC;
then Λ(x∗ ) is a closed line segment.

Proof. It suffices to prove the lemma in the case where Λ(x∗) is not a singleton and
p∗ = p. Let (λ∗, µ∗) ∈ Λ(x∗); then


(7.1) ∇f(x∗) + Σ_{i=1}^{p} λ∗_i ∇gi(x∗) + Σ_{j=1}^{q} µ∗_j ∇hj(x∗) = 0.

For any other (λ, µ) ∈ Λ(x∗), we have

(7.2) ∇f(x∗) + Σ_{i=1}^{p} λi ∇gi(x∗) + Σ_{j=1}^{q} µj ∇hj(x∗) = 0;

(7.3) Σ_{i=1}^{p} (λi − λ∗_i)∇gi(x∗) + Σ_{j=1}^{q} (µj − µ∗_j)∇hj(x∗) = 0.

Equation (7.3) means that the Jacobian matrix, Dc(x∗ ), of active constraints

c = (g1 , g2 , . . . , gp , h1 , h2 , . . . , hq )

satisfies

Dc(x∗ )t ((λ, µ) − (λ∗ , µ∗ )) = 0; (λ, µ) − (λ∗ , µ∗ ) ∈ Ker(Dc(x∗ )t ),

where Dc(x∗ )t is the transpose of the matrix Dc(x∗ ). It is well known that

Ker(Dc(x∗ )t ) = (R(Dc(x∗ )))⊥ ,

where (R(Dc(x∗ )))⊥ is the orthogonal space to the range of Dc(x∗ ). The dimension
of R(Dc(x∗ )) is p∗ + q − 1, (R(Dc(x∗ )))⊥ is a one-dimensional space, and there exists
z ∈ Rp+q , z = 0, such that

Ker(Dc(x∗ )t ) = (R(Dc(x∗ )))⊥ = Rz

and any (λ, µ) ∈ Λ(x∗ ) satisfies (λ, µ) − (λ∗ , µ∗ ) ∈ Rz, so

Λ(x∗ ) ⊂ (λ∗ , µ∗ ) + Rz.

Let J = {α ∈ R | (λ∗ , µ∗ ) + αz ∈ Λ(x∗ )}. Λ(x∗ ) is closed and convex and J is closed
and convex, too, so it is a closed interval and Λ(x∗ ) = (λ∗ , µ∗ ) + Jz is a closed line
segment.
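The kernel computation in this proof is easy to reproduce (our sketch, not part of the paper), using the active gradients of Example 5.1 at x∗ = 0: the Jacobian has rank p∗ + q − 1 = 2, its transpose has a one-dimensional kernel spanned by z = (−3, 1, 2), and indeed the segment Λ(x∗) = {(1 − 3s, s, 2s)} runs along that direction.

```python
import numpy as np

Dc = np.array([[1.0, 0.0, 0.0,  0.0],    # grad g1(0) = (1, 0, 0, 0)
               [1.0, 0.0, 0.0, -2.0],    # grad g2(0) = (1, 0, 0, -2)
               [1.0, 0.0, 0.0,  1.0]])   # grad g3(0) = (1, 0, 0, 1)

assert np.linalg.matrix_rank(Dc) == 2    # rank condition: p* + q - 1 = 2

# Ker(Dc^t) = {z : z1*grad g1 + z2*grad g2 + z3*grad g3 = 0}; with rank 2 it is
# one-dimensional, spanned by the last right-singular vector of Dc^t.
_, _, Vt = np.linalg.svd(Dc.T)
z = Vt[-1] / Vt[-1][1]                   # scale so the g2-component equals 1
print(z)   # (-3, 1, 2): the direction of the line containing Lambda(x*)
```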
Lemma 7.3. Let x∗ be a feasible point for (P ) and such that
(i) Λ(x∗ ) is not empty and bounded, and
(ii) x∗ satisfies the RC;
then there exist b ≥ 0, z ∈ Rp+q , and an extreme point (λ∗ , µ∗ ) ∈ Λ(x∗ ) such that

Λ(x∗ ) = (λ∗ , µ∗ ) + [0, b]z.

Proof. Λ(x∗) is convex and compact and has at least one extreme point (λ∗, µ∗).
It can be seen, from Lemma 7.2 and Remark 1.1, that Λ(x∗) = (λ∗, µ∗) + [a, b]z for
some vector z ∈ R^{p+q} and some real numbers a ≤ b. Supposing that a < b, we claim
that 0 ∉ ]a, b[: Otherwise, for ε ∈ ]0, b[ with ε < −a, we would have

(λ^1, µ^1) = (λ∗, µ∗) + εz ∈ Λ(x∗); (λ^2, µ^2) = (λ∗, µ∗) − εz ∈ Λ(x∗),

and

(λ∗, µ∗) = [(λ^1, µ^1) + (λ^2, µ^2)]/2

contradicts the fact that (λ∗, µ∗) is an extreme point of Λ(x∗). It follows that 0 ∉ ]a, b[,
(λ∗, µ∗) ∈ Λ(x∗), and a = 0 or b = 0. Replacing z by −z, if necessary, we can assume
that a = 0.
Definition 7.4. We say that a feasible point x∗ satisfies the generalized strict
complementary slackness (GSCS) condition if there is at most one index
i0 ∈ I(x∗) such that
(λ, µ) ∈ Λ(x∗) =⇒ λi0 = 0.
Definition 7.5. We say that a feasible point x∗ satisfies the modified MFCQ
(MMF) if x∗ satisfies the RC and the MFCQ.
Remark 7.1. It will be helpful to have some equivalent forms of the MMF. Note
that the following hold.
(i) x∗ satisfies the LICQ if and only if there is no (λ, µ) ≠ 0 such that (1.2) holds.
(ii) x∗ satisfies the MFCQ if and only if there is no (λ, µ) ≠ 0 such that (1.1) and
(1.2) hold.
(iii) We look for a criterion, like (1.1) and (1.2), to characterize the MMF.
Let x∗ ∈ F and {i1, i2} ⊂ I(x∗); then x∗ is a feasible point for the optimization
problem

(P2) min{f(x) | gi(x) ≤ 0, i = i1, i2; gi(x) = 0, i ∈ I(x∗), i ≠ i1, i2; h(x) = 0}.

In (P2), all active inequalities of (P), except i1 and i2, are converted into equalities.
The following lemma shows that x∗ satisfies the MMF if and only if there exists
{i1, i2} ⊂ I(x∗) such that x∗ satisfies the MFCQ for (P2).
Lemma 7.6. For a feasible point x∗ , with more than one active inequality con-
straint, the following conditions are equivalent:
(i) x∗ satisfies the MMF.
(ii) There exist i1 and i2 in I(x∗) such that there is no (λ, µ) ≠ 0 such that

(7.4) λi1 ≥ 0, λi2 ≥ 0,

and

(7.5) Σ_{i∈I(x∗)} λi ∇gi(x∗) + Σ_{j=1}^{q} µj ∇hj(x∗) = 0.

(iii) There exist i1 and i2 in I(x∗) such that the vectors

∇gi(x∗), i ∈ I(x∗) − {i1, i2}, ∇hj(x∗) ∀j,

are linearly independent and there is a vector d∗ such that

(7.6) ∇gi1(x∗)^t d∗ < 0, ∇gi2(x∗)^t d∗ < 0,

(7.7) ∇gi(x∗)^t d∗ = 0, i ∈ I(x∗) − {i1, i2}, ∇hj(x∗)^t d∗ = 0 ∀j.



Proof. We begin with (i) implies (ii): If the vectors of

C = {∇gi (x∗ ), i ∈ I(x∗ ), ∇hj (x∗ ), j = 1, 2, . . . , q}



are linearly independent, then any i1 and i2 in I(x∗) satisfy (ii). Suppose that the
vectors of C are not linearly independent; then there exists i1 ∈ I(x∗) such that

∇gi1(x∗) = Σ_{i∈I(x∗)−{i1}} λ∗_i ∇gi(x∗) + Σ_{j=1}^{q} µ∗_j ∇hj(x∗).

If all i ∈ I(x∗) − {i1} satisfy λ∗_i ≤ 0, then

∇gi1(x∗) − Σ_{i∈I(x∗)−{i1}} λ∗_i ∇gi(x∗) − Σ_{j=1}^{q} µ∗_j ∇hj(x∗) = 0.

Because of the MFCQ and Lemma 1.7, all coefficients of this combination must vanish,
and this contradicts the fact that the coefficient of ∇gi1(x∗) is 1. So there exists
i2 ∈ I(x∗) − {i1} such that λ∗_i2 > 0. It
can be seen, from the RC, that the vectors of

B = {∇gi (x∗ ), i ∈ I(x∗ ) − {i1 }, ∇hj (x∗ ), j = 1, 2, . . . , q}

are linearly independent and the components (λ∗ , µ∗ ) of ∇gi1 (x∗ ) with respect to B
are unique. Suppose that (λ, µ) satisfies (7.4) and (7.5). If λi1 > 0, then −λi2 /λi1 ≤ 0
and must be equal to λ∗i2 > 0; this is impossible, so λi1 = 0 and all other components
of (λ, µ) vanish.
We prove that (ii) implies (iii): by Lemma 1.7, (ii) means that, for (P2 ), x∗
satisfies the MFCQ which is (iii). So (ii) and (iii) are equivalent. To prove that (ii)
implies (i), note that, by Lemma 1.7, x∗ satisfies the MFCQ for the problem (P ).
From (ii), the vectors of

B = {∇gi(x∗), i ∈ I(x∗) − {i1}, ∇hj(x∗), j = 1, 2, . . . , q}

are linearly independent, and the RC holds.


The main result (Theorem 5.1) of this paper can be written, with the MMF and
GSCS condition, in the following form.
Theorem 7.7. Let x∗ be a local optimal solution of (P) such that
(i) x∗ satisfies the MMF, and
(ii) x∗ satisfies the GSCS condition;
then there exists (λ∗ , µ∗ ) ∈ Λ(x∗ ) such that

d^t ∇^2_xx L(x∗, λ∗, µ∗) d ≥ 0 ∀d ∈ C(x∗).

Moreover, if x∗ satisfies the sufficient SC2, then (λ∗ , µ∗ ) can be chosen so that

d^t ∇^2_xx L(x∗, λ∗, µ∗) d > 0 ∀d ∈ C(x∗), d ≠ 0.



8. Conclusion. We have proved the classical necessary second-order optimality


conditions (CN 2) without using the LICQ. Our result is concerned with nonconvex
optimization problems in which the set of Lagrange multipliers Λ(x∗ ), at the local

optimal solution x∗ , is a bounded line segment and the GSCS condition holds. The
GSCS condition is not necessary for some special structures of the critical cone. A
condition for Λ(x∗ ) to be a bounded line segment is given. If the LICQ is assumed,
then (CN 2) holds without the SCS or GSCS condition. Our result is limited by the
GSCS assumption, which seems not to be necessary, but we have no counterexample.

REFERENCES

[1] O. L. Mangasarian and S. Fromovitz, The Fritz John necessary optimality conditions in
the presence of equality and inequality constraints, J. Math. Anal. Appl., 17 (1967), pp.
37–47.
[2] M. R. Hestenes, Optimization Theory: The Finite Dimensional Case, Robert E. Krieger
Publishing Company, Huntington, NY, 1975.
[3] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming: Theory and
Algorithms, John Wiley, New York, 1993.
[4] M. Anitescu, Degenerate nonlinear programming with a quadratic growth condition, SIAM J.
Optim., 10 (2000), pp. 1116–1135.
[5] J. F. Bonnans and A. Shapiro, Perturbation Analysis of Optimization Problems, Springer-
Verlag, Berlin, 2000.
[6] J. Gauvin, A necessary and sufficient regularity condition to have bounded multipliers in
nonconvex programming, Math. Program., 12 (1977), pp. 136–138.
[7] A. Baccari, On the classical necessary second-order optimality conditions, J. Optim. Theory
Appl., 123 (2004), pp. 213–221.
[8] Y. Yuan, On a subproblem of trust region algorithms for constrained optimization, Math.
Program., 47 (1990), pp. 53–63.
[9] J.-P. Crouzeix, J.-E. Martinez-Legaz, and A. Seeger, Un théorème de l’alternative pour
des formes quadratiques et quelques extensions, C. R. Acad. Sci. Paris Sér. I Math., 314
(1992), pp. 505–506.
[10] J.-E. Martinez-Legaz and A. Seeger, Yuan’s alternative theorem and the maximization of
the minimum eigenvalue function, J. Optim. Theory Appl., 82 (1994), pp. 159–167.
[11] J.-B. Hiriart-Urruty and M. Torki, Permanently going back and forth between the
“quadratic world” and the “convexity world” in optimization, Appl. Math. Optim., 45
(2002), pp. 169–184.
