Abstract
In a general Banach space setting, we study the minimum time function determined by a closed convex
set K and a closed set S (this function is simply the usual Minkowski function of K if S is the singleton
consisting of the origin). In particular, we show that various subdifferentials of a minimum time function
can be represented via the corresponding normal cones of sublevel sets of the function.
© 2005 Elsevier Inc. All rights reserved.
Keywords: Minkowski function; Minimum time function; Subdifferentials; Normal cones; Tangent cones
1. Introduction
Let X be a Banach space and K be a bounded closed convex subset of X containing the origin
as an interior point. The Minkowski function ρK of K is defined by
ρK(u) := inf{t > 0: t⁻¹u ∈ K} for all u ∈ X.
Let S be a closed subset of X. The function ΓS|K is defined by
ΓS|K(x) := inf_{s∈S} ρK(s − x) for all x ∈ X. (1.1)
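To make the two definitions concrete, here is a small numerical sketch (not from the paper; the choice of K, the finite sample of S, and all function names are illustrative assumptions): for K the unit ℓ∞-ball in R², the gauge is ρK(u) = max(|u1|, |u2|), and the infimum in (1.1) is approximated by minimizing over a finite sample of S.

```python
def rho_K(u):
    # Minkowski function (gauge) of K = unit l_inf ball in R^2:
    # rho_K(u) = inf{t > 0 : u/t in K} = max(|u1|, |u2|).
    return max(abs(u[0]), abs(u[1]))

def gamma(x, S):
    # Minimum time function (1.1): inf over s in S of rho_K(s - x),
    # approximated by a minimum over the finite sample S.
    return min(rho_K((s[0] - x[0], s[1] - x[1])) for s in S)

# S = a finite sample of the segment {(t, 0) : 0 <= t <= 1}.
S = [(i / 1000.0, 0.0) for i in range(1001)]
print(gamma((2.0, 3.0), S))
```

Since ΓS|K is defined by an infimum, sampling S only gives an upper approximation; here every sample value equals max(|t − 2|, 3) = 3, so the printed value 3 is exact.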
✩ This work was partially supported by the National Natural Science Foundation of China and by grants (2001)01GY051-66 and SZD0406 from Sichuan Province. The second author was supported by a direct grant (CUHK) and an Earmarked Grant from the Research Grants Council of Hong Kong.
* Corresponding author.
E-mail address: yiranhe@hotmail.com (Y. He).
0022-247X/$ – see front matter © 2005 Elsevier Inc. All rights reserved.
doi:10.1016/j.jmaa.2005.09.009
Y. He, K.F. Ng / J. Math. Anal. Appl. 321 (2006) 896–910 897
This function is not necessarily convex and can be viewed as a minimum time function of a
control system with constant dynamics [1–6]. Wolenski and Zhuang [6] and Colombo and Wolenski [2] studied the proximal subdifferential of the function ΓS|K in the contexts of a Euclidean space and of a Hilbert space, respectively. Colombo and Wolenski [3] studied the Fréchet
subdifferential of ΓS|K in a Hilbert space. Li and Ni [7] studied the relationships
between the existence of minimizers of the minimization problem in (1.1) and directional derivatives of ΓS|K. De Blasi and Myjak [8] and Li [9] studied the well-posedness of
the minimization problem in (1.1). If K is the (closed) unit ball in X, then ρK(x) and ΓS|K(x)
respectively reduce to the usual norm ‖x‖ and the usual distance dS(x), which is defined by
dS(x) := inf_{s∈S} ‖s − x‖ for all x ∈ X.
However, there is an essential difference between the function ΓS|K and the usual distance dS:
the set K defining ρK is possibly asymmetric, while the unit ball is always symmetric.
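A one-dimensional sketch of this asymmetry (illustrative, not from the paper): for the asymmetric set K = [−1, 2] ⊂ R, which is bounded, closed, convex, and contains 0 as an interior point, the gauge has a simple closed form and ρK(x) ≠ ρK(−x) in general.

```python
def rho(u):
    # Gauge of K = [-1, 2] in R: the smallest t > 0 with u/t in K
    # is t = u/2 when u >= 0 (constraint u/t <= 2) and t = -u when
    # u < 0 (constraint u/t >= -1).
    return u / 2 if u >= 0 else -u

print(rho(2.0), rho(-2.0))  # the "unit ball" K is asymmetric
```

Note that ρK is still positively homogeneous and subadditive, exactly as for a norm, but the symmetry ρK(x) = ρK(−x) is lost.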
In this paper we show, in a general Banach space setting, that the Fréchet, proximal, and Clarke
subdifferentials of the functions ρK and ΓS|K can be described via the corresponding
notions of normal cones of sublevel sets of these functions. The main results unify and generalize
the corresponding results in [10–12] from the usual distance function dS to ΓS|K, and those in
[2,3,6,12] from Euclidean or Hilbert spaces to Banach spaces. As a byproduct, we obtain
a formula for the distance from a vector to the Clarke tangent cone of a closed set S in terms
of Clarke directional derivatives of dS, which seems new and complements a result in [11]
where a formula of this type was given in terms of the contingent cone of S and the Hadamard directional
derivative of dS.
2. Preliminaries
Let X be a normed vector space with norm denoted by ‖·‖. Let X∗ denote the topological
dual of X. We use B(x; r) to denote the open ball centered at x with radius r > 0 and ⟨·,·⟩ to
denote the pairing between X∗ and X. Let g: X → R be a lower semicontinuous function and
x ∈ X. Let us recall the following well-known classes of subdifferentials for g at x: ξ ∈ ∂P g(x) (the proximal subdifferential) if and only if there exist σ > 0 and δ > 0 such that
g(y) − g(x) − ⟨ξ, y − x⟩ ≥ −σ‖y − x‖² for all y ∈ B(x; δ),
and ξ ∈ ∂F g(x) (the Fréchet subdifferential) if and only if for any ε > 0 there exists δ > 0 such that
g(y) − g(x) − ⟨ξ, y − x⟩ ≥ −ε‖y − x‖ for all y ∈ B(x; δ).
Let S ⊂ X be a closed set and let x ∈ S. The proximal normal cone and Fréchet normal cone
of S at x are defined as the corresponding subdifferentials of the indicator function of S at x and
are denoted respectively by NSP(x) and NSF(x). That is, ξ ∈ NSP(x) if and only if there exist
σ > 0 and δ > 0 such that ⟨ξ, y − x⟩ ≤ σ‖y − x‖² for all y ∈ S ∩ B(x; δ), and ξ ∈ NSF(x) if and
only if for any ε > 0 there exists δ > 0 such that ⟨ξ, y − x⟩ ≤ ε‖y − x‖ for all y ∈ S ∩ B(x; δ).
For K ⊂ X, K◦ denotes the polar of K:
K◦ := {ξ ∈ X∗: ⟨ξ, v⟩ ≤ 1 for all v ∈ K}.
For a closed set S and x ∈ S, TS(x) and TSc(x) respectively denote the (Bouligand) contingent
cone and the Clarke tangent cone. That is, h ∈ TS(x) if and only if there exist sequences tn → 0+
and hn → h such that x + tn hn ∈ S for every n; h ∈ TSc(x) if and only if for any xn →S x and
tn → 0+ there exists a sequence hn → h such that xn + tn hn ∈ S for every n, where xn →S x
means that the sequence {xn} is contained in S and converges to x. As in [10], we say that S is
tangentially regular at x if TS(x) = TSc(x).
We denote by NSc(x) the normal cone in the sense of Clarke [13]:
NSc(x) := {ξ ∈ X∗: ⟨ξ, y⟩ ≤ 0 for all y ∈ TSc(x)}.
For a locally Lipschitz function g defined on X, g′(x; h) and g◦(x; h) respectively denote the
Hadamard directional derivative and the Clarke generalized directional derivative:
g′(x; h) := liminf_{t→0+} (g(x + th) − g(x))/t,
g◦(x; h) := limsup_{y→x, t↓0} (g(y + th) − g(y))/t.
The Clarke subdifferential of g at x, denoted by ∂cg(x), is defined by
∂cg(x) := {ξ ∈ X∗: ⟨ξ, h⟩ ≤ g◦(x; h) for all h ∈ X}.
It is known that g◦(x; ·) is the support function of ∂cg(x):
g◦(x; h) = sup_{ξ∈∂cg(x)} ⟨ξ, h⟩ for every h ∈ X.
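These two directional derivatives can be illustrated numerically (this sketch is not from the paper; the brute-force sampling scheme and all names are assumptions chosen for simplicity). For g(y) = |y| at x = 0, the Clarke generalized directional derivative in the direction h = ±1 equals 1, and finite sampling of (y, t) pairs already attains it.

```python
def clarke_dd(g, x, h, eps=1e-4, n=200):
    # Crude numerical lower estimate of the Clarke generalized directional
    # derivative g°(x; h) = limsup_{y -> x, t -> 0+} (g(y + t*h) - g(y)) / t,
    # obtained by maximizing the quotient over sampled y near x and small t > 0.
    best = float("-inf")
    for i in range(-n, n + 1):
        y = x + i * eps / n
        for k in range(1, 50):
            t = eps / k
            best = max(best, (g(y + t * h) - g(y)) / t)
    return best

print(clarke_dd(abs, 0.0, 1.0), clarke_dd(abs, 0.0, -1.0))
```

Finite sampling can only underestimate a limsup; for g = |·| the quotient equals 1 at every sampled y on the appropriate side of 0, so the estimate is sharp here (both values are 1, reflecting the nonsmoothness of |·| at 0).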
We refer the readers to [13] and [14] for more discussions on nonsmooth analysis.
Some useful properties of Minkowski functions are summarized below.
Proposition 2.1. Let K ⊂ X be a bounded closed convex set containing the origin as an interior
point. Then
Proof. The first three properties can be found in [15] in the more general context of topological
vector spaces. Now we prove (iv) and (v).
ρK(x) = inf{t > 0: t⁻¹x ∈ K} = inf{t > 0: ⟨ξ, t⁻¹x⟩ ≤ 1 for all ξ ∈ K◦}
= sup{⟨ξ, x⟩: ξ ∈ K◦} ≤ ‖K◦‖‖x‖,
where the second equality follows from the Bipolar Theorem (see Theorem 4.32 in [16]). For
any positive integer n, there exists tn > 0 such that tn⁻¹x ∈ K and tn < ρK(x) + 1/n. Then ‖x‖ ≤ tn‖K‖, and therefore
ρK(x) > tn − 1/n ≥ ‖x‖/‖K‖ − 1/n.
Letting n → ∞, we obtain (iv).
Now we prove (v). By (i), ρK(·) is a subadditive function and hence
ρK(x) − ρK(y) ≤ ρK(x − y) ∀x, y ∈ X.
In view of (iv), ρK(x − y) ≤ ‖K◦‖‖x − y‖ and it follows that
ρK(x) − ρK(y) ≤ ‖K◦‖‖x − y‖ ∀x, y ∈ X.
The conclusion is verified by exchanging the roles of x and y. □
Proof. For any positive integer n, there exists some yn ∈ S such that
ρK(yn − y) < ΓS|K(y) + 1/n.
Therefore, ΓS|K(x) − ΓS|K(y) ≤ ρK(yn − x) − ρK(yn − y) + 1/n ≤ ρK(y − x) + 1/n. The
conclusion is verified by letting n → ∞. □
Proof. Lemma 3.1 together with the second inequality in Proposition 2.1(iv) yields the Lipschitz
continuity of ΓS|K. □
Proposition 3.1. Let x ∈ S. Then:
(i) Γ′S|K(x; h) ≤ inf_{z∈TS(x)} ρK(z − h); the equality holds if X is finite dimensional.
(ii) Γ◦S|K(x; h) ≤ inf_{z∈TSc(x)} ρK(z − h).
Proof. Let z ∈ TS(x). Then there exist tn → 0+ and zn → z such that x + tn zn belongs to S. It
follows that
ΓS|K(x + tn h) ≤ ρK(x + tn zn − x − tn h) = tn ρK(zn − h)
≤ tn (ρK(zn − z) + ρK(z − h))
≤ tn (‖K◦‖‖zn − z‖ + ρK(z − h)),
which implies that
liminf_{n→∞} (ΓS|K(x + tn h) − ΓS|K(x))/tn = liminf_{n→∞} ΓS|K(x + tn h)/tn ≤ ρK(z − h),
since x ∈ S and hence ΓS|K(x) = 0. This implies that Γ′S|K(x; h) ≤ ρK(z − h) for every z ∈
TS(x). Thus the first assertion in (i) is verified.
Next we suppose that X is finite dimensional. Let {tn} be a sequence such that tn → 0+ and
Γ′S|K(x; h) = lim_{n→∞} (ΓS|K(x + tn h) − ΓS|K(x))/tn.
Since X is finite dimensional and ρK(·) is continuous (Proposition 2.1(v)), there exists a sequence {sn} ⊂ S such that ΓS|K(x + tn h) = ρK(sn − x − tn h). Putting zn := (sn − x)/tn, we
have
(ΓS|K(x + tn h) − ΓS|K(x))/tn = ρK(zn − h) ≥ ‖K‖⁻¹‖zn − h‖. (3.1)
This and Lemma 3.1 imply that {zn} is bounded, and hence we may assume that zn → z0 for
some z0. Thus z0 ∈ TS(x), and (3.1) together with the continuity of ρK imply that ρK(z0 − h) =
Γ′S|K(x; h). This verifies that the inequality in (i) becomes an equality.
Now we prove (ii). Let z ∈ TSc(x). In view of the definition of Γ◦S|K(x; h), there exist two
sequences {yn} and {tn} such that yn → x, tn → 0+, and
Γ◦S|K(x; h) = lim_{n→∞} (ΓS|K(yn + tn h) − ΓS|K(yn))/tn.
For any positive integer n, there exists xn ∈ S such that ρK(xn − yn) < ΓS|K(yn) + tn². Since
ΓS|K(yn) ≤ ρK(x − yn) → 0, it follows that xn →S x. As z ∈ TSc(x), there exists a sequence {zn}
converging to z such that {xn + tn zn} ⊂ S. It follows from Lemma 3.1 that
ΓS|K(yn + tn h) − ΓS|K(yn) − tn² ≤ ΓS|K(yn + tn h) − ρK(xn − yn) ≤ ΓS|K(xn + tn h)
≤ ρK(xn + tn zn − xn − tn h) = tn ρK(zn − h)
≤ tn (ρK(zn − z) + ρK(z − h))
≤ tn (‖K◦‖‖zn − z‖ + ρK(z − h)).
Dividing by tn on both sides in the above expression and letting n → ∞, we obtain that
Γ◦S|K(x; h) = lim_{n→∞} (ΓS|K(yn + tn h) − ΓS|K(yn))/tn ≤ ρK(z − h).
Since z ∈ TSc(x) is arbitrary, Γ◦S|K(x; h) ≤ inf_{z∈TSc(x)} ρK(z − h). □
Recall that when K is the unit ball in X, ρK(x) and ΓS|K(x) reduce to ‖x‖ and dS(x), respectively.
(i) d′S(x; h) ≤ d_{TS(x)}(h); the equality holds if X is finite dimensional.
(ii) d◦S(x; h) ≤ d_{TSc(x)}(h).
Lemma 3.3. ΓTSc(x)|K(y) = sup_{ξ∈A} ⟨ξ, y⟩ for all y ∈ X, where A denotes the set of all ξ ∈ NSc(x)
satisfying ρK◦(−ξ) ≤ 1.
The following result on support functions is well known and can be proved by a separation
theorem of convex sets (see [16, Theorem 3.18]).
Lemma 3.4. Let S1 and S2 be two closed convex sets in X∗. Then S1 ⊂ S2 if and only if
sup_{x∗∈S1} ⟨x∗, x⟩ ≤ sup_{x∗∈S2} ⟨x∗, x⟩ for all x ∈ X.
Now we present the Clarke subdifferential formula for the minimum time function ΓS|K.
Theorem 3.1. Let x ∈ S. Then ∂cΓS|K(x) ⊂ NSc(x) ∩ {ξ ∈ X∗: ρK◦(−ξ) ≤ 1}. The equality holds
if S is tangentially regular at x and the space X is finite dimensional.
Proof. In view of Lemmas 3.3 and 3.4 and Proposition 3.1(ii), the first assertion follows immediately as Γ◦S|K(x; ·) is the support function of ∂cΓS|K(x). Now we prove the second assertion.
By the given assumptions on the point x, Proposition 3.1 implies that
ΓTSc(x)|K(h) ≤ Γ′S|K(x; h),
and hence equality holds throughout because Γ′S|K(x; h) ≤ Γ◦S|K(x; h) always holds true. Thus
the conclusion follows from Lemmas 3.3 and 3.4. □
Remark 3.1. Without additional assumptions, the inclusion in Theorem 3.1 can be strict. For
example, let X := R² with the ℓ2-norm, K be the unit ball in X, S be the union of the x-axis and the
y-axis, and x = (0, 0). Then ΓS|K(y) = min{|y1|, |y2|}. Here ∂cΓS|K(x) is the unit ball in the
ℓ1-norm, while the right-hand side of the inclusion in Theorem 3.1 is the unit ball in the ℓ2-norm.
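The function appearing in Remark 3.1 can be checked numerically (an illustrative sketch, not part of the paper; the finite sampling of S is an assumption): with K the Euclidean unit ball, ΓS|K is the Euclidean distance to the union S of the two coordinate axes, which equals min{|y1|, |y2|}.

```python
import math

# Remark 3.1's data: S = union of the coordinate axes in R^2, K = Euclidean
# unit ball, so Gamma_{S|K} = d_S; approximate d_S by sampling S.
step = 0.005
axis = [i * step for i in range(-2000, 2001)]
S = [(t, 0.0) for t in axis] + [(0.0, t) for t in axis]

def d_S(y):
    return min(math.hypot(y[0] - p[0], y[1] - p[1]) for p in S)

print(d_S((3.0, 2.0)))  # min(|3|, |2|) = 2
```

The nearest point of S to (3, 2) is (3, 0) on the x-axis, which lies in the sample, so the computed distance agrees with min{|y1|, |y2|} up to floating-point error.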
Lemma 3.5. For r > 0, let S(r) := {y ∈ X: ΓS|K(y) ≤ r}. Then for every y ∉ S(r),
ΓS|K(y) = r + ΓS(r)|K(y).
Proof. Let y ∉ S(r). Fix any s ∈ S. Define h(t) := ΓS|K(s + t(y − s)). Since ΓS|K is continuous, h(0) = ΓS|K(s) = 0 and h(1) = ΓS|K(y) > r, the Intermediate Value Theorem implies that
ΓS|K(z) = r for some z in the line segment (s, y). Since s − z and z − y are nonnegative multiples of s − y, the positive homogeneity of ρK (Proposition 2.1) yields
ρK(s − y) = ρK(s − z) + ρK(z − y) ≥ r + ΓS(r)|K(y),
because ρK(s − z) ≥ ΓS|K(z) = r and z ∈ S(r). Taking the infimum over s ∈ S gives ΓS|K(y) ≥ r + ΓS(r)|K(y). The reverse inequality follows from Lemma 3.1: ΓS|K(y) ≤ ΓS|K(w) + ρK(w − y) ≤ r + ρK(w − y) for every w ∈ S(r). □
Theorem 3.2. Let x ∉ S and r := ΓS|K(x). Suppose that ΓS|K is regular at x and that there exists
x̄ ∈ S such that ΓS|K(x) = ρK(x̄ − x). Then
∂cΓS|K(x) = NS(r)c(x) ∩ {ξ ∈ X∗: ρK◦(−ξ) = 1}.
Proof. By virtue of Lemma 3.5, ΓS|K(y) ≤ ΓS(r)|K(y) + ΓS|K(x) for every y ∈ X. This and the
regularity of ΓS|K imply that
Γ◦S|K(x; h) = Γ′S|K(x; h) ≤ Γ′S(r)|K(x; h) ≤ Γ◦S(r)|K(x; h).
Let Ar := {ξ ∈ NS(r)c(x): ρK◦(−ξ) ≤ 1}. In view of Proposition 3.1(ii) and Lemma 3.3,
Γ◦S(r)|K(x; h) ≤ inf_{z∈TS(r)c(x)} ρK(z − h) = sup_{ξ∈Ar} ⟨ξ, h⟩.
Hence we have
Γ◦S|K(x; h) ≤ sup_{ξ∈Ar} ⟨ξ, h⟩ for every h ∈ X,
which shows that ∂cΓS|K(x) ⊂ Ar as both sets are closed and convex.
By the assumption that ΓS|K is regular at x and ΓS|K(x) = ρK(x̄ − x),
Γ◦S|K(x; x̄ − x) = Γ′S|K(x; x̄ − x) = liminf_{t→0+} (ΓS|K(x + t(x̄ − x)) − ΓS|K(x))/t
≤ liminf_{t→0+} (ρK(x̄ − x − t(x̄ − x)) − ρK(x̄ − x))/t
= −ρK(x̄ − x) < 0,
which implies that 0 ∉ ∂cΓS|K(x). On the other hand, Lemma 3.1 yields that
The regularity assumption in Theorem 3.2 cannot be dropped. For example, let X be R² with
the ℓ2-norm, K be the unit ball in X, S := {x ∈ X: ‖x‖ ≥ 1}, and x = (0, 0). Then ∂cΓS|K(x) is the
closed unit ball in X, while the right-hand side of the subdifferential formula in Theorem 3.2 is
an empty set.
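The distance function in this counterexample can also be checked numerically (an illustrative sketch, not part of the paper; the finite sampling of S is an assumption): with S = {x ∈ R²: ‖x‖ ≥ 1} and K the Euclidean unit ball, ΓS|K(y) = dS(y) = 1 − ‖y‖ for ‖y‖ ≤ 1.

```python
import math

# S = complement of the open Euclidean unit ball, sampled on a few circles
# of radius >= 1 (points with radius > 1 are never closer than the unit circle).
angles = [2 * math.pi * i / 720 for i in range(720)]
pts = [(r * math.cos(a), r * math.sin(a)) for r in (1.0, 1.5, 2.0) for a in angles]

y = (0.3, 0.4)  # ||y|| = 0.5, so d_S(y) should be 1 - 0.5 = 0.5
d = min(math.hypot(y[0] - p[0], y[1] - p[1]) for p in pts)
print(d)
```

At x = 0 this gives dS = 1 − ‖·‖, whose Clarke subdifferential at the origin is the whole closed unit ball, matching the remark.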
Theorem 4.1. Let x ∈ S. Then ∂F ΓS|K(x) = NSF(x) ∩ {ξ ∈ X∗: ρK◦(−ξ) ≤ 1}.
Proof. Let x ∈ S and ξ ∈ ∂F ΓS|K(x). Let ε > 0. Then there exists δ > 0 such that
ΓS|K(y) − ΓS|K(x) − ⟨ξ, y − x⟩ ≥ −ε‖y − x‖, (4.1)
for all y ∈ B(x; δ). Since ΓS|K(·) = 0 on S and as x is assumed to be in S, this implies that
⟨ξ, y − x⟩ ≤ ε‖y − x‖, (4.2)
for all y ∈ S ∩ B(x; δ). Therefore ξ ∈ NSF(x).
Moreover, by (4.1) and Lemma 3.1 we have for any v ∈ X and t > 0 sufficiently small,
ρK(tv) ≥ ΓS|K(x − tv) − ΓS|K(x) ≥ ⟨ξ, −tv⟩ − ε‖tv‖, (4.3)
that is, ρK(v) ≥ ⟨−ξ, v⟩ − ε‖v‖. Letting ε → 0+, this implies that 1 ≥ sup_{v∈K} ⟨−ξ, v⟩, that is,
ρK◦(−ξ) ≤ 1. Thus the relation “⊂” is verified.
Conversely, let ξ ∈ NSF(x) be such that ρK◦(−ξ) ≤ 1. Let ε ∈ (0, 1). Then there exists δ > 0
such that (4.2) holds for all y ∈ S ∩ B(x; δ), that is, (4.1) holds for all y ∈ S ∩ B(x; δ) because
ΓS|K(·) = 0 on S and x ∈ S. The proof will be complete if one can show that
ΓS|K(y′) − ⟨ξ, y′ − x⟩ ≥ −(1 + k)ε‖y′ − x‖ ∀y′ ∈ B(x; δ/k) \ S, (4.4)
where
k := ‖K◦‖·‖K‖ + ‖K‖ + 1. (4.5)
To do this let y′ ∈ B(x; δ/k) \ S and take y ∈ S such that
ρK(y − y′) < ΓS|K(y′) + ε‖y′ − x‖. (4.6)
Since x ∈ S, it follows from (4.6) and Proposition 2.1 that ρK(y − y′) < ρK(x − y′) + ε‖y′ − x‖ ≤ (‖K◦‖ + 1)‖y′ − x‖, and hence
‖y − x‖ ≤ ‖K‖ρK(y − y′) + ‖y′ − x‖ ≤ k‖y′ − x‖ < δ.
Thus (4.2) holds for this y. Since ρK◦(−ξ) ≤ 1, we have ⟨−ξ, y − y′⟩ ≤ ρK(y − y′), and therefore
⟨ξ, y′ − x⟩ = ⟨ξ, y − x⟩ + ⟨−ξ, y − y′⟩ ≤ ε‖y − x‖ + ρK(y − y′) ≤ εk‖y′ − x‖ + ρK(y − y′).
Combining this with (4.6), we obtain
ΓS|K(y′) > ρK(y − y′) − ε‖y′ − x‖ ≥ ⟨ξ, y′ − x⟩ − εk‖y′ − x‖ − ε‖y′ − x‖ = ⟨ξ, y′ − x⟩ − (1 + k)ε‖y′ − x‖,
that is, (4.4) holds for this y′. □
Theorem 4.2. Let x ∉ S and r := ΓS|K(x). Then ∂F ΓS|K(x) = NS(r)F(x) ∩ {ξ ∈ X∗: ρK◦(−ξ) = 1}.
Proof. Let ξ ∈ ∂F ΓS|K(x) and ε > 0. Take δ > 0 such that (4.1) holds for all y ∈ B(x; δ) as in
the proof of the preceding theorem. Since ΓS|K(·) ≤ r on S(r) and r = ΓS|K(x), it follows that
(4.2) holds for all y ∈ S(r) ∩ B(x; δ), that is, ξ ∈ NS(r)F(x). Further, (4.3) continues to hold and
so ρK◦(−ξ) ≤ 1. To show that the equality in fact holds, take t such that 0 < t < min{1, ε} and
t³ + ΓS|K(x)t − δ/‖K‖ < 0. By definitions there exists st ∈ S such that
ρK(st − x) < ΓS|K(x) + t². (4.7)
Let
xt := x + t(st − x). (4.8)
Then by Proposition 2.1, ‖xt − x‖ = t‖st − x‖ ≤ tρK(st − x)‖K‖ < δ, and it follows from (4.1)
(applied to xt in place of y) and (4.7) that
t⟨ξ, st − x⟩ = ⟨ξ, xt − x⟩ ≤ ΓS|K(xt) − ΓS|K(x) + ε‖xt − x‖
ξ ∈ ∂F ΓS(r)|K(x). Since ΓS(r)|K(·) = 0 on S(r) and x ∈ S(r), this implies that for any ε > 0,
there exists δ1 > 0 such that
ΓS(r)|K(x′) − ⟨ξ, x′ − x⟩ ≥ −ε‖x′ − x‖, (4.9)
for all x′ ∈ B(x; δ1), by Lemma 3.5.
Since ξ ∈ NS(r)F(x), there exists δ2 > 0 such that
⟨ξ, x′ − x⟩ ≤ (1 + θ)⁻¹ε‖x′ − x‖ ∀x′ ∈ S(r) ∩ B(x; δ2). (4.11)
In view of (4.11) (and since θ ≥ 1), to verify (4.13) we may assume that ΓS|K(x′) < r (in
addition to x′ ∈ S(r) ∩ B(x; δ2/(1 + θ))). For each such x′, one has t > 0, where t := ΓS|K(x) −
ΓS|K(x′). Note that, by Lemma 3.1, t ≤ ‖K◦‖‖x − x′‖. On the other hand, since ρK◦(−ξ) = 1,
there exists a sequence (vn) in K such that ⟨−ξ, vn⟩ > 1 − 1/n for each n. Then Lemma 3.1
yields
Hence
⟨ξ, x′ − x⟩ ≤ t⟨ξ, vn⟩ + ε‖x − x′‖ < t(−1 + 1/n) + ε‖x − x′‖.
In the setting of Hilbert spaces, the subdifferential formula in Theorem 4.2 was established
in [3, Theorem 3.1], where the proof uses the fact that every bounded closed convex subset of X is
compact in the weak topology (see the proofs of [3]). This fact is not true in a general Banach
space.
Theorem 5.1. Let x ∈ S. Then ∂P ΓS|K(x) = NSP(x) ∩ {ξ ∈ X∗: ρK◦(−ξ) ≤ 1}.
Proof. Let x ∈ S and ξ ∈ ∂P ΓS|K(x). Then there exist σ, δ > 0 such that
ΓS|K(y) − ΓS|K(x) − ⟨ξ, y − x⟩ ≥ −σ‖y − x‖² (5.1)
for all y ∈ B(x; δ). Since ΓS|K(·) = 0 on S and since x is assumed to be in S, this implies that
⟨ξ, y − x⟩ ≤ σ‖y − x‖² (5.2)
for all y ∈ S ∩ B(x; δ). Therefore ξ ∈ NSP(x). Moreover, by (5.1) and Lemma 3.1 we have, for
any v ∈ X and t > 0 sufficiently small,
ρK(tv) ≥ ΓS|K(x − tv) − ΓS|K(x) ≥ ⟨ξ, −tv⟩ − σ‖tv‖², (5.3)
that is, ρK(v) ≥ ⟨−ξ, v⟩ − σt‖v‖². Letting t → 0+ it follows that ρK(v) ≥ ⟨−ξ, v⟩,
and hence ρK◦(−ξ) ≤ 1.
Conversely, let ξ ∈ NSP(x) be such that ρK◦(−ξ) ≤ 1. Then there exist σ, δ > 0 such that
(5.2) holds for all y ∈ S ∩ B(x; δ), that is, (5.1) holds for all y ∈ S ∩ B(x; δ); we may assume that δ ≤ 1. The proof will be
complete if one can show that
ΓS|K(y′) − ⟨ξ, y′ − x⟩ ≥ −(1 + σk²)‖y′ − x‖² ∀y′ ∈ B(x; δ/k) \ S, (5.4)
where k is defined as in (4.5). To do this let y′ ∈ B(x; δ/k) \ S and take y ∈ S such that
ρK(y − y′) < ΓS|K(y′) + ‖y′ − x‖². (5.5)
Since x ∈ S it follows from Proposition 2.1 that
ρK(y − y′) < ρK(x − y′) + ‖y′ − x‖² ≤ ‖K◦‖‖x − y′‖ + ‖y′ − x‖² ≤ (‖K◦‖ + 1)‖x − y′‖,
and so
‖y − x‖ ≤ ‖K‖ρK(y − y′) + ‖y′ − x‖ ≤ (‖K‖(‖K◦‖ + 1) + 1)‖y′ − x‖ = k‖y′ − x‖ < δ.
By our choice of σ and δ, (5.2) holds for this y, that is,
σ‖y − x‖² ≥ ⟨ξ, y − y′⟩ + ⟨ξ, y′ − x⟩ ≥ −ρK(y − y′) + ⟨ξ, y′ − x⟩.
Consequently it follows from (5.5) that
ΓS|K(y′) + ‖y′ − x‖² > ρK(y − y′) ≥ ⟨ξ, y′ − x⟩ − σ‖y − x‖² ≥ ⟨ξ, y′ − x⟩ − σk²‖y′ − x‖²,
that is, (5.4) holds for this y′. □
Theorem 5.2. Let x ∉ S and r := ΓS|K(x). Then ∂P ΓS|K(x) = NS(r)P(x) ∩ {ξ ∈ X∗: ρK◦(−ξ) = 1}.
Proof. Let ξ ∈ ∂P ΓS|K(x). Take σ, δ > 0 such that (5.1) holds for all y ∈ B(x; δ) as in the
preceding theorem. Since ΓS|K(·) ≤ r on S(r) and r = ΓS|K(x), it follows that (5.2) holds for all
y ∈ S(r) ∩ B(x; δ), that is, ξ ∈ NS(r)P(x). Moreover, (5.3) continues to hold and so ρK◦(−ξ) ≤ 1.
To show that the equality in fact holds, take t such that t ∈ (0, 1) and t³ + ΓS|K(x)t − δ/‖K‖ < 0.
By definitions there exists st ∈ S such that (4.7) holds. Let xt be defined as in (4.8). Therefore,
‖xt − x‖ < δ. It follows from (5.1) (applied to xt in place of y) and (4.7) that
t⟨ξ, st − x⟩ = ⟨ξ, xt − x⟩ ≤ ΓS|K(xt) − ΓS|K(x) + σ‖xt − x‖²
ξ ∈ ∂P ΓS(r)|K(x). Since ΓS(r)|K(·) = 0 on S(r) and x ∈ S(r), this implies the existence of
σ1, δ1 > 0 such that
ΓS(r)|K(x′) − ⟨ξ, x′ − x⟩ ≥ −σ1‖x′ − x‖² (5.6)
for all x′ ∈ B(x; δ1). Let θ := ‖K‖‖K◦‖. Then θ ≥ 1 by Proposition 2.1(iv), and (4.10) holds.
Since ξ ∈ NS(r)P(x), there exist σ2, δ2 > 0 such that
⟨ξ, x′ − x⟩ ≤ σ2‖x′ − x‖² ∀x′ ∈ S(r) ∩ B(x; δ2). (5.7)
Combining (5.6) and (4.10), we have
ΓS|K(x′) − ΓS|K(x) ≥ ΓS(r)|K(x′) ≥ ⟨ξ, x′ − x⟩ − σ1‖x′ − x‖² ∀x′ ∈ B(x; δ1) \ S(r). (5.8)
Thus to prove ξ ∈ ∂P ΓS|K(x) it suffices to show that
ΓS|K(x′) − ΓS|K(x) ≥ ⟨ξ, x′ − x⟩ − σ2(1 + θ)²‖x′ − x‖²
Before ending this section, we present two examples of using the formulae to calculate the
subdifferentials. The first one was suggested to us by one of the referees.
Example 5.1. Let I = [−1, 1] and X = L¹(I), so that X∗ = L∞(I). Take K to be the closed unit
ball B in X; thus K◦ is the closed unit ball in L∞(I). Let k be a fixed nonnegative integer and
let S be the finite dimensional subspace of X consisting of all polynomials of degree at most k. Then the
minimum time function ΓS|K is just the distance function dS to S and hence, for each r ≥ 0,
S(r) = {f ∈ X: dS(f) ≤ r} = rB + S (5.10)
(if dS(h) = r′ ≤ r then ‖h − h̄‖₁ = r′ for some h̄ ∈ S and so h ∈ S + r′B ⊂ S + rB). Therefore, by Theorem 5.1, for each f ∈ S,
∂ΓS|K(f) = NS(f) ∩ K◦ = (S − f)◦ ∩ K◦ = S◦ ∩ K◦
= {g ∈ L∞(I): ‖g‖∞ ≤ 1, ∫₋₁¹ g(t) dt = ∫₋₁¹ tg(t) dt = ⋯ = ∫₋₁¹ tᵏg(t) dt = 0}.
Next we consider the case where f ∈ X \ S. Then r := dS(f) > 0. Let f̄ be a best approximation
from S to f: f̄ ∈ S and ‖f − f̄‖₁ = r. Let f0 := f − f̄. Since S is a vector space, it is easy to
verify (see (4.48) and (4.40) in [18]) that NS+rB(f̄ + f0) = NS(f̄) ∩ NrB(f0) = S◦ ∩ R₊J(f0/r),
and it follows from (5.10) that
NS(r)(f) = S◦ ∩ R₊J(f0/r),
where J denotes the duality map, that is,
J(f0/r) = {g ∈ L∞(I): ‖g‖∞ = 1, ∫₋₁¹ g(t)(f0(t)/r) dt = 1}
(thus g ∈ J(f0/r) if and only if |g(t)| ≤ 1 for all t ∈ I, g(t) = 1 if f0(t) > 0, and g(t) = −1 if
f0(t) < 0). Consequently, by Theorem 5.2, ∂ΓS|K(f) = NS(r)(f) ∩ K◦ = S◦ ∩ (R₊J(f0/r)) ∩
K◦, and g ∈ ∂ΓS|K(f) if and only if the following conditions are satisfied: ‖g‖∞ ≤ 1, ∫₋₁¹ g(t)f0(t) dt =
r‖g‖∞, g(t) = ‖g‖∞ if f0(t) > 0, g(t) = −‖g‖∞ if f0(t) < 0, and ∫₋₁¹ g(t) dt = ∫₋₁¹ tg(t) dt =
⋯ = ∫₋₁¹ tᵏg(t) dt = 0.
Example 5.2. Let X := R². Let K1 and K∞ denote the closed unit balls in R² with respect to the ℓ1-norm and the ℓ∞-norm, respectively. Let K := K1 ∪ (K∞ ∩ R²₊), where R²₊ := [0, +∞) ×
[0, +∞). Thus
ρK(x) = max{|x1|, |x2|} if x ∈ R²₊, and ρK(x) = |x1| + |x2| otherwise.
It is easy to verify that K◦ = K∞ ∩ (K1 − R²₊), that is,
K◦ = co{(1, 0), (1, −1), (−1, −1), (−1, 1), (0, 1)},
where co denotes the convex hull. Let S be the triangle in R² with vertices (0, 0), (1, 0), and
(0, 1). Since S is a convex set, ΓS|K is a convex function. Thus the Clarke, Fréchet, and proximal
subdifferentials of ΓS|K coincide and equal the subdifferential in the sense of convex analysis,
and the Clarke, Fréchet, and proximal normal cones of S coincide and equal the normal cone
in the sense of convex analysis. In view of Theorem 5.1, we have ∂ΓS|K(x) = NS(x) ∩ (−K◦)
whenever x ∈ S, and this ∂ΓS|K(x) can be determined easily. To compute the subdifferential of
ΓS|K at points outside S, let us consider only x0 := (−2, −3) as an illustration. Then it is straightforward to verify that r := ΓS|K(x0) = 3, S(r) = co{(0, 4), (4, 0), (1, −3), (−3, −3), (−3, 1)}, and
consequently NS(r)(x0) = {0} × R₋. In view of Theorem 5.2, ∂ΓS|K(x0) = {(0, −1)}.
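The value r = ΓS|K(x0) = 3 in Example 5.2 can be confirmed numerically (an illustrative sketch, not part of the paper; the grid over the triangle S is an assumption):

```python
def rho_K(x):
    # Gauge from Example 5.2: max-norm on the nonnegative quadrant,
    # l1-norm elsewhere.
    if x[0] >= 0 and x[1] >= 0:
        return max(x[0], x[1])
    return abs(x[0]) + abs(x[1])

# Sample the triangle S with vertices (0,0), (1,0), (0,1) on a grid.
n = 200
S = [(i / n, j / n) for i in range(n + 1) for j in range(n + 1) if i + j <= n]
x0 = (-2.0, -3.0)
r = min(rho_K((s[0] - x0[0], s[1] - x0[1])) for s in S)
print(r)  # Gamma_{S|K}(x0)
```

For every s in the triangle, s − x0 lies in the nonnegative quadrant with second coordinate at least 3, so the minimum max(s1 + 2, s2 + 3) = 3 is attained at the vertex s = (0, 0), which lies in the grid; hence the computed value is exact.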
Theorem 6.1.
∂ρK(x) = {ξ ∈ X∗: ρK◦(ξ) = 1, ⟨ξ, x⟩ = ρK(x)} if x ≠ 0, and ∂ρK(0) = K◦.
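As a quick illustration (this instance is not worked out in the paper): when K is the closed unit ball of X, one has ρK = ‖·‖ and ρK◦ = ‖·‖∗, and Theorem 6.1 recovers the classical subdifferential formula for the norm:

```latex
\partial \|\cdot\|(x) =
\begin{cases}
\{\xi \in X^{*}:\ \|\xi\|_{*} = 1,\ \langle \xi, x\rangle = \|x\|\}, & x \neq 0,\\
\{\xi \in X^{*}:\ \|\xi\|_{*} \le 1\}, & x = 0.
\end{cases}
```

In a Hilbert space with x ≠ 0 this set is the singleton {x/‖x‖}, as expected.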
Proof. Put S := {0}. Then ΓS|K(x) = ρK(−x) and so ∂ΓS|K(−x) = −∂ρK(x) (by convexity,
∂ and ∂F have the same meaning here). Let r := ΓS|K(−x) and Bx := {y ∈ X: ρK(y) ≤ ρK(x)}.
Then it is easy to verify that S(r) = −Bx and hence that NS(r)(−x) = −NBx(x).
Now if x ≠ 0 (equivalently, if r > 0) then −x ∉ S and, by Theorem 4.2,
−∂ΓS|K(−x) = −NS(r)(−x) ∩ {ξ ∈ X∗: ρK◦(ξ) = 1},
which reads
∂ρK(x) = NBx(x) ∩ {ξ ∈ X∗: ρK◦(ξ) = 1}. (6.1)
Note that Bx = {y ∈ X: ρK(y/ρK(x)) ≤ 1} = ρK(x)K and hence that
Theorem 6.2. Let S be a closed convex set in a Banach space X and x ∉ S. Then x̄ ∈ S satisfies
ρK(x̄ − x) = ΓS|K(x) if and only if there exists ξ ∈ NS(x̄) with ρK◦(−ξ) = 1 and ⟨ξ, x − x̄⟩ =
ρK(x̄ − x).
Proof. Since ρK is a convex function and S is a convex set, we have ρK(x̄ − x) = ΓS|K(x) if
and only if x̄ minimizes ρK(s − x) over s ∈ S, which in turn is equivalent to 0 ∈ ∂ρK(· − x)(x̄) +
NS(x̄) by the definitions of the subdifferential and normal cone in the sense of convex analysis. The
conclusion follows by invoking Theorem 6.1. □
If K is the unit closed ball in X, then Theorem 6.2 reduces to the Deutsch–Rubenstein theo-
rem; see also [19, Proposition 2.2].
Acknowledgments
References
[1] O. Alvarez, S. Koike, I. Nakayama, Uniqueness of lower semicontinuous viscosity solutions for the minimum time
problem, SIAM J. Control Optim. 38 (2000) 470–481.
[2] G. Colombo, P.R. Wolenski, The subgradient formula for the minimal time function in the case of constant dynamics
in Hilbert space, J. Global Optim. 28 (2004) 269–282.
[3] G. Colombo, P.R. Wolenski, Variational analysis for a class of minimal time functions in Hilbert spaces, J. Convex
Anal. 11 (2004) 335–361.
[4] P. Soravia, Discontinuous viscosity solutions to Dirichlet problems for Hamilton–Jacobi equations with convex
Hamiltonians, Comm. Partial Differential Equations 18 (1993) 1493–1514.
[5] P. Soravia, Generalized motion of a front propagating along its normal direction: a differential games approach,
Nonlinear Anal. 22 (1994) 1247–1262.
[6] P.R. Wolenski, Y. Zhuang, Proximal analysis and the minimal time function, SIAM J. Control Optim. 36 (1998)
1048–1072.
[7] C. Li, R. Ni, Derivatives of generalized distance functions and existence of generalized nearest points, J. Approx.
Theory 115 (2002) 44–55.
[8] F.S. De Blasi, J. Myjak, On a generalized best approximation problem, J. Approx. Theory 94 (1998) 54–72.
[9] C. Li, On well posed generalized best approximation problems, J. Approx. Theory 107 (2000) 96–108.
[10] M. Bounkhel, L. Thibault, On various notions of regularity of sets in nonsmooth analysis, Nonlinear Anal. 48 (2002)
223–246.
[11] J.V. Burke, M.C. Ferris, M. Qian, On the Clarke subdifferential of the distance function of a closed set, J. Math.
Anal. Appl. 166 (1992) 199–213.
[12] F.H. Clarke, R.J. Stern, P.R. Wolenski, Proximal smoothness and the lower-C 2 property, J. Convex Anal. 2 (1995)
117–144.
[13] F.H. Clarke, Optimization and Nonsmooth Analysis, Wiley, New York, 1983.
[14] F.H. Clarke, Y.S. Ledyaev, R.J. Stern, P.R. Wolenski, Nonsmooth Analysis and Control Theory, Springer-Verlag,
New York, 1998.
[15] W. Rudin, Functional Analysis, McGraw–Hill, New York, 1973.
[16] M. Fabian, P. Habala, P. Hájek, V. Montesinos Santalucía, J. Pelant, V. Zizler, Functional Analysis and Infinite-
Dimensional Geometry, Springer-Verlag, New York, 2001.
[17] C. Zălinescu, Convex Analysis in General Vector Spaces, World Scientific, River Edge, NJ, 2002.
[18] J.-P. Aubin, Optima and Equilibria, Springer-Verlag, Berlin, 1993.
[19] C. Li, K.F. Ng, Constraint qualification, the strong CHIP, and best approximation with convex constraints in Banach
spaces, SIAM J. Optim. 14 (2003) 584–607.