
J Optim Theory Appl

DOI 10.1007/s10957-016-0921-2

Subgradients of the Value Function in a Parametric Convex Optimal Control Problem

Le Quang Thuy1 · Nguyen Thi Toan1

Received: 29 August 2015 / Accepted: 4 March 2016


© Springer Science+Business Media New York 2016

Abstract Motivated by our recent works on the optimal value function in parametric optimal control problems under linear state equations, in this paper we study the first-order behavior of the value function of a parametric convex optimal control problem with a convex cost function and linear state equations. By establishing an abstract result on the subdifferential of the value function of a parametric convex mathematical programming problem, we derive a formula for computing the subdifferential and the singular subdifferential of the value function of a parametric convex optimal control problem. By virtue of the convexity, several assumptions used in the papers mentioned above, like the existence of a local upper Lipschitzian selection of the solution map and the V-inner semicontinuity of the solution map, are no longer needed.

Keywords Parametric convex optimal control problem · Marginal function · Value function · Normal cone · Subdifferential · Singular subdifferential

Mathematics Subject Classification 49J15 · 49J53 · 49K15 · 90C90

Communicated by Boris Vexler.

✉ Nguyen Thi Toan
toan.nguyenthi@hust.edu.vn

Le Quang Thuy
thuy.lequang@hust.edu.vn

1 Hanoi University of Science and Technology, 1 Dai Co Viet, Hanoi, Vietnam


1 Introduction

The study of the first-order behavior of value functions is important in variational analysis and optimization. An example is the study of distance functions and their applications to optimal control problems (see [1–3]). There are many papers dealing with differentiability properties and the Fréchet subdifferential of value functions, see, for example, [4–10]. By considering a set of assumptions, which involves a kind of coherence property, Penot [10] showed that the value functions are Fréchet differentiable. The results of Penot gave sufficient conditions under which the value functions are Fréchet differentiable rather than formulas for computing their derivatives. In [9], Mordukhovich, Nam and Yen derived formulas for computing and estimating the Fréchet subdifferential and the Mordukhovich subdifferential of value functions of parametric mathematical programming problems in Banach spaces. By using the Moreau–Rockafellar theorem and appropriate regularity conditions, An and Yen [4] obtained formulas for computing the subdifferential and the singular subdifferential of the optimal value function of a parametric convex mathematical programming problem.
Besides the study of the first-order behavior of value functions in parametric mathematical programming, the first-order behavior of value functions in optimal control problems has attracted the attention of many researchers, see, for example, [11–24]. Recently, Toan and Kien [21] derived a formula for an upper evaluation of the Fréchet subdifferential of the value function for the case where the objective function is not assumed to be convex. Under assumptions weaker than those in [21], Chieu, Kien and Toan [12] obtained a formula for an upper evaluation of the Fréchet subdifferential of the value function, which complements the results in [21]. By establishing a result on the Mordukhovich subdifferential of the value function of parametric mathematical programming problems, we proved in [22] that the formulas computing the Fréchet subdifferential in [12] also compute the Mordukhovich subdifferential of the value function if the solution map is V-inner semicontinuous.
Motivated by our works [12] and [22] on the optimal value function in parametric optimal control problems under linear state equations, in this paper we continue to study the first-order behavior of the value function of a parametric convex optimal control problem with convex cost functions and linear state equations by giving a sharper formula for computing the subdifferential of the value function. In order to prove the main result, we first reduce the problem to a convex mathematical programming problem and then, using the Moreau–Rockafellar theorem (see [25, p. 48]) and appropriate regularity conditions, we obtain formulas for computing the subdifferential and the singular subdifferential of the optimal value function of a parametric convex mathematical programming problem. By virtue of the convexity, several assumptions used in the above papers, like the nonemptiness of the Fréchet upper subdifferential of the objective function and the sequential normal epi-compactness (SNEC) of the objective function, as well as the existence of a local upper Lipschitzian selection of the solution map and the V-inner semicontinuity of the solution map, are no longer needed and, surprisingly, all the upper estimates become equalities.


2 Problem Formulation and Statement of the Main Result

A wide variety of problems in optimal control can be posed in the following form.
Determine a control vector \(u \in L^p([0,1],\mathbb{R}^m)\) and a trajectory \(x \in W^{1,p}([0,1],\mathbb{R}^n)\), \(1 < p < \infty\), which minimize the cost
\[
g(x(1)) + \int_0^1 L\bigl(t, x(t), u(t), \theta(t)\bigr)\,dt \tag{1}
\]

with the state equation
\[
\dot{x}(t) = A(t)x(t) + B(t)u(t) + T(t)\theta(t) \quad \text{a.e. } t \in [0,1], \tag{2}
\]
and the initial value
\[
x(0) = \alpha. \tag{3}
\]
Here \(W^{1,p}([0,1],\mathbb{R}^n)\) is the Sobolev space, which consists of absolutely continuous functions \(x : [0,1] \to \mathbb{R}^n\) such that \(\dot{x} \in L^p([0,1],\mathbb{R}^n)\). Its norm is given by
\[
\|x\|_{1,p} = |x(0)| + \|\dot{x}\|_p.
\]

The notations in (1)–(3) have the following meanings: x and u are the state variable and the control variable, respectively; \((\alpha, \theta) \in \mathbb{R}^n \times L^p([0,1],\mathbb{R}^k)\) are parameters; \(g : \mathbb{R}^n \to \bar{\mathbb{R}}\) and \(L : [0,1] \times \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^k \to \bar{\mathbb{R}}\) are proper functions; \(A(t) = (a_{ij}(t))_{n \times n}\), \(B(t) = (b_{ij}(t))_{n \times m}\) and \(T(t) = (c_{ij}(t))_{n \times k}\) are matrix-valued functions. Put
\[
X = W^{1,p}([0,1],\mathbb{R}^n), \quad U = L^p([0,1],\mathbb{R}^m), \quad Z = X \times U,
\]
\[
\Theta = L^p([0,1],\mathbb{R}^k), \quad W = \mathbb{R}^n \times \Theta.
\]

It is well known that X, U, Z, Θ, W are Asplund spaces. For each w = (α, θ) ∈ W, the optimal value and the solution set of problem (1)–(3) corresponding to the parameter w are denoted by V(w) and S(w), respectively. Thus,
\[
V : W \to \bar{\mathbb{R}}
\]
is an extended real-valued function, which is called the value function or the marginal function of problem (1)–(3). It is assumed that V is finite at w̄ and that z̄ = (x̄, ū) is a solution of the problem corresponding to the parameter w̄, that is, z̄ = (x̄, ū) ∈ S(w̄). For each w = (α, θ) ∈ W, we put



\[
J(x, u, w) = g(x(1)) + \int_0^1 L\bigl(t, x(t), u(t), \theta(t)\bigr)\,dt
\]
and
\[
G(w) = \{ z = (x, u) \in X \times U : \text{(2) and (3) are satisfied} \}. \tag{4}
\]

Then the problem (1)–(3) can be formulated in the following form:
\[
V(w) = \inf_{z \in G(w)} J(z, w). \tag{5}
\]
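Although the present paper is purely theoretical, it may help intuition to see how V(w) can be evaluated approximately in practice. The sketch below is our own illustration, not part of the paper: it discretizes a toy one-dimensional instance of (1)–(3) with n = m = k = 1, A(t) = B(t) = T(t) ≡ 1, g(x) = x² and L = u² + θ², using piecewise-constant controls and a forward Euler scheme; all names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch: approximate V(w) = inf J(z, w) for a toy instance of
# (1)-(3) with n = m = k = 1, A = B = T = 1, g(x) = x^2, L = u^2 + theta^2.
N = 50                    # number of time steps on [0, 1]
dt = 1.0 / N

def value_function(alpha, theta):
    """theta: array of N samples of the parameter function."""
    def cost(u):
        x = alpha
        for i in range(N):                      # state equation (2), Euler step
            x = x + dt * (x + u[i] + theta[i])
        return x ** 2 + dt * np.sum(u ** 2 + theta ** 2)   # cost (1)
    res = minimize(cost, np.zeros(N), method="L-BFGS-B")
    return res.fun

print(value_function(1.0, np.zeros(N)))         # approximate optimal value
```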

We say that the solution map S(·) is V-inner semicontinuous at (w̄, z̄) if for every sequence \(w_k \xrightarrow{V} \bar{w}\) (i.e., \(w_k \to \bar{w}\) and \(V(w_k) \to V(\bar{w})\)), there is a sequence \(\{z_k\}\) with \(z_k \in S(w_k)\) for all k, which contains a subsequence converging to z̄.
To deal with our problem, we impose the following assumptions:
(H1) The functions \(L : [0,1] \times \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^k \to \bar{\mathbb{R}}\) and \(g : \mathbb{R}^n \to \bar{\mathbb{R}}\) have the properties that L(·, x, u, v) is measurable for all \((x, u, v) \in \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^k\), L(t, ·, ·, ·) and g(·) are convex and continuously differentiable for almost every t ∈ [0, 1], and there exist constants c₁ > 0, c₂ > 0, r ≥ 0, p ≥ p₁ ≥ 0, p − 1 ≥ p₂ ≥ 0, and a nonnegative function \(\omega_1 \in L^p([0,1],\mathbb{R})\), such that
\[
|L(t, x, u, v)| \le c_1\bigl(\omega_1(t) + |x|^{p_1} + |u|^{p_1} + |v|^{p_1}\bigr),
\]
\[
\max\bigl\{|L_x(t, x, u, v)|,\ |L_u(t, x, u, v)|,\ |L_v(t, x, u, v)|\bigr\} \le c_2\bigl(|x|^{p_2} + |u|^{p_2} + |v|^{p_2} + r\bigr)
\]
for all \((t, x, u, v) \in [0,1] \times \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^k\).


(H2) The matrix-valued functions \(A : [0,1] \to M_{n,n}(\mathbb{R})\), \(B : [0,1] \to M_{n,m}(\mathbb{R})\) and \(T : [0,1] \to M_{n,k}(\mathbb{R})\) are measurable and essentially bounded.
(H3) There exists a constant c₃ > 0 such that
\[
|T^{\mathsf T}(t)v| \ge c_3 |v| \quad \forall v \in \mathbb{R}^n, \ \text{a.e. } t \in [0,1].
\]

The following theorem gives an upper estimate for the Fréchet subdifferential of the value function V at a given parameter w̄ for the case where the functions g and L are not necessarily convex.

Theorem 2.1 ([12, Theorem 1.1]) Suppose that assumptions (H1)–(H3) are fulfilled. Then for a vector \((\alpha^*, \theta^*) \in \mathbb{R}^n \times L^q([0,1],\mathbb{R}^k)\) to be a Fréchet subgradient of V at \((\bar\alpha, \bar\theta)\), it is necessary that there exists a function \(y \in W^{1,q}([0,1],\mathbb{R}^n)\) such that the following conditions are satisfied:
\[
\alpha^* = g'(\bar x(1)) + \int_0^1 L_x\bigl(t, \bar x(t), \bar u(t), \bar\theta(t)\bigr)\,dt - \int_0^1 A^{\mathsf T}(t)y(t)\,dt,
\]
\[
y(1) = -g'(\bar x(1))
\]
and
\[
\bigl(\dot y(t) + A^{\mathsf T}(t)y(t),\ B^{\mathsf T}(t)y(t),\ \theta^*(t) + T^{\mathsf T}(t)y(t)\bigr) = \nabla L\bigl(t, \bar x(t), \bar u(t), \bar\theta(t)\bigr) \quad \text{a.e. } t \in [0,1].
\]
The above conditions are also sufficient for \((\alpha^*, \theta^*) \in \hat\partial V(\bar\alpha, \bar\theta)\) if the solution map S has a locally upper Lipschitzian selection at \((\bar w, \bar x, \bar u)\). Here \(A^{\mathsf T}\) stands for the transpose of A, \(\nabla L\bigl(t, \bar x(t), \bar u(t), \bar\theta(t)\bigr)\) stands for the gradient of L(t, ·, ·, ·) at \(\bigl(\bar x(t), \bar u(t), \bar\theta(t)\bigr)\), and q is the conjugate number of p, that is, 1 < q < +∞ and 1/p + 1/q = 1.
By adding the assumption that the solution map S is V-inner semicontinuous at (w̄, z̄), the following theorem gives an upper estimate for the Mordukhovich subdifferential of the value function V at a given parameter w̄ for the case where g and L are not necessarily convex.
Theorem 2.2 ([22, Theorem 1.1]) Suppose that the solution map S is V-inner semicontinuous at (w̄, z̄) ∈ gph S, and assumptions (H1)–(H3) are fulfilled. Then for a vector \((\alpha^*, \theta^*) \in \mathbb{R}^n \times L^q([0,1],\mathbb{R}^k)\) to be a Mordukhovich subgradient of V at \((\bar\alpha, \bar\theta)\), it is necessary that there exists a function \(y \in W^{1,q}([0,1],\mathbb{R}^n)\) such that the following conditions are satisfied:
\[
\alpha^* = g'(\bar x(1)) + \int_0^1 L_x\bigl(t, \bar x(t), \bar u(t), \bar\theta(t)\bigr)\,dt - \int_0^1 A^{\mathsf T}(t)y(t)\,dt,
\]
\[
y(1) = -g'(\bar x(1))
\]
and
\[
\bigl(\dot y(t) + A^{\mathsf T}(t)y(t),\ B^{\mathsf T}(t)y(t),\ \theta^*(t) + T^{\mathsf T}(t)y(t)\bigr) = \nabla L\bigl(t, \bar x(t), \bar u(t), \bar\theta(t)\bigr) \quad \text{a.e. } t \in [0,1].
\]
The above conditions are also sufficient for \((\alpha^*, \theta^*) \in \partial V(\bar\alpha, \bar\theta)\) if the solution map S has a locally upper Lipschitzian selection at \((\bar w, \bar x, \bar u)\).
We are now ready to state our main result. We note that, by virtue of the convexity, several assumptions used in Theorems 2.1 and 2.2, like the existence of a local upper Lipschitzian selection and the V-inner semicontinuity of the solution map, are no longer needed and, surprisingly, the necessary conditions become necessary and sufficient.


Theorem 2.3 Suppose that assumptions (H1)–(H3) are fulfilled. Then for a vector \((\alpha^*, \theta^*) \in \mathbb{R}^n \times L^q([0,1],\mathbb{R}^k)\) to be a subgradient of V at \((\bar\alpha, \bar\theta)\), it is necessary and sufficient that there exists a function \(y \in W^{1,q}([0,1],\mathbb{R}^n)\) such that the following conditions are satisfied:
\[
\alpha^* = g'(\bar x(1)) + \int_0^1 L_x\bigl(t, \bar x(t), \bar u(t), \bar\theta(t)\bigr)\,dt - \int_0^1 A^{\mathsf T}(t)y(t)\,dt, \tag{6}
\]
\[
y(1) = -g'(\bar x(1)) \tag{7}
\]
and
\[
\bigl(\dot y(t) + A^{\mathsf T}(t)y(t),\ B^{\mathsf T}(t)y(t),\ \theta^*(t) + T^{\mathsf T}(t)y(t)\bigr) = \nabla L\bigl(t, \bar x(t), \bar u(t), \bar\theta(t)\bigr) \quad \text{a.e. } t \in [0,1]. \tag{8}
\]

In order to prove Theorem 2.3, we first reduce problem (1)–(3) to a mathematical programming problem and then establish some formulas for computing the subdifferential of its value function. This procedure is presented in Sect. 4. A complete proof of Theorem 2.3 is provided in Sect. 5.

3 Basic Definitions and Preliminaries

Let us recall some notions of generalized differentiation, which are related to our problem. The notions and results of generalized differentiation can be found in [26–28]. Let Z be an Asplund space, \(\varphi : Z \to \bar{\mathbb{R}}\) be an extended real-valued function and \(\bar z \in Z\) be such that \(\varphi(\bar z)\) is finite. The set
\[
\hat\partial\varphi(\bar z) := \Bigl\{ z^* \in Z^* : \liminf_{z \to \bar z} \frac{\varphi(z) - \varphi(\bar z) - \langle z^*, z - \bar z\rangle}{\|z - \bar z\|} \ge 0 \Bigr\}
\]
is called the Fréchet subdifferential of φ at z̄. A given vector \(z^* \in \hat\partial\varphi(\bar z)\) is called a Fréchet subgradient of φ at z̄. The set \(\hat\partial^+\varphi(\bar z) := -\hat\partial(-\varphi)(\bar z)\) is called the upper subdifferential of φ at z̄. Let φ be a lower semicontinuous function around z̄. The set

\[
\partial\varphi(\bar z) := \operatorname*{Lim\,sup}_{z \xrightarrow{\varphi} \bar z} \hat\partial\varphi(z) \tag{9}
\]
is called the Mordukhovich subdifferential of φ at z̄, where the notation \(z \xrightarrow{\varphi} \bar z\) means z → z̄ and φ(z) → φ(z̄). Hence, \(z^* \in \partial\varphi(\bar z)\) if and only if there exist sequences \(z_k \xrightarrow{\varphi} \bar z\) and \(z_k^* \in \hat\partial\varphi(z_k)\) such that \(z_k^* \xrightarrow{w^*} z^*\). Moreover, we have \(\partial\varphi(\bar z) \ne \emptyset\) for every locally Lipschitzian function.
It is known that the Mordukhovich subdifferential reduces to the classical Fréchet derivative for strictly differentiable functions and to the subdifferential of convex analysis for convex functions, where the subdifferential of a convex function φ at z̄ is given by
\[
\partial\varphi(\bar z) := \bigl\{ z^* \in Z^* : \langle z^*, z - \bar z\rangle \le \varphi(z) - \varphi(\bar z), \ \forall z \in Z \bigr\}.
\]
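For instance, for the convex function φ(z) = |z| on Z = ℝ, this definition gives directly
\[
\partial\varphi(0) = \bigl\{ z^* \in \mathbb{R} : z^* z \le |z|, \ \forall z \in \mathbb{R} \bigr\} = [-1, 1].
\]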

The set
\[
\partial^\infty\varphi(\bar z) := \operatorname*{Lim\,sup}_{\substack{z \xrightarrow{\varphi} \bar z \\ \lambda \downarrow 0}} \lambda\,\hat\partial\varphi(z)
\]
is called the singular subdifferential of φ at z̄. Hence, \(z^* \in \partial^\infty\varphi(\bar z)\) if and only if there exist sequences \(z_k \xrightarrow{\varphi} \bar z\), \(\lambda_k \to 0^+\) and \(z_k^* \in \lambda_k \hat\partial\varphi(z_k)\) such that \(z_k^* \xrightarrow{w^*} z^*\). Let Ω be a nonempty closed set in Z and z₀ ∈ Ω. The set
\[
\hat N(z_0; \Omega) := \Bigl\{ z^* \in Z^* : \limsup_{z \xrightarrow{\Omega} z_0} \frac{\langle z^*, z - z_0\rangle}{\|z - z_0\|} \le 0 \Bigr\}
\]
is called the Fréchet normal cone to Ω at z₀, and the set
\[
N(z_0; \Omega) := \operatorname*{Lim\,sup}_{z \xrightarrow{\Omega} z_0} \hat N(z; \Omega)
\]
is called the Mordukhovich normal cone to Ω at z₀. It is also known that if Ω is a convex set, then the Mordukhovich normal cone coincides with the Fréchet normal cone and with the normal cone of convex analysis,
\[
N(z_0; \Omega) := \bigl\{ z^* \in Z^* : \langle z^*, z - z_0\rangle \le 0, \ \forall z \in \Omega \bigr\}.
\]

Let D be a subset of Z and z̄ ∈ D be such that D is locally closed around z̄. The set D is called sequentially normally compact (SNC) at z̄ if for any sequences \(z_k \xrightarrow{D} \bar z\) and \(z_k^* \in \hat N(z_k; D)\) one has
\[
z_k^* \xrightarrow{w^*} 0 \ \Longrightarrow\ \|z_k^*\| \to 0 \quad \text{as } k \to \infty.
\]
An extended real-valued function φ on Z is called sequentially normally epi-compact (SNEC) at z̄ if its epigraph is SNC at (z̄, φ(z̄)).
Let Z and E be Asplund spaces. We say that a set-valued map F : Z ⇒ E admits a locally upper Lipschitzian selection at
\[
(\bar z, \bar v) \in \operatorname{gph} F := \bigl\{ (z, v) \in Z \times E : v \in F(z) \bigr\}
\]
if there is a single-valued mapping φ : Z → E, which is locally upper Lipschitzian at z̄, that is, there exist numbers η > 0 and ℓ > 0 such that
\[
\|\phi(z) - \phi(\bar z)\| \le \ell \|z - \bar z\| \quad \text{whenever } z \in B(\bar z, \eta),
\]
and which satisfies φ(z̄) = v̄ and φ(z) ∈ F(z) for all z in a neighborhood of z̄.

4 The Optimal Control Problem as a Programming Problem

In this section, we suppose that X, W and Z are Asplund spaces with the dual spaces X*, W* and Z*, respectively. Assume that M : Z → X and T : W → X are continuous linear mappings. Let M* : X* → Z* and T* : X* → W* be the adjoint mappings of M and T, respectively. Let \(f : W \times Z \to \bar{\mathbb{R}}\) be a function. For each w ∈ W, we put
\[
H(w) := \{ z \in Z : Mz = Tw \}.
\]
Consider the problem of computing the subdifferential and the singular subdifferential of the value function
\[
h(w) := \inf_{z \in H(w)} f(w, z). \tag{10}
\]
This is an abstract model for (5).


We denote by \(\widetilde S(w)\) the solution set of (10) corresponding to the parameter w ∈ W. Assume that the value function h is finite at w̄ and that z̄ is a solution of the problem corresponding to the parameter w̄, that is, \(\bar z \in \widetilde S(\bar w)\).
The following theorem gives an upper estimate for the Fréchet subdifferential of the value function h at a given parameter w̄.

Theorem 4.1 ([12, Theorem 2.1]) Suppose that \(\hat\partial^+ f(\bar w, \bar z) \ne \emptyset\) and there exists a constant c > 0 such that
\[
\|T^* x^*\| \ge c \|x^*\|, \quad \forall x^* \in X^*.
\]
Then one has
\[
\hat\partial h(\bar w) \subseteq \bigcap_{(w^*, z^*) \in \hat\partial^+ f(\bar w, \bar z)} \bigl[ w^* + T^*\bigl((M^*)^{-1}(z^*)\bigr) \bigr].
\]
Moreover, assume that f is Fréchet differentiable at (w̄, z̄) and the solution map \(\widetilde S\) admits a local upper Lipschitzian selection at (w̄, z̄). Then
\[
\hat\partial h(\bar w) = \bigl\{ \nabla_w f(\bar w, \bar z) + T^*\bigl((M^*)^{-1}(\nabla_z f(\bar w, \bar z))\bigr) \bigr\}.
\]


The assumption \(\hat\partial^+ f(\bar w, \bar z) \ne \emptyset\) in Theorem 4.1 is rather strict. It excludes from our consideration convex, Lipschitzian functions of the type
\[
f(w, z) = |w| + z, \quad (w, z) \in \mathbb{R} \times \mathbb{R},
\]
or
\[
f(w, z) = \|w\| + g(z), \quad (w, z) \in W \times Z,
\]
where g : Z → ℝ is a given function and W and Z are Asplund spaces with dim Z ≥ 1. Indeed, for the first example, choosing (w̄, z̄) = (0, 0) we have \(\hat\partial^+ f(\bar w, \bar z) = \emptyset\). For the second example, we have \(\hat\partial^+ f(\bar w, \bar z) = \emptyset\) for any \((\bar w, \bar z) = (0, v) \in W \times Z\).
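For the first example this can be checked directly from the definitions (a short computation of ours): since \(\hat\partial^+ f(0,0) = -\hat\partial(-f)(0,0)\) and \((-f)(w,z) = -|w| - z\), any candidate \((w^*, z^*) \in \hat\partial(-f)(0,0)\) would have to satisfy, along points (w, 0) with w → 0 chosen so that \(w^* w \ge 0\),
\[
\frac{(-f)(w,0) - (-f)(0,0) - \langle (w^*, z^*), (w,0)\rangle}{\|(w,0)\|} = \frac{-|w| - w^* w}{|w|} = -1 - \frac{w^* w}{|w|} \le -1 < 0,
\]
so the defining lim inf is negative and \(\hat\partial(-f)(0,0) = \emptyset\); the second example is handled in the same way.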
The above remark can be strengthened as follows: Theorem 4.1 cannot be used for any problem of the form (10) with f being proper, convex, continuous and nondifferentiable at a given point \((\bar w, \bar z) \in \operatorname{gph} \widetilde S\). Indeed, since f is convex, the Fréchet subdifferential \(\hat\partial f(\bar w, \bar z)\) coincides with the subdifferential in the sense of convex analysis [25, Subsection 4.2.1]. As f is continuous at (w̄, z̄), the set ∂f(w̄, z̄) is nonempty by [25, Proposition 3, p. 199]. Hence, if \(\hat\partial^+ f(\bar w, \bar z) \ne \emptyset\), then f is Fréchet differentiable at (w̄, z̄) by [27, Proposition 1.87]. This contradicts the condition saying that f is nondifferentiable at (w̄, z̄).
By adding the assumption that the solution map \(\widetilde S\) is h-inner semicontinuous at (w̄, z̄), the following theorem gives an upper estimate for the Mordukhovich subdifferential of the value function h at a given parameter w̄.

Theorem 4.2 ([22, Theorem 2.1]) Suppose that the solution map \(\widetilde S\) is h-inner semicontinuous at \((\bar w, \bar z) \in \operatorname{gph} \widetilde S\), f is SNEC at (w̄, z̄) and
(i) the following qualification condition is satisfied:
\[
\partial^\infty f(\bar w, \bar z) \cap \bigl\{ (T^* x^*, -M^* x^*) : x^* \in X^* \bigr\} = \{0\};
\]
(ii) there exists a constant c > 0 such that
\[
\|T^* x^*\| \ge c \|x^*\|, \quad \forall x^* \in X^*.
\]
Then one has
\[
\partial h(\bar w) \subseteq \bigcup_{(w^*, z^*) \in \partial f(\bar w, \bar z)} \bigl[ w^* + T^*\bigl((M^*)^{-1}(z^*)\bigr) \bigr],
\]
\[
\partial^\infty h(\bar w) \subseteq \bigcup_{(w^*, z^*) \in \partial^\infty f(\bar w, \bar z)} \bigl[ w^* + T^*\bigl((M^*)^{-1}(z^*)\bigr) \bigr].
\]
Moreover, assume that f is strictly differentiable at (w̄, z̄) and the solution map \(\widetilde S\) admits a local upper Lipschitzian selection at (w̄, z̄). Then
\[
\partial h(\bar w) = \bigl\{ \nabla_w f(\bar w, \bar z) + T^*\bigl((M^*)^{-1}(\nabla_z f(\bar w, \bar z))\bigr) \bigr\}.
\]


We now derive formulas for computing the subdifferential and the singular subdifferential of the value function of the parametric programming problem (10) for the case where f is assumed to be convex. We are going to show that when the objective function f is convex, the assumptions of Theorem 4.1, like the nonemptiness of the Fréchet upper subdifferential of the objective function and the existence of a local upper Lipschitzian selection of the solution map, and the assumptions of Theorem 4.2, like the h-inner semicontinuity of the solution map, the SNEC property of the objective function, and the existence of a local upper Lipschitzian selection of the solution map, are no longer needed and, surprisingly, all the upper estimates become equalities.

Theorem 4.3 Suppose that there exists a constant c > 0 such that
\[
\|T^* x^*\| \ge c \|x^*\|, \quad \forall x^* \in X^*,
\]
f is proper convex on W × Z, and at least one of the following regularity conditions is satisfied:
(i) \(\operatorname{int}(\operatorname{gph} H) \cap \operatorname{dom} f \ne \emptyset\);
(ii) f is continuous at a point \((w_0, z_0) \in \operatorname{gph} H\).
Then one has
\[
\partial h(\bar w) = \bigcup_{(w^*, z^*) \in \partial f(\bar w, \bar z)} \bigl[ w^* + T^*\bigl((M^*)^{-1}(z^*)\bigr) \bigr], \tag{11}
\]
\[
\partial^\infty h(\bar w) = \bigcup_{(w^*, z^*) \in \partial^\infty f(\bar w, \bar z)} \bigl[ w^* + T^*\bigl((M^*)^{-1}(z^*)\bigr) \bigr]. \tag{12}
\]

For the proof of this theorem, we need the following lemma from [12]. Here, we put Q = gph H.

Lemma 4.1 Suppose that the assumptions of Theorem 4.3 are satisfied. Then for each (w̄, z̄) ∈ Q, one has
\[
N\bigl((\bar w, \bar z); Q\bigr) = \bigl\{ (-T^* x^*, M^* x^*) : x^* \in X^* \bigr\}.
\]

Proof of Theorem 4.3 Take an arbitrary \(\bar w^* \in \partial h(\bar w)\). Since the optimal value function h is convex, we have
\[
\langle \bar w^*, w - \bar w\rangle \le h(w) - h(\bar w), \quad \forall w \in W.
\]
Now, taking an arbitrary θ ∈ W and selecting z ∈ H(θ), from the above properties we get
\[
\langle \bar w^*, \theta - \bar w\rangle + \langle 0, z - \bar z\rangle \le h(\theta) - h(\bar w) \le f(\theta, z) - h(\bar w) = f(\theta, z) - f(\bar w, \bar z).
\]


Therefore,
\[
\langle (\bar w^*, 0), (\theta, z) - (\bar w, \bar z)\rangle \le f(\theta, z) - f(\bar w, \bar z), \quad \forall (\theta, z) \in Q.
\]
Hence,
\[
\langle (\bar w^*, 0), (\theta, z) - (\bar w, \bar z)\rangle \le \bigl(f + \delta(\cdot\,; Q)\bigr)(\theta, z) - \bigl(f + \delta(\cdot\,; Q)\bigr)(\bar w, \bar z), \quad \forall (\theta, z) \in W \times Z, \tag{13}
\]
where δ(·; Q) is the indicator function of Q, that is, δ((w, z); Q) = 0 if (w, z) ∈ Q and δ((w, z); Q) = +∞ otherwise. From (13), we get
\[
(\bar w^*, 0) \in \partial\bigl(f + \delta(\cdot\,; Q)\bigr)(\bar w, \bar z). \tag{14}
\]

Since Q is convex, \(\delta(\cdot\,; Q) : W \times Z \to \bar{\mathbb{R}}\) is convex. Obviously, δ(·; Q) is continuous at every point belonging to int Q.
Consequently, if the regularity condition (i) is satisfied, then δ(·; Q) is continuous at a point in dom f. By [25, Theorem 1 on p. 200], from (14) we have
\[
(\bar w^*, 0) \in \partial f(\bar w, \bar z) + \partial\delta(\cdot\,; Q)(\bar w, \bar z) = \partial f(\bar w, \bar z) + N\bigl((\bar w, \bar z); Q\bigr). \tag{15}
\]
Hence,
\[
\partial h(\bar w) \subset \bigcup_{(w_1^*, z_1^*) \in \partial f(\bar w, \bar z)} \bigl\{ \bar w^* \in W^* : (\bar w^*, 0) \in (w_1^*, z_1^*) + N\bigl((\bar w, \bar z); Q\bigr) \bigr\}.
\]

By Lemma 4.1, there exists x* ∈ X* such that
\[
\bar w^* - w_1^* = -T^*(x^*) \quad \text{and} \quad -z_1^* = M^*(x^*)
\]
for some \((w_1^*, z_1^*) \in \partial f(\bar w, \bar z)\). Hence,
\[
\bar w^* = w_1^* + T^*(-x^*) \quad \text{and} \quad -x^* \in (M^*)^{-1}(z_1^*).
\]
Consequently,
\[
\bar w^* \in w_1^* + T^*\bigl((M^*)^{-1}(z_1^*)\bigr) \tag{16}
\]
for some \((w_1^*, z_1^*) \in \partial f(\bar w, \bar z)\).


Consider the case where the regularity condition (ii) is fulfilled. Since dom δ(·; Q) = Q, from (ii) it follows that f is continuous at a point in dom δ(·; Q). Therefore, by [25, Theorem 1 on p. 200], from (14) we again have (15). Thus, there exists \((w_1^*, z_1^*) \in \partial f(\bar w, \bar z)\) such that (16) is satisfied.
In both cases, since \(\bar w^* \in \partial h(\bar w)\) was taken arbitrarily, from (16) we can deduce that
\[
\partial h(\bar w) \subset \bigcup_{(w^*, z^*) \in \partial f(\bar w, \bar z)} \bigl[ w^* + T^*\bigl((M^*)^{-1}(z^*)\bigr) \bigr].
\]

To establish the opposite inclusion, we need to prove that for each element \((w^*, z^*) \in \partial f(\bar w, \bar z)\) the following holds true:
\[
w^* + T^*\bigl((M^*)^{-1}(z^*)\bigr) \subset \partial h(\bar w).
\]
Taking any \(u^* \in T^*\bigl((M^*)^{-1}(z^*)\bigr)\), we will prove that \(w^* + u^* \in \partial h(\bar w)\). From \(u^* \in T^*\bigl((M^*)^{-1}(z^*)\bigr)\), there exists \(u_1^* \in (M^*)^{-1}(z^*) \subset X^*\) such that
\[
u^* = T^*(u_1^*), \qquad M^*(u_1^*) = z^*,
\]
which is equivalent to
\[
u^* = -T^*(-u_1^*), \qquad M^*(-u_1^*) = -z^*.
\]
By Lemma 4.1, we get
\[
(u^*, -z^*) \in N\bigl((\bar w, \bar z); Q\bigr).
\]
So,
\[
(w^*, z^*) + (u^*, -z^*) \in \partial f(\bar w, \bar z) + \partial\delta(\cdot\,; Q)(\bar w, \bar z).
\]
Hence,
\[
(u^* + w^*, 0) \in \partial f(\bar w, \bar z) + \partial\delta(\cdot\,; Q)(\bar w, \bar z).
\]
From the last inclusion, we can deduce that
\[
(u^* + w^*, 0) \in \partial\bigl(f + \delta(\cdot\,; Q)\bigr)(\bar w, \bar z).
\]
Hence,
\[
\langle u^* + w^*, w - \bar w\rangle + \langle 0, z - \bar z\rangle \le f(w, z) - f(\bar w, \bar z), \quad \forall (w, z) \in Q. \tag{17}
\]


For each fixed element w ∈ dom H, taking the infimum on both sides of (17) over z ∈ H(w) and remembering that h(w̄) = f(w̄, z̄), we obtain
\[
\langle u^* + w^*, w - \bar w\rangle \le h(w) - h(\bar w).
\]
Since h(w) = +∞ for all w ∉ dom H, from the last property it follows that \(u^* + w^* \in \partial h(\bar w)\). Hence, we obtain the equality (11).
We now derive (12) by a short argument. Observe that w ∈ dom h if and only if
\[
h(w) = \inf\{ f(w, z) : z \in H(w) \} < +\infty.
\]
Since the last inequality holds if and only if there exists z ∈ H(w) with (w, z) ∈ dom f, we have
\[
\delta(w; \operatorname{dom} h) = \inf\bigl\{ \delta\bigl((w, z); \operatorname{dom} f\bigr) : z \in H(w) \bigr\}. \tag{18}
\]
The representation (18) of δ(·; dom h) allows us to get (12) as a corollary of (11). Indeed, since dom δ(·; dom f) = dom f, if the regularity requirement in (i) is satisfied, then \(\operatorname{int} Q \cap \operatorname{dom} \delta(\cdot\,; \operatorname{dom} f) \ne \emptyset\). Next, if the condition (ii) is satisfied, then \((w_0, z_0) \in \operatorname{int}(\operatorname{dom} f)\); so δ(·; dom f) is continuous at \((w_0, z_0) \in Q\). Now, consider the optimization problem (10) with f(w, z) replaced by δ((w, z); dom f). By (18), the corresponding optimal value function coincides with δ(·; dom h). Therefore, in accordance with (11), we have
\[
\partial\delta(\cdot\,; \operatorname{dom} h)(\bar w) = \bigcup_{(w^*, z^*) \in \partial\delta(\cdot;\, \operatorname{dom} f)(\bar w, \bar z)} \bigl[ w^* + T^*\bigl((M^*)^{-1}(z^*)\bigr) \bigr].
\]
The latter yields (12) because
\[
\partial\delta(\cdot\,; \operatorname{dom} h)(\bar w) = N(\bar w; \operatorname{dom} h) = \partial^\infty h(\bar w)
\]
and
\[
\partial\delta(\cdot\,; \operatorname{dom} f)(\bar w, \bar z) = N\bigl((\bar w, \bar z); \operatorname{dom} f\bigr) = \partial^\infty f(\bar w, \bar z)
\]
by [4, Proposition 4.2]. □




Let us give an illustrative example for Theorem 4.3.

Example 4.1 Let X = ℝ², W = ℝ², Z = ℝ³,
\[
f(w, z) = |z_1| + \frac12 w_1^2 + |w_2|
\]
and \(H(w) = \{(z_1, z_2, z_3) \in \mathbb{R}^3 : z_1 + z_2 = 2w_1,\ z_3 = 2w_2\}\). Assume that w̄ = (1, 0). Then, by a direct computation,
\[
h(w) = \frac12 w_1^2 + |w_2|.
\]
So
\[
\partial h(\bar w) = \{1\} \times [-1, 1] \quad \text{and} \quad \partial^\infty h(\bar w) = \{0_{\mathbb{R}^2}\}.
\]
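As a quick numerical sanity check of this computation (our own illustrative script, not part of the paper), one can verify the convex subgradient inequality \(h(w) \ge h(\bar w) + \langle g, w - \bar w\rangle\) on random samples for candidates \(g \in \{1\} \times [-1, 1]\):

```python
import numpy as np

def h(w):                                   # closed form obtained above
    return 0.5 * w[0] ** 2 + abs(w[1])

w_bar = np.array([1.0, 0.0])
rng = np.random.default_rng(0)
for g2 in np.linspace(-1.0, 1.0, 9):        # candidate subgradients (1, g2)
    g = np.array([1.0, g2])
    for _ in range(1000):
        w = w_bar + rng.uniform(-2.0, 2.0, size=2)
        # subgradient inequality of convex analysis (up to rounding)
        assert h(w) - h(w_bar) >= g @ (w - w_bar) - 1e-12
print("subgradient inequality holds on all samples")
```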

We will show that equalities (11) and (12) hold. Indeed, for w̄ = (1, 0), we have
\[
h(\bar w) = \inf_{(z_1, z_2, z_3) \in H(\bar w)} \Bigl( |z_1| + \frac12 \Bigr),
\]
where \(H(\bar w) = \{(z_1, z_2, z_3) \in \mathbb{R}^3 : z_1 + z_2 = 2,\ z_3 = 0\}\). It is easy to check that z̄ = (0, 2, 0) is the unique solution of the problem corresponding to w̄. For \(\bar z := (0, 2, 0) \in \widetilde S(\bar w)\), we have \(\partial f(\bar w, \bar z) = \{1\} \times [-1, 1] \times [-1, 1] \times \{0\} \times \{0\}\) and \(\partial^\infty f(\bar w, \bar z) = N\bigl((\bar w, \bar z); \operatorname{dom} f\bigr) = \{0_{\mathbb{R}^5}\}\) by [4, Proposition 4.2].
It is easy to see that f(w, z) is continuous at (w̄, z̄) ∈ Q. Let M : ℝ³ → ℝ² and T : ℝ² → ℝ² be the mappings defined, respectively, by
\[
Mz = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} \quad \text{and} \quad Tw = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} \begin{pmatrix} w_1 \\ w_2 \end{pmatrix}.
\]
Note that
\[
M^* = \begin{pmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 1 \end{pmatrix} \quad \text{and} \quad T^* = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}.
\]
So
\[
\|T^* x^*\| \ge c \|x^*\|, \quad \forall x^* \in X^*,
\]
with, for example, c = 2.

Thus, the assumptions of Theorem 4.3 are satisfied. We have
\[
(M^*)^{-1}(z_1^*, 0, 0) =
\begin{cases}
\{(0, 0)\}, & z_1^* = 0,\\
\emptyset, & z_1^* \ne 0,
\end{cases}
\qquad
T^*\bigl((M^*)^{-1}(0, 0, 0)\bigr) = \{(0, 0)\}.
\]
Hence, the right-hand side of (11) can be computed as follows:
\[
\bigcup_{w_2^* \in [-1, 1]} \bigcup_{z_1^* \in [-1, 1]} \bigl[ (1, w_2^*) + T^*\bigl((M^*)^{-1}(z_1^*, 0, 0)\bigr) \bigr] = \{1\} \times [-1, 1].
\]


By using similar arguments, the right-hand side of (12) can be computed as follows:
\[
(0, 0) + T^*\bigl((M^*)^{-1}(0, 0, 0)\bigr) = \{0_{\mathbb{R}^2}\}.
\]
Hence, equalities (11) and (12) hold.

5 Proof of the Main Result

To prove Theorem 2.3, we first formulate problem (1)–(3) in a form to which Theorem 4.3 can be applied. We now consider the following linear mappings (calligraphic letters distinguish these operators from the matrix-valued functions A(·), B(·), T(·)): \(\mathcal{A} : X \to X\) defined by
\[
\mathcal{A}x := x - \int_0^{(\cdot)} A(\tau)x(\tau)\,d\tau,
\]
\(\mathcal{B} : U \to X\) defined by
\[
\mathcal{B}u := -\int_0^{(\cdot)} B(\tau)u(\tau)\,d\tau,
\]
\(\mathcal{M} : X \times U \to X\) defined by
\[
\mathcal{M}(x, u) := \mathcal{A}x + \mathcal{B}u \tag{19}
\]
and \(\mathcal{T} : W \to X\) defined by
\[
\mathcal{T}(\alpha, \theta) := \alpha + \int_0^{(\cdot)} T(\tau)\theta(\tau)\,d\tau. \tag{20}
\]

Under the hypotheses (H2) and (H3), (4) can be written in the form
\[
G(w) = \Bigl\{ (x, u) \in X \times U : x = \alpha + \int_0^{(\cdot)} A(\tau)x(\tau)\,d\tau + \int_0^{(\cdot)} B(\tau)u(\tau)\,d\tau + \int_0^{(\cdot)} T(\tau)\theta(\tau)\,d\tau \Bigr\}
\]
\[
= \Bigl\{ (x, u) \in X \times U : x - \int_0^{(\cdot)} A(\tau)x(\tau)\,d\tau - \int_0^{(\cdot)} B(\tau)u(\tau)\,d\tau = \alpha + \int_0^{(\cdot)} T(\tau)\theta(\tau)\,d\tau \Bigr\}
\]
\[
= \bigl\{ (x, u) \in X \times U : \mathcal{M}(x, u) = \mathcal{T}(w) \bigr\}.
\]

Recall that for 1 < p < ∞, we have \(L^p([0,1],\mathbb{R}^n)^* = L^q([0,1],\mathbb{R}^n)\), where \(1 < q < +\infty\) and \(1/p + 1/q = 1\). Besides, \(L^p([0,1],\mathbb{R}^n)\) is paired with \(L^q([0,1],\mathbb{R}^n)\) by the formula
\[
\langle x^*, x\rangle = \int_0^1 x^*(t)x(t)\,dt
\]
for all \(x^* \in L^q([0,1],\mathbb{R}^n)\) and \(x \in L^p([0,1],\mathbb{R}^n)\).
Also, we have \(W^{1,p}([0,1],\mathbb{R}^n)^* = \mathbb{R}^n \times L^q([0,1],\mathbb{R}^n)\), and \(W^{1,p}([0,1],\mathbb{R}^n)\) is paired with \(\mathbb{R}^n \times L^q([0,1],\mathbb{R}^n)\) by the formula
\[
\langle (a, u), x\rangle = \langle a, x(0)\rangle + \int_0^1 u(t)\dot{x}(t)\,dt
\]
for all \((a, u) \in \mathbb{R}^n \times L^q([0,1],\mathbb{R}^n)\) and \(x \in W^{1,p}([0,1],\mathbb{R}^n)\) (see [25, p. 21]).
In the case p = 2, \(W^{1,2}([0,1],\mathbb{R}^n)\) becomes a Hilbert space with the inner product given by
\[
\langle x, y\rangle = \langle x(0), y(0)\rangle + \int_0^1 \dot{x}(t)\dot{y}(t)\,dt
\]
for all \(x, y \in W^{1,2}([0,1],\mathbb{R}^n)\).


In the sequel, we shall need the following lemmas.

Lemma 5.1 ([21, Lemma 2.3]) Suppose that \(\mathcal{M}^*\) and \(\mathcal{T}^*\) are the adjoint mappings of \(\mathcal{M}\) and \(\mathcal{T}\), respectively. Then the following assertions are valid:
(a) The mappings \(\mathcal{M}\) and \(\mathcal{T}\) are continuous.
(b) \(\mathcal{T}^*(a, u) = (a, T^{\mathsf T}u)\) for all \((a, u) \in \mathbb{R}^n \times L^q([0,1],\mathbb{R}^n)\).
(c) \(\mathcal{M}^*(a, u) = \bigl(\mathcal{A}^*(a, u), \mathcal{B}^*(a, u)\bigr)\), where \(\mathcal{B}^*(a, u) = -B^{\mathsf T}u\) and
\[
\mathcal{A}^*(a, u) = \Bigl( a - \int_0^1 A^{\mathsf T}(t)u(t)\,dt,\ u + \int_0^{(\cdot)} A^{\mathsf T}(\tau)u(\tau)\,d\tau - \int_0^1 A^{\mathsf T}(t)u(t)\,dt \Bigr)
\]
for all \((a, u) \in \mathbb{R}^n \times L^q([0,1],\mathbb{R}^n)\).
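Assertion (b), for instance, follows directly from the pairings recalled above; a short derivation of ours reads: for \(x = \mathcal{T}(\alpha, \theta)\) one has \(x(0) = \alpha\) and \(\dot{x}(t) = T(t)\theta(t)\), hence
\[
\langle (a, u), \mathcal{T}(\alpha, \theta)\rangle = \langle a, \alpha\rangle + \int_0^1 u(t)\,T(t)\theta(t)\,dt = \langle a, \alpha\rangle + \int_0^1 \bigl(T^{\mathsf T}(t)u(t)\bigr)\theta(t)\,dt = \bigl\langle (a, T^{\mathsf T}u), (\alpha, \theta)\bigr\rangle,
\]
so that \(\mathcal{T}^*(a, u) = (a, T^{\mathsf T}u)\) in \(W^* = \mathbb{R}^n \times L^q([0,1],\mathbb{R}^k)\).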

Recall that Z = X × U and
\[
G(w) = \bigl\{ (x, u) \in X \times U : \mathcal{M}(x, u) = \mathcal{T}(w) \bigr\}.
\]
Then our problem can be written in the form
\[
V(w) := \inf_{z \in G(w)} J(z, w)
\]


with z = (x, u) ∈ Z, w = (α, θ) ∈ W and
\[
G(w) = \bigl\{ z \in Z : \mathcal{M}(z) = \mathcal{T}(w) \bigr\},
\]
where \(\mathcal{M} : Z \to X\) and \(\mathcal{T} : W \to X\) are defined by (19) and (20), respectively.

Lemma 5.2 ([21, Lemma 3.1]) Suppose that assumptions (H1), (H2) and (H3) are valid. Then the following assertions are fulfilled:
(a) There exists a constant c > 0 such that
\[
\|\mathcal{T}^* x^*\| \ge c \|x^*\|, \quad \forall x^* \in X^*.
\]
(b) The functional J is Fréchet differentiable at (z̄, w̄) and ∇J(z̄, w̄) is given by
\[
\nabla_w J(\bar z, \bar w) = \bigl( 0,\ L_\theta\bigl(t, \bar x(t), \bar u(t), \bar\theta(t)\bigr) \bigr), \qquad \nabla_z J(\bar z, \bar w) = \bigl( J_x(\bar x, \bar u, \bar\theta),\ J_u(\bar x, \bar u, \bar\theta) \bigr)
\]
with
\[
J_u(\bar x, \bar u, \bar\theta) = L_u(\cdot, \bar x, \bar u, \bar\theta)
\]
and
\[
J_x(\bar x, \bar u, \bar\theta) = \Bigl( g'(\bar x(1)) + \int_0^1 L_x\bigl(t, \bar x(t), \bar u(t), \bar\theta(t)\bigr)\,dt,\ g'(\bar x(1)) + \int_{(\cdot)}^1 L_x\bigl(t, \bar x(t), \bar u(t), \bar\theta(t)\bigr)\,dt \Bigr).
\]

We now return to the proof of Theorem 2.3, our main result.
Since g and L are convex, J(x, u, w) is convex. By Lemma 5.2, all conditions of Theorem 4.3 are fulfilled. According to Theorem 4.3, we obtain
\[
\partial V(\bar w) = \bigl\{ \nabla_w J(\bar z, \bar w) + \mathcal{T}^*\bigl((\mathcal{M}^*)^{-1}(\nabla_z J(\bar z, \bar w))\bigr) \bigr\}. \tag{21}
\]
By (21), \((\alpha^*, \theta^*) \in \partial V(\bar w)\) if and only if
\[
(\alpha^*, \theta^*) - \nabla_w J(\bar z, \bar w) \in \mathcal{T}^*\bigl((\mathcal{M}^*)^{-1}(\nabla_z J(\bar z, \bar w))\bigr),
\]
which is equivalent to
\[
\bigl( \alpha^*,\ \theta^* - J_\theta(\bar z, \bar w) \bigr) \in \mathcal{T}^*\bigl((\mathcal{M}^*)^{-1}(\nabla_z J(\bar z, \bar w))\bigr).
\]
Hence, there exists \((a, v) \in \mathbb{R}^n \times L^q([0,1],\mathbb{R}^n)\) such that
\[
\bigl( \alpha^*,\ \theta^* - J_\theta(\bar z, \bar w) \bigr) = \mathcal{T}^*(a, v) \quad \text{and} \quad \nabla_z J(\bar z, \bar w) = \mathcal{M}^*(a, v). \tag{22}
\]


By Lemma 5.1, we get
\[
(22) \iff
\begin{cases}
\alpha^* = a, \qquad \theta^* - J_\theta(\bar z, \bar w) = T^{\mathsf T}(\cdot)v(\cdot),\\[2pt]
\bigl( J_x(\bar x, \bar u, \bar\theta),\ J_u(\bar x, \bar u, \bar\theta) \bigr) = \bigl( \mathcal{A}^*(a, v),\ \mathcal{B}^*(a, v) \bigr)
\end{cases}
\]
\[
\iff
\begin{cases}
\alpha^* = a,\\
\theta^* = L_\theta\bigl(\cdot, \bar x(\cdot), \bar u(\cdot), \bar\theta(\cdot)\bigr) + T^{\mathsf T}(\cdot)v(\cdot),\\
g'(\bar x(1)) + \int_0^1 L_x\bigl(t, \bar x(t), \bar u(t), \bar\theta(t)\bigr)\,dt = a - \int_0^1 A^{\mathsf T}(t)v(t)\,dt,\\
g'(\bar x(1)) + \int_{(\cdot)}^1 L_x\bigl(\tau, \bar x(\tau), \bar u(\tau), \bar\theta(\tau)\bigr)\,d\tau = v(\cdot) + \int_0^{(\cdot)} A^{\mathsf T}(\tau)v(\tau)\,d\tau - \int_0^1 A^{\mathsf T}(t)v(t)\,dt,\\
L_u\bigl(\cdot, \bar x(\cdot), \bar u(\cdot), \bar\theta(\cdot)\bigr) = -B^{\mathsf T}(\cdot)v(\cdot)
\end{cases}
\]
\[
\iff
\begin{cases}
\alpha^* = a,\\
\theta^* = L_\theta\bigl(\cdot, \bar x(\cdot), \bar u(\cdot), \bar\theta(\cdot)\bigr) + T^{\mathsf T}(\cdot)v(\cdot),\\
g'(\bar x(1)) + \int_0^1 L_x\bigl(t, \bar x(t), \bar u(t), \bar\theta(t)\bigr)\,dt = a - \int_0^1 A^{\mathsf T}(t)v(t)\,dt,\\
g'(\bar x(1)) + \int_{(\cdot)}^1 L_x\bigl(\tau, \bar x(\tau), \bar u(\tau), \bar\theta(\tau)\bigr)\,d\tau = v(\cdot) - \int_{(\cdot)}^1 A^{\mathsf T}(\tau)v(\tau)\,d\tau,\\
L_u\bigl(\cdot, \bar x(\cdot), \bar u(\cdot), \bar\theta(\cdot)\bigr) = -B^{\mathsf T}(\cdot)v(\cdot)
\end{cases}
\]
\[
\iff
\begin{cases}
v \in W^{1,q}([0,1],\mathbb{R}^n),\\
\theta^*(\cdot) - T^{\mathsf T}(\cdot)v(\cdot) = L_\theta\bigl(\cdot, \bar x(\cdot), \bar u(\cdot), \bar\theta(\cdot)\bigr),\\
\alpha^* = g'(\bar x(1)) + \int_0^1 L_x\bigl(t, \bar x(t), \bar u(t), \bar\theta(t)\bigr)\,dt + \int_0^1 A^{\mathsf T}(t)v(t)\,dt,\\
v(1) = g'(\bar x(1)),\\
-\dot v(\cdot) - A^{\mathsf T}(\cdot)v(\cdot) = L_x\bigl(\cdot, \bar x(\cdot), \bar u(\cdot), \bar\theta(\cdot)\bigr),\\
-B^{\mathsf T}(\cdot)v(\cdot) = L_u\bigl(\cdot, \bar x(\cdot), \bar u(\cdot), \bar\theta(\cdot)\bigr).
\end{cases} \tag{23}
\]
Putting y = −v, we have
\[
(23) \iff
\begin{cases}
\alpha^* = g'(\bar x(1)) + \int_0^1 L_x\bigl(t, \bar x(t), \bar u(t), \bar\theta(t)\bigr)\,dt - \int_0^1 A^{\mathsf T}(t)y(t)\,dt,\\
y(1) = -g'(\bar x(1)),\\
\bigl( \dot y(t) + A^{\mathsf T}(t)y(t),\ B^{\mathsf T}(t)y(t),\ \theta^*(t) + T^{\mathsf T}(t)y(t) \bigr) = \nabla L\bigl(t, \bar x(t), \bar u(t), \bar\theta(t)\bigr)
\end{cases}
\]
for a.e. t ∈ [0, 1]. The proof is complete. □




Example 5.1 We will illustrate the obtained result by a concrete problem. Put
\[
X = W^{1,2}([0,1],\mathbb{R}^2), \quad U = L^2([0,1],\mathbb{R}^2), \quad \Theta = L^2([0,1],\mathbb{R}^2), \quad W = \mathbb{R}^2 \times \Theta.
\]


Consider the problem
\[
J(x, u, \theta) = -x_2(1) + \int_0^1 \bigl( u_1^2 + u_2^2 + \theta_1^2 + \theta_2^2 + 1 \bigr)\,dt \longrightarrow \inf
\]
subject to
\[
\begin{cases}
\dot x_1 = 2x_1 + u_1 + \theta_1,\\
\dot x_2 = x_2 + u_2 + \theta_2,\\
x_1(0) = \alpha_1,\\
x_2(0) = \alpha_2.
\end{cases} \tag{24}
\]
Let \((\bar\alpha, \bar\theta) = \bigl((1, 1), (0, 0)\bigr)\). The following assertions are valid:
(i) The pair (x̄, ū), where
\[
\bar x = \Bigl( e^{2t},\ \Bigl(1 + \frac{e}{4}\Bigr)e^{t} - \frac{e}{4}e^{-t} \Bigr), \qquad \bar u = \Bigl( 0,\ \frac12 e^{-t+1} \Bigr),
\]
is a solution of the problem corresponding to (ᾱ, θ̄).
(ii) \(\partial V(\bar\alpha, \bar\theta) = \bigl\{ \bigl( (0, -e),\ (0, -e^{1-t}) \bigr) \bigr\}\).
Indeed, for \((\bar\alpha, \bar\theta) = \bigl((1, 1), (0, 0)\bigr)\) the problem becomes
\[
J_0(x, u) = -x_2(1) + \int_0^1 \bigl( u_1^2 + u_2^2 + 1 \bigr)\,dt \longrightarrow \inf
\]
subject to
\[
\begin{cases}
\dot x_1 = 2x_1 + u_1,\\
\dot x_2 = x_2 + u_2,\\
x_1(0) = 1,\\
x_2(0) = 1.
\end{cases} \tag{25}
\]

By a direct computation, we see that the pair (x̄, ū) satisfies (25). Besides,
\[
J_0(\bar x, \bar u) = -\frac{e^2}{8} - e + \frac{9}{8}.
\]
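This value can be checked numerically; the short script below is our own illustration, using the formulas for x̄₂ and ū displayed in assertion (i):

```python
import numpy as np

# Numerical check of J0(x̄, ū): x̄2(t) = (1 + e/4)e^t - (e/4)e^{-t},
# ū(t) = (0, (1/2)e^{1-t}); ū1 ≡ 0 drops out of the integrand.
e = np.e
t = np.linspace(0.0, 1.0, 100001)
integrand = (0.5 * np.exp(1.0 - t)) ** 2 + 1.0
integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))  # trapezoid rule
x2_at_1 = (1.0 + e / 4.0) * e - (e / 4.0) * np.exp(-1.0)
print(-x2_at_1 + integral)            # ≈ -2.516914
print(-e ** 2 / 8.0 - e + 9.0 / 8.0)  # ≈ -2.516914, matching the closed form
```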
Note that (x̄, ū) is a solution of the problem. In fact, for all (x, u) satisfying (25) we have
\[
J_0(x, u) = -x_2(1) + \int_0^1 \bigl( u_1^2 + u_2^2 + 1 \bigr)\,dt
\ge -x_2(1) + \int_0^1 \bigl( 1 + u_2^2 \bigr)\,dt
= -x_2(1) + \int_0^1 \bigl( 1 + (\dot x_2 - x_2)^2 \bigr)\,dt
= \int_0^1 \bigl( (\dot x_2 - x_2)^2 - \dot x_2 \bigr)\,dt. \tag{26}
\]


We now consider the variational problem
\[
\widetilde J(x_2) := \int_0^1 \bigl( (\dot x_2 - x_2)^2 - \dot x_2 \bigr)\,dt \longrightarrow \inf. \tag{27}
\]
By solving the Euler equation, noting that \(\widetilde J\) is a convex functional, we obtain that \(\hat x_2(t) = c e^t + (1 - c)e^{-t}\) is a solution of (27), where c is determined by \(c = \frac{ae - 1}{e^2 - 1}\) and \(a = x_2(1)\). Hence
\[
\widetilde J(x_2) \ge \widetilde J(\hat x_2) = 1 + \frac{2(e^2 - 1)}{e^2}(c - 1)^2 + \frac{1}{e}(c - 1) - e(c - 1) - e. \tag{28}
\]
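For the reader's convenience, here is the Euler step in detail (our own computation): with the integrand \(F(t, x, \dot x) = (\dot x - x)^2 - \dot x\),
\[
\frac{d}{dt}F_{\dot x} = F_x \ \Longleftrightarrow\ \frac{d}{dt}\bigl( 2(\dot x_2 - x_2) - 1 \bigr) = -2(\dot x_2 - x_2) \ \Longleftrightarrow\ \ddot x_2 = x_2,
\]
whose general solution is \(x_2(t) = c_1 e^t + c_2 e^{-t}\); the initial condition \(x_2(0) = 1\) gives \(c_2 = 1 - c_1\), and prescribing the endpoint value \(x_2(1) = a\) yields \(c_1 = \frac{ae - 1}{e^2 - 1}\).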

Combining (26) with (28) and putting r = c − 1, we obtain
\[
J_0(x, u) \ge 1 + \frac{2(e^2 - 1)}{e^2} r^2 + \frac{r}{e} - er - e \ge -\frac{e^2}{8} - e + \frac{9}{8} = J_0(\bar x, \bar u).
\]

Hence, (x̄, ū) is a solution of the problem corresponding to (ᾱ, θ̄ ). Assertion (i) is
proved.
We now prove (ii). From (24), we have
\[
A = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad T = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.
\]

It is easy to see that conditions (H2) and (H3) are fulfilled. Since
\[
L(t, x, u, \theta) = u_1^2 + u_2^2 + \theta_1^2 + \theta_2^2 + 1,
\]
we have
\[
|L(t, x, u, \theta)| \le |u|^2 + |\theta|^2 + 1, \qquad |L_u(t, x, u, \theta)| \le 2|u|, \quad |L_\theta(t, x, u, \theta)| \le 2|\theta|
\]
for all \((x, u, \theta) \in \mathbb{R}^2 \times \mathbb{R}^2 \times \mathbb{R}^2\) and a.e. t ∈ [0, 1]. Thus, condition (H1) is also valid.
Hence, all conditions of Theorem 2.3 are fulfilled. By Theorem 2.3, \((\alpha^*, \theta^*) \in \partial V(\bar\alpha, \bar\theta)\) if and only if there exists \(y = (y_1, y_2) \in W^{1,2}([0,1],\mathbb{R}^2)\) such that
\[
\begin{cases}
\dot y + A^{\mathsf T} y = \nabla_x L(t, \bar x, \bar u, \bar\theta),\\
y(1) = -g'(\bar x(1)),
\end{cases}
\quad \text{that is,} \quad
\begin{cases}
\dot y_1 = -2y_1,\\
\dot y_2 = -y_2,\\
y_1(1) = 0,\\
y_2(1) = 1.
\end{cases}
\]


This implies that \((y_1, y_2) = (\bar y_1, \bar y_2) = (0, e^{1-t}) =: \bar y\). By (8), we have
\[
\theta^* = -T^{\mathsf T}\bar y + \nabla_\theta L(t, \bar x, \bar u, \bar\theta) = -T^{\mathsf T}\bar y.
\]
It follows that \(\theta^*(t) = (0, -e^{1-t})\). On the other hand, from (6) we get
\[
\alpha^* = g'(\bar x(1)) + \int_0^1 L_x\bigl(t, \bar x(t), \bar u(t), \bar\theta(t)\bigr)\,dt - \int_0^1 A^{\mathsf T}(t)y(t)\,dt. \tag{29}
\]
Substituting y = ȳ into (29), we obtain \(\alpha^* = (0, -e)\).
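Explicitly (our own arithmetic): here g(x) = −x₂, so \(g'(\bar x(1)) = (0, -1)\), \(L_x \equiv 0\), and
\[
\int_0^1 A^{\mathsf T}(t)\bar y(t)\,dt = \int_0^1 \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 \\ e^{1-t} \end{pmatrix} dt = \Bigl( 0,\ \int_0^1 e^{1-t}\,dt \Bigr) = (0,\ e - 1),
\]
so \(\alpha^* = (0, -1) + (0, 0) - (0, e - 1) = (0, -e)\).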

6 Conclusions

We studied the first-order behavior of the value function of a parametric convex optimal control problem with a convex cost function and linear state equations. In order to achieve these results, we first established an abstract result on the subdifferential of the value function of a parametric convex mathematical programming problem, and then we derived a formula for computing the subdifferential and the singular subdifferential of the value function of a parametric convex optimal control problem. The main result of this paper is illustrated by an example.

Acknowledgments This research was partially supported by grant 101.01-2015.04 of the National Foundation for Science and Technology Development (NAFOSTED), Vietnam.

References
1. Aubin, J.-P., Ekeland, I.: Applied Nonlinear Analysis. Wiley, New York (1984)
2. Ioffe, A.D.: Euler–Lagrange and Hamiltonian formalisms in dynamic optimization. Trans. Am. Math.
Soc. 349, 2871–2900 (1997)
3. Vinter, R.B., Zheng, H.: Necessary conditions for optimal control problems with state constraints.
Trans. Am. Math. Soc. 350, 1181–1204 (1998)
4. An, D.T.V., Yen, N.D.: Differential stability of convex optimization problems under inclusion con-
straints. Appl. Anal. 94, 108–128 (2015)
5. Clarke, F.H.: Optimization and Nonsmooth Analysis. SIAM, Philadelphia (1990)
6. Clarke, F.H.: Methods of Dynamic and Nonsmooth Optimization. SIAM, Philadelphia (1989)
7. Mordukhovich, B.S., Nam, N.M.: Variational stability and marginal functions via generalized differ-
entiation. Math. Oper. Res. 30, 800–816 (2005)
8. Mordukhovich, B.S., Nam, N.M., Yen, N.D.: Fréchet subdifferential calculus and optimality conditions
in nondifferentiable programming. Optimization 55, 685–708 (2006)
9. Mordukhovich, B.S., Nam, N.M., Yen, N.D.: Subgradients of marginal functions in parametric math-
ematical programming. Math. Program. 116, 369–396 (2009)
10. Penot, J.-P.: Differentiability properties of optimal value functions. Can. J. Math. 56, 825–842 (2004)
11. Cernea, A., Frankowska, H.: A connection between the maximum principle and dynamic programming
for constrained control problems. SIAM J. Control Optim. 44, 673–703 (2005)
12. Chieu, N.H., Kien, B.T., Toan, N.T.: Further results on subgradients of the value function to a parametric
optimal control problem. J. Optim. Theory Appl. 168, 785–801 (2016)
13. Chieu, N.H., Yao, J.-C.: Subgradients of the optimal value function in a parametric discrete optimal
control problem. J. Ind. Manag. Optim. 6, 401–410 (2010)
14. Kien, B.T., Liou, Y.C., Wong, N.-C., Yao, J.-C.: Subgradients of value functions in parametric dynamic
programming. Eur. J. Oper. Res. 193, 12–22 (2009)


15. Moussaoui, M., Seeger, A.: Sensitivity analysis of optimal value functions of convex parametric pro-
grams with possibly empty solution sets. SIAM J. Optim. 4, 659–675 (1994)
16. Moussaoui, M., Seeger, A.: Epsilon-maximum principle of Pontryagin type and perturbation analysis
of convex optimal control problems. SIAM J. Control Optim. 34, 407–427 (1996)
17. Rockafellar, R.T., Wolenski, P.R.: Convexity in Hamilton–Jacobi theory I: dynamics and duality. SIAM
J. Control Optim. 39, 1323–1350 (2000)
18. Rockafellar, R.T., Wolenski, P.R.: Convexity in Hamilton–Jacobi theory II: envelope representation.
SIAM J. Control Optim. 39, 1351–1372 (2000)
19. Rockafellar, R.T.: Hamilton–Jacobi theory and parametric analysis in fully convex problems of optimal
control. J. Glob. Optim. 248, 419–431 (2004)
20. Seeger, A.: Subgradient of optimal-value function in dynamic programming: the case of convex system
without optimal paths. Math. Oper. Res. 21, 555–575 (1996)
21. Toan, N.T., Kien, B.T.: Subgradients of the value function to a parametric optimal control problem.
Set-Valued Var. Anal. 18, 183–203 (2010)
22. Toan, N.T.: Mordukhovich subgradients of the value function to a parametric optimal control problem.
Taiwan. J. Math. 19, 1051–1072 (2015)
23. Toan, N.T., Yao, J.-C.: Mordukhovich subgradients of the value function to a parametric discrete
optimal control problem. J. Glob. Optim. 58, 595–612 (2014)
24. Vinter, R.B.: Optimal Control. Birkhäuser, Boston (2000)
25. Ioffe, A.D., Tihomirov, V.M.: Theory of Extremal Problems. North-Holland, Amsterdam (1979)
26. Borwein, J.M., Zhu, Q.J.: Techniques of Variational Analysis. Springer, New York (2005)
27. Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation I: Basic Theory. Springer, Berlin (2006)
28. Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation II: Applications. Springer,
Berlin (2006)
