DOI 10.1007/s10957-016-0921-2
Abstract Motivated by our recent works on the optimal value function in parametric optimal control problems under linear state equations, in this paper we study the first-order behavior of the value function of a parametric convex optimal control problem with a convex cost function and linear state equations. By establishing an abstract result on the subdifferential of the value function of a parametric convex mathematical programming problem, we derive formulas for computing the subdifferential and the singular subdifferential of the value function of a parametric convex optimal control problem. By virtue of the convexity, several assumptions used in the above papers, such as the existence of a local upper Lipschitzian selection of the solution map and the V-inner semicontinuity of the solution map, are no longer needed.
J Optim Theory Appl
1 Introduction
The study of the first-order behavior of value functions is important in variational analysis and optimization. An example is the study of distance functions and their applications to optimal control problems (see [1–3]). Many papers deal with differentiability properties and the Fréchet subdifferential of value functions; see, for example, [4–10]. By considering a set of assumptions involving a kind of coherence property, Penot [10] showed that value functions are Fréchet differentiable. Penot's results give sufficient conditions under which value functions are Fréchet differentiable, rather than formulas for computing their derivatives. In [9], Mordukhovich, Nam and Yen derived formulas for computing and estimating the Fréchet subdifferential and the Mordukhovich subdifferential of value functions of parametric mathematical programming problems in Banach spaces. By using the Moreau–Rockafellar theorem and appropriate regularity conditions, An and Yen [4] obtained formulas for computing the subdifferential and the singular subdifferential of the optimal value function of a parametric convex mathematical programming problem.
Besides parametric mathematical programming, the first-order behavior of value functions in optimal control problems has attracted the attention of many researchers; see, for example, [11–24]. Recently, Toan and Kien [21] derived a formula for an upper evaluation of the Fréchet subdifferential of the value function for the case where the objective functions are not assumed to be convex. Under assumptions weaker than those in [21], Chieu, Kien and Toan [12] obtained a formula for an upper evaluation of the Fréchet subdifferential of the value function, which complements the results in [21]. By establishing a result on the Mordukhovich subdifferential of the value function of parametric mathematical programming problems, we proved in [22] that the formulas for computing the Fréchet subdifferential in [12] are also formulas for computing the Mordukhovich subdifferential of the value function whenever the solution map is V-inner semicontinuous.
Motivated by our works [12] and [22] on the optimal value function in parametric optimal control problems under linear state equations, in this paper we continue to study the first-order behavior of the value function of a parametric convex optimal control problem with convex cost functions and linear state equations by giving a sharper formula for computing the subdifferential of the value function. In order to prove the main result, we first reduce the problem to a convex mathematical programming problem; then, using the Moreau–Rockafellar theorem (see [25, p. 48]) and appropriate regularity conditions, we obtain formulas for computing the subdifferential and the singular subdifferential of the optimal value function of a parametric convex mathematical programming problem. By virtue of the convexity, several assumptions used in the above papers, such as the nonemptiness of the Fréchet upper subdifferential of the objective function, the sequential normal epi-compactness (SNEC) of the objective function, the existence of a local upper Lipschitzian selection of the solution map, and the V-inner semicontinuity of the solution map, are no longer needed and, surprisingly, all the upper estimates become equalities.
A wide variety of problems in optimal control can be posed in the following form.
Determine a control vector u ∈ L^p([0, 1], R^m) and a trajectory
V : W → R̄
is an extended real-valued function, which is called the value function or the marginal
function of problem (1)–(3). It is assumed that V is finite at w̄ and z̄ = (x̄, ū) is a
solution of the problem corresponding to a parameter w̄, that is, z̄ = (x̄, ū) ∈ S(w̄).
For each w = (α, θ ) ∈ W , we put
\[
J(x, u, w) = g(x(1)) + \int_0^1 L\big(t, x(t), u(t), \theta(t)\big)\, dt
\]
and
We say that the solution map S is V-inner semicontinuous at (w̄, z̄) if for every sequence $w_k \xrightarrow{V} \bar{w}$ (i.e., $w_k \to \bar{w}$ and $V(w_k) \to V(\bar{w})$), there is a sequence $\{z_k\}$ with $z_k \in S(w_k)$ for all $k$, which contains a subsequence converging to z̄.
To deal with our problem, we impose the following assumptions:
(H1) The functions L : [0, 1] × R^n × R^m × R^k → R̄ and g : R^n → R̄ have the properties that L(·, x, u, v) is measurable for all (x, u, v) ∈ R^n × R^m × R^k, L(t, ·, ·, ·) and g(·) are convex and continuously differentiable for almost every t ∈ [0, 1], and there exist constants c1 > 0, c2 > 0, r ≥ 0, p ≥ p1 ≥ 0, p − 1 ≥ p2 ≥ 0, and a nonnegative function ω1 ∈ L^p([0, 1], R) such that
\[
|L(t, x, u, v)| \le c_1\big(\omega_1(t) + |x|^{p_1} + |u|^{p_1} + |v|^{p_1}\big),
\]
\[
\max\big\{ |L_x(t, x, u, v)|,\ |L_u(t, x, u, v)|,\ |L_v(t, x, u, v)| \big\} \le c_2\big( |x|^{p_2} + |u|^{p_2} + |v|^{p_2} + r \big).
\]
The following theorem gives an upper estimate for the Fréchet subdifferential of the value function V at a given parameter w̄ for the case where the functions g and L are not necessarily convex.
Theorem 2.1 ([12, Theorem 1.1]) Suppose that assumptions (H1)–(H3) are fulfilled. Then for a vector (α*, θ*) ∈ R^n × L^q([0, 1], R^k) to be a Fréchet subgradient of V at (ᾱ, θ̄), it is necessary that there exists a function y ∈ W^{1,q}([0, 1], R^n) such that the following conditions are satisfied:
\[
\alpha^* = g'(\bar{x}(1)) + \int_0^1 L_x\big(t, \bar{x}(t), \bar{u}(t), \bar{\theta}(t)\big)\, dt - \int_0^1 A^T(t)\, y(t)\, dt,
\]
\[
y(1) = -g'(\bar{x}(1))
\]
and
\[
\big(\dot{y}(t) + A^T(t)y(t),\ B^T(t)y(t),\ \theta^*(t) + T^T(t)y(t)\big) = \nabla L\big(t, \bar{x}(t), \bar{u}(t), \bar{\theta}(t)\big) \quad \text{a.e. } t \in [0, 1].
\]
The above conditions are also sufficient for (α*, θ*) ∈ ∂V(ᾱ, θ̄) if the solution map S has a local upper Lipschitzian selection at (w̄, x̄, ū). Here, A^T stands for the transpose of A, ∇L(t, x̄(t), ū(t), θ̄(t)) stands for the gradient of L(t, ·, ·, ·) at (x̄(t), ū(t), θ̄(t)), and q is the conjugate exponent of p, that is, 1 < q < +∞ and 1/p + 1/q = 1.
By adding the assumption that the solution map S is V-inner semicontinuous at (w̄, z̄), the following theorem gives an upper estimate for the Mordukhovich subdifferential of the value function V at a given parameter w̄ for the case where g and L are not necessarily convex.
Theorem 2.2 ([22, Theorem 1.1]) Suppose that the solution map S is V-inner semicontinuous at (w̄, z̄) ∈ gph S, and assumptions (H1)–(H3) are fulfilled. Then for a vector (α*, θ*) ∈ R^n × L^q([0, 1], R^k) to be a Mordukhovich subgradient of V at (ᾱ, θ̄), it is necessary that there exists a function y ∈ W^{1,q}([0, 1], R^n) such that the following conditions are satisfied:
\[
\alpha^* = g'(\bar{x}(1)) + \int_0^1 L_x\big(t, \bar{x}(t), \bar{u}(t), \bar{\theta}(t)\big)\, dt - \int_0^1 A^T(t)\, y(t)\, dt,
\]
\[
y(1) = -g'(\bar{x}(1))
\]
and
\[
\big(\dot{y}(t) + A^T(t)y(t),\ B^T(t)y(t),\ \theta^*(t) + T^T(t)y(t)\big) = \nabla L\big(t, \bar{x}(t), \bar{u}(t), \bar{\theta}(t)\big) \quad \text{a.e. } t \in [0, 1].
\]
The above conditions are also sufficient for (α*, θ*) ∈ ∂V(ᾱ, θ̄) if the solution map S has a local upper Lipschitzian selection at (w̄, x̄, ū).
We are now ready to state our main result. We note that, by virtue of the convexity, several assumptions used in Theorems 2.1 and 2.2, such as the existence of a local upper Lipschitzian selection and the V-inner semicontinuity of the solution map, are no longer needed and, surprisingly, all the necessary conditions become necessary and sufficient.
Theorem 2.3 Suppose that assumptions (H1)–(H3) are fulfilled. Then for a vector (α*, θ*) ∈ R^n × L^q([0, 1], R^k) to be a subgradient of V at (ᾱ, θ̄), it is necessary and sufficient that there exists a function y ∈ W^{1,q}([0, 1], R^n) such that the following conditions are satisfied:
\[
\alpha^* = g'(\bar{x}(1)) + \int_0^1 L_x\big(t, \bar{x}(t), \bar{u}(t), \bar{\theta}(t)\big)\, dt - \int_0^1 A^T(t)\, y(t)\, dt, \tag{6}
\]
\[
y(1) = -g'(\bar{x}(1)) \tag{7}
\]
and
\[
\big(\dot{y}(t) + A^T(t)y(t),\ B^T(t)y(t),\ \theta^*(t) + T^T(t)y(t)\big) = \nabla L\big(t, \bar{x}(t), \bar{u}(t), \bar{\theta}(t)\big) \quad \text{a.e. } t \in [0, 1]. \tag{8}
\]
Let us recall some notions on generalized differentiation, which are related to our
problem. The notions and results of generalized differentiation can be found in [26–
28]. Let Z be an Asplund space, ϕ : Z → R̄ be an extended real-valued function and
z̄ ∈ Z be such that ϕ(z̄) is finite. The set
\[
\widehat{\partial}\varphi(\bar{z}) := \Big\{ z^* \in Z^* : \liminf_{z \to \bar{z}} \frac{\varphi(z) - \varphi(\bar{z}) - \langle z^*, z - \bar{z} \rangle}{\|z - \bar{z}\|} \ge 0 \Big\}
\]
is called the Fréchet subdifferential of ϕ at z̄, and the set
\[
\partial\varphi(\bar{z}) := \mathop{\operatorname{Lim\,sup}}_{z \xrightarrow{\varphi} \bar{z}} \widehat{\partial}\varphi(z) \tag{9}
\]
is called the Mordukhovich subdifferential of ϕ at z̄, where the notation $z \xrightarrow{\varphi} \bar{z}$ means $z \to \bar{z}$ and $\varphi(z) \to \varphi(\bar{z})$. Hence, $z^* \in \partial\varphi(\bar{z})$ if and only if there exist sequences $z_k \xrightarrow{\varphi} \bar{z}$ and $z_k^* \in \widehat{\partial}\varphi(z_k)$ such that $z_k^* \xrightarrow{w^*} z^*$. Moreover, we have $\partial\varphi(\bar{z}) \ne \emptyset$ for every locally Lipschitzian function. It is known that the Mordukhovich subdifferential reduces to the classical subdifferential of convex analysis when ϕ is convex.
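The Limsup construction in (9) can be illustrated numerically in one dimension. The following sketch uses the representative choice ϕ(x) = −|x| on R (an illustrative assumption, not a function from this paper), for which the Fréchet subdifferential is known in closed form; collecting Fréchet subgradients along sequences x_k → 0 recovers the Mordukhovich subdifferential {−1, 1}, which is nonempty even though the Fréchet subdifferential at 0 is empty:

```python
def frechet_subdiff(x):
    """Fréchet subdifferential of phi(x) = -|x| on R, known in closed form:
    {-1} for x > 0, {+1} for x < 0, and empty at x = 0 (downward kink)."""
    if x > 0:
        return [-1.0]
    if x < 0:
        return [1.0]
    return []

# Mordukhovich subdifferential at 0 via the Limsup in (9): collect all limits
# of Fréchet subgradients taken along sequences x_k -> 0.
limits = set()
for k in range(1, 30):
    xk = 2.0 ** (-k)
    for s in frechet_subdiff(xk) + frechet_subdiff(-xk):
        limits.add(s)

assert sorted(limits) == [-1.0, 1.0]  # limiting subgradients from both sides
assert frechet_subdiff(0.0) == []     # while the Fréchet subdifferential at 0 is empty
print(sorted(limits))
```

This also illustrates the nonemptiness claim above: −|x| is locally Lipschitzian, and its Mordukhovich subdifferential at 0 is the nonempty set {−1, 1}.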
The set
\[
\partial^{\infty}\varphi(\bar{z}) := \mathop{\operatorname{Lim\,sup}}_{\substack{z \xrightarrow{\varphi} \bar{z} \\ \lambda \downarrow 0}} \lambda \widehat{\partial}\varphi(z)
\]
is called the singular subdifferential of ϕ at z̄. Hence, $z^* \in \partial^{\infty}\varphi(\bar{z})$ if and only if there exist sequences $z_k \xrightarrow{\varphi} \bar{z}$, $\lambda_k \to 0^+$, and $z_k^* \in \lambda_k \widehat{\partial}\varphi(z_k)$ such that $z_k^* \xrightarrow{w^*} z^*$.
Let Ω be a nonempty closed set in Z and z₀ ∈ Ω. The set
\[
\widehat{N}(z_0; \Omega) := \Big\{ z^* \in Z^* : \limsup_{z \xrightarrow{\Omega} z_0} \frac{\langle z^*, z - z_0 \rangle}{\|z - z_0\|} \le 0 \Big\}
\]
is called the Fréchet normal cone to Ω at z₀, and the set
\[
N(z_0; \Omega) := \mathop{\operatorname{Lim\,sup}}_{z \xrightarrow{\Omega} z_0} \widehat{N}(z; \Omega)
\]
is called the Mordukhovich normal cone to Ω at z₀.
Let D be a subset of Z and z̄ ∈ D be such that D is locally closed around z̄. The set D is called sequentially normally compact (SNC) at z̄ if for any sequences $z_k \xrightarrow{D} \bar{z}$ and $z_k^* \in \widehat{N}(z_k; D)$ one has
\[
z_k^* \xrightarrow{w^*} 0 \;\Longrightarrow\; \|z_k^*\| \to 0 \quad \text{as } k \to \infty.
\]
Recall that a mapping φ : Z → V is said to be a local upper Lipschitzian selection of a set-valued map F : Z ⇒ V at (z̄, v̄) ∈ gph F if φ is upper Lipschitzian at z̄, that is, there exist ℓ > 0 and a neighborhood Ω of z̄ such that ‖φ(z) − φ(z̄)‖ ≤ ℓ‖z − z̄‖ for all z ∈ Ω, and φ satisfies φ(z̄) = v̄ and φ(z) ∈ F(z) for all z in a neighborhood of z̄.
In this section, we suppose that X, W and Z are Asplund spaces with the dual spaces
X ∗ , W ∗ and Z ∗ , respectively. Assume that M : Z → X and T : W → X are
continuous linear mappings. Let M ∗ : X ∗ → Z ∗ and T ∗ : X ∗ → W ∗ be adjoint
mappings of M and T , respectively. Let f : W × Z → R̄ be a function. For each
w ∈ W , we put
\[
H(w) := \{ z \in Z : Mz = Tw \}.
\]
Consider the problem of computing the subdifferential and the singular subdifferential of the value function
\[
h(w) := \inf \{ f(w, z) : z \in H(w) \}. \tag{10}
\]
\[
\|T^* x^*\| \ge c\|x^*\|, \quad \forall x^* \in X^*.
\]
Moreover, assume that f is Fréchet differentiable at (w̄, z̄) and the solution map S admits a local upper Lipschitzian selection at (w̄, z̄). Then
\[
\partial h(\bar{w}) = \nabla_w f(\bar{w}, \bar{z}) + T^*\big( (M^*)^{-1}(\nabla_z f(\bar{w}, \bar{z})) \big).
\]
The assumption ∂⁺f(w̄, z̄) ≠ ∅ in Theorem 4.1 is rather strict. It excludes from our consideration convex, Lipschitzian functions of the type
or
where g : Z → R is a given function and W and Z are Asplund spaces with dim Z ≥ 1. Indeed, for the first example, choosing (w̄, z̄) = (0, 0) we have ∂⁺f(w̄, z̄) = ∅. For the second example, we have ∂⁺f(w̄, z̄) = ∅ for any (w̄, z̄) = (0, v) ∈ W × Z.
The above remark can be strengthened as follows: Theorem 4.1 cannot be used for any problem of the form (10) with f being proper, convex, continuous and nondifferentiable at a given point (w̄, z̄) ∈ gph S. Indeed, since f is convex, the Fréchet subdifferential ∂f(w̄, z̄) coincides with the subdifferential in the sense of convex analysis [25, Subsection 4.2.1]. As f is continuous at (w̄, z̄), the set ∂f(w̄, z̄) is nonempty by [25, Proposition 3, p. 199]. Hence, if ∂⁺f(w̄, z̄) ≠ ∅, then f is Fréchet differentiable at (w̄, z̄) by [27, Proposition 1.87]. This contradicts the condition that f is nondifferentiable at (w̄, z̄).
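This phenomenon can be checked numerically in one dimension. The sketch below uses the representative convex, Lipschitzian choice f(x) = |x| on R (an illustrative assumption): it tests the defining inequality of a Fréchet upper subgradient at 0 over a grid of candidate slopes and finds none, while the convex subdifferential there is the nonempty interval [−1, 1]:

```python
import numpy as np

f = abs  # convex, Lipschitzian, nondifferentiable at 0

def is_upper_subgradient(s, radius=1e-6, samples=2001):
    """Numerically test limsup_{x->0} (f(x) - f(0) - s*x)/|x| <= 0,
    the defining inequality for s to be a Fréchet upper subgradient of f at 0."""
    xs = np.linspace(-radius, radius, samples)
    xs = xs[xs != 0.0]
    quotients = (np.abs(xs) - f(0.0) - s * xs) / np.abs(xs)  # = 1 - s*sign(x)
    return quotients.max() <= 1e-9

# For every slope s, one of the one-sided quotients equals 1 + |s| >= 1, so no
# candidate passes: the Fréchet upper subdifferential of |.| at 0 is empty.
upper_empty = not any(is_upper_subgradient(s) for s in np.linspace(-3.0, 3.0, 61))

# By contrast, s is a convex subgradient at 0 iff |x| >= s*x for all x,
# which holds exactly for s in [-1, 1].
grid = np.linspace(-1.0, 1.0, 201)
lower_nonempty = all(np.all(np.abs(grid) >= s * grid) for s in (-1.0, 0.0, 1.0))

assert upper_empty and lower_nonempty
print("Fréchet upper subdifferential of |.| at 0 is empty")
```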
By adding the assumption that the solution map S is h-inner semicontinuous at (w̄, z̄), the following theorem gives an upper estimate for the Mordukhovich subdifferential of the value function h at a given parameter w̄.
\[
\|T^* x^*\| \ge c\|x^*\|, \quad \forall x^* \in X^*.
\]
Moreover, assume that f is strictly differentiable at (w̄, z̄) and the solution map S admits a local upper Lipschitzian selection at (w̄, z̄). Then
\[
\partial h(\bar{w}) = \nabla_w f(\bar{w}, \bar{z}) + T^*\big( (M^*)^{-1}(\nabla_z f(\bar{w}, \bar{z})) \big).
\]
We now derive formulas for computing the subdifferential and the singular subdifferential of the value function of the parametric programming problem (10) for the case where f is assumed to be convex. We are going to show that when the objective function f is convex, the assumptions of Theorem 4.1, namely the nonemptiness of the Fréchet upper subdifferential of the objective function and the existence of a local upper Lipschitzian selection of the solution map, and the assumptions of Theorem 4.2, namely the h-inner semicontinuity of the solution map, the SNEC property of the objective function, and the existence of a local upper Lipschitzian selection of the solution map, are no longer needed and, surprisingly, all the upper estimates become equalities.
Theorem 4.3 Suppose that there exists a constant c > 0 such that
\[
\|T^* x^*\| \ge c\|x^*\|, \quad \forall x^* \in X^*,
\]
For the proof of this theorem, we need the following lemma from [12]. Here, we put Q = gph H.
Lemma 4.1 Suppose that the assumptions of Theorem 4.3 are satisfied. Then for each (w̄, z̄) ∈ Q, one has
\[
N\big((\bar{w}, \bar{z}); Q\big) = \big\{ (-T^* x^*, M^* x^*) : x^* \in X^* \big\}.
\]
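In finite dimensions the conclusion of Lemma 4.1 can be verified by elementary linear algebra. The sketch below uses the concrete matrices M and T appearing in the example of this section (an illustrative finite-dimensional setting): it checks that every vector of the form (−T*x*, M*x*) is orthogonal to the subspace Q = gph H, as the lemma asserts.

```python
import numpy as np

# M : R^3 -> R^2 and T : R^2 -> R^2 from the example in this section
M = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
T = 2.0 * np.eye(2)

# Q = gph H = {(w, z) : M z = T w} is a linear subspace of R^2 x R^3, so
# N((w, z); Q) is its orthogonal complement; Lemma 4.1 says this complement
# equals {(-T* x*, M* x*) : x* in R^2}, the adjoints being the transposes.
rng = np.random.default_rng(0)
for _ in range(100):
    xstar = rng.standard_normal(2)
    normal = np.concatenate([-T.T @ xstar, M.T @ xstar])
    z = rng.standard_normal(3)
    w = np.linalg.solve(T, M @ z)          # pick (w, z) in Q (T is invertible)
    tangent = np.concatenate([w, z])
    assert abs(normal @ tangent) < 1e-9    # normals are orthogonal to Q

# dimension count: dim Q = 3, so the normal space has dimension 5 - 3 = 2,
# matching the rank of x* -> (-T* x*, M* x*) since T* is injective.
print("Lemma 4.1 verified on random samples")
```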
Proof of Theorem 4.3 Take an arbitrary w̄* ∈ ∂h(w̄). Since the optimal value function h is convex, we have
\[
\langle \bar{w}^*, w - \bar{w} \rangle \le h(w) - h(\bar{w}), \quad \forall w \in W.
\]
Therefore,
\[
\langle \bar{w}^*, \theta - \bar{w} \rangle \le f(\theta, z) - f(\bar{w}, \bar{z}) \quad \text{for all } (\theta, z) \in Q.
\]
Hence,
\[
\big\langle (\bar{w}^*, 0), (\theta, z) - (\bar{w}, \bar{z}) \big\rangle \le \big( f + \delta(\cdot; Q) \big)(\theta, z) - \big( f + \delta(\cdot; Q) \big)(\bar{w}, \bar{z}), \quad \forall (\theta, z) \in W \times Z, \tag{13}
\]
where δ(·; Q) is the indicator function of Q, that is, δ((w, z); Q) = 0 if (w, z) ∈ Q and δ((w, z); Q) = +∞ otherwise. From (13), we get
\[
(\bar{w}^*, 0) \in \partial\big( f + \delta(\cdot; Q) \big)(\bar{w}, \bar{z}). \tag{14}
\]
Hence,
\[
\partial h(\bar{w}) \subset \bigcup_{(w_1^*, z_1^*) \in \partial f(\bar{w}, \bar{z})} \big\{ \bar{w}^* \in W^* : (\bar{w}^*, 0) \in (w_1^*, z_1^*) + N\big((\bar{w}, \bar{z}); Q\big) \big\}. \tag{15}
\]
Thus, there exist (w₁*, z₁*) ∈ ∂f(w̄, z̄) and, by Lemma 4.1, x* ∈ X* such that
\[
\bar{w}^* - w_1^* = -T^*(x^*) \quad \text{and} \quad -z_1^* = M^*(x^*).
\]
Consequently,
\[
\bar{w}^* \in w_1^* + T^*\big( (M^*)^{-1}(z_1^*) \big). \tag{16}
\]
Since dom δ(·; Q) = Q, from (ii) it follows that f is continuous at a point in dom δ(·; Q). Therefore, by [25, Theorem 1, p. 200], from (14) we again obtain (15). Thus, there exists (w₁*, z₁*) ∈ ∂f(w̄, z̄) such that (16) is satisfied.
In both cases, since w̄* ∈ ∂h(w̄) was taken arbitrarily, from (16) we deduce that
\[
\partial h(\bar{w}) \subset \bigcup_{(w^*, z^*) \in \partial f(\bar{w}, \bar{z})} \big[ w^* + T^*\big( (M^*)^{-1}(z^*) \big) \big].
\]
To establish the opposite inclusion, we need to prove that for each element (w*, z*) ∈ ∂f(w̄, z̄) the following holds true:
\[
w^* + T^*\big( (M^*)^{-1}(z^*) \big) \subset \partial h(\bar{w}).
\]
Taking any u* ∈ T*((M*)^{-1}(z*)), we will prove that w* + u* ∈ ∂h(w̄). Since u* ∈ T*((M*)^{-1}(z*)), there exists u₁* ∈ (M*)^{-1}(z*) ⊂ X* such that
\[
u^* = T^*(u_1^*) \quad \text{and} \quad M^*(u_1^*) = z^*,
\]
which is equivalent to
\[
u^* = -T^*(-u_1^*) \quad \text{and} \quad M^*(-u_1^*) = -z^*.
\]
By Lemma 4.1, this means that (u*, −z*) ∈ N((w̄, z̄); Q) = ∂δ(·; Q)(w̄, z̄). So,
\[
(w^*, z^*) + (u^*, -z^*) \in \partial f(\bar{w}, \bar{z}) + \partial\delta(\cdot; Q)(\bar{w}, \bar{z}).
\]
Hence,
\[
(u^* + w^*, 0) \in \partial f(\bar{w}, \bar{z}) + \partial\delta(\cdot; Q)(\bar{w}, \bar{z}) \subset \partial\big( f + \delta(\cdot; Q) \big)(\bar{w}, \bar{z}).
\]
Hence,
\[
\big\langle (u^* + w^*, 0), (w, z) - (\bar{w}, \bar{z}) \big\rangle \le f(w, z) - f(\bar{w}, \bar{z}), \quad \forall (w, z) \in Q. \tag{17}
\]
For each fixed element w ∈ dom H, taking the infimum of both sides of (17) over z ∈ H(w) and remembering that h(w̄) = f(w̄, z̄), we obtain
\[
\langle u^* + w^*, w - \bar{w} \rangle \le h(w) - h(\bar{w}).
\]
Since h(w) = +∞ for all w ∉ dom H, from the last property it follows that u* + w* ∈ ∂h(w̄). Hence, we obtain the inclusion (11).
We are going to obtain (12) by a short proof. Observe that w ∈ dom h if and only if h(w) < +∞. Since the last inequality holds if and only if there exists z ∈ H(w) with (w, z) ∈ dom f, we have
\[
\delta(w; \operatorname{dom} h) = \inf \big\{ \delta\big( (w, z); \operatorname{dom} f \big) : z \in H(w) \big\}. \tag{18}
\]
The representation (18) of δ(w; dom h) allows us to derive (12) as a corollary of (11). Indeed, since dom δ(·; dom f) = dom f, if the regularity requirement in (i) is satisfied, then int Q ∩ dom δ(·; dom f) ≠ ∅. Next, if condition (ii) is satisfied, then (w₀, z₀) ∈ int(dom f); so δ(·; dom f) is continuous at (w₀, z₀) ∈ Q. Now, consider the optimization problem (10) with f(w, z) replaced by δ((w, z); dom f). By (18), the corresponding optimal value function h(w) coincides with δ(w; dom h). Therefore, in accordance with (11), we have
\[
\partial\delta(\cdot; \operatorname{dom} h)(\bar{w}) = \bigcup_{(w^*, z^*) \in \partial\delta(\cdot; \operatorname{dom} f)(\bar{w}, \bar{z})} \big[ w^* + T^*\big( (M^*)^{-1}(z^*) \big) \big].
\]
Since
\[
\partial\delta(\cdot; \operatorname{dom} h)(\bar{w}) = N(\bar{w}; \operatorname{dom} h) = \partial^{\infty} h(\bar{w})
\]
and
\[
\partial\delta(\cdot; \operatorname{dom} f)(\bar{w}, \bar{z}) = N\big( (\bar{w}, \bar{z}); \operatorname{dom} f \big) = \partial^{\infty} f(\bar{w}, \bar{z}),
\]
the equality (12) follows.
\[
f(w, z) = |z_1| + \frac{1}{2} w_1^2 + |w_2|
\]
and H(w) = {(z₁, z₂, z₃) ∈ R³ : z₁ + z₂ = 2w₁, z₃ = 2w₂}. Assume that w̄ = (1, 0). Then, by a direct computation,
\[
h(w) = \frac{1}{2} w_1^2 + |w_2|.
\]
So
\[
h(\bar{w}) = \inf_{(z_1, z_2, z_3) \in H(\bar{w})} \Big( |z_1| + \frac{1}{2} \Big),
\]
where H(w̄) = {(z₁, z₂, z₃) ∈ R³ : z₁ + z₂ = 2, z₃ = 0}. It is easy to check that z̄ = (0, 2, 0) is the unique solution of the problem corresponding to w̄. For z̄ := (0, 2, 0) ∈ S(w̄), we have ∂f(w̄, z̄) = {1} × [−1, 1] × [−1, 1] × {0} × {0} and ∂^∞f(w̄, z̄) = N((w̄, z̄); dom f) = {0_{R⁵}} by [4, Proposition 4.2].
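As a numerical sanity check of this computation (not part of the original argument; the brute-force grid below is an illustrative substitute for the analytic minimization):

```python
import numpy as np

# f(w, z) = |z1| + w1^2/2 + |w2|; on H(w), z2 and z3 are determined once z1
# is chosen, and f depends on z only through |z1|, so it suffices to
# minimize over z1.
def h_direct(w1, w2, z1_grid=np.linspace(-10.0, 10.0, 20001)):
    return np.min(np.abs(z1_grid)) + 0.5 * w1 ** 2 + abs(w2)

# the brute-force value agrees with the closed form h(w) = w1^2/2 + |w2|
for w1, w2 in [(1.0, 0.0), (0.5, -2.0), (-3.0, 1.5)]:
    assert abs(h_direct(w1, w2) - (0.5 * w1 ** 2 + abs(w2))) < 1e-12

# at wbar = (1, 0) the infimum 1/2 is attained only at z1 = 0, i.e. at
# zbar = (0, 2, 0), since z1 + z2 = 2 and z3 = 0 there
print("h(wbar) =", h_direct(1.0, 0.0))  # prints h(wbar) = 0.5
```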
It is easy to see that f(w, z) is continuous at (w̄, z̄) ∈ Q. Let M : R³ → R² and T : R² → R² be the mappings defined, respectively, by
\[
Mz = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} \quad \text{and} \quad Tw = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \end{bmatrix}.
\]
Note that
\[
M^* = \begin{bmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} \quad \text{and} \quad T^* = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}.
\]
So,
\[
\|T^* x^*\| \ge c\|x^*\|, \quad \forall x^* \in X^*,
\]
for example with c = 2.
By using similar arguments, the right-hand side of (12) can be computed as follows:
\[
\big[ (0, 0) + T^*\big( (M^*)^{-1}(0, 0, 0) \big) \big] = \{ 0_{\mathbb{R}^2} \}.
\]
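A small computation also confirms formula (11) at w̄ for this example (a sketch using the data above; the least-squares solve is only a device to decide whether M*x* = z* is solvable):

```python
import numpy as np

Mstar = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # M^T
Tstar = 2.0 * np.eye(2)                                   # T^T

# ∂f(wbar, zbar) = {1} x [-1,1] x [-1,1] x {0} x {0} in the order
# (w1, w2, z1, z2, z3), so z* = (t, 0, 0) with t in [-1, 1].  The system
# M* x* = z* reads (x1, x1, x2) = (t, 0, 0), which forces t = 0 and x* = 0.
for t in np.linspace(-1.0, 1.0, 21):
    zstar = np.array([t, 0.0, 0.0])
    x, *_ = np.linalg.lstsq(Mstar, zstar, rcond=None)
    solvable = np.allclose(Mstar @ x, zstar)
    assert solvable == (abs(t) < 1e-12)

# Hence only z* = 0 contributes, T*((M*)^{-1}(0)) = {0}, and (11) yields
# ∂h(wbar) = {(1, s) : s in [-1, 1]}, which is exactly the convex-analysis
# subdifferential of h(w) = w1^2/2 + |w2| at (1, 0).
print("formula (11) consistent with ∂h(1,0) = {1} x [-1,1]")
```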
To prove Theorem 2.3, we first formulate problem (1)–(3) in a form to which Theorem 4.3 can be applied. We now consider the following linear mappings:
$\mathcal{A} : X \to X$ defined by
\[
\mathcal{A}x := x - \int_0^{(\cdot)} A(\tau)\, x(\tau)\, d\tau,
\]
$\mathcal{B} : U \to X$ defined by
\[
\mathcal{B}u := -\int_0^{(\cdot)} B(\tau)\, u(\tau)\, d\tau,
\]
$\mathcal{M} : X \times U \to X$ defined by
\[
\mathcal{M}(x, u) := \mathcal{A}x + \mathcal{B}u \tag{19}
\]
and $\mathcal{T} : W \to X$ defined by
\[
\mathcal{T}(\alpha, \theta) := \alpha + \int_0^{(\cdot)} T(\tau)\, \theta(\tau)\, d\tau. \tag{20}
\]
Under the hypotheses (H2) and (H3), (4) can be written in the form
\[
\begin{aligned}
G(w) &= \Big\{ (x, u) \in X \times U : x = \alpha + \int_0^{(\cdot)} A x\, d\tau + \int_0^{(\cdot)} B u\, d\tau + \int_0^{(\cdot)} T \theta\, d\tau \Big\} \\
&= \Big\{ (x, u) \in X \times U : x - \int_0^{(\cdot)} A x\, d\tau - \int_0^{(\cdot)} B u\, d\tau = \alpha + \int_0^{(\cdot)} T \theta\, d\tau \Big\}.
\end{aligned}
\]
Recall that, for 1 < p < ∞, we have L^p([0, 1], R^n)* = L^q([0, 1], R^n), where 1 < q < +∞ and 1/p + 1/q = 1.
for all (a, u) ∈ R^n × L^q([0, 1], R^n) and x ∈ W^{1,p}([0, 1], R^n) (see [25, p. 21]). In the case p = 2, W^{1,2}([0, 1], R^n) becomes a Hilbert space with the inner product given by
\[
\langle x, y \rangle = \langle x(0), y(0) \rangle + \int_0^1 \langle \dot{x}(t), \dot{y}(t) \rangle\, dt,
\]
Lemma 5.1 ([21, Lemma 2.3]) Suppose that $\mathcal{M}^*$ and $\mathcal{T}^*$ are the adjoint mappings of $\mathcal{M}$ and $\mathcal{T}$, respectively. Then the following assertions are valid:
(a) The mappings $\mathcal{M}$ and $\mathcal{T}$ are continuous.
(b) $\mathcal{T}^*(a, u) = (a, T^T u)$ for all (a, u) ∈ R^n × L^q([0, 1], R^n).
(c) $\mathcal{M}^*(a, u) = \big( \mathcal{A}^*(a, u), \mathcal{B}^*(a, u) \big)$, where $\mathcal{B}^*(a, u) = -B^T u$ and
\[
\mathcal{A}^*(a, u) = \Big( a - \int_0^1 A^T(t)\, u(t)\, dt,\ u + \int_0^{(\cdot)} A^T(\tau)\, u(\tau)\, d\tau - \int_0^1 A^T(t)\, u(t)\, dt \Big),
\]
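The formula for $\mathcal{A}^*$ can be tested numerically by discretizing the duality pairing ⟨(a, u), x⟩ = ⟨a, x(0)⟩ + ∫₀¹ ⟨u(t), ẋ(t)⟩ dt recalled above. The sketch below (with a hypothetical coefficient matrix A(t) and test functions chosen only for illustration) checks the adjoint identity ⟨(a, u), 𝒜x⟩ = ⟨𝒜*(a, u), x⟩ by trapezoidal quadrature:

```python
import numpy as np

N = 20001
t = np.linspace(0.0, 1.0, N)
dt = t[1] - t[0]

# hypothetical smooth coefficient matrix A(t) and test data (a, u), x
A_t = np.stack([np.array([[1.0 + s, 0.5], [0.0, np.cos(s)]]) for s in t])
x = np.stack([np.sin(t), t ** 2], axis=1)
xdot = np.stack([np.cos(t), 2 * t], axis=1)
u = np.stack([t, np.exp(-t)], axis=1)
a = np.array([0.3, -1.2])

def trap(f):      # \int_0^1 f dt, trapezoid rule along axis 0
    return np.sum(0.5 * (f[1:] + f[:-1]) * dt, axis=0)

def cumtrap(f):   # cumulative \int_0^t f dtau
    out = np.zeros_like(f)
    out[1:] = np.cumsum(0.5 * (f[1:] + f[:-1]) * dt, axis=0)
    return out

# (Ax)(t) = x(t) - \int_0^t A x dtau, so (Ax)(0) = x(0), (Ax)'(t) = x'(t) - A(t)x(t)
Ax = np.einsum('tij,tj->ti', A_t, x)
lhs = a @ x[0] + trap(np.einsum('ti,ti->t', u, xdot - Ax))

# A*(a, u) = (a - \int_0^1 A^T u dt, u + \int_0^. A^T u dtau - \int_0^1 A^T u dt)
ATu = np.einsum('tji,tj->ti', A_t, u)
astar = a - trap(ATu)
ustar = u + cumtrap(ATu) - trap(ATu)
rhs = astar @ x[0] + trap(np.einsum('ti,ti->t', ustar, xdot))

assert abs(lhs - rhs) < 1e-6
print("adjoint identity verified")
```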
Recall that Z = X × U and
\[
G(w) = \big\{ (x, u) \in X \times U : \mathcal{M}(x, u) = \mathcal{T}(w) \big\}.
\]
Lemma 5.2 ([21, Lemma 3.1]) Suppose that assumptions (H1), (H2) and (H3) are valid. Then the following assertions are fulfilled:
(a) There exists a constant c > 0 such that $\|\mathcal{T}^* x^*\| \ge c\|x^*\|$ for all $x^* \in X^*$.
(b) The functional J is Fréchet differentiable at (z̄, w̄) and ∇J(z̄, w̄) is given by
\[
\nabla_w J(\bar{z}, \bar{w}) = \big( 0,\ L_\theta(t, \bar{x}(t), \bar{u}(t), \bar{\theta}(t)) \big),
\]
\[
\nabla_z J(\bar{z}, \bar{w}) = \big( J_x(\bar{x}, \bar{u}, \bar{\theta}),\ J_u(\bar{x}, \bar{u}, \bar{\theta}) \big)
\]
with
\[
J_x(\bar{x}, \bar{u}, \bar{\theta}) = \Big( g'(\bar{x}(1)) + \int_0^1 L_x\big(t, \bar{x}(t), \bar{u}(t), \bar{\theta}(t)\big)\, dt,\ g'(\bar{x}(1)) + \int_{(\cdot)}^1 L_x\big(t, \bar{x}(t), \bar{u}(t), \bar{\theta}(t)\big)\, dt \Big).
\]
which is equivalent to
\[
\big( \alpha^*,\ \theta^* - J_\theta(\bar{z}, \bar{w}) \big) \in \mathcal{T}^*\big( (\mathcal{M}^*)^{-1}(\nabla_z J(\bar{z}, \bar{w})) \big).
\]
Example 5.1 We will illustrate the obtained result by a concrete problem. Put
By a direct computation, we see that the pair (x̄, ū) satisfies (25). Besides,
\[
J_0(\bar{x}, \bar{u}) = -\frac{e^2}{8} - e + \frac{9}{8}.
\]
Note that (x̄, ū) is a solution of the problem. In fact, for all (x, u) satisfying (25) we have
\[
\begin{aligned}
J_0(x, u) &= -x_2(1) + \int_0^1 \big( u_1^2 + u_2^2 + 1 \big)\, dt \\
&\ge -x_2(1) + \int_0^1 \big( 1 + u_2^2 \big)\, dt \\
&= -x_2(1) + \int_0^1 \big( 1 + (\dot{x}_2 - x_2)^2 \big)\, dt \\
&= \int_0^1 \big( (\dot{x}_2 - x_2)^2 - \dot{x}_2 \big)\, dt.
\end{aligned} \tag{26}
\]
By solving the Euler equation, and noting that J is a convex function, we obtain that x₂(t) = ce^t + (1 − c)e^{−t} is a solution of (27), where c is determined by c = (ae − 1)/(e² − 1) with a = x₂(1). Hence
\[
J(x_2) \ge J(\widehat{x}_2) = 1 + 2\,\frac{e^2 - 1}{e^2}\,(c - 1)^2 + \frac{1}{e}(c - 1) - e(c - 1) - e. \tag{28}
\]
Consequently, with r := c − 1,
\[
J_0(x, u) \ge 1 + 2\,\frac{e^2 - 1}{e^2}\, r^2 + \frac{1}{e}\, r - e r - e \ge -\frac{e^2}{8} - e + \frac{9}{8} = J_0(\bar{x}, \bar{u}).
\]
Hence, (x̄, ū) is a solution of the problem corresponding to (ᾱ, θ̄ ). Assertion (i) is
proved.
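The final bound amounts to minimizing a parabola in r. A quick numerical check (assuming r ranges over all of R, as the estimate requires) confirms both the minimizer and the stated optimal value:

```python
import math

e = math.e
phi = lambda r: 1 + 2 * (e ** 2 - 1) / e ** 2 * r ** 2 + r / e - e * r - e

# vertex of the parabola: phi'(r) = 4 (e^2 - 1)/e^2 * r + 1/e - e = 0
r_min = (e - 1 / e) * e ** 2 / (4 * (e ** 2 - 1))
assert abs(r_min - e / 4) < 1e-12                         # simplifies to e/4
assert abs(phi(r_min) - (-e ** 2 / 8 - e + 9 / 8)) < 1e-12

# and the critical point is indeed a minimum: nearby values are larger
assert phi(r_min - 0.1) > phi(r_min) < phi(r_min + 0.1)
print("min phi =", phi(r_min))
```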
We now prove (ii). From (24), we have
\[
A = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad T = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.
\]
It is easy to see that conditions (H2) and (H3) are fulfilled. Moreover, we have
\[
|L(t, x, u, \theta)| \le |u|^2 + |\theta|^2 + 1, \quad |L_u(t, x, u, \theta)| \le 2|u|, \quad |L_\theta(t, x, u, \theta)| \le 2|\theta|
\]
for all (x, u, θ) ∈ R² × R² × R² and a.e. t ∈ [0, 1]. Thus, condition (H1) is also valid. Hence, all conditions of Theorem 2.3 are fulfilled. By Theorem 2.3, (α*, θ*) ∈ ∂V(ᾱ, θ̄) if and only if there exists y = (y₁, y₂) ∈ W^{1,2}([0, 1], R²) such that
\[
\begin{cases} \dot{y} + A^T y = \nabla_x L(t, \bar{x}, \bar{u}, \bar{\theta}), \\ y(1) = -g'(\bar{x}(1)) \end{cases}
\iff
\begin{cases} \dot{y}_1 = -2 y_1, \\ \dot{y}_2 = -y_2, \\ y_1(1) = 0, \\ y_2(1) = 1. \end{cases}
\]
This implies that (y₁, y₂) = (ȳ₁, ȳ₂) = (0, e^{1−t}) =: ȳ. By (8), it follows that θ*(t) = (0, −e^{1−t}). On the other hand, from (6) we get
\[
\alpha^* = g'(\bar{x}(1)) + \int_0^1 L_x\big(t, \bar{x}(t), \bar{u}(t), \bar{\theta}(t)\big)\, dt - \int_0^1 A^T(t)\, y(t)\, dt. \tag{29}
\]
6 Conclusions
We studied the first-order behavior of the value function of a parametric convex optimal control problem with a convex cost function and linear state equations. To obtain this result, we first established an abstract result on the subdifferential of the value function of a parametric convex mathematical programming problem, and then derived formulas for computing the subdifferential and the singular subdifferential of the value function of a parametric convex optimal control problem. The main result of this paper is illustrated by an example.
References
1. Aubin, J.-P., Ekeland, I.: Applied Nonlinear Analysis. Wiley, New York (1984)
2. Ioffe, A.D.: Euler–Lagrange and Hamiltonian formalisms in dynamic optimization. Trans. Am. Math.
Soc. 349, 2871–2900 (1997)
3. Vinter, R.B., Zheng, H.: Necessary conditions for optimal control problems with state constraints.
Trans. Am. Math. Soc. 350, 1181–1204 (1998)
4. An, D.T.V., Yen, N.D.: Differential stability of convex optimization problems under inclusion con-
straints. Appl. Anal. 94, 108–128 (2015)
5. Clarke, F.H.: Optimization and Nonsmooth Analysis. SIAM, Philadelphia (1990)
6. Clarke, F.H.: Methods of Dynamic and Nonsmooth Optimization. SIAM, Philadelphia (1989)
7. Mordukhovich, B.S., Nam, N.M.: Variational stability and marginal functions via generalized differ-
entiation. Math. Oper. Res. 30, 800–816 (2005)
8. Mordukhovich, B.S., Nam, N.M., Yen, N.D.: Fréchet subdifferential calculus and optimality conditions
in nondifferentiable programming. Optimization 55, 685–708 (2006)
9. Mordukhovich, B.S., Nam, N.M., Yen, N.D.: Subgradients of marginal functions in parametric math-
ematical programming. Math. Program. 116, 369–396 (2009)
10. Penot, J.-P.: Differentiability properties of optimal value functions. Can. J. Math. 56, 825–842 (2004)
11. Cernea, A., Frankowska, H.: A connection between the maximum principle and dynamic programming
for constrained control problems. SIAM J. Control Optim. 44, 673–703 (2005)
12. Chieu, N.H., Kien, B.T., Toan, N.T.: Further results on subgradients of the value function to a parametric
optimal control problem. J. Optim. Theory Appl. 168, 785–801 (2016)
13. Chieu, N.H., Yao, J.-C.: Subgradients of the optimal value function in a parametric discrete optimal
control problem. J. Ind. Manag. Optim. 6, 401–410 (2010)
14. Kien, B.T., Liou, Y.C., Wong, N.-C., Yao, J.-C.: Subgradients of value functions in parametric dynamic
programming. Eur. J. Oper. Res. 193, 12–22 (2009)
15. Moussaoui, M., Seeger, A.: Sensitivity analysis of optimal value functions of convex parametric pro-
grams with possibly empty solution sets. SIAM J. Optim. 4, 659–675 (1994)
16. Moussaoui, M., Seeger, A.: Epsilon-maximum principle of Pontryagin type and perturbation analysis
of convex optimal control problems. SIAM J. Control Optim. 34, 407–427 (1996)
17. Rockafellar, R.T., Wolenski, P.R.: Convexity in Hamilton–Jacobi theory I: dynamics and duality. SIAM
J. Control Optim. 39, 1323–1350 (2000)
18. Rockafellar, R.T., Wolenski, P.R.: Convexity in Hamilton–Jacobi theory II: envelope representation.
SIAM J. Control Optim. 39, 1351–1372 (2000)
19. Rockafellar, R.T.: Hamilton–Jacobi theory and parametric analysis in fully convex problems of optimal
control. J. Glob. Optim. 28, 419–431 (2004)
20. Seeger, A.: Subgradient of optimal-value function in dynamic programming: the case of convex system
without optimal paths. Math. Oper. Res. 21, 555–575 (1996)
21. Toan, N.T., Kien, B.T.: Subgradients of the value function to a parametric optimal control problem.
Set-Valued Var. Anal. 18, 183–203 (2010)
22. Toan, N.T.: Mordukhovich subgradients of the value function to a parametric optimal control problem.
Taiwan. J. Math. 19, 1051–1072 (2015)
23. Toan, N.T., Yao, J.-C.: Mordukhovich subgradients of the value function to a parametric discrete
optimal control problem. J. Glob. Optim. 58, 595–612 (2014)
24. Vinter, R.B.: Optimal Control. Birkhäuser, Boston (2000)
25. Ioffe, A.D., Tihomirov, V.M.: Theory of Extremal Problems. North-Holland, Amsterdam (1979)
26. Borwein, J.M., Zhu, Q.J.: Techniques of Variational Analysis. Springer, New York (2005)
27. Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation I: Basic Theory. Springer,
Berlin (2006)
28. Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation II: Applications. Springer,
Berlin (2006)