Ionuţ Munteanu
Boundary
Stabilization
of Parabolic
Equations
Progress in Nonlinear Differential Equations
and Their Applications
Volume 93
Editors
Jean-Michel Coron, Université Pierre et Marie Curie, Paris, France
Editorial Board
Viorel Barbu, Facultatea de Matematică, Universitatea “Alexandru Ioan Cuza” din,
Iaşi, Romania
Piermarco Cannarsa, Department of Mathematics, University of Rome “Tor
Vergata”, Roma, Italy
Karl Kunisch, Institute of Mathematics and Scientific Computing, University of
Graz, Graz, Austria
Gilles Lebeau, Laboratoire J.A. Dieudonné, Université de Nice Sophia-Antipolis,
Nice, France
Tatsien Li, School of Mathematical Sciences, Fudan University, Shanghai, China
Shige Peng, Institute of Mathematics, Shandong University, Jinan, China
Eduardo Sontag, Department of Electrical and Computer Engineering, Northeastern
University, Boston, MA, USA
Enrique Zuazua, Departamento de Matemáticas, Universidad Autónoma de Madrid,
Madrid, Spain
Boundary Stabilization
of Parabolic Equations
Ionuţ Munteanu
Faculty of Mathematics
Alexandru Ioan Cuza University
Iaşi, Romania
Mathematics Subject Classification (2010): 35K05, 93D15, 93B52, 93C20, 47F05, 60H15, 35R09,
35Q30, 35Q92
This book is published under the imprint Birkhäuser, www.birkhauser-science.com by the registered
company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
To my beloved daughter, Anastasia
Preface
We mention that in the literature, there are also other notable results concerning
the boundary stabilization of parabolic equations, and though we mention some
basic references and offer a brief presentation of other significant works in the field,
we have not presented them in detail. We confine ourselves to the proportional-type
feedback design only, which is based on the spectral decomposition of the linearized
system into its stable and unstable parts, thereby omitting other important results in
the literature. This book was written with the goal of presenting in detail new results
related to an algorithm for the design of proportional-type feedback forms, which
enabled us to obtain some of the first results in areas such as boundary stabilization
of the Cahn–Hilliard system, stabilization of trajectories for the semilinear heat
equation, and even for stochastic partial differential equations. These ideas are still
being developed, and one might expect further spectacular achievements in the future.
Besides stabilization, the robustness of the stabilizing feedback under stochastic
perturbations is also discussed. The form of the feedback is based on the eigen-
functions of the linear operator, and we have tried to use a minimal set of them.
The reader is assumed to have a basic knowledge of linear functional analysis,
linear algebra, probability theory, and the general theory of elliptic, parabolic, and
stochastic equations. Most of this is reviewed in Chap. 1. The material included in
this book (excepting the comments on the references) represents the original con-
tribution of the author and his coworkers.
The author is indebted to Prof. Viorel Barbu for suggesting to us, five years ago,
that we develop some of his own earlier ideas on constructing proportional-type
feedback forms, which led to the conception of this entire book. We are indebted to
him as well for encouraging us to write this book and for useful discussions,
pertinent observations and suggestions, and unstinting support and guidance in the
writing of this book. Many thanks go to Hanbing Liu, and special thanks to my
parents for their love and support. Also, the author is indebted to Mrs. Elena
Mocanu, from the Institute of Mathematics Iaşi, who assisted in the typesetting of
this text.
Contents

1 Preliminaries . . . 1
1.1 Notation and Theoretical Results . . . 1
2 Stabilization of Abstract Parabolic Equations . . . 19
2.1 Presentation of the Abstract Model . . . 19
2.2 The Design of the Boundary Stabilizer . . . 25
2.2.1 The Case of Mutually Distinct Unstable Eigenvalues . . . 26
2.2.2 The Semisimple Eigenvalues Case . . . 37
2.3 A Numerical Example . . . 41
2.4 Comments . . . 45
3 Stabilization of Periodic Flows in a Channel . . . 49
3.1 Presentation of the Problem . . . 49
3.2 The Stabilization Result . . . 51
3.2.1 The Feedback Law and the Stability of the System . . . 63
3.3 Design of a Riccati-Based Feedback . . . 71
3.4 Comments . . . 75
4 Stabilization of the Magnetohydrodynamics Equations in a Channel . . . 77
4.1 The Magnetohydrodynamics Equations of an Incompressible Fluid . . . 77
4.2 The Stabilizing Proportional Feedback . . . 86
4.3 Comments . . . 91
5 Stabilization of the Cahn–Hilliard System . . . 93
5.1 Presentation of the Problem . . . 93
5.1.1 Stabilization of the Linearized System . . . 96
5.2 Comments . . . 106
For easy reference, we collect here some standard notation and results in functional
analysis and partial differential equations that will be used throughout this work.
Functional Spaces
Here O will stand for a bounded domain in R^d, d ∈ N \ {0}, i.e., a nontrivial connected open subset of R^d such that

O ⊂ { x ∈ R^d : x_1² + · · · + x_d² ≤ R },

for some R > 0. We denote by Ō its closure and by ∂O its boundary. We always
assume that the boundary of O is piecewise smooth (e.g., the boundary of a polygon
or a sphere). In many cases, the boundary of O will be split into two parts as ∂O =
Γ_1 ∪ Γ_2, where Γ_1 has nonzero surface measure.
Consider H to be a Hilbert space over C, with the inner product ⟨·, ·⟩. Then the
system {x_1, x_2, . . . , x_N} ⊂ H is linearly independent in H if and only if the Gram matrix

G := ( ⟨x_i, x_j⟩ )_{i,j=1}^N

is nonsingular. Here G^T denotes the transpose of the matrix G, ⟨·, ·⟩_N denotes the scalar product
in C^N, and z̄ denotes the complex conjugate of z ∈ C (for more details on Gramians,
see, e.g., [69]).
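This criterion is easy to test numerically. In the sketch below, the vectors x1, x2, x3 and the helper gram are illustrative choices (not taken from [69]); linear independence is read off from the (non)vanishing of det G.

```python
import numpy as np

# Gram-matrix test for linear independence (illustrative vectors):
# {x_1, ..., x_N} is linearly independent iff G = (<x_i, x_j>) is nonsingular.
x1 = np.array([1.0, 1.0j, 0.0])
x2 = np.array([0.0, 1.0, 1.0])
x3 = x1 + 2 * x2                      # linearly dependent on x1, x2

def gram(vectors):
    # entry (i, j) is the Hermitian inner product <x_i, x_j> on C^d
    return np.array([[np.vdot(xj, xi) for xj in vectors] for xi in vectors])

G_indep = gram([x1, x2])
G_dep = gram([x1, x2, x3])

print(abs(np.linalg.det(G_indep)) > 1e-12)   # True: {x1, x2} independent
print(abs(np.linalg.det(G_dep)) > 1e-12)     # False: {x1, x2, x3} dependent
```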
Now consider a normed vector space (X, ‖·‖) over R. A sequence (u_n)_n ⊂ X is
called a Cauchy sequence if for all ε > 0, there exists N_ε ∈ N such that

‖u_n − u_m‖ < ε, ∀n, m ≥ N_ε.
Together, the triple (X, F, μ) forms a measure space. A set F ∈ F is called a null set if
μ(F) = 0, and a measure space is said to be complete if all subsets of null sets belong
to F . (Every measure space (X , F , μ) can be extended to a complete measure space
by including all subsets of null sets). The Borel σ -algebra of a topological space Y ,
denoted by B(Y ), is the smallest σ -algebra containing all open subsets of Y . The
usual notion of volume of subsets of Rd gives rise to Lebesgue measure, thereby
introducing the notion of Lebesgue measure space.
A function u : X → Y is F-measurable if the pullback u^{−1}(G) is in F for
every G ∈ B(Y). A function s : X → Y is simple if there exist s_j ∈ Y and F_j ∈ F
for j = 1, 2, . . . , N with μ(F_j) < ∞ such that

s(x) = Σ_{j=1}^N s_j 1_{F_j}(x), x ∈ X.
∫_X ‖u_n(x) − u_m(x)‖_Y dμ(x) < ε.
For 1 ≤ p < ∞, the L^p-norm is defined as

‖f‖_{L^p(O)} := ( ∫_O |f(x)|^p dx )^{1/p}.
One can easily see that Lq (O) ⊂ Lp (O) for 1 ≤ p ≤ q. For the particular case p = 2,
L²(O) is a Hilbert space, with the inner product

⟨f, g⟩_{L²(O)} = ∫_O f(x) g(x) dx.
Next, we set D(O) = C_0^∞(O) for the space of infinitely differentiable functions
with compact support in O, and denote by D′(O) its dual, known as the
space of distributions on O. Based on this, one can introduce the Sobolev spaces
W^{1,p}(O), 1 ≤ p ≤ ∞, defined as

W^{1,p}(O) := { u ∈ L^p(O) : ∂u/∂x_i ∈ L^p(O), i = 1, 2, . . . , d }.
Here the partial derivatives, like ∂/∂x_i above, are taken in the sense of distributions; i.e.,
given a multi-index α ∈ N^d of order |α| = α_1 + α_2 + · · · + α_d, we use the notation

D^α u := ∂^{|α|} u / ( ∂x_1^{α_1} · · · ∂x_d^{α_d} )

and define

∫_O (D^α u) φ dx = (−1)^{|α|} ∫_O u D^α φ dx, ∀φ ∈ D(O).
Further, we set W_0^{1,p}(O) for the closure of C_0^∞(O) in W^{1,p}(O), i.e.,

W_0^{1,p}(O) := { u ∈ W^{1,p}(O) : ∃(u_n)_{n∈N} ⊂ C_0^∞(O) such that u_n → u in W^{1,p}(O) }.

In other words, W_0^{1,p}(O) consists of functions from W^{1,p}(O) that can be approximated by smooth functions with compact support. For the particular case p = 2, we
define W^{1,2}(O) = H¹(O) and W_0^{1,2}(O) = H_0^1(O).
The dual of the space W_0^{1,p}(O) is denoted by W^{−1,p′}(O), where p′ = p/(p − 1). In particular, W^{−1,2}(O) = H^{−1}(O).
The connection between the Lebesgue and Sobolev spaces is given by the Sobolev
embeddings. Namely, assume u ∈ W^{1,p}(O) with p < d. Then for q = dp/(d − p) we have W^{1,p}(O) ⊂
L^q(O); i.e., the identity map from W^{1,p}(O) to L^q(O) is bounded.
Proposition 1.1 (Poincaré's inequality) There exists a constant C_O (depending only
on the domain O) such that for all u ∈ H_0^1(O), we have

‖u‖_{L²(O)} ≤ C_O ‖∇u‖_{L²(O)}.
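On the interval O = (0, 1), the best constant is known to be C_O = 1/π, attained by sin(πx). A minimal numerical sketch (the second sample function x(1 − x) is an arbitrary illustrative choice):

```python
import numpy as np

# Poincare's inequality on O = (0,1): ||u||_{L2} <= C_O ||u'||_{L2},
# with best constant C_O = 1/pi, attained by u(x) = sin(pi x).
n = 4000
x = np.linspace(0.0, 1.0, n + 1)
dx = x[1] - x[0]
l2 = lambda f: np.sqrt(np.sum(f ** 2) * dx)   # Riemann-sum L^2 norm

u1 = np.sin(np.pi * x)        # extremal function
u2 = x * (1 - x)              # an arbitrary function in H_0^1(0,1)
r1 = l2(u1) / l2(np.gradient(u1, x))
r2 = l2(u2) / l2(np.gradient(u2, x))

print(abs(r1 - 1 / np.pi) < 1e-3)   # True: the equality case
print(r2 < 1 / np.pi)               # True: strict inequality
```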
We also recall Hölder's inequality: for 1 ≤ p, q ≤ ∞ with 1/p + 1/q = 1,

‖uv‖_{L¹(O)} ≤ ‖u‖_{L^p(O)} ‖v‖_{L^q(O)}.
The space C([0, T]; X) of continuous X-valued functions is endowed with the norm

‖u‖_{C([0,T];X)} := sup_{t∈[0,T]} ‖u(t)‖_X, ∀u ∈ C([0, T]; X).

Also, L^p(0, T; X) stands for the space of X-valued Bochner integrable L^p functions
on (0, T), with the norm

‖u‖_{L^p(0,T;X)} := ( ∫_0^T ‖u(t)‖_X^p dt )^{1/p}.

Time derivatives (d/dt)u are taken in the sense of X-valued vectorial distributions on (0, T).
Fourier Series
A sufficiently regular P-periodic function f can be represented as the Fourier series f(x) = Σ_{k∈Z} f_k e^{i2πkx/P}, where

f_k := (1/P) ∫_{x_0}^{x_0+P} f(x) e^{−i2πkx/P} dx, k ∈ Z.
The coefficients f_k are called the Fourier modes of the function f. In particular, if
x_0 = 0 and P = 2π, we get that f can be represented as

f(x) = Σ_{k∈Z} f_k e^{ikx}, x ∈ R, (1.1)

with f_k := (1/2π) ∫_0^{2π} f(x) e^{−ikx} dx, k ∈ Z.
One can immediately deduce that f is real-valued if and only if f_k = f̄_{−k}, ∀k ∈ Z,
that is, if f_k and f_{−k} are complex conjugates.
Introduce the space L²_per(0, 2π), consisting of all locally square-integrable functions on R that are 2π-periodic. The norm in L²_per(0, 2π) satisfies

‖f‖²_{L²_per(0,2π)} = 2π Σ_{k∈Z} |f_k|²,

where the f_k are the corresponding Fourier modes introduced above (this relation is also known as
Parseval's identity).
Next, define

H¹_per(0, 2π) := { f ∈ H¹(0, 2π) : f(0) = f(2π) }.
We can also define the Fourier series for functions of two variables x and y in the
square [0, 2π] × [0, 2π]:

f(x, y) = Σ_{k,l∈Z} f_{kl} e^{ikx} e^{ily},

where f_{kl} := (1/4π²) ∫_0^{2π} ∫_0^{2π} f(x, y) e^{−ikx} e^{−ily} dx dy.
Linear Operators
If X, Y are Banach spaces, then L(X, Y) is the space of all linear continuous operators
from X to Y, with the operator norm

‖A‖_{L(X,Y)} := sup { ‖Ax‖_Y / ‖x‖_X : x ∈ X, x ≠ 0 }.
Ker(λI − A) := { x ∈ X : Ax = λx }

is the eigenspace corresponding to the eigenvalue λ, and its dimension is called the geometric multiplicity of λ. The dimension of the space of generalized eigenvectors is called the algebraic multiplicity of the eigenvalue λ. An eigenvalue λ of the operator A is called semisimple
if its algebraic multiplicity coincides with its geometric multiplicity.
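The distinction is already visible for 2 × 2 matrices. In the illustrative sketch below, λ = 2 has algebraic multiplicity 2 in both matrices, but only the diagonal one is semisimple:

```python
import numpy as np

# Semisimple vs. non-semisimple: compare geometric multiplicities of lambda = 2.
semisimple = np.array([[2.0, 0.0],
                       [0.0, 2.0]])   # diagonal: semisimple
jordan = np.array([[2.0, 1.0],
                   [0.0, 2.0]])       # Jordan block: not semisimple

def geometric_multiplicity(A, lam):
    # dim Ker(lam*I - A) = n - rank(lam*I - A)
    n = A.shape[0]
    return n - np.linalg.matrix_rank(lam * np.eye(n) - A)

print(geometric_multiplicity(semisimple, 2.0))   # 2: equals algebraic multiplicity
print(geometric_multiplicity(jordan, 2.0))       # 1: strictly smaller
```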
We say that the linear operator A : D(A) ⊂ X → Y is closed if its graph is closed,
that is, if xn → x in X and yn = Axn → y in Y implies y = Ax. The operator A is
said to be densely defined if its domain D(A) is dense in X .
An operator is called compact if it maps bounded sets into relatively compact sets.
Concerning the eigenvalues of an operator, we have the following result, known as
the Riesz–Schauder–Fredholm theorem (see [128], p. 283).
Theorem 1.3 Let A be a closed and densely defined operator in X with compact
resolvent (λI − A)^{−1} for some λ ∈ ρ(A). Then the spectrum σ(A) consists of isolated eigenvalues (λ_j)_{j∈N}, each of finite (algebraic) multiplicity.
If X is a Banach space, we denote by X′ its dual space, endowed with the dual
norm

‖x*‖_{X′} := sup { |_X(x, x*)_{X′}| : ‖x‖_X = 1 },

where _X(·, ·)_{X′} denotes the duality pairing between X and X′. The space (D(A*))′ is endowed with the norm

‖x‖_{(D(A*))′} = ‖(λ_0 I − A)^{−1} x‖_X, ∀x ∈ X.
By the closed graph theorem, one has that Ã ∈ L(X, (D(A*))′). Moreover, one has
that the spectrum and the eigenvalues of Ã coincide with those of A.
A linear operator L on H is said to be nonnegative if

⟨u, Lu⟩ ≥ 0, ∀u ∈ H.

Analogously, a kernel k is positive semidefinite if for all points x_1, . . . , x_N and scalars a_1, . . . , a_N,

Σ_{i,j=1}^N a_i a_j k(x_i, x_j) ≥ 0.
and let D(A^α) be the set of all u = Σ_{j=1}^∞ u_j ϕ_j such that A^α u ∈ H. The domain D(A^α)
is a Hilbert space with the inner product

⟨u, v⟩_α := ⟨A^α u, A^α v⟩.
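In finite dimensions, A^α can be computed directly from the spectral decomposition; the positive definite matrix below is a randomly generated illustration.

```python
import numpy as np

# Fractional power via the spectral decomposition: for self-adjoint positive A
# with A phi_j = lambda_j phi_j, A^alpha u = sum_j lambda_j^alpha <u, phi_j> phi_j.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)              # symmetric positive definite

lam, Phi = np.linalg.eigh(A)             # eigenvalues, orthonormal eigenvectors

def frac_power(eigs, Phi, alpha):
    return Phi @ np.diag(eigs ** alpha) @ Phi.T

half = frac_power(lam, Phi, 0.5)
print(np.allclose(half @ half, A))       # True: (A^{1/2})^2 = A
```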
Elliptic Operators
Let A be a scalar linear partial differential operator,

A = a(x, D) = Σ_{|α|≤2l} a_α(x) D^α.

(a) The operator A is elliptic; i.e., its principal symbol a_0 satisfies

a_0(x, ξ) ≠ 0, ∀x ∈ Ō, ∀ξ ∈ R^d \ {0}.
(b) Moreover, the operator A is regular elliptic; i.e., for each ξ′ ≠ 0, the equation a_0(x, ξ′, ζ) = 0
has the same number of roots ζ in the upper and lower half-planes
(and this number equals l).
(c) The boundary operator B obeys the Lopatinskii condition. In order to have a
simple statement of this, we transfer the origin to a point x_0 and rotate the
coordinate system so that the t = x_d axis is directed along the inner normal
to the boundary at this point. Suppose that the operators of the problem are
rewritten in this coordinate system. Consider the following problem on the ray
R_+ = {t : t > 0} for fixed ξ′ = ξ′_0 ≠ 0:

a_0(x_0, ξ′_0, D_t) v(t) = 0, t > 0,
b(x_0, ξ′_0, D_t) v|_{t=0} = h. (1.5)

Problem (1.5) is required to have precisely one solution in L²(R_+) for every
ξ′_0 ≠ 0 and every number h. (More about the Lopatinskii condition can be found
in [90].)
Problem (1.5) is obtained from the original problem by freezing the coefficients
at the point x0 , removing the lower-order terms, and applying the formal Fourier
transform with respect to the tangent variables.
If all three conditions stated above hold, then the problem is said to be elliptic.
Let us assume that A can be written in divergence form as

A = Σ_{|α|≤l, |β|≤l} (−1)^{|α|} ∂^α ( a_{α,β} ∂^β ),

and that the associated bilinear form is bounded and coercive. Then, via the Lax–Milgram theorem, one may show that an elliptic problem
such as (1.5) has a unique solution. For more details on this subject, see [2].
The Cauchy Problem
Let X be a real Banach space with the dual denoted by X′, and let A : D(A) ⊂ X →
X be a linear unbounded operator. The operator A is said to be accretive if

‖(I + λA)^{−1} f − (I + λA)^{−1} g‖_X ≤ ‖f − g‖_X, λ > 0,

for all f, g in the range R(I + λA) of the operator I + λA. The operator A is said to
be m-accretive if it is accretive and R(I + λA) = X for all λ > 0 (equivalently, for
some λ > 0).
Moreover,

‖u(t)‖_X ≤ ‖u_o‖_X and ‖(d/dt)u(t)‖_X = ‖Au(t)‖_X ≤ ‖Au_o‖_X, t ≥ 0.
‖f(u_1) − f(u_2)‖_X ≤ C_f ‖u_1 − u_2‖_X, ∀u_1, u_2 ∈ X,

for some constant C_f > 0. If A is m-accretive, then the variation of constants formula
in (1.6) gives the following reformulation of Eq. (1.6):

u(t) = e^{−tA} u_o + ∫_0^t e^{−(t−s)A} f(u(s)) ds.
‖u(t)‖_X ≤ C_T (1 + ‖u_o‖_X), 0 ≤ t ≤ T.
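The variation of constants formula also suggests a simple time-stepping scheme ("exponential Euler"). In the sketch below, the diagonal operator A = diag(1, 3) and the Lipschitz nonlinearity f(u) = sin u are hypothetical choices:

```python
import numpy as np

# Exponential Euler for du/dt + Au = f(u), based on the mild-solution formula
# u(t) = e^{-tA} u_o + int_0^t e^{-(t-s)A} f(u(s)) ds:
#   u_{n+1} = e^{-hA} (u_n + h f(u_n)).
a = np.array([1.0, 3.0])          # eigenvalues of a diagonal, positive A
u = np.array([1.0, -2.0])         # initial datum u_o
h, T = 1e-3, 5.0
for _ in range(int(T / h)):
    u = np.exp(-h * a) * (u + h * np.sin(u))

# consistent with the a priori bound ||u(t)|| <= C_T (1 + ||u_o||)
print(np.linalg.norm(u) < 2.0)    # True: the trajectory stays bounded
```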
The above proposition does not provide enough smoothness of u(t) to interpret
(1.6) directly, since we do not know whether u(t) ∈ D(A). We can, however, develop
the weak form using test functions v ∈ D(A^{1/2}). Taking the inner product of (1.6) with
v gives

(d/dt)⟨u, v⟩ = −⟨Au, v⟩ + ⟨f(u), v⟩,

or equivalently,

(d/dt)⟨A^{−1/2} u, A^{1/2} v⟩ = −⟨A^{1/2} u, A^{1/2} v⟩ + ⟨f(u), v⟩.
And so, due to the result in the above proposition, the expression is well defined for
t > 0. This leads to the definition of a weak solution for Eq. (1.6): let V = D(A^{1/2})
and V′ = D(A^{−1/2}), which is the dual of V. We say that u : [0, T] → V is a weak
solution of (1.6) if for almost all s ∈ [0, T], we have (d/dt)u(s) ∈ V′ and

(d/dt)⟨u(s), v⟩ = −a(u(s), v) + ⟨f(u(s)), v⟩, ∀v ∈ V, where a(u, v) := ⟨A^{1/2} u, A^{1/2} v⟩.
Stochastic Processes
For this section, we refer for further details to [113]. A triple (Ω, F, P) is called
a probability space, where Ω is the set of possible outcomes, F is a σ-algebra of
subsets of Ω, called the set of events, and P : F → [0, 1] is a probability measure,
which assigns probabilities to the events, with P(Ω) = 1.
An F-measurable function X : Ω → H, i.e., a function such that X^{−1}(B) ∈ F
for each Borel set B of H, where H is a Banach space, is called a random variable. A
family {X(t) : t ≥ 0} of random variables is called a stochastic process. The quantity

E(X) := ∫_Ω X dP

is called the expectation (or mean) of X.
Let X(t) be a stochastic process such that E(|X(t)|) < ∞ for all t ≥ 0. Then X(t)
is called a martingale if

E( X(t) | F_s ) = X(s), ∀ 0 ≤ s ≤ t,

with respect to the underlying filtration (F_t)_{t≥0} (defined below). The quadratic variation of X is defined as

[X](t) = lim_{‖Δ‖→0} Σ_{k=1}^N ( X(t_k) − X(t_{k−1}) )²,

where Δ = {0 = t_0 < t_1 < · · · < t_N = t} ranges over partitions of the interval [0, t] and ‖Δ‖
is the mesh of the partition.
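The defining limit can be observed by simulation: for a Wiener process, [X](t) = t. The partition size and random seed below are arbitrary choices.

```python
import numpy as np

# Quadratic variation of Brownian motion on [0, t]: the sum of squared
# increments over a fine partition concentrates around t.
rng = np.random.default_rng(42)
t, n = 2.0, 200_000
dX = rng.standard_normal(n) * np.sqrt(t / n)   # increments X(t_k) - X(t_{k-1})
qv = np.sum(dX ** 2)

print(abs(qv - t) < 0.05)   # True (the standard deviation here is about 0.006)
```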
A collection of σ -algebras Ft , t ≥ 0, satisfying Fs ⊂ Ft for all 0 ≤ s ≤ t is
called a filtration, and a stochastic process X (t) is said to be adapted to the filtration
Ft if for each t ≥ 0, X (t) is Ft -measurable. A random variable τ : Ω → [0, ∞) is
an Ft -stopping time if
{τ ≤ t} ∈ F_t, ∀t ≥ 0.
where the supremum is taken over all the partitions of [0, t].
The following result, which is related to the martingale convergence theorem, is
important for obtaining convergence in probability of stochastic processes. Its proof
can be found in [85].
Lemma 1.1 Let I and I_1 be nondecreasing adapted processes, V a nonnegative
semimartingale, and M a local martingale such that E(V(t)) < ∞, ∀t ≥ 0, I_1(∞) <
∞, P-a.s., and V(t) + I(t) = V(0) + I_1(t) + M(t), ∀t ≥ 0. Then V(t) converges
P-a.s., as t → ∞, to a finite random variable.
This definition can be extended, in a standard way (see [113], pp. 1–4), to adapted
processes Φ : [0, T] → L(U, H) such that

P ( ∫_0^t ‖Φ(s)‖²_HS ds < ∞, t ≥ 0 ) = 1,

where ‖·‖_HS is the Hilbert–Schmidt norm in L(U, H). Then we have
(a) Itô’s isometry:
t 2
t
E Φ(s)d β(s) =E Φ (s)ds ,
2
0 0
t
(b) 0 Φ(s)d β(s) is a martingale, in particular,
t
E Φ(s)d β(s) = 0,
0
(d) Stochastic calculus: let X(t) = (X_1(t), . . . , X_n(t))^T be a random vector such
that

dX(t) = μ(t) dt + G(t) dβ(t),

for a vector μ(t) and a matrix G(t); then Itô's lemma states that

dφ(t, X(t)) = ( ∂φ/∂t + (∇_X φ)^T μ(t) + (1/2) Tr[ G^T(t) (H_X φ) G(t) ] ) dt
+ (∇_X φ)^T G(t) dβ(t),
1. Markov's inequality:

P(|X| ≥ λ) ≤ (1/λ^p) E(|X|^p), ∀λ > 0.
2. Borel–Cantelli lemma application. If X_k → X in probability, then there exists
a subsequence (X_{k_j})_{j=1}^∞ ⊂ (X_k)_{k=1}^∞ such that

X_{k_j}(ω) → X(ω) for almost every ω.
Let H be a separable Hilbert space and consider the stochastic differential equation
‖e^{−tA} Bx − e^{−tA} By‖_HS ≤ γ ‖x − y‖_H, ∀t ∈ [0, T], x, y ∈ H.

Then (1.7) has a unique mild solution X ∈ C([0, T]; L²(Ω, F, P, H)), which is the
space of all continuous functions [0, T] → L²(Ω, F, P, H) that are adapted to the
filtration F_t.
Then (1.7) has a unique mild solution X ∈ C([0, T ]; L2 (Ω, F , P, H )), which is the
space of all continuous functions [0, T ] → L2 (Ω, F , P, H ) that are adapted to the
filtration Ft .
This result extends to nonlinear differential equations of the form
via the rescaling y(t) := e−h(t)β(t) X (t) (for more details, see [22]).
Chapter 2
Stabilization of Abstract Parabolic Equations
F′_0(ŷ)(y) := lim_{λ→0} (1/λ) [ F_0(ŷ + λy) − F_0(ŷ) ]

in L²(O). Moreover, F_0(0) = 0, and for some α ∈ (0, 1) and C > 0, we have
It is easy to see that A is closed and densely defined and that −A generates a C_0-semigroup on L²(O). The operator A can be viewed as the linearization of A + F_0
around ŷ. In addition to (A1) and (A2), we assume that
(A3) the resolvent (λI + A)−1 of A is compact in L 2 (O).
Hypothesis (A3) implies, via the Fredholm–Riesz theory (see Theorem 1.3), that the
operator A has a countable set of eigenvalues λ_j, j ∈ N* (repeated according to
their multiplicities), and corresponding eigenfunctions ϕ_j, j ∈ N*, i.e.,

Aϕ_j = λ_j ϕ_j, j ∈ N*.
Besides this, given ρ > 0, there is a finite number N of eigenvalues whose real parts do not exceed ρ, the remaining ones satisfying Re λ_j > ρ.
the algebraic multiplicity of an eigenvalue coincides with the geometric multiplicity,
then that eigenvalue is called semisimple (see Chap. 1). We add to the above context
the following assumption:
(A4) Each unstable eigenvalue λ j , j = 1, 2, . . . , N , is semisimple.
Taking into account that A is self-adjoint, one can easily check that hypotheses
(A1)–(A4) hold for this case.
Denote by ⟨·, ·⟩ and by ‖·‖ the scalar product and the corresponding norm in
L²(O), respectively. Since the spectrum of A might contain some complex eigenvalues, it will be convenient in the sequel to view A as a linear operator (still denoted
by A) in the complexified space L²(O) + iL²(O) (which will still be denoted by
L²(O)). We denote by ⟨·, ·⟩ and by ‖·‖ the corresponding scalar product and the
induced norm of the complexified L²(O), respectively.
It is easily seen that the finite part of the spectrum {λ_j}_{j=1}^N can be separated from
the rest of the spectrum by a rectifiable curve Γ_N in the complex plane C. Set X_u to
be the linear space generated by the eigenfunctions {ϕ_j}_{j=1}^N, that is,

X_u := lin span {ϕ_j}_{j=1}^N.
The corresponding Riesz projection

P_N := (1/(2πi)) ∫_{Γ_N} (λI − A)^{−1} dλ (2.4)

is known as the algebraic projection of L²(O) onto X_u. It is easy to see that the
operator
A_u := P_N A (2.5)

maps the space X_u into itself, and σ(A_u) = {λ_j}_{j=1}^N. More exactly, A_u : X_u → X_u
is finite-dimensional and can be represented by an N × N matrix. (σ(A_u) stands for
the spectrum of the operator A_u; see Chap. 1.)
If A* is the adjoint operator of A, then its eigenvalues are precisely {λ̄_j}_{j∈N*},
with the corresponding eigenfunctions ϕ*_j, i.e.,

A* ϕ*_j = λ̄_j ϕ*_j, j ∈ N*.
while X*_N := lin span {ϕ*_j}_{j=1}^N ( = P*_N L²(O) ).
Via the Schmidt orthogonalization procedure, it follows by hypothesis (A4) that
one can find a biorthogonal system {ϕ_j}_{j=1}^N, {ϕ*_j}_{j=1}^N of eigenfunctions of A
and A*, respectively, i.e.,

⟨ϕ_i, ϕ*_j⟩ = δ_{ij}, i, j = 1, 2, . . . , N, (2.6)
Aϕ_j = λ_j ϕ_j, A* ϕ*_j = λ̄_j ϕ*_j. (2.7)
If y ∈ L²(O) but y ∉ D(A), we will understand by Ay a differential form involving y rather than the operator A acting on y. In this light, consider the problem

∂y/∂t + Ay + F_0(y) = 0 in (0, ∞) × O,
B.C.(y, 0) on (0, ∞) × ∂O, (2.8)
y(0) = y_o in O.
Here B.C.(y, 0) denotes some appropriate boundary conditions for the unknown
function y, and y_o ∈ D(A) is the initial datum. The operator ∂/∂t + A + F_0 is called the
abstract parabolic differential operator.
Under appropriate conditions on A, F0 , and B.C. (y, 0), problem (2.8) is well
posed. Here we do not give any further details about the well-posedness, since these
were discussed in Chap. 1, Theorem 1.5. We simply assume that (2.8) with B.C. (y, 0)
generates a semiflow y = y(t, yo ), t ≥ 0.
An equilibrium (steady-state, or stationary) solution ŷ to system (2.8) is a solution
to the stationary equation (if one exists)

A ŷ + F_0(ŷ) = 0.
where

G(z) := F_0(z + ŷ) − F_0(ŷ) − F′_0(ŷ)(z). (2.11)
If the steady state ŷ is not stable, a way to stabilize it is to plug into (2.8) a
controller function u : [0, ∞) → U that takes values in another space U (which is
assumed Hilbert), obtaining thereby the following boundary controlled problem:

∂y/∂t + Ay + F_0(y) = 0 in (0, ∞) × O,
B.C.(y, u) on (0, ∞) × ∂O, (2.12)
y(0) = y_o in O,

or equivalently, by (2.10),

∂z/∂t + Az + G(z) = 0 in (0, ∞) × O,
B.C.(z, v) on (0, ∞) × ∂O, (2.13)
z(0) = z_o in O,
Example 2.3 The boundary control problems (2.12) and (2.13) associated with (2.9)
from Example 2.2 look like

∂y/∂t − Δy + f(y) = 0 in (0, ∞) × O,
y = u on (0, ∞) × Γ_1, ∂y/∂n = 0 on (0, ∞) × Γ_2, (2.14)
y(0) = y_o in O,

and

∂z/∂t − Δz + f′(ŷ)z + G(z) = 0 in (0, ∞) × O,
z = v := u − ŷ on (0, ∞) × Γ_1, ∂z/∂n = 0 on (0, ∞) × Γ_2, (2.15)
z(0) = z_o in O,

respectively. Here

G(z) := f(z + ŷ) − f(ŷ) − f′(ŷ)(z).
The stabilization problem consists in finding a controller u such that the corresponding solution y of (2.12) satisfies

lim_{t→∞} e^{ct} ‖y(t) − ŷ‖ = 0

in L²(O) for some constant c > 0, for all y_o in a neighborhood of ŷ. Throughout this
book, we will use the shortened terminology stabilization (stability, stabilizability)
in referring to the asymptotic exponential stabilization (asymptotic exponential stability, stabilizability, respectively) of some system. If one can find such a controller,
then the equation is said to be stabilizable from the boundary. If the controller is in
feedback form, i.e.,

u(t) = K(y(t)), t ≥ 0,

where K is a given operator from L²(O) to U, then Eq. (2.12) is said to be a closed-loop equation.
In practice, a controller given in feedback form is the most desirable, since it
ensures better performance. Roughly speaking, if at time t* the solution of Eq.
(2.12) drifts away from ŷ, then at the very same moment t*, the feedback controller
u(t*) = K(y(t*)) reacts and steers the trajectory back toward the steady state.
One can equivalently express the stabilization problem for Eq. (2.13). More pre-
cisely, the problem consists in finding a feedback control v such that once it is
inserted into Eq. (2.13), the corresponding solution z to the closed-loop equation
(2.13) satisfies
lim ect z(t) = 0 in L 2 (O).
t→∞
The main difficulty in this case is that the corresponding linear operator A has a time-dependent spectrum, which renders the considerations and results from the stationary case inapplicable. Therefore, this is a challenging subject, and in a subsequent chapter
we will pose and solve this problem for a special case.
The theory of control and stabilization uses a number of tools, many of them
developed in the 1960s: by Kalman, with his theory of filtering and his algebraic approach
to control systems; by Pontryagin, with his maximum principle, a
generalization of Lagrange multipliers; by Bellman, with his principle of dynamic
programming; and by Lyapunov, with his Lyapunov functions, among others. In the
present book, we will rely mainly on unique continuation results and on constructing a
Lyapunov function for the system. More exactly, the form of the feedback controller
is given a priori (justified by a unique continuation result), and then, once it has
been plugged into the system, one proves the stability of the closed-loop system by
finding a Lyapunov function.
Under appropriate boundary conditions B.C.(z̃, β) and for γ > 0 sufficiently large,
there exists a unique solution to (2.16), so that the operator D_γ is well defined and belongs to
L(L²(∂O), H^{1/2}(O)) (for details, see [19] or [80], and also (1.4)).
For later use, we need to compute the scalar products ⟨D_γ β, ϕ*_j⟩, j = 1, 2, . . . , N.
To this end, scalar multiplying (2.16) by ϕ*_j and taking into account relations (2.6)
and (2.7) yields, via Green's formula, that

⟨D_γ β, ϕ*_j⟩ = −(1/(γ − λ_j)) ⟨β, D ϕ*_j⟩_0, j = 1, 2, . . . , N. (2.17)

Here

D ϕ*_j := −(γ − λ_j) D*_γ ϕ*_j, j = 1, 2, . . . , N,

where D*_γ denotes the adjoint operator of D_γ, and ⟨·, ·⟩_0 stands for the scalar product
in L²(∂O).
As we shall see below, the algorithm requires a new hypothesis. More exactly,
(A5) None of the functions D ϕ ∗j , j = 1, 2, . . . , N , is identically zero on the bound-
ary ∂O.
This assumption is related to the unique continuation property of the eigenfunctions of
the adjoint A∗ of the linear operator A. It arises naturally in the context of boundary
control problems. In the existing literature on this subject, a stronger hypothesis is assumed instead of (A5), namely linear independence of the traces of the
eigenfunctions on the boundary (see more in the "Comments" section below). Such a hypothesis is usually hard to verify in practical examples, and
there are many simple cases of domains O for which it fails to hold. For all the
examples in this book, the weaker assumption (A5) is satisfied, while the linear independence assumption is not. In fact, verifying assumption (A5) for
the different models will constitute the major effort of this book: once (A5) holds
(together with (A1)–(A4)), the control design algorithm applies to all the models in the same way,
as described in this chapter. Consequently, roughly
speaking, every evolution equation governed by an elliptic operator can be stabilized
from the boundary by a proportional-type feedback of the form (2.26) below, once a
unique continuation-type result such as (A5) is provided.
Example 2.4 For Example 2.3, the corresponding map D_γ is given by D_γ β := z̃, where
z̃ satisfies

−Δz̃ + f′(ŷ)z̃ − 2 Σ_{k=1}^N λ_k ⟨z̃, ϕ_k⟩ ϕ_k + γ z̃ = 0 in O,
z̃ = β on Γ_1, ∂z̃/∂n = 0 on Γ_2. (2.18)

This time, ⟨·, ·⟩_0 stands for the scalar product in L²(Γ_1). Assumption (A5), in this
case, says that none of the traces ∂ϕ_j/∂n, j = 1, 2, . . . , N, vanishes identically on Γ_1.
Equivalently, if ϕ_j satisfies

−Δϕ_j + f′(ŷ)ϕ_j = λ_j ϕ_j in O, ϕ_j = 0 on Γ_1 and ∂ϕ_j/∂n = 0 on ∂O,
Next, for the convenience of the reader, we split the presentation into two
parts: first, we strengthen hypothesis (A4) to (A4.1) below, assuming that the
unstable eigenvalues are mutually distinct. Then, in the second part, we slightly adjust the
feedback law to show that it still achieves stability in the more general framework
given by hypothesis (A4).
and
Bk := Λγk BΛγk , k = 1, . . . , N . (2.22)
Suppose, by way of contradiction, that (B_1 + · · · + B_N) z = 0 for some nonzero
z = (z_1, . . . , z_N)^T ∈ C^N. Scalar multiplying this relation by z in C^N yields

Σ_{k=1}^N ∫_{∂O} | Σ_{j=1}^N (1/(γ_k − λ_j)) z_j D ϕ*_j(x) |² dx = 0.

Hence, for every k = 1, . . . , N,

Σ_{j=1}^N (1/(γ_k − λ_j)) z_j D ϕ*_j(x) = 0, a.e. on ∂O.
For fixed x ∈ ∂O, these relations form a homogeneous linear system in the unknowns
z_j D ϕ*_j(x), j = 1, . . . , N, whose coefficient matrix is the Cauchy matrix

( 1/(γ_k − λ_j) )_{k,j=1}^N, (2.23)

with determinant

Π_{1≤j<k≤N} (γ_k − γ_j)(λ_j − λ_k) / Π_{j,k=1}^N (γ_k − λ_j) ≠ 0,

by the fact that the set {λ_j}_{j=1}^N contains distinct elements and the inequality

ρ < γ_1 < γ_2 < · · · < γ_N.

Consequently, z_j D ϕ*_j(x) = 0 for a.e. x ∈ ∂O and every j = 1, . . . , N. By virtue of
the unique continuation property of {D ϕ*_i}_{i=1}^N assumed in (A5), for each j there is
at least some x ∈ ∂O with D ϕ*_j(x) ≠ 0, and therefore z_j = 0 for every j; that is, the
homogeneous system has a unique solution, namely the trivial one z = 0. This is in
contradiction to our assumption. Hence we conclude that
the sum B_1 + · · · + B_N is indeed an invertible matrix.
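The Cauchy-determinant computation at the heart of this argument is easy to check numerically; the values of λ_j and γ_k below are hypothetical, chosen only to be mutually distinct with ρ < γ_1 < γ_2 < γ_3.

```python
import numpy as np

# Cauchy matrix C[k, j] = 1/(gamma_k - lambda_j) and its determinant formula
# det C = prod_{j<k} (gamma_k - gamma_j)(lambda_j - lambda_k)
#         / prod_{j,k} (gamma_k - lambda_j)   (nonzero for distinct parameters).
lam = np.array([0.5, 1.0, 1.7])     # hypothetical unstable eigenvalues
gam = np.array([2.0, 3.0, 4.5])     # design parameters gamma_1 < gamma_2 < gamma_3
N = len(lam)

C = 1.0 / (gam[:, None] - lam[None, :])

num = np.prod([(gam[k] - gam[j]) * (lam[j] - lam[k])
               for j in range(N) for k in range(j + 1, N)])
den = np.prod([gam[k] - lam[j] for j in range(N) for k in range(N)])

print(np.isclose(np.linalg.det(C), num / den))   # True
print(abs(np.linalg.det(C)) > 0)                 # True: C is invertible
```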
Proposition 2.1 enables us to define the matrix

A := (B_1 + B_2 + · · · + B_N)^{−1},

and, for k = 1, 2, . . . , N, the feedback laws

v_k(z(t)) ( = v_k(t, x) ) := ⟨ A ( ⟨z(t), ϕ*_1⟩, ⟨z(t), ϕ*_2⟩, . . . , ⟨z(t), ϕ*_N⟩ )^T,
( (1/(γ_k − λ_1)) D ϕ*_1(x), (1/(γ_k − λ_2)) D ϕ*_2(x), . . . , (1/(γ_k − λ_N)) D ϕ*_N(x) )^T ⟩_N, (2.25)

t ≥ 0, x ∈ ∂O.
Next, we lift the boundary control v into Eq. (2.13). To achieve this, setting

η := z − Σ_{k=1}^N D_{γ_k} v_k,

we obtain by (2.13) and (2.16) that

∂η/∂t + Aη + G(z) = − Σ_{k=1}^N ( ∂(D_{γ_k} v_k)/∂t + A D_{γ_k} v_k )
= − (∂/∂t) Σ_{k=1}^N D_{γ_k} v_k − 2 Σ_{k,j=1}^N λ_j ⟨D_{γ_k} v_k, ϕ*_j⟩ ϕ_j + Σ_{k=1}^N γ_k D_{γ_k} v_k. (2.27)
Thus

∂η/∂t + Aη = − (∂/∂t) Σ_{k=1}^N D_{γ_k} v_k + 𝒢(z), η(0) = z_o − Σ_{k=1}^N D_{γ_k} v_k(z(0)),

where

𝒢(z) := −G(z) − 2 Σ_{k,j=1}^N λ_j ⟨D_{γ_k} v_k(z), ϕ*_j⟩ ϕ_j + Σ_{k=1}^N γ_k D_{γ_k} v_k(z).
η(t) = e^{−tA} η(0) − ∫_0^t e^{−(t−s)A} (∂/∂s) ( Σ_{k=1}^N D_{γ_k} v_k ) ds + ∫_0^t e^{−(t−s)A} 𝒢(z(s)) ds.
where Ã stands for the extension of the operator A to the whole space L²(O) (see
(1.2) in Chap. 1). For the sake of notational simplicity, in the sequel we omit
the symbol ˜, but keep in mind that by A we refer, in fact, to the extended
operator of A.
Hence, we finally get that (2.13) is equivalent to

dz/dt + Az + G(z) = Σ_{k=1}^N (γ_k I + A) D_{γ_k} v_k(z)
− 2 Σ_{k,j=1}^N λ_j ⟨D_{γ_k} v_k(z), ϕ*_j⟩ ϕ_j, t ≥ 0; z(0) = z_o. (2.29)
In order to show the stability of (2.29), we first consider only the linear part of it,
and obtain the following result.
Theorem 2.1 Under (A1)–(A5) and (A4.1), the unique solution to the linear equation

dz/dt + Az = Σ_{k=1}^N (γ_k I + A) D_{γ_k} v_k(z)
− 2 Σ_{k,j=1}^N λ_j ⟨D_{γ_k} v_k(z), ϕ*_j⟩ ϕ_j, t ≥ 0; z(0) = z_o, (2.30)
satisfies

‖z(t)‖² ≤ C e^{−ρt} ‖z_o‖², ∀t ≥ 0. (2.31)

Proof. Decompose the solution as

z_u = P_N z and z_s = (I − P_N) z,

with P_N the projector defined by (2.4). In this way, (2.30) can be split as
on X_u: dz_u/dt + A_u z_u = Σ_{k=1}^N (γ_k I + A_u) D_{γ_k} v_k(z_u)
− 2 Σ_{k,j=1}^N λ_j ⟨D_{γ_k} v_k(z_u), ϕ*_j⟩ ϕ_j, t ≥ 0; z_u(0) = P_N z_o, (2.32)

and

on X_s: dz_s/dt + A_s z_s = Σ_{k=1}^N (γ_k I + A_s) D_{γ_k} v_k(z_u), t ≥ 0; z_s(0) = (I − P_N) z_o. (2.33)

Here

A_u := P_N A and A_s := (I − P_N) A.
where the $B_k$ were introduced in (2.22) above, for $k=1,\dots,N$. This is indeed so. We have by (2.25) that

$$\langle D_{\gamma_k}v_k,\varphi_j^*\rangle = \left\langle A\begin{pmatrix}\langle z(t),\varphi_1^*\rangle\\ \langle z(t),\varphi_2^*\rangle\\ \vdots\\ \langle z(t),\varphi_N^*\rangle\end{pmatrix},\ \begin{pmatrix}\frac{1}{\gamma_k-\lambda_1}\langle D_{\gamma_k}D\varphi_1^*,\varphi_j^*\rangle\\ \frac{1}{\gamma_k-\lambda_2}\langle D_{\gamma_k}D\varphi_2^*,\varphi_j^*\rangle\\ \vdots\\ \frac{1}{\gamma_k-\lambda_N}\langle D_{\gamma_k}D\varphi_N^*,\varphi_j^*\rangle\end{pmatrix}\right\rangle_N,\quad j=1,\dots,N.$$
Writing

$$z_u = \sum_{j=1}^N z_j(t)\varphi_j,$$

we obtain

$$\frac12 Z_t + \Lambda Z = Z_t + \Lambda Z - \frac12\sum_{k=1}^N\gamma_kB_kAZ,\quad t>0;\ Z(0)=Z_o, \quad (2.38)$$

or equivalently,

$$Z_t = -\gamma_1Z + \sum_{k=2}^N(\gamma_1-\gamma_k)B_kAZ,\quad t>0;\ Z(0)=Z_o. \quad (2.39)$$
Here

$$Z := \begin{pmatrix}\langle z(t),\varphi_1^*\rangle\\ \langle z(t),\varphi_2^*\rangle\\ \vdots\\ \langle z(t),\varphi_N^*\rangle\end{pmatrix}\quad\text{and}\quad \Lambda := \mathrm{diag}(\lambda_1,\lambda_2,\dots,\lambda_N).$$
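The finite-dimensional closed-loop system (2.39) can be integrated numerically to observe the predicted exponential decay; the matrices $B_k$ below are assumed random positive semidefinite stand-ins, with $A=(\sum_kB_k)^{-1}$ and $0<\gamma_1<\gamma_2<\gamma_3$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of (2.39): Z' = -gamma_1 Z + sum_{k>=2} (gamma_1 - gamma_k) B_k A Z.
rng = np.random.default_rng(0)
N = 3
M = [rng.standard_normal((N, N)) for _ in range(N)]
Bks = [m @ m.T for m in M]                 # Hermitian, positive semidefinite B_k
A = np.linalg.inv(sum(Bks))                # A = (B_1 + ... + B_N)^{-1}
g = np.array([1.0, 2.0, 3.0])              # gamma_1 < gamma_2 < gamma_3

def rhs(t, Z):
    out = -g[0] * Z
    for k in range(1, N):
        out += (g[0] - g[k]) * Bks[k] @ (A @ Z)
    return out

sol = solve_ivp(rhs, (0.0, 10.0), np.array([1.0, -2.0, 0.5]), rtol=1e-8)
print(np.linalg.norm(sol.y[:, -1]))        # small: decay roughly like e^{-gamma_1 t}
```

The Lyapunov functional $\|A^{1/2}Z\|_N^2$ used in the proof is exactly what guarantees this decay, since each correction term $(\gamma_1-\gamma_k)\langle B_kAZ,AZ\rangle_N$ is nonpositive.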
$$\langle B_kq,q\rangle_N \ge 0,\quad \forall q\in\mathbb C^N,\ k=1,\dots,N.$$
$$\frac12\frac{d}{dt}\|A^{\frac12}Z(t)\|_N^2 = -\gamma_1\|A^{\frac12}Z(t)\|_N^2 + \sum_{k=2}^N(\gamma_1-\gamma_k)\langle B_kAZ(t),AZ(t)\rangle_N, \quad (2.40)$$

which leads to

$$\frac12\frac{d}{dt}\|A^{\frac12}Z(t)\|_N^2 \le -\gamma_1\|A^{\frac12}Z(t)\|_N^2,\quad t\ge0,$$

where, using the fact that $A^{\frac12}$ is a positive definite Hermitian matrix, we finally arrive at
Hence the conclusion of the theorem follows immediately by (2.42) and (2.43),
since z = z u + z s .
Now let us return to the full nonlinear system (2.29). In order to be able to show
its stability, we need to strengthen the assumptions on the nonlinear part F0 to
(A6) |F0 (y)| ≤ C(|y|m + 1), ∀y ∈ R, where 0 < m < ∞ for d = 1, 2 and m = 3
for d = 3.
Then we have the following result.
Theorem 2.2 Let 1 ≤ d ≤ 3. Assume that (A1)–(A6) hold together with (A4.1).
Then for each z o ∈ Uo , there exists a unique solution z to the equation
$$\frac{dz}{dt} + Az + G(z) = \sum_{k=1}^N(\gamma_kI+A)D_{\gamma_k}v_k(z) - 2\sum_{k,j=1}^N\lambda_j\langle D_{\gamma_k}v_k(z),\varphi_j^*\rangle\varphi_j,\quad t\ge0;\ z(0)=z_o, \quad (2.44)$$

which satisfies

$$\|z(t)\|^2 \le Ce^{-ct}\|z_o\|^2,\quad \forall t\ge0.$$

Here

$$\mathcal U_o := \big\{z_o\in L^2(\mathcal O);\ \|z_o\|\le\sigma\big\},$$

for some sufficiently small σ > 0. We denote by $\mathscr A$ the operator

$$\mathscr Az := Az - \sum_{k=1}^N(\gamma_kI+A)D_{\gamma_k}v_k(z) + 2\sum_{k,j=1}^N\lambda_j\langle D_{\gamma_k}v_k(z),\varphi_j^*\rangle\varphi_j,\quad z\in D(\mathscr A),$$
We are going to show that for $\|z_o\|\le\sigma$ sufficiently small, Eq. (2.46) has a unique solution $z\in L^{m+1}(0,\infty;H^1(\mathcal O))$. To this end, we will proceed as in [19]. More precisely, we consider the map $\Gamma: L^{m+1}(0,\infty;H^1(\mathcal O))\to L^{m+1}(0,\infty;H^1(\mathcal O))$ defined by

$$\Gamma z := e^{-t\mathscr A}z_o + \int_0^t e^{-(t-s)\mathscr A}G(z(s))\,ds,$$

and we shall show that for r sufficiently small, it maps the ball

$$B(0,r) := \big\{z\in L^{m+1}(0,\infty;H^1(\mathcal O)) : \|z\|_{L^{m+1}(0,\infty;H^1(\mathcal O))}\le r\big\}$$

into itself and is a contraction on B(0, r), for $\|z_o\|$ sufficiently small and r suitably chosen. By (A6), the definition of G, and the Sobolev embedding theorem (for dimension 1 ≤ d ≤ 3), we have that

$$\|G(z)\| \le C\|z\|_{H^1(\mathcal O)}^{m+1}.$$
Then, arguing as in the proof of [10, Theorem 3.5], one may show that this map is a contraction on B(0, r). Hence, via the contraction mapping theorem, for $\|z_o\|\le\sigma$ sufficiently small, Eq. (2.46) has a unique solution $z\in L^{m+1}(0,\infty;H^1(\mathcal O))$. By a
standard argument, such as the one in [10, Proposition 5.9], this implies also that for
some constants C, c > 0,
$$\|z(t)\|^2 \le Ce^{-ct}\|z_o\|^2,\quad \forall t\ge0,$$
To conclude this section, we recall the notation z := y − ŷ and see that Theorems
2.1 and 2.2 imply the following stabilization result for the original system (2.12).
Theorem 2.3 Under (A1)–(A6) and (A4.1), for 1 ≤ d ≤ 3, we have that for each
yo ∈ Uo , there exists a unique solution y to the equation
$$\frac{\partial y}{\partial t} + Ay + F_0(y) = 0\ \text{ in }(0,\infty)\times\mathcal O;\quad \text{B.C.}(y+\hat y,u(y))\ \text{ on }(0,\infty)\times\partial\mathcal O;\quad y(0)=y_o\ \text{ in }\mathcal O, \quad (2.47)$$

Here

$$u := \left\langle\Lambda_SA\begin{pmatrix}\langle y(t)-\hat y,\varphi_1^*\rangle\\ \langle y(t)-\hat y,\varphi_2^*\rangle\\ \vdots\\ \langle y(t)-\hat y,\varphi_N^*\rangle\end{pmatrix},\ \begin{pmatrix}D\varphi_1^*\\ D\varphi_2^*\\ \vdots\\ D\varphi_N^*\end{pmatrix}\right\rangle_N$$

and

$$\mathcal U_o := \big\{y_o\in L^2(\mathcal O);\ \|y_o-\hat y\|\le\sigma\big\},$$
Theorem 2.4 Assume that hypothesis (A4.1) holds and that f is a $C^1$ function with $f'\in C(\mathbb R)$. Then the solution y to the equation

$$\begin{cases} y_t(t,x) - \Delta y(t,x) + f'(\hat y(x))y(t,x) = 0, & t>0,\ x\in\mathcal O,\\[4pt] y(t,x) = \left\langle\Lambda_SA\begin{pmatrix}\langle y(t),\varphi_1\rangle\\ \langle y(t),\varphi_2\rangle\\ \vdots\\ \langle y(t),\varphi_N\rangle\end{pmatrix},\ \begin{pmatrix}\frac{\partial\varphi_1}{\partial n}\\ \frac{\partial\varphi_2}{\partial n}\\ \vdots\\ \frac{\partial\varphi_N}{\partial n}\end{pmatrix}\right\rangle_N, & t>0,\ x\in\Gamma_1,\\[4pt] \dfrac{\partial}{\partial n}y(t,x) = 0, & t>0,\ x\in\Gamma_2,\\[2pt] y(0,x) = y_o(x), & x\in\mathcal O, \end{cases} \quad (2.48)$$
where

$$B_k := \Lambda_{\gamma_k}B\Lambda_{\gamma_k},\quad k=1,\dots,N,$$

and B is the Gram matrix

$$B := \begin{pmatrix} \big\langle\frac{\partial\varphi_1}{\partial n},\frac{\partial\varphi_1}{\partial n}\big\rangle_0 & \big\langle\frac{\partial\varphi_1}{\partial n},\frac{\partial\varphi_2}{\partial n}\big\rangle_0 & \dots & \big\langle\frac{\partial\varphi_1}{\partial n},\frac{\partial\varphi_N}{\partial n}\big\rangle_0\\[4pt] \big\langle\frac{\partial\varphi_2}{\partial n},\frac{\partial\varphi_1}{\partial n}\big\rangle_0 & \big\langle\frac{\partial\varphi_2}{\partial n},\frac{\partial\varphi_2}{\partial n}\big\rangle_0 & \dots & \big\langle\frac{\partial\varphi_2}{\partial n},\frac{\partial\varphi_N}{\partial n}\big\rangle_0\\ \hdotsfor{4}\\ \big\langle\frac{\partial\varphi_N}{\partial n},\frac{\partial\varphi_1}{\partial n}\big\rangle_0 & \big\langle\frac{\partial\varphi_N}{\partial n},\frac{\partial\varphi_2}{\partial n}\big\rangle_0 & \dots & \big\langle\frac{\partial\varphi_N}{\partial n},\frac{\partial\varphi_N}{\partial n}\big\rangle_0 \end{pmatrix},$$

where ⟨·,·⟩₀ stands for the scalar product in $L^2(\Gamma_1)$. Finally, $\{\lambda_j\}_{j=1}^\infty$, $\{\varphi_j\}_{j=1}^\infty$ denote the eigenvalues and eigenfunctions, respectively, of the linear operator $-\Delta + f'(\hat y)$; and $\gamma_1,\dots,\gamma_N$ are some real positive numbers.
for a prescribed ρ > 0 and a constant C > 0, provided that $\|y_o-\hat y\|$ is small enough. For the notation A, $\Lambda_S$, $\varphi_j,\dots$, we refer to Theorem 2.4.
In this section, we drop the hypothesis (A4.1) but keep the more general one (A4). In
this case, the result in Proposition 2.1 may fail to hold, since the determinant given
by (2.46) may be zero. In other words, in this case, the sum B1 + · · · + B N may be a
singular matrix. To overcome this problem, we slightly perturb the spectrum of the
linear operator A. To illustrate our approach, let us assume, for instance, that
$$\lambda_1 = \lambda_2\quad\text{and}\quad \lambda_j\ne\lambda_k,\ \forall j,k=2,3,\dots,N,\ j\ne k.$$
where $\Lambda_S := \sum_{k=1}^N\Lambda_{\gamma_k}$, with $\Lambda_{\gamma_k}$ given this time as

$$\Lambda_{\gamma_k} := \mathrm{diag}\Big(\frac{1}{\gamma_k-\delta-\lambda_1},\ \frac{1}{\gamma_k-\lambda_2},\ \dots,\ \frac{1}{\gamma_k-\lambda_N}\Big), \quad (2.54)$$

while the matrix A is given similarly as in (2.24) (we mention that since $\lambda_1+\delta$, $\lambda_2,\dots,\lambda_N$ are distinct, a result similar to that in Proposition 2.1 can be proved, showing in this way that A is well defined). Computations similar to those in (2.27)–(2.29) yield that system (2.13) is equivalent to

$$\frac{dz}{dt} + Az = \sum_{k=1}^N(\gamma_kI+A)D_{\gamma_k}v_k(z) - 2\sum_{k,j=1}^N\lambda_j\langle D_{\gamma_k}v_k(z),\varphi_j^*\rangle\varphi_j - \delta\sum_{k=1}^N\langle D_{\gamma_k}v_k,\varphi_1^*\rangle\varphi_1,\quad t\ge0;\ z(0)=z_o.$$
The main results in the present context are similar to those above. First, concerning
the linearized system, we have the following theorem.
$$\frac{dz}{dt} + Az = \sum_{k=1}^N(\gamma_kI+A)D_{\gamma_k}v_k(z) - 2\sum_{k,j=1}^N\lambda_j\langle D_{\gamma_k}v_k(z),\varphi_j^*\rangle\varphi_j - \delta\sum_{k=1}^N\langle D_{\gamma_k}v_k,\varphi_1^*\rangle\varphi_1,\quad t\ge0;\ z(0)=z_o, \quad (2.55)$$

satisfies

$$\|z(t)\|^2 \le Ce^{-\rho t}\|z_o\|^2,\quad \forall t\ge0, \quad (2.56)$$
Proof We argue as in the proof of Theorem 2.1. We get this time that the finite-
dimensional unstable part of the linear system (2.55) (which corresponds to (2.32)),
has the equivalent form (see all the computations between (2.35) and (2.39))
$$Z_t = -\gamma_1Z + \sum_{k=2}^N(\gamma_1-\gamma_k)B_kAZ - \frac{\delta}{2}O;\quad Z(0)=Z_o,$$

where

$$O := \begin{pmatrix}\langle z(t),\varphi_1^*\rangle\\ 0\\ \vdots\\ 0\end{pmatrix}.$$
$$\frac{d}{dt}\|A^{\frac12}Z(t)\|_N^2 \le -2\gamma_1\|A^{\frac12}Z(t)\|_N^2 + \delta\|AZ(t)\|_N^2,\quad t\ge0, \quad (2.57)$$

where $\|A\|$ stands for the classical Euclidean induced norm of the matrix A. Denote by $\lambda_1(A)>0$ the first eigenvalue of A and integrate over time in (2.57). This yields

$$\lambda_1(A)\|Z(t)\|_N^2 \le e^{-2\gamma_1t}\|A^{\frac12}Z_0\|_N^2 + \delta\int_0^te^{-2\gamma_1(t-s)}\|AZ(s)\|_N^2\,ds, \quad (2.58)$$
Let bi∗j denote the entries of the adjoint of the matrix B1 + · · · + B N . By virtue of
the definition of the adjoint matrix and the above observation, we deduce that
for $\gamma_1$ large enough. In conclusion, for $\gamma_1$ large enough, there exists some μ > 0 such that

$$\frac{\delta\|A\|}{\lambda_1(A)} - 2\gamma_1 = \frac{\|A\|}{\gamma_1^4\lambda_1(A)} - 2\gamma_1 \le \frac{CN^2}{\lambda_1(A)} - 2\gamma_1 \le -\mu,$$

since $\frac{1}{\lambda_1(A)}\to0$ as $\gamma_1\to\infty$. This, together with (2.59), implies

$$\|Z(t)\|_N^2 \le \frac{1}{\lambda_1(A)}e^{-\mu t}\|A^{\frac12}Z_0\|_N^2,\quad t\ge0,$$
which represents the exponential decay of the first N modes of z. The rest of the
proof mimics the proof of Theorem 2.1, and it is therefore omitted.
Finally, based on the above result, one can immediately deduce the following
counterpart of Theorem 2.3.
Theorem 2.7 Under (A1)–(A6), for 1 ≤ d ≤ 3, we have that for each yo ∈ Uo , there
exists a unique solution y to the equation
$$\frac{\partial y}{\partial t} + Ay + F_0(y) = 0\ \text{ in }(0,\infty)\times\mathcal O;\quad \text{B.C.}(y+\hat y,u(y))\ \text{ on }(0,\infty)\times\partial\mathcal O;\quad y(0)=y_o\ \text{ in }\mathcal O, \quad (2.61)$$

Here

$$u := \left\langle\Lambda_SA\begin{pmatrix}\langle y(t)-\hat y,\varphi_1^*\rangle\\ \langle y(t)-\hat y,\varphi_2^*\rangle\\ \vdots\\ \langle y(t)-\hat y,\varphi_N^*\rangle\end{pmatrix},\ \begin{pmatrix}D\varphi_1^*\\ D\varphi_2^*\\ \vdots\\ D\varphi_N^*\end{pmatrix}\right\rangle_N,$$
In this section, we further particularize the model in Example 2.2 (see also Example
2.3), in the sense that we take the space dimension to be equal to one, and f of the
form
f (y) := −αy + βy 2 ,
$$B_k := \Lambda_{\gamma_k}B\Lambda_{\gamma_k},\quad k=1,\dots,N,$$

for a prescribed ρ > 0 and a constant C > 0, provided that $\|y_o\|_{L^2(0,1)}$ is small enough. For the notation A, $\Lambda_S$, $\varphi_j,\dots$, we refer to Theorem 2.8.
Now let us study the problem numerically. It is easy to see that when α > (2π)², there is more than one unstable eigenvalue. We take α = 50, β = 0.30, and set the initial profile to be $u_0(x) = 5xe^x$, $x\in[0,1]$. In this case, for a decay rate $\rho\in(0,(3\pi)^2-50]$, a two-dimensional feedback controller can be designed to stabilize the system; for a larger rate, a higher-dimensional one is needed. In this example, the matrix $B_2$ takes the form

$$B_2 = \pi^2\begin{pmatrix}\dfrac{1}{(50-\pi^2)^2} & \dfrac{-2}{(50-\pi^2)(50-(2\pi)^2)}\\[8pt] \dfrac{-2}{(50-\pi^2)(50-(2\pi)^2)} & \dfrac{4}{(50-(2\pi)^2)^2}\end{pmatrix}.$$
As we may realize, it is more practical to use only part of the information about
the state. One can take a modified feedback law such as
$$u(t) = F(y)(t) := \left\langle A\begin{pmatrix}\int_a^b y\sin(\pi x)\,dx\\[4pt] \int_a^b y\sin(2\pi x)\,dx\end{pmatrix},\ \begin{pmatrix}1\\[4pt] \frac12\end{pmatrix}\right\rangle_2, \quad (2.68)$$
2.4 Comments
The problem of boundary stabilization of the heat equation was first solved in the
pioneering work of Triggiani [116]. His approach was based on spectral decompo-
sition, similar to what we do here and to what has been done in many papers on
this subject. Then several other methods were proposed for deriving new types of
controls. One of the most fruitful is the so-called backstepping method, developed
by Krstic and coworkers; see, for example, [1, 27, 29, 38, 87, 114, 124, 127] or
the book [75]. Let us briefly present it here, since it can be related to the results of a
subsequent chapter. Let us consider the reaction–diffusion equation on (0, 1):
$$y_t(t,x) = y_{xx}(t,x) + \lambda y(t,x),\qquad y(t,0)=0,\quad y(t,1)=u(t). \quad (2.69)$$
One looks for an invertible transformation of Volterra type, $w(t,x) = y(t,x) - \int_0^x k(x,\xi)y(t,\xi)\,d\xi$, with the kernel k such that w satisfies the target stable equation

$$w_t(t,x) = w_{xx}(t,x),\qquad w(t,0)=0,\quad w(t,1)=0. \quad (2.70)$$
Thus, the whole problem reduces to finding a kernel k that ensures this passage.
Plugging the form of w into Eq. (2.70), one easily deduces that k must obey

$$k_{xx}(x,\xi) - k_{\xi\xi}(x,\xi) = \lambda k(x,\xi),\quad x,\xi\in(0,1);\qquad k(x,0)=0,\quad k(x,x)=-\frac{\lambda}{2}x,\ x\in(0,1). \quad (2.71)$$
These form a well-posed PDE of hyperbolic type in the Goursat form. Moreover,
one can obtain explicitly the form of the kernel k, and consequently the form of the
feedback u.
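The Goursat problem (2.71) is known, in the backstepping literature (Krstic and coworkers), to admit a closed-form kernel involving the modified Bessel function $I_1$; a small sketch verifying its boundary conditions, with an assumed value of λ:

```python
import numpy as np
from scipy.special import i1   # modified Bessel function I_1

lam = 10.0                      # assumed reaction coefficient

def kernel(x, xi):
    # Closed form reported in the backstepping literature:
    # k(x, xi) = -lam*xi * I_1(z)/z, with z = sqrt(lam*(x^2 - xi^2)).
    z = np.sqrt(lam * (x**2 - xi**2))
    if z < 1e-8:                # I_1(z)/z -> 1/2 as z -> 0
        return -lam * xi * 0.5
    return -lam * xi * i1(z) / z

x = 0.7
print(kernel(x, 0.0))           # boundary condition k(x,0) = 0
print(kernel(x, x))             # boundary condition k(x,x) = -lam*x/2

# The resulting feedback uses the trace of the kernel at x = 1:
xi = np.linspace(0.0, 1.0, 201)
gain = np.array([kernel(1.0, s) for s in xi])
```

The feedback is then $u(t)=\int_0^1 k(1,\xi)y(t,\xi)\,d\xi$, obtained from $w(t,1)=0$.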
A more direct method is the so-called design of proportional type feedback, which
we use here as well. We begin by mentioning the significant results obtained by Barbu
in [12, 13]; see also the monograph [14]. As mentioned before, the design algorithm
we developed here is based on the ideas in [13]. So in order to have a clear comparison
between what we have presented here and [13], let us briefly describe what is stated
and proved in that work. Consider the parabolic equation (see also Examples 2.1, 2.2)
$$y_t(t,x) = \Delta y(t,x) + f(x,y(t,x))\quad\text{in }(0,\infty)\times\mathcal O;\qquad y = u\ \text{ on }\Gamma_1,\quad \frac{\partial y}{\partial n} = 0\ \text{ on }\Gamma_2. \quad (2.72)$$
Under the assumption that the traces $\big\{\frac{\partial\varphi_j}{\partial n}\big\}_{j=1}^N$ are linearly independent in $L^2(\Gamma_1)$, the feedback

$$u = \eta\sum_{j=1}^N\mu_j\langle y,\varphi_j\rangle\Phi_j$$

stabilizes (2.72), where the functions $\Phi_j$ satisfy

$$\Big\langle\Phi_j,\frac{\partial\varphi_l}{\partial n}\Big\rangle_0 = \delta_{jl},\quad j,l=1,2,\dots,N.$$

It is clear that such $\Phi_j$ can be constructed if and only if the above hypothesis on linear independence holds. Here ⟨·,·⟩₀ stands for the scalar product in $L^2(\Gamma_1)$.
Note that this u can be equivalently written as

$$u = \eta\left\langle\Lambda_1B^{-1}\begin{pmatrix}\langle y,\varphi_1\rangle\\ \langle y,\varphi_2\rangle\\ \vdots\\ \langle y,\varphi_N\rangle\end{pmatrix},\ \begin{pmatrix}\frac{\partial\varphi_1}{\partial n}\\ \frac{\partial\varphi_2}{\partial n}\\ \vdots\\ \frac{\partial\varphi_N}{\partial n}\end{pmatrix}\right\rangle_N,$$

where $\Lambda_1 = \mathrm{diag}(\mu_1,\dots,\mu_N)$ and B is the Gram matrix of the system $\big\{\frac{\partial}{\partial n}\varphi_j|_{\Gamma_1}\big\}_{j=1}^N$ in $L^2(\Gamma_1)$. One can clearly see the similarity between this u and the control v in (2.26).
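The Gram matrix B and the resulting feedback coefficients can be sketched numerically; the boundary traces below are fabricated stand-ins, assumed linearly independent.

```python
import numpy as np

# Toy sketch (assumed data): boundary traces t_j = d(phi_j)/dn on Gamma_1,
# sampled on a parametrization grid of Gamma_1.
s = np.linspace(0.0, 1.0, 400)
ds = s[1] - s[0]
traces = np.array([np.sin((j + 1) * np.pi * s) for j in range(3)])

# Gram matrix B_{jl} = <t_j, t_l>_{L^2(Gamma_1)}, approximated on the grid
B = traces @ traces.T * ds
assert np.linalg.matrix_rank(B) == 3   # linear independence <=> B invertible

# Feedback coefficients: u = eta * <Lambda_1 B^{-1} Y, traces>, Y_j = <y, phi_j>
eta, mu = 1.0, np.array([1.0, 2.0, 3.0])
Y = np.array([0.5, -0.2, 0.1])          # assumed modal coefficients of y
coeffs = eta * np.diag(mu) @ np.linalg.solve(B, Y)
u_on_boundary = coeffs @ traces         # control profile along Gamma_1
```

Inverting B is exactly what enforces the biorthogonality relations satisfied by the $\Phi_j$ above.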
In the same proportional-type feedback context, we mention as well the recent
work of Lasiecka and Triggiani [82], in which the hypothesis of semisimple eigen-
values is dropped, but instead an additional internal controller is inserted into the
equations.
Other stabilization results are obtained for specific models, and they will be men-
tioned in the relevant subsequent chapters.
The results in this chapter, carried out for the particular case presented in Examples
2.1, 2.2, appeared in the work Munteanu [105], while their formulation here in
the general framework for an abstract parabolic differential operator of the type
$\frac{\partial}{\partial t} + A + F_0$, obeying assumptions (A1)–(A6), is new. The need to consider the
general abstract context is to emphasize that in all that follows in the chapters below,
we are presenting just some particular cases. The whole effort is to show that (A1)–
(A6) (especially (A5)) are satisfied for the considered examples. Consequently, the
present controller design algorithm is not confined to the considered models, but
can be applied by those working on this subject to a larger spectrum of models,
generically named parabolic-like equations.
The feedback designed here has many advantages: it is linear and of finite-dimen-
sional structure, expressed in a very simple form involving only the eigenfunctions
of the linear operator derived from the linearized equations, and is therefore easy to
manipulate from the numerical point of view.
We mention that one can easily adapt the present feedback control design tech-
nique to systems of the form
$$y_t = \Delta y + f(x,y)\quad\text{in }(0,\infty)\times\mathcal O;\qquad y = u\ \text{ on }\Gamma_1,\quad \frac{\partial y}{\partial n} = 0\ \text{ on }\Gamma_2, \quad (2.73)$$
However, we will not go into details about this problem here, since later, we will
treat different types of systems such as the Navier–Stokes equations, the magne-
tohydrodynamics equations, and the phase field equations. Concerning the internal
stabilization of (2.73), one may consult the work [25].
Other important topics related to the control of parabolic-like equations, for
instance exact and approximate controllability and optimal control, are beyond the
scope of this presentation, and we refer to Coron’s book [50] for significant recent
results in this direction.
The numerical examples were published in the work [86] of Liu et al.
Chapter 3
Stabilization of Periodic Flows
in a Channel
Here we apply the control design algorithm from Chap. 2 to the Navier–Stokes
equations, placed in a particular geometry, namely a semi-infinite channel. The high
instability of the Navier–Stokes equations is well known as is the fact that the principal
way to suppress the turbulence occurring in the dynamics of a fluid is to plug in a
stabilizing feedback control. In addition, a Riccati-based robust controller is also
constructed.
and

$$\begin{cases} u_t - \nu\Delta u + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + w\frac{\partial u}{\partial z} = -\frac{\partial p}{\partial x},\\[4pt] v_t - \nu\Delta v + u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} + w\frac{\partial v}{\partial z} = -\frac{\partial p}{\partial y},\\[4pt] w_t - \nu\Delta w + u\frac{\partial w}{\partial x} + v\frac{\partial w}{\partial y} + w\frac{\partial w}{\partial z} = -\frac{\partial p}{\partial z},\\[4pt] \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} = 0,\quad \forall t\ge0,\ x,z\in\mathbb R,\ y\in(0,1),\\[2pt] (u,v,w,p)(t,x+2\pi,y,z+2\pi) = (u,v,w,p)(t,x,y,z),\quad \forall t\ge0,\ x,z\in\mathbb R,\ y\in(0,1),\\[2pt] (u,w)(t,x,0,z) = (u,w)(t,x,1,z) = 0,\\[2pt] v(t,x,0,z) = 0,\quad v(t,x,1,z) = \Psi(t,x,z),\quad \forall t\ge0,\ x,z\in\mathbb R, \end{cases} \quad (3.1)$$

and the initial data
and the initial data
where the standard notation includes u for the streamwise velocity, v for the wall-
normal velocity, w for the spanwise velocity, p for the pressure, ν for the viscosity
coefficient of the fluid, while Ψ is the control. The incompressibility of the fluid is
described by the divergence-free condition.
In order not to deal with infinite domains, we have assumed that both the velocity
field and the pressure are 2π -periodic in the first and last spatial coordinates (for
more details on this, one may consult the work [33]). Instead of 2π , one could take
any L > 0, and all the results below still hold. However, we keep the particular period
2π for the sake of simplicity of notation.
The parabolic Poiseuille profile, denoted here by
such that

$$\sum_{k,l\in\mathbb Z}\int_0^1|u_{kl}(y)|^2\,dy < \infty,$$

and

$$\psi_{00}\equiv\psi_{0l}\equiv0.$$
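The Fourier decomposition in the two periodic directions can be sketched with a normalized FFT; the grid and the sample field below are assumed for illustration.

```python
import numpy as np

# Fourier coefficients v_kl(y) of a field that is 2*pi-periodic in x and z.
nx, ny, nz = 32, 17, 32
x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
y = np.linspace(0, 1, ny)
z = np.linspace(0, 2 * np.pi, nz, endpoint=False)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")

v = np.sin(X) * Y * (1 - Y) * np.cos(2 * Z)       # assumed sample field

# v_kl(y) ~ normalized 2-D FFT over the periodic axes x (axis 0) and z (axis 2)
v_hat = np.fft.fft2(v, axes=(0, 2)) / (nx * nz)

# Parseval: sum_{k,l} |v_kl(y)|^2 equals the (x,z)-average of |v|^2 at each y
lhs = np.sum(np.abs(v_hat)**2, axis=(0, 2))
rhs = np.mean(np.abs(v)**2, axis=(0, 2))
print(np.allclose(lhs, rhs))                       # True
```

This Parseval identity is what justifies proving decay of the full solution mode by mode, as done below.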
$t\ge0$, and that

$$\Big|\frac{d}{dt}\psi_{kl}(t)\Big| + |\psi_{kl}(t)| \le Ce^{-\mu t},\quad t\ge0, \quad (3.8)$$

for some positive constants C, μ, such that, once plugged into (3.5), we have

$$\|-v_{kl}''(t) + (k^2+l^2)v_{kl}(t)\|^2 \le Ce^{-\mu t},\quad t\ge0. \quad (3.9)$$
We scalar multiply Eq. (3.10) by $V_{kl}$, take the real part of the result, and obtain

$$\begin{aligned} \frac12\frac{d}{dt}\big(\|V_{kl}'\|^2 &+ (k^2+l^2)\|V_{kl}\|^2\big) + \nu\|V_{kl}''\|^2 + 2\nu(k^2+l^2)\|V_{kl}'\|^2 + \nu(k^2+l^2)^2\|V_{kl}\|^2\\ &= \Re\Big\{ik\int_0^1U'V_{kl}\overline{V_{kl}'}\,dy + \int_0^1\big[-(12y-6)\psi_{kl} + (k^2+l^2)(2y^3-3y^2)\psi_{kl}\big]_t\overline{V_{kl}}\,dy\\ &\qquad - \int_0^1\big(2\nu(k^2+l^2)+ikU\big)(12y-6)\psi_{kl}\overline{V_{kl}}\,dy\\ &\qquad + \int_0^1\big(\nu(k^2+l^2)^2 + ik(k^2+l^2)U + ikU''\big)(2y^3-3y^2)\psi_{kl}\overline{V_{kl}}\,dy\Big\}, \end{aligned} \quad (3.11)$$

and hence

$$\frac12\frac{d}{dt}\big(\|V_{kl}'\|^2 + (k^2+l^2)\|V_{kl}\|^2\big) + \nu(k^2+l^2)\big(\|V_{kl}'\|^2 + (k^2+l^2)\|V_{kl}\|^2\big) \le C_{kl}\Big(\|V_{kl}\|^2 + |\psi_{kl}|^2 + \Big|\frac{d}{dt}\psi_{kl}\Big|^2\Big),\quad t\ge0, \quad (3.12)$$
for some $C_{kl}>0$. By (3.8), (3.9), and the definition of $V_{kl}$, it follows that (3.13) holds, for some $C_{kl}>0$. By (3.12), together with (3.13) and (3.8), we obtain a decay estimate for $V_{kl}$. Therefore,

$$\|v_{kl}(t)\|^2 \le Ce^{-\mu t}\|v_{kl}^0\|^2,\quad t\ge0. \quad (3.14)$$
Multiplying the first equation of system (3.5) by il and the third by −ik and summing
them, we get that
Scalar multiplying Eq. (3.16) by $(ilu_{kl}-ikw_{kl})$ and taking the real part of the result, we obtain that

$$\frac12\frac{d}{dt}\|ilu_{kl}-ikw_{kl}\|^2 + \nu(k^2+l^2)\|ilu_{kl}-ikw_{kl}\|^2 + \nu\|(ilu_{kl}-ikw_{kl})'\|^2 = \Re\Big\{-il\int_0^1U'v_{kl}\overline{(ilu_{kl}-ikw_{kl})}\,dy\Big\}.$$

Hence

$$\frac12\frac{d}{dt}\|ilu_{kl}-ikw_{kl}\|^2 + \nu(k^2+l^2)\|ilu_{kl}-ikw_{kl}\|^2 \le |l|\Big|\int_0^1U'v_{kl}\overline{(ilu_{kl}-ikw_{kl})}\,dy\Big| \le \frac{\nu(k^2+l^2)}{2}\|ilu_{kl}-ikw_{kl}\|^2 + \frac{|l|^2a^2}{2\nu(k^2+l^2)}\|v_{kl}\|^2.$$

This yields

$$\frac{d}{dt}\|ilu_{kl}-ikw_{kl}\|^2 + \nu(k^2+l^2)\|ilu_{kl}-ikw_{kl}\|^2 \le \frac{a^2}{2\nu}\|v_{kl}\|^2.$$
From the above estimate and relation (3.9), one can easily obtain that
We intend to write the system (3.5) in an abstract form, aiming to apply the control design for abstract parabolic-like equations from Chap. 2. To this end, we define the following operators: for each $0\ne k\in\mathbb Z$, we define $\mathcal L_k: D(\mathcal L_k)\subset H\to H$ and $\mathcal F_k: D(\mathcal F_k)\subset H\to H$ by

$$\mathcal L_kv := -v'' + k^2v,\quad D(\mathcal L_k) = H^2(0,1)\cap H_0^1(0,1), \quad (3.20)$$

$$\mathcal F_kv := \nu v'''' - (2\nu k^2 + ikU)v'' + k(\nu k^3 + ik^2U + iU'')v,$$

and, for $k^2+l^2\ne0$,

$$\mathcal L_{kl}v := -v'' + (k^2+l^2)v,\quad D(\mathcal L_{kl}) = H_0^1(0,1)\cap H^2(0,1), \quad (3.23)$$

$$\mathcal F_{kl}v := \nu v'''' - [2\nu(k^2+l^2)+ikU]v'' + [\nu(k^2+l^2)^2 + ik(k^2+l^2)U + ikU'']v, \quad (3.24)$$

so that (see (3.26)–(3.28))

$$\mathcal L_kv := -v'' + k^2v,\qquad \mathcal L_{kl}v := -v'' + (k^2+l^2)v, \quad (3.26)$$

$$\mathcal F_kv := \nu v'''' - (2\nu k^2 + ikU)v'' + k(\nu k^3 + ik^2U + iU'')v, \quad (3.27)$$

$$\mathcal F_{kl}v := \nu v'''' - [2\nu(k^2+l^2)+ikU]v'' + [\nu(k^2+l^2)^2 + ik(k^2+l^2)U + ikU'']v. \quad (3.28)$$
and

$$\sigma(-\mathcal A_{kl}) \subset \{\lambda\in\mathbb C : \Re\lambda\le-\gamma\},\quad \forall\,k^2+l^2 > S,$$

where

$$S := \frac{1}{\sqrt{2\nu}}\Big(1+\gamma+\frac{a}{\sqrt2\,\nu}\Big)^{1/2}. \quad (3.29)$$
Here σ (−A) is the spectrum of the operator −A and ρ(−A) is the resolvent set
of −A.
Proof We will consider only the more complex case of −Akl (for −Ak , one may
argue likewise, because of their similar forms). So for λ ∈ C and f ∈ H , consider
the equation
λg + Akl g = f,
or equivalently,
λLkl v + Fkl v = f. (3.30)
Scalar multiplying this equation by v and taking into account (3.23) and (3.24) yields

$$\lambda\int_0^1\big(|v'|^2 + (k^2+l^2)|v|^2\big)dy + \nu\int_0^1|v''|^2dy + 2\nu(k^2+l^2)\int_0^1|v'|^2dy + \nu(k^2+l^2)^2\int_0^1|v|^2dy + k\int_0^1U(v'\bar v - \bar v'v)\,dy = \langle f,v\rangle \quad (3.31)$$

and

$$\Im\lambda\int_0^1\big(|v'|^2 + (k^2+l^2)|v|^2\big)dy + k\int_0^1U|v'|^2dy + k\int_0^1\Big((k^2+l^2)U + \frac12U''\Big)|v|^2dy = \Im\langle f,v\rangle. \quad (3.32)$$

Then, via Poincaré's inequality, we see by (3.31) and (3.32) that for some r > 0,

$$|(\lambda I + \mathcal A_{kl})^{-1}f| \le \frac{C}{|\lambda|-r}|f|\quad\text{for } |\lambda|>r,$$
which implies, via the Hille–Yosida theorem (see Chap. 1), that −Akl is the infinitesi-
mal generator of a C0 -analytic semigroup, denoted by e−Akl t , t ≥ 0, on H . Moreover,
by (3.31), (3.32) we see that (λI + Akl )−1 is compact on H , and it follows also that
all the eigenvalues λ of −Akl satisfy the estimates
$$\begin{aligned} \Re\lambda\int_0^1\big(|v'|^2 &+ (k^2+l^2)|v|^2\big)dy + 2\nu(k^2+l^2)\int_0^1|v'|^2dy + \nu\int_0^1|v''|^2dy + \nu(k^2+l^2)^2\int_0^1|v|^2dy\\ &\le \Big|k\int_0^1U(v'\bar v - \bar v'v)\,dy\Big| \le 2\nu k^2\int_0^1|v'|^2dy + \frac{1}{2\nu}\int_0^1|U|^2|v|^2dy\\ &\le 2\nu(k^2+l^2)\int_0^1|v'|^2dy + \frac{a^2}{8\nu^3}\int_0^1|v|^2dy, \end{aligned}$$

where $\mathcal Av = -\lambda v$. By the above estimate we see that for γ > 0 arbitrary but fixed, we have

$$\Re\lambda \le -\gamma\quad\text{if}\quad k^2+l^2 \ge \frac{1}{\sqrt{2\nu}}\Big(1+\gamma+\frac{a}{\sqrt2\,\nu}\Big)^{1/2},$$
As announced in Chap. 2, this task is not an easy one. Even though the fourth-order equation (3.34) has five null boundary conditions, we cannot deduce immediately that the only solution is the trivial one, since the two boundary conditions at y = 0 may be linearly dependent on those given at y = 1. To overcome this problem, we take into account the special form of the equation, more precisely its symmetric nature. Indeed, by the form of U in (3.3), one can easily see that
and it turns out that these six null boundary conditions are enough to establish our
claim.
Lemma 3.3 Let $\lambda_j^k$, for some 0 < |k| ≤ S and $j\in\{1,\dots,N_k\}$, be an unstable eigenvalue of $-\mathcal A_k^*$. Then we can choose the corresponding adjoint eigenfunction $\varphi_j^{k*}$ such that $(\varphi_j^{k*})'(1) > 0$.

Moreover, let $\lambda_j^{kl}$, for some $0<\sqrt{k^2+l^2}\le S$ and $j\in\{1,\dots,N_{kl}\}$, be an unstable eigenvalue of $-\mathcal A_{kl}^*$. Then we can choose the corresponding adjoint eigenfunction $\varphi_j^{kl*}$ such that $(\varphi_j^{kl*})'(1) > 0$.
Proof We will consider only the more complex case, the eigenvalues and eigenfunctions of $-\mathcal A_{kl}^*$, while for $-\mathcal A_k^*$ one may construct a similar argument. We aim to show that we can choose the eigenfunction $\varphi_j^{kl*}$ such that $(\varphi_j^{kl*})'(1)\ne0$. Then, if needed, replacing $\varphi_j^{kl*}$ by $\overline{(\varphi_j^{kl*})'(1)}\,\varphi_j^{kl*}$, we obtain the desired result. The proof follows in three steps.
Step 1. For a function $f:[0,1]\to\mathbb C$, let us denote by $\check f:[0,1]\to\mathbb C$ the function $\check f(y) := f(1-y)$, $y\in[0,1]$. We say that the function $f:[0,1]\to\mathbb C$ is symmetric if $f(y)=\check f(y)$, $\forall y\in[0,1]$, and antisymmetric if $f(y)=-\check f(y)$, $\forall y\in[0,1]$. In this step, we show that we can choose a basis of the adjoint eigenfunction space consisting of symmetric or antisymmetric functions.
Let us observe that if ϕ ∗ is a solution to (3.35), then ϕˇ∗ is also a solution to (3.35),
because of the symmetric form of the equation.
Let us denote by $\mathcal H$ the four-dimensional linear space of the solutions to the fourth-order linear homogeneous differential equation

$$\nu(\varphi^*)'''' - \big(2\nu(k^2+l^2) - ikU + \bar\lambda\big)(\varphi^*)'' + 2ikU'(\varphi^*)' + \big((k^2+l^2)\bar\lambda + \nu(k^2+l^2)^2 - ik(k^2+l^2)U\big)\varphi^* = 0,\quad\text{a.e. in }(0,1).$$

Then the eigenfunction space can be written as the linear space $\mathcal E$, defined as

$$\mathcal E := \big\{\varphi\in\mathcal H : \varphi(0)=\varphi(1)=0,\ \varphi'(0)=\varphi'(1)=0\big\}.$$

It is easy to see that the dimension of $\mathcal E$ is ≤ 2. We claim that we can find a basis for this linear space consisting of symmetric functions or antisymmetric functions. Indeed, let us assume that there exists $\varphi\in\mathcal E$ that is neither symmetric nor antisymmetric. Then the two functions

$$\varphi_1 := \varphi + \check\varphi\quad\text{and}\quad \varphi_2 := \varphi - \check\varphi$$
The proof of this claim will be given in the last step of the proof. In this step, we
will prove that under the above claim, there exists a solution to the Eq. (3.38), and
so we obtain the desired result.
Let us construct the function $\varphi_2 := \varphi_1 + \check\varphi_1$. As seen before, the equation $\mathcal F_{kl}\varphi_1 = 0$ is symmetric, and this implies that if $\varphi_1$ is a solution, then so is $\check\varphi_1$. Hence we have

$$\begin{cases}\mathcal F_{kl}\varphi_2 = 0,\\ \varphi_2'(0) = \varphi_1'(0) - \varphi_1'(1) \ne 0,\\ \varphi_2\ \text{is symmetric}.\end{cases} \quad (3.40)$$
So we can take ϕ := ϕ5 .
Step 3. In the last step we show that there exists a function $\varphi_1$ such that

$$\mathcal F_{kl}\varphi_1 = 0,\ y\in(0,1);\qquad \varphi_1'(0) - \varphi_1'(1) \ne 0. \quad (3.42)$$

We assume, for the sake of a contradiction, that this is not true. Hence for every solution ψ to the equation $\mathcal F_{kl}\psi = 0$, we have $\psi'(0) - \psi'(1) = 0$.

Let us denote by $\mathcal H_1$ the linear space of the solutions to the equation $\mathcal F_{kl}\psi = 0$, and by $\mathcal E_1$ the linear subspace of $\mathcal H_1$ defined by

$$\mathcal E_1 := \big\{\psi\in\mathcal H_1 : \psi'(0) - \psi'(1) = 0\big\}.$$

Let $\psi\in\mathcal E_1$ and set $\Psi := \psi + \check\psi$. Then $\mathcal F_{kl}\Psi = 0$, since $\psi\in\mathcal H_1$. Also,

$$\Psi'(0) = \psi'(0) - \psi'(1) = 0,\qquad \Psi'(1) = \psi'(1) - \psi'(0) = 0,$$

since $\psi\in\mathcal E_1$.

Let us set $\Phi := \Psi'' - (k^2+l^2)\Psi$. The equation $\mathcal F_{kl}\Psi = 0$ can be rewritten in the form

$$\nu\Phi'' - \big(\nu(k^2+l^2) + ikU + \lambda\big)\Phi + ikU''\Psi = 0, \quad (3.43)$$

and

$$\Psi'' - (k^2+l^2)\Psi = \Phi. \quad (3.44)$$

Observe that since $\Psi(0)=\Psi(1)=0$ and $\Psi''(0)=\Psi''(1)=0$, we have

$$\Phi(0) = \Phi(1) = 0,$$

and

$$-\int_0^1|\Psi'|^2dy - (k^2+l^2)\int_0^1|\Psi|^2dy = \int_0^1\Phi\bar\Psi\,dy. \quad (3.46)$$
62 3 Stabilization of Periodic Flows in a Channel
From (3.46) we see that $\int_0^1\Psi\bar\Phi\,dy$ is a real number. Using this and taking the real part of (3.45), we obtain that

$$-\nu\int_0^1|\Phi'|^2dy - \big(\nu(k^2+l^2) + \Re\lambda\big)\int_0^1|\Phi|^2dy = 0.$$

Since λ is an unstable eigenvalue, we have that $\Re\lambda > 0$. So the relation above yields $\Phi\equiv0$, and consequently

$$\psi\in\mathcal E_1 \ \Rightarrow\ \psi = -\check\psi. \quad (3.47)$$
First, we consider separately only the equations for $u_{k0}$ and $v_{k0}$, that is,

$$\begin{cases} (u_{k0})_t - \nu[-k^2u_{k0} + u_{k0}''] + ikUu_{k0} + U'v_{k0} = -ikp_{k0}, & \text{a.e. in }(0,1),\\ (v_{k0})_t - \nu[-k^2v_{k0} + v_{k0}''] + ikUv_{k0} = -p_{k0}', & \text{a.e. in }(0,1),\\ iku_{k0} + v_{k0}' = 0, & \text{a.e. in }(0,1),\\ u_{k0}(0) = u_{k0}(1) = 0,\ v_{k0}(0) = 0,\ v_{k0}(1) = \psi_{k0}, & \forall t\ge0. \end{cases} \quad (3.50)$$

We eliminate the pressure from the system and use the divergence-free condition to get that $v_{k0}$ satisfies the equation

$$\begin{cases} (-v_{k0}'' + k^2v_{k0})_t + \nu v_{k0}'''' - (2\nu k^2 + ikU)v_{k0}'' + k(\nu k^3 + ik^2U + iU'')v_{k0} = 0,\\ v_{k0}'(0) = v_{k0}'(1) = 0,\quad v_{k0}(0) = 0,\quad v_{k0}(1) = \psi_{k0},\quad \forall t\ge0, \end{cases}$$
where $\mathcal L_k$ and $\mathcal F_k$ are given in (3.26). Note that formally, Eq. (3.52) may be rewritten as

$$(z_k)_t + \mathcal A_kz_k = 0$$

by setting $z_k := \mathcal L_kv_{k0}$, where we recall the operators $-\mathcal A_k$ with their eigenvalues $\{\lambda_j^k\}_j$ and their eigenfunctions $\{\varphi_j^k\}_j$, described in the previous section.
For the sake of simplicity, we assume that
which is the counterpart of hypothesis (A4.1) from Chap. 2. The present algorithm
works equally well in the case of semisimple eigenvalues by doing tricks similar to
those in Chap. 2. However, we will not develop this subject here since the presentation
may get too hard to follow.
Using (if necessary) the Gram–Schmidt procedure, we may assume that the systems $\{\varphi_j^k\}_{j=1}^{N_k}$ and $\{\varphi_j^{k*}\}_{j=1}^{N_k}$ are biorthonormal, that is,

$$\langle\varphi_i^k,\varphi_j^{k*}\rangle = \delta_{ij},\quad i,j=1,\dots,N_k,$$
(It is known that for γ > 0 large enough, the above equation has a unique solution in $H^2(0,1)$.) Next, let us compute $\langle\mathcal L_kD_\gamma\alpha,\varphi_m^{k*}\rangle$, for some $1\le m\le N_k$. To this end, we have from (3.54), scalar multiplied by $\varphi_m^{k*}$, and by the biorthogonality of the eigenfunction systems that
We choose Nk constants 0 < γ1k < γ2k < · · · < γ Nk k large enough that Eq. (3.54),
corresponding to each γik , i = 1, . . . , Nk , has a solution, and denote by Dγik , i =
1, . . . , Nk , the corresponding solutions.
Now for each 0 < |k| ≤ S, we introduce the feedback $\psi_{k0}$ as (see the proportional feedback defined in (2.26) in Chap. 2)

$$\psi_{k0}(t) := -\left\langle\Lambda_{sum}^kA^k\begin{pmatrix}\langle\mathcal L_kv_{k0}(t),\varphi_1^{k*}\rangle\\ \langle\mathcal L_kv_{k0}(t),\varphi_2^{k*}\rangle\\ \vdots\\ \langle\mathcal L_kv_{k0}(t),\varphi_{N_k}^{k*}\rangle\end{pmatrix},\ \begin{pmatrix}(\varphi_1^{k*})'(1)\\ (\varphi_2^{k*})'(1)\\ \vdots\\ (\varphi_{N_k}^{k*})'(1)\end{pmatrix}\right\rangle_{N_k}, \quad (3.56)$$

where $\Lambda_{sum}^k := \Lambda_{\gamma_1^k}^k + \dots + \Lambda_{\gamma_{N_k}^k}^k$, with

$$\Lambda_{\gamma_i^k}^k := \mathrm{diag}\Big(\frac{1}{\gamma_i^k+\lambda_1^k},\ \frac{1}{\gamma_i^k+\lambda_2^k},\ \dots,\ \frac{1}{\gamma_i^k+\lambda_{N_k}^k}\Big),\quad i=1,\dots,N_k. \quad (3.57)$$

Moreover,

$$A^k := (B_1^k + B_2^k + \dots + B_{N_k}^k)^{-1} \quad (3.58)$$

(the counterpart of the Gram matrix B introduced in (2.20) in Chap. 2). Here ⟨·,·⟩_N stands for the classical scalar product in $\mathbb C^N$.

By Lemma 3.3, we know that $(\varphi_i^{k*})'(1)\ne0$, $i=1,\dots,N_k$, and therefore hypothesis (A5) is verified in the present case. Hence, just as in Proposition 2.1, one can show that the above matrices $A^k$ are well defined, and consequently the feedback $\psi_{k0}$ is well defined.
We plug the above ψk0 into (3.52) and show that it ensures its stability.
Similarly to (2.25), we decompose $\psi_{k0}$ as $\psi_{k0} = \sum_{i=1}^{N_k}v_i^k$, where

$$v_i^k(t) := -\left\langle A^k\begin{pmatrix}\langle\mathcal L_kv_{k0}(t),\varphi_1^{k*}\rangle\\ \langle\mathcal L_kv_{k0}(t),\varphi_2^{k*}\rangle\\ \vdots\\ \langle\mathcal L_kv_{k0}(t),\varphi_{N_k}^{k*}\rangle\end{pmatrix},\ \begin{pmatrix}\frac{1}{\gamma_i^k+\lambda_1^k}(\varphi_1^{k*})'(1)\\ \frac{1}{\gamma_i^k+\lambda_2^k}(\varphi_2^{k*})'(1)\\ \vdots\\ \frac{1}{\gamma_i^k+\lambda_{N_k}^k}(\varphi_{N_k}^{k*})'(1)\end{pmatrix}\right\rangle_{N_k},\quad t\ge0. \quad (3.61)$$
In the next lines, the approach will slightly differ from that in Chap. 2, in the sense
that we will not consider the equivalent reformulation of Eq. (3.52) via the variation
of constants formula and the extension operators. Instead, we perform computations
directly in Eq. (3.52). The reason is that in this case, we do not care about the nonlinear
equation, but only the linearized one.
Returning to the linear equation (3.52), we define $z^k := \mathcal L_kv_{k0} - \sum_{i=1}^{N_k}\mathcal L_kD_{\gamma_i^k}v_i^k$ and obtain

$$(z^k)_t = -\mathcal A_kz^k + 2\sum_{i,j=1}^{N_k}\lambda_j^k\langle\mathcal L_kD_{\gamma_i^k}v_i^k,\varphi_j^{k*}\rangle\varphi_j^k + \sum_{i=1}^{N_k}\gamma_i^k\mathcal L_kD_{\gamma_i^k}v_i^k - \Big(\sum_{i=1}^{N_k}\mathcal L_kD_{\gamma_i^k}v_i^k\Big)_t. \quad (3.63)$$
In terms of the new variable $z^k$, the feedbacks $v_i^k$, $i=1,\dots,N_k$, keep the same form, up to a factor $\frac12$. Indeed,

$$\begin{aligned} \frac12&\left\langle A^k\begin{pmatrix}\langle z^k,\varphi_1^{k*}\rangle\\ \vdots\\ \langle z^k,\varphi_{N_k}^{k*}\rangle\end{pmatrix},\ \begin{pmatrix}\frac{1}{\gamma_i^k+\lambda_1^k}(\varphi_1^{k*})'(1)\\ \vdots\\ \frac{1}{\gamma_i^k+\lambda_{N_k}^k}(\varphi_{N_k}^{k*})'(1)\end{pmatrix}\right\rangle_{N_k}\\ &= \frac12\left\langle A^k\begin{pmatrix}\langle\mathcal L_kv_{k0},\varphi_1^{k*}\rangle\\ \vdots\\ \langle\mathcal L_kv_{k0},\varphi_{N_k}^{k*}\rangle\end{pmatrix},\ \begin{pmatrix}\frac{1}{\gamma_i^k+\lambda_1^k}(\varphi_1^{k*})'(1)\\ \vdots\\ \frac{1}{\gamma_i^k+\lambda_{N_k}^k}(\varphi_{N_k}^{k*})'(1)\end{pmatrix}\right\rangle_{N_k} - \frac12\sum_{j=1}^{N_k}\left\langle A^k\begin{pmatrix}\langle\mathcal L_kD_{\gamma_j^k}v_j^k,\varphi_1^{k*}\rangle\\ \vdots\\ \langle\mathcal L_kD_{\gamma_j^k}v_j^k,\varphi_{N_k}^{k*}\rangle\end{pmatrix},\ \begin{pmatrix}\frac{1}{\gamma_i^k+\lambda_1^k}(\varphi_1^{k*})'(1)\\ \vdots\\ \frac{1}{\gamma_i^k+\lambda_{N_k}^k}(\varphi_{N_k}^{k*})'(1)\end{pmatrix}\right\rangle_{N_k} \end{aligned}$$

(taking into account relation (3.62))

$$= \frac12\left\langle\big(I + A^k(B_1^k+\dots+B_{N_k}^k)\big)A^k\begin{pmatrix}\langle\mathcal L_kv_{k0},\varphi_1^{k*}\rangle\\ \vdots\\ \langle\mathcal L_kv_{k0},\varphi_{N_k}^{k*}\rangle\end{pmatrix},\ \begin{pmatrix}\frac{1}{\gamma_i^k+\lambda_1^k}(\varphi_1^{k*})'(1)\\ \vdots\\ \frac{1}{\gamma_i^k+\lambda_{N_k}^k}(\varphi_{N_k}^{k*})'(1)\end{pmatrix}\right\rangle_{N_k} = -v_i^k,$$
Next, we decompose system (3.63) into its stable and unstable parts. Recall the projections $P_{N_k}$ and their adjoints $P_{N_k}^*$, defined by

$$P_{N_k} := \frac{1}{2\pi i}\int_\Gamma(\lambda I + \mathcal A_k)^{-1}d\lambda;\qquad P_{N_k}^* := \frac{1}{2\pi i}\int_{\bar\Gamma}(\lambda I + \mathcal A_k^*)^{-1}d\lambda,$$

where Γ (respectively its conjugate Γ̄) separates the unstable part of the spectrum of $-\mathcal A_k$ (respectively $-\mathcal A_k^*$) from the stable one. We set

$$z^k = z_{N_k} + \zeta_{N_k},\qquad z_{N_k} := P_{N_k}z^k,\quad \zeta_{N_k} := (I-P_{N_k})z^k,$$
$$\frac{d}{dt}z_{N_k} + \mathcal A_{N_k}^uz_{N_k} = P_{N_k}\left[2\sum_{i,j=1}^{N_k}\lambda_j^k\langle\mathcal L_kD_{\gamma_i^k}v_i^k,\varphi_j^{k*}\rangle\varphi_j^k + \sum_{i=1}^{N_k}\gamma_i^k\mathcal L_kD_{\gamma_i^k}v_i^k - \Big(\mathcal L_k\sum_{i=1}^{N_k}D_{\gamma_i^k}v_i^k\Big)_t\right] \quad (3.67)$$

and

$$\frac{d}{dt}\zeta_{N_k} + \mathcal A_{N_k}^s\zeta_{N_k} = (I-P_{N_k})\left[2\sum_{i,j=1}^{N_k}\lambda_j^k\langle\mathcal L_kD_{\gamma_i^k}v_i^k,\varphi_j^{k*}\rangle\varphi_j^k + \sum_{i=1}^{N_k}\gamma_i^k\mathcal L_kD_{\gamma_i^k}v_i^k - \Big(\mathcal L_k\sum_{i=1}^{N_k}D_{\gamma_i^k}v_i^k\Big)_t\right], \quad (3.68)$$

respectively.
Let us decompose $z_{N_k}$ as

$$z_{N_k}(t,y) = \sum_{j=1}^{N_k}\langle z^k(t),\varphi_j^{k*}\rangle\varphi_j^k(y).$$

Then, arguing as in Chap. 2, we arrive at

$$Z_t^k = \Lambda^kZ^k - \frac12\sum_{i=1}^{N_k}\Lambda^kB_i^kA^kZ^k - \frac12\sum_{i=1}^{N_k}\gamma_i^kB_i^kA^kZ^k + \frac12\sum_{i=1}^{N_k}B_i^kA^kZ_t^k,\quad t\ge0,$$

where

$$Z^k := \begin{pmatrix}\langle z^k(t),\varphi_1^{k*}\rangle\\ \langle z^k(t),\varphi_2^{k*}\rangle\\ \vdots\\ \langle z^k(t),\varphi_{N_k}^{k*}\rangle\end{pmatrix}\quad\text{and}\quad \Lambda^k := \mathrm{diag}(\lambda_1^k,\lambda_2^k,\dots,\lambda_{N_k}^k).$$

Recalling that $A^k = (B_1^k+\dots+B_{N_k}^k)^{-1}$, we see that the above relation yields

$$Z_t^k = -\gamma_1^kZ^k + \sum_{i=2}^{N_k}(\gamma_1^k-\gamma_i^k)B_i^kA^kZ^k,\quad t\ge0, \quad (3.69)$$
which is the counterpart of Eq. (2.39) from Chap. 2. Thus, continuing with arguments similar to those in (2.39)–(2.41), we conclude that

$$\|u_{k0}(t)\|^2 + \|v_{k0}(t)\|^2 + \|w_{k0}(t)\|^2 \le C_3e^{-\mu_3t}\big(\|u_{k0}^0\|^2 + \|v_{k0}^0\|^2 + \|w_{k0}^0\|^2\big),\quad \forall t\ge0,\ \forall|k|>0, \quad (3.72)$$

for some constants $C_3,\mu_3>0$, independent of k.

The case $l\ne0$ can be treated similarly to that above, obtaining that the feedback

$$\psi_{kl} = -\left\langle\Lambda_{sum}^{kl}A^{kl}\begin{pmatrix}\langle\mathcal L_{kl}v_{kl}(t),\varphi_1^{kl*}\rangle\\ \langle\mathcal L_{kl}v_{kl}(t),\varphi_2^{kl*}\rangle\\ \vdots\\ \langle\mathcal L_{kl}v_{kl}(t),\varphi_{N_{kl}}^{kl*}\rangle\end{pmatrix},\ \begin{pmatrix}(\varphi_1^{kl*})'(1)\\ (\varphi_2^{kl*})'(1)\\ \vdots\\ (\varphi_{N_{kl}}^{kl*})'(1)\end{pmatrix}\right\rangle_{N_{kl}}\quad\text{for } 0<k^2+l^2\le S, \quad (3.73)$$

$$\psi_{kl}\equiv0\quad\text{for } k^2+l^2>S,$$
ensures the stability. Here $\Lambda_{sum}^{kl} := \Lambda_{\gamma_1^{kl}}^{kl} + \dots + \Lambda_{\gamma_{N_{kl}}^{kl}}^{kl}$, for

$$\Lambda_{\gamma_i^{kl}}^{kl} := \mathrm{diag}\Big(\frac{1}{\gamma_i^{kl}+\lambda_1^{kl}},\ \frac{1}{\gamma_i^{kl}+\lambda_2^{kl}},\ \dots,\ \frac{1}{\gamma_i^{kl}+\lambda_{N_{kl}}^{kl}}\Big),\quad i=1,\dots,N_{kl}, \quad (3.74)$$

for some $0<\gamma_1^{kl}<\dots<\gamma_{N_{kl}}^{kl}$, $N_{kl}$ real constants sufficiently large. Moreover,

$$A^{kl} := (B_1^{kl} + \dots + B_{N_{kl}}^{kl})^{-1}, \quad (3.75)$$

where

$$B_i^{kl} := \Lambda_{\gamma_i^{kl}}^{kl}B^{kl}\Lambda_{\gamma_i^{kl}}^{kl},\quad i=1,\dots,N_{kl}. \quad (3.76)$$
$$\psi_{k0}(t) := -\left\langle\Lambda_{sum}^kA^k\begin{pmatrix}\int_{\mathcal O}(-v_{yy}(t)+k^2v(t))e^{-ikx}\varphi_1^{k*}(y)\,dx\,dy\,dz\\ \int_{\mathcal O}(-v_{yy}(t)+k^2v(t))e^{-ikx}\varphi_2^{k*}(y)\,dx\,dy\,dz\\ \vdots\\ \int_{\mathcal O}(-v_{yy}(t)+k^2v(t))e^{-ikx}\varphi_{N_k}^{k*}(y)\,dx\,dy\,dz\end{pmatrix},\ \begin{pmatrix}(\varphi_1^{k*})'(1)\\ (\varphi_2^{k*})'(1)\\ \vdots\\ (\varphi_{N_k}^{k*})'(1)\end{pmatrix}\right\rangle_{N_k}, \quad (3.80)$$

with $\varphi_j^{k*}$ the eigenfunctions of the adjoint operator $-\mathcal A_k^*$ of $-\mathcal A_k$ given by (3.22), and $\Lambda_{sum}^k$, $A^k$ defined by (3.56)–(3.58).

And for $0<\sqrt{k^2+l^2}\le S$,

$$\psi_{kl}(t) := -\left\langle\Lambda_{sum}^{kl}A^{kl}\begin{pmatrix}\int_{\mathcal O}[-v_{yy}(t)+(k^2+l^2)v(t)]e^{-ikx}e^{-ilz}\varphi_1^{kl*}(y)\,dx\,dy\,dz\\ \int_{\mathcal O}[-v_{yy}(t)+(k^2+l^2)v(t)]e^{-ikx}e^{-ilz}\varphi_2^{kl*}(y)\,dx\,dy\,dz\\ \vdots\\ \int_{\mathcal O}[-v_{yy}(t)+(k^2+l^2)v(t)]e^{-ikx}e^{-ilz}\varphi_{N_{kl}}^{kl*}(y)\,dx\,dy\,dz\end{pmatrix},\ \begin{pmatrix}(\varphi_1^{kl*})'(1)\\ (\varphi_2^{kl*})'(1)\\ \vdots\\ (\varphi_{N_{kl}}^{kl*})'(1)\end{pmatrix}\right\rangle_{N_{kl}}, \quad (3.81)$$

with $\varphi_j^{kl*}$ the eigenfunctions of the adjoint operator $-\mathcal A_{kl}^*$ of $-\mathcal A_{kl}$ given in (3.25); and $\Lambda_{sum}^{kl}$, $A^{kl}$ given in (3.74) and (3.75), respectively.
$$\begin{cases} \dfrac{\partial u}{\partial x} + \dfrac{\partial v}{\partial y} + \dfrac{\partial w}{\partial z} = 0, & \forall t \ge 0,\ x, z \in \mathbb{R},\ y \in (0,1), \\ (u, v, w, p)(t, x + 2\pi, y, z + 2\pi) = (u, v, w, p)(t, x, y, z), \\ (u, w)(t, x, 0, z) = (u, w)(t, x, 1, z) = 0, \\ v(t, x, 0, z) = 0, \quad v(t, x, 1, z) = \Psi(v), & \forall t \ge 0,\ x, z \in \mathbb{R}. \end{cases} \tag{3.82}$$
Observe that slight perturbations of the coefficients of Eq. (3.1) lead to different
eigenfunctions of the linearized operator. This means that under small perturbations,
the feedback given in Theorem 3.1 might no longer ensure the stability of the system.
A more robust controller can be constructed via a Riccati-based approach. This is
what we do in this section. We reconsider the stabilization problem associated with
system (3.5), at each level (k, l) ∈ Z2 , by looking for a feedback representation of
the controller ψkl in terms of an operator solving a Riccati algebraic equation. We
will use the standard technique, namely minimization of a cost functional.
Let us consider the case $k \neq 0$ and $l = 0$. As in (3.50)–(3.52), we get that $v_{k0}$
satisfies
$$\begin{cases} (L_k v_{k0})_t + F_k v_{k0} = 0, & t \ge 0,\ y \in (0,1), \\ v_{k0}(0) = v_{k0}(1) = 0, \quad v_{k0}'(0) = 0, \quad v_{k0}'(1) = \psi_{k0}(t). \end{cases} \tag{3.83}$$
For $\gamma^k > 0$ sufficiently large, there is a solution. Then, doing computations similar to
those in (2.27)–(2.29), we obtain that (3.83) is equivalent to
$$\frac{d}{dt} z_k + A_k z_k = A_k D_{\gamma^k} \psi_{k0}, \quad t \ge 0; \qquad z_k(0) = z_k^0 := L_k v_{k0}^0, \tag{3.85}$$
where $z_k := L_k v_{k0}$.
We associate to (3.85) the following linear quadratic control problem:
$$\phi(z_k^0) := \min \frac{1}{2}\int_0^\infty \big(\|L_k^{-1} z_k(t)\|^2 + |\psi_{k0}(t)|^2\big)\,dt, \tag{3.86}$$
$$\phi(z_k^0) \le a_2\,\|L_k^{-1} z_k^0\|^2, \quad \forall z_k^0 \in X. \tag{3.87}$$
It is easy to see that the map $z \mapsto \phi(z)$ is continuous on $X$, and thus $\|L_k^{-1} z\|^2 \le c\,\phi(z)$. This, together with relation (3.87), shows that there exist constants $a_1$ and $a_2$ such that
$$a_1\,\|L_k^{-1} z_k^0\|^2 \le \phi(z_k^0) \le a_2\,\|L_k^{-1} z_k^0\|^2, \quad \forall z_k^0 \in X. \tag{3.88}$$
$$\phi(z_k^0) = \frac{1}{2}\,\langle R_k z_k^0, z_k^0\rangle_X, \quad \forall z_k^0 \in X, \tag{3.89}$$
where $R_k \in L(X, X)$.
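In a finite-dimensional setting, the operator $R_k$ of (3.89) is obtained by solving an algebraic Riccati equation. The Python sketch below is a hypothetical analogue of (3.85)–(3.89) with invented matrices ($F$, $G$ are generic, not the channel operators); the cost weights mirror $\frac{1}{2}\int(\|z\|^2 + |u|^2)\,dt$, so the minimal value is $\frac{1}{2}\langle P z^0, z^0\rangle$ with $P$ in the role of $R_k$.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical finite-dimensional analogue of (3.85): z' = F z + G u,
# with one unstable mode, and cost (1/2) int_0^inf (|z|^2 + |u|^2) dt.
F = np.array([[1.0, 0.2],
              [0.0, -2.0]])
G = np.array([[0.0],
              [1.0]])
Q = np.eye(2)                 # weights |z|^2
Rw = np.eye(1)                # weights |u|^2

# P plays the role of R_k in (3.89): minimal cost = (1/2) <P z0, z0>.
P = solve_continuous_are(F, G, Q, Rw)

# Riccati residual F^T P + P F - P G Rw^{-1} G^T P + Q = 0.
residual = F.T @ P + P @ F - P @ G @ np.linalg.inv(Rw) @ G.T @ P + Q
print(np.linalg.norm(residual))                     # ~ 0

# The optimal feedback u = -Rw^{-1} G^T P z stabilizes the closed loop.
K = np.linalg.inv(Rw) @ G.T @ P
print(np.max(np.linalg.eigvals(F - G @ K).real) < 0)
```

The closed-loop stability check is the finite-dimensional counterpart of the exponential decay obtained below for the Riccati-based feedback.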
By the dynamic programming principle, for each 0 < t < T , the optimal solution
(ψk∗ , z k∗ ) to (3.86) and (3.85) is also the solution to the optimization problem
$$\min\left\{ \frac{1}{2}\int_t^T \big(\|L_k^{-1} z_k(s)\|^2 + |\psi_{k0}(s)|^2\big)\,ds + \phi(z_k(T)) \ :\ \text{subject to (3.85)},\ z_k(t) = z_k^*(t) \right\}. \tag{3.90}$$
$(T-t)^{1/2}\, q_T \in C\big([0,T);\, D((-A_k^*)^{3/2})\big)$, as claimed.
Finally, we show that Rk is a solution to a Riccati-type equation. To this end, we
first notice that again by the dynamic programming principle and (3.89), we have
$$\frac{1}{2}\,\langle R_k z_k^*(t), z_k^*(t)\rangle_X = \phi(z_k^*(t)) = \frac{1}{2}\int_t^\infty \big(\|L_k^{-1} z_k^*(s)\|^2 + |\psi_{k0}^*(s)|^2\big)\,ds, \tag{3.93}$$
$$\langle L_k^{-1} A_k z_k^*(t),\, L_k^{-1} R_k z_k^*(t)\rangle + \frac{1}{2}\nu^2 \big|(L_k^{-2} R_k z_k^*(t))'(1)\big|^2 = \frac{1}{2}\,\|L_k^{-1} z_k^*(t)\|^2, \tag{3.94}$$
$t \ge 0$, which implies, by setting $t = 0$, that $R_k$ satisfies the following Riccati equation:
$$\langle L_k^{-1} A_k z_k^0,\, L_k^{-1} R_k z_k^0\rangle + \frac{1}{2}\nu^2 \big|(L_k^{-2} R_k z_k^0)'(1)\big|^2 = \frac{1}{2}\,\|L_k^{-1} z_k^0\|^2, \quad \forall z_k^0 \in H. \tag{3.95}$$
$$L_k^{-2} R_k z_k^0 \in H^4(0,1).$$
and $\psi_{k0} \equiv 0$ for $|k| > S$, and $\psi_{kl} \equiv 0$ for $k^2 + l^2 > S$, where
$$\psi_{kl}(t) := \begin{cases} -\nu\big(L_k^{-2} R_k L_k v_{k0}(t)\big)'(1) & \text{for } 0 < |k| \le S,\ l = 0, \\ 0 & \text{for } |k| > S,\ l = 0, \\ 0 & \text{for } k = 0,\ l \in \mathbb{Z}, \\ -\nu\big(L_{kl}^{-2} R_{kl} L_{kl} v_{kl}(t)\big)'(1) & \text{for } k, l \neq 0 \text{ and } \sqrt{k^2+l^2} \le S, \\ 0 & \text{for } k, l \neq 0 \text{ and } \sqrt{k^2+l^2} > S, \end{cases}$$
is plugged into system (3.4), one obtains its exponential stability. Here $R_k, R_{kl} : X \to X$ are linear, self-adjoint operators satisfying Riccati-type equations of the form
$$\langle L_k^{-1} A_k z_k^0,\, L_k^{-1} R_k z_k^0\rangle + \frac{1}{2}\nu^2 \big|(L_k^{-2} R_k z_k^0)'(1)\big|^2 = \frac{1}{2}\,\|L_k^{-1} z_k^0\|^2, \quad \forall z_k^0 \in H,$$
and
$$\langle L_{kl}^{-1} A_{kl} z_{kl}^0,\, L_{kl}^{-1} R_{kl} z_{kl}^0\rangle + \frac{1}{2}\nu^2 \big|(L_{kl}^{-2} R_{kl} z_{kl}^0)'(1)\big|^2 = \frac{1}{2}\,\|L_{kl}^{-1} z_{kl}^0\|^2, \quad \forall z_{kl}^0 \in H,$$
respectively, where $X$ is the dual of the space $H^2(0,1) \cap H_0^1(0,1)$.
3.4 Comments
The local stabilization theory for the Navier–Stokes equations by feedback control
supported on the boundary of a domain filled with liquid was created in Fursikov
[59, 60]. In particular, the feedback theory was developed in Fursikov [58]. The idea
to construct boundary controllers was based on previous results on stabilization via
internal distributed feedbacks. Roughly speaking, it consists in extending the domain
O by a thin strip around the boundary, obtaining thereby the new domain O ∪ Oε ,
and considering Oε to be the support of the internal feedback. Once the internal
feedback is constructed for the new extended system, one may let ε go to zero. The
boundary controller for the former problem is the trace of the solution to the latter
problem.
Another method to deal with boundary actuators is to lift them into the equations
via an auxiliary operator acting on functions defined on the boundary with values on
the whole domain. (This method is used as well in the results presented in this book.)
Then, via the Riccati-based method, boundary stabilizing actuators were constructed
in Barbu et al. [19]. Other results on this subject were obtained in Raymond [118,
119].
In [12], Barbu designed an explicit feedback law of proportional type, called
oblique, that acts almost normal to the boundary. It has the following form:
$$u = \eta \sum_{j=1}^N \mu_j \langle y, \varphi_j\rangle \left( \frac{\partial \Phi_j}{\partial n} + \alpha(x)\, n(x) \right), \quad x \in \partial\mathcal{O},$$
where $\alpha$ is an arbitrary continuous function with zero circulation on $\partial\mathcal{O}$, that is,
$$\int_{\partial\mathcal{O}} \alpha(x)\,dx = 0,$$
and
$$|\cos \angle(u(t,x), n(x))| \ge 1 - \frac{C}{C + |\alpha(x)|}, \quad \forall x \in \partial\mathcal{O},$$
where C > 0 is independent of α. This means that the stabilizable boundary con-
troller u can be chosen almost normal to ∂O. However, for technical reasons the
limit case |α| = +∞, that is, u normal, is excluded from the discussion. Moreover,
again the feedback is under the requirement of linear independence of the system of
eigenfunctions.
The general domain O is replaced by the particular infinite channel form, and
via the backstepping technique, stabilizing feedbacks for the Poiseuille profile were
designed by Krstic and his coworkers in [1, 29, 124]. In all these works, in order
to achieve stability, all the components of the velocity field are controlled on the
boundary. Other results are obtained by Triggiani in [117].
From the practical point of view, implementing a tangential control in the system
is quite demanding, both technologically and in terms of cost. The most
feasible case is that in which the control acts only on the normal component of the
velocity field, the so-called wall-normal controller. Results in this direction were
obtained by Barbu in [9, 12] and for the stochastic case in Barbu [11]. More results
on the stabilization of the Navier–Stokes flows can be found in the book Barbu [10].
The results presented in this chapter likewise provide normal boundary stabilizers;
the construction of the proportional-type feedback stabilizer appeared in Munteanu
[103], and the Riccati-based approach in the author's works [95, 96].
We mention that we were not able to deduce the local stability of the full nonlinear
Navier–Stokes system, because of the normal boundary conditions. More precisely,
in trying to reduce the pressure from the nonlinear system, the usual trick is to apply
the Leray projector. However, due to the nontangential conditions, this cannot be
done.
Chapter 4
Stabilization of the Magnetohydrodynamics Equations in a Channel
Here we consider again a channel flow. But in addition to the assumptions of the
previous chapter, we assume that the incompressible fluid is electrically conducting
and affected by a constant transverse magnetic field. This kind of flow was first
investigated both experimentally and theoretically by Hartmann [67]. The governing
equations are the magnetohydrodynamics equations (MHD, for short), which are a
coupling between the Navier–Stokes equations and the Maxwell equations.
Here (u, v) is the velocity field, p is the scalar pressure, and (B, C) is the mag-
netic field. The positive constants ρ, ν, μ, and σ represent the fluid mass density,
the kinematic viscosity, the magnetic permeability, and the electrical conductivity,
respectively; 2L is the distance between the walls.
These equations are of great importance; they are used in the study of magnetofluids
such as plasmas, liquid metals, salt water, and electrolytes.
The fully developed steady state of (4.1), the Hartmann–Poiseuille profile, which
we are going to stabilize, is given by
$$\hat{u}(y^*) = \frac{1}{\mathrm{Ha}\,\tanh(\mathrm{Ha})}\left(1 - \frac{\cosh(\mathrm{Ha}\, y^*)}{\cosh(\mathrm{Ha})}\right), \qquad \hat{v} \equiv 0,$$
$$\hat{B} = -\frac{y^*}{\mathrm{Ha}} + \frac{\sinh(\mathrm{Ha}\, y^*)}{\mathrm{Ha}\,\sinh(\mathrm{Ha})}, \qquad \hat{C} \equiv B_0, \tag{4.2}$$
where $y^* := \frac{y}{L}$ and $\mathrm{Ha} := B_0 L \sqrt{\frac{\sigma}{\rho\nu}}$. For later purposes, we notice that
$$\big|(\hat{u} + \hat{B})'\big| = \left| -\frac{1}{\mathrm{Ha}} + \frac{e^{-\mathrm{Ha}\, y^*}}{\sinh(\mathrm{Ha})} \right| \le 2, \quad y^* \in [-1, 1]. \tag{4.3}$$
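The bound (4.3) is straightforward to verify numerically. The Python sketch below evaluates the profile (4.2) for an arbitrarily chosen Hartmann number (a hypothetical value, for illustration only) and checks both the closed form of $(\hat u + \hat B)'$ and the bound by 2.

```python
import numpy as np

Ha = 10.0                              # hypothetical Hartmann number
y = np.linspace(-1.0, 1.0, 2001)

# Hartmann-Poiseuille profile (4.2).
u_hat = (1.0 / (Ha * np.tanh(Ha))) * (1.0 - np.cosh(Ha * y) / np.cosh(Ha))
B_hat = -y / Ha + np.sinh(Ha * y) / (Ha * np.sinh(Ha))

# Derivatives: u' = -sinh(Ha y)/sinh(Ha), B' = -1/Ha + cosh(Ha y)/sinh(Ha).
du = -np.sinh(Ha * y) / np.sinh(Ha)
dB = -1.0 / Ha + np.cosh(Ha * y) / np.sinh(Ha)
dS = du + dB

# (4.3): (u + B)' = -1/Ha + exp(-Ha y)/sinh(Ha), bounded by 2 on [-1, 1].
closed_form = -1.0 / Ha + np.exp(-Ha * y) / np.sinh(Ha)
print(np.max(np.abs(dS - closed_form)))            # ~ 0
print(np.max(np.abs(dS)) <= 2.0)                   # True
```

The identity $\cosh s - \sinh s = e^{-s}$ is what collapses the sum of the two derivatives into the single exponential of (4.3).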
with
$$v_0 := -\frac{L^2}{\rho\nu}\,\hat{p}_x \quad\text{and}\quad b_0 := -\mu L^2 \sqrt{\frac{\sigma}{\rho\nu}}\,\hat{p}_y$$
($\hat{p}$ is the pressure corresponding to the equilibrium solution (4.2)). For the sake of
simplicity, we drop the star notation. However, we keep in mind that now we are
dealing with the above variables.
Again we assume 2π -periodicity with respect to the x-coordinate of the velocity
field, the magnetic field, and the pressure. In addition, we impose that the magnetic
Prandtl number of the fluid, i.e., Prm := νμσ , be equal to one. Such a periodic MHD
channel flow does not directly correspond to a specific laboratory fluid. It is, however,
often studied as an approximation to torus devices of plasma-controlled fusion, such
as the Tokamak and the reversed field pinch. Numerical simulations have shown that
turbulence may appear in the movement of this kind of flow; that is, the flow may
become unstable.
It is easily seen that Prm = 1 implies N = R = Rm = 1, after rescaling as nec-
essary. So the linearization of system (4.1) around the equilibrium profile (4.2),
supplemented with the boundary conditions, has the form
$$\begin{cases} u_t - \Delta u + \hat{u} u_x + v\,\hat{u}_y + B_0 C_x - B_0 B_y - \hat{B}_y C = p_x, \\ v_t - \Delta v + \hat{u} v_x + \hat{B}_y B + \hat{B} B_y - \hat{B} C_x = p_y, \\ B_t - \Delta B + \hat{u} B_x + \hat{B}_y v - \hat{B} u_x - B_0 u_y - \hat{u}_y C = 0, \\ C_t - \Delta C + \hat{u} C_x - \hat{B} v_x - B_0 v_y = 0, \\ u_x + v_y = 0, \quad B_x + C_y = 0, \end{cases} \tag{4.4}$$
and initial data u0 , v0 , B0 , C 0 . Here Ψ and Ξ are the boundary controllers, which
means that both the normal components of the velocity field and the magnetic field
are controlled on the upper wall. Of course, from the practical point of view, it would
have been more convenient to control only the wall-normal velocity. Unfortunately,
this is not possible with the algorithm from Chap. 2, because, in trying to show
the unique continuation property (see Lemma 4.2 below) related to a vector-valued
operator, one cannot prove that both components of the unstable eigenvector are
nonzero. Rather, a weaker result is available, saying that both components cannot
vanish simultaneously. Consequently, both the velocity and the magnetic field must
be controlled.
As in the previous chapter, we take advantage of the 2π -periodicity and decompose
(4.4) into Fourier modes. We get the following infinite system, indexed by k ∈ Z:
$$\begin{cases} (u_k)_t - (-k^2 u_k + u_k'') + ik\,\hat{u} u_k + \hat{u}' v_k + ik B_0 c_k - B_0 b_k' - \hat{B}' c_k = ik p_k, \\ (v_k)_t - (-k^2 v_k + v_k'') + ik\,\hat{u} v_k + \hat{B}' b_k + \hat{B} b_k' - ik \hat{B} c_k = p_k', \\ (b_k)_t - (-k^2 b_k + b_k'') + ik\,\hat{u} b_k + \hat{B}' v_k - ik \hat{B} u_k - B_0 u_k' - \hat{u}' c_k = 0, \\ (c_k)_t - (-k^2 c_k + c_k'') + ik\,\hat{u} c_k - ik \hat{B} v_k - B_0 v_k' = 0, \\ ik u_k + v_k' = 0, \quad ik b_k + c_k' = 0, \end{cases} \tag{4.5}$$
Then we add the first equation to the third one of (4.5), and the second equation to
the fourth one of (4.5). In this way, we obtain the two-equation system
$$(S_{1k})_t - (-k^2 S_{1k} + S_{1k}'') + ik\,\hat{u} S_{1k} - B_0 S_{1k}' + \hat{u}' D_{2k} + \hat{B}' D_{2k} + ik(B_0 c_k - \hat{B} u_k) = ik p_k,$$
$$(S_{2k})_t - (-k^2 S_{2k} + S_{2k}'') + ik\,\hat{u} S_{2k} - ik\,\hat{B} S_{2k} - B_0 v_k' + \hat{B}' b_k + \hat{B} b_k' = p_k'.$$
Then we reduce the pressure from it and use the divergence-free conditions to find
that
$$(-S_{2k}'' + k^2 S_{2k})_t + S_{2k}'''' + B_0 S_{2k}''' - \big[2k^2 + ik\hat{D}\big]S_{2k}'' - \big[ik\hat{D}' + k^2 B_0\big]S_{2k}' + \big[k^4 + ik^3\hat{D}\big]S_{2k} + ik\big(\hat{S}' D_{2k}\big)' = 0, \tag{4.7}$$
where
$$\hat{S} := \hat{u} + \hat{B} \quad\text{and}\quad \hat{D} := \hat{u} - \hat{B}.$$
We do the same for the differences. More precisely, we subtract the third equation
from the first one of (4.5) and the fourth equation from the second one of (4.5), and
reduce the pressure as before to arrive at
$$(-D_{2k}'' + k^2 D_{2k})_t + D_{2k}'''' - B_0 D_{2k}''' - \big[2k^2 + ik\hat{S}\big]D_{2k}'' - \big[ik\hat{S}' - k^2 B_0\big]D_{2k}' + \big[k^4 + ik^3\hat{S}\big]D_{2k} + ik\big(\hat{D}' S_{2k}\big)' = 0. \tag{4.8}$$
Altogether, equations (4.7) and (4.8), supplemented with the boundary conditions
$$S_{2k}(-1) = S_{2k}(1) = S_{2k}'(-1) = D_{2k}(-1) = D_{2k}(1) = D_{2k}'(-1) = 0,$$
$$S_{2k}'(1) = \psi_k^S := \psi_k + \xi_k, \qquad D_{2k}'(1) = \psi_k^D := \psi_k - \xi_k, \tag{4.9}$$
and the initial data $S_{2k}^0 := v_k^0 + c_k^0$, $D_{2k}^0 := v_k^0 - c_k^0$, form the closed system. Thus we have reduced the five-unknown problem (4.5) to the two-unknown problem (4.9).
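Computationally, the passage from (4.4) to the mode systems is just the extraction of Fourier coefficients in the periodic direction. A hypothetical Python sketch of this step on toy data (the field and the FFT normalization, which carries a $1/2\pi$ factor absent from the text's integrals, are chosen only for illustration):

```python
import numpy as np

# Sample a 2*pi-periodic field v(x) at n points and extract its k-th
# Fourier coefficient v_k ~ (1/2pi) int_0^{2pi} v(x) e^{-ikx} dx via the FFT.
n = 64
x = 2.0 * np.pi * np.arange(n) / n
v = 3.0 * np.cos(2.0 * x) + 0.5 * np.sin(5.0 * x)   # toy field

vhat = np.fft.fft(v) / n

# cos(2x) contributes 3/2 at k = +-2; sin(5x) contributes -0.25j at k = 5.
print(vhat[2])          # ~ 1.5
print(vhat[5])          # ~ -0.25j
```

In practice each coefficient $v_k(t, y)$ is obtained this way for every wall-normal grid point $y$, and the mode systems are then evolved independently.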
To write the above system in an abstract form, we introduce the linear operators
$L_k : D(L_k) \subset H \times H \to H \times H$ and $F_k : D(F_k) \subset H \times H \to H \times H$, defined as
$$L_k (S\ D)^T := \begin{pmatrix} -S'' + k^2 S \\ -D'' + k^2 D \end{pmatrix}, \quad D(L_k) = \big(H^2(-1,1) \cap H_0^1(-1,1)\big)^2, \tag{4.10}$$
$$F_k (S\ D)^T := \begin{pmatrix} S'''' + B_0 S''' - \big[2k^2 + ik\hat{D}\big]S'' - \big[ik\hat{D}' + k^2 B_0\big]S' + \big[k^4 + ik^3\hat{D}\big]S + ik\big(\hat{S}'D\big)' \\ D'''' - B_0 D''' - \big[2k^2 + ik\hat{S}\big]D'' - \big[ik\hat{S}' - k^2 B_0\big]D' + \big[k^4 + ik^3\hat{S}\big]D + ik\big(\hat{D}'S\big)' \end{pmatrix}, \tag{4.11}$$
$$D(F_k) = \big(H^4(-1,1) \cap H_0^2(-1,1)\big)^2.$$
Regarding the operator −Ak , we may prove the following lemma, arguing similarly
as in the proof of Lemma 3.2.
Lemma 4.1 says that for all $|k| > M$, we may take $\psi_k^S \equiv \psi_k^D \equiv 0$, since at these
levels the system is stable. Therefore, it remains to stabilize the system (4.9) for
$0 < |k| \le M$ only. Besides this, Lemma 4.1 guarantees that the operator $-A_k$ has
a countable set of eigenvalues, denoted by $\{\lambda_j^k\}_{j=1}^\infty$ (each repeated according to its
multiplicity), and there is only a finite number $N_k$ of eigenvalues for which $\mathrm{Re}\,\lambda_j^k \ge 0$, $j = 1, \ldots, N_k$, the unstable eigenvalues. Finally, let
$$\big\{\varphi_j^k := (\varphi_{1j}^k\ \ \varphi_{2j}^k)^T\big\}_{j=1}^\infty \quad\text{and}\quad \big\{\varphi_j^{k*} := (\varphi_{1j}^{k*}\ \ \varphi_{2j}^{k*})^T\big\}_{j=1}^\infty$$
denote the corresponding eigenvectors of the operator $-A_k$ and its adjoint $-A_k^*$,
respectively. For the sake of simplicity, we assume that the unstable eigenvalues are
simple. Hence, we may suppose that they are arranged such that
$$\mathrm{Re}\,\lambda_1^k \ge \mathrm{Re}\,\lambda_2^k \ge \cdots \ge \mathrm{Re}\,\lambda_{N_k}^k \ge 0 > \mathrm{Re}\,\lambda_{N_k+1}^k \ge \cdots.$$
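The spectral picture invoked here (finitely many unstable eigenvalues, the remainder strictly stable) can be illustrated on a toy discretization. The Python sketch below uses a generic reaction-diffusion operator $v \mapsto v'' + c\,v$ on $(0,1)$ with Dirichlet conditions, a hypothetical stand-in rather than the MHD operator, and counts the eigenvalues with nonnegative real part.

```python
import numpy as np

# Finite differences for v -> v'' + c v on (0,1) with v(0) = v(1) = 0.
# The eigenvalues are approximately c - (j*pi)^2, so for c = 50 exactly
# two of them (j = 1, 2) are "unstable", i.e. >= 0.
n = 200                          # interior grid points
h = 1.0 / (n + 1)
c = 50.0

main = -2.0 * np.ones(n) / h**2 + c
off = np.ones(n - 1) / h**2
Aop = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

eigs = np.sort(np.linalg.eigvalsh(Aop))[::-1]
N_unstable = int(np.sum(eigs >= 0.0))
print(N_unstable)                # 2
```

Only these finitely many modes need feedback; the remaining spectrum already lies strictly in the left half-plane, exactly as in the decomposition used in the text.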
In other words, $\varphi_1(1)$ and $\varphi_2(1)$ cannot vanish simultaneously for any eigenvector
corresponding to an unstable eigenvalue of the adjoint operator. This is equivalent
to the fact that there exists $\mu_k \in \mathbb{C}$ such that
$$\varphi_1(1) + \mu_k\,\varphi_2(1) \neq 0$$
for all the eigenvectors corresponding to the unstable eigenvalues. This is exactly
what we prove below. Then, with the help of this $\mu_k$, we will construct our controller
(see (4.19) below).
Similarly as in Lemma 3.3, we will show that if $(\varphi_1\ \varphi_2)^T$ solves (4.12) together with
$(\varphi_1\ \varphi_2)^T(1) = (0\ 0)^T$, then necessarily
$$(\varphi_1\ \varphi_2)^T \equiv (0\ 0)^T.$$
The lemma below says nothing but the fact that assumption (A5) from Chap. 2 holds
in the present case.
Lemma 4.2 Let $0 < |k| \le M$. Then there exists $\mu_k \in \mathbb{C}$ such that $\varphi_1(1) + \mu_k\varphi_2(1) \neq 0$ for every eigenvector $(\varphi_1\ \varphi_2)^T$ of $-A_k^*$ corresponding to an unstable eigenvalue.
Proof Below, we will understand by ∧ and by ∨ the logical symbols for “and” and
“or,” respectively.
Fix $k \in \mathbb{Z}$ such that $0 < |k| \le M$. For the sake of simplicity of notation, let us set
$\lambda := \lambda_j^k$ and $\varphi^* := \varphi_j^{k*}$. First, consider the case in which $\varphi^*$ is a classical eigenvector
corresponding to the eigenvalue $\lambda$, i.e.,
$$-A_k^* \varphi^* = \lambda \varphi^*.$$
Hence $\varphi := L_k^{-1}\varphi^*$ solves
$$(\lambda L_k + F_k^*)\varphi = 0. \tag{4.13}$$
It is easy to check that $\hat{S}(y) = \hat{D}(-y)$. Set
$$(\psi_1\ \psi_2)^T := (\varphi_1 + \check{\varphi}_2\ \ \varphi_2 + \check{\varphi}_1)^T,$$
where $\check{g}(y) := g(-y)$.
Scalar multiplying (4.15) by $\Psi$ and taking the real part of the result, we get
$$\|\Psi'\|^2 + \big(k^2 + \mathrm{Re}\,\lambda\big)\|\Psi\|^2 + \mathrm{Re}\left(\frac{ik}{2}\int_{-1}^{1} \hat{D}'\,(\psi_1 + \check{\psi}_1)\,\overline{\Psi}\,dy\right) = 0. \tag{4.16}$$
$$k^4 + 2\pi^2 k^2 - 4 > 0, \quad \forall k \in \mathbb{Z}^*.$$
Similarly as above, scalar multiplying (4.18) by $\Phi$ and taking the real part of the
result, we obtain that
$$\varphi_1 = \varphi_2 = 0.$$
Now if we take
$$(\chi_1\ \chi_2)^T := (\varphi_1 - \check{\varphi}_1\ \ \varphi_2 - \check{\varphi}_2)^T$$
and argue as before, we get that in the case $\chi_1(1) = \chi_2(1) = 0$, we necessarily have
that
$$(\varphi_1(1) \neq 0) \vee (\varphi_2(1) \neq 0).$$
(2) $[\psi_1(1) = \psi_2(1) = 0] \wedge [(\chi_1(1) \neq 0) \vee (\chi_2(1) \neq 0)]$. Again, the first condition implies
that
$$(\varphi_1(1) \neq 0) \vee (\varphi_2(1) \neq 0),$$
which means, as before, that for all $\theta \in \mathbb{C}^*$ such that $\theta \neq -\frac{\varphi_1(1)}{\varphi_2(1)}$ (in the case
that $\varphi_2(1) \neq 0$; otherwise, for all $\theta \in \mathbb{C}^*$), we have
$$\varphi_1(1) + \theta\,\varphi_2(1) \neq 0.$$
(3) $[(\psi_1(1) \neq 0) \vee (\psi_2(1) \neq 0)] \wedge [\chi_1(1) = \chi_2(1) = 0]$. The second condition implies,
as before, that for all $\theta \in \mathbb{C}^*$ such that $\theta \neq -\frac{\varphi_1(1)}{\varphi_2(1)}$ (in the case that $\varphi_2(1) \neq 0$;
otherwise, for all $\theta \in \mathbb{C}^*$), we have
$$\varphi_1(1) + \theta\,\varphi_2(1) \neq 0.$$
(4) $[(\psi_1(1) \neq 0) \vee (\psi_2(1) \neq 0)] \wedge [(\chi_1(1) \neq 0) \vee (\chi_2(1) \neq 0)]$. By the fact that
$(\psi_1 + \chi_1\ \ \psi_2 + \chi_2)^T = 2(\varphi_1\ \varphi_2)^T$, we get as before that there exist infinitely
many $\theta \in \mathbb{C}$ such that
$$\varphi_1(1) + \theta\,\varphi_2(1) \neq 0.$$
and
$$(\lambda + A_k^*)^j (\varphi_{1j}\ \ \varphi_{2j})^T = 0, \quad j = 2, 3, \ldots, J.$$
Concerning $(\varphi_{11}\ \ \varphi_{21})^T$, we may show, as in the above lines, that there exists some
$\mu$ such that
$$\varphi_{11}(1) + \mu\,\varphi_{21}(1) \neq 0.$$
The main theorem of this section amounts to saying that the following feedback laws,
once plugged into the system (4.4), yield its stability. Let us define
$$\Psi(t,x) := \sum_{0<|k|\le M} \frac{1}{2}(1+\mu_k)\, U^k(t)\, e^{ikx}, \qquad \Xi(t,x) := \sum_{0<|k|\le M} \frac{1}{2}(1-\mu_k)\, U^k(t)\, e^{ikx}. \tag{4.19}$$
$$\Lambda^k_{\gamma_i^k} := \begin{pmatrix} \frac{1}{\gamma_i^k + \lambda_1^k} & 0 & \cdots & 0 \\ 0 & \frac{1}{\gamma_i^k + \lambda_2^k} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{\gamma_i^k + \lambda_{N_k}^k} \end{pmatrix}, \quad i = 1, \ldots, N_k, \tag{4.21}$$
for some $0 < \gamma_1^k < \cdots < \gamma_{N_k}^k$, $N_k$ real constants sufficiently large that relation (4.26)
below holds. Moreover,
where
$$G_i^k := \Lambda^k_{\gamma_i^k}\, G^k\, \Lambda^k_{\gamma_i^k}, \quad i = 1, \ldots, N_k, \tag{4.23}$$
and
$$c_k(t, y) := \int_0^{2\pi} C(t, x, y)\, e^{-ikx}\,dx.$$
Finally, $\langle\cdot, \cdot\rangle_{N_k}$ stands for the classical scalar product in $\mathbb{C}^{N_k}$.
Theorem 4.1 Once the feedbacks Ψ, Ξ defined in (4.19) are plugged into the linear
equation (4.4), we obtain the asymptotic exponential decay of the corresponding
solution to the closed-loop system (4.4).
Proof The stability will be shown at each level 0 < |k| ≤ M of the system (4.9)
(whose stability is equivalent to the stability of the system (4.5) and consequently to
the stability of the system (4.4)), since, as we saw earlier, the other levels are stable.
So let us fix some 0 < |k| ≤ M . In order to simplify the notation, since k is fixed, in
what follows we will omit the index k.
The corresponding closed-loop system (4.9) reads as follows:
$$\begin{cases} \big(L(S_2\ D_2)^T\big)_t + F(S_2\ D_2)^T = 0, & y \in (-1,1), \\ (S_2'\ D_2')^T(1) = (U\ \ \mu U)^T, \\ (S_2\ D_2)^T(-1) = (S_2\ D_2)^T(1) = (S_2'\ D_2')^T(-1) = 0. \end{cases} \tag{4.25}$$
In order to lift the boundary conditions into the equations, aiming to use the spectral
decomposition method, we introduce the Dirichlet operator as in (2.16) in Chap. 2:
let $\alpha \in \mathbb{C}$, and denote by $D_\gamma \alpha := w$ the solution to the equation
$$\begin{cases} F w + 2\displaystyle\sum_{j=1}^N \lambda_j \langle L w, \varphi_j^*\rangle \varphi_j + \gamma L w = 0, & y \in (-1,1), \\ w'(1) = (\alpha\ \ \mu\alpha)^T, \quad w(-1) = w(1) = w'(-1) = 0. \end{cases} \tag{4.26}$$
Next, we choose N constants 0 < γ1 < γ2 < · · · < γN large enough that
$$U_i(t) := -\left\langle A \begin{pmatrix} \langle L(S_2\ D_2)^T(t), \varphi_1^*\rangle \\ \langle L(S_2\ D_2)^T(t), \varphi_2^*\rangle \\ \vdots \\ \langle L(S_2\ D_2)^T(t), \varphi_N^*\rangle \end{pmatrix}, \begin{pmatrix} \frac{1}{\gamma_i+\lambda_1}\, l_1 \\ \frac{1}{\gamma_i+\lambda_2}\, l_2 \\ \vdots \\ \frac{1}{\gamma_i+\lambda_N}\, l_N \end{pmatrix} \right\rangle_N = -\left\langle \Lambda_{\gamma_i} A \begin{pmatrix} \langle L(S_2\ D_2)^T(t), \varphi_1^*\rangle \\ \langle L(S_2\ D_2)^T(t), \varphi_2^*\rangle \\ \vdots \\ \langle L(S_2\ D_2)^T(t), \varphi_N^*\rangle \end{pmatrix}, \begin{pmatrix} l_1 \\ l_2 \\ \vdots \\ l_N \end{pmatrix} \right\rangle_N, \quad t \ge 0, \tag{4.29}$$
where the $G_i$ are introduced in (4.23) above, for $i = 1, \ldots, N$. This is indeed so. We
have, via relation (4.27),
$$Z_t = \Lambda Z - \sum_{i=1}^N \Lambda G_i A Z - \frac{1}{2}\sum_{i=1}^N \gamma_i\, G_i A Z + \frac{1}{2}\sum_{i=1}^N G_i A Z_t, \quad t \ge 0,$$
where
$$Z := \begin{pmatrix} \langle z(t), \varphi_1^*\rangle \\ \langle z(t), \varphi_2^*\rangle \\ \vdots \\ \langle z(t), \varphi_N^*\rangle \end{pmatrix} \quad\text{and}\quad \Lambda := \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_N).$$
Recalling that $A = (G_1 + \cdots + G_N)^{-1}$, we see that the above relation yields
$$Z_t = -\gamma_1 Z + \sum_{i=2}^N (\gamma_1 - \gamma_i)\, G_i A Z, \quad t \ge 0. \tag{4.34}$$
Arguing closely as in the proof of Theorem 2.1, from (4.34) we get the exponential
decay of the unstable part of the solution. Then, using the fact that the stable part is
governed by an exponentially decaying semigroup, we conclude the proof. Further
details are omitted, since they mimic the proof of Theorem 2.1.
Remark 4.1 As in Sect. 3.3, based on the above feedback proportional stabilizer,
one may develop a Riccati-based one. Since the ideas are almost the same, we will
not develop this problem here (see [98] for details).
4.3 Comments
As mentioned above, due to the complexity of the problem, there are fewer results on
boundary stabilization for the MHD equations than for the Navier–Stokes equations.
One of the main reasons is that the usual procedure for transforming a boundary-controlled
system into a system with controllers distributed in a subdomain, by extending
the initial domain to a slightly larger one (a technique presented in
the comments section of the previous chapter), is not directly applicable in the
MHD case. The special feature of the MHD system is that the equation satisfied by
the magnetic field needs to have a divergence-free right-hand side. Using localized
controllers, after applying the Leray projector, the controller usually becomes dis-
tributed in the whole domain, and the above transformation to the internal controller
case fails to work. That is why a special form of the internal controller for the second
extended equation must be used, which is done in Lefter [84].
However, it turns out that in the special case of the Hartmann MHD framework,
with the domain $\mathcal{O}$ a channel, it is easier to derive boundary stabilizers directly. One
of the reasons is that, assuming further that the fluid has a low magnetic
Reynolds number $R_m$, the induced magnetic field is much weaker than the applied
one and can therefore be neglected, so that we obtain the simplified magnetohydrodynamics
equations (SMHD for short; see [127]). The SMHD equations are
nothing but some linear perturbations of the Navier–Stokes equations, and they look
like this:
$$\begin{cases} u_t - \nu(u_{xx} + u_{yy}) + uu_x + vu_y + N B_0^2 u = -p_x, \\ v_t - \nu(v_{xx} + v_{yy}) + uv_x + vv_y = -p_y, \\ u_x + v_y = 0, \end{cases} \tag{4.35}$$
where B0 is the constant external applied magnetic field and N > 0 is the Stuart (or
interaction) number. Therefore, it is clear that the control design algorithm developed
for the Navier–Stokes equations in a channel is expected to work equally well for the
SMHD in the channel case. This is indeed true; for the backstepping technique we
refer to the work of Krstic and his coworkers [114, 127], while for the Riccati-based
method, we refer to the author's work [97]. Other related results are [83, 115]
and the references therein.
In this chapter, we have considered arbitrary values of $R_m$, solving the problem
in a case more general than SMHD, namely that of magnetic Prandtl number equal to
one. The results concerning the proportional feedback were published in Munteanu
[102], while those concerning the Riccati case appeared in the author's work [98].
Chapter 5
Stabilization of the Cahn–Hilliard System
In this chapter, the Cahn–Hilliard system will be investigated. This system describes
the process of phase separation, whereby the two components of a binary fluid spon-
taneously separate and form domains pure in each component. This phenomenon
appears in many engineering and medical applications.
In system (5.1)–(5.3), the variables $\theta$, $\varphi$, and $\mu$ represent the temperature, the order
parameter, and the chemical potential, respectively; $\nu, l_0, \gamma_0$ are positive constants
with some physical meaning; $F$ is the double-well potential
$$F(\varphi) = \frac{(\varphi^2 - 1)^2}{4}, \tag{5.4}$$
and n is the unit outward normal vector to the boundary Γ . Finally, u is the control
acting only on the temperature flux, on one part of the boundary, namely Γ1 . The
equations (5.1)–(5.3) are known as the conserved phase-field system, due to the mass
conservation of ϕ, which is obtained by integrating the second equation in (5.1) in
space and using the boundary condition for μ from (5.2).
Let (ϕ̂, θ̂ ) ∈ H 4 (O) × H 2 (O) be a stationary solution of the uncontrolled system
(5.1)–(5.3), i.e.,
$$\begin{cases} \nu\Delta^2\hat{\varphi} - \Delta F'(\hat{\varphi}) = -\Delta\hat{\theta} = 0 & \text{in } \mathcal{O}, \\[1mm] \dfrac{\partial\hat{\varphi}}{\partial n} = \dfrac{\partial\Delta\hat{\varphi}}{\partial n} = \dfrac{\partial\hat{\theta}}{\partial n} = 0 & \text{on } \Gamma. \end{cases} \tag{5.5}$$
(For a discussion of the existence of stationary solutions, see [17, Lemma A1].)
We emphasize that different stationary profiles correspond to different types of
phase separation.
We prefer to make a function transformation in (5.1), namely
$$\sigma := \alpha_0(\theta + l_0\varphi), \tag{5.6}$$
that is,
$$\alpha_0 = \frac{\gamma_0}{l_0}. \tag{5.8}$$
Writing the system (5.1)–(5.3) in the variables $\varphi$ and $\sigma$ and using (5.7) and the
notation
$$l := \gamma_0 l_0, \tag{5.9}$$
$$y := \varphi - \hat{\varphi}, \quad z := \sigma - \hat{\sigma}, \tag{5.11}$$
$$y_o := \varphi_o - \hat{\varphi}, \quad z_o := \sigma_o - \hat{\sigma}, \tag{5.12}$$
where
$$F_\infty := \frac{1}{m_{\mathcal{O}}} \int_{\mathcal{O}} F'(\hat{\varphi}(\xi))\,d\xi. \tag{5.15}$$
We remark that the above system is not the linearization of (5.13), since the replacement
of the nonlinear term is different from the usual one.
for all $(\phi\ \psi)^T \in V$. We see easily that $A$ is bounded from $V$ to $V'$, the dual of $V$.
Indeed, we have
$$\|A(y\ z)^T\|_{V'} = \sup_{\|(\phi\ \psi)^T\|_V \le 1} \big\langle A(y\ z)^T, (\phi\ \psi)^T\big\rangle \le C\,\|(y\ z)^T\|_V.$$
Moreover,
$$\begin{aligned} \big\langle A(y\ z)^T, (y\ z)^T\big\rangle &= \int_{\mathcal{O}} \big(\nu|\Delta y|^2 + F_l|\nabla y|^2 - 2\gamma\,\nabla y\cdot\nabla z + |\nabla z|^2\big)\,dx \\ &\ge \nu\|\Delta y\|^2 - (|F_l| + 2\gamma^2)\|\nabla y\|^2 + \frac{1}{2}\|\nabla z\|^2 \\ &= \nu\|\Delta y\|^2 + \frac{1}{2}\|z\|_{H^1(\mathcal{O})}^2 - \nu\|y\|^2 - \frac{1}{2}\|z\|^2 - a_0\|\nabla y\|^2. \end{aligned}$$
Since
$$a_0\|\nabla y\|^2 \le C\|\Delta y\|\,\|y\| \le \frac{\nu}{2}\|\Delta y\|^2 + \frac{C^2}{2\nu}\|y\|^2,$$
we deduce from the above that
The above relations lead to the fact that $A$ is quasi-$m$-accretive, which means that
$A + C_2 I : V \to V'$ is $m$-accretive. Moreover,
$$\|(y\ z)^T\|_V^2 \le C\,\|(f_1\ f_2)^T\|^2, \quad \text{for } \lambda \ge C_2,$$
and some $C > 0$, whence it follows that $(\lambda I + A)^{-1}(E)$ is relatively compact whenever
$E$ is bounded in $L^2 \times L^2$. (For more details, see [17, Proposition 2.1].)
Therefore, $A$ has a countable set $\{\lambda_j\}_{j=1}^\infty$ of real eigenvalues and a complete set
of corresponding eigenvectors. Moreover, all the eigenspaces are finite-dimensional,
and by repeating each eigenvalue according to its multiplicity, we have that
We note that zero is an eigenvalue, and it is of multiplicity 2, since $\frac{1}{\sqrt{2 m_{\mathcal{O}}}}(1\ 1)^T$
and $\frac{1}{\sqrt{2 m_{\mathcal{O}}}}(-1\ 1)^T$ are eigenvectors for it. By (5.19), the number of nonpositive
eigenvalues is finite, i.e., for some $N \in \mathbb{N}$, we have that
(Of course, one can consider the general case as well, namely the semisimple case,
arguing similarly as in the last part of Chap. 2, and still obtain a stabilization result.)
Denote by $\{(\varphi_j\ \psi_j)^T\}_{j=1}^\infty$ the corresponding eigenvectors, that is,
$$\begin{cases} \nu\Delta^2\varphi_j - F_l\Delta\varphi_j + \gamma\Delta\psi_j = \lambda_j\varphi_j & \text{in } \mathcal{O}, \\ \gamma\Delta\varphi_j - \Delta\psi_j = \lambda_j\psi_j & \text{in } \mathcal{O}, \\ \dfrac{\partial\varphi_j}{\partial n} = \dfrac{\partial\Delta\varphi_j}{\partial n} = \dfrac{\partial\psi_j}{\partial n} = 0 & \text{on } \Gamma, \end{cases} \tag{5.22}$$
for all $j = 1, 2, \ldots$.
By the self-adjointness of $A$, we may assume that the system $\{(\varphi_j\ \psi_j)^T\}_{j=1}^\infty$ forms
an orthonormal basis in $L^2(\mathcal{O}) \times L^2(\mathcal{O})$ that is orthogonal in $D(A)$.
The control design procedure developed in Chap. 2 requires further knowledge
about the eigenvectors of the linear operator $A$. We refer to the validation of the
decisive hypothesis (A5) regarding the unique continuation of the eigenvectors. It is
clear from the form of the operator $A$, which involves the Laplace operator, that the
eigenvectors $(\varphi_j\ \psi_j)^T$ can be associated with the eigenfunctions of the Neumann
Laplacian (this is indeed true; see (5.23) below). In this light, let us denote by $\{\mu_j\}_{j=1}^\infty$
and by $\{e_j\}_{j=1}^\infty$ the eigenvalues and the normalized eigenfunctions of the Neumann
Laplacian, respectively, i.e.,
$$\Delta e_j = \mu_j e_j \ \text{in } \mathcal{O} \quad\text{and}\quad \frac{\partial e_j}{\partial n} = 0 \ \text{on } \Gamma,$$
to which we simply refer as the Laplace operator $\Delta$ in the sequel. We know that
$\mu_j \le 0$ for all $j = 1, 2, \ldots$, and $\mu_j \to -\infty$ as $j \to \infty$. Moreover, $\{e_j\}_{j=1}^\infty$ forms an
orthonormal basis in $L^2(\mathcal{O})$ that is orthogonal in $H^1(\mathcal{O})$.
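For the model interval $\mathcal{O} = (0,1)$ these objects are explicit: $\mu_j = -((j-1)\pi)^2$ and the $e_j$ are (normalized) cosines. The Python sketch below, a hypothetical illustration with a cell-centered finite-difference Neumann Laplacian, reproduces the first eigenvalues.

```python
import numpy as np

# Cell-centered finite differences for the Neumann Laplacian on (0,1):
# Delta e = mu e, e'(0) = e'(1) = 0; exact eigenvalues -((j-1)*pi)^2.
n = 200
h = 1.0 / n

main = -2.0 * np.ones(n)
main[0] = main[-1] = -1.0        # homogeneous Neumann via mirrored cells
off = np.ones(n - 1)
Lap = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

mu = np.sort(np.linalg.eigvalsh(Lap))[::-1]   # mu_1 = 0 >= mu_2 >= ...
print(mu[0])                                  # ~ 0
print(mu[1])                                  # ~ -pi^2
```

The zero eigenvalue with constant eigenfunction is exactly the $\mu_1 = 0$ mode that later produces the double zero eigenvalue of $A$.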
We have enough experience (from the previous chapters) to realize that the Neu-
mann boundary conditions yield that hypothesis (A5) reads as follows: the trace of
the eigenvector (ϕ j ψ j )T , j = 1, 2, . . . , N , is not identically zero on Γ1 . In any
case, due to the boundary conditions in (5.16) and the definition of the Neumann
map Dη in (5.27) below, in the present case we have that
$$D^*(\varphi\ \psi)^T = \psi;$$
see (5.28) below. Hence our task is considerably simplified, since we must show
the nonvanishing of the second component of the eigenvector only. More exactly, each
$\psi_j$ turns out to be proportional to a single Neumann eigenfunction: there exists $k$
such that
$$\psi_j \equiv \frac{\gamma\mu_k}{\sqrt{(\gamma\mu_k)^2 + (\lambda_j + \mu_k)^2}}\, e_k, \tag{5.23}$$
and hypothesis (A5) amounts to
$$\psi_j \not\equiv 0 \ \text{on } \Gamma_1, \quad \forall j = 1, 2, \ldots, N. \tag{5.24}$$
Proof Let $j \in \{1, 2, \ldots, N-2\}$. For the sake of simplicity of notation, we drop the
index $j$; that is, we use the notation
$$(\varphi\ \psi)^T = (\varphi_j\ \psi_j)^T \quad\text{and}\quad \lambda = \lambda_j.$$
For all j, this is a second-order linear homogeneous system, with the unknowns
ϕ j , ψ j . Computing the determinant of the matrix of the system, we get that μ j must
satisfy
may have at most two distinct negative roots (since the free term is λ2 > 0). Denote
them by X 1 < 0, X 2 < 0. Assume that we have
has unit norm and satisfies equation (5.25), i.e., it is an eigenvector for A correspond-
ing to the eigenvalue λ. Notice that X1 ∪ X2 contains M + L + 2 orthogonal unit
vectors, which in particular are linearly independent.
Furthermore, arguing as above, let (ϕ̃ ψ̃)T satisfy (5.25). Then necessarily
Taking into account that the system {ek , ek+1 , . . . , ek+M , es , es+1 , . . . , es+L } is lin-
early independent, plugging the above ϕ̃ and ψ̃ into relations (5.25), and recalling
that μk = · · · = μk+M = X 1 , μs = μs+1 = · · · = μs+L = X 2 , we deduce that
$$\tilde{\varphi}^q = \frac{\lambda + X_1}{\gamma X_1}\,\tilde{\psi}^q, \quad q = k, k+1, \ldots, k+M,$$
$$\tilde{\varphi}^q = \frac{\lambda + X_2}{\gamma X_2}\,\tilde{\psi}^q, \quad q = s, s+1, \ldots, s+L.$$
Hence
$$(\tilde{\varphi}\ \tilde{\psi})^T = \tilde{\psi}^k\left(\tfrac{\lambda+X_1}{\gamma X_1}\, e_k\ \ e_k\right)^T + \cdots + \tilde{\psi}^{k+M}\left(\tfrac{\lambda+X_1}{\gamma X_1}\, e_{k+M}\ \ e_{k+M}\right)^T$$
$$+\ \tilde{\psi}^s\left(\tfrac{\lambda+X_2}{\gamma X_2}\, e_s\ \ e_s\right)^T + \cdots + \tilde{\psi}^{s+L}\left(\tfrac{\lambda+X_2}{\gamma X_2}\, e_{s+L}\ \ e_{s+L}\right)^T,$$
or equivalently,
$$(\tilde{\varphi}\ \tilde{\psi})^T = \frac{\sqrt{(\lambda+X_1)^2+(\gamma X_1)^2}}{\gamma X_1}\,\tilde{\psi}^k \left(\tfrac{\lambda+X_1}{\sqrt{(\lambda+X_1)^2+(\gamma X_1)^2}}\, e_k\ \ \tfrac{\gamma X_1}{\sqrt{(\lambda+X_1)^2+(\gamma X_1)^2}}\, e_k\right)^T + \cdots$$
$$+\ \frac{\sqrt{(\lambda+X_1)^2+(\gamma X_1)^2}}{\gamma X_1}\,\tilde{\psi}^{k+M} \left(\tfrac{\lambda+X_1}{\sqrt{(\lambda+X_1)^2+(\gamma X_1)^2}}\, e_{k+M}\ \ \tfrac{\gamma X_1}{\sqrt{(\lambda+X_1)^2+(\gamma X_1)^2}}\, e_{k+M}\right)^T$$
$$+\ \frac{\sqrt{(\lambda+X_2)^2+(\gamma X_2)^2}}{\gamma X_2}\,\tilde{\psi}^{s} \left(\tfrac{\lambda+X_2}{\sqrt{(\lambda+X_2)^2+(\gamma X_2)^2}}\, e_{s}\ \ \tfrac{\gamma X_2}{\sqrt{(\lambda+X_2)^2+(\gamma X_2)^2}}\, e_{s}\right)^T + \cdots$$
$$+\ \frac{\sqrt{(\lambda+X_2)^2+(\gamma X_2)^2}}{\gamma X_2}\,\tilde{\psi}^{s+L} \left(\tfrac{\lambda+X_2}{\sqrt{(\lambda+X_2)^2+(\gamma X_2)^2}}\, e_{s+L}\ \ \tfrac{\gamma X_2}{\sqrt{(\lambda+X_2)^2+(\gamma X_2)^2}}\, e_{s+L}\right)^T.$$
Thus we obtain that the above (ϕ̃ ψ̃)T may be written as a linear combination of the
vectors from X1 ∪ X2 . In other words, X1 ∪ X2 forms a system of generators for
the subspace of the eigenvectors of the operator A corresponding to the eigenvalue λ.
Recalling that X1 ∪ X2 is linearly independent, we conclude that in fact, X1 ∪ X2
represents an orthonormal basis of this subspace. Consequently, we may choose the
We observe that the above polynomial has a negative root, namely $\lambda$, and a nonnegative
one. Indeed, assume for the sake of a contradiction that both roots are negative.
Then by Viète's relations, we deduce that
$$-\mu_k^2(1 + \gamma^2) > 0,$$
and therefore $\mu_k$ must satisfy $\mu_k \ge \frac{F_l - \gamma^2}{\nu}$.
$$\big\langle D_\eta a, (\varphi_j\ \psi_j)^T\big\rangle = \frac{\alpha_0}{\eta - \lambda_j}\,\langle a, \psi_j\rangle_0, \quad j = 1, 2, \ldots, N-1,$$
$$\big\langle D_\eta a, (\varphi_N\ \psi_N)^T\big\rangle = \frac{\alpha_0}{\eta - \lambda_N - \delta}\,\langle a, \psi_N\rangle_0. \tag{5.28}$$
Further, set
$$\Lambda_{\eta_k} := \operatorname{diag}\left(\frac{1}{\eta_k - \lambda_1},\ \frac{1}{\eta_k - \lambda_2},\ \ldots,\ \frac{1}{\eta_k - \lambda_{N-1}},\ \frac{1}{\eta_k - \lambda_N - \delta}\right), \tag{5.30}$$
$k = 1, 2, \ldots, N$, and
$$\Lambda_S := \sum_{k=1}^N \Lambda_{\eta_k}.$$
Moreover, define
$$B_k := \Lambda_{\eta_k} B \Lambda_{\eta_k}, \quad k = 1, 2, \ldots, N, \tag{5.31}$$
where $B$ is the Gram matrix of the system $\{\psi_j|_{\Gamma_1}\}_{j=1}^N$ in $L^2(\Gamma_1)$, i.e.,
$$B := \begin{pmatrix} \langle\psi_1, \psi_1\rangle_0 & \langle\psi_1, \psi_2\rangle_0 & \cdots & \langle\psi_1, \psi_N\rangle_0 \\ \langle\psi_2, \psi_1\rangle_0 & \langle\psi_2, \psi_2\rangle_0 & \cdots & \langle\psi_2, \psi_N\rangle_0 \\ \vdots & \vdots & \ddots & \vdots \\ \langle\psi_N, \psi_1\rangle_0 & \langle\psi_N, \psi_2\rangle_0 & \cdots & \langle\psi_N, \psi_N\rangle_0 \end{pmatrix}. \tag{5.32}$$
Set
$$(B_1 + B_2 + \cdots + B_N)^{-1} =: A. \tag{5.33}$$
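The matrix in (5.33) is well defined only if the Gram matrix $B$ is invertible, i.e., only if the traces $\psi_j|_{\Gamma_1}$ are linearly independent in $L^2(\Gamma_1)$, which is what the unique continuation property (5.24) secures. A hypothetical Python sketch with invented traces and, for simplicity, $B_k = B$ (the weights $\Lambda_{\eta_k}$ are omitted):

```python
import numpy as np

# Hypothetical boundary traces psi_j on Gamma_1, sampled on a grid; cosines
# are linearly independent, so the Gram matrix (5.32) is positive definite.
N = 4
s = np.linspace(0.0, 1.0, 501)
w = np.full_like(s, 1.0 / (len(s) - 1))       # trapezoidal quadrature weights
w[0] *= 0.5
w[-1] *= 0.5
psi = np.array([np.cos(j * np.pi * s) for j in range(N)])

B = np.array([[np.sum(w * psi[i] * psi[j]) for j in range(N)]
              for i in range(N)])

print(np.min(np.linalg.eigvalsh(B)) > 0)      # positive definite
A = np.linalg.inv(N * B)                      # stand-in for (5.33) with B_k = B
print(np.allclose(A @ (N * B), np.eye(N)))
```

If one of the traces vanished identically on $\Gamma_1$, the corresponding row and column of $B$ would be zero and the inverse in (5.33) would not exist; this is the computational face of hypothesis (A5).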
into equations (5.16), one may show, similarly as in Theorems 2.6, 2.7, that it achieves
its exponential stability. More exactly, we have the following result, which is com-
mented on in the forthcoming Remark 5.1. Its proof is omitted, since it is similar to
the proof of Theorem 2.6, and it has been repeated several times in previous chapters.
Remark 5.1 From the practical point of view, it is important to describe how one can
compute the first N eigenvectors of the operator A, since the boundary feedback law
$$\Delta e_j = \mu_j e_j \ \text{in } \mathcal{O}; \qquad \frac{\partial e_j}{\partial n} = 0 \ \text{on } \Gamma; \qquad j = 1, 2, \ldots, K,$$
for which
$$\mu_j \ge \frac{F_l - \gamma^2}{\nu}, \quad j = 1, 2, \ldots, K.$$
We have that $\mu_1 = 0$ and $\mu_i \neq 0$ for $i = 2, 3, \ldots, K$.
Then for each j = 1, 2, . . . , K one should check whether the polynomial
or there exists some $j\in\{2,\ldots,K\}$ such that the eigenvalue $\lambda_j$ can be computed as a root of the following second-degree polynomial:
In conclusion, the problem reduces to finding the first K eigenvalues and eigenfunc-
tions of the Neumann Laplace operator and computing the roots of some third-degree
polynomials.
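The recipe of Remark 5.1 (compute the first Neumann eigenpairs, then extract eigenvalues from the roots of low-degree polynomials) can be prototyped in one dimension, where the Neumann Laplacian on $(0,\pi)$ has the explicit eigenpairs $\mu_j=-(j-1)^2$, $e_j=\cos((j-1)x)$. The cubic below is a placeholder with illustrative coefficients, not the actual characteristic polynomial of the Cahn–Hilliard linearization.

```python
import numpy as np

# First K Neumann eigenpairs of the Laplacian on (0, pi):
# Delta e_j = mu_j e_j, e_j'(0) = e_j'(pi) = 0  =>  mu_j = -(j-1)^2, e_j = cos((j-1)x).
K = 5
mu = np.array([-(j - 1) ** 2 for j in range(1, K + 1)], dtype=float)
x = np.linspace(0.0, np.pi, 400)
e = np.array([np.cos((j - 1) * x) for j in range(1, K + 1)])

# mu_1 = 0 and mu_j != 0 for j >= 2, as in the remark.
assert mu[0] == 0.0 and np.all(mu[1:] != 0.0)

# For each eigenpair one then solves a third-degree polynomial.
# The coefficients below are purely hypothetical placeholders.
def third_degree_roots(mu_j, nu=1.0, gamma=0.5):
    coeffs = [1.0, -mu_j * (1.0 + gamma ** 2), nu * mu_j, -gamma]  # hypothetical cubic
    return np.roots(coeffs)

roots = {j: third_degree_roots(mu[j]) for j in range(K)}
assert all(len(r) == 3 for r in roots.values())
```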
Recalling the notation (5.11), the following result concerning the linearized system of (5.10) follows immediately from Proposition 5.2.
$$\mathcal{O}=\mathcal{O}(\theta,\varphi):=\begin{pmatrix}\langle\varphi,\varphi_1\rangle+\alpha_0\langle\theta+l_0\varphi,\psi_1\rangle-\langle\varphi_\infty,\varphi_1\rangle-\alpha_0\langle\theta_\infty-l_0\varphi_\infty,\psi_1\rangle\\ \langle\varphi,\varphi_2\rangle+\alpha_0\langle\theta+l_0\varphi,\psi_2\rangle-\langle\varphi_\infty,\varphi_2\rangle-\alpha_0\langle\theta_\infty-l_0\varphi_\infty,\psi_2\rangle\\ \vdots\\ \langle\varphi,\varphi_N\rangle+\alpha_0\langle\theta+l_0\varphi,\psi_N\rangle-\langle\varphi_\infty,\varphi_N\rangle-\alpha_0\langle\theta_\infty-l_0\varphi_\infty,\psi_N\rangle\end{pmatrix} \quad (5.38)$$
and
$$J:=\begin{pmatrix}\psi_1\\ \psi_2\\ \vdots\\ \psi_N\end{pmatrix}.$$
5.2 Comments
Instead of the Cahn–Hilliard system, one usually studies its simpler form, the so-called phase-field system, which reads as
$$\begin{cases}\theta_t-k\Delta\theta+la\Delta\varphi+lb(\varphi-\varphi^3)-ld\theta=0,\\ \varphi_t-a\Delta\varphi-b(\varphi-\varphi^3)+d\theta=0,\end{cases}\quad\text{in }\mathbb{R}_+\times\mathcal{O}. \quad (5.39)$$
The problem of stabilization of the phase field system has been intensively studied
in the literature using various methods. The Riccati-based approach is used in Barbu
[8], where a stabilizing finite-dimensional feedback controller with compact support
acting only on one component of the system is constructed. The boundary stabiliza-
tion problem was studied, for example, in Chen [45] using the time optimal control
technique, while in Munteanu [99], a proportional boundary feedback is designed
under the constraint that the eigenfunctions are linearly independent.
Concerning the problem of stabilization of systems of Cahn–Hilliard type (5.1)–(5.3), we mention the work of Barbu et al. [17], which constructs an internal stabilizing feedback. Concerning the boundary stabilization case, the result of this chapter is the first in this direction, and it is based on the ideas in [17]. Other
results related to the control problem associated with the Cahn-Hilliard system are
the sliding mode controls in Barbu et al. [16] and Colli et al. [49], while in Marinoschi
[93], the singular potential case is investigated.
Chapter 6
Stabilization of Equations with Delays
The subject of this chapter is the Dirichlet boundary control problem of the following
evolution integro–partial differential equation:
$$\begin{cases}\partial_t y(t,x)=\Delta y(t,x)+\displaystyle\int_{-\infty}^{t}k(t-s)\Delta y(s,x)\,ds+\mu\int_{-\infty}^{t}k(t-s)y(s,x)\,ds\\ \qquad\qquad\quad+f(y(t,x)),\quad (t,x)\in Q:=(0,\infty)\times\mathcal{O},\\ y(t,x)=u(t,x)\ \text{ on }\Sigma_1:=(0,\infty)\times\Gamma_1,\\ \dfrac{\partial}{\partial n}y=0\ \text{ on }\Sigma_2:=(0,\infty)\times\Gamma_2,\\ y(t,x)=y_o(t,x),\quad (t,x)\in(-\infty,0]\times\mathcal{O}.\end{cases} \quad (6.1)$$
The effects of the memory are expressed through the linear time convolutions of $\Delta y(\cdot,\cdot)$ and $y(\cdot,\cdot)$ with the memory kernel $k(\cdot)$. The Dirichlet controller $u$ is applied on $\Gamma_1$, while $\Gamma_2$ is insulated.
As can be seen, in the initial condition (the fourth equation in (6.1)), it is assumed that the function $y(t,x)$ is known for all $t\leq 0$. However, $y(t,x)$ does not necessarily satisfy the equation for negative $t$. We shall assume for the nonlinear function $f$ that $f(0)=0$ and $f'(0)>0$, and adopt one of the following two hypotheses, which we have met before:
(i) $f\in C^1(\mathbb{R})$;
(ii) $f\in C^2(\mathbb{R})$, and there exist $C_1>0$, $q\in\mathbb{N}$, $\alpha_i>0$, $i=1,\ldots,q$, when $d=1,2$, and $0<\alpha_i\leq 1$, $i=1,\ldots,q$, when $d=3$, such that
$$|f''(y)|\leq C_1\left(\sum_{i=1}^{q}|y|^{\alpha_i}+1\right),\quad\forall y\in\mathbb{R}.$$
$$Ay:=-\Delta y,\quad\forall y\in D(A),$$
$$D(A)=\left\{y\in H^2(\mathcal{O}):\ y=0\ \text{ on }\Gamma_1\ \text{ and }\ \frac{\partial}{\partial n}y=0\ \text{ on }\Gamma_2\right\}.$$
Let $\{\varphi_i\}_{i=1}^{\infty}$ be an orthonormal basis of eigenfunctions of $A$. By $\{\lambda_i\}_{i=1}^{\infty}$ we denote the corresponding eigenvalues, repeated according to their multiplicity. It is easy to see that we can rearrange the set of eigenvalues as
$$0<\lambda_1\leq\lambda_2\leq\cdots\leq\lambda_i\leq\cdots,$$
with $\lambda_i\to\infty$ as $i\to\infty$.
For the rest of this chapter, we let · stand for the norm in L 2 (O).
In addition to the above context, we assume as well that
(k) there exists some $\delta>0$ such that the nonnegative memory kernel $k$ satisfies
$$k'(t)+2\delta k(t)\leq 0\ \text{ and }\ k''(t)+2\delta k'(t)\geq 0,\quad\forall t\geq 0, \quad (6.2)$$
and moreover,
$$(-1)^m\frac{d^m}{dt^m}\left(e^{\rho\delta t}k(t)\right)\geq 0,\quad m=0,1,2,$$
for all $t>0$ and $0\leq\rho\leq 1$. Hence by [7, Proposition 4.1], $e^{\rho\delta\cdot}k(\cdot)$ is a positive kernel, i.e.,
$$\int_0^t w(\tau)\left(\int_0^\tau e^{\rho\delta(\tau-s)}k(\tau-s)w(s)\,ds\right)d\tau\geq 0,\quad\forall w\in L^2(0,t;\mathbb{R}),\ t\geq 0, \quad (6.4)$$
for all $0\leq\rho\leq 1$.
It should be noted that such kernels $k$ satisfying (6.2) are often considered in the literature in studying the stability of heat equations with memory (see, for example, [46, 88]). In fact, the exponential decay of $k$ reflects the fading of the far history in the model. Besides this, a simple example of a kernel that obeys (6.2) is
$$k(t)=\sum_{i=1}^{M}b_ie^{-a_it},\quad t\geq 0,$$
for some $a_i,b_i>0$, $i=1,\ldots,M$.
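The finite exponential sum above is easy to check against (6.2) numerically. The sketch below evaluates $k$, $k'$, $k''$ in closed form for illustrative $a_i,b_i$ (any $\delta$ with $2\delta\leq\min_i a_i$ works) and verifies the two inequalities on a grid.

```python
import numpy as np

# k(t) = sum_i b_i exp(-a_i t) with a_i, b_i > 0 (illustrative values).
a = np.array([1.0, 2.0, 5.0])
b = np.array([0.3, 0.5, 0.2])
delta = 0.45 * a.min()               # any delta with 2*delta <= min(a_i) works

t = np.linspace(0.0, 10.0, 1001)
E = np.exp(-np.outer(a, t))          # rows: e^{-a_i t}
k = b @ E
k1 = -(a * b) @ E                    # k'
k2 = (a ** 2 * b) @ E                # k''

# Hypothesis (6.2): k' + 2 delta k <= 0 and k'' + 2 delta k' >= 0 on [0, infinity).
assert np.all(k1 + 2 * delta * k <= 1e-12)
assert np.all(k2 + 2 * delta * k1 >= -1e-12)
assert np.all(k >= 0)
```

Indeed, $k'+2\delta k=\sum_i b_i(2\delta-a_i)e^{-a_it}\leq 0$ and $k''+2\delta k'=\sum_i b_ia_i(a_i-2\delta)e^{-a_it}\geq 0$ precisely when $2\delta\leq\min_i a_i$.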
(o) The initial data yo belongs to the space L 2 (−∞, 0; H 2 (O)).
On setting
$$\eta(t,x):=\int_{-\infty}^{0}k(t-s)\Delta y_o(s,x)\,ds+\mu\int_{-\infty}^{0}k(t-s)y_o(s,x)\,ds,\quad (t,x)\in Q, \quad (6.5)$$
assumption (o) leads to the following estimates:
$$\begin{aligned}\|\eta(t)\|^2&\leq 2\left(\int_{-\infty}^{0}k(t-s)\|\Delta y_o(s)\|\,ds\right)^2+2\mu^2\left(\int_{-\infty}^{0}k(t-s)\|y_o(s)\|\,ds\right)^2\\ &\quad\text{(using Schwarz's inequality and relation (6.3), with }\rho=1)\\ &\leq 2\int_{-\infty}^{0}e^{-4\delta(t-s)}\,ds\left(\int_{-\infty}^{0}\|\Delta y_o(s)\|^2\,ds+\mu^2\int_{-\infty}^{0}\|y_o(s)\|^2\,ds\right)\\ &\leq \frac{\max\{1,\mu^2\}}{2\delta}e^{-4\delta t}\|y_o\|^2_{L^2(-\infty,0;H^2(\mathcal{O}))},\quad t\geq 0.\end{aligned} \quad (6.6)$$
Therefore, there exists a constant $C>0$ such that
$$-\lambda_i+f'(0)+\frac{\delta}{4}<0\ \text{ and }\ -\lambda_i+\mu<0\quad\text{for }i=N+1,N+2,\ldots. \quad (6.8)$$
As we have done already in this book, in order to simplify the presentation, we assume that the first $N$ eigenvalues are simple. Of course, one may argue as in the second part of Chap. 2 to deal with the general semisimple case.
Accordingly, adapting the proportional feedback law provided in Chap. 2 to suit our case, we have that $B$ is the Gram matrix of the system $\left\{\frac{\partial}{\partial n}\varphi_i\right\}_{i=1}^{N}$ in the Hilbert space $L^2(\Gamma_1)$, with the standard scalar product
$$\langle g,h\rangle_0:=\int_{\Gamma_1}g(x)h(x)\,d\sigma,$$
$$\Lambda_S:=\sum_{k=1}^{N}\Lambda_{\gamma_k} \quad (6.11)$$
and
$$A=(B_1+B_2+\cdots+B_N)^{-1}, \quad (6.12)$$
where
$$B_k:=\Lambda_{\gamma_k}B\Lambda_{\gamma_k},\quad k=1,\ldots,N. \quad (6.13)$$
(Recall that by virtue of Example 2.4 and Proposition 2.1, the sum $B_1+\cdots+B_N$ is invertible.)
Finally, the feedback laws are
$$u_k(t,x)=\left\langle A\begin{pmatrix}\langle y(t),\varphi_1\rangle\\ \langle y(t),\varphi_2\rangle\\ \vdots\\ \langle y(t),\varphi_N\rangle\end{pmatrix},\begin{pmatrix}\frac{1}{\gamma_k-\lambda_1}\frac{\partial}{\partial n}\varphi_1(x)\\ \frac{1}{\gamma_k-\lambda_2}\frac{\partial}{\partial n}\varphi_2(x)\\ \vdots\\ \frac{1}{\gamma_k-\lambda_N}\frac{\partial}{\partial n}\varphi_N(x)\end{pmatrix}\right\rangle_N,\quad t\geq 0,\ x\in\Gamma_1, \quad (6.14)$$
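Formula (6.14) is just a weighted inner product in $\mathbb{R}^N$; the sketch below evaluates $u_k(t,x)$ at a boundary sample point from the first $N$ modal coefficients $\langle y(t),\varphi_i\rangle$ and sampled normal-derivative traces $\frac{\partial\varphi_i}{\partial n}$. All data (traces, eigenvalues, gains $\gamma_k$) are synthetic placeholders, not the actual spectral data of the memory problem.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_bdry = 3, 50
lam = np.array([1.0, 3.0, 7.0])              # lambda_1..lambda_N (illustrative)
gamma = np.array([10.0, 11.0, 12.0])         # gamma_1 < ... < gamma_N, all > lambda_N
dphi_dn = rng.standard_normal((N, n_bdry))   # sampled traces of d(phi_i)/dn on Gamma_1

# Stand-in Gram data: B from the sampled traces, Lambda_{gamma_k} as in the text.
w = np.full(n_bdry, 1.0 / n_bdry)
B = (dphi_dn * w) @ dphi_dn.T
Lam = [np.diag(1.0 / (gk - lam)) for gk in gamma]
A = np.linalg.inv(sum(L @ B @ L for L in Lam))           # cf. (6.12)-(6.13)

def u_k(k, y_modes, x_idx):
    """Evaluate (6.14) at boundary sample x_idx for the k-th feedback component."""
    weights = dphi_dn[:, x_idx] / (gamma[k] - lam)       # (1/(gamma_k - lambda_i)) d(phi_i)/dn
    return (A @ y_modes) @ weights

y_modes = np.array([0.2, -0.1, 0.05])        # <y(t), phi_i>, i = 1..N (illustrative)
u = sum(u_k(k, y_modes, 10) for k in range(N))   # u = u_1 + ... + u_N at one point
```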
$$u=u_1+u_2+\cdots+u_N.$$
Since $y_o$ is known, we deduce that for negative time, $u$ is in fact a known function. The following result amounts to saying that the feedback $u$ given by (6.15) globally exponentially stabilizes the first-order approximation of (6.1).
Theorem 6.1 Let $N\in\mathbb{N}$ be as in (6.8). Under hypotheses (k), (o), (i), for each $y_o\in C((-\infty,0];L^2(\mathcal{O}))\cap L^2(-\infty,0;H^2(\mathcal{O}))$, there exists a unique solution
$$y\in C\left([0,\infty);L^2(\mathcal{O})\right)\cap L^2(0,\infty;H^1(\mathcal{O}))$$
Proof First, we equivalently rewrite Eq. (6.17) as one with null boundary conditions. To this end, we introduce, similarly as in (2.16) and (2.18), the map $D$, as follows: given $\beta\in L^1(\Gamma_1)$, we denote by $D_\gamma\beta:=y$ the solution to the equation
$$\begin{cases}-\Delta y-2\displaystyle\sum_{k=1}^{N}\lambda_k\langle y,\varphi_k\rangle\varphi_k+\gamma y=0\ \text{ in }\mathcal{O},\\ y=\beta\ \text{ on }\Gamma_1,\qquad \dfrac{\partial}{\partial n}y=0\ \text{ on }\Gamma_2.\end{cases} \quad (6.20)$$
$$z(t,x):=y(t,x)-\sum_{k=1}^{N}D_{\gamma_k}u_k(t,x),\quad (t,x)\in Q,$$
and
$$z_o(x):=y_o(0,x)-\sum_{k=1}^{N}D_{\gamma_k}u_k(0,x),\quad x\in\mathcal{O},$$
$$u_k(t,x)=\frac{1}{2}\left\langle A\begin{pmatrix}\langle z(t),\varphi_1\rangle\\ \langle z(t),\varphi_2\rangle\\ \vdots\\ \langle z(t),\varphi_N\rangle\end{pmatrix},\begin{pmatrix}\frac{1}{\gamma_k-\lambda_1}\frac{\partial}{\partial n}\varphi_1\\ \frac{1}{\gamma_k-\lambda_2}\frac{\partial}{\partial n}\varphi_2\\ \vdots\\ \frac{1}{\gamma_k-\lambda_N}\frac{\partial}{\partial n}\varphi_N\end{pmatrix}\right\rangle_N. \quad (6.23)$$
$$\begin{cases}z_t(t,x)=-Az(t,x)-\displaystyle\int_0^t k(t-s)Az(s)\,ds+\mu\int_0^t k(t-s)z(s)\,ds\\ \qquad+\mu\displaystyle\int_0^t k(t-s)\sum_{k=1}^{N}D_{\gamma_k}u_k(s)\,ds\\ \qquad+(R_1+R_2)(\langle z(t),\varphi_1\rangle,\ldots,\langle z(t),\varphi_N\rangle)+f'(0)z+f'(0)\displaystyle\sum_{k=1}^{N}D_{\gamma_k}u_k\\ \qquad+\displaystyle\int_0^t k(t-s)R_2(\langle z(s),\varphi_1\rangle,\ldots,\langle z(s),\varphi_N\rangle)\,ds+\eta(t),\quad t>0,\ x\in\mathcal{O},\\ z(0,x)=z_o(x),\quad x\in\mathcal{O},\end{cases} \quad (6.25)$$
where
$$\begin{aligned}R_1(\langle z,\varphi_1\rangle,\ldots,\langle z,\varphi_N\rangle)&:=-\left(\sum_{i=1}^{N}D_{\gamma_i}u_i\right)_t,\\ R_2(\langle z,\varphi_1\rangle,\ldots,\langle z,\varphi_N\rangle)&:=-2\sum_{i,j=1}^{N}\lambda_j\langle D_{\gamma_i}u_i,\varphi_j\rangle\varphi_j+\sum_{i=1}^{N}\gamma_iD_{\gamma_i}u_i.\end{aligned} \quad (6.26)$$
(One may show that given an initial datum $y_o\in C((-\infty,0];L^2(\mathcal{O}))\cap L^2(-\infty,0;H^2(\mathcal{O}))$, there exists a unique solution $z\in C([0,\infty);L^2(\mathcal{O}))\cap L^2(0,\infty;H^1(\mathcal{O}))$ to the system (6.25), proving thereby the well-posedness of (6.25) and consequently that of (6.17). See, for instance, [46, Theorem 2.1].)
By virtue of (6.24), using the fact that A is the inverse of the sum of Bk ’s, k =
1, . . . , N , we immediately see that
$$\begin{pmatrix}\langle R_1,\varphi_1\rangle\\ \langle R_1,\varphi_2\rangle\\ \vdots\\ \langle R_1,\varphi_N\rangle\end{pmatrix}=\frac{1}{2}\sum_{k=1}^{N}B_kAZ_t=\frac{1}{2}Z_t,$$
$$\begin{pmatrix}\langle R_2,\varphi_1\rangle\\ \langle R_2,\varphi_2\rangle\\ \vdots\\ \langle R_2,\varphi_N\rangle\end{pmatrix}=\Lambda\sum_{k=1}^{N}B_kAZ-\frac{1}{2}\sum_{k=1}^{N}\gamma_kB_kAZ=\Lambda Z-\frac{1}{2}\gamma_1Z+\frac{1}{2}\sum_{k=2}^{N}(\gamma_1-\gamma_k)B_kAZ, \quad (6.27)$$
where we have set $Z(t):=\left(\langle z(t),\varphi_1\rangle,\langle z(t),\varphi_2\rangle,\ldots,\langle z(t),\varphi_N\rangle\right)^T$, $t\geq 0$, and $\Lambda:=\mathrm{diag}(\lambda_1,\lambda_2,\ldots,\lambda_N)$.
Taking into account the above relations and projecting Eq. (6.25) onto the space $X_u:=\mathrm{lin\,span}\{\varphi_j\}_{j=1}^{N}$, it follows that
$$\begin{aligned}\frac{d}{dt}Z(t)&=-\Lambda Z(t)-\int_0^t k(t-s)\Lambda Z(s)\,ds+\mu\int_0^t k(t-s)Z(s)\,ds\\ &\quad-\frac{\mu}{2}\int_0^t k(t-s)Z(s)\,ds+\frac{1}{2}\frac{d}{dt}Z(t)+\Lambda Z(t)\\ &\quad-\frac{1}{2}\gamma_1Z(t)+\frac{1}{2}\sum_{k=2}^{N}(\gamma_1-\gamma_k)B_kAZ(t)\\ &\quad+f'(0)Z(t)-\frac{f'(0)}{2}Z(t)\\ &\quad+\int_0^t k(t-s)\left(\Lambda Z(s)-\frac{1}{2}\gamma_1Z(s)+\frac{1}{2}\sum_{k=2}^{N}(\gamma_1-\gamma_k)B_kAZ(s)\right)ds+L(t),\end{aligned}$$
for $t>0$, where $L(t):=\left(\langle\eta(t),\varphi_1\rangle,\langle\eta(t),\varphi_2\rangle,\ldots,\langle\eta(t),\varphi_N\rangle\right)^T$.
Equivalently,
$$\begin{aligned}\frac{d}{dt}Z(t)&=\left[-\gamma_1+f'(0)\right]Z(t)+(\mu-\gamma_1)\int_0^t k(t-s)Z(s)\,ds\\ &\quad+\sum_{k=2}^{N}(\gamma_1-\gamma_k)B_kAZ(t)\\ &\quad+\int_0^t k(t-s)\left(\sum_{k=2}^{N}(\gamma_1-\gamma_k)B_kAZ(s)\right)ds+2L(t),\quad t>0.\end{aligned} \quad (6.28)$$
Now let us scalar multiply (in $\mathbb{R}^N$) Eq. (6.28) by $AZ(t)$, to arrive at (see (2.39)–(2.40))
$$\begin{aligned}\frac{1}{2}\frac{d}{dt}\left\|A^{\frac12}Z(t)\right\|_N^2&\leq\left[-\gamma_1+f'(0)\right]\left\|A^{\frac12}Z(t)\right\|_N^2+(\mu-\gamma_1)\left\langle A^{\frac12}Z(t),\int_0^t k(t-s)A^{\frac12}Z(s)\,ds\right\rangle_N\\ &\quad+\left\langle A^{\frac12}Z(t),\int_0^t k(t-s)\left(\sum_{k=2}^{N}(\gamma_1-\gamma_k)A^{\frac12}B_kA^{\frac12}A^{\frac12}Z(s)\right)ds\right\rangle_N\\ &\quad+2\langle AZ(t),L(t)\rangle_N,\quad t>0.\end{aligned}$$
$$\begin{aligned}\frac{d}{d\tau}\left(e^{2\delta\tau}\left\|A^{\frac12}Z(\tau)\right\|_N^2\right)&\leq 2\left[-\gamma_1+f'(0)+\delta\right]e^{2\delta\tau}\left\|A^{\frac12}Z(\tau)\right\|_N^2\\ &\quad+2(\mu-\gamma_1)\left\langle e^{\delta\tau}A^{\frac12}Z(\tau),\int_0^\tau e^{\delta(\tau-s)}k(\tau-s)e^{\delta s}A^{\frac12}Z(s)\,ds\right\rangle_N\\ &\quad+2\sum_{k=2}^{N}(\gamma_1-\gamma_k)\left\langle e^{\delta\tau}\left(A^{\frac12}B_kA^{\frac12}\right)^{\frac12}A^{\frac12}Z(\tau),\int_0^\tau e^{\delta(\tau-s)}k(\tau-s)e^{\delta s}\left(A^{\frac12}B_kA^{\frac12}\right)^{\frac12}A^{\frac12}Z(s)\,ds\right\rangle_N\\ &\quad+4e^{\delta\tau}\left\|A^{\frac12}\right\|_N\|L(\tau)\|_N\,e^{\delta\tau}\left\|A^{\frac12}Z(\tau)\right\|_N,\quad\tau>0.\end{aligned}$$
Here we used in the last term the Cauchy–Schwarz inequality, and $\left\|A^{\frac12}\right\|_N$ stands for the induced Euclidean norm of the matrix $A^{\frac12}$. Then integrating the above inequality with respect to $\tau$ over $(0,t)$, we deduce that
t " 1 " 2
1 1 " "
e2δt A 2 Z (t) 2N ≤ A 2 Z (0) 2N + 2[−γ1 + f (0) + δ] eδτ "A 2 Z (τ )" dτ
0 N
t τ
1 1
+ 2(μ − γ1 )
eδτ A 2 Z (τ ), eδ(τ −s) k(τ − s)eδs A 2 Z (s)ds N dτ
0 0
N )
+2 (γ1 − γk )
k=2
t# 1 1 1 τ 1 1 1
(
1 2 1 2
eδτ A 2 Bk A 2 A 2 Z (τ ), eδ(τ −s) k(τ − s)eδs A 2 Bk A 2 A 2 Z (s)ds dτ
0 0 N
t
1 1
+ 4 A 2 N eδτ L(τ ) N eδτ A 2 Z (τ ) N dτ
0
118 6 Stabilization of Equations with Delays
(using in the third and the fourth term relation (6.4) with ρ = 1,
and the fact that μ − γ1 < 0 and γ1 − γk < 0, k = 2, . . . , N )
t " 1 " 2
1 " "
≤ A 2 Z (0) 2N + 2[−γ1 + f (0) + δ] eδτ "A 2 Z (τ )" dτ
0 N
t
1 1
+ 4 A 2 N eδτ L(τ ) N eδτ A 2 Z (τ ) N dτ
0
(using, in the last term, Young’s inequality and the fact that − γ1 + f (0) + δ < 0)
1 t
1
2 8 A 2 2N
≤ A Z (0) N +
2 e2δτ L(τ ) 2N dτ, t > 0.
−γ1 + f (0) + δ 0
It follows that
$$\begin{aligned}\left\|A^{\frac12}Z(t)\right\|_N^2&\leq e^{-2\delta t}\left(\left\|A^{\frac12}Z(0)\right\|_N^2+\frac{8\left\|A^{\frac12}\right\|_N^2}{\gamma_1-f'(0)-\delta}\int_0^t e^{2\delta\tau}\|\eta(\tau)\|^2\,d\tau\right)\\ &\leq e^{-2\delta t}\left(\left\|A^{\frac12}Z(0)\right\|_N^2+\frac{8\left\|A^{\frac12}\right\|_N^2}{\gamma_1-f'(0)-\delta}\,C\|y_o\|^2_{L^2(-\infty,0;H^2(\mathcal{O}))}\right),\quad t\geq 0,\end{aligned} \quad (6.29)$$
using (6.7). Hence, recalling that $A^{\frac12}$ is symmetric and positive definite, we see that (6.29) yields the existence of a constant $C>0$ such that
$$\|Z(t)\|_N^2\leq Ce^{-2\delta t}\left(\|Z(0)\|_N^2+\|y_o\|^2_{L^2(-\infty,0;H^2(\mathcal{O}))}\right),\quad t\geq 0. \quad (6.30)$$
Now taking the norm in (6.28) and using (6.30), (6.3) with $\rho=\frac12$, and (6.7), we deduce that
$$\begin{aligned}\left\|\frac{d}{dt}Z(t)\right\|_N&\leq C\left(e^{-2\delta t}+\int_0^t e^{-\delta(t-s)}e^{-2\delta s}\,ds+e^{-\delta t}\right)\left(\|Z(0)\|_N^2+\|y_o\|^2_{L^2(-\infty,0;H^2(\mathcal{O}))}\right)\\ &\leq Ce^{-\delta t}\left(\|Z(0)\|_N^2+\|y_o\|^2_{L^2(-\infty,0;H^2(\mathcal{O}))}\right),\quad t\geq 0.\end{aligned} \quad (6.31)$$
$$\begin{aligned}&+\langle R_1(z_1,\ldots,z_N),\varphi_j\rangle+\sum_{i=1}^{N}\left[\gamma_i+f'(0)\right]\langle D_{\gamma_i}u_i(t),\varphi_j\rangle\\ &+\sum_{i=1}^{N}\int_0^t k(t-s)\gamma_i\langle D_{\gamma_i}u_i(s),\varphi_j\rangle\,ds+\langle\eta(t),\varphi_j\rangle,\quad t>0.\end{aligned} \quad (6.32)$$
$$\begin{aligned}&\mu\sum_{i=1}^{N}\int_0^t k(t-s)\langle D_{\gamma_i}u_i(s),\varphi_j\rangle\,ds+\sum_{i=1}^{N}\left[\gamma_i+f'(0)\right]\langle D_{\gamma_i}u_i(t),\varphi_j\rangle\\ &\quad+\sum_{i=1}^{N}\int_0^t k(t-s)\gamma_i\langle D_{\gamma_i}u_i(s),\varphi_j\rangle\,ds\\ &\leq Ce^{-\delta t}\left(\|Z(0)\|_N^2+\|y_o\|^2_{L^2(-\infty,0;H^2(\mathcal{O}))}\right),\quad t\geq 0.\end{aligned} \quad (6.33)$$
Recalling that $y=z+\sum_{i=1}^{N}D_{\gamma_i}u_i$, we immediately obtain (6.18), as desired.
To get the $H^1$-norm estimate in (6.19), we scalar multiply Eq. (6.25) by $z$. After some straightforward computations, using relations (6.35), (6.33), and (6.7), we get that
$$\begin{aligned}\|z(t)\|^2&\leq\|z_o\|^2-2\int_0^t\|\nabla z(\tau)\|^2\,d\tau\\ &\quad-2\int_{\mathcal{O}}\left(\int_0^t\nabla z(\tau,x)\cdot\int_0^\tau k(\tau-s)\nabla z(s,x)\,ds\,d\tau\right)dx\\ &\quad+\mu\int_0^t\|z(\tau)\|\int_0^\tau k(\tau-s)\|z(s)\|\,ds\,d\tau\\ &\quad+f'(0)\int_0^t\|z(\tau)\|^2\,d\tau+Ce^{-\delta t}\left(\|z(0)\|^2+\|y_o\|^2_{L^2(-\infty,0;H^2(\mathcal{O}))}\right)\\ &\leq\|z_o\|^2-2\int_0^t\|\nabla z(\tau)\|^2\,d\tau\\ &\quad+f'(0)\int_0^t\|z(\tau)\|^2\,d\tau+Ce^{-\delta t}\left(\|z(0)\|^2+\|y_o\|^2_{L^2(-\infty,0;H^2(\mathcal{O}))}\right).\end{aligned} \quad (6.36)$$
Recalling that $y=z+\sum_{i=1}^{N}D_{\gamma_i}u_i$, we get immediately that (6.19) holds, as claimed.
6.3 Feedback Stabilization of the Nonlinear System (6.1)
Here we plug the feedback u given by (6.15) into the nonlinear system (6.1) and show that it locally stabilizes it. More precisely, we have the following theorem.
Theorem 6.2 Let $N\in\mathbb{N}$ be as in (6.8). Under hypotheses (k), (o), (H), (ii), the feedback controller $u$ given by (6.15) locally exponentially stabilizes the nonlinear system (6.1). More exactly, there exists $\rho>0$ sufficiently small such that for all $y_o\in L^2(-\infty,0;H^2(\mathcal{O}))$ with $\|y_o(0)\|^2+\|y_o\|^2_{L^2(-\infty,0;H^2(\mathcal{O}))}\leq\rho$, there exists a unique solution
$$y\in C([0,\infty);L^2(\mathcal{O}))\cap L^2(0,\infty;H^1(\mathcal{O}))$$
to the equation
$$\begin{cases}\partial_t y(t,x)=\Delta y(t,x)+\displaystyle\int_{-\infty}^{t}k(t-s)\Delta y(s,x)\,ds+\mu\int_{-\infty}^{t}k(t-s)y(s,x)\,ds\\ \qquad\qquad\quad+f(y(t,x)),\quad (t,x)\in(0,\infty)\times\mathcal{O},\\ y(t,x)=u(y(t))\ \text{ on }\Gamma_1\ \text{ and }\ \dfrac{\partial}{\partial n}y=0\ \text{ on }\Gamma_2,\\ y(t,x)=y_o(t,x),\quad (t,x)\in(-\infty,0]\times\mathcal{O},\end{cases} \quad (6.38)$$
which is $L^2$-exponentially decaying.
Proof As in the proof of Theorem 6.1 (see (6.25)), we rewrite (6.38) via the operators $D_{\gamma_k}$, $k=1,2,\ldots,N$, as
$$\begin{cases}z_t(t,x)=-Az(t,x)-\displaystyle\int_0^t k(t-s)Az(s)\,ds\\ \qquad+\mu\displaystyle\int_0^t k(t-s)z(s)\,ds+\mu\int_0^t k(t-s)Du(s)\,ds\\ \qquad+(R_1+R_2)(\langle z(t),\varphi_1\rangle,\ldots,\langle z(t),\varphi_N\rangle)+f'(0)z+f'(0)Du\\ \qquad+\displaystyle\int_0^t k(t-s)R_2(\langle z(s),\varphi_1\rangle,\ldots,\langle z(s),\varphi_N\rangle)\,ds+\eta(t)\\ \qquad+f(z+Du(z))-f'(0)(z+Du(z)),\quad t>0,\ x\in\mathcal{O},\\ z(0,x)=z_o(x),\quad x\in\mathcal{O}.\end{cases} \quad (6.39)$$
We claim that
$$\|J_M(\xi)\|\leq C_M\|\xi\|^2_{H^1(\mathcal{O})},\quad\forall\xi\in H^1(\mathcal{O}), \quad (6.40)$$
2 "
" q
"2
"
"
C1 "
JM (ξ ) 2 ≤ " |ξ + Du(ξ )|αi + 1" ξ + Du(ξ ) 4L 6 (O )
"
2 "
i=1 L (O )
6
2 q
2
C1 αi
≤ |ξ + Du(ξ )| L 6 (O ) + 1 L 6 (O ) ξ + Du(ξ ) 4L 6 (O )
2 i=1
2 q
2
C1 αi
= ξ + Du(ξ ) L 6αi (O ) + σ (O) ξ + Du(ξ ) 4L 6 (O ) ,
2 i=1
where using the Sobolev embedding theorem (i.e., L p (O) → H 1 (O), ∀0 < p ≤ 6),
we obtain
2
q
2
C1
JM (ξ ) ≤2
ξ + Du(ξ ) αHi 1 (O ) + σ (O) ξ + Du(ξ ) 4H 1 (O ) . (6.41)
2 i=1
Since
$$\|\xi+Du(\xi)\|_{H^1(\mathcal{O})}\leq(1+C_D)\|\xi\|_{H^1(\mathcal{O})}, \quad (6.42)$$
we obtain
$$\|J_M(\xi)\|^2\leq\left(\frac{C_1(1+C_D)^2}{2}\right)^2\left(\sum_{i=1}^{q}(1+C_D)^{\alpha_i}M^{\alpha_i}+\sigma(\mathcal{O})\right)^2\|\xi\|^4_{H^1(\mathcal{O})}.$$
Hence taking $C_M=\frac{C_1(1+C_D)^2}{2}\left(\sum_{i=1}^{q}(1+C_D)^{\alpha_i}M^{\alpha_i}+\sigma(\mathcal{O})\right)$, we get (6.40), as
claimed. Likewise for $\|\xi\|_{H^1(\mathcal{O})}>M$, we have, as before, that
$$\left|f\left(\frac{M}{\|\xi\|_{H^1(\mathcal{O})}}(\xi+Du(\xi))\right)-f'(0)\frac{M}{\|\xi\|_{H^1(\mathcal{O})}}(\xi+Du(\xi))\right|\leq\frac{C_1}{2}\left(\sum_{i=1}^{q}\left|\frac{M}{\|\xi\|_{H^1(\mathcal{O})}}(\xi+Du(\xi))\right|^{\alpha_i}+1\right)\left|\frac{M}{\|\xi\|_{H^1(\mathcal{O})}}(\xi+Du(\xi))\right|^2,$$
so that
$$\|J_M(\xi)\|^2\leq\left(\frac{C_1}{2}\right)^2\left(\sum_{i=1}^{q}\left\|\frac{M}{\|\xi\|_{H^1(\mathcal{O})}}(\xi+Du(\xi))\right\|^{\alpha_i}_{H^1(\mathcal{O})}+\sigma(\mathcal{O})\right)^2\left\|\frac{M}{\|\xi\|_{H^1(\mathcal{O})}}(\xi+Du(\xi))\right\|^4_{H^1(\mathcal{O})}.$$
Since $\left\|\frac{M}{\|\xi\|_{H^1(\mathcal{O})}}(\xi+Du(\xi))\right\|_{H^1(\mathcal{O})}\leq(1+C_D)M$ (see (6.42)), we get
$$\|J_M(\xi)\|^2\leq\left(\frac{C_1}{2}\right)^2\left(\sum_{i=1}^{q}\left((1+C_D)M\right)^{\alpha_i}+\sigma(\mathcal{O})\right)^2(1+C_D)^4\|\xi\|^4_{H^1(\mathcal{O})}.$$
where u M = u(z M ).
Let us denote by $\{S(t):t\geq 0\}$ the semigroup generated by the evolution Eq. (6.25), guaranteed by Theorem 6.1, defined as follows: for each initial datum $z_o\in L^2(\mathcal{O})$, we denote by $S(t)z_o$, $t\geq 0$, the solution to (6.25). In the proof of Theorem 6.1, we have actually shown that the semigroup $\{S(t):t\geq 0\}$ is $L^2$-exponentially stable and satisfies
$$\int_0^{\infty}\|S(t)g\|^2_{H^1(\mathcal{O})}\,dt<c\left(\|g\|^2+\|y_o\|^2_{L^2(-\infty,0;H^2(\mathcal{O}))}\right),\quad\forall g\in D(L).$$
Since all the hypotheses from [19] are satisfied in the present case, we may apply
the same fixed-point argument for Λ on S(0, r M ) as in the proof of [19, Theo-
rem 5.1], in order to deduce that for each M > 0, there exist r M > 0 and ρ M > 0
The details are omitted. Here $C,\gamma:\mathbb{R}_+\times\mathbb{R}_+\to\mathbb{R}_+^*$ are continuous functions depending only on $M$ and $r_M$.
To conclude the proof, it remains to show that there exists $C>0$, independent of $M$, such that
$$\|z_M(t)\|_{H^1(\mathcal{O})}\leq C,\quad\forall t\geq 0, \quad (6.46)$$
which will immediately lead to the conclusion that the exponentially decaying $z_M$ is, in fact, a solution to the system (6.39). Then, recalling that $y=z+Du$, the result stated in the theorem follows.
To show relation (6.46), we scalar multiply Eq. (6.44) by $Az_M$ and use (6.3), (6.45), and relations (6.7), (6.40). We deduce that
$$\|z_M(t)\|^2_{H^1(\mathcal{O})}\leq C_1(M,r_M)\rho_M+C_M\int_0^t\|z_M(\tau)\|^4_{H^1(\mathcal{O})}\,d\tau,\quad t\geq 0.$$
Remark 6.3 It should be noted that the same stabilizing method developed here may also be applied to the following type of heat equation with memory:
$$\begin{cases}\partial_t y(t,x)=\Delta y(t,x)+\displaystyle\int_{-\infty}^{t}k_1(t-s)\Delta y(s,x)\,ds+\mu\int_{-\infty}^{t}k_2(t-s)y(s,x)\,ds\\ \qquad\qquad\quad+f(y(t,x)),\quad (t,x)\in Q,\\ y(t,x)=u(t,x)\ \text{ on }\Sigma_1,\quad \dfrac{\partial}{\partial n}y=0\ \text{ on }\Sigma_2,\quad y(t,x)=y_o(t,x),\ (t,x)\in(-\infty,0]\times\mathcal{O},\end{cases} \quad (6.47)$$
with $k_1,k_2$ two different positive kernels satisfying hypothesis (k). A result similar to Theorem 6.2 can be obtained in this case. The details are omitted.
Remark 6.4 If there exists a constant $a\in\mathbb{R}$, $a\neq 0$, such that $f(a)=0$, then results similar to those in Theorem 6.2, concerning the local stabilization of the steady-state solution $a$ of the nonlinear system (6.1) with $\mu=0$, can be obtained by following the algorithm developed above. Indeed, setting $y:=y-a$, we reduce the problem to the null stabilization of the equivalent system
$$\begin{cases}\partial_t y(t,x)=\Delta y(t,x)+\displaystyle\int_{-\infty}^{t}k(t-s)\Delta y(s,x)\,ds+\tilde f(y(t,x)),\quad (t,x)\in Q:=(0,\infty)\times\mathcal{O},\\ y(t,x)=u(t,x)\ \text{ on }\Sigma_1:=(0,\infty)\times\Gamma_1,\quad \dfrac{\partial}{\partial n}y=0\ \text{ on }\Sigma_2:=(0,\infty)\times\Gamma_2,\\ y(t,x)=y_o(t,x),\quad (t,x)\in(-\infty,0]\times\mathcal{O},\end{cases} \quad (6.48)$$
where $\tilde f(y):=f(y+a)$, $y\in\mathbb{R}$, satisfies assumptions $(f_1)$, $(f_2)$ similar to those satisfied by $f$. Then it is clear that the algorithm can be applied. The details are omitted.
6.4 Comments
The model (6.1) was introduced in [66], and it describes the heat flow in a rigid
isotropic homogeneous heat conductor with memory. It is derived in the framework
of the theory of heat flows with memory established in [48]. Moreover, a system of
first-order hyperbolic PDEs can be transformed to a system described by retarded
functional differential equations like (6.1) (for details, see [71]). These equations
serve as a model for physical phenomena such as traffic flows, chemical reactors,
and heat exchangers.
Similar equations have been considered in different papers, but the problem of the
behavior of solutions and stability was directly addressed in [46, 62]. There, the main
ingredient used is the so-called history space setting, which consists in considering
some past history variables as additional components of the phase space correspond-
ing to the equation under study (this idea is due to Dafermos [51]), whereas concern-
ing the first-order hyperbolic equations, the backstepping method is implemented by
Krstic et al. [76].
The boundary stabilization problem associated with (6.1) with k ≡ 0 was studied
in Chap. 2. When the model incorporates memory terms, this problem is far from
being solved and well understood. The character of Eq. (6.1) is determined by the
nature of the kernel k, and in some situations, this equation might be of hyperbolic
126 6 Stabilization of Equations with Delays
type (that is, with finite speed of propagation). This is the case, for instance, if $k(t)=e^{-\varepsilon t}$. By virtue of relation (6.3), we clearly see the hyperbolic nature of Eq. (6.1). On the other hand, there are kernels $k$ for which the equation is of parabolic type, namely
$$k(t)=a_0t^{-\varepsilon},\quad a_0>0,\ 0<\varepsilon<1.$$
But we see that such a kernel cannot satisfy hypothesis (6.2). In the parabolic case,
the controllability problem is solved by relying on similar arguments to those in the
free memory case; see, for instance, the work of Barbu and Iannelli [26] or Pandolfi
[109]. Since our stabilization method requires an exponential decay of the kernel (see hypothesis (k)), it is clear that the parabolic case is left out, while the hyperbolic
case can be treated similarly to the free memory case. This is an interesting difference
between the stabilization and controllability problem associated with equations with
memory.
The results presented in this chapter are new, and are based on those obtained in
Munteanu [101]. While in [101] an additional hypothesis of linear independence of
the traces of the normal derivatives of the eigenfunctions on Γ1 is imposed, here we
drop it by using the control design in Chap. 2. Other results concerning the Navier–
Stokes equations with memory were obtained in the author’s work [100].
For more results on the controllability problem associated with (6.1), see [18],
for example, and for the optimal control problem, see [40], for instance. For more
details about heat equations with memory, one may consult the book [3]. Finally, we
call the reader’s attention to the result on the present subject in [64], as well as [15],
concerning the stochastic version of the problem.
Chapter 7
Stabilization of Stochastic Equations
This section answers the following question: if the stabilizing feedback designed in Chap. 2 is perturbed by a noise, will it still ensure the stability of the system? This situation arises directly in practice. More precisely, measuring instruments may malfunction, and therefore the accuracy of the collected data may be randomly affected. Thus, in order to have a more realistic model, it makes sense to add a noise perturbation to the controller. We confine ourselves to the one-dimensional case, with Neumann boundary conditions in which the derivative of the unknown is equal to the sum of the control and a white noise in time. For higher dimensions it is not even known whether this problem is well posed.
The governing equations are
$$\begin{cases}\partial_t Y(t,x)=Y_{xx}(t,x)+f(x,Y(t,x)),\quad t>0,\ x\in(0,L),\\ Y_x(t,0)=u(t)+e^{-\delta t}\dot\beta(t),\quad Y_x(t,L)=0,\quad t>0,\\ Y(0,x)=Y_o(x),\quad x\in(0,L).\end{cases} \quad (7.1)$$
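A finite-difference picture of (7.1) can be simulated directly: explicit Euler in time on a grid, with the Neumann relations discretized to first order; the flux at $x=0$ carries the control plus the damped white-noise term $e^{-\delta t}\dot\beta$. The nonlinearity $f$ and the feedback $u(t)$ below are illustrative stand-ins ($f(x,y)=y-y^3$ and a simple proportional law), not the feedback constructed in this chapter.

```python
import numpy as np

rng = np.random.default_rng(1)
L_len, n, T, dt = 1.0, 50, 0.2, 1e-4
dx = L_len / n
delta = 1.0
x = np.linspace(0.0, L_len, n + 1)
Y = np.cos(np.pi * x)                      # initial datum (illustrative)

def f(y):                                  # stand-in nonlinearity
    return y - y ** 3

t = 0.0
for _ in range(int(T / dt)):
    u = -5.0 * Y[0]                        # hypothetical proportional boundary feedback
    noise = np.exp(-delta * t) * rng.standard_normal() / np.sqrt(dt)  # e^{-dt} beta'(t)
    lap = np.zeros_like(Y)
    lap[1:-1] = (Y[2:] - 2 * Y[1:-1] + Y[:-2]) / dx ** 2
    Y[1:-1] += dt * (lap[1:-1] + f(Y[1:-1]))
    # boundary relations: Y_x(t,0) = u + e^{-delta t} beta'(t),  Y_x(t,L) = 0
    Y[0] = Y[1] - dx * (u + noise)
    Y[-1] = Y[-2]
    t += dt
```

Note that the scheme respects the parabolic stability constraint ($dt/dx^2 = 0.25 \le 1/2$); the $1/\sqrt{dt}$ scaling of the noise is the usual Euler–Maruyama approximation of white noise in time.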
It is easy to check that $A$ is self-adjoint in $L^2(0,L)$ and satisfies assumptions (A1)–(A4) from Chap. 2. Hence $-A$ has a countable set of real eigenvalues, denoted by $\{\lambda_j\}_{j=1}^{\infty}$, with the corresponding eigenfunctions denoted by $\{\varphi_j\}_{j=1}^{\infty}$, that is,
$$-A\varphi_j=\lambda_j\varphi_j,\quad j=1,2,3,\ldots.$$
7.1 Robustness in the Presence of Noise Perturbation of the Boundary Feedback
The system $\{\varphi_j\}_{j\in\mathbb{N}^*}$ may be chosen to be orthonormal. Besides this, given $\rho>0$, there exists $N\in\mathbb{N}$ such that
$$2\rho-\delta<0.$$
In order to simplify our presentation, we will assume further that the first $N$ eigenvalues are distinct. The general case can also be considered and treated as in Chap. 2, Sect. 2.2.2.
Recalling the notation in (2.20)–(2.26), we introduce the feedback law
$$v(t):=\left\langle\Lambda_SA\begin{pmatrix}\langle Z(t),\varphi_1\rangle\\ \langle Z(t),\varphi_2\rangle\\ \vdots\\ \langle Z(t),\varphi_N\rangle\end{pmatrix},\begin{pmatrix}\varphi_1(0)\\ \varphi_2(0)\\ \vdots\\ \varphi_N(0)\end{pmatrix}\right\rangle_N. \quad (7.5)$$
$$X:\Omega\times(0,T)\to H$$
such that
$$\mathbb{E}\int_0^T\|X(t)\|_H^2\,dt<\infty,$$
Proof In order to lift the boundary control into the equations, we introduce as in (2.16) the Neumann operators $D_{\gamma_k}$. The noise is lifted as well via the map $D=D(x)$, $x\in(0,L)$, which is the solution to the equation
$$\begin{cases}-D_{xx}(x)-f_y(x,\hat y)D(x)+\gamma D(x)=0,\quad x\in(0,L),\\ D_x(0)=1,\quad D_x(L)=0,\end{cases} \quad (7.10)$$
for some sufficiently large γ > 0. Then arguing as in (2.27)–(2.29), it follows that
Eq. (7.9) may be equivalently rewritten as
$$\begin{cases}dZ(t)=\left(AZ(t)-2\displaystyle\sum_{j=1}^{N}\lambda_j\left\langle\sum_{k=1}^{N}D_{\gamma_k}v_k(Z(t)),\varphi_j\right\rangle\varphi_j(x)\right.\\ \qquad\qquad\left.+\displaystyle\sum_{k=1}^{N}(\gamma_k-A)D_{\gamma_k}(x)v_k(Z(t))\right)dt+(\gamma-A)De^{-\delta t}\,d\beta,\quad t>0,\\ Z(0)=Z_o.\end{cases} \quad (7.11)$$
Equation (7.11) is formal. The precise meaning of the state equation is as follows: we say that a continuous $L^2(0,L)$-predictable process $Z$ is a solution to the state equation if, $\mathbb{P}$-a.s.,
$$\begin{aligned}Z(t)&=e^{tA}Z_o-2\sum_{j=1}^{N}\int_0^t e^{(t-\tau)A}\lambda_j\left\langle\sum_{k=1}^{N}D_{\gamma_k}v_k(Z(\tau)),\varphi_j\right\rangle\varphi_j(x)\,d\tau\\ &\quad+\sum_{k=1}^{N}\int_0^t e^{(t-\tau)A}(\gamma_k-A)D_{\gamma_k}(x)v_k(Z(\tau))\,d\tau\\ &\quad+\int_0^t e^{(t-\tau)A}(\gamma-A)De^{-\delta\tau}\,d\beta(\tau).\end{aligned} \quad (7.12)$$
Here the integral arising on the right-hand side of (7.12) is taken in the sense of Itô with values in $H^{-1}(0,L)$. (We refer to [52, Proposition 2.4] or [113] for the existence and uniqueness of such a solution.)
We continue with the argument in Chap. 2 by projecting the system on $X_u:=\mathrm{lin\,span}\{\varphi_j\}_{j=1}^{N}$ and $X_s:=\mathrm{lin\,span}\{\varphi_j\}_{j=N+1}^{\infty}$ (see (2.32) and (2.33)).
The so-called unstable part in this case reads as (see as well (2.39))
$$\begin{cases}dZ=\left(-\gamma_1Z+\displaystyle\sum_{k=2}^{N}(\gamma_1-\gamma_k)B_kAZ\right)dt+\Phi e^{-\delta t}\,d\beta,\quad t>0,\\ Z(0)=Z_o.\end{cases} \quad (7.13)$$
132 7 Stabilization of Stochastic Equations
Here
$$\Phi:=\begin{pmatrix}-\varphi_1(0)\\ -\varphi_2(0)\\ \vdots\\ -\varphi_N(0)\end{pmatrix}.$$
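The unstable part (7.13) is a plain $N$-dimensional linear SDE with additive, exponentially damped noise, so its decay can be observed directly with an Euler–Maruyama loop. The matrices below are illustrative stand-ins satisfying the structural assumptions of the text ($B_k$ positive semidefinite, $\gamma_1<\cdots<\gamma_N$ with $\gamma_1>\delta$, $A$ symmetric positive definite).

```python
import numpy as np

rng = np.random.default_rng(2)
N = 3
gamma = np.array([2.0, 3.0, 4.0])              # increasing gains, gamma_1 > delta below
delta = 1.0

# Positive semidefinite stand-ins for B_k and the resulting SPD matrix A.
Bs = []
for _ in range(N):
    M = rng.standard_normal((N, N))
    Bs.append(M @ M.T + 0.1 * np.eye(N))       # regularized to keep A well conditioned
A = np.linalg.inv(sum(Bs))

drift = -gamma[0] * np.eye(N) + sum((gamma[0] - gamma[k]) * Bs[k] @ A for k in range(1, N))
Phi = rng.standard_normal(N)

dt, T = 1e-3, 8.0
Z = np.array([1.0, -0.5, 0.25])
t = 0.0
for _ in range(int(T / dt)):
    dB = np.sqrt(dt) * rng.standard_normal()   # Brownian increment
    Z = Z + dt * (drift @ Z) + Phi * np.exp(-delta * t) * dB
    t += dt
```

In the $\langle A\cdot,\cdot\rangle$ metric the deterministic part contracts at rate $2\gamma_1$ (the $B_kA$ terms are dissipative there), so $Z(T)$ ends up at the noise floor, mirroring the pathwise decay proved in the text.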
Applying Itô's formula to $e^{\delta t}\left\|A^{\frac12}Z(t)\right\|_N^2$ in (7.13), we obtain
$$\begin{aligned}e^{\delta t}\left\|A^{\frac12}Z(t)\right\|_N^2&=\left\|A^{\frac12}Z_o\right\|_N^2+\int_0^t(\delta-2\gamma_1)e^{\delta s}\left\|A^{\frac12}Z\right\|_N^2\,ds\\ &\quad+2\int_0^t e^{\delta s}\sum_{k=2}^{N}(\gamma_1-\gamma_k)\langle B_kAZ,AZ\rangle_N\,ds\\ &\quad+\int_0^t e^{-\delta s}\langle A\Phi,\Phi\rangle_N\,ds+2\int_0^t\langle A\Phi,Z\rangle_N\,d\beta(s).\end{aligned} \quad (7.14)$$
We see that (recall the positive semidefiniteness of the matrices $B_k$ and the fact that the sequence $(\gamma_k)_{k=1,\ldots,N}$ is increasing)
$$2e^{\delta s}\sum_{k=2}^{N}(\gamma_1-\gamma_k)\langle B_kAZ,AZ\rangle_N\leq 0,\quad s\geq 0.$$
Also, recall that $\gamma_1$ was taken such that $\gamma_1>\delta$, which implies that $\delta-2\gamma_1<0$. Finally, notice that since $A$ is positive definite, we have $\langle A\Phi,\Phi\rangle\geq 0$. Hence taking the expectation in (7.14) yields
$$\mathbb{E}\left(e^{\delta t}\left\|A^{\frac12}Z\right\|_N^2\right)\leq\left\|A^{\frac12}Z_o\right\|_N^2+\frac{1}{\delta}\langle A\Phi,\Phi\rangle_N<\infty,\quad\forall t\geq 0. \quad (7.15)$$
Now let us define
$$I_1(t):=\langle A\Phi,\Phi\rangle_N\int_0^t e^{-\delta s}\,ds,\qquad M(t):=2\int_0^t\langle A\Phi,Z\rangle_N\,d\beta,$$
$$I(t):=-\int_0^t\left((\delta-2\gamma_1)e^{\delta s}\left\|A^{\frac12}Z\right\|_N^2+2e^{\delta s}\sum_{k=2}^{N}(\gamma_1-\gamma_k)\langle B_kAZ,AZ\rangle_N\right)ds.$$
Taking into account that $M$ is a local martingale and that $I,I_1$ are nondecreasing, adapted processes of finite variation, we conclude that
$\lim_{t\to\infty}e^{\delta t}\left\|A^{\frac12}Z(t)\right\|_N^2$ exists and is finite, $\mathbb{P}$-a.s. Using that $A^{\frac12}$ is an invertible positive definite symmetric matrix, it follows that $\lim_{t\to\infty}e^{\delta t}\|Z(t)\|_N^2<\infty$, $\mathbb{P}$-a.s. This implies that the same holds for $z_u$, since
$$\|z_u(t)\|^2_{L^2(0,L)}=\sum_{j=1}^{N}|\langle Z(t),\varphi_j\rangle|^2=\|Z(t)\|_N^2.$$
Concerning the stable part, since the spectrum of the operator $A_s$ consists of $\{-\lambda_j\}_{j=N+1}^{\infty}$, with $-\lambda_j<-\rho$, $j\geq N+1$,
$$-\frac12\langle Qz,A_sz+\rho z\rangle=\|z\|^2_{X_s},\quad\forall z\in X_s. \quad (7.17)$$
The stable part of the system can be written as
$$\begin{cases}dz_s(t)=\left(A_sz_s(t)+F(v(t))\right)dt+e^{-\delta t}G(x)\,d\beta,\\ z_s(0)=z_{so},\end{cases} \quad (7.18)$$
where
$$F(v(t)):=\sum_{k=1}^{N}(\gamma_k-A_s)D_{\gamma_k}(x)v_k(t)\quad\text{and}\quad G(x):=(\gamma-A_s)D(x).$$
We point out that by $A_s$ we understand, in fact, its extension $\tilde A_s$ (to recall this, see (2.28), while for the extension operator, see Chap. 1). So
$$\sum_{k=1}^{N}(\gamma_k-A_s)D_{\gamma_k}\in\left(D(A_s)\right)'.$$
By (7.16) and the definition of $v_k$ in (7.6), we easily see from the definition of $F$ that
$$\left|\langle g,F(v(t))\rangle\right|\leq C_1e^{-\delta t}\|g\|_{L^2}\sum_{k=1}^{N}\left\|(\gamma_k-A_s)D_{\gamma_k}\right\|_{(D(A_s))'}\leq C_2e^{-\frac12\delta t}\|g\|_{L^2},\quad t\geq 0,\ \mathbb{P}\text{-a.s.}, \quad (7.19)$$
$\forall t\geq 0$, for some positive constant $C$. To this end, taking the expectation in (7.20) and recalling that $2\rho-\delta<0$, we deduce that
$$\mathbb{E}\left(e^{2\rho t}\left\|Q^{\frac12}z_s\right\|^2_{L^2(0,L)}\right)\leq C+2\,\mathbb{E}\int_0^t e^{2\rho\tau}\left\langle Q^{\frac12}z_s,Q^{\frac12}F(v(\tau))\right\rangle d\tau, \quad (7.22)$$
$\forall t\geq 0$, where
$$C=\left\|Q^{\frac12}z_{s0}\right\|^2_{L^2(0,L)}+\frac{1}{2(\delta-\rho)}\left\|Q^{\frac12}G\right\|^2_{L^2(0,L)}.$$
This implies, via the Schwarz inequality, the stochastic Fubini theorem, and the estimate (7.19), that
$$\begin{aligned}\mathbb{E}\left(e^{2\rho t}\left\|Q^{\frac12}z_s\right\|^2_{L^2(0,L)}\right)&\leq C+2C\int_0^t e^{2\rho\tau}\,\mathbb{E}\left(\left\|Q^{\frac12}z_s\right\|_{L^2(0,L)}\right)e^{-\frac12\delta\tau}\,d\tau\\ &\leq C+\rho C\,\mathbb{E}\int_0^t e^{2\rho\tau}\left\|Q^{\frac12}z_s\right\|^2_{L^2(0,L)}d\tau+C\int_0^t e^{(2\rho-\delta)\tau}\,d\tau\\ &\quad(\text{recall that }2\rho-\delta<0)\\ &\leq C+C\rho\,\mathbb{E}\int_0^t e^{2\rho\tau}\left\|Q^{\frac12}z_s\right\|^2_{L^2(0,L)}d\tau.\end{aligned}$$
Then via Grönwall's inequality and the fact that $Q^{\frac12}$ is a symmetric positive definite operator, (7.21) follows immediately.
Next, from (7.20), we have
$$\begin{aligned}&\mathbb{P}\left(\sup_{t\in[p,p+1]}\left\|Q^{\frac12}z_s(t)\right\|^2_{L^2(0,L)}\geq\varepsilon_p\right)\\ &\leq\mathbb{P}\left(\left\|Q^{\frac12}z_s(p)\right\|^2_{L^2(0,L)}\geq\frac{\varepsilon_p}{5}\right)\\ &\quad+\mathbb{P}\left(\int_p^{p+1}\left(2\rho\left\|Q^{\frac12}z_s\right\|^2_{L^2(0,L)}+\|z_s\|^2_{L^2(0,L)}\right)d\tau\geq\frac{\varepsilon_p}{5}\right)\\ &\quad+\mathbb{P}\left(2\sup_{t\in[p,p+1]}\left|\int_p^t\left\langle Q^{\frac12}\left(Q^{\frac12}z_s\right),F(v(\tau))\right\rangle d\tau\right|\geq\frac{\varepsilon_p}{5}\right)\\ &\quad+\mathbb{P}\left(\sup_{t\in[p,p+1]}\left|\int_p^t e^{-2\delta\tau}\langle QG,G\rangle\,d\tau\right|\geq\frac{\varepsilon_p}{5}\right)\\ &\quad+\mathbb{P}\left(\sup_{t\in[p,p+1]}\left|\int_p^t e^{-\delta\tau}\left\langle Q^{\frac12}z_s,Q^{\frac12}G\right\rangle d\beta\right|\geq\frac{\varepsilon_p}{5}\right)\\ &\quad(\text{using the Chebyshev inequality and the Burkholder–Davis–Gundy inequality})\\ &\leq\frac{5}{\varepsilon_p}\mathbb{E}\left\|Q^{\frac12}z_s(p)\right\|^2_{L^2(0,L)}+\frac{5}{\varepsilon_p}\mathbb{E}\int_p^{p+1}\left(2\rho\left\|Q^{\frac12}z_s\right\|^2_{L^2(0,L)}+\|z_s\|^2_{L^2(0,L)}\right)d\tau\\ &\quad+\frac{10}{\varepsilon_p}\int_p^{p+1}\mathbb{E}\left|\left\langle Q^{\frac12}\left(Q^{\frac12}z_s\right),F(v(\tau))\right\rangle\right|d\tau+\frac{5}{\varepsilon_p}\int_p^{p+1}\mathbb{E}\left(e^{-2\delta\tau}|\langle QG,G\rangle|\right)d\tau\\ &\quad+\frac{5}{\varepsilon_p}\mathbb{E}\int_p^{p+1}e^{-2\delta\tau}\left|\left\langle Q^{\frac12}z_s,Q^{\frac12}G\right\rangle\right|^2d\tau\\ &\quad(\text{making use of the estimates (7.19) and (7.21)})\\ &\leq\frac{1}{\varepsilon_p}Ce^{-\rho p}.\end{aligned}$$
Hence, taking $\varepsilon_p:=e^{-\frac12\rho p}$, we obtain
$$\mathbb{P}\left(\sup_{t\in[p,p+1]}\left\|Q^{\frac12}z_s(t)\right\|^2_{L^2(0,L)}\geq e^{-\frac12\rho p}\right)\leq Ce^{-\frac12\rho p},\quad\forall p\in\mathbb{N}^*. \quad (7.23)$$
The Borel–Cantelli lemma now implies that there exists $p(\omega)$ such that if $p>p(\omega)$, then
$$\sup_{t\in[p,p+1]}\left\|Q^{\frac12}z_s(t)\right\|^2_{L^2(0,L)}\leq Ce^{-\frac12\rho p},$$
Recalling that z = z u + z s and invoking (7.16) and (7.24), we are led to the conclusion
of the theorem, thereby completing the proof.
Theorem 7.3 Under assumptions (i) and (ii), the closed-loop equation
$$\begin{cases}\partial_t Z(t,x)=Z_{xx}(t,x)+f(x,Z(t,x)+\hat Y(x))-f(x,\hat Y(x)),\quad t>0,\ x\in(0,L),\\ Z_x(t,0)=\left\langle\Lambda_SA\begin{pmatrix}\langle Z(t),\varphi_1\rangle\\ \langle Z(t),\varphi_2\rangle\\ \vdots\\ \langle Z(t),\varphi_N\rangle\end{pmatrix},\begin{pmatrix}\varphi_1(0)\\ \varphi_2(0)\\ \vdots\\ \varphi_N(0)\end{pmatrix}\right\rangle_N+e^{-\delta t}\dot\beta(t),\\ Z_x(t,L)=0,\quad t\geq 0,\\ Z(0,x)=Z_o(x),\quad x\in(0,L),\end{cases} \quad (7.25)$$
has a unique solution $Z\in C_P([0,T];L^2(0,L))$, which satisfies
$$\lim_{t\to\infty}e^{\frac{\rho}{2}t}\|Z(t)\|^2_{L^2(0,L)}<\infty,\quad\mathbb{P}\text{-a.s.},$$
To end this section, returning to the initial variable y, Theorem 7.3 immediately
implies Theorem 7.1.
7.2 Stabilization of the Stochastic Heat Equation on a Rod
The subject of this section is the heat equation on $(0,L)$, $L>0$, perturbed by an internal multiplicative noise, i.e.,
$$\begin{cases}\partial_t Y(t,x)=Y_{xx}(t,x)+\lambda\sigma(x,Y(t,x))\dot\beta(t,x),\quad 0<x<L,\ t>0,\\ Y_x(t,0)=u(t),\quad Y_x(t,L)=0,\quad t\geq 0,\\ Y(0,x)=Y_o(x),\quad x\in[0,L],\end{cases} \quad (7.26)$$
where $\lambda$ is a positive number, usually referred to as the level of the noise, and $u$ is the boundary control.
It was shown by Foondun and Nualart in [56] that in the absence of a control, no matter how small (or large) $\lambda$ is, the corresponding solution $Y$ to (7.26) is exponentially unstable in expectation. More precisely, it is shown in [56, Theorem 1.5] that the solution $Y$ to (7.26) without the boundary control (i.e., $u\equiv 0$) satisfies
$$0<\liminf_{t\to\infty}\frac{1}{t}\log\mathbb{E}|Y(t,x)|^2<\infty,\quad\forall x\in(0,L).$$
Hence it makes sense to search for a stabilizing feedback $u$ for (7.26), in the sense that once inserted into the equation, the corresponding solution of the closed-loop equation satisfies
$$\liminf_{t\to\infty}\frac{1}{t}\log\mathbb{E}|Y(t,x)|^2<-\gamma,\quad\forall x\in(0,L),$$
for some $\gamma>0$.
Our goal is to show that a feedback similar to the one described in Chap. 2 ensures that the corresponding solution of the closed-loop equation (7.26) goes exponentially fast to zero in a certain sense (see Theorem 7.4 and relation (7.35) below). Since the stochastic forcing is of multiplicative type, one may guess that the method presented in Chap. 2, which was successfully applied in the previous section (in the case of additive noise), may fail to work now. Indeed, the spectral decomposition method is useless due to the presence of the nonlinearity $\sigma(Y)d\beta$, unless it is treated separately. That is why, in comparison with the previous section, we change the approach in the following way: we consider separately the linear equation and show that after we lift the control, the resulting linear operator generates a $C_0$-semigroup that can be expressed in a mild formulation, via a kernel. Then, returning to the full nonlinear equation, we write its solution in an integral formulation, a fact that allows one to obtain the desired exponential decay. All our effort will be concentrated on showing that the kernel has "good properties"; see Lemma 7.1 below.
In any case, let us further explain the approach we will follow. Say that we are dealing with the following heat equation:
$$\begin{cases}\partial_t y(t,x)-y_{xx}(t,x)-ay(t,x)=0\ \text{ in }(0,\infty)\times(0,L),\\ y(t,0)=y(t,L)=0,\ \forall t\geq 0,\quad y(0)=y_o,\end{cases} \quad (7.28)$$
where a and L are some positive constants. Let us denote by {λ_j}_{j=1}^∞ and by {ϕ_j}_{j=1}^∞ the system of eigenvalues and the system of eigenfunctions that diagonalizes the Dirichlet Laplacian in L²(0, L), respectively. It is known that they are given by

7 Stabilization of Stochastic Equations

λ_j = (jπ/L)²  and  ϕ_j(x) = (2/L)^{1/2} sin(jπx/L),  j = 1, 2, . . . .
Then, denoting by

p(t, x, ξ) := Σ_{j=1}^∞ e^{−λ_j t} ϕ_j(x)ϕ_j(ξ), t ≥ 0, x, ξ ∈ (0, L), (7.29)

the Dirichlet heat kernel, it is well known that

p(t, x, ξ) ≤ p_G(t, x, ξ), ∀t > 0, x, ξ ∈ (0, L), (7.30)

where

p_G(t, x, ξ) := (1/√(4πt)) e^{−|x−ξ|²/(4t)}

is the Gaussian kernel. Next, for t₀ > 0 fixed, there is a constant c > 0 such that

p(t, x, x) ≤ c e^{−λ₁ t}, ∀t ≥ t₀, x ∈ (0, L). (7.31)
It follows that there exists some constant c₁ > 0 such that for all η ∈ (0, λ₁), we have

∫₀^∞ e^{ηt} p(t, x, x) dt ≤ c₁ ( 1/√η + 1/(λ₁ − η) ). (7.32)

This is indeed so. We use the bounds (7.30) and (7.31). We therefore write

∫₀^∞ e^{ηt} p(t, x, x) dt = ∫₀^{t₀} e^{ηt} p(t, x, x) dt + ∫_{t₀}^∞ e^{ηt} p(t, x, x) dt =: I₁ + I₂.
Let us first consider the case a = 0. We can represent the corresponding solution to (7.28) as

y(t, x) = ∫₀^L p(t, x, ξ) y_o(ξ) dξ, (7.33)

so that, by Schwarz's inequality and (7.32),

∫₀^∞ e^{ηt} |y(t, x)|² dt ≤ C ‖y_o‖², ∀x ∈ (0, L),

for some positive constant C. It follows by the semigroup property that the solution y is exponentially decaying in the L²-norm ‖·‖.
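The exponential L²-decay for a = 0 can also be observed numerically. The sketch below is purely illustrative (L = 1, an explicit finite-difference scheme, ad hoc parameters): it checks that the measured decay rate of ‖y(t)‖ is close to λ₁ = π².

```python
import math

# explicit finite differences for y_t = y_xx on (0,1), y(t,0)=y(t,1)=0
L = 1.0
M = 100                  # number of interior grid points
h = L / (M + 1)
dt = 0.25 * h * h        # stable for the explicit scheme (dt <= h^2/2)

# initial datum mixing two modes; the slowest mode dominates eventually
y = [math.sin(math.pi * (i + 1) * h) + 0.5 * math.sin(3 * math.pi * (i + 1) * h)
     for i in range(M)]

def step(y):
    z = y[:]
    for i in range(M):
        left = y[i - 1] if i > 0 else 0.0
        right = y[i + 1] if i < M - 1 else 0.0
        z[i] = y[i] + dt / h ** 2 * (left - 2 * y[i] + right)
    return z

def norm(y):
    return math.sqrt(h * sum(v * v for v in y))

n0 = norm(y)
T = 0.5
steps = int(T / dt)
for _ in range(steps):
    y = step(y)
rate = -math.log(norm(y) / n0) / (steps * dt)
lam1 = math.pi ** 2      # first Dirichlet eigenvalue for L = 1
print(rate, lam1)
```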
Now consider the case a ≠ 0 such that λ₁ − a < 0. Set μ_j := λ_j − a, for j = 1, 2, . . ., and

p̃(t, x, ξ) := Σ_{j=1}^∞ e^{−μ_j t} ϕ_j(x)ϕ_j(ξ).
It is clear that in trying to show an L 2 −norm exponential decay for this y, by following
the above ideas, an estimate like (7.32) fails to hold for the kernel p̃, because the
eigenvalue μ1 = λ1 − a is negative, and so (7.31) can no longer be obtained.
In order to achieve the stability in (7.28), a boundary feedback control may be
inserted. To achieve this goal, we look for a special feedback law that allows us
to write the corresponding solution of the closed-loop equation in an integral form
similar to (7.33). It is clear that in order for us to be able to do this, the feedback law
must be explicitly given in a simple form. It turns out that the proportional feedback
law designed in Chap. 2 is able to do this job.
−∞ < lim sup_{t→∞} (1/t) log E|Y(t, x)|² < −ρ,  ∀x ∈ (0, L). (7.35)
μ_j = ((j − 1)π/L)², j = 1, 2, . . . ,

with the corresponding eigenfunctions

7.2 Stabilization of the Stochastic Heat Equation on a Rod

ϕ₁(x) = √(1/L),  and  ϕ_j(x) = √(2/L) cos((j − 1)πx/L),  j = 2, 3, . . . .
γ_k := μ_N + k/N + N^α, k = 1, 2, . . . , N, (7.37)

with 7/4 < α < 2. Note that we have

lim_{N→∞} γ_k/N² = π²/L²  and  lim_{N→∞} (γ_k − μ_N)/N^α = 1,

and

γ_k − μ_N, k = 1, 2, . . . , N, are of order O(N^α). (7.39)
This time, the counterpart of the Gram matrix B introduced in (2.20) is given by

B := (1/L) ⎛ 1  √2 . . . √2 ⎞
           ⎜ √2  2 . . .  2 ⎟  (7.40)
           ⎝ √2  2 . . .  2 ⎠
And the corresponding form of the feedback law (2.26), in the present case, reads as follows: for each k = 1, . . . , N, we set

u_k(y) := ⟨ A ( ⟨y, ϕ₁⟩, ⟨y, ϕ₂⟩, . . . , ⟨y, ϕ_N⟩ )ᵀ , ( ϕ₁(0)/(γ_k − μ₁), ϕ₂(0)/(γ_k − μ₂), . . . , ϕ_N(0)/(γ_k − μ_N) )ᵀ ⟩_N, (7.41)
then introduce u as
u(y) := u 1 (y) + · · · + u N (y). (7.42)
where b₁₁ = 1, b_{1j} = b_{j1} = √2 for j = 2, . . . , N, and b_ij = 2 otherwise; that is, the matrices B_k from (7.43) have entries (B_k)_ij = b_ij/(L(γ_k − μ_i)(γ_k − μ_j)). We want to estimate the magnitude
of u k , k = 1, 2, . . . , N . This reduces to estimating the first and last eigenvalues of
the matrix A, or equivalently, by the definition of A, to estimating the last and first
eigenvalues of the sum matrix B1 + · · · + B N . Let us denote by r1 , . . . , r N (arranged
as an increasing sequence) the positive eigenvalues of the latter matrix. By virtue of
(7.43), we have that
r₁ + · · · + r_N = (1/L) Σ_{i,k=1}^N b_ii/(γ_k − μ_i)².

Taking into account (7.39) and the growth of the eigenvalues μ_i, this yields

C₁/N² ≤ r₁ + r₂ + · · · + r_N ≤ C₂/N^{2α−2}, (7.44)
for some positive constants C1 , C2 , independent of N (but for N large enough).
Next, by the Gershgorin circle theorem, we know that the eigenvalues of the matrix Σ_{k=1}^N B_k cannot be far from its diagonal entries. More precisely, we know that there exists some j ∈ {1, 2, . . . , N} such that

| r_N − Σ_{k=1}^N √2/(γ_k − μ_j)² | ≤ Σ_{k,l=1}^N √2/((γ_k − μ_j)(γ_k − μ_l)).

It follows that

r_i ≤ C₃/N^{2α−1}, i = 1, 2, . . . , N, (7.45)

for some constant C₃ > 0, independent of N.
Finally, taking α very close to 2 as necessary, we see by (7.44) and (7.45) that
necessarily,
r_i ≥ C₄/N³, i = 1, 2, . . . , N, (7.46)

for some positive constant C₄, independent of N. Hence we conclude that the orders of r₁, . . . , r_N lie between O(1/N³) and O(1/N^{2α−1}).

Recalling that A = (B₁ + B₂ + · · · + B_N)^{−1}, and denoting by λ₁(A) and λ_N(A) the first and last eigenvalues of the matrix A, respectively, we get that

λ₁(A) and λ_N(A) = ‖A‖ have order between O(N^{2α−1}) and O(N³). (7.47)
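The trace identity for r₁ + · · · + r_N can be checked numerically. The sketch below is illustrative: it assumes the entry formula (B_k)_ij = b_ij/(L(γ_k − μ_i)(γ_k − μ_j)) suggested by the computations above (an assumption here, since the precise definition (7.43) is given earlier in the book), and verifies that the trace of B₁ + · · · + B_N (the sum of its eigenvalues) equals (1/L) Σ_{i,k} b_ii/(γ_k − μ_i)².

```python
import math

# illustrative parameters (not from the text)
L = 1.0
N = 8
alpha = 1.9            # 7/4 < alpha < 2, as required

mu = [((j - 1) * math.pi / L) ** 2 for j in range(1, N + 1)]        # Neumann eigenvalues
gamma = [mu[N - 1] + k / N + N ** alpha for k in range(1, N + 1)]   # (7.37)

def b(i, j):
    # b_11 = 1, b_1j = b_j1 = sqrt(2), b_ij = 2 otherwise (0-based indices)
    if i == 0 and j == 0:
        return 1.0
    if i == 0 or j == 0:
        return math.sqrt(2.0)
    return 2.0

# assumed entry formula for B_k; S = B_1 + ... + B_N
S = [[sum(b(i, j) / (L * (g - mu[i]) * (g - mu[j])) for g in gamma)
      for j in range(N)] for i in range(N)]

trace_S = sum(S[i][i] for i in range(N))
formula = (1.0 / L) * sum(b(i, i) / (g - mu[i]) ** 2
                          for i in range(N) for g in gamma)
print(trace_S, formula)
```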
Proof of Theorem 7.4. First of all, we note that the stochastic equation (7.34) is well
posed, since both σ and the Neumann boundary conditions are Lipschitz, and thus
one can argue for the existence and uniqueness as in [113].
We lift the boundary conditions into the Eq. (7.34) by arguing similarly as in
(2.27)–(2.29), obtaining thereby the internal control-type problem
∂_t Y(t) = −AY(t) + Σ_{i=1}^N u_i(Y(t))(Ã + γ_i)D_{γ_i} − 2 Σ_{i,j=1}^N μ_j u_i(Y(t))⟨D_{γ_i}, ϕ_j⟩ϕ_j
           + λσ(Y(t))dβ;  Y(0) = Y_o. (7.49)
(One can check the section above or [19, Sect. 1] for additional explanations on the
precise definition of a solution to (7.49).)
Next, the idea is to forget, for a while, about the stochastic perturbation, and
express the solution z to the linear equation
∂_t z(t) = −Az(t) + Σ_{i=1}^N u_i(z(t))(Ã + γ_i)D_{γ_i} − 2 Σ_{i,j=1}^N μ_j u_i(z(t))⟨D_{γ_i}, ϕ_j⟩ϕ_j, t > 0, (7.50)
z(0) = z_o,
in an integral form. This enables one to have a mild formulation for the solution y
to (7.49). One can prove the following results.
Lemma 7.1 The solution z to the equation

∂_t z(t) = −Az(t) + Σ_{i=1}^N u_i(z(t))(Ã + γ_i)D_{γ_i} − 2 Σ_{i,j=1}^N μ_j u_i(z(t))⟨D_{γ_i}, ϕ_j⟩ϕ_j;  z(0) = z_o, (7.51)

can be written in a mild formulation as

z(t, x) = ∫₀^L p(t, x, ξ) z_o(ξ) dξ,
where z_j(t) = ⟨z(t), ϕ_j⟩, j = 1, 2, . . . . From now on, all our effort will be devoted to writing, for each j ∈ ℕ*, z_j in the form

z_j(t) = Σ_{i=1}^∞ f_ij(t) ⟨z_o, ϕ_i⟩, (7.53)

so that

p(t, x, ξ) = Σ_{i,j=1}^∞ f_ij(t) ϕ_i(x) ϕ_j(ξ),

and play with the estimates (in terms of N) of C_ij, c_ij in order to deduce (7.52) as well.
We emphasize that in the case of a general feedback law, after a lift of the boundary
conditions into the equations, it is not easy (or even possible) to get a relation like
(7.53). However, the simple explicit form of our feedback law allows us to do this.
Scalar multiplying equation (7.51) by ϕ j , j = 1, . . . , N , and arguing as in (2.38)–
(2.41), we get that the first N modes of the solution z satisfy
(d/dt) Z = −γ₁ Z + Σ_{k=2}^N (γ₁ − γ_k) B_k A Z, t > 0. (7.54)
Hence, solving (7.54), there exist functions q_ij such that

z_i(t) = Σ_{j=1}^N q_ij(t) ⟨z_o, ϕ_j⟩, i = 1, . . . , N, (7.55)

with

|q_ij(t)|² ≤ (C/λ₁(A)) e^{−γ₁ t}, ∀t ≥ 0, ∀i, j = 1, . . . , N. (7.56)

Equivalently, recalling (7.47),

|q_ij(t)| ≤ C (1/N^{α−1/2}) e^{−cN² t}, ∀t ≥ 0. (7.57)

Consequently, the feedback controls can be written as

u_i = u_i(t) = Σ_{j=1}^N r_ij(t) ⟨z_o, ϕ_j⟩, i = 1, . . . , N, (7.58)
where by (7.41), (7.47), (7.39), and (7.57), we get that there exists C > 0 such that
|r_ij(t)| ≤ ‖A‖ ‖Z(t)‖_N (√N/(γ₁ − μ_N)) ≤ C N³ (1/N^{α−1/2}) e^{−cN²t} (√N/N^α) = C N^{4−2α} e^{−cN²t},
∀t ≥ 0, ∀i, j = 1, . . . , N. (7.59)
We move on to the modes z j , j > N . Scalar multiplying equation (7.51) by
ϕ j , j > N , we get
(d/dt) z_j = −μ_j z_j − √(2/L) Σ_{i=1}^N u_i, t > 0.

We write

w_i^j(t) := −√(2/L) Σ_{k=1}^N ∫₀^t e^{−μ_j(t−s)} r_ki(s) ds.
Then, by (7.58),

z_j(t) = e^{−μ_j t} ⟨z_o, ϕ_j⟩ + Σ_{i=1}^N w_i^j(t) ⟨z_o, ϕ_i⟩, t ≥ 0, (7.61)

and, by (7.59),

|w_i^j(t)| ≤ C (N^{5−2α}/(μ_j − cN²)) e^{−cN² t}, ∀t ≥ 0. (7.62)
We may now conclude by (7.55) and (7.61) that the solution z to (7.51) may be written as

z(t, x) = ∫₀^L p(t, x, ξ) z_o(ξ) dξ,

where

p(t, x, ξ) := Σ_{i,j=1}^N q_ji(t)ϕ_j(x)ϕ_i(ξ) + Σ_{j=N+1}^∞ e^{−μ_j t}ϕ_j(x)ϕ_j(ξ) + Σ_{j=N+1}^∞ Σ_{i=1}^N w_i^j(t)ϕ_j(x)ϕ_i(ξ). (7.63)
Thus
∫₀^∞ ∫₀^L e^{Nt} p²(t, x, ξ) dξ dt ≤ I₁ + I₂ + I₃, (7.65)
where

I₁ := 2 ∫₀^∞ e^{Nt} Σ_{i=1}^N ( Σ_{j=1}^N q_ji(t) ϕ_j(x) )² dt

≤ C ∫₀^∞ e^{Nt} N³ max_{i,j=1,...,N} |q_ji(t)|² dt

(using (7.57) and taking N large enough)

≤ C N³ ∫₀^∞ e^{Nt} (1/N^{2α−1}) e^{−2cN²t} dt

≤ C (1/N^{2α−2}), ∀x ∈ (0, L). (7.66)
Next,

I₂ := 2 ∫₀^∞ e^{Nt} Σ_{i=1}^N ( Σ_{j=N+1}^∞ w_i^j(t) ϕ_j(x) )² dt

(using (7.62))

≤ C ∫₀^∞ e^{Nt} N [ Σ_{j=N+1}^∞ (N^{5−2α}/(μ_j − cN²)) e^{−cN²t} ]² dt (7.67)

= C N^{11−4α} ( Σ_{j=N+1}^∞ 1/(μ_j − cN²) )² ∫₀^∞ e^{Nt} e^{−2cN²t} dt.
Note that

Σ_{j=N+1}^∞ 1/(μ_j − cN²) ≤ Σ_{j=N+1}^∞ 1/((π²/L² − c) j²) ≤ C ( ∫_{N−1}^N dx/x² + ∫_N^{N+1} dx/x² + · · · ) = C ∫_{N−1}^∞ dx/x² = C/(N − 1).

Hence

I₂ ≤ C N^{11−4α} (1/N²) (1/(2cN² − N)) ≤ C (1/N^{4α−7}), (7.68)
Estimating I₃ similarly, we arrive at the estimate (7.52), i.e.,

∫₀^∞ ∫₀^L e^{Nt} p²(t, x, ξ) dξ dt ≤ C/N^θ, ∀x ∈ (0, L),

where θ > 0 is defined as (recall that α was chosen such that α > 7/4)

θ := min{ 2α − 2; 4α − 7; 1/2 }. (7.71)
Proof of Theorem 7.4 (continued). Next, the idea is to get rid of the Brownian motion.
This is usually done by taking the second moment into the equation and using Itô’s
isometry. Before doing that, let us introduce

‖Y‖_{2,N} := sup_{t≥0} sup_{x∈(0,L)} e^{Nt} E|Y(t, x)|².

Then, by virtue of Lemma 7.1, we write the solution of (7.71) in a mild formulation via the kernel p, i.e.,

Y(t, x) = ∫₀^L p(t, x, ξ) Y_o(ξ) dξ + λ ∫₀^t ∫₀^L p(t − s, x, ξ) σ(ξ, Y(s, ξ)) β(ds, dξ). (7.72)
0 0
Notice that the kernel p, defined by (7.63), has similar structure, with similar prop-
erties, to the classical heat kernel. Consequently, one can easily argue as in [125, Ex.
3.4] or [44, Theorem 13] in order to deduce the unique existence of a solution Y to
(7.72).
Taking the second moment in (7.72) and using Itô's isometry and relation (7.27), we obtain that

E|Y(t, x)|² ≤ 2L ∫₀^L p²(t, x, ξ) y_o²(ξ) dξ + 2Lλ²L_σ² ∫₀^t ∫₀^L p²(t − s, x, ξ) E|Y(s, ξ)|² dξ ds

(using (7.52))

≤ C e^{−Nt} + 2Lλ²L_σ² ∫₀^t ∫₀^L p²(t − s, x, ξ) E|Y(s, ξ)|² dξ ds

= C e^{−Nt} + 2Lλ²L_σ² ∫₀^t ∫₀^L e^{N(t−s)} p²(t − s, x, ξ) e^{−N(t−s)} E|Y(s, ξ)|² dξ ds

≤ C e^{−Nt} + ‖Y‖_{2,N} 2Lλ²L_σ² e^{−Nt} ∫₀^t ∫₀^L e^{N(t−s)} p²(t − s, x, ξ) dξ ds

(again using relation (7.52))

≤ C e^{−Nt} ( 1 + Lλ²L_σ² (1/N^θ) ‖Y‖_{2,N} ).

It follows that

‖Y‖_{2,N} ≤ C + Cλ²L L_σ² (1/N^θ) ‖Y‖_{2,N}.
Therefore, if we choose N large enough that Cλ²L L_σ² (1/N^θ) < 1 and N > ρ, we obtain that

‖Y‖_{2,ρ} ≤ ‖Y‖_{2,N} < ∞,
Here we propose to further develop the ideas from the previous section regarding
the equivalent rewrite of the solution in an integral form. We will consider again a
nonlinear stochastic equation, namely the stochastic Burgers equation, and stabilize
its null solution from the boundary. This equation reads as
dY(t, x) = νY_xx(t, x)dt + b(t, x)Y(t, x)Y_x(t, x)dt + θY(t, x)dβ(t), t > 0, x ∈ (0, L),
Y_x(t, 0) = v(t), Y_x(t, L) = 0, t > 0,
Y(0, x) = y_o(x), x ∈ (0, L). (7.73)
It is clear that the quadratic nonlinearity Y Y_x significantly complicates the problem; it is not covered by the previous approach, applied to the stochastic heat equation. On the other hand, this time the stochastic perturbation is very simple, θY dβ,
with θ a positive constant. This allows one to do a rescaling in order to transform
(7.73) into a deterministic random PDE. Concerning the obtained random deterministic equation, one does not even know whether it is well posed. So in fact, we have
to solve three problems at once, namely existence, uniqueness, and stabilization.
This can be achieved via a fixed-point argument. More exactly, we will consider an
auxiliary functional space, namely
Z = { y = y(t, x) : sup_{t>0} e^{Nt} ( ‖y(t)‖ + t^ρ ‖y_x(t)‖ ) < ∞ },

for some positive ρ, write again the solution in a mild formulation, then show that the corresponding nonlinear functional leaves the ball

B_r(0) = { y ∈ Z : sup_{t>0} e^{Nt} ( ‖y(t)‖ + t^ρ ‖y_x(t)‖ ) ≤ r }
invariant and that it is a contraction on it, for r and initial data small enough. From
this, via the contraction mapping theorem, the three problems are solved.
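The fixed-point scheme just described can be illustrated on a scalar toy model. The sketch below is purely illustrative: the map G is a stand-in for the actual mild-solution map, with y0 playing the role of the small initial data and the quadratic term mimicking the Burgers nonlinearity; for small y0, G is a contraction on a small ball and Picard iteration converges.

```python
# Toy illustration of the contraction argument: solve y = G(y) := y0 + c*y**2.
# For |y0| small, G maps a small ball into itself (G(y) ~ y0 + O(r^2)) and is
# a contraction there (|G'(y)| = 2c|y| < 1), so Picard iteration converges.
def G(y, y0=0.05, c=1.0):
    return y0 + c * y * y

y = 0.0
for _ in range(60):
    y = G(y)                    # Picard iteration
residual = abs(y - G(y))
print(y, residual)
```

The limit is the smaller root of y = 0.05 + y², exactly as the contraction mapping theorem predicts; for large y0 (say y0 > 0.25) the iteration no longer stays in a small ball, mirroring the smallness requirement on the initial data.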
Let us give some details about the functions and parameters that constitute (7.73).
The function b is such that there exist Cb > 0 and 0 ≤ m 1 ≤ m 2 ≤ · · · ≤ m S , for
some S ∈ N, for which
7.3 Stabilization of the Stochastic Burgers Equation

sup_{x∈(0,L)} |b(t, x)| ≤ C_b ( Σ_{k=1}^S t^{m_k} + 1 ), ∀t > 0. (7.74)

Moreover, the constant θ in (7.73) is assumed to satisfy

(1/2)θ² = m_S + 1/4 + θ₁, (7.75)
where θ1 > 0.
In (7.73), consider the substitution

y(t) := Γ⁻¹(t) Y(t), (7.76)

where Γ(t): L²(0, L) → L²(0, L) is the linear continuous operator defined by the equations

dΓ(t) = θΓ(t)dβ(t), t ≥ 0, Γ(0) = 1,

that is,

Γ(t) = e^{θβ(t) − (1/2)θ²t}, t ≥ 0. (7.77)
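A quick Monte Carlo sketch (illustrative only; the values θ = 0.5, t = 1 and the sample size are ad hoc) confirms that Γ(t) = e^{θβ(t) − θ²t/2} has mean one, as expected for the solution of dΓ = θΓ dβ with Γ(0) = 1:

```python
import math
import random

random.seed(0)
theta, t, n = 0.5, 1.0, 20000
total = 0.0
for _ in range(n):
    beta_t = random.gauss(0.0, math.sqrt(t))   # Brownian motion at time t
    # Gamma(t) from (7.77); the -theta^2*t/2 correction makes it a martingale
    total += math.exp(theta * beta_t - 0.5 * theta ** 2 * t)
mean = total / n
print(mean)   # should be close to E[Gamma(t)] = 1
```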
By the transformation (7.76), Eq. (7.73) reduces to the random parabolic equation
∂y/∂t (t) = νΓ⁻¹(t)(Γ(t)y(t))_xx + Γ⁻¹(t)b(t)(Γ(t)y(t))(Γ(t)y(t))_x, t ∈ [0, ∞),
y_x(t, 0) = Γ⁻¹(t)v(t), y_x(t, L) = 0, t ∈ [0, ∞),
y(0) = y_o. (7.78)
Indeed, if y is a regular solution to (7.78) (for instance absolutely continuous in t)
that is progressively measurable in (t, ω) in the probability space {Ω, P, F , Ft }
and

E ∫₀^T ‖y(t)‖²_{H²(O)} dt < ∞,

then, by the stochastic product rule,

dY = y dΓ(t) + Γ(t) (∂y/∂t) dt in (0, T) × O.
Then we obtain for y the random Eq. (7.78), as claimed. On the other hand, an Ft -
adapted solution t → y(t) to Eq. (7.78) leads via transformation (7.76) to a solution
Y to (7.73) in the sense of the above definition. We equivalently write (7.78) as
∂y/∂t (t) = ν[θβ_xx(t)y(t) + θ²(β_x(t))²y(t) + 2θβ_x(t)y_x(t) + y_xx(t)]
            + b(t)Γ(t)y(t)[θβ_x(t)y(t) + y_x(t)], t ∈ [0, ∞), (7.79)
y_x(t, 0) = Γ⁻¹(t)v(t), y_x(t, L) = 0, t ∈ [0, ∞),
y(0) = y_o.
In order to simplify the problem, we assume that the Brownian motion β is only
time-dependent. Hence it follows by (7.79) that in fact, y satisfies the equation
∂y/∂t (t) = νy_xx(t) + Γ(t)b(t)y(t)y_x(t), t ∈ [0, ∞),
y_x(t, 0) = u(t) := Γ⁻¹(t)v(t), y_x(t, L) = 0, t ∈ [0, ∞), (7.80)
y(0) = y_o.
Of course, one may think to consider the more general case of β = β(t, x) and
apply the argument that follows to (7.79). Unfortunately, this is not a trivial task.
We will explain later what other difficulties appear in this general case and what
additional hypotheses should be added.
Below, we will frequently use the following obvious but useful inequality:
e−at ≤ t −a , ∀t > 0, a ≥ 0.
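This inequality reduces, after taking logarithms and dividing by a, to t ≥ log t, which holds for all t > 0. A brute-force numerical check (illustrative, on a finite grid):

```python
import math

# e^{-a t} <= t^{-a} for t > 0, a >= 0, since -a*t <= -a*log(t) iff t >= log(t)
worst = float("-inf")
for k in range(1, 401):
    t = k * 0.05                           # t ranges over (0, 20]
    worst = max(worst, math.log(t) - t)    # log t - t should stay negative
    for a in (0.25, 1.0, 3.0):
        assert math.exp(-a * t) <= t ** (-a) + 1e-12
print(worst)  # max of log t - t, attained at t = 1, equals -1
```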
Before moving on, let us see that by the law of the iterated logarithm, arguing as in Lemma 3.4 in [21], it follows that there exists a constant C_Γ > 0, depending on ω, such that Γ(t) ≤ C_Γ e^{−(m_S + 1/4)t} for all t ≥ 0. Hence, by (7.74) and the inequality above,

Γ(t) sup_{x∈(0,L)} |b(t, x)| ≤ C_b C_Γ ( Σ_{k=1}^S t^{m_k} + 1 ) e^{−(m_S + 1/4)t} ≤ (S + 1)C t^{−1/4}, ∀t > 0. (7.82)
Next, we recall the Neumann–Laplace operator A, its spectrum {μ_k}_{k=1}^∞ and its eigenfunction system {ϕ_k}_{k=1}^∞; the Gram matrix B; the matrices Λ_k and B_k, k = 1, . . . , N, and A; and the Neumann operators D_{γ_k}, k = 1, 2, . . . , N, introduced
in relations (7.36)–(7.43) above, respectively. Also, we recall that based on them, we define the feedbacks u_k, k = 1, . . . , N, as in (7.41), and u as

u(y) := u₁(y) + · · · + u_N(y). (7.84)
The lift of the boundary conditions into Eq. (7.80) leads to

∂_t y(t) = −Ay(t) + Σ_{i=1}^N u_i(y(t))(A + γ_i)D_{γ_i} − 2 Σ_{i,j=1}^N μ_j u_i(y(t))⟨D_{γ_i}, ϕ_j⟩ϕ_j + Γ(t)b(t)y(t)y_x(t);  y(0) = y_o. (7.85)

As before, the solution z to the corresponding linear equation

∂_t z(t) = −Az(t) + Σ_{i=1}^N u_i(z(t))(A + γ_i)D_{γ_i} − 2 Σ_{i,j=1}^N μ_j u_i(z(t))⟨D_{γ_i}, ϕ_j⟩ϕ_j;  z(0) = z_o, (7.86)

can be written in a mild formulation as

z(t, x) = ∫₀^L p(t, x, ξ) z_o(ξ) dξ,
where

p(t, x, ξ) := p₁(t, x, ξ) + p₂(t, x, ξ) + p₃(t, x, ξ), (7.87)

with p₁(t, x, ξ) := Σ_{i,j=1}^N q_ji(t)ϕ_j(x)ϕ_i(ξ), p₂(t, x, ξ) := Σ_{j=N+1}^∞ e^{−μ_j t}ϕ_j(x)ϕ_j(ξ), and p₃(t, x, ξ) := Σ_{j=N+1}^∞ Σ_{i=1}^N w_i^j(t)ϕ_j(x)ϕ_i(ξ).

The quantities q_ji(t) and w_i^j(t) involved in the definition of p satisfy the following estimates: for some C_q > 0, depending on N,

|q_ji(t)| ≤ C_q e^{−cN²t}, ∀t ≥ 0, (7.88)

and, for some C_w > 0, depending on N,

|w_i^j(t)| ≤ C_w (1/(μ_j − cN²)) e^{−cN²t}, ∀t ≥ 0. (7.89)
Indeed, setting

B(u)(t) := Σ_{i=1}^N u_i(z(t))(A + γ_i)D_{γ_i} − 2 Σ_{i,j=1}^N μ_j u_i(z(t))⟨D_{γ_i}, ϕ_j⟩ϕ_j,

and

(B(u)(t))_j := ⟨B(u)(t), ϕ_j⟩, j = 1, 2, . . . ,

we have

Σ_{i=1}^N w_i^j(t) ⟨z_o, ϕ_i⟩ = ∫₀^t e^{−μ_j(t−s)} (B(u)(s))_j ds,
where we have used the form of B(u) and the fact that

|u_i(t)| ≤ C e^{−cN²t} sup_{l=1,2,...,N} |⟨z_o, ϕ_l⟩|, ∀t ≥ 0, i = 1, . . . , N.
Theorem 7.5 Let η > 0, depending on ω and sufficiently small, and let N ∈ N be sufficiently large. Then for each y_o ∈ L²(0, L) with ‖y_o‖ < η, there exists a unique solution y to the random deterministic Eq. (7.85) belonging to the space Y,

Y := { y ∈ C_b((0, ∞); H¹(0, L)) : sup_{t≥0} e^{Nt} ( ‖y(t)‖ + t^{1/2} ‖y_x(t)‖ ) < ∞ }.
The mild formulation of (7.85) reads

y(t, x) = ∫₀^L p(t, x, ξ) y_o(ξ) dξ + (F y)(t, x) =: G(y)(t, x),

where

(F y)(t) := ∫₀^t ∫₀^L p(t − s, x, ξ) b(s, ξ) Γ(s) y(s, ξ) y_ξ(s, ξ) dξ ds,
where, corresponding to the decomposition (7.87) of p,

F₁(y)(t, s, x) := Σ_{j=1}^N Σ_{i=1}^N q_ji(t − s) Γ(s) ∫₀^L b(s, ξ) y(s, ξ) y_ξ(s, ξ) ϕ_i(ξ) dξ · ϕ_j(x),

F₂(y)(t, s, x) := Σ_{j=N+1}^∞ e^{−μ_j(t−s)} Γ(s) ∫₀^L b(s, ξ) y(s, ξ) y_ξ(s, ξ) ϕ_j(ξ) dξ · ϕ_j(x),

F₃(y)(t, s, x) := Σ_{j=N+1}^∞ Σ_{i=1}^N w_i^j(t − s) Γ(s) ∫₀^L b(s, ξ) y(s, ξ) y_ξ(s, ξ) ϕ_i(ξ) dξ · ϕ_j(x). (7.94)
It follows via Parseval's identity that

‖F₁(y)‖ = { Σ_{j=1}^N ( Σ_{i=1}^N q_ji(t − s) Γ(s) ∫₀^L b(s, ξ) y(s, ξ) y_ξ(s, ξ) ϕ_i(ξ) dξ )² }^{1/2}

≤ C e^{−cN²(t−s)} Γ(s) sup_{x∈(0,L)} |b(s, x)| ‖y(s)‖ ‖y_ξ(s)‖ ≤ C e^{−cN²(t−s)} s^{−1/4} ‖y(s)‖ ‖y_ξ(s)‖, (7.95)

where we used (7.88) and (7.82).
We continue with

‖F₂(y)‖ = { Σ_{j=N+1}^∞ ( e^{−μ_j(t−s)} Γ(s) ∫₀^L b(s, ξ) y(s, ξ) y_ξ(s, ξ) ϕ_j(ξ) dξ )² }^{1/2}

= e^{−2Nt} { Σ_{j=N+1}^∞ ( e^{−(μ_j−2N)(t−s)} Γ(s) e^{2Ns} ∫₀^L b(s, ξ) y(s, ξ) y_ξ(s, ξ) ϕ_j(ξ) dξ )² }^{1/2}

≤ C e^{−Nt} { [ ∫₀^L Σ_{j=N+1}^∞ e^{−(μ_j−2N)(t−s)} | e^{Ns} y(s, ξ) ϕ_j(ξ) | · | Γ(s) b(s, ξ) e^{Ns} y_ξ(s, ξ) | dξ ]² }^{1/2}

(by Schwarz's inequality)

≤ C e^{−Nt} { ∫₀^L Σ_{j=N+1}^∞ e^{−2(μ_j−2N)(t−s)} ϕ_j²(ξ) e^{2Ns} y²(s, ξ) dξ × ∫₀^L Γ²(s) b²(s, ξ) e^{2Ns} y_ξ²(s, ξ) dξ }^{1/2}

= C e^{−Nt} { ∫₀^L [ Σ_{j=N+1}^∞ e^{−2(μ_j−2N)(t−s)} ϕ_j²(ξ) ] e^{2Ns} y²(s, ξ) dξ × ∫₀^L Γ²(s) b²(s, ξ) e^{2Ns} y_ξ²(s, ξ) dξ }^{1/2}

(use the inequality between the heat and the Gaussian kernels (7.30))

≤ C e^{−Nt} { (t − s)^{−1/2} ∫₀^L e^{2Ns} y²(s, ξ) dξ × ∫₀^L Γ²(s) b²(s, ξ) e^{2Ns} y_ξ²(s, ξ) dξ }^{1/2}

(by (7.82))

≤ C e^{−Nt} (t − s)^{−1/4} s^{−1/4} e^{Ns} ‖y(s)‖ e^{Ns} ‖y_ξ(s)‖

(using (7.92))

≤ C e^{−Nt} (t − s)^{−1/4} s^{−3/4} |y|²_Y, ∀ 0 < s < t. (7.96)
Similarly,

‖F₃(y)‖ = { Σ_{j=N+1}^∞ ( Σ_{i=1}^N w_i^j(t − s) Γ(s) ∫₀^L b(s, ξ) y(s, ξ) y_ξ(s, ξ) ϕ_i(ξ) dξ )² }^{1/2}
We go on with the estimates in the H¹-norm. Using the above notation, we have

‖(F₁(y))_x‖ = { Σ_{j=1}^N μ_j ( Σ_{i=1}^N q_ji(t − s) Γ(s) ∫₀^L b(s, ξ) y(s, ξ) y_ξ(s, ξ) ϕ_i(ξ) dξ )² }^{1/2}

(arguing as in (7.95))

≤ C e^{−cN²(t−s)} s^{−1/4} ‖y(s)‖ ‖y_ξ(s)‖

= C e^{−2Nt} e^{(−cN²+2N+3/4)(t−s)} e^{−(3/4)(t−s)} s^{−1/4} e^{Ns}‖y(s)‖ e^{Ns}‖y_ξ(s)‖

(by (7.92) and the fact that −cN² + 2N + 3/4 < 0 for N large enough)

≤ C e^{−Nt} (t − s)^{−3/4} s^{−3/4} |y|²_Y, ∀ 0 < s < t. (7.100)
Next,

‖(F₂(y))_x‖ = { Σ_{j=N+1}^∞ ( μ_j^{1/2} e^{−μ_j(t−s)} Γ(s) ∫₀^L b(s, ξ) y(s, ξ) y_ξ(s, ξ) ϕ_j(ξ) dξ )² }^{1/2}

= (t − s)^{−1/2} { Σ_{j=N+1}^∞ ( (t − s)^{1/2} μ_j^{1/2} e^{−μ_j(t−s)} Γ(s) ∫₀^L b(s, ξ) y(s, ξ) y_ξ(s, ξ) ϕ_j(ξ) dξ )² }^{1/2}

≤ (t − s)^{−1/2} { Σ_{j=N+1}^∞ ( e^{−(1/2)μ_j(t−s)} Γ(s) ∫₀^L b(s, ξ) y(s, ξ) y_ξ(s, ξ) ϕ_j(ξ) dξ )² }^{1/2}

(arguing as in (7.96))

≤ C (t − s)^{−1/2} e^{−Nt} (t − s)^{−1/4} s^{−3/4} |y|²_Y.
Finally,

‖(F₃(y))_x‖ = { Σ_{j=N+1}^∞ μ_j ( Σ_{i=1}^N w_i^j(t − s) Γ(s) ∫₀^L b(s, ξ) y(s, ξ) y_ξ(s, ξ) ϕ_i(ξ) dξ )² }^{1/2}

(by (7.90))

≤ C e^{−cN²(t−s)} sup_{l=1,2,...,N} | ⟨Γ(s) b(s, ·) y(s, ·) y_ξ(s, ·), ϕ_l(·)⟩ |

≤ C e^{−Nt} (t − s)^{−3/4} s^{−3/4} |y|²_Y. (7.102)
Therefore, (7.100)–(7.102) imply that
‖(F(y)(t))_x‖ ≤ C e^{−Nt} ∫₀^t (t − s)^{−3/4} s^{−3/4} ds |y|²_Y = C e^{−Nt} t^{−1/2} B(1/4, 1/4) |y|²_Y, (7.103)
∀t > 0.
Heading toward the end of the proof, we note that

∫₀^∞ e^{Nt} t^{1/2} { ∫₀^L ( ∂p/∂x (t, x, ξ) )² dξ }^{1/2} dt < ∞,

since the presence of the μ_j in the infinite sum is controlled, as in (7.101), by the presence of t^{1/2}. Consequently, via the semigroup property, we deduce that

‖ ∫₀^L (∂p/∂x)(t, x, ξ) y_o(ξ) dξ ‖ ≤ C e^{−Nt} t^{−1/2} ‖y_o‖. (7.104)
Now, gathering together the relations (7.97), (7.99), (7.103), and (7.104), we arrive at the fact that there exists a constant C₁ > 0 such that

|G(y)|_Y ≤ C₁ ( ‖y_o‖ + |y|²_Y ), (7.105)

for all y ∈ Y. It is easily seen that arguments similar to those above lead as well to

|G(y) − G(ȳ)|_Y ≤ C₂ ( |y|_Y + |ȳ|_Y ) |y − ȳ|_Y, ∀y, ȳ ∈ Y; (7.106)

then, taking r = 2C₁η with η small enough, we get from (7.106) that G is a contraction and by (7.105) that G maps the ball B_r(0) into itself, as claimed.
Note that C1 , C2 depend on ω, since in the above, ω-estimates for Γ were used
(CΓ is ω-dependent). Thus η should depend on ω too. This means that in fact, yo
must depend on ω.
Remark 7.1 Let us return to Eq. (7.79). If one assumes that β depends on the space
variable as well, then in trying to apply the above approach, one has to estimate
terms like βx . The law of the iterated logarithm should work again. In any case, this
time, we will no longer have |y|2Y on the right-hand side, because of the terms like
βx y, βx yx . This implies that in applying the fixed-point argument, at some point one
should find an r > 0 sufficiently small that for some constants c1 , c2 ,
c1 r + c2 r 2 < r.
This is possible if and only if c1 < 1, but no one can guarantee this. In the above
case, we had
c2 r 2 < r,
and this is possible, provided r is sufficiently close to zero. Hence, we believe that the above argument fails to work in the case of a space-dependent Brownian motion β, unless some additional hypotheses on the coefficients of the equation, such as smallness conditions, are imposed.
Recall that we have denoted by A the Neumann–Laplace operator, see (7.36), and by {μ_j}_j and {ϕ_j}_j its system of eigenvalues and system of eigenfunctions, respectively. Let N ∈ N be sufficiently large that Theorem 7.4 holds, and assume that this time, σ is a Lipschitz function of ỹ that depends only on the first N modes of ỹ, namely ⟨ỹ, ϕ₁⟩, . . . , ⟨ỹ, ϕ_N⟩. Using the notation in (7.37)–(7.42), we introduce the feedback form ũ as

ũ(ỹ) := ũ₁(ỹ) + · · · + ũ_N(ỹ), (7.108)
where

ũ_k(ỹ) := ⟨ A ( ⟨ỹ(τ⌊t/τ⌋), ϕ₁⟩, ⟨ỹ(τ⌊t/τ⌋), ϕ₂⟩, . . . , ⟨ỹ(τ⌊t/τ⌋), ϕ_N⟩ )ᵀ , ( ϕ₁(0)/(γ_k − μ₁), ϕ₂(0)/(γ_k − μ₂), . . . , ϕ_N(0)/(γ_k − μ_N) )ᵀ ⟩_N. (7.109)
We observe that the feedback control ũ is designed based on the discrete-time state
observations ỹ(0), ỹ(τ ), ỹ(2τ ), . . ..
Next, we lift the boundary conditions into Eq. (7.107) by arguing similarly as in
(2.27)–(2.29), obtaining thereby the internal control-type problem
7.4 Stabilization by Discrete-Time Feedback Control

d ỹ(t) = [ −A ỹ(t) + Σ_{i=1}^N ũ_i( ỹ(τ⌊t/τ⌋) )(A + γ_i)D_{γ_i} − 2 Σ_{i,j=1}^N μ_j ũ_i( ỹ(τ⌊t/τ⌋) )⟨D_{γ_i}, ϕ_j⟩ϕ_j ] dt
          + λσ(ỹ(t))dβ;  ỹ(0) = y_o. (7.110)
We note that Eq. (7.110) is in fact a stochastic PDE with delays, with a bounded
variable delay. Indeed, if we define the bounded variable ζ: [0, ∞) → [0, τ] by ζ(t) := t − τ⌊t/τ⌋, then (7.110) can be rewritten as

d ỹ(t) = [ −A ỹ(t) + Σ_{i=1}^N ũ_i( ỹ(t − ζ(t)) )(A + γ_i)D_{γ_i} − 2 Σ_{i,j=1}^N μ_j ũ_i( ỹ(t − ζ(t)) )⟨D_{γ_i}, ϕ_j⟩ϕ_j ] dt
          + λσ(ỹ(t))dβ;  ỹ(0) = y_o. (7.111)
Hence, the classical existence theory for delay SPDEs can be applied in order to
ensure that (7.111) (and implicitly (7.110)) is well posed.
To address the mean-square exponential stability of the controlled Eq. (7.110), we will relate it to its continuous-time controlled version (also referred to as the auxiliary problem)

dy(t) = [ −Ay(t) + Σ_{i=1}^N u_i(y(t))(A + γ_i)D_{γ_i} − 2 Σ_{i,j=1}^N μ_j u_i(y(t))⟨D_{γ_i}, ϕ_j⟩ϕ_j ] dt + λσ(y(t))dβ;  y(0) = y_o, (7.112)

which is mean-square exponentially stable by virtue of Theorem 7.4. More precisely, we have obtained that
−∞ < lim sup_{t→∞} (1/t) log E|y(t, x)|² < −ρ,  ∀x ∈ (0, L). (7.113)
We aim to prove a relation similar to (7.113) for the solution ỹ to (7.110). To this end,
we will compare ỹ with y, and show that they are close enough (in a proper sense),
The first N modes of ỹ and y satisfy the finite-dimensional stochastic systems

dỸ(t) = [ −ΛỸ(t) + ΛỸ(t − ζ(t)) + Σ_{k=1}^N B_k A Ỹ(t − ζ(t)) ] dt + λσ̃(Ỹ(t)) dβ(t), (7.114)

and

dY(t) = [ −ΛY(t) + ΛY(t) + Σ_{k=1}^N B_k A Y(t) ] dt + λσ̃(Y(t)) dβ(t), (7.115)
respectively. Here

Ỹ := ( ⟨ỹ, ϕ₁⟩, ⟨ỹ, ϕ₂⟩, . . . , ⟨ỹ, ϕ_N⟩ )ᵀ,  Y := ( ⟨y, ϕ₁⟩, ⟨y, ϕ₂⟩, . . . , ⟨y, ϕ_N⟩ )ᵀ,

and

Λ := diag(μ₁, μ₂, . . . , μ_N).
Finally, σ̃ is a function depending only on the first N modes, since σ was assumed to be like that,

σ̃ := ( ⟨σ(Y), ϕ₁⟩, ⟨σ(Y), ϕ₂⟩, . . . , ⟨σ(Y), ϕ_N⟩ )ᵀ.
Note that with the two Eqs. (7.114) and (7.115) we are placed in the context of
finite-dimensional stochastic differential equations from [92]. And since the sublinear
assumptions from [92] are satisfied for the present case (see [92, Assumption 2.1],
because here we are dealing with matrices and σ is Lipschitz), we may deduce similar
results to those in [92, Lemma 3.2, Theorem 3.1]. More precisely, one may show
that

E|Ỹ(t) − Y(t)|² ≤ C₁ e^{−C₂ t} E|Y_o|², ∀t ≥ 0, (7.116)
for some positive constants C1 and C2 . And taking advantage of the exponential
decay (7.113), we finally get that
E|Ỹ(t)|² ≤ C e^{−ρt} E|Y_o|², ∀t ≥ 0, (7.117)
Similarly as in (7.117), one may deduce with arguments from [92, Lemma 3.2,
Theorem 3.1] that
E| ỹi (t)|2 ≤ Ce−ρt E| ỹi (0)|2 , ∀t ≥ 0,
7.5 Comments
fact that the solution is no longer L 2 -valued. More precisely, the solution lies in a
negative Sobolev space H α , α < − 14 . The reason is that the smoothing properties
of the heat equation are not strong enough to regularize a rough term such as a white
noise. However, one may suggest reconsidering the problem in the new framework
proposed in [53], namely in weighted L 2 -spaces. The difficulty is that in order to
apply the reduction method, one should consider the eigenbasis in the weighted L 2 -
space of the weighted Laplacian. Another idea is to consider the solution in the space
of distributions as in [37].
Besides this, even in the case of Neumann boundary conditions, it would be
interesting to consider a space dimension higher than one. The main difficulty in that
case is that the solution D of (7.10) should satisfy noise boundary conditions of the
type
(∂D/∂n)(x) = e^{−δt} dβ(t, x), x ∈ Γ,
where Γ is a part of the boundary of the domain in which the equation is considered,
while n is its outward unit normal. To define such a D, one may rely on the existing
results in [112], after imposing some additional conditions. Then the control design
algorithm may be applied.
The problem of stabilization of the stochastic versions of the deterministic models
has arisen naturally in the scientific community. First, the finite-dimensional case
was considered, and we mention Mao and his coworkers for notable results in this
direction; see the book [91], for example. There, it is mainly the Lyapunov stability
technique that is used, which consists in finding proper Lyapunov functions for
the equation under discussion. Then these ideas were reconsidered in the infinite-
dimensional case, and we refer to the joint work of Caraballo et al. [42], which treats
a similar problem to the one we presented above. Let us give some details on how the
Lyapunov functions are used. Let A denote the Dirichlet–Laplace operator on (0, L),
and σ = σ (y) a Lipschitz function such that σ (0) = 0, and consider the problem
dy(t) = Ay(t)dt + λσ (y(t))dβ, t > 0,
(7.119)
y(0) = yo .
Define

L V(t, y) = V_t(t, y) + ⟨V_y(t, y), Ay⟩ + (1/2)⟨V_yy(t, y)λσ(y), λσ(y)⟩

and

QV(t, y) = ⟨V_y(t, y), λσ(y)⟩².
Assume that the solution to (7.119) satisfies |y(t)| ≠ 0 for all t ≥ 0 a.s., provided |y_o| ≠ 0 a.s., and that there exists a function V(t, y) ∈ C¹(R₊; R₊) × C²(L²(0, L); R₊), and ψ₁(t), ψ₂(t) ≥ 0 are two functions for which there exist constants p > 0, γ ≥ 0, and θ ∈ R such that
(1) ‖y‖^p ≤ V(t, y), ∀y ∈ H¹(0, L);
(2) L V(t, y) ≤ ψ₁(t)V(t, y), ∀y ∈ H₀¹(0, L), ∀t ∈ R₊;
(3) QV(t, y) ≥ ψ₂(t)V²(t, y), ∀y ∈ H₀¹(0, L), ∀t ∈ R₊;
(4) lim sup_{t→∞} (1/t) ∫₀^t ψ₁(s) ds ≤ θ and lim inf_{t→∞} (1/t) ∫₀^t ψ₂(s) ds ≥ 2γ.
Then

lim sup_{t→∞} (1/t) log ‖y(t)‖ ≤ −(γ − θ)/p a.s.,
where

lim sup_{t→∞} (1/t) ∫₀^t λ(s) ds ≤ λ₀ and lim inf_{t→∞} (1/t) ∫₀^t ρ(s) ds ≥ ρ₀.

If, in addition,

⟨λσ(y), y⟩² ≥ ρ̃(t)‖y‖⁴, ∀y ∈ L²(0, L),

then one obtains

lim sup_{t→∞} (1/t) log ‖y(t)‖² ≤ −(2(ρ₀ + ρ̃₀) − λ₀), P-a.s.
A more direct and simple control is proposed in [14, Sect. 5.5], where the internal
stabilization of the Navier–Stokes equations driven by linear multiplicative noise is
treated. Provided that the first eigenvalue of the Oseen operator is large enough, the
feedback u = −η 1_{O₀} y, once inserted into the equations, ensures the stability of the closed-loop system, P-a.s.
Proportional-type feedback laws (both internal and from the boundary) were pro-
posed, in the context of noise stabilization of deterministic equations, by Barbu; see
the book [10, Chap. 4]. Regarding the boundary case, in [10] the equation under con-
sideration is the Navier–Stokes equation, but the ideas can be easily reformulated for
general parabolic-like equations. The control law involves a family of independent
N
Brownian motions β j j=1 and is given as
N
u=η μ j y, ϕ j Φ j dβ j .
j=1
The boundary conditions are lifted into the equations, and then the system is decomposed into its unstable and stable parts. The solution of the unstable part is given explicitly and shown to be stable; then, via a Lyapunov function and Itô's formula, it is shown that the stable part is stable as well. This allows us to conclude that the corresponding solution of the closed-loop equation satisfies

∫₀^∞ e^{2γt} ‖y(t)‖² dt < ∞, P-a.s.
The proof of the stability of the system is almost identical to the proof of the main
result of Sect. 7.1, except that the solution of the unstable part is given explicitly.
This is possible due to the imposed hypothesis of linear independence of the traces
of the normal derivatives of the dual eigenfunctions on the boundary (as described
in the Comments of Chap. 2). Of course, following the ideas in Chap. 2, one may
define another stabilizing noise control, where this kind of hypothesis is dropped.
The results presented in the second section of this chapter were published in the
author’s work [106].
On the other hand, concerning the internal stabilization by noise of deterministic
equations, there are substantially more results. We refer first to the early work of
Arnold [5], which provides an example of an unstable system stabilized by a random
parameter noise, followed by the work on linear systems Arnold et al. [6]. Other
stabilization results are provided via Lyapunov exponents in Kwiecinska [78, 79].
Let us briefly describe the ideas behind those works. The equation
(d/dt) X(t) = AX(t)
is considered, where A generates a C0 -semigroup in a Hilbert space. It is denoted by
λ_det := lim sup_{t→∞} (1/t) log ‖X_det(t)‖,
the Lyapunov exponent of the deterministic equation. Here X det is the solution of the
deterministic equation. Then the equation is perturbed by
dX = AX dt + σ Σ_{k=1}^N B_k X dβ_k,
where the Bk are linear continuous operators satisfying some diagonalizable and
commutation properties. Similarly, a Lyapunov coefficient of the stochastic equation
is introduced:
λ_st := lim sup_{t→∞} (1/t) log ‖X_st(t)‖,
where X st is the solution of the stochastic equation. The author proves that the
stochastic Lyapunov exponents turn out to be smaller, almost surely, than their deter-
ministic counterparts. This means that the deterministic system is made more stable
by adding a term with white noise. Moreover, there exists σ0 such that for σ ≥ σ0 ,
all the stochastic Lyapunov exponents are strictly smaller than zero with probability
one. For a collection of more results on this subject, one may see [41].
Regarding the third section of this chapter, Burgers’s equation is often referred to
as a one-dimensional “cartoon” of the Navier–Stokes equation because it does not
exhibit turbulence. In contrast, it turns out that its stochastic version, (7.73), models
turbulence; for details, one can see [47, 111].
In the literature there are plenty of results concerning the stabilization of the deter-
ministic Burgers equation; for example, we refer to [74], which provides a global
stabilization result, with some consequences on the stabilizability of the stochastic
version. The results of Sect. 7.3 are new. The ideas are based on the mild formu-
lation, described above, plus a fixed-point argument. The idea to use fixed-point
arguments in order to prove the stability of deterministic or stochastic equations has
been previously used in papers such as [89].
Finally, the result asserting that Eq. (7.107) (see also (7.110)) is stabilizable by
a proportional-type feedback law involving only time-discrete measurements of the
state, see (7.108) and (7.109), can be viewed as a completion of the result in (2.68),
where via some numerical simulations, we observed that there is no need for full-
state knowledge, but only on a part of the domain. So we may conclude that the
proportional-type feedback law designed in Chap. 2 and used through out this book
to stabilize different types of deterministic or stochastic PDEs can be improved in
order to involve only time-discrete measurements of the state on only a part of the
domain where the phenomena (modeled by the PDE) are evolving. From the practical
point of view and the costs, this is a very important feature. The results in Sect. 7.4
are new.
Chapter 8
Stabilization of Unsteady States
$$\operatorname*{ess\,sup}_{t>0}\ \operatorname*{ess\,sup}_{x\in(0,L)}\big(|\partial_t a(t,x)|+|a_x(t,x)|+|a(t,x)|+|b(t,x)|\big)\le c_1.\tag{8.2}$$
In addition, we assume that
$$a(t,0)=a(t,L)=0,\quad \forall t\ge0,\qquad\text{and}\qquad \sigma(t,x,0)=0.$$
Now let $\hat y$ be some trajectory of the uncontrolled (8.1). More precisely, $\hat y=\hat y(t,x)$ satisfies
$$\begin{cases}
\partial_t\hat y(t,x)=\hat y_{xx}(t,x)+a(t,x)\hat y_x(t,x)+b(t,x)\hat y(t,x)+\sigma(t,x,\hat y(t,x)), & 0<x<L,\ t>0,\\
\hat y_x(t,L)=0, & t\ge0,\\
\hat y(0,x)=\hat y_o(x) & \text{for } x\in[0,L].
\end{cases}\tag{8.4}$$
Then define the fluctuation variable $z:=y-\hat y$, which by virtue of (8.1) and (8.4) satisfies the equation
$$\begin{cases}
\partial_t z(t,x)=z_{xx}(t,x)+a(t,x)z_x(t,x)+b(t,x)z(t,x)\\
\qquad +\,\sigma\big(t,x,z(t,x)+\hat y(t,x)\big)-\sigma\big(t,x,\hat y(t,x)\big), & 0<x<L,\ t>0,\\
z_x(t,0)=U(t):=u(t)-\hat y_x(t,0),\quad z_x(t,L)=0, & t\ge0,\\
z(0,x)=z_o(x):=y_o(x)-\hat y_o(x) & \text{for } x\in[0,L].
\end{cases}\tag{8.5}$$
We will use all the notation from Chap. 7, Sect. 7.2, except that this time, we take the $\gamma_k$ to be
$$\gamma_k := N^\alpha+\frac{k}{N},\quad k=1,2,\dots,N,\tag{8.6}$$
with $\alpha>2$.
We emphasize that
for all k = 1, 2, . . . , N .
The given a priori feedback law $v$ is the same as in (7.41)–(7.42); namely, for $w\in L^2(0,L)$, we set $v(w):=v_1(w)+\dots+v_N(w)$, where
$$v_k(w) := A\left\langle \begin{pmatrix} \langle w,\varphi_1\rangle\\ \langle w,\varphi_2\rangle\\ \vdots\\ \langle w,\varphi_N\rangle \end{pmatrix},\ \begin{pmatrix} \dfrac{\varphi_1(0)}{\gamma_k-\mu_1}\\[4pt] \dfrac{\varphi_2(0)}{\gamma_k-\mu_2}\\ \vdots\\ \dfrac{\varphi_N(0)}{\gamma_k-\mu_N} \end{pmatrix} \right\rangle_N.\tag{8.9}$$
The main stabilization result concerning Eq. (8.1) is stated and proved below.

Theorem 8.1 Let $\rho>0$ be arbitrary but fixed. For $N\in\mathbb N$ large enough, the solution $y$ to the equation
$$\begin{cases}
\partial_t y(t,x)=y_{xx}(t,x)+a(t,x)y_x(t,x)+b(t,x)y(t,x)+\sigma(t,x,y(t,x)), & 0<x<L,\ t>0,\\
y_x(t,0)=v\Big(e^{\frac12\int_0^x a(t,\xi)\,d\xi}\big(y(t)-\hat y(t)\big)\Big)+\hat y_x(t,0), & t>0,\\
y_x(t,L)=0, & t>0,\\
y(0,x)=y_o(x) & \text{for } x\in[0,L],
\end{cases}\tag{8.10}$$
satisfies
$$\limsup_{t\to\infty}\,e^{\rho t}\operatorname*{ess\,sup}_{x\in(0,L)}|y(t,x)-\hat y(t,x)| < \infty.\tag{8.11}$$
and
$$\tilde\sigma(w(t)) := e^{\frac12\int_0^x a(t,\xi)\,d\xi}\,\sigma\Big(t,x,\,e^{-\frac12\int_0^x a(t,\xi)\,d\xi}\,w(t,x)+\hat y(t,x)\Big)-e^{\frac12\int_0^x a(t,\xi)\,d\xi}\,\sigma\big(t,x,\hat y(t,x)\big).\tag{8.14}$$
It is easily seen from (8.2) and (8.3) that we may find some constant c2 > 0 such that
Now we lift the boundary conditions into Eq. (8.12), obtaining thereby an internal control-type problem. As in (7.49), we find that (8.12) is equivalent to
$$\partial_t w(t) = -Aw(t)+\sum_{i=1}^N v_i(w(t))(\tilde A+\gamma_i)D_{\gamma_i}-2\sum_{i,j=1}^N \mu_j v_i(w(t))\langle D_{\gamma_i},\varphi_j\rangle\varphi_j+\Gamma(w(t));\quad w(0)=w_o,\tag{8.16}$$
where
$$\Gamma(w) := dw+\tilde\sigma(w).$$
$$\partial_t z(t) = -Az(t)+\sum_{i=1}^N v_i(z(t))(\tilde A+\gamma_i)D_{\gamma_i}-2\sum_{i,j=1}^N \mu_j v_i(z(t))\langle D_{\gamma_i},\varphi_j\rangle\varphi_j;\quad z(0)=z_o,\tag{8.18}$$
$$\|w\|_{1,N} := \operatorname*{ess\,sup}_{t>0}\ \operatorname*{ess\,sup}_{x\in(0,L)}\,e^{Nt}|w(t,x)|.$$
Then by Lemma 8.1, we write the solution of (8.16) in a mild formulation via the kernel $p$, i.e.,
$$w(t,x)=\int_0^L p(t,x,\xi)w_o(\xi)\,d\xi+\int_0^t\!\!\int_0^L p(t-s,x,\xi)\,\Gamma(w(s))\,d\xi\,ds.\tag{8.20}$$
Then we have
$$|w(t,x)| \le \int_0^L |p(t,x,\xi)|\,|w_o(\xi)|\,d\xi+\int_0^t\!\!\int_0^L |p(t-s,x,\xi)|\,|\Gamma(w(s))|\,d\xi\,ds$$
(using (8.19) in the first term and (8.17) in the second one)
$$\begin{aligned}
&\le Ce^{-Nt}\operatorname*{ess\,sup}_{x\in(0,L)}|w_o(x)|+\int_0^t\!\!\int_0^L e^{N(t-s)}|p(t-s,x,\xi)|\,e^{-N(t-s)}c_3(1+L_\sigma)|w(s)|\,d\xi\,ds\\
&\le Ce^{-Nt}\operatorname*{ess\,sup}_{x\in(0,L)}|w_o(x)|+c_3(1+L_\sigma)\|w\|_{1,N}\,e^{-Nt}\int_0^\infty\!\!\int_0^L e^{Nt}|p(t,x,\xi)|\,d\xi\,dt,
\end{aligned}$$
and hence
$$\|w\|_{1,N} \le C\operatorname*{ess\,sup}_{x\in(0,L)}|w_o(x)|+Cc_3(1+L_\sigma)\frac{1}{N^\theta}\|w\|_{1,N}.$$
Therefore, if we choose $N$ large enough that $Cc_3(1+L_\sigma)\frac{1}{N^\theta}<1$ and $N>\rho$, we obtain that
$$\|w\|_{1,\rho} \le \|w\|_{1,N} < \infty.\tag{8.21}$$
Remark 8.1 In comparison with the proof of Theorem 7.4, here we did not estimate the $L^2$-norm of the solution (using Parseval's identity), but instead we estimated the $L^\infty$-norm. That is why, in comparison to Sect. 7.2, we had to change the values of the $\gamma_k$, $k=1,2,\dots,N$, in order to obtain relation (8.19). This suggests that the various ways of choosing the $\gamma_k$ also play an important role, allowing one to solve stabilization problems for different equations in different frameworks.
where
$$v_k(w) := A\left\langle \begin{pmatrix} \langle w,\varphi_1\rangle\\ \langle w,\varphi_2\rangle\\ \vdots\\ \langle w,\varphi_N\rangle \end{pmatrix},\ \begin{pmatrix} \dfrac{\varphi_1(0)}{\gamma_k-\mu_1}\\[4pt] \dfrac{\varphi_2(0)}{\gamma_k-\mu_2}\\ \vdots\\ \dfrac{\varphi_N(0)}{\gamma_k-\mu_N} \end{pmatrix} \right\rangle_N.\tag{8.25}$$
In the proof of Theorem 8.1, we have shown that the feedback v given by (8.24)
ensures the exponential stability of (8.23).
When trying to apply this theoretical result in practice, things may become difficult
to implement, since the feedback law (8.24) requires full state knowledge, while in
practice, only measurements at the end x = L are available.
8.2 The Stabilization Result and Applications 177
Based on the ideas from Krstic [75], we propose the following observer for system
(8.23)–(8.25):
$$\begin{cases}
\partial_t\hat w(t,x)=\hat w_{xx}(t,x)+d(t,x)\hat w(t,x)+K_1(t,x)\big[w(t,L)-\hat w(t,L)\big], & 0<x<L,\ t>0,\\
\hat w_x(t,0)=v(\hat w(t)),\quad \hat w_x(t,L)=K_{10}(t)\big[w(t,L)-\hat w(t,L)\big], & t\ge0,\\
\hat w(0,x)=w_0(x), & 0\le x\le L.
\end{cases}\tag{8.26}$$
Here $K_1$, $K_{10}$ are output injection functions. Observer (8.26) is in the standard form of a copy of the system plus injection of the output estimation error. This form is usually used in the finite-dimensional case, in which observers of the form
$$\frac{d}{dt}\hat X = A\hat X+Bu+L(Y-C\hat X)$$
are constructed for plants
$$\frac{d}{dt}X = AX+Bu,\qquad Y=CX.$$
This standard form allows us to pursue duality between the observer and the controller
design, that is, to find the observer gain function using the solution to the stabilization
problem we studied in the previous section. This can be put in connection with the
way duality is used to find the gains of a Luenberger observer based on the pole
placement control algorithm, or the way duality is used to construct Kalman filters
based on the LQR design.
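A minimal finite-dimensional sketch of the observer structure just described follows. The plant matrices and the injection gain are invented for the example (the gain was placed by hand via the characteristic polynomial, not computed from any formula in the book):

```python
import numpy as np

# Finite-dimensional sketch of the duality described above: a plant
#   dX/dt = A X + B u,  Y = C X,
# and a Luenberger observer
#   dXh/dt = A Xh + B u + L (Y - C Xh).
# L is chosen by hand so that eig(A - L C) = {-3, -4} (pole placement).
A = np.array([[0.0, 1.0],
              [2.0, 0.0]])            # unstable plant: eigenvalues +/- sqrt(2)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])            # only the first component is measured
L = np.array([[7.0], [14.0]])         # char. poly of A - L C: s^2 + 7 s + 12

dt, steps = 1e-3, 8000
X = np.array([[1.0], [0.0]])          # true state
Xh = np.zeros((2, 1))                 # observer state, deliberately wrong guess
u = np.zeros((1, 1))
for _ in range(steps):
    Y = C @ X                         # the available measurement
    X = X + dt * (A @ X + B @ u)
    Xh = Xh + dt * (A @ Xh + B @ u + L @ (Y - C @ Xh))

err = np.linalg.norm(X - Xh)
# the plant has blown up, yet the estimation error has decayed
```

The error $e=X-\hat X$ obeys $\dot e=(A-LC)e$, so it decays at the placed poles no matter how unstable the plant is; this separation is exactly what makes the boundary observer (8.26) useful in the loop.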
In practice, things go as follows: once the measurements at $x=L$ are available, one inserts them into the observer equation (8.26) and numerically computes the solution $\hat w$ and, at the same time, $v(\hat w)$. With this $v(\hat w)$ plugged into the plant equation (8.23) instead of $v(w)$, one expects that the corresponding solution of (8.23) will go exponentially fast to zero.
Keeping in mind that in (8.23), $v(w)$ is replaced by $v(\hat w)$, we deduce that the observer error
$$\tilde w(t,x) := w(t,x)-\hat w(t,x)$$
satisfies equation (8.27). We then look for a kernel transformation
$$\tilde w(t,x) = \tilde z(t,x)-\int_x^L K(t,x,\xi)\tilde z(t,\xi)\,d\xi$$
that transforms (8.27) into the exponentially stable (for $c>0$) system
$$\begin{cases}
\partial_t\tilde z(t,x)=\tilde z_{xx}(t,x)-c\,\tilde z(t,x), & x\in(0,L),\ t>0,\\
\tilde z_x(0)=\tilde z_x(L)=0.
\end{cases}\tag{8.28}$$
The free parameter c can be used to set the desired observer convergence speed.
Straightforward computations involving (8.28) give
$$\begin{aligned}
\partial_t\tilde w(t,x) &= \partial_t\tilde z(t,x)-\int_x^L \partial_t K(t,x,\xi)\tilde z(t,\xi)\,d\xi-\int_x^L K(t,x,\xi)\partial_t\tilde z(t,\xi)\,d\xi\\
&= \partial_t\tilde z(t,x)-\int_x^L \partial_t K(t,x,\xi)\tilde z(t,\xi)\,d\xi-\int_x^L K(t,x,\xi)\tilde z_{\xi\xi}(t,\xi)\,d\xi+c\int_x^L K(t,x,\xi)\tilde z(t,\xi)\,d\xi\\
&= \partial_t\tilde z(t,x)-\int_x^L \partial_t K(t,x,\xi)\tilde z(t,\xi)\,d\xi+K(t,x,x)\tilde z_x(t,x)\\
&\quad -K_\xi(t,x,x)\tilde z(t,x)+K_\xi(t,x,L)\tilde z(t,L)-\int_x^L K_{\xi\xi}(t,x,\xi)\tilde z(t,\xi)\,d\xi\\
&\quad +c\int_x^L K(t,x,\xi)\tilde z(t,\xi)\,d\xi.
\end{aligned}\tag{8.29}$$
Likewise, we have
$$\tilde w_x(t,x) = \tilde z_x(t,x)+K(t,x,x)\tilde z(t,x)-\int_x^L \frac{d}{dx}K(t,x,\xi)\,\tilde z(t,\xi)\,d\xi.$$
Recall that
$$\frac{d}{dx}K(t,0,\xi) = 0,$$
where we have used that $\tilde w_x(t,0)=\tilde z_x(t,0)=0$. Now put $x=L$ and recall that $\tilde w_x(t,L)=-K_{10}(t)\tilde w(t,L)$ and that $\tilde z_x(t,L)=0$. We arrive at
$$K(t,L,L) = -K_{10}(t).$$
$$\tau = x+\xi,\qquad \eta = x-\xi,$$
and define
$$G(t,\tau,\eta) := K(t,x,\xi) = K\Big(t,\frac{\tau+\eta}{2},\frac{\tau-\eta}{2}\Big).$$
So we are able to say that Eqs. (8.28) and (8.27) are equivalent. This implies the
asymptotic exponential decay of the error w̃, from which we conclude that the
observer (8.26) asymptotically exponentially approximates the plant equation (8.23).
8.2.2 Applications
Now we will consider a stabilization problem associated with the following SPDE:
$$\begin{cases}
dY(t,x)=Y_{xx}(t,x)\,dt+f(t,x)Y_x(t,x)\,dt+h(t)Y(t,x)\,d\beta(t,x), & 0<x<L,\ t>0,\\
Y_x(t,0)=u(t):=e^{h(t)\beta(t,0)}\,v\big(e^{-h(t)\beta(t)}Y(t)\big), & t>0,\\
Y_x(t,L)=0, & t>0,\\
Y(0,x)=Y_o(x) & \text{for } x\in[0,L].
\end{cases}\tag{8.38}$$
Here $f=-2h\beta_x$, where $\beta$ is a Brownian motion in time and colored in space such that $\beta_x(t,0)=\beta_x(t,L)=0$; and $h$ is such that
$$|h(t)| \le \frac{C}{\sqrt t},\quad t>0.\tag{8.39}$$
(For the precise formulation of the solution to (8.38), see Chap. 7.)
A fair question would be why this stochastic PDE is related to the deterministic PDE (8.1), studied above in this chapter. The reason is that in order to study the boundary stabilization of (8.38), we will reduce it by a rescaling procedure (as in the third section of Chap. 7) to a random parabolic equation and apply to this equation the stabilization result established in Theorem 8.1. Namely, by the substitution
$$w(t) := e^{-h(t)\beta(t)}Y(t),$$
and doing computations similar to those in [22], we obtain that $w$ is the solution to the following random deterministic equation:
$$\begin{cases}
\partial_t w(t,x) = e^{-h(t)\beta(t,x)}\big(e^{h(t)\beta(t,x)}w(t,x)\big)_{xx}+f(t,x)\,e^{-h(t)\beta(t,x)}\big(e^{h(t)\beta(t,x)}w(t,x)\big)_x\\
\qquad\qquad -\Big(\dfrac{d}{dt}\big(h(t)\beta(t,x)\big)+\dfrac12 h^2(t)\Big)w(t,x), \quad \mathbb P\text{-a.s.},\ t>0,\ x\in(0,L),\\
w_x(t,0)=v(w(t)),\quad w_x(t,L)=0, \quad \mathbb P\text{-a.s.},\ t>0,\\
w(0)=y_o,
\end{cases}\tag{8.40}$$
where in order to recover the boundary conditions, we used that $\beta_x(t,0)=\beta_x(t,L)=0$. Or equivalently,
$$\begin{cases}
\partial_t w(t,x)=w_{xx}(t,x)+q(t,x)w(t,x), \quad \mathbb P\text{-a.s.},\ t>0,\ x\in(0,L),\\
w_x(t,0)=v(w(t)),\quad w_x(t,L)=0, \quad \mathbb P\text{-a.s.},\ t>0,\\
w(0)=y_o,
\end{cases}\tag{8.41}$$
where
$$q(t,x) := h(t)\beta_{xx}(t,x)+\big(h(t)\beta_x(t,x)\big)^2-\frac{d}{dt}\big(h(t)\beta(t,x)\big)-\frac12 h^2(t)-2h(t)\big(\beta_x(t,x)\big)^2.$$
It is clear that Eq. (8.41) is a particular case of Eq. (8.1). In fact, in applying a
rescaling argument to reduce a stochastic PDE to a deterministic one, the latter will
usually have time-dependent coefficients (as (8.41) does). Consequently, the problem
of stabilization of an SPDE can be solved via the stabilization to trajectories for
some deterministic PDE. However, for the moment, this is not the best approach
for this problem, since in the literature, there are very few results on unsteady-state
stabilization. On the other hand, since we have obtained in Theorem 8.1 a result
concerning the stabilization of trajectories for a semilinear heat equation, it is clear
that we may immediately obtain a stabilization result for its stochastic version as
well. In fact, we can prove the following result.
Theorem 8.2 Let $\rho>0$ be arbitrary but fixed. For $N\in\mathbb N$ large enough, the solution $Y$ to the equation
$$\begin{cases}
dY(t,x)=Y_{xx}(t,x)\,dt+f(t,x)Y_x(t,x)\,dt+h(t)Y(t,x)\,d\beta(t,x), & 0<x<L,\ t>0,\\
Y_x(t,0)=u(t):=e^{h(t)\beta(t,0)}\,v\big(e^{-h(t)\beta(t)}Y(t)\big), & t>0,\\
Y_x(t,L)=0, & t>0,\\
Y(0,x)=Y_o(x) & \text{for } x\in[0,L],
\end{cases}\tag{8.42}$$
satisfies
$$-\infty < \limsup_{t\to\infty}\frac{1}{t}\log|Y(t,x)| < -\rho,\quad \forall x\in(0,L),\ \mathbb P\text{-a.e.},\tag{8.43}$$
whence for
$$\Omega_r := \Big\{\sup_{t\ge0}\Big(\frac{|\beta_{xx}(t)|}{\sqrt t}+\frac{(\beta_x(t))^2}{t}+\frac{|\beta(t)|}{\sqrt t}\Big) \le r\Big\},$$
8.3 Comments
As already mentioned and discussed, regarding the nonstationary case, there are very few results, most of them treating the internal stabilization problem only (see [4, 23, 77]), while Rodrigues [122] deals with the boundary case. The reason for this sparse literature is that the techniques developed for the stationary case seem not to work for the stabilization of trajectories.
In the work Barbu et al. [23], the Foias–Prodi property for parabolic PDEs is used.
Roughly speaking, this property means that if the projections of two solutions to the
unstable modes converge to each other as time goes to infinity, then the difference
between these solutions goes to zero. However, it turns out that the conclusion remains
true if the projections are close to each other at times proportional to a fixed constant.
So the main idea in [23] was to design a control that ensures equality at integer times
for the projections of two solutions to the unstable modes. More precisely, assume that
for a sufficiently large integer N , one manages to construct an (internal) control such
that once plugged into (8.5), the corresponding solution to the closed-loop equation
(8.5) satisfies PN Y (1) = 0, where PN is the projection of L 2 on the space spanned by
the first N eigenfunctions of the Laplacian in (0, L). Then using Poincaré’s inequality
and the regularizing property of the resolving operator for (8.5), one gets
$$\begin{aligned}
\|Y(1)\| &= \|(I-P_N)Y(1)\| \le C_1\,\mu_N^{-\frac12}\,\|Y(1)\|_{H^1(0,L)}\\
&\le C_2\,\mu_N^{-\frac12}\big(\|Y_o\|+\|U\|_{L^2((0,1);X)}\big) \le C_3\,\mu_N^{-\frac12}\,\|Y_o\|,
\end{aligned}\tag{8.46}$$
where $(\mu_j)_j$ denotes the increasing sequence of eigenvalues of the Laplace operator and $C_i$, $i=1,2,3$, are constants not depending on $N$. It is clear that the fact that $C_3$ is independent of $N$ is of great importance, and in [23], this is shown based on a truncated observability inequality. It follows from (8.46) that provided that $N$ is sufficiently large, one has
$$\|Y(1)\| \le e^{-\mu}\|Y_0\|.$$
Iterating this procedure, one gets an exponentially decaying solution. Then via the dynamic programming principle, a Riccati feedback stabilizing controller is designed. As noticed above, Riccati-based controls are not the best ones from a practical point of view, since the algebraic Riccati equations require a large amount of computation. Here we propose simple proportional-type controllers instead. The results in this chapter were published in the author's work [107], except those concerning the observer design, which are new.
Of course, one may try to stabilize Eq. (8.1) by a Dirichlet boundary control, in the
same manner in which we did so for the Neumann boundary case. Namely, consider
the following problem:
$$\begin{cases}
\partial_t y(t,x)=y_{xx}(t,x)+a(t,x)y_x(t,x)+b(t,x)y(t,x)+\sigma(t,x,y(t,x)), & 0<x<L,\ t>0,\\
y(t,0)=u(t),\quad y(t,L)=0, & t\ge0,\\
y(0,x)=y_o(x) & \text{for } x\in[0,L].
\end{cases}\tag{8.47}$$
This situation corresponds to Examples 2.3–2.5. So the feedback $v$, in this case, should look like
$$v(w) := v_1(w)+\dots+v_N(w).\tag{8.48}$$
In this case,
$$\mu_j = \frac{j^2\pi^2}{L^2}\qquad\text{and}\qquad \varphi_j = \sqrt{\frac{2}{L}}\sin\frac{j\pi x}{L},\quad j=1,2,3,\dots,$$
respectively.
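The eigenpairs above are easy to verify numerically; the sketch below (the length `L_len` and the grid size `n` are arbitrary choices, not from the book) compares the lowest eigenvalues of a standard central-difference Dirichlet Laplacian with $\mu_j=j^2\pi^2/L^2$:

```python
import numpy as np

# Sanity check of the Dirichlet eigenpairs quoted above: discretize
# -d^2/dx^2 on (0, L) with homogeneous Dirichlet boundary conditions by
# central finite differences and compare the lowest eigenvalues with
# mu_j = j^2 pi^2 / L^2.
L_len, n = 2.0, 400
h = L_len / (n + 1)
T = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2            # tridiagonal stencil
mu_num = np.linalg.eigvalsh(T)[:3]                    # three lowest modes
mu_exact = np.array([(j * np.pi / L_len)**2 for j in (1, 2, 3)])
```

The discrete eigenvalues agree with $\mu_j$ to $O(h^2)$ for the low modes, which is the regime the spectral feedback design actually uses.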
Now let us compare the form of the $v_k$ in the Neumann case given in (8.9) with the Dirichlet case from (8.49), the first of which involves the quantities
$$\varphi_j(0) = \sqrt{\frac{2}{L}}\cos\frac{(j-1)\pi\cdot 0}{L} = \sqrt{\frac{2}{L}}.$$
So the Dirichlet case gains an extra factor of $N$. This is bad. Moreover, if we look at the Gram matrix $B$, given now as
$$B := \frac{2}{L}\begin{pmatrix}
\sqrt{\mu_1\mu_1} & \sqrt{\mu_1\mu_2} & \cdots & \sqrt{\mu_1\mu_N}\\
\sqrt{\mu_2\mu_1} & \sqrt{\mu_2\mu_2} & \cdots & \sqrt{\mu_2\mu_N}\\
\vdots & & & \vdots\\
\sqrt{\mu_N\mu_1} & \cdots & \sqrt{\mu_N\mu_{N-1}} & \sqrt{\mu_N\mu_N}
\end{pmatrix},$$
and argue as in (7.43)–(7.47), we may show that the best we can get is
$$\frac{1}{\lambda_1(A)} = \|B_1+\dots+B_N\| \le C\,\frac{N^2}{N^{2\alpha-1}}.$$
$$|q_{ij}(t)| \le C\,\frac{1}{N^{\alpha-\frac52}}\,e^{-cN^2 t},\quad t\ge0.$$
That is, we gain an extra $N^2$ in the estimates. Overall, in the final estimates we will have an extra $N^3$. This is too much to be able to manipulate the powers of $N$ to obtain relations such as (7.70)–(7.71).
In conclusion, it is not straightforward to pass from the Neumann case to the
Dirichlet one. Further subtle estimates of the quantities and properties of the feedback
law must be deduced.
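The "extra $N$" can be made concrete for the explicit eigenfunctions above: Neumann actuation involves the bounded traces $\varphi_j(0)=\sqrt{2/L}$, while Dirichlet actuation involves normal derivatives of the sine eigenfunctions, which grow linearly in $j$. This is a standard computation; the precise quantities entering (8.49) are not reproduced in this excerpt.

```python
import numpy as np

# Neumann traces phi_j(0) = sqrt(2/L) are constant in j, whereas for the
# Dirichlet sines phi_j(x) = sqrt(2/L) sin(j pi x / L) the relevant quantity
# is the slope phi_j'(0) = sqrt(2/L) * j * pi / L, which grows like j.
L_len = 2.0
j = np.arange(1, 11, dtype=float)
neumann_trace = np.full(10, np.sqrt(2.0 / L_len))
dirichlet_slope = np.sqrt(2.0 / L_len) * j * np.pi / L_len
growth_neumann = neumann_trace[-1] / neumann_trace[0]        # no growth
growth_dirichlet = dirichlet_slope[-1] / dirichlet_slope[0]  # grows like j
```

With $N$ actuated modes, the Dirichlet quantities are thus larger by a factor of order $N$, which is exactly the obstruction discussed in the paragraph above.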
Chapter 9
Internal Stabilization of Abstract
Parabolic Systems
In this chapter, we will reconsider the abstract parabolic equation framework from
Chap. 2. This time, we will design an internal stabilizing proportional-type actuator.
As in the boundary case, the feedback laws are of finite-dimensional nature, given in a simple form, and easy to manipulate from the computational point of view. And since we formulate the results in an abstract form, they can be applied to the stabilization problem for any concrete model satisfying the imposed abstract hypotheses.
For the reader’s convenience, we will restate the abstract formulation from Chap. 2.
Let O be an open bounded domain in Rd , d ∈ N∗ , with smooth boundary ∂O. We
denote by · the norm in L2 (O). Let A be a closed and densely defined linear
differential operator on L2 (O), with domain D(A); and let F0 : D(F0 ) ⊂ D(A) →
L2 (O) be a nonlinear differential operator. We assume that
(1) −A generates a C0 -analytic semigroup on L2 (O).
(2) For all $y,\hat y\in D(A)$, there exists the limit
$$F_0'(\hat y)(y) := \lim_{\lambda\to0}\frac{1}{\lambda}\big(F_0(\hat y+\lambda y)-F_0(\hat y)\big)$$
in $L^2(O)$. Moreover, $F_0(0)=0$, and for some $\alpha\in(0,1)$ and $C>0$, we have
Besides this, given $\rho>0$, there is a finite number $N$ of eigenvalues such that
$$\Re\lambda_j < \rho,\quad j=1,2,\dots,N,\qquad\text{while}\qquad \Re\lambda_j \ge \rho,\quad j=N+1,N+2,\dots\tag{9.3}$$
We assume that
(4) Each unstable eigenvalue λj , j = 1, 2, . . . , N , is semisimple.
It is easily seen that the finite part of the spectrum $\{\lambda_j\}_{j=1}^N$ can be separated from the rest of the spectrum by a rectifiable curve $\Gamma_N$ in the complex plane $\mathbb C$. Set $X_u$ to be the linear space generated by the eigenfunctions $\{\varphi_j\}_{j=1}^N$, that is,
$$X_u := \operatorname{lin\,span}\{\varphi_j\}_{j=1}^N.$$
The operator
$$P_N := \frac{1}{2\pi i}\int_{\Gamma_N}(\lambda I-A)^{-1}\,d\lambda$$
is known as the algebraic projection of $L^2(O)$ onto $X_u$. It is easy to see that the operator
$$A_u := P_N A$$
maps the space $X_u$ into itself and $\sigma(A_u)=\{\lambda_j\}_{j=1}^N$. More precisely, $A_u: X_u\to X_u$ is finite-dimensional and can be represented by an $N\times N$ matrix.
If $A^*$ is the dual operator of $A$, then its eigenvalues are precisely $\{\bar\lambda_j\}_{j\in\mathbb N^*}$, and the corresponding eigenfunctions $\varphi_j^*$ satisfy
$$A^*\varphi_j^* = \bar\lambda_j\,\varphi_j^*,\quad j\in\mathbb N^*.$$
9.1 Presentation of the Problem 189
while $X_N^* = \operatorname{lin\,span}\{\varphi_j^*\}_{j=1}^N = P_N^*\big(L^2(O)\big)$. Via the Schmidt orthogonalization procedure, it follows by hypothesis (4) that one can find a biorthogonal system $\{\varphi_j\}_{j=1}^N$, $\{\varphi_j^*\}_{j=1}^N$ of eigenfunctions of $A$ and $A^*$, corresponding to the eigenvalues $\{\lambda_j\}_{j=1}^N$.
$$\frac{dy}{dt}+Ay+F_0(y)=0,\ t>0;\qquad y(0)=y_o,\tag{9.6}$$
$$A\hat y+F_0(\hat y)=0,$$
where
$$G(z) := F_0(z+\hat y)-F_0(\hat y)-F_0'(\hat y)(z).\tag{9.8}$$
$$\big\langle\Phi_i,\,1_{O_0}\varphi_j^*\big\rangle = \delta_{ij},\quad i,j=1,2,\dots,N.$$
$$\Phi_i = \sum_{k=1}^N \alpha_{ki}\varphi_k^*,\quad i=1,2,\dots,N,$$
$$\sum_{k=1}^N \alpha_{ki}\big\langle\varphi_k^*,\,1_{O_0}\varphi_j^*\big\rangle = \delta_{ij},\quad i,j=1,\dots,N.$$
One may show that once $u$, given by (9.10), is plugged into Eq. (9.9), we obtain its stability. The method of proof is classical and has been used frequently throughout this book. Briefly, one first considers the linearized part and splits the system in two: the stable and the unstable part. Concerning the finite-dimensional unstable part, simple computations lead to
$$\frac{d}{dt}z_i+\lambda_i z_i = -\eta z_i,\quad i=1,2,\dots,N,$$
where $z_i = \langle z,\varphi_i^*\rangle$, $i=1,2,\dots,N$. Then the stability of the linearized part follows immediately (for details, see [10, Theorem 2.3]). Then via a fixed-point argument, local stability can be deduced as well (see [10, Sect. 2.5]).
The second feedback law, which we propose for stabilization, involves the sign function, and it reads as
$$u(t) := -\eta\sum_{j=1}^N \operatorname{sign}\big(\langle P_N z(t),\varphi_j^*\rangle\big)P_N\Phi_j,\tag{9.11}$$
$$\Phi_j := \sum_{k=1}^N \alpha_{jk}\varphi_k^*,\quad j=1,\dots,N,$$
with
$$\sum_{k=1}^N \alpha_{ik}\big\langle\varphi_k^*,\varphi_j^*\big\rangle_0 = \delta_{ij},\quad i,j=1,\dots,N.\tag{9.13}$$
k=1
As a matter of fact, the only difference between the feedback laws (9.10) and
(9.11) is the presence of the sign function. The reason to introduce this function is,
as we will see below, that it allows us to obtain a stronger result, namely, that in finite
time, the solution z belongs to the stable space Xs .
Theorem 9.1 below amounts to saying that for η sufficiently large, the feedback
controller (9.11) is exponentially stabilizing, with exponent −γ , in the linearized
system, and it steers zo into Xs in a finite time T > 0.
Theorem 9.1 Let $\rho>0$ and $z_o\in L^2(O)$ be such that $\|z_o\|\le\rho$. Then the closed-loop system
$$\begin{cases}
\dfrac{dz}{dt}+Az+\eta\displaystyle\sum_{j=1}^N \operatorname{sign}\big(\langle P_N z,\varphi_j^*\rangle\big)P_N(m\Phi_j)=0, & t\ge0,\\
z(0)=z_o,
\end{cases}\tag{9.14}$$
has a unique solution $z$ such that for $T>0$ arbitrary but fixed and $\eta$ such that
$$\eta \ge \rho\max_{1\le j\le N}\frac{\Re\lambda_j}{e^{\Re\lambda_j T}-1},\tag{9.15}$$
we have
$$P_N z(t)=0,\quad\forall t\ge T,\tag{9.16}$$
and
$$\|z(t)\| \le Ce^{-\gamma t}\|z_0\|,\quad\forall t\ge T.\tag{9.17}$$
Roughly speaking, this means that for each $N$, there is a controller of the form (9.11) that steers $z_o$ into $X_s$.
Proof We apply the projector $P_N$ to the system (9.14) and obtain that
$$\begin{cases}
\dfrac{dz_u}{dt}+A_u z_u+\eta\displaystyle\sum_{i=1}^N \operatorname{sign}\big(\langle z_u,\varphi_i^*\rangle\big)P_N(m\Phi_i)=0, & t\ge0,\\
z_u(0)=P_N z_o,
\end{cases}\tag{9.18}$$
where $z_u := P_N z$. If we decompose $z_u$ as $z_u=\sum_{j=1}^N z_j\varphi_j$, introduce it into (9.18), and scalar multiply Eq. (9.18) by $\varphi_j^*$, we get that
$$\begin{cases}
\dfrac{dz_j}{dt}+\lambda_j z_j+\eta\operatorname{sign} z_j = 0, & \forall t\ge0,\\
z_j(0)=z_j^o,
\end{cases}\tag{9.19}$$
for all $j=1,\dots,N$. Here we have used the relations (9.5) and (9.13).
It should be said that the multivalued ordinary differential system (9.19) is well posed, because the multivalued function $z\to\operatorname{sign}z$ is maximal monotone on $\mathbb C$. Hence there is a unique absolutely continuous solution $\{z_j\}_{j=1}^N$ to the system (9.19). Moreover, if we take into account the relation
$$\operatorname{sign}z\cdot\bar z = |z|,\quad\forall z\in\mathbb C,$$
we obtain
$$\frac12\frac{d}{dt}|z_j(t)|^2+\Re\lambda_j\,|z_j(t)|^2+\eta|z_j(t)| = 0,\quad t\ge0.$$
This yields
$$\frac{d}{dt}|z_j(t)|+\Re\lambda_j\,|z_j(t)|+\eta = 0,\quad t\ge0,$$
and therefore
$$e^{\Re\lambda_j t}|z_j(t)|-|z_j(0)|+\frac{\eta}{\Re\lambda_j}\big(e^{\Re\lambda_j t}-1\big) = 0,\quad\forall t\ge0.\tag{9.20}$$
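The finite-time extinction encoded in (9.20) is easy to check numerically on a single mode. The sketch below uses illustrative values for $\Re\lambda_j$ and the data, and takes $\eta$ exactly at the threshold of (9.15):

```python
import numpy as np

# One unstable mode under the sign feedback, cf. (9.19)-(9.20):
#   dz/dt + lam z + eta sign(z) = 0,   lam = Re(lambda_j).
# Integrating |z| gives the hitting time t* = (1/lam) log(1 + lam |z0| / eta),
# so the threshold eta = rho lam / (e^{lam T} - 1) of (9.15) forces t* = T,
# even for an unstable mode (lam < 0).
lam, rho, T = -0.5, 1.0, 1.0
eta = rho * lam / (np.exp(lam * T) - 1.0)    # threshold value from (9.15)

z, dt, t, t_hit = rho, 1e-5, 0.0, None
while t < 2.0 * T:
    step = dt * (-lam * z - eta * np.sign(z))
    if z != 0.0 and abs(step) >= abs(z):     # crossing zero: stay there
        z = 0.0
        if t_hit is None:
            t_hit = t
    else:
        z += step
    t += dt
# t_hit is close to T = 1 when eta sits exactly at the threshold
```

Any larger $\eta$ only makes the hitting time earlier, which is the content of (9.16).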
Next, we apply to the system (9.14) the projector $I-P_N$, and get that
$$\begin{cases}
\dfrac{dz_s}{dt}+A_s z_s = 0, & t\ge0,\\
z_s(0)=(I-P_N)z_o,
\end{cases}\tag{9.21}$$
Remark 9.2 As mentioned, the results obtained above hold as well in the case in which the eigenvalues are not necessarily semisimple. Indeed, let us assume, for example, that the matrix $\big(\langle A\varphi_i,\varphi_j\rangle\big)_{i,j=1}^N$ has the form
$$\big(\langle A\varphi_i,\varphi_j\rangle\big)_{i,j=1}^N = \begin{pmatrix}
\lambda_1 & 0 & 0 & 0 & \dots & 0\\
1 & \lambda_1 & 0 & 0 & \dots & 0\\
0 & 0 & \lambda_3 & 0 & \dots & 0\\
\vdots & & & & & \vdots\\
0 & 0 & 0 & 0 & \dots & \lambda_N
\end{pmatrix}.\tag{9.22}$$
$$e^{\Re\lambda_1 t}|z_1(t)| \le |z_1(0)| \le \rho.\tag{9.25}$$
We multiply the second equation of (9.23) by $\bar z_2$ and take the real part of the result to obtain that
$$\frac12\frac{d}{dt}|z_2|^2+\Re\lambda_1\,|z_2|^2+\eta|z_2| = -\Re(z_1\bar z_2).$$
This yields
$$e^{\Re\lambda_1 t}|z_2(t)|-|z_2(0)|+\eta\,\frac{e^{\Re\lambda_1 t}-1}{\Re\lambda_1} \le \int_0^t e^{\Re\lambda_1\tau}|z_1(\tau)|\,d\tau.\tag{9.26}$$
$$z_j(t)=0,\quad t\ge T,\ j=1,\dots,N,$$
if
$$\eta \ge \rho(1+T)\max_{1\le j\le N}\frac{\Re\lambda_j}{e^{\Re\lambda_j T}-1}.$$
Now let us treat another case. Let us assume that
$$\big(\langle A\varphi_i,\varphi_j\rangle\big)_{i,j=1}^N = \begin{pmatrix}
\lambda_1 & 1 & 0 & 0 & \dots & 0\\
1 & \lambda_1 & 0 & 0 & \dots & 0\\
0 & 0 & \lambda_3 & 0 & \dots & 0\\
\vdots & & & & & \vdots\\
0 & 0 & 0 & 0 & \dots & \lambda_N
\end{pmatrix}.\tag{9.27}$$
$$\frac{d}{dt}\big(|z_1|+|z_2|\big)+\big(\Re\lambda_1-1\big)\big(|z_1|+|z_2|\big)+2\eta \le 0.\tag{9.29}$$
It is easy to observe that if
$$\eta \ge \rho\max\bigg\{\frac12\,\frac{\Re\lambda_1-1}{e^{(\Re\lambda_1-1)T}-1};\ \max_{3\le j\le N}\frac{\Re\lambda_j}{e^{\Re\lambda_j T}-1}\bigg\},$$
then
$$z_j(t)=0,\quad t\ge T,\ j=1,\dots,N,$$
as desired.
We conclude that when the unstable eigenvalues are not necessarily semisimple,
one can choose η > 0 in an appropriate way, sufficiently large, to obtain the same
results as in Theorem 9.1.
The main result of this section is the next theorem, which amounts to saying that the
feedback controller (9.11) exponentially stabilizes the nonlinear system (9.9), and
just as for the linear equation, it steers zo into Xs in finite time T > 0.
Theorem 9.2 Let $T,\rho>0$ be sufficiently small. For each $z_o\in W$ such that $\|z_o\|_W\le\rho$, the problem
$$\begin{cases}
\dfrac{dz}{dt}+Az+\eta\displaystyle\sum_{j=1}^N \operatorname{sign}\big(\langle P_N z,\varphi_j^*\rangle\big)P_N(m\Phi_j)+Gz = 0, & t\ge0,\\
z(0)=z_o,
\end{cases}\tag{9.31}$$
is well posed, with a unique solution $z$, provided that $\eta=\eta(T,\rho)$ is sufficiently large (see (9.32)). Moreover,
$$P_N z(t)=0,\quad\forall t\ge T,\tag{9.33}$$
and
$$\|z(t)\| \le Ce^{-\beta t}\|z_0\|,\quad\forall t\ge T.\tag{9.34}$$
Proof For $r\le1$, let us introduce the ball of radius $r$ centered at the origin of the space $L^2(0,\infty;Z)$:
$$S(0,r) := \bigg\{f\in L^2(0,\infty;Z):\ \|f\|_{L^2(0,\infty;Z)} = \Big(\int_0^\infty \|f(t)\|_Z^2\,dt\Big)^{\frac12} \le r\bigg\}.$$
The idea of the proof is as follows: we show that for all $Z\in S(0,r)$, problem (9.35) has a solution $z_Z\in S(0,r)$ for $T$, $\rho$, and $r$ sufficiently small. Moreover, we show that $P_N z_Z(t)=0$, $\forall t\ge T$. Then we denote by $\Gamma$ the operator that associates to $Z$ the solution $z_Z$. In doing so, we get that $\Gamma: S(0,r)\to S(0,r)$ is a contraction on $S(0,r)$ for $T$, $r$ sufficiently small. It follows that there exists a unique solution $z\in S(0,r)$ of (9.31). Next, we show that $z\in C([0,\infty);W)$ and $z(t)\in B(0,b):=\{f\in W: \|f\|_W\le b\}$, $t\ge T$, for some $b>0$, which will imply the claimed exponential decay (9.34).
By Theorem 9.1, one can easily deduce that
$$\int_0^\infty \big\|e^{-A_s t}W\big\|_Z^2\,dt \le c\|W\|_W^2,\quad\forall W\in W.\tag{9.37}$$
Therefore,
$$\|(\mathcal N Z)(t)\|_{L^2(0,\infty;Z)} \le Cr^2,\tag{9.42}$$
$$\|\mathcal N Z_1-\mathcal N Z_2\|^2_{L^2(0,\infty;Z)} \le 4ck^2r^2\,\|Z_1-Z_2\|^2_{L^2(0,\infty;Z)},\quad\forall Z_1,Z_2\in S(0,r).\tag{9.43}$$
Finally, we define
With these key results in hand, we can proceed with the proof.
To prove that there exists a solution to equation (9.35), one can argue as in the proof of Theorem 9.1, using the fact that the function sign is maximal monotone on $\mathbb C$. Next, one may show that this solution remains in $S(0,r)$, for $r$ sufficiently small, and it satisfies (9.33) and (9.34). Then one applies the projector $P_N$ to (9.35) and gets that
$$\frac12\frac{d}{dt}|z_j(t)|^2+\Re\lambda_j\,|z_j(t)|^2+\eta|z_j(t)| = -\Re\big(\langle GZ,\varphi_j^*\rangle\,\bar z_j\big),\quad t\ge0,\tag{9.46}$$
for all $j=1,\dots,N$, where $P_N z=\sum_{j=1}^N z_j\varphi_j$. Next, using the Schwarz inequality, it follows that
$$-\Re\big(\langle GZ,\varphi_j^*\rangle\,\bar z_j\big) \le \big|\langle GZ,\varphi_j^*\rangle\big|\,|z_j| \le \|GZ\|\,\|\varphi_j^*\|\,|z_j|.$$
$$\frac{d}{dt}|z_j(t)|+\Re\lambda_j\,|z_j(t)|+\eta \le \|GZ\|\,\|\varphi_j^*\|,\quad\forall t\ge0.\tag{9.47}$$
$$e^{\Re\lambda_j t}|z_j(t)|+\big(\eta-kr^2\|\varphi_j^*\|\big)\frac{e^{\Re\lambda_j t}-1}{\Re\lambda_j}-|z_j(0)| \le 0,\quad\forall t\ge0,\tag{9.51}$$
for all $j=1,\dots,N$. It is easy to see that if $\eta$ satisfies relation (9.32), we get that $|z_j(t)|=0$, $\forall t\ge T$, for all $j=1,\dots,N$. Moreover, we get also from (9.51) that
$$|z_j(t)| \le e^{-\Re\lambda_j T}\big(kr^2\|\varphi_j^*\|+\rho\big),\quad 0\le t\le T.\tag{9.52}$$
$$hT\sum_{j=1}^N |\lambda_j|\,e^{-2\Re\lambda_j T}\big(kr^2\|\varphi_j^*\|+\rho\big)^2 \le \frac{r^2}{4},\tag{9.53}$$
9.2 Stabilization of the Full Nonlinear Equation (9.9) 199
Thus one can obtain via (9.52), (9.54), and (9.53) that
$$\int_0^\infty \|P_N z(t)\|_Z^2\,dt = \int_0^T \|P_N z(t)\|_Z^2\,dt \le h\int_0^T \sum_{j=1}^N |\lambda_j|\,|z_j(t)|^2\,dt\tag{9.55}$$
$$\le hT\sum_{j=1}^N |\lambda_j|\,e^{-2\Re\lambda_j T}\big(kr^2\|\varphi_j^*\|+\rho\big)^2 \le \frac{r^2}{4}.\tag{9.56}$$
$$\frac{d}{dt}z_s+A_s z_s+(I-P_N)GZ = 0,\quad t\ge0;\qquad z_s(0)=(I-P_N)z_o,\tag{9.57}$$
where $z_s=(I-P_N)z$. Using the variation of constants formula, we have that
$$z_s(t) = e^{-A_s t}(I-P_N)z_o+\int_0^t e^{-A_s(t-\tau)}(I-P_N)GZ(\tau)\,d\tau,\quad t\ge0.\tag{9.58}$$
It is easy to see that by making use of the relations (9.58), (9.38), and (9.44), we have the equality
$$z_s(t) = (\Lambda Z)(t),\quad\forall t\ge0.$$
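The variation-of-constants (Duhamel) formula used in (9.58) can be sanity-checked on a scalar equation; all coefficients below are made up for the illustration:

```python
import numpy as np

# For dz/dt + a z = f(t), z(0) = z0, the mild solution is
#   z(t) = e^{-a t} z0 + int_0^t e^{-a (t - s)} f(s) ds.
# We compare it with direct explicit-Euler time stepping.
a, z0 = 2.0, 1.0
f = lambda s: np.sin(3.0 * s)
t = np.linspace(0.0, 3.0, 3001)
dt = t[1] - t[0]

# Duhamel: e^{-a(t-s)} = e^{-a t} e^{a s}, so the integral is a cumulative sum
duhamel = np.exp(-a * t) * (z0 + np.cumsum(np.exp(a * t) * f(t)) * dt)

z = np.empty_like(t)
z[0] = z0
for k in range(len(t) - 1):                  # direct integration
    z[k + 1] = z[k] + dt * (-a * z[k] + f(t[k]))
gap = np.max(np.abs(duhamel - z))            # both are O(dt) accurate
```

In (9.58) the same identity holds with the scalar $a$ replaced by the stable generator $A_s$ and $f$ by the forcing $-(I-P_N)GZ$.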
$$C(\rho^2+r^4) \le \frac{r^2}{4},\tag{9.60}$$
we get from (9.59) that
$$\|(I-P_N)z\|^2_{L^2(0,\infty;Z)} = \|z_s\|^2_{L^2(0,\infty;Z)} \le \frac{r^2}{4}.\tag{9.61}$$
Finally, we conclude that if $T$, $\rho$, and $r$ are small enough that they satisfy relations (9.53) and (9.60), we have
$$\|z\|^2_{L^2(0,\infty;Z)} = \int_0^\infty \|z(t)\|_Z^2\,dt \le 2\int_0^\infty\Big(\|P_N z(t)\|_Z^2+\|(I-P_N)z(t)\|_Z^2\Big)dt \le 2\Big(\frac{r^2}{4}+\frac{r^2}{4}\Big) = r^2,$$
if we take into account the relations (9.55)–(9.56) and (9.61). This means that the solution $z$ remains in the ball $S(0,r)$. Hence if we denote by $\Gamma$ the operator that associates to $Z$ the corresponding solution $z$ of the system (9.35), we have that $\Gamma$ maps the ball $S(0,r)$ into itself. Therefore, in order to complete the proof, it is enough to show that $\Gamma$ is a contraction on $S(0,r)$. To this end, let $Z_1,Z_2$ be two functions in $S(0,r)$, and $z_1,z_2\in S(0,r)$ the corresponding solutions to the system (9.35). Consequently, $z_1$ and $z_2$ satisfy
$$\begin{cases}
\dfrac{dz_1}{dt}+Az_1+\eta\displaystyle\sum_{j=1}^N \operatorname{sign}\big(\langle P_N z_1,\varphi_j^*\rangle\big)P_N(m\Phi_j) = -GZ_1, & t\ge0,\\
z_1(0)=z_o,
\end{cases}\tag{9.62}$$
and
$$\begin{cases}
\dfrac{dz_2}{dt}+Az_2+\eta\displaystyle\sum_{j=1}^N \operatorname{sign}\big(\langle P_N z_2,\varphi_j^*\rangle\big)P_N(m\Phi_j) = -GZ_2, & t\ge0,\\
z_2(0)=z_o.
\end{cases}\tag{9.63}$$
$$\begin{cases}
\dfrac{d}{dt}z_{1j}+\lambda_j z_{1j}+\eta\operatorname{sign}(z_{1j}) = -\langle GZ_1,\varphi_j^*\rangle, & t\ge0,\\
z_{1j}(0)=z_j^o,
\end{cases}\tag{9.64}$$
and
$$\begin{cases}
\dfrac{d}{dt}z_{2j}+\lambda_j z_{2j}+\eta\operatorname{sign}(z_{2j}) = -\langle GZ_2,\varphi_j^*\rangle, & t\ge0,\\
z_{2j}(0)=z_j^o,
\end{cases}\tag{9.65}$$
for all $j=1,\dots,N$. Taking into account that sign is a maximal monotone operator, we get from (9.66), multiplied by $\bar z_{1j}-\bar z_{2j}$, that
$$\begin{cases}
\dfrac{d}{dt}\big|z_{1j}-z_{2j}\big|+\Re\lambda_j\,\big|z_{1j}-z_{2j}\big| \le \big|\langle GZ_1-GZ_2,\varphi_j^*\rangle\big|, & t\ge0,\\
\big(z_{1j}-z_{2j}\big)(0)=0,
\end{cases}\tag{9.67}$$
and
$$\big|\big(z_{1j}-z_{2j}\big)(t)\big| = 0,\quad\forall t\ge T,$$
for all $j=1,\dots,N$.
In the same manner as in relation (9.55), we obtain, via relation (9.69), that
$$\int_0^\infty \|P_N(z_1-z_2)(t)\|_Z^2\,dt \le hT\sum_{j=1}^N |\lambda_j|\,e^{-2\Re\lambda_j T}\big(2kr\|\varphi_j^*\|\big)^2\,\|Z_1-Z_2\|^2_{L^2(0,\infty;Z)}.\tag{9.70}$$
we get that
$$(I-P_N)z \in C([0,\infty);W).$$
Thus $z\in C([0,\infty);W)$, as claimed. Finally, by (9.45), taking into account that $z=(I-P_N)z$ for $t\ge T$, we have
$$\|z(t)\|_W \le C\rho+Cr^2,\quad\forall t\ge T,\tag{9.75}$$
from which, using the classical strategy for nonlinear autonomous systems [27, p. 178], we get the claimed exponential decay (9.34).
9.3 The Design of a Real Stabilizing Feedback Controller 203
$$\langle\psi_j,\psi_i\rangle = \delta_{ij},\quad i,j=1,\dots,N.\tag{9.76}$$
$$u = -\eta\sum_{j=1}^N \operatorname{sign}\big(\langle P_N z,\psi_j\rangle\big)P_N\Psi_j,\tag{9.77}$$
where
$$\Psi_j = \sum_{k=1}^N \alpha_{jk}\psi_k,\quad j=1,\dots,N,\tag{9.78}$$
and
$$\sum_{k=1}^N \alpha_{jk}\langle\psi_k,\psi_i\rangle_0 = \delta_{ji},\quad i,j=1,\dots,N.\tag{9.79}$$
(We can choose the $\alpha_{jk}$ in this way because the system $\{\psi_j\}_{j=1}^N$ is linearly independent in $L^2(O_0)$.)
Then substituting $u$ into the linearized system, we have
$$\begin{cases}
\dfrac{d}{dt}z+Az = -\eta\displaystyle\sum_{j=1}^N \operatorname{sign}\big(\langle P_N z,\psi_j\rangle\big)P_N(m\Psi_j), & t\ge0,\\
z(0)=z_o.
\end{cases}\tag{9.80}$$
Arguing as in the proof of Theorem 9.1 and taking account of the fact that
$$\langle\Psi_j,\psi_i\rangle = \delta_{ij},\quad i,j=1,\dots,N,$$
and
$$\big\|e^{-\hat A_s t}\big\|_{L(H,H)} \le Ce^{-\gamma t},\quad\forall t\ge0,$$
we obtain the following result.
Theorem 9.3 Let $T,\rho>0$, and $z_o$ be such that $\|z_o\|\le\rho$. For $0<\eta=\eta(T,\rho)$ sufficiently large, we have for the solution $z$ to the closed-loop system (9.80),
$$P_N z(t) = 0,\quad\forall t\ge T,\tag{9.81}$$
and
$$\|z(t)\| \le Ce^{-\gamma t}\|z_o\|,\quad\forall t\ge T.\tag{9.82}$$
Proof For simplicity, let us assume that $N=4$. The other cases can be treated similarly. We have
$$\tilde A(\varphi_1) = \tilde A(\psi_1+i\psi_2) = A\psi_1+iA\psi_2.$$
Hence
$$A\psi_1 = \Re\lambda_1\,\psi_1-\Im\lambda_1\,\psi_2\qquad\text{and}\qquad A\psi_2 = \Re\lambda_1\,\psi_2+\Im\lambda_1\,\psi_1.\tag{9.83}$$
$$A\psi_3 = \Re\lambda_2\,\psi_3-\Im\lambda_2\,\psi_4\qquad\text{and}\qquad A\psi_4 = \Re\lambda_2\,\psi_4+\Im\lambda_2\,\psi_3.\tag{9.84}$$
The system
$$\frac{d}{dt}z_u+\hat A_u z_u = -\eta\sum_{j=1}^4 \operatorname{sign}\big(\langle z_u,\psi_j\rangle\big)P_N(m\Psi_j)$$
reads as
$$\begin{cases}
\dfrac{d}{dt}z_1+\Re\lambda_1\,z_1+\Im\lambda_1\,z_2 = -\eta\operatorname{sign}(z_1),\\
\dfrac{d}{dt}z_2+\Re\lambda_1\,z_2-\Im\lambda_1\,z_1 = -\eta\operatorname{sign}(z_2),\\
\dfrac{d}{dt}z_3+\Re\lambda_2\,z_3+\Im\lambda_2\,z_4 = -\eta\operatorname{sign}(z_3),\\
\dfrac{d}{dt}z_4+\Re\lambda_2\,z_4-\Im\lambda_2\,z_3 = -\eta\operatorname{sign}(z_4),\quad\forall t\ge0.
\end{cases}\tag{9.85}$$
Multiplying the first equation of (9.85) by $z_1$, the second by $z_2$, and summing them, we get
$$\frac12\frac{d}{dt}\big(|z_1|^2+|z_2|^2\big)+\Re\lambda_1\big(|z_1|^2+|z_2|^2\big)+\eta\big(|z_1|+|z_2|\big) = 0,\quad\forall t\ge0.$$
Hence
$$\frac14\frac{d}{dt}\big(|z_1|+|z_2|\big)^2+\frac12\Re\lambda_1\big(|z_1|+|z_2|\big)^2+\eta\big(|z_1|+|z_2|\big) \le 0,\quad\forall t\ge0.$$
The same result can be obtained for the coefficients $z_3$ and $z_4$. Now arguing as in the proof of Theorem 9.1, one can obtain the desired result. The details are omitted.
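A direct simulation of (9.85) makes the conclusion tangible. The values of $\Re\lambda_k$, $\Im\lambda_k$, $\eta$, and the horizon below are illustrative choices, none taken from the book:

```python
import numpy as np

# Direct simulation of the real system (9.85): each pair (z1, z2), (z3, z4)
# rotates with speed Im(lambda_k) while the sign feedback drives
# |z1| + |z2| (resp. |z3| + |z4|) to zero in finite time.
re1, im1, re2, im2, eta = -0.3, 2.0, -0.1, 1.0, 2.0
z = np.array([1.0, -0.5, 0.8, 0.3])
dt = 1e-4
for _ in range(int(3.0 / dt)):               # integrate on [0, 3]
    rhs = -np.array([re1 * z[0] + im1 * z[1],
                     re1 * z[1] - im1 * z[0],
                     re2 * z[2] + im2 * z[3],
                     re2 * z[3] - im2 * z[2]]) - eta * np.sign(z)
    step = dt * rhs
    # crude sliding-mode handling: a step that would overshoot 0 is clamped
    z = np.where(np.abs(step) >= np.abs(z), 0.0, z + step)
# all four modes are numerically at zero (up to chattering of size ~ eta*dt)
```

Even though both pairs are unstable ($\Re\lambda_k<0$ in this sign convention), the feedback extinguishes them well before the end of the horizon, in line with (9.81).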
In the same manner, following the ideas in the proof of Theorem 9.2, one can obtain for the nonlinear system
$$\frac{d}{dt}z+Az+Gz = -\eta\sum_{j=1}^N \operatorname{sign}\big(\langle P_N z,\psi_j\rangle\big)P_N(m\Psi_j),\quad t\ge0;\qquad z(0)=z_o,\tag{9.86}$$
the following result.

Theorem 9.4 Let $T,\rho>0$ be sufficiently small. For each $z_o\in W$ such that $\|z_o\|_W\le\rho$, the problem (9.86) is well posed on $W$, with the unique solution $z\in C([0,\infty);W)\cap L^2(0,\infty;Z)$, if $\eta=\eta(T,\rho)$ is large enough. Moreover, this solution satisfies
$$P_N z(t)=0,\quad\forall t\ge T,\tag{9.87}$$
and
$$\|z(t)\| \le Ce^{-\beta t}\|z_0\|,\quad\forall t\ge T.\tag{9.88}$$
9.4 Comments
The stabilization problems presented above have been studied extensively over the
last six or seven years, and we refer to the works [11–13, 19, 30, 31, 50, 118,
119, 121], as well as to the book [10], for significant results in this direction. We
have presented here an internal stabilizing control design associated with abstract
parabolic-like equations. The proportional feedback law is similar to the one in
Chap. 2, but reconsidered for the internal case. A similar result was published in
Barbu and Munteanu [20]. However, the abstract setting discussed here is new. It
should be mentioned that these results are connected with those in [123], where
the exact controllability in projections for the Navier–Stokes equations is obtained.
However, there is no overlap, and the technique used here is completely different.
The proof of the stability of the nonlinear system is based mainly on the ideas in
[19].
References
1. Aamo OM, Krstic M, Bewley TR (2003) Control of mixing by boundary feedback in a 2D-
channel. Automatica 39:1597–1606
2. Agranovich MS (2015) Sobolev spaces, their generalizations and elliptic problems in smooth
and Lipschitz domains. Springer, New York
3. Amendola G, Fabrizio M, Golden JM (2012) Thermodynamics of materials with memory:
theory and applications. Springer, New York
4. Ammari K, Duyckaerts T, Shirikyan A (2016) Local feedback stabilisation to a nonstationary
solution for a damped non-linear wave equation. Math Control Relat Fields 6(1):1–5
5. Arnold L (1979) A new example of an unstable system being stabilized by random parameter
noise. Inform Comm Math Chem 133–140
6. Arnold L, Crauel H, Wihstutz V (1983) Stabilization of linear systems by noise. SIAM J
Control Optim 21:451–461
7. Barbu V (1975) Nonlinear semigroups and differential equations in Banach spaces. Noordhoff,
Leyden
8. Barbu V (2003) Internal stabilization of the phase field system. Adv Autom Control 754:1–8
9. Barbu V (2007) Stabilization of a plane channel flow by wall normal controllers. Nonlin Anal
Theory-Methods Appl 67(9):2573–2588
10. Barbu V (2010) Stabilization of Navier-Stokes flows. Springer, New York
11. Barbu V (2010) Stabilization of a plane channel flow by noise wall normal controllers. Syst
Control Lett 59(10):608–614. Barbu V (2010) Optimal stabilizable feedback controller for
Navier-Stokes equations. Contemp Math 513:43–53
12. Barbu V (2012) Stabilization of Navier-Stokes equations by oblique boundary feedback con-
trollers. SIAM J Control Optim 50(4):2288–2307
13. Barbu V (2013) Boundary stabilization of equilibrium solutions to parabolic equations. IEEE
Trans Autom Control 58:2416–2420
14. Barbu V (2018) Controllability and stabilization of parabolic equations. Birkhäuser Basel
15. Barbu V, Bonaccorsi S, Tubaro L (2014) Existence and asymptotic behavior for hereditary
stochastic evolution equations. Appl Math Optim 69:273–314
16. Barbu V, Colli P, Gilardi G, Marinoschi G, Rocca E (2017) Sliding mode control for a nonlinear
phase-field system. SIAM J Control Optim 55(3):2108–2133
17. Barbu V, Colli P, Gilardi G, Marinoschi G (2017) Feedback stabilization of the Cahn-Hilliard
type system for phase separation. J Diff Eqs 262:2286–2334
18. Barbu V, Iannelli M (2000) Controllability of the heat equation with memory. Diff Int Eqs
13:1393–1412
19. Barbu V, Lasiecka I, Triggiani R (2006) Abstract settings for tangential boundary stabilization
of Navier-Stokes equations by high- and low-gain feedback controllers. Nonlin Anal 64:2704–
2746
20. Barbu V, Munteanu I (2012) Internal stabilization of Navier-Stokes equation with exact con-
trollability on spaces with finite codimension. Evol Eqs Control Theory 1(1):1–16
21. Barbu V, Da Prato G (2012) Internal stabilization by noise of the Navier-Stokes equation.
SIAM J Control Optim 49(1):1–20
22. Barbu V, Röckner M (2015) An operatorial approach to stochastic partial differential equations
driven by linear multiplicative noise. J Eur Math Soc 17:1789–1815
23. Barbu V, Rodrigues SS, Shirikyan A (2011) Internal exponential stabilization to a nonstation-
ary solution for 3D Navier-Stokes equations. SIAM J Control Optim 49(4):1454–1478
24. Barbu V, Triggiani R (2004) Internal stabilization of Navier-Stokes equations with finite
dimensional controllers. Indiana Univ Math J 53:1443–1469
25. Barbu V, Wang G (2003) Internal stabilization of semilinear parabolic systems. J Math Anal
Appl 285:387–407
26. Barbu V, Iannelli M (2000) Controllability of the heat equation with memory. Differ Integr
Eqs 13:1393–1412
27. Balogh A, Krstic M (2002) Infinite dimensional backstepping-style feedback transformations
for a heat equation with an arbitrary level of instability. Eur J Control 8:165–176
28. Badra M, Takahashi T (2011) Stabilization of parabolic nonlinear systems with finite dimen-
sional feedback or dynamical controllers: application to the Navier-Stokes system. SIAM J
Control Optim 49(2):420–463
29. Balogh A, Liu W-J, Krstic M (2001) Stability enhancement by boundary control in 2D channel
flow. IEEE Trans Autom Control 46:1696–1711
30. Badra M (2009) Feedback stabilization of the 2-D and 3-D Navier-Stokes equations based on
an extended system. ESAIM COCV 15:934–968
31. Badra M (2009) Lyapunov functions and local feedback stabilization of the Navier-Stokes
equations. SIAM J Control Optim 48:1797–1830
32. Bertini L, Giacomin G (1997) Stochastic Burgers and KPZ equations from particle systems.
Commun Math Phys 183:571–607
33. Bewley TR (2001) Flow control: new challenges for a new Renaissance. Prog Aerospace Sci
37:21–58
34. Bourbaki N (2007) Théories spectrales. Springer, Berlin
35. Brezis H (2011) Functional analysis. Sobolev spaces and partial differential equations.
Springer, New York
36. Britton NF (1986) Reaction-diffusion equations and their applications to biology. Academic
Press, New York
37. Brzezniak Z, Goldys B, Peszat S, Russo F (2013) Second order PDEs with Dirichlet white
noise boundary conditions. J Evol Eqs 15(1):1–26
38. Boskovic DM, Krstic M, Liu W (2001) Boundary control of an unstable heat equation via
measurement of domain-averaged temperature. IEEE Trans Autom Control 46(12):2022–
2028
39. Caginalp G (1988) Conserved-phase field system: implications for kinetic undercooling. Phys
Rev B 38:789–791
40. Cannarsa P, Frankowska H, Marchini EM (2013) Optimal control for evolution equations with
memory. J Evol Eqs 13:197–227
41. Caraballo T (2006) Recent results on stabilization of PDEs by noise. Bol Soc Esp Mat Apl
37:47–70
42. Caraballo T, Liu K, Mao X (2001) On stabilization of partial differential equations by noise.
Math J 161:155–170
43. Cochran J, Vazquez R, Krstic M (2006) Backstepping boundary control of Navier-Stokes
channel flow: a 3D extension. In: Proceedings of the 2006 American Control Conference
44. Dalang RC (1999) Extending the martingale measure stochastic integral with applications to
spatially homogeneous S.P.D.E’s. Electron J Probab 4(6) (online)
45. Chen Z (1994) Optimal boundary controls for a phase field model. IMA J Math Control Inform
10(2):157–176
46. Chepyzhov V, Miranville A (2006) On trajectory and global attractors for semilinear heat
equations with fading memory. Indiana Univ Math J 55(1):119–167
47. Choi H, Temam R, Moin P, Kim J (1993) Feedback control for unsteady flow and its application
to the stochastic Burgers equation. J Fluid Mech 253:509–543
48. Coleman B, Gurtin M (1967) Equipresence and constitutive equations for rigid heat conduc-
tors. Z Angew Math Phys 18:199–208
49. Colli P, Gilardi G, Marinoschi G, Rocca E (2017) Sliding mode control for a phase field
system related to tumor growth. Appl Math Optim 1–24
50. Coron JM (2007) Control and nonlinearity. AMS, Providence, RI
51. Dafermos CM (1970) Asymptotic stability in viscoelasticity. Arch Rational Mech Anal 37:297–308
52. Debussche A, Fuhrman M, Tessitore G (2007) Optimal control of a stochastic heat equation
with boundary-noise and boundary-control. ESAIM COCV 13:178–205
53. Fabbri G, Goldys B (2009) An LQ problem for the heat equation on the halfline with Dirichlet
boundary control and noise. SICON 48(3):1473–1488
54. Fisher RA (1937) The wave of advance of advantageous genes. Ann Eugen 7:353–369
55. FitzHugh R (1961) Impulses and physiological states in theoretical models of nerve
membrane. Biophys J 1:445–466
56. Foondun M, Nualart E (2015) On the behaviour of stochastic heat equations on bounded
domains. Lat Am J Probab Math Stat 12(2):551–571
57. Funaki T, Quastel J (2015) KPZ equation, its renormalization and invariant measures. Stoch
Partial Diff Eqs Anal Comput 3(2):159–220
58. Fursikov A (2002) Real process corresponding to 3D Navier-Stokes system and its feedback
stabilization from boundary. Am Math Soc Transl 206(2):95–123
59. Fursikov AV (2001) Stabilizability of quasi-linear parabolic equations by feedback boundary
control. Sbornik Math 192:593–639
60. Fursikov AV (2004) Stabilization for the 3D Navier-Stokes system by feedback boundary
control. Discret Contin Dyn Syst 10(1):289–314
61. Gilbarg D, Trudinger N (2001) Elliptic partial differential equations of second order. Springer,
Berlin
62. Giorgi C, Pata V, Marzocchi A (1998) Asymptotic behavior of a semilinear problem in heat
conduction with memory. NoDEA 5:333–354
63. Guatteri G, Masiero F (2013) On the existence of optimal controls for SPDEs with boundary
noise and boundary control. SIAM J Control Optim 51(3):1909–1939
64. Guerrero S, Imanuvilov OY (2013) Remarks on non controllability of the heat equation with
memory. ESAIM Control Optim Calc Var 19(1):288–300
65. Gyongy I, Nualart D (1999) On the stochastic Burgers equation in the real line. Ann Probab
27(2):782–802
66. Gurtin M, Pipkin A (1968) A general theory of heat conduction with finite wave speed. Arch
Rational Mech Anal 31:113–126
67. Hartmann J (1937) Theory of the laminar flow of an electrically conductive liquid in a ho-
mogeneous magnetic field. Det Kgl. Danske Videnskabernes Selskab Mathematisk-fysiske
Meddelelser XV 6:1–27
68. Haussmann UG (1978) Asymptotic stability of the linear Ito equation in infinite dimensions.
J Math Anal Appl 65:219–235
69. Hazewinkel M (2001) [1994] Gram matrix. Encyclopedia of mathematics. Springer Sci-
ence+Business Media B.V./Kluwer Academic Publishers
70. Ichikawa A (1985) Stability of parabolic equations with boundary and pointwise noise. Lecture
notes in control and information sciences 69. Springer, Berlin
71. Karafyllis I, Krstic M (2014) On the relation of delay equations to first-order hyperbolic partial
differential equations. ESAIM: COCV 20:894–923
72. Kato T (1966) Perturbation theory for linear operators. Die Grundlehren der mathematischen
Wissenschaften, Band 132, Springer, New York
73. Komornik V (1994) Exact controllability and stabilization: the multiplier method. Masson,
Paris
74. Krstic M (1999) On global stabilization of Burgers’ equation by boundary control. Syst
Control Lett 37:123–141
75. Krstic M, Smyshlyaev A (2008) Boundary control of PDEs: a course on backstepping designs.
SIAM, Philadelphia
76. Krstic M, Smyshlyaev A (2008) Backstepping boundary control for first-order hyperbolic
PDEs and application to systems with actuator and sensor delays. Syst Control Lett 57(9):750–
758
77. Kröner A, Rodrigues SS (2015) Remarks on the internal exponential stabilization to a non-
stationary solution for 1D Burgers equations. SIAM J Control Optim 53(2):1020–1055
78. Kwiecińska A (1999) Stabilization of partial differential equations by noise. Stoch Process
Appl 79:179–184
79. Kwiecińska A (2002) Stabilization of evolution equations by noise. Proc Am Math Soc
130(10):3067–3074
80. Lasiecka I, Triggiani R (1991) Differential and algebraic Riccati equations with application
to boundary/point control problems: continuous theory and approximation theory. Lecture
notes in control and information sciences. Springer, Berlin, p 164
81. Lasiecka I, Triggiani R (2000) Control theory for partial differential equations: continuous
and approximation theories. Cambridge University Press, Cambridge
82. Lasiecka I, Triggiani R (2015) Stabilization to an equilibrium of the Navier-Stokes equations
with tangential action of feedback controllers. Nonlin Anal Theory Meth Appl 121:424–446
83. Lee D, Choi H (2001) Magnetohydrodynamic turbulent flow in a channel at low magnetic
Reynolds number. J Fluid Mech 439:367–394
84. Lefter C (2010) On a unique continuation property related to the boundary stabilization of
magnetohydrodynamic equations. An Ştiinţ Univ Al. I. Cuza Iaşi Mat LVI
85. Liptser R, Shiryaev AN (1989) Theory of martingales. Kluwer, Dordrecht
86. Liu HB, Hu P, Munteanu I (2016) Boundary feedback stabilization of Fisher's equation.
Syst Control Lett 97:55–60
87. Liu WJ (2003) Boundary feedback stabilization of an unstable heat equation. SIAM J Control
Optim 42(3):1033–1043
88. Liu Z, Zheng S (1999) Semigroups associated with dissipative systems. Chapman & Hall/CRC
research notes in mathematics 398. Chapman & Hall/CRC, Boca Raton, FL
89. Luo J (2008) Fixed points and exponential stability of mild solutions of stochastic partial
differential equations with delays. J Math Anal Appl 342:753–760
90. Lopatinskij YaB (1953) A method of reduction of boundary-value problems for systems of
differential equations of elliptic type to a system of regular integral equations. Ukrain Mat Zh
5:123–151
91. Mao X (1994) Exponential stability of stochastic differential equations. Pure and applied
mathematics. Chapman & Hall, New York
92. Mao X (2013) Stabilization of continuous-time hybrid stochastic differential equations by
discrete-time feedback control. Automatica 49(12):3677–3681
93. Marinoschi G (2017) A note on the feedback stabilization of a Cahn–Hilliard type system
with a singular logarithmic potential. Solvability, regularity, and optimal control of boundary
value problems for PDEs. Springer, Berlin, pp 357–377
94. Munteanu I (2011) Tangential feedback stabilization of periodic flows in a 2-D channel. Differ
Integr Eqs 24(5–6):469–494
95. Munteanu I (2012) Normal feedback stabilization of periodic flows in a two-dimensional
channel. J Optim Theory Appl 152(2):413–443
96. Munteanu I (2012) Normal feedback stabilization of periodic flows in a three-dimensional
channel. Numer Funct Anal Optim 33(6):611–637
97. Munteanu I (2013) Normal feedback stabilization for linearized periodic MHD channel flow,
at low magnetic Reynolds number. Syst Control Lett 62:55–62
98. Munteanu I (2013) Boundary feedback stabilization of periodic fluid flows in a magnetohy-
drodynamic channel. IEEE Trans Autom Control 58(8):2119–2125
99. Munteanu I (2014) Boundary stabilization of the phase field system by finite-dimensional
feedback controllers. J Math Anal Appl 412:964–975
100. Munteanu I (2015) Boundary stabilization of the Navier-Stokes equation with fading memory.
Int J Control 88(3):531–542
101. Munteanu I (2015) Stabilization of semilinear heat equations, with fading memory, by bound-
ary feedbacks. J Differ Eqs 259:454–472
102. Munteanu I (2017) Boundary stabilization of a 2-D periodic MHD channel flow, by propor-
tional feedbacks. ESAIM COCV 23(4):1253–1266
103. Munteanu I (2017) Stabilization of a 3-D periodic channel flow by explicit normal boundary
feedbacks. J Dyn Control Syst 23(2):387–403
104. Munteanu I (2017) Stabilization of stochastic parabolic equations with boundary-noise and
boundary-control. J Math Anal Appl 449(1):829–842
105. Munteanu I (2017) Stabilisation of parabolic semilinear equations. Int J Control 90(5):1063–
1076
106. Munteanu I (2018) Boundary stabilization of the stochastic heat equation by proportional
feedbacks. Automatica 87:152–158
107. Munteanu I (2018) Boundary stabilization to non-stationary solutions for deterministic and
stochastic parabolic-type equations. Int J Control. https://doi.org/10.1080/00207179.2017.
1407878
108. Murray JD (1993) Mathematical biology. Springer, Berlin
109. Pandolfi L (2013) Boundary controllability and source reconstruction in a viscoelastic string
under external traction. J Math Anal Appl 407:464–479
110. Partington JR (2004) Linear operators and linear systems. London mathematical society stu-
dent texts (60). Cambridge University Press, Cambridge
111. Da Prato G, Debussche A (1999) Control of the stochastic Burgers model of turbulence. SIAM
J Control Optim 37(4):1123–1149
112. Da Prato G, Zabczyk J (1993) Evolution equations with white-noise boundary conditions.
Stoch Rep 42:167–182
113. Da Prato G, Zabczyk J (2013) Stochastic equations in infinite dimensions. Cambridge Uni-
versity Press, Cambridge
114. Schuster E, Luo L, Krstic M (2008) MHD channel flow control in 2D: mixing enhancement
by boundary feedback. Automatica 44:2498–2507
115. Takashima M (1996) The stability of the modified plane Poiseuille flow in the presence of a
transverse magnetic field. Fluid Dyn Res 17:293–310
116. Triggiani R (1980) Boundary feedback stabilization of parabolic equations. Appl Math Optim
6:201–220
117. Triggiani R (2007) Stability enhancement of a 2-D linear Navier-Stokes channel flow by a
2-D wall normal boundary controller. Discret Contin Dyn Syst Ser B 8(2):279–314
118. Raymond JP (2006) Feedback boundary stabilization of the two-dimensional Navier-Stokes
equations. SIAM J Control Optim 45:790–828
119. Raymond JP (2007) Feedback boundary stabilization of the three dimensional incompressible
Navier-Stokes equations. J Math Pures Appl 87:627–669
120. Smyshlyaev A, Krstic M (2005) On control design for PDEs with space-dependent diffusivity
or time-dependent reactivity. Automatica 41:1601–1608
121. Ravindran SS (2000) Reduced-order adaptive controllers for fluid flows using POD. J Sci
Comput 15:457–478
122. Rodrigues SS (2018) Feedback boundary stabilization to trajectories for 3D Navier-Stokes
equations. Math Optim Appl. https://doi.org/10.1007/s00245-017-9474-5
123. Shirikyan A (2007) Exact controllability in projections for three-dimensional Navier-Stokes
equations. Ann I. H. Poincaré 24:521–537
124. Vazquez R, Krstic M (2007) A closed-form feedback controller for stabilization of the lin-
earized 2-D Navier-Stokes Poiseuille system. IEEE Trans Autom Control 52:2298–2312
125. Walsh JB (1986) An introduction to stochastic partial differential equations. Lecture notes in
mathematics 1180. Springer, Berlin, pp 265–439
126. Xie B (2016) Some effects of the noise intensity upon non-linear stochastic heat equations on
[0, 1]. Stoch Proc Appl 126:1184–1205
127. Xu C, Schuster E, Vazquez R, Krstic M (2008) Stabilization of linearized 2D magnetohydro-
dynamic channel flow by backstepping boundary control. Syst Control Lett 57:805–812
128. Yosida K (1980) Functional analysis. Springer, Berlin
129. Zhou J (2014) Optimal control of a stochastic delay heat equation with boundary-noise and
boundary-control. Int J Control 87(9):1808–1821
Index

A
Abstract parabolic, 22
Accretive operator, 11
Adapted process, 14
Adjoint operator, 8
Algebraic multiplicity, 7

B
Banach space, 2
Bochner integrable, 5
Borel–Cantelli lemma, 16
Borel σ-algebra, 2
Boundary control problem, 22
Boundary value problem, 10
Brownian motion, 14
Burkholder–Davis–Gundy inequality, 16

C
Cauchy problem, 11
Cauchy sequence, 2
Closed and densely defined operator, 7
Closed-loop equation, 23
Compact operator, 8
Complete measure space, 2
Conditional expectation, 13
Controlled Cahn–Hilliard system, 93
Controlled heat equation with delays, 109
Controlled magnetohydrodynamics equations, 78
Controlled Navier–Stokes equations, 49
Controlled stochastic equations, 128
C0-semigroup, 12
C([0, T]; X), 5
C^1([0, T]; X), 5

D
Dirichlet map, 25
Distributions space, 4
Dominated convergence theorem, 3
Dual space, 8

E
Eigenvalue, 7
Eigenvector, 7
Ellipticity condition, 10
Elliptic operators, 10
Equilibrium solution, 22
Expectation E, 13
Extension operator, 8

F
Feedback control, 23
Filtration, 14
Fourier series, 6
Fractional power, 9
Fubini's theorem, 3

G
Gaussian distribution, 14
Generalized eigenvector, 7
Geometric multiplicity, 7
Gram matrix, 1

H
Hartmann–Poiseuille profile, 78
Hilbert–Schmidt norm, 7
Hille–Yosida theorem, 12
Hölder's inequality, 5

I
Integrable function, 3
Internal control, 24
Itô's isometry, 15

L
Linearization, 20
Linear operators, 7
Lipschitz function, 12
Local martingale, 14
Lopatinskii condition, 10
L^p(O), 4

M
m-accretive operator, 11
Martingale, 13
Measurable function, 2
Measure, 2
Measure space, 2
Mild solution, 12
Modes, 6

N
Null set, 2

O
Operatorial norm, 7
Orthonormal basis, 7

P
Parabolic Poiseuille profile, 50
Parseval's identity, 6
Poincaré's inequality, 5
Positive definite function, 9
Positive definite operators, 8
Powers of a linear operator, 9
Probability space, 13
Product measure, 3
Proportional feedback, 28

Q
Quadratic variation, 14

R
Random deterministic equation, 17
Random variable, 13
Rescaling, 17
Resolvent operator, 7
Resolvent set, 7
Riesz–Schauder–Fredholm theorem, 8

S
Self-adjoint operator, 9
Semilinear evolution equation, 12
Semimartingale, 14
Semisimple eigenvalue, 7
σ-algebra, 2
Sobolev embeddings, 5
Sobolev space, 4
Spectrum, 7
Stabilizable controller, 23
Stabilization problem, 23
Stabilization to non-steady states, 24
Stabilization to trajectories, 171
Stable equilibrium, 22
Stochastic Chebyshev inequality, 16
Stochastic calculus, 16
Stochastic integral, 15
Stochastic processes, 13
Stopping time, 14
Symmetric operator, 8

U
Unstable eigenvalues, 20

V
Variation of constants formula, 12

W
Weak solution, 13