
Progress in Nonlinear Differential Equations
and Their Applications

Subseries in Control
Volume 93

Ionuţ Munteanu

Boundary Stabilization of Parabolic Equations
Progress in Nonlinear Differential Equations
and Their Applications

PNLDE Subseries in Control

Volume 93

Editors
Jean-Michel Coron, Université Pierre et Marie Curie, Paris, France

Editorial Board
Viorel Barbu, Facultatea de Matematică, Universitatea “Alexandru Ioan Cuza” din Iaşi,
Iaşi, Romania
Piermarco Cannarsa, Department of Mathematics, University of Rome “Tor
Vergata”, Roma, Italy
Karl Kunisch, Institute of Mathematics and Scientific Computing, University of
Graz, Graz, Austria
Gilles Lebeau, Laboratoire J.A. Dieudonné, Université de Nice Sophia-Antipolis,
Nice, France
Tatsien Li, School of Mathematical Sciences, Fudan University, Shanghai, China
Shige Peng, Institute of Mathematics, Shandong University, Jinan, China
Eduardo Sontag, Department of Electrical and Computer Engineering, Northeastern
University, Boston, MA, USA
Enrique Zuazua, Departamento de Matemáticas, Universidad Autónoma de Madrid,
Madrid, Spain

More information about this series at http://www.springer.com/series/15137


Ionuţ Munteanu

Boundary Stabilization
of Parabolic Equations
Ionuţ Munteanu
Faculty of Mathematics
Alexandru Ioan Cuza University
Iaşi, Romania

ISSN 1421-1750 ISSN 2374-0280 (electronic)


Progress in Nonlinear Differential Equations and Their Applications
PNLDE Subseries in Control
ISBN 978-3-030-11098-7 ISBN 978-3-030-11099-4 (eBook)
https://doi.org/10.1007/978-3-030-11099-4
Library of Congress Control Number: 2018966441

Mathematics Subject Classification (2010): 35K05, 93D15, 93B52, 93C20, 47F05, 60H15, 35R09,
35Q30, 35Q92

© Springer Nature Switzerland AG 2019


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made. The publisher remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.

This book is published under the imprint Birkhäuser, www.birkhauser-science.com by the registered
company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
To my beloved daughter, Anastasia
Preface

In recent years, many researchers have been working on designing stabilizers in


different technological areas such as surface design for controlling aircraft, voltage
regulators in electronics, camera stabilizers, chemical substances to prevent
unwanted change in the state of another substance, different types of food preser-
vatives, medical processes for preventing shock in sick or injured people, and mood
stabilizers, among many others.
In this book, we will treat this subject from a mathematical point of view. More
exactly, we will consider different models from different fields such as fluid flows
modeled by the Navier–Stokes equations; electrically conducted fluid flows modeled
by the magnetohydrodynamic equations; phase separation modeled by the Cahn–
Hilliard equations; different cases of semilinear heat equations arising from biology,
chemistry, or population dynamics as well as their stochastic versions. Then we will
address the problem of boundary stabilization associated with these models. All
these models can be combined under the rubric of abstract parabolic-like equations,
namely equations whose linear parts are generated by analytic C0 -semigroups. That
is why, in Chap. 2, we consider the boundary stabilization problem associated to
abstract parabolic-like equations and develop an algorithm to design proportional-
type boundary feedback stabilizers, of finite-dimensional structure, expressed in a
very simple form, that are easy to manipulate in numerical simulations. It should be
emphasized that no rigorous stabilization theory is possible without a unique con-
tinuation theory for the eigenfunctions of the linear operator obtained from the
linearization of the equation around the target solution. So, once a model, such as
those above, can be formulated in a parabolic abstract form, the boundary stabilizing
control design method can be applied, provided that a unique continuation property
of the eigenfunctions is established. This provides the power of this control design
technique; namely, it can be applied to a wide range of models. But it requires that
we prove a priori a unique continuation result that relies on some advanced results
and techniques involving both the theory of parabolic-like equations and functional
analysis.


We mention that in the literature, there are also other notable results concerning
the boundary stabilization of parabolic equations, and though we mention some
basic references and offer a brief presentation of other significant works in the field,
we have not presented them in detail. We confine ourselves to the proportional-type
feedback design only, which is based on the spectral decomposition of a linearized
system in stable and unstable systems, thereby omitting other important results in
the literature. This book was written with the goal of presenting in detail new results
related to an algorithm for the design of proportional-type feedback forms, which
enabled us to obtain some of the first results in areas such as boundary stabilization
of the Cahn–Hilliard system, of trajectories of the semilinear heat equation, and even
of stochastic partial differential equations. These ideas are still being developed, and
one might expect further spectacular achievements in the future.
Besides stabilization, the robustness of the stabilizing feedback under stochastic
perturbations is also discussed. The form of the feedback is based on the eigen-
functions of the linear operator, and we have tried to use a minimal set of them.
The reader is assumed to have a basic knowledge of linear functional analysis,
linear algebra, probability theory, and the general theory of elliptic, parabolic, and
stochastic equations. Most of this is reviewed in Chap. 1. The material included in
this book (excepting the comments on the references) represents the original con-
tribution of the author and his coworkers.
The author is indebted to Prof. Viorel Barbu for suggesting to us, five years ago,
that we develop some of his own earlier ideas on constructing proportional-type
feedback forms, which led to the conception of this entire book. We are indebted to
him as well for encouraging us to write this book and for useful discussions,
pertinent observations and suggestions, and unstinting support and guidance in the
writing of this book. Many thanks go to Hanbing Liu, and special thanks to my
parents for their love and support. Also, the author is indebted to Mrs. Elena
Mocanu, from the Institute of Mathematics Iaşi, who assisted in the typesetting of
this text.

Iaşi, Romania Ionuţ Munteanu


August 2018
Contents

1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Notation and Theoretical Results . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Stabilization of Abstract Parabolic Equations . . . . . . . . . . . . . . . . . . 19
2.1 Presentation of the Abstract Model . . . . . . . . . . . . . . . . . . . . . . . 19
2.2 The Design of the Boundary Stabilizer . . . . . . . . . . . . . . . . . . . . 25
2.2.1 The Case of Mutually Distinct Unstable Eigenvalues . . . . . 26
2.2.2 The Semisimple Eigenvalues Case . . . . . . . . . . . . . . . . . . 37
2.3 A Numerical Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.4 Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3 Stabilization of Periodic Flows in a Channel . . . . . . . . . . . . . . . . . . . . 49
3.1 Presentation of the Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.2 The Stabilization Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.2.1 The Feedback Law and the Stability of the System . . . . . . 63
3.3 Design of a Riccati-Based Feedback . . . . . . . . . . . . . . . . . . . . . 71
3.4 Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4 Stabilization of the Magnetohydrodynamics Equations
in a Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.1 The Magnetohydrodynamics Equations of an Incompressible
Fluid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.2 The Stabilizing Proportional Feedback . . . . . . . . . . . . . . . . . . . 86
4.3 Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
5 Stabilization of the Cahn–Hilliard System . . . . . . . . . . . . . . . . . . . . 93
5.1 Presentation of the Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.1.1 Stabilization of the Linearized System . . . . . . . . . . . . . . . 96
5.2 Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106


6 Stabilization of Equations with Delays . . . . . . . . . . . . . . . . . . . . . . . 109
6.1 Presentation of the Problem . . . . . . . . . . . . . . . . . . . . . . . . . . 109
6.2 Stability of the Linearized System . . . . . . . . . . . . . . . . . . . . . . 113
6.3 Feedback Stabilization of the Nonlinear System (6.1) . . . . . . . . . . 120
6.4 Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
7 Stabilization of Stochastic Equations . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.1 Robustness in the Presence of Noise Perturbation
of the Boundary Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.2 Stabilization of the Stochastic Heat Equation on a Rod . . . . . . . . . 136
7.2.1 Mild Formulation of the Solution and Proof
of the Main Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
7.3 Stabilization of the Stochastic Burgers Equation . . . . . . . . . . . . . . 150
7.4 Stabilization by Discrete-Time Feedback Control . . . . . . . . . . . . . 162
7.5 Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
8 Stabilization of Unsteady States . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
8.1 Presentation of the Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
8.2 The Stabilization Result and Applications . . . . . . . . . . . . . . . . . . 172
8.2.1 Observer Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
8.2.2 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
8.3 Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
9 Internal Stabilization of Abstract Parabolic Systems . . . . . . . . . . . . 187
9.1 Presentation of the Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
9.2 Stabilization of the Full Nonlinear Equation (9.9) . . . . . . . . . . . . . 195
9.3 The Design of a Real Stabilizing Feedback Controller . . . . . . . . . 203
9.4 Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Acronyms

C  The set of all complex numbers
N  The set of all natural numbers
R  The real line (−∞, ∞)
R^d  The d-dimensional Euclidean space
O  An open subset of R^d
∂O  The boundary of O
Q  Q = O × (0, T)
Σ  Σ = ∂O × (0, T), where 0 < T < ∞
‖ · ‖_X  The norm of the linear normed space X
X′  The dual of the space X
(·, ·)_H  The scalar product of the Hilbert space H
⟨x, y⟩_d  The scalar product of the vectors x, y ∈ R^d
‖ · ‖_d  The Euclidean norm in R^d
L(X, Y)  The space of linear continuous operators from X to Y
B*  The adjoint of the operator B
D(A)  The domain of the operator A
R(A)  The range of the operator A
1_C  The indicator function of the set C
sign  The signum function on X: sign x = x/‖x‖_X if x ≠ 0, sign 0 = {x : ‖x‖_X ≤ 1}
C^k(O)  The space of real-valued functions on O that are continuously differentiable up to order k, k ≥ 1
C_0^k(O)  The subspace of functions in C^k(O) with compact support in O
D(O)  The space C_0^∞(O)
d^k u/dt^k, u^(k)  The derivative of order k of the function u : [a, b] → X
D′(O)  The dual of D(O) (i.e., the space of distributions on O)
C(O)  The space of continuous functions on O
L^p(O)  The space of p-summable functions u : O → R endowed with the norm |u|_p = (∫_O |u(x)|^p dx)^{1/p}, 1 ≤ p < ∞
W^{m,p}(O)  The Sobolev space {u ∈ L^p(O) : D^α u ∈ L^p(O), |α| ≤ m}, 1 ≤ p ≤ ∞
W_0^{m,p}(O)  The closure of C_0^∞(O) in the norm of W^{m,p}(O)
H^k(O), H_0^k(O)  The spaces W^{k,2}(O) and W_0^{k,2}(O), respectively
L^p(a, b; X)  The space of p-summable functions from (a, b) to X, 1 ≤ p ≤ ∞, −∞ ≤ a < b ≤ ∞
C([a, b]; X)  The space of X-valued continuous functions on [a, b]
W^{1,p}([a, b]; X)  The Sobolev space {u ∈ AC([a, b]; X) : du/dt ∈ L^p((a, b); X)}
n  The outward normal to ∂O
∂u/∂n  The normal derivative of the function u : O → R
Chapter 1
Preliminaries

For easy reference, we collect here some standard notation and results in functional
analysis and partial differential equations that will be used throughout this work.

1.1 Notation and Theoretical Results

Functional Spaces

Here O will stand for a bounded domain in R^d, d ∈ N \ {0}, i.e., a nontrivial con-
nected open subset of R^d such that

O ⊂ { x ∈ R^d : x_1^2 + · · · + x_d^2 ≤ R },

for some R > 0. We denote by O its closure and by ∂O its boundary. We always
assume that the boundary of O is piecewise smooth (e.g., the boundary of a polygon
or a sphere). In many cases, the boundary of O will be split into two parts as ∂O =
Γ1 ∪ Γ2 , where Γ1 has nonzero surface measure.
Consider H to be a Hilbert space over C, with the inner product ⟨·, ·⟩. Then the
system {x_1, x_2, . . . , x_N} ⊂ H is linearly independent in H if and only if the Gram
matrix

G := ( ⟨x_i, x_j⟩ )_{i,j=1}^N

is invertible. It is known that for every system in H, the corresponding Gramian is a
positive semidefinite Hermitian matrix, i.e.,

G = Ḡ^T and ⟨Gc, c⟩_N ≥ 0, ∀c ∈ C^N,


where G^T denotes the transpose of the matrix G, ⟨·, ·⟩_N denotes the scalar product
in C^N, and z̄ denotes the complex conjugate of z ∈ C (for more details on Gramians,
see, e.g., [69]).
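For illustration only, here is a minimal numerical sketch in Python (the example vectors, the inner-product convention, and the tolerances are arbitrary choices made here, not part of the theory above) of the Gramian test for linear independence.

import numpy as np

# Hypothetical example: three vectors in C^4; the third is a combination of the first two.
x1 = np.array([1, 1j, 0, 2], dtype=complex)
x2 = np.array([0, 1, 1j, -1], dtype=complex)
x3 = x1 - 2 * x2

def gram(vectors):
    # G_{ij} = <x_i, x_j>, with <u, v> = sum_k u_k * conj(v_k) (linear in the first argument).
    return np.array([[np.vdot(v, u) for v in vectors] for u in vectors])

G = gram([x1, x2, x3])
print(np.linalg.matrix_rank(G))                   # 2: the Gramian is singular, so the system is dependent
print(np.allclose(G, G.conj().T))                 # True: G is Hermitian
print(np.all(np.linalg.eigvalsh(G) >= -1e-12))    # True: G is positive semidefinite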
Now consider a normed vector space (X, ‖·‖) over R. A sequence (u_n)_n ⊂ X is
called a Cauchy sequence if for all ε > 0, there exists N_ε ∈ N such that

‖u_n − u_m‖ < ε, ∀n, m ≥ N_ε.

We say that X is a Banach space if it is a complete normed vector space, i.e., a
normed space in which every Cauchy sequence converges to a limit point in X.
Next let X be a nonempty set. A set F of subsets of X is a σ-algebra if:
(a) the empty set belongs to F;
(b) the complement F^c := {x ∈ X : x ∉ F} belongs to F for all F ∈ F;
(c) the union ∪_{j∈N} F_j belongs to F for F_j ∈ F, j ∈ N.

A measure μ on a measurable space (X, F) is a mapping from F to R_+ ∪ {∞} such that:
(a) the empty set has measure zero;
(b) μ( ∪_{j∈N} F_j ) = Σ_{j∈N} μ(F_j) if F_j ∈ F are mutually disjoint.

Together, the pair (X , F , μ) forms a measure space. A set F ∈ F is called a null set if
μ(F) = 0, and a measure space is said to be complete if all subsets of null sets belong
to F . (Every measure space (X , F , μ) can be extended to a complete measure space
by including all subsets of null sets). The Borel σ -algebra of a topological space Y ,
denoted by B(Y ), is the smallest σ -algebra containing all open subsets of Y . The
usual notion of volume of subsets of Rd gives rise to Lebesgue measure, thereby
introducing the notion of Lebesgue measure space.
A function u : X → Y is F -measurable if the pullback set u−1 (G) is in F for
every G ∈ B(Y ). A function s : X → Y is simple if there exist sj ∈ Y and Fj ∈ F
for j = 1, 2, . . . , N with μ(Fj ) < ∞ such that


s(x) = Σ_{j=1}^N s_j 1_{F_j}(x), x ∈ X,

where 1_F is the indicator function, i.e., 1_F(x) := 1 if x ∈ F and 1_F(x) := 0 if x ∉ F.

The integral of a simple function s with respect to the measure space (X, F, μ) is

∫_X s(x) dμ(x) := Σ_{j=1}^N s_j μ(F_j).

We say that a measurable function u is integrable with respect to μ if there exist
simple functions {u_n}_{n∈N} such that u_n(x) → u(x) as n → ∞ for almost all x ∈ X,
and (u_n) is a Cauchy sequence in the sense that for all ε > 0,

∫_X ‖u_n(x) − u_m(x)‖_Y dμ(x) < ε

for all n, m sufficiently large. If u is integrable, we define

∫_X u(x) dμ(x) := lim_{n→∞} ∫_X u_n(x) dμ(x).

If F ∈ F, we define

∫_F u(x) dμ(x) := ∫_X u(x) 1_F(x) dμ(x).

The above definition can be extended to C-valued functions as follows: letting u =
u_1 + iu_2, with u_1, u_2 R-valued functions, we say that u is measurable (integrable) if
both u_1, u_2 are measurable (integrable), and define

∫_X u(x) dμ(x) := ∫_X u_1(x) dμ(x) + i ∫_X u_2(x) dμ(x).

Theorem 1.1 (Dominated convergence) Consider a sequence of measurable func-
tions u_n : X → Y such that u_n(x) → u(x) in Y as n → ∞ for almost all x ∈ X. If
there is a real-valued integrable function U such that ‖u_n(x)‖_Y ≤ |U(x)|, ∀n ∈ N
and almost all x ∈ X, then

lim_{n→∞} ∫_X u_n(x) dμ(x) = ∫_X lim_{n→∞} u_n(x) dμ(x) = ∫_X u(x) dμ(x).

Let (X_k, F_k, μ_k), k = 1, 2, be two measure spaces. We write F_1 × F_2 for the
smallest σ-algebra containing all sets F_1 × F_2 with F_k ∈ F_k. The product measure
μ_1 × μ_2 on F_1 × F_2 is defined by (μ_1 × μ_2)(F_1 × F_2) := μ_1(F_1) μ_2(F_2).

Theorem 1.2 (Fubini) Suppose that (X_k, F_k, μ_k), k = 1, 2, are σ-finite measure
spaces and consider a measurable function u : X_1 × X_2 → Y. If

∫_{X_2} ( ∫_{X_1} ‖u(x_1, x_2)‖_Y dμ_1(x_1) ) dμ_2(x_2) < ∞,

then u is integrable with respect to the product measure μ_1 × μ_2 and






∫_{X_1×X_2} u(x_1, x_2) d(μ_1 × μ_2)(x_1, x_2) = ∫_{X_2} ( ∫_{X_1} u(x_1, x_2) dμ_1(x_1) ) dμ_2(x_2)
= ∫_{X_1} ( ∫_{X_2} u(x_1, x_2) dμ_2(x_2) ) dμ_1(x_1).

Next, by L^p(O), 1 ≤ p ≤ ∞, we denote the standard space of Lebesgue p-integrable
functions on O, endowed with the norm

‖f‖_{L^p(O)} := ( ∫_O |f(x)|^p dx )^{1/p}.

One can easily see that L^q(O) ⊂ L^p(O) for 1 ≤ p ≤ q. For the particular case p = 2,
L^2(O) is a Hilbert space, with the inner product

⟨f, g⟩_{L^2(O)} = ∫_O f(x) g(x) dx.

Next, we set D(O) = C_0^∞(O) for the space of infinitely differentiable functions
with compact support in O, and denote by D′(O) its dual, known as the space of
distributions on O. Based on this, one can introduce the Sobolev spaces W^{1,p}(O),
1 ≤ p ≤ ∞, defined as

W^{1,p}(O) := { u ∈ L^p(O) : ∂u/∂x_i ∈ L^p(O), i = 1, 2, . . . , d }.

Here the partial derivatives, like ∂/∂x_i above, are taken in the sense of distributions;
i.e., given a multi-index α ∈ N^d of order |α| = α_1 + α_2 + · · · + α_d, we use the notation

D^α u := ∂^{|α|} u / (∂x_1^{α_1} · · · ∂x_d^{α_d})

and define

∫_O D^α u φ dx = (−1)^{|α|} ∫_O u D^α φ dx, ∀φ ∈ D(O).

Further, we write W_0^{1,p}(O) for the closure of C_0^∞(O) in W^{1,p}(O), i.e.,

W_0^{1,p}(O) := { u ∈ W^{1,p}(O) : ∃(u_n)_{n∈N} ⊂ C_0^∞(O) such that u_n → u in W^{1,p}(O) }.

In other words, W_0^{1,p}(O) consists of functions from W^{1,p}(O) that can be approxi-
mated by smooth functions with compact support. For the particular case p = 2, we
define W^{1,2}(O) = H^1(O) and W_0^{1,2}(O) = H_0^1(O).

In general, we may introduce for each m ∈ N the space

H^m(O) := { u ∈ L^2(O) : D^α u ∈ L^2(O) for each multi-index α with |α| ≤ m }.

The dual of the space W_0^{1,p}(O) is denoted by W^{−1,p′}(O), where p′ = p/(p − 1).
In particular, W^{−1,2}(O) = H^{−1}(O).

The connection between the Lebesgue and Sobolev spaces is given by the Sobolev
embeddings. Namely, assume u ∈ W^{1,p}(O) with p < d. Then for q = dp/(d − p) we have
W^{1,p}(O) ⊂ L^q(O); i.e., the identity map from W^{1,p}(O) to L^q(O) is bounded.
Proposition 1.1 (Poincaré's inequality) There exists a constant C_O (depending only
on the domain O) such that for all u ∈ H_0^1(O), we have

‖u‖_{L^2(O)} ≤ C_O ‖∇u‖_{L^2(O)}.

Proposition 1.2 (Hölder's inequality) Let p, q ∈ [1, ∞] with 1/p + 1/q = 1. Then for
all measurable functions u, v, we have

‖uv‖_{L^1(O)} ≤ ‖u‖_{L^p(O)} ‖v‖_{L^q(O)}.

We continue by defining the space of X-valued continuous functions on [0, T] ⊂
[0, ∞), denoted by C([0, T]; X), where X is some Banach space. The space
C([0, T]; X) is endowed with the sup norm

‖u‖_{C([0,T];X)} := sup_{t∈[0,T]} ‖u(t)‖_X, ∀u ∈ C([0, T]; X).

Similarly, we introduce the space of continuously differentiable X-valued functions
C^1([0, T]; X), with the norm

‖u‖_{C^1([0,T];X)} := sup_{t∈[0,T]} ‖u(t)‖_X + sup_{t∈[0,T]} ‖(d/dt)u(t)‖_X.

Also, L^p(0, T; X) stands for the space of X-valued Bochner-integrable L^p functions
on (0, T), with the norm

‖u‖_{L^p(0,T;X)} := ( ∫_0^T ‖u(t)‖_X^p dt )^{1/p}.

By W^{1,p}([0, T]; X) we denote the Sobolev space

{ u ∈ L^p(0, T; X) : (d/dt)u ∈ L^p(0, T; X) },

where (d/dt)u is taken in the sense of X-valued vectorial distributions on (0, T).

Fourier Series

Let f : [x_0, x_0 + P] → C be an integrable function on the interval [x_0, x_0 + P], where
x_0, P are real numbers. If f(x_0) = f(x_0 + P), then f can be represented on the whole
real line (f being extended by periodicity to the whole line) as

f(x) = Σ_{k∈Z} f_k e^{i 2πkx/P}, x ∈ R,

where

f_k := (1/P) ∫_{x_0}^{x_0+P} f(x) e^{−i 2πkx/P} dx, k ∈ Z.

The coefficients f_k are called the Fourier modes of the function f. In particular, if
x_0 = 0 and P = 2π, we get that f can be represented as

f(x) = Σ_{k∈Z} f_k e^{ikx}, x ∈ R, (1.1)

with f_k := (1/2π) ∫_0^{2π} f(x) e^{−ikx} dx, k ∈ Z.

One can immediately deduce that f is real-valued if and only if f_k = f̄_{−k}, ∀k ∈ Z,
that is, if f_k and f_{−k} are complex conjugates.

Introduce the space L^2_per(0, 2π) consisting of all locally square-integrable func-
tions on R that are 2π-periodic. The norm in L^2_per(0, 2π) is defined as

‖f‖^2_{L^2_per(0,2π)} = 2π Σ_{k∈Z} |f_k|^2,

where f_k are the corresponding Fourier modes introduced above (this is also known as
Parseval's identity).
Next, define

H^1_per(0, 2π) := { f ∈ H^1(0, 2π) : f(0) = f(2π) },

with the norm

‖f‖^2_{H^1_per(0,2π)} := 2π Σ_{k∈Z} (1 + k^2)|f_k|^2.

We can also define the Fourier series for functions of two variables x and y in the
square [0, 2π] × [0, 2π]:

f(x, y) = Σ_{k,l∈Z} f_{kl} e^{ikx} e^{ily},

where f_{kl} := (1/4π^2) ∫_0^{2π} ∫_0^{2π} f(x, y) e^{−ikx} e^{−ily} dx dy.

Linear Operators

If X, Y are Banach spaces, then L(X, Y) is the space of all linear continuous operators
from X to Y, with the operator norm

‖A‖_{L(X,Y)} := sup { ‖Ax‖_Y / ‖x‖_X : x ∈ X, x ≠ 0 }.

If X and Y are two Hilbert spaces, we introduce the Hilbert–Schmidt norm of an
operator A : X → Y as

‖A‖^2_{HS} := Σ_{i∈I} ‖Ae_i‖^2_Y,

where {e_i : i ∈ I} is an orthonormal basis in X, i.e.,

⟨e_i, e_j⟩ = δ_{ij}, ∀i, j,

where δ_{ij} is the Kronecker delta: δ_{ij} = 1 if i = j, and δ_{ij} = 0 otherwise.
We call A a Hilbert–Schmidt operator if ‖A‖_{HS} < ∞.
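In finite dimensions the Hilbert–Schmidt norm of an operator (a matrix) is simply the Frobenius norm, and its value does not depend on the orthonormal basis used in the definition; a short Python sketch with an arbitrary random matrix (all concrete choices assumed):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))                       # an operator from R^3 to R^4

# ||A||_HS^2 = Σ_i ||A e_i||^2 with the canonical basis ...
hs_canonical = np.sqrt(sum(np.linalg.norm(A @ e) ** 2 for e in np.eye(3)))

# ... and with another orthonormal basis (the columns of a random orthogonal matrix Q).
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
hs_rotated = np.sqrt(sum(np.linalg.norm(A @ Q[:, i]) ** 2 for i in range(3)))

print(np.isclose(hs_canonical, hs_rotated))               # True: basis independence
print(np.isclose(hs_canonical, np.linalg.norm(A, 'fro'))) # True: equals the Frobenius norm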

Letting A ∈ L(X, X), for each λ ∈ C we denote by (λI − A)^{−1} the resolvent of
A, by ρ(A) the resolvent set

ρ(A) = { λ ∈ C : (λI − A)^{−1} ∈ L(X, X) },

and by σ(A) = C \ ρ(A) the spectrum of A. We say that λ ∈ C is an eigenvalue of
the linear operator A if there exists x ∈ D(A), x ≠ 0, such that Ax = λx. Such an x
is called an eigenvector of A corresponding to the eigenvalue λ. If λ is an eigenvalue
for A, then the dimension of the linear eigenvector space

Ker(λI − A) := {x ∈ X : Ax = λx}

is called the geometric multiplicity of λ. The vector x is called a generalized eigen-


vector corresponding to the eigenvalue λ if

(λI − A)m x = 0, for some m ∈ N.

The dimension of the space of generalized eigenvectors is called the algebraic mul-
tiplicity of the eigenvalue λ. An eigenvalue λ of the operator A is called semisimple
if its algebraic multiplicity coincides with its geometric multiplicity.
We say that the linear operator A : D(A) ⊂ X → Y is closed if its graph is closed,
that is, if xn → x in X and yn = Axn → y in Y implies y = Ax. The operator A is
said to be densely defined if its domain D(A) is dense in X .

An operator is called compact if it maps bounded sets into relatively compact sets.
Concerning the eigenvalues of an operator, we have the following result, known as
the Riesz–Schauder–Fredholm theorem (see [128], p. 283).
Theorem 1.3 Let A be a closed and densely defined operator in X with compact
resolvent (λI − A)^{−1} for some λ ∈ ρ(A). Then the spectrum σ(A) consists of iso-
lated eigenvalues {λ_j}_{j∈N}, each of finite (algebraic) multiplicity.

If X is a Banach space, we denote by X′ its dual space, endowed with the dual
norm

‖x*‖_{X′} := sup { X⟨x, x*⟩_{X′} : ‖x‖_X = 1 },

where by X⟨x, x*⟩_{X′} we mean the value of x* computed at x.

We say that A : D(A) ⊂ X → X is symmetric if

(Ay, z) = (y, Az), ∀y, z ∈ D(A).

We note that a symmetric operator has only real, semisimple eigenvalues.


Let A : D(A) ⊂ X → Y be a closed and densely defined operator. Then the adjoint
A* : Y′ → X′ of A is defined by

X⟨A*y*, x⟩_{X′} = Y⟨y*, Ax⟩_{Y′}, ∀x ∈ D(A),

with D(A*) = { y* ∈ Y′ : ∃C > 0, |Y⟨y*, Ax⟩_{Y′}| ≤ C‖x‖_X, ∀x ∈ D(A) }.

If λ is an eigenvalue of A, then λ̄ is an eigenvalue of A*, of the same multiplicity.

Assume now that X is a Hilbert space with the scalar product ⟨·, ·⟩_X and the
induced norm ‖·‖_X. Also assume that there exists λ_0 ∈ ρ(A). Then define the space
(D(A))′ as the completion of X in the norm

‖x‖_{(D(A))′} = ‖(λ_0 I − A)^{−1} x‖_X, ∀x ∈ X.

Then one has

D(A) ⊂ X ⊂ (D(A))′,

algebraically and topologically. Moreover, the operator A has an extension, denoted
by Ã : X → (D(A*))′ and defined by

(D(A*))′⟨Ãx, y⟩_{D(A*)} = ⟨x, A*y⟩, ∀y ∈ D(A*). (1.2)

By the closed graph theorem, one has that Ã ∈ L(X, (D(A*))′). Moreover, the
spectrum and the eigenvalues of Ã coincide with those of A.

Positive Semidefinite Operators

A linear operator L ∈ L(H) on a Hilbert space H is called positive semidefinite if

⟨u, Lu⟩ ≥ 0, ∀u ∈ H,

and positive definite if

⟨u, Lu⟩ > 0, ∀u ∈ H \ {0}.

In the particular case in which H = C^d and L = A, with A a d × d complex symmet-
ric matrix (also known as a Hermitian matrix, i.e., Ā^T = A), we recover the definitions
of positive semidefinite and positive definite matrices, respectively.

We say that a function k : D × D → R is positive semidefinite if for all N ∈ N,
x_j ∈ D, and a_j ∈ R, for j = 1, 2, . . . , N, we have

Σ_{i,j=1}^N a_i a_j k(x_i, x_j) ≥ 0.

For a domain D, if k ∈ C(D × D) is a positive semidefinite function, then the integral
operator L on H = L^2(D) with kernel k is positive semidefinite, i.e.,

⟨u, Lu⟩ = ∫_{D×D} u(x) k(x, y) u(y) dx dy ≥ 0, ∀u ∈ L^2(D).
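As a numerical illustration (the kernel, the sample points, and the tolerances below are assumed choices, not part of the statement), a continuous positive semidefinite kernel such as k(x, y) = exp(−(x − y)^2) yields, on any finite set of points, a matrix (k(x_i, x_j)) with no negative eigenvalues, which is precisely the finite-dimensional condition Σ a_i a_j k(x_i, x_j) ≥ 0.

import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=50)                 # arbitrary points in the domain D = (0, 1)

# Kernel matrix K_{ij} = k(x_i, x_j) for the assumed Gaussian kernel.
K = np.exp(-(x[:, None] - x[None, :]) ** 2)

# All eigenvalues of the symmetric matrix K are nonnegative (up to round-off),
# hence a^T K a = Σ_{i,j} a_i a_j k(x_i, x_j) >= 0 for every vector a.
print(np.min(np.linalg.eigvalsh(K)) >= -1e-10)     # True
a = rng.standard_normal(50)
print(a @ K @ a >= -1e-10)                         # True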

Powers of a Linear Operator

Let A be a linear operator from D(A) ⊂ H to a Hilbert space H with an orthonormal
basis of eigenfunctions {ϕ_j : j ∈ N*} and eigenvalues λ_j > 0, ordered so that λ_{j+1} ≥
λ_j. Such operators are self-adjoint, i.e., they satisfy

⟨Au, v⟩ = ⟨u, Av⟩, ∀u, v ∈ D(A).

We see that given u ∈ H, we have u = Σ_{j=1}^∞ u_j ϕ_j, where u_j = ⟨u, ϕ_j⟩, j ∈ N*. Con-
sequently,

Au = A Σ_{j=1}^∞ u_j ϕ_j = Σ_{j=1}^∞ λ_j u_j ϕ_j.

We may define the fractional power α ∈ R of A as

A^α u := Σ_{j=1}^∞ λ_j^α u_j ϕ_j, ∀u ∈ H, (1.3)

and let D(A^α) be the set of all u = Σ_{j=1}^∞ u_j ϕ_j such that A^α u ∈ H. The domain D(A^α)
is a Hilbert space with inner product

⟨u, v⟩_α := ⟨A^α u, A^α v⟩

and corresponding induced norm ‖u‖_α := ‖A^α u‖_H. In addition, we have
• ⟨u, v⟩_{1/2} = ⟨A^{1/2} u, A^{1/2} v⟩ = ⟨Au, v⟩, ∀u, v ∈ D(A);
• for α > 0, ‖u‖_α ≥ λ_1^α ‖u‖, u ∈ D(A^α).
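In finite dimensions, definition (1.3) amounts to diagonalizing a symmetric positive definite matrix and raising its eigenvalues to the power α; a minimal Python sketch (the matrix and the test vectors are arbitrary choices, not part of the theory):

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])          # symmetric positive definite
lam, V = np.linalg.eigh(A)               # A = V diag(lam) V^T with orthonormal eigenvectors

def frac_power(alpha):
    # A^alpha u = Σ_j lam_j^alpha <u, phi_j> phi_j, i.e. V diag(lam^alpha) V^T.
    return V @ np.diag(lam ** alpha) @ V.T

half = frac_power(0.5)
print(np.allclose(half @ half, A))        # A^{1/2} A^{1/2} = A

u = np.array([1.0, -1.0, 2.0])
v = np.array([0.5, 0.0, 1.0])
# <u, v>_{1/2} = <A^{1/2} u, A^{1/2} v> = <Au, v>, as in the first bullet above.
print(np.isclose((half @ u) @ (half @ v), (A @ u) @ v))   # True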

Elliptic Operators
Let A be a scalar linear partial differential operator

A = a(x, D) = Σ_{|α|≤2l} a_α(x) D^α

on O of even order 2l, with coefficients infinitely differentiable on O. (Here D^α stands
for the partial derivative operator introduced above.) Its symbol is the polynomial
a(x, ξ) obtained from a(x, D) by replacing each D_j by the real number ξ_j. The principal
symbol a_0(x, ξ) is the leading homogeneous part of the symbol:

a_0(x, ξ) = Σ_{|α|=2l} a_α(x) ξ^α.

Suppose we are also given a boundary operator

B = b(x, D) = Σ_{|β|≤r} b_β(x) D^β

on Γ = ∂O, of nonnegative order r, with infinitely differentiable coefficients.
The boundary value problem that we will consider in this section is

a(x, D)u(x) = 0 in O,
b(x, D)u(x) = g on Γ. (1.4)

Now let us give the definition of ellipticity for this problem:

(a) The operator A is elliptic on O; that is, for its principal symbol a_0(x, ξ) we have

a_0(x, ξ) ≠ 0, ∀x ∈ O, ∀ξ ∈ R^d \ {0}.

(b) Moreover, the operator A is regular elliptic, i.e., the equation a_0(x, ξ′, ζ) = 0
with ξ′ ≠ 0 has the same number of roots ζ in the upper and lower half-planes
(and this number equals l).
(c) The boundary operator B obeys the Lopatinskii condition. In order to have a
simple statement of this, we transfer the origin to a point x_0 and rotate the
coordinate system so that the t = x_d axis is directed along the inner normal
to the boundary at this point. Suppose that the operators of the problem are
rewritten in this coordinate system. Consider the following problem on the ray
R_+ = {t : t > 0} for fixed ξ′ = ξ′_0 ≠ 0:

a_0(x_0, ξ′_0, D_t) v(t) = 0, t > 0,
b_0(x_0, ξ′_0, D_t) v|_{t=0} = h, (1.5)

where b_0 is the principal symbol of the operator b,

b_0(x, ξ) = Σ_{|β|=r} b_β(x) ξ^β.

Problem (1.5) is required to have precisely one solution in L^2(R_+) for every
ξ′_0 ≠ 0 and every number h. (More about the Lopatinskii condition can be found
in [90].)

Problem (1.5) is obtained from the original problem by freezing the coefficients
at the point x_0, removing the lower-order terms, and applying the formal Fourier
transform with respect to the tangent variables.
If all three conditions stated above hold, then the problem is said to be elliptic.

Let us assume that A can be written in divergence form as

A = Σ_{|α|≤l, |β|≤l} (−1)^{|α|} ∂^α (a_{α,β} ∂^β),

such that the strong ellipticity condition

Σ_{|α|=l, |β|=l} a_{α,β}(x) ξ^{α+β} ≥ C|ξ|^{2l}

holds. Then, via the Lax–Milgram theorem, one may show that an elliptic problem
such as (1.5) has a unique solution. For more details on this subject, see [2].
The Cauchy Problem
Let X be a real Banach space with dual denoted by X′, and let A : D(A) ⊂ X →
X be a linear unbounded operator. The operator A is said to be accretive if

(Au, η)_X ≥ 0 for η ∈ J(u), ∀u ∈ D(A),

where J : X → X′ is the duality map of the space X. Equivalently,

‖(I + λA)^{−1} f − (I + λA)^{−1} g‖_X ≤ ‖f − g‖_X, λ > 0,

for all f, g in the range R(I + λA) of the operator I + λA. The operator A is said to
be m-accretive if it is accretive and R(I + λA) = X for all λ > 0 (equivalently, for
some λ > 0).

A semilinear evolution equation on the Banach space X is a differential equation
of the form

(d/dt)u = −Au + f(u), u(0) = u_0. (1.6)

We start with f ≡ 0. We have

Theorem 1.4 (Hille–Yosida) Let A be m-accretive. Then given u_o ∈ D(A), there
exists a unique function u ∈ C^1([0, ∞); X) ∩ C([0, ∞); D(A)) satisfying

(d/dt)u + Au = 0, t ≥ 0, u(0) = u_o.

Moreover,

‖u(t)‖_X ≤ ‖u_o‖_X and ‖(d/dt)u(t)‖_X = ‖Au(t)‖_X ≤ ‖Au_o‖_X, t ≥ 0.

The map u_o ↦ u(t), extended by continuity to all of X, is denoted by e^{−At}. It is a
continuous (C_0-)semigroup of contractions on X, and −A is called its infinitesimal
generator.

Proof For a proof, see [35, Chap. 7].
We say that f : X → X is Lipschitz if

‖f(u_1) − f(u_2)‖_X ≤ C_f ‖u_1 − u_2‖_X, ∀u_1, u_2 ∈ X,

for some constant C_f > 0. If A is m-accretive, then the variation of constants formula
applied to (1.6) gives the following reformulation of Eq. (1.6):

u(t) = e^{−tA} u_o + ∫_0^t e^{−(t−s)A} f(u(s)) ds.

Such a function u is called a mild solution of Eq. (1.6).
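In finite dimensions, e^{−tA} is the matrix exponential and the variation-of-constants formula can be evaluated by quadrature; the following Python sketch (the matrix A, the constant forcing f ≡ b, the time t, and the quadrature rule are all assumed choices) compares the quadrature value with the closed form available in this particular constant-forcing case.

import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, -1.0],
              [0.0, 1.0]])               # an accretive matrix (assumed example)
b = np.array([1.0, 0.5])                 # constant, hence Lipschitz, nonlinearity f(u) = b
u0 = np.array([0.0, 1.0])
t = 1.3

# Mild solution u(t) = e^{-tA} u0 + ∫_0^t e^{-(t-s)A} f(u(s)) ds, integral by the rectangle rule.
s = np.linspace(0.0, t, 2001)
ds = s[1] - s[0]
integral = sum(expm(-(t - si) * A) @ b for si in s[:-1]) * ds
u_mild = expm(-t * A) @ u0 + integral

# For f = b the integral equals A^{-1}(I - e^{-tA}) b, which serves as a consistency check.
u_exact = expm(-t * A) @ u0 + np.linalg.solve(A, (np.eye(2) - expm(-t * A)) @ b)
print(np.allclose(u_mild, u_exact, atol=1e-2))    # True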


We have the following results (for proofs, see [7]).
Theorem 1.5 Suppose that A is an m-accretive operator and f is Lipschitz. Then
there exists a mild solution u(t) to (1.6) for t ≥ 0. Further, for T > 0, there exists
C_T > 0 such that for all u_o ∈ X,

‖u(t)‖_X ≤ C_T (1 + ‖u_o‖_X), 0 ≤ t ≤ T.

Usually, the evolution Eq. (1.6) is understood in the weak sense.

Proposition 1.3 Let X be a Hilbert space, let A be a self-adjoint m-accretive opera-
tor and f a Lipschitz function. Then the mild solution u(t) of (1.6) given by Theorem
1.5 belongs to D(A^{1/2}) and (d/dt)u(t) ∈ D(A^{−1/2}) for all t > 0.

The above proposition does not provide enough smoothness of u(t) to interpret
(1.6) directly, since we do not know whether u(t) ∈ D(A). We can, however, develop
the weak form using test functions v ∈ D(A^{1/2}). Taking the inner product of (1.6) with
v gives

⟨(d/dt)u, v⟩ = −⟨Au, v⟩ + ⟨f(u), v⟩,

or equivalently,

⟨A^{−1/2}(d/dt)u, A^{1/2}v⟩ = −⟨A^{1/2}u, A^{1/2}v⟩ + ⟨f(u), v⟩.

And so, due to the result in the above proposition, the expression is well defined for
t > 0. This leads to the definition of a weak solution for Eq. (1.6): let V = D(A^{1/2})
and V′ = D(A^{−1/2}), which is the dual of V. We say that u : [0, T] → V is a weak
solution of (1.6) if for almost all s ∈ [0, T], we have (d/dt)u(s) ∈ V′ and

⟨(d/dt)u(s), v⟩ = −a(u(s), v) + ⟨f(u(s)), v⟩, ∀v ∈ V, where a(u, v) := ⟨A^{1/2}u, A^{1/2}v⟩.

Stochastic Processes
For this section, we refer for further details to [113]. A triple (Ω, F , P) is called
a probability space, where Ω is the set of possible outcomes, F is a σ -algebra of
subsets of Ω, called the set of events, and P : F → [0, 1] is a probability measure,
which assigns “probabilities” to the outcomes of Ω with P(Ω) = 1.
An F-measurable function X : Ω → H, i.e., a function such that X^{−1}(B) ∈ F
for each Borel set B of H, where H is a Banach space, is called a random variable. A
family {X(t) : t ≥ 0} of random variables is called a stochastic process. The quantity

E(X) := ∫_Ω X dP

is called the expectation, or the average, of the random variable X.

Let V ⊂ F be a sub-σ-algebra of F. The random variable X : Ω → H is usu-
ally not V-measurable. Thus, the integral ∫_V X d(P|_V), where V ∈ V, cannot be
well defined in general. However, the local averages ∫_V X dP can be recovered in
(Ω, V, P|_V) via the conditional expectation. A conditional expectation of X given
V, denoted by E(X|V), is a V-measurable function E(X|V) : Ω → H that satisfies

∫_V E(X|V) dP = ∫_V X dP for each V ∈ V.

Let X(t) be a stochastic process such that E(|X(t)|) < ∞ for all t ≥ 0. Then X(t)
is called a martingale if

X(s) = E(X(t)|U(s)), P-a.s., for all t ≥ s > 0,

where U(s) := σ(X(τ) : 0 ≤ τ ≤ s) is the σ-algebra generated by the random vari-
ables X(τ) for 0 ≤ τ ≤ s. The quadratic variation of X(t) is the process, written
[X](t), defined as

[X](t) = lim_{‖Δ‖→0} Σ_{k=1}^N (X(t_k) − X(t_{k−1}))^2,

where Δ = {0 = t_0 < t_1 < · · · < t_N = t} ranges over partitions of the interval [0, t] and
the norm ‖Δ‖ of the partition Δ is its mesh.

A collection of σ-algebras F_t, t ≥ 0, satisfying F_s ⊂ F_t for all 0 ≤ s ≤ t is
called a filtration, and a stochastic process X(t) is said to be adapted to the filtration
F_t if for each t ≥ 0, X(t) is F_t-measurable. A random variable τ : Ω → [0, ∞) is
an F_t-stopping time if

{τ ≤ t} ∈ F_t, ∀t ≥ 0.

The stochastic process X(t) is said to be a local martingale if there is a sequence
of stopping times (τ_n)_n such that τ_n → ∞, P-a.s., as n → ∞, and for each n,
X(min{t, τ_n}) is a martingale.

The stochastic process X(t) is an F_t-semimartingale if X = M + Y, where M
is a local martingale with respect to F_t and Y is an F_t-adapted finite variation
process, that is, for each t > 0,

sup Σ_i |Y(t_{i+1}) − Y(t_i)| < ∞,

where the supremum is taken over all partitions of [0, t].
The following result, which is related to the martingale convergence theorem, is
important for obtaining convergence in probability of stochastic processes. Its proof
can be found in [85].
Lemma 1.1 Let I and I_1 be nondecreasing adapted processes, V a nonnegative
semimartingale, and M a local martingale such that E(V(t)) < ∞, ∀t ≥ 0, I_1(∞) <
∞, P-a.s., and V(t) + I(t) = V(0) + I_1(t) + M(t), ∀t ≥ 0. Then there exists

lim_{t→∞} V(t) < ∞, P-a.s., and I(∞) < ∞, P-a.s.

A real-valued random variable Y is said to be N(0, q)-Gaussian distributed if

P[a ≤ Y ≤ b] = (1/√(2πq)) ∫_a^b e^{−x^2/(2q)} dx, for all −∞ < a < b < ∞.

A real-valued stochastic process β(·) is called a Brownian motion or a Wiener
process if
(a) β(0) = 0, P-a.s.,
(b) β(t) − β(s) is N(0, t − s)-Gaussian distributed for all 0 ≤ s ≤ t,
(c) for all times 0 < t_1 < t_2 < · · · < t_n, the random variables β(t_1), β(t_2) − β(t_1),
. . . , β(t_n) − β(t_{n−1}) are independent.
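A Brownian path is easily simulated from properties (a)–(c): independent N(0, Δt) increments are summed cumulatively. A minimal Python sketch (horizon, step size, and sample sizes are arbitrary choices made here):

import numpy as np

rng = np.random.default_rng(42)
T, n = 1.0, 10_000
dt = T / n

# Property (b): increments are N(0, dt); property (c): independent; property (a): start at 0.
increments = rng.normal(0.0, np.sqrt(dt), size=n)
beta = np.concatenate(([0.0], np.cumsum(increments)))

# Empirical sanity checks over many independent paths: E[beta(T)] = 0 and Var[beta(T)] = T.
paths = rng.normal(0.0, np.sqrt(dt), size=(5000, n)).cumsum(axis=1)
print(round(paths[:, -1].mean(), 3))    # approximately 0
print(round(paths[:, -1].var(), 3))     # approximately T = 1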
We have the following bounds (for a proof, see [10, Lemma 4.6]).

Lemma 1.2 Let β(t), t ≥ 0, be a real Brownian motion in some probability space
(Ω, F, P). Then for each λ > 0, we have

P( sup_{t>0} e^{β(t)−λt} ≥ r ) = P( e^{sup_{t>0}{β(t)−λt}} ≥ r ) = P( sup_{t>0} {β(t) − λt} ≥ log r ) = r^{−2λ}.

Let U be another Hilbert space. A simple process is a stochastic process Φ(t) of
the form

Φ(t) = Σ_{i=1}^N Φ_{i−1} 1_{[t_{i−1}, t_i)}(t),

where 0 = t_0 < t_1 < · · · < t_N = 1 is a partition of [0, 1], Φ_{i−1} is an L(U, H)-
valued F_{t_{i−1}}-measurable random variable, and 1_{[t_{i−1}, t_i)} is the characteristic function
of the interval [t_{i−1}, t_i). Then one defines the stochastic integral

∫_0^t Φ(s) dβ(s) := Σ_{i=1}^N Φ_{i−1} [β(min{t_i, t}) − β(min{t_{i−1}, t})].

This definition can be extended, in a standard way (see [113], pp. 1–4), to adapted
processes Φ : [0, T] → L(U, H) such that

P( ∫_0^t ‖Φ(s)‖^2_{HS} ds < ∞, t ≥ 0 ) = 1,

where ‖·‖_{HS} is the Hilbert–Schmidt norm in L(U, H). Then we have

(a) Itô's isometry:

E ‖ ∫_0^t Φ(s) dβ(s) ‖^2 = E ( ∫_0^t ‖Φ(s)‖^2_{HS} ds ),

(b) ∫_0^t Φ(s) dβ(s) is a martingale; in particular,

E ( ∫_0^t Φ(s) dβ(s) ) = 0,

(c) the solution X = X(t) of the stochastic differential equation

dX(t) = Φ(t) dβ(t) + f(t) dt, t ∈ (0, T), X(0) = x,

is defined as the process given by

X(t) := x + ∫_0^t f(s) ds + ∫_0^t Φ(s) dβ(s), t ∈ [0, T],

(d) Stochastic calculus: let X(t) = (X(t)_1, . . . , X(t)_n)^T be a random vector such
that

dX(t) = μ(t) dt + G(t) dβ(t),

for a vector μ(t) and a matrix G(t); then Itô's lemma states that

dφ(t, X(t)) = [ ∂φ/∂t + (∇_X φ)^T μ(t) + (1/2) Tr( G^T(t) (H_X φ) G(t) ) ] dt + (∇_X φ)^T G(t) dβ(t),

where φ : [0, ∞) × R^n → R is once differentiable in the first variable and twice
differentiable in the second one, ∇_X φ is the gradient of φ with respect to X, H_X φ
is the Hessian matrix of φ with respect to X, and Tr is the trace operator.
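Itô's isometry and the martingale property of the stochastic integral can be checked by Monte Carlo simulation; in the Python sketch below (the integrand Φ(s) = s, the grid, and the number of sample paths are arbitrary choices, not part of the theory), the integral is approximated by its defining left-endpoint Riemann–Itô sums.

import numpy as np

rng = np.random.default_rng(7)
T, n, n_paths = 1.0, 1000, 20_000
dt = T / n
t_grid = np.linspace(0.0, T, n + 1)

phi = t_grid[:-1]                                       # deterministic integrand Φ(s) = s
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))    # Brownian increments, many paths

# Σ_k Φ(t_k) [β(t_{k+1}) − β(t_k)] ≈ ∫_0^T Φ(s) dβ(s), evaluated at left endpoints.
ito_integral = (phi * dB).sum(axis=1)

# Martingale property: E ∫ Φ dβ = 0.  Itô isometry: E[(∫ Φ dβ)^2] = ∫_0^1 s^2 ds = 1/3.
print(round(ito_integral.mean(), 3))           # approximately 0
print(round((ito_integral ** 2).mean(), 3))    # approximately 1/3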
Some important inequalities:

1. Chebyshev's inequality. If X is a random variable and 1 ≤ p < ∞, then

P(|X| ≥ λ) ≤ (1/λ^p) E(|X|^p), ∀λ > 0.

2. Borel–Cantelli Lemma application. If X_k → X in probability, then there exists
a subsequence {X_{k_j}}_{j=1}^∞ ⊂ {X_k}_{k=1}^∞ such that X_{k_j}(ω) → X(ω) for almost every ω.

3. Burkholder–Davis–Gundy inequality. If X is a martingale with X(0) = 0 and
1 < p < ∞, then there exist c_p and C_p such that

c_p E( [X]^{p/2}(t) ) ≤ E( max_{0≤s≤t} |X(s)|^p ) ≤ C_p E( [X]^{p/2}(t) ).

Let H be a separable Hilbert space and consider the stochastic differential equation

dX(t) + AX(t) dt = f(t) dt + B(X(t)) dβ(t), X(0) = x, (1.7)

where −A is the infinitesimal generator of a C_0-semigroup e^{−tA} on H, and B is a
continuous operator. The adapted process X(t) is said to be a mild solution of (1.7) if

X(t) = e^{−tA} x + ∫_0^t e^{−(t−s)A} f(s) ds + ∫_0^t e^{−(t−s)A} B(X(s)) dβ(s), t ∈ [0, T].

We have (see [113, p. 67]) the following theorem.

Theorem 1.6 Assume that f ∈ L^2(0, T; H) and

‖e^{−tA} Bx − e^{−tA} By‖_{HS} ≤ γ ‖x − y‖_H, ∀t ∈ [0, T], x, y ∈ H.

Then (1.7) has a unique mild solution X ∈ C([0, T]; L^2(Ω, F, P, H)), which is the
space of all continuous functions [0, T] → L^2(Ω, F, P, H) that are adapted to the
filtration F_t.

This result extends to nonlinear differential equations of the form

dX + AX dt + F(X) dt = f dt + B(X) dβ,

for Lipschitz mappings F : H → H.


A stochastic differential equation of the form

dX(t) + AX(t) dt = f(t) dt + h(t) X(t) dβ(t), t ≥ 0,

can be transformed into the equivalent random deterministic equation

∂_t y + e^{−h(t)β(t)} A( e^{h(t)β(t)} y ) = f(t) − ( (d/dt)h(t) β(t) + (1/2) h^2 ) y, P-a.s., t ≥ 0,

via the rescaling y(t) := e^{−h(t)β(t)} X(t) (for more details, see [22]).
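The rescaling above is easy to visualize in the scalar, constant-coefficient case dX = −aX dt + hX dβ, whose exact solution is X(t) = x_0 exp((−a − h^2/2)t + hβ(t)); the Python sketch below (all coefficients and step sizes are assumed choices) integrates the equation by the Euler–Maruyama scheme and checks that y(t) = e^{−hβ(t)} X(t) indeed follows the noise-free equation y′ = (−a − h^2/2) y.

import numpy as np

rng = np.random.default_rng(3)
a, h, x0 = 1.0, 0.5, 2.0                 # dX = -a X dt + h X dβ,  X(0) = x0 (assumed data)
T, n = 1.0, 100_000
dt = T / n

dB = rng.normal(0.0, np.sqrt(dt), size=n)
beta = np.cumsum(dB)

X = np.empty(n + 1)
X[0] = x0
for k in range(n):                        # Euler–Maruyama step
    X[k + 1] = X[k] - a * X[k] * dt + h * X[k] * dB[k]

X_exact = x0 * np.exp((-a - 0.5 * h ** 2) * T + h * beta[-1])
print(round(X[-1], 3), round(X_exact, 3))                     # close to each other

# Rescaled variable y(T) = e^{-h β(T)} X(T) versus the deterministic value x0 e^{(-a - h^2/2) T}.
print(round(np.exp(-h * beta[-1]) * X[-1], 3),
      round(x0 * np.exp((-a - 0.5 * h ** 2) * T), 3))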
Chapter 2
Stabilization of Abstract Parabolic
Equations

In this chapter, we present a technique to design asymptotically exponentially stabi-
lizing boundary proportional-type feedback controllers for nonlinear parabolic-like
equations, namely equations whose linear parts are generated by analytic C_0-
semigroups. In what follows, we will simply refer to them as parabolic equations, in
concordance with the title of this book. The feedback law's main features are that it is
expressed in an explicit simple form and has a finite-dimensional structure involving
only the eigenfunctions of the linear operator obtained from the linearized equation.
As we will see, these features will enable us to obtain the first results to appear in the
literature regarding the stabilization of different equations, such as the stochastic heat
equation and the Cahn–Hilliard equations, and regarding boundary stabilization to
nonsteady states for parabolic-type equations.

2.1 Presentation of the Abstract Model

Let O be an open bounded domain in Rd , d ∈ N∗ , with smooth boundary ∂O, split


into two parts ∂O = Γ1 ∪ Γ2 , such that Γ1 has nonzero surface measure. We set n for
the outward unit normal to the boundary ∂O. Let A be a closed and densely defined
linear differential operator on L 2 (O), with domain D(A), and let F0 : D(F0 ) ⊂
D(A) → L 2 (O) be a nonlinear (differential) operator. We assume that
(A1) −A generates a C0 -analytic semigroup on L 2 (O).
(A2) For all y, ŷ ∈ D(A), there exists the limit

F_0′(ŷ)(y) := lim_{λ→0} (1/λ) [ F_0(ŷ + λy) − F_0(ŷ) ]

in L 2 (O). Moreover, F0 (0) = 0, and for some α ∈ (0, 1) and C > 0, we have


‖F_0′(ŷ)y‖ ≤ α‖Ay‖ + C‖y‖, ∀y ∈ D(A). (2.1)

Now fix some ŷ ∈ D(A), and introduce the linear operator

A := A + F_0′(ŷ), D(A) = D(A). (2.2)

It is easy to see that A is closed and densely defined and that −A generates a C0 -
semigroup on L 2 (O). The operator A can be viewed as the linearization of A + F0
around ŷ. In addition to (A1) and (A2), we assume that
(A3) the resolvent (λI + A)−1 of A is compact in L 2 (O).
Hypothesis (A3) implies, via the Fredholm–Riesz theory (see Theorem 1.3), that the
operator A has a countable set of eigenvalues λ_j, j ∈ N* (repeated according to
their multiplicities), and corresponding eigenfunctions ϕ_j, j ∈ N*, i.e.,

Aϕ j = λ j ϕ j , j ∈ N∗ .

Besides this, given ρ > 0, there is a finite number N of eigenvalues such that

λ j < ρ, j = 1, 2, . . . , N , while λ j ≥ ρ, j = N + 1, N + 2, . . . . (2.3)

The first N eigenvalues are usually called the unstable eigenvalues. Recall that if
the algebraic multiplicity of an eigenvalue coincides with the geometric multiplicity,
then that eigenvalue is called semisimple (see Chap. 1). We add to the above context
the following assumption:
(A4) Each unstable eigenvalue λ j , j = 1, 2, . . . , N , is semisimple.

Example 2.1 A classical example is A = −Δ, with

D(A) = { y ∈ H^2(O) : y|_{Γ_1} = 0, (∂y/∂n)|_{Γ_2} = 0 },

F_0(y) = f(y), where f is some C^1-nonlinear function of y, and ŷ ∈ L^∞(O). In this
case,

A = −Δ + f′(ŷ).

Taking into account that A is self-adjoint, one can easily check that hypotheses
(A1)–(A4) hold for this case.
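To see the assumptions at work numerically, one can discretize a one-dimensional analogue of this example by finite differences; in the Python sketch below (the grid, the constant potential f′(ŷ) ≡ −30, and the threshold ρ are all assumed choices) the operator −d^2/dx^2 + f′(ŷ) on (0, 1), with a Dirichlet condition at x = 0 (playing the role of Γ_1) and a Neumann condition at x = 1 (playing Γ_2), is assembled as a matrix and its eigenvalues below ρ are counted, in the spirit of (2.3).

import numpy as np

m = 400
h = 1.0 / m
fprime_yhat = -30.0 * np.ones(m)                 # assumed constant potential f'(yhat)

main = 2.0 / h ** 2 + fprime_yhat
A = (np.diag(main)
     + np.diag(-np.ones(m - 1) / h ** 2, 1)
     + np.diag(-np.ones(m - 1) / h ** 2, -1))
A[-1, -1] = 1.0 / h ** 2 + fprime_yhat[-1]       # one-sided stencil encoding u'(1) = 0

eigvals = np.sort(np.linalg.eigvalsh(A))
rho = 0.0
print(eigvals[:4])                     # smallest eigenvalues of the discretized operator
print(int(np.sum(eigvals < rho)))      # N = number of unstable eigenvalues (those below rho)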

Denote by ⟨·, ·⟩ and by ‖·‖ the scalar product and the corresponding norm in
L^2(O), respectively. Since the spectrum of A might contain some complex eigenval-
ues, it will be convenient in the sequel to view A as a linear operator (still denoted
by A) in the complexified space L^2(O) + iL^2(O) (which will still be denoted by
L^2(O)). We denote by ⟨·, ·⟩ and by ‖·‖ the corresponding scalar product and the
induced norm of the complexified L^2(O), respectively.
It is easily seen that the finite part of the spectrum {λ_j}_{j=1}^N can be separated from
the rest of the spectrum by a rectifiable curve Γ_N in the complex plane C. Set X_u to
be the linear space generated by the eigenfunctions {ϕ_j}_{j=1}^N, that is,

X_u := lin span {ϕ_j}_{j=1}^N.

Then the operator P_N : L^2(O) → X_u defined by

P_N := (1/2πi) ∫_{Γ_N} (λI − A)^{−1} dλ (2.4)

is known as the algebraic projection of L^2(O) onto X_u. It is easy to see that the
operator

A_u := P_N A (2.5)

maps the space X_u into itself and σ(A_u) = {λ_j}_{j=1}^N. More exactly, A_u : X_u → X_u
is finite-dimensional and can be represented by an N × N matrix. (σ(A_u) stands for
the spectrum of the operator A_u; see Chap. 1.)

If A* is the adjoint operator of A, then its eigenvalues are precisely {λ̄_j}_{j∈N*},
with the corresponding eigenfunctions

A* ϕ_j* = λ̄_j ϕ_j*, j ∈ N*.

The adjoint P_N* of P_N is given by

P_N* = (1/2πi) ∫_{Γ_N} (λI − A*)^{−1} dλ,

while X_N* = lin span {ϕ_j*}_{j=1}^N = P_N*(L^2(O)).
Via the Schmidt orthogonalization procedure, it follows by hypothesis (A4) that
one can find a biorthogonal system {ϕ_j}_{j=1}^N, {ϕ_j*}_{j=1}^N of eigenfunctions of A
and A*, respectively, i.e.,

⟨ϕ_i, ϕ_j*⟩ = δ_{ij}, i, j = 1, 2, . . . , N, (2.6)

Aϕ_j = λ_j ϕ_j, A* ϕ_j* = λ̄_j ϕ_j*. (2.7)

Here δ_{ij} stands for the Kronecker symbol, namely δ_{ij} = 1 if i = j and δ_{ij} = 0 if i ≠ j.
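In finite dimensions these objects are easy to produce explicitly: for a (generally nonsymmetric) matrix with semisimple eigenvalues, the dual basis of the right eigenvectors yields a biorthogonal system as in (2.6)–(2.7), and P_N u = Σ_{j=1}^N ⟨u, ϕ_j*⟩ ϕ_j recovers the algebraic projection of (2.4)–(2.5). A Python sketch with an assumed toy matrix (real, distinct eigenvalues):

import numpy as np

A = np.array([[1.0, 2.0, 0.0, 0.0, 0.0],
              [0.0, 2.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 3.0, 2.0, 0.0],
              [0.0, 0.0, 0.0, 4.0, 1.0],
              [0.0, 0.0, 0.0, 0.0, 5.0]])       # nonsymmetric, eigenvalues 1, ..., 5

lam, Phi = np.linalg.eig(A)                     # right eigenvectors: A Phi[:, j] = lam[j] Phi[:, j]

# Dual system: the columns of (Phi^{-1})^H are eigenvectors of A^* and satisfy <phi_i, phi_j^*> = delta_ij.
Psi = np.linalg.inv(Phi).conj().T
print(np.allclose(Psi.conj().T @ Phi, np.eye(5)))      # True: biorthogonality, cf. (2.6)
print(np.allclose(A.T @ Psi, Psi @ np.diag(lam)))      # True: A^* phi_j^* = lam_j phi_j^* (real case)

# Algebraic projection onto the span of the first N eigenvectors: P_N u = Σ_j <u, phi_j^*> phi_j.
N = 2
P = Phi[:, :N] @ Psi[:, :N].conj().T
print(np.allclose(P @ P, P), np.allclose(P @ A, A @ P))   # True True: a projection commuting with A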

If y ∈ L^2(O) but y ∉ D(A), we will understand by Ay a differential form involv-
ing y rather than the operator A acting on y. In this light, consider the problem

∂y/∂t + Ay + F_0(y) = 0 in (0, ∞) × O,
B.C.(y, 0) on (0, ∞) × ∂O, (2.8)
y(0) = y_o in O.

Here B.C.(y, 0) denotes some appropriate boundary conditions for the unknown
function y, and y_o ∈ D(A) is the initial data. The operator ∂/∂t + A + F_0 is called the
abstract parabolic differential operator.

Example 2.2 With respect to Example 2.1, Eq. (2.8) reads

∂y/∂t − Δy + f(y) = 0 in (0, ∞) × O,
y = 0 on (0, ∞) × Γ_1, ∂y/∂n = 0 on (0, ∞) × Γ_2, (2.9)
y(0) = y_o in O.

Under appropriate conditions on A, F_0, and B.C.(y, 0), problem (2.8) is well
posed. Here we do not give any further details about the well-posedness, since these
were discussed in Chap. 1, Theorem 1.5. We simply assume that (2.8) with B.C.(y, 0)
generates a semiflow y = y(t, y_o), t ≥ 0.
An equilibrium (steady-state or stationary) solution ŷ to system (2.8) is a solution
to the stationary equation (if there exists one)

A ŷ + F0 ( ŷ) = 0.

The equilibrium ŷ is said to be locally asymptotically exponentially stable if

lim_{t→∞} e^{ct} (y(t, y_o) − ŷ) = 0

in L^2(O) for all y_o in a neighborhood of ŷ. Here c > 0 is some constant.

By defining the fluctuation variable z := y − ŷ, the stability of ŷ can be equiva-
lently expressed as the stability of the null solution to the equation

∂z/∂t + Az + G(z) = 0 in (0, ∞) × O,
B.C.(z + ŷ, 0) on (0, ∞) × ∂O, (2.10)
z(0) = z_o := y_o − ŷ in O,

where

G(z) := F_0(z + ŷ) − F_0(ŷ) − F_0′(ŷ)(z). (2.11)

If the steady state ŷ is not stable, a way to stabilize it is to plug into (2.8) a
controller function u : [0, ∞) → U that takes values in another space U (which is
assumed Hilbert), obtaining thereby the following boundary controlled problem:

∂y/∂t + Ay + F_0(y) = 0 in (0, ∞) × O,
B.C.(y, u) on (0, ∞) × ∂O, (2.12)
y(0) = y_o in O,

or equivalently, by (2.10),

∂z/∂t + Az + G(z) = 0 in (0, ∞) × O,
B.C.(z, v) on (0, ∞) × ∂O, (2.13)
z(0) = z_o in O.
Example 2.3 The boundary control problems (2.12) and (2.13) associated with (2.9)
from Example 2.2 look like

∂y/∂t − Δy + f(y) = 0 in (0, ∞) × O,
y = u on (0, ∞) × Γ_1, ∂y/∂n = 0 on (0, ∞) × Γ_2, (2.14)
y(0) = y_o in O,

and

∂z/∂t − Δz + f′(ŷ)z + G(z) = 0 in (0, ∞) × O,
z = v := u − ŷ on (0, ∞) × Γ_1, ∂z/∂n = 0 on (0, ∞) × Γ_2, (2.15)
z(0) = z_o in O,

respectively. Here

G(z) := f(z + ŷ) − f(ŷ) − f′(ŷ)(z).

Hence the asymptotic exponential stabilization problem consists in finding a con-
troller u ∈ L^2(0, ∞; U) such that once it is inserted into Eq. (2.12), the correspond-
ing solution y = y(t, y_o, u) to Eq. (2.12) has the property that

lim_{t→∞} e^{ct} (y(t, y_o, u) − ŷ) = 0

in L^2(O) for some constant c > 0, for all y_o in a neighborhood of ŷ. Throughout this
book, we will use the shortened terminology stabilization (stability, stabilizability)
in referring to the asymptotic exponential stabilization (asymptotic exponential sta-
bility, stabilizability, respectively) of some system. If one can find such a controller,
then the equation is said to be stabilizable from the boundary. If the controller is in
feedback form, i.e.,

u(t) = K(y(t)), t ≥ 0,

where K is a given operator from L^2(O) to U, then Eq. (2.12) is said to be a closed-
loop equation.
In practice, a controller given in feedback form is the most desirable, since it
ensures better performance. Roughly speaking, if at time t ∗ , the solution of Eq.

(2.12) gets away from ŷ, then at the very same moment t ∗ , the feedback controller
u(t ∗ ) = K (y(t ∗ )) reacts and brings back the trajectory close to the steady state.
One can equivalently express the stabilization problem for Eq. (2.13). More pre-
cisely, the problem consists in finding a feedback control v such that once it is
inserted into Eq. (2.13), the corresponding solution z to the closed-loop equation
(2.13) satisfies
lim_{t→∞} e^{ct} z(t) = 0 in L^2(O).

There is another type of control largely used in the stabilization theory of distributed
parameter systems, namely the internal control. More precisely, in this case, the
controlled system (2.12) is of the form

∂y/∂t + Ay + F_0(y) = 1_{O_0} u in (0, ∞) × O,
B.C.(y, 0) on (0, ∞) × ∂O,
y(0) = y_o in O,

where 1_{O_0} is the characteristic function of a subdomain O_0 ⊂ O. While in (2.12) the


controller’s actuation and sensing are applied only through the boundary conditions,
in this case the controller actuation penetrates the domain of the PDE system or is
evenly distributed everywhere in the domain. Boundary control is generally consid-
ered to be physically more realistic, because actuation and sensing are nonintrusive,
but it turns out to be a harder problem than the internal control one. This is the main
reason why throughout this book, we will confine ourselves mostly to the boundary
control case (except in the last chapter).
One may consider as well the problem of (boundary or internal) stabilization to
non-steady states associated with (2.8). That means a nonstationary ŷ, i.e., ŷ = ŷ(t),
solution to
∂ŷ/∂t + Aŷ + F_0(ŷ) = 0.

The main difficulty in this case is that the corresponding linear operator A has time-
dependent spectrum, making useless all the considerations and results from the sta-
tionary case. Therefore, this is a challenging subject, and in a subsequent chapter,
we will pose and solve this problem for a special case.
The theory of control and stabilization uses a number of tools, many of them
developed in the 1960s by Kalman with his theory of filtering and algebraic approach
to control systems, then by Pontryagin with his maximum principle, which is a
generalization of Lagrange multipliers, and by Bellman with his principle of dynamic
programming, or by Lyapunov and his Lyapunov functions, among others. In the
present book, we will rely mainly on unique continuation results and construct a
Lyapunov function for the system. More exactly, the form of the feedback controller
is given a priori (guaranteed by a unique continuation result), and then, once it has
been plugged into the system, one proves the stability of the closed-loop system by
finding a Lyapunov function.
2.2 The Design of the Boundary Stabilizer 25

2.2 The Design of the Boundary Stabilizer

We begin to develop the ideas on how to construct the boundary proportional-type
controller v (equivalently, u) for the stationary case of ŷ. In order to represent (2.13)
as an abstract Cauchy problem, we will lift the B.C. into the equations. This will be
done via the Dirichlet map D, which will now be introduced. Let β ∈ L^2(∂O). We
denote by D_γ β the solution z̃ to the equation

A(z̃) − 2 Σ_{k=1}^N λ_k ⟨z̃, ϕ_k*⟩ ϕ_k + γ z̃ = 0 in O,
B.C.(z̃, β) on ∂O. (2.16)

Under appropriate boundary conditions B.C.(z̃, β) and γ > 0 sufficiently large,
there exists a unique solution to (2.16), defined such that the operator D_γ belongs to
L(L^2(∂O), H^{1/2}(O)) (for details, see [19] or [80], and also (1.4)).

For later use, we need to compute the scalar products ⟨D_γ β, ϕ_j*⟩, j = 1, 2, . . . , N.
To this end, scalar multiplying (2.16) by ϕ_j* and taking into account relations (2.6)
and (2.7) yields, via Green's formula, that

⟨D_γ β, ϕ_j*⟩ = −(1/(γ − λ_j)) ⟨β, Dϕ_j*⟩_0, j = 1, 2, . . . , N. (2.17)

Here

Dϕ_j* := −(γ − λ_j) D_γ* ϕ_j*, j = 1, 2, . . . , N,

where D_γ* denotes the adjoint operator of D_γ, and ⟨·, ·⟩_0 stands for the scalar product
in L^2(∂O).
As we shall see below, the algorithm requires a new hypothesis. More exactly,
(A5) None of the functions Dϕ_j*, j = 1, 2, . . . , N, is identically zero on the boundary ∂O.
This assumption is related to the unique continuation property of the eigenfunctions of
the adjoint A∗ of the linear operator A. It arises naturally in the context of boundary
control problems. In the existing literature on this subject, instead of hypothesis
(A5), a stronger one is assumed, namely linear independence of the traces of the
eigenfunctions on the boundary (see more in the “Comments” section below). For
such a hypothesis, it is usually hard to check its validity in practical examples, and
there are many simple cases of domains O for which it fails to hold. For all the
examples in this book, the weaker assumption (A5) is satisfied, while the one related
to linear independence is not. As a matter of fact, validation of assumption (A5), for
different models, will involve the major effort of this book, since once one has (A5)
satisfied (together with (A1)–(A4)), the control design algorithm may be applied
similarly, for all the models, as described in this chapter. Consequently, roughly
speaking, every evolution equation governed by an elliptic operator can be stabilized

from the boundary by a proportional-type feedback of the form (2.26) below, once a
unique continuation-type result such as (A5) is provided.

Example 2.4 For Example 2.3, the corresponding map D_γ is given by D_γ β := z̃, where
z̃ satisfies

−Δz̃ + f′(ŷ)z̃ − 2 Σ_{k=1}^N λ_k ⟨z̃, ϕ_k⟩ ϕ_k + γ z̃ = 0 in O,
z̃ = β on Γ_1, ∂z̃/∂n = 0 on Γ_2. (2.18)

Since the linear operator is self-adjoint, we have that ϕ_j* = ϕ_j, j ∈ N*. Then simple
computations show that D = ∂/∂n |_{Γ_1}, and that

⟨D_γ β, ϕ_j⟩ = −(1/(γ − λ_j)) ⟨β, ∂ϕ_j/∂n⟩_0, j = 1, 2, . . . , N,
⟨D_γ β, ϕ_j⟩ = −(1/(γ + λ_j)) ⟨β, ∂ϕ_j/∂n⟩_0, j = N + 1, N + 2, . . . . (2.19)

This time, ⟨·, ·⟩_0 stands for the scalar product in L^2(Γ_1). Assumption (A5), in this
case, says that the trace of ∂ϕ_j/∂n, j = 1, 2, . . . , N, cannot identically vanish on Γ_1.
Equivalently, if ϕ_j satisfies

−Δϕ_j + f′(ŷ)ϕ_j = λ_j ϕ_j in O, ϕ_j = 0 on Γ_1 and ∂ϕ_j/∂n = 0 on ∂O,

then necessarily ϕ_j ≡ 0 in O. Since the boundary ∂O is assumed to be smooth and
the Lebesgue measure of Γ_1 is assumed to be nonzero, it is known that for the elliptic
operator −Δ + f′(ŷ), the above condition holds. Therefore, for this case, assumption
(A5) is valid.

Next, for the convenience of the reader, we will split the presentation into two
parts: first, we strengthen the hypothesis (A4) by (A4.1) below, assuming that the
unstable eigenvalues are distinct. Then in the second part, we slightly adjust the
feedback law to show that it still achieves stability in the more general framework
given by hypothesis (A4).

2.2.1 The Case of Mutually Distinct Unstable Eigenvalues

For the moment, let us assume that


(A4.1) The unstable eigenvalues λi , i = 1, . . . , N , are simple, or, equivalently, are
mutually distinct.
We choose ρ < γ1 < γ2 < · · · < γ N to be N real constants such that Eq. (2.16)
is well posed for each of them, and denote by Dγi , i = 1, . . . , N , the corresponding
solutions.
2.2 The Design of the Boundary Stabilizer 27

N
Next let us denote by B the Gram matrix of the system D ϕ ∗j in the Hilbert
j=1
space L 2 (∂O). That is,
⎛ ⎞
D ϕ1∗ , D ϕ1∗ 0 D ϕ1∗ , D ϕ2∗ 0 . . . D ϕ1∗ , D ϕ N∗ 0
⎜ D ϕ ∗ , D ϕ ∗ 0 D ϕ ∗ , D ϕ ∗ 0 . . . D ϕ ∗ , D ϕ ∗ 0 ⎟
B := ⎜ 2 2 2 N ⎟
⎝ .............................................................................. ⎠ .
2 1 (2.20)
D ϕ N∗ , D ϕ1∗ 0 D ϕ N∗ , D ϕ2∗ 0 . . . D ϕ N∗ , D ϕ N∗ 0

Further, we introduce the matrices


⎛ ⎞
1
γk −λ1
0 ... 0
⎜ 0 1
... 0 ⎟
Λγk := ⎜ γk −λ2 ⎟
⎝ ................................. ⎠ , k = 1, . . . , N , (2.21)
0 0 . . . γk −λ 1
N

and
Bk := Λγk BΛγk , k = 1, . . . , N . (2.22)

We have the following result.

Proposition 2.1 The sum of the Bk , i.e., B1 + B2 + · · · + B N , is an invertible matrix.

Before we prove this, let us note that for each k = 1, 2, . . . , N , Bk is invertible


if and only if the Gramian B is invertible, and this happens only when the system

N
D ϕ ∗j is linearly independent in the Hilbert space L 2 (∂O). This usually is not
j=1
the case, even for simple domains O. Moreover, when the dimension d of the space
equals 1, D ϕ ∗j , j = 1, 2, . . . , N , are just some complex numbers, and therefore, it
makes no sense to talk about linear independence unless N equals 1. But usually N =
1 is not enough to ensure (2.3). To conclude, it is highly possible that each Bk , k =
1, 2, . . . , N , is singular, but it turns out that their sum is always nonsingular. The
ideas in this chapter are based on those from the work [13] of Barbu, which requires
the linear independence assumption. Here, we drop it by defining the feedback law
based on the invertible matrix B1 + B2 + · · · + B N , rather than only one Bk as is
done in [13].
The reason why we need the invertibility of B1 + · · · + B N becomes clearer in
the computations (2.35)–(2.37) below, which allow us to obtain in a simple form the
stable finite-dimensional differential system (2.39).
Proof of Proposition
⎛ ⎞ 2.1. Arguing by contradiction, let us assume that there exists a
z1
⎜ z2 ⎟
nonzero z = ⎜ ⎟
⎝ . . . ⎠ ∈ C such that (B1 + · · · + B N )z = 0. Scalar multiplying this
N

zN
relation by z in C N yields
28 2 Stabilization of Abstract Parabolic Equations
 2
N   N 
  1 
 z D ∗
ϕ (x)  d x = 0.
 j 
 j=1 γk − λ j
j
k=1 ∂O 

We deduce from the above that


N
1
zj D ϕ ∗j (x) = 0, a.e. on ∂O,
j=1
γk − λ j

for all k = 1, . . . , N . This gives an N × N linear homogeneous system, with the


unknowns z j , j = 1, . . . , N , for almost all x ∈ ∂O. The determinant of the matrix
of the corresponding system is
 1 
 γ1 −λ1 D ϕ1∗ (x) γ1 −λ 1
D ϕ2∗ (x) . . . γ1 −λ 1
D ϕ N∗ (x) 
 1 2 N 
  ∗  ∗  ∗ 
 γ2 −λ1 D ϕ1 (x) γ2 −λ2 D ϕ2 (x) . . . γ2 −λ N D ϕ N (x) 
1 1

 ........................................................................ 
 
 1 D ϕ ∗ (x) 1 D ϕ ∗ (x) . . . 1 D ϕ ∗ (x) 
γ N −λ1 1 γ N −λ2 2 γ N −λ N N
 1 
⎛ ⎞  γ1 −λ 1
. . . γ1 −λ
1

  1 γ1 −λ2 N 
N
 1 1
. . . γ2 −λ N 
1 (2.23)
= ⎝ D ϕ ∗j (x)⎠  γ2 −λ1 γ2 −λ2 
 ................................... 
j=1
 1 1
. . . γ N −λ N 
1
γ N −λ1 γ N −λ2
⎛ ⎞  
N  N
(−1) j−1  j−1
(λ − λ )(γ − γ )
= ⎝ D ϕ ∗j (x)⎠
j k j k
= 0,
j=1 j=2
γ j − λ j k=1 (γk − λ j )(γ j − λk )

 N
at least for some x ∈ ∂O, by virtue of the unique continuation property of D ϕi∗ i=1
 N
assumed in (A5), the fact that the set λ j j=1 contains distinct elements, and the
inequality
ρ < γ1 < γ 2 < · · · < γ N .

This implies that the above homogeneous system has a unique solution, namely the
trivial one z = 0. This is in contradiction to our assumption. Hence we conclude that
the sum B1 + · · · + B N is indeed an invertible matrix. 
Proposition 2.1 enables us to define the matrix

A := (B1 + B2 + · · · + B N )−1 . (2.24)

Now let us introduce the following proportional-type feedback laws:


2.2 The Design of the Boundary Stabilizer 29

⎛ ⎞ ⎛ 1 ⎞
 z(t), ϕ1∗  γk −λ1
D ϕ1∗ (x) 
⎜ z(t), ϕ ∗  ⎟ ⎜ 1 D ϕ2∗ (x) ⎟
vk (z(t))(= vk (t, x)) := A ⎜ 2 ⎟ ⎜ γk −λ2 ⎟
⎝ .............. ⎠ , ⎝ ....................... ⎠ , (2.25)
z(t), ϕ N∗  1
γk −λ N
D ϕ N∗ (x)
N
t ≥ 0, x ∈ ∂O, for k = 1, 2, . . . , N .

We recall that ·, · N denotes the standard scalar product in C N .


Then we define v as

v(z) := v1 (z) + v2 (z) + · · · + v N (z),

which in condensed form can be written as


⎛ ⎞ ⎛  ∗⎞
 z(t), ϕ1∗  D ϕ1 
⎜ z(t), ϕ ∗  ⎟ ⎜ D ϕ ∗ ⎟ N
v = ΛS A ⎝ ⎜ 2 ⎟ , ⎜ 2 ⎟ , where Λ := Λγk . (2.26)
............ ⎠ ⎝ ........ ⎠ S
k=1
z(t), ϕ N∗  D ϕ N∗ N

Next, we lift the boundary control v into Eq. (2.13). To achieve this, by (2.13) and
N
(2.16), setting η := z − Dγk vk , we have
k=1

N 
∂η ∂Dγk vk
+ Aη + G(z) = − − ADγk vk
∂t k=1
∂t

∂   
N N N
=− Dγk vk − 2 λ j Dγk vk , ϕ ∗j ϕ j + γk Dγk vk .
∂t k=1 k, j=1 k=1
(2.27)
Thus

∂  
N N
∂η
+ Aη = − Dγ vk + G (z), η(0) = z o − Dγk vk (z(0)),
∂t ∂t k=1 k k=1

where


N 
N
G (z) := −G(z) − 2 λ j Dγk vk (z), ϕ ∗j ϕ j + γk Dγk vk (z).
k, j=1 k=1

Recall that −A generates a C0 -semigroup. Then the variation of constants formula


yields
30 2 Stabilization of Abstract Parabolic Equations

  t
∂ 
t N
η(t) = e−tA η(0) − e−(t−s)A Dγk vk ds + e−(t−s)A G (z(s))ds.
0 ∂s k=1 0

Some integration by parts performed on the first integral term leads us to


 t 
N  t
−tA −(t−s)A
z(t) = e zo + e à Dγk vk ds + e−(t−s)A G (z(s))ds, (2.28)
0 k=1 0

where à stands for the extension of the operator A to the whole space L 2 (O) (see
(1.2) in Chap. 1). For the sake of notational simplicity, in the sequel we will omit
writing the symbol ˜, but keep in mind that by A we refer, in fact, to the extended
operator of A.
Hence, we finally get that (2.13) is equivalent to

dz N
+ Az + G(z) = (γk I + A) Dγk vk (z)
dt k=1
(2.29)

N
−2 λ j Dγk vk (z), ϕ ∗j ϕ j , t ≥ 0; z(0) = z o .
k, j=1

In order to show the stability of (2.29), we first consider only the linear part of it,
and obtain the following result.

Theorem 2.1 Under (A1)–(A5) and (A4.1), the unique solution to the linear equa-
tion
dz N
+ Az = (γk I + A) Dγk vk (z)
dt k=1
(2.30)
N

−2 λ j Dγk vk (z), ϕ j ϕ j , t ≥ 0; z(0) = z o ,
k, j=1

satisfies
z(t)2 ≤ Ce−ρt z o 2 , ∀t ≥ 0, (2.31)

for some positive constant C.

Proof We represent the solution z to Eq. (2.30) as z = z u + z s , where

z u = PN z and z s = (I − PN )z,

with PN the projector defined by (2.4). In this way, (2.30) can be split as
2.2 The Design of the Boundary Stabilizer 31

dz u  N
on Xu : + Au z u = (γk I + Au ) Dγk vk (z u )
dt k=1
(2.32)

N
−2 λ j Dγk vk (z u ), ϕ ∗j ϕ j , t ≥ 0; z u (0) = PN z o ,
k, j=1

and

dz s  N
on Xs : + As z s = (γk I + As ) Dγk vk (z u ), t ≥ 0; z s (0) = (I − PN )z o .
dt k=1
(2.33)
Here
Au := PN A and As := (I − PN )A.

Recall that the spaces Xu := PN L 2 (O) and Xs := (I − PN )L 2 (O) are invariant


with respect to A, and we have that
 N  ∞
σ (Au ) = λ j j=1 and σ (As ) = λ j j=N +1 .

Note that by the definition of vk given by (2.25), we have vk = vk (z u ). That is why


in (2.32) and (2.33) we wrote vk (z u ).
We see that (2.32) is a finite-dimensional system, while (2.33) is infinite-
dimensional. However, −As generates a C0 -analytic semigroup in Xs , and together
with
σ (As ) ⊂ {λ ∈ C : λ ≥ ρ} ,

this implies that


e−tAs  L(L 2 (O ),L 2 (O )) ≤ Ce−ρt , ∀t ≥ 0, (2.34)

which tells us that the infinite-dimensional system (2.33) is governed by an expo-


nentially asymptotically stable operator. Consequently, one may expect that (2.33) is
stable. Hence, the main effort will be to prove the stability of the remaining system
(2.32).
For later purposes, we show that
⎛ ⎞ ⎛ ⎞
Dγk vk , ϕ1∗  z(t), ϕ1∗ 
⎜ Dγk vk , ϕ ∗  ⎟ ⎜ z(t), ϕ ∗  ⎟
⎜ 2 ⎟ ⎜ 2 ⎟
⎝ .................. ⎠ = −Bk A ⎝ ................ ⎠ , (2.35)
Dγk vk , ϕ N∗  z(t), ϕ N∗ 

where the Bk were introduced in (2.22) above, for k = 1, . . . , N . This is indeed so.
We have by (2.25) that
32 2 Stabilization of Abstract Parabolic Equations

⎛ ⎞ ⎛ ⎞
 z(t), ϕ1∗ 
1
γk −λ1
Dγk D ϕ1∗ , ϕ ∗j  
⎜ z(t), ϕ ∗  ⎟ ⎜  ∗ ∗ ⎟
2 ⎟ , ⎜ γk −λ2 Dγk D ϕ2 , φ j  ⎟
1
Dγk vk , ϕ ∗j  = A ⎜
⎝ ............... ⎠ ⎝ ................................. ⎠ , j = 1, . . . , N .
z(t), ϕ N∗  γ −λ
1
Dγk D ϕ N∗ , ϕ ∗j 
k N N

Then by relation (2.17), it follows that


⎛ ⎞ ⎛ − 1
D ϕ1∗ , D ϕ ∗j 0 

 z(t), ϕ1∗  (γk −λ1 )(γk −λ j )
⎜ z(t), ϕ ∗  ⎟ ⎜ − (γk −λ2 )(γ
1
D ϕ2∗ , D ϕ ∗j 0 ⎟
Dγk vk , ϕ ∗j  = A ⎜ 2 ⎟,⎜ k −λ j )

,

⎝ .............. ⎠ ⎝ ............................................. ⎟
⎠ (2.36)
z(t), ϕ N∗  − (γk −λ N )(γ
1
D ∗
ϕ , D  ∗
ϕ 0
k −λ j ) N j N
j = 1, . . . , N ,

from which we immediately obtain (2.35). In particular, this yields that


⎛ ⎞ ⎛ ⎞
Dγk vk , ϕ1∗  z(t), ϕ1∗ 
N
⎜ Dγk vk , ϕ ∗  ⎟ N
⎜ z(t), ϕ ∗  ⎟
⎜ 2 ⎟=− B A ⎜ 2 ⎟
⎝ ................ ⎠ k ⎝ ............... ⎠
k=1 k=1
Dγk vk , ϕ N∗  z(t), ϕ N∗ 
⎛ ⎞
 N  z(t), ϕ1∗ 
 ⎜ z(t), ϕ ∗  ⎟
=− Bk A ⎜ 2 ⎟
⎝ ............... ⎠ (2.37)
k=1
z(t), ϕ N∗ 
⎛ ⎞
z(t), ϕ1∗ 
⎜ z(t), ϕ ∗  ⎟
= −⎜ ⎟
⎝ ................. ⎠ ,
2

z(t), ϕ N∗ 

if we recall that A is the inverse of the sum of the Bk .


Returning to (2.32) and representing the solution z u as


N
zu = z j (t)ϕ j ,
j=1

where z j = z, ϕ ∗j , j = 1, 2, . . . , N , by (2.6), (2.7), (2.35), and (2.37), we may


rewrite (2.32) as

1
N
1
Zt + ΛZ = Zt + ΛZ − γk Bk AZ , t > 0; Z (0) = Zo , (2.38)
2 2 k=1

or equivalently,
2.2 The Design of the Boundary Stabilizer 33


N
Zt = −γ1 Z + (γ1 − γk )Bk AZ , t > 0; Z (0) = Zo . (2.39)
k=2

Here ⎛ ⎞
z(t), ϕ1∗ 
⎜ z(t), ϕ ∗  ⎟
Z := ⎜ 2 ⎟
⎝ .............. ⎠ and Λ := diag(λ1 , λ2 , . . . , λ N ).
z(t), ϕ N∗ 

Recall that Bk , k = 1, . . . , N , are positive semidefinite Hermitian matrices (by


the definition of Bk , Λγk and the fact that B is a Gram matrix). Therefore,

Bk q, q N ≥ 0, ∀q ∈ C N , k = 1, . . . , N .

Consequently, A = (B1 + · · · + B N )−1 is a positive definite Hermitian matrix (see


Chap. 1, for details). Thus one can define another positive definite Hermitian matrix,
1
denoted by A 2 , such that
1
Aq, q N = A 2 q2N , ∀q ∈ C N .
1
(A 2 is the square root of A; for details, see (1.3) or [34].)
Now let us scalar multiply Eq. (2.39) by AZ to get

1 d 1 1
N
A 2 Z (t)2N = −γ1 A 2 Z (t)2N + (γ1 − γk )Bk AZ (t), AZ (t) N ,
2 dt k=2
(2.40)
which leads to
1 d 1 1
A 2 Z (t)2N ≤ −γ1 A 2 Z (t)2N , t ≥ 0,
2 dt

since γ1 − γk < 0, k = 2, . . . , N . Here  ·  N stands for the Euclidean norm in C N .


1
The above relation implies the exponential decay of Z in the A 2 ·  N -norm, i.e.,

A 2 Z (t)2N ≤ e−2γ1 t A 2 Zo 2N , t ≥ 0,


1 1

1
where using the fact that A 2 is a positive definite Hermitian matrix, we finally arrive at

Z (t)2N ≤ Ce−2γ1 t Zo 2N , t ≥ 0, (2.41)

for some positive constant C.


(We note that A·, · N is a Lyapunov function for the differential system (2.32).)
Returning from C N to Xu , relation (2.41) yields
34 2 Stabilization of Abstract Parabolic Equations

z u (t)2 = Z (t)2N ≤ Ce−2γ1 t PN z o 2 , t ≥ 0. (2.42)

Finally, (2.33), (2.34), and (2.42) imply that

z s (t)2 ≤ Ce−ρt (I − PN )z o 2 , ∀t ≥ 0. (2.43)

Hence the conclusion of the theorem follows immediately by (2.42) and (2.43),
since z = z u + z s . 

Now let us return to the full nonlinear system (2.29). In order to be able to show
its stability, we need to strengthen the assumptions on the nonlinear part F0 to
(A6) |F0 (y)| ≤ C(|y|m + 1), ∀y ∈ R, where 0 < m < ∞ for d = 1, 2 and m = 3
for d = 3.
Then we have the following result.

Theorem 2.2 Let 1 ≤ d ≤ 3. Assume that (A1)–(A6) hold together with (A4.1).
Then for each z o ∈ Uo , there exists a unique solution z to the equation

dz N  N
+ Az + G(z) = (γk I + A) Dγk vk (z) − 2 λ j Dγk vk (z), ϕ ∗j ϕ j ,
dt (2.44)
k=1 k, j=1
t ≥ 0; z(0) = z o ,

satisfying, for some constants C, c > 0,

z(t)2 ≤ Ce−ct z o 2 , ∀t ≥ 0.

Here  
Uo := z o ∈ L 2 (O); z o  ≤ σ ,

for some sufficiently small σ > 0.

Proof Let us define


N 
N
A z := Az − (γk I + A) Dγk vk (z) + 2 λ j Dγk vk (z), ϕ ∗j ϕ j , z ∈ D(A ),
k=1 k, j=1

with D(A ) = D(A). By Theorem 2.1, we know that −A generates a C0 -analytic


semigroup in L 2 (O) that satisfies

e−tA 2L(L 2 (O ),L 2 (O )) ≤ Ce−ρt , ∀t ≥ 0. (2.45)


2.2 The Design of the Boundary Stabilizer 35

We may equivalently rewrite (2.44) as


 t
−tA
z(t) = e zo + e−(t−s)A G(z(s))ds, t ≥ 0. (2.46)
0

We are going to show that for z o  ≤ σ sufficiently small, Eq. (2.46) has a unique
solution z ∈ L m+1 (0, ∞; H 1 (O)). To this end, we will proceed as in [19]. More
precisely, we consider the map  : L m+1 (0, ∞; H 1 (O)) → L m+1 (0, ∞; H 1 (O))
defined by  t
z := e−tA z o + e−(t−s)A G(z(s))ds,
0

and we shall show that for r sufficiently small, it maps the ball
 
B(0, r ) := z ∈ L m+1 (0, ∞; H 1 (O)) : z L m+1 (0,∞;H 1 (O )) ≤ r

into itself and is a contraction on B(0, r ), for z o  sufficiently small and r suitably
chosen. By (A6), the definition of G, and the Sobolev embedding theorem (for
dimension 1 ≤ d ≤ 3), we have that

G(z 1 ) − G(z 2 ) ≤ Cz 1 − z 2  H 1 (O ) (z 1 mH 1 (O ) + z 2 mH 1 (O ) ),

while
H 1 (O ) .
G(z) ≤ Czm+1

Then arguing as in the proof of [10, Theorem 3.5], one may show that  is a
contraction on B(0, r ). Hence, via the contraction mapping theorem, for z o  ≤ σ
sufficiently small, Eq. (2.46) has a unique solution z ∈ L m+1 (0, ∞; H 1 (O)). By a
standard argument, such as the one in [10, Proposition 5.9], this implies also that for
some constants C, c > 0,

z(t) ≤ Ce−ct z o 2 , ∀t ≥ 0,

thereby completing the proof. 

To conclude this section, we recall the notation z := y − ŷ and see that Theorems
2.1 and 2.2 imply the following stabilization result for the original system (2.12).
Theorem 2.3 Under (A1)–(A6) and (A4.1), for 1 ≤ d ≤ 3, we have that for each
yo ∈ Uo , there exists a unique solution y to the equation
 ∂y
∂t
+ Ay + F0 (y) = 0 in (0, ∞) × O;
(2.47)
B.C.(y + ŷ, u(y)) on (0, ∞) × ∂O; y(0) = yo in O,

satisfying, for some constants C, c > 0,


36 2 Stabilization of Abstract Parabolic Equations

y(t) − ŷ2 ≤ Ce−ct yo − ŷ2 , ∀t ≥ 0.

Here ⎛ ⎞ ⎛  ∗⎞
 y(t) − ŷ, ϕ1∗  D ϕ1 
⎜ y(t) − ŷ, ϕ ∗  ⎟ ⎜ D ϕ ∗ ⎟

u := Λ S A ⎝ 2 ⎟ , ⎜ 2 ⎟
................... ⎠ ⎝ ......... ⎠
y(t) − ŷ, ϕ N∗  D ϕ N∗ N

and  
Uo := yo ∈ L 2 (O); yo − ŷ ≤ σ ,

for some sufficiently small σ > 0.

Example 2.5 The corresponding stabilization results in Theorems 2.1–2.3, expressed


for Eq. (2.9) in Example 2.2, are as follows. First, for the linearized case, we have
the following theorem.

Theorem 2.4 Assume that hypothesis (A4.1) holds and that f is a C 1 function such
that f  ∈ C(R). Then the solution y to the equation


⎪ yt (t, x) − Δy(t, x) + f  ( ŷ(x))y(t, x) = 0, t > 0, x ∈ O,

⎪ ⎛ ⎞ ⎛ ∂ϕ1 ⎞

⎪  y(t), ϕ1  

⎪ ∂n

⎨ ⎜ y(t), ϕ2  ⎟ ⎜ ∂ϕ2 ⎟
y(t, x) = Λ S A ⎝ ⎜ ⎟ , ⎜ ∂n ⎟ , t > 0, x ∈ Γ1 ,
.............. ⎠ ⎝ ...... ⎠ (2.48)



⎪ y(t), ϕ N  ∂ϕ N

⎪ ∂n N



y(t, x) = 0, t > 0, x ∈ Γ2 ,

⎩ ∂n
y(0, x) = yo , x ∈ Ω,

satisfies the exponential decay

y(t)2 ≤ Ce−ρt yo 2 , t ≥ 0, (2.49)

for a prescribed ρ > 0 and a constant C > 0. Here Λ S := Λγ1 + · · · + Λγ N with


⎛ ⎞
1
γk −λ1
0 ... 0
⎜ 0 1
... 0 ⎟
Λγk := ⎜ γk −λ2 ⎟
⎝ ................................ ⎠ , k = 1, · · · , N .
0 0 . . . γk −λ 1
N

Moreover, A := (B1 + B2 + · · · + B N )−1 , where

Bk := Λγk BΛγk , k = 1, . . . , N ,

with B the Gram matrix


2.2 The Design of the Boundary Stabilizer 37

⎛ ⎞
 ∂ϕ 1
, ∂ϕ1   ∂ϕ
∂n ∂n 0
1
, ∂ϕ2  . . .
∂n ∂n 0
 ∂ϕ
∂n
1
, ∂ϕ N
∂n 0

⎜  ∂ϕ2 , ∂ϕ1   ∂ϕ2 , ∂ϕ2  . . .  ∂ϕ , ∂ϕ  ⎟
B := ⎜ ⎟
2 N

⎝ .................................................................. ⎠ ,
∂n ∂n 0 ∂n ∂n 0 ∂n ∂n 0

 ∂ϕ
∂n
N
, ∂ϕ 1
∂n 0
  ∂ϕ∂n
N
, ∂ϕ 2
∂n 0
 ...  ∂ϕ
∂n
N
, ∂ϕ N
∂n 0

 ∞  ∞
where ·, ·0 stands for the scalar product in L 2 (Γ1 ). Finally, λ j j=1 , ϕ j j=1
denote the eigenvalues and eigenfunctions of the linear operator −Δ + f  ( ŷ),
respectively; and γ1 , . . . , γ N are some real positive numbers.

And for the full nonlinear case, we have the following.

Theorem 2.5 Assume 1 ≤ d ≤ 3 and that hypothesis (A4.1) holds. In addition,


assume also that there exist C1 > 0, q ∈ N, αi > 0, i = 1, . . . , q, when d = 1, 2,
and 0 < αi ≤ 1, i = 1, . . . , q, when d = 3, such that
 q 

 αi
| f (y)| ≤ C1 |y| + 1 , ∀y ∈ R.
i=1

Then the solution to the closed-loop nonlinear equation




⎪ yt (t, x) − Δy(t, x) + f (y) = 0, t > 0, x ∈ O,

⎪ ⎛ ⎞ ⎛ ∂ϕ1 ⎞

⎪  y(t) − ŷ, ϕ1  

⎪ ∂n

⎪ ⎜ y(t) − ŷ, ϕ2  ⎟ ⎜ ∂ϕ2 ⎟
⎨ y(t, x) = Λ S A ⎜
⎪ ⎟ ⎜ ∂n ⎟
⎝ .................... ⎠ , ⎝ ...... ⎠ + ŷ(x),
(2.50)

⎪ y(t) − ŷ, ϕ N  ∂ϕ N

⎪ ∂n N

⎪ t > 0, x ∈ Γ1 ,





y(t, x) = 0, t > 0, x ∈ Γ ,

⎩ ∂n 2
y(0, x) = yo , x ∈ Ω,

satisfies the exponential decay

y(t) − ŷ2 ≤ Ce−ρt yo − ŷ2 , t ≥ 0,

for a prescribed ρ > 0 and a constant C > 0, provided that yo − ŷ is small enough.
For the notation A, Λs , ϕ j , . . ., we refer to Theorem 2.4.

2.2.2 The Semisimple Eigenvalues Case

In this section, we drop the hypothesis (A4.1) but keep the more general one (A4). In
this case, the result in Proposition 2.1 may fail to hold, since the determinant given
by (2.46) may be zero. In other words, in this case, the sum B1 + · · · + B N may be a
singular matrix. To overcome this problem, we slightly perturb the spectrum of the
38 2 Stabilization of Abstract Parabolic Equations

linear operator A. To illustrate our approach, let us assume, for instance, that

λ1 = λ2 and λ j = λk , ∀ j, k = 2, 3, . . . , N , j = k

(other cases can be treated in a similar manner).


This time, the operator introduced in (2.16) is given as follows: for β ∈ L 2 (∂O),
we define Dγ β := z̃, where z̃ is a solution to


⎨ 
N
Az̃ − 2 λk z̃, ϕk∗ ϕk (x) − δz̃, ϕ1∗ ϕ1 + γ z̃ = 0 in O,
(2.51)

⎩ k=1
B.C.(z̃, β) on ∂O,

for some δ > 0.


We choose ρ < γ1 < γ2 := γ1 + N 1−1 < γ3 := γ1 + N 1−2 · · · < γ N := γ1 + 1,
with γ1 large enough that for δ = γ14 , Eq. (2.51) is well posed for each γi ,
1
i = 1, . . . , N , and such that (λ1 + δ), λ2 , . . . , λ N are distinct. One may easily
show that 

− γ −δ−λ
1
β, D ϕ ∗j 0 , j = 1,
Dγ β, ϕ j  = 1
(2.52)
− γ −λ
1
j
β, D ϕ ∗j 0 , j = 2, . . . , N .

Now the feedback has the form


⎛ ⎞ ⎛  ∗⎞
 z(t), ϕ1∗  D ϕ1 
⎜ z(t), ϕ ∗  ⎟ ⎜ D ϕ ∗ ⎟
v = ΛS A ⎜ ⎟ ⎜ 2 ⎟
⎝ ............ ⎠ , ⎝ ....... ⎠ ,
2 (2.53)
z(t), ϕ N∗  D ϕ N∗ N


N
where Λ S := Λγk , with Λγk given this time as
k=1


1 1 1
Λγk := diag , ,..., , (2.54)
γk − δ − λ1 γk − λ 2 γk − λ N

while the matrix A is given similarly as in (2.24) (we mention that since (λ1 + δ),
λ2 , . . . , λ N are distinct, a result similar to that in Proposition 2.1 can be proved,
showing in this way that A is well defined). Computations similar to those in (2.27)–
(2.29) yield that system (2.13) is equivalent to

dz N  N
+ Az = (γk I + A) Dγk vk (z) − 2 λ j Dγk vk (z), ϕ ∗j ϕ j
dt k=1 k, j=1


N
−δ Dγk vk , ϕ1∗ ϕ1 , t ≥ 0; z(0) = z o .
k=1
2.2 The Design of the Boundary Stabilizer 39

The main results in the present context are similar to those above. First, concerning
the linearized system, we have the following theorem.

Theorem 2.6 Under (A1)–(A5), the unique solution to the equation

dz N  N
+ Az = (γk I + A) Dγk vk (z) − 2 λ j Dγk vk (z), ϕ ∗j ϕ j
dt k=1 k, j=1
(2.55)

N
−δ Dγk vk , ϕ1∗ ϕ1 , t ≥ 0; z(0) = z o ,
k=1

satisfies
z(t)2 ≤ Ce−ρt z o 2 , ∀t ≥ 0, (2.56)

for some positive constant C.

Proof We argue as in the proof of Theorem 2.1. We get this time that the finite-
dimensional unstable part of the linear system (2.55) (which corresponds to (2.32)),
has the equivalent form (see all the computations between (2.35) and (2.39))

N
δ
Zt = −γ1 Z + (γ1 − γk )Bk AZ − O; Z (0) = Zo ,
k=2
2

where ⎛ ⎞
z(t), ϕ1∗ 
⎜ 0 ⎟
O := ⎜ ⎟
⎝ .............. ⎠ .
0

Then scalar multiplying the above equation by AZ yields

d 1 1
A 2 Z (t)2N ≤ −2γ1 A 2 Z (t)2N + δAZ (t)2N , t ≥ 0, (2.57)
dt
where A stands for the classical Euclidean induced norm of the matrix A. Denote
by λ1 (A) > 0 the first eigenvalue of A and integrate over time in (2.57). This yields
 t
λ1 (A)Z (t)2N ≤ e−2γ1 t A 2 Z0 2N + e−2γ1 (t−s) δAZ (s)2N ds,
1
(2.58)
0

where making use of Grönwall’s lemma gives us


" #
1 1 δA
Z (t)2N ≤ A 2 Z0 2N exp − 2γ1 t , t ≥ 0. (2.59)
λ1 (A) λ1 (A)
40 2 Stabilization of Abstract Parabolic Equations

Let us denote by bi j , i, j = 1, . . . , N , the entries of the matrix B1 + B2 + · · · +


B N . By the definition of Bi (see (2.22)) and the constants γi , i = 1, . . . , N and δ
(see after (2.51)), we have that

lim γ 2 |bi j | ∈ R+ , ∀i, j = 1, . . . , N .


γ1 →∞ 1

Let bi∗j denote the entries of the adjoint of the matrix B1 + · · · + B N . By virtue of
the definition of the adjoint matrix and the above observation, we deduce that

lim γ 2(N −1) |bi∗j | ∈ R+ , ∀i, j = 1, . . . , N .


γ1 →∞ 1

Besides this, the above observation also implies that

lim γ 2(N +1) |det (B1 + · · · + B N )| = +∞.


γ1 →∞ 1

In other words, we have


 
1  1 
|bi∗j | ≤ ci j , = . . . , ,   2(N +1)
2(N −1)
i, j 1, N and  det (B + · · · + B )  ≤ cγ1 ,
γ1 1 N

for some positive constants ci j , c, i, j = 1, . . . , N , independent of γ1 , for γ1 large


enough. This yields, if we denote by ai j , i, j = 1, . . . , N , the entries of the matrix
A = (B1 + · · · + B N )−1 , that there exists some constant C > 0, independent of γ1 ,
such that |ai j | ≤ Cγ14 , i, j = 1, . . . , N . Consequently,

A ≤ Cγ14 , (2.60)

for γ1 large enough. In conclusion, for γ1 large enough, there exists some μ > 0 such
that
δA A 1
− 2γ1 = 4 − 2γ1 ≤ C N 2 − 2γ1 ≤ −μ,
λ1 (A) γ1 λ1 (A) λ1 (A)

since 1
λ1 (A)
→ 0 for γ1 → ∞. This, together with (2.59), implies

1
e−μt A 2 Z0 2N , t ≥ 0,
1
Z (t)2N ≤
λ1 (A)

which represents the exponential decay of the first N modes of z. The rest of the
proof mimics the proof of Theorem 2.1, and it is therefore omitted. 
Finally, based on the above result, one can immediately deduce the following
counterpart of Theorem 2.3.
Theorem 2.7 Under (A1)–(A6), for 1 ≤ d ≤ 3, we have that for each yo ∈ Uo , there
exists a unique solution y to the equation
2.2 The Design of the Boundary Stabilizer 41
 ∂y
∂t
+ Ay + F0 (y) = 0 in (0, ∞) × O;
(2.61)
B.C.(y + ŷ, u(y)) on (0, ∞) × ∂O; y(0) = yo in O,

satisfying, for some constants C, c > 0,

y(t) − ŷ2 ≤ Ce−ct yo − ŷ2 , ∀t ≥ 0.

Here ⎛ ⎞ ⎛  ∗⎞
 y(t) − ŷ, ϕ1∗  D ϕ1 
⎜ y(t) − ŷ, ϕ ∗  ⎟ ⎜ D ϕ ∗ ⎟
u := Λ S A ⎜ 2 ⎟ ⎜ 2 ⎟
⎝ .................. ⎠ , ⎝ ........ ⎠ ,
y(t) − ŷ, ϕ N∗  D ϕ N∗ N

with Λ S given by (2.54) and A by (2.24); and


 
Uo := yo ∈ L 2 (O); yo − ŷ ≤ σ ,

for some sufficiently small σ > 0.

2.3 A Numerical Example

In this section, we further particularize the model in Example 2.2 (see also Example
2.3), in the sense that we take the space dimension to be equal to one, and f of the
form
f (y) := −αy + βy 2 ,

obtaining thereby the 1-dimensional Fischer model, which reads as



⎨ yt (t, x) − yx x (t, x) − αy(t, x) + βy 2 (t, x) = 0, (t, x) ∈ (0, ∞) × (0, 1),
y(t, 0) = 0, y(t, 1) = u(t), t ∈ (0, ∞),

y(0, x) = y0 (x), x ∈ (0, 1),
(2.62)
where α, β are positive constants. Fisher’s equation is a nonlinear parabolic equation
first proposed by Fisher to model the advance of a mutant gene in an infinite one-
dimensional habitat [54]. Moreover, Fisher’s equation has been used as a basis for
a wide variety of models for the spatial spread of genes in a population, chemical
wave propagation, flame propagation, branching Brownian motion processes, and
even nuclear reactor theory [36, 108]. It is well known that the uncontrolled Fisher’s
equation is unstable. The goal is to stabilize the null solution via the proportional-
feedback law designed in the previous section. To this end, it is clear that, arguing as
before, one is able to obtain similar results as in Theorems 2.4 and 2.5, in Example
2.5. More precisely, we have the following result.
42 2 Stabilization of Abstract Parabolic Equations

Theorem 2.8 The solution y to the equation




⎪ yt (t, x) − yx x (t, x) − αy(t, x) = 0, t > 0, x ∈ (0, 1),



⎪ y(t, 0)) = 0, ⎛ ⎞ ⎛ ⎞


⎨  y(t), ϕ1  (ϕ1 )x (1) 
⎜ y(t), ϕ2  ⎟ ⎜ (ϕ2 )x (1) ⎟ (2.63)

⎪ y(t, 1) = Λ S A ⎜ ⎟ ⎜ ⎟
⎝ .............. ⎠ , ⎝ .............. ⎠ , t > 0,





⎪ y(t), ϕ N  (ϕ N )x (1)
⎩ N
y(0, x) = yo , x ∈ (0, 1),

satisfies the exponential decay

y(t)2L 2 (0,1) ≤ Ce−ρt yo 2L 2 (0,1) , t ≥ 0, (2.64)

for a prescribed ρ > 0 and a constant C > 0. Here Λ S := Λγ1 + · · · + Λγ N with


⎛ ⎞
1
γk −λ1
0 ... 0
⎜ 0 1
... 0 ⎟
Λγk := ⎜ γk −λ2 ⎟
⎝ ................................ ⎠ , k = 1, . . . , N .
0 0 . . . γk −λ 1
N

Moreover, A := (B1 + B2 + · · · + B N )−1 , where

Bk := Λγk BΛγk , k = 1, . . . , N ,

with B being the Gram matrix


⎛ ⎞
(ϕ1 )x (1)(ϕ1 )x (1) (ϕ1 )x (1)(ϕ2 )x (1) . . . (ϕ1 )x (1)(ϕ N )x (1)
⎜ (ϕ2 )x (1)(ϕ1 )x (1) (ϕ2 )x (1)(ϕ2 )x (1) . . . (ϕ2 )x (1)(ϕ N )x (1) ⎟
B := ⎜ ⎟
⎝ ..................................................................................... ⎠ ,
(ϕ N )x (1)(ϕ1 )x (1) (ϕ N )x (1)(ϕ2 )x (1) . . . (ϕ N )x (1)(ϕ N )x (1)
 ∞  ∞
where λ j j=1 , ϕ j j=1 denote the eigenvalues and eigenfunctions of the linear
operator −∂x x − α, respectively; and γ1 , . . . , γ N are some positive numbers.

And for the full nonlinear case, we have this theorem:


Theorem 2.9 The solution to the closed-loop nonlinear equation


⎪ yt (t, x) − yx x (t, x) − αy(t, x) + βy 2 (t, x) = 0, t > 0, x ∈ (0, 1),



⎪ y(t, 0) = 0, ⎛ ⎞ ⎛ ⎞


⎨  y(t), ϕ1  (ϕ1 )x (1) 
⎜ y(t), ϕ2  ⎟ ⎜ (ϕ2 )x (1) ⎟ (2.65)

⎪ y(t, 1) = Λ S A ⎜ ⎟ ⎜ ⎟
⎝ .............. ⎠ , ⎝ ............ ⎠ , t > 0,





⎪ y(t), ϕ N  (ϕ N )x (1)
⎩ N
y(0, x) = yo , x ∈ (0, 1),
2.3 A Numerical Example 43

satisfies the exponential decay

y(t)2L 2 (0,1) ≤ Ce−ρt yo 2L 2 (0,1) , t ≥ 0,

for a prescribed ρ > 0 and a constant C > 0, provided that yo  L 2 (0,1) is small
enough. For the notation A, Λs , ϕ j , . . ., we refer to Theorem 2.8.
Now let us study the problem numerically. It is easy to see that when α > (2π )2 ,
there is more than one unstable eigenvalue. We take α = 50, β = 0.30, and set
the initial profile to be u 0 (x) = 5xe x , x ∈ [0, 1]. In this case, for decay rate ρ ∈
(0, (3π )2 − 50], a two-dimensional feedback controller can be designed to stabilize
the system. For a larger rate

ρ ∈ ((3π )2 − 50, (4π )2 − 50],

we need to design a three-dimensional feedback controller. If one wants to archive an


even bigger decay rate, a feedback controller of larger dimension can be accordingly
designed.
We can observe from Fig. 2.1 that the state is unstable without control.
First, we take ρ = 10. In this case, we need to take γ1 > α such that the Dirichlet
operator, given by (2.18), is well defined. Taking γ1 = 60, γ2 = 70, and simulating
the controlled system with the above parameters, we get Fig. 2.2.
Second, we take ρ = 50. In this case, we need to take γ1 > α + λ3 such that Eq.
(2.18) has a unique solution. Taking γ1 = 90, γ2 = 100, γ3 = 110, and simulating
the controlled system with the above parameters, we get Fig. 2.3. It is obvious that
the state with 3-D control decays faster than that with 2-D control.
We want to stress that the decay rate of the controlled system may be larger than
the value of ρ we set. For example, in the case ρ = 10, the actual decay rate is greater
than 10 but does not exceed min{λ3 , γ1 }. On the other hand, if we fix a decay rate

Fig. 2.1 State of Fisher’s


equation without control
44 2 Stabilization of Abstract Parabolic Equations

Fig. 2.2 State of Fisher’s


equation with 2-D control

Fig. 2.3 State of Fisher’s


equation with 3-D control

ρ, then as the parameter α increases, we may need to design a higher-dimensional


feedback controller. The analysis is quite similar to that in the above discussion with
respect to ρ for fixed α, and we will not go into details.
We take γ1 = 15, γ2 = 20, and then
 1 −2

(45−π 2 )2 (45−π 2 )(45−(2π)2 )
B1 = π 2
−2 4 ,
(45−π 2 )(45−(2π)2 ) (45−(2π)2 )2

 1 −2

(50−π 2 )2 (50−π 2 )(50−(2π)2 )
B2 = π 2
−2 4 .
(50−π 2 )(50−(2π)2 ) (50−(2π)2 )2

By Theorem 2.9, we know that the control of feedback form


  $1   
y sin π xd x 1
u(t) = F(y)(t) := T A $ 10 , , (2.66)
y sin 2π xd x 1
0 2
2.3 A Numerical Example 45

exponentially stabilizes Fisher’s equation (as shown in Fig. 2.2). Here


 
−π 2π
T := 45−π 2
−π
45−(2π)2
2π , A = (B1 + B2 )−1 . (2.67)
50−π 2 50−(2π)2

As we may realize, it is more practical to use only part of the information about
the state. One can take a modified feedback law such as
  $b   
a y sin π xd x 1
u(t) = F(y)(t) := T A $ b , , (2.68)
1
a y sin 2π xd x 2

where [a, b] is a proper subset of [0, 1].


If we fix b = 1 and simulate the closed-loop system with different values for a,
we find that for all a ≤ 0.24, the system governed by Fisher’s equation can still be
stabilized using the feedback law (2.68). However, the value of a cannot be too small.
When a = 0.25, it seems that the system cannot be stabilized with a control of the
form (2.68).
Some further computations show that the feedback controller u(t) can stabilize
the solution of Fisher’s equation not only in L 2 (0, 1), but also in H 1 (0, 1) when
the initial data satisfy some compatibility condition. Similarly as above, we take a
control of the form (2.68), which uses only part of the state information. Simulating
the closed-loop system with b = 1, and with different values for a, we find that when
a ≤ 0.24, the solution is exponentially stable in H 1 (0, 1).
It is reasonable that when we use less information about the state for the feedback
control, the feedback controller is less effective in stabilizing the solution of the
closed-loop system.

2.4 Comments

The problem of boundary stabilization of the heat equation was first solved in the
pioneering work of Triggiani [116]. His approach was based on spectral decompo-
sition, similar to what we do here and to what has been done in many papers on
this subject. Then several other methods were proposed for deriving new types of
controls. One of the most fruitful is the so-called backstepping method, developed
by Krstic and coworkers; see, for example, [1, 27, 29, 38, 87, 114, 124, 127] or
the book [75]. Let us briefly present it here, since it can be related to the results of a
subsequent chapter. Let us consider the reaction–diffusion equation on (0, 1):

yt (t, x) = yx x (t, x) + λy(t, x),
(2.69)
y(t, 0) = 0, y(t, 1) = u(t).
46 2 Stabilization of Abstract Parabolic Equations

The uncontrolled equation (2.69) (i.e., u ≡ 0) is unstable when λ > 0 is sufficiently


large. In other words, the term λy is a source of instability in (2.69), and the natural
objective for a boundary feedback is to eliminate it. To this end, one considers the
state transformation
 x
w(t, x) = y(t, x) − k(x, ξ )y(t, ξ )dξ,
0

with the kernel k such that w satisfies the target stable equation

wt (t, x) = wx x (t, x),
(2.70)
w(t, 0) = 0, w(t, 1) = 0.

The control u is given by


 1
u(t) = k(1, ξ )y(t, ξ )dξ.
0

Thus, the whole problem reduces to finding a kernel k that ensures this passage.
Plugging the form of w into Eq. (2.70), one easily deduces that k must obey

k x x (x, ξ ) − kξ ξ (x, ξ ) = λk(x, ξ ), x, ξ ∈ (0, 1),
(2.71)
k(x, 0) = 0, k(x, x) = − λ2 x, x ∈ (0, 1).

These form a well-posed PDE of hyperbolic type in the Goursat form. Moreover,
one can obtain explicitly the form of the kernel k, and consequently the form of the
feedback u.
A more direct method is the so-called design of proportional type feedback, which
we use here as well. We begin by mentioning the significant results obtained by Barbu
in [12, 13]; see also the monograph [14]. As mentioned before, the design algorithm
we developed here is based on the ideas in [13]. So in order to have a clear comparison
between what we have presented here and [13], let us briefly describe what is stated
and proved in that work. Consider the parabolic equation (see also Examples 2.1, 2.2)

yt (t, x) = Δy(t, x) + f (x, y(t, x)), in (0, ∞) × O,
∂y (2.72)
y = u on Γ1 , ∂n = 0 on Γ2 .

N
∂ϕ j
Under the assumption that the traces ∂n
are linearly independent in L 2 (Γ1 ),
j=1
the feedback

N
u=η μ j y, ϕ j  j
j=1
2.4 Comments 47

achieves stability in (2.72). Here η and μ j , j = 1, 2, . . . , N , are positive real param-



N
∂ϕ
eters, while  j are linear combinations of the traces ∂nj such that
j=1

% &
∂ϕl
j, = δ jl , j, l = 1, 2, . . . , N .
∂n 0

It is clear that such  j can be constructed if and only if the above hypothesis on
linear independence holds. Here ·, ·0 stands for the scalar product in L 2 (Γ1 ).
Note that this u can be equivalently written as
⎛ ⎞ ⎛ ∂ϕ1 ⎞
 y, ϕ1  ∂n 
⎜ ⎟ ⎜ ∂ϕ2
−1 ⎜ y, ϕ2  ⎟ ⎜ ∂n

u = η Λ1 B ⎝ , ⎟ ,
.......... ⎠ ⎝ ... ⎠
y, ϕ N  ∂ϕ N
∂n N

∂ N
where Λ1 = diag(μ1 , . . . , μ N ) and B is the Gram matrix of the system ∂n ϕ j |Γ1 j=1
in L 2 (Γ1 ). One can clearly see the similarity between this u and the control v in (2.26).
In the same proportional-type feedback context, we mention as well the recent
work of Lasiecka and Triggiani [82], in which the hypothesis of semisimple eigen-
values is dropped, but instead an additional internal controller is inserted into the
equations.
Other stabilization results are obtained for specific models, and they will be men-
tioned in the relevant subsequent chapters.
The results in this chapter, carried out for the particular case presented in Examples
2.1, 2.2, appeared in the work Munteanu [105], while their formulation here in
the general framework for an abstract parabolic differential operator of the type

∂t
+ A + F0 , obeying assumptions (A1)–(A6), is new. The need to consider the
general abstract context is to emphasize that in all that follows in the chapters below,
we are presenting just some particular cases. The whole effort is to show that (A1)–
(A6) (especially (A5)) are satisfied for the considered examples. Consequently, the
present controller design algorithm is not confined to the considered models, but
can be applied by those working on this subject to a larger spectrum of models,
generically named parabolic-like equations.
The feedback designed here has many advantages: it is linear and of finite-dimen-
sional structure, expressed in a very simple form involving only the eigenfunctions
of the linear operator derived from the linearized equations, and is therefore easy to
manipulate from the numerical point of view.
We mention that one can easily adapt the present feedback control design tech-
nique to systems of the form

yt = Δy + f (x, y), in (0, ∞) × O,
∂y (2.73)
y = u on Γ1 , ∂n = 0 on Γ2 ,
48 2 Stabilization of Abstract Parabolic Equations

where this time, y is a vector ⎛ ⎞


y1
⎜ y2 ⎟
y=⎜ ⎟
⎝ ... ⎠ .
yM

However, we will not go into details about this problem here, since later, we will
treat different types of systems such as the Navier–Stokes equations, the magne-
tohydrodynamics equations, and the phase field equations. Concerning the internal
stabilization of (2.73), one may consult the work [25].
Other important topics related to the control of parabolic-like equations, for
instance exact and approximate controllability and optimal control, are beyond the
scope of this presentation, and we refer to Coron’s book [50] for significant recent
results in this direction.
The numerical examples were published in the work [86] of of Liu et al.
Chapter 3
Stabilization of Periodic Flows
in a Channel

Here we apply the control design algorithm from Chap. 2 to the Navier–Stokes
equations, placed in a particular geometry, namely a semi-infinite channel. The high
instability of the Navier–Stokes equations is well known as is the fact that the principal
way to suppress the turbulence occurring in the dynamics of a fluid is to plug in a
stabilizing feedback control. In addition, a Riccati-based robust controller is also
constructed.

3.1 Presentation of the Problem

The 3-D boundary controlled incompressible Navier–Stokes equations in the geom-


etry of a 2π -periodic channel in the x and z coordinates are given by

(x, y, z) ∈ (−∞, ∞) × [0, 1] × (−∞, ∞),

and


⎪ u t − νΔu + u ∂∂ux + v ∂u∂y
+ w ∂u
∂z
= − ∂∂ px ,


⎪ ∂v ∂v ∂v ∂p
⎪ vt − νΔv + u ∂ x + v ∂ y + w ∂z = − ∂ y ,





⎪ w − νΔw + u ∂w + v ∂w + w ∂w = − ∂∂zp ,
⎨ t ∂x ∂y ∂z
∂u
∂x
+ ∂∂vy + ∂w
∂z
= 0, ∀t ≥ 0, x, z ∈ R, y ∈ (0, 1),



⎪ (u, v, w, p)(t, x + 2π, y, z + 2π ) = (u, v, w, p)(t, x, y, z),



⎪ ∀t ≥ 0, x, z ∈ R, y ∈ (0, 1),



⎪ (u, w)(t, x, 0, z) = (u, w)(t, x, 1, z) = 0,

v(t, x, 0, z) = 0, v(t, x, 1, z) = Ψ (t, x, z), ∀t ≥ 0, x, z ∈ R, y ∈ (0, 1),
(3.1)
and the initial data

(u, v, w)(0, x, y, z) = (u o , vo , wo )(x, y, z), x, z ∈ R, y ∈ (0, 1), (3.2)

© Springer Nature Switzerland AG 2019 49


I. Munteanu, Boundary Stabilization of Parabolic Equations, Progress in
Nonlinear Differential Equations and Their Applications 93,
https://doi.org/10.1007/978-3-030-11099-4_3
50 3 Stabilization of Periodic Flows in a Channel

where the standard notation includes u for the streamwise velocity, v for the wall-
normal velocity, w for the spanwise velocity, p for the pressure, ν for the viscosity
coefficient of the fluid, while Ψ is the control. The incompressibility of the fluid is
described by the divergence-free condition.
In order not to deal with infinite domains, we have assumed that both the velocity
field and the pressure are 2π -periodic in the first and last spatial coordinates (for
more details on this, one may consult the work [33]). Instead of 2π , one could take
any L > 0, and all the results below still hold. However, we keep the particular period
2π for the sake of simplicity of notation.
The parabolic Poiseuille profile, denoted here by

[(U (y), 0, 0) , −ax] ,

will be the target of our controlled problem. Here


a
U (y) = − (y 2 − y), y ∈ (0, 1), (3.3)

for some a ∈ R+ . The Poiseuille flow is viewed as a pressure-induced flow in a
long duct, being a laminar flow of an incompressible Newtonian fluid of viscosity
ν. It is obtained from the stationary uncontrolled equations (3.1) by taking both the
wall-normal velocity and the spanwise velocity equal to zero.
Besides the obvious use of these equations, namely to model the evolution of
fluids in a pipe, they are often and successfully used to model the circulation of blood
through the vessels. It is known that for high values of the viscosity coefficient, we
have laminar flow (no turbulence), but when ν is low, turbulence occurs. Thus one
seeks to plug some control into the system to force the corresponding solution to
behave like the steady parabolic Poiseuille profile. We aim here to construct such a
control, namely Ψ , using ideas from Chap. 2. Note that the control is with actuation
only on the upper wall, and only on the normal component of the velocity field.
There is no action, however, in y = 0, on the streamwise and spanwise components
or inside the channel. This is of great importance from the practical point of view,
since it makes the applicability of the control highly feasible.
Next, following the approach from the second chapter, we introduce the lineariza-
tion of (3.1), around the equilibrium profile (3.3), given by


⎪ u t − νΔu + U ∂∂ux + v ∂U
∂y
= − ∂∂ px ,



⎪ ∂v ∂p
⎪ vt − νΔv + U ∂ x = − ∂ y ,



⎪ wt − νΔw + U ∂w = − ∂ p ,
⎨ ∂x ∂z
∂u ∂v ∂w
⎪ ∂x
+ ∂y
+ ∂z
= 0, ∀t ≥ 0, x, z ∈ R, y ∈ (0, 1), (3.4)



⎪ (u, v, w, p)(t, x + 2π, y, z + 2π ) = (u, v, w, p)(t, x, y, z),



⎪ (u, w)(t, x, 0, z) = (u, w)(t, x, 1, z) = 0,



v(t, x, 0, z) = 0, v(t, x, 1, z) = Ψ (t, x, z), x, z ∈ R, y ∈ (0, 1),
3.1 Presentation of the Problem 51

and initial data u(0) = u 0 := u o − U, v(0) = v0 := vo , w(0) = w0 := wo .


System (3.4) is too complex for applying the argument from Chap. 2 directly,
mainly because it contains four unknowns and only three equations. To overcome
this defect, we will take advantage of the 2π -periodicity assumption. More exactly,
we decompose the system into Fourier modes (for details, see Chap. 1), then reduce
the pressure from it.

3.2 The Stabilization Result

Let u = u(t, x, y, z) be 2π -periodic in x and z, with



u(t, x, y, z) = u kl (t, y)eikx eilz , u kl = u −k−l ,
k,l∈Z

such that
 1
|u kl (y)|2 dy < ∞.
k,l∈Z 0

The corresponding L 2 -norm can be expressed in terms of the Fourier modes as


⎛ ⎞ 21
  1
u(t) := ⎝ 2π |u kl (t, y)|2 dy ⎠ .
k,l∈Z 0

Moreover, for a function f : [0, 1] → C, f = f (y), y ∈ [0, 1], we denote by


f  := ∂∂y f the derivative with respect to the y-coordinate. We consider H to be the
complexified space of L 2 (0, 1). We denote also by  ·  the norm in H and by < ·, · >
the scalar product.
Now we return to system (3.4) and rewrite it in terms of Fourier coefficients as
⎧  
⎪ (u kl )t − ν[−(k2 + l2 )u kl + ukl ] + ikU u kl + U v kl = −ikpkl , a.e. in (0, 1),
⎪ 2 2


⎨ (vkl )t − ν[−(k + l )vkl + vkl ] + ikU vkl = − pkl , a.e. in (0, 1),

(wkl )t − ν[−(k 2 + l 2 )wkl + wkl ] + ikU wkl = −ilpkl , a.e. in (0, 1),

⎪ 

⎪ iku kl + v + ilw kl = 0, a.e. in (0, 1),
⎩ kl
(u kl , wkl )(0) = (u kl , wkl )(1) = 0, vkl (0) = 0, vkl (1) = ψkl , ∀t ≥ 0,
(3.5)
with initial data u 0kl , vkl
0
, wkl
0
.
It is clear that the stabilizability of (3.4) is equivalent to that of (3.5) at each level
(k, l) ∈ Z × Z. That is why in what follows, we will stabilize (3.5) for each pair
(k, l) separately. Then we will conclude with the main stabilization result.
Firstly, let us take care of the cases k = l = 0 and k = 0, l = 0. We insert

ψ00 ≡ ψ0l ≡ 0.
52 3 Stabilization of Periodic Flows in a Channel

After some straightforward computations involving the free divergence condition,


one can easily arrive at the conclusion that

u 00 (t)2 + v00 (t)2 + w00 (t)2 ≤ C1 e−νt (u 000 2 + v00


0 2
 + w00
0 2
 ), (3.6)

t ≥ 0, and that

u 0l (t)2 + v0l (t)2 + w0l (t)2 ≤ C1 e−2νl t (u 00l 2 + v0l


2
0 2
 + w0l
0 2
 ), (3.7)

t ≥ 0, for some constant C1 > 0.


From now on, we consider only the cases k = 0 and k 2 + l 2 = 0.
The result below is an important one. It makes the connection between the behavior
of the Fourier modes vkl and u kl , wkl . More exactly, it says that it is enough to have
that vkl is exponentially stable in order to have that system (3.5) is stable. In other
words, the normal component of the velocity field is the leading one in measuring
the laminar or turbulent character of the flow.

Lemma 3.1 It suffices to stabilize vkl exponentially in the norm



 − vkl (t) + (k 2 + l 2 )vkl (t)

in order to have that (3.5) is exponentially stable.

Proof Assume that we have found some control ψkl such as

d
|ψkl (t)| + |ψkl (t)| ≤ Ce−μt , t ≥ 0, (3.8)
dt
for some positive constants C, μ, such that once plugged into (3.5), we have

 − vkl (t) + (k 2 + l 2 )vkl (t)2 ≤ Ce−μt , t ≥ 0. (3.9)

Introduce the function Vkl = Vkl (t, y), given by

Vkl (t, y) = vkl (t, y) + (2y 3 − 3y 2 )ψkl (t), t ≥ 0, y ∈ (0, 1).

By (3.5), we see that Vkl satisfies the equation



⎪ [−Vkl + (k 2 + l 2 )Vkl ]t + νVkliv − [2ν(k 2 + l 2 ) + ikU ]Vkl



⎪ +[ν(k 2 + l 2 )2 + ik(k 2 + l 2 )U + ikU  ]Vkl


⎨ = [−(12y − 6)ψkl + (k 2 + l 2 )(2y 3 − 3y 2 )ψkl ]t

⎪ −[2ν(k 2 + l 2 ) + ikU ](12y − 6)ψkl



⎪ +[ν(k 2 + l 2 )2 + ik(k 2 + l 2 )U + ikU  ](2y 3 − 3y 2 )ψkl , y ∈ (0, 1),

⎩ 
Vkl (0) = Vkl (1) = 0, Vkl (0) = 0, Vkl (1) = 0.
(3.10)
3.2 The Stabilization Result 53

We scalar multiply Eq. (3.10) by Vkl , take the real part of the result, and obtain

1 d
(Vkl 2 + (k 2 + l 2 )Vkl 2 ) + νVkl 2 + 2ν(k 2 + l 2 )Vkl 2
2 dt
 1
+ ν(k 2 + l 2 )2 Vkl 2 = ik U  Vkl Vkl dy
0
 1
+ [−(12y − 6)ψkl + (k 2 + l 2 )(2y 3 − 3y 2 )ψkl ]t Vkl dy (3.11)
0
 1
− ((2ν(k + l ) + ikU )(12y − 6)ψkl )Vkl dy
2 2
0
 1

+ (ν(k + l ) + ik(k + l )U + ikU )(2y − 3y )ψkl )Vkl dy ,
2 2 2 2 2 3 2
0

from which we get that

1 d
(Vkl 2 + (k 2 + l 2 )Vkl 2 ) + ν(k 2 + l 2 )(Vkl 2 + (k 2 + l 2 )Vkl 2 )
2 dt  
 2 (3.12)
d 
≤ Ckl Vkl  +  ψkl  + |ψkl | + Vkl  , t ≥ 0,
2 2 2
dt

for some Ckl > 0. By (3.8), (3.9), and the definition of Vkl , it follows that

Vkl (t)2 ≤ Ckl e−μt vkl


0 2
 , t ≥ 0, (3.13)

for some Ckl > 0. By (3.12), together with (3.13) and (3.8), we obtain

Vkl (t)2 + (k 2 + l 2 )Vkl 2 ≤ Ckl e−μt vkl


0 2
 , t ≥ 0.

Therefore,

vkl (t)2 ≤ Ce−μt vkl
0 2
 , t ≥ 0. (3.14)

Now, taking into account that



iku kl + ilwkl = −vkl ,

relation (3.14) implies that

ku kl (t) + lwkl (t)2 ≤ Ce−μt vkl


0 2
 , t ≥ 0. (3.15)

Multiplying the first equation of system (3.5) by il and the third by −ik and summing
them, we get that
54 3 Stabilization of Periodic Flows in a Channel

(ilu kl − ikwkl )t − ν[−(k 2 + l 2 )(ilu kl − ikwkl ) + (ilu kl − ikwkl ) ]


(3.16)
+ikU (ilu kl − ikwkl ) + ilU  vkl = 0.

Scalar multiplying Eq. (3.16) by (ilu kl − ikwkl ) and taking the real part of the result,
we obtain that
1 d
ilu kl − ikwkl 2 + ν(k 2 + l 2 )ilu kl − ikwkl 2 + ν(ilu kl − ikwkl ) 2
2 dt   1 
= −il U  vkl ilu kl − ikwkl dy .
0

Hence
 1 
1 d  
ilu kl − ikwkl 2 + ν(k 2 + l 2 )ilu kl − ikwkl 2 |l| ≤  U  vkl ilu kl − ikwkl dy 
2 dt 0
1 ν(k 2 + l 2 ) |l|  a 2
≤ |l| ilu kl − ikwkl 2 + v kl  2
.
2 |l| ν(k 2 + l 2 ) 2ν

This yields
d  a 2
ilu kl − ikwkl 2 + ν(k 2 + l 2 )ilu kl − ikwkl 2 ≤ vkl 2 .
dt 2ν

From the above estimate and relation (3.9), one can easily obtain that

lu kl (t) − kwkl (t)2 ≤ Ce−μt (u 0kl 2 + vkl


0 2
 + wkl
0 2
 ), ∀t ≥ 0, (3.17)

for some C > 0.


Now, taking into account that

ku kl + lwkl 2 + lu kl − kwkl 2 = (k 2 + l 2 )(u kl 2 + wkl 2 ),

relations (3.15) and (3.17) imply that

u kl 2 + wkl 2 ≤ Ce−μt vkl


0 2
 , t ≥ 0. (3.18)

Finally, relations (3.9) and (3.18) imply that


 
u kl (t)2 + vkl (t)2 + wkl (t)2 ≤ Ce−μt u 0kl 2 + vkl
0 2
 + wkl
0 2
 , (3.19)

for some Ckl > 0, t ≥ 0, k, l = 0. 

We intend to write the system (3.5) in an abstract form, aiming to apply the control
design for abstract parabolic-like equations from Chap. 2. To this end, we define
the following operators: for each 0 = k ∈ Z, we define Lk : D(Lk ) ⊂ H → H and
Fk : D(Fk ) ⊂ H → H , by
3.2 The Stabilization Result 55


Lk v := −v + k 2 v, D(Lk ) = H 2 (0, 1) ∩ H01 (0, 1), (3.20)
  
Fk v := νv − (2νk + ikU )v + k(νk + ik U + iU )v, D(Fk )
2 3 2

= H 4 (0, 1) ∩ H02 (0, 1), (3.21)

Next, we define Ak : D(Ak ) ⊂ H → H


 
Ak := Fk L−1 −1
k , D(Ak ) = v ∈ H : Lk v ∈ D(Fk ) . (3.22)

Furthermore, for each k, l ∈ Z, k, l = 0, we denote by Lkl : D(Lkl ) ⊂ H → H and


Fkl : D(Fkl ) ⊂ H → H the operators


Lkl v := −v + (k 2 + l 2 )v, D (Lkl ) = H01 (0, 1) ∩ H 2 (0, 1), (3.23)
  
Fkl v := νv − [2ν(k 2 + l 2 ) + ikU ]v + [ν(k 2 + l 2 )2 + ik(k 2 + l 2 )U + ikU ]v, (3.24)

D (Fkl ) = H 4 (0, 1) ∩ H02 (0, 1),

which introduce the operators Akl : D(Akl ) ⊂ H → H , defined by


 
Akl := Fkl L−1 −1
kl , D(Akl ) = v ∈ H : Lkl v ∈ D(Fkl ) . (3.25)

Finally, we set the differential forms

 
Lk v := −v + k 2 v, Lkl v := −v + (k 2 + l 2 )v, (3.26)
  
Fk v := νv − (2νk 2 + ikU )v + k(νk 3 + ik 2 U + iU )v, (3.27)
  
Fkl v := νv − [2ν(k 2 + l 2 ) + ikU ]v + [ν(k 2 + l 2 )2 + ik(k 2 + l 2 )U + ikU ]v. (3.28)

System (3.5) can be equivalently rewritten as abstract equations, governed by the


operators Ak and Akl , respectively (see (3.63) below). Therefore, it is necessary to
study these operators in detail. In the two lemmas below, we gather their proper-
ties. First, we show that they generate C0 -analytic semigroups, which, for k and l
sufficiently large, are exponentially asymptotically stable.
Lemma 3.2 The operators −Ak , k ∈ Z∗ , and −Akl , k, l ∈ Z∗ , generate C0 -
analytic semigroups on H , and for each λ ∈ ρ(−Ak ), (λI + Ak )−1 is compact;
also, for each λ ∈ ρ(−Akl ), (λI + Akl )−1 is compact. Moreover, one has, for each
γ > 0,
σ (−Ak ) ⊂ {λ ∈ C : λ ≤ −γ } , ∀|k| > S,

and 
σ (−Akl ) ⊂ {λ ∈ C : λ ≤ −γ } , ∀ k 2 + l 2 > S,

where 21
1 a
S := √ 1+γ + √ . (3.29)
2ν 2ν
56 3 Stabilization of Periodic Flows in a Channel

Here σ (−A) is the spectrum of the operator −A and ρ(−A) is the resolvent set
of −A.

Proof We will consider only the more complex case of −Akl (for −Ak , one may
argue likewise, because of their similar forms). So for λ ∈ C and f ∈ H , consider
the equation
λg + Akl g = f,

or equivalently,
λLkl v + Fkl v = f. (3.30)

Scalar multiplying this equation by v and taking into account (3.23) and (3.24), yields
 1  1
λ (|v |2 + (k 2 + l 2 )|v|2 )dy + ν |v |2 dy
0 0
 1  1
+ 2ν(k 2 + l 2 ) |v |2 dy + ν(k 2 + l 2 )2 |v|2 dy (3.31)
0 0
 1
+k U  ( v v − v v)dy =  f, v
0

and   1
1
λ (|v |2 + (k 2 + l 2 )|v|2 )dy + k |v |2 dy
0 0
 1 (3.32)
1 
+k (k + l )U + U |v| =  f, v.
2 2 2
0 2

Then, via Poincaré’s inequality, we see by (3.31) and (3.32) that for some r > 0,

C
|(λI + Akl )−1 f | ≤ | f | for |λ| > r,
|λ| − r

which implies, via the Hille–Yosida theorem (see Chap. 1), that −Akl is the infinitesi-
mal generator of a C0 -analytic semigroup, denoted by e−Akl t , t ≥ 0, on H . Moreover,
by (3.31), (3.32) we see that (λI + Akl )−1 is compact on H , and it follows also that
all the eigenvalues λ of −Akl satisfy the estimates
 1  1
 2
λ (|v | + (k + l )|v| )dy + 2ν(k + l )
2 2 2 2 2
|v |2 dy
0 0
 1  1
 2
+ν |v | dy + ν(k + l )
2 2 2
|v|2 dy
0 0
 1
≤ −k U  ( v v − v v)dy
0
3.2 The Stabilization Result 57
 1  1
1
 2
≤ 2νk |v | dy +
2
|U  |2 |v|2 dy
0 2ν 0
 1  1
 2 a2
≤ 2ν(k + l )
2 2
|v | + 3 |v|2 dy,
0 8ν 0

where Av = −λv. By the above estimate we see that for γ > 0 arbitrary but fixed,
we have

1 a
λ ≤ −γ if k + l ≥ √
2 2 1+γ + √ ,
2ν 2ν

which completes the proof. 



In particular, it follows by Lemma 3.2 that for |k| ≥ S and k 2 + l 2 ≥ S, we have

e−Ak t  L(H,H ) ≤ Ce−γ t and e−Akl t  L(H,H ) ≤ Ce−γ t , (3.33)

for all t ≥ 0. This


√ implies that for the stabilization of (3.4), it suffices to stabilize
system (3.5) for k 2 + l 2 ≤ S only.
Moreover, Lemma 3.2 ensures that the hypotheses (A1)–(A3) from Chap. 2 hold
in the present case.
 again by Lemma 3.2, −Ak has a countable set of eigenvalues,
Furthermore,
∞   ∞
denoted by λkj , and the same holds for −Akl , and we denote by λklj
j=1 j=1
the corresponding eigenvalues. Besides this, there is only a finite number Nk of
eigenvalues λkj with λkj ≥ 0, the unstable eigenvalues; and a finite number Nkl of
 
eigenvalues with λklj ≥ 0, j = 1, . . . , Nkl . We denote by ϕ kj , j = 1, 2, . . . , and
 
ϕ k∗
j , j = 1, 2, . . . , the corresponding eigenfunctions for −Ak and its adjoint −A∗k ,
   
respectively; and by ϕ klj , j = 1, 2, . . . and ϕ kl∗ j , j = 1, 2, . . . the correspond-
ing eigenfunctions for −Akl and its adjoint −A∗kl , respectively.
Next, we show that the unique continuation-type hypothesis (A5) from Chap. 2
holds in the present case. The Dirichlet map D that lifts the boundary conditions into
the equations is introduced in (3.54) below, similarly as in (2.16). Then one easily
deduces that in the present case,

D ϕ = ϕ (1),

where D was defined in (2.17). Hence, validation of hypothesis (A5) is equivalent


to showing that for the solution ϕ ∗ to

⎪ ∗  ∗   ∗ 
⎨ ν(ϕ ) − (2ν(k + l ) − ikU + λ̄)(ϕ ) + 2ikU (ϕ )
2 2

⎪ + ((k 2 + l 2 )λ̄ + ν(k 2 + l 2 )2 − ik(k 2 + l 2 )U )ϕ ∗ = 0, (3.34)


⎩ ϕ ∗ (0) = ϕ ∗ (1) = 0, (ϕ ∗ ) (0) = (ϕ ∗ ) (1) = 0,
58 3 Stabilization of Periodic Flows in a Channel

if it satisfies in addition that (ϕ ∗ ) (1) = 0, then necessarily ϕ ∗ ≡ 0. This is in con-


tradiction to the fact that ϕ ∗ is an eigenfunction of the adjoint operator −A∗kl of −Akl .
Thus necessarily,

D ϕ ∗ = (ϕ ∗ ) (1) = 0.

As announced in Chap. 2, this task is not an easy one. Even if the fourth-order
equation (3.34) has five null boundary conditions, we cannot deduce immediately
that the only solution is the trivial one, since the two boundary conditions in y = 0
may be linearly dependent with those given in y = 1. To overcome this problem,
we take into account the special form of the equation, more precisely its symmetric
nature. Indeed, by the form of U in (3.3), one can easily see that

U (y) = U (1 − y), ∀y ∈ (0, 1).

This implies that if ϕ ∗ (y) is a solution to (3.34), then ϕ ∗ (1 − y) is also a solution


to (3.34). This enables us to assume that the eigenfunction is either symmetric or
antisymmetric, i.e., ϕ ∗ (y) = ±ϕ ∗ (1 − y), ∀y ∈ [0, 1]. By assuming this, we gain
another null boundary condition, namely

(ϕ ∗ ) (0) = ±(ϕ ∗ ) (1) = 0,

and it turns out that these six null boundary conditions are enough to establish our
claim.

Lemma 3.3 Let λkj for some 0 < |k| ≤ S, and let j ∈ {1, . . . , Nk } be an unstable
eigenvalue of −A∗k . Then we can choose the corresponding adjoint eigenfunction ϕ k∗j

such that (ϕ k∗
j ) (1) > 0.

Moreover, let λklj for some 0 < k 2 + l 2 ≤ S, and let j ∈ {1, . . . , Nkl } be an
unstable eigenvalue of −A∗kl . Then we can choose the corresponding adjoint eigen-
kl∗ 
j such that (ϕ j ) (1) > 0.
function ϕ kl∗

Proof We will consider only the more complex case, the eigenvalues and eigenfunc-
tions of −A∗kl , while for −A∗k one may construct a similar argument. We aim to show
kl∗ 
j such that (ϕ j ) (1) = 0. Then if needed,
that we can choose the eigenfunction ϕ kl∗
kl∗ 
j by (ϕ j ) (1)ϕ j , we obtain the desired result. The proof follows in
replacing ϕ kl∗ kl∗

three steps.
Step 1. For a function f : [0, 1] → C, let us denote by fˇ : [0, 1] → C the function

fˇ(y) := f (1 − y), ∀y ∈ [0, 1].

We say that the function f : [0, 1] → C is symmetric if f (y) = fˇ(y), ∀y ∈ [0, 1],
and antisymmetric if f (y) = − fˇ(y), ∀y ∈ [0, 1]. In this step, we show that we
can choose a basis of the adjoint eigenfunction space consisting of symmetric or
antisymmetric functions.
3.2 The Stabilization Result 59

Let us denote by λ := λklj the unstable eigenvalue. If ϕ ∗ := ϕ kl∗


j is an eigenfunction

corresponding to λ̄, then ϕ satisfies the boundary value problem

⎪ ∗  ∗   ∗ 
⎨ ν(ϕ ) − (2ν(k + l ) − ikU + λ̄)(ϕ ) + 2ikU (ϕ )
2 2

⎪ + ((k 2 + l 2 )λ̄ + ν(k 2 + l 2 )2 − ik(k 2 + l 2 )U )ϕ ∗ = 0, (3.35)


⎩ ϕ ∗ (0) = ϕ ∗ (1) = 0, (ϕ ∗ ) (0) = (ϕ ∗ ) (1) = 0.

Let us observe that if ϕ ∗ is a solution to (3.35), then ϕˇ∗ is also a solution to (3.35),
because of the symmetric form of the equation.
Let us denote by H the fourth-dimensional linear space of the solutions to the
fourth-order linear homogeneous differential equation
 
ν(ϕ ∗ ) − (2ν(k 2 + l 2 ) − ikU + λ̄)(ϕ ∗ ) + 2ikU  (ϕ ∗ )
+ ((k 2 + l 2 )λ̄ + ν(k 2 + l 2 )2 − ik(k 2 + l 2 )U )ϕ ∗ = 0, a.e. in (0, 1).

Then the eigenfunction space can be written as the linear space E , defined as
 
E := ϕ ∈ H : ϕ(0) = ϕ(1) = 0, (ϕ) (0) = (ϕ) (1) = 0 .

It is easy to see that the dimension of E is ≤ 2. We claim that we can find a basis for this
linear space consisting of symmetric functions or antisymmetric functions. Indeed,
let us assume that there exists ϕ ∈ E that is neither symmetric nor antisymmetric.
Then the two functions

ϕ1 := ϕ + ϕ̌ and ϕ2 := ϕ − ϕ̌

are both in E . Moreover, ϕ1 = 0, ϕ2 = 0, and the system {ϕ1 , ϕ2 } is linearly inde-


pendent, because ϕ1 is symmetric and ϕ2 is antisymmetric. This, together with the
fact that the dimension of E is ≤ 2, proves our claim.
Hence we can assume that the corresponding eigenfunction ϕ ∗ is either symmetric
or antisymmetric. Assume, for instance, that ϕ ∗ is symmetric. The other case can be
treated similarly.
We want to show that we have (ϕ ∗ ) (1) = 0. Let us assume, for the sake of a
contradiction, that (ϕ ∗ ) (1) = 0. From symmetry, we get also that (ϕ ∗ ) (0) = 0.
It is easy to see that ϕ ∗ satisfies F∗kl ϕ ∗ = 0, where F∗kl is the adjoint operator of
Fkl given by (3.24). We have
 1  1
0= F∗kl ϕ ∗ ϕdy = ϕ ∗ Fkl ϕdy + ν((ϕ ∗ ) (1)ϕ  (1) − (ϕ ∗ ) (0)ϕ  (0)),
0 0
(3.36)
∀ϕ ∈ D(Fkl ),
60 3 Stabilization of Periodic Flows in a Channel

by taking into account the boundary conditions for ϕ ∗ , i.e.,

ϕ ∗ (0) = ϕ ∗ (1) = 0, (ϕ ∗ ) (0) = (ϕ ∗ ) (1) = 0, (ϕ ∗ ) (0) = (ϕ ∗ ) (1) = 0.

From (3.36), we get that


 1
|ϕ ∗ |2 dy = 0, (3.37)
0

provided that ϕ satisfies 


Fkl ϕ = ϕ ∗ , in (0, 1),
(3.38)
ϕ  (0) = ϕ  (1) = 0,

where Fkl is the differential form given in (3.26).


Relation (3.37) implies that ϕ ∗ ≡ 0, which is in contradiction to the fact that ϕ ∗
is an eigenfunction. This implies that the assumption (ϕ ∗ ) (1) = 0 is false, which
leads to our desired result. So in order to complete the proof, it remains to show that
there exists a solution ϕ to the Eq. (3.38).
Step 2. We claim that there exists a function ϕ1 such that

Fkl ϕ1 = 0, y ∈ (0, 1) ,
(3.39)
ϕ1 (0) − ϕ1 (1) = 0.

The proof of this claim will be given in the last step of the proof. In this step, we
will prove that under the above claim, there exists a solution to the Eq. (3.38), and
so we obtain the desired result.
Let us construct the function ϕ2 := ϕ1 + ϕ̌1 . As seen before, the equation Fkl ϕ1 =
0 is symmetric, and this implies that if ϕ1 is a solution, then also ϕ̌1 is. Hence we
have ⎧
⎨ Fkl ϕ2 = 0,
ϕ  (0) = ϕ1 (0) − ϕ1 (1) = 0, (3.40)
⎩ 2
ϕ2 is symmetric.

Let ϕ3 be a solution to the equation Fkl ϕ3 = ϕ ∗ such that ϕ3 is symmetric. There


exists a symmetric solution to the above equation because ϕ ∗ is symmetric. Indeed,
let ϕ4 be a solution to the equation Fkl ϕ4 = 21 ϕ ∗ . If we take ϕ3 := ϕ4 + ϕ̌4 , we have
Fkl ϕ3 = 21 ϕ ∗ + 21 ϕ ∗ = ϕ ∗ , and ϕ3 is symmetric, as desired.
ϕ  (0)
Now we define ϕ5 := − ϕ3 (0) ϕ2 + ϕ3 . We have that ϕ5 is well defined, and more-
2
over, ϕ5 satisfies

⎨ Fkl ϕ5 = ϕ ∗ ,
ϕ5 is symmetric, (3.41)
⎩ 
ϕ5 (0) = 0 and, because of the symmetry, ϕ5 (1) = 0.

So we can take ϕ := ϕ5 .
3.2 The Stabilization Result 61

Step 3. In the last step we show that there exists a function ϕ1 such that

Fkl ϕ1 = 0, y ∈ (0, 1) ,
(3.42)
ϕ1 (0) − ϕ1 (1) = 0.

We assume, for the sake of a contradiction, that this is not true. Hence for every
solution ψ to the equation Fkl ψ = 0, we have ψ  (0) − ψ  (1) = 0.
Let us denote by H1 the linear space of the solutions to the equation Fkl ψ = 0,
and by E1 the linear subspace of H1 defined by
 
E1 := ψ ∈ H1 : ψ  (0) − ψ  (1) = 0 .

Let ψ ∈ E1 . Define Ψ := ψ + ψ̌. We have

Ψ  (0) = ψ  (0) − ψ  (1) = 0, Ψ  (1) = ψ  (1) − ψ  (0) = 0,

since ψ ∈ H1 . Also,

Ψ  (0) = ψ  (0) − ψ  (1) = 0, Ψ  (1) = ψ  (1) − ψ  (0) = 0,

since ψ ∈ E1 .
Let us set Φ := Ψ  − (k 2 + l 2 )Ψ . The equation Fkl Ψ = 0 can be rewritten in
the form
νΦ  − (ν(k 2 + l 2 ) + ikU + λ)Φ + ikU  Ψ = 0, (3.43)

and
Ψ  − (k 2 + l 2 )Ψ = Φ. (3.44)

Observe that since Ψ  (0) = Ψ  (1) = 0 and Ψ  (0) = Ψ  (1) = 0, we have

Φ  (0) = Φ  (1) = 0.

If we scalar multiply Eq. (3.43) by Φ and Eq. (3.44) by Ψ , we get


 1  1
 2
−ν |Φ | dy − (ν(k + l ) + λ)
2
|Φ|2 dy
2
0  1  1 0 (3.45)
−ik U |Φ|2 dy + ikU  Ψ Φdy = 0,
0 0

and   
1 1 1
− |Ψ  |2 dy − (k 2 + l 2 ) |Ψ |2 dy = ΦΨ dy. (3.46)
0 0 0
62 3 Stabilization of Periodic Flows in a Channel
1
From (3.46) we see that 0 Ψ Φdy is a real number. Using this and taking the real
part of (3.45), we obtain that
 1  1
 2
−ν |Φ | dy − (ν(k + l ) + λ)
2 2
|Φ|2 dy = 0.
0 0

Since λ is an unstable eigenvalue, we have that λ > 0. So the relation above yields

Φ ≡ 0.

It is easy to see that Φ ≡ 0 implies Ψ ≡ 0, which implies that ψ = −ψ̌. Hence

ψ ∈ E1 ⇒ ψ = −ψ̌. (3.47)

Let us consider the following subspaces of H1 :


 
S := ψ ∈ H1 : ψ = ψ̌ ,
 
A S := ψ ∈ H1 : ψ = −ψ̌ ,

which are the symmetric subspace of H1 and the antisymmetric subspace of H1 ,


respectively. It is easy to see that
 
 1  1
S = ψ ∈ H1 : ψ =ψ =0
2 2
 
1  1
A S = ψ ∈ H1 : ψ =ψ =0 .
2 2

This implies that dimC S = dimC A S = 2.


Relation (3.47) implies that E1 ⊂ A S . Let us define the subspace of H1
 
F1 := ψ ∈ H1 : ψ  (0) = 0 .

We have dimC F1 = 3. Since dimC S = 2, dimC H1 = 4, and F1 , S ⊂ H1 , we


have that
F1 ∩ S = {0} .

Hence there exists 0 = ψ ∈ F1 ∩ S . That ψ ∈ F1 implies that ψ  (0) = 0, and


from symmetry (because ψ ∈ S ), ψ  (1) = 0. These yield that ψ  (0) − ψ  (1) = 0
and ψ ∈ H1 . We conclude that ψ ∈ E1 ⊂ A S . Finally, we have ψ ∈ S ∩ A S ,
which implies ψ ≡ 0, which is absurd. Hence the assumption made is not true, which
means that there exists a function ϕ1 such that
3.2 The Stabilization Result 63

Fkl ϕ1 = ϕ ∗ , y ∈ (0, 1),
(3.48)
ϕ1 (0) − ϕ1 (1) = 0.

As we mentioned earlier, this completes the proof. 

3.2.1 The Feedback Law and the Stability of the System

We saw earlier that for k = l = 0, k = 0, and l = 0, the stability of the system is


guaranteed without any boundary control. We continue with the case k = 0 and
l = 0. System (3.5) reduces to

⎪ (u k0 )t − ν[−k 2 u k0 + u k0 ] + ikU u k0 + U  vk0 = −ikpk0 , a.e. in (0, 1),



⎪ (vk0 )t − ν[−k 2 vk0 + vk0
] + ikU vk0 = − pk0 
, a.e. in (0, 1),


⎨ 
iku k0 + vk0 = 0, a.e. in (0, 1),
 (3.49)
⎪ (wk0 )t − ν[−k 2 wk0 + wk0
⎪ ] + ikU wk0 = 0, a.e. in (0, 1),



⎪ u (0) = u (1) = 0, v (0) = 0, vk0 (1) = ψk0 , wk0 (0) = wk0 (1) = 0,


k0 k0 k0
∀t ≥ 0.

First, we consider separately only the equations for u k0 and vk0 , that is,


⎪ (u k0 )t − ν[−k 2 u k0 + u k0 ] + ikU u k0 + U  vk0 = −ikpk0 , a.e. in (0, 1),

⎨ (v ) − ν[−k 2 v + v ] + ikU v = − p  , a.e. in (0, 1),
k0 t k0 k0 k0 k0
 (3.50)

⎪ iku + v = 0, a.e. in (0, 1),


k0 k0
u k0 (0) = u k0 (1) = 0, vk0 (0) = 0, vk0 (1) = ψk0 , ∀t ≥ 0.

We reduce the pressure from the system and use the free divergence condition to get
that vk0 satisfies the equation
⎧  
⎪ 
⎨ (−vk0 + k vk0 )t + νvk0 − (2νk + ikU )vk0
2 2

⎪ + k(νk 3 + ik 2 U + iU  )vk0 = 0, t ≥ 0, y ∈ (0, 1), (3.51)


⎩ v (0) = v (1) = 0, v (0) = 0, v (1) = ψ (t),
k0 k0 k0 k0 k0

which can be equivalently rewritten as



(Lk vk0 )t + Fk vk0 = 0, t ≥ 0, y ∈ (0, 1),
  (3.52)
vk0 (0) = vk0 (1) = 0, vk0 (0) = 0, vk0 (1) = ψk0 (t),

where Lk and Fk are given in (3.26). Note that formally, Eq. (3.52) may be rewritten
as
(z k )t + Ak z k = 0
64 3 Stabilization of Periodic Flows in a Channel

by setting z k := Lk vk0 , where we recall the operators −Ak with their eigenvalues
{λkj } j and their eigenfunctions {ϕ kj } j , described in the previous section.
For the sake of simplicity, we assume that

the unstable eigenvalues λkj , j = 1, . . . , Nk , are simple, (3.53)

which is the counterpart of hypothesis (A4.1) from Chap. 2. The present algorithm
works equally well in the case of semisimple eigenvalues by doing tricks similar to
those in Chap. 2. However, we will not develop this subject here since the presentation
may get too hard to follow.
Using (if necessary) the Gram–Schmidt procedure, we may assume that the sys-
tems {ϕ kj } Nj=1
k
and {ϕ k∗ Nk
j } j=1 are biorthonormal, that is,

ϕik , ϕ k∗
j  = δi j , i, j = 1, . . . , Nk ,

where δi j is the Kronecker symbol.


Next we will lift the boundary conditions into the Eq. (3.52) via the Dirichlet
operator, defined as (see also (2.16)) follows: let α ∈ C. We denote by Dγ α := ω the
solution to the equation


⎨ 
Nk
Fk ω + 2 λkj Lk ω, φ k∗
j φ j + γ Lk ω = 0, y ∈ (0, 1),
k
(3.54)

⎩ j=1
 
ω(1) = α, ω(0) = ω (0) = ω (1) = 0.

(It is known that for γ > 0 large enough, the above equation has a unique solution
1
in H 2 (0, 1).) Next, let us compute Lk Dγ α, φmk∗ , for some 1 ≤ m ≤ Nk . To this
end, we have from (3.54) scalar multiplied by φmk∗ and by the biorthogonality of the
eigenfunctions systems that

0 = Fk ω, φmk∗  + 2λkm Lk ω, φmk∗  + γ Lk ω, φmk∗ 


= −α(φmk∗ ) (1) + ω, F∗k φmk∗  + (γ + 2λkm )Lk ω, φmk∗ (by Lemma 3.3))
= −α(φmk∗ ) (1) + Lk ω, A∗k φmk∗  + (γ + 2λkm )Lk ω, φmk∗ .

This yields that


α
Lk Dγ α, φmk∗  = (φ k∗ ) (1), 1 ≤ m ≤ Nk . (3.55)
γ + λkm m

Note that in this case,



D ϕ = ϕ (1),

where D was introduced in (2.17).


3.2 The Stabilization Result 65

We choose Nk constants 0 < γ1k < γ2k < · · · < γ Nk k large enough that Eq. (3.54),
corresponding to each γik , i = 1, . . . , Nk , has a solution, and denote by Dγik , i =
1, . . . , Nk , the corresponding solutions.
Now for each 0 < |k| ≤ S, we introduce the feedback ψk0 as (see the proportional
feedback defined in (2.6) in Chap. 2)
⎛! " ⎞ ⎛ k∗  ⎞
!Lk vk0 (t), ϕ1k∗ " (ϕ1 ) (1) $
k∗

⎜ Lk vk0 (t), ϕ ⎟ ⎜ (ϕ k∗ ) (1) ⎟
ψk0 (t) := − Λksum Ak ⎜ 2 ⎟ ⎜ ⎟
⎝ ...................... ⎠ , ⎝ ............... ⎠
2 , (3.56)
! " k∗ 
Lk vk0 (t), ϕ Nkk∗
(ϕ Nk ) (1) N k

with Λksum := Λkγ k + · · · + Λkγ k , for


1 Nk

⎛ ⎞
1
γik +λk1
0 ... 0
⎜ 0 1
... 0 ⎟
⎜ γik +λk2 ⎟
Λkγ k := ⎜ ⎟ , i = 1, . . . , Nk . (3.57)
i ⎝ .................................. ⎠
0 0 . . . γ k +λk
1
i Nk

Moreover,
Ak := (B1k + B2k + · · · + B Nk k )−1 (3.58)

(the counterpart of the matrix A introduced in (2.24), Chap. 2), where

Bik := Λkγ k Bk Λkγ k , i = 1, . . . , Nk (3.59)


i i

(see also (2.22)), with Bk the matrix


⎛ ⎞
[ϕ1k∗ ) (1)k ]2 (ϕ1k∗ ) (1)(ϕ2k∗ ) (1) . . . (ϕ1k∗ ) (1)(ϕ Nk∗k ) (1)
⎜(ϕ k∗ ) (1)(ϕ k∗ ) (1) [(ϕ2k∗ ) (1)]2 . . . (ϕ2k∗ ) (1)(ϕ Nk∗k ) (1)⎟
Bk :=⎜ 2 1 ⎟
⎝.................................................................................................⎠ (3.60)
(ϕ Nk∗k ) (1)(ϕ1k∗ ) (1) (ϕ Nk∗k ) (1)(ϕ2k∗ ) (1) . . . [(ϕ Nk∗k ) (1)]2

(he counterpart of the Gram matrix B introduced in (2.20) in Chap. 2). Here ·, · N
stands for the classical scalar product in C N .
By Lemma 3.3, we know that (ϕik∗ ) (1) = 0, i = 1, . . . , Nk , and therefore,
hypothesis (A5) is verified for the present case. Hence just as in Proposition 2.1,
one can show that the above matrices Ak are well defined, and consequently, the
feedback ψk0 is well defined.
We plug the above ψk0 into (3.52) and show that it ensures its stability.
Similarly to (2.25), we decompose ψk0 as

ψk0 = v1k + · · · + vkNk ,


66 3 Stabilization of Periodic Flows in a Channel

where
⎛! " ⎞ ⎛ 1 (φ k∗ ) (1) ⎞
 Lk vk0 (t), ϕ1k∗ γik +λk1 1 $
⎜ ! "
k∗ ⎟ ⎜ k 1 k (φ k∗ ) (1) ⎟
k ⎜ Lk vk0 (t), ϕ2 ⎟ ⎜ γi +λ2 2 ⎟
vi (t) := − A ⎝
k
⎠,⎜ ⎟ , t ≥ 0, (3.61)
.......................
! " ⎝ ....................... ⎠
Lk vk0 (t), ϕ Nk∗k 1
γ k +λk
(φ Nk∗k ) (1)
i Nk Nk

for i = 1, 2, . . . , Nk . Likewise in (2.35), we have that


⎛% &⎞
Lk Dγik vik , ϕ1k∗ ⎛! "⎞
⎜% &⎟ ! Lk vk0 , ϕ1k∗ "
⎜ ⎟ ⎜ Lk vk0 , ϕ k∗ ⎟
⎜ Lk Dγik vik , ϕ2k∗ ⎟
⎜ ⎟ = −Bik Ak ⎜ 2 ⎟
⎝ .................. ⎠ , for i = 1, . . . , Nk . (3.62)
⎜ ....................... ⎟ ! "
⎝% &⎠ Lk vk0 , ϕ Nk∗k
Lk Dγik vi , ϕ Nk
k k∗

In the next lines, the approach will slightly differ from that in Chap. 2, in the sense
that we will not consider the equivalent reformulation of Eq. (3.52) via the variation
of constants formula and the extension operators. Instead, we perform computations
directly in Eq. (3.52). The reason is that in this case, we do not care about the nonlinear
equation, but only the linearized one.
Returning to the linear equation (3.52), we define

z k := Lk [vk0 − Dγ1k v1k − · · · − Dγ Nk vkNk ].


k

We see that z k belongs to D(−Ak ). Subtracting (3.52) and (3.54), corresponding to


Dγik vik , i = 1, . . . , Nk , we arrive at


Nk 
Nk
(z )t = −Ak z + 2
k k
λkj Lk Dγik vik , ϕ k∗
j ϕ j
k
+ γik Lk Dγik vik
 i, j=1
 i=1
(3.63)

Nk
− Lk Dγik vik .
i=1 t

In terms of the new variable z k , the feedbacks vik , i = 1, .., Nk , have the form

! k ⎛ " ⎞ ⎛ k 1 k (ϕ k∗ ) (1) ⎞


 z
! k (t), ϕ k∗
1 " γi +λ1 1 $
⎜ ⎟ ⎜ 1
(ϕ k∗ 
) (1) ⎟
1 z (t), ϕ k∗
⎜ ⎟
vik (t) = − Ak ⎜ ⎟ γi +λ2
k k 2
⎝ ....................... ⎠ , ⎜ ⎟ .
2 (3.64)
2 ! k " ⎝ ....................... ⎠
z (t), ϕ Nkk∗ 1 
(ϕ Nk ) (1)
k∗
γ k +λk i Nk Nk
3.2 The Stabilization Result 67

To see this, we do the following straightforward computations:


⎛ % & ⎞ ⎛ 1 (ϕ k∗ ) (1) ⎞
z k (t), ϕ1k∗ γik +λk1 1
 ⎜ % & ⎟ ⎜ ⎟$
⎜ ⎟ ⎜ 1 (ϕ k∗ ) (1) ⎟
1 ⎜ z k (t), ϕ2k∗ ⎟ ⎜ γ k +λk 2

k
A ⎜ ⎟,⎜ ⎟
⎜ ....................... ⎟ ⎜ ⎟
i 2
....................... ⎟
& ⎠ ⎜
2
⎝ % ⎝ ⎠
1 k∗ 
z k (t), ϕ k∗ k k (ϕ N ) (1)
γi +λ N k
N k k Nk

⎛ ! " ⎞ ⎛ 1
(ϕ k∗ ) (1)

Lk vk0 , ϕ1k∗ γik +λk1 1
 ⎟$
⎜ ! " ⎟ ⎜ 1
(ϕ k∗ ) (1) ⎟
k ⎜ Lk vk0 , ϕ2 ⎟ ⎜
k∗
1 ⎜ γik +λk2 2 ⎟
= A ⎜ ....................... ⎟ , ⎜ ⎟
2 ⎝ % & ⎠ ⎝ ....................... ⎠
Lk vk0 , ϕ k∗ 1
(ϕ k∗ ) (1)
Nk k k γi +λ N
Nk
k Nk
⎛% &⎞
⎛ ⎞
Lk Dγ k v kj , ϕ1k∗ 1
(ϕ k∗ ) (1)

 ⎟ γik +λk1 1
% & ⎟$
j
⎜ ⎜
1 k∗ ⎟ ⎜ (ϕ k∗ ) (1)
Nk 1
k ⎜ Lk D γ k v j , ϕ 2 ⎟ ⎜
k ⎟
− A ⎜ ⎟, γik +λk2 2 ⎟
⎜ ....................... ⎟ ⎜ ⎟
j
2 ⎝ ....................... ⎠
j=1 ⎝% & ⎠
k +λk (ϕ Nk ) (1)
1 k∗  ,
Lk Dγ k v kj , ϕ k∗
Nk γi N k
j Nk
(taking into account relation (3.62))
⎛ ! " ⎞ ⎛ 1
(ϕ k∗ ) (1)

!Lk vk0 , ϕ1k∗ "
k∗
 γik +λk1 1
( ⎜ Lk vk0 , ϕ ⎟ ⎜ ⎟$
1 ' (ϕ k∗ ) (1) ⎟
1
k⎜ 2 ⎟ ⎜ ⎜ γik +λk2 2 ⎟
= I + A (B1 + · · · + B Nk ) A ⎜ ....................... ⎟ , ⎜
k k k
2 ⎝ % & ⎠ ⎝ ....................... ⎟ ⎠
Lk vk0 , ϕ k∗Nk
1 k∗ 
k (ϕ Nk ) (1)
k γi +λ N
k Nk

= −vik ,

since Ak = (B1k + · · · + B Nk k )−1 . Moreover, as in (3.62), we have now


⎛% &⎞
Lk Dγik vik , ϕ1k∗ ⎛! k "⎞
⎜% &⎟ ! z (t), ϕ1k∗ "
⎜ ⎟ ⎜ z k (t), ϕ k∗ ⎟
⎜ Lk Dγik vik , ϕ2k∗ ⎟ 1
⎜ ⎟ = − Bik Ak ⎜ 2 ⎟
⎝ ................. ⎠ , i = 1, . . . , Nk . (3.65)
⎜ ....................... ⎟ 2 ! "
⎝% &⎠ z k (t), ϕ Nk∗k
Lk Dγik vi , ϕ Nk
k k∗

Next, we decompose system (3.63) into its stable and unstable parts. Recall the
projections PNk , and its adjoint PN∗k , defined by
 
1 −1 1
PNk := (λI + Ak ) dλ; PN∗k := (λI + A∗k )−1 dλ,
2π i Γ 2π i Γ¯

where Γ (its conjugate Γ¯ , respectively) separates the unstable spectrum from the
stable one of −Ak (−A∗k , respectively). We set
68 3 Stabilization of Periodic Flows in a Channel

− AuNk := PNk (−Ak ), −AsNk := (I − PNk )(−Ak ), (3.66)

for the restrictions of −Ak , respectively.


The system (3.63) can accordingly be decomposed as

z k = z Nk + ζ Nk , z Nk := PNk z k , ζ Nk := (I − PNk )z k ,

where applying PNk and (I − PNk ) to (3.63), we obtain

d
z N + AuNk z Nk
dt k
⎡ ⎛ ⎞⎤
Nk 
Nk 
Nk
= PNk ⎣2 λ j Lk Dγ k vi , ϕ j ϕ j +
k k k∗ k
γi Lk Dγ k vi − ⎝Lk
k k
Dγ k vi ⎠ ⎦
k
(3.67)
i i i
i, j=1 i=1 i=1 t
d
ζ N + AsNk ζ Nk
dt k
⎡ ⎛ ⎞⎤
 Nk 
Nk 
Nk
= (I −PNk )⎣2 λkj Lk Dγ k vik , ϕ k∗
j ϕ j +
k
γik Lk Dγ k vik − ⎝Lk Dγ k vik⎠ ⎦ (3.68)
i i i
i, j=1 i=1 i=1 t

respectively.
Let us decompose z Nk as


Nk
z Nk (t, y) = z k (t), ϕ k∗
j ϕ j (y).
k

j=1

We introduce this z Nk in Eq. (3.67). Then we scalar multiply (3.67) successively by


j , j = 1, . . . , Nk , take account the biorthogonality of the systems of eigenfunc-
ϕ k∗
tions, notice that we may assume that PN∗k ϕ k∗ ∗
j = ϕ j (since PNk is idempotent), and
k∗

take advantage of relation (3.65) to get that


Nk
1 k N
1 k N
Zt k = Λk Z k − Λk Bik Ak Z k − γik Bik Ak Z k + B k Ak Zt k , t ≥ 0,
i=1
2 i=1 2 i=1 i
⎛ ⎞
z k (t), ϕ1k∗ 
⎜ z k (t), ϕ k∗  ⎟
where Z k := ⎜ 2 ⎟
⎝ ................. ⎠ and Λ := diag (λ1 , λ2 , . . . , λ Nk ). Recalling that
k k k k

z k (t), ϕ Nk∗k 
Ak = (B1k + · · · + B Nk k )−1 , we see that the above relation yields


Nk
Zt k = −γ1k Z k + (γ1k − γik )Bik Ak Z k , t ≥ 0, (3.69)
i=2
3.2 The Stabilization Result 69

which is the counterpart of Eq. (2.39) from Chap. 2. Thus continuing with arguments
similar to those in (2.39)–(2.41), we conclude that

Z k (t)2Nk ≤ Ce−2γ1 t Zok 2Nk , t ≥ 0, (3.70)

for some positive constants C, γ1 > 0 independent of k.


The above relation says that the unstable part of the system (3.63) is, in fact,
stable. Further, one can argue as at the end of the proof of Theorem 2.1 in order to
deduce that the system (3.50) is stabilized by ψk0 defined in (3.56). The details are
omitted.
Therefore, by virtue of Lemma 3.2 and (3.70), we may conclude that on plugging
the feedback
⎛! "⎞ ⎛ k∗  ⎞
!Lk vk0 (t), ϕ1k∗ " (ϕ1 ) (1) $
k∗

⎜ L (t), ϕ ⎟ ⎜ k∗  ⎟
ψk0 = − Λksum Ak ⎜
v
k k0 2 ⎟,⎜ (ϕ2 ) (1) ⎟
⎝......................⎠ ⎝ .............. ⎠ , 0 < |k| ≤ S, (3.71)
! "
Lk vk0 (t), ϕ Nk∗k (ϕ Nk∗k ) (1) Nk
ψk0 ≡ 0, |k| > S,

into (3.49), we obtain that

u k0 (t)2 + vk0 (t)2 + wk0 (t)2 ≤ C3 e−μ3 t (u 0k0 2 + vk0  + wk0
0 2 0 2
 ),
(3.72)
∀t ≥ 0, ∀|k| > 0, for some constants C3 , μ3 > 0, independent of k.
The case k = 0 and l = 0 can be treated similarly to that above, obtaining that
the feedback
⎛ ! " ⎞ ⎛ kl∗  ⎞
 !Lkl vkl (t), ϕ1kl∗ "
kl∗
(ϕ1 ) (1) $

kl ⎜ Lkl vkl (t), ϕ2
⎟ ⎜ (ϕ kl∗ ) (1) ⎟
ψkl = − Λkl ⎟ ⎜ 2 ⎟
sum A ⎝ .......................... ⎠ , ⎝ ................. ⎠
! " 
Lkl vkl (t), ϕ Nkl∗kl (ϕ Nkl∗kl ) (1) Nkl
(3.73)

for 0 < k 2 + l 2 ≤ S,

ψkl ≡ 0 for k 2 + l 2 > S

sum := Λγ kl + · · · + Λγ kl , for
ensures the stability. Here Λkl kl kl
1 Nkl

⎛ ⎞
1
γikl +λkl
0 ... 0
⎜ 0 ⎟
1


1
γikl +λkl
... 0 ⎟
Λklkl := ⎜
2 ⎟ , i = 1, . . . , Nkl , (3.74)
γi ⎝ .................................... ⎠
0 0 . . . γ kl +λ1
kl
i Nkl

for some 0 < γ1kl < · · · < γ Nklkl , Nkl real constants sufficiently large. Moreover,
70 3 Stabilization of Periodic Flows in a Channel

Akl := (B1kl + B2kl + · · · + B Nklkl )−1 , (3.75)

where
Bikl := Λkl
γ kl
Bkl Λkl
γ kl
, i = 1, . . . , Nkl , (3.76)
i i

Bkl being the matrix


⎛      ⎞
[(ϕ1kl∗ ) (1)]2 (ϕ1kl∗ ) (1)(ϕ2kl∗ ) (1) . . . (ϕ1kl∗ ) (1)(ϕ kl∗
Nkl ) (1)
⎜ kl∗  kl∗     ⎟
⎜ Nkl ) (1) ⎟
Bkl := ⎜ (ϕ2 ) (1)(ϕ1 ) (1) [(ϕ2kl∗ ) (1)]2 . . . (ϕ2kl∗ ) (1)(ϕ kl∗
⎟. (3.77)
⎝ ....................................................................................................... ⎠
    
Nkl ) (1)(ϕ1 ) (1) (ϕ Nkl ) (1)(ϕ2 ) (1) . . .
(ϕ kl∗ kl∗ kl∗ kl∗ [(ϕ kl∗
Nkl ) (1)]
2

We may deduce then that

u kl (t)2 + vkl (t)2 + wkl (t)2 ≤ C4 e−μ4 t (u 0kl 2 + vkl


0 2
 + wkl
0 2
 ), (3.78)

∀t ≥ 0, for all k, l ∈ Z∗ , for some constants C4 , μ4 > 0, independent of k, l.


Collecting all the above results, we conclude with the following theorem:
Theorem 3.1 Given initial data u o , vo , wo in L 22π (O), define the feedback Ψ as
 
Ψ (t, x, z) = ψk0 (t)eikx + ψkl (t)eikx eilz , (3.79)

0<|k|≤S 0< k 2 +l 2 ≤S

where for 0 < |k| ≤ S,

ψk0 (t) :=
⎛  −ikx k∗
⎞ ⎛ k∗  ⎞
 O (−v yy (t) + k 2 v(t))e−ikx ϕ1k∗ (y)d xd ydz
2 (ϕ1 ) (1) $
⎜ ϕ2 (y)d xd ydz ⎟ ⎜ k∗  ⎟
− Λksum Ak ⎜ O (−v yy (t) + k v(t))e ⎟ , ⎜ (ϕ2 ) (1) ⎟ ,
⎝ ... ⎠ ⎝ .............. ⎠
 −ikx k∗ 
O (−v yy (t) + k v(t))e ϕ Nk (y)d xd ydz (ϕ Nk ) (1)
2 k∗
Nk
(3.80)

j the eigenfunctions of the adjoint operator −Ak of −Ak given by (3.22), and
with ϕ k∗
Λksum , Ak are defined
√ by (3.56)–(3.58).
And for 0 < k 2 + l 2 ≤ S,

ψkl (t) :=
⎛  −ikx e−ilz ϕ kl∗ (y)dxdydz
⎞ ⎛ kl∗  ⎞
 O (−v yy (t) + (k + l )v(t))e
2 2 (ϕ1 ) (1) $
⎜  1
−ikx e−ilz ϕ kl∗ (y)dxdydz ⎟ ⎜  ⎟
kl ⎜ O [−v yy (t) + (k + l )v(t)]e ⎟ ⎜ (ϕ2kl∗ ) (1) ⎟
2 2
− Λkl
sum A ⎜
2 ⎟,⎜ ⎟ ,
⎝ .................
 ⎠ ⎝ ... ⎠
−ikx −ilz 
O [−v yy (t) + (k 2 + l 2 )v(t)]e e ϕ kl∗
Nkl (y)dxdydz (ϕ kl∗
N ) (1) kl Nkl
(3.81)

with ϕ kl∗
j the eigenfunctions of the adjoint operator −Akl , of −Akl given in (3.25);
sum , A given in (3.74) and (3.75), respectively.
and Λkl kl
3.2 The Stabilization Result 71

Then the solution of the closed-loop system




⎪ u t − νΔu + U ∂∂ux + v ∂U
∂y
= − ∂∂ px ,



⎪ vt − νΔv + U ∂∂vx = − ∂∂ py ,




⎪ ∂w
⎨ wt − νΔw + U ∂ x = − ∂z ,
∂p

∂u
⎪ ∂x
+ ∂∂vy + ∂w
∂z
= 0, ∀t ≥ 0, x, z ∈ R, y ∈ (0, 1), (3.82)



⎪ (u, v, w, p)(t, x + 2π, y, z + 2π ) = (u, v, w, p)(t, x, y, z),



⎪ (u, w)(t, x, 0, z) = (u, w)(t, x, 1, z) = 0,



v(t, x, 0, z) = 0, v(t, x, 1, z) = Ψ (v), ∀t ≥ 0, x, z ∈ R, y ∈ (0, 1),

satisfies the exponential decay



 
|u(t, ξ )|2 + |v(t, ξ )|2 + |w(t, ξ )|2 dξ
(0,2π)×(0,1)×(0,2π)

−μt
 
≤ Ce |u 0 (ξ )|2 + |v0 (ξ )|2 + |w0 (ξ )|2 dξ,
(0,2π)×(0,1)×(0,2π)

t ≥ 0, for some positive constants C, μ.

3.3 Design of a Riccati-Based Feedback

Observe that slight perturbations of the coefficients of Eq. (3.1) lead to different
eigenfunctions of the linearized operator. This means that under small perturbations,
the feedback given in Theorem 3.1 might no longer ensure the stability of the system.
A more robust controller can be constructed via a Riccati-based approach. This is
what we do in this section. We reconsider the stabilization problem associated with
system (3.5), at each level (k, l) ∈ Z2 , by looking for a feedback representation of
the controller ψkl in terms of an operator solving a Riccati algebraic equation. We
will use the standard technique, namely minimization of a cost functional.
Let us consider the case k = 0 and l = 0. As in (3.50)–(3.52), we get that vk0
satisfies 
(Lk vk0 )t + Fk vk0 = 0, t ≥ 0, y ∈ (0, 1),
  (3.83)
vk0 (0) = vk0 (1) = 0, vk0 (0) = 0, vk0 (1) = ψk0 (t),

where Lk and Fk are given in (3.26).


Still, we lift the boundary conditions into equations. This time, the Dirichlet
operator is much simpler. Let ω =: Dγ k ψk0 be the solution to

Fk ω + γ k ω = 0, y ∈ (0, 1), t ≥ 0,
(3.84)
ω (0) = ω (1) = 0, ω(0) = 0, ω(1) = ψk0 , t ≥ 0.
72 3 Stabilization of Periodic Flows in a Channel

For γ k > 0 sufficiently large there is a solution. Then doing computations similar to
those in (2.27)–(2.29), we obtain that (3.83) is equivalent to

d
z k + Ak z k = Ak Dγ k ψk0 , t ≥ 0; z k (0) = z k0 := Lk vk0
0
, (3.85)
dt

where z k := Lk vk0 .
We associate to (3.85) the following linear quadratic control problem:
 ∞
1
φ(z k0 ) := min (L−1
k z k (t) + |ψk0 (t)| )dt,
2 2
(3.86)
2 0

subject to ψk0 ∈ L 2 (0, ∞; X ) and z k satisfying (3.85). Here


 
X := H 2 (0, 1) ∩ H01 (0, 1)

is the dual of the space H 2 (0, 1) ∩ H01 (0, 1).


First let us show that the optimization problem is well posed on the state space
X , i.e., φ(z k0 ) < ∞, ∀z k0 ∈ X . In other words, we must show that with z k0 ∈ X
arbitrary, there exists some control ψk0 ∈ L 2 (0, ∞; X ) such that the corresponding
solution z k of (3.85) satisfies z k ∈ L 2 (0, ∞; X ). In fact, such a feedback ψk0 is
provided in the above section. (It is the proportional controller given by (3.56) that
provides the exponential decay (3.72).) From the exponential stability and the form
of the feedback, we deduce also that there exists some constant a2 > 0 such that

φ(z k0 ) ≤ a2 ||L−1
k z k || , ∀z k ∈ X .
0 2 0
(3.87)

It is easy to see that the map φ(z) → z ∈ X is continuous, and thus, ||L−1 k z|| ≤
cφ(z). This, together with relation (3.87), shows that there exist constants a1 and a2
such that
a1 L−1 0 2 0 −1 0 2
k z k  ≤ φ(z k ) ≤ a2 Lk z k  , ∀z k ∈ X .
0
(3.88)

Thus by (3.88), there is a linear nonnegative self-adjoint operator Rk : X → X


associated with the linear symmetric form φ(·) such that

1
φ(z k0 ) = Rk z k0 , z k0 X , ∀z k0 ∈ X , (3.89)
2
where Rk ∈ L(X , X ).
By the dynamic programming principle, for each 0 < t < T , the optimal solution
(ψk∗ , z k∗ ) to (3.86) and (3.85) is also the solution to the optimization problem
⎧  T ⎫
⎨1 ⎬
(||L−1 z (s)|| 2
+ |ψ (s)| 2
)ds + φ(z (T )),
min 2 t k k k0 k
, (3.90)
⎩ ⎭
subject to (3.85), z k (t) = z k∗ (t)
3.3 Design of a Riccati-Based Feedback 73

z k∗ (t) ∈ X as initial condition, where z k∗ (T ) ∈ X as well.


By the maximum principle, we obtain that
 ∗
ψk∗ (t) = Ak Dγ k qT = νqT (1), a.e. t ∈ (0, T ),
L−2 ∗
k Rk z k (t) = −q T (t), ∀t ∈ [0, T ], (3.91)
 ∗
ψk∗ (t) = Ak Dγ k (L−2 ∗
k Rk z k (t)), ∀t ≥ 0,

where qT is the solution to the dual equation


0
d
q
dt T
− A∗k qT = L−2 ∗
k z k , ∀t ∈ (0, T ),
−2 ∗
(3.92)
qT (T ) = −Lk Rk z k (T ).

Let us show that Rk : H → H . More precisely, we show that if z k∗ is the optimal


solution to (3.85)–(3.86) (which is also optimal to (3.90)) with z k∗ (0) ∈ H , then
Rk z k∗ (0) ∈ H . We have that L−2 ∗
k z k ∈ L (0, T ; H (0, 1)). From Eq. (3.92), since
2 4

−A∗k generates an analytic C0 -semigroup, we know that

(T − t) 2 qT ∈ C([0, T ); D((−A∗k ) 2 ))
1 3

(for more details, see, for example, [80]). This yields

qT (0) ∈ D((−A∗k ) 2 ) ⊂ H 6 (0, 1),


3

because D(−A∗k ) ⊂ H 4 (0, 1). So

Rk z k∗ (0) = −L2k qT (0) ∈ H 2 (0, 1) ⊂ H,

as claimed.
Finally, we show that Rk is a solution to a Riccati-type equation. To this end, we
first notice that again by the dynamic programming principle and (3.89), we have
 ∞
1 1
Rz k∗ (t), z k∗ (t)X = φ(z k∗ (t)) = (L−1 ∗ ∗
k z k (s) + |ψk0 (s)| )ds,
2 2
(3.93)
2 2 t

∀t ≥ 0. Differentiating (3.93) in t and using the self-adjointness of Rk on X and


Eq. (3.85), we obtain

1 2 −2 1 −1 ∗
L−1 ∗ −1 ∗ ∗ 
k Ak z k (t), Lk Rk z k (t) + ν |(Lk Rk z k (t)) (1)| = Lk z k (t) , (3.94)
2 2
2 2
t ≥ 0, which implies, by setting t = 0, that Rk satisfies the following Riccati equa-
tion:
1 2 −2 1 −1 0 2
L−1 0 −1 0 0 
k Ak z k , Lk Rk z k  + ν |(Lk Rk z k ) (1)| = Lk z k  , ∀z k ∈ H.
2 0
(3.95)
2 2
74 3 Stabilization of Periodic Flows in a Channel

Let us notice that we have proved that Rk z k0 ∈ H for all z k0 ∈ H ; hence

L−2
k Rk z k ∈ H (0, 1).
0 4

Thus the third derivative of L−2 0


k Rk z k in the Riccati algebraic equation (3.95) makes
sense.
In the end, using the classical Datko’s theorem, we finally obtain the exponential
decay of the solution, once the feedback
 
ψk0 = −ν L−2
k Rk Lk vk0 (1) for 0 < |k| ≤ S

and
ψk0 ≡ 0 for |k| > S

are plugged into the equations.


For the case k = 0 and l = 0, we can easily argue as in the previous case and rely
on the results in the above section to deduce that once plugged into the feedback
  
ψkl = −ν L−2
kl Rkl Lkl vkl (1) for 0 < k 2 + l 2 ≤ S

and 
ψkl ≡ 0 for k 2 + l 2 > S,

the corresponding solution to the Eq. (3.5) with k = 0 and l = 0 is exponentially


decaying. Here Rkl is a self-adjoint linear operator satisfying a Riccati equation
similar to (3.95).
Putting together all we have obtained above, we deduce the following Riccati-
based stabilization result for our problem:
Theorem 3.2 Once the feedback

Ψ (t, x, z) := ψkl (t)eikx eilz ,
k,l∈Z

where


⎪ −ν(L−2 
k Rk Lk vk0 (t)) (1) for 0 < |k| ≤ S, l = 0,

⎪ for |k| > S, l = 0,
⎨ 0
ψkl (t) := 0 for k = 0, l ∈ Z,√

⎪ −2 

⎪ −ν(L R kl kl vkl (t)) (1)
L for k, l = 0 and √k 2 + l 2 ≤ S,
⎩ kl
0 for k, l = 0 and k 2 + l 2 > S,

is plugged into system (3.4), this yields its exponential stability. Here Rk , Rkl : X →
X are linear, self-adjoint operators satisfying Riccati-type equations of the form
3.3 Design of a Riccati-Based Feedback 75

1 2 −2 1 −1 0 2
L−1 0 −1 0 0 
k Ak z k , Lk Rk z k  + ν |(Lk Rk z k ) (1)| = Lk z k  , ∀z k ∈ H,
2 0
2 2
and
1 2 −2 1 −1 0 2
L−1 0 −1 0 0 
kl Akl z kl , Lkl Rkl z kl  + ν |(Lkl Rkl z kl ) (1)| = Lkl z kl  , ∀z kl ∈ H,
2 0
2 2

respectively, where X is the dual of the space H 2 (0, 1) ∩ H01 (0, 1).

3.4 Comments

The local stabilization theory for the Navier–Stokes equations by feedback control
supported on the boundary of a domain filled with liquid was created in Fursikov
[59, 60]. In particular, the feedback theory was developed in Fursikov [58]. The idea
to construct boundary controllers was based on previous results on stabilization via
internal distributed feedbacks. Roughly speaking, it consists in extending the domain
O by a thin strip around the boundary, obtaining thereby the new domain O ∪ Oε ,
and considering Oε to be the support of the internal feedback. Once the internal
feedback is constructed for the new extended system, one may let ε go to zero. The
boundary controller for the former problem is the trace of the solution to the latter
problem.
Another method to deal with boundary actuators is to lift them into the equations
via an auxiliary operator acting on functions defined on the boundary with values on
the whole domain. (This method is used as well in the results presented in this book.)
Then, via the Riccati-based method, boundary stabilizing actuators were constructed
in Barbu et al. [19]. Other results on this subject were obtained in Raymond [118,
119].
In [12], Barbu designed an explicit feedback law of proportional type, called
oblique, that acts almost normal to the boundary. It has the following form:


N
∂Φ j
u=η μ j y, ϕ j  + α(x)n(x) , x ∈ ∂O,
j=1
∂n

where α is an arbitrary continuous function with zero circulation on ∂O, that is,

α(x)d x = 0.
O

One can easily see that

u(t, x) · n(x) = α(x), ∀x ∈ ∂O,


76 3 Stabilization of Periodic Flows in a Channel

and
C
| cosu(t, x), n(x)| ≥ 1 − , ∀x ∈ ∂O,
C + |α(x)|

where C > 0 is independent of α. This means that the stabilizable boundary con-
troller u can be chosen almost normal to ∂O. However, for technical reasons the
limit case |α| = +∞, that is, u normal, is excluded from the discussion. Moreover,
again the feedback is under the requirement of linear independence of the system of
eigenfunctions.
The general domain O is replaced by the particular infinite channel form, and
via the backstepping technique, stabilizing feedbacks for the Poiseuille profile were
designed by Krstic and his coworkers in [1, 29, 124]. In all these works, in order
to achieve stability, all the components of the velocity field are controlled on the
boundary. Other results are obtained by Triggiani in [117].
From the practical point of view, to implement a tangential control into the system
is quite demanding, both from the technological point of view and the cost. The most
feasible case is that in which the control acts only on the normal component of the
velocity field, the so-called wall-normal controller. Results in this direction were
obtained by Barbu in [9, 12] and for the stochastic case in Barbu [11]. More results
on the stabilization of the Navier–Stokes flows can be found in the book Barbu [10].
The results presented in this chapter provide as well normal boundary stabilizers,
and concerning the construction of a proportional type feedback stabilizer, they
appeared in Munteanu [103], and concerning the Riccati-based approach, in the
author’s work [95, 96].
We mention that we were not able to deduce the local stability of the full nonlinear
Navier–Stokes system, because of the normal boundary conditions. More precisely,
in trying to reduce the pressure from the nonlinear system, the usual trick is to apply
the Leray projector. However, due to the nontangential conditions, this cannot be
done.
Chapter 4
Stabilization of the Magnetohydro-
dynamics Equations in a Channel

Here we consider again a channel flow. But in addition to the assumptions of the
previous chapter, we assume that the incompressible fluid is electrically conducting
and affected by a constant transverse magnetic field. This kind of flow was first
investigated both experimentally and theoretically by Hartmann [67]. The governing
equations are the magnetohydrodynamics equations (MHD, for short), which are a
coupling between the Navier–Stokes equations and the Maxwell equations.

4.1 The Magnetohydrodynamics Equations


of an Incompressible Fluid

The 2-D MHD equations are



⎪ ρ(ut − νΔu + uux + vvy ) + CCx − CBy = −px ,



⎨ ρ(vt − νΔv + uvx + vvy ) + BBy − BCx = −py ,
Bt − μσ
1
ΔB + uBx + vBy − Bux − Cuy = 0, (4.1)


⎪ Ct − μσ ΔC + uCx + vCy − Bvx − Cvy = 0,

1

ux + vy = 0, Bx + Cy = 0, t ≥ 0, (x, y) ∈ R × (−L, L).

Here (u, v) is the velocity field, p is the scalar pressure, and (B, C) is the mag-
netic field. The positive constants ρ, ν, μ, and σ represent the fluid mass density,
the kinematic viscosity, the magnetic permeability, and the electrical conductivity,
respectively; 2L is the distance between the walls.
These equations are of huge importance, and they are used in the study of magneto-
fluids such as plasmas, liquid metals, salt water, and electrolytes.

© Springer Nature Switzerland AG 2019 77


I. Munteanu, Boundary Stabilization of Parabolic Equations, Progress in
Nonlinear Differential Equations and Their Applications 93,
https://doi.org/10.1007/978-3-030-11099-4_4
78 4 Stabilization of the Magnetohydrodynamics Equations in a Channel

The fully developed steady state of (4.1), the Hartmann–Poiseuille profile, which
we are going to stabilize, is given by
 
∗ 1 1 cosh(Hay∗ )
û(y ) = 1− , v̂ ≡ 0
Ha tanh(Ha) cosh(Ha)
(4.2)
y∗ 1 sinh(Hay∗ )
B̂ = − + , Ĉ ≡ B0 ,
Ha Ha sinh(Ha)

σ
where y∗ := Ly , Ha := B0 L ρν . For later purposes, we notice that


1 e−Hay


|(û + B̂) | = − + ≤ 2, y∗ ∈ [−1, 1]. (4.3)
Ha sinh(Ha)
y∗

Here B0 is the constant external applied magnetic field.


As one can easily see, the MHD equations, in comparison to the Navier–Stokes
equations, are far more complex. Thus, the whole effort from now on is to reduce, in
various ways, their complexity. A first simplification is to define the dimensionless
variables
x 1 v0 t 1
x∗ := , (u∗ , v∗ ) := (u, v), t ∗ := , (B∗ , C ∗ ) := (B, C),
L v0 L b0

with

L2 σ
v0 := − p̂x and b0 := −μL2 p̂y ,
ρν ρν

(p̂ is the pressure corresponding to the equilibrium solution (4.2)). For the sake of
simplicity we drop the star notation. However, we keep in mind that now we are
dealing with the above variables.
Again we assume 2π -periodicity with respect to the x-coordinate of the velocity
field, the magnetic field, and the pressure. In addition, we impose that the magnetic
Prandtl number of the fluid, i.e., Prm := νμσ , be equal to one. Such a periodic MHD
channel flow does not directly correspond to a specific laboratory fluid. It is, however,
often studied as an approximation to torus devices of plasma-controlled fusion, such
as the Tokamak and the reversed field pinch. Numerical simulations have shown that
turbulence may appear in the movement of this kind of flow; that is, the flow may
become unstable.
It is easily seen that Prm = 1 implies N = R = Rm = 1, after rescaling as nec-
essary. So the linearization of system (4.1) around the equilibrium profile (4.2),
supplemented with the boundary conditions, has the form
4.1 The Magnetohydrodynamics Equations of an Incompressible Fluid 79


⎪ ut − Δu + ûux + vûy + B0 Cx − B0 By − B̂y C = px ,



⎪ vt − Δv + ûvx + B̂B + B̂By − B̂Cx = py ,



⎪ Bt − ΔB + ûBx + B̂y v − B̂ux − B0 uy − ûy C = 0,



⎪ Ct − ΔC + ûCx − B̂vx − B0 vy = 0,

⎪ u + v = 0, B + C = 0,
⎨ x y x y

⎪ u(t, x + 2π, y) = u(t, x, y), v(t, x + 2π, y) = v(t, x, y), (4.4)



⎪ B(t, x + 2π, y) = B(t, x, y), C(t, x + 2π, y) = C(t, x, y),



⎪ p(t, x + 2π, y) = p(t, x, y),



⎪ u(t, x, −1) = u(t, x, 1) = v(t, x, −1) = 0, v(t, x, 1) = Ψ (t, x),



⎪ x, −1) = B(t, x, 1) = C(t, x, −1) = 0, C(t, x, 1) = Ξ (t, x),

⎩ B(t,
t ≥ 0, x ∈ R, y ∈ (−1, 1),

and initial data u0 , v0 , B0 , C 0 . Here Ψ and Ξ are the boundary controllers, which
means that both the normal components of the velocity field and the magnetic field
are controlled on the upper wall. Of course, from the practical point of view, it would
have been more convenient to control only the wall-normal velocity. Unfortunately,
this is not possible with the algorithm from Chap. 2, because, in trying to show
the unique continuation property (see Lemma 4.2 below) related to a vector-valued
operator, one cannot prove that both components of the unstable eigenvector are
nonzero. Rather, a weaker result is available, saying that both components cannot
vanish simultaneously. Consequently, both the velocity and the magnetic field must
be controlled.
As in the previous chapter, we take advantage of the 2π -periodicity and decompose
(4.4) into Fourier modes. We get the following infinite system, indexed by k ∈ Z:


⎪ (uk )t − (−k 2 uk + uk ) + ik ûuk + û vk + ikB0 ck − B0 bk − B̂ ck = ikpk ,



⎪ (vk )t − (−k 2 vk + vk ) + ik ûvk + B̂ bk + B̂bk − ik B̂ck = pk ,

⎪ (b ) − (−k 2 b + b ) + ik ûb + B̂ v − ik B̂u − B u − û c = 0,
⎨ k t k k k k k 0 k k

⎪ (ck )t − (−k 2 ck + ck ) + ik ûck − ik B̂vk − B0 vk = 0, (4.5)



⎪ ikuk + vk = 0, ikbk + ck = 0, t ≥ 0, y ∈ (−1, 1),




⎩ bk (−1) = bk (1) = ck (−1) = 0, uk (−1) = uk (1) = vk (−1) = 0,

vk (1) = ψk , ck (1) = ξk ,

with initial data uk0 , vk0 , b0k , dk0 .



Here  denotes the derivative with respect to y, i.e., ∂y .
Stabilization of (4.4) is equivalent to stabilization of (4.5) at each level k ∈ Z.
When k = 0, we put ψ0 ≡ ξ0 ≡ 0, and after some straightforward computations, we
deduce that

u0 (t)2 + v0 (t)2 + b0 (t)2 + c0 (t)2


(4.6)
≤ Ce−αt (u00 2 + v00 2 + b00 2 + c00 2 ), t ≥ 0,

for some positive constants C, α.


80 4 Stabilization of the Magnetohydrodynamics Equations in a Channel

Since we have taken care of k = 0, from now on we will consider only k = 0.


System (4.5) still looks complicated, since it involves five unknowns (with only
four equations). Of course, one may try to reduce the pressure in the same manner
as we did in the previous chapter. We will do this, but before that, we aim to reduce
the number of the field’s unknowns as well. To this end, let us set

S1k := uk + bk , S2k := vk + ck , D1k := uk − bk , D2k := vk − ck , and


0
S1k := uk0 + b0k , S2k
0
:= vk0 + ck0 , D1k
0
:= uk0 − b0k , D2k
0
:= vk0 − ck0 .

Then we add the first equation to the third one of (4.5), and the second equation to
the fourth one of (4.5). In this way, we obtain the two-equation system
  + û D + B̂ D + ik(B c − B̂u ) = ikp ,
(S1k )t − (−k 2 S1k + S1k ) + ik ûS1k − B0 S1k 2k 2k 0 k k k
(S2k )t − (−k S2k + S2k ) + ik ûS2k + ik B̂S2k − B0 vk + B̂ bk + B̂bk = pk .
2 

Then we reduce the pressure from it and use the divergence-free conditions to find
that
  

(−S2k + k 2 S2k )t + S2k + B̂S2k − [2k 2 + ik D̂]S2k
(4.7)
− [ik D̂ + k 2 B0 ]S2k

+ [k 4 + ik 3 D̂]S2k + ik[(Ŝ  D2k ] = 0.

Here and in the following,

Ŝ := û + B̂ and D̂ := û − B̂.

We do the same for the differences. More precisely, we subtract the third equation
from the first one of (4.5), the fourth equation form the second one of (4.5), and
reduce the pressure as before to arrive at
 
 
(−D2k + k 2 D2k )t + D2k − B0 D2k − [2k 2 + ik Ŝ]D2k − [ik ŝ − k 2 B0 ]D2k

(4.8)
+ [k 4 + ik 3 Ŝ]D2k + ik[D̂ S2k ] = 0.

Hence by (4.7) and (4.8), we get that (4.5) is equivalent to


⎧    

⎪ (−S2k + k 2 S2k )t + S2k + B̂S2k − [2k 2 + ik D̂]S2k



⎪ − [ik D̂ + k 2 B0 ]S2k 
+ [k 4 + ik 3 D̂]S2k + ik[(Ŝ  D2k ] = 0,


⎨   

(−D2k + k 2 D2k )t + D2k − B0 D2k − [2k 2 + ik Ŝ]D2k − [ik ŝ − k 2 B0 ]D2k





⎪ + [k 4 + ik 3 Ŝ]D2k + ik[D̂ S2k ] = 0,

⎪    
⎪ S2k (−1) = S2k (1) = S2k (−1) = D2k
⎪ (−1) = D2k (1) = D2k (−1) = 0,

S2k (1) = ψk := ψk + ξk , D2k (1) = ψk := ψk − ξk ,
S D

(4.9)
and the initial data S2k0
:= vk0 + ck0 , D2k 0
:= vk0 − ck0 . Thus we have reduced the five-
unknown problem (4.5) to the two-unknown problem (4.9).
4.1 The Magnetohydrodynamics Equations of an Incompressible Fluid 81

To write the above system in an abstract form, we introduce the linear operators
Lk : D(Lk ) ⊂ H × H → H × H and Fk : D(Fk ) ⊂ H × H → H × H , defined as

−S  + k 2 S  2
Lk (S D)T := , D(Lk ) = H 2 (−1, 1) ∩ H01 (−1, 1) (4.10)
−D + k 2 D

(here (· ·)T means the transpose matrix) and

Fk (S D)T (4.11)
  

S + B0 S − [2k 2 + ik D̂]S  − [ik D̂ + k 2 B0 ]S  + [(k 4 + ik 3 D̂]S + ik[Ŝ  D]
:=   ,
D − B0 D − [2k 2 + ik Ŝ]D − [ik Ŝ  − k 2 B0 ]D + [(k 4 + ik 3 Ŝ]D + ik[D̂ S]
 2
D (Fk ) = H 4 (−1, 1) ∩ H02 (−1, 1) ,

respectively. We will denote by Lk and by Fk the differential forms of the operators


Lk and Fk , respectively (that is, we do not take into account the domain of definition,
but only the form).
Moreover, we define the operator
 
Ak := Fk L−1 −1
k , D(Ak ) = (S D) : Lk (S D) ∈ D(Fk ) .
T T

Regarding the operator −Ak , we may prove the following lemma, arguing similarly
as in the proof of Lemma 3.2.

Lemma 4.1 The operator −Ak generates a C0 -analytic semigroup on H × H , and


for each λ ∈ ρ(−Ak ) (the resolvent set of −Ak ), (λI + Ak )−1 is compact. More-
over, there exists M > 0 such that σ (−Ak ) ⊂ {λ ∈ C : λ < 0} , ∀|k| > M . Here
σ (−Ak ) is the spectrum of −Ak .

Lemma 4.1 says that for all |k| > M , we may take ψkS ≡ ψkD ≡ 0, since at these
levels the system is stable. Therefore, it remains to stabilize the system (4.9) for
0 < |k| ≤ M only. Besides this, Lemma 4.1 guarantees that the operator −Ak has
a countable set of eigenvalues, denoted by {λkj }∞ j=1 (each repeated according to its
multiplicity); and there is only a finite number Nk of eigenvalues for which λkj ≥
0, j = 1, . . . , Nk , the unstable eigenvalues. Finally, let
 ∞  ∞
ϕjk := (ϕ1jk ϕ2jk )T and ϕjk∗ := (ϕ1jk∗ ϕ2jk∗ )T
j=1 j=1

denote the corresponding eigenvectors of the operator −Ak and its adjoint −A∗k ,
respectively. For the sake of simplicity, we assume that the unstable eigenvalues are
simple. Hence, we may suppose that they are arranged such that

λk1 < λk2 < · · · < λkNk .


82 4 Stabilization of the Magnetohydrodynamics Equations in a Channel

This assumption implies also that the systems {ϕjk }Nj=1


k
and {ϕjk∗ }Nj=1
k
may be chosen
to be biorthogonal, that is,

ϕik , ϕjk∗  = δij , i, j = 1, . . . , Nk ,

with δij the Kronecker symbol.


However, one may apply the present stabilizing algorithm to the semisimple case
of eigenvalues as well (see Chap. 2), but since it might be difficult to follow the
computations, we will not develop this problem here.
Furthermore, as in Lemma 3.3, a “unique continuation”-type result for the eigen-
vectors of the dual operator −A∗k of −Ak can be obtained, though there is a major
difference between the operator −Ak introduced in (3.22) and the present −Ak .
Namely, the latter acts on vectors. Let us consider the counterpart of (3.34), which
reads as

(λLk + F∗k )(ϕ1 ϕ2 )T = 0, t > 0,
(4.12)
(ϕ1 ϕ2 )T (−1) = (ϕ1 ϕ2 )T (1) = (ϕ1 ϕ2 )T (−1) = (ϕ1 ϕ2 )T (1) = 0.

Now our goal is to show that

(ϕ1 ϕ2 )T (1) = (0 0)T .

In other words, ϕ1 (1) and ϕ2 (1) cannot vanish simultaneously for every eigenvector
corresponding to an unstable eigenvalue of the adjoint operator. This is equivalent
to the fact that there exists μk ∈ C such that

(ϕ1 ) (1) + μk (ϕ2 ) (1) = 0,

for all the eigenvectors corresponding to the unstable eigenvalues. This is exactly
what we prove below. Then, with the help of this μk , we will construct our controller
(see (4.19) below).
Similarly as in Lemma 3.3, we will show that if (ϕ1 ϕ2 )T solves (4.12) plus
  T
(ϕ1 ϕ2 ) (1) = (0 0)T , then necessarily

(ϕ1 ϕ2 )T ≡ (0 0)T ,

which is in contradiction to the fact that (ϕ1 ϕ2 )T is an eigenvector. The symmetric


nature of Eq. (4.12) will be decisive again. However, this time, the symmetry is more
curious, namely if (ϕ1 ϕ2 )T (y) solves (4.12), then (ϕ2 ϕ1 )T (−y) solves (4.12) as well.
Although this is much weaker than what we had in Lemma 3.3, it is enough to prove
our claim. Replacing the eigenvector as necessary, we may gain an additional null
boundary condition, that is, (ϕ1 ϕ2 )T (−1) = (0 0)T .
Taking into account Eqs. (4.26)–(4.27) below, we see that in this case, the corre-
sponding D (given in (2.17)) reads as
4.1 The Magnetohydrodynamics Equations of an Incompressible Fluid 83

D (ϕ1 ϕ2 )T = (ϕ1 ) (1) + μk (ϕ2 ) (1).

The lemma below says nothing but the fact that assumption (A5) from Chap. 2 holds
in the present case.

Lemma 4.2 Let 0 < |k| ≤ M . Then there exists μk ∈ C such that

(ϕ1jk∗ ) (1) + μk (ϕ2jk∗ ) (1) > 0, j = 1, . . . , Nk .

Proof Below, we will understand by ∧ and by ∨ the logical symbols for “and” and
“or,” respectively.
Fix k ∈ Z such that 0 < |k| ≤ M . For the sake of simplicity of notation, let us set
λ := λkj and ϕ := ϕjk∗ . First, consider the case in which ϕ ∗ is a classical eigenvector
corresponding to the eigenvalue λ, i.e.,

−A∗k ϕ ∗ = λϕ ∗ .

Hence ϕ := L−1 ∗
k ϕ solves
(λLk + F∗k )ϕ = 0, (4.13)

where F∗k is the dual of the operator Fk defined by (4.11).


Given a function f : [−1, 1] → C, we denote by fˇ : [−1, 1] → C the function

fˇ (y) := f (−y), y ∈ [−1, 1].

ˇ
It is easy to check that Ŝ = D̂, and that

(λLk + F∗k )(ϕ1 ϕ2 )T = 0 ⇒ (λLk + F∗k )(ϕ̌2 ϕ̌1 )T = 0.

Set
(ψ1 ψ2 )T := (ϕ1 + ϕ̌2 ϕ2 + ϕ̌1 )T .

From above, we deduce that

(λLk + F∗k )(ψ1 ψ2 )T = 0. (4.14)


   
We have two cases: either (ψ1 (1) = 0) ∨ (ψ2 (1) = 0) or ψ1 (1) = ψ2 (1) = 0.
 
Assume first that ψ1 (1) = ψ2 (1) = 0. Since ψ̌1 = ψ2 , we also have that

ψ1 (−1) = 0. Defining

Ψ := −ψ1 + k 2 ψ1 ,

we have from (4.14) that


84 4 Stabilization of the Magnetohydrodynamics Equations in a Channel

−Ψ + B0 Ψ  + (k 2 − ik D̂ + λ)Ψ + ik D̂ (ψ1 + ψ̌1 ) = 0 in (−1, 1),
(4.15)
Ψ  (−1) = Ψ  (1) = 0.

Scalar multiplying (4.15) by Ψ and taking the real part of the result, we get
 1
 2  
Ψ  + (k + λ)Ψ  + ik
2 2
D̂ (ψ1 + ψ̌1 ) Ψ dy = 0. (4.16)
−1

Simple computations show that


 
Ψ  2 = ψ1 2 + 2k 2 ψ1 2 + k 4 ψ1 2 .

It follows from (4.16), via Poincaré’s inequality

π 2 v2 ≤ v 2 , ∀v ∈ H 2 (−1, 1) ∩ H01 (−1, 1),

and relation (4.3) that



ψ1 2 + (k 4 + 2k 2 π 2 )ψ1 2 + (k 2 + λ)Ψ 2
 1  1

≤ ik D̂ (ψ1 + ψ̌1 ) Ψ dy = ik Ŝ  (ψ̌1 + ψ1 ) Ψ̌ dy (4.17)
−1 −1
≤ 4ψ1 2 + k 2 Ψ 2 .

Recall that λ is an unstable eigenvalue. Consequently, λ ≥ 0. Moreover,

k 4 + 2π 2 k 2 − 4 > 0, ∀k ∈ Z∗ .

Thus relation (4.17) implies that ψ1 ≡ ψ2 ≡ 0. Therefore, we see that ϕ1 = −ϕ̌2 .


With this in hand, we claim that
 
(ϕ1 (1) = 0) ∨ (ϕ2 (1) = 0).

Indeed, assume for the sake of contradiction that


 
ϕ (1) = ϕ2 (1) = 0.

Since ϕ1 = −ϕ̌2 , we get as well that ϕ1 (−1) = 0.

Hence setting Φ := −ϕ1 + k 2 ϕ1 , we obtain by (4.13) that

−Φ + B0 Φ  + (k 2 − ik D̂ + λ)Φ + ik D̂ (ϕ1 + ϕ̌1 ) = 0,
(4.18)
Φ  (−1) = Φ  (1) = 0.

Similarly as above, scalar multiplying (4.18) by Φ and taking the real part of the
result, we obtain that
4.1 The Magnetohydrodynamics Equations of an Incompressible Fluid 85

ϕ1 = ϕ2 = 0.

This is in contradiction to the fact that ϕ = (ϕ1 ϕ2 )T is an eigenvector. We conclude


 
that in the case ψ1 (1) = ψ2 (1) = 0, we necessarily have that
 
(ϕ1 (1) = 0) ∨ (ϕ2 (1) = 0).

Now if we take
(χ1 χ2 )T := (ϕ1 − ϕ̌1 ϕ2 − ϕ̌2 )T
 
and argue as before, we get that in the case χ1 (1) = χ2 (1) = 0, we necessarily have
that
 
(ϕ1 (1) = 0) ∨ (ϕ2 (1) = 0).

From the above, we get the following cases:


(1) [ψ1 (1) = ψ2 (1) = 0] ∧ [χ1 (1) = χ2 (1) = 0]. This implies that
 
(ϕ1 (1) = 0) ∨ (ϕ2 (1) = 0).

ϕ (1) 
This means that for all θ ∈ C∗ such that θ = − ϕ1 (1) (in the case that ϕ2 (1) = 0,
2
otherwise for all θ ∈ C∗ ), we have
 
ϕ1 (1) + θ ϕ2 (1) = 0.

(2) [ψ1 (1) = ψ2 (1) = 0] ∧ [(χ1 (1) = 0) ∨ (χ2 (1) = 0)]. Again the first one implies
that
 
(ϕ1 (1) = 0) ∨ (ϕ2 (1) = 0),

ϕ (1)
which means, as before, that for all θ ∈ C∗ such that θ = − ϕ1 (1) (in the case
2

that ϕ2 (1) = 0, otherwise for all θ ∈ C∗ ), we have
 
ϕ1 (1) + θ ϕ2 (1) = 0.

(3) [(ψ1 (1) = 0) ∨ (ψ2 (1) = 0)] ∧ [χ1 (1) = χ2 (1) = 0]. The second one implies

ϕ (1) 
as before that for all θ ∈ C∗ such that θ = − ϕ1 (1) (in the case that ϕ2 (1) = 0,
2
otherwise for all θ ∈ C∗ ), we have
 
ϕ1 (1) + θ ϕ2 (1) = 0.

(4) [(ψ1 (1) = 0) ∨ (ψ2 (1) = 0)] ∧ [(χ1 (1) = 0) ∨ (χ2 (1) = 0)]. By the fact that
(ψ1 + χ1 ψ2 + χ2 )T = 2(ϕ1 ϕ2 )T , we get as before that there exists infinitely
many θ ∈ C such that
86 4 Stabilization of the Magnetohydrodynamics Equations in a Channel

 
ϕ1 (1) + θ ϕ2 (1) = 0.

We conclude that in every case, there exists some μk ∈ C such that


 
ϕ1 (1) + μk ϕ2 (1) = 0,

and the conclusion follows immediately for this case.


Now we treat the general case of generalized eigenvectors. Let us consider the
chain (ϕ11 ϕ21 )T , . . . , (ϕ1J ϕ2J )T for some J ∈ N such that

(λ + A∗k )(ϕ11 ϕ21 )T = 0

and
(λ + A∗k )j (ϕ1j ϕ2j )T = 0, j = 2, 3, . . . , J .

Concerning (ϕ11 ϕ21 )T , we may show, as in the above lines, that there exists some
μ such that
 
ϕ11 (1) + μϕ21 (1) = 0.

Then if needed, replacing (ϕ1j ϕ2j )T , j = 2, 3, . . . , J , by

(ϕ1j ϕ2j )T + qj (ϕ11 ϕ21 )T ,

with qj > 0 properly chosen such that


   
[ϕ1j (1) + qj ϕ11 (1)] + μ[ϕ2j (1) + qj ϕ21 (1)] = 0,

we easily complete the proof. 

4.2 The Stabilizing Proportional Feedback

The main theorem of this section amounts to saying that the following feedback laws,
once plugged into the system (4.4) yield its stability. Let us define

1 
Ψ (t, x) := (1 + μk )U k (t)eikx ,
2
0<|k|≤M
1  (4.19)
Ξ (t, x) := (1 − μk )U k (t)eikx .
2
0<|k|≤M

(Notice that with our notation, ψk = (1 + μk )U k (t) and ξk = (1 − μk )U k (t), and


so ψkS = U k and ψkD = μk U k , for 0 < |k| ≤ M .)
4.2 The Stabilizing Proportional Feedback 87

Here, μk , 0 < |k| ≤ M , are the constants given in Lemma 4.2:


⎛ ⎞⎛ k ⎞
 Lk (vk + ck vk − ck )T (t), ϕ1k∗  l1 !
⎜ L (v + c v − c ) T
(t), ϕ k∗ ⎟ ⎜ k ⎟

U k (t) := − Λksum Ak ⎜ k k k k k 2 ⎟ ⎜ l2 ⎟
⎝ ........................................... ⎠,⎝ ... ⎠ , (4.20)
Lk (vk + ck vk − ck ) (t), ϕNk 
T k∗ k
l Nk N k

with Λksum := Λkγ k + · · · + Λkγ k , for


1 Nk

⎛ ⎞
1
γik +λk1
0 ... 0
⎜ 0 1
... 0 ⎟
⎜ γik +λk2 ⎟
Λkγ k := ⎜ ⎟ , i = 1, . . . , Nk , (4.21)
i ⎝ ................................... ⎠
0 0 . . . γ k +λ 1
k
i Nk

for some 0 < γ1k < · · · < γNkk , Nk real constants sufficiently large that relation (4.26)
below holds. Moreover,

lik := (ϕ1ik∗ ) (1) + μk (ϕ2ik∗ ) (1), i = 1, . . . , Nk ,

which by Lemma 4.2 are positive real numbers;

Ak := (G k1 + G k2 + · · · + G kNk )−1 , (4.22)

where
G ki := Λkγ k G k Λkγ k , i = 1, . . . , Nk , (4.23)
i i

where G k is the matrix


⎛ ⎞
l1k l1k l1k l2k . . . l1k lNk k
⎜ lk lk lk lk . . . lk lk ⎟
G k := ⎜ 2 1 2 2 2 Nk ⎟
⎝ ................................... ⎠ . (4.24)
lNk k l1k lNk k l2k . . . lNk k lNk k

Similarly as in Proposition 2.1, since ljk = 0, j = 1, . . . , Nk , it is possible to show


that the matrix Ak is well defined. Further, Lk is the differential form of the operator
Lk defined in (4.10), and
 2π
vk (t, y) := v(t, x, y)e−ikx dx
0

and  2π
ck (t, y) := C(t, x, y)e−ikx dx.
0
88 4 Stabilization of the Magnetohydrodynamics Equations in a Channel

Finally, ·, ·Nk stands for the classical scalar product in CNk .

Theorem 4.1 Once the feedbacks Ψ, Ξ defined in (4.19) are plugged into the linear
equation (4.4), we obtain the asymptotic exponential decay of the corresponding
solution to the closed-loop system (4.4).

Proof The stability will be shown at each level 0 < |k| ≤ M of the system (4.9)
(whose stability is equivalent to the stability of the system (4.5) and consequently to
the stability of the system (4.4)), since, as we saw earlier, the other levels are stable.
So let us fix some 0 < |k| ≤ M . In order to simplify the notation, since k is fixed, in
what follows we will omit the index k.
The corresponding closed-loop system (4.9) reads as follows:
⎧ 
⎨ L (S2 D2 )T ) t + F (S2 D2 )T = 0, y ∈ (−1, 1),
(S D )T (1) = (U μU )T , (4.25)
⎩ 2 2 T
(S2 D2 ) (−1) = (S2 D2 )T (−1) = (S2 D2 )T (1) = 0.

In order to lift the boundary conditions into the equations, aiming to use the spectral
decomposition method, we introduce the Dirichlet operator as in (2.16) in Chap. 2:
let α ∈ C, and denote by Dγ α := w the solution to the equation

⎨ F w + 2  λ L w, φ ∗ φ + γ L w = 0, y ∈ (−1, 1),
N

j j j
(4.26)

⎩ j=1
w(1) = (α μα)T , w(−1) = w (−1) = w (1) = 0.

For later purposes, let us compute L Dγ α, ϕm∗ , for some 1 ≤ m ≤ N . To this


end, we have from (4.26) scalar multiplied by ϕm∗ that

0 = F w, ϕm∗  + 2λm L w, ϕm∗  + γ L w, ϕm∗ 


∗ 
= −α(ϕ1m ) (1) + L w, A∗ ϕm∗  + (γ + 2λm )L w, ϕm∗ .
∗ 
) (1) + μ(ϕ2m

This yields that


α α
L Dγ α, φm∗  = (φ ∗ ) (1) + μ(φ2m
∗ 
) (1) = lm , 1 ≤ m ≤ N .
γ + λm 1m γ + λm
(4.27)
In particular, the corresponding D given in (2.17) reads as

D (ϕ1 ϕ2 )T = (ϕ1 ) (1) + μ(ϕ2 ) (1).

Next, we choose N constants 0 < γ1 < γ2 < · · · < γN large enough that

equation (2.1), corresponding to each γi , i = 1, . . . , N , has a solution. (4.28)

Now let us introduce the feedbacks


4.2 The Stabilizing Proportional Feedback 89

⎛ ⎞ ⎛ 1 ⎞
 L (S2 D2 )T (t), ϕ1∗  γi +λ1 1
l !
⎜ L (S2 D2 )T (t), ϕ ∗  ⎟ ⎜ 1 l2 ⎟

Ui (t) := − A ⎝ 2 ⎟ , ⎜ γ i +λ 2 ⎟
............................. ⎠ ⎝ .......... ⎠

L (S2 D2 ) (t), ϕN 
T 1
γi +λN N
l
⎛ ∗
⎞ ⎛ ⎞ N (4.29)
 L (S2 D2 ) (t), ϕ1 
T
l1 !
⎜ L (S2 D2 )T (t), ϕ ∗  ⎟ ⎜ l2 ⎟
= − Λγi A ⎜ 2 ⎟ ⎜ ⎟
⎝ .............................. ⎠ , ⎝ ... ⎠ , t ≥ 0,
L (S2 D2 )T (t), ϕN∗  lN N

for i = 1, 2, . . . , N . It is clear that U given in (4.20) is equal to U1 + · · · + UN .


For later computations, we need to show that
⎛ ⎞ ⎛ ⎞
L Dγi Ui , ϕ1∗  L (S2 D2 )T , ϕ1∗ 
⎜ L Dγi Ui , ϕ ∗  ⎟ ⎜ L (S2 D2 )T , ϕ ∗  ⎟
⎜ 2 ⎟ ⎜ 2 ⎟
⎝ .................... ⎠ = −G i A ⎝ .......................... ⎠ , (4.30)
L Dγi Ui , ϕN∗  L (S2 D2 )T , ϕN∗ 

where the G i are introduced in (4.23) above, for i = 1, . . . , N . This is indeed so. We
have, via relation (4.27),

L Dγi Ui , ϕm∗  = Ui L Dγi 1, ϕm∗ 


⎛ ⎞ ⎛ 1
ll

 L (S2 D2 )T , ϕ1∗  (γi +λ1 )γi +λm 1 m !
⎜ L (S2 D2 )T , ϕ ∗  ⎟ ⎜ 1
⎜ (γi +λ2 )γi +λm l2 lm ⎟

= − A⎝ ⎜ 2 ⎟ ,⎜ ⎟ , m = 1, . . . , N ,
.......................... ⎠ ⎝ ........................ ⎠

L (S2 D2 ) , ϕN 
T 1
l l
(γ +λ )γ +λ N m
i N i m N

whence (4.4) follows immediately.


Returning to the linear equation (4.25), we set

z := L[(S2 D2 )T − Dγ1 U1 − · · · − DγN UN ].

Subtracting (4.25) and (4.26), we deduce that


 

N 
N 
N
zt = −Az + 2 λj L Dγi Ui , ϕj∗ ϕj + γi L Dγi Ui − L Dγi Ui .
i,j=1 i=1 i=1 t
(4.31)
In terms of the new variable z, the feedbacks Ui , i = 1, . . . , N , have the form
⎛ ⎞ ⎛ 1 ⎞
z(t), ϕ1∗ 
 l
γi +λ1 1 !
1 ⎜ z(t), ϕ ∗  ⎟ ⎜ 1 l2 ⎟
Ui (t) = − A ⎜ 2 ⎟ ⎜ γi +λ2 ⎟
⎝ ............... ⎠ , ⎝ ............ ⎠ . (4.32)
2
z(t), ϕN∗  1
l
γi +λN N N
90 4 Stabilization of the Magnetohydrodynamics Equations in a Channel

To see this, we do the following straightforward computations:


⎛ ⎞ ⎛ 1 ⎞
 z(t), ϕ1∗  l
γi +λ1 1 !
1 ⎜ z(t), ϕ ∗  ⎟ ⎜ 1 l2 ⎟
A⎜ ⎟ ⎜ i 2 ⎟ γ +λ
⎝ ............... ⎠ , ⎝ ........... ⎠
2
2
z(t), ϕN∗  1
γi +λN N
l
N
⎛ ∗
⎞ ⎛ 1 ⎞
 L (S2 D2 ) , ϕ1 
T
γi +λ1 1
l !
1 ⎜ L (S2 D2 )T , ϕ ∗  ⎟ ⎜ 1 l2 ⎟
= A⎜ 2 ⎟ ⎜ γi +λ2
⎝ ........................... ⎠ , ⎝ ............ ⎠

2
L (S2 D2 )T , ϕN∗  1
γi +λN N
l
N
⎛ ∗
⎞ ⎛ 1 ⎞
 L D U
γj j , ϕ 1  γi +λ1
l 1 !
1 ⎜ L Dγj Uj , ϕ ∗  ⎟ ⎜ 1 l2 ⎟
N
− A⎝ ⎜ 2 ⎟ , ⎜ γ i +λ 2 ⎟
2 j=1 ...................... ⎠ ⎝ ............. ⎠

L Dγj Uj , ϕN  1
γi +λN N
l
N
(taking into account relation (4.30))
⎛ ⎞ ⎛ 1 ⎞
 L (S2 D2 )T , ϕ1∗  γi +λ1 1
l !
1 ⎜ L (S2 D2 )T , ϕ ∗  ⎟ ⎜ 1
l ⎟
= ⎜
[I + A(G 1 + · · · + G N )] A ⎝ 2 ⎟ , ⎜ γ i +λ 2
2 ⎟
2 ......................... ⎠ ⎝ ............ ⎠
L (S2 D2 )T , ϕN∗  1
γi +λN N
l
N
= −Ui ,

since A = (G 1 + · · · + G N )−1 . Moreover, as in (4.30), we have now


⎛ ⎞ ⎛ ⎞
L Dγi Ui , ϕ1∗  z(t), ϕ1∗ 
⎜ L Dγi Ui , ϕ ∗  ⎟ ⎜ ∗ ⎟
2 ⎟ = − G A ⎜ z(t), ϕ2  ⎟ , k = 1, . . . , N .
⎜ 1
⎝ ...................... ⎠ k ⎝ ............. ⎠ (4.33)
2
L Dγi Ui , ϕN∗  z(t), ϕN∗ 

Taking advantage of relation (4.33), after successive scalar multiplications of


equation (4.31) by ϕ1∗ ,…, ϕN∗ , we get that


N
1
N
1
N
Zt = ΛZ − ΛG i AZ − γi G i AZ + G i AZt , t ≥ 0,
i=1
2 i=1 2 i=1
⎛ ⎞
z(t), ϕ1∗ 
⎜ z(t), ϕ ∗  ⎟
where Z := ⎜ 2 ⎟
⎝ ............... ⎠ and Λ := diag(λ1 , λ2 , . . . , λN ).
z(t), ϕN∗ 
Recalling that A = (G 1 + · · · + G N )−1 , we see that the above relation yields
4.2 The Stabilizing Proportional Feedback 91


N
Zt = −γ1 Z + (γ1 − γi )G i AZ , t ≥ 0. (4.34)
i=2

Closely arguing as in the proof of Theorem 2.1, by (4.34) we get the exponential
decay of the unstable part of the solution. Then using that the stable part operator
is exponentially decaying, we conclude the proof. Further details are omitted, since
they mimic the proof of Theorem 2.1. 

Remark 4.1 As in Sect. 3.3, based on the above feedback proportional stabilizer,
one may develop a Riccati-based one. Since the ideas are almost the same, we will
not develop this problem here (see [98] for details).

4.3 Comments

As mentioned above, due to the complexity of the problem, there are fewer results on
boundary stabilization for the MHD equations than for the Navier–Stokes equations.
One of the main reasons is that the usual procedure for transforming a boundary con-
trolled system into a system with controllers distributed in a subdomain, by extend-
ing the initial domain to a slightly larger domain (a technique that was presented in
the comments section from the previous chapter), are not directly applicable in the
MHD case. The special feature of the MHD system is that the equation satisfied by
the magnetic field needs to have a divergence-free right-hand side. Using localized
controllers, after applying the Leray projector, the controller usually becomes dis-
tributed in the whole domain, and the above transformation to the internal controller
case fails to work. That is why a special form of the internal controller for the second
extended equation must be used, which is done in Lefter [84].
However, it turns out that in the special case of the Hartmann MHD framework,
with the domain O=channel, it is easier to derive boundary stabilizers directly. One
of the reasons is that, assuming further that the fluid is with low value of the magnetic
Reynolds number Rm , the induced magnetic field is much weaker than the applied
one, and therefore, it can be neglected, so that we obtain the simplified magneto-
hydrodynamics equations (SMHD for short; see [127]). The SMHD equations are
nothing but some linear perturbations of the Navier–Stokes equations, and they look
like this:

⎨ ut − ν(uxx + uyy ) + uux + vuy + N B02 u = −px ,
vt − ν(vxx + vyy ) + uvx + vvy = −py , (4.35)

ux + vy = 0,

where B0 is the constant external applied magnetic field and N > 0 is the Stuart (or
interaction) number. Therefore, it is clear that the control design algorithm developed
for the Navier–Stokes equations in a channel is expected to work equally well for the
SMHD in the channel case. This is indeed true, and we refer for the backstepping
92 4 Stabilization of the Magnetohydrodynamics Equations in a Channel

technique to the work of Krstic and his coworkers [114, 127], while for the Riccati-
based method, we refer to the author’s work [97]. Other related results are [83, 115]
and the references therein.
In this chapter, we have considered arbitrary values for Rm , solving the problem
in the more general case than SMHD, namely the case of Prandtl number equal to
one. The results concerning the proportional feedback were published in Munteanu
[102], while the Riccati case in the author’s work [98].
Chapter 5
Stabilization of the Cahn–Hilliard System

In this chapter, the Cahn–Hilliard system will be investigated. This system describes
the process of phase separation, whereby the two components of a binary fluid spon-
taneously separate and form domains pure in each component. This phenomenon
appears in many engineering and medical applications.

5.1 Presentation of the Problem

Let O ⊂ R3 be open, bounded, connected, with sufficiently smooth boundary Γ =


∂O, split as Γ = Γ1 ∪ Γ2 , where Γ1 has nonzero surface measure. We consider the
boundary-controlled problem that consists of the Cahn–Hilliard system

⎨ (θ + l0 ϕ)t − Δθ = 0 in (0, ∞) × O,
ϕt − Δμ = 0 in (0, ∞) × O, (5.1)

μ = −νΔϕ + F  (ϕ) − γ0 θ in (0, ∞) × O,

supplemented with the boundary conditions


⎧ ∂ϕ ∂μ

⎨ ∂n = ∂n = 0 on (0, ∞) × Γ,

(5.2)

⎪ ∂θ ∂θ
⎩ = u on (0, ∞) × Γ1 , = 0 on (0, ∞) × Γ2 ,
∂n ∂n
and with the initial data

θ (0) = θo , ϕ(0) = ϕo in O. (5.3)

© Springer Nature Switzerland AG 2019 93


I. Munteanu, Boundary Stabilization of Parabolic Equations, Progress in
Nonlinear Differential Equations and Their Applications 93,
https://doi.org/10.1007/978-3-030-11099-4_5
94 5 Stabilization of the Cahn–Hilliard System

In system (5.1)–(5.3), the variables θ, ϕ and μ represent the temperature, the order
parameter, and the chemical potential, respectively; ν, l0 , γ0 are positive constants
with some physical meaning; F  is the derivative of the double-well potential

(ϕ 2 − 1)2
F(ϕ) = , (5.4)
4
and n is the unit outward normal vector to the boundary Γ . Finally, u is the control
acting only on the temperature flux, on one part of the boundary, namely Γ1 . The
equations (5.1)–(5.3) are known as conserved phase field system, due to the mass
conservation of ϕ, which is obtained by integrating the second equation in (5.1) in
space and using the boundary condition for μ from (5.2).
Let (ϕ̂, θ̂ ) ∈ H 4 (O) × H 2 (O) be a stationary solution of the uncontrolled system
(5.1)–(5.3), i.e.,

⎪ 
⎨ νΔ ϕ̂ − ΔF (ϕ̂) = −Δθ̂ = 0 in O,
2

∂ ϕ̂ ∂Δϕ̂ ∂ θ̂ (5.5)

⎩ = = = 0 on Γ.
∂n ∂n ∂n

(For a discussion of the existence of stationary solutions, see [17, Lemma A1].)
We emphasize that different stationary profiles correspond to different types of
phase separation.
We prefer to make a function transformation in (5.1), namely

σ := α0 (θ + l0 ϕ), (5.6)

with α0 > 0 chosen such that


γ0
= α0 l0 =: γ > 0, (5.7)
α0

that is,

γ0
α0 = . (5.8)
l0

Writing the system (5.1)–(5.3) in the variables ϕ and σ and using (5.7) and the
notation

l := γ0 l0 , (5.9)

we obtain the equivalent nonlinear system


5.1 Presentation of the Problem 95
⎧ 
⎪ ϕt + νΔ ϕ − ΔF (ϕ) − lΔϕ + γ Δσ = 0, in (0, ∞) × O,
⎪ 2



⎪ σt − Δσ + γ Δϕ = 0, in (0, ∞) × O,



⎪ ∂ϕ

⎨ = 0, on (0, ∞) × Γ,
∂n (5.10)

⎪ ∂Δϕ γ0 ∂σ

⎪ = − u, = α0 u, on (0, ∞) × Γ1 ,



⎪ ∂n ν ∂n

⎪ ∂Δϕ ∂σ

⎩ = = 0, on (0, ∞) × Γ2 .
∂n ∂n
Then we continue with the classical step, namely reducing the problem to the null
stabilization, via the fluctuation variables

y := ϕ − ϕ̂, z := σ − σ̂ , (5.11)
yo := ϕo − ϕ̂, z o := σo − σ̂ , (5.12)

where, clearly, σ̂ := α0 (θ̂ + l0 ϕ̂) and σo := α0 (θo + l0 ϕo ). And so system (5.1)–(5.3)


transforms into the equivalent null boundary stabilization problem

⎪ yt + νΔ2 y − Δ[F  (y + ϕ̂) − F  (ϕ̂)] − lΔy + γ Δz = 0, in (0, ∞) × O,



⎪ z − Δz + γ Δy = 0, in (0, ∞) × O,

⎪ t



⎪ ∂y

⎨ ∂n = 0, in (0, ∞) × Γ,

∂Δy γ0 ∂z

⎪ = − u, = α0 u, on (0, ∞) × Γ1 ,

⎪ ∂n ν ∂n



⎪ ∂Δy ∂z

⎪ = = 0, on (0, ∞) × Γ2 ,

⎪ ∂n ∂n


y(0) = yo , z(0) = z o .
(5.13)
This is a fourth-order differential system due to the presence of Δ2 . We have met
fourth-order differential equations before, in Chap. 3, regarding the Navier–Stokes
equations in a channel, and in Chap. 4 for the magnetohydrodynamics equations in a
channel. The main difference between those equations and (5.13) is that now we are
dealing with nonlinearities under the Laplace operator. Consequently, the linearized
system will not be subtracted from (5.13) in the classical way. More precisely, we
set
Fl := F∞  + l, (5.14)

where

1
 :=
F∞ F  (ϕ̂(ξ ))dξ, (5.15)
mO O

with m O the Lebesgue measure of O, and we introduce the linear system


96 5 Stabilization of the Cahn–Hilliard System

⎪ yt + νΔ2 y − Fl Δy + γ Δz = 0 in (0, ∞) × O,



⎪ z t − Δz + γ Δy = 0 in (0, ∞) × O,





⎪ ∂y

⎨ ∂n = 0 in (0, ∞) × Γ,

∂Δy γ0 ∂z (5.16)

⎪ = − u, = α0 u on (0, ∞) × Γ1 ,

⎪ ∂n ν ∂n



⎪ ∂Δy ∂z

⎪ = = 0 on (0, ∞) × Γ2 ,

⎪ ∂n ∂n


y(0) = yo , z(0) = z o .

We remark that the above system is not the linearization of (5.13), since the replace-
ment of the nonlinear term is different from the usual one.

5.1.1 Stabilization of the Linearized System

Set A : D(A) ⊂ L 2 (O) × L 2 (O) → L 2 (O) × L 2 (O),



νΔ2 − Fl Δ γ Δ
A := , (5.17)
γΔ −Δ

having the domain



∂y ∂Δy ∂z
D(A) = (y z)T ∈ L 2 (O ) × L 2 (O ) : A (y z)T ∈ L 2 (O ) × L 2 (O ), = = = 0 on Γ
∂n ∂n ∂n

endowed with its graph norm. By the regularity of O, it follows that

D(A) ⊂ H 4 (O) × H 2 (O);

also we notice that A is self-adjoint.

Proposition 5.1 The operator A is quasi-m-accretive on L 2 × L 2 , that is, λI + A


is m-accretive for some λ > 0, and its resolvent is compact.
1
Proof We set V = D(A) × D(A 2 ). We compute the scalar product

A(y z)T , (φ ψ)T = (νΔy · Δφ + Fl ∇ y · ∇φ − γ ∇z · ∇φ)d x
O

+ (∇z · ∇ψ − γ ∇ y · ∇ψ)d x,
O

for all (φ ψ)T ∈ V. We see easily that A is bounded from V to V  , the dual of V .
Indeed, we have
5.1 Presentation of the Problem 97

A(y z)T V  = sup A(y z)T , (φ ψ)T ≤ C (y z)T V .
(φ ψ)T ∈V, (φ ψ)T ) V ≤1

Moreover,

A(y z)T , (y z)T = (ν|Δy|2 + Fl |∇ y|2 − 2γ ∇ y · ∇z + |∇z|2 )d x
O
1
≥ ν Δy 2 − (|Fl | + 2γ 2 ) ∇ y 2 + ∇z 2
2
1 1
= ν Δy 2 + z 2H 1 (O ) − ν y 2 − z 2 − a0 ∇ y 2 ,
2 2

with a0 := |Fl | + 2γ 2 . Next, by the interpolation

ν C2
a0 ∇ y 2 ≤ C Δy y ≤ Δy 2 + y 2 ,
2 2ν
we deduce from the above that

A(y z)T , (y z)T ≥ C1 (y z)T 2V − C2 (y z)T 2 , for all (y z)T ∈ V. (5.18)

The above relations lead to the fact that A is quasi-m-accretive, which means that

A + C2 I : V → V 

is coercive, thus surjective. Consequently, (λI + A)−1 is well defined for λ ≥ C2 .


Let ( f 1 f 2 )T ∈ L 2 × L 2 , and define (λI + A)−1 ( f 1 f 2 )T = (y z)T . It is readily seen
that (5.18) implies

(y z)T 2V ≤ C ( f 1 f 2 )T 2 , for λ ≥ C2 ,

and some C > 0, whence it follows that (λI + A)−1 (E) is relatively compact when-
ever E is bounded in L 2 × L 2 . (For more details, see [17, Proposition 2.1].) 
∞
Therefore, A has a countable set λ j j=1 of real eigenvalues and a complete set
of corresponding eigenvectors. Moreover, all the eigenspaces are finite-dimensional,
and by repeating each eigenvalue according to its multiplicity, we have that

λ1 ≤ λ2 ≤ λ3 ≤ . . . and lim λ j = +∞. (5.19)


j→∞


We note that zero is an eigenvalue, and it is of multiplicity 2, since 2m1O (1 1)T

and 2m1O (−1 1)T are eigenvectors for it. By (5.19), the number of nonpositive
eigenvalues is finite, i.e., for some N ∈ N, we have that
98 5 Stabilization of the Cahn–Hilliard System

λ j < 0, j = 1, 2, . . . , N − 2, λ N −1 = λ N = 0 and λ j > 0 for j > N . (5.20)

For the sake of simplicity, we assume that

(H1 ) : Each negative eigenvalue is simple. (5.21)

(Of course, one can consider the general case as well, namely the semisimple case,
arguing similarly
as in the
∞last part of Chap. 2, and still obtain a stabilization result.)
Denote by (ϕ j ψ j )T j=1 the corresponding eigenvectors, that is,

⎧ 2
⎪ νΔ ϕ j − Fl Δϕ j + γ Δψ j = λ j ϕ j , in O,


γ Δϕ j − Δψ j = λ j ψ j , in O, (5.22)

⎩ ∂ϕ j = ∂Δϕ j = ∂ψ j = 0, on Γ,

∂n ∂n ∂n
for all j = 1, 2, . . . . ∞
By the self-adjointness of A, we may assume that the system (ϕ j ψ j )T j=1 forms
an orthonormal basis in L 2 (O) × L 2 (O) that is orthogonal in D(A).
The control design procedure developed in Chap. 2 requires further knowledge
about the eigenvectors of the linear operator A. We refer to the validation of the
decisive hypothesis (A5) regarding the unique continuation of the eigenvectors. It is
clear by the form of the operator A, which involves the Laplace operator, that the
eigenvectors (ϕ j ψ j )T can be associated with the eigenfunctions of the Neumann– ∞
Laplacian (this is indeed true; see (5.23) below). In this light, let us denote by μ j j=1
∞
and by e j j=1 the eigenvalues and the normalized eigenfunctions of the Neumann–
Laplacian, respectively, i.e.,

∂e j
Δe j = μ j e j in O and = 0 on Γ,
∂n
to which we simply refer as the Laplace operator Δ in the sequel. We
∞ know that
μ j ≤ 0 for all j = 1, 2, . . . , μ j → −∞ for j → ∞. Moreover, e j j=1 forms an
orthonormal basis in L 2 (O) that is orthogonal in H 1 (O).
We have enough experience (from the previous chapters) to realize that the Neu-
mann boundary conditions yield that hypothesis (A5) reads as follows: the trace of
the eigenvector (ϕ j ψ j )T , j = 1, 2, . . . , N , is not identically zero on Γ1 . In any
case, due to the boundary conditions in (5.16) and the definition of the Neumann
map Dη in (5.27) below, in the present case we have that

D (ϕ ψ)T = ψ;

see (5.28) below. Hence our task is considerably simplified, since we must show
the nonvanishing of the second component of the eigenvector only. More exactly, it
5.1 Presentation of the Problem 99

is enough to show that only ψ j , j = 1, 2, . . . , N , does not vanish on Γ1 . To this


end, we will adopt the idea from Chap. 3 (or Chap. 4), namely a priori we choose the
eigenvectors (ϕ j ψ j )T , j = 1, 2, . . . , N , such that ψ j = 0 on Γ1 . For j = N − 1, N ,
there is nothing to prove, since we have already set (see after (5.19))
 
1 1
(ϕ N −1 ψ N −1 )T = (1 1)T and (ϕ N ψ N )T = (−1 1)T .
2m O 2m O

Therefore, the result below concerns only the negative eigenvalues.


Lemma 5.1 For all j = 1, 2, . . . , N − 2, there exists an eigenfunction ek of the
Laplace operator corresponding to the eigenvalue μk that satisfies 0 > μk ≥ Fl −γ
2

ν
such that
γ μk
ψj ≡  ek . (5.23)
(γ μk ) + (λ j + μk )2
2

In this case, the eigenvalue λ j is a root of the second-degree polynomial

X 2 + [(Fl + 1)μk − νμ2k ]X − νμ3k + (Fl − γ 2 )μ2k .

In particular, we deduce that necessarily Fl − γ 2 ≤ 0, in order to have unstable


negative eigenvalues for the operator A; and

ψ j ≡ 0 on Γ1 , ∀ j = 1, 2, . . . , N . (5.24)

Proof Let j ∈ {1, 2, . . . , N − 2} . For the sake of simplicity of notation, we drop the
indices j, that is, we use the notation

(ϕ ψ)T = (ϕ j ψ j )T and λ = λ j .

We have that (ϕ ψ)T satisfies


⎧ 2
⎪ νΔ ϕ − Fl Δϕ + γ Δψ = λϕ, in ,


γ Δϕ − Δψ = λψ, in , (5.25)

⎩ ∂ϕ = ∂Δϕ = ∂ψ = 0, on Γ.

∂n ∂n ∂n
∞
Let us decompose ϕ and ψ in the basis e j j=1 of the eigenfunctions of the
Neumann Laplacian as

 ∞

ϕ= ϕ jej, ψ = ψ jej.
j=1 j=1
100 5 Stabilization of the Cahn–Hilliard System

Successively scalar multiplying equation (5.25) by e j , j = 1, 2, . . ., using the bound-


ary conditions and Green’s formula, we deduce that

(νμ2j − Fl μ j − λ)ϕ j + γ μ j ψ j = 0,
γ μ j ϕ j − (λ + μ j )ψ j = 0, ∀ j ∈ N∗ .

For all j, this is a second-order linear homogeneous system, with the unknowns
ϕ j , ψ j . Computing the determinant of the matrix of the system, we get that μ j must
satisfy

− ν(μ j )3 + (Fl − γ 2 − νλ)(μ j )2 + λ(Fl + 1)μ j + λ2 = 0, (5.26)

in order not to have ϕ j = ψ j = 0.


The polynomial

−ν X 3 + (Fl − γ 2 − νλ)X 2 + λ(Fl + 1)X + λ2 ,

may have at most two distinct negative roots (since the free term is λ2 > 0). Denote
them by X 1 < 0, X 2 < 0. Assume that we have

μk = μk+1 = · · · = μk+M = X 1 , and μs = μs+1 = · · · = μs+L = X 2 ,

i.e., X 1 is an eigenvalue of the Laplace operator of multiplicity M + 1, and X 2 is an


eigenvalue of the Laplace operator of multiplicity L + 1. Of course, it may happen
that only one of X 1 , X 2 is an eigenvalue of the Laplace operator (we see that at least
one is an eigenvalue in order not to have ϕ ≡ ψ ≡ 0, which is in contradiction to the
fact that (ϕ ψ)T is an eigenvector). In that case, all the discussion below can be easily
revised, and one would arrive at similar conclusions. However, we will consider the
“worst-case scenario,” namely that X 1 , X 2 are both eigenvalues of the Lapalcian.
One can easily check that each vector from the systems
⎧ T ⎫

⎪ λ + γ ⎪

⎨  X 1
eq 
X 1
eq ,⎬
X1 := (γ X 1 ) + (λ + X 1 )
2 2 (γ X 1 ) + (λ + X 1 )
2 2 ,

⎪ ⎪

⎩ ⎭
q = k, k + 1, . . . , k + M
⎧ T ⎫

⎪ λ + X2 γ X2 ⎪

⎨ eq  eq ,⎬
X2 := (γ X 2 ) + (λ + X 2 )
2 2 (γ X 2 ) + (λ + X 2 )
2 2 ,

⎪ ⎪

⎩ ⎭
q = s, s + 1, . . . , s + L

has unit norm and satisfies equation (5.25), i.e., it is an eigenvector for A correspond-
ing to the eigenvalue λ. Notice that X1 ∪ X2 contains M + L + 2 orthogonal unit
vectors, which in particular are linearly independent.
5.1 Presentation of the Problem 101

Furthermore, arguing as above, let (ϕ̃ ψ̃)T satisfy (5.25). Then necessarily

ϕ̃ = ϕ̃ k ek + ϕ̃ k+1 ek+1 + · · · + ϕ̃ k+M ek+M + ϕ̃ s es + ϕ̃ s+1 es+1 + · · · + ϕ̃ s+L es+L ,


ψ̃ = ψ̃ k ek + ψ̃ k+1 ek+1 + · · · + ψ̃ k+M ek+M + ψ̃ s es + ψ̃ s+1 es+1 + · · · + ψ̃ s+L es+L ,

that is, ϕ̃ and ψ̃ are linear combinations of the eigenfunctions

{ek , ek+1 , . . . , ek+M , es , es+1 , . . . , es+L } .

Taking into account that the system {ek , ek+1 , . . . , ek+M , es , es+1 , . . . , es+L } is lin-
early independent, plugging the above ϕ̃ and ψ̃ into relations (5.25), and recalling
that μk = · · · = μk+M = X 1 , μs = μs+1 = · · · = μs+L = X 2 , we deduce that

λ + X1 q
ϕ̃ q = ψ̃ , q = k, k + 1, . . . , k + M,
γ X1
λ + X2 q
ϕ̃ q = ψ̃ , q = s, s + 1, . . . , s + L .
γ X2

Hence
 T  T
λ+X 1 λ+X 1
(ϕ̃ ψ̃)T = ψ̃ k e
γ X1 k
ek + · · · + ψ̃ k+M e
γ X 1 k+M
ek+M
 T  T
λ+X 2 λ+X 2
+ψ̃ s e
γ X2 s
es + · · · + ψ̃ s+L e
γ X 2 s+L
es+L .

Or equivalently,
√  T
(λ+X 1 )2 +(γ X 1 )2 k λ+X 1 γ X1
(ϕ̃ ψ̃)T = γ X1
ψ̃ √ ek √ e k + ···
(λ+X 1 )2 +(γ X 1 )2 (λ+X 1 )2 +(γ X 1 )2
√  T
(λ+X 1 )2 +(γ X 1 )2 k+M λ+X 1 γ X1
+ γ X1
ψ̃ √ ek+M √ ek+M
2 (λ+X 1 ) +(γ X 1 )
2 2 (λ+X 1 ) +(γ X 1 )
2

√  T
(λ+X 2 )2 +(γ X 2 )2 s γ X2
+ γ X2
√ λ+X2 2
ψ̃ es √ es + ···
(λ+X 2 ) +(γ X 2 )2 (λ+X 2 )2 +(γ X 2 )2
√  T
(λ+X 2 )2 +(γ X 2 )2 s+L γ X2
+ γX
ψ̃ √ λ+X2 2 es+L √ es+L .
2 (λ+X 2 ) +(γ X 2 )
2 2 (λ+X 2 ) +(γ X 2 )
2

Thus we obtain that the above (ϕ̃ ψ̃)T may be written as a linear combination of the
vectors from X1 ∪ X2 . In other words, X1 ∪ X2 forms a system of generators for
the subspace of the eigenvectors of the operator A corresponding to the eigenvalue λ.
Recalling that X1 ∪ X2 is linearly independent, we conclude that in fact, X1 ∪ X2
represents an orthonormal basis of this subspace. Consequently, we may choose the
102 5 Stabilization of the Cahn–Hilliard System

eigenvector (ϕ ψ)T to be one of the elements from X1 ∪ X2 , in particular such that


ψ is of the form (5.23).
Let us consider now the identity (5.26) as one of unknown λ. So λ must be a root
of the second-degree polynomial

X 2 + [(Fl + 1)μk − νμ2k ]X − νμ3k + (Fl − γ 2 )μ2k .

We observe that the above polynomial has a negative root, namely λ, and a nonnega-
tive one. Indeed, assume for the sake of a contradiction that both roots are negative.
Then by Viète’s relations, we deduce that

−νμ3k + (Fl − γ 2 )μ2k > 0 and (Fl + 1)μk − νμ2k > 0.

Adding the second relation multiplied by −μk ≥ 0 to the first we deduce

−μ2k (1 + γ 2 ) > 0,

which is absurd. Therefore, again by Viète’s relations, we get that necessarily

−νμ3k + (Fl − γ 2 )μ2k ≤ 0,

Fl −γ 2
and therefore, μk must satisfy μk ≥ ν
. 

We continue following the algorithm in Chap. 2. We transform the boundary con-


trol problem into an internal-type one by lifting the boundary conditions into equa-
tions. In order to do this, let us define the so-called Neumann operator as follows:
given a ∈ L 2 (Γ1 ) and η > 0, we set Dη a := (y z)T , the solution to the system (recall
(2.51) and the fact that λ N −1 = λ N = 0)


⎪ N

⎪ νΔ 2
y − F Δy + γ Δz − 2 λ j (y, z), (ϕ j , ψ j ) ϕ j


l




j=1



⎪ − δ(y, z), (ϕ N , ψ N ) ϕ N + ηy = 0 in O,



⎪ N



⎪ − Δz + γ Δy − 2 λ j (y, z), (ϕ j , ψ j ) ψ j

⎨ j=1
(5.27)

⎪ − δ(y, z), (ϕ N , ψ N ) ϕ N + ηz = 0 in O,



⎪ ∂ y

⎪ = 0 in (0, ∞) × Γ,



⎪ ∂n

⎪ ∂Δy γ0 ∂z



⎪ = − a, = α0 a on (0, ∞) × Γ1 ,

⎪ ∂n ν ∂n


⎩ ∂Δy = ∂z = 0 on (0, ∞) × Γ2 ,

∂n ∂n
5.1 Presentation of the Problem 103

where δ > 0 is such that λ1 , . . . , λ N −1 , λ N + δ are distinct and η is sufficiently large


to ensure the existence of a unique solution to (5.27), defining thereby the map
Dη ∈ L(L 2 (Γ1 ), H 1 (O) × H 1/2 (O)). Easy computations lead to

α0
Dη a, (ϕ j ψ j )T = a, ψ j 0 , j = 1, 2, . . . , N − 1,
η − λj
α0 (5.28)
Dη a, (ϕ N ψ N )T = a, ψ N 0 ,
η − λN − δ

where  · , · 0 stands for the classical scalar product in L 2 (Γ1 ).


Let 0 < η1 < η2 < · · · < η N be N constants sufficiently large such that (5.27) is
well posed for each of them. For future use, we set

Dηi , i = 1, 2, . . . , N , the corresponding solutions of (5.27). (5.29)

Further, set
 
1 1 1 1
ηk := diag , ,..., , , (5.30)
ηk − λ1 ηk − λ2 ηk − λ N −1 ηk − λ N − δ

k = 1, 2, . . . , N , and

N
 S := ηk .
k=1

Moreover, define
Bk := ηk Bηk , k = 1, 2, . . . , N , (5.31)
N
where B is the Gram matrix of the system ψ j |Γ1 j=1 , in L 2 (Γ1 ), i.e.,
⎛ ⎞
ψ1 , ψ1 0 ψ1 , ψ2 0 . . . ψ1 , ψ N 0
⎜ ψ2 , ψ1 0 ψ2 , ψ2 0 . . . ψ2 , ψ N 0 ⎟
B := ⎜ ⎟
⎝ ................................................... ⎠ . (5.32)
ψ N , ψ1 0 ψ N , ψ2 0 . . . ψ N , ψ N 0

Set
(B1 + B2 + · · · + B N )−1 =: A. (5.33)

Then A is well defined. Indeed, this can be shown by arguing similarly as in


Proposition 2.1 and making use of Lemma 5.1, from which we know that for all
j = 1, 2, . . . , N , the traces of ψ j are not identically zero on Γ1 .
104 5 Stabilization of the Cahn–Hilliard System

Now, plugging the feedback


⎛ ⎞ ⎛ ⎞
# (y(t) z(t))T , (ϕ1 ψ1 )T ψ1 (x) $
⎜ (y(t) z(t))T , (ϕ2 ψ2 )T ⎟ ⎜ ψ2 (x) ⎟
u(t) = S A ⎜ ⎟ ⎜ ⎟
⎝ .................................... ⎠ , ⎝ ......... ⎠ , (5.34)
(y(t) z(t))T , (ϕ N ψ N )T ψ N (x) N

into equations (5.16), one may show, similarly as in Theorems 2.6, 2.7, that it achieves
its exponential stability. More exactly, we have the following result, which is com-
mented on in the forthcoming Remark 5.1. Its proof is omitted, since it is similar to
the proof of Theorem 2.6, and it has been repeated several times in previous chapters.

Proposition 5.2 The solution (y, z) to the system




⎪ yt + νΔ2 y − Fl Δy + γ Δz = 0, in (0, ∞) × O,



⎪ z t − Δz + γ Δy = 0, in (0, ∞) × O,




⎪ ∂ y = 0, on (0, ∞) × Γ,



⎪ ∂n

⎪ ⎛ ⎞ ⎛ ⎞

⎪ (y z)T , (ϕ1 ψ1 )T ψ1 $

⎪ #

⎪ ∂Δy γ0 ⎜ (y z)T , (ϕ2 ψ2 )T ⎟ ⎜ ψ2 ⎟

⎪ = −  ⎜ ⎟ ⎜ ⎟


⎪ ∂n ν
S A ⎝ ....................... ⎠ , ⎝ .... ⎠ ,



⎪ (y z)T , (ϕ N ψ N )T ψN

⎪ N

on (0, ∞) × Γ1 (5.35)

⎪ ⎛ ⎞ ⎛ ⎞

⎪ # (y z) T
, (ϕ ψ ) T
ψ1 $

⎪ 1 1

⎪ ∂z ⎜ (y z)T , (ϕ2 ψ2 )T ⎟ ⎜ ψ2 ⎟

⎪ = α0  S A ⎜ ⎟,⎜ ⎟


⎪ ∂n ⎝ ...................................... ⎠ ⎝ ..... ⎠ ,



⎪ (y z)T , (ϕ N ψ N )T ψN

⎪ N



⎪ on (0, ∞) × Γ1 ,





⎪ ∂Δy ∂z

⎪ = = 0, on (0, ∞) × Γ2 ,

⎪ ∂n ∂n


y(0) = yo , z(0) = z o ,

satisfies the exponential decay

(y(t) z(t))T 2 ≤ C1 e−C2 t (yo z o )T 2 , t ≥ 0, (5.36)

for some constants C1 , C2 > 0.

Remark 5.1 From the practical point of view, it is important to describe how one can
compute the first N eigenvectors of the operator A, since the boundary feedback law
5.1 Presentation of the Problem 105

is expressed only in terms of those eigenvectors. In order to do this, we will mainly


rely on the results from the proof of Lemma 5.1.
First of all, one should compute the first, let us say K , eigenvalues and eigenfunc-
tions of the Neumann Laplace operator, i.e.,

∂e j
Δe j = μ j e j , in O; = 0, on Γ ; j = 1, 2, . . . , K ,
∂n
for which
Fl − γ 2
μj ≥ , j = 1, 2, . . . , K .
ν
We have that μ1 = 0 and μi = 0 for i = 2, 3, . . . , K .
Then for each j = 1, 2, . . . , K one should check whether the polynomial

X 2 + [(Fl + 1)μ j − νμ2j ]X − νμ3j + (Fl − γ 2 )μ2j

has a nonpositive root. If it does, we denote it by λ, and this is in fact a nonpositive


eigenvalue of the operator A. Let us assume that we have found N such nonpositive
roots, and denote them by λi , i = 1, 2, . . . , N .
Hence for each i = 1, 2, . . . , N , either λi = 0, in which case one can take
  T    T
1 1 1 1
(ϕi ψi )T = or (ϕi ψi )T = − ,
2m O 2m O 2m O 2m O

or there exists some j ∈ {2, . . . , K } such that the eigenvalue λi can be computed as
a root of the following second-degree polynomial:

X 2 + [(Fl + 1)μ j − νμ2j ]X − νμ3j + (Fl − γ 2 )μ2j .

Then the corresponding eigenvector is given by


 T
λi + μ j γ μj
(ϕi ψi ) =
T
 ej  ej .
(γ μ j )2 + (λi + μ j )2 (γ μ j )2 + (λi + μ j )2

In conclusion, the problem reduces to finding the first K eigenvalues and eigenfunc-
tions of the Neumann Laplace operator and computing the roots of some third-degree
polynomials.

Recalling the notation (5.11), The following result concerning the linearized sys-
tem of (5.10) follows immediately by Proposition 5.2.

Theorem 5.1 The unique solution to the linear system


106 5 Stabilization of the Cahn–Hilliard System


⎪ (θ + l0 ϕ)t − Δθ = 0, in (0, ∞) × O,



⎪ ϕt − Δμ = 0, in (0, ∞) × O,





⎪ μ = −νΔϕ − ϕ − γ0 θ, in (0, ∞) × O,



⎨ ∂ϕ = ∂μ = 0, on (0, ∞) × Γ,

∂n ∂n (5.37)

⎪ ∂θ

⎪ = u =  S AO, J N , on (0, ∞) × Γ1 ,

⎪ ∂n



⎪ ∂θ

⎪ = 0, on (0, ∞) × Γ2 ,



⎪ ∂n

θ (0) = θo , ϕ(0) = ϕo , in O,

satisfies the exponential decay

(ϕ(t) θ (t))T − (ϕ̂ θ̂ )T 2 ≤ C5 e−C6 t (ϕo θo )T − (ϕ̂ θ̂)T 2 , ∀t ≥ 0,

for some positive constants C5 , C6 . Here

O = O(θ, ϕ)
⎛ ⎞
ϕ, ϕ1 + α0 (θ + l0 ϕ), ψ1 − ϕ∞ , ϕ1 − α0 (θ∞ − l0 ϕ∞ ), ψ1
⎜ ϕ, ϕ2 + α0 (θ + l0 ϕ), ψ2 − ϕ∞ , ϕ2 − α0 (θ∞ − l0 ϕ∞ ), ψ2 ⎟
:= ⎜ ⎟
⎝ .......................................................................................... ⎠
ϕ, ϕ N + α0 (θ + l0 ϕ), ψ N − ϕ∞ , ϕ N − α0 (θ∞ − l0 ϕ∞ ), ψ N
(5.38)
and ⎛ ⎞
ψ1
⎜ ψ2 ⎟
J := ⎜ ⎝ ... ⎠ .

ψN

Via a fixed-point argument similar to what we discussed in the previous chapters,


one can deduce from Theorem 5.1 the local boundary stabilization for the nonlinear
system as well.

5.2 Comments

Instead of the Cahn-Hilliard system, it is usually studied its simpler form, the so-
called phase-field system, which reads as
%
θt − kΔθ + laΔϕ + lb(ϕ − ϕ 3 ) − ldθ = 0,
(5.39)
ϕt − aΔϕ − b(ϕ − ϕ 3 ) + dθ = 0, in R+ × O.
5.2 Comments 107

The problem of stabilization of the phase field system has been intensively studied
in the literature using various methods. The Riccati-based approach is used in Barbu
[8], where a stabilizing finite-dimensional feedback controller with compact support
acting only on one component of the system is constructed. The boundary stabiliza-
tion problem was studied, for example, in Chen [45] using the time optimal control
technique, while in Munteanu [99], a proportional boundary feedback is designed
under the constraint that the eigenfunctions are linearly independent.
Concerning the problem of stabilization of systems of Cahn-Hilliard type (5.1)–
(5.3), we mention the work Barbu et al. [17], which constructs an internal stabilizing
feedback, while concerning the boundary stabilization case, the result of this chapter
represents the first result in this direction, and it is based on the ideas in [17]. Other
results related to the control problem associated with the Cahn-Hilliard system are
the sliding mode controls in Barbu et al. [16] and Colli et al. [49], while in Marinoschi
[93], the singular potential case is investigated.
Chapter 6
Stabilization of Equations with Delays

In this chapter, we consider equations with delays. Namely, the derivative of an


unknown function at a certain time depends on the values of the function at previous
times. More exactly, we consider in the model aftereffect phenomena by adding a
memory term. Engineers conclude that actuators, sensors that are involved in feed-
back control, introduce, in addition, delays into the system. That is why from the
control engineering point of view it is of great interest to consider control problems
associated with equations with delays. Furthermore, special kinds of substances,
such as viscoelastic fluids, may also impose such delays. We will prove here that
the proportional feedback, designed in Chap. 2, still ensures stability for this kind of
system.

6.1 Presentation of the Problem

The subject of this chapter is the Dirichlet boundary control problem of the following
evolution integro–partial differential equation:
⎧  t  t

⎪ ∂ = Δy(t, + − + μ k(t − s)y(s, x)ds

⎪ t y(t, x) x) k(t s)Δy(s, x)

⎪ −∞ −∞

+ f (y(t, x)), (t, x) ∈ Q := (0, ∞) × O,

⎪ y(t, x) = u(t, x) on Σ 1 := (0, ∞) × Γ1 ,
⎪ ∂


⎪ y = 0 on Σ := (0, ∞) × Γ2 ,
⎩ ∂n 2
y(t, x) = yo (t, x), (t, x) ∈ (−∞, 0] × O.
(6.1)
The effects of the memory are expressed in the linear time convolution of the functions
Δy(·, ·), respectively y(·, ·), and the memory kernel k(·). The Dirichlet controller u
is applied on Γ1 , while Γ2 is insulated.

© Springer Nature Switzerland AG 2019 109


I. Munteanu, Boundary Stabilization of Parabolic Equations, Progress in
Nonlinear Differential Equations and Their Applications 93,
https://doi.org/10.1007/978-3-030-11099-4_6
110 6 Stabilization of Equations with Delays

As can be seen, in the initial condition, the fourth equation in (6.1), it is assumed
that the function y(t, x) is known for all t ≤ 0. However, y(t, x) does not necessarily
satisfy the equation for negative t. We shall assume for the nonlinear function f that
f (0) = 0 and f  (0) > 0, and choose from the following two hypotheses, which we
have met before:
(i) f ∈ C 1 (R);
(ii) f ∈ C 2 (R), and there exist C1 > 0, q ∈ N, αi > 0, i = 1, . . . , q, when d =
1, 2, and 0 < αi ≤ 1, i = 1, . . . , q, when d = 3, such that
 q

| f  (y)| ≤ C1 |y|αi + 1 , ∀y ∈ R.
i=1

(Here f  stands for the derivative dy


d
f.) Finally, μ is a nonnegative constant.
The instability of the null solution in (6.1) may occur because of the presence
of the nonlinear term f . But since the kernel k will be assumed to be positive, the
instability may also be caused by the presence of the memory term containing the
positive constant μ. Therefore, it makes sense to deal with the stabilization of the
null solution in (6.1).
We set A : D(A) ⊂ L 2 (O) → L 2 (O), defined by

Ay := −Δy, ∀y ∈ D(A),



D(A) = y ∈ H (O) : y = 0 on Γ1 and
2
y = 0 on Γ2 .
∂n
∞ ∞
Let {ϕi }i=1 be an orthonormal basis of eigenfunctions of A. By {λi }i=1 we denote
the corresponding eigenvalues, repeated according to their multiplicity. It is easy to
see that we can rearrange the set of eigenvalues as

0 < λ1 ≤ λ2 ≤ · · · ≤ λi ≤ · · · .,

with λi → ∞ when i → ∞.
For the rest of this chapter, we let · stand for the norm in L 2 (O).
In addition to the above context, we assume as well that
(k) there exists some δ > 0 such that the nonnegative memory kernel

k ∈ C([0, ∞), R+ ) ∩ C 2 ((0, ∞), R+ )

satisfies
k  (t) + 2δk(t) ≤ 0 and k  (t) + 2δk  (t) ≥ 0, ∀t ≥ 0, (6.2)

and eρδ· k· = const., for all 0 ≤ ρ ≤ 1.


Straightforward computations show that this implies that
6.1 Presentation of the Problem 111

k(t) ≤ k(0)e−2ρδt , 0 ≤ ρ ≤ 1, t ≥ 0, (6.3)

and moreover,
d m ρδt
(−1)m m
e k(t) ≥ 0, m = 0, 1, 2,
dt

for all t > 0 and 0 ≤ ρ ≤ 1. Hence by [7, Proposition 4.1], we have that eρδ· k(·)
is a positive kernel, i.e.,
 t  τ 
ρδ(τ −s)
w(τ ) e k(τ − s)w(s)ds dτ ≥ 0, ∀w ∈ L 2 (0, t; R), t ≥ 0,
0 0
(6.4)
for all 0 ≤ ρ ≤ 1.
It should be noted that such kernels k satisfying (6.2) are often considered in
the literature in studying the stability of heat equations with memory (see, for
example, [46, 88]). In fact, the exponential decay of k reflects the fading of the
far history in the model. Besides this, a simple example of a kernel that obeys
M
(6.2) is k(t) = bi e−ai t , t ≥ 0, for some ai , bi > 0, i = 1, . . . , M.
i=1
(o) The initial data yo belongs to the space L 2 (−∞, 0; H 2 (O)).
On setting
 0  0
η(t, x) := k(t − s)Δyo (s, x)ds + μ k(t − s)yo (s, x)ds, (t, x) ∈ Q,
−∞ −∞
(6.5)
assumption (o) leads to the following estimates:
 0 2   0 2
η(t) 2 ≤ 2 k(t − s) Δyo (s) ds +2 μ k(t − s) yo (s) ds
−∞ −∞
(using Schwarz’s inequality and relation (6.3), with ρ = 1)
 0  0  0 
≤2 e−4δ(t−s) ds Δyo (s) 2 ds + μ2 yo (s) 2 ds
−∞ −∞ −∞
 
max 1, μ −4δt
2
≤ e yo L 2 (−∞,0;H 2 (O )) , t ≥ 0.

(6.6)
Therefore, there exists a constant C > 0 such that

η(t) 2 ≤ Ce−δt yo 2L 2 (−∞,0;H 2 (O )) , t ≥ 0, and


 ∞ (6.7)
e2δt η(t) 2 dt ≤ C yo 2L 2 (−∞,0;H 2 (O )) .
0

(N) N ∈ N is a constant sufficiently large that


112 6 Stabilization of Equations with Delays

δ
− λi + f  (0) + < 0 and − λi + μ < 0 for i = N + 1, N + 2, . . . . (6.8)
4
As we have done already in this book, in order to simplify the presentation, we
assume that the first N eigenvalues are simple. Of course, one may argue as in the
second part of Chap. 2 to deal with the general semisimple case.
Accordingly, changing the proportional feedback law provided in Chap. 2 to suit
 ∂ N
our case, we have that B is the Gram matrix of the system ∂n ϕi i=1 in the Hilbert
space L 2 (Γ1 ), with the standard scalar product

g, h0 := f (x)g(x)dσ
Γ1

(σ being the corresponding Lebesgue measure on the boundary Γ1 ). That is,


⎛ ∂ ∂ ∂ ∂ ∂ ∂ ⎞
∂n φ1 , ∂n φ1 0 ∂n φ1 , ∂n φ2 0 . . . ∂n φ1 , ∂n φ N 0
⎜ ∂ φ , ∂ φ  ∂ φ , ∂ φ  ... ∂ φ , ∂ φ  ⎟
B := ⎜ ∂n 2 ∂n 1 0 ∂n 2 ∂n 2 0 ∂n 2 ∂n N 0 ⎟
⎝ ...................................................................... ⎠ . (6.9)
∂ ∂ ∂ ∂ ∂ ∂
∂n φ N , ∂n φ1 0 ∂n φ N , ∂n φ2 0 . . . ∂n φ N , ∂n φ N 0

Then we define the matrices


⎛ ⎞
1
γk −λ1
0 ... 0
⎜ 0 1
... 0 ⎟
Λγk := ⎜ γk −λ2 ⎟
⎝ ............................ ⎠ , k = 1, . . . , N , (6.10)
0 0 . . . γk −λ 1
N


N
Λ S := Λγk (6.11)
k=1

and
A = (B1 + B2 + · · · + B N )−1 , (6.12)

where
Bk := Λγk BΛγk , k = 1, . . . , N . (6.13)

(Recall that by virtue of Example 2.4 and Proposition 2.1, the sum B1 + · · · + B N
is invertible.)
Finally, the feedback laws are
⎛ ⎞ ⎛ 1 ∂ ⎞
y(t), ϕ1  γk −λ1 ∂n 1
ϕ (x)
 ⎟
⎜ y(t), ϕ2  ⎟ ⎜ ∂
⎜ γk −λ2 ∂n ϕ2 (x) ⎟
1

u k (t, x) = A ⎝ ⎟ ,⎜ ⎟ , t ≥ 0, x ∈ Γ1 , (6.14)
.............. ⎠ ⎝ ...................... ⎠ N
y(t), ϕ N  1
γ −λ ∂n N

ϕ (x)
k N
6.1 Presentation of the Problem 113

for k = 1, 2, . . . , N . Here ·, · denotes the standard scalar product in L 2 (O), while


·, · N denotes the standard scalar product in R N . Then we take u to be the sum

u = u1 + u2 + · · · + u N ,

which in a condensed form, can be rewritten as


⎛ ⎞ ⎛ ∂ ⎞
y(t), ϕ1  ϕ
 ∂n 1 
⎜ y(t), ϕ2  ⎟ ⎜ ∂ ϕ ⎟

u = ΛS A ⎝ ⎟ , ⎜ ∂n 2 ⎟ , t ≥ 0. (6.15)
.............. ⎠ ⎝ ....... ⎠ N
y(t), ϕ N  ∂
ϕN∂n

Remark 6.1 For negative time t, the controller u is defined as


⎛ ⎞ ⎛ ∂ ⎞
yo (t), ϕ1  ϕ
 ∂n 1 
⎜ yo (t), ϕ2  ⎟ ⎜ ∂ ϕ ⎟
u(t, x) := Λ S A ⎜ ⎟ , ⎜ ∂n 2 ⎟ .
⎝ ................ ⎠ ⎝ ........ ⎠ (6.16)
N
yo (t), ϕ N  ∂
ϕ
∂n N

Since yo is known, we deduce that for negative time, u is in fact a known function.

6.2 Stability of the Linearized System

The following result amounts to saying that the feedback u given by (6.15) globally
exponentially stabilizes the first-order approximation of (6.1).
Theorem 6.1 Let N ∈ N as in (6.8). Under hypothesis (k), (o), (i), for each
yo ∈ C([0, ∞); L 2 (O)) ∩ L 2 (−∞, 0; H 2 (O)), there exists a unique solution

y ∈ C [0, ∞); L 2 (O) ∩ L 2 (0, ∞; H 1 (O))

to the closed-loop system


⎧  t  t

⎪ ∂ = Δy(t, + − + μ k(t − s)y(s, x)ds

⎪ t y(t, x) x) k(t s)Δy(s, x)ds

⎪ 0 0

+ f  (0)y(t, x) + η(t, x), (t, x) ∈ Q,

⎪ y(t, x) = u(y(t)), on Σ1

⎪ ∂

⎪ y = 0 on Σ2 ,
⎩ ∂n
y(0, x) = yo (0, x), x ∈ O,
(6.17)
which satisfies the exponential decay
δ
y(t) 2 ≤ Ce− 2 t ( yo (0) 2 + yo 2L 2 (−∞,0;H 2 (O )) ), ∀t ≥ 0, (6.18)
114 6 Stabilization of Equations with Delays

for some constant C > 0. Here η is as introduced in (6.5), δ > 0 is as introduced in


(6.2), and the feedback u is given in (6.15). Besides this, there exists c > 0 such that
 ∞  
y(t) 2H 1 (O ) ≤ c yo (0) 2 + yo 2L 2 (−∞,0;H 2 (O )) . (6.19)
0

Proof First, we equivalently rewrite Eq. (6.17) as one with null boundary conditions.
To this end, we introduce, similarly as in (2.16) and (2.18), the map D, as follows:
given β ∈ L 1 (Γ1 ), we denote by Dγ β := y the solution to the equation

⎨ 
N
−Δy − 2 λk y, ϕk ϕk + γ y = 0 in O,
(6.20)
⎩ k=1

y = β on Γ1 , ∂n
y = 0 on Γ2 .

Doing similar computations as in (2.19), we get


 ∂ϕ j
− γ −λ
1
β, ∂n 0
 , j = 1, 2, . . . , N ,
Dγ β, ϕ j  = j
∂ϕ j (6.21)
− γ +λ
1
j
β, ∂n 0
 , j = N + 1, N + 2, . . . .

We let γ1 , . . . , γ N be N positive sufficiently large constants, and set Dγ1 , . . . , Dγ N


for the corresponding solutions to (6.20).
Similarly as in (2.35), we have that
⎛ ⎞ ⎛ ⎞
Dγk u k , ϕ1  y(t), ϕ1 
⎜ Dγk u k , ϕ2  ⎟ ⎜ y(t), ϕ2  ⎟
⎜ ⎟ ⎜ ⎟
⎝ ................. ⎠ = −Bk A ⎝ .............. ⎠ , (6.22)
Dγk u k , ϕ N  y(t), ϕ N 

where the Bk were introduced in (6.13) above, for k = 1, . . . , N .


Let us define


N
z(t, x) := y(t, x) − Dγk u k (t, x), (t, x) ∈ Q,
k=1

and

N
z o (x) := yo (0, x) − Dγk u k (0, x), x ∈ O,
k=1

(u k is given by (6.14)). Then with similar arguments as in (2.36)–(2.37), we have


that the feedback u may be expressed in terms of z only as
6.2 Stability of the Linearized System 115

⎛ ⎞ ⎛ 1 ∂ ⎞
z(t), ϕ  ϕ1
 1
⎜ 1 ∂ ϕ ⎟
γ k −λ 1 ∂n
1 ⎜ z(t), ϕ2  ⎟ ⎟,⎜ 2 ⎟
u k (t, x) = A ⎜ ⎜ γk −λ2 ∂n ⎟ . (6.23)
2 ⎝ ................. ⎠ ⎝ ................. ⎠ N
z(t), ϕ N  1 ∂
γk −λ N ∂n N
ϕ

Finally, as in (6.22), we have now


⎛ ⎞ ⎛ ⎞
Dγk u k , ϕ1  z(t), ϕ1 
⎜ Dγk u k , ϕ2  ⎟ 1 ⎜ z(t), ϕ2  ⎟
⎜ ⎟ ⎜ ⎟
⎝ ................. ⎠ = − 2 Bk A ⎝ ................ ⎠ , k = 1, . . . , N , (6.24)
Dγk u k , ϕ N  z(t), ϕ N 

and Eq. (6.17) may be rewritten in terms of z as follows:


⎧  t  t



⎪ z (t, x) = −Az(t, x) − k(t − s)Az(s)ds + μ k(t − s)z(s)ds


t


0 0

⎪ N  t



⎪ +μ k(t − s)Dγk u k (s)ds


⎨ k=1 0

⎪ 
N

⎪ + (R1 + R2 )( z(t), ϕ1 , . . . , z(t), ϕ N ) + f  (0)z + f  (0) Dγk u k





⎪ 
k=1


t

⎪ + k(t − s)R2 ( z(s), ϕ1 , . . . , z(s), ϕ N ) + η(t), t > 0, x ∈ O,



⎩ 0
z(0, x) = z o (x), x ∈ O,
(6.25)
where


N
R1 ( z, ϕ1 , . . . , z, ϕ N ) := − Dγi u i
i=1 t
(6.26)

N 
N
R2 ( z, ϕ1 , . . . , z, ϕ N ) := −2 λ j Dγi u i , ϕ j ϕ j + γi Dγi u i .
i, j=1 i=1

(One may show that given an initial datum yo ∈ C((−∞, 0]; L 2 (O)) ∩ L 2 (0, ∞; H 1
(O)), there exists a unique solution z ∈ C([0, ∞); L 2 (O)) ∩ L 2 (0, ∞; H 1 (O)) to
the system (6.25), proving thereby the well-posedness of (6.25) and consequently
that of (6.17). See, for instance, [46, Theorem 2.1].)
By virtue of (6.24), using the fact that A is the inverse of the sum of Bk ’s, k =
1, . . . , N , we immediately see that
116 6 Stabilization of Equations with Delays
⎛ ⎞
R1 , ϕ1 
⎜ R1 , ϕ2  ⎟ 1  N
1
⎜ ⎟= Bk AZt = Zt
⎝ .......... ⎠ 2 2
k=1
R1 , ϕ N 
⎛ ⎞
R2 , ϕ1 
⎜ R2 , ϕ2  ⎟ N
1
N
(6.27)
⎜ ⎟=Λ B AZ − γk Bk AZ
⎝ ........... ⎠ k
2 k=1
k=1
R2 , ϕ N 
1
N
1
= ΛZ − γ1 Z + (γ1 − γk )Bk AZ ,
2 2 k=2
⎛ ⎞ ⎛ ⎞
z(t), ϕ1  λ1 0 . . . 0
⎜ z(t), ϕ2  ⎟ ⎜ 0 λ2 . . . 0 ⎟
where we have set Z (t) := ⎜ ⎟ ⎜ ⎟
⎝ ........... ⎠, t ≥ 0, and Λ := ⎝ ............... ⎠ .
z(t), ϕ N  0 0 . . . λN
Taking into account the above relations and projecting the Eq. (6.25) into the space
 N
Xu := lin span ϕ j j=1 , it follows that
 t  t
d
Z (t) = −ΛZ (t) − k(t − s)ΛZ (s)ds + μ k(t − s)Z (s)ds
dt 0 0

μ t 1 d
− k(t − s)Z (s)ds + Z (t) + ΛZ (t)
2 0 2 dt
1
N
1
− γ1 Z (t) + (γ1 − γk )Bk AZ (t)
2 2 k=2
f  (0)
+ f  (0)Z (t) − Z (t)
2
  
1
t N
1
+ k(t − s) ΛZ (s) − γ1 Z (s) + (γ1 − γk )Bk AZ (s) ds + L(t),
0 2 2 k=2
⎛ ⎞
η(t), ϕ1 
⎜ η(t), ϕ2  ⎟
for t > 0, where L(t) := ⎜ ⎟
⎝ .............. ⎠ .
η(t), ϕ N 
Equivalently,

d ! t
Z (t) = −γ1 + f  (0) Z (t) + (μ − γ1 ) k(t − s)Z (s)ds
dt 0
N
+ (γ1 − γk )Bk AZ (t)
k=2
6.2 Stability of the Linearized System 117

  
t 
N
+ k(t − s) (γ1 − γk )Bk AZ (s) ds + 2L(t), t > 0. (6.28)
0 k=2

Now let us scalar multiply (in R N ) Eq. (6.28) by AZ (t), to arrive at (see (2.39)–
(2.40))

1 d "" 12
"2
"
"A Z (t)"
2 dt N
" 1 "2   t 
" " 1 1
≤ [−γ1 + f  (0)] "A 2 Z (t)" + (μ − γ1 ) A 2 Z (t), k(t − s)A 2 Z (s)ds
N 0 N
# ⎡ ⎤ (
 t N
1 1 1 1
+ A 2 Z (t), k(t − s) ⎣ (γ1 − γk )A 2 Bk A 2 A 2 Z (s)⎦ ds
0 k=2 N
+ 2 AZ (t), L(t) N , t > 0.

After multiplying the above equation by e2δt and changing t to τ , we get

d  δτ "
" 1
" 2
"
 " 1
"
" 2
"
e "A 2 Z (τ )" ≤ 2[−γ1 + f  (0) + δ] eδτ "A 2 Z (τ )"
dτ N N
 τ
δτ 21 δ(τ −s) δs 21
+ 2(μ − γ1 ) e A Z (τ ), e k(τ − s)e A Z (s)ds N
0
N )

+2 (γ1 − γk )
k=2
  1 1  τ  1 1 1  *
1 1 2 1 2
× eδτ A 2 Bk A 2 A 2 Z (τ ), eδ(τ −s) k(τ − s)eδs A 2 Bk A 2 A 2 Z (s)ds
0 N
1
 1

+ 4e A N L(τ ) N eδτ A 2 Z (τ ) N , τ > 0.
δτ 2

1
Here we used in the last term the Cauchy–Schwarz inequality and set A 2 N for the
1
induced Euclidean norm of the matrix A 2 . Then integrating the above equation with
respect to τ over (0, t), we deduce that
 t " 1 " 2
1 1 " "
e2δt A 2 Z (t) 2N ≤ A 2 Z (0) 2N + 2[−γ1 + f  (0) + δ] eδτ "A 2 Z (τ )" dτ
0 N
 t  τ
1 1
+ 2(μ − γ1 ) eδτ A 2 Z (τ ), eδ(τ −s) k(τ − s)eδs A 2 Z (s)ds N dτ
0 0
N )

+2 (γ1 − γk )
k=2
 t#  1 1 1  τ  1 1 1
( 
1 2 1 2
eδτ A 2 Bk A 2 A 2 Z (τ ), eδ(τ −s) k(τ − s)eδs A 2 Bk A 2 A 2 Z (s)ds dτ
0 0 N
 t  
1 1
+ 4 A 2 N eδτ L(τ ) N eδτ A 2 Z (τ ) N dτ
0
118 6 Stabilization of Equations with Delays

(using in the third and the fourth term relation (6.4) with ρ = 1,
and the fact that μ − γ1 < 0 and γ1 − γk < 0, k = 2, . . . , N )
 t " 1 " 2
1 " "
≤ A 2 Z (0) 2N + 2[−γ1 + f  (0) + δ] eδτ "A 2 Z (τ )" dτ
0 N
 t  
1 1
+ 4 A 2 N eδτ L(τ ) N eδτ A 2 Z (τ ) N dτ
0
(using, in the last term, Young’s inequality and the fact that − γ1 + f  (0) + δ < 0)
1  t
1
2 8 A 2 2N
≤ A Z (0) N +
2 e2δτ L(τ ) 2N dτ, t > 0.
−γ1 + f  (0) + δ 0

It follows that
1
A 2 Z (t) N
 1 
−2δt 1 8 A 2 2N t
≤e A Z
2 (0) 2N + e 2δτ
η(τ ) dτ
2
−γ1 + f  (0) + δ 0
 1

−2δt 1 8 A 2 2N
≤e A Z 2 + (0) 2N C yo L 2 (−∞,0;H 2 (O )) , t ≥ 0,
2
−γ1 + f  (0) + δ
(6.29)
1
using (6.7). Hence recalling that A 2 is symmetric and positive definite, we see that
(6.29) yields the existence of a constant C > 0 such that
 
Z (t) N ≤ Ce−2δt Z (0) 2N + yo 2L 2 (−∞,0;H 2 (O )) , t ≥ 0. (6.30)

Now taking the norm in (6.28) and using (6.30), (6.3) with ρ = 21 and (6.7), we
deduce that
" "
"d "
" Z (t)"
" dt "
N
  t  
−2δt −δ(t−s) −2δs −δt
≤C e + e e ds + e Z (0) 2N + yo 2L 2 (−∞,0;H 2 (O ))
0
 
−δt
≤ Ce Z (0) 2N + yo 2L 2 (−∞,0;H 2 (O )) , t ≥ 0,
(6.31)

for some constant C > 0.


Next, let us take care of the remaining modes of z, namely z j := z, ϕ j , j =
N + 1, N + 2, . . . . To this end, scalar multiplying Eq. (6.25) by ϕ j , j = N + 1, N +
2, . . . , we get
6.2 Stability of the Linearized System 119
 t
d
z j (t) = −λ j z j (t) + (−λ j + μ + f  (0)) k(t − s)z j (s)
dt 0
N  t
+μ k(t − s) Dγi u i (s), ϕ j ds
i=1 0


N (6.32)
+ R1 (z 1 , . . . , z N ), ϕ j  + [γi + f  (0)] Dγi u i (t), ϕ j 
i=1
N  t
 
N
+ k(t − s) γi Dγi u i (s), ϕ j ds + η(t), ϕ j , t > 0.
i=1 0 k=1

By (6.23), we see that u i , i = 1, 2, . . . , N , depend only on the first N modes of z.


Consequently, by (6.30) and (6.3) with ρ = 21 , we get that

N 
 t 
N
μ k(t − s) Dγi u i (s), ϕ j ds + [γi + f  (0)] Dγi u i (t), ϕ j 
i=1 0 i=1
N 
 t 
N
(6.33)
+ k(t − s) γi Dγi u i (s), ϕ j ds
i=1 0 k=1
 
−δt
≤ Ce Z (0) 2N + yo 2L 2 (−∞,0;H 2 (O )) , t ≥ 0.

On the other hand, R1 depends on the time derivatives of the modes z j , j =


1, 2, . . . , N . So by (6.31), we also have that
 
R1 (z 1 , . . . , z N ), ϕ j  ≤ Ce−δt Z (0) 2N + yo 2L 2 (−∞,0;H 2 (O )) , t ≥ 0. (6.34)

In conclusion, arguing similarly as above, i.e., multiplying (6.32) by e2δt z j (t),


setting τ instead of t, then integrating the result with respect to τ over (0, t) and
taking advantage of (6.4), (6.7), (6.33), and (6.34), we obtain that

  
|z j (t)|2 ≤ Ce−δt z(0) 2 + yo 2L 2 (−∞,0;H 2 (O )) , t ≥ 0.
j=N +1

This together with (6.30) yields that



  
z(t) 2 = |z j (t)|2 ≤ Ce−δt z(0) 2 + yo 2L 2 (−∞,0;H 2 (O )) , t ≥ 0. (6.35)
j=1

N
Recalling that y = z + i=1 Dγi u i , we immediately obtain (6.18), as desired.
120 6 Stabilization of Equations with Delays

To get the H 1 -norm estimate in (6.19), we scalar multiply Eq. (6.25) by z. After
some straightforward computations, using relations (6.35), (6.33), and (6.7), we get
that
 t
z(t) 2 ≤ z o 2 − 2 ∇z(τ ) 2 dτ
0
 ) t  τ  *
−2 ∇z(τ, x) k(τ − s)∇z(s, x)ds dτ d x
O 0 0
 t  τ 
+μ z(τ ) k(τ − s) z(s) ds dτ
0 0
 t   (6.36)
 −δt
+ f (0) z(τ ) dτ + Ce
2
z(0) + yo L 2 (−∞,0;H 2 (O ))
2 2
0
 t
≤ z o 2 − 2 ∇z(τ ) 2 dτ
0
 t  
+ f  (0) z(τ ) 2 dτ + Ce−δt z(0) 2 + yo 2L 2 (−∞,0;H 2 (O )) ,
0

t ≥ 0, using relation (6.4), with ρ = 0. Hence we deduce the existence of a constant


c > 0 such that
 ∞
z(t) 2H 1 (O ) dt ≤ c( z o 2 + yo 2L 2 (−∞,0;H 2 (O )) ). (6.37)
0

N
Recalling that y = z + i=1 Dγi u i , we get immediately that (6.19) holds, as claimed.


6.3 Feedback Stabilization of the Nonlinear System (6.1)

Here we plug the feedback u given by (6.15) into the nonlinear system (6.1) and
show that it locally stabilizes it. More precisely, we have the following theorem.

Theorem 6.2 Let N ∈ N as in (6.8). Under hypotheses (k), (o), (H), (ii), the
feedback controller u given by (6.15) locally exponentially stabilizes the nonlin-
ear system (6.1). More exactly, there exists ρ > 0 sufficiently small that for all
yo ∈ L 2 (−∞, 0; H 2 (O)) with yo (0) 2 + yo 2L 2 (−∞,0;H 2 (O )) ≤ ρ, there exists a
unique solution
y ∈ C([0, ∞); L 2 (O)) ∩ L 2 (0, ∞; H 1 (O))

to the equation
6.3 Feedback Stabilization of the Nonlinear System (6.1) 121
⎧  t  t

⎪ ∂ =Δy(t, + − + μ k(t − s)y(s, x)ds

⎪ t y(t, x) x) k(t s)Δy(s, x)ds
⎨ −∞ −∞
+ f (y(t, x)), (t, x) ∈ (0, ∞) × O,



⎪ y(t, x) = u(y(t)) on Γ1 and ∂n∂
y = 0 on Γ2 ,

y(t, x) = yo (t, x), (t, x) ∈ (−∞, 0] × O,
(6.38)
which is L 2 -exponentially decaying.

Proof As in the proof of Theorem 6.1 (see (6.25)), we rewrite (6.38) via the operators
Dγk , k = 1, 2, . . . , N , as
⎧  t



⎪ z t (t, x) = −Az(t, x) − k(t − s)Az(s)ds



⎪  t
0
 t



⎪ + μ k(t − s)z(s)ds + μ k(t − s)Du(s)ds



⎨ 0 0
+ (R1 + R2 )( z(t), ϕ1 , . . . , z(t), ϕ N ) + f  (0)z + f  (0)Du (6.39)

⎪  t



⎪ + k(t − s)R2 ( z(s), ϕ1 , . . . , z(s), ϕ N ) + η(t)



⎪ 0



⎪ + f (z + Du(z)) − f  (0)(z + Du(z)), t > 0, x ∈ O,


z(0, x) = z o (x), x ∈ O,

where we have set



N
Du = Dγi u i .
i=1

In the sequel, we will consider an approximation of (6.39), to which we will apply


a fixed-point argument in order to show its well-posedness. To this end, consider, for
each M > 0, the following truncation functions JM : H 1 (O) → L 2 (O), defined as

 f (ξ + Du(ξ )) − f  (0)(ξ + Du(ξ )), ξ 1 ≤ M,
JM (ξ ) := 
f M
ξ 1
(ξ + Du(ξ )) − f (0) ξM 1 (ξ + Du(ξ )), ξ 1 > M.

We claim that
JM (ξ ) ≤ C M ξ 2H 1 (O ) , ∀ξ ∈ H 1 (O), (6.40)

for some C M > 0 depending only on M. Indeed, let ξ H 1 (O ) ≤ M. Then since f


satisfies estimate (ii), we have that
⎛ ⎞
q

C
| f (ξ + Du(ξ )) − f  (0)(ξ + Du(ξ ))| ≤ |ξ + Du(ξ )|αi + 1⎠ |ξ + Du(ξ )|2 .
1 ⎝
2
i=1

It follows from the above relation, via Hölder’s inequality, that


122 6 Stabilization of Equations with Delays

2 "
" q
"2
"
"
C1 "
JM (ξ ) 2 ≤ " |ξ + Du(ξ )|αi + 1" ξ + Du(ξ ) 4L 6 (O )
"
2 "
i=1 L (O )
6

 2  q
2
C1 αi
≤ |ξ + Du(ξ )| L 6 (O ) + 1 L 6 (O ) ξ + Du(ξ ) 4L 6 (O )
2 i=1
 2  q
2
C1 αi
= ξ + Du(ξ ) L 6αi (O ) + σ (O) ξ + Du(ξ ) 4L 6 (O ) ,
2 i=1

where using the Sobolev embedding theorem (i.e., L p (O) → H 1 (O), ∀0 < p ≤ 6),
we obtain
 2
q
2
C1
JM (ξ ) ≤2
ξ + Du(ξ ) αHi 1 (O ) + σ (O) ξ + Du(ξ ) 4H 1 (O ) . (6.41)
2 i=1

Since
ξ + Du(ξ ) H 1 (O ) ≤ (1 + C D ) ξ H 1 (O ) , (6.42)

it follows, by (6.41), (6.42), and the fact that ξ H 1 (O ) ≤ M, that

 2 
q
2
C1 (1 + C D )2 αi αi
JM (ξ ) ≤2
(1 + C D ) M + σ (O) ξ 4H 1 (O ) .
2 i=1

 
C1 (1+C D )2 
q
αi αi
Hence taking C M = 2
(1 + C D ) M + σ (O) , we get (6.40), as
i=1
claimed. Likewise for ξ H 1 (O ) > M. We have, as before, that
+   +
+ M M +
+f 
(ξ + Du(ξ )) − f (0) (ξ + Du(ξ ))++
+ ξ H 1 (O ) ξ H 1 (O )
 q + +αi + +2
C1  + M + + M +
≤ + +
(ξ + Du(ξ ))+ + 1 + + (ξ + Du(ξ ))++ .
2 + ξ ξ
i=1
1 H (O ) 1 H (O )

Then as before, applying Hölder’s inequality and Sobolev embeddings, we obtain


that

JM (ξ ) 2
q " "αi 2 " "4
2  " M " " M "
≤ C21 " ξ 1 (ξ + Du(ξ ))" 1 + σ (O) " ξ 1 (ξ + Du(ξ ))" 1 .
i=1 H (O ) H (O ) H (O ) H (O )

" "
" "
Since " ξ M1 (ξ + Du(ξ ))" ≤ (1 + C D )M (see (6.42)), we get
H (O ) H 1 (O )
6.3 Feedback Stabilization of the Nonlinear System (6.1) 123

 2 
q
2
C1
JM (ξ ) 2 ≤ ((1 + C D )M)αi + σ (O) (1 + C D )4 ξ 4H 1 (O ) .
2 i=1

Taking C M as before, we conclude that (6.40) holds in this case, as well.


With similar arguments, one may also obtain the estimate
!
JM (ξ1 ) − JM (ξ2 ) ≤ C M ξ1 H 1 (O ) + ξ2 H 1 (O ) ξ1 − ξ2 H 1 (O ) , (6.43)

for all ξ1 , ξ2 ∈ H 1 (O).


Now let us consider the approximation problem
⎧  t

⎪ (z ) (t, x) = −Az (t, x) − k(t − s)Az M (s)ds

⎪ M t M

⎪ 0

⎪  t  t



⎪ + μ k(t − s)z (s)ds + μ k(t − s)Du M (s)ds


M

⎪ 0 0

⎨ + (R1 + R2 )( z M (t), ϕ1 , . . . , z M (t), ϕ N )
(6.44)

⎪ + f  (0)z M + f  (0)Du M

⎪  t



⎪ + k(t − s)R2 ( z M (s), ϕ1 , . . . , z M (s), ϕ N )



⎪ 0



⎪ + η(t) + JM (z M ), t > 0, x ∈ Ω,


z(0, x) = z o (x), x ∈ Ω,

where u M = u(z M ).
Let us denote by {S(t) : t ≥ 0} the semigroup generated by the evolution
Eq. (6.25), guaranteed by Theorem 6.1, defined as follows: for each initial datum
z o ∈ L 2 (O), we denote by S(t)z o , t ≥ 0, the solution to (6.25). In the proof of
Theorem 6.1, we have actually shown that the semigroup {S(t) : t ≥ 0} is L 2 -
exponentially stable and satisfies
 ∞
S(t)g 2H 1 (O ) dt < c( g 2 + yo 2L 2 (−∞,0;H 2 (O )) ), ∀g ∈ D(L)
0

(see relations (6.18) and (6.19)).


 
Next, for ξ ∈ S(0, r M ) := g ∈ L 2 (0, ∞; H 1 (O)) : g L 2 (0,∞;H 1 (O )) ≤ r M ,
introduce the map
 t
(Λξ )(t) := S(t)z o + (N ξ )(t); (N ξ )(t) := S(t − τ )JM (ξ )(τ )dτ.
0

Since all the hypotheses from [19] are satisfied in the present case, we may apply
the same fixed-point argument for Λ on S(0, r M ) as in the proof of [19, Theo-
rem 5.1], in order to deduce that for each M > 0, there exist r M > 0 and ρ M > 0
124 6 Stabilization of Equations with Delays

sufficiently small such that for each z o , z o 2 + yo 2L 2 (−∞,0;H 2 (O )) ≤ ρ M , there


exists a unique solution z M ∈ C([0, ∞; L 2 (O)) ∩ L 2 (0, ∞; H 1 (O)) to (6.44), with
z M L 2 -exponentially decaying. More precisely,

z M (t) 2 ≤ C (M, r M )e−γ (M,r M )t ρ M , t ≥ 0. (6.45)

The details are omitted. Here C , γ : R+ × R+ → R∗+ are some continuous functions
depending only on M and r M .
To conclude with the proof, it remains to show that there exists C > 0, independent
of M, such that
z M (t) H 1 (O ) ≤ C, ∀t ≥ 0. (6.46)

Then, taking M sufficiently large, we will have

JM (z M (t)) = f (z M (t) + Du(z M (t))) − f  (0)(z M (t) + Du(z M )(t)), ∀t ≥ 0,

which immediately will lead to the conclusion that the exponentially decaying z M
is, in fact, a solution to the system (6.39). Then, recalling that y = z + Du, the result
stated in the theorem has been proved.
To show relation (6.46), we scalar multiply Eq. (6.44) by Az M , and use (6.3),
(6.45) and relations (6.7), (6.40). We deduce that
 t
z M (t) 2H 1 (O ) ≤ C1 (M, r M )ρ M + C M z M (τ ) 4H 1 (O ) dτ, t ≥ 0,
0

for some positive continuous function C1 : R+ × R+ → R+ . Then by Grönwall’s


lemma, we get
,∞
z M (τ ) 2H 1 (O ) dτ
≤ C1 (M, r M )ρ M eC M C 2 (M,r M )ρ M ,
CM
z M (t) 2H 1 (O ) ≤ C1 (M, r M )ρ M e 0

for some positive continuous function C2 : R+ × R+ → R+ , since z M ∈ L 2 (0, ∞;


H 1 (O)). Now it is clear that given a constant C > 0, for each M > 0 we may take
ρ M so small that
C1 (M, r M )ρ M eC M C 2 (M,r M )ρ M < C,

which implies that


z M (t) H 1 (O ) < C, ∀t ≥ 0, ∀M > 0,

thereby completing the proof. 


Remark 6.2 It is easy to see that if in Eq. (6.1) we change the null boundary Neumann
condition to a null Dirichlet boundary condition, the result, stated in Theorem 6.2,
 N
still holds for a feedback u of similar form (with the eigenfunctions ϕ j j=1 changed
accordingly).
6.3 Feedback Stabilization of the Nonlinear System (6.1) 125

Remark 6.3 It should be noted that the same stabilizing method developed here may
be also applied to the the following type of heat equation with memory:
⎧ ,t ,t
⎨∂t y(t, x) = Δy(t, x) + −∞ k1 (t − s)Δy(s, x)ds + μ −∞ k2 (t − s)y(s, x)ds
+ f (y(t, x)), (t, x) ∈ Q,
⎩ ∂ y = 0 on Σ , y(t, x) = y (t, x), (t, x) ∈ (−∞, 0]×O,
y(t, x) = u(t, x) on Σ1 , ∂n 2 o
(6.47)
with k1 , k2 two different positive kernels satisfying hypothesis (k). A similar result
to Theorem 6.2 can be obtained in this case. The details are omitted.
Remark 6.4 If there exists a constant a ∈ R, a = 0, such that f (a) = 0, then similar
results to those in Theorem 6.2 concerning the local stabilization of the steady-state
solution a in the nonlinear system (6.1) with μ = 0 can be obtained, following the
algorithm developed above. Indeed, setting y := y − a, we reduce the problem to
the null stabilization of the equivalent system
⎧ ,t

⎪ ∂t y(t, x) = Δy(t, x) + −∞ k(t − s)Δy(s, x)ds + f˜(y(t, x)),

(t, x) ∈ Q := (0, ∞) × O,


⎪ y(t, x) = u(t, x) on Σ1 := (0, ∞) × Γ1 , ∂n y = 0 on Σ2 := (0, ∞) × Γ2 ,

y(t, x) = yo (t, x), (t, x) ∈ (−∞, 0] × O,
(6.48)
where f˜(y) := f (y + a), y ∈ R satisfies similar assumptions ( f 1 ), ( f 2 ) to those that
f does. Then it is clear that the algorithm can be applied. The details are omitted.

6.4 Comments

The model (6.1) was introduced in [66], and it describes the heat flow in a rigid
isotropic homogeneous heat conductor with memory. It is derived in the framework
of the theory of heat flows with memory established in [48]. Moreover, a system of
first-order hyperbolic PDEs can be transformed to a system described by retarded
functional differential equations like (6.1) (for details, see [71]). These equations
serve as a model for physical phenomena such as traffic flows, chemical reactors,
and heat exchangers.
Similar equations have been considered in different papers, but the problem of the
behavior of solutions and stability was directly addressed in [46, 62]. There, the main
ingredient used is the so-called history space setting, which consists in considering
some past history variables as additional components of the phase space correspond-
ing to the equation under study (this idea is due to Dafermos [51]), whereas concern-
ing the first-order hyperbolic equations, the backstepping method is implemented by
Krstic et al. [76].
The boundary stabilization problem associated with (6.1) with k ≡ 0 was studied
in Chap. 2. When the model incorporates memory terms, this problem is far from
being solved and well understood. The character of Eq. (6.1) is determined by the
nature of the kernel k, and in some situations, this equation might be of hyperbolic
126 6 Stabilization of Equations with Delays

type (that is, with finite speed propagation). This is the case, for instance, if k(t) =
e−εt . By virtue of relation (6.3), we clearly see the hyperbolic nature of Eq. (6.1).
In any case, there are cases of kernels k for which the equation is of parabolic type,
namely
k(t) = a0 t −ε , a0 > 0, 0 < ε < 1.

But we see that such a kernel cannot satisfy hypothesis (6.2). In the parabolic case,
the controllability problem is solved by relying on similar arguments to those in the
free memory case; see, for instance, the work of Barbu and Iannelli [26] or Pandolfi
[109]. Since our stabilization method requires an exponential decay of the kernel (see
hypothesis (k)) it is clear that the parabolic case is left outside, while the hyperbolic
case can be treated similarly to the free memory case. This is an interesting difference
between the stabilization and controllability problem associated with equations with
memory.
The results presented in this chapter are new, and are based on those obtained in
Munteanu [101]. While in [101] an additional hypothesis of linear independence of
the traces of the normal derivatives of the eigenfunctions on Γ1 is imposed, here we
drop it by using the control design in Chap. 2. Other results concerning the Navier–
Stokes equations with memory were obtained in the author’s work [100].
For more results on the controllability problem associated with (6.1), see [18],
for example, and for the optimal control problem, see [40], for instance. For more
details about heat equations with memory, one may consult the book [3]. Finally, we
call the reader’s attention to the result on the present subject in [64], as well as [15],
concerning the stochastic version of the problem.
Chapter 7
Stabilization of Stochastic Equations

In this chapter, we consider stochastic PDEs. We address the boundary stabilization


problem, which will be of two types: pathwise stabilization and stabilization in mean.
Stochastic differential equations can be viewed as a generalization of dynamical
systems theory to models with noise. This generalization arises naturally due to
the fact that real systems cannot be completely isolated from their environments
and for this reason always experience external stochastic influence. Clearly, noise
perturbation complicates the problem considerably.
However, it turns out that similar proportional-type deterministic feedback,
designed in Chap. 2, ensures stability, though only for some special cases of stochas-
tic equations. Depending on the equation, the technique that we use is to start with
an argument similar to that in Chap. 2, then improve it by writing the solution in an
integral form, and finally improve the latter by adding a fixed-point argument to the
procedure.

7.1 Robustness in the Presence of Noise Perturbation of the


Boundary Feedback

This section answers to the following question: in the case in which the stabilizing
feedback designed in Chap. 2 is perturbed by a noise, will it still ensure the stabil-
ity of the system? This situation directly corresponds to practice. More precisely,
measuring instruments may present some malfunctions, and therefore, the accuracy
of the collected data may be randomly negatively affected. Thus, in order to have a
more realistic model, it makes sense to add a noise perturbation to the controller. We
confine ourselves to the one-dimensional case, with Neumann boundary conditions
in which the derivative of the unknown is equal to the sum of the control and a white

© Springer Nature Switzerland AG 2019 127


I. Munteanu, Boundary Stabilization of Parabolic Equations, Progress in
Nonlinear Differential Equations and Their Applications 93,
https://doi.org/10.1007/978-3-030-11099-4_7
128 7 Stabilization of Stochastic Equations

noise in time. For higher dimensions it is not even known whether this problem is
well posed.
The governing equations are

⎨ ∂t Y (t, x) = Yx x (t, x) + f (x, Y (t, x)), t > 0, x ∈ (0, L),
Y (t, 0) = u(t) + e−δt β̇(t), Yx (t, L) = 0, t > 0, (7.1)
⎩ x
Y (0, x) = Yo (x), x ∈ (0, L).

Here {β = β(t), t ≥ 0} is a standard real Brownian motion in the probability space


(Ω, P, F ); the unknown Y = Y (t, x, ω) is a real-valued process; the initial data
Yo belongs to L 2 (0, L); f is a nonlinear function; and u = u(t) is the control (see
Chap. 1 for details).
The target solution, which we aim to stabilize, is any Ŷ ∈ C 2 ([0, L]) satisfying
the equation
Ŷx x (x) + f (x, Ŷ (x)) = 0 in (0, L); Ŷx (L) = 0.

Once we define the regulation error Y − Ŷ → Z , we translate the problem to the


origin, by equivalently rewriting (7.1) as’

⎨ ∂t Z (t, x) = Z x x (t, x) + f (x, Z (t, x) + Ŷ (x)) − f (x, Ŷ (x)), t > 0, x ∈ (0, L),
Z (t, 0) = v + e−δt β̇, Z x (t, L) = 0, t > 0,
⎩ x
Z (0, x) = Z o (x) := Yo (x) − Ŷ (x), x ∈ (0, L),
(7.2)
where v = u − ŷx (0).
Recall the classical assumptions on f , which we have met before:

(i) f, f  ∈ C([0, L] × R),

where f  = f y . Besides this, when needed, we will strengthen assumption (i) to

(ii) | f  (x, y)| ≤ C(|y|m + 1), ∀x ∈ [0, L], y ∈ R,

where 0 < m < ∞.


The corresponding linear operator given in (2.2) in Chap. 2, obtained from the
linearization around the steady state Ŷ , is defined here as

Ay := yx x + f  (x, Ŷ )y, ∀y ∈ D(A),


  (7.3)
D(A) = y ∈ H 2 (0, L) : yx (0) = yx (L) = 0 .

It is easy to check that A is self-adjoint in L 2 (0, L), and satisfies assumptions (A1)–
 ∞from Chap. 2. Hence −A has a countable set of real eigenvalues,
(A4)  ∞ denoted by
λ j j=1 , with the corresponding eigenfunctions denoted by ϕ j j=1 , that is,

−Aϕ j = λ j ϕ j , j = 1, 2, 3, . . . .
7.1 Robustness in the Presence of Noise Perturbation of the Boundary Feedback 129
 
The system ϕ j j∈N∗ may be chosen to be orthonormal. Besides this, given ρ > 0,
there exists N ∈ N such that

λ j ≤ ρ for j = 1, 2, . . . , N and λ j > ρ j ≥ N + 1. (7.4)

We fix ρ (and so we fix N as well) such that

2ρ − δ < 0.

In order to simplify our presentation, we will assume further that the first N eigen-
values are distinct. The general case can be also considered and treated as in Chap. 2,
Sect. 2.2.2.
Recalling the notation in (2.20)–(2.26), we introduce the feedback law
⎛ ⎞ ⎛ ⎞

Z (t), ϕ1 ϕ1 (0) 

Z (t), ϕ2 ⎟ ⎜ ϕ2 (0) ⎟
v(t) := Λ S A ⎜ ⎟ ⎜ ⎟
⎝ .............. ⎠ , ⎝ .......... ⎠ (7.5)

Z (t), ϕ N ϕ N (0) N

and rewrite it as the sum v = v1 + v2 + · · · + v N , where


⎛ ⎞ ⎛ 1 ⎞

Z (t), ϕ1 ϕ (0) 
γk −λ1 1

Z (t), ϕ2 ⎟ ⎜ 1 ϕ2 (0) ⎟
vk (t) := A ⎜ ⎟ ⎜ γk −λ2 ⎟
⎝ ............ ⎠ , ⎝ ................. ⎠ , t ≥ 0, k = 1, . . . , N . (7.6)

Z (t), ϕ N γ −λ
1
ϕ N (0) k N N

In this case, the Gram matrix B has the form


⎛ ⎞
(ϕ1 (0))2 ϕ1 (0)ϕ2 (0) . . . ϕ1 (0)ϕ N (0)
⎜ ϕ2 (0)ϕ1 (0) (ϕ2 (0))2 . . . ϕ2 (0)ϕ N (0) ⎟
B := ⎜ ⎟
⎝ ............................................................... ⎠ . (7.7)
ϕ N (0)ϕ1 (0) ϕ N (0)ϕ2 (0) . . . (ϕ N (0))2

It is important to emphasize that ϕi (0) = 0, i = 1, . . . , N , since otherwise, we would


have ϕi ≡ 0, i = 1, . . . , N , which is absurd. That is, hypothesis (A5) also holds for
this case.
For a Hilbert space H , we denote by MP2 (0, T ; H ) the space of all H −valued
progressively measurable processes

X : Ω × (0, T ) → H

such that  T
E X (t)2H dt < ∞,
0
130 7 Stabilization of Stochastic Equations

where E is the expectation.


Denote by CP ([0, T ]; H ) the space of all Ft -adapted processes X ∈ MP2 (0, T ; H )
that have a modification in C([0, T ]; L 2 (Ω)).
Our goal is to show the following robustness in the presence of noise perturbation
result for the proportional feedback we introduced in Chap. 2.
Theorem 7.1 Under assumptions (i) and (ii), the closed-loop equation

⎪ ∂t Y (t, x) =Yx x (t, x) + f (x, Y (t, x)), t > 0, x ∈ (0, L),

⎪ ⎛ ⎞ ⎛ ⎞



⎪ 
Y (t) − Ŷ , ϕ1 ϕ1 (0) 

⎪ ⎜ ⎟ ⎜ ⎟

⎨ Yx (t, 0) = Λ S A ⎜
Y (t) − Ŷ , ϕ2 ⎟ , ⎜ ϕ2 (0) ⎟
⎝ .................... ⎠ ⎝ ........ ⎠ (7.8)


Y (t) − Ŷ , ϕ N ϕ N (0)




N

⎪ + (0) + −δt
β̇(t), (t, = ≥

⎪ Ŷ x e Y x L) 0, t 0,

Y (0, x) =Yo (x), x ∈ (0, L),

has a unique solution Y ∈ CP ([0, T ]; L 2 (0, L)) that satisfies


ρ
lim e 2 t Y (t) − Ŷ 2L 2 (0,L) < ∞, P − a.s.,
t→∞

provided that Yo − Ŷ  L 2 (0,L) ≤ θ for some θ > 0 sufficiently small.


Following the approach in Chap. 2, we first plug the proposed feedback law (7.5)
into the first-order approximation of Eq. (7.2) and derive the following result (the
counterpart of the result in Theorem 2.1).
Theorem 7.2 Under assumption (i), the closed-loop equation


⎪ ∂t Z (t, x) = Z x x (t, x) + f  (x, Ŷ )Z (t, x), t > 0, x ∈ (0, L),

⎪ ⎛ ⎞ ⎛ ⎞


Z (t), ϕ1 ϕ1 (0) 

⎪ 

⎨ Z (t, 0) = Λ A ⎜
⎪ ⎟ ⎜ ⎟

Z (t), ϕ2 ⎟ , ⎜ ϕ2 (0) ⎟ + e−δt β̇(t),
S ⎝
x
............ ⎠ ⎝ ........ ⎠ (7.9)


Z (t), ϕ N ϕ N (0)




N

⎪ Z (t, L) = 0, t ≥ 0,

⎪ x

Z (0, x) = Z o (x), x ∈ (0, L),

has a unique solution Z ∈ CP ([0, T ]; L 2 (0, L)) that satisfies


ρ
lim e 2 t Z (t)2L 2 (0,L) < ∞, P − a.s.
t→∞

Proof In order to lift the boundary control into the equations, for some

δ < γ1 < · · · < γ N


7.1 Robustness in the Presence of Noise Perturbation of the Boundary Feedback 131

we introduce as in (2.16) the Neumann operators Dγk . The noise is lifted as well via
the map D = D(x), x ∈ (0, L), which is the solution to the equation

−Dx x (x) − f  (x, ŷ)D(x) + γ D(x) = 0, x ∈ (0, L),
(7.10)
Dx (0) = 1, Dx (L) = 0,

for some sufficiently large γ > 0. Then arguing as in (2.27)–(2.29), it follows that
Eq. (7.9) may be equivalently rewritten as
⎧ ⎛  N 

⎪  N 

⎪ d Z (t) = ⎝AZ (t) − 2 λj Dγk vk (Z (t)), ϕ j ϕ j (x)




⎨ j=1 k=1

⎪ N

⎪ + (γk − A)Dγk (x)vk (Z (t)) dt + (γ − A)De−δt dβ, t > 0,






k=1
Z (0) = Z .o

(7.11)
Equation (7.11) is formal. The precise meaning of the state equation is as follows:
we say that a continuous L 2 (0, L) predictable process Z is a solution to the state
equation if P − a.s.

N 
 
 t 
N
Z (t) = etA Z o − 2 e(t−τ )A λ j Dγk vk (Z (τ )), ϕ j ϕ j (x)dτ
j=1 0 k=1
N 
 t
+ e(t−τ )A (γk − A)Dγk (x)vk (Z (τ ))dτ (7.12)
k=1 0
 t
+ e(t−τ )A (γ − A)De−δτ dβ(τ ).
0

Here the integral arising on the right-hand side of (7.12) is taken in the sense of Itô
with values in H −1 (0, L). (We refer to [52, Proposition 2.4] or [113] for the existence
and uniqueness of such a solution.)
We continue with the argument in Chap. 2 by projecting the system on Xu : =
 N  ∞
linspan ϕ j j=1 and Xs := linspan ϕ j j=N +1 (see (2.32) and (2.33)).
The so-called unstable part in this case reads as (see as well (2.39))
⎧  

⎨ 
N
dZ = −γ1 Z + (γ1 − γk )Bk AZ dt + Φe−δt dβ, t > 0,
(7.13)

⎩ k=2
Z (0) = Zo .
132 7 Stabilization of Stochastic Equations

Here ⎛ ⎞
−ϕ1 (0)
⎜ −ϕ2 (0) ⎟
Φ := ⎜ ⎟
⎝ .......... ⎠ .
−ϕ N (0)

Applying Itô’s formula to eδt


AZ , Z N in (7.13) yields (see (2.39)–(2.41) for the
notation and computations)
 t
eδt A 2 Z 2N = A 2 Zo 2N + (δ − 2γ1 )eδs A 2 Z 2N
1 1 1

0


N
δs
+2e (γ1 − γk )
Bk AZ , AZ N ds
k=2
 t  t
+ e−δs
AΦ, Φ N ds + 2
AΦ, Z N dβ(s).
0 0
(7.14)
We see that (recall the positive semidefiniteness of the matrices Bk and the fact that
the sequence (γk )k=1,N is increasing)


N
2eδs (γ1 − γk )
Bk AZ , AZ N ≤ 0, s ≥ 0.
k=2

Also, recall that γ1 was taken such that γ1 > δ, which implies that δ − 2γ1 < 0.
Finally, notice that since A is positive definite, we have
AΦ, Φ ≥ 0. Hence taking
the expectation in (7.14) yields
  1
E eδt A 2 Z 2N ≤ A 2 Zo 2N +
AΦ, Φ N < ∞, ∀t ≥ 0.
1 1
(7.15)
δ
Now let us define

V (t) := eδt A 2 Z 2N ,


1

 t
I1 (t) :=
AΦ, Φ N e−δs ds,
0
 t
M(t) := 2
AΦ, Z N dβ,
0
  t 
N

δs 1
δs
I (t) := − (δ − 2γ1 )e A Z 2 2N + 2e (γ1 − γk )
Bk AZ , AZ N ds.
0 k=2

Taking into account that M is a local martingale and I, I1 are nondecreasing, adapted,
and with finite variation processes, we conclude that
7.1 Robustness in the Presence of Noise Perturbation of the Boundary Feedback 133

V (t) = Z (0) + I1 (t) − I (t) + M(t)

is a semimartingale. By (7.15) we are able to apply Lemma 1.1 to V, I, I1 and M


(noticing also the obvious fact that I1 (∞) < ∞) to obtain that there exists the limit
 
lim eδt A 2 Z 2N < ∞, P − a.s.
1

t→∞

1
Using that A 2 is an invertible positive definite symmetric matrix, it follows that
lim eδt Z 2N < ∞, P−a.s. This implies that
t→∞

lim eδt z u (t)2L 2 (0,L) < ∞, P − a.s., (7.16)


t→∞


N
 
since z u (t)2L 2 (0,L) = | Z (t), ϕ j |2 = Z (t)2N .
j=1
Concerning the stable part, since the spectrum of the operator As consists of
 ∞
−λ j j=N +1
with − λ j < −ρ, j ≥ N + 1,

by Lyapunov’s theorem, there exists Q ∈ L(Xs , Xs ), Q = Q ∗ ≥ 0, such that

1

Qz, As z + ρz = z2X s , ∀z ∈ Xs . (7.17)
2
The stable part of the system can be written as

dz s (t) = (As z s (t) + F(v(t))) dt + e−δt G(x)dβ,
(7.18)
z s (0) = z so ,

where

N
F(v(t)) := (γk − As )Dγk (x)vk (t)
k=1

and
G(x) := (γ − As )D(x).

We point out that by As , we understand, in fact, its extension Ãs (to recall this, see
(2.28), while for the extension operator, see Chap. 1). So


N
(γk − As )Dγk ∈ (D(As )) .
k=1
134 7 Stabilization of Stochastic Equations

By (7.16) and the definition of vk in (7.6), we easily see from the definition of F that
we have  N 
 
−δt  

g, F(v(t)) ≤ C1 e g L 2  (γk − As )Dγk 
  (7.19)
k=1 (D (As ))
− 21 δt
≤ C2 e g L 2 , t ≥ 0, P-a.s.,

for all g ∈ D(As ).


Applying Itô’s formula in (7.18) to the function e2ρt
Qz, z , we get via (7.17)
that
1 1
e2ρt Q 2 z s 2L 2 (0,L) = Q 2 z so 2L 2 (0,L)
 t 
+ −e2ρτ z s 2L 2 (0,L) + 2e2ρτ
Qz s , F(v(τ )) + e2(ρ−δ)τ
QG, G dτ (7.20)
0
 t
+ e(2ρ−δ)τ
Qz s , G dβ.
0

First, we show that


 1   
E Q 2 z s (t)2L 2 (0,L) ≤ Ce−ρt and E z s (t)2L 2 (0,L) ≤ Ce−ρt , (7.21)

∀t ≥ 0, for some positive constant C. To this end, taking the expectation in (7.20)
and recalling that 2ρ − δ < 0, we deduce that
   t  1 1  
1
E e2ρt Q 2 z s 2L 2 (0,L) ≤ C + 2E e2ρτ Q 2 Q 2 z s , F(v(τ )) dτ , (7.22)
0

∀t ≥ 0, where

1 1 1
C = Q 2 z s0 2L 2 (0,L) + Q 2 G2 2 .
2(δ − ρ) L (0,L)

This implies via the Schwarz inequality, the stochastic Fubini’s theorem, and the
estimate (7.19) that
   t  1 
1 1
E e2ρt Q 2 z s 2L 2 (0,L) ≤ C + 2C e2ρτ E Q 2 z s  L 2 (0,L) e− 2 δτ dτ
0
 t    t
1
≤ C + ρC E e2ρτ Q 2 z s 2L 2 (0,L) dτ + C e(2ρ−δ)τ dτ
0 0
(recall that 2ρ − δ < 0)
 t  
1
≤C +C ρE e2ρτ Q 2 z s 2L 2 (0,L) dτ.
0
7.1 Robustness in the Presence of Noise Perturbation of the Boundary Feedback 135

1
Then via Grönwall’s inequality and the fact that Q 2 is a symmetric positive definite
operator, (7.21) follows immediately.
Next, from (7.20), we have
 
1
 1 εp 
P sup Q 2 z s (t)2L 2 (0,L) ≥ εp
≤ P Q 2 z s ( p)2L 2 (0,L) ≥
t∈[ p, p+1] 5
 p+1   !
1 εp
+P 2ρQ 2 z s 2L 2 (0,L) + z s 2L 2 (0,L) dτ ≥
p 5
 " t  " 
"  " εp
+ P 2 sup "" Q 2 Q 2 z s , F(v(τ )) dτ "" ≥
1 1

t∈[ p, p+1] p 5
 " t " 
" " εp
+P sup "" e−2δτ
QG, G dτ "" ≥
t∈[ p, p+1] p 5
 " t " 
"  1  " ε
+P sup " e " −δτ 1
Q 2 z s , Q 2 G dβ " ≥ " p

t∈[ p, p+1] p 5
(using the Cebyshev inequality and the Burkholder–Davis–Gundy inequality)
 p+1 
5  1  5 1

≤ E Q 2 z s ( p)2L 2 (0,L) + E 2ρQ 2 z s 2L 2 (0,L) + z s 2L 2 (0,L) dτ
εp εp p
 p+1 " "
10 " 1 1 "
+ E " Q 2 Q 2 z s , F(v(τ )) " dτ
εp p
 p+1
5 # $
+ E e−2δτ |
QG, G | dτ
εp p
 p+1 " 1 "2 !
5 " "
E e−2δτ " Q 2 z s , Q 2 G " dτ
1
+
εp p
(making use of the estimates (7.19) and (7.21))
1
≤ Ce−ρp ,
εp

for some ε p > 0. Taking ε p = e− 2 ρp , we get from the above that


1

 
− 21 ρp
≤ Ce− 2 ρp , ∀ p ∈ N∗ .
1 1
P sup Q 2 z s (t)2L 2 (0,L) ≥e (7.23)
t∈[ p, p+1]

The Borel–Cantelli lemma now implies that there exists p(ω) such that if p > p(ω),
then
sup Q 2 z s (t)2L 2 (0,L) ≤ Ce− 2 ρp ,
1 1

t∈[ p, p+1]

which implies that


136 7 Stabilization of Stochastic Equations

lim e 2 ρt z s (t)2L 2 (0,L) < ∞, P − a.s.


1
(7.24)
t→∞

Recalling that z = z u + z s and invoking (7.16) and (7.24), we are led to the conclusion
of the theorem, thereby completing the proof. 

Theorem 7.2 provides a global asymptotic exponential stabilization result of the


linearization of Eq. (7.2) under the action of the feedback (7.5). Then arguing as in
Theorem 2.2, one may deduce the following local stabilization result of the nonlinear
system (7.2). Its proof is omitted.

Theorem 7.3 Under assumptions (i) and (ii), the closed-loop equation


⎪ ∂t Z (t, x) =Z x x (t, x) + f (x, Z (t, x) + Ŷ (x)) − f (x, Ŷ (x)), t > 0, x ∈ (0, L),

⎪ ⎛ ⎞ ⎛ ⎞


Z (t), ϕ1 ϕ1 (0) 

⎪ 

⎨ Z (t, 0) = Λ A ⎜
⎪ ⎟ ⎜ ⎟

Z (t), ϕ2 ⎟ , ⎜ ϕ2 (0) ⎟ + e−δt β̇(t),
S ⎝
x
............ ⎠ ⎝ ......... ⎠


Z (t), ϕ N ϕ N (0)




N
⎪ Z x (t, L) =0, t ≥ 0,




Z (0, x) =Z o (x), x ∈ (0, L),
(7.25)
has a unique solution Z ∈ CP ([0, T ]; L 2 (0, L)), which satisfies
ρ
lim e 2 t Z (t)2L 2 (0,L) < ∞, P − a.s.,
t→∞

provided that Z o  ≤ θ for some θ > 0 sufficiently small.

To end this section, returning to the initial variable y, Theorem 7.3 immediately
implies Theorem 7.1.

7.2 Stabilization of the Stochastic Heat Equation on a Rod

The subject of this section is represented by the heat equation on (0, L), L > 0,
perturbed by an internal multiplicative noise, i.e.,

⎨ ∂t Y (t, x) = Yx x (t, x) + λσ (x, Y (t, x))dβ(t, x), 0 < x < L , t > 0,

Yx (t, 0) = u(t), Yx (t, L) = 0, t ≥ 0, (7.26)


Y (0, x) = Yo (x), for x ∈ [0, L].

Here dβ denotes a Gaussian space-time noise on [0, ∞) × [0, L] that is usually


understood as the distribution derivative of the Brownian sheet β(t, x) in t and
x; see Chap. 1 or [125]. And σ is a uniformly globally Lipschitz function with
σ (x, 0) = 0 , ∀x ∈ (0, L). In particular, there exists L σ > 0 such that
7.2 Stabilization of the Stochastic Heat Equation on a Rod 137

|σ (x, y)| ≤ L σ |y|, ∀x ∈ [0, L], ∀y ∈ R, (7.27)

where λ is a positive number, usually refereed as the level of the noise; u is the
boundary control.
It was shown by Foondun and Nualart in [56] that in the absence of a control, no
matter how small (or large) λ is, the corresponding solution y to (7.26) is exponen-
tially unstable in the expectation. More precisely, it is shown in [56, Theorem 1.5]
that the solution y to (7.26) without the boundary control (i.e., u ≡ 0) satisfies

1
0 < lim inf log E|Y (t, x)|2 < ∞, ∀x ∈ (0, L).
t→∞ t
Hence it makes sense to search for a stabilizing feedback u for (7.26) in the sense
that once inserted into the equation, the corresponding solution of the closed-loop
equation satisfies

1
lim inf log E|Y (t, x)|2 < −γ , ∀x ∈ (0, L),
t→∞ t
for some γ > 0.
Our goal is to show that a feedback similar to the one described in Chap. 2 ensures
that the corresponding solution of the closed-loop equation (7.26) goes exponentially
fast to zero in a certain sense (see Theorem 7.4 and relation (7.35) below). Since the
stochastic force is of multiplicative type, one may guess that the method, presented in
Chap. 2, that was successfully applied in the previous section (in the case of additive
noise) may fail to work now. Indeed, the spectral decomposition method is useless
due to the presence of the nonlinearity σ (y)dβ, unless it is considered separately. That
is why, in comparison with the previous section, we will change the approach in the
following way: we consider separately the linear equation and show that after we lift
the control, the corresponding obtained linear operator generates a C0 −semigroup
that can be expressed in a mild formulation, via a kernel. Then returning to the full
nonlinear equation, we write its solution in an integral formulation, a fact that allows
one to obtain the desired exponential decay. All our effort will be concentrated in
showing that the kernel has “good properties”; see Lemma 7.1 below.
In any case, let us further explain the approach we will follow. Say that we are
dealing with the following heat equation:

∂t y(t, x) − y(t, x) − ay(t, x) = 0 in (0, ∞) × (0, L),
(7.28)
y(t, 0) = y(t, L) = 0, ∀t ≥ 0, y(0) = yo ,
 ∞  ∞
where a and L are some positive constants. Let us denote by λ j j=1 and by ϕ j j=1
the system of eigenvalues and the system of eigenfunctions that diagonalizes the
Dirichlet Laplacian in L 2 (0, L), respectively. It is known that they are given by
138 7 Stabilization of Stochastic Equations

!2 ! 21 !
jπ 2 jπ x
λj = and ϕ j = sin , j = 1, 2, . . . .
L L L

Then denoting by


p(t, x, ξ ) := e−λ j t ϕ j (x)ϕ j (ξ ), t ≥ 0, x, ξ ∈ O, (7.29)
j=1

the so-called Dirichlet heat kernel, it is easy to see that

p(t, x, ξ ) ≤ pG (t, x, y), ∀t > 0, (7.30)

where
1 |x−ξ |2
pG (t, x, ξ ) := √ e− 4t
4π t

is the Gaussian kernel. Next, for t0 > 0 fixed, there is a constant c > 0 such that

p(t, x, ξ ) ≤ ce−λ1 t , ∀t > t0 . (7.31)

It follows that there exists some constant c1 > 0 such that for all η ∈ (0, λ1 ), we have
 ∞ !
1 1
eηt p(t, x, x)dt ≤ c1 √ + . (7.32)
0 η λ1 − η

This is indeed so. We use the bounds (7.30) and (7.31). We therefore write
 ∞  t0  ∞
eηt p(t, x, x)dt = eηt p(t, x, x)dt + eηt p(t, x, x)dt =: I1 + I2 .
0 0 t0

For I1 , using (7.30), we have


 
t0
ηt
t0
eηt
I1 ≤ e pG (t, x, x)dt ≤ c2 √ dt.
0 0 t

As for I2 , by (7.31), we have


 ∞  ∞
I2 ≤ eηt p(t, x, x)dt ≤ c eηt e−λ1 t dt.
t0 t0

Combining these two results yields the estimate (7.32).


7.2 Stabilization of the Stochastic Heat Equation on a Rod 139

Let us first consider the case a = 0. We can represent the corresponding solution
to (7.28) as  L
y(t, x) = p(t, x, ξ )yo (ξ )dξ.
0

Hence for each η ∈ (0, 2λ1 ), we have


 ∞  ∞  L !
ηt ηt
e y(t) dt =
2
e y (t, x)d x dt
2
0 0 0
 %   !2 
∞ L L
ηt
= e p(t, x, ξ )yo (ξ )dξ d x dt
0 0 0
 ∞  L !
≤ yo 2 eηt p(2t, x, x)d x dt
0 0
 L  ∞ !
ηt
= yo  2
e p(2t, x, x)dt d x
0 0
(using estimate (7.32))
≤ Cyo 2 ,

for some positive constant C. It follows by the semigroup property that the solution
y is exponentially decaying in the L 2 −norm  · .
Now consider the case a = 0 such that λ1 − a < 0. Set μ j := λ j − a, for j =
1, 2, .., and


p̃(t, x, ξ ) := e−μ j t ϕ j (x)ϕ j (ξ ).
j=1

Then the solution to (7.28) can be written again in integral form as


 L
y(t, x) = p̃(t, x, ξ )yo (ξ )dξ. (7.33)
0

It is clear that in trying to show an L 2 −norm exponential decay for this y, by following
the above ideas, an estimate like (7.32) fails to hold for the kernel p̃, because the
eigenvalue μ1 = λ1 − a is negative, and so (7.31) can no longer be obtained.
In order to achieve the stability in (7.28), a boundary feedback control may be
inserted. To achieve this goal, we look for a special feedback law that allows us
to write the corresponding solution of the closed-loop equation in an integral form
similar to (7.33). It is clear that in order for us to be able to do this, the feedback law
must be explicitly given in a simple form. It turns out that the proportional feedback
law designed in Chap. 2 is able to do this job.
140 7 Stabilization of Stochastic Equations

7.2.1 Mild Formulation of the Solution and Proof of the


Main Result

The goal of this section is to prove the following result.


Theorem 7.4 Let ρ > 0 be arbitrary but fixed. For N ∈ N large enough, the closed-
loop equation

⎪ ∂t Y (t, x) = Yx x (t, x) + λσ (x, Y (t, x))dβ(t, x),


⎨ for 0 < x < L and t > 0,
(7.34)

⎪ Yx (t, 0) = u(Y (t)), Yx (t, L) = 0 for t > 0,


Y (0, x) = Yo (x) for x ∈ [0, L],

has a unique solution Y ∈ CP ([0, T ]; L 2 (0, L)), which satisfies

− ∞ < lim sup log E|Y (t, x)|2 < −ρ, ∀x ∈ (0, L). (7.35)
t→∞

Here the feedback u is defined by (7.42) below.


In contrast to the previous discussions, here we will also estimate the magnitude,
with respect to the parameter N , of the controller u. More precisely, in what follows,
we will estimate the norm of the matrices involved in the definition of the controller
u, with respect to the parameter N . To this end, we define below what we understand
by a quantity to be of order some power of N . Let k > 0, and consider a function f
that depends on the parameter N , i.e., f = f (N ). We say that f is of order N k , and
denote this by O(N k ), if
f (N )
lim
N →∞ N k

exists and is finite and nonzero.


Now let us construct the stabilizing feedback law, using the ideas in Chap. 2. The
governing operator of Eq. (7.34) is the Neumann–Laplace operator on (0, L), given
by
Ay = −yx x , ∀y ∈ D(A), (7.36)
 
D(A) = y ∈ H 2 (0, L) : yx (0) = yx (L) = 0 .

It is well known that it has a countable set of eigenvalues, namely

( j − 1)2 π 2
μj = , j = 1, 2, . . . ,
L2
with the corresponding eigenfunctions
7.2 Stabilization of the Stochastic Heat Equation on a Rod 141
⎧&
⎨ 1 , j =1
ϕ j (x) = &  
L
⎩ 2
cos ( j−1)π x
, j = 2, 3, . . . .
L L

which form an orthonormal basis in L 2 (0, 1).


Let us fix some large enough N ∈ N. In the previous chapters, the choice of N was
correlated with the magnitude of the eigenvalue μ N +1 . Namely, N was chosen to be
large enough that μ N +1 was greater than some given constant. This time, besides the
magnitude of μ N +1 , here we will take into account also the magnitude of N itself.
In other words, we will study how the controller u depends on N . It turns out that it
can be estimated to be of order O(N η ), for some good enough η to imply the desired
stability (see (7.48) below). In this light, this time, the constants γ1 , . . . , γ N will not
be arbitrary, but of a precise form. More exactly,

k
γk := μ N + + N α , k = 1, 2, . . . , N , (7.37)
N

with 7
4
< α < 2. Note that we have

γk π2 γk − μ N
lim = and lim = 1.
N →∞ N2 L2 N →∞ Nα

Thus we stress that

γk , k = 1, 2, . . . , N , are of order O(N 2 ), (7.38)

and
γk − μ N , k = 1, 2, . . . , N , are of order O(N α ). (7.39)

This time, the counterpart of the Gram matrix B introduced in (2.20) is given by
⎛ √ √ ⎞
√1 2 ... 2
1 ⎜ 2 2 ... 2 ⎟
B := ⎜ ⎟. (7.40)
L ⎝√.................... ⎠
2 2 ... 2

And the corresponding form of the feedback law (2.26), in the present case, reads as
follows: for each k = 1, . . . , N , we set
⎛ ⎞ ⎛ ϕ1 (0) ⎞

y, ϕ1 γk −μ1 

y, ϕ2 ⎟ ⎜ ⎜
ϕ2 (0) ⎟

u k (y) := A ⎜ ⎟
⎝ ........... ⎠ , ⎜
γk −μ2 ⎟ , (7.41)
⎝ ......... ⎠

y, ϕ N ϕ N (0)
γk −μ N N
142 7 Stabilization of Stochastic Equations

then introduce u as
u(y) := u 1 (y) + · · · + u N (y). (7.42)

For the definition of A, see (2.24).


Recall the definition of the Bk from (2.22). Their sum is precisely the following:
 N N

N
1  bi j
Bk = , (7.43)
k=1
L k=1
(γk − μi )(γk − μ j )
i, j=1


where b11 = 1 and bi j = 2, (i, j) = (1, 1). We want to estimate the magnitude
of u k , k = 1, 2, . . . , N . This reduces to estimating the first and last eigenvalues of
the matrix A, or equivalently, by the definition of A, to estimating the last and first
eigenvalues of the sum matrix B1 + · · · + B N . Let us denote by r1 , . . . , r N (arranged
as an increasing sequence) the positive eigenvalues of the latter matrix. By virtue of
(7.43), we have that

1 
N
bii
r1 + · · · + r N = .
L i,k=1 (γk − μi )2

By (7.38) and (7.39), we deduce that

1 1
C1 ≤ r1 + r2 + · · · + r N ≤ C2 2α−2 , (7.44)
N2 N
for some positive constants C1 , C2 , independent of N (but for N large enough).
' NNext by the Gershgorin circle theorem, we know that the eigenvalues of the matrix
k=1 Bk cannot be far from its diagonal entries. More precisely, we know that there
exists some j ∈ {1, 2, . . . , N } such that
" √ " √
" N "  N
" 2 " 2
"r N − "≤ .
" (γ k − μ j ) 2" (γ k − μ j )(γ k − μl )
k=1 k,l=1

Since in the above inequality the   to zero as N → ∞, we deduce


right-hand side tends

N
2
that r N is maximal of order O . Taking into account relations
k=1
(γk − μ N )2
(7.38) and (7.39), and that
√ √

N
2 2N
≤ ,
k=1
(γk − μ N )2 (γ1 − μ N )2
7.2 Stabilization of the Stochastic Heat Equation on a Rod 143
# $
it follows that r N is maximal of order O 1
N 2α−1
. In other words, we obtain that

1
ri ≤ C 3 , i = 1, 2, . . . , N , (7.45)
N 2α−1
for some constant C3 > 0, independent of N .
Finally, taking α very close to 2 as necessary, we see by (7.44) and (7.45) that
necessarily,
1
ri ≥ C4 3 , i = 1, 2, . . . , N , (7.46)
N
for some positive constant C#4 , independent
$ # of N$ . Hence we conclude that the orders
of r1 , . . . , r N lie between O N13 and O N 2α−1
1
.
Recalling that A = (B1 + B2 + · · · + B N )−1 , and denoting by λ1 (A) and λ N (A)
the first and last eigenvalues of the matrix A, respectively, we get that

λ1 (A) and λ N (A) = A have order between O(N 2α−1 ) and O(N 3 ). (7.47)

Here A denotes the classical Euclidean norm of the matrix A.


By (7.41), (7.39), and (7.47), we deduce then
⎛ ⎞

y, ϕ1 
 
√ ⎜
y, ϕ2 ⎟
|u k (Y )| ≤ C N  ⎜ ⎟ 
⎠ , ∀t ≥ 0.
3−α
N ⎝ (7.48)
 ......... 

y, ϕ N 
N

Proof of Theorem 7.4. First of all, we note that the stochastic equation (7.34) is well
posed, since both σ and the Neumann boundary conditions are Lipschitz, and thus
one can argue for the existence and uniqueness as in [113].
We lift the boundary conditions into the Eq. (7.34) by arguing similarly as in
(2.27)–(2.29), obtaining thereby the internal control-type problem


N 
N
 
∂t Y (t) = − AY (t) + u i (Y (t))(Ã + γi )Dγi − 2 μ j u i (Y (t))Dγi , ϕ j ϕ j
i=1 i, j=1

+ λσ (Y (t))dβ; Y (0) = Yo .
(7.49)
(One can check the section above or [19, Sect. 1] for additional explanations on the
precise definition of a solution to (7.49).)
Next, the idea is to forget, for a while, about the stochastic perturbation, and
express the solution z to the linear equation
144 7 Stabilization of Stochastic Equations


N
∂t z(t) = − Az(t) + u i (z(t))(Ã + γi )Dγi
i=1


N
  (7.50)
−2 μ j u i (z(t))Dγi , ϕ j ϕ j , t > 0,
i, j=1

z(0) = z o ,

in an integral form. This enables one to have a mild formulation for the solution y
to (7.49). One can prove the following results.

Lemma 7.1 The solution z of


N 
N
 
∂t z(t) = − Az(t) + u i (z(t))(Ã + γi )Dγi − 2 μ j u i (z(s))Dγi , ϕ j ϕ j ;
i=1 i, j=1

z(0) = z o ,
(7.51)
can be written in a mild formulation as
 L
z(t, x) = p(t, x, ξ )z o (ξ )dξ.
0

Moreover, we have that


 ∞  L
1
e N t p 2 (t, x, ξ )dξ dt ≤ C , ∀x ∈ (0, L), (7.52)
0 0 Nθ

for some positive θ and C > 0 independent of N .

Proof We will decompose z as




z(t) = z j (t)ϕ j (x),
j=1

 
where z j (t) = z(t), ϕ j , j = 1, 2, . . . . From now on, all our effort will be devoted
to writing, for each j ∈ N∗ , z j in the form



z j (t) = f i j (t)
z o , ϕi , (7.53)
i=1

with | f i j (t)| ≤ Ci j e−ci j t , t ≥ 0, for some Ci j , ci j > 0 of order some powers of N .


Once we do this, then immediately we may write the kernel p as
7.2 Stabilization of the Stochastic Heat Equation on a Rod 145



p(t, x, ξ ) = f i j (t)ϕi (x)ϕ j (ξ )
i, j=1

and play with the estimates (in terms of N ) of Ci j , ci j in order to deduce (7.52) as
well.
We emphasize that in the case of a general feedback law, after a lift of the boundary
conditions into the equations, it is not easy (or even possible) to get a relation like
(7.53). However, the simple explicit form of our feedback law allows us to do this.
Scalar multiplying equation (7.51) by ϕ j , j = 1, . . . , N , and arguing as in (2.38)–
(2.41), we get that the first N modes of the solution z satisfy

d  N
Z = −γ1 Z + (γ1 − γk )Bk AZ , t > 0. (7.54)
dt k=2

The solution Z can be expressed as


 'N 
t −γ1 I + k=2 (γ1 −γk )Bk A
Z (t) = e Zo .
 N
This yields that there exist continuous functions qi j : [0, ∞) → R i, j=1 such that


N
 
z i (t) = qi j (t) z o , ϕ j , i = 1, . . . , N . (7.55)
j=1

Furthermore, using the results in (2.40)–(2.41), we conclude that

C
|qi j (t)|2 ≤ e−γ1 t , ∀t ≥ 0, ∀i, j = 1, . . . , N . (7.56)
λ1 (A)

Making use of (7.38) and (7.47), relation (7.56) implies

1
e−cN t , ∀t ≥ 0,
2
|qi j (t)| ≤ C (7.57)
α− 21
N

for some positive constants C, c, independent of N . Clearly, we may take c < πL 2


2

(which we shall do for later purposes).


Since by (7.41), the feedbacks u i , i = 1, . . . , N , are some linear combinations
of the modes z 1 , . . . , z N , we get from (7.55) that there exist continuous functions
 N
ri j : [0, ∞) → R i, j=1 such that


N
 
u i = u i (t) = ri j (t) z o , ϕ j , i = 1, . . . , N , (7.58)
j=1
146 7 Stabilization of Stochastic Equations

where by (7.41), (7.47), (7.39), and (7.57), we get that there exists C > 0 such that
√ √
N 1 −cN 2 t N
= C N 4−2α e−cN t ,
2
|ri j (t)| ≤ AZ (t) N ≤ CN 3
1 e α
γ1 − μ N α−
N 2 N
(7.59)
∀t ≥ 0, ∀i, j = 1, . . . , N .
We move on to the modes z j , j > N . Scalar multiplying equation (7.51) by
ϕ j , j > N , we get
(
2
N
d
z j = −μ j z j − u i , t > 0.
dt L i=1

Then the variation of constants formula gives


( 
  2 t 
N
−μ j t
z j (t) =e zo , ϕ j − e−μ j (t−s) u i (s)ds, t ≥ 0. (7.60)
L 0 i=1

We write ( N 
j 2  t −μ j (t−s)
wi (t) := − e rki (s)ds.
L k=1 0

Involving (7.58), the above relation (7.60) reads as

  N
z j (t) = e−μ j t z o , ϕ j +
j
wi (t)
z o , ϕi , t ≥ 0. (7.61)
i=1

Simple computations, taking advantage of the estimates (7.59), yield

1
N 5−2α e−cN t , ∀t ≥ 0,
j 2
|wi (t)| ≤ C (7.62)
μ j − cN 2

for all i = 1, 2, . . . , N and j = N + 1, N + 2, . . .. Above, we have used the fact


that c was chosen such that c < πL 2 .
2

We may now conclude by (7.55) and (7.61) that the solution z to (7.51) may be
written as  L
z(t, x) = p(t, x, ξ )z o (ξ )dξ,
0

where the kernel p is given as

p(t, x, ξ ) := p1 (t, x, ξ ) + p2 (t, x, ξ ) + p3 (t, x, ξ ), (7.63)

for t ≥ 0, x, ξ ∈ (0, L), where


7.2 Stabilization of the Stochastic Heat Equation on a Rod 147
⎛ ⎞

N N
p1 (t, x, ξ ) := ⎝ q ji (t)ϕ j (x)⎠ ϕi (ξ ),
i=1 j=1


p2 (t, x, ξ ) := e−μi t ϕi (x)ϕi (ξ ), and
i=N +1
⎛ ⎞

N ∞

p3 (t, x, ξ ) := ⎝ wi (t)ϕ j (x)⎠ ϕi (ξ ).
j

i=1 j=N +1

Now we want to estimate the quantity


 ∞  L
e N t p 2 (t, x, ξ )dξ dt.
0 0

Taking into account the form of p, by Parseval’s identity, we have that


 L
p 2 (t, x, ξ )dξ (7.64)
0
⎛ ⎞2

N N ∞
 ∞

= ⎝ q ji (t)ϕ j (x) + wi (t)ϕ j (x)⎠ +
j
e−2μi t ϕi2 (x)
i=1 j=1 j=N +1 i=N +1
⎛ ⎞2 ⎞2 ⎛

N 
N 
N ∞
 ∞

≤2 ⎝ q ji (t)ϕ j (x)⎠ + 2 ⎝ wi (t)ϕ j (x)⎠ +
j
e−2μi t ϕi2 (x).
i=1 j=1 i=1 j=N +1 i=N +1

Thus  
∞ L
e N t p 2 (t, x, ξ )dξ dt ≤ I1 + I2 + I3 , (7.65)
0 0

where ⎛ ⎞2
 ∞ 
N 
N
I1 :=2 eNt ⎝ q ji (t)ϕ j (x)⎠ dt
0 i=1 j=1
 ∞
≤C e N t N 3 max |q ji (t)|2 dt
0 i, j=1,N
(7.66)
(using (7.57) and taking N large enough)
 ∞
1
e N t 2α−1 e−2cN t dt
2
≤ C N3
0 N
1
≤ C 2α−2 , ∀x ∈ (0, L).
N
148 7 Stabilization of Stochastic Equations

Next, ⎛ ⎞2
 ∞ 
N ∞

I2 := 2 eNt ⎝ wi (t)ϕ j (x)⎠ dt
j

0 i=1 j=N +1

(using (7.62))
⎡ ⎤2
 ∞ ∞
 !
1 (7.67)
eNt N ⎣ N 5−2α e−cN t ⎦ dt
2
≤C
0 j=N +1
μ j − cN 2

⎛ ⎞2
∞  ∞
1
= C N 11−4α ⎝ ⎠ e N t e−2cN t dt.
2

j=N +1
μ j − cN 2
0

Note that

 ∞
  N  N +1
1 1 1 1 1
≤ ≤C dx + dx + · · ·
j=N +1
μ j − cN 2 π2
L2
−c j=N +1
j2 N −1 x2 N x2
 ∞
1 1
=C dx = C .
N −1 x2 N −1

It follows by (7.67) that

1 1 1
I2 ≤ C N 11−4α ≤ C 4α−7 , (7.68)
N 2cN − N
2 2 N

for N large enough.


Finally, notice that the kernel p2 has a similar structure to that of the heat kernel
p defined by (7.29), except that in the present case, the first eigenvalue of the infinite
summation is μ N +1 = πL 2 N 2 > 0. Hence we can bound the kernel p2 similarly as in
2

(7.30)–(7.32), by taking η = N . In this way, we obtain that


  ∞
 
∞  ∞
1
−2μi t
I3 := e Nt
e ϕi2 (x) dt = e N t p2 (2t, x, x)dt ≤ C √ ,
0 i=N +1 0 N
(7.69)
for some C > 0.
Again considering all the above estimates, namely (7.65), (7.66), (7.68), and
(7.69), we deduce that
 ∞  L
1
e N t p 2 (t, x, ξ )dξ dt ≤ C , ∀x ∈ (0, L), (7.70)
0 0 Nθ

where θ > 0 is defined as (recall that α was chosen such that α > 47 )
7.2 Stabilization of the Stochastic Heat Equation on a Rod 149
 
1
θ := min 2α − 2; 4α − 7; , (7.71)
2

thereby completing the proof. 

Proof of Theorem 7.4 (continued). Next, the idea is to get rid of the Brownian motion.
This is usually done by taking the second moment into the equation and using Itô’s
isometry. Before doing that, let us introduce

Y 2,N := essupt>0 essupx∈(0,L) e N t E|Y (t, x)|2 .

Then, by virtue of Lemma 7.1, we write the solution of (7.71) in a mild formulation
via the kernel p, i.e.,
 L
Y (t, x) = p(t, x, ξ )Yo (ξ )dξ
0
 t L (7.72)
+λ p(t − s, x, ξ )σ (x, Y (s, ξ )))β(ds, dξ ).
0 0

Notice that the kernel p, defined by (7.63), has similar structure, with similar prop-
erties, to the classical heat kernel. Consequently, one can easily argue as in [125, Ex.
3.4] or [44, Theorem 13] in order to deduce the unique existence of a solution Y to
(7.72).
Taking the second moment in (7.72) and using Itô’s isometry and relation (7.27),
we obtain that
 L
E|Y (t, x)|2 ≤ 2L p2 (t, x, ξ )yo (ξ )dξ
0
 t L
+ 2Lλ2 L 2σ p2 (t − s, x, ξ )E|Y (s, ξ )|2 dξ ds
0 0
(using (7.52))
 t L
≤ Ce−N t + 2Lλ2 L 2σ p2 (t − s, x, ξ )E|Y (s, ξ )|2 dξ ds
0 0
 t L
= Ce−N t + 2Lλ2 L 2σ e N (t−s) p2 (t − s, x, ξ )e−N (t−s) E|Y (s, ξ )|2 dξ ds
0 0
 t L
≤ Ce−N t + Y 2,N 2Lλ2 L 2σ e−N t e N (t−s) p2 (t − s, x, ξ )dξ ds
0 0
(again using relation (7.52) )
!
1
≤ Ce−N t 1 + Lλ2 L 2σ Y 2,N .

The above relation implies that

1
Y 2,N ≤ C + Cλ2 L L 2σ Y 2,N .

150 7 Stabilization of Stochastic Equations

Therefore, if we choose N large enough that Cλ2 L L 2σ N1θ < 1 and N > ρ, we obtain
that
Y 2,ρ ≤ Y 2,N < ∞,

thereby completing the proof. 

7.3 Stabilization of the Stochastic Burgers Equation

Here we propose to further develop the ideas from the previous section regarding
the equivalent rewrite of the solution in an integral form. We will consider again a
nonlinear stochastic equation, namely the stochastic Burgers equation, and stabilize
its null solution from the boundary. This equation reads as


⎪ dY (t, x) = νYx x (t, x)dt + b(t, x)Y (t, x)Yx (t, x)dt +θ Y (t, x)dβ(t),

t > 0, x ∈ (0, L),

⎪ Y x (t, 0) = v(t), Y x (t, L) = 0, t > 0,

Y (0, x) = yo (x), x ∈ (0, L).
(7.73)
It is clear that the second-order nonlinearity Y Yx significantly complicates the con-
text, which is left outside by the previous approach, applied to the stochastic heat
equation. In any case, this time, the stochastic perturbation is very simple, θ Y dβ,
with θ a positive constant. This allows one to do a rescaling in order to transform
(7.73) into a deterministic random PDE. Concerning the obtained random determin-
istic equation, one does not even know whether it well posed. So in fact, we have
to solve three problems at once, namely existence, uniqueness, and stabilization.
This can be achieved via a fixed-point argument. More exactly, we will consider an
auxiliary functional space, namely
 
ρ
Z = y = y(t, x) : sup e (y(t) + t yx (t)) < ∞ ,
Nt
t>0

for some positive ρ, write again the solution in a mild formulation, then show that
the corresponding nonlinear functional leaves the ball
 
Br (0) = e N t (y(t) + t ρ yx (t)) ≤ r

invariant and that it is a contraction on it, for r and initial data small enough. From
this, via the contraction mapping theorem, the three problems are solved.
Let us give some details about the functions and parameters that constitute (7.73).
The function b is such that there exist Cb > 0 and 0 ≤ m 1 ≤ m 2 ≤ · · · ≤ m S , for
some S ∈ N, for which
7.3 Stabilization of the Stochastic Burgers Equation 151
 S 

sup |b(t, x)| ≤ Cb t mk
+ 1 , ∀t > 0. (7.74)
x∈(0,L) k=1

Moreover, we assume that m S and θ are such that θ can be split as

1 2 1
θ = m S + + θ1 , (7.75)
2 4
where θ1 > 0.
In (7.73), consider the substitution

Y (t) = Γ (t)y(t), t ∈ [0, ∞), (7.76)

where Γ (t) : L 2 (0, L) → L 2 (0, L) is the linear continuous operator defined by the
equations
dΓ (t) = θ Γ (t)dβ(t), t ≥ 0, Γ (0) = 1,

which can be equivalently expressed as

Γ (t) = eθβ(t)− 2 θ , t ≥ 0.
t 2
(7.77)

By the transformation (7.76), Eq. (7.73) reduces to the random parabolic equation

⎪ ∂

⎪ y(t) = νΓ −1 (t)(Γ (t)y(t))x x + Γ −1 (t)b(t)(Γ (t)y(t)) (Γ (t)y(t))x ,
⎨ ∂t
t ∈ [0, ∞),

⎪ yx (t, 0) = Γ −1 (t)v(t), yx (t, 1) = 0, t ∈ [0, ∞),


y(0) = yo .
(7.78)
Indeed, if y is a regular solution to (7.78) (for instance absolutely continuous in t)
that is progressively measurable in (t, ω) in the probability space {Ω, P, F , Ft }
and  T
E y(t)2H 2 (O ) dt < ∞,
0

then by Itô’s formula in (0, T ) × Ω × O, we have

∂y
dY = ydΓ (t) + Γ (t) dt in (0, T ) × O.
∂t

Then we obtain for y the random Eq. (7.78), as claimed. On the other hand, an Ft -
adapted solution t → y(t) to Eq. (7.78) leads via transformation (7.76) to a solution
Y to (7.73) in the sense of the above definition. We equivalently write (7.78) as
152 7 Stabilization of Stochastic Equations

⎪ ∂

⎪ y(t) = νθ [βx x (t)y(t) + (βx (t))2 y(t) + 2βx (t)yx (t) + yx x (t)]

⎪ ∂t

+ b(t)Γ (t)y(t)[θβx (t)y(t) + yx (t)], t ∈ [0, ∞), (7.79)



⎪ yx (t, 0) = Γ −1 (t)v(t), yx (t, 1) = 0, t ∈ [0, ∞),


y(0) = yo .

In order to simplify the problem, we assume that the Brownian motion β is only
time-dependent. Hence it follows by (7.79) that in fact, y satisfies the equation
⎧∂
⎨ ∂t y(t) = νyx x (t) + Γ (t)b(t)y(t)yx (t), t ∈ [0, ∞),
y (t, 0) = u(t) := Γ −1 (t)v(t), yx (t, 1) = 0, t ∈ [0, ∞), (7.80)
⎩ x
y(0) = yo .

Of course, one may think to consider the more general case of β = β(t, x) and
apply the argument that follows to (7.79). Unfortunately, this is not a trivial task.
We will explain later what other difficulties appear in this general case and what
additional hypotheses should be added.
Below, we will frequently use the following obvious but useful inequality:

e−at ≤ t −a , ∀t > 0, a ≥ 0.

Before moving on, let us see that by the law of the iterated logarithm, arguing as
in Lemma 3.4 in [21], it follows that there exists a constant CΓ > 0 such that

Γ (t) = eθβ(t)−θ1 t e−(m S + 4 )t ≤ CΓ e−(m S + 4 )t , ∀t > 0, P-a.s.,


1 1
(7.81)

where we have used that 21 θ 2 = m S + 1


4
+ θ1 . Then by (7.74), we have that
 

S
m k −(m S + 41 )t −(m S + 41 )t
Γ (t) sup |b(t, x)| ≤ CΓ Cb t e +e
x∈(0,L) k=1
(since 0 ≤ m 1 ≤ m 2 ≤ · · · ≤ m S )
 S 

m k −(m k + 41 )t − 14 t
≤C t e +e (7.82)
k=1
 

S
m k −(m k + 41 ) − 41
≤C t t +t
k=1

≤ (S + 1)Ct − 4 , ∀t > 0.
1

Next, we recall the Neumann–Laplace operator A, and its spectrum {μk }∞ k=1
and its eigenfunction system {ϕk }∞ k=1 ; the Gram matrix B, the matrices Λk and
Bk , k = 1, . . . , N , and A; the Neumann operators Dγk , k = 1, 2, . . . , N , introduced
in relations (7.36)–(7.43) above, respectively. Also, we recall that based on them, we
7.3 Stabilization of the Stochastic Burgers Equation 153

have introduced the feedback laws


⎛ ⎞ ⎛ ϕ1 (0) ⎞

y, ϕ1 γk −μ1 

y, ϕ2 ⎟ ⎜ ϕ (0) ⎟
⎜ γk2−μ2 ⎟

u k (y) := A ⎝ ⎟ ,⎜ ⎟ , (7.83)
......... ⎠ ⎝ ........ ⎠

y, ϕ N ϕ N (0)
γk −μ N N

and u as
u(y) := u 1 (y) + · · · + u N (y). (7.84)

Next, arguing similarly as in (7.49), we equivalently rewrite (7.80) as an internal


control-type problem:


N 
N
 
∂t y(t) = − Ay(t) + u i (y(t))(A + γi )Dγi − 2 μ j u i (y(t))Dγi , ϕ j ϕ j
i=1 i, j=1

+ b(t)Γ (t)y(t)yx (t); y(0) = yo .


(7.85)
Finally, as in Lemma 7.1, one may show that the solution z of


N 
N
 
∂t z(t) = −Az(t) + u i (z(t))(A + γi )Dγi − 2 μ j u i (z(s))Dγi , ϕ j ϕ j ,
i=1 i, j=1

z(0) = z o ,
(7.86)
can be written in a mild formulation as
 L
z(t, x) = p(t, x, ξ )z o (ξ )dξ,
0

where
p(t, x, ξ ) := p1 (t, x, ξ ) + p2 (t, x, ξ ) + p3 (t, x, ξ ), (7.87)

for t ≥ 0, x, ξ ∈ (0, L). Here


⎛ ⎞

N N
p1 (t, x, ξ ) := ⎝ q ji (t)ϕ j (x)⎠ ϕi (ξ ),
i=1 j=1


p2 (t, x, ξ ) := e−μi t ϕi (x)ϕi (ξ ), and
i=N +1
⎛ ⎞

N ∞

p3 (t, x, ξ ) := ⎝ wi (t)ϕ j (x)⎠ ϕi (ξ ).
j

i=1 j=N +1
154 7 Stabilization of Stochastic Equations

j
The quantities q ji (t) and wi (t) involved in the definition of p satisfy the following
estimates: for some Cq > 0, depending on N ,

|q ji (t)| ≤ Cq e−cN t , ∀t ≥ 0,
2
(7.88)

for all i, j = 1, 2, . . . , N , and for some Cw > 0, depending on N ,

1
e−cN t , ∀t ≥ 0,
j 2
|wi (t)| ≤ Cw (7.89)
μ j − cN 2

for all i = 1, 2, . . . , N and j = N + 1, N + 2, . . .. Moreover, for all z o ∈ L 2 (0, L),


we have that
⎧ % N 2 ⎫ 21
⎨ 
∞  ⎬
≤ Ce−cN
j 2
μj wi (t)
z o , ϕi t
sup |
z o , ϕl |, ∀t ≥ 0.
⎩ ⎭ l=1,2,...,N
j=N +1 i=1
(7.90)
We will not comment on (7.87)–(7.89) since they can be directly obtained from
the proof of Lemma 7.1. In any case, relation (7.90) is a new property of the kernel
p, and we prove it below. Note that the kernel p is the same as in the framework
of the stochastic heat equation from the previous section. Yet we succeed in finding
new properties of it, namely relation (7.90). This suggests that the kernel p may have
further important properties that can be developed and used for stabilization of other
more complex problems. In other words, we are sure that in the future, one may
solve the stabilization problem of other diverse complicated stochastic equations
using these proportional-type feedback laws.
The need for relation (7.90) becomes clearer if we recall that we are now dealing
1
0 yx as1well. Therefore, estimates of the H -norm will be needed.
with the derivative

It is known that √
1
μj
ϕj forms a basis in H 1 ; hence it is easy to see that relation
j=1
(7.90) is an estimate of an H 1 -norm. Now let us prove it. Setting


N 
N
 
B(u)(t) := u i (z(t))(A + γi )Dγi − 2 μ j u i (z(s))Dγi , ϕ j ϕ j ,
i=1 i, j=1

and  
(B(u)(t)) j := B(u)(t), ϕ j , j = 1, 2, . . . ,

we have, via (7.60)–(7.61), that


N  t
e−μ j (t−s) (B(u)(s)) j ds,
j
wi (t)
z o , ϕi =
i=1 0
7.3 Stabilization of the Stochastic Burgers Equation 155

which yields that


⎧ % N 2 ⎫ 21 ⎧ ⎫1
⎨ 
∞  ⎬ ⎨ 
∞ 2 t 32 ⎬ 2
j −μ j (t−s)
μj wi (t)
z o , ϕi = μj e (B (u)(s)) j ds
⎩ ⎭ ⎩ 0 ⎭
j=N +1 i=1 j=N +1
⎛ 4 5 ⎞
⎝since 1
√ ϕj is an orthogonal basis in H 1 (0, L)⎠
μj
j
 t
−μ N +1 (t−s)
≤ e  [B (u)(s)]x ds
0
 t
e−μ N +1 (t−s) e−cN s ds |
z o , ϕl | ≤ Ce−cN
2 2t
≤C sup sup |
z o , ϕl |, ∀t ≥ 0,
0 l=1,2,...,N l=1,2,...,N

where we have used the form of B(u)) and the fact that

|u i (t)| ≤ Ce−cN
2
t
sup |
z o , ϕl |, ∀t ≥ 0, i = 1, . . . , N .
l=1,2,...,N

That is exactly what we have claimed.


The goal of the present section is stated in the theorem below.

Theorem 7.5 Let η > 0, depending on ω and sufficiently small, and let N ∈ N be
sufficiently large. Then for each yo ∈ L 2 (0, L) with y0  < η, there exists a unique
solution y to the random deterministic Eq. (7.85) belonging to the space Y ,
   
1
Y := y ∈ Cb ((0, ∞), H (0, L)) : sup e (y(t) + t yx (t)) < ∞ .
Nt 2 1
t≥0

In particular, the stochastic Burgers equation



⎪ dY (t, x) = νYx x (t, x)dt + b(t, x)Y (t, x)Yx (t, x)dt + θ Y (t, x)dβ(t),



⎪ t > 0, x ∈ (0, L),



⎪ ⎛ ⎞ ⎛ ϕ1 (0) ⎞



⎪ 
Y (t), ϕ1 γk −μ1 
⎨ N

Y (t), ϕ2 ⎟ ⎜ ⎜
ϕ2 (0) ⎟

Yx (t, 0) = A⎜ ⎟
⎝ ............ ⎠ , ⎜
γk −μ2 ⎟ , (7.91)

⎪ ⎝ ........ ⎠

⎪ k=1


Y (t), ϕ N ϕ N (0)

⎪ γk −μ N N



⎪ Y (t, L) = 0, t > 0,


x
Y (0, x) = yo (x), x ∈ (0, L),

has a unique solution Y = Γ y that pathwise almost surely is exponentially decaying


in the L 2 -norm.
156 7 Stabilization of Stochastic Equations

Proof The norm in Y is given as


 1

|y|Y := sup e N t (y(t) + t 2 yx (t)) .
t≥0

It is clear that for all y ∈ Y , we have

e N t y(t) ≤ |y|Y and e N t yx (t) ≤ t − 2 |y|Y , ∀t > 0.


1
(7.92)

We set Br (0) := {y ∈ Y : |y|Y ≤ r } .


Based on what we have discussed above, we may rewrite (7.85) in a mild formu-
lation as
 L  t L
y(t, x) = p(t, x, ξ )yo (ξ )dξ + p(t − s, x, ξ )b(s, ξ )Γ (s)y(s, ξ )yξ (s, ξ )dξ ds,
0 0 0

where p is defined in (7.87). Thus the existence of a solution y is equivalent to the


fact that the map G : Y → Y , defined as
 L
G y := p(t, x, ξ )y(0, ξ )dξ + F y,
0

where
 t L
(F y) (t) := p(t − s, x, ξ )b(s, ξ )Γ (s)y(s, ξ )yξ (s, ξ )dξ ds,
0 0

has a fixed point.


In what follows, we aim to show that G is a contraction on Br (0) that maps the
ball Br (0) into itself, for r > 0 properly chosen. Then via the contraction mappings
theorem, we will deduce that G has a unique fixed point y ∈ Br (0) that is in fact the
mild solution to the Eq. (7.85). Then one easily arrives at the conclusion claimed by
the theorem.
We will need to estimate the norm | · |Y of G y. So in particular, we will need to
estimate the | · |Y -norm of F y, for y ∈ Y . We begin with the L 2 -norm of F y. We
propose to use Parseval’s identity, so in order to do this, based on the kernel’s form
(7.87), we conveniently rewrite the term F y as
 t
F y(t) = (F1 (y(s)) + F2 (y(s)) + F3 (y(s))) ds, (7.93)
0
7.3 Stabilization of the Stochastic Burgers Equation 157

where

F1 (y)(t, s, x)
% N  
 N  L
:= q ji (t − s)Γ (s) b(s, ξ )y(s, ξ )yξ (s, ξ )ϕi (ξ )dξ ϕ j (x),
j=1 i=1 0

F2 (y)(t, s, x)
∞ 2  L 3
−μ j (t−s)
:= e Γ (s) b(s, ξ )y(s, ξ )yξ (s, ξ )ϕ j (ξ )dξ ϕ j (x),
j=N +1 0

F3 (y)(t, s, x)

% N  
  j L
:= wi (t − s)Γ (s) b(s, ξ )y(s, ξ )yξ (s, ξ )ϕi (ξ )dξ ϕ j (x).
j=N +1 i=1 0
(7.94)
It follows via Parseval’s identity that

F1 (y)
⎧ % N 2 ⎫ 21
⎨ N   L ⎬
= q ji (t − s)Γ (s) b(s, ξ )y(s, ξ )yξ (s, ξ )ϕi (ξ )dξ
⎩ 0 ⎭
j=1 i=1

(using the uniform boundedness of eigenfunctions and (7.82))


N  L
− 41
≤C |q ji (t − s)|s |y(s, ξ )||yξ (s, ξ )|dξ
i, j=1 0
(7.95)
(involving relation (7.88) and Schwarz’s inequality)
≤ Ce−cN (t−s) − 41
2
s y(s)yξ (s)
−2N t (−cN 2 +2N + 14 )(t−s) − 41 (t−s) − 41 N s
= Ce e e y(s)e N s yξ (s)
e s
1
(by (7.92) and the fact that − cN 2 + 2N + < 0, N large)
4
≤ Ce−N t (t − s)− 4 s − 4 s − 2 |y|2Y
1 1 1

= Ce−N t (t − s)− 4 s − 4 |y|2Y , ∀0 < s < t.


1 3

We continue with

F2 (y)
⎧ ⎫1
⎨  ∞ 2  L 32 ⎬ 2
= e−μ j (t−s) Γ (s) b(s, ξ )y(s, ξ )yξ (s, ξ )ϕ j (ξ )dξ
⎩ 0 ⎭
j=N +1
158 7 Stabilization of Stochastic Equations

⎧ ⎫1
∞ 2
⎨   L 32 ⎬ 2
= e−2N t e−(μ j −2N )(t−s) Γ (s)e2N s b(s, ξ )y(s, ξ )yξ (s, ξ )ϕ j (ξ )dξ
⎩ 0 ⎭
j=N +1
⎧⎡
⎨  ∞  L
# −(μ j −2N )(t−s) N s $
≤ Ce −N t ⎣ e e y(s, ξ )ϕ j (ξ )
⎩ 0 j=N +1

# $ 62 1 21
× Γ (s)b(s, ξ )e N s yξ (s, ξ ) dξ
(by Schwarz’s inequality)

⎨ ∞  L
≤ Ce−N t e−2(μ j −2N )(t−s) ϕ 2j (ξ )e2N s y 2 (s, ξ )dξ ×
⎩ 0j=N +1
 L  21
Γ (s)b 2 2
(s, ξ )e2N s yξ2 (s, ξ )dξ
0
⎧ ⎡ ⎤
⎨ L ∞

= Ce−N t ⎣ e−2(μ j −2N )(t−s) ϕ 2j (ξ )⎦ e2N s y 2 (s, ξ )dξ ×
⎩ 0
j=N +1
 L  21
Γ (s)b
2 2
(s, ξ )e2N s yξ2 (s, ξ )dξ
0
(use inequality between the heat and the Gaussian kernel (7.30))
 L  L  21
−N t − 21 2N s 2
≤ Ce (t − s) e y (s, ξ )dξ Γ (s)b (s, ξ )e yξ (s, ξ )dξ
2 2 2N s 2
0 0
(by (7.82))
≤ Ce−N t (t − s)− 4 s − 4 e N s y(s)e N s yξ (s)
1 1

(using (7.92))
≤ Ce−N t (t − s)− 4 s − 4 |y|2Y , ∀0 < s < t.
1 3
(7.96)

Finally, we deal with

F3 (y)
⎧ % N 2 ⎫ 21
⎨  ∞  j  L ⎬
= wi (t − s)Γ (s) b(s, ξ )y(s, ξ )yξ (s, ξ )ϕi (ξ )dξ
⎩ 0 ⎭
j=N +1 i=1

(by (7.82), (7.89) and the uniform boundedness of the eigenfunctions)


⎛ ⎞
∞
1
≤C⎝ ⎠ e−cN 2 (t−s) s − 14 y(s)yξ (s)
j=N +1
μ j − cN 2
7.3 Stabilization of the Stochastic Burgers Equation 159

(the series converge; see (7.67)–(7.68))


≤ Ce−2N t e(−cN +2N + 41 )(t−s) − 14 (t−s) − 14 N s
2
e y(s)e N s yξ (s)
e s
1
(by (7.92) and the fact that − cN 2 + 2N + < 0 for N large enough)
4
≤ Ce−N t (t − s)− 4 s − 4 |y|2Y , ∀0 < s < t.
1 3
(7.97)

We conclude that (7.95)–(7.97) imply that


 t !
−N t − 43 − 41 −N t 1 3
F (y)(t) ≤ Ce s (t − s) ds|y|2Y =e CB , |y|2Y , (7.98)
0 4 4

∀t ≥ 0, where B(x, y) is the classical beta function.


By the exponential semigroup property, we have as well that
 
 L 
 p(t, x, ξ )yo (ξ )dξ  −N t
  ≤ Ce yo . (7.99)
0

We go on with the estimates in the H 1 -norm. Using the above notation, we have

(F1 (y))x 
⎧ % N 2 ⎫ 21
⎨ N   L ⎬
= μj q ji (t − s)Γ (s) b(s, ξ )y(s, ξ )yξ (s, ξ )ϕ j (ξ )dξ
⎩ 0 ⎭
j=1 i=1

(arguing as in (7.95))
≤ Ce−cN (t−s) − 41
2
s y(s)yξ (s)
= Ce−2N t e(−cN +2N + 34 )(t−s) − 43 (t−s) − 14 N s
2
e y(s)e N s yξ (s)
e s
3
(by (7.92) and the fact that − cN 2 + 2N + < 0 for N large enough)
4
≤ Ce−N t (t − s)− 4 s − 4 |y|2Y , ∀0 < s < t.
3 3

(7.100)
Next,

(F2 (y))x 
⎧ ⎫1
⎨  ∞ 2  L 32 ⎬ 2
= μ j e−μ j (t−s) Γ (s) b(s, ξ )y(s, ξ )yξ (s, ξ )ϕ j (ξ )dξ
⎩ 0 ⎭
j=N +1

= (t − s)− 2
1
160 7 Stabilization of Stochastic Equations

⎧ ⎫1
∞ 2
⎨   L 32 ⎬ 2
1
(t − s) 2 μ j2 e−μ j (t−s) Γ (s)
1
× b(s, ξ )y(s, ξ )yξ (s, ξ )ϕ j (ξ )dξ
⎩ 0 ⎭
j=N +1

(using the obvious inequality [(t − s)μ j ] 2 ≤ e 2 μ j (t−s) )


1 1

≤ (t − s)− 2
1

⎧ ⎫1
⎨  ∞ 2  L 32 ⎬ 2
e− 2 μ j (t−s) Γ (s)
1
× b(s, ξ )y(s, ξ )yξ (s, ξ )ϕ j (ξ )dξ
⎩ 0 ⎭
j=N +1

(arguing as in (7.96))
≤ C(t − s)− 2 e−N t (t − s)− 4 s − 4 |y|2Y
1 1 3

= Ce−N t (t − s)− 4 s − 4 |y|2Y , ∀0 < s < t.


3 3
(7.101)

Finally,

(F3 (y))x 
⎧ % N 2 ⎫ 21
⎨  ∞  j  L ⎬
= μj wi (t − s)Γ (s) b(s, ξ )y(s, ξ )yξ (s, ξ )ϕi (ξ )dξ
⎩ 0 ⎭
j=N +1 i=1

(by (7.90))
" "
≤ Ce−cN
2
(t−s)
sup " Γ (s)b(s, ·)y(s, ·)yξ (s, ·), ϕl (·) "
l=1,2,...,N

(with similar arguments as before)


≤ Ce−N t (t − s)− 4 s − 4 |y|2Y , ∀0 < s < t.
3 3

(7.102)
Therefore, (7.100)–(7.102) imply that
 t !
−N t − 43 − 34 −N t − 21 1 1
(F (y)(t))x  ≤ Ce (t − s) s ds|y|2Y =e t CB , |y|2Y ,
0 4 4
(7.103)
∀t > 0.
Heading toward the end of the proof, we note that

 % !2  21

1
L
∂p
e t Nt 2 (t, x, ξ ) dξ dt < ∞,
0 0 ∂x

since the presence of the μ j in the infinite sum is controlled as in (7.101) by the
1
presence of t 2 . Consequently, via the semigroup property, we deduce that
7.3 Stabilization of the Stochastic Burgers Equation 161
 
 L
∂p 
 (t, x, ξ )dξ  −N t − 21
 ∂x  ≤ Ce t yo . (7.104)
0

Now, gathering together the relations (7.97), (7.99), (7.103), and (7.104), we arrive
at the fact that there exists a constant C1 > 0 such that

|G y|Y ≤ C1 (yo  + |y|2Y ), (7.105)

for all y ∈ Y .
It is easily seen that arguments similar to those above lead as well to

|G y − G y|Y ≤ C2 (|y|Y + |y|Y )|y − y|Y , ∀y, y ∈ Y , (7.106)

for some constant C2 > 0.


Recall that yo  < η. It then follows that if η is small enough that
 
1 1
η < min , ,
4C12 4C1 C2

then taking r = 2C1 η, we get from (7.106) that G is a contraction and by (7.105)
that G maps the ball Br (0) into itself, as claimed.
Note that C1 , C2 depend on ω, since in the above, ω-estimates for Γ were used
(CΓ is ω-dependent). Thus η should depend on ω too. This means that in fact, yo
must depend on ω. 

Remark 7.1 Let us return to Eq. (7.79). If one assumes that β depends on the space
variable as well, then in trying to apply the above approach, one has to estimate
terms like βx . The law of the iterated logarithm should work again. In any case, this
time, we will no longer have |y|2Y on the right-hand side, because of the terms like
βx y, βx yx . This implies that in applying the fixed-point argument, at some point one
should find an r > 0 sufficiently small that for some constants c1 , c2 ,

c1 r + c2 r 2 < r.

This is possible if and only if c1 < 1, but no one can guarantee this. In the above
case, we had
c2 r 2 < r,

and this is possible, provided r is sufficiently close to zero. Hence, we believe that
the above argument surely fails to work for the case of a space-dependent Brownian
motion β, unless some additional hypothesis on the coefficients of the equation, such
as small enough, are imposed.
162 7 Stabilization of Stochastic Equations

7.4 Stabilization by Discrete-Time Feedback Control

In the previous sections we discussed boundary stabilization by continuous-time (reg-


ular) actuators, for different types of stochastic equations. Such a continuous-time
feedback control requires continuous observation of the state y(t), for all time t ≥ 0.
However, it is more realistic and costs less in practice if the state is observed only
at discrete times, say 0, τ, 2τ, . . . , where τ > 0 is the duration between two consec-
utive observations. Accordingly, the feedback control should be designed based on
these7 discrete-time
6 observations,
7 6 namely the feedback control should be of the form
ũ(y( τt τ ), t), where τt is the integer part of τt . Thus, the idea of this section is
to design a discrete-time proportional-type feedback, plug it into the equations, and
show that it still ensures the stability of the system. We will use as a guide the work
of Mao [92], which treats the case of stochastic differential equations.
The equation under study reads as

⎨ d ỹ(t) = ỹx x (t)dt + λσ ( ỹ(t))dβ, t > 0, x ∈ (0, L),
ỹx (t, 0) = ũ(t), ỹx (t, L) = 0, t ≥ 0, (7.107)

ỹ(0) = yo .

Recall that
 we have  denoted by A the Neumann–Laplace operator, see (7.36),
and by μ j j and ϕ j j its system of eigenvalues and system of eigenfunctions,
respectively. Let N ∈ N be sufficiently large that Theorem 7.4 holds, and assume
that this time, σ is a Lipschitz function of ỹ that depends only on the first N modes
of ỹ, namely
ỹ, ϕ1 , . . . ,
ỹ, ϕ N . Using the notation in (7.37)–(7.42), we introduce
the feedback form ũ as

ũ( ỹ) := ũ 1 ( ỹ) + ũ 2 ( ỹ) + · · · + ũ N ( ỹ), (7.108)

where  ⎞ ⎛ ϕ1 (0) ⎞
⎛  #7 t 6 $
 ỹ τ τ , ϕ1 γk −μ1 
 #7 6 $ 
⎜ ỹ t τ , ϕ2 ⎟ ⎜ ⎜
ϕ2 (0) ⎟

ũ k ( ỹ) := A ⎜ τ ⎟
⎝ .................... ⎠ , ⎜
γ k −μ 2 ⎟ . (7.109)
⎝ ........ ⎠
 #7 t 6 $ 
ỹ τ τ , ϕ N ϕ N (0)
γk −μ N N

We observe that the feedback control ũ is designed based on the discrete-time state
observations ỹ(0), ỹ(τ ), ỹ(2τ ), . . ..
Next, we lift the boundary conditions into Eq. (7.107) by arguing similarly as in
(2.27)–(2.29), obtaining thereby the internal control-type problem
7.4 Stabilization by Discrete-Time Feedback Control 163
4 2 3 !! 5

N
t
d ỹ(t) = −A ỹ(t) + ũ i ỹ τ (A + γi )Dγi dt
i=1
τ
⎧ ⎫
⎨ 
N 8 2 3 !! 9 ⎬ (7.110)
t
+ −2 μ j ũ i ỹ τ )Dγi , ϕ j ϕ j dt
⎩ τ ⎭
i, j=1

+ λσ ( ỹ(t))dβ; ỹ(0) = yo .

We note that Eq. (7.110) is in fact a stochastic PDE with delays, with a bounded
variable delay. Indeed, if we define the bounded variable ζ : [0, ∞) → [0, τ ] by

ζ (t) := t − kτ for kτ ≤ t < (k + 1)τ,

for k ∈ N, then Eq. (7.110) can be equivalently rewritten as


4 5

N
d ỹ(t) = −A ỹ(t) + ũ i ( ỹ (t − ζ (t))) (A + γi )Dγi dt
i=1
⎧ ⎫
⎨ 
N
  ⎬ (7.111)
+ −2 μ j ũ i ( ỹ (t − ζ (t))) Dγi , ϕ j ϕ j dt
⎩ ⎭
i, j=1

+ λσ ( ỹ(t))dβ; ỹ(0) = yo .

Hence, the classical existence theory for delay SPDEs can be applied in order to
ensure that (7.111) (and implicitly (7.110)) is well posed.
To address the mean-square exponential stability of the controlled Eq. (7.110),
we will relate it to its continuous-time controlled version (also refereed to as the
auxiliary problem)
4

N
dy(t) = −Ay(t) + u i (y(t))(A + γi )Dγi
i=1


N
  ⎬
−2 μ j u i (y(s))Dγi , ϕ j ϕ j dt + λσ (y(t))dβ; y(0) = yo ,

i, j=1
(7.112)
which is mean exponential stable by virtue of Theorem 7.4. More precisely, we have
obtained that

− ∞ < lim sup log E|y(t, x)|2 < −ρ, ∀x ∈ (0, L). (7.113)
t→∞

We aim to prove a relation similar to (7.113) for the solution ỹ to (7.110). To this end,
we will compare ỹ with y, and show that they are close enough (in a proper sense),
164 7 Stabilization of Stochastic Equations

provided that τ is sufficiently small (namely, we make state observations frequently


enough).
As in (7.54) (see also (7.13) and (2.38)), we decompose both (7.110) and (7.112)
into an eigenbasis, and get that the first N modes satisfy
4 2 3 !  N 2 3 !5
t t
d Y˜ = −ΛY˜ (t) + ΛY˜ τ + Bk AY˜ τ dt
τ k=1
τ (7.114)
 
+ λσ̃ Y˜ (t) dβ(t)

and
4 5

N
dY = −ΛY (t) + ΛY (t) + Bk AY (t) dt + λσ̃ (Y (t)) dβ(t), (7.115)
k=1

respectively. Here
⎛ ⎞ ⎛ ⎞

ỹ, ϕ1
y, ϕ1

ỹ, ϕ2 ⎟ ⎜ ⎟
Y˜ := ⎜ ⎟ , Y := ⎜
y, ϕ2 ⎟
⎝ ......... ⎠ ⎝ ....... ⎠

ỹ, ϕ N
y, ϕ N

and
Λ := diag(μ1 μ2 . . . μ N ).

Finally, σ̃ is a function depending only on the first N modes, since σ was assumed
to be like that, ⎛ ⎞

σ (Y ), ϕ1

σ (Y ), ϕ2 ⎟
σ̃ := ⎜ ⎟
⎝ .............. ⎠ .

σ (Y ), ϕ N

Note that with the two Eqs. (7.114) and (7.115) we are placed in the context of
finite-dimensional stochastic differential equations from [92]. And since the sublinear
assumptions from [92] are satisfied for the present case (see [92, Assumption 2.1],
because here we are dealing with matrices and σ is Lipschitz), we may deduce similar
results to those in [92, Lemma 3.2, Theorem 3.1]. More precisely, one may show
that " "2
" "
E "Y˜ (t) − Y (t)" ≤ C1 e−C2 t E|Yo |2 , ∀t ≥ 0, (7.116)

for some positive constants C1 and C2 . And taking advantage of the exponential
decay (7.113), we finally get that
7.4 Stabilization by Discrete-Time Feedback Control 165
" "2
" "
E "Y˜ (t)" ≤ Ce−ρt E|Yo |2 , ∀t ≥ 0, (7.117)

for some constants C, ρ > 0.


Now let us scalar multiply (7.111) by ϕi , i > N . We get
⎧ ⎫
⎨ 2 3 !!

N
t  ⎬    
d ỹi = −μi ỹi + ũ k Y˜ τ (A + γk )Dγk , ϕi dt + λ σ Y˜ (t) , ϕi dβ(t),
⎩ τ ⎭
k=1

where obviously, ỹi =


ỹ, ϕi . Of course, this will be compared with its continuous
version, obtained from (7.112),
4 5

N
 
dyi = −μi yi + u k (Y (t)) (A + γk )Dγk , ϕi dt + λ
σ (Y (t)) , ϕi dβ(t).
k=1

Similarly as in (7.117), one may deduce with arguments from [92, Lemma 3.2,
Theorem 3.1] that
E| ỹi (t)|2 ≤ Ce−ρt E| ỹi (0)|2 , ∀t ≥ 0,

for all i = N + 1, N + 2, . . .. We conclude that by virtue of the above relation and


(7.117), we have that

E ỹ(t)2 ≤ Ce−ρt Eyo 2 , t ≥ 0. (7.118)

7.5 Comments

The problem of boundary stabilization of stochastic parabolic-type equations has


been discussed in many papers; see, for instance, [70], [10, Sect. 2.4.1], and the
references therein. In all those works the equation always contains noise as a forcing
term, which makes the problem easier, since the presence of enough noise guaranties
the stability of the system. However, in [52] the limit is considered as well as the
more difficult situation of a noise acting only at the boundary, corresponding to a
realistic situation in which the control itself is perturbed by a noise dβ. In the first
section of this chapter, we assumed a fading-type noise, namely e−δt dβ, for some
δ > 0 (the noise perturbation is not permanently of the same intensity but vanishes
exponentially fast). The exponential decay of the noise requirement is mandatory,
since we are studying here the problem of stabilization. In [52] is addressed the
optimal control problem associated with Eq. (7.1). Those results were improved in
[63], while in [129] the authors considered in addition some delays in Eq. (7.1). The
results presented in this section were published in Munteanu [104].
Of course, another interesting case is that in which a Dirichlet boundary controller
is perturbed by noise. The main difficulties that appear in that case are related to the
166 7 Stabilization of Stochastic Equations

fact that the solution is no longer L 2 -valued. More precisely, the solution lies in a
negative Sobolev space H α , α < − 14 . The reason is that the smoothing properties
of the heat equation are not strong enough to regularize a rough term such as a white
noise. However, one may suggest reconsidering the problem in the new framework
proposed in [53], namely in weighted L 2 -spaces. The difficulty is that in order to
apply the reduction method, one should consider the eigenbasis in the weighted L 2 -
space of the weighted Laplacian. Another idea is to consider the solution in the space
of distributions as in [37].
Besides this, even in the case of Neumann boundary conditions, it would be
interesting to consider a space dimension higher than one. The main difficulty in that
case is that the solution D of (7.10) should satisfy noise boundary conditions of the
type

D(x) = e−δt dβ(t, x), x ∈ Γ,
∂n
where Γ is a part of the boundary of the domain in which the equation is considered,
while n is its outward unit normal. To define such a D, one may rely on the existing
results in [112], after imposing some additional conditions. Then the control design
algorithm may be applied.
The problem of stabilization of the stochastic versions of the deterministic models
has arisen naturally in the scientific community. First, the finite-dimensional case
was considered, and we mention Mao and his coworkers for notable results in this
direction; see the book [91], for example. There, it is mainly the Lyapunov stability
technique that is used, which consists in finding proper Lyapunov functions for
the equation under discussion. Then these ideas were reconsidered in the infinite-
dimensional case, and we refer to the joint work of Caraballo et al. [42], which treats
a similar problem to the one we presented above. Let us give some details on how the
Lyapunov functions are used. Let A denote the Dirichlet–Laplace operator on (0, L),
and σ = σ (y) a Lipschitz function such that σ (0) = 0, and consider the problem

dy(t) = Ay(t)dt + λσ (y(t))dβ, t > 0,
(7.119)
y(0) = yo .

Assume that V (t, y) : R+ × L 2 (0, L) → R+ is a C 1,2 -positive functional such that


for all y ∈ H01 (0, L), t ∈ R+ , Vy (t, y) ∈ H01 (0, L). Define the operators L and Q
(which are, in fact, the deterministic part and the stochastic part of Itô’s formula
applied to V (t, y) in (7.119), respectively): for y ∈ H01 (0, L) and t ∈ R+ ,

  1   
L V (t, y) = Vt (t, y) + Vy (t, y), Ay + Vyy (t, y)λσ (y), λσ (y)
2
and  
QV (t, y) = (Vy (t, y))2 , λσ (y) .
7.5 Comments 167

Assume that the solution to (7.119) satisfies |y(t)| = 0 for all t ≥ 0 a.s., provided
|yo | = 0 a.s., and that there exists a function V (t, y) ∈ C 1 (R+ , R+ ) × C 2 (L 2 (0, L);
R+ ), and ψ1 (t), ψ2 (t) ≥ 0 are two functions for which there exist constants p >
0, γ ≥ 0, and θ ∈ R such that
(1) y p ≤ V (t, y), ∀y ∈ H 1 (0, L);
(2) L V (t, y) ≤ ψ1 (t)V (t, y), ∀y ∈ H01 (0, L), ∀t ∈ R+ ;
(3) QV (t, y):
≥ ψ2 (t)V 2 (t, y), ∀x: ∈ H01 (0, L), ∀t ∈ R+ ;
t t
ψ1 (s)ds ψ2 (s)ds
(4) lim sup 0
t
≤ θ, lim inf 0
t
≥ 2γ .
t→∞ t→∞

Then the strong solution of equation (7.119) satisfies

log |y(t)| γ −θ
lim sup ≤− a.s.,
t→∞ t p

since V is a Lyapunov function for the system.


Then the problem of internal stabilization by noise was proposed in the same work
[42]. More precisely, the problem

dy(t) = Ay(t)dt + λσ (y(t))dβ1 + h(t, y(t)dβ2 , t > 0,
(7.120)
y(0) = yo

was considered. Here β1 and β2 are two independent Brownian motions. If h is a


Lipschitz function with h(t, 0) = 0 for which there exist λ(·), ρ(·) t ≥ 0 and ν0 , ρ0 ∈
R such that
h(t, y)2 ≤ λ(t)y2 , t ≥ 0, y ∈ L 2 (0, L),

y, h(t, y) ≥ ρ(t)y4 , t ≥ 0, y ∈ L 2 (0, L),

where  t  t
1 1
lim sup λ(s)ds ≤ λ0 and lim inf ρ(s)ds ≥ ρ0 ,
t→∞ t 0 t→∞ t 0

then under the assumption that


λσ (y), y 2 ≥ ρ̃(t)y4 , ∀y ∈ L 2 (0, L),

where ρ̃(·) is a nonnegative continuous function such that


 T
1
lim inf ρ̃(s)ds ≥ ρ̃0 , ρ̃0 ∈ R+ ,
t→∞ t 0

the solution of (7.120) satisfies

1
lim sup log y(t)2 ≤ −(2(ρ0 + ρ̃0 ) − λ0 ), P − a.s.
t→∞ t
168 7 Stabilization of Stochastic Equations

A more direct and simple control is proposed in [14, Sect. 5.5], where the internal
stabilization of the Navier–Stokes equations driven by linear multiplicative noise is
treated. Provided that the first eigenvalue of the Oseen operator is large enough, the
feedback u = −η1O 0 y once inserted into the equations ensures the stability of the
closed-loop system, P−a.s..
Proportional-type feedback laws (both internal and from the boundary) were pro-
posed, in the context of noise stabilization of deterministic equations, by Barbu; see
the book [10, Chap. 4]. Regarding the boundary case, in [10] the equation under con-
sideration is the Navier–Stokes equation, but the ideas can be easily reformulated for
general parabolic-like equations. The control law involves a family of independent
 N
Brownian motions β j j=1 and is given as


N
 
u=η μ j y, ϕ j Φ j dβ j .
j=1

The boundary conditions are lifted into the equations, and then the system is decom-
posed into its unstable and stable parts. The solution of the unstable part is given
explicitly and it is shown that if it is stable, then via a Lyapunov function and Itô’s
formula it is shown that the stable part is stable as well. This allows us to conclude
that the corresponding solution of the closed-loop equation satisfies
 ∞
e2γ t y(t)2 dt < ∞ P − a.s.
0

The proof of the stability of the system is almost identical to the proof of the main
result of Sect. 7.1, except that the solution of the unstable part is given explicitly.
This is possible due to the imposed hypothesis of linear independence of the traces
of the normal derivatives of the dual eigenfunctions on the boundary (as described
in the Comments of Chap. 2). Of course, following the ideas in Chap. 2, one may
define another stabilizing noise control, where this kind of hypothesis is dropped.
The results presented in the second section of this chapter were published in the
author’s work [106].
On the other hand, concerning the internal stabilization by noise of deterministic
equations, there are substantially more results. We refer first to the early work of
Arnold [5], which provides an example of an unstable system stabilized by a random
parameter noise, followed by the work on linear systems Arnold et al. [6]. Other
stabilization results are provided via Lyapunov exponents in Kwiecinska [78, 79].
Let us briefly describe the ideas behind those works. The equation

d
X (t) = AX (t)
dt
is considered, where A generates a C0 -semigroup in a Hilbert space. It is denoted by
7.5 Comments 169

1
λdet := lim sup log X det (t),
t→∞ t

the Lyapunov exponent of the deterministic equation. Here X det is the solution of the
deterministic equation. Then the equation is perturbed by


N
d X = AX dt + σ Bk X dβk ,
k=1

where the Bk are linear continuous operators satisfying some diagonalizable and
commutation properties. Similarly, a Lyapunov coefficient of the stochastic equation
is introduced:
1
λst := lim sup log X st (t),
t→∞ t

where X st is the solution of the stochastic equation. The author proves that the
stochastic Lyapunov exponents turn out to be smaller, almost surely, than their deter-
ministic counterparts. This means that the deterministic system is made more stable
by adding a term with white noise. Moreover, there exists σ0 such that for σ ≥ σ0 ,
all the stochastic Lyapunov exponents are strictly smaller than zero with probability
one. For a collection of more results on this subject, one may see [41].
Regarding the third section of this chapter, Burgers’s equation is often referred to
as a one-dimensional “cartoon” of the Navier–Stokes equation because it does not
exhibit turbulence. In contrast, it turns out that its stochastic version, (7.73), models
turbulence; for details, one can see [47, 111].
In the literature there are plenty of results concerning the stabilization of the deter-
ministic Burgers equation; for example, we refer to [74], which provides a global
stabilization result, with some consequences on the stabilizability of the stochastic
version. The results of Sect. 7.3 are new. The ideas are based on the mild formu-
lation, described above, plus a fixed-point argument. The idea to use fixed-point
arguments in order to prove the stability of deterministic or stochastic equations has
been previously used in papers such as [89].
Finally, the result asserting that Eq. (7.107) (see also (7.110)) is stabilizable by
a proportional-type feedback law involving only time-discrete measurements of the
state, see (7.108) and (7.109), can be viewed as a completion of the result in (2.68),
where via some numerical simulations, we observed that there is no need for full-
state knowledge, but only on a part of the domain. So we may conclude that the
proportional-type feedback law designed in Chap. 2 and used through out this book
to stabilize different types of deterministic or stochastic PDEs can be improved in
order to involve only time-discrete measurements of the state on only a part of the
domain where the phenomena (modeled by the PDE) are evolving. From the practical
point of view and the costs, this is a very important feature. The results in Sect. 7.4
are new.
Chapter 8
Stabilization of Unsteady States

In this chapter, we address the problem of stabilization of unsteady-state trajectories


of time-dependent systems. In this case, the linear operator obtained from the lin-
earization of the equation around the trajectory is time-dependent, so its spectrum is
time-dependent as well. This means that the spectral method leaves out this case. We
will follow the approach from Sect. 7.2, Chap. 7. Namely, we will write the solution
of the nonlinear equation in a mild formulation via a kernel and prove its stability.

8.1 Presentation of the Problem

The subject of this chapter is the following boundary-controlled parabolic-type equa-


tion on (0, L), L > 0:

⎪ ∂t y(t, x) = yx x (t, x) + a(t, x)yx (t, x) + b(t, x)y(t, x) + σ (t, x, y(t, x)),


⎨ 0 < x < L , t > 0,

⎪ yx (t, 0) = u(t), yx (t, L) = 0, t ≥ 0,


y(0, x) = yo (x) for x ∈ [0, L].
(8.1)
Now let us give details. Functions a, b : R+ × R → R are uniformly bounded
with a of class C 1 in both t and x, in the sense that we may find some positive
constant c1 such that

ess supt>0 ess supx∈(0,L) (|∂t a(t, x)| + |ax (t, x)| + |a(t, x)| + |b(t, x)|) ≤ c1 .
(8.2)
In addition, we assume that

a(t, 0) = a(t, L) = 0, ∀t ≥ 0.

© Springer Nature Switzerland AG 2019 171


I. Munteanu, Boundary Stabilization of Parabolic Equations, Progress in
Nonlinear Differential Equations and Their Applications 93,
https://doi.org/10.1007/978-3-030-11099-4_8
172 8 Stabilization of Unsteady States

Further, σ is a uniformly globally Lipschitz nonlinear function, i.e., there exists


L σ > 0 such that

|σ (t, x, y) − σ (t, x, ȳ)| ≤ L σ |y − ȳ|, ∀t ≥ 0, x ∈ [0, L], ∀y, ȳ ∈ R, (8.3)

and σ (t, x, 0) = 0.
Now let ŷ be some trajectory of the uncontrolled (8.1). More precisely, ŷ = ŷ(t, x)
satisfies

⎪ ∂t ŷ(t, x) = ŷx x (t, x) + a(t, x) ŷx (t, x) + b(t, x) ŷ(t, x) + σ (t, x, ŷ(t, x)),


⎨ 0 < x < L , t > 0,
⎪ ŷx (t, L) = 0, t ≥ 0,



ŷ(0, x) = ŷo (x) for x ∈ [0, L].
(8.4)
Then define the fluctuation variable z := y − ŷ, which by virtue of (8.1) and (8.4)
satisfies the equation

⎪ ∂t z(t, x) = z x x (t, x) + a(t, x)z x (t, x) + b(t, x)z(t, x)


⎨ + σ (t, x, z(t, x) + ŷ(t, x)) − σ (t, x, ŷ(t, x)), for 0 < x < L , t > 0,
⎪ z x (t, 0) = U (t) := u(t) − ŷx (t, 0), z x (t, L) = 0, t ≥ 0,



z(0, x) = z o (x) := yo (x) − ŷo (x) for x ∈ [0, L].
(8.5)

8.2 The Stabilization Result and Applications

We will use all the notation from Chap. 7, Sect. 7.2, except that this time, we take
the γk to be

k
γk := N α + , k = 1, 2, . . . , N , (8.6)
N
with α > 2.
We emphasize that

γk , γk − μ1 , γk − μ N are of order O(N α ), (8.7)

for all k = 1, 2, . . . , N .
The given a priori feedback law v is the same as in (7.41)–(7.42); namely, for
w ∈ L 2 (0, L), we set

v(w) := v1 (w) + · · · + v N (w), (8.8)


8.2 The Stabilization Result and Applications 173

where
⎞ ⎛ ϕ1 (0) ⎞

w, ϕ1 
 γk −μ1
⎜ w, ϕ2  ⎟ ⎜ ⎜ ϕ2 (0) ⎟

vk (w) := A ⎜ ⎟
⎝ ........ ⎠ , ⎜
γk −μ2 ⎟ . (8.9)
⎝ ........ ⎠
w, ϕ N  ϕ N (0)
γk −μ N N

The main result of stabilization concerning the Eq. (8.1) is stated and proved
below.
Theorem 8.1 Let ρ > 0 be arbitrary but fixed. For N ∈ N large enough, the solution
y to the equation

⎪ ∂t y(t, x) = yx x (t, x) + a(t, x)yx (t, x) + b(t, x)y(t, x) + σ (t, x, y(t, x)),



⎪ for 0 < x < L and t > 0,

⎨ 
1 x
)dξ
yx (t, 0) = v(e 2 0 a(t,ξ
(y(t) − ŷ(t))) + ŷx (t, 0), t > 0



⎪ yx (t, L) = 0 for t > 0,



y(0, x) = yo (x) for x ∈ [0, L],
(8.10)
satisfies

lim sup eρt ess sup |y(t, x) − ŷ(t, x)| < ∞. (8.11)
t→∞ x∈(0,L)

Here the feedback v is defined by (8.8).

Proof As mentioned above, stabilization of (8.10) is equivalent to the null stabiliza-


tion of the translated Eq. (8.5). That is why in the following, we will consider only
the latter situation. Carrying out the transformation
x
1
a(t,ξ )dξ
w(t, x) := e 2 0 z(t, x), t > 0, x ∈ [0, L],

in (8.5), we get that w satisfies



⎪ ∂ w(t, x) = wx x (t, x) + d(t, x)w(t, x) + σ̃ (w(t)) for 0 < x < L , t > 0,
⎨ t
wx (t, 0) = v(w(t)), wx (t, L) = 0, t ≥ 0,

⎩ 
1 x
w(0, x) = wo (x) := e 2 0 a(0,ξ )dξ (yo (x) − ŷo (x)) for x ∈ [0, L],
(8.12)
where we have used that a(t, 0) = a(t, L) = 0, ∀t > 0. Here
 x
1 1 1
d(t, x) := ∂t a(t, ξ )dξ − ax (t, x) − a 2 (t, x) + b(t, x), (8.13)
2 0 2 4
174 8 Stabilization of Unsteady States

and x x
a(t,ξ )dξ
σ (t, x, e− 2 a(t,ξ )dξ
1 1
σ̃ (w(t)) := e 2 0 0 w(t, x) + ŷ(t, x))
x (8.14)
1
a(t,ξ )dξ
−e 2 0 σ (t, x, ŷ(t, x)).

It is easily seen from (8.2) and (8.3) that we may find some constant c2 > 0 such that

ess sup ess sup |d(t, x)| ≤ c2


t>0 x∈(0,L) (8.15)
|σ̃ (w(t, x))| ≤ c2 L σ |w(t, x)|, ∀t > 0, x ∈ (0, L).

Now we lift the boundary conditions into the Eq. (8.12), obtaining thereby an
internal control-type problem. As in (7.49), we find that (8.12) is equivalent to


N 
N
∂t w(t) = − Aw(t) + vi (w(t))(Ã + γi )Dγi − 2 μ j vi (w(t))Dγi , ϕ j ϕ j
i=1 i, j=1

+ Γ (w(t)); w(0) = wo ,
(8.16)
where
Γ (w) := dw + σ̃ (w).

It is clear that one can find some constant c3 such that

|Γ (w)| ≤ c3 (1 + L σ )|w|. (8.17)

Similarly as in Lemma 7.1, we may prove the following result.

Lemma 8.1 The solution z of


N
∂t z(t) = − Az(t) + vi (z(t))(Ã + γi )Dγi
i=1
(8.18)

N
−2 μ j vi (z(s))Dγi , ϕ j ϕ j ; z(0) = z o ,
i, j=1

can be written in a mild formulation as


 L
z(t, x) = p(t, x, ξ )z o (ξ )dξ.
0

Moreover, we have that


 ∞  L
1
e N t | p(t, x, ξ )|dξ dt ≤ C , ∀x ∈ (0, L), (8.19)
0 0 Nθ
8.2 The Stabilization Result and Applications 175

for some positive θ and C > 0, independent of N .


Proof of Theorem 8.1 (continued). Now we are ready to conclude the proof of The-
orem 8.1. Let us define


w
1,N := ess sup ess sup e N t |w(t, x)|.
t>0 x∈(0,L)

Then by Lemma 8.1, we write the solution of (8.16) in a mild formulation via the
kernel p, i.e.,
 L  t L
w(t, x) = p(t, x, ξ )wo (ξ )dξ + p(t − s, x, ξ )Γ (w(s))ds. (8.20)
0 0 0

Then we have
 L  t L
|w(t, x)| ≤ | p(t, x, ξ )||wo (ξ )|dξ + | p(t − s, x, ξ )||Γ (w(s))|ds
0 0 0
(using (8.19) in the first term and (8.17) in the second one)
≤ Ce−N t ess sup |wo (x)|
x∈(0,L)
 t L
+ e N (t−s) | p(t − s, x, ξ )|e−N (t−s) c3 (1 + L σ )|w(s)|ds
0 0
 ∞ L
−N t −N t
≤ Ce ess sup |wo (x)| + c3 (1 + L σ )
w
1,N e e N t | p(t, x, ξ )|dξ dt
x∈(0,L) 0 0

(again using (8.19) in the second term)


1
≤ Ce−N t ess sup |wo (x)| + Ce−N t c3 (1 + L σ )
w
1,N , ∀t > 0, x ∈ (0, L).
x∈(0,L) Nθ

The above relation implies that

1

w
1,N ≤ C ess sup |wo (x)| + Cc3 (1 + L σ )
w
1,N .
x∈(0,L) Nθ

Therefore, if we choose N large enough that Cc3 (1 + L σ ) N1θ < 1 and N > ρ, we
obtain that

w
1,ρ ≤
w
1,N < ∞. (8.21)

Keeping in mind that we have defined


x
y(t, x) − ŷ(t, x) = e− 2 a(t,ξ )dξ
1
0 w(t, x),

we see that relation (8.21) yields the desired result. 


176 8 Stabilization of Unsteady States

Remark 8.1 In comparison with the proof of Theorem 7.4, here we did not estimate
the L 2 -norm of the solution (using Parseval’s identity), but instead we estimated the
L ∞ -norm. That is why in comparison to Sect. 7.2, here we had to change the values
of γk s, k = 1, 2, . . . , N , in order to obtain relation (8.19). This suggests that the
various ways of choosing the γk also play an important role, allowing one to solve
stabilization problems for different equations in different frameworks.

8.2.1 Observer Design

Let us reconsider Eq. (8.1), but this time with σ ≡ 0, i.e.,



⎨ ∂t y(t, x) = yx x (t, x) + a(t, x)yx (t, x) + b(t, x)y(t, x), 0 < x < L , t > 0,
yx (t, 0) = u(t), yx (t, L) = 0, t ≥ 0,

y(0, x) = yo (x), 0 ≤ x ≤ L .
(8.22)
Again, let ŷ stand for a particular time-dependent solution of the uncontrolled (8.22)
(that is, ŷ satisfies (8.4) with σ ≡ 0). Then as was done in (8.5), we write the equation
satisfied by the fluctuation variable Y := y − ŷ. This, via the transformation
x
1
a(t,ξ )dξ
w := e 2 0 Y (t, x),

may be equivalently rewritten as (see also (8.12))



⎨ ∂t w(t, x) = wx x (t, x) + d(t, x)w(t, x), 0 < x < L , t > 0,
wx (t, 0) = v(w(t)), wx (t, L) = 0, t ≥ 0, (8.23)

w(0, x) = w0 (x), 0 ≤ x ≤ L .

Here (see (8.9) and (8.8))

v(w) := v1 (w) + · · · + v N (w), (8.24)

where ⎛ ⎞ ⎛ ϕ1 (0) ⎞
 w, ϕ1  γk −μ1
⎜ w, ϕ2  ⎟ ⎜ ⎜
ϕ2 (0) ⎟

vk (w) := A ⎜ ⎟
⎝ ........ ⎠ , ⎜
γk −μ2 ⎟ . (8.25)
⎝ ......... ⎠
w, ϕ N  ϕ N (0)
γk −μ N N

In the proof of Theorem 8.1, we have shown that the feedback v given by (8.24)
ensures the exponential stability of (8.23).
When trying to apply this theoretical result in practice, things may become difficult
to implement, since the feedback law (8.24) requires full state knowledge, while in
practice, only measurements at the end x = L are available.
8.2 The Stabilization Result and Applications 177

Based on the ideas from Krstic [75], we propose the following observer for system
(8.23)–(8.25):


⎪ ∂t ŵ(t, x) = ŵx x (t, x) + d(t, x)ŵ(t, x) + K 1 (t, x)[w(t, L) − ŵ(t, L)],

0 < x < L , t > 0,

⎪ ŵ x (t, 0) = v( ŵ(t)), ŵ x (t, L) = K 10 (t)[w(t, L) − ŵ(t, L)], t ≥ 0,

ŵ(0, x) = w0 (x), 0 ≤ x ≤ L .
(8.26)
Here K 1 , K 10 are output injection functions. Observer (8.26) is in the standard form
of a copy of the system plus injection of the output estimation error. This form is
usually used for the finite-dimensional case, in which observers of the form

d
X̂ = A X̂ + Bu + L(Y − C X̂ )
dt
are constructed for plants

d
X = AX + Bu, Y = C X.
dt
This standard form allows us to pursue duality between the observer and the controller
design, that is, to find the observer gain function using the solution to the stabilization
problem we studied in the previous section. This can be put in connection with the
way duality is used to find the gains of a Luenberger observer based on the pole
placement control algorithm, or the way duality is used to construct Kalman filters
based on the LQR design.
In practice, things go as follows: once the estimates of measurements in x = L
are available, one inserts them into the observer equation (8.26) and numerically
computes the solution ŵ and, at the same time, v(ŵ). With this v(ŵ) plugged into the
plant equation (8.23), instead of v(w), one expects that the corresponding solution
of (8.23) will go exponentially fast to zero.
Keeping in mind that in (8.23), v(w) is replaced by v(ŵ), we deduce that the
observer error
w̃(t, x) := w(t, x) − ŵ(t, x)

satisfies the following PDE:



∂t w̃(t, x) = w̃x x (t, x) + d(t, x)w̃(t, x) − K 1 (t, x)w̃(t, L), 0 < x < L , t > 0,
w̃x (t, 0) = 0, w̃x (t, L) = −K 10 (t)w̃(L , t), t ≥ 0.
(8.27)
Now the functions K 1 (t, x) and K 10 (t) must be determined such that (8.27) is expo-
nentially stable to zero. To this end, we look for a backstepping-like coordinate
transformation  L
w̃(t, x) := z̃(t, x) − K (t, x, ξ )z̃(t, ξ )dξ
x
178 8 Stabilization of Unsteady States

that transforms (8.27) into the exponentially stable (for c > 0) system

∂t z̃(t, x) = z̃ x x (t, x) − cz̃(t, x), x ∈ (0, L), t > 0,
(8.28)
z̃ x (0) = z̃ x (L) = 0.

The free parameter c can be used to set the desired observer convergence speed.
Straightforward computations involving (8.28) give
 L  L
∂t w̃(t, x) = ∂t z̃(t, x) − ∂t K (t, x, ξ )z̃(t, ξ )dξ − K (t, x, ξ )∂t z̃(t, ξ )dξ
x x
 L
= ∂t z̃(t, x) − ∂t K (t, x, ξ )z̃(t, ξ )dξ
x
 L  L
− K (t, x, ξ )z̃ ξ ξ (t, ξ )dξ + c K (t, x, ξ )z̃(t, ξ )dξ
x x
 L
= ∂t z̃(t, x) − ∂t K (t, x, ξ )z̃(t, ξ )dξ + K (t, x, x)z̃ x (t, x)
x
 L
− K ξ (t, x, x)z̃(t, x) + K ξ (t, x, L)z̃(t, L) − K ξ ξ (t, x, ξ )z̃(t, ξ )dξ
x
 L
+c K (t, x, ξ )z̃(t, ξ )dξ.
x
(8.29)
Likewise, we have

w̃x x (t, x) = z̃ x x (t, x) + K x (t, x, x)z̃(t, x) + K (t, x, x)z̃ x (t, x)


 x
d (8.30)
+ K (t, x, x)z̃(t, x) − K x x (t, x, ξ )z̃(t, ξ )dξ.
dx 0

Subtracting (8.30) from (8.29), we get that


 
d
∂t w̃(t, x) − w̃x x (t, x) = −2 K (t, x, x) − c z̃(t, x)
dx
 L
  (8.31)
+ −∂t K (t, x, ξ ) + K x x (t, x, ξ ) − K ξ ξ (t, x, ξ ) z̃(t, ξ )dξ
x
+ K ξ (t, x, L)z̃(t, L).

It follows by (8.27) and (8.31) that one should have


  L 
d(t, x) z̃(t, x) − K (t, x, ξ )z̃(t, ξ )dξ − K 1 (t, x)w̃(t, L)
x
 
d
= −2 K (t, x, x) − c z̃(t, x) (8.32)
dx
8.2 The Stabilization Result and Applications 179
 L  
+ −∂t K (t, x, ξ ) + K x x (t, x, ξ ) − K ξ ξ (t, x, ξ ) z̃(t, ξ )dξ
x
+ K ξ (t, x, L)z̃(t, L).

In order for (8.32) to hold, we must have



⎨ −∂t K (t, x, ξ ) + K x x (t, x, ξ ) − K ξ ξ (t, x, ξ ) = −d(t, x)K (t, x, ξ ),
−2 ddx K (t, x, x) − c = d(t, x), (8.33)

K ξ (t, x, L) = −K 1 (t, x).

Recall that
 L
d
w̃x (t, x) = z̃ x (t, x) + K (t, x, x)z̃(t, x) − K (t, x, ξ )z̃(t, ξ )dξ.
x dx

Putting x = 0 and assuming that K (t, 0, 0) = 0, we get

d
K (t, 0, ξ ) = 0,
dx

where we have used that w̃x (t, 0) = z̃ x (t, 0) = 0. Now put x = L and recall that
w̃x (t, L) = −K 10 (t)w(t, L) and that z̃ x (t, L) = 0. We arrive at

K (t, L , L) = −K 10 (t).

Hence, we have to solve the equation

− ∂t K (t, x, ξ ) + K x x (t, x, ξ ) − K ξ ξ (t, x, ξ ) = −d(t, x)K (t, x, ξ ) (8.34)

with boundary conditions


⎧  x

⎪ 1
⎨ K (t, x, x) = − [c + d(t, ξ )]dξ,
2 0 (8.35)

⎪ d
⎩ K (t, 0, ξ ) = 0.
dx
We introduce the standard change of variables

τ = x + ξ η = x − ξ,

and define  
τ +η τ −η
G(t, τ, η) := K (t, x, ξ ) = K t, , ,
2 2
180 8 Stabilization of Unsteady States

thereby transforming the problem into the following PDE:


 
τ +η
− ∂t G(t, τ, η) + 4G τ η (t, τ, η) = −d t, G(t, τ, η), (τ, η) ∈ O1 , (8.36)
2

with boundary conditions


⎧  τ

⎨ G(t, τ, 0) = 1 2 [c + d(t, ξ )]dξ,
2 0 (8.37)

⎩ G (t, τ, −τ ) − G (t, τ, −τ ) = 0.
τ η

Here the domain is

O1 := {(τ, η) : 0 < τ < 2L , 0 < η < min(τ, 2L − τ )} .

If d is such that (8.36)–(8.37) has a solution G, then we can recover K as

K (t, x, y) = G(t, x + y, x − y).

So we are able to say that Eqs. (8.28) and (8.27) are equivalent. This implies the
asymptotic exponential decay of the error w̃, from which we conclude that the
observer (8.26) asymptotically exponentially approximates the plant equation (8.23).

8.2.2 Applications

Now we will consider a stabilization problem associated with the following SPDE:

⎪ dY (t, x) = Yx x (t, x)dt + f (t, x)Yx (t, x)dt + h(t)Y (t, x)dβ(t, x),



⎪ for 0 < x < L and t > 0,

⎨  h(t)β(t) 
Yx (t, 0) = u(t) := e h(t)β(t,0)
v e Y (t) , t > 0 (8.38)



⎪ Yx (t, L) = 0 for t > 0,



Y (0, x) = Yo (x) for x ∈ [0, L].

Here f = −2hβx , where β is a Brownian motion in time and colored in space such
that βx (t, 0) = βx (t, L) = 0; and h is such that

1
|h(t)| ≤ C √ , t > 0. (8.39)
t

(For the precise formulation of the solution to (8.38), see Chap. 7.)
8.2 The Stabilization Result and Applications 181

A fair question would be why this stochastic PDE is related to the deterministic
PDE (8.1), studied above in this chapter. The reason is that in order to study the
boundary stabilization of (8.1), we will reduce it by a rescaling procedure (similarly
as in the third section of Chap. 7) to a random parabolic equation and apply to
this equation the stabilization result established in Theorem 8.1. Namely, by the
substitution
w(t) := e−h(t)β(t) Y (t),

doing similar computations as in [22], we obtain that w is the solution to the following
random deterministic equation:

⎪ ∂t w(t, x) = e−h(t)β(t,x) (eh(t)β(t,x) w(t, x))x x





⎪ + f (t, x)e−h(t)β(t,x) (eh(t)β(t,x) w(t, x))x

⎨  
d 1 2
− h(t)β(t, x) + h (t) w(t, x), P-a.s.,t > 0, x ∈ (0, L),

⎪ dt 2



⎪ wx (t, 0) = v(w(t)), wx (t, L) = 0, P − a.s., t > 0,



w(0) = yo ,
(8.40)
where in order to recover the boundary conditions, we used that βx (t, 0) = βx (t, L) =
0. Or equivalently,

⎨ ∂t w(t, x) = wx x (t, x) + q(t, x)w(t, x), P-a.s., for t > 0, x ∈ (0, L),

wx (t, 0) = v(w(t)), wx (t, L) = 0, P-a.s., t > 0, (8.41)


w(0) = yo ,

where we used that f = −2hβx and set

d 1
q(t, x) := h(t)βx x (t, x) + (h(t)βx (t, x))2 − h(t)β(t, x) − h 2 (t) − 2h(t)(βx (t, x))2 .
dt 2

It is clear that Eq. (8.41) is a particular case of Eq. (8.1). In fact, in applying a
rescaling argument to reduce a stochastic PDE to a deterministic one, the latter will
usually have time-dependent coefficients (as (8.41) does). Consequently, the problem
of stabilization of an SPDE can be solved via the stabilization to trajectories for
some deterministic PDE. However, for the moment, this is not the best approach
for this problem, since in the literature, there are very few results on unsteady-state
stabilization. On the other hand, since we have obtained in Theorem 8.1 a result
concerning the stabilization of trajectories for a semilinear heat equation, it is clear
that we may immediately obtain a stabilization result for its stochastic version as
well. In fact, we can prove the following result.

Theorem 8.2 Let ρ > 0 be arbitrary but fixed. For N ∈ N large enough, the solution
Y to the equation
182 8 Stabilization of Unsteady States


⎪ dY (t, x) = Yx x (t, x)dt + f (t, x)Yx (t, x)dt + h(t)Y (t, x)dβ(t, x),



⎪ for 0 < x < L and t > 0,

Yx (t, 0) = u(t) := e h(t)β(t,0)
v(e h(t)β(t)
Y (t)), t > 0 (8.42)



⎪ Y (t, L) = 0 for t > 0,

⎪ x

Y (0, x) = Yo (x) for x ∈ [0, L],

satisfies

−∞ < lim sup log |Y (t, x)| < −ρ, ∀x ∈ (0, L), P-a.e. (8.43)
t→∞

Here v is given by (8.8).


Proof As mentioned earlier, Eq. (8.42) is equivalent to

⎨ ∂t w(t, x) = wx x (t, x) + q(t, x)w(t, x), P-a.s., for t > 0, x ∈ (0, L),

wx (t, 0) = v(w(t)), wx (t, L) = 0, P-a.s., t > 0, (8.44)


w(0) = yo .

Making use of (8.39), we get, for some constant C > 0, that


 
|βx x (t)| (βx (t))2 |β(t)|
|q(t)| ≤ C √ + + √ + 1 , t > 0.
t t t

By the law of the iterated logarithm, it follows that


 
|βx x (t)| (βx (t))2 |β(t)|
sup √ + + √ < ∞, P-a.e.,
t≥0 t t t

whence for    
|βx x (t)| (βx (t))2 |β(t)|
Ωr := sup √ + + √ ≤r ,
t≥0 t t t

we have P(Ωrc ) → 0 as r → ∞ (for details, see Lemma 3.4 in [21]).


Hence we may conclude that

ess sup ess sup |q(t, x)| ≤ c4 , P − a.e., (8.45)


t>0 x∈(0,L)

for some constant c4 > 0.


To conclude the proof, we remark that Eq. (8.44) is of the form (8.1) with a ≡
0, b ≡ q, and σ ≡ 0. Notice that (8.45) ensures that in this case, b satisfies (8.2), as
needed. Therefore, one may argue as in the previous section in order to obtain the null
exponential stability of the random deterministic equation (8.44), and consequently
relation (8.42). The details are omitted. 
8.3 Comments 183

8.3 Comments

As already mentioned and discussed, regarding the nonstationary case, there are
very few results, most of them treating the internal stabilization problem only, see
[4, 23, 77], while Rodrigues [122] deals with the boundary case. The reason for this
impoverished literature is that all the techniques developed for the stationary case
seem not to work for stabilizing trajectories.
In the work Barbu et al. [23], the Foias–Prodi property for parabolic PDEs is used.
Roughly speaking, this property means that if the projections of two solutions to the
unstable modes converge to each other as time goes to infinity, then the difference
between these solutions goes to zero. However, it turns out that the conclusion remains
true if the projections are close to each other at times proportional to a fixed constant.
So the main idea in [23] was to design a control that ensures equality at integer times
for the projections of two solutions to the unstable modes. More precisely, assume that
for a sufficiently large integer N , one manages to construct an (internal) control such
that once plugged into (8.5), the corresponding solution to the closed-loop equation
(8.5) satisfies PN Y (1) = 0, where PN is the projection of L 2 on the space spanned by
the first N eigenfunctions of the Laplacian in (0, L). Then using Poincaré’s inequality
and the regularizing property of the resolving operator for (8.5), one gets

−1

Y (1)
=
(I − PN )Y (1)
≤ C1 μ N 2
Y (1)
H 1 (0,L)
(8.46)
−1   −1
≤ C2 μ N 2
Yo
+
U
L 2 (0,1);X ) ≤ C3 μ N 2
Yo
,
 
where μ j j denotes the increasing sequence of eigenvalues of the Laplace operator
and Ci , i = 1, 2, 3, are some constants not depending on N . It is clear that the fact
that C3 is independent of N is of great importance, and in [23], this is shown based
on a truncated observability inequality. It follows from (8.46) that provided that N
is sufficiently large, one has


Y (1)
≤ e−μ
Y0
.

Iterating this procedure, one gets an exponentially decaying solution. Then via
the dynamic programming principle, a Riccati feedback stabilizing controller is
designed. As noticed above, the Riccati-based controls are not the best ones, from a
practical point of view, since the algebraic Riccati equations require a large number
of hard computations. Here we propose simple proportional-type ones. The results
in this chapter were published in the author’s work [107], except those concerning
the observer design, which are new.
Of course, one may try to stabilize Eq. (8.1) by a Dirichlet boundary control, in the
same manner in which we did so for the Neumann boundary case. Namely, consider
the following problem:
184 8 Stabilization of Unsteady States

⎪ ∂t y(t, x) = yx x (t, x) + a(t, x)yx (t, x) + b(t, x)y(t, x) + σ (t, x, y(t, x)),


⎨ 0 < x < L , t > 0,

⎪ y(t, 0) = u(t), y(t, L) = 0, t ≥ 0,


y(0, x) = yo (x) for x ∈ [0, L].
(8.47)
This situation corresponds to Examples 2.3–2.5. So the feedback v, in this case,
should look like
v(w) := v1 (w) + · · · + v N (w), (8.48)

where ⎛ ⎞ ⎛ (ϕ1 )x (0) ⎞


w, ϕ1 
 γk −μ1
⎜ w, ϕ2  ⎟ ⎜ ⎜
(ϕ2 )x (0) ⎟

vk (w) := A ⎜ ⎟
⎝ ......... ⎠ , ⎜
γk −μ2 ⎟ . (8.49)
⎝ ......... ⎠
w, ϕ N  (ϕ N )x (0)
γk −μ N N

In this case,
  
j 2π 2 2 jπ x
μj = and ϕ j = sin , j = 1, 2, 3, . . . ,
L2 L L

are the eigenvalues and eigenfunctions of the Dirichlet Laplacian

Ay = −yx x , y ∈ D(A) = H 2 (0, L) ∩ H01 (0, L),

respectively.
Now let us compare the form of the vk in the Neumann case given in (8.9) with
the Dirichlet case from (8.49), the first of which involves the quantities
   
2 ( j − 1)π 0 2
ϕ j (0) = cos = ,
L L L

whereas the latter one involves


   
2 j jπ 0 2√
(ϕ j )x (0) = cos = μj.
LL L L

That is, now an additional μ j appears in the form of the feedback law. Consequently,
the analogue of the estimate (7.48) reads, in this case, as
⎛ ⎞
 w, ϕ1  
 
√ ⎜ w, ϕ2  ⎟
|u k (w)| ≤ C N ⎜ ⎟
⎠ .
4−α
N ⎝ (8.50)
 ......... 
 w, ϕ N  
8.3 Comments 185

So it gains an extra N . This is bad. Moreover, if we look at the Gram matrix B, given
now as ⎛√ √ √ ⎞
μ1 μ1 μ1 μ2 ... μ1 μ N
√ √ √
2 ⎜ μ2 μ1 μ2 μ2 ... μ2 μ N ⎟
B := ⎜ ⎝
⎟,

L √ ........................................................
√ √
μ N μ1 . . . μ N μ N −1 μ N μ N

and argue as in (7.43)–(7.47), we may show that the best we can get is

1 N2
=
B1 + · · · + B N
≤ C 2α−1 .
λ1 (A) N

This has bad repercussions for estimating |qi j | in (7.56)–(7.57),

1
e−cN t , t ≥ 0.
2
|qi j (t)| ≤ C
α− 25
N

That is, we gain an extra N 2 in the estimates. Overall, in the final estimates we will
have an extra N 3 . This is too much to be able to manipulate the powers of N to obtain
relations such as (7.70)–(7.71).
In conclusion, it is not straightforward to pass from the Neumann case to the
Dirichlet one. Further subtle estimates of the quantities and properties of the feedback
law must be deduced.
Chapter 9
Internal Stabilization of Abstract
Parabolic Systems

In this chapter, we will reconsider the abstract parabolic equation framework from
Chap. 2. This time, we will design an internal stabilizing proportional-type actuator.
As in the boundary case, the feedback laws are of finite-dimensional nature, given
in a simple form, and easy to manipulate from the computational point of view. And
since we formulate the results in an abstract form, it is clear that for different types
of precise models satisfying the imposed abstract hypotheses, these can be applied
to the stabilization problem.

9.1 Presentation of the Problem

For the reader’s convenience, we will restate the abstract formulation from Chap. 2.
Let O be an open bounded domain in Rd , d ∈ N∗ , with smooth boundary ∂O. We
denote by  ·  the norm in L2 (O). Let A be a closed and densely defined linear
differential operator on L2 (O), with domain D(A); and let F0 : D(F0 ) ⊂ D(A) →
L2 (O) be a nonlinear differential operator. We assume that
(1) −A generates a C0 -analytic semigroup on L2 (O).
(2) For all y, ŷ ∈ D(A), there exists the limit

1 
F0 (ŷ)(y) := lim F0 (ŷ + λy) − F0 (ŷ) ,
λ→0 λ

in L2 (O). Moreover, F0 (0) = 0, and for for some α ∈ (0, 1) and C > 0, we have

F0 (ŷ)y ≤ αAy + Cy, ∀y ∈ D(A). (9.1)

© Springer Nature Switzerland AG 2019 187


I. Munteanu, Boundary Stabilization of Parabolic Equations, Progress in
Nonlinear Differential Equations and Their Applications 93,
https://doi.org/10.1007/978-3-030-11099-4_9
188 9 Internal Stabilization of Abstract Parabolic Systems

Now fix some ŷ ∈ D(A) and introduce the linear operator

A := A + F0 (ŷ), D(A) = D(A). (9.2)

It is easy to see that A is closed, densely defined, and −A generates a C0 -semigroup


on L2 (O). In addition, we assume that
(3) The resolvent (λI − A)−1 of A is compact in L2 (O).
Hypothesis (3) implies that the operator A has a countable set of eigenvalues λj , j ∈
N∗ (repeated according to their multiplicity), and corresponding eigenvectors ϕj , j ∈
N∗ , i.e.,
Aϕj = λj ϕj , j ∈ N∗ .

Besides this, given ρ > 0, there is a finite number N of eigenvalues such that


λj < ρ, j = 1, 2, . . . , N , while
λj ≥ ρ, j = N + 1, N + 2, . . . (9.3)

We assume that
(4) Each unstable eigenvalue λj , j = 1, 2, . . . , N , is semisimple.
 N
It is easily seen that the finite part of the spectrum λj j=1 can be separated from
the rest of the spectrum by a rectifiable curve ΓN in the complex space C. Set Xu to
 N
be the linear space generated by the eigenfunctions ϕj j=1 , that is,

 N
Xu := lin span ϕj j=1 .

Then the operator PN : L2 (O) → Xu defined by



1
PN := (λI − A)−1 d λ (9.4)
2π i ΓN

is known as the algebraic projection of L2 (O) onto Xu . It is easy to see that the
operator
Au := PN A
 N
maps the space Xu into itself and σ (Au ) = λj j=1 . More precisely, Au : Xu → Xu
is finite-dimensional and can be represented by an N × N matrix.
If A∗ is the dual operator of A, then its eigenvalues are precisely {λj }j∈N∗ , and the
corresponding eigenfunctions are

A∗ ϕj∗ = λj ϕj∗ , j ∈ N∗ .
9.1 Presentation of the Problem 189

The dual PN∗ of PN is given by



1
PN∗ = (λI − A∗ )−1 d λ,
2π i ΓN

 
while XN∗ = lin span{ϕj∗ }Nj=1 = PN∗ L2 (O) .
Via the Schmidt orthogonalization procedure, it
follows by hypothesis (4) that
∗ N
one can find a biorthogonal system {ϕj }j=1 , {ϕj }j=1 of eigenfunctions of A corre-
N

sponding to the first eigenvalues {λj }Nj=1 , i.e.,

ϕj , ϕj∗ = δij , i, j = 1, 2, . . . , N . (9.5)

Consider the Cauchy problem

dy
+ Ay + F0 (y) = 0, t > 0; y(0) = yo , (9.6)
dt

and let ŷ be an equilibrium solution to the system (9.6), i.e., ŷ satisfies

A(ŷ) + F0 (ŷ) = 0.

By defining the fluctuation variable z := y − ŷ, the stability of ŷ can be equiva-


lently reduced to the stability of the null solution to the equation

⎨ ∂ z + Az + G(z) = 0 in (0, ∞) × O,
∂t (9.7)

z(0) = zo := yo − ŷ in O,

where
G(z) := F0 (z + ŷ) − F0 (ŷ) − F0 (ŷ)(z). (9.8)

We consider an open subdomain O0 ⊂ O, and associate with (9.7) the control


system ⎧
⎨ ∂ z + Az + G(z) = 1 u in (0, ∞) × O,
O0
∂t (9.9)

z(0) = zo in O,

where 1O 0 is the characteristic function of the set O0 .


We will construct two stabilizing feedback laws for (9.9). The first approach
is from Barbu [10], and it is directly related to the proportional controllers. More
precisely, u is given as
N
u(t) = −η z(t), ϕj∗ Φj , (9.10)
j=1
190 9 Internal Stabilization of Abstract Parabolic Systems
 
where Φj is a system of functions such that

Φi , 1O 0 ϕj∗ = δij , i, j = 1, 2, . . . , N .

Such a system can be found in the form


N
Φi = αki ϕk∗ , i = 1, 2, . . . , N ,
k=1

where αki ∈ C are chosen to form the system


N
αki ϕk∗ , 1O 0 ϕj∗ = δij , i, j = 1, . . . , N .
k=1

By the unique continuation of the eigenfunctions, it follows immediately that the


above system has a unique solution (a fact that was not true for the boundary case).
Assuming that
η ≥ γ −
λj , j = 1, 2, . . . , N ,

one may show that once u, given by (9.10), is plugged into the Eq. (9.9), we obtain
its stability. The method of proof for this is classical, and it has been used frequently
throughout this book. Briefly, one first considers the linearization part and splits
the system in two: the stable and unstable parts. Concerning the finite-dimensional
unstable part, simple computations lead to

d
zi + λi zi = −ηzi , i = 1, 2, . . . , N ,
dt
where zi = z, ϕi , i = 1, 2, . . . , N . Then the stability of the linearized part follows
immediately (for details, see [10, Theorem 2.3]). Then via a fixed-point argument,
local stability can be deduced as well (see [10, Sect. 2.5]).
The second feedback law, which we propose for stabilization, involves the sign
function, and it reads as


N
u(t) := −η sign( PN z(t), ϕj∗ )PN Φj , (9.11)
j=1

where sign is the multivalued function on C defined by


 z
|z|
,
if z = 0,
sign(z) := (9.12)
{w ∈ C : |w| ≤ 1} , if z = 0,
9.1 Presentation of the Problem 191

and Φj ∈ L2 are defined by


N
Φj := αjk ϕk∗ , j = 1, . . . , N ,
k=1

with

N
αik ϕk∗ , ϕj∗ 0 = δij , i, j = 1, . . . , N . (9.13)
k=1

As a matter of fact, the only difference between the feedback laws (9.10) and
(9.11) is the presence of the sign function. The reason to introduce this function is,
as we will see below, that it allows us to obtain a stronger result, namely, that in finite
time, the solution z belongs to the stable space Xs .
Theorem 9.1 below amounts to saying that for η sufficiently large, the feedback
controller (9.11) is exponentially stabilizing, with exponent −γ , in the linearized
system, and it steers zo into Xs in a finite time T > 0.
Theorem 9.1 Let ρ > 0 and zo ∈ L2 (O) be such that zo  ≤ ρ. Then the closed-
loop system

⎨ dz + Az + η  sign( P z, ϕ ∗ )P (mΦ ) = 0, t ≥ 0,
N

N j N j
dt (9.14)

⎩ j=1
z(0) = zo ,

where m = 1O 0 has a unique solution

z ∈ L∞ (0, t; L2 (O)) ∩ L2 (δ, t; D(A)), ∀0 < δ < t,

such that for T > 0 arbitrary but fixed and η such that
 

λj
η ≥ ρ max , (9.15)
1≤j≤N e
λj T − 1

we have
PN z(t) = 0, ∀t ≥ T , (9.16)

and
z(t) ≤ Ce−γ t z0 , ∀t ≥ T , (9.17)

for some C > 0. If for some j ∈ {1, . . . , N }, we have


λj = 0, then in (9.15) we take

λj
e
λj T −1
to be T1 .
Remark 9.1 In Theorem 9.1, N can be taken arbitrarily large. In fact, for each N
there exists γ such that max
λj ≤ γ , and for η satisfying (9.15), relations (9.16),
1≤j≤N
192 9 Internal Stabilization of Abstract Parabolic Systems

(9.17) hold. Roughly speaking, this means that for each N , there is a controller of
the form (9.11) that steers Yo into Xs .

Proof We apply the projector PN to the system (9.14) and obtain that


⎨ dzu N
+ Au zu + η sign( zu , ϕi∗ )PN (mΦi ) = 0, t ≥ 0,
dt (9.18)

⎩ i=1
zu (0) = PN zo ,


N
where zu := PN z. If we decompose zu as zu = zj ϕj , introduce it into (9.18), and
j=1
scalar multiply by ϕj∗ the Eq. (9.18), we get that

⎨ dzj + λ z + ηsignz = 0, ∀t ≥ 0,
j j j
dt (9.19)

zj (0) = zjo ,

for all j = 1, . . . , N . Here we have used the relations (9.5) and (9.13).
It should be said that the multivalued ordinary differential system (9.19) is well
posed, because the multivalued function z → signz is maximal monotone on C.
 N
Hence there is a unique absolutely continuous solution zj j=1 to the system (9.19).
Moreover, if we take account the relation

signz · z = |z|, ∀z ∈ C,

we have by (9.19) that

1d
|zj (t)|2 +
λj |zj (t)|2 + η|zj (t)| = 0, t ≥ 0.
2 dt
This yields
d
|zj (t)| +
λj |zj (t)| + η = 0, t ≥ 0,
dt
and therefore
η 
λj t 
e
λj t |zj (t)| − |zj (0)| + e − 1 = 0, ∀t ≥ 0. (9.20)

λj

It is easy to see that since η satisfies (9.15), we have


η 
λj t 
e − 1 − |zj (0)| ≥ 0, ∀t ≥ T ,

λj

which together with (9.20), implies immediately relation (9.16).


9.1 Presentation of the Problem 193

Next, we apply to the system (9.14) the projector I − PN , and get that

⎨ dzs + A z = 0, t ≥ 0,
s s
dt (9.21)

zs (0) = (I − PN )zo ,

where zs := (I − PN )z. We have that

zs (t) ≤ Ce−γ t (I − PN )zo  ≤ Ce−γ t zo , ∀t ≥ 0.

This, together with relation (9.16), implies (9.17), as desired. 

Remark 9.2 As mentioned, the results obtained above hold as well in the case in
which the eigenvalues are not necessarily semisimple. Indeed, let us assume, for
example, that the matrix  Aϕi , ϕj Ni,j=1 has the form
⎛ ⎞
λ1 0 0 0 . . . 0
⎜ 1 λ1 0 0 . . . 0 ⎟
⎜ ⎟
 Aϕi , ϕj Ni,j=1 =⎜
⎜ 0 0 λ3 0 . . . 0 ⎟ .
⎟ (9.22)
⎝ ....................... ⎠
0 0 0 0 . . . λN

Thus in this case, (9.19) has the form




⎪ dz1

⎪ + λ1 z1 + ηsignz1 = 0, t ≥ 0; z1 (0) = z1o ,

⎪ dt

dz2
+ z1 + λ1 z2 + ηsignz2 = 0, t ≥ 0; z2 (0) = z2o , (9.23)

⎪ dt


⎪ dzj

⎩ + λj zj + ηsignzj = 0, t ≥ 0; zj (0) = zjo , j = 3, . . . , N .
dt
Just as in the proof of Theorem 9.1, one can obtain that
η
λj t
e
λj t |zj (t)| − |zj (0)| + (e − 1) = 0, t ≥ 0, j = 1 and j = 3, 4, . . . , N .

λj
(9.24)
It follows, in particular, that

e
λ1 t |z1 (t)| ≤ |z1 (0)| ≤ ρ. (9.25)

We multiply the second equation of (9.23) by z̄2 and take the real part of the result
to obtain that
1d
|z2 |2 +
λ1 |z2 |2 + η|z2 | = −
(z1 z̄2 ).
2 dt
194 9 Internal Stabilization of Abstract Parabolic Systems

This yields


λ1 t e
λ1 t − 1 t
e |z2 (t)| − |z2 (0)| + η ≤ e
λ1 τ |z1 (τ )|d τ. (9.26)

λ1 0

Using relations (9.24), (9.25), and (9.26), it is easy to see that

zj (t) = 0, t ≥ T , j = 1, . . . , N ,
 

λj
if η ≥ ρ(1 + T ) max .
1≤j≤N e
λj T − 1
Now let us treat another case. Let us assume that
⎛ ⎞
λ1 1 0 0 . . . 0
⎜ 1 λ1 0 0 . . . 0 ⎟
⎜ ⎟
 Aϕi , ϕj Ni,j=1 = ⎜
⎜ 0 0 λ3 0 . . . 0 ⎟ .
⎟ (9.27)
⎝ ....................... ⎠
0 0 0 0 . . . λN

We get immediately that


⎧d

⎪ |z1 | +
λ1 |z1 | + η ≤ |z2 |, t ≥ 0,




dt

d
|z2 | +
λ1 |z2 | + η ≤ |z1 |, t ≥ 0, (9.28)

⎪ dt




⎩ d |z | +
λ |z | + η = 0, t ≥ 0, j = 3, . . . , N .
j j j
dt
We sum the first two equations of (9.28) to see that

d
(|z1 | + |z2 |) + (
λ1 − 1)(|z1 | + |z2 |) + 2η ≤ 0. (9.29)
dt
It is easy to observe that if
  
1
λ1 − 1
λj
η ≥ ρ max ; max ,
2 e(
λ1 −1)T − 1 3≤j≤N e
λj T − 1

then via (9.28) and (9.29), we have

zj (t) = 0, t ≥ T , j = 1, . . . , N ,

as desired.
9.1 Presentation of the Problem 195

We conclude that when the unstable eigenvalues are not necessarily semisimple,
one can choose η > 0 in an appropriate way, sufficiently large, to obtain the same
results as in Theorem 9.1.

9.2 Stabilization of the Full Nonlinear Equation (9.9)

In what follows, we will consider γ to be 0, and so N is such that Xs is generated


by the eigenfunctions corresponding to the stable eigenvalues, i.e.,
λj > 0, j =
N +1, N + 2, . . .. Hence in this case,
λj ≤ 0, for j = 1, . . . , N . We define β :=
min
λj , j = N + 1, . . . . Thus in this case, we have the following estimate for the
operator As :
e−As t L(L2 (O ),L2 (O )) ≤ Ce−βt , ∀t ≥ 0. (9.30)

The main result of this section is the next theorem, which amounts to saying that the
feedback controller (9.11) exponentially stabilizes the nonlinear system (9.9), and
just as for the linear equation, it steers zo into Xs in finite time T > 0.

Theorem 9.2 Let T , ρ > 0 be sufficiently small. For each zo ∈ W such that zo W ≤
ρ, the problem

⎨ dz + Az + η  sign( P z, ϕ ∗ )P (mΦ ) + Gz = 0, t ≥ 0,
N

N j N j
dt (9.31)

⎩ j=1
z(0) = zo ,

is well posed on W with unique solution z ∈ C([0, ∞); W ) ∩ L2 (0, ∞; Z ), if η is


such that ⎧  ⎫

λj kϕj∗  + ρ ⎬
η ≥ max . (9.32)
1≤j≤N ⎩ e
λj T − 1 ⎭

Moreover, these solutions satisfy

PN z(t) = 0, ∀t ≥ T , (9.33)

and
z(t) ≤ Ce−βt z0 , ∀t ≥ T , (9.34)

for some C > 0.


Here k is given by relation (9.39) below, and W = L2 (O) and Z = H 1 (O). If

λj
for some j ∈ {1, . . . , N }, we have
λj = 0, then in (9.32) we take e
λj T −1 to be T1 .
196 9 Internal Stabilization of Abstract Parabolic Systems

Proof For r ≤ 1, let us introduce the ball of radius r centered at the origin of the
space L2 (0, ∞; Z ):
  1 !
∞ 2
S(0, r) := f ∈ L (0, ∞; Z ) : f L2 (0,∞;Z ) =
2
f (t)2Z dt ≤r .
0

For all Z ∈ S(0, r), let us consider the system



⎨ dz + Az + η  sign( P z, ϕ ∗ )P (mΦ ) = −GZ, t ≥ 0,
N

N j N j
dt (9.35)

⎩ j=1
z(0) = zo .

The idea of the proof is as follows: we show that for all Z ∈ S(0, r), problem
(9.35) has a solution zZ ∈ S(0, r) for T , ρ, and r sufficiently small. Moreover,
we show that PN zZ (t) = 0, ∀t ≥ T . Then we denote by Γ the operator that asso-
ciates Z to the solution zZ . In doing so, we get that Γ : S(0, r) → S(0, r) is a
contraction on S(0, r), for T , r sufficiently small. It follows that there exists a
unique solution z ∈ S(0, r) for (9.31). Next, we show that z ∈ C([0, ∞); W ) and
z(t) ∈ B(0, b) := {f ∈ W : f W ≤ b} , t ≥ T , for some b > 0, which will imply
the claimed exponential decay (9.34).
By Theorem 9.1, one can easily deduce that

e−As t L(W ) ≤ Ce−δt , t ≥ 0, (9.36)

and  ∞
e−As t W 2Z ≤ cW 2W , ∀W ∈ W , (9.37)
0

for some c > 0.


Next, we define  t
(N Z)(t) := e−As (t−τ ) (GZ)(τ )d τ. (9.38)
0

About the nonlinearity G, we will assume that it is of second-order type, namely

GZW ≤ kZ2Z , ∀Z ∈ Z , (9.39)

for some k > 0, and

GZ1 − GZ2 W ≤ k (Z1 Z + Z2 Z ) Z1 − Z2 Z , ∀Z1 , Z2 ∈ S(0, r). (9.40)

By duality, we have that for Z ∈ L2 (0, ∞; Z  ), Z  is the dual of Z ,


9.2 Stabilization of the Full Nonlinear Equation (9.9) 197
 ∞  ∞ " t #
N Z(t), ζ (t) dt = e−As (t−τ ) (GZ)(τ )d τ, ζ (t) dt
0 0 0
 ∞ t
$ −A (t−τ ) $
≤ $e s (GZ)(τ )$Z ζ (t)Z  d τ dt
 ∞  ∞
0 0

$ −A (t−τ ) $
= $e s (GZ)(τ )$Z ζ (t)Z  dt d τ
0 τ
 ∞ % ∞ & 21
$ −A (t−τ ) $2
≤ $e s (GZ)(τ )$ dt Z
0 τ
% ∞ &1 !
2
(9.41)
× ζ (t)2Z  dτ
0
 % & 21
∞ ∞ $ −A (t−τ ) $2
= ζ (t)L2 (0,∞;Z  ) $e s (GZ)(τ )$Z dt dτ
0 τ
(by (9.37))
 ∞
≤ Cζ (t)L2 (0,∞;Z  ) GZ(τ )W d τ
0
(by (9.39))
≤ Cζ (t)L2 (0,∞;Z  ) Z(t)2L2 (0,∞;Z ) .

Therefore,
(N Z)(t)L2 (0,∞;Z ) ≤ Cr 2 , (9.42)

for all Z ∈ S(0, r).


In a similar manner, one can show as well that

N Z1 − N Z2 2L2 (0,∞;Z ) ≤ 4ck 2 r 2 Z1 − Z2 2L2 (0,∞;Z ) , ∀Z1 , Z2 ∈ S(0, r). (9.43)

Finally, we define

(ΛZ)(t) := e−As t (I − PN )zo + (N Z)(t). (9.44)

By (9.36) and (9.42), we deduce that


'  (
∞ 2
ΛZ2L2 (0,∞;Z ) ≤ C (I − PN )zo 2W + Z(t)2Z
0 (9.45)
≤ C(ρ + r ), ∀Z ∈ S(0, r).
2 4

With these key results in hand, we can proceed with the proof.
To prove that there exists a solution to the equation (9.35), one can argue as in the
proof of the Theorem 9.1, using the fact that the function sign is maximal monotone
on C. Next, one may show that this solution remains in S(0, r), for r sufficiently
198 9 Internal Stabilization of Abstract Parabolic Systems

small, and it satisfies (9.33) and (9.34). Then one applies the projector PN to (9.35)
and gets that

1d  
|zj (t)|2 +
λj |zj (t)|2 + η|zj (t)| = −
GZ, ϕj∗ z̄j , t ≥ 0, (9.46)
2 dt
N
for all j = 1, . . . , N , where PN z = zj ϕj . Next, using the Schwarz inequality, it
j=1
follows that
  ) )) ) ) )
) )

GZ, ϕj∗ z̄j ≤ ) GZ, ϕj∗ ) )zj ) ≤ GZϕj∗  )zj ) .

This, together with (9.46), yields

d
|zj (t)| +
λj |zj (t)| + η ≤ GZϕj∗ , ∀t ≥ 0, (9.47)
dt

for all j = 1, . . . , N . Multiplying (9.47) by e


λj τ and integrating over (0, t), we obtain
that
 t  t

λj t
λj τ
e |zj (t)| − |zj (0)| + ηe dτ ≤ e
λj τ GZ(τ )ϕj∗ d τ, t ≥ 0, (9.48)
0 0

for all j = 1, . . . , N . Now using estimate (9.39), we get that


 t  t
e
λj τ GZ(τ )ϕj∗ d τ ≤ e
λj τ GZ(τ )W ϕj∗ d τ (9.49)
0 0
 t  ∞
≤ kϕj∗  e
λj τ Z(τ )2Z d τ ≤ kϕj∗  Z(τ )2Z ≤ kr 2 ϕj∗ , (9.50)
0 0

since Z ∈ S(0, r). Hence (9.49)–(9.50) and (9.46) yield

e
λj t − 1  2 ∗ 
e
λj t |zj (t)| + η − kr ϕj  + |zj (0)| ≤ 0, ∀t ≥ 0, (9.51)

λj

for all j = 1, . . . , N . It is easy to see that if η satisfies relation (9.32), we get that
|zj (t)| = 0, ∀t ≥ T , for all j = 1, . . . , N . Moreover, we get also from (9.51) that
 
|zj (t)| ≤ e−
λj T kr 2 ϕj∗  + ρ , 0 ≤ t ≤ T . (9.52)

We choose T > 0 sufficiently small that

N   2  r 2
hT |λj |e−2
λj T kr 2 ϕj∗  + ρ ≤ , (9.53)
j=1
4
9.2 Stabilization of the Full Nonlinear Equation (9.9) 199

where h > 0 is given by the following relation between the norms:


1
 · H 1 (O ) ≤ hA 2 · . (9.54)

Thus one can obtain via (9.52), (9.54), and (9.53) that
 ∞  T  T 
N
PN z(t)2Z dt = PN z(t)2Z dt ≤ h |λj ||zj (t)|2 dt (9.55)
0 0 0 j=1
N 
  2 r2
≤ hT |λj |e−2
λj T kr 2 ϕj∗  + ρ ≤ . (9.56)
j=1
4

Now applying the projector I − PN to (9.35), we get that

d
zs + As zs + (I − PN )GZ = 0, t ≥ 0; zs (0) = (I − PN )zo , (9.57)
dt
where zs = (I − PN )z. Using the variation of constants formula, we have that
 t
zs (t) = e−As t (I − PN )zo + e−As (t−τ ) (I − PN )GZ(τ )d τ, t ≥ 0. (9.58)
0

It is easy to see that by making use of the relations (9.58), (9.38), and (9.44), we have
the equality
zs (t) = (ΛZ)(t), ∀t ≥ 0.

Thus from (9.45), we obtain that

zs 2L2 (0,∞;Z ) ≤ C(ρ 2 + r 4 ). (9.59)

Taking ρ and r sufficiently small that

r2
C(ρ 2 + r 4 ) ≤ , (9.60)
4
we get from (9.59) that

r2
(I − PN )z2L2 (0,∞;Z ) = zs 2L2 (0,∞;Z ) ≤ . (9.61)
4
Finally, we conclude that if T , ρ, and r are small enough that they satisfy relations
(9.53) and (9.60), we have
200 9 Internal Stabilization of Abstract Parabolic Systems
 ∞  ∞  
z2L2 (0,∞;Z ) = z(t)2Z dt ≤ 2 PN z(t)2Z + (I − PN )z(t)2Z dt
0 0
 2 2
r r
≤2 + = r2,
4 4

if we take account of the relations (9.55)–(9.56) and (9.61). This means that the
solution z remains in the ball S(0, r). Hence if we denote by Γ the operator that
associates Z to the corresponding solution z to the system (9.35), we have that Γ
maps the ball S(0, r) into itself. Therefore, in order to complete the proof, it is enough
to show that Γ is a contraction on S(0, r). To this end, we have the following. Let
Z1 , Z2 be two functions in S(0, r), and z1 , z2 ∈ S(0, r) the corresponding solutions
to the system (9.35). Consequently, z1 and z2 satisfy

⎨ dz1 + Az + η  sign( P z , ϕ ∗ )P (mΦ ) = −GZ , t ≥ 0,
N

1 N 1 j N j 1
dt (9.62)

⎩ j=1
z1 (0) = zo

and

⎨ dz2 + Az + η  sign( P z , ϕ ∗ )P (mΦ ) = −GZ , t ≥ 0,
N

2 N 2 j N j 2
dt (9.63)

⎩ j=1
z2 (0) = zo .

Applying, N PN to (9.62) and (9.63), and decomposing


N as before, the projector
PN z1 = j=1 z1j ϕj and PN z2 = j=1 z2j ϕj , we get that


⎨ d z + λ z + ηsign(z ) = − GZ , ϕ ∗ , t ≥ 0,
1j j 1j 1j 1 j
dt (9.64)

z1j (0) = zjo ,

and ⎧
⎨ d z + λ z + ηsign(z ) = − GZ , ϕ ∗ , t ≥ 0,
2j j 2j 2j 2 j
dt (9.65)

z2j (0) = zj ,
o

for all j = 1, . . . , N . Subtracting (9.64) and (9.65), we obtain




⎪ d      
⎨ dt z1j − z2j + λj z1j − z2j + η sign(z1j ) − sign(z2j )

= − GZ1 − GZ2 , ϕj∗ , t ≥ 0, (9.66)



⎩ 
z1j − z2j (0) = 0,
9.2 Stabilization of the Full Nonlinear Equation (9.9) 201

for all j = 1, . . . , N . Taking into account that sign is a maximal operator, we get
from (9.66) multiplied by z̄1j − z̄2j , that
⎧ )
⎨ d )z − z )) +
λ ))z − z )) ≤ | GZ − GZ , ϕ ∗ |, t ≥ 0,
1j 2j j 1j 2j 1 2 j
dt (9.67)
⎩ 
z1j − z2j (0) = 0,

for all j = 1, . . . , N . Hence



  t
e
λj t | z1j − z2j (t)| ≤ e
λj τ | (GZ1 − GZ2 ) (τ ), ϕj∗ |d τ
0
 t
≤ ϕj∗  GZ1 − GZ2 W
0
(9.68)
(using (9.40))
 t
≤ kϕj∗  {Z1 Z + Z2 Z } Z1 − Z2 Z dt
0
≤ 2krϕj∗ Z1 − Z2 L2 (0,∞;Z ) ,

for all j = 1, . . . , N . Hence


 
| z1j − z2j (t)| ≤ e−
λj T 2krϕj∗ Z1 − Z2 L2 (0,∞;Z ) , 0 ≤ t < T (9.69)

and  
| z1j − z2j (t)| = 0, ∀t ≥ T ,

for all j = 1, . . . , N .
In the same manner as in relation (9.55), we obtain, via relation (9.69), that
 ∞
PN (z1 − z2 )(t)2Z dt
0
N %
  2 & (9.70)
≤ hT |λj |e−2
λj T 2krϕj∗  Z1 − Z2 2L2 (0,∞;Z ) .
j=1

To obtain estimates for (I − PN )(z1 − z2 ), we apply the projector (I − PN ) to


(9.62) and (9.63), use the variation of constants formula as above, and get that
 t
(I − PN )(z1 − z2 )(t) = e−As (t−τ ) (I − PN )(GZ1 − GZ2 )(τ )d τ, t ≥ 0. (9.71)
0

Using (9.43), we obtain that

(I − PN )(z1 − z2 )2L2 (0,∞;Z ) ≤ 4ck 2 r 2 Z1 − Z2 2L2 (0,∞;Z ) . (9.72)


202 9 Internal Stabilization of Abstract Parabolic Systems

Now (9.70) and (9.72) together yield

z1 − z2 2L2 (0,∞;Z )


⎧ ⎫
⎨  N   2 ⎬ (9.73)
≤ 2 hT |λj |e−2
λj T 2krϕj∗  + 4ck 2 r 2 Z1 − Z2 2L2 (0,∞;Z ) .
⎩ ⎭
j=1

Thus if we take T and r sufficiently small that


⎧ ⎫
⎨  N   2 ⎬
2 hT |λj |e−2
λj T 2krϕj∗  + 4ck 2 r 2 < μ2 < 1, (9.74)
⎩ ⎭
j=1

we get that

Γ Z1 − Γ Z2 L2 (0,∞;Z ) ≤ μZ1 − Z2 L2 (0,∞;Z ) , ∀Z1 , Z2 ∈ S(0, r),

with μ < 1. Hence Γ is a contraction on S(0, r), as desired.


We conclude that if T , r, ρ are small enough that they satisfy relations (9.53),
(9.60), and (9.74), then via the contraction mapping principle, there exists a unique
solution z ∈ S(0, r) ⊂ L2 (0, ∞; Z ) to equation (9.31) that satisfies relation (9.33).
Next, we want to show that this solution z is in C([0, ∞); W ) as well. To this end,
we take into account that zj ∈ C([0, T ], C) and zj (t) = 0, t ≥ T , for all j = 1, . . . , N .
Hence
PN z ∈ C([0, ∞); W ).

Moreover, since e−As t is an analytic semigroup on W , uniformly stable there, by


(9.36), it follows by convolution of the definition of the operator N that

(I − PN )z ∈ C([0, ∞); W ).

Thus z ∈ C([0, ∞); W ), as claimed. Finally, by (9.45), we have for all t ≥ T , taking
into account that z = (I − PN )z,

z(t)W ≤ Cρ + Cr 2 . (9.75)

Thus z(t) ∈ S(0, b) for all t ≥ T , where b := Cρ + Cr 2 .


This then yields that  ∞
z(t)2Z dt ≤ Kzo 2W ,
T

from which, using the classical strategy for nonlinear autonomous systems [27,
p. 178], we get the claimed exponential decay (9.34). 
9.3 The Design of a Real Stabilizing Feedback Controller 203

9.3 The Design of a Real Stabilizing Feedback Controller

In applications, it is convenient to design a real stabilizing controller of the form


(9.11). To do this, we consider again γ ≥ 0 to be arbitrary but fixed. We set {ψj }Nj=1 =
  N2

ϕj , ϕj j=1 (we assume, for simplicity, that all λj , 1 ≤ j ≤ N , are complex, and
so N is even). We set
Xˆu = linspan{ψj }Nj=1 ,

and denote by P̂N : H → Xˆu the algebraic projection on Xˆu .


We set also Xˆs = (I − PN )H , and

Âu = A|Xˆu , Âs = A|Xˆs .

We have, of course, Âu =


Au and Âs =
As . Moreover, we can orthogonalize
{ψj }Nj=1 , via the Gram–Schmidt procedure, and get thereby

ψj , ψi = δij , i, j = 1, . . . , N . (9.76)

Now we consider the feedback controller


N
u = −η sign( PN z, ψj PN Ψj , (9.77)
j=1

where

N
Ψj = αjk ψk , j = 1, . . . , N , (9.78)
k=1

and

N
αjk ψk , ψi 0 = δji , i, j = 1, . . . , N . (9.79)
k=1

(We can choose αjk in this way because the system {ψj }Nj=1 is linearly independent
 d
in L2 (O0 ) .)
Then substituting u into the linearized system, we have

⎨ d z + Az = −η  sign( P , ψ )P (m(Ψ )), t ≥ 0,
N

N j N j
dt (9.80)

⎩ j=1
z(0) = zo .

Arguing as in the proof of Theorem 9.1 and taking account of the fact that
204 9 Internal Stabilization of Abstract Parabolic Systems

Ψj , ψi = δij , i, j = 1, . . . , N ,

and
e−Âs t L(H ,H ) ≤ Ce−γ t , ∀t ≥ 0,

we deduce the following result.

Theorem 9.3 Let T , ρ > 0, and zo be such that zo  ≤ ρ. For 0 < η = η(T , ρ)
sufficiently large, we have for the solution z to the closed-loop system (9.80),

PN z(t) = 0, for all t ≥ T , (9.81)

and
z(t) ≤ Ce−γ t zo , ∀t ≥ T . (9.82)

Proof For simplicity, let us assume that N = 4. The other cases can be treated sim-
ilarly. We have
Ã(ϕ1 ) = Ã(ψ1 + iψ2 ) = Aψ1 + iAψ2 .

On the other hand, we have

Ã(ϕ1 ) = λ1 ϕ1 = λ1 (ψ1 + iψ2 ).

Hence
Aψ1 =
λ1 ψ1 − λ1 ψ2 and Aψ2 =
λ1 ψ2 + λ1 ψ1 . (9.83)

In the same manner, we get also that

Aψ3 =
λ2 ψ3 − λ2 ψ4 and Aψ4 =
λ2 ψ4 + λ2 ψ3 . (9.84)

Thus in this case, the finite-dimensional system

d  4
zu + Âu zu = −η sign( zu , ψj )PN (m(Ψj ))
dt j=1

reads as ⎧


d
z1 +
λ1 z1 + λ1 z2 = −ηsign(z1 ),



⎪ dt

⎪d


⎨ z2 +
λ1 z2 − λ1 z1 = −ηsign(z2 ),
dt (9.85)

⎪ d

⎪ z3 +
λ2 z3 + λ2 z4 = −ηsign(z3 ),

⎪ dt



⎪ d
⎩ z4 +
λ2 z4 − λ2 z3 = −ηsign(z4 ), ∀t ≥ 0.
dt
9.3 The Design of a Real Stabilizing Feedback Controller 205

Multiplying the first equation of (9.85) by z1 , the second by z2 , and summing them,
we get

1d
(|z1 |2 + |z2 |2 ) +
λ1 (|z1 |2 + |z2 |2 ) + η(|z1 | + |z2 |) = 0, ∀t ≥ 0.
2 dt
Hence
1d 1
(|z1 | + |z2 |)2 +
λ1 (|z1 | + |z2 |)2 + η(|z1 | + |z2 |) ≤ 0, ∀t ≥ 0.
4 dt 2
The same result can be obtained for the coefficients z3 and z4 . Now arguing as in the
proof of Theorem 9.1, one can obtain the desired result. The details are omitted. 

In the same manner, following the ideas in the proof of Theorem 9.2, one can
obtain for the nonlinear system

d  N
z + Az + Gz = −η sign( PN , ψj )PN (m(Ψj )), t ≥ 0; z(0) = zo , (9.86)
dt j=1

the following theorem.

Theorem 9.4 Let T , ρ > 0 be sufficiently small. For each zo ∈ W such that
zo W ≤ ρ, the problem (9.86) is well posed on W with the unique solution
z ∈ C([0, ∞); W ) ∩ L2 (0, ∞; Z ) if η = η(T , ρ) is large enough. Moreover, these
solutions satisfy
PN z(t) = 0, ∀t ≥ T , (9.87)

and
z(t) ≤ Ce−βt z0 , ∀t ≥ T , (9.88)

for some C, β > 0.

9.4 Comments

The stabilization problems presented above have been studied extensively over the
last six or seven years, and we refer to the works [11–13, 19, 30, 31, 50, 118,
119, 121], as well as to the book [10], for significant results in this direction. We
have presented here an internal stabilizing control design associated with abstract
parabolic-like equations. The proportional feedback law is similar to the one in
Chap. 2, but reconsidered for the internal case. A similar result was published in
206 9 Internal Stabilization of Abstract Parabolic Systems

Barbu and Munteanu [20]. However, the abstract setting discussed here is new. It
should be mentioned that these results are connected with those in [123], where
the exact controllability in projections for the Navier–Stokes equations is obtained.
However, there is no overlap, and the technique used here is completely different.
The proof of the stability of the nonlinear system is based mainly on the ideas in
[19].
References

1. Aamo OM, Krstic M, Bewley TR (2003) Control of mixing by boundary feedback in a 2D-
channel. Automatica 39:1597–1606
2. Agranovich MS (2015) Sobolev spaces, their generalizations and elliptic problems in smooth
and lipschitz domains. Springer, New York
3. Amendola G, Fabrizio M, Golden JM (2012) Thermodynamics of materials with memory:
theory and applications. Springer, New York
4. Ammari K, Duyckaerts T, Shirikyan A (2016) Local feedback stabilisation to a nonstationary
solution for a damped non-linear wave equation. Math Control Relat Fields 6(1):1–5
5. Arnold L (1979) A new example of an unstable system being stabilized by random parameter
noise. Inform Comm Math Chem 133–140
6. Arnold L, Crauel H, Wihstutz V (1983) Stabilization of linear systems by noise. SIAM J
Control Optim 21:451–461
7. Barbu V (1975) Nonlinear semigroups and differential equations in Banach spaces. Noordhoff,
Leyden
8. Barbu V (2003) Internal stabilization of the phase filed system. Adv Autom Control 754:1–8
9. Barbu V (2007) Stabilization of a plane channel flow by wall normal controllers. Nonlin Anal
Theory-Methods Appl 67(9):2573–2588
10. Barbu V (2010) Stabilization of Navier-Stokes flows. Springer, New York
11. Barbu V (2010) Stabilization of a plane channel flow by noise wall normal controllers. Syst
Control Lett 59(10):608–614. Barbu V (2010) Optimal stabilizable feedback controller for
Navier-Stokes equations. Contemp Math 513:43–53
12. Barbu V (2012) Stabilization of Navier-Stokes equations by oblique boundary feedback con-
trollers. SIAM J Control Optim 50(4):2288–2307
13. Barbu V (2013) Boundary stabilization of equilibrium solutions to parabolic equations. IEEE
Trans Autom Control 58:2416–2420
14. Barbu V (2018) Controllability and stabilization of parabolic equations. Birkhäuser Basel
15. Barbu V, Bonaccorsi S, Tubaro L (2014) Existence and asymptotic behavior for hereditary
stochastic evolution equations. Appl Math Optim 69:273–314
16. Barbu V, Colli P, Gilardi G, Marinoschi G, Rocca E (2017) Sliding mode control for a nonlinear
phase-field system. SIAM J Control Optim 55(3):2108–2133
17. Barbu V, Colli P, Gilardi G, Marinoschi G (2017) Feedback stabilization of the Cahn-Hilliard
type system for phase separation. J Diff Eqs 262:2286–2334


18. Barbu V, Iannelli M (2000) Controllability of the heat equation with memory. Diff Int Eqs
13:1393–1412
19. Barbu V, Lasiecka I, Triggiani R (2006) Abstract settings for tangential boundary stabilization
of Navier-Stokes equations by high- and low-gain feedback controllers. Nonlin Anal 64:2704–
2746
20. Barbu V, Munteanu I (2012) Internal stabilization of Navier-Stokes equation with exact con-
trollability on spaces with finite codimension. Evol Eqs Control Theory 1(1):1–16
21. Barbu V, Da Prato G (2012) Internal stabilization by noise of the Navier-Stokes equation.
SIAM J Control Optim 49(1):1–20
22. Barbu V, Röckner M (2015) An operatorial approach to stochastic partial differential equations
driven by linear multiplicative noise. J Eur Math Soc 17:1789–1815
23. Barbu V, Rodrigues SS, Shirikyan A (2011) Internal exponential stabilization to a nonstation-
ary solution for 3D Navier-Stokes equations. SIAM J Control Optim 49(4):1454–1478
24. Barbu V, Triggiani R (2004) Internal stabilization of Navier-Stokes equations with finite
dimensional controllers. Indiana Univ Math J 53:1443–1469
25. Barbu V, Wang G (2003) Internal stabilization of semilinear parabolic systems. J Math Anal
Appl 285:387–407
26. Barbu V, Iannelli M (2000) Controllability of the heat equation with memory. Differ Integr
Eqs 13:1393–1412
27. Balogh A, Krstic M (2002) Infinite dimensional backstepping-style feedback transformations
for a heat equation with an arbitrary level of instability. Eur J Control 8:165–176
28. Badra M, Takahashi T (2011) Stabilization of parabolic nonlinear systems with finite dimen-
sional feedback or dynamical controllers: application to the Navier-Stokes system. SIAM J
Control Optim 49(2):420–463
29. Balogh A, Liu W-J, Krstic M (2001) Stability enhancement by boundary control in 2D channel
flow. IEEE Trans Autom Control 46:1696–1711
30. Badra M (2009) Feedback stabilization of the 2-D and 3-D Navier-Stokes equations based on
an extended system. ESAIM COCV 15:934–968
31. Badra M (2009) Lyapunov functions and local feedback stabilization of the Navier-Stokes
equations. SIAM J Control Optim 48:1797–1830
32. Bertini L, Giacomin G (1997) Stochastic Burgers and KPZ equations from particle systems.
Commun Math Phys 183:571–607
33. Bewley TR (2001) Flow control: new challenges for a new renaissance. Prog Aerospace Sci
37:21–58
34. Bourbaki N (2007) Théories spectrales. Springer, Berlin
35. Brezis H (2011) Functional analysis. Sobolev spaces and partial differential equations.
Springer, New York
36. Britton NF (1986) Reaction-diffusion equations and their applications to biology. Academic
Press, New York
37. Brzezniak Z, Goldys B, Peszat S, Russo F (2013) Second order PDEs with Dirichlet white
noise boundary conditions. J Evol Eqs 15(1):1–26
38. Boskovic DM, Krstic M, Liu W (2001) Boundary control of an unstable heat equation via
measurement of domain-averaged temperature. IEEE Trans Autom Control 46(12):2022–2028
39. Caginalp G (1988) Conserved-phase field system: implications for kinetic undercooling. Phys
Rev B 38:789–791
40. Cannarsa P, Frankowska H, Marchini EM (2013) Optimal control for evolution equations with
memory. J Evol Eqs 13:197–227
41. Caraballo T (2006) Recent results on stabilization of PDEs by noise. Bol Soc Esp Mat Apl
37:47–70
42. Caraballo T, Liu K, Mao X (2001) On stabilization of partial differential equations by noise. Nagoya
Math J 161:155–170
43. Cochran J, Vazquez R, Krstic M (2006) Backstepping boundary control of Navier-Stokes channel flow: a 3D extension. In: Proceedings of the 25th American Control Conference
44. Dalang RC (1999) Extending the martingale measure stochastic integral with applications to
spatially homogeneous S.P.D.E’s. Electron J Probab 4(6) (online)
45. Chen Z (1994) Optimal boundary controls for a phase field model. IMA J Math Control Inform
10(2):157–176
46. Chepyzhov V, Miranville A (2006) On trajectory and global attractors for semilinear heat
equations with fading memory. Ind Univ Math J 55(1):119–167
47. Choi H, Temam R, Moin P, Kim J (1993) Feedback control for unsteady flow and its application
to the stochastic Burgers equation. J Fluid Mech 253:509–543
48. Coleman B, Gurtin M (1967) Equipresence and constitutive equations for rigid heat conduc-
tors. Z Angew Math Phys 18:199–208
49. Colli P, Gilardi G, Marinoschi G, Rocca E (2017) Sliding mode control for a phase field
system related to tumor growth. Appl Math Optim 1–24
50. Coron JM (2007) Control and nonlinearity. AMS, Providence, RI
51. Dafermos CM (1970) Asymptotic stability in viscoelasticity. Arch Rat Mech Anal 37:297–308
52. Debussche A, Fuhrman M, Tessitore G (2007) Optimal control of a stochastic heat equation
with boundary-noise and boundary-control. ESAIM COCV 13:178–205
53. Fabbri G, Goldys B (2009) An LQ problem for the heat equation on the halfline with Dirichlet
boundary control and noise. SICON 48(3):1473–1488
54. Fisher RA (1937) The wave of advance of advantageous genes. Ann Eugen 7:353–369
55. FitzHugh R (1961) Impulses and physiological states in theoretical models of nerve membrane. Biophys J 1:445–466
56. Foondun M, Nualart E (2015) On the behaviour of stochastic heat equations on bounded
domains. Lat Am J Probab Math Stat 12(2):551–571
57. Funaki T, Quastel J (2015) KPZ equation, its renormalization and invariant measures. Stoch
Partial Diff Eqs Anal Comput 3(2):159–220
58. Fursikov A (2002) Real process corresponding to 3D Navier-Stokes system and its feedback
stabilization from the boundary. Am Math Soc Transl 206(2):95–123
59. Fursikov AV (2001) Stabilizability of quasi-linear parabolic equations by feedback boundary
control. Sbornik Math 192:593–639
60. Fursikov AV (2004) Stabilization for the 3D Navier-Stokes system by feedback boundary
control. Discret Contin Dyn Syst 10(1):289–314
61. Gilbarg D, Trudinger N (2001) Elliptic partial differential equations of second order. Springer,
Berlin
62. Giorgi C, Pata V, Marzocchi A (1998) Asymptotic behavior of a semilinear problem in heat
conduction with memory. NoDEA 5:333–354
63. Guatteri G, Masiero F (2013) On the existence of optimal controls for SPDEs with boundary
noise and boundary control. SIAM J Control Optim 51(13):1909–1939
64. Guerrero S, Imanuvilov OY (2013) Remarks on non controllability of the heat equation with
memory. ESAIM Control Optim Calc Var 19(1):288–300
65. Gyongy I, Nualart D (1999) On the stochastic Burgers equation in the real line. Ann Probab
27(2):782–802
66. Gurtin M, Pipkin A (1968) A general theory of heat conduction with finite wave speed. Arch
Rational Mech Anal 31:113–126
67. Hartmann J (1937) Theory of the laminar flow of an electrically conductive liquid in a ho-
mogeneous magnetic field. Det Kgl. Danske Videnskabernes Selskab Mathematisk-fysiske
Meddelelser XV 6:1–27
68. Haussmann UG (1978) Asymptotic stability of the linear Ito equation in infinite dimensions.
J Math Anal Appl 65:219–235
69. Hazewinkel M (2001) [1994] Gram matrix. Encyclopedia of mathematics. Springer Sci-
ence+Business Media B.V./Kluwer Academic Publishers
70. Ichikawa A (1985) Stability of parabolic equations with boundary and pointwise noise. Lecture
notes in control and information sciences 69. Springer, Berlin
71. Karafyllis I, Krstic M (2014) On the relation of delay equations to first-order hyperbolic partial
differential equations. ESAIM: COCV 20:894–923
72. Kato T (1966) Perturbation theory for linear operators. Die Grundlehren der mathematischen
Wissenschaften, Band 132, Springer, New York
73. Komornik V (1994) Exact controllability and stabilization: the multiplier method. Masson, Paris
74. Krstic M (1999) On global stabilization of Burgers’ equation by boundary control. Syst
Control Lett 37:123–141
75. Krstic M, Smyshlyaev A (2008) Boundary control of PDEs: a course on backstepping designs.
SIAM, Philadelphia
76. Krstic M, Smyshlyaev A (2008) Backstepping boundary control for first-order hyperbolic
PDEs and application to systems with actuator and sensor delays. Syst Control Lett 57(9):750–
758
77. Kroner A, Rodrigues SS (2015) Remarks on the internal exponential stabilization to a non-
stationary solution for 1D Burgers equations. SIAM J Control Optim 53(2):1020–1055
78. Kwiecińska A (1999) Stabilization of partial differential equations by noise. Stoch Process
Appl 79:179–184
79. Kwiecińska A (2002) Stabilization of evolution equations by noise. Proc Am Math Soc
130(10):3067–3074
80. Lasiecka I, Triggiani R (1991) Differential and algebraic Riccati equations with application
to boundary/point control problems: continuous theory and approximation theory. Lecture
notes in control and information sciences, vol 164. Springer, Berlin
81. Lasiecka I, Triggiani R (2000) Control theory for partial differential equations: continuous
and approximation theories. Cambridge University Press, Cambridge
82. Lasiecka I, Triggiani R (2015) Stabilization to an equilibrium of the Navier-Stokes equations
with tangential action of feedback controllers. Nonlin Anal Theory Meth Appl 121:424–446
83. Lee D, Choi H (2001) Magnetohydrodynamic turbulent flow in a channel at low magnetic
Reynolds number. J Fluid Mech 439:367–394
84. Lefter C (2010) On a unique continuation property related to the boundary stabilization of magnetohydrodynamic equations. Annals of the Alexandru Ioan Cuza University of Iaşi, Mathematics, LVI
85. Liptser RS, Shiryayev AN (1989) Theory of martingales. Kluwer, Dordrecht
86. Liu HB, Hu P, Munteanu I (2016) Boundary feedback stabilization of Fisher's equation. Syst Control Lett 97:55–60
87. Liu WJ (2003) Boundary feedback stabilization of an unstable heat equation. SIAM J Control
Optim 42(3):1033–1043
88. Liu Z, Zheng S (1999) Semigroups associated with dissipative systems. Chapman & Hall/CRC
research notes in mathematics 398. Chapman & Hall/CRC, Boca Raton, FL
89. Luo J (2008) Fixed points and exponential stability of mild solutions of stochastic partial
differential equations with delays. J Math Anal Appl 342:753–760
90. Lopatinskij YaB (1953) A method of reduction of boundary-value problems for systems of
differential equations of elliptic type to a system of regular integral equations. Ukrain Mat Zh
5:123–151
91. Mao X (1994) Exponential stability of stochastic differential equations. Pure and applied
mathematics. Chapman & Hall, New York
92. Mao X (2013) Stabilization of continuous-time hybrid stochastic differential equations by
discrete-time feedback control. Automatica 49(12):3677–3681
93. Marinoschi G (2017) A note on the feedback stabilization of a Cahn–Hilliard type system
with a singular logarithmic potential. Solvability, regularity, and optimal control of boundary
value problems for PDEs. Springer, Berlin, pp 357–377
94. Munteanu I (2011) Tangential feedback stabilization of periodic flows in a 2-D channel. Differ
Integr Eqs 24(5–6):469–494
95. Munteanu I (2012) Normal feedback stabilization of periodic flows in a two-dimensional
channel. J Optim Theory Appl 152(2):413–443
96. Munteanu I (2012) Normal feedback stabilization of periodic flows in a three-dimensional
channel. Numer Funct Anal Optim 33(6):611–637
97. Munteanu I (2013) Normal feedback stabilization for linearized periodic MHD channel flow,
at low magnetic Reynolds number. Syst Control Lett 62:55–62
98. Munteanu I (2013) Boundary feedback stabilization of periodic fluid flows in a magnetohy-
drodynamic channel. IEEE Trans Autom Control 58(8):2119–2125
99. Munteanu I (2014) Boundary stabilization of the phase field system by finite-dimensional
feedback controllers. J Math Anal Appl 412:964–975
100. Munteanu I (2015) Boundary stabilization of the Navier-Stokes equation with fading memory.
Int J Control 88(3):531–542
101. Munteanu I (2015) Stabilization of semilinear heat equations, with fading memory, by bound-
ary feedbacks. J Differ Eqs 259:454–472
102. Munteanu I (2017) Boundary stabilization of a 2-D periodic MHD channel flow, by propor-
tional feedbacks. ESAIM COCV 23(4):1253–1266
103. Munteanu I (2017) Stabilization of a 3-D periodic channel flow by explicit normal boundary
feedbacks. J Dyn Control Syst 23(2):387–403
104. Munteanu I (2017) Stabilization of stochastic parabolic equations with boundary-noise and
boundary-control. J Math Anal Appl 449(1):829–842
105. Munteanu I (2017) Stabilisation of parabolic semilinear equations. Int J Control 90(5):1063–
1076
106. Munteanu I (2018) Boundary stabilization of the stochastic heat equation by proportional
feedbacks. Automatica 87:152–158
107. Munteanu I (2018) Boundary stabilization to non-stationary solutions for deterministic and stochastic parabolic-type equations. Int J Control. https://doi.org/10.1080/00207179.2017.1407878
108. Murray JD (1993) Mathematical biology. Springer, Berlin
109. Pandolfi L (2013) Boundary controllability and source reconstruction in a viscoelastic string
under external traction. J Math Anal Appl 407:464–479
110. Partington JR (2004) Linear operators and linear systems. London mathematical society stu-
dent texts (60). Cambridge University Press, Cambridge
111. Da Prato G, Debussche A (1999) Control of the stochastic Burgers model of turbulence. SIAM
J Control Optim 37(4):1123–1149
112. Da Prato G, Zabczyk J (1993) Evolution equations with white-noise boundary conditions.
Stoch Rep 42:167–182
113. Da Prato G, Zabczyk J (2013) Stochastic equations in infinite dimensions. Cambridge Uni-
versity Press, Cambridge
114. Schuster E, Luo L, Krstic M (2008) MHD channel flow control in 2D: mixing enhancement
by boundary feedback. Automatica 44:2498–2507
115. Takashima M (1996) The stability of the modified plane Poiseuille flow in the presence of a
transverse magnetic field. Fluid Dyn Res 17:293–310
116. Triggiani R (1980) Boundary feedback stabilization of parabolic equations. Appl Math Optim
6:201–220
117. Triggiani R (2007) Stability enhancement of a 2-D linear Navier-Stokes channel flow by a
2-D wall normal boundary controller. Discret Contin Dyn Syst Ser B 8(2):279–314
118. Raymond JP (2006) Feedback boundary stabilization of the two-dimensional Navier-Stokes
equations. SIAM J Control Optim 45:790–828
119. Raymond JP (2007) Feedback boundary stabilization of the three dimensional incompressible
Navier-Stokes equations. Math Pures et Appl 87:627–669
120. Smyshlyaev A, Krstic M (2005) On control design for PDEs with space-dependent diffusivity
or time-dependent reactivity. Automatica 41:1601–1608
121. Ravindran SS (2000) Reduced-order adaptive controllers for fluid flows using POD. J Sci
Comput 15:457–478
122. Rodrigues SS (2018) Feedback boundary stabilization to trajectories for 3D Navier-Stokes
equations. Math Optim Appl. https://doi.org/10.1007/s00245-017-9474-5
123. Shirikyan A (2007) Exact controllability in projections for three-dimensional Navier-Stokes
equations. Ann I. H. Poincaré 24:521–537
124. Vazquez R, Krstic M (2007) A closed-form feedback controller for stabilization of the linearized 2-D Navier-Stokes Poiseuille system. IEEE Trans Autom Control 52:2298–2312
125. Walsh JB (1986) An introduction to stochastic partial differential equations. Lecture notes in mathematics 1180. Springer, Berlin, pp 265–439
126. Xie B (2016) Some effects of the noise intensity upon non-linear stochastic heat equations on
[0, 1]. Stoch Proc Appl 126:1184–1205
127. Xu C, Schuster E, Vazquez R, Krstic M (2008) Stabilization of linearized 2D magnetohydro-
dynamic channel flow by backstepping boundary control. Syst Control Lett 57:805–812
128. Yosida K (1980) Functional analysis. Springer, Berlin
129. Zhou J (2014) Optimal control of a stochastic delay heat equation with boundary-noise and
boundary-control. Int J Control 87(9):1808–1821
Index

A
Abstract parabolic, 22
Accretive operator, 11
Adapted process, 14
Adjoint operator, 8
Algebraic multiplicity, 7

B
Banach space, 2
Bochner integrable, 5
Borel–Cantelli lemma, 16
Borel σ-algebra, 2
Boundary control problem, 22
Boundary value problem, 10
Brownian motion, 14
Burkholder–Davis–Gundy inequality, 16

C
Cauchy problem, 11
Cauchy sequence, 2
Closed and densely defined operator, 7
Closed-loop equation, 23
Compact operator, 8
Complete measure space, 2
Conditional expectation, 13
Controlled Cahn–Hilliard system, 93
Controlled heat equation with delays, 109
Controlled magnetohydrodynamics equations, 78
Controlled Navier–Stokes equations, 49
Controlled stochastic equations, 128
C0-semigroup, 12
C([0, T]; X), 5
C^1([0, T]; X), 5

D
Dirichlet map, 25
Distributions space, 4
Dominated convergence theorem, 3
Dual space, 8

E
Eigenvalue, 7
Eigenvector, 7
Ellipticity condition, 10
Elliptic operators, 10
Equilibrium solution, 22
Expectation E, 13
Extension operator, 8

F
Feedback control, 23
Filtration, 14
Fourier series, 6
Fractional power, 9
Fubini's theorem, 3

G
Gaussian distribution, 14
Generalized eigenvector, 7
Geometric multiplicity, 7
Gram matrix, 1

H
Hartmann–Poiseuille profile, 78
Hilbert–Schmidt norm, 7
Hille–Yosida theorem, 12
Hölder's inequality, 5


I
Integrable function, 3
Internal control, 24
Itô's isometry, 15

L
Linearization, 20
Linear operators, 7
Lipschitz function, 12
Local martingale, 14
Lopatinskii condition, 10
L^p(O), 4

M
m-accretive operator, 11
Martingale, 13
Measurable function, 2
Measure, 2
Measure space, 2
Mild solution, 12
Modes, 6

N
Null set, 2

O
Operatorial norm, 7
Orthonormal basis, 7

P
Parabolic Poiseuille profile, 50
Parseval's identity, 6
Poincaré's inequality, 5
Positive definite function, 9
Positive definite operators, 8
Powers of a linear operator, 9
Probability space, 13
Product measure, 3
Proportional feedback, 28

Q
Quadratic variation, 14

R
Random deterministic equation, 17
Random variable, 13
Rescaling, 17
Resolvent operator, 7
Resolvent set, 7
Riesz–Schauder–Fredholm theorem, 8

S
Self-adjoint operator, 9
Semilinear evolution equation, 12
Semimartingale, 14
Semisimple eigenvalue, 7
σ-algebra, 2
Sobolev embeddings, 5
Sobolev space, 4
Spectrum, 7
Stabilizable controller, 23
Stabilization problem, 23
Stabilization to non-steady states, 24
Stabilization to trajectories, 171
Stable equilibrium, 22
Stochastic calculus, 16
Stochastic Chebyshev inequality, 16
Stochastic integral, 15
Stochastic processes, 13
Stopping time, 14
Symmetric operator, 8

U
Unstable eigenvalues, 20

V
Variation of constants formula, 12

W
Weak solution, 13
