MINISTRY OF EDUCATION AND TRAINING
VIETNAM ACADEMY OF SCIENCE AND TECHNOLOGY
INSTITUTE OF MATHEMATICS
HANOI – 2017
Acknowledgments
I first learned about inverse and ill-posed problems when I met Professor Đinh Nho Hào in
2007, my final year of bachelor’s study. I have been extremely fortunate to have a chance
to study under his guidance since then. I am deeply indebted to him not only for his
supervision, patience, encouragement and support in my research, but also for his precious
advice on life.
I would like to express my special appreciation to Professor Hà Tiến Ngoạn, Professor
Nguyễn Minh Trí, Doctor Nguyễn Anh Tú, the other members of the seminar at the Department
of Differential Equations, and all friends in Professor Đinh Nho Hào's group seminar for
their valuable comments and suggestions on my thesis. I am very grateful to Doctor Nguyễn
Trung Thành (Iowa State University) for his kind help on MATLAB programming.
I would like to thank the Institute of Mathematics for providing me with such an excellent
study environment.
Furthermore, I would like to thank the leaders of the College of Sciences, Thai Nguyen University, and the Dean's board, as well as all of my colleagues at the Faculty of Mathematics and Informatics, for their encouragement and support throughout my PhD study.
Last but not least, I could not have finished this work without the constant love and
unconditional support from my parents, my parents-in-law, my husband, my little children
and my dearest aunt. I would like to express my sincere gratitude to all of them.
Abstract

The problems of reconstructing the initial condition in parabolic equations from the observation at the final time, from interior integral observations, and from boundary observations are studied. We investigate these inverse problems by the variational method of minimizing appropriate misfit functionals. We prove that these functionals are Fréchet differentiable and derive a formula for their gradient via adjoint problems. The direct problems are first discretized in the space variables by the finite difference method and the variational problems are correspondingly discretized. The convergence of the solution of the discretized variational problems to the solution of the continuous ones is proved. To solve the problems numerically, we further discretize them in time by the splitting method. It is proved that the completely discretized functionals are Fréchet differentiable and the formulas for their gradient are derived via discrete adjoint problems. The problems are then solved by the conjugate gradient method and the numerical algorithms are tested on computer. As a by-product of the variational method, based on Lanczos' algorithm we suggest a simple method for illustrating the ill-posedness of the problem.
i
Tóm tắt
Các bài toán xác định điều kiện ban đầu trong phương trình parabolic từ quan sát tại
thời điểm cuối, từ quan sát tích phân bên trong, và từ quan sát biên đã được nghiên cứu.
Chúng tôi sử dụng phương pháp biến phân nghiên cứu bài toán ngược này bằng cách cực
tiểu hóa các phiếm hàm chỉnh. Chúng tôi chứng minh rằng các phiếm hàm này là khả vi
Fréchet và đưa ra công thức gradient của chúng thông qua các bài toán liên hợp. Trước
tiên, sử dụng phương pháp sai phân hữu hạn để rời rạc hóa bài toán thuận và bài toán liên
hợp tương ứng theo các biến không gian. Chúng tôi chứng minh sự hội tụ của nghiệm của
bài toán biến phân rời rạc tới nghiệm của bài toán biến phân liên tục. Để giải số bài toán,
chúng tôi tiếp tục rời rạc bài toán theo biến thời gian bằng phương pháp sai phân phân
rã (phương pháp splitting). Chúng tôi cũng chứng minh được rằng các phiếm hàm rời rạc
này là khả vi Fréchet và đưa ra công thức gradient của chúng thông qua bài toán liên hợp
rời rạc. Sau đó chúng tôi sử dụng phương pháp gradient liên hợp để giải và các thuật toán
số được thử nghiệm trên máy tính. Ngoài ra, như một sản phẩm phụ của phương pháp
biến phân, dựa trên thuật toán Lanczos, chúng tôi đề xuất một phương pháp đơn giản để
minh họa tính đặt không chỉnh của bài toán.
ii
Declaration

This work has been completed at the Institute of Mathematics, Vietnam Academy of Science and Technology, under the supervision of Prof. Dr. Habil. Đinh Nho Hào. I hereby declare that the results presented in it are new and have never been published elsewhere.
Author: Nguyen Thi Ngoc Oanh
iii
List of Figures
2.2 Example 2: Reconstruction results: (a) exact function v; (b) estimated one; (c) point-wise error; (d) the comparison of v|x1=1/2 and its reconstruction (the dashed curve: the exact function, the solid curve: the estimated function). 53
2.3 Example 3: Reconstruction result: (a) exact function v; (b) estimated one; (c) point-wise error; (d) the comparison of v|x1=1/2 and its reconstruction (the dashed curve: the exact function, the solid curve: the estimated function). 54
2.4 Example 4: Reconstruction result: (a) exact function v; (b) estimated one; (c) point-wise error; (d) the comparison of v|x1=1/2 and its reconstruction (the dashed curve: the exact function, the solid curve: the estimated function). 55
2.5 Example 5: Reconstruction result: (a) exact function v; (b) estimated one; (c) point-wise error; (d) the comparison of v|x1=1/2 and its reconstruction (the dashed curve: the exact function, the solid curve: the estimated function). 56
3.1 Example 1. Singular values: three observations and various time intervals of observations. 68
3.2 Example 2: Reconstruction results for (a) 3 uniform observation points in (0, 0.5), error in L2-norm = 0.006116; (b) 3 uniform observation points in (0.5, 1), error in L2-norm = 0.006133; (c) 3 uniform observation points in (0.25, 0.75), the error in L2-norm = 0.0060894; (d) 3 uniform observation points in Ω, the error in L2-norm = 0.0057764. 69
3.3 Reconstruction result of Example 3: (a) τ = 0.01; (b) τ = 0.05; (c) τ = 0.1; (d) τ = 0.3. 70
3.4 Reconstruction result of Example 4: (a) τ = 0.01; (b) τ = 0.05; (c) τ = 0.1; (d) τ = 0.3. 72
3.5 Reconstruction result of Example 5: (a) τ = 0.01; (b) τ = 0.05; (c) τ = 0.1; (d) τ = 0.3. 73
3.6 Example 6. Reconstruction results: (a) Exact initial condition v; (b) reconstruction of v; (c) point-wise error; (d) the comparison of v|x1=1/2 and its reconstruction. 74
3.7 Example 7. Reconstruction results: (a) Exact initial condition v; (b) reconstruction of v; (c) point-wise error; (d) the comparison of v|x1=1/2 and its reconstruction. 76
3.8 Example 8. Reconstruction results: (a) Exact initial condition v; (b) reconstruction of v; (c) point-wise error; (d) the comparison of v|x1=1/2 and its reconstruction. 77
iv
4.1 Example 1: Singular values for the 1D problem. 87
4.2 Examples 2, 3, 4: 1D problem: Reconstruction results for smooth, continuous and discontinuous initial conditions. 88
4.3 Example 5: Exact initial condition (left) and its reconstruction (right). 89
4.4 Example 5 (continued): Error (left) and the vertical slice of the exact initial condition and its reconstruction along the interval [(0.5, 0), (0.5, 1)] (right). 89
4.5 Example 6: Exact initial condition (left) and its reconstruction (right). 90
4.6 Example 6 (continued): Error (left) and the slice of the exact initial condition and its reconstruction along the interval [(0.5, 0), (0.5, 1)] (right). 90
4.7 Example 7: Exact initial condition (left) and its reconstruction (right). 90
4.8 Example 7 (continued): Error (left) and the slice of the exact initial condition and its reconstruction along the interval [(0.5, 0), (0.5, 1)] (right). 91
v
List of Tables
3.1 Example 3: Behavior of the algorithm with different starting points of observation τ. 71
3.2 Example 6: Behavior of the algorithm when the number of observations and the positions of observations vary (N = 4). 75
3.3 Example 6: Behavior of the algorithm when the number of observations and the positions of observations vary (N = 9). 75
vi
List of Notations
vii
Contents
Page
Abstract i
Tóm tắt ii
List of Tables v
Introduction 1
1.4.2. Discretization in space variables and the convergence of the finite difference scheme 22
viii
1.4.3. Discretization in time and splitting difference scheme 36
4.3 Full discretization of the variational problem and the conjugate gradient method 84
ix
Conclusion 92
Bibliography 93
x
Introduction
The prediction of an evolution process requires its initial condition, which is unfortunately not always available or precisely given in practice. Data assimilation is the process of reconstructing model initial conditions from measured observations and the first guess field in combination with the dynamical system. Data assimilation is extensively used in meteorology, oceanography, weather forecast [2, 3, 45, 66, 71], environmental pollution [8, 55, 63], image processing [6], industrial production [4, 5, 9, 24], etc. For surveys of methods in data assimilation, we refer to [2, 10, 11, 15, 16, 17, 37, 46, 47, 48, 57, 64, 65, 66, 67, 71, 73, 74, 75], and the references therein.
Suppose that the process under consideration is modelled by a system of evolution equations
$$\frac{dU}{dt} + AU = F, \qquad (0.1)$$
where U is the vector representing the state variables that we want to "predict", A is an elliptic operator in the space variables, and F is the vector of exterior forces acting on the system. The goal of prediction is to find a good approximation to U during a period of time of length T. The problem we are faced with is that we do not know the initial data for U before some time moment T0 for computing the solution of the prediction model; we therefore have to determine the initial condition at a time before T0 from measurements and then use it to solve the above system for prediction. This problem is unfortunately ill-posed. A problem is said to be well-posed in the sense of Hadamard if the following conditions are satisfied [20]: i) Existence: there is a solution of the problem; ii) Uniqueness: the solution is unique; iii) Stability: the solution depends continuously on the data (in some appropriate topologies). If at least one of the above conditions is not fulfilled, the problem is said to be ill-posed (or improperly posed).
However, many important problems in practice are ill-posed, and a lot of works have been devoted to their study (see [2, 3, 4, 5, 7, 8, 9, 14, 19, 24, 31, 36, 45, 68, 72, 80] and the references therein). The ill-posedness of a problem causes serious trouble by making classical numerical methods unstable: a small error in the data may cause arbitrarily large errors in the solution. In 1943, Tikhonov A.N. [78] realized that the instability results from a lack of information, and that to restore stability one should impose some a priori condition. Tikhonov then pointed out the possibility of finding stable solutions to ill-posed problems. The importance of ill-posed problems has then been realized by Lavrent'ev M.M.
1
[42, 43, 44, 45], John F. [34], Pucci C. [69], and Ivanov V.K. [32, 33], who can be considered the founders of the theory of ill-posed problems. In 1963, Tikhonov [79, 80] published his celebrated regularization method, and since then inverse and ill-posed problems have become an active field of research.
This thesis is devoted to data assimilation in heat conduction; namely, it aims at determining the initial condition of the system (0.1) describing heat conduction using three types of observations. The process is governed by the parabolic equation
$$\frac{\partial u}{\partial t} - \sum_{i,j=1}^{n} \frac{\partial}{\partial x_i}\Big(a_{ij}(x,t)\frac{\partial u}{\partial x_j}\Big) + b(x,t)u = f \quad \text{in } Q, \qquad (0.8)$$
with the initial condition
$$u|_{t=0} = v \quad \text{in } \Omega, \qquad (0.9)$$
and either the Dirichlet boundary condition
$$u = 0 \quad \text{on } S, \qquad (0.10)$$
or the Neumann boundary condition
$$\frac{\partial u}{\partial N} := \sum_{i,j=1}^{n} a_{ij}(x,t)\,u_{x_j}\cos(\nu, x_i) = g \quad \text{on } S, \qquad (0.11)$$
where ν is the outer normal vector to S. We shall work with either the system (0.8)–(0.10) or the system (0.8), (0.9), (0.11). When the coefficients of equation (0.8), the data v, g and the right-hand side (source) f are given, the problem of determining u(x, t) from the system (0.8)–(0.10) or (0.8), (0.9), (0.11) is called the direct problem. It is proved that there exists a weak solution (the definition of which will be given in Chapter 1) to these problems [24, 82]. The inverse problem (data assimilation) considered in this thesis is that of determining the initial condition v from one of the above three types of observation. Denoting the solution to (0.8)–(0.10) or (0.8), (0.9), (0.11) by u(v) to emphasize its dependence on the initial condition v, and supposing that we observe u by Cu(v), the inverse problem is to determine v when Cu(v) is given:
$$Cu(v) = z. \qquad (0.12)$$
This problem is ill-posed, as we will see: in our chosen spaces the operator mapping v to Cu(v) is compact. However, characterizing its degree of ill-posedness is not an easy task. Denoting the solution to (0.8)–(0.10) (or (0.8), (0.9), (0.11)) with v ≡ 0 by ů, we see that the operator from v to Cv := Cu(v) − Ců is bounded and linear. Thus, instead of studying the equation (0.12), we have to deal with the linear operator equation
$$\mathcal{C}v = z - C\mathring{u}. \qquad (0.13)$$
The asymptotic behavior of the singular values of C (or of the eigenvalues of C*C) characterizes the ill-posedness of the problem [19]. However, up to now there are very few results on this question in inverse problems for partial differential equations. Some characterization has been obtained, but only in very simple cases [8, 19, 51]. In this thesis, as a by-product of the variational method for finding v, we propose a numerical scheme for estimating the singular values of C, which we will present below.
In the variational method, one minimizes the misfit functional
$$J_0(v) = \frac{1}{2}\|Cu(v) - z\|_H^2 \qquad (0.14)$$
with respect to v ∈ L²(Ω). However, since this problem is ill-posed, we will instead minimize the regularized functional ([2, 4, 5, 7, 8, 9, 14, 19, 24, 33, 36, 71])
$$J_\gamma(v) = \frac{1}{2}\|Cu(v) - z\|_H^2 + \frac{\gamma}{2}\|v - v^*\|_{L^2(\Omega)}^2 \qquad (0.15)$$
with respect to v ∈ L²(Ω). Here, v* is an a priori estimate of v, ‖·‖_H is the norm of an appropriate Hilbert space H, and γ > 0 is the regularization parameter. It will be proved that there exists a unique solution to this optimization problem, that the functional Jγ is Fréchet differentiable, and that its gradient can be calculated via an adjoint problem. To solve the problem numerically, we apply the splitting finite difference method to discretize the optimization problems and prove the convergence of the method. We choose the splitting finite difference method because it is easy to code and runs very fast. We note that one can also discretize the problems by the finite element method; however, since the coefficients in our problems depend on time, it is easier to use the finite difference method.
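A functional of the form (0.15) can be minimized by standard iterative methods once its gradient is available. The sketch below is only an illustration, not the thesis code: a small Gaussian-kernel matrix stands in for the (unavailable) compact operator C, and the regularized normal equations are solved by the conjugate gradient method; all names, the kernel, and the parameter values are assumptions made for the example.

```python
import numpy as np

# Toy illustration of minimizing a Tikhonov functional of the form (0.15),
# J_gamma(v) = 0.5*||Cv - z||^2 + 0.5*gamma*||v - v*||^2,
# via conjugate gradients on the normal equations. The Gaussian-kernel
# matrix C is only a stand-in for the compact observation operator.
n = 50
x = np.linspace(0.0, 1.0, n)
C = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.01) / n   # smoothing kernel
v_true = np.sin(np.pi * x)
rng = np.random.default_rng(0)
z = C @ v_true + 1e-4 * rng.standard_normal(n)           # noisy data

def tikhonov_cg(C, z, gamma, v_star, iters=300):
    """CG for the normal equations (C^T C + gamma I) v = C^T z + gamma v*."""
    A = C.T @ C + gamma * np.eye(C.shape[1])
    b = C.T @ z + gamma * v_star
    v = np.zeros_like(b)
    r = b - A @ v
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        v += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < 1e-13:
            break
        p = r_new + (r_new @ r_new) / (r @ r) * p
        r = r_new
    return v

v_gamma = tikhonov_cg(C, z, gamma=1e-6, v_star=np.zeros(n))
rel_err = np.linalg.norm(v_gamma - v_true) / np.linalg.norm(v_true)
print(rel_err)   # small: the regularization tames the ill-conditioning of C
```

The choice γ = 1e-6 here balances the data noise against the bias introduced by the penalty; in the thesis the gradient of the discretized functional is supplied by an adjoint solve rather than by an explicit matrix.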
Now we return to estimating the singular values of C. We have ∇J0(v) = C*(Cv − (z − Ců)), with C* being the adjoint operator of C. If we choose z = Ců, then ∇J0(v) = C*Cv. Unfortunately, the explicit form of C is usually not available, and even if it is available, it is not easy to analyze the asymptotic behaviour of its singular values. However, as we can calculate ∇J0(v) = C*Cv via the solution to an adjoint problem for any v, we can apply Lanczos' algorithm [81] to estimate the eigenvalues of C*C.
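Lanczos' algorithm needs only the matrix-vector product v ↦ C*Cv, which in the setting above is supplied by one direct and one adjoint solve. The sketch below shows this matrix-free pattern with an explicit symmetric matrix standing in for C*C; the test spectrum, names, and number of steps are illustrative assumptions, not data from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
sigma2 = 4.0 ** -np.arange(n)                 # eigenvalues sigma_k^2, fast decay
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
CtC = Q @ np.diag(sigma2) @ Q.T               # symmetric PSD test operator

def lanczos_ritz(matvec, n, k):
    """k Lanczos steps with full reorthogonalization; returns the Ritz
    values (estimates of the largest eigenvalues) in decreasing order."""
    V = np.zeros((n, k + 1))
    alpha = np.zeros(k)
    beta = np.zeros(k)
    v0 = rng.standard_normal(n)
    V[:, 0] = v0 / np.linalg.norm(v0)
    m = k
    for j in range(k):
        w = matvec(V[:, j])                   # the only access to the operator
        alpha[j] = V[:, j] @ w
        w = w - alpha[j] * V[:, j]
        if j > 0:
            w = w - beta[j - 1] * V[:, j - 1]
        w = w - V[:, : j + 1] @ (V[:, : j + 1].T @ w)   # reorthogonalize
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-14:
            m = j + 1
            break
        V[:, j + 1] = w / beta[j]
    T = (np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1)
         + np.diag(beta[:m - 1], -1))
    return np.sort(np.linalg.eigvalsh(T))[::-1]

ritz = lanczos_ritz(lambda v: CtC @ v, n, k=15)
print(ritz[:3])        # approximates sigma2[:3] = [1, 0.25, 0.0625]
```

Because the leading eigenvalues here are well separated, a handful of Lanczos steps already reproduces them accurately; in the inverse problem each `matvec` call would be replaced by evaluating ∇J0 at v with z = Ců.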
3
The content of this thesis is as follows. In Chapter 1 we summarize some basic results for the direct problems, their finite difference approximations and convergence results; some standard algorithms, such as the conjugate gradient method and Lanczos' algorithm, are also presented there. We note that since the solution to the Dirichlet problem (0.8)–(0.10) or to the Neumann problem (0.8), (0.9), (0.11) is understood in the weak sense, the finite difference method for them is complicated and the proof of convergence of the method is not trivial.
The second chapter is devoted to reconstructing the initial condition v in the Dirichlet problem (0.8)–(0.10) from the observation at the final time moment: Cu := u(x, T) = ξ(x). This problem is well known under the name of parabolic equations backward in time; it has many applications in practice, and up to now many papers have been devoted to it (see [4, 5, 6, 8, 12, 18, 24, 31, 45, 68] and the references therein). However, among these works only a few are devoted to the case of time-dependent coefficients [1, 25, 31, 45, 53]. Parabolic equations backward in time are severely ill-posed, as the following simple example shows [19].
Consider the heat equation with homogeneous Dirichlet boundary condition
$$u_t - u_{xx} = 0, \quad x \in (0, \pi), \ t \in (0, 1], \qquad (0.16)$$
$$u(0, t) = u(\pi, t) = 0, \quad t \in (0, 1], \qquad (0.17)$$
$$u(x, 0) = v(x), \quad x \in [0, \pi], \qquad (0.18)$$
with the final-time observation ξ(x) = u(x, 1). Expand the initial condition in the Fourier series
$$v(x) = \sum_{n=1}^{\infty} v_n \varphi_n(x), \quad x \in [0, \pi], \qquad (0.19)$$
with φ_n(x) = √(2/π) sin(nx) and v_n = √(2/π) ∫₀^π v(τ) sin(nτ) dτ. We easily obtain
$$u(x, t) = \sum_{n=1}^{\infty} v_n e^{-n^2 t} \varphi_n(x).$$
Hence, if we force ξ ∈ L²(0, π), then
$$\xi(x) = u(x, 1) = \sum_{n=1}^{\infty} v_n e^{-n^2} \varphi_n(x) = \sum_{n=1}^{\infty} \xi_n \varphi_n(x)$$
with ξ_n = √(2/π) ∫₀^π ξ(τ) sin(nτ) dτ. Thus,
$$v_n = \xi_n e^{n^2}, \quad n = 1, 2, \ldots,$$
and
$$v(x) = \sqrt{\frac{2}{\pi}} \sum_{n=1}^{\infty} e^{n^2} \xi_n \sin(nx). \qquad (0.20)$$
4
For v ∈ L²(0, π), we must have
$$\|v\|_{L^2(0,\pi)}^2 = \sum_{n=1}^{\infty} v_n^2 = \sum_{n=1}^{\infty} e^{2n^2}|\xi_n|^2 < \infty. \qquad (0.21)$$
From (0.20) and (0.21) we see that the problem of reconstructing v from ξ is severely ill-posed. First, a solution v exists only for those functions ξ whose Fourier coefficients ξ_n decrease rapidly as n tends to infinity (much faster than e^{−n²}). Second, a small error in the n-th Fourier coefficient is amplified by the factor e^{n²}. For example, an error of 10⁻⁸ in the fifth Fourier coefficient ξ₅ of the data ξ induces a large error of about 10³ in the initial temperature.
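This amplification is easy to see numerically. The sketch below adds noise of size 10⁻⁸ to the Fourier coefficients of the data and applies the naive inversion v_n = ξ_n e^{n²} from (0.20); the number of modes and the noise level are illustrative assumptions.

```python
import numpy as np

# Naive inversion v_n = xi_n * exp(n^2) from (0.20), applied to data whose
# Fourier coefficients carry noise of size 1e-8. Illustrative sketch only.
M = 20                                   # Fourier modes kept
nmodes = np.arange(1, M + 1)
v_n = np.zeros(M)
v_n[0] = 1.0                             # v(x) = sqrt(2/pi) * sin(x)
xi_n = v_n * np.exp(-nmodes ** 2)        # exact data: xi_n = v_n * exp(-n^2)
rng = np.random.default_rng(0)
xi_noisy = xi_n + 1e-8 * rng.standard_normal(M)

v_rec = xi_noisy * np.exp(nmodes ** 2)   # reconstructed coefficients
err = np.abs(v_rec - v_n)
# err[0] stays tiny (factor e), while err grows like 1e-8 * exp(n^2):
# already astronomically large for n around 15-20.
print(err[0], err.max())
```

The first coefficient is barely perturbed, while the high modes are destroyed completely, which is exactly the mechanism behind (0.21): stable reconstruction is impossible without regularization.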
Now we return to the problem of reconstructing the initial condition v in the Dirichlet problem (0.8)–(0.10) from the observation at the final time moment: Cu := u(x, T) = ξ(x). Following the general approach stated above, to reconstruct v we minimize the functional
$$J_0(v) := \frac{1}{2}\|u(\cdot, T; v) - \xi\|_{L^2(\Omega)}^2$$
with respect to v ∈ L²(Ω). We will see that the operator v ↦ Cu(v) : L²(Ω) → L²(Ω) is compact; hence the problem of solving the equation Cu(v) = ξ is ill-posed, and so is the above minimization problem. To stabilize it, we minimize the Tikhonov functional
$$J_\gamma(v) := \frac{1}{2}\|u(\cdot, T; v) - \xi\|_{L^2(\Omega)}^2 + \frac{\gamma}{2}\|v - v^*\|_{L^2(\Omega)}^2,$$
with γ > 0 being a regularization parameter and v* an approximation of v. We prove that the functional Jγ is Fréchet differentiable and derive a formula for its gradient via an adjoint problem (Theorem 2.1.1). The optimization problem is then discretized by the finite difference method in the space variables, and it is proved that the solution of the discretized optimization problem converges (weakly or strongly, depending on the smoothness of the data) to the solution of the continuous one. We further discretize it in time using the splitting method, which splits multi-dimensional problems into a sequence of one-dimensional ones. We derive a formula for the gradient of the fully discretized functional via an adjoint problem and then apply the conjugate gradient method to solve it numerically. The algorithm is then tested on several benchmark examples to show the efficiency of our approach. We also apply Lanczos' algorithm
5
to estimate the singular values of the discretized version of the operator C. Here, as above, C is the linear operator from v to Cv := Cu(v) − Ců, with ů being the solution to the Dirichlet problem (0.8)–(0.10) with the homogeneous initial condition.
The third chapter studies the reconstruction of the initial condition v in (0.8)–(0.10) from N integral observations
$$l_i u = h_i(t), \quad t \in (\tau, T), \ \tau \ge 0, \ i = 1, 2, \ldots, N,$$
where ωi ∈ L¹(Ω), i = 1, 2, …, N, are non-negative weight functions with ∫_Ω ωi(x) dx > 0, and
$$l_i u(x, t) = \int_\Omega \omega_i(x)\,u(x, t)\,dx = h_i(t), \quad t \in (\tau, T), \ i = 1, \ldots, N. \qquad (0.22)$$
Let us discuss the observations (0.22). First, any measurement is an averaged process; it is therefore natural that ωi(x) be chosen as
$$\omega_i(x) = \begin{cases} \dfrac{1}{|S_i|} & \text{if } x \in S_i, \\[2pt] 0 & \text{otherwise}, \end{cases} \qquad (0.23)$$
where |Si| is the volume of Si. We see that |Si| plays the role of the width of the instrument, and when we let |Si| tend to zero we obtain the point observation. It should also be noted that, as the solution to (0.8)–(0.10) is understood in the weak sense, its value at a particular point does not always have a meaning, but it does in the above averaged sense. Third, with this kind of observations the data need not be available in the whole space domain or at all times. Thus, our problem setting is new and more practical than the related ones, where one requires the knowledge of u(x, T) in the whole spatial domain Ω, which is hardly realized in practice (see the recent survey [12], or [25, 61] and the references therein). To reconstruct v, we minimize the functional
$$J_\gamma(v) = \frac{1}{2}\sum_{i=1}^{N}\|l_i u(v) - h_i\|_{L^2(\tau,T)}^2 + \frac{\gamma}{2}\|v - v^*\|_{L^2(\Omega)}^2 \qquad (0.24)$$
with respect to v ∈ L²(Ω), where γ > 0 is the regularization parameter and v* an a priori estimate of v.
The last chapter is devoted to the case of observations on the boundary. We suppose that our evolution system is generated by the Neumann problem (0.8), (0.9), (0.11). The inverse problem we study is to reconstruct the initial condition v in (0.9) when the solution u is given on a part of the boundary S. Namely, let Γ ⊂ ∂Ω and denote Σ = Γ × (0, T). Our aim is to reconstruct the initial condition v from an imprecise measurement ϕ of the solution u on Σ.
6
The uniqueness of the inverse problem follows from the theory of the Cauchy problem for parabolic equations [24, 31, 45, 58]. Recently, Klibanov proved some stability estimates for this inverse problem [38, 39]. Unfortunately, up to now there are very few studies on numerical methods for this problem; the work by Bulychëv et al. [13] is the only reference on this aspect which we have found. Thus, the solution method for this problem proposed in this chapter is a new contribution to the field. We note that this problem has its root in the inverse heat conduction problem (IHCP), where one determines the surface temperature and heat flux on an inaccessible part of the boundary from the surface temperature and the surface heat flux on the accessible part of it (see [4, 9, 24, 45]). The typical formulation of IHCP requires the initial condition to be given [4, 9]. However, as IHCP can be regarded as a non-characteristic Cauchy problem for parabolic equations, no initial condition is needed [24, 45, 58]. The uniqueness and stability estimates are proved in [24, 31, 45, 58]. Using the same method as presented in Chapters 2 and 3, we minimize the Tikhonov functional
$$J_\gamma(v) = \frac{1}{2}\|u(v) - \varphi\|_{L^2(\Sigma)}^2 + \frac{\gamma}{2}\|v - v^*\|_{L^2(\Omega)}^2 \qquad (0.26)$$
with γ > 0 being the regularization parameter and v* a certain estimate of v.

The main results of this thesis have been published in:
1. Nguyen Thi Ngoc Oanh, A splitting method for a backward parabolic equation with time-dependent coefficients, Computers & Mathematics with Applications 65 (2013), 17–28. (Chapter 2)
2. Dinh Nho Hào and Nguyen Thi Ngoc Oanh, Determination of the initial condition
3. Dinh Nho Hào and Nguyen Thi Ngoc Oanh, Determination of the initial condition
They have been reported at:
1. 8th International Conference "Inverse Problems: Modeling & Simulation", 23–28 May,
2. Mini-workshop on "Analysis and Applications of PDEs", 29 June 2016, Vietnam In-
Da Nang, Vietnam;
7
4. 12th Workshop on Optimization and Scientific Computing, 23–25 April, 2014, Ba Vi, Vietnam;
5. 13th Workshop on Optimization and Scientific Computing, 23–25 April, 2015, Ba Vi, Vietnam.
8
Chapter 1
Auxiliary results
In this chapter, we introduce some basic notions of Sobolev spaces and present well-posedness results related to the Dirichlet and Neumann problems for parabolic equations. The main part of the chapter is devoted to the finite difference method in the space variables and its convergence for the Dirichlet and Neumann problems; this convergence of the finite difference scheme is a new result for the weak solution. The splitting method is suggested and proved to be stable. A variational problem and its discretized version are presented, and a new convergence result for the weak solution is proved. Lanczos' algorithm for approximating the singular values is also recalled. We consider the parabolic equation
$$\frac{\partial u}{\partial t} - \sum_{i,j=1}^{n}\frac{\partial}{\partial x_i}\Big(a_{ij}(x,t)\frac{\partial u}{\partial x_j}\Big) + b(x,t)u = f \quad \text{in } Q, \qquad (1.7)$$
$$u|_{t=0} = v \quad \text{in } \Omega, \qquad (1.8)$$
9
with either the Dirichlet boundary condition
$$u = 0 \quad \text{on } S, \qquad (1.9)$$
or the Neumann boundary condition
$$\frac{\partial u}{\partial N} = \sum_{i,j=1}^{n} a_{ij}(x,t)\,u_{x_j}\cos(\nu, x_i)\Big|_S = g \quad \text{on } S. \qquad (1.10)$$
To study these problems, we introduce the following standard Sobolev spaces (see [24, 29]).

Definition 1.1.1. The space H¹(Ω) is the set of all elements u(x) ∈ L²(Ω) having generalized derivatives ∂u/∂xi ∈ L²(Ω), i = 1, …, n, with scalar product
$$(u, v)_{H^1(\Omega)} := \int_\Omega \Big(uv + \sum_{i=1}^{n}\frac{\partial u}{\partial x_i}\frac{\partial v}{\partial x_i}\Big)dx.$$
Definition 1.1.2. The space H₀¹(Ω) is the completion of C₀¹(Ω) in the norm of H¹(Ω).
Definition 1.1.3. The space H^{1,0}(Q) is the set of all elements u(x, t) ∈ L²(Q) having generalized derivatives ∂u/∂xi ∈ L²(Q), i = 1, …, n, with scalar product
$$(u, v)_{H^{1,0}(Q)} := \iint_Q \Big(uv + \sum_{i=1}^{n}\frac{\partial u}{\partial x_i}\frac{\partial v}{\partial x_i}\Big)dx\,dt.$$
Definition 1.1.4. The space H^{1,1}(Q) is the set of all elements u(x, t) ∈ L²(Q) having generalized derivatives ∂u/∂xi ∈ L²(Q), i = 1, …, n, and ∂u/∂t ∈ L²(Q), with scalar product
$$(u, v)_{H^{1,1}(Q)} := \iint_Q \Big(uv + \sum_{i=1}^{n}\frac{\partial u}{\partial x_i}\frac{\partial v}{\partial x_i} + \frac{\partial u}{\partial t}\frac{\partial v}{\partial t}\Big)dx\,dt.$$
Definition 1.1.5. The space H₀^{1,0}(Q) is the set of all elements u(x, t) ∈ H^{1,0}(Q) vanishing on S:
$$H_0^{1,0}(Q) = \{u \in H^{1,0}(Q) : u|_S = 0\}.$$
Definition 1.1.6. The space H₀^{1,1}(Q) is the set of all elements u(x, t) ∈ H^{1,1}(Q) vanishing on S:
$$H_0^{1,1}(Q) = \{u \in H^{1,1}(Q) : u|_S = 0\}.$$
10
where
$$\|u\|_{L^2(0,T;B)}^2 = \int_0^T \|u(t)\|_B^2\,dt.$$
We also define
$$W(0, T; H^1(\Omega)) = \{u \in L^2(0, T; H^1(\Omega)) : u_t \in L^2(0, T; (H^1(\Omega))')\}$$
with norm
$$\|u\|_{W(0,T;H^1(\Omega))}^2 = \|u\|_{L^2(0,T;H^1(\Omega))}^2 + \|u_t\|_{L^2(0,T;(H^1(\Omega))')}^2.$$
The space W(0, T; H₀¹(Ω)) is defined similarly, with the note that (H₀¹(Ω))′ = H⁻¹(Ω).
The solutions of the Dirichlet problem (1.7)–(1.9) and the Neumann problem (1.7), (1.8), (1.10) are understood in the following weak sense.

Definition 1.1.7. A weak solution in W(0, T; H₀¹(Ω)) of the problem (1.7)–(1.9) is a function u(x, t) ∈ W(0, T; H₀¹(Ω)) satisfying the identity
$$\int_0^T \langle u_t, \eta\rangle_{H^{-1}(\Omega), H_0^1(\Omega)}\,dt + \iint_Q \Big[\sum_{i,j=1}^{n} a_{ij}(x,t)\frac{\partial u}{\partial x_j}\frac{\partial \eta}{\partial x_i} + b(x,t)u\eta\Big]dx\,dt = \iint_Q f\eta\,dx\,dt, \quad \forall \eta \in L^2(0, T; H_0^1(\Omega)), \qquad (1.11)$$
and
$$u|_{t=0} = v \quad \text{in } \Omega. \qquad (1.12)$$

Definition 1.1.8. A weak solution in W(0, T; H¹(Ω)) of the problem (1.7), (1.8), (1.10) is a function u(x, t) ∈ W(0, T; H¹(Ω)) satisfying the identity
$$\int_0^T \langle u_t, \eta\rangle_{(H^1(\Omega))', H^1(\Omega)}\,dt + \iint_Q \Big[\sum_{i,j=1}^{n} a_{ij}(x,t)\frac{\partial u}{\partial x_j}\frac{\partial \eta}{\partial x_i} + b(x,t)u\eta\Big]dx\,dt = \iint_Q f\eta\,dx\,dt + \iint_S g\eta\,d\zeta\,dt, \quad \forall \eta \in L^2(0, T; H^1(\Omega)), \qquad (1.13)$$
and
$$u|_{t=0} = v \quad \text{in } \Omega. \qquad (1.14)$$
Due to [24, pp. 35–46], [41], [82, pp. 141–152] and [83, Chapter IV], we have the following results about the well-posedness of the Dirichlet and Neumann problems.

Theorem 1.1.1. Let the conditions (1.1)–(1.6) be satisfied. The following statements hold:
1) There exists a unique solution u ∈ W(0, T; H₀¹(Ω)) to the Dirichlet problem (1.7)–(1.9). Furthermore, there exists a positive constant c_D, independent of the initial condition v and the right-hand side f (it depends only on a_ij, b and Ω), such that
$$\|u\|_{W(0,T;H_0^1(\Omega))} \le c_D\big(\|f\|_{L^2(Q)} + \|v\|_{L^2(\Omega)}\big). \qquad (1.15)$$
11
Theorem 1.1.2. Let the conditions (1.1)–(1.6) be satisfied. The following statements hold:
1) There exists a unique solution u ∈ W(0, T; H¹(Ω)) to the Neumann problem (1.7), (1.8), (1.10). Furthermore, there exists a positive constant c_N, independent of the initial condition v, the boundary condition g and the right-hand side f (it depends only on a_ij, b and Ω), such that
$$\|u\|_{W(0,T;H^1(\Omega))} \le c_N\big(\|f\|_{L^2(Q)} + \|g\|_{L^2(S)} + \|v\|_{L^2(\Omega)}\big). \qquad (1.16)$$
In addition to Definitions 1.1.7 and 1.1.8, we introduce the following definitions. The weak solutions in H^{1,0}(Q) to the Dirichlet problem (1.7)–(1.9) and the Neumann problem (1.7), (1.8), (1.10) are understood as follows.

Definition 1.1.9. A weak solution in H^{1,0}(Q) to the problem (1.7)–(1.9) is a function u ∈ H₀^{1,0}(Q) satisfying the identity
$$\iint_Q \Big(-u\eta_t + \sum_{i,j=1}^{n} a_{ij}(x,t)\frac{\partial u}{\partial x_j}\frac{\partial \eta}{\partial x_i} + b(x,t)u\eta\Big)dx\,dt = \iint_Q f\eta\,dx\,dt + \int_\Omega v(x)\eta(x,0)\,dx, \quad \forall \eta \in H_0^{1,1}(Q) \text{ with } \eta(x, T) = 0.$$

Definition 1.1.10. A weak solution in H^{1,0}(Q) to the problem (1.7), (1.8), (1.10) is a function u ∈ H^{1,0}(Q) satisfying the identity
$$\iint_Q \Big(-u\eta_t + \sum_{i,j=1}^{n} a_{ij}(x,t)\frac{\partial u}{\partial x_j}\frac{\partial \eta}{\partial x_i} + b(x,t)u\eta\Big)dx\,dt = \iint_Q f\eta\,dx\,dt + \int_\Omega v(x)\eta(x,0)\,dx + \iint_S g\eta\,d\zeta\,dt, \quad \forall \eta \in H^{1,1}(Q) \text{ with } \eta(x, T) = 0. \qquad (1.19)$$

It has been shown that solutions belonging to H^{1,0}(Q) in the two above definitions are in W(0, T; H¹(Ω)) (see [82, 3.4.4, pp. 148–153] and [82, 7.3]).
We shall also need some results related to the adjoint problems and Green's formula, which can be formulated as follows.
12
Consider the adjoint problem to (1.7)–(1.9):
$$\begin{cases} -p_t - \displaystyle\sum_{i,j=1}^{n}\frac{\partial}{\partial x_i}\Big(a_{ij}(x,t)\frac{\partial p}{\partial x_j}\Big) + bp = a_Q & \text{in } Q, \\ p(\zeta, t) = 0 & \text{on } S, \\ p(x, T) = a_\Omega & \text{in } \Omega, \end{cases} \qquad (1.20)$$
where a_Q ∈ L²(Q) and a_Ω ∈ L²(Ω). We define the solution of this problem as a function p ∈ W(0, T; H₀¹(Ω)) satisfying the variational problem
$$-\int_0^T (p_t, v)_{H^{-1}(\Omega), H_0^1(\Omega)}\,dt + \iint_Q \Big(\sum_{i,j=1}^{n} a_{ij}\frac{\partial p}{\partial x_j}\frac{\partial v}{\partial x_i} + bpv\Big)dx\,dt = \iint_Q a_Q v\,dx\,dt, \quad \forall v \in L^2(0, T; H_0^1(\Omega)),$$
$$p(T) = a_\Omega.$$
By changing the time direction and using the result of Theorem 1.1.1, we see that there exists a unique solution p ∈ W(0, T; H₀¹(Ω)), and p also satisfies an a priori inequality of the form (1.15).
Theorem 1.2.1. Suppose that the conditions (1.1)–(1.4) hold. Let y ∈ W(0, T; H₀¹(Ω)) be the solution to the problem
$$\begin{cases} y_t - \displaystyle\sum_{i,j=1}^{n}\frac{\partial}{\partial x_i}\Big(a_{ij}(x,t)\frac{\partial y}{\partial x_j}\Big) + by = b_Q & \text{in } Q, \\ y = 0 & \text{on } S, \\ y(x, 0) = b_\Omega & \text{in } \Omega, \end{cases} \qquad (1.21)$$
with b_Q ∈ L²(Q) and b_Ω ∈ L²(Ω). Assume that a_Q ∈ L²(Q), a_Ω ∈ L²(Ω), and p ∈ W(0, T; H₀¹(Ω)) is the weak solution to the adjoint problem (1.20). Then we have Green's formula
$$\int_\Omega a_\Omega\,y(\cdot, T)\,dx + \iint_Q a_Q\,y\,dx\,dt = \int_\Omega b_\Omega\,p(\cdot, 0)\,dx + \iint_Q b_Q\,p\,dx\,dt. \qquad (1.22)$$
Similarly, consider the adjoint problem to the Neumann problem (1.7), (1.8), (1.10):
$$\begin{cases} -p_t - \displaystyle\sum_{i,j=1}^{n}\frac{\partial}{\partial x_i}\Big(a_{ij}(x,t)\frac{\partial p}{\partial x_j}\Big) + bp = a_Q & \text{in } Q, \\ \partial_N p = a_S & \text{on } S, \\ p(x, T) = a_\Omega & \text{in } \Omega, \end{cases} \qquad (1.23)$$
where a_Q ∈ L²(Q), a_S ∈ L²(S), and a_Ω ∈ L²(Ω). We define the solution to this problem as a function p ∈ W(0, T; H¹(Ω)) satisfying the variational problem
$$-\int_0^T (p_t, v)_{(H^1(\Omega))^*, H^1(\Omega)}\,dt + \iint_Q \Big(\sum_{i,j=1}^{n} a_{ij}\frac{\partial p}{\partial x_j}\frac{\partial v}{\partial x_i} + bpv\Big)dx\,dt = \iint_Q a_Q v\,dx\,dt + \iint_S a_S v\,d\zeta\,dt, \quad \forall v \in L^2(0, T; H^1(\Omega)),$$
$$p(T) = a_\Omega.$$
13
We can also prove that there exists a unique solution p ∈ W(0, T; H¹(Ω)) to this problem. In the next two sections, we present the finite difference method for solving the direct problems. Note that since the solutions to the Dirichlet and Neumann problems studied in this thesis are understood in the weak sense, the convergence results of the finite difference method for them are not trivial. For clarity of presentation, we describe the finite difference method separately for one-dimensional and multi-dimensional problems.
Let Ω = (0, L), Q = Ω × (0, T) and S = {0, L} × (0, T). We subdivide the interval [0, L] into Nx uniform subintervals by the grid 0 = x₀ < x₁ < · · · < x_{Nx} = L with x_{i+1} − x_i = h = L/Nx. Denote by uᶦ(t) (or uᶦ if there is no confusion) the value of u at x = x_i. Consider the one-dimensional Dirichlet problem
$$\begin{cases} \dfrac{\partial u}{\partial t} - \dfrac{\partial}{\partial x}\Big(a(x,t)\dfrac{\partial u}{\partial x}\Big) + b(x,t)u = f & \text{in } Q, \\ u|_{t=0} = v & \text{in } \Omega, \\ u(0, t) = u(L, t) = 0 & \text{in } (0, T]. \end{cases} \qquad (1.26)$$
14
The solution to (1.26) is a function u(x, t) ∈ W(0, T; H₀¹(Ω)) satisfying
$$\int_0^T \langle u_t, \eta\rangle_{H^{-1}(\Omega), H_0^1(\Omega)}\,dt + \iint_Q \Big(a(x,t)\frac{\partial u}{\partial x}\frac{\partial \eta}{\partial x} + b(x,t)u\eta - f\eta\Big)dx\,dt = 0, \quad \forall \eta \in L^2(0, T; H_0^1(\Omega)), \qquad (1.27)$$
$$u|_{t=0} = v \quad \text{in } \Omega.$$
The solution to the corresponding Neumann problem is a function u ∈ W(0, T; H¹(Ω)) satisfying the identity
$$\int_0^T \langle u_t, \eta\rangle_{(H^1(\Omega))', H^1(\Omega)}\,dt + \iint_Q \Big(a(x,t)\frac{\partial u}{\partial x}\frac{\partial \eta}{\partial x} + b(x,t)u\eta\Big)dx\,dt = \iint_Q f\eta\,dx\,dt - \int_0^T g(0,t)\eta(0,t)\,dt + \int_0^T g(L,t)\eta(L,t)\,dt, \quad \forall \eta \in L^2(0, T; H^1(\Omega)), \qquad (1.29)$$
$$u|_{t=0} = v \quad \text{in } \Omega.$$
We approximate the integrals in (1.27) and (1.29) as follows:
$$\iint_Q \frac{\partial u}{\partial t}\,\eta\,dx\,dt \approx h\int_0^T \sum_{i=0}^{N_x} \frac{du^i(t)}{dt}\,\eta^i(t)\,dt, \qquad (1.30)$$
$$\iint_Q a(x,t)\frac{\partial u}{\partial x}\frac{\partial \eta}{\partial x}\,dx\,dt \approx h\int_0^T \sum_{i=0}^{N_x-1} a^i(t)\,\frac{u^{i+1}(t) - u^i(t)}{h}\cdot\frac{\eta^{i+1}(t) - \eta^i(t)}{h}\,dt, \qquad (1.31)$$
$$\iint_Q b(x,t)\,u\eta\,dx\,dt \approx h\int_0^T \sum_{i=0}^{N_x} b^i(t)\,u^i(t)\,\eta^i(t)\,dt, \qquad (1.32)$$
$$\iint_Q f\eta\,dx\,dt \approx h\int_0^T \sum_{i=0}^{N_x} f^i(t)\,\eta^i(t)\,dt, \qquad (1.33)$$
where
$$a^i(t) = \frac{1}{h}\int_{x_i}^{x_{i+1}} a(x,t)\,dx, \quad b^i(t) = \frac{1}{h}\int_{x_i}^{x_{i+1}} b(x,t)\,dx, \quad f^i(t) = \frac{1}{h}\int_{x_i}^{x_{i+1}} f(x,t)\,dx, \quad i = \overline{0, N_x - 1}. \qquad (1.34)$$
15
a. The Dirichlet problem (1.26)

Putting the approximations (1.30)–(1.33) into (1.27), we obtain
$$\int_0^T \Big(h\sum_{i=0}^{N_x} \frac{du^i}{dt}\eta^i + h\sum_{i=0}^{N_x-1} a^i\,\frac{u^{i+1} - u^i}{h}\cdot\frac{\eta^{i+1} - \eta^i}{h} + h\sum_{i=0}^{N_x} b^i u^i \eta^i - h\sum_{i=0}^{N_x} f^i \eta^i\Big)dt = 0,$$
$$u^i(0) = v^i, \quad i = \overline{0, N_x}, \qquad (1.35)$$
with
$$v^i = \frac{1}{h}\int_{x_i}^{x_{i+1}} v(x)\,dx. \qquad (1.36)$$
Similarly, putting the approximations (1.30)–(1.33) into (1.29), we obtain
$$\int_0^T \Big(h\sum_{i=0}^{N_x} \frac{du^i}{dt}\eta^i + h\sum_{i=0}^{N_x-1} a^i\,\frac{u^{i+1} - u^i}{h}\cdot\frac{\eta^{i+1} - \eta^i}{h} + h\sum_{i=0}^{N_x} b^i u^i \eta^i\Big)dt = \int_0^T \Big(h\sum_{i=0}^{N_x} f^i \eta^i - g^0\eta^0 + g^{N_x}\eta^{N_x}\Big)dt, \qquad (1.39)$$
$$u^i(0) = v^i, \quad i = \overline{0, N_x}.$$
16
where ū(t) = (u⁰(t), u¹(t), …, u^{N_x}(t))′ and v̄ = (v⁰, v¹, …, v^{N_x})′. The coefficient matrix Λ is given by formula (1.38), and the right-hand side F̄(t) is given by
$$\bar F^i(t) = \begin{cases} f^i, & i = \overline{1, N_x - 1}, \\ f^0 - \frac{1}{h}\,g^0, & i = 0, \\ f^{N_x} + \frac{1}{h}\,g^{N_x}, & i = N_x. \end{cases} \qquad (1.41)$$
The positive semi-definiteness of Λ in (1.37) and (1.40) is established by the following lemma.

Lemma 1.3.1. For each t, the coefficient matrix Λ defined in the systems (1.37) and (1.40) is positive semi-definite.
interval $(0,T)$ into $N_t$ uniform subintervals by the grid $0 = t_0 < t_1 < \dots < t_{N_t} = T$ with $t_{m+1} - t_m = \Delta t = T/N_t$, $m$ being the time index. Denoting $u^m = \bar u(t_m)$, $\Lambda^m = \Lambda(t_m)$, $F^m = \bar F(t_m)$, $m = 0, 1, \dots, N_t$, we discretize (1.37) and (1.40) by ([56])
\[
\frac{u^{m+1} - u^m}{\Delta t} + \Lambda^m \frac{u^{m+1} + u^m}{2} = F^{m+1/2},
\qquad u^0 = \bar v.
\tag{1.42}
\]
Denote by $(\cdot,\cdot)$ and $\|\cdot\|$ the scalar product and the Euclidean norm in the space $\mathbb{R}^{N_x}$, respectively. We have the following result on the stability of the finite difference scheme.
\[
\|u^{m+1}\| \le \Big\| \Big(E + \frac{\Delta t}{2}\Lambda^m\Big)^{-1}\Big(E - \frac{\Delta t}{2}\Lambda^m\Big) \Big\| \, \|u^m\| + \Delta t\,\Big\| \Big(E + \frac{\Delta t}{2}\Lambda^m\Big)^{-1} \Big\| \, \|F^{m+1/2}\|.
\tag{1.44}
\]
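The scheme (1.42) is straightforward to implement once $\Lambda(t)$ and $\bar F(t)$ are assembled. The following Python sketch (with illustrative names, not the thesis code) advances the system $du/dt + \Lambda(t)u = F(t)$ and checks the step on a scalar test problem:

```python
import numpy as np

# Sketch of the Crank-Nicolson scheme (1.42) for du/dt + Lam(t) u = F(t):
#   (u^{m+1} - u^m)/dt + Lam^m (u^{m+1} + u^m)/2 = F^{m+1/2}.
# Lam(t) returns an (N,N) matrix, F(t) an (N,) vector; names are illustrative.
def crank_nicolson(Lam, F, v, T, Nt):
    dt = T / Nt
    u = v.copy()
    E = np.eye(len(v))
    for m in range(Nt):
        Lm = Lam(m * dt)                     # Lam^m, frozen on the step
        rhs = (E - 0.5 * dt * Lm) @ u + dt * F((m + 0.5) * dt)
        u = np.linalg.solve(E + 0.5 * dt * Lm, rhs)
    return u

# Scalar check: u' + u = 0, u(0) = 1, so u(1) ~ exp(-1)
u_end = crank_nicolson(lambda t: np.array([[1.0]]),
                       lambda t: np.zeros(1), np.ones(1), T=1.0, Nt=200)
print(u_end[0])  # ~ 0.36788 (second-order accurate in dt)
```

The scheme is unconditionally stable for positive semi-definite $\Lambda^m$, in line with the estimate (1.44).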
On the other hand, since $\Lambda^m$ is positive semi-definite, it follows from Kellogg's Lemma [56] that
\[
\Big\| \Big(E + \frac{\Delta t}{2}\Lambda^m\Big)^{-1}\Big(E - \frac{\Delta t}{2}\Lambda^m\Big) \Big\| \le 1.
\]
Moreover,
\[
\Big\| \Big(E + \frac{\Delta t}{2}\Lambda^m\Big)^{-1} \Big\|^2
= \sup_{\varphi} \frac{\big( (E + \frac{\Delta t}{2}\Lambda^m)^{-1}\varphi,\, (E + \frac{\Delta t}{2}\Lambda^m)^{-1}\varphi \big)}{(\varphi,\varphi)}
= \sup_{\phi} \frac{(\phi,\phi)}{\big( (E + \frac{\Delta t}{2}\Lambda^m)\phi,\, (E + \frac{\Delta t}{2}\Lambda^m)\phi \big)}
= \sup_{\phi} \frac{(\phi,\phi)}{(\phi,\phi) + \Delta t(\Lambda^m\phi,\phi) + \frac{\Delta t^2}{4}(\Lambda^m\phi,\Lambda^m\phi)}
\le 1.
\]
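Both bounds are easy to confirm numerically for a symmetric positive semi-definite $\Lambda$ (a special case of the lemma); a small illustrative Python check with random data:

```python
import numpy as np

# Check ||(E + c*Lam)^{-1}(E - c*Lam)|| <= 1 and ||(E + c*Lam)^{-1}|| <= 1
# for a symmetric positive semi-definite Lam (illustrative random example).
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
Lam = B @ B.T                       # symmetric positive semi-definite
E = np.eye(6)
c = 0.05                            # plays the role of dt/2
M = np.linalg.solve(E + c * Lam, E - c * Lam)
R = np.linalg.inv(E + c * Lam)
print(np.linalg.norm(M, 2), np.linalg.norm(R, 2))  # both at most 1
```

For such $\Lambda$ the eigenvalues of the first matrix are $(1-c\lambda)/(1+c\lambda) \in (-1,1]$ and of the second $1/(1+c\lambda) \le 1$, which is exactly the mechanism behind the estimates above.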
Thus, from (1.44) we obtain the stability of the scheme.

The advantages of the splitting schemes are: (i) they are stable regardless of the choice of the spatial and temporal grid sizes, and (ii) the resulting linear systems can be easily solved since they are triangular systems. Using the techniques presented in [40, 54, 56, 84] (see also [26, 27, 29, 61, 62, 76, 77]), we propose the finite difference scheme based on the formulas (1.11) and (1.13) for the definition of the weak solutions. First, we discretize the problem in the space variables and obtain a system of ordinary differential equations with respect to the time variable $t$; then we discretize the obtained system in time by the splitting method. We do not consider mixed derivatives in the equation (1.7), although in principle this case can be treated in the same manner [84]. Thus, for ease of notation, we set $a_{ij} = 0$ if $i \ne j$ and denote $a_{ii} = a_i$, $i = 1, \dots, n$.
Before getting into details, we rewrite the Dirichlet problem (1.7)–(1.9) without mixed derivatives as follows:
\[
\frac{\partial u}{\partial t} - \sum_{i=1}^n \frac{\partial}{\partial x_i}\Big( a_i(x,t)\frac{\partial u}{\partial x_i} \Big) + b(x,t)u = f \ \text{in } Q, \qquad
u|_{t=0} = v \ \text{in } \Omega, \qquad
u = 0 \ \text{on } S.
\tag{1.46}
\]
The weak solution to the problem (1.46) is a function $u(x,t) \in W(0,T; H_0^1(\Omega))$ satisfying the identity
\[
\int_0^T \langle u_t, \eta\rangle_{H^{-1}(\Omega), H_0^1(\Omega)}\,dt
+ \iint_Q \Big[ \sum_{i=1}^n a_i(x,t)\frac{\partial u}{\partial x_i}\frac{\partial \eta}{\partial x_i} + b(x,t)u\eta - f\eta \Big]\,dx\,dt = 0,
\quad \forall \eta \in L^2(0,T; H_0^1(\Omega)),
\qquad u|_{t=0} = v \ \text{in } \Omega.
\tag{1.47}
\]
Similarly, we rewrite the Neumann problem (1.7), (1.8), (1.10) in the form
\[
\frac{\partial u}{\partial t} - \sum_{i=1}^n \frac{\partial}{\partial x_i}\Big( a_i(x,t)\frac{\partial u}{\partial x_i} \Big) + b(x,t)u = f \ \text{in } Q, \qquad
u|_{t=0} = v \ \text{in } \Omega, \qquad
\frac{\partial u}{\partial N} = g \ \text{on } S.
\tag{1.48}
\]
The weak solution to the problem (1.48) is a function $u(x,t) \in W(0,T; H^1(\Omega))$ satisfying
\[
\int_0^T \langle u_t, \eta\rangle_{(H^1(\Omega))', H^1(\Omega)}\,dt
+ \iint_Q \Big[ \sum_{i=1}^n a_i(x,t)\frac{\partial u}{\partial x_i}\frac{\partial \eta}{\partial x_i} + b(x,t)u\eta \Big]\,dx\,dt
= \iint_Q f\eta\,dx\,dt + \iint_S g\eta\,d\zeta\,dt,
\quad \forall \eta \in L^2(0,T; H^1(\Omega)),
\qquad u|_{t=0} = v \ \text{in } \Omega.
\tag{1.49}
\]
• $k := (k_1, \dots, k_n)$, $0 \le k_i \le N_i$;
• $x^k := (x_1^{k_1}, \dots, x_n^{k_n})$ is the grid point;
• $\Delta h := h_1 \cdots h_n$;
• $e_i$, $i = 1, \dots, n$, being the unit vector in the $x_i$-direction, i.e. $e_1 = (1, 0, \dots, 0), \dots, e_n = (0, \dots, 0, 1)$.

Around each grid point, we define the following subsets of $\Omega$:
\[
\omega(k) := \{ x \in \Omega : (k_i - 0.5)h_i < x_i < (k_i + 0.5)h_i, \ \forall i = 1, \dots, n \},
\tag{1.51}
\]
\[
\omega_i^+(k) := \{ x \in \Omega : k_i h_i \le x_i \le (k_i + 1)h_i, \ (k_j - 0.5)h_j \le x_j \le (k_j + 0.5)h_j, \ \forall j \ne i \}.
\tag{1.52}
\]
The set of the indices of all grid points belonging to $\bar\Omega$ is denoted by $\bar\Omega_h$. The set of the indices of all interior grid points is denoted by $\Omega_h$. The set of the indices of all boundary grid points is denoted by $\Pi_h$, that is,
\[
\Pi_h = \bar\Omega_h \setminus \Omega_h,
\tag{1.55}
\]
\[
\Pi_h^{ir} := \{ k = (k_1, k_2, \dots, k_n) : k_i = N_i \}, \quad i = 1, \dots, n.
\tag{1.57}
\]
For a function $u(x,t)$ defined in $Q$, we denote by $\bar u^k(t)$ (or $u^k$ if there is no confusion), or $u(k,t)$, its approximate value at $(x^k, t)$. Suppose that $\bar u = \{u^k, k \in \bar\Omega_h\}$ is a grid function defined in $Q_T^h := \bar\Omega_h \times (0,T)$ which has the first weak derivative with respect to $t$. We define
\[
\|\bar u\|^2_{H^{1,0}(Q_T^h)} := \int_0^T \Delta h \Big[ \sum_{k\in\bar\Omega_h} \big|\bar u^k(t)\big|^2 + \sum_{i=1}^n \sum_{k\in\bar\Omega_h^i} \big|\bar u^k_{x_i}(t)\big|^2 \Big]\,dt
\tag{1.59}
\]
and
\[
\|\bar u\|^2_{H^{1,1}(Q_T^h)} := \int_0^T \Delta h \Big[ \sum_{k\in\bar\Omega_h} \Big( \big|\bar u^k(t)\big|^2 + \big|\bar u^k_t(t)\big|^2 \Big) + \sum_{i=1}^n \sum_{k\in\bar\Omega_h^i} \big|\bar u^k_{x_i}(t)\big|^2 \Big]\,dt,
\tag{1.60}
\]
with the direct difference quotient and the backward one defined as follows.
When $u^h$ is a grid function, we denote $u^h_{x_i}$ by $u_{x_i}$. For a grid function $\bar u$ we define the following interpolations in $Q$:

1) Piecewise constant:

2) Multi-linear:
\[
\hat{\bar u}(x,t) := \bar u^k(t) + \sum_{i=1}^n \bar u^k_{x_i}(t)(x_i - k_i h_i) + \sum_{1\le i<j\le n} \bar u^k_{x_i x_j}(t)(x_i - k_i h_i)(x_j - k_j h_j)
+ \dots + \bar u^k_{x_1 x_2 \dots x_n}(t) \prod_{i=1}^n (x_i - k_i h_i), \quad (x,t) \in \omega^+(k) \times (0,T).
\tag{1.62}
\]
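For $n = 2$, formula (1.62) is ordinary bilinear interpolation inside each grid box. A minimal Python sketch (uniform grid and forward difference quotients are our assumptions; names are illustrative):

```python
import numpy as np

# Sketch of the multi-linear interpolation (1.62) for n = 2 (bilinear case):
# inside each grid box, u_hat matches the four corner values of the grid
# function.  A uniform grid and forward quotients are assumed here.
def bilinear(ubar, h1, h2, x1, x2):
    k1, k2 = int(x1 // h1), int(x2 // h2)
    s, t = x1 - k1 * h1, x2 - k2 * h2
    ux1 = (ubar[k1 + 1, k2] - ubar[k1, k2]) / h1          # quotient in x1
    ux2 = (ubar[k1, k2 + 1] - ubar[k1, k2]) / h2          # quotient in x2
    ux1x2 = (ubar[k1 + 1, k2 + 1] - ubar[k1 + 1, k2]
             - ubar[k1, k2 + 1] + ubar[k1, k2]) / (h1 * h2)
    return ubar[k1, k2] + ux1 * s + ux2 * t + ux1x2 * s * t

# A bilinear grid function is reproduced exactly:
h = 0.25
grid = np.fromfunction(lambda i, j: 1 + 2*i*h + 3*j*h + (i*h)*(j*h), (5, 5))
val = bilinear(grid, h, h, 0.3, 0.6)
print(val)   # equals 1 + 2*0.3 + 3*0.6 + 0.3*0.6 = 3.58
```

The check exploits the fact that multi-linear interpolation reproduces multi-linear functions exactly, which is the property used in the convergence lemmas below.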
From Theorem 3.2, Theorem 3.3, Chapter 6 in [40] and the comments therein, we have the following results.

Lemma 1.4.1. Suppose that the grid function $\bar u$ satisfies the inequality

Asymptotic relationships between the two interpolations as the grid size $h$ tends to zero

Lemma 1.4.3. Suppose that the hypothesis of Lemma 1.4.1 is fulfilled. If $\{\hat{\bar u}(x,t)\}_h$ weakly converges to a function $u(x,t)$ in $L^2(Q)$ as the grid size $h$ tends to zero, then the sequence $\{\tilde{\bar u}(x,t)\}_h$ also weakly converges to $u(x,t)$ in $L^2(Q)$. Moreover, if $\{\hat{\bar u}|_S\}_h$ weakly converges to $u|_S$ in $L^2(S)$, then $\{\tilde{\bar u}|_S\}_h$ also weakly converges to $u|_S$ in $L^2(S)$.
Lemma 1.4.4. Suppose that the hypothesis of Lemma 1.4.2 is fulfilled. If $\{\hat{\bar u}(x,t)\}_h$ strongly converges to a function $u(x,t)$ in $L^2(Q)$ as the grid size $h$ tends to zero, then the sequence $\{\tilde{\bar u}(x,t)\}_h$ also strongly converges to $u(x,t)$ in $L^2(Q)$. Moreover, if $\{\hat{\bar u}|_S\}_h$ converges to $u|_S$ in $L^2(S)$, then $\{\tilde{\bar u}|_S\}_h$ also converges to $u|_S$ in $L^2(S)$.

Lemma 1.4.5. Suppose that the hypothesis of Lemma 1.4.2 is fulfilled and $i \in \{1, \dots, n\}$. If the sequence of derivatives $\{\hat{\bar u}_{x_i}(x,t)\}_h$ weakly converges to a function $u(x,t)$ in $L^2(Q)$ as $h$ tends to zero, then the sequence $\{\tilde{\bar u}_{x_i}(x,t)\}_h$ also converges to $u(x,t)$ in $L^2(Q)$, where

1.4.2. Discretization in the space variables and the convergence of the finite difference scheme
For a function $z$ defined in $\Omega$, we define its average on $\omega(k)$ by
\[
z^k = \frac{1}{|\omega(k)|}\int_{\omega(k)} z(x)\,dx = \frac{1}{\Delta h}\int_{\omega(k)} z(x)\,dx.
\tag{1.67}
\]
Here, if $k$ belongs to the boundary of $\Omega$, then we understand that $\omega(k)$ is a grid box of $\Omega$ containing $k$.
We approximate the integrals in (1.47) and (1.49) as follows:
\[
\iint_Q \frac{\partial u}{\partial t}\eta\,dx\,dt \approx \Delta h \int_0^T \sum_{k\in\bar\Omega_h} \frac{d\bar u^k(t)}{dt}\bar\eta^k(t)\,dt,
\tag{1.68}
\]
\[
\iint_Q a_i(x,t)\frac{\partial u}{\partial x_i}\frac{\partial \eta}{\partial x_i}\,dx\,dt \approx \Delta h \int_0^T \sum_{k\in\Omega_h^i} \bar a_i^k(t)\bar u^k_{x_i}(t)\bar\eta^k_{x_i}(t)\,dt,
\tag{1.69}
\]
\[
\iint_Q b(x,t)u\eta\,dx\,dt \approx \Delta h \int_0^T \sum_{k\in\bar\Omega_h} \bar b^k(t)\bar u^k(t)\bar\eta^k(t)\,dt,
\tag{1.70}
\]
\[
\iint_Q f\eta\,dx\,dt \approx \Delta h \int_0^T \sum_{k\in\bar\Omega_h} \bar f^k(t)\bar\eta^k(t)\,dt,
\tag{1.71}
\]
\[
\iint_S g\eta\,d\zeta\,dt \approx \int_0^T \Big[ \sum_{i=1}^n \frac{\Delta h}{h_i}\Big( \sum_{k\in\Pi_h^{il}\setminus\Pi^0} \bar g^k(t)\bar\eta^k(t) + \sum_{k\in\Pi_h^{ir}\setminus\Pi^0} \bar g^k(t)\bar\eta^k(t) \Big) + \Delta h \sum_{k\in\Pi^0} \bar g^k(t)\bar\eta^k(t) \Big]\,dt.
\tag{1.72}
\]
\[
\int_0^T \Delta h \Big[ \sum_{k\in\bar\Omega_h} \Big( \frac{d\bar u^k}{dt} + \bar b^k \bar u^k - \bar f^k \Big)\bar\eta^k + \sum_{i=1}^n \sum_{k\in\Omega_h^i} \bar a_i^k \bar u^k_{x_i}\bar\eta^k_{x_i} \Big]\,dt = 0.
\tag{1.73}
\]
Using the summation by parts formula together with the condition $\bar u^k = \bar\eta^k = 0$ when $k_i = 0$ and $k_i = N_i$, with $\bar a^k_{i-} := \bar a_i^{k-e_i}$, replacing into (1.73) and approximating the initial condition $\bar u^k(0) = \bar v^k$, $k \in \bar\Omega_h$, we obtain the following system approximating the original problem (1.47):
\[
\frac{d\bar u}{dt} + (\Lambda_1 + \dots + \Lambda_n)\bar u - \bar F = 0,
\qquad \bar u(0) = \bar v,
\tag{1.75}
\]
with $\bar u = \{u^k, k \in \bar\Omega_h\}$, the function $\bar v = \{v^k, k \in \bar\Omega_h\}$ being a grid function approximating the initial condition $v$, and $\bar F = \{\bar f^k, k \in \Omega_h\}$, where $\bar f^k$ is defined in formula (1.67). The proof of this proposition is similar to the proof of Lemma 1.4.6 for the Neumann problem in
\[
\int_0^T \Delta h \Big[ \sum_{k\in\bar\Omega_h} \Big( \frac{d\bar u^k}{dt} + \bar b^k \bar u^k \Big)\bar\eta^k + \sum_{i=1}^n \sum_{k\in\Omega_h^i} \bar a_i^k \bar u^k_{x_i}\bar\eta^k_{x_i} \Big]\,dt
= \int_0^T \Big[ \Delta h \sum_{k\in\bar\Omega_h} \bar f^k \bar\eta^k + \sum_{i=1}^n \frac{\Delta h}{h_i}\Big( \sum_{k\in\Pi_h^{il}\setminus\Pi^0} \bar g^k \bar\eta^k + \sum_{k\in\Pi_h^{ir}\setminus\Pi^0} \bar g^k \bar\eta^k \Big) + \Delta h \sum_{k\in\Pi^0} \bar g^k(t)\bar\eta^k(t) \Big]\,dt.
\tag{1.77}
\]
Denote $k^{il} = (k_1, \dots, k_{i-1}, 0, k_{i+1}, \dots, k_n)$ and $k^{ir} = (k_1, \dots, k_{i-1}, N_i, k_{i+1}, \dots, k_n)$. For arbitrary and fixed indices $k_2, \dots, k_n$, using summation by parts, we obtain
\[
\begin{aligned}
\sum_{k_1=0}^{N_1-1} \bar a_1^k \bar u^k_{x_1}\bar\eta^k_{x_1}
&= \sum_{k_1=0}^{N_1-1} \bar a_1^k \frac{\bar u^{k+e_1}-\bar u^k}{h_1}\,\frac{\bar\eta^{k+e_1}-\bar\eta^k}{h_1}\\
&= \sum_{k_1=0}^{N_1-1} \bar a_1^k \frac{\bar u^{k+e_1}-\bar u^k}{h_1^2}\bar\eta^{k+e_1} - \sum_{k_1=0}^{N_1-1} \bar a_1^k \frac{\bar u^{k+e_1}-\bar u^k}{h_1^2}\bar\eta^k\\
&= \sum_{k_1=1}^{N_1} \bar a^k_{1-}\frac{\bar u^k - \bar u^{k-e_1}}{h_1^2}\bar\eta^k - \sum_{k_1=0}^{N_1-1} \bar a_1^k \frac{\bar u^{k+e_1}-\bar u^k}{h_1^2}\bar\eta^k\\
&= -\bar a_1^{k^{1l}}\frac{\bar u^{k^{1l}+e_1} - \bar u^{k^{1l}}}{h_1^2}\bar\eta^{k^{1l}}
+ \sum_{k_1=1}^{N_1-1} \Big( \bar a^k_{1-}\frac{\bar u^k - \bar u^{k-e_1}}{h_1^2} - \bar a_1^k \frac{\bar u^{k+e_1}-\bar u^k}{h_1^2} \Big)\bar\eta^k
+ \bar a^{k^{1r}}_{1-}\frac{\bar u^{k^{1r}} - \bar u^{k^{1r}-e_1}}{h_1^2}\bar\eta^{k^{1r}},
\end{aligned}
\tag{1.78}
\]
with $\bar a^k_{1-} = \bar a_1^{k-e_1}$. The similar terms in the $x_2, \dots, x_n$ directions are treated in the same way. Then, replacing this equality into (1.77) and approximating the initial condition in the second equation of (1.49), we obtain the following system approximating the original problem (1.49):
\[
\frac{d\bar u}{dt} + (\Lambda_1 + \dots + \Lambda_n)\bar u - \bar F = 0,
\qquad \bar u(0) = \bar v,
\tag{1.79}
\]
with $\bar u = \{u^k, k \in \bar\Omega_h\}$ and the function $\bar v = \{v^k, k \in \bar\Omega_h\}$ being a grid function approximating the initial condition $v$, and
\[
(\Lambda_i \bar u)^k = \frac{\bar b^k \bar u^k}{n} +
\begin{cases}
\bar a^k_{i-}\dfrac{\bar u^k - \bar u^{k-e_i}}{h_i^2} - \bar a_i^k\dfrac{\bar u^{k+e_i} - \bar u^k}{h_i^2}, & k \in \bar\Omega_h : 1 \le k_i \le N_i - 1,\\[2mm]
-\bar a_i^k\dfrac{\bar u^{k+e_i} - \bar u^k}{h_i^2}, & k \in \Pi_h : k_i = 0, \ k \notin \Pi^0,\\[2mm]
\bar a^k_{i-}\dfrac{\bar u^k - \bar u^{k-e_i}}{h_i^2}, & k \in \Pi_h : k_i = N_i, \ k \notin \Pi^0,\\[2mm]
0, & k \in \Pi^0.
\end{cases}
\tag{1.80}
\]
Next, we will prove that the coefficient matrix $\Lambda_i$ defined by (1.80) is positive semi-definite.

Lemma 1.4.6. For any $t$, the coefficient matrix $\Lambda_i$, $i = 1, 2, \dots, n$, is positive semi-definite.
Proof. Without loss of generality, we assume $n = 2$ and $i = 1$. We have
\[
\Lambda_1 = \frac{1}{h_1^2}
\begin{pmatrix}
\bar a_1^{(0,k_2)} + \frac{h_1^2 \bar b^{(0,k_2)}}{2} & -\bar a_1^{(0,k_2)} & \dots & 0 & 0\\
-\bar a_{1-}^{(1,k_2)} & 2\bar a_{1*}^{(1,k_2)} + \frac{h_1^2 \bar b^{(1,k_2)}}{2} & \dots & 0 & 0\\
0 & -\bar a_{1-}^{(2,k_2)} & \dots & 0 & 0\\
\dots & \dots & \dots & \dots & \dots\\
0 & 0 & \dots & 2\bar a_{1*}^{(N_1-1,k_2)} + \frac{h_1^2 \bar b^{(N_1-1,k_2)}}{2} & -\bar a_1^{(N_1-1,k_2)}\\
0 & 0 & \dots & -\bar a_{1-}^{(N_1,k_2)} & \bar a_{1-}^{(N_1,k_2)} + \frac{h_1^2 \bar b^{(N_1,k_2)}}{2}
\end{pmatrix}
\tag{1.82}
\]
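Lemma 1.4.6 can be checked numerically for a one-dimensional slice of the form (1.82). The sketch below builds the tridiagonal matrix with $\bar a \ge \lambda > 0$ and $\bar b \ge 0$, assuming (as the structure of (1.82) suggests) that $2\bar a_{1*} = \bar a_1 + \bar a_{1-}$ at interior nodes, and verifies that its smallest eigenvalue is nonnegative; the coefficients are illustrative random data:

```python
import numpy as np

# Numerical illustration of Lemma 1.4.6 for a 1D slice of (1.82): tridiagonal
# matrix with diagonal a_{k-1} + a_k + h^2 b_k / 2 (one-sided at the ends) and
# off-diagonals -a_k.  With a > 0, b >= 0 this is positive semi-definite,
# since u' Lam u * h^2 = sum_k a_k (u_{k+1}-u_k)^2 + (h^2/2) sum_k b_k u_k^2.
N, h = 8, 0.125
rng = np.random.default_rng(1)
a = 1.0 + rng.random(N)            # a_k on the N cells, bounded below by 1
b = rng.random(N + 1)              # b_k >= 0 at the N + 1 nodes
Lam = np.zeros((N + 1, N + 1))
for k in range(N + 1):
    left = a[k - 1] if k > 0 else 0.0
    right = a[k] if k < N else 0.0
    Lam[k, k] = left + right + h**2 * b[k] / 2
    if k < N:
        Lam[k, k + 1] = -a[k]
        Lam[k + 1, k] = -a[k]
Lam /= h**2
print(np.linalg.eigvalsh(Lam).min())   # nonnegative (semi-definite)
```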
a) If $v \in L^2(\Omega)$, then there exists a constant $c$ independent of $h$ and the coefficients of the equation such that
\[
\max_{t\in[0,T]} \Delta h \sum_{k\in\bar\Omega_h} |\bar u^k(t)|^2 + \Delta h \int_0^T \sum_{i=1}^n \sum_{k\in\Omega_h^i} |\bar u^k_{x_i}|^2\,dt
\le c \Big( \Delta h \int_0^T \sum_{k\in\bar\Omega_h} |\bar f^k|^2\,dt + \Delta h \sum_{k\in\bar\Omega_h} |\bar v^k|^2 \Big).
\tag{1.83}
\]
Proof. The main idea of this proof is similar to that of [40, Chapter VI].
a) For arbitrary $t^* \in (0,T]$, set
\[
\bar\eta^k(t) =
\begin{cases}
\bar u^k(t), & t \in [0, t^*],\\
0, & t \notin [0, t^*].
\end{cases}
\]
Since
\[
\int_0^{t^*} \sum_{k\in\Omega_h} \bar u^k_t(t)\bar u^k(t)\,dt = \frac12 \sum_{k\in\bar\Omega_h} |\bar u^k(t^*)|^2 - \frac12 \sum_{k\in\bar\Omega_h} |\bar u^k(0)|^2
\]
and $\bar u^k(0) = \bar v^k$, it follows from (1.73) that
\[
\frac12 \Delta h \sum_{k\in\bar\Omega_h} |\bar u^k(t^*)|^2
+ \Delta h \int_0^{t^*} \Big[ \sum_{k\in\bar\Omega_h} \bar b^k |u^k|^2 + \sum_{i=1}^n \sum_{k\in\Omega_h^i} \bar a_i^k |\bar u^k_{x_i}|^2 \Big]\,dt
= \Delta h \int_0^{t^*} \sum_{k\in\bar\Omega_h} \bar f^k \bar u^k\,dt + \frac12 \Delta h \sum_{k\in\bar\Omega_h} |\bar v^k|^2.
\tag{1.85}
\]
Multiplying both sides of the equality (1.85) by 2, applying Cauchy's inequality to the first term on the right-hand side, and noting that $\bar b^k \ge 0$, we obtain
\[
\Delta h \sum_{k\in\bar\Omega_h} |\bar u^k(t^*)|^2 + 2\Delta h \int_0^{t^*} \sum_{i=1}^n \sum_{k\in\Omega_h^i} \bar a_i^k |\bar u^k_{x_i}|^2\,dt
\le \Delta h \int_0^{t^*} \sum_{k\in\bar\Omega_h} |\bar f^k|^2\,dt + \Delta h \int_0^{t^*} \sum_{k\in\bar\Omega_h} |\bar u^k|^2\,dt + \Delta h \sum_{k\in\bar\Omega_h} |\bar v^k|^2.
\tag{1.86}
\]
Put
\[
y(t) = \Delta h \sum_{k\in\bar\Omega_h} |\bar u^k(t)|^2.
\]
Hence, we have
\[
\max_{t\in[0,T]} \Delta h \sum_{k\in\bar\Omega_h} |\bar u^k(t)|^2
\le c \Big( \Delta h \int_0^T \sum_{k\in\bar\Omega_h} |\bar f^k|^2\,dt + \Delta h \sum_{k\in\bar\Omega_h} |\bar v^k|^2 \Big).
\tag{1.88}
\]
From the conditions (1.1)–(1.3) on the coefficients $a_i$ and the inequalities (1.86) and (1.87), we have
\[
\Delta h \int_0^T \Big[ \sum_{k\in\bar\Omega_h} |\bar u^k(t)|^2 + \sum_{i=1}^n \sum_{k\in\Omega_h^i} |\bar u^k_{x_i}|^2 \Big]\,dt
\le c \Big( \Delta h \int_0^T \sum_{k\in\bar\Omega_h} |\bar f^k|^2\,dt + \Delta h \sum_{k\in\bar\Omega_h} |\bar v^k|^2 \Big).
\tag{1.89}
\]
Combining the two inequalities, we obtain the inequality (1.83).

b) In order to obtain a bound for $\Delta h \int_0^T \sum_{k\in\bar\Omega_h} |\bar u^k_t(t)|^2\,dt$, we replace $\bar\eta^k$ in (1.73) by $\bar u^k_t$ and get
\[
\int_0^T \Delta h \Big[ \sum_{k\in\bar\Omega_h} \big( |\bar u^k_t(t)|^2 + \bar b^k \bar u^k \bar u^k_t \big) + \sum_{i=1}^n \sum_{k\in\Omega_h^i} \bar a_i^k \bar u^k_{x_i}\bar u^k_{x_i t} \Big]\,dt
= \Delta h \int_0^T \sum_{k\in\bar\Omega_h} \bar f^k \bar u^k_t\,dt.
\tag{1.90}
\]
Multiplying both sides of (1.90) by 2, integrating by parts in time the second and the third terms on the left-hand side, and using Cauchy's inequality for the right-hand side, we obtain
\[
\begin{aligned}
2\Delta h &\int_0^T \sum_{k\in\bar\Omega_h} |\bar u^k_t(t)|^2\,dt
+ \Delta h \sum_{k\in\bar\Omega_h} \bar b^k \big[ |\bar u^k(T)|^2 - |\bar u^k(0)|^2 \big] - \Delta h \sum_{k\in\bar\Omega_h} \int_0^T \frac{d\bar b^k(t)}{dt}|\bar u^k|^2(t)\,dt\\
&+ \Delta h \sum_{i=1}^n \sum_{k\in\Omega_h^i} \bar a_i^k \big[ |\bar u^k_{x_i}(T)|^2 - |\bar v^k_{x_i}|^2 \big] - \Delta h \sum_{i=1}^n \sum_{k\in\Omega_h^i} \int_0^T \frac{d\bar a_i^k(t)}{dt}|\bar u^k_{x_i}|^2\,dt\\
&= 2\Delta h \int_0^T \sum_{k\in\bar\Omega_h} \bar f^k \bar u^k_t\,dt
\le \Delta h \int_0^T \sum_{k\in\bar\Omega_h} |\bar f^k|^2\,dt + \Delta h \int_0^T \sum_{k\in\bar\Omega_h} |\bar u^k_t|^2\,dt.
\end{aligned}
\]
From this inequality, since $b \ge 0$, by the conditions (1.1)–(1.3) on the coefficients $a_i$, the inequalities (1.86), (1.87) and the hypothesis of the theorem, we obtain
\[
\begin{aligned}
\Delta h \int_0^T \sum_{k\in\bar\Omega_h} |\bar u^k_t(t)|^2\,dt
&\le \Delta h \int_0^T \sum_{k\in\bar\Omega_h} |\bar f^k|^2\,dt
+ \Delta h \sum_{k\in\bar\Omega_h} \bar b^k |\bar v^k|^2
+ \Delta h \sum_{i=1}^n \sum_{k\in\Omega_h^i} \bar a_i^k |\bar v^k_{x_i}|^2\\
&\quad + \Delta h \sum_{k\in\bar\Omega_h} \int_0^T \frac{d\bar b^k(t)}{dt}|\bar u^k|^2(t)\,dt
+ \Delta h \sum_{i=1}^n \sum_{k\in\Omega_h^i} \int_0^T \frac{d\bar a_i^k(t)}{dt}|\bar u^k_{x_i}|^2\,dt\\
&\le \Delta h \int_0^T \sum_{k\in\bar\Omega_h} |\bar f^k|^2\,dt
+ \Delta h\,\mu_1 \sum_{k\in\bar\Omega_h} |\bar v^k|^2
+ \Delta h\,\Lambda \sum_{i=1}^n \sum_{k\in\Omega_h^i} |\bar v^k_{x_i}|^2\\
&\quad + \mu_2 \Delta h \int_0^T \sum_{k\in\bar\Omega_h} |\bar u^k|^2(t)\,dt
+ \mu_2 \Delta h \sum_{i=1}^n \sum_{k\in\Omega_h^i} \int_0^T |\bar u^k_{x_i}|^2\,dt.
\end{aligned}
\]
Using this inequality and (1.89) we get
\[
\Delta h \int_0^T \Big[ \sum_{k\in\bar\Omega_h} |\bar u^k(t)|^2 + \sum_{i=1}^n \sum_{k\in\Omega_h^i} |\bar u^k_{x_i}|^2 \Big]\,dt
\le c \Big( \Delta h \int_0^T \sum_{k\in\bar\Omega_h} |\bar f^k|^2\,dt + \Delta h \sum_{k\in\bar\Omega_h} |\bar v^k|^2 \Big),
\tag{1.91}
\]
\[
\Delta h \int_0^T \sum_{k\in\bar\Omega_h} |\bar u^k_t(t)|^2\,dt
\le \Delta h \int_0^T \sum_{k\in\bar\Omega_h} |\bar f^k|^2\,dt
+ 2\Delta h\,\mu_1 \sum_{k\in\bar\Omega_h} |\bar v^k|^2
+ 2\Delta h\,\Lambda \sum_{i=1}^n \sum_{k\in\Omega_h^i} |\bar v^k_{x_i}|^2.
\tag{1.92}
\]
If $v \in L^2(\Omega)$, the right-hand side of (1.83) is bounded by $c(\|f\|^2_{L^2(Q)} + \|v\|^2_{L^2(\Omega)})$. On the other hand, if $v \in H_0^1(\Omega)$, the right-hand side of (1.84) is bounded by $c(\|f\|^2_{L^2(Q)} + \|v\|^2_{H^1(\Omega)})$.
Consequently, it follows from Lemmas 1.4.7, 1.4.2 and 1.4.4 that the multi-linear interpolation of $\bar u^k$ converges to the solution of the problem (1.46) as the grid size $h$ tends to zero.

To emphasize the dependence of the solution of the discretized problem on the grid size, we denote the corresponding multi-linear interpolation by $\hat u^h$.

Theorem 1.4.8. 1) If $v \in L^2(\Omega)$, then the multi-linear interpolation (1.62) $\hat u^h$ of the solution of the difference-differential problem (1.73) in $Q_T$ weakly converges in $L^2(Q)$ to the solution $u \in H^{1,0}(Q)$ of the original problem (1.46), and its derivatives with respect to $x_i$, $i = 1, \dots, n$, weakly converge in $L^2(Q)$ to $u_{x_i}$.

2) If $a_i, b \in C^1([0,T], L^\infty(\Omega))$ with $|\partial a_i/\partial t|, |\partial b/\partial t| \le \mu_2 < \infty$, and $v \in H_0^1(\Omega)$, then $\hat u^h$ converges strongly in $L^2(Q)$ to $u$. Furthermore, $\hat u^h|_S$ converges to $u|_S$ in $L^2(S)$.

The proof of this theorem is similar to that of Theorem 1.4.11 below; therefore we omit it.
We now prove the boundedness of the finite difference approximation to the solution of the Neumann problem.

a) If $v \in L^2(\Omega)$, then there exists a constant $c$ independent of $h$ and the coefficients of the equation (1.7) such that
\[
\begin{aligned}
\max_{t\in[0,T]} \Delta h \sum_{k\in\bar\Omega_h} |\bar u^k(t)|^2 &+ \Delta h \int_0^T \sum_{i=1}^n \sum_{k\in\Omega_h^i} |\bar u^k_{x_i}|^2\,dt\\
&\le c \Big( \Delta h \sum_{k\in\bar\Omega_h} |\bar v^k|^2 + \sum_{i=1}^n \frac{\Delta h}{h_i}\sum_{k\in\Pi^0} |\bar v^k|^2
+ \Delta h \int_0^T \sum_{k\in\bar\Omega_h} |\bar f^k|^2\,dt + \sum_{k\in\Pi^0} \int_0^T \sum_{i=1}^n \frac{\Delta h}{h_i} |\bar f^k|^2\,dt\\
&\qquad + \int_0^T \sum_{i=1}^n \frac{\Delta h}{h_i} \Big( \sum_{k\in\Pi_h^{il}} |\bar g^k|^2 + \sum_{k\in\Pi_h^{ir}} |\bar g^k|^2 \Big)\,dt
+ \sum_{k\in\Pi^0} \int_0^T \frac1n \sum_{i=1}^n \frac{\Delta h}{h_i} |\bar g^k|^2\,dt \Big).
\end{aligned}
\tag{1.93}
\]
b) If $a_i, b \in C^1([0,T], L^\infty(\Omega))$ with $|\partial a_i/\partial t|, |\partial b/\partial t| \le \mu_2 < \infty$ and if $g \in H^{0,1}(S)$, then
\[
\begin{aligned}
\max_{t\in[0,T]} \Delta h \sum_{k\in\bar\Omega_h} |\bar u^k(t)|^2 &+ \Delta h \int_0^T \Big( \sum_{k\in\bar\Omega_h} |\bar u^k_t(t)|^2 + \sum_{i=1}^n \sum_{k\in\Omega_h^i} |\bar u^k_{x_i}|^2 \Big)\,dt\\
&\le c \Big( \Delta h \sum_{k\in\bar\Omega_h} |\bar v^k|^2 + \sum_{i=1}^n \frac{\Delta h}{h_i}\sum_{k\in\Pi^0} |\bar v^k|^2 + \Delta h \sum_{i=1}^n \sum_{k\in\Omega_h^i} |\bar v^k_{x_i}|^2\\
&\qquad + \Delta h \int_0^T \sum_{k\in\bar\Omega_h} |\bar f^k|^2\,dt + \sum_{k\in\Pi^0} \int_0^T \sum_{i=1}^n \frac{\Delta h}{h_i} |\bar f^k|^2\,dt\\
&\qquad + \int_0^T \sum_{i=1}^n \frac{\Delta h}{h_i} \Big( \sum_{k\in\Pi_h^{il}} |\bar g^k|^2 + \sum_{k\in\Pi_h^{ir}} |\bar g^k|^2 \Big)\,dt
+ \sum_{k\in\Pi^0} \int_0^T \frac1n \sum_{i=1}^n \frac{\Delta h}{h_i} |\bar g^k|^2\,dt \Big).
\end{aligned}
\tag{1.94}
\]
Proof. The proof of this lemma is similar to that of Lemma 1.4.7 but differs in the treatment of the boundary terms. Since
\[
\int_0^{t^*} \sum_{k\in\Omega_h} \bar u^k_t(t)\bar u^k(t)\,dt
= \frac12 \sum_{k\in\bar\Omega_h} |\bar u^k(t^*)|^2 - \frac12 \sum_{k\in\bar\Omega_h} |\bar u^k(0)|^2
= \frac12 \sum_{k\in\bar\Omega_h} |\bar u^k(t^*)|^2 - \frac12 \sum_{k\in\bar\Omega_h} |\bar v^k|^2,
\]
from (1.77) we have
\[
\begin{aligned}
\frac12 \Delta h \sum_{k\in\bar\Omega_h} |\bar u^k(t^*)|^2
&+ \Delta h \int_0^{t^*} \Big[ \sum_{k\in\bar\Omega_h} \bar b^k |u^k|^2 + \sum_{i=1}^n \sum_{k\in\Omega_h^i} \bar a_i^k |\bar u^k_{x_i}|^2 \Big]\,dt\\
&= \Delta h \int_0^{t^*} \sum_{k\in\bar\Omega_h} \bar f^k \bar u^k\,dt + \frac12 \Delta h \sum_{k\in\bar\Omega_h} |\bar v^k|^2\\
&\quad + \int_0^{t^*} \sum_{i=1}^n \frac{\Delta h}{h_i} \Big( \sum_{k\in\Pi_h^{il}\setminus\Pi^0} \bar g^k \bar u^k + \sum_{k\in\Pi_h^{ir}\setminus\Pi^0} \bar g^k \bar u^k \Big)\,dt
+ \Delta h \int_0^{t^*} \sum_{k\in\Pi^0} \bar g^k \bar u^k\,dt.
\end{aligned}
\tag{1.95}
\]
Using the $\epsilon$-Cauchy inequality for the terms on the right-hand side, and taking into account that $\bar b^k \ge 0$ and the condition (1.3), we obtain
\[
\begin{aligned}
\frac12 \Delta h \sum_{k\in\bar\Omega_h} |\bar u^k(t^*)|^2
&+ \lambda \Delta h \int_0^{t^*} \sum_{i=1}^n \sum_{k\in\Omega_h^i} |\bar u^k_{x_i}|^2\,dt\\
&\le \frac{1}{4\epsilon_1}\Delta h \int_0^{t^*} \sum_{k\in\bar\Omega_h} |\bar f^k|^2\,dt + \epsilon_1 \Delta h \int_0^{t^*} \sum_{k\in\bar\Omega_h} |\bar u^k|^2\,dt + \Delta h \sum_{k\in\bar\Omega_h} |\bar v^k|^2\\
&\quad + \frac{1}{4\epsilon_2}\int_0^{t^*} \sum_{i=1}^n \frac{\Delta h}{h_i} \Big( \sum_{k\in\Pi_h^{il}} |\bar g^k|^2 + \sum_{k\in\Pi_h^{ir}} |\bar g^k|^2 \Big)\,dt\\
&\quad + \epsilon_2 \int_0^{t^*} \sum_{i=1}^n \frac{\Delta h}{h_i} \Big( \sum_{k\in\Pi_h^{il}\setminus\Pi^0} |\bar u^k|^2 + \sum_{k\in\Pi_h^{ir}\setminus\Pi^0} |\bar u^k|^2 \Big)\,dt\\
&\quad + \epsilon_2\,\Delta h\,\frac1n \sum_{i=1}^n \frac{1}{h_i} \int_0^{t^*} \sum_{k\in\Pi^0} |\bar u^k|^2\,dt
+ \frac{1}{4\epsilon_2}\,\frac1n \sum_{i=1}^n \frac{\Delta h}{h_i} \int_0^{t^*} \sum_{k\in\Pi^0} |\bar g^k|^2\,dt.
\end{aligned}
\tag{1.96}
\]
We estimate the last sum in the above inequality. In the continuous case, we can evaluate the $L^2$-norm of the trace of a function $\varphi \in H^1(\Omega)$ on the boundary $\partial\Omega$ by its $L^2$-norm (see [40, p. 28–31]). However, a similar inequality is not valid for a grid function in $\Omega_h$ due to its corners. We estimate the sum
\[
\sum_{i=1}^n \frac{\Delta h}{h_i} \Big( \sum_{k\in\Pi_h^{il}\setminus\Pi^0} |u^k|^2 + \sum_{k\in\Pi_h^{ir}\setminus\Pi^0} |u^k|^2 \Big).
\]
Set
\[
L_1\Omega_h := \{ k' = (k_2, \dots, k_n) : 1 \le k_i \le N_i - 1, \ \forall i = 2, \dots, n \}.
\]
Consider the sum
\[
\sum_{k_1=1}^{N_1-1} \sum_{k'\in L_1\Omega_h} h_2 \cdots h_n\,|\bar u^{(0,k')}|^2.
\]
Since
\[
\bar u^{(0,k')} = -\bar u^{(k_1,k')} + h_1 \sum_{j=0}^{k_1} \bar u^{(j,k')}_{x_1}, \quad 1 \le k_1 \le N_1 - 1,
\]
we obtain
\[
\begin{aligned}
\sum_{k_1=1}^{N_1-1} \sum_{k'\in L_1\Omega_h} \Delta h\,|\bar u^{(0,k')}|^2
&= \sum_{k_1=1}^{N_1-1} \sum_{k'\in L_1\Omega_h} \Delta h\,\Big| -\bar u^{(k_1,k')} + h_1 \sum_{j=0}^{k_1} \bar u^{(j,k')}_{x_1} \Big|^2\\
&\le 2\Delta h \sum_{k\in\bar\Omega_h} |\bar u^k|^2 + 2h_1^2 N_1(N_1-1)\,\Delta h \sum_{k_1=1}^{N_1-1} \sum_{k'\in L_1\Omega_h} |\bar u^{(k_1,k')}_{x_1}|^2\\
&\le 2\Delta h \sum_{k\in\bar\Omega_h} |\bar u^k|^2 + 2L_1^2\,\Delta h \sum_{k_1=1}^{N_1-1} \sum_{k'\in L_1\Omega_h} |\bar u^{(k_1,k')}_{x_1}|^2.
\end{aligned}
\]
Hence we obtain an estimate of the form
\[
\frac{\Delta h}{h_1} \sum_{k'\in L_1\Omega_h} |\bar u^{(0,k')}|^2
\le \frac{1}{L_1}\,\Delta h \sum_{k\in\bar\Omega_h} |\bar u^k|^2 + 2L_1\,\Delta h \sum_{k_1=1}^{N_1-1} \sum_{k'\in L_1\Omega_h} |\bar u^{(k_1,k')}_{x_1}|^2.
\tag{1.98}
\]
We now estimate $\bar u^0(t)$. The values of $\bar u(t)$ at the other corner points are estimated similarly. From the equations (1.80) and (1.81) we have
\[
\frac{d\bar u^0}{dt} + \bar b^0 \bar u^0 = \bar f^0 + \bar g^0.
\tag{1.99}
\]
Furthermore, we have the initial condition $\bar u^0(0) = \bar v^0$. Since $0 \le b \le \mu_1$, applying Gronwall's inequality we obtain
\[
|\bar u^0(t)| \le c \Big( |\bar v^0| + \int_0^t |\bar f^0(\tau)|\,d\tau + \int_0^t |\bar g^0(\tau)|\,d\tau \Big).
\tag{1.100}
\]
Thus, from (1.96), (1.98) and (1.100) we obtain
\[
\begin{aligned}
\frac12 \Delta h \sum_{k\in\bar\Omega_h} |\bar u^k(t^*)|^2
&+ \lambda \Delta h \int_0^{t^*} \sum_{i=1}^n \sum_{k\in\Omega_h^i} |\bar u^k_{x_i}|^2\,dt\\
&\le \frac{1}{4\epsilon_1}\Delta h \int_0^{t^*} \sum_{k\in\bar\Omega_h} |\bar f^k|^2\,dt + \epsilon_1 \Delta h \int_0^{t^*} \sum_{k\in\bar\Omega_h} |\bar u^k|^2\,dt + \Delta h \sum_{k\in\bar\Omega_h} |\bar v^k|^2\\
&\quad + \frac{1}{4\epsilon_2}\int_0^{t^*} \sum_{i=1}^n \frac{\Delta h}{h_i} \Big( \sum_{k\in\Pi_h^{il}} |\bar g^k|^2 + \sum_{k\in\Pi_h^{ir}} |\bar g^k|^2 \Big)\,dt\\
&\quad + \epsilon_2\,\frac{n}{\min_{i\in\{1,\dots,n\}} L_i}\,\Delta h \int_0^{t^*} \sum_{k\in\bar\Omega_h} |\bar u^k(t)|^2\,dt
+ 2n\epsilon_2 \max_{i\in\{1,\dots,n\}} L_i\,\Delta h \int_0^{t^*} \sum_{i=1}^n \sum_{k\in\Omega_h^i} |\bar u^k_{x_i}|^2\,dt\\
&\quad + \frac{1}{4\epsilon_2}\,\frac1n \sum_{i=1}^n \frac{\Delta h}{h_i} \int_0^{t^*} \sum_{k\in\Pi^0} |\bar g^k|^2\,dt\\
&\quad + 2\epsilon_2 c \int_0^{t^*} \frac1n \sum_{i=1}^n \frac{\Delta h}{h_i} \sum_{k\in\Pi^0} \big( |\bar f^k(t)|^2 + |\bar g^k(t)|^2 \big)\,dt
+ c \sum_{i=1}^n \frac{\Delta h}{h_i} \sum_{k\in\Pi^0} |\bar v^k|^2.
\end{aligned}
\tag{1.101}
\]
Choosing $\epsilon_2$ such that
\[
2n\epsilon_2 \max_{i\in\{1,\dots,n\}} L_i = \lambda,
\]
we can eliminate the term containing the derivatives with respect to $x_i$ on both sides of the above inequality. Then choosing $\epsilon_1 = 1/2$ and applying Gronwall's inequality with respect to $\Delta h \sum_{k\in\bar\Omega_h} |\bar u^k(t^*)|^2$, we obtain the estimate
\[
\begin{aligned}
\max_{t\in[0,T]} \Delta h \sum_{k\in\bar\Omega_h} |\bar u^k(t)|^2
&\le c \Big( \Delta h \sum_{k\in\bar\Omega_h} |\bar v^k|^2 + \sum_{i=1}^n \frac{\Delta h}{h_i} \sum_{k\in\Pi^0} |\bar v^k|^2\\
&\qquad + \Delta h \int_0^T \sum_{k\in\bar\Omega_h} |\bar f^k|^2\,dt + \sum_{k\in\Pi^0} \int_0^T \sum_{i=1}^n \frac{\Delta h}{h_i} |\bar f^k|^2\,dt\\
&\qquad + \int_0^T \sum_{i=1}^n \frac{\Delta h}{h_i} \Big( \sum_{k\in\Pi_h^{il}} |\bar g^k|^2 + \sum_{k\in\Pi_h^{ir}} |\bar g^k|^2 \Big)\,dt
+ \sum_{k\in\Pi^0} \int_0^T \frac1n \sum_{i=1}^n \frac{\Delta h}{h_i} |\bar g^k|^2\,dt \Big).
\end{aligned}
\tag{1.102}
\]
In the inequality (1.101), choosing $\epsilon_2$ such that $2n\epsilon_2 \max_{i\in\{1,\dots,n\}} L_i = \lambda/2$ and then applying the inequality (1.102), we obtain the following estimate for the derivatives of $\bar u$:
\[
\begin{aligned}
\Delta h \int_0^T \sum_{i=1}^n \sum_{k\in\Omega_h^i} |\bar u^k_{x_i}|^2\,dt
&\le c \Big( \Delta h \sum_{k\in\bar\Omega_h} |\bar v^k|^2 + \sum_{i=1}^n \frac{\Delta h}{h_i} \sum_{k\in\Pi^0} |\bar v^k|^2\\
&\qquad + \Delta h \int_0^T \sum_{k\in\bar\Omega_h} |\bar f^k|^2\,dt + \sum_{k\in\Pi^0} \int_0^T \sum_{i=1}^n \frac{\Delta h}{h_i} |\bar f^k|^2\,dt\\
&\qquad + \int_0^T \sum_{i=1}^n \frac{\Delta h}{h_i} \Big( \sum_{k\in\Pi_h^{il}} |\bar g^k|^2 + \sum_{k\in\Pi_h^{ir}} |\bar g^k|^2 \Big)\,dt
+ \sum_{k\in\Pi^0} \int_0^T \frac1n \sum_{i=1}^n \frac{\Delta h}{h_i} |\bar g^k|^2\,dt \Big).
\end{aligned}
\tag{1.103}
\]
Combining (1.102) and (1.103), we obtain the estimate (1.93) as claimed in the theorem.

b) Next, to estimate $\Delta h \int_0^T \sum_{k\in\bar\Omega_h} |\bar u^k_t(t)|^2\,dt$, we replace $\bar\eta^k$ in (1.73) by $\bar u^k_t$ to obtain
\[
\begin{aligned}
\int_0^T \Delta h \Big[ \sum_{k\in\bar\Omega_h} \big( |\bar u^k_t(t)|^2 + \bar b^k \bar u^k \bar u^k_t \big) &+ \sum_{i=1}^n \sum_{k\in\Omega_h^i} \bar a^k_{i+} \bar u^k_{x_i}\bar u^k_{x_i t} \Big]\,dt
= \Delta h \int_0^T \sum_{k\in\bar\Omega_h} \bar f^k \bar u^k_t\,dt\\
&\quad + \int_0^T \sum_{i=1}^n \frac{\Delta h}{h_i} \Big( \sum_{k\in\Pi_h^{il}\setminus\Pi^0} \bar g^k \bar u^k_t + \sum_{k\in\Pi_h^{ir}\setminus\Pi^0} \bar g^k \bar u^k_t \Big)\,dt
+ \Delta h \int_0^T \sum_{k\in\Pi^0} \bar g^k \bar u^k_t\,dt.
\end{aligned}
\tag{1.104}
\]
Multiplying both sides of (1.104) by 2 and applying integration by parts in time to the second and third terms of the right-hand side and to the terms of the left-hand side except for the first one, we obtain
\[
\begin{aligned}
2\Delta h &\int_0^T \sum_{k\in\bar\Omega_h} |\bar u^k_t(t)|^2\,dt
+ \Delta h \sum_{k\in\bar\Omega_h} \bar b^k \big[ |\bar u^k(T)|^2 - |\bar u^k(0)|^2 \big] - \Delta h \sum_{k\in\bar\Omega_h} \int_0^T \frac{d\bar b^k(t)}{dt}|\bar u^k|^2(t)\,dt\\
&+ \Delta h \sum_{i=1}^n \sum_{k\in\Omega_h^i} \bar a^k_{i+} \big[ |\bar u^k_{x_i}(T)|^2 - |\bar u^k_{x_i}(0)|^2 \big] - \Delta h \sum_{i=1}^n \sum_{k\in\Omega_h^i} \int_0^T \frac{d\bar a^k_{i+}(t)}{dt}|\bar u^k_{x_i}|^2\,dt\\
&= 2\Delta h \int_0^T \sum_{k\in\bar\Omega_h} \bar f^k \bar u^k_t\,dt
- \int_0^T \sum_{i=1}^n \frac{\Delta h}{h_i} \Big( \sum_{k\in\Pi_h^{il}\setminus\Pi^0} \bar g^k_t \bar u^k + \sum_{k\in\Pi_h^{ir}\setminus\Pi^0} \bar g^k_t \bar u^k \Big)\,dt
- \Delta h \int_0^T \sum_{k\in\Pi^0} \bar g^k_t \bar u^k\,dt\\
&\quad + \sum_{i=1}^n \frac{\Delta h}{h_i} \Big( \sum_{k\in\Pi_h^{il}\setminus\Pi^0} \bar g^k(T)\bar u^k(T) + \sum_{k\in\Pi_h^{ir}\setminus\Pi^0} \bar g^k(T)\bar u^k(T) \Big)
+ \Delta h \sum_{k\in\Pi^0} \bar g^k(T)\bar u^k(T)\\
&\quad - \sum_{i=1}^n \frac{\Delta h}{h_i} \Big( \sum_{k\in\Pi_h^{il}\setminus\Pi^0} \bar g^k(0)\bar u^k(0) + \sum_{k\in\Pi_h^{ir}\setminus\Pi^0} \bar g^k(0)\bar u^k(0) \Big)
- \Delta h \sum_{k\in\Pi^0} \bar g^k(0)\bar u^k(0).
\end{aligned}
\]
RT
Proof. Indeed, we havew(T ) = w(t) + t wt (τ )dτ. Hen
e
Z T Z T
2 2 2 2 2
|w(T )| ≤ 2 |w(t)| + ( |wt (τ )|dτ ) ≤ 2 |w(t)| + T |wt (t)|2 dt .
t 0
Taking the integral of the both sides with respe t to t from 0 to T, then dividing the
\[
\begin{aligned}
(II) &\le \frac12 \int_0^T \sum_{i=1}^n \frac{\Delta h}{h_i} \Big( \sum_{k\in\Pi_h^{il}\setminus\Pi^0} + \sum_{k\in\Pi_h^{ir}\setminus\Pi^0} \Big) |\bar g^k_t|^2\,dt + \frac12 \Delta h \int_0^T \sum_{k\in\Pi^0} |\bar g^k_t|^2\,dt\\
&\quad + \frac12 \int_0^T \sum_{i=1}^n \frac{\Delta h}{h_i} \Big( \sum_{k\in\Pi_h^{il}\setminus\Pi^0} + \sum_{k\in\Pi_h^{ir}\setminus\Pi^0} \Big) |\bar u^k|^2\,dt + \frac12 \Delta h \int_0^T \sum_{k\in\Pi^0} |\bar u^k|^2\,dt\\
&\le \frac12 \int_0^T \sum_{i=1}^n \frac{\Delta h}{h_i} \Big( \sum_{k\in\Pi_h^{il}\setminus\Pi^0} + \sum_{k\in\Pi_h^{ir}\setminus\Pi^0} \Big) |\bar g^k_t|^2\,dt + \frac12 \Delta h \int_0^T \sum_{k\in\Pi^0} |\bar g^k_t|^2\,dt\\
&\quad + \frac{c}{2}\,\Delta h \int_0^T \Big[ \sum_{k\in\bar\Omega_h} |\bar u^k|^2 + \sum_{i=1}^n \sum_{k\in\bar\Omega_h^+} |\bar u^k_{x_i}|^2 \Big]\,dt + \frac12 \Delta h \int_0^T \sum_{k\in\Pi^0} |\bar u^k|^2\,dt.
\end{aligned}
\tag{1.108}
\]
Applying Cauchy's inequality, Lemma 1.4.10 and the inequalities (1.98), (1.100), we obtain
\[
\begin{aligned}
(III) &\le c_2 \int_0^T \Big[ \sum_{i=1}^n \frac{\Delta h}{h_i} \Big( \sum_{k\in\Pi_h^{il}\setminus\Pi^0} + \sum_{k\in\Pi_h^{ir}\setminus\Pi^0} \Big) \big( |\bar g^k|^2 + |\bar g^k_t|^2 \big) + \Delta h \sum_{k\in\Pi^0} \big( |\bar g^k|^2 + |\bar g^k_t|^2 \big) \Big]\,dt\\
&\quad + c_3 \Big( \Delta h \sum_{k\in\bar\Omega_h} |\bar v^k|^2 + \sum_{i=1}^n \frac{\Delta h}{h_i} \sum_{k\in\Pi^0} |\bar v^k|^2
+ \Delta h \int_0^T \sum_{k\in\bar\Omega_h} |\bar f^k|^2\,dt + \sum_{k\in\Pi^0} \int_0^T \sum_{i=1}^n \frac{\Delta h}{h_i} |\bar f^k|^2\,dt\\
&\qquad + \int_0^T \sum_{i=1}^n \frac{\Delta h}{h_i} \Big( \sum_{k\in\Pi_h^{il}} |\bar g^k|^2 + \sum_{k\in\Pi_h^{ir}} |\bar g^k|^2 \Big)\,dt
+ \sum_{k\in\Pi^0} \int_0^T \frac1n \sum_{i=1}^n \frac{\Delta h}{h_i} |\bar g^k|^2\,dt \Big).
\end{aligned}
\tag{1.109}
\]
Further, using Cauchy's inequality, Lemma 1.4.10 and (1.102), we can estimate (IV) as follows:
\[
\begin{aligned}
(IV) &\le \frac12 \sum_{i=1}^n \frac{\Delta h}{h_i} \Big( \sum_{k\in\Pi_h^{il}\setminus\Pi^0} + \sum_{k\in\Pi_h^{ir}\setminus\Pi^0} \Big) |\bar g^k(0)|^2 + \frac12 \Delta h \sum_{k\in\Pi^0} |\bar g^k(0)|^2\\
&\quad + \frac12 \sum_{i=1}^n \frac{\Delta h}{h_i} \Big( \sum_{k\in\Pi_h^{il}\setminus\Pi^0} + \sum_{k\in\Pi_h^{ir}\setminus\Pi^0} \Big) |\bar u^k(0)|^2 + \frac12 \Delta h \sum_{k\in\Pi^0} |\bar u^k(0)|^2\\
&\le c_4 \int_0^T \sum_{i=1}^n \frac{\Delta h}{h_i} \Big( \sum_{k\in\Pi_h^{il}\setminus\Pi^0} + \sum_{k\in\Pi_h^{ir}\setminus\Pi^0} \Big) \big( |\bar g^k(t)|^2 + |\bar g^k_t(t)|^2 \big)\,dt\\
&\quad + \frac12 \Delta h \int_0^T \sum_{k\in\Pi^0} \big( |\bar g^k(t)|^2 + |\bar g^k_t(t)|^2 \big)\,dt
+ c_5 \Big( \Delta h \sum_{k\in\bar\Omega_h} |\bar u^k(0)|^2 + \sum_{i=1}^n \sum_{k\in\bar\Omega_h^+} |\bar u^k_{x_i}(0)|^2 \Big).
\end{aligned}
\tag{1.110}
\]
Since the coefficient $b$ is nonnegative, the coefficients $a_i$ satisfy the conditions (1.1)–(1.3) and, furthermore, these coefficients satisfy the second conditions of the lemma, the estimate (1.94) follows from the inequalities above.

From Lemma 1.4.9, we have the following results, which assert the convergence of the finite difference scheme.
Theorem 1.4.11. 1) If $v \in L^2(\Omega)$, then the multi-linear interpolation (1.62) $\hat u^h$ of the solution of the difference-differential problem (1.79) in $Q_T$ weakly converges in $L^2(Q)$ to the solution $u \in H^{1,0}(Q)$ of the Neumann problem (1.48), and its derivatives with respect to $x_i$, $i = 1, \dots, n$, converge weakly in $L^2(Q)$ to $u_{x_i}$.

2) If $a_i, b \in C^1([0,T], L^\infty(\Omega))$ with $|\partial a_i/\partial t|, |\partial b/\partial t| \le \mu_2 < \infty$, and $v \in H_0^1(\Omega)$, then $\hat u^h$ converges strongly in $L^2(Q)$ to $u$. Furthermore, $\hat u^h|_S$ converges to $u|_S$ in $L^2(S)$.
Proof. We follow Ladyzhenskaya [40, Chapter 6] (see also [76, 77]) to prove these convergence results. One can prove convergence rates as in [35, 70]; however, we do not pursue this in this thesis.

1) From the estimate (1.93), the right-hand side of (1.93) is bounded by $c\big( \|f\|_{L^2(Q)} + \|g\|_{L^2(S)} + \|v\|_{L^2(\Omega)} \big)$. Therefore, the sequence $\{\hat{\bar u}\}_h$ of the multi-linear interpolations is bounded in $H^{1,0}(Q)$, due to Lemma 1.4.1. Hence, there is a subsequence $\{\hat{\bar u}\}_{h_\kappa}$ which weakly converges to some function $u = u(x,t) \in H^{1,0}(Q)$, the sequence $\{\tilde{\bar u}\}_{h_\kappa}$ weakly converges in $L^2(Q)$ to $u$, and the subsequence of the derivatives $\{\tilde{\bar u}_{x_i}\}_{h_\kappa}$ weakly converges in $L^2(Q)$ to $u_{x_i}$ as $\kappa$ tends to infinity. To finish the proof, we need to show that each term in the discrete equation (1.77) converges to the corresponding one in (1.19). Indeed, since $C^{2,1}(Q)$ is dense in $H^{1,1}(Q)$, it is enough to consider the function $\eta$ in this space. The values of the grid function $\{\bar\eta\}_h$ are set to be the values of $\eta$ at the grid points. It is clear that $\{\bar\eta\}_h$ converges uniformly to $\eta$ as $h$ tends to zero. This implies that all the terms in (1.77) (the terms on the boundary in the right-hand side are considered as the boundary term) converge to the corresponding terms in (1.19). This means that $u$ is a weak solution of the Neumann problem (1.48), which is unique. Thus, every subsequence of $\{\hat{\bar u}\}_h$ converges to the same function $u$; hence the sequence $\{\hat{\bar u}\}_h$ itself converges to $u$.

2) When the conditions of the second part of the theorem are fulfilled, we have the a priori estimate (1.94). Therefore, from part one of the theorem, the sequence $\{\hat{\bar u}\}_h$ converges to the solution $u \in H^{1,0}(Q)$, which belongs to $H^{1,1}(Q)$ due to Theorem 1.1.2. On the other hand, from (1.94), the sequence $\{\hat{\bar u}\}_h$ of the multi-linear interpolations is bounded in $H^{1,1}(Q)$, due to Lemma 1.4.2. Hence, $\{\hat{\bar u}\}_h$ weakly converges to $u = u(x,t) \in H^{1,1}(Q)$.
$F^m := \bar F(t_m)$. In order to obtain the splitting difference scheme for the Cauchy problems (1.75) and (1.79), we set $u^{m+\delta} := \bar u(t_m + \delta\Delta t)$, $\Lambda_i^m := \Lambda_i(t_m + \Delta t/2)$. We introduce the following two-circle scheme:
\[
\begin{aligned}
&\frac{u^{m+\frac{1}{2n}} - u^m}{\Delta t/2} + \Lambda_1^m\,\frac{u^{m+\frac{1}{2n}} + u^m}{2} = 0,\\
&\frac{u^{m+\frac{2}{2n}} - u^{m+\frac{1}{2n}}}{\Delta t/2} + \Lambda_2^m\,\frac{u^{m+\frac{2}{2n}} + u^{m+\frac{1}{2n}}}{2} = 0,\\
&\qquad\cdots\\
&\frac{u^{m+\frac12} - u^{m+\frac{n-1}{2n}}}{\Delta t/2} + \Lambda_n^m\,\frac{u^{m+\frac12} + u^{m+\frac{n-1}{2n}}}{2} = \frac12 F^m + \frac{\Delta t}{8}\Lambda_n^m F^m,\\
&\frac{u^{m+\frac{n+1}{2n}} - u^{m+\frac12}}{\Delta t/2} + \Lambda_n^m\,\frac{u^{m+\frac{n+1}{2n}} + u^{m+\frac12}}{2} = \frac12 F^m - \frac{\Delta t}{8}\Lambda_n^m F^m,\\
&\qquad\cdots\\
&\frac{u^{m+\frac{2n-1}{2n}} - u^{m+\frac{2n-2}{2n}}}{\Delta t/2} + \Lambda_2^m\,\frac{u^{m+\frac{2n-1}{2n}} + u^{m+\frac{2n-2}{2n}}}{2} = 0,\\
&\frac{u^{m+1} - u^{m+\frac{2n-1}{2n}}}{\Delta t/2} + \Lambda_1^m\,\frac{u^{m+1} + u^{m+\frac{2n-1}{2n}}}{2} = 0,\\
&u^0 = \bar v.
\end{aligned}
\tag{1.111}
\]
Equivalently,
\[
\begin{aligned}
\Big(E_1 + \frac{\Delta t}{4}\Lambda_1^m\Big)u^{m+\frac{1}{2n}} &= \Big(E_1 - \frac{\Delta t}{4}\Lambda_1^m\Big)u^m,\\
\Big(E_2 + \frac{\Delta t}{4}\Lambda_2^m\Big)u^{m+\frac{2}{2n}} &= \Big(E_2 - \frac{\Delta t}{4}\Lambda_2^m\Big)u^{m+\frac{1}{2n}},\\
&\dots\\
\Big(E_n + \frac{\Delta t}{4}\Lambda_n^m\Big)u^{m+\frac12} - \frac{\Delta t}{2}F^m &= \Big(E_n - \frac{\Delta t}{4}\Lambda_n^m\Big)u^{m+\frac{n-1}{2n}},\\
\Big(E_n + \frac{\Delta t}{4}\Lambda_n^m\Big)u^{m+\frac{n+1}{2n}} &= \Big(E_n - \frac{\Delta t}{4}\Lambda_n^m\Big)u^{m+\frac12} + \frac{\Delta t}{2}F^m,\\
&\dots\\
\Big(E_2 + \frac{\Delta t}{4}\Lambda_2^m\Big)u^{m+\frac{2n-1}{2n}} &= \Big(E_2 - \frac{\Delta t}{4}\Lambda_2^m\Big)u^{m+\frac{2n-2}{2n}},\\
\Big(E_1 + \frac{\Delta t}{4}\Lambda_1^m\Big)u^{m+1} &= \Big(E_1 - \frac{\Delta t}{4}\Lambda_1^m\Big)u^{m+\frac{2n-1}{2n}},\\
u^0 &= \bar v.
\end{aligned}
\tag{1.112}
\]
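One step of the scheme (1.112) amounts to a forward sweep through the factors $(E_i + \frac{\Delta t}{4}\Lambda_i^m)$ and a backward sweep, with the source injected in the middle. The following Python sketch illustrates this structure (names are our own; dense solves stand in for the tridiagonal solves of the actual scheme, and the source handling is our simplified reading of the middle equations):

```python
import numpy as np

# Sketch of one step of the symmetric splitting scheme (1.112): sweep through
# the factors (E + dt/4 * Lam_i) forward and backward, adding (dt/2) F in the
# middle equations.  Lams is a list of the n one-dimensional operators.
def splitting_step(u, Lams, F, dt):
    n = len(Lams)
    E = np.eye(len(u))
    for i in range(n):                       # forward half-sweep Lam_1..Lam_n
        rhs = (E - 0.25 * dt * Lams[i]) @ u
        if i == n - 1:
            rhs += 0.5 * dt * F              # source enters with Lam_n
        u = np.linalg.solve(E + 0.25 * dt * Lams[i], rhs)
    for i in reversed(range(n)):             # backward half-sweep Lam_n..Lam_1
        rhs = (E - 0.25 * dt * Lams[i]) @ u
        if i == n - 1:
            rhs += 0.5 * dt * F
        u = np.linalg.solve(E + 0.25 * dt * Lams[i], rhs)
    return u

# Scalar check: u' + u = 1, u(0) = 0, so u(1) ~ 1 - exp(-1)
u = np.zeros(1)
for m in range(200):
    u = splitting_step(u, [np.array([[1.0]])], np.ones(1), 0.005)
print(u[0])  # ~ 0.6321
```

Each linear system in a sweep involves only one $\Lambda_i$, which is what makes the per-step systems tridiagonal and cheap to solve, as remarked earlier.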
with
\[
A^m = A_1^m \cdots A_n^m A_n^m \cdots A_1^m, \qquad
B^m = A_1^m \cdots A_n^m,
\]
where $A_i^m = \big(E_i + \frac{\Delta t}{4}\Lambda_i^m\big)^{-1}\big(E_i - \frac{\Delta t}{4}\Lambda_i^m\big)$. The stability of the splitting scheme (1.113) is given in the following theorem.
\[
\|A_i^m\| = \Big\| \Big(E_i + \frac{\Delta t}{4}\Lambda_i^m\Big)^{-1}\Big(E_i - \frac{\Delta t}{4}\Lambda_i^m\Big) \Big\| \le 1.
\]
Hence, we get

Next, we introduce the discretized Green's formula for the direct and adjoint problems.
Proof. Multiplying both sides of the first equation of (1.114) with $\eta^m$, $m = 0, 1, \dots, M-1$, and summing the results over $m$, we have
\[
\sum_{m=0}^{M-1} \langle u^{m+1}, \eta^m\rangle = \sum_{m=0}^{M-1} \langle A^m u^m, \eta^m\rangle + \sum_{m=0}^{M-1} \langle F^m, \eta^m\rangle,
\qquad \langle u^0, \eta^M\rangle = \langle C, \eta^M\rangle.
\]
Hence
\[
\sum_{m=0}^{M-1} \langle u^{m+1}, \eta^m\rangle + \langle u^0, \eta^M\rangle = \sum_{m=0}^{M-1} \langle A^m u^m, \eta^m\rangle + \sum_{m=0}^{M-1} \langle F^m, \eta^m\rangle + \langle C, \eta^M\rangle.
\tag{1.117}
\]
Similarly,
\[
\sum_{m=0}^{M-1} \langle \eta^m, u^{m+1}\rangle = \sum_{m=0}^{M-1} \langle (A^{m+1})^*\eta^{m+1}, u^{m+1}\rangle + \sum_{m=0}^{M-1} \langle K^{m+1}, u^{m+1}\rangle,
\qquad \langle \eta^M, u^0\rangle = \langle D, u^0\rangle.
\]
Hence
\[
\sum_{m=0}^{M-1} \langle \eta^m, u^{m+1}\rangle + \langle \eta^M, u^0\rangle = \sum_{m=0}^{M-1} \langle (A^{m+1})^*\eta^{m+1}, u^{m+1}\rangle + \sum_{m=0}^{M-1} \langle K^{m+1}, u^{m+1}\rangle + \langle D, u^0\rangle,
\]
or
\[
\sum_{m=0}^{M-1} \langle \eta^m, u^{m+1}\rangle + \langle \eta^M, u^0\rangle = \sum_{m=1}^{M} \langle (A^m)^*\eta^m, u^m\rangle + \sum_{m=1}^{M} \langle K^m, u^m\rangle + \langle D, u^0\rangle.
\tag{1.118}
\]
Comparing (1.117) and (1.118), we obtain
\[
\sum_{m=0}^{M-1} \langle A^m u^m, \eta^m\rangle + \sum_{m=0}^{M-1} \langle F^m, \eta^m\rangle + \langle C, \eta^M\rangle = \sum_{m=1}^{M} \langle (A^m)^*\eta^m, u^m\rangle + \sum_{m=1}^{M} \langle K^m, u^m\rangle + \langle D, u^0\rangle,
\]
that is,
\[
\langle u^0, (A^0)^*\eta^0\rangle + \sum_{m=0}^{M-1} \langle F^m, \eta^m\rangle + \langle C, \eta^M\rangle = \langle (A^M)^*\eta^M, u^M\rangle + \sum_{m=1}^{M} \langle K^m, u^m\rangle + \langle D, u^0\rangle.
\]
termining the initial condition $v$ in 1) the Dirichlet problem (0.8)–(0.10) from either the observation of its solution $u$ at the final time, or integral observations, and 2) in the Neumann problem (0.8), (0.9), (0.11) from the observation of its solution $u$ on a part of the boundary $S$. The solution $u$ to the Dirichlet and Neumann problems is understood in the weak sense.

i) The final time observation operator $Cu(v) = u(x,T;v)$: $v \in L^2(\Omega) \to Cu(v) \in H := L^2(\Omega)$;

ii) or the integral operator $Cu(v) = (l_1 u(x,t;v), l_2 u(x,t;v), \dots, l_N u(x,t;v))$, where the $l_i$ are defined by (0.22). In this case, the operator $C$ maps $v \in L^2(\Omega)$ to $Cu(v) \in H = (L^2(0,T))^N$;

iii) or the trace of the solution $u$ on a part of the boundary, $Cu(v) = u(x,t;v)|_\Gamma$. Thus, $C$ maps $v \in L^2(\Omega)$ to $Cu(v) \in H = L^2(\Gamma)$.
Our data assimilation problem is that of reconstructing the initial condition $v$ when the
\[
Cv = z - C\mathring{u}.
\tag{1.119}
\]
Set $\tilde z = z - C\mathring{u}$. To get a stable approximation to $v$, we minimize the Tikhonov functional
\[
J_\gamma(v) = \frac12 \|Cv - \tilde z\|^2_H + \frac{\gamma}{2}\|v - v^*\|^2_{L^2(\Omega)}
\tag{1.120}
\]
with respect to $v \in L^2(\Omega)$. Here, $v^*$ is an estimate of $v$. The minimizer of this optimization problem
with $\Phi_2(h) \to 0$ as $h \to 0$.

We approximate the problem (1.120) by
\[
\min_v J_\gamma^h(v) := \frac12 \|C_h v - \tilde z\|^2_H + \frac{\gamma}{2}\|v - v_h^*\|^2_{L^2(\Omega)},
\tag{1.124}
\]
which is characterized by the first-order optimality condition.

Here, $v_h^* \in L^2(\Omega)$ is an approximation to $v^*$ such that
with $\Phi_3(h) \to 0$ as $h \to 0$.
Let $\hat v_h^\gamma$ be the solution to the variational problem
\[
\|z^\delta - \tilde z\|_H \le \delta.
\tag{1.128}
\]
Theorem 1.5.1. Let $v^\gamma$ be the solution of the variational problem (1.121) and $\gamma > 0$. Then the following error estimate holds:
\[
\|v^\gamma - \hat v_h^\gamma\| \le \Phi(h) + c\,\frac{\delta}{\gamma},
\]
where $\Phi(h) \to 0$ as $h$ tends to zero.
Proof. In the proof, $c_1, c_2, \dots$ are generic positive constants, and $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$ denote the scalar product and the norm in $L^2(\Omega)$, respectively. From the equation (1.125) we have
Hence
\[
\|v_h^\gamma\| \le \frac{c_1}{\gamma}\|\tilde z\|_H + c_2.
\tag{1.129}
\]
From the equations (1.125) and (1.127) we have
From the equations (1.125) and (1.127) we have
Therefore,
γkvhγ − v̂hγ k2 =hCˆh∗ Ch v̂hγ − Ch∗ Ch vhγ − Cˆh∗ z δ + Ch∗ z̃, vhγ − v̂hγ i
=hCˆh∗ Ch v̂hγ − C ∗ Ch v̂hγ + C ∗ Ch v̂hγ − Ch∗ Ch vhγ − Cˆh∗ z δ + Ch∗ z̃, vhγ − v̂hγ i
=h(Cˆ∗ − C ∗ )Ch v̂ γ , v γ − v̂ γ i + hC ∗ Ch v̂ γ − C ∗ Ch v γ , v γ − v̂ γ i
h h h h h h h h h
Moreover, using the inequalities (1.123) and (1.129), we estimate the rst term of the right
41
hand side of the last quality as follows
h(Cˆh∗ − C ∗ )Ch v̂hγ , vhγ − v̂hγ i =h(Cˆh∗ − C ∗ )Ch (v̂hγ − vhγ ), vhγ − v̂hγ i
+h(Cˆh∗ − C ∗ )Ch v γ , v γ − v̂ γ i
h h h
hC ∗ Ch v̂hγ − Ch∗ Ch vhγ , vhγ − v̂hγ i =hCh v̂hγ , C(vhγ − v̂hγ )iH − hCh vhγ , Ch (vhγ − v̂hγ )iH
=hCh (v̂hγ − vhγ ), C(vhγ − v̂hγ )iH + hCh vhγ , (C − Ch )(vhγ − v̂hγ )iH
=h(Ch − C)(v̂hγ − vhγ ), C(vhγ − v̂hγ )iH − kC(v̂hγ − vhγ )k2H
+ hCh vhγ , (C − Ch )(vhγ − v̂hγ )iH
c
1
≤c5 Φ1 (h)kvhγ − v̂hγ k2 + c6 Φ1 (h) kz̃kH + c2 kvhγ − v̂hγ k.
γ
Further, using the inequalities (1.122), (1.123) and (1.128), we obtain
\[
\begin{aligned}
\langle C_h^* \tilde z - \hat C_h^* z^\delta,\, v_h^\gamma - \hat v_h^\gamma\rangle
&= \langle (C^* - \hat C_h^*) z^\delta,\, v_h^\gamma - \hat v_h^\gamma\rangle + \langle \tilde z - z^\delta,\, C(v_h^\gamma - \hat v_h^\gamma)\rangle_H
+ \langle \tilde z,\, (C_h - C)(v_h^\gamma - \hat v_h^\gamma)\rangle_H\\
&\le \big( c_7 \Phi_2(h) + c_8 \delta + c_9 \Phi_1(h) \big)\|v_h^\gamma - \hat v_h^\gamma\|.
\end{aligned}
\]
Hence
\[
\|v_h^\gamma - \hat v_h^\gamma\| \le \frac{1}{\gamma}\big( c_7 \Phi_2(h) + c_9 \Phi_1(h) \big) + \frac{c_8}{\gamma}\delta.
\tag{1.130}
\]
Similarly, from (1.121), (1.122), (1.123), (1.126), (1.127) and (1.129), we have
Combining this inequality with (1.130), we arrive at the statement of the theorem.
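In finite dimensions, the discretized problem (1.124) can be solved through its normal equations $(C_h^*C_h + \gamma I)v = C_h^*\tilde z + \gamma v_h^*$ by the conjugate gradient method, since the operator is symmetric positive definite. A Python sketch (a generic matrix stands in for the discretized observation operator; all names are illustrative, and this is a linear-algebra analogue rather than the thesis code):

```python
import numpy as np

# Sketch: minimize (1/2)||Ch v - z||^2 + (gamma/2)||v - v_star||^2 by solving
# the SPD normal equations (Ch'Ch + gamma I) v = Ch' z + gamma v_star with CG.
def tikhonov_cg(Ch, z, gamma, v_star, iters=200, tol=1e-12):
    A = lambda v: Ch.T @ (Ch @ v) + gamma * v      # SPD operator
    b = Ch.T @ z + gamma * v_star
    v = np.zeros_like(b)
    r = b - A(v)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A(p)
        alpha = rs / (p @ Ap)
        v += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return v

rng = np.random.default_rng(2)
Ch = rng.standard_normal((20, 10))
z = rng.standard_normal(20)
v = tikhonov_cg(Ch, z, 0.1, np.zeros(10))
# v satisfies the first-order optimality condition (Ch'Ch + gamma I) v = Ch' z
print(np.linalg.norm(Ch.T @ (Ch @ v) + 0.1 * v - Ch.T @ z))  # ~ 0
```

For $\gamma > 0$ the operator is well conditioned, which is the discrete counterpart of the stabilizing role of the regularization term in (1.120).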
1.6 Lanczos' algorithm for approximating singular values

The problem (1.119) is ill-posed (as we will see in the next chapters, the operator $C$ acting from $L^2(\Omega)$ to $H$ is compact) and its degree of ill-posedness is the asymptotic
of $C$ is available, and even if it is, it is not easy to estimate the asymptotic behaviour of
which can be calculated via an adjoint problem for any $v$. Therefore, we can apply Lanczos' algorithm [81] to estimate the eigenvalues of $C_h^* C_h$ based on its value $C_h^* C_h v$. This approach seems to be applicable to many problems and efficient, as the results of the next chapters show.
Step 1. (Initialization.) Let $\beta_0 = 0$, $q_0 = 0$ and, for an arbitrary vector $b$, calculate $q_1 = \dfrac{b}{\|b\|}$. Put $Q = q_1$ and $k = 0$.
\[
v = v - Q(Q'v), \qquad v = v - Q(Q'v),
\]
\[
q_{k+1} = \frac{v}{\beta_k}.
\]
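The steps above can be sketched as follows in Python. Only products $w = C_h^*C_h q$ are needed (in the thesis these are computed by solving the direct and adjoint problems); here a small explicit matrix stands in for $C_h$, and the function names are illustrative. The reorthogonalization $v = v - Q(Q'v)$ is applied twice, as in the algorithm above:

```python
import numpy as np

# Sketch of Lanczos' algorithm for the symmetric operator Ch'Ch, using only
# matrix-vector products apply_A(q) = Ch'Ch q.  The Ritz values (eigenvalues
# of the tridiagonal matrix T) approximate the eigenvalues of Ch'Ch, i.e.
# the squared singular values of Ch.
def lanczos_eigs(apply_A, n, k, rng):
    b = rng.standard_normal(n)
    Q = np.zeros((n, k))
    Q[:, 0] = b / np.linalg.norm(b)
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    for j in range(k):
        v = apply_A(Q[:, j])
        alpha[j] = Q[:, j] @ v
        v = v - Q[:, :j + 1] @ (Q[:, :j + 1].T @ v)   # reorthogonalize
        v = v - Q[:, :j + 1] @ (Q[:, :j + 1].T @ v)   # ... and again
        if j < k - 1:
            beta[j] = np.linalg.norm(v)
            Q[:, j + 1] = v / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)

rng = np.random.default_rng(3)
Ch = rng.standard_normal((30, 12))
A = Ch.T @ Ch
ritz = lanczos_eigs(lambda q: A @ q, 12, 12, rng)
# with k = n, the Ritz values match the eigenvalues of Ch'Ch
print(np.allclose(np.sort(ritz), np.sort(np.linalg.eigvalsh(A))))
```

In practice one takes $k$ much smaller than $n$; the extreme Ritz values then already give a good picture of the decay of the singular values, which is what determines the degree of ill-posedness here.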
The results of Section 1.4 of this chapter are written on the basis of the paper

[26] Hào D.N. and Oanh N.T.N., Determination of the initial condition in parabolic equations from boundary observations. Journal of Inverse and Ill-Posed Problems 24 (2016), no. 2, 195–220.
Chapter 2

In this chapter, we study the data assimilation problem of reconstructing the initial condition in the Dirichlet problem (1.7)–(1.9) from the observation of its solution at the final time.
It is proved that the regularized functional is Fréchet differentiable, and a formula for its gradient is derived via an adjoint problem. The variational problem is first discretized in the space variables and the convergence results of the method are proved. The problem is then fully discretized and it is proved that the discretized functional is Fréchet differentiable. A formula for its gradient is derived via a discrete adjoint problem, and the conjugate gradient method is applied to solve the discretized variational problem numerically. Some numerical examples are tested on a computer to demonstrate the efficiency of the proposed algorithm.
\[
\frac{\partial u}{\partial t} - \sum_{i,j=1}^n \frac{\partial}{\partial x_i}\Big( a_{ij}(x,t)\frac{\partial u}{\partial x_j} \Big) + b(x,t)u = f \ \text{in } Q, \qquad
u|_{t=0} = v \ \text{in } \Omega, \qquad
u = 0 \ \text{on } S.
\tag{2.1}
\]
The coefficients $a_{ij}$, $b$ and the data $f$ and $v$ satisfy the conditions (1.1)–(1.6). Under these conditions, (2.1) has a unique weak solution $u \in W(0,T; H_0^1(\Omega))$ in the sense of Definition 1.1.7.
Data assimilation: Suppose that v is not given and we have to re onstru t it from the
observation of the solution u to (2.1) at the nal time Cu := u(·, T ) = ξ(·) ∈ L2 (Ω).
44
From now on, to emphasize the dependence of the solution $u$ on the initial condition $v$, we write $u(x,t;v)$.

Remark 2.1.1. The operator $C$ mapping $v \in L^2(\Omega)$ to $u(x,T;v)$ is compact from $L^2(\Omega)$ to $L^2(\Omega)$. Hence the problem of reconstructing $v$ from $u(\cdot,T;v) = \xi(\cdot) \in L^2(\Omega)$ is ill-posed.

Proof. Since $u \in W(0,T;H_0^1(\Omega))$, we have $u(\cdot,T;v) \in H_0^1(\Omega)$. However, $H_0^1(\Omega)$ is compactly embedded in $L^2(\Omega)$, therefore the operator $C$ from $L^2(\Omega)$ to $L^2(\Omega)$ is compact.
To analyse the degree of ill-posedness of the problem we have to study the behaviour of the singular values of "the linear part" $C$ of the affine operator mapping $v \in L^2(\Omega)$ to $u(x,T;v) \in L^2(\Omega)$, which is defined as presented in the Introduction and in Section 1.5 as follows: denote by $\mathring{u}$ the solution to (2.1) with $v \equiv 0$; then
$$Cv := Cu - C\mathring{u} = u(x,T;v) - \mathring{u}(x,T)$$
is bounded and linear from $L^2(\Omega)$ to $L^2(\Omega)$. It is clear that $C$ is compact. Since there is no explicit form for $C$ and the analysis of its singular values is not trivial, we suggest a numerical scheme to approximate the singular values of $C$. Before doing so, we describe the variational method for finding the initial condition $v$ from $u(x,T;v)$ by minimizing
$$J_\gamma(v) := J_0(v) + \frac{\gamma}{2}\|v - v^*\|^2_{L^2(\Omega)} = \frac{1}{2}\|u(\cdot,T;v) - \xi\|^2_{L^2(\Omega)} + \frac{\gamma}{2}\|v - v^*\|^2_{L^2(\Omega)} \tag{2.3}$$
over $L^2(\Omega)$, with $\gamma > 0$ being the regularization parameter and $v^*$ an approximation of the sought initial condition.

Since $u(\cdot,T;v) - \xi(\cdot) \in L^2(\Omega)$, due to Theorem 1.1.1 there exists a unique solution $p \in W(0,T;H_0^1(\Omega))$ to the adjoint problem (2.4).
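In a finite-dimensional setting, where the forward map is represented by a matrix $A$, a functional of this Tikhonov form has a unique minimizer characterized by the normal equations $(A^\top A + \gamma I)v = A^\top\xi + \gamma v^*$. A small numpy sketch (the helper name is ours):

```python
import numpy as np

def tikhonov_minimizer(A, xi, gamma, v_star):
    """Minimize J(v) = 0.5*||A v - xi||^2 + 0.5*gamma*||v - v_star||^2.
    Setting the gradient A^T(Av - xi) + gamma*(v - v_star) to zero gives
    the normal equations below, well-posed since gamma > 0."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + gamma * np.eye(n),
                           A.T @ xi + gamma * v_star)
```

The point of the Tikhonov term is visible here: even when $A^\top A$ is nearly singular (as for the compact operator $C$), adding $\gamma I$ keeps the system solvable and the minimizer stable.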
Theorem 2.1.1. The functional $J_\gamma$ is Fréchet differentiable and its gradient is given by
$$\nabla J_\gamma(v) = p(x,0) + \gamma(v - v^*), \tag{2.5}$$
where $p$ is the solution to the adjoint problem (2.4).

Proof. For a small variation $\delta v$, we have
$$
\begin{aligned}
J_0(v+\delta v) - J_0(v) &= \frac12\|u(v+\delta v)-\xi\|^2_{L^2(\Omega)} - \frac12\|u(v)-\xi\|^2_{L^2(\Omega)}\\
&= \frac12\|u(v+\delta v)-u(v)+u(v)-\xi\|^2_{L^2(\Omega)} - \frac12\|u(v)-\xi\|^2_{L^2(\Omega)}\\
&= \frac12\|u(v+\delta v)-u(v)\|^2_{L^2(\Omega)} + \langle u(v+\delta v)-u(v),\, u(v)-\xi\rangle_{L^2(\Omega)}\\
&= \frac12\|\delta u(v)\|^2_{L^2(\Omega)} + \langle \delta u(v),\, u(v)-\xi\rangle_{L^2(\Omega)}\\
&= \frac12\|\delta u(v)\|^2_{L^2(\Omega)} + \int_\Omega \delta u(v)\,\big(u(v)-\xi\big)\,dx.
\end{aligned}
\tag{2.6}
$$
Applying the a priori estimate (1.15) of Theorem 1.1.1 to the solution of the problem (2.7), we obtain that there exists a constant $c_D$ such that $\|\delta u\|_{W(0,T;H^1(\Omega))} \le c_D\|\delta v\|_{L^2(\Omega)}$. Hence
$$\frac12\|\delta u(v)\|^2_{L^2(\Omega)} = o\big(\|\delta v\|_{L^2(\Omega)}\big).$$
Furthermore, using Green's formula (1.22) of Theorem 1.2.1 for (2.4) and (2.7), we obtain
$$\int_\Omega \big(u(v)-\xi\big)\,\delta u(x,T)\,dx = \int_\Omega \delta v\, p(x,0)\,dx,$$
where $p(x,t)$ is the solution to the adjoint problem (2.4). This yields the formula (2.5) for the gradient.

We note that
$$J_0(v) = \frac12\|u(\cdot,T;v)-\xi\|^2_{L^2(\Omega)} = \frac12\big\|Cv - \big(\xi - \mathring{u}(\cdot,T)\big)\big\|^2_{L^2(\Omega)}.$$
Hence
$$\nabla J_0(v) = C^*\big(Cv - (\xi - \mathring{u}(\cdot,T))\big).$$
Thus, if we take $\xi = \mathring{u}(\cdot,T)$, then
$$\nabla J_0(v) = C^*Cv,$$
where $C^*$ is the adjoint operator of $C$. On the other hand, from the above theorem, $C^*Cv = p(x,0;v)$ in this case. Thus, for any $v \in L^2(\Omega)$ we can always evaluate $C^*Cv$ via a direct problem for calculating $\bar u(v)$ and an adjoint problem for obtaining $p(x,0;v)$. When we discretize the problem, $C$ has the form of a matrix, therefore we can apply Lanczos' algorithm of Section 1.6 to approximate the eigenvalues of $C_h^*C_h$.
We assume that the equation contains no mixed derivatives, that is $a_{ij} = 0$ if $i \ne j$, and we denote $a_{ii}$ by $a_i$ as in Section 1.4. We discretize this problem by the finite difference method in space variables and get the system of ordinary differential equations (1.75). Denote $\tilde\xi = \xi - \mathring u(x,T)$. Then
$$J_0(v) = \frac12\big\|u(x,T;v) - \mathring u(x,T) - \big(\xi - \mathring u(x,T)\big)\big\|_{L^2(\Omega)}^2 = \frac12\|Cv - \tilde\xi\|^2_{L^2(\Omega)}.$$
We approximate the functional $J_\gamma(v)$ by
$$J_\gamma^h(\bar v) = \frac{h}{2}\sum_{k\in\Omega_h}\big|\bar u^k(T;\bar v) - \bar\xi^k\big|^2 + \frac{\gamma h}{2}\sum_{k\in\Omega_h}\big|\bar v^k - \bar v^{*k}\big|^2 \tag{2.9}$$
and minimize this functional over all grid functions $\bar v$. Set $C_h\bar v = \hat{\bar u}(x,T;\bar v)$, where $\hat{\bar u}$ is the piecewise linear interpolation of $\bar u$ defined in Subsection 1.4.1. Then, if condition 2) of Theorem 1.4.8 is satisfied, we see that all conditions in Subsection 1.5 for Theorem 1.5.1 are satisfied. As we do not allow noise in the data $\xi$ yet, we have the following result on the convergence of the solution of the discretized optimization problem (2.9).
Theorem 2.2.1. Let condition 2) of Theorem 1.4.8 be satisfied. Then the linear interpolation of the solution $\bar v^\gamma$ of the problem (2.9) converges in $L^2(\Omega)$ to the solution $v^\gamma$ of the problem (2.3) as $h$ tends to zero.
We use the splitting method in Subsection 1.4.3 to get the scheme (1.111) or (1.112). Now we discretize the objective functional $J_0(v)$ as follows:
$$J_0^h(\bar v) := \frac12\sum_{k\in\Omega_h}\big[u^{k,M}(\bar v) - \xi^k\big]^2. \tag{2.10}$$
Here we use the notation $u^{k,M}(\bar v)$ to indicate the dependence of the solution on the initial condition $\bar v$, and $M$ represents the index of the final time. We drop the multiplier $h$ as it does not play any role here. Furthermore, we use the same notation $J_0^h$ as in the previous section.
For this purpose, we need to calculate the gradient of the objective function $J_0^h$. In this subsection, we use the following inner product of two grid functions $u := \{u^k, k\in\Omega_h\}$ and $v := \{v^k, k\in\Omega_h\}$:
$$(u,v) := \sum_{k\in\Omega_h} u^k v^k. \tag{2.11}$$
The following theorem gives a formula for the gradient of the objective functional (2.10).

Theorem 2.3.1. The gradient $\nabla J_0^h(\bar v)$ of the objective functional $J_0^h$ at $\bar v$ is given by
$$\nabla J_0^h(\bar v) = (A^0)^*\eta^0, \tag{2.12}$$
where $\eta$ satisfies the adjoint problem (2.18).

Proof. We have
$$
\begin{aligned}
J_0^h(\bar v + \delta\bar v) - J_0^h(\bar v) &= \frac12\sum_{k\in\Omega_h}\big[u^{k,M}(\bar v+\delta\bar v) - \xi^k\big]^2 - \frac12\sum_{k\in\Omega_h}\big[u^{k,M}(\bar v) - \xi^k\big]^2\\
&= \frac12\sum_{k\in\Omega_h}\big(w^{k,M}\big)^2 + (w^M, \psi),
\end{aligned}
\tag{2.15}
$$
where $w^{k,m} := u^{k,m}(\bar v+\delta\bar v) - u^{k,m}(\bar v)$, $k\in\Omega_h$, $m = 0,\dots,M$, $w^m := \{w^{k,m}, k\in\Omega_h\}$ and $\psi = u^M(\bar v) - \xi$. It follows from (1.111) that $w$ is the solution to the problem
$$w^{m+1} = A^m w^m, \quad m = 0,\dots,M-1, \qquad w^0 = \delta\bar v. \tag{2.16}$$
Taking the inner product of both sides of the $m$th equation of (2.16) with an arbitrary vector $\eta^m \in \mathbb{R}^{N_1\times\dots\times N_n}$, then summing the results over $m = 0,\dots,M-1$, we obtain
$$\sum_{m=0}^{M-1}(w^{m+1},\eta^m) = \sum_{m=0}^{M-1}(A^m w^m, \eta^m) = \sum_{m=0}^{M-1}\big(w^m, (A^m)^*\eta^m\big). \tag{2.17}$$
Here $(A^m)^*$ is the adjoint matrix of $A^m$. Consider the adjoint problem
$$\eta^m = (A^{m+1})^*\eta^{m+1}, \quad m = M-2, M-3,\dots,0, \qquad \eta^{M-1} = \psi. \tag{2.18}$$
Taking the inner product of both sides of the first equation of (2.18) with the vector $w^{m+1}$ and summing over $m = 0,\dots,M-2$, we obtain
$$\sum_{m=0}^{M-2}(w^{m+1},\eta^m) = \sum_{m=0}^{M-2}\big(w^{m+1},(A^{m+1})^*\eta^{m+1}\big). \tag{2.19}$$
Taking the inner product of both sides of the second equation of (2.18) with the vector $w^M$, we have
$$\sum_{m=0}^{M-2}(w^{m+1},\eta^m) + (w^M,\eta^{M-1}) = \sum_{m=0}^{M-2}\big(w^{m+1},(A^{m+1})^*\eta^{m+1}\big) + (w^M,\psi),$$
or, equivalently,
$$\sum_{m=0}^{M-1}(w^{m+1},\eta^m) = \sum_{m=1}^{M-1}\big(w^m,(A^m)^*\eta^m\big) + (w^M,\psi). \tag{2.21}$$
From (2.17) and (2.21) we obtain
$$(w^M,\psi) = \big(w^0,(A^0)^*\eta^0\big) = \big(\delta\bar v,(A^0)^*\eta^0\big). \tag{2.22}$$
On the other hand, it can be proved by induction that $\sum_{k\in\Omega_h}\big(w^{k,M}\big)^2 = o(\|\delta\bar v\|)$. Hence, it follows from (2.15) and (2.22) that
$$J_0^h(\bar v+\delta\bar v) - J_0^h(\bar v) = \big(\delta\bar v,(A^0)^*\eta^0\big) + o(\|\delta\bar v\|). \tag{2.23}$$
Consequently, the gradient of the objective functional $J_0^h$ can be written as
$$\frac{\partial J_0^h(\bar v)}{\partial\bar v} = (A^0)^*\eta^0. \tag{2.24}$$
Note that, since the matrices $\Lambda_i^m$, $i = 1,\dots,n$, are symmetric, we have for $m = 0,\dots,M-1$
$$
\begin{aligned}
(A_i^m)^* &= \Big[\big(E_i + \tfrac{\Delta t}{4}\Lambda_i^m\big)^{-1}\big(E_i - \tfrac{\Delta t}{4}\Lambda_i^m\big)\Big]^*\\
&= \big(E_i - \tfrac{\Delta t}{4}\Lambda_i^m\big)^*\Big[\big(E_i + \tfrac{\Delta t}{4}\Lambda_i^m\big)^{-1}\Big]^*\\
&= \big(E_i - \tfrac{\Delta t}{4}\Lambda_i^m\big)\big(E_i + \tfrac{\Delta t}{4}\Lambda_i^m\big)^{-1}.
\end{aligned}
$$
Hence
$$
\begin{aligned}
(A^m)^* &= \big(A_1^m\cdots A_n^m A_n^m\cdots A_1^m\big)^* = (A_1^m)^*\cdots(A_n^m)^*(A_n^m)^*\cdots(A_1^m)^*\\
&= \big(E_1 - \tfrac{\Delta t}{4}\Lambda_1^m\big)\big(E_1 + \tfrac{\Delta t}{4}\Lambda_1^m\big)^{-1}\cdots\big(E_n - \tfrac{\Delta t}{4}\Lambda_n^m\big)\big(E_n + \tfrac{\Delta t}{4}\Lambda_n^m\big)^{-1}\\
&\quad\times\big(E_n - \tfrac{\Delta t}{4}\Lambda_n^m\big)\big(E_n + \tfrac{\Delta t}{4}\Lambda_n^m\big)^{-1}\cdots\big(E_1 - \tfrac{\Delta t}{4}\Lambda_1^m\big)\big(E_1 + \tfrac{\Delta t}{4}\Lambda_1^m\big)^{-1}.
\end{aligned}
$$
The proof is complete.
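The identity (2.12) can be checked numerically on a toy problem: propagate a state through a product of matrices $A^m$, run the adjoint recursion (2.18) backward, and compare $(A^0)^*\eta^0$ with a finite-difference gradient. A sketch (helper names are ours, and the $A^m$ below are arbitrary small matrices rather than the splitting matrices of (1.111)):

```python
import numpy as np

def forward_state(A_list, v):
    """u^{m+1} = A^m u^m with u^0 = v; returns the final state u^M."""
    u = np.array(v, dtype=float)
    for A in A_list:
        u = A @ u
    return u

def adjoint_gradient(A_list, v, xi):
    """Gradient of J(v) = 0.5*||u^M - xi||^2 via the discrete adjoint:
    eta^{M-1} = psi = u^M - xi, then eta^m = (A^{m+1})^T eta^{m+1},
    and finally grad = (A^0)^T eta^0, cf. (2.12) and (2.18)."""
    eta = forward_state(A_list, v) - xi      # psi
    for A in reversed(A_list):               # applies (A^{M-1})^T ... (A^0)^T
        eta = A.T @ eta
    return eta
```

One backward sweep through the adjoint recursion thus costs about as much as one forward solve, which is what makes the conjugate gradient iteration below practical.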
We reconstruct the initial condition $\bar v$ from the measured final state $u^M = \xi$ by the following steps:

Step 1 (initialization): Given an initial guess $v^0$ and a scalar $\alpha > 1$, calculate the residual $\hat r^0 = u(v^0) - \xi$ by solving the splitting scheme (1.111) with the initial condition $\bar v$ being replaced by the initial guess $v^0$. If $\|\hat r^0\| \le \alpha\epsilon$, stop the algorithm. Otherwise, set $i = 0$, $d^{-1} = (0,\dots,0)$ and go to Step 2.

Step 2. Calculate the gradient $r^i = \nabla J_0^{k,M}(v^i)$ given in (2.12) by solving the adjoint problem, and set $d^i = -r^i + \beta^{i-1}d^{i-1}$, where
$$\beta^{i-1} = \frac{\|r^i\|^2}{\|r^{i-1}\|^2} \ \text{ for } i \ge 1, \qquad \beta^{-1} = 0. \tag{2.26}$$

Step 3. Calculate the solution $\bar u^i$ of the splitting scheme (1.111) with $\bar v$ being replaced by $d^i$, put
$$\alpha^i = \frac{\|r^i\|^2}{\|(\bar u^i)^M\|^2}. \tag{2.27}$$
Then, set
$$v^{i+1} = v^i + \alpha^i d^i. \tag{2.28}$$

Step 4. If $\|\hat r^{i+1}\| \le \alpha\epsilon$, stop the algorithm (Nemirovskii's stopping rule). Otherwise, set $i := i+1$, and go back to Step 2.
We note that (2.29) can be derived from the equality
$$\hat r^{i+1} = u^M(v^{i+1}) - \xi = u^M(v^i + \alpha^i d^i) - \xi = \hat r^i + u^M(\alpha^i d^i) = \hat r^i + \alpha^i(\bar u^i)^M. \tag{2.30}$$
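Steps 1–4 can be sketched in matrix form as follows; here a dense matrix `C` stands in for the solution operator realized by the splitting scheme, and the helper name and the dense setting are ours:

```python
import numpy as np

def cg_least_squares(C, xi, v0, eps, alpha_stop=1.1, max_iter=500):
    """Conjugate gradient iteration for J(v) = 0.5*||C v - xi||^2
    following Steps 1-4 above, with Nemirovskii's discrepancy stopping
    rule ||C v - xi|| <= alpha_stop * eps.  In the thesis C d is
    evaluated by the splitting scheme (1.111) with zero source term."""
    v = np.array(v0, dtype=float)
    r_hat = C @ v - xi                        # residual r̂^0
    d = np.zeros_like(v)
    r_prev_sq = None
    for _ in range(max_iter):
        if np.linalg.norm(r_hat) <= alpha_stop * eps:
            break                             # Nemirovskii's stopping rule
        r = C.T @ r_hat                       # gradient r^i = ∇J(v^i)
        r_sq = r @ r
        beta = r_sq / r_prev_sq if r_prev_sq is not None else 0.0  # (2.26)
        d = -r + beta * d                     # new search direction
        Cd = C @ d                            # plays the role of (ū^i)^M
        a = r_sq / (Cd @ Cd)                  # step length (2.27)
        v = v + a * d                         # update (2.28)
        r_hat = r_hat + a * Cd                # residual update (2.30)
        r_prev_sq = r_sq
    return v
```

For noisy data, `eps` is the noise level, and the iteration stops as soon as the residual falls below `alpha_stop * eps`, which is what prevents the iterates from fitting the noise.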
We now present numerical tests to estimate the singular values and to reconstruct the initial condition $v(x)$. Recall that
$$C^*Cv = p(x,0).$$
Thus, to approximate the singular values of $C$ we have to approximate (2.31) and (2.32) to get an approximation to $C$ in a matrix form, say $C_h$, and $C_h^*C_h v$, and then apply Lanczos' algorithm to approximate the eigenvalues of $C_h^*C_h$. In doing so, we subdivide the domain $\Omega$ into 50 uniform subintervals and the time interval $(0,1)$ into 50 subintervals and then apply the method presented in Chapter 1 to solve (2.31) and (2.32).

The result of estimating the singular values after 51 iterations is presented in Figure 2.1. We can see that the singular values of the operator are very small; the 51st value is about $10^{-15}$. The condition number (the quotient of the first singular value and the 51st singular value) is about $3.38431\times10^{10}$.
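The rapid decay of the singular values can be reproduced on a model problem: for a one-dimensional heat equation with a constant coefficient and implicit Euler time stepping (a simplification of the schemes used in this thesis), the map $v \mapsto u(\cdot,T)$ can be formed as an explicit matrix and its singular values computed directly. All parameter choices below ($a = 0.01$, 50 grid points, 50 time steps) are illustrative only:

```python
import numpy as np

def heat_final_time_matrix(n=50, T=1.0, steps=50, a=0.01):
    """Matrix of the map v -> u(., T) for u_t = a*u_xx on (0, 1) with
    zero Dirichlet boundary conditions, discretized by implicit Euler
    on n interior grid points (a model for the operator C above)."""
    h = 1.0 / (n + 1)
    dt = T / steps
    lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / h ** 2
    step = np.linalg.inv(np.eye(n) - dt * a * lap)   # one implicit Euler step
    return np.linalg.matrix_power(step, steps)

C = heat_final_time_matrix()
s = np.linalg.svd(C, compute_uv=False)               # singular values, decreasing
```

Already for this toy operator the computed singular values fall over many orders of magnitude, the numerical face of the exponential ill-posedness of the backward heat problem discussed above.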
Figure 2.1: Estimated singular values of the operator (logarithmic scale, decaying from about $10^{-4}$ to $10^{-16}$).
We test our algorithm for different kinds of initial conditions: 1) very smooth (Example 2), 2) continuous but not smooth (Example 3) and 3) discontinuous (Example 4 and Example 5). For the finite difference method for solving the direct problem, we take the spatial mesh size $h = (0.02, 0.02)$ and the time mesh size $\Delta t = 0.02$.

Example 2. In this example, we take
$$
\begin{aligned}
a_1(x,t) &= 0.02\big[1 - \tfrac12(1-t)\cos(15\pi x_1)\cos(15\pi x_2)\big],\\
a_2(x,t) &= 0.01\big[1 - \tfrac12(1-t)\cos(15\pi x_1)\cos(15\pi x_2)\big],\\
b(x,t) &= x_1^2 + x_2^2 + 2x_1 t + 1,
\end{aligned}
$$
After inserting the exact solution into the system (2.33), we get the exact initial condition and the right-hand side
$$
\begin{aligned}
f(x,t) = {}& -\frac{v(x)}{2}\Big[1 - 0.06\Big(1-\frac{t}{2}\Big)\Big(1-\frac{1-t}{2}\cos(15\pi x_1)\cos(15\pi x_2)\Big) - \big(x_1^2+x_2^2+2x_1t+1\big)(2-t)\Big]\\
& - 0.075\pi^2\Big(1-\frac{t}{2}\Big)(1-t)\Big[2\cos(\pi x_1)\sin(\pi x_2)\sin(15\pi x_1)\cos(15\pi x_2)\\
&\qquad - \sin(\pi x_1)\cos(\pi x_2)\cos(15\pi x_1)\sin(15\pi x_2)\Big].
\end{aligned}
$$
The numerical results of reconstructing the initial condition $v(x)$ for Example 2 are presented in Figure 2.2.
Figure 2.2: Example 2: Reconstruction results: (a) exact function $v$; (b) estimated one; (c) point-wise error; (d) the comparison of $v|_{x_1=1/2}$ and its reconstruction (the dashed curve: the exact function; the solid curve: the estimated function).
In the next examples we present our tests for reconstructing nonsmooth initial conditions. We choose $v$ and solve the direct problem by the finite difference method to get approximations to the exact solution $u$. Taking the numerical solutions of $u(x,T)$ as the data, we apply our algorithm for reconstructing $v$. For solving the direct and inverse problems we use two different mesh sizes to avoid the "inverse crime".
Example 3. In this example, $v(x)$ is chosen as a multi-linear function given by
$$
v(x) = \begin{cases}
2x_2, & \text{if } x_2 \le 1/2 \text{ and } x_2 \le x_1 \text{ and } x_1 \le 1-x_2,\\
2(1-x_2), & \text{if } x_2 \ge 1/2 \text{ and } x_2 \ge x_1 \text{ and } x_1 \ge 1-x_2,\\
2x_1, & \text{if } x_1 \le 1/2 \text{ and } x_1 \le x_2 \text{ and } x_2 \le 1-x_1,\\
2(1-x_1), & \text{otherwise},
\end{cases}
$$
and the coefficients $a_1(x,t)$, $a_2(x,t)$ and $b(x,t)$ are given as
$$
\begin{aligned}
a_1(x,t) &= 0.02\big[1 - \tfrac12(1-t)\cos(15\pi x_1)\cos(15\pi x_2)\big],\\
a_2(x,t) &= 0.01\big[1 - \tfrac12(1-t)\cos(15\pi x_1)\cos(15\pi x_2)\big],\\
b(x,t) &= x_1^2 + x_2^2 + 2x_1 t + 1.
\end{aligned}
$$
Figure 2.3: Example 3: Reconstruction result: (a) exact function $v$; (b) estimated one; (c) point-wise error; (d) the comparison of $v|_{x_1=1/2}$ and its reconstruction (the dashed curve: the exact function; the solid curve: the estimated function).
In Examples 4 and 5, we test the algorithm with the piecewise constant initial condition given by
$$
v(x) = \begin{cases} 1, & \text{if } 1/4 \le x_1 \le 3/4 \text{ and } 1/4 \le x_2 \le 3/4,\\ 0, & \text{otherwise}.\end{cases}
$$
Example 4. The coefficients $a_1(x,t) = a_2(x,t) = b(x,t) = 10^{-2}$. The numerical results for Example 4 are presented in Figure 2.4.
Figure 2.4: Example 4: Reconstruction result: (a) exact function $v$; (b) estimated one; (c) point-wise error; (d) the comparison of $v|_{x_1=1/2}$ and its reconstruction (the dashed curve: the exact function; the solid curve: the estimated function).
Figure 2.5: Example 5: Reconstruction result: (a) exact function $v$; (b) estimated one; (c) point-wise error; (d) the comparison of $v|_{x_1=1/2}$ and its reconstruction (the dashed curve: the exact function; the solid curve: the estimated function).
[61] Oanh N.T.N., A splitting method for a backward parabolic equation with time-dependent coefficients. Computers & Mathematics with Applications 65 (2013), 17–28.
Chapter 3

In this chapter, we study the data assimilation problem of determining the initial condition $v(x)$ in the Dirichlet problem (1.7)–(1.9) from $N$ integral observations $l_i u = h_i(t)$, $t \in (\tau,T)$, $\tau \ge 0$, $i = 1,2,\dots,N$, regarded as a generalization of pointwise spatial observations: suppose that we are given $N$ nonnegative weight functions $\omega_i \in L^1(\Omega)$ with $\int_\Omega \omega_i(x)\,dx > 0$, and we can observe $u$ by
$$l_i u(x,t) = \int_\Omega \omega_i(x)u(x,t)\,dx = h_i(t), \quad t\in[\tau,T],\ i = 1,\dots,N.$$
We reformulate the reconstruction problem as that of minimizing a misfit functional. As the variational problem is ill-posed, we stabilize it by the Tikhonov regularization method. It is proved that the regularized functional is Fréchet differentiable and a formula for its gradient is derived via an adjoint problem. The variational problem is first discretized in space variables and the convergence results of the method are proved. The problem is then fully discretized and it is proved that the discretized functional is Fréchet differentiable. A formula for its gradient is derived via a discrete adjoint problem and the conjugate gradient method is applied to numerically solve the discretized variational problem. Some numerical examples are provided to show the efficiency of the proposed algorithm.
3.1 Problem setting and the variational method

As in the previous chapter, for the ease of reading, we rewrite the Dirichlet problem (1.7)–(1.9). We assume that the coefficients $a_{ij}$, $b$ and the data $f$ and $v$ satisfy the conditions (1.1)–(1.6). The solution to this problem is understood in the weak sense of Definition 1.1.7. It has been proved that under these conditions there exists a unique solution $u\in W(0,T;H_0^1(\Omega))$ to (3.1).
Data assimilation: Suppose that $v$ is not given and we have to reconstruct it from the observations (3.2).

Remark 3.1.1. If $v\in H_0^1(\Omega)$, $a_{ij}, b \in C^1([0,T];L^\infty(\Omega))$, $i,j = 1,\dots,n$, and there exists a constant $\mu_1$ such that $|\partial a_{ij}/\partial t|, |\partial b/\partial t| \le \mu_1$, then the operator $C$ mapping $v\in L^2(\Omega)$ to $(l_1u(x,t;v),\dots,l_Nu(x,t;v))$ is compact from $L^2(\Omega)$ to $(L^2(\tau,T))^N$. Hence the problem of reconstructing $v$ from $Cu(v) = (h_1,\dots,h_N)\in(L^2(\tau,T))^N$ is ill-posed.

Proof. If condition 2) of Theorem 1.1.1 is satisfied, then $u\in H^{1,1}_0(Q)$. Therefore, $(l_1u(x,t;v),\dots,l_Nu(x,t;v))\in(H^1(\tau,T))^N$, which is compactly embedded in $(L^2(\tau,T))^N$. Hence, the operator $C$ from $L^2(\Omega)$ to $(L^2(\tau,T))^N$ is compact.
Now we analyze the degree of ill-posedness of this problem. We denote by $\mathring u$ the solution of the problem
$$
\begin{cases}
\dfrac{\partial u}{\partial t} - \displaystyle\sum_{i,j=1}^n \dfrac{\partial}{\partial x_i}\Big(a_{ij}(x,t)\dfrac{\partial u}{\partial x_j}\Big) + b(x,t)u = f & \text{in } Q,\\[1ex]
u = 0 & \text{on } S,\\
u|_{t=0} = 0 & \text{in } \Omega,
\end{cases}
\tag{3.3}
$$
and by $u[v]$ the solution of the corresponding homogeneous problem with initial condition $v$. Then $u(v) = u[v] + \mathring u$. Hence
$$l_i u(v) = l_i u[v] + l_i\mathring u = C_i v + l_i\mathring u, \tag{3.5}$$
and, setting $\hat h_i := h_i - l_i\mathring u$ and $\hat h := (\hat h_1,\dots,\hat h_N)$, the reconstruction problem takes the form
$$Cv = \hat h. \tag{3.6}$$
We now introduce a variational method for solving the reconstruction problem.
Here we write $u(x,t;v)$ to emphasize the dependence of the solution $u(x,t)$ on the initial condition $v$. A natural method for reconstructing $v$ from the observations $l_iu$, $i = 1,2,\dots,N$, in (3.2) is to minimize the Tikhonov functional (3.8). Since $\gamma > 0$, it is easily seen that the problem of minimizing $J_\gamma(v)$ over $L^2(\Omega)$ has a unique solution. Now we prove that $J_\gamma(v)$ is Fréchet differentiable and derive a formula for its gradient via the adjoint problem (3.9), where $\chi_{(\tau,T)}(t) = 1$ if $t\in(\tau,T)$ and $\chi_{(\tau,T)}(t) = 0$ otherwise. The solution to (3.9) is understood in the weak sense of Section 1.2. Since $\sum_{i=1}^N \omega_i(l_iu - h_i)\chi_{(\tau,T)}(t)\in L^2(Q)$, there exists a unique solution $p\in W(0,T;H_0^1(\Omega))$ to (3.9).
Theorem 3.1.1. The gradient $\nabla J_0(v)$ of the objective functional $J_0(v)$ at $v$ is given by $\nabla J_0(v) = p(x,0)$, where $p$ is the solution to the adjoint problem (3.9).

Proof. We have
$$
\begin{aligned}
J_0(v+\delta v) - J_0(v) &= \sum_{i=1}^N\frac12\|l_iu(v+\delta v) - h_i\|^2_{L^2(\tau,T)} - \sum_{i=1}^N\frac12\|l_iu(v) - h_i\|^2_{L^2(\tau,T)}\\
&= \sum_{i=1}^N\frac12\|l_i\delta u(v)\|^2_{L^2(\tau,T)} + \sum_{i=1}^N\langle l_i\delta u(v),\, l_iu(v)-h_i\rangle_{L^2(\tau,T)}\\
&= \sum_{i=1}^N\langle l_i\delta u(v),\, l_iu(v)-h_i\rangle_{L^2(\tau,T)} + o\big(\|\delta v\|_{L^2(\Omega)}\big).
\end{aligned}
$$
Hence
$$
\begin{aligned}
J_0(v+\delta v) - J_0(v) &= \sum_{i=1}^N\langle l_i\delta u,\, l_iu - h_i\rangle_{L^2(\tau,T)} + o\big(\|\delta v\|_{L^2(\Omega)}\big)\\
&= \sum_{i=1}^N\int_\tau^T\Big(\int_\Omega\omega_i\,\delta u\,dx\Big)\big(l_iu - h_i\big)\,dt + o\big(\|\delta v\|_{L^2(\Omega)}\big)\\
&= \sum_{i=1}^N\int_\tau^T\int_\Omega\delta u\,\omega_i\big(l_iu - h_i\big)\,dx\,dt + o\big(\|\delta v\|_{L^2(\Omega)}\big)\\
&= \int_0^T\int_\Omega\delta u\sum_{i=1}^N\omega_i\big(l_iu - h_i\big)\chi_{(\tau,T)}(t)\,dx\,dt + o\big(\|\delta v\|_{L^2(\Omega)}\big).
\end{aligned}
\tag{3.12}
$$
Using Green's formula (1.22) (Theorem 1.2.1) for the systems (3.9) and (3.11), we obtain
$$\int_0^T\int_\Omega\delta u\sum_{i=1}^N\omega_i\big(l_iu - h_i\big)\chi_{(\tau,T)}(t)\,dx\,dt = \int_\Omega\delta v\,p(x,0)\,dx. \tag{3.13}$$
From this result, we see that the functional $J_\gamma(v)$ is also Fréchet differentiable and its gradient is $\nabla J_\gamma(v) = p(x,0) + \gamma(v - v^*)$.

Now we turn to the question of estimating the degree of ill-posedness of our reconstruction problem. Since $C_i$, $i = 1,\dots,N$, are defined by (3.5), from the above theorem we have $C_i^*g = p_i^\dagger(x,0)$, where $p_i^\dagger$ is the solution to the adjoint problem
$$
\begin{cases}
-\dfrac{\partial p_i^\dagger}{\partial t} - \displaystyle\sum_{j=1}^n \dfrac{\partial}{\partial x_j}\Big(a_j(x,t)\dfrac{\partial p_i^\dagger}{\partial x_j}\Big) + b(x,t)p_i^\dagger = \omega_i(x)\tilde g(t) & \text{in } Q,\\[1ex]
p_i^\dagger = 0 & \text{on } S,\\
p_i^\dagger(x,T) = 0 & \text{in } \Omega,
\end{cases}
\tag{3.15}
$$
with $g\in L^2(\tau,T)$, $\tilde g(t) = g(t)$ for $t\in(\tau,T)$ and $\tilde g(t) = 0$ otherwise.
Moreover,
$$
\begin{aligned}
J_0(v) &= \frac12\sum_{i=1}^N\|l_iu(v) - h_i\|^2_{L^2(\tau,T)}\\
&= \frac12\sum_{i=1}^N\big\|C_iv - (h_i - l_i\mathring u)\big\|^2_{L^2(\tau,T)}\\
&= \frac12\|Cv - \hat h\|^2_{(L^2(\tau,T))^N}.
\end{aligned}
$$
Hence
$$J_0'(v) = C^*(Cv - \hat h) = \sum_{i=1}^N C_i^*(C_iv - \hat h_i).$$
If we take $h_i$ such that $h_i = l_i\mathring u$, then
$$J_0'(v) = C^*Cv = \sum_{i=1}^N C_i^*C_iv.$$
Due to Theorem 3.1.1, $C^*Cv = \sum_{i=1}^N C_i^*C_iv = p^0(x,0)$, where $p^0$ is the solution of the adjoint problem
$$
\begin{cases}
-\dfrac{\partial p^0}{\partial t} - \displaystyle\sum_{i,j=1}^n \dfrac{\partial}{\partial x_i}\Big(a_{ij}(x,t)\dfrac{\partial p^0}{\partial x_j}\Big) + b(x,t)p^0 = \displaystyle\sum_{i=1}^N\omega_i\,l_iu[v]\,\chi_{(\tau,T)}(t) & \text{in } Q,\\[1ex]
p^0 = 0 & \text{on } S,\\
p^0(x,T) = 0 & \text{in } \Omega.
\end{cases}
\tag{3.16}
$$
Thus, if $v\in L^2(\Omega)$ is given, we have the value $C^*Cv = p^0(x,0)$. However, an explicit form of $C^*C$ is not available to analyze its eigenvalues and eigenfunctions. Despite this, when we discretize the problem, the finite-dimensional approximations $C_h$ to $C$ are matrices, and so we can apply Lanczos' algorithm [81] to estimate the eigenvalues of $C_h^*C_h$ based on the values $C_h^*C_hv_h$. The numerical simulation of this approach will be given in Section 3.4.
Since the gradient of the functional $J_\gamma(v)$ can be calculated via the adjoint problem (3.9), the conjugate gradient algorithm is applicable to approximating the initial condition $v$, as in (3.17), where
$$\beta^k = \frac{\|\nabla J_\gamma(v^k)\|^2}{\|\nabla J_\gamma(v^{k-1})\|^2}, \qquad \alpha^k = \operatorname*{argmin}_{\alpha\ge0} J_\gamma(v^k + \alpha d^k). \tag{3.18}$$
We have
$$
\begin{aligned}
J_\gamma(v^k+\alpha d^k) &= \sum_{i=1}^N\frac12\|l_iu(v^k+\alpha d^k) - h_i\|^2_{L^2(\tau,T)} + \frac\gamma2\|v^k+\alpha d^k - v^*\|^2_{L^2(\Omega)}\\
&= \sum_{i=1}^N\frac12\|C_i(v^k+\alpha d^k) + l_i\mathring u - h_i\|^2_{L^2(\tau,T)} + \frac\gamma2\|\alpha d^k + v^k - v^*\|^2_{L^2(\Omega)}\\
&= \sum_{i=1}^N\frac12\|\alpha C_id^k + l_iu(v^k) - h_i\|^2_{L^2(\tau,T)} + \frac\gamma2\|\alpha d^k + v^k - v^*\|^2_{L^2(\Omega)}.
\end{aligned}
$$
Hence
$$\frac{\partial J_\gamma(v^k+\alpha d^k)}{\partial\alpha} = \alpha\sum_{i=1}^N\|C_id^k\|^2_{L^2(\tau,T)} + \sum_{i=1}^N\langle C_id^k,\, l_iu(v^k)-h_i\rangle_{L^2(\tau,T)} + \gamma\langle d^k,\, \alpha d^k + v^k - v^*\rangle_{L^2(\Omega)}.$$
It follows from (3.17) that $\alpha^k$ can be rewritten as
$$
\alpha^k = \begin{cases}
-\dfrac{\langle\nabla J_\gamma(v^k),\, -\nabla J_\gamma(v^k)\rangle_{L^2(\Omega)}}{\sum_{i=1}^N\|C_id^k\|^2_{L^2(\tau,T)} + \gamma\|d^k\|^2_{L^2(\Omega)}}, & \text{if } k = 0,\\[3ex]
-\dfrac{\langle\nabla J_\gamma(v^k),\, -\nabla J_\gamma(v^k) + \beta^kd^{k-1}\rangle_{L^2(\Omega)}}{\sum_{i=1}^N\|C_id^k\|^2_{L^2(\tau,T)} + \gamma\|d^k\|^2_{L^2(\Omega)}}, & \text{if } k > 0.
\end{cases}
\tag{3.20}
$$
Therefore,
$$\alpha^k = \frac{\|\nabla J_\gamma(v^k)\|^2_{L^2(\Omega)}}{\sum_{i=1}^N\|C_id^k\|^2_{L^2(\tau,T)} + \gamma\|d^k\|^2_{L^2(\Omega)}}, \quad k = 0,1,2,\dots \tag{3.21}$$
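Formula (3.21) gives the exact minimizer of the quadratic $\alpha \mapsto J_\gamma(v^k + \alpha d^k)$ whenever $\langle\nabla J_\gamma(v^k), d^{k-1}\rangle = 0$, and in particular for $k = 0$ with $d^0 = -\nabla J_\gamma(v^0)$. A discrete check, with the $C_i$ as plain matrices (names and sizes are illustrative):

```python
import numpy as np

def Jgamma(C_list, h_list, gamma, v_star, v):
    """Discrete analogue of J_gamma: 0.5 * sum_i ||C_i v - h_i||^2
    + 0.5 * gamma * ||v - v_star||^2."""
    misfit = sum(np.sum((Ci @ v - hi) ** 2) for Ci, hi in zip(C_list, h_list))
    return 0.5 * misfit + 0.5 * gamma * np.sum((v - v_star) ** 2)

def step_length(C_list, gamma, grad, d):
    """Step length (3.21): ||grad||^2 / (sum_i ||C_i d||^2 + gamma*||d||^2)."""
    denom = sum(np.sum((Ci @ d) ** 2) for Ci in C_list) + gamma * np.sum(d ** 2)
    return np.sum(grad ** 2) / denom
```

Because the functional is quadratic along any direction, no inexact line search is needed; this closed-form step is what Step 5 of the algorithm below uses.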
We again assume that the equation contains no mixed derivatives, that is $a_{ij} = 0$ if $i\ne j$, and we denote $a_{ii}$ by $a_i$ as in Section 1.4 and get the system (1.46). Furthermore, we assume $\Omega$ is the open parallelepiped defined in Subsection 1.4.1. We discretize this problem by the finite difference method in space variables and get a system of ordinary differential equations. Using the representation in Section 3.1, we have the first-order optimality condition for the discretized problem.

We approximate the functional $J_\gamma$ as follows. First, we approximate the functional $l_iu(v)$ by
$$l_{ih}\bar u(\bar v) = \Delta h\sum_{k\in\Omega_h}\bar\omega_i^k\bar u^k(\bar v), \quad i = 1,\dots,N. \tag{3.23}$$
As in the continuous problem, we define $\bar{\mathring u}$ as the solution to the Cauchy problem
$$\frac{d\bar u}{dt} + (\Lambda_1+\dots+\Lambda_n)\bar u - \bar F = 0, \qquad \bar u(0) = 0, \tag{3.24}$$
where $\bar F$ is defined as in (1.75), and $\bar u[\bar v]$ as the solution to the Cauchy problem
$$\frac{d\bar u}{dt} + (\Lambda_1+\dots+\Lambda_n)\bar u = 0, \qquad \bar u(0) = \bar v. \tag{3.25}$$
Then,
$$l_{ih}\bar u(\bar v) = l_{ih}\bar u[\bar v] + l_{ih}\bar{\mathring u} = \Delta h\sum_{k\in\Omega_h}\bar\omega_i^k\bar u^k[\bar v] + \Delta h\sum_{k\in\Omega_h}\bar\omega_i^k\bar{\mathring u}^k, \quad i = 1,\dots,N. \tag{3.26}$$
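The decomposition (3.26) rests on the linearity of the semidiscrete problem: the solution with data $(\bar F, \bar v)$ is the sum of the solution $\bar{\mathring u}$ of (3.24) and the solution $\bar u[\bar v]$ of (3.25). A toy check with explicit Euler in place of the splitting scheme (the time integrator, sizes and helper name are ours):

```python
import numpy as np

def euler_solve(Lmb, F, v, dt=1e-3, steps=1000):
    """Explicit Euler for du/dt + Lmb @ u = F, u(0) = v; Lmb plays the
    role of Lambda_1 + ... + Lambda_n (stand-in for scheme (1.111))."""
    u = np.array(v, dtype=float)
    for _ in range(steps):
        u = u + dt * (F - Lmb @ u)
    return u
```

The identity $u(\bar v) = \mathring u + u[\bar v]$ holds exactly for any linear one-step method, not just this one, which is why the observations split as in (3.26).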
Hence the operator $C_{ih}\bar v = l_{ih}\bar u[\bar v]$ is linear and bounded from $L^2(\Omega)$ into $L^2(\tau,T)$ for every $i = 1,\dots,N$. Solving the corresponding discrete adjoint problem with $\bar G = \{\bar\omega_i^kg(t),\, k\in\Omega_h\}$, where $\bar\omega_i^k$ is defined in formula (1.67), we obtain
$$C_{ih}^*g = \bar p^\dagger(x,0).$$
Thus, the discretized version of $C$ has the form $C_h := (C_{1h},\dots,C_{Nh})$ and the functional $J_\gamma$ is now approximated as follows:
$$J_\gamma^h(\bar v) := \frac12\|C_h\bar v - \hat h\|^2_{(L^2(\tau,T))^N} + \frac\gamma2\|\bar v - \bar v^*\|^2_{L^2(\Omega)} \tag{3.28}$$
$$\phantom{J_\gamma^h(\bar v) :}= \frac12\sum_{i=1}^N\|C_{ih}\bar v - \hat h_i\|^2_{L^2(\tau,T)} + \frac\gamma2\|\bar v - \bar v^*\|^2_{L^2(\Omega)}. \tag{3.29}$$
Here, for simplicity of notation, we again set $\hat h_i = h_i - l_{ih}\bar{\mathring u}$. For this discretized optimization problem we have the following convergence result. If we suppose that condition 2) of Theorem 1.4.11 is satisfied, then $\|C_h\bar v - Cv\|_{(L^2(\tau,T))^N}$ and $\|C_h^*v - C^*v\|_{L^2(\Omega)}$ tend to zero as $h$ tends to zero. Following Theorem 1.5.1 of Subsection 1.5, we obtain:

Theorem 3.2.1. Let condition 2) of Theorem 1.4.11 be satisfied. Then the interpolation $\hat{\bar v}_h^\gamma$ of the solution $\bar v_h^\gamma$ of the problem (3.28) converges to the solution $v^\gamma$ of the problem (3.8) in $L^2(\Omega)$ as $h$ tends to zero.
We use Crank–Nicolson's method in Section 1.3 and the splitting method in Subsection 1.4.3 to get the scheme (1.111) or (1.112). Now, we discretize the objective functional $J_0(v)$ as follows:
$$J_0^{h,\Delta t}(\bar v) := \frac12\sum_{i=1}^N\sum_{m=\ell}^M\Big[\sum_{k\in\Omega_h}\omega_i^ku^{k,m}(\bar v) - h_i^m\Big]^2, \tag{3.31}$$
where $\ell$ is the first index for which $\ell\Delta t > \tau$, $u^{k,m}(\bar v)$ shows its dependence on the initial condition $\bar v$, and $m$ is the index of grid points on the time axis. We denote by $\omega_i^k = \omega_i(x^k)$ the approximation of the function $\omega_i(x)$ in $\Omega_h$ at the points $x^k$, as defined by (1.67). We note that in the definition of $J_0^{h,\Delta t}$ the multiplier $\Delta h$ has been dropped as it plays no role.
To minimize $J_0^{h,\Delta t}(\bar v)$ by the conjugate gradient method, we first calculate its gradient:
$$\nabla J_0^{h,\Delta t}(\bar v) = (A^0)^*\cdots(A^{\ell-1})^*\eta^{\ell-1}, \tag{3.32}$$
where $\eta$ satisfies the adjoint problem
$$
\begin{cases}
\eta^m = (A^{m+1})^*\eta^{m+1} + \displaystyle\sum_{i=1}^N\omega_i\Big[\sum_{k\in\Omega_h}\omega_i^ku^{k,m+1}(\bar v) - h_i^{m+1}\Big], & m = M-1, M-2,\dots,\ell-1,\\[1ex]
\eta^M = 0,
\end{cases}
\tag{3.33}
$$
and
$$
\begin{aligned}
(A^m)^* &= \big(E_1 - \tfrac{\Delta t}{4}\Lambda_1^m\big)\big(E_1 + \tfrac{\Delta t}{4}\Lambda_1^m\big)^{-1}\cdots\big(E_n - \tfrac{\Delta t}{4}\Lambda_n^m\big)\big(E_n + \tfrac{\Delta t}{4}\Lambda_n^m\big)^{-1}\\
&\quad\times\big(E_n - \tfrac{\Delta t}{4}\Lambda_n^m\big)\big(E_n + \tfrac{\Delta t}{4}\Lambda_n^m\big)^{-1}\cdots\big(E_1 - \tfrac{\Delta t}{4}\Lambda_1^m\big)\big(E_1 + \tfrac{\Delta t}{4}\Lambda_1^m\big)^{-1}.
\end{aligned}
\tag{3.34}
$$
Proof. For a small variation $\delta\bar v$ of $\bar v$, we have from (3.31) that
$$
\begin{aligned}
J_0^{h,\Delta t}(\bar v+\delta\bar v) - J_0^{h,\Delta t}(\bar v) &= \frac12\sum_{i=1}^N\sum_{m=\ell}^M\Big[\sum_{k\in\Omega_h}\omega_i^ku^{k,m}(\bar v+\delta\bar v) - h_i^m\Big]^2 - \frac12\sum_{i=1}^N\sum_{m=\ell}^M\Big[\sum_{k\in\Omega_h}\omega_i^ku^{k,m}(\bar v) - h_i^m\Big]^2\\
&= \frac12\sum_{i=1}^N\sum_{m=\ell}^M\Big[\sum_{k\in\Omega_h}\omega_i^kw^{k,m}\Big]^2 + \sum_{i=1}^N\sum_{m=\ell}^M\Big[\sum_{k\in\Omega_h}\omega_i^kw^{k,m}\Big]\psi_i^m\\
&= \frac12\sum_{i=1}^N\sum_{m=\ell}^M\Big[\sum_{k\in\Omega_h}\omega_i^kw^{k,m}\Big]^2 + \sum_{i=1}^N\sum_{m=\ell}^M(w^m,\omega_i\psi_i^m),
\end{aligned}
\tag{3.35}
$$
where $w^{k,m} := u^{k,m}(\bar v+\delta\bar v) - u^{k,m}(\bar v)$, $w^m := \{w^{k,m}, k\in\Omega_h\}$, $\psi_i^m := \sum_{k\in\Omega_h}\omega_i^ku^{k,m}(\bar v) - h_i^m$, $\omega_i := \{\omega_i^k, k\in\Omega_h\}$, and the inner product is that of $\mathbb{R}^{N_1\times\dots\times N_n}$.
It follows from (1.111) that $w$ is the solution to the problem
$$w^{m+1} = A^mw^m, \quad m = 0,\dots,M-1, \qquad w^0 = \delta\bar v. \tag{3.36}$$
Taking the inner product of both sides of the $m$th equation of (3.36) with an arbitrary vector $\eta^m\in\mathbb{R}^{N_1\times\dots\times N_n}$ and summing the results over $m = \ell-1,\dots,M-1$, we obtain
$$\sum_{m=\ell-1}^{M-1}(w^{m+1},\eta^m) = \sum_{m=\ell-1}^{M-1}(A^mw^m,\eta^m) = \sum_{m=\ell-1}^{M-1}\big(w^m,(A^m)^*\eta^m\big). \tag{3.37}$$
Here $(A^m)^*$ is the adjoint matrix of $A^m$. Consider the adjoint problem
$$
\begin{cases}
\eta^m = (A^{m+1})^*\eta^{m+1} + \displaystyle\sum_{i=1}^N\omega_i\psi_i^{m+1}, & m = M-1, M-2,\dots,\ell-1,\\[1ex]
\eta^M = 0.
\end{cases}
\tag{3.38}
$$
Taking the inner product of both sides of the first equation of (3.38) with the vector $w^{m+1}$ and summing over $m = \ell-1,\dots,M-1$, we obtain
$$
\begin{aligned}
\sum_{m=\ell-1}^{M-1}(w^{m+1},\eta^m) &= \sum_{m=\ell-1}^{M-1}\big(w^{m+1},(A^{m+1})^*\eta^{m+1}\big) + \sum_{i=1}^N\sum_{m=\ell-1}^{M-1}\big(w^{m+1},\omega_i\psi_i^{m+1}\big)\\
&= \sum_{m=\ell}^{M}\big(w^m,(A^m)^*\eta^m\big) + \sum_{i=1}^N\sum_{m=\ell}^{M}\big(w^m,\omega_i\psi_i^m\big).
\end{aligned}
\tag{3.39}
$$
Combining this with (3.37) yields
$$
\begin{aligned}
\sum_{i=1}^N\sum_{m=\ell}^M\big(w^m,\omega_i\psi_i^m\big) + \big(w^M,(A^M)^*\eta^M\big) &= \big(w^{\ell-1},(A^{\ell-1})^*\eta^{\ell-1}\big)\\
&= \big(A^{\ell-2}w^{\ell-2},(A^{\ell-1})^*\eta^{\ell-1}\big)\\
&= \cdots\\
&= \big(A^{\ell-2}\cdots A^0w^0,(A^{\ell-1})^*\eta^{\ell-1}\big)\\
&= \big(\delta\bar v,(A^0)^*\cdots(A^{\ell-1})^*\eta^{\ell-1}\big).
\end{aligned}
\tag{3.40}
$$
On the other hand, it can be proved by induction that $\sum_{i=1}^N\sum_{m=\ell}^M\big[\sum_{k\in\Omega_h}\omega_i^kw^{k,m}\big]^2 = o(\|\delta\bar v\|)$. Hence, it follows from the condition $\eta^M = 0$, (3.35) and (3.40) that
$$J_0^{h,\Delta t}(\bar v+\delta\bar v) - J_0^{h,\Delta t}(\bar v) = \big(\delta\bar v,(A^0)^*\cdots(A^{\ell-1})^*\eta^{\ell-1}\big) + o(\|\delta\bar v\|). \tag{3.41}$$
Consequently, the gradient of the objective functional $J_0^{h,\Delta t}$ can be written as
$$\frac{\partial J_0^{h,\Delta t}(\bar v)}{\partial\bar v} = (A^0)^*\cdots(A^{\ell-1})^*\eta^{\ell-1}. \tag{3.42}$$
Note that, since the matrices $\Lambda_i^m$, $i = 1,\dots,n$, are symmetric, we have for $m = \ell-1,\dots,M-1$
$$(A_i^m)^* = \Big[\big(E_i+\tfrac{\Delta t}4\Lambda_i^m\big)^{-1}\big(E_i-\tfrac{\Delta t}4\Lambda_i^m\big)\Big]^* = \big(E_i-\tfrac{\Delta t}4\Lambda_i^m\big)^*\Big[\big(E_i+\tfrac{\Delta t}4\Lambda_i^m\big)^{-1}\Big]^* = \big(E_i-\tfrac{\Delta t}4\Lambda_i^m\big)\big(E_i+\tfrac{\Delta t}4\Lambda_i^m\big)^{-1}.$$
Hence
$$
\begin{aligned}
(A^m)^* &= \big(A_1^m\cdots A_n^mA_n^m\cdots A_1^m\big)^* = (A_1^m)^*\cdots(A_n^m)^*(A_n^m)^*\cdots(A_1^m)^*\\
&= \big(E_1-\tfrac{\Delta t}4\Lambda_1^m\big)\big(E_1+\tfrac{\Delta t}4\Lambda_1^m\big)^{-1}\cdots\big(E_n-\tfrac{\Delta t}4\Lambda_n^m\big)\big(E_n+\tfrac{\Delta t}4\Lambda_n^m\big)^{-1}\\
&\quad\times\big(E_n-\tfrac{\Delta t}4\Lambda_n^m\big)\big(E_n+\tfrac{\Delta t}4\Lambda_n^m\big)^{-1}\cdots\big(E_1-\tfrac{\Delta t}4\Lambda_1^m\big)\big(E_1+\tfrac{\Delta t}4\Lambda_1^m\big)^{-1}.
\end{aligned}
\tag{3.43}
$$
The proof is complete.
The conjugate gradient method for the discretized functional (3.31) consists of the following steps:

Step 1. Choose an initial approximation $v^0$ and calculate the residual $\hat r^0 = \sum_{i=1}^N\big[l_iu(v^0) - h_i\big]$ by solving the splitting scheme (1.111) with $\bar v$ being replaced by the initial approximation $v^0$, and set $k = 0$.

Step 2. Calculate the gradient $r^0 = -\nabla J_\gamma(v^0)$ given in (3.32) by solving the adjoint problem, and set $d^0 = r^0$.

Step 3. Calculate $\alpha^0$ and set
$$v^1 = v^0 + \alpha^0d^0.$$

Step 4. For $k = 1,2,\dots$, calculate $r^k = -\nabla J_0(v^k)$, $d^k = r^k + \beta^kd^{k-1}$, where
$$\beta^k = \frac{\|r^k\|^2}{\|r^{k-1}\|^2}.$$

Step 5. Calculate $\alpha^k$ by
$$\alpha^k = \frac{\|r^k\|^2}{\sum_{i=1}^N\|l_id^k\|^2 + \gamma\|d^k\|^2},$$
where $l_id^k$ can be calculated from the splitting scheme (1.111) with $\bar v$ being replaced by $d^k$ and $F = 0$. Then, set
$$v^{k+1} = v^k + \alpha^kd^k.$$
We now present numerical results. We test our algorithm for 1) very smooth, 2) continuous but not smooth, and 3) discontinuous initial conditions. Thus, the degree of difficulty increases with the examples. We note that although in the theory we only prove the convergence of our difference scheme for smooth initial conditions, it works for the other cases as well, as our numerical examples show. Also, we vary the observation points as well as the time interval of observations. The results for the one-dimensional case with approximate singular values will be presented in Examples 1–5, and the results for the two-dimensional cases will follow.
3.4.1. One-dimensional numerical examples

Set $\Omega = (0,1)$, the final time $T = 1$, the grid size $0.02$ and the random noise level $0.01$. The one-dimensional system has the form
$$
\begin{cases}
u_t - \big(a(x,t)u_x\big)_x + b(x,t)u = f(x,t) & \text{in } Q,\\
u(0,t) = u(1,t) = 0, & t\in(0,T],\\
u(x,0) = v(x) & \text{in } \Omega.
\end{cases}
\tag{3.44}
$$
In all tests for this case, we set the coefficients
$$a(x,t) = x^2t + 2xt + 1, \qquad b(x,t) = 0.$$
We take $N = 3$ observations of the form (3.2), with the weight functions $\omega_i(x)$, $i = 1,2,3$, chosen as
$$\omega_i(x) = \begin{cases}\dfrac1{2\epsilon}, & \text{if } x\in(x_i-\epsilon,\, x_i+\epsilon),\\[1ex] 0, & \text{otherwise},\end{cases} \qquad \text{with } \epsilon = 0.01. \tag{3.45}$$
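With $\epsilon = 0.01$ and grid size $0.02$, each weight (3.45) is supported around a single grid point, so the quadrature value of $l_iu$ essentially reproduces the point value $u(x_i)$ when $x_i$ lies on the grid. A sketch of the discrete observation functional (the rectangle-rule quadrature and helper name are ours):

```python
import numpy as np

def l_obs(u_grid, x, x_i, eps=0.01):
    """Integral observation (3.45): weighted mean of u over
    (x_i - eps, x_i + eps) with weight 1/(2*eps), approximated by a
    rectangle-rule sum on the uniform grid x."""
    h = x[1] - x[0]
    w = np.where(np.abs(x - x_i) < eps, 1.0 / (2.0 * eps), 0.0)
    return h * np.sum(w * u_grid)

x = np.linspace(0.0, 1.0, 51)   # grid size 0.02, as in the text
u = x * (1.0 - x)               # some state sampled on the grid
value = l_obs(u, x, 0.5)        # observation centred at x_i = 0.5
```

Here `value` equals $u(0.5) = 0.25$ up to rounding, which is why these integral observations can be regarded as "practical" point-wise observations.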
The approximate singular values of the observation operator obtained by our method described in Section 3.1 for different $\tau$ are given in Figure 3.1.

Figure 3.1: Example 1. Singular values: three observations and various time intervals of observations ($\tau = 0$, $0.3$, $0.5$, $0.9$).
From these singular values we see that our inverse problem is severely ill-posed and the ill-posedness depends on the time interval of observations: the shorter the observation time, the more ill-posed the problem.

Now we show numerical results for reconstructing different initial conditions. Example 2. The exact solution is
$$u_{\text{exact}} = e^t\sin(2\pi x).$$
Hence, the exact initial condition is given by $v(x) = \sin(2\pi x)$. The reconstruction result and the exact initial condition $v$ for various positions of observations are shown in Figure 3.2.
Figure 3.2: Example 2: Reconstruction results for (a) 3 uniform observation points in $(0,0.5)$, error in $L^2$-norm $= 0.006116$; (b) 3 uniform observation points in $(0.5,1)$, error in $L^2$-norm $= 0.006133$; (c) 3 uniform observation points in $(0.25,0.75)$, error in $L^2$-norm $= 0.0060894$; (d) 3 uniform observation points in $\Omega$, error in $L^2$-norm $= 0.0057764$.
The numerical results show that the quality of the reconstruction depends on the positions of the observations and is better when the observations are uniformly distributed in $\Omega$.

Example 3. In this example, the initial condition is chosen to be the same as in Example 1, and the regularization parameter is also $\gamma = 5\times10^{-3}$, but the starting time for observation is taken to be $\tau = 0.01$, $\tau = 0.05$, $\tau = 0.1$ and $\tau = 0.3$. The comparison of reconstruction results is displayed in Figure 3.3.

Figure 3.3: Reconstruction result of Example 3: (a) $\tau = 0.01$; (b) $\tau = 0.05$; (c) $\tau = 0.1$; (d) $\tau = 0.3$.
Table 3.1 shows the relation between the starting point of observation $\tau$ and the error in the $L^2$-norm of the algorithm when the number of observations is $N = 3$ and the regularization parameter is $\gamma = 5\times10^{-3}$.

  $\tau$      Error in $L^2$-norm
  0.01        0.0084225
  0.05        0.0226540
  0.1         0.0388760
  0.15        0.0405540
  0.2         0.0622210
  0.3         0.1015000
  0.5         0.1798100

Table 3.1: Example 3: Behavior of the algorithm with different starting points of observation $\tau$.
We now test the algorithm for nonsmooth initial conditions. In Examples 4 and 5, we choose $v$, set $f = 0$, and solve the direct problem by the Crank–Nicolson method to find an approximation to the exact solution $u$; then we use these data for reconstructing the exact initial condition $v$. Here, we use different mesh sizes for the direct solvers to avoid the "inverse crime".

Example 4. The regularization parameter and the starting time $\tau$ are chosen to be the same as in Example 3.
Figure 3.4: Reconstruction result of Example 4: (a) $\tau = 0.01$; (b) $\tau = 0.05$; (c) $\tau = 0.1$; (d) $\tau = 0.3$.
Example 5. The regularization parameter and the starting time $\tau$ are chosen to be the same as in Example 3.
Figure 3.5: Reconstruction result of Example 5: (a) $\tau = 0.01$; (b) $\tau = 0.05$; (c) $\tau = 0.1$; (d) $\tau = 0.3$.
As said in the Introduction, the operators $l_i$ defined by (3.2) with these weight functions can be regarded as "practical" point-wise observations. We shall test our algorithm for different distributions of $x_i$: they will be taken either in the whole $\Omega$ or in a part of it: $(0,0.5)\times(0,0.5)$, $(0.5,1)\times(0,0.5)$, $(0.5,1)\times(0.5,1)$ or $(0,0.5)\times(0.5,1)$. The number of observations $N$ will be taken either $4$ or $9$.
Example 6. We test the algorithm for the smooth initial condition given by $v(x) = \sin(\pi x_1)\sin(\pi x_2)$ and $u(x,t) = (1-t)v(x)$. The source term $f$ is thus given by
$$
\begin{aligned}
f(x,t) = {}& -v(x)\Big[1 - (1-t)\Big(\pi^2 + 0.5\pi^2\cos(3\pi x_1)\cos(3\pi x_2) - \big(x_1^2+x_2^2+2x_1t+1\big)\Big)\Big]\\
&+ 0.75\pi^2(1-t)^2\big[\sin(3\pi x_1)\cos(3\pi x_2)\cos(\pi x_1)\sin(\pi x_2)\\
&\qquad + \cos(3\pi x_1)\sin(3\pi x_2)\sin(\pi x_1)\cos(\pi x_2)\big].
\end{aligned}
$$
Figure 3.6: Example 6. Reconstruction results: (a) exact initial condition $v$; (b) reconstruction of $v$; (c) point-wise error; (d) the comparison of $v|_{x_1=1/2}$ and its reconstruction.
The algorithm stops after 38 iterations, the computational time is 87.496271 seconds, and the error in the L²-norm is 0.02346. The behavior of the algorithm when the observation regions and the number of observations vary is shown in Tables 3.2 and 3.3. We can see that the more observations we have, the better the reconstruction.

Table 3.2: Example 6: Behavior of the algorithm when the number of observations and the positions of observations vary (N = 4). (Columns: observation region, iterations, error in L²-norm.)

Table 3.3: Example 6: Behavior of the algorithm when the number of observations and the positions of observations vary (N = 9).
Now we test the algorithm for nonsmooth initial conditions. In Examples 7 and 8, we choose v and set f = 0, then use the splitting method to find an approximation to the solution u. After that we use the obtained data to test our algorithm.

The reconstruction results and the exact initial condition v are shown in Figure 3.7. The algorithm stops after 45 iterations and the computational time is 103.394200 seconds.
Figure 3.7: Example 7. Reconstruction results: (a) exact initial condition v; (b) reconstruction of v; (c) point-wise error; (d) comparison of v|x₁=1/2 and its reconstruction.
Example 8. We test the algorithm with the piecewise constant initial condition

v(x) = 1 if 1/4 ≤ x₁ ≤ 3/4 and 1/4 ≤ x₂ ≤ 3/4,
v(x) = 0 otherwise.

The algorithm stops after 100 iterations, the computational time is 109.423362 seconds, and the error in the L²-norm is 0.033148. The reconstruction results and the exact initial condition v are shown in Figure 3.8.
Figure 3.8: Example 8. Reconstruction results: (a) exact initial condition v; (b) reconstruction of v; (c) point-wise error; (d) comparison of v|x₁=1/2 and its reconstruction.
[27] Hào D.N. and Oanh N.T.N., Determination of the initial condition in parabolic equations from integral observations. Inverse Problems in Science and Engineering (to appear), doi: 10.1080/17415977.2016.1229778.
Chapter 4

In this chapter, we study the data assimilation problem of estimating the initial condition v(x) in the Neumann problem (1.7), (1.8), (1.10) from the boundary observation u|Σ = ϕ(ζ, t), (ζ, t) ∈ Σ, by a variational method coupled with a Tikhonov regularization method. It is proved that the regularized functional is Fréchet differentiable and a formula for its gradient is derived via an adjoint problem. The variational problem is first discretized in space variables and the convergence results of the method are proved. The problem is then fully discretized and it is proved that the discretized functional is Fréchet differentiable. A formula for its gradient is derived via a discrete adjoint problem and the conjugate gradient method is applied to numerically solve the discretized variational problem. Some numerical examples are provided to show the efficiency of the proposed algorithm.
∂u/∂t − Σ_{i,j=1}^n ∂/∂xᵢ (aᵢⱼ(x, t) ∂u/∂xⱼ) + b(x, t)u = f in Q,   (4.1)
u|t=0 = v in Ω,   (4.2)
∂u/∂N = g on S,   (4.3)

where

∂u/∂N |S := Σ_{i,j=1}^n (aᵢⱼ(x, t)u_{xⱼ}) cos(ν, xᵢ)|S,

and ν is the outer normal to S.
The solution to this problem is a function u ∈ W(0, T; H¹(Ω)) satisfying Definition 1.1.8. If the conditions (1.1)–(1.6) are satisfied, then Theorem 1.1.2 shows that there exists a unique solution to (4.1)–(4.3).

Data assimilation: Reconstruct the initial condition v(x) in (4.1)–(4.3) from the observations of the solution u on a part of the boundary S. Namely, let Γ ⊂ ∂Ω and denote Σ = Γ × (0, T). Our aim is to reconstruct the initial condition v from the imprecise measurement ϕ ∈ L²(Σ) of the solution u on Σ.

From now on, as in the previous chapters, to emphasize the dependence of the solution on the initial condition, we write u(v) instead of u. Denoting by C the operator taking a function on Q to its trace on Σ, the reconstruction problem reads

Cu(v) = ϕ.   (4.5)
Remark 4.1.1. To see the ill-posedness of the problem, note that if condition 2) of Theorem 1.1.2 is satisfied, then u ∈ H^{1,1}(Q). Hence the operator mapping v to u|Σ is compact from L²(Ω) to L²(Σ). Thus, the problem of reconstructing v from u|Σ is ill-posed.

To characterize the degree of ill-posedness of this problem, denote the solution to the problem (4.1)–(4.3) with v ≡ 0 by ů, and denote the solution to the problem (4.1)–(4.3) with f ≡ 0 and g ≡ 0 by u⁰(v). The operator C mapping v to u⁰(v)|Σ from L²(Ω) to L²(Σ) is linear, and the problem (4.5) is reduced to solving the linear equation Cv = ϕ − ů|Σ. We thus have to analyze the singular values of C. Before doing so, let us introduce the variational method for solving the problem (4.5).
We reformulate the reconstruction problem as the least squares problem of minimizing the functional

J₀(v) = (1/2) ‖u(v) − ϕ‖²_{L²(Σ)}   (4.7)

over L²(Ω). As this minimization problem is unstable, we minimize its Tikhonov regularized counterpart

Jγ(v) = J₀(v) + (γ/2) ‖v − v*‖²_{L²(Ω)}   (4.8)

instead, where γ > 0 is a regularization parameter and v* is an a priori estimate of v, which may be set to zero.
Now we prove that Jγ is Fréchet differentiable and derive a formula for its gradient. In doing so, we introduce the adjoint problem

−∂p/∂t − Σ_{i,j=1}^n ∂/∂xᵢ (aᵢⱼ(x, t) ∂p/∂xⱼ) + b(x, t)p = 0 in Q,
∂p/∂N = (u(v) − ϕ)χΣ on S,   (4.9)
p(x, T) = 0 in Ω,

where χΣ(ξ, t) = 1 if (ξ, t) ∈ Σ and zero otherwise. The solution of this problem is understood in the weak sense of Section 1.2. Since u(v)|Σ − ϕ ∈ L²(Σ), there exists a unique solution in W(0, T; H¹(Ω)) to (4.9).
J₀(v + δv) − J₀(v) = (1/2)‖u(v + δv) − ϕ‖²_{L²(Σ)} − (1/2)‖u(v) − ϕ‖²_{L²(Σ)}
  = (1/2)‖δu(v)‖²_{L²(Σ)} + ⟨δu(v), u(v) − ϕ⟩_{L²(Σ)}
  = ⟨δu(v), u(v) − ϕ⟩_{L²(Σ)} + o(‖δv‖_{L²(Ω)})
  = ∫∫_Σ δu(v) (u(v) − ϕ) ds dt + o(‖δv‖_{L²(Ω)}).

Using Green's formula (1.25) (Theorem 1.2.2) for (4.9) and (4.11), we have

∫∫_Σ δu (u(v) − ϕ) ds dt = ∫_Ω δv p(x, 0) dx.

Hence

J₀(v + δv) − J₀(v) = ∫_Ω δv p(x, 0) dx + o(‖δv‖) = ⟨p(x, 0), δv⟩_{L²(Ω)} + o(‖δv‖_{L²(Ω)}).

From this result, we see that the functional Jγ(v) is also Fréchet differentiable and its gradient ∇Jγ(v) has the form (4.10). The proof is complete.
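The adjoint gradient formula just derived can be checked numerically in the discrete setting: for a matrix C standing in for the observation operator, the gradient of J₀(v) = (1/2)‖Cv − ϕ‖² is Cᵀ(Cv − ϕ), the discrete counterpart of p(x, 0). The following is a minimal sketch with synthetic data (not the thesis code), comparing the adjoint gradient against central finite differences of J₀:

```python
import numpy as np

# Toy discrete analogue: the observation operator is a matrix C, so
# J0(v) = 0.5*||C v - phi||^2 and the adjoint-based gradient is
# C^T (C v - phi), the discrete counterpart of p(x, 0).
rng = np.random.default_rng(0)
n, m = 20, 30
C = rng.standard_normal((m, n))
phi = rng.standard_normal(m)

def J0(v):
    r = C @ v - phi
    return 0.5 * r @ r

def grad_J0(v):
    return C.T @ (C @ v - phi)

v = rng.standard_normal(n)
g = grad_J0(v)

# Central finite differences of J0 should match the adjoint gradient.
eps = 1e-6
g_fd = np.array([(J0(v + eps * e) - J0(v - eps * e)) / (2 * eps)
                 for e in np.eye(n)])
assert np.allclose(g, g_fd, atol=1e-5)
```

This kind of gradient check is a cheap safeguard before plugging an adjoint solver into an optimization loop.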
Now we return to characterizing the degree of ill-posedness of the problem (4.5). Since

J₀(v) = (1/2)‖u(v)|Σ − ϕ‖²_{L²(Σ)} = (1/2)‖Cv − (ϕ − ů|Σ)‖²_{L²(Σ)},

if in this formula we take ϕ = ů|Σ, then J₀(v) = (1/2)‖Cv‖²_{L²(Σ)} and ∇J₀(v) = C*Cv.
Due to Proposition 4.1.1 we have C*Cv = p†(x, 0), where p† is the solution to the adjoint problem

−∂p†/∂t − Σ_{i,j=1}^n ∂/∂xᵢ (aᵢⱼ(x, t) ∂p†/∂xⱼ) + b(x, t)p† = 0 in Q,
∂p†/∂N = u⁰(v)χΣ on S,   (4.13)
p†(x, T) = 0 in Ω.
Thus, for any v ∈ L²(Ω) we can evaluate C*Cv by solving the direct problem (4.1)–(4.3) and the adjoint problem (4.13). However, the explicit form of C*C is not available. As in the previous chapters, when we discretize the problem, the finite-dimensional approximations C_h of C are matrices, and so we can apply Lanczos' algorithm [81] to estimate the eigenvalues of C_h*C_h based on its matrix-vector products C_h*C_h v. We will present some numerical results in Section 4.4.
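A sketch of this eigenvalue estimation, under the assumption that the operator is available only through matrix-vector products: SciPy's `eigsh` implements the Lanczos iteration and accepts a `LinearOperator`, so each product C_h*C_h v can stand for one direct solve followed by one adjoint solve. Here a synthetic matrix with prescribed singular values replaces the actual discretized observation operator:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

# C_h is only available through matrix-vector products (a direct solve
# for C_h v, an adjoint solve for C_h^T w). A random matrix with rapidly
# decaying singular values stands in for the observation operator.
rng = np.random.default_rng(1)
m, n = 200, 100
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -8, n)              # prescribed singular values
C = U @ np.diag(s) @ V.T

def matvec(v):
    # one evaluation of C_h^T (C_h v): direct solve then adjoint solve
    return C.T @ (C @ v)

CtC = LinearOperator((n, n), matvec=matvec)

# Lanczos iteration (eigsh) for the largest eigenvalues of C_h^T C_h;
# their square roots approximate the leading singular values of C_h.
eigvals = eigsh(CtC, k=6, which='LM', return_eigenvectors=False)
sing_vals = np.sqrt(np.sort(eigvals)[::-1])   # close to s[:6]
```

The decay of the computed values gives the practical estimate of the degree of ill-posedness discussed above.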
We note that when Σ ≡ S, Lions [50, pp. 216–219] suggested the following variational method. Consider the two problems

∂u₁/∂t − Σ_{i,j=1}^n ∂/∂xᵢ (aᵢⱼ(x, t) ∂u₁/∂xⱼ) + b(x, t)u₁ = f in Q,   (4.14)
u₁ = h on S,   (4.15)
u₁|t=0 = v in Ω,   (4.16)
and

∂u₂/∂t − Σ_{i,j=1}^n ∂/∂xᵢ (aᵢⱼ(x, t) ∂u₂/∂xⱼ) + b(x, t)u₂ = f in Q,   (4.17)
∂u₂/∂N = g on S,   (4.18)
u₂|t=0 = v in Ω,   (4.19)

and minimize the functional

JL₀(v) = (1/2) ‖u₁(v) − u₂(v)‖²_{L²(Q)}   (4.20)
over L²(Ω). As this variational problem inherits the ill-posed nature of the original problem, we regularize it by minimizing instead

JLγ(v) = JL₀(v) + (γ/2) ‖v − v*‖²_{L²(Ω)}.   (4.21)
In this setting, the solution of the Dirichlet problem (4.14)–(4.16) is understood in a common sense: choose a function Φ ∈ H^{1,1}(Q) such that Φ|S = h; then ũ₁ = u₁ − Φ satisfies a new homogeneous Dirichlet problem with a new right-hand side f̃ and initial condition ṽ. The function ũ₁ ∈ W(0, T; H₀¹(Ω)) is said to be a weak solution to this homogeneous Dirichlet problem if

∫₀ᵀ (ũ₁ₜ, η)_{H⁻¹(Ω), H₀¹(Ω)} dt + ∫∫_Q ( Σ_{i,j=1}^n aᵢⱼ(x, t) (∂ũ₁/∂xᵢ)(∂η/∂xⱼ) + b(x, t)ũ₁η ) dx dt = ∫₀ᵀ ∫_Ω f̃η dx dt   for all η ∈ L²(0, T; H₀¹(Ω)),

and ũ₁|t=0 = ṽ.

If h is regular enough, there exists a unique solution ũ₁ to the homogeneous Dirichlet problem, and thus there exists a unique solution u₁ ∈ W(0, T; H¹(Ω)) to (4.14)–(4.16).
Since u₁ and u₂ belong to W(0, T; H¹(Ω)), we can modify Lions' method as follows: minimize the functional

MJL₀(v) = (λ₁/2) ‖u₁(v) − u₂(v)‖²_{L²(Q)} + (λ₂/2) Σ_{i,j=1}^n ∫∫_Q aᵢⱼ (u₁(v) − u₂(v))_{xᵢ} (u₁(v) − u₂(v))_{xⱼ} dx dt,   (4.23)
where λ₁, λ₂ ≥ 0 are weighting parameters. The gradient of MJL₀ can be expressed via the adjoint problems

−∂p₁/∂t − Σ_{i,j=1}^n ∂/∂xᵢ (aᵢⱼ(x, t) ∂p₁/∂xⱼ) + b(x, t)p₁ = λ₁(u₁(v) − u₂(v)) + λ₂ Σ_{i,j=1}^n (aᵢⱼ (u₁(v) − u₂(v))_{xⱼ})_{xᵢ} in Q,
p₁ = 0 on S,   (4.24)
p₁(x, T) = 0 in Ω,

and

−∂p₂/∂t − Σ_{i,j=1}^n ∂/∂xᵢ (aᵢⱼ(x, t) ∂p₂/∂xⱼ) + b(x, t)p₂ = λ₁(u₁(v) − u₂(v)) + λ₂ Σ_{i,j=1}^n (aᵢⱼ (u₁(v) − u₂(v))_{xⱼ})_{xᵢ} in Q,
∂p₂/∂N = λ₂ ∂(u₁(v) − u₂(v))/∂N on S,   (4.25)
p₂(x, T) = 0 in Ω.
Lemma 4.1.2. The functional MJL₀ is Fréchet differentiable and its gradient can be expressed via the solutions of the adjoint problems (4.24) and (4.25). We do not, however, pursue this modified method further in this thesis.
4.2 Discretization of the variational method in space variables
We now turn to approximating the minimization problem (4.8). Due to the previous Section 4.1,

Jγ(v) = (1/2)‖Cv − (ϕ − ů|Σ)‖²_{L²(Σ)} + (γ/2)‖v − v*‖²_{L²(Ω)}

and

Jγ′(v) = C*(Cv − (ϕ − ů|Σ)) + γ(v − v*) = p(x, 0) + γ(v − v*),

where Cv = u⁰(v)|Σ and p is the solution to the adjoint problem (4.13). Thus, the optimality condition is

C*(Cv − (ϕ − ů|Σ)) + γ(v − v*) = p(x, 0) + γ(v − v*) = 0.   (4.27)
Denote C_h v = û⁰_h|Σ; then ‖Cv − C_h v‖_{L²(Σ)} tends to zero as h tends to zero. The discretized functional is

J^h_γ(v) = (1/2)‖C_h v − (ϕ̂_h − ů_h|Σ)‖²_{L²(Σ)} + (γ/2)‖v̂_h − v̂*_h‖²_{L²(Ω)},

for which we have the first-order optimality condition

C_h*(C_h v − (ϕ̂_h − ů_h|Σ)) + γ(v̂_h − v̂*_h) = 0.   (4.28)
We note that to evaluate C_h* we have to solve the corresponding discretized adjoint problem, but the Neumann condition in the adjoint problem (4.13) does not belong to H^{1,1}(S); therefore p is not in H^{1,1}(Q), hence we do not have the strong convergence of C_h*z to C*z in L²(Ω). However, when we discretize (4.13) we mollify the Neumann data by convolution with Steklov's kernel [40], thereby obtaining new approximate data in H^{1,1}(S). Since the solution of the adjoint problem (4.13) is stable with respect to the data, the solution p̄ of the adjoint problem with mollified data approximates the solution p of (4.13). Now we apply the above finite difference scheme to the adjoint problem with mollified data to get its multi-linear interpolation p̄̂_h such that p̄̂_h → p̄ in L²([0, T], L²(Ω)) and p̄̂_h(t) → p̄(t) weakly in H¹(Ω) for all t ∈ [0, T]. Thus, instead of the adjoint operator C_h*, we have defined an approximation Ĉ_h* of C* for which ‖C*z − Ĉ_h*z‖_{L²(Ω)} tends to zero for all z being multi-linear interpolations on Ω_h. The optimality condition (4.28) is then replaced by

Ĉ_h*(C_h v − (ϕ̂_h − ů_h|Σ)) + γ(v̂_h − v̂*_h) = 0.   (4.29)
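Steklov mollification amounts to convolving the data with a normalized box kernel, (S_h f)(t) = (1/h) ∫_{t−h/2}^{t+h/2} f(s) ds. The following is a minimal discrete sketch on a uniform grid with edge padding (both assumptions of this illustration, not of the thesis scheme):

```python
import numpy as np

def steklov_mollify(f, dt, h):
    """Steklov averaging realized as discrete convolution with a
    normalized box kernel of width h (uniform grid assumed; the data
    are extended by their end values at the boundaries)."""
    w = max(1, int(round(h / dt)))          # kernel width in grid points
    kernel = np.ones(w) / w
    f_ext = np.pad(f, (w // 2, w - 1 - w // 2), mode='edge')
    return np.convolve(f_ext, kernel, mode='valid')

# Noisy boundary data on a uniform time grid
dt = 1e-3
t = np.arange(0, 1, dt)
noisy = np.sin(2 * np.pi * t) \
    + 0.05 * np.random.default_rng(2).standard_normal(t.size)
smooth = steklov_mollify(noisy, dt, h=0.05)
```

The averaging window h trades noise suppression against smoothing bias, which mirrors the role of the mollification parameter in the analysis above.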
Proposition 4.2.1. Let v^γ be the solution of the variational problem (4.27) with γ > 0. Then v̂^γ_h converges to v^γ in L²(Ω) as h tends to zero.
4.3 Full discretization of the variational problem and the conjugate gradient method
In this section, we consider the problem of estimating the discrete initial condition v̄ from the discrete measurement of the solution on the boundary of the domain. The fully discretized objective functional is

J₀^{h,Δt}(v̄) := (1/2) Σ_{m=1}^M Σ_{k∈Γ_h} [u^{k,m}(v̄) − ϕ^{k,m}]².   (4.30)
To minimize (4.30) by the conjugate gradient method, we first calculate the gradient of the objective function J₀^{h,Δt}(v̄); it is given by the following theorem.

Theorem 4.3.1. The gradient of J₀^{h,Δt} at v̄ is given by

∇J₀^{h,Δt}(v̄) = (A⁰)*η⁰,   (4.31)
where w^m = {w^{k,m} := u^{k,m}(v̄ + δv̄) − u^{k,m}(v̄), k ∈ Γ_h} and ψ^m = {ψ^{k,m} := u^{k,m}(v̄) − ϕ^{k,m}, k ∈ Γ_h}, m = 0, 1, …, M. It follows from (1.111) that w is the solution to the problem

w^{m+1} = A^m w^m,  m = 0, …, M − 1,
w⁰ = δv̄.   (4.35)
Taking the inner product of both sides of the m-th equation of (4.35) with an arbitrary vector η^m ∈ R^{N₁×…×N_n} and then summing the results over m = 0, …, M − 1, we obtain

Σ_{m=0}^{M−1} ⟨w^{m+1}, η^m⟩ = Σ_{m=0}^{M−1} ⟨A^m w^m, η^m⟩ = Σ_{m=0}^{M−1} ⟨w^m, (A^m)*η^m⟩.   (4.36)
∗
Here, h·, ·i is the inner produ
t in RN1 ×...×Nn và Am is the adjoint matrix of Am .
Taking the inner product of both sides of the first equation of (4.32) with an arbitrary vector w^{m+1} and summing over m, we obtain

Σ_{m=0}^{M−2} ⟨w^{m+1}, η^m⟩ = Σ_{m=0}^{M−2} ⟨w^{m+1}, (A^{m+1})*η^{m+1}⟩ + Σ_{m=0}^{M−2} ⟨w^{m+1}, ψ^{m+1}⟩
  = Σ_{m=1}^{M−1} ⟨w^m, (A^m)*η^m⟩ + Σ_{m=1}^{M−1} ⟨w^m, ψ^m⟩.   (4.37)
Taking the inner product of both sides of the second equation of (4.32) with the vector w^M, we have

⟨w^M, η^{M−1}⟩ = ⟨w^M, ψ^M⟩.   (4.38)
Adding (4.37) and (4.38), we get

Σ_{m=0}^{M−2} ⟨w^{m+1}, η^m⟩ + ⟨w^M, η^{M−1}⟩ = Σ_{m=1}^{M−1} ⟨w^m, (A^m)*η^m⟩ + Σ_{m=1}^{M−1} ⟨w^m, ψ^m⟩ + ⟨w^M, ψ^M⟩.   (4.39)

Comparing this with (4.36), we obtain

⟨w⁰, (A⁰)*η⁰⟩ = Σ_{m=1}^{M−1} ⟨w^m, ψ^m⟩ + ⟨w^M, ψ^M⟩.

Equivalently,

⟨δv̄, (A⁰)*η⁰⟩ = Σ_{m=1}^M ⟨w^m, ψ^m⟩.   (4.40)
On the other hand, we can prove that Σ_{k∈Γ_h} Σ_{m=1}^M |w^{k,m}|² = o(‖δv̄‖). Hence, it follows from (4.34) and (4.40) that

J₀^{h,Δt}(v̄ + δv̄) − J₀^{h,Δt}(v̄) = ⟨δv̄, (A⁰)*η⁰⟩ + o(‖δv̄‖).   (4.41)

Consequently, J₀^{h,Δt} is differentiable and its gradient has the form (4.31).
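The summation-by-parts identity (4.40) underlying this proof can be checked numerically with random matrices standing in for the transition operators A^m; the backward recursion for η used below is inferred from (4.37)–(4.38):

```python
import numpy as np

# Verify <delta_v, (A^0)^T eta^0> = sum_m <w^m, psi^m> with random data,
# where w follows the forward recursion (4.35) and eta the inferred
# adjoint recursion eta^m = (A^{m+1})^T eta^{m+1} + psi^{m+1},
# with terminal value eta^{M-1} = psi^M.
rng = np.random.default_rng(3)
n, M = 10, 6
A = [rng.standard_normal((n, n)) for _ in range(M)]        # A^0 ... A^{M-1}
psi = [None] + [rng.standard_normal(n) for _ in range(M)]  # psi^1 ... psi^M
delta_v = rng.standard_normal(n)

# forward recursion: w^0 = delta_v, w^{m+1} = A^m w^m
w = [delta_v]
for m in range(M):
    w.append(A[m] @ w[-1])

# backward (adjoint) recursion
eta = [None] * M
eta[M - 1] = psi[M]
for m in range(M - 2, -1, -1):
    eta[m] = A[m + 1].T @ eta[m + 1] + psi[m + 1]

lhs = delta_v @ (A[0].T @ eta[0])
rhs = sum(w[m] @ psi[m] for m in range(1, M + 1))
assert np.isclose(lhs, rhs)
```

The identity holds exactly (up to rounding) for any matrices A^m, which is what makes the discrete gradient formula (4.31) reliable independently of the particular splitting scheme.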
(B^m)* = (E_n − (Δt/4)Λ_n^m)(E_n + (Δt/4)Λ_n^m)⁻¹ ⋯ (E₁ − (Δt/4)Λ₁^m)(E₁ + (Δt/4)Λ₁^m)⁻¹.   (4.43)
The conjugate gradient method for the discretized functional (4.30) consists of the following steps.

Step 1. Choose an initial approximation v⁰ and calculate the residual r̂⁰ = u(v⁰)|Σ − ϕ by solving the splitting scheme (1.111) with v̄ replaced by the initial approximation v⁰, and set k = 0.

Step 2. Calculate the gradient r⁰ = −∇Jγ(v⁰) given in (4.31) by solving the adjoint problem, and set d⁰ = r⁰.

Step 3. Calculate

α⁰ = ‖r⁰‖² / (‖u(d⁰)|Σ‖² + γ‖d⁰‖²),

where u(d⁰) can be calculated from the splitting scheme (1.111) with v̄ replaced by d⁰, and set v¹ = v⁰ + α⁰d⁰.

Step 4. Calculate r^k = −∇Jγ(v^k) and

β^k = ‖r^k‖² / ‖r^{k−1}‖²,

and set d^k = r^k + β^k d^{k−1}.

Step 5. Calculate

α^k = ‖r^k‖² / (‖u(d^k)|Σ‖² + γ‖d^k‖²),

where u(d^k) can be calculated from the splitting scheme (1.111) with v̄ replaced by d^k, and set

v^{k+1} = v^k + α^k d^k.

The numerical simulation of this algorithm will be given in the next section.
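For a linear forward map these steps reduce to the standard conjugate gradient method applied to the regularized normal equations. The sketch below follows the steps above for a generic forward/adjoint pair with v* = 0; `forward` and `adjoint` are hypothetical stand-ins for the splitting scheme (1.111) and the discrete adjoint solve, and a plain matrix is used for the toy test:

```python
import numpy as np

def cg_reconstruct(forward, adjoint, phi, gamma, v0, n_iter=50, tol=1e-10):
    """Conjugate gradient for J_gamma(v) = 0.5*||forward(v) - phi||^2
    + 0.5*gamma*||v||^2 (v* = 0 for simplicity), following the steps
    above. `forward` plays the role of v -> u(v)|_Sigma, `adjoint` the
    role of the adjoint solve that yields the gradient."""
    v = v0.copy()
    r = -(adjoint(forward(v) - phi) + gamma * v)   # r^0 = -grad J_gamma
    d = r.copy()
    rr_old = r @ r
    for _ in range(n_iter):
        Fd = forward(d)
        # step length alpha^k = ||r^k||^2 / (||u(d^k)|_Sigma||^2 + gamma ||d^k||^2)
        alpha = rr_old / (Fd @ Fd + gamma * (d @ d))
        v += alpha * d
        r = -(adjoint(forward(v) - phi) + gamma * v)
        rr_new = r @ r
        if rr_new < tol:
            break
        d = r + (rr_new / rr_old) * d   # beta^k = ||r^k||^2 / ||r^{k-1}||^2
        rr_old = rr_new
    return v

# Toy test: forward map is a matrix, data generated from a known v_true
rng = np.random.default_rng(4)
C = rng.standard_normal((40, 15))
v_true = rng.standard_normal(15)
phi = C @ v_true
v_rec = cg_reconstruct(lambda v: C @ v, lambda w: C.T @ w, phi,
                       gamma=1e-8, v0=np.zeros(15))
```

Recomputing the full gradient at each iteration is wasteful compared with updating the residual in place, but it keeps the sketch aligned one-to-one with the steps listed above.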
4.4 Numerical examples

In this section we present numerical results for the above problems. As in the previous chapters, we will test different kinds of initial conditions: 1) very smooth, 2) continuous but not smooth (the hat function), 3) discontinuous (step functions). The degree of difficulty increases from test 1) to test 3). In the one-dimensional case we will present our numerical calculation of the singular values by Lanczos' algorithm. We consider the problem
u_t − (a u_x)_x = f in Q,
−a u_x(0, t) = ϕ₁(t) in (0, T],
a u_x(1, t) = ϕ₂(t) in (0, T],
u|t=0 = v in Ω,

where the coefficient a = 2xt + x²t + 1. The observations will be taken at x = 0 and x = 1. The noise level is 10⁻².
Example 1. We approximate the singular values for the cases when we increase the coefficient a by factors a₀ = 5 and a₀ = 10. It appears that the larger the coefficient of the equation, the smaller the singular values. This can be seen in Figure 4.1.

Figure 4.1: Approximate singular values for a₀ = 1, a₀ = 5 and a₀ = 10 (logarithmic scale).
Now we present numerical results for different initial conditions as explained above.
Figure 4.2: Examples 2, 3, 4 (1D problem): Reconstruction results for smooth, continuous and discontinuous initial conditions.
As in the one-dimensional case, we choose the initial condition v and set u = v × (1 − t); putting u into the equation gives the boundary data and the right-hand side f. The observation is taken on the whole boundary S and the noise level is set to 10⁻². In all examples we take
Figure 4.3: Example 5: Exact initial condition (left) and its reconstruction (right).
Figure 4.4: Example 5 (continued): Error (left) and the vertical slice of the exact initial condition and its reconstruction along the interval [(0.5, 0), (0.5, 1)] (right).
Figure 4.5: Example 6: Exact initial condition (left) and its reconstruction (right).
Figure 4.6: Example 6 (continued): Error (left) and the slice of the exact initial condition and its reconstruction along the interval [(0.5, 0), (0.5, 1)] (right).
Figure 4.7: Example 7: Exact initial condition (left) and its reconstruction (right).
Figure 4.8: Example 7 (continued): Error (left) and the slice of the exact initial condition and its reconstruction along the interval [(0.5, 0), (0.5, 1)] (right).
In all examples we see that the numerical reconstructions are quite good. However, when the coefficients are large, the ill-posedness of the problem is more severe and the performance of the method deteriorates.

[26] Hào D.N. and Oanh N.T.N., Determination of the initial condition in parabolic equations from boundary observations. Journal of Inverse and Ill-Posed Problems 24(2016), no. 2, 195–220.
Conclusion

In this thesis we study data assimilation problems in heat conduction: reconstructing the initial condition in a heat transfer process from either 1) the observation of the temperature at the final time moment, 2) interior integral observations, which are regarded as interior measurements, or 3) boundary observations. The first problem is new in the sense that the coefficients of the equation describing the heat transfer process depend on time, and up to now there are very few studies devoted to it. The second problem is a new setting for this kind of problem in data assimilation: interior observations are important, but related studies are devoted to the case of pointwise observations, which are not realistic in practice; the use of integral observations is more practical. The third problem is very hard, as the observation is only on the boundary, and up to now there have been very few studies of this case. We reformulate these problems as variational problems aiming at minimizing a misfit functional in the least squares sense. We prove that the functional is Fréchet differentiable and derive a formula for its gradient via an adjoint problem; as a by-product of the method, we propose a very natural and easy method for estimating the degree of ill-posedness of the reconstruction problem. For numerically solving the problems, we discretize the direct and adjoint problems by the splitting finite difference method to obtain the gradient of the discretized variational problems, and then apply the conjugate gradient method to solve them. We note that since the solutions in the thesis are understood in the weak sense, the finite difference method for them is not trivial. With respect to the discretization in space variables, we prove convergence results for the discretization methods. We test our method on computer for various numerical examples.
The author's publications related to the thesis

[1] Nguyen Thi Ngoc Oanh, A splitting method for a backward parabolic equation with time-dependent coefficients, Computers & Mathematics with Applications 65(2013), 17–28.

[2] Dinh Nho Hào and Nguyen Thi Ngoc Oanh, Determination of the initial condition in parabolic equations from integral observations, Inverse Problems in Science and Engineering (to appear), doi: 10.1080/17415977.2016.1229778.

[3] Dinh Nho Hào and Nguyen Thi Ngoc Oanh, Determination of the initial condition in parabolic equations from boundary observations, Journal of Inverse and Ill-Posed Problems 24(2016), no. 2, 195–220.
Bibliography
[1] Agmon S. and Nirenberg L., Properties of solutions of ordinary differential equations
[2] Problems of Mathematical Physics. Russian Academy of Sciences, Institute for Numerical Mathematics, Moscow, 2003. (Russian).
[3] Agoshkov V. I., On some inverse problems for distributed parameter systems, Russ. J. Numer. Anal. Math. Modelling, 18(2003), no. 6, 455–465.
[4℄ Alifanov O.M., Inverse Heat Transfer Problems, Springer, New York, 1994.
[5] Alifanov O.M., Artyukhin E.A., and Rumyantsev S.V., Extreme Methods for Solving Ill-Posed Problems with Applications to Inverse Heat Transfer Problems. Begell House Inc., New York, 1995.
[6] Aubert G. and Kornprobst P., Mathematical Problems in Image Processing, Springer, New York, 2006.
[7] Banks H. T. and Kunisch K., Estimation Techniques for Distributed Parameter Systems, Birkhäuser Boston, Inc., Boston, MA, 1989.
[8] Baumeister J., Stable Solution of Inverse Problems. Friedr. Vieweg & Sohn, Braunschweig, 1987.
[9] Beck J.V., Blackwell B., Clair St.C.R., Inverse Heat Conduction, Ill-Posed Problems, Wiley, New York, 1985.
[10] Bengtsson L., Ghil M., and Källén E., Dynamic Meteorology: Data Assimilation Methods. Springer-Verlag, New York, 1981.
[11] Bennett A.F., Inverse Methods in Physical Oceanography, Cambridge University Press, Cambridge, 1992.
[12] Boussetila N. and Rebbani F., Optimal regularization method for ill-posed Cauchy problems
[13] Bulychëv E.V., Glasko V.B. and Fëdorov S.M., Reconstruction of an initial temperature from its measurements on a surface. Zh. Vychisl. Mat. i Mat. Fiz. 23(1983), 1410–1416. (Russian)
[14] Chavent G., Nonlinear Least Squares for Inverse Problems. Theoretical Foundations and Step-by-Step Guide for Applications, Springer, New York, 2009.
[15] with the direct and adjoint shallow water equations. Tellus 42A(1990), 531–549.
[16] with the adjoint vorticity equations, Part II: Numerical results. Quart. J. Roy. Meteor. Soc. 113(1987), 1329–1347.
[17] Courtier P., Derber J., Errico R.M., Louis J.F. and Vukicevic T., Review of the use, 343–357.
[18] Du N.V., Parabolic Equations Backwards in Time. PhD Thesis, Vinh University, 2011. (Vietnamese).
[19] Engl H.W., Hanke M. and Neubauer A., Regularization of Inverse Problems. Kluwer Academic Publishers, Dordrecht, Boston, London, 1996.
[20] Hadamard J., Lectures on Cauchy's Problem in Linear Partial Differential Equations, Yale University Press, New Haven, 1923.
[21] Hào D.N., A noncharacteristic Cauchy problem for linear parabolic equations II: A
[22] Hào D.N., A noncharacteristic Cauchy problem for linear parabolic equations III: A variational method and its approximation schemes. Numer. Funct. Anal. Optim. 13(5&6)(1992), 565–583.
[23] Hào D.N., A mollification method for ill-posed problems, Numer. Math., 68(1994), 469–506.
[24] Hào D.N., Methods for Inverse Heat Conduction Problems. Peter Lang Verlag, Frankfurt/Main.
[25] Hào D.N. and Du N.V., Stability results for backward parabolic equations with time-
[26] Hào D.N. and Oanh N.T.N., Determination of the initial condition in parabolic equations from boundary observations. Journal of Inverse and Ill-Posed Problems 24(2016), no. 2, 195–220.
[27] Hào D.N. and Oanh N.T.N., Determination of the initial condition in parabolic equations from integral observations. Inverse Problems in Science and Engineering (to appear), doi: 10.1080/17415977.2016.1229778.
[28] Hào D.N., Thanh P.X., Lesnic D. and Johansson B.T., A boundary element method for a multi-dimensional inverse heat conduction problem. Int. J. Comput. Math. 89(2012), 1540–1554.
[29] Hào D.N., Thành N.T., and Sahli H., Splitting-based gradient method for multi-dimensional inverse conduction problems. J. Comput. Appl. Math., 232(2009), 361–377.
[30] Hinze M., A variational discretization concept in control constrained optimization:
[31] Isakov V., Inverse Problems for Partial Differential Equations. Second edition. Springer, New York, 2006.
[32] Ivanov V.K., On linear problems which are not well-posed. Dokl. Akad. Nauk SSSR 145(1962), no. 2, 270–272. (Russian).
[33] Ivanov V.K., Vasin V.V., and Tanana V.P., Theory of Linear Ill-Posed Problems and its Applications. VSP, Utrecht, 2002.
[34] John F., Numerical solution of the equation of heat conduction for preceding times.
[35] Jovanović B.S. and Süli E., Analysis of Finite Difference Schemes. For Linear Partial Differential Equations with Generalized Solutions. Springer, London, 2014.
[36] Kabanikhin S. I., Inverse and Ill-Posed Problems. Theory and Applications. De Gruyter, Berlin, 2012.
[37] Kalnay E., Atmospheric Modeling, Data Assimilation and Predictability. Cambridge University Press, Cambridge, 2003.
[38] Klibanov M.V., Estimates of initial conditions of parabolic equations and inequalities
[39] Klibanov M.V. and Tikhonravov A.V., Estimates of initial conditions of parabolic equations and inequalities in infinite domains via lateral Cauchy data. J. Differential Equations 237(2007), 198–224.
[41] Ladyzhenskaya O. A., Solonnikov V. A. and Ural'tseva N. N., Linear and Quasilinear Equations of Parabolic Type. American Mathematical Society, 1968.
[42] Lavrent'ev M.M., On Cauchy's problem for Laplace's equation. Dokl. Akad. Nauk SSSR 102(1955), no. 2, 205–206. (Russian).
[43] Lavrent'ev M.M., Integral equations of the first kind. Dokl. Akad. Nauk SSSR 127(1959), no. 1, 31–33. (Russian).
[44] Lavrent'ev M.M., Ill-Posed Problems of Mathematical Physics. Siberian Branch of the
[45] Lavrent'ev M. M., Romanov V. G. and Shishatskii G. P., Ill-posed Problems in Mathematical Physics and Analysis. Amer. Math. Soc., Providence, R. I., 1986.
[46] Le Dimet F.-X. and Shutyaev V.P., On Newton methods in data assimilation. Russian J. Numer. Anal. Math. Modelling 15(2000), no. 5, 419–434.
[47] Le Dimet F.-X. and Shutyaev V.P., On data assimilation for quasilinear parabolic
[48] Le Dimet F.-X. and Talagrand O., Variational algorithms for analysis and assimilation
[49] Li J., Yamamoto M. and Zou J., Conditional stability and numerical reconstruction of
[50] Lions J.-L., Optimal Control of Systems Governed by Partial Differential Equations, Springer, Berlin, 1971.
[51] Louis A.K., Inverse und schlecht gestellte Probleme. B.G. Teubner, Stuttgart, 1989.
[52] Lundvall J., Kozlov V. and Weinerfelt P., Iterative methods for data assimilation for
[53] Manselli P. and Miller K., Dimensionality reduction methods for efficient numerical solution, backward in time, of parabolic equations with variable coefficients. SIAM J. Math. Anal. 11(1980), 147–159.
[54] Marchuk G.I., Methods of Numerical Mathematics. Springer-Verlag, New York, 1975.
[55] Marchuk G.I., Mathematical Modeling in the Problem of the Environment, Nauka, Moscow.
[56] Marchuk G.I., Splitting and alternating direction methods. In Ciarlet P. G. and Lions J.-L. (eds.),
[57] Marchuk G., Adjoint Equations and Analysis of Complex Systems. Springer, New York, 1995.
[58] Mizohata S., Unicité du prolongement des solutions pour quelques opérateurs différentiels paraboliques. Mem. Coll. Sci. Univ. Kyoto. Ser. A. Math. 31(1958), 219–239.
[59] Nemirovskii A.S., The regularizing properties of the adjoint gradient method in ill-posed problems. Zh. Vychisl. Mat. i Mat. Fiz. 26(1986), 332–347. Engl. Transl. in
[60] Nocedal J. and Wright S.J., Numerical Optimization. Second edition. Springer, New York, 2006.
[61] Oanh N.T.N., A splitting method for a backward parabolic equation with time-
[62] Oanh N.T.N. and Huong B.V., Determination of a time-dependent term in the right-hand side
[63] Okubo A., Diffusion and Ecological Models: Modern Perspectives, Springer.
[64] Parmuzin E.I., Le Dimet F.-X. and Shutyaev V.P., On error analysis in variational data
[65] Parmuzin E.I., Shutyaev V.P., Numerical solution of the problem of reconstructing the initial condition for a semilinear parabolic equation. Russian J. Numer. Anal. Math. Modelling 21(2006), no. 4, 375–393.
[66] Parmuzin E.I., Shutyaev V.P., Variational data assimilation for a nonstationary heat conduction problem with nonlinear diffusion. Russian J. Numer. Anal. Math. Modelling 20(2005), no. 1, 81–95.
[67] Parmuzin E.I. and Shutyaev V.P., Numerical algorithms for solving a problem of data assimilation. (Russian) Zh. Vychisl. Mat. Mat. Fiz. 37(1997), no. 7, 816–827.
[68] Payne L., Improperly Posed Problems in Partial Differential Equations. SIAM, Philadelphia, 1975.
[69] Pucci C., Sui problemi di Cauchy non "ben posti". Atti Accad. Naz. Lincei. Rend. Cl. Sci. Fis. Mat. Nat. 18(1955), no. 8, 473–477. (Italian)
[70] Samarskii A.A., Lazarov R.D. and Makarov L., Finite Difference Schemes for Differential Equations with Weak Solutions. Visshaya Shkola Publ., Moscow, 1987. (Russian)
[71] Shutyaev V.P., Control Operators and Iterative Algorithms in Variational Data Assimilation Problems. Nauka, Moscow, 2001. (Russian).
[72] Sun N.Z., Inverse Problems in Groundwater Modeling, Kluwer Acad. Publishers, Dordrecht.
[73] Talagrand O., A study of the dynamics of four-dimensional data assimilation. Tellus 33(1981), 43–60.
[74] with the adjoint vorticity equations, Part I: Theory. Quart. J. Roy. Meteor. Soc. 113(1987), 1311–1328.
[76] Thành N.T., Infrared Thermography for the Detection and Characterization of Buried Objects. PhD thesis, Vrije Universiteit Brussel, Brussels, Belgium, 2007.
[77] Thành N.T., Hào D.N., and Sahli H., Thermal infrared technique for landmine detection, 504.
[78] Tikhonov A.N., On the stability of inverse problems. Doklady Acad. Sci. USSR 39(1943), 176–179.
[79] Tikhonov A.N., On the solution of ill-posed problems and the method of regularization.
[80] Tikhonov A.N. and Arsenin V.Y., Solutions of Ill-Posed Problems, Winston, Washington, 1977.
[81] Trefethen L.N. and Bau D. III, Numerical Linear Algebra, SIAM, Philadelphia, 1997.
[82] Tröltzsch F., Optimal Control of Partial Differential Equations. Graduate Studies in Mathematics, American Mathematical Society, Providence, RI, 2010.
[83] Wloka J., Partial Differential Equations. Cambridge University Press, 1987.
[84] Yanenko N.N., The Method of Fractional Steps. Springer-Verlag, Berlin, Heidelberg, 1971.