
MINISTRY OF EDUCATION AND TRAINING
VIETNAM ACADEMY OF SCIENCE AND TECHNOLOGY

INSTITUTE OF MATHEMATICS

NGUYEN THI NGOC OANH

DATA ASSIMILATION IN HEAT CONDUCTION

THESIS FOR THE DEGREE OF

DOCTOR OF PHILOSOPHY IN MATHEMATICS

HANOI – 2017
MINISTRY OF EDUCATION AND TRAINING
VIETNAM ACADEMY OF SCIENCE AND TECHNOLOGY

INSTITUTE OF MATHEMATICS

NGUYEN THI NGOC OANH

DATA ASSIMILATION IN HEAT CONDUCTION

Speciality: Differential and Integral Equations


Speciality Code: 62 46 01 03

THESIS FOR THE DEGREE OF


DOCTOR OF PHILOSOPHY IN MATHEMATICS

Supervisor: PROF. DR. HABIL. ĐINH NHO HÀO

HANOI – 2017
Acknowledgments

I first learned about inverse and ill-posed problems when I met Professor Đinh Nho Hào in 2007, my final year of bachelor's study. I have been extremely fortunate to have the chance to study under his guidance since then. I am deeply indebted to him not only for his supervision, patience, encouragement and support in my research, but also for his precious advice in life.

I would like to express my special appreciation to Professor Hà Tiến Ngoạn, Professor Nguyễn Minh Trí, Doctor Nguyễn Anh Tú, the other members of the seminar at the Department of Differential Equations, and all friends in Professor Đinh Nho Hào's group seminar for their valuable comments and suggestions on my thesis. I am very grateful to Doctor Nguyễn Trung Thành (Iowa State University) for his kind help with MATLAB programming.

I would like to thank the Institute of Mathematics for providing me with such an excellent study environment.

Furthermore, I would like to thank the leaders of the College of Sciences, Thai Nguyen University, the Dean's board, as well as all of my colleagues at the Faculty of Mathematics and Informatics for their encouragement and support throughout my PhD study.

Last but not least, I could not have finished this work without the constant love and unconditional support of my parents, my parents-in-law, my husband, my little children and my dearest aunt. I would like to express my sincere gratitude to all of them.
Abstract
The problems of reconstructing the initial condition in parabolic equations from the observation at the final time, from interior integral observations, and from boundary observations are studied. We reformulate these inverse problems as variational problems of minimizing appropriate misfit functionals. We prove that these functionals are Fréchet differentiable and derive a formula for their gradient via adjoint problems. The direct problems are first discretized in the space variables by the finite difference method and the variational problems are correspondingly discretized. The convergence of the solution of the discretized variational problems to the solution of the continuous ones is proved. To solve the problems numerically, we further discretize them in time by the splitting method. It is proved that the completely discretized functionals are Fréchet differentiable and the formulas for their gradient are derived via discrete adjoint problems. The problems are then solved by the conjugate gradient method and the numerical algorithms are tested on computer. As a by-product of the variational method, based on Lanczos' algorithm, we suggest a simple method to demonstrate the ill-posedness.
Tóm tắt

The problems of determining the initial condition in parabolic equations from observations at the final time, from interior integral observations, and from boundary observations are studied. We use the variational method to investigate these inverse problems, minimizing suitable misfit functionals. We prove that these functionals are Fréchet differentiable and derive formulas for their gradients via adjoint problems. First, the finite difference method is used to discretize the direct problem and the corresponding adjoint problem in the space variables. We prove the convergence of the solution of the discretized variational problem to the solution of the continuous one. To solve the problem numerically, we further discretize it in time by the splitting method. We also prove that the discretized functionals are Fréchet differentiable and derive formulas for their gradients via the discrete adjoint problems. The conjugate gradient method is then used for the numerical solution, and the algorithms are tested on computer. In addition, as a by-product of the variational method, based on Lanczos' algorithm, we propose a simple method to illustrate the ill-posedness of the problem.
Declaration

This work has been completed at the Institute of Mathematics, Vietnam Academy of Science and Technology, under the supervision of Prof. Dr. Habil. Đinh Nho Hào. I hereby declare that the results presented in it are new and have never been published elsewhere.

Author: Nguyen Thi Ngoc Oanh
List of Figures

2.1 Example 1: Singular values.
2.2 Example 2: Reconstruction results: (a) exact function v; (b) estimated one; (c) point-wise error; (d) the comparison of v|_{x_1=1/2} and its reconstruction (the dashed curve: the exact function; the solid curve: the estimated function).
2.3 Example 3: Reconstruction result: (a) exact function v; (b) estimated one; (c) point-wise error; (d) the comparison of v|_{x_1=1/2} and its reconstruction (the dashed curve: the exact function; the solid curve: the estimated function).
2.4 Example 4: Reconstruction result: (a) exact function v; (b) estimated one; (c) point-wise error; (d) the comparison of v|_{x_1=1/2} and its reconstruction (the dashed curve: the exact function; the solid curve: the estimated function).
2.5 Example 5: Reconstruction result: (a) exact function v; (b) estimated one; (c) point-wise error; (d) the comparison of v|_{x_1=1/2} and its reconstruction (the dashed curve: the exact function; the solid curve: the estimated function).
3.1 Example 1. Singular values: three observations and various time intervals of observations.
3.2 Example 2: Reconstruction results for (a) 3 uniform observation points in (0, 0.5), error in L^2-norm = 0.006116; (b) 3 uniform observation points in (0.5, 1), error in L^2-norm = 0.006133; (c) 3 uniform observation points in (0.25, 0.75), error in L^2-norm = 0.0060894; (d) 3 uniform observation points in Ω, error in L^2-norm = 0.0057764.
3.3 Reconstruction result of Example 3: (a) τ = 0.01; (b) τ = 0.05; (c) τ = 0.1; (d) τ = 0.3.
3.4 Reconstruction result of Example 4: (a) τ = 0.01; (b) τ = 0.05; (c) τ = 0.1; (d) τ = 0.3.
3.5 Reconstruction result of Example 5: (a) τ = 0.01; (b) τ = 0.05; (c) τ = 0.1; (d) τ = 0.3.
3.6 Example 6. Reconstruction results: (a) exact initial condition v; (b) reconstruction of v; (c) point-wise error; (d) the comparison of v|_{x_1=1/2} and its reconstruction.
3.7 Example 7. Reconstruction results: (a) exact initial condition v; (b) reconstruction of v; (c) point-wise error; (d) the comparison of v|_{x_1=1/2} and its reconstruction.
3.8 Example 8. Reconstruction results: (a) exact initial condition v; (b) reconstruction of v; (c) point-wise error; (d) the comparison of v|_{x_1=1/2} and its reconstruction.
4.1 Example 1: Singular values for the 1D problem.
4.2 Examples 2, 3, 4: 1D problem: reconstruction results for smooth, continuous and discontinuous initial conditions.
4.3 Example 5: Exact initial condition (left) and its reconstruction (right).
4.4 Example 5 (continued): Error (left) and the vertical slice of the exact initial condition and its reconstruction along the interval [(0.5, 0), (0.5, 1)] (right).
4.5 Example 6: Exact initial condition (left) and its reconstruction (right).
4.6 Example 6 (continued): Error (left) and the slice of the exact initial condition and its reconstruction along the interval [(0.5, 0), (0.5, 1)] (right).
4.7 Example 7: Exact initial condition (left) and its reconstruction (right).
4.8 Example 7 (continued): Error (left) and the slice of the exact initial condition and its reconstruction along the interval [(0.5, 0), (0.5, 1)] (right).
List of Tables

3.1 Example 3: Behavior of the algorithm with different starting points of observation τ.
3.2 Example 6: Behavior of the algorithm when the number of observations and the positions of observations vary (N = 4).
3.3 Example 6: Behavior of the algorithm when the number of observations and the positions of observations vary (N = 9).
List of Notations

R^n: the n-dimensional Euclidean space;
Ω: open set in R^n;
∂Ω: the boundary of Ω;
Γ: a subset of ∂Ω;
Q = Ω × (0, T);
S = ∂Ω × (0, T);
Σ = Γ × (0, T);
C^1(Q): space of continuously differentiable functions on Q;
L^2(Ω), L^2(S), L^2(Q): spaces of measurable, square integrable functions in Ω (resp. on S, Q);
H^1(Ω) = {u(x) ∈ L^2(Ω) : ∂u/∂x_i ∈ L^2(Ω), i = 1, ..., n};
H^{1,0}(Q) = {u(x, t) ∈ L^2(Q) : ∂u/∂x_i ∈ L^2(Q), i = 1, ..., n};
H^{1,1}(Q) = {u(x, t) ∈ L^2(Q) : ∂u/∂x_i ∈ L^2(Q), i = 1, ..., n, and ∂u/∂t ∈ L^2(Q)};
H^1_0(Ω) = {u(x) ∈ H^1(Ω) : u|_{∂Ω} = 0};
H^{1,0}_0(Q) = {u(x, t) ∈ H^{1,0}(Q) : u|_S = 0};
H^{1,1}_0(Q) = {u(x, t) ∈ H^{1,1}(Q) : u|_S = 0};
L^∞(Ω), L^∞(S), L^∞(Q): spaces of measurable, almost everywhere bounded functions in Ω (resp. on S, Q);
(H^1(Ω))′: dual space of H^1(Ω);
(H^1_0(Ω))′: dual space of H^1_0(Ω);
L^2(0, T; H^1(Ω)) = {u : u(t) ∈ H^1(Ω) a.e. t ∈ (0, T) and ‖u‖_{L^2(0,T;H^1(Ω))} < ∞};
L^2(0, T; (H^1(Ω))′) = {u : u(t) ∈ (H^1(Ω))′ a.e. t ∈ (0, T) and ‖u‖_{L^2(0,T;(H^1(Ω))′)} < ∞};
L^2(0, T; H^1_0(Ω)) = {u : u(t) ∈ H^1_0(Ω) a.e. t ∈ (0, T) and ‖u‖_{L^2(0,T;H^1_0(Ω))} < ∞};
L^2(0, T; (H^1_0(Ω))′) = {u : u(t) ∈ (H^1_0(Ω))′ a.e. t ∈ (0, T) and ‖u‖_{L^2(0,T;(H^1_0(Ω))′)} < ∞};
W(0, T; H^1(Ω)) = {u : u ∈ L^2(0, T; H^1(Ω)), u_t ∈ L^2(0, T; (H^1(Ω))′)};
W(0, T; H^1_0(Ω)) = {u : u ∈ L^2(0, T; H^1_0(Ω)), u_t ∈ L^2(0, T; (H^1_0(Ω))′)};
N_i: number of intervals in the x_i-direction;
h_i: grid size in the x_i-direction;
k = (k_1, ..., k_n): multi-index of a grid point;
x^k = (x_1^{k_1}, ..., x_n^{k_n}): grid vertices;
h = (h_1, ..., h_n): vector of spatial grid sizes;
Δh = h_1 ··· h_n;
e_i: the unit vector in the x_i-direction;
ω(k) = {x ∈ Ω : (k_i − 0.5)h_i ≤ x_i ≤ (k_i + 0.5)h_i};
Ω̄_h = {k = (k_1, ..., k_n) : 0 ≤ k_i ≤ N_i};
Ω_h = {k = (k_1, ..., k_n) : 1 ≤ k_i ≤ N_i − 1};
Ω_h^i = {k = (k_1, ..., k_n) : 0 ≤ k_i ≤ N_i − 1, 1 ≤ k_j ≤ N_j − 1, ∀j ≠ i};
Π_h = Ω̄_h \ Ω_h;
Π_h^{il} = {k = (k_1, k_2, ..., k_n) : k_i = 0}, i = 1, ..., n;
Π_h^{ir} = {k = (k_1, k_2, ..., k_n) : k_i = N_i}, i = 1, ..., n;
u_{x_i}^k = (u^{k+e_i} − u^k)/h_i: forward difference quotient;
CG: conjugate gradient method;
∇J: gradient of J.
Contents

Abstract
Tóm tắt
List of Figures
List of Tables
List of Notations
Introduction

Chapter 1 Auxiliary results
  1.1 Direct problem
  1.2 Adjoint problem
  1.3 Finite difference method for one-dimensional direct problems
    1.3.1. Discretization in the space variable
    1.3.2. Discretization in time
  1.4 Finite difference method for multi-dimensional direct problems
    1.4.1. Interpolations of grid functions
    1.4.2. Discretization in space variables and the convergence of the finite difference scheme
    1.4.3. Discretization in time and splitting difference scheme
  1.5 Approximation of the variational problems
  1.6 Lanczos' algorithm for approximating singular values

Chapter 2 Data assimilation by the final time observations
  2.1 Problem setting and the variational method
  2.2 Discretization of the variational problem in space variable
  2.3 Full discretization of the variational problem
    2.3.1. The gradient of the objective functional
    2.3.2. Conjugate gradient method
  2.4 Numerical results
    2.4.1. Approximations of the singular values
    2.4.2. Numerical examples for two-dimensional problems

Chapter 3 Data assimilation by the integral observations
  3.1 Problem setting and the variational method
  3.2 Discretization of the variational problem in space variable
  3.3 Full discretization of the variational problem
  3.4 Numerical example
    3.4.1. One-dimensional numerical examples
    3.4.2. Two-dimensional numerical examples

Chapter 4 Data assimilation by the boundary observations
  4.1 Problem setting and the variational method
  4.2 Discretization of the variational method in space variables
  4.3 Full discretization of the variational problem and the conjugate gradient method
  4.4 Numerical example
    4.4.1. Numerical example in the one-dimensional case
    4.4.2. Numerical example in the multi-dimensional case

Conclusion
The author's publications related to the thesis
Bibliography
Introduction

The prediction of an evolution process requires its initial condition, which is unfortunately not always available or precisely given in practice. Data assimilation is the process of reconstructing model initial conditions from measured observations and the first-guess field in combination with the dynamical system. Data assimilation is extensively used in meteorology, oceanography, weather forecasting [2, 3, 45, 66, 71], environmental pollution [8, 55, 63], image processing [6], industrial production [4, 5, 9, 24], etc. For surveys of methods in data assimilation, we refer to [2, 10, 11, 15, 16, 17, 37, 46, 47, 48, 57, 64, 65, 66, 67, 71, 73, 74, 75] and the references therein.

Suppose that the process under consideration is modelled by a system of evolution equations

  dU/dt + AU = F,   (0.1)

where U is the vector of state variables that we want to "predict", A is an elliptic operator in the space variables and F is the vector of exterior forces acting on the system. The goal of prediction is to find a good approximation to U during a period of time of length T. The problem we face is that we do not know the initial data for U before some time moment T_0, which is needed to compute the solution of the prediction model from T_0 on. However, we can observe (measure) U somehow, say, through CU with C a linear operator. The data assimilation problem is to determine an approximation of the initial condition at a time before T_0 from the measurements and then to use it to solve the above system for prediction. This problem is unfortunately ill-posed. A problem is said to be well-posed in the sense of Hadamard if the following conditions are satisfied [20]: i) Existence: there is a solution of the problem; ii) Uniqueness: the solution is unique; iii) Stability: the solution depends continuously on the data (in some appropriate topologies). If at least one of these conditions is not fulfilled, the problem is said to be ill-posed (or improperly posed). Hadamard thought that such problems have no physical meaning. However, many important problems in practice are ill-posed, and a lot of works have been devoted to their study (see [2, 3, 4, 5, 7, 8, 9, 14, 19, 24, 31, 36, 45, 68, 72, 80] and the references therein). The ill-posedness of a problem causes serious trouble by making classical numerical methods unstable: a small error in the data may cause arbitrarily large errors in the solution. In 1943, Tikhonov A.N. [78] realized that instability results from a lack of information and that, to restore stability, one should impose some a priori condition; he thereby pointed out the possibility of finding stable solutions to ill-posed problems. The importance of ill-posed problems was then recognized by Lavrent'ev M.M. [42, 43, 44, 45], John F. [34], Pucci C. [69] and Ivanov V.K. [32, 33], who can be considered the founders of the theory of ill-posed problems. In 1963, Tikhonov [79, 80] published his celebrated regularization method, and since then inverse and ill-posed problems have become an active branch of mathematical physics and computational science.

This thesis is devoted to data assimilation in heat conduction; namely, it aims at determining the initial condition of the system (0.1) describing heat conduction from three types of observation: 1) observation at the final time moment; 2) interior (integral) observations; 3) boundary observations. We now formulate our problems more precisely.

Let Ω be an open bounded domain in R^n, n ≥ 1, with boundary ∂Ω. Denote Q = Ω × (0, T] with T > 0 given, and S = ∂Ω × (0, T]. Let

  a_ij, i, j ∈ {1, 2, ..., n}, b ∈ L^∞(Q),   (0.2)
  a_ij = a_ji, i, j ∈ {1, 2, ..., n},   (0.3)
  λ‖ξ‖²_{R^n} ≤ Σ_{i,j=1}^n a_ij(x, t)ξ_i ξ_j ≤ λ^{−1}‖ξ‖²_{R^n}, ∀ξ ∈ R^n,   (0.4)
  0 ≤ b(x, t) ≤ μ a.e. in Q,   (0.5)
  v ∈ L²(Ω), g ∈ L²(S), f ∈ L²(Q),   (0.6)
  where λ is a positive constant and μ ≥ 0.   (0.7)

Consider the initial value problem

  ∂u/∂t − Σ_{i,j=1}^n ∂/∂x_i (a_ij(x, t) ∂u/∂x_j) + b(x, t)u = f in Q,   (0.8)
  u|_{t=0} = v in Ω,   (0.9)

with either the Dirichlet boundary condition

  u = 0 on S   (0.10)

or the Neumann boundary condition

  ∂u/∂N = Σ_{i,j=1}^n (a_ij(x, t)u_{x_j}) cos(ν, x_i) = g on S,   (0.11)

where ν is the outer normal vector to S. The systems (0.8)–(0.10) and (0.8), (0.9), (0.11) are examples of the system (0.1).

When the coefficients of equation (0.8), the data v, g and the right-hand side (source) f are given, the problem of determining u(x, t) from the system (0.8)–(0.10) or (0.8), (0.9), (0.11) is called the direct problem. It is proved that there exists a weak solution (the definition of which will be given in Chapter 1) to these problems [24, 82]. The inverse problem (data assimilation) considered in this thesis is that of determining the initial condition v from one of the above three types of observation. Denote the solution to (0.8)–(0.10) or (0.8), (0.9), (0.11) by u(v) to emphasize its dependence on the initial condition v, and suppose that we observe u through Cu(v). The inverse problem is to determine v when Cu(v) is given, say, by z. In other words, we have to solve the equation

  Cu(v) = z.   (0.12)

This problem is ill-posed: as we will see, in our chosen spaces the operator mapping v to Cu(v) is compact. However, characterizing its degree of ill-posedness is not an easy task. Denoting by ů the solution to (0.8)–(0.10) (or (0.8), (0.9), (0.11)) with v ≡ 0, we see that the operator C̄ taking v to C̄v := Cu(v) − Ců is bounded and linear. Thus, instead of studying the equation (0.12), we deal with the linear operator equation

  C̄v = z − Ců.   (0.13)

The asymptotic behavior of the singular values of C̄ (or of the eigenvalues of C̄*C̄) characterizes the ill-posedness of the problem [19]. However, up to now there are very few results on this question for inverse problems for partial differential equations; some characterizations have been obtained, but only in very simple cases [8, 19, 51]. In this thesis, as a by-product of the variational method for finding v, we propose a numerical scheme for estimating the singular values of C̄, which we present below.

To find v, we minimize the misfit functional

  J_0(v) = ½‖Cu(v) − z‖²_H   (0.14)

with respect to v ∈ L²(Ω). However, since this problem is ill-posed, we minimize the regularized functional ([2, 4, 5, 7, 8, 9, 14, 19, 24, 33, 36, 71])

  J_γ(v) = ½‖Cu(v) − z‖²_H + (γ/2)‖v − v*‖²_{L²(Ω)}   (0.15)

instead. Here, v* is an a priori estimate of v, ‖·‖_H is the norm of an appropriate Hilbert space H, and γ > 0 is the regularization parameter. It will be proved that there exists a unique solution to this optimization problem, that the functional J_γ is Fréchet differentiable, and that its gradient can be calculated via an adjoint problem. To solve the problem numerically, we apply the splitting finite difference method to discretize the optimization problems and prove the convergence of the method. We choose the splitting finite difference method because it is easy to code and because it splits multi-dimensional problems into a sequence of one-dimensional problems and is therefore very fast. We note that one could discretize the problems by the finite element method; however, since the coefficients in our problems depend on time, it is easier to use the finite difference method.

We now return to estimating the singular values of C̄. We have ∇J_0(v) = C̄*(C̄v − (z − Ců)), with C̄* the adjoint operator of C̄. If we choose z = Ců, then ∇J_0(v) = C̄*C̄v. Unfortunately, the explicit form of C̄ is usually not available, and even when it is, it is not easy to analyze the asymptotic behaviour of its singular values. However, as we can calculate ∇J_0(v) = C̄*C̄v via the solution of an adjoint problem for any v, we can apply Lanczos' algorithm [81] to estimate the eigenvalues of C̄*C̄.
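In practice this can be organized as follows. Here is a minimal NumPy sketch of Lanczos' algorithm driven by a matrix-free routine `apply_normal_op`, a hypothetical stand-in for the map v ↦ C̄*C̄v (realized in our setting by one direct solve followed by one adjoint solve); the eigenvalues of the resulting tridiagonal matrix approximate those of C̄*C̄, and their square roots approximate the singular values of C̄:

```python
import numpy as np

def lanczos_singular_values(apply_normal_op, dim, m, seed=0):
    """Estimate leading singular values of C from products v -> C*C v.

    apply_normal_op : callable implementing v -> C*C v (one direct solve
                      plus one adjoint solve in our setting)
    dim             : number of grid unknowns representing v
    m               : number of Lanczos steps (size of the tridiagonal matrix)
    """
    rng = np.random.default_rng(seed)
    Q = np.zeros((dim, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)            # beta[j] couples Lanczos vectors j-1 and j
    q = rng.standard_normal(dim)
    Q[:, 0] = q / np.linalg.norm(q)
    steps = m
    for j in range(m):
        w = apply_normal_op(Q[:, j])
        alpha[j] = Q[:, j] @ w
        # full reorthogonalization against all previous Lanczos vectors
        w = w - Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)
        if j + 1 == m or np.linalg.norm(w) < 1e-14:
            steps = j + 1
            break
        beta[j + 1] = np.linalg.norm(w)
        Q[:, j + 1] = w / beta[j + 1]
    T = (np.diag(alpha[:steps]) + np.diag(beta[1:steps], 1)
         + np.diag(beta[1:steps], -1))
    # eigenvalues of T approximate (extreme) eigenvalues of C*C,
    # so their square roots approximate singular values of C
    return np.sqrt(np.clip(np.linalg.eigvalsh(T), 0.0, None))
```

Only products with C̄*C̄ are required, which is exactly what the adjoint-based computation of ∇J_0 provides.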

The content of this thesis is as follows. In Chapter 1 we summarize some basic results for the direct problems, their finite difference approximations and the corresponding convergence results; some standard algorithms, such as the conjugate gradient method and Lanczos' algorithm, are also presented there. We note that, since the solution to the Dirichlet problem (0.8)–(0.10) or to the Neumann problem (0.8), (0.9), (0.11) is understood in the weak sense, the finite difference method for them is complicated and the proof of convergence of the method is not trivial.

The second chapter is devoted to reconstructing the initial condition v in the Dirichlet problem (0.8)–(0.10) from the observation at the final time moment: Cu := u(x, T) = ξ(x). This problem is well known under the name of parabolic equations backward in time; it has many applications in practice, and up to now many papers have been devoted to it (see [4, 5, 6, 8, 12, 18, 24, 31, 45, 68] and the references therein). However, among these works only a few treat the case of time-dependent coefficients [1, 25, 31, 45, 53]. Parabolic equations backward in time are severely ill-posed, as the following simple example shows [19].

Consider the heat equation with homogeneous Dirichlet boundary condition

  u_t(x, t) = u_xx(x, t), x ∈ (0, π), 0 < t ≤ 1,   (0.16)
  u(0, t) = u(π, t) = 0, 0 < t ≤ 1,   (0.17)
  u(x, 0) = v(x) ∈ L²(0, π).   (0.18)

The problem is to reconstruct the initial condition v from u(x, 1) = ξ(x). Using the Fourier expansion of v, we have the representation

  v(x) = Σ_{n=1}^∞ v_n φ_n(x), x ∈ [0, π],   (0.19)

with φ_n(x) = √(2/π) sin(nx) and v_n = √(2/π) ∫_0^π v(τ) sin(nτ)dτ. We easily obtain

  u(x, t) = Σ_{n=1}^∞ v_n e^{−n²t} φ_n(x).

Hence, if we force ξ ∈ L²(0, π), then

  ξ(x) = u(x, 1) = Σ_{n=1}^∞ v_n e^{−n²} φ_n(x) = Σ_{n=1}^∞ ξ_n φ_n(x)

with ξ_n = √(2/π) ∫_0^π ξ(τ) sin(nτ)dτ. Thus,

  v_n = ξ_n e^{n²}, n = 1, 2, ...,

and

  v(x) = √(2/π) Σ_{n=1}^∞ e^{n²} ξ_n sin(nx).   (0.20)

For v ∈ L²(0, π), we must have

  ‖v‖²_{L²(0,π)} = Σ_{n=1}^∞ v_n² = Σ_{n=1}^∞ e^{2n²}|ξ_n|² < ∞.   (0.21)

From (0.20) and (0.21), we see that the problem of reconstructing v from ξ is severely ill-posed. First, a solution v exists only for those functions ξ whose Fourier coefficients ξ_n decrease rapidly as n tends to infinity (much faster than e^{−n²}). Second, a small error in the n-th Fourier coefficient of the data is amplified by the factor e^{n²}. For example, an error of 10^{−8} in the fifth Fourier coefficient ξ_5 of the data ξ induces a large error of about 10³ in the initial temperature.
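The size of this amplification is easy to check numerically; a few lines of NumPy (a toy illustration of the factor e^{n²} in (0.20), not part of the analysis above) make it tangible:

```python
import numpy as np

# factor by which an error in the n-th Fourier coefficient of the data xi
# is amplified in the reconstructed initial condition v, by formula (0.20)
for n in range(1, 6):
    print(n, np.exp(n ** 2))
# 1 2.718..., 2 54.6, 3 8103.1, 4 8.89e6, 5 7.20e10

# a data error of 1e-8 in the 5th coefficient becomes, in v:
print(1e-8 * np.exp(25))   # about 7.2e2, i.e. of the order 10^3
```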

We now return to the problem of reconstructing the initial condition v in the Dirichlet problem (0.8)–(0.10) from the observation at the final time moment: Cu := u(x, T) = ξ(x). We sometimes write u(x, t; v) or u(v) instead of u(x, t) to emphasize the dependence of u on v. Following the general approach stated above, to reconstruct v we minimize the functional

  J_0(v) := ½‖u(·, T; v) − ξ‖²_{L²(Ω)}

with respect to v ∈ L²(Ω). We will see that the operator v → Cu(v) : L²(Ω) → L²(Ω) is compact; hence the problem of solving the equation Cu(v) = ξ is ill-posed, and it follows that the above minimization problem is ill-posed as well. To stabilize it, we minimize the Tikhonov regularized functional

  J_γ(v) := ½‖u(·, T; v) − ξ‖²_{L²(Ω)} + (γ/2)‖v − v*‖²_{L²(Ω)},

with γ > 0 a regularization parameter and v* an approximation of v. We prove that the functional J_γ is Fréchet differentiable and derive the following formula for its gradient (Theorem 2.1.1):

  ∇J_γ(v) = p(x, 0) + γ(v − v*),

with p(x, t) the solution to the adjoint problem

  −∂p/∂t − Σ_{i,j=1}^n ∂/∂x_j (a_ij(x, t) ∂p/∂x_i) + b(x, t)p = 0 in Q,
  p(x, T) = u(x, T; v) − ξ in Ω,
  p(x, t) = 0 on S.
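The origin of this formula can be sketched in a few lines using Green's formula (1.22) of Chapter 1 (a condensed version of the standard variational argument; the full proof is given as Theorem 2.1.1):

```latex
\begin{align*}
&\text{Let } \delta u := u(v+\delta v)-u(v). \text{ By linearity, } \delta u
 \text{ solves (0.8)--(0.10) with } f=0,\ \delta u|_{t=0}=\delta v, \text{ hence}\\
&J_0(v+\delta v)-J_0(v)
   =\int_\Omega\bigl(u(x,T;v)-\xi(x)\bigr)\,\delta u(x,T)\,dx
    +\tfrac12\|\delta u(\cdot,T)\|_{L^2(\Omega)}^2.\\
&\text{Green's formula (1.22) with } a_\Omega=u(\cdot,T;v)-\xi,\ a_Q=0,\
 b_\Omega=\delta v,\ b_Q=0 \text{ gives}\\
&\int_\Omega\bigl(u(\cdot,T;v)-\xi\bigr)\,\delta u(\cdot,T)\,dx
   =\int_\Omega p(x,0)\,\delta v(x)\,dx,\\
&\text{so } \nabla J_0(v)=p(\cdot,0) \text{ and }
 \nabla J_\gamma(v)=p(\cdot,0)+\gamma\,(v-v^*).
\end{align*}
```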

The optimization problem is then discretized by the finite difference method in the space variables, and it is proved that the solution of the discretized optimization problem converges (weakly or strongly, depending on the smoothness of the data) to the solution of the continuous one. We further discretize it in time using the splitting method, which splits multi-dimensional problems into a sequence of one-dimensional ones. We derive a formula for the gradient of the fully discretized functional via an adjoint problem and then apply the conjugate gradient method to solve it numerically. The algorithm is tested on several benchmark examples to show the efficiency of our approach. We also apply Lanczos' algorithm to estimate the singular values of the discretized version of the operator C̄. Here, as above, C̄ is the linear operator taking v to C̄v := Cu(v) − Ců, with ů the solution to the Dirichlet problem (0.8)–(0.10) with the homogeneous initial condition.
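To fix ideas, the following sketch shows the shape of such a reconstruction loop, assuming discretized solvers `solve_direct` and `solve_adjoint` (hypothetical names) and using the generic Fletcher–Reeves update rather than the precise iteration of Section 2.3.2:

```python
import numpy as np

def reconstruct_initial_condition(solve_direct, solve_adjoint, xi, v0,
                                  gamma, v_star, n_iter=50):
    """Sketch of minimizing
        J_gamma(v) = 0.5*||u(.,T;v) - xi||^2 + 0.5*gamma*||v - v_star||^2
    by a Fletcher-Reeves conjugate gradient iteration.

    solve_direct(v)  : returns u(.,T;v) on the grid (direct problem)
    solve_adjoint(r) : returns p(.,0) for adjoint terminal data r
    """
    u0 = solve_direct(np.zeros_like(v0))   # contribution of f and g to u(.,T)
    v = v0.copy()
    grad = solve_adjoint(solve_direct(v) - xi) + gamma * (v - v_star)
    d = -grad
    for _ in range(n_iter):
        Ad = solve_direct(d) - u0          # linear part of v -> u(.,T;v)
        # J_gamma is quadratic in v, so the step length along d is exact:
        alpha = -(grad @ d) / (Ad @ Ad + gamma * (d @ d) + 1e-30)
        v = v + alpha * d
        grad_new = solve_adjoint(solve_direct(v) - xi) + gamma * (v - v_star)
        beta = (grad_new @ grad_new) / (grad @ grad + 1e-30)  # Fletcher-Reeves
        d = -grad_new + beta * d
        grad = grad_new
    return v
```

Since J_γ is quadratic in v, the step length along each direction can be computed exactly at the cost of one extra direct solve, which is what the sketch does.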

The third chapter studies the reconstruction of the initial condition v in (0.8)–(0.10) from N integral observations

  l_i u = h_i(t), t ∈ (τ, T), τ ≥ 0, i = 1, 2, ..., N,

where ω_i ∈ L¹(Ω), i = 1, 2, ..., N, are non-negative weight functions with ∫_Ω ω_i(x)dx > 0 and

  l_i u(x, t) = ∫_Ω ω_i(x)u(x, t)dx = h_i(t), t ∈ (τ, T), i = 1, ..., N.   (0.22)

Let us discuss the observations (0.22). First, any measurement is an averaging process, that is, of the form (0.22). Second, this kind of observation is a generalization of point observations. Indeed, let x_i ∈ Ω, i = 1, ..., N, be given, let S_i be a neighbourhood of x_i, and choose ω_i(x) as

  ω_i(x) = 1/|S_i| if x ∈ S_i, and ω_i(x) = 0 otherwise,   (0.23)

where |S_i| is the volume of S_i. The quantity |S_i| plays the role of the width of the measuring instrument, and letting |S_i| tend to zero we recover the point observation. It should also be noted that, since the solution to (0.8)–(0.10) is understood in the weak sense, its value at a particular point does not always have a meaning, but it does in the above averaged sense. Third, with this kind of observation the data need not be available on the whole space domain or at all times. Thus, our problem setting is new and more practical than the related ones, which require either 1) the knowledge of u(x, T) on the whole spatial domain Ω, which is hardly realizable in practice (see the recent survey [12], or [25, 61] and the references therein), or 2) measurements of u in ω × (τ, T), where ω is a subdomain of Ω and τ > 0 is a constant [49].
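On a grid, an observation of the form (0.22) with the weight (0.23) is simply an average of the computed values of u over the cells covering S_i. A minimal one-dimensional sketch (the helper name and the box neighbourhood are illustrative assumptions):

```python
import numpy as np

def integral_observation(u, x, center, delta):
    """Approximate l_i u = (1/|S_i|) * integral of u over S_i for the
    weight (0.23), S_i = (center - delta, center + delta), rectangle rule."""
    h = x[1] - x[0]                      # uniform grid size
    mask = np.abs(x - center) <= delta   # grid points lying in S_i
    return h * u[mask].sum() / (2 * delta)

# shrinking |S_i| recovers (approximately) the point value u(center):
x = np.linspace(0.0, 1.0, 1001)
u = np.sin(np.pi * x)
for delta in (0.2, 0.05, 0.01):
    print(delta, integral_observation(u, x, 0.5, delta))   # -> 1.0 = u(0.5)
```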

As in Chapter 2, to reconstruct v from the observations (0.22), we minimize the Tikhonov functional

  J_γ(v) = ½ Σ_{i=1}^N ‖l_i u(v) − h_i‖²_{L²(τ,T)} + (γ/2)‖v − v*‖²_{L²(Ω)}   (0.24)

with respect to v ∈ L²(Ω), with γ > 0 the regularization parameter and v* an estimate of v. We approach this problem as in Chapter 2 and obtain similar results.

The last chapter is devoted to the case of observations on the boundary. We suppose that our evolution system is generated by the Neumann problem (0.8), (0.9), (0.11). The inverse problem we study is to reconstruct the initial condition v in (0.9) when the solution u is given on a part of the boundary S. Namely, let Γ ⊂ ∂Ω and denote Σ = Γ × (0, T). Our aim is to reconstruct the initial condition v from an imprecise measurement ϕ of the solution u on Σ:

  ‖u|_Σ − ϕ‖_{L²(Σ)} ≤ ε.   (0.25)

The uniqueness of the inverse problem follows from the theory of the Cauchy problem for parabolic equations [24, 31, 45, 58]. Recently, Klibanov proved some stability estimates for this inverse problem [38, 39]. Unfortunately, up to now there are very few studies on numerical methods for this problem; the work by Bulychëv et al. [13] is the only reference on this aspect that we have found. Thus, the solution method proposed in this chapter is a new contribution to the field. We note that this problem has its root in the inverse heat conduction problem (IHCP), where one determines the surface temperature and heat flux on an inaccessible part of the boundary from the surface temperature and the surface heat flux on the accessible part (see [4, 9, 24, 45]). The typical formulation of the IHCP requires the initial condition to be given [4, 9]. However, as the IHCP can be regarded as a non-characteristic Cauchy problem for parabolic equations, no initial condition is needed [24, 45, 58]; the uniqueness and stability estimates are proved in [24, 31, 45, 58]. Using the same method as in Chapters 2 and 3, we minimize the Tikhonov regularized functional

  J_γ(v) = ½‖u(v) − ϕ‖²_{L²(Σ)} + (γ/2)‖v − v*‖²_{L²(Ω)}   (0.26)

with γ > 0 the regularization parameter and v* a certain estimate of v. We obtain results similar to those of Chapter 2.

For the discretization, we restrict ourselves to the case where Ω is an open parallelepiped in R^n, n = 1, 2, 3. Owing to our experience and computing resources, the numerical tests have been performed for one- and two-dimensional space variables only.

Parts of the thesis have been published in:

1. Nguyen Thi Ngoc Oanh, A splitting method for a backward parabolic equation with time-dependent coefficients, Computers & Mathematics with Applications 65 (2013), 17–28. (Chapter 2)

2. Dinh Nho Hào and Nguyen Thi Ngoc Oanh, Determination of the initial condition in parabolic equations from integral observations, Inverse Problems in Science and Engineering (to appear), doi: 10.1080/17415977.2016.1229778. (Chapter 3)

3. Dinh Nho Hào and Nguyen Thi Ngoc Oanh, Determination of the initial condition in parabolic equations from boundary observations, Journal of Inverse and Ill-Posed Problems 24 (2016), no. 2, 195–220. (Chapters 1 and 4)

and have been presented at:

1. the 8th International Conference "Inverse Problems: Modeling & Simulation", 23–28 May 2016, Ölüdeniz, Fethiye, Turkey;

2. the Mini-workshop on "Analysis and Applications of PDEs", 29 June 2016, Vietnam Institute for Advanced Study in Mathematics;

3. the Vietnam–Korea workshop on selected topics in Mathematics, 20–24 February 2017, Da Nang, Vietnam;

4. the 12th Workshop on Optimization and Scientific Computing, 23–25 April 2014, Ba Vi, Vietnam;

5. the 13th Workshop on Optimization and Scientific Computing, 23–25 April 2015, Ba Vi, Vietnam;

6. the PhD Student Conference, Institute of Mathematics, Vietnam Academy of Science and Technology, 2011, 2012, 2013, 2014;

7. the Seminar at the Department of Differential Equations, Institute of Mathematics, Vietnam Academy of Science and Technology.

Chapter 1

Auxiliary results

In this chapter, we introduce some basic notions of Sobolev spaces and present well-posedness results for the Dirichlet and Neumann problems for parabolic equations. The main part of the chapter is devoted to the finite difference method in the space variables and its convergence for the Dirichlet and Neumann problems; this convergence of the finite difference scheme is a new result for the weak solution. The splitting method is suggested and proved to be stable. A variational problem and its discretized version are presented, and a new convergence result for the weak solution is proved. Lanczos' algorithm for approximating the eigenvalues of a matrix is also presented.

1.1 Direct problem

Let Ω be an open bounded domain in R^n, n ≥ 1, with boundary ∂Ω. Denote Q = Ω × (0, T], S = ∂Ω × (0, T]. Let

  a_ij, i, j ∈ {1, 2, ..., n}, b ∈ L^∞(Q),   (1.1)
  a_ij = a_ji, i, j ∈ {1, 2, ..., n},   (1.2)
  λ‖ξ‖²_{R^n} ≤ Σ_{i,j=1}^n a_ij(x, t)ξ_i ξ_j ≤ λ^{−1}‖ξ‖²_{R^n}, ∀ξ ∈ R^n, ∀(x, t) ∈ Q,   (1.3)
  0 ≤ b(x, t) ≤ μ a.e. in Q,   (1.4)
  v ∈ L²(Ω), g ∈ L²(S), f ∈ L²(Q),   (1.5)
  where λ is a positive constant and μ ≥ 0.   (1.6)

Consider the initial value problem

  ∂u/∂t − Σ_{i,j=1}^n ∂/∂x_i (a_ij(x, t) ∂u/∂x_j) + b(x, t)u = f in Q,   (1.7)
  u|_{t=0} = v in Ω,   (1.8)

with either the Dirichlet boundary condition

  u = 0 on S,   (1.9)

or the Neumann boundary condition

  ∂u/∂N = Σ_{i,j=1}^n (a_ij(x, t)u_{x_j}) cos(ν, x_i)|_S = g on S,   (1.10)

where ν is the outer unit normal vector to S.

When the coefficients of (1.7) and the data v, g and f are given, the problem of solving for u(x, t) from the system (1.7)–(1.9) or (1.7), (1.8), (1.10) is called the direct problem [24, 82]. To study these problems, we introduce the following standard Sobolev spaces (see [24, 29, 40, 41, 76, 82, 83]).

Definition 1.1.1. The space H^1(Ω) is the set of all elements u(x) ∈ L²(Ω) having generalized derivatives ∂u/∂x_i ∈ L²(Ω), i = 1, ..., n, with the scalar product

  (u, v)_{H^1(Ω)} := ∫_Ω ( uv + Σ_{i=1}^n (∂u/∂x_i)(∂v/∂x_i) ) dx.

Definition 1.1.2. The space H^1_0(Ω) is the completion of C^1_0(Ω) in the norm of H^1(Ω). In case ∂Ω is smooth, we have

  H^1_0(Ω) = {u ∈ H^1(Ω) : u|_{∂Ω} = 0}.

Definition 1.1.3. The space H^{1,0}(Q) is the set of all elements u(x, t) ∈ L²(Q) having generalized derivatives ∂u/∂x_i ∈ L²(Q), i = 1, ..., n, with the scalar product

  (u, v)_{H^{1,0}(Q)} := ∫∫_Q ( uv + Σ_{i=1}^n (∂u/∂x_i)(∂v/∂x_i) ) dxdt.

Definition 1.1.4. The space H^{1,1}(Q) is the set of all elements u(x, t) ∈ L²(Q) having generalized derivatives ∂u/∂x_i ∈ L²(Q), i = 1, ..., n, and ∂u/∂t ∈ L²(Q), with the scalar product

  (u, v)_{H^{1,1}(Q)} := ∫∫_Q ( uv + Σ_{i=1}^n (∂u/∂x_i)(∂v/∂x_i) + (∂u/∂t)(∂v/∂t) ) dxdt.

Definition 1.1.5. The space H^{1,0}_0(Q) is the set of all elements u(x, t) ∈ H^{1,0}(Q) vanishing on S:

  H^{1,0}_0(Q) = {u ∈ H^{1,0}(Q) : u|_S = 0}.

Definition 1.1.6. The space H^{1,1}_0(Q) is the set of all elements u(x, t) ∈ H^{1,1}(Q) vanishing on S:

  H^{1,1}_0(Q) = {u ∈ H^{1,1}(Q) : u|_S = 0}.

Let B be a Banach space. We define

  L²(0, T; B) = {u : u(t) ∈ B a.e. t ∈ (0, T) and ‖u‖_{L²(0,T;B)} < ∞},

where

  ‖u‖²_{L²(0,T;B)} = ∫_0^T ‖u(t)‖²_B dt.

We also define

  W(0, T; H^1(Ω)) = {u : u ∈ L²(0, T; H^1(Ω)), u_t ∈ L²(0, T; (H^1(Ω))′)},

with the norm

  ‖u‖²_{W(0,T;H^1(Ω))} = ‖u‖²_{L²(0,T;H^1(Ω))} + ‖u_t‖²_{L²(0,T;(H^1(Ω))′)}.

The space W(0, T; H^1_0(Ω)) is defined similarly, with the note that (H^1_0(Ω))′ = H^{−1}(Ω).
The solutions of the Dirichlet problem (1.7)–(1.9) and of the Neumann problem (1.7), (1.8), (1.10) are understood in the weak sense as follows.

Definition 1.1.7. A weak solution in W(0, T; H^1_0(Ω)) of the problem (1.7)–(1.9) is a function u(x, t) ∈ W(0, T; H^1_0(Ω)) satisfying the identity

  ∫_0^T ⟨u_t, η⟩_{H^{−1}(Ω),H^1_0(Ω)} dt + ∫∫_Q [ Σ_{i,j=1}^n a_ij(x, t)(∂u/∂x_j)(∂η/∂x_i) + b(x, t)uη − fη ] dxdt = 0, ∀η ∈ L²(0, T; H^1_0(Ω)),   (1.11)

and

  u|_{t=0} = v in Ω.   (1.12)

Definition 1.1.8. A weak solution in W(0, T; H^1(Ω)) of the problem (1.7), (1.8), (1.10) is a function u(x, t) ∈ W(0, T; H^1(Ω)) satisfying the identity

  ∫_0^T ⟨u_t, η⟩_{(H^1(Ω))′,H^1(Ω)} dt + ∫∫_Q [ Σ_{i,j=1}^n a_ij(x, t)(∂u/∂x_j)(∂η/∂x_i) + b(x, t)uη ] dxdt = ∫∫_Q fη dxdt + ∫∫_S gη dζdt, ∀η ∈ L²(0, T; H^1(Ω)),   (1.13)

and

  u|_{t=0} = v in Ω.   (1.14)

Due to [24, pp. 35–46], [41], [82, pp. 141–152] and [83, Chapter IV], we have the following results on the well-posedness of the Dirichlet and Neumann problems.

Theorem 1.1.1. Let the conditions (1.1)–(1.6) be satisfied. The following statements hold:
1) There exists a unique solution u ∈ W(0, T; H^1_0(Ω)) to the Dirichlet problem (1.7)–(1.9). Furthermore, there exists a positive constant c_D independent of the initial condition v and the right-hand side f (it depends only on a_ij, b and Ω) such that

  ‖u‖_{W(0,T;H^1_0(Ω))} ≤ c_D ( ‖f‖_{L²(Q)} + ‖v‖_{L²(Ω)} ).   (1.15)

2) If v ∈ H^1_0(Ω), a_ij, b ∈ C¹([0, T]; L^∞(Ω)), i, j = 1, ..., n, and there exists a constant μ_1 such that |∂a_ij/∂t|, |∂b/∂t| ≤ μ_1, then u ∈ H^{1,1}_0(Q).

Theorem 1.1.2. Let the conditions (1.1)–(1.6) be satisfied. The following statements hold:
1) There exists a unique solution u ∈ W(0, T; H^1(Ω)) to the Neumann problem (1.7), (1.8), (1.10). Furthermore, there exists a positive constant c_N independent of the initial condition v, the boundary condition g and the right-hand side f (it depends only on a_ij, b and Ω) such that

  ‖u‖_{W(0,T;H^1(Ω))} ≤ c_N ( ‖f‖_{L²(Q)} + ‖g‖_{L²(S)} + ‖v‖_{L²(Ω)} ).   (1.16)

2) If v ∈ H^1(Ω), g ∈ H^{0,1}(S), a_ij, b ∈ C¹([0, T]; L^∞(Ω)), i, j = 1, ..., n, and there exists a constant μ_1 such that |∂a_ij/∂t|, |∂b/∂t| ≤ μ_1, then u ∈ H^{1,1}(Q). In this case, there exists a constant, again denoted by c_N, such that

  ‖u‖_{H^{1,1}(Q)} ≤ c_N ( ‖f‖_{L²(Q)} + ‖g‖_{H^{0,1}(S)} + ‖v‖_{H^1(Ω)} ).   (1.17)

In addition to Definitions 1.1.7 and 1.1.8, we introduce the following definitions of weak solutions in H^{1,0}(Q) to the Dirichlet problem (1.7)–(1.9) and the Neumann problem (1.7), (1.8), (1.10):

Definition 1.1.9. A weak solution in H^{1,0}(Q) to the problem (1.7)–(1.9) is a function u ∈ H^{1,0}_0(Q) satisfying the identity

  ∫∫_Q [ −uη_t + Σ_{i,j=1}^n a_ij(x, t)(∂u/∂x_j)(∂η/∂x_i) + b(x, t)uη ] dxdt = ∫∫_Q fη dxdt + ∫_Ω v(x)η(x, 0)dx, ∀η ∈ H^{1,1}_0(Q) with η(x, T) = 0.   (1.18)

Definition 1.1.10. A weak solution in H^{1,0}(Q) to the problem (1.7), (1.8), (1.10) is a function u ∈ H^{1,0}(Q) satisfying the identity

  ∫∫_Q [ −uη_t + Σ_{i,j=1}^n a_ij(x, t)(∂u/∂x_j)(∂η/∂x_i) + b(x, t)uη ] dxdt = ∫∫_Q fη dxdt + ∫_Ω v(x)η(x, 0)dx + ∫∫_S gη dζdt, ∀η ∈ H^{1,1}(Q) with η(x, T) = 0.   (1.19)

It has been shown that solutions belonging to H^{1,0}(Q) in the two definitions above are in W(0, T; H^1(Ω)) (see [82, §3.4.4, pp. 148–153] and [82, §7.3]).

1.2 Adjoint problem

To study the variational problems for data assimilation in heat conduction, we need some results related to the adjoint problems and Green's formula. The following results can be proved in the same way as in [82, §3.6.1, pp. 156–158].

Consider the adjoint problem to (1.7)–(1.9):

  −p_t − Σ_{i,j=1}^n ∂/∂x_i (a_ij(x, t) ∂p/∂x_j) + bp = a_Q in Q,
  p(ζ, t) = 0 on S,   (1.20)
  p(x, T) = a_Ω in Ω,

where a_Q ∈ L²(Q) and a_Ω ∈ L²(Ω). We define the solution of this problem to be a function p ∈ W(0, T; H^1_0(Ω)) satisfying the variational problem

  ∫_0^T −(p_t, v)_{H^{−1}(Ω),H^1_0(Ω)} dt + ∫∫_Q [ Σ_{i,j=1}^n a_ij (∂p/∂x_j)(∂v/∂x_i) + bpv ] dxdt = ∫∫_Q a_Q v dxdt, ∀v ∈ L²(0, T; H^1_0(Ω)),
  p(T) = a_Ω.

By reversing the time direction and using the result of Theorem 1.1.1, we see that there exists a unique solution p ∈ W(0, T; H^1_0(Ω)), and p also satisfies an a priori inequality similar to (1.15). We have the following result:

Theorem 1.2.1. Suppose that the conditions (1.1)–(1.4) hold. Let y ∈ W(0, T; H^1_0(Ω)) be the solution to the problem

  y_t − Σ_{i,j=1}^n ∂/∂x_i (a_ij(x, t) ∂y/∂x_j) + by = b_Q in Q,
  y = 0 on S,   (1.21)
  y(x, 0) = b_Ω in Ω,

with b_Q ∈ L²(Q) and b_Ω ∈ L²(Ω). Assume that a_Q ∈ L²(Q), a_Ω ∈ L²(Ω) and that p ∈ W(0, T; H^1_0(Ω)) is the weak solution to the adjoint problem (1.20). Then we have Green's formula

  ∫_Ω a_Ω y(·, T)dx + ∫∫_Q a_Q y dxdt = ∫_Ω b_Ω p(·, 0)dx + ∫∫_Q b_Q p dxdt.   (1.22)

Similarly, consider the adjoint problem to the Neumann problem (1.7), (1.8), (1.10):

  −p_t − Σ_{i,j=1}^n ∂/∂x_i (a_ij(x, t) ∂p/∂x_j) + bp = a_Q in Q,
  ∂_N p = a_S on S,   (1.23)
  p(x, T) = a_Ω in Ω,

where a_Q ∈ L²(Q), a_S ∈ L²(S) and a_Ω ∈ L²(Ω). We define the solution to this problem to be a function p ∈ W(0, T; H^1(Ω)) satisfying the variational problem

  ∫_0^T −(p_t, v)_{(H^1(Ω))′,H^1(Ω)} dt + ∫∫_Q [ Σ_{i,j=1}^n a_ij (∂p/∂x_j)(∂v/∂x_i) + bpv ] dxdt = ∫∫_Q a_Q v dxdt + ∫∫_S a_S v dζdt, ∀v ∈ L²(0, T; H^1(Ω)),
  p(T) = a_Ω.

We can also prove that there exists a unique solution p ∈ W(0, T; H^1(Ω)) to this problem and that it satisfies an a priori inequality similar to (1.16).

Theorem 1.2.2. Let y ∈ W(0, T; H^1(Ω)) be the solution to the problem

  y_t − Σ_{i,j=1}^n ∂/∂x_i (a_ij(x, t) ∂y/∂x_j) + by = b_Q in Q,
  ∂_N y = b_S on S,   (1.24)
  y(x, 0) = b_Ω in Ω,

with b_Q ∈ L²(Q), b_S ∈ L²(S) and b_Ω ∈ L²(Ω). Suppose that a_Q ∈ L²(Q), a_S ∈ L²(S), a_Ω ∈ L²(Ω) and that p ∈ W(0, T; H^1(Ω)) is the weak solution to the adjoint problem (1.23). Then we have Green's formula

  ∫_Ω a_Ω y(·, T)dx + ∫∫_Q a_Q y dxdt + ∫∫_S a_S y dζdt = ∫_Ω b_Ω p(·, 0)dx + ∫∫_Q b_Q p dxdt + ∫∫_S b_S p dζdt.   (1.25)

In the next two sections, we present the finite difference method for solving the direct problems. Note that the solutions to the Dirichlet and Neumann problems studied in this thesis are understood in the weak sense; hence the convergence results of the finite difference method for them are not trivial. For clarity of presentation, we describe the finite difference method separately for one-dimensional and multi-dimensional problems.

1.3 Finite difference method for one-dimensional direct problems

In this section, we introduce the finite difference method to approximate weak solutions of the one-dimensional direct problems, using the Crank–Nicolson method.

Let Ω = (0, L), Q = (0, L) × (0, T) and S = {0, L} × (0, T). We subdivide the interval (0, L) into N_x subintervals by the uniform grid

  0 = x_0 < x_1 < ··· < x_{N_x} = L with x_{i+1} − x_i = h = L/N_x.

Denote by u^i(t) (or u^i if there is no confusion) the value of u at x = x_i. We use a similar notation for η.

The one-dimensional Dirichlet problem (1.7)–(1.9) now has the form

  ∂u/∂t − ∂/∂x (a(x, t) ∂u/∂x) + b(x, t)u = f in Q,
  u|_{t=0} = v in Ω,   (1.26)
  u(0, t) = u(L, t) = 0 in (0, T].

The solution to (1.26) is a function u(x, t) ∈ W(0, T; H^1_0(Ω)) satisfying

  ∫_0^T ⟨u_t, η⟩_{H^{−1}(Ω),H^1_0(Ω)} dt + ∫∫_Q [ a(x, t)(∂u/∂x)(∂η/∂x) + b(x, t)uη − fη ] dxdt = 0, ∀η ∈ L²(0, T; H^1_0(Ω)),   (1.27)
  u|_{t=0} = v in Ω.

Similarly, the one-dimensional Neumann problem (1.7), (1.8), (1.10) is now

  ∂u/∂t − ∂/∂x (a(x, t) ∂u/∂x) + b(x, t)u = f in Q,
  u|_{t=0} = v in Ω,   (1.28)
  −a ∂u/∂x|_{x=0} = g(0, t), a ∂u/∂x|_{x=L} = g(L, t) in (0, T].

Here we suppose that g(0, ·) and g(L, ·) are in L²(0, T). The weak solution to the problem (1.28) is a function u(x, t) ∈ W(0, T; H^1(Ω)) satisfying the identity

  ∫_0^T ⟨u_t, η⟩_{(H^1(Ω))′,H^1(Ω)} dt + ∫∫_Q [ a(x, t)(∂u/∂x)(∂η/∂x) + b(x, t)uη ] dxdt
    = ∫∫_Q fη dxdt − ∫_0^T g(0, t)η(0, t)dt + ∫_0^T g(L, t)η(L, t)dt, ∀η ∈ L²(0, T; H^1(Ω)),   (1.29)
  u|_{t=0} = v in Ω.

1.3.1. Discretization in the space variable

We approximate the integrals in the first equations of the systems (1.27) and (1.29) as follows:

  ∫∫_Q (∂u/∂t)η dxdt ≈ h ∫_0^T Σ_{i=0}^{N_x} (du^i(t)/dt) η^i(t) dt,   (1.30)
  ∫∫_Q a(x, t)(∂u/∂x)(∂η/∂x) dxdt ≈ h ∫_0^T Σ_{i=0}^{N_x−1} a^i(t) ((u^{i+1}(t) − u^i(t))/h)((η^{i+1}(t) − η^i(t))/h) dt,   (1.31)
  ∫∫_Q b(x, t)uη dxdt ≈ h ∫_0^T Σ_{i=0}^{N_x} b^i(t)u^i(t)η^i(t) dt,   (1.32)
  ∫∫_Q fη dxdt ≈ h ∫_0^T Σ_{i=0}^{N_x} f^i(t)η^i(t) dt,   (1.33)

where

  a^i(t) = (1/h) ∫_{x_i}^{x_{i+1}} a(x, t)dx,  b^i(t) = (1/h) ∫_{x_i}^{x_{i+1}} b(x, t)dx,  f^i(t) = (1/h) ∫_{x_i}^{x_{i+1}} f(x, t)dx,  i = 0, ..., N_x − 1.   (1.34)
a. The Dirichlet problem (1.26)

Putting the approximations (1.30)–(1.33) into (1.27), we obtain

  ∫_0^T [ h Σ_{i=0}^{N_x} (du^i/dt)η^i + h Σ_{i=0}^{N_x−1} a^i ((u^{i+1} − u^i)/h)((η^{i+1} − η^i)/h) + h Σ_{i=0}^{N_x} b^i u^i η^i − h Σ_{i=0}^{N_x} f^i η^i ] dt = 0,
  u^i(0) = v^i, i = 0, ..., N_x,   (1.35)

with

  v^i = (1/h) ∫_{x_i}^{x_{i+1}} v(x)dx.   (1.36)

Since η in (1.35) is arbitrary, it follows that

  dū(t)/dt + Λū(t) = F̄(t),
  ū(0) = v̄,   (1.37)

where ū(t) = (u^0(t), u^1(t), ..., u^{N_x}(t))′ and v̄ = (v^0, v^1, ..., v^{N_x})′. The coefficient matrix Λ is defined by

  Λ = (1/h²) ×
  [ a^0 + h²b^0    −a^0             0                ···   0                            0
    −a^1_−         2a^1_∗ + h²b^1   −a^1             ···   0                            0
    0              −a^2_−           2a^2_∗ + h²b^2   ···   0                            0
    ···            ···              ···              ···   ···                          ···
    0              0                ···              ···   2a^{N_x−1}_∗ + h²b^{N_x−1}   −a^{N_x−1}
    0              0                0                ···   −a^{N_x}_−                   a^{N_x}_− + h²b^{N_x} ],   (1.38)

where a^i_− = a^{i−1} and a^i_∗ = ½(a^i_− + a^i). The right-hand side is F̄(t) = {f^i(t), i = 0, 1, ..., N_x}.
b. The Neumann problem (1.28)

Similarly to the Dirichlet problem, putting the approximations (1.30)–(1.33) into (1.29), we obtain

  ∫_0^T [ h Σ_{i=0}^{N_x} (du^i/dt)η^i + h Σ_{i=0}^{N_x−1} a^i ((u^{i+1} − u^i)/h)((η^{i+1} − η^i)/h) + h Σ_{i=0}^{N_x} b^i u^i η^i ] dt
    = ∫_0^T [ h Σ_{i=0}^{N_x} f^i η^i − g^0 η^0 + g^{N_x} η^{N_x} ] dt,
  u^i(0) = v^i, i = 0, ..., N_x,   (1.39)

where v^i, a^i and f^i are given by the formulas (1.34) and (1.36). Thus, we get the following system:

  dū(t)/dt + Λū(t) = F̄(t),
  ū(0) = v̄,   (1.40)

where ū(t) = (u^0(t), u^1(t), ..., u^{N_x}(t))′ and v̄ = (v^0, v^1, ..., v^{N_x})′. The coefficient matrix Λ is given by the formula (1.38), and the right-hand side F̄(t) has the components

  F̄^i(t) = f^i for i = 1, ..., N_x − 1,  F̄^0(t) = f^0 − g^0/h,  F̄^{N_x}(t) = f^{N_x} + g^{N_x}/h.   (1.41)
The positive semi-definiteness of Λ in (1.37) and (1.40) is proved as follows.

Lemma 1.3.1. For each t, the coefficient matrix Λ defined in the systems (1.37) and (1.40) is positive semi-definite.

Proof. Put U = (U^0, U^1, ..., U^{N_x})′. It follows from the formula (1.38) that

  (ΛU, U) = (1/h²) Σ_{k=0}^{N_x−1} a^k (U^k − U^{k+1})² + Σ_{k=0}^{N_x} b^k (U^k)² ≥ 0.

Consequently, Λ is positive semi-definite. The proof is complete.

1.3.2. Discretization in time

We now discretize (1.37) and (1.40) in time by the Crank–Nicolson method. We subdivide the interval (0, T) into N_t uniform subintervals by the grid 0 = t_0 < t_1 < ··· < t_{N_t} = T with t_{m+1} − t_m = Δt = T/N_t, m being the time index. Denoting u^m = ū(t_m), Λ^m = Λ(t_m), F^m = F̄(t_m), m = 0, 1, ..., N_t, we discretize (1.37) and (1.40) by ([56])

  (u^{m+1} − u^m)/Δt + Λ^m (u^{m+1} + u^m)/2 = F^{m+1/2},
  u^0 = v̄.   (1.42)

This can be rewritten in the compact form

  u^{m+1} = (E + (Δt/2)Λ^m)^{−1}(E − (Δt/2)Λ^m)u^m + Δt(E + (Δt/2)Λ^m)^{−1}F^{m+1/2},
  u^0 = v̄,   (1.43)

where E is the identity matrix.

Denote by (·, ·) and ‖·‖ the scalar product and the Euclidean norm in the space R^{N_x+1}, respectively. We have the following result on the stability of the finite difference scheme.

Lemma 1.3.2. The scheme (1.43) is stable.

Proof. It follows from the first equation of (1.43) that

  ‖u^{m+1}‖ ≤ ‖(E + (Δt/2)Λ^m)^{−1}(E − (Δt/2)Λ^m)‖ ‖u^m‖ + Δt ‖(E + (Δt/2)Λ^m)^{−1}‖ ‖F^{m+1/2}‖.   (1.44)

On the other hand, since Λ^m is positive semi-definite, it follows from Kellogg's lemma [56, Theorem 2.1, p. 220] that

  ‖(E + (Δt/2)Λ^m)^{−1}(E − (Δt/2)Λ^m)‖ ≤ 1.

Moreover,

  ‖(E + (Δt/2)Λ^m)^{−1}‖² = sup_ϕ ((E + (Δt/2)Λ^m)^{−1}ϕ, (E + (Δt/2)Λ^m)^{−1}ϕ)/(ϕ, ϕ)
    = sup_φ (φ, φ)/((E + (Δt/2)Λ^m)φ, (E + (Δt/2)Λ^m)φ)
    = sup_φ (φ, φ)/((φ, φ) + Δt(Λ^m φ, φ) + (Δt²/4)(Λ^m φ, Λ^m φ)) ≤ 1.

Thus, from (1.44) we obtain

  ‖u^{m+1}‖ ≤ ‖u^m‖ + Δt‖F^{m+1/2}‖,
  ‖u^m‖ ≤ ‖u^{m−1}‖ + Δt‖F^{m−1/2}‖,
  ···
  ‖u^1‖ ≤ ‖u^0‖ + Δt‖F^{1/2}‖.

Putting ‖v̄‖ = ‖u^0‖ and ‖f‖ = max_m ‖F^{m+1/2}‖, we obtain

  ‖u^{m+1}‖ ≤ ‖v̄‖ + (m + 1)Δt‖f‖.   (1.45)

Consequently, the finite difference scheme (1.43) is stable.
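For the reader's convenience, here is a compact NumPy sketch of the marching scheme (1.43), in a toy setting assuming constant coefficients a and b, a time-independent source and homogeneous Dirichlet data; for simplicity it keeps only the interior unknowns, whereas (1.38) is written for the full grid vector with the cell-averaged coefficients (1.34):

```python
import numpy as np

def crank_nicolson_dirichlet(v, a, b, L, T, Nx, Nt, f=None):
    """Crank-Nicolson marching (1.43) for the 1D Dirichlet problem (1.26)
    with constant coefficients a, b and zero boundary values; only the
    interior unknowns u^1, ..., u^{Nx-1} are kept.

    v: initial values at the interior grid points (length Nx-1)
    f: source values at the interior grid points, or None for f = 0
    """
    h, dt = L / Nx, T / Nt
    n = Nx - 1
    # tridiagonal Lambda: (2a + h^2 b)/h^2 on the diagonal, -a/h^2 off it
    Lam = ((2 * a + h**2 * b) / h**2 * np.eye(n)
           - a / h**2 * (np.eye(n, k=1) + np.eye(n, k=-1)))
    A = np.eye(n) + 0.5 * dt * Lam    # E + (dt/2) Lambda
    B = np.eye(n) - 0.5 * dt * Lam    # E - (dt/2) Lambda
    rhs = np.zeros(n) if f is None else f
    u = v.copy()
    for _ in range(Nt):               # one linear solve per step, as in (1.43)
        u = np.linalg.solve(A, B @ u + dt * rhs)
    return u

# sanity check: for u_t = u_xx on (0, 1), sin(pi x) decays like exp(-pi^2 t)
x = np.linspace(0.0, 1.0, 51)[1:-1]
u_T = crank_nicolson_dirichlet(np.sin(np.pi * x), 1.0, 0.0, 1.0, 0.1, 50, 200)
print(u_T.max(), np.exp(-np.pi**2 * 0.1))   # both approximately 0.37
```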

1.4 Finite difference method for multi-dimensional direct problems

We apply the splitting finite difference method to discretize the multi-dimensional (n = 2, 3) direct problems (1.7)–(1.9) or (1.7), (1.8), (1.10). The idea of splitting schemes is to approximate a complex problem by a sequence of simpler ones. The main advantages of splitting schemes are: (i) they are stable regardless of the choice of the spatial and temporal grid sizes, and (ii) the resulting linear systems can be solved easily since they are triangular systems. Using the techniques presented in [40, 54, 56, 84] (see also [26, 27, 29, 61, 62, 76, 77]), we propose a finite difference scheme based on the formulas (1.11) and (1.13) defining the weak solutions. First, we discretize the problem in the space variables and obtain a system of ordinary differential equations with respect to the time variable t; then we discretize the obtained system in time by the splitting method, as sketched below. We restrict ourselves to the case where Ω is an open parallelepiped in R^n, n = 2, 3. Furthermore, we do not consider mixed derivatives in the equation (1.7), although in principle this case can be treated in the same manner [84]. Thus, for ease of notation, we set a_ij = 0 if i ≠ j and denote a_ii = a_i, i = 1, ..., n.
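The flavour of one splitting step can be seen from the following sketch: a generic locally one-dimensional (fractional-step) step for n = 2 with constant coefficients and zero Dirichlet data. This is only an illustration of the idea; the concrete splitting scheme and its weights are those of Section 1.4.3, which this sketch does not reproduce.

```python
import numpy as np

def lod_step(u, a1, a2, h1, h2, dt):
    """One locally one-dimensional time step for
    u_t = a1 u_{x1x1} + a2 u_{x2x2} with zero Dirichlet data:
    an implicit 1D solve in x1, then an implicit 1D solve in x2.
    u holds the interior grid values, shape (n1, n2)."""
    def implicit_sweep(w, a, h, axis):
        n = w.shape[axis]
        lap = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h ** 2
        A = np.eye(n) + dt * a * lap    # backward Euler in one direction
        return np.apply_along_axis(lambda fiber: np.linalg.solve(A, fiber),
                                   axis, w)
    u = implicit_sweep(u, a1, h1, axis=0)     # sweep along x1
    return implicit_sweep(u, a2, h2, axis=1)  # sweep along x2
```

Each sweep amounts to a family of small one-dimensional tridiagonal systems (solved densely above only for brevity), which is the source of the speed advantage mentioned in the paragraph above.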

Before going into details, we rewrite the Dirichlet problem (1.7)–(1.9) without mixed derivatives as follows:

  ∂u/∂t − Σ_{i=1}^n ∂/∂x_i (a_i(x, t) ∂u/∂x_i) + b(x, t)u = f in Q,
  u|_{t=0} = v in Ω,   (1.46)
  u = 0 on S.

The weak solution to the problem (1.46) is a function u(x, t) ∈ W(0, T; H^1_0(Ω)) satisfying the identity

  ∫_0^T ⟨u_t, η⟩_{H^{−1}(Ω),H^1_0(Ω)} dt + ∫∫_Q [ Σ_{i=1}^n a_i(x, t)(∂u/∂x_i)(∂η/∂x_i) + b(x, t)uη − fη ] dxdt = 0, ∀η ∈ L²(0, T; H^1_0(Ω)),   (1.47)
  u|_{t=0} = v in Ω.

Similarly, we rewrite the Neumann problem (1.7), (1.8), (1.10) in the form

  ∂u/∂t − Σ_{i=1}^n ∂/∂x_i (a_i(x, t) ∂u/∂x_i) + b(x, t)u = f in Q,
  u|_{t=0} = v in Ω,   (1.48)
  ∂u/∂N = g on S.

The weak solution to the problem (1.48) is a function u(x, t) ∈ W(0, T; H^1(Ω)) satisfying

  ∫_0^T ⟨u_t, η⟩_{(H^1(Ω))′,H^1(Ω)} dt + ∫∫_Q [ Σ_{i=1}^n a_i(x, t)(∂u/∂x_i)(∂η/∂x_i) + b(x, t)uη ] dxdt = ∫∫_Q fη dxdt + ∫∫_S gη dζdt, ∀η ∈ L²(0, T; H^1(Ω)),   (1.49)
  u|_{t=0} = v in Ω.
1.4.1. Interpolations of grid functions

We suppose that Ω = (0, L_1) × (0, L_2) × ··· × (0, L_n) and subdivide Ω into small cells by the rectangular uniform grid specified by

  0 = x_i^0 < x_i^1 = h_i < ··· < x_i^{N_i} = L_i, i = 1, ..., n.

Here h_i = L_i/N_i is the grid size in the x_i-direction, i = 1, ..., n. Denote by

• k := (k_1, ..., k_n), 0 ≤ k_i ≤ N_i, the multi-index of a grid point;
• x^k := (x_1^{k_1}, ..., x_n^{k_n}) the grid point;
• h := (h_1, ..., h_n) the vector of spatial grid sizes;
• Δh := h_1 ··· h_n;
• e_i, i = 1, ..., n, the unit vector in the x_i-direction, i.e. e_1 = (1, 0, ..., 0), ..., e_n = (0, ..., 0, 1).

Around each grid point, we define the following subsets of Ω:

  ω^+(k) := {x ∈ Ω : k_i h_i < x_i < (k_i + 1)h_i, ∀i = 1, ..., n},   (1.50)
  ω(k) := {x ∈ Ω : (k_i − 0.5)h_i < x_i < (k_i + 0.5)h_i, ∀i = 1, ..., n},   (1.51)
  ω_i^+(k) := {x ∈ Ω : k_i h_i ≤ x_i ≤ (k_i + 1)h_i, (k_j − 0.5)h_j ≤ x_j ≤ (k_j + 0.5)h_j, ∀j ≠ i}.   (1.52)

The set of the indices of all grid points belonging to Ω̄ is denoted by Ω̄_h:

  Ω̄_h := {k = (k_1, ..., k_n) : 0 ≤ k_i ≤ N_i, ∀i = 1, ..., n}.   (1.53)

The set of the indices of all interior grid points is denoted by Ω_h:

  Ω_h := {k = (k_1, ..., k_n) : 1 ≤ k_i ≤ N_i − 1, ∀i = 1, ..., n}.   (1.54)

The set of the indices of all boundary grid points is denoted by Π_h:

  Π_h = Ω̄_h \ Ω_h.   (1.55)

Moreover, we use the following notation for subsets of Π_h:

  Π_h^{il} := {k = (k_1, k_2, ..., k_n) : k_i = 0}, i = 1, ..., n,   (1.56)
  Π_h^{ir} := {k = (k_1, k_2, ..., k_n) : k_i = N_i}, i = 1, ..., n.   (1.57)

The points x^k with k_i = 0 or N_i for all i = 1, ..., n are called the corner points, and the set of the indices of all corner points is denoted by Π^0. We also make use of the sets

  Ω_h^i := {k = (k_1, ..., k_n) : 0 ≤ k_i ≤ N_i − 1, 0 ≤ k_j ≤ N_j, ∀j ≠ i}, i = 1, ..., n.   (1.58)

For a function u(x, t) defined in Q, we denote by ū^k(t) (or u^k if there is no confusion), or by u(k, t), its approximate value at (x^k, t). Suppose that ū = {u^k, k ∈ Ω̄_h} is a grid function defined on Q_T^h := Ω̄_h × (0, T) which has a first weak derivative with respect to t. We define

  ‖ū‖²_{H^{1,0}(Q_T^h)} := ∫_0^T Δh { Σ_{k∈Ω̄_h} |ū^k(t)|² + Σ_{i=1}^n Σ_{k∈Ω_h^i} |ū_{x_i}^k(t)|² } dt   (1.59)

and

  ‖ū‖²_{H^{1,1}(Q_T^h)} := ∫_0^T Δh { Σ_{k∈Ω̄_h} ( |ū^k(t)|² + |ū_t^k(t)|² ) + Σ_{i=1}^n Σ_{k∈Ω_h^i} |ū_{x_i}^k(t)|² } dt,   (1.60)

with the forward and backward difference quotients defined by

  ū_{x_i} := (u^{k+e_i} − u^k)/h_i = (u(x + h_i e_i) − u(x))/h_i,  ū_{x̄_i} := (u^k − u^{k−e_i})/h_i = (u(x) − u(x − h_i e_i))/h_i.

When u_h is a grid function, we write u_{x_i} for u_{h x_i}. For a grid function ū we define the following interpolations in Q.

1) Piecewise constant:

  ũ(x, t) := ū^k(t), (x, t) ∈ ω(k) × (0, T).   (1.61)

2) Multi-linear:

  û(x, t) := ū^k(t) + Σ_{i=1}^n ū_{x_i}^k(t)(x_i − k_i h_i) + Σ_{1≤i<j≤n} ū_{x_i x_j}^k(t)(x_i − k_i h_i)(x_j − k_j h_j) + ··· + ū_{x_1 x_2 ... x_n}^k(t) Π_{i=1}^n (x_i − k_i h_i), (x, t) ∈ ω^+(k) × (0, T).   (1.62)

From these formulas, for n = 2 we have

  û(x, t) := ū^k(t) + ū_{x_1}^k(t)(x_1 − k_1 h_1) + ū_{x_2}^k(t)(x_2 − k_2 h_2) + ū_{x_1 x_2}^k(t)(x_1 − k_1 h_1)(x_2 − k_2 h_2),   (1.63)

and for n = 3

  û(x, t) := ū^k(t) + Σ_{i=1}^3 ū_{x_i}^k(t)(x_i − k_i h_i) + Σ_{1≤i<j≤3} ū_{x_i x_j}^k(t)(x_i − k_i h_i)(x_j − k_j h_j) + ū_{x_1 x_2 x_3}^k(t)(x_1 − k_1 h_1)(x_2 − k_2 h_2)(x_3 − k_3 h_3).   (1.64)
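For n = 2, formula (1.63) is the familiar four-point bilinear interpolation on each grid cell. A small sketch (the helper below is illustrative; the difference quotients are those defined above):

```python
import numpy as np

def bilinear_interp(U, h1, h2, x1, x2):
    """Evaluate the multi-linear interpolation (1.63) of a 2D grid function
    U[k1, k2] at the point (x1, x2) on a uniform grid with sizes h1, h2."""
    k1, k2 = int(x1 // h1), int(x2 // h2)
    k1 = min(k1, U.shape[0] - 2)
    k2 = min(k2, U.shape[1] - 2)
    ux1 = (U[k1 + 1, k2] - U[k1, k2]) / h1                 # forward quotient in x1
    ux2 = (U[k1, k2 + 1] - U[k1, k2]) / h2                 # forward quotient in x2
    ux1x2 = (U[k1 + 1, k2 + 1] - U[k1 + 1, k2]
             - U[k1, k2 + 1] + U[k1, k2]) / (h1 * h2)      # mixed quotient
    d1, d2 = x1 - k1 * h1, x2 - k2 * h2
    return U[k1, k2] + ux1 * d1 + ux2 * d2 + ux1x2 * d1 * d2
```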

From Theorems 3.2 and 3.3 of Chapter 6 in [40] and the comments therein, we have the following results.

Lemma 1.4.1. Suppose that the grid function ū satisfies the inequality

  ‖ū‖_{H^{1,0}(Q_T^h)} ≤ C,   (1.65)

with C a constant independent of h. Then the multi-linear interpolation (1.62) of ū is bounded in H^{1,0}(Q).

Lemma 1.4.2. Suppose that the grid function ū satisfies the inequality

  ‖ū‖_{H^{1,1}(Q_T^h)} ≤ C,   (1.66)

with C a constant independent of h. Then the multi-linear interpolation (1.62) of ū is bounded in H^{1,1}(Q).

Asymptotic relationships between the two interpolations as the grid size h tends to zero are stated in the following lemmas.

Lemma 1.4.3. Suppose that the hypothesis of Lemma 1.4.1 is fulfilled. If {û(x, t)}_h converges weakly to a function u(x, t) in L²(Q) as the grid size h tends to zero, then the sequence {ũ(x, t)}_h also converges weakly to u(x, t) in L²(Q). Moreover, if {û|_S}_h converges weakly to u|_S in L²(S), then {ũ|_S}_h also converges weakly to u|_S in L²(S).

Lemma 1.4.4. Suppose that the hypothesis of Lemma 1.4.2 is fulfilled. If {û(x, t)}_h converges strongly to a function u(x, t) in L²(Q) as the grid size h tends to zero, then the sequence {ũ(x, t)}_h also converges strongly to u(x, t) in L²(Q). Moreover, if {û|_S}_h converges to u|_S in L²(S), then {ũ|_S}_h also converges to u|_S in L²(S).

Lemma 1.4.5. Suppose that the hypothesis of Lemma 1.4.2 is fulfilled and let i ∈ {1, ..., n}. If the sequence of derivatives {û_{x_i}(x, t)}_h converges weakly to a function u(x, t) in L²(Q) as h tends to zero, then the sequence {ũ_{x_i}(x, t)}_h also converges to u(x, t) in L²(Q), where

  ũ_{x_i}(x, t) := ū_{x_i}(k), ∀x ∈ ω_i^+(k).

1.4.2. Discretization in space variables and the convergence of the finite difference scheme

For a function z defined in Ω, we define its average on ω(k) by

  z^k = (1/|ω(k)|) ∫_{ω(k)} z(x)dx = (1/Δh) ∫_{ω(k)} z(x)dx.   (1.67)

Here, if k belongs to the boundary of Ω, then ω(k) is understood as the grid box of Ω containing k. We approximate the integrals in (1.47) and (1.49) as follows:

  ∫∫_Q (∂u/∂t)η dxdt ≈ Δh ∫_0^T Σ_{k∈Ω̄_h} (dū^k(t)/dt) η̄^k(t) dt,   (1.68)
  ∫∫_Q a_i(x, t)(∂u/∂x_i)(∂η/∂x_i) dxdt ≈ Δh ∫_0^T Σ_{k∈Ω_h^i} ā_i^k(t) ū_{x_i}^k(t) η̄_{x_i}^k(t) dt,   (1.69)
  ∫∫_Q b(x, t)uη dxdt ≈ Δh ∫_0^T Σ_{k∈Ω̄_h} b̄^k(t) ū^k(t) η̄^k(t) dt,   (1.70)
  ∫∫_Q fη dxdt ≈ Δh ∫_0^T Σ_{k∈Ω̄_h} f̄^k(t) η̄^k(t) dt,   (1.71)
  ∫∫_S gη dζdt ≈ Δh Σ_{i=1}^n (1/h_i) ∫_0^T ( Σ_{k∈Π_h^{il}\Π^0} ḡ^k(t) η̄^k(t) + Σ_{k∈Π_h^{ir}\Π^0} ḡ^k(t) η̄^k(t) ) dt + Δh Σ_{k∈Π^0} ḡ^k(t) η̄^k(t).   (1.72)

a. The Dirichlet problem (1.46)

Substituting the approximations (1.68)–(1.71) into the first equation of (1.47), we obtain the following equality (dropping the time variable t for a moment):

  ∫_0^T Δh [ Σ_{k∈Ω̄_h} ( dū^k/dt + b̄^k ū^k − f̄^k ) η̄^k + Σ_{i=1}^n Σ_{k∈Ω_h^i} ā_i^k ū_{x_i}^k η̄_{x_i}^k ] dt = 0.   (1.73)

Using the summation by parts formula together with the condition ū^k = η̄^k = 0 when k_i = 0 and k_i = N_i, we have

  Σ_{k∈Ω_h^i} ā_i^k ū_{x_i}^k η̄_{x_i}^k = Σ_{k∈Ω_h^i} ā_i^k ((ū^{k+e_i} − ū^k)/h_i)((η̄^{k+e_i} − η̄^k)/h_i)
    = Σ_{k∈Ω_h^i} ā_i^k (ū^{k+e_i} − ū^k)/h_i² η̄^{k+e_i} − Σ_{k∈Ω_h^i} ā_i^k (ū^{k+e_i} − ū^k)/h_i² η̄^k
    = Σ_{k∈Ω_h} ā_{i−}^k (ū^k − ū^{k−e_i})/h_i² η̄^k − Σ_{k∈Ω_h} ā_i^k (ū^{k+e_i} − ū^k)/h_i² η̄^k
    = Σ_{k∈Ω_h} [ ā_{i−}^k (ū^k − ū^{k−e_i})/h_i² − ā_i^k (ū^{k+e_i} − ū^k)/h_i² ] η̄^k,   (1.74)

with ā_{i−}^k = ā_i^{k−e_i}. Substituting this into (1.73) and approximating the initial condition by ū^k(0) = v̄^k, k ∈ Ω̄_h, we obtain the following system approximating the original problem (1.47):

  dū/dt + (Λ_1 + ··· + Λ_n)ū − F̄ = 0,
  ū(0) = v̄,   (1.75)

with ū = {u^k, k ∈ Ω̄_h}, with v̄ = {v^k, k ∈ Ω̄_h} a grid function approximating the initial condition v, and with F̄ = {f̄^k, k ∈ Ω_h}, where f̄^k is defined by the formula (1.67). The coefficient matrices Λ_i act by

  (Λ_i ū)^k = (b̄^k ū^k)/n +
    (1/h_i²)[ ā_{i−}^k (ū^k − ū^{k−e_i}) − ā_i^k (ū^{k+e_i} − ū^k) ],  2 ≤ k_i ≤ N_i − 2,
    (1/h_i²)[ ā_{i−}^k ū^k − ā_i^k (ū^{k+e_i} − ū^k) ],  k_i = 1,
    (1/h_i²)[ ā_{i−}^k (ū^k − ū^{k−e_i}) + ā_i^k ū^k ],  k_i = N_i − 1.   (1.76)

We note that the coefficient matrices Λ_i defined by (1.76) are positive semi-definite; the proof of this claim is similar to that of Lemma 1.4.6 for the Neumann problem below.

b. The Neumann problem (1.48)


Putting (1.68)(1.72) in the rst equation of (1.49), for the ease of notation, we drop the

time variable t for a moment, we have

Z T h X  dūk  Xn X i
∆h + b̄k ūk η̄ k + āki ūkxi η̄xki dt
0 dt i=1 i
k∈Ω̄h k∈Ωh
Z (1.77)
T hX Xn
1 X k k X  X i
= ∆h ¯k k
f η̄ + ḡ η̄ + k k
ḡ η̄ + k k
ḡ (t)η̄ (t) dt.
0 i=1
hi
k∈Ω̄h il 0 k∈Πh \Π k∈Πir
h \Π
0 k∈Π0

Denote by kil = (k1 , . . . , ki−1, 0, ki+1 , . . . , kn ), kir = (k1 , . . . , ki−1, Ni , ki+1 , . . . , kn ). For arbi-

23
trary and xed indi es k2 , . . . , kn , using the summation by parts, we obtain

N
X 1 −1 N
X 1 −1
ūk+e1 − ūk η̄ k+e1 − η̄ k
āk1 ūkx1 η̄xk1 = āk1
k1 =0 k1 =0
h1 h1
N
X 1 −1 N1 −1
ūk+e1 − ūk k+e1 X k ū
k+e1
− ūk k
= āk1 η̄ − ā1 η̄
h21 h21
k1 =0 k1 =0
N1
X N
X 1 −1
ūk − ūk−e1 k ūk+e1 − ūk k
= āk1− η̄ − āk1 η̄
k1 =1
h21 k1 =0
h21
k1l +e1 k1l N
X 1 −1
 k

k l ū − ū k1l ū − ūk−ei k k ū
k+e1
− ūk k
= −ā11 η̄ + āk1− η̄ − ā1 η̄
h21 k1 =1
h21 h21
k1r k1r −e1
ū − ū r
+ āk1− η̄ k1 ,
h21
(1.78)
k
with ā1− = āk−e
1
1
. The similar terms in the x2 , . . . , xn dire tions are treated in the same

way. Then, repla ing this equality into (1.77) and approximating the initial ondition in

the se ond equation of (1.49), we obtain the following system approximating the original

problem (1.49) 

 dū
+ (Λ1 + · · · + Λn )ū − F̄ = 0,
dt (1.79)

ū(0) = v̄,

with ū = {uk , k ∈ Ω̄h } and the fun tion v̄ = {v k , k ∈ Ω̄h } is a grid fun tion approximating
the initial ondition v and
 k k
 āi− ūk − ūk−ei − āi ūk+ei − ūk , k ∈ Ω̄ : 1 ≤ k ≤ N − 1,



 h i i

 h2i h2i

 k
b̄k k
ū − āi ūk+ei − ūk , k ∈ Π : k = 0, k ∈

/ Π0 ,
h i
(Λi ū)k = + h2i (1.80)
n 
 k

 āi− k 

 ū − ūk−ei , k ∈ Πh : ki = Ni , k ∈ / Π0 ,

 h 2

 i

0, k ∈ Π0 .

Moreover, the right hand side F̄ is dened as follows



 ¯k
f , 1 ≤ ki ≤ Ni − 1,

k
F̄ = f¯k + ḡh , ki = 0 or ki = Ni , if / Π0 ,
k∈ (1.81)

 i
 ¯k k
f + ḡ , if k ∈ Π0 .

Next, we will prove that the oe ients matrix Λi dened by (1.80) is positive semi-denite.

Lemma 1.4.6. For any t, the oe ients matrix Λi , i = 1, 2, · · · , n is positive semi-
denite.

24
Proof. Without loss of generality, we assume n=2 and i = 1. We have
 
(0,k ) h2 b̄(0,k2 ) (0,k )
ā1 2 + 1 2 −ā1 2 ... 0 0
 (1,k ) (1,k ) h2 b̄(1,k2 )

 −ā1− 2 2ā1∗ 2 + 1 2 ... 0 0 
 
k
1  0
(2,k )
−ā1− 2 ... 0 0


Λ1 = 2  
h1  ... ... ... ... ... 
 
 0 0 . . . 2ā1∗ 1
(N −1,k2 )
+
h21 b̄(N1 −1,k2 )
−ā1
(N1 −1,k2 ) 
 2 
(N1 ,k2 ) (N ,k2 ) h21 b̄(N1 ,k2 )
0 0 ... −ā1− ā1−1 + 2
(1.82)

where āk1∗ = 12 (āk1 + āk1− ).


Setting U1k2 = (U (0,k2 ) , U (1,k2 ) , . . . , U (N1 ,k2 ) )′ , and taking into a ount that āk1− = āk−e
1
1
and
k
b̄ ≥ 0, we see that
 (0,k2 ) h21 b̄(0,k2 )  (0,k2 ) (0,k2 )
 
ā1 + U − ā1 U (1,k2 )
U (0,k2 )
 1  2
 
Λk1 U1k2 , U1k2 = 2  ...  ... 
h1 (N ,k ) (N ,k ) h2 b̄(N1 ,k2 )  (N1 ,k2 )
−ā1−1 2 U (N1 −1,k2 ) + ā1−1 2 + 2 2 U U (N1 ,k2 )
1 −1
1 NX  2 1 X N1
2
(k1 ,k2 ) (k1 ,k2 ) (k1 +1,k2 )
= 2 ā1 U −U + b̄(k1 ,k2 ) U (k1 ,k2 ) ≥ 0.
h1 k =0 2 k =0
1 1

Hen e Λk1 is positive semi-denite. The proof is omplete.

Next, we prove the boundedness of the solution of (1.75) and (1.79).

Lemma 1.4.7. Let ū be a solution of the Cau hy problem (1.75).

a) If v ∈ L2 (Ω), then exists a onstant c independent of h and the oe ients of the
equation su h that
X Z TX
n X
k 2
max ∆h |ū (t)| + ∆h |ūkxi |2 dt
t∈[0,T ] 0
k∈Ω̄h i=1 k∈Ωi
h
 Z T X X 
≤ c ∆h |f¯k |2 dt + ∆h |v̄ k |2 . (1.83)
0
k∈Ω̄h k∈Ω̄h

b) Moreover, if ai , b ∈ C 1 ([0, T ], L∞ (Ω)) with |∂ai /∂t|, |∂b/∂t| ≤ µ2 < ∞, we have


X Z T Xn X 
k 2 k 2 k 2
max ∆h |ū (t)| + ∆h |ūt (t)| + |ūxi | dt
t∈[0,T ] 0
k∈Ω̄h i=1 k∈Ωi
h
Z T X X n X
X 
≤ c∆h |f¯k |2 dt + k 2
|v̄ | + |v̄xki |2 . (1.84)
0 i=1 k∈Ωi
k∈Ω̄h k∈Ω̄h h

Proof. The main idea of this proof is similar to that of [40, Chapter VI℄.

25
a) For arbitrary t∗ ∈ (0, T ]. Set

ūk (t), if t ∈ [0, t∗ ],
k
η̄ (t) =
0, if / [0, t∗ ].
t∈
Sin e
Z X
t∗ 1X k ∗ 2 1X k
dt ūkt (t)ūk (t) = |ū (t )| − |ū (0)|2 ,
0 2 2
k∈Ωh k∈Ω̄h k∈Ω̄h
k
and ū (0) = v̄, it follows from (1.73) that

X Z t∗ h X Xn X i
1 k ∗ 2 k k 2
∆h |ū (t )| + ∆h b̄ |u | + āki |ūkxi |2 dt
2 0 i=1 i
k∈Ω̄h k∈Ω̄h k∈Ωh
(1.85)
Z t∗ X X
1
= ∆h f¯k ūk dt + ∆h |v̄ k |2 .
0 2
k∈Ω̄h k∈Ω̄h

Multiplying the both sides of the equality (1.85) by 2, applying Cau hy's inequality to the
k
rst term in the right hand side, noting that b ≥ 0, we obtain

X Z t∗ n X
X Z t∗ X
∆h k ∗
|ū (t )| + 2∆h 2
āki |ūkxi |2 dt ≤ ∆h |f¯k |2 dt
0 i=1 k∈Ωi 0
k∈Ω̄h h k∈Ω̄h
Z (1.86)
t∗ X X
+ ∆h |ūk |2 dt + ∆h |v̄ k |2 .
0
k∈Ω̄h k∈Ω̄h

Put
X
y(t) = ∆h |ūk (t∗ )|2 .
k∈Ω̄h

From (1.86) we have


Z t∗ Z t∗ X X

y(t ) ≤ y(t)dt + ∆h |f¯k |2 dt + ∆h |v̄ k |2 .
0 0
k∈Ω̄h k∈Ω̄h

Applying Gronwall's inequality, we obtain


 Z t∗ X X 

y(t ) ≤ ∆h |f¯k |2 dt + ∆h |v̄ k |2 et . (1.87)
0
k∈Ω̄h k∈Ω̄h

Hen e, we have

X  Z T X X 
max ∆h k
|ū (t)| ≤ c ∆h 2
|f¯k |2 dt + ∆h k 2
|v̄ | . (1.88)
t∈[0,T ] 0
k∈Ω̄h k∈Ω̄h k∈Ω̄h

From the onditions (1.1)(1.3) about the oe ient ai , the inequalities (1.86) and (1.87)

we have
Z T X n X
X 
∆h |ūk (t)|2 + |ūkxi |2 dt
0 i=1 k∈Ωi
k∈Ω̄h h
 Z T X X 
≤ c ∆h |f¯k |2 dt + ∆h |v̄ k |2 . (1.89)
0
k∈Ω̄h k∈Ω̄h

26
Combining the two inequalities, we obtain the inequality (1.83).

RT P
b) In order to obtain a bound for ∆h |ūkt (t)|2 dt, we repla e η̄ k in (1.73) by ūkt and
0 k∈Ω̄h
get

Z T hX X n X
X i Z T X
∆h |ūkt (t)|2 + b̄k ūk ūkt + āki ūkxi ūkxi t dt = ∆h f¯k ūkt dt. (1.90)
0 i=1 k∈Ωi 0
k∈Ω̄h k∈Ω̄h h
k∈Ω̄h

Multiplying the both sides of (1.90) by 2, integrating by parts the se ond and the third

terms in the left hand side and using Cau hy's inequality for the right hand side, we obtain

ZT X
2∆h |ūkt (t)|2 dt
0 k∈Ω̄h
X h i XZ
db̄k (t) k 2 T
k k 2 k 2
+ ∆h b |ū (T )| − |ū (0)| − ∆h |ū | (t)dt
0 dt
k∈Ω̄h k∈Ω̄h
Xn X h i X n X Z T
k k 2 k 2 dāki (t) k 2
+ ∆h āi |ūxi (T )| − |v̄xi | − ∆h |ūxi | dt
i=1 i i=1 i 0 dt
k∈Ωh k∈Ωh

ZT X
= 2∆h f¯k ūkt dt
0 k∈Ω̄h
Z T X Z T X
≤ ∆h |f¯k |2 dt + ∆h |ūkt |2 dt.
0 0
k∈Ω̄h k∈Ω̄h

From this inequality, sin e b≥0 and the onditions (1.1)(1.3) about the oe ient ai ,
the inequalities (1.86), (1.87) and the hypothesis of the theorem, we obtain

Z T X Z T X
∆h |ūkt (t)|2 dt ≤ ∆h |f¯k |2 dt
0 0
k∈Ω̄h k∈Ω̄h
X n X
X
k k 2
+ ∆h b̄ |v̄ | + ∆h āki |v̄xki |2
k∈Ω̄h i=1 k∈Ωi
h

XZ db̄k (t) k 2T
+ ∆h |ū | (t)dt
0 dt
k∈Ω̄h
Xn X Z T
dāki (t) k 2
+ ∆h |ūxi | dt
i=1 i 0 dt
k∈Ωh
Z T X X n X
X
≤ ∆h |f¯k |2 dt + ∆hµ1 |v̄ k |2 + ∆hΛ |v̄xki |2
0 i=1 k∈Ωi
k∈Ω̄h k∈Ω̄h h

XZ T n X Z
X T
k 2
+ µ2 ∆h |ū | (t)dt + µ2 ∆h |ūkxi |2 dt.
0 i=1 k∈Ωi 0
k∈Ω̄h h

27
Using this inequality and (1.89) we get

Z T X n X
X
∆h |ūk (t)|2 + |ūkxi |2 dt
0 i=1 k∈Ωi
k∈Ω̄h h
 Z T X X 
≤ c ∆h |f¯k |2 dt + ∆h |v̄ k |2 . (1.91)
0
k∈Ω̄h k∈Ω̄h

It follows from (1.3), (1.4) that

Z T X Z T X X n X
X
∆h |ūkt (t)|2 dt ≤ ∆h |f¯k |2 dt + 2∆hµ1 k 2
|v̄ | + 2∆hΛ |v̄xki |2 .
0 0 i=1 k∈Ωi
k∈Ω̄h k∈Ω̄h k∈Ω̄h h
(1.92)

Using the inequality (1.83) we obtain (1.84).

If v ∈ L2 (Ω), the right hand side of (1.83) is bounded by c(kf k2L2 (Q) + kvk2L2 (Ω) ). On the
other hand, if v ∈ H01 (Ω), the right hand side of 2 2
(1.84) is bounded by c(kf kL2 (Q) +kvkH 1 (Ω) ).

Consequently, it follows form Lemmas 1.4.7, 1.4.2 and 1.4.4 the onvergen e of the multi-

linear interpolation of ūk to the solution of the problem (1.46) as the grid size h tends to

zero. This is stated in the following theorem.

To emphasize the dependen e of the solution of the dis retized problem on the grid size,

we now use the notation uh instead of ūk .

Theorem 1.4.8. 1) If v ∈ L2 (Ω), then the multi-linear interpolation (1.62) ûh of the
dieren e-dierential problem (1.73) in QT weakly onverges in L2 (Q) to the solution u ∈
H 1,0 (Q) of the original problem (1.46) and its derivatives with respe t to xi , i = 1, . . . , n
weakly onverges in L2 (Q) to uxi .
2) If ai , b ∈ C 1 ([0, T ], L∞ (Ω)) with |∂ai /∂t|, |∂b/∂t| ≤ µ2 < ∞, and v ∈ H01 (Ω), then uˆh
strongly onverges L2 (Q) to u. Furthermore, uˆh |S onverges to u|S in L2 (S).

The proof of this theorem is similar to that of Theorem 1.4.11 below, therefore we omit it.

We now prove the boundedness of the nite dieren e approximation to the solution of the

Neumann problem (1.49). We have the following results.

Lemma 1.4.9. Let ū be a solution of the Cau hy problem (1.79).

a) If v ∈ L2 (Ω), then there exists a onstant c independent of h and the oe ients of the
equation (1.7) su h that

28
X Z T n X
X
k 2
max ∆h |ū (t)| + ∆h |ūkxi |2 dt
t∈[0,T ] 0
k∈Ω̄h i=1 k∈Ωi
h
 X X n
∆h X k 2
≤ c ∆h |v̄ k |2 + |v̄ |
i=1
hi 0
k∈Ω̄h k∈Π
Z T X XZ TX n
¯ k 2 ∆h ¯k 2
+ ∆h |f | dt + |f | dt
0 hi
k∈Ω̄h k∈Π0 0 i=1
Z TX
1 X k2 
n X
+ ∆h |ḡ | + |ḡ k |2 dt
0 i=1 hi il ir
k∈Πh k∈Πh

XZ T
1
n
X ∆h k 2 
+ |ḡ | dt . (1.93)
0 n i=1
hi
k∈Π0

b) If ai , b ∈ C 1 ([0, T ], L∞ (Ω)) with |∂ai /∂t|, |∂b/∂t| ≤ µ2 < ∞ and if g ∈ H 0,1(S), then
X Z T n X
X 
k 2 k 2
max ∆h |ū (t)| + ∆h |ūt (t)| + |ūkxi |2 dt
t∈[0,T ] 0
k∈Ω̄h i=1 k∈Ωi
h
 X n
X ∆h X
n X
X
k 2 k 2
≤ c ∆h |v̄ | + |v̄ | + |v̄xki |2
i=1
hi i=1 k∈Ωi
k∈Ω̄h k∈Π0 h
Z T X XZ T n
X ∆h
+ ∆h |f¯k |2 dt + |f¯k |2 dt
0 0 i=1
hi
k∈Ω̄h k∈Π0
Z
1 X 
T Xn X
k 2 k 2
+ ∆h |ḡ | + |ḡ | dt
0 i=1
hi
k∈Πil
h
k∈Πir
h

XZ T
1 X ∆h k 2 
n
+ |ḡ | dt . (1.94)
0 n i=1 hi
k∈Π0

Proof. The proof of this lemma is similar to that of Lemma 1.4.7 but dierent from the

estimation for the sum on the boundary.

a) Similar to the proof of Lemma 1.4.7, for arbitrary t∗ ∈ (0, T ] set



ūk (t), if t ∈ [0, t∗ ],
k
η̄ (t) =
0, if / [0, t∗ ].
t∈

Sin e

Z X
t∗ 1X k ∗ 2 1X k
ūkt (t)ūk (t)dt = |ū (t )| − |ū (0)|2
0 2 2
k∈Ωh k∈Ω̄h k∈Ω̄h

1X k ∗ 2 1X k2
= |ū (t )| − |v̄ | ,
2 2
k∈Ω̄h k∈Ω̄h

29
from (1.77) we have

X Z t∗ h X Xn X i
1 k ∗ 2 k k 2 k k 2
∆h |ū (t )| + ∆h b̄ |u | + āi |ūxi | dt
2 0 i=1 i
k∈Ω̄h k∈Ω̄h k∈Ωh
Z t∗ X X
1
= ∆h f¯k ūk dt + ∆h |v̄ k |2
0 2
k∈Ω̄h k∈Ω̄h
Z
1 X k k 
t∗ Xn X
+ ∆h ḡ ū + ḡ k ūk dt
0 i=1
hi il 0 k∈Πh \Π k∈Πir
h \Π
0

Z t∗ X
+ ∆h ḡ k ūk dt. (1.95)
0 k∈Π0

Using ǫ-Cau hy's inequality for the terms in the right hand side, taking into a ount that
k
b̄ ≥ 0 and the ondition (1.3), we obtain

X Z t∗ Xn X
1 k ∗ 2
∆h |ū (t )| + λ∆h |ūkxi |2 dt
2 0 i=1 k∈Ωi
k∈Ω̄h h
Z t∗ X Z t∗ X
1
≤ ∆h |f¯k |2 dt + ǫ1 ∆h |ūk |2 dt
4ǫ1 0 0
k∈Ω̄h k∈Ω̄h
X
+ ∆h |v̄ k |2
k∈Ω̄h
Z
1  X k 2 X k 2
t∗ Xn
1
+ ∆h |ḡ | + |ḡ | dt
4ǫ2 0 i=1
hi il ir
k∈Πh k∈Πh
Z
1 X 
t∗ n
X X
+ ǫ2 ∆h |ūk |2 + |ūk |2 dt
0 i=1
hi il 0 k∈Πh \Π k∈Πir
h \Π
0

X 1 n Z X n Z
1 t∗
k 2 1 1X t∗ X
+ ǫ2 ∆h |ū | dt + ∆h hi |ḡ k |2 dt. (1.96)
n h
i=1 i 0 0
4ǫ2 n i=1 0
k∈Π k∈Π0

Here, the positive onstants ǫ1 and ǫ2 will be hosen.

We estimate the last sum in the above inequality. In the ontinuous ase, we an evaluate

the L2 -norm of the tra e of a fun tion ϕ ∈ H 1 (Ω) on the boundary ∂Ω by its L2 −norm
(see [40, p. 2831℄)

kϕk2L2 (∂Ω) ≤ ckϕk2H 1 (Ω) . (1.97)

However, a similar inequality is not valid for a grid fun tion in Ωh due to its orners. We

now estimate the sum

1 X 
Xn X
∆h |uk |2 + k 2
|u | .
h
i=1 i il 0 k∈Πh \Π k∈Πir
h \Π
0

Denote k = (k1 , k ′ ) and

L1 Ωh := {k ′ = {k2 , . . . , kn }, 1 ≤ ki ≤ Ni − 1, ∀i = 2, . . . , n}.

30
Consider the sum
N
X 1 −1 X ′
h2 · · · hn |ū(0,k ) |2 .
k1 =1 k ′ ∈L1 Ωh

Sin e
k1
X
′ ′ ′
ū(0,k ) = −ū(k1 ,k ) + h1 ū(j,k )
x1 , 1 ≤ k1 ≤ N1 − 1,
j=0

we obtain

N
X 1 −1 X N
X 1 −1 X k1
X
′ ′ ′
∆h |ū(0,k ) |2 = ∆h | − ū(k1 ,k ) + h1 ū(j,k
x1 |
) 2

k1 =1 k ′ ∈L1 Ωh k1 =1 k ′ ∈L1 Ωh j=0


N
X 1 −1 X N
X 1 −1 X Xk1
1 (k1 ,k ′ ) 21 ′
≤ ∆h |ū | + ∆h |h1 ūx(k11 ,k ) |2
2 2 j=0
k 1 =1 k ′ ∈L 1 Ωh k =1 ′ 1 k ∈L1 Ωh
N
X 1 −1 X N
X 1 −1 X N
X 1 −1
1 (k1 ,k ′ ) 21 ′
≤ ∆h |ū | + ∆h h21 N1 |ux(k11 ,k ) |2
2 2 j=0
k1 =1 k ′ ∈L1 Ωh k1 =1 k ′ ∈L1 Ωh

X N
X 1 −1 X
1 1 ′
≤ ∆h |ūk |2 + h21 N1 (N1 − 1)∆h |ūx(k11 ,k ) |2
2 2 k k ′ ∈L1 Ωh
k∈Ω̄h 1 =1

X N
X 1 −1 X
1 k 2 2 ′
≤ ∆h |ū | + L1 ∆h |ūx(k11 ,k ) |2 .
2
k∈Ω̄h k =1 ′ 1 k ∈L1 Ωh

Here, we have used the equality h1 = L1 /N1 and N1 ≥ 2.


L1
On the other hand, sin e h1 (N1 − 1) = N1
(N1 − 1) ≥ 1/2L1 , the above inequality has the

form

N 1 −1
∆h X ′ 1 X X X ′
|ū(0,k ) |2 ≤ ∆h |ūk |2 + 2L1 ∆h |ūx(k11 ,k ) |2 . (1.98)
h1 ′ L1
k ∈L1 Ωh k =1 ′
k∈Ω̄h 1 k ∈L1 Ωh

On the other boundary, we have similar estimates.

We now estimate ū0 (t). The value of ū(t) at the other orner points are similarly estimated.
From the equation (1.80) and (1.81) we have

dū0
+ b̄0 ū0 = f¯0 + ḡ 0 . (1.99)
dt
Furthermore, we have the initial ondition ū0 (0) = v̄ 0 . Sin e 0 ≤ b ≤ µ1 , applying

Gronwall's inequality, we obtain

 Z t Z t 
0
|ū (t)| ≤ c |v̄ | + 0 ¯0
|f (τ )|dτ + 0
|ḡ (τ )|dτ
0 0

with c being a dened onstant. Hen e


Z t X Z t X
∆h  ¯0 
n n n
1 ∆h 0 2 1 2 0 2 1 X ∆h 0 2
|ū (τ )| dτ ≤ c |f (τ )| + |ḡ (τ )| dτ + |v̄ | .
0 n i=1 hi 0 n i=1 hi n i=1 hi
(1.100)

31
Thus, from (1.96), (1.98) and (1.100) we obtain

X Z t∗ X n X
1 k ∗ 2
∆h |ū (t )| + λ∆h |ūkxi |2 dt
2 0 i=1 k∈Ωi
k∈Ω̄h h
Z t∗ X Z t∗ X
1
≤ ∆h |f¯k |2 dt + ǫ1 ∆h |ūk |2 dt
4ǫ1 0 0
k∈Ω̄h k∈Ω̄h
X
+ ∆h |v̄ k |2
k∈Ω̄h
Z
1 X k2 
Xn t∗ X
1
+ ∆h |ḡ | + |ḡ k |2 dt
4ǫ2 0 h
i=1 i k∈Πil
h
k∈Πirh
Z t∗ X
1
+ ǫ2 n ∆h |ūk (t)|2 dt
min Li 0
i∈{1,...,n} k∈Ω̄h
Z t∗ n X
X
+ 2nǫ2 ∆h max Li |ūkxi |2 dt
i∈{1,...,n} 0 i=1 k∈Ωi
h
n Z
1 1X X t∗
+ ∆h hi |ḡ k |2 dt
4ǫ2 n i=1 0 k∈Π0
Z
∆h  ¯k 2 
X t 1X n Xn
∆h X k 2

n k 2
+ 2 ǫ2 c |f (t)| + |ḡ (t)| dt + c |v̄ | (1.101)
0 0 n i=1 hi i=1
hi 0
k∈Π k∈Π

with c being a dened onstant.

Choosing ǫ2 su h that

2nǫ2 max Li = λ,
i∈{1,...,n}

we an eliminate the term ontaining derivatives with respe t to xi in the both sides of the

above inequality. Then hoosing ǫ1 = 1/2 and applying Gronwall's inequality with respe t
P k ∗ 2
to ∆h |ū (t )| , we obtain the estimate
k∈Ω̄h

X  X X n
∆h X k 2
k 2 k 2
max ∆h |ū (t)| ≤ c ∆h |v̄ | + |v̄ |
t∈[0,T ]
i=1
hi 0
k∈Ω̄h k∈Ω̄h k∈Π
Z T X XZ TX n
∆h ¯k 2
+ ∆h |f¯ | dt +
k 2
|f | dt
0 0 i=1
h i
k∈Ω̄h k∈Π0
Z TX n  
1 X k2 X
k 2
+ ∆h |ḡ | + |ḡ | dt
0 i=1 hi il ir
k∈Πh k∈Πh

XZ T
1
n
X ∆h k 2 
+ |ḡ | dt . (1.102)
0 n i=1
hi
k∈Π0

In the inequality (1.101) hoosing ǫ2 su h that 2nǫ2 maxi∈{1,...,n} Li = λ/2, then applying

32
the inequality (1.102) we obtain the following estimate for the derivative of ū as follows

Z n X
X  X Xn
T
∆h X k 2
∆h |ūkxi |2 dt ≤ c ∆h |v̄ k |2 + |v̄ |
0 i=1 k∈Ωi i=1
hi 0
h k∈Ω̄h k∈Π
Z T X XZ T n
X ∆h
+ ∆h |f¯k |2 dt + |f¯k |2 dt
0 0 i=1
hi
k∈Ω̄h k∈Π0
Z
1 X 
T Xn X
+ ∆h |ḡ k |2 + |ḡ k |2 dt
0 i=1
hi
k∈Πil
h
k∈Πir
h

XZ T
1
n
X ∆h k 2 
+ |ḡ | dt . (1.103)
0 n i=1
hi
k∈Π0

Combining (1.102) and (1.103), we obtain estimate (1.93) as laimed in the theorem.

RT P
b) Next, to estimate ∆h |ūkt (t)|2 dt, we repla e η̄ k in (1.73) by ūkt to obtain
0 k∈Ω̄h

Z T hX X n X
X i
∆h |ūkt (t)|2 + b̄k ūk ūkt + āki+ ūkxi ūkxi t dt
0 i=1 k∈Ωi
k∈Ω̄h k∈Ω̄h h
Z T X
= ∆h f¯k ūkt dt
0
k∈Ω̄h
Z Z
T Xn
1 X k k X  T X
+ ∆h ḡ ¯ut + ḡ k ūkt dt + ∆h ḡ k ūkt dt. (1.104)
0 i=1
hi 0
il 0 k∈Πh \Π k∈Πir
h \Π
0 k∈Π0

Multiplying the both sides of (1.104) with 2, applying integration by parts to the se ond

and third terms of the right-hand side, the terms of the left-hand side ex ept for the rst

one, we obtain

ZT X
2∆h |ūkt (t)|2 dt
0 k∈Ω̄h
X h i XZ
db̄k (t) k 2 T
k k 2 k 2
+ ∆h b̄ |ū (T )| − |ū (0)| − ∆h |ū | (t)dt
0 dt
k∈Ω̄h k∈Ω̄h
n
XX h i X n X Z T
k k 2 k 2 dāki+ (t) k 2
+ ∆h āi+ |ūxi (T )| − |ūxi (0)| − ∆h |ūxi | dt
i=1 i i=1 i 0 dt
k∈Ωh k∈Ωh

ZT X Z T Xn
1 X k k X 
= 2∆h f¯k ūkt dt − ∆h ḡt ū + ḡtk ūk dt
0 h
i=1 i
0 k∈Ω̄h il 0 k∈Πh \Π k∈Πir
h \Π
0

Xn
1 X k X 
+ ∆h ḡ (T )ūk (T ) + k
ḡ (T )ū (T ) k

i=1
hi il 0 k∈Πh \Π k∈Πir
h \Π
0

33
Xn
1 X X 
− ∆h ḡ k (0)ūk (0) + ḡ k (0)ūk (0)
h
i=1 i k∈Πil \Π0
h
k∈Πir
h \Π
0

Z T X X X
− ∆h ḡtk ūk dt + ∆h ḡ k (T )ūk (T ) − ∆h ḡ k (0)ūk (0)
0 k∈Π0 k∈Π0 k∈Π0
ZT X
= 2∆h f¯k ūkt dt
0 k∈Ω̄h
Z Z
T Xn
1 X k k X  T X
− ∆h ḡt ū + ḡtk ūk dt − ∆h ḡtk ūk dt
0 h
i=1 i 0
il 0 k∈Πh \Π k∈Πir
h \Π
0 k∈Π0

Xn
1 X k X  X
+ ∆h ḡ (T )ūk (T ) + ḡ k (T )ūk (T ) + ∆h ḡ k (T )ūk (T )
h
i=1 i il 0 k∈Πh \Π k∈Πir
h \Π
0 k∈Π0

Xn
1 X X  X
− ∆h ḡ k (0)ūk (0) + ḡ k (0)ūk (0) − ∆h ḡ k (0)ūk (0)
h
i=1 i il 0 k∈Πh \Π k∈Πir
h \Π
0 k∈Π0

:= (I) + (II) + (III) + (IV ). (1.105)

To pro eed, we need the following auxiliary result.

Lemma 1.4.10. Let w ∈ C 1 [0, T ]. Then


Z Z T
2 2 T 2 2
|w(T )| ≤ |w(t)| dt + 2T |wt (t)|2 dt. (1.106)
T 0 0

RT
Proof. Indeed, we havew(T ) = w(t) + t wt (τ )dτ. Hen e
 Z T   Z T 
2 2 2 2 2
|w(T )| ≤ 2 |w(t)| + ( |wt (τ )|dτ ) ≤ 2 |w(t)| + T |wt (t)|2 dt .
t 0

Taking the integral of the both sides with respe t to t from 0 to T, then dividing the

obtained inequality by T, we arrive at the assertion of the lemma.

Applying Cau hy's inequality, we have


Z T X Z T X
(I) ≤ ∆h |f¯k |2 dt + ∆h |ūkt |2 dt. (1.107)
0 0
k∈Ω̄h k∈Ω̄h

Applying Cau hy's inequality and (1.98), we get

Z Z T X
1 T Xn
1 X X 
k 2 1
(II) ≤ ∆h + |ḡt | dt + ∆h |ḡtk |2 dt
2 0 h
i=1 i
2 0 0
il 0 k∈Πh \Π k∈Πir
h \Π
0 k∈Π
Z Z
1
n
T X 1 X X 
k 2 1 T X
+ ∆h + |ū | dt + ∆h |ūk |2 dt
2 0 i=1
hi 2 0
il 0 k∈Πh \Π k∈Πir
h \Π
0 k∈Π0

34
Z Z
1 T Xn
1 X X  1 T X
≤ ∆h + |ḡtk |2 dt + ∆h |ḡtk |2 dt
2 0 h
i=1 i
2 0
il 0 k∈Πh \Π k∈Πir
h \Π
0 k∈Π0
Z T X n X
X  Z T X
c k 2 1
+ ∆h |ū | dt + |ūkxi |2 dt + ∆h |ūk |2 dt. (1.108)
2 0 i=1 k∈Ω̄+
2 0
k∈Ω̄h k∈Π0
h

Applying Cau hy's inequality, Lemma 1.4.10, the inequalities (1.98), (1.100), we obtain
Z Z
1  T X 1 X  
n X T X
1
(III) ≤ c2 ∆h + |ḡtk |2 dt + ∆h k 2
|ḡ | dt
2 0 i=1 hi 2 0
il 0 k∈Πh \Π k∈Πir
h \Π
0 k∈Π0
Z Z T X
1  T X 1 X  
n X
k 2 1 k 2
+ c2 ∆h + |ḡt | dt + ∆h |ḡ | dt
2 0 i=1 hi 2 0 0
il 0 k∈Πh \Π k∈Πir
h \Π
0 k∈Π
 X X n
∆h X k 2
k 2
c3 ∆h |v̄ | + |v̄ |
i=1
hi
k∈Ω̄h k∈Π0
Z T X XZ TX n
¯k 2 ∆h ¯k 2
+ ∆h |f | dt + |f | dt
0 0 0 i=1
hi
k∈Ω̄h k∈Π
Z TX n  X X 
1
+ ∆h |ḡ k |2 + |ḡ k |2 dt
0 i=1 hi il ir
k∈Πh k∈Πh

XZ T
1
n
X ∆h k 2 
+ |ḡ | dt . (1.109)
0 n i=1
hi
k∈Π0

Further, using Cau hy's inequality, Lemma 1.4.10 and (1.102) we an estimate (IV) as

follows

1 Xn
1 X X  1 X
(IV ) ≤ ∆h + |ḡ k (0)|2 + ∆h |ḡ k (0)|2
2 i=1
hi 2
il 0 k∈Πh \Π k∈Πir
h \Π
0 0 k∈Π

1 Xn
1 X X  1 X
+ ∆h + |ūk (0)|2 + ∆h |ūk (0)|2
2 h
i=1 i
2
il 0 k∈Πh \Π k∈Πir
h \Π
0 0 k∈Π
Z
Xn
1 X T X  
≤ c4 ∆h + |ḡ k (t)| + |ḡtk (t)|2 dt
0 i=1 hi
k∈Πil
h \Π
0 k∈Πir
h \Π
0

Z T X 
1
+ ∆h |ḡ k (t)|2 + |ḡtk (t)|2 dt
2 0 0 k∈Π
X n X
X 
+ c5 ∆h |ūk (0)|2 + |ūkxi (0)|2 . (1.110)
k∈Ω̄h i=1 k∈Ω̄+
h

Sin e the oe ient b is nonnegative, the oe ients ai satisfy the ondition (1.1)(1.3),

furthermore, these oe ients satisfy the se ond onditions of the lemma, from the in-

equalities (1.105),(1.107)(1.110), with the use of (1.93), we obtain (1.94).

From Lemma 1.4.9, we have the following results whi h assert the onvergen e of the nite

dieren e s heme (1.79) of the Neumann problem (1.49).

35
Theorem 1.4.11. 1) If v ∈ L2 (Ω), then the multi-linear interpolation (1.62) ûh of the
dieren e-dierential problem (1.79) in QT weakly onverges in L2 (Q) to the solution u ∈
H 1,0 (Q) of the Neumann problem (1.46) and its derivatives with respe t to xi , i = 1, . . . , n
onverges weakly in L2 (Q) to uxi .
2) If ai , b ∈ C 1 ([0, T ], L∞ (Ω)) with |∂ai /∂t|, |∂b/∂t| ≤ µ2 < ∞, and v ∈ H01 (Ω), then ûh
onverges strongly in L2 (Q) to u. Furthermore, ûh |S onverges to u|S in L2 (S).

Proof. We follow Ladyzhenskaya [40, Chapter 6℄ (see also [76, 77℄) to prove these onver-

gen e results. We an prove onvergen e rates like in [35, 70℄. However, we do not pursuit

it in this thesis.

1) From the estimate (1.93), the right hand side of (1.93) is bounded by c kf kL2 (Q) +

kgkL2(S) + kvkL2 (Ω) . Therefore, the sequen e {ūˆ}h of the multi-linear interpolations is

bounded in H 1,0
(Q), due to Lemma 1.4.1. Hen e, there is a subsequen e {ūˆ}hκ weakly

onverges to some fun tion u = u(x, t) ∈ H 1,0 (Q) and the sequen e of the derivative

{ūˆ}hκxi weakly onverges to the orresponding derivative u xi in L2 (Q) as κ tends to innity


(equivalently, hκ i = 1, 2, . . . , n. Due to Lemma 1.4.3, the subsequen e
tends to zero) for

{ū˜}hκ weakly onverges in L (Q) to u and the subsequen e of the derivative {ū˜}hκxi weakly
2

2
onverges in L (Q) to uxi as κ tends to innity. To nish the proof, we need to show that

ea h term in the dis rete equation (1.77) onverges to the orresponding one in (1.19).

Indeed, sin e C 2,1 (Q) H 1,1 (Q), it is enough to onsider the fun tion η in this
is dense in

spa e. The values of the grid fun tion {η̄}h are set to be the values of η at the grid points.

It is lear that {η̄}h onverges uniformly to η as h tends to zero. This implies that all the

terms in (1.77) (the terms on the boundary in the right hand side are onsidered as the

boundary term) onverge to the orresponding terms in (1.19). This means that u is a

weak solution of the Neumann problem (1.46) whi h is unique. Thus, every subsequen e

of {ūˆ}h onverges to the same fun tion u. Hen e the sequen e {ūˆ}h itself onverges to u.
2) When the onditions of the se ond part of the theorem are fullled, we have the a priori

estimate (1.94). Therefore, from the part one of the theorem, the sequen e {ūˆ}h onverges
1,0 1,1
to the solution u ∈ H (Q) whi h belongs to H (Q), due to Theorem 1.1.2. On the

other hand, from (1.94), the sequen e {ūˆ}h of the multi-linear interpolations is bounded in
H (Q), due to Lemma 1.4.2. Hen e, {ūˆ}h weakly onverges to u = u(x, t) ∈ H 1,1 (Q). It
1,1

ˆ}hκ strongly onverges to u = u(x, t) ∈ L2 (Q). Furthermore, ûh |S onverges


follows that {ū
2
to u|S in L (S).

1.4.3. Dis retization in time and splitting dieren e s heme


Next, we dis rete the time variable t. We subdivide [0, T ] into M subintervals by the points
ti , i = 0, . . . , M, t0 = 0, t1 = ∆t, ..., tM = M∆t = T. To simplify the notation, we set
uk,m := uk (tm ). In this part, if there is no onfusion, we will drop the spa e index. Denote

F m := F̄ (tm ).
In order to obtain the splitting dieren e s heme for the Cau hy problems (1.75) and (1.79),

36
we set um+δ := ū(tm + δ∆t), Λm
i := Λi (tm + ∆t/2). We introdu e the following two ir le

omponent-by- omponent splitting s heme [54℄

1 1
um+ 2n − um um+ 2n + um
∆t
+ Λm
1 = 0,
2
2
2 1 2 1
um+ 2n − um+ 2n um+ 2n − um+ 2n
∆t
+ Λm
2 = 0,
2
2 (1.111)

···
1 n−1 1 n−1
um+ 2 − um+ 2n um+ 2 − um+ 2n 1 ∆t m m
+ Λm
n = Fm + Λ F ,
∆t
2
2 2 8 n
n+1 1 n+1 1
um+ 2n − um+ 2 um+ 2n − um+ 2 1 ∆t m m
+ Λm
n = Fm − Λ F ,
∆t
2
2 2 8 n
···
2n−1 2n−2 2n−1 2n−2
um+ 2n − um+ 2n um+ 2n + um+ 2n

∆t
+ Λm
2 = 0,
2
2
m+ 2n−1 2n−1
um+1 − u 2n um+1 − um+ 2n

∆t
+ Λm
1 = 0,
2
2
u0 = v̄.
Equivalently,

 ∆t m  m+ 1  ∆t m  m
E1 + Λ u 2n = E1 − Λ u ,
4 1 4 1
 
∆t m m+ 2  ∆t m  m+ 1
E2 + Λ u 2n = E2 − Λ u 2n ,
4 2 4 2
...
 ∆t m  m+ 1 ∆t m   ∆t  m+ n−1
En + Λn u 2 − F = En − Λm u 2n ,
4 2 4 n
 ∆t m  m+ n+1  ∆t m  m+ 1 ∆t m 
En + Λn u 2n = En − Λ u 2+ F , (1.112)
4 4 n 2
...
 ∆t  m+ 2n−1  ∆t m  m+ 2n−2
E2 + Λm u 2n = E 2 − Λ u 2n ,
4 2 4 2
 ∆t  m+1  ∆t m  m+ 2n−1
E1 + Λm u = E 1 − Λ u 2n ,
4 1 4 1
u0 = v̄.

where Ei is the identity matrix orresponding to Λi , i = 1, . . . , n. The splitting s heme

(1.112) an be rewritten in the ompa t form



um+1 = Am um + ∆tB m F m ,
(1.113)
u0 = v̄,

37
with

Am = Am m m m
1 · · · An An · · · A1 ,

B m = Am m
1 · · · An ,

∆t m −1 ∆t m
where Am
i = (Ei + 4
Λi ) (Ei − 4
Λi ). The stability of the splitting s heme (1.113) is
given in the following theorem.

Theorem 1.4.12. The splitting s heme (1.113) is stable.

Proof. Indeed, from the rst equation of (1.113), we have

kum+1 k ≤ kAm um k + ∆tkB m F m k


≤ kAm kkumk + ∆tkB m kkF m k
≤ kAm m m m m m m m
1 k . . . kAn kkAn k . . . kA1 kku k + ∆tkA1 k . . . kAn kkF k.

Moreover, applying Kellogg's Lemma [54, Theorem 2.1, p. 220℄, we obtain

∆t m −1 ∆t m
kAm
i k = k(Ei + Λi ) (Ei − Λ )k ≤ 1.
4 4 i
Hen e, we get

kum+1 k ≤ kum k + ∆tkF m k,


kum k ≤ kum−1 k + ∆tkF m−1 k,
...
ku1 k ≤ ku0 k + ∆tkF 0 k.

Putting kvk = ku0 k and kf k = max kF m k, we have


m

kum+1 k ≤ kvk + m∆tkf k.

Consequently, the s heme is stable.

Next, we introdu e the dis retized Green's formula for the dire t and adjoint problems.

Theorem 1.4.13. Let u be a solution of the dieren e s heme



um+1 = Am um + F m , m = 0, 1, . . . , M − 1,
(1.114)
u0 = C,

and η a solution of the adjoint problem



η m = (Am+1 )∗ η m+1 + K m+1 , m = 0, 1, . . . , M − 1,
(1.115)
η M = D.

Then, the dis retized Green's formula has form


M
X −1 M
X
0 0 ∗ 0 m m M M ∗ M m
hu , (A ) η i+ hF , η i+hC, η i = h(A ) η , u i+ hK m , um i+hD, u0i. (1.116)
m=0 m=1

38
Proof. Multiplying both sides of the rst equation of (1.114) with η m , m = 0, 1, . . . , M −1,
and summing the results over m, we have

M
X −1 M
X −1 M
X −1
m+1 m m m m
hu ,η i = hA u , η i + hF m , η m i.
m=0 m=0 m=0

Multiplying both sides of the se ond equation of (1.114) with ηM we obtain

hu0 , η M i = hC, η M i.

Hen e, it follows from (1.114) that

M
X −1 M
X −1 M
X −1
m+1 m 0 M m m m
hu , η i + hu , η i = hA u , η i + hF m , η m i + hC, η M i. (1.117)
m=0 m=0 m=0

Multiplying both sides of the rst equation of (1.115) with um+1 , m = 0, 1, . . . , M − 1,


summing the results over m, we have

M
X −1 M
X −1 M
X −1
m m+1 m+1 ∗ m+1 m+1
hη , u i= h(A ) η ,u i+ hK m+1 , um+1 i.
m=0 m=0 m=0

Multiplying both sides of the se ond equation of (1.115) with u0 we get

hη M , u0 i = hD, u0i.

Hen e, it follows from the adjoint problem (1.115) that

M
X −1 M
X −1 M
X −1
m m+1 M 0 m+1 ∗ m+1 m+1
hη , u i+hη , u i = h(A ) η ,u i+ hK m+1 , um+1 i+hD, u0i,
m=0 m=0 m=0

or

M
X −1 M
X M
X
hη m , um+1 i + hη M , u0 i = h(Am )∗ η m , um i + hK m , um i + hD, u0i. (1.118)
m=0 m=1 m=1

It follows from (1.117) and (1.118) that

M
X −1 M
X −1 M
X M
X
m m m m m M m ∗ m m
hA u , η i + hF , η i + hC, η i = h(A ) η , u i + hK m , umi + hD, u0i.
m=0 m=0 m=1 m=1

Consequently, we have the dis retized Green's formula as follows

M
X −1 M
X
hu0, (A0 )∗ η 0 i+ hF m , η m i+hC, η M i = h(AM )∗ η M , uM i+ hK m , umi+hD, u0i.
m=0 m=1

The proof is omplete.

1.5 Approximation of the variational problems


As stated in Introdu tion, the inverse problems onsidered in this thesis are that of de-

termining the initial ondition v in 1) the Diri hlet problem (0.8)(0.10) from either the

39
observation of its solution u at the nal time, or integral observations, 2) in the Neumann

problem (0.8), (0.9), (0.11) from the observation of its solution u at a part of the boundary
S. The solution u to the Diri hlet and Neumann problems is understood in the weak sense

as in Denitions 1.1.7 and 1.1.8.

Thus, the observation operator C is either

i) the nal time observation operator Cu(v) = u(x, T ; v): v ∈ L2 (Ω) → Cu(v) ∈ H :=
L2 (Ω),
ii) or the integral operator Cu(v) = (l1 u(x, t; v), l2 u(x, t; v), . . . , lN u(x, t; v)), where li are
2
dened by (0.22). In this ase, the operator C maps v ∈ L (Ω) to Cu(v) ∈ H =
2 N
(L (0, T )) ,

iii) or the tra e of the solution u on a part of the boundary Cu(v) = u(x, t; v)|Γ. Thus, C
2
maps v ∈ L (Ω) to Cu(v) ∈ H = L2 (Γ).
Our data assimilation problem is that of re onstru ting the initial ondition v when the

observation Cu(v) is given by some z ∈ H.


o
Denoting the solution to (0.8)(0.10) (or (0.8), (0.9), (0.11)) with v ≡ 0 by u, we see that
2 o
the operator from v ∈ L (Ω) to Cv := Cu(v) − C u ∈ H is bounded and linear. Thus, the

above inverse problems lead to the linear operator equation

o
Cv = z − C u. (1.119)

o
Set z̃ = z − C u. To get a stable approximation to v we minimize the Tikhonov fun tional

1 γ
Jγ (v) = kCv − z̃k2H + kv − v ∗ k2L2 (Ω) (1.120)
2 2
with respe t to v ∈ L2 (Ω). Here, v∗ is an estimation to v. The minimizer to this optimiza-

tion problem is hara terized by the optimality ondition:

Jγ′ (v) = C ∗ (Cv − z̃) + γ(v − v ∗ ) = 0, (1.121)

where C∗ is the adjoint operator of C. We denote the solution to this problem by vγ .


Suppose that Ch is an approximation to C su h that

k(C − Ch )vkH ≤ Φ1 (h)kvkL2 (Ω) ∀v ∈ L2 (Ω) (1.122)

with Φ1 (h) → 0 as h → 0. Also suppose that Cˆh∗ is an approximation to Ch∗ su h that

kC ∗ ω − Cˆh∗ ωkL2 (Ω) ≤ Φ2 (h)kωkH ∀ω ∈ H (1.123)

with Φ2 (h) → 0 as h → 0.
We approximate the problem (1.120) by
 1 γ 
min Jγh (v) := kCh v − z̃k2H + kv − vh∗ k2L2 (Ω) (1.124)
v 2 2
whi h is hara terized by the rst-order optimality ondition

Ch∗ (Ch vhγ − z̃) + γ(vhγ − vh∗ ) = 0. (1.125)

40
Here, vh∗ ∈ L2 (Ω) is an approximation to v∗ su h that

kvh∗ − v ∗ kL2 (Ω) ≤ Φ3 (h)kv ∗ kL2 (Ω) (1.126)

with Φ3 (h) → 0 as h → 0.
Let v̂hγ be the solution to the variational problem

Cˆh∗ (Ch v̂hγ − z δ ) + γ(v̂hγ − vh∗ ) = 0 (1.127)

with a perturbation zδ of z̃ satisfying

kz δ − z̃kH ≤ δ. (1.128)

Following [28, 30℄ we have the following result.

Theorem 1.5.1. Let vγ be the solution of the variational problem (1.121) and γ > 0. Then
the following error estimate holds
δ
kv γ − v̂hγ k ≤ Φ(h) + c ,
γ
where Φ(h) → 0 when h tends to zero.

Proof. In the proof c1 , c2 , . . . are generi positive onstants, h·, ·i and k · k denote the s alar
2
produ t and the norm in L (Ω), respe tively. From the equation (1.125) we have

γkvhγ k2 = hCh∗ (z̃ − Ch vhγ ) + γvh∗ , vhγ i


= hz̃, Ch vhγ iH + γhvh∗ , vhγ i − kCh vhγ k2H
≤ hz̃, Ch vhγ iH + γhvh∗ , vhγ i
≤ c1 kz̃kH kvhγ k + c2 γkvhγ k.

Hen e
c1
kvhγ k ≤ kz̃kH + c2 . (1.129)
γ
From the equations (1.125) and (1.127) we have

γ(vhγ − v̂hγ ) = Cˆh∗ (Ch v̂hγ − z δ ) − Ch∗ (Ch vhγ − z̃).

Therefore,

γkvhγ − v̂hγ k2 =hCˆh∗ Ch v̂hγ − Ch∗ Ch vhγ − Cˆh∗ z δ + Ch∗ z̃, vhγ − v̂hγ i
=hCˆh∗ Ch v̂hγ − C ∗ Ch v̂hγ + C ∗ Ch v̂hγ − Ch∗ Ch vhγ − Cˆh∗ z δ + Ch∗ z̃, vhγ − v̂hγ i
=h(Cˆ∗ − C ∗ )Ch v̂ γ , v γ − v̂ γ i + hC ∗ Ch v̂ γ − C ∗ Ch v γ , v γ − v̂ γ i
h h h h h h h h h

+ hCh∗ z̃ − Cˆh∗ z δ , vhγ − v̂hγ i.

Moreover, using the inequalities (1.123) and (1.129), we estimate the rst term of the right

41
hand side of the last quality as follows

h(Cˆh∗ − C ∗ )Ch v̂hγ , vhγ − v̂hγ i =h(Cˆh∗ − C ∗ )Ch (v̂hγ − vhγ ), vhγ − v̂hγ i
+h(Cˆh∗ − C ∗ )Ch v γ , v γ − v̂ γ i
h h h

≤k(Cˆh∗ − C )Ch (v̂h − vh )kkvhγ


∗ γ γ
− v̂hγ k
+k(Cˆh∗ − C ∗ )Ch vhγ kkvhγ − v̂hγ k
c 
1
≤c3 Φ2 (h)kvhγ − v̂hγ k2 + c4 Φ2 (h) kz̃kH + c4 kvhγ − v̂hγ k.
γ
Similarly, using the inequalities (1.122) and (1.129) we an estimate the se ond term by

hC ∗ Ch v̂hγ − Ch∗ Ch vhγ , vhγ − v̂hγ i =hCh v̂hγ , C(vhγ − v̂hγ )iH − hCh vhγ , Ch (vhγ − v̂hγ )iH
=hCh (v̂hγ − vhγ ), C(vhγ − v̂hγ )iH + hCh vhγ , (C − Ch )(vhγ − v̂hγ )iH
=h(Ch − C)(v̂hγ − vhγ ), C(vhγ − v̂hγ )iH − kC(v̂hγ − vhγ )k2H
+ hCh vhγ , (C − Ch )(vhγ − v̂hγ )iH
c 
1
≤c5 Φ1 (h)kvhγ − v̂hγ k2 + c6 Φ1 (h) kz̃kH + c2 kvhγ − v̂hγ k.
γ
Further, using the inequalities (1.122), (1.123) and (1.128), we obtain

hCh∗ z̃ − Cˆh∗ z δ , vhγ − v̂hγ i =h(C ∗ − Cˆh∗ )z δ , vhγ − v̂hγ i + hz̃ − z δ , C(vhγ − v̂hγ )iH
+ hz̃, (Ch − C)(vhγ − v̂hγ )iH
 
≤ c7 Φ2 (h) + c8 δ + c9 Φ1 (h) kvhγ − v̂hγ k.

Hen e

1  c
8
kvhγ − v̂hγ k ≤ c7 Φ2 (h) + c9 Φ1 (h) + δ. (1.130)
γ γ

Similarly, from (1.121), (1.122), (1.123), (1.126), (1.127) and (1.129), we have

γkv γ − v̂hγ k2 =hCˆh∗ Ch v̂hγ − C ∗ Ch v̂hγ + C ∗ Ch v̂hγ − C ∗ Cv γ − Cˆh∗ z δ + C ∗ z̃, v γ − v̂hγ i


γhv ∗ − vh∗ , v γ − v̂hγ i
=h(Cˆh∗ − C ∗ )Ch v̂hγ , v γ − v̂hγ i + hCh v̂hγ − Cv γ , C(v γ − v̂hγ )iH + hv ∗ − vh∗ , v γ − v̂hγ i
+ hC ∗ z̃ − Cˆ∗ z δ , v γ − v̂ γ i
h h

=h(Cˆh∗ − C )Ch v̂h , v − v̂hγ i + h(Ch − C)v̂hγ , C(v γ − v̂hγ )iH


∗ γ γ

+ hC(v̂hγ − v γ ), C(v γ − v̂hγ )iH + h(C ∗ − Cˆh∗ )z δ , v γ − v̂hγ )i


+ hz̃ − z δ , C(v γ − v̂hγ )iH + hv ∗ − vh∗ , v γ − v̂hγ i
≤h(Cˆh∗ − C ∗ )Ch v̂hγ , v γ − v̂hγ i + h(Ch − C)v̂hγ , C(v γ − v̂hγ )iH
+ h(C ∗ − Cˆ∗ )z δ , v γ − v̂ γ )i
h h
δ
+ hz̃ − z , C(v − γ
v̂hγ )i + hv ∗ − vh∗ , v γ − v̂hγ i
≤c10 Φ2 (h)kv γ − v̂hγ k + c11 Φ2 (h)kv γ − v̂hγ k
+ Φ2 (h)kz δ kH kv γ − v̂hγ k + c12 δkv γ − v̂hγ k + Φ3 (h)kv ∗ kkv γ − v̂hγ k.

Combining this inequality with (1.130) we arrive at the statement of the theorem.

42
1.6 Lan zos' algorithm for approximating singular values
The problem (1.119) is ill-posed (as we will see in the next hapters that the operator

C a ting from L2 (Ω) to H is ompa t) and its degree if ill-posedness is the asymptoti

behaviour of the singular values of C or of eigenvalues of C ∗C . However, no expli it form

of C is available and even if it is, it is not easy to estimate the asymptoti behaviour of

its singular values. As we will approximate C by some nite-dimensional operators Ch , the


variational problem (1.120) is approximated by the problem (1.124). If we take γ = 0,
z̃ = 0, then

J0h (v) = Ch∗ Ch v



(1.131)

whi h an be al ulated via an adjoint problem for any v. Therefore, we an apply Lan zos'

algorithm [81℄ to estimate the eigenvalues of Ch Ch based on its value Ch∗ Ch v . This approa h

seem to be appli able to many problems and e ient as the results of the next hapters

show. Lan zos' algorithm onsists of the following steps [81℄:

Step 1. (Initialization.)
b
Let β0 = 0, q0 = 0 and an arbitrary ve tor b, al ulate q1 = .
kbk
Put Q = q1 and k = 0.

Step 2. k = 1, 2, 3, . . . . Cal ulating v = Ch∗ Ch qk .


Let

Cal ulating αk = qk v and orthogonalizing the ve tor v twi e

v = v − QQv,
v = v − QQv.

Cal ulating βk = kvk and al ulating ve tor

v
qk+1 = .
βk

The result of Ÿ1.4 of this hapter is written on the basis of the paper

[26℄ Hào D.N. and Oanh N.T.N., Determination of the initial ondition in paraboli equa-

tions from boundary observations. Journal of Inverse and Ill-Posed Problems 24(2016),

no. 2, 195220.

43
Chapter 2

Data assimilation by the nal time


observations

In this hapter, we study the data assimilation problem of re onstru ting the initial on-

dition in the Diri hlet problem (1.7)(1.9) from the observation of its solution at the nal

time. We reformulate it as a variational problem of minimizing a mist fun tional. As

the variational problem is ill-posed, we stabilize it by the Tikhonov regularization method.

It is proved that the regularized fun tional is Fré het dierentiable and a formula for its

gradient is derived via an adjoint problem. The variational problem is rst dis retized in

spa e variables and the onvergen e results of the method are proved. The problem is then

fully dis retized and it is proved that the dis retized fun tional is Fré het dierentiable. A

formula for its gradient is derived via a dis rete adjoint problem and the onjugate gradient

method is applied to numeri ally solve the dis retized variational problem. Some numeri al

examples are tested on omputer to prove the e ien y of the proposed algorithm.

2.1 Problem setting and the variational method


For the ease of reading, we rewrite the Diri hlet problem (1.7)(1.9) with the new indexing:

 !

 ∂u Pn ∂ ∂u

 − aij (x, t) + b(x, t)u = f in Q,
 ∂t i,j=1 ∂xi ∂xj
(2.1)

 u|t=0 = v in Ω,



u = 0 on S.

The oe ients aij , b and the data f and v satisfy the onditions (1.1)(1.6). Under these

onditions, (2.1) has a unique weak solution u ∈ W (0, T ; H01(Ω)) in the sense of Denition

1.1.7.

Data assimilation: Suppose that v is not given and we have to re onstru t it from the

observation of the solution u to (2.1) at the nal time Cu := u(·, T ) = ξ(·) ∈ L2 (Ω).

44
From now on, to emphasize the dependen e of the solution u on the initial ondition v, we

denote it by u(v) or u(x, t; v).

Remark 2.1.1. The operator C maps v ∈ L2 (Ω) to u(x, T ; v) is ompa t from L2 (Ω) to
L2 (Ω). Hen e the problem of
2
re onstru ting v from u(·, T ; v) = ξ(·) ∈ L (Ω) is ill-posed.

Proof. Sin eu ∈ W (0, T ; H01(Ω)), we have u(·, T ; v) ∈ H01 (Ω). However, H01 (Ω) is om-
2 2 2
pa tly embedded in L (Ω), therefore the operator C from L (Ω) to L (Ω) is ompa t.

To analyse the degree of the ill-posedness of the problem we have to study behaviour of

the singular values of "the linear part" C of the ane operator mapping v ∈ L2 (Ω) to
2
u(x, T ; v) ∈ L (Ω) whi h is dened as presented in Introdu tion and in Ÿ1.5 as follows:
o
denote by u the solution to (2.1) with v ≡ 0, then

o o
Cv := Cu − C u = u(x, T ; v) − u(x, T )

is bounded and linear from L2 (Ω) to L2 (Ω). It is lear that C is ompa t. Sin e there is no

expli it form for C and the analysis of its singular values is not trivial, we therefore suggest

a numeri al s heme to approximate the singular values of C. Before doing so, we des ribe

the variational method for nding the initial ondition v in from u(x, T ; v) by minimizing

the mist fun tional


1
J0 (v) := ku(·, T ; v) − ξk2L2(Ω) (2.2)
2
over L2 (Ω), subje t to the system (2.1). As the inverse problem is ill-posed, this variational

problem is so, therefore, we use the Tikhonov regularization method to stabilize it by

minimizing the Tikhonov regularization fun tional

γ 1 γ
Jγ (v) := J0 (v) + kv − v ∗ k2L2 (Ω) = ku(·, T ; v) − ξk2L2 (Ω) + kv − v ∗ k2L2 (Ω) , (2.3)
2 2 2
over L2 (Ω) with γ > 0 being the regularization parameter and v∗ an approximation of the

initial ondition v , whi h an be set to zero.


Now we prove that Jγ (v) is Fré het dierentiable and derive a formula for its gradient ∇Jγ .
In doing so, we introdu e the adjoint problem
 !

 ∂p P
n ∂ ∂p

 − − aij (x, t) + b(x, t)p = 0 in Q,
 ∂t i=1 ∂xj ∂xi
(2.4)

 p(x, T ) = u(x, T ; v) − ξ(x) in Ω,



p = 0 on S.

Sin e u(·, T ; v) − ξ(·) ∈ L2 (Ω), due to Theorem 1.1.1, there exists a unique solution p∈
W (0, T ; H01(Ω)).

Theorem 2.1.1. The fun tional Jγ is Fré het dierentiable and its gradient is given by

∇Jγ (v) = p(x, 0) + γ(v − v ∗ ), (2.5)

where p(x, t) is the solution to the adjoint problem (2.4).

45
Proof. For a small variation δv , we have

1 1
J0 (v + δv) − J0 (v) = ku(v + δv) − ξk2L2 (Ω) − ku(v) − ξk2L2 (Ω)
2 2
1 1
= ku(v + δv) − u(v) + u(v) − ξk2L2 (Ω) − ku(v) − ξk2L2 (Ω)
2 2
1
= ku(v + δv) − u(v)k2L2 (Ω) + hu(v + δv) − u(v), u(v) − ξiL2 (Ω) (2.6)
2
1
= kδu(v)k2L2 (Ω) + hδu(v), u(v) − ξiL2 (Ω)
2 Z
1  
2
= kδu(v)kL2 (Ω) + δu(v) u(v) − ξ dx
2 Ω

where δu(v) = u(v + δv) − u(v) is the solution to the problem


 !

 ∂δu Pn ∂ ∂δu

 − aij (x, t) + b(x, t)δu = 0 in Q,
 ∂t i=1 ∂xi ∂xj
(2.7)

 δu = 0 on S,



δu(x, 0) = δv(x) in Ω.

Applying the a priori estimate (1.15) of Theorem 1.1.1 to the solution of the problem (2.7),

we obtain that there exists a onstant cD su h that kδukW (0,T ;H 1(Ω)) ≤ cD kδvkL2(Ω) . Hen e
1
2
kδu(v)k2L2(Ω) = o(kδvkL2(Ω) ).
Furthermore, using Green's formula (1.22) of Theorem 1.2.1 for (2.4) and (2.7), we obtain
Z   Z
u(v) − ξ δu(x, T )dx = δvp(x, 0)dx.
Ω Ω

Combining it with (2.6), we have


Z
J0 (v + δv) − J0 (v) = o(kδvkL2 (Ω) ) + δvp(x, 0)dx = o(kδvk2L2 (Ω) ) + hδv, p(x, 0)iL2(Ω) .

Thus, J0 is Fré het dierentiable and

∇J0 (v) = p(x, 0)

where p(x, t) is the solution to the adjoint problem (2.4). It follows the formula (2.5) for

the gradient of the fun tional Jγ (v). The proof is omplete.

We note that

1
J0 (v) = ku(·, T ; v) − ξk2L2 (Ω)
2
1 o
= kCv − (ξ − C u)k2L2 (Ω) .
2
Hen e
o
∇J0 (v) = C ∗ C(v − (ξ − C u)).
o
Thus, if we take ξ = C u, then

∇J0 (v) = C ∗ Cv,

46
where C∗ is the adjoint operator of C. On the other hand, from the above theorem,

∇J0 (v) = C ∗ Cv = p(x, 0),

where p = p(x, t; v) is the solution to the adjoint problem


 !

 ∂p P
n ∂ ∂p

 − − aij (x, t) + b(x, t)p = 0 in Q,
 ∂t i=1 ∂xj ∂xi
o (2.8)

 p(x, T ) = u(x, T ; v) − u(x, T ) in Ω,



p = 0 on S.

Thus, for any v ∈ L2 (Ω) we an always evaluate C ∗ Cv via a dire t problem for al ulating
ū(v) and an adjoint problem for obtaining p(x, 0; v). When we dis retize the problem, C
has a form of a matrix, therefore we an apply Lan zos' algorithm Ÿ1.6 to approximate

eigenvalues of C∗C. The numeri al results will be presented in Ÿ2.4.

2.2 Dis retization of the variational problem in spa e variable


In this se tion we suppose that the rst equation of the Diri hlet problem (2.1) has no

mixed derivative, that is aij = 0 if i 6= j , and we denote aii by ai as in Ÿ1.4 and get

the system (1.46). Furthermore, we assume Ω is an open parallelepiped dened in Ÿ1.4.1.

We dis retize this problem by the nite dieren e method in spa e variables and get the
o
system of ordinary dierential equations (1.75). Denote ξ˜ = ξ − u(x, T ). Then J0 (v) =
o  o 
1 ˜ L2 (Ω) . We approximate the
k u(x, T ; v) − u(x, T ) − ξ − u(x, T ) kL2 (Ω) = 12 kCv − ξk
2
fun tional Jγ (v) by

1 X γ X k
Jγh (v̄) = h |ū(T ; v̄) − ξ¯k |2 + h |v̄ − v¯∗ k |2 (2.9)
2 k∈Ω 2 kΩ
h h

and minimize this fun tional over all grid fun tions v̄ . Set Ch v̄ = ūˆ(x, T ; v̄), where ūˆ is the
pie ewise linear interpolation of ū dened in Subse tion 1.4.1. Then, if the ondition 2) of

Theorem 1.4.8 is satised, we see that all onditions in Subse tion 1.5 for Theorem 1.5.1

are satised. As we do not allow noise in the data ξ yet, we have the following result on

the onvergen e of the solution of the dis retized optimization problem (2.9).

Theorem 2.2.1. Let ondition 2) of Theorem 1.4.8 is satised. Then the linear interpo-
lation ūˆ of the solution uγ of the problem (2.9) onverges in L2 (Ω) to the solution v γ of the
problem (2.3) as h tends to zero.

2.3 Full dis retization of the variational problem


To solve the variational problem numeri ally we need to dis retize the system (1.46) in time.

We use the splitting method in Subse tion 1.4.3. to get the s heme (1.111) or (1.112). Now,

47
we dis retize the obje tive fun tional J0 (v) as follows

1 X k,M
J0h (v̄) := [u (v̄) − ξ k ]2 . (2.10)
2 k∈Ω
h

Here we use the notation uk,M (v̄) to indi ate the dependen e of the solution to the initial

ondition v̄ and M represents the index of the nal time. We drop the multiplier h as it
h
does not play any role here. Furthermore, we use the same notation J0 as in the previous

se tion, although it depends also on the time mesh size ∆t.

2.3.1. The gradient of the obje tive fun tional


We will solve the minimization problem (2.10) by the onjugate gradient method. For

this purpose, we need to al ulate the gradient of the obje tive fun tion J0h . In this
k
subse tion, we use the following inner produ t of two grid fun tions u := {u , k ∈ Ωh } and

v := {v k , k ∈ Ωh } X
(u, v) := uk v k . (2.11)
k∈Ωh

The following theorem give a formula for the gradient of the obje tive fun tional (2.10).

Theorem 2.3.1. The gradient ∇J0h (v̄) of the obje tive fun tional J0h at v̄ is given by
∗
∇J0h (v̄) = A0 η 0 , (2.12)

where η = (η 0 , . . . , η M ) satises the adjoint problem



η M −1 = uM (v̄) − ξ
(2.13)
η m = (Am+1 )∗ η m+1 , m = M − 2, M − 3, . . . , 0.

Here, the matrix (Am )∗ has form


∆t m ∆t m −1 ∆t m ∆t m −1
(Am )∗ = (E1 − Λ1 )(E1 + Λ ) ...(En − Λn )(En + Λ )
4 4 1 4 4 n (2.14)
∆t m ∆t m −1 ∆t m ∆t m −1
× (En − Λn )(En + Λ ) ...(E1 − Λ1 )(E1 + Λ ) .
4 4 n 4 4 1
Proof. For a small variation δv̄ of v̄ , it follows from (2.10) that

1 X k,M 1 X k,M
J0h (v̄ + δv̄) − J0h (v̄) = [u (v̄ + δv̄) − ξ k ]2 − [u (v̄) − ξ k ]2
2 2
k∈Ωh k∈Ωh

1 X k,M 2 X k,M k,M


= w + w [u (v̄) − ξ k ]
2 k∈Ω k∈Ω
h h
(2.15)
1 X k,M 2 X k,M k
= v + w ψ
2 k∈Ω k∈Ω
h h

1 X k,M 2
= w +(w M , ψ).
2 k∈Ω
h

48
wherew k,m := uk,m (v̄ + δv̄) − uk,m(v̄), k ∈ Ωh , m = 0, . . . , M , w m := {v k,m, k ∈ Ωh } and
ψ = uM (v̄) − ξ . It follows from (1.111) that w is the solution to the problem

w m+1 = Am w m , m = 0, .., M − 1,
(2.16)
w 0 = δv̄.

Taking the inner produ t of both sides of the mth equation of (2.16) with an arbitrary
m N1 ×...×Nn
ve tor η ∈R , then summing the results over m = 0, ..., M − 1, we obtain

M
X −1 M
X −1
m+1 m
(w ,η ) = (Am w m , η m )
m=0 m=0
(2.17)
M
X −1
∗
= (w m, Am η m ).
m=0
∗
Here Am is the adjoint matrix of Am . Consider the adjoint problem

η m = (Am+1 )∗ η m+1 , m = M − 2, M − 1, . . . , 0,
(2.18)
η M −1 = ψ.

Taking the inner produ t of the both sides of the rst equation of (2.18) with an arbitrary

ve tor w m+1 , then summing the results over m = 0, ..., M − 2, we obtain

M
X −2 M
X −2
(w m+1 m
,η ) = (w m+1 , (Am+1 )∗ η m+1 ). (2.19)
m=0 m=0

Taking the inner produ t of the both sides of the se ond equation of (2.18) with the ve tor

vM , we have

(w M , η M −1) = (w M , ψ). (2.20)

It follows from (2.19) and (2.20) that

M
X −2 M
X −2
(w m+1 , η m ) + (w M , η M −1) = (w m+1 , (Am+1 )∗ η m+1 ) + (w M , ψ),
m=0 m=0

or, equivalently
M
X −1 M
X −1
(w m+1 m
,η ) = (w m , (Am )∗ η m ) + (w M , ψ). (2.21)
m=0 m=1
From (2.17) and (2.21) we obtain
∗ ∗
(w M , ψ) = (w 0 , A0 η 0 ) = (δv̄, A0 η 0 ). (2.22)
P 2
On the other hand, it an be proved by indu tion that w k,M = o(kδv̄k). Hen e, it
k∈Ωh
follows from (2.15) and (2.22) that
∗
J0h (v̄ + δv̄) − J0h (v̄) = (δv̄, A0 η 0 ) + o(kδv̄k). (2.23)

Consequently, the gradient of the obje tive fun tional J0h an be written as

∂J0h (v̄) ∗
= A0 η 0 . (2.24)
∂v̄
49
Note that, sin e the matri es Λmi , i = 1, ..., n are symmetri , we have for m = 0, ..., M − 1
 ∆t m −1 ∆t m ∗
m ∗
(Ai ) = (Ei + Λ ) (Ei − Λ )
4 i 4 i
 ∆t m ∗   ∆t m −1 ∗
= (Ei − Λ ) (Ei + Λ )
4 i 4 i
∆t m ∆t m −1
= (Ei − Λi )(Ei + Λ ) .
4 4 i
Hen e

(Am )∗ = (Am m m m ∗
1 ...An An ...A1 )
∗ m ∗ m ∗ m ∗
= (Am
1 ) ...(An ) (An ) ...(A1 )
∆t m ∆t m −1 ∆t m ∆t m −1
= (E1 − Λ1 )(E1 + Λ1 ) ...(En − Λn )(En + Λ )
4 4 4 4 n
∆t m ∆t m −1 ∆t m ∆t m −1
× (En − Λn )(En + Λn ) (E1 − Λ1 )(E1 + Λ ) .
4 4 4 4 1
The proof is omplete.

2.3.2. Conjugate gradient method


Denote by ǫ the noise level of the measured data. Using the well-known onjugate gradient

method with an a posteriori stopping rule introdu ed by Nemirovskii [59℄, we re onstru t

the initial ondition v̄ from the measured nal state uM = ξ by the following steps:

Step 1 (initialization): Given an initial guess v 0 and a s alar α > 1, al ulate the residual
r̂ 0 = u(v 0) − ξ by solving the splitting s heme (1.111) with the initial ondition v̄ being
0
repla ed by the initial guess v .

If kr̂ 0 k ≤ αǫ, stop the algorithm. Otherwise, set i = 0, d−1 = (0, . . . , 0) and go to Step 2.

Step 2. Cal ulate the gradient r i = ∇J0k,M (v i ) given in (2.12) by solving the adjoint

problem (2.13). Then set

di = −r i + βi−1 di−1 , (2.25)

where 
i 2
β i−1 = k r k

for i ≥ 1,
k r i−1 k2 (2.26)

 −1
β = 0.
Step 3. Cal ulate the solution ūi of the splitting s heme (1.111) with v̄ being repla ed by

di , put
k r i k2
αi = . (2.27)
k (ūi )M k2
Then, set

v i+1 = v i + αi di . (2.28)

The residual an be al ulated by

r̂ i+1 = r̂ i + αi (ūi )M . (2.29)

50
Step 4. If k r̂ i+1 k≤ αǫ, stop the algorithm (Nemirovskii's stopping rule). Otherwise, set

i := i + 1, and go ba k to Step 2.
We note that (2.29) an be derived from the equality

r̂ i+1 = uM (v i+1 ) − ξ = uM (v i + αi di ) − ξ
(2.30)
= r̂ i + uM (αi di ) = r̂ i + αi (ūi )M .

2.4 Numeri al results


To illustrate the performan e of the proposed algorithm, we present in this se tion some

numeri al tests to estimate the singular values and to re onstru t the initial ondition v(x).

2.4.1. Approximations of the singular values


Let Ω = (0, 1) and T = 1. Consider the system


ut − (a(x, t)ux )x = f (x, t)
 in Q,
u(0, t) = u(1, t) = 0 t ∈ (0, T ], (2.31)



u(x, 0) = v(x) in Ω

with a(x, t) = 2xt + x2 + 1.


Example 1. o
u the solution to (2.31) for v ≡ 0. As presented above the operator
Denote by
o
Cv = u(x, T ; v) − u(x, T ) is linear and ompa t from L2 (Ω) to L2 (Ω). Furthermore,

C ∗ Cv = p(x, 0),

where p is the solution to the adjoint problem




−pt − (a(x, t)px )x = 0
 in Q,
p(0, t) = p(1, t) = 0 t ∈ (0, T ], (2.32)


 o
p(x, T ) = u(x, T, v) − u(x, T ). in Ω

Thus, to approximate the singular values of C we have to approximate (2.31) and (2.32) to

get an approximation to C in a matrix form, say Ch , and Ch∗ Ch v and then apply Lan zos'

algorithm to approximate eigenvalues of Ch Ch . In doing so, we subdivide the domain Ω into
50 uniform subintervals and the time interval (0, 1) into 50 subintervals and then apply the
method presented in Chapter 1 to solve (2.31) and (2.32).

The result of estimating the singular values after 51 iterations is presented in Figure 2.1.

We an see that the singular value of the operator is very small, the 51th value is about
−15
10 . The ondition number (the quotient of the rst singular value and the 51th singular
10
value) is about 3.38431 × 10 .

51
−4
10

−6
10

−8
10

−10
10

−12
10

−14
10

−16
10
0 5 10 15 20 25 30 35 40 45 50 55

Figure 2.1: Example 1: Singular values.

2.4.2. Numeri al examples for two-dimensional problems


Set Ω = (0, 1) × (0, 1) and T = 1. Set x = (x1 , x2 ). Consider the system


ut − (a1 (x, t)ux1 )x1 − (a2 (x, t)ux2 )x2 + b(x, t)u = f (x, t),

u(0, x2 , t) = u(1, x2 , t) = u(x1 , 0, t) = u(x1 , 1, t) = 0, (2.33)



u(x, 0) = v(x1 , x2 ).

We test our algorithm for dierent kinds of the initial onditions: 1) very smooth (Exam-

ple 2), 2) ontinuous but not smooth (Example 3) and 3) dis ontinuous (Example 4 and

Example 5). For the nite dieren e method for solving the dire t problem, we take the

spatial mesh size h = (0.02, 0.02) and the time mesh size ∆t = 0.02.
Example 2. In this example, we take

1
a1 (x, t) = 0.02[1 − (1 − t) cos(15πx1 ) cos(15πx2 )],
2
1
a2 (x, t) = 0.01[1 − (1 − t) cos(15πx1 ) cos(15πx2 )],
2
2 2
b(x, t) = x1 + x2 + 2x1 t + 1,

and the exa t solution

uexa t = sin(πx1 ) sin(πx2 ) × (1 − t/2).

After inserting the exa t solution to the system (2.33), we get the exa t initial ondition

v(x) = sin(πx1 ) sin(πx2 ),

52
and the right hand side

v(x) t 1−t 
f (x, t) = − 1 − 0.06 1 − 1− cos(15πx1 ) cos(15πx2 )
2 2 2
 t
− (x1 + x2 + 2x1 t + 1)(2 − t) − 0.075π 2(1 − )(1 − t)
2 2

 2
× 2 cos(πx1 ) sin(πx2 ) sin(15πx1 ) cos(15πx2 )

− sin(πx1 ) cos(πx2 ) cos(15πx1 ) sin(15πx2 ) .

The numeri al results of re onstru ting the initial ondition v(x) for Example 2 are pre-

sented in Figure 2.2.

Exact function v Estimated function v

1 1

0.5 0.5
v(x)

v(x)
0 0

−0.5 −0.5

0.81 0.81
0.81 0.81
0.41 0.41
0.41 0.41

x 0.01 0.01 x 0.01 0.01


2 x 2 x
1 1

(a) (b)

Error

1
0.8

0.5 0.6

0 0.4

0.2
−0.5

0.81
0.81 0

0.41 exact sol.


0.41 appr. sol.
−0.2
x 0.01 0.01 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
2 x x2
1

( ) (d)

Figure 2.2: Example 2: Re onstru tion results: (a) exa t fun tion v ; (b) estimated one; ( ) point-wise error;
(d) the omparison of v|x1 =1/2 and its re onstru tion (the dashed urve: the exa t fun tion, the solid urve: the
estimated fun tion).

In the next examples we present our tests for re onstru ting nonsmooth initial onditions

v. We generate the examples by hoosing v


f = 0, then applying the splittingand setting

method for getting approximations to the exa t solution u. Taking numeri al solutions to

u(x, T ) as the data, we apply our algorithm for re onstru ting v . For solving the dire t
and inverse problems we use two dierent mesh sizes to avoid "inverse rime".

53
Example 3. In this example, v(x) is hosen as a multi-linear fun tion given by


 2x2 , if x2 ≤ 1/2 and x2 ≤ x1 and x1 ≤ 1 − x2 ,



2(1 − x ),
2 if x2 ≥ 1/2 and x2 ≥ x1 and x1 ≥ 1 − x2 ,
v(x) =

 2x1 , if x1 ≤ 1/2 and x1 ≤ x2 and x2 ≤ 1 − x1 ,



2(1 − x ),
1 otherwise,

the oe ients a1 (x, t), a2 (x, t) and b(x, t) are given as

1
a1 (x, t) = 0.02[1 − (1 − t) cos(15πx1 ) cos(15πx2 )],
2
1
a2 (x, t) = 0.01[1 − (1 − t) cos(15πx1 ) cos(15πx2 )],
2
2 2
b(x, t) = x1 + x2 + 2x1 t + 1.

The numeri al results for Example 3 are presented in Figure 2.3.

Exact function v Estimated function v

1 1

0.5 0.5
v(x)

v(x)

0 0

−0.5 −0.5

0.81 0.81
0.81 0.81
0.41 0.41
0.41 0.41

x 0.01 0.01 x 0.01 0.01


2 x 2 x
1 1

(a) (b)

Error

1
0.8

0.5 0.6

0 0.4

0.2
−0.5

0.81
0.81 0

0.41 exact sol.


0.41 appr. sol.
−0.2
x 0.01 0.01 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
2 x x2
1

( ) (d)

Figure 2.3:Example 3: Re onstru tion result: (a) exa t fun tion v ; (b) estimated one; ( ) point-wise error; (d) the
omparison of v|x1 =1/2 and its re onstru tion (the dashed urve: the exa t fun tion, the solid urve: the estimated
fun tion).

In Examples 4 and 5, we test the algorithm with the pie ewise onstant initial ondition

54
given by 
1, if 1/4 ≤ x1 ≤ 3/4 and 1/4 ≤ x2 ≤ 3/4,
v(x) =
0, otherwise .
Example 4. The oe ients a1 (x, t) = a2 (x, t) = b(x, t) = 10−2 .
The numeri al results for Example 4 are presented in Figure 2.4

Exact function v Estimated function v

1 1

0.5 0.5
v(x)

v(x)
0 0

−0.5 −0.5

0.81 0.81
0.81 0.81
0.41 0.41
0.41 0.41

x 0.01 0.01 x 0.01 0.01


2 x 2 x
1 1

(a) (b)

Error

1.2

1
1

0.8

0.5
0.6

0
0.4

−0.5 0.2

0.81
0.81 0
0.41 exact sol.
0.41 appr. sol.
−0.2
x2 0.01 0.01 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
x1 x2

( ) (d)

Figure 2.4:Example 4: Re onstru tion result: (a) exa t fun tion v ; (b) estimated one; ( ) point-wise error; (d) the
omparison of v|x1 =1/2 and its re onstru tion (the dashed urve: the exa t fun tion, the solid urve: the estimated
fun tion).

Example 5. The oe ients

a1 (x, t) = 0.02(1 − 0.5(1 − t) cos(15πx1 ) cos(15πx2 )),


a2 (x, t) = 0.01(1 − 0.5(1 − t) cos(15πx1 ) cos(15πx2 )),
b(x, t) = x21 + x22 + 2x1 t + 1.

Figure 2.5 shows the numeri al results for Example 5.

55
Exact function v Estimated function v

1 1

0.5 0.5
v(x)

v(x)
0 0

−0.5 −0.5

0.81 0.81
0.81 0.81
0.41 0.41
0.41 0.41

x 0.01 0.01 x 0.01 0.01


2 x 2 x
1 1

(a) (b)

Error

1.2

1
1

0.8

0.5
0.6

0
0.4

−0.5 0.2

0.81
0.81 0
0.41 exact sol.
0.41 appr. sol.
−0.2
x 0.01 0.01 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
2 x x2
1

( ) (d)

Figure 2.5:Example 5: Re onstru tion result: (a) exa t fun tion v ; (b) estimated one; ( ) point-wise error; (d) the
omparison of v|x1 =1/2 and its re onstru tion (the dashed urve: the exa t fun tion, the solid urve: the estimated
fun tion).

This hapter is written on the basis of the paper

[61℄ Oanh N.T.N., A splitting method for a ba kward paraboli equation with time-

dependent oe ients. Computers & Mathemati s with Appli ations 65(2013), 1728.

56
Chapter 3

Data assimilation by the integral


observations

In this hapter, we study the data assimilation problem of determining the initial ondition

v(x) in the Diri hlet problem (1.7)(1.9) from N integral observations li u = hi (t), t ∈
(τ, T ), τ ≥ 0, i = 1, 2, . . . , N regarded as generalization of pointwise spatial observations:
1
R
Suppose that we are given N nonnegative weight fun tions ωi ∈ L (Ω) with ω (x)dx > 0
Ω i
and we an observe u by
Z
li u(x, t) = ωi (x)u(x, t)dx = hi (t), t ∈ [τ, T ], i = 1, . . . , N.

It is required to re onstru t the initial ondition v from these observations. As in the

previous hapter, we reformulate this problem as a variational problem aims of minimizing

a mist fun tional. As the variational problem is ill-posed, we stabilize it by the Tikhonov

regularization method. It is proved that the regularized fun tional is Fré het dierentiable

and a formula for its gradient is derived via an adjoint problem. The variational prob-

lem is rst dis retized in spa e variables and the onvergen e results of the method are

proved. The problem is then fully dis retized and it is proved that the dis retized fun -

tional is Fré het dierentiable. A formula for its gradient is derived via a dis rete adjoint

problem and the onjugate gradient method is applied to numeri ally solve the dis retized

variational problem. Some numeri al examples are provided to show the e ien y of the

proposed algorithm.

57
3.1 Problem setting and the variational method
As in the previous hapter, for the ease of reading, we rewrite the Diri hlet problem (1.7)

(1.9) with the new indexing:


 !

 ∂u Pn ∂ ∂u

 − aij (x, t) + b(x, t)u = f in Q,
 ∂t i,j=1 ∂xi ∂xj
(3.1)

 u|t=0 = v in Ω,



u = 0 on S.

We assume that the oe ients aij , b and the data f and v satisfy the onditions (1.1)(1.6).
The solution to this problem is understood in the weak sense of Denition 1.1.7. It has

been proved that under these onditions there exists a unique solution u ∈ W (0, T ; H01(Ω))
to (3.1).

Data assimilation: Suppose that v is not given and we have to re onstru t it from

the observation of the solution u to (3.1) from N integral observations li u = hi (t), t ∈


(τ, T ), τ ≥ 0, i = 1, 2, . . . , N , with
Z
li u(x, t) = ωi (x)u(x, t)dx = hi (t), t ∈ (τ, T ), i = 1, . . . , N. (3.2)

R
Here ωi ∈ L1 (Ω) with

ωi (x)dx > 0 are N given nonnegative weight fun tions.

Remark 3.1.1. If v ∈ H01 (Ω), aij , b ∈ C 1 ([0, T ]; L∞ (Ω)), i, j = 1, . . . , n and there exists
2
a onstant µ1 su h that |∂aij /∂t|, |∂b/∂t| ≤ µ1 , then the operator C maps v ∈ L (Ω) to

(l1 u(x, t; v), · · · , lN u(x, t; v)) is ompa t from L2 (Ω) to (L2 (τ, T ))N . Hen e the problem of
2 N
re onstru ting v from Cu(v) = (h1 , · · · , hN ) ∈ (L (τ, T )) is ill-posed.

Proof. If the ondition 2) of Theorem 1.1.1 is satised, then u ∈ H01,1 (Q). Therefore,
(l1 u(x, t; v), · · · , lN u(x, t; v)) ∈ (H 1 (τ, T ))N whi h is ompa tly embedded in (L2 (τ, T ))N .
2 2 N
Hen e, the operator C from L (Ω) to (L (τ, T )) is ompa t.

Now we analyze the degree of ill-posedness of this problem. We denote by ů the solution of the problem

$$\begin{cases} \dfrac{\partial u}{\partial t} - \displaystyle\sum_{i,j=1}^{n} \dfrac{\partial}{\partial x_i}\Big( a_{ij}(x,t)\dfrac{\partial u}{\partial x_j} \Big) + b(x,t)u = f & \text{in } Q,\\ u = 0 & \text{on } S,\\ u|_{t=0} = 0 & \text{in } \Omega, \end{cases} \tag{3.3}$$

and by u[v] the solution of the problem

$$\begin{cases} \dfrac{\partial u}{\partial t} - \displaystyle\sum_{i,j=1}^{n} \dfrac{\partial}{\partial x_i}\Big( a_{ij}(x,t)\dfrac{\partial u}{\partial x_j} \Big) + b(x,t)u = 0 & \text{in } Q,\\ u = 0 & \text{on } S,\\ u|_{t=0} = v & \text{in } \Omega. \end{cases} \tag{3.4}$$

Then, u(v) = u[v] + ů. Hence

$$l_i u(v) = l_i u[v] + l_i \mathring{u} = C_i v + l_i \mathring{u}, \tag{3.5}$$

where C_i, i = 1, ..., N, mapping v to l_i u[v], are linear bounded operators from L²(Ω) to L²(τ,T).

Define the linear operator

$$C = (C_1, \dots, C_N) : L^2(\Omega) \to (L^2(\tau,T))^N, \qquad Cv = (C_1 v, \dots, C_N v) \ \text{for } v \in L^2(\Omega),$$

where the scalar product in (L²(τ,T))^N is defined as follows: if ĥ¹ = (ĥ¹₁, ..., ĥ¹_N) and ĥ² = (ĥ²₁, ..., ĥ²_N) are in (L²(τ,T))^N, then (ĥ¹, ĥ²)_{(L²(τ,T))^N} = Σ_{i=1}^N (ĥ¹_i, ĥ²_i)_{L²(τ,T)}. Hence the norm in (L²(τ,T))^N is defined by ‖ĥ‖²_{(L²(τ,T))^N} = Σ_{i=1}^N ‖ĥ_i‖²_{L²(τ,T)} for ĥ = (ĥ₁, ..., ĥ_N) ∈ (L²(τ,T))^N. Set ĥ_i = h_i − l_i ů, i = 1, ..., N, and ĥ = (ĥ₁, ..., ĥ_N). Then the problem of reconstructing the initial condition v in (3.1) from the observations (3.2) has the form

$$Cv = \hat h. \tag{3.6}$$

We introduce now a variational method for solving the reconstruction problem. We denote the solution to (3.1) by u(x,t), u(x,t;v) or u(v), if there is no confusion, to emphasize the dependence of the solution u(x,t) on the initial condition v. A natural method for reconstructing v from the observations l_i(u), i = 1, 2, ..., N, in (3.2) is to minimize the misfit functional

$$J_0(v) = \frac{1}{2}\sum_{i=1}^{N} \| l_i u(v) - h_i \|^2_{L^2(\tau,T)} \tag{3.7}$$

with respect to v ∈ L²(Ω).

Let v* be an a priori guess of v. We now combine the above least squares problem with Tikhonov regularization as follows: minimize the functional

$$J_\gamma(v) = J_0(v) + \frac{\gamma}{2}\| v - v^* \|^2_{L^2(\Omega)} = \frac{1}{2}\sum_{i=1}^{N} \| l_i u(v) - h_i \|^2_{L^2(\tau,T)} + \frac{\gamma}{2}\| v - v^* \|^2_{L^2(\Omega)} \tag{3.8}$$

with respect to v ∈ L²(Ω), with γ being a positive Tikhonov regularization parameter.

Since γ > 0, it is easily seen that the problem of minimizing J_γ(v) over L²(Ω) has a unique solution. Now we prove that J_γ(v) is Fréchet differentiable and derive a formula for its gradient. In doing so, consider the adjoint problem:

$$\begin{cases} -\dfrac{\partial p}{\partial t} - \displaystyle\sum_{i,j=1}^{n} \dfrac{\partial}{\partial x_i}\Big( a_{ij}(x,t)\dfrac{\partial p}{\partial x_j} \Big) + b(x,t)p = \displaystyle\sum_{i=1}^{N} \omega_i (l_i u - h_i)\chi_{(\tau,T)}(t) & \text{in } Q,\\ p = 0 & \text{on } S,\\ p(x,T) = 0 & \text{in } \Omega, \end{cases} \tag{3.9}$$

where χ_{(τ,T)}(t) = 1 if t ∈ (τ,T) and χ_{(τ,T)}(t) = 0 otherwise. The solution to (3.9) is understood in the weak sense of §1.2. Since Σ_{i=1}^N ω_i (l_i u − h_i)χ_{(τ,T)}(t) ∈ L²(Q), there exists a unique solution p ∈ W(0,T; H¹₀(Ω)) to (3.9).

Theorem 3.1.1. The gradient ∇J₀(v) of the objective functional J₀(v) at v is given by

$$\nabla J_0(v) = p(x, 0), \tag{3.10}$$

where p(x,t) is the solution to the adjoint problem (3.9).

Proof. For a small variation δv of v, we have

$$\begin{aligned} J_0(v+\delta v) - J_0(v) &= \sum_{i=1}^{N}\frac{1}{2}\| l_i u(v+\delta v) - h_i \|^2_{L^2(\tau,T)} - \sum_{i=1}^{N}\frac{1}{2}\| l_i u(v) - h_i \|^2_{L^2(\tau,T)}\\ &= \sum_{i=1}^{N}\frac{1}{2}\| l_i \delta u(v) \|^2_{L^2(\tau,T)} + \sum_{i=1}^{N}\langle l_i \delta u(v), l_i u(v) - h_i \rangle_{L^2(\tau,T)}\\ &= \sum_{i=1}^{N}\langle l_i \delta u(v), l_i u(v) - h_i \rangle_{L^2(\tau,T)} + o(\|\delta v\|_{L^2(\Omega)}), \end{aligned}$$

where δu(v) is the solution to the problem

$$\begin{cases} \dfrac{\partial \delta u}{\partial t} - \displaystyle\sum_{i,j=1}^{n} \dfrac{\partial}{\partial x_i}\Big( a_{ij}(x,t)\dfrac{\partial \delta u}{\partial x_j} \Big) + b(x,t)\delta u = 0 & \text{in } Q,\\ \delta u(x,t) = 0 & \text{on } S,\\ \delta u(x,0) = \delta v & \text{in } \Omega. \end{cases} \tag{3.11}$$

It follows from (3.2) that

$$\begin{aligned} J_0(v+\delta v) - J_0(v) &= \sum_{i=1}^{N}\langle l_i \delta u, l_i u - h_i \rangle_{L^2(\tau,T)} + o(\|\delta v\|_{L^2(\Omega)})\\ &= \sum_{i=1}^{N}\int_\tau^T \Big(\int_\Omega \omega_i\,\delta u\,dx\Big)(l_i u - h_i)\,dt + o(\|\delta v\|_{L^2(\Omega)})\\ &= \sum_{i=1}^{N}\int_\tau^T\!\!\int_\Omega \delta u\,\omega_i (l_i u - h_i)\,dx\,dt + o(\|\delta v\|_{L^2(\Omega)})\\ &= \int_0^T\!\!\int_\Omega \delta u \sum_{i=1}^{N}\omega_i (l_i u - h_i)\chi_{(\tau,T)}(t)\,dx\,dt + o(\|\delta v\|_{L^2(\Omega)}). \end{aligned} \tag{3.12}$$

Using Green's formula (1.22) (Theorem 1.2.1) for the systems (3.9) and (3.11), we obtain

$$\int_0^T\!\!\int_\Omega \delta u \sum_{i=1}^{N}\omega_i (l_i u - h_i)\chi_{(\tau,T)}(t)\,dx\,dt = \int_\Omega \delta v\, p(x,0)\,dx. \tag{3.13}$$

It follows from (3.12) and (3.13) that

$$J_0(v+\delta v) - J_0(v) = \int_\Omega \delta v\, p(x,0)\,dx + o(\|\delta v\|_{L^2(\Omega)}) = \langle p(x,0), \delta v \rangle_{L^2(\Omega)} + o(\|\delta v\|_{L^2(\Omega)}).$$

Consequently, J₀ is Fréchet differentiable and ∇J₀(v) = p(x,0). The proof is complete.

From this result, we see that the functional J_γ(v) is also Fréchet differentiable and its gradient ∇J_γ(v) has the form

$$\nabla J_\gamma(v) = p(x,0) + \gamma(v - v^*), \tag{3.14}$$

where p(x,t) is the solution to the adjoint problem (3.9).
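Before using (3.14) in an optimization loop, one typically verifies the adjoint-based gradient against a finite-difference quotient of the functional itself. The following is a minimal sketch of such a check (not part of the thesis code): evaluating the true J_γ would require the direct solver, so a toy quadratic functional with a random matrix `A` stands in for it; the structure of the test is the same.

```python
import numpy as np

# Toy quadratic stand-in for J_gamma: J(v) = 0.5*||A v - h||^2 + 0.5*gamma*||v - v*||^2,
# whose exact gradient A^T(A v - h) + gamma*(v - v*) plays the role of p(x,0) + gamma*(v - v*).
rng = np.random.default_rng(0)
n, gamma = 50, 5e-3
A = rng.standard_normal((30, n))
h = rng.standard_normal(30)
v_star = np.zeros(n)

def J(v):
    return 0.5 * np.sum((A @ v - h) ** 2) + 0.5 * gamma * np.sum((v - v_star) ** 2)

def grad_J(v):
    return A.T @ (A @ v - h) + gamma * (v - v_star)

v = rng.standard_normal(n)
dv = rng.standard_normal(n)
for eps in (1e-2, 1e-4, 1e-6):
    fd = (J(v + eps * dv) - J(v)) / eps       # one-sided difference quotient
    an = grad_J(v) @ dv                        # <grad J(v), dv>
    print(f"eps={eps:.0e}  finite-diff={fd:.8f}  adjoint={an:.8f}")
```

As eps decreases, the two columns agree to more digits, which is the standard sanity check before running the conjugate gradient iteration below.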

Now we turn to the question of estimating the degree of ill-posedness of our reconstruction problem. Since C_i, i = 1, ..., N, are defined by (3.5), from the above theorem we have C*_i g = p†_i(x,0), where p†_i is the solution to the adjoint problem

$$\begin{cases} -\dfrac{\partial p^\dagger_i}{\partial t} - \displaystyle\sum_{i,j=1}^{n} \dfrac{\partial}{\partial x_i}\Big( a_{ij}(x,t)\dfrac{\partial p^\dagger_i}{\partial x_j} \Big) + b(x,t)p^\dagger_i = \omega_i(x)\tilde g(t) & \text{in } Q,\\ p^\dagger_i = 0 & \text{on } S,\\ p^\dagger_i(x,T) = 0 & \text{in } \Omega, \end{cases} \tag{3.15}$$

with g ∈ L²(τ,T) and g̃(t) = g(t) for t ∈ (τ,T), g̃(t) = 0 otherwise.

From (3.5), we have

$$J_0(v) = \frac{1}{2}\sum_{i=1}^{N}\| l_i u(v) - h_i \|^2_{L^2(\tau,T)} = \frac{1}{2}\sum_{i=1}^{N}\| C_i v - (h_i - l_i \mathring{u}) \|^2_{L^2(\tau,T)} = \frac{1}{2}\| Cv - \hat h \|^2_{(L^2(\tau,T))^N}.$$

Hence

$$J_0'(v) = C^*(Cv - \hat h) = \sum_{i=1}^{N} C^*_i (C_i v - \hat h_i).$$

If we take h_i such that h_i = l_i ů, then

$$J_0'(v) = C^* C v = \sum_{i=1}^{N} C^*_i C_i v.$$
Due to Theorem 3.1.1, C*Cv = Σ_{i=1}^N C*_i C_i v = p⁰(x,0), where p⁰ is the solution of the adjoint problem

$$\begin{cases} -\dfrac{\partial p^0}{\partial t} - \displaystyle\sum_{i,j=1}^{n} \dfrac{\partial}{\partial x_i}\Big( a_{ij}(x,t)\dfrac{\partial p^0}{\partial x_j} \Big) + b(x,t)p^0 = \displaystyle\sum_{i=1}^{N}\omega_i\, l_i u[v]\,\chi_{(\tau,T)}(t) & \text{in } Q,\\ p^0 = 0 & \text{on } S,\\ p^0(x,T) = 0 & \text{in } \Omega. \end{cases} \tag{3.16}$$

Thus, if v ∈ L²(Ω) is given, we can compute the value C*Cv = p⁰(x,0). However, an explicit form of C*C is not available to analyze its eigenvalues and eigenfunctions. Despite this, when we discretize the problem, the finite-dimensional approximations C_h of C are matrices, and so we can apply the Lanczos algorithm [81] to estimate the eigenvalues of C*_h C_h based on the values C*_h C_h v_h. The numerical simulation of this approach will be given in §3.4.
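For illustration, here is a minimal sketch of the Lanczos iteration driven only by matrix-vector products, which is exactly how C*_h C_h is accessed here: each product amounts to one direct solve of type (3.4) followed by one adjoint solve of type (3.16). The names `lanczos` and `CtC` are ours, and a random symmetric positive semidefinite matrix stands in for C*_h C_h.

```python
import numpy as np

def lanczos(matvec, n, k, rng=np.random.default_rng(1)):
    """Plain Lanczos (no reorthogonalization): returns the k x k tridiagonal
    matrix whose extreme eigenvalues approximate those of the symmetric
    operator behind `matvec`."""
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    q_prev = np.zeros(n)
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    for j in range(k):
        w = matvec(q)
        alpha[j] = q @ w
        w -= alpha[j] * q + (beta[j - 1] * q_prev if j > 0 else 0)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

# Stand-in for C_h* C_h: in the thesis setting matvec(v) would return p0(x,0)
# obtained by solving (3.4) and then (3.16).
n = 200
B = np.random.default_rng(2).standard_normal((n, n))
CtC = B.T @ B / n
T = lanczos(lambda v: CtC @ v, n, k=40)
ritz = np.sort(np.linalg.eigvalsh(T))[::-1]
print("approximate largest singular values of C_h:", np.sqrt(ritz[:5]))
```

The square roots of the Ritz values approximate the singular values of C_h, which is how the decay curves in Figure 3.1 are produced.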

Since the gradient of the functional J_γ(v) can be calculated via the adjoint problem (3.9), the conjugate gradient algorithm is applicable to approximating the initial condition v as follows [60]. Let

$$v^{k+1} = v^k + \alpha^k d^k, \qquad d^k = \begin{cases} -\nabla J_\gamma(v^k) & \text{if } k = 0,\\ -\nabla J_\gamma(v^k) + \beta^k d^{k-1} & \text{if } k > 0, \end{cases} \tag{3.17}$$

where

$$\beta^k = \frac{\|\nabla J_\gamma(v^k)\|^2}{\|\nabla J_\gamma(v^{k-1})\|^2}, \qquad \alpha^k = \operatorname{argmin}_{\alpha \ge 0} J_\gamma(v^k + \alpha d^k). \tag{3.18}$$

From (3.5), we have

$$\begin{aligned} J_\gamma(v^k + \alpha d^k) &= \sum_{i=1}^{N}\frac{1}{2}\| l_i u(v^k + \alpha d^k) - h_i \|^2_{L^2(\tau,T)} + \frac{\gamma}{2}\| v^k + \alpha d^k - v^* \|^2_{L^2(\Omega)}\\ &= \sum_{i=1}^{N}\frac{1}{2}\| C_i(v^k + \alpha d^k) + l_i \mathring{u} - h_i \|^2_{L^2(\tau,T)} + \frac{\gamma}{2}\| \alpha d^k + v^k - v^* \|^2_{L^2(\Omega)}\\ &= \sum_{i=1}^{N}\frac{1}{2}\| \alpha C_i d^k + l_i u(v^k) - h_i \|^2_{L^2(\tau,T)} + \frac{\gamma}{2}\| \alpha d^k + v^k - v^* \|^2_{L^2(\Omega)}. \end{aligned}$$

Hence

$$\frac{\partial J_\gamma(v^k + \alpha d^k)}{\partial \alpha} = \alpha\sum_{i=1}^{N}\| C_i d^k \|^2_{L^2(\tau,T)} + \sum_{i=1}^{N}\langle C_i d^k, l_i u(v^k) - h_i \rangle_{L^2(\tau,T)} + \gamma\alpha\| d^k \|^2_{L^2(\Omega)} + \gamma\langle d^k, v^k - v^* \rangle_{L^2(\Omega)}.$$

Letting ∂J_γ(v^k + αd^k)/∂α = 0, we obtain

$$\begin{aligned} \alpha^k &= -\frac{\sum_{i=1}^{N}\langle C_i d^k, l_i u(v^k) - h_i \rangle_{L^2(\tau,T)} + \gamma\langle d^k, v^k - v^* \rangle_{L^2(\Omega)}}{\sum_{i=1}^{N}\| C_i d^k \|^2_{L^2(\tau,T)} + \gamma\| d^k \|^2_{L^2(\Omega)}}\\ &= -\frac{\sum_{i=1}^{N}\langle d^k, C^*_i(l_i u(v^k) - h_i) \rangle_{L^2(\Omega)} + \gamma\langle d^k, v^k - v^* \rangle_{L^2(\Omega)}}{\sum_{i=1}^{N}\| C_i d^k \|^2_{L^2(\tau,T)} + \gamma\| d^k \|^2_{L^2(\Omega)}}\\ &= -\frac{\langle d^k, \sum_{i=1}^{N} C^*_i(l_i u(v^k) - h_i) + \gamma(v^k - v^*) \rangle_{L^2(\Omega)}}{\sum_{i=1}^{N}\| C_i d^k \|^2_{L^2(\tau,T)} + \gamma\| d^k \|^2_{L^2(\Omega)}}\\ &= -\frac{\langle \nabla J_\gamma(v^k), d^k \rangle_{L^2(\Omega)}}{\sum_{i=1}^{N}\| C_i d^k \|^2_{L^2(\tau,T)} + \gamma\| d^k \|^2_{L^2(\Omega)}}. \end{aligned} \tag{3.19}$$

It follows from (3.17) that α^k can be rewritten as

$$\alpha^k = \begin{cases} -\dfrac{\langle \nabla J_\gamma(v^k), -\nabla J_\gamma(v^k) \rangle_{L^2(\Omega)}}{\sum_{i=1}^{N}\| C_i d^k \|^2_{L^2(\tau,T)} + \gamma\| d^k \|^2_{L^2(\Omega)}} & \text{if } k = 0,\\[2ex] -\dfrac{\langle \nabla J_\gamma(v^k), -\nabla J_\gamma(v^k) + \beta^k d^{k-1} \rangle_{L^2(\Omega)}}{\sum_{i=1}^{N}\| C_i d^k \|^2_{L^2(\tau,T)} + \gamma\| d^k \|^2_{L^2(\Omega)}} & \text{if } k > 0. \end{cases} \tag{3.20}$$

Therefore,

$$\alpha^k = \frac{\|\nabla J_\gamma(v^k)\|^2_{L^2(\Omega)}}{\sum_{i=1}^{N}\| C_i d^k \|^2_{L^2(\tau,T)} + \gamma\| d^k \|^2_{L^2(\Omega)}}, \qquad k = 0, 1, 2, \dots \tag{3.21}$$

3.2 Discretization of the variational problem in space variables

Suppose that the first equation of the Dirichlet problem (3.1) has no mixed derivatives, that is, a_ij = 0 if i ≠ j; we denote a_ii by a_i as in §1.4 and get the system (1.46). Furthermore, we assume Ω is the open parallelepiped defined in §1.4.1. We discretize this problem by the finite difference method in space variables and get the system of ordinary differential equations (1.75).

Using the representation of Subsection 3.1, we have the first-order optimality condition for this problem as follows:

$$\nabla J_\gamma(v) = C^*(Cv - \hat h) + \gamma(v - v^*) = p(x,0) + \gamma(v - v^*) = 0. \tag{3.22}$$

We approximate the functional J_γ as follows. First, we approximate the functional l_i(v) by

$$l_{ih}\bar u(\bar v) = \Delta h \sum_{k \in \Omega_h} \bar\omega^k_i \bar u^k(\bar v), \quad i = 1, \dots, N. \tag{3.23}$$

As in the continuous problem, we define ū̊ as the solution to the Cauchy problem

$$\begin{cases} \dfrac{d\bar u}{dt} + (\Lambda_1 + \dots + \Lambda_n)\bar u - \bar F = 0,\\ \bar u(0) = 0, \end{cases} \tag{3.24}$$

where F̄ is defined as in (1.75), and ū[v̄] as the solution to the Cauchy problem

$$\begin{cases} \dfrac{d\bar u}{dt} + (\Lambda_1 + \dots + \Lambda_n)\bar u = 0,\\ \bar u(0) = \bar v. \end{cases} \tag{3.25}$$

Then,

$$l_{ih}\bar u(\bar v) = l_{ih}\bar u[\bar v] + l_{ih}\mathring{\bar u} = \Delta h \sum_{k \in \Omega_h} \bar\omega^k_i \bar u^k[\bar v] + \Delta h \sum_{k \in \Omega_h} \bar\omega^k_i \mathring{\bar u}^k, \quad i = 1, \dots, N. \tag{3.26}$$

Hence the operator C_{ih}v̄ = l_{ih}ū[v̄] is linear and bounded from L²(Ω) into L²(τ,T), for i = 1, ..., N. Furthermore, if p̄† is a solution to the Cauchy problem

$$\begin{cases} \dfrac{d\bar p^\dagger}{dt} + (\Lambda_1 + \dots + \Lambda_n)\bar p^\dagger - \bar G = 0,\\ \bar p^\dagger(T) = 0, \end{cases} \tag{3.27}$$

with Ḡ = {ω̄^k_i g̃(t), k ∈ Ω_h}, where ω̄^k_i is defined in formula (1.67), then C*_{ih}g = p̄†(x,0). Thus, the discretized version of C has the form C_h := (C_{1h}, ..., C_{Nh}) and the functional J_γ is approximated as follows:

$$J_{\gamma h}(v_h) := \frac{1}{2}\| C_h\bar v - \hat h \|^2_{(L^2(\tau,T))^N} + \frac{\gamma}{2}\| \bar v - \bar v^* \|^2_{L^2(\Omega)} \tag{3.28}$$
$$= \frac{1}{2}\sum_{i=1}^{N}\| C_{ih}\bar v - \hat h_i \|^2_{L^2(\tau,T)} + \frac{\gamma}{2}\| \bar v - \bar v^* \|^2_{L^2(\Omega)}. \tag{3.29}$$

Here, for simplicity of notation, we again set ĥ_i = h_i − l_{ih}ů. For this discretized optimization problem we have the first-order optimality condition

$$C^*_h(C_h\bar v - \hat h) + \gamma(\bar v - \bar v^*) = 0. \tag{3.30}$$

If we suppose that condition 2) of Theorem 1.4.11 is satisfied, then ‖C_h v̄ − Cv‖_{(L²(τ,T))^N} and ‖C*_h z − C*z‖_{L²(Ω)} tend to zero as h tends to zero. Following Theorem 1.5.1 of Subsection 1.5, we have the following result.

Theorem 3.2.1. Let condition 2) of Theorem 1.4.11 be satisfied. Then the interpolation v̂^γ_h of the solution v̄^γ_h of the problem (3.28) converges to the solution v^γ of the problem (3.8) in L²(Ω) as h tends to zero.

3.3 Full discretization of the variational problem

To solve the variational problem numerically we need to discretize the system (1.46) in time. We use the Crank–Nicolson method of Section 1.3 and the splitting method of Subsection 1.4.3 to get the scheme (1.111) or (1.112). Now we discretize the objective functional J₀(v) as follows:

$$J^{h,\Delta t}_0(\bar v) := \frac{1}{2}\sum_{i=1}^{N}\sum_{m=\ell}^{M}\Big[\sum_{k \in \Omega_h}\omega^k_i u^{k,m}(\bar v) - h^m_i\Big]^2, \tag{3.31}$$

where ℓ is the first index for which ℓΔt > τ, the notation u^{k,m}(v̄) shows the dependence on the initial condition v̄, and m is the index of the grid points on the time axis. We denote by ω^k_i = ω_i(x^k) the approximation of the function ω_i(x) in Ω_h at the points x^k, as defined by (1.67). We note that in the definition of J^{h,Δt}₀ the multiplier Δh has been dropped as it plays no role.
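The evaluation of (3.31) is a plain sum of squared residuals of the observation operators. A minimal sketch, with shapes assumed as stated in the comments (the arrays `u`, `omegas` and `h_obs` are hypothetical stand-ins for the solution of the scheme (1.111), the discretized weights of (1.67) and the measured data):

```python
import numpy as np

def discrete_misfit(u, omegas, h_obs, ell):
    """J_0^{h,dt} of (3.31): u has shape (M+1, K) (time levels x grid points),
    omegas has shape (N, K), h_obs has shape (N, M+1); the time sum starts at
    index ell (the first index with ell*dt > tau)."""
    li_u = omegas @ u.T                      # shape (N, M+1): observations l_i u
    r = li_u[:, ell:] - h_obs[:, ell:]       # residuals from index ell onwards
    return 0.5 * np.sum(r ** 2)

# tiny synthetic check: with exact data the misfit vanishes
rng = np.random.default_rng(3)
u = rng.standard_normal((101, 51))
omegas = rng.random((3, 51))
h_obs = omegas @ u.T
print(discrete_misfit(u, omegas, h_obs, ell=10))   # prints 0.0
```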

To minimize J^{h,Δt}₀(v̄) by the conjugate gradient method, we first calculate its gradient.

Theorem 3.3.1. The gradient of J^{h,Δt}₀ at v̄ is given by

$$\nabla J^{h,\Delta t}_0(\bar v) = (A^0)^*\cdots(A^{\ell-1})^*\eta^{\ell-1}, \tag{3.32}$$

where η satisfies the adjoint problem

$$\begin{cases} \eta^m = (A^{m+1})^*\eta^{m+1} + \displaystyle\sum_{i=1}^{N}\omega_i\Big[\sum_{k \in \Omega_h}\omega^k_i u^{k,m+1}(\bar v) - h^{m+1}_i\Big], & m = M-1, M-2, \dots, \ell-1,\\ \eta^M = 0. \end{cases} \tag{3.33}$$

Here the matrix (A^m)^* is given by

$$\begin{aligned} (A^m)^* = {}&\Big(E_1 - \tfrac{\Delta t}{4}\Lambda^m_1\Big)\Big(E_1 + \tfrac{\Delta t}{4}\Lambda^m_1\Big)^{-1}\cdots\Big(E_n - \tfrac{\Delta t}{4}\Lambda^m_n\Big)\Big(E_n + \tfrac{\Delta t}{4}\Lambda^m_n\Big)^{-1}\\ &\times\Big(E_n - \tfrac{\Delta t}{4}\Lambda^m_n\Big)\Big(E_n + \tfrac{\Delta t}{4}\Lambda^m_n\Big)^{-1}\cdots\Big(E_1 - \tfrac{\Delta t}{4}\Lambda^m_1\Big)\Big(E_1 + \tfrac{\Delta t}{4}\Lambda^m_1\Big)^{-1}. \end{aligned} \tag{3.34}$$
Proof. For a small variation δv̄ of v̄, we have from (3.31) that

$$\begin{aligned} J^{h,\Delta t}_0(\bar v + \delta\bar v) - J^{h,\Delta t}_0(\bar v) &= \frac{1}{2}\sum_{i=1}^{N}\sum_{m=\ell}^{M}\Big[\sum_{k \in \Omega_h}\omega^k_i u^{k,m}(\bar v + \delta\bar v) - h^m_i\Big]^2 - \frac{1}{2}\sum_{i=1}^{N}\sum_{m=\ell}^{M}\Big[\sum_{k \in \Omega_h}\omega^k_i u^{k,m}(\bar v) - h^m_i\Big]^2\\ &= \frac{1}{2}\sum_{i=1}^{N}\sum_{m=\ell}^{M}\Big[\sum_{k \in \Omega_h}\omega^k_i w^{k,m}\Big]^2 + \sum_{i=1}^{N}\sum_{m=\ell}^{M}\sum_{k \in \Omega_h}\omega^k_i w^{k,m}\Big[\sum_{k \in \Omega_h}\omega^k_i u^{k,m}(\bar v) - h^m_i\Big]\\ &= \frac{1}{2}\sum_{i=1}^{N}\sum_{m=\ell}^{M}\Big[\sum_{k \in \Omega_h}\omega^k_i w^{k,m}\Big]^2 + \sum_{i=1}^{N}\sum_{m=\ell}^{M}\sum_{k \in \Omega_h}\omega^k_i w^{k,m}\psi^{k,m}_i\\ &= \frac{1}{2}\sum_{i=1}^{N}\sum_{m=\ell}^{M}\Big[\sum_{k \in \Omega_h}\omega^k_i w^{k,m}\Big]^2 + \sum_{i=1}^{N}\sum_{m=\ell}^{M}(\omega_i w^m, \psi^m_i), \end{aligned} \tag{3.35}$$

where w^{k,m} := u^{k,m}(v̄ + δv̄) − u^{k,m}(v̄) and ψ^{k,m}_i := Σ_{k∈Ω_h} ω^k_i u^{k,m} − h^m_i, k ∈ Ω_h, and the inner product is that of ℝ^{N₁×...×N_n}.
It follows from (1.111) that w is the solution to the problem

$$\begin{cases} w^{m+1} = A^m w^m, & m = 0, \dots, M-1,\\ w^0 = \delta\bar v. \end{cases} \tag{3.36}$$

Taking the inner product of both sides of the m-th equation of (3.36) with an arbitrary vector η^m ∈ ℝ^{N₁×...×N_n} and summing the results over m = ℓ−1, ..., M−1, we obtain

$$\sum_{m=\ell-1}^{M-1}(w^{m+1}, \eta^m) = \sum_{m=\ell-1}^{M-1}(A^m w^m, \eta^m) = \sum_{m=\ell-1}^{M-1}(w^m, (A^m)^*\eta^m). \tag{3.37}$$

Here (A^m)^* is the adjoint matrix of A^m. Consider the adjoint problem

$$\begin{cases} \eta^m = (A^{m+1})^*\eta^{m+1} + \displaystyle\sum_{i=1}^{N}\omega_i\psi^{m+1}_i, & m = M-1, M-2, \dots, \ell-1,\\ \eta^M = 0. \end{cases} \tag{3.38}$$

Taking the inner product of both sides of the first equation of (3.38) with an arbitrary vector w^{m+1} and summing the results over m = ℓ−1, ..., M−1, we obtain

$$\sum_{m=\ell-1}^{M-1}(w^{m+1}, \eta^m) = \sum_{m=\ell-1}^{M-1}(w^{m+1}, (A^{m+1})^*\eta^{m+1}) + \sum_{i=1}^{N}\sum_{m=\ell-1}^{M-1}(w^{m+1}, \omega_i\psi^{m+1}_i) = \sum_{m=\ell}^{M}(w^m, (A^m)^*\eta^m) + \sum_{i=1}^{N}\sum_{m=\ell}^{M}(w^m, \omega_i\psi^m_i). \tag{3.39}$$

From (3.37) and (3.39), we have

$$\begin{aligned} \sum_{i=1}^{N}\sum_{m=\ell}^{M}(w^m, \omega_i\psi^m_i) + (w^M, (A^M)^*\eta^M) &= (w^{\ell-1}, (A^{\ell-1})^*\eta^{\ell-1})\\ &= (A^{\ell-2}w^{\ell-2}, (A^{\ell-1})^*\eta^{\ell-1})\\ &= \cdots\\ &= (A^{\ell-2}\cdots A^0 w^0, (A^{\ell-1})^*\eta^{\ell-1})\\ &= (\delta\bar v, (A^0)^*\cdots(A^{\ell-1})^*\eta^{\ell-1}). \end{aligned} \tag{3.40}$$

On the other hand, it can be proved by induction that Σ_{i=1}^N Σ_{m=ℓ}^M [Σ_{k∈Ω_h} ω^k_i w^{k,m}]² = o(‖δv̄‖). Hence, it follows from the condition η^M = 0, (3.35) and (3.40) that

$$J^{h,\Delta t}_0(\bar v + \delta\bar v) - J^{h,\Delta t}_0(\bar v) = (\delta\bar v, (A^0)^*\cdots(A^{\ell-1})^*\eta^{\ell-1}) + o(\|\delta\bar v\|). \tag{3.41}$$

Consequently, the gradient of the objective functional J^{h,Δt}₀ can be written as

$$\frac{\partial J^{h,\Delta t}_0(\bar v)}{\partial\bar v} = (A^0)^*\cdots(A^{\ell-1})^*\eta^{\ell-1}. \tag{3.42}$$

Note that, since the matrices Λ^m_i, i = 1, ..., n, are symmetric, we have for m = ℓ−1, ..., M−1

$$\begin{aligned} (A^m_i)^* &= \Big[\Big(E_i + \tfrac{\Delta t}{4}\Lambda^m_i\Big)^{-1}\Big(E_i - \tfrac{\Delta t}{4}\Lambda^m_i\Big)\Big]^*\\ &= \Big(E_i - \tfrac{\Delta t}{4}\Lambda^m_i\Big)^*\Big[\Big(E_i + \tfrac{\Delta t}{4}\Lambda^m_i\Big)^{-1}\Big]^*\\ &= \Big(E_i - \tfrac{\Delta t}{4}\Lambda^m_i\Big)\Big(E_i + \tfrac{\Delta t}{4}\Lambda^m_i\Big)^{-1}. \end{aligned}$$

Hence

$$\begin{aligned} (A^m)^* &= (A^m_1\cdots A^m_n A^m_n\cdots A^m_1)^* = (A^m_1)^*\cdots(A^m_n)^*(A^m_n)^*\cdots(A^m_1)^*\\ &= \Big(E_1 - \tfrac{\Delta t}{4}\Lambda^m_1\Big)\Big(E_1 + \tfrac{\Delta t}{4}\Lambda^m_1\Big)^{-1}\cdots\Big(E_n - \tfrac{\Delta t}{4}\Lambda^m_n\Big)\Big(E_n + \tfrac{\Delta t}{4}\Lambda^m_n\Big)^{-1}\\ &\quad\times\Big(E_n - \tfrac{\Delta t}{4}\Lambda^m_n\Big)\Big(E_n + \tfrac{\Delta t}{4}\Lambda^m_n\Big)^{-1}\cdots\Big(E_1 - \tfrac{\Delta t}{4}\Lambda^m_1\Big)\Big(E_1 + \tfrac{\Delta t}{4}\Lambda^m_1\Big)^{-1}. \end{aligned} \tag{3.43}$$

The proof is complete.
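Since each factor of (3.34) has the rational form (E − (Δt/4)Λ)(E + (Δt/4)Λ)⁻¹, the action of (A^m)* on a vector is computed by a sequence of linear solves rather than by forming inverses. A minimal sketch of this (the function names are ours; a 1D finite-difference Laplacian stands in for a symmetric Λ^m_i):

```python
import numpy as np

def apply_factor(Lam, dt, x):
    # y = (E - dt/4 * Lam)(E + dt/4 * Lam)^{-1} x, computed via one solve
    z = np.linalg.solve(np.eye(Lam.shape[0]) + 0.25 * dt * Lam, x)
    return z - 0.25 * dt * (Lam @ z)

def apply_Am_star(Lams, dt, x):
    """Apply (A^m)* of (3.34): the factors for directions 1,...,n followed by
    the same factors in reverse order n,...,1."""
    for Lam in list(Lams) + list(reversed(list(Lams))):
        x = apply_factor(Lam, dt, x)
    return x

# 1D illustration: Lam = symmetric finite-difference Laplacian, dt = 0.01
K = 20
h = 1.0 / (K + 1)
Lam = (2 * np.eye(K) - np.eye(K, k=1) - np.eye(K, k=-1)) / h**2
eta = np.random.default_rng(4).standard_normal(K)
print(apply_Am_star([Lam], 0.01, eta)[:4])
```

In higher dimensions each Λ_i acts along one coordinate direction, so the solves remain one-dimensional (tridiagonal), which is the point of the splitting.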

The conjugate gradient method for the discretized functional (3.31) consists of the following steps; a sketch of the full loop is given after the list.

Step 1. Choose an initial approximation v⁰, calculate the residual r̂⁰ = Σ_{i=1}^N [l_i u(v⁰) − h_i] by solving the splitting scheme (1.111) with v̄ replaced by the initial approximation v⁰, and set k = 0.

Step 2. Calculate the gradient r⁰ = −∇J_γ(v⁰) given in (3.32) by solving the adjoint problem (3.33). Then set d⁰ = r⁰.

Step 3. Calculate

$$\alpha^0 = \frac{\|r^0\|^2}{\sum_{i=1}^{N}\|l_i d^0\|^2 + \gamma\|d^0\|^2},$$

where l_i d⁰ can be calculated from the splitting scheme (1.111) with v̄ replaced by d⁰ and F = 0. Then, set v¹ = v⁰ + α⁰d⁰.

Step 4. For k = 1, 2, ..., calculate r^k = −∇J_γ(v^k) and d^k = r^k + β^k d^{k-1}, where

$$\beta^k = \frac{\|r^k\|^2}{\|r^{k-1}\|^2}.$$

Step 5. Calculate

$$\alpha^k = \frac{\|r^k\|^2}{\sum_{i=1}^{N}\|l_i d^k\|^2 + \gamma\|d^k\|^2},$$

where l_i d^k can be calculated from the splitting scheme (1.111) with v̄ replaced by d^k and F = 0. Then, set v^{k+1} = v^k + α^k d^k.
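The loop of Steps 1–5 reads as follows in a minimal sketch, where a matrix `C` stands in for the solution operator (in the thesis setting every product Cd and every gradient evaluation requires one run of the splitting scheme (1.111) and one backward run of the adjoint scheme (3.33)); the step size is the exact line-search value (3.21):

```python
import numpy as np

def cg_tikhonov(C, h, gamma, v0, v_star=None, iters=50):
    """Fletcher-Reeves CG for J(v) = 0.5*||Cv - h||^2 + gamma/2*||v - v*||^2
    with the exact step size (3.21). C is a matrix stand-in for the
    observation-of-solution operator."""
    v = v0.copy()
    v_star = np.zeros_like(v) if v_star is None else v_star
    r = -(C.T @ (C @ v - h) + gamma * (v - v_star))     # r^k = -grad J(v^k)
    d = r.copy()
    for _ in range(iters):
        alpha = (r @ r) / ((C @ d) @ (C @ d) + gamma * (d @ d))   # (3.21)
        v += alpha * d
        r_new = -(C.T @ (C @ v - h) + gamma * (v - v_star))
        beta = (r_new @ r_new) / (r @ r)                # Fletcher-Reeves beta
        d = r_new + beta * d
        r = r_new
    return v

rng = np.random.default_rng(5)
C = rng.standard_normal((40, 60))
v_true = np.sin(2 * np.pi * np.linspace(0, 1, 60))
h = C @ v_true + 0.01 * rng.standard_normal(40)         # 1% noise, as in the tests
v_rec = cg_tikhonov(C, h, gamma=5e-3, v0=np.zeros(60))
print("relative error:", np.linalg.norm(v_rec - v_true) / np.linalg.norm(v_true))
```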

3.4 Numerical examples

To illustrate the efficiency of the proposed algorithm, we present in this section some numerical results. We test our algorithm 1) for very smooth, 2) for continuous but not smooth, and 3) for discontinuous initial conditions; thus, the degree of difficulty increases with the examples. We note that although in the theory we only prove the convergence of our difference scheme for smooth initial conditions, it works in the other cases as well, as our numerical examples show. Also, we vary the observation points as well as the time interval of observations. The results for the one-dimensional case, together with approximate singular values, are presented in Examples 1–5; the results for the two-dimensional case are given in Examples 6–8.
3.4.1. One-dimensional numerical examples

Set Ω = (0,1), the final time T = 1, the grid size 0.02 and the random noise level 0.01. The one-dimensional system has the form

$$\begin{cases} u_t - (a(x,t)u_x)_x + b(x,t)u = f(x,t) & \text{in } Q,\\ u(0,t) = u(1,t) = 0, & t \in (0,T],\\ u(x,0) = v(x) & \text{in } \Omega. \end{cases} \tag{3.44}$$

In all tests for this case, we set the coefficients a(x,t) = x²t + 2xt + 1 and b(x,t) = 0. We take N = 3 observations of the form (3.2), with the weight functions ω_i(x), i = 1, 2, 3, chosen as follows:

$$\omega_i(x) = \begin{cases} 1 & \text{if } x \in (x_i - \epsilon,\, x_i + \epsilon),\\ 0 & \text{otherwise}, \end{cases} \quad \text{with } \epsilon = 0.01, \tag{3.45}$$

where x₁ = 0.1, x₂ = 0.5 and x₃ = 0.9.
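For these weights, each observation l_i u(t) in (3.2) reduces to an integral of u over the small window (x_i − ε, x_i + ε). A minimal sketch of how such data are generated on a grid (the function name and the rectangle-rule quadrature are our illustrative choices):

```python
import numpy as np

def integral_observation(u_grid, x_grid, xi, eps=0.01):
    """l_i u(t) = int omega_i(x) u(x,t) dx with omega_i the indicator of
    (xi - eps, xi + eps), approximated by the rectangle rule on the grid;
    u_grid has shape (M+1, K): one row per time level."""
    mask = (x_grid > xi - eps) & (x_grid < xi + eps)
    dx = x_grid[1] - x_grid[0]
    return u_grid[:, mask].sum(axis=1) * dx

x = np.linspace(0.0, 1.0, 51)                      # grid size 0.02
t = np.linspace(0.0, 1.0, 101)
u = np.exp(t)[:, None] * np.sin(2 * np.pi * x)[None, :]   # u of Example 2
for xi in (0.1, 0.5, 0.9):                         # the three sensor positions
    print(xi, integral_observation(u, x, xi)[:3])
```

With ε = 0.01 and grid size 0.02 the window contains a single grid point, so these observations behave like the "practical" point-wise observations mentioned later in §3.4.2.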


Example 1. Following §3.1, we denote the solution to the system (3.44) with v ≡ 0 by ů. Then the operator C defined by Cv = (l₁(u − ů), l₂(u − ů), l₃(u − ů)) is bounded and linear from L²(Ω) to (L²(τ,T))³. The approximate singular values of the solution operator obtained by the method described in §3.1 for different τ are given in Figure 3.1.
operator obtained by our method des ribed in Ÿ3.1 for dierent τ are given in Figure 3.1.

0
10
τ=0
−2 τ=0.3
10
τ=0.5
−4
τ=0.9
10

−6
10

−8
10

−10
10

−12
10

−14
10

−16
10

−18
10

−20
10
0 5 10 15 20 25 30 35 40 45 50 55

Figure 3.1: Example 1. Singular values: three observations and various time intervals of observations.

From these singular values we see that our inverse problem is severely ill-posed and that the ill-posedness depends on the time interval of observations: the shorter the observation time, the more ill-posed the problem. Now we show numerical results for reconstructing different initial conditions with different time intervals and different positions of observations.

Example 2. We set the exact solution to the system (3.44) to be

$$u_{\text{exact}} = e^t\sin(2\pi x).$$

Hence, the exact initial condition is v(x) = sin(2πx) and the right-hand side is

$$f(x,t) = \big(1 + 4\pi^2(x^2 t + 2xt + 1)\big)e^t\sin(2\pi x) - 2\pi e^t(2xt + 2t)\cos(2\pi x).$$

We choose τ = 0 and the regularization parameter γ = 5 × 10⁻³. The reconstruction result and the exact initial condition v for various positions of observations are shown in Figure 3.2.

Figure 3.2: Example 2: Reconstruction results for (a) 3 uniform observation points in (0, 0.5), error in the L²-norm = 0.006116; (b) 3 uniform observation points in (0.5, 1), error in the L²-norm = 0.006133; (c) 3 uniform observation points in (0.25, 0.75), error in the L²-norm = 0.0060894; (d) 3 uniform observation points in Ω, error in the L²-norm = 0.0057764.

The numerical results show that the quality of the reconstruction depends on the positions of the observations and is better when the observations are uniformly distributed in Ω.

Example 3. In this example, the initial condition and the regularization parameter γ = 5 × 10⁻³ are the same as in Example 2, but the starting time for observation is taken to be τ = 0.01, τ = 0.05, τ = 0.1 and τ = 0.3. The comparison of the reconstruction results is displayed in Figure 3.3.

Figure 3.3: Reconstruction results of Example 3: (a) τ = 0.01; (b) τ = 0.05; (c) τ = 0.1; (d) τ = 0.3.

Table 3.1 shows the relation between the starting point of observation τ and the error in the L²-norm of the algorithm when the number of observations is N = 3 and the regularization parameter is γ = 5 × 10⁻³.

  Starting point τ | Error in the L²-norm
  τ = 0.01         | 0.0084225
  τ = 0.05         | 0.0226540
  τ = 0.1          | 0.0388760
  τ = 0.15         | 0.0405540
  τ = 0.2          | 0.0622210
  τ = 0.3          | 0.1015000
  τ = 0.5          | 0.1798100

Table 3.1: Example 3: Behavior of the algorithm with different starting points of observation τ (number of observations N = 3; regularization parameter γ = 5 × 10⁻³).

We now test the algorithm for nonsmooth initial conditions. In Examples 4 and 5, we choose v, set f = 0, and solve the direct problem by the Crank–Nicolson method to find an approximation to the exact solution u; these data are then used for reconstructing the exact initial condition v. Here, we use different mesh sizes for the direct solver in order to avoid an "inverse crime".

Example 4. In this example, the initial condition is given by

$$v(x) = \begin{cases} 2x & \text{if } x \le 0.5,\\ 2(1-x) & \text{otherwise}. \end{cases}$$

The regularization parameter and the starting times τ are chosen as in Example 3. The comparison of the reconstruction results is displayed in Figure 3.4.

Figure 3.4: Reconstruction results of Example 4: (a) τ = 0.01; (b) τ = 0.05; (c) τ = 0.1; (d) τ = 0.3.

Example 5. In this example, the initial condition is given by

$$v(x) = \begin{cases} 1 & \text{if } 0.25 \le x \le 0.75,\\ 0 & \text{otherwise}. \end{cases}$$

The regularization parameter and the starting times τ are chosen as in Example 3. The comparison of the reconstruction results is displayed in Figure 3.5.

Figure 3.5: Reconstruction results of Example 5: (a) τ = 0.01; (b) τ = 0.05; (c) τ = 0.1; (d) τ = 0.3.

3.4.2. Two-dimensional numerical examples

Set Ω = (0,1) × (0,1), T = 1, the grid sizes 0.02 and the random noise level 0.01. Set further x = (x₁, x₂). The Dirichlet problem has the form

$$\begin{cases} u_t - (a_1(x_1,x_2,t)u_{x_1})_{x_1} - (a_2(x_1,x_2,t)u_{x_2})_{x_2} + b(x_1,x_2,t)u = f(x_1,x_2,t) & \text{in } Q,\\ u(x_1,0,t) = u(x_1,1,t) = u(0,x_2,t) = u(1,x_2,t) = 0, & 0 \le x_1, x_2 \le 1,\ t \in (0,T],\\ u(x_1,x_2,0) = v(x_1,x_2) & \text{in } \Omega. \end{cases} \tag{3.46}$$

For the tests we take a₁(x,t) = a₂(x,t) = 0.5[1 − ½(1−t)cos(3πx₁)cos(3πx₂)] and b(x,t) = x₁² + x₂² + 2x₁t + 1. The weight functions ω_i(x) are chosen as follows:

$$\omega_i(x) = \begin{cases} 1 & \text{if } x \in (x^i_1 - \epsilon,\, x^i_1 + \epsilon) \times (x^i_2 - \epsilon,\, x^i_2 + \epsilon),\\ 0 & \text{otherwise}, \end{cases} \quad \text{with } \epsilon = 0.01. \tag{3.47}$$

As said in the Introduction, the operators l_i defined by (3.2) with these weight functions can be regarded as "practical" point-wise observations. We shall test our algorithm for different distributions of the points x^i: they will be taken either in the whole Ω or in a part of it: (0, 0.5) × (0, 0.5), (0.5, 1) × (0, 0.5), (0.5, 1) × (0.5, 1) or (0, 0.5) × (0.5, 1). The number of observations N will be either 4 or 9.

Example 6. We test the algorithm for the smooth initial condition given by v(x) = sin(πx₁)sin(πx₂) and u(x,t) = (1−t)v(x). The source term f is thus given by

$$\begin{aligned} f(x,t) = {}&-v(x)\Big[1 - (1-t)\Big(\pi^2 + 0.5\pi^2\cos(3\pi x_1)\cos(3\pi x_2) - (x_1^2 + x_2^2 + 2x_1 t + 1)\Big)\Big]\\ &+ 0.75\pi^2(1-t)^2\Big[\sin(3\pi x_1)\cos(3\pi x_2)\cos(\pi x_1)\sin(\pi x_2) + \cos(3\pi x_1)\sin(3\pi x_2)\sin(\pi x_1)\cos(\pi x_2)\Big]. \end{aligned}$$

The measurement is taken at 9 points and the regularization parameter is set to γ = 5 × 10⁻³. The reconstruction result and the exact initial condition v are shown in Figure 3.6.

Exact solution
Approximation solution

1
1

0.5
0.5

0
0

−0.5 −0.5
50
0.79 40 50
0.79 30 40
0.39 20 30
0.39 20
10 10
x2 −0.01 −0.01 0 0
x1 x x1
2

(a) (b)

Error

0.6
0.8

0.4
0.6

0.2

0.4
0

0.2
−0.2
50
40 50 0
30 40
20 30 exact sol.
20 appr. sol.
10 10 −0.2
x2 0 0 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
x1 x2

( ) (d)

Figure 3.6: Example 6. Re onstru tion results: (a) Exa t initial ondition v ; (b) re onstru tion of v ; ( ) point-wise
error; (d) the omparison of v|x1 =1/2 | and its re onstru tion .

The algorithm stops after 38 iterations; the computational time is 87.496271 seconds and the error in the L²-norm is 0.02346. The behavior of the algorithm when the observation regions and the number of observations vary is shown in Tables 3.2 and 3.3. We can see that the more observations we take, the better the accuracy we obtain.

  Observation region    | Iterations | Error in the L²-norm
  (0, 0.5) × (0, 0.5)   | 17         | 0.028807
  (0.5, 1) × (0, 0.5)   | 15         | 0.028366
  (0.5, 1) × (0.5, 1)   | 15         | 0.028147
  (0, 0.5) × (0.5, 1)   | 17         | 0.028395
  (0, 1) × (0, 1)       | 17         | 0.027934

Table 3.2: Example 6: Behavior of the algorithm when the positions of the observations vary (N = 4 observations; regularization parameter γ = 5 × 10⁻³).

  Observation region    | Iterations | Error in the L²-norm
  (0, 0.5) × (0, 0.5)   | 29         | 0.024127
  (0.5, 1) × (0, 0.5)   | 25         | 0.024503
  (0.5, 1) × (0.5, 1)   | 32         | 0.024667
  (0, 0.5) × (0.5, 1)   | 25         | 0.025325
  (0, 1) × (0, 1)       | 22         | 0.023887

Table 3.3: Example 6: Behavior of the algorithm when the positions of the observations vary (N = 9 observations; regularization parameter γ = 5 × 10⁻³).

Now we test the algorithm for nonsmooth initial conditions. In Examples 7 and 8, we choose v and set f = 0, then use the splitting method to find an approximation to the solution u. After that, we use the obtained data to test our algorithm.

Example 7. In this example, we choose 9 observations, set the regularization parameter γ = 5 × 10⁻³, and take v to be the multi-linear function given by

$$v(x) = \begin{cases} 2x_2 & \text{if } x_2 \le 1/2 \text{ and } x_2 \le x_1 \text{ and } x_1 \le 1 - x_2,\\ 2(1-x_2) & \text{if } x_2 \ge 1/2 \text{ and } x_2 \ge x_1 \text{ and } x_1 \ge 1 - x_2,\\ 2x_1 & \text{if } x_1 \le 1/2 \text{ and } x_1 \le x_2 \text{ and } x_2 \le 1 - x_1,\\ 2(1-x_1) & \text{otherwise}. \end{cases}$$

The reconstruction results and the exact initial condition v are shown in Figure 3.7. The algorithm stops after 45 iterations; the computational time is 103.394200 seconds and the error in the L²-norm is 0.024959.

Exact solution
Approximation solution

1
1

0.5
0.5

0
0

−0.5 −0.5
50
0.79 40 50
0.79 30 40
0.39 20 30
0.39 20
10 10
x2 −0.01 −0.01 0 0
x1 x x1
2

(a) (b)

Error

0.6
0.8

0.4
0.6

0.2

0.4
0

0.2
−0.2
50
40 50 0
30 40
20 30 exact sol.
20 appr. sol.
10 10 −0.2
x2 0 0 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
x1 x2

( ) (d)

Figure 3.7: Example 7. Re onstru tion results: (a) Exa t initial ondition v ; (b) re onstru tion of v ; ( ) point-wise
error; (d) the omparison of v|x1 =1/2 | and its re onstru tion.

Example 8. We test the algorithm with the piecewise constant initial condition

$$v(x) = \begin{cases} 1 & \text{if } 1/4 \le x_1 \le 3/4 \text{ and } 1/4 \le x_2 \le 3/4,\\ 0 & \text{otherwise}. \end{cases}$$

The algorithm stops after 100 iterations; the computational time is 109.423362 seconds and the error in the L²-norm is 0.033148. The reconstruction results and the exact initial condition v are shown in Figure 3.8.

Exact solution
Approximation solution

1
1

0.5
0.5

0
0

−0.5 −0.5
50
0.79 40 50
0.79 30 40
0.39 20 30
0.39 20
10 10
x2 −0.01 −0.01 0 0
x1 x x1
2

(a) (b)

Error

0.6
0.8

0.4
0.6

0.2

0.4
0

0.2
−0.2
50
40 50 0
30 40
20 30 exact sol.
20 appr. sol.
10 10 −0.2
x2 0 0 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
x1 x2

( ) (d)

Figure 3.8: Example 8. Re onstru tion results: (a) Exa t initial ondition v ; (b) re onstru tion of v ; ( ) point-wise
error; (d) the omparison of v|x1 =1/2 | and its re onstru tion.

This chapter is written on the basis of the paper

[27] Hào D.N. and Oanh N.T.N., Determination of the initial condition in parabolic equations from integral observations. Inverse Problems in Science and Engineering (to appear), doi: 10.1080/17415977.2016.1229778.

Chapter 4

Data assimilation by boundary observations

In this chapter, we study the data assimilation problem of estimating the initial condition v(x) in the Neumann problem (1.7), (1.8), (1.10) from the boundary observation u|_Σ = φ(ζ,t), (ζ,t) ∈ Σ = Γ × (0,T), Γ ⊂ ∂Ω. We reformulate it as a variational problem of minimizing a misfit functional. As the variational problem is ill-posed, we stabilize it by the Tikhonov regularization method. It is proved that the regularized functional is Fréchet differentiable and a formula for its gradient is derived via an adjoint problem. The variational problem is first discretized in space variables and the convergence results of the method are proved. The problem is then fully discretized and it is proved that the discretized functional is Fréchet differentiable. A formula for its gradient is derived via a discrete adjoint problem and the conjugate gradient method is applied to numerically solve the discretized variational problem. Some numerical examples are provided to show the efficiency of the proposed algorithm.

4.1 Problem setting and the variational method

We recall the Neumann problem (1.7), (1.8), (1.10) in this chapter with new indexing for the ease of reading:

$$\frac{\partial u}{\partial t} - \sum_{i,j=1}^{n}\frac{\partial}{\partial x_i}\Big( a_{ij}(x,t)\frac{\partial u}{\partial x_j} \Big) + b(x,t)u = f \quad \text{in } Q, \tag{4.1}$$
$$u|_{t=0} = v \quad \text{in } \Omega, \tag{4.2}$$
$$\frac{\partial u}{\partial N} = g \quad \text{on } S, \tag{4.3}$$

where

$$\frac{\partial u}{\partial N}\Big|_S := \sum_{i,j=1}^{n}\big(a_{ij}(x,t)u_{x_j}\big)\cos(\nu, x_i)\Big|_S,$$

and ν is the outer normal to S.

The solution to this problem is a function u ∈ W(0,T; H¹(Ω)) satisfying Definition 1.1.8. If the conditions (1.1)–(1.6) are satisfied, then Theorem 1.1.2 shows that there exists a unique solution u ∈ W(0,T; H¹(Ω)) to the Neumann problem (4.1)–(4.3).

Data assimilation: Reconstruct the initial condition v(x) in (4.1)–(4.3) from observations of the solution u on a part of the boundary S. Namely, let Γ ⊂ ∂Ω and denote Σ = Γ × (0,T). Our aim is to reconstruct the initial condition v from the imprecise measurement φ ∈ L²(Σ) of the solution u on Σ:

$$\| u|_\Sigma - \varphi \|_{L^2(\Sigma)} \le \epsilon. \tag{4.4}$$

From now on, as in the previous chapters, to emphasize the dependence of the solution u of (4.1)–(4.3) on the initial condition v, we denote it by u(v) or u(x,t;v). Denoting Cu(v) = u(v)|_Σ, we thus have to solve the operator equation

$$Cu(v) = \varphi. \tag{4.5}$$

Remark 4.1.1. To see the ill-posedness of the problem, note that if the condition 2) of Theorem 1.1.2 is satisfied, then u ∈ H^{1,1}(Q). Hence the operator mapping v to u|_Σ is compact from L²(Ω) to L²(Σ). Thus, the problem of reconstructing v from u|_Σ is ill-posed.

To characterize the degree of ill-posedness of this problem, denote the solution to the problem (4.1)–(4.3) with v ≡ 0 by ů and the solution to the problem (4.1)–(4.3) with f ≡ 0, g ≡ 0 by u⁰. Then the operator

$$Cv = Cu - C\mathring{u} \tag{4.6}$$

from L²(Ω) to L²(Σ) is linear, and the problem (4.5) is reduced to solving the linear equation Cv = φ − Ců. We thus have to analyze the singular values of C. In doing so, let us introduce the variational method for solving the problem (4.5).

We reformulate the reconstruction problem as the least squares problem of minimizing the functional

$$J_0(v) = \frac{1}{2}\| u(v) - \varphi \|^2_{L^2(\Sigma)} \tag{4.7}$$

over L²(Ω). As this minimization problem is unstable, we minimize its Tikhonov regularized functional

$$J_\gamma(v) = \frac{1}{2}\| u(v) - \varphi \|^2_{L^2(\Sigma)} + \frac{\gamma}{2}\| v - v^* \|^2_{L^2(\Omega)} \tag{4.8}$$

over L²(Ω), with γ > 0 the regularization parameter and v* an estimate of v which can be set to zero.

Now we prove that J_γ is Fréchet differentiable and derive a formula for its gradient. In doing so, we introduce the adjoint problem

$$\begin{cases} -\dfrac{\partial p}{\partial t} - \displaystyle\sum_{i,j=1}^{n}\dfrac{\partial}{\partial x_i}\Big( a_{ij}(x,t)\dfrac{\partial p}{\partial x_j} \Big) + b(x,t)p = 0 & \text{in } Q,\\ \dfrac{\partial p}{\partial N} = \big(u(v) - \varphi\big)\chi_\Sigma & \text{on } S,\\ p(x,T) = 0 & \text{in } \Omega, \end{cases} \tag{4.9}$$

where χ_Σ(ξ,t) = 1 if (ξ,t) ∈ Σ and zero otherwise. The solution of this problem is understood in the weak sense of §1.2. Since u(v)|_Σ − φ ∈ L²(Σ), there exists a unique solution in W(0,T; H¹(Ω)) to (4.9).

Lemma 4.1.1. The functional J_γ is Fréchet differentiable and

$$\nabla J_\gamma(v) = p(x,0) + \gamma\big(v(x) - v^*(x)\big), \tag{4.10}$$

where p(x,t) is the solution to the adjoint problem (4.9).

Proof. For a small variation δv of v, we have

$$\begin{aligned} J_0(v+\delta v) - J_0(v) &= \frac{1}{2}\| u(v+\delta v) - \varphi \|^2_{L^2(\Sigma)} - \frac{1}{2}\| u(v) - \varphi \|^2_{L^2(\Sigma)}\\ &= \frac{1}{2}\|\delta u(v)\|^2_{L^2(\Sigma)} + \langle \delta u(v), u(v) - \varphi \rangle_{L^2(\Sigma)}\\ &= \langle \delta u(v), u(v) - \varphi \rangle_{L^2(\Sigma)} + o(\|\delta v\|_{L^2(\Omega)})\\ &= \iint_\Sigma \delta u(v)\big(u(v) - \varphi\big)\,ds\,dt + o(\|\delta v\|_{L^2(\Omega)}), \end{aligned}$$

where δu is the solution to the problem

$$\begin{cases} \delta u_t - \displaystyle\sum_{i,j=1}^{n}\dfrac{\partial}{\partial x_i}\Big( a_{ij}(x,t)\dfrac{\partial \delta u}{\partial x_j} \Big) + b(x,t)\delta u = 0 & \text{in } Q,\\ \dfrac{\partial \delta u}{\partial N} = 0 & \text{on } S,\\ \delta u(x,0) = \delta v & \text{in } \Omega. \end{cases} \tag{4.11}$$

Using Green's formula (1.25) (Theorem 1.2.2) for (4.9) and (4.11), we have

$$\iint_\Sigma \delta u\big(u(v) - \varphi\big)\,ds\,dt = \int_\Omega \delta v\, p(x,0)\,dx.$$

Hence

$$J_0(v+\delta v) - J_0(v) = \int_\Omega \delta v\, p(x,0)\,dx + o(\|\delta v\|_{L^2(\Omega)}) = \langle p(x,0), \delta v \rangle_{L^2(\Omega)} + o(\|\delta v\|_{L^2(\Omega)}).$$

Consequently, the functional J₀ is Fréchet differentiable and ∇J₀(v) = p(x,0). From this result, we see that the functional J_γ(v) is also Fréchet differentiable and its gradient ∇J_γ(v) has the form (4.10). The proof is complete.

Now we return to characterizing the degree of ill-posedness of the problem (4.5). Note that

$$J_0(v) = \frac{1}{2}\| u(v)|_\Sigma - \varphi \|^2_{L^2(\Sigma)} = \frac{1}{2}\| Cv - (\varphi - \mathring{u}|_\Sigma) \|^2_{L^2(\Sigma)}.$$

If in this formula we take φ = ů|_Σ, then

$$J_0'(v) = C^*Cv. \tag{4.12}$$

Due to Lemma 4.1.1, we have C*Cv = p†(x,0), where p† is the solution to the adjoint problem

$$\begin{cases} -\dfrac{\partial p^\dagger}{\partial t} - \displaystyle\sum_{i,j=1}^{n}\dfrac{\partial}{\partial x_i}\Big( a_{ij}(x,t)\dfrac{\partial p^\dagger}{\partial x_j} \Big) + b(x,t)p^\dagger = 0 & \text{in } Q,\\ \dfrac{\partial p^\dagger}{\partial N} = u^0(v)\chi_\Sigma & \text{on } S,\\ p^\dagger(x,T) = 0 & \text{in } \Omega. \end{cases} \tag{4.13}$$

Thus, for any v ∈ L²(Ω) we can evaluate C*Cv by solving the direct problem (4.1)–(4.3) and the adjoint problem (4.13). However, an explicit form of C*C is not available. As in the previous chapters, when we discretize the problem, the finite-dimensional approximations C_h of C are matrices, and so we can apply the Lanczos algorithm [81] to estimate the eigenvalues of C*_h C_h based on its values C*_h C_h v. We will present some numerical results in Section 4.4.

We note that when Σ ≡ S, Lions [50, p. 216–219] suggested the following variational method. Consider two boundary value problems

$$\frac{\partial u^1}{\partial t} - \sum_{i,j=1}^{n}\frac{\partial}{\partial x_i}\Big( a_{ij}(x,t)\frac{\partial u^1}{\partial x_j} \Big) + b(x,t)u^1 = f \quad \text{in } Q, \tag{4.14}$$
$$u^1 = h \quad \text{on } S, \tag{4.15}$$
$$u^1|_{t=0} = v \quad \text{in } \Omega, \tag{4.16}$$

and

$$\frac{\partial u^2}{\partial t} - \sum_{i,j=1}^{n}\frac{\partial}{\partial x_i}\Big( a_{ij}(x,t)\frac{\partial u^2}{\partial x_j} \Big) + b(x,t)u^2 = f \quad \text{in } Q, \tag{4.17}$$
$$\frac{\partial u^2}{\partial N} = g \quad \text{on } S, \tag{4.18}$$
$$u^2|_{t=0} = v \quad \text{in } \Omega, \tag{4.19}$$

then minimize the functional

$$J_{L0}(v) = \frac{1}{2}\| u^1(v) - u^2(v) \|^2_{L^2(Q)} \tag{4.20}$$

over L²(Ω). As this variational problem inherits the ill-posed nature of the original problem, we regularize it by minimizing the Tikhonov functional

$$J_{L\gamma}(v) = J_{L0}(v) + \frac{\gamma}{2}\| v - v^* \|^2_{L^2(\Omega)}. \tag{4.21}$$

In this setting, the solution of the Dirichlet problem (4.14)–(4.16) is understood in the common sense: choose a function Φ ∈ H^{1,1}(Q) such that Φ|_S = h; then ũ¹ = u¹ − Φ satisfies a new homogeneous Dirichlet problem with the new right-hand side f̃ and initial condition
ṽ. The function ũ¹ ∈ W(0,T; H¹₀(Ω)) is said to be a weak solution to this homogeneous Dirichlet problem if

$$\int_0^T(\tilde u^1_t, \eta)_{H^{-1}(\Omega), H^1_0(\Omega)}\,dt + \iint_Q\Big(\sum_{i,j=1}^{n} a_{ij}(x,t)\frac{\partial \tilde u^1}{\partial x_i}\frac{\partial \eta}{\partial x_j} + b(x,t)\tilde u^1\eta\Big)\,dx\,dt = \int_0^T\!\!\int_\Omega \tilde f\eta\,dx\,dt \quad \forall\eta \in L^2(0,T; H^1_0(\Omega)),$$

and

$$\tilde u^1|_{t=0} = \tilde v \quad \text{in } \Omega. \tag{4.22}$$

If h is regular enough, there exists a unique solution ũ¹ to the homogeneous Dirichlet problem, and thus there exists a unique solution u¹ ∈ W(0,T; H¹(Ω)) to (4.14)–(4.16). Since u¹ and u² belong to W(0,T; H¹(Ω)), we can modify Lions' method as follows: minimize the functional

$$MJ_{L0}(v) = \frac{\lambda_1}{2}\| u^1(v) - u^2(v) \|^2_{L^2(Q)} + \frac{\lambda_2}{2}\sum_{i,j=1}^{n}\iint_Q a_{ij}\big(u^1(v) - u^2(v)\big)_{x_i}\big(u^1(v) - u^2(v)\big)_{x_j}\,dx\,dt, \tag{4.23}$$

with λ₁ and λ₂ being non-negative and λ₁ + λ₂ > 0.


The fun tional MJLγ is Fré het dierentiable and its gradient an be represented via two

adjoint problems


 ∂p1 P n ∂ ∂p1 

 − − aij (x, t) + a(x, t)p1 =

 ∂t i,j=1 ∂x i ∂xj

 Pn   
1 2 1 2
λ1 (u (v) − u (v)) + λ2 i,j=1 aij u (v) − u (v) xj in Q, (4.24)
 xi

 1

 p = 0 on S,


 1
p (x, T ) = 0 in Ω,

and


 ∂p2 P n ∂ ∂p2 

 − − aij (x, t) + b(x, t)p2 =

 ∂t ∂x ∂x
 i,j=1 i j


 1 2
Pn  1 2
 
λ (u (v) − u (v)) + λ
1 a 2 i,j=1 ij u (v) − u (v) xj
in Q,
xi (4.25)

 ∂p2 ∂(u1 (v) − u2 (v))



 = λ 2 on S,

 ∂N ∂N

p2 (x, T ) = 0 in Ω.

Lemma 4.1.2. The fun tional MJL0 is Fré het dierentiable and its gradient has the
form

MJL′0 (v) = p1 (x, 0) − p2 (x, 0). (4.26)

Lions' method is a subje t of another independent resear h, we therefore do not pursuit it

in this thesis.

4.2 Discretization of the variational method in space variables

We now turn to approximating the minimization problem (4.8). Due to the previous Section 4.1,

$$J_\gamma(v) = \frac{1}{2}\| Cv - (\varphi - \mathring{u}|_\Sigma) \|^2_{L^2(\Sigma)} + \frac{\gamma}{2}\| v - v^* \|^2_{L^2(\Omega)}$$

and

$$J_\gamma'(v) = C^*\big( Cv - (\varphi - \mathring{u}|_\Sigma) \big) + \gamma(v - v^*) = p(x,0) + \gamma(v - v^*),$$

where Cv = u⁰(v)|_Σ and p is the solution to the adjoint problem (4.13). Thus, the optimality condition is

$$C^*\big( Cv - (\varphi - \mathring{u}|_\Sigma) \big) + \gamma(v - v^*) = p(x,0) + \gamma(v - v^*) = 0. \tag{4.27}$$

Denoting C_h v = û⁰_h|_Σ, we have that ‖Cv − C_h v‖_{L²(Σ)} tends to zero as h tends to zero. The discrete version of the functional (4.8) is

$$J_{\gamma h}(v) = \frac{1}{2}\| C_h v - (\hat\varphi_h - \hat{\mathring u}_h|_\Sigma) \|^2_{L^2(\Sigma)} + \frac{\gamma}{2}\| \hat v_h - \hat v^*_h \|^2_{L^2(\Omega)},$$

for which we have the first-order optimality condition

$$C^*_h\big( C_h v - (\hat\varphi_h - \hat{\mathring u}_h|_\Sigma) \big) + \gamma(\hat v_h - \hat v^*_h) = 0. \tag{4.28}$$

We note that to evaluate C*_h we have to solve the corresponding discretized adjoint problem; but the Neumann condition in the adjoint problem (4.13) does not belong to H^{1,1}(S), therefore p is not in H^{1,1}(Q), and hence we do not have the strong convergence of C*_h z to C*z in L²(Ω). However, when we discretize (4.13) we mollify the Neumann data by convolution with the Steklov kernel [40], thereby obtaining new approximate data in H^{1,1}(S). Since the solution of the adjoint problem (4.13) is stable with respect to the data, the solution p̄ of the adjoint problem with mollified data approximates the solution p of (4.13). Now we apply the above finite difference scheme to the adjoint problem with mollified data to get its multi-linear interpolation p̂̄_h such that p̂̄_h → p̄ in L²([0,T]; L²(Ω)) and p̂̄_h(t) → p̄(t) weakly in H¹(Ω) for all t ∈ [0,T]. Thus, in this way we define an approximation Ĉ*_h of C* for which ‖C*z − Ĉ*_h z‖_{L²(Ω)} tends to zero for all z being multi-linear interpolations on Ω_h.
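A discrete Steklov average is simply a moving average of the time trace of the data. The following is our own minimal sketch of this kind of mollification (the kernel width ρ and the reflection padding are illustrative choices, not taken from [40]), showing how the Neumann data u(v)|_Σ − φ could be smoothed before entering the discretized adjoint problem:

```python
import numpy as np

def steklov_mollify(g, dt, rho):
    """Symmetric Steklov average (1/(2*rho)) * int_{t-rho}^{t+rho} g(s) ds,
    realized as a discrete moving average with reflection at the endpoints."""
    r = max(1, int(round(rho / dt)))
    g_padded = np.pad(g, r, mode="reflect")
    kernel = np.ones(2 * r + 1) / (2 * r + 1)
    return np.convolve(g_padded, kernel, mode="valid")   # same length as g

t = np.linspace(0.0, 1.0, 201)
noisy = np.sin(2 * np.pi * t) + 0.1 * np.random.default_rng(6).standard_normal(t.size)
smooth = steklov_mollify(noisy, dt=t[1] - t[0], rho=0.05)
print(smooth.shape, np.abs(smooth - np.sin(2 * np.pi * t)).max())
```

The averaged trace is Lipschitz in time, which is the property needed for the mollified data to lie in H^{1,1}(S).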

Let v̂^γ_h be the solution of the variational problem

$$\hat C^*_h\big( C_h v - (\hat\varphi_h - \hat{\mathring u}_h|_\Sigma) \big) + \gamma(\hat v_h - \hat v^*_h) = 0. \tag{4.29}$$

Following Section 1.5, we can prove the following result.

Proposition 4.2.1. Let v^γ be the solution of the variational problem (4.27) and γ > 0. Then v̂^γ_h converges to v^γ in L²(Ω) as h tends to zero.

4.3 Full discretization of the variational problem and the conjugate gradient method

In this section, we consider the problem of estimating the discrete initial condition v̄ from the discrete measurement of the solution on the boundary of the domain. The fully discretized version of J₀ has the form

$$J^{h,\Delta t}_0(\bar v) := \frac{1}{2}\sum_{m=1}^{M}\sum_{k \in \Gamma_h}\big[ u^{k,m}(\bar v) - \varphi^{k,m}\big]^2. \tag{4.30}$$

To minimize (4.30) by the conjugate gradient method, we first calculate the gradient of the objective functional J^{h,Δt}₀(v̄); it is given by the following theorem.

Theorem 4.3.1. The gradient of J^{h,Δt}₀ at v̄ is given by

$$\nabla J^{h,\Delta t}_0(\bar v) = (A^0)^*\eta^0, \tag{4.31}$$

where η = (η⁰, ..., η^M) satisfies the adjoint problem

$$\begin{cases} \eta^m = (A^{m+1})^*\eta^{m+1} + \psi^{m+1}, & m = M-2, M-3, \dots, 0,\\ \eta^{M-1} = \psi^M,\\ \eta^M = 0, \end{cases} \tag{4.32}$$

with

$$\psi^m = \{\psi^{k,m} := u^{k,m}(\bar v) - \varphi^{k,m},\ k \in \Gamma_h\}, \quad m = 0, 1, \dots, M,$$

and the matrices (A^m)^* and (B^m)^* given by

$$\begin{aligned} (A^m)^* = {}&\Big(E_1 - \tfrac{\Delta t}{4}\Lambda^m_1\Big)\Big(E_1 + \tfrac{\Delta t}{4}\Lambda^m_1\Big)^{-1}\cdots\Big(E_n - \tfrac{\Delta t}{4}\Lambda^m_n\Big)\Big(E_n + \tfrac{\Delta t}{4}\Lambda^m_n\Big)^{-1}\\ &\times\Big(E_n - \tfrac{\Delta t}{4}\Lambda^m_n\Big)\Big(E_n + \tfrac{\Delta t}{4}\Lambda^m_n\Big)^{-1}\cdots\Big(E_1 - \tfrac{\Delta t}{4}\Lambda^m_1\Big)\Big(E_1 + \tfrac{\Delta t}{4}\Lambda^m_1\Big)^{-1},\\ (B^m)^* = {}&\Big(E_n - \tfrac{\Delta t}{4}\Lambda^m_n\Big)\Big(E_n + \tfrac{\Delta t}{4}\Lambda^m_n\Big)^{-1}\cdots\Big(E_1 - \tfrac{\Delta t}{4}\Lambda^m_1\Big)\Big(E_1 + \tfrac{\Delta t}{4}\Lambda^m_1\Big)^{-1}. \end{aligned} \tag{4.33}$$
Proof. For a small variation δv̄ of v̄, we have from (4.30) that

$$\begin{aligned} J^{h,\Delta t}_0(\bar v + \delta\bar v) - J^{h,\Delta t}_0(\bar v) &= \frac{1}{2}\sum_{k \in \Gamma_h}\sum_{m=1}^{M}\big[u^{k,m}(\bar v + \delta\bar v) - \varphi^{k,m}\big]^2 - \frac{1}{2}\sum_{k \in \Gamma_h}\sum_{m=1}^{M}\big[u^{k,m}(\bar v) - \varphi^{k,m}\big]^2\\ &= \frac{1}{2}\sum_{k \in \Gamma_h}\sum_{m=1}^{M}\big(w^{k,m}\big)^2 + \sum_{k \in \Gamma_h}\sum_{m=1}^{M}w^{k,m}\big(u^{k,m}(\bar v) - \varphi^{k,m}\big)\\ &= \frac{1}{2}\sum_{k \in \Gamma_h}\sum_{m=1}^{M}\big(w^{k,m}\big)^2 + \sum_{k \in \Gamma_h}\sum_{m=1}^{M}w^{k,m}\psi^{k,m}\\ &= \frac{1}{2}\sum_{k \in \Gamma_h}\sum_{m=1}^{M}\big(w^{k,m}\big)^2 + \sum_{m=1}^{M}\langle w^m, \psi^m\rangle, \end{aligned} \tag{4.34}$$

where w^m = {w^{k,m} := u^{k,m}(v̄ + δv̄) − u^{k,m}(v̄), k ∈ Γ_h} and ψ^m = {ψ^{k,m} := u^{k,m}(v̄) − φ^{k,m}, k ∈ Γ_h}, m = 0, 1, ..., M.
It follows from (1.111) that w is the solution to the problem

$$\begin{cases} w^{m+1} = A^m w^m, & m = 0, \dots, M-1,\\ w^0 = \delta\bar v. \end{cases} \tag{4.35}$$

Taking the inner product of both sides of the m-th equation of (4.35) with an arbitrary vector η^m ∈ ℝ^{N₁×...×N_n} and then summing the results over m = 0, ..., M−1, we obtain

$$\sum_{m=0}^{M-1}\langle w^{m+1}, \eta^m\rangle = \sum_{m=0}^{M-1}\langle A^m w^m, \eta^m\rangle = \sum_{m=0}^{M-1}\langle w^m, (A^m)^*\eta^m\rangle. \tag{4.36}$$

Here, ⟨·,·⟩ is the inner product in ℝ^{N₁×...×N_n} and (A^m)^* is the adjoint matrix of A^m.
Taking the inner product of both sides of the first equation of (4.32) with an arbitrary vector w^{m+1} and summing the results over m = 0, ..., M−2, we have

$$\sum_{m=0}^{M-2}\langle w^{m+1}, \eta^m\rangle = \sum_{m=0}^{M-2}\langle w^{m+1}, (A^{m+1})^*\eta^{m+1}\rangle + \sum_{m=0}^{M-2}\langle w^{m+1}, \psi^{m+1}\rangle = \sum_{m=1}^{M-1}\langle w^m, (A^m)^*\eta^m\rangle + \sum_{m=1}^{M-1}\langle w^m, \psi^m\rangle. \tag{4.37}$$

Taking the inner product of both sides of the second equation of (4.32) with an arbitrary vector w^M, we have

$$\langle w^M, \eta^{M-1}\rangle = \langle w^M, \psi^M\rangle. \tag{4.38}$$

It follows from (4.37) and (4.38) that

$$\sum_{m=0}^{M-2}\langle w^{m+1}, \eta^m\rangle + \langle w^M, \eta^{M-1}\rangle = \sum_{m=1}^{M-1}\langle w^m, (A^m)^*\eta^m\rangle + \sum_{m=1}^{M-1}\langle w^m, \psi^m\rangle + \langle w^M, \psi^M\rangle. \tag{4.39}$$

From (4.36) and (4.39) we obtain

$$\langle w^0, (A^0)^*\eta^0\rangle = \sum_{m=1}^{M-1}\langle w^m, \psi^m\rangle + \langle w^M, \psi^M\rangle.$$

Equivalently,

$$\langle \delta\bar v, (A^0)^*\eta^0\rangle = \sum_{m=1}^{M}\langle w^m, \psi^m\rangle. \tag{4.40}$$

On the other hand, we can prove that Σ_{k∈Γ_h} Σ_{m=1}^M (w^{k,m})² = o(‖δv̄‖). Hence, it follows from (4.34) and (4.40) that

$$J^{h,\Delta t}_0(\bar v + \delta\bar v) - J^{h,\Delta t}_0(\bar v) = \langle \delta\bar v, (A^0)^*\eta^0\rangle + o(\|\delta\bar v\|). \tag{4.41}$$

Consequently, J^{h,Δt}₀ is differentiable and its gradient has the form (4.31).

Note that, since the matrices Λ_i, i = 1, ..., n, are symmetric, we have for m = 0, ..., M−1

$$\begin{aligned} (A^m)^* = {}&\Big(E_1 - \tfrac{\Delta t}{4}\Lambda^m_1\Big)\Big(E_1 + \tfrac{\Delta t}{4}\Lambda^m_1\Big)^{-1}\cdots\Big(E_n - \tfrac{\Delta t}{4}\Lambda^m_n\Big)\Big(E_n + \tfrac{\Delta t}{4}\Lambda^m_n\Big)^{-1}\\ &\times\Big(E_n - \tfrac{\Delta t}{4}\Lambda^m_n\Big)\Big(E_n + \tfrac{\Delta t}{4}\Lambda^m_n\Big)^{-1}\cdots\Big(E_1 - \tfrac{\Delta t}{4}\Lambda^m_1\Big)\Big(E_1 + \tfrac{\Delta t}{4}\Lambda^m_1\Big)^{-1}. \end{aligned} \tag{4.42}$$

Similarly,

$$(B^m)^* = \Big(E_n - \tfrac{\Delta t}{4}\Lambda^m_n\Big)\Big(E_n + \tfrac{\Delta t}{4}\Lambda^m_n\Big)^{-1}\cdots\Big(E_1 - \tfrac{\Delta t}{4}\Lambda^m_1\Big)\Big(E_1 + \tfrac{\Delta t}{4}\Lambda^m_1\Big)^{-1}. \tag{4.43}$$
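The gradient (4.31) is thus obtained by one backward sweep of (4.32) through the stored boundary residuals. A minimal sketch of this sweep, where `apply_A_star` is a hypothetical stand-in for the factorized application of (A^m)* (a fixed contraction here, for illustration only):

```python
import numpy as np

def gradient_via_adjoint(apply_A_star, psi):
    """Backward sweep (4.32): psi is the list [psi^0, ..., psi^M] of boundary
    residuals u^{k,m}(v) - phi^{k,m}; apply_A_star(m, x) applies (A^m)*.
    Returns grad J = (A^0)* eta^0 as in (4.31)."""
    M = len(psi) - 1
    eta = psi[M].copy()                      # eta^{M-1} = psi^M
    for m in range(M - 2, -1, -1):           # eta^m from eta^{m+1}
        eta = apply_A_star(m + 1, eta) + psi[m + 1]
    return apply_A_star(0, eta)

# toy illustration: (A^m)* is one fixed contraction, residuals are random
rng = np.random.default_rng(7)
A = 0.9 * np.eye(10)
psi = [rng.standard_normal(10) for _ in range(11)]   # M = 10 time levels
print(gradient_via_adjoint(lambda m, x: A @ x, psi)[:4])
```

In the actual computation each call apply_A_star(m, ·) performs the 2n tridiagonal solves of (4.42), so one gradient costs about as much as one run of the direct splitting scheme.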

The conjugate gradient method for the discretized functional (4.30) consists of the following steps:

Step 1. Choose an initial approximation v⁰, calculate the residual r̂⁰ = u(v⁰)|_Σ − φ by solving the splitting scheme (1.111) with v̄ replaced by the initial approximation v⁰, and set k = 0.

Step 2. Calculate the gradient r⁰ = −∇J_γ(v⁰) given in (4.31) by solving the adjoint problem (4.32). Then set d⁰ = r⁰.

Step 3. Calculate

$$\alpha^0 = \frac{\|r^0\|^2}{\|u(d^0)|_\Sigma\|^2 + \gamma\|d^0\|^2},$$

where u(d⁰) can be calculated from the splitting scheme (1.111) with v̄ replaced by d⁰ and g = 0, F = 0. Then, set v¹ = v⁰ + α⁰d⁰.

Step 4. For k = 1, 2, ..., calculate r^k = −∇J_γ(v^k) and d^k = r^k + β^k d^{k-1}, where

$$\beta^k = \frac{\|r^k\|^2}{\|r^{k-1}\|^2}.$$

Step 5. Calculate

$$\alpha^k = \frac{\|r^k\|^2}{\|u(d^k)|_\Sigma\|^2 + \gamma\|d^k\|^2},$$

where u(d^k) can be calculated from the splitting scheme (1.111) with v̄ replaced by d^k and g = 0, F = 0. Then, set v^{k+1} = v^k + α^k d^k.

The simulation of this algorithm on a computer will be given in the next section.

4.4 Numerical examples

In this section we present our numerical simulations for one- and two-dimensional problems. As in the previous chapters, we test different kinds of initial conditions: 1) very smooth, 2) continuous but not smooth (the hat function), 3) discontinuous (step functions). The degree of difficulty increases from test 1) to test 3). In the one-dimensional case we also present our numerical calculation of the singular values by the method described in Section 4.1.

4.4.1. Numerical examples in the one-dimensional case

Set Ω = (0,1), T = 1. Consider the system

$$\begin{cases} u_t - (a u_x)_x = f & \text{in } Q,\\ -a u_x(0,t) = \varphi_1(t), & t \in (0,T],\\ a u_x(1,t) = \varphi_2(t), & t \in (0,T],\\ u|_{t=0} = v & \text{in } \Omega, \end{cases}$$

where the coefficient is a = x²t + 2xt + 1. The observations are taken at x = 0 and x = 1, and the noise level is 10⁻².

Example 1. We approximate the singular values for the cases when the coefficient a is increased by factors a₀ = 5 and a₀ = 10. It appears that the larger the coefficient of the equation, the smaller the singular values. This can be seen in Figure 4.1, which shows the singular values evaluated by the method presented in Section 4.1.

Figure 4.1: Example 1: Singular values for the 1D problem (a₀ = 1, 5, 10; logarithmic scale from 10² down to 10⁻¹²).

Now we present numerical results for the different initial conditions explained above.

Example 2. Smooth initial condition: v = sin(2πx).

Example 3. Continuous but not smooth initial condition:

$$v = \begin{cases} 2x & \text{if } x \le 0.5,\\ 2(1-x) & \text{otherwise}. \end{cases}$$

Example 4. Discontinuous initial condition:

$$v = \begin{cases} 1 & \text{if } 0.25 \le x \le 0.75,\\ 0 & \text{otherwise}. \end{cases}$$

Figure 4.2: Examples 2, 3, 4: 1D problem: reconstruction results for smooth, continuous and discontinuous initial conditions (exact solution vs. estimates at noise levels 1% and 10%).

4.4.2. Numerical examples in the multi-dimensional case

Set Ω := (0,1) × (0,1), T = 1. Consider the equation u_t − (a₁u_{x₁})_{x₁} − (a₂u_{x₂})_{x₂} = f. As in the one-dimensional case, we choose the initial condition v and let u = v × (1−t); substituting u into the equation yields the boundary data and the right-hand side f. The observation is taken on the whole boundary S and the noise level is set to 10⁻². In all examples we take

$$a_1(x_1,x_2,t) = a_2(x_1,x_2,t) = 10^{-1}\big(1 + 10^{-2}\cos(\pi x_1 t)\cos(\pi x_2)\big).$$

Example 5. Smooth initial condition: v = sin(πx₁)sin(πx₂).


Figure 4.3: Example 5: Exact initial condition (left) and its reconstruction at noise level 1% (right).

Figure 4.4: Example 5 (continued): Error (left) and the vertical slice of the exact initial condition and its reconstructions (noise levels 1% and 10%) along the interval [(0.5, 0), (0.5, 1)] (right).

Example 6. Continuous initial condition:

$$v = \begin{cases} 2x_2 & \text{if } x_2 \le 0.5 \text{ and } x_2 \le x_1 \text{ and } x_1 \le 1 - x_2,\\ 2(1-x_2) & \text{if } x_2 \ge 0.5 \text{ and } x_2 \ge x_1 \text{ and } x_1 \ge 1 - x_2,\\ 2x_1 & \text{if } x_1 \le 0.5 \text{ and } x_1 \le x_2 \text{ and } x_2 \le 1 - x_1,\\ 2(1-x_1) & \text{otherwise}. \end{cases}$$

Figure 4.5: Example 6: Exact initial condition (left) and its reconstruction at noise level 1% (right).
Figure 4.6: Example 6 (continued): Error (left) and the slice of the exact initial condition and its reconstructions (noise levels 1% and 10%) along the interval [(0.5, 0), (0.5, 1)] (right).

Example 7. Discontinuous initial condition:

$$v = \begin{cases} 1 & \text{if } 0.25 \le x_1 \le 0.75 \text{ and } 0.25 \le x_2 \le 0.75,\\ 0 & \text{otherwise}. \end{cases}$$

Figure 4.7: Example 7: Exact initial condition (left) and its reconstruction at noise level 1% (right).

Figure 4.8: Example 7 (continued): Error (left) and the slice of the exact initial condition and its reconstructions (noise levels 1% and 10%) along the interval [(0.5, 0), (0.5, 1)] (right).

In all examples we see that the numerical reconstructions are quite good. However, when the coefficients are large, the ill-posedness of the problem is more severe and the method is less effective.

This chapter is written on the basis of the paper

[26] Hào D.N. and Oanh N.T.N., Determination of the initial condition in parabolic equations from boundary observations. Journal of Inverse and Ill-Posed Problems 24(2016), no. 2, 195–220.

Conclusion

In this thesis we study data assimilation problems in heat conduction: reconstructing the initial condition in a heat transfer process from either 1) the observation of the temperature at the final time moment, 2) interior integral observations, which are regarded as interior measurements, or 3) boundary observations. The first problem is new in the sense that the coefficients of the equation describing the heat transfer process depend on time, and up to now there are very few studies devoted to it. The second problem is a new setting for this kind of problem in data assimilation: interior observations are important, but related studies are devoted to the case of pointwise observations, which are not realistic in practice; the use of integral observations is more practical. The third problem is very hard, as the observation is only on the boundary, and up to now there have been very few studies for this case. We reformulate these problems as variational problems aiming at minimizing a misfit functional in the least squares sense. We prove that the functional is Fréchet differentiable and derive a formula for its gradient via an adjoint problem; as a by-product of the method, we propose a very natural and easy method for estimating the degree of ill-posedness of the reconstruction problem. For numerically solving the problems, we discretize the direct and adjoint problems by the splitting finite difference method to obtain the gradient of the discretized variational problems, and then apply the conjugate gradient method to solve them. We note that since the solutions in the thesis are understood in the weak sense, the finite difference method for them is not trivial. With respect to the discretization in space variables, we prove convergence results for the discretization methods. We test our method on a computer with various numerical examples to show the efficiency of our approach.

The author's publications related to the thesis

[1] Nguyen Thi Ngoc Oanh, A splitting method for a backward parabolic equation with time-dependent coefficients, Computers & Mathematics with Applications 65(2013), 17–28.

[2] Dinh Nho Hào and Nguyen Thi Ngoc Oanh, Determination of the initial condition in parabolic equations from integral observations, Inverse Problems in Science and Engineering (to appear), doi: 10.1080/17415977.2016.1229778.

[3] Dinh Nho Hào and Nguyen Thi Ngoc Oanh, Determination of the initial condition in parabolic equations from boundary observations, Journal of Inverse and Ill-Posed Problems 24(2016), no. 2, 195–220.

Bibliography

[1] Agmon S. and Nirenberg L., Properties of solutions of ordinary differential equations in Banach spaces, Comm. Pure Appl. Math. 16(1963), 121–239.

[2] Agoshkov V. I., Optimal Control Methods and the Method of Adjoint Equations in Problems of Mathematical Physics. Russian Academy of Sciences, Institute for Numerical Mathematics, Moscow, 2003. (Russian)

[3] Agoshkov V. I., On some inverse problems for distributed parameter systems, Russ. J. Numer. Anal. Math. Modelling 18(2003), no. 6, 455–465.

[4] Alifanov O.M., Inverse Heat Transfer Problems, Springer, New York, 1994.

[5] Alifanov O.M., Artyukhin E.A., and Rumyantsev S.V., Extreme Methods for Solving Ill-Posed Problems with Applications to Inverse Heat Transfer Problems. Begell House Inc., New York, 1995.

[6] Aubert G. and Kornprobst P., Mathematical Problems in Image Processing, Springer, New York, 2006.

[7] Banks H. T. and Kunisch K., Estimation Techniques for Distributed Parameter Systems, Birkhäuser Boston, Inc., Boston, MA, 1989.

[8] Baumeister J., Stable Solution of Inverse Problems. Friedr. Vieweg & Sohn, Braunschweig, 1987.

[9] Beck J.V., Blackwell B., and St. Clair C.R., Inverse Heat Conduction, Ill-Posed Problems, Wiley, New York, 1985.

[10] Bengtsson L., Ghil M., and Källén E., Dynamic Meteorology: Data Assimilation Methods. Springer-Verlag, New York, 1981.

[11] Bennett A.F., Inverse Methods in Physical Oceanography, Cambridge University Press, Cambridge, 1992.

[12] Boussetila N. and Rebbani F., Optimal regularization method for ill-posed Cauchy problems. Electron. J. Differential Equations 147(2006), 1–15.

[13] Bulychëv E.V., Glasko V.B. and Fëdorov S.M., Reconstruction of an initial temperature from its measurements on a surface. Zh. Vychisl. Mat. i Mat. Fiz. 23(1983), 1410–1416. (Russian)

[14] Chavent G., Nonlinear Least Squares for Inverse Problems. Theoretical Foundations and Step-by-Step Guide for Applications, Springer, New York, 2009.

[15] Courtier P. and Talagrand O., Variational assimilation of meteorological observations with the direct and adjoint shallow water equations. Tellus 42A(1990), 531–549.

[16] Courtier P. and Talagrand O., Variational assimilation of meteorological observations with the adjoint vorticity equation, Part II: Numerical results. Quart. J. Roy. Meteor. Soc. 113(1987), 1329–1347.

[17] Courtier P., Derber J., Errico R.M., Louis J.F. and Vukicevic T., Review of the use of adjoint, variational methods and Kalman filters in meteorology. Tellus 45A(1993), 343–357.

[18] Du N.V., Parabolic Equations Backwards in Time. PhD Thesis, Vinh University, 2011. (Vietnamese)

[19] Engl H.W., Hanke M. and Neubauer A., Regularization of Inverse Problems. Dordrecht, Boston, London, 1996.

[20] Hadamard J., Lectures on the Cauchy Problem in Linear Partial Differential Equations, Yale University Press, New Haven, 1923.

[21] Hào D.N., A noncharacteristic Cauchy problem for linear parabolic equations II: A variational method. Numer. Funct. Anal. Optim. 13(5&6)(1992), 541–564.

[22] Hào D.N., A noncharacteristic Cauchy problem for linear parabolic equations III: A variational method and its approximation schemes. Numer. Funct. Anal. Optim. 13(5&6)(1992), 565–583.

[23] Hào D.N., A mollification method for ill-posed problems, Numer. Math. 68(1994), 469–506.

[24] Hào D.N., Methods for Inverse Heat Conduction Problems. Peter Lang Verlag, Frankfurt/Main, Bern, New York, Paris, 1998.

[25] Hào D.N. and Du N.V., Stability results for backward parabolic equations with time-dependent coefficients. Inverse Problems 27(2011), no. 2, 025003, 20 pp.

[26] Hào D.N. and Oanh N.T.N., Determination of the initial condition in parabolic equations from boundary observations. J. Inverse Ill-Posed Probl. 24(2016), 195–220.

[27] Hào D.N. and Oanh N.T.N., Determination of the initial condition in parabolic equations from integral observations. Inverse Probl. Sci. Eng., doi: 10.1080/17415977.2016.1229778.

[28] Hào D.N., Thanh P.X., Lesnic D. and Johansson B.T., A boundary element method for a multi-dimensional inverse heat conduction problem. Int. J. Comput. Math. 89(2012), 1540–1554.

[29] Hào D.N., Thành N.T., and Sahli H., Splitting-based gradient method for multi-dimensional inverse conduction problems. J. Comput. Appl. Math. 232(2009), 361–377.

[30] Hinze M., A variational discretization concept in control constrained optimization: The linear-quadratic case, Comput. Optim. Appl. 30(2005), 45–61.

[31] Isakov V., Inverse Problems for Partial Differential Equations. Second edition. Springer, New York, 2006.

[32] Ivanov V.K., On linear problems which are not well-posed. Dokl. Akad. Nauk SSSR 145(1962), no. 2, 270–272. (Russian)

[33] Ivanov V.K., Vasin V.V., and Tanana V.P., Theory of Linear Ill-Posed Problems and its Applications. VSP, Utrecht, 2002.

[34] John F., Numerical solution of the equation of heat conduction for preceding times. Ann. Mat. Pura Appl. 40(1955), 129–142.

[35] Jovanović B.S. and Süli E., Analysis of Finite Difference Schemes. For Linear Partial Differential Equations with Generalized Solutions. Springer, London, 2014.

[36] Kabanikhin S. I., Inverse and Ill-Posed Problems. Theory and Applications. De Gruyter, Germany, 2011.

[37] Kalnay E., Atmospheric Modeling, Data Assimilation and Predictability. Cambridge University Press, Cambridge, 2002.

[38] Klibanov M.V., Estimates of initial conditions of parabolic equations and inequalities via lateral Cauchy data. Inverse Problems 22(2006), 495–514.

[39] Klibanov M.V. and Tikhonravov A.V., Estimates of initial conditions of parabolic equations and inequalities in infinite domains via lateral Cauchy data. J. Differential Equations 237(2007), 198–224.

[40] Ladyzhenskaya O. A., The Boundary Value Problems of Mathematical Physics. Springer-Verlag, New York, 1985.

[41] Ladyzhenskaya O. A., Solonnikov V. A. and Ural'tseva N. N., Linear and Quasilinear Equations of Parabolic Type. American Mathematical Society, 1968.

[42] Lavrent'ev M.M., On Cauchy's problem for Laplace's equation. Dokl. Akad. Nauk SSSR 102(1955), no. 2, 205–206. (Russian)

[43] Lavrent'ev M.M., Integral equations of the first kind. Dokl. Akad. Nauk SSSR 127(1959), no. 1, 31–33. (Russian)

[44] Lavrent'ev M.M., Ill-Posed Problems of Mathematical Physics. Siberian Branch of the Russian Academy Publishers, 1962. (Russian)

[45] Lavrent'ev M. M., Romanov V. G. and Shishatskii G. P., Ill-Posed Problems in Mathematical Physics and Analysis. Amer. Math. Soc., Providence, R.I., 1986.

[46] Le Dimet F.-X. and Shutyaev V.P., On Newton methods in data assimilation. Russian J. Numer. Anal. Math. Modelling 15(2000), no. 5, 419–434.

[47] Le Dimet F.-X. and Shutyaev V.P., On data assimilation for quasilinear parabolic problems. Russian J. Numer. Anal. Math. Modelling 16(2001), no. 3, 247–259.

[48] Le Dimet F.-X. and Talagrand O., Variational algorithms for analysis and assimilation of meteorological observations: theoretical aspects. Tellus 38A(1986), 97–110.

[49] Li J., Yamamoto M. and Zou J., Conditional stability and numerical reconstruction of initial temperature. Commun. Pure Appl. Anal. 8(2009), 361–382.

[50] Lions J.-L., Optimal Control of Systems Governed by Partial Differential Equations. Springer, Berlin, 1971.

[51] Louis A.K., Inverse und schlecht gestellte Probleme. B.G. Teubner, Stuttgart, 1989 (in German).

[52] Lundvall J., Kozlov V. and Weinerfelt P., Iterative methods for data assimilation for Burgers' equation. J. Inverse Ill-Posed Probl. 14(2006), 505–535.

[53] Manselli P. and Miller K., Dimensionality reduction methods for efficient numerical solution, backward in time, of parabolic equations with variable coefficients. SIAM J. Math. Anal. 11(1980), 147–159.

[54] Marchuk G.I., Methods of Numerical Mathematics. Springer-Verlag, New York, 1975.

[55] Marchuk G.I., Mathematical Modeling in the Problem of the Environment. Nauka, Moscow, 1982 (in Russian).

[56] Marchuk G.I., Splitting and alternating direction methods. In Ciarlet P.G. and Lions J.-L., editors, Handbook of Numerical Analysis. Volume 1: Finite Difference Methods. Elsevier Science Publishers B.V., North-Holland, Amsterdam, 1990.

[57] Marchuk G.I., Adjoint Equations and Analysis of Complex Systems. Springer, New York, 1995.

[58] Mizohata S., Unicité du prolongement des solutions pour quelques opérateurs différentiels paraboliques. Mem. Coll. Sci. Univ. Kyoto Ser. A Math. 31(1958), 219–239 (in French).

[59] Nemirovskii A.S., The regularizing properties of the conjugate gradient method in ill-posed problems. Zh. Vychisl. Mat. Mat. Fiz. 26(1986), 332–347; English transl. in U.S.S.R. Comput. Maths. Math. Phys. 26:2(1986), 7–16.

[60] Nocedal J. and Wright S.J., Numerical Optimization. Second edition. Springer, New York, 2006.

[61] Oanh N.T.N., A splitting method for a backward parabolic equation with time-dependent coefficients. Comput. Math. Appl. 65(2013), 17–28.

[62] Oanh N.T.N. and Huong B.V., Determination of a time-dependent term in the right hand side of linear parabolic equations. Acta Math. Vietnam. 41(2016), 313–335.
[63] Okubo A., Diffusion and Ecological Problems: Modern Perspectives. Springer Science+Business Media, New York, 2001.

[64] Parmuzin E.I., Le Dimet F.-X. and Shutyaev V.P., On error analysis in variational data assimilation problem for a nonlinear convection-diffusion model. Russian J. Numer. Anal. Math. Modelling 21(2006), no. 2, 169–183.

[65] Parmuzin E.I. and Shutyaev V.P., Numerical solution of the problem on reconstructing the initial condition for a semilinear parabolic equation. Russian J. Numer. Anal. Math. Modelling 21(2006), no. 4, 375–393.

[66] Parmuzin E.I. and Shutyaev V.P., Variational data assimilation for a nonstationary heat conduction problem with nonlinear diffusion. Russian J. Numer. Anal. Math. Modelling 20(2005), no. 1, 81–95.

[67] Parmuzin E.I. and Shutyaev V.P., Numerical algorithms for solving a problem of data assimilation. Zh. Vychisl. Mat. Mat. Fiz. 37(1997), no. 7, 816–827 (in Russian); translation in Comput. Math. Math. Phys. 37(1997), no. 7, 792–803.

[68] Payne L., Improperly Posed Problems in Partial Differential Equations. SIAM, Philadelphia, 1975.

[69] Pucci C., Sui problemi di Cauchy non "ben posti". Atti Accad. Naz. Lincei. Rend. Cl. Sci. Fis. Mat. Nat. 18(1955), no. 8, 473–477 (in Italian).

[70] Samarskii A.A., Lazarov R.D. and Makarov V.L., Finite Difference Schemes for Differential Equations with Weak Solutions. Vysshaya Shkola Publ., Moscow, 1987 (in Russian).

[71] Shutyaev V.P., Control Operators and Iterative Algorithms in Variational Data Assimilation Problems. Nauka, Moscow, 2001 (in Russian).

[72] Sun N.Z., Inverse Problems in Groundwater Modeling. Kluwer Acad. Publishers, Dordrecht, Boston, London, 1994.

[73] Talagrand O., A study of the dynamics of four-dimensional data assimilation. Tellus 33(1981), 43–60.

[74] Talagrand O., Assimilation of observations, an introduction. J. Met. Soc. Japan 75(1997), 1B, 191–209.

[75] Talagrand O. and Courtier P., Variational assimilation of meteorological observations with the adjoint vorticity equation. Part I: Theory. Quart. J. Roy. Meteor. Soc. 113(1987), 1311–1328.

[76] Thành N.T., Infrared Thermography for the Detection and Characterization of Buried Objects. PhD thesis, Vrije Universiteit Brussel, Brussels, Belgium, 2007.

[77] Thành N.T., Hào D.N. and Sahli H., Thermal infrared technique for landmine detection: mathematical formulation and methods. Acta Math. Vietnam. 36(2011), 469–504.
[78] Tikhonov A.N., On the stability of inverse problems. Doklady Acad. Sci. USSR 39(1943), 176–179.

[79] Tikhonov A.N., On the solution of ill-posed problems and the method of regularization. Dokl. Akad. Nauk SSSR 151(1963), 501–504 (in Russian).

[80] Tikhonov A.N. and Arsenin V.Y., Solutions of Ill-Posed Problems. Winston, Washington, 1977.

[81] Trefethen L.N. and Bau D. III, Numerical Linear Algebra. SIAM, Philadelphia, 1997.

[82] Tröltzsch F., Optimal Control of Partial Differential Equations. Graduate Studies in Mathematics, American Mathematical Society, Providence, Rhode Island, 2010.

[83] Wloka J., Partial Differential Equations. Cambridge University Press, 1987.

[84] Yanenko N.N., The Method of Fractional Steps. Springer-Verlag, Berlin, Heidelberg, New York, 1971.