
# Lecture 1

OPERATOR EQUATIONS

A common mathematical equation, arising in science and technology, is the general operator equation; that is,

L(u) = f, (1)

where L is an operator, f is known data, and u is the unknown state to be found.
If f is identically vanishing, then (1) is said to be homogeneous; otherwise, it is
called nonhomogeneous. Problem (1) is well-stated if

(iv) the operator L is well-defined; that is, L : D → T maps every u ∈ D
to a unique element L(u) ∈ T.

If statement (iv) is true and L is surjective (onto), existence of solutions to
problem (1) is guaranteed for all f ∈ T, since T equals the range L(D); that
is, for all f ∈ T, there exists u ∈ D such that L(u) = f. In addition, if L is
injective (one-to-one), uniqueness of the solution is ensured; recall that L is
injective if for all u1, u2 ∈ D, L(u1) = L(u2) implies u1 = u2. If an operator L
is surjective and injective, it is said to be bijective and we can define the inverse
operator L⁻¹ : T → D. Existence of the inverse operator enables us to write

L⁻¹(L(u)) = L⁻¹(f) ⟹ u = L⁻¹(f), (2)

meaning that u is computed by applying L⁻¹ to the known data f.

Other than existence and uniqueness, a significant property of the solution
to problem (1) is stability, best known to mathematicians as continuous depen-
dence on the data f. We say that the solution u depends continuously on the
data f if small changes δf in the data result in small changes δu in the solu-
tion, with respect to appropriate norms. Norms are used to quantify the notion
of small by providing a way to compute the distance between elements in vector
spaces. As will become evident in later lectures, stability requires continuity of
the inverse operator L⁻¹.
If a problem has exactly one stable solution, it is called well-posed; other-
wise it is said to be ill-posed. It is worth mentioning that researchers have made
determined efforts to deal with ill-posed problems, such as many inverse prob-
lems. As a result of these efforts, the so-called regularization methods aim to
construct well-posed problems in the vicinity of the original ill-posed problem,
given some notion of proximity.
To clarify the ideas presented in this lecture, we discuss certain instances of
problem (1), common in science and technology, that fit into the abstract frame-
work.

EXAMPLE 1. Let L : ℝ^{n×1} → ℝ^{n×1} be the operator with values L(x) = Ax,
where A ∈ ℝ^{n×n}. Given data f = b ∈ ℝ^{n×1}, equation (1) is written

Ax = b. (3)

If the rows of A are linearly independent, we say that A has full rank and
we write rank(A) = n. Recall that the rank of A is the dimension of the image
of L and that the determinant of a full-rank matrix is nonzero. When this is the
case, the inverse matrix A⁻¹ exists and is defined by AA⁻¹ = I_n, where I_n is the
identity matrix of size n. Hence, the solution x is uniquely determined by

x = A⁻¹b. (4)

If rank(A) < n, A is called rank-deficient and det(A) = 0. In this case,
A is said to be singular, meaning that it is not invertible, and there are two pos-
sible scenarios: either b ∈ L(ℝ^{n×1}) and (3) has infinitely many solutions, or
b ∉ L(ℝ^{n×1}) and (3) has no solutions.
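Both scenarios are easy to probe numerically. The following Python/NumPy sketch (an illustration added here, not part of the lecture; the matrices are arbitrary choices) checks the full-rank and rank-deficient cases:

```python
import numpy as np

# Full-rank case: rank(A) = n and det(A) != 0, so A is invertible.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
assert np.linalg.matrix_rank(A) == 2

# Unique solution x = A^{-1} b (solve avoids forming A^{-1} explicitly).
x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)

# Rank-deficient case: det(S) = 0, so S is singular and not invertible.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
assert np.linalg.matrix_rank(S) == 1
assert np.isclose(np.linalg.det(S), 0.0)
```

In practice one solves the linear system directly rather than computing A⁻¹, which is both cheaper and numerically safer.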

EXAMPLE 2. Let L : ℝ^{n×1} → ℂ^{n×1} be the operator with values L(x) = (A − λI_n)x, where A ∈ ℝ^{n×n} and λ ∈ ℂ. The homogeneous problem L(x) = 0 is
written

Ax = λx (5)
LECTURES IN APPLIED MATHEMATICS

and is known as the eigenvalue problem of A; the scalars λ satisfying (5) are the
eigenvalues of A and they are associated with the so-called eigenvectors x ≠ 0.
Note that if the matrix A − λI_n is invertible, then (5) has only the trivial solution
x = 0; hence, we consider the singular case by assuming that

χ_A(λ) = det(A − λI_n) = 0. (6)

The polynomial χ_A is said to be the characteristic polynomial of A and its roots
are the eigenvalues of A. The set of the eigenvalues of A is called the spectrum
of A and is denoted σ(A); that is,

σ(A) = {λ ∈ ℂ : χ_A(λ) = 0}. (7)

If x is an eigenvector of A, then so is αx for all α ∈ ℂ \ {0}, which means
that (5) lacks the uniqueness property; thus, (5) is ill-posed.
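Numerically, the spectrum is computed without ever forming χ_A explicitly. A Python/NumPy sketch (an added illustration; the matrix is an arbitrary symmetric choice) shows both (5) and the non-uniqueness of eigenvectors:

```python
import numpy as np

# An arbitrary symmetric matrix; by symmetry its eigenvalues are real.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eig returns the eigenvalues and a matrix whose columns are eigenvectors.
lam, V = np.linalg.eig(A)
assert np.allclose(np.sort(lam.real), [1.0, 3.0])  # sigma(A) = {1, 3}

# Each pair satisfies A x = lambda x ...
for i in range(len(lam)):
    assert np.allclose(A @ V[:, i], lam[i] * V[:, i])

# ... and so does any nonzero multiple of an eigenvector,
# which is exactly the non-uniqueness noted above.
x = 5.0 * V[:, 0]
assert np.allclose(A @ x, lam[0] * x)
```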

EXAMPLE 3. Let Ω ⊂ ℝⁿ be an open, bounded, and connected region with
sufficiently smooth boundary ∂Ω, and let C²(Ω) be the class of functions on Ω
whose derivatives up to order two exist and are continuous on Ω. The differ-
ential operator ∆ : C²(Ω) → C⁰(Ω) is called the Laplace operator; given n
Cartesian coordinates (x_i), the Laplace operator is written

∆ = ∑_{i=1}^{n} ∂²/∂x_i². (8)

Given data f, the corresponding instance of problem (1) is

∆u = f, (9)

where f ∈ C⁰(Ω) and Ω̄ = Ω ∪ ∂Ω. Equation (9) is an elliptic partial differ-
ential equation known as the Poisson equation, while its homogeneous version
∆u = 0 is called the Laplace equation. Typically, elliptic equations describe
equilibrium states and their solution minimizes some notion of the energy of the
system. Uniqueness of solutions to (9) requires specification of the values of u at
the boundary of Ω, such as u = g at ∂Ω, where g ∈ C⁰(∂Ω); such conditions are
called Dirichlet boundary conditions and make sense if u ∈ C²(Ω) ∩ C⁰(Ω̄),
meaning that u is twice continuously differentiable in Ω and can be extended con-
tinuously up to the boundary ∂Ω.

We prove uniqueness of the solution to the boundary value problem

∆u = f in Ω; u = g at ∂Ω (10)

using an argument by contradiction and Green's identity

∫_Ω v∆u + ∫_Ω ∇v · ∇u = ∫_{∂Ω} v ∂u/∂n, (11)

where ∂u/∂n is the normal derivative of u in the outward pointing normal di-
rection n. Suppose u1 ≠ u2, where both u1 and u2 satisfy problem (10); that is,

∆u1 = f in Ω; u1 = g at ∂Ω, (12)

∆u2 = f in Ω; u2 = g at ∂Ω. (13)

Hence, the difference w = u1 − u2 satisfies the homogeneous problem

∆w = 0 in Ω; w = 0 at ∂Ω. (14)

Substituting w for both v and u in (11) results in

∫_Ω w∆w + ∫_Ω ∇w · ∇w = ∫_{∂Ω} w ∂w/∂n, (15)

where the first and last terms vanish, since ∆w = 0 in Ω and w = 0 at ∂Ω,
respectively. Thus, we are left with

∫_Ω ∇w · ∇w = 0 ⟹ ∇w = 0, (16)

meaning that w is constant throughout Ω, and since it is continuous up to the
boundary, where w = 0, we get w = u1 − u2 = 0 in Ω, contradicting our
hypothesis u1 ≠ u2. We conclude that the solution to problem (10) is unique.


COMPUTATIONAL LAB
Let Ω = (0, 1) and consider the boundary value problem

u″(x) = −1 in Ω; u(0) = u(1) = 0. (17)

The exact solution of (17) is

u(x) = (1/2) x(1 − x). (18)
We define a finite set of n equidistant points x_i, where i = 1, 2, . . . , n; that
is,

0 = x_1 < x_2 < ··· < x_{n−1} < x_n = 1, (19)

where x_i = x_{i−1} + h, i = 2, . . . , n, and 0 < h ≪ 1 provides a fineness measure
of the resulting partition. Given the function values u_i at x_i, Taylor's theorem
provides the approximations

u_{i−1} ≈ u_i − h u_i′ + (h²/2!) u_i″,   u_{i+1} ≈ u_i + h u_i′ + (h²/2!) u_i″. (20)
Combining equations (20) results in

u_i″ ≈ (u_{i−1} − 2u_i + u_{i+1}) / h² = (1/h²) [ 1  −2  1 ] [ u_{i−1}, u_i, u_{i+1} ]ᵀ. (21)
This last formula holds for the interior points x_i, which means that the valid index
values are i = 2, . . . , n − 1; the cases i = 1, i = n need some extra attention,
since the boundary conditions u_1 = u_n = 0 must be considered. For that
purpose, we use the equations

1·u_1 + 0·u_2 + ··· + 0·u_{n−1} + 0·u_n = 0, (22)

0·u_1 + 0·u_2 + ··· + 0·u_{n−1} + 1·u_n = 0. (23)
Equations (21), (22), and (23) can be written as

          [ 1    0    0   ...   0    0 ] [ u_1     ]   [ f_1     ]
          [ 1   -2    1   ...   0    0 ] [ u_2     ]   [ f_2     ]
  (1/h²)  [ 0    1   -2   ...   0    0 ] [ u_3     ] = [ f_3     ],   (24)
          [ :    :    :         :    : ] [  :      ]   [  :      ]
          [ 0    0   ...   1   -2    1 ] [ u_{n-1} ]   [ f_{n-1} ]
          [ 0    0   ...   0    0    1 ] [ u_n     ]   [ f_n     ]

where f_i = f(x_i) = −1 when i = 2, 3, . . . , n − 1 and f_1 = f_n = 0; the
first and last rows of the coefficient matrix ensure that the Dirichlet conditions
u(0) = u(1) = 0 are satisfied by the computed solution.
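Before assembling the full system, it is worth sanity-checking the second-difference formula (21) on its own. The short Python sketch below (an added illustration; the test function sin x is an arbitrary smooth choice) confirms the expected second-order accuracy:

```python
import math

def second_diff(u, x, h):
    # Central second difference (u(x - h) - 2 u(x) + u(x + h)) / h^2, as in (21).
    return (u(x - h) - 2.0 * u(x) + u(x + h)) / h**2

x = 0.7
exact = -math.sin(x)  # (sin)'' = -sin
errors = [abs(second_diff(math.sin, x, h) - exact) for h in (1e-1, 1e-2)]

# Halving h by a factor of 10 shrinks the error by roughly 100: O(h^2) accuracy.
assert errors[1] < errors[0] / 50.0
```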
There are two ways to generate the points x_i: either by choosing the number
n of these points, or by choosing the step h. Recall that

x_n = x_1 + (n − 1)h; (25)

hence, in Matlab, we can obtain the vector x with components x_i with the fol-
lowing code snippet:

h = 0.001;
n = 1/h + 1;
x = linspace(0, 1, n);

The function linspace guarantees that the boundary points x_1 = 0, x_n = 1
are components of x and thus, it is preferred over the colon vector construction
0:h:1. Now that we have a discrete version of Ω̄, we can build the vector f
with components f_i = f(x_i). In our example, f_i = −1 for i = 2, 3, . . . , n − 1,
whereas f_1 = f_n = 0:

f = -ones(n, 1);
f(1) = 0; f(n) = 0;

To construct the coefficient matrix A of the linear system (24), we note that A is
a Toeplitz matrix with modified first and last rows, so that the boundary condi-
tions are satisfied, and that it is dominated by zeros. Matrices dominated by zeros are
called sparse, and Matlab stores them by squeezing out the zero elements when
the sparse function is used:

A = sparse(toeplitz([-2, 1, zeros(1, n-2)])) / h^2;

A(1,1) = 1; A(1,2) = 0; % impose B.C. at x(1)
A(n,n) = 1; A(n,n-1) = 0; % impose B.C. at x(n)

Given A and f, the approximate solution is computed as the solution to the
linear system (24) by typing

u = A\f;

and the result is shown in the figure below.

(Figure: exact and numerical solutions u(x) for x ∈ [0, 1]; legend: Exact, Numerical.)

Let us now consider the two-dimensional boundary value problem

∂²u/∂x² + ∂²u/∂y² = −1 in Ω; u = 0 at ∂Ω. (26)

Using an indexed grid, such as the one shown in the figure below, and the approxi-
mations (20) for each one of the independent variables of u, we get

∂²u/∂x² + ∂²u/∂y² ≈ (u_{i−n} − 2u_i + u_{i+n})/h² + (u_{i−1} − 2u_i + u_{i+1})/h² (27)

(Figure: an 11 × 11 grid on the unit square, with nodes numbered column by column, from 1 at (x, y) = (0, 0) to 121 at (1, 1).)

and hence,

∆u ≈ (u_{i−n} + u_{i−1} − 4u_i + u_{i+1} + u_{i+n}) / h². (28)
To impose the boundary conditions, we first label the boundaries at x = 0, y =
1, x = 1, y = 0 as W, N, E, S, respectively, and we identify the corresponding
index vectors

iW = 1, 2, . . . , n, (29)

iN = 2n, 3n, . . . , (n − 1)n, (30)

iE = (n − 1)n + 1, (n − 1)n + 2, . . . , n², (31)

iS = n + 1, 2n + 1, . . . , (n − 2)n + 1, (32)

where the corner points are included only in iW and iE. Combining iW, iN, iE, iS
into one vector iB.C., denoted in Matlab by ibc, the following code snippet constructs
the right-hand-side vector f and the coefficient matrix A.
f = -ones(n^2, 1);
f(ibc) = 0;
A = (1/h^2)*sparse(toeplitz([-4, 1, zeros(1, n-2),1,...
zeros(1, (n-1)*n-1)]));
A(ibc,:) = 0;
A(ibc,ibc) = speye(4*(n-1));
After solving the resulting system by typing u=A\f, the vector u is reshaped and
then plotted using the contour function.

(Figure: contour plot of the computed solution u(x, y) on the unit square.)
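The two-dimensional construction can likewise be sketched in Python with SciPy's sparse tools (an equivalent of the Matlab snippet, not the lecture's own code); nodes are numbered column by column, matching the grid figure, but with 0-based indices:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

n = 11                       # grid points per side, as in the figure
h = 1.0 / (n - 1)
N = n * n                    # total unknowns, numbered column by column

# Five-point Laplacian (28): neighbors at offsets -n, -1, +1, +n.
A = lil_matrix((N, N))
for i in range(N):
    A[i, i] = -4.0 / h**2
    for off in (-n, -1, 1, n):
        if 0 <= i + off < N:
            A[i, i + off] = 1.0 / h**2

f = -np.ones(N)

# Boundary nodes: first/last column (W/E) and bottom/top of each column (S/N).
col, row = np.arange(N) // n, np.arange(N) % n
ibc = np.where((col == 0) | (col == n - 1) | (row == 0) | (row == n - 1))[0]

# Impose u = 0 on the boundary, as A(ibc,:) = 0 and speye do in the Matlab code.
for i in ibc:
    A[i, :] = 0.0
    A[i, i] = 1.0
f[ibc] = 0.0

U = spsolve(A.tocsr(), f).reshape((n, n), order="F")
assert np.allclose(U[0, :], 0.0) and np.allclose(U[:, 0], 0.0)
assert np.unravel_index(np.argmax(U), U.shape) == (5, 5)  # peak at the center
```

The solution is positive inside the square and peaks at its center, as expected for an equilibrium state driven by a uniform source.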