
Karlstad University

Division for Engineering Science, Physics and Mathematics

Yury V. Shestopalov and Yury G. Smirnov

Integral Equations

A compendium

Karlstad 2002
Contents

1 Preface 4

2 Notion and examples of integral equations (IEs). Fredholm IEs of the first and second kind 5
  2.1 Primitive function 6

3 Examples of solution to integral equations and ordinary differential equations 6

4 Reduction of ODEs to the Volterra IE and proof of the unique solvability using the contraction mapping 8
  4.1 Reduction of ODEs to the Volterra IE 8
  4.2 Principle of contraction mappings 10

5 Unique solvability of the Fredholm IE of the 2nd kind using the contraction mapping. Neumann series 12
  5.1 Linear Fredholm IE of the 2nd kind 12
  5.2 Nonlinear Fredholm IE of the 2nd kind 13
  5.3 Linear Volterra IE of the 2nd kind 14
  5.4 Neumann series 14
  5.5 Resolvent 17

6 IEs as linear operator equations. Fundamental properties of completely continuous operators 17
  6.1 Banach spaces 17
  6.2 IEs and completely continuous linear operators 18
  6.3 Properties of completely continuous operators 20

7 Elements of the spectral theory of linear operators. Spectrum of a completely continuous operator 21

8 Linear operator equations: the Fredholm theory 23
  8.1 The Fredholm theory in Banach space 23
  8.2 Fredholm theorems for linear integral operators and the Fredholm resolvent 26

9 IEs with degenerate and separable kernels 27
  9.1 Solution of IEs with degenerate and separable kernels 27
  9.2 IEs with degenerate kernels and approximate solution of IEs 31
  9.3 Fredholm's resolvent 32

10 Hilbert spaces. Self-adjoint operators. Linear operator equations with completely continuous operators in Hilbert spaces 34
  10.1 Hilbert space. Selfadjoint operators 34
  10.2 Completely continuous integral operators in Hilbert space 35
  10.3 Selfadjoint operators in Hilbert space 37
  10.4 The Fredholm theory in Hilbert space 37
  10.5 The Hilbert-Schmidt theorem 41

11 IEs with symmetric kernels. Hilbert-Schmidt theory 42
  11.1 Selfadjoint integral operators 42
  11.2 Hilbert-Schmidt theorem for integral operators 45

12 Harmonic functions and Green's formulas 47
  12.1 Green's formulas 47
  12.2 Properties of harmonic functions 49

13 Boundary value problems 50

14 Potentials with logarithmic kernels 51
  14.1 Properties of potentials 52
  14.2 Generalized potentials 56

15 Reduction of boundary value problems to integral equations 56

16 Functional spaces and Chebyshev polynomials 59
  16.1 Chebyshev polynomials of the first and second kind 59
  16.2 Fourier-Chebyshev series 60

17 Solution to integral equations with a logarithmic singularity of the kernel 63
  17.1 Integral equations with a logarithmic singularity of the kernel 63
  17.2 Solution via Fourier-Chebyshev series 64

18 Solution to singular integral equations 66
  18.1 Singular integral equations 66
  18.2 Solution via Fourier-Chebyshev series 66

19 Matrix representation 71
  19.1 Matrix representation of an operator in the Hilbert space 71
  19.2 Summation operators in the spaces of sequences 72

20 Matrix representation of logarithmic integral operators 74
  20.1 Solution to integral equations with a logarithmic singularity of the kernel 76

21 Galerkin methods and basis of Chebyshev polynomials 79
  21.1 Numerical solution of logarithmic integral equation by the Galerkin method 81

1 Preface
This compendium focuses on fundamental notions and statements of the theory of linear operators and integral operators. The Fredholm theory is set forth both in Banach and in Hilbert spaces. The elements of the theory of harmonic functions are presented, including the reduction of boundary value problems to integral equations and potential theory, as well as some methods of solution to integral equations. Some methods of numerical solution are also considered.
The compendium contains many examples that illustrate the most important notions and solution techniques.
The material is largely based on the books "Elements of the Theory of Functions and Functional Analysis" by A. Kolmogorov and S. Fomin (Dover, New York, 1999), "Linear Integral Equations" by R. Kress (2nd edition, Springer, Berlin, 1999), and "Logarithmic Integral Equations in Electromagnetics" by Y. Shestopalov, Y. Smirnov, and E. Chernokozhin (VSP, Utrecht, 2000).

2 Notion and examples of integral equations (IEs). Fredholm IEs of the first and second kind

Consider an integral equation (IE)

    f(x) = φ(x) + λ ∫_a^b K(x, y) φ(y) dy.    (1)

This is an IE of the 2nd kind. Here K(x, y) is a given function of two variables (the kernel) defined in the square

    Π = {(x, y) : a ≤ x ≤ b, a ≤ y ≤ b},

f(x) is a given function, φ(x) is a sought-for (unknown) function, and λ is a parameter.
We will often assume that f(x) is a continuous function.
The IE

    f(x) = ∫_a^b K(x, y) φ(y) dy    (2)

constitutes an example of an IE of the 1st kind.
We will say that equation (1) (resp. (2)) is a Fredholm IE of the 2nd kind (resp. 1st kind) if the kernel K(x, y) satisfies one of the following conditions:

(i) K(x, y) is continuous as a function of the variables (x, y) in the square Π;

(ii) K(x, y) may be discontinuous for some (x, y), but the double integral

    ∫_a^b ∫_a^b |K(x, y)|^2 dx dy < ∞,    (3)

i.e., takes a finite value (converges in the square Π in the sense of the definition of convergent Riemann integrals).
An IE

    f(x) = φ(x) + λ ∫_0^x K(x, y) φ(y) dy    (4)

is called the Volterra IE of the 2nd kind. Here the kernel K(x, y) may be continuous in the square Π = {(x, y) : a ≤ x ≤ b, a ≤ y ≤ b} for a certain b > a, or discontinuous and satisfying condition (3); f(x) is a given function; and λ is a parameter.

Example 1 Consider a Fredholm IE of the 2nd kind

    x^2 = φ(x) − ∫_0^1 (x^2 + y^2) φ(y) dy,    (5)

where the kernel K(x, y) = −(x^2 + y^2) is defined and continuous in the square Π_1 = {(x, y) : 0 ≤ x ≤ 1, 0 ≤ y ≤ 1}, f(x) = x^2 is a given function, and the parameter λ = 1.

Example 2 Consider an IE of the 2nd kind

    f(x) = φ(x) − λ ∫_0^1 ln|x − y| φ(y) dy,    (6)

where the kernel K(x, y) = ln|x − y| is discontinuous (unbounded) in the square Π_1 along the line x = y, f(x) is a given function, and the parameter λ = 1. Equation (6) is a Fredholm IE of the 2nd kind because the double integral

    ∫_0^1 ∫_0^1 ln^2 |x − y| dx dy < ∞,    (7)

i.e., takes a finite value (converges in the square Π_1).

2.1 Primitive function

Below we will use some properties of the primitive function (or (general) antiderivative) F(x) defined, for a function f(x) defined on an interval I, by

    F′(x) = f(x).    (8)

The primitive function F(x) for a function f(x) (its antiderivative) is denoted using the integral sign, which gives the notion of the indefinite integral of f(x) on I,

    F(x) = ∫ f(x) dx.    (9)

Theorem 1 If f(x) is continuous on [a, b], then the function

    F(x) = ∫_a^x f(t) dt    (10)

(the antiderivative) is differentiable on [a, b] and F′(x) = f(x) for each x ∈ [a, b].

Example 3 F_0(x) = arctan √x is a primitive function for the function

    f(x) = 1 / (2√x (1 + x))

because

    F_0′(x) = dF_0/dx = 1 / (2√x (1 + x)) = f(x).    (11)
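
A quick symbolic check of Example 3 can be done in a few lines; the following sketch assumes SymPy and simply verifies the derivative computed in (11).

    import sympy as sp

    x = sp.symbols('x', positive=True)
    F0 = sp.atan(sp.sqrt(x))                       # F_0(x) = arctan(sqrt(x))
    # difference between F_0'(x) and 1/(2*sqrt(x)*(1 + x)); simplifies to 0
    print(sp.simplify(sp.diff(F0, x) - 1 / (2 * sp.sqrt(x) * (1 + x))))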

3 Examples of solution to integral equations and ordinary differential equations

Example 4 Consider a Volterra IE of the 2nd kind

    f(x) = φ(x) − λ ∫_0^x e^{x−y} φ(y) dy,    (12)

where the kernel K(x, y) = e^{x−y}, f(x) is a given continuously differentiable function, and λ is a parameter. Differentiating (12) we obtain

    f(x) = φ(x) − λ ∫_0^x e^{x−y} φ(y) dy,
    f′(x) = φ′(x) − λ [ φ(x) + ∫_0^x e^{x−y} φ(y) dy ].

Subtracting termwise we obtain an ordinary differential equation (ODE) of the first order

    φ′(x) − (λ + 1) φ(x) = f′(x) − f(x)    (13)

with respect to the (same) unknown function φ(x). Setting x = 0 in (12) we obtain the initial condition for the unknown function φ(x):

    φ(0) = f(0).    (14)

Thus we have obtained the initial value problem for φ(x):

    φ′(x) − (λ + 1) φ(x) = F(x),   F(x) = f′(x) − f(x),    (15)

    φ(0) = f(0).    (16)

Integrating (15) subject to the initial condition (16) we obtain the Volterra IE (12), which is therefore equivalent to the initial value problem (15) and (16).

Solve (12) by the method of successive approximations. Set

    K(x, y) = e^{x−y},  0 ≤ y ≤ x ≤ 1,
    K(x, y) = 0,        0 ≤ x < y ≤ 1.

Write the first two successive approximations:

    φ_0(x) = f(x),   φ_1(x) = f(x) + λ ∫_0^1 K(x, y) φ_0(y) dy,    (17)

    φ_2(x) = f(x) + λ ∫_0^1 K(x, y) φ_1(y) dy =    (18)

    = f(x) + λ ∫_0^1 K(x, y) f(y) dy + λ^2 ∫_0^1 K(x, t) dt ∫_0^1 K(t, y) f(y) dy.    (19)

Set

    K_2(x, y) = ∫_0^1 K(x, t) K(t, y) dt = ∫_y^x e^{x−t} e^{t−y} dt = (x − y) e^{x−y}    (20)

to obtain (by changing the order of integration)

    φ_2(x) = f(x) + λ ∫_0^1 K(x, y) f(y) dy + λ^2 ∫_0^1 K_2(x, y) f(y) dy.    (21)

The third approximation is

    φ_3(x) = f(x) + λ ∫_0^1 K(x, y) f(y) dy + λ^2 ∫_0^1 K_2(x, y) f(y) dy + λ^3 ∫_0^1 K_3(x, y) f(y) dy,    (22)

where

    K_3(x, y) = ∫_0^1 K(x, t) K_2(t, y) dt = ((x − y)^2 / 2!) e^{x−y}.    (23)

The general relationship has the form

    φ_n(x) = f(x) + Σ_{m=1}^{n} λ^m ∫_0^1 K_m(x, y) f(y) dy,    (24)

where

    K_1(x, y) = K(x, y),   K_m(x, y) = ∫_0^1 K(x, t) K_{m−1}(t, y) dt = ((x − y)^{m−1} / (m − 1)!) e^{x−y}.    (25)

Assuming that the successive approximations (24) converge, we obtain the (unique) solution to IE (12) by passing to the limit in (24):

    φ(x) = f(x) + Σ_{m=1}^{∞} λ^m ∫_0^1 K_m(x, y) f(y) dy.    (26)

Introducing the notation for the resolvent

    Γ(x, y, λ) = Σ_{m=1}^{∞} λ^{m−1} K_m(x, y)    (27)

and changing the order of summation and integration in (26), we obtain the solution in the compact form

    φ(x) = f(x) + λ ∫_0^1 Γ(x, y, λ) f(y) dy.    (28)

Calculate the resolvent explicitly using (25):

    Γ(x, y, λ) = e^{x−y} Σ_{m=1}^{∞} λ^{m−1} (x − y)^{m−1} / (m − 1)! = e^{x−y} e^{λ(x−y)} = e^{(λ+1)(x−y)},    (29)

so that (28) gives the solution

    φ(x) = f(x) + λ ∫_0^1 e^{(λ+1)(x−y)} f(y) dy.    (30)

Note that, according to the definition of K(x, y) above,

    Γ(x, y, λ) = e^{(λ+1)(x−y)},  0 ≤ y ≤ x ≤ 1,
    Γ(x, y, λ) = 0,               0 ≤ x < y ≤ 1,

and the series (27) converges for every λ.
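
As a quick numerical sanity check of (30), the following sketch (assuming NumPy; the choices f(x) = x and λ = 0.5 are arbitrary) compares the successive approximations with the closed-form resolvent solution on a uniform grid.

    import numpy as np

    lam, n = 0.5, 201
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    f = x.copy()                                   # arbitrary right-hand side f(x) = x

    def trap(vals):
        # trapezoidal rule for samples on the uniform grid
        return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

    def volterra(kernel, g):
        # approximate int_0^x kernel(x, y) g(y) dy for every grid point x
        out = np.zeros(n)
        for i in range(1, n):
            out[i] = trap(kernel(x[i], x[: i + 1]) * g[: i + 1])
        return out

    # successive approximations phi_{k+1} = f + lam * int_0^x e^(x-y) phi_k(y) dy
    phi = f.copy()
    for _ in range(30):
        phi = f + lam * volterra(lambda xi, y: np.exp(xi - y), phi)

    # resolvent formula (30): phi = f + lam * int_0^x e^((lam+1)(x-y)) f(y) dy
    phi_exact = f + lam * volterra(lambda xi, y: np.exp((lam + 1.0) * (xi - y)), f)

    print("max deviation:", np.abs(phi - phi_exact).max())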

4 Reduction of ODEs to the Volterra IE and proof of the unique solvability using the contraction mapping

4.1 Reduction of ODEs to the Volterra IE

Consider an ordinary differential equation (ODE) of the first order

    dy/dx = f(x, y)    (31)

with the initial condition

    y(x_0) = y_0,    (32)

where f(x, y) is defined and continuous in a two-dimensional domain G which contains the point (x_0, y_0). Integrating (31) subject to (32) we obtain

    φ(x) = y_0 + ∫_{x_0}^x f(t, φ(t)) dt,    (33)

which is called the Volterra integral equation of the second kind with respect to the unknown function φ(t). This equation is equivalent to the initial value problem (31) and (32). Note that this is generally a nonlinear integral equation with respect to φ(t).

Consider now a linear ODE of the second order with variable coefficients

    y″ + A(x) y′ + B(x) y = g(x)    (34)

with the initial conditions

    y(x_0) = y_0,  y′(x_0) = y_1,    (35)

where A(x) and B(x) are given functions continuous in an interval G which contains the point x_0. Integrating y″ in (34) we obtain

    y′(x) = − ∫_{x_0}^x A(x) y′(x) dx − ∫_{x_0}^x B(x) y(x) dx + ∫_{x_0}^x g(x) dx + y_1.    (36)

Integrating the first integral on the right-hand side in (36) by parts yields

    y′(x) = −A(x) y(x) − ∫_{x_0}^x (B(x) − A′(x)) y(x) dx + ∫_{x_0}^x g(x) dx + A(x_0) y_0 + y_1.    (37)

Integrating a second time we obtain

    y(x) = − ∫_{x_0}^x A(x) y(x) dx − ∫_{x_0}^x ∫_{x_0}^x (B(t) − A′(t)) y(t) dt dx + ∫_{x_0}^x ∫_{x_0}^x g(t) dt dx + [A(x_0) y_0 + y_1](x − x_0) + y_0.    (38)

Using the relationship

    ∫_{x_0}^x ∫_{x_0}^x f(t) dt dx = ∫_{x_0}^x (x − t) f(t) dt,    (39)

we transform (38) to obtain

    y(x) = − ∫_{x_0}^x {A(t) + (x − t)[B(t) − A′(t)]} y(t) dt + ∫_{x_0}^x (x − t) g(t) dt + [A(x_0) y_0 + y_1](x − x_0) + y_0.    (40)

Separate the known functions in (40) and introduce the notation for the kernel function

    K(x, t) = −A(t) + (t − x)[B(t) − A′(t)],    (41)

    f(x) = ∫_{x_0}^x (x − t) g(t) dt + [A(x_0) y_0 + y_1](x − x_0) + y_0.    (42)

Then (40) becomes

    y(x) = f(x) + ∫_{x_0}^x K(x, t) y(t) dt,    (43)

which is the Volterra IE of the second kind with respect to the unknown function y(x). This equation is equivalent to the initial value problem (34) and (35). Note that here we obtain a linear integral equation with respect to y(x).

Example 5 Consider a homogeneous linear ODE of the second order with constant coefficients

    y″ + ω^2 y = 0    (44)

and the initial conditions (at x_0 = 0)

    y(0) = 0,  y′(0) = 1.    (45)

We see that here

    A(x) ≡ 0,  B(x) ≡ ω^2,  g(x) ≡ 0,  y_0 = 0,  y_1 = 1,

if we use the same notation as in (34) and (35).

Substituting into (40) and calculating (41) and (42),

    K(x, t) = ω^2 (t − x),
    f(x) = x,

we find that the IE (43), equivalent to the initial value problem (44) and (45), takes the form

    y(x) = x + ω^2 ∫_0^x (t − x) y(t) dt.    (46)
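
The equivalence established in Example 5 is easy to check numerically. The sketch below (assuming NumPy; ω = 2 and the interval [0, 1] are arbitrary choices) applies a Picard-type iteration to the Volterra IE (46) and compares the result with y(x) = sin(ωx)/ω, the solution of the initial value problem (44)-(45).

    import numpy as np

    omega, n = 2.0, 401
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]

    def trap(vals):
        # trapezoidal rule on the uniform grid
        return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

    y = x.copy()                                   # zeroth approximation y_0(x) = x
    for _ in range(40):                            # y_{k+1}(x) = x + omega^2 int_0^x (t - x) y_k(t) dt
        z = np.zeros(n)
        for i in range(1, n):
            t = x[: i + 1]
            z[i] = trap((t - x[i]) * y[: i + 1])
        y = x + omega**2 * z

    print("max error vs sin(omega*x)/omega:",
          np.abs(y - np.sin(omega * x) / omega).max())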

4.2 Principle of contraction mappings

A metric space is a pair: a set X consisting of elements (points) and a distance ρ(x, y), a nonnegative function satisfying

    ρ(x, y) = 0 if and only if x = y,
    ρ(x, y) = ρ(y, x)  (axiom of symmetry),    (47)
    ρ(x, y) + ρ(y, z) ≥ ρ(x, z)  (triangle axiom).

A metric space will be denoted by R = (X, ρ).

A sequence of points x_n in a metric space R is called a fundamental sequence if it satisfies the Cauchy criterion: for arbitrary ε > 0 there exists a number N such that

    ρ(x_n, x_m) < ε  for all n, m ≥ N.    (48)

Definition 1 If every fundamental sequence in the metric space R converges to an element in R, this metric space is said to be complete.

Example 6 The (Euclidian) n-dimensional space R^n of vectors a = (a_1, . . . , a_n) with the Euclidian metric

    ρ(a, b) = ‖a − b‖_2 = √( Σ_{i=1}^{n} (a_i − b_i)^2 )

is complete.

Example 7 The space C[a, b] of continuous functions defined on a closed interval a ≤ x ≤ b with the metric

    ρ(φ_1, φ_2) = max_{a≤x≤b} |φ_1(x) − φ_2(x)|

is complete. Indeed, every fundamental (convergent) functional sequence {φ_n(x)} of continuous functions converges to a continuous function in the C[a, b]-metric.

Example 8 The space C^(n)[a, b] of componentwise continuous n-dimensional vector-functions f = (f_1, . . . , f_n) defined on a closed interval a ≤ x ≤ b with the metric

    ρ(f^1, f^2) = ‖ max_{a≤x≤b} |f_i^1(x) − f_i^2(x)| ‖_2

(the Euclidian norm of the vector of componentwise maxima) is complete. Indeed, every componentwise-fundamental (componentwise-convergent) functional sequence {f_n(x)} of continuous vector-functions converges to a continuous vector-function in the above-defined metric.

Definition 2 Let R be an arbitrary metric space. A mapping A of the space R into itself is said to be a contraction if there exists a number α < 1 such that

    ρ(Ax, Ay) ≤ α ρ(x, y)  for any two points x, y ∈ R.    (49)

Every contraction mapping is continuous:

    x_n → x  yields  Ax_n → Ax.    (50)

Theorem 2 Every contraction mapping defined in a complete metric space R has one and only one fixed point; that is, the equation Ax = x has one and only one solution.

We will use the principle of contraction mappings to prove Picard's theorem.

Theorem 3 Assume that a function f(x, y) satisfies the Lipschitz condition

    |f(x, y_1) − f(x, y_2)| ≤ M |y_1 − y_2|  in G.    (51)

Then on some interval x_0 − d < x < x_0 + d there exists a unique solution y = φ(x) to the initial value problem

    dy/dx = f(x, y),   y(x_0) = y_0.    (52)

Proof. Since the function f(x, y) is continuous, we have |f(x, y)| ≤ k in a region G′ ⊂ G which contains (x_0, y_0). Now choose a number d such that

    (x, y) ∈ G′  if  |x_0 − x| ≤ d,  |y_0 − y| ≤ kd,
    Md < 1.

Consider the mapping ψ = Aφ defined by

    ψ(x) = y_0 + ∫_{x_0}^x f(t, φ(t)) dt,   |x_0 − x| ≤ d.    (53)

Let us prove that this is a contraction mapping of the complete space C* = C[x_0 − d, x_0 + d] of continuous functions defined on the closed interval x_0 − d ≤ x ≤ x_0 + d (see Example 7) into itself. We have

    |ψ(x) − y_0| = | ∫_{x_0}^x f(t, φ(t)) dt | ≤ ∫_{x_0}^x |f(t, φ(t))| dt ≤ k ∫_{x_0}^x dt = k(x − x_0) ≤ kd,    (54)

    |ψ_1(x) − ψ_2(x)| ≤ ∫_{x_0}^x |f(t, φ_1(t)) − f(t, φ_2(t))| dt ≤ Md max_{x_0−d≤x≤x_0+d} |φ_1(x) − φ_2(x)|.    (55)

Since Md < 1, the mapping ψ = Aφ is a contraction.

From this it follows that the operator equation φ = Aφ, and consequently the integral equation (33), has one and only one solution.

This result can be generalized to systems of ODEs of the first order. To this end, denote by f = (f_1, . . . , f_n) an n-dimensional vector-function and write the initial value problem for a system of ODEs of the first order using this vector notation:

    dy/dx = f(x, y),   y(x_0) = y_0,    (56)

where the vector-function f is defined and continuous in a region G of the (n + 1)-dimensional space R^{n+1} such that G contains the (n + 1)-dimensional point (x_0, y_0). We assume that f satisfies the Lipschitz condition (51) termwise in the variables y. The initial value problem (56) is equivalent to the system of IEs

    φ(x) = y_0 + ∫_{x_0}^x f(t, φ(t)) dt.    (57)

Since the function f(x, y) is continuous, we have ‖f(x, y)‖ ≤ K in a region G′ ⊂ G which contains (x_0, y_0). Now choose a number d such that

    (x, y) ∈ G′  if  |x_0 − x| ≤ d,  ‖y_0 − y‖ ≤ Kd,
    Md < 1.

Then we consider the mapping ψ = Aφ defined by

    ψ(x) = y_0 + ∫_{x_0}^x f(t, φ(t)) dt,   |x_0 − x| ≤ d,    (58)

and prove that this is a contraction mapping of the complete space C* = C[x_0 − d, x_0 + d] of (componentwise) continuous vector-functions into itself. The proof is a termwise repetition of the reasoning in the scalar case above.

5 Unique solvability of the Fredholm IE of the 2nd kind using the contraction mapping. Neumann series

5.1 Linear Fredholm IE of the 2nd kind

Consider a Fredholm IE of the 2nd kind

    f(x) = φ(x) + λ ∫_a^b K(x, y) f(y) dy,    (59)

where the kernel K(x, y) is a continuous function in the square

    Π = {(x, y) : a ≤ x ≤ b, a ≤ y ≤ b},

so that |K(x, y)| ≤ M for (x, y) ∈ Π; here φ(x) is a given continuous function and f(x) denotes the unknown. Consider the mapping g = Af defined by

    g(x) = φ(x) + λ ∫_a^b K(x, y) f(y) dy    (60)

of the complete space C[a, b] into itself. We have

    g_1(x) = λ ∫_a^b K(x, y) f_1(y) dy + φ(x),
    g_2(x) = λ ∫_a^b K(x, y) f_2(y) dy + φ(x),

    ρ(g_1, g_2) = max_{a≤x≤b} |g_1(x) − g_2(x)|
    ≤ |λ| ∫_a^b |K(x, y)| |f_1(y) − f_2(y)| dy ≤ |λ| M (b − a) max_{a≤y≤b} |f_1(y) − f_2(y)|    (61)
    < ρ(f_1, f_2)   if   |λ| < 1 / (M(b − a)).    (62)

Consequently, the mapping A is a contraction if

    |λ| < 1 / (M(b − a)),    (63)

and (59) has a unique solution for every λ satisfying (63).
The successive approximations to this solution have the form

    f_n(x) = φ(x) + λ ∫_a^b K(x, y) f_{n−1}(y) dy.    (64)

5.2 Nonlinear Fredholm IE of the 2nd kind

The method is applicable to nonlinear IEs

    f(x) = φ(x) + λ ∫_a^b K(x, y, f(y)) dy,    (65)

where the kernel K and φ are continuous functions and

    |K(x, y, z_1) − K(x, y, z_2)| ≤ M |z_1 − z_2|.    (66)

If λ satisfies (63), we can prove in the same manner that the mapping g = Af defined by

    g(x) = φ(x) + λ ∫_a^b K(x, y, f(y)) dy    (67)

of the complete space C[a, b] into itself is a contraction, because for this mapping the inequality (61) holds.

5.3 Linear Volterra IE of the 2nd kind

Consider a Volterra IE of the 2nd kind

    f(x) = φ(x) + λ ∫_a^x K(x, y) f(y) dy,    (68)

where the kernel K(x, y) is a continuous function in the square Π for some b > a, so that |K(x, y)| ≤ M for (x, y) ∈ Π.

Formulate a generalization of the principle of contraction mappings.

Theorem 4 If A is a continuous mapping of a complete metric space R into itself such that the mapping A^n is a contraction for some n, then the equation Ax = x has one and only one solution.

In fact, if we take an arbitrary point x ∈ R and consider the sequence A^{kn} x, k = 0, 1, 2, . . ., the repetition of the proof of the classical principle of contraction mappings yields the convergence of this sequence. Let x_0 = lim_{k→∞} A^{kn} x; then Ax_0 = x_0. In fact, Ax_0 = lim_{k→∞} A^{kn} Ax. Since the mapping A^n is a contraction, we have

    ρ(A^{kn} Ax, A^{kn} x) ≤ α ρ(A^{(k−1)n} Ax, A^{(k−1)n} x) ≤ . . . ≤ α^k ρ(Ax, x).

Consequently,

    lim_{k→∞} ρ(A^{kn} Ax, A^{kn} x) = 0,

i.e., Ax_0 = x_0.

Consider the mapping g = Af defined by

    g(x) = φ(x) + λ ∫_a^x K(x, y) f(y) dy    (69)

of the complete space C[a, b] into itself.

5.4 Neumann series

Consider an IE

    f(x) = φ(x) − λ ∫_a^b K(x, y) φ(y) dy.    (70)

In order to determine the solution by the method of successive approximations and obtain the Neumann series, rewrite (70) as

    φ(x) = f(x) + λ ∫_a^b K(x, y) φ(y) dy    (71)

and take the right-hand side f(x) as the zero approximation, setting

    φ_0(x) = f(x).    (72)

Substitute the zero approximation into the right-hand side of (71) to obtain the first approximation

    φ_1(x) = f(x) + λ ∫_a^b K(x, y) φ_0(y) dy,    (73)

and so on, obtaining for the (n + 1)st approximation

    φ_{n+1}(x) = f(x) + λ ∫_a^b K(x, y) φ_n(y) dy.    (74)

Assume that the kernel K(x, y) is a bounded function in the square Π = {(x, y) : a ≤ x ≤ b, a ≤ y ≤ b}, so that

    |K(x, y)| ≤ M,  (x, y) ∈ Π,

or, more generally, that there exists a constant C_1 such that

    ∫_a^b |K(x, y)|^2 dy ≤ C_1  for all x ∈ [a, b].    (75)

Then the following statement is valid.


Theorem 5 Assume that condition (75) holds. Then the sequence φ_n of successive approximations (74) converges uniformly for all λ satisfying

    |λ| < 1/B,   B = √( ∫_a^b ∫_a^b |K(x, y)|^2 dx dy ).    (76)

The limit of the sequence φ_n is the unique solution to IE (70).

Proof. Write the first two successive approximations (74):

    φ_1(x) = f(x) + λ ∫_a^b K(x, y) f(y) dy,    (77)

    φ_2(x) = f(x) + λ ∫_a^b K(x, y) φ_1(y) dy =    (78)

    = f(x) + λ ∫_a^b K(x, y) f(y) dy + λ^2 ∫_a^b K(x, t) dt ∫_a^b K(t, y) f(y) dy.    (79)

Set

    K_2(x, y) = ∫_a^b K(x, t) K(t, y) dt    (80)

to obtain (by changing the order of integration)

    φ_2(x) = f(x) + λ ∫_a^b K(x, y) f(y) dy + λ^2 ∫_a^b K_2(x, y) f(y) dy.    (81)

In the same manner, we obtain the third approximation

    φ_3(x) = f(x) + λ ∫_a^b K(x, y) f(y) dy + λ^2 ∫_a^b K_2(x, y) f(y) dy + λ^3 ∫_a^b K_3(x, y) f(y) dy,    (82)

where

    K_3(x, y) = ∫_a^b K(x, t) K_2(t, y) dt.    (83)

The general relationship has the form

    φ_n(x) = f(x) + Σ_{m=1}^{n} λ^m ∫_a^b K_m(x, y) f(y) dy,    (84)

where

    K_1(x, y) = K(x, y),   K_m(x, y) = ∫_a^b K(x, t) K_{m−1}(t, y) dt,    (85)

and K_m(x, y) is called the mth iterated kernel. One can easily prove that the iterated kernels satisfy a more general relationship:

    K_m(x, y) = ∫_a^b K_r(x, t) K_{m−r}(t, y) dt,  r = 1, 2, . . . , m − 1  (m = 2, 3, . . .).    (86)

Assuming that the successive approximations (84) converge, we obtain the (unique) solution to IE (70) by passing to the limit in (84):

    φ(x) = f(x) + Σ_{m=1}^{∞} λ^m ∫_a^b K_m(x, y) f(y) dy.    (87)

In order to prove the convergence of this series, write

    ∫_a^b |K_m(x, y)|^2 dy ≤ C_m  for all x ∈ [a, b],    (88)

and estimate C_m. Setting r = m − 1 in (86) we have

    K_m(x, y) = ∫_a^b K_{m−1}(x, t) K(t, y) dt  (m = 2, 3, . . .).    (89)

Applying to (89) the Schwartz inequality we obtain

    |K_m(x, y)|^2 ≤ ∫_a^b |K_{m−1}(x, t)|^2 dt ∫_a^b |K(t, y)|^2 dt.    (90)

Integrating (90) with respect to y yields

    ∫_a^b |K_m(x, y)|^2 dy ≤ B^2 ∫_a^b |K_{m−1}(x, t)|^2 dt ≤ B^2 C_{m−1}   ( B^2 = ∫_a^b ∫_a^b |K(x, y)|^2 dx dy ),    (91)

which gives

    C_m ≤ B^2 C_{m−1},    (92)

and finally the required estimate

    C_m ≤ B^{2m−2} C_1.    (93)

Denoting

    D = √( ∫_a^b |f(y)|^2 dy )    (94)

and applying the Schwartz inequality we obtain

    | ∫_a^b K_m(x, y) f(y) dy |^2 ≤ ∫_a^b |K_m(x, y)|^2 dy ∫_a^b |f(y)|^2 dy ≤ D^2 C_1 B^{2m−2}.    (95)

Thus the common term of the series (87) is estimated by the quantity

    D √C_1 |λ|^m B^{m−1},    (96)

so that the series converges faster than the geometric progression with ratio |λ|B, which proves the theorem.

If we take the first n terms of the series (87), then the resulting error is not greater than

    D √C_1 |λ|^{n+1} B^n / (1 − |λ|B).    (97)

5.5 Resolvent

Introduce the notation for the resolvent

    Γ(x, y, λ) = Σ_{m=1}^{∞} λ^{m−1} K_m(x, y).    (98)

Changing the order of summation and integration in (87), we obtain the solution in the compact form

    φ(x) = f(x) + λ ∫_a^b Γ(x, y, λ) f(y) dy.    (99)

One can show that the resolvent satisfies the IE

    Γ(x, y, λ) = K(x, y) + λ ∫_a^b K(x, t) Γ(t, y, λ) dt.    (100)
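
The successive approximations (74) also give a simple numerical scheme: discretize the integral by a quadrature rule and iterate. The sketch below (assuming NumPy; the kernel x + y, f ≡ 1, and λ = 0.3 are arbitrary choices satisfying |λ|B < 1) compares the iteration with a direct solution of the discretized system.

    import numpy as np

    lam, n = 0.3, 200
    x, h = np.linspace(0.0, 1.0, n, retstep=True)
    w = np.full(n, h); w[0] = w[-1] = h / 2          # trapezoidal weights
    K = x[:, None] + x[None, :]                      # kernel matrix K(x_i, y_j)
    f = np.ones(n)                                   # right-hand side f(x) = 1

    # successive approximations phi_{k+1} = f + lam * int K(x, y) phi_k(y) dy
    phi = f.copy()
    for _ in range(100):
        phi = f + lam * K @ (w * phi)

    # direct solution of the discretized equation (I - lam*K*W) phi = f
    phi_direct = np.linalg.solve(np.eye(n) - lam * K * w[None, :], f)

    print("max difference:", np.abs(phi - phi_direct).max())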

6 IEs as linear operator equations. Fundamental properties of completely continuous operators

6.1 Banach spaces

Definition 3 A complete normed space is said to be a Banach space, B.

Example 9 The space C[a, b] of continuous functions defined on a closed interval a ≤ x ≤ b with the (uniform) norm

    ‖f‖_C = max_{a≤x≤b} |f(x)|

[and the metric

    ρ(φ_1, φ_2) = ‖φ_1 − φ_2‖_C = max_{a≤x≤b} |φ_1(x) − φ_2(x)|]

is a normed and complete space. Indeed, every fundamental (convergent) functional sequence {φ_n(x)} of continuous functions converges to a continuous function in the C[a, b]-metric.

Definition 4 A set M in the metric space R is said to be compact if every sequence of elements in M contains a subsequence that converges to some x ∈ R.

Definition 5 A family of functions {φ} defined on the closed interval [a, b] is said to be uniformly bounded if there exists a number M > 0 such that

    |φ(x)| < M

for all x and for all φ in the family.

Definition 6 A family of functions {φ} defined on the closed interval [a, b] is said to be equicontinuous if for every ε > 0 there is a δ > 0 such that

    |φ(x_1) − φ(x_2)| < ε

for all x_1, x_2 such that |x_1 − x_2| < δ and for all φ in the given family.

Formulate the most common criterion of compactness for families of functions.

Theorem 6 (Arzelà's theorem). A necessary and sufficient condition that a family of continuous functions defined on the closed interval [a, b] be compact is that this family be uniformly bounded and equicontinuous.

6.2 IEs and completely continuous linear operators

Definition 7 An operator A which maps a Banach space E into itself is said to be completely continuous if it maps an arbitrary bounded set into a compact set.

Consider an IE of the 2nd kind

    f(x) = φ(x) + λ ∫_a^b K(x, y) φ(y) dy,    (101)

which can be written in the operator form as

    f = φ + λAφ.    (102)

Here the linear (integral) operator A defined by

    f(x) = ∫_a^b K(x, y) φ(y) dy    (103)

and considered in the (Banach) space C[a, b] of continuous functions on a closed interval a ≤ x ≤ b represents an extensive class of completely continuous linear operators.

Theorem 7 Formula (103) defines a completely continuous linear operator in the (Banach) space C[a, b] if the function K(x, y) is bounded in the square

    Π = {(x, y) : a ≤ x ≤ b, a ≤ y ≤ b},

and all points of discontinuity of the function K(x, y) lie on a finite number of curves

    y = ψ_k(x),  k = 1, 2, . . . , n,    (104)

where the ψ_k(x) are continuous functions.

Proof. To prove the theorem, it is sufficient to show (according to Arzelà's theorem) that the functions (103) defined on the closed interval [a, b] (i) are continuous and the family of these functions is (ii) uniformly bounded and (iii) equicontinuous. Below we prove statements (i), (ii), and (iii).

Under the conditions of the theorem the integral in (103) exists for arbitrary x ∈ [a, b]. Set

    M = sup_Π |K(x, y)|,

and denote by G the set of points (x, y) ∈ Π for which the condition

    |y − ψ_k(x)| < ε / (12Mn),  k = 1, 2, . . . , n,    (105)

holds. Denote by F the complement of G with respect to Π. Since F is a closed set and K(x, y) is continuous on F, there exists a δ > 0 such that for points (x′, y) and (x″, y) in F for which |x′ − x″| < δ the inequality

    |K(x′, y) − K(x″, y)| < ε / (3(b − a))    (106)

holds. Now let x′ and x″ be such that |x′ − x″| < δ. Then

    |f(x′) − f(x″)| ≤ ∫_a^b |K(x′, y) − K(x″, y)| |φ(y)| dy    (107)

can be evaluated by integrating over the sum A of the intervals

    |y − ψ_k(x′)| < ε / (12Mn),  k = 1, 2, . . . , n,
    |y − ψ_k(x″)| < ε / (12Mn),  k = 1, 2, . . . , n,

and over the complement B of A with respect to the closed interval [a, b]. The length of A does not exceed ε/(3M). Therefore

    ∫_A |K(x′, y) − K(x″, y)| |φ(y)| dy ≤ (2ε/3) ‖φ‖.    (108)

The integral over the complement B can obviously be estimated by

    ∫_B |K(x′, y) − K(x″, y)| |φ(y)| dy ≤ (ε/3) ‖φ‖.    (109)

Therefore

    |f(x′) − f(x″)| ≤ ε ‖φ‖.    (110)

Thus we have proved the continuity of f(x) and the equicontinuity of the functions f(x) (according to Definition 6) corresponding to the functions φ(x) with bounded norm ‖φ‖. The uniform boundedness of f(x) (according to Definition 5) corresponding to the functions φ(x) with bounded norms follows from the inequality

    |f(x)| ≤ ∫_a^b |K(x, y)| |φ(y)| dy ≤ M (b − a) ‖φ‖.    (111)

6.3 Properties of completely continuous operators

Theorem 8 If {A_n} is a sequence of completely continuous operators on a Banach space E which converges in (operator) norm to an operator A, then the operator A is completely continuous.

Proof. Consider an arbitrary bounded sequence {x_n} of elements in a Banach space E such that ‖x_n‖ < c. It is necessary to prove that the sequence {Ax_n} contains a convergent subsequence.

The operator A_1 is completely continuous and therefore we can select a convergent subsequence from the elements {A_1 x_n}. Let

    x_1^{(1)}, x_2^{(1)}, . . . , x_n^{(1)}, . . .    (112)

be the inverse images of the members of this convergent subsequence. We apply the operator A_2 to each member of the subsequence (112). The operator A_2 is completely continuous; therefore we can again select a convergent subsequence from the elements {A_2 x_n^{(1)}}, denoting by

    x_1^{(2)}, x_2^{(2)}, . . . , x_n^{(2)}, . . .    (113)

the inverse images of the members of this convergent subsequence. The operator A_3 is also completely continuous; therefore we can again select a convergent subsequence from the elements {A_3 x_n^{(2)}}, denoting by

    x_1^{(3)}, x_2^{(3)}, . . . , x_n^{(3)}, . . .    (114)

the inverse images of the members of this convergent subsequence; and so on. Now we can form a diagonal subsequence

    x_1^{(1)}, x_2^{(2)}, . . . , x_n^{(n)}, . . . .    (115)

Each of the operators A_n transforms this diagonal subsequence into a convergent sequence. If we show that the operator A also transforms (115) into a convergent sequence, then this will establish its complete continuity. To this end, apply the Cauchy criterion (48) and evaluate the norm

    ‖A x_n^{(n)} − A x_m^{(m)}‖.    (116)

We have the chain of inequalities

    ‖A x_n^{(n)} − A x_m^{(m)}‖ ≤ ‖A x_n^{(n)} − A_k x_n^{(n)}‖ + ‖A_k x_n^{(n)} − A_k x_m^{(m)}‖ + ‖A_k x_m^{(m)} − A x_m^{(m)}‖
    ≤ ‖A − A_k‖ ( ‖x_n^{(n)}‖ + ‖x_m^{(m)}‖ ) + ‖A_k x_n^{(n)} − A_k x_m^{(m)}‖.

Choose k so that ‖A − A_k‖ < ε/(4c) and N such that

    ‖A_k x_n^{(n)} − A_k x_m^{(m)}‖ < ε/2

holds for arbitrary n > N and m > N (which is possible because the sequence {A_k x_n^{(n)}} converges). As a result,

    ‖A x_n^{(n)} − A x_m^{(m)}‖ < ε,    (117)

i.e., the sequence {A x_n^{(n)}} is fundamental.

Theorem 9 (Superposition principle). If A is a completely continuous operator and B is a bounded operator, then the operators AB and BA are completely continuous.

Proof. If M is a bounded set in a Banach space E, then the set BM is also bounded. Consequently, the set ABM is compact, and this means that the operator AB is completely continuous. Further, if M is a bounded set in E, then AM is compact. Then, because B is a continuous operator, the set BAM is also compact, and this completes the proof.

Corollary. A completely continuous operator A cannot have a bounded inverse in an infinite-dimensional space E.

Proof. In the contrary case, the identity operator I = AA^{-1} would be completely continuous in E, which is impossible.

Theorem 10 The adjoint of a completely continuous operator is completely continuous.

7 Elements of the spectral theory of linear operators. Spectrum of a completely continuous operator

In this section we consider bounded linear operators which map a Banach space E into itself.

Definition 8 The number λ is said to be a characteristic value of the operator A if there exists a nonzero vector x (a characteristic vector) such that Ax = λx.

In an n-dimensional space, this definition is equivalent to the following: the number λ is said to be a characteristic value of the operator A if it is a root of the characteristic equation

    det(A − λI) = 0.

The totality of those values of λ for which the inverse of the operator A − λI does not exist is called the spectrum of the operator A.

The values of λ for which the operator A − λI has an inverse are called regular values of the operator A.

Thus the spectrum of the operator A consists of all nonregular points.

The operator

    R_λ = (A − λI)^{-1}

is called the resolvent of the operator A.

In an infinite-dimensional space, the spectrum of the operator A necessarily contains all the characteristic values and maybe some other numbers.

In this connection, the set of characteristic values is called the point spectrum of the operator A. The remaining part of the spectrum is called the continuous spectrum of the operator A. The latter also consists, in its turn, of two parts: (1) those λ for which A − λI has an unbounded inverse with domain dense in E (this part is called the continuous spectrum); and (2) the remainder of the spectrum: those λ for which A − λI has a (bounded) inverse whose domain is not dense in E; this part is called the residual spectrum.

If the point λ is regular, i.e., if the inverse of the operator A − λI exists, then for sufficiently small δ the operator A − (λ + δ)I also has an inverse, i.e., the point λ + δ is regular as well. Thus the regular points form an open set. Consequently, the spectrum, which is its complement, is a closed set.

Theorem 11 The inverse of the operator A − λI exists for arbitrary λ for which |λ| > ‖A‖.

Proof. Since

    A − λI = −λ (I − A/λ),

the resolvent is

    R_λ = (A − λI)^{-1} = −(1/λ)(I − A/λ)^{-1} = −(1/λ) Σ_{k=0}^{∞} A^k / λ^k.

This operator series converges for ‖A/λ‖ < 1, i.e., the operator A − λI has an inverse.

Corollary. The spectrum of the operator A is contained in a circle of radius ‖A‖ with center at zero.
Theorem 12 Every completely continuous operator A in the Banach space E has, for arbitrary ρ > 0, only a finite number of linearly independent characteristic vectors which correspond to characteristic values whose absolute values are greater than ρ.

Proof. Assume the contrary: there exist an infinite number of linearly independent characteristic vectors x_n, n = 1, 2, . . ., satisfying the conditions

    A x_n = λ_n x_n,   |λ_n| > ρ > 0   (n = 1, 2, . . .).    (118)

Consider in E the subspace E_1 generated by the vectors x_n (n = 1, 2, . . .). In this subspace the operator A has a bounded inverse. In fact, according to the definition, E_1 is the closed subspace generated by the elements of the form

    x = α_1 x_1 + . . . + α_k x_k,    (119)

and these vectors are everywhere dense in E_1. For these vectors

    A x = α_1 λ_1 x_1 + . . . + α_k λ_k x_k,    (120)

and, consequently,

    A^{-1} x = (α_1/λ_1) x_1 + . . . + (α_k/λ_k) x_k    (121)

and

    ‖A^{-1} x‖ < (1/ρ) ‖x‖.

Therefore, the operator A^{-1} defined for vectors of the form (119) by means of (121) can be extended by continuity (and, consequently, with preservation of norm) to all of E_1. The operator A, being completely continuous in the Banach space E, is completely continuous in E_1. According to the corollary to Theorem 9, a completely continuous operator cannot have a bounded inverse in an infinite-dimensional space. The contradiction thus obtained proves the theorem.

Corollary. Every nonzero characteristic value of a completely continuous operator A in the Banach space E has only finite multiplicity, and these characteristic values form a bounded set which cannot have a single limit point distinct from the origin of coordinates.

We have thus obtained a characterization of the point spectrum of a completely continuous operator A in the Banach space E. One can also show that a completely continuous operator A in the Banach space E cannot have a continuous spectrum.

8 Linear operator equations: the Fredholm theory

8.1 The Fredholm theory in Banach space

Theorem 13 Let A be a completely continuous operator which maps a Banach space E into itself. If the equation y = x − Ax is solvable for arbitrary y, then the equation x − Ax = 0 has no solutions distinct from the zero solution.

Proof. Assume the contrary: there exists an element x_1 ≠ 0 such that x_1 − Ax_1 = Tx_1 = 0. Those elements x for which Tx = 0 form a linear subspace E_1 of the Banach space E. Denote by E_n the subspace of E consisting of the elements x for which T^n x = 0.

It is clear that the subspaces E_n form a nondecreasing sequence

    E_1 ⊆ E_2 ⊆ . . . ⊆ E_n ⊆ . . . .

We shall show that the equality sign cannot hold at any point of this chain of inclusions. In fact, since x_1 ≠ 0 and the equation y = Tx is solvable for arbitrary y, we can find a sequence of elements x_1, x_2, . . . , x_n, . . . distinct from zero such that

    T x_2 = x_1,
    T x_3 = x_2,
    . . . ,
    T x_n = x_{n−1},
    . . . .

The element x_n belongs to the subspace E_n for each n but does not belong to the subspace E_{n−1}. In fact,

    T^n x_n = T^{n−1} x_{n−1} = . . . = T x_1 = 0,

but

    T^{n−1} x_n = T^{n−2} x_{n−1} = . . . = T x_2 = x_1 ≠ 0.

All the subspaces E_n are linear and closed; therefore for arbitrary n there exists an element y_{n+1} ∈ E_{n+1} such that

    ‖y_{n+1}‖ = 1   and   ρ(y_{n+1}, E_n) ≥ 1/2,

where ρ(y_{n+1}, E_n) denotes the distance from y_{n+1} to the space E_n:

    ρ(y_{n+1}, E_n) = inf {‖y_{n+1} − x‖, x ∈ E_n}.

Consider the sequence {Ay_k}. We have (assuming p > q)

    ‖A y_p − A y_q‖ = ‖y_p − (y_q + T y_p − T y_q)‖ ≥ 1/2,

since y_q + T y_p − T y_q ∈ E_{p−1}. It is clear from this that the sequence {Ay_k} cannot contain any convergent subsequence, which contradicts the complete continuity of the operator A. This contradiction proves the theorem.

Corollary 1. If the equation y = x − Ax is solvable for arbitrary y, then this equation has a unique solution for each y, i.e., the operator I − A has an inverse in this case.

We will consider, together with the equation y = x − Ax, the equation h = f − A*f which is adjoint to it, where A* is the adjoint of A and h, f are elements of the Banach space E*, the conjugate of E.

Corollary 2. If the equation h = f − A*f is solvable for all h, then the equation f − A*f = 0 has only the zero solution.

It is easy to verify this statement by recalling that the operator adjoint to a completely continuous operator is also completely continuous, and the space E* conjugate to a Banach space is itself a Banach space.

Theorem 14 A necessary and sufficient condition that the equation y = x − Ax be solvable is that f(y) = 0 for all f for which f − A*f = 0.

Proof. 1. If we assume that the equation y = x − Ax is solvable, then

    f(y) = f(x − Ax) = f(x) − f(Ax) = f(x) − (A*f)(x) = (f − A*f)(x),

i.e., f(y) = 0 for all f for which f − A*f = 0.

2. Now assume that f(y) = 0 for all f for which f − A*f = 0. For each of these functionals f we consider the set L_f of elements on which f takes the value zero. Then our assertion is equivalent to the fact that the intersection of the sets L_f consists only of the elements of the form x − Ax. Thus, it is necessary to prove that an element y_1 which cannot be represented in the form x − Ax cannot be contained in every L_f. To do this we shall show that for such an element y_1 we can construct a functional f_1 satisfying

    f_1(y_1) ≠ 0,   f_1 − A*f_1 = 0.

These conditions are equivalent to the following:

    f_1(y_1) ≠ 0,   f_1(x − Ax) = 0  for all x.

In fact,

    (f_1 − A*f_1)(x) = f_1(x) − (A*f_1)(x) = f_1(x) − f_1(Ax) = f_1(x − Ax).

Let G_0 be the subspace consisting of all elements of the form x − Ax. Consider the subspace {G_0, y_1} of the elements of the form z + αy_1, z ∈ G_0, and define the linear functional f_1 on it by setting

    f_1(z + αy_1) = α.

Extending this linear functional to the whole space E (which is possible by virtue of the Hahn-Banach theorem) we obtain a linear functional satisfying the required conditions. This completes the proof of the theorem.

Corollary. If the equation f − A*f = 0 does not have nonzero solutions, then the equation x − Ax = y is solvable for all y.

Theorem 15 A necessary and sufficient condition that the equation f − A*f = h be solvable is that h(x) = 0 for all x for which x − Ax = 0.

Proof. 1. If we assume that the equation f − A*f = h is solvable, then

    h(x) = f(x) − (A*f)(x) = f(x − Ax),

i.e., h(x) = 0 if x − Ax = 0.

2. Now assume that h(x) = 0 for all x such that x − Ax = 0. We shall show that the equation f − A*f = h is solvable. Consider the set F of all elements of the form y = x − Ax. Construct a functional f on F by setting

    f(Tx) = h(x).

This equation in fact defines a linear functional. Its value is defined uniquely for each y because if Tx_1 = Tx_2 then h(x_1) = h(x_2). It is easy to verify the linearity of the functional and extend it to the whole space E. We obtain

    (f − A*f)(x) = f(x − Ax) = f(Tx) = h(x),

i.e., the functional f is a solution of the equation f − A*f = h. This completes the proof of the theorem.

Corollary. If the equation x − Ax = 0 does not have nonzero solutions, then the equation f − A*f = h is solvable for all h.

The following theorem is the converse of Theorem 13.

Theorem 16 If the equation x − Ax = 0 has x = 0 for its only solution, then the equation x − Ax = y has a solution for all y.

Proof. If the equation x − Ax = 0 has only one solution, then by virtue of the corollary to the preceding theorem, the equation f − A*f = h is solvable for all h. Then, by Corollary 2 to Theorem 13, the equation f − A*f = 0 has f = 0 for its only solution. Hence the corollary to Theorem 14 implies that the equation x − Ax = y has a solution for all y.

Theorems 13 and 16 show that for the equation

    x − Ax = y    (122)

only the following two cases are possible:

(i) Equation (122) has a unique solution for each y, i.e., the operator I − A has an inverse.

(ii) The corresponding homogeneous equation x − Ax = 0 has a nonzero solution, i.e., the number λ = 1 is a characteristic value of the operator A.

All the results obtained above for the equation x − Ax = y remain valid for the equation x − λAx = y (the adjoint equation is f − λA*f = h). It follows that either the operator I − λA has an inverse or the number 1/λ is a characteristic value of the operator A. In other words, in the case of a completely continuous operator, an arbitrary number is either a regular point or a characteristic value. Thus a completely continuous operator has only a point spectrum.

Theorem 17 The dimension n of the space N whose elements are the solutions to the equation x − Ax = 0 is equal to the dimension of the subspace N* whose elements are the solutions to the equation f − A*f = 0.

This theorem is proved below for operators in the Hilbert space.

8.2 Fredholm theorems for linear integral operators and the Fredholm resolvent

One can formulate the Fredholm theorems for linear integral operators

    Aφ(x) = ∫_a^b K(x, y) φ(y) dy    (123)

in terms of the Fredholm resolvent.

The numbers λ for which there exists the Fredholm resolvent of the Fredholm IE

    φ(x) − λ ∫_a^b K(x, y) φ(y) dy = f(x)    (124)

will be called regular values. The numbers λ for which the Fredholm resolvent Γ(x, y, λ) is not defined (they belong to the point spectrum) will be called characteristic values of the IE (124), and they coincide with the poles of the resolvent. The inverses of the characteristic values are the eigenvalues of the IE.

Formulate, for example, the first Fredholm theorem in terms of regular values of the resolvent.

If λ is a regular value, then the Fredholm IE (124) has one and only one solution for arbitrary f(x). This solution is given by formula (99),

    φ(x) = f(x) + λ ∫_a^b Γ(x, y, λ) f(y) dy.    (125)

In particular, if λ is a regular value, then the homogeneous Fredholm IE (124),

    φ(x) − λ ∫_a^b K(x, y) φ(y) dy = 0,    (126)

has only the zero solution φ(x) = 0.

9 IEs with degenerate and separable kernels

9.1 Solution of IEs with degenerate and separable kernels

An IE

    φ(x) − λ ∫_a^b K(x, y) φ(y) dy = f(x)    (127)

with a degenerate kernel

    K(x, y) = Σ_{i=1}^{n} a_i(x) b_i(y)    (128)

can be represented, by changing the order of summation and integration, in the form

    φ(x) − λ Σ_{i=1}^{n} a_i(x) ∫_a^b b_i(y) φ(y) dy = f(x).    (129)

Here one may assume that the functions a_i(x) (and b_i(y)) are linearly independent (otherwise the number of terms in (128) can be reduced).

It is easy to solve such IEs with degenerate and separable kernels. Denote

    c_i = ∫_a^b b_i(y) φ(y) dy,    (130)

which are unknown constants. Equation (129) becomes

    φ(x) = f(x) + λ Σ_{i=1}^{n} c_i a_i(x),    (131)

and the problem reduces to the determination of the unknowns c_i. To this end, substitute (131) into equation (129) to obtain, after simple algebra,

    Σ_{i=1}^{n} a_i(x) { c_i − ∫_a^b b_i(y) [ f(y) + λ Σ_{k=1}^{n} c_k a_k(y) ] dy } = 0.    (132)

Since the functions a_i(x) are linearly independent, equality (132) yields

    c_i − ∫_a^b b_i(y) [ f(y) + λ Σ_{k=1}^{n} c_k a_k(y) ] dy = 0,   i = 1, 2, . . . , n.    (133)

Denoting

    f_i = ∫_a^b b_i(y) f(y) dy,   a_{ik} = ∫_a^b b_i(y) a_k(y) dy    (134)

and changing the order of summation and integration, we rewrite (133) as

    c_i − λ Σ_{k=1}^{n} a_{ik} c_k = f_i,   i = 1, 2, . . . , n,    (135)

which is a linear equation system of order n with respect to the unknowns c_i.

Note that the same system can be obtained by multiplying (131) by b_k(x) (k = 1, 2, . . . , n) and integrating from a to b.

System (135) is equivalent to IE (129) (and (127), (128)) in the following sense: if (135) is uniquely solvable, then IE (129) has one and only one solution (and vice versa); and if (135) has no solution, then IE (129) is not solvable (and vice versa).

The determinant of system (135) is

    D(λ) = | 1 − λa_11   −λa_12     . . .   −λa_1n     |
           | −λa_21      1 − λa_22  . . .   −λa_2n     |
           | .           .          . . .   .          |
           | −λa_n1      −λa_n2     . . .   1 − λa_nn  |.    (136)

D(λ) is a polynomial in powers of λ of degree not higher than n. Note that D(λ) is not identically zero because D(0) = 1. Thus there exist not more than n different numbers λ = λ_k such that D(λ_k) = 0. When λ = λ_k, system (135), and with it IE (129), is either not solvable or has infinitely many solutions. If λ ≠ λ_k, then system (135) and IE (129) are uniquely solvable.

Example 10 Consider a Fredholm IE of the 2nd kind

    φ(x) − λ ∫_0^1 (x + y) φ(y) dy = f(x).    (137)

Here

    K(x, y) = x + y = Σ_{i=1}^{2} a_i(x) b_i(y) = a_1(x) b_1(y) + a_2(x) b_2(y),

where

    a_1(x) = x,  b_1(y) = 1,  a_2(x) = 1,  b_2(y) = y,

is a degenerate kernel, f(x) is a given function, and λ is a parameter.

Look for the solution to (137) in the form (131) (note that here n = 2):

    φ(x) = f(x) + λ Σ_{i=1}^{2} c_i a_i(x) = f(x) + λ (c_1 a_1(x) + c_2 a_2(x)) = f(x) + λ (c_1 x + c_2).    (138)

Denoting

    f_i = ∫_0^1 b_i(y) f(y) dy,   a_{ik} = ∫_0^1 b_i(y) a_k(y) dy,    (139)

so that

    a_11 = ∫_0^1 b_1(y) a_1(y) dy = ∫_0^1 y dy = 1/2,
    a_12 = ∫_0^1 b_1(y) a_2(y) dy = ∫_0^1 dy = 1,    (140)
    a_21 = ∫_0^1 b_2(y) a_1(y) dy = ∫_0^1 y^2 dy = 1/3,
    a_22 = ∫_0^1 b_2(y) a_2(y) dy = ∫_0^1 y dy = 1/2,

and

    f_1 = ∫_0^1 b_1(y) f(y) dy = ∫_0^1 f(y) dy,    (141)
    f_2 = ∫_0^1 b_2(y) f(y) dy = ∫_0^1 y f(y) dy,

and changing the order of summation and integration, we rewrite the corresponding equation (133) as

    c_i − λ Σ_{k=1}^{2} a_{ik} c_k = f_i,   i = 1, 2,    (142)

or, explicitly,

    (1 − λ/2) c_1 − λ c_2 = f_1,
    −(λ/3) c_1 + (1 − λ/2) c_2 = f_2,    (143)

which is a linear equation system of order 2 with respect to the unknowns c_i.

The determinant of system (143) is

    D(λ) = det [ 1 − λ/2, −λ ; −λ/3, 1 − λ/2 ] = 1 − λ + λ^2/4 − λ^2/3 = 1 − λ − λ^2/12 = −(1/12)(λ^2 + 12λ − 12),

a polynomial of degree 2 (D(λ) is not identically zero, and D(0) = 1). There exist not more than 2 different numbers λ = λ_k such that D(λ_k) = 0. Here there are exactly two such numbers; they are

    λ_1 = −6 + 4√3,    (144)
    λ_2 = −6 − 4√3.

If λ ≠ λ_k, then system (143) and IE (137) are uniquely solvable.

To find the solution of system (143), apply Cramer's rule and calculate the determinants of the system:

    D_1 = det [ f_1, −λ ; f_2, 1 − λ/2 ] = f_1 (1 − λ/2) + λ f_2 =
        = (1 − λ/2) ∫_0^1 f(y) dy + λ ∫_0^1 y f(y) dy = ∫_0^1 [ (1 − λ/2) + λy ] f(y) dy,    (145)

    D_2 = det [ 1 − λ/2, f_1 ; −λ/3, f_2 ] = f_2 (1 − λ/2) + f_1 λ/3 =
        = (1 − λ/2) ∫_0^1 y f(y) dy + (λ/3) ∫_0^1 f(y) dy = ∫_0^1 [ y (1 − λ/2) + λ/3 ] f(y) dy.    (147)

Then the solution of system (143) is

    c_1 = D_1/D = −12/(λ^2 + 12λ − 12) ∫_0^1 [ (1 − λ/2) + λy ] f(y) dy =
        = 1/(λ^2 + 12λ − 12) ∫_0^1 [ (6λ − 12) − 12λy ] f(y) dy,    (148)

    c_2 = D_2/D = −12/(λ^2 + 12λ − 12) ∫_0^1 [ y (1 − λ/2) + λ/3 ] f(y) dy =
        = 1/(λ^2 + 12λ − 12) ∫_0^1 [ y (6λ − 12) − 4λ ] f(y) dy.    (149)

According to (138), the solution of the IE (137) is

    φ(x) = f(x) + λ (c_1 x + c_2) =
         = f(x) + λ/(λ^2 + 12λ − 12) ∫_0^1 [ 6(λ − 2)(x + y) − 12λxy − 4λ ] f(y) dy.    (150)

When λ = λ_k (k = 1, 2), system (143), together with IE (137), is not solvable.
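
The reduction to the linear system (143) is easy to reproduce numerically. The following sketch (assuming NumPy; the right-hand side f ≡ 1 and λ = 1 are arbitrary test choices) solves the 2x2 system and checks the result against the closed-form solution (150).

    import numpy as np

    lam = 1.0
    f = lambda t: np.ones_like(t)

    # coefficients a_ik from (140), with a_1(x) = x, b_1(y) = 1, a_2(x) = 1, b_2(y) = y
    A = np.array([[1/2, 1.0],
                  [1/3, 1/2]])

    y = np.linspace(0.0, 1.0, 2001)
    h = y[1] - y[0]
    trap = lambda v: h * (v.sum() - 0.5 * (v[0] + v[-1]))   # trapezoidal rule
    fi = np.array([trap(f(y)), trap(y * f(y))])             # f_1, f_2 from (141)

    # linear system (143): (I - lam*A) c = fi
    c = np.linalg.solve(np.eye(2) - lam * A, fi)

    # solution (138): phi(x) = f(x) + lam*(c1*x + c2)
    x = np.linspace(0.0, 1.0, 11)
    phi = f(x) + lam * (c[0] * x + c[1])

    # closed form (150)
    phi_exact = f(x) + lam / (lam**2 + 12*lam - 12) * np.array(
        [trap((6*(lam - 2)*(xx + y) - 12*lam*xx*y - 4*lam) * f(y)) for xx in x])

    print("max difference:", np.abs(phi - phi_exact).max())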

Example 11 Consider a Fredholm IE of the 2nd kind

    φ(x) − λ ∫_0^{2π} sin x cos y φ(y) dy = f(x).    (151)

Here

    K(x, y) = sin x cos y = a(x) b(y)

is a degenerate kernel, f(x) is a given function, and λ is a parameter.

Rewrite IE (151) in the form (131) (here n = 1):

    φ(x) = f(x) + λ c sin x,   c = ∫_0^{2π} cos y φ(y) dy.    (152)

Multiplying the first equality in (152) by b(x) = cos x and integrating from 0 to 2π, we obtain

    c = ∫_0^{2π} f(x) cos x dx.    (153)

Therefore the solution is

    φ(x) = f(x) + λ c sin x = f(x) + λ sin x ∫_0^{2π} cos y f(y) dy.    (154)

Example 12 Consider a Fredholm IE of the 2nd kind

    x^2 = φ(x) − ∫_0^1 (x^2 + y^2) φ(y) dy,    (155)

where K(x, y) = x^2 + y^2 is a degenerate kernel, f(x) = x^2 is a given function, and the parameter λ = −1.

9.2 IEs with degenerate kernels and approximate solution of IEs

Assume that in an IE

    φ(x) − λ ∫_a^b K(x, y) φ(y) dy = f(x)    (156)

the kernel K(x, y) and f(x) are continuous functions in the square

    Π = {(x, y) : a ≤ x ≤ b, a ≤ y ≤ b}.

Then φ(x) will also be continuous. Approximate IE (156) by an IE having a degenerate kernel. To this end, replace the integral in (156) by a finite sum using, e.g., the rectangle rule

    ∫_a^b K(x, y) φ(y) dy ≈ h Σ_{k=1}^{n} K(x, y_k) φ(y_k),    (157)

where

    h = (b − a)/n,   y_k = a + kh.    (158)

The resulting approximation takes the form

    φ(x) − λ h Σ_{k=1}^{n} K(x, y_k) φ(y_k) = f(x)    (159)

of an IE having a degenerate kernel. Now replace the variable x in (159) by a finite number of values x = x_i, i = 1, 2, . . . , n. We obtain a linear equation system with the unknowns φ_k = φ(x_k) and a right-hand side containing the known numbers f_i = f(x_i):

    φ_i − λ h Σ_{k=1}^{n} K(x_i, y_k) φ_k = f_i,   i = 1, 2, . . . , n.    (160)

We can find the solution {φ_i} of system (160) by Cramer's rule,

    φ_i = D_i^{(n)}(λ) / D^{(n)}(λ),   i = 1, 2, . . . , n,    (161)

calculating the determinants

    D^{(n)}(λ) = | 1 − λhK(x_1, y_1)   −λhK(x_1, y_2)     . . .   −λhK(x_1, y_n)     |
                 | −λhK(x_2, y_1)      1 − λhK(x_2, y_2)  . . .   −λhK(x_2, y_n)     |
                 | .                   .                  . . .   .                  |
                 | −λhK(x_n, y_1)      −λhK(x_n, y_2)     . . .   1 − λhK(x_n, y_n)  |    (162)

and so on.

To reconstruct the approximate solution φ^{(n)}(x) using the determined {φ_i}, one can use the IE (159) itself:

    φ(x) ≈ φ^{(n)}(x) = λ h Σ_{k=1}^{n} K(x, y_k) φ_k + f(x).    (163)

One can prove that φ^{(n)}(x) → φ(x) as n → ∞, where φ(x) is the exact solution to IE (156).
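
The scheme (157)-(163) is straightforward to implement. The sketch below (assuming NumPy) uses an arbitrary manufactured test case, K(x, y) = xy, λ = 1 and f(x) = 2x/3, for which the exact solution of (156) is φ(x) = x, so the error of the discretization can be observed directly.

    import numpy as np

    a, b, n, lam = 0.0, 1.0, 100, 1.0
    h = (b - a) / n
    yk = a + h * np.arange(1, n + 1)        # nodes y_k = a + k*h of the rectangle rule
    xi = yk.copy()                          # collocation points x_i (taken equal to y_k)

    K = np.outer(xi, yk)                    # K(x_i, y_k) = x_i * y_k
    f = 2.0 * xi / 3.0

    # linear system (160): phi_i - lam*h*sum_k K(x_i, y_k) phi_k = f_i
    phi_nodes = np.linalg.solve(np.eye(n) - lam * h * K, f)

    # reconstruction (163) at arbitrary points x
    x = np.linspace(a, b, 11)
    phi = 2.0 * x / 3.0 + lam * h * (np.outer(x, yk) @ phi_nodes)

    print("max error at the nodes:", np.abs(phi_nodes - xi).max())
    print("max error of the reconstruction:", np.abs(phi - x).max())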

9.3 Fredholm's resolvent

As we have already mentioned, the resolvent Γ(x, y, λ) of IE (156) satisfies the IE

    Γ(x, y, λ) = K(x, y) + λ ∫_a^b K(x, t) Γ(t, y, λ) dt.    (164)

Solving this IE (164) by the approximate method described above and passing to the limit n → ∞, we obtain the resolvent as a ratio of two power series,

    Γ(x, y, λ) = D(x, y, λ) / D(λ),    (165)

where

    D(x, y, λ) = Σ_{n=0}^{∞} ((−1)^n / n!) B_n(x, y) λ^n,    (166)

    D(λ) = Σ_{n=0}^{∞} ((−1)^n / n!) c_n λ^n,    (167)

with

    B_0(x, y) = K(x, y),

    B_n(x, y) = ∫_a^b . . . ∫_a^b  | K(x, y)     K(x, y_1)     . . .   K(x, y_n)    |
                                   | K(y_1, y)   K(y_1, y_1)   . . .   K(y_1, y_n)  |
                                   | .           .             . . .   .            |
                                   | K(y_n, y)   K(y_n, y_1)   . . .   K(y_n, y_n)  |  dy_1 . . . dy_n;    (168)

    c_0 = 1,

    c_n = ∫_a^b . . . ∫_a^b  | K(y_1, y_1)   K(y_1, y_2)   . . .   K(y_1, y_n)  |
                             | K(y_2, y_1)   K(y_2, y_2)   . . .   K(y_2, y_n)  |
                             | .             .             . . .   .            |
                             | K(y_n, y_1)   K(y_n, y_2)   . . .   K(y_n, y_n)  |  dy_1 . . . dy_n.    (169)

D(λ) is called the Fredholm determinant, and D(x, y, λ) the first Fredholm minor.

If the integral

    ∫_a^b ∫_a^b |K(x, y)|^2 dx dy < ∞,    (170)

then the series (167) for the Fredholm determinant converges for all complex λ, so that D(λ) is an entire function of λ. Thus (164)-(167) define the resolvent on the whole complex plane λ, with the exception of the zeros of D(λ), which are poles of the resolvent.

We may conclude that for all complex λ that do not coincide with the poles of the resolvent, the IE

    φ(x) − λ ∫_a^b K(x, y) φ(y) dy = f(x)    (171)

is uniquely solvable, and its solution is given by formula (99),

    φ(x) = f(x) + λ ∫_a^b Γ(x, y, λ) f(y) dy.    (172)

For practical computation of the resolvent, one can use the following recurrent relationships:

    B_0(x, y) = K(x, y),   c_0 = 1,

    c_n = ∫_a^b B_{n−1}(y, y) dy,   n = 1, 2, . . . ,    (173)

    B_n(x, y) = c_n K(x, y) − n ∫_a^b K(x, t) B_{n−1}(t, y) dt,   n = 1, 2, . . . .    (174)
Example 13 Find the resolvent of the Fredholm IE (137) of the 2nd kind,

    φ(x) − λ ∫_0^1 (x + y) φ(y) dy = f(x).    (175)

We will use the recurrent relationships (173) and (174). Here

    K(x, y) = x + y

is a degenerate kernel. We have c_0 = 1, B_i = B_i(x, y) (i = 0, 1, 2), and

    B_0 = K(x, y) = x + y,

    c_1 = ∫_0^1 B_0(y, y) dy = ∫_0^1 2y dy = 1,

    B_1 = c_1 K(x, y) − ∫_0^1 K(x, t) B_0(t, y) dt =
        = x + y − ∫_0^1 (x + t)(t + y) dt = (x + y)/2 − xy − 1/3,

    c_2 = ∫_0^1 B_1(y, y) dy = ∫_0^1 [ (y + y)/2 − y^2 − 1/3 ] dy = −1/6,

    B_2 = c_2 K(x, y) − 2 ∫_0^1 K(x, t) B_1(t, y) dt =
        = −(1/6)(x + y) − 2 ∫_0^1 (x + t) [ (t + y)/2 − ty − 1/3 ] dt = 0.

Since B_2(x, y) ≡ 0, all subsequent c_3, c_4, . . . and B_3, B_4, . . . vanish (see (173) and (174)), and we obtain

    D(x, y, λ) = Σ_{n=0}^{1} ((−1)^n / n!) B_n(x, y) λ^n = B_0(x, y) − λ B_1(x, y) =
               = x + y − λ [ (x + y)/2 − xy − 1/3 ],    (176)

    D(λ) = Σ_{n=0}^{2} ((−1)^n / n!) c_n λ^n = c_0 − c_1 λ + (1/2) c_2 λ^2 =
         = 1 − λ − λ^2/12 = −(1/12)(λ^2 + 12λ − 12),

and

    Γ(x, y, λ) = D(x, y, λ) / D(λ) = −12 [ x + y − λ( (x + y)/2 − xy − 1/3 ) ] / (λ^2 + 12λ − 12).    (177)

The solution to (175) is given by formula (99),

    φ(x) = f(x) + λ ∫_0^1 Γ(x, y, λ) f(y) dy,

so that we obtain, using (177),

    φ(x) = f(x) + λ/(λ^2 + 12λ − 12) ∫_0^1 [ 6(λ − 2)(x + y) − 12λxy − 4λ ] f(y) dy,    (178)

which coincides with formula (150).

The numbers λ = λ_k (k = 1, 2) determined in (144), λ_1 = −6 + 4√3 and λ_2 = −6 − 4√3, are (simple, i.e., of multiplicity one) poles of the resolvent because they are (simple) zeros of the Fredholm determinant D(λ). When λ = λ_k (k = 1, 2), the IE (175) is not solvable.

10 Hilbert spaces. Self-adjoint operators. Linear operator equations with completely continuous operators in Hilbert spaces

10.1 Hilbert space. Selfadjoint operators

A real linear space R with an inner product satisfying

    (x, y) = (y, x),  x, y ∈ R,
    (x_1 + x_2, y) = (x_1, y) + (x_2, y),  x_1, x_2, y ∈ R,
    (λx, y) = λ(x, y),  x, y ∈ R,
    (x, x) ≥ 0,  x ∈ R;  (x, x) = 0 if and only if x = 0,

is called a Euclidian space.

In a complex Euclidian space the inner product satisfies

    (x, y) = \overline{(y, x)},
    (x_1 + x_2, y) = (x_1, y) + (x_2, y),
    (λx, y) = λ(x, y),
    (x, x) ≥ 0;  (x, x) = 0 if and only if x = 0.

Note that in a complex linear space

    (x, λy) = \bar{λ}(x, y).

The norm in the Euclidian space is introduced by

    ‖x‖ = √(x, x).

The Hilbert space H is a complete (in the metric ρ(x, y) = ‖x − y‖) infinite-dimensional Euclidian space.

In the Hilbert space H, the operator A* adjoint to an operator A is defined by

    (Ax, y) = (x, A*y),  x, y ∈ H.

The selfadjoint operator A = A* is defined from

    (Ax, y) = (x, Ay),  x, y ∈ H.

10.2 Completely continuous integral operators in Hilbert space

Consider an IE of the second kind

    φ(x) = f(x) + λ ∫_a^b K(x, y) φ(y) dy.    (179)

Assume that K(x, y) is a Hilbert-Schmidt kernel, i.e., a square-integrable function in the square Π = {(x, y) : a ≤ x ≤ b, a ≤ y ≤ b}, so that

    ∫_a^b ∫_a^b |K(x, y)|^2 dx dy < ∞,    (180)

and f(x) ∈ L_2[a, b], i.e.,

    ∫_a^b |f(x)|^2 dx < ∞.

Define a linear Fredholm (integral) operator corresponding to IE (179):

    Aφ(x) = ∫_a^b K(x, y) φ(y) dy.    (181)

If K(x, y) is a Hilbert-Schmidt kernel, then operator (181) will be called a Hilbert-Schmidt operator.

Rewrite IE (179) as a linear operator equation:

    φ = λAφ + f,   f, φ ∈ L_2[a, b].    (182)

Theorem 18 Equality (182) and condition (180) define a completely continuous linear opera-
tor in the space L2 [a, b]. The norm of this operator is estimated as
s
Z bZ b
||A|| |K(x, y)|2 dxdy. (183)
a a

Proof. Note first of all that we have already mentioned (see (75)) that if condition (180) holds,
then there exists a constant C1 such that
Z b
|K(x, y)|2 dy C1 almost everywhere in [a, b], (184)
a

i.e., K(x, y) L2 [a, b] as a function of y almost for all x [a, b]. Therefore the function
Z b
(x) = K(x, y)(y)dy is defined almost everywhere in [a, b] (185)
a

We will now show that (x) L2 [a, b]. Applying the Schwartz inequality we obtain
Z 2 Z b Z b
b
2
|(x)| = K(x, y)(y)dy |K(x, y)|2 dy |(y)|2 dy = (186)
a a a
Z b
= ||||2 |K(x, y)|2 dy.
a

Integrating with respect to x and replacing an iterated integral of |K(x, y)|2 by a double integral,
we obtain Z b Z bZ b
2 2 2
||A|| = |(y)| dy |||| |K(x, y)|2 dxdy, (187)
a a a
which proves (1) that (x) is a square-integrable function, i.e.,
Z b
|(x)|2 dx ,
a

and estimates the norm of the operator A : L2 [a, b] L2 [a, b].


Let us show that the operator A : L2 [a, b] L2 [a, b] is completely continuous. To this end,
we will use Theorem 8 and construct a sequence {An } of completely continuous operators on the
space L2 () which converges in (operator) norm to the operator A. Let {n } be an orthonormal
system in L2 [a, b]. Then {m (x)n (y)} form an orthonormal system in L2 (). Consequently,

X
K(x, y) = amn m (x)n (y). (188)
m,n=1

By setting
N
X
KN (x, y) = amn m (x)n (y). (189)
m,n=1

define an operator AN which is finite-dimensional (because it has a degenerate kernel and maps
therefore L2 [a, b] into a finite-dimensional space generated by a finite system {m }N
m=1 ) and
therefore completely continuous.

36
Now note that K_N(x, y) in (189) is a partial sum of the Fourier series for K(x, y), so that

∫_a^b ∫_a^b |K(x, y) − K_N(x, y)|² dx dy → 0,  N → ∞.   (190)

Applying estimate (183) to the operator A − A_N and using (190), we obtain

||A − A_N|| → 0,  N → ∞.   (191)

This completes the proof of the theorem.
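The estimate (183) and the degenerate-kernel approximation (189)–(191) are easy to illustrate numerically. The following sketch (Python with NumPy; the kernel K(x, y) = e^{xy} on [0, 1]², the grid size, and the use of an SVD-based truncation are assumptions made only for this illustration) compares the norm of a discretized operator with the Hilbert–Schmidt bound and shows the convergence of finite-dimensional approximations in the Hilbert–Schmidt norm.

```python
import numpy as np

# Assumed example kernel on [a, b] = [0, 1]; any square-integrable kernel would do.
a, b = 0.0, 1.0
K = lambda x, y: np.exp(x * y)

n = 400
x = a + (np.arange(n) + 0.5) * (b - a) / n      # midpoint grid
w = (b - a) / n                                  # quadrature weight
Kmat = K(x[:, None], x[None, :])                 # samples K(x_i, y_j)

# Hilbert-Schmidt bound (183): ||A|| <= ( integral of |K|^2 dx dy )^(1/2)
hs_bound = np.sqrt(np.sum(np.abs(Kmat) ** 2) * w * w)

# Discretized operator (A phi)(x_i) ~ sum_j K(x_i, y_j) phi(y_j) w;
# its L2 operator norm is approximated by the largest singular value of w*Kmat.
op_norm = np.linalg.norm(Kmat * w, 2)
print(f"operator norm ~ {op_norm:.6f}  <=  HS bound ~ {hs_bound:.6f}")

# Degenerate-kernel approximation in the spirit of (189): truncate an expansion
# of K (here via the SVD) and observe the convergence (190)-(191).
U, s, Vt = np.linalg.svd(Kmat)
for N in (1, 2, 4, 8):
    K_N = (U[:, :N] * s[:N]) @ Vt[:N, :]
    err = np.sqrt(np.sum(np.abs(Kmat - K_N) ** 2) * w * w)
    print(f"N = {N}:  ||K - K_N||_HS ~ {err:.2e}")
```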

10.3 Selfadjoint operators in Hilbert space


Consider first some properties of a selfadjoint operator A (A = A*) acting in the Hilbert space H.

Theorem 19 All eigenvalues of a selfadjoint operator A in H are real.

Proof. Indeed, let λ be an eigenvalue of a selfadjoint operator A, i.e., Ax = λx, x ∈ H, x ≠ 0. Then we have

λ(x, x) = (λx, x) = (Ax, x) = (x, Ax) = (x, λx) = λ̄(x, x),   (192)

which yields λ = λ̄, i.e., λ is a real number.

Theorem 20 Eigenvectors of a selfadjoint operator A corresponding to different eigenvalues are orthogonal.

Proof. Indeed, let λ and μ be two different eigenvalues of a selfadjoint operator A, i.e.,

Ax = λx,  x ∈ H,  x ≠ 0;   Ay = μy,  y ∈ H,  y ≠ 0.

Then we have

λ(x, y) = (λx, y) = (Ax, y) = (x, Ay) = (x, μy) = μ(x, y)   (193)

(the eigenvalue μ is real by Theorem 19), which gives

(λ − μ)(x, y) = 0,   (194)

i.e., (x, y) = 0.

Below we will prove all the Fredholm theorems for integral operators in the Hilbert space.

10.4 The Fredholm theory in Hilbert space


We take the integral operator (181), assume that the kernel satisfies the Hilbert–Schmidt condition

∫_a^b ∫_a^b |K(x, y)|² dx dy < ∞,   (195)

and write IE (179),

φ(x) = ∫_a^b K(x, y) φ(y) dy + f(x),   (196)

in the operator form (182),

φ = Aφ + f,   (197)

or

Tφ = f,   T = I − A.   (198)

The corresponding homogeneous IE is

Tφ0 = 0,   (199)

and the adjoint equations are

T*ψ = g,   T* = I − A*,   (200)

and

T*ψ0 = 0.   (201)

We will consider equations (198)–(201) in the Hilbert space H = L2[a, b]. Note that (198)–(201) may be considered as operator equations in H with an arbitrary completely continuous operator A.
The four Fredholm theorems that are formulated below establish relations between the solutions of these four equations.

Theorem 21 The inhomogeneous equation Tφ = f is solvable if and only if f is orthogonal to every solution ψ0 of the adjoint homogeneous equation T*ψ0 = 0.

Theorem 22 (the Fredholm alternative). One and only one of the following situations is possible:

(i) the inhomogeneous equation Tφ = f has a unique solution for every f ∈ H;
(ii) the homogeneous equation Tφ0 = 0 has a nonzero solution φ0.

Theorem 23 The homogeneous equations Tφ0 = 0 and T*ψ0 = 0 have the same (finite) number of linearly independent solutions.

To prove the Fredholm theorems, we will use the definition of a subspace of the Hilbert space H:
M is a linear subspace of H if M is closed and f, g ∈ M implies αf + βg ∈ M, where α and β are arbitrary numbers;
and the following two properties of the Hilbert space H:
(i) if h is an arbitrary element of H, then the set h⊥ = {f ∈ H : (f, h) = 0} of all elements f ∈ H orthogonal to h forms a linear subspace of H, called the orthogonal complement of h;
(ii) if M is a (closed) linear subspace of H, then every element f ∈ H is uniquely represented as f = h + h′, where h ∈ M and h′ ∈ M⊥, the orthogonal complement of M.

Lemma 1 The manifold Im T = {y ∈ H : y = Tx, x ∈ H} is closed.
Proof. Assume that yn ∈ Im T and yn → y. To prove the lemma, one must show that y ∈ Im T, i.e., y = Tx for some x ∈ H. We will now prove the latter statement.
According to the definition of Im T, there exist vectors xn ∈ H such that

yn = T xn = xn − A xn.   (202)

Consider the manifold Ker T = {y ∈ H : Ty = 0}, which is a closed linear subspace of H. One may assume that the vectors xn are orthogonal to Ker T, i.e., xn ∈ (Ker T)⊥; otherwise, one can write

xn = hn + h′n,   hn ∈ Ker T,  h′n ∈ (Ker T)⊥,

and replace xn by h′n = xn − hn (this does not change T xn). We may also assume that the set of norms ||xn|| is bounded. Indeed, if we assume the contrary, then there is a subsequence {x_{n_k}} with ||x_{n_k}|| → ∞. Dividing (202) by ||x_{n_k}|| and noting that y_{n_k}/||x_{n_k}|| → 0, we obtain

x_{n_k}/||x_{n_k}|| − A( x_{n_k}/||x_{n_k}|| ) → 0.

Note that A is a completely continuous operator; therefore we may assume that

{ A( x_{n_k}/||x_{n_k}|| ) }

is a convergent subsequence which converges, say, to a vector z ∈ H. It is clear that then x_{n_k}/||x_{n_k}|| → z, so that ||z|| = 1 and Tz = 0, i.e., z ∈ Ker T. However, the vectors xn are orthogonal to Ker T; therefore the vector z must also be orthogonal to Ker T. This contradiction means that the sequence ||xn|| is bounded. Therefore the sequence {Axn} contains a convergent subsequence; consequently, by (202), the sequence {xn} also contains a convergent subsequence. Denoting its limit by x, we see from (202) that y = Tx. The lemma is proved.

Lemma 2 The space H is the direct orthogonal sum of the closed subspaces Ker T* and Im T,

Ker T* ⊕ Im T = H,   (203)

and also

Ker T ⊕ Im T* = H.   (204)

Proof. The closed subspaces Ker T* and Im T are orthogonal because if h ∈ Ker T* then (h, Tx) = (T*h, x) = (0, x) = 0 for all x ∈ H. It remains to prove that there is no nonzero vector orthogonal to both Ker T* and Im T. Indeed, if a vector z is orthogonal to Im T, then (T*z, x) = (z, Tx) = 0 for all x ∈ H, i.e., z ∈ Ker T*. The lemma is proved.
Lemma 2 yields the first Fredholm theorem 21. Indeed, f is orthogonal to Ker T* if and only if f ∈ Im T, i.e., there exists a vector φ such that Tφ = f.
Next, set H^k = Im T^k for every k, so that, in particular, H^1 = Im T. It is clear that

. . . ⊂ H^k ⊂ . . . ⊂ H^2 ⊂ H^1 ⊂ H,   (205)

all the subspaces in the chain are closed, and T(H^k) = H^{k+1}.
Lemma 3 There exists a number j such that H^{k+1} = H^k for all k ≥ j.

Proof. If such a j does not exist, then all the H^k are different, and one can construct an orthonormal sequence {x_k} such that x_k ∈ H^k and x_k is orthogonal to H^{k+1}. Let l > k. Then

A x_l − A x_k = −x_k + (x_l + T x_k − T x_l).

Consequently, ||A x_l − A x_k|| ≥ 1, because x_l + T x_k − T x_l ∈ H^{k+1} and x_k is orthogonal to H^{k+1}. Therefore the sequence {A x_k} does not contain a convergent subsequence, which contradicts the fact that A is a completely continuous operator. The lemma is proved.
Lemma 4 If Ker T = {0}, then Im T = H.

Proof. If Ker T = {0}, then T is a one-to-one operator. If in this case Im T ≠ H, then the chain (205) consists of distinct subspaces, which contradicts Lemma 3. Therefore Im T = H.
In the same manner one proves that Im T* = H if Ker T* = {0}.

Lemma 5 If Im T = H, then Ker T = {0}.

Proof. Since Im T = H, we have Ker T* = {0} according to Lemma 2. Then Im T* = H according to Lemma 4, and Ker T = {0} by Lemma 2.

The totality of the statements of Lemmas 4 and 5 constitutes the proof of the Fredholm alternative, Theorem 22.
Now let us prove the third Fredholm theorem 23.
To this end, assume first that Ker T is an infinite-dimensional space, dim Ker T = ∞. Then this space contains an infinite orthonormal sequence {xn} such that A xn = xn and ||A x_l − A x_k|| = √2 if k ≠ l. Therefore the sequence {A xn} does not contain a convergent subsequence, which contradicts the fact that A is a completely continuous operator.
Now set μ = dim Ker T and ν = dim Ker T* and assume that μ < ν. Let φ1, . . . , φμ be an orthonormal basis in Ker T and ψ1, . . . , ψν be an orthonormal basis in Ker T*. Set

Sx = Tx + Σ_{j=1}^{μ} (x, φj) ψj.

The operator S is the sum of T and a finite-dimensional operator; therefore all the results proved above for T remain valid for S.
Let us show that the homogeneous equation Sx = 0 has only the trivial solution. Indeed, assume that

Tx + Σ_{j=1}^{μ} (x, φj) ψj = 0.   (206)

According to Lemma 2, the vectors ψj are orthogonal to all vectors of the form Tx; hence (206) yields

Tx = 0

and

(x, φj) = 0,   1 ≤ j ≤ μ.

We see that, on the one hand, the vector x must be a linear combination of the vectors φj, and on the other hand, the vector x must be orthogonal to these vectors. Therefore x = 0 and the equation Sx = 0 has only the trivial solution. Thus, according to the second Fredholm theorem 22, there exists a vector y such that

Ty + Σ_{j=1}^{μ} (y, φj) ψj = ψ_{μ+1}.

Calculating the inner product of both sides of this equality with ψ_{μ+1}, we obtain

(Ty, ψ_{μ+1}) + Σ_{j=1}^{μ} (y, φj)(ψj, ψ_{μ+1}) = (ψ_{μ+1}, ψ_{μ+1}).

Here (Ty, ψ_{μ+1}) = 0 because Ty ∈ Im T and Im T is orthogonal to Ker T*. Thus we have

0 + Σ_{j=1}^{μ} (y, φj) · 0 = 1.

This contradiction is obtained because we assumed μ < ν. Therefore μ ≥ ν. Replacing now T by T*, we obtain in the same manner that ν ≥ μ. Therefore μ = ν. The third Fredholm theorem is proved.
To sum up, the Fredholm theorems show that the number λ = 1 is either a regular point or an eigenvalue of finite multiplicity of the operator A in (197).
All the results obtained above for the equation (197) φ = Aφ + f (i.e., for the operator A − I) remain valid for the equation λφ = Aφ + f (for the operator A − λI with λ ≠ 0). It follows that in the case of a completely continuous operator, an arbitrary nonzero number λ is either a regular point or an eigenvalue of finite multiplicity. Thus a completely continuous operator has only a point spectrum, except possibly for the point λ = 0. The point 0 always belongs to the spectrum of a completely continuous operator in an infinite-dimensional (Hilbert) space but may not be an eigenvalue.

10.5 The HilbertSchmidt theorem


Let us formulate the important Hilbert–Schmidt theorem.

Theorem 24 A completely continuous selfadjoint operator A in the Hilbert space H has an orthonormal system of eigenvectors {φk} corresponding to eigenvalues {λk} such that every element ξ ∈ H is represented as

ξ = Σ_k c_k φ_k + ξ′,   (207)

where the vector ξ′ ∈ Ker A, i.e., Aξ′ = 0. The representation (207) is unique,

Aξ = Σ_k λ_k c_k φ_k,   (208)

and

λ_k → 0,  k → ∞,   (209)

if {φk} is an infinite system.

11 IEs with symmetric kernels. HilbertSchmidt theory
Consider first a general Hilbert–Schmidt operator (181).

Theorem 25 Let A be a Hilbert–Schmidt operator with a (Hilbert–Schmidt) kernel K(x, y). Then the operator A* adjoint to A is the integral operator with the adjoint kernel K*(x, y) = K̄(y, x).

Proof. Using the Fubini theorem and changing the order of integration, we obtain

(Af, g) = ∫_a^b { ∫_a^b K(x, y) f(y) dy } ḡ(x) dx =
        = ∫_a^b ∫_a^b K(x, y) f(y) ḡ(x) dy dx =
        = ∫_a^b f(y) { ∫_a^b K(x, y) ḡ(x) dx } dy = (f, A*g),

where (A*g)(y) = ∫_a^b K̄(x, y) g(x) dx, i.e., A* has the kernel K*(y, x) = K̄(x, y), which proves the theorem.

Consider a symmetric IE

φ(x) − ∫_a^b K(x, y) φ(y) dy = f(x)   (210)

with a symmetric kernel,

K(x, y) = K̄(y, x)   (211)

for complex-valued, or

K(x, y) = K(y, x)   (212)

for real-valued K(x, y).

According to Theorem 25, the integral operator (181) is selfadjoint in the space L2[a, b], i.e., A = A*, if and only if it has a symmetric kernel.

11.1 Selfadjoint integral operators


Let us formulate the analogues of the properties established in Theorems 19 and 20 for the eigenvalues and eigenfunctions (eigenvectors) of a selfadjoint integral operator acting in the Hilbert space H.

Theorem 26 All eigenvalues of an integral operator with a symmetric kernel are real.

Proof. Let λ0 and φ0 be an eigenvalue and an eigenfunction of an integral operator with a symmetric kernel, i.e.,

φ0(x) − λ0 ∫_a^b K(x, y) φ0(y) dy = 0.   (213)

Multiply equality (213) by φ̄0(x) and integrate with respect to x between a and b to obtain

||φ0||² − λ0 (Kφ0, φ0) = 0,   (214)

which yields

λ0 = ||φ0||² / (Kφ0, φ0).   (215)

Rewrite this equality as

λ0 = ||φ0||² / ω,   ω = (Kφ0, φ0),   (216)

and prove that ω is a real number. For a symmetric kernel we have

ω = (Kφ0, φ0) = (φ0, Kφ0).   (217)

According to the definition of the inner product,

(φ0, Kφ0) = (Kφ0, φ0)̄.

Thus

ω = (Kφ0, φ0) = ω̄ = (Kφ0, φ0)̄,

so that ω = ω̄, i.e., ω is a real number and, consequently, λ0 in (215) is a real number.

Theorem 27 Eigenfunctions of an integral operator with a symmetric kernel corresponding to different eigenvalues are orthogonal.

Proof. Let λ, μ and φλ, φμ be two different eigenvalues and the corresponding eigenfunctions of an integral operator with a symmetric kernel, i.e.,

φλ(x) − λ ∫_a^b K(x, y) φλ(y) dy = 0,   (218)

φμ(x) − μ ∫_a^b K(x, y) φμ(y) dy = 0.   (219)

Then, denoting the integral operator by A, we have

(1/λ)(φλ, φμ) = ((1/λ)φλ, φμ) = (Aφλ, φμ) = (φλ, Aφμ) = (φλ, (1/μ)φμ) = (1/μ)(φλ, φμ),   (220)

which yields

(1/λ − 1/μ)(φλ, φμ) = 0,   (221)

i.e., (φλ, φμ) = 0.

One can also reformulate the Hilbert–Schmidt theorem 24 for integral operators.

Theorem 28 Let λ1, λ2, . . . and φ1, φ2, . . . be the eigenvalues and eigenfunctions (eigenvectors) of an integral operator with a symmetric kernel, and let h(x) ∈ L2[a, b], i.e.,

∫_a^b |h(x)|² dx < ∞.

If K(x, y) is a Hilbert–Schmidt kernel, i.e., a square-integrable function in the square Π = {(x, y) : a ≤ x ≤ b, a ≤ y ≤ b}, so that

∫_a^b ∫_a^b |K(x, y)|² dx dy < ∞,   (222)

then the function

f(x) = Ah = ∫_a^b K(x, y) h(y) dy   (223)

is decomposed into an absolutely and uniformly convergent Fourier series in the orthonormal system φ1, φ2, . . . :

f(x) = Σ_{n=1}^{∞} fn φn(x),   fn = (f, φn).

The Fourier coefficients fn of the function f(x) are coupled with the Fourier coefficients hn of the function h(x) by the relationships

fn = hn/λn,   hn = (h, φn),

so that

f(x) = Σ_{n=1}^{∞} (hn/λn) φn(x)   (fn = (f, φn), hn = (h, φn)).   (224)

Setting h(x) = K(x, y) in (224) (so that f(x) = ∫_a^b K(x, t)K(t, y) dt = K2(x, y) is the iterated kernel), we obtain

K2(x, y) = Σ_{n=1}^{∞} (ωn(y)/λn) φn(x),   ωn(y) = (K(x, y), φn(x)),   (225)

where ωn(y) are the Fourier coefficients of the kernel K(x, y) regarded as a function of x. Let us calculate ωn(y). We have, using the formula for the Fourier coefficients,

ωn(y) = ∫_a^b K(x, y) φn(x) dx,   (226)

or, because the kernel is symmetric,

ωn(y) = ∫_a^b K(y, x) φn(x) dx.   (227)

Note that φn satisfies the equation

φn(x) = λn ∫_a^b K(x, y) φn(y) dy.   (228)

Replacing x by y and vice versa, we obtain

(1/λn) φn(y) = ∫_a^b K(y, x) φn(x) dx,   (229)

so that

ωn(y) = (1/λn) φn(y).

Now we have

K2(x, y) = Σ_{n=1}^{∞} φn(x) φn(y) / λn².   (230)

In the same manner, we obtain

K3(x, y) = Σ_{n=1}^{∞} φn(x) φn(y) / λn³,

and, generally,

Km(x, y) = Σ_{n=1}^{∞} φn(x) φn(y) / λn^m,   (231)

which is the bilinear series for the iterated kernel Km(x, y).

For the kernel K(x, y) itself, the bilinear series is

K(x, y) = Σ_{n=1}^{∞} φn(x) φn(y) / λn.   (232)

This series may diverge in the sense of the C-norm (i.e., uniformly); however, it always converges in the L2-norm.

11.2 HilbertSchmidt theorem for integral operators


Consider a Hilbert–Schmidt integral operator A with a symmetric kernel K(x, y). In this case the following conditions are assumed to be satisfied:

(i)  Aφ(x) = ∫_a^b K(x, y) φ(y) dy,

(ii)  ∫_a^b ∫_a^b |K(x, y)|² dx dy < ∞,   (233)

(iii)  K(x, y) = K̄(y, x).

According to Theorem 18, A is a completely continuous selfadjoint operator in the space L2[a, b], and we can apply the Hilbert–Schmidt theorem 24 to prove the following statement.
Theorem 29 If 1 is not an eigenvalue of the operator A, then IE (210), written in the operator form as

φ(x) = Aφ + f(x),   (234)

has one and only one solution for every f. If 1 is an eigenvalue of the operator A, then IE (210) (or (234)) is solvable if and only if f is orthogonal to all eigenfunctions of the operator A corresponding to the eigenvalue λ = 1, and in this case IE (210) has infinitely many solutions.

Proof. According to the Hilbert–Schmidt theorem, the completely continuous selfadjoint operator A in the Hilbert space H = L2[a, b] has an orthonormal system of eigenvectors {φk} corresponding to eigenvalues {λk} such that every element φ ∈ H is represented as

φ = Σ_k c_k φ_k + φ′,   (235)

where φ′ ∈ Ker A, i.e., Aφ′ = 0. Set

f = Σ_k b_k φ_k + f′,   Af′ = 0,   (236)

and look for a solution to (234) in the form

φ = Σ_k x_k φ_k + φ′,   Aφ′ = 0.   (237)

Substituting (236) and (237) into (234), we obtain

Σ_k x_k φ_k + φ′ = Σ_k λ_k x_k φ_k + Σ_k b_k φ_k + f′,   (238)

which holds if and only if

φ′ = f′,
x_k (1 − λ_k) = b_k   (k = 1, 2, . . .),

i.e., when

φ′ = f′,
x_k = b_k/(1 − λ_k),   λ_k ≠ 1,
b_k = 0,   λ_k = 1.   (239)

The latter gives the necessary and sufficient condition for the solvability of equation (234). Note that the coefficients x_n that correspond to those n for which λ_n = 1 are arbitrary numbers. The theorem is proved.
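As an illustration of this proof, one can mimic the expansion (235)–(239) numerically: discretize a symmetric kernel by a quadrature rule, use the discrete eigenpairs in place of {λk, φk}, and assemble the solution from the coefficients (239). The sketch below (Python/NumPy; the kernel K(x, y) = min(x, y) on [0, 1] and the right-hand side f(x) = x are arbitrary choices made for the example) compares this eigenfunction-expansion solution with a direct solve.

```python
import numpy as np

# Assumed example: K(x, y) = min(x, y) on [0, 1], f(x) = x (both chosen ad hoc).
a, b, n = 0.0, 1.0, 300
x = (np.arange(n) + 0.5) / n
w = 1.0 / n
K = np.minimum(x[:, None], x[None, :])
f = x.copy()

# With uniform weights the Nystrom matrix w*K is already symmetric and plays the
# role of the operator A in (234).
B = w * K
lam, V = np.linalg.eigh(B)          # discrete analogues of {lambda_k}, {phi_k}

bk = V.T @ f                        # coefficients b_k of f in the eigenbasis
if np.any(np.isclose(lam, 1.0)):
    print("1 is (numerically) an eigenvalue: solvable only if f _|_ eigenfunctions")
xk = bk / (1.0 - lam)               # x_k = b_k / (1 - lambda_k), see (239)
phi = V @ xk                        # phi = sum_k x_k phi_k (the Ker A part included)

# Check against a direct solve of (I - A)phi = f.
phi_direct = np.linalg.solve(np.eye(n) - B, f)
print("max difference:", np.max(np.abs(phi - phi_direct)))
```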

12 Harmonic functions and Green's formulas

Denote by D ⊂ R² a two-dimensional domain bounded by the closed smooth contour Γ.
A twice continuously differentiable real-valued function u defined on a domain D is called harmonic if it satisfies Laplace's equation

Δu = 0 in D,   (240)

where

Δu = ∂²u/∂x² + ∂²u/∂y²   (241)

is called Laplace's operator (the Laplacian); here u = u(x) and x = (x, y) ∈ R². We will also use the notation y = (x0, y0).
The function

Φ(x, y) = Φ(x − y) = (1/(2π)) ln(1/|x − y|)   (242)

is called the fundamental solution of Laplace's equation. For a fixed y ∈ R², y ≠ x, the function Φ(x, y) is harmonic, i.e., satisfies Laplace's equation

∂²Φ/∂x² + ∂²Φ/∂y² = 0 in D.   (243)
The proof follows by straightforward differentiation.

12.1 Green's formulas

Let us formulate Green's theorems, which lead to the second and third Green's formulas.
Let D ⊂ R² be a (two-dimensional) domain bounded by the closed smooth contour Γ and let n_y denote the unit normal vector to the boundary Γ directed into the exterior of Γ and corresponding to a point y ∈ Γ. Then for every function u which is once continuously differentiable in the closed domain D̄ = D + Γ, u ∈ C¹(D̄), and every function v which is twice continuously differentiable in D̄, v ∈ C²(D̄), Green's first theorem (Green's first formula) is valid:

∫∫_D (uΔv + grad u · grad v) dx = ∫_Γ u ∂v/∂n_y dl_y,   (244)

where · denotes the inner product of two vector functions. For u ∈ C²(D̄) and v ∈ C²(D̄), Green's second theorem (Green's second formula) is valid:

∫∫_D (uΔv − vΔu) dx = ∫_Γ ( u ∂v/∂n_y − v ∂u/∂n_y ) dl_y.   (245)

Proof. Apply the Gauss (divergence) theorem

∫∫_D div A dx = ∫_Γ n_y · A dl_y   (246)

to the vector function (vector field) A = (A1, A2) ∈ C¹(D̄) (with the components A1, A2 being once continuously differentiable in the closed domain D̄ = D + Γ, A_i ∈ C¹(D̄), i = 1, 2) defined by

A = u grad v = [ u ∂v/∂x, u ∂v/∂y ],   (247)

so that the components are

A1 = u ∂v/∂x,   A2 = u ∂v/∂y.   (248)

We have

div A = div (u grad v) = ∂/∂x ( u ∂v/∂x ) + ∂/∂y ( u ∂v/∂y ) =   (249)
      = ∂u/∂x ∂v/∂x + u ∂²v/∂x² + ∂u/∂y ∂v/∂y + u ∂²v/∂y² = grad u · grad v + uΔv.   (250)

On the other hand, according to the definition of the normal derivative,

n_y · A = n_y · (u grad v) = u (n_y · grad v) = u ∂v/∂n_y,   (251)

so that, finally,

∫∫_D div A dx = ∫∫_D (uΔv + grad u · grad v) dx,   ∫_Γ n_y · A dl_y = ∫_Γ u ∂v/∂n_y dl_y,   (252)

which proves Green's first formula. By interchanging u and v and subtracting, we obtain the desired Green's second formula (245).
Let a twice continuously differentiable function u ∈ C²(D̄) be harmonic in the domain D. Then Green's third theorem (Green's third formula) is valid:

u(x) = ∫_Γ ( Φ(x, y) ∂u/∂n_y − u(y) ∂Φ(x, y)/∂n_y ) dl_y,   x ∈ D.   (253)

Proof. For x ∈ D we choose a circle

Ω(x, r) = {y : |x − y| = √((x − x0)² + (y − y0)²) = r}

of radius r (the boundary of a vicinity of the point x ∈ D) such that Ω(x, r) ⊂ D, and direct the unit normal n to Ω(x, r) into the exterior of Ω(x, r). Then we apply Green's second formula (245) to the harmonic function u and Φ(x, y) (which is also a harmonic function) in the domain {y ∈ D : |x − y| > r} to obtain

0 = ∫∫_{D \ Ω(x,r)} ( u ΔΦ(x, y) − Φ(x, y) Δu ) dx =   (254)

  = ∫_{Γ ∪ Ω(x,r)} ( u(y) ∂Φ(x, y)/∂n_y − Φ(x, y) ∂u(y)/∂n_y ) dl_y.   (255)

Since on Ω(x, r) we have

grad_y Φ(x, y) = n_y / (2πr),   (256)

a straightforward calculation using the mean-value theorem and the fact that

∫_Γ ∂v/∂n_y dl_y = 0   (257)

for a function v harmonic in D shows that

lim_{r→0} ∫_{Ω(x,r)} ( u(y) ∂Φ(x, y)/∂n_y − Φ(x, y) ∂u(y)/∂n_y ) dl_y = u(x),   (258)

which yields (253).

12.2 Properties of harmonic functions


Theorem 30 Let a twice continuously differentiable function v be harmonic in a domain D bounded by the closed smooth contour Γ. Then

∫_Γ ∂v(y)/∂n_y dl_y = 0.   (259)

Proof follows from Green's first formula applied to the two harmonic functions v and u ≡ 1.

Theorem 31 (Mean Value Formula) Let a twice continuously differentiable function u be harmonic in a ball

B(x, r) = {y : |x − y| = √((x − x0)² + (y − y0)²) ≤ r}

of radius r (a vicinity of a point x) with the boundary Ω(x, r) = {y : |x − y| = r} (a circle), and continuous in the closure B̄(x, r). Then

u(x) = (1/(πr²)) ∫∫_{B(x,r)} u(y) dy = (1/(2πr)) ∫_{Ω(x,r)} u(y) dl_y,   (260)

i.e., the value of u at the center of the ball is equal to the integral mean values over both the ball and its boundary.

Proof. For each 0 < ρ < r we apply (259) and Green's third formula to obtain

u(x) = (1/(2πρ)) ∫_{|x−y|=ρ} u(y) dl_y,   (261)

whence the second mean value formula in (260) follows by passing to the limit ρ → r. Multiplying (261) by ρ and integrating with respect to ρ from 0 to r, we obtain the first mean value formula in (260).

Theorem 32 (MaximumMinimum Principle) A harmonic function on a domain cannot
attain its maximum or its minimum unless it is constant.

Corollary. Let D ⊂ R² be a two-dimensional domain bounded by the closed smooth contour Γ, and let u be harmonic in D and continuous in D̄. Then u attains both its maximum and its minimum on the boundary Γ.

13 Boundary value problems


Formulate the interior Dirichlet problem: find a function u that is harmonic in the domain D bounded by the closed smooth contour Γ, continuous in D̄ = D ∪ Γ, and satisfies the Dirichlet boundary condition

Δu = 0 in D,   (262)
u|_Γ = f,   (263)

where f is a given continuous function.
Formulate the interior Neumann problem: find a function u that is harmonic in the domain D bounded by the closed smooth contour Γ, continuous in D̄ = D ∪ Γ, and satisfies the Neumann boundary condition

∂u/∂n |_Γ = g,   (264)

where g is a given continuous function.

Theorem 33 The interior Dirichlet problem has at most one solution.

Proof. Assume that the interior Dirichlet problem has two solutions, u1 and u2. Their difference u = u1 − u2 is a harmonic function that is continuous up to the boundary and satisfies the homogeneous boundary condition u = 0 on the boundary Γ of D. Then from the maximum–minimum principle or its corollary we obtain u = 0 in D, which proves the theorem.

Theorem 34 Two solutions of the interior Neumann problem can differ only by a constant.
The exterior Neumann problem has at most one solution.

Proof. We will prove the first statement of the theorem. Assume that the interior Neumann problem has two solutions, u1 and u2. Their difference u = u1 − u2 is a harmonic function that is continuous up to the boundary and satisfies the homogeneous boundary condition ∂u/∂n_y = 0 on the boundary Γ of D. Suppose that u is not constant in D. Then there exists a closed ball B contained in D such that

∫∫_B |grad u|² dx > 0.

From Green's first formula applied to the pair of (harmonic) functions u, v = u and the interior D_h of a contour Γ_h = {x − h n_x : x ∈ Γ} parallel to the boundary Γ, with sufficiently small h > 0,

∫∫_{D_h} ( uΔu + |grad u|² ) dx = ∫_{Γ_h} u ∂u/∂n_y dl_y,   (265)

we have first

∫∫_B |grad u|² dx ≤ ∫∫_{D_h} |grad u|² dx   (266)

(because B ⊂ D_h); next we obtain

∫∫_{D_h} |grad u|² dx ≤ ∫_{Γ_h} u ∂u/∂n_y dl_y.   (267)

Passing to the limit h → 0 in (267) and using the boundary condition ∂u/∂n_y = 0 on Γ, we obtain the contradiction

0 < ∫∫_B |grad u|² dx ≤ 0.

Hence the difference u = u1 − u2 must be constant in D.

14 Potentials with logarithmic kernels


In the theory of boundary value problems, the integrals

u(x) = ∫_C E(x, y) ψ(y) dl_y,   v(x) = ∫_C ∂E(x, y)/∂n_y φ(y) dl_y   (268)

are called potentials. Here x = (x, y), y = (x0, y0) ∈ R²; E(x, y) is the fundamental solution of a second-order elliptic differential operator;

∂/∂n_y

is the normal derivative at the point y of the closed piecewise smooth boundary C of a domain in R²; and ψ(y) and φ(y) are sufficiently smooth functions (densities) defined on C. In the case of Laplace's operator Δu,

E(x, y) = g(x, y) = g(x − y) = (1/(2π)) ln(1/|x − y|).   (269)

In the case of the Helmholtz operator H(k) = Δ + k², one can take E(x, y) in the form

E(x, y) = E(x − y) = (i/4) H0^(1)(k|x − y|),

where H0^(1)(z) = 2i g(z) + h(z) is the Hankel function of the first kind and zero order (one of the so-called cylindrical functions), g(x − y) = (1/(2π)) ln(1/|x − y|) is the kernel of the two-dimensional single-layer potential, and the second derivative of h(z) has a logarithmic singularity.
14.1 Properties of potentials
Statement 1. Let D ⊂ R² be a domain bounded by the closed smooth contour Γ. Then the kernel of the double-layer potential,

V(x, y) = ∂Φ(x, y)/∂n_y,   Φ(x, y) = (1/(2π)) ln(1/|x − y|),   (270)

is a continuous function on Γ for x, y ∈ Γ.

Proof. Performing the differentiation, we obtain

V(x, y) = (1/(2π)) cos θ_{x,y} / |x − y|,   (271)

where θ_{x,y} is the angle between the normal vector n_y at the integration point y = (x0, y0) and the vector x − y. Choose the origin O = (0, 0) of (Cartesian) coordinates at the point y on the curve Γ so that the Ox axis goes along the tangent and the Oy axis along the normal to Γ at this point. Then one can write the equation of the curve Γ in a sufficiently small vicinity of the point y in the form

y = y(x).

The assumption concerning the smoothness of Γ means that y(x) is twice differentiable in a vicinity of the origin O = (0, 0), and one can write a segment of the Taylor series

y = y(0) + x y′(0) + (x²/2) y″(θx) = (x²/2) y″(θx)   (0 < θ < 1),

because y(0) = y′(0) = 0. Denoting r = |x − y| and taking into account that x = (x, y) and the origin of coordinates is placed at the point y = (0, 0), we obtain

r = √((x − 0)² + (y − 0)²) = √(x² + y²) = √(x² + (x⁴/4)(y″(θx))²) = x √(1 + (x²/4)(y″(θx))²);

cos θ_{x,y} = y/r = (1/2) x y″(θx) / √(1 + (x²/4)(y″(θx))²),

and

cos θ_{x,y} / r = y/r² = (1/2) y″(θx) / (1 + (x²/4)(y″(θx))²).

The curvature K of a plane curve is given by

K = y″ / (1 + (y′)²)^{3/2},

which yields y″(0) = K(y), and, finally,

lim_{r→0} cos θ_{x,y} / r = (1/2) K(y),

Statement 2 (Gauss formula) Let D ⊂ R² be a domain bounded by the closed smooth contour Γ. For the double-layer potential with a constant density,

v⁰(x) = ∫_Γ ∂Φ(x, y)/∂n_y dl_y,   Φ(x, y) = (1/(2π)) ln(1/|x − y|),   (272)

where the (exterior) unit normal vector n_y of Γ is directed into the exterior domain R² \ D̄, we have

v⁰(x) = −1,  x ∈ D,
v⁰(x) = −1/2,  x ∈ Γ,   (273)
v⁰(x) = 0,  x ∈ R² \ D̄.

Proof. For x ∈ R² \ D̄ the result follows from the equality

∫_Γ ∂v(y)/∂n_y dl_y = 0   (274)

applied to v(y) = Φ(x, y). For x ∈ D it follows from Green's third formula

u(x) = ∫_Γ ( Φ(x, y) ∂u/∂n_y − u(y) ∂Φ(x, y)/∂n_y ) dl_y,   x ∈ D,   (275)

applied to u(y) ≡ 1 in D̄.
Note also that if we set

v⁰(x0) = −1/2,   x0 ∈ Γ,

we can also write (273) as

v⁰_∓(x0) = lim_{h→+0} v⁰(x0 ∓ h n_{x0}) = v⁰(x0) ∓ 1/2,   x0 ∈ Γ.   (276)
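The values (273) are easy to verify by direct quadrature. In the sketch below (Python/NumPy) the contour Γ is taken to be the unit circle purely for convenience; the three printed numbers approximate v⁰ at an interior point, a boundary point, and an exterior point.

```python
import numpy as np

# Gamma is taken to be the unit circle (an assumption made for this illustration).
m = 4000
t = (np.arange(m) + 0.5) * 2.0 * np.pi / m      # nodes avoiding t = 0
y = np.stack([np.cos(t), np.sin(t)], axis=1)    # quadrature points on Gamma
ny = y.copy()                                   # exterior unit normal of the unit circle
dl = 2.0 * np.pi / m                            # arc-length element

def v0(x):
    """Double-layer potential with constant density 1: integral of dPhi/dn_y dl_y."""
    d = x - y                                   # vectors x - y
    r2 = np.sum(d * d, axis=1)
    kernel = np.sum(d * ny, axis=1) / (2.0 * np.pi * r2)    # dPhi(x,y)/dn_y
    return np.sum(kernel) * dl

print(v0(np.array([0.3, 0.1])))   # interior point:  approx -1
print(v0(np.array([1.0, 0.0])))   # boundary point:  approx -1/2
print(v0(np.array([2.0, 0.5])))   # exterior point:  approx  0
```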

Corollary. Let D ⊂ R² be a domain bounded by the closed smooth contour Γ. Introduce the single-layer potential with a constant density,

u⁰(x) = ∫_Γ Φ(x, y) dl_y,   Φ(x, y) = (1/(2π)) ln(1/|x − y|).   (277)

For the normal derivative of this single-layer potential,

∂u⁰(x)/∂n_x = ∫_Γ ∂Φ(x, y)/∂n_x dl_y,   (278)

where the (exterior) unit normal vector n_x of Γ is directed into the exterior domain R² \ D̄, we have

∂u⁰(x)/∂n_x = 0,  x ∈ D,
∂u⁰(x)/∂n_x = −1/2,  x ∈ Γ,   (279)
∂u⁰(x)/∂n_x = −1,  x ∈ R² \ D̄.

Theorem 35 Let D ⊂ R² be a domain bounded by the closed smooth contour Γ. The double-layer potential

v(x) = ∫_Γ ∂Φ(x, y)/∂n_y φ(y) dl_y,   Φ(x, y) = (1/(2π)) ln(1/|x − y|),   (280)

with a continuous density φ can be continuously extended from D to D̄ and from R² \ D̄ to R² \ D, with the limiting values on Γ

v_∓(x0) = ∫_Γ ∂Φ(x0, y)/∂n_y φ(y) dl_y ∓ (1/2) φ(x0),   x0 ∈ Γ,   (281)

or

v_∓(x0) = v(x0) ∓ (1/2) φ(x0),   x0 ∈ Γ,   (282)

where

v_∓(x0) = lim_{h→+0} v(x0 ∓ h n_{x0}).   (283)

Proof.
1. Introduce the function

I(x) = v(x) − φ0 v⁰(x) = ∫_Γ ∂Φ(x, y)/∂n_y ( φ(y) − φ0 ) dl_y,   (284)

where φ0 = φ(x0), and prove its continuity at the point x0 ∈ Γ. For every ε > 0 and every η > 0 there exists a vicinity C1 ⊂ Γ of the point x0 ∈ Γ such that

|φ(x) − φ(x0)| < η,   x ∈ C1.   (285)

Set

I = I1 + I2 = ∫_{C1} . . . + ∫_{Γ\C1} . . . .

Then

|I1| ≤ η B1,

where B1 is a constant chosen according to

∫_Γ | ∂Φ(x, y)/∂n_y | dl_y ≤ B1,   x ∈ D̄,   (286)

and condition (286) holds because kernel (270) is continuous on Γ.
Taking η = ε/B1, we see that for every ε > 0 there exists a vicinity C1 ⊂ Γ of the point x0 ∈ Γ (i.e., x0 ∈ C1) such that

|I1(x)| < ε,   x ∈ D̄.   (287)

Inequality (287) means that I(x) is continuous at the point x0 ∈ Γ.
2. Now, taking v_∓(x0) for x0 ∈ Γ and considering the limits v_∓(x0) = lim_{h→+0} v(x0 ∓ h n_{x0}) from inside and outside the contour Γ with the help of (276), we obtain

v_−(x0) = I(x0) + φ(x0) lim_{h→+0} v⁰(x0 − h n_{x0}) = v(x0) − (1/2) φ(x0),   (288)

v_+(x0) = I(x0) + φ(x0) lim_{h→+0} v⁰(x0 + h n_{x0}) = v(x0) + (1/2) φ(x0),   (289)

which proves the theorem.

Corollary. Let D ⊂ R² be a domain bounded by the closed smooth contour Γ. Introduce the single-layer potential

u(x) = ∫_Γ Φ(x, y) ψ(y) dl_y,   Φ(x, y) = (1/(2π)) ln(1/|x − y|),   (290)

with a continuous density ψ. The normal derivative of this single-layer potential,

∂u(x)/∂n_x = ∫_Γ ∂Φ(x, y)/∂n_x ψ(y) dl_y,   (291)

can be continuously extended from D to D̄ and from R² \ D̄ to R² \ D, with the limiting values on Γ

∂u_∓(x0)/∂n_{x0} = ∫_Γ ∂Φ(x0, y)/∂n_{x0} ψ(y) dl_y ± (1/2) ψ(x0),   x0 ∈ Γ,   (292)

or

∂u_∓(x0)/∂n_{x0} = ∂u(x0)/∂n_{x0} ± (1/2) ψ(x0),   x0 ∈ Γ,   (293)

where

∂u_∓(x0)/∂n_{x0} = lim_{h→+0} n_{x0} · grad u(x0 ∓ h n_{x0}).   (294)

14.2 Generalized potentials
Let S(Γ) ⊂ R² be a domain bounded by the closed piecewise smooth contour Γ. We assume that a rectilinear interval Γ0 is a subset of Γ, so that Γ0 = {x : y = 0, x ∈ [a, b]}.
We say that the functions U_l(x) are the generalized single-layer (SLP) (l = 1) or double-layer (DLP) (l = 2) potentials if

U_l(x) = ∫_Γ K_l(x, t) φ_l(t) dt,   x = (x, y) ∈ S(Γ),

where

K_l(x, t) = g_l(x, t) + F_l(x, t)   (l = 1, 2),

g_1(x, t) = g(x, y_0) = (1/(2π)) ln(1/|x − y_0|),   g_2(x, t) = ∂g(x, y_0)/∂y_0   [y_0 = (t, 0)],

F_{1,2} are smooth functions, and we shall assume that for every closed domain S̄0(Γ) ⊂ S(Γ) the following conditions hold:

i) F_1(x, t) is once continuously differentiable with respect to the variables of x and continuous in t;

ii) F_2(x, t), ∂F_2(x, t)/∂t, and

F̃_2(x, t) = ∫_q^y F_2(x, s) ds,   q ∈ R¹,

are continuous.
We shall also assume that the densities of the generalized potentials satisfy φ_1 ∈ L2^(1)(Γ) and φ_2 ∈ L2^(2)(Γ), where the functional spaces L2^(1) and L2^(2) are defined in Section 16.

15 Reduction of boundary value problems to integral


equations
Green's formulas show that each harmonic function can be represented as a combination of single- and double-layer potentials. For boundary value problems we will look for a solution in the form of one of these potentials.
Introduce the integral operators K0 and K1 acting in the space C(Γ) of continuous functions defined on the contour Γ:

K0 φ(x) = 2 ∫_Γ ∂Φ(x, y)/∂n_y φ(y) dl_y,   x ∈ Γ,   (295)

and

K1 ψ(x) = 2 ∫_Γ ∂Φ(x, y)/∂n_x ψ(y) dl_y,   x ∈ Γ.   (296)

The kernels of the operators K0 and K1 are continuous on Γ. As is seen by interchanging the order of integration, K0 and K1 are adjoint as integral operators in the space C(Γ) of continuous functions defined on the curve Γ.

Theorem 36 The operators I − K0 and I − K1 have trivial nullspaces,

N(I − K0) = {0},   N(I − K1) = {0};   (297)

the nullspaces of the operators I + K0 and I + K1 have dimension one, and

N(I + K0) = span{1},   N(I + K1) = span{ψ0}   (298)

with

∫_Γ ψ0 dl_y ≠ 0.   (299)

Proof. Let φ be a solution to the homogeneous integral equation φ − K0 φ = 0 and define the double-layer potential

v(x) = ∫_Γ ∂Φ(x, y)/∂n_y φ(y) dl_y,   x ∈ D,   (300)

with the continuous density φ. Then we have, for the limiting values on Γ,

v_∓(x) = ∫_Γ ∂Φ(x, y)/∂n_y φ(y) dl_y ∓ (1/2) φ(x),   (301)

which yields

2 v_−(x) = K0 φ(x) − φ(x) = 0.   (302)

From the uniqueness of the solution of the interior Dirichlet problem (Theorem 33) it follows that v = 0 in the whole domain D. Now we apply the continuity of the normal derivative of the double-layer potential (300) across a smooth contour Γ,

∂v(y)/∂n_y |_+ − ∂v(y)/∂n_y |_− = 0,   y ∈ Γ,   (303)

where the internal and external limiting values with respect to Γ are defined as follows:

∂v(y)/∂n_y |_− = lim_{h→+0} n_y · grad v(y − h n_y)   (304)

and

∂v(y)/∂n_y |_+ = lim_{h→+0} n_y · grad v(y + h n_y).   (305)

Note that (303) can be written as

lim_{h→+0} n_y · [ grad v(y + h n_y) − grad v(y − h n_y) ] = 0.   (306)

Since v = 0 in D, from (304)–(306) it follows that

∂v(y)/∂n_y |_+ = ∂v(y)/∂n_y |_− = 0,   y ∈ Γ.   (307)

Using the uniqueness for the exterior Neumann problem and the fact that v(y) → 0 as |y| → ∞, we find that v(y) = 0, y ∈ R² \ D̄. Hence from (301) we deduce φ = v_+ − v_− = 0 on Γ. Thus N(I − K0) = {0} and, by the Fredholm alternative, N(I − K1) = {0}.

Theorem 37 Let D ⊂ R² be a domain bounded by the closed smooth contour Γ. The double-layer potential

v(x) = ∫_Γ ∂Φ(x, y)/∂n_y φ(y) dl_y,   Φ(x, y) = (1/(2π)) ln(1/|x − y|),   x ∈ D,   (308)

with a continuous density φ is a solution of the interior Dirichlet problem provided that φ is a solution of the integral equation

φ(x) − 2 ∫_Γ ∂Φ(x, y)/∂n_y φ(y) dl_y = −2 f(x),   x ∈ Γ.   (309)

Proof follows from Theorem 35 (Section 14.1).

Theorem 38 The interior Dirichlet problem has a unique solution.

Proof. The integral equation φ − K0 φ = −2f of the interior Dirichlet problem has a unique solution by Theorem 36, because N(I − K0) = {0}.

Theorem 39 Let D ⊂ R² be a domain bounded by the closed smooth contour Γ. The double-layer potential

v(x) = ∫_Γ ∂Φ(x, y)/∂n_y φ(y) dl_y,   x ∈ R² \ D̄,   (310)

with a continuous density φ is a solution of the exterior Dirichlet problem provided that φ is a solution of the integral equation

φ(x) + 2 ∫_Γ ∂Φ(x, y)/∂n_y φ(y) dl_y = 2 f(x),   x ∈ Γ.   (311)

Here we assume that the origin is contained in D.

Proof follows from Theorem 35 (Section 14.1).

Theorem 40 The exterior Dirichlet problem has a unique solution.

Proof. The integral operator K0 : C(Γ) → C(Γ) entering (311) is compact in the space C(Γ) of functions continuous on Γ because its kernel is a continuous function on Γ. Let φ be a solution to the homogeneous integral equation φ + K0 φ = 0 on Γ, and define v by (310). Then 2 v_+ = φ + K0 φ = 0 on Γ, and by the uniqueness for the exterior Dirichlet problem it follows that v ≡ 0 in R² \ D̄ and ∫_Γ φ dl_y = 0. Therefore φ + K0 φ = 0, which means, according to Theorem 36 (because N(I + K0) = span{1}), that φ = const on Γ. Now ∫_Γ φ dl_y = 0 implies that φ = 0 on Γ, and the existence of a unique solution to the integral equation (311) follows from the Fredholm property of its operator.

Theorem 41 Let D ⊂ R² be a domain bounded by the closed smooth contour Γ. The single-layer potential

u(x) = ∫_Γ Φ(x, y) ψ(y) dl_y,   x ∈ D,   (312)

with a continuous density ψ is a solution of the interior Neumann problem provided that ψ is a solution of the integral equation

ψ(x) + 2 ∫_Γ ∂Φ(x, y)/∂n_x ψ(y) dl_y = 2 g(x),   x ∈ Γ.   (313)

Theorem 42 The interior Neumann problem is solvable if and only if

∫_Γ g dl_y = 0   (314)

is satisfied.

16 Functional spaces and Chebyshev polynomials


16.1 Chebyshev polynomials of the first and second kind
The Chebyshev polynomials of the first and second kind, Tn and Un, are defined by the following recurrence relations:

T0(x) = 1,   T1(x) = x;   (315)
Tn(x) = 2x T_{n−1}(x) − T_{n−2}(x),   n = 2, 3, . . .;   (316)
U0(x) = 1,   U1(x) = 2x;   (317)
Un(x) = 2x U_{n−1}(x) − U_{n−2}(x),   n = 2, 3, . . . .   (318)

We see that the recurrences (316) and (318) are the same; the polynomials are different because they are determined using different initial conditions, (315) and (317).
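The common recurrence translates directly into code. The following sketch (Python/NumPy; an illustration only, not part of the compendium) builds the coefficient lists of Tn and Un in ascending powers of x and can be used to reproduce the tables given below.

```python
import numpy as np

def cheb_poly(n, kind=1):
    """Coefficients of T_n (kind=1) or U_n (kind=2) in ascending powers of x,
    built from the common recurrence p_n = 2*x*p_{n-1} - p_{n-2}."""
    p_prev = np.array([1.0])                                    # T_0 = U_0 = 1
    p_curr = np.array([0.0, 1.0]) if kind == 1 else np.array([0.0, 2.0])
    if n == 0:
        return p_prev
    for _ in range(n - 1):
        shifted = np.concatenate(([0.0], 2.0 * p_curr))         # 2*x*p_{n-1}
        shifted[: p_prev.size] -= p_prev                        # - p_{n-2}
        p_prev, p_curr = p_curr, shifted
    return p_curr

print(cheb_poly(6, kind=1))   # [-1, 0, 18, 0, -48, 0, 32], i.e. 32x^6-48x^4+18x^2-1
print(cheb_poly(6, kind=2))   # [-1, 0, 24, 0, -80, 0, 64], i.e. 64x^6-80x^4+24x^2-1
```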
Let us write down the first few Chebyshev polynomials of the first and second kind; we determine them directly from formulas (315)–(318):

T0(x) = 1,
T1(x) = x,
T2(x) = 2x² − 1,
T3(x) = 4x³ − 3x,   (319)
T4(x) = 8x⁴ − 8x² + 1,
T5(x) = 16x⁵ − 20x³ + 5x,
T6(x) = 32x⁶ − 48x⁴ + 18x² − 1;

U0(x) = 1,
U1(x) = 2x,
U2(x) = 4x² − 1,
U3(x) = 8x³ − 4x,   (320)
U4(x) = 16x⁴ − 12x² + 1,
U5(x) = 32x⁵ − 32x³ + 6x,
U6(x) = 64x⁶ − 80x⁴ + 24x² − 1.

We see that Tn and Un are polynomials of degree n.
Let us briefly summarize some important properties of the Chebyshev polynomials.
If |x| ≤ 1, the following convenient representations are valid:

Tn(x) = cos(n arccos x),   n = 0, 1, . . .;   (321)

Un(x) = (1/√(1 − x²)) sin((n + 1) arccos x),   n = 0, 1, . . . .   (322)

At x = ±1 the right-hand side of formula (322) should be replaced by its limit.
The polynomials of the first and second kind are coupled by the relationships

Un(x) = (1/(n + 1)) T′_{n+1}(x),   n = 0, 1, . . . .   (323)
The polynomials Tn and Un are, respectively, even functions for even n and odd functions for odd n:

Tn(−x) = (−1)^n Tn(x),
Un(−x) = (−1)^n Un(x).   (324)

One of the most important properties is the orthogonality of the Chebyshev polynomials on the segment [−1, 1] with a certain weight function (a weight). This property is expressed as follows:

∫_{−1}^{1} Tn(x) Tm(x) dx/√(1 − x²) = { 0, n ≠ m;  π/2, n = m ≠ 0;  π, n = m = 0 };   (325)

∫_{−1}^{1} Un(x) Um(x) √(1 − x²) dx = { 0, n ≠ m;  π/2, n = m }.   (326)

Thus on the segment [−1, 1] the Chebyshev polynomials of the first kind are orthogonal with the weight 1/√(1 − x²), and the Chebyshev polynomials of the second kind are orthogonal with the weight √(1 − x²).

16.2 Fourier–Chebyshev series

Formulas (325) and (326) enable one to decompose functions in Fourier series in the Chebyshev polynomials (the Fourier–Chebyshev series),

f(x) = (a0/2) T0(x) + Σ_{n=1}^{∞} an Tn(x),   x ∈ [−1, 1],   (327)

or

f(x) = Σ_{n=0}^{∞} bn Un(x),   x ∈ [−1, 1].   (328)

The coefficients an and bn are determined as follows:

an = (2/π) ∫_{−1}^{1} f(x) Tn(x) dx/√(1 − x²),   n = 0, 1, . . .;   (329)

bn = (2/π) ∫_{−1}^{1} f(x) Un(x) √(1 − x²) dx,   n = 0, 1, . . . .   (330)

Note that f (x) may be a complex-valued function; then the Fourier coefficients an and bn
are complex numbers.
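In practice the coefficients (329) are computed by quadrature; the Gauss–Chebyshev rule, whose nodes xk = cos((2k − 1)π/(2m)) absorb the weight 1/√(1 − x²), is particularly convenient. The sketch below (Python/NumPy; the test function f(x) = e^x and the truncation orders are arbitrary choices made for this illustration) computes an and checks the partial sum (327) against f.

```python
import numpy as np

def cheb_coeff_a(f, nmax, m=200):
    """Coefficients a_n of (329) via the m-point Gauss-Chebyshev quadrature:
    integral of g(x)/sqrt(1-x^2) dx ~ (pi/m) * sum g(x_k), x_k = cos((2k-1)pi/(2m))."""
    k = np.arange(1, m + 1)
    xk = np.cos((2 * k - 1) * np.pi / (2 * m))
    fx = f(xk)
    return np.array([(2.0 / m) * np.sum(fx * np.cos(n * np.arccos(xk)))
                     for n in range(nmax + 1)])    # (2/pi)*(pi/m)*sum = (2/m)*sum

f = np.exp                        # arbitrary test function
a = cheb_coeff_a(f, 8)

# Reconstruct f from the partial sum (327) and check the error at a few points.
x = np.linspace(-0.9, 0.9, 5)
fN = a[0] / 2 + sum(a[n] * np.cos(n * np.arccos(x)) for n in range(1, a.size))
print(np.max(np.abs(fN - f(x))))  # small: the series converges rapidly for smooth f
```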
Consider some examples of decompositions of functions in the Fourier–Chebyshev series:

√(1 − x²) = 2/π − (4/π) Σ_{n=1}^{∞} T_{2n}(x)/(4n² − 1),   −1 ≤ x ≤ 1;

arcsin x = (4/π) Σ_{n=0}^{∞} T_{2n+1}(x)/(2n + 1)²,   −1 ≤ x ≤ 1;

ln(1 + x) = −ln 2 + 2 Σ_{n=1}^{∞} ((−1)^{n−1}/n) Tn(x),   −1 < x ≤ 1.

Similar decompositions can be obtained using the Chebyshev polynomials Un of the second kind. In the first example above, an even function f(x) = √(1 − x²) is decomposed; therefore its expansion contains only coefficients multiplying T_{2n}(x) with even indices, while the terms with odd indices vanish.
Since |Tn(x)| ≤ 1 according to (321), the first and second series converge for all x ∈ [−1, 1] uniformly on this segment. The third series represents a different type of convergence: it converges for all x ∈ (−1, 1] and diverges at the point x = −1; indeed, the decomposed function f(x) = ln(x + 1) is not defined at x = −1.
In order to describe the convergence of the Fourier–Chebyshev series, introduce the functional spaces of functions square-integrable on the segment (−1, 1) with the weights 1/√(1 − x²) and √(1 − x²). We shall write f ∈ L2^(1) if the integral

∫_{−1}^{1} |f(x)|² dx/√(1 − x²)   (331)

converges, and f ∈ L2^(2) if the integral

∫_{−1}^{1} |f(x)|² √(1 − x²) dx   (332)

converges. The numbers specified by the (convergent) integrals (331) and (332) are the squared norms of the function f(x) in the spaces L2^(1) and L2^(2); we will denote them by ||f||1² and ||f||2², respectively. Write the symbolic definitions of these spaces:

L2^(1) := {f(x) : ||f||1² := ∫_{−1}^{1} |f(x)|² dx/√(1 − x²) < ∞},

L2^(2) := {f(x) : ||f||2² := ∫_{−1}^{1} |f(x)|² √(1 − x²) dx < ∞}.
The spaces L2^(1) and L2^(2) are (infinite-dimensional) Hilbert spaces with the inner products

(f, g)_1 = ∫_{−1}^{1} f(x) ḡ(x) dx/√(1 − x²),

(f, g)_2 = ∫_{−1}^{1} f(x) ḡ(x) √(1 − x²) dx.

Let us define two more functional spaces of differentiable functions defined on the segment (−1, 1) whose first derivatives are square-integrable in (−1, 1) with the weights √(1 − x²) and (1 − x²)^{3/2}:

W2^1 := {f(x) : f ∈ L2^(1), f′ ∈ L2^(2)},

and

Ŵ2^1 := {f(x) : f ∈ L2^(1), f′(x)√(1 − x²) ∈ L2^(2)};

they are also Hilbert spaces with the inner products

(f, g)_{W2^1} = ∫_{−1}^{1} f(x) dx/√(1 − x²) · ∫_{−1}^{1} ḡ(x) dx/√(1 − x²) + ∫_{−1}^{1} f′(x) ḡ′(x) √(1 − x²) dx,

(f, g)_{Ŵ2^1} = ∫_{−1}^{1} f(x) dx/√(1 − x²) · ∫_{−1}^{1} ḡ(x) dx/√(1 − x²) + ∫_{−1}^{1} f′(x) ḡ′(x) (1 − x²)^{3/2} dx,

and norms

||f||²_{W2^1} = | ∫_{−1}^{1} f(x) dx/√(1 − x²) |² + ∫_{−1}^{1} |f′(x)|² √(1 − x²) dx,

||f||²_{Ŵ2^1} = | ∫_{−1}^{1} f(x) dx/√(1 − x²) |² + ∫_{−1}^{1} |f′(x)|² (1 − x²)^{3/2} dx.

If a function f(x) is decomposed in the Fourier–Chebyshev series (327) or (328) and is an element of the space L2^(1) or L2^(2) (that is, integral (331) or (332) converges), then the corresponding series always converges in the root-mean-square sense, i.e.,

||f − S_N||1 → 0 and ||f − S_N||2 → 0 as N → ∞,

where S_N are the partial sums of the Fourier–Chebyshev series. The smoother the function f(x) on the segment [−1, 1], the faster the convergence of its Fourier–Chebyshev series. Let us explain this statement: if f(x) has p continuous derivatives on the segment [−1, 1], then

| f(x) − (a0/2) T0(x) − Σ_{n=1}^{N} an Tn(x) | ≤ C ln N / N^p,   x ∈ [−1, 1],   (333)

where C is a constant that does not depend on N.

A similar statement is valid for the Fourier–Chebyshev series in the polynomials Un: if f(x) has p continuous derivatives on the segment [−1, 1] and p ≥ 2, then

| f(x) − Σ_{n=0}^{N} bn Un(x) | ≤ C / N^{p−3/2},   x ∈ [−1, 1].   (334)

We will often use the following three relationships involving the Chebyshev polynomials:

∫_{−1}^{1} (1/(x − y)) Tn(x) dx/√(1 − x²) = π U_{n−1}(y),   −1 < y < 1,  n ≥ 1;   (335)

∫_{−1}^{1} (1/(x − y)) U_{n−1}(x) √(1 − x²) dx = −π Tn(y),   −1 < y < 1,  n ≥ 1;   (336)

∫_{−1}^{1} ln(1/|x − y|) Tn(x) dx/√(1 − x²) = { π ln 2, n = 0;  (π/n) Tn(y), n ≥ 1 }.   (337)

The integrals in (335) and (336) are improper integrals in the sense of the Cauchy principal value (a detailed analysis of such integrals is performed in [8, 9]). The integral in (337) is a standard convergent improper integral.
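Relationship (337) is easy to confirm numerically; in the following sketch (Python/NumPy) the left-hand side is evaluated by Gauss–Chebyshev quadrature (a slowly converging but adequate choice for this check) and compared with the right-hand side for a few values of n.

```python
import numpy as np

# Check (337): LHS by Gauss-Chebyshev quadrature, RHS = pi*ln 2 or (pi/n)*T_n(y).
m = 5000
xk = np.cos((2.0 * np.arange(1, m + 1) - 1.0) * np.pi / (2.0 * m))
y = 0.37                                      # an arbitrary interior point
for n in (0, 1, 3):
    lhs = (np.pi / m) * np.sum(np.log(1.0 / np.abs(xk - y)) * np.cos(n * np.arccos(xk)))
    rhs = np.pi * np.log(2.0) if n == 0 else (np.pi / n) * np.cos(n * np.arccos(y))
    print(n, lhs, rhs)                        # the two columns nearly coincide
```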

17 Solution to integral equations with a logarithmic singularity of the kernel

17.1 Integral equations with a logarithmic singularity of the kernel

Many boundary value problems of mathematical physics are reduced to integral equations with a logarithmic singularity of the kernel,

Lφ := ∫_{−1}^{1} ln(1/|x − y|) φ(x) dx/√(1 − x²) = f(y),   −1 < y < 1,   (338)

and

(L + K)φ := ∫_{−1}^{1} ( ln(1/|x − y|) + K(x, y) ) φ(x) dx/√(1 − x²) = f(y),   −1 < y < 1.   (339)

In equations (338) and (339), φ(x) is an unknown function and the right-hand side f(y) is a given (continuous) function. The functions ln(1/|x − y|) in (338) and ln(1/|x − y|) + K(x, y) in (339) are the kernels of the integral equations; K(x, y) is a given function which is assumed to be continuous (to have no singularity) at x = y. The kernels go to infinity as x → y and are therefore referred to as functions with a (logarithmic) singularity, and equations (338) and (339) are called integral equations with a logarithmic singularity of the kernel. In equations (338) and (339) the weight function 1/√(1 − x²) is separated explicitly. This function describes the behaviour (singularity) of the solution in the vicinity of the endpoints −1 and +1. The linear integral operators L and L + K are defined by the left-hand sides of (338) and (339).
Equation (338) is a particular case of (339) for K(x, y) ≡ 0 and can be solved explicitly.
Assume that the right-hand side f(y) ∈ W2^1 (for simplicity, one may assume that f is a continuously differentiable function on the segment [−1, 1]) and look for the solution φ(x) in the space L2^(1). This implies that the integral operators L and L + K are considered as bounded linear operators

L : L2^(1) → W2^1   and   L + K : L2^(1) → W2^1.

Note that if K(x, y) is a sufficiently smooth function, then K is a compact (completely continuous) operator in these spaces.

17.2 Solution via Fourier–Chebyshev series

Let us describe the method of solution of integral equation (338). Look for an approximation φ_N(x) to the solution φ(x) of (338) in the form of a partial sum of the Fourier–Chebyshev series,

φ_N(x) = (a0/2) T0(x) + Σ_{n=1}^{N} an Tn(x),   (340)

where an are unknown coefficients. Expand the given function f(y) in the Fourier–Chebyshev series,

f_N(y) = (f0/2) T0(y) + Σ_{n=1}^{N} fn Tn(y),   (341)

where fn are known coefficients determined according to (329). Substituting (340) and (341) into equation (338) and using formula (337), we obtain

π (a0/2) ln 2 T0(y) + π Σ_{n=1}^{N} (an/n) Tn(y) = (f0/2) T0(y) + Σ_{n=1}^{N} fn Tn(y).   (342)

Now equate the coefficients multiplying Tn(y) with the same indices n:

π (a0/2) ln 2 = f0/2,   π an/n = fn,

which gives the unknown coefficients an,

a0 = f0/(π ln 2),   an = n fn/π,

and the approximation

φ_N(x) = (f0/(2π ln 2)) T0(x) + (1/π) Σ_{n=1}^{N} n fn Tn(x).   (343)

The exact solution of equation (338) is obtained by passing to the limit N → ∞ in (343):

φ(x) = (f0/(2π ln 2)) T0(x) + (1/π) Σ_{n=1}^{∞} n fn Tn(x).   (344)

In some cases one can determine the sum of the series (344) explicitly and obtain the solution in a closed form. One can show that the approximation φ_N(x) of the solution of equation (338) defined by (343) converges to the exact solution φ(x) as N → ∞ in the norm of the space L2^(1).
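The scheme (340)–(344) is straightforward to program. In the sketch below (Python/NumPy) the right-hand side f(y) = y² = (T0(y) + T2(y))/2 is chosen so that only two coefficients are nonzero (an assumption made purely for the example); the computed φ is then checked by applying the operator L by Gauss–Chebyshev quadrature.

```python
import numpy as np

# Solve (338) with the sample right-hand side f(y) = y^2 = (T0(y) + T2(y))/2,
# using the coefficient relations a0 = f0/(pi*ln 2), an = n*fn/pi from (342)-(344).
f0, f2 = 1.0, 0.5                       # Fourier-Chebyshev coefficients of f(y) = y^2
a0 = f0 / (np.pi * np.log(2.0))
a2 = 2.0 * f2 / np.pi
phi = lambda x: a0 / 2.0 + a2 * np.cos(2.0 * np.arccos(x))   # phi = a0/2 + a2*T2

# Check: apply the operator L by Gauss-Chebyshev quadrature and compare with y^2.
m = 4000
xk = np.cos((2.0 * np.arange(1, m + 1) - 1.0) * np.pi / (2.0 * m))
for y in (-0.5, 0.1, 0.7):
    Lphi = (np.pi / m) * np.sum(np.log(1.0 / np.abs(xk - y)) * phi(xk))
    print(y, Lphi, y**2)                # the last two columns nearly coincide
```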
Consider integral equation (339). Expand the kernel K(x, y) in the double Fourier–Chebyshev series and take its partial sum,

K(x, y) ≈ K_N(x, y) = Σ_{i=0}^{N} Σ_{j=0}^{N} k_{ij} Ti(x) Tj(y),   (345)

and repeat the above procedure. A more detailed description is given below.
Example 14 Consider the integral equation

∫_{−1}^{1} ( ln(1/|x − y|) + xy ) φ(x) dx/√(1 − x²) = y + √(1 − y²),   −1 < y < 1.

Use the method of solution described above and set

φ_N(x) = (a0/2) T0(x) + Σ_{n=1}^{N} an Tn(x)

to obtain

y + √(1 − y²) = (2/π) T0(y) + T1(y) − (4/π) Σ_{k=1}^{∞} T_{2k}(y)/(4k² − 1),

K(x, y) = T1(x) T1(y).

Substituting these expressions into the initial integral equation and using (337) and (325), we obtain

π (a0/2) ln 2 T0(y) + π Σ_{n=1}^{N} (an/n) Tn(y) + (π a1/2) T1(y) = (2/π) T0(y) + T1(y) − (4/π) Σ_{k=1}^{[N/2]} T_{2k}(y)/(4k² − 1);

here [N/2] denotes the integer part of a number. Equating the coefficients that multiply Tn(y) with the same indices n, we determine the unknowns

a0 = 4/(π² ln 2),   a1 = 2/(3π),   a_{2n+1} = 0,   a_{2n} = −8n/(π²(4n² − 1)),   n ≥ 1.

The approximate solution has the form

φ_N(x) = 2/(π² ln 2) + (2/(3π)) x − (8/π²) Σ_{n=1}^{[N/2]} n T_{2n}(x)/(4n² − 1).

The exact solution is represented by the Fourier–Chebyshev series

φ(x) = 2/(π² ln 2) + (2/(3π)) x − (8/π²) Σ_{n=1}^{∞} n T_{2n}(x)/(4n² − 1).

The partial sums of the latter series converge to the exact solution φ(x) as N → ∞ in the norm of the space L2^(1), and its sum φ ∈ L2^(1).

18 Solution to singular integral equations
18.1 Singular integral equations
Consider two types of the so-called singular integral equations:

S1 φ := ∫_{−1}^{1} φ(x)/(x − y) dx/√(1 − x²) = f(y),   −1 < y < 1;   (346)

(S1 + K1) φ := ∫_{−1}^{1} ( 1/(x − y) + K1(x, y) ) φ(x) dx/√(1 − x²) = f(y),   −1 < y < 1;   (347)

S2 φ := ∫_{−1}^{1} φ(x)/(x − y) √(1 − x²) dx = f(y),   −1 < y < 1;   (348)

(S2 + K2) φ := ∫_{−1}^{1} ( 1/(x − y) + K2(x, y) ) φ(x) √(1 − x²) dx = f(y),   −1 < y < 1.   (349)

Equations (346) and (348) constitute particular cases of the general integral equations (347) and (349) and can be solved explicitly. The singular integral operators S1 and S1 + K1 are considered as bounded linear operators in the weighted functional spaces introduced above, S1 : L2^(1) → L2^(2), S1 + K1 : L2^(1) → L2^(2). The singular integral operators S2 and S2 + K2 are considered as bounded linear operators S2 : L2^(2) → L2^(1), S2 + K2 : L2^(2) → L2^(1). If the kernel functions K1(x, y) and K2(x, y) are sufficiently smooth, K1 and K2 are compact operators in these spaces.

18.2 Solution via FourierChebyshev series


Look for an approximation φ_N(x) to the solution φ(x) of the singular integral equations in the form of partial sums of the Fourier–Chebyshev series. From formulas (335) and (336) it follows that one should solve (346) and (347) by expanding φ in the Chebyshev polynomials of the first kind and f in the Chebyshev polynomials of the second kind. In equations (348) and (349), φ should be decomposed in the Chebyshev polynomials of the second kind and f in the Chebyshev polynomials of the first kind.
Solve equation (346). Assume that f ∈ L2^(2) and look for a solution φ ∈ L2^(1):

φ_N(x) = (a0/2) T0(x) + Σ_{n=1}^{N} an Tn(x);   (350)

f_N(y) = Σ_{k=0}^{N−1} f_k U_k(y),   (351)

where the coefficients f_k are calculated by formulas (330). Substituting expressions (350) and (351) into equation (346) and using (335), we obtain

π Σ_{n=1}^{N} an U_{n−1}(y) = Σ_{k=0}^{N−1} f_k U_k(y).   (352)
The coefficient a0 does not enter (352) because (see [4])

∫_{−1}^{1} (1/(x − y)) dx/√(1 − x²) = 0,   −1 < y < 1.   (353)

Equating, in (352), the coefficients multiplying Un(y) with equal indices, we obtain an = f_{n−1}/π, n ≥ 1. The coefficient a0 remains undetermined. Thus the solution of equation (346) depends on an arbitrary constant C = a0/2. Write the corresponding formulas for the approximate solution,

φ_N(x) = C + (1/π) Σ_{n=1}^{N} f_{n−1} Tn(x),   (354)

and the exact solution,

φ(x) = C + (1/π) Σ_{n=1}^{∞} f_{n−1} Tn(x).   (355)

The latter series converges in the L2^(1)-norm.
In order to obtain a unique solution of equation (346) (to determine the constant C), one should specify an additional condition on the function φ(x), for example, in the form of the orthogonality condition

∫_{−1}^{1} φ(x) dx/√(1 − x²) = 0   (356)

(orthogonality of the solution to a constant in the space L2^(1)). Then, using the orthogonality (325) of the Chebyshev polynomial T0(x) ≡ 1 to all Tn(x), n ≥ 1, and calculating the inner product in (355) (multiplying both sides by 1/√(1 − x²) and integrating from −1 to 1), we obtain

C = (1/π) ∫_{−1}^{1} φ(x) dx/√(1 − x²) = 0.
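The same recipe, an = f_{n−1}/π together with the normalization (356), is easy to program. The following sketch (Python/NumPy; the right-hand side f(y) = sin 2y and the truncation order are arbitrary choices made for the example) computes the coefficients of f in the polynomials Uk by the Gauss–Chebyshev rule of the second kind and checks the result through the identity (335).

```python
import numpy as np

# Solve (346) for the sample right-hand side f(y) = sin 2y, using a_n = f_{n-1}/pi
# and the normalization (356) (so C = 0).  Illustration only.
m, N = 400, 12
j = np.arange(1, m + 1)
yj = np.cos(j * np.pi / (m + 1))                      # Gauss-Chebyshev nodes, 2nd kind
wj = (np.pi / (m + 1)) * np.sin(j * np.pi / (m + 1)) ** 2
f = lambda y: np.sin(2.0 * y)
U = lambda k, y: np.sin((k + 1) * np.arccos(y)) / np.sin(np.arccos(y))   # U_k(y)

# Coefficients f_k of f in the polynomials U_k, formula (330):
fk = np.array([(2.0 / np.pi) * np.sum(wj * f(yj) * U(k, yj)) for k in range(N)])
a = np.concatenate(([0.0], fk / np.pi))               # a_0 = 2C = 0, a_n = f_{n-1}/pi

# phi_N as in (354); consistency check via (335): S1 phi_N = pi * sum a_n U_{n-1} = f.
phi_N = lambda x: a[0] / 2 + sum(a[n] * np.cos(n * np.arccos(x)) for n in range(1, N + 1))
y = np.array([-0.6, 0.2, 0.8])
S1phi = np.pi * sum(a[n] * U(n - 1, y) for n in range(1, N + 1))
print(np.max(np.abs(S1phi - f(y))))                   # small truncation error
print(phi_N(0.3))                                     # a value of the solution
```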

Equation (347) can be solved in a similar manner; however, first it is convenient to decompose the kernel function K1(x, y) in the Fourier–Chebyshev series and take the partial sum

K1(x, y) ≈ K_N^(1)(x, y) = Σ_{i=0}^{N} Σ_{j=0}^{N−1} k_{ij}^(1) Ti(x) Uj(y),   (357)

and then proceed with the solution as described above.

If K1(x, y) is a polynomial in powers of x and y, representation (357) is exact and can be obtained by rearranging the explicit formulas (315)–(320).

Example 15 Solve the singular integral equation

∫_{−1}^{1} ( 1/(x − y) + x²y³ ) φ(x) dx/√(1 − x²) = y,   −1 < y < 1,

subject to condition (356). Look for an approximate solution in the form

φ_N(x) = (a0/2) T0(x) + Σ_{n=1}^{N} an Tn(x).

The right-hand side is

y = (1/2) U1(y).

Decompose K1(x, y) in the Fourier–Chebyshev series using (320):

K1(x, y) = ( (1/2) T0(x) + (1/2) T2(x) ) ( (1/4) U1(y) + (1/8) U3(y) ).

Substituting the latter into the equation and using (335) and the orthogonality conditions (325), we obtain

π Σ_{n=1}^{N} an U_{n−1}(y) + ( (1/4) U1(y) + (1/8) U3(y) ) π (a0 + a2)/4 = (1/2) U1(y).

Equating the coefficients multiplying Un(y) with equal indices, we obtain

a1 = 0,   a2 + (a0 + a2)/16 = 1/(2π),   a3 = 0,
a4 + (a0 + a2)/32 = 0;   an = 0,  n ≥ 5.

Finally, after some algebra, we determine the unknown coefficients

a0 = C,   a2 = (1/17)(8/π − C),   a4 = −(1/68)(2C + 1/π),   an = 0,  n ≠ 0, 2, 4,

where C is an arbitrary constant. In order to determine the unique solution that satisfies (356), calculate

C = 0,   a0 = 0,   a2 = 8/(17π),   a4 = −1/(68π).

The final form of the solution is

φ(x) = (8/(17π)) T2(x) − (1/(68π)) T4(x),

or

φ(x) = −(2/(17π)) x⁴ + (18/(17π)) x² − 33/(68π).

Note that the approximate solution φ_N(x) = φ(x) for N ≥ 4. We see that here one can obtain the solution explicitly, as a polynomial.

Consider equations (348) and (349). Begin with (348) and look for its solution in the form

φ_N(x) = Σ_{n=0}^{N−1} bn Un(x).   (358)

Expand the right-hand side in the Chebyshev polynomials of the first kind,

f_N(y) = (f0/2) T0(y) + Σ_{k=1}^{N} f_k T_k(y),   (359)

where the coefficients f_k are calculated by (329). Substitute (358) and (359) into (348) and use (336) to obtain

−π Σ_{n=0}^{N−1} bn T_{n+1}(y) = (f0/2) T0(y) + Σ_{k=1}^{N} f_k T_k(y),   (360)

so that the unknown coefficients bn are determined from (360):

bn = −(1/π) f_{n+1},   n ≥ 0.

Equality (360) can hold only if f0 = 0. Consequently, we cannot state that a solution of (348) exists for an arbitrary function f(y). The (unique) solution exists only if f(y) satisfies the orthogonality condition

∫_{−1}^{1} f(y) dy/√(1 − y²) = 0,   (361)

which is equivalent to the requirement f0 = 0 (according to (329) taken at n = 0). Note that, when solving equation (348), we impose condition (361) on the known function f(y), unlike condition (356) for equation (346), which was imposed on the unknown solution. As a result, we obtain
a unique approximate solution of (348) for every N ≥ 2,

φ_N(x) = −(1/π) Σ_{n=0}^{N−1} f_{n+1} Un(x),   (362)

and
a unique exact solution of (348),

φ(x) = −(1/π) Σ_{n=0}^{∞} f_{n+1} Un(x).   (363)

We solve equations (348) and (349) under the assumption that the right-hand side f ∈ L2^(1) and φ ∈ L2^(2). Series (363) converges in the norm of the space L2^(2).
The solution of equation (349) is similar to that of (348). The difference is that K2(x, y) is approximately replaced by a partial sum of the Fourier–Chebyshev series

K2(x, y) ≈ K_N^(2)(x, y) = Σ_{i=0}^{N−1} Σ_{j=0}^{N} k_{ij}^(2) Ui(x) Tj(y).   (364)

Substituting (358), (359), and (364) into equation (349), we obtain a system of linear equations with respect to the unknowns bn. As in the case of equation (348), the solution of (349) exists not for all right-hand sides f(y).
The method of decomposition in Fourier–Chebyshev series allows one not only to solve equations (348) and (349) but also to verify the existence of a solution. Consider an example of an equation of the form (349) that has no solution (is not solvable).

Example 16 Solve the singular integral equation

∫_{−1}^{1} ( 1/(x − y) + √(1 − y²) ) φ(x) √(1 − x²) dx = 1,   −1 < y < 1;   φ ∈ L2^(2).

Look for an approximate solution, as in the case of (348), in the form of a series

φ_N(x) = Σ_{n=0}^{N−1} bn Un(x).

Replace K2(x, y) = √(1 − y²) by the approximate decomposition

K_N^(2)(x, y) = (2/π) T0(y) − (4/π) Σ_{k=1}^{[N/2]} T_{2k}(y)/(4k² − 1).

The right-hand side of this equation is 1 = T0(y). Substituting these expressions into the equation, we obtain

−π Σ_{n=0}^{N−1} bn T_{n+1}(y) + ( T0(y) − 2 Σ_{k=1}^{[N/2]} T_{2k}(y)/(4k² − 1) ) b0 = T0(y).

Equating the coefficients that multiply T0(y) on both sides of the equation, we obtain b0 = 1; equating the coefficients that multiply T1(y), we have −π b0 = 0, which yields b0 = 0. This contradiction proves that the considered equation has no solution. Note that this circumstance is not connected with the fact that we determine an approximate solution φ_N(x) rather than the exact solution φ(x): by considering infinite series (setting N = ∞) in place of finite sums we obtain the same result.

Let us derive the conditions on the function f(y) that provide the solvability of the equation

∫_{−1}^{1} ( 1/(x − y) + √(1 − y²) ) φ(x) √(1 − x²) dx = f(y),   −1 < y < 1,

with a given right-hand side f(y). To this end, replace the decomposition of the right-hand side, which above was equal to one, by the expansion

f_N(y) = (f0/2) T0(y) + Σ_{n=1}^{N} fn Tn(y).

Equating the coefficients that multiply equal-order polynomials, we obtain

b0 = f0/2,   b0 = −(1/π) f1,

b_{2k−1} = −(1/π) ( f_{2k} + 2 b0/(4k² − 1) ),

b_{2k} = −(1/π) f_{2k+1};   k ≥ 1.

We see that the equation has a unique solution if and only if

f0/2 = −(1/π) f1.

Rewrite this condition using formula (329):

(1/π) ∫_{−1}^{1} f(y) dy/√(1 − y²) = −(2/π²) ∫_{−1}^{1} y f(y) dy/√(1 − y²).

When it holds, all the coefficients bn are uniquely determined, which provides the unique solvability of the equation.

19 Matrix representation of an operator in the Hilbert


space and summation operators
19.1 Matrix representation of an operator in the Hilbert space
Let {φn} be an orthonormal basis in the Hilbert space H; then each element f ∈ H can be represented in the form of the generalized Fourier series f = Σ_{n=1}^{∞} an φn, where the sequence of complex numbers {an} is such that Σ_{n=1}^{∞} |an|² < ∞ (the series for f then converges in H). There exists a (linear) one-to-one correspondence between such sequences {an} and the elements of the Hilbert space.
We will use this isomorphism to construct the matrix representation of a completely continuous operator A and of a completely continuous operator-valued function A(γ), holomorphic with respect to the complex variable γ, acting in H.
Assume that g = Af, f ∈ H, and

gk = gk(γ) = (A(γ)f, φk),

i.e., an infinite sequence (vector) of the Fourier coefficients {gn} = (g1, g2, . . .) corresponds to an element g ∈ H, and we can write g = (g1, g2, . . .). Let f = Σ_{k=1}^{∞} fk φk; then f = (f1, f2, . . .). Set f^N = Pf = Σ_{k=1}^{N} fk φk and Aφn = Σ_{k=1}^{∞} a_{kn} φk, n = 1, 2, . . . . Then

lim_{N→∞} A f^N = lim_{N→∞} Σ_{n=1}^{N} fn Σ_{k=1}^{∞} a_{kn} φk = Af = Σ_{k=1}^{∞} ( Σ_{n=1}^{∞} a_{kn} fn ) φk.

On the other hand, Af = Σ_{k=1}^{∞} gk φk; hence gk = Σ_{n=1}^{∞} a_{kn} fn = (Af, φk) = (g, φk). Now we set g^k = Aφk and

(g^k, φj) = (Aφk, φj) = a_{jk},

where a_{jk} are the Fourier coefficients in the expansion of the element g^k over the basis {φj}. Consequently,

Aφk = g^k = Σ_{j=1}^{∞} a_{jk} φj,

and the series

Σ_{j=1}^{∞} |a_{jk}|² = Σ_{j=1}^{∞} |(g^k, φj)|² < ∞;   k = 1, 2, . . . .

The infinite matrix {a_{jk}}_{j,k=1}^{∞} defines the operator A; that is, one can reconstruct A from this matrix and an orthonormal basis {φk} of the Hilbert space H, i.e., find the unique element Af for each f ∈ H.
In order to prove the latter assertion, assume that f = Σ_{k=1}^{∞} ξk φk and f^N = Pf. Then

A f^N = Σ_{k=1}^{∞} η_k^N φk,   where   η_k^N = Σ_{j=1}^{N} a_{kj} ξj.

Applying the continuity of the operator A, we have

Ck = (Af, φk) = lim_{N→∞} (A f^N, φk) = lim_{N→∞} η_k^N = Σ_{j=1}^{∞} a_{kj} ξj.

Hence, for each f ∈ H,

Af = Σ_{k=1}^{∞} Ck φk,   Ck = Σ_{j=1}^{∞} a_{kj} ξj,

and the latter equalities define the matrix representation of the operator A in the basis {φk} of the Hilbert space H.

19.2 Summation operators in the spaces of sequences


In this section we employ the approach based on the use of Chebyshev polynomials to construct families of summation operators associated with logarithmic integral equations.
Let us introduce the linear space h^p formed by the sequences of complex numbers {an} such that

Σ_{n=1}^{∞} |an|² n^p < ∞,   p ≥ 0,

with the inner product and the norm

(a, b)_p = Σ_{n=1}^{∞} an b̄n n^p,   ||a||_p = ( Σ_{n=1}^{∞} |an|² n^p )^{1/2}.

One can easily verify the validity of the following statement.

Theorem 43 h^p, p ≥ 0, is a Hilbert space.
Let us introduce the summation operator

L = ( 1    0    . . .  0    . . . )
    ( 0    1/2  . . .  0    . . . )
    ( . . . . . . . . . . . . . . )
    ( 0    . . .  1/n  0    . . . )
    ( . . . . . . . . . . . . . . )

or, in other notation,

L = {l_{nj}}_{n,j=1}^{∞},   l_{nj} = 0, n ≠ j,   l_{nn} = 1/n,   n = 1, 2, . . . ,

and

L^{−1} = ( 1    0    . . .  0    . . . )
         ( 0    2    . . .  0    . . . )
         ( . . . . . . . . . . . . . . )
         ( 0    . . .  n    0    . . . )
         ( . . . . . . . . . . . . . . )

or

L^{−1} = {l′_{nj}}_{n,j=1}^{∞},   l′_{nj} = 0, n ≠ j,   l′_{nn} = n,   n = 1, 2, . . . ,

which play an important role in the theory of integral operators and operator-valued functions with a logarithmic singularity of the kernel.

Theorem 44 L is a linear bounded invertible operator acting from the space h^p to the space h^{p+2}: L : h^p → h^{p+2} (Im L = h^{p+2}, Ker L = {0}). The inverse operator L^{−1} : h^{p+2} → h^p is bounded, p ≥ 0.

Proof. Let us show that L acts from h^p to h^{p+2} and L^{−1} acts from h^{p+2} to h^p. Assume that a ∈ h^p. Then, for La we obtain

Σ_{n=1}^{∞} |an/n|² n^{p+2} = Σ_{n=1}^{∞} |an|² n^p < ∞.

If b ∈ h^{p+2}, we obtain a similar relationship for L^{−1}b:

Σ_{n=1}^{∞} |bn n|² n^p = Σ_{n=1}^{∞} |bn|² n^{p+2} < ∞.

The norm ||La||_{p+2} = ||a||_p. The element a = L^{−1}b, a ∈ h^p, is the unique solution of the equation La = b, and Ker L = {0}. According to the Banach theorem, L^{−1} is a bounded operator. □
Note that ||L|| = ||L^{−1}|| = 1 and

L^{−1} L = L L^{−1} = E,

where E is the infinite unit matrix.
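A finite section of these operators illustrates Theorem 44: the norm identity ||La||_{p+2} = ||a||_p holds exactly, and L^{−1}L acts as the identity. The sketch below (Python/NumPy; the truncation length, the exponent p, and the sample sequence an = 1/n² are arbitrary choices made for the illustration) demonstrates this.

```python
import numpy as np

# Finite section of the summation operators L = diag(1/n) and L^{-1} = diag(n),
# illustrating ||L a||_{p+2} = ||a||_p on h^p (truncated to M terms; sketch only).
M, p = 2000, 1.0
n = np.arange(1, M + 1)
a = 1.0 / n ** 2                    # a sample sequence with finite h^p-norm

hp_norm = lambda c, q: np.sqrt(np.sum(np.abs(c) ** 2 * n ** q))

La = a / n                          # action of L
print(hp_norm(La, p + 2), hp_norm(a, p))       # the two norms coincide
print(np.max(np.abs(n * La - a)))              # L^{-1} L a = a
```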


Assume that the summation operator A acting in h^p is given by its matrix {a_{nj}}_{n,j=1}^{∞} and the following condition holds:

Σ_{n=1}^{∞} Σ_{j=1}^{∞} |a_{nj}|² n^{p+2}/j^p ≤ C² < ∞,   C = const,  C > 0.

Then we can prove the following statement.

Theorem 45 The operator A : h^p → h^{p+2} is completely continuous.
□ Let $\varphi \in h^p$ and $A\varphi = \psi$. Then
\[
\sum_{n=1}^{\infty} n^{p+2}|\psi_n|^2
= \sum_{n=1}^{\infty} n^{p+2}\Bigl|\sum_{j=1}^{\infty} a_{nj}\varphi_j\Bigr|^2
\le \|\varphi\|_p^2 \sum_{n=1}^{\infty} n^{p+2} \sum_{j=1}^{\infty}\frac{|a_{nj}|^2}{j^p}
\le C^2\|\varphi\|_p^2 ,
\]
hence $\psi \in h^{p+2}$. Now we will prove that $A$ is completely continuous. In fact, the operator $A$ can be represented as the limit, with respect to the operator norm, of a sequence of finite-dimensional operators $A_N$ produced by truncation of the matrix of $A$ and defined by the matrices $A_N = \{a_{nj}\}_{n,j=1}^{N}$:
\[
\|(A - A_N)\varphi\|_{p+2}^2
= \sum_{n=1}^{N-1} n^{p+2}\Bigl|\sum_{j=N}^{\infty} a_{nj}\varphi_j\Bigr|^2
+ \sum_{n=N}^{\infty} n^{p+2}\Bigl|\sum_{j=1}^{\infty} a_{nj}\varphi_j\Bigr|^2
\]
\[
\le \|\varphi\|_p^2\sum_{n=1}^{N-1} n^{p+2}\sum_{j=N}^{\infty}\frac{|a_{nj}|^2}{j^p}
+ \|\varphi\|_p^2\sum_{n=N}^{\infty} n^{p+2}\sum_{j=1}^{\infty}\frac{|a_{nj}|^2}{j^p}
\le \|\varphi\|_p^2\Bigl[\sum_{n=N}^{\infty}\sum_{j=1}^{\infty}\frac{|a_{nj}|^2}{j^p}\,n^{p+2}
+ \sum_{j=N}^{\infty}\sum_{n=1}^{\infty}\frac{|a_{nj}|^2}{j^p}\,n^{p+2}\Bigr],
\]
and, consequently, the following estimate is valid with respect to the operator norm:
\[
\|A - A_N\|^2 \le \sum_{n=N}^{\infty}\sum_{j=1}^{\infty} n^{p+2}\frac{|a_{nj}|^2}{j^p}
+ \sum_{j=N}^{\infty}\sum_{n=1}^{\infty} n^{p+2}\frac{|a_{nj}|^2}{j^p} \to 0, \qquad N \to \infty . \;\Box
\]

Corollary 1 The sequence of operators $A_N$ converges to the operator $A$ in the operator norm: $\|A - A_N\| \to 0$, $N \to \infty$, and $\|A\| < C$, $C = \mathrm{const}$. In order to provide the complete continuity of $A$, it is sufficient to require the fulfilment of the estimate
\[
\sum_{n=1}^{\infty}\sum_{j=1}^{\infty} n^{p+2}|a_{nj}|^2 < C, \qquad C = \mathrm{const}. \tag{365}
\]

Definition 9 Assume that condition (365) holds for the matrix elements of the summation operator $A$. Then the summation operator $F = L + A : h^p \to h^{p+2}$ is called an L-operator.

Theorem 46 The L-operator $F = L + A : h^p \to h^{p+2}$ is a Fredholm operator; $\mathrm{ind}\,(L + A) = 0$.
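The following sketch (an illustration with invented data, not part of the text) takes the sample matrix $a_{nj} = n^{-3}j^{-2}$, for which condition (365) clearly holds with $p = 0$, and evaluates the tail bound on $\|A - A_N\|$ from the proof of Theorem 45, showing that it decreases as $N$ grows.

```python
import numpy as np

# Illustrative matrix a_{nj} = n^{-3} j^{-2}; for p = 0 condition (365) holds because
# sum_{n,j} n^{p+2} |a_{nj}|^2 = (sum_n n^{-4}) * (sum_j j^{-4}) is finite.
p = 0
M = 2000                                      # working truncation standing in for infinity
n = np.arange(1, M + 1, dtype=float)
j = np.arange(1, M + 1, dtype=float)
A2 = np.outer(n**-3, j**-2)**2                # |a_{nj}|^2
W  = np.outer(n**(p + 2), j**(-p))            # weights n^{p+2} / j^p
AW = A2 * W

for N in (5, 20, 80, 320):
    tail = AW[N-1:, :].sum() + AW[:, N-1:].sum()   # rows n >= N plus columns j >= N
    print(N, np.sqrt(tail))                        # bound on ||A - A_N||, decreasing in N
```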

20 Matrix Representation of Logarithmic Integral Operators
Consider a class of functions that admit in the interval $(-1, 1)$ a representation in the form of the Fourier–Chebyshev series
\[
\varphi(x) = \frac{1}{2}\varphi_0 T_0(x) + \sum_{n=1}^{\infty}\varphi_n T_n(x), \qquad -1 \le x \le 1, \tag{366}
\]
where $T_n(x) = \cos(n\arccos x)$ are the Chebyshev polynomials and $\varphi = (\varphi_0, \varphi_1, \dots, \varphi_n, \dots) \in h^2$; that is,
\[
\|\varphi\|_2^2 = \frac{1}{2}|\varphi_0|^2 + \sum_{n=1}^{\infty} |\varphi_n|^2 n^2 < \infty .
\]

In this section we add the coordinate $\varphi_0$ to the vector $\varphi$ and modify the corresponding formulas. It is easy to see that all results of the previous sections remain valid in this case. Recall the definitions of the weighted Hilbert spaces associated with logarithmic integral operators:
\[
L_2^{(1)} = \Bigl\{ f(x) : \|f\|_1^2 = \int_{-1}^{1} |f(x)|^2 \frac{dx}{\sqrt{1-x^2}} < \infty \Bigr\},
\]
\[
L_2^{(2)} = \Bigl\{ f(x) : \|f\|_2^2 = \int_{-1}^{1} |f(x)|^2 \sqrt{1-x^2}\,dx < \infty \Bigr\},
\]
\[
\widetilde W_2^1 = \bigl\{ f(x) : f \in L_2^{(1)},\ f' \in L_2^{(2)} \bigr\},
\]
and
\[
\widehat W_2^1 = \bigl\{ f(x) : f \in L_2^{(2)},\ \tilde f(x) = f(x)\sqrt{1-x^2},\ \tilde f' \in L_2^{(2)},\ \tilde f(-1) = \tilde f(1) = 0 \bigr\}.
\]
Below, we will denote the weights by $p_1(x) = \sqrt{1-x^2}$ and $p_1^{-1}(x)$.
Note that the Chebyshev polynomials specify orthogonal bases in the spaces $L_2^{(1)}$ and $L_2^{(2)}$. In order to obtain the matrix representation of the integral operators with a logarithmic singularity of the kernel, we will prove a similar statement for the weighted space $\widetilde W_2^1$.

Lemma 6 The spaces $\widetilde W_2^1$ and $h^2$ are isomorphic.

□ Assume that $\varphi \in h^2$. We will show that $\varphi \in \widetilde W_2^1$. The function $\varphi(x)$ is continuous, and (366) is its Fourier–Chebyshev series with the coefficients
\[
\varphi_n = \frac{2}{\pi}\int_{-1}^{1} T_n(x)\varphi(x)p_1^{-1}(x)\,dx .
\]
The series $\sum_{n=1}^{\infty} |\varphi_n|^2 n^2$ converges and, due to the Riesz–Fischer theorem, there exists an element $f \in L_2^{(1)}$ such that its Fourier coefficients in the Chebyshev polynomial series are equal to $n\varphi_n$; that is, setting $U_n(x) = \sin(n\arccos x)$, one obtains
\[
n\varphi_n = \frac{2}{\pi}\int_{-1}^{1} nT_n(x)\varphi(x)p_1^{-1}(x)\,dx
= \frac{2}{\pi}\int_{-1}^{1} U_n'(x)\varphi(x)\,dx
= \frac{2}{\pi}\int_{-1}^{1} U_n'(x)\bigl(p_1(x)\varphi(x)\bigr)\frac{dx}{p_1(x)}
= \frac{2}{\pi}\int_{-1}^{1} U_n(x)f(x)\frac{dx}{p_1(x)}, \qquad n \ge 1 .
\]
Since $f(x)$ is an integrable function, the integral $\int_{-1}^{1} |f(x)|p_1^{-1}(x)\,dx$ converges, and
\[
F(x) = \int_{-1}^{x} \frac{f(t)\,dt}{p_1(t)}
\]
is absolutely continuous (as a function given by an integral depending on the upper limit of integration), $F'(x) = f(x)p_1^{-1}(x)$, and
\[
\frac{2}{\pi}\int_{-1}^{1} U_n(x)f(x)\frac{dx}{p_1(x)}
= \frac{2}{\pi}\int_{-1}^{1} U_n(x)F'(x)\,dx
= \frac{2}{\pi}\int_{-1}^{1} U_n'(x)F(x)\,dx .
\]
The latter equalities yield
\[
\int_{-1}^{1} U_n'(x)\bigl[\varphi(x) - F(x)\bigr]dx
= n\int_{-1}^{1} T_n(x)\bigl[\varphi(x) - F(x)\bigr]\frac{dx}{p_1(x)} = 0, \qquad n \ge 1 .
\]
Hence, $\varphi(x) - F(x) = C = \mathrm{const}$. The explicit form of $F'(x)$ yields $\varphi'(x) = f(x)p_1^{-1}(x)$; therefore, $\varphi'(x)p_1(x) \in L_2^{(1)}$ and $\varphi'(x) \in L_2^{(2)}$.

Now, let us prove the converse statement. Let $\varphi \in \widetilde W_2^1$ be represented by the Fourier–Chebyshev series (366). Form the Fourier series
\[
p_1(x)\varphi'(x) = \sum_{n=1}^{\infty} \beta_n U_n(x), \qquad |x| \le 1,
\]
where
\[
\beta_n = \frac{2}{\pi}\int_{-1}^{1} U_n(x)\bigl(p_1(x)\varphi'(x)\bigr)\frac{dx}{p_1(x)}
= \frac{2}{\pi}\int_{-1}^{1} U_n(x)\varphi'(x)\,dx
= \frac{2n}{\pi}\int_{-1}^{1} T_n(x)\frac{\varphi(x)\,dx}{p_1(x)} .
\]
Now, in order to prove that $\varphi \in h^2$, it is sufficient to apply the Bessel inequality to $\beta_n = n\varphi_n$, where
\[
\varphi_n = \frac{2}{\pi}\int_{-1}^{1} \varphi(x)T_n(x)\frac{dx}{p_1(x)} . \;\Box
\]
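The computation above shows that $p_1\varphi'$ has the expansion coefficients $\beta_n = n\varphi_n$ in the system $\{U_n\}$; combined with the orthogonality relation $\int_{-1}^{1}U_n(x)U_m(x)p_1^{-1}(x)\,dx = \frac{\pi}{2}\delta_{nm}$, this gives the identity $\int_{-1}^{1}|\varphi'(x)|^2\sqrt{1-x^2}\,dx = \frac{\pi}{2}\sum_{n\ge 1}n^2|\varphi_n|^2$, reflecting the isomorphism of Lemma 6. The sketch below (with an arbitrarily chosen smooth test function and discretization parameters; not from the text) checks this identity numerically.

```python
import numpy as np

# Numerical check of the identity behind Lemma 6 (illustrative test function):
#   int_{-1}^{1} |phi'(x)|^2 sqrt(1-x^2) dx = (pi/2) * sum_{n>=1} n^2 |phi_n|^2
Q = 4000
k = np.arange(1, Q + 1)
x = np.cos((2*k - 1)*np.pi/(2*Q))               # Gauss-Chebyshev nodes (weight 1/sqrt(1-x^2))

phi, dphi = np.exp(x), np.exp(x)                # test function phi(x) = exp(x) and phi'

N = 40
n = np.arange(1, N + 1)
Tn = np.cos(np.outer(n, np.arccos(x)))          # T_n(x) at the nodes
phi_n = (2.0/Q) * Tn @ phi                      # phi_n = (2/pi) int phi T_n / sqrt(1-x^2) dx

lhs = (np.pi/Q) * np.sum(dphi**2 * (1 - x**2))  # int |phi'|^2 sqrt(1-x^2) dx
rhs = (np.pi/2) * np.sum(n**2 * phi_n**2)
print(lhs, rhs)                                 # the two numbers agree to high accuracy
```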

20.1 Solution to integral equations with a logarithmic singularity of the kernel

Consider the integral operator with a logarithmic singularity of the kernel,
\[
L\varphi = \frac{1}{\pi}\int_{-1}^{1} \ln\frac{1}{|x - s|}\,\varphi(s)\frac{ds}{p_1(s)}, \qquad \varphi \in \widetilde W_2^1,
\]

and the integral equation
\[
L\varphi + N\varphi = f, \qquad f \in \widetilde W_2^1, \tag{367}
\]
where
\[
N\varphi(x) = N\varphi = \frac{1}{\pi}\int_{-1}^{1} N(q)\varphi(s)\frac{ds}{p_1(s)}, \qquad q = (x, s),
\]
and $N(q)$ is an arbitrary complex-valued function satisfying the continuity conditions
\[
N(q) \in C^1(\Pi_1); \qquad \frac{\partial^2 N}{\partial x^2}(q) \in L_2\bigl[\Pi_1,\ p_1^{-1}(x)p_1^{-1}(s)\bigr],
\qquad \Pi_1 = [-1, 1]\times[-1, 1] .
\]
Write the Fourier–Chebyshev expansions
\[
\varphi(s) = \frac{1}{2}\varphi_0 T_0(s) + \sum_{n=1}^{\infty}\varphi_n T_n(s), \qquad \varphi = (\varphi_0, \varphi_1, \dots, \varphi_n, \dots) \in h^2, \tag{368}
\]
and
\[
f(x) = \frac{f_0}{2}T_0(x) + \sum_{n=1}^{\infty} f_n T_n(x),
\]
where the coefficients
\[
f_n = \frac{2}{\pi}\int_{-1}^{1} f(x)T_n(x)\frac{dx}{\sqrt{1-x^2}}, \qquad n = 0, 1, \dots .
\]

The function $N\varphi(x)$ is continuous and may thus also be expanded in the Fourier–Chebyshev series
\[
N\varphi(x) = \frac{1}{2}b_0T_0(x) + \sum_{n=1}^{\infty} b_nT_n(x), \tag{369}
\]
where
\[
b_n = \frac{2}{\pi^2}\iint_{\Pi_1} N(q)T_n(x)\varphi(s)\frac{dx\,ds}{p_1(x)p_1(s)} . \tag{370}
\]
According to Lemma 6 and formulas (368) and (369), equation (367) is equivalent in $\widetilde W_2^1$ to the equation
\[
\ln 2\,\frac{\varphi_0}{2}\,T_0(x) + \sum_{n=1}^{\infty} n^{-1}\varphi_nT_n(x)
+ \frac{b_0}{2}T_0(x) + \sum_{n=1}^{\infty} b_nT_n(x)
= \frac{f_0}{2}T_0(x) + \sum_{n=1}^{\infty} f_nT_n(x), \qquad |x| \le 1 . \tag{371}
\]
Note that since $N\varphi(x)$ is a differentiable function of $x$ on $[-1, 1]$, the series in (371) converge uniformly, and the equality in (371) is an identity on $(-1, 1)$.
Now, we substitute series (368) for $\varphi(s)$ into (370). Taking into account that $\{T_n(x)\}$ is a basis and that one can interchange integration and summation and pass to double integrals (these statements are verified directly), we obtain the infinite system of linear algebraic equations
\[
\begin{cases}
\ln 2\,\varphi_0 + b_0 = f_0, \\
n^{-1}\varphi_n + b_n = f_n, & n \ge 1 .
\end{cases} \tag{372}
\]
In terms of the summation operators, this system can be represented as
\[
L\varphi + A\varphi = f; \qquad \varphi = (\varphi_0, \varphi_1, \dots, \varphi_n, \dots) \in h^2; \qquad f = (f_0, f_1, \dots, f_n, \dots) \in h^4 .
\]

Here, the operators $L$ and $A$ are defined by
\[
L = \{l_{nj}\}_{n,j=0}^{\infty}, \qquad l_{nj} = 0,\ n \ne j; \qquad l_{00} = \ln 2; \qquad l_{nn} = \frac{1}{n}, \quad n = 1, 2, \dots;
\]
\[
A = \{a_{nj}\}_{n,j=0}^{\infty}, \qquad
a_{nj} = \frac{\gamma_{nj}}{\pi^2}\iint_{\Pi_1} N(q)T_n(x)T_j(s)\frac{dx\,ds}{p_1(x)p_1(s)}, \tag{373}
\]

where
\[
\gamma_{nj} =
\begin{cases}
1, & n = j = 0, \\
2, & n \ge 1,\ j \ge 1, \\
\sqrt{2}, & \text{for other } n, j .
\end{cases}
\]
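To make the reduction to (372) concrete, here is a sketch that assembles and solves a truncated version of the system $L\varphi + A\varphi = f$ by Gauss–Chebyshev quadrature. Everything in it is illustrative: the kernel $N(q) = e^{xs}$, the manufactured solution, and the truncation order are invented, and the normalization of $L$ and $N$ follows the conventions assumed above ($l_{00} = \ln 2$, $l_{nn} = 1/n$).

```python
import numpy as np

# Truncated version of system (372): illustrative data only.
M, Q = 24, 600                                   # Chebyshev modes, quadrature nodes
xq = np.cos((2*np.arange(1, Q + 1) - 1)*np.pi/(2*Q))   # Gauss-Chebyshev nodes
n = np.arange(M)
T = np.cos(np.outer(n, np.arccos(xq)))           # T_n(x_q), shape (M, Q)

def cheb_coeffs(g):                              # c_n = (2/pi) int g T_n / sqrt(1-x^2) dx
    return (2.0/Q) * T @ g

def eval_series(c):                              # g(x_q) = c_0/2 + sum_{n>=1} c_n T_n(x_q)
    return c[0]/2 + c[1:] @ T[1:]

Nq = np.exp(np.outer(xq, xq))                    # kernel N(x, s) = exp(x*s) on the grid
apply_N = lambda g: (1.0/Q) * Nq @ g             # (1/pi) int N(x, s) g(s) ds / sqrt(1-s^2)

# Matrix A in Chebyshev coordinates: column j = coefficients of N applied to the j-th mode
A = np.zeros((M, M))
for j in range(M):
    c = np.zeros(M); c[j] = 1.0
    A[:, j] = cheb_coeffs(apply_N(eval_series(c)))

Ldiag = np.diag([np.log(2.0)] + [1.0/m for m in range(1, M)])

# Consistency check: A acting on the coefficients of exp(s) reproduces the coefficients
# of N applied to exp computed directly on the grid (difference = truncation error).
phi = cheb_coeffs(np.exp(xq))
print(np.max(np.abs(A @ phi - cheb_coeffs(apply_N(np.exp(xq))))))

# Manufactured problem: f := (L + A) phi, then solve the truncated system for phi.
f = (Ldiag + A) @ phi
sol = np.linalg.solve(Ldiag + A, f)
print(np.max(np.abs(sol - phi)))                 # recovers the coefficients of exp(s)
```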
Lemma 7 Let $N(q)$ satisfy the continuity conditions formulated in (367). Then
\[
\sum_{n=1,\,j=0}^{\infty} |a_{nj}|^2 n^4 + \frac{1}{\ln^2 2}\sum_{j=0}^{\infty} |a_{0j}|^2 \le C_0, \tag{374}
\]
where
\[
C_0 = \frac{1}{\ln^2 2}\int_{-1}^{1}\Bigl|\frac{1}{\pi}\int_{-1}^{1}\frac{N(q)\,dx}{p_1(x)}\Bigr|^2\frac{ds}{p_1(s)}
+ \iint_{\Pi_1} p_1^2(x)\,\bigl|\bigl(p_1(x)N_x'\bigr)_x'\bigr|^2\frac{dx\,ds}{p_1(x)p_1(s)} . \tag{375}
\]

□ Expand a function of two variables belonging to the weighted space $L_2[\Pi_1, p_1^{-1}(x)p_1^{-1}(s)]$ in the double Fourier–Chebyshev series using the Fourier–Chebyshev system $\{T_n(x)T_m(s)\}$:
\[
p_1(x)\bigl(p_1(x)N_x'(q)\bigr)_x' = \sum_{n=0}^{\infty}\sum_{m=0}^{\infty} C_{nm}T_n(x)T_m(s),
\]
where
\[
C_{nm} = \frac{2\gamma_{nm}}{\pi^2}\int_{-1}^{1}\frac{T_m(s)\,ds}{p_1(s)}\int_{-1}^{1} T_n(x)\bigl(p_1(x)N_x'(q)\bigr)_x'\,dx
= \frac{2\gamma_{nm}}{\pi^2}\int_{-1}^{1}\frac{T_m(s)\,ds}{p_1(s)}\int_{-1}^{1} T_n'(x)p_1(x)N_x'(q)\,dx
\]
\[
= \frac{2n\gamma_{nm}}{\pi^2}\int_{-1}^{1}\frac{T_m(s)\,ds}{p_1(s)}\int_{-1}^{1} U_n(x)N_x'(q)\,dx
= \frac{2n^2\gamma_{nm}}{\pi^2}\iint_{\Pi_1}\frac{N(q)T_n(x)T_m(s)}{p_1(x)p_1(s)}\,dx\,ds
= n^2\gamma_{nm}a_{nm} .
\]
The Bessel inequality yields
\[
\sum_{n=1}^{\infty}\sum_{m=0}^{\infty} n^4|a_{nm}|^2 \le \iint_{\Pi_1} p_1^2(x)\bigl|\bigl[p_1(x)N_x'(q)\bigr]_x'\bigr|^2\frac{dx\,ds}{p_1(x)p_1(s)} .
\]
Let us expand the function
\[
g(s) = \frac{1}{\pi\ln 2}\int_{-1}^{1}\frac{N(q)\,dx}{p_1(x)}
\]
in the Fourier–Chebyshev series. Again, due to the Bessel inequality, we obtain
\[
\frac{1}{\ln^2 2}\sum_{m=0}^{\infty}|a_{0m}|^2 \le \frac{1}{\ln^2 2}\int_{-1}^{1}\Bigl|\frac{1}{\pi}\int_{-1}^{1}\frac{N(q)\,dx}{p_1(x)}\Bigr|^2\frac{ds}{p_1(s)} . \;\Box
\]

21 Galerkin methods and basis of Chebyshev polynomials

We describe the approximate solution of linear operator equations by considering their projections onto finite-dimensional subspaces, which is convenient for practical calculations. Below, it is assumed that all operators are linear and bounded. The general results concerning the substantiation and convergence of the Galerkin methods will then be applied to solving logarithmic integral equations.

Definition 10 Let $X$ and $Y$ be Banach spaces and let $A : X \to Y$ be an injective operator. Let $X_n \subset X$ and $Y_n \subset Y$ be two sequences of subspaces with $\dim X_n = \dim Y_n = n$, and let $P_n : Y \to Y_n$ be projection operators. The projection method generated by $X_n$ and $P_n$ approximates the equation
\[
A\varphi = f
\]
by its projection
\[
P_nA\varphi_n = P_nf .
\]
This projection method is called convergent for the operator $A$ if there exists an index $N$ such that for each $f \in \mathrm{Im}\,A$ the approximating equation $P_nA\varphi = P_nf$ has a unique solution $\varphi_n \in X_n$ for all $n \ge N$ and these solutions converge, $\varphi_n \to \varphi$, $n \to \infty$, to the unique solution $\varphi$ of $A\varphi = f$.

In terms of operators, the convergence of the projection method means that for all $n \ge N$ the finite-dimensional operators $A_n := P_nA : X_n \to Y_n$ are invertible and the pointwise convergence $A_n^{-1}P_nA\varphi \to \varphi$, $n \to \infty$, holds for all $\varphi \in X$. In the general case, one may expect that the convergence occurs if the subspaces $X_n$ form a dense set:
\[
\inf_{\psi \in X_n}\|\psi - \varphi\| \to 0, \qquad n \to \infty, \tag{376}
\]
for all $\varphi \in X$. This property is called the approximating property (any element $\varphi \in X$ can be approximated by elements of $X_n$ with an arbitrary accuracy in the norm of $X$). Therefore, in the subsequent analysis, we will always assume that this condition is fulfilled. Since $A_n = P_nA$ is a linear operator acting in finite-dimensional spaces, the projection method reduces to solving a finite-dimensional linear system. Below, the Galerkin method will be considered as the projection method under the assumption that the above definition is valid in terms of the orthogonal projection.

Consider first the general convergence and error analysis.

Lemma 8 The projection method converges if and only if there exist an index $N$ and a positive constant $M$ such that for all $n \ge N$ the finite-dimensional operators $A_n = P_nA : X_n \to Y_n$ are invertible and the operators $A_n^{-1}P_nA : X \to X$ are uniformly bounded,
\[
\|A_n^{-1}P_nA\| \le M . \tag{377}
\]
The error estimate
\[
\|\varphi_n - \varphi\| \le (1 + M)\inf_{\psi \in X_n}\|\psi - \varphi\| \tag{378}
\]
is valid.

Relationship (378) is usually called the quasioptimal estimate and corresponds to the error estimate of a projection method based on the approximation of elements from $X$ by elements of the subspaces $X_n$. It shows that the error of the projection method is determined by the quality of approximation of the exact solution by elements of the subspace $X_n$.

Consider the equation
\[
S\varphi + K\varphi = f \tag{379}
\]
and the projection methods
\[
P_nS\varphi_n = P_nf \tag{380}
\]
and
\[
P_n(S + K)\varphi_n = P_nf \tag{381}
\]
with subspaces $X_n$ and projection operators $P_n : Y \to Y_n$.

Lemma 9 Assume that $S : X \to Y$ is a bounded operator with a bounded inverse $S^{-1} : Y \to X$ and that the projection method (380) is convergent for $S$. Let $K : X \to Y$ be a compact operator and let $S + K$ be injective. Then the projection method (381) converges for $S + K$.

Lemma 10 Assume that $Y_n = S(X_n)$ and $\|P_nK - K\| \to 0$, $n \to \infty$. Then, for sufficiently large $n$, the approximate equation (381) is uniquely solvable and the error estimate
\[
\|\varphi_n - \varphi\| \le M\|P_nS\varphi - S\varphi\|
\]
is valid, where $M$ is a positive constant depending on $S$ and $K$.

For operator equations in Hilbert spaces, the projection method employing an orthogonal projection onto finite-dimensional subspaces is called the Galerkin method.

Consider a pair of Hilbert spaces $X$ and $Y$ and assume that $A : X \to Y$ is an injective operator. Let $X_n \subset X$ and $Y_n \subset Y$ be subspaces with $\dim X_n = \dim Y_n = n$. Then $\varphi_n \in X_n$ is a solution of the equation $A\varphi = f$ obtained by the projection method generated by $X_n$ and the operators of orthogonal projection $P_n : Y \to Y_n$ if and only if
\[
(A\varphi_n, g) = (f, g) \tag{382}
\]
for all $g \in Y_n$. Indeed, equations (382) are equivalent to $P_n(A\varphi_n - f) = 0$.

Equation (382) is called the Galerkin equation.

Assume that $X_n$ and $Y_n$ are the spans of the basis and probe functions: $X_n = \mathrm{span}\{u_1, \dots, u_n\}$ and $Y_n = \mathrm{span}\{v_1, \dots, v_n\}$. Then we can express $\varphi_n$ as the linear combination
\[
\varphi_n = \sum_{k=1}^{n}\alpha_k u_k
\]
and show that equation (382) is equivalent to the linear system of order $n$,
\[
\sum_{k=1}^{n}\alpha_k (Au_k, v_j) = (f, v_j), \qquad j = 1, \dots, n, \tag{383}
\]
with respect to the coefficients $\alpha_1, \dots, \alpha_n$.
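As a concrete illustration of (382)–(383) (with an invented operator, spaces, and basis functions; nothing here is prescribed by the text), the sketch below assembles and solves the $n\times n$ Galerkin system for a second-kind integral equation on $[0, 1]$, using piecewise-constant basis and probe functions.

```python
import numpy as np

# Galerkin system (383) for the illustrative second-kind equation
#   (A phi)(x) = phi(x) - int_0^1 k(x, y) phi(y) dy = f(x),  k(x, y) = exp(-|x - y|),
# with piecewise-constant basis u_k = probe v_k on a uniform grid (all data invented).
n = 64
h = 1.0 / n
mid = (np.arange(n) + 0.5) * h                         # cell midpoints

# (A u_k, v_j) = h*delta_jk - int_{I_j} int_{I_k} k(x, y) dy dx (midpoint rule per cell)
G = h*np.eye(n) - np.exp(-np.abs(mid[:, None] - mid[None, :])) * h * h

phi_exact = lambda x: np.cos(2*np.pi*x)                # manufactured exact solution
yfine = (np.arange(4000) + 0.5) / 4000.0               # fine midpoint grid on [0, 1]
def Aphi(x):                                           # f(x) = (A phi_exact)(x)
    return phi_exact(x) - np.mean(np.exp(-np.abs(x - yfine)) * phi_exact(yfine))

rhs = np.array([h * Aphi(xm) for xm in mid])           # (f, v_j) by the midpoint rule
alpha = np.linalg.solve(G, rhs)                        # coefficients of phi_n in the basis u_k
print(np.max(np.abs(alpha - phi_exact(mid))))          # small: close to phi at the midpoints
```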


Thus, the Galerkin method may be specified by choosing subspaces Xn and projection
operators Pn (or the basis and probe functions). We note that the convergence of the Galerkin
method do not depend on a particular choice of the basis functions in subspaces Xn and probe
functions in subspaces Yn . In order to perform a theoretical analysis of the Galerkin methods,
we can choose only subspaces Xn and projection operators Pn . However, in practice, it is more
convenient to specify first the basis and probe functions, because it is impossible to calculate
the matrix coefficients using (383).

21.1 Numerical solution of the logarithmic integral equation by the Galerkin method

Now, let us apply the projection scheme of the Galerkin method to the logarithmic integral equation
\[
(L + K)\varphi = \frac{1}{\pi}\int_{-1}^{1}\Bigl(\ln\frac{1}{|x - y|} + K(x, y)\Bigr)\varphi(x)\frac{dx}{\sqrt{1-x^2}} = f(y), \qquad -1 < y < 1 . \tag{384}
\]
Here, $K(x, y)$ is a smooth function; precise conditions on the smoothness of $K(x, y)$ that guarantee the compactness of the integral operator $K$ are imposed in Section 2.
One can determine an approximate solution of equation (384) by the Galerkin method, taking the Chebyshev polynomials of the first kind as the basis and probe functions. According to the scheme of the Galerkin method, we set
\[
\varphi_N(x) = \frac{a_0}{2}T_0(x) + \sum_{n=1}^{N-1} a_nT_n(x),
\]
\[
X_N = \mathrm{span}\{T_0, \dots, T_{N-1}\}, \qquad Y_N = \mathrm{span}\{T_0, \dots, T_{N-1}\}, \tag{385}
\]
and find the unknown coefficients from the finite linear system (383). Since $L : X_N \to Y_N$ is a bijective operator, the convergence of the Galerkin method follows from Lemma 10 applied to the operator $L$. If the operator $K$ is compact and $L + K$ is injective, then we can also prove the convergence of the Galerkin method for the operator $L + K$ using Lemma 9. Note also that the quasioptimal estimate (378) in the $L_2^{(1)}$ norm is valid for $\varphi_N(x) - \varphi(x)$, $N \to \infty$.
Theorem 47 If the operator $L + K : L_2^{(1)} \to \widetilde W_2^1$ is injective and the operator $K : L_2^{(1)} \to \widetilde W_2^1$ is compact, then the Galerkin method (385) converges for the operator $L + K$.
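The following sketch implements scheme (385) for an illustrative instance of (384). It is not the compendium's code: the smooth part $K(x, y)$, the manufactured solution, and the orders are invented, the operator $L$ is taken with the $1/\pi$ normalization used above, and the classical identities $LT_0 = \ln 2\,T_0$, $LT_n = T_n/n$ ($n \ge 1$) are used to evaluate $L$ exactly on the basis.

```python
import numpy as np

# Galerkin scheme (385) for an illustrative instance of equation (384); data invented.
N, Q = 16, 800                                     # Galerkin order, Gauss-Chebyshev nodes
xq = np.cos((2*np.arange(1, Q + 1) - 1)*np.pi/(2*Q))
w = np.pi / Q                                      # quadrature weight for 1/sqrt(1-x^2)

T = lambda orders, x: np.cos(np.outer(orders, np.arccos(x)))   # Chebyshev T_n(x)
Tq = T(np.arange(N), xq)                           # shape (N, Q)
lam = np.array([np.log(2.0)] + [1.0/m for m in range(1, N)])   # L T_n = lam_n T_n
Kq = np.exp(-(xq[:, None] - xq[None, :])**2) / 10.0            # smooth K(x, y)

# Galerkin matrix G[j, n] = ((L + K) T_n, T_j) in the 1/sqrt(1-y^2) inner product
gram = w * Tq @ Tq.T                               # (T_n, T_j): diag(pi, pi/2, ..., pi/2)
KTn = (1.0/Q) * Tq @ Kq                            # (K T_n)(y_q), shape (N, Q)
G = gram * lam[None, :] + w * Tq @ KTn.T

# Manufactured data: phi_exact(x) = exp(x), f = (L + K) phi_exact evaluated on the grid
M = 60
c = (2.0/Q) * T(np.arange(M), xq) @ np.exp(xq)     # Chebyshev coefficients of exp(x)
Lphi = np.log(2.0)*c[0]/2 + (c[1:]/np.arange(1, M)) @ T(np.arange(1, M), xq)
Kphi = (1.0/Q) * np.exp(xq) @ Kq
fq = Lphi + Kphi
b = w * Tq @ fq                                    # right-hand sides (f, T_j)

a = np.linalg.solve(G, b)                          # phi_N = sum_{n<N} a_n T_n
target = np.concatenate(([c[0]/2], c[1:N]))        # exact coefficients in the same basis
print(np.max(np.abs(a - target)))                  # tiny: phi_N recovers phi_exact
```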

The same technique can be applied to constructing an approximate solution to the hypersingular equation
\[
(H + K)\varphi = \frac{d}{dy}\int_{-1}^{1}\Bigl(\frac{1}{x - y} + K(x, y)\Bigr)\varphi(x)\sqrt{1-x^2}\,dx = f(y), \qquad -1 < y < 1 . \tag{386}
\]
The conditions concerning the smoothness of $K(x, y)$ that guarantee the compactness of the integral operator $K$ are imposed in Section 2.
Choose the Chebyshev polynomials of the second kind as the basis and probe functions, i.e., set
\[
X_N = \mathrm{span}\{U_0, \dots, U_{N-1}\}, \qquad Y_N = \mathrm{span}\{U_0, \dots, U_{N-1}\} . \tag{387}
\]
The unknown approximate solution $\varphi_N(x)$ is determined in the form
\[
\varphi_N(x) = \sum_{n=0}^{N-1} b_nU_n(x) . \tag{388}
\]

Note that the hypersingular operator $H$ maps $X_N$ onto $Y_N$ and is bijective. From Lemma 10 it follows that the Galerkin method for the operator $H$ converges. By virtue of Lemma 9, this result is also valid for the operator $H + K$ under the assumption that $K$ is compact and $H + K$ is injective. The quasioptimal estimate (378) gives the rate of convergence of the approximate solution $\varphi_N(x)$ to the exact one $\varphi(x)$, $N \to \infty$, in the norm of the space $\widehat W_2^1$. The unknown coefficients $b_n$ are found from the finite linear system (383).

Theorem 48 If the operator $H + K : \widehat W_2^1 \to L_2^{(2)}$ is injective and $K : \widehat W_2^1 \to L_2^{(2)}$ is compact, then the Galerkin method (387) converges for the operator $H + K$.
