
A Survey on Solution Methods for Integral Equations

Ilias S. Kotsireas
June 2008

1 Introduction

Integral Equations arise naturally in applications, in many areas of Mathematics, Science and Technology, and have been studied extensively at both the theoretical and the practical level. It is noteworthy that a MathSciNet keyword search on Integral Equations returns more than eleven thousand items. In this survey we describe several solution methods for Integral Equations, illustrated with a number of fully worked-out examples. In addition, we provide a bibliography for the reader interested in learning more about various theoretical and computational aspects of Integral Equations.
Our interest in Integral Equations stems from the fact that understanding and implementing solution methods for Integral Equations is of vital importance in designing efficient parameterization algorithms for algebraic curves, surfaces and hypersurfaces. This is because the implicitization algorithm for algebraic curves, surfaces and hypersurfaces based on a null-vector computation can be inverted to yield a parameterization algorithm.
Integral Equations are inextricably related to other areas of Mathematics, such as Integral Transforms, Functional Analysis and so forth. In view of this fact, and because we made a conscious effort to limit the length to under 50 pages, it was inevitable that the present work is not self-contained.

We thank ENTER. This project is implemented in the framework of Measure 8.3 of the programme Competitiveness, 3rd European Union Support Framework, and is funded as follows: 75% of public funding from the European Union, Social Fund; 25% of public funding from the Greek State, Ministry of Development, General Secretariat of Research and Technology; and from the private sector.

We thank Professor Ioannis Z. Emiris, University of Athens, Athens, Greece, for the warm hospitality in
his Laboratory of Geometric & Algebraic Algorithms, ERGA.

2 Linear Integral Equations

2.1 General Form

The most general form of a linear integral equation is
\[
h(x)u(x) = f(x) + \lambda \int_a^{b(x)} K(x,t)u(t)\,dt \tag{1}
\]

The type of an integral equation can be determined via the following conditions:

f(x) = 0 . . . homogeneous
f(x) ≠ 0 . . . nonhomogeneous
b(x) = x . . . Volterra integral equation
b(x) = b . . . Fredholm integral equation
h(x) = 0 . . . of the 1st kind
h(x) = 1 . . . of the 2nd kind
For example, Abel's problem
\[
\sqrt{2g}\,f(x) = \int_0^x \frac{\phi(t)}{\sqrt{x-t}}\,dt \tag{2}
\]
is a nonhomogeneous Volterra equation of the 1st kind.

2.2 Linearity of Solutions

If u1(x) and u2(x) are both solutions to the homogeneous integral equation, then the linear combination c1u1(x) + c2u2(x) is also a solution.

2.3 The Kernel

K(x,t) is called the kernel of the integral equation. The equation is called singular if:

- the range of integration is infinite, or
- the kernel becomes infinite in the range of integration.

2.3.1 Difference Kernels

If K(x,t) = K(x − t) (i.e. the kernel depends only on x − t), then the kernel is called a difference kernel. Volterra equations with difference kernels may be solved using either Laplace or Fourier transforms, depending on the limits of integration.

2.3.2 Resolvent Kernels

2.4 Relations to Differential Equations

Differential equations and integral equations are equivalent and may be converted back and forth:
\[
\frac{d^2y}{dx^2} = F(x)
\quad\Longleftrightarrow\quad
y(x) = \int_a^x \int_a^s F(t)\,dt\,ds + c_1 x + c_2
\]
2.4.1 Reduction to One Integration

In most cases, the resulting integral may be reduced to an integration in one variable:
\[
\int_a^x \int_a^s F(t)\,dt\,ds = \int_a^x F(t) \int_t^x ds\,dt = \int_a^x (x-t)F(t)\,dt
\]

Or, in general:
\[
\int_a^x \int_a^{x_1} \cdots \int_a^{x_{n-1}} F(x_n)\,dx_n\,dx_{n-1}\cdots dx_1
= \frac{1}{(n-1)!}\int_a^x (x-x_1)^{n-1} F(x_1)\,dx_1
\]

2.4.2 Generalized Leibnitz formula

The Leibnitz formula may be useful in converting and solving some types of equations.
\[
\frac{d}{dx}\int_{\alpha(x)}^{\beta(x)} F(x,y)\,dy
= \int_{\alpha(x)}^{\beta(x)} \frac{\partial F}{\partial x}(x,y)\,dy
+ F(x,\beta(x))\frac{d\beta}{dx}(x)
- F(x,\alpha(x))\frac{d\alpha}{dx}(x)
\]

3 Transforms

3.1 Laplace Transform

The Laplace transform is an integral transform of a function f(x) defined on (0, ∞). The general formula is:
\[
F(s) \equiv \mathcal{L}\{f\} = \int_0^\infty e^{-sx} f(x)\,dx \tag{3}
\]
The Laplace transform happens to be a Fredholm integral equation of the 1st kind with kernel K(s, x) = e^{−sx}.
3.1.1 Inverse

The inverse Laplace transform involves complex integration, so tables of transform pairs are
normally used to find both the Laplace transform of a function and its inverse.
3.1.2 Convolution Product

When F1(s) ≡ L{f1} and F2(s) ≡ L{f2}, the convolution product is L{f1 ∗ f2} = F1(s)F2(s), where
\[
f_1 * f_2 = \int_0^x f_1(x-t)f_2(t)\,dt.
\]
Here f1(x − t) is a difference kernel and f2(t) is a solution to the integral equation. Volterra integral equations with difference kernels where the integration is performed on the interval (0, x) may be solved using this method.
3.1.3 Commutativity

The convolution product is commutative. That is:
\[
f_1 * f_2 = \int_0^x f_1(x-t)f_2(t)\,dt = \int_0^x f_2(x-t)f_1(t)\,dt = f_2 * f_1
\]

3.1.4 Example

Find the inverse Laplace transform of F(s) = 1/(s² + 9) + 1/(s(s + 1)) using a table of transform pairs:
\begin{align*}
f(x) &= \mathcal{L}^{-1}\left\{\frac{1}{s^2+9} + \frac{1}{s(s+1)}\right\} \\
&= \mathcal{L}^{-1}\left\{\frac{1}{s^2+9}\right\} + \mathcal{L}^{-1}\left\{\frac{1}{s(s+1)}\right\} \\
&= \frac{1}{3}\sin 3x + \mathcal{L}^{-1}\left\{\frac{1}{s} - \frac{1}{s+1}\right\} \\
&= \frac{1}{3}\sin 3x + \mathcal{L}^{-1}\left\{\frac{1}{s}\right\} - \mathcal{L}^{-1}\left\{\frac{1}{s+1}\right\} \\
&= \frac{1}{3}\sin 3x + 1 - e^{-x}
\end{align*}
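A table lookup like the one above can be cross-checked symbolically. The sketch below (assuming SymPy is available) applies the forward Laplace transform to the candidate answer and compares the result with the original image F(s):

```python
from sympy import symbols, laplace_transform, sin, exp, simplify

x = symbols('x', positive=True)
s = symbols('s', positive=True)

# Candidate inverse transform read off the table of pairs.
f = sin(3*x)/3 + 1 - exp(-x)

# Transform it back; noconds=True drops the convergence conditions.
F = laplace_transform(f, x, s, noconds=True)

# F should equal the original image 1/(s^2 + 9) + 1/(s(s + 1)).
residual = simplify(F - (1/(s**2 + 9) + 1/(s*(s + 1))))
print(residual)  # 0
```

A zero residual confirms the partial-fraction decomposition 1/(s(s+1)) = 1/s − 1/(s+1) used in the third step.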

3.2 Fourier Transform

The Fourier exponential transform is an integral transform of a function f(x) defined on (−∞, ∞). The general formula is:
\[
F(\lambda) \equiv \mathcal{F}\{f\} = \int_{-\infty}^{\infty} e^{i\lambda x} f(x)\,dx \tag{4}
\]
The Fourier transform happens to be a Fredholm equation of the 1st kind with kernel K(λ, x) = e^{iλx}.
3.2.1 Inverse

The inverse Fourier transform is given by:
\[
f(x) \equiv \mathcal{F}^{-1}\{F\} = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-i\lambda x} F(\lambda)\,d\lambda \tag{5}
\]
It is sometimes difficult to determine the inverse, so tables of transform pairs are normally used to find both the Fourier transform of a function and its inverse.
3.2.2 Convolution Product

When F1(λ) ≡ F{f1} and F2(λ) ≡ F{f2}, the convolution product is F{f1 ∗ f2} = F1(λ)F2(λ), where
\[
f_1 * f_2 = \int_{-\infty}^{\infty} f_1(x-t)f_2(t)\,dt.
\]
Here f1(x − t) is a difference kernel and f2(t) is a solution to the integral equation. Volterra integral equations with difference kernels where the integration is performed on the interval (−∞, ∞) may be solved using this method.
3.2.3 Commutativity

The convolution product is commutative. That is:
\[
f_1 * f_2 = \int_{-\infty}^{\infty} f_1(x-t)f_2(t)\,dt = \int_{-\infty}^{\infty} f_2(x-t)f_1(t)\,dt = f_2 * f_1
\]

3.2.4 Example

Find the Fourier transform of g(x) = sin(ax)/x using a table of integral transforms.
The inverse transform of sin(aλ)/λ is
\[
f(x) = \begin{cases} \frac{1}{2}, & |x| \le a \\ 0, & |x| > a \end{cases}
\]
but since the Fourier transform is symmetrical, we know that
\[
\mathcal{F}\{F(x)\} = 2\pi f(-\lambda).
\]
Therefore, let
\[
F(x) = \frac{\sin ax}{x}
\quad\text{and}\quad
f(\lambda) = \begin{cases} \frac{1}{2}, & |\lambda| \le a \\ 0, & |\lambda| > a \end{cases}
\]
so that
\[
\mathcal{F}\left\{\frac{\sin ax}{x}\right\} = 2\pi f(-\lambda) = \begin{cases} \pi, & |\lambda| \le a \\ 0, & |\lambda| > a. \end{cases}
\]

3.3 Mellin Transform

The Mellin transform is an integral transform of a function f(x) defined on (0, ∞). The general formula is:
\[
F(\lambda) \equiv \mathcal{M}\{f\} = \int_0^\infty x^{\lambda-1} f(x)\,dx \tag{6}
\]
If x = e^{−t}, then the Mellin transform is reduced to a Laplace transform:
\[
\int_{-\infty}^{\infty} (e^{-t})^{\lambda-1} f(e^{-t})\,e^{-t}\,dt = \int_{-\infty}^{\infty} e^{-\lambda t} f(e^{-t})\,dt,
\]
which becomes an ordinary (one-sided) Laplace transform ∫₀^∞ e^{−λt} f(e^{−t}) dt when the integrand vanishes for t < 0.

3.3.1 Inverse

Like the inverse Laplace transform, the inverse Mellin transform involves complex integration,
so tables of transform pairs are normally used to find both the Mellin transform of a function
and its inverse.
3.3.2 Convolution Product

When F1(λ) ≡ M{f1} and F2(λ) ≡ M{f2}, the convolution product is M{f1 ∗ f2} = F1(λ)F2(λ), where
\[
f_1 * f_2 = \int_0^\infty f_1(t)\,f_2\!\left(\frac{x}{t}\right)\frac{dt}{t}.
\]
Volterra integral equations with kernels of the form K(x/t) where the integration is performed on the interval (0, ∞) may be solved using this method.

3.3.3 Example

Find the Mellin transform of e^{−ax}, a > 0. By definition:
\[
F(\lambda) = \int_0^\infty x^{\lambda-1} e^{-ax}\,dx.
\]
Let ax = z, so that:
\[
F(\lambda) = a^{-\lambda}\int_0^\infty e^{-z} z^{\lambda-1}\,dz.
\]
From the definition of the gamma function Γ(λ),
\[
\Gamma(\lambda) = \int_0^\infty x^{\lambda-1} e^{-x}\,dx,
\]
the result is
\[
F(\lambda) = \frac{\Gamma(\lambda)}{a^\lambda}.
\]
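The change of variables can be confirmed by evaluating the Mellin integral directly; a small sketch (assuming SymPy is available) compares it with Γ(λ)/a^λ:

```python
from sympy import symbols, integrate, exp, gamma, oo, simplify

a, lam, x = symbols('a lambda x', positive=True)

# Mellin transform of exp(-a*x): integrate x^(lambda-1) e^(-a x) on (0, oo).
F = integrate(x**(lam - 1)*exp(-a*x), (x, 0, oo))

# Compare with the closed form Gamma(lambda) / a^lambda.
residual = simplify(F - gamma(lam)/a**lam)
print(residual)  # 0
```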

4 Numerical Integration

General Equation:
\[
u(x) = f(x) + \lambda \int_a^b K(x,t)u(t)\,dt
\]
Let
\[
S_n(x) = \sum_{k=0}^{n} K(x,t_k)u(t_k)\,\Delta_k t,
\qquad
u(x_i) = f(x_i) + \lambda \sum_{k=0}^{n} K(x_i,t_k)u(t_k)\,\Delta_k t, \quad i = 0, 1, 2, \ldots, n.
\]
If equal increments are used, Δx = (b − a)/n.

4.1 Midpoint Rule
\[
\int_a^b f(x)\,dx \approx \frac{b-a}{n}\sum_{i=1}^{n} f\!\left(\frac{x_{i-1}+x_i}{2}\right)
\]

4.2 Trapezoid Rule
\[
\int_a^b f(x)\,dx \approx \frac{b-a}{n}\left[\frac{1}{2}f(x_0) + \sum_{i=1}^{n-1} f(x_i) + \frac{1}{2}f(x_n)\right]
\]

4.3 Simpson's Rule (n even)
\[
\int_a^b f(x)\,dx \approx \frac{b-a}{3n}\left[f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + \cdots + 4f(x_{n-1}) + f(x_n)\right]
\]
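The three quadrature rules translate directly into code. This is a generic sketch (the function names are my own), exercised on the known integral ∫₀^π sin x dx = 2:

```python
import math

def midpoint(f, a, b, n):
    # (b - a)/n times the sum of f at the subinterval midpoints
    h = (b - a) / n
    return h * sum(f(a + (i - 0.5) * h) for i in range(1, n + 1))

def trapezoid(f, a, b, n):
    # (b - a)/n * [f(x0)/2 + f(x1) + ... + f(x_{n-1}) + f(xn)/2]
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def simpson(f, a, b, n):
    # (b - a)/(3n) * [f(x0) + 4 f(x1) + 2 f(x2) + ... + 4 f(x_{n-1}) + f(xn)]
    if n % 2:
        raise ValueError("Simpson's rule needs an even n")
    h = (b - a) / n
    s = f(a) + f(b)
    s += sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return h * s / 3

for rule in (midpoint, trapezoid, simpson):
    print(rule.__name__, rule(math.sin, 0.0, math.pi, 100))
```

With n = 100 the midpoint and trapezoid rules are accurate to about 10⁻⁴ and Simpson's rule to about 10⁻⁸, reflecting their O(h²) and O(h⁴) error behavior.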

5 Volterra Equations

Volterra integral equations are a type of linear integral equation (1) where b(x) = x:
\[
h(x)u(x) = f(x) + \lambda \int_a^x K(x,t)u(t)\,dt \tag{7}
\]
The equation is said to be of the first kind when h(x) = 0,
\[
f(x) = \lambda \int_a^x K(x,t)u(t)\,dt \tag{8}
\]
and of the second kind when h(x) = 1,
\[
u(x) = f(x) + \lambda \int_a^x K(x,t)u(t)\,dt \tag{9}
\]

5.1 Relation to Initial Value Problems

A general initial value problem is given by:
\[
\frac{d^2y}{dx^2} + A(x)\frac{dy}{dx} + B(x)y(x) = g(x),
\qquad y(a) = c_1,\ y'(a) = c_2.
\]
It is equivalent to the Volterra equation
\[
y(x) = f(x) + \int_a^x K(x,t)y(t)\,dt
\]
where
\[
f(x) = \int_a^x (x-t)g(t)\,dt + (x-a)[c_1 A(a) + c_2] + c_1
\]
and
\[
K(x,t) = (t-x)[B(t) - A'(t)] - A(t). \tag{10}
\]

5.2 Solving 2nd-Kind Volterra Equations

5.2.1 Neumann Series

The resolvent kernel for this method is:
\[
\Gamma(x,t;\lambda) = \sum_{n=0}^{\infty} \lambda^n K_{n+1}(x,t)
\]
where
\[
K_{n+1}(x,t) = \int_t^x K(x,y)K_n(y,t)\,dy
\]
and
\[
K_1(x,t) = K(x,t).
\]
The series form of u(x) is
\[
u(x) = u_0(x) + \lambda u_1(x) + \lambda^2 u_2(x) + \cdots
\]
and since u(x) satisfies the Volterra equation
\[
u(x) = f(x) + \lambda \int_a^x K(x,t)u(t)\,dt,
\]
we have
\[
u_0(x) + \lambda u_1(x) + \lambda^2 u_2(x) + \cdots
= f(x) + \lambda \int_a^x K(x,t)u_0(t)\,dt + \lambda^2 \int_a^x K(x,t)u_1(t)\,dt + \cdots
\]
Now, by equating like coefficients:
\begin{align*}
u_0(x) &= f(x) \\
u_1(x) &= \int_a^x K(x,t)u_0(t)\,dt \\
u_2(x) &= \int_a^x K(x,t)u_1(t)\,dt \\
&\;\;\vdots \\
u_n(x) &= \int_a^x K(x,t)u_{n-1}(t)\,dt
\end{align*}
Substituting u_0(x) = f(x):
\[
u_1(x) = \int_a^x K(x,t)f(t)\,dt
\]
Substituting this for u_1(x):
\begin{align*}
u_2(x) &= \int_a^x K(x,s)\int_a^s K(s,t)f(t)\,dt\,ds \\
&= \int_a^x f(t)\int_t^x K(x,s)K(s,t)\,ds\,dt \\
&= \int_a^x f(t)K_2(x,t)\,dt
\end{align*}
Continue substituting recursively for u_3, u_4, ... to determine the resolvent kernel. Sometimes the resolvent kernel becomes recognizable after a few iterations as a Maclaurin or Taylor series and can be replaced by the function represented by that series. Otherwise numerical methods must be used to solve the equation.
Example: Use the Neumann series method to solve the Volterra integral equation of the 2nd kind:
\[
u(x) = f(x) + \lambda \int_0^x e^{x-t}u(t)\,dt \tag{11}
\]
In this example, K_1(x,t) ≡ K(x,t) = e^{x−t}, so K_2(x,t) is found by:
\begin{align*}
K_2(x,t) &= \int_t^x K(x,s)K_1(s,t)\,ds \\
&= \int_t^x e^{x-s}e^{s-t}\,ds \\
&= \int_t^x e^{x-t}\,ds \\
&= (x-t)e^{x-t}
\end{align*}
and K_3(x,t) is found by:
\begin{align*}
K_3(x,t) &= \int_t^x K(x,s)K_2(s,t)\,ds \\
&= \int_t^x e^{x-s}(s-t)e^{s-t}\,ds \\
&= \int_t^x (s-t)e^{x-t}\,ds \\
&= e^{x-t}\left[\frac{s^2}{2} - ts\right]_t^x \\
&= e^{x-t}\left(\frac{x^2}{2} - tx + \frac{t^2}{2}\right) \\
&= \frac{(x-t)^2}{2}e^{x-t}
\end{align*}
so by repeating this process the general equation for K_{n+1}(x,t) is
\[
K_{n+1}(x,t) = \frac{(x-t)^n}{n!}e^{x-t}.
\]
Therefore, the resolvent kernel for eq. (11) is
\begin{align*}
\Gamma(x,t;\lambda) &= K_1(x,t) + \lambda K_2(x,t) + \lambda^2 K_3(x,t) + \cdots \\
&= e^{x-t} + \lambda(x-t)e^{x-t} + \lambda^2\frac{(x-t)^2}{2}e^{x-t} + \cdots \\
&= e^{x-t}\left[1 + \lambda(x-t) + \lambda^2\frac{(x-t)^2}{2} + \cdots\right].
\end{align*}
The series in brackets, 1 + λ(x−t) + λ²(x−t)²/2 + ⋯, is the Maclaurin series of e^{λ(x−t)}, so the resolvent kernel is
\[
\Gamma(x,t;\lambda) = e^{x-t}e^{\lambda(x-t)} = e^{(\lambda+1)(x-t)}
\]
and the solution to eq. (11) is
\[
u(x) = f(x) + \lambda \int_0^x e^{(\lambda+1)(x-t)}f(t)\,dt.
\]
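The Maclaurin-series step can be checked symbolically; a short sketch (assuming SymPy is available) sums the series and compares the result with e^{(λ+1)(x−t)}:

```python
from sympy import symbols, summation, factorial, exp, oo, simplify

lam, x, t, z = symbols('lambda x t z')
n = symbols('n', integer=True, nonnegative=True)

# Sum the Maclaurin series 1 + z + z^2/2! + ... , which equals e^z ...
series = summation(z**n / factorial(n), (n, 0, oo))

# ... then substitute z = lambda*(x - t) and attach the factor e^(x - t).
resolvent = exp(x - t) * series.subs(z, lam*(x - t))

# The resolvent kernel should collapse to e^((lambda + 1)(x - t)).
residual = simplify(resolvent - exp((lam + 1)*(x - t)))
print(residual)  # 0
```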

5.2.2 Successive Approximations

1. make a reasonable 0th approximation, u_0(x)
2. let i = 1
3. compute u_i(x) = f(x) + λ ∫₀^x K(x,t)u_{i−1}(t) dt
4. increment i and repeat the previous step until a desired level of accuracy is reached

If f(x) is continuous on 0 ≤ x ≤ a and K(x,t) is continuous on both 0 ≤ x ≤ a and 0 ≤ t ≤ x, then the sequence u_n(x) will converge to u(x). The sequence may occasionally be recognized as a Maclaurin or Taylor series, otherwise numerical methods must be used to solve the equation.
Example: Use the successive approximations method to solve the Volterra integral equation of the 2nd kind:
\[
u(x) = x - \int_0^x (x-t)u(t)\,dt \tag{12}
\]
In this example, start with the 0th approximation u_0(t) = 0. This gives u_1(x) = x,
\begin{align*}
u_2(x) &= x - \int_0^x (x-t)u_1(t)\,dt \\
&= x - \int_0^x (x-t)t\,dt \\
&= x - \left[\frac{xt^2}{2} - \frac{t^3}{3}\right]_0^x \\
&= x - \frac{x^3}{3!}
\end{align*}
and
\begin{align*}
u_3(x) &= x - \int_0^x (x-t)u_2(t)\,dt \\
&= x - \int_0^x (x-t)\left(t - \frac{t^3}{6}\right)dt \\
&= x - x\left[\frac{t^2}{2} - \frac{t^4}{24}\right]_0^x + \left[\frac{t^3}{3} - \frac{t^5}{30}\right]_0^x \\
&= x - \frac{x^3}{2} + \frac{x^5}{24} + \frac{x^3}{3} - \frac{x^5}{30} \\
&= x - \frac{x^3}{3!} + \frac{x^5}{5!}.
\end{align*}
Continuing this process, the nth approximation is
\[
u_n(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots + (-1)^n\frac{x^{2n+1}}{(2n+1)!}
\]
which is clearly the Maclaurin series of sin x. Therefore the solution to eq. (12) is
u(x) = sin x.
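The iteration can also be carried out numerically when no closed form emerges. This sketch (helper names my own) applies step 3 repeatedly to eq. (12) on a uniform grid, using the trapezoid rule for the inner integral, and approaches sin x:

```python
import math

# Solve u(x) = x - int_0^x (x - t) u(t) dt by successive approximation
# on a uniform grid over [0, 1].
n = 200
h = 1.0 / n
xs = [i * h for i in range(n + 1)]

u = [0.0] * (n + 1)            # 0th approximation: u0(x) = 0
for _ in range(12):            # repeat step 3 a fixed number of times
    new = []
    for i, x in enumerate(xs):
        if i == 0:
            integral = 0.0
        else:
            # trapezoid rule for int_0^x (x - t) u(t) dt over xs[0..i]
            w = [1.0] * (i + 1)
            w[0] = w[-1] = 0.5
            integral = h * sum(wj * (x - xs[j]) * u[j]
                               for j, wj in enumerate(w))
        new.append(x - integral)
    u = new

print(u[-1], math.sin(1.0))  # both close to 0.8414...
```

After a dozen sweeps the grid values agree with sin x up to the O(h²) quadrature error.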
5.2.3 Laplace Transform

If the kernel depends only on x − t, then it is called a difference kernel and denoted by K(x − t). Now the equation becomes:
\[
u(x) = f(x) + \lambda \int_0^x K(x-t)u(t)\,dt,
\]
which contains the convolution product
\[
\int_0^x K(x-t)u(t)\,dt = K * u.
\]
Define the Laplace transform pairs U(s) ≡ L{u(x)}, F(s) ≡ L{f(x)}, and K(s) ≡ L{K(x)}.
Now, since
\[
\mathcal{L}\{K * u\} = K(s)U(s),
\]
applying the Laplace transform to the integral equation results in
\begin{align*}
U(s) &= F(s) + \lambda K(s)U(s) \\
U(s) &= \frac{F(s)}{1 - \lambda K(s)}, \quad \lambda K(s) \ne 1,
\end{align*}
and performing the inverse Laplace transform:
\[
u(x) = \mathcal{L}^{-1}\left\{\frac{F(s)}{1 - \lambda K(s)}\right\}, \quad \lambda K(s) \ne 1.
\]
Example: Use the Laplace transform method to solve eq. (11):
\[
u(x) = f(x) + \lambda \int_0^x e^{x-t}u(t)\,dt.
\]
Let K(s) = L{e^x} = 1/(s − 1) and
\begin{align*}
U(s) &= \frac{F(s)}{1 - \frac{\lambda}{s-1}} \\
&= F(s)\,\frac{s-1}{s-1-\lambda} \\
&= F(s)\,\frac{s-1-\lambda+\lambda}{s-1-\lambda} \\
&= F(s) + \frac{\lambda F(s)}{s-1-\lambda} \\
&= F(s) + \frac{\lambda F(s)}{s-(\lambda+1)}.
\end{align*}
The solution u(x) is the inverse Laplace transform of U(s):
\begin{align*}
u(x) &= \mathcal{L}^{-1}\left\{F(s) + \frac{\lambda F(s)}{s-(\lambda+1)}\right\} \\
&= \mathcal{L}^{-1}\{F(s)\} + \lambda\mathcal{L}^{-1}\left\{\frac{F(s)}{s-(\lambda+1)}\right\} \\
&= f(x) + \lambda\,e^{(\lambda+1)x} * f(x) \\
&= f(x) + \lambda \int_0^x e^{(\lambda+1)(x-t)}f(t)\,dt
\end{align*}
which is the same result obtained using the Neumann series method.
Example: Use the Laplace transform method to solve eq. (12):
\[
u(x) = x - \int_0^x (x-t)u(t)\,dt.
\]
Let K(s) = L{x} = 1/s² and
\begin{align*}
U(s) &= \frac{1}{s^2} - \frac{1}{s^2}U(s) \\
U(s) &= \frac{\frac{1}{s^2}}{1 + \frac{1}{s^2}} = \frac{1}{s^2+1}.
\end{align*}
The solution u(x) is the inverse Laplace transform of U(s):
\[
u(x) = \mathcal{L}^{-1}\left\{\frac{1}{s^2+1}\right\} = \sin x
\]
which is the same result obtained using the successive approximations method.
5.2.4 Numerical Solutions

Given a 2nd kind Volterra equation
\[
u(x) = f(x) + \lambda \int_a^x K(x,t)u(t)\,dt,
\]
divide the interval of integration (a, x) into n equal subintervals, Δt = (x_n − a)/n, n ≥ 1, where x_n = x.
Let t_0 = a, x_0 = t_0 = a, x_n = t_n = x, t_j = a + jΔt = t_0 + jΔt, and x_i = x_0 + iΔt = a + iΔt = t_i.
Using the trapezoid rule,
\[
\int_a^x K(x,t)u(t)\,dt \approx \Delta t\left[\frac{1}{2}K(x,t_0)u(t_0) + K(x,t_1)u(t_1) + \cdots + K(x,t_{n-1})u(t_{n-1}) + \frac{1}{2}K(x,t_n)u(t_n)\right]
\]
where Δt = t_{j+1} − t_j = (x − a)/n, t_j ≤ x, j ≥ 1, x = x_n = t_n.
Now the equation becomes
\[
u(x) = f(x) + \lambda\Delta t\left[\frac{1}{2}K(x,t_0)u(t_0) + K(x,t_1)u(t_1) + \cdots + K(x,t_{n-1})u(t_{n-1}) + \frac{1}{2}K(x,t_n)u(t_n)\right],
\]
with t_j ≤ x, j ≥ 1, x = x_n = t_n. Since K(x,t) ≡ 0 when t > x (the integration ends at t = x), K(x_i, t_j) = 0 for t_j > x_i. Numerically, the equation becomes
\[
u(x_i) = f(x_i) + \lambda\Delta t\left[\frac{1}{2}K(x_i,t_0)u(t_0) + K(x_i,t_1)u(t_1) + \cdots + K(x_i,t_{j-1})u(t_{j-1}) + \frac{1}{2}K(x_i,t_j)u(t_j)\right],
\]
i = 1, 2, ..., n, t_j ≤ x_i, where u(x_0) = f(x_0).
Denote u_i ≡ u(t_i), f_i ≡ f(x_i), K_{ij} ≡ K(x_i, t_j), so the numeric equation can be written in a condensed form as
\[
u_0 = f_0, \qquad u_i = f_i + \lambda\Delta t\left[\frac{1}{2}K_{i0}u_0 + K_{i1}u_1 + \cdots + K_{i(j-1)}u_{j-1} + \frac{1}{2}K_{ij}u_j\right], \quad i = 1, 2, \ldots, n,\ j \le i.
\]
There are n + 1 equations:
\begin{align*}
u_0 &= f_0 \\
u_1 &= f_1 + \lambda\Delta t\left[\tfrac{1}{2}K_{10}u_0 + \tfrac{1}{2}K_{11}u_1\right] \\
u_2 &= f_2 + \lambda\Delta t\left[\tfrac{1}{2}K_{20}u_0 + K_{21}u_1 + \tfrac{1}{2}K_{22}u_2\right] \\
&\;\;\vdots \\
u_n &= f_n + \lambda\Delta t\left[\tfrac{1}{2}K_{n0}u_0 + K_{n1}u_1 + \cdots + K_{n(n-1)}u_{n-1} + \tfrac{1}{2}K_{nn}u_n\right]
\end{align*}
where a general relation can be rewritten as
\[
u_i = \frac{f_i + \lambda\Delta t\left[\frac{1}{2}K_{i0}u_0 + K_{i1}u_1 + \cdots + K_{i(i-1)}u_{i-1}\right]}{1 - \frac{\lambda\Delta t}{2}K_{ii}}
\]
and can be evaluated by substituting u_0, u_1, ..., u_{i−1} recursively from previous calculations.
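The recursion can be implemented directly. The sketch below (helper names my own) applies it to eq. (12), where λK(x,t) = −(x − t), f(x) = x, and the exact solution is sin x:

```python
import math

def volterra2_trapezoid(f, K, lam, a, x_end, n):
    """Recursive trapezoid scheme for u = f + lam * int_a^x K(x,t) u(t) dt:
    u_i = [f_i + lam*dt*(K_i0 u_0 / 2 + K_i1 u_1 + ... + K_i(i-1) u_{i-1})]
          / (1 - lam*dt*K_ii / 2)."""
    dt = (x_end - a) / n
    t = [a + j * dt for j in range(n + 1)]
    u = [f(t[0])]                                  # u0 = f0
    for i in range(1, n + 1):
        s = 0.5 * K(t[i], t[0]) * u[0]
        s += sum(K(t[i], t[j]) * u[j] for j in range(1, i))
        u.append((f(t[i]) + lam * dt * s)
                 / (1 - lam * dt * K(t[i], t[i]) / 2))
    return t, u

# u(x) = x - int_0^x (x - t) u(t) dt  =>  lam = 1, K(x,t) = -(x - t)
t, u = volterra2_trapezoid(lambda x: x, lambda x, tt: -(x - tt),
                           1.0, 0.0, 1.0, 200)
print(u[-1], math.sin(1.0))
```

Unlike the successive-approximation loop, this scheme sweeps the grid once, solving for each u_i from the previously computed values.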

5.3 Solving 1st-Kind Volterra Equations

5.3.1 Reduction to 2nd-Kind
\[
f(x) = \lambda \int_0^x K(x,t)u(t)\,dt
\]
If K(x,x) ≠ 0, then differentiating both sides with respect to x:
\[
\frac{df}{dx} = \lambda \int_0^x \frac{\partial K(x,t)}{\partial x}u(t)\,dt + \lambda K(x,x)u(x)
\]
\[
u(x) = \frac{1}{\lambda K(x,x)}\frac{df}{dx} - \int_0^x \frac{1}{K(x,x)}\frac{\partial K(x,t)}{\partial x}u(t)\,dt
\]
Let
\[
g(x) = \frac{1}{\lambda K(x,x)}\frac{df}{dx}
\quad\text{and}\quad
H(x,t) = -\frac{1}{K(x,x)}\frac{\partial K(x,t)}{\partial x}.
\]
Then
\[
u(x) = g(x) + \int_0^x H(x,t)u(t)\,dt.
\]
This is a Volterra equation of the 2nd kind.


Example: Reduce the following Volterra equation of the 1st kind to an equation of the 2nd kind and solve it:
\[
\sin x = \lambda \int_0^x e^{x-t}u(t)\,dt. \tag{13}
\]
In this case, λ = 1, K(x,t) = e^{x−t}, and f(x) = sin x. Since K(x,x) = 1 ≠ 0, eq. (13) becomes:
\[
u(x) = \cos x - \int_0^x e^{x-t}u(t)\,dt.
\]
This is a special case of eq. (11) with f(x) = cos x and λ = −1:
\begin{align*}
u(x) &= \cos x - \int_0^x \cos t\,dt \\
&= \cos x - [\sin t]_0^x \\
&= \cos x - \sin x.
\end{align*}

5.3.2 Laplace Transform

If K(x,t) = K(x − t), then a Laplace transform may be used to solve the equation. Define the Laplace transform pairs U(s) ≡ L{u(x)}, F(s) ≡ L{f(x)}, and K(s) ≡ L{K(x)}.
Now, since
\[
\mathcal{L}\{K * u\} = K(s)U(s),
\]
applying the Laplace transform to the integral equation results in
\[
F(s) = \lambda K(s)U(s)
\qquad\Longrightarrow\qquad
U(s) = \frac{F(s)}{\lambda K(s)}, \quad K(s) \ne 0,
\]
and performing the inverse Laplace transform:
\[
u(x) = \frac{1}{\lambda}\mathcal{L}^{-1}\left\{\frac{F(s)}{K(s)}\right\}, \quad K(s) \ne 0.
\]

Example: Use the Laplace transform method to solve eq. (13):
\[
\sin x = \lambda \int_0^x e^{x-t}u(t)\,dt.
\]
Let K(s) = L{e^x} = 1/(s − 1) and substitute this and L{sin x} = 1/(s² + 1) to get
\[
\frac{1}{s^2+1} = \frac{\lambda}{s-1}U(s).
\]
Solve for U(s):
\begin{align*}
U(s) &= \frac{1}{\lambda}\,\frac{s-1}{s^2+1} \\
&= \frac{1}{\lambda}\left[\frac{s}{s^2+1} - \frac{1}{s^2+1}\right].
\end{align*}
Use the inverse transform to find u(x):
\begin{align*}
u(x) &= \frac{1}{\lambda}\mathcal{L}^{-1}\left\{\frac{s}{s^2+1} - \frac{1}{s^2+1}\right\} \\
&= \frac{1}{\lambda}(\cos x - \sin x)
\end{align*}
which is the same result found using the reduction to 2nd kind method when λ = 1.
Example: Abel's equation (2) formulated as an integral equation is
\[
\sqrt{2g}\,f(x) = \int_0^x \frac{\phi(t)}{\sqrt{x-t}}\,dt.
\]
This describes the shape of a wire in a vertical plane along which a bead must descend under the influence of gravity a distance y_0 in a predetermined time f(y_0).
To solve Abel's equation, let F(s) be the Laplace transform of f(x) and Φ(s) be the Laplace transform of φ(x). Also let
\[
K(s) = \mathcal{L}\left\{\frac{1}{\sqrt{x}}\right\} = \sqrt{\frac{\pi}{s}}
\qquad \left(\Gamma\!\left(\tfrac{1}{2}\right) = \sqrt{\pi}\right).
\]
The Laplace transform of eq. (2) is
\[
\sqrt{2g}\,F(s) = \sqrt{\frac{\pi}{s}}\,\Phi(s)
\]
or, solving for Φ(s):
\[
\Phi(s) = \sqrt{\frac{2g}{\pi}}\,\sqrt{s}\,F(s).
\]
So φ(x) is given by:
\[
\phi(x) = \sqrt{\frac{2g}{\pi}}\,\mathcal{L}^{-1}\{\sqrt{s}\,F(s)\}.
\]
Since L^{−1}{√s} does not exist, the convolution theorem cannot be used directly to solve for φ(x). However, by introducing H(s) ≡ F(s)/√s the equation may be rewritten as:
\[
\phi(x) = \sqrt{\frac{2g}{\pi}}\,\mathcal{L}^{-1}\{s\,H(s)\}.
\]
Now
\begin{align*}
h(x) &= \mathcal{L}^{-1}\{H(s)\} \\
&= \mathcal{L}^{-1}\left\{\frac{1}{\sqrt{s}}F(s)\right\} \\
&= \frac{1}{\sqrt{\pi}}\int_0^x \frac{f(t)}{\sqrt{x-t}}\,dt
\end{align*}
and since h(0) = 0,
\begin{align*}
\frac{dh}{dx} &= \mathcal{L}^{-1}\{sH(s) - h(0)\} \\
&= \mathcal{L}^{-1}\{sH(s)\}.
\end{align*}
Finally, use dh/dx to find φ(x):
\begin{align*}
\phi(x) &= \sqrt{\frac{2g}{\pi}}\,\frac{d}{dx}\left[\frac{1}{\sqrt{\pi}}\int_0^x \frac{f(t)}{\sqrt{x-t}}\,dt\right] \\
&= \frac{\sqrt{2g}}{\pi}\,\frac{d}{dx}\int_0^x \frac{f(t)}{\sqrt{x-t}}\,dt
\end{align*}
which is the solution to Abel's equation (2).

6 Fredholm Equations

Fredholm integral equations are a type of linear integral equation (1) with b(x) = b, a constant:
\[
h(x)u(x) = f(x) + \lambda \int_a^b K(x,t)u(t)\,dt \tag{14}
\]
The equation is said to be of the first kind when h(x) = 0,
\[
f(x) = \lambda \int_a^b K(x,t)u(t)\,dt \tag{15}
\]
and of the second kind when h(x) = 1,
\[
u(x) = f(x) + \lambda \int_a^b K(x,t)u(t)\,dt \tag{16}
\]

6.1 Relation to Boundary Value Problems

A general boundary value problem is given by:
\[
\frac{d^2y}{dx^2} + A(x)\frac{dy}{dx} + B(x)y = g(x),
\qquad y(a) = c_1,\ y(b) = c_2. \tag{17}
\]
It is equivalent to the Fredholm equation
\[
y(x) = f(x) + \int_a^b K(x,t)y(t)\,dt
\]
where
\[
f(x) = c_1 + \frac{x-a}{b-a}(c_2 - c_1) + \int_a^x (x-t)g(t)\,dt - \frac{x-a}{b-a}\int_a^b (b-t)g(t)\,dt
\]
and
\[
K(x,t) = \begin{cases}
\dfrac{x-b}{b-a}\left[A(t) - (a-t)\left[A'(t) - B(t)\right]\right] & \text{if } x > t \\[2mm]
\dfrac{x-a}{b-a}\left[A(t) - (b-t)\left[A'(t) - B(t)\right]\right] & \text{if } x < t
\end{cases}
\]

6.2 Fredholm Equations of the 2nd Kind With Separable Kernels

A separable kernel, also called a degenerate kernel, is a kernel of the form
\[
K(x,t) = \sum_{k=1}^{n} a_k(x)b_k(t) \tag{18}
\]
which is a finite sum of products a_k(x)b_k(t), where a_k(x) is a function of x only and b_k(t) is a function of t only.

6.2.1 Nonhomogeneous Equations

Given the kernel in eq. (18), the general nonhomogeneous second kind Fredholm equation,
\[
u(x) = f(x) + \lambda \int_a^b K(x,t)u(t)\,dt \tag{19}
\]
becomes
\begin{align*}
u(x) &= f(x) + \lambda \int_a^b \sum_{k=1}^{n} a_k(x)b_k(t)u(t)\,dt \\
&= f(x) + \lambda \sum_{k=1}^{n} a_k(x)\int_a^b b_k(t)u(t)\,dt.
\end{align*}
Define c_k as
\[
c_k = \int_a^b b_k(t)u(t)\,dt
\]
(i.e. the integral portion). Multiply both sides by b_m(x), m = 1, 2, ..., n,
\[
b_m(x)u(x) = b_m(x)f(x) + \lambda \sum_{k=1}^{n} c_k b_m(x)a_k(x),
\]
then integrate both sides from a to b:
\[
\int_a^b b_m(x)u(x)\,dx = \int_a^b b_m(x)f(x)\,dx + \lambda \sum_{k=1}^{n} c_k \int_a^b b_m(x)a_k(x)\,dx.
\]
Define f_m and a_{mk} as
\[
f_m = \int_a^b b_m(x)f(x)\,dx,
\qquad
a_{mk} = \int_a^b b_m(x)a_k(x)\,dx,
\]
so finally the equation becomes
\[
c_m = f_m + \lambda \sum_{k=1}^{n} a_{mk}c_k, \quad m = 1, 2, \ldots, n.
\]
That is, it is a nonhomogeneous system of n linear equations in c_1, c_2, ..., c_n. f_m and a_{mk} are known since b_m(x), f(x), and a_k(x) are all given.

This system can be solved by finding all c_m, m = 1, 2, ..., n, and substituting into u(x) = f(x) + λ Σ_{k=1}^{n} c_k a_k(x) to find u(x). In matrix notation, let
\[
C = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}, \quad
F = \begin{pmatrix} f_1 \\ f_2 \\ \vdots \\ f_n \end{pmatrix}, \quad
A = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}
\]
so that
\begin{align*}
C &= F + \lambda AC \\
C - \lambda AC &= F \\
(I - \lambda A)C &= F
\end{align*}
which has a unique solution if the determinant |I − λA| ≠ 0, and either no solution or infinitely many solutions when |I − λA| = 0.
Example: Use the above method to solve the Fredholm integral equation
\[
u(x) = x + \lambda \int_0^1 (xt^2 + x^2t)u(t)\,dt. \tag{20}
\]
The kernel is K(x,t) = xt² + x²t = Σ_{k=1}^{2} a_k(x)b_k(t), where a_1(x) = x, a_2(x) = x², b_1(t) = t², and b_2(t) = t. Given that f(x) = x, define f_1 and f_2:
\begin{align*}
f_1 &= \int_0^1 b_1(t)f(t)\,dt = \int_0^1 t^3\,dt = \frac{1}{4} \\
f_2 &= \int_0^1 b_2(t)f(t)\,dt = \int_0^1 t^2\,dt = \frac{1}{3}
\end{align*}
so that the matrix F is
\[
F = \begin{pmatrix} \frac{1}{4} \\[1mm] \frac{1}{3} \end{pmatrix}.
\]
Now find the elements of the matrix A:
\begin{align*}
a_{11} &= \int_0^1 b_1(t)a_1(t)\,dt = \int_0^1 t^3\,dt = \frac{1}{4} \\
a_{12} &= \int_0^1 b_1(t)a_2(t)\,dt = \int_0^1 t^4\,dt = \frac{1}{5} \\
a_{21} &= \int_0^1 b_2(t)a_1(t)\,dt = \int_0^1 t^2\,dt = \frac{1}{3} \\
a_{22} &= \int_0^1 b_2(t)a_2(t)\,dt = \int_0^1 t^3\,dt = \frac{1}{4}
\end{align*}
so that A is
\[
A = \begin{pmatrix} \frac{1}{4} & \frac{1}{5} \\[1mm] \frac{1}{3} & \frac{1}{4} \end{pmatrix}.
\]
Therefore, C = F + λAC becomes
\[
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}
= \begin{pmatrix} \frac{1}{4} \\[1mm] \frac{1}{3} \end{pmatrix}
+ \lambda \begin{pmatrix} \frac{1}{4} & \frac{1}{5} \\[1mm] \frac{1}{3} & \frac{1}{4} \end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}
\]
or:
\[
\begin{pmatrix} 1 - \frac{\lambda}{4} & -\frac{\lambda}{5} \\[1mm] -\frac{\lambda}{3} & 1 - \frac{\lambda}{4} \end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}
= \begin{pmatrix} \frac{1}{4} \\[1mm] \frac{1}{3} \end{pmatrix}.
\]
If the determinant of the leftmost term is not equal to zero, then there is a unique solution for c_1 and c_2 that can be evaluated by finding the inverse of that term. That is, if
\begin{align*}
\left(1 - \frac{\lambda}{4}\right)^2 - \frac{\lambda^2}{15} &\ne 0 \\
1 - \frac{\lambda}{2} + \frac{\lambda^2}{16} - \frac{\lambda^2}{15} &\ne 0 \\
\frac{240 - 120\lambda - \lambda^2}{240} &\ne 0 \\
240 - 120\lambda - \lambda^2 &\ne 0.
\end{align*}
If this is true, then c_1 and c_2 can be found as:
\[
c_1 = \frac{60 + \lambda}{240 - 120\lambda - \lambda^2},
\qquad
c_2 = \frac{80}{240 - 120\lambda - \lambda^2},
\]
then u(x) is found from
\begin{align*}
u(x) &= f(x) + \lambda \sum_{k=1}^{n} c_k a_k(x) \\
&= x + \lambda \sum_{k=1}^{2} c_k a_k(x) \\
&= x + \lambda\left[c_1 a_1(x) + c_2 a_2(x)\right] \\
&= x + \frac{(60\lambda + \lambda^2)x}{240 - 120\lambda - \lambda^2} + \frac{80\lambda x^2}{240 - 120\lambda - \lambda^2} \\
&= \frac{(240 - 60\lambda)x + 80\lambda x^2}{240 - 120\lambda - \lambda^2}, \qquad 240 - 120\lambda - \lambda^2 \ne 0.
\end{align*}
240 120 2

Homogeneous Equations

Given the kernel defined in eq. (18), the general homogeneous second kind Fredholm equation,
Z

u(x) =

K(x, t)u(t) dt

(21)

becomes
u(x) =
=

Z bX
n
a k=1
n
X

ak (x)bk (t)u(t) dt
Z

ak (x)

bk (t)u(t) dt
a

k=1

Define ck , multiply and integrate, then define amk as in the previous section so finally the
equation becomes
n
X
amk ck , m = 1, 2, . . . , n.
cm =
k=1

That is, it is a homogeneous system of n linear equations in c1 , c2 , . . . , cn . Again,


system
Pthe
n
can be solved by finding all cm , m = 1, 2, . . . , n, and substituting into u(x) = k=1 ck ak (x)
to find u(x). Using the same matrices defined earlier, the equation can be written as
C = AC
C AC = 0
(I A)C = 0.
Because this is a homogeneous system of equations, when the determinant |I A| 6= 0 then
the trivial solution C = 0 is the only solution, and the solution to the integral equation is
u(x) = 0. Otherwise, if |I A| = 0 then there may be zero or infinitely many solutions whose
eigenfunctions correspond to solutions u1 (x), u2 (x), . . . , un (x) of the Fredholm equation.
Example: Use the above method to solve the Fredholm integral equation
\[
u(x) = \lambda \int_0^\pi (\cos^2 x\cos 2t + \cos 3x\cos^3 t)u(t)\,dt. \tag{22}
\]
The kernel is K(x,t) = cos²x cos 2t + cos 3x cos³t = Σ_{k=1}^{2} a_k(x)b_k(t), where a_1(x) = cos²x, a_2(x) = cos 3x, b_1(t) = cos 2t, and b_2(t) = cos³t. Now find the elements of the matrix A:
\begin{align*}
a_{11} &= \int_0^\pi b_1(t)a_1(t)\,dt = \int_0^\pi \cos 2t\cos^2 t\,dt = \frac{\pi}{4} \\
a_{12} &= \int_0^\pi b_1(t)a_2(t)\,dt = \int_0^\pi \cos 2t\cos 3t\,dt = 0 \\
a_{21} &= \int_0^\pi b_2(t)a_1(t)\,dt = \int_0^\pi \cos^3 t\cos^2 t\,dt = 0 \\
a_{22} &= \int_0^\pi b_2(t)a_2(t)\,dt = \int_0^\pi \cos^3 t\cos 3t\,dt = \frac{\pi}{8}
\end{align*}
and follow the method of the previous example to find c_1 and c_2:
\[
\begin{pmatrix} 1 - \frac{\lambda\pi}{4} & 0 \\[1mm] 0 & 1 - \frac{\lambda\pi}{8} \end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
\]
For c_1 and c_2 to not be the trivial solution, the determinant must be zero. That is,
\[
\begin{vmatrix} 1 - \frac{\lambda\pi}{4} & 0 \\[1mm] 0 & 1 - \frac{\lambda\pi}{8} \end{vmatrix}
= \left(1 - \frac{\lambda\pi}{4}\right)\left(1 - \frac{\lambda\pi}{8}\right) = 0,
\]
which has solutions λ_1 = 4/π and λ_2 = 8/π, the eigenvalues of eq. (22). There are two corresponding eigenfunctions u_1(x) and u_2(x) that are solutions to eq. (22). For λ_1 = 4/π, c_1 = c_1 (i.e. it is a parameter) and c_2 = 0. Therefore u_1(x) is
\[
u_1(x) = \frac{4}{\pi}c_1\cos^2 x.
\]
To form a specific solution (which can be used to find a general solution, since eq. (22) is linear), let c_1 = π/4, so that
\[
u_1(x) = \cos^2 x.
\]
Similarly, for λ_2 = 8/π, c_1 = 0 and c_2 = c_2, so u_2(x) is
\[
u_2(x) = \frac{8}{\pi}c_2\cos 3x.
\]
To form a specific solution, let c_2 = π/8, so that
\[
u_2(x) = \cos 3x.
\]
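The matrix entries and eigenvalues above can be checked symbolically; a sketch assuming SymPy is available:

```python
from sympy import symbols, integrate, cos, pi, solve, Matrix, eye

t, lam = symbols('t lambda')

# a_mk = int_0^pi b_m(t) a_k(t) dt with
# a1 = cos^2 t, a2 = cos 3t, b1 = cos 2t, b2 = cos^3 t
a11 = integrate(cos(2*t)*cos(t)**2, (t, 0, pi))   # pi/4
a12 = integrate(cos(2*t)*cos(3*t), (t, 0, pi))    # 0
a21 = integrate(cos(t)**3*cos(t)**2, (t, 0, pi))  # 0
a22 = integrate(cos(t)**3*cos(3*t), (t, 0, pi))   # pi/8

A = Matrix([[a11, a12], [a21, a22]])
det = (eye(2) - lam*A).det()
print(solve(det, lam))  # the eigenvalues 4/pi and 8/pi
```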

6.2.3 Fredholm Alternative

The homogeneous Fredholm equation (21),
\[
u(x) = \lambda \int_a^b K(x,t)u(t)\,dt,
\]
has the corresponding nonhomogeneous equation (19),
\[
u(x) = f(x) + \lambda \int_a^b K(x,t)u(t)\,dt.
\]
If the homogeneous equation has only the trivial solution u(x) = 0, then the nonhomogeneous equation has exactly one solution. Otherwise, if the homogeneous equation has nontrivial solutions, then the nonhomogeneous equation has either no solutions or infinitely many solutions, depending on the term f(x). If f(x) is orthogonal to every solution u_i(x) of the homogeneous equation, then the associated nonhomogeneous equation will have a solution. The two functions f(x) and u_i(x) are orthogonal if ∫_a^b f(x)u_i(x) dx = 0.
6.2.4 Approximate Kernels

Often a nonseparable kernel may have a Taylor or other series expansion that is separable. This approximate kernel is denoted by M(x,t) ≈ K(x,t), and the corresponding approximate solution for u(x) is v(x). So the approximate solution to eq. (19) is given by
\[
v(x) = f(x) + \lambda \int_a^b M(x,t)v(t)\,dt.
\]
The error involved is ε = |u(x) − v(x)| and can be estimated in some cases.
Example: Use an approximating kernel to find an approximate solution to
\[
u(x) = \sin x + \int_0^1 (1 - x\cos xt)u(t)\,dt. \tag{23}
\]
The kernel K(x,t) = 1 − x cos xt is not separable, but a finite number of terms from its Maclaurin series expansion
\[
1 - x\left(1 - \frac{x^2t^2}{2!} + \frac{x^4t^4}{4!} - \cdots\right)
= 1 - x + \frac{x^3t^2}{2!} - \frac{x^5t^4}{4!} + \cdots
\]
is separable in x and t. Therefore consider the first three terms of this series as the approximate kernel
\[
M(x,t) = 1 - x + \frac{x^3t^2}{2!}.
\]

The associated approximate integral equation in v(x) is
\[
v(x) = \sin x + \int_0^1 \left(1 - x + \frac{x^3t^2}{2!}\right)v(t)\,dt. \tag{24}
\]
Now use the previous method for solving Fredholm equations with separable kernels to solve eq. (24):
\[
M(x,t) = \sum_{k=1}^{2} a_k(x)b_k(t)
\]
where a_1(x) = 1 − x, a_2(x) = x³, b_1(t) = 1, b_2(t) = t²/2. Given that f(x) = sin x, find f_1 and f_2:
\begin{align*}
f_1 &= \int_0^1 b_1(t)f(t)\,dt = \int_0^1 \sin t\,dt = 1 - \cos 1 \\
f_2 &= \int_0^1 b_2(t)f(t)\,dt = \int_0^1 \frac{t^2}{2}\sin t\,dt = \frac{1}{2}\cos 1 + \sin 1 - 1.
\end{align*}
Now find the elements of the matrix A:
\begin{align*}
a_{11} &= \int_0^1 b_1(t)a_1(t)\,dt = \int_0^1 (1-t)\,dt = \frac{1}{2} \\
a_{12} &= \int_0^1 b_1(t)a_2(t)\,dt = \int_0^1 t^3\,dt = \frac{1}{4} \\
a_{21} &= \int_0^1 b_2(t)a_1(t)\,dt = \int_0^1 \frac{t^2}{2}(1-t)\,dt = \frac{1}{24} \\
a_{22} &= \int_0^1 b_2(t)a_2(t)\,dt = \int_0^1 \frac{t^5}{2}\,dt = \frac{1}{12}.
\end{align*}
Therefore, C = F + AC becomes
\[
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}
= \begin{pmatrix} 1 - \cos 1 \\ \frac{1}{2}\cos 1 + \sin 1 - 1 \end{pmatrix}
+ \begin{pmatrix} \frac{1}{2} & \frac{1}{4} \\[1mm] \frac{1}{24} & \frac{1}{12} \end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}
\]
or
\[
\begin{pmatrix} 1 - \frac{1}{2} & -\frac{1}{4} \\[1mm] -\frac{1}{24} & 1 - \frac{1}{12} \end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}
= \begin{pmatrix} 1 - \cos 1 \\ \frac{1}{2}\cos 1 + \sin 1 - 1 \end{pmatrix}.
\]
The determinant is
\[
\left(1 - \frac{1}{2}\right)\left(1 - \frac{1}{12}\right) - \frac{1}{4}\cdot\frac{1}{24}
= \frac{11}{24} - \frac{1}{96} = \frac{43}{96}
\]
which is non-zero, so there is a unique solution for C:
\begin{align*}
c_1 &= \frac{64}{43} - \frac{76}{43}\cos 1 + \frac{24}{43}\sin 1 \approx 1.0031 \\
c_2 &= -\frac{44}{43} + \frac{20}{43}\cos 1 + \frac{48}{43}\sin 1 \approx 0.1674.
\end{align*}
Therefore, the solution to eq. (24) is
\[
v(x) = \sin x + c_1 a_1(x) + c_2 a_2(x) \approx \sin x + 1.0031(1 - x) + 0.1674x^3
\]
which is an approximate solution to eq. (23). The exact solution is known to be u(x) = 1, so comparing some values of v(x):

x      0       0.25    0.5     0.75    1
u(x)   1       1       1       1       1
v(x)   1.0031  1.0023  1.0019  1.0030  1.0088
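The coefficients and the comparison table can be reproduced with a few lines of plain Python (variable names my own):

```python
import math

cos1, sin1 = math.cos(1.0), math.sin(1.0)

# Solve (I - A) C = F with A = [[1/2, 1/4], [1/24, 1/12]] and
# F = [1 - cos 1, (1/2) cos 1 + sin 1 - 1]
m11, m12 = 1 - 1/2, -1/4
m21, m22 = -1/24, 1 - 1/12
f1, f2 = 1 - cos1, cos1/2 + sin1 - 1

det = m11*m22 - m12*m21            # 43/96
c1 = (f1*m22 - m12*f2) / det       # ~1.0031
c2 = (m11*f2 - m21*f1) / det       # ~0.1674

def v(x):
    # approximate solution v(x) = sin x + c1 (1 - x) + c2 x^3
    return math.sin(x) + c1*(1 - x) + c2*x**3

for x in (0, 0.25, 0.5, 0.75, 1):
    print(x, round(v(x), 4))       # compare with the exact u(x) = 1
```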

6.3 Fredholm Equations of the 2nd Kind With Symmetric Kernels

A symmetric kernel is a kernel that satisfies
\[
K(x,t) = K(t,x).
\]
If the kernel is a complex-valued function, then to be symmetric it must satisfy K(x,t) = K̄(t,x), where K̄ denotes the complex conjugate of K.
These types of integral equations may be solved by using a resolvent kernel Γ(x,t;λ) that can be found from the orthonormal eigenfunctions of the homogeneous form of the equation with a symmetric kernel.

6.3.1 Homogeneous Equations

The eigenfunctions of a homogeneous Fredholm equation with a symmetric kernel are functions u_n(x) that satisfy
\[
u_n(x) = \lambda_n \int_a^b K(x,t)u_n(t)\,dt.
\]
It can be shown that the eigenvalues of a symmetric kernel are real, and that the eigenfunctions u_n(x) and u_m(x) corresponding to two eigenvalues λ_n and λ_m with λ_n ≠ λ_m are orthogonal (see the earlier section on the Fredholm Alternative). It can also be shown that there is a finite number of eigenfunctions corresponding to each eigenvalue of a kernel if the kernel is square integrable on {(x,t) | a ≤ x ≤ b, a ≤ t ≤ b}, that is:
\[
\int_a^b \int_a^b K^2(x,t)\,dx\,dt = B^2 < \infty.
\]
Once the orthonormal eigenfunctions φ_1(x), φ_2(x), ... of the homogeneous form of the equation are found, the resolvent kernel can be expressed as an infinite series:
\[
\Gamma(x,t;\lambda) = \sum_{k=1}^{\infty} \frac{\phi_k(x)\phi_k(t)}{\lambda_k - \lambda}, \qquad \lambda \ne \lambda_k,
\]
so the solution of the 2nd-kind Fredholm equation is
\[
u(x) = f(x) + \lambda \sum_{k=1}^{\infty} \frac{a_k\phi_k(x)}{\lambda_k - \lambda}
\]
where
\[
a_k = \int_a^b f(x)\phi_k(x)\,dx.
\]

6.4 Fredholm Equations of the 2nd Kind With General Kernels

6.4.1 Fredholm Resolvent Kernel

Evaluate the Fredholm resolvent kernel Γ(x,t;λ) in
\[
u(x) = f(x) + \lambda \int_a^b \Gamma(x,t;\lambda)f(t)\,dt \tag{25}
\]
from
\[
\Gamma(x,t;\lambda) = \frac{D(x,t;\lambda)}{D(\lambda)}
\]

where D(x,t;λ) is called the Fredholm minor and D(λ) is called the Fredholm determinant. The minor is defined as
\[
D(x,t;\lambda) = K(x,t) + \sum_{n=1}^{\infty} \frac{(-\lambda)^n}{n!}B_n(x,t)
\]
where
\[
B_n(x,t) = C_n K(x,t) - n\int_a^b K(x,s)B_{n-1}(s,t)\,ds,
\qquad B_0(x,t) = K(x,t),
\]
and
\[
C_n = \int_a^b B_{n-1}(t,t)\,dt, \quad n = 1, 2, \ldots,
\qquad C_0 = 1,
\]
and the determinant is defined as
\[
D(\lambda) = \sum_{n=0}^{\infty} \frac{(-\lambda)^n}{n!}C_n, \qquad C_0 = 1.
\]
B_n(x,t) and C_n may each be expressed as a repeated integral where the integrand is a determinant:
\[
B_n(x,t) = \int_a^b \int_a^b \cdots \int_a^b
\begin{vmatrix}
K(x,t) & K(x,t_1) & \cdots & K(x,t_n) \\
K(t_1,t) & K(t_1,t_1) & \cdots & K(t_1,t_n) \\
\vdots & \vdots & \ddots & \vdots \\
K(t_n,t) & K(t_n,t_1) & \cdots & K(t_n,t_n)
\end{vmatrix}
\,dt_1\,dt_2\ldots dt_n
\]
\[
C_n = \int_a^b \int_a^b \cdots \int_a^b
\begin{vmatrix}
K(t_1,t_1) & K(t_1,t_2) & \cdots & K(t_1,t_n) \\
K(t_2,t_1) & K(t_2,t_2) & \cdots & K(t_2,t_n) \\
\vdots & \vdots & \ddots & \vdots \\
K(t_n,t_1) & K(t_n,t_2) & \cdots & K(t_n,t_n)
\end{vmatrix}
\,dt_1\,dt_2\ldots dt_n
\]

6.4.2 Iterated Kernels

Define the iterated kernel K_i(x,y) as
\[
K_i(x,y) \equiv \int_a^b K(x,t)K_{i-1}(t,y)\,dt,
\qquad K_1(x,t) \equiv K(x,t),
\]
so
\[
\phi_i(x) \equiv \int_a^b K_i(x,y)f(y)\,dy
\]
and
\[
u_n(x) = f(x) + \lambda\phi_1(x) + \lambda^2\phi_2(x) + \cdots + \lambda^n\phi_n(x)
= f(x) + \sum_{i=1}^{n} \lambda^i\phi_i(x).
\]
u_n(x) converges to u(x) when |λ|B < 1, where
\[
B = \sqrt{\int_a^b \int_a^b K^2(x,t)\,dx\,dt}.
\]
6.4.3 Neumann Series

The Neumann series
\[
u(x) = f(x) + \sum_{i=1}^{\infty} \lambda^i\phi_i(x)
\]
is convergent, and substituting for φ_i(x) as in the previous section it can be rewritten as
\begin{align*}
u(x) &= f(x) + \sum_{i=1}^{\infty} \lambda^i \int_a^b K_i(x,t)f(t)\,dt \\
&= f(x) + \lambda \int_a^b \left[\sum_{i=1}^{\infty} \lambda^{i-1}K_i(x,t)\right]f(t)\,dt \\
&= f(x) + \lambda \int_a^b \Gamma(x,t;\lambda)f(t)\,dt.
\end{align*}
Therefore the Neumann resolvent kernel for the general nonhomogeneous Fredholm integral equation of the second kind is
\[
\Gamma(x,t;\lambda) = \sum_{i=1}^{\infty} \lambda^{i-1}K_i(x,t).
\]

6.5 Approximate Solutions To Fredholm Equations of the 2nd Kind

The solution u(x) of the general nonhomogeneous Fredholm integral equation of the second kind, eq. (19), can be approximated by the partial sum
\[
S_N(x) = \sum_{k=1}^{N} c_k\phi_k(x) \tag{26}
\]
of N linearly independent functions φ_1, φ_2, ..., φ_N on the interval (a, b). The associated error ε(x, c_1, c_2, ..., c_N) depends on x and the choice of the coefficients c_1, c_2, ..., c_N. Therefore, when substituting the approximate solution for u(x), the equation becomes
\[
S_N(x) = f(x) + \lambda \int_a^b K(x,t)S_N(t)\,dt + \epsilon(x, c_1, c_2, \ldots, c_N). \tag{27}
\]
Now N conditions must be found to give the N equations that will determine the coefficients c_1, c_2, ..., c_N.
6.5.1 Collocation Method

Assume that the error term disappears at the N points x_1, x_2, ..., x_N. This reduces the equation to N equations:
\[
S_N(x_i) = f(x_i) + \lambda \int_a^b K(x_i,t)S_N(t)\,dt, \quad i = 1, 2, \ldots, N,
\]
where the coefficients of S_N can be found by substituting the N linearly independent functions φ_1, φ_2, ..., φ_N and the values x_1, x_2, ..., x_N where the error vanishes, then performing the integration and solving for the coefficients.
Example: Use the Collocation method to solve the Fredholm integral equation of the
second kind
Z 1
u(x) = x +
xtu(t) dt.
(28)
1

Set N = 3 and choose the linearly independent functions 1 (x) = 1, 2 (x) = x, and 3 (x) = x2 .
Therefore the approximate solution is
S3 (x) =

3
X

ck k (x) = c1 + c2 x + c3 x2 .

k=1

Substituting into eq. (27) obtains


Z

S3 (x) = x +

xt(c1 + c2 t + c3 t2 ) dt + (x, c1 , c2 , c3 )

= x+x

(c1 t + c2 t2 + c3 t3 ) dt + (x, c1 , c2 , c3 )

32

and performing the integration results in

    \int_{-1}^{1} (c_1 t + c_2 t^2 + c_3 t^3)\, dt
        = \left[ \frac{c_1 t^2}{2} + \frac{c_2 t^3}{3} + \frac{c_3 t^4}{4} \right]_{-1}^{1}
        = \left( \frac{c_1}{2} + \frac{c_2}{3} + \frac{c_3}{4} \right) - \left( \frac{c_1}{2} - \frac{c_2}{3} + \frac{c_3}{4} \right)
        = \frac{2}{3} c_2,

so eq. (27) becomes

    c_1 + c_2 x + c_3 x^2 = x + \frac{2}{3} c_2 x + \epsilon(x, c_1, c_2, c_3)
                          = x \left( 1 + \frac{2}{3} c_2 \right) + \epsilon(x, c_1, c_2, c_3).

Now three equations are needed to find c_1, c_2, and c_3. Assert that the error term is zero at
three points, arbitrarily chosen to be x_1 = 1, x_2 = 0, and x_3 = -1. This gives

    at x_1 = 1:    c_1 + c_2 + c_3 = 1 + \frac{2}{3} c_2,    i.e.    c_1 + \frac{1}{3} c_2 + c_3 = 1,

    at x_2 = 0:    c_1 + 0 + 0 = 0,    i.e.    c_1 = 0,

    at x_3 = -1:    c_1 - c_2 + c_3 = -\left( 1 + \frac{2}{3} c_2 \right),    i.e.    c_1 - \frac{1}{3} c_2 + c_3 = -1.

Solving for c_1, c_2, and c_3 gives c_1 = 0, c_2 = 3, and c_3 = 0. Therefore the approximate solution
to eq. (28) is S_3(x) = 3x. In this case the exact solution is known to be u(x) = 3x, so the
approximate solution happens to coincide with the exact solution, owing to the selection of the
linearly independent functions.
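The collocation computation above can be reproduced numerically. The sketch below is our illustration, not part of the original text: it assembles and solves the 3x3 collocation system for eq. (28) with NumPy, using a Gauss-Legendre rule that integrates these polynomial integrands exactly:

```python
import numpy as np

# basis phi_1 = 1, phi_2 = x, phi_3 = x^2 and the collocation points
phis = [lambda x: x**0, lambda x: x, lambda x: x**2]
xs = np.array([-1.0, 0.0, 1.0])        # points where the error is forced to vanish

# 5-point Gauss-Legendre on [-1, 1] is exact for these polynomial integrands
t, w = np.polynomial.legendre.leggauss(5)

K = lambda x, tt: x * tt               # kernel of eq. (28)
f = lambda x: x
# row i, column k: phi_k(x_i) - int_{-1}^{1} K(x_i, t) phi_k(t) dt
A = np.array([[p(xi) - np.sum(w * K(xi, t) * p(t)) for p in phis] for xi in xs])
c = np.linalg.solve(A, f(xs))          # c ~ (0, 3, 0), i.e. S3(x) = 3x
print(c)
```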

6.5.2 Galerkin Method

Assume that the error term is orthogonal to N given linearly independent functions \psi_1, \psi_2, \ldots, \psi_N
of x on the interval (a, b). The N conditions therefore become

    \int_a^b \psi_j(x)\, \epsilon(x, c_1, c_2, \ldots, c_N)\, dx
      = \int_a^b \psi_j(x) \left[ S_N(x) - f(x) - \lambda \int_a^b K(x, t)\, S_N(t)\, dt \right] dx = 0,    j = 1, 2, \ldots, N,

which can be rewritten as either

    \int_a^b \psi_j(x) \left[ S_N(x) - \lambda \int_a^b K(x, t)\, S_N(t)\, dt \right] dx
      = \int_a^b \psi_j(x) f(x)\, dx,    j = 1, 2, \ldots, N,

or

    \int_a^b \psi_j(x) \left\{ \sum_{k=1}^{N} c_k \phi_k(x) - \lambda \int_a^b K(x, t) \left[ \sum_{k=1}^{N} c_k \phi_k(t) \right] dt \right\} dx
      = \int_a^b \psi_j(x) f(x)\, dx,    j = 1, 2, \ldots, N,

after substituting for S_N(x) as before. In general the linearly independent functions \psi_j(x) are different from the functions \phi_j(x), but the same functions may be used for
convenience.
Example: Use the Galerkin method to solve the Fredholm integral equation in eq. (28),

    u(x) = x + \int_{-1}^{1} x t\, u(t)\, dt,

using the same linearly independent functions \phi_1(x) = 1, \phi_2(x) = x, and \phi_3(x) = x^2, giving the
approximate solution

    S_3(x) = c_1 + c_2 x + c_3 x^2.

Substituting into eq. (27) results in the error term being

    \epsilon(x, c_1, c_2, c_3) = c_1 + c_2 x + c_3 x^2 - x - \int_{-1}^{1} x t\,(c_1 + c_2 t + c_3 t^2)\, dt.

The error must be orthogonal to three linearly independent functions, chosen to be \psi_1(x) = 1,
\psi_2(x) = x, and \psi_3(x) = x^2. Since \int_{-1}^{1} x t\,(c_1 + c_2 t + c_3 t^2)\, dt = \frac{2}{3} c_2 x, the condition for \psi_1(x) = 1 is

    \int_{-1}^{1} 1 \cdot \left[ c_1 + c_2 x + c_3 x^2 - x \int_{-1}^{1} t\,(c_1 + c_2 t + c_3 t^2)\, dt \right] dx = \int_{-1}^{1} 1 \cdot x\, dx
    \int_{-1}^{1} \left( c_1 + \frac{1}{3} c_2 x + c_3 x^2 \right) dx = \left[ \frac{x^2}{2} \right]_{-1}^{1}
    \left[ c_1 x + \frac{c_2 x^2}{6} + \frac{c_3 x^3}{3} \right]_{-1}^{1} = 0
    2 c_1 + \frac{2}{3} c_3 = 0;

the condition for \psi_2(x) = x is

    \int_{-1}^{1} x \left( c_1 + \frac{1}{3} c_2 x + c_3 x^2 \right) dx = \int_{-1}^{1} x^2\, dx
    \left[ \frac{c_1 x^2}{2} + \frac{c_2 x^3}{9} + \frac{c_3 x^4}{4} \right]_{-1}^{1} = \frac{2}{3}
    \frac{2}{9} c_2 = \frac{2}{3},    so    c_2 = 3;

and the condition for \psi_3(x) = x^2 is

    \int_{-1}^{1} x^2 \left( c_1 + \frac{1}{3} c_2 x + c_3 x^2 \right) dx = \int_{-1}^{1} x^3\, dx
    \left[ \frac{c_1 x^3}{3} + \frac{c_2 x^4}{12} + \frac{c_3 x^5}{5} \right]_{-1}^{1} = \left[ \frac{x^4}{4} \right]_{-1}^{1}
    \frac{2}{3} c_1 + \frac{2}{5} c_3 = 0.
Solving for c_1, c_2, and c_3 gives c_1 = 0, c_2 = 3, and c_3 = 0, resulting in the approximate solution
S_3(x) = 3x. This is the same result obtained using the collocation method, and it also
happens to be the exact solution of eq. (28), again owing to the selection of the linearly independent
functions.
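The Galerkin conditions can also be generated and solved symbolically. This sketch is ours (assuming SymPy is available); it rebuilds the moment equations for eq. (28) and recovers the same coefficients:

```python
import sympy as sp

x, t = sp.symbols('x t')
phis = [sp.Integer(1), x, x**2]        # trial (and test) functions
K, f = x * t, x                        # kernel and free term of eq. (28)
cs = sp.symbols('c1 c2 c3')

SN = sum(c * p for c, p in zip(cs, phis))
# error term: eps(x) = SN(x) - f(x) - int_{-1}^{1} K(x, t) SN(t) dt
eps = SN - f - sp.integrate(K * SN.subs(x, t), (t, -1, 1))
# Galerkin conditions: eps orthogonal to each test function on (-1, 1)
eqs = [sp.integrate(p * eps, (x, -1, 1)) for p in phis]
sol = sp.solve(eqs, cs)
print(sol)                             # c1 = 0, c2 = 3, c3 = 0
```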

6.6 Numerical Solutions To Fredholm Equations

A numerical solution to a general Fredholm integral equation can be found by approximating
the integral by a finite sum (using the trapezoid rule) and solving the resulting simultaneous
equations.
6.6.1 Nonhomogeneous Equations of the 2nd Kind

In the general Fredholm equation of the second kind, eq. (19),

    u(x) = f(x) + \lambda \int_a^b K(x, t)\, u(t)\, dt,

the interval (a, b) can be subdivided into n equal subintervals of width \Delta t = \frac{b - a}{n}. Let t_0 = a
and t_j = a + j\,\Delta t = t_0 + j\,\Delta t, and, since the variable is either t or x, let x_0 = t_0 = a, x_n = t_n = b,
and x_i = x_0 + i\,\Delta t (i.e. x_i = t_i). Also denote u(x_i) by u_i, f(x_i) by f_i, and K(x_i, t_j) by K_{ij}. Now
if the trapezoid rule is used to approximate the given equation, then:

    u(x) = f(x) + \lambda \int_a^b K(x, t)\, u(t)\, dt
         \approx f(x) + \lambda\,\Delta t \left[ \frac{1}{2} K(x, t_0) u(t_0) + K(x, t_1) u(t_1) + \cdots + K(x, t_{n-1}) u(t_{n-1}) + \frac{1}{2} K(x, t_n) u(t_n) \right]

or, more tersely:

    u(x) \approx f(x) + \lambda\,\Delta t \left[ \frac{1}{2} K(x, t_0) u_0 + K(x, t_1) u_1 + \cdots + \frac{1}{2} K(x, t_n) u_n \right].

There are n + 1 values of u_i, as i = 0, 1, 2, \ldots, n. Therefore the equation becomes a set of
n + 1 equations in the u_i:

    u_i = f_i + \lambda\,\Delta t \left[ \frac{1}{2} K_{i0} u_0 + K_{i1} u_1 + \cdots + K_{i(n-1)} u_{n-1} + \frac{1}{2} K_{in} u_n \right],    i = 0, 1, 2, \ldots, n,
that give the approximate solution to u(x) at x = x_i. The terms involving u may be moved to
the left side of the equations, resulting in n + 1 equations in u_0, u_1, u_2, \ldots, u_n:

    \left(1 - \lambda\frac{\Delta t}{2} K_{00}\right) u_0 - \lambda\Delta t\, K_{01} u_1 - \lambda\Delta t\, K_{02} u_2 - \cdots - \lambda\frac{\Delta t}{2} K_{0n} u_n = f_0
    -\lambda\frac{\Delta t}{2} K_{10} u_0 + \left(1 - \lambda\Delta t\, K_{11}\right) u_1 - \lambda\Delta t\, K_{12} u_2 - \cdots - \lambda\frac{\Delta t}{2} K_{1n} u_n = f_1
    \vdots
    -\lambda\frac{\Delta t}{2} K_{n0} u_0 - \lambda\Delta t\, K_{n1} u_1 - \lambda\Delta t\, K_{n2} u_2 - \cdots + \left(1 - \lambda\frac{\Delta t}{2} K_{nn}\right) u_n = f_n.
This may also be written in matrix form:

    K U = F,

where K is the matrix of coefficients

    K = \begin{pmatrix}
        1 - \lambda\frac{\Delta t}{2} K_{00} & -\lambda\Delta t\, K_{01} & \cdots & -\lambda\frac{\Delta t}{2} K_{0n} \\
        -\lambda\frac{\Delta t}{2} K_{10} & 1 - \lambda\Delta t\, K_{11} & \cdots & -\lambda\frac{\Delta t}{2} K_{1n} \\
        \vdots & \vdots & \ddots & \vdots \\
        -\lambda\frac{\Delta t}{2} K_{n0} & -\lambda\Delta t\, K_{n1} & \cdots & 1 - \lambda\frac{\Delta t}{2} K_{nn}
    \end{pmatrix},

U is the column vector of unknowns

    U = (u_0, u_1, \ldots, u_n)^T,

and F is the column vector of the nonhomogeneous part

    F = (f_0, f_1, \ldots, f_n)^T.

Clearly there is a unique solution to this system of linear equations when |K| \neq 0, and either
infinitely many solutions or none when |K| = 0.
Example: Use the trapezoid rule to find a numerical solution to the integral equation in eq.
(23),

    u(x) = \sin x + \int_0^1 (1 - x \cos xt)\, u(t)\, dt,

at x = 0, \frac{1}{2}, and 1. Choose n = 2 and \Delta t = \frac{1 - 0}{2} = \frac{1}{2}, so t_i = i\,\Delta t = \frac{i}{2} = x_i. Using the trapezoid
rule results in

    u_i = f_i + \frac{1}{2} \left[ \frac{1}{2} K_{i0} u_0 + K_{i1} u_1 + \frac{1}{2} K_{i2} u_2 \right],    i = 0, 1, 2,

or, in matrix form:

    \begin{pmatrix} 1 - \frac{1}{4} K_{00} & -\frac{1}{2} K_{01} & -\frac{1}{4} K_{02} \\ -\frac{1}{4} K_{10} & 1 - \frac{1}{2} K_{11} & -\frac{1}{4} K_{12} \\ -\frac{1}{4} K_{20} & -\frac{1}{2} K_{21} & 1 - \frac{1}{4} K_{22} \end{pmatrix} \begin{pmatrix} u_0 \\ u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} \sin 0 \\ \sin\frac{1}{2} \\ \sin 1 \end{pmatrix}.

Substituting f_i = f(x_i) = \sin\frac{i}{2} and K_{ij} = K(x_i, t_j) = 1 - \frac{i}{2} \cos\frac{ij}{4} into the matrix gives

    \begin{pmatrix} \frac{3}{4} & -\frac{1}{2} & -\frac{1}{4} \\ -\frac{1}{8} & \frac{1}{2} + \frac{1}{4}\cos\frac{1}{4} & -\frac{1}{4} + \frac{1}{8}\cos\frac{1}{2} \\ 0 & -\frac{1}{2} + \frac{1}{2}\cos\frac{1}{2} & \frac{3}{4} + \frac{1}{4}\cos 1 \end{pmatrix} \begin{pmatrix} u_0 \\ u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} \sin 0 \\ \sin\frac{1}{2} \\ \sin 1 \end{pmatrix}

    \approx \begin{pmatrix} 0.75 & -0.5 & -0.25 \\ -0.125 & 0.7422 & -0.1403 \\ 0 & -0.0612 & 0.8851 \end{pmatrix} \begin{pmatrix} u_0 \\ u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0.4794 \\ 0.8415 \end{pmatrix},

which can be solved to give u_0 \approx 1.0132, u_1 \approx 1.0095, and u_2 \approx 1.0205. The exact solution of
eq. (23) is known to be u(x) = 1, which compares very well with these approximate values.
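The calculation above extends to any grid. The following sketch is ours (the function name fredholm2_trapezoid is an invented label, not from the text): it builds the trapezoid-rule system for eq. (23) with NumPy and reproduces the three values just computed:

```python
import numpy as np

def fredholm2_trapezoid(K, f, a, b, n, lam=1.0):
    """Solve u(x) = f(x) + lam * int_a^b K(x,t) u(t) dt by the trapezoid rule."""
    x = np.linspace(a, b, n + 1)
    w = np.full(n + 1, (b - a) / n)        # trapezoid weights Delta t ...
    w[0] = w[-1] = 0.5 * (b - a) / n       # ... halved at the two endpoints
    A = np.eye(n + 1) - lam * K(x[:, None], x[None, :]) * w
    return x, np.linalg.solve(A, f(x))

# eq. (23): u(x) = sin x + int_0^1 (1 - x cos xt) u(t) dt, exact solution u(x) = 1
x, u = fredholm2_trapezoid(lambda x, t: 1.0 - x * np.cos(x * t),
                           np.sin, 0.0, 1.0, n=2)
print(np.round(u, 4))                      # -> [1.0132 1.0095 1.0205]
```

Increasing n drives the computed values toward the exact solution u(x) = 1.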

6.6.2 Homogeneous Equations

The general homogeneous Fredholm equation in eq. (21),

    u(x) = \lambda \int_a^b K(x, t)\, u(t)\, dt,

is solved in a similar manner to the nonhomogeneous Fredholm equations, using the trapezoid
rule. With the same notation as in the previous section, the interval is divided into n equal
subintervals of width \Delta t, resulting in the n + 1 linear homogeneous equations

    u_i = \lambda\,\Delta t \left[ \frac{1}{2} K_{i0} u_0 + K_{i1} u_1 + \cdots + K_{i(n-1)} u_{n-1} + \frac{1}{2} K_{in} u_n \right],    i = 0, 1, \ldots, n.

Bringing all of the terms to the left side of the equations results in

    \left(1 - \lambda\frac{\Delta t}{2} K_{00}\right) u_0 - \lambda\Delta t\, K_{01} u_1 - \cdots - \lambda\frac{\Delta t}{2} K_{0n} u_n = 0
    -\lambda\frac{\Delta t}{2} K_{10} u_0 + \left(1 - \lambda\Delta t\, K_{11}\right) u_1 - \cdots - \lambda\frac{\Delta t}{2} K_{1n} u_n = 0
    \vdots
    -\lambda\frac{\Delta t}{2} K_{n0} u_0 - \lambda\Delta t\, K_{n1} u_1 + \cdots + \left(1 - \lambda\frac{\Delta t}{2} K_{nn}\right) u_n = 0.

Dividing each equation by \lambda and letting \mu = \frac{1}{\lambda} simplifies this system by causing \mu to appear in
only one term of each equation:

    \left(\mu - \frac{\Delta t}{2} K_{00}\right) u_0 - \Delta t\, K_{01} u_1 - \Delta t\, K_{02} u_2 - \cdots - \frac{\Delta t}{2} K_{0n} u_n = 0
    -\frac{\Delta t}{2} K_{10} u_0 + \left(\mu - \Delta t\, K_{11}\right) u_1 - \Delta t\, K_{12} u_2 - \cdots - \frac{\Delta t}{2} K_{1n} u_n = 0
    \vdots
    -\frac{\Delta t}{2} K_{n0} u_0 - \Delta t\, K_{n1} u_1 + \cdots + \left(\mu - \frac{\Delta t}{2} K_{nn}\right) u_n = 0.

Again, the equations may be written in matrix notation as

    K_H U = 0,

where 0 denotes the zero column vector (0, 0, \ldots, 0)^T.

U is the same as in the nonhomogeneous case,

    U = (u_0, u_1, \ldots, u_n)^T,

and K_H is the coefficient matrix for the homogeneous equations:

    K_H = \begin{pmatrix}
        \mu - \frac{\Delta t}{2} K_{00} & -\Delta t\, K_{01} & \cdots & -\frac{\Delta t}{2} K_{0n} \\
        -\frac{\Delta t}{2} K_{10} & \mu - \Delta t\, K_{11} & \cdots & -\frac{\Delta t}{2} K_{1n} \\
        \vdots & \vdots & \ddots & \vdots \\
        -\frac{\Delta t}{2} K_{n0} & -\Delta t\, K_{n1} & \cdots & \mu - \frac{\Delta t}{2} K_{nn}
    \end{pmatrix}.

An infinite number of nontrivial solutions exists if and only if |K_H| = 0. This condition can be used to find
the eigenvalues of the system: the zeros \mu of |K_H| = 0 give the eigenvalues \lambda = 1/\mu.
Example: Use the trapezoid rule to find a numerical solution to the homogeneous Fredholm
equation

    u(x) = \lambda \int_0^1 K(x, t)\, u(t)\, dt    (29)

with the symmetric kernel

    K(x, t) = \begin{cases} x(1 - t), & 0 \le x \le t \\ t(1 - x), & t \le x \le 1 \end{cases}

at x = 0, x = \frac{1}{2}, and x = 1. Therefore n = 2 and \Delta t = \frac{1}{2}, and

    K_{ij} = K\left( \frac{i}{2}, \frac{j}{2} \right),    i, j = 0, 1, 2.
The resulting matrix is

    K_H = \begin{pmatrix} \mu & 0 & 0 \\ 0 & \mu - \frac{1}{8} & 0 \\ 0 & 0 & \mu \end{pmatrix}.

For the system of homogeneous equations to have a nontrivial solution, the determinant must be equal to zero:

    |K_H| = \mu^2 \left( \mu - \frac{1}{8} \right) = 0,

so \mu = 0 or \mu = \frac{1}{8}.
For \mu = \frac{1}{8}, \lambda = \frac{1}{\mu} = 8, and substituting into the system of equations gives u_0 = 0, u_2 = 0,
and u_1 an arbitrary constant. Therefore there are two zeros, at x = 0 and x = 1, but an
arbitrary value at x = \frac{1}{2}. It can be fixed by approximating the function u(x) as a triangular
function connecting the three points (0, 0), (\frac{1}{2}, u_1), and (1, 0),

    u(x) = \begin{cases} 2 u_1 x, & 0 \le x \le \frac{1}{2} \\ 2 u_1 (1 - x), & \frac{1}{2} \le x \le 1, \end{cases}

and requiring its norm to be one:

    \int_0^1 u^2(x)\, dx = 1.

Substituting results in

    4 u_1^2 \int_0^{1/2} x^2\, dx + 4 u_1^2 \int_{1/2}^{1} (x - 1)^2\, dx = 1
    \frac{u_1^2}{6} + \frac{u_1^2}{6} = 1
    \frac{u_1^2}{3} = 1
    u_1 = \sqrt{3}.

So the approximate numerical values are u(0) = 0, u(\frac{1}{2}) = \sqrt{3} \approx 1.7321, and u(1) = 0.


For comparison
with an exact solution, the orthonormal eigenfunction is chosen to be

2
u1 (x) = 2 sin x, which corresponds to the eigenvalue 1 =
to the ap , which is close

proximate eigenvalue = 8. The exact solution gives u1 (0) = 2 sin 0 = 0, u1 ( 21 ) = 2 sin 2

1.4142, and u1 (1) = 2 sin = 0.
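The homogeneous case can be driven numerically in the same way. This sketch is our illustration (the helper names and the refined grid n = 100 are our additions): it forms the weighted kernel matrix for eq. (29), reads off the approximate eigenvalues \lambda = 1/\mu, reproduces \lambda = 8 for n = 2, and shows the smallest eigenvalue approaching the exact \lambda_1 = \pi^2 as the grid is refined:

```python
import numpy as np

def kernel(x, t):
    # symmetric kernel of eq. (29)
    return np.where(x <= t, x * (1 - t), t * (1 - x))

def eigenvalues(n):
    """Approximate eigenvalues lambda = 1/mu from the trapezoid discretization."""
    x = np.linspace(0.0, 1.0, n + 1)
    w = np.full(n + 1, 1.0 / n)
    w[0] = w[-1] = 0.5 / n                    # half weights at the endpoints
    mu = np.linalg.eigvals(kernel(x[:, None], x[None, :]) * w)
    mu = mu.real[np.abs(mu) > 1e-12]          # discard the numerically zero ones
    return np.sort(1.0 / mu)

print(eigenvalues(2))    # -> [8.], as in the example above
print(eigenvalues(100))  # smallest entry approaches pi^2 ~ 9.8696
```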

An example of a separable kernel equation

Solve the integral equation

    \phi(x) - \lambda \int_0^{\pi} \sin(x + t)\, \phi(t)\, dt = 1,    (30)

where \lambda \neq \pm\frac{2}{\pi}.

The kernel is K(x, t) = \sin x \cos t + \sin t \cos x, where

    a_1(x) = \sin x,    a_2(x) = \cos x,    b_1(t) = \cos t,    b_2(t) = \sin t,

and we have used the addition formula \sin(x + t) = \sin x \cos t + \sin t \cos x.
Here f(x) = 1, and we obtain

    f_1 = \int_0^{\pi} b_1(t) f(t)\, dt = \int_0^{\pi} \cos t\, dt = 0
    f_2 = \int_0^{\pi} b_2(t) f(t)\, dt = \int_0^{\pi} \sin t\, dt = 2

and so

    F = \begin{pmatrix} 0 \\ 2 \end{pmatrix}.    (31)

Next,

    a_{11} = \int_0^{\pi} b_1(t) a_1(t)\, dt = \int_0^{\pi} \sin t \cos t\, dt = 0
    a_{12} = \int_0^{\pi} b_1(t) a_2(t)\, dt = \int_0^{\pi} \cos^2 t\, dt = \frac{\pi}{2}
    a_{21} = \int_0^{\pi} b_2(t) a_1(t)\, dt = \int_0^{\pi} \sin^2 t\, dt = \frac{\pi}{2}
    a_{22} = \int_0^{\pi} b_2(t) a_2(t)\, dt = \int_0^{\pi} \sin t \cos t\, dt = 0,

which yields

    A = \begin{pmatrix} 0 & \frac{\pi}{2} \\ \frac{\pi}{2} & 0 \end{pmatrix}.    (32)

Now we can rewrite the original integral equation as C = F + \lambda A C:

    \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 2 \end{pmatrix} + \lambda \begin{pmatrix} 0 & \frac{\pi}{2} \\ \frac{\pi}{2} & 0 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix},    (33)

or

    c_1 - \frac{\lambda\pi}{2}\, c_2 = 0
    c_2 - \frac{\lambda\pi}{2}\, c_1 = 2,    (34)

with solutions c_1 = \frac{4\pi\lambda}{4 - \pi^2\lambda^2}, c_2 = \frac{8}{4 - \pi^2\lambda^2}.


The general solution is \phi(x) = 1 + \lambda c_1 a_1(x) + \lambda c_2 a_2(x) = 1 + \lambda c_1 \sin x + \lambda c_2 \cos x, and the unique solution
of the integral equation is

    \phi(x) = 1 + \frac{8\lambda}{4 - \pi^2\lambda^2} \cos x + \frac{4\pi\lambda^2}{4 - \pi^2\lambda^2} \sin x.
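The coefficients just derived can be checked symbolically. This sketch is ours (assuming SymPy is available): it rebuilds F, A, and C for eq. (30) and recovers the closed-form solution:

```python
import sympy as sp

x, t, lam = sp.symbols('x t lambda')
a = [sp.sin(x), sp.cos(x)]                  # a_1(x), a_2(x)
b = [sp.cos(t), sp.sin(t)]                  # b_1(t), b_2(t), so K(x,t) = sin(x+t)
f = sp.Integer(1)

F = sp.Matrix([sp.integrate(bi * f, (t, 0, sp.pi)) for bi in b])
A = sp.Matrix(2, 2, lambda i, j: sp.integrate(b[i] * a[j].subs(x, t), (t, 0, sp.pi)))
C = (sp.eye(2) - lam * A).solve(F)          # solves C = F + lam*A*C

phi = f + lam * sum(ci * ai for ci, ai in zip(C, a))
print(A)                                    # Matrix([[0, pi/2], [pi/2, 0]])
print(sp.simplify(phi))
```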
IVP1: Consider the initial value problem

    \phi'(x) = F(x, \phi(x)),    0 \le x \le 1,    \phi(0) = \phi_0.

Integrating once converts it into the Volterra integral equation

    \phi(x) = \int_0^x F(t, \phi(t))\, dt + \phi_0.

IVP2: Consider the second-order initial value problem

    \phi''(x) = F(x, \phi(x)),    0 \le x \le 1,    \phi(0) = \phi_0,    \phi'(0) = \phi_0'.

Integrating twice gives

    \phi'(x) = \int_0^x F(t, \phi(t))\, dt + \phi_0'
    \phi(x) = \int_0^x ds \int_0^s F(t, \phi(t))\, dt + \phi_0'\, x + \phi_0.

Since we know that

    \int_0^x ds \int_0^s G(s, t)\, dt = \int_0^x dt \int_t^x G(s, t)\, ds,

we have

    \int_0^x ds \int_0^s F(t, \phi(t))\, dt = \int_0^x (x - t)\, F(t, \phi(t))\, dt.

Therefore,

    \phi(x) = \int_0^x (x - t)\, F(t, \phi(t))\, dt + \phi_0'\, x + \phi_0.
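As a purely illustrative numerical check of this reduction (the test problem F(x, \phi) = -\phi with \phi_0 = 0, \phi_0' = 1, whose exact solution is \phi(x) = \sin x, is our own choice, not from the text), the Volterra form can be solved by Picard iteration with the trapezoid rule:

```python
import numpy as np

# IVP2 with F(x, phi) = -phi, phi_0 = 0, phi_0' = 1; exact solution phi(x) = sin x
x = np.linspace(0.0, 1.0, 201)
h = x[1] - x[0]
phi = np.zeros_like(x)                      # initial Picard guess

for _ in range(30):                         # Picard iteration on the Volterra form
    Fv = -phi                               # F(t, phi(t)) on the grid
    new = np.empty_like(phi)
    for i, xi in enumerate(x):
        # phi(x_i) = int_0^{x_i} (x_i - t) F(t, phi(t)) dt + phi_0' * x_i + phi_0
        g = (xi - x[:i + 1]) * Fv[:i + 1]
        new[i] = h * (g.sum() - 0.5 * (g[0] + g[-1])) + xi   # trapezoid rule
    phi = new

print(np.max(np.abs(phi - np.sin(x))))      # small discretization error
```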

Theorem 1 (Degenerate Kernels) If k(x, t) is a degenerate kernel on [a, b] \times [a, b] which can
be written in the form k(x, t) = \sum_{i=1}^{n} a_i(x) b_i(t), where a_1, \ldots, a_n and b_1, \ldots, b_n are continuous,
then either

1. for every continuous function f on [a, b] the integral equation

    \phi(x) - \int_a^b k(x, t)\, \phi(t)\, dt = f(x)    (a \le x \le b)    (35)

possesses a unique continuous solution \phi, or

2. the homogeneous equation

    \phi(x) - \int_a^b k(x, t)\, \phi(t)\, dt = 0    (a \le x \le b)    (36)

has a non-trivial solution \phi, in which case eqn. (35) will have (necessarily non-unique) solutions if and
only if \int_a^b f(t)\, \psi(t)\, dt = 0 for every continuous solution \psi of the transposed equation

    \psi(x) - \int_a^b k(t, x)\, \psi(t)\, dt = 0    (a \le x \le b).    (37)

Theorem 2 Let K be a bounded linear map from L^2(a, b) to itself with the property that the range \{K\phi :
\phi \in L^2(a, b)\} has finite dimension. Then either

1. the linear map I - K has an inverse and (I - K)^{-1} is a bounded linear map, or

2. the equation \phi - K\phi = 0 has a non-trivial solution \phi \in L^2(a, b).

In the specific case where the kernel k generates a compact operator K on L^2(a, b), the integral
equation

    \phi(x) - \int_a^b k(x, t)\, \phi(t)\, dt = f(x)    (a \le x \le b)

will have a unique solution, and the solution operator (I - K)^{-1} will be bounded, provided
that the corresponding homogeneous equation

    \phi(x) - \int_a^b k(x, t)\, \phi(t)\, dt = 0    (a \le x \le b)

has only the trivial solution \phi = 0 in L^2(a, b).

Bibliographical Comments

In this last section we provide a list of bibliographical references that we found useful in writing this survey and that can also be used for further study of Integral Equations. The list
contains mainly textbooks and research monographs, and we do not claim by any means that it
is complete or exhaustive. It merely reflects our own personal interests in the vast subject of
Integral Equations. The books are presented in the alphabetical order of the first author.
The book [Cor91] presents the theory of Volterra integral equations and their applications,
convolution equations, a historical account of the theory of integral equations, fixed point theorems, operators on function spaces, Hammerstein equations and Volterra equations in abstract
Banach spaces, applications in coagulation processes, optimal control of processes governed by
Volterra equations, and stability of nuclear reactors.
The book [Hac95] starts with an introductory chapter gathering basic facts from analysis,
functional analysis, and numerical mathematics. The book then discusses Volterra integral
equations, Fredholm integral equations of the second kind, discretizations, kernel approximation, the collocation method, the Galerkin method, the Nyström method, multi-grid methods
for solving systems arising from integral equations of the second kind, Abel's integral equation,
Cauchy's integral equation, analytical properties of the boundary integral equation method,
numerical integration, the panel clustering method and the boundary element method.
The book [Hoc89] discusses the contraction mapping principle, the theory of compact operators,
the Fredholm alternative, ordinary differential operators and their study via compact integral
operators, as well as the necessary background in linear algebra and Hilbert space theory.
Some applications to boundary value problems are also discussed. It also contains a complete
treatment of numerous transform techniques such as Fourier, Laplace, Mellin and Hankel, a
discussion of the projection method, the classical Fredholm techniques, integral operators with
positive kernels and the Schauder fixed-point theorem.
The book [Jer99] contains a large number of worked-out examples, which include the approximate numerical solutions of Volterra and Fredholm integral equations of the first and second
kinds, population dynamics, mortality of equipment, the tautochrone problem, the reduction
of initial value problems to Volterra equations and the reduction of boundary value problems
for ordinary differential equations and the Schrödinger equation in three dimensions to Fredholm equations. It also deals with the solution of Volterra equations by using the resolvent
kernel, by iteration and by Laplace transforms, with Green's function in one and two dimensions, and
with Sturm-Liouville operators and their role in formulating integral equations, the Fredholm
equation, linear integral equations with a degenerate kernel, equations with symmetric kernels
and the Hilbert-Schmidt theorem. The book ends with the Banach fixed point theorem in a metric
space and its application to the solution of linear and nonlinear integral equations, and the use
of higher quadrature rules for the approximate solution of linear integral equations.
The book [KK74] discusses the numerical solution of Integral Equations via the method of
invariant imbedding, in which the integral equation is converted into a system of differential equations.
For general kernels, resolvents and numerical quadratures are employed.
The book [Kon91] presents a unified general theory of integral equations, attempting to unify
the theories of Picard, Volterra, Fredholm, and Hilbert-Schmidt. It also features applications
from a wide variety of different areas. It contains a classification of integral equations, a description of methods of solution of integral equations, theories of special integral equations,
symmetric kernels, singular integral equations, singular kernels and nonlinear integral equations.
The book [Moi77] is an introductory text on integral equations. It describes the classification of
integral equations, the connection with differential equations, integral equations of convolution
type, the method of successive approximation, singular kernels, the resolvent, Fredholm theory,
Hilbert-Schmidt theory and the necessary background from Hilbert spaces.
The handbook [PM08] contains over 2500 tabulated linear and nonlinear integral equations and
their exact solutions, outlines exact, approximate analytical, and numerical methods for solving
integral equations and illustrates the application of the methods with numerous examples. It
discusses equations that arise in elasticity, plasticity, heat and mass transfer, hydrodynamics,
chemical engineering, and other areas. This is the second edition of the original handbook
published in 1998.
The book [PS90] contains a treatment of a classification and examples of integral equations,
second order ordinary differential equations and integral equations, integral equations of the
second kind, compact operators, spectra of compact self-adjoint operators, positive operators,
approximation methods for eigenvalues and eigenvectors of self-adjoint operators, approximation methods for inhomogeneous integral equations, and singular integral equations.
The book [Smi58] features a carefully chosen selection of topics on Integral Equations, using complex-valued functions of a real variable and notations of operator theory. The usual
treatment of Neumann series and the analogue of the Hilbert-Schmidt expansion theorem are
generalized. Kernels of finite rank, or degenerate kernels, are discussed. Using the method of
polynomial approximation in two variables, the author writes the general kernel as the sum of a
kernel of finite rank and a kernel of small norm. The usual results on orthogonalization of the
square integrable functions of one and two variables are given, with a treatment of the Riesz-Fischer theorem, as a preparation for the classical theory and the application to Hermitian
kernels.
The book [Tri85] contains a study of Volterra equations, with applications to differential equations, an exposition of classical Fredholm and Hilbert-Schmidt theory, a discussion of orthonormal sequences and the Hilbert-Schmidt theory for symmetric kernels, a description of the Ritz
method, a discussion of positive kernels and Mercer's theorem, the application of integral equations to Sturm-Liouville theory, and a presentation of singular integral equations and non-linear
integral equations.

References
[Cor91] C. Corduneanu. Integral equations and applications. Cambridge University Press,
Cambridge, 1991.
[Hac95] W. Hackbusch. Integral equations, volume 120 of International Series of Numerical Mathematics. Birkhäuser Verlag, Basel, 1995. Theory and numerical treatment,
translated and revised by the author from the 1989 German original.
[Hoc89] H. Hochstadt. Integral equations. Wiley Classics Library. John Wiley & Sons Inc.,
New York, 1989. Reprint of the 1973 original, A Wiley-Interscience Publication.
[Jer99] A. J. Jerri. Introduction to integral equations with applications. Wiley-Interscience,
New York, second edition, 1999.
[KK74] H. H. Kagiwada and R. Kalaba. Integral equations via imbedding methods. Addison-Wesley Publishing Co., Reading, Mass.-London-Amsterdam, 1974. Applied Mathematics and Computation, No. 6.
[Kon91] J. Kondo. Integral equations. Oxford Applied Mathematics and Computing Science
Series. The Clarendon Press Oxford University Press, New York, 1991.
[Moi77] B. L. Moiseiwitsch. Integral equations. Longman, London, 1977. Longman Mathematical Texts.
[PM08] A. D. Polyanin and A. V. Manzhirov. Handbook of integral equations. Chapman &
Hall/CRC, Boca Raton, FL, second edition, 2008.
[PS90] D. Porter and D. S. G. Stirling. Integral equations. Cambridge Texts in Applied
Mathematics. Cambridge University Press, Cambridge, 1990. A practical treatment,
from spectral theory to applications.

[Smi58] F. Smithies. Integral equations. Cambridge Tracts in Mathematics and Mathematical
Physics, no. 49. Cambridge University Press, New York, 1958.
[Tri85] F. G. Tricomi. Integral equations. Dover Publications Inc., New York, 1985. Reprint
of the 1957 original.
