
CHAPTER ONE

By the end of this chapter, the reader must be able to:


1) identify and classify integral equations
2) connect the Cauchy problem for ordinary differential equations to Volterra integral equations
3) solve some integral equations

INTRODUCTION

• BASIC DEFINITIONS AND CLASSIFICATION OF INTEGRAL


EQUATIONS (I.E.)
• APPLICATIONS TO SOLUTIONS OF CAUCHY PROBLEMS FOR
ORDINARY DIFFERENTIAL EQUATIONS (ODE)

Integral equations are effective means of analytical investigations of various


problems in physics, engineering, life science and real world problems.

Solutions of boundary problems of ODE are obtained in stages. Firstly, the


general solution is found, and then the particular solution which satisfies the
boundary conditions.

However, if it is possible to transform such a problem into an integral equation, the solution need not be built up in stages: the boundary conditions are already incorporated in the equation. Furthermore, ODEs come in very many types, whereas integral equations fall into just a few.

DEFINITIONS:
1. An integral equation (I.E.) is an equation in which the unknown function y(x) occurs under the integral sign.

2. If the unknown function enters the equation only to the first degree, then the integral equation is linear.

3. A linear integral equation of the first kind (Fredholm) is an equation of the form

∫_a^b k(x, t) y(t) dt = f(x)                … (1.1)

4. A Fredholm linear integral equation of the second kind is an equation of the form

y(x) = λ ∫_a^b k(x, t) y(t) dt + f(x)        … (1.2)

where k(x, t) is a known continuous function in two variables, referred to as the kernel of the I.E.; f(x) is the free (or forcing) term, which is also known and continuous; y(x) is the unknown function; a, b are constants, the limits of integration; λ is a numeric parameter.

The integral on the r.h.s. of (1.2) can be considered as an integral with respect to the parameter t.

The kernel K(x, t) is defined on the x–t plane in the square R, where R: a ≤ x ≤ b, a ≤ t ≤ b.

5. A solution of an I.E. is a function y(x) which turns the equation into an identity with respect to x.

The introduction of the parameter λ facilitates the investigation of the I.E. (1.2): for a fixed value of λ the I.E. need not have a solution, so we allow λ to vary in order for a solution to exist. The parameter λ can also be introduced on the left-hand side of (1.1), giving

λ ∫_a^b K(x, t) y(t) dt = f(x)               … (1.1′)

If in the I.E. (1.2) f(x) = 0, then we have the homogeneous (Fredholm) equation of the second kind

y(x) = λ ∫_a^b K(x, t) y(t) dt                … (1.3)

and this always has the trivial (zero) solution y(x) = 0.


A value of λ for which the homogeneous equation (1.3) has a non-zero (non-trivial) solution y(x) ≠ 0 is called an eigenvalue (or eigennumber) of the kernel k(x, t), and the corresponding solution y(x) is the eigenfunction.

Of practical importance is the Fredholm I.E. of the second kind with symmetric kernel k(x, t), i.e.

K(x, t) = K(t, x)                              … (1.4)

Symmetric kernels have the following properties:

(1) Every symmetric kernel K(x, t) = K(t, x) has at least one eigenvalue;

(2) All eigenvalues of a symmetric kernel are real;

(3) Eigenfunctions φ(x) and Ψ(x) of a symmetric kernel corresponding to distinct eigenvalues λ1 and λ2 (λ1 ≠ λ2) are orthogonal on the fundamental interval (a, b), that is

∫_a^b φ(x) Ψ(x) dx = 0                         … (1.5)

The fundamental problem in integral equations is to find an exact or approximate solution of the non-homogeneous I.E. for a given value of the parameter λ, and to find the eigenvalues and corresponding eigenfunctions of the homogeneous I.E.

VOLTERRA INTEGRAL EQUATIONS


If in Fredholm's integral equation the constant upper limit b is replaced by the variable x, we obtain the corresponding Volterra integral equations (1st and 2nd kinds). An equation with a variable upper limit of the form

∫_a^x k(x, t) y(t) dt = f(x)                   … (1.6)

is known as the Volterra integral equation of the first kind.

An equation with a variable upper limit of the form

y(x) = λ ∫_a^x k(x, t) y(t) dt + f(x),   (a ≤ t ≤ x ≤ b)      … (1.7)

is the Volterra integral equation of the 2nd kind.

If the kernel K(x, t) and f(x) are continuously differentiable and K(x, x) ≠ 0 for a ≤ x ≤ b, then the Volterra integral equation of the 1st kind can be reduced to a Volterra I.E. of the 2nd kind. Differentiating the equation of the 1st kind with respect to x gives

K(x, x) y(x) + ∫_a^x K'_x(x, t) y(t) dt = f'(x)

Differentiation of a function with respect to a parameter under the integral sign. Let

I(λ) = ∫_a^b f(x, λ) dx

Then

dI(λ)/dλ = d/dλ ∫_a^b f(x, λ) dx = ∫_a^b ∂f(x, λ)/∂λ dx

Comment: If a or b is a function of λ, that is a(λ) or b(λ), then (Leibniz rule)

dI(λ)/dλ = ∫_{a(λ)}^{b(λ)} ∂f(x, λ)/∂λ dx + b'(λ) f(b(λ), λ) − a'(λ) f(a(λ), λ)

Thus, differentiating

∫_a^x k(x, t) y(t) dt = f(x)

with respect to x gives

∫_a^x K'_x(x, t) y(t) dt + k(x, x) y(x) = f'(x),

from which we obtain the Volterra integral equation of the 2nd kind

y(x) = ∫_a^x K1(x, t) y(t) dt + f1(x)

where

K1(x, t) = − K'_x(x, t) / k(x, x),    f1(x) = f'(x) / k(x, x)

From the formal point of view, Volterra’s Integral Equation differs from
Fredholm’s only by the fact that the constant upper limit is changed into a
variable upper limit.

THE CONNECTION BETWEEN CAUCHY PROBLEM FOR AN

n-th ORDER LINEAR ODE AND VOLTERRA EQUATION

Formulation of the Cauchy Problem

Essentially this is a differential equation together with some initial


conditions.

Specifically, we consider the linear ordinary differential equation of the second order

d²u/dx² + p(x) du/dx + q(x) u = f(x),   (a ≤ x ≤ b)           … (1.8)

with initial conditions

u(a) = α,   u'(a) = β                                          … (1.9)

Adopting the method of reducing a higher-order differential equation to lower order, we set

d²u/dx² = y(x)                                                 … (1.10)

and integrate systematically from a to x. We have

du/dx = ∫_a^x y(t) dt + C1

and

u(x) = ∫_a^x dt ∫_a^t y(s) ds + C1 (x − a) + C2

Changing the order of integration in the double integral, we observe that

∫_a^x dt ∫_a^t y(s) ds = ∫_a^x ds ∫_s^x y(s) dt = ∫_a^x (x − s) y(s) ds = ∫_a^x (x − t) y(t) dt

From the initial conditions (1.9), for x = a we find

α = ∫_a^a dt ∫_a^t y(s) ds + C1 (a − a) + C2,

hence C2 = α. Similarly,

β = ∫_a^a y(t) dt + C1,

and β = C1, since the limits of integration coincide.

Therefore,

du/dx = ∫_a^x y(t) dt + β                                      … (1.11)

and

u(x) = ∫_a^x (x − t) y(t) dt + β (x − a) + α                   … (1.12)

We substitute the relations (1.10)–(1.12) into the differential equation (1.8) to get

y(x) + p(x) [∫_a^x y(t) dt + β] + q(x) [∫_a^x (x − t) y(t) dt + β (x − a) + α] = f(x)

Finally, moving the known terms to the right-hand side, we have

y(x) + ∫_a^x [p(x) + q(x)(x − t)] y(t) dt = f(x) − β p(x) − [β (x − a) + α] q(x)

We make the following substitutions:

k(x, t) = −[p(x) + q(x)(x − t)],
F(x) = f(x) − β p(x) − [β (x − a) + α] q(x)

The equation now becomes

y(x) = ∫_a^x k(x, t) y(t) dt + F(x)                            … (1.13)

which is a Volterra integral equation.

Therefore, knowing the function y(x), from the expression (1.12) we can find u(x) and u'(x). Hence the Volterra integral equation incorporates all the data of the Cauchy problem for the linear differential equation (1.8).

A similar result can be obtained for an n-th order linear differential equation.

The reverse problem also holds. If the kernel

K(x, t) = Σ_{i=0}^{n} a_i(x) t^i

is a polynomial of degree n with respect to t, then by systematically differentiating the Volterra integral equation (1.13) we arrive at a Cauchy problem for an n-th order linear differential equation.

Lab Work: Try and ascertain the above.

Example 1.1
Find the solution of the Volterra integral equation of the 1st kind

∫_0^x cos(x − t) y(t) dt = x²/2

Solution
Differentiating the given equation with respect to x, we have

y(x) − ∫_0^x sin(x − t) y(t) dt = x                            … (1)

Differentiating again, we have

y'(x) − ∫_0^x cos(x − t) y(t) dt = 1

Going back to the given equation, we have

y' = x²/2 + 1

The general solution of this ODE, which is separable, is

y(x) = x³/6 + x + C1                                           … (2)

where C1 is an arbitrary constant. Taking equation (1) into consideration, it follows that

y(0) = 0

From (2), C1 = 0. Hence the solution of the given Volterra equation is

y(x) = x³/6 + x
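A quick numerical check of this result (an illustrative sketch, not part of the original notes; scipy is assumed to be available): evaluate the left-hand side ∫_0^x cos(x − t) y(t) dt with y(t) = t³/6 + t and compare it with x²/2.

```python
import numpy as np
from scipy.integrate import quad

def y(t):
    # candidate solution from Example 1.1
    return t**3 / 6 + t

def lhs(x):
    # left-hand side of the Volterra equation of the 1st kind
    val, _ = quad(lambda t: np.cos(x - t) * y(t), 0.0, x)
    return val

for x in [0.5, 1.0, 2.0]:
    print(x, lhs(x), x**2 / 2)   # the last two columns should agree
```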

Exercises

Solve the integral equations:

(1)  y(x) = x + ∫_0^x (t − x) y(t) dt

(2)  y(x) = sin x − x/4 + ∫_0^{1/2} x t y(t) dt

(3)  By separating the kernel, solve the equation

y(x) = sin x + ∫_0^∞ e^{−α(x+t)} y(t) dt

where α is a real constant, and show that the solution is not valid when α = 0 and α = 1/2.

(4)  Obtain the eigenvalues and eigenfunctions of the following equations:

(a)  y(x) = λ ∫_{−1}^{1} (x − t) y(t) dt

(b)  y(x) = λ ∫_{−π/2}^{π/2} cos(x + t) y(t) dt
CHAPTER TWO

At the end of this chapter, the reader should be able to:

1. solve Fredholm's I.E. using the method of successive approximations;

2. find and apply the radius of convergence of the series;

3. find the resolvent kernel and use it to write out the solution;

4. solve integral equations by several methods.


METHOD OF SUCCESSIVE APPROXIMATIONS FOR FREDHOLM’S INTEGRAL
EQUATION

1) USING THE FREE TERM

Let the Fredholm integral equation of the 2nd kind be given by

y(x) = λ ∫_a^b K(x, t) y(t) dt + f(x)                          … (2.1)

We solve this integral equation on the assumption that the kernel k(x, t) is continuous in the square R: a ≤ x ≤ b, a ≤ t ≤ b and the function f(x) is continuous in the interval [a, b]. These conditions ensure that k(x, t) and f(x) are bounded. We now look for the solution in the form of a series in ascending powers of λ:

y(x) = φ0(x) + λ φ1(x) + λ² φ2(x) + … + λⁿ φn(x) + …            … (2.2)

If the series (2.2) converges uniformly for some value of λ, then it can be substituted into the right-hand side of (2.1) (with the argument x replaced by t) and integrated term by term. Equation (2.1) then takes the form

y(x) = f(x) + λ ∫_a^b k(x, t) φ0(t) dt + λ² ∫_a^b k(x, t) φ1(t) dt + … + λ^{n+1} ∫_a^b k(x, t) φn(t) dt + …      … (2.3)

Replacing the left-hand side of (2.3) by the series (2.2) and equating coefficients of equal powers of λ, we have

φ0(x) = f(x)
φ1(x) = ∫_a^b k(x, t) φ0(t) dt
φ2(x) = ∫_a^b k(x, t) φ1(t) dt                                  … (2.4)
…………………………
φn(x) = ∫_a^b k(x, t) φn−1(t) dt
The process of constructing the functions φn(x) is called the method of successive approximations, and it can be continued indefinitely. The relations (2.4) allow us to evaluate the coefficients of the series (2.2) successively and to form the series, which formally satisfies the integral equation (2.1).

NOTE:

For the sum of the series (2.2) to be a solution of the integral equation (2.1), it is necessary that it converge uniformly. Indeed, suppose the kernel k(x, t) is bounded by A, that is

|k(x, t)| < A                                                   … (2.5)

and the function f(x) by M, so that

|f(x)| < M                                                      … (2.6)

where A and M are given positive numbers. Then from (2.4) we have

|φ0(x)| = |f(x)| < M
|φ1(x)| ≤ ∫_a^b |k(x, t)| |φ0(t)| dt < A M ∫_a^b dt = A M (b − a)
|φ2(x)| ≤ ∫_a^b |k(x, t)| |φ1(t)| dt < A · A M (b − a) ∫_a^b dt = A² (b − a)² M
…………………………………………………………
|φn(x)| ≤ ∫_a^b |k(x, t)| |φn−1(t)| dt < A · Aⁿ⁻¹ (b − a)ⁿ⁻¹ M ∫_a^b dt = Aⁿ (b − a)ⁿ M

The terms of the series (2.2) are therefore majorized by the terms of a geometric progression with ratio A (b − a) |λ|, which converges as long as this ratio is less than 1. Consequently, the series (2.2) converges if

|λ| < 1 / (A (b − a))                                            … (2.7)
Therefore, the integral equation has a unique solution if the parameter λ is sufficiently small in absolute value. With the relations (2.4) we can successively calculate the coefficients of the series (2.2), which is rather inconvenient: to evaluate the coefficient φn(x) it is necessary to find all the preceding coefficients.

2) USING THE KERNEL

We make our new objective to find these coefficients from the known elements of the Integral
Equation that is, from the Kernel k ( x, t ) and the right hand side f ( x ) .

From the first relation in (2.4) we have

φ0(x) = f(x)

From the second relation in (2.4) we have

φ1(x) = ∫_a^b k(x, t) f(t) dt                                    … (2.8)

Before making the 3rd substitution we rename the variable, writing

φ1(s) = ∫_a^b k(s, t) f(t) dt

Substituting this into the third relation of (2.4) and changing the order of integration, we have

φ2(x) = ∫_a^b K(x, s) φ1(s) ds = ∫_a^b K(x, s) ( ∫_a^b k(s, t) f(t) dt ) ds
      = ∫_a^b f(t) ( ∫_a^b k(x, s) k(s, t) ds ) dt = ∫_a^b k2(x, t) f(t) dt        … (2.9)

where k2(x, t) = ∫_a^b k(x, s) k(s, t) ds. Treating the 4th relation of (2.4) in the same way and using the expression (2.9), we have


φ3(x) = ∫_a^b k(x, s) φ2(s) ds = ∫_a^b k(x, s) ( ∫_a^b k2(s, t) f(t) dt ) ds
      = ∫_a^b f(t) ( ∫_a^b k(x, s) k2(s, t) ds ) dt = ∫_a^b k3(x, t) f(t) dt        … (2.10)

where k3(x, t) = ∫_a^b k(x, s) k2(s, t) ds. Continuing the process and introducing the functions

kn(x, t) = ∫_a^b k(x, s) kn−1(s, t) ds                            … (2.11)

we obtain

φn(x) = ∫_a^b kn(x, t) f(t) dt                                    … (2.12)

The functions k2 ( x, t ) , k3 ( x, t ) ,… , kn ( x, t ) are referred to as iterated (repeated) kernels.

The kernel k ( x, t ) is the first kernel. Substituting the expression for φn ( x ) into the series (2.2), we
have

y(x) = f(x) + λ [ ∫_a^b k(x, t) f(t) dt + λ ∫_a^b k2(x, t) f(t) dt + … + λⁿ⁻¹ ∫_a^b kn(x, t) f(t) dt + … ]        … (2.13)

If the series

k1(x, t) + λ k2(x, t) + λ² k3(x, t) + … + λⁿ⁻¹ kn(x, t) + …       … (2.14)

converges uniformly, then the sum in the square brackets in (2.13) can be written as a single integral, and we obtain

y(x) = f(x) + λ ∫_a^b Γ(x, t, λ) f(t) dt

where Γ(x, t, λ) is the sum of the series (2.14):

Γ(x, t, λ) = k(x, t) + λ k2(x, t) + λ² k3(x, t) + … + λⁿ⁻¹ kn(x, t) + …
The function Γ(x, t, λ) is called the resolvent of the integral equation (2.1). When the resolvent is known we can find the solution of (2.1) for any function f(x), provided the parameter λ is sufficiently small in absolute value, in other words, provided (2.7) is satisfied.

Example 2.1

Using the first three successive approximations, evaluate an approximate solution of the integral equation

y(x) + ∫_0^1 x t y(t) dt = x²

Solution

Suppose the solution has the form

y(x) ≈ y3(x) = φ0(x) + λ φ1(x) + λ² φ2(x) + λ³ φ3(x)

where

φ0(x) = f(x) = x²,   φn(x) = ∫_0^1 k(x, t) φn−1(t) dt.  We have

φ0(x) = x²
φ1(x) = ∫_0^1 x t · t² dt = x/4
φ2(x) = ∫_0^1 x t · (t/4) dt = x/12
φ3(x) = ∫_0^1 x t · (t/12) dt = x/36

Since λ = −1, the first three successive approximations give the approximate solution of the integral equation:

y0(x) = φ0(x) = x²
y1(x) = φ0(x) − φ1(x) = x² − x/4
y2(x) = φ0(x) − φ1(x) + φ2(x) = x² − x (1/4 − 1/12)
y3(x) = φ0(x) − φ1(x) + φ2(x) − φ3(x) = x² − x (1/4 − 1/12 + 1/36)

We now find the exact solution of the integral equation. We have

y(x) = x² − C1 x

where

C1 = ∫_0^1 t y(t) dt = ∫_0^1 t (t² − t C1) dt = 1/4 − C1/3

Consequently, C1 = 3/16, and the exact solution of the equation is

y(x) = x² − 3x/16

METHODS OF SOLUTION

Example 2.2

When the kernel k(x, y) can be written as

k(x, y) = g(x) h(y)                                              … (1)

where g and h are functions of x and y only respectively, the Fredholm equation may be solved as follows. The equation reads

u(x) = f(x) + λ g(x) ∫_a^b h(y) u(y) dy                          … (2)

and the y-integration (integrating with respect to y) produces simply a constant; hence

∫_a^b h(y) u(y) dy = c (≡ const)                                 … (3)

and we have as solution

u(x) = f(x) + λ c g(x)                                            … (4)

In particular, solve the following equation:

u(x) = cosh x − x/2 + (1/3) ∫_0^1 x y u(y) dy                     … (5)

The solution takes the form

u(x) = cosh x − x/2 + c x/3                                        … (6)

where

c = ∫_0^1 y u(y) dy                                                … (7)

From (6) and (7) we have

c = ∫_0^1 y [cosh y − y/2 + c y/3] dy                              … (8)

Simplifying (using ∫_0^1 y cosh y dy = 1 − e⁻¹ and ∫_0^1 y² dy = 1/3),

c = 15/16 − (9/8) e⁻¹                                              … (9)

Therefore

u(x) = cosh x − x/2 + [5/16 − (3/8) e⁻¹] x                          … (10)

     = cosh x − (3/16)(1 + 2 e⁻¹) x                                 … (11)


Example 2.3

For the homogeneous Fredholm equation

u(x) = λ ∫_0^{π/2} (sin x)(sin y) u(y) dy                           … (12)

there is a particular value of λ for which the solution is non-trivial. Find this λ (the eigenvalue) and the corresponding solution u (the eigenfunction).

We write, as above,

u(x) = λ sin x ∫_0^{π/2} sin y · u(y) dy = c λ sin x                 … (13)

where

c = ∫_0^{π/2} sin y · u(y) dy                                        … (14)

Putting u(x) = c λ sin x from (13) into (14) we have

c = c λ ∫_0^{π/2} sin² y dy = c π λ / 4                              … (15)

If c ≠ 0, then λ = 4/π. The solution corresponding to this value of λ is

u(x) = A sin x                                                       … (16)

where A ≡ const.
Example 2.4

A Volterra equation can sometimes be transformed into an ordinary differential equation which may be easier to solve than the integral equation. An example is the equation

u(x) = 2x + 4 ∫_0^x (y − x) u(y) dy                                  … (17)

Differentiating with respect to x,

d/dx u(x) = 2 + 4 [ {(y − x) u(y)}_{y=x} − ∫_0^x u(y) dy ]            … (18)
          = 2 − 4 ∫_0^x u(y) dy                                       … (19)

Differentiating again,

d²u/dx² = −4 u(x)                                                     … (20)

whose general solution is

u(x) = A cos 2x + B sin 2x                                            … (21)

with A, B ≡ consts. We determine A and B by substituting (21) into (17) (equivalently, by using the initial values u(0) = 0, u'(0) = 2 read off from (17) and (19)). We find that A = 0, B = 1.

Finally, from (21) the solution of (17) is

u(x) = sin 2x
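The conversion in Example 2.4 is easy to check numerically. The sketch below (an illustration, scipy assumed) substitutes u(y) = sin 2y into the right-hand side of (17) and compares it with u(x).

```python
import numpy as np
from scipy.integrate import quad

def u(y):
    return np.sin(2 * y)

def rhs(x):
    # right-hand side of (17): 2x + 4 * integral_0^x (y - x) u(y) dy
    val, _ = quad(lambda y: (y - x) * u(y), 0.0, x)
    return 2 * x + 4 * val

for x in [0.3, 1.0, 2.5]:
    print(x, rhs(x), u(x))    # the last two columns should agree
```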

Example 2.5

For the integral equation

y(x) = λ ∫_0^1 e^{x−t} y(t) dt + f(x)

find the resolvent and the radius of convergence of the series, write out the solution for an arbitrary free term f(x), and also find the solution for λ = 1/2 and f(x) = eˣ.

Solution

We find the iterated kernels:

k(x, t) = k1(x, t) = e^{x−t}

k2(x, t) = ∫_0^1 k(x, s) k1(s, t) ds = ∫_0^1 e^{x−s} e^{s−t} ds = e^{x−t}

k3(x, t) = ∫_0^1 k(x, s) k2(s, t) ds = e^{x−t}

Hence

kn(x, t) = e^{x−t}   (n = 1, 2, …)

The resolvent of the kernel is

Γ(x, t, λ) = Σ_{n=1}^{∞} λⁿ⁻¹ kn(x, t) = e^{x−t} Σ_{n=1}^{∞} λⁿ⁻¹

The series obtained is a geometric progression, which converges for |λ| < 1 and has sum 1/(1 − λ). Therefore,

Γ(x, t, λ) = e^{x−t} / (1 − λ)

The solution of the equation has the form

y(x) = f(x) + λ/(1 − λ) ∫_0^1 e^{x−t} f(t) dt

In particular, for λ = 1/2 and f(x) = eˣ we have

y(x) = eˣ + (1/2)/(1 − 1/2) ∫_0^1 e^{x−t} e^t dt = 2 eˣ
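A quick check of this answer (an illustrative sketch, scipy assumed): substitute y(t) = 2eᵗ into the right-hand side of the equation with λ = 1/2 and f(x) = eˣ.

```python
import numpy as np
from scipy.integrate import quad

lam = 0.5

def y(t):
    return 2 * np.exp(t)

def rhs(x):
    # f(x) + lambda * integral_0^1 e^(x - t) y(t) dt
    val, _ = quad(lambda t: np.exp(x - t) * y(t), 0.0, 1.0)
    return np.exp(x) + lam * val

for x in [0.0, 0.5, 1.0]:
    print(x, rhs(x), y(x))     # columns should agree
```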
Exercises

Find the resolvent, the radius of convergence of the series, and the solution of the equation for an arbitrary free term f(x), as well as the solution for the given λ and f(x), for the following equations:

1.  y(x) = λ ∫_0^1 x t y(t) dt + f(x);   λ = 1/2,  f(x) = (5/6) x

Ans.

Γ(x, t, λ) = x t Σ_{n=1}^{∞} (λ/3)ⁿ⁻¹ = 3 x t / (3 − λ),  |λ| < 3;

y(x) = λ ∫_0^1 [3 x t / (3 − λ)] f(t) dt + f(x);   y(x) = x

2.  y(x) = λ ∫_0^1 t y(t) dt + f(x);   λ = 1/2,  f(x) = x

Ans.

Γ(x, t, λ) = t Σ_{n=1}^{∞} (λ/2)ⁿ⁻¹ = 2 t / (2 − λ),  |λ| < 2;

for λ = 1/2 and f(x) = x the solution is y(x) = x + 2/9
CHAPTER 3

The reader should be able to solve Volterra integral equations in a manner similar to that described in Chapter 2.

METHOD OF SUCCESSIVE APPROXIMATIONS FOR VOLTERRA'S INTEGRAL EQUATIONS

Let the Volterra equation of the second kind

y(x) = f(x) + λ ∫_0^x k(x, t) y(t) dt                            … (3.1)

be given, where f(x) is a continuous function in the interval [0, a] and k(x, t) is a continuous kernel for 0 ≤ x ≤ a, 0 ≤ t ≤ x. From the interval [0, a] we take a continuous function y0(x) and substitute it in the right-hand side of (3.1) instead of y(x). We obtain

y1(x) = f(x) + λ ∫_0^x k(x, t) y0(t) dt

which is also continuous in [0, a]. Continuing this process we obtain a sequence of functions

y0(x), y1(x), …, yn(x), …

where

y0(x) = f(x)

and

yn(x) = f(x) + λ ∫_0^x k(x, t) yn−1(t) dt

The sequence of functions {yn(x)} converges, as n → ∞, to the exact solution y(x) of the integral equation (3.1); yn(x) is the n-th approximation to the solution. The approximations can also be expressed through the iterated kernels kn(x, t), where

k1(x, t) = k(x, t)

and

kn(x, t) = ∫_t^x k(x, s) kn−1(s, t) ds   (n = 2, 3, …),

in the following sense:

yn(x) = f(x) + ∫_0^x [ Σ_{ν=1}^{n} λ^ν Kν(x, t) ] f(t) dt

Then

y(x) = f(x) + λ ∫_0^x Γ(x, t, λ) f(t) dt

where the resolvent is

Γ(x, t, λ) = Σ_{ν=0}^{∞} λ^ν Kν+1(x, t)

Example 3.1

Evaluate the first three approximations of the solution of the Volterra integral equation (2nd kind)

y(x) + ∫_0^x (x − t) y(t) dt = 1

Solution

Let the solution have the form

y(x) ≈ y3(x) = φ0(x) + λ φ1(x) + λ² φ2(x) + λ³ φ3(x)

where

φ0(x) = y0(x) = 1,   φn(x) = ∫_0^x k(x, t) φn−1(t) dt

Then we have

φ0(x) = 1
φ1(x) = ∫_0^x (x − t) dt = x²/2 = x²/2!
φ2(x) = ∫_0^x (x − t)(t²/2) dt = x⁴/(2·3·4) = x⁴/4!
φ3(x) = ∫_0^x (x − t)(t⁴/4!) dt = x⁶/6!

Consequently, the approximate solutions of the given equation (here λ = −1) are

y0(x) ≈ 1
y1(x) ≈ 1 − x²/2!
y2(x) ≈ 1 − x²/2! + x⁴/4!
y3(x) ≈ 1 − x²/2! + x⁴/4! − x⁶/6!
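The partial sums above are exactly the Taylor polynomials of cos x, which is the exact solution of this equation (differentiating twice gives y'' + y = 0 with y(0) = 1, y'(0) = 0). The sketch below (an illustration, plain NumPy assumed) iterates the approximation numerically on a grid and compares it with cos x.

```python
import numpy as np

x = np.linspace(0.0, 3.0, 601)
h = x[1] - x[0]

def volterra_step(y):
    # y_new(x_i) = 1 - integral_0^{x_i} (x_i - t) y(t) dt   (trapezoidal rule in t)
    y_new = np.empty_like(y)
    for i in range(len(x)):
        g = (x[i] - x[:i + 1]) * y[:i + 1]
        y_new[i] = 1.0 - (h * (g.sum() - 0.5 * (g[0] + g[-1])) if i > 0 else 0.0)
    return y_new

y = np.ones_like(x)                      # y_0(x) = 1
for _ in range(8):                       # successive approximations
    y = volterra_step(y)

print("max |y - cos x| =", np.abs(y - np.cos(x)).max())
```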
Example 3.2

For the Volterra integral equation of the second kind

y(x) = λ ∫_0^x e^{x−t} y(t) dt + f(x)

find the resolvent and the solution.

Solution

We find the iterated kernels:

k(x, t) = k1(x, t) = e^{x−t}

k2(x, t) = ∫_t^x k(x, s) k(s, t) ds = ∫_t^x e^{x−s} e^{s−t} ds = (x − t) e^{x−t}

k3(x, t) = ∫_t^x e^{x−s} (s − t) e^{s−t} ds = ((x − t)²/2!) e^{x−t}

Hence

kn(x, t) = ((x − t)ⁿ⁻¹/(n − 1)!) e^{x−t}

The resolvent kernel is

Γ(x, t, λ) = e^{x−t} Σ_{n=1}^{∞} λⁿ⁻¹ (x − t)ⁿ⁻¹/(n − 1)! = e^{x−t} e^{λ(x−t)} = e^{(λ+1)(x−t)}

The solution has the form

y(x) = f(x) + λ ∫_0^x e^{(λ+1)(x−t)} f(t) dt

The inconvenience of the method of successive approximations is the need to evaluate the integrals in closed form (in quadratures). If this is not possible, one may use numerical methods.

Exercise

1. Solve completely

y(x) = 5/6 + (1/2) ∫_0^1 x t y(t) dt

2. For the above integral equation, use the method of successive approximations to solve.

3. From (1) and (2) above, estimate the accuracy of the results.
CHAPTER 4

The reader should be able to identify degenerate kernels and solve the corresponding integral equations appropriately.

METHOD OF DEGENERATE KERNELS FOR FREDHOLM'S INTEGRAL EQUATION OF THE 2nd TYPE.

TRANSFORMATION TO A SYSTEM OF LINEAR ALGEBRAIC EQUATIONS

The object is to find a solution of the Fredholm integral equation of the 2nd kind

y(x) = f(x) + λ ∫_a^b k(x, t) y(t) dt                              … (4.1)

not only for sufficiently small values of the parameter λ, but for all λ for which the solution exists. We consider an integral equation with a degenerate kernel.

Definition
The kernel k(x, t) of an integral equation is called degenerate if it can be represented as a finite sum of pairwise products of functions, one depending only on x and the other only on t:

k(x, t) = a1(x) b1(t) + a2(x) b2(t) + … + an(x) bn(t) = Σ_{i=1}^{n} ai(x) bi(t)        … (4.2)

where ai(x), bi(t) are linearly independent functions. Fredholm's I.E. of the 2nd kind with a degenerate kernel is an equation of the form

y(x) = f(x) + λ ∫_a^b [ Σ_{i=1}^{n} ai(x) bi(t) ] y(t) dt                              … (4.3)

Equation (4.3) is solved by expanding the degenerate kernel in the equation:

y(x) = f(x) + λ [ ∫_a^b a1(x) b1(t) y(t) dt + ∫_a^b a2(x) b2(t) y(t) dt + … + ∫_a^b an(x) bn(t) y(t) dt ]
     = f(x) + λ [ a1(x) ∫_a^b b1(t) y(t) dt + a2(x) ∫_a^b b2(t) y(t) dt + … + an(x) ∫_a^b bn(t) y(t) dt ]
     = f(x) + λ Σ_{i=1}^{n} ai(x) ∫_a^b bi(t) y(t) dt

Suppose

∫_a^b bi(t) y(t) dt = ci                                            … (4.4)

where ci (i = 1, …, n) are constant coefficients; then we have

y(x) = f(x) + λ Σ_{i=1}^{n} ci ai(x)                                 … (4.5)

If the coefficients ci are found, the problem can be considered solved. However, the ci cannot be calculated directly, since the function y(x) is unknown. To find the ci we substitute (4.5) into (4.4), after which we get a system of algebraic equations:

ci = ∫_a^b bi(t) f(t) dt + λ ∫_a^b bi(t) Σ_{j=1}^{n} cj aj(t) dt

from which

ci − λ Σ_{j=1}^{n} cj gij = fi                                       … (4.6)

where

fi = ∫_a^b bi(t) f(t) dt,    gij = ∫_a^b bi(t) aj(t) dt              … (4.7)

The system (4.6) can be rewritten in the form

Σ_{j=1}^{n} (δij − λ gij) cj = fi                                    … (4.8)

where δij is Kronecker's symbol:

δij = 1 if i = j,  0 if i ≠ j   (i, j = 1, …, n)

Therefore, to find the coefficients c1, c2, …, cn we have a system of n linear algebraic equations in n unknowns:

c1 − λ Σ_{j=1}^{n} g1j cj = f1
c2 − λ Σ_{j=1}^{n} g2j cj = f2                                       … (4.9)
…………………
cn − λ Σ_{j=1}^{n} gnj cj = fn

or, in expanded form,

(1 − λ g11) c1 − λ g12 c2 − … − λ g1n cn = f1
−λ g21 c1 + (1 − λ g22) c2 − … − λ g2n cn = f2
…………………
−λ gn1 c1 − λ gn2 c2 − … + (1 − λ gnn) cn = fn

If the system (4.9) has a solution with respect to the unknowns ci, then the non-homogeneous integral equation also has a solution, defined by equation (4.5). Consequently, the integral equation (Fredholm, second kind) with degenerate kernel and the system (4.9) are equivalent.

The determinant of the system (4.9) is

D(λ) = det(δij − λ gij) =
| 1 − λ g11   −λ g12      …   −λ g1n    |
| −λ g21      1 − λ g22   …   −λ g2n    |                            … (4.10)
| ……………………………………………………                  |
| −λ gn1      −λ gn2      …   1 − λ gnn |

Let Aji(λ) be the cofactor of the element δji − λ gji of the determinant D(λ). If D(λ) ≠ 0, then by Cramer's rule

ci = Di(λ)/D(λ) = Σ_{j=1}^{n} Aji(λ) fj / D(λ)                       … (4.11)

where Di(λ) is the determinant (4.10) with its i-th column replaced by the column of free terms (i = 1, …, n). By the relation (4.5), the solution of the integral equation (4.3) has the form

y(x) = f(x) + λ ∫_a^b [ Σ_{i=1}^{n} Σ_{j=1}^{n} ai(x) bj(t) Aji(λ) / D(λ) ] f(t) dt

For brevity, write

D(x, t, λ) = Σ_{i=1}^{n} Σ_{j=1}^{n} ai(x) bj(t) Aji(λ)

to obtain

y(x) = f(x) + λ ∫_a^b [ D(x, t, λ) / D(λ) ] f(t) dt                  … (4.12)

The function

Γ(x, t, λ) = D(x, t, λ)/D(λ) = Σ_{i=1}^{n} Σ_{j=1}^{n} ai(x) bj(t) Aji(λ)/D(λ)

is called the resolvent of the integral equation. Hence

y(x) = f(x) + λ ∫_a^b Γ(x, t, λ) f(t) dt

If D(x, t, λ) and D(λ) are evaluated, then the resolvent Γ is known.
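As a computational illustration of (4.4)–(4.9) (a sketch, not part of the notes; scipy assumed): for a degenerate kernel k(x, t) = Σ ai(x) bi(t), the coefficients gij and fi can be evaluated by quadrature and the system (4.8) solved with standard linear algebra. The inputs below correspond to Example 4.1, which follows, so the output can be checked against it.

```python
import numpy as np
from scipy.integrate import quad

# degenerate kernel k(x, t) = 1*1 + (3x)*t, lambda = 2, f(x) = x^2 (Example 4.1)
a_funcs = [lambda x: 1.0, lambda x: 3.0 * x]
b_funcs = [lambda t: 1.0, lambda t: t]
f = lambda x: x**2
lam, a, b = 2.0, 0.0, 1.0
n = len(a_funcs)

g = np.array([[quad(lambda t: b_funcs[i](t) * a_funcs[j](t), a, b)[0]
               for j in range(n)] for i in range(n)])
fvec = np.array([quad(lambda t: b_funcs[i](t) * f(t), a, b)[0] for i in range(n)])

c = np.linalg.solve(np.eye(n) - lam * g, fvec)        # system (4.8)
y = lambda x: f(x) + lam * sum(c[i] * a_funcs[i](x) for i in range(n))   # (4.5)
print(c)          # expect approximately [-5/24, -1/24]
print(y(0.5))
```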

Example 4.1

Solve the integral equation

y(x) − 2 ∫_0^1 (1 + 3 x t) y(t) dt = x²

Solution

Transform the given equation into the form

y(x) = 2 ∫_0^1 y(t) dt + 6 x ∫_0^1 t y(t) dt + x²

and let

c1 = ∫_0^1 y(t) dt,   c2 = ∫_0^1 t y(t) dt                            … (1)

Then

y(x) = 2 c1 + 6 x c2 + x²                                             … (2)

Substituting (2) into (1), we have the system of equations

c1 = ∫_0^1 (2 c1 + 6 t c2 + t²) dt = 2 c1 + 3 c2 + 1/3
c2 = ∫_0^1 (2 c1 t + 6 t² c2 + t³) dt = c1 + 2 c2 + 1/4

or

c1 + 3 c2 = −1/3
c1 + c2 = −1/4

Solving this system we obtain

c1 = −5/24,   c2 = −1/24

On the basis of equation (2), the solution of the given equation is

y(x) = x² − x/4 − 5/12
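A short verification (illustrative sketch, scipy assumed): substitute y(t) = t² − t/4 − 5/12 into the left-hand side of the equation of Example 4.1 and compare with x².

```python
import numpy as np
from scipy.integrate import quad

def y(t):
    return t**2 - t / 4 - 5 / 12

def lhs(x):
    # y(x) - 2 * integral_0^1 (1 + 3 x t) y(t) dt
    val, _ = quad(lambda t: (1 + 3 * x * t) * y(t), 0.0, 1.0)
    return y(x) - 2 * val

for x in [0.0, 0.5, 1.0]:
    print(x, lhs(x), x**2)     # the last two columns should agree
```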

Example 4.2

Solve the integral equation

y(x) − λ ∫_0^1 (1 + x + t) y(t) dt = f(x)

Solution

The integral equation can be transformed into the form

y(x) = λ (1 + x) ∫_0^1 y(t) dt + λ ∫_0^1 t y(t) dt + f(x)

Writing

c1 = ∫_0^1 y(t) dt,   c2 = ∫_0^1 t y(t) dt                             … (1)

we have

y(x) = λ (1 + x) c1 + λ c2 + f(x)                                      … (2)

Substituting the relation (2) into equation (1), we obtain

c1 (1 − (3/2) λ) − λ c2 = ∫_0^1 f(t) dt
−(5/6) λ c1 + (1 − λ/2) c2 = ∫_0^1 t f(t) dt

The determinant of the system is

D(λ) = | 1 − (3/2)λ    −λ       | = 1 − 2λ − λ²/12
       | −(5/6)λ       1 − λ/2  |

If D(λ) ≠ 0, the unique solution of the system is

c1 = [ (1 − λ/2) ∫_0^1 f(t) dt + λ ∫_0^1 t f(t) dt ] / D(λ)

c2 = [ (5/6) λ ∫_0^1 f(t) dt + (1 − (3/2)λ) ∫_0^1 t f(t) dt ] / D(λ)

Therefore, the solution is of the form

y(x) = f(x) + λ ∫_0^1 { 1 + x + t + λ [1/3 − (x + t)/2 + x t] } / (1 − 2λ − λ²/12) · f(t) dt

and the resolvent is

Γ(x, t; λ) = { 1 + x + t + λ [1/3 − (x + t)/2 + x t] } / (1 − 2λ − λ²/12)
Example 4.3

Find the solution of the I.E. with degenerate kernel

y(x) − λ ∫_{−π}^{π} (x cos t + t² sin x + cos x sin t) y(t) dt = 2x

Solution

Expanding and factoring out the constants we have

y(x) = λ x ∫_{−π}^{π} y(t) cos t dt + λ sin x ∫_{−π}^{π} t² y(t) dt + λ cos x ∫_{−π}^{π} y(t) sin t dt + 2x      … (1)

Let

∫_{−π}^{π} y(t) cos t dt = C1;   ∫_{−π}^{π} t² y(t) dt = C2;   ∫_{−π}^{π} y(t) sin t dt = C3

where C1, C2, C3 are unknown constants. Then we have

y(x) = C1 λ x + C2 λ sin x + C3 λ cos x + 2x                            … (2)

Replacing (2) in (1) we have

C1 = ∫_{−π}^{π} (C1 λ t + C2 λ sin t + C3 λ cos t + 2t) cos t dt
C2 = ∫_{−π}^{π} (C1 λ t + C2 λ sin t + C3 λ cos t + 2t) t² dt            … (3)
C3 = ∫_{−π}^{π} (C1 λ t + C2 λ sin t + C3 λ cos t + 2t) sin t dt

Opening the brackets and regrouping, we have the following system:

C1 (1 − λ ∫_{−π}^{π} t cos t dt) − C2 λ ∫_{−π}^{π} sin t cos t dt − C3 λ ∫_{−π}^{π} cos² t dt = 2 ∫_{−π}^{π} t cos t dt,

−C1 λ ∫_{−π}^{π} t³ dt + C2 (1 − λ ∫_{−π}^{π} t² sin t dt) − C3 λ ∫_{−π}^{π} t² cos t dt = 2 ∫_{−π}^{π} t³ dt,        … (4)

−C1 λ ∫_{−π}^{π} t sin t dt − C2 λ ∫_{−π}^{π} sin² t dt + C3 (1 − λ ∫_{−π}^{π} sin t cos t dt) = 2 ∫_{−π}^{π} t sin t dt

We now evaluate the definite integrals in the system:

∫_{−π}^{π} t³ dt = [t⁴/4]_{−π}^{π} = π⁴/4 − π⁴/4 = 0

∫_{−π}^{π} sin² t dt = ∫_{−π}^{π} (1/2)(1 − cos 2t) dt = [t/2 − (1/4) sin 2t]_{−π}^{π} = π

∫_{−π}^{π} cos² t dt = ∫_{−π}^{π} (1/2)(1 + cos 2t) dt = [t/2 + (1/4) sin 2t]_{−π}^{π} = π

∫_{−π}^{π} sin t cos t dt = (1/2) ∫_{−π}^{π} sin 2t dt = [−(1/4) cos 2t]_{−π}^{π} = 0

∫_{−π}^{π} t sin t dt = [−t cos t + sin t]_{−π}^{π} = π − (−π) = 2π
(here u = t, du = dt, dv = sin t dt, v = −cos t)

∫_{−π}^{π} t cos t dt = [t sin t + cos t]_{−π}^{π} = 0
(here u = t, du = dt, dv = cos t dt, v = sin t)

∫_{−π}^{π} t² sin t dt = [−t² cos t]_{−π}^{π} + 2 ∫_{−π}^{π} t cos t dt = π² − π² + 0 = 0
(here u = t², du = 2t dt, dv = sin t dt, v = −cos t)

∫_{−π}^{π} t² cos t dt = [t² sin t]_{−π}^{π} − 2 ∫_{−π}^{π} t sin t dt = 0 − 2(2π) = −4π
(here u = t², du = 2t dt, dv = cos t dt, v = sin t)
Putting the values of the definite integrals into the system (4), we obtain a system of algebraic equations for C1, C2, C3:

C1 − λπ C3 = 0
C2 + 4λπ C3 = 0                                                         … (5)
−2λπ C1 − λπ C2 + C3 = 2(2π) = 4π

The determinant of the system is

D(λ) = | 1      0     −λπ |
       | 0      1     4λπ | = 1 + 4π²λ² − 2π²λ² = 1 + 2π²λ² ≠ 0
       | −2λπ  −λπ    1   |

We solve system (5) by Cramer's method:

C1 = | 0    0    −λπ |
     | 0    1    4λπ |  / D(λ) = 4π²λ / (1 + 2π²λ²)
     | 4π  −λπ   1   |

C2 = | 1     0    −λπ |
     | 0     0    4λπ |  / D(λ) = −16π²λ / (1 + 2π²λ²)
     | −2λπ  4π   1   |

C3 = | 1     0    0  |
     | 0     1    0  |  / D(λ) = 4π / (1 + 2π²λ²)
     | −2λπ −λπ  4π  |

Substituting the values of C1, C2, C3 into equation (2), we obtain the solution of the integral equation:

y(x) = [4πλ / (1 + 2π²λ²)] (πλ x − 4πλ sin x + cos x) + 2x
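Because several constants are easy to mistype here, a numerical cross-check is useful (illustrative sketch, scipy assumed): pick a value of λ, substitute the closed-form y(x) above into the original equation, and compare both sides.

```python
import numpy as np
from scipy.integrate import quad

lam, Pi = 0.3, np.pi
D = 1 + 2 * Pi**2 * lam**2

def y(x):
    return 4 * Pi * lam / D * (Pi * lam * x - 4 * Pi * lam * np.sin(x) + np.cos(x)) + 2 * x

def lhs(x):
    # y(x) - lambda * integral_{-pi}^{pi} (x cos t + t^2 sin x + cos x sin t) y(t) dt
    k = lambda t: (x * np.cos(t) + t**2 * np.sin(x) + np.cos(x) * np.sin(t)) * y(t)
    val, _ = quad(k, -Pi, Pi)
    return y(x) - lam * val

for x in [0.0, 1.0, 2.0]:
    print(x, lhs(x), 2 * x)    # should agree with the free term 2x
```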


Exercises

1.  y(x) − 4 ∫_0^{π/2} sin² x · y(t) dt = 2x − π

Answer:

y(x) = [π²/(π − 1)] sin² x + 2x − π

2.  y(x) − ∫_{−1}^{1} exp{arcsin x} · y(t) dt = tan x

Answer:

y(x) = tan x

3.  y(x) − λ ∫_{−π/4}^{π/4} tan t · y(t) dt = cot x

Answer:

y(x) = λ π/2 + cot x
CHAPTER FIVE

At the end of this short chapter, you should be able to link Fourier series to degenerate kernels and be ready to apply this to find approximate solutions to integral equations.

EXPANSION OF THE DEGENERATE KERNEL INTO A FOURIER SERIES

For the approximate solution of an integral equation

y(x) = f(x) + λ ∫_a^b k(x, t) y(t) dt                                   … (5.1)

where the functions f(x) and k(x, t) are continuous, the kernel k(x, t) is replaced by a degenerate kernel approximating it,

k⁽ⁿ⁾(x, t) = Σ_{i=0}^{n} ai(x) bi(t)                                    … (5.2)

Let l = b − a. The continuous kernel k(x, t) can be approximated by a trigonometric polynomial of period 2l,

k⁽ⁿ⁾(x, t) = (1/2) a0(t) + Σ_{k=1}^{n} ak(t) cos(kπx/l),   k = 0, 1, 2, …      … (5.3)

A similar expansion can be obtained if the roles of x and t are interchanged. We can also use, on the finite interval, the double Fourier series. For instance

ak(t) ≈ (1/2) ak0 + Σ_{m=1}^{n} akm cos(mπt/l)                           … (5.4)

Then, on the basis of (5.3) and (5.4), we have

k⁽ⁿ⁾(x, t) = (1/4) a00 + (1/2) Σ_{k=1}^{n} ak0 cos(kπx/l) + (1/2) Σ_{m=1}^{n} a0m cos(mπt/l) + Σ_{k=1}^{n} Σ_{m=1}^{n} akm cos(kπx/l) cos(mπt/l),

where

akm = (4/l²) ∫_a^b ∫_a^b k(x, t) cos(kπx/l) cos(mπt/l) dx dt

If k⁽ⁿ⁾(x, t) is a degenerate kernel approximating the exact kernel k(x, t), and the function fn(x) is close to the function f(x), then the solution zn(x) of the approximate integral equation

zn(x) = fn(x) + λ ∫_a^b k⁽ⁿ⁾(x, t) zn(t) dt

is an approximation to the exact solution y(x) of the integral equation (5.1).
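A small sketch (illustrative only; the kernel below is just an example input, scipy assumed) showing how the coefficients akm of the double cosine expansion can be computed numerically, and how well the truncated degenerate kernel reproduces k(x, t).

```python
import numpy as np
from scipy.integrate import dblquad

a, b, N = 0.0, 1.0, 4
l = b - a
k = lambda x, t: np.exp(x * t)          # example kernel

def a_km(k_idx, m_idx):
    # a_km = (4 / l^2) * double integral of k(x,t) cos(k pi x / l) cos(m pi t / l)
    f = lambda t, x: k(x, t) * np.cos(k_idx * np.pi * x / l) * np.cos(m_idx * np.pi * t / l)
    val, _ = dblquad(f, a, b, lambda x: a, lambda x: b)
    return 4.0 / l**2 * val

A = np.array([[a_km(i, j) for j in range(N + 1)] for i in range(N + 1)])

def k_n(x, t):
    # truncated double cosine series with the usual 1/2 weights when k = 0 or m = 0
    total = 0.0
    for i in range(N + 1):
        for j in range(N + 1):
            w = (0.5 if i == 0 else 1.0) * (0.5 if j == 0 else 1.0)
            total += w * A[i, j] * np.cos(i * np.pi * x / l) * np.cos(j * np.pi * t / l)
    return total

print(k(0.3, 0.7), k_n(0.3, 0.7))       # the truncated series should be reasonably close
```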


CHAPTER SIX

At the end of this chapter, you should be able to:

1. find the eigenvalues and eigenfunctions of integral equations with degenerate kernel;

2. find conditions under which non-homogeneous integral equations have a unique solution, no solution, or infinitely many solutions.

EIGENVALUES AND EIGENFUNCTIONS OF AN INTEGRAL EQUATION

We continue the investigation of the equation with degenerate kernel

k(x, t) = Σ_{i=1}^{n} ai(x) bi(t)

of the form

y(x) = λ ∫_a^b [ Σ_{i=1}^{n} ai(x) bi(t) ] y(t) dt + f(x)                 … (6.1)

DEFINITION:

An eigenvalue of the integral equation (6.1) is a value of the parameter λ for which the homogeneous integral equation

y(x) = λ ∫_a^b [ Σ_{i=1}^{n} ai(x) bi(t) ] y(t) dt                        … (6.2)

has a non-zero solution; the corresponding solution of the homogeneous equation is the eigenfunction. We recall the homogeneous linear algebraic system, as in Chapter 4, for finding the coefficients ci:

(1 − λ g11) c1 − λ g12 c2 − … − λ g1n cn = 0
−λ g21 c1 + (1 − λ g22) c2 − … − λ g2n cn = 0
……………………………………                                                            … (6.3)
−λ gn1 c1 − λ gn2 c2 − … + (1 − λ gnn) cn = 0

where gij and ci (i, j = 1, …, n) have the same sense as in Chapter 4.


The eigenvalues are the roots of the algebraic equation obtained by equating the determinant D(λ) of the system (6.3) to zero:

D(λ) = | 1 − λ g11   −λ g12      …   −λ g1n    |
       | −λ g21      1 − λ g22   …   −λ g2n    | = 0                       … (6.4)
       | ……………………………………………………                  |
       | −λ gn1      −λ gn2      …   1 − λ gnn |

an equation of degree m ≤ n in λ. If this equation has m roots, then the integral equation (6.2) has m eigenvalues. Every eigenvalue λk (k = 1, 2, …, m; m ≤ n) corresponds to a non-zero solution of the homogeneous system (6.3):

C1⁽¹⁾, C2⁽¹⁾, …, Cn⁽¹⁾  – first solution
C1⁽²⁾, C2⁽²⁾, …, Cn⁽²⁾  – second solution
……
C1⁽ᵐ⁾, C2⁽ᵐ⁾, …, Cn⁽ᵐ⁾  – m-th solution

If λk ≠ 0 is a root of equation (6.4), then the corresponding eigenfunction of the degenerate kernel k(x, t) is

yk(x) = Σ_{i=1}^{n} Ci⁽ᵏ⁾ ai(x),   k = 1, 2, …, m

Thus the homogeneous integral equation

y(x) = λ ∫_a^b k(x, t) y(t) dt

with degenerate kernel k(x, t), for each value λ = λk for which D(λk) = 0, has non-zero solutions y(x) defined by

yk(x) = λk Σ_{i=1}^{n} Ci⁽ᵏ⁾ ai(x),

where C⁽ᵏ⁾ is a non-trivial solution of the homogeneous system

Σ_{j=1}^{n} (δij − λk gij) Cj⁽ᵏ⁾ = 0

If λ = λk (k = 1, 2, …, m) is an eigenvalue of the degenerate kernel k(x, t), then the non-homogeneous integral equation

y(x) = f(x) + λ ∫_a^b k(x, t) y(t) dt

either has no solution or has infinitely many solutions.

Example 6.1

For the integral equation with degenerate kernel

y(x) = λ ∫_0^1 x t² y(t) dt + f(x)

find the eigenvalues, the eigenfunctions, the solution of the non-homogeneous equation for any free term f(x) (if λ is not an eigenvalue), the resolvent, and also the solution of the integral equation for λ = 3 and f(x) = 1.

Solution

From the given equation, we have

y(x) = λ x ∫_0^1 t² y(t) dt + f(x)                                        … (1)

Let

C1 = ∫_0^1 t² y(t) dt                                                     … (2)

Then

y(x) = λ C1 x + f(x)                                                      … (3)

Substituting the relation (3) into (2), we have

C1 = ∫_0^1 t² [λ C1 t + f(t)] dt = λ C1/4 + ∫_0^1 t² f(t) dt,

whence

[1 − λ/4] C1 = ∫_0^1 t² f(t) dt                                            … (4)

and

C1 = 1/(1 − λ/4) ∫_0^1 t² f(t) dt                                          … (5)

To find the eigenvalues and eigenfunctions, put f(x) = 0. Then from (4) it follows that

[1 − λ/4] C1 = 0                                                           … (6)

This equation has a non-zero solution only for λ = 4, namely C1 = C ≠ 0, where C is an arbitrary constant. The corresponding solution of the equation is given by the relation (3) and has the form

φ1(x) = 4Cx = C* x                                                         … (7)

where C* is an arbitrary constant (C* = 4C). Consequently, the eigenvalue is λ1 = 4, and the corresponding eigenfunction is φ1(x) = C* x.

If λ ≠ 4, then by (5)

C1 = ∫_0^1 t² f(t) dt / (1 − λ/4)

Substituting C1 into (3) we get the solution of the given integral equation

y(x) = λ x ∫_0^1 t² f(t) dt / (1 − λ/4) + f(x)

or

y(x) = λ ∫_0^1 [4 x t² / (4 − λ)] f(t) dt + f(x)

Hence the resolvent is

Γ(x, t, λ) = 4 x t² / (4 − λ)

For λ = 3 and f(x) = 1, the solution of the integral equation has the form

y(x) = 3 ∫_0^1 [4 x t² / (4 − 3)] dt + 1 = 4x + 1

Example 6.2

Solve the non-homogeneous integral equation

y(x) − λ ∫_0^π cos(x + t) y(t) dt = 1

Solution

The kernel of the integral equation is

K(x, t) = Σ_{i=1}^{2} ai(x) bi(t) = cos x cos t − sin x sin t

where a1(x) = cos x, b1(t) = cos t; a2(x) = −sin x, b2(t) = sin t.

The system of linear algebraic equations for finding C1 and C2 has the form

(1 − λ g11) C1 − λ g12 C2 = f1
−λ g21 C1 + (1 − λ g22) C2 = f2                                            … (1)

where

g11 = ∫_0^π b1(t) a1(t) dt = ∫_0^π cos² t dt = π/2
g12 = ∫_0^π b1(t) a2(t) dt = ∫_0^π (−sin t cos t) dt = 0
g21 = ∫_0^π b2(t) a1(t) dt = ∫_0^π sin t cos t dt = 0
g22 = ∫_0^π b2(t) a2(t) dt = ∫_0^π (−sin² t) dt = −π/2
f1 = ∫_0^π b1(t) f(t) dt = ∫_0^π cos t dt = 0
f2 = ∫_0^π b2(t) f(t) dt = ∫_0^π sin t dt = 2

Consequently the system (1) has the form

C1 [1 − λπ/2] = 0                                                           … (2)
C2 [1 + λπ/2] = 2

The determinant of the system is

D(λ) = (1 − λπ/2)(1 + λπ/2) = 1 − λ²π²/4

and the eigenvalues are

λ1 = −2/π,   λ2 = 2/π

1. If the determinant of the system D(λ) ≠ 0, then the solution of the system is

C1 = 0,   C2 = 2/(1 + λπ/2)

and the equation has the unique solution

y(x) = λ C1 a1(x) + λ C2 a2(x) + f(x),

hence

y(x) = −2λ sin x/(1 + λπ/2) + 1

2. If the determinant of the system D(λ) = 0, then either λ1 = −2/π or λ2 = 2/π.

For λ1 = −2/π, from (2) we have

2 C1 = 0  and  0 · C2 = 2,

which is impossible. Hence for this eigenvalue the given non-homogeneous equation has no solution.

For λ2 = 2/π, by system (2) we have

C1 – arbitrary,   C2 = 1

and, therefore, the solution has the form

y(x) = (2/π) C1 cos x − (2/π) sin x + 1

Example 6.3

For the I.E. with symmetric kernel

y(x) = λ ∫_0^1 (x + t) y(t) dt + f(x)

find the solution, the eigenvalues, the eigenfunctions, the resolvent, and also the solution for λ = λk, where λk is an eigenvalue.

Solution

The kernel k(x, t) = x + t is degenerate. The equation can be put in the form

y(x) = λ x ∫_0^1 y(t) dt + λ ∫_0^1 t y(t) dt + f(x)                          … (1)

Let

∫_0^1 y(t) dt = C1,   ∫_0^1 t y(t) dt = C2                                   … (2)

Then the given equation takes the form

y(x) = λ C1 x + λ C2 + f(x)                                                  … (3)

Substituting (3) into (2), we have

∫_0^1 [λ C1 t + λ C2 + f(t)] dt = C1
∫_0^1 t [λ C1 t + λ C2 + f(t)] dt = C2

that is,

(λ/2) C1 + λ C2 + ∫_0^1 f(t) dt = C1
(λ/3) C1 + (λ/2) C2 + ∫_0^1 t f(t) dt = C2

and hence

∫_0^1 f(t) dt = [1 − λ/2] C1 − λ C2
∫_0^1 t f(t) dt = −(λ/3) C1 + [1 − λ/2] C2                                    … (4)

The eigenvalues are found by equating the determinant of the system (4) to zero:

D(λ) = | 1 − λ/2    −λ       | = 1 − λ − λ²/12 = 0
       | −λ/3       1 − λ/2  |

Hence the eigenvalues are

λ1,2 = −6 ± 4√3

Since the kernel K(x, t) is symmetric, i.e. K(x, t) = K(t, x), the eigenvalues λ1, λ2 are real.

To find the eigenfunctions, solve (4) for λ = λ1, λ2, i.e. the system with zero right-hand side:

(1 − λk/2) C1 − λk C2 = 0
−(λk/3) C1 + (1 − λk/2) C2 = 0

Since the equations of the system are dependent, we need only consider one of them, say the first:

(1 − λk/2) C1 − λk C2 = 0   (k = 1, 2)

from which

λk C2 = C1 (1 − λk/2)

From (3) we find the eigenfunctions

φk(x) = C1 [λk x + (1 − λk/2)]

For λ ≠ λk (k = 1, 2) the solution of (4) is

C1 = { [1 − λ/2] ∫_0^1 f(t) dt + λ ∫_0^1 t f(t) dt } / (1 − λ − λ²/12)

C2 = { (λ/3) ∫_0^1 f(t) dt + [1 − λ/2] ∫_0^1 t f(t) dt } / (1 − λ − λ²/12)

Substituting these values in equation (3) we find the solution of the non-homogeneous equation:

y(x) = λ ∫_0^1 { [1 − λ/2] x + λ x t + λ/3 + [1 − λ/2] t } / (1 − λ − λ²/12) · f(t) dt + f(x)

Hence the resolvent is

Γ(x, t, λ) = { [1 − λ/2] x + λ x t + λ/3 + [1 − λ/2] t } / (1 − λ − λ²/12)

The kernel is symmetric and therefore the eigenfunctions of the integral equation and of its transpose coincide, i.e.

φk(x) = ψk(x) = C1 [λk x + 1 − λk/2]

The condition for the integral equation to have a solution when λ = λk is

∫_0^1 f(x) [λk x + (1 − λk/2)] dx = 0

or

λk ∫_0^1 t f(t) dt + [1 − λk/2] ∫_0^1 f(t) dt = 0                             … (5)

This is the consistency condition of the system (4). In this case (4) reduces to

[1 − λk/2] C1 − λk C2 = ∫_0^1 f(t) dt,

or

λk C2 = [1 − λk/2] C1 − ∫_0^1 f(t) dt

Then the solution of the non-homogeneous integral equation (1) for λ = λk has the form

y(x) = C1 [λk x + 1 − λk/2] − ∫_0^1 f(t) dt + f(x)

Summary

For the integral equation (6.1) the following hold:

1. If the parameter λ is not an eigenvalue, then the non-homogeneous equation (6.1) has a unique solution for any free term f(x).

2. If the parameter λ is an eigenvalue, i.e. D(λ) = 0, then the homogeneous equation (6.2) has a non-zero solution (eigenfunction), whilst the non-homogeneous integral equation (6.1) has a solution only if

∫_a^b f(x) ψ(x) dx = 0

where ψ(x) is any eigenfunction of the integral equation with the transposed degenerate kernel Σ_{i=1}^{n} ai(t) bi(x).

For the homogeneous integral equation (6.2) the following hold:

1. If the parameter λ is not an eigenvalue (i.e. D(λ) ≠ 0), then the homogeneous integral equation (6.2) has only the trivial solution y(x) ≡ 0.

2. If the parameter λ is an eigenvalue (i.e. D(λ) = 0), then the homogeneous integral equation has non-zero solutions (eigenfunctions). The solution of the homogeneous integral equation (6.2) can be put in the form of a linear combination of these eigenfunctions:

y(x) = C1 φ1(x) + C2 φ2(x) + … + Cn φn(x) = Σ_{i=1}^{n} Ci φi(x)

EXERCISES

For the equations with degenerate kernel, find the eigenvalues, the resolvent, and the solution of the non-homogeneous integral equation for the given values of λ and f(x):

1.  y(x) = λ ∫_0^{2π} sin x sin t · y(t) dt + f(x),   if λ = 1, f(x) = 1

Answer:
λ1 = 1/π,  φ1(x) = C sin x,  Γ(x, t; λ) = sin x sin t/(1 − λπ);
y(x) = C sin x + f(x) at λ = λ1;  for λ = 1, f(x) = 1:  y(x) = 1

2.  y(x) = −λ ∫_0^1 (x²t + xt²) y(t) dt + f(x),   if λ = λk

Answer:
Γ(x, t; λ) = { λ x t/5 − (1 + λ/4) x t − (1 + λ/4) x²t² + λ x²t²/3 } / (1 + λ/2 − λ²/240)

for λ = λk the solution is

y(x) = φk(x) − 5 x² ∫_0^1 t² f(t) dt + f(x)

3.  y(x) = λ ∫_{−1}^{1} (x t + x²t²) y(t) dt + f(x),   if λ = 1, f(x) = x

Answer:
λ1 = 3/2,  φ1(x) = ψ1(x) = C x;   λ2 = 5/2,  φ2(x) = ψ2(x) = C x²

Γ(x, t; λ) = x t/(1 − 2λ/3) + x²t²/(1 − 2λ/5)

If λ = 3/2:  y(x) = C x + f(x) + (15/4) x² ∫_{−1}^{1} t² f(t) dt
If λ = 5/2:  y(x) = C x² + f(x) − (15/4) x ∫_{−1}^{1} t f(t) dt
For λ = 1, f(x) = x:  y(x) ≡ 3x
CHAPTER SEVEN

Readers will be able to formulate and apply Fredholm's theorems for the non-homogeneous and homogeneous Fredholm integral equations.

FREDHOLM'S ALTERNATIVE

For the Fredholm I.E. the following holds:

Theorem 7.1
(Fredholm's alternative) Either the non-homogeneous equation of the second kind

y(x) = f(x) + λ ∫_a^b k(x, t) y(t) dt                                       … (7.1)

has a unique solution for any function f(x), or the corresponding homogeneous integral equation

y(x) = λ ∫_a^b k(x, t) y(t) dt                                               … (7.2)

has at least one non-zero solution.

Theorem 7.2

The necessary and sufficient condition for the existence of a solution y(x) of the non-homogeneous integral equation (7.1) in the second case of the alternative is the orthogonality of the right-hand side f(x) to every solution ψ(x) of the homogeneous integral equation adjoint to (7.2),

ψ(x) = λ ∫_a^b k(t, x) ψ(t) dt,

i.e.

∫_a^b f(x) ψ(x) dx = 0                                                       … (7.3)

For a degenerate kernel, the orthogonality condition (7.3) on the right-hand side of the equation gives n equations:

∫_a^b f(t) bi(t) dt = 0   (i = 1, 2, …, n)                                    … (7.4)

Example 1

Solve the integral equation

y(x) = λ ∫_0^1 (5x² − 3) t² y(t) dt + eˣ

Solution

Let

C = ∫_0^1 t² y(t) dt                                                         … (1)

The given equation takes the form

y(x) = C λ (5x² − 3) + eˣ                                                    … (2)

Putting (2) into (1), we have

C = ∫_0^1 t² [C λ (5t² − 3) + e^t] dt = C λ ∫_0^1 (5t⁴ − 3t²) dt + ∫_0^1 t² e^t dt

whence

C − C λ ∫_0^1 (5t⁴ − 3t²) dt = ∫_0^1 t² e^t dt

Integrating the right-hand side by parts (t² = u, e^t dt = dv, du = 2t dt, v = e^t), we find

C − C λ [5 · t⁵/5 − 3 · t³/3]_0^1 = [t² e^t]_0^1 − 2 ∫_0^1 t e^t dt

or

C − C λ (1 − 1 − 0 + 0) = e − 2 ∫_0^1 t e^t dt

Making the substitution t = u, e^t dt = dv, du = dt, v = e^t, we get

C = e − 2 [ t e^t |_0^1 − ∫_0^1 e^t dt ] = e − 2 [ e − (e − e⁰) ] = e − 2

The given equation therefore has a unique solution for any given λ:

y(x) = λ (e − 2)(5x² − 3) + eˣ

The corresponding homogeneous integral equation

y(x) = λ ∫_0^1 (5x² − 3) t² y(t) dt

has only the trivial solution y(x) = 0.
CHAPTER 8

At the end of this chapter, you should be able to find approximate solutions of integral equations by various approximation (quadrature) methods, and to appreciate that an approximate solution can be obtained whenever the unknown function is sought using some form of approximation, say of the kernel or of the free term.

APPLICATION OF THE QUADRATURE FORMULA IN SOLVING FREDHOLM AND VOLTERRA INTEGRAL EQUATIONS

We consider the quadrature formula

∫_a^b F(x) dx = Σ_{i=1}^{n} ki F(xi) + R[F]                                   … (8.1)

where xi are points of the interval [a, b]; ki are numeric coefficients independent of the choice of the function F(x), i = 1, 2, …, n; and R[F] is the remainder (error) of the formula (8.1). Often

ki ≥ 0  and  Σ_{i=1}^{n} ki = b − a

In the case of equally spaced points

xi = a + (i − 1) h,   where h = (b − a)/(n − 1):

(1) for the rectangle formula,
ki = h  (i = 1, 2, …, n − 1),  kn = 0;

(2) for the ordinary trapezoidal formula,
k1 = kn = h/2,  k2 = … = kn−1 = h;

(3) for Simpson's formula (n = 2m + 1),
k1 = k2m+1 = h/3,
k2 = k4 = … = k2m = 4h/3,
k3 = k5 = … = k2m−1 = 2h/3.

Let Fredholm's integral equation of the 2nd kind be given as

y(x) − λ ∫_a^b k(x, t) y(t) dt = f(x)   (a ≤ x ≤ b)                            … (8.2)

Choose nodes xi from the interval [a, b] and introduce the notation

y(xi) = yi,  k(xi, xj) = kij,  f(xi) = fi   (i = 1, …, n)                       … (8.3)

By the quadrature formula (8.1) we obtain the following system of equations:

yi − λ Σ_{j=1}^{n} kj kij yj = fi                                               … (8.4)

Let

δij = 0 for i ≠ j,  1 for i = j.

Since

yi = Σ_{j=1}^{n} δij yj,

(8.4) can be put in the form

Σ_{j=1}^{n} (δij − λ kj kij) yj = fi   (i = 1, 2, …, n)                          … (8.5)

If

D(λ) = det(δij − λ kj kij) ≠ 0                                                   … (8.6)

then (8.5) has a unique solution y1, …, yn, which can be found by various methods. For the solution y(x) of (8.2) we then obtain the approximate analytical expression

y(x) ≈ f(x) + λ Σ_{j=1}^{n} kj k(x, xj) yj                                        … (8.7)

The distinct roots λ1, λ2, …, λm (m ≤ n) of the algebraic equation D(λ) = 0 represent approximate eigenvalues of the kernel K(x, t). If

Yi^{(g,l)}   (g = 1, 2, …, m;  l = 1, 2, …, pg)

are the corresponding non-zero solutions of the homogeneous system

Σ_{j=1}^{n} (δij − λg kj kij) Yj^{(g,l)} = 0   (i = 1, 2, …, n)

then the eigenfunctions of the kernel are approximately

φ_{g,l}(x) ≈ λg Σ_{j=1}^{n} kj k(x, xj) Yj^{(g,l)}

The quadrature method yields good results if the kernel K(x, t) and the function f(x) are sufficiently smooth, that is, if they have tangents at all points and the angle of inclination of the tangent is a continuous function of the arc length s.
If the Fredholm integral equation of the first kind is given,

λ ∫_a^b K(x, t) y(t) dt = f(x),

then the approximate values yi of the solution y(x) at the nodes xi are defined by the system of algebraic equations

λ Σ_{j=1}^{n} kj kij yj = fi   (i = 1, 2, …, n)

If a Volterra integral equation of the 2nd kind is given,

y(x) − λ ∫_a^x k(x, t) y(t) dt = f(x),

then kij = 0 for j > i, and the corresponding system of algebraic equations

yi − λ Σ_{j=1}^{i} kj kij yj = fi                                                 … (8.8)

is a linear system with a triangular matrix. If

1 − λ ki kii ≠ 0,

then from (8.8) we find successively

y1 = f1 (1 − λ k1 k11)⁻¹
y2 = (f2 + λ k1 k21 y1)(1 − λ k2 k22)⁻¹
…………………
yn = (fn + λ Σ_{j=1}^{n−1} kj knj yj)(1 − λ kn knn)⁻¹

In general, let

y(x) = λ ∫_a^b k(x, t) y(t) dt + f(x)                                              … (8.9)

be an integral equation with continuous kernel K(x, t) and free term f(x). If K(x, t) can be approximated sufficiently closely by H(x, t), then, solving the equation with H(x, t), we obtain a solution close to the solution with the kernel K(x, t) for the same function f(x).

Moreover, if we construct a sequence {Hn(x, t)} of degenerate kernels converging uniformly to K(x, t), then the sequence {zn(x)} of solutions of the equations with the kernels Hn(x, t) converges uniformly to the solution y(x) of (8.9).

Methods of constructing such approximations are varied. For example, K(x, t) can be approximated by the partial (truncated) sum of a power or double trigonometric series, if the kernel k(x, t) can be expanded in the rectangle Q {a ≤ x, t ≤ b} into a uniformly convergent power or trigonometric series, or it can be approximated by algebraic or trigonometric interpolating polynomials.

Example 1
Find the approximate solution of the integral equation
1
y ( x ) = e − ∫ xe xt y ( t ) dt
x

Use step size h = 0.5

Solution
In the given interval of integration [0,1] we select nodes at the point
x1 = 0; x2 = 0.5; x3 = 1 . Step size h = 0.5

The corresponding values of the function f ( x ) = e x and the kernel


k ( x, t ) = xe xt at these nodes are given below

Function Function kij = k ( x j , t j )


f ( x)

f ( xi ) = fi xi ti 0 0.5 1

1 0 0 0 0
1.6487 0.5 0.6420 0.8244
0.5000
2.7183 1 1 1.6487 2.7183
By Simpson’s quadrature formula we have
1
1
∫ F ( x ) dt ∼ 6 ⎡⎣ F ( 0 ) + 4F ( 0.5) + F (1)⎤⎦
0

To find the approximate values y_i (i = 1, 2, 3) of the solution y(x) at the nodes x_i, we have the system of algebraic equations

  y_1 = 1
  y_2 + (1/6)(0.5000 y_1 + 2.5680 y_2 + 0.8244 y_3) = 1.6487
  y_3 + (1/6)(1.0000 y_1 + 6.5948 y_2 + 2.7183 y_3) = 2.7183

Grouping common terms,

  1.4280 y_2 + 0.1374 y_3 = 1.5654
  1.0991 y_2 + 1.4531 y_3 = 2.5516

Solving, we have

  y_1 = 1,   y_2 ≈ 1.000,   y_3 ≈ 1.000

The approximate solution of the given integral equation then has the form

  y(x) = f(x) − (h/3)(x e^{x t_1} y_1 + 4 x e^{x t_2} y_2 + x e^{x t_3} y_3)

that is,

  y(x) = e^x − (x/6)(y_1 + 4 e^{x/2} y_2 + e^x y_3) ≈ e^x − (x/6)(1 + 4 e^{x/2} + e^x)
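The computation can be checked with a few lines of Python; the sketch below sets up the same Simpson-rule system. (The exact solution of this equation is y(x) ≡ 1, since ∫_0^1 x e^{xt} dt = e^x − 1, so the nodal values should come out close to 1.)

```python
import numpy as np

nodes = np.array([0.0, 0.5, 1.0])
weights = np.array([1.0, 4.0, 1.0]) / 6.0          # Simpson's rule on [0, 1], h = 0.5
K = lambda x, t: x * np.exp(x * t)                  # kernel x e^{xt}
f = np.exp(nodes)                                   # free term e^x at the nodes

# y_i + sum_j w_j K(x_i, x_j) y_j = f_i
A = np.eye(3) + weights[None, :] * K(nodes[:, None], nodes[None, :])
y = np.linalg.solve(A, f)
print(y)                                            # approximately [1, 1, 1]
```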

EXERCISES
Find the approximate solution of the integral equations below:
1) y(x) = ∫_0^{π/4} x (sin t) y(t) dt + sin x

2) y(x) = ∫_0^1 x (1 − e^{xs}) y(s) ds + e^x − x
CHAPTER NINE

By the time you complete this chapter, you should know:


1. how to define Green’s functions:
2. the properties of Green’s function;
3. how to use Green’s function to transform a Boundary Value
Problem into an Integral Equation;
GREEN’S FUNCTION. TRANSFORMING A BOUNDARY
VALUE PROBLEM INTO AN INTEGRAL EQUATION

Green’s functions of the boundary problem for an ordinary


differential equation

Let us consider the differential equation


Lx ≡ d²x/dt² + g(t) x(t) = h(t) … (9.1)
where g ( t ) and h ( t ) are continuous functions in [ a, b] .

Assume we look for a solution x(t) of equation (9.1) satisfying the following homogeneous boundary conditions
x ( a ) = x (b) = 0 … ( 9.2 )

DEFINITION
Green’s function G ( t , s ) of the boundary value problem 9.1 – 9.2 is
a function in two variables, defined in the rectangle a ≤ t , s ≤ b and
such that
1) L_t G(t,s) = 0 for t < s and t > s; that is, as a function of t, G(t,s) satisfies the homogeneous equation corresponding to (9.1) in the intervals a ≤ t < s and s < t ≤ b, where it possesses continuous second-order derivatives with respect to t.

2) The function G ( t , s ) satisfies the boundary conditions


G ( a, s ) = 0, G ( b,s ) = 0 for a<s<b
3) G ( t , s ) is continuous for t = s
G ( s+0,s ) = G ( s − 0, s )
4) The derivative ∂G/∂t has a unit jump at the point t = s:

  ∂G(s+0, s)/∂t − ∂G(s−0, s)/∂t = 1

Differentiation of generalized functions

Regarding ∂G/∂t as a generalized function of t, its derivative is

  d/dt (∂G/∂t) = ∂²G/∂t² + { ∂G(s+0, s)/∂t − ∂G(s−0, s)/∂t } δ(t − s)

Taking into account the unit jump at t = s and having in mind the differentiation of the unit step function θ(t − s) (fig. 9.1),

[Fig. 9.1: the unit step function θ(t − s), equal to 0 for t < s and 1 for t > s.]

we conclude that

  d²G/dt² (t,s) = d/dt (dG/dt)(t,s) = δ(t − s) + ∂²G/∂t²

where δ(t − s) is the Dirac δ-function and ∂²G/∂t² is the classical (or ordinary) second-order derivative of Green's function.


Therefore, conditions (1) and (4) in the definition can be replaced by

Lt G ( t , s ) = δ ( t − s ) … ( 9.3)

We are now in a position to write down immediately the solution of the boundary value problem (b.v.p.) (9.1)–(9.2) using Green's function. Indeed, the solution of the b.v.p. (9.1)–(9.2) is given by the formula
b
x ( t ) = ∫ G ( t , s ) h ( s ) ds … ( 9.4 )
a

By condition (2) of the definition of Green's function, the function x(t) defined by (9.4) satisfies the boundary conditions (9.2).

Further
⎪⎧ ⎪⎫
b b
Lx ( t ) = Lt ⎨ ∫ G ( t , s ) h ( s ) ds ⎬ = ∫ Lt G ( t , s ) h ( s ) ds =
⎩⎪ a ⎭⎪ a
b
= ∫ δ ( t − s ) h ( s ) ds = h ( t )
a

That is, the function x(t) satisfies the differential equation (9.1). Integrating the identity

  G(t,s) d²x/dt² − x(t) ∂²G(t,s)/∂t² = d/dt ( G dx/dt − x ∂G/∂t )

over the intervals a < t < s − ∈ and s + ∈ < t < b, where ∈ is a sufficiently small positive number, considering (9.1)–(9.2) and taking into account properties (1)–(4) of Green's function G(t,s), we obtain the relation

  ∫_a^{s−∈} G(t,s)[−g(s)x(s) + h(s)] ds + ∫_{s+∈}^{b} G(t,s)[−g(s)x(s) + h(s)] ds =
  G(s−∈, s) dx/dt|_{t=s−∈} − G(s+∈, s) dx/dt|_{t=s+∈} − x(s−∈) ∂G(t,s)/∂t|_{t=s−∈} + x(s+∈) ∂G(t,s)/∂t|_{t=s+∈}

Considering the limit ∈→ 0 we obtain


b b
x ( t ) = − ∫ G ( t , s ) g ( s ) x ( s ) ds + ∫ G ( t , s ) h ( s ) ds … ( 9.5)
a a

meaning

  x(t) + ∫_a^b K(s,t) x(s) ds = F(t) … (9.6)

where K(s,t) = G(t,s) g(s) and F(t) = ∫_a^b G(t,s) h(s) ds.

The solution x ( t ) of the integral equation 9.6, provided it exists,


satisfies the differential equation 9.1 and the boundary condition
9.2.
Generally, Green’s function constructed a priori is of the form

⎧⎪ ( t −bb)(− as − a ) for s ≤ t
G ( t , s ) = ⎨ t-a s −b
( )( )
⎪⎩ b −a for s ≥ t

which should satisfy all the conditions of Green’s function.

For the simple reason that x ( t ) is a solution of (9.6) and also


(9.5), we can write
  x(t) = ∫_a^b G(t,s)[−g(s)x(s) + h(s)] ds … (9.7)
Therefore,

  x(t) = ∫_a^t (t − b)(s − a)/(b − a) · [−g(s)x(s) + h(s)] ds + ∫_t^b (t − a)(s − b)/(b − a) · [−g(s)x(s) + h(s)] ds

Differentiating, we find

  dx/dt = ∫_a^t (s − a)/(b − a) · [−g(s)x(s) + h(s)] ds + ∫_t^b (s − b)/(b − a) · [−g(s)x(s) + h(s)] ds

  d²x/dt² = (t − a)/(b − a) · [−g(t)x(t) + h(t)] − (t − b)/(b − a) · [−g(t)x(t) + h(t)] = −g(t)x(t) + h(t)

Since G(t,s) is continuous, taking the limits as t → a and t → b we have x(a) = x(b) = 0, because G(a,s) = G(b,s) = 0.

Note
Green’s function is symmetric.

EXAMPLE
The influence function R(t,s) is the solution of the system

  d²R/dt² + R = δ(t − s)
  R|_{t=s} = 0,   dR/dt|_{t=s} = 1

The function satisfying these conditions is R(t,s) = sin(t − s). The corresponding integral representation for the I.V.P. (1) is

  y(t) = ∫_0^t sin(t − s) F(s) ds

If y(0), y'(0) are non-zero, then we can add a suitable solution c_1 u_1 + c_2 u_2 of the homogeneous equation to the integral representation and evaluate the constants c_1 and c_2 using the prescribed conditions.

Example

y'' + y = F(t),   y(0) = 1,   y'(0) = −1


has the solution
t
y ( t ) = ∫ sin ( t − s ) F ( s ) ds + c1 ( sin t ) + c2 ( cos t )
0

By considering the conditions, we find

c1 = −1 and c2 = 1

Example

Lx ≡ d²x/dt² − x(t) = h(t) … (1)
x ( 0 ) = 0, x (1) = 0 …( 2)
we construct Green’s function of the problem (1) – (2). The
general solution of the homogenous equation
d2x
− x (t ) = 0
dt 2

the corresponding equation (1) is of the form

x ( t ) = c1et + c2 e−t

Since the function G(t,s) must be a solution of this homogeneous equation for t < s and for t > s, let it be of the form

  G(t,s) = a_1(s) e^t + a_2(s) e^{−t},   0 ≤ t ≤ s
  G(t,s) = b_1(s) e^t + b_2(s) e^{−t},   s ≤ t ≤ 1        … (3)

The boundary conditions G(0,s) = 0, G(1,s) = 0 give

  a_1 + a_2 = 0
  b_1 e + b_2 e^{−1} = 0        … (4)

Condition (3), the continuity of G(t,s) at t = s, gives

  b_1 e^s + b_2 e^{−s} = a_1 e^s + a_2 e^{−s}        … (5)

Condition (4), the unit jump of the derivative ∂G/∂t at t = s, gives

  b_1 e^s − b_2 e^{−s} − [a_1 e^s − a_2 e^{−s}] = 1        … (6)

From (4), a_2 = −a_1 and b_2 = −b_1 e². Substituting into (5) and (6) and solving for a_1, a_2, b_1, b_2, we find

  G(t,s) = { sinh t · sinh(s − 1) / sinh 1,   0 ≤ t ≤ s
           { sinh s · sinh(t − 1) / sinh 1,   s ≤ t ≤ 1
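The result can be verified numerically; the sketch below compares ∫_0^1 G(t,s) h(s) ds with the exact solution of x'' − x = h, x(0) = x(1) = 0 for the assumed test choice h(t) = 1, whose exact solution is x(t) = (e^t + e^{1−t})/(1 + e) − 1.

```python
import numpy as np

def G(t, s):
    return np.where(t <= s,
                    np.sinh(t) * np.sinh(s - 1),
                    np.sinh(s) * np.sinh(t - 1)) / np.sinh(1.0)

x_exact = lambda t: (np.exp(t) + np.exp(1 - t)) / (1 + np.e) - 1

s = np.linspace(0.0, 1.0, 4001)
for t in (0.25, 0.5, 0.75):
    x_t = np.trapz(G(t, s) * 1.0, s)    # h(s) = 1
    print(t, x_t, x_exact(t))           # values should nearly coincide
```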


CHAPTER 10

This chapter will expose you further to transforming Boundary


Value Problems into Integral Equations and how to apply
properties of Green’s function.
TRANSFORMATION OF THE BOUNDARY VALUE
PROBLEM (B.V.P.) INTO AN INTEGRAL EQUATION

We need to find the solution x ( t ) of the equation

Lx ≡ d²x/dt² + (−1 + λ q(t)) x(t) = h(t) … (10.1)

( λ is a numeric parameter )

that satisfies the boundary condition

x ( 0 ) = x (1) = 0 … (10.2 )

Rewriting equation (10.1) in the form

d2x
− x (t ) = h (t ) − λ q (t ) x (t ) … (10.3)
dt 2

We shall have an equivalent equation and boundary conditions


(10.2) in the form of an integral equation
  x(t) = −λ ∫_0^1 G(t,s) q(s) x(s) ds + h_1(t)

where
1
h1 ( t ) = ∫ G ( t , s ) h ( s ) ds
0

and G ( t , s ) - Green’s function corresponding to the homogeneous


equation

d2x
− x (t ) = 0
dt 2
and the boundary condition (10.2)
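A minimal numerical sketch of this reduction, discretizing the integral equation by collocation with the trapezoidal rule, is given below. The functions q(t), h(t) and the value of λ are arbitrary test data, not taken from the text, and G is the Green's function of x'' − x = 0, x(0) = x(1) = 0 from Chapter 9.

```python
import numpy as np

n, lam = 201, 1.0
t = np.linspace(0.0, 1.0, n)
w = np.full(n, t[1] - t[0]); w[0] = w[-1] = (t[1] - t[0]) / 2   # trapezoidal weights
q = lambda s: 1.0 + s                                           # assumed coefficient q(t)
h = lambda s: np.ones_like(s)                                   # assumed free term h(t)

def G(tt, ss):
    return np.where(tt <= ss, np.sinh(tt) * np.sinh(ss - 1),
                    np.sinh(ss) * np.sinh(tt - 1)) / np.sinh(1.0)

Gm = G(t[:, None], t[None, :])
h1 = Gm @ (w * h(t))                              # h1(t) = integral G(t,s) h(s) ds
A = np.eye(n) + lam * Gm * (w * q(t))[None, :]    # x + lam * integral G q x ds = h1
x = np.linalg.solve(A, h1)
print(x[0], x[n // 2], x[-1])                     # the endpoint values should be ~0
```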
Reduce the b.v.p.

  y'' + λy = 0 … (10.4)
  y(0) = 0,   y'(1) + v² y(1) = 1 … (10.5)

to a Fredholm integral equation


From the definition, Green's function has the form

  G(s,t) = { A_1(t) s,                  s < t
           { A_2(t)[1 + v²(1 − s)],     s > t

By the symmetry of Green's function, A_1(t) = C[1 + v²(1 − t)] and A_2(t) = C t, where C is independent of t.

The jump condition yields

  C t(−v²) − C[1 + v²(1 − t)] = −1,   or   C = 1/(1 + v²)

Thus, the Green's function is completely determined:

  G(s,t) = { [1 + v²(1 − t)] s / (1 + v²),   s < t
           { [1 + v²(1 − s)] t / (1 + v²),   s > t

The required integral equation is

  y(s) = λ [1 + v²(1 − s)]/(1 + v²) ∫_0^s t y(t) dt + λ s/(1 + v²) ∫_s^1 [1 + v²(1 − t)] y(t) dt + s/(1 + v²)
the homogeneous equation

  d⁴y/ds⁴ = 0

has the four independent solutions 1, s, s², s³. Therefore, we take G(s,t) to be of the form

  G(s,t) = { A_0(t) + A_1(t)s + A_2(t)s² + A_3(t)s³,   s < t
           { B_0(t) + B_1(t)s + B_2(t)s² + B_3(t)s³,   s > t        … (3)

The boundary conditions at the end points give

  A_0(t) = 0,   A_1(t) = 0,   B_2 = −(3B_0 + 2B_1),   B_3 = 2B_0 + B_1

Thus the relation (3) becomes

  G(s,t) = { A_2(t)s² + A_3(t)s³,                      s < t
           { (1 − s)²[B_0(t)(1 + 2s) + B_1(t)s],        s > t

The remaining constants are determined by applying the matching conditions at s = t, which result in the simultaneous equations

  t²A_2 + t³A_3 − (1 − 3t² + 2t³)B_0 − t(1 − t)²B_1 = 0
CHAPTER 11

This chapter demonstrates the use of Hilbert-Schmidt’s


theorem in solving integral equations. The reader will learn to
do this through guided worked examples.
SOME SOLUTIONS OF SYMMETRIC INTEGRAL
EQUATIONS

We demonstrate the use of the Hilbert-Schmidt’s theorem in


finding an explicit solution of the non-homogeneous Fredholm
integral equation of the second kind

g ( s ) = f ( s ) + λ ∫ k ( s, t ) g ( t ) dt … (11.1)

With a symmetric L2 - kernel


a. Assume λ is not an eigenvalue
b. and let all eigenvalues and eigen functions of the kernel
K ( s, t ) be known
c. note that the function g ( s ) − f ( s ) has an integral
representation of the form 11.1

We can apply the Hilbert-Schmidt’s theorem and write



g ( s ) − f ( s ) = ∑ ck Φ k ( s ) … (11.2 )
k =1

where

ck = ∫ ⎡⎣ g ( s ) − f ( s ) ⎤⎦Φ k ( s ) ds = g k − f k … (11.3)

with

  g_k = ∫ g(s) Φ_k(s) ds,   f_k = ∫ f(s) Φ_k(s) ds … (11.4)

By the Hilbert–Schmidt relation f_n = h_n/λ_n (see the Supplementary Materials), applied to g(s) − f(s) = λ ∫ K(s,t) g(t) dt, we have

  c_k = λ g_k / λ_k … (11.5)

Since λ is not an eigenvalue, from (11.3) and (11.5) we have

  c_k = [λ/(λ_k − λ)] f_k,   g_k = [λ_k/(λ_k − λ)] f_k … (11.6)

Substituting c_k from (11.6) into (11.2), we obtain the solution of the integral equation (11.1) in terms of an absolutely and uniformly convergent series:

  g(s) = f(s) + λ ∑_{k=1}^{∞} f_k Φ_k(s)/(λ_k − λ) … (11.7)

OR

  g(s) = f(s) + λ ∑_{k=1}^{∞} [Φ_k(s)/(λ_k − λ)] ∫ Φ_k(t) f(t) dt … (11.8)

Hence the resolvent kernel is

  Γ(s,t;λ) = ∑_{k=1}^{∞} Φ_k(s) Φ_k(t)/(λ_k − λ) … (11.9)

which shows that the singular points of the resolvent kernel Γ(s,t;λ) are precisely the eigenvalues λ = λ_k of the kernel K(s,t).

Example 1
Solve the symmetric integral equation
  g(s) = s² + 1 + λ ∫_{−1}^{1} (st + s²t²) g(t) dt,   λ = 3/2

Solution
λ = λ_1 = 3/2 is an eigenvalue and we have the indeterminate form 0/0 in one of the coefficients. But the function s² + 1 is orthogonal to the eigenfunction (√6/2)s which corresponds to the eigenvalue 3/2.

Evaluating the coefficients f_k, we have

  f_1 = 0
  f_2 = ∫_{−1}^{1} (t² + 1)(√10/2) t² dt = (8/15)√10

And therefore the required solution is

  g(s) = s² + 1 + (3/2) · [(8/15)√10 / ((5/2) − (3/2))] · (√10/2) s² + cs

OR
g ( s ) = 5s 2 + cs + 1

where c is an arbitrary constant.
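This can be confirmed symbolically; the short SymPy sketch below substitutes g(s) = 5s² + cs + 1 back into the equation.

```python
import sympy as sp

s, t, c = sp.symbols('s t c')
g = 5*t**2 + c*t + 1                                # candidate solution, written in t
rhs = s**2 + 1 + sp.Rational(3, 2) * sp.integrate((s*t + s**2*t**2) * g, (t, -1, 1))
print(sp.simplify(rhs - (5*s**2 + c*s + 1)))        # prints 0 for every c
```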

Example 3
Solve the symmetric integral equation
  g(s) = f(s) + λ ∫ k(s) k(t) g(t) dt

If we write

  ∫ k(s) k(t) · k(t) dt = k(s) ∫ [k(t)]² dt,

we observe that

  λ_1 = 1 / ∫ [k(t)]² dt

is an eigenvalue. The corresponding normalized eigenfunction is

  Φ_1(s) = k(s) / { ∫ [k(t)]² dt }^{1/2}

The coefficient f_1 has the value

  f_1 = { ∫ [k(t)]² dt }^{−1/2} ∫ f(t) k(t) dt

Therefore, for λ ≠ λ_1, the solution is

  g(s) = λ f_1/(λ_1 − λ) · Φ_1(s) + f(s)

OR

  g(s) = λ k(s) ∫ f(s) k(s) ds / { 1 − λ ∫ [k(s)]² ds } + f(s)

On the other hand, if

  λ = λ_1 = 1 / ∫ [k(s)]² ds,

then f ( s ) must be orthogonal to Φ1 ( s ) , and in that case the solution


is
g ( s ) = f ( s ) + ck ( s )

where c is an arbitrary constant.
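For a concrete illustration of the closed form above, take the assumed data k(s) = s, f(s) = 1, λ = 1 on the interval [0, 1] (none of which come from the text):

```python
import sympy as sp

s, t = sp.symbols('s t')
k = lambda u: u                   # assumed k
f = lambda u: sp.Integer(1)       # assumed f
lam = 1

I2 = sp.integrate(k(t)**2, (t, 0, 1))                                   # = 1/3
g = lam * k(s) * sp.integrate(f(t) * k(t), (t, 0, 1)) / (1 - lam * I2) + f(s)
print(sp.expand(g))                                                     # 3*s/4 + 1
# substitute back into g(s) = f(s) + lam * k(s) * integral k(t) g(t) dt
check = f(s) + lam * k(s) * sp.integrate(k(t) * g.subs(s, t), (t, 0, 1))
print(sp.simplify(check - g))                                           # 0
```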
Example 4
Solve the symmetric Fredholm integral equation of the first kind

∫ k ( s, t ) g ( t ) dt = f ( s ) … (1)

where

  k(s,t) = { s(1 − t),   s < t
           { t(1 − s),   s > t        … (2)

Solution

Observe that the boundary value problem

  d²y/ds² + λy = 0,   y(0) = y(1) = 0 … (3)

is equivalent to the homogeneous equation

  g(s) = λ ∫_0^1 k(s,t) g(t) dt … (4)

The eigenvalues of the system (3) are

  λ_1 = π²,   λ_2 = (2π)²,   λ_3 = (3π)², …,

and the corresponding normalized eigenfunctions are

  √2 sin πs,   √2 sin 2πs,   √2 sin 3πs, …


The Fourier coefficients of f(s) are

  f_k = √2 ∫_0^1 sin(kπt) f(t) dt … (5)

and the integral equation (1) has a solution of class L² iff the infinite series

  ∑_{k=1}^{∞} f_k² λ_k² = π⁴ ∑_{k=1}^{∞} k⁴ f_k²

converges.

Example 5

Solve Poisson’s integral equation

  f(Θ) = (1 − p²)/(2π) ∫_0^{2π} g(α) dα / [1 − 2p cos(Θ − α) + p²],
  0 ≤ Θ ≤ 2π,   0 < p < 1 … (1)

Solution
Here, the symmetric kernel K(Θ, α) can be expanded to give

  K(Θ, α) = (1 − p²)/(2π) · [1 − 2p cos(Θ − α) + p²]^{−1}
          = 1/(2π) + (1/π) ∑_{k=1}^{∞} p^k cos[k(Θ − α)] … (2)

By the expansion (2),

  ∫_0^{2π} K(Θ, α) (2π)^{−1/2} dα = (2π)^{−1/2},

which means that λ = 1, Φ(s) = (2π)^{−1/2}. Using the formulas

  ∫_0^{2π} K(Θ, α) cos nα dα = p^n cos nΘ,
  ∫_0^{2π} K(Θ, α) sin nα dα = p^n sin nΘ,   n = 1, 2, 3, …

we have

  λ_{2k−1} = λ_{2k} = p^{−k};   Φ_{2k−1}(s) = π^{−1/2} cos ks,   Φ_{2k}(s) = π^{−1/2} sin ks,   k = 1, 2, 3, … … (3)

From Chapter 11, evaluating the coefficients f_k, we conclude that the integral equation has an L²-solution iff the infinite series

  ∑_{n=1}^{∞} (a_n² + b_n²) / p^{2n}

converges, where

  a_n = (1/π) ∫_0^{2π} f(Θ) cos nΘ dΘ
  b_n = (1/π) ∫_0^{2π} f(Θ) sin nΘ dΘ
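A numerical spot-check of the relation ∫_0^{2π} K(Θ,α) cos nα dα = p^n cos nΘ used above (a sketch with arbitrary test values of p, n and Θ):

```python
import numpy as np

p, n, Theta = 0.4, 3, 1.1
alpha = np.linspace(0.0, 2*np.pi, 20001)
K = (1 - p**2) / (2*np.pi*(1 - 2*p*np.cos(Theta - alpha) + p**2))
lhs = np.trapz(K * np.cos(n*alpha), alpha)
print(lhs, p**n * np.cos(n*Theta))      # the two numbers should nearly agree
```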
CHAPTER 12

By the end of this chapter, the reader would have been exposed to some terminology and theorems in Integral Equations and their possible uses in both theory and application.
SUPPLEMENTARY MATERIALS

1. Hermitian Kernel:
A complex-values kernel k ( x, t ) is called symmetric (or Hermitian) if
k ( x, t ) = k ∗ ( t , x ) where k ∗ ( t , x ) denotes the complex conjugate of k ( x, t ) .
For real-valued kernel, the above definition coincides with the definition
k ( x, t ) = k ( t , x ) .

2. Convolution Integral:
Consider an integral equation in which the kernel k(x,t) is a function of the difference (x − t) only:
  k(x,t) = k(x − t) … (1)
where k is a certain function of one variable. The integral equation
x
y ( x ) = f ( x ) + λ ∫ k ( x − t ) y ( t ) dt …( 2)
a
and corresponding Fredholm equation are called integral equations of the convolution
type.

The function defined by the integral


  ∫_0^x k(x − t) y(t) dt = ∫_0^x k(t) y(x − t) dt … (3)

is called the convolution or the Faltung of the two functions k and y . The integrals
occurring in (3) are called convolution integrals.

Relation (3) is a special case of the standard convolution


  ∫_{−∞}^{∞} k(x − t) y(t) dt = ∫_{−∞}^{∞} k(t) y(x − t) dt … (4)

(3) is obtained from (4) by taking k(t) = y(t) = 0 for t < 0 and t > x.

3. The Inner or Scalar products of two function


The inner or scalar product (Φ, ψ) of two complex L²-functions Φ and ψ of the real variable s, a ≤ s ≤ b, is defined as

  (Φ, ψ) = ∫_a^b Φ(t) ψ*(t) dt … (1)

(An L²-function, that is, a square-integrable function g(t), is one for which ∫_a^b |g(t)|² dt < ∞.)
The function Φ and ψ are orthogonal if ( Φ,ψ ) = 0 . The norm of a function Φ ( t ) is
given by the relation.
  ‖Φ‖ = [ ∫_a^b Φ(t) Φ*(t) dt ]^{1/2} = [ ∫_a^b |Φ(t)|² dt ]^{1/2}

A function φ is called normalized if ‖φ‖ = 1. A non-null function (one whose norm is not zero) can always be normalized by dividing it by its norm. The following inequalities, due to Schwarz and Minkowski, hold:

  |(Φ, ψ)| ≤ ‖Φ‖ ‖ψ‖          (Schwarz)
  ‖Φ + ψ‖ ≤ ‖Φ‖ + ‖ψ‖        (Minkowski)
4. Fredholm’s Theorems (with degenerate kernel)
We consider the degenerate kernel
n
k ( t , s ) = ∑ ai ( t ) bi ( s ) … (1)
i =1
as in the definition. The 2nd type of Fredholm’s integral equation with degenerate
kernel k ( x, t )
n b
y ( x ) = λ ∑ ai ( x )∫ bi ( t ) y ( t ) dt + f ( x ) …( 2 )
i =1 a

where f ( x ) is a continuous function in the interval [ a, b] . Let equation (2) have the
solution y = y ( x ) . Then as before

  C_i = ∫_a^b y(t) b_i(t) dt   (i = 1, 2, …, n)

then from (2) we have

n
y ( x ) = f ( x ) + λ ∑ Ci ai ( x ) …( 4)
i =1

From which we conclude that solving the integral equation with a degenerate kernel is equivalent to determining the constants C_i (i = 1, 2, …, n).

Fredholm’s First Theorem


If λ is not an eigenvalue, then the integral equation (2) has a unique solution y ( x ) ,
defined by (4), for any arbitrary free term f ( x ) .
In order for equation (2) to have a unique solution for any function f ( x ) , it is
necessary and sufficient that the corresponding homogeneous equation have only a
trivial solution y ( x ) = 0 .

To the equation
b
y ( x ) = λ ∫ k ( x, t ) y ( t ) dt + f ( x ) …( 5)
a
the equation

  ψ(x) = λ ∫_a^b k*(t, x) ψ(t) dt + g(x) … (6)
is called the conjugate (adjoint) to equation (5).

For equation (2) with degenerate kernel the conjugate (adjoint) to this equation has
the form
  ψ(x) = λ ∫_a^b ∑_{i=1}^{n} a_i(t) b_i(x) ψ(t) dt + g(x) … (7)
For this

n
ψ ( x ) = g ( x ) + λ ∑ ci bi ( x ) … (8 )
i =1
where

b
ci = ∫ψ ( t ) ai ( t ) dt ( i=1,2,… ,n ) …( 9 )
a

If g ( x ) = 0 , then equation (7) is homogeneous. To find ci we have the homogeneous


system

  c_i − λ ∑_{j=1}^{n} k_{ji} c_j = 0   (i = 1, 2, …, n) … (10)

which is the adjoint (conjugate) of the system

  c_i − λ ∑_{j=1}^{n} k_{ij} c_j = 0   (i = 1, 2, …, n) … (11)

Both the system and its adjoint have the same number of linearly independent solution vectors. If {c_1^(l), …, c_n^(l)} (l = 1, 2, …, p) are the non-zero solution vectors of system (10), then the functions

  ψ_l(x) = ∑_{i=1}^{n} c_i^(l) b_i(x)   (l = 1, 2, …, p)

will be the eigenfunctions of the homogeneous equation

  ψ(x) = λ ∑_{i=1}^{n} b_i(x) ∫_a^b a_i(t) ψ(t) dt … (12)
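The reduction to the constants C_i is easy to carry out symbolically. A minimal sketch follows; the kernel functions a_i, b_i, the free term f, the interval [0, 1] and λ are assumed test data, not taken from the text.

```python
import sympy as sp

x, t = sp.symbols('x t')
lam = sp.Rational(1, 2)                                  # assumed lambda
a_funcs = [x, x**2]                                      # a_1(x), a_2(x)
b_funcs = [t, sp.Integer(1)]                             # b_1(t), b_2(t)
f = sp.Integer(1)                                        # assumed free term

C = sp.symbols('C1 C2')
y = f + lam * sum(Ci * ai for Ci, ai in zip(C, a_funcs))            # relation (4)
eqs = [sp.Eq(Ci, sp.integrate(bi * y.subs(x, t), (t, 0, 1)))        # C_i = integral b_i y dt
       for Ci, bi in zip(C, b_funcs)]
sol = sp.solve(eqs, C)
print(sp.expand(y.subs(sol)))                                       # explicit y(x)
```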

5. Hilbert – Schmidt Theorem And Some Immediate Consequences

Hilbert – Schmidt Theorem:


If f(s) can be written in the form

  f(s) = ∫ K(s,t) h(t) dt … (1)

where K(s,t) is a symmetric L²-kernel and h(t) is an L²-function, then f(s) can be expanded in an absolutely and uniformly convergent Fourier series with respect to the orthonormal system of eigenfunctions of the kernel K:

  f(s) = ∑_{n=1}^{∞} f_n Φ_n(s),   f_n = (f, Φ_n)

The Fourier coefficients f_n of the function f(s) are related to the Fourier coefficients h_n of the function h(s) by the relations

  f_n = h_n/λ_n,   h_n = (h, Φ_n) … (2)

where λ_n are the eigenvalues of the kernel.

(Here an L²-function is a square-integrable function g(t): ∫_a^b |g(t)|² dt < ∞.)

L²-kernel
A kernel K(s,t) is an L²-kernel if
a. for all values of s, t in the square a ≤ s ≤ b, a ≤ t ≤ b,
   ∫_a^b ∫_a^b |K(s,t)|² ds dt < ∞;
b. for each value of s in a ≤ s ≤ b,
   ∫_a^b |K(s,t)|² dt < ∞;
c. for each value of t in a ≤ t ≤ b,
   ∫_a^b |K(s,t)|² ds < ∞.


Proof
The Fourier coefficients of the function f(s) with respect to the orthonormal system {Φ_n(s)} are

  f_n = (f, Φ_n) = (Kh, Φ_n) = (h, KΦ_n) = λ_n^{−1} h_n,

taking into account the relation λ_n KΦ_n = Φ_n. Thus the Fourier series for f(s) is

  f(s) ~ ∑_{n=1}^{∞} f_n Φ_n(s) = ∑_{n=1}^{∞} (h_n/λ_n) Φ_n(s) … (3)

The remainder term for this series is estimated as follows:


  | ∑_{k=n+1}^{n+p} (h_k/λ_k) Φ_k(s) |² ≤ ∑_{k=n+1}^{n+p} h_k² · ∑_{k=n+1}^{n+p} Φ_k²(s)/λ_k²
                                        ≤ ∑_{k=n+1}^{n+p} h_k² · ∑_{k=1}^{∞} Φ_k²(s)/λ_k² … (4)
Applying Bessel’s inequality
Φn ( s )
2

∑ ≤ ∫ K ( s, t ) dt ≤ c12
2

n =1 λ
2
n

we conclude that (4) is bounded. Since h(s) is an L²-function, the series ∑_{k=1}^{∞} h_k² is convergent and hence the partial sum ∑_{k=n+1}^{n+p} h_k² can be made arbitrarily small. Therefore the series (3) converges absolutely and uniformly.

We now show that the series (3) approaches f(s) in the mean. Let

  ψ_n(s) = ∑_{m=1}^{n} (h_m/λ_m) Φ_m(s) … (5)

We estimate the value of ‖f(s) − ψ_n(s)‖:

  f(s) − ψ_n(s) = Kh − ∑_{m=1}^{n} (h_m/λ_m) Φ_m(s) = Kh − ∑_{m=1}^{n} [(h, Φ_m)/λ_m] Φ_m(s) = K^(n+1) h … (6)

where K^(n+1) is the truncated kernel. By (6),

  ‖f(s) − ψ_n(s)‖² = ‖K^(n+1) h‖² = (K^(n+1) h, K^(n+1) h) = (h, K^(n+1) K^(n+1) h) = (h, K_2^(n+1) h) … (7)

where K_2^(n+1) = K^(n+1) K^(n+1).

The least eigenvalue of the kernel K_2^(n+1) is λ²_{n+1}, that is,

  1/λ²_{n+1} = max_h [ (h, K_2^(n+1) h) / (h, h) ] … (8)

From (7) and (8) we therefore have

  ‖f(s) − ψ_n(s)‖² = (h, K_2^(n+1) h) ≤ (h, h)/λ²_{n+1}
2
Since λ_{n+1} → ∞, ‖f(s) − ψ_n(s)‖ → 0 as n → ∞. By the triangle inequality

  ‖f − ψ‖ ≤ ‖f − ψ_n‖ + ‖ψ_n − ψ‖ … (9)

where ψ is the limit of the series with partial sums ψ_n, we prove that f = ψ. From the above, ‖f − ψ_n‖ → 0 as n → ∞. Since the series (3) converges uniformly, given ∈ > 0 we have |ψ_n(s) − ψ(s)| < ∈ for sufficiently large n. Therefore

  ‖ψ_n − ψ‖ < ∈ (b − a)^{1/2}

which shows that f = ψ and ends the proof.

THE IMMEDIATE CONSEQUENCE OF THE HILBERT –


SCHMIDT THEOREM IN THE BILINEAR FORM

By definition,

  K_m(s,t) = ∫ K(s,x) K_{m−1}(x,t) dx,   m = 2, 3, … … (10)

which is of the form (1) with h(x) = K_{m−1}(x,t) and t fixed.

For the Fourier coefficients a_k(t) of K_m(s,t) with respect to the system of eigenfunctions Φ_k(s) of K(s,t) we have

  a_k(t) = ∫ K_m(s,t) Φ_k(s) ds = λ_k^{−m} Φ_k(t)

By Hilbert – Schmidt’s theorem all iterated kernels km ( s, t ) m ≥ 2 of


a symmetric L 2 -kernel can be represented by the absolutely and
uniformly convergent series


k ( s, t ) = ∑ λk− m Φ k ( t ) … (11)
k =1

Putting s = t in (11) and integrating from a to b we obtain


∑λ
k =1
−m
k = ∫ K m ( s, s ) ds = Am (12 )

Am is the trace of the iterated kernel K m K m
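A numerical illustration of (12) is sketched below; the kernel, s(1 − t) for s < t and t(1 − s) for s > t, is the one from Example 4 of Chapter 11, whose eigenvalues are λ_k = (kπ)².

```python
import numpy as np

n = 800
s = np.linspace(0.0, 1.0, n)
h = s[1] - s[0]
K = np.minimum.outer(s, s) * (1 - np.maximum.outer(s, s))   # s(1-t) for s<t, t(1-s) for s>t
M = K * h                                                   # discretized integral operator
k = np.arange(1, 100000)
for m in (2, 3):
    lhs = np.sum(1.0 / (k * np.pi) ** (2 * m))              # sum_k lambda_k^{-m}
    rhs = np.trace(np.linalg.matrix_power(M, m))            # ~ A_m = integral K_m(s,s) ds
    print(m, lhs, rhs)                                      # the two columns should nearly agree
```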

Definite Kernels and Mercer's Theorem

A symmetric L²-kernel K is
→ non-negative definite if (KΦ, Φ) ≥ 0 for every L²-function Φ;
→ positive definite if it is non-negative definite and (KΦ, Φ) = 0 implies that Φ is null.

Similarly, we define non-positive definite and negative-definite symmetric kernels. If a symmetric kernel is not of any of the four categories, it is called indefinite.

Theorem (a consequence of the Hilbert–Schmidt theorem)

A non-null, symmetric L²-kernel K is non-negative definite if and only if all its eigenvalues are positive. It is positive definite iff it is non-negative definite and some full orthonormal system of its eigenfunctions is complete.

Mercer’s Theorem (Consequence of H-S Theorem)


If a non-null, symmetric L 2 -kernel is quasi-definite (i.e. when all but
finite number of eigenvalues are of one sign) and continuous then the
series

∑λ
n =1
−1
n … (13)

is convergent and

∞ Φ n ( s ) Φ ∗n ( t )
k ( s, t ) = ∑
n =1 λn
the series is uniformly and absolutely convergent.
SOLVED QUESTIONS

Solution

The integral is of convolution type. Take the Laplace transform of both


sides.

So the answer is the inverse Laplace transform of the right hand side.
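The worked equation itself did not survive in this copy, so the sketch below illustrates the stated procedure on a stand-in convolution equation, y(x) = sin x + ∫_0^x cos(x − t) y(t) dt (an assumed example, not the original problem):

```python
import sympy as sp

x, s = sp.symbols('x s', positive=True)
F = sp.laplace_transform(sp.sin(x), x, s, noconds=True)    # transform of the free term
K = sp.laplace_transform(sp.cos(x), x, s, noconds=True)    # transform of the kernel
Y = sp.simplify(F / (1 - K))                               # Y = F + K*Y  =>  Y = F/(1 - K)
print(sp.inverse_laplace_transform(Y, s, x))               # the solution y(x)
```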

________________________________________________________

Solution

In this problem

Take the Laplace transform of both sides.

Since
The solution is

Formulate an integral equation from the BVP:

Solve for .

Integrate both sides from a to x with respect to a new dummy variable y.

Integrate again. Notice how the limits of integration change.

Fact:

So the last result becomes


This equation is of the form , which is
a Volterra equation of the second kind.

Solve

Integrate both sides with respect to x.

Approximate

One method: Expand both and in Taylor series; in


powers of , and in powers of . We can save ourselves some
effort if we note from the integral equation that ; and from
repeatedly differentiating the integral equation, that (so the
coefficient of the x2 term in the y expansion is 1), and that
for (because we get the sine in the integrand for those
terms). Thus, the Taylor series expansion for (about ) has the
form .

The expansion for is


.

Inserting these in the integral equation, we get

or after some expanding and rearrangement (and subtracting the term


from both sides)

All of the integrals on the right side are readily evaluated. For example,
retaining only terms through , we get

Equating coefficients for each power of on the left and right sides gives
a system of equations for the coefficients Upon solving that
system, we get .
Repeating the calculation, this time including terms through , gives
and leaves the lower-order coefficients essentially
unchanged, evidence that the above approximation for is quite
good for .

Find lambda: if

If , then

Therefore, or .

Find lambda: if

If , then

Therefore, .
Find the value of lambda for which the homogeneous Fredholm integral

equation has a nontrivial solution, and find all


the solutions.

First, divide both sides by :

The right side is a constant; therefore, the left side must also be a

constant. Thus , so .

Insert this in the integral equation:


Since we want a nontrivial solution, we may assume that , so

divide both sides by ; this yields

Evaluate the integral; this gives

Thus, a nontrivial solution exists only for and the general


solution in this case is where is an arbitrary constant.

_____________________________________________________________
___________

solution Write as an ODE:

solution Solve

solution Formulate an integral equation from the IVP:

solution Solve
solution Solve

solution Find the Euler equation

solution Reduce to a PDE:

solution Transform the BVP to an integral equation:

Transform the BVP to an integral

equation:

First, integrate both sides from x to 1 (so that the boundary condition
y'(1) = 0 can be applied). This gives

Applying the condition y'(1) = 0 and multiplying both sides by (-1)


results in

Next, integrate both sides from 0 to x:

or, applying the condition y(0) = 0,


The double integral may be simplified if we invert the order of
integration. To do this, consider the region of integration for the double

integral:
This same region, when the order of integration is reversed, looks like

this:

We see that the region must be split into two pieces, and a separate
integral written for each piece:

or after doing the trivial z-integration,

The desired integral equation, then, is


Such an integral equation is often written in the form

where

Solve:

Let .

Write a new equation:

The last integrand is odd and the limits of integration are a multiple of its
period, so the integral equals 0.

From
the answer is

_____________________________________________________________
___________

Solve:

Therefore

_____________________________________________________________
___________

Solve:

Differentiate both sides with respect to x.


_____________________________________________________________
___________

Convert to an integral equation:

The solution is

To check the solution, start differentiating.


NB: BVP = Boundary value problem
