INTRODUCTION
DEFINITIONS:
1. An integral equation (I.E.) is an equation in which the unknown function y(x) occurs under the integral sign.
The integral on the r.h.s. of (1.2) can be considered as an integral over the variable t.
The kernel of the equation K(x, t) is defined on the x-t plane in the square R, where R : a ≤ x ≤ b; a ≤ t ≤ b.
y(x) = λ ∫_a^b K(x, t) y(t) dt … (1.3)
K(x, t) = K(t, x) … (1.4)
∫_a^b φ(x) ψ(x) dx = 0 … (1.5)
∫_a^x k(x, t) y(t) dt = f(x) … (1.6)
dI(λ)/dλ = (d/dλ) ∫_a^b f(x, λ) dx = ∫_a^b ∂f(x, λ)/∂λ dx
and, for variable limits of integration,
dI(λ)/dλ = ∫_{a(λ)}^{b(λ)} ∂f(x, λ)/∂λ dx + b′(λ) f[b(λ), λ] − a′(λ) f[a(λ), λ]
Differentiating with respect to x.
∫_a^x k(x, t) y(t) dt = f(x)
∫_a^x K′_x(x, t) y(t) dt + k(x, x) y(x) = f′(x)
Dividing by k(x, x) gives an equation of the second type,
y(x) = f1(x) + ∫_a^x K1(x, t) y(t) dt
where
K1(x, t) = −K′_x(x, t)/k(x, x),  f1(x) = f′(x)/k(x, x)
From the formal point of view, Volterra’s Integral Equation differs from
Fredholm’s only by the fact that the constant upper limit is changed into a
variable upper limit.
d²u/dx² + p(x) du/dx + q(x) u = f(x)  (a ≤ x ≤ b) … (1.8)
Set
d²u/dx² = y(x) … (1.10)
and
u(x) = ∫_a^x dt ∫_a^t y(s) ds + c1(x − a) + c2
Changing the order of integration,
∫_a^x dt ∫_a^t y(s) ds = ∫_a^x y(s) ds ∫_s^x dt = ∫_a^x (x − s) y(s) ds = ∫_a^x (x − t) y(t) dt
Here β = u′(a) and α = u(a) are the Cauchy data: setting x = a gives β = ∫_a^a y(t) dt + C1 = C1. Therefore,
du/dx = ∫_a^x y(t) dt + β … (1.11)
and
u(x) = ∫_a^x (x − t) y(t) dt + β(x − a) + α … (1.12)
We substitute the relations (1.10 – 1.12) into differential equation 1.8 to get:
y(x) + p(x) [∫_a^x y(t) dt + β] + q(x) [∫_a^x (x − t) y(t) dt + β(x − a) + α] = f(x)
Finally, we have
y(x) + ∫_a^x [p(x) + q(x)(x − t)] y(t) dt = f(x) − β p(x) − [β(x − a) + α] q(x)
Writing
p(x) + q(x)(x − t) = k(x, t)
f(x) − β p(x) − [β(x − a) + α] q(x) = F(x)
Therefore, knowing the function y(x), from the expression 1.12 we can find
u(x) and u′(x). Hence Volterra's Integral Equation includes all the data of the Cauchy problem for the linear differential equation 1.8.
A similar result can be obtained for an n-th order linear differential equation.
Example 1.1
Find the solution of the Volterra’s Integral Equation of the 1st type.
∫_0^x cos(x − t) y(t) dt = x²/2
Solution
Differentiating the given equation with respect to x, we have
y(x) − ∫_0^x sin(x − t) y(t) dt = x … (1)
Differentiating once more gives y′(x) − ∫_0^x cos(x − t) y(t) dt = 1; by the given equation the integral equals x²/2, so
y′ = x²/2 + 1
The general solution of this ODE, obtained by direct integration, is
y(x) = x³/6 + x + C1 … (2)
where C1 is an arbitrary constant. Taking into consideration equation (1), it follows that
y ( 0) = 0
From (2), C1 = 0. Hence the solution of the given Volterra equation is
y(x) = x³/6 + x
Exercises
(1) y(x) = x + ∫_0^x (t − x) y(t) dt
(2) y(x) = sin x − x/4 + (1/2) ∫_0^1 x t y(t) dt
(3) By separating the kernel solve the equation
y(x) = sin x + ∫_0^∞ e^{−α(x+t)} y(t) dt
where α is a real constant, and show that the solution is not valid when α = 0 and α = 1/2.
(a) y(x) = λ ∫_{−1}^{1} (x − t) y(t) dt
(b) y(x) = λ ∫_{−π/2}^{π/2} cos(x + t) y(t) dt
CHAPTER TWO
We solve this Integral Equation on assumption that the kernel of the equation k ( x, t ) is
continuous in the square R : a ≤ x ≤ b, a ≤ t ≤ b and the function f ( x ) is continuous in the
interval [ a, b] . These conditions will ensure that k ( x, t ) and f ( x ) are bounded. We now
look for our solution in the form of a series in ascending powers of λ.
If the series (2.2) uniformly converges for some value of λ , then it can be substituted into the
right hand side of (2.1), replacing the argument x by t and effecting the term-wise integration.
y(x) = f(x) + [λ ∫_a^b k(x, t) φ0(t) dt + λ² ∫_a^b k(x, t) φ1(t) dt + … + λ^{n+1} ∫_a^b k(x, t) φn(t) dt + …] … (2.3)
Replacing the left-side of (2.3) by the expression in (2.2) and equating coefficients of equal
powers of λ , we have
φ0(x) = f(x)
φ1(x) = ∫_a^b k(x, t) φ0(t) dt
φ2(x) = ∫_a^b k(x, t) φ1(t) dt
…………………………
φn(x) = ∫_a^b k(x, t) φn−1(t) dt  … (2.4)
The process of constructing the function φn ( x ) is called the method of successive approximation
of solutions, and can be continued indefinitely.
The expression (2.4) helps us to evaluate the coefficients of the series (2.2) successively and to
form the series, which formally satisfies the integral equation (2.1).
NOTE:
For the sum of the series (2.2) to be a solution of the Integral Equation 2.1, it is
necessary that it converges uniformly.
Indeed suppose the Kernel k ( x, t ) is bounded by A, that is
|k(x, t)| < A … (2.5)
|f(x)| < M … (2.6)
where A and M are given positive numbers. Then from equation (2.4) we have
|φ0(x)| = |f(x)| < M
|φ1(x)| ≤ ∫_a^b |k(x, t)| |φ0(t)| dt < AM ∫_a^b dt = AM(b − a)
|φ2(x)| ≤ ∫_a^b |k(x, t)| |φ1(t)| dt < A·AM(b − a) ∫_a^b dt = A²(b − a)² M
…………………………………………………………………
|φn(x)| ≤ ∫_a^b |k(x, t)| |φn−1(t)| dt < Aⁿ(b − a)ⁿ M
Hence the terms λⁿφn(x) of the series (2.2) are majorised by M[A(b − a)|λ|]ⁿ.
This majorant is a geometric progression with ratio A(b − a)|λ|, which converges as long as the ratio is less than 1. Consequently, the series (2.2) converges if
|λ| < 1/(A(b − a)) … (2.7)
Therefore, the Integral Equation has a unique solution if the parameter λ is
sufficiently small in absolute value. With the relation (2.4) we can successively calculate
the coefficients of the series (2.2), which is rather inconvenient. This is so because to
evaluate the coefficients φn ( x ) it is necessary to find all the preceding coefficients.
We make our new objective to find these coefficients from the known elements of the Integral
Equation that is, from the Kernel k ( x, t ) and the right hand side f ( x ) .
φ0(x) = f(x), and substituting successively, φ2(x) = ∫_a^b k2(x, t) f(t) dt,
where k2(x, t) = ∫_a^b k(x, s) k(s, t) ds. Changing the variable t to s in the 4th relation and using the above,
b
where k3 ( x, t ) = ∫ k ( x, s ) k2 ( s, t ) ds. . Continuing the process and introducing the function
a
kn(x, t) = ∫_a^b k(x, s) kn−1(s, t) ds … (2.11)
we obtain
φn(x) = ∫_a^b kn(x, t) f(t) dt … (2.12)
The kernels kn(x, t) are called iterated kernels; k(x, t) itself is the first of them. Substituting the expression for φn(x) into the series (2.2), we
have
y(x) = f(x) + λ [∫_a^b k(x, t) f(t) dt + λ ∫_a^b k2(x, t) f(t) dt + … + λ^{n−1} ∫_a^b kn(x, t) f(t) dt + …] … (2.13)
If the series
k1(x, t) + λ k2(x, t) + λ² k3(x, t) + … + λ^{n−1} kn(x, t) + … … (2.14)
uniformly converges, then the sum in the square brackets in (2.13) can be written as a single integral and we have
y(x) = f(x) + λ ∫_a^b Γ(x, t, λ) f(t) dt
where
Γ(x, t, λ) = k(x, t) + λ k2(x, t) + λ² k3(x, t) + … + λ^{n−1} kn(x, t) + …
The function Γ(x, t, λ) is called the Resolvent of the Integral Equation (2.1). When the
Resolvent is known we can find the solution of the Integral Equation (2.1) for any function
f ( x ) provided the parameter λ is sufficiently small in absolute value. In other words, (2.7) is
satisfied.
Example 2.1
Evaluate, using the first three successive approximations of the solution of the Integral Equation
below.
y(x) + ∫_0^1 x t y(t) dt = x²
Solution
where
φ0(x) = f(x) = x²,  φn(x) = ∫_0^1 k(x, t) φn−1(t) dt. We have
φ0(x) = x²
φ1(x) = ∫_0^1 xt·t² dt = x/4
φ2(x) = ∫_0^1 xt·(t/4) dt = x/12
φ3(x) = ∫_0^1 xt·(t/12) dt = x/36
Since λ = −1, the first three successive approximations give an approximate solution of the Integral Equation.
Hence y0 ( x ) = φ0 ( x ) = x 2
y1(x) = φ0(x) − φ1(x) = x² − x/4
y2(x) = φ0(x) − φ1(x) + φ2(x) = x² − x(1/4 − 1/12)
y3(x) = φ0(x) − φ1(x) + φ2(x) − φ3(x) = x² − x(1/4 − 1/12 + 1/36)
The exact solution can be found directly. We have
y(x) = x² − C1 x
where
C1 = ∫_0^1 t y(t) dt = ∫_0^1 t(t² − tC1) dt = 1/4 − C1/3
Consequently, C1 = 3/16 and y(x) = x² − 3x/16.
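As a numerical sanity check, the successive-approximation iteration above can be run on a quadrature grid; a minimal sketch (grid size and iteration count are arbitrary choices):

```python
import numpy as np

# Successive approximations for y(x) + int_0^1 x t y(t) dt = x^2  (lambda = -1).
# Exact solution (from C1 = 3/16): y(x) = x^2 - 3x/16.
n = 4000
t = (np.arange(n) + 0.5) / n          # midpoint quadrature nodes on [0, 1]
y = t**2                              # y_0 = f
for _ in range(40):                   # y_n(x) = x^2 - x * int_0^1 s y_{n-1}(s) ds
    y = t**2 - t * np.mean(t * y)

err = float(np.max(np.abs(y - (t**2 - 3 * t / 16))))
print(err)
```

The iteration contracts with ratio 1/3 (as the φn above show), so 40 steps are far more than enough.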
METHODS OF SOLUTION
Example 2.2
k(x, y) = g(x) h(y) … (1)
where g and h are functions of x only and y only respectively. Then the Fredholm equation may be written as
u(x) = f(x) + λ g(x) ∫_a^b h(y) u(y) dy … (2)
Hence the y-integral is a constant:
∫_a^b h(y) u(y) dy = c (≡ const) … (3)
We have as solution
u(x) = f(x) + λ c g(x) … (4)
u(x) = cosh x − 2x + c x³ … (6)
(the equation here has g(x) = x³, h(y) = y, λ = 1, f(x) = cosh x − 2x), where
c = ∫_0^1 y u(y) dy … (7)
c = ∫_0^1 y [cosh y − 2y + c y³] dy … (8)
Simplifying (using ∫_0^1 y cosh y dy = 1 − e^{−1}),
c = 5/12 − (5/4)e^{−1} … (9)
Therefore
u(x) = cosh x − 2x + [5/12 − (5/4)e^{−1}] x³ … (10)
there is a solution for a particular value of λ for which the solution is non-trivial. Find this
λ (eigen value) and the corresponding solution for u (the eigen function).
We write as above,
where
c = ∫_0^{π/2} sin y · u(y) dy … (14)
so that
c = cλ ∫_0^{π/2} sin²y dy = cπλ/4 … (15)
If c ≠ 0, then λ = 4/π. The solution corresponding to this value of λ is u(x):
u ( x ) = A sin x …(16 )
where A ≡ const
Example 2.4
The Volterra equation can sometimes be transformed into an ordinary differential equation, which
may be easier to solve than the integral equation. An example is the equation
u(x) = 2x + 4 ∫_0^x (y − x) u(y) dy … (17)
Differentiating with respect to x,
(d/dx) u(x) = 2 + 4 [{(y − x) u(y)}_{y=x} − ∫_0^x u(y) dy] … (18)
= 2 − 4 ∫_0^x u(y) dy … (19)
Differentiating again,
(d²/dx²) u(x) = −4 u(x) … (20)
with general solution
u(x) = A cos 2x + B sin 2x … (21)
A, B ≡ consts. We further determine A and B by substituting (21) into (17). We find that A = 0, B = 1, so
u(x) = sin 2x
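That u(x) = sin 2x satisfies (17) can be checked numerically at a sample point; a small sketch (the point x = 1.3 and the grid size are arbitrary choices):

```python
import numpy as np

# Check that u(x) = sin 2x satisfies u(x) = 2x + 4 * int_0^x (y - x) u(y) dy.
x = 1.3
n = 20000
y = (np.arange(n) + 0.5) * (x / n)    # midpoint nodes on [0, x]
rhs = 2 * x + 4 * np.mean((y - x) * np.sin(2 * y)) * x
err = abs(rhs - np.sin(2 * x))
print(err)
```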
Example 2.5
Find the resolvent and the radius of convergence of the series; write out the solution for an arbitrary free term f(x), and find the solution for λ = 1/2, f(x) = eˣ.
Solution
k(x, t) = k1(x, t) = e^{x−t}
k2(x, t) = ∫_0^1 k(x, s) k1(s, t) ds = ∫_0^1 e^{x−s} e^{s−t} ds = e^{x−t}
k3(x, t) = ∫_0^1 k(x, s) k2(s, t) ds = e^{x−t}
Hence
kn(x, t) = e^{x−t}  (n = 1, 2, …)
Γ(x, t, λ) = Σ_{n=1}^∞ λ^{n−1} kn(x, t) = e^{x−t} Σ_{n=1}^∞ λ^{n−1}
The series obtained is a geometric progression, which converges for |λ| < 1 and has the sum 1/(1 − λ). Therefore,
Γ(x, t, λ) = e^{x−t}/(1 − λ)
The solution of the equation has the form
y(x) = f(x) + (λ/(1 − λ)) ∫_0^1 e^{x−t} f(t) dt
For λ = 1/2 and f(x) = eˣ,
y(x) = eˣ + ∫_0^1 e^{x−t} eᵗ dt = 2eˣ
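The resolvent formula can be checked numerically for this case; a sketch (the sample point is an arbitrary choice):

```python
import numpy as np

# Check the resolvent solution y(x) = f(x) + lam/(1-lam) * int_0^1 e^{x-t} f(t) dt
# for lam = 1/2, f(x) = e^x; the result should be y(x) = 2 e^x.
lam = 0.5
n = 20000
t = (np.arange(n) + 0.5) / n
x = 0.37                               # arbitrary sample point
y = np.exp(x) + lam / (1 - lam) * np.mean(np.exp(x - t) * np.exp(t))
err = abs(y - 2 * np.exp(x))
print(err)
```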
Exercises
Find the Resolvent, radius of convergence of the series and the solution of the equation for any
arbitrary free term f ( x ) and also the solution for a given λ and f(x) of the following equations:
1. y(x) = λ ∫_0^1 x t y(t) dt + f(x);  λ = 1/2,  f(x) = (5/6)x
Ans.
Γ(x, t, λ) = Σ_{n=1}^∞ (λ/3)^{n−1} xt = 3xt/(3 − λ),  |λ| < 3;
y(x) = λ ∫_0^1 (3xt/(3 − λ)) f(t) dt + f(x);  y(x) = x
2. y(x) = λ ∫_0^1 t y(t) dt + f(x);  λ = 1/2,  f(x) = x
Ans.
Γ(x, t, λ) = t Σ_{n=1}^∞ (λ/2)^{n−1} = 2t/(2 − λ),  |λ| < 2;
for λ = 1/2 and f(x) = x the solution is y = x + 2/9
CHAPTER 3
y(x) = f(x) + λ ∫_0^x k(x, t) y(t) dt … (3.1)
where
y0 = f(x)
and
yn(x) = f(x) + λ ∫_0^x k(x, t) yn−1(t) dt
with iterated kernels
kn(x, t) = ∫_t^x k(x, s) kn−1(s, t) ds  (n = 2, 3, …)
so that
y(x) = f(x) + λ ∫_0^x Γ(x, t, λ) f(t) dt
Example 3.1
Solution
where φ0(x) = y0(x) = 1 and
φn(x) = ∫_0^x k(x, t) φn−1(t) dt
Then we have
φ0(x) = 1
φ1(x) = ∫_0^x (x − t) dt = x²/2!
φ2(x) = ∫_0^x (x − t)(t²/2!) dt = x⁴/4!
φ3(x) = ∫_0^x (x − t)(t⁴/4!) dt = x⁶/6!
so that the sequence of approximations is
y0(x) ∼ 1
y1(x) ∼ 1 − x²/2!
y2(x) ∼ 1 − x²/2! + x⁴/4!
y3(x) ∼ 1 − x²/2! + x⁴/4! − x⁶/6!
Example 3.2
Solution
k(x, t) = k1(x, t) = e^{x−t}
k2(x, t) = ∫_t^x k(x, s) k(s, t) ds = ∫_t^x e^{x−s} e^{s−t} ds = (x − t) e^{x−t}
k3(x, t) = ∫_t^x e^{x−s} (s − t) e^{s−t} ds = ((x − t)²/2!) e^{x−t}
Hence
kn(x, t) = ((x − t)^{n−1}/(n − 1)!) e^{x−t}
Exercises
1. Solve completely
y(x) = 5/6 + (1/2) ∫_0^1 x t y(t) dt
3. From (1) and (2) above estimate the accuracy of the results.
CHAPTER 4
The Reader should be able to identify degenerate kernels and solve the
corresponding Integral equations appropriately.
METHOD OF DEGENERATE KERNEL FOR FREDHOLM’S
ALGEBRAIC EQUATIONS
y(x) = f(x) + λ ∫_a^b k(x, t) y(t) dt … (4.1)
not only for a sufficiently small value of the parameter λ , but for all λ for
which this solution exists.
Definition
The kernel k(x, t) of an Integral Equation is called Degenerate if it can be represented as a finite sum of pair-wise products of functions, one of which is a function of x only and the other of t only:
k(x, t) = Σ_{i=1}^n a_i(x) b_i(t)  (4.2)
where a_i(x), b_i(t)
are linearly independent functions. Fredholm's I.E. of the 2nd type with degenerate kernel is the equation of the form
y(x) = f(x) + λ ∫_a^b [Σ_{i=1}^n a_i(x) b_i(t)] y(t) dt … (4.3)
y(x) = f(x) + λ [∫_a^b a1(x)b1(t) y(t) dt + ∫_a^b a2(x)b2(t) y(t) dt + … + ∫_a^b an(x)bn(t) y(t) dt]
= f(x) + λ [a1(x) ∫_a^b b1(t) y(t) dt + a2(x) ∫_a^b b2(t) y(t) dt + … + an(x) ∫_a^b bn(t) y(t) dt]
= f(x) + λ Σ_{i=1}^n a_i(x) ∫_a^b b_i(t) y(t) dt
Suppose
∫_a^b b_i(t) y(t) dt = c_i … (4.4)
Then
y(x) = f(x) + λ Σ_{i=1}^n c_i a_i(x) … (4.5)
If the coefficients c_i are found then the problem can be considered solved. However it is not possible to calculate the c_i directly, since the function y(x) is unknown. To find the c_i we substitute (4.5) into (4.4), after which we get a system of algebraic equations:
c_i = ∫_a^b b_i(t) f(t) dt + λ ∫_a^b b_i(t) Σ_{j=1}^n c_j a_j(t) dt
From which
c_i − λ Σ_{j=1}^n c_j g_ij = f_i … (4.6)
where
f_i = ∫_a^b b_i(t) f(t) dt
g_ij = ∫_a^b b_i(t) a_j(t) dt … (4.7)
Σ_{j=1}^n (δ_ij − λ g_ij) c_j = f_i … (4.8)
δ_ij = 1 if i = j, 0 if i ≠ j  (i, j = 1, …, n)
Written out, the last equation of the system reads
…………………
cn − λ Σ_{j=1}^n g_nj c_j = f_n
If the system (4.9) has a solution with respect to the unknown c_i , then the
non-homogeneous Integral Equation also has a solution defined by equation
(4.5). Consequently, the Integral Equation (Fredholm’s second type) with
degenerated kernel and the system (4.9) are equivalent.
D(λ) = det(δ_ij − λ g_ij) ≡
| 1 − λg11   −λg21   …   −λgn1 |
| −λg12   1 − λg22   …   −λgn2 |   … (4.10)
| ……………………………… |
| −λg1n   −λg2n   …   1 − λgnn |
By Cramer's rule,
c_i = D_i(λ)/D(λ) = (Σ_{j=1}^n A_ji(λ) f_j)/D(λ)
where D_i(λ) is the determinant (4.10) in which the column of free terms (r.h.s.) replaces the i-th column (i = 1, …, n). By the relation (4.5) the solution of the integral equation (4.3) has the form
y(x) = f(x) + λ ∫_a^b (Σ_{i=1}^n Σ_{j=1}^n a_i(x) b_j(t) A_ji(λ)/D(λ)) f(t) dt … (4.11)
We write
y(x) = f(x) + λ ∫_a^b (D(x, t, λ)/D(λ)) f(t) dt … (4.12)
The function
Γ(x, t, λ) = D(x, t, λ)/D(λ) = Σ_{i=1}^n Σ_{j=1}^n a_i(x) b_j(t) A_ji(λ)/D(λ)
Hence
y(x) = f(x) + λ ∫_a^b Γ(x, t, λ) f(t) dt
Example 4.1
Solution
and let
c1 = ∫_0^1 y(t) dt,  c2 = ∫_0^1 t y(t) dt … (1)
we have
y(x) = 2c1 + 6xc2 + x² … (2)
whence
c1 = ∫_0^1 (2c1 + 6tc2 + t²) dt = 2c1 + 3c2 + 1/3
c2 = ∫_0^1 (2c1 t + 6t²c2 + t³) dt = c1 + 2c2 + 1/4
OR
c1 + 3c2 = −1/3
c1 + c2 = −1/4
from which c1 = −5/24, c2 = −1/24. On the basis of equation (2), our solution of the given equation will be of the form
y(x) = x² − x/4 − 5/12
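The two linear equations for c1, c2 can be solved mechanically; a sketch restating c1 = 2c1 + 3c2 + 1/3 and c2 = c1 + 2c2 + 1/4 in matrix form:

```python
import numpy as np

# Solve c = G c + b, i.e. (I - G) c = b, then evaluate y(x) = 2 c1 + 6 c2 x + x^2.
G = np.array([[2.0, 3.0], [1.0, 2.0]])
b = np.array([1.0 / 3.0, 1.0 / 4.0])
c1, c2 = np.linalg.solve(np.eye(2) - G, b)
print(c1, c2)                      # -5/24, -1/24
x = 0.6                            # arbitrary sample point
y = 2 * c1 + 6 * c2 * x + x**2
err = abs(y - (x**2 - x / 4 - 5.0 / 12.0))
print(err)
```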
Example 4.2
Solution
Here the kernel is k(x, t) = 1 + x + t on [0, 1]. Representing
C1 = ∫_0^1 y(t) dt,  C2 = ∫_0^1 t y(t) dt … (1)
we have
y(x) = λ(1 + x)C1 + λC2 + f(x) … (2)
whence
(1 − (3/2)λ)C1 − λC2 = ∫_0^1 f(t) dt
−(5/6)λC1 + (1 − λ/2)C2 = ∫_0^1 t f(t) dt
with determinant
D(λ) = | 1 − (3/2)λ   −λ ; −(5/6)λ   1 − λ/2 | = 1 − 2λ − λ²/12
so that
C1 = [(1 − λ/2) ∫_0^1 f(t) dt + λ ∫_0^1 t f(t) dt] / D(λ)
C2 = [(5/6)λ ∫_0^1 f(t) dt + (1 − (3/2)λ) ∫_0^1 t f(t) dt] / D(λ)
and the resolvent is
Γ(x, t; λ) = {1 + x + t + λ[1/3 − (1/2)(x + t) + xt]} / (1 − 2λ − λ²/12)
Example 4.3
Solution
Let
∫_{−π}^{π} y(t) cos t dt = c1;  ∫_{−π}^{π} t² y(t) dt = c2;  ∫_{−π}^{π} y(t) sin t dt = c3
so that y(x) = c1λx + c2λ sin x + c3λ cos x + x, and
c1 = ∫_{−π}^{π} (c1λt + c2λ sin t + c3λ cos t + t) cos t dt
c2 = ∫_{−π}^{π} (c1λt + c2λ sin t + c3λ cos t + t) t² dt … (3)
c3 = ∫_{−π}^{π} (c1λt + c2λ sin t + c3λ cos t + t) sin t dt
Opening the brackets and regrouping, we have the following system:
c1(1 − λ ∫_{−π}^{π} t cos t dt) − c2 λ ∫_{−π}^{π} sin t cos t dt − c3 λ ∫_{−π}^{π} cos²t dt = ∫_{−π}^{π} t cos t dt,
−c1 λ ∫_{−π}^{π} t³ dt + c2(1 − λ ∫_{−π}^{π} t² sin t dt) − c3 λ ∫_{−π}^{π} t² cos t dt = ∫_{−π}^{π} t³ dt, … (4)
−c1 λ ∫_{−π}^{π} t sin t dt − c2 λ ∫_{−π}^{π} sin²t dt + c3(1 − λ ∫_{−π}^{π} sin t cos t dt) = ∫_{−π}^{π} t sin t dt
We now find the definite integrals in the system:
∫_{−π}^{π} t³ dt = 0
∫_{−π}^{π} sin²t dt = ∫_{−π}^{π} ½(1 − cos 2t) dt = π
∫_{−π}^{π} cos²t dt = ∫_{−π}^{π} ½(1 + cos 2t) dt = π
∫_{−π}^{π} sin t cos t dt = ½ ∫_{−π}^{π} sin 2t dt = 0
∫_{−π}^{π} t cos t dt = 0 and ∫_{−π}^{π} t² sin t dt = 0 (odd integrands)
∫_{−π}^{π} t sin t dt = [−t cos t]_{−π}^{π} + ∫_{−π}^{π} cos t dt = 2π
∫_{−π}^{π} t² cos t dt = [t² sin t]_{−π}^{π} − 2 ∫_{−π}^{π} t sin t dt = −4π
(here u = t², du = 2t dt, dv = cos t dt, v = sin t)
Thus the system (4) becomes
c1 − λπc3 = 0
c2 + 4λπc3 = 0 … (5)
−2λπc1 − λπc2 + c3 = 2π
We solve system (5) by Cramer's method. Its determinant is
D(λ) = | 1  0  −λπ ; 0  1  4λπ ; −2λπ  −λπ  1 | = 1 − 2π²λ² + 4π²λ² = 1 + 2π²λ² ≠ 0
and
c1 = 2π²λ/(1 + 2π²λ²)
c2 = −8π²λ/(1 + 2π²λ²)
c3 = 2π/(1 + 2π²λ²)
Substituting the values of c1, c2, c3 into the expression for y(x), we obtain the solution of the integral equation:
y(x) = (2πλ/(1 + 2π²λ²))(πλx − 4πλ sin x + cos x) + x
Exercises
1. y(x) − 4 ∫_0^{π/2} sin²x · y(t) dt = 2x − π
Answer:
y(x) = (π²/(π − 1)) sin²x + 2x − π
2. y(x) − ∫_{−1}^{1} exp{arcsin x} y(t) dt = tan x
Answer:
y(x) = tan x
3. y(x) − λ ∫_{−π/4}^{π/4} (tan t) y(t) dt = cot x
Answer:
y(x) = (π/2)λ + cot x
CHAPTER FIVE
At the end of this short chapter, you should be able to link Fourier Series to
degenerate kernel and be ready to apply this to find approximate solutions to
integral equations.
EXPANSION OF THE DEGENERATE KERNEL INTO FOURIER
SERIES
where the functions f(x) and k(x,t) are continuous, the kernel k(x,t) is replaced by a degenerate kernel approximating it
k^(n)(x, t) = Σ_{i=0}^n a_i(x) b_i(t)  (5.2)
k^(n)(x, t) = ½a0(t) + Σ_{k=1}^n a_k(t) cos(kπx/l)  (5.4)
a_k(t) ∼ ½a_{k0} + Σ_{m=1}^n a_{km} cos(mπt/l)
so that
k^(n)(x, t) ∼ … + Σ_{k=1}^n Σ_{m=1}^n a_{km} cos(kπx/l) cos(mπt/l),
where
a_{km} = (4/l²) ∫_a^b ∫_a^b k(x, t) cos(kπx/l) cos(mπt/l) dx dt.
of the form
y(x) = λ ∫_a^b [Σ_{i=1}^n a_i(x) b_i(t)] y(t) dt + f(x)
DEFINITION:
An eigenvalue of the integral equation 6.1 is a value of the parameter λ for which there exists a non-zero solution of the homogeneous integral equation
y(x) = λ ∫_a^b [Σ_{i=1}^n a_i(x) b_i(t)] y(t) dt … (6.2)
(1 − λg11)c1 − λg12 c2 − … − λg1n cn = 0
−λg21 c1 + (1 − λg22)c2 − … − λg2n cn = 0
............................................... … (6.3)
−λgn1 c1 − λgn2 c2 + … + (1 − λgnn)cn = 0
A non-trivial solution exists only if the determinant of the system vanishes,
D(λ) = det(δ_ij − λg_ij) = 0 … (6.4)
an algebraic equation of degree m ≤ n. If this equation has m roots then the integral equation 6.2 has m eigenvalues. Every eigenvalue λk (k = 1, 2, …, m; m ≤ n) corresponds to a non-zero solution of the homogeneous system 6.3
y1(x) = Σ_{i=1}^n c_i^(1) a_i(x)
y2(x) = Σ_{i=1}^n c_i^(2) a_i(x)
..........................................
ym(x) = Σ_{i=1}^n c_i^(m) a_i(x)
where the coefficients c_i^(k) satisfy
Σ_{i=1}^n (δ_ij − λk g_ij) c_i^(k) = 0
Example 6.1
Solution
y(x) = λx ∫_0^1 t² y(t) dt + f(x) … (1)
let
C1 = ∫_0^1 t² y(t) dt … (2)
then
y(x) = λC1 x + f(x) … (3)
C1 = ∫_0^1 t²[λC1 t + f(t)] dt = λC1/4 + ∫_0^1 t² f(t) dt
where
[1 − λ/4] C1 = ∫_0^1 t² f(t) dt … (4)
and
C1 = (1/(1 − λ/4)) ∫_0^1 t² f(t) dt … (5)
For the homogeneous equation (f ≡ 0),
[1 − λ/4] C1 = 0 … (6)
so λ = 4 is the eigenvalue of the equation, and the eigenfunction has the form
φ(x) = Cx … (7)
If λ ≠ 4, then by (5)
C1 = ∫_0^1 t² f(t) dt / (1 − λ/4)
Substituting C1 in (3) we get the solution of the given Integral Equation
y(x) = λx ∫_0^1 t² f(t) dt / (1 − λ/4) + f(x)
OR
y(x) = λ ∫_0^1 (4xt²/(4 − λ)) f(t) dt + f(x)
so that
Γ(x, t, λ) = 4xt²/(4 − λ)
For λ = 3 and f(x) = 1, the solution of the Integral Equation has the form
y(x) = 3 ∫_0^1 (4xt²/(4 − 3)) dt + 1 = 4x + 1
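A quick numerical check of the final answer (the sample point is an arbitrary choice):

```python
import numpy as np

# Check that y(x) = 4x + 1 solves y(x) = 3 x * int_0^1 t^2 y(t) dt + 1.
n = 100000
t = (np.arange(n) + 0.5) / n               # midpoint nodes on [0, 1]
integral = np.mean(t**2 * (4 * t + 1))     # ~ int_0^1 t^2 (4t + 1) dt = 4/3
x = 0.25
err = abs((3 * x * integral + 1) - (4 * x + 1))
print(err)
```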
Example 2
Solution
where
g11 = ∫_0^π b1(t) a1(t) dt = ∫_0^π cos²t dt = π/2
g12 = ∫_0^π b1(t) a2(t) dt = −∫_0^π sin t cos t dt = 0
g21 = ∫_0^π b2(t) a1(t) dt = ∫_0^π sin t cos t dt = 0
g22 = ∫_0^π b2(t) a2(t) dt = −∫_0^π sin²t dt = −π/2
f1 = ∫_0^π b1(t) f(t) dt = ∫_0^π cos t dt = 0
f2 = ∫_0^π b2(t) f(t) dt = ∫_0^π sin t dt = 2
The system (4.8) becomes
C1[1 − λπ/2] = 0 … (2)
C2[1 + λπ/2] = 2
whence, provided 1 + λπ/2 ≠ 0,
C1 = 0,  C2 = 2/(1 + λπ/2)
and
y(x) = λC1 y1(x) + λC2 y2(x) + f(x)
Hence
y(x) = −2λ sin x/(1 + λπ/2) + 1
The eigenvalues are λ1 = −2/π and λ2 = 2/π. For λ = λ1 = −2/π the second equation gives
C2 · 0 = 2
which is not possible; hence for this eigenvalue the given non-homogeneous equation has no solution. For λ = λ2 = 2/π the first equation is satisfied for any C1, and C2 = 1.
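A numerical check; the kernel cos(x + t) = cos x cos t − sin x sin t on [0, π] is an inference from the g_ij computed above:

```python
import numpy as np

# Check (assumed equation) y(x) = lam * int_0^pi cos(x+t) y(t) dt + 1
# against the closed form y(x) = 1 - 2 lam sin x / (1 + lam*pi/2).
lam = 1.0
n = 200000
t = (np.arange(n) + 0.5) * (np.pi / n)      # midpoint nodes on [0, pi]
y = lambda s: 1 - 2 * lam * np.sin(s) / (1 + lam * np.pi / 2)
x = 0.8
rhs = lam * np.mean(np.cos(x + t) * y(t)) * np.pi + 1
err = abs(rhs - y(x))
print(err)
```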
Example 3
find the solution, eigenvalues, eigenfunctions, resolvent, and also the solution for λ = λk, where λk is the determined eigenvalue.
Solution
y(x) = λx ∫_0^1 y(t) dt + λ ∫_0^1 t y(t) dt + f(x) … (1)
Let
∫_0^1 y(t) dt = C1,  ∫_0^1 t y(t) dt = C2 … (2)
then
y(x) = λC1 x + λC2 + f(x) … (3)
where
λC1 ∫_0^1 t dt + λC2 ∫_0^1 dt + ∫_0^1 f(t) dt = C1
λC1 ∫_0^1 t² dt + λC2 ∫_0^1 t dt + ∫_0^1 t f(t) dt = C2
OR
(λ/2)C1 + λC2 + ∫_0^1 f(t) dt = C1
(λ/3)C1 + (λ/2)C2 + ∫_0^1 t f(t) dt = C2
and then
∫_0^1 f(t) dt = [1 − λ/2]C1 − λC2
∫_0^1 t f(t) dt = −(λ/3)C1 + [1 − λ/2]C2 … (4)
The eigenvalues are given by
D(λ) = | 1 − λ/2   −λ ; −λ/3   1 − λ/2 | = 1 − λ − λ²/12 = 0
Hence the eigenvalues are
λ1,2 = −6 ± 4√3
Since K(x, t) = K(t, x), the kernel is symmetric. To find the eigenfunctions, solve (4) for λ = λ1, λ2, i.e. the system with the right hand side zero:
(1 − λk/2)C1 − λk C2 = 0
−(λk/3)C1 + [1 − λk/2]C2 = 0
Since the equations of the system are dependent we only need to consider one of them, say the first:
[1 − λk/2]C1 − λk C2 = 0  (k = 1, 2)
from which
λk C2 = C1(1 − λk/2)
and the eigenfunctions are
φk(x) = C1[λk x + 1 − λk/2]
For λ ≠ λ1, λ2 the system (4) gives
C1 = {[1 − λ/2] ∫_0^1 f(t) dt + λ ∫_0^1 t f(t) dt} / (1 − λ − λ²/12)
C2 = {(λ/3) ∫_0^1 f(t) dt + [1 − λ/2] ∫_0^1 t f(t) dt} / (1 − λ − λ²/12)
Substituting these values in equation (3) we find the solution of the non-homogeneous equation
y(x) = λ ∫_0^1 ({[1 − λ/2]x + λxt + λ/3 + [1 − λ/2]t} / (1 − λ − λ²/12)) f(t) dt + f(x)
so that
Γ(x, t, λ) = {[1 − λ/2]x + λxt + λ/3 + [1 − λ/2]t} / (1 − λ − λ²/12)
The kernel is symmetric and therefore the eigenfunctions of the Integral Equation and of its transpose coincide, i.e.
φk(x) = ψk(x) = C1[λk x + 1 − λk/2]
For λ = λk the non-homogeneous equation is solvable only if
λk ∫_0^1 t f(t) dt + [1 − λk/2] ∫_0^1 f(t) dt = 0 … (5)
This is the condition for consistency of system (4), in which case (4) reduces to
[1 − λk/2]C1 − λk C2 = ∫_0^1 f(t) dt,
OR
λk C2 = [1 − λk/2]C1 − ∫_0^1 f(t) dt
so that
y(x) = C1[λk x + 1 − λk/2] − ∫_0^1 f(t) dt + f(x)
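The resolvent found above can be tested numerically against the original kernel x + t; a sketch with an arbitrary test free term:

```python
import numpy as np

# Check that y(x) = f(x) + lam * int_0^1 Gamma(x,t;lam) f(t) dt solves
# y(x) = lam * int_0^1 (x + t) y(t) dt + f(x), with
# Gamma = ((1-lam/2)x + lam*x*t + lam/3 + (1-lam/2)t) / (1 - lam - lam^2/12).
lam = 0.7
t = (np.arange(20000) + 0.5) / 20000       # fine midpoint grid for inner integrals
D = 1 - lam - lam**2 / 12

def f(s):
    return np.cos(3 * s)                   # arbitrary test free term

def y(x):
    gamma = ((1 - lam / 2) * x + lam * x * t + lam / 3 + (1 - lam / 2) * t) / D
    return f(x) + lam * np.mean(gamma * f(t))

x = 0.4
ts = (np.arange(400) + 0.5) / 400          # coarser midpoint grid for the outer integral
rhs = lam * np.mean([(x + ti) * y(ti) for ti in ts]) + f(x)
err = abs(y(x) - rhs)
print(err)
```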
Summary
1. If the parameter λ is not an eigenvalue (i.e. D(λ) ≠ 0), then the non-homogeneous equation has a unique solution, while the homogeneous equation (6.2) has only the zero solution.
2. If λ is an eigenvalue, the homogeneous equation (6.2) has a non-zero solution (eigenfunction); if there are n such eigenfunctions, the solution of the homogeneous integral equation 6.2 can be put in the form of a linear combination of them:
y(x) = C1 φ1(x) + C2 φ2(x) + … + Cn φn(x) = Σ_{i=1}^n C_i φ_i(x)
EXERCISES
For the equations with degenerate kernel find the eigen values, resolvent and
the solution of the non-homogeneous integral equation for the given values
of λ and f ( x )
2π
1. y ( x) = λ ∫0 sin x sin ty (t ) + f ( x), if λ = 1, f ( x) = 1
Answer
1 sin x sin t
λ1 = , ϕ1 ( x) = C sin x, Γ ( x, t ; λ ) =
π 1 − λπ
y = C sin x + f ( x), y ( x) = 1
2. y(x) = −λ ∫_0^1 (x²t + xt²) y(t) dt + f(x), if λ = λk
Answer
Γ(x, t; λ) = {(λ/5)xt − (1 + λ/4)x²t − (1 + λ/4)xt² + (λ/3)x²t²} / (1 + λ/2 − λ²/240)
3. y(x) = λ ∫_{−1}^1 (xt + x²t²) y(t) dt + f(x), if λ = 1, f(x) = x
Answer
λ1 = 3/2,  φ1(x) = ψ1(x) = Cx;  λ2 = 5/2,  φ2(x) = ψ2(x) = Cx²
Γ(x, t; λ) = xt/(1 − 2λ/3) + x²t²/(1 − 2λ/5)
If λ = 3/2,
y(x) = Cx + f(x) + (15/4) x² ∫_{−1}^1 t² f(t) dt;  if λ = 5/2,
y(x) = Cx² + f(x) − (15/4) x ∫_{−1}^1 t f(t) dt
For λ = 1, f(x) = x: y(x) ≡ 3x
CHAPTER SEVEN
Readers will be able to formulate and apply Fredholm’s theorems for the
non-homogeneous and homogeneous Fredholm’s Integral Equations.
FREDHOLM’S ALTERNATIVE
Theorem 7.1
(Fredholm's alternative) Either the non-homogeneous equation of the second type
y(x) = f(x) + λ ∫_a^b k(x, t) y(t) dt … (7.1)
has a unique solution for every function f(x), or the corresponding homogeneous equation
y(x) = λ ∫_a^b k(x, t) y(t) dt … (7.2)
has a non-trivial solution.
Theorem 7.2
The necessary and sufficient condition for the existence of a solution y(x) of the non-homogeneous integral equation (7.1), in the second case of the alternative, is the orthogonality of the right-hand side f(x) to every solution ψ(x) of the homogeneous integral equation adjoint (transposed) to equation 7.2,
ψ(x) = λ ∫_a^b k(t, x) ψ(t) dt
i.e.
∫_a^b f(x) ψ(x) dx = 0 … (7.3)
For a degenerate kernel, the condition of orthogonality (7.3) of the right-hand side of the equation gives n equations
∫_a^b f(t) b_i(t) dt = 0  (i = 1, 2, …, n) … (7.4)
Example 1
Solution
Let
C = ∫_0^1 t² y(t) dt … (1)
then
y(x) = Cλ(5x² − 3) + eˣ … (2)
C = ∫_0^1 t²[Cλ(5t² − 3) + eᵗ] dt = Cλ ∫_0^1 t²(5t² − 3) dt + ∫_0^1 t² eᵗ dt
= Cλ ∫_0^1 (5t⁴ − 3t²) dt + ∫_0^1 t² eᵗ dt
whence
C − Cλ ∫_0^1 (5t⁴ − 3t²) dt = ∫_0^1 t² eᵗ dt
Since ∫_0^1 (5t⁴ − 3t²) dt = 1 − 1 = 0,
C = ∫_0^1 t² eᵗ dt = e − 2 ∫_0^1 t eᵗ dt
(t = u, eᵗdt = dv, du = dt, v = eᵗ), hence
C = e − 2[t eᵗ|_0^1 − ∫_0^1 eᵗ dt] = e − 2[e − (e − 1)] = e − 2
and, with λ = 1,
y(x) = (e − 2)(5x² − 3) + eˣ
CHAPTER 8
∫_a^b F(x) dx = Σ_{i=1}^n k_i F(x_i) + R[F] … (8.1)
For Simpson's rule with n = 2m + 1 equally spaced nodes,
k1 = k_{2m+1} = h/3
k2 = k4 = … = k_{2m} = 4h/3
k3 = k5 = … = k_{2m−1} = 2h/3
y(x) − λ ∫_a^b k(x, t) y(t) dt = f(x)  (a ≤ x ≤ b) … (8.2)
Let
δ_ij = 0 if i ≠ j, 1 if i = j
Since
y_i = Σ_{j=1}^n δ_ij y_j
we obtain the system
Σ_{j=1}^n (δ_ij − λ k_j k_ij) y_j = f_i  (i = 1, 2, …, n) … (8.5)
If
D(λ) = det(δ_ij − λ k_j k_ij) ≠ 0 … (8.6)
the system has a unique solution y1, …, yn, and an approximation to the solution is
y(x) = f(x) + λ Σ_{j=1}^n k_j k(x, x_j) y_j … (8.7)
If λ = λg is an eigenvalue, the corresponding solutions Y_i^{gl} (i = 1, 2, …, n; l = 1, 2, …, p_g) of the homogeneous system
y_i − λ Σ_{j=1}^n k_j k_ij y_j = 0  (i = 1, 2, …, n)
give approximate eigenfunctions
φ_gl(x) = λg Σ_{j=1}^n k_j k(x, x_j) Y_j^{gl}
For a Volterra equation
y(x) − λ ∫_a^x k(x, t) y(t) dt = f(x)
the system is triangular and is solved successively:
y1 = f1 (1 − λk1 k11)^{−1}
………………
yn = (fn + λ Σ_{j=1}^{n−1} k_j k_nj y_j)(1 − λkn knn)^{−1}
In general, let
y(x) = λ ∫_a^b k(x, t) y(t) dt + f(x) … (8.9)
Example 1
Find the approximate solution of the integral equation
y(x) = eˣ − ∫_0^1 x e^{xt} y(t) dt
Solution
In the given interval of integration [0, 1] we select nodes at the points x1 = 0, x2 = 0.5, x3 = 1; step size h = 0.5. The values f(x_i) = e^{x_i} and k(x_i, t_j) = x_i e^{x_i t_j} are:

x_i  | f(x_i) | t = 0 | t = 0.5 | t = 1
0    | 1      | 0     | 0       | 0
0.5  | 1.6487 | 0.5   | 0.6420  | 0.8244
1    | 2.7183 | 1     | 1.6487  | 2.7183
By Simpson’s quadrature formula we have
∫_0^1 F(t) dt ∼ (1/6)[F(0) + 4F(0.5) + F(1)]
and the system (8.5) becomes
y1 = 1
y2 + (1/6)(0.5y1 + 2.5680y2 + 0.8244y3) = 1.6487
y3 + (1/6)(y1 + 6.5948y2 + 2.7183y3) = 2.7183
whose solution is y1 = 1, y2 ≈ 1.0000, y3 ≈ 0.9997. By (8.7),
y(x) = eˣ − (x/6)[y1 + 4e^{0.5x} y2 + eˣ y3] ≈ eˣ − (x/6)(1 + 4e^{0.5x} + eˣ)
which agrees with the exact solution y(x) ≡ 1.
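The Nyström scheme above is easy to reproduce; a sketch of the 3-node Simpson discretisation (the exact solution y ≡ 1 is used for comparison):

```python
import numpy as np

# Nystrom / Simpson discretisation of y(x) + int_0^1 x e^{xt} y(t) dt = e^x
# on the nodes 0, 0.5, 1 used in the example; the exact solution is y(x) = 1.
xs = np.array([0.0, 0.5, 1.0])
w = np.array([1.0, 4.0, 1.0]) / 6.0                  # Simpson weights for [0, 1]
K = xs[:, None] * np.exp(xs[:, None] * xs[None, :])  # k(x_i, t_j) = x_i e^{x_i t_j}
A = np.eye(3) + K * w[None, :]                       # (I + K diag(w)) y = f
f = np.exp(xs)
y = np.linalg.solve(A, f)
err = float(np.max(np.abs(y - 1.0)))
print(y, err)                                        # y_i close to 1 at every node
```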
EXERCISES
Find the approximate solution of the integral equations below:
1) y(x) = ∫_0^{π/4} x (sin t) y(t) dt + sin x
2) y(x) = ∫_0^1 x (1 − e^{xs}) y(s) ds + eˣ − x
0
CHAPTER NINE
DEFINITION
Green’s function G ( t , s ) of the boundary value problem 9.1 – 9.2 is
a function in two variables, defined in the rectangle a ≤ t , s ≤ b and
such that
1) Lt G(t, s) = 0 for t < s and t > s; as a function of t it satisfies equation 9.1 for these values of t and s. In other words, in the intervals a < t < s and s < t < b, G(t, s) possesses continuous second-order derivatives with respect to t.
(figure: the unit step function θ(t − s), equal to 0 for t < s and jumping to 1 at t = s)
we conclude that
(d²/dt²) G(t, s) = (d/dt)((d/dt) G(t, s)) = δ(t − s) + {d²G/dt²}
where the braces denote the ordinary derivative taken for t ≠ s; hence
Lt G(t, s) = δ(t − s) … (9.3)
Further,
Lt x(t) = Lt {∫_a^b G(t, s) h(s) ds} = ∫_a^b Lt G(t, s) h(s) ds = ∫_a^b δ(t − s) h(s) ds = h(t)
a
G(s − ε, s) (dx/dt)|_{t=s−ε} − G(s + ε, s) (dx/dt)|_{t=s+ε}
− x(s − ε) (∂G(t, s)/∂t)|_{t=s−ε} + x(s + ε) (∂G(t, s)/∂t)|_{t=s+ε}
meaning
x(t) + ∫_a^b K(s, t) x(s) ds = F(t) … (9.6)
where K(s, t) = −G(t, s) g(s) and F(t) = ∫_a^b G(t, s) h(s) ds. With
G(t, s) = { (t − b)(s − a)/(b − a) for s ≤ t;  (t − a)(s − b)/(b − a) for s ≥ t }
we find
dx/dt = ∫_a^t ((s − a)/(b − a)) [−g(s)x(s) + h(s)] ds + ∫_t^b ((s − b)/(b − a)) [−g(s)x(s) + h(s)] ds
d²x/dt² = ((t − a)/(b − a)) [−g(t)x(t) + h(t)] − ((t − b)/(b − a)) [−g(t)x(t) + h(t)]
= −g(t)x(t) + h(t)
Note
Green’s function is symmetric.
EXAMPLE
The influence function R(t, s) is the solution of the system
d²R/dt² + R = δ(t − s)
R|_{t=s} = 0,  dR/dt|_{t=s} = 1
The value of R that satisfies this is R(t, s) = sin(t − s). The corresponding integral representation for the I.V.P. (1) is
y(t) = ∫_0^t sin(t − s) F(s) ds
If the initial values y(0), y′(0) are non-zero, then we can add a suitable solution c1 u1 + c2 u2 of the homogeneous equation to the integral representation and evaluate the constants C1 and C2 using the prescribed conditions.
Example
c1 = −1 and c2 = 1
Example
Lx = d²x/dt² − x(t) = h(t) … (1)
x(0) = 0,  x(1) = 0 … (2)
we construct Green’s function of the problem (1) – (2). The
general solution of the homogeneous equation
d²x/dt² − x(t) = 0
is
x(t) = c1 eᵗ + c2 e⁻ᵗ
⎧⎪G ( t , s ) = a1 ( s ) et + a2 ( s ) e − t , 0≤t≤s
⎨ b 2 − 4ac
⎪⎩G ( t,s ) = b1 ( s ) e + b2 ( s ) e ,
−t
t
s ≤ t ≤1
… ( 3)
The boundary conditions require G(0, s) = 0, G(1, s) = 0. This gives

a1 + a2 = 0
b1 e + b2 e^{−1} = 0        … (4)

Continuity of G at t = s requires

b1 e^s + b2 e^{−s} = a1 e^s + a2 e^{−s}        … (5)

and the unit jump of ∂G/∂t at t = s,

∂G(s + ε, s)/∂t − ∂G(s − ε, s)/∂t = 1,

gives

b1 e^s − b2 e^{−s} − [a1 e^s − a2 e^{−s}] = 1        … (6)

From (4), a1 = −a2 and b1 e = −b2 e^{−1}; substituting into (5) and (6) determines a1, a2, b1, b2.
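Conditions (4)-(6) form, for each fixed s, a linear system in a1, a2, b1, b2. The following numerical sketch (an illustration, not from the text) solves that system and compares the branch 0 ≤ t ≤ s with the closed form G(t, s) = −sinh t sinh(1 − s)/sinh 1 that it yields:

```python
import numpy as np

# Solve conditions (4)-(6) at fixed s, then compare the branch 0 <= t <= s
# with the closed form -sinh(t) sinh(1 - s) / sinh(1).
def coeffs(s):
    es, e1 = np.exp(s), np.e
    A = np.array([
        [1.0,  1.0,   0.0,  0.0  ],   # a1 + a2 = 0           (G(0,s) = 0)
        [0.0,  0.0,   e1,   1/e1 ],   # b1 e + b2 e^-1 = 0    (G(1,s) = 0)
        [-es, -1/es,  es,   1/es ],   # continuity of G at t = s
        [-es,  1/es,  es,  -1/es ],   # unit jump of dG/dt at t = s
    ])
    return np.linalg.solve(A, np.array([0.0, 0.0, 0.0, 1.0]))

max_err = 0.0
for s in (0.3, 0.7):
    a1, a2, b1, b2 = coeffs(s)
    for t in (0.1, s):                # points with t <= s
        G = a1 * np.exp(t) + a2 * np.exp(-t)
        exact = -np.sinh(t) * np.sinh(1 - s) / np.sinh(1.0)
        max_err = max(max_err, abs(G - exact))
assert max_err < 1e-10
```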
Lx = d²x/dt² + (−1 + λ q(t)) x(t) = h(t) … (10.1)

(λ is a numeric parameter)

x(0) = x(1) = 0 … (10.2)

Rewrite (10.1) as

d²x/dt² − x(t) = h(t) − λ q(t) x(t) … (10.3)

Applying the Green's function of the preceding example to the right-hand side of (10.3) gives the integral equation

x(t) = h1(t) − λ ∫_0^1 G(t, s) q(s) x(s) ds

where

h1(t) = ∫_0^1 G(t, s) h(s) ds

and G(t, s) is the Green's function of

d²x/dt² − x(t) = 0

and the boundary condition (10.2).
Next we reduce the b.v.p.

y'' + λy = 0 … (10.4)

y(0) = 0,  y'(1) + v2 y(1) = 1 … (10.5)

to an integral equation. The Green's function is sought in the form

G(s, t) = A1(t) s, s < t
G(s, t) = A2(t) [1 + v2(1 − s)], s > t

Continuity at s = t gives A1(t) = C[1 + v2(1 − t)] and A2(t) = Ct, and the jump condition for ∂G/∂s at s = t reads

Ct(−v2) − C[1 + v2(1 − t)] = −1

OR

C = 1/(1 + v2)

so that

G(s, t) = [1 + v2(1 − t)] s/(1 + v2), s < t
G(s, t) = [1 + v2(1 − s)] t/(1 + v2), s > t

The b.v.p. (10.4) - (10.5) is then equivalent to the integral equation

y(s) = λ [1 + v2(1 − s)]/(1 + v2) ∫_0^s t y(t) dt + λ s/(1 + v2) ∫_s^1 [1 + v2(1 − t)] y(t) dt + s/(1 + v2)

where s/(1 + v2) is the solution of y'' = 0 satisfying the conditions (10.5).
the homogeneous equation

d⁴y/ds⁴ = 0

The conditions at s = 0 give A0(t) = A1(t) = 0, and those at s = 1 give B2 = −3B0 − 2B1, B3 = 2B0 + B1, so that

G(s, t) = A2(t) s² + A3(t) s³,  s < t
G(s, t) = (1 − s)² [B0(t)(1 + 2s) + B1(t) s],  s > t

Continuity of G at s = t then requires

t² A2 + t³ A3(t) − (1 − 3t² + 2t³) B0 − t(1 − t)² B1 = 0
CHAPTER 11
g(s) = f(s) + λ ∫ k(s, t) g(t) dt … (11.1)

where the kernel k(s, t) is symmetric, with eigenvalues λk and orthonormal eigenfunctions Φk(s). Set

ck = ∫ [g(s) − f(s)] Φk(s) ds = gk − fk … (11.3)

with

gk = ∫ g(s) Φk(s) ds,  fk = ∫ f(s) Φk(s) ds

Substituting into (11.1) gives

ck = [λ/(λk − λ)] fk,  gk = [λk/(λk − λ)] fk

so that

g(s) = f(s) + λ Σ_{k=1}^∞ [fk/(λk − λ)] Φk(s) … (11.7)

OR

g(s) = f(s) + λ Σ_{k=1}^∞ [Φk(s)/(λk − λ)] ∫ Φk(t) f(t) dt … (11.8)
Example 1
Solve the symmetric integral equation

g(s) = s² + 1 + λ ∫_{−1}^{1} (st + s²t²) g(t) dt

Solution
The degenerate kernel st + s²t² has eigenfunctions proportional to s and s², with eigenvalues λ1 = 3/2 and λ2 = 5/2. Take λ = λ1 = 3/2; λ1 is an eigenvalue, and since f1 = ∫_{−1}^{1} t(t² + 1) dt = 0 the term f1/(λ1 − λ) takes the indeterminate form 0/0, so the coefficient of the eigenfunction s remains an arbitrary constant. Writing g(s) = s² + 1 + λ(as + bs²) with a = ∫_{−1}^{1} t g(t) dt and b = ∫_{−1}^{1} t² g(t) dt, the second coefficient gives b = 16/15 + (3/2)(2/5) b, i.e. b = 8/3, while a is left arbitrary. Hence

g(s) = 5s² + cs + 1

with c an arbitrary constant.
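The answer can be confirmed by substituting it back into the equation; the following numerical sketch checks the residual for several values of the arbitrary constant c:

```python
import numpy as np

# Substitute g(s) = 5s^2 + cs + 1 into
#   g(s) = s^2 + 1 + (3/2) ∫_{-1}^{1} (s t + s^2 t^2) g(t) dt
# and check that the residual vanishes for arbitrary c.
t = np.linspace(-1.0, 1.0, 20001)

def residual(c, s):
    g = 5 * t**2 + c * t + 1
    integrand = (s * t + s**2 * t**2) * g
    integral = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(t)) / 2)
    return (5 * s**2 + c * s + 1) - (s**2 + 1 + 1.5 * integral)

max_err = max(abs(residual(c, s))
              for c in (0.0, 2.0, -7.0) for s in (-0.5, 0.3, 1.0))
assert max_err < 1e-6
```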
Example 3
Solve the symmetric integral equation

g(s) = f(s) + λ ∫ k(s) k(t) g(t) dt

Solution
If we write the kernel as k(s, t) = k(s) k(t), we observe that it has the single eigenvalue and normalized eigenfunction

λ1 = 1 / ∫ [k(t)]² dt,  Φ1(s) = k(s) / { ∫ [k(t)]² dt }^{1/2}

with

f1 = ∫ f(t) k(t) dt / { ∫ [k(t)]² dt }^{1/2}

Formula (11.7) then gives

g(s) = f(s) + λ k(s) ∫ f(s) k(s) ds / { 1 − λ ∫ [k(s)]² ds }

On the other hand, if

λ = λ1 = 1 / ∫ [k(s)]² ds

a solution exists only when ∫ f(s) k(s) ds = 0, and then

g(s) = f(s) + C k(s),  C-arbitrary constant.
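A concrete instance (data assumed for illustration): take k(s) = s on [0, 1], f(s) = 1, λ = 1. Then ∫ f k ds = 1/2 and ∫ [k]² ds = 1/3, so the formula gives g(s) = 1 + (3/4)s, which the sketch below verifies:

```python
import numpy as np

# Check g(s) = 1 + (3/4) s against g(s) = f(s) + λ ∫_0^1 k(s) k(t) g(t) dt
# with the assumed data k(s) = s, f(s) = 1, λ = 1.
lam = 1.0
t = np.linspace(0.0, 1.0, 4001)
g = 1 + 0.75 * t

max_err = 0.0
for s in (0.2, 0.9):
    integrand = s * t * g                 # k(s) k(t) g(t)
    integral = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(t)) / 2)
    max_err = max(max_err, abs((1 + 0.75 * s) - (1.0 + lam * integral)))
assert max_err < 1e-7
```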
Example 4
Solve the symmetric Fredholm integral equation of the first kind

∫ k(s, t) g(t) dt = f(s) … (1)

where

k(s, t) = s(1 − t), s < t
k(s, t) = t(1 − s), s > t        … (2)

Solution
The kernel k(s, t) is the Green's function of the b.v.p.

d²y/dt² + λy = 0,  y(0) = y(1) = 0 … (3)

whose eigenvalues and normalized eigenfunctions are

λ1 = π², λ2 = (2π)², λ3 = (3π)², …,  Φk(t) = √2 sin kπt

A solution of (1) exists only if the series

Σ_{k=1}^∞ fk² λk² = π⁴ Σ_{k=1}^∞ k⁴ fk²

converges
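The eigenpairs can be spot-checked numerically: applying the kernel to sin nπt should return sin nπs divided by (nπ)². A sketch:

```python
import numpy as np

# ∫_0^1 k(s,t) sin(nπt) dt = sin(nπs) / (nπ)^2, confirming λ_n = (nπ)^2.
t = np.linspace(0.0, 1.0, 8001)

def apply_kernel(s, n):
    k = np.where(s < t, s * (1 - t), t * (1 - s))
    f = k * np.sin(n * np.pi * t)
    return float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2)

max_err = max(abs(apply_kernel(s, n) - np.sin(n * np.pi * s) / (n * np.pi) ** 2)
              for n in (1, 2, 3) for s in (0.25, 0.6))
assert max_err < 1e-6
```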
Example 5
Solve the integral equation

f(Θ) = (1 − p²)/(2π) ∫_0^{2π} g(α) dα / [1 − 2p cos(Θ − α) + p²],  0 ≤ Θ ≤ 2π, 0 < p < 1 … (1)

Solution
Here, the symmetric kernel K(Θ, α) can be expanded to give

K(Θ, α) = [(1 − p²)/(2π)] [1 − 2p cos(Θ − α) + p²]^{−1} = 1/(2π) + (1/π) Σ_{k=1}^∞ p^k cos[k(Θ − α)] … (2)

Since

∫_0^{2π} K(Θ, α) (2π)^{−1/2} dα = (2π)^{−1/2}

λ = 1 is an eigenvalue with eigenfunction Φ(s) = (2π)^{−1/2}. Using the formula

∫_0^{2π} K(Θ, α) {cos nα; sin nα} dα = p^n {cos nΘ; sin nΘ},  n = 1, 2, 3, …

we have

λ_{2k−1} = λ_{2k} = p^{−k};  Φ_{2k−1}(s) = π^{−1/2} cos ks,  Φ_{2k}(s) = π^{−1/2} sin ks

A solution exists only if the series

Σ_{n=1}^∞ (an² + bn²)/p^{2n}

converges, where

an = (1/π) ∫_0^{2π} f(Θ) cos nΘ dΘ
bn = (1/π) ∫_0^{2π} f(Θ) sin nΘ dΘ
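The formula used above can be spot-checked numerically (a sketch with an assumed value p = 0.5):

```python
import numpy as np

# Spot-check ∫_0^{2π} K(Θ,α) cos(nα) dα = p^n cos(nΘ) for the kernel
# K(Θ,α) = (1 - p^2) / (2π [1 - 2p cos(Θ-α) + p^2]).
p = 0.5
alpha = np.linspace(0.0, 2 * np.pi, 20001)

def apply_kernel(theta, n):
    K = (1 - p**2) / (2 * np.pi * (1 - 2 * p * np.cos(theta - alpha) + p**2))
    f = K * np.cos(n * alpha)
    return float(np.sum((f[1:] + f[:-1]) * np.diff(alpha)) / 2)

max_err = max(abs(apply_kernel(th, n) - p**n * np.cos(n * th))
              for n in (1, 2) for th in (0.0, 1.1))
assert max_err < 1e-8
```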
CHAPTER 12
By the end of this chapter, the reader will have been exposed to some terminology and theorems in integral equations and their possible uses in both theory and application.
SUPPLEMENTARY MATERIALS
1. Hermitian Kernel:
A complex-valued kernel k ( x, t ) is called symmetric (or Hermitian) if
k ( x, t ) = k ∗ ( t , x ) where k ∗ ( t , x ) denotes the complex conjugate of k ( x, t ) .
For real-valued kernel, the above definition coincides with the definition
k ( x, t ) = k ( t , x ) .
2. Convolution Integral:
Consider an integral equation in which the kernel k(x, t) is a function of the difference (x − t) only:

k(x, t) = k(x − t) … (1)

where k is a certain function of one variable. The integral equation

y(x) = f(x) + λ ∫_a^x k(x − t) y(t) dt … (2)

and the corresponding Fredholm equation are called integral equations of the convolution type. The integral

∫_0^x k(x − t) y(t) dt = ∫_0^x k(t) y(x − t) dt … (3)

is called the convolution or the Faltung of the two functions k and y. The integrals occurring in (3) are called convolution integrals. Similarly,

∫_{−∞}^{∞} k(x − t) y(t) dt = ∫_{−∞}^{∞} k(t) y(x − t) dt … (4)

(3) is obtained from (4) by taking

k(t) = y(t) = 0 for t < 0 and t > x
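The equality of the two integrals in (3) follows from the substitution t → x − t; a numerical sketch with assumed test functions k(t) = e^{−t}, y(t) = sin t:

```python
import numpy as np

# ∫_0^x k(x-t) y(t) dt equals ∫_0^x k(t) y(x-t) dt (substitute t -> x - t).
def conv(f, g, x, n=4001):
    t = np.linspace(0.0, x, n)
    h = f(x - t) * g(t)
    return float(np.sum((h[1:] + h[:-1]) * np.diff(t)) / 2)

k = lambda t: np.exp(-t)        # assumed test kernel
y = np.sin                      # assumed test function
max_err = max(abs(conv(k, y, x) - conv(y, k, x)) for x in (0.5, 2.0))
assert max_err < 1e-12
```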
3. Inner Product and Norm:
The inner product of two complex-valued L2-functions Φ and ψ is defined by

(Φ, ψ) = ∫_a^b Φ(t) ψ*(t) dt … (1)

(an L2-function g(t) is one that is square-integrable: ∫_a^b |g(t)|² dt < ∞).
The functions Φ and ψ are orthogonal if (Φ, ψ) = 0. The norm of a function Φ(t) is given by the relation

‖Φ‖ = [ ∫_a^b Φ(t) Φ*(t) dt ]^{1/2} = [ ∫_a^b |Φ(t)|² dt ]^{1/2}

and the Schwarz inequality

|(Φ, ψ)| ≤ ‖Φ‖ ‖ψ‖

holds.
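A small numerical illustration of these definitions (a sketch; Φ(t) = sin 3t and ψ(t) = e^t on [0, 1] are assumed test functions):

```python
import numpy as np

# Inner product, norm, and the Schwarz inequality |(Φ,ψ)| <= ||Φ|| ||ψ||.
t = np.linspace(0.0, 1.0, 2001)

def inner(f, g):
    h = f(t) * np.conj(g(t))            # ψ* as in the definition (real here)
    return float(np.real(np.sum((h[1:] + h[:-1]) * np.diff(t)) / 2))

phi = lambda x: np.sin(3 * x)
psi = lambda x: np.exp(x)
lhs = abs(inner(phi, psi))
norm_phi = np.sqrt(inner(phi, phi))
norm_psi = np.sqrt(inner(psi, psi))
assert lhs < norm_phi * norm_psi
```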
4. Fredholm’s Theorems (with degenerate kernel)
We consider the degenerate kernel

k(t, s) = Σ_{i=1}^{n} ai(t) bi(s) … (1)

as in the definition. The Fredholm integral equation of the 2nd kind with degenerate kernel k(x, t) is

y(x) = λ Σ_{i=1}^{n} ai(x) ∫_a^b bi(t) y(t) dt + f(x) … (2)

where f(x) is a continuous function on the interval [a, b]. Let equation (2) have the solution y = y(x). Then, as before, set

Ci = ∫_a^b y(t) bi(t) dt (i = 1, 2, …, n)

so that

y(x) = f(x) + λ Σ_{i=1}^{n} Ci ai(x) … (4)

From this we conclude that solving the integral equation with degenerate kernel reduces to determining the constants Ci (i = 1, 2, …, n).
To the equation

y(x) = λ ∫_a^b k(x, t) y(t) dt + f(x) … (5)

the equation

ψ(x) = λ ∫_a^b k*(t, x) ψ(t) dt + g(x) … (6)

is called the conjugate (adjoint) of equation (5). For equation (2) with degenerate kernel, the adjoint equation has the form

ψ(x) = λ ∫_a^b Σ_{i=1}^{n} ai(t) bi(x) ψ(t) dt + g(x) … (7)

For this equation

ψ(x) = g(x) + λ Σ_{i=1}^{n} ci bi(x) … (8)

where

ci = ∫_a^b ψ(t) ai(t) dt (i = 1, 2, …, n) … (9)

With kij = ∫_a^b bi(t) aj(t) dt, the homogeneous forms of these systems are

ci − λ Σ_{j=1}^{n} kji cj = 0 … (10)

for the adjoint equation and

Ci − λ Σ_{j=1}^{n} kij Cj = 0 … (11)

for equation (2). Both the system and its adjoint have the same number of linearly independent solution vectors. If {c1^(l), …, cn^(l)} (l = 1, 2, …, p) are the non-zero solution vectors of system (10), then the functions

ψl(x) = Σ_{i=1}^{n} ci^(l) bi(x) (l = 1, 2, …, p)

are the eigenfunctions of the homogeneous equation

ψ(x) = λ Σ_{i=1}^{n} bi(x) ∫_a^b ai(t) ψ(t) dt … (12)
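The reduction (2)-(4) can be sketched numerically for an assumed two-term kernel k(x, t) = xt + x²t² on [0, 1] with f(x) = 1 and λ = 1; here kij = ∫ bi(t) aj(t) dt and fi = ∫ bi(t) f(t) dt:

```python
import numpy as np

# Solve (I - λ K) C = f for the constants C_i, then verify that
# y(x) = f(x) + λ Σ C_i a_i(x) satisfies the integral equation.
lam = 1.0
K = np.array([[1/3, 1/4],              # k_ij = ∫_0^1 b_i(t) a_j(t) dt
              [1/4, 1/5]])             # for a1 = x, b1 = t, a2 = x^2, b2 = t^2
fvec = np.array([1/2, 1/3])            # f_i = ∫_0^1 b_i(t) dt
C = np.linalg.solve(np.eye(2) - lam * K, fvec)

t = np.linspace(0.0, 1.0, 4001)
y = 1 + lam * (C[0] * t + C[1] * t**2)
max_err = 0.0
for x in (0.3, 0.8):
    integrand = (x * t + x**2 * t**2) * y
    integral = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(t)) / 2)
    lhs = 1 + lam * (C[0] * x + C[1] * x**2)
    max_err = max(max_err, abs(lhs - (1 + lam * integral)))
assert max_err < 1e-6
```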
Consider the symmetric Fredholm integral equation of the first kind

f(s) = ∫ k(s, t) h(t) dt … (1)

Expanding f(s) in the orthonormal eigenfunctions Φn of the kernel,

f(s) = Σ_{n=1}^∞ fn Φn(s),  fn = (f, Φn)

The Fourier coefficients fn of the function f(s) are related to the Fourier coefficients hn of the function h(s) by the relations

hn = (h, Φn),  fn = hn/λn … (2)
A function g(t) with ∫_a^b |g(t)|² dt < ∞ is called square-integrable (an L2-function).

L2-kernel
A kernel K(s, t) is an L2-kernel if
a. ∫_a^b ∫_a^b |K(s, t)|² ds dt < ∞ over the square a ≤ s ≤ b, a ≤ t ≤ b;
b. for each value of s, ∫_a^b |K(s, t)|² dt < ∞;
c. for each value of t, ∫_a^b |K(s, t)|² ds < ∞.
Proof
The Fourier coefficients of the function f(s) with respect to the orthonormal system {Φn(s)} are

fn = (f, Φn) = (Kh, Φn) = (h, KΦn) = λn^{−1} hn

taking into account the relation λn KΦn = Φn and the symmetry of K. Thus the Fourier series for f(s) is

f(s) ∼ Σ_{n=1}^∞ fn Φn(s) = Σ_{n=1}^∞ (hn/λn) Φn(s) … (3)
Φk ( s ) Φk ( s )
2 2
∞ n+ p n+ p
∑h
n =1
k
λk
≤ ∑n ∑
k = n +1
2
k
k = n +1 λk2
1
n+ p ∞ Φ 2k ( s )
≤ ∑
k = n +1
hk2 ∑
k =1 λk2
…( 4)
Applying Bessel’s inequality
Φn ( s )
2
∞
∑ ≤ ∫ K ( s, t ) dt ≤ c12
2
n =1 λ
2
n
By (6),

‖f(s) − ψn(s)‖² = ‖K h^(n+1)‖² = (K h^(n+1), K h^(n+1)) = (h^(n+1), K² h^(n+1)) … (7)

where h^(n+1) = h − Σ_{k=1}^{n} hk Φk and the symmetry of K has been used. On the orthogonal complement of Φ1, …, Φn the Rayleigh quotient (h, K²h)/(h, h) attains its maximum 1/λ²_{n+1}, so

‖f(s) − ψn(s)‖² ≤ ‖h‖²/λ²_{n+1} … (8)

Since λ_{n+1} → ∞, ‖f(s) − ψn(s)‖ → 0 as n → ∞. By the relation (triangle inequality)

‖f − ψ‖ ≤ ‖f − ψn‖ + ‖ψn − ψ‖ … (9)

where ψ is the limit of the series with partial sums ψn, we prove that f = ψ. From the above, ‖f − ψn‖ → 0 as n → ∞. Since the series (3) converges uniformly, for every ε > 0

|ψn(s) − ψ(s)| < ε

for sufficiently large n. Therefore

‖ψn − ψ‖ < ε (b − a)^{1/2}

and f = ψ follows.
1
By definition
km ( s, t ) = ∫ k ( s, t ) km −1 ( x, t ) dx m=2,3 … (10 )
Φ k ( s ) of K ( s, t )
we have
ak ( t ) ∫ K m ( s, t ) Φ k ( s ) ds = λk−1Φ k ( t )
∞
k ( s, t ) = ∑ λk− m Φ k ( t ) … (11)
k =1
∑λ
k =1
−m
k = ∫ K m ( s, s ) ds = Am (12 )
∑λ
n =1
−1
n … (13)
is convergent and
∞ Φ n ( s ) Φ ∗n ( t )
k ( s, t ) = ∑
n =1 λn
the series is uniformly and absolutely convergent.
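The trace formula (12) can be illustrated (a numerical sketch) with the kernel of Example 4, k(s, t) = s(1 − t) for s < t and t(1 − s) for s > t, whose characteristic values are λn = (nπ)²; then A1 = Σ λn^{−1} = 1/6 and A2 = Σ λn^{−2} = 1/90:

```python
import numpy as np

# A_1 = ∫ k(s,s) ds and A_2 = ∫ K_2(s,s) ds = ∫∫ k(s,t)^2 ds dt (by symmetry)
# for k(s,t) = s(1-t), s < t; t(1-s), s > t, with λ_n = (nπ)^2:
#   Σ (nπ)^-2 = 1/6,   Σ (nπ)^-4 = 1/90.
n = 801
s = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))
w[0] = w[-1] = 0.5 / (n - 1)                 # trapezoid weights
S, T = np.meshgrid(s, s, indexing="ij")
K = np.where(S < T, S * (1 - T), T * (1 - S))

A1 = float(np.sum(w * np.diag(K)))           # k(s,s) = s(1-s)
A2 = float(np.sum((w[:, None] * w[None, :]) * K**2))
assert abs(A1 - 1/6) < 1e-4
assert abs(A2 - 1/90) < 1e-4
```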
SOLVED QUESTIONS
Solution
So the answer is the inverse Laplace transform of the right hand side.
________________________________________________________
Solution
In this problem
Since
The solution is
Solve for .
Fact:
Solve
Approximate
All of the integrals on the right side are readily evaluated. For example,
retaining only terms through , we get
Equating coefficients for each power of on the left and right sides gives a system of equations for the coefficients. Upon solving that system, we get .
Repeating the calculation, this time including terms through , gives
and leaves the lower-order coefficients essentially
unchanged, evidence that the above approximation for is quite
good for .
Find lambda: if
If , then
Therefore, or .
Find lambda: if
If , then
Therefore, .
Find the value of lambda for which the homogeneous Fredholm integral
The right side is a constant; therefore, the left side must also be a
constant. Thus , so .
_____________________________________________________________
solution Solve
solution Solve
solution Solve
equation:
First, integrate both sides from x to 1 (so that the boundary condition
y'(1) = 0 can be applied). This gives
integral:
This same region, when the order of integration is reversed, looks like
this:
We see that the region must be split into two pieces, and a separate
integral written for each piece:
where
Solve:
Let .
The last integrand is odd and the limits of integration are a multiple of its
period, so the integral equals 0.
From
the answer is
_____________________________________________________________
Solve:
Therefore
_____________________________________________________________
Solve:
The solution is