Ghorai 1
Lecture I
Introduction, concept of solutions, application
If the number of unknown functions is one, then it is called a simple or scalar ODE. Otherwise,
we have a system of ODEs.
Note: ODEs are distinguished from partial differential equations (PDEs), which contain partial derivatives of functions of more than one variable.
From now on, we shall mostly deal with simple or scalar ODEs.
Definition 3. The highest derivative that appears in an ODE is the order of that ODE.
If the order of the ODE is small, then we shall use primes for the derivatives. For example, a second order ODE is written as
F(x, y, y', y'') = 0.
then the given ODE is linear. If such representation is not possible, then we say that
the given ODE is nonlinear.
If the function F (x) is zero, then (2) is called linear homogeneous ODE. If the func-
tions a0 (x), a1 (x), · · · , an (x) are constants, then (2) is called linear ODE with constant
coefficients. Similarly, if the functions a0 (x), a1 (x), · · · , an (x) are constants and F (x)
is zero, then (2) is called linear homogeneous ODE with constant coefficients.
Example 2. A pendulum is released from rest when the supporting string is at angle α to the vertical. If the air resistance is negligible, find the ODE describing the motion of the pendulum.
Solution: Let s be the arc length measured from the lowest point as shown in Figure 1. If l is the length of the string, then s = lθ. Now the velocity v and acceleration a are given by
v = ds/dt = l dθ/dt,  a = d^2 s/dt^2 = l d^2 θ/dt^2.
[Figure 1: pendulum of length l released from angle α; θ is the angle from the vertical, s the arc length, mg the weight.]
d^2 θ/dt^2 + λ sin θ = 0,   (3)
where λ = g/l. Note that (3) is of second order and nonlinear. Usually, the nonlinear
ODEs are very difficult to solve analytically.
Example 3. Consider a pond in which only two types of fishes, A and B, live. Fish A feeds on phytoplankton whereas fish B survives by eating A. We are interested in modeling the population growth of both fishes. A model for this kind of predator-prey interaction is popularly known as the Lotka-Volterra system.
Solution: In the absence of B, fish population A will increase, and the rate of increase is proportional to the number density of A. Similarly, in the absence of A, fish population B will decrease, and the rate of decrease is proportional to the number density of B. When both are present, A will decrease and B will increase. This rate of change, as a first
[C^n(I) denotes the set of functions having derivatives up to order n continuous in the interval I.]
In general, we are interested in knowing whether a given ODE, under certain circumstances, has a unique solution or not. Usually, this question can be answered with some extra conditions such as initial conditions. Thus, we consider the so-called initial value problem (IVP), which takes the following form for a first order ODE:
y'(x) = f(x, y(x)),  x ∈ I,
y(x0) = y0,  x0 ∈ I, y0 ∈ J.   (4)
Note that here, the notation θ(0) means θ(t = 0), and similarly for θ'(0).
(ii) Consider y'' - 4y' + 4y = 0. For this, y(x) = (C1 + C2 x)e^{2x} is the general solution, whereas yp(x) = xe^{2x} is a particular solution obtained by choosing C1 = 0, C2 = 1. These conditions on C1 and C2 are equivalent to the initial conditions y(0) = 0, y'(0) = 1. This particular solution is valid for all x.
(iii) Clearly y = 1/(C - x) is a general solution of y' = y^2. Now y = 1/(1 - x) is a particular solution to the IVP y' = y^2, y(0) = 1, which is valid in (-∞, 1). But y = -1/(1 + x) is a particular solution to the IVP y' = y^2, y(0) = -1, which is valid in (-1, ∞).
Why is the solution y = -1/(1 + x) of y' = y^2, y(0) = -1 not valid in (-∞, -1)?
Comment: An ODE may sometimes have an additional solution that cannot be obtained from the general solution. Such a solution is called a singular solution.
Example 3: y'^2 - xy' + y = 0 has the general solution y = cx - c^2 (verify!). It also has a solution ys(x) = x^2/4 that cannot be obtained from the general solution by choosing specific values of c. Hence, the latter solution is a singular solution.
Comment: The solutions y(x) of an ODE often are given implicitly. For example,
xy + ln y = 1 is an implicit solution of the ODE (xy + 1)y 0 + y 2 = 0.
Application: A stone of mass m is dropped from a height H above the ground. Suppose the air resistance is proportional to the speed. Derive the ODE that describes the motion.
Solution: Suppose the distance y to the stone is measured from the ground. Now the velocity v = dy/dt is positive upward. The air resistance acts in the direction opposite to the motion. Hence,
m d^2 y/dt^2 = -mg - k dy/dt,  y(0) = H, y'(0) = 0.   (6)
Note that gravity and the air resistance both oppose the motion if the stone is traveling up. On the other hand, gravity aids and resistance opposes the motion if the stone is traveling down. (What will be the form of the above IVP if the distance y is measured from height H in the downward direction?)
Clearly, (6) is a linear second order equation with constant coefficients. This can be solved using the methods to be developed later. Using v = dy/dt, (6) becomes a first order ODE:
dv/dt + αv = -g,  v(0) = 0,   (7)
where α = k/m. This equation can be solved easily for v. Once v is found, y is obtained from
dy/dt = v,  y(0) = H.
Even without solving, we can derive some important information. For example, the stone will attain a terminal velocity vt = -g/α (why the negative sign?) provided H is large enough. (Is it possible for a stone to attain terminal velocity when it is moving up?)
Comment: Care must be taken if the air resistance is proportional to the square of the speed. Using the same coordinate system, the governing equation becomes
m d^2 y/dt^2 = -mg + k (dy/dt)^2,  y(0) = H, y'(0) = 0.   (8)
The terminal velocity in this case becomes vt = -√(g/α).
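The behaviour of (7) is easy to check numerically. A minimal sketch (the parameter values g = 9.8 and α = 0.5 are assumed purely for illustration): the closed-form solution of (7) is v(t) = -(g/α)(1 - e^{-αt}), which satisfies the ODE and tends to the terminal velocity -g/α.

```python
import math

# A minimal sketch (the values g = 9.8 and alpha = 0.5 are assumed for
# illustration): the IVP  dv/dt + alpha*v = -g,  v(0) = 0  has the
# closed-form solution v(t) = -(g/alpha)*(1 - exp(-alpha*t)).

g = 9.8        # gravitational acceleration
alpha = 0.5    # alpha = k/m (assumed value)

def v(t):
    """Closed-form solution of dv/dt + alpha*v = -g, v(0) = 0."""
    return -(g / alpha) * (1.0 - math.exp(-alpha * t))

# Residual of the ODE, dv/dt + alpha*v + g, via a central difference:
h = 1e-6
t = 2.0
dvdt = (v(t + h) - v(t - h)) / (2 * h)
print(abs(dvdt + alpha * v(t) + g))   # ~0

# For large t the velocity approaches the terminal value -g/alpha:
print(abs(v(50.0) - (-g / alpha)))    # ~0
```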
Lecture II
Geometrical interpretation
From calculus, we know that y' is the slope of the curve y(x) at x. Hence, if (1) has a solution curve passing through the point (x0, y0), then the slope of that curve at (x0, y0) is f(x0, y0). Thus, the value of f(x, y) at each point gives the slope of the solution curve passing through that point.
[Figure 1: lineal elements at the points (0, -1), (1, 1) and (2, 1) for y' = y - x. Figure 2: the isoclines f(x, y) = k for k = -3, -2, . . . , 3, with lineal elements drawn along them.]
Given the differential equation (1), we indicate the slope f(x, y) by short line segments called lineal elements. For example, lineal elements at three points, for y' = y - x, are shown in Figure 1. The collection of lineal elements is also called a direction or slope field.
To draw an approximate solution curve through an arbitrary point, we need to cover the whole domain with lineal elements. This process is very cumbersome and time consuming. Hence, we use the following method. On the domain we trace the curves f(x, y) = k. The curve f(x, y) = k, on which the slope y' is constant and equal to k, is called an isocline. If k = 0, then it has a special name, the nullcline. Using isoclines, we can draw lineal elements in the domain of f(x, y). These can be seen in Figure 2 for y' = y - x.
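For y' = y - x this is easy to verify directly: along the isocline y = x + k every lineal element has the same slope k, and k = 0 gives the nullcline. A minimal numerical sketch:

```python
# A minimal sketch: for y' = f(x, y) with f(x, y) = y - x, the isoclines
# f(x, y) = k are the straight lines y = x + k, and every lineal element
# along one isocline has the same slope k (k = 0 gives the nullcline).

def f(x, y):
    return y - x

for k in (-2, 0, 1):
    slopes = [f(x, x + k) for x in (-3.0, -1.0, 0.0, 2.5)]
    print(k, slopes)   # every slope equals k
```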
Since f(x, y) is continuous, we can add more arrows (if needed) between two nullclines. To draw a solution curve, we follow the lineal elements, since these are the directions of the solution curves.
VectorPlot[{1, y - x}, {x, -3, 3}, {y, -3, 3}, VectorPoints -> 8,
VectorStyle -> Arrowheads[0]]
produces Figure 5.
Exercise: Draw the direction fields of the following differential equations:
(i) y' = 2x (ii) xy' = 2y
Lecture III
Solution of first order equations
1 Separable equations
These are equations of the form
y' = f(x)g(y)
ln |y| - ln |1 + y| = ln |x| + C
When we write the solution without the modulus signs, it is (implicitly) assumed that x > 0, y > 0. This is acceptable for problems in which the solution domain is not given explicitly. But for some problems, the modulus signs are necessary. For example, consider the following IVP:
xy' = y + y^2,  y(-1) = 2.
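As a quick check (a sketch, taking the IVP as xy' = y + y^2, y(-1) = 2, with the signs reconstructed): the implicit solution gives y/(1 + y) = Cx, the initial condition fixes C = -2/3, and hence y(x) = -2x/(3 + 2x).

```python
# A sketch, taking the IVP as x*y' = y + y**2, y(-1) = 2 (signs
# reconstructed): y/(1 + y) = C*x with C = -2/3 gives the explicit solution.

def y(x):
    return -2.0 * x / (3.0 + 2.0 * x)

print(y(-1.0))   # 2.0, the initial condition

# ODE residual x*y' - y - y**2 via a central difference:
h = 1e-6
x = -1.2
dy = (y(x + h) - y(x - h)) / (2 * h)
print(abs(x * dy - y(x) - y(x) ** 2))   # ~0
```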
v' = v^2 + 1  ⇒  tan^{-1} v = x + C  ⇒  x + y = tan(x + C)
3 Exact equation
A first order ODE of the form
M(x, y) dx + N(x, y) dy = 0   (1)
is exact if there exists a function u(x, y) such that
M = ∂u/∂x  and  N = ∂u/∂y.
Then the above ODE can be written as du = 0 and hence the solution becomes u = C.
Theorem 1. Let M and N be defined and continuously differentiable on a rectangle R = {(x, y) : |x - x0| < a, |y - y0| < b}. Then (1) is exact if and only if ∂M/∂y = ∂N/∂x for all (x, y) ∈ R.
Proof: We shall only prove the necessary part. Assume that (1) is exact. Then there exists a function u(x, y) such that
M = ∂u/∂x  and  N = ∂u/∂y.
Since M and N have continuous first partial derivatives, we have
∂M/∂y = ∂^2 u/∂y∂x  and  ∂N/∂x = ∂^2 u/∂x∂y.
Now continuity of the second partial derivatives implies ∂M/∂y = ∂N/∂x.
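The criterion ∂M/∂y = ∂N/∂x is easy to test numerically with finite differences. A sketch with illustrative equations not taken from the text: (2xy) dx + (x^2 + cos y) dy = 0 is exact (u = x^2 y + sin y), while y dx - x dy = 0 is not.

```python
import math

# Illustrative equations (not from the text):
#   (2*x*y) dx + (x**2 + cos y) dy = 0  is exact, with u = x**2*y + sin(y);
#   y dx - x dy = 0                     is not exact.

def is_exact(M, N, x, y, h=1e-6):
    """Finite-difference test of the criterion M_y = N_x at one point."""
    M_y = (M(x, y + h) - M(x, y - h)) / (2 * h)
    N_x = (N(x + h, y) - N(x - h, y)) / (2 * h)
    return abs(M_y - N_x) < 1e-5

print(is_exact(lambda x, y: 2 * x * y,
               lambda x, y: x ** 2 + math.cos(y), 1.3, 0.7))   # True
print(is_exact(lambda x, y: y,
               lambda x, y: -x, 1.0, 1.0))                     # False
```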
µM dx + µN dy = 0
is exact. Hence
∂(µM)/∂y = ∂(µN)/∂x.
Comment: If an equation has an integrating factor, then it has infinitely many integrating factors.
Proof: Let µ be an integrating factor. Then
µM dx + µN dy = du
Thus,
µg(u)M dx + µg(u)N dy = dv,  where v = ∫^u g(s) ds
(x dy - y dx)/x^2 = 0  ⇒  d(y/x) = 0
Also, 1/(xy) is an integrating factor since
(x dy - y dx)/(xy) = 0  ⇒  d(ln(y/x)) = 0
Similarly it can be shown that 1/y^2, 1/(x^2 + y^2), etc. are integrating factors.
Now if we divide this by xy, then the last term remains a differential and the first term also becomes a differential:
2x dx + d(xy)/(xy) = 0  ⇒  d(x^2 + ln(xy)) = 0  ⇒  x^2 + ln(xy) = C
Lecture IV
Linear equations, Bernoulli equations, Orthogonal trajectories, Oblique trajectories
1 Linear equations
A first order linear equation is of the form
y' + p(x)y = r(x)   (1)
This can be written as
(p(x)y - r(x)) dx + dy = 0.
Here M = p(x)y - r(x) and N = 1. Now
(1/N)(∂M/∂y - ∂N/∂x) = p(x)
Hence,
µ(x) = e^{∫ p(x) dx}
is an integrating factor. Multiplying (1) by µ(x) we get
d/dx ( e^{∫ p(x) dx} y ) = r(x) e^{∫ p(x) dx}
Integrating we get
e^{∫ p(x) dx} y = ∫^x r(s) e^{∫ p(s) ds} ds + C,
which on simplification gives
y = e^{-∫ p(x) dx} ( C + ∫^x r(s) e^{∫ p(s) ds} ds )
Comment: The usual notation dy/dx implies that x is the independent variable and y is the dependent variable. In trying to solve a first order ODE, it is sometimes helpful to reverse the roles of x and y, and work on the resulting equation. Hence, the resulting equation
dx/dy + p(y)x = r(y)
is also a linear equation.
Example 2. Solve (4y^3 - 2xy)y' = y^2, y(2) = 1.
Solution: We write this as
dx/dy + (2/y)x = 4y
Clearly, y^2 is an integrating factor. Hence,
xy^2 = ∫^y 4y^3 dy + C  ⇒  xy^2 = y^4 + C
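The constant follows from y(2) = 1: here x = 2 when y = 1, so C = 1 and x(y) = y^2 + 1/y^2. A quick numerical check of this solution against the reversed-role equation dx/dy + (2/y)x = 4y (a sketch):

```python
# A sketch: y(2) = 1 means x = 2 when y = 1, so C = 1 in x*y**2 = y**4 + C,
# i.e. x(y) = y**2 + 1/y**2. Check it against dx/dy + (2/y)*x = 4*y.

def x_of_y(y):
    return y ** 2 + 1.0 / y ** 2

print(x_of_y(1.0))   # 2.0, the initial condition

h = 1e-6
y = 1.5
dxdy = (x_of_y(y + h) - x_of_y(y - h)) / (2 * h)
print(abs(dxdy + (2.0 / y) * x_of_y(y) - 4.0 * y))   # ~0
```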
2 Bernoulli’s equation
This is of the form
y' + p(x)y = r(x)y^a,   (2)
where a is a real number. Equation (2) is linear for a = 0 or 1. Otherwise, it is nonlinear and can be reduced to a linear form by substituting z = y^{1-a}.
Example 3. Solve y' - y/x = -y^3
Substituting z = y^{-2}, we find
z' + 2z/x = 2
zx^2 = 2x^3/3 + C
Replacing z we find
3x^2/y^2 - 2x^3 = C
F(x, y, y', y'') = 0
In some cases, by making a substitution, we can reduce this 2nd order ODE to a 1st order ODE. A few cases are described below.
Case I: If the independent variable is missing, then we have F(y, y', y'') = 0. If we substitute w = y', then y'' = w dw/dy. Hence, the ODE becomes F(y, w, w dw/dy) = 0, which is a 1st order ODE.
Example 4. Solve 2y'' - y'^2 - 4 = 0
Example 5. Solve xy'' + 2y' = 0
4 Orthogonal trajectories
Definition 1. Two families of curves are said to be orthogonal trajectories of each other if each curve in either family is orthogonal (whenever they intersect) to every curve in the other family. In case the two families are identical, we say that the family is self-orthogonal.
[Figure 1: at a point of intersection, a curve of the first family has slope dy/dx and the orthogonal curve has slope -1/(dy/dx).]
Now eliminate c between (3) and (4) to find the differential equation
H(x, y, y') = 0   (5)
corresponding to the first family. As seen in Figure 1, the differential equation for the other family is obtained by replacing y' by -1/y'. Hence, the differential equation of the orthogonal trajectories is
H(x, y, -1/y') = 0   (6)
General solution of (6) gives the required orthogonal trajectories.
Example 6. Find the orthogonal trajectories of the family of straight lines through the origin.
Solution: The family is y = mx. Eliminating m gives y' = y/x; replacing y' by -1/y', the orthogonal trajectories satisfy
x + yy' = 0
Integrating we find
x^2 + y^2 = C,
which is a family of circles centred at the origin.
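A quick numerical check of orthogonality (a sketch): at any point off the axes, the line through the origin has slope y/x and the circle x^2 + y^2 = C through the same point has slope -x/y, so the slopes multiply to -1.

```python
# A sketch: at a point (x, y) with x, y != 0, the line y = m*x through the
# origin has slope y/x, and the circle x**2 + y**2 = C through the same
# point has slope -x/y (from x + y*y' = 0); the slopes multiply to -1.

for (x, y) in [(1.0, 2.0), (-3.0, 0.5), (0.7, -0.7)]:
    product = (y / x) * (-x / y)
    print(product)   # -1.0 up to rounding
```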
[Figure: (a) a curve in polar coordinates making angle ψ with the radial direction at distance r, angle θ; (b) two curves through the same point with angles ψ1 and ψ2.]
Consider the curve with angle ψ1. The curve that intersects it orthogonally has angle ψ2 = ψ1 + π/2. Now
tan ψ2 = -1/tan ψ1
P dr + Q dθ = 0.
Thus we find r dθ/dr = -P r/Q. Hence, the differential equation of the orthogonal family is given by
r dθ/dr = Q/(P r)
or
Q dr - r^2 P dθ = 0
The general solution of the last equation gives the orthogonal trajectories.
Example 7. Find the orthogonal trajectories of the family of straight lines through the origin.
Solution: In polar form the family is θ = A, with differential equation dθ = 0. The orthogonal family then satisfies
dr = 0
Integrating we find
r = C,
which is a family of circles centred at the origin.
[Figure: two curves through a point with slopes m1 and m2.]
Now eliminate c between (8) and (9) to find the differential equation
H(x, y, y') = 0.   (10)
H(x, y, m1 ) = 0, (11)
y0 = 1
Lecture V
Picard's existence and uniqueness theorem, Picard's iteration
where p(x) and r(x) are defined and continuous in the interval a ≤ x ≤ b. Here f(x, y) = -p(x)y + r(x). If L = max_{a≤x≤b} |p(x)|, then
Clearly f does not satisfy a Lipschitz condition near the origin. But still it has a unique solution. Can you prove this? [Hint: Let y1 and y2 be two solutions and consider
z(x) = ( √(y1(x)) - √(y2(x)) )^2.]
Comment: The existence and uniqueness theorems are also valid for certain systems of first order equations. These theorems are also applicable to certain higher order ODEs, since a higher order ODE can be reduced to a system of first order ODEs.
y' = xy sin y,  y(0) = 2.
Here f and ∂f/∂y are continuous in a closed rectangle about x0 = 0 and y0 = 2. Hence, there exists a unique solution in a neighbourhood of (0, 2).
y' = 1 + y^2,  y(0) = 0.
1/2. Hence, the theorems guarantee existence of a unique solution in |x| ≤ 1/2, which is much smaller than the original interval |x| ≤ 100.
Since the above equation is separable, we can solve it exactly and find y(x) = tan(x). This solution is valid only in (-π/2, π/2), which is also much smaller than [-100, 100] but nevertheless bigger than that predicted by the existence and uniqueness theorems.
y' = x|y|,  y(1) = 0.
Since f is continuous and satisfies a Lipschitz condition in a neighbourhood of (1, 0), the IVP has a unique solution around x = 1.
y' = y^{1/3} + x,  y(1) = 0
Now
|f(x, y1) - f(x, y2)| = |y1^{1/3} - y2^{1/3}| = |y1 - y2| / |y1^{2/3} + y1^{1/3} y2^{1/3} + y2^{2/3}|
Suppose we take y2 = 0. Then
|f(x, y1) - f(x, 0)| = |y1 - 0| / |y1|^{2/3}
Now we can take y1 very close to zero. Then 1/|y1|^{2/3} becomes unbounded. Hence, the relation
|f(x, y1) - f(x, 0)| ≤ L|y1 - 0|
does not always hold in a region about (1, 0).
Since f does not satisfy a Lipschitz condition, we cannot say whether a unique solution exists or does not exist (remember the existence and uniqueness conditions are sufficient but not necessary).
On the other hand,
y' = y^{1/3} + x,  y(1) = 1
has a unique solution around (1, 1).
Example 5. Discuss the existence of a unique solution for the IVP
y' = 2y/x,  y(x0) = y0
Solution: Here f(x, y) = 2y/x and ∂f/∂y = 2/x. Clearly both of these exist and are bounded around (x0, y0) if x0 ≠ 0. Hence, a unique solution exists in an interval about x0 for x0 ≠ 0.
For x0 = 0, nothing can be said from the existence and uniqueness theorem. Fortunately, we can solve the actual problem and find y = Ax^2 to be the general solution. When x0 = 0, there exists no solution when y0 ≠ 0. If y0 = 0, then we have an infinite number of solutions y = αx^2 (α any real number) that satisfy the IVP y' = 2y/x, y(0) = 0.
A rough approximation to the solution y(x) is given by the function y0(x) = y0, which is simply a horizontal line through (x0, y0) (don't confuse the function y0(x) with the constant y0). We insert this into the RHS of (3) in order to obtain a (perhaps) better approximate solution, say y1(x). Thus,
y1(x) = y0 + ∫_{x0}^{x} f(t, y0(t)) dt = y0 + ∫_{x0}^{x} f(t, y0) dt
The next step is to use this y1(x) to generate another (perhaps even better) approximate solution y2(x):
y2(x) = y0 + ∫_{x0}^{x} f(t, y1(t)) dt
At the n-th stage we find
yn(x) = y0 + ∫_{x0}^{x} f(t, y_{n-1}(t)) dt
Theorem 3. If the function f(x, y) satisfies the conditions of the existence and uniqueness theorem for the IVP (1), then the successive approximations yn(x) converge to the unique solution y(x) of the IVP (1).
Example 6. Apply Picard iteration for the IVP
y' = 2x(1 - y),  y(0) = 2.
Solution: Here y0(x) = 2. Now
y1(x) = 2 + ∫_0^x 2t(1 - 2) dt = 2 - x^2
y2(x) = 2 + ∫_0^x 2t(t^2 - 1) dt = 2 - x^2 + x^4/2
y3(x) = 2 + ∫_0^x 2t(t^2 - t^4/2 - 1) dt = 2 - x^2 + x^4/2 - x^6/3!
y4(x) = 2 + ∫_0^x 2t(t^6/3! - t^4/2 + t^2 - 1) dt = 2 - x^2 + x^4/2 - x^6/3! + x^8/4!
By induction, it can be shown that
yn(x) = 2 - x^2 + x^4/2! - x^6/3! + · · · + (-1)^n x^{2n}/n!
Hence, yn(x) → 1 + e^{-x^2} as n → ∞. Now y(x) = 1 + e^{-x^2} is the exact solution of the given IVP. Thus, the Picard iterates converge to the unique solution of the given IVP.
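Picard iteration can also be carried out numerically by replacing the integral with a quadrature rule. A minimal sketch for this IVP (the grid size and iteration count are arbitrary choices):

```python
import math

# A minimal numerical sketch of Picard iteration for y' = 2*x*(1 - y),
# y(0) = 2, on [0, 1]; the integral in each iterate is replaced by the
# trapezoidal rule (grid size and iteration count are arbitrary choices).

N = 400
xs = [i / N for i in range(N + 1)]

def picard_step(y):
    """One iterate: y_new(x) = 2 + integral_0^x 2*t*(1 - y(t)) dt."""
    f = [2 * t * (1 - yt) for t, yt in zip(xs, y)]
    out = [2.0]
    s = 0.0
    for i in range(N):
        s += 0.5 * (f[i] + f[i + 1]) / N   # trapezoid on a uniform grid
        out.append(2.0 + s)
    return out

y = [2.0] * (N + 1)        # y0(x) = 2
for _ in range(8):
    y = picard_step(y)

print(abs(y[-1] - (1 + math.exp(-1.0))))   # small: iterates approach 1 + e^{-x^2}
```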
Comment: Picard iteration has more theoretical value than practical value. It is used in the proof of the existence and uniqueness theorem. On the other hand, finding an approximate solution using this method is almost impractical for a complicated function f(x, y).
Lecture VI
Numerical methods: Euler’s method, improved Euler’s method
Suppose x0 = a and we need to find the solution at x = b > a. We divide the interval [a = x0, b] into N subintervals of size h = (b - a)/N. Thus, the uniform mesh points are
a = x0 < x1 < x2 < · · · < xN.
[Figure 2: the graph of f(x, y(x)); the integral over [xn, x_{n+1}] is the shaded area.]
The integral on the RHS is the shaded area shown in Figure 2. But since y(t) is
unknown, we have to approximate the integral in the RHS of (3).
Now from the mean value theorem of integral calculus, (3) can be written as
y(x_{n+1}) = y(xn) + h f(ζ, y(ζ)),  ζ ∈ (xn, x_{n+1})   (4)
Now f(ζ, y(ζ)) is the slope of the solution curve y = y(x) at the unknown point x = ζ. Further, y itself is also unknown. We need to approximate the slope using numerical methods.
1 Euler’s method
We assume that the slope f varies so little in [xn, x_{n+1}] that we can approximate it by its value at x = xn, i.e. f(ζ, y(ζ)) ≈ f(xn, y(xn)). Hence,
y(x_{n+1}) ≈ y(xn) + h f(xn, y(xn)).
But the exact value y(xn) is not known except y(x0) = y0. Hence, if we assume y(xn) ≈ yn, then f(xn, y(xn)) ≈ f(xn, yn). Finally Euler's method becomes
y_{n+1} = yn + h f(xn, yn).
Geometrical interpretation
The curve between x0 and x1 is approximated by the straight line that passes through (x0, y0) having slope f(x0, y0). Similarly, the curve between x1 and x2 is approximated by the straight line that passes through (x1, y1) having slope f(x1, y1). Obviously, at each step we are making an error (shown by the red segment in Figure 3). The error between the exact and computed solution at the n-th step is |y(xn) - yn|.
[Figure 3: the exact solution and the Euler polygon through x0, x1, x2.]
Geometrical interpretation
First, Euler's method is used to predict the value y1* at x = x1. The slope at x = x1 is calculated from f(x1, y1*). Then we obtain the average of the slopes at x = x0 and at x = x1. Now the approximate solution is the straight line that passes through (x0, y0) with slope equal to this average value. This process is repeated for the next step [x1, x2] and so on.
[Figure 4: one step of the improved Euler method, showing the exact solution, the predictor y1*, the slopes f(x0, y0) and f(x1, y1*), and the average slope used to obtain y1 over the step of size h.]
Comment 1: The improved Euler's method has many different names, such as the modified Euler method, Heun's method, and the Runge-Kutta method of order 2. It also belongs to the category of predictor-corrector methods.
Comment 2: It can be proved that the error of Euler's method is proportional to h and that of the improved Euler's method to h^2, where h is the step size. Hence, the improved Euler's method has better accuracy than Euler's method.
Example 1. Apply (i) Euler's method and (ii) the improved Euler's method to compute y(x) at x = 0.4 with stepsize h = 0.2 for the initial value problem:
y' = 2x(1 - y),  y(0) = 2.
Solution: The values are rounded to three decimal places in the tables.
n   xn    yn      y(xn)   en      f(xn, yn)
0   0     2       2       0       0
1   0.2   2       1.961   0.039   -0.4
2   0.4   1.92    1.852   0.068

n   xn    yn      y(xn)   en      f(xn, yn)   y*_{n+1}   f(x_{n+1}, y*_{n+1})
0   0     2       2       0       0           2          -0.4
1   0.2   1.96    1.961   0.001   -0.384      1.883      -0.706
2   0.4   1.851   1.852   0.001
It is clear that the improved Euler's method has better accuracy than Euler's method, as expected. Also, decreasing the step size h improves the accuracy of both methods.
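Both methods are short to implement. A minimal sketch for the IVP above (with this arithmetic the improved Euler value at x = 0.4 comes out near 1.851):

```python
import math

# A minimal sketch of both methods for y' = 2*x*(1 - y), y(0) = 2, whose
# exact solution is y(x) = 1 + exp(-x**2); step size h = 0.2, two steps.

def f(x, y):
    return 2 * x * (1 - y)

def euler(x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)                              # slope at left endpoint
        x += h
    return y

def improved_euler(x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        ystar = y + h * f(x, y)                       # Euler predictor
        y += 0.5 * h * (f(x, y) + f(x + h, ystar))    # corrector: average slope
        x += h
    return y

exact = 1 + math.exp(-0.16)                           # y(0.4)
e1 = euler(0.0, 2.0, 0.2, 2)
e2 = improved_euler(0.0, 2.0, 0.2, 2)
print(e1)                                             # approximately 1.92
print(abs(e2 - exact) < abs(e1 - exact))              # True
```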
Lecture VII
Second order linear ODE, fundamental solutions, reduction of order
Theorem 2. Let y1 (x) and y2 (x) be two solutions of (2). Then y(x) = c1 y1 (x)+c2 y2 (x)
(c1 , c2 arbitrary constants) is also a solution of (2).
Proof: Trivial
Definition 1. Let two functions f and g be defined on I. If there exist constants a, b, not both zero, such that
af(x) + bg(x) = 0  ∀x ∈ I,
then f and g are linearly dependent (LD) on I; otherwise they are linearly independent (LI) on I.
Example 1.
(i) sin x, cos x, x ∈ (-∞, ∞) are LI.
(ii) x|x|, x^2, x ∈ (-∞, ∞) are LI.
(iii) x|x|, x^2, x ∈ (0, ∞) are LD.
Definition 2. Let f and g be two differentiable functions. Then
W(f, g) = f(x)g'(x) - g(x)f'(x)
is called the Wronskian of f and g.
Proof: Let y1, y2 be LD. Thus, there exist a, b, not both zero, such that
a y1(x) + b y2(x) = 0  ∀x ∈ I.   (3)
Differentiating, we also have
a y1'(x) + b y2'(x) = 0  ∀x ∈ I.   (4)
Now (3) and (4) can be viewed as linear homogeneous equations in the two unknowns a and b. Since the solution is nontrivial, the determinant must be zero. Thus W(y1, y2) = 0, ∀x ∈ I. In particular, W(y1, y2) is zero at x0 ∈ I.
Conversely, suppose W(y1, y2) = 0 at x0 ∈ I. Now consider
a y1(x0) + b y2(x0) = 0   (5)
and
a y1'(x0) + b y2'(x0) = 0   (6)
Now the determinant of the system of linear equations (in the unknowns a, b) of (5) and (6) is the Wronskian W(y1, y2) at x0. Since this is zero, we can find a nontrivial solution for a and b. Take these nontrivial a and b and form
y(x) = a y1(x) + b y2(x).
By (5) and (6), we find y(x0) = y'(x0) = 0. Hence, by the uniqueness theorem y(x) ≡ 0, i.e. for nontrivial a and b,
a y1(x) + b y2(x) = 0  ∀x ∈ I, so y1 and y2 are LD.
Proof: We proceed as in the converse part of the previous theorem to prove that y1 and y2 are LD. Then proceed as in the first part of the same theorem to prove that W(y1, y2) = 0, ∀x ∈ I.
Aliter: Since y1 and y2 are solutions of (2), we obtain
W' + p(x)W = 0,
where we have used the short notation W for W(y1, y2). Integrating, we find
W(x) = C e^{-∫ p(x) dx}
has a unique solution a = c1 and b = c2, since the determinant is nonzero. Let ζ(x) = c1 y1(x) + c2 y2(x). Then ζ(x0) = K0, ζ'(x0) = K1. But by the existence and uniqueness theorem, we have y(x) ≡ ζ(x) and thus
y(x) = c1 y1(x) + c2 y2(x).
Hence, y1 and y2 span the solution space. Thus, y1 and y2 form a basis of solutions for (2). Thus, a general solution y(x) of (2) can be written as
y(x) = A y1(x) + B y2(x),
where A and B are arbitrary constants. For an IVP, these constants take particular values to satisfy the initial conditions.
Existence of a basis: By the existence and uniqueness theorem, there exists a solution y1(x) of (2) with y1(x0) = 1, y1'(x0) = 0. Similarly, there exists a solution y2(x) of (2) with y2(x0) = 0, y2'(x0) = 1. Hence, W(y1, y2) = 1 ≠ 0 at x0. By the previous theorem, y1 and y2 form a basis of solutions for (2).
Example 3. y1(x) = sin x and y2(x) = cos x satisfy y'' + y = 0 and W(y1, y2) = -1 ≠ 0. Hence, sin x and cos x form a basis of solutions for y'' + y = 0. Thus, a general solution of y'' + y = 0 is y(x) = C1 sin x + C2 cos x.
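The Wronskian is also easy to evaluate numerically. A sketch using central differences for the derivatives; for y1 = sin x, y2 = cos x it works out to -1 at every x (nonzero, which is what matters):

```python
import math

# A sketch: the Wronskian W(f, g) = f*g' - g*f' evaluated with central
# differences; for f = sin, g = cos it equals -1 for every x.

def wronskian(f, g, x, h=1e-6):
    fp = (f(x + h) - f(x - h)) / (2 * h)
    gp = (g(x + h) - g(x - h)) / (2 * h)
    return f(x) * gp - g(x) * fp

for x in (0.0, 1.0, 2.5):
    print(wronskian(math.sin, math.cos, x))   # ~ -1
```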
If we know one nonzero solution y1(x) (by any method) of (9), then it is easy to find a second solution y2(x) which is independent of y1. Then y1 and y2 will form a basis of solutions.
We assume that y2(x) = v(x)y1(x), where v(x) is an unknown function. Since y2 is a solution, we substitute y2(x) = v(x)y1(x) into (9). Taking into account the fact that y1 is also a solution of (9), we find
y1 v'' + (2y1' + p y1) v' = 0.
Solution: Since at x = 0 the equation becomes singular, we solve the above for x ≠ 0. WLOG, we assume that x > 0. Clearly, y1(x) = e^{-x} is a solution. We write this equation as
y'' + (2 + 1/x) y' + ((x + 1)/x) y = 0.
Hence, p(x) = 2 + 1/x. Substituting y2(x) = v(x)y1(x) and solving we find
v(x) = ∫ (1/e^{-2x}) exp( -∫ (2 + 1/x) dx ) dx = ∫ (1/x) dx = ln x
Hence, y2(x) = e^{-x} ln x. Thus, the general solution is y(x) = e^{-x}(C1 + C2 ln x), x > 0.
What is the general solution for x < 0?
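A finite-difference check (a sketch) that y2(x) = e^{-x} ln x indeed satisfies the equation for x > 0:

```python
import math

# A finite-difference check (a sketch) that y2(x) = exp(-x)*ln(x) satisfies
#   y'' + (2 + 1/x) y' + ((x + 1)/x) y = 0   for x > 0.

def y2(x):
    return math.exp(-x) * math.log(x)

def residual(x, h=1e-5):
    d1 = (y2(x + h) - y2(x - h)) / (2 * h)
    d2 = (y2(x + h) - 2 * y2(x) + y2(x - h)) / h ** 2
    return d2 + (2 + 1 / x) * d1 + ((x + 1) / x) * y2(x)

for x in (0.5, 1.0, 3.0):
    print(abs(residual(x)))   # ~0 (within finite-difference error)
```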
Lecture VIII
Homogeneous linear ODE with constant coefficients
ay'' + by' + cy = 0,  x ∈ I,   (1)
where a, b, c are constants, then two independent solutions (i.e. basis) depend on the
quadratic equation
am^2 + bm + c = 0.   (2)
Equation (2) is called characteristic equation for (1).
Theorem 1. (i) If the roots of (2) are real and distinct, say m1 and m2, then two linearly independent (LI) solutions of (1) are e^{m1 x} and e^{m2 x}. Thus, the general solution to (1) is
y = C1 e^{m1 x} + C2 e^{m2 x}.
(ii) If the roots of (2) are real and equal, say m1 = m2 = m, then two LI solutions of (1) are e^{mx} and xe^{mx}. Thus, the general solution to (1) is
y = (C1 + C2 x)e^{mx}.
(iii) If the roots of (2) are complex conjugates, say m1 = α + iβ and m2 = α - iβ, then two real LI solutions of (1) are e^{αx} cos(βx) and e^{αx} sin(βx). Thus, the general solution to (1) is
y = e^{αx}(C1 cos(βx) + C2 sin(βx)).
Proof: For convenience (specially for higher order ODEs) (1) is written in the operator form L(y) = 0, where
L ≡ a d^2/dx^2 + b d/dx + c.
We also sometimes write L as
L ⌘ aD2 + bD + c,
y = C1 e^{m1 x} + C2 e^{m2 x}.
Example 1. Solve y'' - y' = 0
Example 3. Solve y'' - 2y' + 5y = 0
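The root classification can be automated with the quadratic formula. A minimal sketch (Example 3 is taken as y'' - 2y' + 5y = 0, with the sign reconstructed; its characteristic roots are 1 ± 2i, so the general solution is e^x(C1 cos 2x + C2 sin 2x)):

```python
import cmath

# A minimal sketch: roots of the characteristic equation a*m**2 + b*m + c = 0
# via the quadratic formula. Example 3 is taken as y'' - 2y' + 5y = 0 (sign
# reconstructed), whose roots are 1 +/- 2i, giving e^x (C1 cos 2x + C2 sin 2x).

def char_roots(a, b, c):
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

print(char_roots(1, -2, 5))   # roots 1 + 2i and 1 - 2i
print(char_roots(1, -4, 4))   # repeated root 2: basis e^{2x}, x e^{2x}
```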
a0 y^{(n)}(x) + a1 y^{(n-1)}(x) + a2 y^{(n-2)}(x) + · · · + a_{n-1} y'(x) + an y(x) = 0,  x ∈ I,   (4)
where the superscript (i) denotes the i-th derivative and all the ai are constants. As in the case of a 2nd order linear equation, the LI solutions of (4) depend on the characteristic equation
a0 m^n + a1 m^{n-1} + · · · + a_{n-1} m + an = 0.   (5)
Obviously, this equation has n roots. As in the case of 2nd order equation, the following
can be proved.
Theorem 2. The fundamental set of solutions B for (4) is obtained using the following two rules:
Rule 1: If a root m of (5) is real and repeated k times, then this root contributes k LI solutions e^{mx}, xe^{mx}, x^2 e^{mx}, · · · , x^{k-1} e^{mx} to B.
Rule 2: If a pair of roots m = α ± iβ of (5) is complex conjugate (β ≠ 0) and repeated k times each, then they contribute 2k LI solutions e^{αx} cos(βx), e^{αx} sin(βx), xe^{αx} cos(βx), xe^{αx} sin(βx), x^2 e^{αx} cos(βx), x^2 e^{αx} sin(βx), · · · , x^{k-1} e^{αx} cos(βx) and x^{k-1} e^{αx} sin(βx) to B.
Example 4. Solve y^{(5)}(x) + y^{(4)}(x) - 2y^{(3)}(x) - 2y^{(2)}(x) + y^{(1)}(x) + y = 0
Example 5. Solve y^{(6)}(x) + 8y^{(5)}(x) + 25y^{(4)}(x) + 32y^{(3)}(x) - y^{(2)}(x) - 40y^{(1)}(x) - 25y = 0
Lecture IX
Non-homogeneous linear ODE, method of undetermined coefficients
where p, q are continuous functions. Let yp(x) be a (particular) solution of (1). If y is any solution of (1) and Y = y - yp, then
Y'' + p(x)Y' + q(x)Y = 0.
Thus, Y satisfies a homogeneous linear 2nd order ODE. Hence, we can express Y as a linear combination of two LI solutions y1 and y2. This gives
Y = C1 y1 + C2 y2
or
y = C1 y1 + C2 y2 + yp   (2)
Thus, the general solution to (1) is given by (2). We have seen how to find y1 and y2. Here, we concentrate on methods to find yp.
ay'' + by' + cy = r(x),  x ∈ I,   (3)
where a, b, c are constants and r(x) is a finite linear combination of products formed from polynomial, exponential and sine or cosine functions. Thus, r(x) is a finite linear combination of functions of the form
e^{αx} x^m sin(βx)  or  e^{αx} x^m cos(βx).
If y_{p,i} is a particular solution of
ay'' + by' + cy = ri(x),
then yp = Σ y_{p,i} is a particular solution to
ay'' + by' + cy = r(x).
Hence, we shall consider the case when r(x) is one of the ri(x). Thus, we choose r(x) to be of the form
e^{αx} x^m sin(βx)  or  e^{αx} x^m cos(βx).
Rule 1: If none of the terms in r(x) is a solution of the homogeneous problem, then
for yp , choose a linear combination of r(x) and all its derivatives that form a finite set
of linearly independent functions.
Example 1. Consider
y'' - 2y' + 2y = x sin x.
Solution: The LI solutions of the homogeneous part are e^x cos x and e^x sin x. Clearly, none of x sin x, x cos x, sin x, cos x is a solution of the homogeneous part. Hence, we choose
yp = a x sin x + b x cos x + c sin x + d cos x.
Substituting, we find
(a + 2b)x sin x + (-2a + b)x cos x + (2a - 2b - 2c + d) cos x + (-2a - 2b + c + 2d) sin x = x sin x.
Hence
a + 2b = 1,  -2a + b = 0,  2a - 2b - 2c + d = 0,  -2a - 2b + c + 2d = 0.
Solving, we get
a = 1/5,  b = 2/5,  c = 2/25,  d = 14/25
Hence, the general solution is
y = e^x(C1 cos x + C2 sin x) + (1/5) x sin x + (2/5) x cos x + (2/25) sin x + (14/25) cos x
Aliter: (Annihilator method) Writing D ≡ d/dx, we write
(D^2 - 2D + 2)yp = x sin x.
Note that (D^2 + 1)^2 x sin x = 0. Hence, operating with (D^2 + 1)^2 on both sides, we find
(D^2 + 1)^2 (D^2 - 2D + 2)yp = 0.
The characteristic roots are found from (m^2 + 1)^2 (m^2 - 2m + 2) = 0. Thus, m = 1 ± i and m = ±i, ±i. Now the solution to this homogeneous linear ODE with constant coefficients is
yp = e^x(c1 cos x + c2 sin x) + (c3 cos x + c4 sin x) + x(c5 cos x + c6 sin x).
Since the first two terms are solutions of the original homogeneous part, they contribute nothing. Thus, the form for yp must be
yp = (c3 cos x + c4 sin x) + x(c5 cos x + c6 sin x),
which conforms with the previous form.
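The coefficients can be verified by finite differences; with this computation the sin x and cos x coefficients come out as 2/25 and 14/25. A sketch:

```python
import math

# A finite-difference check (a sketch) of the particular solution
#   yp = (1/5) x sin x + (2/5) x cos x + (2/25) sin x + (14/25) cos x
# for y'' - 2y' + 2y = x sin x.

def yp(x):
    return (x * math.sin(x)) / 5 + (2 * x * math.cos(x)) / 5 \
        + (2 * math.sin(x)) / 25 + (14 * math.cos(x)) / 25

def residual(x, h=1e-5):
    d1 = (yp(x + h) - yp(x - h)) / (2 * h)
    d2 = (yp(x + h) - 2 * yp(x) + yp(x - h)) / h ** 2
    return d2 - 2 * d1 + 2 * yp(x) - x * math.sin(x)

for x in (0.3, 1.0, 2.0):
    print(abs(residual(x)))   # ~0 (within finite-difference error)
```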
Rule 2: If r(x) contains terms that are solutions of the homogeneous part, then to choose the trial form of yp proceed as follows. First, choose a linear combination of r(x) and those of its derivatives which are LI. Second, multiply this linear combination by a power of x, say x^k, where k is the smallest nonnegative integer that makes all the new terms not solutions of the homogeneous problem.
Example 2. Consider
y'' - 2y' - 3y = xe^{-x}.
Solution: The LI solutions of the homogeneous part are e^{-x} and e^{3x}. Clearly, e^{-x} is a solution of the homogeneous part. Hence, we choose yp(x) = x(axe^{-x} + be^{-x}).
Substituting, we find
e^{-x}(-4b + 2a - 8ax) = xe^{-x}
This gives -4b + 2a = 0, -8a = 1, and thus a = -1/8, b = -1/16. Thus, the general solution is
y = C1 e^{-x} + C2 e^{3x} - (xe^{-x}/16)(2x + 1)
Aliter: (Annihilator method) Writing D ≡ d/dx, we write
(D^2 - 2D - 3)yp = xe^{-x}.
Since (D + 1)^2 xe^{-x} = 0, operating with (D + 1)^2 on both sides we find
(D + 1)^2 (D^2 - 2D - 3)yp = 0
The characteristic roots are found from (m + 1)^2 (m^2 - 2m - 3) = 0. Thus, m = -1, -1, -1, 3. Now the solution to this homogeneous linear ODE with constant coefficients is
yp = c1 e^{3x} + e^{-x}(c2 + c3 x + c4 x^2)
Since the first two terms are solutions of the original homogeneous part, they contribute nothing. Thus, the form for yp must be
yp = e^{-x}(c3 x + c4 x^2),
which conforms with the previous form.
Example 3. Consider
y'' - 2y' + y = 6xe^x
Solution: The LI solutions of the homogeneous part are e^x and xe^x. Clearly, both e^x and xe^x are solutions of the homogeneous part. Hence, we choose yp(x) = x^2(axe^x + be^x).
Substituting, we find
e^x(6ax + 2b) = 6xe^x
This gives a = 1, b = 0. Thus, the general solution is
y = e^x(C1 + C2 x + x^3)
Example 4. Consider
y''' - 3y'' + 2y' = 10 + 4xe^{2x}.
The characteristic roots are found from m(m - 2)^2 (m^3 - 3m^2 + 2m) = 0. Thus, m = 0, 0, 1, 2, 2, 2. Now the solution to this homogeneous linear ODE with constant coefficients is
yp = c1 + c2 x + c3 e^x + e^{2x}(c4 + c5 x + c6 x^2)
The terms with c1, c3 and c4 are solutions of the original homogeneous part and hence contribute nothing. Thus, the form for yp must be
yp = c2 x + e^{2x}(c5 x + c6 x^2).
Lecture X
Non-homogeneous linear ODE, method of variation of parameters
Example 1. Consider
y'' - 2y' - 3y = xe^{-x}.
(This has been solved before by the method of undetermined coefficients.) The LI solutions of the homogeneous part are y1(x) = e^{-x} and y2(x) = e^{3x}. Hence,
yp(x) = u(x)y1(x) + v(x)y2(x),
where
u(x) = -∫ (y2(x)r(x)/W(y1, y2)) dx,  v(x) = ∫ (y1(x)r(x)/W(y1, y2)) dx.
Now W(y1, y2) = 4e^{2x}. Hence,
u(x) = -∫ (x/4) dx = -x^2/8
v(x) = ∫ (x/4) e^{-4x} dx = -(x/16) e^{-4x} - (1/64) e^{-4x}
Thus,
yp(x) = -(x^2/8) e^{-x} + e^{3x} ( -(x/16) e^{-4x} - (1/64) e^{-4x} )
Hence, the general solution is
y(x) = C1 e^{-x} + C2 e^{3x} + yp(x)
Since the last term of yp can be absorbed into the constant C1, we get the same solution as obtained before.
Example 2. Consider
y'' + y = tan x.
Solution: (This cannot be solved by the method of undetermined coefficients.) The LI solutions of the homogeneous part are y1(x) = cos x and y2(x) = sin x. Hence,
yp(x) = u(x)y1(x) + v(x)y2(x),
where
u(x) = -∫ (y2(x)r(x)/W(y1, y2)) dx,  v(x) = ∫ (y1(x)r(x)/W(y1, y2)) dx.
Now W(y1, y2) = 1. Hence,
u(x) = -∫ sin x tan x dx = -ln |sec x + tan x| + sin x
v(x) = ∫ sin x dx = -cos x
Thus,
yp(x) = -cos x ln |sec x + tan x|
Hence, the general solution is
y(x) = C1 cos x + C2 sin x - cos x ln |sec x + tan x|.
Example 3. Consider
y'' + y = |x|,  x ∈ (−∞, ∞)
Solution: You can find the general solution using either the method of undetermined
coefficients (tricky!) or the method of variation of parameters. Try it yourself.
Example 4. Consider
xy'' − (1 + x)y' + y = x^2 e^{2x}.
This is linear but the coefficients are not constants. Note that y1(x) = e^x is a solution
(by inspection!) of the homogeneous equation
xy'' − (1 + x)y' + y = 0.
Let us first divide by the leading coefficient to find
y'' − [(1 + x)/x] y' + (1/x) y = 0
Using reduction of order, we find
y2(x) = y1(x) ∫ (1/y1^2) e^{∫ (1+x)/x dx} dx = −(x + 1)
The LI solutions of the homogeneous part are y1(x) = e^x and y2(x) = 1 + x.
Dividing the nonhomogeneous equation by the leading coefficient leads to
y'' − [(1 + x)/x] y' + (1/x) y = xe^{2x}
Hence, r(x) = xe^{2x}. Now yp(x) = u(x)y1(x) + v(x)y2(x), where
u(x) = −∫ [y2(x)r(x)/W(y1, y2)] dx,   v(x) = ∫ [y1(x)r(x)/W(y1, y2)] dx.
Here W(y1, y2) = −xe^x. Hence,
u(x) = ∫ (x + 1)e^x dx = xe^x
v(x) = −∫ e^{2x} dx = −e^{2x}/2
Thus,
yp(x) = xe^{2x} − (1 + x)e^{2x}/2 = e^{2x}(x − 1)/2
Hence, the general solution is
y(x) = C1 e^x + C2 (x + 1) + yp(x)
Lecture XI
Euler-Cauchy Equation
Proof: We have seen that the trial solution for a constant coefficient equation is e^{mx}.
Now since the power of x^m is reduced by 1 by a differentiation, let us take x^m as the trial
solution for (1).
For convenience, (1) is written in the operator form L(y) = 0, where
L ≡ ax^2 d^2/dx^2 + bx d/dx + c.
We also sometimes write L as
L ≡ ax^2 D^2 + bxD + c,
where D = d/dx. Now
L(x^m) = [am(m − 1) + bm + c] x^m = p(m)x^m,   (3)
Example 1. Solve x^2 y'' − xy' − 3y = 0
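The worked steps of this example are not reproduced here; as a sketch (my own reconstruction, not from the notes), the trial y = x^m gives p(m) = m(m − 1) − m − 3 = m^2 − 2m − 3 = 0, so m = 3, −1 and y = C1 x^3 + C2/x. A finite-difference check confirms this:

```python
import math

# Residual of x^2 y'' - x y' - 3y at x, using central differences.
def residual(y, x, h=1e-4):
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    return x**2 * d2 - x * d1 - 3 * y(x)

# sample member of the claimed general solution C1 x^3 + C2 / x
y = lambda x: 2 * x**3 + 5 / x
assert all(abs(residual(y, x)) < 1e-4 for x in (0.5, 1.0, 2.0))
```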
Now
L(x^m ln x) = [p'(m) + p(m) ln x] x^m,
where ' represents the derivative. Since m is a repeated root, the RHS is zero. Thus,
x^m ln x is also a solution to (1) and it is independent of x^m. Hence, the general solution
to (1) is
y = (C1 + C2 ln x)x^m.
Y1 = x^{α+iβ} = x^α e^{iβ ln x},  and  Y2 = x^{α−iβ} = x^α e^{−iβ ln x}.
But these are complex valued. Note that if Y1, Y2 are LI, then so are y1 = (Y1 + Y2)/2
and y2 = (Y1 − Y2)/2i. Hence, two real LI solutions of (1) are y1 = x^α cos(β ln x) and
y2 = x^α sin(β ln x). Thus, the general solution to (1) is
y = x^α [C1 cos(β ln x) + C2 sin(β ln x)].
a(αx + β)^2 y'' + b(αx + β)y' + cy = 0.
If
ax^2 y'' + bxy' + cy = r̃(x),   (4)
where a, b and c are constants, then (4) is called the nonhomogeneous Euler-Cauchy equa-
tion. We can use the method of variation of parameters as follows. First divide (4) by
ax^2 so that the coefficient of y'' becomes unity:
y'' + [b/(ax)] y' + [c/(ax^2)] y = r(x),   (5)
where r(x) = r̃(x)/(ax^2). Now we already know two LI solutions y1, y2 of the homoge-
neous part. Hence, the particular solution to (4) is given by
yp(x) = −y1(x) ∫ [y2(x)r(x)/W(y1, y2)] dx + y2(x) ∫ [y1(x)r(x)/W(y1, y2)] dx.
Example 4.
Comment: In a few cases, it can also be solved using the method of undetermined coeffi-
cients. For this, we first convert it to a constant coefficient linear ODE by t = ln x. If
the transformed RHS is of special form, then the method of undetermined coefficients
is applicable.
Example 5. Consider
x^2 y'' − xy' − 3y = (ln x)/x,  x > 0.
Proceeding as before (variation of parameters with y1 = 1/x, y2 = x^3), we find
yp(x) = −(ln x)^2/(8x) − (ln x)/(16x) − 1/(64x)
Hence the general solution is y = c1 y1 + c2 y2 + yp, i.e.
y(x) = A/x + Bx^3 − (ln x)^2/(8x) − (ln x)/(16x).
Note that the last term of yp is absorbed into y1.
Aliter: Let us make the transformation t = ln x. Then the given equation is transformed
to
ÿ − 2ẏ − 3y = te^{−t},
where ˙ = d/dt. This is the same problem we solved in lecture 9 using the method of
undetermined coefficients. The solution is (see lecture 9)
y(t) = C1 e^{−t} + C2 e^{3t} − (te^{−t}/16)(2t + 1),
which in terms of the original x variable becomes
y(x) = C1/x + C2 x^3 − [ln x/(16x)](2 ln x + 1).
Lecture XII
Power Series Solutions: Ordinary points
1 Analytic function
Definition 1. Let f be a function defined on an interval I. We say f is analytic at a
point x0 ∈ I if f can be expanded in a power series about x0 which has a positive radius
of convergence:
f(x) = Σ_{n=0}^∞ cn (x − x0)^n.   (1)
Here the cn are constants and (1) converges for |x − x0| < R, where R > 0. The radius
of convergence R can be found from the ratio test/root test.
If f has the power series representation (1), then its derivatives exist in |x − x0| < R.
These derivatives are obtained by differentiating the RHS of (1) term by term. Thus,
f'(x) = Σ_{n=1}^∞ n cn (x − x0)^{n−1} ≡ Σ_{n=0}^∞ (n + 1)c_{n+1} (x − x0)^n,   (2)
and
f''(x) = Σ_{n=2}^∞ n(n − 1)cn (x − x0)^{n−2} ≡ Σ_{n=0}^∞ (n + 2)(n + 1)c_{n+2} (x − x0)^n.   (3)
2 Ordinary points
Consider a linear 2nd order homogeneous ODE of the form
a0(x)y'' + a1(x)y' + a2(x)y = 0,
where a0, a1 and a2 are continuous in an interval I. The points where a0(x) = 0 are
called singular points. If a0(x) ≠ 0 for all x ∈ I, then the above ODE can be written as
(by dividing by a0(x))
y'' + p(x)y' + q(x)y = 0.   (4)
Definition 2. A point x0 2 I is called an ordinary point for (4) if p(x) and q(x) are
analytic at x = x0 .
Theorem 1. Let x0 be an ordinary point for (4). Then there exists a unique solution
y = y(x) of (4) which is also analytic at x0 and satisfies y(x0) = K0, y'(x0) = K1
(K0, K1 are arbitrary constants). Further, if p and q have convergent power series
expansions in |x − x0| < R, (R > 0), then the power series expansion of y is also
convergent in |x − x0| < R.
Example 1. Find a power series solution around x0 = 0 for
(1 + x^2)y'' + 2xy' − 2y = 0.
Solution: This can be solved by the reduction of order technique since Y1 = x is a
solution. The other solution is given by
Y2(x) = Y1(x) ∫ (1/Y1^2) e^{−∫ 2x/(1+x^2) dx} dx = x ∫ (1/x^2)[1/(1 + x^2)] dx = −(1 + x tan^{−1} x)
Note that the summation in the last term can be taken from n = 0 since the contribu-
tions due to n = 0 and n = 1 vanish. Thus
(1 + x^2)y''(x) = Σ_{n=0}^∞ [(n + 2)(n + 1)c_{n+2} + n(n − 1)cn] x^n.
Similarly
2xy'(x) = Σ_{n=0}^∞ 2n cn x^n.
Equating the coefficient of x^n to zero gives c_{n+2} = −(n − 1)cn/(n + 1); in particular,
c2 = c0.
Lecture XIII
Legendre Equation, Legendre Polynomial
1 Legendre equation
This equation arises in many problems in physics, especially in boundary value problems
in spheres:
(1 − x^2)y'' − 2xy' + α(α + 1)y = 0,   (1)
where α is a constant.
We write this equation as
y'' + p(x)y' + q(x)y = 0,
where
p(x) = −2x/(1 − x^2)  and  q(x) = α(α + 1)/(1 − x^2).
Clearly p(x) and q(x) are analytic at the origin and have radius of convergence R = 1.
Hence x = 0 is an ordinary point for (1). Assume
y(x) = Σ_{n=0}^∞ cn x^n.
and
y2(x) = x + Σ_{m=1}^∞ (−1)^m [(α + 2m)(α + 2m − 2) ··· (α + 2)(α − 1)(α − 3) ··· (α − 2m + 1)/(2m + 1)!] x^{2m+1}.   (3)
Taking c0 = 1, c1 = 0 and c0 = 0, c1 = 1, we find that y1 and y2 are solutions of the
Legendre equation. Also, these are LI, since their Wronskian is nonzero at x = 0. The
series expansions for y1 and y2 may terminate (in that case the corresponding solution
has R = ∞); otherwise they have radius of convergence R = 1.
2 Legendre polynomial
We note that if α in (1) is a nonnegative integer, then either y1 given in (2) or y2 given
in (3) terminates. Thus, y1 terminates when α = 2m (m = 0, 1, 2, ···) is a nonnegative
even integer:
y1(x) = 1,   (α = 0),
y1(x) = 1 − 3x^2,   (α = 2),
y1(x) = 1 − 10x^2 + (35/3)x^4,   (α = 4).
Note that y2 does not terminate when α is a nonnegative even integer.
Similarly, y2 terminates (but y1 does not) when α = 2m + 1 (m = 0, 1, 2, ···) is a
nonnegative odd integer:
y2(x) = x,   (α = 1),
y2(x) = x − (5/3)x^3,   (α = 3),
y2(x) = x − (14/3)x^3 + (21/5)x^5,   (α = 5).
Now we use the Leibniz rule for the m-th derivative of a product of two functions f and g:
(f · g)^{(m)} = Σ_{k=0}^m (m choose k) f^{(k)} g^{(m−k)},
which can be proved easily by induction.
Thus from (7) we get
(x^2 − 1)u^{(n+2)} + 2x(n + 1)u^{(n+1)} + (n + 1)n u^{(n)} = 2n [x u^{(n+1)} + (n + 1)u^{(n)}].
Lecture XIV
Frobenius series: Regular singular points
1 Singular points
Consider the second order linear homogeneous equation
ax^2 y'' + bxy' + cy = 0,
For simplicity, we consider a second order linear ODE with a regular singular point
at x0 = 0. If x0 ≠ 0, it is easy to convert the given ODE to an equivalent ODE
with a regular singular point at x0 = 0. For this, we substitute t = x − x0 and let
z(t) = y(x0 + t). Then (3) becomes
t^2 z̈ + t p̃(t)ż + q̃(t)z = 0,
where ˙ = d/dt. Thus, we consider the following second order homogeneous linear ODE
x^2 y'' + x p(x)y' + q(x)y = 0,   (4)
where p, q are analytic at the origin.
Ordinary point vs. regular singular point: This can be explained by taking two ex-
amples. Consider
y'' + y = 0,
which has 0 as an ordinary point. Note that the general solution is y = c1 cos x +
c2 sin x. At the ordinary point x0 = 0, we can find unique c1, c2 for given K0, K1 such
that y(0) = K0, y'(0) = K1. Thus, a unique solution exists for initial conditions specified
at the ordinary point.
Now consider the Euler-Cauchy equation
x^2 y'' − 2xy' + 2y = 0,
for which x0 = 0 is a regular singular point. The general solution is y = c1 x + c2 x^2.
Now it is not possible to find unique values of c1, c2 for given K0, K1 such that
y(0) = K0, y'(0) = K1. Note that a solution does not exist for K0 ≠ 0 since y(0) = 0.
2 Frobenius method
We would like to find two linearly independent solutions of (4) so that these form a
basis of solutions for x ≠ 0. We find the basis solutions for x > 0. For x < 0, we substitute
t = −x and carry out a similar procedure for t > 0.
If p and q in (4) are constants, then a solution of (4) is of the form x^r. But since p and
q are power series, we assume that a solution of (4) can be represented by an extended
power series
y = x^r Σ_{n=0}^∞ an x^n,   (5)
which is a product of x^r and a power series. We also assume that a0 ≠ 0. We formally
substitute (5) into (4) and find r and a1, a2, ··· in terms of a0 and r. Once we find (5),
we next check the convergence of the series. If it converges, then (5) becomes a solution
of (4).
Now from (5), we find
x^2 y''(x) = Σ_{n=0}^∞ (n + r)(n + r − 1)an x^{n+r},   xy'(x) = Σ_{n=0}^∞ (n + r)an x^{n+r}.
OR
x^r Σ_{n=0}^∞ [(r + n)(r + n − 1)an + Σ_{k=0}^n ((r + k)p_{n−k} + q_{n−k}) ak] x^n = 0.
Equating the coefficient of x^n to zero gives
ρ(r + n)an + bn = 0,
where
bn = Σ_{k=0}^{n−1} [(r + k)p_{n−k} + q_{n−k}] ak.
With r = r2 and n = m, this reads
ρ(r2 + m)am = −bm.
To find the form of the solution in the cases B and C described above, we use the
reduction of order technique. We know that y1(x) (corresponding to the larger root)
always exists. Let y2(x) = v(x)y1(x). Then
v' = (1/y1^2) e^{−∫ p(x)/x dx}
   = e^{−p0 ln x − p1 x − ···} / [x^{2r1}(1 + a1(r1)x + a2(r1)x^2 + ···)^2]
   = e^{−p1 x − ···} / [x^{2r1+p0}(1 + a1(r1)x + a2(r1)x^2 + ···)^2]
   = (1/x^{2r1+p0}) g(x),
where g(x) is analytic at x = 0 and g(0) = 1. Since g(x) is analytic at x = 0 with
g(0) = 1, we must have g(x) = 1 + Σ_{n=1}^∞ gn x^n. Since r1, r2 are roots of (8), we must
have
r1 + r2 = 1 − p0  ⟹  2r1 + p0 = m + 1.
Hence,
v' = 1/x^{m+1} + g1/x^m + ··· + g_{m−1}/x^2 + gm/x + g_{m+1} + ···,
OR
v(x) = −x^{−m}/m − g1 x^{−m+1}/(m − 1) − ··· − g_{m−1} x^{−1} + gm ln x + g_{m+1} x + ···.   (11)
Thus,
y2(x) = y1(x) [−x^{−m}/m − g1 x^{−m+1}/(m − 1) − ··· − g_{m−1} x^{−1} + gm ln x + g_{m+1} x + ···]
      = gm y1(x) ln x + x^{r1} (1 + Σ_{n=1}^∞ an(r1) x^n) [−x^{−m}/m − g1 x^{−m+1}/(m − 1) − ··· − g_{m−1} x^{−1} + g_{m+1} x + ···].
Now we take the factor x^{−m} out of the series inside the third bracket. Since r1 − m = r2,
we finally find
y2(x) = c y1(x) ln x + x^{r2} Σ_{n=0}^∞ cn x^n,   (12)
where we put gm = c.
Now for r1 = r2, we have m = 0 and hence gm = g0 = g(0) = 1 = c. Thus, the ln x term
is definitely present in the second solution. Also in this case, the series in (11) starts
with g0 ln x and the next term is g1 x. Hence, for r1 = r2, we must have c0 = 0 in (12).
In certain cases, gm = c becomes zero (case C.ii) for m ≥ 1. Then the second solution
is also a Frobenius series solution; otherwise, the second Frobenius series solution does
not exist.
3 Summary
The results derived in the previous section can be summarized as follows. Consider
x^2 y'' + x p(x)y' + q(x)y = 0,   (13)
where p and q have convergent power series expansions in |x| < R, R > 0. Let r1, r2
(with r1 ≥ r2 when both are real) be the roots of the indicial equation
r^2 + (p0 − 1)r + q0 = 0.   (14)
Theorem 1. If r1 − r2 is not zero or a positive integer, then there are two linearly
independent solutions y1 and y2 of (13) of the form
y1(x) = x^{r1} φ1(x),  y2(x) = x^{r2} φ2(x),   (15)
where φ1, φ2 are analytic at the origin. If r1 = r2, the solutions take the form
y1(x) = x^{r1} φ1(x),  y2(x) = (ln x)y1(x) + x^{r2+1} φ2(x),   (16)
and if r1 − r2 is a positive integer,
y1(x) = x^{r1} φ1(x),  y2(x) = c(ln x)y1(x) + x^{r2} φ2(x),   (17)
where the constant c may be zero.
Example 4. Discuss whether two Frobenius series solutions exist or do not exist for
the following equations:
(i) 2x^2 y'' + x(x + 1)y' − (cos x)y = 0,
(ii) x^4 y'' − (x^2 sin x)y' + 2(1 − cos x)y = 0.
Solution: (i) Here p(x) = (x + 1)/2 and q(x) = −(cos x)/2, so p(0) = 1/2 and
q(0) = −1/2. The indicial equation r^2 − r/2 − 1/2 = 0 has roots r1 = 1, r2 = −1/2.
Since r1 − r2 = 3/2, which is not zero or a positive integer, two Frobenius series solutions
exist.
(ii) We can write this as
x^2 y'' − x(sin x/x)y' + 2[(1 − cos x)/x^2]y = 0.
Hence p(x) = −sin x/x and q(x) = 2(1 − cos x)/x^2. Thus, p(0) = −1 and q(0) = 1.
The indicial equation is r^2 − 2r + 1 = 0, i.e. r1 = r2 = 1, a double root; hence only one
Frobenius series solution is guaranteed.
Example 5. Consider
2xy'' + (x + 1)y' + 3y = 0
Since r1 − r2 = 1/2 is not zero or a positive integer, two independent Frobenius series
solutions exist.
Substituting
y = x^r Σ_{n=0}^∞ an x^n,
from the first relation we find the roots of the indicial equation r1 = 1/2, r2 = 0. Now
with the larger root r = r1 = 1/2, we find
an = −(2n + 5)a_{n−1}/[2n(2n + 1)],  n ≥ 1.
Iterating we find
a1 = −(7/6)a0,  a2 = (21/40)a0, ···
Hence, by induction
an = (−1)^n [(2n + 5)(2n + 3)/(15 · 2^n n!)] a0,  n ≥ 1  (Check!)
Thus, taking a0 = 1, we find
y1(x) = x^{1/2} [1 − (7/6)x + (21/40)x^2 − ···]
With the smaller root r = r2 = 0, we find
an = −(n + 2)a_{n−1}/[n(2n − 1)],  n ≥ 1.
Iterating we find
a1 = −3a0,  a2 = 2a0, ···
Hence, by induction
an = (−1)^n [(n + 2)/(n(2n − 1))] [(n + 1)/((n − 1)(2n − 3))] ··· [3/(1 · 1)] a0,  n ≥ 1  (Check!)
Thus, taking a0 = 1, we find
y2(x) = 1 − 3x + 2x^2 − ···
Example 6. (Case B) Find the general solution in the neighbourhood of the origin for
4x^2 y'' − 8x^2 y' + (4x^2 + 1)y = 0.
Dividing by 4, this takes the form x^2 y'' + x p(x)y' + q(x)y = 0 with p(x) = −2x and
q(x) = x^2 + 1/4. Thus, p(0) = 0, q(0) = 1/4. The indicial
equation is
r^2 + (p(0) − 1)r + q(0) = 0 ⟹ r^2 − r + 1/4 = 0 ⟹ r1 = r2 = 1/2.
Since the indicial equation has a double root, only one Frobenius series solution exists.
Substituting
y = x^r Σ_{n=0}^∞ an x^n,
(after some manipulation and cancelling x^r) we find
Σ_{n=0}^∞ ρ(n + r)an x^n − Σ_{n=1}^∞ 8(n + r − 1)a_{n−1} x^n + Σ_{n=2}^∞ 4a_{n−2} x^n = 0,
where ρ(r) = (2r − 1)^2. Rearranging the above, we get
ρ(r)a0 + [ρ(r + 1)a1 − 8r a0] x + Σ_{n=2}^∞ [ρ(n + r)an − 8(n + r − 1)a_{n−1} + 4a_{n−2}] x^n = 0.
With r = 1/2 we have ρ(n + 1/2) = 4n^2, so that a1 = a0 and
a_{k+1} = [(2k + 1)ak − a_{k−1}]/(k + 1)^2,  k ≥ 1.
If ak = a0/k! and a_{k−1} = a0/(k − 1)!, then
a_{k+1} = [(2k + 1) − k] a0/[k!(k + 1)^2] = (k + 1)a0/[k!(k + 1)^2] = a0/(k + 1)!,
so by induction ak = a0/k! for all k.
Thus, taking a0 = 1, we find
y1(x) = x^{1/2} (1 + x/1! + x^2/2! + x^3/3! + ···) = x^{1/2} e^x.
For the general solution, we need to find another solution y2. For this we use reduction
of order. Let y2(x) = y1(x)v(x). Then
v = ∫ (1/y1^2) e^{−∫ p dx} dx,
where p(x) = −2. Hence
v(x) = ∫ (1/x) dx = ln x
and y2 = (ln x)x^{1/2} e^x. Thus, the general solution is
y(x) = x^{1/2} e^x (c1 + c2 ln x)
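Both solutions can be verified numerically against the equation as reconstructed above, 4x²y'' − 8x²y' + (4x² + 1)y = 0 (a sketch, my own addition):

```python
import math

# Residual of 4x^2 y'' - 8x^2 y' + (4x^2 + 1) y at x, using central differences.
def residual(y, x, h=1e-4):
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    return 4 * x**2 * d2 - 8 * x**2 * d1 + (4 * x**2 + 1) * y(x)

y1 = lambda x: math.sqrt(x) * math.exp(x)
y2 = lambda x: math.log(x) * math.sqrt(x) * math.exp(x)
assert all(abs(residual(f, x)) < 1e-4 for f in (y1, y2) for x in (0.5, 1.0, 1.5))
```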
Example 7. (Case C) Consider
xy'' + 2y' + xy = 0,
OR, multiplying by x,
x^2 y'' + 2xy' + x^2 y = 0.
Hence p(x) = 2 and q(x) = x^2. Thus, p(0) = 2, q(0) = 0. The indicial equation is
r^2 + (p(0) − 1)r + q(0) = 0 ⟹ r^2 + r = 0.
From the first relation we find the roots of the indicial equation r1 = 0, r2 = −1. Now
with the larger root r = r1, we find
a1 = 0,  an = −a_{n−2}/[n(n + 1)],  n ≥ 2.
Iterating we find
a2 = −a0/3!,  a3 = 0,  a4 = a0/5!, ···
Hence, by induction
a_{2n} = (−1)^n a0/(2n + 1)!,  a_{2n+1} = 0.
Thus, taking a0 = 1, we find
y1(x) = 1 − x^2/3! + x^4/5! − ··· = sin x / x
Since r1 − r2 = 1, a positive integer, the second Frobenius series solution may or may
not exist. Hence, to be sure, we need to compute it. With r = r2 = −1, we find
0 · a1 = 0,  an = −a_{n−2}/[n(n − 1)],  n ≥ 2.
Now the first relation can be satisfied by taking any value of a1. For simplicity, we
choose a1 = 0. Iterating we find
a2 = −a0/2!,  a3 = 0,  a4 = a0/4!, ···
Hence, by induction
a_{2n} = (−1)^n a0/(2n)!,  a_{2n+1} = 0.
Thus, indeed a second Frobenius series solution exists and taking a0 = 1, we get
y2(x) = x^{−1} (1 − x^2/2! + x^4/4! − ···) = cos x / x.
Comment: The second solution could also have been obtained using reduction of order.
Suppose y2 = vy1; then
v = ∫ (x^2/sin^2 x) e^{−∫ (2/x) dx} dx = ∫ cosec^2 x dx = −cot x.
Hence y2(x) = −cos x/x (and the minus sign may be disregarded).
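A numerical check (my addition, not part of the notes) confirms that sin x/x and cos x/x both satisfy x²y'' + 2xy' + x²y = 0:

```python
import math

# Residual of x^2 y'' + 2x y' + x^2 y at x, using central differences.
def residual(y, x, h=1e-4):
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    return x**2 * d2 + 2 * x * d1 + x**2 * y(x)

y1 = lambda x: math.sin(x) / x
y2 = lambda x: math.cos(x) / x
assert all(abs(residual(f, x)) < 1e-5 for f in (y1, y2) for x in (0.5, 1.2, 2.0))
```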
Example 8. Consider
(x^2 − x)y'' − xy' + y = 0,
OR
x Σ_{n=0}^∞ (n + r − 1)^2 an x^n − Σ_{n=0}^∞ (n + r)(n + r − 1)an x^n = 0,
OR
Σ_{n=1}^∞ (n + r − 2)^2 a_{n−1} x^n − Σ_{n=0}^∞ (n + r)(n + r − 1)an x^n = 0,
OR
r(r − 1)a0 + Σ_{n=1}^∞ [(n + r)(n + r − 1)an − (n + r − 2)^2 a_{n−1}] x^n = 0,
where ρ(r) = r(r − 1). From the first relation we find the roots of the indicial equation
r1 = 1, r2 = 0. Now with the larger root r = r1 = 1, we find
an = (n − 1)^2 a_{n−1}/[n(n + 1)],  n ≥ 1.
Iterating we find
an = 0,  n ≥ 1.
Thus, taking a0 = 1, we find
y1(x) = x
Now with r = r2 = 0, we find
Lecture XV
Bessel’s equation, Bessel’s function
1 Gamma function
The gamma function is defined by
Γ(p) = ∫₀^∞ e^{−t} t^{p−1} dt,  p > 0.   (1)
It can easily be proved that the integral in (1) is convergent. Some special properties of
the gamma function are the following:
Hence
I^2 = 4 (∫₀^∞ e^{−u²} du)(∫₀^∞ e^{−v²} dv) = 4 ∫₀^∞ ∫₀^∞ e^{−(x²+y²)} dx dy.
Using polar coordinates ρ, θ, the above becomes
I^2 = 4 ∫₀^{π/2} ∫₀^∞ e^{−ρ²} ρ dρ dθ ⟹ I^2 = π ⟹ I = √π
v. Using the relation in i., we can extend the definition of Γ(p) to p < 0. Suppose N is
a positive integer and −N < p < −N + 1. Now using the relation in i., we find
Γ(p) = Γ(p + 1)/p = Γ(p + 2)/[p(p + 1)] = ··· = Γ(p + N)/[p(p + 1) ··· (p + N − 1)].
Since p + N > 0, the above relation is well defined.
vi. Γ(p) is not defined when p is zero or a negative integer. For small positive ε,
Γ(±ε) = Γ(1 ± ε)/(±ε) ≈ 1/(±ε) → ±∞ as ε → 0.
Since Γ(0) is undefined, Γ(p) is also undefined when p is a negative integer.
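Properties iv. and v. can be illustrated with Python's `math.gamma`, which already implements this extension to negative non-integer arguments (a sketch, my addition):

```python
import math

# Property v. with N = 2: for -2 < p < -1, Gamma(p) = Gamma(p + 2) / (p(p + 1)).
p = -1.5
extended = math.gamma(p + 2) / (p * (p + 1))
assert abs(math.gamma(p) - extended) < 1e-12

# Gamma(1/2) = sqrt(pi), as computed via the polar-coordinates integral above.
assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12
```

Note that `math.gamma` raises `ValueError` at zero and negative integers, matching property vi.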
2 Bessel’s equation
Bessel’s equation of order ⌫ (⌫ 0) is given by
x2 y 00 + xy 0 + (x2 ⌫ 2 )y = 0. (2)
Obviously, x = 0 is regular singular point. Since p(0) = 1, q(0) = ⌫ 2 , the indicial
equation is given by
r2 ⌫ 2 = 0.
Hence, r1 = ⌫, r2 = ⌫ and r1 r2 = 2⌫. A Frobenius series solution exists for the
larger root r = r1 = ⌫. To find this series, we substitute
1
X
y = xr an x n , x>0
n=0
With r = r2 = −ν (2ν = 2k + 1 an odd integer), we get
ρ(r) = 0;  1 · [1 − (2k + 1)]a1 = 0;  n[n − (2k + 1)]an = −a_{n−2},  n ≥ 2.
It is clear that the even terms a_{2n} can be determined uniquely. For the odd
terms, a1 = a3 = ··· = a_{2k−1} = 0, but for a_{2k+1} we must have
0 · a_{2k+1} = −a_{2k−1} = 0.
This can be satisfied by taking any value of a_{2k+1}, and for simplicity we can
take a_{2k+1} = 0. The rest of the odd terms thus also vanish. Hence, the second
solution in this case is also given by (5), i.e.
J_{−ν}(x) = Σ_{n=0}^∞ [(−1)^n / (n! Γ(n − ν + 1))] (x/2)^{2n−ν}.   (6)
With r = r2 = −ν (2ν = 2k an even integer), we get
ρ(r) = 0;  1 · (1 − 2k)a1 = 0;  n(n − 2k)an = −a_{n−2},  n ≥ 2.
It is clear that all the odd terms a_{2n+1} vanish. For the even terms, a2, a4, ···, a_{2k−2}
are each nonzero, but for a_{2k} we must have 0 · a_{2k} = −a_{2k−2} ≠ 0, which is
impossible. Hence, no second Frobenius series solution exists in this case.
since Γ(n + m + 1) = (n + m)!.
Since Γ(±0) = ±∞, we define 1/Γ(k) to be zero when k is a nonpositive integer. Now
J_{−m}(x) = Σ_{n=0}^∞ [(−1)^n / (n! Γ(n − m + 1))] (x/2)^{2n−m}.
Substituting n − m = k, we find
J_{−m}(x) = Σ_{k=0}^∞ [(−1)^{k+m} / ((m + k)! Γ(k + 1))] (x/2)^{2(m+k)−m}
         = (−1)^m Σ_{k=0}^∞ [(−1)^k / (k!(m + k)!)] (x/2)^{2k+m}
         = (−1)^m Jm(x).
Hence,
[x^ν Jν(x)]' = x^ν J_{ν−1}(x).   (7)
Note that the sum runs from n = 1 (in contrast to that in A). Let k = n − 1;
then we obtain
[x^{−ν} Jν(x)]' = x^{−ν} Σ_{k=0}^∞ [(−1)^{k+1} / (k! Γ(k + (ν + 1) + 1))] (x/2)^{2k+ν+1}
             = −x^{−ν} Σ_{k=0}^∞ [(−1)^k / (k! Γ(k + (ν + 1) + 1))] (x/2)^{2k+ν+1}.
Hence,
[x^{−ν} Jν(x)]' = −x^{−ν} J_{ν+1}(x).   (8)
Note: In the first relation A, while taking the derivative, we keep the sum running
from n = 0. This is true only when ν > 0. In the second relation B, we only
need ν ≥ 0. Taking ν = 0 in B, we find J0' = −J1. If we put ν = 0 in A, then we
find J0' = J_{−1}. But J_{−1} = −J1, and hence we find the same relation as that in B.
Hence, the first relation is also valid for ν ≥ 0.
and
J_{ν−1}(x) + J_{ν+1}(x) = (2ν/x) Jν(x).   (10)
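Relation (10) can be tested numerically for integer orders directly from the series definition of Jm (a sketch, my addition; the truncation depth is an assumption adequate for small x):

```python
import math

# Truncated series for J_m with integer m >= 0 (Gamma(n+m+1) = (n+m)!).
def J(m, x, terms=30):
    return sum((-1)**n / (math.factorial(n) * math.factorial(n + m))
               * (x / 2)**(2 * n + m) for n in range(terms))

# Check J_{m-1}(x) + J_{m+1}(x) = (2m/x) J_m(x) at sample points.
for x in (0.5, 1.0, 3.0):
    for m in (1, 2, 5):
        assert abs(J(m - 1, x) + J(m + 1, x) - (2 * m / x) * J(m, x)) < 1e-10
```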
Lecture XVI
Sturm comparison theorem, Orthogonality of Bessel functions
Example 1. Reduce Bessel's equation
x^2 y'' + xy' + (x^2 − ν^2)y = 0,  x > 0,
to normal form.
Solution: Here v = e^{−∫ dx/(2x)} = 1/√x. Now
Q(x) = 1 − ν^2/x^2 + 1/(2x^2) − 1/(4x^2) = 1 + (1/4 − ν^2)/x^2.
Thus, Bessel's equation in normal form becomes
u'' + [1 + (1/4 − ν^2)/x^2] u = 0.   (3)
Theorem 1. (Sturm comparison theorem) Let φ and ψ be nontrivial solutions of
y'' + p(x)y = 0,  x ∈ I,
and
y'' + q(x)y = 0,  x ∈ I,
where p and q are continuous and p ≤ q on I. Then between any two consecutive zeros
x1 and x2 of φ, there exists at least one zero of ψ unless p ≡ q on (x1, x2).
Proof: Consider x1 and x2 with x1 < x2. WLOG, assume that φ > 0 in (x1, x2).
Then φ'(x1) > 0 and φ'(x2) < 0. Further, suppose on the contrary that ψ has no zero
in (x1, x2); WLOG assume that ψ > 0 in (x1, x2). Since φ and ψ are solutions of the
above equations, we must have
φ'' + p(x)φ = 0,
ψ'' + q(x)ψ = 0.
Now multiplying the first of these by ψ and the second by φ and subtracting, we find
dW/dx = (q − p)φψ,
where W = φ'ψ − φψ' is the Wronskian of φ and ψ. Integrating between x1 and x2, we
find
W(x2) − W(x1) = ∫_{x1}^{x2} (q − p)φψ dx.
Now W(x2) ≤ 0 and W(x1) ≥ 0. Hence, the left hand side W(x2) − W(x1) ≤ 0. On
the other hand, the right hand side is strictly greater than zero unless p ≡ q on (x1, x2).
This contradiction proves that between any two consecutive zeros x1 and x2 of φ, there
exists at least one zero of ψ unless p ≡ q on (x1, x2).
Proposition 1. The Bessel function of the first kind Jν (ν ≥ 0) has infinitely many
positive zeros.
Proof: The number of zeros of Jν is the same as that of the nontrivial u that satisfies (3),
i.e.
u'' + [1 + (1/4 − ν^2)/x^2] u = 0.   (4)
Now for large enough x, say x ≥ x0, we have
1 + (1/4 − ν^2)/x^2 > 1/4,  x ∈ (x0, ∞).   (5)
Now compare (4) with
v'' + (1/4)v = 0.   (6)
Due to (5), between any two zeros of a nontrivial solution of (6) in (x0, ∞), there exists
at least one zero of a nontrivial solution of (4). We know that v = sin(x/2) is a nontrivial
solution of (6), which has an infinite number of zeros in (x0, ∞). Hence, any nontrivial
solution of (4) has an infinite number of zeros in (x0, ∞). Thus, Jν has an infinite
number of zeros in (x0, ∞), i.e. Jν has infinitely many positive zeros. We label the
positive zeros of Jν by λn; thus Jν(λn) = 0 for n = 1, 2, 3, ···.
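The first few positive zeros of J0 are easy to locate numerically, illustrating the proposition (a sketch, my addition; tolerances and truncation are assumptions):

```python
import math

# Truncated series for J_0.
def J0(x, terms=40):
    return sum((-1)**n / math.factorial(n)**2 * (x / 2)**(2 * n)
               for n in range(terms))

# Simple bisection on a sign change.
def bisect(f, a, b, it=60):
    for _ in range(it):
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

lam1 = bisect(J0, 2.0, 3.0)   # first positive zero, approx 2.404826
lam2 = bisect(J0, 5.0, 6.0)   # second positive zero, approx 5.520078
assert abs(lam1 - 2.404826) < 1e-4 and abs(lam2 - 5.520078) < 1e-4
```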
Now u(1) = Jν(λ) and v(1) = Jν(µ). Let us choose λ = λm and µ = λn, where λm and
λn are positive zeros of Jν. Then u(1) = v(1) = 0, and we thus find
(λn^2 − λm^2) ∫₀¹ x Jν(λm x) Jν(λn x) dx = 0.
If n ≠ m, then
∫₀¹ x Jν(λm x) Jν(λn x) dx = 0.
Now from (10), we find [since u'(x) = λJν'(λx) etc.]
(d/dx)[x(λJν'(λx)Jν(µx) − µJν(λx)Jν'(µx))] = (µ^2 − λ^2) x Jν(λx)Jν(µx).
We differentiate this with respect to µ and then put µ = λ. This leads to
2λ x Jν(λx)Jν(λx) = (d/dx){x[xλ Jν'(λx)Jν'(λx) − Jν(λx)Jν'(λx) − xλ Jν(λx)Jν''(λx)]}.
Integrating between x = 0 and x = 1, we find
2λ ∫₀¹ x Jν^2(λx) dx = λ(Jν'(λ))^2 − Jν(λ)Jν'(λ) − λJν(λ)Jν''(λ),
OR
∫₀¹ x Jν^2(λx) dx = (1/2)[(Jν'(λ))^2 − Jν(λ)Jν'(λ)/λ − Jν(λ)Jν''(λ)].
[This last relation can be written as (NOT needed for the proof!)
∫₀¹ x Jν^2(λx) dx = (1/2)[(Jν'(λ))^2 + (1 − ν^2/λ^2) Jν^2(λ)].]
Jν'(λn) = −J_{ν+1}(λn).
Suppose that f and f' are piecewise continuous on the interval 0 ≤ x ≤ 1. Then for
0 < x < 1,
Σ_{n=1}^∞ cn Jν(λ_{νn} x) = f(x) where f is continuous, and = [f(x+) + f(x−)]/2 where f is
discontinuous.
At x = 0, the series converges to zero for ν > 0 and to f(0+) for ν = 0. On the other
hand, it converges to zero at x = 1.
Lecture XVII
Laplace Transform, inverse Laplace Transform, Existence and Properties of Laplace
Transform
1 Introduction
Differential equations, whether ordinary or partial, describe the ways certain quantities
of interest vary over time. These equations are generally coupled with initial conditions
at time t = 0 and boundary conditions.
The Laplace transform is a powerful technique for solving differential equations. It
transforms an IVP in ODE to algebraic equations. The solution of the algebraic
equations is then back-transformed to the original problem. In the case of PDE, it can
be applied to any independent variable x, y, z or t that varies from 0 to ∞. After
applying the Laplace transform, the original PDE becomes a new PDE with one less
independent variable, or an ODE. The resulting problem can be solved by another
method (such as separation of variables or another transform) and the solution is again
back-transformed to the original problem.
2 Laplace transform
Let a function f be defined for t ≥ 0. We define the Laplace transform of f, denoted
by F(s) or L(f(t)), as
F(s) = L(f(t)) = ∫₀^∞ e^{−st} f(t) dt,   (1)
for those s for which the integral in (1) exists. We also refer to f(t) as the inverse
Laplace transform of F(s), and we write
f(t) = L⁻¹(F(s)).
Comment 1: Laplace transform is defined for complex valued function f (t) and the
parameter s can also be complex. But we restrict our discussion only for the case in
which f (t) is real valued and s is real.
Comment 2: Since the integral in (1) is an improper integral, the existence of the
Laplace transform implies that the following limit exists:
F(s) = ∫₀^∞ e^{−st} f(t) dt = lim_{A→∞} ∫₀^A e^{−st} f(t) dt.
Now
F(s) = ∫₀^∞ e^{−st} f(t) dt = ∫_a^∞ e^{−st} dt = e^{−as}/s,  s > 0.
Example 2. Consider f(t) = t^a, a > −1. Now
F(s) = ∫₀^∞ e^{−st} t^a dt = (1/s^{a+1}) ∫₀^∞ e^{−u} u^{(a+1)−1} du = Γ(a + 1)/s^{a+1},  s > 0.
Hence L(1) = 1/s, L(t) = 1/s^2, and L(t^n) = n!/s^{n+1}, where n is a nonnegative integer.
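The formula L(t^n) = n!/s^{n+1} can be confirmed by direct numerical integration of the defining integral (a sketch, my addition; the interval length and step count are assumptions adequate for s = 2):

```python
import math

# Composite Simpson approximation of the Laplace integral on [0, T].
def laplace(f, s, T=40.0, N=20000):
    h = T / N
    total = f(0.0) + math.exp(-s * T) * f(T)
    for k in range(1, N):
        t = k * h
        total += (4 if k % 2 else 2) * math.exp(-s * t) * f(t)
    return total * h / 3

s = 2.0
for n in (1, 2, 3):
    approx = laplace(lambda t: t**n, s)
    assert abs(approx - math.factorial(n) / s**(n + 1)) < 1e-6
```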
Similarly, for f(t) = cos(ωt),
F(s) = ∫₀^∞ e^{−st} cos(ωt) dt = s/(s^2 + ω^2),  s > 0.
Example 5. Consider f(t) = sin(ωt). Using
∫ sin(at)e^{bt} dt = e^{bt}[b sin(at) − a cos(at)]/(a^2 + b^2),
we find
F(s) = ∫₀^∞ e^{−st} sin(ωt) dt = ω/(s^2 + ω^2),  s > 0.
Example 8. Any polynomial is of exponential order. This is clear from the fact that
e^{at} = Σ_{n=0}^∞ (at)^n/n!  ⟹  t^n ≤ (n!/a^n) e^{at}.
But f(t) = e^{t²} is not of exponential order.
Now we write
I = ∫₀^∞ f(t)e^{−st} dt = I1 + I2,
where
I1 = ∫₀^A f(t)e^{−st} dt  and  I2 = ∫_A^∞ f(t)e^{−st} dt.
Since f is piecewise continuous, I1 exists. For the second integral I2, we note that for
t ≥ A
|e^{−st} f(t)| ≤ M e^{−(s−c)t}.
Thus
∫_A^∞ |f(t)e^{−st}| dt ≤ M ∫_A^∞ e^{−(s−c)t} dt ≤ M ∫₀^∞ e^{−(s−c)t} dt = M/(s − c),  s > c.
Since the integral in I2 converges absolutely for s > c, I2 converges for s > c. Thus,
both I1 and I2 exist and hence I exists for s > c.
Comment: The above condition is not necessary. For example, consider f(t) = 1/√t,
which is not piecewise continuous in [0, ∞). But
∫₀^∞ (e^{−st}/√t) dt = (1/√s) ∫₀^∞ e^{−u} u^{−1/2} du = Γ(1/2)/√s = √(π/s),  s > 0.
Proof: Trivial.
Example 9. Consider f(t) = cosh(ωt). Then
L(cosh(ωt)) = (1/2)[L(e^{ωt}) + L(e^{−ωt})] = (1/2)[1/(s − ω) + 1/(s + ω)] = s/(s^2 − ω^2).
Example 10. Consider f(t) = sinh(ωt). Proceeding as above, we find
L(sinh(ωt)) = ω/(s^2 − ω^2).
Theorem 2. (First shifting theorem) If L(f(t)) = F(s), then
L(e^{at} f(t)) = F(s − a),  and  e^{at} f(t) = L⁻¹(F(s − a)).
Proof: Suppose L(f(t)) = F(s) holds for s > k. Now
L(e^{at} f(t)) = ∫₀^∞ e^{−st} e^{at} f(t) dt = ∫₀^∞ e^{−(s−a)t} f(t) dt = F(s − a),  s − a > k.
Example 11. Consider f(t) = e^{−5t} cos(4t). Since
L(cos(4t)) = s/(s^2 + 16), the first shifting theorem gives
L(e^{−5t} cos(4t)) = (s + 5)/[(s + 5)^2 + 16].
Proposition 2. If L(f(t)) = F(s), then F(s) → 0 as s → ∞.
Proof: We prove this for a piecewise continuous function which is of exponential order.
But the result is valid for any function for which the Laplace transform exists. Now
I = ∫₀^∞ e^{−st} f(t) dt ⟹ |I| ≤ ∫₀^∞ e^{−st} |f(t)| dt.
Since the function is of exponential order, there exist M1, α, A such that |f(t)| ≤ M1 e^{αt}
for t ≥ A. Also, since the function is piecewise continuous in [0, A], we must have
|f(t)| ≤ M2 e^{βt} for 0 ≤ t ≤ A, except possibly at some finite number of points
where f(t) is not defined. Now we take M = max{M1, M2} and γ = max{α, β}. Then
we have
|F(s)| = |I| ≤ ∫₀^∞ e^{−st} |f(t)| dt ≤ M ∫₀^∞ e^{−(s−γ)t} dt = M/(s − γ),  s > γ.
Thus, F(s) → 0 as s → ∞.
Comment: Any function F(s) without this behaviour cannot be the Laplace transform
of a function. For example, s/(s − 1), sin s, and s^2/(1 + s^2) are not Laplace transforms
of any function.
Lecture XVIII
Unit step function, Laplace Transform of Derivatives and Integration, Derivative and
Integration of Laplace Transforms
[Figure: the unit step function u_a(t), together with f(t), u_a(t)f(t) and u_a(t)f(t − a).]
Conversely,
L⁻¹(e^{−as} F(s)) = u_a(t)f(t − a).
then F(s) = G(s). Now we write g(t) in such a way that the second shifting theorem (see
Theorem 1) can be applied. Hence, we manipulate g(t) in the following way:
g(t) = u0(t)t^2 − u1(t)(t − 1 + 1)^2 + u1(t) sin[2(t − 1) + 2] + uπ(t) sin[2(t − π)] − uπ(t) cos(t − π)
     = u0(t)t^2 − u1(t)(t − 1)^2 − 2u1(t)(t − 1) − u1(t) + cos(2)u1(t) sin[2(t − 1)]
       + sin(2)u1(t) cos[2(t − 1)] + uπ(t) sin[2(t − π)] − uπ(t) cos(t − π)
= Σ_{i=0}^{n−1} [e^{−st} f(t)]_{x_i}^{x_{i+1}} + s Σ_{i=0}^{n−1} ∫_{x_i}^{x_{i+1}} e^{−st} f(t) dt
= e^{−sR} f(R) − f(0) + s ∫₀^R e^{−st} f(t) dt
Corollary 1. Let f and its derivatives f^{(1)}, f^{(2)}, ···, f^{(n−1)} be continuous for t ≥ 0 and
of exponential order. Further suppose that f^{(n)} is piecewise continuous in [0, ∞).
Then the Laplace transform of f^{(n)} exists and is given by
L(f^{(n)}) = s^n L(f) − s^{n−1} f(0) − s^{n−2} f^{(1)}(0) − ··· − f^{(n−1)}(0).   (3)
For f(t) = t cos(ωt), this gives
s^2 L(f) − sf(0) − f'(0) = −ω^2 L(f) − 2ω^2/(s^2 + ω^2).
Now f(0) = 0, f'(0) = 1. Simplifying, we find
L(f) = (s^2 − ω^2)/(s^2 + ω^2)^2
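The last simplification can be checked by evaluating both sides at sample values of s and ω (my addition, a sketch of the algebra only):

```python
# Solving s^2 F - 1 = -w^2 F - 2 w^2/(s^2 + w^2) for F should give
# F = (s^2 - w^2)/(s^2 + w^2)^2; compare numerically at sample points.
for s in (0.5, 1.0, 3.0):
    for w in (1.0, 2.0):
        F = (1 - 2 * w**2 / (s**2 + w**2)) / (s**2 + w**2)
        assert abs(F - (s**2 - w**2) / (s**2 + w**2)**2) < 1e-12
```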
Theorem 3. Let F(s) be the Laplace transform of f. If f is piecewise continuous in
[0, ∞) and of exponential order, then
L(∫₀^t f(τ) dτ) = F(s)/s.   (5)
Proof: Since f is piecewise continuous,
g(t) = ∫₀^t f(τ) dτ
is continuous. Since f(t) is piecewise continuous, |f(t)| ≤ M e^{kt} for all t ≥ 0 except
possibly at a finite number of points where f has jump discontinuities. Hence,
|g(t)| ≤ M ∫₀^t e^{kτ} dτ = (M/k)(e^{kt} − 1) ≤ (M/k) e^{kt}.
Thus, g is continuous and of exponential order. Hence, the Laplace transform of g
exists. Further g'(t) = f(t) and g(0) = 0. Using (2), we find
L(g') = sL(g) − g(0) ⟹ L(f) = sL(g) ⟹ G(s) = F(s)/s.
Solution: Since
L(t) = 1/s^2 ⟹ L(te^{−t}) = 1/(s + 1)^2
Hence for f(t) = te^{−t}, we have F(s) = 1/(s + 1)^2. Thus,
1/[s(s + 1)^2] = F(s)/s ⟹ L⁻¹(1/[s(s + 1)^2]) = ∫₀^t τe^{−τ} dτ = 1 − (t + 1)e^{−t}
Comment: The derivative formula for F(s) can be derived by differentiating under
the integral sign, i.e.
F'(s) = (d/ds) ∫₀^∞ e^{−st} f(t) dt
      = ∫₀^∞ (∂/∂s) e^{−st} f(t) dt
      = ∫₀^∞ e^{−st} (−t f(t)) dt
      = −L(t f(t)).
F(s) = s/(s^2 + ω^2) ⟹ F'(s) = (ω^2 − s^2)/(s^2 + ω^2)^2.
L(tf(t)) = 1/(s − b) − 1/(s − a) = L(e^{bt} − e^{at}) ⟹ f(t) = (e^{bt} − e^{at})/t.
Theorem 5. If F(s) is the Laplace transform of f and the limit of f(t)/t exists as
t → 0+, then
L(f(t)/t) = ∫_s^∞ F(p) dp,  and  L⁻¹(∫_s^∞ F(p) dp) = f(t)/t.   (7)
Proof: Let
g(t) = f(t)/t,  and  g(0) = lim_{t→0+} f(t)/t.
Now
F(s) = L(f(t)) ⟹ F(s) = L(t g(t)) = −G'(s),   [using (6)]
Hence,
G(s) − G(A) = ∫_s^A F(p) dp.
Letting A → ∞ and noting that G(A) → 0 (Proposition 2), we find
G(s) = ∫_s^∞ F(p) dp ⟹ L(f(t)/t) = ∫_s^∞ F(p) dp.
Solution: Let f(t) = sin(ωt). Using the formula (7), we find
L(sin(ωt)/t) = ∫_s^∞ ω/(p^2 + ω^2) dp = π/2 − tan⁻¹(s/ω).
Example 8. Consider the same problem as in Example 6, i.e. the inverse Laplace
transform of
F(s) = ln[(s − a)/(s − b)]
Solution: Note that
L(f(t)) = ln[(s − a)/(s − b)] = ∫_s^∞ [1/(p − b) − 1/(p − a)] dp = L(e^{bt}/t) − L(e^{at}/t)
Hence,
L(f(t)) = L((e^{bt} − e^{at})/t) ⟹ f(t) = (e^{bt} − e^{at})/t.
Lecture XIX
Laplace Transform of Periodic Functions, Convolution, Applications
F(s) = Σ_{n=0}^∞ ∫₀^T f(u + nT) e^{−s(u+nT)} du   [u = t − nT]
     = Σ_{n=0}^∞ e^{−snT} ∫₀^T f(u) e^{−su} du
     = (∫₀^T f(u) e^{−su} du) Σ_{n=0}^∞ e^{−snT}
     = ∫₀^T f(u) e^{−su} du / (1 − e^{−sT}).   (1)
The last line follows from the fact that
Σ_{n=0}^∞ e^{−snT}
is a geometric series with common ratio e^{−sT} < 1 for s > 0.
Example 1. Consider f (t) = sin(!t), which is a periodic function of period 2⇡/!.
[Figure: the triangular wave f(t) of period 2, with f(t) = t on [0, 1] and f(t) = 2 − t on [1, 2].]
Example 2. Find the Laplace transform of the triangular wave shown in the figure.
Solution: Here f(t) is a periodic function of period T = 2. Hence, using (1), we find
F(s) = [1/(1 − e^{−2s})] ∫₀² f(t)e^{−st} dt = [1/(1 − e^{−2s})] (∫₀¹ te^{−st} dt + ∫₁² (2 − t)e^{−st} dt)
Since f' is piecewise continuous and of exponential order, its Laplace transform exists.
Also, f' is periodic with period T = 2. Hence,
L(f') = [1/(1 − e^{−2s})] ∫₀² f'(t)e^{−st} dt = [1/(1 − e^{−2s})] (∫₀¹ e^{−st} dt − ∫₁² e^{−st} dt) = (1 − e^{−s})^2/[s(1 − e^{−2s})].
Hence,
L(f') = (1/s) tanh(s/2) ⟹ sF(s) − f(0) = (1/s) tanh(s/2) ⟹ F(s) = (1/s^2) tanh(s/2)
Comment: Is it possible to do similar calculations (like in aliter) in Example 2? If
not, why not?
2 Convolution
Suppose we know that a Laplace transform H(s) can be written as H(s) = F(s)G(s),
where L(f(t)) = F(s) and L(g(t)) = G(s). We need to know the relation of
h(t) = L⁻¹(H(s)) to f(t) and g(t).
Definition 1. (Convolution) Let f and g be two functions defined in [0, ∞). Then
the convolution of f and g, denoted by f ∗ g, is defined by
(f ∗ g)(t) = ∫₀^t f(τ)g(t − τ) dτ   (2)
The region of integration is the area in the first quadrant bounded by the t-axis and
the line τ = t. [Figure: the region of integration in the (t, τ) plane, below the line τ = t.]
The variable limit of integration is applied on τ, which varies from τ = 0 to τ = t.
Let us change the order of integration, thus applying the variable limit on t. Then t
would vary from t = τ to t = ∞ and τ would vary from τ = 0 to τ = ∞. Hence, we have
L((f ∗ g)(t)) = ∫₀^∞ (∫_τ^∞ e^{−st} g(t − τ) dt) f(τ) dτ
            = ∫₀^∞ (∫₀^∞ e^{−su} g(u) du) f(τ)e^{−sτ} dτ,   [t − τ = u]
            = (∫₀^∞ e^{−su} g(u) du)(∫₀^∞ e^{−sτ} f(τ) dτ)
            = F(s)G(s)
Example 4. Consider the same problem as given in Example 4 of Lecture Note 18,
i.e. find the inverse Laplace transform of 1/[s(s + 1)^2].
Solution: We write H(s) = F(s)G(s), where F(s) = 1/s and G(s) = 1/(s + 1)^2. Thus
f(t) = 1 and g(t) = te^{−t}. Hence, using the convolution theorem, we find
h(t) = ∫₀^t f(t − τ)g(τ) dτ = ∫₀^t τe^{−τ} dτ = 1 − (t + 1)e^{−t}.
Note: We have used f(t − τ)g(τ) in the convolution formula since f(t) = 1. This helps
a little in the evaluation of the integral.
Solution: Let H(s) = F (s)G(s), where F (s) = 1/(s2 + ! 2 ) and G(s) = 1/(s2 + ! 2 ).
3 Applications
Example 6. (Differential equation) Solve the IVP
y'' + y = t,  y(0) = 0,  y'(0) = 2.
Taking the Laplace transform (with g(t) = t, G(s) = L(g)), we find
s^2 Y − 2 + Y = G(s) ⟹ Y = G(s)/(s^2 + 1) + 2/(s^2 + 1)
Taking the inverse transform and using convolution, we find
y(t) = ∫₀^t g(t − τ) sin(τ) dτ + 2 sin t ⟹ y(t) = ∫₀^t (t − τ) sin(τ) dτ + 2 sin t
Evaluating the integral (it equals t − sin t), we obtain
y(t) = t + sin t
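The solution y(t) = t + sin t is easy to verify numerically against the IVP (my addition, not part of the notes):

```python
import math

y = lambda t: t + math.sin(t)
h = 1e-4
# y'' + y = t at several points (central differences)
for t in (0.5, 1.0, 2.0):
    d2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    assert abs(d2 + y(t) - t) < 1e-5
# initial conditions y(0) = 0, y'(0) = 2
assert abs(y(0.0)) < 1e-12
assert abs((y(h) - y(-h)) / (2 * h) - 2) < 1e-6
```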
Hence, we find
y(x) = (1 − 2x^2) + C(x^2/2 − 1/4),
OR
y(x) = (1 − C/4) + (C/2 − 2)x^2.
Now y(0) = 1 ⟹ C = 0. Hence
y(x) = 1 − 2x^2.
Comment: If we expand e^{−s²/4}/s^3, then we find
e^{−s²/4}/s^3 = 1/s^3 − 1/(4s) + non-negative powers of s.
If we assume L⁻¹(s^k) = 0, k = 0, 1, 2, ···, then we find
L(x^2/2 − 1/4) = e^{−s²/4}/s^3.
Hence,
y(t) = 2 − e^{−t}