Mathematics-III
Dr. P D Srivastava
Dr. P V S N Murthy
Dr. J Kumar
Engineering Mathematics-III
Lesson 1
Finite Differences
1.1 Introduction
y″ + λ²y = 0   (1.1)
y(t) = c₁ sin λt + c₂ cos λt   (1.2)
This solution is valid on any interval, and (1.2) gives the function value at any point in the interval. If equation (1.1) is solved numerically, the time interval is first divided into a pre-determined finite number of non-overlapping subintervals (equally spaced or otherwise) and the numerical solution is obtained at the end points of these subintervals.
If the interval considered is, say, [t0, tN] and it is divided into N non-overlapping, equally spaced subintervals, then the numerical solution is obtained at the set of points {t0, t1, t2, ..., t(N−1), tN}, with the difference between t(j+1) and tj being constant for every j, j = 0, 1, ..., N − 1.
(Figure: the nodal points t0, t1, ..., tN on the interval, for equally spaced and unequally spaced subintervals.)
The set of points {t0, t1, t2, ..., tN} is called the set of nodal points. The numerical solution at non-nodal points can also be obtained, for example by interpolation of the data at the nodal points. This holds whether one is solving an ordinary or a partial differential equation. Many of the finite difference operators below apply to equally spaced data points.
In the remaining part of this lesson, we define some of the finite difference operators, their utility and their relationships.
t0 t1 t2 t3 ... t j −1 tj t j +1 …… tN
y0 y1 y2 y3 … y j −1 yj y j +1 …… yN
The shift operator E is defined through its action on the data:
Ey_j = y_(j+1).   (1.3)
Similarly, E⁻¹y_j = y_(j−1)   (1.4)
and Eⁿy_j = y_(j+n).   (1.5)
∆y_i = y_(i+1) − y_i   (1.6)
This is called the first forward difference operator. The second forward difference operator is ∆², whose action is defined as
∆²y_i = ∆(∆y_i) = ∆y_(i+1) − ∆y_i,   (1.7)
or ∆²y_i = y_(i+2) − 2y_(i+1) + y_i.   (1.8)
In a similar way, for any positive integer n, the n-th forward difference operator is defined as
∆ⁿy_i = ∆ⁿ⁻¹y_(i+1) − ∆ⁿ⁻¹y_i.   (1.9)
∇y_i = y_i − y_(i−1)   (1.10)
This is the first backward difference operator. The second backward difference is
∇²y_i = ∇y_i − ∇y_(i−1) = y_i − 2y_(i−1) + y_(i−2),   (1.11)
and in general ∇ⁿy_i = ∇ⁿ⁻¹y_i − ∇ⁿ⁻¹y_(i−1).   (1.12)
Observe that
∇y_(i+1) = y_(i+1) − y_i = ∆y_i.   (1.13)
The central difference operator δ is defined as
δy_i = y_(i+1/2) − y_(i−1/2).   (1.14)
Here, y_(i+1/2) = y(t_i + h/2) and y_(i−1/2) = y(t_i − h/2).
Also,
δ²y_i = δy_(i+1/2) − δy_(i−1/2) = (y_(i+1) − y_i) − (y_i − y_(i−1)) = y_(i+1) − 2y_i + y_(i−1),   (1.15)
and in general
δⁿy_i = δⁿ⁻¹y_(i+1/2) − δⁿ⁻¹y_(i−1/2).   (1.16)
Observe that δ²y_i = y_(i+1) − 2y_i + y_(i−1) = ∆²y_(i−1).   (1.17)
Example 1: Construct the forward difference table for the following set of
equally spaced data given by:
t0 t1 t2 t3 t4 t5
y0 y1 y2 y3 y4 y5
Solution:
t    y    ∆               ∆²                  ∆³                    ∆⁴                    ∆⁵
t0   y0
t1   y1   ∆y0 = y1 − y0
t2   y2   ∆y1 = y2 − y1   ∆²y0 = ∆y1 − ∆y0
t3   y3   ∆y2 = y3 − y2   ∆²y1 = ∆y2 − ∆y1   ∆³y0 = ∆²y1 − ∆²y0
t4   y4   ∆y3 = y4 − y3   ∆²y2 = ∆y3 − ∆y2   ∆³y1 = ∆²y2 − ∆²y1   ∆⁴y0 = ∆³y1 − ∆³y0
t5   y5   ∆y4 = y5 − y4   ∆²y3 = ∆y4 − ∆y3   ∆³y2 = ∆²y3 − ∆²y2   ∆⁴y1 = ∆³y2 − ∆³y1   ∆⁵y0 = ∆⁴y1 − ∆⁴y0
From the forward difference table constructed above, it is noted that for a data set with 6 data points the maximum-order forward difference is ∆⁵, and all differences of order 6 and above are zero. The entries with the same subscript on y lie on lines sloping downward.
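The construction of the table can be sketched in code (an illustration, not part of the original lesson): each column holds the differences of the previous one, so column k has N − k entries. For the cubic values y = t³ at t = 0, ..., 5 (six points, as in Example 1), the third differences are constant and the higher ones vanish.

```python
# Sketch: build the forward difference table column by column.
def forward_difference_table(y):
    """Return [y, Δy, Δ²y, ...] for equally spaced data y."""
    table = [list(y)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

cols = forward_difference_table([0, 1, 8, 27, 64, 125])  # y = t^3, t = 0..5
print(cols[3])  # third differences are constant: [6, 6, 6]
print(cols[5])  # fifth difference (the last possible column): [0]
```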
Example 2: Construct the backward difference table for the given data:
ti -1 0 1 2 3
yi 9.1 8.2 7.3 6.4 5.5
Solution:
t    y    ∇            ∇²         ∇³         ∇⁴
-1   9.1
0    8.2  ∇y1 = −0.9
1    7.3  ∇y2 = −0.9  ∇²y2 = 0
2    6.4  ∇y3 = −0.9  ∇²y3 = 0   ∇³y3 = 0
3    5.5  ∇y4 = −0.9  ∇²y4 = 0   ∇³y4 = 0   ∇⁴y4 = 0
It is noted that for the given data, backward differences up to the 4th order exist. It is also evident that from the second order onward the differences are all zero, the reason being that all the first-order differences are the same.
Example 3: Construct the difference table for the function y(t) = t² − 1 given at the points t = −1, 0, 1, 2.
Solution: For the given function, the data set is constructed as:
t:    t0 = −1   t1 = 0   t2 = 1   t3 = 2
y(t): y0 = 0    y1 = −1  y2 = 0   y3 = 3
The difference table is:
t    y    ∆          ∆²         ∆³
−1   0
0   −1   ∆y0 = −1
1    0   ∆y1 = 1    ∆²y0 = 2
2    3   ∆y2 = 3    ∆²y1 = 2   ∆³y0 = 0
The illustration reveals that the difference table is the same for both forward and
backward differences, but the entries are labelled differently.
Example 4: Form the central differences for the data given in the example 1.
Solution:
t    y    δ                     δ²                          δ³          δ⁴
t0   y0
t1   y1   δy_(1/2) = y1 − y0
t2   y2   δy_(3/2) = y2 − y1   δ²y1 = δy_(3/2) − δy_(1/2)
t3   y3   δy_(5/2) = y3 − y2   δ²y2 = δy_(5/2) − δy_(3/2)   δ³y_(3/2)
t4   y4   δy_(7/2) = y4 − y3   δ²y3 = δy_(7/2) − δy_(5/2)   δ³y_(5/2)   δ⁴y2
In the central difference table, the entries with the same subscript lie on the same horizontal line.
Lesson 2
2.1 Introduction
In the previous lecture, we have noticed from the difference table that these
difference operators are related. In this lecture we establish the relations
between these operators.
Example 1: Show that the shift operator is related to the forward difference operator as ∆ = E − 1 (1 being the identity operator) and to the backward difference operator ∇ as ∇ = 1 − E⁻¹.
Solution:
By definition, the forward difference operator acting on the function data y_i gives
∆y_i = y_(i+1) − y_i = Ey_i − y_i = (E − 1)y_i.
∴ ∆ = E − 1.
Similarly, ∇y_i = y_i − y_(i−1) = y_i − E⁻¹y_i = (1 − E⁻¹)y_i, so ∇ = 1 − E⁻¹.
Example 2: Establish δ = E^(1/2) − E^(−1/2).
Solution: We know δy_i = y_(i+1/2) − y_(i−1/2) = E^(1/2)y_i − E^(−1/2)y_i = (E^(1/2) − E^(−1/2))y_i.
∴ δ = E^(1/2) − E^(−1/2).
From these examples, one can establish the relation between ∆, ∇ and δ as
δ = (1 + ∆)^(1/2) − (1 − ∇)^(1/2).
Example 3: Verify ∇E = E∇ = ∆ = δE^(1/2).
Solution:
∇Ey_i = ∇(y_(i+1)) = y_(i+1) − y_i = ∆y_i.
And E∇y_i = E(y_i − y_(i−1)) = y_(i+1) − y_i = ∆y_i.
Similarly, δE^(1/2)y_i = δy_(i+1/2) = y_(i+1) − y_i = ∆y_i.
Thus we have ∇E = E∇ = ∆ = δE^(1/2).
Example 4: The averaging operator µ is defined as µ = (1/2)(E^(1/2) + E^(−1/2)). Show that 2µδ = ∆ + ∇.
Solution:
2µδy_i = 2µ(y_(i+1/2) − y_(i−1/2))
= 2µy_(i+1/2) − 2µy_(i−1/2)
= (E^(1/2) + E^(−1/2))y_(i+1/2) − (E^(1/2) + E^(−1/2))y_(i−1/2)
= (y_(i+1) + y_i) − (y_i + y_(i−1))
= y_(i+1) − y_(i−1)
= ∆y_i + ∇y_i.
∴ 2µδ = ∆ + ∇.
Exercise 1: Show that ∆ = δ²/2 + δ√(1 + δ²/4) and ∇ = −δ²/2 + δ√(1 + δ²/4).
In Example 3 of the previous lesson, we observed that the third-order difference of a second-degree polynomial is zero. This parallels the fact that the third derivative of a second-degree polynomial is zero, and it suggests that the differential operator D is connected with the difference operators.
Let us denote dy/dt by Dy, d²y/dt² by D²y, and so on. By Taylor expansion,
y(t_(i+1)) = y(t_i + h) = y(t_i) + h (dy/dt)|_(t_i) + (h²/2!)(d²y/dt²)|_(t_i) + ...   (2.1)
Using the operators E and D, the above equation (2.1) can be written as
Ey_i = (1 + hD + (h²/2!)D² + (h³/3!)D³ + ...)y_i   (2.2)
or Ey_i = (e^(hD))y_i, i.e.
E = e^(hD).   (2.3)
Hence
∆ = e^(hD) − 1 = hD + (h²/2!)D² + (h³/3!)D³ + ...   (2.4)
Conversely,
hD = ln(1 + ∆) = ∆ − ∆²/2 + ∆³/3 − ...,
or D = (1/h)(∆ − ∆²/2 + ∆³/3 − ...).   (2.5)
Squaring (2.4),
∆² = (hD + (h²/2!)D² + (h³/3!)D³ + ...)²   (2.6)
or ∆² = h²D² + h³D³ + (7/12)h⁴D⁴ + ...
Exercise 2: Show that D² = (1/h²)(∆² − ∆³ + (11/12)∆⁴ − ...).
Exercise 3: Establish
(i) E^(1/2) = e^(hD/2)
(ii) ∇ = hD − (h²/2!)D² + (h³/3!)D³ − ...
(iii) D² = (1/h²)(∇² + ∇³ + (11/12)∇⁴ + ...)
(iv) δ² = h²D² + (h⁴/12)D⁴ + (h⁶/360)D⁶ + ...
Thus the first and second order derivatives of a function y(t) at t_i are written using ∆ as
dy_i/dt = Dy_i = (1/h)(∆y_i − ∆²y_i/2 + ∆³y_i/3 − ...)   (2.7)
d²y_i/dt² = D²y_i = (1/h²)(∆²y_i − ∆³y_i + (11/12)∆⁴y_i − ...)   (2.8)
In terms of ∇:
Dy_i = (1/h)(∇y_i + ∇²y_i/2 + ∇³y_i/3 + ...)   (2.9)
and D²y_i = (1/h²)(∇²y_i + ∇³y_i + (11/12)∇⁴y_i + ...)   (2.10)
In terms of δ:
Dy_i = (µ/h)(δy_i − δ³y_i/6 + δ⁵y_i/30 − ...)   (2.11)
and D²y_i = (1/h²)(δ²y_i − δ⁴y_i/12 + δ⁶y_i/90 − ...)   (2.12)
In the next example we illustrate the use of the relations among these operators.
Example: For the function y(t) = t³ + 2t² + 3t tabulated at t = −2, −1, 0, 1, 2 (so h = 1), find dy/dt and d²y/dt² at t = −1 and at t = 2.
Solution:
Step 1: Construct the data set for the given function:
t: −2  −1  0  1  2
y: −6  −2  0  6  22
(in particular, t4 = 2, y4 = 22).
Step 2: Construct the difference table. Note that for the given cubic the third derivative is constant and the fourth and higher derivatives are zero; similarly the third difference is constant and all higher-order differences vanish. The expressions for dy/dt and d²y/dt² are as given in equations (2.7)–(2.10).
Difference Table:
t    y    ∆          ∆²          ∆³         ∆⁴
−2  −6
−1  −2   ∆y0 = 4
0    0   ∆y1 = 2    ∆²y0 = −2
1    6   ∆y2 = 6    ∆²y1 = 4    ∆³y0 = 6
2   22   ∆y3 = 16   ∆²y2 = 10   ∆³y1 = 6   ∆⁴y0 = 0
Note: ∆³y0 = ∆³y1 = 6 (constant), and ∆⁴y0 = ∆⁵y0 = ... = 0.
Step 3: Calculations:
(i) At t = t1 = −1, using (2.7) and (2.8) with h = 1:
dy/dt|_(t=t1) = (1/h)(∆y1 − ∆²y1/2 + ∆³y1/3) = 2 − (1/2)·4 + (1/3)·6 = 2,
d²y/dt²|_(t=t1) = (1/h²)(∆²y1 − ∆³y1) = 4 − 6 = −2.
(ii) From the above table, ∇y4 = 16, ∇²y4 = 10, ∇³y4 = 6. Using (2.9) and (2.10):
dy/dt|_(t=t4) = (1/h)(∇y4 + ∇²y4/2 + ∇³y4/3) = 16 + (1/2)·10 + (1/3)·6 = 23,
d²y/dt²|_(t=t4) = (1/h²)(∇²y4 + ∇³y4) = 10 + 6 = 16.
These values also coincide with the exact values of y′(2) = 23 and y′′(2) = 16 .
Thus these differences give us a way of evaluating the derivative values from
the given data set.
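The two calculations in Step 3 can be mirrored in a short script (a sketch with assumed helper names, not from the lesson): entries at index 1 of each difference column are the forward differences at t1, and the last entries are the backward differences at t4.

```python
# Sketch: difference columns for the cubic data, then the truncated
# series (2.7)-(2.10) for the derivatives at t1 and t4 (h = 1).
def diff_columns(y, order):
    cols, col = [], list(y)
    for _ in range(order):
        col = [col[j + 1] - col[j] for j in range(len(col) - 1)]
        cols.append(col)
    return cols

y, h = [-6, -2, 0, 6, 22], 1.0
c = diff_columns(y, 3)
# forward at t1: Δy1, Δ²y1, Δ³y1 -> index 1 of each column
dy_t1  = (c[0][1] - c[1][1] / 2 + c[2][1] / 3) / h   # (2.7): 2 - 2 + 2
d2y_t1 = (c[1][1] - c[2][1]) / h**2                  # (2.8): 4 - 6
# backward at t4: ∇y4, ∇²y4, ∇³y4 -> last entry of each column
dy_t4  = (c[0][-1] + c[1][-1] / 2 + c[2][-1] / 3) / h  # (2.9): 16 + 5 + 2
d2y_t4 = (c[1][-1] + c[2][-1]) / h**2                  # (2.10): 10 + 6
print(dy_t1, d2y_t1, dy_t4, d2y_t4)  # 2.0 -2.0 23.0 16.0
```

Since the fourth differences vanish for a cubic, the truncated series are exact here.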
Exercise 4: Find dy/dt and d²y/dt² at t = … from the data set:
t: 0  1  2  3  4
y: −1  2  −3  4  5
Exercise 5: Find d²y/dt² at t = … from the given data:
t: −1  0  1  2
y: −1  1  3  5
Lesson 3
3.1 Introduction
In a numerical procedure, many types of errors can occur, among which the
prominent ones are (i) the truncation error and (ii) round-off error. We briefly
introduce these two types of errors below. The other types of errors are not
discussed in this lesson.
We noted that while writing the Taylor series expansion for a function about a
point in its neighbourhood, the series is truncated after a finite number of terms
in the series. This induces certain amount of error in the solution and this error
is called the truncation error. Thus the truncation error is the quantity T (say)
which must be added to the approximate solution in order to get the exact
solution.
For example, consider the function f(x) = (1 + x)^(1/2), x ∈ [0, 0.1]. We now approximate it by its second-degree Taylor polynomial about x = 0 and estimate the truncation error.
We have f(0) = 1, and the higher-order derivatives of the function and their values at x = 0 are found as:
f′(x) = 1/(2√(1 + x)), f′(0) = 1/2;
f″(x) = −1/(4(1 + x)^(3/2)), f″(0) = −1/4;
f‴(x) = 3/(8(1 + x)^(5/2)).
Now the second-order approximation with the remainder term is written as:
(1 + x)^(1/2) = 1 + x/2 − x²/8 + (1/16)·x³/(1 + ξ)^(5/2), for some 0 < ξ < 0.1.
Thus the truncation error after the 2nd-degree approximation is
T = x³/(16(1 + ξ)^(5/2)).
Its magnitude on [0, 0.1] is bounded by
|T| ≤ max over x ∈ [0, 0.1] of x³/(16(1 + x)^(5/2)) = (0.1)³/16 = 0.0000625.
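The bound can be checked numerically (a standalone sketch; the sampling grid is an arbitrary choice): the actual remainder of the quadratic approximation stays below (0.1)³/16 on the whole interval.

```python
# Sketch: compare the actual truncation error of the quadratic Taylor
# polynomial of (1+x)^(1/2) with the bound derived above.
def remainder(x):
    return abs((1 + x) ** 0.5 - (1 + x / 2 - x * x / 8))

bound = 0.1 ** 3 / 16                                   # = 0.0000625
worst = max(remainder(i * 0.001) for i in range(101))   # x in [0, 0.1]
print(worst <= bound)  # True
```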
There is another type of error, called the round-off error, which is due to the finite precision of the computer while doing arithmetic operations on numbers. The round-off error is the quantity, denoted by R, which must be added to the finite representation of a computed number in order to make it the true representation of that number. It arises from the rounding of numbers after a certain decimal place during computation. Present-day computers use double precision, because of which the round-off error is kept small.
In general, the error is defined as the difference between the True value and the
Approximate value.
While dealing with finite difference operators, the solution is usually associated with a certain amount of truncation error. This error can be reduced by increasing the order of approximation in the Taylor series expansion. The difference operators also magnify any error present in the initial data set, which in turn affects the approximate solution. The following example illustrates how a small error ∈ introduced in the initial data grows with higher-order differences.
x −3 −2 −1 0 1 2 3
f ( x) 0 0 0 ∈ 0 0 0
Solution: The forward differences for the above data set are:
x:    −3   −2   −1    0    1    2    3
f:     0    0    0    ∈    0    0    0
∆f:    0    0    ∈   −∈    0    0
∆²f:   0    ∈  −2∈    ∈    0
∆³f:   ∈  −3∈   3∈   −∈
∆⁴f: −4∈   6∈  −4∈
∆⁵f: 10∈ −10∈
∆⁶f: −20∈
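The propagation table can be regenerated mechanically (a sketch; the value ∈ = 10⁻³ is chosen purely for illustration):

```python
# Sketch: propagate a single perturbation eps through successive
# difference columns; the coefficients are binomial with alternating signs.
eps = 1e-3
f = [0, 0, 0, eps, 0, 0, 0]
cols, col = [], f
for _ in range(6):
    col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
    cols.append(col)

print([round(c / eps) for c in cols[2]])  # 3rd differences: [1, -3, 3, -1]
print([round(c / eps) for c in cols[5]])  # 6th difference:  [-20]
```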
It is observed that: i) the error spreads and grows in magnitude as the order of the difference increases; ii) the error coefficients in any order of difference are the binomial coefficients with alternating signs.
Exercises:
2. If the number π = 4 tan⁻¹(1) is approximated using 5 decimal digits, find the percentage relative error due to rounding.
3. Tabulate the propagation of initial error ∈ in the following data set with
increasing order differences.
x 0 1 2 3 4
f ( x) −1 1 ∈ 2 2
Lesson 4
4.1 Introduction
In this lesson, we learn to find the roots of a given non-linear equation involving polynomial and transcendental functions.
For example, (i) x³ − 4x² − 1/x = 0 is a polynomial equation, while (ii) sin x − x cos(1/x) = 0 is a transcendental equation.
Definition: A number ξ is said to be a root of (or solution of) the equation f(x) = 0 if f(ξ) = 0. It is also called a zero of f(x). A root of f(x) = 0 is a value of x at which the graph of y = f(x) intersects the x-axis.
(Figure: the graph of y = f(x) crossing the x-axis at x = ξ.)
If the equation f(x) = 0 can be expressed as f(x) = (x − ξ)ⁿ g(x) = 0 for some g with g(ξ) ≠ 0, then ξ is called a root of multiplicity n.
The root of f(x) = 0 is found either by a direct method, which gives the exact value of x = ξ, or by writing f(x) = 0 as an iterative procedure x_(n+1) = g(x_n), n = 0, 1, 2, .... Roots of cubic or higher-degree polynomial equations and of transcendental equations cannot in general be found using direct methods, whereas iterative methods are quite handy, though they provide only an approximate value of the root of f(x) = 0.
For example, the equation x² + 2x − 4 = 0 can be rewritten in the form x = φ(x) as
(i) x = 4/(x + 2), or (ii) x = (1/2)(4 − x²), or (iii) x = √(4 − 2x).
Each choice gives an iteration x_(n+1) = φ_i(x_n), i = 1, 2, 3, where
φ1(x) = 4/(x + 2), φ2(x) = (1/2)(4 − x²) and φ3(x) = √(4 − 2x).
The convergence of the resulting sequence of iterates depends on the choice of the function φ(x) and also on the initial approximation x0 to the root. Below we state a sufficient condition on φ(x) for the convergence of {xk} to the root ξ.
Theorem (sufficient condition): Let φ(x) be continuously differentiable in an interval containing the root, with |φ′(x)| < 1 on that interval. Then, for any initial approximation x0 in the interval, the sequence {xk} generated by x_(k+1) = φ(x_k) converges to the root ξ of x = φ(x).
Example 1: Find a root of the equation 2x = cos x + 3, correct to two decimal places.
Solution:
Given 2x = cos x + 3, or x = (1/2)(cos x + 3).
Choose φ(x) = (1/2)(cos x + 3) and write x_(n+1) = (1/2)(cos x_n + 3).
Since |φ′(x)| = (1/2)|sin x| < 1, the iteration converges. We start with x0 = π/2 and generate the sequence of iterates
x1 = (1/2)(cos(π/2) + 3) = 1.5
x2 = (1/2)(cos(1.5) + 3) = 1.535
x3 = (1/2)(cos(1.535) + 3) = 1.518
x4 = (1/2)(cos(1.518) + 3) = 1.526
x5 = (1/2)(cos(1.526) + 3) = 1.522
The root correct to two decimal places is taken as 1.52, since |x5 − x4| = |1.522 − 1.526| = 0.004. We can improve the accuracy by continuing the iteration.
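The iteration of Example 1 can be written as a short function (a sketch; the tolerance and iteration cap are arbitrary choices, not from the lesson):

```python
# Sketch: fixed-point iteration x_{n+1} = (cos x_n + 3)/2.
import math

def fixed_point(phi, x0, tol=1e-4, max_iter=100):
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = fixed_point(lambda x: (math.cos(x) + 3) / 2, math.pi / 2)
print(round(root, 2))  # 1.52
```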
Now it amounts to choosing two real values a and b such that f (a) ⋅ f (b) < 0 ,
and then choose the initial approximation between a and b .
Example 1: Find an interval containing a real root of f(x) = x³ + 4x − 5 = 0.
Solution:
Given f(x) = x³ + 4x − 5. Take x = 0: f(0) = −5 < 0.
Example 2: Find the interval which contains the smallest positive root of xeˣ − cos x = 0.
Solution:
Take x = 0: f(0) = 0·e⁰ − cos 0 = −1, and x = 1: f(1) = e − cos 1 = 2.718 − 0.540 = 2.178. Since f(0)·f(1) < 0, the smallest positive root lies in (0, 1).
Bisect the interval (a0, b0) at m1 = (a0 + b0)/2. Then:
i) if f(a0)·f(m1) < 0, the root lies between a0 and m1;
ii) if f(m1)·f(b0) < 0, the root lies between m1 and b0.
Keep bisecting this way until we get a smallest subinterval containing the root. Then the midpoint of that smallest interval is taken as the approximation to the root of f(x) = 0.
Example 3: Find an approximate root of xeˣ − 1 = 0 in (0, 1) using the bisection method.
Solution:
Given f(x) = xeˣ − 1, a0 = 0, b0 = 1, f(a0) = −1, f(b0) = 1.718, so f(0)·f(1) < 0.
Step 1: m1 = 0.5. f(0.5) < 0 and f(0) < 0, so the interval (0, 0.5) is discarded as f(0)f(0.5) > 0. Denote I1 = (0.5, 1).
Step 2: Bisecting I1 = (0.5, 1) gives (0.5, 0.75) and (0.75, 1) as the new subintervals. f(0.75) > 0, f(0.5) < 0 ⇒ f(0.5)f(0.75) < 0, so I2 = (0.5, 0.75).
Continuing, f(0.625) > 0 gives I3 = (0.5, 0.625), and f(0.5625) < 0 gives I4 = (0.5625, 0.625). The approximation to the root is taken as the midpoint of I4, i.e., ξ ≈ 0.59375.
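The bisection steps can be sketched as code, assuming f(x) = xeˣ − 1 as in the example; four bisections reproduce the midpoint of I4:

```python
# Sketch: bisection with a fixed number of steps.
import math

def bisect(f, a, b, steps):
    assert f(a) * f(b) < 0          # root must be bracketed
    for _ in range(steps):
        m = (a + b) / 2
        if f(a) * f(m) < 0:
            b = m                   # root in (a, m)
        else:
            a = m                   # root in (m, b)
    return (a + b) / 2

f = lambda x: x * math.exp(x) - 1
print(bisect(f, 0.0, 1.0, 4))  # 0.59375
```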
Note: Both the iteration method and the bisection method take many steps to produce a solution with reasonable accuracy.
Exercises:
1. Use the bisection method to obtain a real root of the following equations, correct to three decimal places:
i) x³ + x² − 1 = 0
ii) e⁻ˣ = 10x
iii) 4x − 1 = 4 sin x
iv) x + log x = 2
2. Find a real root of the following equations using the iteration method:
i) sin x = 1 − x
ii) eˣ = cot x
iii) x² − x³ = −1
iv) sin x = x/2
v) x⁴ + x² − 80 = 0
vi) sin 2x = x² − 1
Lesson 5
Secant and Regula-Falsi Methods
5.1 Introduction
In the secant method, the function f(x) is approximated near the root by the straight line (chord)
f(x) = a1x + a0   (5.1)
through two approximations x_(k−1) and x_k, so that
f_(k−1) = a1x_(k−1) + a0   (5.2)
and
f_k = a1x_k + a0,   (5.3)
where f(x_(k−1)) = f_(k−1) and f(x_k) = f_k.
Solving (5.2) and (5.3),
a0 = (x_k f_(k−1) − x_(k−1) f_k)/(x_k − x_(k−1))   (5.4)
and a1 = (f_k − f_(k−1))/(x_k − x_(k−1)).   (5.5)
The next approximation is taken as the zero of this line,
x = −a0/a1,   (5.6)
which gives
x_(k+1) = (x_(k−1) f_k − x_k f_(k−1))/(f_k − f_(k−1)),
or equivalently
x_(k+1) = x_k − (x_k − x_(k−1))/(f_k − f_(k−1)) · f_k, k = 1, 2, ...   (5.7)
39 WhatsApp: +91 7900900676 www.AgriMoon.Com
Secant and Regula-Falsi Methods
Here the iteration function φ(x_k, x_(k−1)) = x_k − (x_k − x_(k−1))/(f_k − f_(k−1)) · f_k generates a sequence of iterates {x_(k+1)}, k = 1, 2, ....
5.2.1 Algorithm
4. Repeat steps (2) & (3) until the difference between two successive
approximations xk +1 and xk is less than the error tolerance.
(Figure: geometric interpretation of the secant method — the chord through (x0, f0) and (x1, f1) cuts the x-axis at x2, near the root ξ.)
The initial approximations x_(k−1) and x_k taken in the secant method are arbitrary. If we impose on these initial approximations the condition f(x_k)·f(x_(k−1)) < 0, the method given by equation (5.7) is called the Regula-Falsi method. The root of f(x) = 0 then lies between x_(k−1) and x_k. Now draw the chord joining the points (x_(k−1), f_(k−1)) and (x_k, f_k); this chord intersects the x-axis at x_(k+1). Then either f(x_(k+1))·f(x_k) < 0 or f(x_(k+1))·f(x_(k−1)) < 0 will hold. Without loss of generality, say f(x_(k+1))·f(x_k) < 0; then the root lies in (x_(k+1), x_k) and the process is repeated with this bracket.
(Figure: Regula-Falsi method — successive chords from (x0, f0) through (x1, f1), (x2, f2), ... give the approximations x1, x2, x3 approaching the root ξ.)
Example 1: Find a root of x³ − 2x − 5 = 0 in (2, 3) using the Regula-Falsi method.
Solution:
Take x0 = 2, f0 = −1 and x1 = 3, f1 = 16.
x2 = x1 − (x1 − x0)/(f1 − f0) · f1 = 3 − (1/17)·16 = 2.0588, with f(x2) = −0.3908 < 0.
The root lies in (2.0588, 3). Joining the points (2.0588, −0.3908) and (3, 16) by a chord gives x3 = 2.0813, with f(x3) = −0.1472 < 0.
The root lies in (2.0813, 3); joining (2.0813, −0.1472) and (3, 16), the chord cuts the x-axis at x4 = 2.0896.
Example 2: Find a root of cos x − xeˣ = 0 using the Regula-Falsi method.
Solution:
Take x0 = 0, f0 = 1 and x1 = 1, f1 = cos 1 − e = −2.17798.
We obtain x2 = x0 − (x1 − x0)/(f1 − f0) · f0 = 0.31467 and f(0.31467) = 0.51987 > 0.
The root lies between 0.31467 and 1. Repeating the procedure, we obtain the approximations to the root as
x3 = 0.44673, x4 = 0.49402, x5 = 0.50995, x6 = 0.51520, x7 = 0.51692,
x8 = 0.51748, x9 = 0.51767, x10 = 0.51775, ...
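Formula (5.7), with the sign condition maintained at every step, can be sketched as follows for the function of Example 2 (the iteration count here is an arbitrary choice):

```python
# Sketch: Regula-Falsi — a secant step, but the bracket [a, b] is kept.
import math

def regula_falsi(f, a, b, n):
    fa, fb = f(a), f(b)
    for _ in range(n):
        c = b - fb * (b - a) / (fb - fa)   # chord's x-axis crossing
        fc = f(c)
        if fa * fc < 0:
            b, fb = c, fc                  # root in (a, c)
        else:
            a, fa = c, fc                  # root in (c, b)
    return c

f = lambda x: math.cos(x) - x * math.exp(x)
root = regula_falsi(f, 0.0, 1.0, 25)
print(round(root, 4))  # ≈ 0.5178
```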
Exercises:
1. Find a real root of the following equations using the Regula-Falsi method:
a) x log10 x = 1.2
b) x⁴ − 32 = 0
c) x³ − 2x − 5 = 0
d) sin x = 1/x
References
Atkinson, Kendall E. (2004). Numerical Analysis. Second Edition, John Wiley & Sons, Singapore.
Module 1: Numerical Analysis
Lesson 6
Newton-Raphson Method
6.1 Introduction
Both the bisection and Regula-Falsi methods give a rough estimate for the root of the equation f(x) = 0, but they take many iterations to reach a reasonably accurate approximation. An elegant method for obtaining the root of f(x) = 0, known as the Newton-Raphson method, is discussed below.
Let x0 be an initial approximation and let x1 = x0 + h be the exact root, so f(x1) = 0. Expanding f(x0 + h) in a Taylor series and neglecting the second- and higher-order terms, we have f(x0) + hf′(x0) = 0, giving h = −f(x0)/f′(x0).
Then we have
x1 = x0 + h = x0 − f(x0)/f′(x0)   (6.1)
x2 = x1 − f(x1)/f′(x1)   (6.2)
and, in general,
x_(n+1) = x_n − f(x_n)/f′(x_n).   (6.3)
This is of the fixed-point form
x_(n+1) = φ(x_n)   (6.4)
with iteration function φ(x) = x − f(x)/f′(x).
The iteration converges when |φ′(x)| < 1. Here
φ′(x) = 1 − ([f′(x)]² − f(x)f″(x))/[f′(x)]² = f(x)f″(x)/[f′(x)]²,
so the condition becomes |f(x)·f″(x)| < [f′(x)]².
If we choose the initial approximation x0 in an interval I such that, for all x ∈ I,
|f(x)·f″(x)| < [f′(x)]²
is satisfied, then the sequence of iterations generated by (6.3) converges to the root.
Example 1: Find a root of x⁴ − 11x + 8 = 0 by the Newton-Raphson method.
Solution:
Given f(x) = x⁴ − 11x + 8 = 0, f′(x) = 4x³ − 11. The successive iterations give
x3 = x2 − f(x2)/f′(x2) = 1.89188
x4 = x3 − f(x3)/f′(x3) = 1.89188.
Hence the root, correct to five decimal places, is 1.89188.
Example 2: Find √29 using the Newton-Raphson method.
Solution:
Let x = √29. Then x² − 29 = 0, and with f(x) = x² − 29, f′(x) = 2x, the iteration is
x_(n+1) = x_n − (x_n² − 29)/(2x_n), n = 0, 1, 2, ....
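The iteration of Example 2 can be sketched in code (tolerance, cap and the starting value x0 = 5 are arbitrary choices):

```python
# Sketch: generic Newton-Raphson step x <- x - f(x)/f'(x).
def newton(f, df, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = newton(lambda x: x * x - 29, lambda x: 2 * x, 5.0)
print(root)  # ≈ 5.3851648..., the value of sqrt(29)
```

Only a handful of iterations are needed, consistent with the quadratic convergence shown later in this lesson.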
(Figure: geometric interpretation of the Newton-Raphson method — tangents to y = f(x) at x0, x1, x2, ... cut the x-axis at successively better approximations to ξ.)
Example 3: Derive the Newton-Raphson iteration for 3x − cos x − 1 = 0.
Solution: With f(x) = 3x − cos x − 1 and f′(x) = 3 + sin x,
x_(n+1) = x_n − (3x_n − cos x_n − 1)/(3 + sin x_n)
= (x_n sin x_n + cos x_n + 1)/(3 + sin x_n).
Example 4: Show that the Newton-Raphson method converges quadratically.
Solution:
The Newton-Raphson method is x_(k+1) = x_k − f(x_k)/f′(x_k). Let ξ be the exact root and write x_k = ξ + ∈_k, where ∈_k is the error in the k-th iterate, so that f(ξ) = 0. Expanding f and f′ about ξ,
∈_(k+1) = ∈_k − (∈_k f′(ξ) + (1/2)∈_k² f″(ξ) + ...)/(f′(ξ) + ∈_k f″(ξ) + ...)
= ∈_k − (∈_k + (1/2)∈_k² f″(ξ)/f′(ξ) + ...)(1 + ∈_k f″(ξ)/f′(ξ) + ...)⁻¹
or ∈_(k+1) = (1/2)(f″(ξ)/f′(ξ))∈_k² + O(∈_k³).
Neglecting ∈_k³ and higher powers of ∈_k, we get ∈_(k+1) = C∈_k², where C = (1/2)f″(ξ)/f′(ξ). The error at each step is proportional to the square of the previous error, i.e., the convergence is quadratic.
Exercises: Find a real root of the following equations using the Newton-Raphson method:
i) x − e⁻ˣ = 0
iii) x³ − 5x + 1 = 0
iv) sin x + (cos x)/x = 0
v) x³ − 2x − 5 = 0
References
Jain, M. K., Iyengar, S.R.K., Jain, R.K. (2008). Numerical Methods. Fifth Edition, New Age International Publishers, New Delhi.
Lesson 7
Linear System of Algebraic Equations – Jacobi Method
7.1 Introduction
The system given in (7.1) can also be written in the matrix vector form as:
Ax = b (7.2)
The solution of this system can be obtained using direct methods such as Gauss elimination, or by finding the inverse of the matrix directly. When the size of the matrix A is very large, the applicability of these direct methods is limited, as finding the inverse of a large matrix is not easy. Also, Gauss elimination demands a diagonally dominant structure for A. These difficulties are overcome in another class of methods called iterative methods. In what follows, we discuss two useful iterative methods, namely the Jacobi and Gauss-Seidel methods.
X^(n+1) = HX^(n) + C, n = 0, 1, 2, ...   (7.3)
Let the diagonal elements a_ii, i = 1, 2, ..., n, in the linear system (7.1) be non-zero. We now rewrite the system (7.1) as:
x1 = b1/a11 − (a12/a11)x2 − (a13/a11)x3 − ... − (a1n/a11)xn
x2 = b2/a22 − (a21/a22)x1 − (a23/a22)x3 − ... − (a2n/a22)xn
...
xn = bn/ann − (an1/ann)x1 − (an2/ann)x2 − ... − (a_{n,n−1}/ann)x_{n−1}   (7.4)
The system of equations (7.4) is meaningful only if all the diagonal elements a_ii are non-zero. If some of the diagonal elements in (7.1) are zero, the equations should be rearranged so that this condition is satisfied. We now form an iterative mechanism for (7.4) by writing
x1^(k+1) = b1/a11 − (a12/a11)x2^(k) − (a13/a11)x3^(k) − ... − (a1n/a11)xn^(k)
x2^(k+1) = b2/a22 − (a21/a22)x1^(k) − (a23/a22)x3^(k) − ... − (a2n/a22)xn^(k)
...
xn^(k+1) = bn/ann − (an1/ann)x1^(k) − (an2/ann)x2^(k) − ... − (a_{n,n−1}/ann)x_{n−1}^(k).   (7.5)
Choose a set of initial approximations x^(0) = (x1^(0), x2^(0), ..., xn^(0)) and generate the sequence of iterates x^(1), x^(2), ..., x^(k), x^(k+1), ..., until the convergence criterion ‖x^(k+1) − x^(k)‖ ≤ ∈ is satisfied.
Equation (7.5) is written in matrix-vector form as X^(k+1) = HX^(k) + C, where
H = |   0        −a12/a11   ...   −a1n/a11 |
    | −a21/a22     0        ...   −a2n/a22 |
    |   ...       ...       ...     ...    |
    | −an1/ann   −an2/ann   ...     0      |
and C = (b1/a11, b2/a22, ..., bn/ann)^T.
Equivalently, splitting A = L + D + U into its strictly lower triangular, diagonal and strictly upper triangular parts,
Ax = (L + D + U)x = b
or Dx = −(L + U)x + b
or x = −D⁻¹(L + U)x + D⁻¹b,
giving the iteration
x^(k+1) = −D⁻¹(L + U)x^(k) + D⁻¹b,
i.e., x^(k+1) = Hx^(k) + C with H = −D⁻¹(L + U) and C = D⁻¹b.
Example 1: Solve the following system by the Jacobi method:
4x1 + x2 + x3 = 2
x1 + 5x2 + 2x3 = −6
x1 + 2x2 + 3x3 = −4
Solution:
Given
A = | 4 1 1 |
    | 1 5 2 |
    | 1 2 3 |
Its splitting A = L + D + U is
D = diag(4, 5, 3), L = | 0 0 0 ; 1 0 0 ; 1 2 0 |, U = | 0 1 1 ; 0 0 2 ; 0 0 0 |.
Then
H = −D⁻¹(L + U) = |   0    −1/4   −1/4 |
                  | −1/5     0    −2/5 |
                  | −1/3   −2/3     0  |
and C = D⁻¹b = (1/2, −6/5, −4/3)^T = (0.5, −1.2, −1.3333)^T.
The Jacobi iteration is x^(n+1) = Hx^(n) + C, n = 0, 1, 2, ...   ...... (i)
Starting with the given initial approximation x^(0) = (0.5, −0.5, −0.5)^T, we generate from (i)
x^(1) = (0.75, −1.1, −1.1667)^T.
Using this, the next approximation is
x^(2) = (1.0667, −0.8833, −0.85)^T,
and using this,
x^(3) = (0.9333, −1.0733, −1.1)^T, ...
The iterates converge to the exact solution (1, −1, −1)^T.
Example 2: Solve the following system of linear algebraic equations using the Jacobi method by writing the iterative system directly:
20x + y − 2z = 17, 3x + 20y − z = −18, 2x − 3y + 20z = 25.
Solution:
x^(n+1) = (1/20)(17 − y^(n) + 2z^(n))
y^(n+1) = (1/20)(−18 − 3x^(n) + z^(n))
z^(n+1) = (1/20)(25 − 2x^(n) + 3y^(n))
Start with x^(0) = y^(0) = z^(0) = 0. Then
x^(1) = 0.85, y^(1) = −0.9, z^(1) = 1.25
x^(2) = 1.02, y^(2) = −0.965, z^(2) = 1.03
x^(3) = 1.0012, y^(3) = −1.0015, z^(3) = 1.0032
Continuing in the same way, the iterates converge by the sixth iteration to
x^(6) = 1.0, y^(6) = −1.0, z^(6) = 1.0.
This is an alternative way of writing the iteration procedure used in the earlier example.
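The componentwise iteration used in Example 2 generalizes to a small routine (a sketch with plain Python lists; the iteration count is an arbitrary choice):

```python
# Sketch: Jacobi iteration — every component update uses only the
# previous iterate (the comprehension reads the old x throughout).
def jacobi(A, b, x0, iters):
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[20, 1, -2], [3, 20, -1], [2, -3, 20]]
b = [17, -18, 25]
x = jacobi(A, b, [0, 0, 0], 10)
print([round(v, 4) for v in x])  # ≈ [1.0, -1.0, 1.0]
```

The fast convergence here reflects the strong diagonal dominance of A.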
Exercises:
1. Solve the given system by the Jacobi method, starting with x1^(0) = 0.3, x2^(0) = x3^(0) = x4^(0) = 0.
References
Jain, M. K., Iyengar, S.R.K., Jain, R.K. (2008). Numerical Methods. Fifth Edition, New Age International Publishers, New Delhi.
Lesson 8
Gauss-Seidel Iteration Method
8.1 Introduction
One should arrange, by row and column interchanges, that the larger elements fall along the diagonal to the maximum possible extent. This method may be seen as an improvement of the Jacobi method, in which the values already available for the unknowns in a particular iteration are used within the same iteration. Consider the system of linear algebraic equations (7.1) as given in Lesson 7.
In the first step of the iteration, we make use of the initial approximations x2^(0), x3^(0), ..., xn^(0) in the first of the equations to evaluate
x1^(1) = b1/a11 − (1/a11)(a12x2^(0) + a13x3^(0) + ... + a1nxn^(0)).
This approximation x1^(1) is then used immediately:
x2^(1) = b2/a22 − (1/a22)(a21x1^(1) + a23x3^(0) + ... + a2nxn^(0))
x3^(1) = b3/a33 − (1/a33)(a31x1^(1) + a32x2^(1) + a34x4^(0) + ... + a3nxn^(0))
...
xn^(1) = bn/ann − (1/ann)(an1x1^(1) + an2x2^(1) + ... + a_{n,n−1}x_{n−1}^(1)).
We proceed to find the second approximation for the solution in the same
manner. The iteration process is terminated using the same criterion that was
discussed in the case of Jacobi method.
In matrix form, with A = L + D + U as before, Ax = b gives
(L + D)x^(k+1) = b − Ux^(k)
x^(k+1) = (D + L)⁻¹b − (D + L)⁻¹Ux^(k)
or x^(k+1) = Hx^(k) + C,
where H = −(D + L)⁻¹U and C = (D + L)⁻¹b.
Example 1: Solve the following system by the Gauss-Seidel method:
x + 2y + z = 0   ...... (i)
3x + y − z = 0   ...... (ii)
x − y + 4z = 3   ...... (iii)
Solution:
In the first two equations, diagonal dominance is not present. We now rearrange the order of the given system.
The rearranged system is
3x + y − z = 0, x + 2y + z = 0, x − y + 4z = 3,
and the Gauss-Seidel iteration is
x^(k+1) = (1/3)(z^(k) − y^(k))
y^(k+1) = −(1/2)(x^(k+1) + z^(k))
z^(k+1) = (1/4)(3 − x^(k+1) + y^(k+1)), k = 0, 1, 2, ...
Starting with y^(0) = z^(0) = 1, this gives
x^(1) = (1/3)(z^(0) − y^(0)) = 0
y^(1) = −(1/2)(x^(1) + z^(0)) = −0.5
z^(1) = (1/4)(3 − x^(1) + y^(1)) = 0.625.
Similarly,
x^(2) = (1/3)(z^(1) − y^(1)) = 0.375
y^(2) = −(1/2)(x^(2) + z^(1)) = −0.5
z^(2) = (1/4)(3 − x^(2) + y^(2)) = 0.53125,
and then
x^(3) = 0.34375, y^(3) = −0.4375, z^(3) = 0.55469
x^(4) = 0.33073, y^(4) = −0.44271, z^(4) = 0.55664
x^(5) = 0.33312, y^(5) = −0.4449, z^(5) = 0.5555.
The iterates converge to the exact solution (1/3, −4/9, 5/9).
Example 2: Solve the system 2x − y = 7, −x + 2y − z = 1, −y + 2z = 1 by the Gauss-Seidel method in matrix form.
Solution:
Given A = | 2 −1 0 ; −1 2 −1 ; 0 −1 2 | and b = (7, 1, 1)^T, we find the decomposition A = L + D + U as
D = diag(2, 2, 2), L = | 0 0 0 ; −1 0 0 ; 0 −1 0 |, U = | 0 −1 0 ; 0 0 −1 ; 0 0 0 |.
Then
D + L = | 2 0 0 ; −1 2 0 ; 0 −1 2 |,
(D + L)⁻¹ = | 1/2 0 0 ; 1/4 1/2 0 ; 1/8 1/4 1/2 |,
(D + L)⁻¹U = | 0 −1/2 0 ; 0 −1/4 −1/2 ; 0 −1/8 −1/4 |,
(D + L)⁻¹b = (7/2, 9/4, 13/8)^T.
Starting with x^(0) = (0, 0, 0)^T, the iteration x^(k+1) = −(D + L)⁻¹Ux^(k) + (D + L)⁻¹b gives
x^(1) = (3.5, 2.25, 1.625)^T, x^(2) = (4.625, 3.625, 2.3125)^T, and similarly
x^(3) = (5.3125, 4.3125, 2.6563)^T.
The iterates converge to the exact solution (6, 5, 3)^T.
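The same computation can be sketched componentwise (the in-place update of x is exactly what distinguishes Gauss-Seidel from Jacobi); three sweeps reproduce x^(3) of Example 2:

```python
# Sketch: Gauss-Seidel — each component update immediately uses the
# newest values already computed in the current sweep.
def gauss_seidel(A, b, x0, iters):
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

A = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
b = [7, 1, 1]
x = gauss_seidel(A, b, [0, 0, 0], 3)
print(x)  # [5.3125, 4.3125, 2.65625]
```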
References
Jain, M. K., Iyengar, S.R.K., Jain, R.K. (2008). Numerical Methods. Fifth Edition, New Age International Publishers, New Delhi.
Lesson 9
Decomposition Methods
9.1 Introduction
A square matrix A can be decomposed as A = LU, with L lower triangular and U upper triangular. For
A = | a11 a12 a13 ; a21 a22 a23 ; a31 a32 a33 |
in particular, the decomposition is possible if all the leading principal minors of A are non-zero. One can choose l_ii = 1 as the diagonal elements of L (Doolittle's method) or, alternatively, u_ii = 1 as the diagonal elements of U (Crout's method). We now determine the elements of L and U as follows.
With l_ii = 1, comparing the elements of LU with those of A row by row gives u1j = a1j, l21 = a21/a11, l31 = a31/a11, u22 = a22 − l21u12, u23 = a23 − l21u13, and then
l31u12 + l32u22 = a32 ⇒ l32 = (a32 − (a31/a11)a12)/(a22 − (a21/a11)a12),
l31u13 + l32u23 + u33 = a33 ⇒ u33 = a33 − l31u13 − l32u23.
Once we compute the l_ij's and u_ij's, we obtain the solution of Ax = b as given below.
Consider LUx = b. Call Ux = z ⇒ Lz = b, i.e.,
| 1   0   0 | | z1 |   | b1 |
| l21 1   0 | | z2 | = | b2 |.
| l31 l32 1 | | z3 |   | b3 |
Forward substitution gives z1 = b1, z2 = b2 − l21z1, z3 = b3 − l31z1 − l32z2. Then Ux = z is solved by back substitution:
u33x3 = z3 ⇒ x3 = z3/u33,
u22x2 + u23x3 = z2 ⇒ x2 = (z2 − u23x3)/u22,
u11x1 + u12x2 + u13x3 = z1 ⇒ x1 = (z1 − u12x2 − u13x3)/u11,
and [x1, x2, x3]^T is the required solution.
Example 1: Solve by the LU decomposition method:
2x + y + 4z = 12, 8x − 3y + 2z = 20, 4x + 11y − z = 33.
Solution:
Given A = | 2 1 4 ; 8 −3 2 ; 4 11 −1 |, b = (12, 20, 33)^T.
Clearly a11 = 2 ≠ 0, |2 1 ; 8 −3| = −14 ≠ 0 and |A| ≠ 0, so the decomposition exists.
We find A = LU with
L = | 1 0 0 ; 4 1 0 ; 2 −9/7 1 | and U = | 2 1 4 ; 0 −7 −14 ; 0 0 −27 |.
Then Ax = LUx = b. Take Ux = z. Then Lz = b, i.e.,
| 1   0    0 | | z1 |   | 12 |
| 4   1    0 | | z2 | = | 20 |
| 2 −9/7   1 | | z3 |   | 33 |
⇒ z1 = 12, z2 = 20 − 4×12 = −28, z3 = 33 + (9/7)(−28) − 2(12) = −27.
Now Ux = z, i.e.,
| 2   1    4 | | x |   |  12 |
| 0  −7  −14 | | y | = | −28 |
| 0   0  −27 | | z |   | −27 |
⇒ z = −27/(−27) = 1, y = −(1/7)[−28 + 14·1] = 2, x = (1/2)[12 − 2 − 4·1] = 3.
Hence the solution is (x, y, z) = (3, 2, 1).
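The Doolittle factorization and the two substitutions of Example 1 can be sketched as one routine (an illustration, not part of the lesson):

```python
# Sketch: Doolittle LU (unit diagonal in L), then Lz = b and Ux = z.
def lu_solve(A, b):
    n = len(b)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):                 # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):             # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    z = [0.0] * n
    for i in range(n):                        # forward substitution: Lz = b
        z[i] = b[i] - sum(L[i][k] * z[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):              # back substitution: Ux = z
        x[i] = (z[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

x = lu_solve([[2, 1, 4], [8, -3, 2], [4, 11, -1]], [12, 20, 33])
print([round(v, 6) for v in x])  # [3.0, 2.0, 1.0]
```

No pivoting is done here, so the routine assumes (as the example checks) that the leading minors are non-zero.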
For a symmetric positive definite matrix A, one can instead write
A = L·L^T,
where L = (l_ij) is lower triangular, l_ij = 0 for j > i.
Then Ax = b becomes
L·L^T x = b.
Take L^T x = z ⇒ Lz = b.
Example 2: Solve by the Cholesky method:
x + 2y + 3z = 5, 2x + 8y + 22z = 6, 3x + 22y + 82z = −10.
Solution:
Given A = | 1 2 3 ; 2 8 22 ; 3 22 82 |, b = (5, 6, −10)^T.
Write A = L·L^T:
| 1  2  3  |   | l11  0   0  | | l11 l21 l31 |
| 2  8  22 | = | l21 l22  0  | |  0  l22 l32 |
| 3 22  82 |   | l31 l32 l33 | |  0   0  l33 |
Comparing elements gives
L = | 1 0 0 ; 2 2 0 ; 3 8 3 |.
Now Ax = b ⇒ L·L^T x = b. Put L^T x = u, where u = (u1, u2, u3)^T (say).
⇒ Lu = b ⇒ | 1 0 0 ; 2 2 0 ; 3 8 3 | u = (5, 6, −10)^T
⇒ u1 = 5, u2 = −2, u3 = −3.
Solving L^T x = u, i.e. | 1 2 3 ; 0 2 8 ; 0 0 3 |(x, y, z)^T = (5, −2, −3)^T,
gives z = −1, y = 3, x = 2.
Exercise 2: Solve the system
|  2  1 −4  1 | | u1 |   |   4 |
| −4  3  5 −2 | | u2 | = | −10 |
|  1 −1  1 −1 | | u3 |   |   2 |
|  1  3 −3  2 | | u4 |   |  −1 |
by the LU decomposition method.
References
Jain, M. K., Iyengar, S.R.K., Jain, R.K. (2008). Numerical Methods. Fifth Edition, New Age International Publishers, New Delhi.
Atkinson, Kendall E. (2004). Numerical Analysis. Second Edition, John Wiley & Sons, Singapore.
Lesson 10
Interpolation
10.1 Introduction
Consider a function f(x) defined on an interval [a, b]. Take a partition of this interval [a, b] as a = x0 < x1 < ... < x_(i−1) < x_i < ... < x_N = b, having (N + 1) nodal points. These nodes are either equally spaced (x_i − x_(i−1) = h, a constant, i = 1, 2, ..., N) or unequally spaced. Let f(x_i) = f_i, i = 0, 1, 2, ..., N.
The process of approximating the given function f(x) on [a, b], or a set of tabulated values of it, by a polynomial is called interpolation. We begin with linear interpolation: seek a polynomial
P1(x) = a1x + a0   (10.1)
satisfying the interpolation conditions
P1(x_i) = f(x_i), i = 0, 1.   (10.2)
That is, f(x0) = P1(x0) = a1x0 + a0 and f(x1) = P1(x1) = a1x1 + a0.
Eliminating a0 and a1 gives
| P1(x)  x   1 |
| f(x0)  x0  1 | = 0.
| f(x1)  x1  1 |
(Figure: linear interpolation — the line y = P1(x) through the points (x0, f0) and (x1, f1) approximating y = f(x) on [x0, x1].)
Expanding the determinant,
P1(x)(x0 − x1) − f(x0)(x − x1) + f(x1)(x − x0) = 0,
or P1(x) = f(x0)(x − x1)/(x0 − x1) + f(x1)(x − x0)/(x1 − x0).
Write
P1(x) = l0(x)f(x0) + l1(x)f(x1)   (10.3)
where l0(x) = (x − x1)/(x0 − x1) and l1(x) = (x − x0)/(x1 − x0).
The functions l0(x) and l1(x) are linear functions and are called the Lagrange fundamental polynomials. They satisfy the conditions (i) l0(x) + l1(x) = 1 and (ii) l_i(x_j) = δ_ij = 1 if i = j, 0 if i ≠ j.
For quadratic interpolation, seek
P2(x) = a0 + a1x + a2x²   (10.4)
satisfying
f(x0) = P2(x0), f(x1) = P2(x1), f(x2) = P2(x2)   (10.5)
for the data (x0, f0), (x1, f1), (x2, f2). That is,
f(x0) = a0 + a1x0 + a2x0²
f(x1) = a0 + a1x1 + a2x1²
f(x2) = a0 + a1x2 + a2x2².
Eliminating a0, a1, a2 we obtain
| P2(x)  1  x   x²  |
| f(x0)  1  x0  x0² | = 0.
| f(x1)  1  x1  x1² |
| f(x2)  1  x2  x2² |
Expanding this determinant and simplifying leads to
P2(x) = l0(x)f(x0) + l1(x)f(x1) + l2(x)f(x2)   (10.6)
where
l0(x) = (x − x1)(x − x2)/((x0 − x1)(x0 − x2)),
l1(x) = (x − x0)(x − x2)/((x1 − x0)(x1 − x2)),
l2(x) = (x − x0)(x − x1)/((x2 − x0)(x2 − x1)).
Solution:
= 0.14925
Hence find P2 ( x) .
Solution:
Here x0 = 0, x1 = 1, x2 = 3 with f0 = 1, f1 = 3, f2 = 55.

l0(x) = (x − 1)(x − 3) / [(−1)(−3)] = (1/3)(x² − 4x + 3),
l1(x) = (x − 0)(x − 3) / [(1)(−2)] = (1/2)(3x − x²),
l2(x) = (x − 0)(x − 1) / [(3)(2)] = (1/6)(x² − x).

Now P2(x) = (1/3)(x² − 4x + 3)·1 + (1/2)(3x − x²)·3 + (1/6)(x² − x)·55
= 8x² − 6x + 1.

∴ P2(2) = 8·(2)² − 6(2) + 1
= 32 − 12 + 1
= 21.
Solution:
Here x0 = 2, x1 = 2.5, x2 = 3 and f0 = 0.69315, f1 = 0.91629, f2 = 1.09861.

l0(x) = (x − 2.5)(x − 3) / [(−0.5)(−1.0)] = 2x² − 11x + 15,
l1(x) = (x − 2)(x − 3) / [(0.5)(−0.5)] = −4x² + 20x − 24,
l2(x) = (x − 2)(x − 2.5) / [(1.0)(0.5)] = 2x² − 9x + 10.

Hence,
P2(x) = (2x² − 11x + 15)(0.69315) − (4x² − 20x + 24)(0.91629) + (2x² − 9x + 10)(1.09861).

Simplifying, we get
P2(x) = −0.08164x² + 0.81366x − 0.60761.
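The two quadratic examples above can be reproduced with a short Python sketch of the Lagrange formula (10.3)/(10.6); the helper name `lagrange` is ours, not from the text.

```python
def lagrange(xs, fs, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, fs) at x."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)  # Lagrange fundamental polynomial l_i(x)
        total += li * fi
    return total

# Data from the example above: f(x) = ln x at x = 2, 2.5, 3
xs = [2.0, 2.5, 3.0]
fs = [0.69315, 0.91629, 1.09861]
print(lagrange(xs, fs, 2.7))  # interpolated approximation to ln(2.7)
```

At a node the sum collapses to a single term, so the tabulated values are reproduced exactly.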
Exercises:
1. Find the Lagrange quadratic interpolating polynomial P2 ( x) for the
2. The function f(x) = sin x + cos x is represented by the data:
x:     10°      20°      30°
f(x):  1.1585   1.2817   1.3660
Find P2(x) for this data. Hence find P2(π/12). Compare it with the exact value f(π/12).
References
Jain, M. K., Iyengar. S.R.K., Jain. R.K. (2008). Numerical Methods. Fifth
Edition, New Age International Publishers, New Delhi.
Atkinson, E Kendall. (2004). Numerical Analysis. Second Edition, John Wiley
& Sons, Publishers, Singapore.
Suggested Reading
Scheid, Francis. (1989). Numerical Analysis. Second Edition, McGraw-Hill
Publishers, New York.
Sastry, S.S. (2005). Introductory Methods of Numerical Analysis. Fourth
Edition, Prentice Hall of India Publishers, New Delhi.
Lesson 11
11.1 Introduction
Result: Show that the Nth degree Lagrange interpolating polynomial for the data set {(x0, f0), (x1, f1), ..., (xN, fN)} is given by
PN(x) = [(x − x1)(x − x2)...(x − xN)] / [(x0 − x1)(x0 − x2)...(x0 − xN)] · f0
+ [(x − x0)(x − x2)...(x − xN)] / [(x1 − x0)(x1 − x2)...(x1 − xN)] · f1 + ...
+ [(x − x0)(x − x1)...(x − xN−1)] / [(xN − x0)(xN − x1)...(xN − xN−1)] · fN

i.e., PN(x) = Σ_{i=0}^{N} li(x) fi,

where li(x) = w(x) / [(x − xi) w′(xi)] and w(x) = (x − x0)(x − x1)...(x − xN).
where ξ is some point in the interval containing the nodes x0, x1, ..., xN. The derivation of this truncation error is not given here; the reader is referred to the reference books.
PN(x) = a0(x − x1)(x − x2)...(x − xN) + a1(x − x0)(x − x2)...(x − xN) + ...
+ aN(x − x0)(x − x1)...(x − xN−1)   (11.1)

Setting x = x0: f0 = a0(x0 − x1)(x0 − x2)...(x0 − xN)
⇒ a0 = f0 / [(x0 − x1)(x0 − x2)...(x0 − xN)].

Similarly,
a1 = f1 / [(x1 − x0)(x1 − x2)...(x1 − xN)],
a2 = f2 / [(x2 − x0)(x2 − x1)...(x2 − xN)], ...,
aN = fN / [(xN − x0)(xN − x1)...(xN − xN−1)].
Example 1: For the below given unequally spaced data find the interpolating
polynomial with highest degree:
x:     0    1    3    4
f(x): −20  −12  −20  −24
Solution:
Given 4 data points, we can find at most a third degree polynomial of the form
P(x) = a0 + a1 x + a2 x² + a3 x³, where the ai's are determined using the given data. Equivalently, in the Lagrange form,
P3(x) = l0(x) f0 + l1(x) f1 + l2(x) f2 + l3(x) f3
87 WhatsApp: +91 7900900676 www.AgriMoon.Com
Higher Order Lagrange Interpolation
by:
x = [(y − y1)(y − y2)...(y − yN)] / [(y0 − y1)(y0 − y2)...(y0 − yN)] · x0 + [(y − y0)(y − y2)...(y − yN)] / [(y1 − y0)(y1 − y2)...(y1 − yN)] · x1 + ...
+ [(y − y0)(y − y1)...(y − yN−1)] / [(yN − y0)(yN − y1)...(yN − yN−1)] · xN.
Solution:
= 1/2 + 27/14 − 4/7
= 1.860.
Exercises:
1. Find the value of from the following data using the Lagrange
interpolation.
x:     5    6    9    11
f(x): 380   −2  196  508
References
Jain, M. K., Iyengar. S.R.K., Jain. R.K. (2008). Numerical Methods. Fifth
Edition, New Age International Publishers, New Delhi.
Suggested Reading
Module 1: Numerical Analysis
Lesson 12
12.1 Introduction
Thus the nodes x0, x1, ..., xN are such that xj = x0 + jh for j = 0, 1, 2, ..., N, with data set
{(x0, f0), (x1, f1), ..., (xN, fN)}.
Also we know E = 1 + ∆.

∴ f(x0 + ph) = E^p f(x0) = (1 + ∆)^p y0
= { y0 + p∆y0 + [p(p − 1)/2!]∆²y0 + [p(p − 1)(p − 2)/3!]∆³y0 + ...
+ [p(p − 1)(p − 2)...(p − N + 1)/N!]∆^N y0 + ... }
(using the Binomial theorem).
For an (N + 1)-point data set, the (N + 1)th and higher order forward differences become zero, so the infinite series in the above equation becomes a polynomial of degree N.

Note that p = (x − x0)/h is a linear function of x.

Thus p(p − 1)(p − 2)...(p − N + 1)/N! is a polynomial of degree N in x.
Thus we have

yp = f(x0 + ph) = y0 + p∆y0 + [p(p − 1)/2!]∆²y0 + [p(p − 1)(p − 2)/3!]∆³y0 + ...
+ [p(p − 1)(p − 2)...(p − N + 1)/N!]∆^N y0   (12.1)

as the Nth degree polynomial interpolating the given equally spaced data set. This is called the Gregory-Newton Forward difference interpolating polynomial.
Example 1: Find the Newton Forward interpolating polynomial for the equally spaced data points x: 1, 2, 3, 4 with f(x): −1, −2, −1, 2. Compute f(3/2).

Solution:
Given x0 = 1, h = 1, f0 = −1.
The difference table for the given data is:

x    f     ∆f    ∆²f    ∆³f
1   −1
           −1
2   −2             2
            1             0
3   −1             2
            3
4    2
Newton’s Forward Interpolation Formula with Equal Intervals
P(x) = −1 + (x − 1)(−1) + [(x − 1)(x − 2)/2](2)
= x² − 4x + 2.

Now f(3/2) = (3/2)² − 4(3/2) + 2 = −7/4 = −1.75.
Example 2: Find f(0.25) from the table given below.

Solution:
Choose x0 = 0.2, h = 0.1, x = 0.25.
⇒ p = (0.25 − 0.2)/0.1 = 0.5.
x     y      ∆y     ∆²y    ∆³y   ∆⁴y
0.1   1.4
             0.16
0.2   1.56           0.04
             0.2            0.0
0.3   1.76           0.04          0.0
             0.24           0.0
0.4   2.0            0.04
             0.28
0.5   2.28

Since ∆³f0 = ∆⁴f0 = 0,

f(0.25) = f0 + p∆f0 + [p(p − 1)/2!]∆²f0
= 1.56 + (0.5)(0.2) + [(0.5)(−0.5)/2](0.04)
= 1.655.

Thus f(0.25) = 1.655.
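The forward-difference table and formula (12.1) translate directly into code. A minimal sketch (the function name `newton_forward` is our own), checked against Example 2:

```python
def newton_forward(xs, ys, x):
    """Gregory-Newton forward difference interpolation on equally spaced nodes."""
    n = len(ys)
    h = xs[1] - xs[0]
    # Build the forward difference table; row k holds the k-th differences.
    diff = [list(ys)]
    for k in range(1, n):
        prev = diff[-1]
        diff.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    p = (x - xs[0]) / h
    term, total = 1.0, 0.0
    for k in range(n):
        total += term * diff[k][0]        # term is p(p-1)...(p-k+1)/k!
        term *= (p - k) / (k + 1)
    return total

xs = [0.1, 0.2, 0.3, 0.4, 0.5]
ys = [1.4, 1.56, 1.76, 2.0, 2.28]
print(newton_forward(xs, ys, 0.25))   # ≈ 1.655, as in Example 2
```

Because the third and higher differences of this data vanish, anchoring the formula at 0.1 instead of 0.2 gives the same value.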
Exercises:
1. and from
2. from
3. Find the number of persons getting salary below Rs. 300 per day from the
following data.
Wages in
Rs.
Frequency
References
Jain, M. K., Iyengar. S.R.K., Jain. R.K. (2008). Numerical Methods. Fifth Edition,
New Age International Publishers, New Delhi.
Atkinson, E Kendall. (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.
Suggested Reading
Lesson 13
13.1 Introduction
Consider the discrete data set for the continuous function y = f(x) on the interval [a, b] (as considered in Lesson 12):
{(x0, f0), (x1, f1), ..., (xN, fN)}
Let us now try to evaluate the function at some location near the end
nodes xN −1 or xN .
Write x = xN + ph, where p is any real number.
Then y p = f ( x) = f ( xN + ph ) = E p f ( xN ) .
Use the relation between E and the backward difference operator ∇, given as E⁻¹ ≡ (1 − ∇).

Now E^p ≡ (1 − ∇)^(−p).

Thus f(xN + ph) = (1 − ∇)^(−p) yN.
f(xN + ph) = { yN + p∇yN + [p(p + 1)/2!]∇²yN + [p(p + 1)(p + 2)/3!]∇³yN + ...
+ [p(p + 1)(p + 2)...(p + N − 1)/N!]∇^N yN + ... }.
For the given (N + 1)-point data set, the (N + 1)th and higher order backward differences become zero, so the infinite series above becomes a polynomial of degree N.

Note that x = xN + ph ⇒ p = (x − xN)/h is a linear function of x, and the product p(p + 1)...(p + N − 1)/N! is a polynomial of degree N in x.
Thus we get the Nth degree interpolating polynomial in terms of the backward differences at xN as:

f(xN + ph) = yN + p∇yN + [p(p + 1)/2!]∇²yN + [p(p + 1)(p + 2)/3!]∇³yN + ...
+ [p(p + 1)(p + 2)...(p + N − 1)/N!]∇^N yN   (13.2)
Newton’s Backward Interpolation Polynomial
Example 1: The following data represents the relation between the distance as a
function of height:
height:   150    200    250    300    350    400
distance: 13.03  15.04  16.81  18.42  19.90  21.27
Find y ( 410 ) .
Solution:
f(410) = yN + p∇yN + [p(p + 1)/2!]∇²yN + [p(p + 1)(p + 2)/3!]∇³yN
= 21.27 + (0.2)(1.37) + [(0.2)(1.2)/2](−0.11) + [(0.2)(1.2)(2.2)/6](0.02)
= 21.53.

Thus y(410) ≈ 21.53. Note that the higher order difference terms only correct the solution at the fourth and fifth decimal places.
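A sketch of the backward formula (13.2), with our own function name, applied to the height/distance data of Example 1:

```python
def newton_backward(xs, ys, x):
    """Newton backward difference interpolation, anchored at the last node xN."""
    n = len(ys)
    h = xs[1] - xs[0]
    diff = [list(ys)]
    for k in range(1, n):
        prev = diff[-1]
        diff.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    p = (x - xs[-1]) / h
    term, total = 1.0, 0.0
    for k in range(n):
        total += term * diff[k][-1]      # last entry of row k = k-th backward difference at xN
        term *= (p + k) / (k + 1)        # accumulates p(p+1)...(p+k)/(k+1)!
    return total

heights = [150, 200, 250, 300, 350, 400]
dist = [13.03, 15.04, 16.81, 18.42, 19.90, 21.27]
print(round(newton_backward(heights, dist, 410), 2))   # 21.53
```

Including the fourth and fifth differences changes the answer only in later decimal places, as noted above.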
Example 2: Find the cubic polynomial which takes the following data:
x:    0  1  2  3
f(x): 1  2  1  10
Solution:
Let us now form the difference table for the given data:

x        f        ∆f        ∆²f          ∆³f
x0 = 0   1 = f0
                  1 = ∆f0
1        2                  −2 = ∆²f0
                 −1                      12 = ∆³f0
2        1                  10 = ∇²fN
                  9 = ∇fN
xN = 3  10 = fN
Case 1: Take x0 = 0, so p = (x − x0)/h = (x − 0)/1 = x.

f(x) = f0 + p∆f0 + [p(p − 1)/2]∆²f0 + [p(p − 1)(p − 2)/6]∆³f0
= 1 + x·1 + [x(x − 1)/2](−2) + [x(x − 1)(x − 2)/6](12)
= 2x³ − 7x² + 6x + 1   ...... (i)
Case 2: Take xN = 3, fN = 10, h = 1,
p = (x − xN)/h = (x − 3)/1 = x − 3.

f(x) = fN + p∇fN + [p(p + 1)/2]∇²fN + [p(p + 1)(p + 2)/6]∇³fN
= 10 + (x − 3)(9) + [(x − 3)(x − 2)/2](10) + [(x − 3)(x − 2)(x − 1)/6](12)
= 2x³ − 7x² + 6x + 1.   ...... (ii)
(i) and (ii) clearly indicate that the interpolating polynomial for the given data is
the same though we use different methods.
Exercises:
1. Use the Lagrange interpolating polynomial for the data:
x:    0  1  2  3
f(x): 1  2  1  10
3. If , , and
find .
4. Find the number of men getting wages below Rs. 35 from the data:
Wages in Rs:
Frequency
References
Jain, M. K., Iyengar. S.R.K., Jain. R.K. (2008). Numerical Methods. Fifth Edition,
New Age International Publishers, New Delhi.
Atkinson, E Kendall. (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.
Suggested Reading
Scheid, Francis. (1989). Numerical Analysis. Second Edition, McGraw-Hill
Publishers, New York.
Sastry, S.S. (2005). Introductory Methods of Numerical Analysis. Fourth Edition,
Prentice Hall of India Publishers, New Delhi.
Lesson 14
Gauss Interpolation
14.1 Introduction
x     y      1st diff           2nd diff             3rd diff              4th diff           5th diff
x−3   y−3
             ∆y−3 = δy−5/2
x−2   y−2                       ∆²y−3
             ∆y−2 = δy−3/2                           ∆³y−3 = δ³y−3/2
x−1   y−1                       ∆²y−2
             ∆y−1 = δy−1/2                           ∆³y−2 = δ³y−1/2                          ∆⁵y−3 = δ⁵y−1/2
x0    y0                        ∆²y−1 (= δ²y0)                             ∆⁴y−2 (= δ⁴y0)
             ∆y0 = δy1/2                             ∆³y−1 = δ³y1/2
x1    y1                        ∆²y0 (= δ²y1)                              ∆⁴y−1 (= δ⁴y1)
             ∆y1 = δy3/2                             ∆³y0 = δ³y3/2
x2    y2                        ∆²y1 (= δ²y2)
             ∆y2 = δy5/2
x3    y3
This formula can be used directly to interpolate the function at the centre of the data, i.e., for values of x near x0 (0 < p < 1).
yp = y0 + p(∆y−1 + ∆²y−1) + [p(p − 1)/2!](∆²y−1 + ∆³y−1) + [p(p − 1)(p − 2)/3!](∆³y−1 + ∆⁴y−1) + ...
= y0 + p∆y−1 + [(p + 1)p/2!]∆²y−1 + [(p + 1)p(p − 1)/3!](∆³y−2 + ∆⁴y−2) + ...

or

yp = y0 + p∆y−1 + [p(p + 1)/2!]∆²y−1 + [(p + 1)p(p − 1)/3!]∆³y−2 + [(p + 2)(p + 1)p(p − 1)/4!]∆⁴y−2 + ...
(14.4)

In central difference notation,

yp = y0 + pδy−1/2 + [p(p + 1)/2!]δ²y0 + [(p + 1)p(p − 1)/3!]δ³y−1/2 + [(p + 2)(p + 1)p(p − 1)/4!]δ⁴y0 + ...
(14.5)
The formulae (14.2) and (14.4), or (14.3) and (14.5), have limited utility by themselves, but they are useful in deriving the important method known as Stirling's formula.
References
Jain, M. K., Iyengar. S.R.K., Jain. R.K. (2008). Numerical Methods. Fifth Edition,
New Age International Publishers, New Delhi.
Atkinson, E Kendall. (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.
Suggested Reading
Lesson 15
15.1 Introduction
yp = y0 + p∆y0 + [p(p − 1)/2!]∆²y−1 + [(p + 1)p(p − 1)/3!]∆³y−1 + [(p + 1)p(p − 1)(p − 2)/4!]∆⁴y−2
+ [(p + 2)(p + 1)p(p − 1)(p − 2)/5!]∆⁵y−2 + ...   (15.1)

Writing ∆y0 = y1 − y0, ∆³y−1 = ∆²y0 − ∆²y−1, ∆⁵y−2 = ∆⁴y−1 − ∆⁴y−2 etc., (15.1) becomes

yp = y0 + p(y1 − y0) + [p(p − 1)/2!]∆²y−1 + [(p + 1)p(p − 1)/3!](∆²y0 − ∆²y−1) + ...
(15.2)
This formula is extensively used as it involves only even differences in and below
the central line.
x−2 = 310, f−2 = 2.49136,
x−1 = 320, f−1 = 2.50515,
x0 = 330, f0 = 2.51851,
x1 = 340, f1 = 2.53148,
x2 = 350, f2 = 2.54407,
x3 = 360, f3 = 2.56300,
h = 10, p = (x − 330)/10.
y         ∆y        ∆²y        ∆³y       ∆⁴y      ∆⁵y
2.49136
          0.01379
2.50515             −0.00043
          0.01336              0.00004
2.51851             −0.00039             −0.00003
...
Everett’s Central Difference Interpolation
Take x = 337.5, p = (337.5 − 330)/10 = 0.75.

To change the terms with negative sign, putting q = 1 − p in equation (15.2), we get Everett's formula:

yp = q y0 + [q(q² − 1²)/3!]∆²y−1 + [q(q² − 1²)(q² − 2²)/5!]∆⁴y−2 + ...
+ p y1 + [p(p² − 1²)/3!]∆²y0 + [p(p² − 1²)(p² − 2²)/5!]∆⁴y−1 + ...

Here q = 1 − p = 0.25.
Exercise:
x: 20    24    28    32
y: 2854  3162  3544  3992
Jain, M. K., Iyengar. S.R.K., Jain. R.K. (2008). Numerical Methods. Fifth Edition,
New Age International Publishers, New Delhi.
Atkinson, E Kendall. (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.
Suggested Reading
Lesson 16
This is obtained by taking the mean of the Gauss Forward and Backward interpolation formulae:

yp = y0 + p·(∆y0 + ∆y−1)/2 + (p²/2!)∆²y−1 + [p(p² − 1)/3!]·(∆³y−1 + ∆³y−2)/2 + [p²(p² − 1)/4!]∆⁴y−2 + ...
(16.1)

In central difference notation,

yp = y0 + (p/2)(δy1/2 + δy−1/2) + (p²/2!)δ²y0 + [p(p² − 1²)/(3!·2)](δ³y1/2 + δ³y−1/2) + [p²(p² − 1²)/4!]δ⁴y0 + ...
(16.2)
Example 1: Find the value of e^x when x = 0.644 from the given table.

Solution:
x = 0.644, x0 = 0.64, p = (0.644 − 0.64)/0.01 = 0.4, y0 = 1.896481.
Bessel's formula: using ∆²y−1 = ∆²y0 − ∆³y−1 (and similarly ∆⁴y−2 = ∆⁴y−1 − ∆⁵y−2), split each even-order term of the Gauss forward formula into two halves and substitute:

yp = y0 + p∆y0 + (1/2)[p(p − 1)/2!]∆²y−1 + (1/2)[p(p − 1)/2!](∆²y0 − ∆³y−1) + [(p + 1)p(p − 1)/3!]∆³y−1
Stirling’s and Bessel’s Formula
+ (1/2)[(p + 1)p(p − 1)(p − 2)/4!]∆⁴y−2 + (1/2)[(p + 1)p(p − 1)(p − 2)/4!](∆⁴y−1 − ∆⁵y−2) + ...

or

yp = y0 + p∆y0 + [p(p − 1)/2!]·(∆²y−1 + ∆²y0)/2 + [(p − 1/2)p(p − 1)/3!]∆³y−1
+ [(p + 1)p(p − 1)(p − 2)/4!]·(∆⁴y−2 + ∆⁴y−1)/2 + ...   (16.3)
Solution:
Taking x0 = 24, h = 4, y0 = 3162, we have p = (x − 24)/4.
x    y      ∆y    ∆²y    ∆³y
20   2854
            308
24   3162           74
            382           −8
28   3544           66
            448
32   3992
Taking x = 25, p = 1/4 = 0.25. By Bessel's formula,

yp = y0 + p∆y0 + [p(p − 1)/2!]·(∆²y−1 + ∆²y0)/2 + [(p − 1/2)p(p − 1)/3!]∆³y−1 + ...

∴ y(25) = 3162 + (0.25)(382) + [(0.25)(−0.75)/2]·(74 + 66)/2 + [(−0.25)(0.25)(−0.75)/6](−8)
= 3250.87.
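The arithmetic of this example can be checked with a few lines of Python; the differences below are the ones read from the table above, and the variable names are ours:

```python
# Bessel's formula terms for the example above (x0 = 24, h = 4, x = 25).
y0, dy0 = 3162, 382
d2y_m1, d2y0, d3y_m1 = 74, 66, -8    # the differences ∆²y−1, ∆²y0, ∆³y−1 from the table

p = (25 - 24) / 4                    # p = 0.25
y = (y0
     + p * dy0
     + p * (p - 1) / 2 * (d2y_m1 + d2y0) / 2
     + (p - 0.5) * p * (p - 1) / 6 * d3y_m1)
print(y)   # 3250.875, which the text reports as 3250.87
```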
Note:
1. If the value of p lies between −1/4 and 1/4, prefer Stirling's formula; it gives a better approximation.

2. If p lies between 1/4 and 3/4, Bessel's or Everett's formula gives a better approximation.
Exercises:
x: 20     25     30     35     40
y: 11.47  12.78  13.76  14.49  15.05
References
Jain, M. K., Iyengar. S.R.K., Jain. R.K. (2008). Numerical Methods. Fifth Edition,
New Age International Publishers, New Delhi.
Atkinson, E Kendall. (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.
Suggested Reading
Lesson 17
17.1 Introduction
by:

| p(x)    x    1 |
| f(x0)  x0    1 | = 0.
| f(x1)  x1    1 |

Expanding: p(x)(x0 − x1) − x[f(x0) − f(x1)] + [x1 f(x0) − x0 f(x1)] = 0
or it is rewritten as

p(x) = f(x0) + (x − x0)·[f(x1) − f(x0)]/(x1 − x0)
= f(x0) + (x − x0) f[x0, x1]   (17.1)

where f[x0, x1] is the first divided difference of f relative to x0 and x1, given as:

f[x0, x1] = [f(x1) − f(x0)]/(x1 − x0).
Solution:
Given x0 = 2, f0 = 4, x1 = 2.5, f1 = 5.5.

f[x0, x1] = [f(x1) − f(x0)]/(x1 − x0) = (5.5 − 4)/(2.5 − 2) = 1.5/0.5 = 3.

p1(x) = f(x0) + (x − x0) f[x0, x1]
= 4 + (x − 2)(3)
= 3x − 2.
f[x0, x1, x2] = (f[x1, x2] − f[x0, x1]) / (x2 − x0)

f[x0, x1, x2, x3] = (f[x1, x2, x3] − f[x0, x1, x2]) / (x3 − x0).

In the same way, the Nth divided difference is written as

f[x0, x1, ..., xN] = (f[x1, x2, ..., xN] − f[x0, x1, ..., xN−1]) / (xN − x0).
Put x = x0: p(x0) = a0 = f(x0) = f[x0].

Put x = x1: p(x1) = a0 + (x1 − x0) a1 = f(x1)
⇒ a1 = [f(x1) − f(x0)]/(x1 − x0) = f[x0, x1].

Put x = x2: p(x2) = f(x0) + (x2 − x0) f[x0, x1] + (x2 − x0)(x2 − x1) a2 = f(x2).
On simplifying, we get
a2 = f[x0, x1, x2] = (f[x1, x2] − f[x0, x1])/(x2 − x0).

Proceeding in the same way, aN = f[x0, x1, ..., xN].
This formula is easily extended when one more data point is added to the previous data set, since only one new term (with one new coefficient) appears.
Example 2: Find the Newton divided difference interpolating polynomial for the
data.
x:    0  1  3
f(x): 1  3  55
Solution:
x0 = 0, x1 = 1, x2 = 3, f0 = 1, f1 = 3, f2 = 55.

f[0, 1] = (f(1) − f(0))/(1 − 0) = (3 − 1)/1 = 2,
f[1, 3] = (55 − 3)/(3 − 1) = 52/2 = 26,
∴ f[0, 1, 3] = (26 − 2)/(3 − 0) = 24/3 = 8.

p2(x) = 1 + x·2 + x(x − 1)·8
= 8x² − 6x + 1.
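Newton's divided differences are usually computed in place; a minimal sketch (the names `divided_differences` and `newton_eval` are ours) reproducing Example 2:

```python
def divided_differences(xs, fs):
    """Return the coefficients f[x0], f[x0,x1], f[x0,x1,x2], ... in place."""
    coef = list(fs)
    n = len(xs)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton form by Horner's scheme."""
    total = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        total = total * (x - xs[i]) + coef[i]
    return total

xs, fs = [0, 1, 3], [1, 3, 55]
coef = divided_differences(xs, fs)
print(coef)                     # [1, 2.0, 8.0] -> p(x) = 1 + 2x + 8x(x-1)
print(newton_eval(xs, coef, 2)) # 21.0, i.e. 8(4) - 6(2) + 1
```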
Exercises:
1. Find from the following data using the Newton’s divided difference
interpolation.
x:    1.5  3.0   5.0    6.5    8.0
f(x): 5.0  31.0  131.0  282.0  521.0
2. If f(x) = 1/x², find the divided difference f[x0, x1, x2].
References
Jain, M. K., Iyengar. S.R.K., Jain. R.K. (2008). Numerical Methods. Fifth Edition,
New Age International Publishers, New Delhi.
Atkinson, E Kendall. (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.
Suggested Reading
Lesson 18
Numerical Differentiation
18.1 Introduction
set, we use numerical methods to find its derivatives. There are many ways of finding the derivatives of a function given in data form. Important among these are the methods based on interpolation and the methods based on finite difference operators. We discuss these methods through examples.
given by

PN(x) = Σ_{k=0}^{N} lk(x) fk   (18.1)

where lk(x) = π(x) / [(x − xk) π′(xk)]
and π(x) = (x − x0)(x − x1)...(x − xN).
Differentiating (18.1) w.r.t. x, we obtain

PN′(x) = Σ_{k=0}^{N} lk′(x) fk.

For two points,
P1(x) = [(x − x1)/(x0 − x1)] f0 + [(x − x0)/(x1 − x0)] f1,
where l0(x) = (x − x1)/(x0 − x1) and l1(x) = (x − x0)/(x1 − x0).
For three points,
l0′(x) = (2x − x1 − x2)/[(x0 − x1)(x0 − x2)], l1′(x) = (2x − x0 − x2)/[(x1 − x0)(x1 − x2)], l2′(x) = (2x − x0 − x1)/[(x2 − x0)(x2 − x1)].

∴ P2′(x) = [(2x − x1 − x2)/((x0 − x1)(x0 − x2))] f0 + [(2x − x0 − x2)/((x1 − x0)(x1 − x2))] f1 + [(2x − x0 − x1)/((x2 − x0)(x2 − x1))] f2   (18.3)

At x = x1:
P2′(x1) = [(x1 − x2)/((x0 − x1)(x0 − x2))] f0 + [(2x1 − x0 − x2)/((x1 − x0)(x1 − x2))] f1 + [(x1 − x0)/((x2 − x0)(x2 − x1))] f2.
finding the interpolating polynomial Pn ( x) for the given set of data points.
Example 1: Find f ′(2) and f ′′(2) for the below given data set.
xi 2 2.2 2.6
fi 0.69315 0.78846 0.95551
Solution:
We have

f′(2) = [(2x0 − x1 − x2)/((x0 − x1)(x0 − x2))] f0 + [(x0 − x2)/((x1 − x0)(x1 − x2))] f1 + [(x0 − x1)/((x2 − x0)(x2 − x1))] f2
= [(4 − 2.2 − 2.6)/((−0.2)(−0.6))](0.69315) + [(2 − 2.6)/((0.2)(−0.4))](0.78846) + [(2 − 2.2)/((0.6)(0.4))](0.95551)
= 0.49619.

f″(x0) = 2[ f0/((x0 − x1)(x0 − x2)) + f1/((x1 − x0)(x1 − x2)) + f2/((x2 − x0)(x2 − x1)) ]
= −0.19642.
In the above, we used Lagrange interpolation method. Similarly one can use
Newton interpolation methods also.
For a given equally spaced data set, we have learnt that Ef(x) = e^{hD} f(x),
i.e., e^{hD} ≡ E ⇒ hD ≡ log E   (18.5)

where E is the shift operator, D is the differentiation operator, and h is the constant step size. Using the relation between E, ∆ and ∇, we write

hD ≡ log E ≡ log(1 + ∆) ≡ ∆ − (1/2)∆² + (1/3)∆³ − ...
≡ −log(1 − ∇) ≡ ∇ + (1/2)∇² + (1/3)∇³ + ...   (18.6)

Then h·(df/dx)(xk) ≡ hD f(xk) ≡ ∆fk − (1/2)∆²fk + (1/3)∆³fk − ...
≡ ∇fk + (1/2)∇²fk + (1/3)∇³fk + ...   (18.7)

In general we can write the higher order derivatives in terms of the higher order differences; the nth derivative operator d^n/dx^n can be written as

h^n D^n ≡ ∆^n − (n/2)∆^{n+1} + [n(3n + 5)/24]∆^{n+2} − ...
≡ ∇^n + (n/2)∇^{n+1} + [n(3n + 5)/24]∇^{n+2} + ...   (18.8)
Example 2: Find dy/dx and d²y/dx² at x = 1.2 using forward differences from the following table:

x:     1       1.2     1.4     1.6     1.8     2.0
y(x):  2.7183  3.3201  4.0552  4.9530  6.0496  7.3891
Solution:
Take x0 = 1.2 , y0 = 3.3201 , h = 0.2 . We form the difference table for the given data
set as:
dy/dx |x=1.2 = (1/h)[∆f(1.2) − (1/2)∆²f(1.2) + (1/3)∆³f(1.2) − ...]
= (1/0.2)[0.7351 − (1/2)(0.1627) + (1/3)(0.0361) − ...]
≈ 3.3205.
x y ∆ ∆2 ∆3 ∆4 ∆5
1.2 3.3201
0.7351
1.4 4.0552 0.1627
0.8978 0.0361
1.6 4.9530 0.1988 0.008
1.0966 0.0441 0.0001
1.8 6.0496 0.2429 0.0094
1.3395 0.0535
2.0 7.3891 0.2964
1.6359
2.4 9.025
d²y/dx² |x=1.2 = (1/h²)[∆²fk − ∆³fk + ...]
= (1/0.04)[0.1627 − 0.0361] ≈ 3.165.
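The series (18.7)-(18.8), truncated after the available differences, can be sketched in Python using the e^x data of Example 2; the function name is our own, and since y = e^x both derivatives should come out near e^1.2 = 3.3201:

```python
def derivatives_forward(ys, h):
    """First and second derivative at the first node from forward differences."""
    d = [ys[i + 1] - ys[i] for i in range(len(ys) - 1)]
    d2 = [d[i + 1] - d[i] for i in range(len(d) - 1)]
    d3 = [d2[i + 1] - d2[i] for i in range(len(d2) - 1)]
    d4 = [d3[i + 1] - d3[i] for i in range(len(d3) - 1)]
    # hD = Δ - Δ²/2 + Δ³/3 - Δ⁴/4 + ...   ;   h²D² = Δ² - Δ³ + (11/12)Δ⁴ - ...
    dy = (d[0] - d2[0] / 2 + d3[0] / 3 - d4[0] / 4) / h
    d2y = (d2[0] - d3[0] + 11 * d4[0] / 12) / h ** 2
    return dy, d2y

ys = [3.3201, 4.0552, 4.9530, 6.0496, 7.3891]   # e^x at x = 1.2, 1.4, ..., 2.0
dy, d2y = derivatives_forward(ys, 0.2)
print(dy, d2y)
```

Truncating the series earlier, as the worked example does, changes the answer only in the second or third decimal place.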
x 3 4 5 6 7 8
y 0.205 0.24 0.259 0.262 0.25 0.224
or y(x) = 0.205 + (0.035)p − (0.016)·p(p − 1)/2.

The extremum of y(x) is obtained by solving dy/dp = 0,
i.e., (0.035) − (0.016)(2p − 1)/2 = 0
⇒ 0.035 − 0.008(2p − 1) = 0
⇒ (2p − 1) = 0.035/0.008
⇒ p = 2.6875
Exercises
2. Given sin 0° = 0, sin 10° = 0.1736, sin 20° = 0.3420, sin 30° = 0.5, sin 40° = 0.6428.
Find
(i) sin 23°
(ii) cos 10°
(iii) the second derivative of sin x at 20° (= −sin 20°), using the method based on the finite differences.
3. Find the value of x for which y is maximum from the below tabulated values
for y ( x) .
x: 1.2 1.3 1.4 1.5 1.6
y: 0.93 0.96 0.98 0.99 1.0
References
Suggested Reading
Lesson 19
Numerical Integration
19.1 Introduction
Consider the data set S for a given function y = f ( x) which is not known
explicitly where S = {( x0 , y0 ) , ( x1 , y1 ) ,... ( xN , yN )} .
I = ∫_a^b Σ_{i=0}^{N} li(x) fi dx + ∫_a^b [π(x)/(N + 1)!] f^(N+1)(ξ) dx   (19.2)

= Σ_{i=0}^{N} [∫_a^b li(x) dx] fi + RN

or I = Σ λi fi + RN   (19.3)

where λi = ∫_a^b li(x) dx and RN is the remainder, given by RN = [1/(N + 1)!] ∫_a^b π(x) f^(N+1)(ξ) dx.

Equation (19.3) gives an approximation for the integral value.
Using Newton's forward difference interpolation polynomial for the given data set S, we now derive a general formula for the numerical integration of

I = ∫_a^b y(x) dx   (19.4)

where p = (x − x0)/h.
= Nh[ y0 + (N/2)∆y0 + (N(2N − 3)/12)∆²y0 + (N(N − 2)²/24)∆³y0 + ... ]   (19.7)
Case N = 1:
I = ∫_{x0}^{x1} y(x) dx = 1·h[y0 + (1/2)∆y0]
= (h/2)(y0 + y1)   (19.8)
Case N = 2:
I = ∫_{x0}^{x2} y(x) dx = 2h[y0 + (2/2)∆y0 + (2(1)/12)∆²y0]
= 2h[y0 + (y1 − y0) + (1/6)(y2 − 2y1 + y0)]
= h[2y1 + (1/3)y2 − (2/3)y1 + (1/3)y0]
= (h/3)[y0 + 4y1 + y2]   (19.9)
Case N = 3:
I = ∫_{x0}^{x3} y(x) dx = 3h[y0 + (3/2)∆y0 + (3/4)∆²y0 + (1/8)∆³y0]
= 3h[y0 + (3/2)(y1 − y0) + (3/4)(y2 − 2y1 + y0) + (1/8)(y3 − 3y2 + 3y1 − y0)]
= (3h/8)[y0 + 3y1 + 3y2 + y3]   (19.10)
We now discuss the use of the case N = 1 for evaluating the integral I = ∫_{x0}^{xN} y(x) dx.

We can write I = I1 + I2 + I3 + ... + IN = Σ_{j=1}^{N} Ij,
where Ij = ∫_{x_{j−1}}^{x_j} y(x) dx.

We take two consecutive data points at a time and apply the formula given in (19.8) to every pair of data points. Equation (19.4) is known as the Newton-Cotes quadrature (interpolation) formula. With different values of N = 1, 2, 3, ..., we derive different integration methods.
We obtain

I1 = ∫_{x0}^{x1} y(x) dx = h[y0 + (1/2)∆y0] = (h/2)[y0 + y1]
I2 = ∫_{x1}^{x2} y(x) dx = h[y1 + (1/2)∆y1] = (h/2)[y1 + y2]
I3 = ∫_{x2}^{x3} y(x) dx = h[y2 + (1/2)∆y2] = (h/2)[y2 + y3]
......
IN = ∫_{x_{N−1}}^{xN} y(x) dx = h[y_{N−1} + (1/2)∆y_{N−1}] = (h/2)[y_{N−1} + yN].

Now
I = ∫_{x0}^{xN} y(x) dx = ∫_{x0}^{x1} y(x) dx + ∫_{x1}^{x2} y(x) dx + ... + ∫_{x_{N−1}}^{xN} y(x) dx
= I1 + I2 + ... + IN
= (h/2)[y0 + y1] + (h/2)[y1 + y2] + ... + (h/2)[y_{N−1} + yN]
= (h/2)[y0 + 2(y1 + y2 + ... + y_{N−1}) + yN]   (19.11)
This is known as the Trapezoidal rule, so named because the method sums the areas of the N trapezoids.
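The composite rule above is straightforward to code; a minimal sketch (the helper name `trapezoidal` is our own), demonstrated on f(x) = e^x whose integral is known in closed form:

```python
import math

def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))     # end ordinates carry weight 1/2
    for j in range(1, n):
        total += f(a + j * h)       # interior ordinates carry weight 1
    return h * total

approx = trapezoidal(math.exp, 0.0, 1.2, 6)
exact = math.exp(1.2) - 1.0
print(approx, exact)   # the rule slightly overestimates for a convex integrand
```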
Example: Evaluate I = ∫_0^{1.2} y(x) dx by the Trapezoidal rule.

Solution:
a = 0, b = 1.2, h = 0.2; we tabulate the function at the nodal points:

x: 0  0.2  0.4  0.6  0.8  1.0  1.2

∴ I = 4.656.
Solution:
Exercises:
1 A solid of revolution formed by rotating about the axis, the area between
the axis, the lines and and a curve through the points with the
following coordinates:
Find the volume of the solid formed using the Trapezoidal rule.
References
Suggested Reading
Lesson 20
Now
I = ∫_{x0}^{x2} y(x) dx + ∫_{x2}^{x4} y(x) dx + ... + ∫_{x_{N−2}}^{xN} y(x) dx.

Now using the Newton-Cotes formula with N = 2 for each of the above integrals, we get
I = (h/3)[y0 + 4y1 + y2] + (h/3)[y2 + 4y3 + y4] + ...
or,
I = (h/3)[F + 4O + 2E]   (20.1)
where F = sum of the function values at the end points, O = sum of the function values at odd numbered abscissas, and E = sum of the function values at even numbered abscissas.
This formula is known as Simpson's 1/3rd rule of integration.

Example: Evaluate the given integral by Simpson's 1/3rd rule.

Solution:
By Simpson's 1/3rd rule:
Simpson’s one Third and Simpson’s Three Eighth Rules
The integral
I = ∫_{x0}^{xN = x0 + Nh} y(x) dx
= ∫_{x0}^{x0+3h} y(x) dx + ∫_{x0+3h}^{x0+6h} y(x) dx + ... + ∫_{x_{N−3}}^{xN} y(x) dx   (20.2)

Applying the N = 3 Newton-Cotes formula to each group of three subintervals, this is known as Simpson's 3/8th rule.
Example 2: Evaluate ∫_0^6 dx/(1 + x²) by Simpson's 3/8th rule.

Solution:
Take h = 1, x0 = 0, x6 = 6, f(x) = 1/(1 + x²).
The number of subintervals is 6, a multiple of 3, so we can use Simpson's 3/8th rule.

x: 0   1    2    3    4       5       6
y: 1   0.5  0.2  0.1  0.0588  0.0385  0.027
   y0  y1   y2   y3   y4      y5      y6

I = (3h/8)[(y0 + y6) + 3(y1 + y2 + y4 + y5) + 2(y3)] = 1.3571.
Example 3: Using Simpson's 1/3rd rule, find ∫_0^{0.6} e^(−x²) dx by taking 6 subintervals.

Solution:
y = e^(−x²):

y: 1   0.99  0.9608  0.9139  0.8521  0.7788  0.6977
   y0  y1    y2      y3      y4      y5      y6

By Simpson's 1/3rd rule, we have
∫_0^{0.6} e^(−x²) dx = (h/3)[(y0 + y6) + 4(y1 + y3 + y5) + 2(y2 + y4)]
= (0.1/3)[(1 + 0.6977) + 4(0.99 + 0.9139 + 0.7788) + 2(0.9608 + 0.8521)]
= (0.1/3)[1.6977 + 10.7308 + 3.6258]
= 0.5351.
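Both Simpson rules are easy to code; a sketch (function names ours), checked against Examples 2 and 3 above:

```python
import math

def simpson13(f, a, b, n):
    """Simpson's 1/3rd rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + j * h) for j in range(1, n, 2))   # odd ordinates
    s += 2 * sum(f(a + j * h) for j in range(2, n, 2))   # even interior ordinates
    return h * s / 3

def simpson38(f, a, b, n):
    """Simpson's 3/8th rule; n must be a multiple of 3."""
    if n % 3:
        raise ValueError("n must be a multiple of 3")
    h = (b - a) / n
    s = f(a) + f(b)
    for j in range(1, n):
        s += (2 if j % 3 == 0 else 3) * f(a + j * h)
    return 3 * h * s / 8

print(simpson13(lambda x: math.exp(-x * x), 0.0, 0.6, 6))   # ≈ 0.53516 (Example 3)
print(simpson38(lambda x: 1 / (1 + x * x), 0.0, 6.0, 6))    # ≈ 1.3571  (Example 2)
```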
Example 4: The ordinates y0 = 1, y1 = 0.9896, y2 = 0.9589, y3 = 0.9089, y4 = 0.8415 of a curve are given at equally spaced points on [0, 1]. Estimate the volume of the solid formed by revolving the curve about the x-axis, using Simpson's 1/3rd rule.

Solution:
Here h = 0.25.
The volume of the solid of revolution is
I = ∫_0^1 π y² dx.

Using Simpson's 1/3rd rule:
I = (h/3)·π·[(y0² + y4²) + 4(y1² + y3²) + 2y2²]
= (0.25/3)·(22/7)·[{(1)² + (0.8415)²} + 4{(0.9896)² + (0.9089)²} + 2(0.9589)²]
= 0.2618 × [10.7687]
= 2.8192.
Exercises:
5.2
1. Evaluate ∫ log x ⋅ dx
4
2. A curve is given by
x: 0  1  2    3    4  5    6
y: 0  2  2.5  2.3  2  1.7  1.5

3. Estimate the length of the arc of the curve 3y = x³ from (0,0) to (1,3) using Simpson's 1/3rd rule by taking 8 subintervals.

4. Evaluate I = ∫_0^{π/2} cos θ dθ by using Simpson's 1/3rd rule with 11 ordinates.
References
Suggested Reading
Lesson 21
21.1 Introduction
Boole's and Weddle's rules are higher order integration methods. These methods use higher order differences as explained below. By taking N = 4 in the Newton-Cotes formula, we obtain Boole's rule as:

∫_{x0}^{x4} y(x) dx = 4h[y0 + 2∆y0 + (5/3)∆²y0 + (2/3)∆³y0 + (7/90)∆⁴y0]
= (2h/45)[7y0 + 32y1 + 12y2 + 32y3 + 7y4]
Similarly for the next set of data points between x4 and x8, we write the integral as

∫_{x4}^{x8} y(x) dx = (2h/45)[7y4 + 32y5 + 12y6 + 32y7 + 7y8]

Adding all such pieces,

I = ∫_{x0}^{xN} y(x) dx
= (2h/45)[7y0 + 32(y1 + y3 + y5 + y7 + ...) + 12(y2 + y6 + y10 + ...) + 14(y4 + y8 + y12 + ...) + 7yN]
(21.1)
For N = 6,

∫_{x0}^{x6} y(x) dx = 6h[y0 + 3∆y0 + (9/2)∆²y0 + 4∆³y0 + (123/60)∆⁴y0 + (11/20)∆⁵y0 + (41/140)∆⁶y0]
≈ (3h/10)[y0 + 5y1 + y2 + 6y3 + y4 + 5y5 + y6],

where the coefficient 41/140 of ∆⁶y0 is replaced by 42/140 = 3/10 to obtain the simple weights; this is Weddle's rule.

For the next six subintervals,
∫_{x6}^{x12} y(x) dx = (3h/10)[y6 + 5y7 + y8 + 6y9 + y10 + 5y11 + y12].

Adding all such pieces,

I = ∫_{x0}^{xN} y(x) dx
= (3h/10)[y0 + 5(y1 + y5 + y7 + y11 + ...) + (y2 + y4 + y8 + y10 + ...) + 6(y3 + y9 + ...) + 2(y6 + y12 + ...) + yN]
Weddle’s rule is found to be more accurate than all the methods discussed
earlier. This is because higher order approximation is used for the integration.
The below given table gives the error estimates involved in the integration
methods.
Boole’s and Weddle’s rules
Rule              Formula                                                             Error
Simpson's 1/3rd   ∫_{x0}^{x2} y(x) dx = (h/3)(y0 + 4y1 + y2)                          −(h⁵/90) y⁽⁴⁾(ξ), x0 < ξ < x2
Simpson's 3/8th   ∫_{x0}^{x3} y(x) dx = (3h/8)(y0 + 3y1 + 3y2 + y3)                   −(3h⁵/80) y⁽⁴⁾(ξ), x0 < ξ < x3
Boole's           ∫_{x0}^{x4} y(x) dx = (2h/45)(7y0 + 32y1 + 12y2 + 32y3 + 7y4)       −(8h⁷/945) y⁽⁶⁾(ξ), x0 < ξ < x4
Weddle's          ∫_{x0}^{x6} y(x) dx = (3h/10)(y0 + 5y1 + y2 + 6y3 + y4 + 5y5 + y6)  −(h⁷/140) y⁽⁶⁾(ξ), x0 < ξ < x6
Example 1: Evaluate ∫_0^{1.2} e^x dx using Boole's rule with h = 0.3.

Solution:
I = [2(0.3)/45][7(1) + 32(1.34986) + 12(1.82212) + 32(2.4596) + 7(3.32012)]
= 2.31954.
Example 2: Evaluate ∫_0^{12} dx/(1 + x²) by using Weddle's rule with h = 2.

Solution:
The function y(x) = 1/(1 + x²) is calculated at
x0 = 0, x1 = 2, x2 = 4, x3 = 6, x4 = 8, x5 = 10 and x6 = 12
as
y0 = 1, y1 = 0.2, y2 = 0.05882, y3 = 0.02703, y4 = 0.01538, y5 = 0.0099 and y6 = 0.0069.

I = [3(2)/10][1 + 5(0.2) + 0.05882 + 6(0.02703) + 0.01538 + 5(0.0099) + 0.0069]
= 1.37567.
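A sketch of the composite Weddle rule (the function name is ours), reproducing Example 2:

```python
def weddle(f, a, b, n):
    """Weddle's rule; n must be a multiple of 6."""
    if n % 6:
        raise ValueError("n must be a multiple of 6")
    h = (b - a) / n
    w = [1, 5, 1, 6, 1, 5]               # repeating weight pattern for each 6-panel block
    s = 0.0
    for block in range(0, n, 6):
        for j in range(6):
            s += w[j] * f(a + (block + j) * h)
        s += f(a + (block + 6) * h)      # closing ordinate; interior junctions get 1 + 1 = 2
    return 3 * h * s / 10

print(weddle(lambda x: 1 / (1 + x * x), 0.0, 12.0, 6))   # ≈ 1.37566
```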
Exercises:
1. Evaluate ∫_4^{5.2} log x dx using (i) Boole's rule; (ii) Weddle's rule.
2. Evaluate ∫_0^{1/2} dx/(1 − x²) using Weddle's rule.
4. Evaluate ∫_0^2 dx/(1 + x²) using Weddle's rule.
References
Suggested Reading
Lesson 22
Gaussian Quadrature
I = ∫_a^b w(x) f(x) dx = Σ_{k=0}^{N} λk fk   (22.2)

where xk, k = 0, 1, ..., N are called the nodes, which are distributed within the limits of integration, and λk, k = 0, 1, ..., N are called the weights.
The formula (22.2) is also known as the quadrature formula.
RN = ∫_a^b w(x) f(x) dx − Σ_{k=0}^{N} λk fk   (22.3)
considering f ( x) = c0 + c1 x + c2 x 2 + ... + c2 N +1 x 2 N +1 .
For example, with two nodes, ∫_{−1}^{1} f(x) dx = w1 f(x1) + w2 f(x2).
When the nodes xk are fixed in advance, the corresponding methods are called Newton-Cotes methods; when the nodes are also to be determined, the methods are called Gaussian quadrature methods.
Any interval [a, b] can be mapped onto [−1, 1] using the transformation x = [(b − a)/2] t + [(b + a)/2]. Depending on the weight function w(x), a variety of methods are developed. We discuss here the Gauss-Legendre integration method, for which the weight function is w(x) = 1.
I = ∫_{−1}^{1} f(x) dx = Σ_{k=0}^{N} λk f(xk)
In the above, λ0, x0 are unknowns; these are obtained by making this integration method exact for f(x) = 1 and f(x) = x.

(a) ∫_{−1}^{1} 1 dx = λ0 f(x0) ⇒ λ0 = 2

(b) ∫_{−1}^{1} x dx = λ0 x0 ⇒ λ0 x0 = 0 ⇒ x0 = 0 (since λ0 = 2).

∴ ∫_{−1}^{1} f(x) dx = 2 f(0).
For the two point formula,
∫_{−1}^{1} f(x) dx = λ0 f(x0) + λ1 f(x1),

and for the three point formula,
∫_{−1}^{1} f(x) dx = (1/9)[5 f(−√(3/5)) + 8 f(0) + 5 f(√(3/5))].
Example 1: Evaluate the integral I = ∫_1^2 [2x/(1 + x⁴)] dx using the three point formula.

Solution:
Transforming [1, 2] to [−1, 1] by x = t/2 + 3/2, and applying the three point formula to the transformed integrand g(t),

I = (1/9)[5 g(−√(3/5)) + 8 g(0) + 5 g(√(3/5))]
= (1/9)[5(0.4393) + 8(0.2474) + 5(0.1379)]
= 0.5406.

We can directly integrate ∫_1^2 [2x/(1 + x⁴)] dx; its value is tan⁻¹(4) − π/4 = 0.5404.
Example 2: Evaluate the integral ∫_0^1 dx/(1 + x) using the Gauss-Legendre two point formula.

Solution:
Define x = (1/2)t + (1/2) ⇒ dx = (1/2)dt.

∴ ∫_0^1 dx/(1 + x) = ∫_{−1}^{1} dt/(t + 3) = f(−1/√3) + f(1/√3), where f(t) = 1/(t + 3),
= 0.69231.
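The low-order Gauss-Legendre rules can be tabulated directly; a sketch (function name ours) with the standard 1-, 2- and 3-point nodes and weights, checked against the two examples:

```python
import math

def gauss_legendre(f, a, b, npts):
    """Gauss-Legendre quadrature on [a, b] for 1, 2 or 3 points."""
    rules = {
        1: [(0.0, 2.0)],
        2: [(-1 / math.sqrt(3), 1.0), (1 / math.sqrt(3), 1.0)],
        3: [(-math.sqrt(3 / 5), 5 / 9), (0.0, 8 / 9), (math.sqrt(3 / 5), 5 / 9)],
    }
    # Map [a, b] onto [-1, 1]: x = half*t + mid, dx = half*dt
    half, mid = (b - a) / 2, (b + a) / 2
    return half * sum(w * f(half * t + mid) for t, w in rules[npts])

print(gauss_legendre(lambda x: 1 / (1 + x), 0.0, 1.0, 2))           # ≈ 0.69231 (Example 2)
print(gauss_legendre(lambda x: 2 * x / (1 + x ** 4), 1.0, 2.0, 3))  # ≈ 0.5406  (Example 1)
```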
Exercises: Evaluate (a) ∫_0^1 dx/(1 + x)  (b) ∫_1^2 dx/x  (c) ∫_1^2 dx/(1 + x⁴)
References
Suggested Reading
Lesson 23
Difference Equations
23.1 Introduction
The order of the difference equation (3) is [(n + 3) − n]/1 = 3.
Solution:
So the order of this equation is [(n + 2) − n]/1 = 2.
1
We now illustrate the formation of difference equations from the given family
of curves.
Solution:
∆y = a(t + 1 − t) + b[(t + 1)² − t²] = a + b(2t + 1)
and ∆²y = 2b[(t + 1) − t] = 2b.

Hence b = (1/2)∆²y and a = ∆y − (1/2)∆²y·(2t + 1).
2 2
yn = Yn + Vn   (23.8)
Yn is also called as the complementary function.
which is called the characteristic equation of the difference equation (23.6). Let the roots of the equation be ξ_1, ξ_2, ...., ξ_k. These roots may be (i) real and distinct, (ii) real and repeated, or (iii) complex.
Case (iii): For the case where two of the roots are complex and the rest of them are real and distinct, say ξ_1 = α + iβ = r e^{iθ} and ξ_2 = α − iβ = r e^{−iθ} with ξ_3, ξ_4, ...., ξ_k real and distinct, the part of the solution coming from the complex pair is r^n (c_1 cos nθ + c_2 sin nθ),
with r = √(α^2 + β^2) and θ = tan^{−1}(β/α).
In the same way one can write the solution of the homogeneous difference
equation when it has several pairs of complex roots.
(ii) y_{n+2} − y_{n+1} + (1/4) y_n = 0
Solution:
complete solution is
y_n = α_1 (1)^n + α_2 (−2)^n + α_3 (3)^n.
(ii) The characteristic equation is ξ^2 − ξ + 1/4 = 0 and its roots are ξ = 1/2, 1/2 (real, repeated). Hence the complete solution is: y_n = (α_1 + nα_2)(1/2)^n.
Example 4: (For complex roots): Find the complete solution of the difference equation y_{n+2} − 4y_{n+1} + 5y_n = 0.
Solution:
The characteristic equation is ξ^2 − 4ξ + 5 = 0, with roots ξ_1 = 2 + i and ξ_2 = 2 − i. So r = √(4 + 1) = √5 and θ = tan^{−1}(1/2), and hence y_n = (√5)^n [α_1 cos(nθ) + α_2 sin(nθ)].
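The closed form can be checked against the recurrence numerically — a small sketch (the constants A and B below are arbitrary sample values standing in for α_1, α_2):

```python
import math

# y_n = r^n (A cos nθ + B sin nθ) with r = √5, θ = arctan(1/2) should
# satisfy y_{n+2} - 4 y_{n+1} + 5 y_n = 0 for any A, B.
r, theta = math.sqrt(5.0), math.atan(0.5)
A, B = 1.3, -0.7   # arbitrary sample constants

def y(n):
    return r**n * (A * math.cos(n * theta) + B * math.sin(n * theta))

residuals = [y(n + 2) - 4 * y(n + 1) + 5 * y(n) for n in range(10)]
```

Every residual is zero up to floating-point rounding, confirming the solution form.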
Exercises:
1: Write the difference equation ∆ 3 yn + ∆ 2 yn + ∆yn =0 in the subscript form.
(ii) y_{n+2} − 6y_{n+1} + 9y_n = 0
(iii) y_{n+3} − 3y_{n+1} + 2y_n = 0
References
Suggested Reading
Lesson 24
24.1 Introduction
Using the shift operator E, the above can be put in the operator form as
ϕ ( E ) yn = f n (24.2)
ϕ(E) is an operator involving E; 1/ϕ(E) is its inverse operator [assuming its existence]. The particular solution is obtained for the different forms of the non-homogeneous function f_n given below. When ϕ(E) = (E − a)^m and f_n = a^n, the particular integral is
P.I. = [1/(E − a)^m] a^n = [n(n − 1)....(n − m + 1)/m!] a^{n−m}.
This way one can find the P.I. for the given non-homogeneous equation when f_n = a^n.
Solution:
P.I. = [1/(E^2 − 4E + 3)] (2^n + 5^n)
    = [1/(E^2 − 4E + 3)] 2^n + [1/(E^2 − 4E + 3)] 5^n
    = −2^n + (1/8) 5^n
Solution:
y_n = (α_1 + α_2 n) 3^n + [n(n − 1)/2!] · 3^{n−2} + (1/15)(−1)^n.
P.I. = (1/ϕ(E)) sin pn = (1/ϕ(E)) · (e^{ipn} − e^{−ipn})/(2i)
    = (1/2i)[(1/ϕ(E)) a^n − (1/ϕ(E)) b^n];  a^n = e^{ipn}, b^n = e^{−ipn}.
Similarly, (1/ϕ(E)) cos pn = (1/2)[(1/ϕ(E)) a^n + (1/ϕ(E)) b^n].
For f_n = n^p, ∴ P.I. = [ϕ(E)]^{−1} n^p.
Solution:
P.I. = [1/(E^2 − 2E cos α + 1)] cos n
    = [1/(E^2 − E(e^{iα} + e^{−iα}) + 1)] · (e^{in} + e^{−in})/2
    = (1/2)[ (1/(E − e^{iα})) (1/(e^{iα} − e^{−iα})) e^{in} + (1/(E − e^{−iα})) (1/(e^{−iα} − e^{iα})) e^{−in} ]
    = (1/(4i sin α)) [ (1/(E − e^{iα})) e^{in} − (1/(E − e^{−iα})) e^{−in} ]
    = (1/(4i sin α)) [ e^{in}/(e^{i} − e^{iα}) − e^{−in}/(e^{−i} − e^{−iα}) ]
Solution:
P.I. = [1/(E − 1)^2] · n · 3^n
    = 3^n · [1/(3E − 1)^2] · n
    = 3^n · [1/(2 + 3∆)^2] · n        (using E = 1 + ∆)
    = (3^n/2^2) [1 + (3/2)∆]^{−2} · n
    = (3^n/2^2) (1 − 3∆ + ....) [n]
    = (3^n/2^2) {[n] − 3 · 1} = (3^n/2^2)(n − 3).
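A quick numerical check of this particular integral: in subscript form (E − 1)^2 y_n = n·3^n reads y_{n+2} − 2y_{n+1} + y_n = n·3^n, and p_n = (3^n/4)(n − 3) should satisfy it exactly.

```python
# p_n = (3^n / 4)(n - 3) should be a particular solution of
# y_{n+2} - 2 y_{n+1} + y_n = n * 3^n.
def p(n):
    return (3 ** n / 4) * (n - 3)

residuals = [p(n + 2) - 2 * p(n + 1) + p(n) - n * 3 ** n for n in range(12)]
```

All residuals vanish, confirming the operator calculation above.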
Exercises:
1. Solve y_{n+2} − 4y_n = n − 1.
(iii) y_{n+2} − 2 cos(1/2) y_{n+1} + y_n = sin(n/2).
Lesson 25
25.1 Introduction
; subjected to (25.2)
The solution of (25.1) - (25.2) can be found as a series for y in terms of powers of the independent variable t, from which the value of y can be obtained by direct substitution, or as a set of tabulated values of t and y.
The methods due to Picard and Taylor (Series) find the solution of the I.V.P. (25.1) - (25.2) as the dependent function y expressed as a function of the independent variable t. The other methods, such as the Euler and Runge-Kutta methods, give the solution at a discrete set of points in the given interval.
For an nth order differential equation, the general solution has n arbitrary constants, and in order to compute the numerical solution of such an equation we need n initial conditions, one each on the function and its derivatives up to the (n − 1)th order, at the initial point. The solution of such an I.V.P. can also be found in a similar manner as that of the solution of (25.1)-(25.2), but by using the vector treatment. Let us now discuss Picard's method of successive approximations.
(25.3)
subjected to (25.4)
∫_{y_0}^{y} dy = ∫_{t_0}^{t} f(t, y) dt
or y = y_0 + ∫_{t_0}^{t} f(t, y) dt        (25.5)
Notice that the unknown y that is to be found is also present inside the integral; such an equation is called an integral equation. Such an equation can be solved by the method of successive approximations. To start the procedure, assume an approximation for the unknown function inside the integral, say y ≈ y_0. This makes the equation (25.5) become
y^{(1)}(t) = y_0 + ∫_{t_0}^{t} f(t, y_0) dt        (25.6)
Here y^{(1)}(t) is the first approximation to the solution. The second approximation to the solution is obtained by using y^{(1)}(t) in the integral as
y^{(2)}(t) = y_0 + ∫_{t_0}^{t} f(t, y^{(1)}(t)) dt.
In general, y^{(n+1)}(t) = y_0 + ∫_{t_0}^{t} f(t, y^{(n)}(t)) dt, n = 1, 2, ....
It can be proved that if the function f(t, y) is bounded and sufficiently smooth in some region about the initial point (t_0, y_0), this sequence of approximations converges to the solution of the I.V.P.
Solution:
Integrating dy/dt = 3t + y^2 in the domain, we get
∫_1^y dy = ∫_0^t (3t + y^2) dt
⇒ y(t) − y(0) = ∫_0^t (3t + y^2) dt
or y(t) = 1 + ∫_0^t (3t + y_0^2) dt
or y^{(1)}(t) = 1 + ∫_0^t (3t + 1) dt = (3t^2/2) + t + 1
is the first approximation. The second approximation is
y^{(2)}(t) = 1 + ∫_0^t [3t + (y^{(1)}(t))^2] dt
         = 1 + ∫_0^t [3t + ((3t^2/2) + t + 1)^2] dt
         = (9/20) t^5 + (3/4) t^4 + (4/3) t^3 + (5/2) t^2 + t + 1
The advantage of this method is that we can compute the solution of the given I.V.P. at every point of the domain. The disadvantage, as seen in the example above, is that the integration procedure is laborious, and at times the integration might not be possible for some functions f(t, y). The limited utility of this method calls for the search for more elegant methods for solving I.V.P.s.
Example 2: Solve the equation dy/dt = t + y^2 subject to the condition y(t = 0) = 1.
Solution:
Start with y(0) = y_0 = 1.
This generates y^{(1)}(t) = 1 + ∫_0^t (t + 1) dt = 1 + t + t^2/2
and y^{(2)}(t) = 1 + ∫_0^t [t + (1 + t + t^2/2)^2] dt
             = 1 + t + (3/2) t^2 + (2/3) t^3 + (1/4) t^4 + (1/20) t^5
and so on.
Note that finding y^{(3)}(t) itself involves squaring a 5th degree polynomial and integrating it, thus making this method more tedious.
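The polynomial bookkeeping in Example 2 can be automated; the following minimal sketch uses exact rational coefficients (the helper names are our own):

```python
from fractions import Fraction

# Picard iteration for y' = t + y^2, y(0) = 1, with polynomials stored
# as coefficient lists [c0, c1, ...] (c_k is the coefficient of t^k).
def poly_mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_integrate(p):
    # term by term: c_k t^k -> c_k t^(k+1) / (k+1), zero constant term
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(p)]

def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def picard_step(y):
    # y_next(t) = 1 + ∫_0^t (s + y(s)^2) ds
    integrand = poly_add([Fraction(0), Fraction(1)], poly_mul(y, y))
    return poly_add([Fraction(1)], poly_integrate(integrand))

y0 = [Fraction(1)]            # starting approximation y ≈ 1
y1 = picard_step(y0)          # 1 + t + t^2/2
y2 = picard_step(y1)          # 1 + t + 3/2 t^2 + 2/3 t^3 + 1/4 t^4 + 1/20 t^5
```

The computed coefficients of y2 reproduce the hand calculation above exactly.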
In the next lesson, we learn one other analytical method that gives an
approximation for the solution in a function form.
Exercise:
1. Use Picard's method to find the solution y at t = 0.2, 0.4 and 1.0, correct to three decimal places, for the I.V.P. dy/dt = t^2/(1 + y^2) subject to y(0) = 0. [Hint: obtain y^{(2)}(t), which results in a polynomial in t as the approximate solution and gives the solution correct to 3 decimal places.]
References
Suggested Reading
Lesson 26
y(t) = y(t_0) + ((t − t_0)/1!) y′(t_0) + ((t − t_0)^2/2!) y′′(t_0) + ((t − t_0)^3/3!) y′′′(t_0) + ....        (26.1)
y(t_1) = y(t_0) + (h/1!) y′(t_0) + (h^2/2!) y′′(t_0) + (h^3/3!) y′′′(t_0) + ....        (26.2)
Now consider the I.V.P. dy/dt = f(t, y), t ∈ [t_0, b]        (26.4)
The Taylor Series solution of the given I.V.P. (26.3)-(26.4) is to find an approximate function y(t) as given in (26.1), involving the derivatives of the unknown function y(t) at the initial point t = t_0, that satisfies the I.V.P. (26.3)-(26.4).
Given dy/dt = f(t, y).
At t = t_0: dy/dt (t = t_0) = f(t_0, y(t_0)) = f(t_0, y_0) = f_0 (say)
y′′(t) = d/dt (dy/dt) = d/dt (f(t, y)) = ∂f/∂t + (∂f/∂y)(dy/dt) = ∂f/∂t + (∂f/∂y) f(t, y).
y′′(t = t_0) = (∂f/∂t)(t_0, y(t_0)) + (∂f/∂y)(t_0, y(t_0)) · f(t_0, y(t_0)).
Example 1: Find y(0.1) from the I.V.P. dy/dt = 3t + y^2 subject to y(0) = 1.
Solution:
Given y′ = 3t + y^2
⇒ y′(0) = 0 + [y(0)]^2 = 1
y′′ = 3 + 2y y′ ⇒ y′′(0) = 3 + 2·1·1 = 5
y′′′ = 2y y′′ + 2(y′)^2 ⇒ y′′′(0) = 2·1·5 + 2·1 = 12
y^{iv} = 2y′·y′′ + 2y·y′′′ + 4y′·y′′ ⇒ y^{iv}(0) = 2·1·5 + 2·1·12 + 4·1·5 = 54, etc.
179 WhatsApp: +91 7900900676 www.AgriMoon.Com
Taylor Series method
y(t) = y(t_0) + (t − t_0) y′(t_0) + ((t − t_0)^2/2!) y′′(t_0) + ((t − t_0)^3/3!) y′′′(t_0) + ((t − t_0)^4/4!) y^{iv}(t_0) + ...
Taking t = 0.1, t_0 = 0 (given),
y(0.1) = 1 + (0.1)(1) + ((0.1)^2/2)(5) + ((0.1)^3/3!)(12) + ((0.1)^4/4!)(54) + ...
∴ y(0.1) = 1.12722.
Example 2: Obtain the Taylor Series solution for the I.V.P. given by dy/dt = 2y + 3e^t, y(0) = 0. Compare the solution at t = 0.2 with the exact solution given by y(t) = 3(e^{2t} − e^t).
Solution:
The solution is y(t) = 3t + (9/2) t^2 + (21/6) t^3 + (45/24) t^4 + ....
At t = 0.2, y(0.2) = 0.811.
From the exact solution, y(0.2) = 0.8112.
So the error in the numerical solution is 0.8112 − 0.811 = 0.0002.
y(t) = y(t_0) + (t − t_0) y′(t_0) + ((t − t_0)^2/2!) y′′(t_0) + ...
     + ((t − t_0)^p/p!) y^{(p)}(t_0) + ((t − t_0)^{p+1}/(p + 1)!) y^{(p+1)}(t_0 + θh)        (26.6)
where y^{(p)}(t) is the pth derivative of y(t) and 0 < θ < 1 is a real number. The last term in the expansion is called the remainder term. In general, the local truncation error at any location t = t_{j+1} of the method is given by
T_{j+1} = (1/(p + 1)!) h^{p+1} y^{(p+1)}(t_j + θh)
where h = t_{j+1} − t_j.
The order of a method is the largest integer p for which (1/h) T_{j+1} = O(h^p).
The notation O(h^p) denotes that all terms of order h^p onwards are grouped into a single term. The method given by equation (26.1) is called the Taylor Series method of order p.
Example 3: Determine the first three non-zero terms in the Taylor Series for y (t )
from the I.V.P. y′ = t^2 + y^2, y(0) = 0.
Solution:
y′(0) = 0;
y′′′ = 2 + 2(y′)^2 + 2y y′′ ⇒ y′′′(0) = 2
b) Find ‘ t ’, if the error in y (t ) obtained from the first four terms of the Taylor
series, is to be less than 5 ×10−5 , after rounding.
c) Determine the number of terms in the Taylor Series required to obtain the
result correct to 5 ×10−6 for t ≤ 0.4 .
Solution:
a) The second order Taylor Series method is given by y_{n+1} = y_n + h y_n′ + (h^2/2) y_n′′ + O(h^3), with n = 0, 1 and h = 0.1.
We have y′ = 2t + 3y
or y′(t_n) = y_n′ = 2t_n + 3y_n, and y′′ = 2 + 3y′.
Taking n = 0: y_0 = 1, y_0′ = 3, y_0′′ = 2 + 9 = 11
∴ y(0.1) = 1 + (0.1)·3 + ((0.1)^2/2)·11 = 1.355.
Taking n = 1: h = 0.1, y_1 = 1.355, y_1′ = 4.265, y_1′′ = 14.795
⇒ y(0.2) = 1.355 + (0.1)·(4.265) + ((0.1)^2/2!)·(14.795)
         = 1.8555.
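The two steps above can be checked with a short script (a sketch; the function name is our own):

```python
# Second order Taylor step for y' = 2t + 3y, y(0) = 1:
# y' = 2t + 3y and y'' = 2 + 3y' are evaluated at each node.
def taylor2(t0, y0, h, steps):
    t, y = t0, y0
    for _ in range(steps):
        yp = 2 * t + 3 * y            # y'
        ypp = 2 + 3 * yp              # y''
        y = y + h * yp + h * h / 2 * ypp
        t = t + h
    return y

y01 = taylor2(0.0, 1.0, 0.1, 1)   # y(0.1), about 1.355
y02 = taylor2(0.0, 1.0, 0.1, 2)   # y(0.2), about 1.8555
```

Both values agree with the hand computation to the digits shown.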
b) Given y(0) = 1, y′(0) = 3.
Compute y′′(0) = 11, y′′′(0) = 33, y^{iv}(0) = 99.
∴ R_4 = (99/24) t^4 e^{3t} < 5×10^{−5}.
Simplifying and solving this non-linear algebraic equation for t, we get
t^4 e^{3t} < 0.000012 ⇒ t < 0.056
Note: If the exact solution is not known, write one more non-vanishing term in the Taylor Series than is required and then differentiate this series the required number of times.
Now to determine the number of terms required in the Taylor series solution to obtain the solution correct to 5×10^{−6} for t ≤ 0.4, we estimate it as:
max_{0 ≤ t ≤ 0.4} (t^p/p!) · max_{ξ ∈ [0, 0.4]} |y^{(p)}(ξ)| ≤ 5×10^{−6}
Again using the analytical solution, we find the pth derivative of y(t) at t = ξ as:
y^{(p)}(ξ) = (11) 3^{p−2} e^{1.2}
or ((0.4)^p/p!) (11) 3^{p−2} e^{1.2} ≤ 5×10^{−6}
Exercises:
2. Solve y=′ y 2 + t , y (0) = 1 using Taylor Series method and compute y (0.2) ,
h = 0.1 .
3. Evaluate y(0.1) correct to six decimal places (5×10^{−6}) by the Taylor Series method if y(t) satisfies dy/dt = 1 + yt, y(0) = 1.
References
Atkinson. E Kendall, (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.
Suggested Reading
Module 1: Numerical Analysis
Lesson 27
27.1 Introduction
Picard’s and Taylor Series methods give solution of the given I.V.P. in the form
of a power series. We now describe some numerical methods which give the
solution in the form of a discrete data at equally spaced points of the interval.
The discretization of the interval is considered as described in Lessons 1 and 2 and is not repeated here.
Also, if y_{j+1} can be obtained by directly evaluating the right hand side of (27.3), then the method is called an explicit method. If the right hand side itself involves y_{j+1}, the solution y_{j+1} is not available explicitly and we need to solve the equation (27.3) to get the solution; such a method is called an implicit method.
y(t_{j+1}) = y(t_j) + hϕ(t_{j+1}, t_j, y(t_{j+1}), y(t_j), h) + T_{j+1}        (27.5)
By definition, the order of the single step method is the largest integer p for which (1/h) T_{j+1} = O(h^p).        (27.7)
27.3 Forward Euler Method for the I.V.P. dy/dt = f(t, y); y(t_0) = y_0:
Let t j be any point in the interval [t0 , b] . Then the slope of y (t ) at t = t j is given by
dy/dt |_{t = t_j} = f(t_j, y_j)        (27.8)
[Figure: one step of the forward Euler method — the exact solution y(t), the numerical solution obtained by following the slope f(t_j, y_j) from t_j to t_{j+1} = t_j + h, and the resulting error at t_{j+1}.]
Replacing dy/dt at t = t_j by the first order forward difference at t_j, we get
(y_{j+1} − y_j)/h = f(t_j, y_j) or y_{j+1} = y_j + h·f(t_j, y_j), j = 0, 1, 2, ..., N − 1, i.e., at every point t_j:
y_1 = y_0 + h·f(t_0, y_0),
y_2 = y_1 + h·f(t_1, y_1),
.........
y_N = y_{N−1} + h·f(t_{N−1}, y_{N−1}).
For a chosen h and with the initial condition y_0, one can find the solution explicitly at all the nodal points of the given interval. The local truncation error in the method is given by
T_{j+1} = y(t_{j+1}) − y_{j+1}
       = y(t_{j+1}) − [y(t_j) + h·f(t_j, y_j)]
which, on expanding y(t_{j+1}) in a Taylor series about t_j, gives
T_{j+1} = (h^2/2) y′′(ξ)
where t_j < ξ < t_{j+1}. Let the maximum value of |y′′(t)| in [t_0, b] be M*; then
max_{[t_0, b]} T_{j+1} = T (say) = (h^2/2) max_{[t_0, b]} |y′′(ξ)| = (h^2/2) M*.
Thus T ≤ (h^2/2) M*,
i.e., the L.T.E. is of O(h^2) and, by definition, the order of this forward Euler method is one, since (1/h) T_{j+1} = (1/h)(h^2/2) M* = O(h^1).
Solution: (for the I.V.P. y′ = −y, y(0) = 1, solved with h = 0.01)
Take t_0 = 0, t_1 = 0.01, t_2 = 0.02, t_3 = 0.03, t_4 = 0.04; f(t, y) = −y.
Now y(t_1) = y(t_0) + h·f(t_0, y_0) = 1 + (0.01)(−1) = 0.99,
y(t_2) = y(t_1) + h·f(t_1, y_1) = 0.9801,
y(t_3) = y(t_2) + h·f(t_2, y_2) = 0.9703,
y(t_4) = y(t_3) + h·f(t_3, y_3) = 0.9606.
The error in the Euler method for the solution at t = 0.04 is given as
|0.9606 − 0.9608| = 0.0002.
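A minimal sketch of the same computation (assuming, as the numbers above indicate, step size h = 0.01):

```python
import math

# Forward Euler for y' = -y, y(0) = 1 with h = 0.01.
h, y, t = 0.01, 1.0, 0.0
values = []
for _ in range(4):
    y = y + h * (-y)          # y_{j+1} = y_j + h f(t_j, y_j)
    t = t + h
    values.append(round(y, 4))

error = abs(y - math.exp(-t))  # exact solution is e^{-t}
```

The four nodal values and the final error of about 0.0002 match the worked example.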
Example: Solve y′ = −2ty^2, y(0) = 1 on [0, 0.2] with h = 0.2 using the backward Euler method.
Solution:
The backward Euler method is y_{j+1} = y_j + h·f(t_{j+1}, y_{j+1}).
⇒ y(0.2) = y(0) + h[−2·t_1·y(0.2)^2]
or y_1 = 1 − 2(0.2)(0.2) y_1^2,
which can be written as a quadratic in y_1 as
0.08 y_1^2 + y_1 − 1 = 0,
whose positive root gives y_1 ≈ 0.9307.
Exercises:
1. Continue this to compute the solution at t = 0.4 if the above I.V.P. is solved in
the interval [ 0, 0.4] with h = 0.2 .
method.
3. Solve the I.V.P. y′ =−2ty 2 , 0 ≤ t ≤ 0.5, h =0.1, y (0) =1 using the forward Euler
method.
References
Suggested Reading
Lesson 28
28.1 Introduction
The first order explicit Euler method is improved to achieve better accuracy in the numerical solution of an initial value problem y′ = f(t, y), y(t_0) = y_0,
i.e., y_{j+1} = y_j + h·y_j′
is replaced by
y_{j+1} = y_j + (h/2)[f(t_j, y_j) + f(t_{j+1}, y_{j+1})]        (28.1)
method. The above iteration process is terminated at each step when the condition |y_{j+1}^{(s+1)} − y_{j+1}^{(s)}| < ε is satisfied,
Exercise: Show that the modified Euler method has L.T.E. of O(h^3), so that the order of the method is two.
rapidly the accuracy can be improved with refinement of the grid spacing h in any given interval. For example, in a first order method such as
y_{j+1} = y_j + h·f(t_j, y_j) + O(h),
if we reduce the mesh size h by 1/2, the error is reduced by approximately a factor of 1/2, whereas in the second order method
y_{j+1} = y_j + (h/2)[f(t_j, y_j) + f(t_{j+1}, y_{j+1})] + O(h^2),
the same refinement reduces the error by approximately a factor of 1/4.
Solution:
We compute y(0.1) in two steps of size h = 0.05, so y_2 = y(0.1). The Euler predictor gives y_1^{(0)} = 1.05.
Modified Euler method
Compute y(0.05):
Take s = 0: calculate f(t_1, y_1^{(0)}) = 1.0262
y_1^{(1)} = y_0 + (h/2)[f(t_0, y_0) + f(t_1, y_1^{(0)})] = 1.0513
y_1^{(2)} = y_0 + (h/2)[f(t_0, y_0) + f(t_1, y_1^{(1)})] = 1.0513
∴ y(0.05) = 1.0513.
Compute y(0.1): t_1 = 0.05, y_1 = 1.0513, h = 0.05
⇒ y_2^{(0)} = 1.104.
s = 0 ⇒ y_2^{(1)} = y_1 + (h/2)[f(t_1, y_1) + f(t_2, y_2^{(0)})] = 1.1055
s = 1 ⇒ y_2^{(2)} = y_1 + (h/2)[f(t_1, y_1) + f(t_2, y_2^{(1)})] = 1.1055
Example 2: Given the I.V.P. dy/dt = 2ty, y(1) = 1, find y(1.4) using the modified Euler method by taking h = 0.1. Compare this solution with the exact solution given by y(t) = e^{t^2 − 1}. Calculate the percentage relative error.
Solution:
Now y_1^{(1)} = y_0 + (0.1/2)[2t_0 y_0 + 2t_1 y_1^{(0)}] = 1.232,
y_1^{(2)} = y_0 + (0.1/2)[2t_0 y_0 + 2t_1 y_1^{(1)}] = 1.232.
The solutions from j = 1 to j = 3 are calculated (left as an exercise) and are tabulated below.
Table 28.1
j   t_n   y_n      Exact value e^{t_n^2 − 1}   Absolute error   Percentage relative error
0   1     1        1         0        0
1   1.1   1.232    1.2337    0.0017   0.14
2   1.2   1.5479   1.5527    0.0048   0.31
3   1.3   1.9832   1.9937    0.0106   0.53
4   1.4   2.5908   2.6117    0.0209   0.80
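Table 28.1 can be reproduced with one Euler predictor and a single corrector pass per step — a sketch:

```python
import math

# One predictor + one corrector pass of the modified Euler method for
# dy/dt = 2ty, y(1) = 1, h = 0.1 (the computation tabulated above).
f = lambda t, y: 2 * t * y
t, y, h = 1.0, 1.0, 0.1
rows = []
for _ in range(4):
    y_pred = y + h * f(t, y)                       # Euler predictor
    y = y + h / 2 * (f(t, y) + f(t + h, y_pred))   # one corrector pass
    t += h
    exact = math.exp(t * t - 1)
    rows.append((round(t, 1), round(y, 4), round(exact, 4)))
```

Running this reproduces the y_n column of the table, including 2.5908 at t = 1.4.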
Exercises:
(iii) dy/dt = 2 + ty, y(1) = 1, h = 0.5, t ∈ [1, 2].
(iv) y′ = t(1 + t^3 y), y(0) = 3, h = 0.1, t ∈ [0, 0.4].
3, h =
References
Suggested Reading
Module 1: Numerical Analysis
Lesson 29
Runge-Kutta Methods
29.1 Introduction
In a single step explicit method, the approximate solution y_{j+1} is computed from the known solution y_j as
y_{j+1} = y_j + h·f(t_j, y_j)        (29.1)
In equation (29.1), we used the slope at t = t_j only. Similarly, in the modified Euler method
y_{j+1} = y_j + (h/2)[f(t_j, y_j) + f(t_{j+1}, y_{j+1})]        (29.3)
the slope is replaced by the average of the slopes at the end points (t_j, y_j) and (t_{j+1}, y_{j+1}).
defined as
y_{j+1} = y_j + h [weighted average of slopes at points on the given interval]        (29.4)
This way one can derive explicit methods by taking n = 1, 2, ..., N; n in (29.4) indicates the order of this Runge-Kutta method. The general nth order method uses
k_2 = h·f(t_j + c_2 h, y_j + a_{21} k_1)
k_3 = h·f(t_j + c_3 h, y_j + a_{31} k_1 + a_{32} k_2)
All the w_i's, c_i's and a_{ij}'s are parameters which are determined by forcing the method (29.4) to be of nth order. Deriving a general nth order method is out of the purview of this material, but we demonstrate the derivation of the 2nd order Runge-Kutta method below.
where k_1 = h·f(t_j, y_j)
k_2 = h·f(t_j + c_2 h, y_j + a_{21} k_1)
The parameters c_2, a_{21}, w_1, w_2 are chosen to make y_{j+1} closer to the exact solution y(t_{j+1}) up to the 2nd order.
y(t_{j+1}) = y(t_j) + h·y′(t_j) + (h^2/2!)·y′′(t_j) + ...
          = y(t_j) + h·f(t_j, y_j) + (h^2/2!)·[∂f/∂t + f ∂f/∂y]_{t = t_j} + ...        (29.7)
Also, k_1 = h·f_j,
k_2 = h·f(t_j + c_2 h, y_j + a_{21} h·f_j)
    = h [ f(t_j) + h ( c_2 ∂f/∂t + a_{21} f ∂f/∂y ) + (h^2/2!) ( c_2^2 ∂^2 f/∂t^2 + 2 c_2 a_{21} f ∂^2 f/∂y∂t + a_{21}^2 f^2 ∂^2 f/∂y^2 ) + ... ]_{t = t_j}
so that
y_{j+1} = y_j + (w_1 + w_2) h·f_j + h^2 w_2 [ c_2 ∂f/∂t + a_{21} f ∂f/∂y ]_{t = t_j} + ...        (29.8)
Comparing (29.7) and (29.8) term by term gives w_1 + w_2 = 1, w_2 c_2 = 1/2 and w_2 a_{21} = 1/2. With the choice of c_2 = 1/2, we derive the 2nd order Runge-Kutta (midpoint) method:
a_{21} = 1/2, w_2 = 1, w_1 = 0,
so that y_{j+1} = y_j + k_2,
where k_1 = h·f(t_j, y_j)
k_2 = h·f(t_j + h/2, y_j + k_1/2).
With the choice c_2 = 1 instead, a_{21} = 1 and w_1 = w_2 = 1/2, giving y_{j+1} = y_j + (k_1 + k_2)/2,
where k_1 = h·f(t_j, y_j)
k_2 = h·f(t_j + h, y_j + k_1).
This method is also a second order method, which is known as the Euler-Cauchy method. Clearly, with different choices for c_2, we get different second order Runge-Kutta methods. Let us demonstrate their utility for solving initial value problems.
Example: Compute y(0.4) for the I.V.P. y′ = −2ty^2, y(0) = 1 with h = 0.2 using the midpoint method above.
Solution:
y_{j+1} = y_j + k_2
where k_1 = h·f(t_j, y_j) = (0.2)(−2 t_j y_j^2) = −0.4 t_j y_j^2
k_2 = h·f(t_j + h/2, y_j + k_1/2) = −0.4 (t_j + 0.1)(y_j + k_1/2)^2
For j = 0: t_0 = 0, y_0 = 1
⇒ k_1 = 0, k_2 = −0.04, so y(0.2) = 0.96.
For j = 1: t_1 = 0.2, y_1 = 0.96
⇒ k_1 = −0.073728, k_2 = −0.10226, so y(0.4) = 0.96 − 0.10226 = 0.85774.
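The two steps can be verified with a few lines (a sketch):

```python
# Midpoint (2nd order Runge-Kutta) method for y' = -2ty^2, y(0) = 1,
# h = 0.2, reproducing the k-values in the example above.
f = lambda t, y: -2 * t * y * y
t, y, h = 0.0, 1.0, 0.2
ks = []
for _ in range(2):
    k1 = h * f(t, y)
    k2 = h * f(t + h / 2, y + k1 / 2)
    ks.append((round(k1, 6), round(k2, 6)))
    y += k2                    # y_{j+1} = y_j + k2
    t += h
```

The exact solution of this I.V.P. is y = 1/(1 + t^2), so y(0.4) ≈ 0.8621; the computed 0.8577 shows the second order error at this coarse step size.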
References
Atkinson. E Kendall, (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.
Suggested Reading
Lesson 30
4th Order Runge-Kutta Method
where k_1 = h·f(t_j, y_j)
k_2 = h·f(t_j + 2h/3, y_j + (2/3) k_1)
and k_3 = h·f(t_j + 2h/3, y_j + (2/3) k_2).
where k_1 = h·f(t_j, y_j)
k_2 = h·f(t_j + h/2, y_j + k_1/2)
k_3 = h·f(t_j + h/2, y_j + k_2/2)
and k_4 = h·f(t_j + h, y_j + k_3).
This method is also known as the Classical Runge-Kutta method. The 4th order R-K method is an efficient method which can be used very easily. Let us now illustrate its use for finding the solution of a given I.V.P.
Example 1: Use the Classical Runge-Kutta method to find the numerical solution at t = 0.6 for dy/dt = √(t + y), y(0.4) = 0.41, h = 0.2.
Solution:
Given t_0 = 0.4, y_0 = 0.41; f(t, y) = √(t + y).
k_1 = h·f(t_0, y_0) = (0.2)[0.4 + 0.41]^{1/2} = 0.18
k_2 = h·f(t_0 + h/2, y_0 + k_1/2) = (0.2)[(0.4 + 0.1) + (0.41 + 0.09)]^{1/2} = 0.2
k_3 = h·f(t_0 + h/2, y_0 + k_2/2) = (0.2)[(0.4 + 0.1) + (0.41 + 0.1)]^{1/2} = 0.20099
k_4 = h·f(t_0 + h, y_0 + k_3) = (0.2)[0.6 + (0.41 + 0.20099)]^{1/2} = 0.22009.
Now y(0.6) = y(0.4) + (1/6)(k_1 + 2k_2 + 2k_3 + k_4)
           = 0.41 + 0.20035
= 0.61035 .
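The arithmetic of Example 1 can be checked directly (a sketch):

```python
import math

# One classical RK4 step for dy/dt = sqrt(t + y), y(0.4) = 0.41, h = 0.2.
f = lambda t, y: math.sqrt(t + y)
t, y, h = 0.4, 0.41, 0.2
k1 = h * f(t, y)
k2 = h * f(t + h / 2, y + k1 / 2)
k3 = h * f(t + h / 2, y + k2 / 2)
k4 = h * f(t + h, y + k3)
y_next = y + (k1 + 2 * k2 + 2 * k3 + k4) / 6   # y(0.6)
```

This reproduces k_1 = 0.18 and y(0.6) ≈ 0.61035.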
Example 2: Find y(0.1) from dy/dt = y − t, y(0) = 2 by taking h = 0.1.
Solution:
k_1 = h·f(t_0, y_0) = (0.1)[2 − 0] = 0.2
k_2 = h·f(t_0 + h/2, y_0 + k_1/2) = (0.1)[2.1 − 0.05] = 0.205
k_3 = h·f(t_0 + h/2, y_0 + k_2/2) = (0.1)[2.1025 − 0.05] = 0.20525
k_4 = h·f(t_0 + h, y_0 + k_3) = (0.1)[2.20525 − 0.1] = 0.21053
Hence y(0.1) = y(0) + (1/6)(k_1 + 2k_2 + 2k_3 + k_4)
            = 2 + 0.2052
∴ y(0.1) = 2.2052.
Example 3: Given dy/dt = 1 + y^2, y(0.2) = 0.2027, h = 0.2, compute y(0.4) using the 4th order R-K method.
Solution:
Given f(t, y) = 1 + y^2, t_0 = 0.2, y_0 = 0.2027, h = 0.2.
Proceeding as in Example 1, k_1 = 0.2082, k_2 = 0.2188, k_3 = 0.2195, k_4 = 0.2356,
and hence y(0.4) = y(0.2) + (1/6)(k_1 + 2k_2 + 2k_3 + k_4)
                = 0.4228.
Exercises:
1. Use the 3rd order Runge-Kutta method to find
a) y(0.1) given dy/dt = 3e^t + 2y, y(0) = 0, h = 0.1.
b) y(0.8) given dy/dt = √(t + y), y(0.4) = 0.41, h = 0.2.
c) y(0.2) given dy/dt = (y − t)/(y + t), y(0) = 1, h = 0.1.
d) y(0.2) given dy/dt = 3t + (1/2) y, y(0) = 1, h = 0.1.
2. Use the 4th order Runge-Kutta method to solve the problems 1(a)-1(d) and make a comparison table.
4. Use the Classical Runge-Kutta method to find y(1.4) in steps of 0.2 given that dy/dt = t^2 + y^2, y(1) = 1.5.
References
Atkinson. E Kendall, (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.
Suggested Reading
Module 1: Numerical Analysis
Lesson 31
31.1 Introduction
From the theory of ordinary differential equations, it is evident that an nth order ordinary differential equation, be it linear or non-linear, can be reduced to a system of n first order equations. To see this, consider the second order o.d.e.
y′′ − y′ + 4t^2 y = 0.
Setting u = y and v = y′, it becomes
u′ = v;
v′ = v − 4t^2 u; v(1) = 2.
This is known as the initial value problem in the first order system corresponding to the given 2nd order initial value problem.
In general, an nth order differential equation
y^{(n)} = F(t, y, y′, y′′, ..., y^{(n−1)})        (31.1)
is written as a first order system as follows: set u_1 = y(t), u_2 = y′, ..., u_n = y^{(n−1)}. Then
u_1′ = u_2
u_2′ = u_3
...        (31.2)
u_{n−1}′ = u_n
u_n′ = F(t, u_1, u_2, ..., u_n)
or, in vector form,
u′ = f(t, u)        (31.4)
where u = [u_1, u_2, ..., u_n]^T, f = [u_2, u_3, ..., u_n, F]^T.
In what follows is shown the utility of Taylor series method to the system of
I.V.Ps. through two examples.
1. The vector form of the Taylor Series method is written as
Methods for Solving Higher order initial value problems
y_{j+1} = y_j + h y_j′ + (h^2/2!) y_j′′ + ... + (h^p/p!) y_j^{(p)}
Here j = 0, 1, 2, ..., N − 1 denotes the nodal point and
y_j^{(k)} = [y_{1,j}^{(k)}, y_{2,j}^{(k)}, ..., y_{n,j}^{(k)}]^T with y_{i,j}^{(k)} = (d^{k−1}/dt^{k−1}) f_i(t_j, y_{1,j}, y_{2,j}, ..., y_{n,j}).
Example 1: Reduce the 3rd order I.V.P. into a system of first order I.V.P.s:
y′′′ + 2y′′ + y′ − y = cos t, t ∈ [0, 1]
Solution:
Set y = u_1, u_1′ = u_2, u_2′ = u_3. Then
u_1′ = u_2
u_2′ = u_3
u_3′ = cos t − 2u_3 − u_2 + u_1.
Example 2: Use the 2nd order Taylor Series method to compute y(1), y′(1) and y′′(1) by taking h = 1.0 in the above example.
Solution:
∴ u(1) = u(0) + u′(0) + (1/2) u′′(0).
The system of I.V.P. is:
u′ = [u_1′, u_2′, u_3′]^T = [u_2, u_3, cos t − 2u_3 − u_2 + u_1]^T
subject to u(0) = [u_1(0), u_2(0), u_3(0)]^T = [0, 1, 2]^T.
Then u′(0) = [1, 2, −4]^T.
Also u′′(0) = [u_2′(0), u_3′(0), −sin 0 − 2u_3′(0) − u_2′(0) + u_1′(0)]^T = [2, −4, 7]^T.
∴ u(1) = [0, 1, 2]^T + [1, 2, −4]^T + (1/2)[2, −4, 7]^T = [2, 1, 3/2]^T = [y(1), y′(1), y′′(1)]^T.
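The vector Taylor step can be verified with a few lines (a sketch; `uprime` and `usecond` are our own helper names):

```python
import math

# One second order Taylor step (h = 1) for the system
# u1' = u2, u2' = u3, u3' = cos t - 2 u3 - u2 + u1, u(0) = [0, 1, 2].
def uprime(t, u):
    u1, u2, u3 = u
    return [u2, u3, math.cos(t) - 2 * u3 - u2 + u1]

def usecond(t, u):
    # differentiate each component; d/dt cos t = -sin t
    d1, d2, d3 = uprime(t, u)
    return [d2, d3, -math.sin(t) - 2 * d3 - d2 + d1]

t0, h = 0.0, 1.0
u0 = [0.0, 1.0, 2.0]
up, upp = uprime(t0, u0), usecond(t0, u0)
u1 = [u0[i] + h * up[i] + h * h / 2 * upp[i] for i in range(3)]
```

The result is u(1) = [2, 1, 1.5], i.e. y(1) = 2, y′(1) = 1, y′′(1) = 3/2 as computed above.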
Exercises:
1. Solve the system of equations u′ = −3u + 2v, u(0) = 0 and v′ = 3u − 4v, v(0) = 1/2 using
(i) Forward Euler method and
(ii) 2nd order Taylor Series method by taking h = 0.2 on the interval [0, 0.6] .
Keywords: Euler Method, Higher order initial value problems, Taylor Series
method.
References
Atkinson. E Kendall, (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.
Suggested Reading
Module 1: Numerical Analysis
Lesson 32
System of I.V.Ps. - 4th Order R-K Method
32.1 Introduction
We now present the vector form of the 2nd and 4th order Runge-Kutta methods.
Given the I.V.P.:
dy/dt = f(t, y, z); y(t_0) = y_0
dz/dt = g(t, y, z); z(t_0) = z_0        (32.1)
The Euler-Cauchy method (which belongs to the class of 2nd order Runge-Kutta methods), when applied to the above system of I.V.P.s, is written in the vector form as
u_{j+1} = u_j + (1/2)(k_1 + k_2)
with k_{11} = h·f(t_j, y_j, z_j)
k_{21} = h·g(t_j, y_j, z_j)
and k_{12} = h·f(t_j + h, y_j + k_{11}, z_j + k_{21})
k_{22} = h·g(t_j + h, y_j + k_{11}, z_j + k_{21})
Example 1: Solve y′ = −3y + 2z, y(0) = 0; z′ = 3y − 4z, z(0) = 0.5 to find y(0.2) and z(0.2) with h = 0.2.
Solution:
Given t_0 = 0, y_0 = 0, z_0 = 0.5, f(t, y, z) = −3y + 2z, g(t, y, z) = 3y − 4z.
k_{11} = 0.2, k_{21} = −0.4,
k_{12} = −0.08, k_{22} = 0.04
∴ y(0.2) = y(0) + (1/2)(k_{11} + k_{12}) = 0.06,
z(0.2) = z(0) + (1/2)(k_{21} + k_{22}) = 0.32.
The 4th order Runge-Kutta method for the system of equations as given in (32.1) is written as
y_{j+1} = y_j + (1/6)(k_1 + 2k_2 + 2k_3 + k_4)
z_{j+1} = z_j + (1/6)(l_1 + 2l_2 + 2l_3 + l_4), j = 0, 1, 2, ..., N − 1
where k_1 = h·f(t_j, y_j, z_j)
l_1 = h·g(t_j, y_j, z_j)
k_2 = h·f(t_j + h/2, y_j + k_1/2, z_j + l_1/2)
l_2 = h·g(t_j + h/2, y_j + k_1/2, z_j + l_1/2)
k_3 = h·f(t_j + h/2, y_j + k_2/2, z_j + l_2/2)
l_3 = h·g(t_j + h/2, y_j + k_2/2, z_j + l_2/2)
k_4 = h·f(t_j + h, y_j + k_3, z_j + l_3)
l_4 = h·g(t_j + h, y_j + k_3, z_j + l_3)
In a similar manner, the method can be extended to three or more first order equations.
Example 2: Solve y′′ = t y′^2 − y^2, y(0) = 1, y′(0) = 0 to compute y(0.2).
Solution:
The given second order equation with the initial conditions can be written as the system of two first order equations:
dy/dt = z = f(t, y, z), say
dz/dt = t z^2 − y^2 = g(t, y, z), say.
Given t_0 = 0, y_0 = 1, z_0 = 0, h = 0.2. Computing k_1, l_1, k_2, l_2, k_3, l_3 and k_4, l_4 in this order, we see
k_1 = h·f(t_0, y_0, z_0) = 0.2 × 0 = 0
l_1 = h·g(t_0, y_0, z_0) = 0.2(−1) = −0.2
k_2 = h·f(t_0 + h/2, y_0 + k_1/2, z_0 + l_1/2) = −0.02
l_2 = h·g(t_0 + h/2, y_0 + k_1/2, z_0 + l_1/2) = −0.1998
k_3 = h·f(t_0 + h/2, y_0 + k_2/2, z_0 + l_2/2) = −0.02
l_3 = h·g(t_0 + h/2, y_0 + k_2/2, z_0 + l_2/2) = −0.1958
k_4 = h·f(t_0 + h, y_0 + k_3, z_0 + l_3) = −0.0392
l_4 = h·g(t_0 + h, y_0 + k_3, z_0 + l_3) = −0.1905
Thus y(0.2) = y(0) + (1/6)(k_1 + 2k_2 + 2k_3 + k_4) = 1 − 0.0199 = 0.9801
and y′(0.2) = z(0.2) = z(0) + (1/6)(l_1 + 2l_2 + 2l_3 + l_4) = 0 − 0.1970 = −0.197.
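Example 2 can be reproduced directly (a sketch):

```python
# One classical RK4 step for the system y' = z, z' = t z^2 - y^2,
# y(0) = 1, z(0) = 0, h = 0.2.
f = lambda t, y, z: z
g = lambda t, y, z: t * z * z - y * y
t, y, z, h = 0.0, 1.0, 0.0, 0.2
k1, l1 = h * f(t, y, z), h * g(t, y, z)
k2 = h * f(t + h/2, y + k1/2, z + l1/2)
l2 = h * g(t + h/2, y + k1/2, z + l1/2)
k3 = h * f(t + h/2, y + k2/2, z + l2/2)
l3 = h * g(t + h/2, y + k2/2, z + l2/2)
k4 = h * f(t + h, y + k3, z + l3)
l4 = h * g(t + h, y + k3, z + l3)
y02 = y + (k1 + 2*k2 + 2*k3 + k4) / 6   # y(0.2)
z02 = z + (l1 + 2*l2 + 2*l3 + l4) / 6   # y'(0.2)
```

This gives y(0.2) ≈ 0.9801 and y′(0.2) ≈ −0.197, matching the worked values (note that k_4 and l_4 must use the full step t_0 + h, not a half step).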
Exercises
1. Find y(0.2) and z(0.2) using the 4th order Runge-Kutta method to solve
y′ = −3y + 2z, y(0) = 0
z′ = 3y − 4z, z(0) = 0.5
by taking h = 0.1.
2. Solve y′′ = y + ty′, y(0) = 1, y′(0) = 0 to find y(0.2) and y′(0.2) using the 4th order R-K method. Take h = 0.1.
3. Solve y′′ = t^3 y′ + t^3 y, y(0) = 1, y′(0) = 1/2 in [0, 1] by taking h = 0.2.
References
Atkinson. E Kendall, (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.
Suggested Reading
Lesson 33
Introduction
In this lesson we will discuss the idea of integral transform, in general, and Laplace trans-
form in particular. Integral transforms turn out to be a very efficient method to solve
certain ordinary and partial differential equations. In particular, the transform can take
a differential equation and turn it into an algebraic equation. If the algebraic equation
can be solved, applying the inverse transform gives us our desired solution. The idea of
solving differential equations is given in Figure 33.1.
[Figure 33.1: An integral transform converts a difficult initial/boundary value problem or PDE into an easier algebraic problem or ODE; solving that and applying the inverse transform yields the solution of the original problem.]
plane. Choosing different kernels and different values of a and b, we get different in-
tegral transforms. Examples include Laplace, Fourier, Hankel and Mellin transforms. For
For K(s, t) = e^{−st}, a = 0, b = ∞, the improper integral
∫_0^∞ e^{−st} f(t) dt
defines the Laplace transform of f(t).
Remark 1: The integral ∫_0^∞ e^{−st} f(t) dt is said to be convergent (absolutely convergent) if
lim_{R→∞} ∫_0^R e^{−st} f(t) dt   ( lim_{R→∞} ∫_0^R |e^{−st} f(t)| dt )
exists.
33.4.1 Problem 1
What happens if we take s to be a complex number, i.e., s = x + iy? Since e^{−iyR} = cos yR − i sin yR, and therefore |e^{−iyR}| = 1, we find that the integral converges for x = Re(s) > 0.
Thus, we have
L[f(t)] = L[1] = 1/s, Re(s) > 0.
33.4.2 Problem 2
Similarly, we get
L[e^{−iat}] = 1/(s + ia).
33.4.3 Problem 3
Find the Laplace transform of the unit step function (commonly known as the Heaviside function). This function is given as
u(t − a) = 0 if t < a,  1 if t ≥ a.
Solution: Let us find the Laplace transform of u(t − a), where a ≥ 0 is some constant. That is, the function that is 0 for t < a and 1 for t ≥ a.
L{u(t − a)} = ∫_0^∞ e^{−st} u(t − a) dt = ∫_a^∞ e^{−st} dt = [e^{−st}/(−s)]_{t=a}^∞ = e^{−as}/s,
33.4.4 Problem 4
Putting n = 1:
L[t] = (1/s) L[1] = 1/s^2 = 1!/s^2
Putting n = 2:
L[t^2] = 2/s^3 = 2!/s^3
If we assume L[t^n] = n!/s^{n+1}, then
L[t^{n+1}] = ((n + 1)/s) L[t^n] = (n + 1)!/s^{n+2} ⇒ L[t^n] = n!/s^{n+1}, Re(s) > 0.
33.4.5 Problem 5
Note that the above integral is convergent only for γ > −1. We substitute u = st ⇒ du = s dt, where s > 0. Thus we get
L[t^γ] = ∫_0^∞ e^{−u} (u/s)^γ (1/s) du = (1/s^{γ+1}) ∫_0^∞ e^{−u} u^γ du
We know
Γ(p) = ∫_0^∞ u^{p−1} e^{−u} du   (p > 0)
Then,
L[t^γ] = Γ(γ + 1)/s^{γ+1}, γ > −1, s > 0
Note that for γ = 1, 2, 3, ..., the above formula reduces to the formula we got in the previous example for integer values, i.e., L[t^γ] = γ!/s^{γ+1}.
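The formula L[t^γ] = Γ(γ + 1)/s^{γ+1} can be spot-checked numerically with a crude truncated trapezoidal rule (the helper name is our own; for s > 0 the tail of the integral beyond the truncation point is negligible):

```python
import math

def laplace_numeric(f, s, T=30.0, n=200000):
    """Trapezoidal approximation of ∫_0^T e^{-st} f(t) dt."""
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

s, gamma_exp = 2.0, 0.5   # sample values: L[t^{1/2}] at s = 2
num = laplace_numeric(lambda t: t**gamma_exp, s)
exact = math.gamma(gamma_exp + 1) / s**(gamma_exp + 1)
```

For this non-integer exponent the two values agree to several decimal places.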
33.4.6 Problem 6
Suggested Readings
Arfken, G.B., Weber, H.J. and Harris, F.E. (2012). Mathematical Methods for Physicists
(A comprehensive guide), Seventh Edition, Elsevier Academic Press, New Delhi.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Grewal, B.S. (2007). Higher Engineering Mathematics. Fourteenth Edition. Khanna
Publishers, New Delhi.
Dyke, P.P.G. (2001). An Introduction to Laplace Transforms and Fourier Series. Springer-
Verlag London Ltd.
Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics. Third Edi-
tion. Narosa Publishing House. New Delhi.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Kreyszig, E. (2006). Advanced Engineering Mathematics, Ninth Edition, Wiley India
Pvt. Ltd, New Delhi.
McQuarrie, D.A. (2009). Mathematical Methods for Scientist and Engineers. First Indian
Edition. Viva Books Pvt. Ltd. New Delhi.
Schiff, J.L. (1999). The Laplace Transform: Theory and Applications. Springer-Verlag,
New York Inc.
Raisinghania, M.D. (2009). Advanced Differential Equations. Twelfth Edition. S. Chand
& Company Ltd., New Delhi.
Lesson 34
In this lesson we compute the Laplace transforms of some elementary functions, before discussing the restrictions that have to be imposed on f(t) so that it has a Laplace transform. With the Laplace transforms of elementary functions we can obtain the Laplace transforms of more complicated functions using properties of the transform that will be discussed later. Another important use of the Laplace transforms of elementary functions is in computing inverse Laplace transforms.
34.1.1 Problem 1
Find Laplace transform of (i) cosh ωt, (ii) cos ωt, (iii) sinh ωt (iv) sin ωt .
Solution: (i) Using the definition of the Laplace transform we get
L[cosh ωt] = L[(e^{ωt} + e^{−ωt})/2]
We know the Laplace transform of exponential functions, which can be used now. For (ii),
L[cos ωt] = (1/2)[1/(s − iω) + 1/(s + iω)] = (1/2) · 2s/(s^2 + ω^2)
Thus we have
L[cos ωt] = s/(s^2 + ω^2)
34.1.2 Problem 2
34.1.3 Problem 3
Thus we get
L[sin^3 2t] = 48/((s^2 + 4)(s^2 + 36))
34.1.4 Problem 4
34.1.5 Problem 5
Find (a) L[t3 − 4t + 5 + 3 sin 2t] and (b) L[H(t − a) − H(t − b)].
Solution: (a) Using linearity of the transform we get
On simplification we find
Integration gives
L[H(t − a) − H(t − b)] = e^{−as}/s − e^{−bs}/s
This implies
L[H(t − a) − H(t − b)] = (e^{−as} − e^{−bs})/s
34.1.6 Problem 6
On simplification we obtain
L[f(t)] = (1 − e^{−sc})/(cs^2)
34.1.7 Problem 7
34.1.8 Problem 8
√
Find Laplace transform of sin t.
Solution: We have

    sin √t = t^{1/2} − t^{3/2}/3! + t^{5/2}/5! − t^{7/2}/7! + ...

Then, taking the Laplace transform of each term in the series, we get

    L[sin √t] = L[t^{1/2}] − (1/3!) L[t^{3/2}] + (1/5!) L[t^{5/2}] − (1/7!) L[t^{7/2}] + ...

              = Γ(3/2)/s^{3/2} − (1/3!) Γ(5/2)/s^{5/2} + (1/5!) Γ(7/2)/s^{7/2} − (1/7!) Γ(9/2)/s^{9/2} + ...

Further simplification leads to

    L[sin √t] = (√π/2)(1/s^{3/2}) [1 − (1/3!)(3/2)(1/s) + (1/5!)(5/2)(3/2)(1/s²) − (1/7!)(7/2)(5/2)(3/2)(1/s³) + ...]

              = (1/(2s)) √(π/s) [1 − 1/(2²s) + (1/2!)(1/(2²s))² − (1/3!)(1/(2²s))³ + ...]

Thus, we have

    L[sin √t] = (1/(2s)) √(π/s) e^{−1/(4s)}.
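Since the series manipulation is delicate, a numerical spot check is reassuring. Assuming SciPy is available, we can compare the Laplace integral of sin √t at the sample point s = 1 with the closed form (1/(2s))√(π/s) e^{−1/(4s)}:

```python
import numpy as np
from scipy.integrate import quad

def laplace_sin_sqrt(s):
    # Direct numerical evaluation of the Laplace integral of sin(sqrt(t))
    val, _ = quad(lambda t: np.exp(-s * t) * np.sin(np.sqrt(t)), 0, np.inf)
    return val

s = 1.0
closed_form = (1 / (2 * s)) * np.sqrt(np.pi / s) * np.exp(-1 / (4 * s))
assert abs(laplace_sin_sqrt(s) - closed_form) < 1e-6
```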
Suggested Readings
Arfken, G.B., Weber, H.J. and Harris, F.E. (2012). Mathematical Methods for Physicists
(A comprehensive guide), Seventh Edition, Elsevier Academic Press, New Delhi.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Grewal, B.S. (2007). Higher Engineering Mathematics. Fourteenth Edition. Khanna
Publishers, New Delhi.
Dyke, P.P.G. (2001). An Introduction to Laplace Transforms and Fourier Series. Springer-
Verlag London Ltd.
Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics. Third Edi-
tion. Narosa Publishing House. New Delhi.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Kreyszig, E. (2006). Advanced Engineering Mathematics, Ninth Edition, Wiley India
Pvt. Ltd, New Delhi.
McQuarrie, D.A. (2009). Mathematical Methods for Scientist and Engineers. First Indian
Edition. Viva Books Pvt. Ltd. New Delhi.
Schiff, J.L. (1999). The Laplace Transform: Theory and Applications. Springer-Verlag,
New York Inc.
Raisinghania, M.D. (2009). Advanced Differential Equations. Twelfth Edition. S. Chand
& Company Ltd., New Delhi.
Lesson 35
In this lesson we shall discuss the existence theorem for the Laplace transform. Since not every Laplace integral is convergent, it is very important to know for which functions the Laplace transform exists.
Consider the function f(t) = e^{t²} and try to evaluate its Laplace integral. In this case we realize that

    lim_{R→∞} ∫₀^R e^{t² − st} dt = ∞,  for any choice of s
Naturally the question arises: for which class of functions does the Laplace integral converge? Before answering this question we go through some definitions.
A function f is called piecewise continuous on [a, b] if there are a finite number of points a < t₁ < t₂ < ... < tₙ < b such that f is continuous on each open subinterval (a, t₁), (t₁, t₂), ..., (tₙ, b) and all the following limits exist:

    lim_{t→a+} f(t),  lim_{t→b−} f(t),  lim_{t→tⱼ+} f(t),  and  lim_{t→tⱼ−} f(t),  ∀ j.
35.1.1 Example 1
35.1.2 Example 2
35.2.1 Problem 1
    lim_{t→1±} f(t)

do not exist.
35.2.2 Problem 2
A function f is said to be of exponential order α if there exist constants M > 0 and α such that for some t₀ ≥ 0

    |f(t)| ≤ M e^{αt}  for all t ≥ t₀

Geometrically, it means that the graph of f on the interval (t₀, ∞) does not grow faster than the graph of the exponential function M e^{αt}.
35.4.1 Problem 1
Show that the function f (t) = tn has exponential order α for any value of α > 0 and any
natural number n.
Solution: We check the limit

    lim_{t→∞} e^{−αt} tⁿ
35.4.2 Problem 2
Show that the function f(t) = e^{t²} is not of exponential order.
Solution: For the given function we have

    lim_{t→∞} e^{−αt} e^{t²} = lim_{t→∞} e^{t(t − α)} = ∞

for all values of α. Hence the given function is not of exponential order.
If f is piecewise continuous on [0, ∞) and of exponential order α, then the Laplace transform exists for Re(s) > α. Moreover, under these conditions the Laplace integral converges absolutely.
Proof: Since f is of exponential order α, we have

    |f(t)| ≤ M e^{αt},  t ≥ 0

Then, writing s = x + iy,

    ∫₀^R |e^{−st} f(t)| dt ≤ ∫₀^R |e^{−(x+iy)t}| M e^{αt} dt = M ∫₀^R e^{−(x−α)t} dt,

which remains bounded as R → ∞ whenever x = Re(s) > α.
Hence the Laplace integral converges absolutely and thus converges. This implies the existence of the Laplace transform. For piecewise continuous functions of exponential order, the Laplace transform always exists. Note that this is a sufficient condition: if a function is not of exponential order or not piecewise continuous, then the Laplace transform may or may not exist.
Remark 2: It should be noted that the conditions stated in existence theorem are suf-
ficient rather than necessary conditions. If these conditions are satisfied then the Laplace
transform must exist. If these conditions are not satisfied then Laplace transform may or
may not exist. We can observe this fact in the following examples:
Note that f(t) is continuous on [0, ∞) but not of exponential order; however, the Laplace transform of f(t) exists, since

    L[f(t)] = ∫₀^∞ e^{−st} 2t e^{t²} cos(e^{t²}) dt
This example shows that the Laplace transform of a function which is not piecewise continuous can exist. These two examples clearly show that the conditions given in the existence theorem are sufficient but not necessary.
Lesson 36
In this lesson we discuss some properties of Laplace transform. There are several useful
properties of Laplace transform which can extend its applicability. In this lesson we
mainly present shifting and translation properties.
If L[f(t)] = F(s) then L[e^{at} f(t)] = F(s − a), where a is any real or complex constant.
Proof: By the definition of the Laplace transform,

    L[e^{at} f(t)] = ∫₀^∞ e^{−st} e^{at} f(t) dt = ∫₀^∞ e^{−(s−a)t} f(t) dt

Again by the definition of the Laplace transform we get

    L[e^{at} f(t)] = F(s − a).
36.2.1 Problem 1
36.2.2 Problem 2
36.2.3 Problem 3
36.2.4 Problem 4
Using the shifting property, evaluate L[sinh 2t cos 2t] and L[sinh 2t sin 2t].
Solution: We know that

    L[sinh 2t] = 2/(s² − 4) = F(s)

This implies

    L[e^{2it} sinh 2t] = 2/((s − 2i)² − 4) = 2/((s² − 8) − 4is)

Multiplying numerator and denominator by (s² − 8) + 4is, we find

    L[e^{2it} sinh 2t] = (2(s² − 8) + 8is)/((s² − 8)² + 16s²) = (2(s² − 8) + 8is)/(s⁴ + 64)

Replacing e^{2it} by cos 2t + i sin 2t and using linearity of the transform we obtain

    L[cos 2t sinh 2t] + i L[sin 2t sinh 2t] = 2(s² − 8)/(s⁴ + 64) + i · 8s/(s⁴ + 64)

Equating real and imaginary parts gives L[sinh 2t cos 2t] = 2(s² − 8)/(s⁴ + 64) and L[sinh 2t sin 2t] = 8s/(s⁴ + 64).
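The same answer follows from pure algebra: writing sinh 2t = (e^{2t} − e^{−2t})/2 and applying the first shifting theorem to cos 2t gives an equivalent expression, which SymPy can confirm (assuming SymPy is available):

```python
import sympy as sp

s = sp.symbols('s')

# First shifting theorem: L[e^{2t} cos 2t] = (s-2)/((s-2)^2 + 4), and similarly for e^{-2t}
via_shifting = ((s - 2) / ((s - 2)**2 + 4) - (s + 2) / ((s + 2)**2 + 4)) / 2

book_form = 2 * (s**2 - 8) / (s**4 + 64)
assert sp.simplify(via_shifting - book_form) == 0
```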
Alternative form: It is sometimes useful to present this property in the following com-
pact form.
where

    H(t) = { 1 when t > 0;  0 when t < 0 }

Note that f(t − a)H(t − a) is the same as the function g(t) given above.
36.4.1 Problem 1
Find L[g(t)] where

    g(t) = { 0 when 0 ≤ t < 1;  (t − 1)² when t ≥ 1 }

Solution: On comparison with the function g(t) given in the second shifting theorem we get

    f(t) = t²  ⇒  L[f(t)] = 2/s³

Using the second shifting property we find

    L[g(t)] = e^{−s} · 2/s³.
36.4.2 Problem 2
Solution: Comparing with the notation used in the second shifting theorem we have f(t) = cos t. Thus, we find

    L[f(t)] = F(s) = s/(s² + 1).

Hence by the second shifting theorem we obtain

    L[g(t)] = e^{−πs/3} F(s) = e^{−πs/3} · s/(s² + 1).
If L[f(t)] = F(s) then L[f(at)] = (1/a) F(s/a)
36.5.1 Example
If

    L[f(t)] = (s² − s + 1)/((2s + 1)²(s − 1))

then find L[f(2t)].
Solution: Direct application of the change of scale property gives

    L[f(2t)] = (1/2) · [(s/2)² − (s/2) + 1] / [(2·(s/2) + 1)²((s/2) − 1)]

On simplification, we get

    L[f(2t)] = (1/4) · (s² − 2s + 4)/((s + 1)²(s − 2)).
Lesson 37
Before we state the derivative theorem, it should be noted that this result is the key to applying the Laplace transform to the solution of differential equations.
Remark 1: Suppose f(t) is not continuous at t = 0; then the result of the above theorem takes the following form:

    L[f′(t)] = −f(0+) + s L[f(t)]
′
Remark 2: An interesting feature of the derivative theorem is that L[f (t)] exists
′
without the requirement of f tobe of exponential order. Recall the existence of Laplace
2 2
transform of f (t) = 2tet cos et which is obvious now by the derivative theorem because
2 ′
f (t) = sin et .
37.2.1 Problem 1
Therefore, we have

    L[sin² ωt] = (ω/s) · 2ω/(s² + 4ω²) = 2ω²/(s(s² + 4ω²))
37.2.2 Problem 2
Then

    f′(t) = n t^{n−1},  f″(t) = n(n − 1) t^{n−2},  ...,  f⁽ⁿ⁾(t) = n!.

Therefore, we find

    L[n!] = sⁿ L[tⁿ]  ⇒  L[tⁿ] = n!/s^{n+1}.
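The formula L[tⁿ] = n!/s^{n+1} is easy to confirm for small n with SymPy (assuming it is available):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Check L[t^n] = n!/s^(n+1) for n = 0, 1, ..., 4
for n in range(5):
    F = sp.laplace_transform(t**n, t, s, noconds=True)
    assert sp.simplify(F - sp.factorial(n) / s**(n + 1)) == 0
```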
37.2.3 Problem 3
yields

    L[−k² sin kt] = s² L[sin kt] − 0 − k

On simplification we get

    L[sin kt] = k/(s² + k²)
37.2.4 Problem 4
This implies

    L[60t²] = s³ L[f(t)]  ⇒  L[f(t)] = 120/s⁶.
37.2.5 Problem 5
Using the Laplace transform of sin √t and applying the derivative theorem, find the Laplace transform of the function

    cos √t / √t

Solution: Since (d/dt) sin √t = cos √t/(2√t) and sin √t vanishes at t = 0, the derivative theorem yields

    L[cos √t/(2√t)] = s · (1/(2s)) √(π/s) e^{−1/(4s)}

Thus, we get

    L[cos √t/√t] = √(π/s) e^{−1/(4s)}
37.3.1 Theorem
37.4.1 Problem 1
Given that

    L[(sin t)/t] = ∫ₛ^∞ 1/(1 + u²) du,

find the Laplace transform of the integral

    ∫₀^t (sin u)/u du.
37.4.2 Problem 2
Solution: With the application of the first shifting theorem we know that

    L[tⁿ e^{−at}] = n!/(s + a)^{n+1}

It follows from the above result on the Laplace transform of integrals that

    L[∫₀^t uⁿ e^{−au} du] = (1/s) L[tⁿ e^{−at}] = n!/(s(s + a)^{n+1}).
Lesson 38
38.1 Multiplication by tⁿ
38.1.1 Theorem
If F(s) is the Laplace transform of f(t), i.e., L[f(t)] = F(s), then

    L[t f(t)] = −(d/ds) F(s)

and in general the following result holds:

    L[tⁿ f(t)] = (−1)ⁿ (dⁿ/dsⁿ) F(s).
Thus we get

    (d/ds) F(s) = −L[t f(t)]
Repeated differentiation under integral sign gives the general rule.
Applicability of the above result will now be demonstrated by some examples.
38.2.1 Problem 1
On simplification we find

    L[t² cos at] = 2s(s² − 3a²)/(s² + a²)³
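The same result drops out of the rule L[t² f(t)] = F″(s) applied to F(s) = L[cos at] = s/(s² + a²); a SymPy check (assuming SymPy is available):

```python
import sympy as sp

s, a = sp.symbols('s a', positive=True)

F = s / (s**2 + a**2)                 # L[cos at]
second_derivative = sp.diff(F, s, 2)  # L[t^2 cos at] = (-1)^2 F''(s)

book_form = 2 * s * (s**2 - 3 * a**2) / (s**2 + a**2)**3
assert sp.simplify(second_derivative - book_form) == 0
```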
38.2.2 Problem 2
38.2.3 Problem 3
Since we know

    L[sin t] = 1/(1 + s²),

then

    L[t sin t] = −(d/ds)[1/(1 + s²)] = 2s/(1 + s²)²

and

    L[t² sin t] = −(d/ds)[2s/(1 + s²)²] = −[2(1 + s²)² − 8s²(1 + s²)]/(1 + s²)⁴ = (6s² − 2)/(1 + s²)³

Substituting the above values in equation (38.1), we find

    L[f(t)] = (6s² − 2)/(1 + s²)³ − 6s/(1 + s²)² + 2/(1 + s²)

Finally, we obtain

    L[f(t)] = (2s⁴ − 6s³ + 10s² − 6s)/(s⁶ + 3s⁴ + 3s² + 1)
38.3 Division by t
38.3.1 Theorem
exists, then, Z ∞
f (t)
L = F (u) du, [s > α]
t s
Proof: This can easily be proved by letting g(t) = f(t)/t, so that f(t) = t g(t). Hence,

    F(s) = L[f(t)] = L[t g(t)] = −(d/ds) L[g(t)]

Integrating with respect to s we get

    −L[g(t)] |ₛ^∞ = ∫ₛ^∞ F(s) ds.

Since g(t) is piecewise continuous and of exponential order, it follows that lim_{s→∞} L[g(t)] = 0. Thus we have

    L[g(t)] = ∫ₛ^∞ F(s) ds.

This completes the proof.
38.3.2 Corollary
If L[f(t)] = F(s) then

    ∫₀^∞ (f(t)/t) dt = ∫₀^∞ F(s) ds,

provided that the integrals converge.
38.4.1 Problem 1
Solution: We know

    L[sin at] = a/(s² + a²)  and  L[f(t)/t] = ∫ₛ^∞ F(u) du

On integrating we get

    L[(sin at)/t] = ∫ₛ^∞ a/(s² + a²) ds = [tan⁻¹(s/a)]ₛ^∞

Thus we have

    L[(sin at)/t] = π/2 − tan⁻¹(s/a)
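The integration step can be reproduced in SymPy (assuming it is available), integrating F(u) = a/(u² + a²) from s to ∞ and comparing numerically with π/2 − tan⁻¹(s/a):

```python
import sympy as sp

s, a, u = sp.symbols('s a u', positive=True)

# Integrate F(u) = a/(u^2 + a^2) from s to infinity
F = sp.integrate(a / (u**2 + a**2), (u, s, sp.oo))

# Compare with pi/2 - atan(s/a) at sample numeric values
diff = F - (sp.pi / 2 - sp.atan(s / a))
for s0, a0 in [(1, 2), (3, 1)]:
    assert abs(float(diff.subs({s: s0, a: a0}).evalf())) < 1e-12
```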
38.4.2 Problem 2
On integrating, we have

    L[f(t)] = [tan⁻¹(s − 1)]ₛ^∞ − [tan⁻¹(s + 1)]ₛ^∞

            = π/2 − tan⁻¹(s − 1) − π/2 + tan⁻¹(s + 1)

On cancellation of π/2 we get
Lesson 39
In this lesson we evaluate Laplace transform of periodic functions. Periodic functions fre-
quently occur in various engineering problems. We shall now show that with the help of a
simple integral, we can evaluate Laplace transform of periodic functions. We shall further
continue the discussion for stating initial and final value theorems of Laplace transforms
and their applications with the help of simple examples.
Noting f(τ + T) = f(τ), we find

    L[f(t)] = ∫₀^T e^{−st} f(t) dt + e^{−sT} L[f(t)],

On simplification, we obtain

    L[f(t)] = (1/(1 − e^{−sT})) ∫₀^T e^{−st} f(t) dt.
Remark 1: Recall that if a function f is periodic with period T > 0, then f(t) = f(t + T), −∞ < t < ∞. The smallest T for which the equality f(t) = f(t + T) holds is called the fundamental period of f(t). Moreover, if T is a period of a function f, then nT, for any natural number n, is also a period of f. Some familiar periodic functions are sin x, cos x, tan x, etc.
39.2.1 Problem 1
On integration we obtain

    L[f(t)] = (1/(1 − e^{−2s})) · (1/(−s))(e^{−s} − 1) = 1/(s(1 + e^{−s}))
39.2.2 Problem 2
This implies

    L[f(t)] = (1/(1 − e^{−2sπ})) · (1 + e^{−sπ})/(1 + s²) = 1/((1 + s²)(1 − e^{−sπ}))
39.2.3 Problem 3
    L[f(t)] = (1/(1 − e^{−sT})) · (h/s)(1 − 2e^{−sT/2} + e^{−sT}) = h(1 − e^{−sT/2})/(s(1 + e^{−sT/2}))
These theorems allow the limiting behavior of the function to be directly calculated by
taking a limit of the transformed function.
then

    f(0+) = lim_{t→0+} f(t) = lim_{s→∞} s F(s),  [assuming s is real]
Note that lim_{s→∞} L[f′(t)] = 0, since f′ is piecewise continuous on [0, ∞) and of exponential order. Therefore we have
Hence we get
Proof: Note that f has exponential order α = 0 since it is bounded: lim_{t→0+} f(t) and lim_{t→∞} f(t) exist and f(t) is continuous on [0, ∞). By the derivative theorem, we have

    L[f′(t)] = s F(s) − f(0+),  s > 0,
On integrating we obtain
Remark 2: In the final value theorem, the existence of lim_{t→∞} f(t) is very important. Consider f(t) = sin t. Then lim_{s→0} sF(s) = lim_{s→0} s/(1 + s²) = 0, but lim_{t→∞} f(t) does not exist. Thus we may say that if lim_{s→0} sF(s) = L exists, then either lim_{t→∞} f(t) = L or this limit does not exist.
39.3.3 Example
Without determining f(t) and assuming that f(t) satisfies the hypotheses of the limiting theorems, compute

    lim_{t→0+} f(t)  and  lim_{t→∞} f(t)  if  L[f(t)] = 1/s + tan⁻¹(a/s).
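Both limits can be read off with SymPy's `limit` (assuming SymPy is available); by the initial and final value theorems the answers come out to 1 + a and 1 respectively:

```python
import sympy as sp

s, a = sp.symbols('s a', positive=True)

F = 1 / s + sp.atan(a / s)

initial_value = sp.limit(s * F, s, sp.oo)    # equals lim_{t->0+} f(t)
final_value = sp.limit(s * F, s, 0, '+')     # equals lim_{t->oo} f(t)

assert initial_value == a + 1
assert final_value == 1
```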
Remark 3: The final value theorem says lim_{t→∞} f(t) = lim_{s→0} sF(s), provided lim_{t→∞} f(t) exists. If F(s) is finite as s → 0, then trivially lim_{t→∞} f(t) = 0. However, there are several functions whose Laplace transform is not finite as s → 0: for example, f(t) = 1 has Laplace transform F(s) = 1/s, s > 0. In this case we have lim_{s→0} sF(s) = lim_{s→0} 1 = 1 = lim_{t→∞} f(t).
Lesson 40
In this lesson we introduce the concept of the inverse Laplace transform and discuss some of its important properties, which will be helpful in evaluating the inverse transform of some complicated functions. As mentioned in the beginning of this module, the Laplace transform allows us to convert a differential equation into an algebraic equation. Once we solve the algebraic equation in the transformed domain, we would like to get back to the time domain, and therefore we need the concept of the inverse Laplace transform.
If F(s) = L[f(t)] for some function f(t), we define the inverse Laplace transform as L⁻¹[F(s)] = f(t).
There is an integral formula for the inverse, but it is not as simple as the transform itself, as it requires complex numbers and path integrals. The easiest way of computing the inverse is by using a table of Laplace transforms. For example,

    L[sin wt] = w/(s² + w²)

This implies

    L⁻¹[w/(s² + w²)] = sin wt,  t ≥ 0

and similarly

    L[cos wt] = s/(s² + w²)  ⇒  L⁻¹[s/(s² + w²)] = cos wt,  t ≥ 0
If we have a function F (s), to be able to find f (t) such that L[f (t)] = F (s), we need to first
know if such a function is unique.
Consider

    g(t) = { 1 when t = 1;  sin t otherwise }

Then

    L[g(t)] = 1/(s² + 1) = L[sin t]

Thus we have two different functions, g(t) and sin t, whose Laplace transforms are the same. However, note that the two functions differ only at a point of discontinuity. Thanks to the following theorem, we have uniqueness for continuous functions:
If f and g are continuous and of exponential order, and if F(s) = G(s) for all s > s₀, then f(t) = g(t) for all t > 0.
Proof: If F(s) = G(s) for all s > s₀, then

    ∫₀^∞ e^{−st} f(t) dt = ∫₀^∞ e^{−st} g(t) dt,  ∀ s > s₀

    ⇒ ∫₀^∞ e^{−st} [f(t) − g(t)] dt = 0,  ∀ s > s₀

    ⇒ f(t) − g(t) ≡ 0,  ∀ t > 0

    ⇒ f(t) = g(t),  ∀ t > 0.
Remark: The uniqueness theorem holds for piecewise continuous functions as well. Recall that piecewise continuous means that the function is continuous except perhaps at a discrete set of points where it has jump discontinuities, like the Heaviside function or the function g(t) defined above. Since the Laplace integral does not "see" values at the discontinuities, in this case we can only conclude that f(t) = g(t) outside of the discontinuities.
We now state some important properties of the inverse Laplace transform. Though these properties are the same as those listed for the Laplace transform, we repeat them without proof for the sake of completeness and apply them to evaluate the inverse Laplace transform of some functions.
If F₁(s) and F₂(s) are the Laplace transforms of the functions f₁(t) and f₂(t) respectively, then

    L⁻¹[a₁F₁(s) + a₂F₂(s)] = a₁ L⁻¹[F₁(s)] + a₂ L⁻¹[F₂(s)] = a₁ f₁(t) + a₂ f₂(t)
40.4.1 Problem 1
40.4.2 Problem 2
Solution: We use the method of partial fractions to write F in a form where we can use the table of Laplace transforms. We factor the denominator as s(s² + 1) and write

    (s² + s + 1)/(s³ + s) = A/s + (Bs + C)/(s² + 1).

Putting the right-hand side over a common denominator and equating the numerators we get A(s² + 1) + s(Bs + C) = s² + s + 1. Expanding and equating coefficients we obtain A + B = 1, C = 1, A = 1, and thus B = 0. In other words,

    F(s) = (s² + s + 1)/(s³ + s) = 1/s + 1/(s² + 1).

By linearity of the inverse Laplace transform we get

    L⁻¹[(s² + s + 1)/(s³ + s)] = L⁻¹[1/s] + L⁻¹[1/(s² + 1)] = 1 + sin t.
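SymPy automates both the partial-fraction step (`apart`) and the inversion itself (assuming SymPy is available; `inverse_laplace_transform` returns expressions valid for t > 0):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

F = (s**2 + s + 1) / (s**3 + s)

# Partial fractions: should give 1/s + 1/(s^2 + 1)
assert sp.simplify(sp.apart(F, s) - (1 / s + 1 / (s**2 + 1))) == 0

# Full inversion; Heaviside(t) is 1 for positive t
f = sp.inverse_laplace_transform(F, s, t)
assert sp.simplify(f - (1 + sp.sin(t)) * sp.Heaviside(t)) == 0
```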
40.6.1 Problem 1
Evaluate L⁻¹[1/(s + 1)²].
Solution: Rewriting the given expression as

    L⁻¹[1/(s + 1)²] = L⁻¹[1/(s − (−1))²]

and applying the first shifting property of the inverse Laplace transform,

    L⁻¹[1/(s + 1)²] = e^{−t} L⁻¹[1/s²]

Thus we obtain

    L⁻¹[1/(s + 1)²] = t e^{−t}.
40.6.2 Problem 2
Find L⁻¹[1/(s² + 4s + 8)].
Solution: First we complete the square to write the denominator as (s + 2)² + 4. Next we find

    L⁻¹[1/(s² + 4)] = (1/2) sin 2t.

Putting it all together with the shifting property, we find

    L⁻¹[1/(s² + 4s + 8)] = L⁻¹[1/((s + 2)² + 4)] = (1/2) e^{−2t} sin 2t.
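Again this can be checked mechanically (assuming SymPy is available):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Invert 1/(s^2 + 4s + 8) and compare with (1/2) e^{-2t} sin 2t
f = sp.inverse_laplace_transform(1 / (s**2 + 4 * s + 8), s, t)
assert sp.simplify(f - sp.exp(-2 * t) * sp.sin(2 * t) / 2) == 0
```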
40.8.1 Problem 1
We now find

    L⁻¹[e^{−s}/(s(s² + 1))] = L⁻¹[e^{−s} L[1 − cos t]]
40.8.2 Problem 2
By linearity we have

    f(t) = L⁻¹[e^{−s}/(s² + 4)] + L⁻¹[e^{−2s}/(s² + 4)] + L⁻¹[e^{−3s}/(s + 2)²]

Putting it all together and using the second shifting theorem we get

    f(t) = (1/2) sin(2(t − 1)) H(t − 1) + (1/2) sin(2(t − 2)) H(t − 2) + (t − 3) e^{−2(t−3)} H(t − 3)
Lesson 41
41.1.1 Example
If

    L⁻¹[s/(s² − 16)] = cosh 4t,

then find

    L⁻¹[s/(2s² − 8)]
If L⁻¹[F(s)] = f(t) then L⁻¹[(dⁿ/dsⁿ) F(s)] = (−1)ⁿ tⁿ f(t),  n = 1, 2, ...
dsn
41.2.1 Example
41.3.1 Example
If L−1 [F (s)] = f (t) and f (0) = 0, then L−1 [sF (s)] = f ′(t)
41.4.1 Example
Using L⁻¹[1/(s² + 1)] = sin t and applying the above result, compute L⁻¹[s/(s² + 1)].
Solution:

    L⁻¹[s/(s² + 1)] = (d/dt) sin t = cos t.
Let L⁻¹[F(s)] = f(t). If f(t) is piecewise continuous and of exponential order α such that lim_{t→0} f(t)/t exists, then

    L⁻¹[F(s)/s] = ∫₀^t f(u) du.
41.6.1 Problem 1
Compute

    L⁻¹[1/(s(s² + 1))]
41.6.2 Problem 2
Find the inverse Laplace transform of 1/(s² + 1)².
Solution: We know that

    L⁻¹[s/(1 + s²)²] = (1/2) t sin t.

We now apply the above result:

    L⁻¹[1/(1 + s²)²] = L⁻¹[(1/s) · s/(1 + s²)²] = (1/2) ∫₀^t u sin u du.
41.6.3 Problem 3
Find the inverse Laplace transform of (s − 1)/(s²(s² + 1)).
Solution: It is easy to compute

    L⁻¹[(s − 1)/(s² + 1)] = L⁻¹[s/(s² + 1)] − L⁻¹[1/(s² + 1)] = cos t − sin t.
With the application of the Laplace and inverse Laplace transforms we can also compute some complicated integrals.
41.8.1 Problem 1
Evaluate

    ∫₀^∞ cos(tx)/(x² + 1) dx,  t > 0.

Solution: Let

    f(t) = ∫₀^∞ cos(tx)/(x² + 1) dx.

Taking the Laplace transform on both sides,

    L[f(t)] = ∫₀^∞ s/((x² + 1)(s² + x²)) dx

            = (s/(s² − 1)) ∫₀^∞ [1/(x² + 1) − 1/(s² + x²)] dx

            = (s/(s² − 1)) [tan⁻¹ x − (1/s) tan⁻¹(x/s)]₀^∞

            = (s/(s² − 1)) (π/2 − π/(2s)) = (π/2) · 1/(s + 1).

Taking the inverse Laplace transform on both sides,

    f(t) = (π/2) e^{−t}.
41.8.2 Problem 2
Evaluate ∫₀^∞ e^{−x²} dx.
Solution: Let

    g(t) = ∫₀^∞ e^{−tx²} dx

Now taking the Laplace transform on both sides,

    L[g(t)] = ∫₀^∞ 1/(s + x²) dx = [(1/√s) arctan(x/√s)]₀^∞ = π/(2√s)

Taking the inverse Laplace transform we obtain

    g(t) = (π/2) L⁻¹[1/√s] = (π/2) · 1/(√π √t).

Hence for t = 1 we get

    ∫₀^∞ e^{−x²} dx = √π/2.
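SymPy evaluates the Gaussian integral directly, confirming the value obtained through the Laplace transform (assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Direct evaluation of the Gaussian integral on [0, oo)
value = sp.integrate(sp.exp(-x**2), (x, 0, sp.oo))
assert value == sp.sqrt(sp.pi) / 2
```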
Lesson 42
In this lesson we introduce the convolution property of the Laplace transform. We start with the definition of convolution, followed by an important theorem on the Laplace transform of a convolution. The convolution theorem plays an important role in finding inverse Laplace transforms of complicated functions and is therefore very useful for solving differential equations.
42.1 Convolution
The convolution of two given functions f(t) and g(t) is written as f ∗ g and is defined by the integral

    (f ∗ g)(t) := ∫₀^t f(τ) g(t − τ) dτ.    (42.1)

As you can see, the convolution of two functions of t is another function of t.
42.2.1 Problem 1
(f ∗ g)(t) = e^t − t − 1.
42.2.2 Problem 2
We apply the identity cos(θ) sin(ψ) = (1/2)[sin(θ + ψ) − sin(θ − ψ)] to get

    (f ∗ g)(t) = ∫₀^t (1/2)[sin(ωt) − sin(ωt − 2ωτ)] dτ

On integration we obtain

    (f ∗ g)(t) = [(1/2) τ sin(ωt) − (1/(4ω)) cos(2ωτ − ωt)]_{τ=0}^{t} = (1/2) t sin(ωt).
The formula holds only for t ≥ 0. We assumed that f and g are zero (or simply not
defined) for negative t.
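The convolution integral itself can be evaluated symbolically; taking f(t) = cos ωt and g(t) = sin ωt (consistent with the integrand above), a SymPy sketch (assuming SymPy is available):

```python
import sympy as sp

t, tau, w = sp.symbols('t tau omega', positive=True)

# Convolution of cos(wt) with sin(wt) on [0, t]
conv = sp.integrate(sp.cos(w * tau) * sp.sin(w * (t - tau)), (tau, 0, t))

# Compare with (1/2) t sin(wt) numerically at a sample point
diff = conv - t * sp.sin(w * t) / 2
assert abs(complex(diff.subs({t: 1.3, w: 2.1}).evalf())) < 1e-10
```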
The convolution has many properties that make it behave like a product. Let c be a constant and f, g, and h be functions; then
(i) f ∗ g = g ∗ f,  [symmetry]
(ii) c(f ∗ g) = (cf) ∗ g = f ∗ (cg),  [c a constant]
(iii) f ∗ (g ∗ h) = (f ∗ g) ∗ h,  [associative property]
(iv) f ∗ (g + h) = f ∗ g + f ∗ h,  [distributive property]
Proof: We give the proof of (i); all the others can be done similarly. By the definition of convolution we have

    f ∗ g = ∫₀^t f(τ) g(t − τ) dτ

Substituting t − τ = u, so that −dτ = du, we get

    f ∗ g = −∫ₜ^0 f(t − u) g(u) du = ∫₀^t f(t − u) g(u) du = g ∗ f
t 0
In other words, the Laplace transform of a convolution is the product of the Laplace
transforms. The simplest way to use this result is in reverse, i.e., to find inverse Laplace
transform.
42.5.1 Problem 1
42.5.2 Problem 2
42.5.3 Problem 3
Substituting u = √τ, so that du = dτ/(2√τ), gives

    L⁻¹[1/(√s (s − 1))] = (e^t/√π) ∫₀^t (e^{−τ}/√τ) dτ = (2e^t/√π) ∫₀^{√t} e^{−u²} du.

Thus, we have

    L⁻¹[1/(√s (s − 1))] = e^t erf(√t).
42.5.4 Problem 4
Solution: We know

    L⁻¹[1/s³] = t²/2  and  L⁻¹[1/(s² + 1)] = sin t.

By the convolution theorem we have

    L⁻¹[1/(s³(s² + 1))] = (t²/2) ∗ sin t = (1/2) ∫₀^t sin τ (t − τ)² dτ

      = (1/2) { [−cos τ (t − τ)²]₀^t − 2 ∫₀^t (t − τ) cos τ dτ }

      = (1/2) { t² − 2 ( [(t − τ) sin τ]₀^t + ∫₀^t sin τ dτ ) }.
Lesson 43
In this lesson we discuss the Laplace transforms of some special functions, such as the error function, the Dirac delta function, etc. These functions appear in various applications of science and engineering; some of them we shall encounter while solving differential equations using the Laplace transform.
The error function appears in probability, statistics and the solutions of some partial differential equations. It is defined as

    erf(t) = (2/√π) ∫₀^t e^{−u²} du

Its complement, known as the complementary error function, is defined as

    erfc(t) = 1 − erf(t) = (2/√π) ∫ₜ^∞ e^{−u²} du

We find the Laplace transform of different forms of the error function in the following examples.
43.2.1 Problem 1
Find L[erf(√t)].
Solution: From the definition of the error function and the Laplace transform we have

    L[erf(√t)] = (2/√π) ∫₀^∞ e^{−st} ∫₀^{√t} e^{−x²} dx dt
43.2.2 Problem 2
" √ #
k
e−2k s
k
−1
Find L erf √ . and show that L = erfc √
t s t
Solution: By the definition of Laplace transform we have
k
∞
k 2
Z Z √
t 2
−st
L erf √ = e √ e−u du dt
t 0 π 0
Let us assume Z ∞ 2
−u2 −s k2
I(s) = e u du
0
By differentiation under integral sign
∞ 2
k2
dI
Z
−u2 −s k2
= e u − 2 du
ds 0 u
√ √
sk sk
Substitution u =x⇒− u2
du = dx leads to
∞ 2
dI k k
Z
−x2 −s k2
= −√ e x dx = − √ I
ds s 0 s
Therefore, we get √
π −2k√s
I(s) = e
2
Substituting this value in the equation (43.1), we obtain
√ √
1 − e−2k s
√
k 2 π π −2k√s
L erf √ = √ − e =
t s π 2 2 s
Often in applications we study a physical system by putting in a short pulse and then seeing what the system does. The resulting behaviour is often called the impulse response. Let us see what we mean by a pulse. The simplest kind of pulse is a rectangular pulse defined by

    φ_{aε}(t) = { 0 if t < a;  1/ε if a ≤ t < a + ε;  0 if a + ε ≤ t }
On integration we get

    L[φ_{aε}(t)] = e^{−sa} (1 − e^{−sε})/(sε)
We generally want ε to be very small; that is, we wish the pulse to be very short and very tall. By letting ε go to zero we arrive at the concept of the Dirac delta function δ(t − a). Thus, the Dirac delta can be thought of as the limiting case of φ_{aε}(t) as ε → 0. So δ(t) is a "function" with all its "mass" at the single point t = 0. In other words, the Dirac delta function is defined as having the following properties:
Unfortunately there is no such function in the classical sense. You could informally think that δ(t) is zero for t ≠ 0 and somehow infinite at t = 0.
As we can integrate against δ(t), let us compute its Laplace transform:

    L[δ(t − a)] = ∫₀^∞ e^{−st} δ(t − a) dt = e^{−as}

In particular,

    L[δ(t)] = 1.
Remark: Notice that the Laplace transform of δ(t − a) looks like the Laplace transform of the derivative of the Heaviside function u(t − a), if we could differentiate the Heaviside function. First notice

    L[u(t − a)] = e^{−as}/s.

To obtain what the Laplace transform of the derivative would be, we multiply by s to obtain e^{−as}, which is the Laplace transform of δ(t − a). We see the same thing using integration,

    ∫₀^t δ(s − a) ds = u(t − a).

So in a certain sense

    " (d/dt) u(t − a) = δ(t − a) "

This line of reasoning allows us to talk about derivatives of functions with jump discontinuities. We can think of the derivative of the Heaviside function u(t − a) as being somehow infinite at a, which is precisely our intuitive understanding of the delta function.
43.3.1 Example
Compute L⁻¹[(s + 1)/s].
Solution: We write

    L⁻¹[(s + 1)/s] = L⁻¹[1 + 1/s] = L⁻¹[1] + L⁻¹[1/s] = δ(t) + 1.

The resulting object is a generalized function which makes sense only when put under an integral.
Lesson 44
In this lesson we evaluate Laplace and inverse Laplace transforms of some useful functions. Important special functions include the Bessel functions and the Laguerre polynomials. Additionally, some examples demonstrating the potential of the Laplace and inverse Laplace transforms for evaluating special integrals will be presented.
The Bessel function of order n, Jₙ(t), satisfies the differential equation

y″ + (1/t) y′ + (1 − n²/t²) y = 0.

In particular,

J₀(t) = 1 − t²/2² + t⁴/(2²·4²) − t⁶/(2²·4²·6²) + ...

and

J₁(t) = t/2 − t³/(2²·4) + t⁵/(2²·4²·6) − ...
Note that J0′ (t) = −J1 (t).
44.1.1 Example
L[J₀(t)] = L[1 − t²/2² + t⁴/(2²·4²) − t⁶/(2²·4²·6²) + ...]
 = 1/s − 1/(2s³) + 3/(8s⁵) − ...
 = (1/s)(1 + 1/s²)^{−1/2} = 1/√(1 + s²).

Hence, using L[J₁(t)] = L[−J₀′(t)] = −(s L[J₀(t)] − J₀(0)), we obtain

L[J₁(t)] = 1 − s/√(1 + s²).
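Both transforms can be verified numerically by direct integration. The following SciPy sketch (illustrative) checks the two formulas at s = 1:

```python
# Numerical check (illustrative): verify L[J0(t)] = 1/sqrt(1+s^2) and
# L[J1(t)] = 1 - s/sqrt(1+s^2) by direct integration at s = 1.
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1

s = 1.0
LJ0, _ = quad(lambda t: np.exp(-s * t) * j0(t), 0, np.inf)
LJ1, _ = quad(lambda t: np.exp(-s * t) * j1(t), 0, np.inf)
print(LJ0, 1 / np.sqrt(1 + s**2))      # both ≈ 0.7071
print(LJ1, 1 - s / np.sqrt(1 + s**2))  # both ≈ 0.2929
```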
44.2.1 Example
Show that L[Lₙ(t)] = (s − 1)ⁿ/sⁿ⁺¹.
Solution: Using Rodrigues' formula Lₙ(t) = (eᵗ/n!) dⁿ/dtⁿ (e^{−t}tⁿ) and the definition of the Laplace transform, we have

L[Lₙ(t)] = ∫₀^∞ e^{−st} (eᵗ/n!) dⁿ/dtⁿ (e^{−t}tⁿ) dt
 = (1/n!) ∫₀^∞ e^{−(s−1)t} dⁿ/dtⁿ (e^{−t}tⁿ) dt.

Integrating by parts n times (the boundary terms vanish), we get

L[Lₙ(t)] = ((s − 1)ⁿ/n!) ∫₀^∞ e^{−(s−1)t} e^{−t} tⁿ dt = ((s − 1)ⁿ/n!) ∫₀^∞ e^{−st} tⁿ dt = (s − 1)ⁿ/sⁿ⁺¹.
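The result can be verified numerically for a particular n and s. The following sketch (illustrative) uses SciPy's `eval_laguerre`:

```python
# Numerical check (illustrative): L[L_n(t)] = (s-1)^n / s^{n+1} for the
# Laguerre polynomial L_n, here with n = 3 and s = 2.
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_laguerre

n, s = 3, 2.0
val, _ = quad(lambda t: np.exp(-s * t) * eval_laguerre(n, t), 0, np.inf)
print(val, (s - 1)**n / s**(n + 1))  # both ≈ 0.0625
```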
44.3.1 Problem 1
44.3.2 Problem 2
Show that

∫₀^∞ (sin t)/t dt = π/2.
Solution: We know

L[sin t] = 1/(s² + 1).

Therefore, we get

L[(sin t)/t] = ∫ₛ^∞ du/(u² + 1) = π/2 − tan⁻¹ s.
Taking the limit as s → 0 (see the remarks below for details) we find

∫₀^∞ (sin t)/t dt = π/2 − tan⁻¹(0) = π/2.
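The value of this Dirichlet integral can be confirmed symbolically. A short sympy sketch (illustrative):

```python
# Symbolic check (illustrative) of the Dirichlet integral with sympy.
import sympy as sp

t = sp.symbols('t', positive=True)
result = sp.integrate(sp.sin(t) / t, (t, 0, sp.oo))
print(result)  # pi/2
```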
Remark 1: Suppose that f is piecewise continuous on [0, ∞), that L[f(t)] = F(s) exists for all s > 0, and that ∫₀^∞ f(t) dt converges. Then

lim_{s→0⁺} F(s) = lim_{s→0⁺} ∫₀^∞ e^{−st} f(t) dt = ∫₀^∞ f(t) dt.
Remark 2: If the integral ∫₀^∞ e^{−st} f(t) dt converges uniformly for all s ∈ E, then F(s) is a continuous function on E; that is, for s → s₀ ∈ E,

lim_{s→s₀} ∫₀^∞ e^{−st} f(t) dt = F(s₀) = ∫₀^∞ lim_{s→s₀} e^{−st} f(t) dt.
Remark 3: Recall that the integral ∫₀^∞ e^{−st} f(t) dt is said to converge uniformly for s in some domain Ω if for any ε > 0 there exists some number τ₀ such that if τ ≥ τ₀ then

|∫_τ^∞ e^{−st} f(t) dt| < ε

for all s in Ω.
44.3.3 Problem 3

Evaluate the integral ∫_{−∞}^∞ (x sin xt)/(x² + a²) dx for t > 0, a > 0.
Solution: Let

f(t) = ∫₀^∞ (x sin xt)/(x² + a²) dx.
Taking the Laplace transform (and interchanging the order of integration), we get

F(s) = ∫₀^∞ [x/(x² + a²)] · [x/(x² + s²)] dx.
On simplification we obtain

F(s) = (1/2) · π/(s + a).
Taking the inverse Laplace transform we find

f(t) = (1/2) π e^{−at}.
Hence the value of the given integral is

∫_{−∞}^∞ (x sin xt)/(x² + a²) dx = 2 ∫₀^∞ (x sin xt)/(x² + a²) dx = π e^{−at}.
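The half-range integral can be checked numerically; SciPy's `quad` handles oscillatory integrals over [0, ∞) via its `weight='sin'` option. An illustrative sketch for a = 1, t = 2:

```python
# Numerical check (illustrative): for a = 1, t = 2 the integral
# ∫₀^∞ x sin(xt)/(x² + a²) dx should equal (π/2) e^{-at}.
import numpy as np
from scipy.integrate import quad

a, t = 1.0, 2.0
val, _ = quad(lambda x: x / (x**2 + a**2), 0, np.inf, weight='sin', wvar=t)
print(val, (np.pi / 2) * np.exp(-a * t))  # both ≈ 0.2126
```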
Suggested Readings
Arfken, G.B., Weber, H.J. and Harris, F.E. (2012). Mathematical Methods for Physicists
(A comprehensive guide), Seventh Edition, Elsevier Academic Press, New Delhi.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Grewal, B.S. (2007). Higher Engineering Mathematics. Fourteenth Edition. Khanna
Publishers, New Delhi.
Dyke, P.P.G. (2001). An Introduction to Laplace Transforms and Fourier Series. Springer-
Verlag London Ltd.
Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics. Third Edi-
tion. Narosa Publishing House. New Delhi.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Kreyszig, E. (2006). Advanced Engineering Mathematics, Ninth Edition, Wiley India
Pvt. Ltd, New Delhi.
McQuarrie, D.A. (2009). Mathematical Methods for Scientist and Engineers. First Indian
Edition. Viva Books Pvt. Ltd. New Delhi.
Schiff, J.L. (1999). The Laplace Transform: Theory and Applications. Springer-Verlag,
New York Inc.
Raisinghania, M.D. (2009). Advanced Differential Equations. Twelfth Edition. S. Chand
& Company Ltd., New Delhi.
Lesson 45
In previous lessons we evaluated Laplace and inverse Laplace transforms of various functions; these will be used in this and the following lessons to solve ordinary differential equations. In this lesson we mainly solve initial value problems.
45.2.1 Problem 1
45.2.2 Problem 2

Solve x″ + x = cos(2t), x(0) = 0, x′(0) = 1.
Solution: We take the Laplace transform on both sides. By X(s) we will, as usual, denote the Laplace transform of x(t):

L[x″(t) + x(t)] = L[cos(2t)],

s²X(s) − s x(0) − x′(0) + X(s) = s/(s² + 4).
Plugging in the initial conditions, we obtain

s²X(s) − 1 + X(s) = s/(s² + 4).
We now solve for X(s) as

X(s) = s/((s² + 1)(s² + 4)) + 1/(s² + 1).
We use partial fractions to write

X(s) = (1/3) · s/(s² + 1) − (1/3) · s/(s² + 4) + 1/(s² + 1).
Now take the inverse Laplace transform to obtain

x(t) = (1/3) cos(t) − (1/3) cos(2t) + sin(t).
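The solution can be verified symbolically. An illustrative sympy sketch checks the differential equation and both initial conditions:

```python
# Symbolic check (illustrative) that x(t) = cos(t)/3 - cos(2t)/3 + sin(t)
# solves x'' + x = cos(2t) with x(0) = 0, x'(0) = 1.
import sympy as sp

t = sp.symbols('t')
x = sp.cos(t) / 3 - sp.cos(2 * t) / 3 + sp.sin(t)
residual = sp.simplify(sp.diff(x, t, 2) + x - sp.cos(2 * t))
print(residual, x.subs(t, 0), sp.diff(x, t).subs(t, 0))  # 0 0 1
```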
45.2.3 Problem 3
45.2.4 Problem 4
Solve

y″ + y = C H(t − a), y(0) = 0, y′(0) = 1.
45.2.5 Problem 5

Solve x″ + x = H(t − 1) − H(t − 5), x(0) = 0, x′(0) = 0.

Solution: We transform the equation and plug in the initial conditions as before to obtain

s²X(s) + X(s) = e^{−s}/s − e^{−5s}/s.
We solve for X(s) to obtain

X(s) = e^{−s}/(s(s² + 1)) − e^{−5s}/(s(s² + 1)).
Since 1/(s(s² + 1)) = L[1 − cos t], we have

L⁻¹[e^{−s}/(s(s² + 1))] = L⁻¹[e^{−s} L[1 − cos t]] = (1 − cos(t − 1)) H(t − 1).
Similarly, we have

L⁻¹[e^{−5s}/(s(s² + 1))] = L⁻¹[e^{−5s} L[1 − cos t]] = (1 − cos(t − 5)) H(t − 5).
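Combining the two terms, the solution is x(t) = (1 − cos(t − 1))H(t − 1) − (1 − cos(t − 5))H(t − 5). As an illustrative sanity check, on the middle branch 1 < t < 5 the solution reduces to x = 1 − cos(t − 1), which should satisfy x″ + x = 1 there:

```python
# Symbolic check (illustrative): on 1 < t < 5 the solution branch
# x = 1 - cos(t - 1) satisfies x'' + x = 1.
import sympy as sp

t = sp.symbols('t')
x_mid = 1 - sp.cos(t - 1)  # solution branch for 1 < t < 5
print(sp.simplify(sp.diff(x_mid, t, 2) + x_mid))  # 1
```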
45.2.6 Problem 6
45.2.7 Problem 7
Suggested Readings
Arfken, G.B., Weber, H.J. and Harris, F.E. (2012). Mathematical Methods for Physicists
(A comprehensive guide), Seventh Edition, Elsevier Academic Press, New Delhi.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Grewal, B.S. (2007). Higher Engineering Mathematics. Fourteenth Edition. Khanna
Publishers, New Delhi.
Dyke, P.P.G. (2001). An Introduction to Laplace Transforms and Fourier Series. Springer-
Verlag London Ltd.
Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics. Third Edi-
tion. Narosa Publishing House. New Delhi.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Kreyszig, E. (2006). Advanced Engineering Mathematics, Ninth Edition, Wiley India
Pvt. Ltd, New Delhi.
McQuarrie, D.A. (2009). Mathematical Methods for Scientist and Engineers. First Indian
Edition. Viva Books Pvt. Ltd. New Delhi.
Schiff, J.L. (1999). The Laplace Transform: Theory and Applications. Springer-Verlag,
New York Inc.
Raisinghania, M.D. (2009). Advanced Differential Equations. Twelfth Edition. S. Chand
& Company Ltd., New Delhi.
Lesson 46
In this lesson we continue the application of the Laplace transform to initial and boundary value problems, including differential equations with variable coefficients.
46.1.1 Problem 1

Solve x″ + ω₀² x = f(t), x(0) = 0, x′(0) = 0.

Solution: Taking the Laplace transform gives (s² + ω₀²)X(s) = F(s), or in other words

X(s) = F(s) · 1/(s² + ω₀²).
We know

L⁻¹[1/(s² + ω₀²)] = sin(ω₀t)/ω₀.
Therefore, using the convolution theorem, we find

x(t) = ∫₀ᵗ f(τ) · sin(ω₀(t − τ))/ω₀ dτ.
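For the particular choice f ≡ 1 and ω₀ = 1 (an arbitrary illustration), the convolution gives x(t) = 1 − cos t, which indeed satisfies x″ + x = 1 with zero initial conditions:

```python
# Symbolic check (illustrative) of the convolution solution for f ≡ 1, ω₀ = 1.
import sympy as sp

t, tau = sp.symbols('t tau')
x = sp.integrate(sp.sin(t - tau), (tau, 0, t))  # f ≡ 1, omega_0 = 1
print(sp.simplify(x))                            # 1 - cos(t)
print(sp.simplify(sp.diff(x, t, 2) + x))         # 1
```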
46.1.2 Problem 2
46.1.3 Problem 3
46.1.4 Problem 4
We notice that y(0) = 0 and y″(0) = 0. Let us call C₁ = y′(0) and C₂ = y‴(0). We solve for Y(s):

Y(s) = −e^{−s}/s⁴ + C₁/s² + C₂/s⁴.
We take the inverse Laplace transform, utilizing the second shifting property for the first term:

y(x) = −((x − 1)³/6) u(x − 1) + C₁x + (C₂/6) x³.
We still need to apply two of the endpoint conditions. As the conditions are at x = 2 we can simply replace u(x − 1) by 1 when taking the derivatives. Therefore,

0 = y(2) = −(2 − 1)³/6 + C₁·2 + (C₂/6)·2³ = −1/6 + 2C₁ + (4/3)C₂,
and

0 = y″(2) = −(3·2·(2 − 1))/6 + (C₂/6)·3·2·2 = −1 + 2C₂.
Hence C₂ = 1/2, and solving for C₁ using the first equation we obtain C₁ = −1/4. Our solution for the beam deflection is

y(x) = −((x − 1)³/6) u(x − 1) − x/4 + x³/12.
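The endpoint conditions used to fix C₁ and C₂ can be checked directly. An illustrative sympy sketch on the branch x > 1:

```python
# Check (illustrative) of the beam solution: for x > 1 the deflection is
# y(x) = -(x-1)^3/6 - x/4 + x^3/12; the conditions y(2) = 0 and y''(2) = 0 hold.
import sympy as sp

x = sp.symbols('x')
y_right = -(x - 1)**3 / 6 - x / 4 + x**3 / 12  # branch for x > 1
print(y_right.subs(x, 2))                       # 0
print(sp.diff(y_right, x, 2).subs(x, 2))        # 0
```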
We now demonstrate the potential of Laplace transform for solving ordinary differential
equations with variable coefficients.
46.1.5 Problem 5
On integration we find

Y(s) s³ e^{−s²/2} = 4e^{−s²/2} − s²e^{−s²/2} + ∫ 2s e^{−s²/2} ds + c = (2 − s²) e^{−s²/2} + c.

Choosing c = 0 (so that Y(s) → 0 as s → ∞) gives Y(s) = 2/s³ − 1/s, and taking the inverse Laplace transform,

y(t) = t² − 1.
46.1.6 Problem 6
On integration we get

Y(s) = c/√(1 + s²).

Taking the inverse Laplace transform we find

y(t) = c J₀(t).
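Consistent with y(t) = cJ₀(t), one can check numerically that J₀ satisfies Bessel's equation of order zero, t y″ + y′ + t y = 0. An illustrative SciPy sketch:

```python
# Numerical check (illustrative) that J0 satisfies t y'' + y' + t y = 0,
# consistent with y(t) = c J0(t); jvp gives derivatives of Bessel functions.
from scipy.special import j0, jvp

t = 1.3
residual = t * jvp(0, t, 2) + jvp(0, t, 1) + t * j0(t)
print(abs(residual) < 1e-10)  # True
```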
Suggested Readings
Arfken, G.B., Weber, H.J. and Harris, F.E. (2012). Mathematical Methods for Physicists
(A comprehensive guide), Seventh Edition, Elsevier Academic Press, New Delhi.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Grewal, B.S. (2007). Higher Engineering Mathematics. Fourteenth Edition. Khanna
Publishers, New Delhi.
Dyke, P.P.G. (2001). An Introduction to Laplace Transforms and Fourier Series. Springer-
Verlag London Ltd.
Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics. Third Edi-
tion. Narosa Publishing House. New Delhi.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Kreyszig, E. (2006). Advanced Engineering Mathematics, Ninth Edition, Wiley India
Pvt. Ltd, New Delhi.
McQuarrie, D.A. (2009). Mathematical Methods for Scientist and Engineers. First Indian
Edition. Viva Books Pvt. Ltd. New Delhi.
Schiff, J.L. (1999). The Laplace Transform: Theory and Applications. Springer-Verlag,
New York Inc.
Raisinghania, M.D. (2009). Advanced Differential Equations. Twelfth Edition. S. Chand
& Company Ltd., New Delhi.
Lesson 47
In this lesson we discuss the application of the Laplace transform to integral equations, integro-differential equations, and simultaneous differential equations.
where F(s), G(s), and K(s) are the Laplace transforms of f(t), g(t), and K(t) respectively. Solving for F(s), we find

F(s) = G(s)/(1 − K(s)).

To find f(t) we now need to find the inverse Laplace transform of F(s). Similar steps can be followed to solve the integral equation of the second type mentioned above.
47.2.1 Problem 1

Solve the integral equation f(t) = e^{−t} + ∫₀ᵗ sin(t − u) f(u) du.

Solution: Applying the Laplace transform on both sides and using the convolution theorem we get

L[f(t)] = 1/(s + 1) + L[sin t] L[f(t)].

On simplification, we obtain

L[f(t)] (1 − 1/(s² + 1)) = 1/(s + 1),

so that L[f(t)] = (s² + 1)/(s²(s + 1)) = −1/s + 1/s² + 2/(s + 1). Taking the inverse transform,

f(t) = 2e^{−t} + t − 1.
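The transformed relation above corresponds to the integral equation f(t) = e^{−t} + ∫₀ᵗ sin(t − u) f(u) du; the candidate solution can be checked numerically at a sample point. An illustrative SciPy sketch:

```python
# Numerical check (illustrative): f(t) = 2e^{-t} + t - 1 should satisfy
# f(t) - ∫₀ᵗ sin(t-u) f(u) du = e^{-t}; verified here at t = 1.
import math
from scipy.integrate import quad

f = lambda t: 2 * math.exp(-t) + t - 1
t = 1.0
conv, _ = quad(lambda u: math.sin(t - u) * f(u), 0, t)
print(abs(f(t) - conv - math.exp(-t)) < 1e-8)  # True
```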
47.2.2 Problem 2
47.2.3 Problem 3
Solution: We apply the Laplace transform and the shifting property to get

2/s³ = (1/s) L[eᵗ x(t)] = (1/s) X(s − 1),

where X(s) = L[x(t)]. Thus, we have

X(s − 1) = 2/s² or X(s) = 2/(s + 1)².

We use the shifting property again to obtain

x(t) = 2e^{−t} t.
In addition to the integral term, integro-differential equations contain a differential term. The ideas used for solving ordinary differential equations and integral equations are now combined. We demonstrate the procedure with the following example.
47.3.1 Example
Solve

dy/dt + 4y + 13 ∫₀ᵗ y(u) du = 3e^{−2t} sin 3t, y(0) = 3.

Solution: Taking the Laplace transform and using its appropriate properties we obtain

sY(s) − y(0) + 4Y(s) + 13 Y(s)/s = 3 · 3/((s + 2)² + 9).
Collecting the terms in Y(s) we get

((s² + 4s + 13)/s) Y(s) = 3 + 9/((s + 2)² + 9).
On simplification (note s² + 4s + 13 = (s + 2)² + 9) we have

Y(s) = 9s/[(s + 2)² + 9]² + 3s/((s + 2)² + 9).
At the end we show with the help of an example the application of Laplace transform for
solving simultaneous differential equations.
47.4.1 Example
Solve

dx/dt = 2x − 3y, dy/dt = y − 2x

subject to the initial conditions

x(0) = 8, y(0) = 3.
Solution: Taking the Laplace transform of both equations we get

sX(s) − x(0) = 2X(s) − 3Y(s) (47.1)

and

sY(s) − y(0) = Y(s) − 2X(s) (47.2)
On simplification we obtain

X(s) = (8s − 17)/((s − 4)(s + 1)).

Partial fractions lead to

X(s) = 5/(s + 1) + 3/(s − 4).
Taking the inverse Laplace transform on both sides we get x(t) = 5e^{−t} + 3e^{4t}.
Now we solve equations (47.1) and (47.2) for Y(s).
On simplification we get

Y(s) = (3s − 22)/(s² − 3s − 4) = (3s − 22)/((s − 4)(s + 1)).
Using the method of partial fractions we obtain

Y(s) = 5/(s + 1) − 2/(s − 4).
Taking the inverse transform we get y(t) = 5e^{−t} − 2e^{4t}.
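The pair x(t) = 5e^{−t} + 3e^{4t}, y(t) = 5e^{−t} − 2e^{4t}, read off from the partial fractions above, can be verified against the system and the initial conditions. An illustrative sympy sketch:

```python
# Symbolic check (illustrative) of the simultaneous ODE solution:
# x' = 2x - 3y, y' = y - 2x, with x(0) = 8, y(0) = 3.
import sympy as sp

t = sp.symbols('t')
x = 5 * sp.exp(-t) + 3 * sp.exp(4 * t)
y = 5 * sp.exp(-t) - 2 * sp.exp(4 * t)
print(sp.simplify(sp.diff(x, t) - (2 * x - 3 * y)))  # 0
print(sp.simplify(sp.diff(y, t) - (y - 2 * x)))      # 0
print(x.subs(t, 0), y.subs(t, 0))                    # 8 3
```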
Suggested Readings
Arfken, G.B., Weber, H.J. and Harris, F.E. (2012). Mathematical Methods for Physicists
(A comprehensive guide), Seventh Edition, Elsevier Academic Press, New Delhi.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Grewal, B.S. (2007). Higher Engineering Mathematics. Fourteenth Edition. Khanna
Publishers, New Delhi.
Dyke, P.P.G. (2001). An Introduction to Laplace Transforms and Fourier Series. Springer-
Verlag London Ltd.
Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics. Third Edi-
tion. Narosa Publishing House. New Delhi.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Kreyszig, E. (2006). Advanced Engineering Mathematics, Ninth Edition, Wiley India
Pvt. Ltd, New Delhi.
McQuarrie, D.A. (2009). Mathematical Methods for Scientist and Engineers. First Indian
Edition. Viva Books Pvt. Ltd. New Delhi.
Schiff, J.L. (1999). The Laplace Transform: Theory and Applications. Springer-Verlag,
New York Inc.
Raisinghania, M.D. (2009). Advanced Differential Equations. Twelfth Edition. S. Chand
& Company Ltd., New Delhi.
Lesson 48
In this last lesson of the module we demonstrate the potential of the Laplace transform for solving partial differential equations. If one of the independent variables in a partial differential equation ranges from 0 to ∞, the Laplace transform may be used.

The working steps are more or less similar to those for ordinary differential equations. We take the Laplace transform with respect to the variable that ranges from 0 to ∞; this converts the partial differential equation into an ordinary differential equation. The transformed ordinary differential equation is then solved subject to the given conditions. Finally, we take the inverse Laplace transform, which yields the required solution.
Denoting the Laplace transform of the unknown variable u(x, t) with respect to t by U(x, s) and using the definition of the Laplace transform, we have

U(x, s) = L[u(x, t)] = ∫₀^∞ e^{−st} u(x, t) dt
(i) L[∂u/∂x] = ∫₀^∞ e^{−st} (∂u/∂x) dt = d/dx ∫₀^∞ e^{−st} u(x, t) dt = dU/dx
(ii) L[∂u/∂t] = ∫₀^∞ e^{−st} (∂u/∂t) dt = [e^{−st} u]₀^∞ − ∫₀^∞ u(−s)e^{−st} dt
  = −u(x, 0) + s ∫₀^∞ u e^{−st} dt

⇒ L[∂u/∂t] = sU(x, s) − u(x, 0)
(iii) L[∂²u/∂x²] = d²U/dx²
(iv) L[∂²u/∂t²] = s²U(x, s) − s u(x, 0) − (∂u/∂t)(x, 0)
(v) L[∂²u/∂x∂t] = s (d/dx)U(x, s) − (d/dx)u(x, 0)
Remark: In deriving the above results, besides the assumptions of piecewise continuity and exponential order of u(x, t) with respect to t, we have also assumed that (i) differentiation under the integral sign is valid and (ii) the limit of the Laplace transform is the Laplace transform of the limit, i.e., lim_{x→x₀} L[u(x, t)] = L[lim_{x→x₀} u(x, t)].
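Rule (ii) can be checked symbolically for a sample function, say u(x, t) = x e^{−t} (an arbitrary choice for illustration):

```python
# Symbolic check (illustrative) of rule (ii) for u(x, t) = x e^{-t}:
# L[∂u/∂t] should equal s U(x, s) - u(x, 0).
import sympy as sp

x, t, s = sp.symbols('x t s', positive=True)
u = x * sp.exp(-t)
U = sp.laplace_transform(u, t, s, noconds=True)
lhs = sp.laplace_transform(sp.diff(u, t), t, s, noconds=True)
print(sp.simplify(lhs - (s * U - u.subs(t, 0))))  # 0
```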
48.2.1 Problem 1
u(x, t) = x + t
48.2.2 Problem 2
u(0, t) = 0 ⇒ U(0, s) = 0 ⇒ c = 0.

Thus we have

U(x, s) = x/(s(s + 1)) = x (1/s − 1/(s + 1)).

Taking the inverse Laplace transform we find the desired solution as

u(x, t) = x (1 − e^{−t}).
48.2.3 Problem 3
and

U(0, s) = 0 ⇒ c₁ + c₂ + 1/s = 0 ⇒ c₂ = −1/s.

Hence, we have

U(x, s) = −(1/s) e^{−√s x} + 1/s.

Taking the inverse Laplace transform we find the desired solution as

u(x, t) = 1 − L⁻¹[(1/s) e^{−√s x}] = 1 − (1 − erf(x/(2√t))) = erf(x/(2√t)).
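As an illustrative check, the solution u(x, t) = erf(x/(2√t)) should satisfy the heat equation u_t = u_xx (unit diffusivity) for t > 0:

```python
# Symbolic check (illustrative): u(x, t) = erf(x / (2 sqrt(t))) satisfies
# the heat equation u_t = u_xx for t > 0.
import sympy as sp

x, t = sp.symbols('x t', positive=True)
u = sp.erf(x / (2 * sp.sqrt(t)))
print(sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2)))  # 0
```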
48.2.4 Problem 4
s²Y(x, s) − s y(x, 0) − y_t(x, 0) − a² d²Y/dx² = 0.

With the given initial conditions we have the resulting differential equation

d²Y/dx² − (s²/a²) Y = 0.

Its general solution is given as

Y(x, s) = c₁ e^{sx/a} + c₂ e^{−sx/a}.
lim_{x→∞} Y(x, s) = 0 ⇒ c₁ = 0,

and

Y(0, s) = ω/(s² + ω²) ⇒ c₂ = ω/(s² + ω²).

Thus we have

Y(x, s) = ω/(s² + ω²) e^{−sx/a}.
Taking the inverse Laplace transform we obtain

y(x, t) = sin[ω(t − x/a)] H(t − x/a).
Suggested Readings
Raisinghania, M.D. (2005). Ordinary & Partial Differential Equation. Eighth Edition. S.
Chand & Company Ltd., New Delhi.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Dyke, P.P.G. (2001). An Introduction to Laplace Transforms and Fourier Series. Springer-
Verlag London Ltd.
Kreyszig, E. (2006). Advanced Engineering Mathematics, Ninth Edition, Wiley India
Pvt. Ltd, New Delhi.
McQuarrie, D.A. (2009). Mathematical Methods for Scientist and Engineers. First Indian
Edition. Viva Books Pvt. Ltd. New Delhi.
Schiff, J.L. (1999). The Laplace Transform: Theory and Applications. Springer-Verlag,
New York Inc.
QUIZ

(Only fragments of the multiple-choice questions survived extraction; the recoverable question stems and the answer key are retained below.)

7. The Laplace transform of f(t) = t H(t − a), t ≥ 0.
8. Which of the following functions does not possess a Laplace transform?
9. The value of L⁻¹[(2s + 3)/(s² + 4)].
11. The value of L⁻¹[(3s + 5)/(4s² + 4s + 1)].
12. The value of L⁻¹[e^{−s}/(s² − 2)].
20. The value of the integral ∫₀^∞ t e^{−3t} cos t dt.

Answers:
1. b  2. c  3. b  4. a  5. a  6. c  7. d  8. c  9. d  10. a
11. a  12. d  13. a  14. b  15. c  16. d  17. a  18. c  19. b  20. b