
Engineering Mathematics-III

Dr. P D Srivastava
Dr. P V S N Murthy
Dr. J Kumar

Course Content Developed By:

Dr. J Kumar (Assistant Professor)
Dr. P V S N Murthy (Associate Professor)
Department of Mathematics, IIT Kharagpur

Content Reviewed By:

Dr. P D Srivastava (Professor)
Department of Mathematics, IIT Kharagpur

Index

Module 1: Numerical Analysis
Lesson 1: Finite Differences
Lesson 2: Relation between Difference Operators
Lesson 3: Sources of Error and Propagation of Error in the Difference Table
Lesson 4: Solutions of Non-Linear Equation
Lesson 5: Secant and Regula-Falsi Methods
Lesson 6: Newton-Raphson Method
Lesson 7: Linear System of Algebraic Equations – Jacobi Method
Lesson 8: Gauss-Seidel Iteration Method
Lesson 9: Decomposition Methods
Lesson 10: Interpolation
Lesson 11: Higher Order Lagrange Interpolation
Lesson 12: Newton's Forward Interpolation Formula with Equal Intervals
Lesson 13: Newton's Backward Interpolation Polynomial
Lesson 14: Gauss Interpolation
Lesson 15: Everett's Central Difference Interpolation
Lesson 16: Stirling's and Bessel's Formula
Lesson 17: Newton's Divided Difference Interpolation
Lesson 18: Numerical Differentiation
Lesson 19: Numerical Integration
Lesson 20: Simpson's One Third and Simpson's Three Eighth Rules
Lesson 21: Boole's and Weddle's Rules
Lesson 22: Gaussian Quadrature
Lesson 23: Difference Equations
Lesson 24: Non Homogeneous Difference Equation
Lesson 25: Numerical Solutions of Ordinary Differential Equations
Lesson 26: Taylor Series Method
Lesson 27: Single Step Methods
Lesson 28: Modified Euler Method
Lesson 29: Runge-Kutta Methods
Lesson 30: 4th Order Runge-Kutta Method
Lesson 31: Methods for Solving Higher Order Initial Value Problems
Lesson 32: System of I.V.Ps. - th Order R-K Method

Module 2: Laplace Transform
Lesson 33: Introduction
Lesson 34: Laplace Transform of Some Elementary Functions
Lesson 35: Existence of Laplace Transform
Lesson 36: Properties of Laplace Transform
Lesson 37: Properties of Laplace Transform (Cont.)
Lesson 38: Properties of Laplace Transform (Cont.)
Lesson 39: Properties of Laplace Transform (Cont.)
Lesson 40: Inverse Laplace Transform
Lesson 41: Properties of Inverse Laplace Transform
Lesson 42: Convolution for Laplace Transform
Lesson 43: Laplace Transform of Some Special Functions
Lesson 44: Laplace and Inverse Laplace Transform: Miscellaneous Examples
Lesson 45: Application of Laplace Transform
Lesson 46: Applications of Laplace Transform (Cont.)
Lesson 47: Applications to Heat Equations
Lesson 48: Application of Laplace Transform (Cont.)
Module 1: Numerical Analysis

Lesson 1

Finite Differences

1.1 Introduction

The analytical solution to a problem provides the value of the dependent
variable for any value of the independent variable. Consider, for instance,
the simple spring-mass system governed by the differential equation

y'' + λ²y = 0        (1.1)

whose analytical solution can readily be written as

y(t) = c₁ sin λt + c₂ cos λt        (1.2)

on any interval. Solution (1.2) gives the function value at any point in the
interval. If equation (1.1) is solved numerically, the time interval is first
divided into a pre-determined finite number of non-overlapping subintervals,
equally spaced or otherwise, and the numerical solution is obtained at the
end points of these subintervals. If the interval considered is, say,
[t₀, t_N] and it is divided into N non-overlapping subintervals, say equally
spaced ones, then the numerical solution is obtained at the set of points
{t₀, t₁, t₂, …, t_{N−1}, t_N}, with the difference between t_{j+1} and t_j
being constant for every j = 0, 1, …, N − 1.

t₀   t₁   t₂   ……   t_j   t_{j+1}   ……   t_N

Fig. 1.1: Equally spaced division of the interval [t₀, t_N].

5 WhatsApp: +91 7900900676 www.AgriMoon.Com



Similarly, this can be done with unequally spaced subintervals also:

t₀   t₁   t₂   t₃   ……   t_j   t_{j+1}   t_{j+2}   ……   t_N

Fig. 1.2: Unequally spaced division of the interval [t₀, t_N].

The set of points {t₀, t₁, t₂, …, t_N} are called the nodal points. The
numerical solution at non-nodal points can also be obtained by interpolating
the data at the nodal points. This holds whether one is solving an ordinary
or a partial differential equation. Many of the finite difference operators
are applicable over equally spaced data points.

In the later part of this lesson, we shall define some of the finite
difference operators, their utility and their relationships.

1.2 Difference Operators and their Utility

Let y(t) be the variable depending on the independent variable t, and
consider (N + 1) equally spaced data points such that t_{j+1} − t_j = h,
h being constant.

Table 1.1: Equally spaced data set

t:   t₀   t₁   t₂   t₃   …   t_{j−1}   t_j   t_{j+1}   ……   t_N
y:   y₀   y₁   y₂   y₃   …   y_{j−1}   y_j   y_{j+1}   ……   y_N

The difference operators act on the dependent function. Notation: y_j = y(t_j).

1.2.1 Shift Operator


It is denoted by E. When it acts on y_j, it shifts the data at t_j to the
data at t_{j+1} (i.e., the data is moved forward by one spacing):

E y_j = y_{j+1}        (1.3)

Similarly, E⁻¹ y_j = y_{j−1}        (1.4)

For any (positive or negative) integer n,

Eⁿ y_j = y_{j+n}        (1.5)

1.2.2 Forward Difference Operator

It is denoted by ∆ and is defined as

∆y_i = y_{i+1} − y_i        (1.6)

This is called the first forward difference operator. The second forward
difference operator is ∆², whose action is defined as:

∆²y_i = ∆y_{i+1} − ∆y_i        (1.7)
      = (y_{i+2} − y_{i+1}) − (y_{i+1} − y_i)

or ∆²y_i = y_{i+2} − 2y_{i+1} + y_i        (1.8)

In a similar way, for any positive integer n, the nth forward difference
operator is defined as:


∆ⁿy_i = ∆ⁿ⁻¹y_{i+1} − ∆ⁿ⁻¹y_i        (1.9)

1.2.3 Backward Difference Operator

The first backward difference operator is denoted by ∇ and is defined as

∇y_i = y_i − y_{i−1}        (1.10)

The second order backward difference operator is ∇², defined as:

∇²y_i = ∇y_i − ∇y_{i−1}
      = y_i − 2y_{i−1} + y_{i−2}        (1.11)

The nth order backward difference of y_i is

∇ⁿy_i = ∇ⁿ⁻¹y_i − ∇ⁿ⁻¹y_{i−1}        (1.12)

Observe that

∇y_{i+1} = y_{i+1} − y_i = ∆y_i        (1.13)

This means these difference operators are related to each other.

1.2.4 The Central Difference Operator

It is denoted by δ and is defined as:

δy_i = y_{i+1/2} − y_{i−1/2}        (1.14)


Here, y_{i+1/2} = y(t_i + h/2) and y_{i−1/2} = y(t_i − h/2).

Also, δ²y_i = δy_{i+1/2} − δy_{i−1/2}
            = (y_{i+1} − y_i) − (y_i − y_{i−1})
            = y_{i+1} − 2y_i + y_{i−1}        (1.15)

δⁿy_i = δⁿ⁻¹y_{i+1/2} − δⁿ⁻¹y_{i−1/2}        (1.16)

Observe that δ²y_i = y_{i+1} − 2y_i + y_{i−1} = ∆²y_{i−1}        (1.17)

1.3 Construction of Difference Tables


Now, let us construct the forward, backward and central difference tables for the
given data points.

Example 1: Construct the forward difference table for the following set of
equally spaced data:

t:   t₀   t₁   t₂   t₃   t₄   t₅
y:   y₀   y₁   y₂   y₃   y₄   y₅

Solution:

t₀   y₀
t₁   y₁   ∆y₀ = y₁ − y₀
t₂   y₂   ∆y₁ = y₂ − y₁   ∆²y₀ = ∆y₁ − ∆y₀
t₃   y₃   ∆y₂ = y₃ − y₂   ∆²y₁ = ∆y₂ − ∆y₁   ∆³y₀ = ∆²y₁ − ∆²y₀
t₄   y₄   ∆y₃ = y₄ − y₃   ∆²y₂ = ∆y₃ − ∆y₂   ∆³y₁ = ∆²y₂ − ∆²y₁   ∆⁴y₀ = ∆³y₁ − ∆³y₀
t₅   y₅   ∆y₄ = y₅ − y₄   ∆²y₃ = ∆y₄ − ∆y₃   ∆³y₂ = ∆²y₃ − ∆²y₂   ∆⁴y₁ = ∆³y₂ − ∆³y₁   ∆⁵y₀ = ∆⁴y₁ − ∆⁴y₀

From the forward difference table constructed above it is noted that for a
data set with 6 points, the highest order forward difference that can be
formed is ∆⁵; differences of order 6 and above cannot be formed from these
data. The entries with the same subscript on y lie on downward-sloping
diagonals.
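The construction of such a table is mechanical and easy to automate. The following sketch is our own illustration (not part of the original lesson); the numeric data are the values of y(t) = t² − 1 used in Example 3 below, so the ∆³ column should vanish.

```python
# Build a forward difference table: column k holds the entries
# Δ^k y_0, Δ^k y_1, ...  Each new column differences the previous one.
def forward_difference_table(y):
    table = [list(y)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

y = [0, -1, 0, 3]            # y(t) = t^2 - 1 at t = -1, 0, 1, 2
for k, col in enumerate(forward_difference_table(y)):
    print(f"Δ^{k}:", col)    # the Δ^3 column is [0], as expected for a quadratic
```

The loop stops once a column has a single entry, so for six data points (as in Example 1) the last column produced is ∆⁵y₀.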

Example 2: Construct the backward difference table for the given data:

t_i:   -1    0     1     2     3
y_i:   9.1   8.2   7.3   6.4   5.5

Solution:

We know ∇ⁿy_i = ∇ⁿ⁻¹y_i − ∇ⁿ⁻¹y_{i−1}. Given y₀ = 9.1, y₁ = 8.2, y₂ = 7.3,
y₃ = 6.4, y₄ = 5.5.

Now the difference table is:

t_i   y_i   ∇            ∇²          ∇³          ∇⁴
-1    9.1
 0    8.2   ∇y₁ = -0.9
 1    7.3   ∇y₂ = -0.9   ∇²y₂ = 0
 2    6.4   ∇y₃ = -0.9   ∇²y₃ = 0    ∇³y₃ = 0
 3    5.5   ∇y₄ = -0.9   ∇²y₄ = 0    ∇³y₄ = 0    ∇⁴y₄ = 0

It is noted that for the given data, backward differences up to the 4th order
exist. It is also evident that from the second order onward all the
differences are zero, the reason being that all the first order differences
are the same.

Example 3: Compute a table of differences through ∆³ and ∇³ for the function
y(t) = t² − 1, using the step size h = 1 and t₀ = −1.

Solution: For the given function, the data set is constructed as:

t:      t₀ = −1   t₁ = 0    t₂ = 1   t₃ = 2
y(t):   y₀ = 0    y₁ = −1   y₂ = 0   y₃ = 3

For a second degree polynomial such as t² − 1, the third order (forward)
difference is zero; that is the reason we considered only 4 data points.

Now the forward difference table is:

t    y    ∆          ∆²          ∆³
-1   0
 0   -1   ∆y₀ = -1
 1   0    ∆y₁ = 1    ∆²y₀ = 2
 2   3    ∆y₂ = 3    ∆²y₁ = 2    ∆³y₀ = 0

Similarly the backward difference table is written as:

t    y        ∇          ∇²          ∇³
-1   y₀ = 0
 0   y₁ = -1  ∇y₁ = -1
 1   y₂ = 0   ∇y₂ = 1    ∇²y₂ = 2
 2   y₃ = 3   ∇y₃ = 3    ∇²y₃ = 2    ∇³y₃ = 0

The illustration reveals that the difference table is the same for both
forward and backward differences; only the entries are labelled differently.

Example 4: Form the central differences for the data given in Example 1.

Solution:

t    y    δ                    δ²                           δ³          δ⁴
t₀   y₀
t₁   y₁   δy_{1/2} = y₁ − y₀
t₂   y₂   δy_{3/2} = y₂ − y₁   δ²y₁ = δy_{3/2} − δy_{1/2}
t₃   y₃   δy_{5/2} = y₃ − y₂   δ²y₂ = δy_{5/2} − δy_{3/2}   δ³y_{3/2}
t₄   y₄   δy_{7/2} = y₄ − y₃   δ²y₃ = δy_{7/2} − δy_{5/2}   δ³y_{5/2}   δ⁴y₂

In the central difference table, the entries with the same subscript lie on
horizontal lines.


Keywords: Difference Tables, Central Difference Operator, Backward Difference
Operator, Forward Difference Operator, Shift Operator, Finite Differences.

References

Jain, M. K., Iyengar, S. R. K. and Jain, R. K. (2008). Numerical Methods.
Fifth Edition, New Age International Publishers, New Delhi.

Atkinson, Kendall E. (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Singapore.

Suggested Reading

Scheid, Francis (1989). Numerical Analysis. Second Edition, McGraw-Hill, New
York.

Sastry, S. S. (2005). Introductory Methods of Numerical Analysis. Fourth
Edition, Prentice Hall of India, New Delhi.



Module 1: Numerical Analysis

Lesson 2

Relation between Difference Operators

2.1 Introduction

In the previous lesson, we noticed from the difference tables that these
difference operators are related. In this lesson we establish the relations
between these operators.

Example 1: Show that the shift operator is related to the forward difference
operator as ∆ = E − 1 [1 being the identity operator] and to the backward
difference operator ∇ as ∇ = 1 − E⁻¹.

Solution:

By definition, when the forward difference operator operates on the function
data y_i, it gives

∆y_i = y_{i+1} − y_i
     = E y_i − y_i
     = (E − 1) y_i
∴ ∆ = E − 1.

Similarly, ∇y_i = y_i − y_{i−1}
               = y_i − E⁻¹ y_i
               = (1 − E⁻¹) y_i
∴ ∇ = 1 − E⁻¹.

From the above example, one can write E = ∆ + 1 and E⁻¹ = 1 − ∇.


Example 2: Establish δ = E^{1/2} − E^{−1/2}.

Solution: We know δy_i = y_{i+1/2} − y_{i−1/2}

= E^{1/2} y_i − E^{−1/2} y_i = (E^{1/2} − E^{−1/2}) y_i

∴ δ = E^{1/2} − E^{−1/2}.

From these examples, together with E = 1 + ∆ and E⁻¹ = 1 − ∇, one can
establish the relation between ∆, ∇ and δ as:

δ = (1 + ∆)^{1/2} − (1 − ∇)^{1/2}.

Example 3: Verify ∇E = E∇ = ∆ = δE^{1/2}.

Solution:

∇E y_i = ∇(y_{i+1}) = y_{i+1} − y_i = ∆y_i.

And E∇y_i = E(y_i − y_{i−1})
          = y_{i+1} − y_i
          = ∆y_i

Similarly, δE^{1/2} y_i = δy_{i+1/2} = y_{i+1} − y_i = ∆y_i.

Thus we have ∇E = E∇ = ∆ = δE^{1/2}.
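On a list of sampled values these operator identities can be checked directly: E is a shift of the list, and ∆, ∇ are differences of adjacent entries. The following numerical check is our own sketch (the sampled function sin t and the step h = 0.1 are arbitrary choices).

```python
import math

h = 0.1
y = [math.sin(i * h) for i in range(8)]   # any smooth sample works

def shift(s):   # E: (E y)_i = y_{i+1}
    return s[1:]

def diff(s):    # forward difference: (Δ s)_i = s_{i+1} - s_i
    return [b - a for a, b in zip(s, s[1:])]

# ∇E y_i: backward-differencing the shifted sequence; E∇ y_i: shifting the
# differenced sequence.  Both must equal Δ y_i once the indices are aligned.
nabla_E = diff(shift(y))    # entries for i = 1..6
E_nabla = shift(diff(y))    # entries for i = 1..6
delta_y = diff(y)[1:]       # Δ y_i for i = 1..6

assert all(abs(a - b) < 1e-12 for a, b in zip(nabla_E, delta_y))
assert all(abs(a - b) < 1e-12 for a, b in zip(E_nabla, delta_y))
```

The alignment comment matters: each operator shortens the list by one entry, so the three lists cover the same index range i = 1, …, 6 before comparison.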


Example 4: Show that 2µδ = ∇ + ∆, where µ is the averaging operator defined
as µ = (E^{1/2} + E^{−1/2})/2.

Solution:

2µδ y_i = 2µ(y_{i+1/2} − y_{i−1/2})
        = 2µ y_{i+1/2} − 2µ y_{i−1/2}
        = (E^{1/2} + E^{−1/2}) y_{i+1/2} − (E^{1/2} + E^{−1/2}) y_{i−1/2}
        = (y_{i+1} + y_i) − (y_i + y_{i−1})
        = y_{i+1} − y_{i−1}

Also (∇ + ∆) y_i = (y_i − y_{i−1}) + (y_{i+1} − y_i)
                 = y_{i+1} − y_{i−1}

∴ 2µδ = ∇ + ∆.

Exercise 1: Show that ∆ = δ²/2 + δ√(1 + δ²/4) and ∇ = δ√(1 + δ²/4) − δ²/2.

In Example 3 of the previous lesson, we observed that the third order
difference of a second degree polynomial is zero. This is similar to the fact
that the third derivative of a second degree polynomial is zero. This gives
an intuition that the differential operator D is connected with the
difference operators.

Let D denote the differential operator d/dt, D² denote d²/dt², and so on.

Relation between Difference Operators

Consider the function y(t) at a general node t_{i+1}. Now y(t_{i+1}) = y(t_i + h)
can be expanded in a Taylor series about the nodal point t_i [assuming the
continuity of the higher order derivatives of y(t) at t_i]:

y(t_{i+1}) = y(t_i + h) = y(t_i) + h dy/dt|_{t_i} + (h²/2!) d²y/dt²|_{t_i} + …        (2.1)

Using the operators E and D, the above equation (2.1) can be written as

E y_i = (1 + hD + (h²/2!)D² + (h³/3!)D³ + …) y_i        (2.2)

or E y_i = (e^{hD}) y_i

or E = e^{hD}        (2.3)

Or E = e hD (2.3)

Also, we know that E = ∆ + 1; using this in equation (2.3) we have:

∆ = e^{hD} − 1 = hD + (h²/2!)D² + (h³/3!)D³ + …        (2.4)

Thus the forward difference operator ∆ is connected with the differential
operator D. We can express D explicitly in terms of ∆.
From equation (2.4), we can write

hD = ln(1 + ∆)
   = ∆ − ∆²/2 + ∆³/3 − …

or D = (1/h)(∆ − ∆²/2 + ∆³/3 − …)        (2.5)

Now, the second order difference ∆² can be written as:

∆² = (hD + (h²/2!)D² + (h³/3!)D³ + …)²        (2.6)

or ∆² = h²D² + h³D³ + (7/12)h⁴D⁴ + …

1 2 11 
=
Exercise 2: Show that D 2
2 
∆ − ∆ 3 + ∆ 4 .......  .
h  12 

Exercise 3: Establish

(i) E^{1/2} = e^{hD/2}

(ii) ∇ = hD − (h²/2!)D² + (h³/3!)D³ − …

(iii) D² = (1/h²)(∇² + ∇³ + (11/12)∇⁴ + …)

(iv) δ² = h²D² + (h⁴/12)D⁴ + (h⁶/360)D⁶ + …


Thus the first and second order derivatives of a function y(t) at t_i are
written using ∆ as:

dy_i/dt = D y_i = (1/h)(∆y_i − ∆²y_i/2 + ∆³y_i/3 − …)        (2.7)

d²y_i/dt² = D²y_i = (1/h²)(∆²y_i − ∆³y_i + (11/12)∆⁴y_i − …)        (2.8)

Similarly, in terms of ∇ we can write the differential operator as:

D y_i = (1/h)(∇y_i + ∇²y_i/2 + ∇³y_i/3 + …)        (2.9)

and D²y_i = (1/h²)(∇²y_i + ∇³y_i + (11/12)∇⁴y_i + …)        (2.10)

In terms of δ:

D y_i = (µ/h)(δy_i − δ³y_i/6 + δ⁵y_i/30 − …)        (2.11)

and D²y_i = (1/h²)(δ²y_i − δ⁴y_i/12 + δ⁶y_i/90 − …)        (2.12)

In the next example we illustrate the use of the relations among these operators.


Example 5: Consider the function y(t) = t³ + 2t² + 3t in the interval
[−2, 2] with step size h = 1.0. Find the approximate value of (i) dy/dt and
d²y/dt² at t = −1 using the forward differences, and (ii) dy/dt and d²y/dt²
at t = 2 using the backward differences.

Solution:

Step 1: Construct the data set for the given function:

t:   -2   -1   0   1   2
y:   -6   -2   0   6   22

Step 2: Construct the difference table. Note that for the given cubic, the
third derivative is constant and the fourth and higher derivatives are zero.
Similarly, the third difference will be constant and all higher order
differences will be zero. The expressions for dy/dt and d²y/dt² are as given
in equations (2.7), (2.8), (2.9) and (2.10).

Difference Table:

t    y    ∆          ∆²           ∆³          ∆⁴
-2   -6
-1   -2   ∆y₀ = 4
 0   0    ∆y₁ = 2    ∆²y₀ = -2
 1   6    ∆y₂ = 6    ∆²y₁ = 4     ∆³y₀ = 6
 2   22   ∆y₃ = 16   ∆²y₂ = 10    ∆³y₁ = 6    ∆⁴y₀ = 0

Note: ∆³y₀ = ∆³y₁ = 6 (constant), and ∆⁴y₀ = ∆⁵y₀ = … = 0.


Step 3: Calculations:

(i) dy/dt|_{t=−1} = dy/dt|_{t=t₁} = (1/h)(∆y₁ − ∆²y₁/2 + ∆³y₁/3)

= (1/1)(2 − (1/2)·4 + (1/3)·6) = 2.

d²y/dt²|_{t=t₁} = (1/h²)(∆²y₁ − ∆³y₁) = (1/1)(4 − 6) = −2.

Thus y′(−1) = 2 and y′′(−1) = −2.

These approximations coincide with the exact values.

(ii) From the above table, we know ∇y₄ = 16, ∇²y₄ = 10, ∇³y₄ = 6.

dy/dt|_{t=t₄} = (1/h)(∇y₄ + ∇²y₄/2 + ∇³y₄/3)

= (1/1)(16 + (1/2)·10 + (1/3)·6) = 23.


d²y/dt²|_{t=t₄} = (1/h²)(∇²y₄ + ∇³y₄)

= (1/1)(10 + 6) = 16.

These values also coincide with the exact values y′(2) = 23 and y′′(2) = 16.
Thus these differences give us a way of evaluating the derivative values from
a given data set.
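The whole of Example 5 can be reproduced in a few lines: build the difference table once, then evaluate the truncated series (2.7)–(2.10). This is our own sketch; the backward differences are read off the same table using ∇ᵏy₄ = ∆ᵏy₄₋ₖ.

```python
# Derivatives of y(t) = t^3 + 2t^2 + 3t on t = -2..2 (h = 1) from the
# difference table, via the truncated series (2.7)-(2.10).
def diff_table(y):
    cols = [list(y)]
    while len(cols[-1]) > 1:
        cols.append([b - a for a, b in zip(cols[-1], cols[-1][1:])])
    return cols

h = 1.0
c = diff_table([-6, -2, 0, 6, 22])    # c[k][i] holds Δ^k y_i

# (i) forward differences at t_1 = -1
dy  = (c[1][1] - c[2][1] / 2 + c[3][1] / 3) / h
d2y = (c[2][1] - c[3][1]) / h**2

# (ii) backward differences at t_4 = 2, using ∇^k y_4 = Δ^k y_{4-k}
dy_b  = (c[1][3] + c[2][2] / 2 + c[3][1] / 3) / h
d2y_b = (c[2][2] + c[3][1]) / h**2

print(dy, d2y, dy_b, d2y_b)    # 2.0 -2.0 23.0 16.0
```

Because the cubic's fourth and higher differences vanish, truncating the series after the third differences is exact here, which is why the output matches y′(−1), y″(−1), y′(2), y″(2).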

Exercise 4: Find dy/dt and d²y/dt² at  from the data set:

t:   0    1   2    3   4
y:   -1   2   -3   4   5

Exercise 5: Find d²y/dt² at  from the given data:

t:   -1   0   1   2
y:   -1   1   3   5

Exercise 6: Write D³ in terms of (i) ∆ (ii) ∇ (iii) δ and µ.

Keyword: Forward difference operator

References

Jain, M. K., Iyengar, S. R. K. and Jain, R. K. (2008). Numerical Methods.
Fifth Edition, New Age International Publishers, New Delhi.


Atkinson, Kendall E. (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Singapore.

Suggested Reading

Scheid, Francis (1989). Numerical Analysis. Second Edition, McGraw-Hill, New
York.

Sastry, S. S. (2005). Introductory Methods of Numerical Analysis. Fourth
Edition, Prentice Hall of India, New Delhi.



Module 1: Numerical Analysis

Lesson 3

Sources of Error and Propagation of Error in the Difference Table

3.1 Introduction

In a numerical procedure, many types of errors can occur, among which the
prominent ones are (i) the truncation error and (ii) the round-off error. We
briefly introduce these two types of errors below. Other types of errors are
not discussed in this lesson.

We noted that when writing the Taylor series expansion of a function about a
point in its neighbourhood, the series is truncated after a finite number of
terms. This induces a certain amount of error in the solution, and this error
is called the truncation error. Thus the truncation error is the quantity T
(say) which must be added to the approximate solution in order to get the
exact solution.

For example, consider the function f(x) = (1 + x)^{1/2}, x ∈ [0, 0.1]. We now
try to obtain a second degree polynomial approximation to this function by
using the Taylor series expansion about the point x = 0.

We have f(0) = 1, and the first few higher order derivatives of the function
and their values at x = 0 are found as:

f′(x) = 1/(2(1 + x)^{1/2}),  f′(0) = 1/2;
f″(x) = −1/(4(1 + x)^{3/2}),  f″(0) = −1/4;
f‴(x) = 3/(8(1 + x)^{5/2}).

Now the second order approximation with the remainder term is written as:

(1 + x)^{1/2} = 1 + x/2 − x²/8 + (x³/16)·1/(1 + ξ)^{5/2}, for some 0 < ξ < 0.1.

Thus the truncation error after the 2nd degree approximation is

T = (x³/16)·1/(1 + ξ)^{5/2}.

If we compute the value of f(x) at x = 0.05 using this approximation, we get
the value f(0.05) ≈ 1.0246875. The exact value is 1.0246951 (to seven decimal
places).

The upper bound for the truncation error for x ∈ [0, 0.1] is given by

|T|_max ≤ max_{x ∈ [0, 0.1]} x³/(16(1 + x)^{5/2}) ≤ (0.1)³/16 = 0.0000625.

Thus the approximate value of f(x) at 0.05 is 1.0246875, with a maximum error
of 0.0000625. This error can be further lowered by considering a higher order
approximation for f(x).
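The numbers above are easy to verify. The sketch below (our own) evaluates the second degree approximation at x = 0.05 and checks that the actual error stays below the bound (0.1)³/16.

```python
import math

# Second order Taylor polynomial of f(x) = (1 + x)^(1/2) about x = 0.
def p2(x):
    return 1 + x / 2 - x**2 / 8

x = 0.05
approx = p2(x)              # 1.0246875
exact = math.sqrt(1 + x)    # 1.0246950...
err = abs(exact - approx)
print(approx, exact, err)
assert err <= 0.1**3 / 16   # within the truncation error bound 0.0000625
```

The actual error (about 7.6 × 10⁻⁶) is well inside the bound, since the bound is taken over the whole interval [0, 0.1] while x = 0.05 sits in its interior.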

There is another type of error, called the round-off error, which is due to
the finite precision of the computer while doing arithmetic operations among
numbers. The round-off error is the quantity, denoted by R, which must be
added to the finite representation of a computed number in order to make it
the true representation of that number. It arises from the rounding of
numbers after a certain decimal place during computation. Present day
computers use double precision, because of which the round-off error is
small.

In general, the error is defined as the difference between the true value and
the approximate value.


The relative error is defined as the error divided by the true value. One
also uses the percentage relative error, which is the relative error
multiplied by 100.

While dealing with finite difference operators, the solution is usually
associated with a certain amount of truncation error. This error can be
minimized by increasing the order of approximation in the Taylor series
expansion. The difference operators also magnify any error present in the
initial data set, which in turn affects the approximate solution. The
following example illustrates how a small error ∈ introduced in the initial
data grows with higher order differences.

Example 1: Tabulate the propagation of an initial error ∈ with higher order
forward difference operators for the data:

x:      -3   -2   -1   0   1   2   3
f(x):   0    0    0    ∈   0   0   0

Solution: The forward difference table for the above data set is:

x    f    ∆f    ∆²f    ∆³f    ∆⁴f    ∆⁵f     ∆⁶f
-3   0
-2   0    0
-1   0    0     0
 0   ∈    ∈     ∈      ∈
 1   0    -∈    -2∈    -3∈    -4∈
 2   0    0     ∈      3∈     6∈     10∈
 3   0    0     0      -∈     -4∈    -10∈    -20∈

From the above we note that:

i) The magnitude of the error increases with the order of the differences.

ii) The errors in any order difference are the binomial coefficients (times
∈) with alternating signs.

iii) The algebraic sum of the errors in any column is zero.
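The binomial pattern in the table can be generated for any number of columns. This sketch is our own illustration (the error size 10⁻³ is an arbitrary choice): it seeds a single error ∈ into otherwise zero data and prints the coefficient of ∈ in each difference column.

```python
eps = 1e-3                       # a single small error seeded at the centre
f = [0, 0, 0, eps, 0, 0, 0]      # exact data would be identically zero

col = list(f)
for k in range(1, 7):
    col = [b - a for a, b in zip(col, col[1:])]
    coeffs = [round(v / eps) for v in col]     # multiples of ∈ per entry
    print(f"order {k}:", coeffs)
# The non-zero coefficients are binomial coefficients with alternating
# signs, e.g. order 4 contains -4, 6, -4 and order 6 collapses to -20.
```

The column sums are zero at every order, matching observation (iii) above.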

Exercises:

1. Find the approximate value of f(2) for f(x) = ln(x), x ∈ [1, 3], by
considering a 3rd order Taylor series approximation. Obtain the local
truncation error and the percentage relative error in this approximation.

2. If the number π = 4 tan⁻¹(1) is approximated using 5 decimal digits, find
the percentage relative error due to rounding.

3. Tabulate the propagation of an initial error ∈ in the following data set
with increasing order differences:

x:      0    1   2   3   4
f(x):   -1   1   ∈   2   2


References

Jain, M. K., Iyengar, S. R. K. and Jain, R. K. (2008). Numerical Methods.
Fifth Edition, New Age International Publishers, New Delhi.

Atkinson, Kendall E. (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Singapore.

Suggested Reading

Scheid, Francis (1989). Numerical Analysis. Second Edition, McGraw-Hill, New
York.

Sastry, S. S. (2005). Introductory Methods of Numerical Analysis. Fourth
Edition, Prentice Hall of India, New Delhi.



Module 1: Numerical Analysis

Lesson 4

Solutions of Non-Linear Equation

4.1 Introduction

In this lesson, we learn to find the roots of a given non-linear equation
involving polynomial and transcendental functions.

For example, (i) x³ − 4x² − 1/x = 0 is a polynomial equation, while
(ii) sin x − x cos(1/x) = 0 is a transcendental equation.

Definition: A number ξ is said to be a root of (or solution of) the equation
f(x) = 0 if f(ξ) ≡ 0. It is also called a zero of f(x). A root of f(x) = 0 is
a value of x at which the graph of y = f(x) intersects the x-axis.

[Figure: graph of y = f(x) crossing the x-axis at x = ξ.]

If the equation f(x) = 0 can be expressed as f(x) = (x − ξ)ⁿ g(x) = 0 for
some g(x) such that g(ξ) ≠ 0 and g(x) is bounded for all x in the domain of
definition of the function f(x), then x = ξ is called a multiple root of
f(x) = 0 with multiplicity n. If n = 1, it is called a simple root.

The root of f(x) = 0 is found either by using a direct method, which gives
the exact value of x = ξ, or by writing f(x) = 0 as an iterative procedure
such as x_{n+1} = g(x_n), n = 0, 1, 2, …. Roots of cubic or higher degree
polynomial equations and of transcendental equations cannot in general be
found using direct methods, whereas the iterative methods are quite handy,
though they provide only an approximate value of the root of f(x) = 0.

4.2 Iterative Methods

These methods are based on the idea of successive approximations.

Consider for example the equation f(x) = x² + 2x − 4 = 0. This can be written
as

(i) x = 4/(x + 2), or (ii) x = (4 − x²)/2, or (iii) x = √(4 − 2x).

Here we expressed the given equation in three different forms as x = ϕ_i(x),
i = 1, 2, 3, where ϕ₁(x) = 4/(x + 2), ϕ₂(x) = (4 − x²)/2 and
ϕ₃(x) = √(4 − 2x).

Thus any given f(x) = 0 is written as x = ϕ(x); the root of f(x) = 0 is then
the point of intersection of the curve y = ϕ(x) with the line y = x.

The function ϕ(x) is called the iteration function. One writes the iterative
method as x_{n+1} = ϕ(x_n) and generates a sequence of iterates {x_k} by
starting with a suitable initial approximation x₀. That is, start with x₀ and
generate x₁ = ϕ(x₀); using x₁, generate x₂; repeat this finitely many times
until the condition |x_{k+1} − x_k| < ∈ is satisfied, where x_k, x_{k+1} are
two consecutive iterates and ∈ is the pre-assigned error tolerance. Note that
with different initial approximations and different iteration functions, the
sequences of approximations generated will be different; whenever such a
sequence converges, its limit is a root of the equation f(x) = 0.

Definition: A sequence of iterates {x_k} is said to converge to the root
x = ξ if lim_{k→∞} x_k = ξ. The convergence of an iteration method depends on
the suitable choice of the function ϕ(x) and also on the initial
approximation x₀ to the root. Below we state a sufficient condition on ϕ(x)
for the convergence of {x_k} to the root ξ.

Result: If ϕ(x) is a continuous function on some closed interval I ⊂ ℝ that
contains the root ξ of x = ϕ(x), and |ϕ′(x)| ≤ a < 1 for all x ∈ I, then for
any choice of x₀ ∈ I, the sequence {x_k} generated from x_{k+1} = ϕ(x_k),
k = 0, 1, 2, …, converges to the root ξ of x = ϕ(x).

Example: Find the root of f(x) = cos x + 3 − 2x = 0 correct to two decimal
places.

Solution:

Given 2x = cos x + 3, or x = (cos x + 3)/2.

Choose ϕ(x) = (cos x + 3)/2

and write x_{n+1} = (cos x_n + 3)/2.


Since |ϕ′(x)| = |sin x|/2 < 1,

we start with x₀ = π/2, and generate the sequence of iterates as

x₁ = (cos(π/2) + 3)/2 = 1.5

x₂ = (cos(1.5) + 3)/2 = 1.535

x₃ = (cos(1.535) + 3)/2 = 1.518

x₄ = (cos(1.518) + 3)/2 = 1.526

x₅ = (cos(1.526) + 3)/2 = 1.522

The root of the above equation correct to two decimal places is taken as
1.52, since |x₅ − x₄| = |1.522 − 1.526| = 0.004. We can improve the accuracy
of this approximation by considering more iterations.
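The iteration is a few lines of code. This sketch (our own, not part of the original lesson) reproduces the sequence above, starting from x₀ = π/2 and stopping when two consecutive iterates differ by less than a small tolerance.

```python
import math

phi = lambda x: (math.cos(x) + 3) / 2    # iteration function for cos x + 3 - 2x = 0

x = math.pi / 2                          # initial approximation x_0
for n in range(50):                      # cap the number of iterations
    x_new = phi(x)
    if abs(x_new - x) < 0.005:           # pre-assigned tolerance ∈
        break
    x = x_new
print(round(x_new, 2))                   # 1.52, the root to two decimals
```

Tightening the tolerance (say to 10⁻⁸) drives the iterates to the fixed point near 1.5236; the convergence is guaranteed here because |ϕ′(x)| = |sin x|/2 ≤ 1/2 < 1.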

4.2.1 How to Make a Good Guess

Convergence of the successive iterates {x_k} obtained from a given iterative
method x_{k+1} = ϕ(x_k) depends on a good guess for the initial approximation
x₀ to the root x = ξ. The initial approximation is usually obtained from the
physical considerations of the problem, or from a graphical representation of
the given function f(x). For the solution of an algebraic equation f(x) = 0,
an elegant way to choose the initial approximation is to use the intermediate
value theorem, which is stated below:


Intermediate Value Theorem:

If f(x) is continuous on some interval I = [a, b] ⊂ ℝ and the product
f(a)·f(b) < 0, then the equation f(x) = 0 has at least one real root, or an
odd number of real roots, in the interval (a, b).

Now it amounts to choosing two real values a and b such that f(a)·f(b) < 0,
and then choosing the initial approximation between a and b.

Example 1: Find an interval in which the root of x³ + 4x − 5 = 0 lies.

Solution:

Given f(x) = x³ + 4x − 5.

Take x = 0; then f(0) = −5.

Again, when x = 2, f(2) = 11.

Now f(0) and f(2) have opposite signs, hence f(0)·f(2) < 0.

The intermediate value theorem guarantees that a root of x³ + 4x − 5 = 0 will
lie in the interval (0, 2).

[Exercise: Plot this function – it cuts the x-axis at x = 1.]

Example 2: Find an interval which contains the smallest positive root of
xe^x − cos x = 0.

Solution:

Take x = 0: f(0) = 0·e⁰ − cos 0 = −1; and x = 1: f(1) = 2.718 − 0.540 = 2.178.

Since f(0)·f(1) < 0, the root is in (0, 1).

4.3 Bisection Method

This method is based on the repeated application of the intermediate value
theorem. Let the root of the equation f(x) = 0 lie in the interval
I₀ = (a₀, b₀). Bisect the interval I₀ and let m₁ be the midpoint of (a₀, b₀),
i.e., m₁ = (a₀ + b₀)/2.

Check the following conditions:

i) if f(a₀)·f(m₁) < 0, then the root lies between a₀ and m₁;

ii) if f(m₁)·f(b₀) < 0, then the root lies between m₁ and b₀.

Accordingly take I₁ as either (a₀, m₁) or (m₁, b₀), whichever contains the
root. Bisect I₁ and let m₂ be the midpoint of I₁. The above conditions are
checked for these bisected intervals to get the new subinterval that contains
the root.

Keep bisecting this way until a sufficiently small subinterval containing the
root is obtained. The midpoint of that smallest interval is then taken as the
approximation for the root of f(x) = 0.

The following example illustrates the Bisection method:

Example 3: Find a positive root of the equation f(x) = xe^x − 1 which lies in
(0, 1).

Solution:

Given a₀ = 0, b₀ = 1,

f(a₀) = −1, f(b₀) = 1.718, so f(0)·f(1) < 0.

Step 1: Bisecting (0, 1) gives (0, 0.5) and (0.5, 1) as two subintervals.
f(0.5) < 0 and f(0) < 0, so the interval (0, 0.5) is discarded as
f(0)·f(0.5) > 0. Denote I₁ = (0.5, 1).

Step 2: Bisecting I1 = (0.5,1) gives (0.5, 0.75) and (0.75,1) as the new
subintervals. f (0.75) > 0 , f (0.5) < 0 ⇒ f ( 0.5 ) f ( 0.75 ) < 0 ,

also f (0.75) > 0 , f (1.0) > 0 ⇒ f ( 0.75 ) f (1) > 0 .

Discard the interval (0.75,1.0) and denote I 2 = (0.5, 0.75) .

Step 3: Bisecting I2 gives (0.5, 0.625) and (0.625, 0.75) . Note


f (0.5). f (0.625) < 0 and f (0.625). f (0.75) > 0 so discard the interval (0.625, 0.75) and

take I 3 = (0.5, 0.625) .

Step 4: Bisecting I3 , we get (0.5, 0.5625) and (0.5625, 0.625) . We see f(0.5625) f(0.625) < 0 , so take I4 = (0.5625, 0.625) . Keep repeating this process for more steps to get a better approximation to the root. After step 4, the root lies in I4 and we take the approximate value of the root as the midpoint of this interval I4 , i.e., ξ = 0.59375 .

This is a crude approximation to the root.
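The steps above can be sketched in Python (an illustrative addition, not part of the original text; the function name and the tolerance are our own choices):

```python
import math

def bisect(f, a, b, tol=1e-6):
    """Bisection: halve the bracketing interval until it is shorter than tol."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) > tol:
        m = 0.5 * (a + b)          # midpoint of the current interval
        if f(a) * f(m) < 0:        # condition (i): root lies in (a, m)
            b = m
        else:                      # condition (ii): root lies in (m, b)
            a = m
    return 0.5 * (a + b)           # midpoint of the smallest interval

root = bisect(lambda x: x * math.exp(x) - 1.0, 0.0, 1.0)
```

Four bisections of (0, 1) reproduce the interval I4 = (0.5625, 0.625) of Example 3; continuing down to the tolerance refines the root to about 0.567143 .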

Note: Both the iteration method and the Bisection method take many steps to give a solution with reasonable accuracy.


Exercises:

1. Use bisection method to obtain a real root for the following equations correct
to three decimal places:

i) x^3 + x^2 − 1 = 0

ii) e^(−x) = 10 x

iii) 4x − 1 = 4 sin x

iv) x + log x = 2 .

2. By constructing a proper iterative function for the given equation, find an


approximation to its root.

i) sin x = 1 − x

ii) e^x = cot x

iii) x^2 − x^3 = −1

iv) sin x = x/2

v) x^4 + x^2 − 80 = 0

vi) sin 2x = x^2 − 1 .

Keywords: Polynomial and Transcendental Functions, Direct Method, Iterative


Methods, Iterative Procedure

References

Jain, M. K., Iyengar, S.R.K., Jain, R.K. (2008). Numerical Methods. Fifth


Edition, New Age International Publishers, New Delhi.

Atkinson. E Kendall, (2004). Numerical Analysis. Second Edition, John Wiley


& Sons, Publishers, Singapore.


Suggested Reading

Scheid, Francis. (1989). Numerical Analysis. Second Edition, McGraw-Hill


Publishers, New York.

Sastry, S.S. (2005). Introductory Methods of Numerical Analysis. Fourth

Edition, Prentice Hall of India Publishers, New Delhi.



Module 1: Numerical Analysis

Lesson 5

Secant and Regula-Falsi Methods

5.1 Introduction

A root of the algebraic equation f ( x) = 0 can be obtained by using the iteration


methods based on the first degree equation. We approximate the given f ( x) by a
first degree equation in the neighborhood of the root of f ( x) = 0 .

We may write the first degree equation as:

f(x) = a1 x + a0 (5.1)

with a1 ≠ 0 , where a0 and a1 are arbitrary parameters to be determined by posing two


conditions on f ( x) or its derivative at two different x locations on the curve.

5.2 Secant Method

Take two approximations xk −1 and xk to the root ξ of f ( x) = 0 . We determine the

constants a0 and a1 in equation (5.1) by using the conditions

f_{k−1} = a1 x_{k−1} + a0 (5.2)

and

f_k = a1 x_k + a0 (5.3)


where f(x_{k−1}) = f_{k−1} and f(x_k) = f_k .

Solving equations (5.2) and (5.3), we get,

a0 = ( x_k f_{k−1} − x_{k−1} f_k ) / ( x_k − x_{k−1} ) (5.4)

and a1 = ( f_k − f_{k−1} ) / ( x_k − x_{k−1} ) (5.5)

The solution of (5.1) is

x = − a0 / a1 (5.6)

Now the new approximation xk +1 based on two initial approximations xk −1 and xk is


obtained from equation (5.6) as:

x_{k+1} = ( x_k f_{k−1} − x_{k−1} f_k ) / ( f_k − f_{k−1} )

This may be rewritten as:

x_{k+1} = x_k − [ ( x_k − x_{k−1} ) / ( f_k − f_{k−1} ) ] f_k , k = 1, 2, ... (5.7)


Here the iteration function ϕ( x_k , x_{k−1} ) = x_k − [ ( x_k − x_{k−1} ) / ( f_k − f_{k−1} ) ] f_k generates a sequence of iterates { x_{k+1} } , k = 1, 2, ... .

5.2.1 Algorithm

1. Start with ( x0 , f 0 ) and ( x1 , f1 ) .

2. Generate x2 using (5.7) and compute f2 = f(x2) .

3. Use ( x1 , f1 ) and ( x2 , f 2 ) to compute x3 .

4. Repeat steps (2) & (3) until the difference between two successive
approximations xk +1 and xk is less than the error tolerance.

The method fails if at any stage of computation f(x_k) = f(x_{k−1}) (see equation (5.7)).
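As an added illustration (not from the text), equation (5.7) and the algorithm above can be coded directly; the stopping rule follows step 4:

```python
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Iterate x_{k+1} = x_k - (x_k - x_{k-1})/(f_k - f_{k-1}) * f_k."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:                               # failure case noted above
            raise ZeroDivisionError("f(x_k) equals f(x_{k-1})")
        x2 = x1 - (x1 - x0) / (f1 - f0) * f1       # equation (5.7)
        if abs(x2 - x1) < tol:                     # error tolerance reached
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

# root of x^3 - 2x - 5 = 0 from the two starting points 2 and 3
root = secant(lambda x: x**3 - 2.0 * x - 5.0, 2.0, 3.0)
```

Starting from 2 and 3, the iterates settle near 2.0945, in line with the Regula-Falsi example that follows.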


Fig. 5.1. Secant Method – a Graphical Representation.

5.3 Regula-Falsi Method

The initial approximations x_{k−1} and x_k taken in the secant method are arbitrary. If we impose on these initial approximations the condition f(x_k) · f(x_{k−1}) < 0 , the method given by equation (5.7) is called the Regula-Falsi method. The root of f(x) = 0 then lies between the two values x_{k−1} and x_k .

Now draw a chord joining the points ( xk −1 , f k −1 ) and ( xk , f k ) . This chord intersects

the x − axis, say at x = xk +1 . Now look for f ( xk +1 ). f ( xk ) < 0 or f ( xk +1 ). f ( xk −1 ) < 0 .

One of these two will hold. Without loss of generality, say f ( xk +1 ). f ( xk ) < 0 , then

join the points ( x_k , f_k ) and ( x_{k+1} , f_{k+1} ) by a chord; this chord intersects the x-axis, say at x = x_{k+2} . The above procedure is repeated till we reach the root of f(x) = 0 . In


this procedure we are sure of convergence of the sequence of iterates generated
using equation (5.7).

Fig. 5.2. Regula-Falsi Method – a Graphical Representation.

Example 1: Find the real root of x^3 − 2x − 5 = 0 using the Regula-Falsi method.

Solution:

Given f ( x) = x3 − 2 x − 5 . We compute that f (2) =−1 < 0 and f (3)= 16 > 0 .

Take x0 = 2 , f0 = −1 , x1 = 3 , f1 = 16 .


Using the method given by (5.7),

x2 = 3 − [ (3 − 2) / (16 − (−1)) ] × 16 = 2.0588 , f(x2) = −0.3908 < 0 .

The root lies in the interval (2.0588, 3) . Join the points (2.0588, −0.3908) and (3, 16) by a chord, which cuts the x-axis at x3 = 2.0813 , with f(x3) = −0.1472 < 0 .

Proceeding in this way, we obtain

x4 = 2.08964 , x5 = 2.09274 , x6 = 2.09388 , x7 = 2.0943 etc.

The approximate value for the root is taken as x = 2.0943 .

Now we illustrate this method to find a root of the transcendental equation.

Example 2: Find the root of cos x − x e^x = 0 using the Regula-Falsi method correct to four decimal places.

Solution:

Let f(x) = cos x − x e^x , f(0) = 1 , f(1) = −2.17798 .

Clearly f (0) . f (1) < 0 .

We obtain x2 = x0 − [ ( x1 − x0 ) / ( f1 − f0 ) ] f0 = 0.31467 and f(0.31467) = 0.51987 > 0 .


The root lies between 0.31467 and 1. Repeating the procedure we obtain the
approximations to the roots as

x3 = 0.44673 ,

x4 = 0.49402 ,

x5 = 0.50995

x6 = 0.51520 , x7 = 0.51692

x8 = 0.51748 , x9 = 0.51767 , x10 = 0.51775 , ...

The root correct to four decimal places is ξ = 0.5177 .
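The procedure of this example can be sketched in code (our own illustration; 25 chord steps are more than enough here):

```python
import math

def regula_falsi(f, a, b, n_iter=25):
    """Keep a sign-changing bracket; at each step replace one endpoint by
    the point where the chord through (a, f(a)) and (b, f(b)) cuts the x-axis."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("initial points must bracket a root")
    x = a
    for _ in range(n_iter):
        x = (a * fb - b * fa) / (fb - fa)   # chord's x-axis intercept
        fx = f(x)
        if fa * fx < 0:                     # root in (a, x)
            b, fb = x, fx
        else:                               # root in (x, b)
            a, fa = x, fx
    return x

root = regula_falsi(lambda x: math.cos(x) - x * math.exp(x), 0.0, 1.0)
```

The iterates reproduce the sequence above (0.31467, 0.44673, ...) and settle on ξ ≈ 0.5177 .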

Exercises

1. Find a real root of the following equations using the Regula-falsi method:

a) x log10 x = 1.2

b) x^4 − 32 = 0

c) x3 − 2 x − 5 =0

d) sin x = 1/x

2. Solve the above problems using the Secant method.

Keywords: Secant Method, Regula-Falsi Method


References

Jain, M. K., Iyengar, S.R.K., Jain, R.K. (2008). Numerical Methods. Fifth Edition,


New Age International Publishers, New Delhi.

Atkinson. E Kendall, (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.

Suggested Reading

Scheid, Francis. (1989). Numerical Analysis. Second Edition, McGraw-Hill


Publishers, New York.

Sastry, S.S. (2005). Introductory Methods of Numerical Analysis. Fourth

Edition, Prentice Hall of India Publishers, New Delhi.

Module 1: Numerical Analysis

Lesson 6

Newton-Raphson Method

6.1 Introduction

Both the Bisection and Regula-Falsi methods give a rough estimate for the root of the equation f(x) = 0 , but these methods take many iterations to get a reasonably accurate approximation to the root. An elegant method of obtaining the root of f(x) = 0 , known as the Newton-Raphson method, is discussed below.

Let x0 be an approximate root of the equation f(x) = 0 . Let x1 be a neighbouring point of x0 such that, for some small h , x1 = x0 + h . Also let x1 be the exact root of f(x) = 0 .

Then f ( x1 ) = 0 .

Expanding f(x0 + h) in a Taylor series about x0 , we get f(x1) = f(x0 + h) = f(x0) + h f′(x0) + ... = 0 .

Since h is very small, we neglect h^2 and higher powers of h in the above Taylor series expansion.

Thus we have f(x0) + h f′(x0) = 0 , giving h = − f(x0)/f′(x0) .

Then we have x1 = x0 + h = x0 − f(x0)/f′(x0) .


Thus a closer approximation to the root of the given f(x) = 0 is given by

x1 = x0 − f(x0)/f′(x0) (6.1)

A better approximation x2 can be obtained by

x2 = x1 − f(x1)/f′(x1) (6.2)

We generalize this procedure and write a general iteration method as

x_{n+1} = x_n − f(x_n)/f′(x_n) , n = 0, 1, 2, ... (6.3)

This is called the Newton-Raphson iteration method. Comparing this method


with the general iteration method

xn +1 = ϕ ( xn ) (6.4)

the iteration function is given as ϕ(x) = x − f(x)/f′(x) .

For convergence of this iteration method the sufficient condition is | ϕ′(x) | < 1 . Since

ϕ′(x) = 1 − ( [f′(x)]^2 − f(x) f″(x) ) / [f′(x)]^2 = f(x) f″(x) / [f′(x)]^2 ,

the condition becomes | f(x) · f″(x) | < [f′(x)]^2 .

If we choose the initial approximation x0 in an interval I such that for all x ∈ I ,


| f(x) · f″(x) | < [f′(x)]^2 is satisfied, then the sequence of iterates generated by the iteration method (6.3) will converge to the root of f(x) = 0 .

Example 1: Using the Newton-Raphson method, find a real root of f(x) = x^4 − 11x + 8 = 0 .

Solution:

Given f(x) = x^4 − 11x + 8 = 0 , so f′(x) = 4x^3 − 11 .

Choose x0 = 2 . This initial approximation can be obtained by using the


intermediate value theorem.
x1 = x0 − f(x0)/f′(x0) = 2 − 2/21 = 1.90476 ,

x2 = x1 − f(x1)/f′(x1) = 1.90476 − [ (1.90476)^4 − 11(1.90476) + 8 ] / [ 4(1.90476)^3 − 11 ] = 1.89209 ,

x3 = x2 − f(x2)/f′(x2) = 1.89188 ,

x4 = x3 − f(x3)/f′(x3) = 1.89188 .

We now accept the numerical approximation to the root as ξ = 1.89188 , correct


to 5 decimal places.

Note that this method requires evaluation of f′(x) at every stage. Also, if f′(x) is very large, then h ( = − f(x)/f′(x) ) will be very small. Thus x0 is close to x1 (which is

assumed to be the root of f(x) = 0 in the derivation of the method) and makes the convergence of the successive iterations to the root faster.

Example 2: Evaluate √29 to four decimal places using the Newton-Raphson method.

Solution:

Let x = √29 . Then x^2 − 29 = 0 .

Consider f(x) = x^2 − 29 ; then f′(x) = 2x and

x_{n+1} = x_n − ( x_n^2 − 29 ) / ( 2 x_n ) , n = 0, 1, 2, ... .

Take x0 = 5.3 ; we get x1 = 5.3858 , x2 = 5.3851 and x3 = 5.3851 .
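The iteration of this example can be sketched directly (the helper name and tolerance are ours):

```python
def newton_sqrt(a, x0, tol=1e-10):
    """Newton-Raphson for f(x) = x^2 - a:  x_{n+1} = x_n - (x_n^2 - a)/(2 x_n)."""
    x = x0
    while abs(x * x - a) > tol:
        x = x - (x * x - a) / (2.0 * x)   # one Newton-Raphson step
    return x

root = newton_sqrt(29.0, 5.3)   # starting value as in the example
```

Only a few steps are needed: the iterates reach 5.3851 almost immediately.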

Fig. 6.1. Newton-Raphson Method – a schematic representation.


Example 3: Find a root of f(x) = 3x − cos x − 1 = 0 lying between 0 and 1 .

Solution: Given f(x) = 3x − cos x − 1 = 0 , f′(x) = 3 + sin x . Choose x0 = 0.6 .

x_{n+1} = x_n − ( 3 x_n − cos x_n − 1 ) / ( 3 + sin x_n )

or x_{n+1} = ( x_n sin x_n + cos x_n + 1 ) / ( 3 + sin x_n )

This gives x1 = 0.6071 , x2 = 0.6071 . Thus the approximation to the root is 0.6071 .


Definition: An iterative method x_{n+1} = ϕ(x_n) is said to be of order p , or to have the rate of convergence p , if p is the largest positive real number for which there exists a finite constant M such that | ∈_{k+1} | ≤ M | ∈_k |^p , where ∈_k = x_k − ξ is the error in the k-th iterate. The constant M is called the asymptotic error constant.

Example 4: Determine the order of the Newton-Raphson method.

Solution:

The Newton-Raphson method is x_{k+1} = x_k − f(x_k)/f′(x_k) .

Take ∈_k = x_k − ξ , i.e., x_k = ξ + ∈_k , and expanding f(ξ + ∈_k) and f′(ξ + ∈_k) in Taylor series about the root ξ , we obtain

∈_{k+1} = ∈_k − [ ∈_k f′(ξ) + (1/2) ∈_k^2 f″(ξ) + ... ] / [ f′(ξ) + ∈_k f″(ξ) + ... ]


= ∈_k − [ ∈_k + (1/2) ( f″(ξ)/f′(ξ) ) ∈_k^2 + ... ] [ 1 + ∈_k f″(ξ)/f′(ξ) + ... ]^(−1)

or ∈_{k+1} = (1/2) ( f″(ξ)/f′(ξ) ) ∈_k^2 + O(∈_k^3) .

On neglecting ∈_k^3 and higher powers of ∈_k , we get ∈_{k+1} = C ∈_k^2 , where C = (1/2) f″(ξ)/f′(ξ) .

Thus the Newton-Raphson method is a second order method.
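The second-order behaviour can also be checked numerically. Taking f(x) = x^2 − 29 (our own choice of test function), C = (1/2) f″(ξ)/f′(ξ) = 1/(2ξ) , and the ratios ∈_{k+1}/∈_k^2 should approach this constant:

```python
xi = 29.0 ** 0.5                        # exact root of x^2 - 29
x, errors = 5.0, []
for _ in range(4):
    errors.append(x - xi)               # error in the current iterate
    x = x - (x * x - 29.0) / (2.0 * x)  # one Newton-Raphson step
ratios = [errors[k + 1] / errors[k] ** 2 for k in range(3)]
C = 1.0 / (2.0 * xi)                    # asymptotic error constant
```

For this particular f the ratio at step k works out to 1/(2 x_k) , so the sequence tends to C ≈ 0.0928 , confirming quadratic convergence.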

Exercises:

Solve the following equations using Newton-Raphson method:

i) x − e^(−x) = 0

ii) cos x − x e^(−x) = 0

iii) x^3 − 5x + 1 = 0

iv) sin x + (cos x)/x = 0

v) x^3 − 2x − 5 = 0

Keywords: Sufficient Condition, Newton-Raphson Iteration Method

References

Jain, M. K., Iyengar, S.R.K., Jain. R.K. (2008). Numerical Methods. Fifth
Edition, New Age International Publishers, New Delhi.

Atkinson, E Kendall. (2004). Numerical Analysis. Second Edition, John Wiley


& Sons, Publishers, Singapore.

Suggested Reading


Scheid, Francis. (1989). Numerical Analysis. Second Edition, McGraw-Hill


Publishers, New York.

Sastry, S.S. (2005). Introductory Methods of Numerical Analysis. Fourth


Edition, Prentice Hall of India Publishers, New Delhi.

Module 1: Numerical Analysis

Lesson 7

Linear System of Algebraic Equations – Jacobi Method

7.1 Introduction

In this lesson we discuss an iterative method which gives the solution of a system of linear algebraic equations.

Consider a system of algebraic equations as:

a11 x1 + a12 x2 + ... + a1n xn = b1

a21 x1 + a22 x2 + ... + a2n xn = b2

... ... ... ...

an1 x1 + an2 x2 + ... + ann xn = bn (7.1)

This is a linear system of n algebraic equations in n unknowns. (In general, the number of equations need not equal the number of unknowns.) In (7.1), aij , i, j = 1, 2, ..., n are the given coefficients, bi , i = 1, 2, ..., n are the known numbers and xi , i = 1, 2, ..., n are the unknowns to be determined.

The system given in (7.1) can also be written in the matrix vector form as:

Ax = b (7.2)

where A is the n × n matrix with entries aij , b = [ b1 , b2 , ..., bn ]^T and x = [ x1 , x2 , ..., xn ]^T . The matrix A

is called the coefficient matrix.

The solution of this system can be obtained using direct methods such as Gauss
Elimination and also by finding the inverse of the matrix directly. When the
size of the matrix A is very large, applicability of these direct methods is
limited as finding the inverse of a large size matrix is not so easy. Also, Gauss
elimination demands the diagonal dominant structure for A . These difficulties
are overcome in the other class of methods called the iterative methods. In what
follows, we discuss two useful iterative methods namely the Jacobi and Gauss-
Seidel methods.

The iterative methods are based on the idea of successive approximations.


Initially, the system of equations is written as:

X_{n+1} = H X_n + C , n = 0, 1, 2, ... (7.3)

where H is the iteration matrix, which depends on the matrix A , and C is a column vector which depends on both A and b . We start with an initial approximation x0 to the solution vector x and obtain a sequence of approximations x1 , x2 , ..., xn , ... ; this sequence, in the limit as n → ∞ , converges to the exact solution vector x . We stop the iteration process when the difference between two successive iterates x_{n+1} and x_n is smaller than a pre-assigned error tolerance ∈ , i.e., | x_{n+1} − x_n | ≤ ∈ for all elements of x .

The procedure of obtaining the iterative matrix H is given below.


Let the diagonal elements aii , i = 1, 2, ..., n in the linear system (7.1) be non-zero. We now rewrite the system (7.1) as:

x1 = b1/a11 − (a12/a11) x2 − (a13/a11) x3 − ... − (a1n/a11) xn

x2 = b2/a22 − (a21/a22) x1 − (a23/a22) x3 − ... − (a2n/a22) xn

... ... ... ...

xn = bn/ann − (an1/ann) x1 − (an2/ann) x2 − ... − (a_{n,n−1}/ann) x_{n−1} (7.4)

The system of equations (7.4) is meaningful only if all the diagonal elements aii are non-zero. If some of the diagonal elements in the system of equations (7.1) are zero, then the equations should be rearranged so that this condition is satisfied. We now form an iterative mechanism for equation (7.4) by writing

x1^{(n+1)} = b1/a11 − (a12/a11) x2^{(n)} − (a13/a11) x3^{(n)} − ... − (a1n/a11) xn^{(n)}

x2^{(n+1)} = b2/a22 − (a21/a22) x1^{(n)} − (a23/a22) x3^{(n)} − ... − (a2n/a22) xn^{(n)}

... ... ... ...

xn^{(n+1)} = bn/ann − (an1/ann) x1^{(n)} − (an2/ann) x2^{(n)} − ... − (a_{n,n−1}/ann) x_{n−1}^{(n)} . (7.5)

Choose the set of initial approximations as: x (0) = ( x1(0) , x2(0) ,..., xn(0) ) and generate a

sequence of iterates x (1) , x (2) ,..., x ( k ) , x ( k +1) ,..., until the convergence condition
x ( k +1) − x ( k ) ≤∈ is satisfied.

Equation (7.5) is written in the matrix-vector form as X^{(n+1)} = H X^{(n)} + C


where

H = | 0          −a12/a11   ...  −a1n/a11 |
    | −a21/a22   0          ...  −a2n/a22 |
    | ...        ...        ...  ...      |
    | −an1/ann   −an2/ann   ...  0        | ,

C = [ b1/a11 , b2/a22 , ..., bn/ann ]^T .

In matrix vector form, the Jacobi method is derived as follows:

Given Ax = b , decompose A as the sum of a lower triangular matrix, a diagonal matrix and an upper triangular matrix. This decomposition is always possible:

i.e., A = L + D + U

Ax = ( L + D + U ) x = b

or Dx =−( L + U ) x + b

or x = − D^{−1}( L + U ) x + D^{−1} b

or x^{(k+1)} = − D^{−1}( L + U ) x^{(k)} + D^{−1} b

or x^{(k+1)} = H x^{(k)} + C

where the iteration matrix H in the Jacobi method is − D^{−1}( L + U ) and C is D^{−1} b .

Example 1: Solve the following system of equations using Jacobi method

4 x1 + x2 + x3 = 2

x1 + 5 x2 + 2 x3 = −6

x1 + 2 x2 + 3 x3 = −4


by taking the initial approximation as x (0) = [0.5, −0.5, −0.5]T .

Solution:

4 1 1
Given A = 1 5 2  .
1 2 3 

Note that aii ≠ 0 for .

First write as , where

4 0 0 0 0 0 0 1 1 
D =  0 5 0  , L = 1 0 0  and U 0 0 2 
     
 0 0 3 1 2 0  0 0 0 

The iteration matrix is

H = − D^{−1}( L + U )

  = − | 1/4 0   0   |   | 0 1 1 |
      | 0   1/5 0   | · | 1 0 2 |
      | 0   0   1/3 |   | 1 2 0 |

  = | 0     −1/4  −1/4 |
    | −1/5  0     −2/5 |
    | −1/3  −2/3  0    |

and the column vector

C = D^{−1} b = | 1/4 0   0   |   | 2  |   | 1/2  |
               | 0   1/5 0   | · | −6 | = | −6/5 | .
               | 0   0   1/3 |   | −4 |   | −4/3 |

We now write the Jacobi iterative method as:

x^{(n+1)} = H x^{(n)} + C , n = 0, 1, 2, ... ...... (i)

i.e.,

x1^{(n+1)} = 1/2 − (1/4) x2^{(n)} − (1/4) x3^{(n)}

x2^{(n+1)} = −6/5 − (1/5) x1^{(n)} − (2/5) x3^{(n)}

x3^{(n+1)} = −4/3 − (1/3) x1^{(n)} − (2/3) x2^{(n)}

Starting with the given initial approximation x^{(0)} = [ 0.5 , −0.5 , −0.5 ]^T , we generate from (i)

x^{(1)} = [ 0.75 , −1.1 , −1.1667 ]^T .

Using this, we generate the next approximation as x^{(2)} = [ 1.0667 , −0.8833 , −0.85 ]^T , and using this, x^{(3)} = [ 0.9933 , −1.0733 , −1.1 ]^T , ...
 


The exact solution of this system is x1 = 1 , x2 = −1 , x3 = −1 .

Example 2: Solve the following system of linear algebraic equations using the Jacobi method by writing the iterative system directly:

20x + y − 2z = 17 , 3x + 20y − z = −18 , 2x − 3y + 20z = 25 .

Solution:

We can directly write the iterative system as

x^{(n+1)} = (1/20) ( 17 − y^{(n)} + 2 z^{(n)} )

y^{(n+1)} = (1/20) ( −18 − 3 x^{(n)} + z^{(n)} )

z^{(n+1)} = (1/20) ( 25 − 2 x^{(n)} + 3 y^{(n)} )

Start with x^{(0)} = y^{(0)} = z^{(0)} = 0 .

x^{(1)} = 0.85 , y^{(1)} = −0.9 , z^{(1)} = 1.25 .

Using this, we generate

x^{(2)} = 1.02 , y^{(2)} = −0.965 , z^{(2)} = 1.03 .

In the same way, we generate a few more successive approximations as

x^{(3)} = 1.0013 , y^{(3)} = −1.0015 , z^{(3)} = 1.0033

x^{(4)} = 1.0004 , y^{(4)} = −1.0000 , z^{(4)} = 0.9997

x^{(5)} = 1.0000 , y^{(5)} = −1.0001 , z^{(5)} = 1.0000

x^{(6)} = 1.0 , y^{(6)} = −1.0 , z^{(6)} = 1.0 .

Thus the solution is x = 1 , y = −1 , z = 1 .

This is an alternative way of writing the iteration procedure used in the earlier
example.
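As a further illustration, the direct iteration just shown can be coded with simultaneous assignment, which keeps the updates Jacobi-style (all right-hand sides use only the previous iterate):

```python
# Direct Jacobi iteration for 20x + y - 2z = 17, 3x + 20y - z = -18,
# 2x - 3y + 20z = 25 (an illustrative sketch).
x = y = z = 0.0
for _ in range(25):
    # the tuple on the right is evaluated before assignment,
    # so all three expressions use the previous (x, y, z)
    x, y, z = ((17.0 - y + 2.0 * z) / 20.0,
               (-18.0 - 3.0 * x + z) / 20.0,
               (25.0 - 2.0 * x + 3.0 * y) / 20.0)
```

The first pass gives (0.85, −0.9, 1.25), and 25 passes agree with the solution (1, −1, 1) to better than 1e-12; the strong diagonal dominance of this system makes the convergence very fast.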

Exercises:

1. Solve the equations

10 x1 − 2 x2 − x3 − x4 = 3 , −2 x1 + 10 x2 − x3 − x4 = 15 , − x1 − x2 + 10 x3 − 2 x4 = 27 , − x1 − x2 − 2 x3 + 10 x4 = −9

using the Jacobi method by taking x1^{(0)} = 0.3 , x2^{(0)} = x3^{(0)} = x4^{(0)} = 0 .

2. Write H and C for the following system of equations:

54 x + y + z = 110 , 2 x + 15 y + 6 z = 72 , − x + 6 y + 27 z = 85 . Solve this system using the Jacobi iterative method by taking x^{(0)} = 2 , y^{(0)} = 0 , z^{(0)} = 0 .

Keywords: System of Linear Algebraic Equations, Jacobi Iterative Method

References

Jain, M. K., Iyengar. S.R.K., Jain. R.K. (2008). Numerical Methods. Fifth
Edition, New Age International Publishers, New Delhi.

Atkinson, E Kendall. (2004). Numerical Analysis. Second Edition, John Wiley


& Sons, Publishers, Singapore.

Suggested Reading


Scheid, Francis. (1989). Numerical Analysis. Second Edition, McGraw-Hill


Publishers, New York.

Sastry, S.S. (2005). Introductory Methods of Numerical Analysis. Fourth


Edition, Prentice Hall of India Publishers, New Delhi.

Module 1: Numerical Analysis

Lesson 8

Gauss-Seidel Iteration Method

8.1 Introduction

This method is applied to the linear system of algebraic equations Ax = b for which the diagonal elements of A are larger in absolute value than the sum of the absolute values of the other elements in each of its rows.

One should arrange, by row and column interchanges, that the larger elements fall along the diagonal, to the maximum possible extent. This method may be seen as an improvement over the Jacobi method: the values already computed for the unknowns in a particular iteration are used within the same iteration. Consider the system of linear algebraic equations (7.5) as given in Lesson 7.

x1^{(1)} = b1/a11 − (a12/a11) x2^{(0)} − (a13/a11) x3^{(0)} − ... − (a1n/a11) xn^{(0)}

x2^{(1)} = b2/a22 − (a21/a22) x1^{(0)} − (a23/a22) x3^{(0)} − ... − (a2n/a22) xn^{(0)}

... ... ... ...

xn^{(1)} = bn/ann − (an1/ann) x1^{(0)} − (an2/ann) x2^{(0)} − ... − (a_{n,n−1}/ann) x_{n−1}^{(0)} .

This is the first step of the Jacobi iteration method.

In the first step of the Gauss-Seidel iteration, we make use of the initial approximations x2^{(0)} , x3^{(0)} , ..., xn^{(0)} in the first of the above equations to evaluate

x1^{(1)} = b1/a11 − (1/a11) ( a12 x2^{(0)} + a13 x3^{(0)} + ... + a1n xn^{(0)} ) .

This approximation for x1^{(1)} is used in approximating x2^{(1)} as shown below:

x2^{(1)} = b2/a22 − (1/a22) ( a21 x1^{(1)} + a23 x3^{(0)} + ... + a2n xn^{(0)} )

Likewise, the other unknowns are found as shown below:

x3^{(1)} = b3/a33 − (1/a33) ( a31 x1^{(1)} + a32 x2^{(1)} + ... + a3n xn^{(0)} )

... ... ... ...

xn^{(1)} = bn/ann − (1/ann) ( an1 x1^{(1)} + an2 x2^{(1)} + ... + a_{n,n−1} x_{n−1}^{(1)} )

We proceed to find the second approximation for the solution in the same
manner. The iteration process is terminated using the same criterion that was
discussed in the case of Jacobi method.

We express the Gauss-Seidel method in the matrix form as follows for Ax = b .

Let A = L + D + U , where L and U are the lower and upper triangular matrices with zeros for the diagonal entries and D is the diagonal matrix.

Then, ( L + D + U ) x = b , i.e.,

( L + D ) x^{(k+1)} = b − U x^{(k)}

x^{(k+1)} = ( D + L )^{−1} b − ( D + L )^{−1} U x^{(k)}

or x^{(k+1)} = H x^{(k)} + C

where H = −( D + L )^{−1} U and C = ( D + L )^{−1} b .

Let us illustrate the use of Gauss-Seidel method below.

Example 1: Solve the system of equations using the Gauss-Seidel method correct to three decimal places:

x + 2y + z = 0 ...... (i)

3x + y − z = 0 ...... (ii)

x − y + 4z = 3 ...... (iii)

Solution:

In the above equations, note that

i. 1 > 2 + 1 , i.e., 1 > 3 is false.

ii. 1 > 3 + −1 i.e., 1 > 4 is false.

iii. 4 > 1 + −1 i.e., 4 > 2 is true.

Thus in the first two equations, the diagonal dominance is not present. We now rearrange the order of the given system as:

3x + y − z = 0 , x + 2y + z = 0 , x − y + 4z = 3 .

Now this system shows diagonal dominance as 3 ≥ 2 , 2 ≥ 2 , 4 > 2 .


We now write the iterative process for the above as:

x^{(k+1)} = (1/3) [ − y^{(k)} + z^{(k)} ]

y^{(k+1)} = (1/2) [ − x^{(k+1)} − z^{(k)} ]

z^{(k+1)} = (1/4) [ 3 − x^{(k+1)} + y^{(k+1)} ] , k = 0, 1, 2, ...

We start with the initial guess values x^{(0)} = 1 , y^{(0)} = 1 , z^{(0)} = 1 .
z (0) 1 .

This gives

x^{(1)} = (1/3) [ z^{(0)} − y^{(0)} ] = 0

y^{(1)} = (1/2) [ − x^{(1)} − z^{(0)} ] = −0.5

z^{(1)} = (1/4) [ 3 − x^{(1)} + y^{(1)} ] = 0.625 .

The second iteration is given by

x^{(2)} = (1/3) [ z^{(1)} − y^{(1)} ] = 0.375

y^{(2)} = (1/2) [ − x^{(2)} − z^{(1)} ] = −0.5

z^{(2)} = (1/4) [ 3 − x^{(2)} + y^{(2)} ] = 0.53125 .

Proceeding in this way, we generate the sequence of approximations as:


x^{(3)} = 0.34375 , y^{(3)} = −0.4375 , z^{(3)} = 0.55469

x^{(4)} = 0.33075 , y^{(4)} = −0.44271 , z^{(4)} = 0.55664

x^{(5)} = 0.33312 , y^{(5)} = −0.4449 , z^{(5)} = 0.5555 .

The approximate solution correct to 3 decimal places is taken as:

x = 0.333 , y = −0.444 , z = 0.556 .

Example 2: Solve the system of equations

2x − y = 7 , −x + 2y − z = 1 , −y + 2z = 1

using the Gauss-Seidel method by taking the initial approximations x^{(0)} = 0 , y^{(0)} = 0 , z^{(0)} = 0 .

Solution:
 2 −1 0  7

Given A = −1 2 −1  , b = 1 , we find the decomposition of A as
   
 0 −1 2  1
   
2 0 0  0 0 0 0 −1 0 
D = 0 2 0 , L =  −1 0 =0 , U 0 0 −1 .
     
 0 0 2   0 −1 0  0 0 0 

The Gauss-Seidel method is x^{(k+1)} = −( D + L )^{−1} U x^{(k)} + ( D + L )^{−1} b .

1 
2 0 0
 2 0 0  

( D + L) =− 1 2 0  , ( D + L ) −1 1 1
= 0,
  4 2 
 0 −1 2  1 1
  1
 
 8 4 2 

5
66 WhatsApp: +91 7900900676 www.AgriMoon.Com
Gauss-Seidel Iteration Method

 1  7
 0 − 0  2
2
   
( D + L) −1U = 0 −
1 1 −1  9 .
− , ( D + L) b =
 4 2 4
 1 1  13 
0 − −   
 8 4   8 

Applying the above method,

x^{(1)} = [ 3.5 , 2.25 , 1.625 ]^T .

Using x^{(1)} , we generate x^{(2)} = [ 4.625 , 3.625 , 2.3125 ]^T , and similarly x^{(3)} = [ 5.3125 , 4.3125 , 2.6563 ]^T .
   

Note that these iterates are approaching the exact solution x = 6 , y = 5 , z = 3 .
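The sweep can be sketched in Python (our own illustration); inside a sweep each newly computed component is used immediately, which is exactly the Gauss-Seidel idea:

```python
def gauss_seidel(A, b, x0, n_iter=30):
    """Gauss-Seidel sweeps: components j < i are already updated when x[i] is set."""
    n = len(b)
    x = list(x0)
    for _ in range(n_iter):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [7.0, 1.0, 1.0]
first = gauss_seidel(A, b, [0.0, 0.0, 0.0], n_iter=1)
sol = gauss_seidel(A, b, [0.0, 0.0, 0.0])
```

Here first is [3.5, 2.25, 1.625], matching x^(1) above, and sol approaches the exact solution (6, 5, 3).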

Exercises: By choosing suitable initial approximations, solve the following


system of linear algebraic equations using Gauss-Seidel method.

1. , , .

2. , , .

3. , , .


4. 10 x1 − 2 x2 − x3 − x4 = 3 , −2 x1 + 10 x2 − x3 − x4 = 15 , − x1 − x2 + 10 x3 − 2 x4 = 27 , − x1 − x2 − 2 x3 + 10 x4 = −9 .

Keywords: Linear Algebraic Equations, Gauss-Seidel Method, Diagonal


Dominance

References

Jain, M. K., Iyengar. S.R.K., Jain. R.K. (2008). Numerical Methods. Fifth
Edition, New Age International Publishers, New Delhi.

Atkinson, E Kendall. (2004). Numerical Analysis. Second Edition, John Wiley


& Sons, Publishers, Singapore.

Suggested Reading

Scheid, Francis. (1989). Numerical Analysis. Second Edition, McGraw-Hill


Publishers, New York.

Sastry, S.S. (2005). Introductory Methods of Numerical Analysis. Fourth


Edition, Prentice Hall of India Publishers, New Delhi.

Module 1: Numerical Analysis

Lesson 9

Decomposition Methods

9.1 Introduction

LU and Cholesky Decomposition methods:

The Gauss elimination method, the Gauss-Jordan method and the LU and Cholesky decomposition methods are direct methods to solve the system of linear algebraic equations Ax = b . Decomposition methods are also known as factorization methods. In this lesson, we discuss the LU and Cholesky decomposition methods. The basic idea is to write the coefficient matrix A as the product of a lower triangular matrix L and an upper triangular matrix U .

9.2 LU Decomposition Method

Given the matrix A = ( aij ) , i, j = 1, 2, ..., n in general, or

A = | a11 a12 a13 |
    | a21 a22 a23 |
    | a31 a32 a33 |

in particular, the decomposition is possible if all the leading principal submatrices of A , i.e.,

| a11 | ,  | a11 a12 |  and A itself,
           | a21 a22 |

are non-singular. Let A = LU with a choice for

L = | 1   0   0 |          U = | u11 u12 u13 |
    | l21 1   0 |   and        | 0   u22 u23 | .
    | l31 l32 1 |              | 0   0   u33 |


One can also choose 1's along the diagonal elements for U and lii as diagonal elements for L . We now determine the elements of L and U as follows.

1 0 0  u11 u12 u13   a11 a12 a13 


Consider LU l21 1
= 0  = 0 u22 u23  = a21 a22 a23  A .
    
l31 l32 1   0 0 u33   a31 a32 a33 

Equating the corresponding elements on both sides, we get

u11 = a11 , u12 = a12 , u13 = a13 ,

l21 u11 = a21 ⇒ l21 = a21/a11 ,

l31 u11 = a31 ⇒ l31 = a31/a11 ,

l21 u12 + u22 = a22 ⇒ u22 = a22 − (a21/a11) a12 ,

l21 u13 + u23 = a23 ⇒ u23 = a23 − (a21/a11) a13 ,

l31 u12 + l32 u22 = a32 ⇒ l32 = [ a32 − (a31/a11) a12 ] / [ a22 − (a21/a11) a12 ] ,

l31 u13 + l32 u23 + u33 = a33 ⇒ u33 = a33 − l31 u13 − l32 u23 .
Once we compute the lij 's and uij 's, we obtain the solution of Ax = b as given below.

Consider LUx = b . Call Ux = z , so that Lz = b , i.e.,

| 1   0   0 |   | z1 |   | b1 |
| l21 1   0 |   | z2 | = | b2 | .
| l31 l32 1 |   | z3 |   | b3 |

Now forward substitution gives

z1 = b1 , l21 z1 + z2 = b2 ⇒ z2 = b2 − l21 b1 ,

l31 z1 + l32 z2 + z3 = b3 ⇒ z3 = b3 − l31 z1 − l32 ( b2 − l21 b1 ) .

This gives z in terms of the elements of b . Having found z , Ux = z gives the unknown x :

| u11 u12 u13 |   | x1 |   | z1 |
| 0   u22 u23 |   | x2 | = | z2 | .
| 0   0   u33 |   | x3 |   | z3 |

Using the backward substitution process,

u33 x3 = z3 ⇒ x3 = z3/u33 ,

u22 x2 + u23 x3 = z2 ⇒ x2 = [ z2 − u23 ( z3/u33 ) ] / u22 ,

u11 x1 + u12 x2 + u13 x3 = z1 ⇒ x1 = ( z1 − u12 x2 − u13 x3 ) / u11 ,

and [ x1 , x2 , x3 ]^T is the required solution.
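The whole procedure — the Doolittle-style factorization with 1's on the diagonal of L , forward substitution and back substitution — can be sketched in Python (an illustration without pivoting; the function name is ours):

```python
def lu_solve(A, b):
    """Solve Ax = b via A = LU with unit lower-triangular L (no pivoting)."""
    n = len(b)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):          # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):      # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    z = []                             # forward substitution: L z = b
    for i in range(n):
        z.append(b[i] - sum(L[i][k] * z[k] for k in range(i)))
    x = [0.0] * n                      # back substitution: U x = z
    for i in range(n - 1, -1, -1):
        x[i] = (z[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

x = lu_solve([[2.0, 1.0, 4.0], [8.0, -3.0, 2.0], [4.0, 11.0, -1.0]],
             [12.0, 20.0, 33.0])
```

For the system of Example 1 below this produces the solution (3, 2, 1), via the same L , U and z computed by hand there.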


Example 1: Solve the following equations by LU decomposition:

2x + y + 4z = 12 , 8x − 3y + 2z = 20 , 4x + 11y − z = 33 .

Solution:
2 1 4  12 

Given A =   20  .
8 −3 2 , b =
   
 4 11 −1  33 

2 1 
Clearly a11 =≠
2 0,  =−14 ≠ 0, A ≠ 0 .
 8 −3 
We find a decomposition of as as follows

 
1 0 0  2 1 4 
   0 −7 −14  .
L =  4 1 0  and U =
 
 9   0 0 −27 
2 − 1
 7 
= = b.
Ax LUx
Take Ux = z .

 
1 0 0   z1  12 
 
Then Lz =
b ⇒ 4 1 0   z2  =
 20 
   
   z   33 
−1  3   
9
2 −
 7 
⇒ z1 =12, z2 =20 − 4 × 12 =−28,
9
z3 =
33 + (−28) − 2(12) =
−27 .
7
2 1 4   x   12 
Now Ux =z ⇒ 0 −7 −14   y  =−
  28 
    
 0 0 −27   z   −27 


z = −27/(−27) = 1 ,

y = −(1/7) [ −28 + 14 × 1 ] = 2 ,

x = (1/2) [ 12 − 2 − 4 × 1 ] = 3 .

Thus the solution of the given system of equations is x = 3 , y = 2 and z = 1 .


The advantage of direct methods is that we obtain the exact solution, while the iterative methods give an approximate solution. For the generalization of the LU decomposition procedure to a system of n equations in n unknowns, the reader is referred to any standard textbook on numerical methods.

Exercise 1: Solve the equations using LU decomposition.

9.3 Cholesky Method

If the coefficient matrix A is symmetric (i.e., A = A^T ) and all leading principal minors are positive (so that A is positive definite), then the matrix A can be decomposed as

A = L ⋅ L^T

where L = ( lij ) is lower triangular, i.e., lij = 0 for i < j .


Then Ax = b becomes

L ⋅ L^T x = b .

Take L^T x = z ⇒ Lz = b .

The intermediate solution z is obtained by forward substitution (as described earlier) and the solution x is then determined by back substitution.
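The element-by-element construction of L can be sketched as follows; this is my own illustration (not from the text), valid under the stated positive-definiteness assumption:

```python
import math

def cholesky(A):
    """Decompose a symmetric positive definite A as A = L L^T,
    with L lower triangular, by equating entries row by row."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # off-diagonal entry
    return L
```

For the matrix A of Example 2 below this yields L = [[1, 0, 0], [2, 2, 0], [3, 8, 3]], matching the hand computation.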

Example 2: Solve the following system of equations using the Cholesky decomposition method:

x + 2y + 3z = 5 ,   2x + 8y + 22z = 6 ,   3x + 22y + 82z = −10 .

Solution:
1 2 3   5 
Given A =
= 2 8 22  , b  6 .
   
 3 22 82   −10 

Write A = L ⋅ L^T , i.e.,

[ 1   2   3  ]   [ l11  0    0   ] [ l11  l21  l31 ]
[ 2   8   22 ] = [ l21  l22  0   ] [ 0    l22  l32 ]
[ 3   22  82 ]   [ l31  l32  l33 ] [ 0    0    l33 ]

⇒ l11² = 1 ⇒ l11 = 1 ,

l11 l21 = 2 ⇒ l21 = 2 ,


l11 l31 = 3 ⇒ l31 = 3 ,

l21² + l22² = 8 ⇒ l22 = 2 ,
l31 l21 + l32 l22 = 22 ⇒ l32 = 8 ,
l31² + l32² + l33² = 82 ⇒ l33 = 3 .

    [ 1  0  0 ]
L = [ 2  2  0 ] .
    [ 3  8  3 ]

Now Ax = b ⇒ L ⋅ L^T x = b .

Put L^T x = u where u = [ u1 ; u2 ; u3 ] (say)

⇒ Lu = b ⇒
    [ 1  0  0 ] [ u1 ]   [ 5   ]
    [ 2  2  0 ] [ u2 ] = [ 6   ]
    [ 3  8  3 ] [ u3 ]   [ −10 ]

⇒ u1 = 5 , u2 = −2 , u3 = −3 .

Solving L^T x = u gives
    [ 1  2  3 ] [ x ]   [ 5  ]
    [ 0  2  8 ] [ y ] = [ −2 ]
    [ 0  0  3 ] [ z ]   [ −3 ]

⇒ z =−1, y =3 and x = 2 as the required solution.

Exercise 2:

1. Solve the system of equations , , using (i) the LU and (ii) the Cholesky decomposition methods.


2. Solve the system of equation

 2 1 −4 1   u1   4 
 −4 3 5 −2  u   −10 
  2 =  
 1 −1 1 −1  u3   2 
    
 1 3 −3 2  u4   −1 

by LU decomposition method.

3. Solve , , by the Cholesky method.

Keywords: Cholesky Decomposition Method, LU Decomposition Method

References

Jain, M. K., Iyengar. S.R.K., Jain. R.K. (2008). Numerical Methods. Fifth Edition,
New Age International Publishers, New Delhi.

Atkinson, E Kendall. (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.

Suggested Reading

Scheid, Francis. (1989). Numerical Analysis. Second Edition, McGraw-Hill


Publishers, New York.

Sastry, S.S. (2005). Introductory Methods of Numerical Analysis. Fourth Edition,


Prentice Hall of India Publishers, New Delhi.



Module 1: Numerical Analysis

Lesson 10

Interpolation

10.1 Introduction

Let f(x) be a continuous function defined on some interval [a, b] . Consider a partition of this interval [a, b] as a = x0 < x1 < ... < xi−1 < xi < ... < xN = b , having (N + 1) nodal points. These nodes are either equally spaced ( xi − xi−1 = h , a constant, for i = 1, 2, ..., N ) or unequally spaced. Let f(xi) = fi , i = 0, 1, 2, ..., N . The process of approximating the given function f(x) on [a, b] , or a set of (N + 1) data points (xi, fi) where the function f(x) is not given explicitly, using combinations of the powers { 1, x, x², ..., x^N, ... } is known as polynomial

interpolation. The problem of polynomial approximation is to find a polynomial Pn(x) of degree n or less than n , which satisfies the conditions

Pn(xi) = f(xi) ,  i = 0, 1, 2, ..., N .

In such a case, the polynomial Pn(x) is called the interpolating polynomial. The advantage of polynomial interpolation is two-fold. The first use is in reconstructing an approximation to the function f(x) when it is not given explicitly. The second use is to replace the function f(x) by an interpolating polynomial Pn(x) , using which differentiation and integration operations can be easily performed. In general, if there are N + 1 distinct points a = x0 < x1 < ... < xi−1 < xi < ... < xN = b , then the problem of interpolation is to obtain Pn(x) satisfying the conditions Pn(xi) = f(xi) , i = 0, 1, 2, ..., N . We discuss


below the methods in which these interpolating polynomials can be obtained.


10.2 Linear Interpolation

Consider two data points ( x0 , f0 ) and ( x1 , f1 ) . We wish to determine a polynomial

P1(x) = a1 x + a0 , (10.1)

where a0 and a1 are arbitrary constants, satisfying the interpolating conditions

P1(xi) = f(xi) , i = 0, 1 (10.2)

i.e., f(x0) = P1(x0) = a1 x0 + a0 and f(x1) = P1(x1) = a1 x1 + a0 .

Eliminating a0 and a1 , we obtain the interpolating polynomial as:

| P1(x)   x    1 |
| f(x0)   x0   1 | = 0 .
| f(x1)   x1   1 |

(Figure: the chord y = P1(x) approximating the curve y = f(x) between the points (x0, f0) and (x1, f1).)


Linear interpolation

Expanding the determinant, we obtain

P1(x) ( x0 − x1 ) − f(x0) ( x − x1 ) + f(x1) ( x − x0 ) = 0

or P1(x) = ( (x − x1)/(x0 − x1) ) f(x0) + ( (x − x0)/(x1 − x0) ) f(x1) .

This gives the linear interpolating polynomial.

Write P1(x) = l0(x) f(x0) + l1(x) f(x1) (10.3)

where l0(x) = (x − x1)/(x0 − x1) , l1(x) = (x − x0)/(x1 − x0) .

The functions l0(x) and l1(x) are linear functions and are called the Lagrange fundamental polynomials. They satisfy the conditions

(i) l0(x) + l1(x) = 1 and (ii) li(xj) = δij = { 1 if i = j ; 0 if i ≠ j } .

Equation (10.3) is called the Linear Lagrange interpolating polynomial.

10.3 Quadratic Interpolation


We now wish to determine a polynomial

P2(x) = a0 + a1 x + a2 x² , (10.4)

where a0 , a1 and a2 are arbitrary constants, satisfying the conditions


f(x0) = P2(x0) , f(x1) = P2(x1) and f(x2) = P2(x2) . (10.5)

Thus we are determining P2(x) passing through the data points (x0, f0) , (x1, f1) , (x2, f2) .

The arbitrary constants a0 , a1 and a2 can be determined from the three


conditions:

f(x0) = a0 + a1 x0 + a2 x0²

f(x1) = a0 + a1 x1 + a2 x1²

f(x2) = a0 + a1 x2 + a2 x2²

Eliminating a0 , a1 , a2 we obtain:

| P2(x)   1   x    x²  |
| f(x0)   1   x0   x0² | = 0
| f(x1)   1   x1   x1² |
| f(x2)   1   x2   x2² |

⇒ P2(x) det[ 1 x0 x0² ; 1 x1 x1² ; 1 x2 x2² ] − f(x0) det[ 1 x x² ; 1 x1 x1² ; 1 x2 x2² ]
+ f(x1) det[ 1 x x² ; 1 x0 x0² ; 1 x2 x2² ] − f(x2) det[ 1 x x² ; 1 x0 x0² ; 1 x1 x1² ] = 0

Expanding the determinants and simplifying, we obtain

P2 ( x) = l0 ( x) f ( x0 ) + l1 ( x) f ( x1 ) + l2 ( x) f ( x2 ) (10.6)


where l0(x) = ( (x − x1)(x − x2) ) / ( (x0 − x1)(x0 − x2) ) ,

l1(x) = ( (x − x0)(x − x2) ) / ( (x1 − x0)(x1 − x2) ) ,

l2(x) = ( (x − x0)(x − x1) ) / ( (x2 − x0)(x2 − x1) ) .

These li(x) , i = 0, 1, 2 are Lagrange fundamental polynomials of second degree, which satisfy (i) l0(x) + l1(x) + l2(x) = 1 and (ii) li(xj) = δij .

Example 1: Obtain the value of f (0.15) using Lagrange linear interpolating


polynomial for the data:
x: 0.1 0.2
f ( x) : 0.09983 0.19867

Solution:

Put x = 0.15 in (10.3); we get

P1(0.15) = ( (0.15 − 0.2)/(0.1 − 0.2) ) (0.09983) + ( (0.15 − 0.1)/(0.2 − 0.1) ) (0.19867)

= 0.14925 .
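Formula (10.3) translates directly into code; the following small sketch (my own illustration, not part of the text) reproduces the computation above:

```python
def lagrange_linear(x0, f0, x1, f1, x):
    """Linear Lagrange interpolation, equation (10.3):
    P1(x) = l0(x) f(x0) + l1(x) f(x1)."""
    l0 = (x - x1) / (x0 - x1)
    l1 = (x - x0) / (x1 - x0)
    return l0 * f0 + l1 * f1
```

Here lagrange_linear(0.1, 0.09983, 0.2, 0.19867, 0.15) gives 0.14925, as in Example 1.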

Example 2: Find P2(x) for the following unequally spaced data set:

x :     0   1   3
f(x) :  1   3   55

Hence find P2(2) .


Solution:

We compute the fundamental polynomials as:

l0(x) = ( (x − 1)(x − 3) ) / ( (−1)(−3) ) = (1/3) ( x² − 4x + 3 )

l1(x) = ( (x − 0)(x − 3) ) / ( (1)(−2) ) = (1/2) ( 3x − x² )

l2(x) = ( (x − 0)(x − 1) ) / ( (3)(2) ) = (1/6) ( x² − x )

Now P2(x) = (1/3)( x² − 4x + 3 ) ⋅ 1 + (1/2)( 3x − x² ) ⋅ 3 + (1/6)( x² − x ) ⋅ 55

= 8x² − 6x + 1 .

∴ P2(2) = 8 ⋅ (2)² − 6(2) + 1

= 32 − 12 + 1
= 21 .

Example 3: Find P2 ( x) approximating the given data


x: 2 2.5 3
f ( x) : 0.69315 0.91629 1.09861

Hence find y (2.7) .

Solution:

Let us find the fundamental polynomials l0 ( x), l1 ( x) and l2 ( x) :


l0(x) = ( (x − 2.5)(x − 3) ) / ( (−0.5)(−1.0) ) = 2x² − 11x + 15

l1(x) = ( (x − 2)(x − 3) ) / ( (0.5)(−0.5) ) = −4x² + 20x − 24

l2(x) = ( (x − 2)(x − 2.5) ) / ( (1.0)(0.5) ) = 2x² − 9x + 10

Hence,

P2(x) = ( 2x² − 11x + 15 ) (0.69315) − ( 4x² − 20x + 24 ) (0.91629) + ( 2x² − 9x + 10 ) (1.09861)

Simplifying, we get

P2(x) = −0.08164 x² + 0.81366 x − 0.60761 .

Putting x = 2.7 , P2(2.7) = 0.99412 .

Exercises:
1. Find the Lagrange quadratic interpolating polynomial P2 ( x) for the

data: f (0) = 1 , f (1) = 3 , f (3) = 5 .

2. The function f(x) = sin x + cos x is represented by the data given by:

x :     10°     20°     30°
f(x) :  1.1585  1.2817  1.366


Find P2(x) for this data. Hence find P2(π/12) . Compare it with the exact value f(π/12) .

Keywords: Equally Spaced, Nodal Points, Unequally Spaced, Linear


Interpolation, Quadratic Interpolation.

References
Jain, M. K., Iyengar. S.R.K., Jain. R.K. (2008). Numerical Methods. Fifth
Edition, New Age International Publishers, New Delhi.
Atkinson, E Kendall. (2004). Numerical Analysis. Second Edition, John Wiley
& Sons, Publishers, Singapore.

Suggested Reading
Scheid, Francis. (1989). Numerical Analysis. Second Edition, McGraw-Hill
Publishers, New York.
Sastry, S.S. (2005). Introductory Methods of Numerical Analysis. Fourth
Edition, Prentice Hall of India Publishers, New Delhi.



Module 1: Numerical Analysis

Lesson 11

Higher Order Lagrange Interpolation

11.1 Introduction

Consider the data set { (x0, f0) , (x1, f1) , ..., (xN, fN) } corresponding to the function f(x) on the interval [x0, xN] . The partition x0 < x1 < x2 < ... < xN of the interval [x0, xN] need not be equally spaced. We now derive an interpolating polynomial PN(x) for the above data, satisfying PN(xi) = fi , i = 0, 1, ..., N .

11.2 Lagrange Interpolating Polynomial

Result: Show that the N-th degree Lagrange interpolating polynomial for the data set { (x0, f0) , (x1, f1) , ..., (xN, fN) } is given by

PN(x) = ( (x − x1)(x − x2)...(x − xN) ) / ( (x0 − x1)(x0 − x2)...(x0 − xN) ) f0
+ ( (x − x0)(x − x2)...(x − xN) ) / ( (x1 − x0)(x1 − x2)...(x1 − xN) ) f1 + ...
+ ( (x − x0)(x − x1)...(x − xN−1) ) / ( (xN − x0)(xN − x1)...(xN − xN−1) ) fN

This is written in the compact form as:

PN(x) = Σ_{i=0}^{N} li(x) fi

where li(x) = w(x) / ( (x − xi) w′(xi) )

with w(x) = (x − x0)(x − x1)...(x − xN)


and w′(xi) = (xi − x0)(xi − x1)...(xi − xi−1)(xi − xi+1)...(xi − xN) .

These li(x) are the N-th degree fundamental polynomials satisfying

(i) Σ_{i=0}^{N} li(x) = 1 and (ii) li(xj) = δij .

The truncation error in Lagrange interpolation is given by

TN( f , x ) = ( w(x) / (N + 1)! ) f^(N+1)(ξ)

where ξ is some point in the interval containing the nodes and x , i.e.,

min{ x0, x1, ..., xN, x } < ξ < max{ x0, x1, ..., xN, x } .

The derivation of this truncation error is not given here; the reader is referred to the reference books.

Proof: Let PN(x) be of the form

PN(x) = a0 (x − x1)(x − x2)...(x − xN) + a1 (x − x0)(x − x2)...(x − xN) + ... + aN (x − x0)(x − x1)...(x − xN−1) (11.1)

Use the condition f(x0) = f0 at x = x0 in (11.1):

f0 = a0 (x0 − x1)(x0 − x2)...(x0 − xN)

⇒ a0 = f0 / ( (x0 − x1)(x0 − x2)...(x0 − xN) )

Similarly, using the other N conditions, we get

a1 = f1 / ( (x1 − x0)(x1 − x2)...(x1 − xN) ) ,

a2 = f2 / ( (x2 − x0)(x2 − x1)...(x2 − xN) ) , ... , aN = fN / ( (xN − x0)(xN − x1)...(xN − xN−1) ) .

Substituting the ai 's in (11.1), we get PN(x) as stated above.

Example 1: For the unequally spaced data given below, find the interpolating polynomial of highest degree:

x :     0    1    3    4
f(x) :  −20  −12  −20  −24

Then compute f(2) .

Solution:

Given 4 data points, we can find at most a third degree polynomial of the form P3(x) = a0 + a1 x + a2 x² + a3 x³ , where the ai 's are determined using the given data.

Now let us use the Lagrange interpolation formula

P3(x) = l0(x) f0 + l1(x) f1 + l2(x) f2 + l3(x) f3

where li(x) are the fundamental polynomials;

∴ P3(x) = ( (x − 1)(x − 3)(x − 4) ) / ( (−1)(−3)(−4) ) (−20) + ( (x − 0)(x − 3)(x − 4) ) / ( (1)(−2)(−3) ) (−12)
+ ( (x − 0)(x − 1)(x − 4) ) / ( (3)(2)(−1) ) (−20) + ( (x − 0)(x − 1)(x − 3) ) / ( (4)(3)(1) ) (−24)

or P3(x) = x³ − 8x² + 15x − 20 is the highest degree polynomial that satisfies the given data set.

Now f(2) ≈ P3(2) = (2)³ − 8(2)² + 15(2) − 20

= −14 .
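The N-th degree formula can be evaluated without expanding the polynomial. The following is a sketch of my own (not from the text) that evaluates PN(x) directly from the data:

```python
def lagrange(xs, fs, x):
    """Evaluate the N-th degree Lagrange interpolating polynomial
    P_N(x) = sum_i l_i(x) f_i for nodes xs and values fs."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)  # fundamental polynomial l_i(x)
        total += li * fi
    return total
```

With the data of Example 1, lagrange([0, 1, 3, 4], [-20, -12, -20, -24], 2) returns −14, agreeing with the value of P3(2) found above.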

11.3 Inverse Interpolation

In interpolation, we find the function value of f ( x) at some non-nodal point


x in the interval. On the other hand, the process of estimating the value x for a
value of f ( x) which is not among the tabulated values is called the inverse
interpolation. The inverse interpolating polynomial is obtained by interchanging
the roles of xi and f ( xi ) in the Lagrange interpolating polynomial.

For yi = f(xi) and for the given data set { (x0, y0) , (x1, y1) , ..., (xN, yN) } , it is given by:

x = ( (y − y1)(y − y2)...(y − yN) ) / ( (y0 − y1)(y0 − y2)...(y0 − yN) ) x0
+ ( (y − y0)(y − y2)...(y − yN) ) / ( (y1 − y0)(y1 − y2)...(y1 − yN) ) x1 + ...
+ ( (y − y0)(y − y1)...(y − yN−1) ) / ( (yN − y0)(yN − y1)...(yN − yN−1) ) xN .

Example 2: Find the value of x , if f ( x) = 7 from the given table.


x 1 3 4
f ( x) 4 12 19


Solution:

Using the above inverse interpolating polynomial with N = 2 , we get

x = ( (7 − 12)(7 − 19) ) / ( (4 − 12)(4 − 19) ) (1) + ( (7 − 4)(7 − 19) ) / ( (12 − 4)(12 − 19) ) (3) + ( (7 − 4)(7 − 12) ) / ( (19 − 4)(19 − 12) ) (4)

= 1/2 + 27/14 − 4/7

= 13/7 ≈ 1.857 .

Observe that the function representing the above data set is y(x) = x² + 3 .

Exercises:

1. Find the value of from the following data using the Lagrange
interpolation.
x 5 6 9 11
f ( x) 380 −2 196 508

2. Find a polynomial which passes through the


points (0, −12), (1,0), (3,6), (4,12) .

3. Find x if f ( x) = 6 from the below given table


x: 0 1 3 4
f ( x) : −12 0 12 24

using the inverse interpolating polynomial.


4. Find the value of x corresponding to f ( x) = 12 from the following data set


{(2.8,9.8), (4.1,13.4), (4.9,15.5), (6.2,19.6)} using the inverse interpolating
polynomial.

Keywords: Lagrange Interpolation, Inverse Interpolating Polynomial

References

Jain, M. K., Iyengar. S.R.K., Jain. R.K. (2008). Numerical Methods. Fifth
Edition, New Age International Publishers, New Delhi.

Atkinson, E Kendall. (2004). Numerical Analysis. Second Edition, John Wiley


& Sons, Publishers, Singapore.

Suggested Reading

Scheid, Francis. (1989). Numerical Analysis. Second Edition, McGraw-Hill


Publishers, New York.

Sastry, S.S. (2005). Introductory Methods of Numerical Analysis. Fourth


Edition, Prentice Hall of India Publishers, New Delhi.

Module 1: Numerical Analysis

Lesson 12

Newton’s Forward Interpolation Formula with Equal Intervals

12.1 Introduction

Let f(x) be a continuous function defined on the interval [a, b] . Consider the partition of the interval into N equally spaced subintervals as:

a = x0 < x1 < x2 < ... < xj−1 < xj < xj+1 < ... < xN = b

where xj − xj−1 = h for each j = 1, 2, ..., N .

Thus the nodes x0 , x1 , ..., xN are such that xj = x0 + j h for j = 0, 1, 2, ..., N .

We have the data set as:

{ (x0, f0) , (x1, f1) , ..., (xN, fN) } .

12.2 Gregory-Newton Forward difference Interpolating Polynomial

Suppose it is required to evaluate f(x) for some x = x0 + p h , where p is any real number. Newton's forward difference interpolation makes use of the forward difference operator ∆ on the given data set to generate a polynomial. For any real number p , the shift operator gives E^p f(x0) = f( x0 + p h ) .

Also we know E = 1 + ∆ .

∴ f( x0 + p h ) = (1 + ∆)^p f(x0) = (1 + ∆)^p y0


= y0 + p ∆y0 + ( p(p − 1)/2! ) ∆²y0 + ( p(p − 1)(p − 2)/3! ) ∆³y0 + ...
+ ( p(p − 1)(p − 2)...(p − N + 1)/N! ) ∆^N y0 + ...

(using the Binomial theorem).

For an (N + 1)-point data set, the (N + 1)-th and higher order forward differences become zero, so the infinite series in the above equation becomes a polynomial of degree N .

Note that p = (x − x0)/h is a linear function of x .

Thus p(p − 1)(p − 2)...(p − N + 1)/N! is a polynomial of degree N in x .

Thus we have

yp = f( x0 + p h ) = y0 + p ∆y0 + ( p(p − 1)/2! ) ∆²y0 + ( p(p − 1)(p − 2)/3! ) ∆³y0 + ...
+ ( p(p − 1)(p − 2)...(p − N + 1)/N! ) ∆^N y0 (12.1)

as the N-th degree polynomial interpolating the given equally spaced data set. This is called the Gregory-Newton forward difference interpolating polynomial.

The local truncation error in (12.1) becomes

TN( f ; x ) = ( p(p − 1)(p − 2)...(p − N)/(N + 1)! ) h^(N+1) f^(N+1)(ξ)

where min{ x0, x1, ..., xN, x } < ξ < max{ x0, x1, ..., xN, x } .

Example 1: Find the Newton forward interpolating polynomial for the equally spaced data points

x :     1   2   3   4
f(x) :  −1  −2  −1  2

Compute f(3/2) .

Solution:

Given x0 = 1 , h = 1 , f0 = −1 .
The difference table for the given data is:

x   f    ∆f   ∆²f   ∆³f
1   −1
         −1
2   −2         2
          1          0
3   −1         2
          3
4    2

Clearly ∆f0 = −1 , ∆²f0 = 2 and ∆³f0 = 0 .

Using the Newton’s forward interpolating polynomial given by (12.1), we have

3
93 WhatsApp: +91 7900900676 www.AgriMoon.Com
Newton’s Forward Interpolation Formula with Equal Intervals

 x −1 ( x − 1)( x − 2) ( x − 1)( x − 2)( x − 3)


f ( x) =−1 +   (−1) + (−2) + (0)
 1  2! 3!

( x − 1)( x − 2)(−2)
=−1 + ( x − 1)(−1) +
2

= x2 − 4 x + 2 .

2
 3 3 3 7
Now f  x = =  − 4  + 2 =− =−1.75 .
 2 2 2 4
.

Example 2: Interpolate at x = 0.25 from the data given, without writing the polynomial.

x :     0.1  0.2   0.3   0.4  0.5
f(x) :  1.4  1.56  1.76  2.0  2.28

Solution:

The function value is to be found at x = 0.25 , which is nearer to the node 0.2 .

So choose x0 = 0.2 , h = 0.1 , x = 0.25 .

x = x0 + p h ⇒ 0.25 = 0.2 + p (0.1)

⇒ p = (0.25 − 0.2)/0.1 = 0.5 .

The forward difference table for the given data is written as

x    f     ∆f    ∆²f   ∆³f   ∆⁴f
0.1  1.4
           0.16
0.2  1.56        0.04
           0.2         0.0
0.3  1.76        0.04        0.0
           0.24        0.0
0.4  2.0         0.04
           0.28
0.5  2.28

From the above, x0 = 0.2 (chosen), f0 = 1.56 , ∆f0 = 0.2 , ∆²f0 = 0.04 and ∆³f0 = ∆⁴f0 = 0 .

The Newton’s forward interpolating polynomial becomes

p ( p − 1) 2
f (0.25) = f 0 + p∆f 0 + ∆ f0 .
2!

Thus
.

Note: Newton’s forward interpolation formula is used if the function evaluation is


desired near the beginning of the tabulated values.
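The construction of the difference table and the evaluation of (12.1) can be sketched together. The following is my own illustration (not from the text), assuming equally spaced nodes:

```python
def newton_forward(xs, ys, x):
    """Gregory-Newton forward difference interpolation, equation (12.1),
    for equally spaced nodes xs with values ys, expanded about xs[0]."""
    n = len(xs)
    h = xs[1] - xs[0]
    p = (x - xs[0]) / h
    # forward difference table: diff[k] holds the k-th differences
    diff = [list(ys)]
    for k in range(1, n):
        prev = diff[-1]
        diff.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    total, coeff = 0.0, 1.0
    for k in range(n):
        total += coeff * diff[k][0]
        coeff *= (p - k) / (k + 1)   # builds p(p-1)...(p-k)/(k+1)!
    return total
```

Applied to the table of Example 2, newton_forward([0.1, 0.2, 0.3, 0.4, 0.5], [1.4, 1.56, 1.76, 2.0, 2.28], 0.25) returns 1.655 (to three decimals), agreeing with the hand computation.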

Exercises:

Use Newton’s forward interpolation formula to find


1. and from

2. from

3. Find the number of persons getting salary below Rs. 300 per day from the
following data.

Wages in
Rs.
Frequency

Keywords: Newton’s Forward Interpolation Formula, Forward Difference


Operator

References

Jain, M. K., Iyengar. S.R.K., Jain. R.K. (2008). Numerical Methods. Fifth Edition,
New Age International Publishers, New Delhi.

Atkinson, E Kendall. (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.

Suggested Reading


Scheid, Francis. (1989). Numerical Analysis. Second Edition, McGraw-Hill


Publishers, New York.

Sastry, S.S. (2005). Introductory Methods of Numerical Analysis. Fourth Edition,


Prentice Hall of India Publishers, New Delhi.

Module 1: Numerical Analysis

Lesson 13

Newton’s Backward Interpolation Polynomial

13.1 Introduction

Consider the discrete data set for the continuous function f(x) on the interval [a, b] (as considered in Lesson 12):

{ (x0, f0) , (x1, f1) , ..., (xN, fN) }

Let us now try to evaluate the function at some location x near the end nodes xN−1 or xN .

Write x = xN + p h , where p is any real number.

Then yp = f(x) = f( xN + p h ) = E^p f(xN) .

13.2 Gregory-Newton Backward Interpolating Polynomial

Use the relation between E and the backward difference operator ∇ given as

E^(−1) ≡ 1 − ∇ , i.e., E ≡ (1 − ∇)^(−1) .

Now E^p ≡ (1 − ∇)^(−p) .

Thus f( xN + p h ) = (1 − ∇)^(−p) yN .


Expanding (1 − ∇)^(−p) using the binomial expansion, we write

f( xN + p h ) = yN + p ∇yN + ( p(p + 1)/2! ) ∇²yN + ( p(p + 1)(p + 2)/3! ) ∇³yN + ...
+ ( p(p + 1)(p + 2)...(p + N − 1)/N! ) ∇^N yN + ... .

For the given (N + 1)-point data set, the (N + 1)-th and higher order backward differences become zero, so the infinite series above becomes a polynomial of degree N .

Note that x = xN + p h ⇒ p = (x − xN)/h is a linear function of x , and the product p(p + 1)...(p + N − 1)/N! is a polynomial of degree N in x .

Thus we get the N-th degree interpolating polynomial in terms of the backward differences at xN as:

f( xN + p h ) = yN + p ∇yN + ( p(p + 1)/2! ) ∇²yN + ( p(p + 1)(p + 2)/3! ) ∇³yN + ...
+ ( p(p + 1)(p + 2)...(p + N − 1)/N! ) ∇^N yN (13.2)
N!

This is called the Gregory-Newton Backward interpolating polynomial.


The local truncation error in (13.2) is

TN( f ; x ) = ( p(p + 1)...(p + N)/(N + 1)! ) h^(N+1) f^(N+1)(ξ)

where min{ x0, x1, ..., xN, x } < ξ < max{ x0, x1, ..., xN, x } .

Note: Newton backward interpolating polynomial is more efficient when applied


to interpolate the data at the end of the data set.

Let us illustrate this method.

Example 1: The following data represents the relation between the distance as a
function of height:
height 150 200 250 300 350 400
distance 13.03 15.04 16.81 18.42 19.90 21.27

Find y ( 410 ) .

Solution:

Let x = 410 ; choose xN = 400 , h = 50 , yN = 21.27 .

∴ p = (x − xN)/h = 10/50 = 0.2 .

We now find ∇yN , ∇²yN , ∇³yN , ∇⁴yN by constructing the difference table:

x    y      ∇     ∇²      ∇³     ∇⁴
150  13.03
            2.01
200  15.04        −0.24
            1.77          0.08
250  16.81        −0.16          −0.05
            1.61          0.03
300  18.42        −0.13          −0.01
            1.48          0.02
350  19.90        −0.11
            1.37
400  21.27

Using (13.2), we obtain

f(410) = yN + p ∇yN + ( p(p + 1)/2! ) ∇²yN + ( p(p + 1)(p + 2)/3! ) ∇³yN

= 21.27 + (0.2)(1.37) + ( (0.2)(1.2)/2 ) (−0.11) + ( (0.2)(1.2)(2.2)/6 ) (0.02)

= 21.53 .

Thus y(410) ≈ 21.53 .

Note that the terms involving ∇⁴ and ∇⁵ only correct the solution at the fourth and fifth decimal places.
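The evaluation of (13.2) mirrors the forward case, except that the differences are taken from the bottom of the table. A sketch of my own (not from the text):

```python
def newton_backward(xs, ys, x):
    """Gregory-Newton backward difference interpolation, equation (13.2),
    for equally spaced nodes xs, expanded about the last node xN."""
    n = len(xs)
    h = xs[1] - xs[0]
    p = (x - xs[-1]) / h
    diff = [list(ys)]
    for k in range(1, n):
        prev = diff[-1]
        diff.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    total, coeff = 0.0, 1.0
    for k in range(n):
        total += coeff * diff[k][-1]   # backward differences sit at the table bottom
        coeff *= (p + k) / (k + 1)     # builds p(p+1)...(p+k)/(k+1)!
    return total
```

Applied to the table of Example 1, newton_backward([150, 200, 250, 300, 350, 400], [13.03, 15.04, 16.81, 18.42, 19.90, 21.27], 410) returns approximately 21.53.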

Example 2: Find the cubic polynomial which takes the following data:
0 1 2 3
1 2 1 10

Solution:


Let us now form the difference table for the given data:

x       f        ∆f         ∆²f             ∆³f
x0 = 0  1 = f0
                 1 = ∆f0
1       2                   −2 = ∆²f0
                 −1                         12 = ∆³f0 (= ∇³fN)
2       1                   10 (= ∇²fN)
                 9 (= ∇fN)
xN = 3  10 = fN

Case 1: Take x0 = 0 , p = (x − x0)/h = (x − 0)/1 = x .

Let us now find the Newton's forward interpolating polynomial:

f(x) = f0 + p ∆f0 + ( p(p − 1)/2 ) ∆²f0 + ( p(p − 1)(p − 2)/6 ) ∆³f0

= 1 + x ⋅ 1 + ( x(x − 1)/2 ) (−2) + ( x(x − 1)(x − 2)/6 ) (12)

= 2x³ − 7x² + 6x + 1 …… (i)

Case 2: Let us now find the Newton's backward interpolating polynomial:

Take xN = 3 , fN = 10 , h = 1 .

p = (x − xN)/h = (x − 3)/1 = x − 3 .

f(x) = fN + p ∇fN + ( p(p + 1)/2 ) ∇²fN + ( p(p + 1)(p + 2)/6 ) ∇³fN .

Here p = x − 3 , p(p + 1) = (x − 3)(x − 2) , p(p + 1)(p + 2) = (x − 3)(x − 2)(x − 1) .

Thus f(x) = 10 + (x − 3)(9) + ( (x − 3)(x − 2)/2 ) (10) + ( (x − 3)(x − 2)(x − 1)/6 ) (12)

= 2x³ − 7x² + 6x + 1 . …… (ii)

(i) and (ii) clearly indicate that the interpolating polynomial for the given data is
the same though we use different methods.

Exercises:
1. Use the Lagrange interpolating polynomial for the data:
0 1 2 3
1 2 1 10

Show that the interpolating polynomial is p3 ( x) = 2 x3 − 7 x 2 + 6 x + 1.

2. Find f(2) from the data


3. If , , and
find .

4. Find the number of men getting wages below Rs. 35 from the data:
Wages in Rs:
Frequency

Keywords: Backward Difference Operator, Local Truncation Error, Newton’s


Backward Interpolation Polynomial

References

Jain, M. K., Iyengar. S.R.K., Jain. R.K. (2008). Numerical Methods. Fifth Edition,
New Age International Publishers, New Delhi.
Atkinson, E Kendall. (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.

Suggested Reading
Scheid, Francis. (1989). Numerical Analysis. Second Edition, McGraw-Hill
Publishers, New York.
Sastry, S.S. (2005). Introductory Methods of Numerical Analysis. Fourth Edition,
Prentice Hall of India Publishers, New Delhi.

Module 1: Numerical Analysis

Lesson 14

Gauss Interpolation

14.1 Introduction

Newton’s Forward and Backward interpolating polynomials are used to interpolate


the function values at the starting or end of the data respectively. We now see the
central difference formulas which are most suited for interpolation near the middle
of a tabulated set.

Consider the data points as:

{( x−3 , y−3 ) , ( x−2 , y−2 ) , ( x−1 , y−1 ) , ( x0 , y0 ) , ( x1 , y1 ) , ( x2 , y2 ) , ( x3 , y3 )}

14.2 Gauss-Forward Interpolation Formula

The difference table for the above data, with each forward difference paired with its central-difference equivalent, is given below.

The Newton's forward interpolation formula is:

yp = y0 + p ∆y0 + ( p(p − 1)/2! ) ∆²y0 + ( p(p − 1)(p − 2)/3! ) ∆³y0 + ( p(p − 1)(p − 2)(p − 3)/4! ) ∆⁴y0 + ... (14.1)

We know ∆³y−1 = ∆²y0 − ∆²y−1 ⇒ ∆²y0 = ∆²y−1 + ∆³y−1 .

Similarly, ∆³y0 = ∆³y−1 + ∆⁴y−1 , ∆⁴y0 = ∆⁴y−1 + ∆⁵y−1 .

Also ∆³y−1 − ∆³y−2 = ∆⁴y−2 ⇒ ∆³y−1 = ∆³y−2 + ∆⁴y−2 .


Similarly, ∆⁴y−1 = ∆⁴y−2 + ∆⁵y−2 , etc.

Substituting for ∆²y0 , ∆³y0 , ∆⁴y0 , ... in equation (14.1), and rearranging, we get

yp = y0 + p ∆y0 + ( p(p − 1)/2! ) ∆²y−1 + ( (p + 1)p(p − 1)/3! ) ∆³y−1 + ( (p + 1)p(p − 1)(p − 2)/4! ) ∆⁴y−2 + ... (14.2)

This is called Gauss-Forward interpolation formula.


The central difference table (1st to 5th differences; central-difference equivalents in parentheses):

x−3  y−3
          ∆y−3 (= δy−5/2)
x−2  y−2                   ∆²y−3 (= δ²y−2)
          ∆y−2 (= δy−3/2)                  ∆³y−3 (= δ³y−3/2)
x−1  y−1                   ∆²y−2 (= δ²y−1)                  ∆⁴y−3 (= δ⁴y−1)
          ∆y−1 (= δy−1/2)                  ∆³y−2 (= δ³y−1/2)                ∆⁵y−3 (= δ⁵y−1/2)
x0   y0  ← Central line    ∆²y−1 (= δ²y0)                   ∆⁴y−2 (= δ⁴y0)
          ∆y0 (= δy1/2)                    ∆³y−1 (= δ³y1/2)                 ∆⁵y−2 (= δ⁵y1/2)
x1   y1                    ∆²y0 (= δ²y1)                    ∆⁴y−1 (= δ⁴y1)
          ∆y1 (= δy3/2)                    ∆³y0 (= δ³y3/2)
x2   y2                    ∆²y1 (= δ²y2)
          ∆y2 (= δy5/2)
x3   y3


Module 1: Numerical Analysis

We know ∆y0 = δy1/2 , ∆²y−1 = δ²y0 , ∆³y−1 = δ³y1/2 and ∆⁴y−2 = δ⁴y0 ; using these in (14.2), we write the equation in (14.2) in terms of the central differences as:

yp = y0 + p δy1/2 + ( p(p − 1)/2! ) δ²y0 + ( (p + 1)p(p − 1)/3! ) δ³y1/2 + ( (p + 1)p(p − 1)(p − 2)/4! ) δ⁴y0 + ... (14.3)

This formula can be used directly to interpolate the function at the centre of the data, i.e., for values of p with 0 < p < 1 .

14.3 Gauss-Backward Interpolation Formula

We have ∆y0 − ∆y−1 = ∆²y−1

⇒ ∆y0 = ∆y−1 + ∆²y−1 ,

∆²y0 = ∆²y−1 + ∆³y−1 , ∆³y0 = ∆³y−1 + ∆⁴y−1 , etc.

Also, ∆³y−1 = ∆³y−2 + ∆⁴y−2 , ∆⁴y−1 = ∆⁴y−2 + ∆⁵y−2 , etc.
Substituting these in equation (1), we get

p ( p − 1) 2 p ( p − 1)( p − 2) 3
y p= y0 + p ( ∆y−1 + ∆ 2 y−1 ) + ( ∆ y−1 + ∆ 3 y−1 ) + ( ∆ y−1 + ∆ 4 y−1 ) + ...
2! 3!

( p + 1) p 2 ( p + 1) p ( p − 1) 3
= y0 + p∆y−1 +
2!
∆ y−1 +
3!
( ∆ y−2 + ∆ 4 y−2 ) + ...


or

yp = y0 + p ∆y−1 + ( p(p + 1)/2! ) ∆²y−1 + ( (p + 1)p(p − 1)/3! ) ∆³y−2 + ( (p + 2)(p + 1)p(p − 1)/4! ) ∆⁴y−2 + ... (14.4)

This is called Gauss-Backward interpolation formula. This is written using the central differences as:

yp = y0 + p δy−1/2 + ( p(p + 1)/2! ) δ²y0 + ( (p + 1)p(p − 1)/3! ) δ³y−1/2 + ( (p + 2)(p + 1)p(p − 1)/4! ) δ⁴y0 + ... (14.5)

The formulae given in (14.2) and (14.4), or (14.3) and (14.5), have limited utility by themselves, but are useful in deriving the important method known as Stirling's method.
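To see formula (14.2) in action, the following sketch of mine (not from the text; the parameter i0 indexes the chosen central node x0) evaluates the Gauss forward expansion from an ordinary difference table:

```python
def gauss_forward(xs, ys, i0, x):
    """Gauss forward formula (14.2), expanded about the central node xs[i0]:
    y_p = y0 + p Δy0 + p(p-1)/2! Δ²y_-1 + (p+1)p(p-1)/3! Δ³y_-1 + ..."""
    n = len(xs)
    h = xs[1] - xs[0]
    p = (x - xs[i0]) / h
    diff = [list(ys)]                      # diff[k][i] = Δ^k y_i
    for k in range(1, n):
        prev = diff[-1]
        diff.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    total, coeff = float(ys[i0]), p
    for k in range(1, n):
        idx = i0 - k // 2                  # Δ^k y_{-⌊k/2⌋}
        if not 0 <= idx < len(diff[k]):
            break
        total += coeff * diff[k][idx]
        m = k + 1                          # next coefficient factor: (p − m/2) or (p + m//2)
        coeff *= ((p - m // 2) if m % 2 == 0 else (p + m // 2)) / m
    return total
```

Checked on the cubic data of Lesson 13 ( (0,1), (1,2), (2,1), (3,10) ) with x0 taken as the node 1, it reproduces f(3) = 10 and f(1.5) = 1, the exact values of 2x³ − 7x² + 6x + 1.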

Keywords: Stirling’s Method, Central Differences, Gauss-Forward Interpolation


Formula, Interpolation near the Middle of a Tabulated Set

References

Jain, M. K., Iyengar. S.R.K., Jain. R.K. (2008). Numerical Methods. Fifth Edition,
New Age International Publishers, New Delhi.

Atkinson, E Kendall. (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.

Suggested Reading


Scheid, Francis. (1989). Numerical Analysis. Second Edition, McGraw-Hill


Publishers, New York.

Sastry, S.S. (2005). Introductory Methods of Numerical Analysis. Fourth Edition,


Prentice Hall of India Publishers, New Delhi.



Module 1: Numerical Analysis

Lesson 15

Everett’s Central Difference Interpolation

15.1 Introduction

We have the Gauss forward interpolation formula as

yp = y0 + p ∆y0 + ( p(p − 1)/2! ) ∆²y−1 + ( (p + 1)p(p − 1)/3! ) ∆³y−1 + ( (p + 1)p(p − 1)(p − 2)/4! ) ∆⁴y−2

+ ( (p + 2)(p + 1)p(p − 1)(p − 2)/5! ) ∆⁵y−2 + ... (15.1)

15.2 Everett’s Formula

Eliminating the odd differences ∆y0 , ∆³y−1 , ∆⁵y−2 , etc. by

∆y0 = y1 − y0 , ∆³y−1 = ∆²y0 − ∆²y−1 , ∆⁵y−2 = ∆⁴y−1 − ∆⁴y−2 , etc., (15.1) becomes

yp = y0 + p ( y1 − y0 ) + ( p(p − 1)/2! ) ∆²y−1 + ( (p + 1)p(p − 1)/3! ) ( ∆²y0 − ∆²y−1 )

+ ( (p + 1)p(p − 1)(p − 2)/4! ) ∆⁴y−2 + ( (p + 2)(p + 1)p(p − 1)(p − 2)/5! ) ( ∆⁴y−1 − ∆⁴y−2 ) + ...

= (1 − p) y0 + p y1 − ( p(p − 1)(p − 2)/3! ) ∆²y−1 + ( (p + 1)p(p − 1)/3! ) ∆²y0

− ( (p + 1)p(p − 1)(p − 2)(p − 3)/5! ) ∆⁴y−2 + ( (p + 2)(p + 1)p(p − 1)(p − 2)/5! ) ∆⁴y−1 + ...

(15.2)

This is known as Everett’s formula.


This formula is extensively used as it involves only even differences on the central line and the line below it.

Example 1: The data given below represents the function y = log10 x . Use Everett's formula to find y(337.5) :

x :  310      320      330      340      350      360
y :  2.49136  2.50515  2.51851  2.53148  2.54407  2.55630

Take the data as:

x−2 = 310 , f−2 = 2.49136 ,
x−1 = 320 , f−1 = 2.50515 ,
x0 = 330 , f0 = 2.51851 ,
x1 = 340 , f1 = 2.53148 ,
x2 = 350 , f2 = 2.54407 ,
x3 = 360 , f3 = 2.55630 ,

h = 10 , p = ( x − 330 )/10 .
10
y        ∆y       ∆²y       ∆³y      ∆⁴y       ∆⁵y
2.49136
         0.01379
2.50515           −0.00043
         0.01336            0.00004
2.51851           −0.00039           −0.00003
         0.01297            0.00001            0.00004
2.53148           −0.00038           0.00001
         0.01259            0.00002
2.54407           −0.00036
         0.01223
2.55630

Take x = 337.5 , p = ( 337.5 − 330 )/10 = 0.75 .

To avoid the terms with negative sign, putting q = 1 − p in equation (15.2), we get

yp = q y0 + ( q(q² − 1²)/3! ) ∆²y−1 + ( q(q² − 1²)(q² − 2²)/5! ) ∆⁴y−2 + ...

+ p y1 + ( p(p² − 1²)/3! ) ∆²y0 + ( p(p² − 1²)(p² − 2²)/5! ) ∆⁴y−1 + ...

q = 1 − p = 0.25 .

∴ y p = 0.62963 + 0.00002 − 0.0000002 + 1.89861 + 0.00002 + 0.00000001 = 2.52828
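The symmetric form above can be evaluated mechanically from a forward-difference table. The following is a minimal sketch (the helper names `forward_differences` and `everett` are illustrative, not from the source), applied to the table of this example:

```python
# Sketch of Everett's formula: builds a forward-difference table and evaluates
#   y_p = q*y0 + q(q^2-1)/3! d2y_{-1} + q(q^2-1)(q^2-4)/5! d4y_{-2}
#       + p*y1 + p(p^2-1)/3! d2y_0  + p(p^2-1)(p^2-4)/5! d4y_{-1},   q = 1 - p.

def forward_differences(ys):
    """Return the full table: table[k][i] = Delta^k y_i."""
    table = [list(ys)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

def everett(xs, ys, x):
    h = xs[1] - xs[0]
    i0 = max(i for i in range(len(xs) - 1) if xs[i] <= x)  # central placement
    p = (x - xs[i0]) / h
    q = 1.0 - p
    d = forward_differences(ys)
    y0, y1 = ys[i0], ys[i0 + 1]
    d2m1, d2_0 = d[2][i0 - 1], d[2][i0]      # second differences
    d4m2, d4m1 = d[4][i0 - 2], d[4][i0 - 1]  # fourth differences
    return (q * y0 + q * (q**2 - 1) / 6 * d2m1 + q * (q**2 - 1) * (q**2 - 4) / 120 * d4m2
            + p * y1 + p * (p**2 - 1) / 6 * d2_0 + p * (p**2 - 1) * (p**2 - 4) / 120 * d4m1)

xs = [310, 320, 330, 340, 350, 360]
ys = [2.49136, 2.50515, 2.51851, 2.53148, 2.54407, 2.55630]
y = everett(xs, ys, 337.5)   # ~ 2.52828, as in the hand computation
```

Note that the function needs at least two nodes on each side of the interpolation point, which is exactly the "central" placement Everett's formula assumes.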

Exercise:

1. Interpolate from the following data using Everett's formula:

x: 20    24    28    32
f: 2854  3162  3544  3992


Keywords: Everett’s Formula, Gauss Forward Interpolation


References

Jain, M. K., Iyengar. S.R.K., Jain. R.K. (2008). Numerical Methods. Fifth Edition,
New Age International Publishers, New Delhi.

Atkinson, E Kendall. (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.

Suggested Reading

Scheid, Francis. (1989). Numerical Analysis. Second Edition, McGraw-Hill


Publishers, New York.

Sastry, S.S. (2005). Introductory Methods of Numerical Analysis. Fourth Edition,


Prentice Hall of India Publishers, New Delhi.

Module 1: Numerical Analysis

Lesson 16

Stirling’s and Bessel’s Formula

16.1 Stirling’s Formula

This is obtained by taking the mean of the Gauss Forward and Backward
interpolation formulae.

This is written as:

y_p = y0 + p(∆y0 + ∆y−1)/2 + (p²/2!)∆²y−1 + (p(p² − 1)/3!)(∆³y−1 + ∆³y−2)/2 + (p²(p² − 1)/4!)∆⁴y−2 + ...   (16.1)

Writing this using central differences, we obtain

y_p = y0 + (p/2)(δy_{1/2} + δy_{−1/2}) + (p²/2!)δ²y0 + (p(p² − 1²)/(3!·2))(δ³y_{1/2} + δ³y_{−1/2}) + (p²(p² − 1²)/4!)δ⁴y0 + ...   (16.2)

This is called the Stirling’s formula.

Example 1: Find the value of e^x at x = 0.644 from the below given table:

x 0.61 0.62 0.63 0.64 0.65


y = ex 1.840431 1.858928 1.87761 1.896481 1.91554


Solution:

x = 0.644, x0 = 0.64, p = (0.644 − 0.64)/0.01 = 0.4, y0 = 1.896481.

By forming the difference table (left as an exercise!) we note that ∆y−1 = 0.018871, ∆y0 = 0.019060, ∆²y−1 = 0.000189, and all higher order differences are approximately zero. Substituting these in Stirling's formula (16.1), we get

y(0.644) ≈ 1.896481 + (0.4)(0.018871 + 0.019060)/2 + ((0.4)²/2)(0.000189) = 1.904082.
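The truncated computation is short enough to code directly. A minimal sketch, assuming only the three tabulated values nearest x0 (the helper name `stirling_2nd` is illustrative):

```python
# Stirling's formula truncated after the p^2/2! term, applied to the e^x table.

def stirling_2nd(x0, h, y_m1, y0, y1, x):
    """Stirling's formula using differences through Delta^2 y_{-1}."""
    p = (x - x0) / h
    dym1 = y0 - y_m1          # Delta y_{-1}
    dy0 = y1 - y0             # Delta y_0
    d2ym1 = dy0 - dym1        # Delta^2 y_{-1}
    return y0 + p * (dy0 + dym1) / 2 + p**2 / 2 * d2ym1

y = stirling_2nd(0.64, 0.01, 1.87761, 1.896481, 1.91554, 0.644)  # ~ 1.904082
```

The result agrees with e^0.644 to six decimal places, confirming that the neglected higher differences are negligible here.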

16.2 Bessel’s Formula

We know ∆ 2 y0 − ∆ 2 y−1 = ∆ 3 y−1

⇒ ∆ 2 y−1 = ∆ 2 y0 − ∆ 3 y−1 .

Similarly ∆ 4 y−1 − ∆ 4 y−2 = ∆ 5 y−2

⇒ ∆ 4 y−2 = ∆ 4 y−1 − ∆ 5 y−2

Using these in the Gauss forward interpolation formula, we obtain

y_p = y0 + p∆y0 + (p(p−1)/2!)[(1/2)∆²y−1 + (1/2)∆²y−1] + ((p+1)p(p−1)/3!)∆³y−1 + ((p+1)p(p−1)(p−2)/4!)[(1/2)∆⁴y−2 + (1/2)∆⁴y−2] + ...

Replacing one half of ∆²y−1 by ∆²y0 − ∆³y−1 and one half of ∆⁴y−2 by ∆⁴y−1 − ∆⁵y−2:

= y0 + p∆y0 + (1/2)(p(p−1)/2!)∆²y−1 + (1/2)(p(p−1)/2!)(∆²y0 − ∆³y−1) + ((p+1)p(p−1)/3!)∆³y−1 + (1/2)((p+1)p(p−1)(p−2)/4!)∆⁴y−2 + (1/2)((p+1)p(p−1)(p−2)/4!)(∆⁴y−1 − ∆⁵y−2) + ...

Collecting terms,

y_p = y0 + p∆y0 + (p(p−1)/2!)(∆²y−1 + ∆²y0)/2 + ((p − 1/2)p(p−1)/3!)∆³y−1 + ((p+1)p(p−1)(p−2)/4!)(∆⁴y−2 + ∆⁴y−1)/2 + ...   (16.3)

This is known as the Bessel’s formula.

Example 2: Using Bessel's formula, obtain y(25) from the data tabulated below.

Solution:

Taking x0 = 24, h = 4, y0 = 3162, we have p = (x − 24)/4.
4
x    y      ∆y    ∆²y   ∆³y
20   2854
            308
24   3162          74
            382          -8
28   3544          66
            448
32   3992

Taking x = 25, p = (25 − 24)/4 = 0.25.

Bessel's formula is:

y_p = y0 + p∆y0 + (p(p−1)/2!)(∆²y−1 + ∆²y0)/2 + ((p − 1/2)p(p−1)/3!)∆³y−1 + ...

∴ y(25) = 3162 + (0.25)(382) + ((0.25)(−0.75)/2)((74 + 66)/2) + ((−0.25)(0.25)(−0.75)/6)(−8)
= 3250.87 .
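The same truncated Bessel formula can be coded as a small function taking the needed differences directly from the table. A sketch (the function name `bessel_3rd` is illustrative):

```python
# Bessel's formula (16.3) truncated after the third-difference term.

def bessel_3rd(y0, dy0, d2ym1, d2y0, d3ym1, p):
    """Bessel's formula using differences through Delta^3 y_{-1}."""
    return (y0 + p * dy0
            + p * (p - 1) / 2 * (d2ym1 + d2y0) / 2
            + (p - 0.5) * p * (p - 1) / 6 * d3ym1)

# Example 2 data: y(20)=2854, y(24)=3162, y(28)=3544, y(32)=3992; x0 = 24, h = 4
y25 = bessel_3rd(y0=3162, dy0=382, d2ym1=74, d2y0=66, d3ym1=-8, p=0.25)  # 3250.875
```

The exact value of the truncated series is 3250.875, which the text rounds to 3250.87.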

Note:

1. If the value of p lies between −1/4 and 1/4, prefer Stirling's formula; it gives a better approximation.
2. If p lies between 1/4 and 3/4, Bessel's or Everett's formula gives a better approximation.

Exercises:

1. Using Stirling's formula, find the interpolated value from the given data.

2. Find the interpolated value using Bessel's formula from

x: 20     25     30     35     40
y: 11.47  12.78  13.76  14.49  15.05

3. Tabulate f(x) = e^{−x} on an interval with equal spacing. Find an intermediate value using (i) Bessel's and (ii) Everett's formula.

Keywords: Bessel’s Formula, Stirling’s Formula

References

Jain, M. K., Iyengar. S.R.K., Jain. R.K. (2008). Numerical Methods. Fifth Edition,
New Age International Publishers, New Delhi.

Atkinson, E Kendall. (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.

Suggested Reading

Scheid, Francis. (1989). Numerical Analysis. Second Edition, McGraw-Hill


Publishers, New York.

Sastry, S.S. (2005). Introductory Methods of Numerical Analysis. Fourth Edition,


Prentice Hall of India Publishers, New Delhi.

Module 1: Numerical Analysis

Lesson 17

Newton’s Divided Difference Interpolation

17.1 Introduction

In Lagrange interpolation, the fundamental polynomials are constructed for writing


the interpolating polynomial. Suppose we found the fundamental polynomials for
the given data point set. If a data point is added to this set, then all the fundamental
polynomials must be reconstructed. This makes the process laborious. An easier way of
finding an interpolating polynomial is given by constructing the divided
differences.

17.2 Divided Difference

For the set {(x0, f0), (x1, f1)}, the linear interpolating polynomial p(x) = a0 + a1x is given by:

| p(x)   x    1 |
| f(x0)  x0   1 | = 0.
| f(x1)  x1   1 |

Expanding the determinant in terms of the first row, we get

p(x)(x0 − x1) − x[f(x0) − f(x1)] + [x1 f(x0) − x0 f(x1)] = 0,

which can be rewritten as

p(x) = f(x0) + (x − x0)(f(x1) − f(x0))/(x1 − x0)
     = f(x0) + (x − x0) f[x0, x1]   (17.1)


where f[x0, x1], defined as

f[x0, x1] = (f(x1) − f(x0))/(x1 − x0),

is called the first divided difference of f relative to x0 and x1.

Example 1: Given f(2) = 4 and f(2.5) = 5.5, find the linear interpolating polynomial using Newton's divided difference interpolation.

Solution:

Given x0 = 2, f0 = 4, x1 = 2.5, f1 = 5.5.

Newton's first divided difference:

f[x0, x1] = (f(x1) − f(x0))/(x1 − x0) = (5.5 − 4)/(2.5 − 2) = 1.5/0.5 = 3.

The interpolating polynomial is

p1(x) = f(x0) + (x − x0) f[x0, x1] = 4 + (x − 2)(3) = 3x − 2.

17.3 Generalization to Data Points

The second divided difference of f relative to the points x0, x1, x2 is written as:

f[x0, x1, x2] = (f[x1, x2] − f[x0, x1])/(x2 − x0)


where f[x1, x2], the first divided difference of f relative to x1 and x2, is given by:

f[x1, x2] = (f(x2) − f(x1))/(x2 − x1).

The third divided difference of f relative to x0, x1, x2, x3 is given by

f[x0, x1, x2, x3] = (f[x1, x2, x3] − f[x0, x1, x2])/(x3 − x0).

In the same way, the N-th divided difference is written as

f[x0, x1, ..., xN] = (f[x1, x2, ..., xN] − f[x0, x1, ..., xN−1])/(xN − x0).

Let p ( x) = a0 + ( x − x0 ) a1 + ... + ( x − x0 ) ( x − x1 ) ...( x − xN −1 ) aN be the interpolating

polynomial for the distinct points {( x0 , f 0 ) , ( x1 , f1 ) ,..., ( xN , f N )} .

Substituting x = x0 in the above, we get

p(x0) = a0 = f(x0) = f[x0].

Putting x = x1, we obtain

p(x1) = a0 + (x1 − x0) a1 = f(x1)

⇒ a1 = (f(x1) − f(x0))/(x1 − x0) = f[x0, x1].


Putting x = x2, we get

p(x2) = f(x0) + (x2 − x0) f[x0, x1] + (x2 − x0)(x2 − x1) a2 = f(x2).

On simplifying, we get

a2 = (f[x1, x2] − f[x0, x1])/(x2 − x0) = f[x0, x1, x2].

Proceeding in this way, we show that

aN = f[x0, x1, ..., xN].

Then we obtain the divided difference interpolating polynomial as

pN(x) = f(x0) + (x − x0) f[x0, x1] + ... + (x − x0)(x − x1)...(x − xN−1) f[x0, x1, ..., xN].   (17.2)

This formula is easily extended to N + 2 data points (i.e., the addition of one more data point to the previous data set) as

pN+1(x) = f(x0) + (x − x0) f[x0, x1] + ... + (x − x0)...(x − xN−1) f[x0, x1, ..., xN] + (x − x0)...(x − xN) f[x0, x1, ..., xN, xN+1].   (17.3)

It amounts to finding the next divided difference and adding the corresponding term to the previously obtained interpolating polynomial.

Example 2: Find the Newton divided difference interpolating polynomial for the
data.


x: 0  1  3
f: 1  3  55

Solution:

x0 = 0, x1 = 1, x2 = 3, f0 = 1, f1 = 3, f2 = 55.

f[0, 1] = (f(1) − f(0))/(1 − 0) = (3 − 1)/1 = 2,

f[1, 3] = (55 − 3)/(3 − 1) = 52/2 = 26,

∴ f[0, 1, 3] = (26 − 2)/(3 − 0) = 24/3 = 8.

∴ Newton's divided difference interpolating polynomial is

p2(x) = f(0) + (x − 0) f[0,1] + (x − 0)(x − 1) f[0,1,3]
      = 1 + 2x + 8x(x − 1)
      = 8x² − 6x + 1.
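The divided-difference recursion and the Newton form are easy to code. A minimal sketch (the function names are illustrative, not from the source):

```python
# Newton's divided-difference interpolation for the example data.

def divided_difference_coeffs(xs, fs):
    """Return [f[x0], f[x0,x1], ..., f[x0,...,xN]] by the standard recursion."""
    coeffs = list(fs)
    n = len(xs)
    for k in range(1, n):
        # sweep from the bottom so coeffs[i] becomes f[x_{i-k}, ..., x_i]
        for i in range(n - 1, k - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - k])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton form by nested (Horner-like) multiplication."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[i]) + coeffs[i]
    return result

xs, fs = [0, 1, 3], [1, 3, 55]
c = divided_difference_coeffs(xs, fs)   # [1, 2, 8], i.e. p(x) = 1 + 2x + 8x(x-1)
```

Adding another data point only appends one coefficient; the earlier ones are unchanged, which is exactly the advantage over recomputing Lagrange fundamental polynomials.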

Exercises:
1. Interpolate the following data using Newton's divided difference interpolation.

x:    1.5  3.0   5.0    6.5    8.0
f(x): 5.0  31.0  131.0  282.0  521.0

2. If f(x) = 1/x², find the divided difference f[x0, x1, x2].


3. From the data


x:    -1  1  4   7
f(x): -2  0  63  342

(i) Find the Lagrange interpolating polynomial.

(ii) Find the Newton divided difference interpolating polynomial.

Keywords: Divided Difference

References

Jain, M. K., Iyengar. S.R.K., Jain. R.K. (2008). Numerical Methods. Fifth Edition,
New Age International Publishers, New Delhi.

Atkinson, E Kendall. (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.

Suggested Reading

Scheid, Francis. (1989). Numerical Analysis. Second Edition, McGraw-Hill


Publishers, New York.

Sastry, S.S. (2005). Introductory Methods of Numerical Analysis. Fourth Edition,


Prentice Hall of India Publishers, New Delhi.



Module 1: Numerical Analysis

Lesson 18

Numerical Differentiation

18.1 Introduction

Given a continuous function f ( x) on an interval [a, b] , it can be differentiated on


(a, b) . However, when f ( x) is a complicated function or when it is given in a data

set, we use numerical methods to find its derivatives. There are many ways of
finding derivatives of a function given in its data form. Important among these are
methods based on interpolation and methods based on finite difference operators.
We discuss these methods through examples.

18.2 Methods based on Interpolation

Let (xi, fi), i = 0, 1, 2, ..., N be the (N + 1) data points representing a function y = f(x). The Lagrange interpolating polynomial for this set of data points is given by

PN(x) = Σ_{k=0}^{N} lk(x) fk   (18.1)

where lk(x) = π(x)/((x − xk) π′(xk)) and π(x) = (x − x0)(x − x1)...(x − xN).
Differentiating (18.1) w.r.t. x, we obtain

PN′(x) = Σ_{k=0}^{N} lk′(x) fk.

Case 1: Linear interpolation: For ( x0 , f 0 ) , ( x1 , f1 ) :


We have P1(x) = l0(x) f0 + l1(x) f1

where l0(x) = (x − x1)/(x0 − x1) and l1(x) = (x − x0)/(x1 − x0).


P1(x) = (x − x1)/(x0 − x1) f0 + (x − x0)/(x1 − x0) f1.

The derivative of this is

P1′(x) = f0/(x0 − x1) + f1/(x1 − x0)   (18.2)

Case 2: Quadratic interpolation: For the data {( x0 , f 0 ) , ( x1 , f1 ) , ( x2 , f 2 )} , we have


the interpolating polynomial as P2 ( x) = l0 ( x) f 0 + l1 ( x) f1 + l2 ( x) f 2 .
Its derivative is
P2′( x) = l0′ ( x) f 0 + l1′ ( x) f1 + l2′ ( x) f 2

where l0′(x) = (2x − x1 − x2)/((x0 − x1)(x0 − x2)), l1′(x) = (2x − x0 − x2)/((x1 − x0)(x1 − x2)), l2′(x) = (2x − x0 − x1)/((x2 − x0)(x2 − x1)).

∴ P2′(x) = (2x − x1 − x2)/((x0 − x1)(x0 − x2)) f0 + (2x − x0 − x2)/((x1 − x0)(x1 − x2)) f1 + (2x − x0 − x1)/((x2 − x0)(x2 − x1)) f2   (18.3)

At x = x1: P2′(x1) = (x1 − x2)/((x0 − x1)(x0 − x2)) f0 + (2x1 − x0 − x2)/((x1 − x0)(x1 − x2)) f1 + (x1 − x0)/((x2 − x0)(x2 − x1)) f2.

Similarly we can write P2′( x) at any nodal point or a non-nodal point.


In the same manner, l0″(x) = 2/((x0 − x1)(x0 − x2)), l1″(x) = 2/((x1 − x0)(x1 − x2)) and l2″(x) = 2/((x2 − x0)(x2 − x1)), so the second derivative of P2(x) is

P2″(x) = 2[f0/((x0 − x1)(x0 − x2)) + f1/((x1 − x0)(x1 − x2)) + f2/((x2 − x0)(x2 − x1))]   (18.4)

This gives a way of finding an approximation to f ′( x) at every x ∈ [ x0 , xN ] by

finding the interpolating polynomial Pn ( x) for the given set of data points.


Example 1: Find f ′(2) and f ′′(2) for the below given data set.
xi 2 2.2 2.6
fi 0.69315 0.78846 0.95551

Solution:

We have

f′(x0) = f′(2) = (2x0 − x1 − x2)/((x0 − x1)(x0 − x2)) f0 + (x0 − x2)/((x1 − x0)(x1 − x2)) f1 + (x0 − x1)/((x2 − x0)(x2 − x1)) f2

= ((4 − 2.2 − 2.6)/((−0.2)(−0.6)))(0.69315) + ((2 − 2.6)/((0.2)(−0.4)))(0.78846) + ((2 − 2.2)/((0.6)(0.4)))(0.95551)

= 0.49619.

f″(x0) = 2[f0/((x0 − x1)(x0 − x2)) + f1/((x1 − x0)(x1 − x2)) + f2/((x2 − x0)(x2 − x1))].

∴ f″(2) = 2[0.69315/((−0.2)(−0.6)) + 0.78846/((0.2)(−0.4)) + 0.95551/((0.6)(0.4))] = −0.19642.

Thus f′(2) = 0.49619 and f″(2) = −0.19642.

In the above, we used Lagrange interpolation method. Similarly one can use
Newton interpolation methods also.
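The three-point formulas (18.3)-(18.4) evaluated at the left node translate directly into code. A minimal sketch reproducing Example 1 (the function names are illustrative):

```python
# Derivatives of the quadratic Lagrange interpolant at x = x0.

def d1_at_x0(x0, x1, x2, f0, f1, f2):
    """First derivative of the quadratic interpolant, evaluated at x0."""
    return ((2*x0 - x1 - x2) / ((x0 - x1) * (x0 - x2)) * f0
            + (x0 - x2) / ((x1 - x0) * (x1 - x2)) * f1
            + (x0 - x1) / ((x2 - x0) * (x2 - x1)) * f2)

def d2_const(x0, x1, x2, f0, f1, f2):
    """Second derivative of the quadratic interpolant (constant in x)."""
    return 2 * (f0 / ((x0 - x1) * (x0 - x2))
                + f1 / ((x1 - x0) * (x1 - x2))
                + f2 / ((x2 - x0) * (x2 - x1)))

fp = d1_at_x0(2, 2.2, 2.6, 0.69315, 0.78846, 0.95551)    # ~ 0.49619
fpp = d2_const(2, 2.2, 2.6, 0.69315, 0.78846, 0.95551)   # ~ -0.19642
```

Note that the nodes need not be equally spaced, which is the main reason to prefer the interpolation-based formulas over the finite-difference operators of the next section.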

18.3 Methods based on Finite Differences

For a given equally spaced data set, we have learnt that Ef(x) = e^{hD} f(x),
i.e., e^{hD} ≡ E ⇒ hD ≡ log E   (18.5)
where E is the shift operator, D is the differentiation operator, h being the
constant step size. Using the relation between E , ∆ and ∇ , we write


hD ≡ log E ≡ log(1 + ∆) ≡ ∆ − (1/2)∆² + (1/3)∆³ − ...
           ≡ −log(1 − ∇) ≡ ∇ + (1/2)∇² + (1/3)∇³ + ...   (18.6)

Then h (df/dx)(xk) ≡ hD f(xk) ≡ ∆fk − (1/2)∆²fk + (1/3)∆³fk − ...
                              ≡ ∇fk + (1/2)∇²fk + (1/3)∇³fk + ...   (18.7)

In general, we can write higher order derivatives in terms of higher order differences; the n-th derivative operator dⁿ/dxⁿ satisfies

hⁿDⁿ ≡ ∆ⁿ − (n/2)∆ⁿ⁺¹ + (n(3n+5)/24)∆ⁿ⁺² − ...
     ≡ ∇ⁿ + (n/2)∇ⁿ⁺¹ + (n(3n+5)/24)∇ⁿ⁺² + ...   (18.8)

In particular, when n = 2 and at x = xk,

h² (d²f/dx²)(xk) ≡ ∆²fk − ∆³fk + (11/12)∆⁴fk − ...
               ≡ ∇²fk + ∇³fk + (11/12)∇⁴fk + ...   (18.9)

Example 2: Find dy/dx and d²y/dx² at x = 1.2 using forward differences from the following table:
x: 1 1.2 1.4 1.6 1.8 2.0
y ( x) : 2.7183 3.3201 4.0552 4.953 6.0496 7.3891

Solution:

Take x0 = 1.2 , y0 = 3.3201 , h = 0.2 . We form the difference table for the given data
set as:


dy 1 1 1 
=  ∆f (1.2) − ∆ 2 f (1.2) + ∆ 3 f (1.2) − ...
dx x =1.2 h 2 3 

1  1 1 
=  0.7351 − ( 0.1627 ) + (0.0361) − ...
0.2  2 3 

≈ 3.3205 .
x     y        ∆        ∆²       ∆³       ∆⁴       ∆⁵

1.2   3.3201
               0.7351
1.4   4.0552            0.1627
               0.8978            0.0361
1.6   4.9530            0.1988            0.0080
               1.0966            0.0441            0.0014
1.8   6.0496            0.2429            0.0094
               1.3395            0.0535
2.0   7.3891            0.2964
               1.6359
2.2   9.0250

d²y/dx²|_{x=1.2} = (1/h²)[∆²fk − ∆³fk + ...]

= (1/0.04)[0.1627 − 0.0361] ≈ 3.165.
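The series (18.7) and (18.9) can be evaluated mechanically from the leading entries of each difference column. A sketch keeping all terms through ∆⁵ (the helper name is illustrative; keeping more terms than the truncated hand computation above brings both results close to e^1.2 = 3.3201):

```python
# Forward-difference derivative series at x0, applied to the e^x table.

def forward_differences(ys):
    table = [list(ys)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

h = 0.2
ys = [3.3201, 4.0552, 4.9530, 6.0496, 7.3891, 9.0250]
d = [row[0] for row in forward_differences(ys)[1:]]   # Delta^k y_0, k = 1..5

# hD  = D - D^2/2 + D^3/3 - D^4/4 + D^5/5   (series in Delta)
d1 = (d[0] - d[1]/2 + d[2]/3 - d[3]/4 + d[4]/5) / h       # ~ 3.3203
# (hD)^2 = D^2 - D^3 + 11/12 D^4 - 5/6 D^5
d2 = (d[1] - d[2] + 11*d[3]/12 - 5*d[4]/6) / h**2         # ~ 3.3192
```

Both values approximate e^1.2 well, illustrating how quickly the difference series converges for a smooth tabulated function.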

Example 3: Given: ( xi , yi ) , i = 1, 2,3, 4,5, 6 as:

x 3 4 5 6 7 8
y 0.205 0.24 0.259 0.262 0.25 0.224

Find the value of x for which y attains its maximum.


Solution:


The difference table is:


x y ∆ ∆2 ∆3
3 0.205
0.035
4 0.24 -0.016
0.019 0.0
5 0.259 -0.016
0.003 0.001
6 0.262 -0.015
-0.012 0.001
7 0.25 -0.014
-0.026
8 0.224

Take x0 = 3 , y0 = 0.205 , h = 1.0 .


Let us now obtain the interpolating polynomial using Newton's forward difference interpolation. It is y(p) = y0 + p∆y0 + (p(p − 1)/2)∆²y0, where p = (x − x0)/h, i.e.,

y = 0.205 + (0.035)p − (0.016/2) p(p − 1).

Since ∆²y0 = −0.016 < 0, the stationary point of y is a maximum. It is obtained by solving dy/dp = 0:

0.035 − (0.016)(2p − 1)/2 = 0
⇒ 0.035 − 0.008(2p − 1) = 0
⇒ 2p − 1 = 0.035/0.008 = 4.375
⇒ p = 2.6875.


Thus x = x0 + ph = 3 + 2.6875 = 5.6875.

Hence the maximum of y(x) is attained at x = 5.6875, and the maximum value is 0.2628.
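The closed-form stationary point of the quadratic interpolant is one line of algebra, so it can be checked directly. A minimal sketch (variable names are illustrative):

```python
# Stationary point of y(p) = y0 + p*dy0 + p(p-1)/2 * d2y0 for Example 3.

y0, dy0, d2y0, x0, h = 0.205, 0.035, -0.016, 3.0, 1.0

# dy/dp = dy0 + (2p - 1)/2 * d2y0 = 0  =>  p = 1/2 - dy0/d2y0
p_star = 0.5 - dy0 / d2y0                                   # 2.6875
x_star = x0 + p_star * h                                    # 5.6875
y_star = y0 + p_star * dy0 + p_star * (p_star - 1) / 2 * d2y0  # ~ 0.2628
# d2y0 < 0, so this stationary point is a maximum.
```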

Exercises

1. Find the value of cos1.747 using the below table:


x: 1.7 1.74 1.78 1.82
sin x : 0.9916 0.9857 0.9781 0.9691

2. Given sin 0° = 0, sin 10° = 0.1736, sin 20° = 0.3420, sin 30° = 0.5, sin 40° = 0.6428.

Find
(i) sin 23°
(ii) cos 10°
(iii) −sin 20°
using the method based on finite differences.

3. Find the value of x for which y is maximum from the below tabulated values
for y ( x) .
x: 1.2 1.3 1.4 1.5 1.6
y: 0.93 0.96 0.98 0.99 1.0

Keywords: Finite differences, Lagrange interpolation

References

Jain, M. K., Iyengar, S.R.K., Jain, R.K. (2008). Numerical Methods. Fifth Edition,
New Age International Publishers, New Delhi.

Atkinson, E. Kendall. (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.

Suggested Reading

Scheid, Francis. (1989). Numerical Analysis. Second Edition, McGraw-Hill
Publishers, New York.

Sastry, S.S. (2005). Introductory Methods of Numerical Analysis. Fourth Edition,
Prentice Hall of India Publishers, New Delhi.



Module 1: Numerical Analysis

Lesson 19

Numerical Integration

19.1 Introduction

Consider the data set S for a given function y = f ( x) which is not known
explicitly where S = {( x0 , y0 ) , ( x1 , y1 ) ,... ( xN , yN )} .

It is required to compute the value of the definite integral


I = ∫a^b y(x) dx   (19.1)

The Lagrange interpolating polynomial for the above data is given by

y(x) = Σ_{i=0}^{N} li(x) fi + (π(x)/(N + 1)!) f^(N+1)(ξ)

where π(x) = (x − x0)(x − x1)...(x − xN), x0 < ξ < xN, and li(x) is the Lagrange fundamental polynomial.

Replacing the function y(x) in the integral (19.1) by this expansion, we obtain

I = ∫a^b [Σ_{i=0}^{N} li(x) fi] dx + ∫a^b [(π(x)/(N + 1)!) f^(N+1)(ξ)] dx   (19.2)

= Σ_{i=0}^{N} [∫a^b li(x) dx] fi + RN

or I = Σ_{i=0}^{N} λi fi + RN   (19.3)

where λi = ∫a^b li(x) dx and RN is the remainder, RN = (1/(N + 1)!) ∫a^b π(x) f^(N+1)(ξ) dx.

Equation (19.3) gives an approximation to the integral value.

19.2 Newton-Cotes Formulae

Using Newton's forward difference interpolation polynomial for the given data set S, we now derive a general formula for the numerical integration of

I = ∫a^b y(x) dx   (19.4)

Consider the partition of the interval [a, b] as

a = x0 < x1 < ... < xN = b such that xN = x0 + Nh, i.e., h = (b − a)/N.

Using Newton's forward interpolation formula in the above integral, we obtain

I = ∫_{x0}^{xN} [y0 + p∆y0 + (p(p−1)/2!)∆²y0 + (p(p−1)(p−2)/3!)∆³y0 + ...] dx   (19.5)

where p = (x − x0)/h. The above integral can be written as

I = h ∫_0^N [y0 + p∆y0 + ((p² − p)/2!)∆²y0 + ((p³ − 3p² + 2p)/3!)∆³y0 + ...] dp   (19.6)

or I = h[p y0 + (p²/2)∆y0 + (p³/6 − p²/4)∆²y0 + (p⁴/24 − p³/6 + p²/6)∆³y0 + ...] evaluated from p = 0 to p = N

= Nh[y0 + (N/2)∆y0 + (N(2N−3)/12)∆²y0 + (N(N−2)²/24)∆³y0 + ...]   (19.7)

Case 1: With N = 1, we have

I = ∫_{x0}^{x1} y(x)dx = 1·h[y0 + (1/2)∆y0] = (h/2)(y0 + y1)   (19.8)

Case 2: With N = 2, we have

I = ∫_{x0}^{x2} y(x)dx = 2h[y0 + ∆y0 + (2·1/12)∆²y0]

= 2h[y0 + (y1 − y0) + (1/6)(y2 − 2y1 + y0)]

= h[2y1 + (1/3)y2 − (2/3)y1 + (1/3)y0]

= (h/3)[y0 + 4y1 + y2]   (19.9)

Case 3: With N = 3, we have

I = ∫_{x0}^{x3} y(x)dx = 3h[y0 + (3/2)∆y0 + (3/4)∆²y0 + (1/8)∆³y0]

= 3h[y0 + (3/2)(y1 − y0) + (3/4)(y2 − 2y1 + y0) + (1/8)(y3 − 3y2 + 3y1 − y0)]

= (3h/8)[y0 + 3y1 + 3y2 + y3]   (19.10)
xN

We now discuss the use of the case1 for evaluating the integral, I = ∫ y( x)dx
x0
.

N
We can write I = I1 + I 2 + I 3 + ... + I N = ∑ I j
j =1

xj

where I j = ∫
x j −1
y ( x)dx .

We take two consecutive data points at once and apply the formula given in
(19.5) for every pair of data points. Equation (19.4) is known as the Newton-
Cotes quadrate (Interpolation) formula. With different values of N = 1, 2,3,... ,we
derive different integration methods.

We obtain

I1 = ∫_{x0}^{x1} y(x)dx = h[y0 + (1/2)∆y0] = (h/2)[y0 + y1]

I2 = ∫_{x1}^{x2} y(x)dx = h[y1 + (1/2)∆y1] = (h/2)[y1 + y2]

I3 = ∫_{x2}^{x3} y(x)dx = h[y2 + (1/2)∆y2] = (h/2)[y2 + y3]

......

IN = ∫_{xN−1}^{xN} y(x)dx = h[yN−1 + (1/2)∆yN−1] = (h/2)[yN−1 + yN].

Now

I = ∫_{x0}^{xN} y(x)dx = ∫_{x0}^{x1} y(x)dx + ∫_{x1}^{x2} y(x)dx + ... + ∫_{xN−1}^{xN} y(x)dx

= I1 + I2 + ... + IN

= (h/2)[y0 + y1] + (h/2)[y1 + y2] + ... + (h/2)[yN−1 + yN]

= (h/2)[y0 + 2(y1 + y2 + ... + yN−1) + yN]   (19.11)

This is known as the trapezoidal rule: the method amounts to summing the areas of the N trapezoids formed by consecutive ordinates.

Example 1: Evaluate ∫_0^{1.2} 2e^x dx using the trapezoidal rule by taking h = 0.2.

Solution:

a = 0, b = 1.2, h = 0.2; we tabulate e^x at the nodal points as:

x:   0    0.2    0.4    0.6    0.8    1.0    1.2
e^x: 1    1.221  1.492  1.822  2.226  2.718  3.32

The trapezoidal rule, with the factor 2 of the integrand taken outside, gives

I = 2 · (h/2)[(y0 + y6) + 2(y1 + y2 + y3 + y4 + y5)]

= 0.2[(1 + 3.32) + 2(1.221 + 1.492 + 1.822 + 2.226 + 2.718)].

∴ I = 4.656.
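The composite rule codes up in a few lines. A minimal sketch for this example (the function name `trapezoid` is illustrative; the integrand's factor 2 is pulled outside the sum):

```python
# Composite trapezoidal rule for equally spaced ordinates, applied to Example 1.

def trapezoid(ys, h):
    """h/2 * (y0 + 2*(y1 + ... + y_{N-1}) + yN)."""
    return h / 2 * (ys[0] + 2 * sum(ys[1:-1]) + ys[-1])

ys = [1, 1.221, 1.492, 1.822, 2.226, 2.718, 3.32]  # e^x at x = 0, 0.2, ..., 1.2
I = 2 * trapezoid(ys, 0.2)   # ~ 4.656
```

For comparison, the exact value 2(e^1.2 − 1) = 4.6402 shows the O(h²) error of the rule at this step size.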

Example 2: Evaluate ∫ y(x)dx, where y is tabulated at five equally spaced points with h = 0.25 as y0 = 1, y1 = 0.8, y2 = 0.6667, y3 = 0.5714, y4 = 0.5.

Solution:

The trapezoidal rule gives

I = (h/2)[(y0 + y4) + 2(y1 + y2 + y3)]

= (0.25/2)[(1 + 0.5) + 2(0.8 + 0.6667 + 0.5714)]

= 0.697.

Exercises:

1. A solid of revolution is formed by rotating about the x-axis the area between the x-axis, two given ordinates, and a curve through points with given coordinates. Find the volume of the solid formed using the trapezoidal rule.

2. Evaluate the given integral using the trapezoidal rule.

Keywords: Newton-Cotes formulae, numerical integration


References

Jain, M. K., Iyengar, S.R.K., Jain, R.K. (2008). Numerical Methods. Fifth Edition,
New Age International Publishers, New Delhi.

Atkinson, E. Kendall. (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.

Suggested Reading

Scheid, Francis. (1989). Numerical Analysis. Second Edition, McGraw-Hill
Publishers, New York.

Sastry, S.S. (2005). Introductory Methods of Numerical Analysis. Fourth Edition,
Prentice Hall of India Publishers, New Delhi.

Module 1: Numerical Analysis

Lesson 20

Simpson’s one Third and Simpson’s Three Eighth Rules

20.1 Simpson’s One-Third Rule

It is obtained by taking N = 2 in the Newton-Cotes formula (19.7); we described this in Case 2 of Lesson 19. We divide the interval [a, b] into an even number of subintervals of equal length, giving an odd number of abscissas.

With N subintervals each of length h = (b − a)/N, the abscissas are x0, x1, ..., xN. Now

I = ∫_{x0}^{x2} y(x)dx + ∫_{x2}^{x4} y(x)dx + ... + ∫_{xN−2}^{xN} y(x)dx.

Using the Newton-Cotes formula with N = 2 for each of the above integrals, we get

I1 = (h/3)(y0 + 4y1 + y2), I2 = (h/3)(y2 + 4y3 + y4), ...

Adding all these values, we obtain

I = (h/3)[(y0 + yN) + 4(y1 + y3 + ... + yN−1) + 2(y2 + y4 + ... + yN−2)]   (20.1)

i.e., I = (h/3)[X0 + 4X1 + 2X2], where X0 = sum of the function values at the end points, X1 = sum of the function values at the odd numbered abscissas, and X2 = sum of the function values at the even numbered interior abscissas.

This formula is known as Simpson's one-third (1/3) rule of integration.


20.2 Simpson’s Three-Eighth Rule


Take N = 3 in the Newton-Cotes formula (19.7). This is described in Case 3 of Lesson 19. To apply this method, the number of subintervals should be taken as a multiple of 3.

The integral (with xN = x0 + Nh) is split as

I = ∫_{x0}^{xN} y(x)dx = ∫_{x0}^{x0+3h} y(x)dx + ∫_{x0+3h}^{x0+6h} y(x)dx + ... + ∫_{xN−3}^{xN} y(x)dx.

Applying Case 3 of Lesson 19 to each piece and adding, we obtain

I = (3h/8)[(y0 + yN) + 3(y1 + y2 + y4 + y5 + ... + yN−2 + yN−1) + 2(y3 + y6 + ... + yN−3)]   (20.2)

This is known as Simpson's three-eighth (3/8) rule.

Example 2: Evaluate ∫_0^6 dx/(1 + x²) by Simpson's 3/8 rule.

Solution:

Take h = 1, x0 = 0, x6 = 6, f(x) = 1/(1 + x²). The number of subintervals is 6, a multiple of 3, so Simpson's 3/8 rule can be used.

x: 0    1    2    3    4       5       6
y: 1    0.5  0.2  0.1  0.0588  0.0385  0.027
   y0   y1   y2   y3   y4      y5      y6

I = (3h/8)[(y0 + y6) + 3(y1 + y2 + y4 + y5) + 2y3] = 1.3571.
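The 3-1-3-3-1-3 weighting pattern of the composite 3/8 rule is easy to encode by position modulo 3. A sketch (the function name is illustrative):

```python
# Composite Simpson's 3/8 rule applied to the 1/(1+x^2) table above.

def simpson_three_eighth(ys, h):
    """3h/8 * (ends + 3*(non-multiples of 3) + 2*(interior multiples of 3))."""
    n = len(ys) - 1
    assert n % 3 == 0, "number of subintervals must be a multiple of 3"
    s = ys[0] + ys[-1]
    for i in range(1, n):
        s += (2 if i % 3 == 0 else 3) * ys[i]
    return 3 * h / 8 * s

ys = [1, 0.5, 0.2, 0.1, 0.0588, 0.0385, 0.027]
I = simpson_three_eighth(ys, 1.0)   # ~ 1.3571
```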


Example 3: Using Simpson's 1/3 rule, find ∫_0^{0.6} e^{−x²} dx by taking 6 subintervals.

Solution:

The integrand e^{−x²} has no simple antiderivative, so the integral cannot be evaluated directly; with numerical integration it is easily evaluated. Let us construct the data:
             x0    x1     x2      x3      x4      x5      x6
x:           0     0.1    0.2     0.3     0.4     0.5     0.6
x²:          0     0.01   0.04    0.09    0.16    0.25    0.36
y = e^{−x²}: 1     0.99   0.9608  0.9139  0.8521  0.7788  0.6977
             y0    y1     y2      y3      y4      y5      y6

By Simpson's 1/3 rule, we have

∫_0^{0.6} e^{−x²} dx = (h/3)[(y0 + y6) + 4(y1 + y3 + y5) + 2(y2 + y4)]

= (0.1/3)[(1 + 0.6977) + 4(0.99 + 0.9139 + 0.7788) + 2(0.9608 + 0.8521)]

= (0.1/3)[1.6977 + 10.7308 + 3.6258]

= 0.5351.
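The composite 1/3 rule is a one-liner over the odd- and even-indexed ordinates. A sketch for this example (the function name is illustrative):

```python
# Composite Simpson's 1/3 rule applied to the e^{-x^2} table above.

def simpson_one_third(ys, h):
    """h/3 * (ends + 4*odd-indexed + 2*even-indexed interior ordinates)."""
    assert len(ys) % 2 == 1, "need an odd number of ordinates (even subintervals)"
    odd = sum(ys[1:-1:2])    # y1, y3, y5, ...
    even = sum(ys[2:-1:2])   # y2, y4, ...
    return h / 3 * (ys[0] + ys[-1] + 4 * odd + 2 * even)

ys = [1, 0.99, 0.9608, 0.9139, 0.8521, 0.7788, 0.6977]
I = simpson_one_third(ys, 0.1)   # ~ 0.5351
```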

Example 4: A solid of revolution is formed by rotating about the x -axis the


area bounded by the x -axis, the lines x = 0 and x = 1 , and the curve through the
point with the following data:
x: 0 0.25 0.5 0.75 1.0
y: 1.0 0.9896 0.9589 0.9089 0.8415


y0 y1 y2 y3 y4

Estimate the volume of the solid using Simpson's 1/3 rule.

Solution:

Here h = 0.25 .
The volume of the solid of revolution is
The volume of the solid of revolution is

I = ∫_0^1 π y² dx.

Using Simpson's 1/3 rule:

I = (πh/3)[(y0² + y4²) + 4(y1² + y3²) + 2y2²]

= (0.25/3)(22/7)[{(1)² + (0.8415)²} + 4{(0.9896)² + (0.9089)²} + 2(0.9589)²]

= 0.2618 [10.7687]

= 2.8192.

Exercises:
1. Evaluate ∫_4^{5.2} log x dx

using (i) the trapezoidal rule, (ii) Simpson's 1/3 rule, (iii) Simpson's 3/8 rule,

by taking 12 subintervals. Then compare your results.

2. A curve is given by


x 0 1 2 3 4 5 6
y 0 2 2.5 2.3 2 1.7 1.5

Evaluate (i) the area below the given curve


(ii) ∫_0^6 xy dx using Simpson's 1/3 rule.

3. Estimate the length of the arc of the curve 3y = x³ from (0, 0) to (1, 3) using Simpson's 1/3 rule by taking 8 subintervals.

4. Evaluate I = ∫_0^{π/2} cos θ dθ by Simpson's 1/3 rule using 11 ordinates.
0

Keywords: Simpson's one-third rule, Simpson's three-eighth rule, even number of subintervals.

References

Jain, M. K., Iyengar, S.R.K., Jain, R.K. (2008). Numerical Methods. Fifth Edition,
New Age International Publishers, New Delhi.

Atkinson, E. Kendall. (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.

Suggested Reading

Scheid, Francis. (1989). Numerical Analysis. Second Edition, McGraw-Hill
Publishers, New York.

Sastry, S.S. (2005). Introductory Methods of Numerical Analysis. Fourth Edition,
Prentice Hall of India Publishers, New Delhi.

Module 1: Numerical Analysis

Lesson 21

Boole’s and Weddle’s Rules

21.1 Introduction

Boole’s and Weddle’s rules are higher order integration methods. These
methods use higher order differences, as explained below. By taking N = 4 in the Newton-Cotes formula, we obtain Boole's rule as:

∫_{x0}^{x4} y(x)dx = 4h[y0 + 2∆y0 + (5/3)∆²y0 + (2/3)∆³y0 + (7/90)∆⁴y0]

= (2h/45)[7y0 + 32y1 + 12y2 + 32y3 + 7y4]

Similarly, for the next set of data points between x4 and x8, we write the integral as

∫_{x4}^{x8} y(x)dx = (2h/45)[7y4 + 32y5 + 12y6 + 32y7 + 7y8]

By taking the number of subintervals as a multiple of 4, we obtain

I = ∫_{x0}^{xN} y(x)dx

= (2h/45)[7y0 + 32(y1 + y3 + y5 + y7 + ...) + 12(y2 + y6 + y10 + ...) + 14(y4 + y8 + y12 + ...) + 7yN]   (21.1)

To use this method, the number of subintervals should be taken as a multiple of 4. By taking N = 6 in the Newton-Cotes integration formula, we obtain Weddle's rule. Here, the number of subintervals should be taken as a multiple of 6.


For N = 6:

∫_{x0}^{x6} y(x)dx = 6h[y0 + 3∆y0 + (9/2)∆²y0 + 4∆³y0 + (123/60)∆⁴y0 + (11/20)∆⁵y0 + (41/140)∆⁶y0].

Replacing the coefficient 41/140 of ∆⁶y0 by 42/140 = 3/10 (a change of only ∆⁶y0/140) gives the neat form

= (3h/10)[y0 + 5y1 + y2 + 6y3 + y4 + 5y5 + y6].

For the next panel,

∫_{x6}^{x12} y(x)dx = (3h/10)[y6 + 5y7 + y8 + 6y9 + y10 + 5y11 + y12].

Proceeding this way, we write

I = ∫_{x0}^{xN} y(x)dx

= (3h/10)[y0 + 5(y1 + y5 + y7 + y11 + ...) + (y2 + y4 + y8 + y10 + ...) + 6(y3 + y9 + y15 + ...) + 2(y6 + y12 + ...) + yN]   (21.2)

Weddle's rule is found to be more accurate than the methods discussed earlier, because a higher order approximation is used in the integration. The table below gives the error estimates of these integration methods.


Summary of Newton-Cotes Methods

1. Trapezoidal rule: ∫_{x0}^{x1} y(x)dx ≈ (h/2)(y0 + y1); error −(h³/12) y″(ξ), x0 < ξ < x1.

2. Simpson's 1/3 rule: ∫_{x0}^{x2} y(x)dx ≈ (h/3)(y0 + 4y1 + y2); error −(h⁵/90) y⁽⁴⁾(ξ), x0 < ξ < x2.

3. Simpson's 3/8 rule: ∫_{x0}^{x3} y(x)dx ≈ (3h/8)(y0 + 3y1 + 3y2 + y3); error −(3h⁵/80) y⁽⁴⁾(ξ), x0 < ξ < x3.

4. Boole's rule: ∫_{x0}^{x4} y(x)dx ≈ (2h/45)(7y0 + 32y1 + 12y2 + 32y3 + 7y4); error −(8h⁷/945) y⁽⁶⁾(ξ), x0 < ξ < x4.

5. Weddle's rule: ∫_{x0}^{x6} y(x)dx ≈ (3h/10)(y0 + 5y1 + y2 + 6y3 + y4 + 5y5 + y6); error −(h⁷/140) y⁽⁶⁾(ξ), x0 < ξ < x6.


Example 1: Evaluate ∫_0^{1.2} e^x dx using Boole's rule by taking h = 0.3.

Solution:

The function y(x) = e^x is tabulated at the nodes x0 = 0 to x4 = 1.2 with xi = x0 + ih, i = 1, 2, 3, 4, as:
x: 0 0.3 0.6 0.9 1.2
ex : 1 1.34986 1.82212 2.4596 3.32012
y0 y1 y2 y3 y4

Using this data in Boole's rule,

∫_0^{1.2} y(x)dx = (2h/45)(7y0 + 32y1 + 12y2 + 32y3 + 7y4)

= (2(0.3)/45)[7(1) + 32(1.34986) + 12(1.82212) + 32(2.4596) + 7(3.32012)]

= 2.32012.
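A single Boole panel is just the fixed weight vector (7, 32, 12, 32, 7). A sketch for this example (the function name is illustrative):

```python
# Boole's rule on one panel of four subintervals (five ordinates).

def boole(ys, h):
    """2h/45 * (7*y0 + 32*y1 + 12*y2 + 32*y3 + 7*y4)."""
    assert len(ys) == 5
    y0, y1, y2, y3, y4 = ys
    return 2 * h / 45 * (7*y0 + 32*y1 + 12*y2 + 32*y3 + 7*y4)

ys = [1, 1.34986, 1.82212, 2.4596, 3.32012]   # e^x at x = 0, 0.3, ..., 1.2
I = boole(ys, 0.3)   # ~ 2.32012
```

The exact value is e^1.2 − 1 = 2.32012 to five decimals, so at this step size Boole's rule is already accurate to all tabulated digits.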

Example 2: Evaluate ∫_0^{12} dx/(1 + x²) by using Weddle's rule with h = 2.

Solution:

The function y(x) = 1/(1 + x²) is calculated at x0 = 0, x1 = 2, x2 = 4, x3 = 6, x4 = 8, x5 = 10 and x6 = 12 as y0 = 1, y1 = 0.2, y2 = 0.05882, y3 = 0.02703, y4 = 0.01538, y5 = 0.0099 and y6 = 0.0069.


Using this data in Weddle's rule,

∫_0^{12} dx/(1 + x²) = (3h/10)(y0 + 5y1 + y2 + 6y3 + y4 + 5y5 + y6)

= (3(2)/10)[1 + 5(0.2) + (0.05882) + 6(0.02703) + (0.01538) + 5(0.0099) + (0.0069)]

= 1.37567.
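Likewise, one Weddle panel is the weight vector (1, 5, 1, 6, 1, 5, 1). A sketch for this example (the function name is illustrative):

```python
# Weddle's rule on one panel of six subintervals (seven ordinates).

def weddle(ys, h):
    """3h/10 * (y0 + 5*y1 + y2 + 6*y3 + y4 + 5*y5 + y6)."""
    assert len(ys) == 7
    y0, y1, y2, y3, y4, y5, y6 = ys
    return 3 * h / 10 * (y0 + 5*y1 + y2 + 6*y3 + y4 + 5*y5 + y6)

ys = [1, 0.2, 0.05882, 0.02703, 0.01538, 0.0099, 0.0069]  # 1/(1+x^2), x = 0..12
I = weddle(ys, 2.0)   # ~ 1.37567
```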

Exercises:

1. Evaluate ∫_4^{5.2} log x dx using (i) Boole's rule and (ii) Weddle's rule, and compare the two values.

2. Evaluate ∫_0^{1/2} dx/√(1 − x²) using Weddle's rule.

3. Evaluate ∫_0 (1 + e^{−x} sin 4x) dx using Boole's rule.

4. Evaluate ∫_0^2 dx/(1 + x²) using Weddle's rule.
4. Evaluate ∫0 1 + x 2 dx using Weddle’s rule taking intervals.

Keywords: Boole's rule, higher order integration methods, Weddle's rule.

References

Jain, M. K., Iyengar, S.R.K., Jain, R.K. (2008). Numerical Methods. Fifth Edition,
New Age International Publishers, New Delhi.

Atkinson, E. Kendall. (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.

Suggested Reading

Scheid, Francis. (1989). Numerical Analysis. Second Edition, McGraw-Hill
Publishers, New York.

Sastry, S.S. (2005). Introductory Methods of Numerical Analysis. Fourth Edition,
Prentice Hall of India Publishers, New Delhi.

6
153 WhatsApp: +91 7900900676 www.AgriMoon.Com
Module 1: Numerical Analysis

Lesson 22
Gaussian Quadrature

22.1 Introduction: The problem of numerical integration is to find an approximate value for

I = ∫_a^b w(x) f(x) dx                (22.1)

where w(x) is a positive valued continuous function defined on [a, b], called the weight function. The function f(x) is assumed to be integrable. The limits a and b may be finite, semi-infinite or infinite. The integral (22.1) is approximated by a finite linear combination of the values f(x_k) in the form

I = ∫_a^b w(x) f(x) dx = Σ_{k=0}^{N} λ_k f_k                (22.2)

where x_k, k = 0, 1, ..., N are called the nodes, which are distributed within the limits of integration, and λ_k, k = 0, 1, ..., N are called the discrete weights. The formula (22.2) is also known as the quadrature formula.

The error in this approximation is given as

R_N = ∫_a^b w(x) f(x) dx − Σ_{k=0}^{N} λ_k f_k                (22.3)

An integration method of the form (22.2) is said to be of order p if it produces exact results, i.e. R_N = 0, for all polynomials of degree less than or equal to p. Evaluating the integral (22.1) using (22.2) involves finding the (N + 1) unknown weights λ_k and the (N + 1) unknown nodes x_k, leading to (2N + 2) unknowns in all. To compute these unknowns, the method (22.2) is made exact for polynomials of degree less than or equal to 2N + 1, for example by considering f(x) = c_0 + c_1 x + c_2 x^2 + ... + c_{2N+1} x^{2N+1}.

For example, when N = 1, then ∫_{−1}^{1} f(x) dx = w_1 f(x_1) + w_2 f(x_2).

When the nodes x_k are known in advance, the corresponding methods are called Newton-Cotes methods; when the nodes are also to be determined, the methods are called Gaussian quadrature methods.

The interval of integration [a, b] is always transformed to [−1, 1] using the transformation x = ((b − a)/2) t + ((b + a)/2). Depending on the weight function w(x), a variety of methods are developed. We discuss here the Gauss-Legendre integration method, for which the weight function is w(x) = 1.

22.2 Gauss-Legendre Integration Methods

Consider evaluating the integral

I = ∫_{−1}^{1} f(x) dx = Σ_{k=0}^{N} λ_k f(x_k)

where x_k are the nodes and λ_k are the weights.


(I) One point formula: The formula is ∫_{−1}^{1} f(x) dx = λ_0 f(x_0).

In the above, λ_0 and x_0 are unknowns; these are obtained by making this integration method exact for f(x) = 1 and f(x) = x:

(a) ∫_{−1}^{1} 1 dx = λ_0 f(x_0) ⇒ λ_0 = 2

(b) ∫_{−1}^{1} x dx = λ_0 x_0 ⇒ λ_0 x_0 = 0 ⇒ x_0 = 0 (since λ_0 = 2).

∴ ∫_{−1}^{1} f(x) dx = 2 f(0).

(II) The two point formula: The formula is given by

∫_{−1}^{1} f(x) dx = λ_0 f(x_0) + λ_1 f(x_1)

The unknowns are λ_0, λ_1, x_0, x_1. These unknowns are determined by making this method exact for f(x) = 1, x, x^2, x^3; we get

f(x) = 1   ⇒  λ_0 + λ_1 = 2.
f(x) = x   ⇒  λ_0 x_0 + λ_1 x_1 = 0.
f(x) = x^2 ⇒  λ_0 x_0^2 + λ_1 x_1^2 = 2/3.
f(x) = x^3 ⇒  λ_0 x_0^3 + λ_1 x_1^3 = 0.

Solving these non-linear equations we obtain

x_0 = −1/√3,  x_1 = 1/√3,  λ_0 = λ_1 = 1.

And the two point Gauss-Legendre method is given by

∫_{−1}^{1} f(x) dx = f(−1/√3) + f(1/√3).
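The defining property of the two point rule is that it is exact for all polynomials of degree ≤ 3 but not of degree 4. A small sketch (an illustration, not from the text) confirming this:

```python
import math

def gauss2(f):
    """Two point Gauss-Legendre rule on [-1, 1] with nodes +/- 1/sqrt(3)."""
    x = 1 / math.sqrt(3)
    return f(-x) + f(x)

# Exact for x^2 and x^3:
#   integral of x^2 over [-1, 1] is 2/3, of x^3 is 0
q2 = gauss2(lambda x: x**2)    # gives 2/3
q3 = gauss2(lambda x: x**3)    # gives 0
# ... but not for x^4: the true integral is 2/5, while the rule gives 2/9
q4 = gauss2(lambda x: x**4)
```

With only two function evaluations the rule matches the (2N + 2) = 4 conditions it was constructed from, and the first failure appears exactly at degree 2N + 2 = 4.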

Exercise: Show that the three point Gauss-Legendre method is given by

∫_{−1}^{1} f(x) dx = (1/9) [5 f(−√(3/5)) + 8 f(0) + 5 f(√(3/5))].

Example 1: Evaluate the integral I = ∫_1^2 2x/(1 + x^4) dx using the Gauss-Legendre 3-point quadrature rule.

Solution:

The general quadrature formula is written on [−1, 1]. So define x = ((b − a)/2) t + ((b + a)/2), i.e. x = (1/2) t + (3/2), dx = (1/2) dt.

The integral transforms to ∫_{−1}^{1} 8(t + 3)/(16 + (t + 3)^4) dt.

Using the 3-point rule, we get

I = (1/9) [5 f(−√(3/5)) + 8 f(0) + 5 f(√(3/5))]
  = (1/9) [5 (0.4393) + 8 (0.2474) + 5 (0.1379)]
  = 0.5406

We can directly integrate ∫_1^2 2x/(1 + x^4) dx; its value is tan^(−1)(4) − π/4 = 0.5404.
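The whole computation, including the change of variable, can be sketched in code. This is an illustration (not the text's own listing) using the standard three point nodes ±√(3/5) with weights 5/9, 8/9, 5/9 from the exercise above:

```python
import math

def gauss3(f, a, b):
    """Three point Gauss-Legendre rule, mapped from [-1, 1] to [a, b]."""
    s = math.sqrt(3 / 5)
    nodes, weights = [-s, 0.0, s], [5/9, 8/9, 5/9]
    half, mid = (b - a) / 2, (b + a) / 2
    # x = half*t + mid, dx = half*dt
    return half * sum(w * f(half * t + mid) for w, t in zip(weights, nodes))

approx = gauss3(lambda x: 2*x / (1 + x**4), 1, 2)   # ~ 0.5406
exact = math.atan(4) - math.pi / 4                  # ~ 0.5404
```

Three evaluations of the integrand already give agreement with the exact value to about four decimal places.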

Example 2: Evaluate the integral ∫_0^1 1/(1 + x) dx using the Gauss-Legendre two point formula.

Solution:

Define x = (1/2) t + (1/2) ⇒ dx = (1/2) dt.

∴ ∫_0^1 1/(1 + x) dx = ∫_{−1}^{1} dt/(t + 3) = f(−1/√3) + f(1/√3), with f(t) = 1/(t + 3),

= 0.69231

Exercises: Evaluate (a) ∫_0^1 1/(1 + x) dx  (b) ∫_1^2 1/x dx  (c) ∫_1^2 1/(1 + x^4) dx

using Gauss-Legendre (i) two-point (ii) three-point quadrature methods.

Keywords: Gaussian Quadrature, One Point formula, Two point formula.

Module 1: Numerical Analysis

Lesson 23

Difference Equations

23.1 Introduction

Difference equations arise in problems in which sequential relations exist at various discrete values of the independent variable, say {t_0, t_1, t_2, ..., t_N}. These equations are commonly seen in control engineering, radar tracking, etc.

Definition: A difference equation is a relation between the differences (such as ∆y_n, ∆^2 y_n, ...) of an unknown function at one or more general values of the independent variable.

A general difference equation in terms of k unknown function values is written as

F(y_n, y_{n+1}, ..., y_{n+k}) = 0                (23.1)

For example, ∆^2 y_{n+1} + ∆y_n = 0                (23.2)

is a difference equation written using the forward differences. This can be rewritten as

∆y_{n+2} − ∆y_{n+1} + ∆y_n = 0

or (y_{n+3} − y_{n+2}) − (y_{n+2} − y_{n+1}) + (y_{n+1} − y_n) = 0

or y_{n+3} − 2y_{n+2} + 2y_{n+1} − y_n = 0                (23.3)

If the function F is non-linear in any one of these unknowns, then it is a non-linear difference equation. If F is a linear function in all these unknown function values, then it is a linear difference equation.


Definition 2: The order of a difference equation is the difference between the largest and the smallest arguments occurring in the difference equation, divided by the unit of increment. Consider the equation (23.3):

the order of the difference equation (23.3) is ((n + 3) − n)/1 = 3/1 = 3.

Example 1: Find the order of the difference equation derived from ∆y_{n+1} + 2y_n = 3.

Solution:

The difference equation corresponding to ∆y_{n+1} + 2y_n = 3 is

y_{n+2} − y_{n+1} + 2y_n − 3 = 0                (23.4)

So the order of this equation is ((n + 2) − n)/1 = 2.

23.2 Formation of Difference Equations:

We now illustrate the formation of difference equations from the given family
of curves.

Example 2: Form the difference equation corresponding to the two parameter family of curves given by y = at + bt^2.

Solution:

We have ∆y = a ∆t + b ∆(t^2)
          = a(t + 1 − t) + b[(t + 1)^2 − t^2]
          = a + b(2t + 1)

and ∆^2 y = 2b[(t + 1) − t] = 2b.

Eliminating the arbitrary constants a and b from ∆y and ∆^2 y, we get

b = (1/2) ∆^2 y  and  a = ∆y − (1/2) ∆^2 y (2t + 1).

Hence the given family of curves becomes

y = [∆y − (1/2) ∆^2 y (2t + 1)] t + (1/2) ∆^2 y t^2

or (t^2 + t) ∆^2 y − 2t ∆y + 2y = 0.

Equivalently, the difference equation is

(t^2 + t) y_{t+2} − (2t^2 + 4t) y_{t+1} + (t^2 + 3t + 2) y_t = 0                (23.5)

Exercise: Form the difference equation from

(i) y_t = at + b·2^t   (ii) y_t = a·2^t + b·3^t.

23.3 Linear Difference Equations:

Consider the linear difference equation

a_0 y_{n+k} + a_1 y_{n+k−1} + a_2 y_{n+k−2} + ... + a_k y_n = g(n)                (23.6)

where a_0, a_1, ..., a_k are constants.

If g ≡ 0, then equation (23.6) is a homogeneous equation; otherwise it is a non-homogeneous equation. The solution of a difference equation is an expression for y_n which satisfies the given difference equation.


Definition: The general solution of a difference equation is that in which the number of arbitrary constants is equal to the order of the difference equation, i.e.,

Y_n = c_1 y_1(n) + c_2 y_2(n) + ... + c_r y_r(n)                (23.7)

where c_1, c_2, ..., c_r are arbitrary constants.

Definition: A particular solution is that solution which is derived from the general solution by fixing the arbitrary constants. This is done using the initial conditions on the unknown function at the nodal points.

If V_n is a particular solution of (23.6), then the complete solution of (23.6) is

y_n = Y_n + V_n                (23.8)

Y_n is also called the complementary function.

23.4 Homogeneous Equations:

For finding the complementary solution of the equation (23.6), we assume a solution of the form y_n = A ξ^n, where ξ is a non-zero constant. Substituting this in (23.6) (with g ≡ 0), we get

A (a_0 ξ^k + a_1 ξ^{k−1} + ... + a_k) ξ^n = 0

or a_0 ξ^k + a_1 ξ^{k−1} + ... + a_k = 0                (23.9)

which is called the characteristic equation of the difference equation (23.6). Let ξ_1, ξ_2, ..., ξ_k be the roots of this equation. These roots may be (i) real and distinct, (ii) real and repeated, (iii) complex.

Let us see how the solution looks in each case.

Case (i): Here ξ_1, ξ_2, ..., ξ_k are real and distinct; then

Y_n = α_1 ξ_1^n + α_2 ξ_2^n + ... + α_k ξ_k^n                (23.10)

where α_1, α_2, ..., α_k are arbitrary constants.

Case (ii): Let ξ_1 = ξ_2 = ξ_3 and let ξ_4, ξ_5, ..., ξ_k be all real and distinct; then Y_n is written as

Y_n = (α_1 + n α_2 + n^2 α_3) ξ_1^n + (α_4 ξ_4^n + ... + α_k ξ_k^n)                (23.11)

As a special case, when ξ_1 is a real root with multiplicity k, then

Y_n = (α_1 + n α_2 + ... + n^{k−1} α_k) ξ_1^n                (23.12)

Case (iii): Consider the case where two of the roots are complex and the rest are real and distinct: say ξ_1 = α + iβ = r e^{iθ} and ξ_2 = α − iβ = r e^{−iθ}, while ξ_3, ξ_4, ..., ξ_k are real and distinct. Then

Y_n = r^n (α_1 cos nθ + α_2 sin nθ) + c_3 ξ_3^n + ... + c_k ξ_k^n

with r = √(α^2 + β^2) and θ = tan^{−1}(β/α).

In the same way one can write the solution of the homogeneous difference
equation when it has several pairs of complex roots.

Example 3: Solve the difference equations

(i) y_{n+3} − 2y_{n+2} − 5y_{n+1} + 6y_n = 0

(ii) y_{n+2} − y_{n+1} + (1/4) y_n = 0

Solution:

Note that the above are homogeneous difference equations.

(i) Replacing y_n by A ξ^n, we obtain the characteristic equation ξ^3 − 2ξ^2 − 5ξ + 6 = 0, whose roots are ξ = 1, −2, 3 (real, distinct). Hence the complete solution is

y_n = α_1 (1)^n + α_2 (−2)^n + α_3 (3)^n.

(ii) The characteristic equation is ξ^2 − ξ + 1/4 = 0 and its roots are ξ = 1/2, 1/2 (real, repeated). Hence the complete solution is y_n = (α_1 + n α_2) (1/2)^n.

Example 4 (complex roots): Find the complete solution of the difference equation y_{n+2} − 4y_{n+1} + 5y_n = 0.

Solution:

The characteristic polynomial is ξ^2 − 4ξ + 5 = 0 and its roots are ξ_1 = 2 + i and ξ_2 = 2 − i. So r = √(4 + 1) = √5 and θ = tan^{−1}(1/2), and hence

Y_n = (√5)^n (α_1 cos nθ + α_2 sin nθ).
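The complex-root solution can be checked the same way (a sketch, not from the text): build Y_n from r and θ and substitute it back into the recurrence.

```python
import math

# Solution of y_{n+2} - 4 y_{n+1} + 5 y_n = 0 from the complex roots 2 +/- i:
#   Y_n = (sqrt 5)^n (a1 cos(n*theta) + a2 sin(n*theta)),  theta = arctan(1/2)
a1, a2 = 1.0, 2.0                    # arbitrary constants
r, theta = math.sqrt(5), math.atan(1/2)

def Y(n):
    return r**n * (a1 * math.cos(n*theta) + a2 * math.sin(n*theta))

residuals = [Y(n+2) - 4*Y(n+1) + 5*Y(n) for n in range(10)]
# every residual vanishes (up to floating point round-off)
```

This works because Y_n is the real part of c (2 + i)^n for a suitable complex constant c, and 2 + i is a root of the characteristic polynomial.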

Exercises:

1. Write the difference equation ∆^3 y_n + ∆^2 y_n + ∆y_n = 0 in the subscript form.

2. Write the difference equation ∆y_{n+1} + ∆^2 y_{n−1} = 5 in the subscript form.

3. Find the order of the difference equation y_{n+2} − 2y_n + y_{n−1} = 1.

5. Find the general solution of ∆^2 y_{n+1} − (1/3) ∆^2 y_n = 0.

6. Solve the following difference equations

(i) y_{n+3} + 16 y_{n−1} = 0

(ii) y_{n+2} − 6y_{n+1} + 9y_n = 0

(iii) y_{n+3} − 3y_{n+1} + 2y_n = 0


Keywords: Complete solution, Difference equations, Homogeneous difference equations, Linear difference equation.



Module 1: Numerical Analysis

Lesson 24

Non Homogeneous Difference Equation

24.1 Introduction

In this lesson we learn how to find the solution of the non-homogeneous difference equation. The solution corresponding to the non-homogeneous term is called the particular integral of the difference equation.

Consider the non-homogeneous difference equation in the form

y_{n+k} + α_1 y_{n+k−1} + ... + α_k y_n = f(n)                (24.1)

Note that equation (24.1) is equivalent to equation (23.6).

Using the shift operator E, the above can be put in the operator form as

φ(E) y_n = f_n                (24.2)

where φ(E) = E^k + α_1 E^{k−1} + ... + α_k.

Then the particular integral is written as:

V_n = (1/φ(E)) f(n)

24.2 Finding the Particular Integral:

φ(E) is an operator involving E, and 1/φ(E) is its inverse operator [assuming its existence]. The particular solution is obtained for different forms of the non-homogeneous function f(n) as given below. We consider the forms a^n, sin pn, cos pn, n^p and a^n G(n), where G(n) is a polynomial in n.

(A) When f(n) = a^n, the particular integral is

(1/φ(E)) a^n = (1/φ(a)) a^n, provided φ(a) ≠ 0.

If φ(a) = 0 and a is a simple root, then

(1/(E − a)) a^n = n a^{n−1},

and if a is a root with multiplicity m, so that (E − a)^m is a factor of φ(E), the corresponding particular integral is

(1/(E − a)^m) a^n = (n(n − 1)...(n − m + 1)/m!) a^{n−m}.

In this way one can find the P.I. for the given non-homogeneous equation when f(n) = a^n.

Example 1: Find the particular integral of y_{n+2} − 4y_{n+1} + 3y_n = 2^n + 5^n.

Solution:

P.I. = (1/(E^2 − 4E + 3)) (2^n + 5^n)

     = (1/(E^2 − 4E + 3)) 2^n + (1/(E^2 − 4E + 3)) 5^n

Clearly, 2 and 5 are not roots of the auxiliary equation of E^2 − 4E + 3.

P.I. = (1/(2^2 − 4·2 + 3)) 2^n + (1/(5^2 − 4·5 + 3)) 5^n

     = −2^n + (1/8) 5^n
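A particular integral is easy to validate by direct substitution (an illustrative sketch, not from the text):

```python
# Check that V_n = -2^n + 5^n / 8 satisfies
#   y_{n+2} - 4 y_{n+1} + 3 y_n = 2^n + 5^n.
def V(n):
    return -2**n + 5**n / 8

residuals = [V(n+2) - 4*V(n+1) + 3*V(n) - (2**n + 5**n)
             for n in range(10)]
# every residual is 0: V_n reproduces the right hand side exactly
```

Substituting the 2^n part gives (4 − 8 + 3)(−2^n) = 2^n, and the 5^n part gives (25 − 20 + 3)(5^n/8) = 5^n, which is what the code confirms term by term.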

Example 2: Solve the difference equation y_{n+2} − 6y_{n+1} + 9y_n = 3^n + (−1)^n.

Solution:

The characteristic equation is ξ^2 − 6ξ + 9 = 0.

Its roots are ξ = 3, 3.

The complementary function is: y_{C.F.} = (α_1 + n α_2)(3^n).

Note that 3 is a root of the characteristic equation with multiplicity 2, so the particular integral is written as

(n(n − 1)/2!) 3^{n−2} + (1/16) (−1)^n

[for the (−1)^n term, φ(−1) = 1 + 6 + 9 = 16].

Hence the complete solution is y_n = y_{C.F.} + y_{P.I.} = Y_n + V_n

= (α_1 + α_2 n) 3^n + (n(n − 1)/2!) 3^{n−2} + (1/16) (−1)^n.

(B) When f(n) = sin pn or cos pn:

P.I. = (1/φ(E)) sin pn = (1/φ(E)) (e^{ipn} − e^{−ipn})/(2i)

     = (1/2i) [ (1/φ(E)) a^n − (1/φ(E)) b^n ];  a^n = e^{ipn}, b^n = e^{−ipn}.

This is in the form discussed for f(n) = a^n.

Similarly, (1/φ(E)) cos pn = (1/2) [ (1/φ(E)) a^n + (1/φ(E)) b^n ].

(C) When f(n) = n^p, then

P.I. = (1/φ(E)) n^p = [φ(E)]^{−1} n^p.

Recall E = 1 + ∆.

∴ P.I. = [φ(1 + ∆)]^{−1} n^p.

Expanding [φ(1 + ∆)]^{−1} in increasing powers of ∆ (using the binomial theorem) and operating it on n^p, we get the particular integral.

(D) When f(n) = a^n G(n), where G(n) is a polynomial in n:

P.I. = (1/φ(E)) a^n G(n) = a^n (1/φ(aE)) G(n).

This can be solved using the procedure given in (C).

Example 3: Solve y_{n+2} − 2 cos α · y_{n+1} + y_n = cos n.

Solution:

It can be readily seen that the characteristic equation is

ξ^2 − 2ξ cos α + 1 = 0 and its roots are cos α ± i sin α.

So the C.F. is: (α_1 cos αn + α_2 sin αn)(1)^n.

P.I. = (1/(E^2 − 2E cos α + 1)) cos n = (1/(E^2 − E(e^{iα} + e^{−iα}) + 1)) (e^{in} + e^{−in})/2

= (1/2) [ (1/((E − e^{iα})(e^{iα} − e^{−iα}))) e^{in} + (1/((E − e^{−iα})(e^{−iα} − e^{iα}))) e^{−in} ]

= (1/(4i sin α)) [ (1/(E − e^{iα})) e^{in} − (1/(E − e^{−iα})) e^{−in} ]

= (1/(4i sin α)) [ e^{in}/(e^{i} − e^{iα}) − e^{−in}/(e^{−i} − e^{−iα}) ]

∴ y_n = C.F. + P.I.

Example 4: Find the particular integral of y_{n+2} − 2y_{n+1} + y_n = n · 3^n.

Solution:

P.I. = (1/(E − 1)^2) n · 3^n

     = 3^n (1/(3E − 1)^2) n

     = 3^n (1/(2^2 (1 + (3/2)∆)^2)) n

     = (3^n/2^2) (1 + (3/2)∆)^{−2} n

     = (3^n/2^2) (1 − 3∆ + ...) [n]

     = (3^n/2^2) {[n] − 3·1} = (3^n/2^2) (n − 3).
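Again the result can be confirmed by substitution (an illustrative sketch, not from the text):

```python
# Check that V_n = (3^n / 4) (n - 3) satisfies
#   y_{n+2} - 2 y_{n+1} + y_n = n * 3^n.
def V(n):
    return 3**n * (n - 3) / 4

residuals = [V(n+2) - 2*V(n+1) + V(n) - n * 3**n for n in range(10)]
# every residual is 0
```

Expanding by hand: V_{n+2} − 2V_{n+1} + V_n = (3^n/4)[9(n − 1) − 6(n − 2) + (n − 3)] = (3^n/4)(4n) = n·3^n, matching the right hand side.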

Exercises:

1. Solve y_{n+2} − 4y_n = n − 1.

2. Solve the following difference equations:

(i) ∆^2 y_n − 5∆y_n + 4y_n = n + 2^n

(ii) y_{n+2} − 6y_{n+1} + 8y_n = 2^n + 6^n

(iii) y_{n+2} − (2 cos(1/2)) y_{n+1} + y_n = sin(n/2).

Keywords: Particular integral, Non-homogeneous difference equation.

Module 1: Numerical Analysis

Lesson 25

Numerical Solutions of Ordinary Differential Equations

25.1 Introduction

Many problems in engineering and science are modelled as ordinary differential equations, either linear or non-linear, satisfying certain given conditions. Only a few of these equations can be solved using the standard analytical methods; the alternative is to find their numerical solution. Here, we see some of the numerical methods to solve a class of problems known as initial value problems (I.V.P.). An initial value problem is one where the differential equation is solved subject to the required number of initial conditions.

A general first order initial value problem is given by

dy/dt = f(t, y)                (25.1)

subject to y(t_0) = y_0                (25.2)

The solution of (25.1)-(25.2) can be found as a series for y in terms of powers of the independent variable t, from which the value of y can be obtained by direct substitution, or as a set of tabulated values of t and y.

The methods due to Picard and Taylor (series) find the solution of the I.V.P. (25.1)-(25.2) with the dependent variable y expressed as a function of the independent variable t. Other methods, such as the Euler and Runge-Kutta methods, give the solution at a discrete set of points for t in the interval [t_0, b].

For an nth order differential equation, the general solution has n arbitrary constants, and in order to compute the numerical solution of such an equation we need n initial conditions: one each on the function and its derivatives up to the (n−1)th order at the initial value t = t_0. The solution of such an I.V.P. can be found in a similar manner as that of the solution of (25.1)-(25.2), but by using the vector treatment. Let us now discuss Picard's method of successive approximations.

25.2 Picard's Method

In this method, a sequence of approximations is constructed by starting with an initial approximation to the solution. The limit of this sequence (if it exists) will be the approximation for the solution of the I.V.P. given by the equations

dy/dt = f(t, y)                (25.3)

subject to y(t_0) = y_0                (25.4)

Integrating (25.3) between the limits,

∫_{y_0}^{y} dy = ∫_{t_0}^{t} f(t, y) dt

or y = y_0 + ∫_{t_0}^{t} f(t, y) dt                (25.5)

Notice that the unknown that is to be found is also seen inside the integral; such an equation is called an integral equation. Such an equation can be solved by the method of successive approximations. To start the procedure, assume an approximation y = y_0 for the unknown function inside the integral. This makes equation (25.5) read as

y = y_0 + ∫_{t_0}^{t} f(t, y_0) dt                (25.6)

The r.h.s. of (25.6) can be evaluated; call it y^(1)(t):

y^(1)(t) = y_0 + ∫_{t_0}^{t} f(t, y_0) dt.

Here y^(1)(t) is the first approximation to the solution. The second approximation to the solution is obtained by using y^(1)(t) in the integral as

y^(2)(t) = y_0 + ∫_{t_0}^{t} f(t, y^(1)(t)) dt.

Repeating this process, we obtain

y^(n)(t) = y_0 + ∫_{t_0}^{t} f(t, y^(n−1)(t)) dt

with y^(0)(t) = y_0, n = 1, 2, 3, ...

In this way we generate a sequence of approximating functions {y^(n)(t)}, n = 1, 2, ....

It can be proved that if the function f(t, y) is bounded in some region about the point (t_0, y_0), and if f(t, y) satisfies the Lipschitz condition |f(t, y) − f(t, y*)| ≤ k |y − y*| for some positive constant k, then the sequence {y^(n)(t)} converges to the solution of the I.V.P. given by (25.1)-(25.2).

Example 1: Find the first three approximate analytical solutions to the I.V.P. dy/dt = 3t + y^2 subject to y(0) = 1.

Solution:

Integrating dy/dt = 3t + y^2 over the domain, we get

∫_{1}^{y} dy = ∫_{0}^{t} (3t + y^2) dt

⇒ y(t) − y(0) = ∫_{0}^{t} (3t + y^2) dt

or y(t) = 1 + ∫_{0}^{t} (3t + y_0^2) dt

or y^(1)(t) = 1 + ∫_{0}^{t} (3t + 1) dt = (3/2) t^2 + t + 1

is the first approximation. The second approximation is

y^(2)(t) = 1 + ∫_{0}^{t} [3t + (y^(1)(t))^2] dt

         = 1 + ∫_{0}^{t} [3t + ((3/2) t^2 + t + 1)^2] dt

         = (9/20) t^5 + (3/4) t^4 + (4/3) t^3 + (5/2) t^2 + t + 1

Likewise, the third approximation can be found as

y^(3)(t) = (81/4400) t^11 + (27/400) t^10 + (47/240) t^9 + (17/32) t^8 + (1157/1260) t^7
           + (68/45) t^6 + (25/12) t^5 + (23/12) t^4 + 2 t^3 + (5/2) t^2 + t + 1.
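Because each iterate here is a polynomial, the Picard iteration can be automated with simple polynomial arithmetic. A sketch (an illustration, not the text's procedure), with polynomials stored as coefficient lists [c0, c1, ...] in exact rational arithmetic:

```python
from fractions import Fraction as F

def pmul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def pint(p):
    """Integrate a polynomial from 0 to t (constant term 0)."""
    return [F(0)] + [c / (k + 1) for k, c in enumerate(p)]

def padd(p, q):
    n = max(len(p), len(q))
    p, q = p + [F(0)] * (n - len(p)), q + [F(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

# Picard iteration for dy/dt = 3t + y^2, y(0) = 1:
#   y_{k+1}(t) = 1 + integral_0^t (3s + y_k(s)^2) ds
y = [F(1)]                                    # y^(0) = 1
for _ in range(2):
    y = padd([F(1)], pint(padd([F(0), F(3)], pmul(y, y))))
# y is now y^(2): 1 + t + 5/2 t^2 + 4/3 t^3 + 3/4 t^4 + 9/20 t^5
```

The two loop iterations reproduce exactly the coefficients of y^(1) and y^(2) derived above.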


The advantage of this method is that we can compute the solution of the given I.V.P. at every point of the domain. The disadvantage, as seen in the above example, is that the integration procedure is laborious, and at times integration might not even be possible for some functions f(t, y). The limited utility of this method demands the search for more elegant methods for solving I.V.P.s.

Example 2: Solve the equation dy/dt = t + y^2 subject to the condition y(t = 0) = 1.

Solution:

Start with y^(0) = y_0 = 1.

This generates y^(1)(t) = 1 + ∫_{0}^{t} (t + 1) dt = 1 + t + t^2/2

and y^(2)(t) = 1 + ∫_{0}^{t} [t + (1 + t + t^2/2)^2] dt

             = 1 + t + (3/2) t^2 + (2/3) t^3 + (1/4) t^4 + (1/20) t^5

and so on.

Note that finding y^(3)(t) itself involves squaring a fifth degree polynomial and integrating it, thus making the method more tedious.

In the next lesson, we learn one other analytical method that gives an
approximation for the solution in a function form.


Exercises:

1. Use Picard's method to find the solution y at t = 0.2, 0.4 and 1.0, correct to three decimal places, for the I.V.P. dy/dt = t^2/(1 + y^2) subject to y(0) = 0. [Hint: obtain y^(2)(t), which gives an approximate solution correct to 3 decimal places.]

2. Use Picard's method of successive approximations to solve the following I.V.P.s:

a) dy/dx = 1 + xy, y(0) = 1.

b) dy/dt = t − y, y(0) = 1.

c) dy/dt = t + t^4 y, y(0) = 3.

d) dy/dx = x + y^2, y(0) = 0.

e) dy/dx = x + y, y(0) = 1.

Keywords: Approximate analytical solutions, Initial value problems, Picard's method.

Module 1: Numerical Analysis

Lesson 26

Taylor Series Method

26.1 Taylor Series Method

Let y = y(t) be a continuously differentiable function in the interval [t_0, b]. Expanding y(t) around t = t_0 in a Taylor series, we obtain

y(t) = y(t_0) + ((t − t_0)/1!) y′(t_0) + ((t − t_0)^2/2!) y′′(t_0) + ((t − t_0)^3/3!) y′′′(t_0) + ....                (26.1)

Taking t = t_1 and t_1 − t_0 = h, a small increment, i.e. t_0 + h = t_1, t_1 ∈ [t_0, b], we get

y(t_1) = y(t_0) + (h/1!) y′(t_0) + (h^2/2!) y′′(t_0) + (h^3/3!) y′′′(t_0) + ....                (26.2)

Now consider the I.V.P.

dy/dt = f(t, y), t ∈ [t_0, b]                (26.4)

subject to y(t_0) = y_0                (26.5)

The Taylor series solution of the given I.V.P. (26.4)-(26.5) is the approximate function y(t) as given in (26.1), which involves the derivatives of the unknown function y(t) at the initial point t = t_0 and satisfies the I.V.P. (26.4)-(26.5).

Given dy/dt = f(t, y).

At t = t_0: dy/dt (t = t_0) = f(t_0, y(t_0)) = f(t_0, y_0) = f_0 (say)

y′′(t) = d/dt (dy/dt) = d/dt (f(t, y)) = ∂f/∂t + (∂f/∂y)(dy/dt)

       = ∂f/∂t + (∂f/∂y) f(t, y).

y′′(t = t_0) = ∂f/∂t (t_0, y(t_0)) + ∂f/∂y (t_0, y(t_0)) · f(t_0, y(t_0)).

In the same manner, we calculate the higher order derivatives of y (t ) depending on


the necessity. Before we discuss the order of approximation of the Taylor Series
method and error associated with a particular order method, let us see its utility for
finding the solution of the given I.V.P. through a few examples.

Example 1: Find y(0.1) from the I.V.P. dy/dt = 3t + y^2 subject to y(0) = 1.

Solution:

Given y′ = 3t + y^2

⇒ y′(0) = 0 + [y(0)]^2 = 1

y′′ = 3 + 2y y′ ⇒ y′′(0) = 3 + 2 y(0) y′(0) = 3 + 2·1·1 = 5

y′′′ = 2y y′′ + 2(y′)^2 ⇒ y′′′(0) = 2 y(0) y′′(0) + 2[y′(0)]^2 = 2·1·5 + 2·1 = 12

y^(iv) = 2y′ y′′ + 2y y′′′ + 4y′ y′′ = 2·1·5 + 2·1·12 + 4·1·5 = 54, etc.


Now the Taylor series solution is written as

y(t) = y(t_0) + (t − t_0) y′(t_0) + ((t − t_0)^2/2!) y′′(t_0) + ((t − t_0)^3/3!) y′′′(t_0) + ((t − t_0)^4/4!) y^(iv)(t_0) + ...

Taking t = 0.1, t_0 = 0 (given),

y(0.1) = y(0) + (0.1) y′(0) + ((0.1)^2/2!) y′′(0) + ((0.1)^3/3!) y′′′(0) + ((0.1)^4/4!) y^(iv)(0) + ...

       = 1 + 0.1 + (5/2)(0.1)^2 + (12/3!)(0.1)^3 + (54/4!)(0.1)^4 + ...

∴ y(0.1) = 1.12722.
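The same calculation in code (an illustrative sketch): the derivatives at t_0 = 0 computed above are accumulated into the truncated series (26.2).

```python
import math

# Derivatives y'(0), y''(0), y'''(0), y^iv(0) for y' = 3t + y^2, y(0) = 1,
# as computed above.
derivs = [1, 5, 12, 54]
y0, h = 1.0, 0.1

# Truncated Taylor series: y(h) ~ y0 + sum_k h^k/k! * y^(k)(0)
y_h = y0 + sum(h**k / math.factorial(k) * d
               for k, d in enumerate(derivs, start=1))
# y_h ~ 1.12722
```

Adding further derivative terms would refine the last digits; with h = 0.1 the omitted fifth order term is already of size about 10^-5.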

Example 2: Obtain the Taylor series solution for the I.V.P. given by dy/dt = 2y + 3e^t, y(0) = 0. Compare the solution at t = 0.2 with the exact solution given by y(t) = 3(e^{2t} − e^t).

Solution:

Given y′ = 2y + 3e^t ⇒ y′(0) = 3

y′′ = 2y′ + 3e^t ⇒ y′′(0) = 9

y′′′ = 2y′′ + 3e^t ⇒ y′′′(0) = 21

y^(iv) = 2y′′′ + 3e^t ⇒ y^(iv)(0) = 45

The solution is y(t) = 3t + (9/2) t^2 + (21/6) t^3 + (45/24) t^4 + ....

At t = 0.2, y(0.2) = 0.8110.

From the exact solution, y(0.2) = 0.8112.

So the error in the numerical solution is 0.8112 − 0.8110 = 0.0002.


26.2 Local Truncation Error and Order of the Method

The Taylor series expansion of y(t) about any point t_j is written as

y(t) = y(t_j) + (t − t_j) y′(t_j) + ((t − t_j)^2/2!) y′′(t_j) + ...
       + (1/p!) (t − t_j)^p y^(p)(t_j) + (1/(p + 1)!) (t − t_j)^{p+1} y^{(p+1)}(t_j + θh)                (26.6)

where y^(p)(t) is the pth derivative of y(t) and 0 < θ < 1 is a real number. The last term in the expansion is called the remainder term. In general, the local truncation error at any location t = t_{j+1} of the method is given by

T_{j+1} = (1/(p + 1)!) h^{p+1} y^{(p+1)}(t_j + θh)

where h = t_{j+1} − t_j.

The order of a method is the largest integer p for which (1/h) |T_{j+1}| = O(h^p).

The notation O(h^p) denotes that all terms of order h^p onwards are grouped into a single term.

The method given by equation (26.2), truncated after the term of order h^p, is called the Taylor series method of order p.

Example 3: Determine the first three non-zero terms in the Taylor series for y(t) from the I.V.P. y′ = t^2 + y^2, y(0) = 0.

Solution:

y′(0) = 0;

y′′ = 2t + 2yy′ ⇒ y′′(0) = 0

y′′′ = 2 + 2(y′)^2 + 2yy′′ ⇒ y′′′(0) = 2

y^(iv) = 6y′y′′ + 2yy′′′ ⇒ y^(iv)(0) = 0

y^(v) = 2yy^(iv) + 8y′y′′′ + 6(y′′)^2 ⇒ y^(v)(0) = 0

y^(vi) = 2yy^(v) + 10y′y^(iv) + 20y′′y′′′ ⇒ y^(vi)(0) = 0

Similarly y^(vii)(0) = 80, y^(viii)(0) = y^(ix)(0) = y^(x)(0) = 0, and y^(xi)(0) = 38400.

Thus the three term Taylor series solution for the given I.V.P. is

y(t) = (1/3) t^3 + (1/63) t^7 + (2/2079) t^11.

Example 4: Given the I.V.P. y′ = 2t + 3y, y(0) = 1, whose exact solution is y(t) = (11/9) e^{3t} − (2/9)(3t + 1):

a) Use the 2nd order Taylor series method to get y(0.2) with step length h = 0.1 (note h = t_{j+1} − t_j).

b) Find t if the error in y(t), obtained from the first four terms of the Taylor series, is to be less than 5 × 10^{−5} after rounding.

c) Determine the number of terms in the Taylor series required to obtain the result correct to 5 × 10^{−6} for t ≤ 0.4.

Solution:

a) The second order Taylor series method is given by y_{n+1} = y_n + h y_n′ + (h^2/2) y_n′′ + O(h^3), with n = 0, 1 and h = 0.1.


We have y′ = 2t + 3y

or y′(t_n) = y_n′ = 2t_n + 3y_n

and y′′(t_n) = y_n′′ = 2 + 3y_n′ = 2 + 6t_n + 9y_n.

Take n = 0; h = 0.1; y_0 = 1; y_0′ = 3; y_0′′ = 11.

∴ y(0.1) = 1 + (0.1)(3) + ((0.1)^2/2)(11) = 1.355.

Taking n = 1; h = 0.1; y_1 = 1.355; y_1′ = 4.265; y_1′′ = 14.795

⇒ y(0.2) = 1.355 + (0.1)(4.265) + ((0.1)^2/2!)(14.795)

         = 1.8555.
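The two hand-computed steps above can be sketched as a loop (an illustration, not the text's listing):

```python
# Second order Taylor method for y' = 2t + 3y, y(0) = 1:
#   y_{n+1} = y_n + h*y'_n + (h^2/2)*y''_n,
# with y' = 2t + 3y and y'' = 2 + 6t + 9y.
h, t, y = 0.1, 0.0, 1.0
for _ in range(2):                  # two steps: t = 0.1, then t = 0.2
    yp = 2*t + 3*y                  # y'_n
    ypp = 2 + 6*t + 9*y             # y''_n
    y = y + h*yp + h*h/2 * ypp
    t = t + h
# y ~ 1.8555 at t = 0.2, matching the hand computation
```

Each pass of the loop reproduces one row of the hand calculation (first 1.355, then 1.8555).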
b) Given y(0) = 1, y′(0) = 3.

Compute y′′(0) = 11, y′′′(0) = 33, y^(iv)(0) = 99.

So the four term Taylor series solution is

y(t) = 1 + 3t + (11/2) t^2 + (11/2) t^3.

The remainder term

R_4 = (t^4/4!) y^(iv)(ξ)

gives the error in the approximation. We require |R_4| ≤ 5 × 10^{−5}.

Since the exact solution is known, use it to find y^(iv)(ξ):

from y(t) = (11/9) e^{3t} − (2/9)(3t + 1) we get y^(iv)(ξ) = 99 e^{3ξ}.

Thus R_4 = (99/24) e^{3t} t^4 < 5 × 10^{−5}.

Simplifying and solving this non-linear algebraic inequality for t, we get

t^4 e^{3t} < 0.000012  ⇒  t < 0.056

Note: If the exact solution is not known, write one more non-vanishing term in the Taylor series than is required and then differentiate this series the required number of times (here, four times) to estimate y^(iv).

To determine the number of terms required in the Taylor series solution to obtain the result correct to 5 × 10^{−6} for t ≤ 0.4, we estimate:

max_{0 ≤ t ≤ 0.4} (t^p/p!) · max_{ξ ∈ [0, 0.4]} |y^(p)(ξ)| ≤ 5 × 10^{−6}

Again using the analytical solution, the pth derivative of y(t) at t = ξ satisfies

max |y^(p)(ξ)| = (11) 3^{p−2} e^{1.2}

or ((0.4)^p/p!) (11) 3^{p−2} e^{1.2} ≤ 5 × 10^{−6}

Solving this non-linear inequality using the Newton-Raphson method or otherwise, we get a lower bound for p as

p ≥ 10

This indicates that a Taylor series method of order at least 10 gives the solution accurate to within 5 × 10^{−6} for all values of t ∈ [0, 0.4].

Exercises:

1. Compute an approximation to y(0.1) up to five decimal places from dy/dt = t^2 y − 1, y(0) = 1.

2. Solve y′ = y^2 + t, y(0) = 1, using the Taylor series method and compute y(0.2) with h = 0.1.

3. Evaluate y(0.1) correct to six decimal places (5 × 10^{−6}) by the Taylor series method if y(t) satisfies dy/dt = 1 + yt, y(0) = 1.

Keywords: Taylor Series method, Taylor series solution.

Module 1: Numerical Analysis

Lesson 27

Single Step Methods

27.1 Introduction

Picard's and Taylor series methods give the solution of the given I.V.P. in the form of a power series. We now describe some numerical methods which give the solution in the form of discrete data at equally spaced points of the interval. The discretization of the interval is as described in lessons 1 and 2 and is not repeated here.

Single Step methods:


Consider the I.V.P.

dy/dt = f(t, y), t ∈ [t₀, b], subject to y(t₀) = y₀   (27.1)

Consider the partition of the interval as


t₀ < t₁ < t₂ < ... < t_{j−1} < t_j < t_{j+1} < ... < t_N = b   (27.2)

such that t_{j+1} − t_j = h for all j = 0, 1, 2, ..., N − 1, where the constant h is the step size.

Also denote y_j = y(t_j) and y_{j+1} = y(t_{j+1}); t₀, t₁, ..., t_N are called the nodal points. A general single step method may be written as

y_{j+1} = y_j + h φ(t_{j+1}, t_j, y_{j+1}, y_j, h)   (27.3)

where φ is called the increment function. Note that in (27.3) the unknown solution appears at the two nodes t_j and t_{j+1}: in a single step method the solution at a node depends only on the solution at the previous node.

Also, if y j +1 can be obtained by evaluating the right hand side of (27.3), then the
method is called an explicit method.


Then equation (27.3) is written as y_{j+1} = y_j + h φ(t_j, y_j, h)   (27.4)

The method is called an implicit method if the increment function φ depends on y_{j+1} also, as seen in equation (27.3). In such a situation we cannot get the solution y_{j+1} explicitly; we need to solve equation (27.3) to obtain it.

27.2 The Local Truncation Error (L.T.E)

Denote y(t_j) as the exact solution and y_j as the numerical solution of I.V.P. (27.1) at t = t_j. The exact solution satisfies

y(t_{j+1}) = y(t_j) + h φ(t_{j+1}, t_j, y(t_{j+1}), y(t_j), h) + T_{j+1}   (27.5)

where T_{j+1} is called the L.T.E. of the method. Thus the L.T.E. is

T_{j+1} = y(t_{j+1}) − y(t_j) − h φ(t_{j+1}, t_j, y(t_{j+1}), y(t_j), h)   (27.6)

By definition, the order of the single step method is the largest integer p for which

(1/h) T_{j+1} = O(h^p)   (27.7)

27.3 Forward Euler Method for the I.V.P. dy/dt = f(t, y); y(t₀) = y₀:

Let t_j be any point in the interval [t₀, b]. Then the slope of y(t) at t = t_j is given by

dy/dt |_{t = t_j} = f(t_j, y_j)   (27.8)


[Fig. 1: Explicit Euler method - the exact solution y(t) and the numerical solution at t_j and t_{j+1}; one step of size h follows the slope f(t_j, y_j), and the gap between the two curves at t_{j+1} is the error.]

Replacing dy/dt at t = t_j by the first order forward difference at t_j, we get

(y_{j+1} − y_j)/h = f(t_j, y_j), or y_{j+1} = y_j + h·f(t_j, y_j), j = 0, 1, 2, ..., N − 1,

at every nodal point t_j of the partition (27.2). By this, we mean


y1 = y0 + h ⋅ f (t0 , y0 ) ,

y2 = y1 + h ⋅ f (t1 , y1 ) ,

………
yN= yN −1 + h ⋅ f (t N −1 , yN −1 ) .


For a chosen h and the given initial condition y₀, one can find the solution explicitly at all the nodal points of the given interval. The local truncation error in the method is given by
T_{j+1} = y(t_{j+1}) − y_{j+1}
        = y(t_{j+1}) − [ y(t_j) + h·f(t_j, y_j) ]

Now expanding y(t_{j+1}) in the Taylor Series about t = t_j and simplifying, we get

T_{j+1} = (h²/2) y″(ξ)

where t_j < ξ < t_{j+1}. Let the maximum value of | y″(t) | in [t₀, b] be M*; then

T = max_{[t₀, b]} | T_{j+1} | = (h²/2) max_{[t₀, b]} | y″(ξ) | = (h²/2) M*.

Thus T ≤ (h²/2) M*, i.e., the L.T.E. is of O(h²) and, by definition, the order of this forward Euler method is one, since (1/h) T_{j+1} = (1/h)(h²/2) M* = O(h¹).

Example 1: Use the Forward Euler method to solve y′ = −y with the initial condition y(0) = 1 in [0, 0.04], taking h = 0.01.

Solution:

Take t₀ = 0, t₁ = 0.01, t₂ = 0.02, t₃ = 0.03, t₄ = 0.04; f(t, y) = −y.

Now y(t₁) = y(t₀) + h·f(t₀, y₀)
         = 1 + (0.01)(−1) = 0.99,

y(t₂) = y(t₁) + h·f(t₁, y₁)
      = 0.99 + (0.01)(−0.99) = 0.9801,

y(t₃) = y(t₂) + h·f(t₂, y₂)
      = 0.9801 + (0.01)(−0.9801) = 0.9703,

y(t₄) = 0.9703 + (0.01)(−0.9703)
      = 0.9606.

We know the exact solution of y′ = −y, y(0) = 1 is y(t) = e^{−t}, which gives y(0.04) = 0.9608.

The error in the Euler method for the solution at t = 0.04 is

| 0.9606 − 0.9608 | = 0.0002.
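The recursion above is easy to verify in code. A minimal sketch (the function name and loop structure are my own, not from the text):

```python
# Forward Euler for Example 1: y' = -y, y(0) = 1 on [0, 0.04] with h = 0.01.
import math

def forward_euler(f, t0, y0, h, n_steps):
    """Advance y_{j+1} = y_j + h * f(t_j, y_j) for n_steps steps."""
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t = t + h
    return y

y_num = forward_euler(lambda t, y: -y, 0.0, 1.0, 0.01, 4)
y_exact = math.exp(-0.04)
print(round(y_num, 4), round(y_exact, 4))  # 0.9606 0.9608
```

The error 0.0002 quoted above is just the difference of these two printed values.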

27.4 Backward Euler Method

We can also replace the slope of y(t) at t = t_j by the first order backward difference approximation, which gives

(y_j − y_{j−1})/h = f(t_j, y_j), or y_j = y_{j−1} + h·f(t_j, y_j),

or equivalently, y_{j+1} = y_j + h·f(t_{j+1}, y_{j+1}), j = 0, 1, 2, ..., N − 1.

Evidently, this is an implicit method. It can be shown (left as an exercise!) that the L.T.E. of this method is −(h²/2) y″(ξ), t_j < ξ < t_{j+1}, and the order of the method is also one. Let us now illustrate this method with an example.

Example 2: Solve the I.V.P. y′ = −2ty², y(0) = 1 in [0, 0.2] with h = 0.2.
Solution:

Backward Euler method is


y_{j+1} = y_j + h·f(t_{j+1}, y_{j+1}).

Here t₀ = 0, t₁ = 0.2, h = 0.2, f = −2ty². For j = 0:

y₁ = y₀ + h·f(t₁, y₁)

⇒ y(0.2) = y(0) + h[ −2·t₁·y(0.2)² ]

or y₁ = 1 − 2(0.2)(0.2) y₁²,

which can be written as a quadratic in y₁ as

0.08 y₁² + y₁ − 1 = 0,

whose positive root is y₁ = 0.9307.
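The implicit step reduces to a quadratic that can be solved in closed form; a quick check of the root quoted above (variable names are illustrative):

```python
# Backward Euler step for y' = -2*t*y^2: y1 = y0 + h*(-2*t1*y1^2) gives
# 0.08*y1^2 + y1 - 1 = 0 for t1 = 0.2, h = 0.2, y0 = 1.
import math

a, b, c = 0.08, 1.0, -1.0
y1 = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)  # positive root
print(round(y1, 4))  # 0.9307
```

For a non-polynomial f, the same implicit equation would instead be solved by an iterative root finder such as Newton-Raphson.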

Exercises:

1. Continue this to compute the solution at t = 0.4 if the above I.V.P. is solved in
the interval [ 0, 0.4] with h = 0.2 .

2. Solve the I.V.P. y′ = t + y², y(0) = 1, h = 0.1, on [0, 0.4] using the forward Euler method.

method.

3. Solve the I.V.P. y′ = −2ty², 0 ≤ t ≤ 0.5, h = 0.1, y(0) = 1 using the forward Euler method.

Keywords: Backward Euler method, Local truncation error, Forward Euler


method.

References

Jain, M. K., Iyengar, S. R. K. and Jain, R. K. (2008). Numerical Methods. Fifth Edition, New Age International Publishers, New Delhi.

Atkinson, Kendall E. (2004). Numerical Analysis. Second Edition, John Wiley & Sons, Singapore.

Suggested Reading

Scheid, Francis (1989). Numerical Analysis. Second Edition, McGraw-Hill, New York.

Sastry, S. S. (2005). Introductory Methods of Numerical Analysis. Fourth Edition, Prentice Hall of India, New Delhi.



Module 1: Numerical Analysis

Lesson 28

Modified Euler Method

28.1 Introduction

The first order Explicit Euler method is improved to achieve better accuracy in the numerical solution of the initial value problem y′ = f(t, y), y(t₀) = y₀.

In the modified Euler method, the slope of y(t) at t = t_j is approximated by the average of the slopes at t = t_j and t = t_{j+1}, i.e.,

y_{j+1} = y_j + (h/2)·[ f(t_j, y_j) + f(t_{j+1}, y_{j+1}) ]   (28.1)

This implicit method is used by setting up an iterative procedure as follows:

y_{j+1}^(s+1) = y_j + (h/2)·[ f(t_j, y_j) + f(t_{j+1}, y_{j+1}^(s)) ], s = 0, 1, 2, ...   (28.2)

The initial approximation y_{j+1}^(0) is taken as the Euler method solution y_{j+1}. The iteration is terminated at each step once the condition

| y_{j+1}^(s+1) − y_{j+1}^(s) | < ε

is satisfied, where ε is a preassigned error tolerance.

Exercise: Show that the modified Euler method has L.T.E. of O(h³), so that the order of the method is two.

The exponent p in O(h^p) is the order of accuracy of the method. It is a measure of accuracy of any numerical scheme, and it indicates how rapidly the accuracy improves as the grid spacing h is refined on a given interval. For example, in a first order method such as

y_{j+1} = y_j + h·f(t_j, y_j) + O(h),

if we reduce the mesh size from h to h/2, the error is reduced by approximately a factor 2. Similarly, in a second order method such as

y_{j+1} = y_j + (h/2)·[ f(t_j, y_j) + f(t_{j+1}, y_{j+1}) ] + O(h²),

if we refine the mesh size by a factor of 2, we expect the error to reduce by a factor 2², i.e., 4, which gives a rapid decrease in the error. Thus higher order numerical schemes are preferred.

Example 1: Determine the value of y(0.1) from the I.V.P. y′ = y + t², y(0) = 1, h = 0.05.

Solution:

t₀ = 0, t₁ = t₀ + h = 0.05, t₂ = t₀ + 2h = t₁ + h = 0.1; y₀ = 1, f(t, y) = t² + y; y₁ = y(0.05), y₂ = y(0.1).

y₁^(s+1) = y₀ + (h/2)·[ f(t₀, y₀) + f(t₁, y₁^(s)) ], s = 0, 1, 2, ...

Compute y₁^(0) using the Euler method:

y₁^(0) = y₀ + h·f(t₀, y₀)
      = 1 + (0.05)(1.0)
      = 1.05.

Use modified Euler method now as follows:


Compute y(0.05):

Take s = 0: calculate f(t₁, y₁^(0)) = 1.0525, so

y₁^(1) = y₀ + (h/2)·[ f(t₀, y₀) + f(t₁, y₁^(0)) ] = 1.0513.

Take s = 1; y₁^(2) is computed as

y₁^(2) = y₀ + (h/2)·[ f(t₀, y₀) + f(t₁, y₁^(1)) ] = 1.0513.

Note: y₁^(2) − y₁^(1) = 0 to the four decimal places retained, so we can stop the iteration and conclude that y(0.05) = 1.0513.

Compute y(0.1): t₁ = 0.05, y₁ = 1.0513, h = 0.05.

The Euler method gives y₂^(0) = y₁ + h·f(t₁, y₁) ⇒ y₂^(0) = 1.104.

Now use the modified Euler method:

s = 0 ⇒ y₂^(1) = y₁ + (h/2)·[ f(t₁, y₁) + f(t₂, y₂^(0)) ] = 1.1055

s = 1 ⇒ y₂^(2) = y₁ + (h/2)·[ f(t₁, y₁) + f(t₂, y₂^(1)) ] = 1.1055

Take the solution y(0.1) = 1.1055.
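The two steps worked above can be reproduced with a short routine; a sketch (the stopping tolerance ε = 5×10⁻⁵ is my own choice, not from the text):

```python
# Modified Euler with fixed-point iteration for y' = t^2 + y, y(0) = 1, h = 0.05.
def f(t, y):
    return t * t + y

def modified_euler_step(t, y, h, eps=5e-5):
    y_s = y + h * f(t, y)  # Euler predictor y^(0)
    while True:
        y_next = y + h / 2 * (f(t, y) + f(t + h, y_s))
        if abs(y_next - y_s) < eps:
            return y_next
        y_s = y_next

y1 = modified_euler_step(0.0, 1.0, 0.05)   # y(0.05)
y2 = modified_euler_step(0.05, y1, 0.05)   # y(0.1)
print(y1, y2)
```

The computed values agree with 1.0513 and 1.1055 above to about four decimal places; tightening ε changes only digits beyond those shown.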

Example 2: Given the I.V.P. dy/dt = 2ty, y(1) = 1, find y(1.4) using the modified Euler method by taking h = 0.1. Compare this solution with the exact solution y(t) = e^{t²−1} and calculate the percentage relative error.

Solution:


We have the data: t₀ = 1, t₁ = 1.1, t₂ = 1.2, t₃ = 1.3, t₄ = 1.4, y₀ = 1, h = 0.1.

The modified Euler method is

y_{j+1} = y_j + (h/2)·[ f(t_j, y_j) + f(t_{j+1}, y_{j+1}) ] for j = 0, 1, 2, 3.

Take j = 0: the iterative method is written as

y₁^(s+1) = y₀ + (h/2)·[ f(t₀, y₀) + f(t₁, y₁^(s)) ], s = 0, 1, 2, ...
Compute y₁^(0) using the Euler method:

y₁^(0) = y₀ + (0.1)·2·(1)·(1) = 1.2.

Now y₁^(1) = y₀ + (0.1/2)·[ 2t₀y₀ + 2t₁y₁^(0) ] = 1.232,

and this value is taken as y₁ (a single corrector application is used at each step here).
The steps j = 1 to j = 3 are calculated similarly (left as an exercise) and are tabulated below. The absolute error and the percentage relative error are calculated as

| y(t_j) − y_j |  and  ( | y(t_j) − y_j | / | y(t_j) | ) × 100,

where y(t_j) is the exact value.

Table 28.1

j    t_j    y_j      Exact value y = e^{t_j²−1}    Absolute error    Percentage relative error
0    1.0    1.0      1.0                           0                 0
1    1.1    1.232    1.2337                        0.0017            0.14
2    1.2    1.5479   1.5527                        0.0048            0.31
3    1.3    1.9832   1.9937                        0.0106            0.53
4    1.4    2.5908   2.6117                        0.0209            0.80

Exercises:


1. Solve the following I.V.Ps. using the modified Euler method:

(i) dy/dt = −2y, y(0) = 1, h = 0.2, t ∈ [0, 0.6].

(ii) dy/dt = (y − t)/(y + t), y(0) = 1, h = 0.1, t ∈ [0, 0.2].

(iii) dy/dt = 2 + ty, y(1) = 1, h = 0.5, t ∈ [1, 2].

(iv) y′ = t(1 + t³y), y(0) = 3, h = 0.1, t ∈ [0, 0.4].

Keywords: Absolute error, Explicit Euler method, Percentage relative error, Modified Euler method.

References

Jain, M. K., Iyengar, S. R. K. and Jain, R. K. (2008). Numerical Methods. Fifth Edition, New Age International Publishers, New Delhi.

Atkinson, Kendall E. (2004). Numerical Analysis. Second Edition, John Wiley & Sons, Singapore.

Suggested Reading

Scheid, Francis (1989). Numerical Analysis. Second Edition, McGraw-Hill, New York.

Sastry, S. S. (2005). Introductory Methods of Numerical Analysis. Fourth Edition, Prentice Hall of India, New Delhi.

Module 1: Numerical Analysis

Lesson 29

Runge-Kutta Methods

29.1 Introduction

In a single step explicit method, the approximate solution y_{j+1} is computed from the known solution at the point (t_j, y_j) using

y_{j+1} = y_j + h·f(t_j, y_j)   (29.1)

or y_{j+1} = y_j + h·(slope of y(t) at t = t_j)   (29.2)

In equation (29.1) we used the slope at t = t_j only. Similarly, in the modified Euler method

y_{j+1} = y_j + (h/2)·[ f(t_j, y_j) + f(t_{j+1}, y_{j+1}) ]   (29.3)

the slope is replaced by the average of the slopes at the end points (t_j, y_j) and (t_{j+1}, y_{j+1}).

29.2 Runge-Kutta Methods

Runge-Kutta methods use a weighted average of slopes on the given interval [t_j, t_{j+1}], instead of a single slope. Thus the general Runge-Kutta method may be defined as

y_{j+1} = y_j + h·[ weighted average of slopes at points on the given interval ]   (29.4)

In this way one can derive explicit methods using n slopes, for n = 1, 2, ..., N; here n also indicates the order of the resulting Runge-Kutta method. The general nth order


Runge-Kutta method is written as


y_{j+1} = y_j + (w₁k₁ + w₂k₂ + ... + w_N k_N)

where the wᵢ are the weights and each kᵢ is defined as

k₁ = h·f(t_j, y_j)
k₂ = h·f(t_j + c₂h, y_j + a₂₁k₁)
k₃ = h·f(t_j + c₃h, y_j + a₃₁k₁ + a₃₂k₂)
⋮
k_N = h·f(t_j + c_N h, y_j + a_{N1}k₁ + a_{N2}k₂ + ... + a_{N,N−1}k_{N−1})   (29.5)

All the wᵢ’s, cᵢ’s and aᵢⱼ’s are parameters which are determined by forcing the method (29.4) to be of the desired order. Deriving a general higher order method is out of the purview of this material, but we demonstrate the derivation of the 2nd order Runge-Kutta method below.

29.3 Second Order Runge-Kutta Method


Consider the general form of the 2nd order Runge-Kutta method with 2 slopes,

y_{j+1} = y_j + w₁k₁ + w₂k₂   (29.6)

where k₁ = h·f(t_j, y_j)
      k₂ = h·f(t_j + c₂h, y_j + a₂₁k₁)

The parameters c₂, a₂₁, w₁, w₂ are chosen to make y_{j+1} agree with the exact solution y(t_{j+1}) up to the 2nd order.

Now writing y(t_{j+1}) in a Taylor series about t = t_j,

y(t_{j+1}) = y(t_j) + h·y′(t_j) + (h²/2!)·y″(t_j) + ...
           = y(t_j) + h·f(t_j, y_j) + (h²/2!)·[ ∂f/∂t + f ∂f/∂y ]_{t=t_j} + ...   (29.7)

Also, k₁ = h·f_j and

k₂ = h·f(t_j + c₂h, y_j + a₂₁h·f_j).

Expanding f about (t_j, y_j), we get

k₂ = h [ f_j + h( c₂ ∂f/∂t + a₂₁ f ∂f/∂y )_{t=t_j} + (h²/2!)( c₂² ∂²f/∂t² + 2c₂a₂₁ f ∂²f/∂y∂t + a₂₁² f² ∂²f/∂y² )_{t=t_j} + ... ]

Substituting the expressions for k₁ and k₂ in (29.6), we get

y_{j+1} = y_j + (w₁ + w₂) h·f_j + h²( w₂c₂ ∂f/∂t + w₂a₂₁ f ∂f/∂y )_{t=t_j} + ...   (29.8)

Comparing the coefficients of h and h² in (29.7) and (29.8), we obtain

w₁ + w₂ = 1; w₂c₂ = 1/2; w₂a₂₁ = 1/2.

The solution of this system may be written as

a₂₁ = c₂; w₂ = 1/(2c₂); w₁ = 1 − 1/(2c₂),

where c₂ is an arbitrary non-zero constant.

With the choice c₂ = 1/2, we derive a 2nd order Runge-Kutta method:

a₂₁ = 1/2; w₂ = 1; w₁ = 0,

so that k₁ = h·f(t_j, y_j),

k₂ = h·f(t_j + h/2, y_j + k₁/2),

and y_{j+1} = y_j + k₂   (29.9)

With the choice c₂ = 1, we get

w₂ = 1/2, w₁ = 1/2 and a₂₁ = 1,

and y_{j+1} = y_j + (k₁ + k₂)/2,

where k₁ = h·f(t_j, y_j),
      k₂ = h·f(t_j + h, y_j + k₁).

This is also a second order method, known as the Euler-Cauchy method. Clearly, different choices of c₂ give different second order Runge-Kutta methods. Let us demonstrate their utility for solving initial value problems.

Example 1: Compute y(0.4) for the I.V.P. y′ = −2ty², y(0) = 1, h = 0.2.

Solution:

Using y_{j+1} = y_j + k₂ with

k₁ = h·f(t_j, y_j) = (0.2)(−2t_j y_j²) = −0.4 t_j y_j²,

k₂ = h·f(t_j + h/2, y_j + k₁/2) = −0.4 (t_j + 0.1)(y_j + k₁/2)².

Taking j = 0, with t₀ = 0, y₀ = 1:

k₁ = 0, k₂ = −0.04,

∴ y(0.2) = y(0) + k₂ = 1 − 0.04 = 0.96.

For j = 1, t₁ = 0.2, y₁ = 0.96:

k₁ = −0.073728, k₂ = −0.10226,

and y(0.4) = y(0.2) + k₂ = 0.96 − 0.10226 = 0.85774.
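Both steps of this example can be checked with a few lines implementing (29.9) (a sketch; the function names are my own):

```python
# Second order (midpoint) Runge-Kutta for y' = -2*t*y^2, y(0) = 1, h = 0.2.
def f(t, y):
    return -2.0 * t * y * y

def rk2_midpoint(t0, y0, h, n_steps):
    t, y = t0, y0
    for _ in range(n_steps):
        k1 = h * f(t, y)
        k2 = h * f(t + h / 2, y + k1 / 2)
        y += k2
        t += h
    return y

print(rk2_midpoint(0.0, 1.0, 0.2, 1))  # y(0.2) ≈ 0.96
print(rk2_midpoint(0.0, 1.0, 0.2, 2))  # y(0.4) ≈ 0.85774
```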

Keywords: Runge-Kutta methods, Weighted average of slopes.

References

Jain, M. K., Iyengar, S. R. K. and Jain, R. K. (2008). Numerical Methods. Fifth Edition, New Age International Publishers, New Delhi.

Atkinson, Kendall E. (2004). Numerical Analysis. Second Edition, John Wiley & Sons, Singapore.

Suggested Reading

Scheid, Francis (1989). Numerical Analysis. Second Edition, McGraw-Hill, New York.

Sastry, S. S. (2005). Introductory Methods of Numerical Analysis. Fourth Edition, Prentice Hall of India, New Delhi.

Module 1: Numerical Analysis

Lesson 30
4th Order Runge-Kutta Method

30.1 3rd Order Runge-Kutta Method

The third order Runge-Kutta method is given by

y_{j+1} = y_j + (1/8)·(2k₁ + 3k₂ + 3k₃)   (30.1)

where k₁ = h·f(t_j, y_j),

k₂ = h·f(t_j + 2h/3, y_j + 2k₁/3),

and k₃ = h·f(t_j + 2h/3, y_j + 2k₂/3).

Derivation of this method involves the evaluation of eight unknowns from eight non-linear algebraic equations, which is very tedious; the same is true for the 4th order Runge-Kutta method. The interested reader is referred to a standard text book on Numerical Analysis for the detailed derivation. The fourth order Runge-Kutta method is given as:

y_{j+1} = y_j + (1/6)·(k₁ + 2k₂ + 2k₃ + k₄)   (30.2)

where k₁ = h·f(t_j, y_j),

k₂ = h·f(t_j + h/2, y_j + k₁/2),

k₃ = h·f(t_j + h/2, y_j + k₂/2),

and k₄ = h·f(t_j + h, y_j + k₃).


This method is also known as the Classical Runge-Kutta method. The 4th order R-K method is efficient and easy to use. Let us now illustrate its use for finding the solution of a given I.V.P.

Example 1: Use the Classical Runge-Kutta method to find the numerical solution at t = 0.6 of

dy/dt = √(t + y), y(0.4) = 0.41, h = 0.2.

Solution:

Given t₀ = 0.4, y₀ = 0.41; f(t, y) = √(t + y).

First let us evaluate the kᵢ’s:

k₁ = h·f(t₀, y₀) = (0.2)[0.4 + 0.41]^{1/2} = 0.18

k₂ = h·f(t₀ + h/2, y₀ + k₁/2) = (0.2)[(0.4 + 0.1) + (0.41 + 0.09)]^{1/2} = 0.2

k₃ = h·f(t₀ + h/2, y₀ + k₂/2) = (0.2)[(0.4 + 0.1) + (0.41 + 0.1)]^{1/2} = 0.20099

k₄ = h·f(t₀ + h, y₀ + k₃) = (0.2)[0.6 + (0.41 + 0.20099)]^{1/2} = 0.22009.

Now y(0.6) = y(0.4) + (1/6)(k₁ + 2k₂ + 2k₃ + k₄)
= 0.41 + 0.20035


= 0.61035 .
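Example 1 can be verified with a direct implementation of (30.2) (a sketch; the right-hand side √(t + y) is taken from the example):

```python
# Classical 4th order Runge-Kutta, one step of h = 0.2 for dy/dt = sqrt(t + y),
# y(0.4) = 0.41 (Example 1).
import math

def f(t, y):
    return math.sqrt(t + y)

def rk4_step(t, y, h):
    k1 = h * f(t, y)
    k2 = h * f(t + h / 2, y + k1 / 2)
    k3 = h * f(t + h / 2, y + k2 / 2)
    k4 = h * f(t + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

y_06 = rk4_step(0.4, 0.41, 0.2)
print(round(y_06, 5))  # 0.61035
```

Calling `rk4_step` repeatedly advances the solution over several nodes, as in Exercise 1(b) below.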
Example 2: Find y(0.1) from dy/dt = y − t, y(0) = 2, taking h = 0.1.

Solution:

Given f(t, y) = y − t, t₀ = 0, y₀ = 2, h = 0.1.

k₁ = h·f(t₀, y₀) = (0.1)[2 − 0] = 0.2

k₂ = h·f(t₀ + h/2, y₀ + k₁/2) = (0.1)[2.1 − 0.05] = 0.205

k₃ = h·f(t₀ + h/2, y₀ + k₂/2) = (0.1)[2.1025 − 0.05] = 0.20525

k₄ = h·f(t₀ + h, y₀ + k₃) = (0.1)[2.20525 − 0.1] = 0.21053

Hence y(0.1) = y(0) + (1/6)(k₁ + 2k₂ + 2k₃ + k₄) = 2 + 0.20517

∴ y(0.1) = 2.20517.

Example 3: Given dy/dt = 1 + y², y(0.2) = 0.2027, h = 0.2, compute y(0.4) using the 4th order
R-K method.

Solution:

Given f(t, y) = 1 + y², t₀ = 0.2, y₀ = 0.2027, h = 0.2.

Evaluating the kᵢ’s, we get

k₁ = 0.2082, k₂ = 0.2188, k₃ = 0.2195, k₄ = 0.2356,

and hence y(0.4) = y(0.2) + (1/6)(k₁ + 2k₂ + 2k₃ + k₄)
= 0.4228 .

Exercises:

1. Use the 3rd order Runge-Kutta method to find

a) y(0.1) given dy/dt = 3eᵗ + 2y, y(0) = 0, h = 0.1.

b) y(0.8) given dy/dt = √(t + y), y(0.4) = 0.41, h = 0.2.

c) y(0.2) given dy/dt = (y − t)/(y + t), y(0) = 1, h = 0.1.

d) y(0.2) given dy/dt = 3t + y/2, y(0) = 1, h = 0.1.

2. Use the 4th order Runge-Kutta method to solve problems 1(a)-1(d) and make a comparison table.

3. Solve the non-linear I.V.P.

dy/dt = (y² − 2t)/(y² + 2t) subject to y(0) = 1 in the interval [0, 1], taking h = 0.2.

4. Use the Classical Runge-Kutta method to find y(1.4) in steps of 0.2, given that dy/dt = t² + y², y(1) = 1.5.

Keywords: 3rd order Runge-Kutta method, 4th order Runge-Kutta method.


References

Jain, M. K., Iyengar, S. R. K. and Jain, R. K. (2008). Numerical Methods. Fifth Edition, New Age International Publishers, New Delhi.

Atkinson, Kendall E. (2004). Numerical Analysis. Second Edition, John Wiley & Sons, Singapore.

Suggested Reading

Scheid, Francis (1989). Numerical Analysis. Second Edition, McGraw-Hill, New York.

Sastry, S. S. (2005). Introductory Methods of Numerical Analysis. Fourth Edition, Prentice Hall of India, New Delhi.

Module 1: Numerical Analysis

Lesson 31

Methods for Solving Higher Order Initial Value Problems

31.1 Introduction

From the theory of ordinary differential equations, it is evident that an nth order ordinary differential equation, linear or non-linear, can be reduced to a system of n first order equations. To see this, consider the second order o.d.e.

y″ − y′ + 4t²y = 0

subject to the initial conditions y(1) = 1, y′(1) = 2.

Let u = y(t) and v = y′(t); then v′ = y″ and we have

u′ = v and v′ = v − 4t²u.

Thus the two first order equations are

u′ = v ; u(1) = 1

v′ = v − 4t²u ; v(1) = 2

This is known as the initial value problem for the first order system corresponding to the given 2nd order initial value problem.

In general, an nth order differential equation

y⁽ⁿ⁾ = F(t, y, y′, y″, ..., y⁽ⁿ⁻¹⁾)   (31.1)

is written as a first order system as follows. Set u₁ = y(t); then


u₁′ = u₂
u₂′ = u₃
...
u′_{n−1} = uₙ
uₙ′ = F(t, u₁, u₂, ..., uₙ)   (31.2)

With the transformed initial conditions


u₁(t₀) = η₀, u₂(t₀) = η₁, ..., uₙ(t₀) = η_{n−1}   (31.3)

This system is written in the vector form as

u′ = f(t, u), u(t₀) = η   (31.4)

where u = [u₁, u₂, ..., uₙ]ᵀ, f = [u₂, u₃, ..., uₙ, F]ᵀ, η = [η₀, η₁, ..., η_{n−1}]ᵀ.

Thus the methods of solution of the first order I.V.P.

dy/dt = f(t, y), y(t₀) = y₀

can be used to solve the above system of first order I.V.Ps.

31.2 Taylor Series Method

In what follows, the utility of the Taylor series method for systems of I.V.Ps. is shown through two examples.
1. The vector form of the Taylor Series method is written as


y_{j+1} = y_j + h y′_j + (h²/2!) y″_j + ... + (h^p/p!) y_j^(p)

Here j = 0, 1, 2, ..., N − 1 denotes the nodal point, and

y_j^(k) = [ y_{1,j}^(k), y_{2,j}^(k), ..., y_{n,j}^(k) ]ᵀ with y_{i,j}^(k) = (d^{k−1}/dt^{k−1}) f_i(t_j, y_{1,j}, y_{2,j}, ..., y_{n,j}).

31.3 Euler Method

The vector form of Euler method can be written as:


y_{j+1} = y_j + h y′_j, j = 0, 1, 2, ..., N − 1.

Example 1: Reduce the 3rd order I.V.P. into a system of first order I.V.Ps.:

y‴ + 2y″ + y′ − y = cos t, t ∈ [0, 1]

subject to y(0) = 0, y′(0) = 1, y″(0) = 2.

Solution:

Set y = u₁, u₁′ = u₂, u₂′ = u₃.

Now the system of 3 first order equations is


u1′ = u2

u2′ = u3

u3′= cos t − 2u3 − u2 + u1

The initial conditions are: u₁(0) = 0, u₂(0) = 1, u₃(0) = 2.


Example 2: Use the 2nd order Taylor Series method to compute y(1), y′(1) and y″(1), taking h = 1.0, in the above example.

Solution:

The Second order Taylor Series method is

u(t₀ + h) = u(t₀) + h u′(t₀) + (h²/2) u″(t₀).

Given h = 1,

u(1) = u(0) + u′(0) + (1/2) u″(0).
The system of I.V.P. is:

u′ = [u₁′, u₂′, u₃′]ᵀ = [u₂, u₃, cos t − 2u₃ − u₂ + u₁]ᵀ

subject to u(0) = [u₁(0), u₂(0), u₃(0)]ᵀ = [0, 1, 2]ᵀ.

We now require u′(0) and u″(0):

u′(0) = [u₂(0), u₃(0), 1 − 2u₃(0) − u₂(0) + u₁(0)]ᵀ = [1, 2, −4]ᵀ

Also u″(0) = [u₂′(0), u₃′(0), −sin 0 − 2u₃′(0) − u₂′(0) + u₁′(0)]ᵀ = [2, −4, 7]ᵀ

∴ u(1) = [0, 1, 2]ᵀ + [1, 2, −4]ᵀ + (1/2)[2, −4, 7]ᵀ = [2, 1, 3/2]ᵀ = [y(1), y′(1), y″(1)]ᵀ.
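The computation above can be cross-checked in a few lines; the formulas for u′ and u″ below are the ones obtained by differentiating the system by hand:

```python
# 2nd order Taylor step (vector form) for u' = [u2, u3, cos t - 2*u3 - u2 + u1],
# u(0) = [0, 1, 2], with one step h = 1.
import math

u0 = [0.0, 1.0, 2.0]

def u_prime(t, u):
    return [u[1], u[2], math.cos(t) - 2.0 * u[2] - u[1] + u[0]]

up0 = u_prime(0.0, u0)
# Differentiating the third component: u3'' = -sin t - 2*u3' - u2' + u1'.
upp0 = [up0[1], up0[2], -math.sin(0.0) - 2.0 * up0[2] - up0[1] + up0[0]]

h = 1.0
u1 = [u0[i] + h * up0[i] + h * h / 2.0 * upp0[i] for i in range(3)]
print(u1)  # [2.0, 1.0, 1.5] = [y(1), y'(1), y''(1)]
```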


Exercises:

1. Solve the system of equations u′ = −3u + 2v, u(0) = 0 and v′ = 3u − 4v, v(0) = 1/2 using

(i) the Forward Euler method and

(ii) the 2nd order Taylor Series method,

taking h = 0.2 on the interval [0, 0.6].

Keywords: Euler Method, Higher order initial value problems, Taylor Series
method.

References

Jain. M. K., Iyengar. S.R.K., Jain. R.K.,(2008).Numerical Methods. Fifth Edition,


New Age International Publishers, New Delhi.

Atkinson. E Kendall, (2004). Numerical Analysis. Second Edition, John Wiley &
Sons, Publishers, Singapore.

Suggested Reading

Scheid.Francis,(1989). Numerical Analyysis. Second Edition, Mc Graw-Hill


Publishers, New York.

Sastry.S.S, (2005). Introductory Methods of Numerical Analysis. Fourth


Edition,Prentice Hall of India Publishers, New Delhi.

Module 1: Numerical Analysis

Lesson 32
System of I.V.Ps. - 4th Order R-K Method

32.1 Introduction
We now present the vector forms of the 2nd order and 4th order Runge-Kutta methods.
Given the I.V.P.:

dy/dt = f(t, y, z); y(t₀) = y₀
dz/dt = g(t, y, z); z(t₀) = z₀   (32.1)

The Euler-Cauchy method (which belongs to the class of 2nd order Runge-Kutta methods), when applied to the above system of I.V.Ps., is written in vector form as

u_{j+1} = u_j + (1/2)(k₁ + k₂)

where u = [y, z]ᵀ; k₁ = [k₁₁, k₂₁]ᵀ; k₂ = [k₁₂, k₂₂]ᵀ

with k₁₁ = h·f(t_j, y_j, z_j)

k₂₁ = h·g(t_j, y_j, z_j)

and k₁₂ = h·f(t_j + h, y_j + k₁₁, z_j + k₂₁)

k₂₂ = h·g(t_j + h, y_j + k₁₁, z_j + k₂₁)

Example 1

Find y(0.2) and z(0.2) from the system of I.V.Ps.:

y′ = −3y + 2z, y(0) = 0

z′ = 3y − 4z, z(0) = 0.5

by taking h = 0.2 and using the Euler-Cauchy method.


Solution:

Given t₀ = 0, y₀ = 0, z₀ = 0.5, f(t, y, z) = −3y + 2z, g(t, y, z) = 3y − 4z:

k₁₁ = 0.2, k₂₁ = −0.4

k₁₂ = −0.08, k₂₂ = 0.04

∴ y(0.2) = y(0) + (1/2)(k₁₁ + k₁₂) = 0.06,

z(0.2) = z(0) + (1/2)(k₂₁ + k₂₂) = 0.32.
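The arithmetic of Example 1 is reproduced by a direct transcription of the method (a sketch):

```python
# One Euler-Cauchy step, h = 0.2, for y' = -3y + 2z, z' = 3y - 4z,
# y(0) = 0, z(0) = 0.5.
def f(t, y, z):
    return -3.0 * y + 2.0 * z

def g(t, y, z):
    return 3.0 * y - 4.0 * z

t0, y0, z0, h = 0.0, 0.0, 0.5, 0.2
k11 = h * f(t0, y0, z0)
k21 = h * g(t0, y0, z0)
k12 = h * f(t0 + h, y0 + k11, z0 + k21)
k22 = h * g(t0 + h, y0 + k11, z0 + k21)
y1 = y0 + (k11 + k12) / 2
z1 = z0 + (k21 + k22) / 2
print(y1, z1)  # y(0.2) ≈ 0.06, z(0.2) ≈ 0.32
```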

The 4th order Runge-Kutta method for the system of equations given in (32.1) is written as

y_{j+1} = y_j + (1/6)(k₁ + 2k₂ + 2k₃ + k₄)

z_{j+1} = z_j + (1/6)(l₁ + 2l₂ + 2l₃ + l₄), j = 0, 1, 2, ..., N − 1

where k₁ = h·f(t_j, y_j, z_j)

l₁ = h·g(t_j, y_j, z_j)

k₂ = h·f(t_j + h/2, y_j + k₁/2, z_j + l₁/2)

l₂ = h·g(t_j + h/2, y_j + k₁/2, z_j + l₁/2)

k₃ = h·f(t_j + h/2, y_j + k₂/2, z_j + l₂/2)

l₃ = h·g(t_j + h/2, y_j + k₂/2, z_j + l₂/2)

k₄ = h·f(t_j + h, y_j + k₃, z_j + l₃)

l₄ = h·g(t_j + h, y_j + k₃, z_j + l₃)

In a similar manner, the method can be extended to systems of three or more first order equations.

Example 2: Solve y″ = t(y′)² − y², y(0) = 1, y′(0) = 0 to compute y(0.2).

Solution:

The given second order equation with its initial conditions can be written as a system of two first order equations:

dy/dt = z = f(t, y, z), say,

dz/dt = t z² − y² = g(t, y, z), say.

Given t₀ = 0, y₀ = 1, z₀ = 0, h = 0.2. Computing k₁, l₁, k₂, l₂, k₃, l₃, k₄, l₄ in this order, we see

k₁ = h·f(t₀, y₀, z₀) = 0.2 × 0 = 0

l₁ = h·g(t₀, y₀, z₀) = 0.2(−1) = −0.2

k₂ = h·f(t₀ + h/2, y₀ + k₁/2, z₀ + l₁/2) = −0.02

l₂ = h·g(t₀ + h/2, y₀ + k₁/2, z₀ + l₁/2) = −0.1998

k₃ = h·f(t₀ + h/2, y₀ + k₂/2, z₀ + l₂/2) = −0.02

l₃ = h·g(t₀ + h/2, y₀ + k₂/2, z₀ + l₂/2) = −0.1958

k₄ = h·f(t₀ + h, y₀ + k₃, z₀ + l₃) = −0.0392

l₄ = h·g(t₀ + h, y₀ + k₃, z₀ + l₃) = −0.1905

Thus y(0.2) = y(0) + (1/6)(k₁ + 2k₂ + 2k₃ + k₄)
= 1 − 0.0199 = 0.9801

and y′(0.2) = z(0.2) = z(0) + (1/6)(l₁ + 2l₂ + 2l₃ + l₄)
= 0 − 0.1970 = −0.197.
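A direct implementation of the fourth order method for this system confirms these values (a sketch; function and variable names are my own):

```python
# Classical RK4 for the system y' = z, z' = t*z^2 - y^2, y(0) = 1, z(0) = 0,
# one step with h = 0.2.
def f(t, y, z):
    return z

def g(t, y, z):
    return t * z * z - y * y

def rk4_system_step(t, y, z, h):
    k1, l1 = h * f(t, y, z), h * g(t, y, z)
    k2 = h * f(t + h / 2, y + k1 / 2, z + l1 / 2)
    l2 = h * g(t + h / 2, y + k1 / 2, z + l1 / 2)
    k3 = h * f(t + h / 2, y + k2 / 2, z + l2 / 2)
    l3 = h * g(t + h / 2, y + k2 / 2, z + l2 / 2)
    k4 = h * f(t + h, y + k3, z + l3)
    l4 = h * g(t + h, y + k3, z + l3)
    return (y + (k1 + 2 * k2 + 2 * k3 + k4) / 6,
            z + (l1 + 2 * l2 + 2 * l3 + l4) / 6)

y_02, z_02 = rk4_system_step(0.0, 1.0, 0.0, 0.2)
print(round(y_02, 4), round(z_02, 4))  # 0.9801 -0.197
```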

Exercises

1. Find y (0.2) and z (0.2) using the 4th order Runge-Kutta method to solve

y′ =−3 y + 2 z , y (0) =
0

z′ =
3 y − 4 z , z (0) =
0.5

by taking h = 0.1 .

2. Solve y′′= y + ty′ , y (0) = 1 , y′(0) = 0 to find y (0.2) and y′(0.2) using the 4th order R-
K method. Take h = 0.1 .

3. Solve y″ = t³y′ + t³y, y(0) = 1, y′(0) = 1/2 in [0, 1], taking h = 0.2.

Keywords: System of I.V.Ps., 4th order Runge-Kutta method.

References

Jain, M. K., Iyengar, S. R. K. and Jain, R. K. (2008). Numerical Methods. Fifth Edition, New Age International Publishers, New Delhi.

Atkinson, Kendall E. (2004). Numerical Analysis. Second Edition, John Wiley & Sons, Singapore.

Suggested Reading

Scheid, Francis (1989). Numerical Analysis. Second Edition, McGraw-Hill, New York.

Sastry, S. S. (2005). Introductory Methods of Numerical Analysis. Fourth Edition, Prentice Hall of India, New Delhi.



Module 2: Laplace Transform

Lesson 33

Introduction

In this lesson we will discuss the idea of integral transform, in general, and Laplace trans-
form in particular. Integral transforms turn out to be a very efficient method to solve
certain ordinary and partial differential equations. In particular, the transform can take
a differential equation and turn it into an algebraic equation. If the algebraic equation
can be solved, applying the inverse transform gives us our desired solution. The idea of
solving differential equations is given in Figure 33.1.

[Figure 33.1: Idea of solving differential/integral equations - a difficult initial/boundary value problem (PDE/ODE) is mapped by an integral transform to an easier algebraic problem; solving it and applying the inverse transform gives the solution of the original problem.]

33.1 Concept of Transformations

An integral of the form

∫ₐᵇ K(s, t) f(t) dt

is called an integral transform of f(t). The function K(s, t) is called the kernel of the transform. The parameter s belongs to some domain on the real line or in the complex

218 WhatsApp: +91 7900900676 www.AgriMoon.Com


Introduction

plane. Choosing different kernels and different values of a and b, we get different integral transforms. Examples include the Laplace, Fourier, Hankel and Mellin transforms. For K(s, t) = e^{−st}, a = 0, b = ∞, the improper integral

∫₀^∞ e^{−st} f(t) dt

is called the Laplace transform of f(t). If we set K(s, t) = e^{ist}, a = −∞, b = ∞, then

∫_{−∞}^{∞} e^{ist} f(t) dt,

where i = √−1, is called the Fourier transform of f(t). A common property of integral transforms is linearity, i.e.,

I.T.[α f(t) + β g(t)] = ∫ₐᵇ K(s, t) [α f(t) + β g(t)] dt = α I.T.(f(t)) + β I.T.(g(t)).

The symbol I.T. stands for integral transform.

33.2 Laplace Transform

The Laplace transform of a function f is defined as

L[f(t)] = F(s) = ∫₀^∞ e^{−st} f(t) dt,

provided the improper integral converges for some s.

Remark 1: The integral ∫₀^∞ e^{−st} f(t) dt is said to be convergent (absolutely convergent) if

lim_{R→∞} ∫₀^R e^{−st} f(t) dt   ( lim_{R→∞} ∫₀^R | e^{−st} f(t) | dt )

exists as a finite number.

33.3 Laplace Transform of Some Elementary Functions

We now give the Laplace transforms of some elementary functions. These, together with the properties of the Laplace transform, will be used to evaluate Laplace transforms of more complicated functions.


33.4 Example Problems

33.4.1 Problem 1

Evaluate the Laplace transform of f(t) = 1, t ≥ 0.

Solution: Using the definition of the Laplace transform,

L[f(t)] = ∫₀^∞ e^{−st} dt = [ e^{−st}/(−s) ]₀^∞

Assuming that s is real and positive, therefore

L[f(t)] = 1/s, since lim_{R→∞} e^{−sR} = 0.

What will happen if we take s to be a complex number, i.e., s = x + iy? Since e^{−iyR} = cos yR − i sin yR, we have | e^{−iyR} | = 1, and then we find

lim_{R→∞} | e^{−xR} || e^{−iyR} | = 0 for Re(s) = x > 0.

Thus, we have

L[f(t)] = L[1] = 1/s, Re(s) > 0.

33.4.2 Problem 2

Find the Laplace transforms of the functions e^{at}, e^{iat}, e^{-iat}.


Solution: Using the definition of the Laplace transform,

L[e^{at}] = ∫_0^∞ e^{-st} e^{at} dt = ∫_0^∞ e^{-(s-a)t} dt = [e^{-(s-a)t}/(-(s-a))]_0^∞
          = 1/(s-a),  provided Re(s) > a (or s > a for real s)

Similarly, we can evaluate

L[e^{iat}] = ∫_0^∞ e^{-(s-ia)t} dt = [e^{-(s-ia)t}/(-(s-ia))]_0^∞
           = 1/(s-ia),  provided Re(s) > 0.


Here we have used the fact that, for s = x + iy, we have

lim_{R→∞} e^{-(s-ia)R}/(-(s-ia)) = -(1/(s-ia)) lim_{R→∞} e^{-xR} e^{-i(y-a)R} = 0.

Similarly, we get

L[e^{-iat}] = 1/(s+ia).

33.4.3 Problem 3

Find the Laplace transform of the unit step function (commonly known as the Heaviside
function). This function is given as

u(t-a) = { 0 if t < a,
         { 1 if t ≥ a.

Solution: Let us find the Laplace transform of u(t-a), where a ≥ 0 is some constant.
That is, the function that is 0 for t < a and 1 for t ≥ a.

L[u(t-a)] = ∫_0^∞ e^{-st} u(t-a) dt = ∫_a^∞ e^{-st} dt = [e^{-st}/(-s)]_{t=a}^∞ = e^{-as}/s,

where of course s > 0 and a ≥ 0.

33.4.4 Problem 4

Find the Laplace transform of t^n, n = 1, 2, 3, ...


Solution: Using the definition of the Laplace transform we get

L[t^n] = ∫_0^∞ e^{-st} t^n dt = [t^n e^{-st}/(-s)]_0^∞ - ∫_0^∞ n t^{n-1} e^{-st}/(-s) dt
       = 0 + (n/s) ∫_0^∞ e^{-st} t^{n-1} dt = (n/s) L[t^{n-1}]

Putting n = 1:

L[t] = (1/s) L[1] = 1/s^2 = 1!/s^2


Putting n = 2:

L[t^2] = (2/s) L[t] = 2/s^3 = 2!/s^3

If we assume L[t^n] = n!/s^{n+1}, then

L[t^{n+1}] = ((n+1)/s) L[t^n] = (n+1)!/s^{n+2}.

Hence, by induction, L[t^n] = n!/s^{n+1}, Re(s) > 0.

One can also extend this result for non-integer values of n.
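The recurrence L[t^n] = (n/s) L[t^{n-1}] obtained above can be iterated directly; the short check below (our illustration, with the helper name laplace_tn as an assumption) confirms it reproduces the closed form n!/s^{n+1}.

```python
import math

def laplace_tn(n, s):
    # Iterate L[t^k] = (k/s) * L[t^(k-1)] starting from L[t^0] = L[1] = 1/s.
    val = 1.0 / s
    for k in range(1, n + 1):
        val *= k / s
    return val

s = 2.0
for n in range(6):
    # agrees with the closed form n!/s^(n+1)
    assert abs(laplace_tn(n, s) - math.factorial(n) / s ** (n + 1)) < 1e-12
print(laplace_tn(3, s))   # 3!/2^4 = 0.375
```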

33.4.5 Problem 5

Find L[t^γ] for non-integer values of γ.


Solution: Using the definition of the Laplace transform we get

L[t^γ] = ∫_0^∞ e^{-st} t^γ dt,  (γ > -1)

Note that the above integral is convergent only for γ > -1. We substitute u = st ⇒ du =
s dt, where s > 0. Thus we get

L[t^γ] = ∫_0^∞ e^{-u} (u/s)^γ (1/s) du = (1/s^{γ+1}) ∫_0^∞ e^{-u} u^γ du

We know

Γ(p) = ∫_0^∞ u^{p-1} e^{-u} du  (p > 0)

Then,

L[t^γ] = Γ(γ+1)/s^{γ+1},  γ > -1, s > 0

Note that for γ = 1, 2, 3, ..., the above formula reduces to the formula we got in the
previous example for integer values, i.e., L[t^γ] = γ!/s^{γ+1}.

33.4.6 Problem 6

Let f(t) = a_0 + a_1 t + a_2 t^2 + ... + a_n t^n. Find L[f(t)].


Solution: Applying the definition of the Laplace transform we obtain

L[f(t)] = L[ Σ_{k=0}^n a_k t^k ]


Using the linearity of the transform we get

L[f(t)] = Σ_{k=0}^n a_k L[t^k] = Σ_{k=0}^n a_k k!/s^{k+1}.

Remark 2: For an infinite series Σ_{n=0}^∞ a_n t^n, it is not possible, in general, to obtain
the Laplace transform of the series by taking the transform term by term.




Module 2: Laplace Transform

Lesson 34

Laplace Transform of Some Elementary Functions

In this lesson we compute the Laplace transforms of some elementary functions, before
discussing the restrictions that have to be imposed on f(t) so that it has a Laplace transform.
With the help of the Laplace transforms of elementary functions, we can obtain the Laplace
transforms of more complicated functions using properties of the transform that will be
discussed later. Another important use of the Laplace transforms of elementary functions
is in finding inverse Laplace transforms.

34.1 Example Problems

34.1.1 Problem 1

Find the Laplace transforms of (i) cosh ωt, (ii) cos ωt, (iii) sinh ωt, (iv) sin ωt.
Solution: (i) Using the definition of the Laplace transform we get

L[cosh ωt] = L[(e^{ωt} + e^{-ωt})/2]

Using linearity of the transform we obtain

L[cosh ωt] = (1/2) (L[e^{ωt}] + L[e^{-ωt}])

Applying the Laplace transform of the exponential function we obtain

L[cosh ωt] = (1/2) (1/(s-ω) + 1/(s+ω)) = s/(s^2 - ω^2)
2 s−ω s+ω s + ω2

(ii) Following similar steps we obtain

L[cos ωt] = L[(e^{iωt} + e^{-iωt})/2]

Using linearity, we obtain

L[cos ωt] = (1/2) L[e^{iωt}] + (1/2) L[e^{-iωt}]

We know the Laplace transforms of exponential functions, which can be used now to get

L[cos ωt] = (1/2) (1/(s-iω) + 1/(s+iω)) = (1/2) (2s/(s^2 + ω^2))

Thus we have

L[cos ωt] = s/(s^2 + ω^2)

Similarly we get the last two cases (iii) and (iv) as

L[sinh ωt] = ω/(s^2 - ω^2)  and  L[sin ωt] = ω/(s^2 + ω^2)

34.1.2 Problem 2

Find the Laplace transform of (3 + e^{6t})^2.

Solution: We determine the Laplace transform as follows:

L[(3 + e^{6t})^2] = L[(3 + e^{6t})(3 + e^{6t})] = L[9 + 6e^{6t} + e^{12t}]

Using linearity we get

L[(3 + e^{6t})^2] = L[9] + L[6e^{6t}] + L[e^{12t}]
                  = 9L[1] + 6L[e^{6t}] + L[e^{12t}]

Using the Laplace transforms of the elementary functions appearing above we obtain

L[(3 + e^{6t})^2] = 9/s + 6/(s-6) + 1/(s-12)

34.1.3 Problem 3

Find the Laplace transform of sin^3 2t.

Solution: We know that

sin 3t = 3 sin t - 4 sin^3 t

This implies that we can write

sin^3 2t = (1/4)(3 sin 2t - sin 6t)

Applying the Laplace transform and using its linearity property we get

L[sin^3 2t] = (1/4)(3L[sin 2t] - L[sin 6t])

Using the Laplace transform of sin at we obtain

L[sin^3 2t] = (3/4) (2/(s^2 + 4)) - (1/4) (6/(s^2 + 36))

Thus we get

L[sin^3 2t] = 48/((s^2 + 4)(s^2 + 36))

34.1.4 Problem 4

Find the Laplace transform of the function f(t) = 2^t.

Solution: First we rewrite the given function as

f(t) = 2^t = e^{ln 2^t} = e^{t ln 2}

Now f(t) is a function of the form e^{at} and therefore

L[f(t)] = 1/(s - ln 2),  for s > ln 2

34.1.5 Problem 5

Find (a) L[t^3 - 4t + 5 + 3 sin 2t] and (b) L[H(t-a) - H(t-b)].
Solution: (a) Using linearity of the transform we get

L[t^3 - 4t + 5 + 3 sin 2t] = L[t^3] - 4L[t] + L[5] + 3L[sin 2t]

Using the Laplace transforms evaluated in previous examples, we have

L[t^3 - 4t + 5 + 3 sin 2t] = 6/s^4 - 4/s^2 + 5/s + 6/(s^2 + 4)

On simplification we find

L[t^3 - 4t + 5 + 3 sin 2t] = (5s^5 + 2s^4 + 20s^3 - 10s^2 + 24)/(s^4 (s^2 + 4))


(b) Using the linearity property we get

L[H(t-a) - H(t-b)] = L[H(t-a)] - L[H(t-b)]

Applying the definition of the Laplace transform we obtain

L[H(t-a) - H(t-b)] = ∫_0^∞ H(t-a) e^{-st} dt - ∫_0^∞ H(t-b) e^{-st} dt
                   = ∫_a^∞ e^{-st} dt - ∫_b^∞ e^{-st} dt

Integration gives

L[H(t-a) - H(t-b)] = e^{-as}/s - e^{-bs}/s

This implies

L[H(t-a) - H(t-b)] = (e^{-as} - e^{-bs})/s

34.1.6 Problem 6

Find the Laplace transform of the following function:

f(t) = { t/c, if 0 < t < c;
       { 1,   if t > c.

Here c is some constant.

Solution: Using the definition of the Laplace transform we have

L[f(t)] = ∫_0^c (t/c) e^{-st} dt + ∫_c^∞ e^{-st} dt

Integrating by parts we find

L[f(t)] = [-(t/c)(e^{-st}/s) - (1/c)(e^{-st}/s^2)]_0^c + [-e^{-st}/s]_c^∞

On simplification we obtain

L[f(t)] = (1 - e^{-cs})/(cs^2)


34.1.7 Problem 7

Find the Laplace transform of the function f(t) given by

f(t) = { 0, if 0 < t < 1;
       { t, if 1 < t < 2;
       { 0, if t > 2.

Solution: By the definition of the Laplace transform we have

L[f(t)] = ∫_0^∞ e^{-st} f(t) dt = ∫_1^2 t e^{-st} dt

Integrating by parts we obtain

L[f(t)] = [t(-e^{-st}/s)]_1^2 + ∫_1^2 (e^{-st}/s) dt
        = -(2e^{-2s} - e^{-s})/s - (e^{-2s} - e^{-s})/s^2

34.1.8 Problem 8

Find the Laplace transform of sin √t.
Solution: We have

sin √t = t^{1/2} - (1/3!) t^{3/2} + (1/5!) t^{5/2} - (1/7!) t^{7/2} + ...

Then, taking the Laplace transform of each term in the series, we get

L[sin √t] = L[t^{1/2}] - (1/3!) L[t^{3/2}] + (1/5!) L[t^{5/2}] - (1/7!) L[t^{7/2}] + ...
          = Γ(3/2)/s^{3/2} - (1/3!) Γ(5/2)/s^{5/2} + (1/5!) Γ(7/2)/s^{7/2} - (1/7!) Γ(9/2)/s^{9/2} + ...

Further simplification leads to

L[sin √t] = (√π/2)(1/s^{3/2}) [1 - (1/3!)(3/2)(1/s) + (1/5!)(5·3/2^2)(1/s^2) - (1/7!)(7·5·3/2^3)(1/s^3) + ...]
          = (1/(2s)) √(π/s) [1 - 1/(2^2 s) + (1/2!)(1/(2^2 s))^2 - (1/3!)(1/(2^2 s))^3 + ...]

Thus, we have

L[sin √t] = (1/(2s)) √(π/s) e^{-1/(4s)}.
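The term-by-term series converges quickly, which can be seen by comparing its partial sums with the closed form (1/(2s))√(π/s) e^{-1/(4s)}. The snippet below is our numerical illustration (function names are assumptions), with the general term written as (-1)^k Γ(k+3/2)/((2k+1)! s^{k+3/2}).

```python
import math

def laplace_sin_sqrt_series(s, terms=20):
    # Partial sum of sum_k (-1)^k Gamma(k + 3/2) / ((2k+1)! s^(k + 3/2))
    total = 0.0
    for k in range(terms):
        total += (-1) ** k * math.gamma(k + 1.5) / (
            math.factorial(2 * k + 1) * s ** (k + 1.5))
    return total

def laplace_sin_sqrt_closed(s):
    # Closed form (1/(2s)) sqrt(pi/s) e^{-1/(4s)}
    return (0.5 / s) * math.sqrt(math.pi / s) * math.exp(-1.0 / (4.0 * s))

for s in (1.0, 2.0):
    assert abs(laplace_sin_sqrt_series(s) - laplace_sin_sqrt_closed(s)) < 1e-9
print(laplace_sin_sqrt_closed(1.0))   # ~ 0.6902
```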





Module 2: Laplace Transform

Lesson 35

Existence of Laplace Transform

In this lesson we shall discuss an existence theorem for the Laplace transform. Since not
every Laplace integral is convergent, it is very important to know for which functions the
Laplace transform exists.
Consider the function f(t) = e^{t^2} and try to evaluate its Laplace integral. In this case we
realize that

lim_{R→∞} ∫_0^R e^{t^2 - st} dt = ∞,  for any choice of s

The question naturally arises: for which class of functions does the Laplace integral
converge? Before answering this question we go through some definitions.

35.1 Piecewise Continuity

A function f is called piecewise continuous on [a, b] if there are a finite number of points
a < t_1 < t_2 < ... < t_n < b such that f is continuous on each open subinterval
(a, t_1), (t_1, t_2), ..., (t_n, b) and all the following limits exist:

lim_{t→a+} f(t),  lim_{t→b-} f(t),  lim_{t→t_j+} f(t),  and  lim_{t→t_j-} f(t),  for all j.

Note: A function f is said to be piecewise continuous on [0, ∞) if it is piecewise
continuous on every finite interval [0, b], b ∈ R+.

35.1.1 Example 1

The function defined by

f(t) = { t^2,   0 ≤ t ≤ 1;
       { 3 - t, 1 < t ≤ 2;
       { t + 1, 2 < t ≤ 3;

is piecewise continuous on [0, 3].


35.1.2 Example 2

The function defined by

f(t) = { 1/(2 - t), 0 ≤ t < 2;
       { t + 1,     2 ≤ t ≤ 3;

is not piecewise continuous on [0, 3].

35.2 Example Problems

35.2.1 Problem 1

Discuss the piecewise continuity of

f(t) = 1/(t - 1).

Solution: f(t) is not piecewise continuous in any interval containing 1, since the limits

lim_{t→1±} f(t)

do not exist.

35.2.2 Problem 2

Check whether the function

f(t) = { (1 - e^{-t})/t, t ≠ 0;
       { 0,              otherwise

is piecewise continuous or not.

Solution: The given function is continuous everywhere other than at 0. So we need to
check the limits at this point. Since both the left and right limits

lim_{t→0-} f(t) = 1  and  lim_{t→0+} f(t) = 1

exist, the given function is piecewise continuous.


35.3 Functions of Exponential Orders

A function f is said to be of exponential order α if there exist constants M and α such that
for some t_0 ≥ 0

|f(t)| ≤ M e^{αt}  for all t ≥ t_0

Equivalently, a function f(t) is said to be of exponential order α if

lim_{t→∞} e^{-αt} |f(t)| = a finite quantity

Geometrically, it means that the graph of the function f on the interval (t_0, ∞) does not
grow faster than the graph of the exponential function M e^{αt}.

35.4 Example Problems

35.4.1 Problem 1

Show that the function f(t) = t^n has exponential order α for any value of α > 0 and any
natural number n.
Solution: We check the limit

lim_{t→∞} e^{-αt} t^n

Repeated application of L'Hospital's rule gives

lim_{t→∞} e^{-αt} t^n = lim_{t→∞} n!/(α^n e^{αt}) = 0

Hence the function is of exponential order.

35.4.2 Problem 2
Show that the function f(t) = e^{t^2} is not of exponential order.
Solution: For the given function we have

lim_{t→∞} e^{-αt} e^{t^2} = lim_{t→∞} e^{t(t-α)} = ∞

for all values of α. Hence the given function is not of exponential order.
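The two limits above can be observed numerically: e^{-αt}|f(t)| decays for f(t) = t^3 but grows without bound for f(t) = e^{t^2}. A small illustration (our addition, not part of the original text):

```python
import math

# e^{-alpha t} |f(t)| tends to 0 for f(t) = t^3 (exponential order), while for
# f(t) = e^{t^2} the product e^{t^2 - alpha t} grows without bound for every alpha.
alpha = 1.0
for t in (5.0, 10.0, 20.0):
    print(t,
          math.exp(-alpha * t) * t ** 3,   # shrinks toward 0
          math.exp(t * t - alpha * t))     # blows up
```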


35.4.3 Theorem (Sufficient Conditions for Laplace Transform)

If f is piecewise continuous on [0, ∞) and of exponential order α, then the Laplace
transform exists for Re(s) > α. Moreover, under these conditions the Laplace integral
converges absolutely.
Proof: Since f is of exponential order α, we have

|f(t)| ≤ M_1 e^{αt},  t ≥ t_0                                       (35.1)

Also, f is piecewise continuous on [0, ∞), so it is bounded on [0, t_0]:

|f(t)| ≤ M_2,  0 < t < t_0                                          (35.2)

From equations (35.1) and (35.2) we have

|f(t)| ≤ M e^{αt},  t ≥ 0

Then

∫_0^R |e^{-st} f(t)| dt ≤ ∫_0^R |e^{-(x+iy)t}| M e^{αt} dt

Here we have assumed s to be a complex number, so that s = x + iy. Noting that
|e^{-iyt}| = 1, we find

∫_0^R |e^{-st} f(t)| dt ≤ M ∫_0^R e^{-(x-α)t} dt

On integration we obtain

∫_0^R |e^{-st} f(t)| dt ≤ M/(x-α) - (M/(x-α)) e^{-(x-α)R}

Letting R → ∞ and noting Re(s) = x > α, we get

∫_0^∞ |e^{-st} f(t)| dt ≤ M/(x-α)

Hence the Laplace integral converges absolutely and thus converges. This implies the
existence of the Laplace transform. For piecewise continuous functions of exponential
order, the Laplace transform always exists. Note that this is a sufficient condition: if a
function is not of exponential order or not piecewise continuous, then the Laplace
transform may or may not exist.


Remark 1: We have observed in the proof of the existence theorem that

|∫_0^∞ e^{-st} f(t) dt| ≤ ∫_0^∞ |e^{-st} f(t)| dt ≤ M/(Re(s) - α)  for Re(s) > α

We now deduce two important conclusions from this observation:

• L[f(t)] = ∫_0^∞ e^{-st} f(t) dt = F(s) → 0 as Re(s) → ∞

• if L[f(t)] does not tend to 0 as s → ∞ (or Re(s) → ∞), then f(t) cannot be a piecewise
  continuous function of exponential order. For example, functions such as F_1(s) = 1 and
  F_2(s) = s/(s+1) are not Laplace transforms of piecewise continuous functions of
  exponential order, since F_1(s) and F_2(s) do not tend to 0 as s → ∞.

Remark 2: It should be noted that the conditions stated in the existence theorem are
sufficient rather than necessary conditions. If these conditions are satisfied, then the
Laplace transform must exist. If these conditions are not satisfied, then the Laplace
transform may or may not exist. We can observe this fact in the following examples:

• Consider, for example,

  f(t) = 2t e^{t^2} cos(e^{t^2})

  Note that f(t) is continuous on [0, ∞) but not of exponential order; however, the
  Laplace transform of f(t) exists, since

  L[f(t)] = ∫_0^∞ e^{-st} 2t e^{t^2} cos(e^{t^2}) dt

  Integration by parts leads to

  L[f(t)] = [e^{-st} sin(e^{t^2})]_0^∞ + s ∫_0^∞ e^{-st} sin(e^{t^2}) dt

  Using the definition of the Laplace transform we obtain

  L[f(t)] = -sin(1) + s L[sin(e^{t^2})]

  Note that L[sin(e^{t^2})] exists because the function sin(e^{t^2}) satisfies both the
  conditions of the existence theorem. This example shows that the Laplace transform of a
  function which is not of exponential order may exist.


• Consider another example, the function

  f(t) = 1/√t,

  which is not piecewise continuous since f(t) → ∞ as t → 0. But we know that

  L[f(t)] = Γ(1/2)/√s = √(π/s),  s > 0.

  This example shows that the Laplace transform of a function which is not piecewise
  continuous may exist. These two examples clearly show that the conditions given in the
  existence theorem are sufficient but not necessary.




Module 2: Laplace Transform

Lesson 36

Properties of Laplace Transform

In this lesson we discuss some properties of Laplace transform. There are several useful
properties of Laplace transform which can extend its applicability. In this lesson we
mainly present shifting and translation properties.

36.1 First Shifting Property

If L[f(t)] = F(s), then L[e^{at} f(t)] = F(s - a), where a is any real or complex constant.

Proof: By the definition of the Laplace transform we find

L[e^{at} f(t)] = ∫_0^∞ e^{at} f(t) e^{-st} dt = ∫_0^∞ e^{-(s-a)t} f(t) dt

Again by the definition of the Laplace transform we get

L[e^{at} f(t)] = F(s - a).
 

36.2 Example Problems

36.2.1 Problem 1

Find the Laplace transform of e^{-t} sin^2 t.

Solution: First we get the Laplace transform of sin^2 t as

L[sin^2 t] = L[(1 - cos 2t)/2]
           = (1/2)(1/s) - (1/2)(s/(s^2 + 4)) = 2/(s(s^2 + 4)) = F(s).

Now using the first shifting property we obtain

L[e^{-t} sin^2 t] = F(s + 1) = 2/((s + 1)(s^2 + 2s + 5))
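The shifting step can be verified against the defining integral. In the sketch below (an illustrative addition, not from the original text) laplace_numeric is a simple Simpson-rule approximation of the Laplace integral; its name and the truncation parameters are our own choices.

```python
import math

def laplace_numeric(f, s, T=40.0, n=40000):
    # Composite Simpson approximation of the Laplace integral on [0, T].
    h = T / n
    total = f(0.0) + math.exp(-s * T) * f(T)
    for k in range(1, n):
        t = k * h
        total += (4 if k % 2 else 2) * math.exp(-s * t) * f(t)
    return total * h / 3.0

F = lambda s: 2.0 / (s * (s * s + 4.0))   # F(s) = L[sin^2 t], derived above
s = 1.5
lhs = laplace_numeric(lambda t: math.exp(-t) * math.sin(t) ** 2, s)
print(lhs, F(s + 1.0))   # first shifting: both ~ 0.07805
```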


36.2.2 Problem 2

Find L[e^{-2t} sin 6t].

Solution: Setting f(t) = sin 6t we find

L[f(t)] = F(s) = 6/(s^2 + 36)

Now using the first shifting property we get

L[e^{-2t} sin 6t] = 6/((s + 2)^2 + 36)

36.2.3 Problem 3

Evaluate L[e^{2t}(t + 3)^2].

Solution: By the definition and linearity of the Laplace transform we have

L[(t + 3)^2] = L[t^2 + 6t + 9] = L[t^2] + 6L[t] + 9L[1]
             = 2!/s^3 + 6/s^2 + 9/s

Further simplification leads to

L[(t + 3)^2] = (2 + 6s + 9s^2)/s^3 = F(s)

Using the first shifting property we get

L[e^{2t}(t + 3)^2] = F(s - 2) = (2 + 6(s - 2) + 9(s - 2)^2)/(s - 2)^3
                   = (9s^2 - 30s + 26)/(s - 2)^3

36.2.4 Problem 4

Using the shifting property, evaluate L[sinh 2t cos 2t] and L[sinh 2t sin 2t].
Solution: We know that

L[sinh 2t] = 2/(s^2 - 4) = F(s)

Using the shifting property we can get

L[e^{2it} sinh 2t] = F(s - 2i)

This implies

L[e^{2it} sinh 2t] = 2/((s - 2i)^2 - 4) = 2/((s^2 - 8) - 4is)

Multiplying numerator and denominator by (s^2 - 8) + 4is, we find

L[e^{2it} sinh 2t] = (2(s^2 - 8) + 8is)/((s^2 - 8)^2 + 16s^2) = (2(s^2 - 8) + 8is)/(s^4 + 64)

Replacing e^{2it} by cos 2t + i sin 2t and using linearity of the transform we obtain

L[cos 2t sinh 2t] + i L[sin 2t sinh 2t] = 2(s^2 - 8)/(s^4 + 64) + i 8s/(s^4 + 64)

Equating real and imaginary parts we have

L[cos 2t sinh 2t] = 2(s^2 - 8)/(s^4 + 64)  and  L[sin 2t sinh 2t] = 8s/(s^4 + 64)

36.3 Second Shifting Property


If L[f(t)] = F(s) and

g(t) = { f(t - a)  when t > a
       { 0         when 0 < t < a

then

L[g(t)] = e^{-as} F(s).

Proof: By the definition of the Laplace transform we have

L[g(t)] = ∫_0^∞ e^{-st} g(t) dt = ∫_a^∞ e^{-st} f(t - a) dt

Substituting t - a = u, so that dt = du, we find

L[g(t)] = ∫_0^∞ e^{-s(u+a)} f(u) du = e^{-sa} ∫_0^∞ e^{-su} f(u) du


Again using the definition of Laplace transform we get

L[g(t)] = e−as F (s).

Alternative form: It is sometimes useful to present this property in the following
compact form. If L[f(t)] = F(s), then

L[f(t - a) H(t - a)] = e^{-as} F(s)

where

H(t) = { 1 when t > 0
       { 0 when t < 0

Note that f(t - a)H(t - a) is the same as the function g(t) given above.
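The compact form can be checked numerically for a concrete choice, say f(t) = t^2 and a = 1, for which e^{-as} F(s) = e^{-s}·2/s^3. The sketch below (our illustrative addition; the helper laplace_numeric is a simple Simpson-rule quadrature) compares the two sides.

```python
import math

def laplace_numeric(f, s, T=40.0, n=40000):
    # Composite Simpson approximation of the Laplace integral on [0, T].
    h = T / n
    total = f(0.0) + math.exp(-s * T) * f(T)
    for k in range(1, n):
        t = k * h
        total += (4 if k % 2 else 2) * math.exp(-s * t) * f(t)
    return total * h / 3.0

a, s = 1.0, 2.0
g = lambda t: (t - a) ** 2 if t > a else 0.0   # f(t-a) H(t-a) with f(t) = t^2
lhs = laplace_numeric(g, s)
rhs = math.exp(-a * s) * 2.0 / s ** 3          # e^{-as} F(s), F(s) = 2/s^3
print(lhs, rhs)   # both ~ 0.03383
```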

36.4 Example Problems

36.4.1 Problem 1
Find L[g(t)], where

g(t) = { 0         when 0 ≤ t < 1
       { (t - 1)^2 when t ≥ 1

Solution: On comparison with the function g(t) given in the second shifting theorem we get

f(t) = t^2  ⇒  L[f(t)] = 2/s^3

Using the second shifting property we find

L[g(t)] = e^{-s} (2/s^3).

36.4.2 Problem 2

Find the Laplace transform of the function g(t), where

g(t) = { cos(t - π/3), t > π/3;
       { 0,            t < π/3.

Solution: Comparing with the notation used in the second shifting theorem, we have
f(t) = cos t. Thus, we find

L[f(t)] = F(s) = s/(s^2 + 1).

Hence by the second shifting theorem we obtain

L[g(t)] = e^{-πs/3} F(s) = e^{-πs/3} s/(s^2 + 1).

36.5 Change of Scale Property

If L[f(t)] = F(s), then L[f(at)] = (1/a) F(s/a).

Proof: By definition, we have

L[f(at)] = ∫_0^∞ e^{-st} f(at) dt.

Substituting at = u, so that a dt = du, we find

L[f(at)] = ∫_0^∞ e^{-(s/a)u} f(u) (1/a) du.

Using the definition of the Laplace transform we get

L[f(at)] = (1/a) F(s/a).
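Since both sides of the change-of-scale rule are explicit rational functions for f(t) = sin t, the property can be checked by direct evaluation (our illustration, not part of the original text):

```python
# F(s) = L[sin t] = 1/(s^2 + 1).  The change of scale property gives
# L[sin at] = (1/a) F(s/a), which should equal the known a/(s^2 + a^2).
F = lambda s: 1.0 / (s * s + 1.0)
a = 3.0
for s in (0.5, 1.0, 2.0, 4.0):
    scaled = (1.0 / a) * F(s / a)
    known = a / (s * s + a * a)
    assert abs(scaled - known) < 1e-12
print("change of scale verified for f = sin t, a = 3")
```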

36.5.1 Example

If

L[f(t)] = (s^2 - s + 1)/((2s + 1)^2 (s - 1)),

then find L[f(2t)].
Solution: Direct application of the change of scale property gives

L[f(2t)] = (1/2) ((s/2)^2 - (s/2) + 1)/((2(s/2) + 1)^2 ((s/2) - 1))

On simplification, we get

L[f(2t)] = (1/4) (s^2 - 2s + 4)/((s + 1)^2 (s - 2)).





Module 2: Laplace Transform

Lesson 37

Properties of Laplace Transform (Cont.)

In this lesson we continue discussing various properties of the Laplace transform. In
particular, we shall discuss the Laplace transforms of derivatives and integrals. These two
properties are very important for solving differential and integral equations.

37.1 Laplace Transform of Derivatives

Before we state the derivative theorem, it should be noted that this result is the key to
the application of the Laplace transform to solving differential equations.

37.1.1 Derivative Theorem



Suppose f is continuous on [0, ∞) and is of exponential order α, and that f' is piecewise
continuous on [0, ∞). Then

L[f'(t)] = sL[f(t)] - f(0),  Re(s) > α.

Proof: By the definition of the Laplace transform, we have

L[f'(t)] = ∫_0^∞ f'(t) e^{-st} dt

Integrating by parts, we get

L[f'(t)] = [f(t) e^{-st}]_0^∞ - ∫_0^∞ f(t) e^{-st} (-s) dt

Using the definition of the Laplace transform we obtain

L[f'(t)] = -f(0) + sL[f(t)],  Re(s) > α.

This completes the proof.


Remark 1: Suppose f(t) is not continuous at t = 0; then the result of the above
theorem takes the following form:

L[f'(t)] = -f(0+) + sL[f(t)]

Remark 2: An interesting feature of the derivative theorem is that L[f(t)] may exist
without f being of exponential order. Recall the existence of the Laplace transform of
f(t) = 2t e^{t^2} cos(e^{t^2}), which is obvious now by the derivative theorem because
f(t) = (sin(e^{t^2}))'.

Remark 3: The derivative theorem can be generalized as

L[f''(t)] = -f'(0) + sL[f'(t)]
          = -f'(0) + s{-f(0) + sL[f(t)]} = s^2 L[f(t)] - sf(0) - f'(0).

In general, for the nth derivative we have

L[f^(n)(t)] = s^n L[f(t)] - s^{n-1} f(0) - s^{n-2} f'(0) - ... - f^(n-1)(0).
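The generalized formula can be spot-checked with f(t) = cos t, for which every quantity is known in closed form (an illustrative check, not part of the original text):

```python
# f(t) = cos t: f(0) = 1, f'(0) = 0, and f''(t) = -cos t, so the theorem
# L[f''] = s^2 L[f] - s f(0) - f'(0) should give L[-cos t] = -s/(s^2+1).
L_cos = lambda s: s / (s * s + 1.0)   # L[cos t]
for s in (0.5, 1.0, 3.0):
    lhs = -L_cos(s)                   # L[f''] = L[-cos t]
    rhs = s * s * L_cos(s) - s * 1.0 - 0.0
    assert abs(lhs - rhs) < 1e-12
print("derivative theorem verified for f = cos t")
```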

37.2 Example Problems

37.2.1 Problem 1

Determine L[sin^2 ωt].

Solution: Let us assume that

f(t) = sin^2 ωt

Now we compute the derivative of f as

f'(t) = 2ω sin ωt cos ωt = ω sin 2ωt.

Using the derivative theorem we have

L[f'(t)] = -f(0) + sL[f(t)]

Substituting the function f(t) and its derivative, we find

L[ω sin 2ωt] = sL[sin^2 ωt] - 0

Therefore, we have

L[sin^2 ωt] = (ω/s) (2ω/(s^2 + 4ω^2))

37.2.2 Problem 2

Using the derivative theorem, find L[t^n].

Solution: Let

f(t) = t^n.

Then

f'(t) = n t^{n-1},  f''(t) = n(n-1) t^{n-2},  ...,  f^(n)(t) = n!.

From the derivative theorem we have

L[f^(n)(t)] = s^n L[f(t)] - s^{n-1} f(0) - s^{n-2} f'(0) - ... - f^(n-1)(0).

Since f(0) = f'(0) = ... = f^(n-1)(0) = 0, we find

L[n!] = s^n L[t^n]  ⇒  L[t^n] = n!/s^{n+1}.

37.2.3 Problem 3

Using the derivative theorem, find L[sin kt].

Solution: Let f(t) = sin kt, and therefore we have

f'(t) = k cos kt  and  f''(t) = -k^2 sin kt

Substituting in the derivative theorem

L[f''(t)] = s^2 L[f(t)] - sf(0) - f'(0)

yields

L[-k^2 sin kt] = s^2 L[sin kt] - 0 - k

On simplification we get

L[sin kt] = k/(s^2 + k^2)

37.2.4 Problem 4

Using L[t^2] = 2/s^3 and the derivative theorem, find L[t^5].

Solution: Let f(t) = t^5, so that f'(t) = 5t^4, f''(t) = 20t^3, f'''(t) = 60t^2. The
derivative theorem for the third derivative reads

L[f'''(t)] = s^3 L[f(t)] - s^2 f(0) - sf'(0) - f''(0)

This implies

L[60t^2] = s^3 L[f(t)]  ⇒  L[f(t)] = 60 (2/s^3) / s^3 = 120/s^6.

37.2.5 Problem 5
Using the Laplace transform L[sin √t] and applying the derivative theorem, find the
Laplace transform of the function

cos √t / √t

Solution: We know that

L[sin √t] = (1/(2s)) √(π/s) e^{-1/(4s)}

Let f(t) = sin √t; then we have

f(0) = 0  and  f'(t) = cos √t / (2√t)

Substitution of f(t) in the derivative theorem

L[f'(t)] = sL[f(t)] - f(0)

yields

L[cos √t / (2√t)] = s (1/(2s)) √(π/s) e^{-1/(4s)}

Thus, we get

L[cos √t / √t] = √(π/s) e^{-1/(4s)}

37.3 Laplace Transform of Integrals

37.3.1 Theorem

Suppose f(t) is piecewise continuous on [0, ∞) and the function

g(t) = ∫_0^t f(u) du

is of exponential order. Then

L[g(t)] = (1/s) F(s).

Proof: Clearly g(0) = 0 and g'(t) = f(t). Note that g(t) is continuous and of
exponential order, and g'(t) = f(t) is piecewise continuous. Then, using the derivative
theorem, we get

L[g'(t)] = sL[g(t)] - g(0)

Since g(0) = 0, we obtain the desired result:

L[g(t)] = (1/s) L[f(t)]

This completes the proof.
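For f(t) = sin t the theorem can be confirmed by direct algebra, since g(t) = ∫_0^t sin u du = 1 - cos t has a known transform (our illustrative check, not from the original text):

```python
# f(t) = sin t, F(s) = 1/(s^2+1), g(t) = 1 - cos t.
# The theorem predicts L[g] = F(s)/s; directly, L[1 - cos t] = 1/s - s/(s^2+1).
for s in (0.5, 1.0, 2.0, 5.0):
    direct = 1.0 / s - s / (s * s + 1.0)
    via_theorem = (1.0 / (s * s + 1.0)) / s
    assert abs(direct - via_theorem) < 1e-12
print("transform-of-integral rule verified for f = sin t")
```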

37.4 Example Problems

37.4.1 Problem 1

Given that

L[sin t / t] = ∫_s^∞ 1/(1 + u^2) du.

Find the Laplace transform of the integral

∫_0^t (sin u / u) du.

Solution: Direct application of the above result gives

L[∫_0^t (sin u / u) du] = (1/s) L[sin t / t]
                        = (1/s) ∫_s^∞ 1/(1 + u^2) du = (1/s) [π/2 - tan^{-1} s]

Thus, we have

L[∫_0^t (sin u / u) du] = (1/s) cot^{-1} s

37.4.2 Problem 2

Find the Laplace transform of the following integral:

∫_0^t u^n e^{-au} du

Solution: From an application of the first shifting theorem we know that

L[t^n e^{-at}] = n!/(s + a)^{n+1}

It follows from the above result on Laplace transforms of integrals that

L[∫_0^t u^n e^{-au} du] = (1/s) L[t^n e^{-at}] = n!/(s(s + a)^{n+1}).




Module 2: Laplace Transform

Lesson 38

Properties of Laplace Transform (Cont.)

In this lesson we further continue discussing properties of the Laplace transform. In
particular, this lesson is devoted to the Laplace transforms of functions which are
multiplied or divided by t.

38.1 Multiplication by tn

38.1.1 Theorem

If F(s) is the Laplace transform of f(t), i.e., L[f(t)] = F(s), then

L[t f(t)] = -(d/ds) F(s)

and in general the following result holds:

L[t^n f(t)] = (-1)^n (d^n/ds^n) F(s).

Proof: By definition we know

F(s) = ∫_0^∞ e^{-st} f(t) dt

Using the Leibniz rule for differentiation under the integral sign we obtain

(d/ds) F(s) = ∫_0^∞ (-t) e^{-st} f(t) dt

Thus we get

(d/ds) F(s) = -L[t f(t)]

Repeated differentiation under the integral sign gives the general rule.
The applicability of the above result will now be demonstrated by some examples.
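Before turning to the examples, the rule L[t f(t)] = -dF/ds can be sanity-checked with a central finite difference for F(s) = L[e^{-t}] = 1/(s+1) (an illustrative addition; the step size h is our choice):

```python
# For f(t) = e^{-t}: F(s) = 1/(s+1), and the rule predicts
# L[t e^{-t}] = -dF/ds = 1/(s+1)^2.  Check -dF/ds with a central difference.
F = lambda s: 1.0 / (s + 1.0)
s, h = 2.0, 1e-5
dF = (F(s + h) - F(s - h)) / (2.0 * h)   # numerical derivative of F
assert abs(-dF - 1.0 / (s + 1.0) ** 2) < 1e-8
print(-dF)   # ~ 1/9
```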


38.2 Example Problems

38.2.1 Problem 1

Find the Laplace transform of the function t^2 cos at.

Solution: We know from the Laplace transforms of elementary functions that

L[cos at] = s/(s^2 + a^2)

Direct application of the above rule gives

L[t^2 cos at] = (d^2/ds^2) (s/(s^2 + a^2)) = (d/ds) ((s^2 + a^2 - 2s^2)/(s^2 + a^2)^2)
              = (d/ds) ((a^2 - s^2)/(s^2 + a^2)^2)

On simplification we find

L[t^2 cos at] = 2s(s^2 - 3a^2)/(s^2 + a^2)^3

38.2.2 Problem 2

Evaluate (i) L[t e^{-t}], (ii) L[t^2 e^{-t}], (iii) L[t^k e^{-t}].

Solution: (i) We know that

L[e^{-t}] = 1/(s + 1)

Using the above-mentioned rule we find

L[t e^{-t}] = -(d/ds) 1/(s + 1) = 1/(s + 1)^2

(ii) Applying the same idea once again, we obtain

L[t^2 e^{-t}] = -(d/ds) 1/(s + 1)^2 = 2/(s + 1)^3

(iii) Similarly, we can further generalize this result as

L[t^k e^{-t}] = k!/(s + 1)^{k+1}


38.2.3 Problem 3

Find the Laplace transform of f(t) = (t^2 - 3t + 2) sin t.

Solution: Using linearity of the Laplace transform we have

L[f(t)] = L[t^2 sin t] - 3L[t sin t] + 2L[sin t]                    (38.1)

Since we know

L[sin t] = 1/(1 + s^2)

then

L[t sin t] = -(d/ds) 1/(1 + s^2) = 2s/(1 + s^2)^2

and

L[t^2 sin t] = -(d/ds) 2s/(1 + s^2)^2 = -(2(1 + s^2)^2 - 8s^2(1 + s^2))/(1 + s^2)^4 = (6s^2 - 2)/(1 + s^2)^3

Substituting the above values in equation (38.1), we find

L[f(t)] = (6s^2 - 2)/(1 + s^2)^3 - 6s/(1 + s^2)^2 + 2/(1 + s^2)

Further simplification leads to

L[f(t)] = (6s^2 - 2 - 6s(1 + s^2) + 2(1 + s^2)^2)/(1 + s^2)^3

Finally, we obtain

L[f(t)] = (2s^4 - 6s^3 + 10s^2 - 6s)/(s^6 + 3s^4 + 3s^2 + 1)

38.3 Division by t

38.3.1 Theorem

If $f$ is piecewise continuous on $[0,\infty)$ and of exponential order $\alpha$, and
$$\lim_{t\to 0+}\frac{f(t)}{t}$$
exists, then
$$L\left[\frac{f(t)}{t}\right] = \int_s^\infty F(u)\,du, \qquad [s > \alpha].$$


Proof: This can easily be proved by letting $g(t) = \dfrac{f(t)}{t}$, so that $f(t) = tg(t)$. Hence,
$$F(s) = L[f(t)] = L[tg(t)] = -\frac{d}{ds}L[g(t)].$$
Integrating with respect to $s$ from $s$ to $\infty$ we get
$$-L[g(t)]\Big|_s^\infty = \int_s^\infty F(u)\,du.$$
Since $g(t)$ is piecewise continuous and of exponential order, it follows that $\lim_{s\to\infty} L[g(t)] = 0$. Thus we have
$$L[g(t)] = \int_s^\infty F(u)\,du.$$
This completes the proof.

Remark: It should be noted that the condition that $\lim_{t\to 0+} f(t)/t$ exists is very important, because without this condition the function $g(t)$ may not be piecewise continuous on $[0,\infty)$, and then we cannot use the fact that $\lim_{s\to\infty} L[g(t)] = 0$.

38.3.2 Corollary

If $L[f(t)] = F(s)$ then
$$\int_0^\infty \frac{f(t)}{t}\,dt = \int_0^\infty F(s)\,ds,$$
provided that the integrals converge.

Proof: We know that
$$L\left[\frac{f(t)}{t}\right] = \int_s^\infty F(u)\,du.$$
Using the definition of the Laplace transform we get
$$\int_0^\infty e^{-st}\frac{f(t)}{t}\,dt = \int_s^\infty F(u)\,du.$$
Taking the limit $s \to 0$ in the above two integrals we obtain
$$\int_0^\infty \frac{f(t)}{t}\,dt = \int_0^\infty F(u)\,du.$$
This completes the proof.


38.4 Example Problems

38.4.1 Problem 1

Find the Laplace transform of the function
$$f(t) = \frac{\sin at}{t}.$$

Solution: We know
$$L[\sin at] = \frac{a}{s^2+a^2} \quad\text{and}\quad L\left[\frac{f(t)}{t}\right] = \int_s^\infty F(u)\,du.$$
On integrating we get
$$L\left[\frac{\sin at}{t}\right] = \int_s^\infty \frac{a}{u^2+a^2}\,du = \tan^{-1}\frac{u}{a}\Big|_s^\infty.$$
Thus we have
$$L\left[\frac{\sin at}{t}\right] = \frac{\pi}{2} - \tan^{-1}\frac{s}{a}.$$
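The integral in the last step is easy to reproduce; the sketch below (SymPy assumed) evaluates $\int_s^\infty a/(u^2+a^2)\,du$ and compares it numerically with $\pi/2 - \tan^{-1}(s/a)$ at a sample point.

```python
import sympy as sp

s, a, u = sp.symbols('s a u', positive=True)

# Division-by-t rule: L[sin(at)/t] = integral of a/(u^2 + a^2) from s to infinity
val = sp.integrate(a / (u**2 + a**2), (u, s, sp.oo))

# Compare with pi/2 - arctan(s/a) at s = 1, a = 2
num = float(val.subs({s: 1, a: 2}))
ref = float(sp.pi/2 - sp.atan(sp.Rational(1, 2)))
```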

38.4.2 Problem 2

Find the Laplace transform of the function
$$f(t) = \frac{2\sin t\,\sinh t}{t}.$$

Solution: Noting that $2\sin t\,\sinh t = \sin t\,(e^t - e^{-t})$ and using the division-by-$t$ property of the Laplace transform, we get
$$L[f(t)] = \int_s^\infty L\left[\sin t\,(e^t - e^{-t})\right]du. \qquad (38.2)$$
Now we evaluate $L\left[\sin t\,(e^t - e^{-t})\right]$ using linearity of the Laplace transform:
$$L\left[\sin t\,(e^t - e^{-t})\right] = L\left[e^t\sin t\right] - L\left[e^{-t}\sin t\right].$$
Applying the first shifting theorem we obtain
$$L\left[\sin t\,(e^t - e^{-t})\right] = \frac{1}{1+(s-1)^2} - \frac{1}{1+(s+1)^2}.$$


Substituting this value in equation (38.2) we find
$$L[f(t)] = \int_s^\infty \left(\frac{1}{1+(u-1)^2} - \frac{1}{1+(u+1)^2}\right) du.$$
On integrating, we have
$$L[f(t)] = \tan^{-1}(u-1)\Big|_s^\infty - \tan^{-1}(u+1)\Big|_s^\infty = \frac{\pi}{2} - \tan^{-1}(s-1) - \frac{\pi}{2} + \tan^{-1}(s+1).$$
On cancellation of $\pi/2$ we get
$$L[f(t)] = \tan^{-1}(s+1) - \tan^{-1}(s-1).$$
This can be further simplified to obtain
$$L[f(t)] = \tan^{-1}\left(\frac{2}{s^2}\right).$$
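The last simplification uses the identity $\tan^{-1}x - \tan^{-1}y = \tan^{-1}\frac{x-y}{1+xy}$; with $x = s+1$, $y = s-1$ the numerator is $2$ and $1+xy = s^2$. A quick numerical check over a few positive $s$ values:

```python
import math

# atan(s+1) - atan(s-1) should equal atan(2/s^2) for s > 0
diffs = []
for s in (0.5, 1.0, 2.0, 5.0):
    lhs = math.atan(s + 1) - math.atan(s - 1)
    rhs = math.atan(2 / s**2)
    diffs.append(abs(lhs - rhs))
```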

Suggested Readings

Arfken, G.B., Weber, H.J. and Harris, F.E. (2012). Mathematical Methods for Physicists
(A comprehensive guide), Seventh Edition, Elsevier Academic Press, New Delhi.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Dyke, P.P.G. (2001). An Introduction to Laplace Transforms and Fourier Series. Springer-
Verlag London Ltd.
Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics. Third Edi-
tion. Narosa Publishing House. New Delhi.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Kreyszig, E. (2006). Advanced Engineering Mathematics, Ninth Edition, Wiley India
Pvt. Ltd, New Delhi.
Schiff, J.L. (1999). The Laplace Transform: Theory and Applications. Springer-Verlag,
New York Inc.



Module 2: Laplace Transform

Lesson 39

Properties of Laplace Transform (Cont.)

In this lesson we evaluate the Laplace transforms of periodic functions, which frequently occur in various engineering problems. We shall show that, with the help of a simple integral, we can evaluate the Laplace transform of a periodic function. We then continue the discussion by stating the initial and final value theorems of the Laplace transform and illustrating their applications with simple examples.

39.1 Laplace Transform of a Periodic Function

Let $f$ be a periodic function with period $T$, so that $f(t) = f(t+T)$. Then
$$L[f(t)] = \frac{1}{1-e^{-sT}}\int_0^T e^{-st} f(t)\,dt.$$

Proof: By definition we have
$$L[f(t)] = \int_0^\infty e^{-st} f(t)\,dt.$$
We break the integral into two integrals as
$$L[f(t)] = \int_0^T e^{-st} f(t)\,dt + \int_T^\infty e^{-st} f(t)\,dt.$$
Substituting $t = \tau + T$ in the second integral,
$$L[f(t)] = \int_0^T e^{-st} f(t)\,dt + \int_0^\infty e^{-s(\tau+T)} f(\tau+T)\,d\tau.$$
Noting $f(\tau+T) = f(\tau)$ we find
$$L[f(t)] = \int_0^T e^{-st} f(t)\,dt + e^{-sT} L[f(t)].$$
On simplification, we obtain
$$L[f(t)] = \frac{1}{1-e^{-sT}}\int_0^T e^{-st} f(t)\,dt.$$
This completes the proof.


Remark 1: Recall that if a function $f$ is periodic with period $T > 0$, then $f(t) = f(t+T)$, $-\infty < t < \infty$. The smallest $T$ for which the equality $f(t) = f(t+T)$ holds is called the fundamental period of $f(t)$. However, if $T$ is a period of a function $f$, then $nT$, where $n$ is any natural number, is also a period of $f$. Some familiar periodic functions are $\sin x$, $\cos x$, $\tan x$, etc.

39.2 Example Problems

39.2.1 Problem 1

Find the Laplace transform of
$$f(t) = \begin{cases} 1 & \text{when } 0 < t \le 1 \\ 0 & \text{when } 1 < t < 2 \end{cases}$$
with $f(t+2) = f(t)$, $t > 0$.

Solution: Using the above result on periodic functions, we have
$$L[f(t)] = \frac{1}{1-e^{-2s}}\int_0^2 e^{-st} f(t)\,dt = \frac{1}{1-e^{-2s}}\int_0^1 e^{-st}\,dt.$$
On integration we obtain
$$L[f(t)] = \frac{1}{1-e^{-2s}}\cdot\frac{1}{-s}\left(e^{-s}-1\right) = \frac{1}{s(1+e^{-s})}.$$
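The final simplification uses $(1-e^{-s})/(1-e^{-2s}) = 1/(1+e^{-s})$; a numerical spot check of the unsimplified and simplified forms:

```python
import math

def periodic_laplace(s):
    # 1/(1 - e^{-2s}) * integral of e^{-st} over [0, 1], in closed form
    return (1 - math.exp(-s)) / (s * (1 - math.exp(-2*s)))

# Compare with the simplified result 1/(s(1 + e^{-s}))
errors = [abs(periodic_laplace(s) - 1/(s*(1 + math.exp(-s)))) for s in (0.5, 1.0, 3.0)]
```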

39.2.2 Problem 2

Find the Laplace transform of
$$f(t) = \begin{cases} \sin t & \text{when } 0 < t < \pi \\ 0 & \text{when } \pi < t < 2\pi \end{cases}$$
with $f(t+2\pi) = f(t)$, $t > 0$.

Solution: Since $f(t)$ is periodic with period $2\pi$ we have
$$L[f(t)] = \frac{1}{1-e^{-2s\pi}}\int_0^{2\pi} e^{-st} f(t)\,dt.$$


We now evaluate the above integral as
$$\int_0^{2\pi} e^{-st} f(t)\,dt = \int_0^\pi e^{-st} f(t)\,dt + \int_\pi^{2\pi} e^{-st} f(t)\,dt.$$
Substituting the given value of $f(t)$ we obtain
$$\int_0^{2\pi} e^{-st} f(t)\,dt = \int_0^\pi e^{-st}\sin t\,dt + 0 = \frac{1+e^{-s\pi}}{1+s^2}.$$
This implies
$$L[f(t)] = \frac{1}{1-e^{-2s\pi}}\cdot\frac{1+e^{-s\pi}}{1+s^2} = \frac{1}{(1+s^2)(1-e^{-s\pi})}.$$

39.2.3 Problem 3

Find the Laplace transform of the square wave with period $T$:
$$f(t) = \begin{cases} h & \text{when } 0 < t < T/2 \\ -h & \text{when } T/2 < t < T \end{cases}$$

Solution: Using the Laplace transform of a periodic function we find
$$L[f(t)] = \frac{1}{1-e^{-sT}}\int_0^T e^{-st} f(t)\,dt.$$
Substituting $f(t)$ we obtain
$$L[f(t)] = \frac{1}{1-e^{-sT}}\left(\int_0^{T/2} he^{-st}\,dt - \int_{T/2}^T he^{-st}\,dt\right).$$
Evaluating the integrals we get
$$L[f(t)] = \frac{1}{1-e^{-sT}}\cdot\frac{h}{s}\left(1 - 2e^{-sT/2} + e^{-sT}\right) = \frac{h\left(1-e^{-sT/2}\right)}{s\left(1+e^{-sT/2}\right)}.$$
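The transform of the square wave can also be written compactly as $(h/s)\tanh(sT/4)$, since $(1-e^{-sT/2})/(1+e^{-sT/2}) = \tanh(sT/4)$. A numerical spot check of the two closed forms:

```python
import math

def square_wave_laplace(s, T, h):
    # (h/s)(1 - 2e^{-sT/2} + e^{-sT}) / (1 - e^{-sT})
    num = (h / s) * (1 - 2*math.exp(-s*T/2) + math.exp(-s*T))
    return num / (1 - math.exp(-s*T))

T, h = 2.0, 1.0
errors = [abs(square_wave_laplace(s, T, h) - (h/s)*math.tanh(s*T/4))
          for s in (0.3, 1.0, 2.5)]
```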

39.3 Limiting Theorems

These theorems allow the limiting behavior of the function to be directly calculated by
taking a limit of the transformed function.


39.3.1 Theorem (Initial Value Theorem)

Suppose that $f$ is continuous on $[0,\infty)$ and of exponential order $\alpha$, and that $f'$ is piecewise continuous on $[0,\infty)$ and of exponential order. Let
$$F(s) = L[f(t)],$$
then
$$f(0+) = \lim_{t\to 0+} f(t) = \lim_{s\to\infty} sF(s), \qquad [\text{assuming } s \text{ is real}].$$

Proof: By the derivative theorem,
$$L[f'(t)] = sL[f(t)] - f(0+).$$
Note that $\lim_{s\to\infty} L[f'(t)] = 0$, since $f'$ is piecewise continuous on $[0,\infty)$ and of exponential order. Therefore we have
$$0 = \lim_{s\to\infty} sF(s) - f(0+).$$
Hence we get
$$\lim_{t\to 0+} f(t) = \lim_{s\to\infty} sF(s).$$
This completes the proof.

39.3.2 Theorem (Final Value Theorem)

Suppose that $f$ is continuous on $[0,\infty)$ and of exponential order $\alpha$, that $f'$ is piecewise continuous on $[0,\infty)$, and furthermore that $\lim_{t\to\infty} f(t)$ exists. Then
$$\lim_{t\to\infty} f(t) = \lim_{s\to 0} sL[f(t)] = \lim_{s\to 0} sF(s).$$

Proof: Note that $f$ has exponential order $\alpha = 0$, since it is bounded: $\lim_{t\to 0+} f(t)$ and $\lim_{t\to\infty} f(t)$ exist and $f(t)$ is continuous on $[0,\infty)$. By the derivative theorem, we have
$$L[f'(t)] = sF(s) - f(0+), \qquad s > 0.$$


Taking the limit as $s \to 0$, we obtain
$$\lim_{s\to 0}\int_0^\infty e^{-st} f'(t)\,dt = \lim_{s\to 0} sF(s) - f(0).$$
Taking the limit inside the integral,
$$\int_0^\infty f'(t)\,dt = \lim_{s\to 0} sF(s) - f(0).$$
On integrating we obtain
$$\lim_{t\to\infty} f(t) - f(0) = \lim_{s\to 0} sF(s) - f(0).$$
Cancellation of $f(0)$ gives the desired result.

Remark 2: In the final value theorem, the existence of $\lim_{t\to\infty} f(t)$ is very important. Consider $f(t) = \sin t$. Then $\lim_{s\to 0} sF(s) = \lim_{s\to 0}\frac{s}{1+s^2} = 0$, but $\lim_{t\to\infty} f(t)$ does not exist. Thus we may say that if $\lim_{s\to 0} sF(s) = L$ exists, then either $\lim_{t\to\infty} f(t) = L$ or this limit does not exist.

39.3.3 Example

Without determining $f(t)$, and assuming that $f(t)$ satisfies the hypotheses of the limiting theorems, compute
$$\lim_{t\to 0+} f(t) \quad\text{and}\quad \lim_{t\to\infty} f(t) \quad\text{if}\quad L[f(t)] = \frac{1}{s} + \tan^{-1}\frac{a}{s}.$$

Solution: By the initial value theorem, we get
$$\lim_{t\to 0+} f(t) = \lim_{s\to\infty} sF(s) = \lim_{s\to\infty}\left[1 + s\tan^{-1}\frac{a}{s}\right].$$
Application of L'Hospital's rule gives
$$\lim_{t\to 0+} f(t) = 1 + \lim_{s\to\infty}\frac{\dfrac{1}{1+a^2/s^2}\left(-\dfrac{a}{s^2}\right)}{-\dfrac{1}{s^2}} = 1 + a.$$
Using the final value theorem we find
$$\lim_{t\to\infty} f(t) = \lim_{s\to 0} sF(s) = \lim_{s\to 0}\left[1 + s\tan^{-1}\frac{a}{s}\right] = 1.$$
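Since the limiting theorems look only at $sF(s)$, both limits can be sanity-checked numerically without knowing $f(t)$, by evaluating $sF(s)$ at a very large and a very small $s$ (sample parameter value assumed):

```python
import math

a = 0.7  # sample parameter value

def sF(s):
    # s * F(s) with F(s) = 1/s + arctan(a/s)
    return s * (1/s + math.atan(a/s))

initial = sF(1e8)   # s -> infinity approximates f(0+) = 1 + a
final = sF(1e-8)    # s -> 0 approximates f(infinity) = 1
```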


Remark 3: The final value theorem says $\lim_{t\to\infty} f(t) = \lim_{s\to 0} sF(s)$, provided $\lim_{t\to\infty} f(t)$ exists. If $F(s)$ is finite as $s \to 0$, then trivially $\lim_{t\to\infty} f(t) = 0$. However, there are several functions whose Laplace transform is not finite as $s \to 0$; for example, $f(t) = 1$ has Laplace transform $F(s) = \frac{1}{s}$, $s > 0$. In this case we have $\lim_{s\to 0} sF(s) = \lim_{s\to 0} 1 = 1 = \lim_{t\to\infty} f(t)$.

Suggested Readings

Arfken, G.B., Weber, H.J. and Harris, F.E. (2012). Mathematical Methods for Physicists
(A comprehensive guide), Seventh Edition, Elsevier Academic Press, New Delhi.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Grewal, B.S. (2007). Higher Engineering Mathematics. Fourteenth Edition. Khanna
Publishers, New Delhi.
Dyke, P.P.G. (2001). An Introduction to Laplace Transforms and Fourier Series. Springer-
Verlag London Ltd.
Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics. Third Edi-
tion. Narosa Publishing House. New Delhi.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Kreyszig, E. (2006). Advanced Engineering Mathematics, Ninth Edition, Wiley India
Pvt. Ltd, New Delhi.
McQuarrie, D.A. (2009). Mathematical Methods for Scientist and Engineers. First Indian
Edition. Viva Books Pvt. Ltd. New Delhi.
Schiff, J.L. (1999). The Laplace Transform: Theory and Applications. Springer-Verlag,
New York Inc.
Raisinghania, M.D. (2009). Advanced Differential Equations. Twelfth Edition. S. Chand
& Company Ltd., New Delhi.



Module 2: Laplace Transform

Lesson 40

Inverse Laplace Transform

In this lesson we introduce the concept of the inverse Laplace transform and discuss some of its important properties, which will be helpful in evaluating the inverse transform of some complicated functions. As mentioned at the beginning of this module, the Laplace transform allows us to convert a differential equation into an algebraic equation. Once we solve the algebraic equation in the transformed domain, we would like to get back to the time domain, and therefore we need the concept of the inverse Laplace transform.

40.1 Inverse Laplace Transform

If $F(s) = L[f(t)]$ for some function $f(t)$, we define the inverse Laplace transform as
$$L^{-1}[F(s)] = f(t).$$
There is an integral formula for the inverse, but it is not as simple as the transform itself, as it requires complex numbers and path integrals. The easiest way of computing the inverse is using a table of Laplace transforms. For example,
$$L[\sin wt] = \frac{w}{s^2+w^2}$$
implies
$$L^{-1}\left[\frac{w}{s^2+w^2}\right] = \sin wt, \quad t \ge 0,$$
and similarly
$$L[\cos wt] = \frac{s}{s^2+w^2} \;\Rightarrow\; L^{-1}\left[\frac{s}{s^2+w^2}\right] = \cos wt, \quad t \ge 0.$$

40.2 Uniqueness of Inverse Laplace Transform

Given a function $F(s)$, before finding $f(t)$ such that $L[f(t)] = F(s)$, we first need to know whether such a function is unique.


Consider
$$g(t) = \begin{cases} 1 & \text{when } t = 1 \\ \sin t & \text{otherwise.} \end{cases}$$
Then
$$L[g(t)] = \frac{1}{s^2+1} = L[\sin t].$$
Thus we have two different functions, $g(t)$ and $\sin t$, whose Laplace transforms are the same. Note, however, that the two functions differ only at a point of discontinuity. Thanks to the following theorem, we have uniqueness for continuous functions:

40.2.1 Theorem (Lerch’s Theorem)

If $f$ and $g$ are continuous and of exponential order, and if $F(s) = G(s)$ for all $s > s_0$, then $f(t) = g(t)$ for all $t > 0$.

Proof: If $F(s) = G(s)$ for all $s > s_0$, then
$$\int_0^\infty e^{-st} f(t)\,dt = \int_0^\infty e^{-st} g(t)\,dt, \quad \forall s > s_0$$
$$\Rightarrow \int_0^\infty e^{-st}\left[f(t)-g(t)\right]dt = 0, \quad \forall s > s_0$$
$$\Rightarrow f(t) - g(t) \equiv 0, \quad \forall t > 0$$
$$\Rightarrow f(t) = g(t), \quad \forall t > 0.$$
This completes the proof.

Remark: The uniqueness theorem holds for piecewise continuous functions as well. Recall that piecewise continuous means that the function is continuous except perhaps at a discrete set of points, where it has jump discontinuities, like the Heaviside function or the function $g(t)$ defined above. Since the Laplace integral does not "see" values at the discontinuities, in this case we can only conclude that $f(t) = g(t)$ outside of the discontinuities.

We now state some important properties of the inverse Laplace transform. Though these properties are the same as those listed for the Laplace transform, we repeat them without proof for the sake of completeness and apply them to evaluate the inverse Laplace transform of some functions.


40.3 Linearity of Inverse Laplace Transform

If $F_1(s)$ and $F_2(s)$ are the Laplace transforms of the functions $f_1(t)$ and $f_2(t)$ respectively, then
$$L^{-1}[a_1F_1(s) + a_2F_2(s)] = a_1L^{-1}[F_1(s)] + a_2L^{-1}[F_2(s)] = a_1f_1(t) + a_2f_2(t),$$
where $a_1$ and $a_2$ are constants.

40.4 Example Problems

40.4.1 Problem 1

Find the inverse Laplace transform of
$$F(s) = \frac{6}{2s-3} + \frac{8-6s}{16s^2+9}.$$

Solution: Using linearity of the inverse Laplace transform we have
$$f(t) = 6L^{-1}\left[\frac{1}{2s-3}\right] + 8L^{-1}\left[\frac{1}{16s^2+9}\right] - 6L^{-1}\left[\frac{s}{16s^2+9}\right].$$
Rewriting the above expression as
$$f(t) = 3L^{-1}\left[\frac{1}{s-(3/2)}\right] + \frac{1}{2}L^{-1}\left[\frac{1}{s^2+(9/16)}\right] - \frac{3}{8}L^{-1}\left[\frac{s}{s^2+(9/16)}\right].$$
Using the result
$$L^{-1}\left[\frac{1}{s-a}\right] = e^{at}$$
and taking the inverse transform we obtain
$$f(t) = 3e^{3t/2} + \frac{2}{3}\sin\frac{3t}{4} - \frac{3}{8}\cos\frac{3t}{4}.$$

40.4.2 Problem 2

Find the inverse Laplace transform of
$$F(s) = \frac{s^2+s+1}{s^3+s}.$$

Solution: We use the method of partial fractions to write $F$ in a form where we can use the table of Laplace transforms. We factor the denominator as $s(s^2+1)$ and write
$$\frac{s^2+s+1}{s^3+s} = \frac{A}{s} + \frac{Bs+C}{s^2+1}.$$
Putting the right hand side over a common denominator and equating the numerators we get $A(s^2+1) + s(Bs+C) = s^2+s+1$. Expanding and equating coefficients we obtain $A+B = 1$, $C = 1$, $A = 1$, and thus $B = 0$. In other words,
$$F(s) = \frac{s^2+s+1}{s^3+s} = \frac{1}{s} + \frac{1}{s^2+1}.$$
By linearity of the inverse Laplace transform we get
$$L^{-1}\left[\frac{s^2+s+1}{s^3+s}\right] = L^{-1}\left[\frac{1}{s}\right] + L^{-1}\left[\frac{1}{s^2+1}\right] = 1 + \sin t.$$
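Both the partial-fraction step and the final inverse can be confirmed with SymPy (assuming it is available): `apart` reproduces the decomposition, and the forward transform of $1 + \sin t$ returns the original $F(s)$.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

F = (s**2 + s + 1) / (s**3 + s)
parts = sp.apart(F, s)                      # partial-fraction decomposition
G = sp.laplace_transform(1 + sp.sin(t), t, s, noconds=True)

decomp_residual = sp.simplify(parts - (1/s + 1/(s**2 + 1)))
transform_residual = sp.simplify(G - F)
```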

40.5 First Shifting Property of Inverse Laplace Transform

If $L^{-1}[F(s)] = f(t)$, then $L^{-1}[F(s-a)] = e^{at}f(t)$.

40.6 Example Problems

40.6.1 Problem 1

Evaluate $L^{-1}\left[\dfrac{1}{(s+1)^2}\right]$.

Solution: Rewriting the given expression as
$$L^{-1}\left[\frac{1}{(s+1)^2}\right] = L^{-1}\left[\frac{1}{(s-(-1))^2}\right].$$
Applying the first shifting property of the inverse Laplace transform,
$$L^{-1}\left[\frac{1}{(s+1)^2}\right] = e^{-t}L^{-1}\left[\frac{1}{s^2}\right].$$
Thus we obtain
$$L^{-1}\left[\frac{1}{(s+1)^2}\right] = te^{-t}.$$


40.6.2 Problem 2

Find $L^{-1}\left[\dfrac{1}{s^2+4s+8}\right]$.

Solution: First we complete the square to write the denominator as $(s+2)^2+4$. Next we find
$$L^{-1}\left[\frac{1}{s^2+4}\right] = \frac{1}{2}\sin 2t.$$
Putting it all together with the shifting property, we find
$$L^{-1}\left[\frac{1}{s^2+4s+8}\right] = L^{-1}\left[\frac{1}{(s+2)^2+4}\right] = \frac{1}{2}e^{-2t}\sin 2t.$$
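Completing the square plus the first shifting property can be double-checked by transforming the claimed inverse forward (SymPy assumed):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Forward transform of (1/2) e^{-2t} sin 2t should reproduce 1/(s^2 + 4s + 8)
G = sp.laplace_transform(sp.exp(-2*t) * sp.sin(2*t) / 2, t, s, noconds=True)
residual = sp.simplify(G - 1 / (s**2 + 4*s + 8))
```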

40.7 Second Shifting Property of Inverse Laplace Transform


 
If $L^{-1}[F(s)] = f(t)$, then $L^{-1}\left[e^{-as}F(s)\right] = f(t-a)\,H(t-a)$, where $H$ is the Heaviside step function.

40.8 Example Problems

40.8.1 Problem 1

Find the inverse Laplace transform of
$$F(s) = \frac{e^{-s}}{s(s^2+1)}.$$

Solution: First we compute the inverse Laplace transform
$$L^{-1}\left[\frac{1}{s(s^2+1)}\right] = L^{-1}\left[\frac{1}{s} - \frac{s}{s^2+1}\right].$$
Using linearity of the inverse transform we get
$$L^{-1}\left[\frac{1}{s(s^2+1)}\right] = L^{-1}\left[\frac{1}{s}\right] - L^{-1}\left[\frac{s}{s^2+1}\right] = 1 - \cos t.$$
We now find $L^{-1}\left[\dfrac{e^{-s}}{s(s^2+1)}\right]$.


Using the second shifting theorem we obtain
$$L^{-1}\left[\frac{e^{-s}}{s(s^2+1)}\right] = \left(1-\cos(t-1)\right)H(t-1).$$

40.8.2 Problem 2

Find the inverse Laplace transform $f(t)$ of
$$F(s) = \frac{e^{-s}}{s^2+4} + \frac{e^{-2s}}{s^2+4} + \frac{e^{-3s}}{(s+2)^2}.$$

Solution: First we find that
$$L^{-1}\left[\frac{1}{s^2+4}\right] = \frac{1}{2}\sin 2t$$
and, using the first shifting property,
$$L^{-1}\left[\frac{1}{(s+2)^2}\right] = te^{-2t}.$$
By linearity we have
$$f(t) = L^{-1}\left[\frac{e^{-s}}{s^2+4}\right] + L^{-1}\left[\frac{e^{-2s}}{s^2+4}\right] + L^{-1}\left[\frac{e^{-3s}}{(s+2)^2}\right].$$
Putting it all together and using the second shifting theorem we get
$$f(t) = \frac{1}{2}\sin 2(t-1)\,H(t-1) + \frac{1}{2}\sin 2(t-2)\,H(t-2) + (t-3)e^{-2(t-3)}\,H(t-3).$$

Suggested Readings

Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Grewal, B.S. (2007). Higher Engineering Mathematics. Fourteenth Edition. Khanna
Publishers, New Delhi.
Dyke, P.P.G. (2001). An Introduction to Laplace Transforms and Fourier Series. Springer-
Verlag London Ltd.
Schiff, J.L. (1999). The Laplace Transform: Theory and Applications. Springer-Verlag,
New York Inc.



Module 2: Laplace Transform

Lesson 41

Properties of Inverse Laplace Transform

We shall continue discussing various properties of the inverse Laplace transform. We mainly cover the change of scale property and the inverse Laplace transforms of derivatives and integrals.

41.1 Change of Scale Property

If $L^{-1}[F(s)] = f(t)$ then $L^{-1}[F(as)] = \dfrac{1}{a}\,f\!\left(\dfrac{t}{a}\right)$.

41.1.1 Example

If
$$L^{-1}\left[\frac{s}{s^2-16}\right] = \cosh 4t,$$
then find
$$L^{-1}\left[\frac{s}{2s^2-8}\right].$$

Solution: Given that
$$L^{-1}\left[\frac{s}{s^2-16}\right] = \cosh 4t.$$
Replacing $s$ by $2s$ and using the scaling property we find
$$L^{-1}\left[\frac{2s}{4s^2-16}\right] = \frac{1}{2}\cosh 2t.$$
Thus, we obtain
$$L^{-1}\left[\frac{s}{2s^2-8}\right] = \frac{1}{2}\cosh 2t.$$

41.2 Inverse Laplace Transform of Derivatives (Derivative Theorem)

If $L^{-1}[F(s)] = f(t)$ then $L^{-1}\left[\dfrac{d^n}{ds^n}F(s)\right] = (-1)^n t^n f(t)$, $n = 1, 2, \ldots$
dsn


41.2.1 Example

Find the inverse Laplace transform of
$$\text{(i)}\;\; \frac{2as}{(s^2+a^2)^2} \qquad \text{(ii)}\;\; \frac{s^2-a^2}{(s^2+a^2)^2}$$

Solution: Note that
$$\frac{d}{ds}\left(\frac{a}{s^2+a^2}\right) = \frac{-2as}{(s^2+a^2)^2} \quad\text{and}\quad \frac{d}{ds}\left(\frac{s}{s^2+a^2}\right) = \frac{a^2-s^2}{(s^2+a^2)^2}.$$
Direct application of the derivative theorem gives
$$\text{(i)}\quad L^{-1}\left[\frac{2as}{(s^2+a^2)^2}\right] = (-1)\,t\,L^{-1}\left[-\frac{a}{s^2+a^2}\right] = t\sin at$$
and
$$\text{(ii)}\quad L^{-1}\left[\frac{s^2-a^2}{(s^2+a^2)^2}\right] = (-1)\,t\,L^{-1}\left[-\frac{s}{s^2+a^2}\right] = t\cos at.$$
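Both inverses can be checked by transforming $t\sin at$ and $t\cos at$ forward (SymPy assumed):

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

r1 = sp.simplify(sp.laplace_transform(t*sp.sin(a*t), t, s, noconds=True)
                 - 2*a*s / (s**2 + a**2)**2)
r2 = sp.simplify(sp.laplace_transform(t*sp.cos(a*t), t, s, noconds=True)
                 - (s**2 - a**2) / (s**2 + a**2)**2)
```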

41.3 Inverse Laplace Transform of Integrals


If $L^{-1}[F(s)] = f(t)$ then $L^{-1}\left[\displaystyle\int_s^\infty F(u)\,du\right] = \dfrac{f(t)}{t}$.

41.3.1 Example

Find the inverse Laplace transform $f(t)$ of the function
$$\int_s^\infty \frac{du}{u(u+1)}.$$

Solution: By the method of partial fractions we obtain
$$L^{-1}\left[\frac{1}{s(s+1)}\right] = L^{-1}\left[\frac{1}{s} - \frac{1}{s+1}\right] = L^{-1}\left[\frac{1}{s}\right] - L^{-1}\left[\frac{1}{s+1}\right] = 1 - e^{-t}.$$
Using the inverse Laplace transform of integrals we get
$$L^{-1}\left[\int_s^\infty \frac{du}{u(u+1)}\right] = \frac{1-e^{-t}}{t}.$$


41.4 Multiplication by Powers of s

If $L^{-1}[F(s)] = f(t)$ and $f(0) = 0$, then $L^{-1}[sF(s)] = f'(t)$.

41.4.1 Example

Using $L^{-1}\left[\dfrac{1}{s^2+1}\right] = \sin t$ and the above result, compute $L^{-1}\left[\dfrac{s}{s^2+1}\right]$.

Solution: Direct application of the above result leads to
$$L^{-1}\left[\frac{s}{s^2+1}\right] = \frac{d}{dt}\sin t = \cos t.$$
s +1 dt

41.5 Division by Powers of s

Let $L^{-1}[F(s)] = f(t)$. If $f(t)$ is piecewise continuous and of exponential order $\alpha$ such that $\lim_{t\to 0}\dfrac{f(t)}{t}$ exists, then
$$L^{-1}\left[\frac{F(s)}{s}\right] = \int_0^t f(u)\,du.$$

41.6 Example Problems

41.6.1 Problem 1

Compute
$$L^{-1}\left[\frac{1}{s(s^2+1)}\right].$$

Solution: We proceed by applying the integration rule above:
$$L^{-1}\left[\frac{1}{s}\cdot\frac{1}{s^2+1}\right] = \int_0^t L^{-1}\left[\frac{1}{s^2+1}\right]du = \int_0^t \sin u\,du = 1 - \cos t.$$


41.6.2 Problem 2

Find the inverse Laplace transform of $\dfrac{1}{(s^2+1)^2}$.

Solution: We know that
$$L^{-1}\left[\frac{s}{(1+s^2)^2}\right] = \frac{1}{2}t\sin t.$$
We now apply the above result:
$$L^{-1}\left[\frac{1}{(1+s^2)^2}\right] = L^{-1}\left[\frac{1}{s}\cdot\frac{s}{(1+s^2)^2}\right] = \int_0^t \frac{1}{2}u\sin u\,du.$$
Evaluating the above integral we get
$$L^{-1}\left[\frac{1}{(1+s^2)^2}\right] = \frac{1}{2}(\sin t - t\cos t).$$
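As a check, the forward transform of $(\sin t - t\cos t)/2$ should give back $1/(s^2+1)^2$ (SymPy assumed):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

G = sp.laplace_transform((sp.sin(t) - t*sp.cos(t)) / 2, t, s, noconds=True)
residual = sp.simplify(G - 1 / (s**2 + 1)**2)
```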

41.6.3 Problem 3

Find the inverse Laplace transform of $\dfrac{s-1}{s^2(s^2+1)}$.

Solution: It is easy to compute
$$L^{-1}\left[\frac{s-1}{s^2+1}\right] = L^{-1}\left[\frac{s}{s^2+1}\right] - L^{-1}\left[\frac{1}{s^2+1}\right] = \cos t - \sin t.$$
Now, applying the above result once, we get
$$L^{-1}\left[\frac{s-1}{s(s^2+1)}\right] = \int_0^t (\cos u - \sin u)\,du = \sin t + \cos t - 1.$$
Finally, applying it once more, we obtain the desired transform as
$$L^{-1}\left[\frac{s-1}{s^2(s^2+1)}\right] = \int_0^t (\sin u + \cos u - 1)\,du = 1 - t + \sin t - \cos t.$$

41.7 Evaluation of Integrals

With the application of the Laplace and inverse Laplace transforms we can also compute some complicated integrals.


41.8 Example Problems

41.8.1 Problem 1

Evaluate $\displaystyle\int_0^\infty \frac{\cos tx}{x^2+1}\,dx$, $t > 0$.

Solution: Let
$$f(t) = \int_0^\infty \frac{\cos tx}{x^2+1}\,dx.$$
Taking the Laplace transform on both sides,
$$L[f(t)] = \int_0^\infty \frac{s}{(x^2+1)(s^2+x^2)}\,dx = \frac{s}{s^2-1}\int_0^\infty\left(\frac{1}{x^2+1} - \frac{1}{s^2+x^2}\right)dx$$
$$= \frac{s}{s^2-1}\left[\tan^{-1}x - \frac{1}{s}\tan^{-1}\frac{x}{s}\right]_0^\infty = \frac{s}{s^2-1}\left(\frac{\pi}{2} - \frac{\pi}{2s}\right) = \frac{\pi}{2}\cdot\frac{1}{s+1}.$$
Taking the inverse Laplace transform on both sides,
$$f(t) = \frac{\pi}{2}e^{-t}.$$

41.8.2 Problem 2

Evaluate $\displaystyle\int_0^\infty e^{-x^2}\,dx$.

Solution: Let
$$g(t) = \int_0^\infty e^{-tx^2}\,dx.$$
Now taking the Laplace transform on both sides,
$$L[g(t)] = \int_0^\infty \frac{1}{s+x^2}\,dx = \frac{1}{\sqrt{s}}\left[\arctan\frac{x}{\sqrt{s}}\right]_0^\infty = \frac{\pi}{2\sqrt{s}}.$$
Taking the inverse Laplace transform we obtain
$$g(t) = \frac{\pi}{2}L^{-1}\left[\frac{1}{\sqrt{s}}\right] = \frac{\pi}{2}\cdot\frac{1}{\sqrt{\pi}}\cdot\frac{1}{\sqrt{t}}.$$
Hence for $t = 1$ we get
$$\int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}.$$


Remark: Theoretical results on the applicability of the Laplace transform for evaluating integrals, together with the evaluation of some more integrals, will be elaborated in one of the next lessons.

Suggested Readings

Arfken, G.B., Weber, H.J. and Harris, F.E. (2012). Mathematical Methods for Physicists
(A comprehensive guide), Seventh Edition, Elsevier Academic Press, New Delhi.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Grewal, B.S. (2007). Higher Engineering Mathematics. Fourteenth Edition. Khanna
Publishers, New Delhi.
Dyke, P.P.G. (2001). An Introduction to Laplace Transforms and Fourier Series. Springer-
Verlag London Ltd.
Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics. Third Edi-
tion. Narosa Publishing House. New Delhi.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Kreyszig, E. (2006). Advanced Engineering Mathematics, Ninth Edition, Wiley India
Pvt. Ltd, New Delhi.
McQuarrie, D.A. (2009). Mathematical Methods for Scientist and Engineers. First Indian
Edition. Viva Books Pvt. Ltd. New Delhi.
Schiff, J.L. (1999). The Laplace Transform: Theory and Applications. Springer-Verlag,
New York Inc.
Raisinghania, M.D. (2009). Advanced Differential Equations. Twelfth Edition. S. Chand
& Company Ltd., New Delhi.



Module 2: Laplace Transform

Lesson 42

Convolution for Laplace Transform

In this lesson we introduce the convolution property of the Laplace transform. We start with the definition of convolution, followed by an important theorem on the Laplace transform of a convolution. The convolution theorem plays an important role in finding the inverse Laplace transform of complicated functions and is therefore very useful for solving differential equations.

42.1 Convolution

The convolution of two given functions $f(t)$ and $g(t)$ is written $f * g$ and is defined by the integral
$$(f * g)(t) := \int_0^t f(\tau)g(t-\tau)\,d\tau. \qquad (42.1)$$
As you can see, the convolution of two functions of $t$ is another function of $t$.

42.2 Example Problems

42.2.1 Problem 1

Find the convolution of $f(t) = e^t$ and $g(t) = t$ for $t \ge 0$.

Solution: By the definition we have
$$(f * g)(t) = \int_0^t e^\tau (t-\tau)\,d\tau.$$
Integrating by parts, we obtain
$$(f * g)(t) = e^t - t - 1.$$
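The convolution integral above is easy to reproduce symbolically (SymPy assumed):

```python
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)

# (e^t * t)(t) = integral of e^tau (t - tau) over tau in [0, t]
conv = sp.integrate(sp.exp(tau) * (t - tau), (tau, 0, t))
residual = sp.simplify(conv - (sp.exp(t) - t - 1))
```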

42.2.2 Problem 2

Find the convolution of f (t) = sin(ωt) and g(t) = cos(ωt) for t ≥ 0.


Solution: By the definition of convolution we have
$$(f * g)(t) = \int_0^t \sin(\omega\tau)\cos\big(\omega(t-\tau)\big)\,d\tau.$$
We apply the identity $\cos\theta\sin\psi = \frac{1}{2}\big(\sin(\theta+\psi) - \sin(\theta-\psi)\big)$ to get
$$(f * g)(t) = \int_0^t \frac{1}{2}\big(\sin(\omega t) - \sin(\omega t - 2\omega\tau)\big)\,d\tau.$$
On integration we obtain
$$(f * g)(t) = \left[\frac{1}{2}\tau\sin(\omega t) - \frac{1}{4\omega}\cos(2\omega\tau - \omega t)\right]_{\tau=0}^{t} = \frac{1}{2}t\sin(\omega t).$$
The formula holds only for $t \ge 0$; we assumed that $f$ and $g$ are zero (or simply not defined) for negative $t$.

42.3 Properties of Convolution

The convolution has many properties that make it behave like a product. Let $c$ be a constant and $f$, $g$ and $h$ be functions; then
(i) $f * g = g * f$,  [symmetry]
(ii) $c(f * g) = cf * g = f * cg$,  [$c$ = constant]
(iii) $f * (g * h) = (f * g) * h$,  [associative property]
(iv) $f * (g + h) = f * g + f * h$.  [distributive property]
Proof: We give the proof of (i); all the others can be done similarly. By the definition of convolution we have
$$f * g = \int_0^t f(\tau)g(t-\tau)\,d\tau.$$
Substituting $t - \tau = u$, so that $d\tau = -du$, we get
$$f * g = -\int_t^0 f(t-u)g(u)\,du = \int_0^t f(t-u)g(u)\,du = g * f.$$
This completes the proof.


The most interesting property for us, and the main result of this lesson is the following
theorem.


42.4 Convolution Theorem

If $f$ and $g$ are piecewise continuous on $[0,\infty)$ and of exponential order $\alpha$, then
$$L[(f * g)(t)] = L[f(t)]\,L[g(t)].$$

Proof: From the definition,
$$L[(f * g)(t)] = \int_0^\infty e^{-st}\int_0^t f(\tau)g(t-\tau)\,d\tau\,dt, \qquad [\mathrm{Re}(s) > \alpha].$$
Changing the order of integration,
$$L[(f * g)(t)] = \int_0^\infty f(\tau)\int_\tau^\infty e^{-st} g(t-\tau)\,dt\,d\tau.$$
We now put $t - \tau = u$, so that $dt = du$, and get
$$L[(f * g)(t)] = \int_0^\infty\int_0^\infty e^{-s(u+\tau)} f(\tau)g(u)\,du\,d\tau = \int_0^\infty e^{-s\tau} f(\tau)\,d\tau \int_0^\infty e^{-su} g(u)\,du = L[f(t)]\,L[g(t)].$$
This completes the proof.

In other words, the Laplace transform of a convolution is the product of the Laplace transforms. The simplest way to use this result is in reverse, i.e., to find inverse Laplace transforms.
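The theorem is straightforward to verify on a concrete pair, say $f(t) = e^{-t}$ and $g(t) = t$: the transform of their convolution should equal the product of the transforms (SymPy assumed).

```python
import sympy as sp

t, tau, s = sp.symbols('t tau s', positive=True)

# Convolution of f(t) = e^{-t} and g(t) = t
conv = sp.integrate(sp.exp(-tau) * (t - tau), (tau, 0, t))

lhs = sp.laplace_transform(conv, t, s, noconds=True)
rhs = (sp.laplace_transform(sp.exp(-t), t, s, noconds=True)
       * sp.laplace_transform(t, t, s, noconds=True))
residual = sp.simplify(lhs - rhs)
```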

42.5 Example Problems

42.5.1 Problem 1

Find the inverse Laplace transform of the function of $s$ defined by
$$\frac{1}{(s+1)s^2} = \frac{1}{s+1}\cdot\frac{1}{s^2}.$$


Solution: We recognize the two elementary entries
$$L^{-1}\left[\frac{1}{s+1}\right] = e^{-t} \quad\text{and}\quad L^{-1}\left[\frac{1}{s^2}\right] = t.$$
Therefore,
$$L^{-1}\left[\frac{1}{s+1}\cdot\frac{1}{s^2}\right] = \int_0^t \tau e^{-(t-\tau)}\,d\tau.$$
On integration by parts we obtain
$$L^{-1}\left[\frac{1}{s+1}\cdot\frac{1}{s^2}\right] = e^{-t} + t - 1.$$

42.5.2 Problem 2

Use the convolution theorem to evaluate
$$L^{-1}\left[\frac{s}{(s^2+1)^2}\right].$$

Solution: Note that
$$L[\sin t] = \frac{1}{s^2+1} \quad\text{and}\quad L[\cos t] = \frac{s}{s^2+1}.$$
Using the convolution theorem,
$$L[\sin t * \cos t] = L[\sin t]\,L[\cos t] = \frac{s}{(s^2+1)^2}.$$
Therefore, we have
$$L^{-1}\left[\frac{s}{(s^2+1)^2}\right] = \int_0^t \sin\tau\cos(t-\tau)\,d\tau.$$
Using the trigonometric identity $2\sin A\cos B = \sin(A+B) + \sin(A-B)$ we get
$$L^{-1}\left[\frac{s}{(s^2+1)^2}\right] = \frac{1}{2}\int_0^t \big[\sin t + \sin(2\tau - t)\big]\,d\tau.$$
On integration we find
$$L^{-1}\left[\frac{s}{(s^2+1)^2}\right] = \frac{1}{2}\left[\tau\sin t - \frac{\cos(2\tau-t)}{2}\right]_{\tau=0}^{t} = \frac{1}{2}t\sin t - \frac{1}{4}\big[\cos t - \cos t\big].$$


Finally we have the following result:
$$L^{-1}\left[\frac{s}{(s^2+1)^2}\right] = \frac{1}{2}t\sin t.$$

42.5.3 Problem 3

Use the convolution theorem to evaluate
$$L^{-1}\left[\frac{1}{\sqrt{s}\,(s-1)}\right].$$

Solution: We know the following elementary transforms:
$$L\left[\frac{1}{\sqrt{t}}\right] = \frac{\Gamma\left(\frac{1}{2}\right)}{\sqrt{s}} = \sqrt{\frac{\pi}{s}} \;\Rightarrow\; L^{-1}\left[\frac{1}{\sqrt{s}}\right] = \frac{1}{\sqrt{\pi t}}$$
and
$$L^{-1}\left[\frac{1}{s-1}\right] = e^t.$$
Then by the convolution theorem, we find
$$L^{-1}\left[\frac{1}{\sqrt{s}\,(s-1)}\right] = \frac{1}{\sqrt{\pi t}} * e^t = \int_0^t \frac{e^{t-\tau}}{\sqrt{\pi\tau}}\,d\tau.$$
The substitution $u = \sqrt{\tau}$, i.e. $du = \frac{1}{2\sqrt{\tau}}\,d\tau$, gives
$$L^{-1}\left[\frac{1}{\sqrt{s}\,(s-1)}\right] = \frac{e^t}{\sqrt{\pi}}\int_0^t \frac{e^{-\tau}}{\sqrt{\tau}}\,d\tau = \frac{2e^t}{\sqrt{\pi}}\int_0^{\sqrt{t}} e^{-u^2}\,du.$$
Thus, we have
$$L^{-1}\left[\frac{1}{\sqrt{s}\,(s-1)}\right] = e^t\,\mathrm{erf}(\sqrt{t}).$$

42.5.4 Problem 4

Use the convolution theorem to evaluate
$$L^{-1}\left[\frac{1}{s^3(s^2+1)}\right].$$


Solution: We know
$$L^{-1}\left[\frac{1}{s^3}\right] = \frac{t^2}{2} \quad\text{and}\quad L^{-1}\left[\frac{1}{s^2+1}\right] = \sin t.$$
By the convolution theorem we have
$$L^{-1}\left[\frac{1}{s^3(s^2+1)}\right] = \frac{1}{2}t^2 * \sin t = \frac{1}{2}\int_0^t \sin\tau\,(t-\tau)^2\,d\tau$$
$$= \frac{1}{2}\left(\left[-\cos\tau\,(t-\tau)^2\right]_0^t - 2\int_0^t (t-\tau)\cos\tau\,d\tau\right)$$
$$= \frac{1}{2}\left(t^2 - 2\left(\left[(t-\tau)\sin\tau\right]_0^t + \int_0^t \sin\tau\,d\tau\right)\right).$$
Finally we get the desired inverse Laplace transform as
$$L^{-1}\left[\frac{1}{s^3(s^2+1)}\right] = \frac{t^2}{2} + \cos t - 1.$$

Suggested Readings

Arfken, G.B., Weber, H.J. and Harris, F.E. (2012). Mathematical Methods for Physicists
(A comprehensive guide), Seventh Edition, Elsevier Academic Press, New Delhi.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Grewal, B.S. (2007). Higher Engineering Mathematics. Fourteenth Edition. Khanna
Publishers, New Delhi.
Dyke, P.P.G. (2001). An Introduction to Laplace Transforms and Fourier Series. Springer-
Verlag London Ltd.
Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics. Third Edi-
tion. Narosa Publishing House. New Delhi.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Kreyszig, E. (2006). Advanced Engineering Mathematics, Ninth Edition, Wiley India
Pvt. Ltd, New Delhi.



Module 2: Laplace Transform

Lesson 43

Laplace Transform of Some Special Functions

In this lesson we discuss the Laplace transforms of some special functions, such as error functions and Dirac delta functions. These functions appear in various applications of science and engineering; we shall encounter some of them while solving differential equations using the Laplace transform.

43.1 Error Function

The error function appears in probability, statistics and the solutions of some partial differential equations. It is defined as
$$\mathrm{erf}(t) = \frac{2}{\sqrt{\pi}}\int_0^t e^{-u^2}\,du.$$
Its complement, known as the complementary error function, is defined as
$$\mathrm{erfc}(t) = 1 - \mathrm{erf}(t) = \frac{2}{\sqrt{\pi}}\int_t^\infty e^{-u^2}\,du.$$

We find the Laplace transforms of different forms of the error function in the following examples.
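The integral definition agrees with the standard library's `math.erf`; a crude midpoint-rule quadrature makes an illustrative check:

```python
import math

def erf_quad(t, n=20000):
    # midpoint rule for (2/sqrt(pi)) * integral of e^{-u^2} over [0, t]
    h = t / n
    total = sum(math.exp(-((i + 0.5) * h)**2) for i in range(n))
    return (2 / math.sqrt(math.pi)) * total * h

errors = [abs(erf_quad(x) - math.erf(x)) for x in (0.3, 1.0, 2.0)]
```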

43.2 Example Problems

43.2.1 Problem 1

Find $L[\mathrm{erf}(\sqrt{t})]$.

Solution: From the definitions of the error function and the Laplace transform we have
$$L[\mathrm{erf}(\sqrt{t})] = \frac{2}{\sqrt{\pi}}\int_0^\infty e^{-st}\int_0^{\sqrt{t}} e^{-x^2}\,dx\,dt.$$
By changing the order of integration we get
$$L[\mathrm{erf}(\sqrt{t})] = \frac{2}{\sqrt{\pi}}\int_{x=0}^\infty\int_{t=x^2}^\infty e^{-st} e^{-x^2}\,dt\,dx.$$


Evaluating the inner integral we obtain
$$L[\mathrm{erf}(\sqrt{t})] = \frac{2}{\sqrt{\pi}}\int_{x=0}^\infty e^{-x^2}\frac{e^{-sx^2}}{s}\,dx = \frac{2}{\sqrt{\pi}\,s}\int_{x=0}^\infty e^{-(1+s)x^2}\,dx.$$
Substituting $\sqrt{1+s}\,x = u$, so that $dx = \frac{1}{\sqrt{1+s}}\,du$,
$$L[\mathrm{erf}(\sqrt{t})] = \frac{2}{\sqrt{\pi}\,s}\cdot\frac{1}{\sqrt{1+s}}\int_{u=0}^\infty e^{-u^2}\,du = \frac{1}{s\sqrt{s+1}}.$$
Note that we have used the value of the Gaussian integral $\int_0^\infty e^{-u^2}\,du = \frac{\sqrt{\pi}}{2}$.
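The result $L[\mathrm{erf}(\sqrt{t})] = 1/(s\sqrt{s+1})$ can be spot-checked by numerically integrating $e^{-st}\,\mathrm{erf}(\sqrt{t})$; the midpoint rule below is illustrative only, with step size and cutoff chosen so the quadrature and truncation errors stay well below the tolerance.

```python
import math

def laplace_erf_sqrt(s, n=200000, tmax=50.0):
    # midpoint rule for the Laplace integral; the tail beyond tmax is negligible
    h = tmax / n
    return sum(math.exp(-s*(i + 0.5)*h) * math.erf(math.sqrt((i + 0.5)*h))
               for i in range(n)) * h

errors = [abs(laplace_erf_sqrt(s) - 1/(s*math.sqrt(s + 1))) for s in (0.5, 1.0, 2.0)]
```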

43.2.2 Problem 2

Find $L\left[\mathrm{erf}\left(\dfrac{k}{\sqrt{t}}\right)\right]$ and show that $L^{-1}\left[\dfrac{e^{-2k\sqrt{s}}}{s}\right] = \mathrm{erfc}\left(\dfrac{k}{\sqrt{t}}\right)$.

Solution: By the definition of the Laplace transform we have
$$L\left[\mathrm{erf}\left(\frac{k}{\sqrt{t}}\right)\right] = \int_0^\infty e^{-st}\,\frac{2}{\sqrt{\pi}}\int_0^{k/\sqrt{t}} e^{-u^2}\,du\,dt.$$
Changing the order of integration we get
$$L\left[\mathrm{erf}\left(\frac{k}{\sqrt{t}}\right)\right] = \frac{2}{\sqrt{\pi}}\int_0^\infty e^{-u^2}\int_0^{k^2/u^2} e^{-st}\,dt\,du.$$
Evaluation of the inner integral leads to
$$L\left[\mathrm{erf}\left(\frac{k}{\sqrt{t}}\right)\right] = \frac{2}{\sqrt{\pi}\,s}\int_0^\infty e^{-u^2}\left(1 - e^{-sk^2/u^2}\right)du.$$
Using the value of the Gaussian integral we have
$$L\left[\mathrm{erf}\left(\frac{k}{\sqrt{t}}\right)\right] = \frac{2}{\sqrt{\pi}\,s}\left(\frac{\sqrt{\pi}}{2} - \int_0^\infty e^{-u^2 - sk^2/u^2}\,du\right). \qquad (43.1)$$
Let us assume
$$I(s) = \int_0^\infty e^{-u^2 - sk^2/u^2}\,du.$$
By differentiation under the integral sign,
$$\frac{dI}{ds} = \int_0^\infty e^{-u^2 - sk^2/u^2}\left(-\frac{k^2}{u^2}\right)du.$$


Substitution $\dfrac{\sqrt{s}\, k}{u} = x \ \Rightarrow\ -\dfrac{\sqrt{s}\, k}{u^2}\, du = dx$ leads to
$$\frac{dI}{ds} = -\frac{k}{\sqrt{s}} \int_0^{\infty} e^{-x^2 - s k^2/x^2}\, dx = -\frac{k}{\sqrt{s}}\, I$$

Solving the above differential equation we get

$$\ln I(s) = -2k\sqrt{s} + \ln c \ \Rightarrow\ I(s) = c\, e^{-2k\sqrt{s}}$$

Further note that

$$I(0) = \int_0^{\infty} e^{-u^2}\, du = \frac{\sqrt{\pi}}{2} \ \Rightarrow\ c = \frac{\sqrt{\pi}}{2}$$

Therefore, we get
$$I(s) = \frac{\sqrt{\pi}}{2}\, e^{-2k\sqrt{s}}$$
Substituting this value in equation (43.1), we obtain
$$L\left[\operatorname{erf}\left(\frac{k}{\sqrt{t}}\right)\right] = \frac{2}{\sqrt{\pi}\, s} \left(\frac{\sqrt{\pi}}{2} - \frac{\sqrt{\pi}}{2}\, e^{-2k\sqrt{s}}\right) = \frac{1 - e^{-2k\sqrt{s}}}{s}$$

Taking inverse Laplace transform on both sides we get

$$\operatorname{erf}\left(\frac{k}{\sqrt{t}}\right) = L^{-1}\left[\frac{1}{s}\right] - L^{-1}\left[\frac{e^{-2k\sqrt{s}}}{s}\right] = 1 - L^{-1}\left[\frac{e^{-2k\sqrt{s}}}{s}\right]$$

This leads to the desired result as

$$L^{-1}\left[\frac{e^{-2k\sqrt{s}}}{s}\right] = 1 - \operatorname{erf}\left(\frac{k}{\sqrt{t}}\right) = \operatorname{erfc}\left(\frac{k}{\sqrt{t}}\right)$$
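This transform pair can also be checked numerically. The illustrative sketch below (with arbitrarily chosen $k = 1$, $s = 1$; `laplace_num` is a name invented here) compares a truncated Simpson integration against $(1 - e^{-2k\sqrt{s}})/s$:

```python
import math

def laplace_num(f, s, T=30.0, n=30000):
    # Composite Simpson rule on [0, T]; start just above t = 0 to avoid k/sqrt(0)
    h = T / n
    total = f(1e-12) + math.exp(-s * T) * f(T)
    for i in range(1, n):
        t = i * h
        total += (4 if i % 2 else 2) * math.exp(-s * t) * f(t)
    return total * h / 3

k, s = 1.0, 1.0
numeric = laplace_num(lambda t: math.erf(k / math.sqrt(t)), s)
closed_form = (1.0 - math.exp(-2 * k * math.sqrt(s))) / s
assert abs(numeric - closed_form) < 1e-5
print(round(closed_form, 6))  # 0.864665
```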

43.3 Dirac-Delta Function

Often in applications we study a physical system by putting in a short pulse and then
seeing what the system does. The resulting behaviour is often called impulse response.
Let us see what we mean by a pulse. The simplest kind of a pulse is a simple rectangular
pulse defined by
$$\varphi_{a\epsilon}(t) = \begin{cases} 0 & \text{if } t < a, \\ 1/\epsilon & \text{if } a \le t < a + \epsilon, \\ 0 & \text{if } a + \epsilon \le t. \end{cases}$$


Let us take the Laplace transform of the square pulse,

$$L[\varphi_{a\epsilon}(t)] = \int_0^{\infty} e^{-st}\, \varphi_{a\epsilon}(t)\, dt$$

Substituting the value of the function we obtain

$$L[\varphi_{a\epsilon}(t)] = \frac{1}{\epsilon} \int_a^{a+\epsilon} e^{-st}\, dt$$

On integration we get
$$L[\varphi_{a\epsilon}(t)] = \frac{e^{-sa}}{s\epsilon} \left(1 - e^{-s\epsilon}\right)$$

We generally want $\epsilon$ to be very small; that is, we wish the pulse to be very short
and very tall. By letting $\epsilon$ go to zero we arrive at the concept of the Dirac delta function,
$\delta(t-a)$. Thus, the Dirac delta can be thought of as the limiting case of $\varphi_{a\epsilon}(t)$ as $\epsilon \to 0$:

$$\delta(t-a) = \lim_{\epsilon \to 0} \varphi_{a\epsilon}(t)$$

So δ(t) is a "function" with all its "mass" at the single point t = 0. In other words, the
Dirac delta function is defined as having the following properties:

(i) $\delta(t-a) = 0$ for all $t \neq a$

(ii) for any interval [c, d]
$$\int_c^d \delta(t-a)\, dt = \begin{cases} 1 & \text{if the interval } [c,d] \text{ contains } a, \text{ i.e. } c \le a \le d, \\ 0 & \text{otherwise.} \end{cases}$$

(iii) for any interval [c, d]
$$\int_c^d \delta(t-a)\, f(t)\, dt = \begin{cases} f(a) & \text{if the interval } [c,d] \text{ contains } a, \text{ i.e. } c \le a \le d, \\ 0 & \text{otherwise.} \end{cases}$$

Unfortunately there is no such function in the classical sense. You could informally think
that δ(t) is zero for $t \neq 0$ and somehow infinite at t = 0.
As we can integrate δ(t), let us compute its Laplace transform.
$$L[\delta(t-a)] = \int_0^{\infty} e^{-st}\, \delta(t-a)\, dt = e^{-as}$$

In particular,
L [δ(t)] = 1.
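Numerically, the exact pulse transform indeed approaches $e^{-as}$ as $\epsilon \to 0$; a short illustrative script (with $a = 1$, $s = 2$ chosen arbitrarily):

```python
import math

def pulse_transform(a, eps, s):
    # Exact transform of the rectangular pulse: e^{-sa} (1 - e^{-s*eps}) / (s*eps)
    return math.exp(-s * a) * (1 - math.exp(-s * eps)) / (s * eps)

a, s = 1.0, 2.0
for eps in [1.0, 0.1, 0.001, 1e-6]:
    print(eps, pulse_transform(a, eps, s))

# The values approach e^{-as} = e^{-2} ≈ 0.135335 as eps shrinks
assert abs(pulse_transform(a, 1e-6, s) - math.exp(-s * a)) < 1e-5
```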


Remark: Notice that the Laplace transform of δ(t − a) looks like the Laplace trans-
form of the derivative of the Heaviside function u(t − a), if we could differentiate the
Heaviside function. First notice
$$L[u(t-a)] = \frac{e^{-as}}{s}.$$
To obtain what the Laplace transform of the derivative would be, we multiply by $s$ to
obtain $e^{-as}$, which is the Laplace transform of $\delta(t-a)$. We see the same thing using
integration,
$$\int_0^t \delta(\tau - a)\, d\tau = u(t-a).$$
So in a certain sense
$$\text{``}\ \frac{d}{dt}\, u(t-a) = \delta(t-a)\ \text{''}$$
This line of reasoning allows us to talk about derivatives of functions with jump
discontinuities. We can think of the derivative of the Heaviside function u(t − a) as being
somehow infinite at a, which is precisely our intuitive understanding of the delta function.

43.3.1 Example
Compute $L^{-1}\left[\dfrac{s+1}{s}\right]$.
Solution: We write,
$$L^{-1}\left[\frac{s+1}{s}\right] = L^{-1}\left[1 + \frac{1}{s}\right] = L^{-1}[1] + L^{-1}\left[\frac{1}{s}\right] = \delta(t) + 1.$$

Suggested Readings

Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Dyke, P.P.G. (2001). An Introduction to Laplace Transforms and Fourier Series. Springer-
Verlag London Ltd.
Schiff, J.L. (1999). The Laplace Transform: Theory and Applications. Springer-Verlag,
New York Inc.



Module 2: Laplace Transform

Lesson 44

Laplace and Inverse Laplace Transform: Miscellaneous Examples

In this lesson we evaluate Laplace and inverse Laplace transforms of some useful functions.
Some important special functions include the Bessel functions and the Laguerre polynomials.
Additionally, some examples demonstrating the potential of the Laplace and inverse Laplace
transforms for evaluating special integrals will be presented.

44.1 Bessel’s Functions

The Bessel function of order n (of the first kind) is defined as

$$J_n(t) = \sum_{r=0}^{\infty} \frac{(-1)^r}{r!\,(n+r)!} \left(\frac{t}{2}\right)^{n+2r}.$$

This Bessel function is a solution of the Bessel equation of order n

$$y'' + \frac{1}{t}\, y' + \left(1 - \frac{n^2}{t^2}\right) y = 0$$

The Bessel functions of order 0 and 1 are given as
$$J_0(t) = 1 - \frac{t^2}{2^2} + \frac{t^4}{2^2 4^2} - \frac{t^6}{2^2 4^2 6^2} + \ldots$$
and
$$J_1(t) = \frac{t}{2} - \frac{t^3}{2^2\, 4} + \frac{t^5}{2^2 4^2\, 6} - \ldots$$
Note that $J_0'(t) = -J_1(t)$.

44.1.1 Example

Find the Laplace transform of J0 (t) and J1 (t).


Solution: Taking Laplace transform of $J_0(t)$ we have
$$L[J_0(t)] = L\left[1 - \frac{t^2}{2^2} + \frac{t^4}{2^2 4^2} - \frac{t^6}{2^2 4^2 6^2} + \ldots\right]$$

Using linearity of the Laplace transform we get

$$L[J_0(t)] = \frac{1}{s} - \frac{1}{2^2}\,\frac{2!}{s^3} + \frac{1}{2^2 4^2}\,\frac{4!}{s^5} - \frac{1}{2^2 4^2 6^2}\,\frac{6!}{s^7} + \ldots$$
This can be rewritten as
$$L[J_0(t)] = \frac{1}{s}\left(1 - \frac{1}{2}\,\frac{1}{s^2} + \frac{1\cdot 3}{2\cdot 4}\,\frac{1}{s^4} - \frac{1\cdot 3\cdot 5}{2\cdot 4\cdot 6}\,\frac{1}{s^6} + \ldots\right)$$
With the Binomial expansion we can write
$$L[J_0(t)] = \frac{1}{s}\left(1 + \frac{1}{s^2}\right)^{-1/2} = \frac{1}{\sqrt{1+s^2}}$$
Further note that $L[J_1(t)] = -L[J_0'(t)]$ and therefore using the derivative theorem we find
$$L[J_1(t)] = -\left(s L[J_0(t)] - J_0(0)\right) = 1 - s L[J_0(t)], \quad \text{since } J_0(0) = 1$$
Hence, we obtain
$$L[J_1(t)] = 1 - \frac{s}{\sqrt{1+s^2}}$$
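As an illustrative numerical cross-check (assuming nothing beyond the series above), one can sum the $J_0$ series in plain Python and compare a truncated Laplace integral against $1/\sqrt{1+s^2}$; the helper names are invented here:

```python
import math

def j0(t, terms=60):
    """Bessel J0 via its power series (adequate for moderate t)."""
    total, term = 0.0, 1.0
    for r in range(terms):
        total += term
        term *= -(t / 2) ** 2 / ((r + 1) ** 2)  # ratio of consecutive series terms
    return total

def laplace_num(f, s, T=15.0, n=30000):
    # Composite Simpson rule for the truncated Laplace integral
    h = T / n
    total = f(0.0) + math.exp(-s * T) * f(T)
    for i in range(1, n):
        t = i * h
        total += (4 if i % 2 else 2) * math.exp(-s * t) * f(t)
    return total * h / 3

s = 2.0
assert abs(laplace_num(j0, s) - 1 / math.sqrt(1 + s * s)) < 1e-4
```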

44.2 Laguerre Polynomials

Laguerre polynomials are defined as

$$L_n(t) = \frac{e^t}{n!}\, \frac{d^n}{dt^n}\left(e^{-t} t^n\right), \quad n = 0, 1, 2, \ldots$$
The Laguerre polynomials are solutions of Laguerre's differential equation
$$x\, \frac{d^2y}{dx^2} + (1-x)\, \frac{dy}{dx} + ny = 0, \quad n = 0, 1, 2, \ldots$$

44.2.1 Example

Show that $L[L_n(t)] = \dfrac{(s-1)^n}{s^{n+1}}$.
Solution: By definition of the Laplace transform we have
$$L[L_n(t)] = \int_0^{\infty} e^{-st}\, \frac{e^t}{n!}\, \frac{d^n}{dt^n}\left(e^{-t} t^n\right) dt = \frac{1}{n!} \int_0^{\infty} e^{-(s-1)t}\, \frac{d^n}{dt^n}\left(e^{-t} t^n\right) dt$$

Integrating by parts, we find

$$L[L_n(t)] = \frac{1}{n!}\left[e^{-(s-1)t}\, \frac{d^{n-1}}{dt^{n-1}}\left(e^{-t} t^n\right)\right]_0^{\infty} + \frac{s-1}{n!} \int_0^{\infty} e^{-(s-1)t}\, \frac{d^{n-1}}{dt^{n-1}}\left(e^{-t} t^n\right) dt$$

Note that each term in $\frac{d^{n-1}}{dt^{n-1}}\left(e^{-t} t^n\right)$ contains some positive power of $t$, so the bracketed term
vanishes as $t \to 0$, and $e^{-(s-1)t}$ vanishes as $t \to \infty$ provided $s > 1$. Thus, we have
$$L[L_n(t)] = \frac{s-1}{n!} \int_0^{\infty} e^{-(s-1)t}\, \frac{d^{n-1}}{dt^{n-1}}\left(e^{-t} t^n\right) dt$$
Repeated use of integration by parts leads to
$$L[L_n(t)] = \frac{(s-1)^n}{n!} \int_0^{\infty} e^{-(s-1)t}\, e^{-t} t^n\, dt = \frac{(s-1)^n}{n!}\, L[t^n]$$
Hence, we get
$$L[L_n(t)] = \frac{(s-1)^n}{n!}\, \frac{n!}{s^{n+1}} = \frac{(s-1)^n}{s^{n+1}}$$
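The result can be verified exactly for small $n$ by expanding $L_n(t) = \sum_{k=0}^{n} \binom{n}{k} \frac{(-1)^k}{k!}\, t^k$ and transforming term by term with $L[t^k] = k!/s^{k+1}$; the sketch below does this bookkeeping in exact rational arithmetic (function names are chosen here for illustration):

```python
from fractions import Fraction
from math import factorial

def laguerre_coeffs(n):
    """Coefficients of L_n(t): coefficient of t^k is C(n,k) (-1)^k / k!"""
    return [Fraction((-1) ** k * factorial(n),
                     factorial(k) * factorial(k) * factorial(n - k))
            for k in range(n + 1)]

def laplace_of_poly(coeffs, s):
    # L[t^k] = k!/s^(k+1), applied term by term
    return sum(c * factorial(k) / s ** (k + 1) for k, c in enumerate(coeffs))

s = Fraction(3)
for n in range(6):
    lhs = laplace_of_poly(laguerre_coeffs(n), s)
    rhs = (s - 1) ** n / s ** (n + 1)
    assert lhs == rhs
print("verified for n = 0..5 at s = 3")
```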

44.3 Miscellaneous Example Problems

44.3.1 Problem 1

Using the convolution theorem prove that

$$B(m,n) = \int_0^1 u^{m-1}(1-u)^{n-1}\, du = \frac{\Gamma(m)\Gamma(n)}{\Gamma(m+n)}, \qquad [m, n > 0].$$

Solution: Let $f(t) = t^{m-1}$, $g(t) = t^{n-1}$, then

$$(f * g)(t) = \int_0^t \tau^{m-1} (t-\tau)^{n-1}\, d\tau.$$

Substituting $\tau = ut$ so that $d\tau = t\, du$ we obtain

$$(f * g)(t) = \int_0^1 u^{m-1} t^{m-1}\, t^{n-1} (1-u)^{n-1}\, t\, du$$

We simplify the above expression to get

$$(f * g)(t) = t^{m+n-1} \int_0^1 u^{m-1}(1-u)^{n-1}\, du = t^{m+n-1} B(m,n)$$


Taking Laplace transform and using the convolution property, we find

$$L[t^{m+n-1} B(m,n)] = L[f(t)]\, L[g(t)] = L[t^{m-1}]\, L[t^{n-1}] = \frac{\Gamma(m)\Gamma(n)}{s^{m+n}}$$
Taking inverse Laplace transform,
$$t^{m+n-1} B(m,n) = \frac{\Gamma(m)\Gamma(n)}{\Gamma(m+n)}\, t^{m+n-1}$$
Hence, we get the desired result as
$$B(m,n) = \frac{\Gamma(m)\Gamma(n)}{\Gamma(m+n)}$$
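A quick numerical check of the Beta–Gamma relation (illustrative; the values $m = 2.5$, $n = 3$ are arbitrary):

```python
import math

def beta_numeric(m, n, steps=200000):
    # Midpoint rule for the defining integral of B(m, n) on [0, 1]
    h = 1.0 / steps
    return sum(((i + 0.5) * h) ** (m - 1) * (1 - (i + 0.5) * h) ** (n - 1)
               for i in range(steps)) * h

m, n = 2.5, 3.0
gamma_form = math.gamma(m) * math.gamma(n) / math.gamma(m + n)
assert abs(beta_numeric(m, n) - gamma_form) < 1e-6
print(gamma_form)
```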

44.3.2 Problem 2

Show that
$$\int_0^{\infty} \frac{\sin t}{t}\, dt = \frac{\pi}{2}.$$

Solution: We know
$$L[\sin t] = \frac{1}{s^2+1}$$
Therefore, we get
$$L\left[\frac{\sin t}{t}\right] = \int_s^{\infty} \frac{1}{s^2+1}\, ds = \frac{\pi}{2} - \tan^{-1} s.$$
Taking the limit as $s \to 0$ (see remarks below for details) we find
$$\int_0^{\infty} \frac{\sin t}{t}\, dt = \frac{\pi}{2} - \tan^{-1}(0) = \frac{\pi}{2}.$$
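The intermediate identity $L[\sin t/t] = \pi/2 - \tan^{-1} s$ can itself be checked numerically for several $s$ (an illustrative sketch with a truncated Simpson integration; `laplace_num` is a name invented here):

```python
import math

def laplace_num(f, s, T=60.0, n=60000):
    # Composite Simpson rule for the truncated Laplace integral
    h = T / n
    total = f(0.0) + math.exp(-s * T) * f(T)
    for i in range(1, n):
        t = i * h
        total += (4 if i % 2 else 2) * math.exp(-s * t) * f(t)
    return total * h / 3

sinc = lambda t: math.sin(t) / t if t else 1.0  # sin(t)/t with its limit 1 at t = 0

for s in [0.5, 1.0, 2.0]:
    assert abs(laplace_num(sinc, s) - (math.pi / 2 - math.atan(s))) < 1e-5
```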

Remark 1: Suppose that $f$ is piecewise continuous on $[0, \infty)$ and $L[f(t)] = F(s)$ exists
for all $s > 0$, and $\int_0^{\infty} f(t)\, dt$ converges. Then
$$\lim_{s \to 0^+} F(s) = \lim_{s \to 0^+} \int_0^{\infty} e^{-st} f(t)\, dt = \int_0^{\infty} f(t)\, dt.$$

Remark 2: If $f$ is a piecewise continuous function and $\int_0^{\infty} e^{-st} f(t)\, dt = F(s)$
converges uniformly for all $s \in E$, then $F(s)$ is a continuous function on $E$, that is, for
$s \to s_0 \in E$,
$$\lim_{s \to s_0} \int_0^{\infty} e^{-st} f(t)\, dt = F(s_0) = \int_0^{\infty} \lim_{s \to s_0} e^{-st} f(t)\, dt.$$

Remark 3: Recall that the integral $\int_0^{\infty} e^{-st} f(t)\, dt$ is said to converge uniformly for $s$
in some domain $\Omega$ if for any $\epsilon > 0$ there exists some number $\tau_0$ such that if $\tau \ge \tau_0$ then
$$\left| \int_{\tau}^{\infty} e^{-st} f(t)\, dt \right| < \epsilon$$
for all $s$ in $\Omega$.

44.3.3 Problem 3

Using Laplace transform, evaluate the following integral

$$\int_{-\infty}^{\infty} \frac{x \sin xt}{x^2 + a^2}\, dx$$

Solution: Let
$$f(t) = \int_0^{\infty} \frac{x \sin xt}{x^2 + a^2}\, dx$$
Taking Laplace transform, we get
$$F(s) = \int_0^{\infty} \frac{x}{x^2 + a^2}\cdot \frac{x}{x^2 + s^2}\, dx$$

Using the method of partial fractions we obtain

$$F(s) = \int_0^{\infty} \frac{1}{x^2 + s^2}\, dx - \frac{a^2}{s^2 - a^2} \int_0^{\infty} \left(\frac{1}{x^2 + a^2} - \frac{1}{x^2 + s^2}\right) dx$$

Evaluating the above integrals we have

$$F(s) = \frac{1}{s}\left[\tan^{-1}\frac{x}{s}\right]_0^{\infty} - \frac{a^2}{s^2 - a^2}\left[\frac{1}{a}\tan^{-1}\frac{x}{a} - \frac{1}{s}\tan^{-1}\frac{x}{s}\right]_0^{\infty}$$

On simplification we obtain
$$F(s) = \frac{1}{2}\, \frac{\pi}{s+a}$$
Taking inverse Laplace transform we find
$$f(t) = \frac{1}{2}\, \pi e^{-at}$$
Hence the value of the given integral
$$\int_{-\infty}^{\infty} \frac{x \sin xt}{x^2 + a^2}\, dx = 2 \int_0^{\infty} \frac{x \sin xt}{x^2 + a^2}\, dx = \pi e^{-at}.$$


Suggested Readings

Arfken, G.B., Weber, H.J. and Harris, F.E. (2012). Mathematical Methods for Physicists
(A comprehensive guide), Seventh Edition, Elsevier Academic Press, New Delhi.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Grewal, B.S. (2007). Higher Engineering Mathematics. Fourteenth Edition. Khanna
Publishers, New Delhi.
Dyke, P.P.G. (2001). An Introduction to Laplace Transforms and Fourier Series. Springer-
Verlag London Ltd.
Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics. Third Edi-
tion. Narosa Publishing House. New Delhi.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Kreyszig, E. (2006). Advanced Engineering Mathematics, Ninth Edition, Wiley India
Pvt. Ltd, New Delhi.
McQuarrie, D.A. (2009). Mathematical Methods for Scientist and Engineers. First Indian
Edition. Viva Books Pvt. Ltd. New Delhi.
Schiff, J.L. (1999). The Laplace Transform: Theory and Applications. Springer-Verlag,
New York Inc.
Raisinghania, M.D. (2009). Advanced Differential Equations. Twelfth Edition. S. Chand
& Company Ltd., New Delhi.



Module 2: Laplace Transform

Lesson 45

Application of Laplace Transform

In previous lessons we have evaluated Laplace transforms and inverse Laplace transform
of various functions that will be used in this and following lessons to solve ordinary
differential equations. In this lesson we mainly solve initial value problems.

45.1 Solving Differential/Integral Equations

We perform the following steps to obtain the solution of a differential equation.


(i) Take the Laplace transform on both sides of the given differential/integral equations.
(ii) Obtain the equation L[y] = F (s) from the transformed equation.
(iii) Apply the inverse transform to get the solution as y = L−1 [F (s)].
In the process we assume that the solution is continuous and is of exponential order so
that the Laplace transform exists. For linear differential equations with constant coefficients
one can easily prove, under certain assumptions, that the solution is continuous and is of
exponential order. But for ordinary differential equations with variable coefficients we
should be more careful. The whole procedure of solving differential equations will be
clear with the following examples.

45.2 Example Problems

45.2.1 Problem 1

Solve the following initial value problem


$$\frac{d^2y}{dt^2} + y = 1, \quad y(0) = y'(0) = 0.$$

Solution: Taking the Laplace transform on both sides, we get

$$L[y''] + L[y] = L[1]$$

Using derivative theorems we find

$$s^2 L[y] - s\, y(0) - y'(0) + L[y] = L[1]$$

We plug in the initial conditions now to obtain

$$L[y]\left(1 + s^2\right) = \frac{1}{s} \ \Rightarrow\ L[y] = \frac{1}{s(1+s^2)}$$
Using partial fractions we obtain
$$L[y] = \frac{1}{s} - \frac{s}{1+s^2}$$
Taking inverse Laplace transform we get
$$y(t) = L^{-1}\left[\frac{1}{s}\right] - L^{-1}\left[\frac{s}{1+s^2}\right] = 1 - \cos t$$

45.2.2 Problem 2

Solve the initial value problem


x′′ (t) + x(t) = cos(2t), x(0) = 0, x′ (0) = 1.

Solution: We will take the Laplace transform on both sides. By X(s) we will, as usual,
denote the Laplace transform of x(t).
$$L[x''(t) + x(t)] = L[\cos(2t)],$$
$$s^2 X(s) - s\, x(0) - x'(0) + X(s) = \frac{s}{s^2+4}.$$
Plugging in the initial conditions, we obtain
$$s^2 X(s) - 1 + X(s) = \frac{s}{s^2+4}$$
We now solve for X(s) as
$$X(s) = \frac{s}{(s^2+1)(s^2+4)} + \frac{1}{s^2+1}$$
We use partial fractions to write
$$X(s) = \frac{1}{3}\, \frac{s}{s^2+1} - \frac{1}{3}\, \frac{s}{s^2+4} + \frac{1}{s^2+1}$$
Now take the inverse Laplace transform to obtain
$$x(t) = \frac{1}{3}\cos(t) - \frac{1}{3}\cos(2t) + \sin(t).$$
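The solution can be checked by substituting it back into the equation; the sketch below (illustrative only) verifies the residual and the initial conditions with central finite differences:

```python
import math

def x(t):
    # Candidate solution from the worked example
    return math.cos(t) / 3 - math.cos(2 * t) / 3 + math.sin(t)

h = 1e-4
for t in [0.5, 1.0, 2.0, 3.0]:
    # Central second difference approximates x''(t)
    x2 = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
    assert abs(x2 + x(t) - math.cos(2 * t)) < 1e-5  # x'' + x = cos(2t)

# Initial conditions: x(0) = 0 and x'(0) = 1
assert abs(x(0.0)) < 1e-12
assert abs((x(h) - x(-h)) / (2 * h) - 1.0) < 1e-6
```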


45.2.3 Problem 3

Solve the following initial value problem


$$\frac{d^2y}{dt^2} - 6\frac{dy}{dt} + 9y = t^2 e^{3t}, \quad y(0) = 2, \ y'(0) = 6.$$

Solution: Taking the Laplace transform on both sides, we get

$$s^2 Y(s) - s\, y(0) - y'(0) - 6\left[s Y(s) - y(0)\right] + 9 Y(s) = \frac{2}{(s-3)^3}$$

Using initial values we obtain

$$s^2 Y(s) - 2s - 6 - 6\left[s Y(s) - 2\right] + 9 Y(s) = \frac{2}{(s-3)^3}$$

We solve for Y(s) to get

$$Y(s) = \frac{2}{(s-3)^5} + \frac{2}{s-3}$$
Taking inverse Laplace transform, we find
$$y(t) = \frac{2}{4!}\, t^4 e^{3t} + 2e^{3t} = \frac{1}{12}\, t^4 e^{3t} + 2e^{3t}.$$

45.2.4 Problem 4

Solve

$$y'' + y = C H(t-a), \quad y(0) = 0, \ y'(0) = 1.$$

Solution: Taking Laplace transform on both sides, we get

$$s^2 Y(s) - s\, y(0) - y'(0) + Y(s) = C \int_a^{\infty} e^{-st}\, dt$$

We substitute the given initial values to obtain

$$\left(s^2 + 1\right) Y(s) = 1 + C\, \frac{e^{-as}}{s}$$
Solve for Y(s) as
$$Y(s) = \frac{1}{s^2+1} + C\, \frac{e^{-as}}{s(s^2+1)}$$

Method of partial fractions leads to

$$y(t) = \sin t + C\, L^{-1}\left[\left(\frac{1}{s} - \frac{s}{s^2+1}\right) e^{-as}\right]$$

By inverse Laplace transform we obtain
$$y(t) = \sin t + C H(t-a)\left[1 - \cos(t-a)\right].$$

45.2.5 Problem 5

Solve the following initial value problem

$$x''(t) + x(t) = H(t-1) - H(t-5), \quad x(0) = 0, \ x'(0) = 0.$$

Solution: We transform the equation and plug in the initial conditions as before to obtain
$$s^2 X(s) + X(s) = \frac{e^{-s}}{s} - \frac{e^{-5s}}{s}.$$
We solve for X(s) to obtain

$$X(s) = \frac{e^{-s}}{s(s^2+1)} - \frac{e^{-5s}}{s(s^2+1)}.$$

We can easily show that
$$L^{-1}\left[\frac{1}{s(s^2+1)}\right] = 1 - \cos t.$$
In other words $L[1 - \cos t] = \dfrac{1}{s(s^2+1)}$. So using the shifting theorem we find

$$L^{-1}\left[\frac{e^{-s}}{s(s^2+1)}\right] = L^{-1}\left[e^{-s}\, L[1-\cos t]\right] = \left(1 - \cos(t-1)\right) H(t-1).$$

Similarly, we have

$$L^{-1}\left[\frac{e^{-5s}}{s(s^2+1)}\right] = L^{-1}\left[e^{-5s}\, L[1-\cos t]\right] = \left(1 - \cos(t-5)\right) H(t-5).$$

Hence, the solution is

$$x(t) = \left(1 - \cos(t-1)\right) H(t-1) - \left(1 - \cos(t-5)\right) H(t-5).$$


45.2.6 Problem 6

Solve the initial value problem


$$x'' + \omega_0^2 x = \delta(t), \quad x(0) = 0, \ x'(0) = 0.$$

Solution: We first apply the Laplace transform to the equation

$$s^2 X(s) + \omega_0^2 X(s) = 1$$

Solving for X(s) we find

$$X(s) = \frac{1}{s^2 + \omega_0^2}$$
Taking the inverse Laplace transform we obtain
$$x(t) = \frac{\sin(\omega_0 t)}{\omega_0}.$$

45.2.7 Problem 7

Solve the initial value problem


$$y'' + 2y' + 2y = \delta(t-3) H(t-3), \quad y(0) = 0, \ y'(0) = 0.$$

Solution: Recall the second shifting theorem

$$L[f(t-a) H(t-a)] = e^{-as} F(s)$$

We now apply the Laplace transform to the differential equation to get

$$s^2 Y(s) - s\, y(0) - y'(0) + 2\left[s Y(s) - y(0)\right] + 2 Y(s) = e^{-3s}$$

Plugging in the initial values we find

$$\left(s^2 + 2s + 2\right) Y(s) = e^{-3s}$$

Solving for Y(s) we get

$$Y(s) = \frac{e^{-3s}}{(s+1)^2 + 1}$$
Taking inverse Laplace transform with the use of the first and second shifting properties we
obtain
$$y(t) = L^{-1}\left[\frac{e^{-3s}}{(s+1)^2 + 1}\right] = H(t-3)\, e^{-(t-3)} \sin(t-3).$$


Suggested Readings

Arfken, G.B., Weber, H.J. and Harris, F.E. (2012). Mathematical Methods for Physicists
(A comprehensive guide), Seventh Edition, Elsevier Academic Press, New Delhi.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Grewal, B.S. (2007). Higher Engineering Mathematics. Fourteenth Edition. Khanna
Publishers, New Delhi.
Dyke, P.P.G. (2001). An Introduction to Laplace Transforms and Fourier Series. Springer-
Verlag London Ltd.
Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics. Third Edi-
tion. Narosa Publishing House. New Delhi.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Kreyszig, E. (2006). Advanced Engineering Mathematics, Ninth Edition, Wiley India
Pvt. Ltd, New Delhi.
McQuarrie, D.A. (2009). Mathematical Methods for Scientist and Engineers. First Indian
Edition. Viva Books Pvt. Ltd. New Delhi.
Schiff, J.L. (1999). The Laplace Transform: Theory and Applications. Springer-Verlag,
New York Inc.
Raisinghania, M.D. (2009). Advanced Differential Equations. Twelfth Edition. S. Chand
& Company Ltd., New Delhi.



Module 2: Laplace Transform

Lesson 46

Application of Laplace Transform (Cont.)

In this lesson we continue the application of the Laplace transform for solving initial and
boundary value problems. We will also look at differential equations with variable
coefficients and some boundary value problems.

46.1 Example Problems

46.1.1 Problem 1

Find the solution to

$$x'' + \omega_0^2 x = f(t), \quad x(0) = 0, \ x'(0) = 0,$$

for an arbitrary function f(t).

Solution: We first apply the Laplace transform to the equation. Denoting the transform
of x(t) by X(s) and the transform of f(t) by F(s) as usual, we have

$$s^2 X(s) + \omega_0^2 X(s) = F(s),$$

or in other words
$$X(s) = F(s)\, \frac{1}{s^2 + \omega_0^2}.$$
We know
$$L^{-1}\left[\frac{1}{s^2 + \omega_0^2}\right] = \frac{\sin(\omega_0 t)}{\omega_0}.$$
Therefore, using the convolution theorem, we find
$$x(t) = \int_0^t f(\tau)\, \frac{\sin\left(\omega_0 (t-\tau)\right)}{\omega_0}\, d\tau,$$

or if we reverse the order

$$x(t) = \int_0^t f(t-\tau)\, \frac{\sin(\omega_0 \tau)}{\omega_0}\, d\tau.$$


46.1.2 Problem 2

Find the general solution of


$$y'' + y = e^{-t}.$$

Solution: Taking Laplace transform on both sides

$$s^2 Y(s) - s\, y(0) - y'(0) + Y(s) = \frac{1}{s+1}$$
Denoting $y(0)$ by $y_0$ and $y'(0)$ by $y_1$ we find
$$(s^2+1)\, Y(s) - s y_0 - y_1 = \frac{1}{s+1}$$
Now we solve for Y(s) to obtain
$$Y(s) = \frac{1}{(s+1)(s^2+1)} + \frac{s y_0}{s^2+1} + \frac{y_1}{s^2+1}$$

Method of partial fractions leads to

$$Y(s) = \frac{1}{2}\left(\frac{1}{s+1} - \frac{s-1}{s^2+1}\right) + \frac{s y_0}{s^2+1} + \frac{y_1}{s^2+1}$$

Taking the inverse transform we get

$$y(t) = \frac{1}{2}\, e^{-t} - \frac{1}{2} \cos t + \frac{1}{2} \sin t + y_0 \cos t + y_1 \sin t$$
This can be rewritten as
$$y(t) = \frac{1}{2}\, e^{-t} + \left(y_0 - \frac{1}{2}\right) \cos t + \left(y_1 + \frac{1}{2}\right) \sin t$$

Note that $y_0$ and $y_1$ are arbitrary, so the general solution is given by

$$y(t) = \frac{1}{2}\, e^{-t} + C_0 \cos t + C_1 \sin t.$$

46.1.3 Problem 3

Solve the following boundary value problem


$$y'' + y = \cos t, \quad y(0) = 1, \ y\left(\frac{\pi}{2}\right) = 1.$$

Solution: Taking Laplace transform on both sides we get,

$$s^2 Y(s) - s\, y(0) - y'(0) + Y(s) = \frac{s}{s^2+1}$$
We solve for Y(s) to get
$$Y(s) = \frac{s}{(s^2+1)^2} + \frac{s}{s^2+1} + \frac{y'(0)}{s^2+1}$$
Taking inverse Laplace transform on both sides we get,
$$y(t) = \frac{1}{2}\, t \sin t + \cos t + y'(0) \sin t.$$
Given $y\left(\frac{\pi}{2}\right) = 1$, therefore
$$1 = \frac{1}{2}\cdot\frac{\pi}{2} + 0 + y'(0) \ \Rightarrow\ y'(0) = 1 - \frac{\pi}{4}.$$
Hence, we obtain the solution as
$$y(t) = \frac{1}{2}\, t \sin t + \cos t + \left(1 - \frac{\pi}{4}\right) \sin t.$$

46.1.4 Problem 4

Solve the following fourth order boundary value problem

$$\frac{d^4y}{dx^4} = -\delta(x-1),$$
with the boundary conditions

$$y(0) = 0, \quad y''(0) = 0, \quad y(2) = 0, \quad y''(2) = 0.$$

Solution: We apply the transform and get

$$s^4 Y(s) - s^3 y(0) - s^2 y'(0) - s\, y''(0) - y'''(0) = -e^{-s}.$$

We notice that $y(0) = 0$ and $y''(0) = 0$. Let us call $C_1 = y'(0)$ and $C_2 = y'''(0)$. We solve
for Y(s),
$$Y(s) = \frac{-e^{-s}}{s^4} + \frac{C_1}{s^2} + \frac{C_2}{s^4}.$$

We take the inverse Laplace transform utilizing the second shifting property to take the
inverse of the first term.
$$y(x) = \frac{-(x-1)^3}{6}\, u(x-1) + C_1 x + \frac{C_2}{6}\, x^3.$$
We still need to apply two of the endpoint conditions. As the conditions are at x = 2 we
can simply replace $u(x-1) = 1$ when taking the derivatives. Therefore,
$$0 = y(2) = \frac{-(2-1)^3}{6} + C_1 (2) + \frac{C_2}{6}\, 2^3 = \frac{-1}{6} + 2C_1 + \frac{4}{3}\, C_2,$$
and
$$0 = y''(2) = \frac{-3 \cdot 2 \cdot (2-1)}{6} + \frac{C_2}{6} \cdot 3 \cdot 2 \cdot 2 = -1 + 2C_2.$$
Hence $C_2 = \frac{1}{2}$ and solving for $C_1$ using the first equation we obtain $C_1 = \frac{-1}{4}$. Our solution
for the beam deflection is
$$y(x) = \frac{-(x-1)^3}{6}\, u(x-1) - \frac{x}{4} + \frac{x^3}{12}.$$

We now demonstrate the potential of Laplace transform for solving ordinary differential
equations with variable coefficients.

46.1.5 Problem 5

Solve the initial value problem


$$y'' + t y' - 2y = 4; \quad y(0) = -1, \ y'(0) = 0.$$

Solution: Taking Laplace transform on both sides we get,

$$s^2 Y(s) - s\, y(0) - y'(0) + \left(-\frac{d}{ds} L[y']\right) - 2 Y(s) = 4 L[1]$$
Using the given initial values and applying the derivative theorem once again, we get
$$s^2 Y(s) + s - \frac{d}{ds}\left(s Y(s) - y(0)\right) - 2 Y(s) = \frac{4}{s}$$
On simplification we find the following differential equation
$$\frac{dY}{ds} + \left(\frac{3}{s} - s\right) Y(s) = -\frac{4}{s^2} + 1$$

The integrating factor of the above differential equation is given as

$$e^{\int \left(\frac{3}{s} - s\right) ds} = s^3 e^{-\frac{s^2}{2}}$$

Hence, the solution of the differential equation can be written as

$$Y(s)\, s^3 e^{-\frac{s^2}{2}} = \int \left(-\frac{4}{s^2} + 1\right) s^3 e^{-\frac{s^2}{2}}\, ds + c = \int \left(s^3 - 4s\right) e^{-\frac{s^2}{2}}\, ds + c$$

On integration (by parts for the first term) we find
$$Y(s)\, s^3 e^{-\frac{s^2}{2}} = -\left(s^2 + 2\right) e^{-\frac{s^2}{2}} + 4 e^{-\frac{s^2}{2}} + c = \left(2 - s^2\right) e^{-\frac{s^2}{2}} + c$$

We can simplify the above expression to get

$$Y(s) = \frac{2}{s^3} - \frac{1}{s} + \frac{c}{s^3}\, e^{\frac{s^2}{2}}$$
Since $Y(s) \to 0$ as $s \to \infty$, $c$ must be zero. Putting $c = 0$ and taking inverse Laplace
transform we get the desired solution as

$$y(t) = t^2 - 1$$

46.1.6 Problem 6

Solve the initial value problem


$$t y'' + y' + t y = 0; \quad y(0) = 1, \ y'(0) = 0$$

Solution: Taking Laplace transform on both sides we get,

$$-\frac{d}{ds} L[y''] + L[y'] - \frac{d}{ds} L[y] = 0$$

Application of the derivative theorem leads to

$$-\frac{d}{ds}\left\{s^2 Y(s) - s\, y(0) - y'(0)\right\} + \left\{s Y(s) - y(0)\right\} - \frac{d}{ds} Y(s) = 0$$
Plugging in the initial values, we find
$$\left(s^2 + 1\right) Y'(s) + s\, Y(s) = 0$$
On integration we get
$$Y(s) = \frac{c}{\sqrt{1+s^2}}$$
Taking inverse Laplace transform we find
$$y(t) = c\, J_0(t)$$
Noting $y(0) = 1$, $J_0(0) = 1$, we find $c = 1$. Thus, the required solution is

$$y(t) = J_0(t).$$

Suggested Readings

Arfken, G.B., Weber, H.J. and Harris, F.E. (2012). Mathematical Methods for Physicists
(A comprehensive guide), Seventh Edition, Elsevier Academic Press, New Delhi.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Grewal, B.S. (2007). Higher Engineering Mathematics. Fourteenth Edition. Khanna
Publishers, New Delhi.
Dyke, P.P.G. (2001). An Introduction to Laplace Transforms and Fourier Series. Springer-
Verlag London Ltd.
Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics. Third Edi-
tion. Narosa Publishing House. New Delhi.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Kreyszig, E. (2006). Advanced Engineering Mathematics, Ninth Edition, Wiley India
Pvt. Ltd, New Delhi.
McQuarrie, D.A. (2009). Mathematical Methods for Scientist and Engineers. First Indian
Edition. Viva Books Pvt. Ltd. New Delhi.
Schiff, J.L. (1999). The Laplace Transform: Theory and Applications. Springer-Verlag,
New York Inc.
Raisinghania, M.D. (2009). Advanced Differential Equations. Twelfth Edition. S. Chand
& Company Ltd., New Delhi.



Module 2: Laplace Transform

Lesson 47

Application of Laplace Transform (Cont.)

In this lesson we discuss application of Laplace transform for solving integral equations,
integro-differential equations and simultaneous differential equations.

47.1 Integral Equation

An equation of the form

$$f(t) = g(t) + \int_0^t K(t,u)\, f(u)\, du,$$
or
$$g(t) = \int_0^t K(t,u)\, f(u)\, du$$
is known as an integral equation, where f(t) is the unknown function. When the kernel
K(t, u) is of the particular form K(t, u) = K(t − u), then the equations can be solved using
Laplace transforms. We apply the Laplace transform to the first equation to obtain

$$F(s) = G(s) + K(s)\, F(s),$$

where F(s), G(s), and K(s) are the Laplace transforms of f(t), g(t), and K(t) respectively.
Solving for F(s), we find
$$F(s) = \frac{G(s)}{1 - K(s)}.$$

To find f(t) we now need to find the inverse Laplace transform of F(s). Similar steps can
be followed to solve the integral equation of the second type mentioned above.

47.2 Example Problems

47.2.1 Problem 1

Solve the following integral equation


$$f(t) = e^{-t} + \int_0^t \sin(t-u)\, f(u)\, du.$$

Solution: Applying Laplace transform on both sides and using the convolution theorem we
get,
$$L[f(t)] = \frac{1}{s+1} + L[\sin t]\, L[f(t)]$$
On simplification, we obtain
$$L[f(t)]\left(1 - \frac{1}{s^2+1}\right) = \frac{1}{s+1}$$

This further implies

$$L[f(t)] = \frac{s^2+1}{s^2(s+1)}$$

Partial fractions leads to

$$L[f(t)] = \frac{2}{s+1} + \frac{1}{s^2} - \frac{1}{s}$$
Taking inverse Laplace transform we obtain the desired solution as
$$f(t) = 2e^{-t} + t - 1$$

47.2.2 Problem 2

Solve the integral equation

$$x(t) = e^{-t} + \int_0^t \sinh(t-\tau)\, x(\tau)\, d\tau.$$

Solution: We apply Laplace transform to obtain

$$X(s) = \frac{1}{s+1} + \frac{1}{s^2-1}\, X(s),$$
or
$$X(s) = \frac{\frac{1}{s+1}}{1 - \frac{1}{s^2-1}} = \frac{s-1}{s^2-2} = \frac{s}{s^2-2} - \frac{1}{s^2-2}.$$
It is not difficult to take the inverse Laplace transform to find
$$x(t) = \cosh(\sqrt{2}\, t) - \frac{1}{\sqrt{2}} \sinh(\sqrt{2}\, t).$$


47.2.3 Problem 3

Solve the following integral equation for x(t)

$$t^2 = \int_0^t e^{\tau}\, x(\tau)\, d\tau.$$

Solution: We apply the Laplace transform and the shifting property to get
$$\frac{2}{s^3} = \frac{1}{s}\, L[e^t x(t)] = \frac{1}{s}\, X(s-1),$$
where X(s) = L[x(t)]. Thus, we have
$$X(s-1) = \frac{2}{s^2} \quad \text{or} \quad X(s) = \frac{2}{(s+1)^2}.$$
We use the shifting property again to obtain
$$x(t) = 2 e^{-t}\, t.$$

47.3 Integro-Differential Equations

In addition to an integral term, integro-differential equations contain a derivative of the
unknown function. The ideas used for solving ordinary differential equations and integral
equations are now combined. We demonstrate the procedure with the following example.

47.3.1 Example

Solve
$$\frac{dy}{dt} + 4y + 13 \int_0^t y(u)\, du = 3e^{-2t} \sin 3t, \quad y(0) = 3.$$

Solution: Taking Laplace transform and using its appropriate properties we obtain,
$$s\, Y(s) - y(0) + 4 Y(s) + 13\, \frac{Y(s)}{s} = 3\, \frac{3}{(s+2)^2 + 9}.$$
Collecting terms of Y(s) we get
$$\frac{s^2 + 4s + 13}{s}\, Y(s) = 3 + \frac{9}{(s+2)^2 + 9}$$

On simplification we have
$$Y(s) = \frac{9s}{\left[(s+2)^2 + 9\right]^2} + \frac{3s}{(s+2)^2 + 9}$$

Taking inverse Laplace transform and using the shifting theorem we get

$$y(t) = e^{-2t}\, L^{-1}\left[\frac{9(s-2)}{\left(s^2+9\right)^2} + \frac{3(s-2)}{s^2+9}\right].$$

We now break the functions into the known forms as

$$y(t) = e^{-2t}\, L^{-1}\left[\frac{9s}{\left(s^2+9\right)^2} - \frac{18}{\left(s^2+9\right)^2} + \frac{3s}{s^2+9} - \frac{6}{s^2+9}\right]$$
$$\qquad = e^{-2t}\, L^{-1}\left[\frac{9s}{\left(s^2+9\right)^2} + \frac{s^2-9}{\left(s^2+9\right)^2} + \frac{3s}{s^2+9} - \frac{7}{s^2+9}\right]$$

Using the following basic inverse transforms

$$L^{-1}\left[\frac{a}{s^2+a^2}\right] = \sin at, \qquad L^{-1}\left[\frac{s}{s^2+a^2}\right] = \cos at$$
$$L^{-1}\left[\frac{2as}{\left(s^2+a^2\right)^2}\right] = t \sin at, \qquad L^{-1}\left[\frac{s^2-a^2}{\left(s^2+a^2\right)^2}\right] = t \cos at.$$
We find the desired solution as
$$y(t) = e^{-2t}\left(\frac{3}{2}\, t \sin 3t + t \cos 3t + 3 \cos 3t - \frac{7}{3} \sin 3t\right)$$

47.4 Simultaneous Differential Equations

At the end we show with the help of an example the application of Laplace transform for
solving simultaneous differential equations.

47.4.1 Example

Solve
$$\frac{dx}{dt} = 2x - 3y, \qquad \frac{dy}{dt} = y - 2x$$
subject to the initial conditions

$$x(0) = 8, \quad y(0) = 3.$$

Solution: Taking Laplace transform on both sides we get

$$s\, X(s) - x(0) = 2 X(s) - 3 Y(s)$$
and
$$s\, Y(s) - y(0) = Y(s) - 2 X(s)$$

Collecting terms of X(s) and Y(s) we have the following equations

$$(s-2)\, X(s) + 3\, Y(s) = 8 \qquad (47.1)$$
$$2\, X(s) + (s-1)\, Y(s) = 3 \qquad (47.2)$$

Eliminating Y(s) we obtain

$$\left[(s-1)(s-2) - 6\right] X(s) = 8(s-1) - 9$$

On simplification we receive
$$X(s) = \frac{8s - 17}{(s-4)(s+1)}$$
Partial fractions lead to
$$X(s) = \frac{5}{s+1} + \frac{3}{s-4},$$
Taking inverse Laplace transform on both sides we get

$$x(t) = 5e^{-t} + 3e^{4t}$$

Now we solve the above equations (47.1) and (47.2) for Y(s)

$$\left[6 - (s-1)(s-2)\right] Y(s) = 16 - 3(s-2)$$

On simplification we get
$$Y(s) = \frac{3s - 22}{s^2 - 3s - 4} = \frac{3s - 22}{(s-4)(s+1)}$$
Using the method of partial fractions we obtain
$$Y(s) = \frac{5}{s+1} - \frac{2}{s-4}$$
Taking inverse transform we get

$$y(t) = 5e^{-t} - 2e^{4t}.$$
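Both components can be substituted back into the system; an illustrative finite-difference check:

```python
import math

x = lambda t: 5 * math.exp(-t) + 3 * math.exp(4 * t)
y = lambda t: 5 * math.exp(-t) - 2 * math.exp(4 * t)

h = 1e-6
for t in [0.0, 0.3, 0.7]:
    dx = (x(t + h) - x(t - h)) / (2 * h)  # central first difference
    dy = (y(t + h) - y(t - h)) / (2 * h)
    assert abs(dx - (2 * x(t) - 3 * y(t))) < 1e-3  # dx/dt = 2x - 3y
    assert abs(dy - (y(t) - 2 * x(t))) < 1e-3      # dy/dt = y - 2x

assert x(0) == 8 and y(0) == 3  # initial conditions
```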


Suggested Readings

Arfken, G.B., Weber, H.J. and Harris, F.E. (2012). Mathematical Methods for Physicists
(A comprehensive guide), Seventh Edition, Elsevier Academic Press, New Delhi.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Grewal, B.S. (2007). Higher Engineering Mathematics. Fourteenth Edition. Khanna
Publishers, New Delhi.
Dyke, P.P.G. (2001). An Introduction to Laplace Transforms and Fourier Series. Springer-
Verlag London Ltd.
Jain, R.K. and Iyengar, S.R.K. (2002). Advanced Engineering Mathematics. Third Edi-
tion. Narosa Publishing House. New Delhi.
Jeffrey, A. (2002). Advanced Engineering Mathematics. Elsevier Academic Press. New
Delhi.
Kreyszig, E. (2006). Advanced Engineering Mathematics, Ninth Edition, Wiley India
Pvt. Ltd, New Delhi.
McQuarrie, D.A. (2009). Mathematical Methods for Scientist and Engineers. First Indian
Edition. Viva Books Pvt. Ltd. New Delhi.
Schiff, J.L. (1999). The Laplace Transform: Theory and Applications. Springer-Verlag,
New York Inc.
Raisinghania, M.D. (2009). Advanced Differential Equations. Twelfth Edition. S. Chand
& Company Ltd., New Delhi.



Module 2: Laplace Transform

Lesson 48

Application of Laplace Transform (Cont.)

In this last lesson of the module we demonstrate the potential of the Laplace transform for
solving partial differential equations. If one of the independent variables in a partial
differential equation ranges from 0 to ∞, then the Laplace transform may be used to
solve it.

48.1 Solving Partial Differential Equations

Working steps are more or less similar to those for solving ordinary differential equations.
We take the Laplace transform with respect to the variable that ranges from 0 to ∞. This
converts the partial differential equation into an ordinary differential equation. Then, the
transformed ordinary differential equation must be solved subject to the given conditions.
At the end we take the inverse Laplace transform, which yields the required solution.
Denoting the Laplace transform of the unknown variable u(x, t) with respect to t by U(x, s)
and using the definition of the Laplace transform we have

$$U(x,s) = L[u(x,t)] = \int_0^{\infty} e^{-st}\, u(x,t)\, dt$$

Then, for the first order derivatives, we have

$$\text{(i)} \quad L\left[\frac{\partial u}{\partial x}\right] = \int_0^{\infty} e^{-st}\, \frac{\partial u}{\partial x}\, dt = \frac{d}{dx} \int_0^{\infty} e^{-st}\, u(x,t)\, dt = \frac{dU}{dx}$$

$$\text{(ii)} \quad L\left[\frac{\partial u}{\partial t}\right] = \int_0^{\infty} e^{-st}\, \frac{\partial u}{\partial t}\, dt = \left[e^{-st} u\right]_0^{\infty} - \int_0^{\infty} u\, (-s)\, e^{-st}\, dt = -u(x,0) + s \int_0^{\infty} u\, e^{-st}\, dt$$
$$\Rightarrow\ L\left[\frac{\partial u}{\partial t}\right] = -u(x,0) + s\, U(x,s)$$


Similarly for the second order derivatives we find

$$\text{(iii)} \quad L\left[\frac{\partial^2 u}{\partial x^2}\right] = \frac{d^2 U}{dx^2}$$

$$\text{(iv)} \quad L\left[\frac{\partial^2 u}{\partial t^2}\right] = s^2 U(x,s) - s\, u(x,0) - \frac{\partial u}{\partial t}(x,0)$$

$$\text{(v)} \quad L\left[\frac{\partial^2 u}{\partial x \partial t}\right] = s\, \frac{d}{dx} U(x,s) - \frac{d}{dx}\, u(x,0)$$

Remark: In order to derive the above results, besides the assumptions of piecewise
continuity and exponential order of u(x, t) with respect to t, we have also used the following
assumptions: (i) differentiation under the integral sign is valid, and (ii) the limit of the
Laplace transform is the Laplace transform of the limit, i.e.,
$$\lim_{x \to x_0} L[u(x,t)] = L\left[\lim_{x \to x_0} u(x,t)\right].$$

48.2 Example Problems

48.2.1 Problem 1

Solve the following initial boundary value problem


∂u/∂x = ∂u/∂t,  u(x, 0) = x,  u(0, t) = t

Solution: Taking the Laplace transform with respect to t, we obtain

(d/dx)U(x, s) = sU(x, s) − u(x, 0)

Using the initial value u(x, 0) = x, we get

(d/dx)U(x, s) − sU(x, s) = −x

The integrating factor is

I.F. = e^{−∫ s dx} = e^{−sx}


Hence, the solution can be written as


U(x, s) e^{−sx} = −∫ x e^{−sx} dx + c

On integration by parts we find


U(x, s) e^{−sx} = −x · e^{−sx}/(−s) − ∫ e^{−sx}/s dx + c

Simplifying the above expression, we have

U(x, s) = x/s + 1/s² + c e^{sx}
Using the given boundary condition u(0, t) = t, i.e., U(0, s) = 1/s², we find

1/s² = 1/s² + c ⇒ c = 0

With this we obtain

U(x, s) = x/s + 1/s²
Taking inverse Laplace transform, we find the desired solution as

u(x, t) = x + t
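The result is easy to confirm by direct substitution; a sketch using sympy (an assumption,
the check can equally be done by hand):

```python
import sympy as sp

x, t = sp.symbols('x t')
u = x + t

# PDE: u_x = u_t, so their difference must vanish
print(sp.diff(u, x) - sp.diff(u, t))  # 0
# side conditions: u(x, 0) = x and u(0, t) = t
print(u.subs(t, 0), u.subs(x, 0))  # x t
```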

48.2.2 Problem 2

Solve the following partial differential equation


∂u/∂t + x ∂u/∂x = x,  x > 0, t > 0

with the following initial and boundary conditions

u(x, 0) = 0, x > 0 and u(0, t) = 0, t > 0

Solution: Taking Laplace transform with respect to t we have


sU(x, s) − u(x, 0) + x (d/dx)U(x, s) = x/s,  s > 0

Using the given initial value we find

(d/dx)U(x, s) + (s/x) U(x, s) = 1/s


Its integrating factor is x^s and therefore the solution can be written as

U(x, s) x^s = (1/s) ∫ x^s dx + c ⇒ U(x, s) = x/(s(s + 1)) + c/x^s

The boundary condition provides

u(0, t) = 0 ⇒ U(0, s) = 0 ⇒ c = 0

Thus we have

U(x, s) = x/(s(s + 1)) = x (1/s − 1/(s + 1))

Taking inverse Laplace transform we find the desired solution as

u(x, t) = x (1 − e^{−t})
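As before, the solution can be verified by substituting it back into the problem; a sympy
sketch (an assumption of this note):

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
u = x * (1 - sp.exp(-t))

# residual of the PDE u_t + x*u_x = x; it should vanish identically
residual = sp.diff(u, t) + x * sp.diff(u, x) - x
print(sp.simplify(residual))  # 0
# initial and boundary conditions u(x, 0) = 0 and u(0, t) = 0
print(u.subs(t, 0), u.subs(x, 0))  # 0 0
```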

48.2.3 Problem 3

Solve the following heat equation

∂u/∂t = ∂²u/∂x²,  x > 0, t > 0

with the initial and boundary conditions

u(x, 0) = 1,  u(0, t) = 0,  lim_{x→∞} u(x, t) = 1

Solution: Taking Laplace transform we find


sU(x, s) − u(x, 0) = (d²/dx²)U(x, s)

Using the given initial condition we have

(d²/dx²)U(x, s) − sU(x, s) = −1
Its solution is given as

sx

sx 1
U(x, s) = c1 e + c2 e− +
s
The given boundary conditions give
1
lim U(x, s) = ⇒ c1 = 0
x→∞ s


and
U(0, s) = 0 ⇒ c₁ + c₂ + 1/s = 0 ⇒ c₂ = −1/s

Hence, we have

U(x, s) = −(1/s) e^{−√s x} + 1/s

Taking inverse Laplace transform we find the desired solution as

u(x, t) = 1 − L⁻¹[(1/s) e^{−√s x}] = 1 − [1 − erf(x/(2√t))] = erf(x/(2√t))
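The error-function solution can be checked against the heat equation and all three side
conditions; here is a sketch assuming sympy (not part of the lesson):

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
u = sp.erf(x / (2 * sp.sqrt(t)))

# heat equation residual u_t - u_xx should vanish for x, t > 0
print(sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2)))  # 0
# boundary condition at x = 0 and behaviour as x -> oo
print(u.subs(x, 0))           # 0
print(sp.limit(u, x, sp.oo))  # 1
```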

48.2.4 Problem 4

Solve the one dimensional wave equation

∂²y/∂t² = a² ∂²y/∂x²,  x > 0, t > 0

with the initial conditions

y(x, 0) = 0,  y_t(x, 0) = 0

and boundary conditions

y(0, t) = sin ωt,  lim_{x→∞} y(x, t) = 0

Solution: Taking Laplace transform we get

s²Y(x, s) − s y(x, 0) − y_t(x, 0) − a² (d²/dx²)Y(x, s) = 0

With the given initial conditions we have the resulting differential equation

d²Y/dx² − (s²/a²)Y = 0

Its general solution is given as

Y(x, s) = c₁ e^{(s/a)x} + c₂ e^{−(s/a)x}

The given boundary conditions provide

lim_{x→∞} Y(x, s) = 0 ⇒ c₁ = 0,


and

Y(0, s) = ω/(s² + ω²) ⇒ c₂ = ω/(s² + ω²)

Thus we have

Y(x, s) = [ω/(s² + ω²)] e^{−(s/a)x}

Taking inverse Laplace transform we obtain

y(x, t) = sin[ω(t − x/a)] H(t − x/a).
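For t > x/a the Heaviside factor equals one, so the smooth part sin ω(t − x/a) must satisfy
the wave equation in that region. A sympy sketch (an assumption of this note) confirms this
and the boundary value at x = 0:

```python
import sympy as sp

x, t, a, w = sp.symbols('x t a omega', positive=True)
y = sp.sin(w * (t - x / a))  # the solution in the region t > x/a

# wave equation residual y_tt - a^2 * y_xx should vanish
print(sp.simplify(sp.diff(y, t, 2) - a**2 * sp.diff(y, x, 2)))  # 0
# boundary value at x = 0
print(y.subs(x, 0))  # sin(omega*t)
```

The Heaviside factor expresses that the boundary signal travels with speed a and has not
yet reached points with x > at.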

Suggested Readings

Raisinghania, M.D. (2005). Ordinary & Partial Differential Equation. Eighth Edition. S.
Chand & Company Ltd., New Delhi.
Debnath, L. and Bhatta, D. (2007). Integral Transforms and Their Applications. Second
Edition. Chapman and Hall/CRC (Taylor and Francis Group). New York.
Dyke, P.P.G. (2001). An Introduction to Laplace Transforms and Fourier Series. Springer-
Verlag London Ltd.
Kreyszig, E. (2006). Advanced Engineering Mathematics, Ninth Edition, Wiley India
Pvt. Ltd, New Delhi.
McQuarrie, D.A. (2009). Mathematical Methods for Scientist and Engineers. First Indian
Edition. Viva Books Pvt. Ltd. New Delhi.
Schiff, J.L. (1999). The Laplace Transform: Theory and Applications. Springer-Verlag,
New York Inc.



Engineering Mathematics III
Module 2: Laplace Transform

QUIZ

1. Laplace transform of the rectangular wave function W(t; a, b) = 1 for a < t < b and 0 otherwise is

(a) (1/s)(e^{as} − e^{bs})   (b) (1/s)(e^{−as} − e^{−bs})

(c) (1/s)(e^{−bs} − e^{−as})   (d) (1/s)(e^{bs} − e^{as})

2. Laplace transform of the function sin² pt is

(a) 4p²/[s(s² + 4p²)]   (b) 2p²/[s(s² + 2p²)]

(c) 2p²/[s(s² + 4p²)]   (d) 2p/[s(s² + 4p²)]

3. Laplace transform of the function sin at cos bt is

(a) a²(s² + a² − b²) / {[s² + (a + b)²][s² + (a − b)²]}

(b) a(s² + a² − b²) / {[s² + (a + b)²][s² + (a − b)²]}

(c) a(s² + a² − b²) / {2[s² + (a + b)²][s² + (a − b)²]}

(d) a(s² − a² + b²) / {[s² + (a + b)²][s² + (a − b)²]}

4. Which of the following functions is not piecewise continuous?

(a) f(t) = 1/(t − 2), t ≠ 2   (b) f(t) = 2t for t ≤ 1; 1 + t² for t > 1

(c) f(t) = (1 − e^t)/t for t ≠ 0; 0 for t = 0   (d) f(t) = t sin(1/t) for t ≠ 0; 0 for t = 0



5. Laplace transform of t² cos at is

(a) 2s/(s² + a²)² − 8a²s/(s² + a²)³   (b) 2s/(s² + a²)² − 8a²s/(s² + a²)²

(c) 2s/(s² + a²)² − 8as/(s² + a²)³   (d) 2s²/(s² + a²)² − 8a²s/(s² + a²)³

6. Laplace transform of the function f(t) = |sin t|, t ≥ 0 is

(a) (1 + e^{−2πs}) / [(1 − e^{−2πs})(s² + 1)]   (b) (1 − e^{−πs}) / [(1 − e^{−2πs})(s² + 1)]

(c) (1 + e^{−πs}) / [(1 − e^{−πs})(s² + 1)]   (d) (1 − e^{−πs}) / [(1 + e^{−πs})(s² + 1)]

7. Laplace transform of f (t )  t H (t  a ), t  0 is

eas
(a) 1  as  (b) eas
1  as 
s2
s2
e  as e  as
(c) 1  as  (d) 1  as 
s2 s2

8. Which of the following functions does not possess a Laplace transform?

(a) e^t erfc(√t)   (b) sin(e^{t²})

(c) e^{t²}   (d) t e^{t²} sin(e^{t²})

9. The value of L⁻¹[(2s + 3)/(s² + 4)] is

(a) cos 2t + (3/2) sin 2t   (b) 2 cos 2t + (3/2) sin t

(c) 2 cos 2t + (3/4) sin 2t   (d) 2 cos 2t + (3/2) sin 2t



10. The inverse Laplace transform of (2 − 5s)/[(s − 6)(s² + 11)] is

(a) (1/47)[−28e^{6t} + 28 cos √11 t − (67/√11) sin √11 t]

(b) (1/47)[−28e^{6t} + 28 cos √11 t + (67/√11) sin √11 t]

(c) (1/47)[28e^{6t} − 28 cos √11 t − (67/√11) sin √11 t]

(d) (1/47)[28e^{6t} − 28 cos √11 t + (67/√11) sin √11 t]

11. The value of L⁻¹[(3s + 5)/(4s² + 4s + 1)] is

(a) (1/8) e^{−t/2}(6 + 7t)   (b) (1/8) e^{t/2}(6 + 7t)

(c) (1/8) e^{−t/2}(6 − 7t)   (d) (1/8) e^{t/2}(6 − 7t)

e  s
12. The value of F ( s)  is
s2  2

(a)
1
2
H (t   ) cosh  2 t     (b)
1
2
H (t   )sin  
2 t   

(c)
1
2
H (t   ) cos  
2 t    (d)
1
2
H (t   )sinh  2 t    



13. The solution of the initial value problem d²y/dt² + dy/dt = 1 + H(t − 1); y(0) = 1, y′(0) = −1 is

(a) y = t − 1 + 2e^{−t} + H(t − 1)(t − 2 + e^{−(t−1)})   (b) y = t − 1 + 2e^{t} + H(t − 1)(t − 2 + e^{t−1})

(c) y = −t − 1 + 2e^{−t} + H(t − 1)(t − 2 + e^{−(t−1)})   (d) y = t + 1 + 2e^{−t} + H(t − 1)(t − 2 + e^{−(t−1)})

14. The solution of the initial value problem

d²y/dt² + y = f(t); y(0) = y′(0) = 0, where f(t) = cos t for 0 < t < π and 0 for t > π

is

(a) y = (1/2)[t cos t + H(t − π)(t − π) sin(t − π)]

(b) y = (1/2)[t sin t + H(t − π)(t − π) sin(t − π)]

(c) y = (1/2)[t sin t − H(t − π)(t − π) sin(t − π)]

(d) y = (1/2)[t cos t − H(t − π)(t − π) sin(t − π)]

15. The solution of the integral equation y(t) = sin t + 2∫₀ᵗ y(u) cos(t − u) du is

(a) y(t) = t e^{−t}   (b) y(t) = t² e^{t}

(c) y(t) = t e^{t}   (d) y(t) = t² e^{−t}



16. The solution of the integro-differential equation dy/dt + 3y(t) + 2∫₀ᵗ y(u) du = t; y(0) = 1 is

(a) y(t) = 1/2 + 2e^{−t} − (5/2)e^{−2t}   (b) y(t) = 1/2 − 2e^{t} + (5/2)e^{2t}

(c) y(t) = 1/2 + 2e^{−t} + (5/2)e^{−2t}   (d) y(t) = 1/2 − 2e^{−t} + (5/2)e^{−2t}

17. The solution of the wave equation

∂²y/∂t² = ∂²y/∂x²;  0 < x < 1, t > 0

with the following initial and boundary conditions

y(x, 0) = sin πx, 0 < x < 1,  y_t(x, 0) = 0, 0 < x < 1

y(0, t) = 0, t > 0,  y(1, t) = 0, t > 0

is

(a) y(x, t) = sin πx cos πt   (b) y(x, t) = sin 2πx cos πt

(c) y(x, t) = sin πx cos 2πt   (d) y(x, t) = sin 2πx cos 2πt


18. The solution of the integral equation F(t) = t − 2∫₀ᵗ cos(t − u) F(u) du is

(a) 2e^{t}(t + 1) − 2 + t   (b) 2e^{−t}(t − 1) − 2 + t

(c) 2e^{−t}(t + 1) − 2 + t   (d) 2e^{t}(t − 1) − 2 + t



19. The value of the integral ∫₀^∞ e^{−t} (sin t / t) dt is

(a) π/2   (b) π/4

(c) π/3   (d) π/6

20. The value of the integral ∫₀^∞ t e^{−3t} cos t dt is

(a) 1/25   (b) 2/25

(c) 3/25   (d) 4/25

Answers:

1. b 2. c
3. b 4. a
5. a 6. c
7. d 8. c
9. d 10. a
11. a 12. d
13. a 14. b
15. c 16. d
17. a 18. c
19. b 20. b
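Several of these answers can be spot-checked directly; for instance, questions 19 and 20
are Laplace transforms evaluated at a fixed s. A sketch assuming the sympy library (an
assumption, not part of the quiz):

```python
import sympy as sp

t = sp.symbols('t', positive=True)

# Question 19: integral of e^{-t} sin t / t over (0, oo)
I19 = sp.integrate(sp.exp(-t) * sp.sin(t) / t, (t, 0, sp.oo))
# Question 20: integral of t e^{-3t} cos t over (0, oo)
I20 = sp.integrate(t * sp.exp(-3 * t) * sp.cos(t), (t, 0, sp.oo))
print(sp.simplify(I19), sp.simplify(I20))  # pi/4 2/25
```

These agree with keys 19(b) and 20(b): the first is L[sin t / t] = tan⁻¹(1/s) at s = 1, the
second is L[t cos t] = (s² − 1)/(s² + 1)² at s = 3.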


