
Engineering Mathematics



Engineering Mathematics

Vol. II

R. L. Garg
Nishu Gupta

Delhi  Chennai
No part of this eBook may be used or reproduced in any manner whatsoever without the
publisher’s prior written consent.

Copyright © 2015 Published by Dorling Kindersley (India) Pvt. Ltd


This eBook may or may not include all assets that were part of the print version. The publisher
reserves the right to remove any material in this eBook at any time.

ISBN: 9789332536333

e-ISBN: 9789332542181

First Impression

Head Office: 7th Floor, Knowledge Boulevard, A-8(A), Sector 62, Noida 201 309, UP, India.
Registered Office: 11 Community Centre, Panchsheel Park, New Delhi 110 017, India.
Dedication
To My Wife
Late Smt. Shashi Kiran Garg
R. L. Garg

To My Parents
Smt. Adarsh Garg
and
Late Shri B.S.Garg
Nishu Gupta
Contents

Preface xiii
Acknowledgements xiv
About the Authors xv
Symbols, Basic Formulae and Useful Information xvii

1.1  Introduction — 1
1.2 Definition, Limit, Continuity and Differentiability of a Function of Complex Variable — 1
1.2.1  Definition  1
1.2.2  Limit of a Function  1
1.2.3  Continuity of a Function  4
1.2.4  Differentiability of a Function  4
1.3  Analytic Functions — 5
1.4  Cauchy–Riemann Equations — 5
1.4.1  Sufficient Conditions for a Function to be Analytic  6
1.4.2  Polar Form of Cauchy–Riemann Equations  7
1.5  Harmonic Functions — 21
1.5.1  Orthogonal System of Level Curves  22
1.5.2  Method to Find Conjugate Harmonic Function  23
1.5.3  Milne Thomson Method  23
1.6  Line Integral in the Complex Plane — 39
1.6.1  Continuous Curve or Arc  39
1.6.2  Smooth Curve or Smooth Arc  39
1.6.3  Piecewise Continuous Curve  39
1.6.4  Piecewise Smooth Curve  39
1.6.5  Contour  39
1.6.6  Line Integral  39
1.7  Cauchy Integral Theorem — 44
1.7.1  Simply Connected Domain  44
1.7.2  Multiply Connected Domain  44
1.7.3  Independence of Path  46
1.7.4  Integral Function  47
1.7.5  Fundamental Theorem of Integral Calculus  47
1.7.6 Extension of Cauchy–Goursat Theorem for Multiply Connected Domains  49

  1.8  Cauchy Integral Formula — 50


  1.8.1  Cauchy Integral Formula for Derivatives of Analytic Function  51
  1.8.2  Morera’s Theorem (Converse of Cauchy Integral Theorem)  53
  1.8.3  Cauchy Inequality  53
  1.8.4  Liouville’s Theorem  54
  1.8.5  Poisson’s Integral Formula  54
  1.9  Infinite Series of Complex Terms — 66
  1.9.1  Power Series  67
1.10  Taylor Series — 68
1.11  Laurent’s Series — 69
1.12  Zeros and Singularities of Complex Functions — 84
1.12.1  Zeros of an Analytic Function  84
1.12.2  Singularities of a Function  84
1.12.3  Method to Find Type of Isolated Singularity  85
1.13  Residue — 90
1.13.1  Residue at a Removable Singularity   90
1.13.2  Residue at a Simple Pole  90
1.13.3  Residue at Pole of Order m  91
1.13.4  Residue at an Isolated Essential Singularity  91
1.14  Evaluation of Contour Integrals using Residues — 97
1.15 Application of Cauchy Residue Theorem to Evaluate Real Integrals — 106
1.15.1  Integration Around the Unit Circle   106
1.15.2 Improper Real Integrals of the Form ∫_{−∞}^{∞} f(x) dx or ∫_{0}^{∞} f(x) dx where f(z) has no Real Singularity  114

1.15.3  Some Special Improper Real Integrals  121


1.15.4  Improper Integrals with Singularities on Real Axis  122
1.16  Conformal Mapping — 132
1.17  Some Standard Mappings — 136
1.17.1  Translation Mapping   136
1.17.2  Magnification/Contraction and Rotation  136
1.17.3  Linear Transformation  137
1.17.4  Inverse Transformation (Inversion and Reflection)  139
1.17.5  Square Transformation  144
1.17.6 Bilinear Transformation (Mobius Transformation or Fractional Transformation)  149
1.17.7  Cross Ratio of Four Points  149

2.1  Introduction — 165


2.2 Definition of Laplace Transform and Inverse Laplace Transform — 165
2.2.1  Piecewise Continuous Function  166
2.2.2  Function of Exponential Order  166

  2.3 Sufficient Conditions for Existence of Laplace Transform — 166


  2.4  Properties of Laplace Transforms — 167
  2.5  Laplace Transform of Elementary Functions — 168
  2.6  Laplace Transforms of Derivatives and Integrals — 170
  2.7  Differentiation and Integration of Laplace Transform — 172
  2.8  Evaluation of Real Integrals using Laplace Transform — 187
  2.9  Laplace Transform of Unit Step Function — 192
2.10 Laplace Transform of Unit Impulse Function (Dirac–Delta Function) — 199
2.11  Laplace Transform of Periodic Functions — 202
2.12  Inverse Laplace Transform — 209
2.13 Use of Partial Fractions to Find Inverse Laplace Transform — 210
2.14  Convolution Theorem — 221
2.15 Applications of Laplace Transform to Solve Linear Differential Equations,
Simultaneous Linear Differential Equations and Integral Equations — 227
2.16 Applications of Laplace Transform to Engineering Problems — 257
2.16.1  Problems Related to Electrical Circuits  257
2.16.2  Problem Related to Deflection of a Loaded Beam  265
2.16.3  Problems Related to Mechanical Systems  271

3.1  Introduction — 281


3.1.1  Periodic Functions  281
3.1.2  Trigonometric Series  282
3.1.3  Orthogonality of Trigonometric System  282
3.1.4  Fourier Series  283
3.1.5  Euler Formulae for Fourier Coefficients  283
3.1.6 Dirichlet’s Conditions for Convergence of Fourier Series of f(x) in [c, c + 2l ]  286
3.1.7  Fourier Series of Even and Odd Functions  286
3.2  Fourier Half-range Series — 323
3.2.1  Convergence of Half-range Cosine Series  324
3.2.2  Convergence of Half-range Sine Series  324
3.3  Other Formulae — 341
3.3.1  Parseval’s Formulae  341
3.3.2  Root Mean Square (R.M.S.) Value  343
3.3.3  Complex Form of Fourier Series  343
3.4  Harmonic Analysis — 352
3.5  Fourier Integrals and Fourier Transforms — 367
3.5.1  Fourier Series to Fourier Integrals  367
3.5.2  Fourier Cosine and Fourier Sine Integrals  368
3.5.3  Fourier Cosine and Sine Transforms  369
3.5.4  Complex Form of the Fourier Integral  370

3.5.5  Fourier Transform and Its Inverse  371


3.5.6  Spectrum  372
3.6  Properties of Fourier Transforms — 372
3.7  Convolution Theorem and Parseval’s Identities — 381
3.7.1  Convolution  381
3.7.2  Convolution Theorem (or Faltung Theorem) for Fourier Transforms  381
3.7.3  Parseval’s Identities (Energy Theorem)  382
3.7.4  Relation between Fourier and Laplace Transforms  384
3.8  Applications of Fourier Transforms — 416

4.1  Introduction — 433


4.2  Formation of Partial Differential Equations — 433
4.2.1  Elimination of Arbitrary Constants  433
4.2.2  Elimination of Arbitrary Functions  434
4.3  Definitions — 434
4.3.1  Linear and Non-linear Partial Differential Equations  434
4.3.2 Homogenous and Non-homogenous Partial Differential Equations  434
4.3.3  Partial Differential Equations Linear in Partial Derivatives  434
4.3.4 Linear Homogenous in their Order Partial Differential Equations  435
4.3.5  Solution of Partial Differential Equations  435
4.4  Direct Integration Method for Solutions — 444
4.5  Partial Differential Equations of the First Order — 448
4.5.1  Lagrange’s Method  448
4.5.2  Geometrical Interpretation of Lagrange’s Method  450
4.5.3  Charpit’s Method  460
4.5.4  Standard Form f (p,q) = 0  462
4.5.5  Standard Form f (z, p, q) = 0  462
4.5.6  Standard Form f (x,p) = f(y,q)  463
4.5.7  Clairaut's Equation  463
4.6 Linear in Second Order Partial Derivatives Differential Equations: Monge’s Method — 479
4.7 Partial Differential Equations Linear and Homogenous in Partial Derivatives with
Constant Coefficients — 486
4.7.1  Superposition or Linearity Principle  487
4.7.2  Rules for Finding the Complementary Function  487
4.7.3  Inverse Operator  489
4.7.4  Operator Methods for Finding Particular Integrals  490
4.8 Linear Partial Differential Equations with Constant Coefficients,
Non-homogeneous in Partial Derivatives — 507
4.8.1  Rules for Finding Complementary Function  507
4.8.2  Operator Methods for Finding Particular Integral  509

  4.9 Partial Differential Equations with Variable Coefficients Reducible to Partial Differential


Equations with Constant Coefficients — 510
4.10  Applications of Partial Differential Equations — 520
4.11 Vibrations of a Stretched String (One Dimensional Wave Equation) — 527
4.11.1  Solution of the Wave Equation  528
4.11.2  D’Alembert’s Method of Solving Wave Equation  530
4.12  One Dimensional Heat Flow — 546
4.12.1  Solution of the Heat Equation  547
4.13  Transmission Line Equations — 558
4.14  Two dimensional Heat Flow — 567
4.15  Solution of Two Dimensional Laplace equation — 569
4.16  Two Dimensional Wave Equation — 576
4.16.1  Solution of Two Dimensional Wave Equation  577

5.1  Introduction — 585


5.2  Errors in Numerical Computations — 585
5.3  Algebraic and Transcendental Equations — 586
5.3.1  Bisection Method or Bolzano Method or Halving Method  587
5.3.2  Direct Iteration Method  588
5.3.3  Secant and Regula-falsi Methods  594
5.3.4 Newton–Raphson Method (or Newton’s Iteration Method or Method of Tangents)  600
5.4  System of Linear Equations — 615
5.4.1  Gauss Elimination Method  615
5.4.2  Gauss–Jordan Method  616
5.4.3  Triangularisation Method  623
5.4.4  Doolittle Method  625
5.4.5  Crout’s Method  626
5.5 Iterative Methods for Solving System of Linear Equations — 634
5.5.1  Jacobi’s Iterative Method  634
5.5.2  Gauss–Seidel Iteration Method  635
5.6  Algebraic Eigenvalue Problems — 642
5.6.1  Power Method  642
5.6.2  Modification of Power Method  644
5.7  Linear Operators — 656
5.7.1  Forward Differences  661
5.7.2  Backward Differences  662
5.7.3  Central Differences  662
5.7.4  Factorial Polynomials  663
5.7.5  Error Propagation  665
5.7.6  Missing Entries in the Data  667

5.8  Interpolation — 683


  5.8.1  Lagrange’s Interpolation Formula  683
  5.8.2  Divided Differences  685
  5.8.3  Newton’s Divided Difference Interpolation Formula  687
  5.8.4  Newton’s Forward Difference Interpolation Formula  695
  5.8.5  Newton’s Backward Difference Interpolation Formula  696
  5.8.6  Gauss Forward Interpolation Formula  701
  5.8.7  Gauss Backward Interpolation Formula  702
  5.8.8  Stirling’s Formula  703
  5.8.9  Bessel’s Interpolation Formula  703
5.8.10  Laplace–Everett’s Interpolation Formula  704
5.9  Inverse Interpolation — 712
5.9.1  Lagrange’s Method for Inverse Interpolation  712
5.9.2 Inverse Interpolation using Newton’s Forward Interpolation Formula  713
5.9.3  Inverse Interpolation using Everett’s Formula  713

6.1  Introduction — 725


6.2  Numerical Differentiation — 725
6.2.1  Derivatives at Interior Points  725
6.2.2  Derivative at Grid Points  726
6.3  Numerical Quadrature — 739
6.3.1  General Quadrature Formula  739
6.3.2  Trapezoidal Rule  739
6.3.3  Simpson’s One-third Rule  740
6.3.4  Simpson's Three-eighth Rule  741
6.3.5  Weddle’s Rule  742
6.3.6  Cote’s Formulas  743
6.3.7  Error Term in Quadrature Formulae  745
6.4 Numerical Solutions of Ordinary Differential Equations — 756
6.4.1  Taylor-series Method  756
6.4.2  Picard’s Method  764
6.4.3  Euler’s Method  772
6.4.4  Improved Euler’s Method  773
6.4.5  Modified Euler’s Method  774
6.4.6  Runge’s Method  780
6.4.7  Runge–Kutta Method  781
6.4.8  Milne’s Method  791
Index 801
Preface

The main aim of this book is to provide the readers with a thorough knowledge of the fundamental concepts
and methods of applied mathematics used in all streams of engineering, technology and science in a very
lucid and unambiguous style. To aid students' understanding, all the concepts have been illustrated
with a sufficient number of solved examples, followed by a collection of graded practical exercises, their
respective answers and their direct engineering applications. A large number of solved questions from
different university examination papers form an integral part of this book.
The book is divided into two volumes: Volume I and II. Volume I contains nine chapters which cover
topics like differential calculus, integral calculus, infinite series, linear algebra: matrices, vector calculus,
ordinary differential equations, series solution and special functions. This is volume II of the book which
contains six chapters and comprises topics like functions of complex variables, Fourier series, Fourier
integrals, Fourier and Laplace transforms, partial differential equations, numerical methods in general and
linear algebra and numerical methods for differentiation, integration and ordinary differential equations.
The topic of complex numbers has been uploaded on the book's website, which also contains many solved
examples on each topic discussed in the book. Multiple-choice solved questions from various competitive
examinations, related to the topics covered in the chapters and their exercises, have also been uploaded
on the website.
Students of various institutes, engineering colleges and universities, as well as those preparing for
competitive examinations like GATE, NET and MAT, will find this book extremely useful.
Any suggestions for further improvement of this book will be gladly accepted.
Acknowledgments

We express our sincere gratitude to Nand Kishore Garg, Hon’ble Chairman, Maharaja Agrasen Technical
Education Society and M.L. Goyal, Director, Maharaja Agrasen Institute of Technology, Delhi, for their
constant inspiration, encouragement and providing necessary facilities.
We extend our innumerable thanks to our colleagues and friends for their support and suggestions.
No words can suffice to express our deep feelings for our family members for their moral support and
understanding. We are especially thankful to Veena Rastogi (mother-in-law of Nishu Gupta), Sachin Rastogi
(husband of Nishu Gupta), Mahesh Garg and Dinesh Garg (sons of R. L. Garg), Shalini Garg (daughter
of R. L. Garg) and Geetima Rai (daughter-in-law of R. L. Garg) for their constant cooperation, motivation
and encouragement. We would also like to appreciate the contribution of Nishu Gupta's daughter Sneha
Rastogi and R. L. Garg's grandson Sparsh Goel for their unending patience and understanding, without which
the completion of the book would have been impossible. It was a long and difficult journey for them.
Our special thanks to the reviewers for their valuable comments. We are thankful to the Pearson team,
especially to Anita Yadav and Vipin Kumar, for their effective cooperation during the various stages of the book.
Last but not least, we apologize to all those who have been with us over the course of the years and
whose names we have failed to mention.

R. L. Garg
Nishu Gupta
About the Authors

R. L. Garg
He received his Ph.D. in Mathematics in 1978 from Kurukshetra University, Kurukshetra, India. He is
a former Professor of Mathematics at the Department of Statistics and Operations Research, Kurukshetra
University, Kurukshetra, and has been teaching Mathematics and Statistics for the past 35 years. He has
been a member of the Kurukshetra University Court. His eight research papers are published in international
journals of repute. He has successfully supervised three Ph.D. students. After taking voluntary retirement
from Kurukshetra University in 2001, he ran his own coaching classes, successfully teaching Mathematics for
the IIT-JEE, AIEEE and CEET examinations for eight years. After that he joined Maharaja Agrasen Institute
of Technology, Delhi, as Professor of Mathematics. He has been on the examination panel of
various universities and state service board exams.

Nishu Gupta
She is presently working as an Assistant Professor at Maharaja Agrasen Institute of Technology, Delhi.
She is a gold medalist in M.Sc. (Mathematics) from C.C.S. University, Meerut (UP), India. She has been awarded
the University Medal and the Durga Trust Prize for obtaining the highest marks in M.Phil. (Mathematics) from the
University of Roorkee, India (presently IIT Roorkee). She qualified the National Eligibility Test (JRF-NET) conducted by
CSIR-UGC and was one of the toppers in the subject Mathematical Sciences. She also qualified the fellowship
test for the Ph.D. degree conducted by the National Board for Higher Mathematics, Department of Atomic
Energy. She has contributed to research in the field of Functional Analysis and was awarded her
Ph.D. degree by IIT Roorkee in 2006. She has been teaching students at the B.Sc., M.Sc. and B.Tech. levels
for the last 16 years.
Symbols, Basic Formulae and
Useful Information

1. Greek Letters
α alpha      ε epsilon    ζ zeta      λ lambda
β beta       ι iota       χ chi       Γ  capital gamma
γ gamma      μ mu         π pi        Σ  capital sigma
δ delta      ν nu         σ sigma     Δ  capital delta
θ theta      ω omega      τ tau       Φ  capital phi
φ phi        ξ xi         ρ rho       Ψ  capital psi
ψ psi        η eta        κ kappa     Ω  capital omega
2. Useful constants
e = 2.71828          logₑ 3 = 1.0986
π = 3.14159          logₑ 2 = 0.6931
√2 = 1.41421         logₑ 10 = 2.3026
√3 = 1.73205         log₁₀ e = 0.4343
3. Some Notations
∈  belongs to                    iff  if and only if
∉  does not belong to            N  set of natural numbers
⇒  implies                       I  set of all integers
⇔  implies and implied by        Q  set of all rational numbers
∀  for all                       R  set of all real numbers
∪  union                         C  set of all complex numbers
∩  intersection
4. Partial Fractions
A fraction of the form
$\dfrac{a_0 x^n + a_1 x^{n-1} + a_2 x^{n-2} + \cdots + a_n}{b_0 x^m + b_1 x^{m-1} + b_2 x^{m-2} + \cdots + b_m}$
in which n and m are positive integers and n < m is called a proper fraction. If n ≥ m, then on
dividing the numerator by the denominator, the remainder divided by the denominator will be a proper
fraction. Only a proper fraction is resolved into partial fractions. We first factorize the
denominator into real factors. These will be either linear or quadratic (we consider only such
cases), and some factors may be repeated. Then the proper fraction is resolved into a sum of partial
fractions such that
(a) To a non-repeated linear factor (x − a) in the denominator corresponds a partial fraction
of the form $\dfrac{A}{x-a}$.
(b) To a repeated linear factor $(x-b)^r$ in the denominator corresponds the sum of r partial
fractions of the form $\dfrac{A_1}{x-b} + \dfrac{A_2}{(x-b)^2} + \cdots + \dfrac{A_r}{(x-b)^r}$.
(c) To a non-repeated quadratic factor $(x^2 + cx + d)$ in the denominator corresponds a partial
fraction of the form $\dfrac{Bx + C}{x^2 + cx + d}$.
(d) To a repeated quadratic factor $(x^2 + ex + f)^k$ in the denominator corresponds the sum
of k partial fractions of the form $\dfrac{B_1 x + C_1}{x^2 + ex + f} + \dfrac{B_2 x + C_2}{(x^2 + ex + f)^2} + \cdots + \dfrac{B_k x + C_k}{(x^2 + ex + f)^k}$.
Then we are to find the constants $A, B, C, A_i, B_j, C_j$; $i = 1, 2, \ldots, r$, $j = 1, 2, \ldots, k$ used in
the above partial fractions from (a) to (d).
The constants A and $A_r$ in (a) and (b) are found by the suppression method explained below.
For finding A, put x = a everywhere in the given proper fraction except in the factor (x − a)
itself; the value obtained will be the value of A.
For finding $A_r$, put x = b everywhere in the given proper fraction except in the factor
$(x - b)^r$ itself; the value obtained will be the value of $A_r$.
The other constants are obtained by writing the given proper fraction as a sum of partial fractions
as explained above from (a) to (d) and then multiplying by the denominator of the given proper
fraction. Equate coefficients of various powers of x on both sides and solve the equations
obtained for the constants.
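As a supplement (not part of the original text), the decomposition rules (a) and (b) can be spot-checked with SymPy's apart(); the fraction below is our own illustrative example, not one taken from the book.

```python
# Check the partial-fraction rules above with SymPy's apart().
import sympy as sp

x = sp.symbols('x')
expr = (3*x + 5) / ((x - 1) * (x + 2)**2)   # proper fraction: cases (a) and (b)

print(sp.apart(expr, x))
# One ordering of the terms:
#   8/(9*(x - 1)) - 8/(9*(x + 2)) + 1/(3*(x + 2)**2)
```

The coefficients 8/9 and 1/3 are exactly those obtained by the suppression method described above (put x = 1 and x = −2 respectively).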
5. Synthetic Division
It is used to divide a polynomial by a linear polynomial with leading coefficient unity. For
this, we write the coefficients of the dividend polynomial (writing zero for missing terms).
Suppose we are to divide the polynomial $a_0 x^n + a_1 x^{n-1} + a_2 x^{n-2} + \cdots + a_{n-1}x + a_n$ by (x − a).
We write the coefficients $a_0, a_1, a_2, \ldots, a_{n-1}, a_n$ and a to the left as shown below. Then we
write $a_0$ below the line as it is. Multiply $a_0$ by a and write the result below $a_1$ above the line,
add it to $a_1$ and write the sum below the line. We continue this process as follows:

a)   a₀      a₁            a₂                   …    aₙ₋₁    aₙ
             a·a₀          a·a₁ + a²·a₀         …
     ─────────────────────────────────────────────────────────
     a₀      a₁ + a·a₀     a₂ + a·a₁ + a²·a₀    …

Then the last term below the line will be the remainder, and before it there will be the coefficients of the quotient
polynomial. For example, we divide $5x^5 + 3x^3 - 7x + 8$ by x + 2 as follows:

−2)  5      0       3       0       −7      8
            −10     20      −46     92      −170
     ────────────────────────────────────────────
     5      −10     23      −46     85      −162

∴ $5x^5 + 3x^3 - 7x + 8 = (x + 2)(5x^4 - 10x^3 + 23x^2 - 46x + 85) - 162$
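The scheme above is easy to mechanize. The following short Python routine is our own supplement (not from the text); it reproduces the worked example exactly.

```python
# Synthetic division: divide a polynomial (coefficients in decreasing powers) by (x - a).
def synthetic_division(coeffs, a):
    below_line = [coeffs[0]]            # first coefficient is copied down as it is
    for c in coeffs[1:]:
        below_line.append(c + a * below_line[-1])   # multiply by a, add, write below the line
    remainder = below_line.pop()        # last value below the line
    return below_line, remainder        # quotient coefficients, remainder

q, r = synthetic_division([5, 0, 3, 0, -7, 8], -2)   # 5x^5 + 3x^3 - 7x + 8 divided by (x + 2)
print(q, r)   # [5, -10, 23, -46, 85] -162
```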
6. Trigonometric formulae
  (i)  Values of T-ratios (trigonometrical ratios) for some angles are

θ       0     π/12                   π/6      π/4      π/3      5π/12                  π/2
sin θ   0     (√3−1)/(2√2)           1/2      1/√2     √3/2     (√3+1)/(2√2)           1
cos θ   1     (√3+1)/(2√2)           √3/2     1/√2     1/2      (√3−1)/(2√2)           0
tan θ   0     (√3−1)/(√3+1) = 2−√3   1/√3     1        √3       (√3+1)/(√3−1) = 2+√3   ∞

Note that $\tan\left(\dfrac{\pi}{2} - 0\right) = \infty$, $\tan\left(\dfrac{\pi}{2} + 0\right) = -\infty$, $\sin\dfrac{\pi}{10} = \dfrac{\sqrt{5}-1}{4}$, $\cos\dfrac{\pi}{5} = \dfrac{\sqrt{5}+1}{4}$,
$\sin(n\pi + \theta) = (-1)^n \sin\theta$, $\cos(n\pi + \theta) = (-1)^n \cos\theta$.

 (ii)  Signs of T-ratios


All T-ratios are positive in first quadrant
sin θ and cosec θ are positive when θ is in second quadrant
tan θ and cot θ are positive when θ is in third quadrant
cos θ and sec θ are positive when θ is in fourth quadrant
T-ratios not mentioned are negative in that quadrant.
(iii) Formulae
(a)  $\sin(A \pm B) = \sin A \cos B \pm \cos A \sin B$
(b)  $\cos(A \pm B) = \cos A \cos B \mp \sin A \sin B$
(c)  $\tan(A \pm B) = \dfrac{\tan A \pm \tan B}{1 \mp \tan A \tan B}$
(d)  $\sin 2A = 2\sin A \cos A = \dfrac{2\tan A}{1 + \tan^2 A}$
(e)  $\cos 2A = \cos^2 A - \sin^2 A = 2\cos^2 A - 1 = 1 - 2\sin^2 A = \dfrac{1 - \tan^2 A}{1 + \tan^2 A}$
(f)  $\tan 2A = \dfrac{2\tan A}{1 - \tan^2 A}$
(g)  $\sin 3A = 3\sin A - 4\sin^3 A$
(h)  $\cos 3A = 4\cos^3 A - 3\cos A$
(i)  $\tan 3A = \dfrac{3\tan A - \tan^3 A}{1 - 3\tan^2 A}$
(j)  $\sin C + \sin D = 2\sin\dfrac{C+D}{2}\cos\dfrac{C-D}{2}$
(k)  $\sin C - \sin D = 2\cos\dfrac{C+D}{2}\sin\dfrac{C-D}{2}$
(l)  $\cos C + \cos D = 2\cos\dfrac{C+D}{2}\cos\dfrac{C-D}{2}$
(m)  $\cos C - \cos D = 2\sin\dfrac{C+D}{2}\sin\dfrac{D-C}{2} = -2\sin\dfrac{C+D}{2}\sin\dfrac{C-D}{2}$
(n)  $2\sin A \sin B = \cos(A - B) - \cos(A + B)$
(o)  $2\sin A \cos B = \sin(A + B) + \sin(A - B)$
(p)  $2\cos A \sin B = \sin(A + B) - \sin(A - B)$
(q)  $2\cos A \cos B = \cos(A + B) + \cos(A - B)$
7. Differentiation
(a)  $\dfrac{d}{dx} x^n = n x^{n-1}$
(b)  $\dfrac{d}{dx} (ax + b)^n = na(ax + b)^{n-1}$
(c)  $\dfrac{d}{dx} (f(x))^n = n(f(x))^{n-1} f'(x)$
(d)  $\dfrac{d}{dx} a^x = a^x \log a; \ a > 0$
(e)  $\dfrac{d}{dx} e^x = e^x$
(f)  $\dfrac{d}{dx} \log_e x = \dfrac{1}{x}$
(g)  $\dfrac{d}{dx} (f(x))^{g(x)} = g(x)(f(x))^{g(x)-1} f'(x) + (f(x))^{g(x)} \log f(x) \cdot g'(x)$
     This formula is remembered as the sum of the derivatives obtained by considering
     [g(x) as constant, f(x) variable] and [f(x) as constant, g(x) variable].
(h)  $\dfrac{d}{dx} \sin x = \cos x$
(i)  $\dfrac{d}{dx} \cos x = -\sin x$
(j)  $\dfrac{d}{dx} \tan x = \sec^2 x$
(k)  $\dfrac{d}{dx} \operatorname{cosec} x = -\operatorname{cosec} x \cot x$
(l)  $\dfrac{d}{dx} \sec x = \sec x \tan x$
(m)  $\dfrac{d}{dx} \cot x = -\operatorname{cosec}^2 x$
(n)  $\dfrac{d}{dx} \sin^{-1} x = \dfrac{1}{\sqrt{1 - x^2}}$
(o)  $\dfrac{d}{dx} \cos^{-1} x = -\dfrac{1}{\sqrt{1 - x^2}}$
(p)  $\dfrac{d}{dx} \tan^{-1} x = \dfrac{1}{1 + x^2}$
(q)  $\dfrac{d}{dx} \operatorname{cosec}^{-1} x = -\dfrac{1}{x\sqrt{x^2 - 1}}$
(r)  $\dfrac{d}{dx} \sec^{-1} x = \dfrac{1}{x\sqrt{x^2 - 1}}$
(s)  $\dfrac{d}{dx} \cot^{-1} x = -\dfrac{1}{1 + x^2}$
(t)  $\dfrac{d}{dx} \sinh x = \cosh x$
(u)  $\dfrac{d}{dx} \cosh x = \sinh x$
8. Integration
In all the formulae below, an arbitrary constant of integration should also be added.
(a)  $\int x^n\,dx = \dfrac{x^{n+1}}{n+1}; \ n \ne -1$
(b)  $\int \dfrac{1}{x}\,dx = \log x$
(c)  $\int (ax+b)^n\,dx = \dfrac{(ax+b)^{n+1}}{(n+1)a}; \ n \ne -1$
(d)  $\int \dfrac{1}{ax+b}\,dx = \dfrac{1}{a}\log|ax+b|$
(e)  $\int e^x\,dx = e^x$
(f)  $\int a^x\,dx = \dfrac{a^x}{\log a}; \ a > 0, a \ne 1$
(g)  $\int \sin x\,dx = -\cos x$
(h)  $\int \cos x\,dx = \sin x$
(i)  $\int \tan x\,dx = \log \sec x$
(j)  $\int \cot x\,dx = \log \sin x$
(k)  $\int \sec x\,dx = \log|\sec x + \tan x| = \log \tan\left(\dfrac{\pi}{4} + \dfrac{x}{2}\right)$
(l)  $\int \operatorname{cosec} x\,dx = \log|\operatorname{cosec} x - \cot x| = \log \tan \dfrac{x}{2}$
(m)  $\int \sec^2 x\,dx = \tan x$
(n)  $\int \operatorname{cosec}^2 x\,dx = -\cot x$
(o)  $\int \dfrac{dx}{a^2 + x^2} = \dfrac{1}{a}\tan^{-1}\dfrac{x}{a}$
(p)  $\int \dfrac{dx}{a^2 - x^2} = \dfrac{1}{2a}\log\dfrac{a+x}{a-x}$
(q)  $\int \dfrac{dx}{x^2 - a^2} = \dfrac{1}{2a}\log\dfrac{x-a}{x+a}$
(r)  $\int \dfrac{dx}{\sqrt{a^2 - x^2}} = \sin^{-1}\dfrac{x}{a}$
(s)  $\int \dfrac{dx}{\sqrt{a^2 + x^2}} = \log\left|x + \sqrt{a^2 + x^2}\right|$  or  $\int \dfrac{dx}{\sqrt{a^2 + x^2}} = \sinh^{-1}\dfrac{x}{a}$
(t)  $\int \dfrac{dx}{\sqrt{x^2 - a^2}} = \log\left|x + \sqrt{x^2 - a^2}\right|$  or  $\int \dfrac{dx}{\sqrt{x^2 - a^2}} = \cosh^{-1}\dfrac{x}{a}$
(u)  $\int \sqrt{a^2 + x^2}\,dx = \dfrac{x\sqrt{a^2 + x^2}}{2} + \dfrac{a^2}{2}\sinh^{-1}\dfrac{x}{a}$
(v)  $\int \sqrt{x^2 - a^2}\,dx = \dfrac{x\sqrt{x^2 - a^2}}{2} - \dfrac{a^2}{2}\cosh^{-1}\dfrac{x}{a}$
(w)  $\int \sqrt{a^2 - x^2}\,dx = \dfrac{x\sqrt{a^2 - x^2}}{2} + \dfrac{a^2}{2}\sin^{-1}\dfrac{x}{a}$
(x)  $\int e^{ax}\sin(bx+c)\,dx = \dfrac{e^{ax}}{a^2 + b^2}\left[a\sin(bx+c) - b\cos(bx+c)\right]$
(y)  $\int e^{ax}\cos(bx+c)\,dx = \dfrac{e^{ax}}{a^2 + b^2}\left[a\cos(bx+c) + b\sin(bx+c)\right]$
     The terms within the brackets in the above two formulae can be remembered as
     a·(given trigonometric function) − (derivative of the given trigonometric function).
(z)  Integration by parts
     The generalized formula for integration by parts is
     $\int u(x)\,v(x)\,dx = u(x)v_1(x) - u'(x)v_2(x) + u''(x)v_3(x) - \cdots + (-1)^{n-1}u^{(n-1)}(x)v_n(x) + (-1)^n \int u^{(n)}(x)\,v_n(x)\,dx$
     where $v_1(x) = \int v(x)\,dx, \ v_2(x) = \int v_1(x)\,dx, \ \ldots, \ v_n(x) = \int v_{n-1}(x)\,dx$.
     If u(x) is a polynomial of degree (n − 1) in x, then the last term will be zero.
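Formulae such as (x) above are easy to spot-check symbolically. The following SymPy sketch (our own supplement, not part of the text) differentiates the stated antiderivative of e^{ax} sin(bx + c) and confirms that the original integrand is recovered.

```python
# Verify formula (x): d/dx of the stated antiderivative equals e^(a x) sin(b x + c).
import sympy as sp

x, a, b, c = sp.symbols('x a b c', positive=True)
F = sp.exp(a*x)/(a**2 + b**2) * (a*sp.sin(b*x + c) - b*sp.cos(b*x + c))
print(sp.simplify(sp.diff(F, x) - sp.exp(a*x)*sp.sin(b*x + c)))   # 0
```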


1  Functions of Complex Variables
1.1 Introduction
In this chapter, we study the concepts of limit, continuity and differentiability of a function of a
complex variable, define an analytic function and prove that it possesses derivatives of all orders,
a property which does not hold for functions of a real variable. Integration of functions of a complex variable is
very helpful in science and engineering. In the case of the real integral $\int_a^b f(x)\,dx$, the path of
integration is the real line from x = a to x = b, but in the case of a complex variable, $\int_a^b f(z)\,dz$
depends upon the path taken by the variable z if f(z) is not analytic, and does not depend on the path if f(z) is
analytic; in that case the fundamental theorem of integral calculus holds. Many definite integrals
and improper integrals of real variables that are difficult or impossible to evaluate by the methods of
real integration are evaluated with the help of complex integration using the Cauchy residue theorem.
Finally, conformal and bilinear transformations are considered.

1.2 Definition, Limit, Continuity and Differentiability of a


function of complex variable
1.2.1 Definition
Function of a complex variable is defined in the similar way as the function of a real variable. A
rule f which assigns to each z = x + iy ∈ A, a complex number w = u (x, y) + iv (x, y) ∈ B, is called
a function from A to B. A is called domain of f, B is called codomain of f and w is called image
of z. The set {f(z): z ∈ A} is called the range of f. If f assigns to each z a unique w, then f is a single-valued
function, and if f assigns to some z more than one w, then f is a multiple-valued function.
Trigonometric and hyperbolic functions are single-valued functions, whereas square root and
logarithmic functions are multi-valued functions. If we take principal value of multiple-valued
function then it will become single-valued function.

1.2.2  Limit of a Function


Let f (z) be a single-valued function defined on set S. Let z0 be a complex number which may or
may not be a member of S. If corresponding to e  > 0, however small, there exists d  > 0 such that
$|f(z) - l| < \varepsilon$  for all  $0 < |z - z_0| < \delta$
for some complex number l, and the set $0 < |z - z_0| < \delta$ is contained in S, then if l is finite, it is called
the limit of f(z) as z → z₀ and we write
$\lim_{z \to z_0} f(z) = l$


Theorem 1.1  If $\lim_{z \to z_0} f(z)$ exists, then it is unique.

Proof: If possible, let
$\lim_{z \to z_0} f(z) = l_1$ and $\lim_{z \to z_0} f(z) = l_2$, where $l_1 \ne l_2$.
Then, by definition, corresponding to $\varepsilon > 0$ there exist $\delta_1, \delta_2 > 0$ such that
$|f(z) - l_1| < \dfrac{\varepsilon}{2}; \quad 0 < |z - z_0| < \delta_1$
$|f(z) - l_2| < \dfrac{\varepsilon}{2}; \quad 0 < |z - z_0| < \delta_2$
Let $\delta = \min(\delta_1, \delta_2)$
∴  $|f(z) - l_1| < \dfrac{\varepsilon}{2}; \quad 0 < |z - z_0| < \delta$  (1.1)
    $|f(z) - l_2| < \dfrac{\varepsilon}{2}; \quad 0 < |z - z_0| < \delta$  (1.2)
Now, $|l_1 - l_2| = |(f(z) - l_2) - (f(z) - l_1)| \le |f(z) - l_2| + |f(z) - l_1|$
$< \dfrac{\varepsilon}{2} + \dfrac{\varepsilon}{2} = \varepsilon$  for $0 < |z - z_0| < \delta$, by (1.1) and (1.2).
But $\varepsilon$ is arbitrary, so $|l_1 - l_2| = 0 \Rightarrow l_1 = l_2$.

Remark 1.1: In the definition given in 1.2.2, z → z₀ along any path. If along two different paths the
limits are different, then from Theorem 1.1 the limit does not exist. For example, consider
$\lim_{z \to 0} \dfrac{z}{|z|}$.
Along the path of the positive real axis,
$\lim_{z \to 0} \dfrac{z}{|z|} = \lim_{x \to 0^+} \dfrac{x}{|x|} = \lim_{x \to 0^+} \dfrac{x}{x} = 1$
and along the path of the negative real axis,
$\lim_{z \to 0} \dfrac{z}{|z|} = \lim_{x \to 0^-} \dfrac{x}{|x|} = \lim_{x \to 0^-} \dfrac{x}{-x} = -1$
Both are different. Therefore, $\lim_{z \to 0} \dfrac{z}{|z|}$ does not exist.
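This path dependence is easy to observe numerically. The short Python script below is our own supplement (not from the text); it evaluates z/|z| along the two paths used above.

```python
# z/|z| approaches +1 along the positive real axis and -1 along the negative
# real axis, so the limit as z -> 0 cannot exist.
for t in (0.1, 0.01, 0.001):
    z_pos = complex(t, 0.0)      # approach 0 along the positive real axis
    z_neg = complex(-t, 0.0)     # approach 0 along the negative real axis
    print(z_pos / abs(z_pos), z_neg / abs(z_neg))
    # prints (1+0j) and (-1+0j) for every t
```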

Theorem 1.2  Let $f(z) = u(x, y) + iv(x, y)$; $z = x + iy$, $z_0 = x_0 + iy_0$.
Then $\lim_{z \to z_0} f(z) = u_0 + iv_0$ iff $\lim_{\substack{x \to x_0 \\ y \to y_0}} u(x, y) = u_0$ and $\lim_{\substack{x \to x_0 \\ y \to y_0}} v(x, y) = v_0$.

Proof: Let $\lim_{z \to z_0} f(z) = u_0 + iv_0$. Then corresponding to $\varepsilon > 0$ there exists $\delta > 0$ such that
$|u(x, y) + iv(x, y) - (u_0 + iv_0)| < \varepsilon$  when  $0 < |z - z_0| < \delta$
∴  $|u(x, y) - u_0 + i(v(x, y) - v_0)| < \varepsilon$  when  $0 < |(x - x_0) + i(y - y_0)| < \delta$
Now,
$|u(x, y) - u_0| \le |u(x, y) - u_0 + i(v(x, y) - v_0)| < \varepsilon$  when  $0 < |(x - x_0) + i(y - y_0)| < \delta$
$|v(x, y) - v_0| \le |u(x, y) - u_0 + i(v(x, y) - v_0)| < \varepsilon$  when  $0 < |(x - x_0) + i(y - y_0)| < \delta$
and $0 < |x - x_0| < \dfrac{\delta}{2}, \ 0 < |y - y_0| < \dfrac{\delta}{2}$
$\Rightarrow \ 0 < |(x - x_0) + i(y - y_0)| \le |x - x_0| + |y - y_0| < \dfrac{\delta}{2} + \dfrac{\delta}{2} = \delta$
∴  $|u(x, y) - u_0| < \varepsilon$  when  $0 < |x - x_0| < \dfrac{\delta}{2}, \ 0 < |y - y_0| < \dfrac{\delta}{2}$
and $|v(x, y) - v_0| < \varepsilon$  when  $0 < |x - x_0| < \dfrac{\delta}{2}, \ 0 < |y - y_0| < \dfrac{\delta}{2}$
Hence, $\lim_{\substack{x \to x_0 \\ y \to y_0}} u(x, y) = u_0$ and $\lim_{\substack{x \to x_0 \\ y \to y_0}} v(x, y) = v_0$.

Conversely, let $\lim_{\substack{x \to x_0 \\ y \to y_0}} u(x, y) = u_0$, $\lim_{\substack{x \to x_0 \\ y \to y_0}} v(x, y) = v_0$.
Then, corresponding to $\varepsilon > 0$ there exist $\delta_1, \delta_2 > 0$ such that
$|u(x, y) - u_0| < \dfrac{\varepsilon}{2}$  when  $0 < |x - x_0| < \delta_1, \ 0 < |y - y_0| < \delta_1$
and $|v(x, y) - v_0| < \dfrac{\varepsilon}{2}$  when  $0 < |x - x_0| < \delta_2, \ 0 < |y - y_0| < \delta_2$.
Let $\delta = \min(\delta_1, \delta_2)$
∴  $|u(x, y) - u_0| < \dfrac{\varepsilon}{2}$  when  $0 < |x - x_0| < \delta, \ 0 < |y - y_0| < \delta$
and $|v(x, y) - v_0| < \dfrac{\varepsilon}{2}$  when  $0 < |x - x_0| < \delta, \ 0 < |y - y_0| < \delta$.
∴  $|u(x, y) + iv(x, y) - (u_0 + iv_0)| \le |u(x, y) - u_0| + |v(x, y) - v_0| < \dfrac{\varepsilon}{2} + \dfrac{\varepsilon}{2} = \varepsilon$
when $0 < |x + iy - (x_0 + iy_0)| \le |x - x_0| + |y - y_0| < \delta + \delta = 2\delta$
i.e., $0 < |z - z_0| < 2\delta$
∴  $\lim_{z \to z_0} f(z) = u_0 + iv_0$


1.2.3 Continuity of a Function
If a single-valued function f(z) is defined in some neighbourhood of z₀, i.e., in $|z - z_0| < \delta$ for
some δ, $\lim_{z \to z_0} f(z)$ exists and $\lim_{z \to z_0} f(z) = f(z_0)$, then f(z) is called continuous at z = z₀.

If a function f ( z ) is continuous at all points in some domain D, then f ( z ) is called continuous


in domain D.

1.2.4 Differentiability of a Function
If f(z) is a single-valued function defined in some neighbourhood of z₀, i.e., $|z - z_0| < \delta$ for some
δ, then f(z) is said to be differentiable at z = z₀ iff
$\lim_{z \to z_0} \dfrac{f(z) - f(z_0)}{z - z_0} = \lim_{\Delta z \to 0} \dfrac{f(z_0 + \Delta z) - f(z_0)}{\Delta z}$  exists.
If this limit exists, then it is the value of the derivative at z = z₀ and is written as $f'(z_0)$ or $\dfrac{d}{dz} f(z_0)$.
Remark 1.2: In the case of a function of a real variable there is the concept of a left-handed derivative and
a right-handed derivative, but in the case of a function of a complex variable there is no such concept, as
the limit here is to be taken along any path.
Theorem 1.3  If f (z) is differentiable at z = z0 then it is continuous at z = z0 but converse
is not true.
Proof: Let f (z) be differentiable at z = z0
then $f'(z_0) = \lim_{z \to z_0} \dfrac{f(z) - f(z_0)}{z - z_0}$
Now, $f(z) = \dfrac{f(z) - f(z_0)}{z - z_0}(z - z_0) + f(z_0)$;  $z \ne z_0$
∴  $\lim_{z \to z_0} f(z) = f'(z_0)\cdot(0) + f(z_0) = f(z_0)$
∴  f(z) is continuous at z = z₀.
Now, $f(z) = \bar{z} = x - iy$ (where z = x + iy) is continuous at z = 0.
But $\lim_{z \to 0} \dfrac{f(z) - f(0)}{z - 0} = \lim_{z \to 0} \dfrac{\bar{z} - 0}{z} = \lim_{z \to 0} \dfrac{\bar{z}}{z}$
Along the path y = 0,  $\lim_{z \to 0} \dfrac{\bar{z}}{z} = \lim_{x \to 0} \dfrac{x}{x} = 1$
Along the path x = 0,  $\lim_{z \to 0} \dfrac{\bar{z}}{z} = \lim_{y \to 0} \dfrac{-iy}{iy} = -1$
These two are not equal.
∴  $\lim_{z \to 0} \dfrac{f(z) - f(0)}{z - 0}$ does not exist.
∴  f(z) is not differentiable at z = 0.

Remark 1.3: It should be noted that if a function is not continuous at a point then it is not
­differentiable at that point.

1.3 Analytic Functions
A function f(z) is said to be analytic or holomorphic or regular at z = z₀ iff there exists a neighbourhood
$|z - z_0| < \delta$ of z₀ for some δ > 0 such that f(z) is differentiable at all points of this
neighbourhood. A function f(z) is called analytic in some domain D if it is analytic at all points of
domain D. If f(z) is not analytic at z = z₀ and there is a neighbourhood $|z - z_0| < \delta$ of z₀ for some
δ in which f(z) is differentiable at all points except z = z₀, then z = z₀ is called an isolated singularity
of f(z).
If a function is differentiable for all z, then it is called an entire function.

1.4 Cauchy–Riemann Equations

Theorem 1.4  Let w = f(z) = u(x, y) + iv(x, y) be defined and continuous in some neighbourhood
of a point z = x + iy and differentiable at z itself. Then at this point, the first order partial
derivatives of u and v exist and satisfy the Cauchy–Riemann (C–R) equations:
$u_x = v_y$  and  $u_y = -v_x$
Proof: f(z) is differentiable at z
∴  $f'(z) = \lim_{\Delta z \to 0} \dfrac{f(z + \Delta z) - f(z)}{\Delta z}$
exists and is unique along every path along which Δz → 0.
$f'(z) = \lim_{\substack{\Delta x \to 0 \\ \Delta y \to 0}} \dfrac{[u(x+\Delta x, y+\Delta y) + iv(x+\Delta x, y+\Delta y)] - [u(x, y) + iv(x, y)]}{\Delta x + i\Delta y}$
Along the path Δx = 0, i.e., parallel to the y-axis,
$f'(z) = \lim_{\Delta y \to 0} \dfrac{[u(x, y+\Delta y) + iv(x, y+\Delta y)] - [u(x, y) + iv(x, y)]}{i\Delta y}$
$= \lim_{\Delta y \to 0} \left[ \dfrac{u(x, y+\Delta y) - u(x, y)}{i\Delta y} + \dfrac{v(x, y+\Delta y) - v(x, y)}{\Delta y} \right]$
$= -i\dfrac{\partial u}{\partial y} + \dfrac{\partial v}{\partial y} = \dfrac{\partial v}{\partial y} - i\dfrac{\partial u}{\partial y}$  (1.3)
Along the path Δy = 0, i.e., parallel to the x-axis,
$f'(z) = \lim_{\Delta x \to 0} \left[ \dfrac{u(x+\Delta x, y) - u(x, y)}{\Delta x} + i\dfrac{v(x+\Delta x, y) - v(x, y)}{\Delta x} \right]$
$= \dfrac{\partial u}{\partial x} + i\dfrac{\partial v}{\partial x}$  (1.4)
Since f(z) is differentiable, the two limits in equations (1.3) and (1.4) are equal:
∴  $\dfrac{\partial u}{\partial x} + i\dfrac{\partial v}{\partial x} = \dfrac{\partial v}{\partial y} - i\dfrac{\partial u}{\partial y}$
Equate real and imaginary parts:
$\dfrac{\partial u}{\partial x} = \dfrac{\partial v}{\partial y}, \quad \dfrac{\partial v}{\partial x} = -\dfrac{\partial u}{\partial y}$
or  $u_x = v_y$  and  $u_y = -v_x$,
which are the Cauchy–Riemann (C–R) equations.

Remark 1.4: The C–R equations are necessary conditions for a function to be differentiable or analytic
at a point. Thus, a function not satisfying the C–R equations at a point will be neither differentiable
nor analytic at that point. These conditions are not sufficient: there exist functions
which satisfy the C–R equations at a point but are not differentiable (and hence not analytic) at that point.
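The C–R equations are easy to check symbolically. The following SymPy sketch is our own supplement (the helper name check_CR is not from the text); it tests them for f(z) = z², which satisfies them everywhere, and for f(z) = z̄, which does not.

```python
# Symbolic check of the C-R equations u_x = v_y, u_y = -v_x.
import sympy as sp

x, y = sp.symbols('x y', real=True)

def check_CR(f):
    u, v = sp.re(sp.expand(f)), sp.im(sp.expand(f))
    return (sp.simplify(sp.diff(u, x) - sp.diff(v, y)),
            sp.simplify(sp.diff(u, y) + sp.diff(v, x)))

z = x + sp.I*y
print(check_CR(z**2))              # (0, 0) -> C-R equations hold everywhere
print(check_CR(sp.conjugate(z)))   # (2, 0) -> C-R equations fail
```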

1.4.1 Sufficient Conditions for a Function to be Analytic

Theorem 1.5  The sufficient conditions for a function w = f(z) = u(x, y) + iv(x, y) to be analytic
at a point z are
(i) u, v, ux, uy, vx, vy are continuous functions of x and y in a certain neighbourhood of z.
(ii) C–R equations ux = vy, uy = - vx are satisfied in neighbourhood of z.
Proof: Let z + Dz lies in neighbourhood of z in which u, v, ux, uy, vx, vy are continuous
Then, by Taylor’s expansion
u ( x + ∆x, y + ∆y ) − u ( x, y ) = ∆x ux + ∆y u y
+ terms of at least second order in ∆x and ∆y (1.5)


v ( x + ∆x, y + ∆y ) − v ( x, y ) = ∆x v x + ∆y v y
+ terms of at least second order in ∆x and ∆y (1.6)
In equations (1.5) and (1.6) use C–R equations uy = - vx and vy = ux, respectively.
u ( x + ∆x, y + ∆y ) − u ( x, y ) = ∆x ux − ∆y v x
+ terms of at least second order in ∆x and ∆y (1.7)

v ( x + ∆x, y + ∆y ) − v ( x, y ) = ∆x v x + ∆y ux
+ terms of at least second order in ∆x and ∆y (1.8)

Now, f (z +D z) – f (z) = u ( x + ∆x, y + ∆y ) + iv ( x + ∆x, y + ∆y )  − u ( x, y ) + iv ( x, y ) 

= u ( x + ∆x, y + ∆y ) − u ( x, y )  + i  v ( x + ∆x, y + ∆y ) − v ( x, y ) 

Use equations (1.7) and (1.8)
f (z + Dz) – f (z) = Dx ux – Dy vx + i(Dx vx + Dy ux)
= (Dx + iDy) ux + i(Dx + iDy) vx
+ terms of at least second order in ∆x and ∆y
= (ux + ivx) (Dx + iDy)
+ terms of at least second order in ∆x and ∆y 
f ( z + ∆z ) − f ( z )
\ = ux + iv x + terms of at least first order in ∆x and ∆y 
∆z
f ( z + ∆z ) − f ( z )
\ lim = ux + iv x
∆z → 0 ∆z 
\ f(z) is differentiable at z and f  ′(z) = ux + ivx
Similarly, f(z) is differentiable in neighbourhood of z.
Hence, f(z) is analytic at z.

Remark 1.5: If f (z) is differentiable at z, then


f ′(z) = ux + ivx = vy – iuy = ux – iuy = vy + ivx   (By C–R equations)

1.4.2 Polar Form of Cauchy–Riemann Equations

Theorem 1.6  The polar form of the Cauchy–Riemann equations is
$\dfrac{\partial u}{\partial r} = \dfrac{1}{r}\dfrac{\partial v}{\partial \theta}, \quad \dfrac{\partial v}{\partial r} = -\dfrac{1}{r}\dfrac{\partial u}{\partial \theta}$
and using these equations, we have
$\dfrac{\partial^2 u}{\partial r^2} + \dfrac{1}{r}\dfrac{\partial u}{\partial r} + \dfrac{1}{r^2}\dfrac{\partial^2 u}{\partial \theta^2} = 0$
Proof:
$z = x + iy = re^{i\theta}$
∴  $w = f(z) = f(re^{i\theta}) = u(r, \theta) + iv(r, \theta)$
Differentiate partially w.r.t. r and θ:
$f'(z)\dfrac{\partial z}{\partial r} = \dfrac{\partial u}{\partial r} + i\dfrac{\partial v}{\partial r}$  (1.9)
$f'(z)\dfrac{\partial z}{\partial \theta} = \dfrac{\partial u}{\partial \theta} + i\dfrac{\partial v}{\partial \theta}$  (1.10)
But $\dfrac{\partial z}{\partial r} = e^{i\theta}, \ \dfrac{\partial z}{\partial \theta} = ire^{i\theta}$  $(\because z = re^{i\theta})$
∴ from equations (1.9) and (1.10),
$f'(z) = e^{-i\theta}\left(\dfrac{\partial u}{\partial r} + i\dfrac{\partial v}{\partial r}\right) = \dfrac{1}{ri}e^{-i\theta}\left(\dfrac{\partial u}{\partial \theta} + i\dfrac{\partial v}{\partial \theta}\right)$  (1.11)
∴  $\dfrac{\partial u}{\partial r} + i\dfrac{\partial v}{\partial r} = -\dfrac{i}{r}\dfrac{\partial u}{\partial \theta} + \dfrac{1}{r}\dfrac{\partial v}{\partial \theta}$
Equate real and imaginary parts:
$\dfrac{\partial u}{\partial r} = \dfrac{1}{r}\dfrac{\partial v}{\partial \theta}$  (1.12)
$\dfrac{\partial v}{\partial r} = -\dfrac{1}{r}\dfrac{\partial u}{\partial \theta}$  (1.13)
which are the C–R equations in polar form.
To prove the other part, differentiate both sides of equation (1.12) partially w.r.t. r and both
sides of equation (1.13) partially w.r.t. θ:
$\dfrac{\partial^2 u}{\partial r^2} = -\dfrac{1}{r^2}\dfrac{\partial v}{\partial \theta} + \dfrac{1}{r}\dfrac{\partial^2 v}{\partial r\,\partial \theta}$  (1.14)
$\dfrac{\partial^2 v}{\partial \theta\,\partial r} = -\dfrac{1}{r}\dfrac{\partial^2 u}{\partial \theta^2}$  (1.15)
Multiply equation (1.15) by $\dfrac{1}{r}$ and use it in (1.14):
$\dfrac{\partial^2 u}{\partial r^2} = -\dfrac{1}{r^2}\dfrac{\partial v}{\partial \theta} - \dfrac{1}{r^2}\dfrac{\partial^2 u}{\partial \theta^2}$  $\left(\because \dfrac{\partial^2 v}{\partial r\,\partial \theta} = \dfrac{\partial^2 v}{\partial \theta\,\partial r}\right)$
Using the C–R equation (1.12), we have
$\dfrac{\partial^2 u}{\partial r^2} + \dfrac{1}{r}\dfrac{\partial u}{\partial r} + \dfrac{1}{r^2}\dfrac{\partial^2 u}{\partial \theta^2} = 0$
Also, note that f′(z) in polar form is given by equation (1.11).
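As a quick sanity check of the polar Laplace equation just derived, the following SymPy sketch (our own supplement, not part of the text) verifies it for u = r² cos 2θ, the real part of z².

```python
# Spot-check of u_rr + u_r/r + u_{theta theta}/r^2 = 0 for u = r^2 cos(2*theta).
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
u = r**2 * sp.cos(2*th)
laplacian = sp.diff(u, r, 2) + sp.diff(u, r)/r + sp.diff(u, th, 2)/r**2
print(sp.simplify(laplacian))   # 0
```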

Example 1.1: Find the following limits


z 3 − iz 2 + z − i  i  xy
(i)  lim (ii) lim  x + e
z →i z −i z →i
 1 − x 

z 3 − iz 2 + z − i  0
Solution:  (i) lim    Form 
z →i z −i  0

= lim
( 3z 2
)  [By L′ Hospital rule]
− 2iz + 1
z →i 1
= 3i – 2i + 1= –3 + 2 + 1 = 0
2 2

or
z 3 − iz 2 + z − i z 2 ( z − i) + ( z − i)
lim = lim
z →i z −i z →i ( z − i)

( z 2 + 1)( z − i )
= lim
z →i ( z − i)

= lim ( z 2 + 1) = i 2 + 1 = −1 + 1 = 0
z →i 
 i  xy
(ii)  lim  x + e  (∵ z = x + iy ∴ z → i ⇒ x → 0 & y → 1)
z →i
 1− x 
 i  xy
= lim  x + e
y →1 
x →0 1 − x 

 i  0
= 0 + e =i
 1 − 0 

Example 1.2: (i) Show that the function f ( z ) = z is continuous everywhere but not differentiable
at any point in the complex plane.
(ii)  Show that f ( z ) = z is not differentiable at z = 0 and it is nowhere analytic.

Solution: (i) f ( z ) = z ; z = x + iy, z = x − iy
Let z0 = x0 + iy0 be any complex number.
\ z0 = x0 − iy0 

lim f ( z ) = lim z = lim ( x − iy ) = x0 − iy0 = z0 = f ( z0 ) 


z → z0 z → z0 x → x0
y → y0

\ f(z) is continuous at z = z0.


But z0 is any complex number.

\ f (z) is continuous at all points in complex plane.


f ( z ) − f ( z0 ) f ( z0 + ∆z ) − f ( z0 )
Now, lim = lim
z → z0 z − z0 ∆z → 0 ∆z
z0 + ∆z − z0
= lim
∆z → 0 ∆z 
∆z
= lim
∆z → 0 ∆z

∆x − i ∆y
= lim
∆x → 0 ∆x + i ∆y
∆y → 0

∆x − i ∆y ∆x
Now, lim lim = lim =1
∆x → 0 ∆y → 0 ∆x + i ∆y ∆x → 0 ∆x

∆x − i∆y −i∆y
and lim lim = lim = −1

∆y → 0 ∆x → 0x + i ∆ y ∆y → 0 i ∆y

∆x − i ∆y ∆x − i ∆y
\ lim lim ≠ lim lim
∆x → 0 ∆y → 0 ∆x + i ∆y ∆y → 0 ∆x → 0 ∆x + i ∆y

f ( z ) − f ( z0 )
Therefore, lim does not exist.
z → z0 z − z0
\ f(z) is not differentiable at z0.

But z0 is any complex number.

\ f(z) is not differentiable at any point in the complex plane.


(ii) By (i), f(z) = z is not differentiable at any point in the complex plane. Therefore, f(z) is
not differentiable at z = 0 and it is nowhere analytic.
2
Example 1.3: Show that the function f(z) = z is continuous everywhere but differentiable only
at the origin. Is it analytic at the origin?
2
Solution:  f ( z ) = u + iv = z = x 2 + y 2

\ u ( x, y ) = x 2 + y 2 , v = 0 

u ( x, y ) and v ( x, y ) are continuous functions of x and y for all x, y ∈ R.



\ f(z) is continuous everywhere.


Now,  $\dfrac{\partial u}{\partial x} = 2x, \ \dfrac{\partial u}{\partial y} = 2y, \ \dfrac{\partial v}{\partial x} = 0, \ \dfrac{\partial v}{\partial y} = 0$
These four partial derivatives are continuous everywhere, and the C–R equations
$\dfrac{\partial u}{\partial x} = \dfrac{\partial v}{\partial y}; \ \dfrac{\partial u}{\partial y} = -\dfrac{\partial v}{\partial x}$ are satisfied only at x = 0, y = 0.
\ f(z) is differentiable only at z = 0.
Since there is no neighbourhood of z = 0 in which f(z) is differentiable, therefore f(z) is not
analytic at z = 0.

Example 1.4: Show that the function f(z) = u + iv, where


 x 3 (1 + i ) − y 3 (1 − i )
 ; z≠0
f ( z) =  x2 + y2
 0 ; z=0
 
is continuous everywhere. Also, show that Cauchy–Riemann equations are satisfied at origin,
yet f ′(0) does not exist.
 x 3 (1 + i ) − y 3 (1 − i )
 ; z≠0
Solution:  f ( z ) = u( x, y ) + iv( x, y ) =  x2 + y2
 0 ; z=0

 x3 − y3
 2 ; ( x, y ) ≠ (0, 0)
u ( x, y ) =  x + y
2
\
 0 ; ( x , y ) = ( 0, 0 )

 x3 + y3
 2 ; ( x, y ) ≠ (0, 0)
v ( x, y ) =  x + y
2
and
 0 ; ( x , y ) = ( 0, 0 )

At all points, (x, y) ≠ (0, 0),
u and v are rational functions and hence continuous at all points (x, y) ≠ (0, 0).
Now, we check the continuity of u at (0, 0).
Let e > 0. Then
x3 − y3
u( x, y ) − u(0, 0) =
x2 + y2

x3 + y3 x2 x + y2 y
≤ =
x +y
2 2
x2 + y2

 x2 y2 
≤ x+ y ∵
 x 2 + y 2 ≤ 1 & ≤ 1
x +y
2 2

ε ε
< e when x < and y <
2 2
ε
Therefore, there exists δ = > 0 s.t.
2
u( x, y ) − u(0, 0) < ε when x < δ and y < δ

∴ lim u( x, y ) = u(0, 0)
x→0
y→0

⇒ u 
(x, y) is continuous at (0, 0).
Similarly, we can prove that v(x, y) is continuous at (0, 0).
Hence, u(x, y) and v(x, y) are continuous at all points.
Thus, f(z) = u + iv is continuous everywhere.
At origin
x3
−0
∂u u ( x , 0 ) − u ( 0, 0 ) 2
= lim = lim x =1
∂x x →0 x x →0 x
− y3
−0
∂u u (0, y ) − u (0, 0) y2
= lim = lim = −1
∂y y →0 y y →0 y
x3
−0
∂v v ( x , 0 ) − v ( 0, 0 ) 2
= lim = lim x =1
∂x x →0 x x →0 x
y3
−0
∂v v (0, y ) − v (0, 0) y2
= lim = lim =1
∂y y →0 y y →0 y

∂u ∂v ∂u ∂v
\ C–R equations = , =− are satisfied at the origin.
∂x ∂y ∂y ∂x
x 3 (1 + i )
−0
f ( z ) − f ( 0) 2
Along y = 0, lim = lim x = 1+ i (1)
z →0 z x →0 x 

2 x3
i −0
f ( z ) − f ( 0) 2 i 1+ i
Along y = x, lim = lim 2 x = = (2)
z →0 z x → 0 (1 + i ) x 1+ i 2 

From (1) and (2)


f ( z ) − f ( 0)
lim does not exist.
z →0 z
\ f  ′(0) does not exist.

Example 1.5: Show that the function f(z) = xy is not regular at the origin, although C–R equa-
tions are satisfied at the point.
Solution:  f (z) = u (x, y) + iv (x, y) = xy

\ u (x, y) = xy , v (x, y) = 0
At origin
∂u u ( x , 0 ) − u ( 0, 0 ) 0−0
= lim = lim =0
∂x x →0 x x →0 x
∂u u (0, y ) − u (0, 0) 0−0
= lim = lim =0
∂y y →0 y y →0 y

∂v v ( x , 0 ) − v ( 0, 0 ) 0−0
= lim = lim =0
∂x x → 0 x x → 0 x
∂v v (0, y ) − v (0, 0) 0−0
= lim = lim =0
∂y y → 0 y y → 0 y

∂u ∂v ∂u ∂v
\ C–R equations = , =− are satisfied at the origin.
∂x ∂y ∂y ∂x
Along y = x in the first quadrant
f ( z ) − f ( 0) x2 − 0 x x 1
lim = lim+ = lim+ = = lim = (1)
z →0 z x → 0 (1 + i ) x x→0 (1 + i ) x x→0+ (1 + i ) x (1 + i )

and along y = x in the third quadrant
f ( z ) − f ( 0) x −x 1
lim = lim− = lim− =− (2)
z →0 z x → 0 (1 + i ) x x → 0 (1 + i ) x (1 + i )

f ( z ) − f ( 0)
From (1) and (2) lim does not exist.
z →0 z
\ f(z) is not differentiable at z = 0

\ f(z) is not regular at z = 0 although C–R equations are satisfied at z = 0.

e − z , z ≠ 0
−4

Example 1.6: Show that the function f(z) =  is not analytic at the origin although
0 , z = 0
the Cauchy–Riemann equations are satisfied there.

e − z , z ≠ 0
−4

Solution:  f(z) = u (x, y) + iv (x, y) = 


0 , z = 0

e − x , x ≠ 0
−4

\ u (x, 0) + iv (x, 0) = 
0 , x = 0

e − y , y ≠ 0
−4
−4
and u (0, y ) + iv (0, y ) = e − ( iy ) = 
0 , y = 0

e − x , x ≠ 0 e − y , y ≠ 0
−4 −4

\ u (x, 0) =  ,  u (0, y ) = 
0 , x = 0 0 , y = 0
v (x, 0) = 0 for all x ∈ R , v (0, y ) = 0 for all y ∈ R
At origin
∂u u ( x , 0 ) − u ( 0, 0 )
= lim
∂x x → 0 x
− x −4
e
= lim
x →0 x
1
= lim 1/ x 4
x →0
xe
1
= lim
x →0  1 1 1 
x 1 + 4 + + + 
 x 2 ! x 3! x12
8

1
= lim = 0 (1)
x→0 1 1 1
x+ 3 + + 
x 2! x 7 3! x11
∂u u (0, y ) − u (0, 0)
= lim
∂y y → 0 y
−4
e− y
= lim = 0   from (1)
y →0 y
∂v v ( x , 0 ) − v ( 0, 0 ) 0−0
= lim = lim =0
∂x x →0 x x →0 x
∂v v (0, y ) − v (0, 0) 0−0
= lim = lim =0
∂y y → 0 y y → 0 y

∂u ∂v ∂u ∂v
Thus, C–R equations = , =− are satisfied at origin.
∂x ∂y ∂y ∂x

Along y = x,
1
−4
− 2 2 4
f ( z ) − f ( 0) e −[(1+ i ) x ] e [(1+ i ) ] x
lim = lim = lim
z →0 z x → 0 (1 + i ) x x →0 (1 + i ) x

1 1
− 2 4
e ( 2i ) x
4
1 e4x
= lim = lim
x → 0 (1 + i ) x (1 + i ) x→0 x

which is infinite and hence does not exist.
\ f(z) is not analytic at origin although C–R equations are satisfied at origin.

x 2 y 5 ( x + iy )
Example 1.7: Show that the function f(z) = , z ≠ 0, f (0) = 0 satisfies C–R equa-
tions at origin yet f ′(0) does not exist. x 4 + y10

 x3 y5 x 2 y6
 + i ; z≠0
Solution: f(z) = u (x, y) + iv (x, y) =  x 4 + y10 x 4 + y10
 0 ; z=0

 x3 y5
 ; ( x, y ) ≠ (0, 0)
\ u ( x, y ) =  x 4 + y10
0 ; ( x, y ) = (0, 0)

 x 2 y6
 ; ( x, y ) ≠ (0, 0)
v ( x, y ) =  x 4 + y10
 0 ; ( x, y ) = (0, 0)

At origin
∂u u ( x , 0 ) − u ( 0, 0 ) 0−0
= lim = lim =0
∂x x → 0 x x → 0 x
∂u u (0, y ) − u (0, 0) 0−0
= lim = lim =0
∂y y → 0 y y → 0 y

∂v v ( x , 0 ) − v ( 0, 0 ) 0−0
= lim = lim =0
∂x x → 0 x x → 0 x
∂v v (0, y ) − v (0, 0) 0−0
= lim = lim =0
∂y y →0 y y →0 y

∂u ∂v ∂u ∂v
\ C-R equations = ; =− are satisfied at origin.
∂x ∂y ∂y ∂x

Along y = x,
f ( z ) − f (0 ) x7 x3
lim = lim 4 = lim =0
z x → 0 x + x10 x→0 1 + x 6
z →0

Along y 5 = x 2,

f ( z ) − f (0 ) x4 1
lim = lim 4 =
z x→0 x + x 4 2
z →0

f ( z ) − f (0 )
\ lim does not exist.
z →0 z
\ f (z) is not differentiable at z = 0.
\ C-R equations are satisfied at origin yet f ′(0) does not exist.

Example 1.8: Use Cauchy–Riemann equations to show that f (z) = z 3 is analytic in the entire
complex plane.
Solution: f (z) = u (x, y) + iv (x, y) = (x + iy)3
= x3 – 3xy2 + i (3x2y – y3)

\ u (x, y) = x3 – 3xy2, v (x, y) = 3x2y – y3


∂u ∂v
= 3x2 – 3y2 = 3 (x2 – y2), = 6xy
∂x ∂x
∂u ∂v
= – 6xy = 3x2 – 3y2 = 3 (x2 – y2)
∂y ∂y

\ For all x, y ∈ R
∂u ∂v ∂u −∂v
C-R equations = , = are satisfied.
∂x ∂y ∂y ∂x
∂u ∂u ∂v ∂v
Also, , , , being polynomial functions are continuous for all x, y ∈ R.
∂x ∂y ∂x ∂y
\ f (z) is analytic for all z in complex plane.
\ f (z) = z3 is analytic in the entire complex plane.

1
(
Example 1.9: Determine ‘p’ such that the function f ( z ) = log x 2 + y 2 + i tan −1
2
px
y
)is analytic.
Solution: f (z) = u (x, y) + iv (x, y)
1 px
= log (x2 + y2) + i tan –1 ; ( x, y ) ≠ (0, 0 )
2 y
1
( )
\   u (x, y) = log x 2 + y 2 ; ( x, y ) ≠ (0, 0 )
2

px
v ( x, y ) = tan −1 ; ( x , y ) ≠ ( 0, 0 )
y
∂u x ∂u y
= , =
∂x x 2 + y 2 ∂y x 2 + y 2

∂v 1 p py
= ⋅ = 2 2
∂x p2 x 2 y p x + y2
1+ 2
y
∂v 1  − px  − px
= ⋅ 2  = 2 2
∂y 2 2
p x  y  p x + y2
1+ 2
y

Since f (z) is analytic at z ≠ 0
∂u ∂v ∂u −∂v
\ C-R equations = , = are satisfied.
∂x ∂y ∂y ∂x
x − px y − py
\ = and 2 =
x 2 + y 2 p2 x 2 + y 2 x + y 2 p2 x 2 + y 2
These are satisfied only for p = –1.
\ p = -1.

Example 1.10: Find the constants a, b, c such that the function f (z), where
(i) f (z) = x – 2ay + i(bx - cy)
(ii) f (z) = – x2 + xy + y2 + i(ax2 + bxy + cy2)
is analytic. Express f (z) in terms of z.
Solution:  (i)  f (z) = x – 2ay + i(bx – cy) = u + iv
\ u = x – 2ay, v = bx – cy
∂u ∂u ∂v ∂v
\ = 1, = −2a, = b, = −c
∂x ∂y ∂x ∂y
Since f(z) is analytic, therefore C–R equations
∂u ∂v ∂u ∂v
= , =− are satisfied.
∂x ∂y ∂y ∂x
\ 1 = − c, − 2 a = − b

\ c = −1, 2a = b
\ a = k , b = 2k , c = −1; where k is any real number.
f ( z ) = x − 2ky + i( 2kx + y )

= x + iy + 2ki( x + iy ) = z + 2kiz = (1 + 2ki ) z; where k is any real number.


(ii) f ( z ) = − x 2 + xy + y 2 + i( ax 2 + bxy + cy 2 ) = u + iv

\ u = − x 2 + xy + y 2, v = ax 2 + bxy + cy 2
∂u ∂u ∂v ∂v
\ = −2 x + y, = x + 2 y, = 2ax + by, = bx + 2cy
∂x ∂y ∂x ∂y
Since f (z) is analytic, therefore C–R equations
∂u ∂v ∂u ∂v
= , =− are satisfied.
∂x ∂y ∂y ∂x
\ −2 x + y = bx + 2cy, x + 2 y = −2ax − by
⇒  $b = -2, \ c = \dfrac{1}{2}, \ a = -\dfrac{1}{2}$
∴  $f(z) = -x^2 + xy + y^2 + \dfrac{i}{2}(-x^2 - 4xy + y^2)$
$= -(x^2 - y^2 + 2ixy) - \dfrac{i}{2}(x^2 - y^2 + 2ixy)$
$= -\left(1 + \dfrac{i}{2}\right)(x + iy)^2$
$= -\dfrac{1}{2}(2 + i)z^2$
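Part (ii) of Example 1.10 can also be checked mechanically: impose the C–R equations on u and v and solve for a, b, c. The SymPy sketch below is our own supplement (variable names are ours, not the book's).

```python
# Impose the C-R equations on u = -x^2 + x*y + y^2, v = a*x^2 + b*x*y + c*y^2
# and solve for a, b, c.
import sympy as sp

x, y, a, b, c = sp.symbols('x y a b c', real=True)
u = -x**2 + x*y + y**2
v = a*x**2 + b*x*y + c*y**2

eqs = [sp.diff(u, x) - sp.diff(v, y), sp.diff(u, y) + sp.diff(v, x)]
conditions = []
for e in eqs:                             # each expression must vanish identically in x, y
    conditions.extend(sp.Poly(e, x, y).coeffs())
print(sp.solve(conditions, [a, b, c]))    # {a: -1/2, b: -2, c: 1/2}
```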

Example 1.11: Show that the functions defined by


(i) f ( z ) = z 3 + 1 − iz 2 and (ii) ez  are analytic everywhere.
f ( z + ∆z ) − f ( z )
Solution:  (i)  lim
∆z → 0 ∆z
( z + ∆z )3 + 1 − i( z + ∆z ) 2  −  z 3 + 1 − iz 2 
= lim
∆z → 0 ∆z
3 z 2 ∆z + 3 z ( ∆z ) 2 + ( ∆z )3 − i  2 z ∆z + ( ∆z ) 2 
= lim
∆z → 0 ∆z
= lim 3 z 2 + 3 z ∆z + ( ∆z ) 2 − 2iz − i∆z 
∆z → 0

= 3 z 2 − 2iz
\ f ′( z ) = 3 z 2 − 2iz which exists for all z

\ f ( z ) is analytic everywhere.
 (ii)  f ( z ) = e z = e x +iy = e x . eiy = e x (cos y + i sin y ) = u + iv
\ u = e x cos y, v = e x sin y

∂u ∂u
\ = e x cos y , = −e x sin y
∂x ∂y
∂v ∂v
= e x sin y, = e x cos y
∂x ∂y

∂u ∂v ∂u ∂v
\ C–R equations = , = − are satisfied for all x, y ∈ R.
∂x ∂y ∂y ∂x
Also exponential and trigonometrical functions are continuous.
∂u ∂u ∂v ∂v
\ , , , are continuous functions for all x, y ∈ R.
∂x ∂y ∂x ∂y
\ f(z) is analytic function for all z.

Example 1.12: Prove that the function e x (cos y + i sin y ) is analytic and find its derivative in
terms of z.
Solution: Let f ( z ) = e x (cos y + i sin y ) = u + iv
Now, f(z) is analytic for all z (see above example (ii) part)
∂u ∂v
\ f ′( z ) = +i = e x cos y + ie x sin y
∂x ∂x
= e x (cos y + i sin y ) = e x ⋅ e iy

= e x +iy = e z.

Example 1.13: Show that if f ( z ) is analytic and


(i) Re f ( z ) = constant  (ii)  Im f ( z ) = constant   or (iii)  f ( z ) is a non − zero constant,
then f(z) is a constant.
Solution: (i) Let f ( z ) = u + iv be analytic where
Re f ( z ) = u = constant
∂u ∂u
\ = =0
∂x ∂y
∂u ∂v ∂u ∂u
\ f ′( z ) = +i = −i (By C − R equation)
∂x ∂x ∂x ∂y
= 0 – i (0) = 0
⇒ f(z) = constant
(ii) Let f ( z ) = u + iv be analytic where
Im f ( z ) = v = constant


∂v ∂v
\ = =0
∂x ∂y
∂u ∂v ∂v ∂v
\ f ′( z ) = +i = +i (By C − R equation )
∂x ∂x ∂y ∂x
= 0 + i(0) = 0
⇒ f(z) = constant
(iii) Let f ( z ) = u + iv be analytic where
f ( z ) = u 2 + v 2 = constant ≠ 0

\ u 2 + v 2 = c; c ≠ 0 (1)
Differentiate partially w. r. t. x and y
∂u ∂v
2u + 2v =0
∂x ∂x
∂u ∂v
2u + 2v =0
∂y ∂y

∂u ∂v
\ u +v = 0 (2)
∂x ∂x
∂u ∂v
u +v = 0 (3)
∂y ∂y

Use C–R equations in (3)
∂v ∂u
−u +v = 0 (4)
∂x ∂x
Square and add (2) and (4)
 ∂u  2  ∂v  2 
(u 2
+ v2 )   +    = 0
 ∂x   ∂x  
2 2
 ∂u   ∂v 
⇒   +   = 0
∂x ∂x
(∵ from (1) u 2
+ v2 = c ≠ 0 )

2
∂u ∂v
⇒ +i =0
∂x ∂x
2
⇒ f ′( z ) = 0

⇒ f ′( z ) = 0
⇒ f ′( z ) = 0
⇒ f ( z ) = constant

Example 1.14: If the function f is given by


 θ θ
f ( z ) = z = r  cos + i sin  where r > 0 and 0 < θ < 2π , then show that f(z) is analytic and
 2 2
find f  ′(z).
 θ θ
Solution: Let f ( z ) = u + iv = r  cos + i sin  (given)
 2 2
θ θ
∴ u = r cos , v = r sin
2 2
∂u 1 θ ∂u r θ
∴ = cos , =− sin
∂r 2 r 2 ∂θ 2 2
∂v 1 θ ∂v r θ
= sin , = cos
∂r 2 r 2 ∂θ 2 2
∂u 1  r θ  1 ∂v ∂v 1  r θ  −1 ∂u
∴ =  cos  = and =  sin  =
∂r r  2 2  r ∂θ ∂r r  2 2 r ∂θ
\ C–R equations are satisfied.
∂u ∂u ∂v ∂v
Also,  , , , are continuous functions for r > 0.
∂r ∂θ ∂r ∂θ
∴ f ( z ) is analytic.

Now, f ( z ) = f ( re iθ ) = u( r , θ ) + iv( r , θ )
∂u ∂v
⇒ f ′( re iθ )e iθ = +i
∂r ∂r
− iθ
 ∂u ∂v  e  θ θ
∴ f ′( z ) = e − iθ  + i  =  cos + i sin 
 ∂r ∂r  2 r  2 2

− iθ iθ 2
e .e 1 1
= = =

2 r 2 re 2 z

1 1  θ θ
∴ f ′( z ) = =  cos − i sin  .
2 z 2 r 2 2

1.5 Harmonic Functions
A real function f(x, y) is called harmonic function in a domain D if it satisfies Laplace equation
$\dfrac{\partial^2 f}{\partial x^2} + \dfrac{\partial^2 f}{\partial y^2} = 0$ and all its second order partial derivatives are continuous in D.
We now prove the harmonic property of real and imaginary parts of an analytic function.
Theorem 1.7 If f ( z ) = u( x, y ) + iv( x, y ) is analytic in some domain D, then u(x, y) and v(x, y)
are harmonic functions in D.

Proof: f ( z ) = u( x, y ) + iv( x, y ) is analytic in D.


\ By C–R equations
ux = vy(1.16)
uy= -vx(1.17)
Differentiate both sides of equation (1.16) partially w. r. t. x and equation (1.17) partially
w. r. t. y and add
uxx + uyy = vxy – vyx = 0 (Q vxy = vyx)
\ u(x, y) satisfies Laplace equation.
Now differentiate both sides of equation (1.16) partially w. r. t. y and equation (1.17) partially
w. r. t. x and subtract
uyx – uxy = vyy + vxx = 0 (Q uxy = uyx)

or vxx + vyy = 0

\ v(x, y) satisfies Laplace equation.


We shall prove in Theorem 1.20 that all order derivatives of an analytic function exist. Thus,
all partial derivatives of second order of u(x, y) and v(x, y) are continuous functions of x and y.
Hence, u(x, y) and v(x, y) are harmonic functions.
Remark 1.6: u(x, y) and v(x, y) are called conjugate harmonic of each other and conjugate har-
monic of each of these is unique except for an additive constant when f(z) = u(x, y)+ iv (x, y) is
analytic.
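As a quick illustration of Theorem 1.7, the following SymPy sketch (our own supplement; the choice f(z) = z³ is ours) confirms that the real and imaginary parts of an analytic function satisfy Laplace's equation.

```python
# Re f and Im f of f(z) = z^3 are harmonic.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.expand((x + sp.I*y)**3)
u, v = sp.re(f), sp.im(f)              # u = x^3 - 3*x*y^2, v = 3*x^2*y - y^3
print(sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2)))   # 0
print(sp.simplify(sp.diff(v, x, 2) + sp.diff(v, y, 2)))   # 0
```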

1.5.1 Orthogonal System of Level Curves


Two curves f(x, y) = c1, g(x, y) = c2 where c1 and c2 are constants are called system of level curves.
These two system of curves are orthogonal iff tangents to these curves at point of intersection
are perpendicular.
Theorem 1.8  Level curves u(x, y) = c1 and v(x, y) = c2 of analytic function
f (z) = u(x, y) + iv(x, y) form an orthogonal system.
Proof: Let (x, y) be point of intersection of
u(x, y) = c1(1.18)
v(x, y) = c2(1.19)
For curve with equation (1.18)
dy −ux
m1 = = (1.20)
dx u y
For curve with equation (1.19)
dy −v x
m2 = = (1.21)
dx vy


But f(z) = u(x, y) + iv(x, y) is analytic


\ By C–R equations, vx = –uy, vy = ux.
\ From equation (1.21)

m2 = −
( −u ) = u
y y
(1.22)
ux ux

From equations (1.20) and (1.22)
m1 m2 = –1
\ System of level curves u(x, y) = c1 and v(x, y) = c2 are orthogonal.

1.5.2 Method to Find Conjugate Harmonic Function


Let f(z) = u(x, y) + iv(x, y) be analytic in domain D and u(x, y) is given and we are to find v(x, y).
∂u ∂u
From given u(x, y) we find and .
∂x ∂y
Now, by C–R equation
∂v ∂u
=−
∂x ∂y

 ∂u 
v= ∫
y = constant
 − ∂y  dx + k ( y ) (1.23)

where k(y) is real function of y.
∂v ∂   ∂u   d ∂u
\ =  ∫  −  dx  + k ( y ) =    (By C–R equation)
∂y ∂y  y constant  ∂y   dy ∂x
d
Solving it we find k ( y ) and its integral will provide k(y).
dy
Substituting its value in equation (1.23), we find v.
Similarly, if v(x, y) is given we can find u(x, y).

1.5.3 Milne Thomson Method

Theorem 1.9  If f (z) = f (x, y) + iy(x, y) where z = x + iy


then f (z) = f (z, 0) + iy(z, 0)
Proof: We have z = x + iy, z = x − iy
z+ z z− z
∴ x= , y=
2 2i
Thus, f ( z ) ≡ φ ( x, y ) + iψ ( x, y )

can be written as
z+ z z− z z+ z z− z
f (z) ≡ φ  ,  + iψ  , 
 2 2i 2 2i  

This relation can be considered as a formal identity in the variables z and z .
In this identity replacing z by z we have f ( z ) = φ ( z , 0 ) + iψ ( z , 0 )
Remark 1.7:  (i)  Milne–Thomson method can be applied in both cases when f(z) is analytic or
not analytic.
(ii) If f ( z ) = f ( x + iy ) = φ ( x, y ) then f ( z ) = φ ( z , 0).
Method to find analytic function f(z) = u(x, y) + iv(x, y) as a function of z when one of
u(x, y) or v(x, y) is given.
When u(x, y) is given
f  ′(z) = ux + ivx = ux – iuy  (By C–R equation)
f  ′(z) = ux(x, y) - iuy (x, y)

By Milne–Thomson method
f  ′(z) = ux(z, 0) - iuy(z, 0)
\ f ( z ) = ∫ ux ( z , 0 ) − iu y ( z , 0 ) dz + ic

where c is arbitrary real constant.


Similarly, when v(x, y) is given

f ( z ) = ∫  v y ( z , 0 ) + iv x ( z , 0 ) dz + c

where c is arbitrary real constant.

Method to find f(z) = u(x, y) + iv(x, y) as function of z when u - v or u + v is given.


We have  f(z) = u(x, y) + iv(x, y)
∴  i f(z) = i u(x, y) − v(x, y)
∴  (1 + i) f(z) = [u(x, y) − v(x, y)] + i[u(x, y) + v(x, y)]
Let  F(z) = (1 + i) f(z),  U = u − v,  V = u + v.
If one of U or V is given, then by the Milne–Thomson method as explained above we can find F(z),
and then $f(z) = \dfrac{1}{1+i}F(z)$ can be found.
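The Milne–Thomson recipe above (for a given u) can be mechanized. A minimal SymPy sketch, using our own example u = x² − y² (not one from the text), is:

```python
# Milne-Thomson: f'(z) = u_x(z, 0) - i*u_y(z, 0), then integrate in z.
import sympy as sp

x, y, z = sp.symbols('x y z')
u = x**2 - y**2

fprime = sp.diff(u, x).subs({x: z, y: 0}) - sp.I*sp.diff(u, y).subs({x: z, y: 0})
f = sp.integrate(fprime, z)
print(f)    # z**2  (plus an arbitrary constant i*c)
```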

Method to find conjugate harmonic function in polar form.


Let f(z) = u(r, q) + iv(r, q) be analytic in a domain D and u(r, q) is given and we are to find v(r, q).
∂u ∂u
From given u(r, q) we find and .
∂r ∂θ
Now, by C–R equation
∂v 1 ∂u
=−
∂r r ∂θ
 1 ∂u 
v= ∫  − r ∂θ  dr + k (θ ) (1.24)
θ constant  

where k(q) is real function of q.
∂v ∂   1 ∂u   d ∂u
\ =  ∫ −  dr  + k (θ ) = r    (By C–R equation)
∂θ ∂θ θ constant  r ∂θ   dθ ∂r

d
Solving it we find k (θ ) and its integral will provide k(q).

Substituting its value in equation (1.24), we find v.
Similarly, if v(r, q) is given we can find u(r, q).
Milne–Thomson Method (polar form)
For any function
f(z) = u(r, θ) + iv(r, θ);  z = re^(iθ)
we can write
f(z) = u(z, 0) + iv(z, 0)
Proof: z = re^(iθ), z̄ = re^(−iθ)
\ r² = zz̄ and e^(2iθ) = z/z̄
\ r = √(zz̄), θ = (1/2i) Log(z/z̄)
\ f(z) = u(√(zz̄), (1/2i) Log(z/z̄)) + iv(√(zz̄), (1/2i) Log(z/z̄))
This can formally be considered as an identity in z and z̄.
Replacing z̄ by z, we have
f(z) = u(z, 0) + iv(z, 0)
Thus, if f(z) = f(re^(iθ)) = φ(r, θ) then f(z) = φ(z, 0).

Method to find f(z) = u(r, q) + iv(r, q) as a function of z when one of u(r, q) or v(r, q) is given.
Let u(r, q) be given. Then
f ′(z) = (1/(iz)) (∂u/∂θ + i ∂v/∂θ)    (from equation (1.11), where e^(−iθ) = r/z)
= (1/(iz)) (∂u/∂θ + ir ∂u/∂r)    (by C–R equation ∂u/∂r = (1/r) ∂v/∂θ)
= (r/z) ∂u/∂r + (1/(iz)) ∂u/∂θ
= (1/z) [r ur(r, θ) − i uθ(r, θ)]
By Milne–Thomson method
f ′(z) = (1/z) [z ur(z, 0) − i uθ(z, 0)]
\ f(z) = ∫ [ur(z, 0) − (i/z) uθ(z, 0)] dz + ic
where c is arbitrary real constant.
Similarly if v(r, q) is given, we can find f(z).
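The same idea in polar form can be sketched symbolically (sympy assumed; names are ours), here applied to the u of Example 1.24 later in this section:

import sympy as sp

r, t, z = sp.symbols('r t z')          # t stands for theta

def milne_thomson_polar(u):
    # f'(z) = u_r(z, 0) - (i/z) u_theta(z, 0), then integrate with respect to z
    ur = sp.diff(u, r).subs({r: z, t: 0})
    ut = sp.diff(u, t).subs({r: z, t: 0})
    return sp.integrate(ur - sp.I*ut/z, z)     # constant of integration omitted

u = (r - 1/r)*sp.sin(t)                # u of Example 1.24
print(milne_thomson_polar(u))          # -I*z - I/z, i.e. f(z) = -i(z + 1/z) + constant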

Example 1.15: If f(z) is an analytic function of z show that


 ∂2 ∂2  2 2
(i)  2 + 2  Re f ( z ) = 2 f ′( z )
 ∂x ∂y 
 ∂2 ∂2  2 2
(ii)  2 + 2  f ( z ) = 4 f ′( z )
 ∂x ∂y 
Solution: Let f ( z ) = u + iv be analytic.
2 2
\ Re f ( z ) = u 2 and f ( z ) = u 2 + v 2

∂ 2 ∂u ∂ 2 2 ∂  ∂u   ∂ 2 u  ∂u  2 
Now, u = 2u ⇒ 2 u =  2u  = 2 u 2 +   
∂x ∂x ∂x ∂x  ∂x   ∂x  ∂x   
∂ 2 2  ∂ u  ∂u 
2 2

Similarly, u = 2 u 2 +   
∂y  ∂y  ∂y  
2


 ∂2 ∂ 2
  ∂u ∂u
2 2
   ∂u  2  ∂u  2 
\ +
 ∂x 2 ∂y 2  u 2
= 2 u +
 ∂x 2 ∂y 2  + 2    +    (1)
  ∂x   ∂y  
Since f (z) is analytic, therefore u is harmonic and hence satisfies Laplace equation
∂2u ∂2u
+ = 0.
∂x 2 ∂y 2
Functions of Complex Variables  | 27

\ from (1)
 ∂2 ∂2  2
 ∂u  2  ∂u  2 
 2 +  Re f ( z ) = 2   +   
 ∂x ∂y 2   ∂x   ∂y  

  ∂u   ∂v 
2 2

= 2   +  −   (By C-R equation)
 ∂x   ∂x   
 ∂u  2  ∂v  2  ∂u ∂v
2

= 2   +    = 2 +i
 ∂x   ∂x   ∂x ∂x

2
= 2 f ′( z ) (2)

Similarly, we can prove that
 ∂2 ∂2  2 2
 ∂x 2 ∂y 2  Im f ( z ) = 2 f ′( z ) (3)
+

Add (2) and (3)
 ∂2 ∂2 
( )
 ∂x 2 ∂y 2  u + v = 4 f ′( z )
+ 2 2 2


 ∂ 2
∂ 2
2 2
⇒  ∂x 2 + ∂y 2  f ( z ) = 4 f ′( z )

Example 1.16: Verify that u = x² − y² − y is harmonic in the whole complex plane and find conjugate harmonic v of u.
Solution: u = x² − y² − y
\ ∂u/∂x = 2x, ∂u/∂y = −2y − 1
∂²u/∂x² = 2, ∂²u/∂y² = −2, ∂²u/∂x∂y = 0
\ ∂²u/∂x² + ∂²u/∂y² = 0
\  u satisfies Laplace equation and all second order partial derivatives of u are constants and
hence continuous functions.
\  u is harmonic in the whole complex plane.
Now, we find its conjugate harmonic v.
By C–R equations, ∂v/∂x = −∂u/∂y, ∂v/∂y = ∂u/∂x
\ ∂v/∂x = 2y + 1 (1)
∂v/∂y = 2x (2)
From (1),
v = ∫_[y constant] (2y + 1) dx + k(y) = x(2y + 1) + k(y)
where k(y) is real function of y.
\ ∂v/∂y = 2x + (d/dy) k(y) (3)
From (2) and (3),
(d/dy) k(y) = 0
⇒ k(y) = constant = c
where c is arbitrary real constant.
\ Conjugate harmonic of u is v = x (2y +1) +c where c is arbitrary real constant.
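As a quick independent check (sympy assumed), u and the v just found satisfy Laplace's equation and the C–R equations, and u + iv coincides with z² + iz up to the constant c; this last identification is our own observation, not stated in the text.

import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I*y
u = x**2 - y**2 - y
v = x*(2*y + 1)

print(sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2)))        # 0, so u is harmonic
print(sp.simplify(sp.diff(u, x) - sp.diff(v, y)),
      sp.simplify(sp.diff(u, y) + sp.diff(v, x)))              # 0 0, C-R equations hold
print(sp.simplify(sp.expand(z**2 + sp.I*z) - (u + sp.I*v)))    # 0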

Example 1.17: Show that the function u = e^(−2xy) sin(x² − y²) is harmonic. Find the conjugate function v and express u + iv as an analytic function.
Solution: u = e^(−2xy) sin(x² − y²) (1)
∂u
∂x
{ ( )
= e −2 xy −2 y sin x 2 − y 2 + 2 x cos x 2 − y 2 (2) ( )}

Interchanging x and y in R.H.S. of (1), we get -u
\ From (2), we have
∂u
∂y
{ (
= −e −2 xy −2 x sin y 2 − x 2 + 2 y cos y 2 − x 2 ) ( )}
{ ( )
= e −2 xy −2 x sin x 2 − y 2 − 2 y cos x 2 − y 2 ( )} (3)
From (2)
∂2u

∂x 2
{ (
= e −2 xy  −2 y −2 y sin x 2 − y 2 + 2 x cos x 2 − y 2
 ) ( )}



( ) (
−2 y ⋅ 2 x cos x 2 − y 2 + 2 cos x 2 − y 2 − 4 x 2 sin x 2 − y 2 

) ( )
( ) ( )
= e −2 xy  4 y 2 − x 2 sin x 2 − y 2 + ( 2 − 8 xy ) cos x 2 − y 2 ( ) (4)
∂2u
Interchanging x and y and changing the sign in R.H.S. of (4), we shall get
∂y 2
∂2u
\
∂y 2
( ) ( )
= −e −2 xy  4 x 2 − y 2 sin y 2 − x 2 + ( 2 − 8 xy ) cos y 2 − x 2 ( ) 

( ) ( )
= −e −2 xy  4 y 2 − x 2 sin x 2 − y 2 + ( 2 − 8 xy ) cos x 2 − y 2 ( ) (5)

Add (4) and (5)


∂2u ∂2u
+ =0
∂x 2 ∂y 2 
Also, all second order partial derivatives of u are continuous.
\ u is harmonic function.
Now, we find its conjugate harmonic v.
By C–R equations
∂v −∂u ∂v ∂u
= , =
∂x ∂y ∂y ∂x

∂v ∂u
\
∂x
=−
∂y
=e −2 xy
 ( 
 2 x sin x − y + 2 y cos x − y  (6)
2 2 2 2
) ( )
∂v ∂u
and =
∂y ∂x
{ (
= e −2 xy −2 y sin x 2 − y 2 + 2 x cos x 2 − y 2 (7) ) ( )}
From (6),
v= ∫ { (
e −2 xy 2 x sin x 2 − y 2 + 2 y cos x 2 − y 2 ) ( )} dx + k ( y)
y constant

where k(y) is real function of y.
\ (
v = −e −2 xy cos x 2 − y 2 + k ( y )

)
∂v  2 x cos x − y − 2 y sin x 2 − y 2  + d k ( y ) (8)
\
∂y
=e −2 xy

2 2
(  dy ) ( )
From (7) and (8),
d
k ( y) = 0
dy

⇒ k(y) = constant = c
where c is arbitrary real constant.
(
\  conjugate harmonic of u is v = −e −2 xy cos x 2 − y 2 + c where c is arbitrary real constant. )

(
f ( z ) = u + iv = e −2 xy sin x 2 − y 2 − ie −2 xy cos x 2 − y 2 + ic ) ( ) 
(
= −ie −2 xy cos x 2 − y 2 + i sin x 2 − y 2 ) ( ) + ic 
= −ie −2 xy e
(
i x2 − y2 ) = −iei ( x 2
− y 2 + 2 i xy ) + ic

i ( x + iy )
2
iz 2
= −ie + ic = −ie + ic where c is arbitrary real constant.

Example 1.18: If ω = φ + iψ represents the complex potential function for an electric field and
ψ = x² − y² + x/(x² + y²), find the functions φ and ω.

Solution: ω = φ + iψ is an analytic function.


x
ψ = x2 − y2 + 2
x + y2 
∂ψ x2 + y2 − 2x2 y2 − x2
\ = 2x + = 2x +
∂x ( ) ( )
2 2
x2 + y2 x2 + y2

∂ψ 2 xy
= −2 y −
∂y ( )
2
x2 + y2

By C–R equations
∂φ ∂ψ 2 xy
= = −2 y − (1)
∂x ∂y ( )
2
x + y2
2

∂φ
=−
∂ψ
= −2 x −
y2 − x2
(2)
( )
∂y ∂x ( )
2
x2 + y2

\ From (1),
 2 xy 
φ= ∫  −2 y − 2  dx + k ( y ), where k(y) is real function of y.
( )
2
y constant  x + y2 
y
\ φ = −2 xy + + k ( y)
( x2 + y2

)

\
∂φ
= −2 x +
( x + y ⋅1 − y ⋅ 2 y
2 2
) +
d
k ( y)
∂y (x )
2
2
+ y2 dy


= −2 x −
( y − x ) + d k ( y) (3)
2 2

( x + y ) dy
2
2 2

From (2) and (3)
d
k ( y) = 0
dy

⇒ k(y) = constant = c
where c is arbitrary real constant.
y
\ φ = −2 xy + +c
(x 2
+ y2 ) 
where c is arbitrary real constant.
y  x 
\ ω = φ + iψ = −2 xy + + i  x2 − y2 + 2 2 
+c
(x 2
+y 2
)  ( )
x + y 


( x − iy )
(
= i x 2 − y 2 + 2ixy + i ) ( x − iy ) ( x + iy ) + c 
 1 
= i ( x + iy ) +
2
+c
 ( x + iy )  
 1
\ ω = i  z 2 +  + c where c is arbitrary real constant.
 z

Example 1.19: Determine the analytic function whose real part is u = x 3 − 3 xy 2 + 3 x 2 − 3 y 2 + 2 x + 1


Also, prove that the given function satisfies Laplace equation.
Solution: u = x 3 − 3 xy 2 + 3 x 2 − 3 y 2 + 2 x + 1
∂u ∂u
\ = 3 x 2 − 3 y 2 + 6 x + 2, = −6 xy − 6 y
∂x ∂y 
∂2u ∂2u
\ = 6 x + 6, = −6 x − 6
∂x 2 ∂y 2

∂2u ∂2u
\ + =0
∂x 2 ∂y 2
\ u satisfies Laplace equation.
Let f (z) = u + iv be analytic function.
∂u ∂v ∂u ∂u
\ f ′( z ) = +i = −i (By C − R equation)
∂x ∂x ∂x ∂y 

( )
= 3 x 2 − 3 y 2 + 6 x + 2 + i ( 6 xy + 6 y )

By Milne–Thomson method, replace x by z and y by 0
f ′( z ) = 3 z 2 + 6 z + 2

\ f ( z ) = z 3 + 3 z 2 + 2 z + 1 + ic (∵ u contains constant 1) 
where c is arbitrary real constant.

Example 1.20: Determine the analytic function f (z) = u + iv, if v = log (x2 + y2) + x – 2y.
Solution:
v = log (x2 + y2) + x – 2y , (x, y) ≠ (0, 0)
∂v 2x ∂v 2y
= + 1, = −2
∂x x 2 + y 2 ∂y x 2 + y 2

If f(z) = u + iv is an analytic function, then
∂u ∂v ∂v ∂v
f ′( z ) = +i = +i (By C − R equation)
∂x ∂x ∂y ∂x 
 2y   2x 
= 2 − 2 + i 2 + 1 
x +y  x +y
2 2


By Milne–Thomson method, replace x by z and y by 0.


 2z  2i
f ′( z ) = −2 + i  2 + 1 = −2 + + i
 z +0  z
\ f ( z ) = −2 z + 2i log z + iz + c 
= ( i − 2 ) z + 2i log z + c

where c is arbitrary real constant.
Example 1.21: Find the analytic function f(z) = u + iv, given u + v = 2 sin 2x/(e^(2y) + e^(−2y) − 2 cos 2x)
Solution: f(z) = u + iv

\ if (z) = –v + iu
\ (1 + i) f (z) = u – v + i (u + v)
\ F(z) = U + iV where F(z) = (1 + i) f(z), U = u – v and V = u + v

f(z) is analytic ⇒ F(z) is analytic.


2 sin 2 x sin 2 x
V =u+v = −2 y
=
e 2y
+ e − 2 cos 2 x cosh 2 y − cos 2 x 

∂V 2 cos 2 x(cosh 2 y − cos 2 x ) − 2 sin 2 2 x 2(cos 2 x cosh 2 y − 1)


= =
∂x (cosh 2 y − cos 2 x ) 2 (cosh 2 y − cos 2 x ) 2 

∂V −2 sin 2 x sinh 2 y
=
∂y (cosh 2 y − cos 2 x ) 2

∂U ∂V ∂V ∂V
F ′( z ) = +i = +i (By C − R equation)
∂x ∂x ∂y ∂x 
−2 sin 2 x sinh 2 y (cos 2 x cosh 2 y − 1)
= + 2i
(cosh 2 y − cos 2 x ) 2
(cosh 2 y − cos 2 x ) 2 
By Milne–Thomson method, replace x by z and y by 0
2i ( cos 2 z − 1) 2i 2i
F ′( z ) = = = = −i cosec 2 z
(1 − cos 2 z )
2
( cos 2 z − 1) −2 sin 2
z

\ F ( z ) = i cot z + c where c is arbitrary real constant.
\ (1+ i ) f ( z ) = i cot z + c 
i c 1+ i 1− i
\ f ( z) = cot z + = cot z + c
( )
1 + i ( )
1 + i 2 2

1
\ f ( z ) = (1 + i ) cot z + (1 − i ) C 
2
c
where C = is arbitrary real constant.
2

Example 1.22: Determine the analytic function f(z) = u +iv, if


u − v = (cos x + sin x − e^(−y))/(2(cos x − cosh y)) and f(π/2) = 0
Solution: f(z) = u + iv
\ i f(z) = –v + iu
\ (1 + i) f(z) = u – v + i(u + v)
\ F(z) = U + iV where F(z) = (1 + i) f(z), U = u – v, V = u + v

(z) is analytic ⇒ F(z) is analytic


f 
cos x + sin x − e − y
U = u−v =
2(cos x − cosh y )

∂U (cos x − cosh y )( − sin x + cos x ) − (cos x + sin x − e − y )( − sin x )
=
∂x 2(cos x − cosh y ) 2

1 + (sin x − cos x ) cosh y − e − y sin x


= (1)
2(cos x − cosh y ) 2
∂U (cos x − cosh y )e − y + (cos x + sin x − e − y ) sinh y
=
∂y 2(cos x − cosh y ) 2

e − y (cos x − cosh y − sinh y ) + (cos x + sin x ) sinh y
= (2)
2(cos x − cosh y ) 2
∂U ∂V ∂U ∂U
F ′( z ) = +i = −i (By C − R equation)
∂x ∂x ∂x ∂y 
Substitute values from (1) and (2) and then applying Milne–Thomson method (replacing x by
z and y by 0), we have
1 + ( sin z − cos z ) − sin z − i ( cos z − 1) (1 + i ) (1 − cos z )
F ′( z ) = =
2 ( cos z − 1) 2 (1 − cos z )
2 2


1+ i 1+ i 1+ i  1 z 
= = =− − cosec 2 
2 (1 − cos z ) 2 z 2  2 2
4 sin
2 
1+ i z
\ F ( z) = − cot + ic  where c is arbitrary real constant.
2 2
1+ i z
\ (1 + i ) f ( z ) = − cot + ic 
2 2
1 z i
\ f ( z ) = − cot + c (3)
2 2 (1 + i )


π 1 i
f =− + c=0 (given) 
 2 2 (1 + i )
i 1
\ c=
(1 + i ) 2 
\ From (3)
1  z
f ( z) = 1 − cot 
2 2 
Example 1.23: Find analytic function f(z) = u (r, q) + iv (r, q) such that
v (r, q) = r2 cos 2q – r cos q + 2. Also find its harmonic conjugate.
Solution: v (r, q) = r2 cos 2q – r cos q + 2
∂v ∂v
\ = 2r cos 2θ − cos θ , = −2r 2 sin 2θ + r sin θ
∂r ∂θ 
By C–R equations
∂u 1 ∂v ∂u ∂v
= and = −r
∂r r ∂θ ∂θ ∂r 
∂u
= −2r sin 2θ + sin θ (1)
∂r
∂u
= −2r 2 cos 2θ + r cos θ (2)
∂θ
From (1),
u= ∫ ( −2r sin 2θ + sin θ ) dr + k (θ )
θ constant

where k(q) is real function of q.
\ u = – r2 sin2q + rsinq + k(q)
∂u d
\ = −2r 2 cos 2θ + r cos θ + k (θ ) (3)
∂θ dθ
From (2) and (3),
d
k (θ ) = 0
dθ 
⇒ k(q) = constant = c
where c is arbitrary real constant.
conjugate harmonic of v is u = – r2 sin2q + rsinq + c
and (
f ( z ) = u + iv = − r 2 sin 2θ + r sin θ + c + i r 2 cos 2θ − r cos θ + 2 ) 
= ir ( cos 2θ + i sin 2θ ) − ir ( cos θ + i sin θ ) + 2i + c
2

= ir 2 e 2iθ − ire iθ + 2i + c 
( )
= i z 2 − z + 2i + c

\ (
f ( z) = c + i z − z + 2
2
)
where c is arbitrary real constant.

 1
Example 1.24: If u ( r , θ ) =  r −  sin θ , r ≠ 0, find analytic function f(z) = u + iv.
 r
 1
Solution: u( r , θ ) =  r −  sin θ , r ≠ 0,
 r
∂u  1 ∂u  1
= 1 + 2  sin θ , =  r −  cos θ
∂r  r  ∂θ  r 
f ( z ) = f ( re i θ ) = u(r, q) + iv (r, q) is analytic function

∂u ∂v ∂u 1 ∂u
\ f ′( re iθ )e iθ = +i = −i (By C − R equatiion)
∂r ∂r ∂r r ∂θ 
 ∂u i ∂u 
\ f ′( z ) = e − iθ  −
 ∂r r ∂θ 

 1 i 1 
= e − iθ 1 + 2  sin θ −  r −  cos θ 
 r  r r  
By Milne–Thomson method for polar, replace r by z and q by 0
i 1 i
\ f ′ ( z ) = −  z −  = −i + 2
z z z 
i
\ f ( z ) = −iz − + ic
z 
where c is arbitrary real constant.
 1 
i.e., f ( z ) = −i  z + − c
 z 
where c is arbitrary real constant.

Example 1.25: Find the orthogonal trajectories of the family of curves r2 cos 2q = C1.
Solution: Let u (r, q) = r2 cos 2q = r2 (cos2 q – sin2 q).
Then the family of curves given by v = constant will be the required orthogonal trajectories if
f (z) = u + iv is analytic.
Now, u(x, y) = x2 – y2 (Q x = r cos q, y = r sin q)
∂u ∂u
\ = 2 x, = −2 y
∂x ∂y 
By C–R equations,
∂v ∂u ∂v ∂u
=− , =
∂x ∂y ∂y ∂x

∂v
= 2y (1)
∂x
∂v
= 2 x (2)
∂y


From (1),
v= ∫ 2 ydx + k ( y )
y constant

where k(y) is real function of y.
\ v = 2 xy + k ( y ) 
∂v d
\ = 2 x + k ( y ) (3)
∂y dy
From (2) and (3),
d
k ( y) = 0
dy

⇒ k(y) = constant
\ v = 2xy + constant

\ orthogonal trajectories v = constant is


2xy = constant
or r2 (2cosq sinq) = constant
or r2 sin 2q = constant
Note: We can also solve the above question directly in polar form.

Exercise 1.1

( x + y)2 z2
1. If f ( z) = then show that 4. Find the following limits (i) lim
x2 + y2 z2 + z − 2
z →0 z

(ii) lim .
lim lim f ( z )  = 1 and lim lim f ( z )  = 1 but lim f ( zz)→1 z − 1
 y →0
x →0   y →0  x →0  z →0

lim f ( z )  = 1 but lim f ( z ) does not exist. 5. Check the continuity of the function
 x →0  z →0
z
f ( z ) = , z ≠ 0 and f (0) = 0 at the point
x2 − y2 z
2. If f ( z ) = 2 , then show that
x + y2 z = 0.

lim lim f ( z )  ≠ lim lim f ( z ) . 6. Show that the function f(z) = xy + iy is


 y →0
x →0   y →0  x →0 
continuous everywhere but is not ana-
x2 y lytic.
3. Show that lim does not exist 7. Show that the function f(z) = Re z is con-
z →0 x 4 + y 2
tinuous but not differentiable.
even though this function approaches
d 2
the same limit along every straight line 8. Prove that
dz
( )
z z does not exist any-
through the origin. where except at z = 0.

9. Show that the function f(z) = z2 is differ- 22. Show that v(x, y) = –sin x sinh y is har-
entiable for all z and has the derivative monic. Find the conjugate harmonic of v
f ′(z) = 2z. and corresponding analytic function.
10. Show that the function 23. Show that u = e–x (x sin y – y cos y) is har-

f ( z) =
( ) , z ≠ 0, f ( 0 ) = 0
z
2
monic and find the conjugate harmonic
satisfies of u and corresponding analytic function.
z
the C–R equations at the origin yet f ′(0) 24. If f(z) = u + iv is an analytic function where
does not exist. sin 2 x
u= , then find f(z).
11. Discuss the analyticity of the function cosh 2 y − cos 2 x
f (z) = z z .
25. Show that the function u(x, y) = 4xy – 3x + 2
12. Show that the function z z is not ana- is harmonic. Construct the corresponding
lytic anywhere. analytic function f (z) = u(x, y) + iv(x, y).
13. Show that the real and imaginary parts of Express f (z) in terms of complex­
the function w = Log z satisfy the Cauchy– variable z.
Riemann equations when z is not zero. 26. Find an analytic function w = u + iv given
14. Find the point where the Cauchy–­ x
Riemann equations are satisfied for the that v = 2 + cosh x cos y
+
x y2
function f(z) = xy2 + ix2y. When does f ′(z)
27. If u = x2 – y2, find a corresponding ana-
exist? Where f(z) is analytic?
lytic function.
15. Find the constants a and b so that the
function f(z) = a (x2 – y2) + ib xy + c is 28. Find the analytic function whose real part
analytic at all points. is e2x ( x cos 2 y − y sin 2 y ).
16. Show that the function v(x, y) = ex sin y 29. Find the analytic function f(z) = u + iv,
is harmonic. Find its conjugate harmonic given that v = e x ( x sin y + y cos y )
function u(x, y) and the corresponding 30. If f(z) = u + iv is an analytic function and
analytic function f (z).
( )
u − v = ( x − y ) x 2 + 4 xy + y 2 , find f ( z ) .
17. Show that the function
u(x, y) = 2x + y3 – 3x2y is harmonic. Find 31. If f(z) = u + iv is an analytic function of
its conjugate harmonic function v(x, y) and z and u +v = ex (cos y + sin y), find f(z) in
the corresponding analytic function f (z). terms of z.
18. Let f(z) = u + iv be an analytic function. If 32. If f(z) = u + iv is an analytic function and
u = 3x – 2xy, then find v and express f (z) e y − cos x + sin x
in terms of z. u−v = find f(z) subject
cosh y − cos x
19. Prove that the function π  3−i
u = 3x2y + 2x2– y3 – 2y2 is harmonic and to the condition f   = .
2 2
find its harmonic conjugate. Hence find
f(z) as function of z. 33. Show that the function u(r, q) = r2 cos2q
is harmonic. Find its conjugate harmonic
20. Is the function u(x, y) = 2xy + 3xy2 – 2y3
function and the corresponding analytic
harmonic?
y function f(z).
21. Prove that u = x2 – y2 and v = 2 are
x + y2 34. Find p such that the function f (z)
harmonic functions of (x, y) but are not expressed in polar coordinates as
­
harmonic conjugates. f (z) = r2cos2q + ir2sin pq is analytic.

35. If f(z) = u(r, q) + iv(r, q) is an analytic r = 0, and that its derivative is


function where u = -r3 sin 3q, then find nrn –1 [cos (n–1) q + isin (n –1) q].
f(z) in terms of z. 38. Find the orthogonal trajectories of the
36. For what values of z, the function family of curves x3y – xy3 = constant.
w ­ defined by z = log r + if where 39. Given w = z3 where w = u + iv, show
w = r (cos f + i sin f) ceases to be analytic? that the family of curves u = c1 and
37. If n is real, show that rn (cos nq + isin nq) v = c2 (where c1 and c2 are constants) are
is analytic except possibly when ­orthogonal to each other.

Answers 1.1

  4.  (i) 0 (ii) 3 5.  Discontinuous at z = 0 11.  f(z) is analytic nowhere.

14. C-R equations are satisfied only at z = 0, f ′(z) exists only for z = 0 and f(z) is analytic
­nowhere.

15.  a = k, b =2k where k is any real number.

In all the answers below, c is an arbitrary real constant.

16.  u(x, y) = ex cos y + c; f(z) = ez + c

17.  v = x3 + 2y – 3xy2 + c; f(z) =2z + iz3 + ic

18.  v = x2 – y2 + 3y + c; f(z) = iz2 + 3z + ic

19. v = –x3 + x (3y2 + 4y) + c; f(z) = –iz3 + 2z2 + ic

20.  Not harmonic 22.  u = cos x cosh y; f(z) = cos z + c

23. v = e–x (x cos y + ysin y) + c; f(z) = ize– z + ic 24. f(z) = cot z + ic

25. f(z) = -2iz2 – 3z + 2 + ic 26. f ( z ) = i  + cosh z  + c


1
z 
27. f(z) = z + ic 28. f(z) = ze + ic
2 2z

29. f(z) = zez + c 30. f(z) = –iz3 + (1+ i ) c


z 1− i
31. f(z) = ez + (1 - i) c 32. f ( z ) = cot +
2 2
33.  v(r, q) = r2 sin 2q + c; f(z) = z2 + ic 34. 
p=2

35. f(z) = iz3 + ic 36. 


w ceases to be analytic nowhere.

38.  x4 + y4 – 6x2y2 = constant.



1.6  Line integral in the complex plane


We shall be requiring certain terms in the definition of line integral and thus we are defining
them.

1.6.1 Continuous Curve or Arc


Any path C in complex plane joining the points z(a) and z(b) defined by parametric equation
z(t) = x(t) + iy(t); a ≤ t ≤ b where x(t), y(t) are continuous functions of t is called a continuous
curve or arc.
If the curve does not intersect itself then it is called a simple curve otherwise a multiple curve.
If z(a) = z(b), i.e., the starting and end points coincide then the curve is called a closed curve.
A closed curve which does not intersect itself is called a simple closed curve.

1.6.2 Smooth Curve or Smooth Arc


A continuous curve z(t) = x(t) + iy(t); a ≤ t ≤ b is called smooth curve or smooth arc if x ′(t ) and
y ′(t ) are continuous functions of t in [a, b] and z′(t) ≠ 0 for any t in [a, b].

1.6.3 Piecewise Continuous Curve


If there exists a partition a = t0 < t1 < t2 …< tn – 1 < tn = b of [a, b] such that the curve z(t) = x(t) + iy(t)
is continuous curve in every subinterval (tk – 1, tk); k = 1, 2, 3, …, n then the curve z(t) = x(t) + iy(t)
is called piecewise continuous curve in [a, b].

1.6.4 Piecewise Smooth Curve


If there exists a partition a = t0 < t1 < t2 …< tn–1 < tn = b of [a, b] such that the curve z(t) = x(t) + iy(t)
is smooth curve in every subinterval (tk–1, tk); k = 1, 2, 3, …, n then the curve z(t) = x(t) + iy(t) is
called piecewise smooth curve in [a, b].

1.6.5 Contour
A piecewise continuous closed smooth curve is called a contour.

1.6.6  Line Integral


Let z(t) = x(t) + iy(t); a ≤ t ≤ b be a piecewise smooth curve C and a = t0 < t1 < t2 < … < tn–1 < tn = b
be a partition of [a, b] such that z(t) is continuous on each subinterval (tk–1, tk).
Let ξk ∈ (tk−1, tk), k = 1, 2, …, n.
If lim_(n→∞) Σ_(k=1)^(n) f(ξk) Δzk; Δzk = zk − zk−1; exists when Δzk → 0 then this limit is called the line integral of f(z) over C and is written as ∫_C f(z) dz. If C is a closed curve, i.e., z(a) = z(b), then ∫_C f(z) dz may be written as ∮_C f(z) dz. Here, C is the path of integration.

In the case of real variables, the path of integration of ∫_a^b f(x) dx is always along the real line from x = a to x = b, but in the case of a complex function f(z), the path of integration of the definite integral ∫_a^b f(z) dz can be along any piecewise continuous simple curve from z = a to z = b. Its value depends on the path of integration. In 1.7.3, we shall prove that the integral is independent of path if f(z) is analytic in a certain domain containing C.
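This path dependence can be seen numerically. The sketch below assumes Python with numpy; the helper name contour_integral is ours. It samples a parametrization z(t), sums f(z) dz by the midpoint rule, reproduces two of the values worked out in Example 1.26 for the non-analytic integrand x − y + ix², and returns 2πi for dz/(z − a) around a circle.

import numpy as np

def contour_integral(f, zpath, n=20000):
    t = np.linspace(0.0, 1.0, n + 1)
    z = zpath(t)
    mid = f(0.5*(z[1:] + z[:-1]))              # midpoint of each small segment
    return np.sum(mid*np.diff(z))

f = lambda z: z.real - z.imag + 1j*z.real**2

line = lambda t: (1 + 1j)*t                                      # straight line 0 -> 1+i
bent = lambda t: np.where(t < 0.5, 2*t + 0j, 1 + 1j*(2*t - 1))   # along y = 0, then x = 1
circ = lambda t: 2 + np.exp(2j*np.pi*t)                          # circle |z - 2| = 1

print(contour_integral(f, line))                   # about -1/3 + i/3
print(contour_integral(f, bent))                   # about -1/2 + 5i/6  (a different value)
print(contour_integral(lambda z: 1/(z - 2), circ)) # about 2*pi*i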

Example 1.26: Evaluate ∫_0^(1+i) (x − y + ix²) dz
(i) along the straight line from (0, 0) to (1, 1)
(ii) over the path along the lines y = 0 and x = 1
(iii) over the path along the lines x = 0 and y = 1
(iv) along the path x = y2
(v) along the curve C ; x = t, y = t2.
Solution:  (i)  Equation of straight line from (0, 0) to (1, 1) is y = x and x varies from 0 to 1.
\ z = x + iy = (1 +i) x ⇒ dz = (1 + i)dx
\ ∫_0^(1+i) (x − y + ix²) dz = ∫_0^1 (x − x + ix²)(1 + i) dx = i(1 + i) [x³/3]_0^1 = (1/3)(i − 1)
(ii) over the path along the lines y = 0 and x = 1, i.e., along the lines OA and AB (see Figure 1.1)
Along the line OA
y = 0 and so z = x ⇒ dz = dx and x varies from 0 to 1.
Along the line AB
x = 1 and so z = 1 + iy ⇒ dz = idy and y varies from 0 to 1.
Figure 1.1  (the square with vertices O(z = 0), A(1, 0), B(z = 1 + i), C(0, 1))
\ ∫_0^(1+i) (x − y + ix²) dz = ∫_0^1 (x + ix²) dx + ∫_0^1 (1 − y + i) i dy
= [x²/2 + ix³/3]_0^1 + i[(1 + i)y − y²/2]_0^1
= 1/2 + i/3 + i(1 + i − 1/2) = −1/2 + (5/6)i

(iii) over the path along the lines x = 0 and y = 1, i.e., along the lines OC and CB (see Figure 1.1).
Along the line OC, x = 0 and so z = iy ⇒ dz = idy and y varies from 0 to 1.
Along the line CB, y = 1 and so z = x + i ⇒ dz = dx and x varies from 0 to 1.
\ ∫_0^(1+i) (x − y + ix²) dz = ∫_0^1 (−y)(i dy) + ∫_0^1 (x − 1 + ix²) dx
= [−iy²/2]_0^1 + [(x − 1)²/2 + ix³/3]_0^1
= −i/2 − 1/2 + i/3 = −1/2 − i/6
(iv) Along the path x = y², z = x + iy = y² + iy
\ dz = (2y + i) dy
when z → 0, y → 0 and when z → 1 + i, y → 1
\ ∫_0^(1+i) (x − y + ix²) dz = ∫_0^1 (y² − y + iy⁴)(2y + i) dy
= ∫_0^1 [2y³ − 2y² − y⁴ + i(y² − y + 2y⁵)] dy
= [y⁴/2 − (2/3)y³ − y⁵/5 + i(y³/3 − y²/2 + y⁶/3)]_0^1
= 1/2 − 2/3 − 1/5 + i(1/3 − 1/2 + 1/3) = −11/30 + i/6
(v) Along the curve C: x = t, y = t²
z = x + iy = t + it²
\ dz = (1 + 2it) dt
As z → 0, t → 0 and as z → 1 + i, t → 1
\ ∫_0^(1+i) (x − y + ix²) dz = ∫_0^1 (t − t² + it²)(1 + 2it) dt
= ∫_0^1 [t − t² − 2t³ + i(t² + 2t² − 2t³)] dt
= [t²/2 − t³/3 − t⁴/2 + i(t³ − t⁴/2)]_0^1
= 1/2 − 1/3 − 1/2 + i(1 − 1/2) = −1/3 + i/2

Example 1.27: Evaluate ∮_C (z − a)^n dz where C is the circle with center ‘a’ and radius r and n is an integer. Also, discuss the case when n = −1, that is, evaluate ∮_C dz/(z − a).

Solution: On C, z − a = re^(iθ); 0 ≤ θ < 2π
\ dz = ire^(iθ) dθ
\ ∮_C (z − a)^n dz = ∫_0^(2π) r^n e^(inθ) · ire^(iθ) dθ = [r^(n+1) e^(i(n+1)θ)/(n + 1)]_0^(2π);  n ≠ −1
= (r^(n+1)/(n + 1)) {e^(2πi(n+1)) − 1} = (r^(n+1)/(n + 1)) (1 − 1) = 0;  n ≠ −1
When n = −1,
∮_C dz/(z − a) = ∫_0^(2π) (ire^(iθ) dθ)/(re^(iθ)) = i ∫_0^(2π) dθ = 2πi


Example 1.28: Evaluate ∫_C |z| dz where C is the contour
(i) the straight line from z = −i to z = i
(ii) left half of the unit circle |z| = 1 from z = −i to z = i.
Solution: (i)
Figure 1.2 (a)

On straight line from z = −i to z = i, x = 0
\ z = iy ⇒ dz = idy and |z| = |y|
and y varies from −1 to 1.
\ ∫_C |z| dz = ∫_(−1)^1 |y| i dy = 2i ∫_0^1 y dy   (even function)
= 2i [y²/2]_0^1 = i

(ii)
Figure 1.2 (b)

On left half of unit circle |z| = 1 from z = −i to z = i,
z = e^(iθ); θ varies from −π/2 to −3π/2
\ |z| = 1 and dz = ie^(iθ) dθ
\ ∫_C |z| dz = ∫_(−π/2)^(−3π/2) ie^(iθ) dθ = [e^(iθ)]_(−π/2)^(−3π/2) = e^(−3πi/2) − e^(−iπ/2) = i − (−i) = 2i
Example 1.29: Evaluate the integral ∫_C (2z + 3)/z dz, where C is
(a) upper half of the circle |z| = 2 in the clockwise direction,
(b) lower half of the circle |z| = 2 in the anti-clockwise direction and
(c) the circle |z| = 2 in the anti-clockwise direction.

Solution: |z| = 2 ⇒ z = 2e^(iθ)
\ dz = 2ie^(iθ) dθ
\ ((2z + 3)/z) dz = ((4e^(iθ) + 3)/(2e^(iθ))) · 2ie^(iθ) dθ = i(4e^(iθ) + 3) dθ
(a) On the upper half of |z| = 2 in clockwise direction, θ varies from π to 0
\ ∫_C ((2z + 3)/z) dz = ∫_π^0 i(4e^(iθ) + 3) dθ = [4e^(iθ) + 3iθ]_π^0 = 4 − (−4 + 3iπ) = 8 − 3πi
(b) On the lower half of |z| = 2 in anti-clockwise direction, θ varies from π to 2π
\ ∫_C ((2z + 3)/z) dz = ∫_π^(2π) i(4e^(iθ) + 3) dθ = [4e^(iθ) + 3iθ]_π^(2π) = (4 + 6πi) − (−4 + 3iπ) = 8 + 3πi
(c) On circle |z| = 2 in anti-clockwise direction, θ varies from 0 to 2π
\ ∫_C ((2z + 3)/z) dz = ∫_0^(2π) i(4e^(iθ) + 3) dθ = [4e^(iθ) + 3iθ]_0^(2π) = (4 + 6πi) − 4 = 6πi

1.7 Cauchy Integral Theorem


We first define the following.

1.7.1 Simply Connected Domain


If every two points in a domain can be joined by a curve all points of which lie in domain then the
domain is called a connected domain. A connected domain D bounded by a simple closed curve
is called simply connected domain if every closed curve C1 inside D encloses only points of D.
Thus, in a simply connected domain if any closed curve inside it is shrunk then it will shrunk to a
point inside it without leaving D. Figure 1.3 shows simply connected domain D by closed curve
C in which C1 is any curve with the stated properties.
Figure 1.3

1.7.2 Multiply Connected Domain


A domain D bounded by two or more than two simple closed curves such that any two points
lying inside it can be joined by a simple curve which totally lies inside the domain D is called
multiply connected domain. Multiply connected domain cannot shrink to a point. The multicon-
nected domain has holes in it. If it has one hole then it is doubly connected and if it has two holes
then it is triply connected. Let D1 be removed from simply connected domain D then it will be
doubly connected: r < | z | < R is doubly connected. Figures 1.4 and 1.5 respectively show doubly
connected and triply connected domains.
Figure 1.4    Figure 1.5

Any multiply connected domain can be converted into a simply connected domain by intro-
ducing cuts in the domain. In doubly connected domain in Figure 1.4 introduce cut AB as shown
in Figure 1.6 and the new domain can be viewed as in Figure 1.7.
Figure 1.6    Figure 1.7

Note that here C1* is in opposite sense to C1.


Similarly by introducing two cuts AB and PQ to triply connected domain in Figure 1.5, we get
the simply connected domain as shown in Figure 1.8.
Figure 1.8

We now state and prove Cauchy integral theorem.


Theorem 1.10  (Cauchy integral theorem)
If a function f(z) is analytic and its derivative f  ′(z) is continuous in a simply connected domain
D and C is any simple closed curve contained in D then

∫ f ( z ) dz = 0
C 
Figure 1.9

Proof: Let f ( z ) = u( x, y ) + iv( x, y ). Then

∫_C f(z) dz = ∫_C (u + iv)(dx + i dy)
= ∫_C (u dx − v dy) + i ∫_C (v dx + u dy) (1.25)

Let R be the region enclosed within C.


Now, ∂u/∂x, ∂u/∂y, ∂v/∂x, ∂v/∂y are continuous in R.
\ By Green’s theorem in plane
∫_C (u dx − v dy) = ∫∫_R (−∂v/∂x − ∂u/∂y) dx dy (1.26)
and ∫_C (v dx + u dy) = ∫∫_R (∂u/∂x − ∂v/∂y) dx dy (1.27)
But f(z) is analytic in R.
\ By C–R equations
∂u/∂x = ∂v/∂y, ∂u/∂y = −∂v/∂x
\ From equations (1.26) and (1.27)
∫_C (u dx − v dy) = ∫∫_R (−∂v/∂x + ∂v/∂x) dx dy = 0
and ∫_C (v dx + u dy) = ∫∫_R (∂v/∂y − ∂v/∂y) dx dy = 0
\ From equation (1.25)
∫_C f(z) dz = 0 + i0 = 0

Remark 1.8: The condition that D is simply connected is necessary. But Cauchy and Goursat
proved that the condition of f  ′(z) is continuous can be relaxed.
Theorem 1.11  (Cauchy–Goursat theorem)
Let f(z) be analytic in a simply connected domain D and C be any simple closed curve contained
in D then

∫ f ( z )dz = 0
C

The proof of this theorem is beyond the scope of this book. From this theorem we can prove the
following result of independence of path of definite integral when integrand is analytic function.

1.7.3  Independence of Path

Theorem 1.12  Let f(z) be analytic in a simply connected domain D and C be any path con-
tained in D joining two points z1 and z2 in D then ∫ f ( z )dz is independent of the path C and
depends only on z1 and z2. C

Proof: Consider the simple paths C1 and C2 in D from z1 to z2. Let C2* denote the path C2 with
its orientation reversed then C1 and C2* constitute a simple closed curve inside D. Then by­
Cauchy–Goursat theorem

∫_(C1 and C2*) f(z) dz = 0
Figure 1.10

\ ∫_(C1) f(z) dz + ∫_(C2*) f(z) dz = 0
\ ∫_(C1) f(z) dz − ∫_(C2) f(z) dz = 0   (∵ orientations of C2 and C2* are reverse of each other)
\ ∫_(C1) f(z) dz = ∫_(C2) f(z) dz
Thus, the integrals of f(z) from z1 to z2 along paths C1 and C2 are equal. But the paths C1 and C2 joining z1 and z2 are arbitrary. Hence the integral of f(z) is independent of the path C joining z1 and z2 and depends only on z1 and z2, and thus the integral can be written as ∫_(z1)^(z2) f(z) dz.
z1

1.7.4  Integral Function


Let f(z) be an analytic function in a simply connected domain D. Let z0 be a fixed point in D and z be any point in D; then ∫_(z0)^(z) f(z) dz is independent of path and hence depends on z only.
F(z) = ∫_(z0)^(z) f(z) dz
is called the integral function of f(z).

1.7.5 Fundamental Theorem of Integral Calculus

Theorem 1.13  Let f(z) be an analytic function in a simply connected domain D and F(z) the integral function of f(z); then
∫_(z0)^(z1) f(z) dz = F(z1) − F(z0)
where z0 and z1 are in D.



Proof: F(z) = ∫_(z0)^(z) f(z) dz is the integral function of f(z).
Let z be a fixed point in D. Now, f(ξ) is analytic and hence continuous at z. Thus, corresponding to ε > 0 there exists δ > 0 such that
|f(ξ) − f(z)| < ε when |ξ − z| < δ (1.28)
Now, we take Δz such that |Δz| < δ and the line segment joining z and z + Δz lies inside D, and take the path of integration along this line segment.
Now
(F(z + Δz) − F(z))/Δz − f(z) = (1/Δz) [ ∫_(z0)^(z+Δz) f(ξ) dξ − ∫_(z0)^(z) f(ξ) dξ − ∫_(z)^(z+Δz) f(z) dξ ]   (∵ ∫_(z)^(z+Δz) f(z) dξ = f(z) Δz)
= (1/Δz) ∫_(z)^(z+Δz) [f(ξ) − f(z)] dξ
\ |(F(z + Δz) − F(z))/Δz − f(z)| = |(1/Δz) ∫_(z)^(z+Δz) [f(ξ) − f(z)] dξ|
≤ (1/|Δz|) ∫_(z)^(z+Δz) |f(ξ) − f(z)| |dξ|
< (1/|Δz|) ε |Δz| = ε   (from 1.28)
\ lim_(Δz→0) [(F(z + Δz) − F(z))/Δz − f(z)] = 0
\ lim_(Δz→0) (F(z + Δz) − F(z))/Δz = f(z)
\ F ′(z) = f(z)
Since z is an arbitrary point in D
\ F(z) is analytic in D and F ′(z) = f(z).
Thus, F(z) is an indefinite integral of f(z).
\ F(z) = ∫ f(z) dz
Let H(z) be another indefinite integral of f(z).
Then, H ′(z) = f(z)
\ F ′(z) = H ′(z)
\ F(z) = H(z) + C = ∫_(z0)^(z) f(z) dz (1.29)
Take limit as z → z0
F(z0) = H(z0) + C = 0
\ C = −H(z0)
\ F(z) = H(z) − H(z0) = ∫_(z0)^(z) f(z) dz
Take limit as z → z1
H(z1) − H(z0) = ∫_(z0)^(z1) f(z) dz
\ ∫_(z0)^(z1) f(z) dz = H(z1) − H(z0) = (F(z1) − C) − (F(z0) − C)   (from 1.29)
= F(z1) − F(z0)
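A small symbolic check of this theorem (sympy assumed; the integrand is the one from Example 1.30(iv) later in this section): evaluate the path integral along a straight segment and compare with F(z1) − F(z0).

import sympy as sp

t = sp.symbols('t', real=True)
f = lambda z: 3*z**2 + 4*z + 1          # analytic integrand (Example 1.30(iv))
F = lambda z: z**3 + 2*z**2 + z         # an integral function (antiderivative) of f
z0, z1 = 0, 1 + sp.I

zt = z0 + (z1 - z0)*t                   # straight segment from z0 to z1
path_value = sp.integrate(f(zt)*sp.diff(zt, t), (t, 0, 1))
print(sp.simplify(path_value - (F(z1) - F(z0))))     # 0, as the theorem asserts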

1.7.6 Extension of Cauchy–Goursat Theorem for Multiply


Connected Domains
Theorem 1.14  Let D be a doubly connected domain with outer boundary curve C and inner boundary curve C1. If f(z) is analytic in D and on C and C1 then ∫_C f(z) dz = ∫_(C1) f(z) dz.
Proof: Giving a cut AB, we have a simply connected domain as shown in Figure 1.11. Here the orientation of C1* is opposite to the orientation of C1 as C1* is clockwise and C1 is anticlockwise.
Figure 1.11

Thus, by Cauchy–Goursat theorem
∫_C f(z) dz + ∫_(AB) f(z) dz + ∫_(C1*) f(z) dz + ∫_(BA) f(z) dz = 0
But ∫_(AB) f(z) dz = −∫_(BA) f(z) dz
\ ∫_C f(z) dz − ∫_(C1) f(z) dz = 0   (∵ C1* = −C1)
\ ∫_C f(z) dz = ∫_(C1) f(z) dz

Remark 1.9: If domain D is multiply connected with outer boundary C and inner boundaries C1, C2, …, Cn such that C1, C2, …, Cn do not intersect and f(z) is analytic in D and C, C1, …, Cn then
∫_C f(z) dz = Σ_(k=1)^(n) ∫_(Ck) f(z) dz
Figure 1.12

1.8 Cauchy Integral formula


Theorem 1.15  Let f(z) be analytic in a simply connected domain D, ‘a’ be any point in D and C be any closed curve in D enclosing ‘a’; then
f(a) = (1/2πi) ∫_C f(z)/(z − a) dz
where C is traversed in anti-clockwise direction.


Proof: Let C1 be circle |z − a| = r such that C1 lies entirely within C. Then the region D1 bounded by C and C1 is doubly connected and f(z)/(z − a) is analytic in this region and on C and C1.
Figure 1.13
\ By Cauchy–Goursat theorem for doubly connected regions
∫_C f(z)/(z − a) dz = ∫_(C1) f(z)/(z − a) dz = ∫_(C1) (f(z) − f(a))/(z − a) dz + f(a) ∫_(C1) dz/(z − a)
= I1 + f(a) ∫_0^(2π) (ire^(iθ) dθ)/(re^(iθ))   (∵ on C1, z − a = re^(iθ); 0 ≤ θ < 2π)
= 2πi f(a) + I1 (1.30)
where I1 = ∫_(C1) (f(z) − f(a))/(z − a) dz
Now, f(z) is analytic at z = a and hence continuous at z = a.
\ lim_(z→a) f(z) = f(a)
\ Corresponding to ε > 0, however small, there exists δ > 0 such that |f(z) − f(a)| < ε when |z − a| < δ.
We take r such that r < δ, then
|f(z) − f(a)| < ε when |z − a| = r
\ |I1| ≤ ∫_(C1) |f(z) − f(a)|/|z − a| |dz| < (ε/r) · 2πr = 2πε   (∵ ∫_(C1) |dz| = 2πr)
But ε is arbitrary, therefore I1 = 0.
\ From equation (1.30), f(a) = (1/2πi) ∫_C f(z)/(z − a) dz
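A numerical check of this formula (numpy assumed; the helper name cauchy_value is ours): approximating the contour integral over a circle by a Riemann sum recovers f(a) for an analytic f.

import numpy as np

def cauchy_value(f, a, center=0.0, radius=2.0, n=4000):
    t = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    z = center + radius*np.exp(1j*t)            # circle C, traversed anticlockwise
    dz = 1j*radius*np.exp(1j*t)*(2*np.pi/n)     # dz = i R e^{it} dt
    return np.sum(f(z)/(z - a)*dz)/(2j*np.pi)

a = 0.3 + 0.4j                                  # any point inside C
print(cauchy_value(np.exp, a), np.exp(a))       # the two numbers agree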

1.8.1 Cauchy Integral Formula for Derivatives of Analytic Function


Using Cauchy integral formula, we shall establish that if f (z) is analytic in domain D, then its
derivatives of all orders exist and are also analytic in D. Such a result does not exist for functions
of real variables.
Theorem 1.16  Let f(z) be analytic in a simply connected domain D. Let ‘a’ be any point in
D and C be any simple closed curve in D enclosing point z = a. Then, f(z) has derivatives of all
order in D which are also analytic in D. Further,
f^(n)(a) = (n!/2πi) ∫_C f(z)/(z − a)^(n+1) dz;  n = 1, 2, 3, …
Proof: z = a lies inside C. For Δa sufficiently small, a + Δa will lie within C.
\ By Cauchy integral formula
f(a) = (1/2πi) ∫_C f(z)/(z − a) dz
and f(a + Δa) = (1/2πi) ∫_C f(z)/(z − a − Δa) dz
\ (f(a + Δa) − f(a))/Δa = (1/(2πi Δa)) ∫_C [1/(z − a − Δa) − 1/(z − a)] f(z) dz
= (1/2πi) ∫_C f(z)/((z − a)(z − a − Δa)) dz
= (1/2πi) ∫_C ((z − a − Δa) + Δa)/((z − a)²(z − a − Δa)) f(z) dz
= (1/2πi) ∫_C [1/(z − a)² + Δa/((z − a)²(z − a − Δa))] f(z) dz
= (1/2πi) ∫_C f(z)/(z − a)² dz + I1 (1.31)
where I1 = (Δa/2πi) ∫_C f(z)/((z − a)²(z − a − Δa)) dz
Let the minimum distance of any z on C from z = a be d; then
|z − a|² ≥ d² and |z − a − Δa| ≥ |z − a| − |Δa| ≥ d − |Δa|
and f(z) is continuous at all points on C
\ |f(z)| ≤ M for some M
\ |I1| = (|Δa|/2π) |∫_C f(z)/((z − a)²(z − a − Δa)) dz|
≤ (|Δa|/2π) ∫_C |f(z)|/(|z − a|² |z − a − Δa|) |dz|
≤ (|Δa| M)/(2π d²(d − |Δa|)) ∫_C |dz| = (|Δa| M L)/(2π d²(d − |Δa|))
where L = length of closed curve C
\ lim_(Δa→0) I1 = 0
\ from (1.31)
lim_(Δa→0) (f(a + Δa) − f(a))/Δa = (1/2πi) ∫_C f(z)/(z − a)² dz
\ f ′(a) = (1/2πi) ∫_C f(z)/(z − a)² dz
But ‘a’ is any point in D
\ f ′(z) is analytic in D
\ If f(z) is analytic in D, then f ′(z) is analytic in D
and f ′(a) = (1/2πi) ∫_C f(z)/(z − a)² dz for any a in D.
Now, replacing Cauchy integral formula for f(a) by the above expression for f ′(a) and repeating the above process, we shall have f ″(z) is analytic in D and
f ″(a) = (2!/2πi) ∫_C f(z)/(z − a)³ dz
Continuing this procedure, we shall have f^(n)(z) is analytic in D for all n ∈ N and
f^(n)(a) = (n!/2πi) ∫_C f(z)/(z − a)^(n+1) dz
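The derivative formula can be checked numerically in the same way (numpy assumed; names are ours); for f(z) = e^z every derivative is again e^z.

import numpy as np
from math import factorial

def cauchy_derivative(f, a, n, radius=2.0, m=4000):
    t = np.linspace(0.0, 2*np.pi, m, endpoint=False)
    z = a + radius*np.exp(1j*t)                 # circle centred at a
    dz = 1j*radius*np.exp(1j*t)*(2*np.pi/m)
    return factorial(n)/(2j*np.pi)*np.sum(f(z)/(z - a)**(n + 1)*dz)

a = 0.5 - 0.2j
print(cauchy_derivative(np.exp, a, 3), np.exp(a))   # both approximately e^a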

1.8.2 Morera’s Theorem (Converse of Cauchy Integral Theorem)

Theorem 1.17  If f(z) is continuous in a simply connected domain D and if ∫_C f(z) dz = 0 for every simple closed curve C in D then f(z) is analytic in D.
Proof: Since ∫_C f(z) dz = 0 along any closed curve C in D and f(z) is continuous at all points of C, therefore, the line integral of f(z) from a fixed point z0 in D to any point z in D will be independent of path and hence will be a single-valued function of z.
Let F(z) = ∫_(z0)^(z) f(z) dz.

After this, follow the proof of Theorem 1.13 and in the proof, we proved that F (z) is analytic
in D and F ′( z ) = f ( z ).
But derivative of an analytic function is an analytic function and hence f (z) is analytic in D.

1.8.3 Cauchy Inequality
If f(z) is analytic within and on the circle C : |z − a| = r then |f^(n)(a)| ≤ Mn!/r^n, where M = max |f(z)| on C.
Proof: By Cauchy integral formula for nth order derivative
f^(n)(a) = (n!/2πi) ∫_C f(z)/(z − a)^(n+1) dz
\ |f^(n)(a)| = (n!/2π) |∫_C f(z)/(z − a)^(n+1) dz| ≤ (n!/2π) ∫_C |f(z)|/|z − a|^(n+1) |dz| ≤ (Mn!/(2π r^(n+1))) ∫_C |dz|
\ |f^(n)(a)| ≤ (Mn!/(2π r^(n+1))) · 2πr = Mn!/r^n
\ |f^(n)(a)| ≤ Mn!/r^n

1.8.4  Liouville’s Theorem


If an entire function f(z) is bounded for all values of z, then f(z) must be constant.
Proof: Since f(z) is bounded for all z, there exists M such that |f(z)| ≤ M for all z.
By Cauchy integral formula for derivatives
f ′(z) = (1/2πi) ∫_C f(w)/(w − z)² dw, where C : |w − z| = r
\ |f ′(z)| = (1/2π) |∫_C f(w)/(w − z)² dw| ≤ (1/2π) ∫_C |f(w)|/|w − z|² |dw|
≤ (1/2π)(M/r²) ∫_C |dw| = (1/2π)(M/r²) · 2πr = M/r
\ |f ′(z)| ≤ M/r
Letting r → ∞ gives f ′(z) = 0 for all z
\ f(z) = constant

1.8.5 Poisson’s Integral Formula


If f(z) is analytic on and inside C : |z| = R and a = re^(iθ) is any point within C then
f(re^(iθ)) = (1/2π) ∫_0^(2π) (R² − r²)/(R² − 2Rr cos(θ − φ) + r²) f(Re^(iφ)) dφ

Proof: By Cauchy integral formula
1 f (z)
f (a) =
2π i C∫ z − a
dz where z = Re iφ , a = re iθ  (1.32)

R2
Now ‘a’ lies inside C, hence its inverse point w. r. t. circle C, i.e., lies outside C.
a

\ By Cauchy–Goursat theorem
1 f (z)
0= ∫
2π i C z − R 2 / a
dz 
( )
(1.33)

Subtract (1.33) from (1.32)

1  1 1 
2π i ∫C
f ( a) =  −  f ( z ) dz

 z − a z − R 2 / a ( ) 

1  1 a 
 f ( z ) dz
2π i ∫C  z − a za − R 2 
=  −

1 aa − R 2
f ( z ) dz
2π i ∫C z 2 a − aa + R 2 z + R 2 a
=

( ) 

(r
) f (Re ) R i e dφ2
−R 2 iφ iφ

\ ( )
f re iθ =
1
2π i ∫Rere − ( r + R ) Re + R re
2 2i φ − iθ 2 2 iφ 2 iθ
 (∵ z = Re iφ
; 0 ≤ θ < 2π on C )
0

1

(r − R ) f (Re ) dφ 2 2 iφ

= ∫
2π Rre ) − ( r + R ) + Rre ( )
(
0
i φ −θ 2 2 − i φ −θ


1 ( R − r ) f (Re ) dφ
2π 2 2 iφ

=
2π ∫R 2
− 2rR cos (φ − θ ) + r 2
0 

( )
Remark 1.10: If we write f(re^(iθ)) = u(r, θ) + iv(r, θ) and f(Re^(iφ)) = U(R, φ) + iV(R, φ) and equate real and imaginary parts,
u(r, θ) = (1/2π) ∫_0^(2π) (R² − r²) U(R, φ)/(R² − 2rR cos(φ − θ) + r²) dφ
and v(r, θ) = (1/2π) ∫_0^(2π) (R² − r²) V(R, φ)/(R² − 2rR cos(φ − θ) + r²) dφ


Example 1.30: Evaluate the following integrals
(i) ∫_0^(3+i) z² dz along the parabola y² = x/3.
(ii) ∫_C z² dz where C is the arc of the circle |z| = 2 from θ = 0 to θ = π/3.
(iii) ∫_C (z − z²) dz where C is the upper half of the circle |z| = 1.
(iv) ∫_C (3z² + 4z + 1) dz where C is the arc of the cycloid x = a(θ + sin θ), y = a(1 − cos θ) between
(a) (0, 0) and (2πa, 0)   (b) (0, 0) and (πa, 2a).

Solution: 
3+ i
(i) Since z2 is analytic everywhere, therefore ∫ 0
z 2 dz is independent of path and depends
upon end points 0 and 3 + i only. Therefore, by fundamental theorem of integral calculus

(3 + i )3 = 27 + 27i − 9 − i = 6 + 26i
3+ i
3+ i  z3 
∫0
z 2 dz =  
 3 0
=
3 3 3

(ii) |z| = 2 ⇒ z = 2eiq


\ By fundamental theorem of integral calculus
2 eiπ / 3
2 eiπ / 3  z3  8e iπ − 8 −8 − 8 −16
∫ z dz = ∫ z dz =   = = =
2 2
2  3 3 3 3
C 2 
(iii) Since z - z2 is analytic everywhere, therefore by fundamental theorem of integral
calculus
Figure 1.14

∫ ( z − z ) dz = ∫ ( z − z ) dz = ∫ ( z )
−1 1
2 2 2
− z dz
C 1 −1 
1
= 2 ∫ z dz (Q z is even and z is odd)
2 2
0
1
 z3  2
= 2  =
 0 3 
3
(iv) Since 3z2 + 4z + 1 is analytic everywhere, therefore by fundamental theorem of integral
calculus
2π a + i 0

∫ ( 3z + 4 z + 1 dz =) ∫ ( 3z + 4 z + 1 dz )
2 2
(a) 
C 0+i 0


= (z 3
+ 2z2 + z )
2π a

0
{
= z ( z + 1) }
2 2π a

0
= 2π a( 2π a + 1) 2

π a + 2 ia

∫ ( 3z + 4 z + 1 dz =) ∫ ( 3z + 4 z + 1 dz )
2 2
(b) 
C 0+i 0

{ }
(π +2 i ) a

= z ( z + 1) = (π + 2i )a {(π + 2i )a + 1}
2 2

0 

Example 1.31: Prove that ∫_C dz/(z − a) = 2πi where C : |z − a| = r.

Solution: Let f (z) = 1. Then f (z) is analytic on and inside C : |z - a| = r and z = a lies inside C.
1 f (z)
\ By Cauchy integral formula f ( a ) = ∫ dz
2π i C z − a
But      f (z) = f (a) = 1
dz
\ ∫ z − a = 2π i
C 
Example 1.32: Evaluate ∫_C e^(−z)/(z + 1) dz where C is (i) |z| = 2  (ii) |z| = 1/2.
Solution: (i) f (z) = e − z is analytic on and inside |z| = 2 and z = –1 lies inside |z| = 2.
\ By Cauchy integral formula
e− z
C∫ z + 1 dz = 2pi. f (- 1) = 2pie

e− z 1  1
(ii)  is analytic on and inside |z| = . ∵ z = −1 lies outside z = 
z +1 2  2
−z
e
\ By Cauchy–Goursat theorem  ∫C ( z + 1) dz = 0
Example 1.33: State and prove Cauchy’s integral formula and hence find the value of
(i)  F (3.5)  (ii) F (i)  (iii) F ′ (- 1) and F ″(- i), if
2 2
4z2 + z + 5 x  y
F( a ) = ∫ dz where C is the ellipse   +   = 1
C
z−a 2  3
Solution: For statement and proof of Cauchy integral formula see (1.8).
Let f (z) = 4z2 + z + 5
Figure 1.15
2 2
x  y
f (z) is analytic on and inside C where C is the ellipse   +   = 1
2  3
Now, i, - 1 and –i lie within C and z = 3.5 lies outside C.

4z2 + z + 5
(i) F ( 3.5 ) = ∫ is analytic on and inside C.
C
z − 3.5
\ By Cauchy–Goursat theorem F (3.5) = 0
f ( z)
Now, when a lies inside C then F( a ) = 2π i f ( a) = ∫ dz
( z − a)
(
F ( a ) = 2π i 4 a 2 + a + 5 )
C
\
\ F ′ ( a ) = 2π i (8a + 1) and F ′′ ( a ) = 2π i (8) = 16π i

(ii) F (i ) = 2π i  4(i ) 2 + i + 5 = 2π i(1 + i ) = 2π (i − 1)
(iii) F ′ ( −1) = 2π i ( −8 + 1) = −14π i
(iv) F ′′ ( −i ) = 16π i
2z2 − z − 2
Example 1.34: Evaluate f (2) and f (3) if f (a) = ∫
C
z−a
dz where C : z = 2.5.

Solution: F (z) = 2z – z – 2 is analytic on and inside C : z = 2.5.


2

\ if a lies within C then by Cauchy integral formula


F (z)
∫ z − a dz = 2π i F ( a ) = f ( a )
C 
Now, z = 2 lies inside C.
\ f (2) = 2pi F (2) = 2pi (8 – 2 - 2) = 8pi
Now, z = 3 lies outside C.
2z2 − z − 2
\ is analytic on and inside C.
z −3
\ By Cauchy–Goursat theorem
2z2 − z − 2
f (3) = 
∫ z − 3 dz = 0
C 
Example 1.35: Evaluate the following integrals:
e2z
(i) ∫ 2 dz where C : z = 3
(
C z − 3z + 2 )
ez
(ii) ∫ dz where C : z = 2
C (
z − 1) ( z − 4 )
sin π z 2 + cos π z 2
(iii) ∫ ( z − 1) ( z − 2 ) dz where C is the circle |z| = 3.
C

e2z e2z
Solution: (i) ∫ 2
C z − 3z + 2
dz = ∫ z − 1) ( z − 2 ) dz
C (

e2z e2z
=∫ dz − ∫ dz (1)
C
z−2 C
z −1

f (z) = e2z is analytic on and inside C : z = 3 and z = 1, 2 lie inside C.


\ By Cauchy integral formula
e2z
∫C ( z − 2 ) dz = 2π i f ( 2 ) = 2π ie
4


2z
e
∫ ( z − 1) dz = 2π i f (1) = 2π ie
2

C

\ From (1)
e2z
∫z 2
− 3z + 2
(
dz = 2π ie 4 − 2π ie 2 = 2π i e 2 − 1 e 2 )
C 
ez
f (z) =
(ii)  is analytic on and inside C : z = 2 (Q z = 4 lies outside C) and z = 1 lies
( z − 4)
inside C.
\ By Cauchy integral formula
e z dz f (z) e −2π ie
∫ ( z − 1) ( z − 4 ) = ∫ ( z − 1) dz = 2π i f (1) = 2π i 1 − 4 = 3
C C

1 1 1
(iii) = −
( z − 1) ( z − 2 ) z − 2 z − 1
sin π z 2 + cos π z 2 sin π z 2 + cos π z 2 sin π z 2 + cos π z 2
\ C∫ ( z − 1) ( z − 2) dz = C∫ z−2
dz − C∫ z −1
dz

Now,  f (z) = sin π z 2 + cos π z 2 is analytic on and inside C : z = 3.
and z = 1, 2 lie inside C.
\ By Cauchy integral formula
sin π z 2 + cos π z 2
C∫ dz = 2π i f ( 2) = 2π i [sin 4π + cos 4π ] = 2π i
z−2

sin π z 2 + cos π z 2
C∫ dz = 2π i f (1) = 2π i [sin π + cos π ] = −2π i
z −1

\ from (1)
sin π z 2 + cos π z 2
C∫ ( z − 1) ( z − 2) dz = 2π i − ( −2π i ) = 4π i


( z − 3)

Example 1.36: Evaluate 
C (z 2
+ 2z + 5 )
dz where

(i)  C : z = 1    (ii) C : z + 1 + i = 2

Solution: z2 + 2z + 5 = 0
−2 ± 4 − 20
iff z= = −1 ± 2i
2 
Now, −1 ± 2i > 1

\ –1 ± 2i lie outside | z | = 1
and | – 1 + 2i + 1 + i | = 3 > 2,
| – 1 – 2i + 1+ i | = 1 < 2
\ –1 + 2i lies outside z + 1 + i = 2 and -1 - 2i lies inside z + 1 + i = 2
z −3
(i)  is analytic on and inside C : | z |=1
z2 + 2z + 5
z −3
\ By Cauchy–Goursat theorem ∫ 2 dz = 0
C z + 2z + 5
z −3
(ii)  f (z) = is analytic on and inside C : z + 1 + i = 2 and - 1 - 2i lies inside C.
z − ( −1 + 2i )
\ By Cauchy integral formula
z −3 f (z)
C∫ z 2 + 2 z + 5 dz = c∫ z − ( −1 − 2i ) dz = 2pi f (- 1 - 2i)

 −1 − 2i − 3  2π i ( −4 − 2i )
= 2π i  =
 −1 − 2i − ( −1 + 2i )  −4i

= π (2 + i)

z 4 dz
Example 1.37: Using Cauchy’s integral formula evaluate ∫ ( z + 1)( z − i) 2
where C is the ellipse
9x2 + 4y2 = 36. C
2 2
x  y
Solution: C is ellipse   +   = 1
2  3
Both z = -1, i lie inside C.
Figure 1.16

By suppression method
1 1 1 A
≡ + +
( z + 1)( z − i ) 2 ( z + 1)( −1 − i ) 2 (i + 1)( z − i ) 2 z − i 
i 1− i

2 A
= + 2 +
z + 1 ( z − i)2 ( z − i)

1 1 − i
\ 1 ≡ − i ( z − i)2 + ( z + 1) + A( z + 1)( z − i )
2 2
Equate coefficient of z2
1 i
0=− i + A ⇒ A=
2 2
1 −i i 1− i
\ = + +
( z + 1)( z − i ) 2 2( z + 1) 2( z − i ) 2( z − i ) 2 
Now by Cauchy integral formula
z4
∫ z + 1 dz = 2π i (−1) = 2π i
4

C 
z4
∫ z − i dz = 2π i (i) = 2π i
4

C 
and by Cauchy integral formula for derivatives
z4 2π i  d 
∫ ( z − i) 2
dz = 
1!  dz
( z 4 ) = 2π i( 4i 3 ) = 8π
z =i
C

4
z dz −i 4
z dz i z 4
(1 − i ) z4
\ ∫ ( z + 1)( z − i) 2
=
2 ∫ ( z + 1) + 2 ∫ ( z − i) dz + ∫
2 C ( z − i)2
dz
C C C 
−i i 1− i
= ( 2π i ) + ( 2π i ) + (8π )
2 2 2 
= 4π (1 − i )

e2z
Example 1.38: Use Cauchy’s integral formula to evaluate ∫ ( z + 1) 4
dz where C is the circle
z = 2. C

Solution: f(z) = e2z is analytic on and inside C : z = 2 and z = −1 lies inside C.


\ By Cauchy integral formula for derivatives

e2z 2π i  d 3 2 z  πi 2z 8π i
∫ ( z + 1) 4
dz =
3!  dz 3
e  =
 z = −1 3
8e ( ) z = −1
=
3e 2
C

e z dz
Example 1.39: Evaluate ∫
C z (1 − z )3
where C is

1 1
(i)  z =   (ii) z − 1 =   (iii) z = 2
2 2
Solution:
1 ez
(i) z = 0 lies inside and z = 1 lies outside C : z = and f ( z ) = is analytic on and
inside C. 2 (1 − z )3

\ By Cauchy integral formula


ez

C z (1 − z )
3
dz = 2π i f (0) = 2π i

1 −e z
(ii) z = 0 lies outside C : z − 1 = and z = 1 lies inside C and f ( z ) = is analytic on and
inside C. 2 z
\ By Cauchy integral formula for derivatives
ez f ( z) 2π i  d 2 
∫C z(1 − z ) 3
dz = ∫C ( z − 1) 3
dz =  2 f ( z )
2!  dz  z =1 

 d  −e  
2 z
= πi  2  
 dz  z   z =1 
 d  e z e z  
= −π i   − 2  
 dz  z z   z =1

 e z 2e z 2e z 
= −π i  − 2 + 3 
z z z  z =1

= −π i(e − 2e + 2e) = −π ei 

1 1 1 A B
(iii) Let ≡ − + + (By suppression method) (1)
z ( z − 1) 3
( z − 1) 3
z z − 1 ( z − 1) 2
\ 1 = z − ( z − 1)3 + Az ( z − 1) 2 + Bz ( z − 1)  (2)
Equate coefficient of z3
0 = –1 + A ⇒ A = 1
Substitute z = –1 in (2)
1 = –1 + 8 – 4 + 2B ⇒ B = -1
1 1 1 1 1
\ = − − +
z ( z − 1) 3
( z − 1) z ( z − 1) ( z − 1)3
2

Both z = 0 and z = 1 lie inside z = 2 and f ( z ) = −e z is analytic on and inside C

\ By Cauchy integral formula and Cauchy integral formula for derivatives


ez f ( z) f ( z) f ( z) f ( z)
∫ z(1 − z ) 3
dz = ∫
( z − 1)
dz − ∫
z
dz − ∫
( z − 1) 2
dz + ∫
( z − 1)3
dz
C C C C C 
2π i  d  2π i  d 2 
= 2π i f (1) − 2π i f (0) −  f ( z)  +  2 f ( z)  
1!  dz  z =1 2 !  dz  z =1
= −2π ie + 2π i + 2π ie − π ie = π i( 2 − e) 

z
Example 1.40: Evaluate ∫ (z
C
2
+ 1)
dz where C is

1
(i)  C : z + = 2   (ii)  C : z + i = 1
z
Solution: z 2 + 1 = ( z + i )( z − i )
1
(i) for z = i, z+ = i −i = 0 < 2
z
1
for z = −i, z + = −i + i = 0 < 2
z 
1
\ Both z = i, –i lie inside z + =2
z
z z 1 1 1 
= =  +  (By suppression method)
z + 1 ( z + i )( z − i ) 2  z − i z + i 
2

\ By Cauchy integral formula


z 1 1 1  1
∫z 2
+1
dz =  ∫
2 C z − i
dz + ∫
z + i
dz  = ( 2π i + 2π i ) = 2π i
 2
C C

(ii) for z = i, z + i = 2i = 2 > 1
and for z = −i, z + i = −i + i = 0 < 1

\ z = -i, lies inside and z = i outside C : z + i = 1
z
f ( z) = is analytic on and inside C and z = –i lies inside C
z −i
\ By Cauchy integral formula
z f ( z)  −i 
∫C z 2 + 1 dz =C∫ z + i dz =2π if (−i) = 2π i  −2i  = π i

Example 1.41: Evaluate the following integral by Cauchy’s integral formula
cos z
∫C z 2n+1 dz where C : z = 1

Solution:  (i)  z = 0 lies inside z = 1 and f ( z ) = cos z is analytic on and inside C.


\ By Cauchy integral formula for derivatives

cos z 2π i  d 2 n  2π i 2π i( −1) n
∫C z 2n+1 dz =  2 n cos z  =
( 2n)!  dz
{
( −1) n cos z } =
 z = 0 ( 2n)! ( 2n)!
z =0


Exercise 1.2

1+ i
1. Evaluate ∫ 0
( x 2 + iy )dz along the path 2+ i
7. Evaluate ∫ ( z ) 2 dz along
y = x 2. 0
(i) the real axis to 2 and then vertically to
B
2. If f (z) = x2 + ixy evaluate ∫ A
f ( z ) dz where

2 + i.
(ii) along the line 2y = x
A (1,1) and B (2,4) along
(i) the straight line AB 8. Evaluate the integral I = ∫ Re z 2 dz ( )
(ii) the curve C : x = t , y=t 2
from 0 to 2 + 4i along the
C

3. Find the value of the integral (i) line segment joining the points (0, 0)
∫ ( x + y)dx + x y dy
2
and (2, 4),
C (ii)  x-axis from 0 to 2, and then vertically
(i) along y = x2 having (0, 0), (3, 9) as end to 2 + 4i and
points. (iii) parabola y = x2.
(ii) along y = 3x between the same points. dz
Do the values depend upon path?
9. Evaluate I =  ∫C z − 2 around
( 2, 4 )
4. Evaluate ∫
( 2 y + x 2 )dx + (3 x − y )dy 
( 0 , 3)
(i) circle z − 2 = 4
(ii) Rectangle with vertices at 3 ± 2i, –2 ± 2i
along the parabola x = 2t , y = t 2 + 3.
(iii) Triangle with vertices at (0, 0), (1, 0),
5. Evaluate ∫ (3 x + 4 xy + 3 y )dx + 2( x + 3 xy + 4 y 2 )(0, dy 1).
(1,1)
2 2 2



( 0,0 ) 2
(1,1)
10. Evaluate  z dz around the square with
∫( 0,0) (3x +       
4 xy + 3 y 2 )dx + 2( x 2 + 3 xy + 4 y 2 )dy
2
C
vertices at (0, 0), (1, 0), (1, 1) (0, 1).
  (i) along y2 = x  (ii) along y = x2 dz
(iii) along y = x 11. Show that ∫ = −π i or π i according as
B C
z
6. Evaluate ∫ z 2 dz where A = (1, 1), C is the semi-circular arc of z = 1 above
A
B = (2, 4) along or below the x-axis.
(i) the line segment AC parallel to x-axis 1
12. Evaluate ∫ 3 dz , C : z = 1.
and CB parallel to y-axis. C
z
(ii) the straight line AB joining the two
∫ z dz
2
13. Evaluate using Cauchy’s integral
points A and B. C
(iii) the curve C : y = x 2 . theorem, where C : z = 1.

∫ (5 z )
− z 3 + 2 dz around 19. Determine F(2), F(4), F(–3i), F  ′(i),
4
14. Evaluate
5z 2 − 4 z + 3
∫C z − α dz
C
F  ″(–2i) if F (α ) = 
(i) unit circle z = 1
(ii) square with vertices at (0, 0), (1, 0), where C is the ellipse 16x2 + 9y2 = 144
(1, 1), (1, 0)
(iii) curve consisting of the parabola y = x2 20. Evaluate the following integrals by using
from (0, 0) to (1, 1) and y2 = x from Cauchy’s integral formula
dz
(1, 1) to (0, 0).
15. Can the Cauchy integral theorem be ap-
(i) ∫C z 2 where C is the circle z = 1
plied for evaluating the following inte-  2 z 3 ( z − 1) 
grals? Hence, evaluate these integrals. (ii) ∫  + 3
dz where C is
( ) ( )
2
C  z − 2 z − 2 

∫ e dz; C : z = 1
2
sin z
(i)
C the circle z = 3
(ii) ∫ tan z dz, C : z =1 (iii) ∫
dz
where n is any positive
C ( z − a)
n
C

ez
∫
(iii)
C (z 2
+9 )
dz , C : z = 2 integer greater than 1 and C is the
closed curve containing a.
dz z −1
(iv) ∫

(iv) 
C (z 3
)
−1
, C is a triangle with verti-
C ( z + 1) ( z − 2 )
2
dz where

1 i C : z −i = 2
ces at 0, ± + .
4 2 2z2 + z
16. Evaluate the following integrals ∫C z 2 − 1 dz; C : z − 1 = 1
(v)
( )
2− i
∫ z dz
(i)
1
1 ze z
πi (vi)  ∫ dz if the point a lies
∫ e dz
2z
(ii)
−π i
2π i C ( z − a )3
i inside the simple closed curve C.

(iii) sinh π z dz
0 21. Evaluate by Cauchy’s integral formula
1+ 2 i
∫ ze dz
z
(iv) dz
0 (i) ∫ dz where C is the circle
1 (z 2
−1 )
∫ ze
2 z3 C
(v) dz.
0 x2 + y2 = 4.
dz 1
17. Evaluate ∫ z
C
2
+9
where C is
∫C z cos z dz; C : 9x2 + 4y2 = 1
(ii)

(i) z − 3i = 4 (ii) 
z + 3i = 2 1 z2 + 7
(iii)
2π i ∫ z − 2 dz where C : z = 5
(iii) z =5 C

3z 2 + 7 z + 1 sin 3 z
18. If f ( a ) = ∫ dz , where C is ∫C z + π / 2 where C : z = 5
(iv)
C
z−a
the circle x2 + y2 = 4, find the values of zdz
f (3), f  ′(1–i) and f  ″(1–i).
(v) ∫C 9 − z 2 ( z + i ) where C : z = 2
( )

dz 1 e zt

(vi)
(z )
where C : z = 4 (viii)  ∫ 2 dz where C : z = 3
C
2
+4 2π i C z + 1( )
sin π z + cos π z
2 2 and t > 0
(vii) ∫ dz where C : z = 3
( z + 1) ( z + 2 ) tan z
C
 ∫C ( z − π 4 )2 dz where C : z = 1
(ix)
+ cos π z 2
dz where C : z = 3
( z + 2)
Answers 1.2

−29 −151 45
  1. (5i – 1) /6   2.  (i)  + 11 i   (ii)  + i
3 15 4
1 1
  3. (i)  256   (ii) 200   4. 33/2  5. (i) 26/3  (ii) 26/3  (iii) 26/3
2 4
−86 −86 −86
  6. (i)  − 6i   (ii)  − 6i   (iii)  − 6i
3 3 3
14 11 10 5i
  7. (i)  + i   (ii)  −
3 3 3 3
8 −56 40
 8. (i) –8(1 + 2i)  (ii)  (1 − 2i )   (iii)  − i    9. (i) 2πi  (ii) 2πi  (iii) 
0
3 15 3
10. –1 + i    12. 0    13. 0    14. (i), (ii), (iii) = 0
15. yes; (i), (ii), (iii), (iv) = 0
1
16. (i)  1- 2i  (ii) 0  (iii) −2 / π   (iv) 2ie1+ 2i + 1   (v)  ( e − 1)
3
0  (ii) 2π ( 6 + 13i )   (iii) 12π i
17. (i)  π / 3   (ii) −π / 3   (iii) 0   18. (i) 

19. F(2) = 30 pi, F(4) = 0, F ( −3i ) = −12π ( 2 + 7i ) , F ′ ( i ) = −4π ( 5 + 2i ) , F ′′ ( −2i ) = 20π i

20. (i) 0  (ii) 4π i   (iii)  0  (iv) −2π i / 9   (v) 3π i   (vi) ea(1+a/2)

(i) 0  (ii) 2π i   (iii) 11  (iv) 2π i   (v) π / 5   (vi) 0  (vii) −4π i 


21. 
(viii) sin t  (ix)  4π i

1.9  Infinite series of complex terms


Σ_(k=1)^(∞) (ak + ibk); ak, bk ∈ R is an infinite series of complex terms. If the series Σ_(k=1)^(∞) ak and Σ_(k=1)^(∞) bk converge to the sums A and B respectively then the series Σ_(k=1)^(∞) (ak + ibk) converges to the sum A + iB. Conversely, if the series Σ_(k=1)^(∞) (ak + ibk) converges to A + iB then Σ_(k=1)^(∞) ak and Σ_(k=1)^(∞) bk converge to A and B respectively. In (5.2) of Vol 1, we have shown that if Σ ak and Σ bk converge then lim_(n→∞) an = 0, lim_(n→∞) bn = 0. Thus, if the series Σ (ak + ibk) converges then lim_(n→∞) (an + ibn) = 0.
The series Σ_(k=1)^(∞) (ak + ibk) is absolutely convergent iff Σ_(k=1)^(∞) |ak + ibk| is convergent.
Now, |an| ≤ |an + ibn| and |bn| ≤ |an + ibn| for all n ∈ N.
\ By the comparison test in (5.3) of Vol 1, if Σ (an + ibn) converges absolutely then Σ an and Σ bn converge absolutely and thus Σ an and Σ bn converge, which implies Σ (an + ibn) converges. Thus, an absolutely convergent series of complex numbers is convergent.
Now, let the series Σ_(k=1)^(∞) uk(z) converge to s(z) and Σ_(k=1)^(n) uk(z) = sn(z). If corresponding to ε > 0 there exists m ∈ N depending on ε and not on z such that |sn(z) − s(z)| < ε for all n > m, then the series Σ_(k=1)^(∞) uk(z) is said to be uniformly convergent.

A uniformly convergent series of continuous complex functions is itself continuous and can be integrated term by term.
If a uniformly convergent series of analytic complex functions converges to f(z) then f(z) is an analytic function and the series can be differentiated term by term.

1.9.1 Power Series
An infinite series of the form

∑a (z − z ) = a0 + a1 ( z − z0 ) + a2 ( z − z0 ) +  + an ( z − z0 ) +  (1.34)
n 2 n

n 0

is called a power series about z = z0. The point z0 is called the centre of the power series. If
the power series converges for all z in z − z0 < R and diverges for all z in z − z0 > R then R is
called radius of convergence of the power series.
The power series is always convergent at its centre z = z0 since for z = z0 series reduces to a0.
a
If λ = lim n +1 then by De Alembert’s ratio test proved in (5.4) of vol 1, the series converges
n →∞ an
for z − z0 λ < 1 and diverges for z − z0 λ > 1.
1
\ Radius of convergence is
λ
1
and series converges in z − z0 <
λ
1
and diverges in z − z0 > .
λ

∑a (z − z )
n
Without proving, we state that the power series n 0 with non-zero radius of con-
vergence R is uniformly convergent in the circle z − z0 ≤ r < R.
Thus, if the power series ∑ an ( z − z0 ) converges to function f(z) then f(z) is analytic function
n

and also term by term differentiation and term by term integration are allowed within the circle
of convergence.
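The ratio-test recipe above is easy to check mechanically. The following sketch (using sympy, with an assumed sample coefficient a_n = 1/2^n that is not taken from the text) computes λ and the radius of convergence R = 1/λ.

```python
import sympy as sp

n = sp.symbols('n', positive=True, integer=True)

# Assumed sample coefficients a_n = 1/2**n for the power series sum a_n z**n.
a = 1 / 2**n
lam = sp.limit(sp.Abs(a.subs(n, n + 1) / a), n, sp.oo)   # lambda = lim |a_{n+1}/a_n|
print(lam, 1 / lam)   # 1/2 and radius of convergence R = 2, i.e. convergence in |z| < 2
```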

1.10 Taylor Series
Theorem 1.18  If a function f(z) is analytic inside a circle C: |z − z_0| = R, then for all z inside C, f(z) can be represented by the power series
f(z) = ∑_{n=0}^{∞} a_n (z − z_0)^n, where
a_n = f^(n)(z_0)/n! = (1/2πi) ∫_{C_r} f(w)/(w − z_0)^{n+1} dw
and C_r is the circle |w − z_0| = r < R containing z inside it.

Proof: Let z be an arbitrary point inside C: |z − z_0| = R.
Let C_r: |w − z_0| = r < R be a circle containing the point z inside it, and let w be any point on C_r.

Figure 1.17

∴ By Cauchy integral formula
f(z) = (1/2πi) ∫_{C_r} f(w)/(w − z) dw   (1.35)
Now, 1/(w − z) = 1/[(w − z_0) − (z − z_0)] = (1/(w − z_0)) [1 − (z − z_0)/(w − z_0)]^{−1}   (1.36)
Since |z − z_0|/|w − z_0| < 1, we can expand the R.H.S. of equation (1.36) in a binomial series:
1/(w − z) = (1/(w − z_0)) [1 + (z − z_0)/(w − z_0) + ((z − z_0)/(w − z_0))^2 + ((z − z_0)/(w − z_0))^3 + ⋯]
This series converges uniformly, and hence multiplying by f(w) and integrating term by term, we have
(1/2πi) ∫_{C_r} f(w)/(w − z) dw = (1/2πi) [∫_{C_r} f(w)/(w − z_0) dw + (z − z_0) ∫_{C_r} f(w)/(w − z_0)^2 dw + ⋯ + (z − z_0)^n ∫_{C_r} f(w)/(w − z_0)^{n+1} dw + ⋯]
∴ Using the Cauchy integral formula and the Cauchy integral formula for derivatives, we have
f(z) = f(z_0) + (z − z_0) f′(z_0) + ((z − z_0)^2/2!) f″(z_0) + ⋯ + ((z − z_0)^n/n!) f^(n)(z_0) + ⋯
or f(z) = a_0 + a_1(z − z_0) + a_2(z − z_0)^2 + ⋯ + a_n(z − z_0)^n + ⋯
where a_n = f^(n)(z_0)/n! = (1/2πi) ∫_{C_r} f(w)/(w − z_0)^{n+1} dw
Remark 1.11: If we are to expand f(z) about z0 and f(z) is not analytic at z0 or in some neighbour-
hood of z0 then Laurent’s series is used.
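As a quick sanity check of the coefficient formula a_n = f^(n)(z_0)/n!, the sketch below (using sympy, with the assumed sample function f(z) = e^z and centre z_0 = 2, neither taken from the text) compares the coefficients obtained from derivatives with sympy's own series expansion.

```python
import sympy as sp

z = sp.symbols('z')

# Assumed sample: f(z) = exp(z) expanded about z0 = 2.
f, z0 = sp.exp(z), 2
coeffs = [sp.diff(f, z, n).subs(z, z0) / sp.factorial(n) for n in range(4)]
print(coeffs)                  # [exp(2), exp(2), exp(2)/2, exp(2)/6]
print(sp.series(f, z, z0, 4))  # exp(2) + exp(2)*(z - 2) + exp(2)*(z - 2)**2/2 + ... + O((z - 2)**4)
```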

1.11  Laurent’s series

Theorem 1.19  If a function f(z) is analytic in the ring-shaped region R bounded by two concentric circles C and C_1 with radii r and r_1 (r > r_1) and with centre at z_0, then for all z in R
f(z) = ∑_{n=0}^{∞} a_n (z − z_0)^n + ∑_{n=1}^{∞} a_{−n}/(z − z_0)^n
where a_n = (1/2πi) ∫_{C′} f(w)/(w − z_0)^{n+1} dw;  n = 0, 1, 2, 3, …
a_{−n} = (1/2πi) ∫_{C_1′} f(w)/(w − z_0)^{−n+1} dw;  n = 1, 2, 3, …
and C′ and C_1′ are circles in R containing z in the annular ring, with inner circle C_1′ and outer circle C′. (Note that, unlike the Taylor case, a_n cannot in general be written as f^(n)(z_0)/n!, since f(z) need not be analytic at z_0.)

Proof: Let z be any point in R. Draw circles C′ and C_1′ such that z lies in the annular ring bounded by the inner circle C_1′ and the outer circle C′ with centre z_0, and C_1′, C′ lie completely in R. Introduce a cut AB which does not pass through z, as shown in Figure 1.18.

Figure 1.18

Then f(z) is analytic in the simply connected domain bounded by AB, C_1′* (in the clockwise direction), BA and C′, including the circles C′ and C_1′, where C_1′* is C_1′ traversed in the opposite direction.
∴ By Cauchy integral formula
f(z) = (1/2πi) ∫_{C′} f(w)/(w − z) dw − (1/2πi) ∫_{C_1′} f(w)/(w − z) dw   (1.37)
where the integrals over AB and BA cancel, and the integral over C_1′* is the negative of the integral over C_1′, since C_1′* is traversed clockwise and C_1′ anticlockwise.
Now, when w lies on C′, |z − z_0|/|w − z_0| < 1
∴ 1/(w − z) = 1/[(w − z_0) − (z − z_0)] = (1/(w − z_0)) [1 − (z − z_0)/(w − z_0)]^{−1}
= (1/(w − z_0)) ∑_{n=0}^{∞} ((z − z_0)/(w − z_0))^n = ∑_{n=0}^{∞} (z − z_0)^n/(w − z_0)^{n+1}
This series is uniformly convergent; hence after multiplying by f(w) and integrating term by term, we have
∫_{C′} f(w)/(w − z) dw = ∑_{n=0}^{∞} (z − z_0)^n ∫_{C′} f(w)/(w − z_0)^{n+1} dw   (1.38)
When w lies on C_1′, |w − z_0|/|z − z_0| < 1
∴ 1/(w − z) = 1/[(w − z_0) − (z − z_0)] = −(1/(z − z_0)) [1 − (w − z_0)/(z − z_0)]^{−1}
= −(1/(z − z_0)) ∑_{n=0}^{∞} (w − z_0)^n/(z − z_0)^n = −∑_{n=0}^{∞} (w − z_0)^n/(z − z_0)^{n+1}
This series is uniformly convergent; hence after multiplying by f(w) and integrating term by term, we have
∫_{C_1′} f(w)/(w − z) dw = −∑_{n=0}^{∞} (1/(z − z_0)^{n+1}) ∫_{C_1′} f(w)(w − z_0)^n dw   (1.39)
Substituting from equations (1.38) and (1.39) in (1.37),
f(z) = ∑_{n=0}^{∞} ((z − z_0)^n/2πi) ∫_{C′} f(w)/(w − z_0)^{n+1} dw + ∑_{n=0}^{∞} (1/(2πi (z − z_0)^{n+1})) ∫_{C_1′} f(w)(w − z_0)^n dw
= ∑_{n=0}^{∞} a_n (z − z_0)^n + ∑_{n=1}^{∞} a_{−n}/(z − z_0)^n
where a_n = (1/2πi) ∫_{C′} f(w)/(w − z_0)^{n+1} dw; n = 0, 1, 2, …
a_{−n} = (1/2πi) ∫_{C_1′} f(w)/(w − z_0)^{−n+1} dw; n = 1, 2, 3, …

Now, we shall give another proof of the theorem that every order derivative of an analytic
function is an analytic function.
Theorem 1.20  If f(z) is an analytic function at z = a, then the mth order derivative of f(z), i.e., f^(m)(z), is analytic at z = a for all m ∈ N.
Proof: f(z) is analytic at z = a and hence f(z) can be expanded in a Taylor series about z = a.
∴ f(z) = ∑_{n=0}^{∞} a_n (z − a)^n
This power series is uniformly convergent and its circle of convergence is |z − a| = 1/k, where k = lim_{n→∞} |a_{n+1}/a_n|.
Let m ∈ N.
∴ f^(m)(z) = ∑_{n=m}^{∞} a_n (n!/(n − m)!) (z − a)^{n−m}
This power series is also uniformly convergent and its circle of convergence is |z − a| = 1/k.
Hence, f^(m)(z) is analytic at z = a. But m is any natural number.
∴ f^(m)(z) is analytic at z = a for all m ∈ N.

Example 1.42: Expand f(z) = 1/(z(z^2 − 3z + 2)) in the region
(i) 0 < |z| < 1   (ii) 1 < |z| < 2   (iii) |z| > 2
Solution: f(z) = 1/(z(z − 1)(z − 2)) = 1/(2z) − 1/(z − 1) + 1/(2(z − 2))   (By suppression method)
(i) f(z) = 1/(2z) + (1 − z)^{−1} − (1/4)(1 − z/2)^{−1} = 1/(2z) + ∑_{n=0}^{∞} z^n − (1/4) ∑_{n=0}^{∞} z^n/2^n   (∵ |z| < 1, ∴ |z/2| < 1)
= 1/(2z) + ∑_{n=0}^{∞} (1 − 1/2^{n+2}) z^n;  0 < |z| < 1
(ii) f(z) = 1/(2z) − (1/z)(1 − 1/z)^{−1} − (1/4)(1 − z/2)^{−1} = 1/(2z) − (1/z) ∑_{n=0}^{∞} 1/z^n − (1/4) ∑_{n=0}^{∞} z^n/2^n   (∵ 1 < |z| < 2, ∴ |1/z| < 1, |z/2| < 1)
= 1/(2z) − ∑_{n=0}^{∞} 1/z^{n+1} − ∑_{n=0}^{∞} z^n/2^{n+2} = −1/(2z) − ∑_{n=2}^{∞} 1/z^n − ∑_{n=0}^{∞} z^n/2^{n+2};  1 < |z| < 2
(iii) f(z) = 1/(2z) − (1/z)(1 − 1/z)^{−1} + (1/(2z))(1 − 2/z)^{−1}
= 1/(2z) − (1/z) ∑_{n=0}^{∞} 1/z^n + (1/(2z)) ∑_{n=0}^{∞} 2^n/z^n   (∵ |z| > 2, ∴ |2/z| < 1, |1/z| < 1)
= −∑_{n=1}^{∞} 1/z^{n+1} + ∑_{n=1}^{∞} 2^{n−1}/z^{n+1}   (the n = 0 terms cancel with 1/(2z))
= ∑_{n=2}^{∞} (2^{n−1} − 1)/z^{n+1} = ∑_{n=3}^{∞} (2^{n−2} − 1)/z^n;  |z| > 2
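The partial-fraction ("suppression method") step and the expansion valid near z = 0 can be checked with sympy, as in the sketch below (sympy is an assumed tool here; the annulus expansions in (ii) and (iii) still have to be assembled by hand as above).

```python
import sympy as sp

z = sp.symbols('z')

f = 1 / (z * (z**2 - 3*z + 2))
print(sp.apart(f, z))         # 1/(2*(z - 2)) - 1/(z - 1) + 1/(2*z), the decomposition used above
print(sp.series(f, z, 0, 4))  # 1/(2*z) + 3/4 + 7*z/8 + 15*z**2/16 + ..., matching part (i)
```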

1
Example 1.43: Expand f ( z ) = in Laurent series valid for
( z + 1) ( z + 3)
(i)  1 < z < 3   (ii) z > 3   (iii) 0 < z + 1 < 2   (iv) z < 1
1 1 1 1 
Solution: f ( z ) = =  −  (By suppression method)
( z + 1) ( z + 3) 2  z + 1 z + 3

1 1  1 1  z 
−1 −1

  (i)  f ( z ) =  1 +  − 1 +  
2  z  z  3  3  

1  1 ∞ ( −1) 1 ∞ ( −1) z 
n n n
 1 z 
=  ∑ n − ∑ n
    ∵ 1 < z < 3, ∴ < 1, < 1
2  z n= 0 z

3 n= 0 3   z 3 


1  ∞ ( −1) ( −1) z  1  ∞ ( −1)n −1 ∞ ( −1)n z n 


∞ n n n

=  ∑ n +1 − ∑  = ∑ −∑ ; 1< z < 3
2  n= 0 z
 n= 0 3n +1  2  n =1 z
n
n= 0 3n +1 


1 1  1 
−1 −1
1  3
(ii)  f ( z ) =  1 +  − 1 +  
2  z  z  z  z 

1  1 ∞ ( −1) 1 ∞ ( −1) 3 
n n n
 3 1 
=  ∑ n − ∑  ∵ z > 3, ∴ z < 1, z < 1
2  z n= 0 z z n= 0 z n   
 
n n
( )
−1 ∞ ( −1) 3 − 1 −1 ∞ ( −1)
n −1
(3 n −1
);
−1
= ∑ z n +1 = 2 ∑
2 n= 0 z n
z >3
n =1 
1  1 1 
−1

1 1 1  z + 1
(iii)  f ( z ) =  −  =  − 1 +  
2  z +1 z +1+ 2 2  z +1 2  2  

1 1 n ( z + 1)
1 ∞
n
  z +1 
=  − ∑ ( −1)  ∵ 0 < z + 1 < 2, ∴ 2 < 1
2  z + 1 2 n= 0 2n 

=
1
−∑

( −1) ( z + 1) ; 0 < z + 1 < 2
n n

2 ( z + 1) n = 0 2n + 2


1 
−1
1  z
(iv)  f ( z ) = (1 + z ) − 1 + 
−1

2  3  3 

1 ∞ 1 ∞ ( −1) z 
n n
 z 
=  ∑ ( −1) z − ∑
n n
 ∵ z < 1, ∴ 3 < 1
2  n= 0 3 n = 0 3n 
 
1 ∞
( −1)n 1 − n+1  z n ; z < 1
1
= ∑
2 n= 0 3 
Example 1.44: Find the first four terms of the Taylor series expansion of the complex variable
z +1
function f ( z ) = about z = 2. Find the region of convergence.
( z − 3) ( z − 4)

z +1 5 4 5 4
Solution: f ( z ) = = − = − (By suppression method)
( z − 3) ( z − 4) z − 4 z − 3 z − 2 − 2 z − 2 − 1
−1
5  z − 2
+ 4 1 − ( z − 2)
−1
= − 1 − 
2 2  
5 ( z − 2)
∞ n ∞

∑ + 4∑ ( z − 2)
n
=− n
2 n= 0 2 n= 0 

 5 
= ∑  4 − n +1  ( z − 2 )
n

n= 0  2  
\ First four terms are
3 11 27 2 59
, ( z − 2) , ( z − 2) , ( z − 2)
3

2 4 8 16
Region of convergence is the common region of
z−2
< 1 and z − 2 < 1
2
i.e., z − 2 < 2 and z − 2 < 1

Common region is z − 2 < 1
\ Region of convergence is z − 2 < 1.
1
Example 1.45: Find the expansion of f ( z ) = in the region 1 < z − 1 < 2.
Solution: z − z3
1 1  1 1
f ( z) = − =  − 
z ( z − 1) ( z + 1) z − 1  z + 1 z 

1  1 1 
=  − 
z − 1  z − 1 + 2 z − 1 + 1 

1  1  z − 1 1  
−1 −1
1 
=  1 +  −  1+  
z − 1  2  2  z − 1  z − 1 


1  1 ∞ ( −1) ( z − 1) 1 ∞ ( −1)  
n n n
z −1 1 
=  ∑ − ∑    ∵ 1 < z − 1 < 2, ∴ < 1, < 1
z − 1  2 n= 0 2 n
z − 1 n = 0 ( z − 1)  
n
2 z −1 
 

=∑

( −1)n ( z − 1)n −1 + 1
−∑

( −1) n

n =1 2 n +1 2( z − 1) n = 0 ( z − 1)n + 2


=∑

( −1) ( z − 1)
n +1 n

+
1
−∑

( −1) ; 0 < z − 1 < 2
n−2

n= 0 2 n+ 2
2( z − 1) n = 2 ( z − 1)n


Example 1.46: When z + 1 < 1, show that



1
2
= 1 + ∑ ( n + 1)( z + 1) n 
z n =1

1 1
= 1 − ( z + 1)
−2
Solution: =
z 2 ( z + 1 − 1)2 
∞ ∞
= ∑ ( n + 1) ( z + 1) = 1 + ∑ ( n + 1) ( z + 1)
n n

n= 0 n =1 
Example 1.47: Find all the possible Taylor’s and Laurent series expansions of the function f (z)
1
about the point z =1 where f ( z ) = .
( z + 1) ( z + 2)2
Solution: By suppression method
1 1 1 A
Let ≡ − + 
( z + 1) ( z + 2 ) + ( z + 2) +
2 2
z 1 z 2

1 ≡ ( z + 2) − ( z + 1) + A ( z + 1) ( z + 2)
2
\

Equate coefficient of z 2

0 = 1 + A  \ A = –1
1 1 1
\ f ( z) = − −
z + 1 z + 2 ( z + 2 )2

f (z) is not defined at z = –1, –2
Distances of z = 1 from z = –1, –2 are respectively 2 and 3.
\ We shall have Taylor expansion in z − 1 < 2 and Laurent expansions in 2 < z − 1 < 3 and
z − 1 > 3.
In the region z − 1 < 2
1 1 1
f ( z) = − −
z − 1 + 2 z − 1 + 3 ( z − 1 + 3)2
  
−1 −1 −2
1  z − 1 1  z − 1 1  z − 1
= 1 +  − 1 +  − 1 + 
2 2  3 3  9 3  
1 ∞ ( z − 1) 1 ∞n
( z − 1) 1 n ∞
( z − 1)n
= ∑ ( −1)n n − ∑ ( −1)n n − 2 ∑ ( −1) (n + 1)
n

2 n= 0 2 3 n= 0 3 3 n= 0 3n 

n  1 n + 4
= ∑ ( −1)  n +1 − n + 2  ( z − 1)
n

n= 0  2 3  

In the region 2 < z − 1 < 3


−1 −1 −2
1  2  1  z − 1 1  z − 1
f ( z) = 1 +  − 1 +  − 1 +  
z −1 z − 1 3 3  32  3 
n ( z − 1) ( z − 1)n
n
1 ∞ 2n 1 ∞ 1 ∞
= ∑ ( −1) n
− ∑ ( −1) − 2 ∑ ( −1) (n + 1)
n

z − 1 n= 0 ( z − 1) n
3 n= 0 3n
3 n= 0 3n

( −1)n 2n − ∞ −1 n (n + 4) z − 1 n

=∑ ∑ ( ) n+ 2 ( )
n = 0 ( z − 1)
n +1
n= 0 3


( −1)n −1 2n −1 − ∞ −1 n (n + 4) z − 1 n 
=∑ ∑ ( ) 3n+ 2 ( )
n =1 ( z − 1)n n= 0
In the region z − 1 > 3
−1 −1 −2
1  2  1  3  1  3 
f ( z) = 1 +  − 1 +  − 2 
1+ 
z − 1  z − 1 z − 1  z − 1 ( z − 1)  z − 1

1 ∞ 2n 1 ∞ 3n 1 ∞
(n + 1) 3n
= ∑ ( −1)n − ∑ ( −1) n
− ∑ ( −1)
n


z − 1 n= 0 ( z − 1) ( z − 1) n= 0
n
( z − 1) ( z − 1)2
n
n= 0 ( z − 1)n 

1 1 1
+ ∑ ( −1) 2n −1 − ( −1) 3n −1 − ( −1) ( n − 1) 3n − 2 
n −1 n −1 n
= −
z − 1 z − 1 n= 2   ( z − 1)n

∞  2n −1 + ( n − 4 ) 3n − 2 
= ∑ ( −1)
n −1


n= 2 ( z − 1)n 
1
Example 1.48: Write all possible Laurent series for the function f ( z ) = about z = –2.
z ( z + 2)
3

Solution: Singularities of f (z) are at z = 0, –2, their distances from z = –2, are respectively 2 and 0.
Hence, Laurent series in regions 0 < z + 2 < 2 and z + 2 > 2.
In region 0 < z + 2 < 2
−1
1 1 1  z + 2
f ( z) = ⋅ = − 1 −
( z + 2)3 z + 2 − 2 2 ( z + 2)3  2  
1 ∞
( z + 2 )n = − ∞ ( z + 2 )n − 3
3 ∑ ∑
=−
2 ( z + 2) n = 0 2 2 n +1
n
n= 0

( z + 2)
∞ n
1 1 1
=− − −
2 ( z + 2) 4 ( z + 2) 8 ( z + 2) n = 0 2
3 2
− ∑ n+ 4

In region z + 2 > 2
−1
1  2  1 ∞
2n ∞
2n ∞
2n − 4
f ( z) =  1 −  = ∑ = ∑ = ∑
( z + 2 )4  z + 2  ( z + 2 )4 n = 0 ( z + 2 )n n = 0 ( z + 2 )n + 4 n = 4 ( z + 2 )n


−2 z + 3
Example 1.49: Find the Taylor’s series and Laurent’s series of f ( z ) = 2 with centre
at the origin. z − 3z + 2
−2 z + 3 1 1
Solution: f ( z ) = =− −  (By suppression method)
( z − 1) ( z − 2) z − 1 z − 2
Singularities of f(z) are at z = 1, 2 which are at distance 1 and 2 respectively from origin.
Hence, Taylor expansion in z < 1 and Laurent expansion in 1 < z < 2, z > 2 .
In region z < 1
−1 ∞
1 z 1 ∞ zn ∞
 1 
f ( z ) = (1 − z ) + 1 −  = ∑ zn + ∑ = ∑  1 + n +1  z n
−1

2  2 n= 0 2 n= 0 2 n
n= 0  2 

In region 1 < z < 2
−1 −1
1  1 1 z
f ( z ) = − 1 −  + 1 − 
z  z 2  2
1 ∞ 1 1 ∞ zn
=− ∑ + ∑
z n = 0 z n 2 n = 0 2n

∞ ∞
zn 1
= ∑ n + 1 − ∑ n +1
n= 0 2 n= 0 z 
∞ n ∞
z 1
= ∑ n +1 − ∑ n
n= 0 2 n =1 z 

In region z > 2
−1 −1
1  1 1  2 1 ∞ 1 1 ∞ 2n
f ( z ) = − 1 − 
z  z
− 1 − 
z z
=− ∑ − ∑
z n= 0 z n z n= 0 z n 


2n + 1 ∞
2n −1 + 1
= −∑ n +1
= −∑
n= 0 z n =1 zn 
2z3 + 1
Example 1.50: Find the Taylor’s expansion of function f ( z ) = about the point z = i.
z2 + z
Solution: Singularities of f(z) are at z = 0, –1 which are at distances 1 and 2 respectively from
z = i.
Thus, Taylor expansion will be in region z − i < 1.
By synthetic division
0) 2 0 0 1
0 0 0
−−
−1) 2 0 0 1 
−2 2
2 −2 2

2 1
\ f ( z ) = 2 z − 2 + + 
z + 1 z ( z + 1)
2 1 1
= 2z − 2 + + −
z +1 z z +1 
1 1
= 2z − 2 + +
z z +1 
1 1
= 2 ( z − i ) + 2i − 2 + +
z − i + i z − i +1+ i 
−1 −1
1 z − i 1  z − i
= − 2 + 2i + 2 ( z − i ) + 1 +  + 1 +  
i i  1+ i  1+ i 
−1
1  z − i
= − 2 + 2i + 2 ( z − i ) − i (1 − i( z − i ) ) +
−1
1 + 
1+ i  1+ i  
n ( z − i)
∞ n
1 ∞
= − 2 + 2i + 2 ( z − i ) − i ∑ i n ( z − i ) + ∑ ( ) 1+ i n
n
−1

n= 0 1 + i n= 0 ( ) 
n ( z − i)
∞ ∞ n
1 1
= − 2 + 2i + 2 ( z − i ) − i + ( z − i ) + − ( z − i ) − i ∑ i ( z − i ) + ∑ ( −1)
n n


1 + i 2i n= 2 n= 2 (1 + i )n+1 
 ( −1) 
n
 1− i  1 ∞

 +  2 + 1 −  ( z − i ) + ∑ i +  z − i)
n +1 (
n −1 n
=  −2 + 2i − i +

 2 2i n= 2 
 (1 + i ) 
−3 + i  i ∞   i  n +1 
+  3 +  ( z − i ) + ∑ i n −1 1 +    ( z − i ) 
n
=
2  2 n= 2   1 + i 

−3 + i  i ∞   1 + i  n +1 
+  3 +  ( z − i ) + ∑ i 1 +    ( z − i )
n −1 n
=
2  2  n= 2   2 

z
Example 1.51: Expand in 1 < z < 2
(z 2
)(
−1 z2 + 4 )
z z 1 1 
Solution:   f ( z ) = =  −   (By suppression method)
(z 2
)(
−1 z + 4 2
) 5  z2 −1 z2 + 4

z1 1  z2  
−1
 1
−1
 1 z
2

=  1 − 2  − 1 +   ∵ 1 < z < 2, ∴ < 1, < 1
5  z2 z 4 4   z 4 
 
1 ∞ 1 z ∞ n z
2n
= ∑ − ∑ ( −1) 
5 z n = 0 z 2 n 20 n = 0 4n
2 n +1
1 ∞ 1 1 ∞
( −1)n  
z
= ∑
5 n= 0 z 2 n +1
− ∑
10 n = 0 2 

Example 1.52: Expand the function (1 − cos z)/z^3 in Laurent series about the point z = 0.
Solution: cos z = 1 − z^2/2! + z^4/4! − z^6/6! + ⋯ = ∑_{n=0}^{∞} (−1)^n z^{2n}/(2n)!
∴ 1 − cos z = ∑_{n=1}^{∞} (−1)^{n+1} z^{2n}/(2n)!
∴ (1 − cos z)/z^3 = ∑_{n=1}^{∞} (−1)^{n+1} z^{2n−3}/(2n)!;  z ≠ 0
= 1/(2z) + ∑_{n=2}^{∞} (−1)^{n+1} z^{2n−3}/(2n)!;  z ≠ 0
Example 1.53: Find the Laurent series of z^2 e^{1/z} with centre as origin.
Solution: e^{1/z} = ∑_{n=0}^{∞} 1/(n! z^n);  z ≠ 0
∴ z^2 e^{1/z} = ∑_{n=0}^{∞} 1/(n! z^{n−2});  z ≠ 0
= z^2 + z + 1/2 + ∑_{n=3}^{∞} 1/(n! z^{n−2});  z ≠ 0
= z^2 + z + 1/2 + ∑_{n=1}^{∞} 1/((n + 2)! z^n);  z ≠ 0
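A quick way to reproduce this expansion with sympy (an assumed tool, not part of the text) is to expand e^w about w = 0 and then substitute w = 1/z, since series() cannot expand directly about the essential singularity at z = 0.

```python
import sympy as sp

z, w = sp.symbols('z w')

# Expand e**w about w = 0, drop the order term, then substitute w = 1/z
# (truncated after the w**5 term in this sketch).
expansion = sp.series(sp.exp(w), w, 0, 6).removeO().subs(w, 1/z)
print(sp.expand(z**2 * expansion))
# -> z**2 + z + 1/2 + 1/(6*z) + 1/(24*z**2) + 1/(120*z**3)
```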

π
Example 1.54: Expand cos z in a Taylor’s series about z =
4
π
Solution: Let  z − = w
4
π
∴ z = w+
4
 π π π
∴ cos z = cos  w +  = cos cos w − sin sin w
 4 4 4 
1
= (cos w − sin w )
2 
 ( −1) w ( −1) w 2 n +1 
∞ n 2n ∞ n
1
= ∑ −∑ 
2  n = 0 2n ! n = 0 ( 2n + 1)! 

1  w 2
w 3
w 4
w 5
w6 
= 1 − w − + + − − + 
2 2 ! 3 ! 4 ! 5 ! 6 ! 
2 3 4 5 6
  π  π  π  π  π 
  z −   z −   z −   z −   z −  
1   π  4   4   4   4   4 
= 1−  z −  − + + − − + 
2   4 2! 3! 4! 5! 6! 


ez
Example 1.55: Find the Laurent series of f ( z ) = about z = 1. Find region of ­convergence.
Solution: Singularities of f (z) are at z = 0, 1. z (1 − z )
Thus, there will be two Laurent series of f (z) about z = 1, one in region 0 < z − 1 < 1 and other
in region z − 1 > 1.
In the region 0 < z − 1 < 1,
ez e . e ( z −1) e . e ( z −1)
f ( z) =
z (1 − z )
=−
( z − 1) [1 + ( z − 1)]
=−
( z − 1)
[1 + ( z − 1)]
−1


e  ( z − 1) ( z − 1) ( z − 1) ( z − 1)
2
( z − 1)5
3
 4
=− 1 + + + + + +  . 
( z − 1)  1! 2! 3! 4! 5! 

1 − ( z − 1) + ( z − 1) 2 − ( z − 1)3 + ( z − 1) 4 − ( z − 1)5 + 



e   1  1 1
=− 1 + ( −1 + 1)( z − 1) + 1 − 1 +  ( z − 1) 2 +  −1 + 1 − +  ( z − 1)3 
( z − 1)   2 !  2 ! 3!

 1 1 1  1 1 1 1 
+ 1 − 1 + − +  ( z − 1) 4 +  −1 + 1 − + − +  ( z − 1)5 + 
 2 ! 3! 4 !  2 ! 3! 4 ! 5!  
e  ∞
n  1 1 1 ( −1) n  
=− 1 + ∑ ( −1)  − + −  +  ( z − 1) n 
( z − 1)  n = 2  2 ! 3! 4 ! n!   
e ∞
1 1 1 ( −1) n +1
=−
( z − 1)
−e ∑ ( −1)
n =1
n +1
 2 ! − 3! + 4 ! −  + ( n + 1)! ( z − 1)
n


and in the region z − 1 > 1,
−1
ez e . e ( z −1) e . e ( z −1)  1 
f ( z) = =− =− 2 
1+ 
z (1 − z ) ( z − 1) [1 + ( z − 1)] ( z − 1)  ( z − 1) 

e  ( z − 1) ( z − 1) 2 ( z − 1)3 ( z − 1) 4 ( z − 1)5 
=− 1 + + + + + +  .
( z − 1) 2  1! 2! 3! 4! 5!  
 1 1 1 1 1 
1 − + − + − +  
 ( z − 1) ( z − 1) ( z − 1) ( z − 1) ( z − 1)
2 3 4 5

e   1 1 1 1  1 1 1 1 
=−  1 − + − + −  +  − + − +  ( z − 1) 
( z − 1) 2   1! 2 ! 3! 4 ! 1! 2 ! 3! 4 !

1 1 1 1  1 1 1 1 
+  − + − +  ( z − 1) 2 +  − + − +  ( z − 1)3 
 2 ! 3! 4 ! 5!   3! 4 ! 5! 6 ! 

1 1 1 1  1 1 1 1 
+  − + − +  ( z − 1) 4 +  − + − +  ( z − 1)5 
 4 ! 5! 6 ! 7 !   5! 6 ! 7 ! 8! 
1 1 1  
+  − + −  ( z − 1)6 + 
 6 ! 7 ! 8!   
 1 1 1  1  1 1 1  1
+  −1 + 1 − + − +  + 1 − 1 + − + −  
 2 ! 3! 4 !  ( z − 1)  2 ! 3! 4 !  ( z − 1) 2

 1 1 1  1  1 1 1  1
+  −1 + 1 − + − +  + 1 − 1 + − + − 
 2 ! 3! 4 !  ( z − 1) 3
 2 ! 3! 4 !  ( z − 1) 4


 1 1 1  1 
+  −1 + 1 − + − +  +  
 2 ! 3! 4 !  ( z − 1) 5
 
e  1  1  1  1 1  1 1 1
=−   + 1 −  ( z − 1) + ( z − 1) +  −  ( z − 1) −  − −  ( z − 1) 
2 3 4
( z − 1) 2  e e e 2 ! e  2 ! 3! e 

 1 1 1 1  1 1 1 1 1 
+  − + −  ( z − 1)5 −  − + − −  ( z − 1) +  
6
 2 ! 3! 4 ! e   2 ! 3! 4 ! 5! e  

 1  1 1 1  1 1 1 1  1 1 
+  −  + +−  + +−  + 
 e  ( z − 1) e ( z − 1)  e  ( z − 1) e ( z − 1)  e  ( z − 1)
2 3 4 5
 

∞  1 1 1 ( −1) n +1   1− e ∞
( −1) n
= −1 + ∑ ( −1) n e  − + −  +  − 1( z − 1) +
n
−∑
n =1   2 ! 3! 4 ! ( n + 1)   ( z − 1) n = 2 ( z − 1) n


Exercise 1.3

1. Find all possible Taylor's and Laurent series expansions of the function f(z) = 1/(1 − z) about z = 0.
2. Expand the function f(z) = 1/z about the point z = 2 in Taylor's series.
3. Find the Taylor series expansion of the function f(z) = 1/((z − 1)(z − 3)) about the point z = 4. Find its region of convergence.
4. Find the first three terms of the Taylor series expansion of f(z) = 1/(z^2 + 4) about z = −i. Find the region of convergence.
5. Find the Laurent's expansion of f(z) = 1/(z(z − 1)^2) about the point z = 1.
6. Obtain the Taylor's or Laurent's series which represents the function f(z) = 1/((1 + z^2)(z + 2)) when (i) 1 < |z| < 2   (ii) |z| > 2
7. Find the Taylor's or Laurent's series which represents the function (z^2 − 1)/(z^2 + 5z + 6) in the region (i) |z| < 2   (ii) 2 < |z| < 3   (iii) |z| > 3
8. Find all possible Laurent's series of f(z) = (7z^2 + 9z − 18)/(z^3 − 9z) about its singular points.
9. Obtain the Taylor series expansion of f(z) = 1/(z^2 + (1 + 2i)z + 2i) about z = 0.
10. Find the Laurent's series for f(z) = (7z − 2)/(z^3 − z^2 − 2z) in the region given by 0 < |z + 1| < 1.
11. Expand 1/((1 + z^2)(2 + z^2)) in powers of z, when (i) |z| < 1   (ii) 1 < |z| < √2   (iii) |z| > √2
12. Find Laurent expansion of f(z) = e^{2z}/(z − 1)^3 about z = 1.
13. Obtain the Taylor series expansion of f(z) = e^z about the point (i) z = 0   (ii) z = 2.
14. Obtain the first three terms of the Laurent's series expansion of the function f(z) = 1/(e^z − 1) about the point z = 0 valid in the region 0 < |z| < 2π.

Answers 1.3
∞ ∞
1
1. ∑ z n ; z < 1, − ∑
n= 0 n =1 zn
; z >1

( −1) ( z − 2)
n n
2. ∑
n= 0 2 n +1
; z−2 < 2

1 ∞  1 
3. ∑ ( −1)n 1 − 3n+1  ( z − 4 )n ; z −4 <1
2 n= 0
1 2i −7
4. , ( z + i ), ( z + i ) 2 ; region of convergence z + i < 1
3 9 27
1 1 ∞
+ ∑ ( −1) n ( z − 1) ; 0 < z − 1 < 1 and

( −1)n −1 ;

n
5. − + z −1 > 1
( z − 1) ( z − 1) n= 0
2
n = 3 ( z − 1)
n

1  ∞ ( −1) z ∞ 
 ( −1) 2 ( −1)  
n n n n

6.  (i)   ∑ − ∑  2 n +1 − 2 n + 2   ; 1< z < 2


5  n = 0 2 n +1 n= 0  z z 
  

{
1 ∞  22 n − ( −1) n 2 2 − ( −1)
2n n
} 
(ii)  ∑ 
5 n =1  z 2 n + 1

z 2n+ 2 
; z >2


1 ∞  3 8 
7.  (i)  − + ∑ ( −1) n  n +1 − n +1  z n ; z < 2
6 n =1 2 3 
5 ∞ 8 ∞
3.( −2) n −1
(ii)  − + ∑ ( −1) n +1 n +1 z n + ∑ ;2< z <3
3 n =1 3 n =1 zn
∞ 
 8 (3) − 3( 2)  
n −1 n −1

(iii) 1 + ∑ ( −1) n   ; z > 3


n =1 
 zn  

2 ∞ zn
{
8. About z = 0 (i) + ∑ ( −1) n − 4 n +1 ; 0 < z < 3
z n= 0 3
}
n −1
7 ∞
{ 3
(ii)  + ∑ ( −1) n −1 + 4 n ; z > 3
z n= 2 z
}

4  2 1 
About z = 3 (i) + ∑ ( −1) n  n +1 + n +1  ( z − 3) n ; 0 < z − 3 < 3
( z − 3) n = 0 3 6 

6 ∞
( −1) n −1 3n −1 1 ∞ ( −1) n ( z − 3) n
( z − 3) n= 2 ( z − 3)n 6 ∑
(ii)  + 2∑ + ; 3 < z −3 < 6
n=0 6n

7 ∞  2(3) n −1 + 6 n −1 
(iii)  + ∑ ( −1) n −1  ; z −3 > 6
( z − 3) n= 2 ( z − 3) n
1 2 ∞ 1 1 
About z = −3 (i) − ∑  n + n  ( z + 3) n ; 0 < z + 3 < 3
( z + 3) 3 n = 0  3 6 
3 ∞
3n −1 2 ∞ ( z + 3) n
(ii)  + 2∑ − ∑ ;3< z +3 < 6
( z + 3) n= 2 ( z + 3) 3 n=0 6 n
n

∞  2(3) n −1 + 4(6) n −1 
7
(iii)  +∑  ; z+3 > 6
( z + 3) n= 2 ( z + 3) n

1  ∞  1   
n +1

 ∑   − 1 ( −1) z n  ; z < 1
n
9.
(1 − 2i )  n= 0  2i   

−3 ∞
 2 
− ∑ 1 + n +1  ( z + 1) ; 0 < z + 1 < 1
n
10.
z + 1 n= 0  3 

 1  2n
∑ ( −1)
n
11. (i)  1 − n +1 z , z <1
n= 0  2 

( −1)n − ∞
z 2n
(ii)  ∑ ∑ ( −1)
n
, 1< z < 2
n= 0 z 2n+ 2 n= 0 2 n +1
∞ ( −1)n −1 (2n − 1)
(iii)  ∑ , z > 2
n =1 z 2n+ 2

 1 2 2 ∞
2n + 3 ( z − 1) n 
12. e 2  + + + ∑
( z − 1) n = 0 ( n + 3)! 
 ; z −1 > 0
 ( z − 1) ( z − 1)
3 2


zn ∞
( z − 2) n
13. (i) ∑ n!   (ii) e ∑
n= 0
2

n= 0 n!
1 1 z
14. first three terms are ,− , .
z 2 12

1.12  Zeros and Singularities of Complex functions


1.12.1  Zeros of an Analytic Function
A point z = z_0 is called a zero of a function f(z) if f(z) is analytic at z_0 and f(z_0) = 0.
In a neighbourhood of a zero z_0 of f(z), by Taylor series
f(z) = a_0 + a_1(z − z_0) + a_2(z − z_0)^2 + a_3(z − z_0)^3 + ⋯
where a_n = (1/n!) f^(n)(z_0)
If a_0 = 0 and a_1 ≠ 0, then z = z_0 is called a simple zero (or zero of order 1) of f(z).
If a_0 = a_1 = a_2 = ⋯ = a_{n−1} = 0 and a_n ≠ 0, then f(z) is said to have a zero of order n at z = z_0.
In the neighbourhood of z_0, which is a zero of order n of f(z),
f(z) = a_n(z − z_0)^n + a_{n+1}(z − z_0)^{n+1} + ⋯ ;  a_n ≠ 0
∴ f(z) = (z − z_0)^n g(z)
where g(z) = a_n + a_{n+1}(z − z_0) + a_{n+2}(z − z_0)^2 + ⋯ ≠ 0 for z = z_0
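The order of a zero is just the index of the first non-vanishing Taylor coefficient, which is easy to verify mechanically. The sketch below (using sympy, with the assumed sample function f(z) = z − sin z, not taken from the text) finds the smallest n with f^(n)(z_0) ≠ 0.

```python
import sympy as sp

z = sp.symbols('z')

# Assumed sample: f(z) = z - sin z, whose Taylor series begins z**3/6, so the zero at 0 has order 3.
f = z - sp.sin(z)
z0 = 0
n = 0
while sp.diff(f, z, n).subs(z, z0) == 0:
    n += 1
print(n)                        # 3, the order of the zero (first non-vanishing derivative)
print(sp.series(f, z, z0, 6))   # z**3/6 - z**5/120 + O(z**6)
```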

1.12.2 Singularities of a Function
A point z = z_0 at which the function f(z) is not defined or the function is not analytic is called a singularity of f(z). A rational function f(z) = P(z)/Q(z) has a singular point at z = z_0 if Q(z_0) = 0 and P(z_0) ≠ 0, because in this case f(z) is not defined at z = z_0.
There are the following two types of singularities of a function:
(i) non-isolated singularity
(ii) isolated singularity

Non-isolated singularity
A singularity z = z_0 is called a non-isolated singularity of f(z) if every neighbourhood of z = z_0 contains a singularity of f(z) other than z_0 also.
Now, we shall prove that every neighbourhood of a non-isolated singularity contains an infinite number of singularities of f(z).

Theorem 1.21  If z = z_0 is a non-isolated singularity of f(z), then every neighbourhood of z = z_0 contains an infinite number of singularities of f(z).
Proof: If possible, let some neighbourhood |z − z_0| < δ of the non-isolated singularity z_0 contain only a finite number of singularities z_1, z_2, …, z_n of f(z) other than z_0. Let r = min_{1≤k≤n} |z_0 − z_k|; then the neighbourhood |z − z_0| < r of z_0 contains no singularity of f(z) other than z_0. Thus z_0 is not a non-isolated singularity of f(z), which is a contradiction, and thus every neighbourhood of z_0 contains an infinite number of singularities of f(z).

Isolated singularities
If there exists a neighbourhood of a singularity z = z_0 of f(z) which contains no singularity of f(z) other than z = z_0, then z = z_0 is called an isolated singularity of f(z).
For example, f(z) = cot(π/z) has singularities at z = 0, ±1, ±1/2, ±1/3, …. Out of these singularities z = 0 is non-isolated and all other singularities are isolated singularities.
Isolated singularities are of three types.
Let |z − z_0| < δ be a neighbourhood of z = z_0 in which f(z) has no singularity other than z = z_0. The Laurent series expansion of f(z) about z = z_0 in this neighbourhood may contain no terms, a finite number of terms, or an infinite number of terms of negative powers of (z − z_0). If there is no term of negative power of (z − z_0), z = z_0 is a removable isolated singularity. If there is a finite number of such terms, the highest negative power being (z − z_0)^{−n}, then z = z_0 is a pole of order n. Poles of order 1, 2 and 3 are also called simple pole, double pole and triple pole, respectively. If there is an infinite number of terms of negative powers of (z − z_0), the isolated singularity z = z_0 is called an essential singularity.
Remark 1.12:
(i) If f(z) has a zero of nth order at z = z_0, then 1/f(z) has a pole of order n at z = z_0, and vice versa.
(ii) If f(z) has a zero of order n at z = z_0, then (f(z))^k has a zero of order kn at z = z_0.
(iii) If f(z) has a zero of order n at z = z_0, then the kth derivative of f(z), i.e., f^(k)(z), has a zero of order n − k at z = z_0 when 1 ≤ k ≤ n − 1.
(iv) If a function f(z) is analytic for all z except possibly a finite number of poles, then f(z) is called a meromorphic function. For example, the function sin z/((z − 1)(z + 3)^2) has poles at z = 1 and z = −3 only and hence is a meromorphic function.

1.12.3 Method to Find Type of Isolated Singularity


If z = z_1 is an isolated singularity of f(z), then its type can be found by writing the Laurent expansion of f(z) about z = z_1. Also, without writing the Laurent expansion, we can find the type of an isolated singularity by the results below, obtained from the Laurent expansion of f(z) about z = z_1.
(i) If lim_{z→z_1} f(z) exists, then z_1 is a removable singularity.
(ii) If lim_{z→z_1} (z − z_1)^k f(z) = λ where λ ≠ 0 (and finite), then z_1 is a pole of order k.
(iii) If lim_{z→z_1} (z − z_1)^n f(z) is infinite for every non-negative integer n, then z_1 is an essential singularity.
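These limit tests are easy to run mechanically. The sketch below (using sympy, on the assumed sample functions sin z/z and e^z/z^3, which are not the examples that follow) illustrates tests (i) and (ii).

```python
import sympy as sp

z = sp.symbols('z')

# Assumed samples: sin(z)/z has a removable singularity at 0, exp(z)/z**3 a pole of order 3.
f1 = sp.sin(z) / z
f2 = sp.exp(z) / z**3

print(sp.limit(f1, z, 0))           # 1  -> limit exists, so the singularity is removable (test (i))
print(sp.limit(z**3 * f2, z, 0))    # 1  -> non-zero and finite, so a pole of order k = 3 (test (ii))
```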

Example 1.56: What type of singularities have the following functions?


tan z −3
(i)    (ii) e z
z
tan z
Solution: (i) f(z) = has singularities at z = 0 and the points where tan z is undefined, i.e.,
π z
z = ( 2n + 1) ; n ∈ I
2
tan z sec 2 z
lim f ( z ) = lim = lim ( L ′ Hospital rule)
z →0 z →0 z z →0 1 
= 1
\ f (z) has removable singularity at z = 0
 π
z −  nπ + 
  π sin z  2
lim  z −  nπ +   f ( z ) = lim π ⋅
π
z → nπ +   2 z → nπ + 2 z cos z
2 
 π
sin  nπ + 
 2 1
=
π
lim
π − sin z
( L ′ Hospital rule )
nπ + z → nπ +
2
2 
n n
( )
− 1 1 ( )
− 1 1
= =− ⋅ 
π  π π −1 n
nπ + − sin  nπ +  n π + ( )
2  2 2
−2
= ≠ 0; n ∈I 
(2n + 1) π
π
\ f (z) has simple poles at each z = nπ + ; n ∈I
2
1
−3
(ii)  f ( z ) = e z = e z has only singularity at z = 0
3

1 1 1 1
f (z) = 1 + + + + +
z 3 2 ! z 6 3! z 9 4 ! z12
This expansion has infinite no. of terms of negative powers of z
\ f (z) has essential singularity at z = 0.
Example 1.57: Find the type of singularity of the function f(z) = (1 − e^{2z})/z^3 at z = 0.
Solution: f(z) = (1/z^3) [1 − (1 + 2z + (2z)^2/2! + (2z)^3/3! + (2z)^4/4! + ⋯)]
= −2/z^2 − 2/z − 4/3 − 2z/3 − ⋯
∴ f(z) has a double pole at z = 0.



Example 1.58: Find the location and type of singularity of the following functions

(i) 
e− z
  (ii) 
( z + 1) − tan ( z + 1)
( z + 2) 3

−z
( z + 1)
3

e
Solution: (i)  f (z) = has singularity only at z = –2
( z + 2)3
lim ( z + 2) f ( z ) = lim e − z = e 2 ≠ 0
3

z →−2 z →−2 
\ f (z) has pole of order 3 at z = –2.
z + 1 − tan ( z + 1) π
(ii)  f (z) = has singularities at z = –1 and z = −1 + ( 2n + 1) ; n ∈ I
( z + 1) 3
2

1   ( z + 1)3 + 2 z + 1 5 − ... 
f (z) =  ( z + 1) −  ( z + 1) − ( ) 
( z + 1)3   3 15  

1 2
− ( z + 1) + ....
2
=
3 15 
\ f (z) has removable singularity at z = –1

 π
lim
π  z + 1 − ( 2n + 1) 2  f ( z )
z →−1+ ( 2 n +1)  
2 
  π   π
 z + 1 −  nπ + 2   sin ( z + 1)  z + 1 −  nπ + 2  
= lim  − ⋅ 
z →−1+ ( 2 n +1)
π
( z + 1) 2
( z + 1)3
cos ( z + 1)
2 
 π  π
sin  nπ +  z + 1 −  nπ + 
 2  2
= 0− lim
 π
3
z →−1+ ( 2 n +1)
π cos ( z + 1)
 nπ +  2
2 
 π
8 sin  nπ + 
 2 1
=− lim ( L ′ Hospital rule)
(2n + 1) 3
π 3
z →−1+ ( 2 n +1)
π − sin ( z + 1)
2 
 π
8 sin  nπ + 
 2 1 8
= ⋅ = ≠ 0 ; n ∈I
(2n + 1) 3
π 3
 π
sin  nπ +  (2n + 1)3 π 3
 2 
π
\ f (z) has simple poles at each of z = −1 + nπ + ; n ∈I
2

z
Example 1.59: Find the singularities of f ( z ) = and indicate the character of the
( )
2
­singularities. z +4
2

z z
Solution: f ( z ) = =
( )
( z + 2i )2 ( z − 2i )2
2
z2 + 4
f (z) has singularities at z = ± 2i
z 2i 1
lim ( z − 2i ) f ( z ) = lim
2
= =− i≠0
z → 2i z → 2i
( z + 2i ) 2

−16 8

\ f (z) has isolated singularity which is double pole at z = 2i.


z −2i 1
lim ( z + 2i ) f ( z ) = lim
2
= = i≠0
( z − 2i ) −
2
z →−2 i z →−2 i 16 8

\ f (z) has isolated singularity double pole at z = -2i.
1
Example 1.60: Find the nature and the location of singularities of f ( z ) = .
(
z ez −1 )
Solution: f (z) has singularities at z = 0 and z = 2 n p i; n ∈ I , n ≠ 0
z 1
lim z 2 f ( z ) = lim z = lim z ( L ′ Hospital rule )
z →0 z→0 e − 1 z →0 e

= 1 ≠ 0
1 z − 2nπ i
lim ( z − 2nπ i ) f ( z ) = lim ⋅

z → 2 nπ i z → 2 nπ i z ez −1 
1 z − 2nπ i
= lim
2nπ i z → 2 nπ i e z − 1 
1 1
= lim (L′ Hospital rule )
2nπ i z → 2 nπ i ez 
−i −i
= (1) = ; n ∈I, n ≠ 0
2nπ 2nπ 
\ f (z) has isolated singularities at z = 0, z = 2npi, n ∈ I , n ≠ 0.
Pole of order two at z = 0 and simple poles at z = 2npi ; n ∈ I , n ≠ 0.

Example 1.61: Find the nature and the location of the singularities of the following functions
 1 π
(i)  f ( z ) = tan     (ii) f ( z ) = cosec
 z z

Solution:
 1 1
(i)  f ( z ) = tan   has singularities at z = 0, ; n ∈I
 z π
(2n + 1) 2

1
Now lim = 0
π
(2n + 1) 2
n →∞

1
Thus, every neighbourhood of z = 0 contains infinite no. of singularities among z = ;
π
n∈I ( 2n + 1)
2
\ z = 0 is non-isolated singularity.
1
z−
π

z −
1  (2n + 1) 2
lim  π  f ( z ) = lim1
z→
1  ( 2n + 1)  z→ 1
(2 n +1)
π  2 (2 n +1)
π cot
2 2 z 
1
= lim  (L′ Hospital rule)
z→
1 
π − cosec
2 1  1
(2 n +1)   − 
2 z   z2 

1
= 2
≠ 0; n ∈ I 
 π
( 2n + 1) 2 

1
\ f (z) has simple poles at each of z = ; n ∈I
π
(2n + 1) 2
π 1
(ii)  f ( z ) = cosec has singularities at z = 0 and z = ; n ∈ I ; n ≠ 0
z n
1
Now, lim =0
n →∞ n 
1
Thus, every neighbourhood of z = 0 contains infinite for n ∈ I , n ≠ 0
n
\ z = 0 is non-isolated singularity.
1
z−
 1
n lim  z −  f ( z ) = lim
π z→
1 n z→
1
n n sin
z 
1
= lim (L′ Hospital rule)
1 π π
z→
n − 2 cos
z z

=
( −1)n+1 ≠ 0; n ∈ I ; n ≠ 0
n2π 
1
\ f (z) has isolated singularities simple poles at each of z = ; n ∈ I ; n ≠ 0
n

1.13 Residue
If z = z_0 is an isolated singularity of f(z), then there exists a neighbourhood |z − z_0| < δ of z_0 in which f(z) has the only singularity z = z_0. Thus, we can have a Laurent series expansion of f(z) about z = z_0 in this neighbourhood.
In this expansion, the coefficient a_{−1} of (z − z_0)^{−1}, i.e.,
a_{−1} = (1/2πi) ∫_C f(z) dz where C: |z − z_0| = r < δ,
is called the residue of f(z) at z = z_0, and in short it is written as Res(z_0) = a_{−1}.
∴ Res(z_0) = (1/2πi) ∫_C f(z) dz   (1.40)

1.13.1 Residue at a Removable Singularity


Laurent series expansion of f (z) about removable singularity does not contain any term of nega-
tive powers of (z – z0)
\ Res (z0) = a–1 = Coefficient of (z – z0)–1 = 0

1.13.2 Residue at a Simple Pole


If z = z0 is simple pole of f(z), then its Laurent series expansion about z = z0 is

f(z) = ∑_{n=0}^{∞} a_n (z − z_0)^n + a_{−1}/(z − z_0);  a_{−1} ≠ 0
∴ lim_{z→z_0} (z − z_0) f(z) = a_{−1} = Res(z_0) ≠ 0
If f(z) = φ(z)/ψ(z) and f(z) has a simple pole at z = z_0 such that φ(z_0) ≠ 0, then ψ(z) has a simple zero at z = z_0 and hence ψ(z_0) = 0 and ψ′(z_0) ≠ 0.
Now, lim_{z→z_0} (z − z_0) f(z) = lim_{z→z_0} (z − z_0) φ(z)/ψ(z)
= φ(z_0) lim_{z→z_0} 1/ψ′(z)   (L'Hospital rule)
= φ(z_0)/ψ′(z_0)


1.13.3 Residue at Pole of Order m


If f(z) has a pole of order m at z = z_0, then in some neighbourhood of z_0 the Laurent series expansion of f(z) is
f(z) = [a_0 + a_1(z − z_0) + a_2(z − z_0)^2 + ⋯] + [a_{−1}/(z − z_0) + a_{−2}/(z − z_0)^2 + ⋯ + a_{−m}/(z − z_0)^m],  a_{−m} ≠ 0
∴ (z − z_0)^m f(z) = [a_0(z − z_0)^m + a_1(z − z_0)^{m+1} + a_2(z − z_0)^{m+2} + ⋯] + [a_{−1}(z − z_0)^{m−1} + a_{−2}(z − z_0)^{m−2} + ⋯ + a_{−m}]
∴ d^{m−1}/dz^{m−1} {(z − z_0)^m f(z)} = a_0 (m!/1!)(z − z_0) + a_1 ((m + 1)!/2!)(z − z_0)^2 + a_2 ((m + 2)!/3!)(z − z_0)^3 + ⋯ + a_{−1}(m − 1)!
∴ lim_{z→z_0} d^{m−1}/dz^{m−1} {(z − z_0)^m f(z)} = (m − 1)! a_{−1}
∴ Res(z_0) = a_{−1} = (1/(m − 1)!) lim_{z→z_0} d^{m−1}/dz^{m−1} {(z − z_0)^m f(z)}
1.13.4 Residue at an Isolated Essential Singularity
When z = z0 is essential singularity of f (z) then expand f (z) in Laurent series about z = z0, the coef-
ficient of (z − z0)–1 will be residue. In this case it is the only way to find residue at z = z0.
Example 1.62: Determine the poles of the function f(z) = z^2/((z − 1)(z − 2)^2) and the residue at each pole.
Solution: f(z) has a simple pole at z = 1 and a double pole at z = 2.
Res(1) = lim_{z→1} (z − 1) f(z) = lim_{z→1} z^2/(z − 2)^2 = 1
Res(2) = (1/1!) d/dz [(z − 2)^2 f(z)]|_{z=2} = d/dz [z^2/(z − 1)]|_{z=2}
= [((z − 1)·2z − z^2)/(z − 1)^2]|_{z=2} = [(z^2 − 2z)/(z − 1)^2]|_{z=2} = (4 − 4)/1 = 0
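For an independent check of such computations, sympy's residue() can be used (assuming sympy is available; this is a convenience, not the textbook's method). The sketch below reproduces Example 1.62 and also applies the derivative formula of Section 1.13.3 to the double pole.

```python
import sympy as sp

z = sp.symbols('z')

# f(z) of Example 1.62.
f = z**2 / ((z - 1) * (z - 2)**2)
print(sp.residue(f, z, 1), sp.residue(f, z, 2))   # 1, 0

# Derivative formula of Section 1.13.3 for the pole of order m = 2 at z = 2.
m = 2
g = sp.cancel((z - 2)**m * f)                      # z**2/(z - 1)
res2 = sp.limit(sp.diff(g, z, m - 1), z, 2) / sp.factorial(m - 1)
print(res2)                                        # 0
```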

1
Example 1.63: Determine poles and residues of f ( z ) = at each of its poles.
z +1
4

1
Solution: z 4 + 1 = 0 ⇒ z 4 = –1 = i 2 ⇒ z 2 = ± i = (1 ± i )2
2
1
\ z=± (1 ± i )
 2
\ Pole of f (z) are simple poles at z = z1, z2, z3, z4
1 1 1 1
where z1 = (1 + i ) , z2 = (1 − i ) , z3 = − (1 + i ) , z4 = − (1 − i )
2 2 2 2

Res (zj) = lim


j ( z − z ) = lim
1
( L’ Hospital rule ) 
z 4 + 1 z→ z j 4 z3
z→ z j

zj zj

1
= 3 =
4z j 4z j 4
=−
4
(
∵ z j 4 = −1 ; j = 1, 2, 3, 4
 )
1 1
\ Res ( z1 ) = − (1 + i ) , Res ( z2 ) = − (1 − i )
4 2 4 2 
1 1
Res ( z3 ) = (1 + i ) , Res ( z4 ) = (1 − i )
4 2 4 2 
1
Example 1.64: Find the residues of at z = 0 and z = –2.
z ( z + 2)
3

1
Solution: f (z) = has simple pole at z = 0 and pole of order 3 at z = –2
z ( z + 2)
3

1 1
Res (0) = lim z f ( z ) = lim = 
z →0 z →0
( z + 2) 3
8

1  d2  1  d 2 1 1 2 1
2 (
z + 2) f ( z )
3
Res (–2) =  =  2  = =−
2 !  dz  z = −2 2  dz z  z = −2 2 z 3 z = −2 8

Example 1.65: Compute the residues at all the singular points of f(z) where f(z) is given by
3
z2  z + 1 z2 − 2z 1
(i)    (ii)     (iii)     (iv) 
z − 2z + 2
2
z −1 ( z + 1) z + 4
2 2
( z + 1)3 ( )
(v) 
( z + 3)3   (vi)  n
z2
; n ∈N   (vii)  3
1
( z − 1)4 z −1 z + z5
z2 2± 4 −8
Solution: (i) f ( z ) = has simple poles at z = = 1± i
z − 2z + 2
2
2
Res (1 + i ) = lim ( z − 1 − i ) f ( z ) = lim
z2
=
(1 + i ) = 1
2

z →1+ i z →1+ i z − 1 + i 2i 

Res (1 − i ) = lim ( z − 1 + i ) f ( z ) = lim


z2
=
(1 − i ) = 1  2

z →1− i z →1− i z −1− i −2i


3
 z + 1
(ii)  f ( z ) = 
 z − 1
1 1 
f (z) = 3 (
z − 1 + 2) = 3 (
z − 1) + 6 ( z − 1) + 12 ( z − 1) + 8 
3 3 2

( z − 1) ( z − 1) 

6 12 8
= 1+ + +
z − 1 ( z − 1)2 ( z − 1)3

f ( z ) has pole of order 3 at z = 1.
1
\ Res (1) = coeff of =6
z −1 
z2 − 2z
(iii)  f ( z ) = has double pole at z = – 1, simple poles at ± 2i
( z + 1)2 ( z 2 + 4)
Res ( −1) =
d
( z + 1)2 f ( z ) =
d z2 − 2z
=
(z 2
) ( )
+ 4 ( 2 z − 2) − z 2 − 2 z 2 z
z2 + 4 ( z + 4)
2
dz z =−1 dz z = −1
2
z = −1

5 ( −4 ) − (3) ( −2) 14
= =−
25 25 
z2 − 2z −4 − 4i
Res ( 2i ) = lim ( z − 2i ) f ( z ) = lim =
z → 2i
( z + 1) ( z + 2i ) (1 + 2i )2 (4i ) 
z → 2i 2

=
−1 − i
=
−1 − i
=
(1 + i ) (4 − 3i ) = 7 + i
25 
( −3 + 4i ) i −4 − 3i 16 + 9
z2 − 2z −4 + 4i
Res ( −2i ) = lim ( z + 2i ) f ( z ) = lim =
z →−2 i z →−2 i
( z + 1) ( z − 2i ) (1 − 2i )2 ( −4i ) 
2

=
1− i
=
1− i
=
(1 − i ) (4 + 3i ) = 7 − i
( −3 − 4i ) i 4 − 3i 25 25

1
(iv)  f ( z ) = has triple pole at z = –1
( z + 1)3
1 d2 1 d2
Res ( −1) = lim 2 ( z + 1) f ( z ) = lim 2 (1) = 0
3

2 ! z →−1 dz 2 z →−1 dz 
Other Method
1
f ( z) = is Laurent expansion
( z + 1)3
1
∴ Res ( −1) = coeff of =0
z +1 

(v)  f ( z ) =
( z + 3)3 has pole of order 4 at z = 1
( z − 1)4
1 1 
4 [
z − 1 + 4] = 4 (
z − 1) + 12 ( z − 1) + 48 ( z − 1) + 64 
3 3 2
f ( z) =

( z − 1) ( z − 1) 

1 12 48 64
= + + +
z − 1 ( z − 1) ( z − 1) ( z − 1)4
2 3

1
∴ Res (1) = coeff of =1
z −1 
z2
(vi) f ( z ) = ,
z −1 n

Poles of f (z) are simple poles at nth roots of unity, i.e., at zk = rk ; k = 0, 1, 2, …, n –1



2π 2π i
where ρ = cos + i sin =e n
n n
Res ( zk ) = lim
( z − zk ) z 2
= lim z 2 lim n k
z−z
z → zk z −1
n z → z k z → z k z −1 
1 1
= zk lim 2
z → zk nz n −1
= zk lim
2
z → zk nz n −1
( L ′ Hospital rule )( L ′ Hospital rule )

z3
=
nzk
1
n−3
= k ∵ zkn = 1
n
( 
)
1 6πnik
\ Res ( zk ) = e ; k = 0,1, 2,, n − 1
n 
1 1
(vii)  f ( z ) = 3 = has simple poles at z = ±i and triple pole at z = 0
z + z5 z3 z 2 + 1 ( )
z −i 1 1 1
Res (i ) = lim 3 = lim 3 = = 
z →i z ( z − i ) ( z + i ) z →i z ( z + i ) −i ( 2i ) 2
1 1 1
Res ( −i ) = lim ( z + i ) f ( z ) = lim = =
z →− i z →− i z ( z − i ) i ( −2i ) 2
3

1
f (z) =
1
( ) ( )
−1
Now 1 + z 2 = 3 1 − z 2 + z 4 − z 6 + 
z3 z 
1 1
= 3 − + z − z + 
3

z z 
1
\ Res (0 ) = coeff of = −1
z 
z
Example 1.66: Determine the poles of the function and the residue at each pole.
cos z
z π
Solution: f ( z ) = has simple poles at z = ( 2n + 1 ) ; n ∈ I
cos z 2

π
z − nπ −
 π 2 
Res  nπ +  = lim z ⋅
 2  z → nπ + π cos z
2
π
z − nπ −
 π 2
=  nπ +  lim
 2  z → nπ + π cos z
2 
 π 1
=  nπ +  lim

( L ′ Hospital rule )
2  z → nπ + π − sin z
2 
 π  1 π
= ( −1) ( 2n + 1) ; n ∈ I
n +1
= −  nπ + 
 2  π 2 
sin  nπ + 
 2 

z3
Example 1.67: Find the residue of at z = 1.
( z − 1)4 ( z − 2) ( z − 3)
z3
Solution: f ( z ) = has pole of order 4 at z = 1
( z − 1)4 ( z − 2) ( z − 3)
z3
( z − 1)4 f ( z ) =  Synthetic division
( z − 2) ( z − 3) 2) 1 0 0 0
2 4 8
19 8 __
= z +5+ +  3)
z − 3 ( z − 2) ( z − 3) 1 2 4 8
3 15
1 5 19
19 8 8
= z +5+ − +  (By suppression method)
z −3 z −2 z −3
27 8
= z +5+ −
z −3 z −2 

1 d3 1 d3  27 8 
Res (1) = lim 3 ( z − 1) f ( z ) = lim 3  z + 5 +
4
− 
3! z →1 dz 6 z →1 dz  z − 3 z − 2 

1  −27 × 6 8 × 6  1  −27 × 6 
= lim  + =  + 48
 ( z − 3) ( z − 2)  6  16
4 4
6 z →1


27 101
=− +8 =
16 16 

Other Method
z3  1 1 
f ( z) =  z − 3 − z − 2
( z − 1) 
4



=
(1 + z − 1)  1 − 1  3

( z − 1)4  −2 + z − 1 −1 + z − 1 
1 + 3 ( z − 1) + 3 ( z − 1) + ( z − 1)  1  z − 1 −1 
2 3 −1

=  −  1 −  + (1 − ( z − 1)) 
( z − 1) 4
 2 2  
 1 1   1  z − 1 ( z − 1) ( z − 1) 
2 3
3 3
= + + +   −  1 + + + + 
 ( z − 1) ( z − 1) ( z − 1) z − 1  2 
4 3 2
2 4 8 


2
(
+ 1 + ( z − 1) + ( z − 1) + ( z − 1) +  
3
 )

1 1 3 3 1
\ Res (1) = coeff of = − +1− + 3 − + 3 − +1
z −1 16 8 4 2 
1 + 6 + 12 + 8 27
= 8− = 8−
16 16 
101
=
16 

Example 1.68: Compute the residues at all the singular points of


2
ze iz 1 − e2z ez 1− cos z
(i)  2   (ii)    (iii)    (iv) 
z +a 2
z 4
( z − i) 3
z
(v)  cot z   (vi) sec z
ze iz
Solution: (i) f ( z ) = 2
z + a2
Singularities of f (z) are simple poles at z = ± ia
ze iz iae − a 1 − a
Res ( ia ) = lim ( z − ia ) f ( z ) = lim = = e
z → ia z → ia z + ia 2ia 2 
iz
ze 1 a
Res ( −ia ) = lim ( z + ia ) f ( z ) = lim = e 
z →− ia z →− ia z − ia 2
1− e 2z
(ii)  f ( z ) = has pole of order 3 at origin
z4
1   ( 2 z ) 2 ( 2 z )3 ( 2 z ) 4  
∵ f ( z ) = 4 1 − 1 + 2 z + + + 
z   2! 3! 4!  
2 2 4 2 25 26 2 2 7 3
=− 3 − 2 − − − z− z − z 
z z 3 z 3 5! 6! 7! 
1 4
∴ Res (0) = coeff. of = − 
z 3

2
ez
(iii)  f ( z ) = has pole of order 3 at z = i
( z − i)
3

1 d2 1 d2 2
lim 2 ( z − i ) f ( z ) = lim 2 e z
3
Res (i) =
2 ! z →i dz 2 z →i dz 
1 d 1
( ) 1
2 2
= lim 2 ze z = lim 4 z 2 + 2 e z = −
2 z → i dz 2 z → i e
1 − cos z
(iv)  f ( z ) =
z
1 − cos z sin z
lim f ( z ) = lim = lim = 0 ( L ′ Hospital rule )
z →0 z →0 z z →0 1 
∴ f (z) has removable singularity at z = 0
\ Res (0) = 0
cos z
(v)  f ( z ) = cot z = has simple poles at z = np, n ∈ I
sin z
Res ( nπ ) = lim ( z − nπ ) f ( z )
z → nπ 
z − nπ
= lim cos z ⋅
z → nπ sin z 
z − nπ
= ( −1) lim
n

z → nπ sin z

1
= ( −1) lim ( L ′ Hospital rule )
n

z → nπ cos z

1
= ( −1) ⋅
n
= 1; n ∈ I
( −1)n 
1 π
(vi)  f ( z ) = sec z = has simple poles at z = ( 2n + 1) , n ∈ I
cos z 2
π
z − nπ −
 π 2
Res  nπ +  = lim
 2  z → nπ + π cos z
2 
1
= lim
π − sin z
( L ′ Hospital rule )
z → nπ +
2 
1
= ( −1) ; n ∈ I
n +1
=−
 π
sin  nπ + 
 2 
1.14 Evaluation of contour integrals using residues
In this section, we discuss the application of residues in evaluating integrals of f(z) over a simple
closed curve. The following result is used to evaluate these integrals.

Theorem 1.22  Cauchy Residue Theorem


Let C be a simple closed curve and let f(z) be analytic on and inside C except at a finite number of isolated singularities z_1, z_2, …, z_n lying inside C. Then
∫_C f(z) dz = 2πi ∑_{k=1}^{n} Res(z_k)
= 2πi (sum of residues of f(z) at its isolated singularities inside C)

Proof: z_1, z_2, …, z_n are isolated singularities and hence there exist non-intersecting circles C_k with centre at z_k; k = 1, 2, …, n lying inside C such that each C_k contains only the one singularity z_k inside it. By the extension of the Cauchy–Goursat theorem for multiply connected domains (Remark 1.9), we have
∫_C f(z) dz = ∑_{k=1}^{n} ∫_{C_k} f(z) dz
By equation (1.40),
∫_{C_k} f(z) dz = 2πi Res(z_k);  k = 1, 2, …, n
Thus, ∫_C f(z) dz = 2πi ∑_{k=1}^{n} Res(z_k)
= 2πi [sum of residues of f(z) at its isolated singularities inside C]

Example 1.69: Evaluate the following integral using residue theorem
∫_C z^2 dz/((z − 1)^2 (z + 2)) where C: |z| = 3
Solution: f(z) = z^2/((z − 1)^2 (z + 2)) has a pole of order two at z = 1 and a simple pole at z = −2. Both of these poles lie inside |z| = 3.
Res(1) = lim_{z→1} d/dz [(z − 1)^2 f(z)] = lim_{z→1} d/dz [z^2/(z + 2)]
= lim_{z→1} d/dz [z − 2 + 4/(z + 2)] = lim_{z→1} [1 − 4/(z + 2)^2] = 1 − 4/9 = 5/9
Res(−2) = lim_{z→−2} (z + 2) f(z) = lim_{z→−2} z^2/(z − 1)^2 = 4/9
∴ By Cauchy residue theorem
∫_C z^2 dz/((z − 1)^2 (z + 2)) = 2πi (sum of residues of f(z) at isolated singularities inside C) = 2πi (5/9 + 4/9) = 2πi
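The same evaluation can be reproduced with sympy (an assumed tool, offered here only as a cross-check of Example 1.69): compute the residues at the poles inside C and multiply their sum by 2πi.

```python
import sympy as sp

z = sp.symbols('z')

# f(z) and C: |z| = 3 of Example 1.69; both poles (z = 1 and z = -2) lie inside C.
f = z**2 / ((z - 1)**2 * (z + 2))
residues = [sp.residue(f, z, 1), sp.residue(f, z, -2)]
print(residues)                            # [5/9, 4/9]
print(2 * sp.pi * sp.I * sum(residues))    # 2*I*pi, the value of the contour integral
```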

dz
Example 1.70: State residue theorem and use it to evaluate ∫ z ( z + 4)
C
8
where C is the circle

(i)  z = 2     (ii)  z + 2 = 3
Solution: Statement of Cauchy residue theorem is already given in Theorem 1.22.
1
Now, f ( z ) = 8 has pole of order 8 at z = 0 and simple pole at z = –4
z ( z + 4)
1 d7 1 d7 1
Res (0 ) = lim 7 z 8 f ( z ) = lim 7 
7 ! z → 0 dz 7 ! z → 0 dz ( z + 4 )


1
= lim
( −1) 7! = − 1
7

7! z → 0
( z + 4)8 48

1 1 1
Res ( −4 ) = lim ( z + 4 ) f ( z ) = lim 8 = = 8 
z →−4 z →−4 z
( −4)8
4
(i)  z = 0 lies inside C and z = –4 lies outside C
\ By Cauchy residues theorem
2π i
∫ f ( z ) dz = 2π i  − 8  = − 8 .
1
 4  4
C 
(ii)  0 + 2 = 2 < 3 , −4 + 2 = 2 < 3
\ Both z = 0 and z = −4 lie inside C
\ By Cauchy residue theorem
∫ f ( z ) dz = 2π i (Sum of residues of f (z) at isolated singularities inside C)
C
 1 1
= 2π i  − 8 + 8  = 0 
 4 4 
dz
Example 1.71: Using residue theorem, evaluate ∫z
C
4
+1
which C is the circle x 2 + y 2 = 2 x .

Solution: C : x 2 + y 2 = 2 x or ( x − 1) + y 2 = 1 is circle z − 1 = 1
2

1
Now, f (z) = 4 has simples poles at zeros of z 4 + 1 = 0
z +1 
1
i.e., z 4 = −1 = i 2 ⇒ z 2 = ± i = ( ±2i )
2 
1 1
z = (1 ± i ) ⇒ z = ± (1 ± i )
2
\ 2

2 2 
\ f ( z ) has simple poles at z1 , z2 , z3 , z4

1 1 1
where z1 = (1 + i ) , z2 = (1 − i ) , z3 = − (1 + i ) ,
2 2 2 
1
z4 = − (1 − i )
2 

( )
2
1 i 2 −1 +1
Now,  ± −1 = <1
2 2 2

( )
2
1 i 2 +1 +1
and − ± −1 = > 1
2 2 2

\ Only z1 and z2 lie inside C.


z − z1 1
Res ( z1 ) = lim z → z1 z + 1
4
= lim 3
z → z1 4 z
( L′ Hospital rule ) 


z
= 14 = − 1
4 z1
z
4
∵ z14 = −1 ( )

z2
Similarly, Res ( z2 ) = −
4
\ By Cauchy residue theorem
dz
∫ 4 = 2π i (Sum of residues of f (z) at isolated singularities inside C)
C z +1
 z z  π π πi
= 2π i  − 1 − 2  = − i ( z1 + z2 ) = − i 2 = − .
 4 4 2 2 2 

e z dz
Example 1.72: Evaluate by residue theorem ∫ ( z + 1) ( z − 2)
C
2
where C is the circle z − 1 = 3.
ez
Solution: f ( z ) = has simple pole at z = 2 and double pole at z = –1
( z + 1)2 ( z − 2)
2 − 1 = 1 < 3, −1 − 1 = 2 < 3

\ Both z = 2 and z = –1 lie inside C
ez e2
Res (2) = lim ( z − 2) f ( z ) = lim =
z→2 z→2
( z + 1)2 9
d d ez
Res (–1) = lim
z →−1 dz
( z + 1) f ( z ) = zlim
2

→−1 dz z − 2


= lim
( z − 2) e z − e z = ( −1 − 2) e −1 − e −1 = − 4
z →−1
( z − 2 )2 ( −1 − 2)2 9e

\ By Cauchy residue theorem
∫ f ( z ) dz = 2π i (Sum of residues of f (z) at isolated singularities inside C)
C
 e 2 4  2π i  2 4 
= 2π i  −  =  e − 
 9 9e  9  e


Example 1.73: Evaluate the following integrals


dz 1
(i)  ∫ where C : z = 1
∫ z dz where C : z = 1
(ii)   ze
C
z sin z
C

dz coth z
∫C sinh 2 z where C : z = 2
(iii)   (iv)  ∫
C
z −i
dz where C : z = 2

1
− 1
(v)  ∫ e z
sin dz where C : z = 1 (vi) 
z ∫ tan z dz where C : |z| = 2
C

1
Solutions: (i) f ( z ) = has poles at z = np ; n ∈ I. Double pole at z = 0 and simple poles at
z sin z
z = np, n ≠ 0, n ∈ I. Only z = 0 lies inside C : z = 1.
−1
1 1 1   z2 z4 
Now, f (z) = = = 2 
1 −  − +  
z sin z  z 3
z 5
 z   3! 5! 
z  z − + − 
 3! 5 ! 

1  z 2

= 1 + 3! + 
z2

1
\ Res (0) = Coeff of
=0
z
\ By Cauchy residue theorem
∫ f ( z ) dz = 2π i (Sum of residues of f (z) at isolated singularities inside C) = 0
C
1
(ii)  f ( z ) = ze z has singularity at z = 0 only which lies inside C : z = 1.
 1 1 1 
f ( z ) = z 1 + + + + 
 z 2 ! z 2 3! z 3 
1 1
\ Res (0) = Coeff of =
z 2
\ By Cauchy residue theorem
 1
 ∫C f ( z ) dz = 2π i  2  = π i

1
(iii)  f ( z ) = has singularity at z = 0 only which lies inside C : z = 2.
sinh 2 z
z 1
lim z f ( z ) = lim = lim   (L′ Hospital rule)
z →0 z → 0 sinh 2 z z → 0 2 cosh 2 z

1
= ≠0
2 

\ f (z) has simple pole at z = 0


1
and Res (0) = 
2
\ By Cauchy residue theorem
 1
C∫ f ( z ) dz = 2π i  2  = π i

coth z
(iv)  f ( z ) = has simple poles at z = 0, i.
z −i
Both z = 0, i lie inside C : z = 2
z cosh z 1 z
Res (0) = lim = lim
z →0 ( z − 1) sinh z −i z → 0 sinh z

1
= i lim = i         (L′ Hospital rule)
z → 0 cosh z

Res (i) = lim( z − i ) f ( z ) = lim coth z = coth i


z →i z →i 
\ By Cauchy residue theorem

∫ f ( z )dz = 2π i (Sum of residues of f (z) at isolated singularities inside C) = 2π i (i + coth i )


C

1
− 1
(v)  f ( z ) = e sin has singularity at z = 0 only which lies inside C : z = 1.
z
z
 1 1 1  1 1 
  f ( z ) = 1 − + 2
− 3
+   − 3
+  
 z 2 ! z 3 ! z  z 3 ! z 
1
   Res (0) = Coeff of=1
z
\ By Cauchy residue theorem

∫ f ( z ) dz = 2π i (Sum of residues of f (z) at isolated singularities inside C )


C   
= 2π i (1) = 2π i 
π π
(vi) f ( z ) = tan z has simple poles at z = (2n + 1)   ; n ∈ I of which only z = ± lie inside
 2 2
C: z =2

 π π
 z − 2  sin z z−
π    2
Res   = lim = (1) lim
 2  z →π cos z z→
π cos z
2 2

1
= lim   (L′ Hospital rule)
π − sin z
z →−
2
= –1

 π π
 z +  sin z z+
 π  2 2 
Res  −  = lim = ( −1) lim
 2  z → −π cos z z→
− π cos z
2 2

1
= − lim (L′ Hospital rule)
− π − sin z
z→
2
= –1
\ By Cauchy residue theorem
∫ f ( z )dz = 2π i
C
(Sum of residues of f (z) at isolated singularities inside C)

= 2π i ( −1 − 1) = −4π i 

e2z
Example 1.74: Evaluate the integral ∫ ( z + 1) dz, C :
C
n
z = 2.
2z
e
Solution: f ( z ) = has pole of order n at z = –1 which lies inside C : z = 2
( z + 1)n
1 d n −1
n −1 (
z + 1) f ( z )
n
Res (–1) = ⋅ lim
( )
n − 1 ! z →−1 dz

1 d n −1 2 z 1
= lim e = lim 2n −1 e 2 z
(n − 1)! z →−1 dz n −1 (n − 1)! z →−1 
2n −1 e −2
=
(n − 1)! 
\ By Cauchy residue theorem

∫ f ( z ) dz = 2π i (Sum of residues of f (z) at isolated singularities inside C)


C
2n −1 e −2 2n π i
= 2π i ⋅ =
(n − 1)! (n − 1)! e 2 

Exercise 1.4

1. Write the zeros of the following func- 2. Find the type of singularities of the func-
tions and their order tions at z = 0
3
 z +1  z − sin z
     (i)  z2 sin z  (ii)  2     (i)  f ( z ) =
 z + 1 z3
sin z
( z + 1) ( z − 2 ) (ii) f ( z ) = r , r ≥ 2 is a positive
(iii)  z
( z − 3) ( z + 3) integer

3. Find the location and type of singularity


(vii) 
( z + 1)
of the following functions
sin( z − 2)
(z 2
− 16 ) ( z + 2)
(i)    
( z − 2) z2
(viii) 
(z )
2
(ii)  ( z + 1) e1 ( z +1)   
2 2
+ 3z + 2
z2
2z (ix) 
( )
2
(iii)  z2 + 1
( z − 1) ( z + 2)
3

7. Find the sum of the residues of the func-
4. Show that the function sin z
tion f ( z ) = at its poles inside the
(i)  cosec z has a simple pole at z = 0. z cos z
1 circle z = 2.
(ii)  2 has simple poles at z = 1
z −1 8. Compute the residues at all the singular
and z = –1.
points of
ez
(iii)  has a pole of order 3 at z = 0 1 1 1 − e2z
z3 (i)  z sin  (ii)  z cos (iii) 
z z z3
1
(iv) z sin has essential singularity at 2
z ze z sin z ez
z = 0. (iv)  (v)  2 (vi) 
( z − a)3 z z3
5. Classify the singular point z = 0 of the 9. Evaluate the following integrals using
functions residue theorem
ez ez 1+ z
(i)    (ii)  (i) 
z + sin z z − sin z ∫ z( 2 − z ) dz where C is the circle
C
z =1
6. Compute the residues at all the singular
( z + 3)
points of f(z) where f(z) is given by (ii)  ∫ ( z + 1) ( z − 2) dz where C :
2
z =3
z C
(i) 
( z + 1) ( z − 2) 2z −1

(iii)  
C
z ( z + 1)( z − 3)
dz where C : z = 2
2z + 1
(ii)  1
(z 2
−z−2 ) ∫
(iv)  
C z ( z + 4)
3
dz where C : z + 2 = 3

z +1 4 − 3z 3
(iii)  2
z − 2z
(v)  ∫ z( z − 1)( z − 2) dz where C :
C
z =
2
z3
 (iv) 
( z − 1) ( z − 2) ( z − 3) 10. Evaluate the integral
z −3
dz∫ z
+ 2z + 5 2
1+ z + z2 C
  (v)  using residue theorem, where C is the circle
( z − 1)2 ( z + 2)
(i)  z = 1     (ii) z + 1 − i = 2
z2
(vi) 
( z − 1)2 ( z + 2) (iii)  z + 1 + i = 2

11. Evaluate the contour integral


∫e
1 z2
  (ii)  dz where C : z = 2
dz

I=
(e z
)
−1
,C : z =1 C

C sin z
(iii)  ∫
C z
6
dz where C : z = 2
z2 + 4
∫ 3 2
12. Evaluate I = 
C z + 2z + 2z
dz where  1
C is (iv)  ∫ sin  z  dz where C : z =1
  (i)  z = 1     (ii) z + 1 − i = 1 C

ez
(iii)  z + 1 + i = 1     (iv)  z − 1 = 5 (v)  C∫ ( z + 1)2 dz where C : z − 3 = 3
  (v) rectangle with vertices at 2 + i, 6 + i,
 z
2 + 4i and 6 + 4i (vi)  ∫ ( z + 1) cot  2  dz where C : z =1
13. Evaluate the integral C

ez e z dz
∫C ( z + 1)n dz, C : z = 2 ∫
(vii)  
C
cos π z
where C : z = 1
1
1
14. Evaluate the following integral 15. Prove that ∫C z sin z dz = 2π i where C is the cir
e
dz
(i)  ∫ where C is 1z z = 12
C
cosh z ∫ e sin z dz = 2π i where C is the circle z = 1.
C

Answers 1.4
1.   (i)  z = n p; n ∈ I, n ≠ 0 and order = 1, z = 0 is a zero of order 3.
(ii)  z = –1 is a zero of order 3 (iii) Simple zeros at z = –1 and z = 2
2. (i)  removable singularity at z = 0 (ii) pole of order (r – 1) at z = 0
3. (i)  removable singularity at z = 2 (ii) essential singularity at z = –1
(iii)  pole of order 3 at z = 1 and simple pole at z = –2
5. (i)  simple pole at z = 0 (ii) z = 0 is a pole of order 3
1 2 1 5
6. (i)  Res ( −1) = , Res ( 2) = (ii)  Res ( −1) = , Res ( 2) =
3 3 3 3
1 3 1 27
(iii)  Res (0) = − , Res ( 2) = (iv)  Res (1) = , Res ( 2) = −8, Res (3) =
2 2 2 2
2 1 5 4
 (v)  Res (1) = , Res ( −2) = (vi)  Res (1) = , Res ( −2) =
3 3 9 9
5 −3 1
(vii)  Res ( 4) = , Res ( −4) = , Res ( −2) =
48 16 12
−i i
(viii)  Res ( −1) = −4, Res ( −2) = 4  (ix)  Res (i ) = , Res ( −i ) =
4 4
−1
7. 0  8. (i) Res (0) = 0 (ii) Res (0) =   (iii) Res (0) = −2
2
a 
(iv) Res ( a) = e a  + 1 (v) Res (0) = 1 (vi) Res (0) = 1
2 

9. (i) pi  (ii)  0  (iii) –5pi/6  (iv)  0  (v) 2pi


10. (i) 0   (ii) p(– 2 + i)  (iii) p(2 + i)
11. 2pi   12. (i) 4pi  (ii)  – p(3+i)  (iii) 
p (3 – i)  (iv) 
2pi  (v) 0
2π i πi 1
13. 0 (ii) 0 (iii)    (iv) 2pi  (v) 0  (vi) 4pi (vii) −4i sinh
  14. (i) 
e( n − 1)! 60 2

1.15 Application of Cauchy residue theorem to evaluate real integrals
The Cauchy residue theorem can be used to evaluate certain real integrals. These integrals are first transformed into associated contour integrals. The contour integrals are then evaluated using the Cauchy residue theorem.

1.15.1 Integration Around the Unit Circle



Integrals of the type ∫_0^{2π} f(cos θ, sin θ) dθ, where f is a rational function of cos θ and sin θ, can be evaluated by setting z = e^{iθ}, so that
cos θ = (z + z^{−1})/2,  sin θ = (z − z^{−1})/(2i),  dz = ie^{iθ} dθ = iz dθ
∴ dθ = dz/(iz)
As θ moves from 0 to 2π, z moves on the unit circle |z| = 1 once anticlockwise.
Thus, the integral takes the form ∫_C F(z) dz
where F(z) = (1/(iz)) f((z + z^{−1})/2, (z − z^{−1})/(2i)),  C: |z| = 1
Now, after finding the isolated singularities of F(z) inside C and the residues at these singularities, we find the value of the integral using the Cauchy residue theorem.
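The whole substitution can be mimicked with sympy, as in the sketch below (sympy and the sample values a = 2, b = 1 are assumptions, not taken from the text): build F(z), locate the pole inside |z| = 1, sum residues, and compare against a direct numerical evaluation of the real integral.

```python
import sympy as sp

z, theta = sp.symbols('z theta')

# Assumed sample values a = 2, b = 1: I = integral_0^{2*pi} dtheta/(a + b*cos(theta)).
a, b = 2, 1
F = 1 / (sp.I * z * (a + b * (z + 1/z) / 2))          # the transformed integrand F(z)

poles = sp.solve(b * z**2 + 2 * a * z + b, z)          # zeros of b*z**2 + 2*a*z + b (denominator after clearing 1/z)
inside = [p for p in poles if abs(p.evalf()) < 1]      # keep only the pole inside |z| = 1
I_val = sp.simplify(2 * sp.pi * sp.I * sum(sp.residue(F, z, p) for p in inside))
print(I_val)                                           # 2*sqrt(3)*pi/3, i.e. 2*pi/sqrt(3)

# Direct numerical check of the original real integral:
print(sp.Integral(1/(a + b * sp.cos(theta)), (theta, 0, 2 * sp.pi)).evalf())   # about 3.6276
```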

2π dθ π dθ π
Example 1.75: Evaluate ∫0 a + b cos θ
; a > b > 0 and using it prove that ∫
0
= .
17 − 8 cos θ 15
2π dθ
Solution: Let I = ∫ ;a> b >0
0 a + b cos θ
Put z = e iθ ∴ dz = ie iθ dθ = izdθ 

dz
\ dθ =
iz 
1
z+
e iθ + e − iθ z = z +1
2
cos θ = =
2 2 2z 

As q moves from 0 to 2p, z moves on unit circle C : | z | = 1 anticlockwise.


dz 2 dz
\ I =∫ = ∫ bz
( ) + 2az + b (1)
2
C b z2 +1  C
i
iz  a + 
 2z 

1
f (z) = 2 has simple poles at
bz + 2az + b 
−a + a2 − b2 −a − a2 − b2
z1 =  and  z2 =
b b 
a + a2 − b2 a
Now, z2 = > > 1.
b b
Also, z1z2 = product of roots of quadratic equation bz2 + 2az + b = 0 
= 1
1
\ z1 = <1
z2

\ Only z = z1 lies inside C
z − z1
Res ( z1 ) = lim ( z − z1 ) f ( z ) = lim
z → z1 z → z1 bz + 2az + b 
2

1
= lim ( L ′ Hospital rule)
2bz + 2a
z → z1

1 1 1
= ⋅ =
2 bz1 + a 2 a2 − b2 
\ By Cauchy residue theorem
1 πi
∫ f ( z ) dz = 2π i ⋅ 2
C a −b
2 2
=
a − b2 
2

\ from (1)
2 πi 2π
I= = 
i a −b
2 2
a2 − b2
Take a = 17, b = − 8   (condition a > |b| > 0 is satisfied)
2π dθ 2π 2π 2π
\ ∫0 17 − 8 cos θ = 172 − 82 = 9 (25) = 15

dθ 2π
(∵ 17 − 8 cos (2π − θ ) = 17 − 8 cos θ )
π
⇒ 2∫ =
0 17 − 8 cos θ 15 
π dθ π
\ ∫ 0
=
17 − 8 cos θ 15 

Example 1.76: Apply the calculus of residue to evaluate


π a dθ
∫0 a2 + sin 2 θ ; a > 0
π a dθ π 2a dθ
Solution: Let I = ∫ =∫
0 1 − cos 2θ 0 2a 2 + 1 − cos 2θ
a +
2

2
Substitute 2θ = φ ∴ 2dθ = dφ
As q varies from 0 to p, f varies from 0 to 2p
2π a dφ
\ I =∫ ;a>0
0 2a 2 + 1 − cos φ

iφ iφ
Substitute e = z ∴ ie dφ = dz
dz
\ dφ =
iz  1
z+
e iφ + e − iφ z = z +1
2
cos φ = =
2 2 2z 
As f moves from 0 to 2p, z moves on unit circle C : z = 1 anticlockwise.
adz 2a dz

\ I = 
 z + 1 2
=− ∫
 ( )
i C z − 2 2a 2 + 1 z + 1
2
iz  2a 2 + 1 −
C

 2 z 

dz
\ ∫C z 2 − 2 2a2 + 1 z + 1
I = 2ai 
( ) 
1
Now, f (z) = has simple poles at z1 and z2 
( )
z 2 − 2 2a 2 + 1 z + 1

( ) ( )
2
2 2a 2 + 1 + 4 2a 2 + 1 − 4
where z1 = 
2

( 2a )
2
= 2a 2 + 1 + 2
+1 −1



= 2a 2 + 1 + 2a 2 2a 2 + 2 ( )
= 2a 2 + 1 + 2a a 2 + 1 

and z2 = 2a 2 + 1 − 2a a 2 + 1

As a > 0, z1 = 2a + 1 + 2a a + 1 > 1
2 2

and z1 z2 = product of roots of quadratic equation z 2 − 2 2a 2 + 1 z + 1 = 0  ( )
= 1

1
\   z2 = <1
z1
\ Only simple pole z2 lies inside C.
z − z2
Res ( z2 ) = lim ( z − z2 ) f ( z ) = lim
z → z2 z → z2 2
(
z − 2 2a 2 + 1 z + 1 )
1
= lim (L′ Hospital rule )

z → z2
( )
2 z − 2 2a 2 + 1

1 1
= =−


2  2a 2 + 1 − 2a a 2 + 1  − 2 2a 2 + 1
  ( ) 4a a2 + 1

\ By Cauchy residue theorem
 1  π
I = 2ai ⋅ 2π i ⋅  − =
 4a a + 1 
2
a2 + 1 

2π dθ
Example 1.77: Use calculus of residues to evaluate the integral ∫0 5 − 4 sin θ
.
2π dθ
Solution: Let I = ∫
0 5 − 4 sin θ

Substitute z = e iθ ∴ dz = ie iθ dθ = izdθ
dz
\ dθ =
iz 
1
z−
e iθ − e − iθ z = z −1
2
sin θ = =
2i 2i 2iz 

As q moves from 0 to 2p, z moves on unit circle C : z = 1 anticlockwise.


dz
\ ∫
I=
 z 2 − 1
C
iz  5 − 4
 2iz 

dz dz
∫C 5iz − 2 z 2 + 2 = − C∫ 2 z 2 − 5iz − 2
=

dz 1 dz
∫C  i 
= − =− ∫
2C  i
2  z −  ( z − 2i )  z −  ( z − 2i )
 2 2

1 i i
f (z) = has simple poles at z = , 2i of which only z = lies inside C
 i 2 2
 z −  ( z − 2i )
2
 i  i 1 2 2i
\ Res   = lim  z −  f ( z ) = lim =− =
 2  z→ i  2  z→
i z − 2i 3i 3
2 2 

\ By Cauchy residue theorem


1  2i  2π
I = − ⋅ 2π i   =
2  3 3 

Example 1.78: Evaluate ∫₀^{2π} dθ/(1 − 2a cos θ + a²) where a is a complex constant and
(i) |a| < 1   (ii) |a| > 1

Solution: Let I = ∫₀^{2π} dθ/(1 − 2a cos θ + a²)
Substitute z = e iθ 
\ dz = ie iθ dθ = izdθ 
dz
\ dθ = 
iz
1
z+
e iθ + e − iθ z = z +1
2
cos θ = =
2 2 2z 
As θ moves from 0 to 2π, z moves on the unit circle C: |z| = 1 anticlockwise.
dz dz
I= ∫C  z +1 2
2 ∫
= i
(
C az − 1 + a
2
)2
z + a (1)
iz 1 − 2a +a 
 2z 
1
Let f (z) = 2
(
az − 1 + a 2 z + a ) 
1 1
= has simple poles at z = , a
( az − 1)( )
z − a a

1 1
Res ( a ) = lim ( z − a ) f ( z ) = lim = 2 
z→a z → a az − 1 a −1
 1  1 1 1 1
Res   = lim  z −  f ( z ) = lim = =
 a  z→ 1  a z → a ( z − a) 1  1 − a2
1
a a a  − a
a 

1
(i) When a < 1 then z = a lies inside C and z = lies outside
a
\ By Cauchy residue theorem
1 2π
I = i ⋅ 2π i ⋅ 2 =
a − 1 1 − a2 
1 1
(ii) When a > 1 then < 1, thus only lies inside C
a a
\ By Cauchy residue theorem
1 2π
I = i ⋅ 2π i ⋅ = 2
1− a 2
a −1 

Example 1.79: Apply calculus of residues to evaluate ∫₀^π cos⁶θ dθ
Solution: Let I = ∫₀^π cos⁶θ dθ = ∫₀^π [(1 + cos 2θ)/2]³ dθ
Substitute 2θ = φ  ∴ dθ = (1/2) dφ; when θ varies from 0 to π, φ varies from 0 to 2π
∴ I = ∫₀^{2π} [(1 + cos φ)³/8] · (dφ/2) = (1/16) ∫₀^{2π} (1 + cos φ)³ dφ
Substitute z = e^{iφ}  ∴ dz = ie^{iφ} dφ = iz dφ
∴ dφ = dz/(iz)
cos φ = (e^{iφ} + e^{−iφ})/2 = (z + 1/z)/2 = (z² + 1)/(2z)
As φ moves from 0 to 2π, z moves on the unit circle C: |z| = 1 anticlockwise.
∴ I = (1/16) ∫_C [1 + (z² + 1)/(2z)]³ · dz/(iz) = (1/(128i)) ∫_C (z² + 2z + 1)³/z⁴ dz
= (1/(128i)) ∫_C (z + 1)⁶/z⁴ dz
Now, f(z) = (z + 1)⁶/z⁴ has a pole of order 4 at z = 0, and z = 0 lies inside C
Res(0) = (1/3!) lim_{z→0} d³/dz³ [z⁴ f(z)] = (1/6) lim_{z→0} d³/dz³ (z + 1)⁶
= (1/6) lim_{z→0} 6 · 5 · 4 · (z + 1)³ = 20
∴ By Cauchy residue theorem
I = (1/(128i)) · 2πi · 20 = 5π/16

Example 1.80: Apply calculus of residues to evaluate ∫₀^{2π} e^{cos θ} cos(nθ − sin θ) dθ, where n is any non-negative integer, and hence show that ∫₀^{2π} e^{cos θ} cos(sin θ) dθ = 2π.
Solution: Let I = ∫₀^{2π} e^{cos θ} e^{i(nθ − sin θ)} dθ = ∫₀^{2π} e^{cos θ − i sin θ} e^{inθ} dθ
= ∫₀^{2π} e^{e^{−iθ}} (e^{iθ})ⁿ dθ
Substitute e^{−iθ} = z  ∴ −ie^{−iθ} dθ = dz ⇒ dθ = −dz/(iz)

As θ varies from 0 to 2π, z moves on the unit circle C*: |z| = 1 clockwise. Taking the anticlockwise direction as positive, we have
∴ I = −∫_{C*} (e^z/zⁿ) · dz/(iz) = −i ∫_C e^z/z^{n+1} dz, where C: |z| = 1 anticlockwise
f(z) = e^z/z^{n+1} has only a pole of order n + 1 at z = 0, which lies inside C
Res(0) = (1/n!) lim_{z→0} dⁿ/dzⁿ [z^{n+1} f(z)] = (1/n!) lim_{z→0} dⁿ/dzⁿ e^z
= (1/n!) lim_{z→0} e^z = 1/n!
∴ By Cauchy residue theorem
I = −i · 2πi · (1/n!) = 2π/n!
∴ ∫₀^{2π} e^{cos θ}[cos(nθ − sin θ) + i sin(nθ − sin θ)] dθ = 2π/n!
Equate real parts
∫₀^{2π} e^{cos θ} cos(nθ − sin θ) dθ = 2π/n!
Take n = 0
∫₀^{2π} e^{cos θ} cos(sin θ) dθ = 2π   (∵ cos(−sin θ) = cos(sin θ))

Example 1.81: Apply calculus of residues to show that
∫₀^{2π} sin²θ/(a + b cos θ) dθ = (2π/b²)(a − √(a² − b²))  where 0 < b < a

Solution: Let I = ∫₀^{2π} sin²θ dθ/(a + b cos θ) = (1/2) ∫₀^{2π} (1 − cos 2θ)/(a + b cos θ) dθ
= Real part of (1/2) ∫₀^{2π} (1 − e^{2iθ})/(a + b cos θ) dθ   (1)
Let I₁ = ∫₀^{2π} (1 − e^{2iθ})/(a + b cos θ) dθ
Substitute e^{iθ} = z  ∴ ie^{iθ} dθ = dz
∴ dθ = dz/(iz)
cos θ = (e^{iθ} + e^{−iθ})/2 = (z + 1/z)/2 = (z² + 1)/(2z)

As θ varies from 0 to 2π, z moves on the unit circle C: |z| = 1 anticlockwise.
∴ I₁ = ∫_C [(1 − z²)/(a + b(z² + 1)/(2z))] · dz/(iz) = (2/i) ∫_C (1 − z²) dz/(bz² + 2az + b)
f(z) = (1 − z²)/(bz² + 2az + b) has simple poles at z₁ and z₂
where z₁ = (−a + √(a² − b²))/b,  z₂ = (−a − √(a² − b²))/b
|z₂| = (a + √(a² − b²))/b > 1   (∵ a > b > 0)
and z₁z₂ = product of roots of the equation bz² + 2az + b = 0 = 1
∴ |z₁| = 1/|z₂| < 1
∴ Only z₁ lies inside C
Res(z₁) = lim_{z→z₁} (z − z₁) f(z) = lim_{z→z₁} (1 − z²)(z − z₁)/(bz² + 2az + b)
= (1 − z₁²) lim_{z→z₁} (z − z₁)/(bz² + 2az + b)
= (1 − z₁²) lim_{z→z₁} 1/(2bz + 2a)   (L'Hospital rule)
= (1 − z₁²)/(2(bz₁ + a)) = (1 − z₁²)/(2√(a² − b²))
Now, 1 − z₁² = 1 − [(−a + √(a² − b²))/b]² = [b² − (2a² − b² − 2a√(a² − b²))]/b²
∴ Res(z₁) = [2b² − 2a² + 2a√(a² − b²)]/(2b² √(a² − b²)) = [b² − a² + a√(a² − b²)]/(b² √(a² − b²))
= −(√(a² − b²) − a)/b²
∴ By Cauchy residue theorem
I₁ = (2/i) · 2πi · [−(√(a² − b²) − a)/b²] = 4π(a − √(a² − b²))/b²
∴ from (1), I = real part of (1/2) I₁ = (2π/b²)(a − √(a² − b²))

1.15.2 Improper Real Integrals of the Form ∫_{−∞}^{∞} f(x) dx or ∫₀^{∞} f(x) dx where f(z) has no Real Singularity

The improper integral ∫_{−∞}^{∞} f(x) dx is defined as
I = ∫_{−∞}^{∞} f(x) dx = lim_{R→∞} ∫_{−R}^{0} f(x) dx + lim_{S→∞} ∫₀^{S} f(x) dx
and lim_{R→∞} ∫_{−R}^{R} f(x) dx, if it exists, is called the Cauchy principal value of I. Sometimes the Cauchy principal value of I exists but I does not, as for f(x) = x. But if I exists then the Cauchy principal value must exist and be equal to I. We shall be taking only those cases in which I exists, and hence we shall be finding the Cauchy principal value.
We shall be using the Jordan inequality
sin θ ≥ 2θ/π  when 0 ≤ θ ≤ π/2   (1.41)
and the following result.
Result: If f(R, θ) → 0 as R → ∞ uniformly over all values of θ, 0 ≤ θ ≤ π, then
∫₀^π f(R, θ) dθ → 0 and ∫₀^{π/2} f(R, θ) dθ → 0
Proof: f(R, θ) → 0 as R → ∞ uniformly
∴ Corresponding to ε > 0 there exists R₁ such that
|f(R, θ)| < ε when R > R₁ for all values of θ in [0, π]
∴ |∫₀^π f(R, θ) dθ| ≤ ∫₀^π |f(R, θ)| dθ < ε ∫₀^π dθ = πε
But ε is arbitrary
∴ ∫₀^π f(R, θ) dθ → 0 as R → ∞
Similarly, ∫₀^{π/2} f(R, θ) dθ → 0 as R → ∞.
To evaluate ∫_{−∞}^{∞} f(x) dx or ∫₀^{∞} f(x) dx we assume that f(z) satisfies the following conditions:
(i) f(z) has no singularity on the real axis;
(ii) f(z) is analytic in the upper half of the z-plane except at a finite number of isolated singularities z₁, z₂, …, zₙ in this half plane;
(iii) z f(z) → 0 uniformly as R → ∞ through the values 0 ≤ θ ≤ π.
Let us consider ∫_C f(z) dz where C consists of C_R: |z| = R; Im(z) > 0 and the real axis from −R to R, as shown in Figure 1.19.



Figure 1.19: The closed contour consisting of the semicircle C_R (|z| = R, Im z > 0) and the real-axis segment from −R to R.

∴ ∫_C f(z) dz = ∫_{C_R} f(z) dz + ∫_{−R}^{R} f(x) dx   (1.42)
z₁, z₂, …, zₙ are isolated singularities of f(z) inside C
∴ By Cauchy residue theorem
∫_C f(z) dz = 2πi Σ_{k=1}^{n} Res(z_k)   (1.43)
On C_R, z = Re^{iθ}, 0 < θ < π, dz = Rie^{iθ} dθ = iz dθ
Now, |∫_{C_R} f(z) dz| ≤ ∫₀^π |f(z)| |iz| dθ = ∫₀^π |z f(z)| dθ → 0 as R → ∞   (1.44)
∴ from equation (1.42), as R → ∞,
∫_{−∞}^{∞} f(x) dx = 2πi Σ_{k=1}^{n} Res(z_k)   (Using (1.43) and (1.44))
Remark 1.13: To evaluate ∫_{−∞}^{∞} cos ax f(x) dx or ∫_{−∞}^{∞} sin ax f(x) dx we shall consider ∫_C e^{iaz} f(z) dz and proceed as above; at the end, real and imaginary parts will be equated as required.
In examples 1.82 to 1.86 we shall use the figure of this article.
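
The formula ∫_{−∞}^{∞} f(x) dx = 2πi Σ Res(z_k) can be illustrated with a computer algebra system: sum the residues at the upper-half-plane poles and compare with a numerical value of the integral. The sketch below is an editorial illustration (SymPy and SciPy assumed); it uses f(z) = 1/(z⁴ + 1), whose poles in the upper half-plane are e^{iπ/4} and e^{3iπ/4}.

```python
# Sketch illustrating the residue formula above (assumes SymPy and SciPy).
import numpy as np
import sympy as sp
from scipy.integrate import quad

z = sp.symbols('z')
f = 1 / (z**4 + 1)
poles = [sp.exp(sp.I * sp.pi / 4), sp.exp(3 * sp.I * sp.pi / 4)]   # poles with Im z > 0
residue_sum = sum(sp.residue(f, z, p) for p in poles)
print(sp.simplify(2 * sp.pi * sp.I * residue_sum))                 # sqrt(2)*pi/2

numeric, _ = quad(lambda x: 1.0 / (x**4 + 1.0), -np.inf, np.inf)
print(numeric)                                                     # ~2.2214, i.e. pi/sqrt(2)
```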

Example 1.82: Using contour integration evaluate
(i) ∫₀^∞ dx/(x⁶ + 1)   (ii) ∫₀^∞ dx/(x² + a²)²
Solution: (i) Consider ∫_C dz/(z⁶ + 1)
where C consists of the semicircle C_R: |z| = R; Im(z) > 0 and the real axis from −R to R, where R is large
∴ ∫_C dz/(z⁶ + 1) = ∫_{C_R} dz/(z⁶ + 1) + ∫_{−R}^{R} dx/(x⁶ + 1)   (1)

Now, f(z) = 1/(z⁶ + 1) has poles at the points where z⁶ + 1 = 0, i.e., z⁶ = −1 = i⁶
∴ (z/i)⁶ = 1 ⇒ (z/i)³ = ±1 ⇒ (z/(±i))³ = 1
∴ z/(±i) = 1, ω, ω² ⇒ z = ±i, ±iω, ±iω²
∴ Simple poles at i, −i, iω, −iω, iω², −iω²
i.e., i, −i, iω = i(−1 + i√3)/2 = −√3/2 − i/2, −iω = √3/2 + i/2,
iω² = i(−1 − i√3)/2 = √3/2 − i/2, −iω² = −√3/2 + i/2
Out of these poles, those whose imaginary part is positive lie within C.
∴ i, −iω, −iω² lie inside C. Let these be z₁, z₂, z₃ respectively.
Res(z_j) = lim_{z→z_j} (z − z_j)/(z⁶ + 1) = lim_{z→z_j} 1/(6z⁵)   (L'Hospital rule)
= 1/(6z_j⁵) = z_j/(6z_j⁶) = −z_j/6   (∵ z_j⁶ = −1)
∴ By Cauchy residue theorem
∫_C dz/(z⁶ + 1) = 2πi (sum of residues at isolated singularities within C)
= 2πi(−z₁/6 − z₂/6 − z₃/6) = −(πi/3)(z₁ + z₂ + z₃)
= −(πi/3)(i − iω − iω²) = (π/3)(1 − ω − ω²)
= 2π/3   (∵ ω + ω² = −1)   (2)
On C_R, z = Re^{iθ}  ∴ dz = iRe^{iθ} dθ
∴ |∫_{C_R} dz/(z⁶ + 1)| = |∫₀^π iRe^{iθ} dθ/(R⁶e^{6iθ} + 1)| ≤ ∫₀^π R dθ/(R⁶ − 1)   (∵ |z₁ + z₂| ≥ |z₁| − |z₂|)
∴ |∫_{C_R} dz/(z⁶ + 1)| ≤ πR/(R⁶ − 1) → 0 as R → ∞   (3)
∴ As R → ∞, from (1), (2) and (3)
∫_{−∞}^{∞} dx/(x⁶ + 1) = 2π/3
⇒ 2∫₀^∞ dx/(x⁶ + 1) = 2π/3   (∵ 1/(x⁶ + 1) is an even function)
∴ ∫₀^∞ dx/(x⁶ + 1) = π/3

dz
(ii) Consider ∫ ;a > 0
(z )
2
C
2
+ a2
where C is the contour consisting of semicircle C R ; z = R, Im( z ) > 0 and real axis from –R to
R where R is large
R
dz dz dx
\ C∫ z 2 + a2 2 C∫ z 2 + a2 2 −∫R x 2 + a2 2 (1)
= +
R ( ) ( ) ( )
1
Now, f ( z ) = has double poles at z = ±ia
(z )
2
+ a2 2

Out of which z = ia lies inside C  (Q a > 0)


d d 1
Res (ia) = lim ( z − ia) 2 f ( z ) = lim
dz z → ia dz ( z + ia) 2
z → ia

−2 −2 1
= lim = =
z → ia ( z + ia)3 −8ia3 4ia3

\ By Cauchy residue theorem
dz  1  π
∫ = 2π i  =
 4ia3  2a3
(2)
C (z 2
+a 2 2
)

On C R , z = Re dz = i Re iθ dθ
π
dz dz dz R dθ
\ ∫ ≤ ∫ ≤ ∫ =∫
(z ) (z ) (R )
2 2 2 2 2
CR
2
+a 2
CR z +a
2
CR
2
− a2 0
2
− a2

πR
= 2 → 0 as R → ∞ (3)
( R − a2 )2
\ As R → ∞, from (1), (2) and (3)
∞ dx π
∫ =
−∞
(x 2
+a 2 2
) 2a 3

 
∞ dx π 1
⇒ 2∫ = 3 ∵ is even function
(x ) ( )
2 2
0 2
+ a2 2a  x + a 2
2 

∞ dx π
\ ∫ =
0
(x 2
+a 2 2
) 4 a3


Example 1.83: Apply calculus of residues to evaluate
∫_{−∞}^{∞} x² dx/[(x² + a²)(x² + b²)]; a, b > 0, and hence find the value of ∫_{−∞}^{∞} x² dx/[(x² + 1)(x² + 4)]
Solution: Consider ∫_C z² dz/[(z² + a²)(z² + b²)]; a, b > 0

where C is the contour consisting of C R : z = R, Im( z ) > 0, and real axis from –R to R where
R is large
R
z 2 dz z 2 dz x 2 dx
\ C∫ z 2 + a2 z 2 + b2 = C∫ z 2 + a2 z 2 + b2 + −∫R x 2 + a2 x 2 + b2 (1)
( )( ) ( )( ) ( )( )
R

z2
Now, f ( z ) = has simple poles at z = ± ia, ± ib of which z = ia, ib are inside
(z 2
+ a2 z 2 + b2)( )
C (Q  a, b > 0)
( z − ia) z 2
Res (ia) = lim
z → ia ( z 2 + a 2 )( z 2 + b 2 )

−a2 z − ia
= lim
− a 2 + b 2 z →ia ( z − ia)( z + ia) 
−a2 1 ia
= ⋅ =
b −a
2 2
2ia 2(b − a 2 )
2

Interchanging a and b(Q  f(z) does not change by interchange)
ib
Res (ib) =
2( a − b 2 )
2

\ By Cauchy residue theorem
z 2 dz  ia ib 
∫ ( z
C
2
+a 2
)(z 2
+b 2
)
= 2π i  2
+ 2 2 
 2(b − a ) 2( a − b ) 
2


−π ( a − b) π (b − a) π
= 2 = 2 = (2)
b − a2 b − a2 a+b
2
z 2 dz z dz R2
and ∫ 2 ≤ ∫ 2 2 z 2 − b2 C∫ R2 − a2 R2 − b2
= dz
C R ( z + a )( z + b )
2 2 2
CR z − a R ( )( ) ( )( )
π R3  
= → 0 as R → ∞ ∵ ∫ dz = π R (3)
( R − a 2 )( R 2 − b 2 )
2
 CR 

\ As R → ∞, from (1), (2) and (3)
∞ x2 π
∫ (x
−∞ 2
+a 2
)( x 2
+b 2
)
dx =
a+b

Take a = 1 > 0, b = 2 > 0


∞ x2 π
∫ (x
−∞
+1 x + 4
2
)( 2
)
dx =
3

Example 1.84: Evaluate by the method of complex variables the integral ∫_{−∞}^{∞} x² dx/(x² + 1)³.
Solution: Consider ∫_C z² dz/(z² + 1)³
where C is the contour consisting of C R : z = R, Im( z ) > 0 and real axis from –R to R where
R is large
R
z2 z2 x2
\ C∫ z 2 + 1 3 dz = C∫ z 2 + 1 3 dz + −∫R x 2 + 1 3 dx (1)
( ) R ( ) ( )
z2
Now, f ( z ) = has poles of order 3 at z = ± i out of which only z = i lies inside C
(z )
3
2
+1
1 d2
Res (i ) = lim 2 ( z − i )3 f ( z )
2 ! z →i dz 
1 d2 z2
= lim 2
2 z →i dz ( z + i )3

z2 z2 + 1 −1 ( z + i )( z − i ) 1
Now, = = −
( z + i) 3
( z + i) 3
( z + i)3
( z + i )3

z −i 1 z + i − 2i 1
= − = −
( z + i )2 ( z + i )3 ( z + i )2 ( z + i )3

1 2i 1
= − −
z + i ( z + i )2 ( z + i )3

1 d  1 2i 1  2
\ Res (i ) = lim 2  − − 
2 z →i dz  z + i ( z + i ) ( z + i )3 
2
 
1  2 12i 12 
= lim  − − 
2 z →i  ( z + i )3 ( z + i )4 ( z + i )5 

1  2 12i 12  1  i 3i 3i 
=  − − =  − + 
2  −8i 16 32i  2  4 4 8  
i
=−
16 
\ By Cauchy residue theorem
z2  i  π
C∫ z 2 + 1 3 dz = 2π i  − 16  = 8 (2)

( )

2
z2 z R2
∫ dz ≤ ∫ dz = π R → 0 as R → ∞  (3)
(z ) ( ) (R )
3 3 3
CR
2
+1 CR z −1
2 2
−1

\ As R → ∞, from (1), (2) and (3)


∞ x2 π
∫ dx =
(x )
3
−∞ 2
+1 8

Example 1.85: Evaluate the following integrals
(i) ∫_{−∞}^{∞} cos ax/(x² + b²) dx; a, b > 0   (ii) ∫_{−∞}^{∞} sin ax/(x² + b²) dx; a, b > 0
Solution: (i) Consider ∫_C e^{iaz}/(z² + b²) dz

where C consists of semicircle C R : z = R; Im ( z ) > 0 and real axis from –R to R for large R
R
e iaz e iaz eiax
\ C∫ z 2 + b2 dz = ∫ 2 2
CR z + b
dz + ∫ 2 2 dx (1)
−R x + b

e iaz
Now, f ( z ) = 2 has simple poles at z = ± ib of which z = ib lies inside C (∵ b > 0)
z + b2
e iaz e − ab
Res (ib) = lim =
z → ib ( z + ib ) 2 ib

\ By Cauchy residue theorem
e iaz e − ab π e − ab
C∫ z 2 + b2 dz = 2π i 2 ib = b (2)
e iaz eiaR (cos θ + i sin θ )
and ∫ z 2 + b2
dz ≤ ∫ 2
z − b2
dz (∵ On C R , z = Re iθ )
CR CR

e − a R sin θ 1
= ∫
CR R − b
2 2
dz <
R2 − b2
π R → 0 as R → ∞ (3)

 (∵ a R sin θ > 0 for 0 < θ < π on C R )


∴ e − a R sin θ < 1

from (1), (2) and (3) as R → ∞


∞ e iax π e − ab
∫ dx =
−∞ x + b
2 2
b 
∞ cos ax + i sin ax π e − ab
⇒ ∫−∞ x 2 + b2 dx =
b 

Equate real and imaginary parts
∫_{−∞}^{∞} cos ax/(x² + b²) dx = πe^{−ab}/b,   ∫_{−∞}^{∞} sin ax/(x² + b²) dx = 0
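
As a quick check of the result of part (i), one may pick sample values of a and b and integrate numerically. This is an editorial sketch assuming NumPy and SciPy; the values a = 2, b = 1.5 are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

a, b = 2.0, 1.5                                  # arbitrary sample values with a, b > 0
value, _ = quad(lambda x: np.cos(a * x) / (x**2 + b**2), -np.inf, np.inf, limit=200)
print(value, np.pi * np.exp(-a * b) / b)         # both approximately 0.1043
```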


Example 1.86: Apply residue theorem to evaluate ∫₀^∞ cos ax/(x² + 1) dx; a > 0
Solution: Taking b = 1 in Example 1.85, we have
∫_{−∞}^{∞} cos ax/(x² + 1) dx = πe^{−a}
⇒ 2∫₀^∞ cos ax/(x² + 1) dx = πe^{−a}   (∵ cos ax/(x² + 1) is an even function)
∴ ∫₀^∞ cos ax/(x² + 1) dx = (π/2)e^{−a}

1.15.3 Some Special Improper Real Integrals


In the above examples there were finitely many poles within the contour C. If there are infinitely many poles within C for large R, then this contour cannot be used and some other suitable contour has to be taken. We give an example to explain this.

Example 1.87: Show that ∫_{−∞}^{∞} e^{ax}/(e^x + 1) dx = π/sin aπ; 0 < a < 1.
Solution: Here e^{az}/(e^z + 1) has simple poles at z = (2n + 1)πi for all non-negative integers n inside the contour C of the above examples.
∴ We consider ∫_C e^{az}/(e^z + 1) dz where C is the rectangle ABCD with vertices A(−R, 0), B(R, 0), C(R, 2π), D(−R, 2π), where R is large.

Figure 1.20: The rectangle ABCD with vertices A(−R, 0), B(R, 0), C(R, 2π), D(−R, 2π).
a ( R + iy )
e a( x + 2iπ )
R −R a ( − R + iy )
e az e ax 2π e 0 e
\ ∫ e
C
z
+1
dz = ∫ x
−R e + 1
dx + ∫
0 e + iy + 1
R
idy + ∫e
R
x + 2 iπ
+1
dx + ∫ − R + iy
2π e +1
idy (1)

e az
Now, f ( z ) = has simple pole at z = ( 2n + 1) iπ ; n ∈ I of which only z = ip lies inside C.
ez + 1

Res (ip) = lim


( z − iπ ) e az 
z → iπ ez + 1
z − iπ
= e aiπ lim z
z → iπ e +1

1
= e aiπ lim z (L′ Hospital rule )
z → iπ e

= – e aip

\ By Cauchy residue theorem


e az
C∫ e z + 1 dz = −2π i e 
aiπ
(2)

e( )
a R + iy
e a( R iy )
2π + 2π 2π
e aR
Now, ∫
0
e R +iy + 1
idy ≤ ∫
0 e R +iy − 1
dy = ∫
0
eR − 1
dy

2π e aR
= R
e −1 
aR
e ae aR
and lim R = lim R = a lim e (a −1) R (L′ Hospital rule )
R →∞ e − 1 R →∞ e R →∞ 
= 0     (∵ 0 < a < 1)

2π e a( R + iy )
Thus, as R → ∞, ∫0 e R + iy + 1
i dy → 0 (3)

e a(− R + iy )

e − aR 2π e − aR
0
and ∫e − R + iy
idy ≤ ∫ 1− e −R
dy = → 0 as R → ∞ (∵ a > 0) (4)
2π +1 0 1 − e−R
\ As R → ∞, from (1) to (4)

e( )
∞ −∞ a x + 2 iπ
e ax
∫ x dx + ∫∞ e x + 1 dx = −2π i e
aiπ

−∞ e + 1 
⇒ (1 − e^{2iπa}) ∫_{−∞}^{∞} e^{ax}/(e^x + 1) dx = −2πi e^{aiπ}
∴ ∫_{−∞}^{∞} e^{ax}/(e^x + 1) dx = −2πi e^{aiπ}/(1 − e^{2iπa}) = 2πi/(e^{iπa} − e^{−iπa}) = π/sin πa
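
A numerical spot-check of this result (an editorial sketch; NumPy and SciPy assumed, with the arbitrary choice a = 0.3):

```python
import numpy as np
from scipy.integrate import quad

a = 0.3                                    # any value with 0 < a < 1
# np.exp(x) may overflow harmlessly to inf for very large x; the ratio then evaluates to 0.
value, _ = quad(lambda x: np.exp(a * x) / (np.exp(x) + 1.0), -np.inf, np.inf)
print(value, np.pi / np.sin(a * np.pi))    # both approximately 3.883
```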


1.15.4 Improper Integrals with Singularities on Real Axis


We assume that f(z) has same conditions as in (1.15.2) except that f(z) has singularities simple
poles on real axis at z = x1, x2… xm. Now considering the contour C consisting of CR, l1, l2, ... lm+1,
C1, C2… Cm as shown in Figure1.21.

where C_R: |z| = R; Im(z) > 0 and C_k: |z − x_k| = ε, Im(z) > 0, where R is large and ε is small.
Figure 1.21: The contour consisting of C_R, the small semicircles C₁, …, C_m about the real poles x₁, …, x_m, and the real-axis segments l₁, …, l_{m+1}.

We shall have
∫_C f(z) dz = ∫_{C_R} f(z) dz + Σ_{k=1}^{m} ∫_{C_k (clockwise)} f(z) dz + Σ_{k=1}^{m+1} ∫_{l_k} f(x) dx   (1.45)
As in (1.15.2),
∫_C f(z) dz = 2πi (sum of residues of f(z) at its isolated singularities inside C)   (1.46)
Σ_{k=1}^{m+1} ∫_{l_k} f(x) dx → ∫_{−∞}^{∞} f(x) dx as ε → 0, R → ∞   (1.47)
∫_{C_R} f(z) dz → 0 as R → ∞   (1.48)
(Prove it as in (1.15.2).)
Now, we find ∫_{C_k (clockwise)} f(z) dz.
On the semicircle C_k: |z − x_k| = ε, we have z − x_k = εe^{iθ} and θ varies from π to 0.
By Laurent's expansion, as x_k is a simple pole of f(z), we have
f(z) = {a₀ + a₁(z − x_k) + a₂(z − x_k)² + …} + a₋₁/(z − x_k)
Now, z − x_k = εe^{iθ} ⇒ dz = iεe^{iθ} dθ, where θ varies from π to 0.
∴ ∫_{C_k} f(z) dz = ∫_π^0 [a₀ + a₁(z − x_k) + a₂(z − x_k)² + … + a₋₁/(z − x_k)] iεe^{iθ} dθ
= ∫_π^0 [a₀ + a₁εe^{iθ} + a₂(εe^{iθ})² + … + a₋₁/(εe^{iθ})] iεe^{iθ} dθ
= ε ∫_π^0 [a₀ + a₁εe^{iθ} + a₂(εe^{iθ})² + …] ie^{iθ} dθ + ia₋₁ ∫_π^0 dθ → ia₋₁(0 − π) as ε → 0
∴ ∫_{C_k} f(z) dz → −πia₋₁ = −πi Res(x_k) as ε → 0
∴ Σ_{k=1}^{m} ∫_{C_k} f(z) dz → −πi [sum of residues of f(z) at poles on the real axis] as ε → 0   (1.49)
∴ As R → ∞, ε → 0, from equations (1.45) to (1.49) we have
∫_{−∞}^{∞} f(x) dx = 2πi (sum of residues of f(z) at its singularities inside C)
+ πi (sum of residues of f(z) at simple poles on the real axis)

Remark 1.14: In the examples 1.88 to 1.91 we shall be defining the contour C but the figure (as
in the article) will not be drawn.
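
The combined formula just derived can be tried on a concrete function with one simple real pole. The sketch below (an editorial illustration; SymPy and SciPy assumed) uses f(z) = 1/[(z − 1)(z² + 1)], for which the formula predicts the Cauchy principal value −π/2; SciPy's weight='cauchy' option computes the principal value directly.

```python
import numpy as np
import sympy as sp
from scipy.integrate import quad

z = sp.symbols('z')
f = 1 / ((z - 1) * (z**2 + 1))
upper = sp.residue(f, z, sp.I)      # isolated singularity inside C (upper half-plane)
real_axis = sp.residue(f, z, 1)     # simple pole on the real axis
print(sp.simplify(2 * sp.pi * sp.I * upper + sp.pi * sp.I * real_axis))   # -pi/2

# Principal value computed directly: quad treats the integrand as g(x)/(x - 1).
pv, _ = quad(lambda x: 1.0 / (x**2 + 1.0), -200.0, 200.0, weight='cauchy', wvar=1.0)
print(pv, -np.pi / 2)               # both approximately -1.5708
```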
Example 1.88: Evaluate ∫₀^∞ (sin mx)/x dx; m > 0, and hence find the value of ∫₀^∞ (sin x)/x dx.
Solution: Consider ∫_C e^{imz}/z dz
where C consists of C_R: |z| = R, Im(z) > 0, the real axis from −R to −r, C_r: |z| = r, Im(z) > 0, and the real axis from r to R, where R is large and r is small. Here C_r is clockwise.
∴ ∫_C e^{imz}/z dz = ∫_{C_R} e^{imz}/z dz + ∫_{−R}^{−r} e^{imx}/x dx + ∫_{C_r} e^{imz}/z dz + ∫_r^R e^{imx}/x dx   (1)
Now, e^{imz}/z is analytic on and inside C
∴ By Cauchy–Goursat theorem
∫_C e^{imz}/z dz = 0   (2)
On C R , z = Re iθ ; 0 < θ < π ∴ dz = i Re iθ dθ

π
e imz e imR (cosθ + i sin θ )

CR
z
dz = ∫
0 Re iθ
i Re iθ dθ

π
e − mR sinθ
≤∫
R
π 2
⋅ Rdθ = 2∫ e − mR sinθ dθ 
0
(∵ e − mR sin ( π −θ )
= e − mR sinθ )
0

π
2 −2 mRθ
≤ 2∫ e π


0

 π 2θ −2θ 
∵ By Jordan inequality when 0 ≤ θ ≤ then sin θ ≥ and hence − sin θ ≤ 
2 π π 
π
π − 2 mR θ 2
=− e π
mR
0 

π
=
mR
(
1 − e − mR → 0 as R → ∞) (∵ m > 0) (3)

On Cr , z = r e iθ ∴ dz = ir e iθ dθ and θ varies from π to 0.  (Q clockwise)

( )
n
0 im reiθ iθ
∞ 0 imre
e imz e
\ ∫ dz = ∫ iθ
ire dθ = i ∑ ∫


Cr
z π re n=0 π n!


= i ∫ dθ + ∑ i
0 ∞
(im) n
rn 0

π
n =1 n! ∫
π
e inθ dθ → −π i + 0 as r → 0 (4)

\ As R → ∞, r → 0 from (1) to (4) we have


0 ∞
e imx e imx
∫ x dx − π i + ∫0 x dx = 0
−∞ 
∞ imx
e
⇒ ∫ x
dx = π i
−∞ 

cos mx + i sin mx
\ ∫
−∞
x
dx = π i

Equate imaginary parts
∫_{−∞}^{∞} (sin mx)/x dx = π
⇒ 2∫₀^∞ (sin mx)/x dx = π   (∵ (sin mx)/x is an even function)
∴ ∫₀^∞ (sin mx)/x dx = π/2; m > 0
Take m = 1
∫₀^∞ (sin x)/x dx = π/2
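
SymPy knows this Dirichlet integral in closed form, which gives an independent confirmation; the substitution x → mx shows that the value is the same for every m > 0. (A sketch assuming SymPy is installed.)

```python
import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.integrate(sp.sin(x) / x, (x, 0, sp.oo)))   # pi/2
```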


Note: The above value is in the Cauchy principal value sense. Since the integral ∫_{−∞}^{∞} (sin mx)/x dx; m > 0 is convergent, the value of the integral is the same as its Cauchy principal value.

Example 1.89: Show that ∫₀^∞ (cos 2ax − cos 2bx)/x² dx = π(b − a); a, b > 0.
Solution: Consider ∫_C (e^{2iaz} − e^{2ibz})/z² dz; a > 0, b > 0
where C consists of C_R: |z| = R, Im(z) > 0, the real axis from −R to −r, C_r: |z| = r, Im(z) > 0 (clockwise), and the real axis from r to R, where R is large and r is small.
−r
e 2iaz − e 2ibz e 2iaz − e 2ibz e 2iax − e 2ibx e 2iaz − e 2ibz R
e 2iax − e 2ibx
\ C∫ z 2 dz = ∫ z2
CR
dz + ∫ x2
−R
dx + ∫ z2
Cr
dz + ∫r x2
dx (1)

e 2iaz − e 2ibz
Now, is analytic on and inside C
z2
\ By Cauchy–Goursat theorem
e 2iaz − e 2ibz
C∫ z 2 dz = 0 (2)
On C R , z = Re iθ ∴ dz = i Re iθ dθ ; 0 <θ <π

2 iaR ( cos θ + i sin θ ) 2 ibR ( cos θ + i sin θ )


e 2iaz − e 2ibz π e −e
\ ∫ z 2 dz =
CR
∫ 0 R 2 e 2iθ
i Re iθ dθ

e −2 aR sin θ + e −2 bR sin θ
π
≤∫ .Rdθ
0 R2 
π 1+1
<∫ dθ
0 R 
 (∵ aR sin θ > 0, bR sin θ > 0 for a > 0, b > 0, 0 < θ < π ∴ e −2 aR sinθ < 1, e − 2 bR sinθ < 1 )

= → 0 as R → ∞ (3)
R
On Cr , z = re iθ , dz = ir e iθ dθ and θ varies from π to 0 (∵ clockwise )
iθ iθ
e 2iaz − e 2ibz 0e
2 iare
− e 2ibre
\ ∫
Cr z 2
dz = ∫
π r e 2iθ
2
ir e iθ dθ


(2iare ) iθ n ∞
(2ibre ) iθ n

0 n=0
∑ n!
− ∑ n!
=∫
n=0

idθ

π re 


(2ia re ) − (2ibre )
iθ n iθ n

0 ∑ n!
=∫
n=1

idθ 
π re

∞ (2ir )n (an − bn ) e inθ



0
=
n=1 r n! ∫ π e iθ
idθ


= 2i ( a − b ) ∫
0 eiθ
idθ + ∑
∞ (2i ) r n −1 an − bn
n
( ) 0
ie ( 1)θ dθ

i n−
π e iθ n=2 n! π

(2i ) n n −1
(a −b ) 1 − (−1)
( )
n n
∞ r
= 2π ( a − b ) + ∑
n −1

n= 2 (n − 1) n! 
e 2 iaz
−e 2 ibz
\ ∫ dz → 2π ( a − b ) as r → 0 (4)
Cr
z2
\ As R → ∞, r → 0 from (1) to ( 4 )

e 2iax − e 2ibx e 2iax − e 2ibx
dx + 2π ( a − b ) + ∫
0
∫ −∞ x 2
x2
dx = 0
0 
e 2iax − e 2ibx

\ ∫−∞ x 2 dx = 2π (b − a)

∞ ( cos 2ax − cos 2bx ) + i ( sin 2ax − sin 2bx )
⇒ ∫ dx = 2π ( b − a )

−∞ x2 
Equate real parts
∞ cos 2ax − cos 2bx
∫ dx = 2π (b − a )

−∞ x2 
∞ cos 2ax − cos 2bx  cos 2ax − cos 2bx 
\ ∫ dx = π (b − a )  ∵ is an even function
0 x2 x2 

Example 1.90: Evaluate the integral I = ∫₀^∞ sin ax/[x(x² + b²)] dx; a, b > 0
Solution: Consider ∫_C e^{iaz}/[z(z² + b²)] dz; a, b > 0
where C consists of C_R: |z| = R, Im(z) > 0, the real axis from −R to −r, C_r: |z| = r, Im(z) > 0 (clockwise), and the real axis from r to R, where R is large and r is small.
e iaz e iaz −r e iax e iaz R e iax

\ 
C (
z z 2 + b2 )
dz = ∫ z (z
CR
2
+ b2 )
dz + ∫
−R
(
x x 2 + b2 )
dx +
Cr
∫ z (z 2
+ b2 )
dz + ∫
r
(
x x 2 + b2 )
dx (1)

e iaz
Now, f ( z ) = has simple poles at z = 0, ± ib of which z = ib lies inside C (Q b > 0)
(
z z 2 + b2 )
e iaz e − ab e − ab
Res (ib ) = lim ( z − ib ) f ( z ) = lim = =− 2 
z →ib z →ib z ( z + ib ) ib( 2ib) 2b
\ By Cauchy residue theorem
e iaz  e − ab  π ie − ab
∫ z z 2 + b2
C (
dz = 2π i
)

 2b 2  = −
b2
(2)

On C R , z = R e iθ ∴ dz = i R e iθ dθ , 0 < θ < π
iaR ( cos θ + i sin θ )
e iaz e
\ ∫ z z dz ≤ ∫ Rieiθ dθ
CR ( 2
+ b2 ) CR (
Re iθ R 2 e 2iθ + b 2 ) 
π π
e − aR sin θ dθ
≤∫ 2 dθ < ∫ 2
0 R −b 0 R −b
2 2

 ( )
∵ a > 0, R > 0, 0 < sin θ < 1 for 0 < θ < π ∴ e − aR sin θ < 1
π
= → 0 as R → ∞ (3)
R 2 − b2
e iaz
Now, f ( z ) = has simple pole at z = 0
(
z z 2 + b2 )
\ By Laurent expansion about z = 0

(
f ( z ) = a0 + a1 z + a2 z 2 +  +
a−1
z  )
On Cr , z = r e iθ dz = ir e iθ dθ , θ varies from p to 0
0
 a 
\ ∫ (
f ( z ) dz = ∫  a0 + a1r e i θ + a2 r 2 e 2i θ +  + −i1θ  i r e i θ dθ
re 
)
Cr π  
0

(
= i ∫ a0 + a1r e i θ + a2 r 2 e 2i θ +  r e i θ dθ + i a−1 ( −π ) → 0 − π i a−1 ) as r → 0
π

e iaz 1
But a−1 = Res(0) = lim = 2
z →0 z +b
2 2
b
\ As r → 0
e iaz πi
∫ z (z
Cr
2
+b 2
)
dz = −
b2
(4)

\ from (1) to (4) as R → ∞, r → 0



πi π ie − ab
0
e iax e iax
∫ x (x
−∞
2
+ b2 )
dx − 2
+∫
b 0 x x 2 + b2
dx
(
= −
b2 )



e iax πi
\ ∫ x (x dx = (
1 − e − ab  )
−∞
2
+ b2 ) b2

cos ax + i sin ax πi
⇒ ∫ dx = (
1 − e − ab )

−∞ (
x x +b 2 2
) b2

Equate imaginary parts

π

sin ax
∫−∞ x x 2 + b2 dx = b2 1 − e
− ab
( )
( ) 

π  
\ 2∫
sin ax
dx = (
1 − e − ab ) ∵
sin ax
is even funct
t ion 
0 (
x x 2 + b2 ) b2 (
 x x 2 + b 2 ) 

π
∫ x (x
sin ax
dx = (
1 − e − ab )
0
2
+b 2
) 2b 2


sin 2 x ∞
Example 1.91: Evaluate the integral
0 x2
dx. ∫
∞ 1 − cos 2 x
2
∞ sin x
Solution: Let I = ∫ dx = ∫ dx
0 x2 0 2x2
1 − e 2iz
Consider ∫ 2 z 2 dz
C

where C consists of C R : z = R, Im ( z ) > 0, real axis from –R to –r, Cr : z = r , Im ( z ) > 0 clock-


wise, real axis from r to R
1 − e 2iz 1 − e 2iz −r 1 − e
2 ix
1 − e 2iz R1− e
2 ix
\ C∫ 2 z 2 dz = ∫ 2
CR 2 z
dz + ∫− R 2 x 2 dx + ∫ 2z2
Cr
dz + ∫r 2 x 2 dx (1)
1 − e 2iz
Now, is analytic on and inside C
2z2
\ By Cauchy–Goursat theorem
1 − e 2iz
C∫ 2 z 2 dz = 0 (2)

On C R , z = Re iθ ∴ dz = i Re iθ dθ ; 0 < θ < π

1 − e 2iz 1 − e 2iz 1 + e 2iz


\ ∫ 2 dz ≤ C∫ 2 z 2 dz ≤ C∫ 2 R2 dz
CR 2 z R R

2 iR (cos θ + i sin θ )
π 1+ e
=∫ Rdθ
2R2
0 

1 + e −2 R sin θ π 1+1
(∵ e )
π
 =∫ dθ < ∫ dθ −2 R sin θ
< 1 as 0 < sin θ < 1 for 0 < θ < π
0 2R 0 2R

→ 0 as R → ∞ (3)

On Cr , z = r e iθ ∴ dz = ire iθ dθ , θ varies from π to 0 (clockwise )

iθ −∑

(2ie ) iθ n
rn
0 0
1 − e 2iz 1 − e 2ire n!
\ ∫ dz = ∫ 2 2iθ ireiθ dθ = ∫ idθ
n =1

Cr 2z 2
π 2r e π 2re iθ

0

= ∫ −i ∑

(2ie ) iθ n
r n −1

π n =1 n ! 2e iθ 
0
−i  ∞
(2i )n eiθ 
( )
n −1
=∫ ⋅  2i + ∑ r n −1 dθ
2  n= 2 n! 

π  
2
= ( −π ) = −π as r → 0 (4)
2

\ As R → ∞ , r → 0 from (1) to ( 4 )

0 ∞
1 − e 2ix 1 − e 2ix
∫ 2
−∞ 2 x
dx − π + ∫0 2 x 2 dx = 0


1 − e 2ix
\ ∫
−∞ 2 x
2
dx = π


1 − cos 2 x − i sin 2 x
⇒ ∫ 2x2
dx = π
−∞ 
Equate real parts
∫_{−∞}^{∞} (1 − cos 2x)/(2x²) dx = π
∴ ∫_{−∞}^{∞} (sin²x)/x² dx = π
⇒ ∫₀^∞ (sin²x)/x² dx = π/2   (∵ sin²x/x² is an even function)

Exercise 1.5

1. Using contour integration evaluate
 (i) ∫₀^{2π} dθ/(2 + sin θ)
 (ii) ∫₀^{2π} cos θ/(3 + sin θ) dθ
 (iii) ∫₀^{2π} cos 3θ/(5 − 4 cos θ) dθ
 (iv) ∫₀^{π} (1 + 2 cos θ)/(5 + 4 cos θ) dθ
2. Using contour integration, prove that ∫₀^{2π} dθ/(1 − 2a sin θ + a²) = 2π/(1 − a²), 0 < a < 1.
3. Using contour integration, evaluate ∫₀^{π} dθ/(2 + cos θ)².
4. Evaluate ∫₀^{2π} cos nθ/(1 + 2a cos θ + a²) dθ and ∫₀^{2π} sin nθ/(1 + 2a cos θ + a²) dθ, given a² < 1 and n is a positive integer.
5. Using contour integration evaluate
 (i) ∫₀^∞ dx/(x⁴ + 1)
 (ii) ∫₀^∞ dx/(x⁴ + 16)
6. Apply residue theorem to evaluate ∫₀^∞ x² dx/(x² + a²)², a > 0.
7. Evaluate the integral ∫₀^∞ (x² + 2) dx/[(x² + 1)(x² + 4)].
8. Show that ∫₀^∞ x² dx/(x² + a²)³ = π/(16a³), a > 0.
9. Use contour integration to evaluate the real integral ∫_{−∞}^{∞} dx/(x² + a²)³; a > 0, and hence find the value of ∫₀^∞ dx/(x² + 1)³.
10. Show that
 (i) ∫_{−∞}^{∞} x² dx/[(x² + 1)(x² + 2x + 2)] = π/8
 (ii) ∫_{−∞}^{∞} x² dx/[(x² + 1)²(x² + 2x + 2)] = 7π/50
11. Evaluate ∫_{−∞}^{∞} cos mx dx/[(x² + a²)(x² + b²)]; a, b > 0, a ≠ b, m > 0.
12. Using the complex variable technique evaluate ∫₀^∞ cos 3x dx/[(x² + 1)(x² + 4)].
13. Evaluate ∫₀^∞ cos mx dx/(x² + a²)²; m, a > 0.
14. Show that ∫_{−∞}^{∞} sin x dx/(x² + 4x + 5) = −(π/e) sin 2.
15. Evaluate the integral ∫₀^∞ x sin x dx/(x² + a²)²; a > 0.
16. By integrating e^{−z²} round the rectangle whose vertices are −R, R, R + ia, −R + ia, evaluate the integral ∫_{−∞}^{∞} e^{−x²} cos 2ax dx.
17. By integrating e^{−z²} round the rectangle whose vertices are 0, R, R + ia, ia, show that
 (i) ∫₀^∞ e^{−x²} cos 2ax dx = (√π/2) e^{−a²}
 (ii) ∫₀^∞ e^{−x²} sin 2ax dx = e^{−a²} ∫₀^a e^{y²} dy
18. Show that ∫₀^∞ (sin x)/x dx = π/2 and ∫₀^∞ (cos x)/x dx = 0.
19. Show that ∫₀^∞ sin x dx/[x(x² + a²)] = (π/2a²)(1 − e^{−a}) and ∫₀^∞ cos x dx/[x(x² + a²)] = 0, a > 0.

Answers 1.5

1. (i) 2π/√3  (ii) 0  (iii) π/12  (iv) 0
3. 2π/(3√3)
4. (i) 2π(−1)ⁿaⁿ/(1 − a²)  (ii) 0
5. (i) π/(2√2)  (ii) π√2/32
6. π/(4a)
7. π/3
9. 3π/(8a⁵), 3π/16
11. [π/(a² − b²)] (e^{−bm}/b − e^{−am}/a)
12. (π/2)(e^{−3}/3 − e^{−6}/6)
13. π(am + 1)e^{−am}/(4a³)
15. πe^{−a}/(4a)
16. √π e^{−a²}

1.16 Conformal Mapping
For every point (x, y) in the z-plane in domain of f, the relation w = f(z) defines a corresponding
point (u, v) in the w-plane. If a point P(z0) maps into the point P* (w0) then w0 is known as image
of z0. If P moves along a curve C in z-plane then P* will move along a corresponding curve C*
in the w-plane.
Now, let two curves C1 and C2 in z-plane intersect at P(z0) and the corresponding curves
C1* , C2* in w-plane intersect at P*(w0). If the angle of intersection of the curves C1 and C2 at P
in z-plane is same in magnitude and sense as the angle of intersection of curves C1* and C2* at
P* in w-plane then the transformation is called conformal. Thus, if the sense of rotation as well
as the magnitude of angle is preserved, the transformation is said to be conformal. If only the
magnitude of the angle is preserved, then the transformation is isogonal.

The point z where f  ′(z) = 0 is called critical point of this transformation. The point z where
f(z) is defined and f  ′(z) ≠ 0 is called ordinary point. The point z where w = f(z) = z is called
fixed point or invariant point of the transformation. Now, we prove sufficient conditions for a
transformation to be conformal at a point.
Theorem 1.23  The mapping w = f(z) is conformal at each point z where f(z) is analytic and
f ′( z ) ≠ 0.
Proof: Let w = f(z) be analytic at P(z0) and f ′( z0 ) ≠ 0. Let C be a continuous curve in z-plane
with parametric equation
z(t) = x(t) + iy(t); t1 ≤ t ≤ t 2 (1.50)
passing through P(z0). The image C* of this curve in w-plane is w(t) = f (z(t)); t1 ≤ t ≤ t 2 passing
through P* (w0) the image of P(z0).
From equation (1.50), the slope of the tangent to C at P is dy/dx = ẏ(t)/ẋ(t) at t = t₀ corresponding to the point P, where ẏ(t) = dy/dt, ẋ(t) = dx/dt.
If θ₀ is the inclination of this tangent at P, then
tan θ₀ = ẏ(t)/ẋ(t)
∴ θ₀ = tan⁻¹[ẏ(t)/ẋ(t)] = arg ż(t) at P
∴ ż(t₀) = r₀ e^{iθ₀}   (1.51)

Similarly, w (t0 ) = ρ0 e iφ0 (1.52)



where φ0 is inclination of tangent at P* to C*.
Now, w(t) = f (z(t))
\ w (t ) = f ′ ( z (t )) z (t )

\ (
w (t0 ) = f ′ z (t0 ) z (t0 ) ) 
Using equations (1.51) and (1.52)
ρ0 e iφ0 = R0 e iψ 0 r0 e iθ0 where f ′ ( z (t0 )) = R0 eiψ 0 if f ′ ( z (t0 )) ≠ 0

iφ0 i (ψ 0 + θ0 )
\ ρ0 e = R0 r0 e

\ ρ0 = R0 r0 , φ0 = ψ 0 + θ 0 
ψ 0 = φ0 − θ 0 
\ 
and ρ0 = R0 r0  
Thus, the tangent to curve C at P is rotated by an angle
ψ 0 = Arg ( f ′ ( z0 )) where z0 = z (t0 )


Angle of rotation ψ 0 is independent of curve C and depends only on z0.


Thus if C1, C2 be any two curves passing through P(z0) with their images C1* , C2* respectively
meeting at P*(w0) then angle of rotation of both C1 and C2 will be ψ 0 .
If q1 and q2 are inclination of tangents to C1 and C2 at P(z0) respectively and f1 and f2 be incli-
nation of tangents to C1* and C2* at P * ( w0 ) then

ψ 0 = φ1 − θ1 = φ2 − θ 2
\ φ2 − φ1 = θ 2 − θ1 

Thus, the transformation is conformal at P.


Remark 1.15: (i) The angle through which tangent at P(z0) to a curve rotates by transformation
w = f (z) is called angle of rotation.
\ Angle of rotation at P = Arg (f ′(z0))
(ii) Element of arc r0 through P is magnified by R0 = | f  ′(z0) |. It is called coefficient of magnifica-
tion of transformation at P.
Theorem 1.24  A harmonic function f(x, y) remains harmonic under the conformal mapping
w = f (z).
Proof: Since f(x, y) is harmonic
∂ 2φ ∂ 2φ
\ + = 0 (1.53)
∂x 2 ∂y 2
Also, w = f(z) = u + iv is conformal, so f(z) is analytic and hence u and v are harmonic.
∂2u ∂2u
\ + = 0 (1.54)
∂x 2 ∂y 2

∂2 v ∂2 v
+ = 0 (1.55)
∂x 2 ∂y 2

Also, f(z) is analytic so u and v satisfy C–R equations
∂u ∂v ∂u ∂v
= , = − (1.56)
∂x ∂y ∂y ∂x

∂φ ∂φ ∂u ∂φ ∂v
Now, = +
∂x ∂u ∂x ∂v ∂x 

∂ 2φ ∂φ ∂ 2 u ∂  ∂φ  ∂u ∂φ ∂ 2 v ∂  ∂φ  ∂v
\ = + ⋅ + +
 
∂x 2 ∂u ∂x 2 ∂x  ∂u  ∂x ∂v ∂x 2 ∂x  ∂v  ∂x 

∂φ ∂ 2 u  ∂ 2 φ ∂u ∂ 2 φ ∂v  ∂u ∂φ ∂ 2 v  ∂ 2 φ ∂u ∂ 2 φ ∂v  ∂v
= + + + + +
∂u ∂x 2  ∂u 2 ∂x ∂v∂u ∂x  ∂x ∂v ∂x 2  ∂u∂v ∂x ∂v 2 ∂x  ∂x


2 2
∂φ ∂ 2 u ∂φ ∂ 2 v ∂ 2 φ  ∂u  ∂ 2 φ  ∂v  ∂ 2 φ ∂u ∂v
= + + 2  + 2   +2 (1.57)
∂u ∂x 2
∂v ∂x 2
∂u  ∂x  ∂v  ∂x  ∂u ∂v ∂x ∂x
 ∂ 2φ ∂ 2φ 
∵ =
∂u∂v ∂u∂v 

Similarly
2 2
∂ 2 φ ∂φ ∂ 2 u ∂φ ∂ 2 v ∂ 2 φ  ∂u  ∂ 2 φ  ∂v  ∂ 2 φ ∂u ∂v
= + + + + 2 (1.58)
∂y 2 ∂u ∂y 2 ∂v ∂y 2 ∂u 2  ∂y  ∂v 2  ∂y  ∂u ∂v ∂y ∂y
Add equations (1.57) and (1.58)
∂ 2φ ∂ 2φ ∂φ  ∂ 2 u ∂ 2 u  ∂φ  ∂ 2 v ∂ 2 v  ∂ 2φ  ∂u   ∂u  
2 2

+ = + + + +   +   
∂x 2 ∂y 2 ∂u  ∂x 2 ∂y 2  ∂v  ∂x 2 ∂y 2  ∂u 2  ∂x   ∂y  

2 
 ∂v  
2 2
∂ φ   ∂v  ∂ φ  ∂u ∂v ∂u ∂v 
2
+ 2   +    + 2 +
∂v  ∂x   ∂y   ∂u∂v  ∂x ∂x ∂y ∂y 

Use equations (1.53) to (1.56)
∂ 2φ  ∂u   ∂ φ  ∂v   ∂u  
2 2 2 2
 −∂v 
2
∂ 2φ  ∂u ∂v  −∂v   ∂u  
0= 2   +    + 2   +    + 2  +   

∂u  ∂x ∂x   ∂v  ∂x   ∂x   ∂u ∂v  ∂x ∂x  ∂x   ∂x  

 ∂ 2φ ∂ 2φ   ∂u  
2 2
 ∂v 
\  2 + 2    +    = 0 (1.59)
 ∂u ∂v   ∂x   ∂x  
Now, f (z) is conformal
∂u ∂v
\ f ′( z ) = +i ≠0
∂x ∂x 
2 2
2  ∂u   ∂v 
\ f ′( z ) =   +   ≠ 0
 ∂x   ∂x  
\ from equation (1.59)
∂ 2φ ∂ 2φ
+ =0
∂u 2 ∂v 2 
Also all second order partial derivatives of f w.r.t. u and v will be continuous. Hence, f is
­harmonic in w-plane.

Example 1.92: Show that the mapping w = ez is conformal in the whole of the z-plane.
Solution: w = f ( z ) = e z
\ f ′( z ) = e z ≠ 0 for any z

\ f (z) is analytic and f ′( z ) ≠ 0 for any z.


Hence, the given mapping is conformal in whole of z-plane.

Example 1.93: Show that the mapping w = sin z is conformal everywhere except at z = (2n + 1)π/2; n ∈ I.
Solution: w = f(z) = sin z
∴ f′(z) = cos z is defined for all z
and f′(z) = cos z = 0 for z = (2n + 1)π/2; n ∈ I
Hence, the given mapping is conformal everywhere except at z = (2n + 1)π/2; n ∈ I.
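
Conformality can also be observed numerically: take two directions through a point, push them forward by the mapping, and compare the angle between them before and after. The sketch below (plain NumPy; the point z₀ and the two directions are arbitrary choices) does this for the mapping w = e^z of Example 1.92.

```python
import numpy as np

f = np.exp                                # the mapping w = e^z
z0 = 0.7 + 0.4j                           # arbitrary test point
t1, t2 = 1.0 + 2.0j, -1.0 + 0.5j          # arbitrary tangent directions at z0
h = 1e-6
w1 = (f(z0 + h * t1) - f(z0)) / h         # image direction, approximately f'(z0) * t1
w2 = (f(z0 + h * t2) - f(z0)) / h         # image direction, approximately f'(z0) * t2
print(np.angle(t2 / t1), np.angle(w2 / w1))   # equal up to ~1e-6: the angle is preserved
```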

1.17 Some Standard Mappings


In this section, we discuss some of the important standard transformations.

1.17.1 Translation Mapping
The mapping w = z + c; c = a + ib ; a, b ∈ R is translation mapping.
We have u + iv = x + iy + a +ib
\ u = x + a, v = y + b
Hence point P (x, y) in z-plane is mapped into the point P* (x + a, y + b) in the w-plane. Thus,
if w-plane is superposed on the z-plane then the figure of w-plane is translated through vector
C = a î + b ĵ.
If equation of curve in z-plane is given then setting x = u – a, y = v – b, we get the equation of
the image in w-plane.
The regions in z-plane and w-plane have same shape, size and orientation and also mapping
is conformal. If z1 and z2 be any two points in z-plane then corresponding points in w-plane are
w1 = z1 + c, w2 = z2 + c respectively. We have w1 − w2 = z1 − z2 . Thus, this mapping preserves
the distance between the points.

1.17.2 Magnification/Contraction and Rotation


The mapping w = cz; c = a + ib; a, b ∈ R, c ≠ 0 is magnification/contraction and rotation ­mapping.
We have u + iv = (a + ib) (x + iy)

= ax – by + i(bx + ay) 

\ u = ax – by, v = bx + ay
Also, w = cz
⇒ z = w/c = w c̄/(c c̄) = w c̄/|c|²
∴ x + iy = (u + iv)(a − ib)/(a² + b²) = [au + bv + i(av − bu)]/(a² + b²)
∴ x = (au + bv)/(a² + b²),  y = (av − bu)/(a² + b²)
Thus, on putting these values of x and y in the equation of the curve to be transformed we get
the equation of the image.
If we write c = |c|e^{iα}, z = re^{iθ}, w = Re^{iφ}, then we have
Re^{iφ} = |c| r e^{i(θ + α)}   (∵ w = cz)
∴ R = |c| r, φ = θ + α
Thus, the transformation w = cz corresponds to a rotation by the angle α = Arg(c) and a magnification or contraction by |c|. If |c| > 1 there is magnification, if |c| < 1 there is contraction, and if |c| = 1 there is neither magnification nor contraction.
Shapes of figures in w-plane and z-plane are same and also the mapping is conformal.

1.17.3 Linear Transformation
The transformation w = az + c where a = a1 + ia2, c = c1 + ic2 is called linear transformation.
It is the combination of translation with rotation and magnification/contraction. This mapping is
conformal. Shapes of figures in z-plane and w-plane are same.

Example 1.94: Find the images of the following regions or curves in the z-plane onto the w-plane under the given mappings
(i) The circle |z − 1| = 2; w = 2z
(ii) The semicircular region |z| < 1, Re z > 0; w = e^{iπ/4} z
(iii) The square with vertices at (1, 1), (3, 1), (3, 3), (1, 3);
 (a) w = z + (1 + 2i)
 (b) w = √2 e^{−πi/4} z + (1 + 2i)
Solution: (i) |z − 1| = 2, w = 2z
∴ |w/2 − 1| = 2 ⇒ |w − 2| = 4
∴ Image is the circle |w − 2| = 4, i.e., the circle with centre (2, 0) and radius 4.
(ii) |z| < 1, Re(z) > 0, w = e^{iπ/4} z ⇒ z = we^{−iπ/4}
∴ |z| < 1 ⇒ |we^{−iπ/4}| < 1 ⇒ |w| < 1
and Re(z) > 0 ⇒ Re[(u + iv)(1/√2 − i/√2)] > 0

⇒ u/√2 + v/√2 > 0 ⇒ u + v > 0
Figure 1.22: The unit circle |w| = 1 and the line u + v = 0 in the w-plane.

\ Image is the interior of semicircle with centre (0, 0) and radius unity above the line u + v = 0.
(iii) (a) w = z + (1 + 2i) = (x + 1) + i(y + 2)
\ The points (1, 1), (3, 1), (3, 3) and (1, 3) map to (2, 3), (4, 3), (4, 5) and (2, 5), respectively in
the w-plane.
\ Image is square with vertices (2, 3), (4, 3), (4, 5) and (2, 5).
 1 1 
(b)  w = 2 e − π i 4 z + (1 + 2i ) = 2  −i  ( x + iy ) + (1 + 2i )
 2 2
= (1 − i ) ( x + iy ) + (1 + 2i )

= ( x + y + 1) + i ( − x + y + 2)

\ The points (1, 1), (3, 1), (3, 3) and (1, 3) map to (3, 2), (5, 0), (7, 2) and (5, 4), respectively in
the w-plane.
\ Image is square with vertices (3, 2), (5, 0), (7, 2) and (5, 4).
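
Part (iii)(b) is easy to verify by letting a few lines of Python (NumPy assumed; an editorial check, not part of the text) apply the map to the four vertices:

```python
import numpy as np

c = np.sqrt(2) * np.exp(-1j * np.pi / 4)      # sqrt(2) e^{-i pi/4} = 1 - i
for z in (1 + 1j, 3 + 1j, 3 + 3j, 1 + 3j):
    print(z, '->', np.round(c * z + (1 + 2j), 10))
# prints (3+2j), (5+0j), (7+2j), (5+4j), the vertices found above
```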

Example 1.95: Find the image of the triangle with vertices at i, 1 + i, 1 – i in the z-plane under
the transformation
 (i)  w = 3z + 4 – 2i
(ii) w = e^{5πi/3} z − 2 + 4i
Solution: (i) w = 3z + 4 – 2i
\ w transforms i, 1 + i, 1 – i to the points 3i + 4 – 2i = 4 + i, 3(1 + i) + 4 – 2i = 7 + i,
3(1 – i) + 4 – 2i = 7 – 5i respectively in the w-plane.
\ Image is triangle with vertices 4 + i, 7 + i and 7 – 5i.
(ii) w = e^{5πi/3} z − 2 + 4i = [cos(5π/3) + i sin(5π/3)] z − 2 + 4i
= (1/2 − i√3/2) z − 2 + 4i = (1/2)[(1 − i√3)z − 4 + 8i]


∴ w maps i to the point (1/2){(1 − i√3)i − 4 + 8i} = (1/2){(√3 − 4) + 9i}
w maps 1 + i to the point (1/2){(1 − i√3)(1 + i) − 4 + 8i} = (1/2){(√3 − 3) + (9 − √3)i}
w maps 1 − i to the point (1/2){(1 − i√3)(1 − i) − 4 + 8i} = (1/2){(−√3 − 3) + (7 − √3)i}
∴ Image is the triangle with vertices (1/2){(√3 − 4) + 9i}, (1/2){(√3 − 3) + (9 − √3)i} and (1/2){(−√3 − 3) + (7 − √3)i}.
1.17.4 Inverse Transformation (Inversion and Reflection)
1
The mapping w = is called inverse transformation.
z
1
If z = re iθ , w = Re iφ then Re iφ = e − iθ
r
1
\ R = , φ = −θ
r 
1 
Thus, point (r, q) in z-plane is mapped onto the point  , −θ  in w-plane. Thus, this mapping
r 
is combination of two mappings.
1 
(i) inversion in unit circle which maps (r, q) to  , θ  .
r 
1  1 
(ii) reflection in real axis which maps  , θ  to  , −θ  .
r  r 
The points inside unit circle are mapped to points outside unit circle and vice versa. The points
on unit circle are mapped to points on unit circle and hence unit circle maps onto unit circle.
Inverse transformation is conformal at all z except z = 0 which is the critical point.
1
For invariant points, w = = z.
z
\ z2 = 1 ⇒ z = ±1

Thus, z = ±1 are invariant points.


1
Example 1.96: Show that under the mapping w = all circles and straight lines in the z-plane
z
are transformed to circles and straight lines in the w-plane.
Solution: The equation
a(x² + y²) + bx + cy + d = 0   (1)
represents a circle if a ≠ 0 and a straight line if a = 0 in the z-plane.
Now, the transformation is w = 1/z,
or z = 1/w ⇒ x + iy = 1/(u + iv) = (u − iv)/(u² + v²)
∴ x = u/(u² + v²),  y = −v/(u² + v²)
∴ Equation (1) in the z-plane is transformed to the equation
a[(u/(u² + v²))² + (−v/(u² + v²))²] + bu/(u² + v²) − cv/(u² + v²) + d = 0
or a/(u² + v²) + bu/(u² + v²) − cv/(u² + v²) + d = 0
or d(u² + v²) + bu − cv + a = 0   (2)
If d = 0, then it is a straight line.
From (1), d = 0 implies (1) passes through origin.
Thus if circle or straight line in z-plane passes through origin then it is transformed to a
straight line in w-plane.
If d ≠ 0, then (2) represents a circle.
Thus if circle or straight line in z-plane does not pass through origin then it is transformed to
a circle in w-plane.
Thus, all circles and straight lines in z-plane are transformed to circles and straight lines in
w-plane.
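
The circle-to-circle behaviour is easy to see numerically. The sketch below (NumPy assumed; the circle |z − (3 + i)| = 2 is an arbitrary choice not passing through the origin) maps the circle under w = 1/z and fits a circle to the image points; the fit residual is at machine precision.

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 400)
z = (3.0 + 1.0j) + 2.0 * np.exp(1j * theta)    # a circle not passing through the origin
w = 1.0 / z
u, v = w.real, w.imag

# Fit u^2 + v^2 = 2*cu*u + 2*cv*v + k; the image circle has centre (cu, cv), radius sqrt(k + cu^2 + cv^2).
A = np.column_stack([2 * u, 2 * v, np.ones_like(u)])
cu, cv, k = np.linalg.lstsq(A, u**2 + v**2, rcond=None)[0]
r = np.sqrt(k + cu**2 + cv**2)
print(np.max(np.abs(np.abs(w - (cu + 1j * cv)) - r)))   # ~1e-16: every image point lies on one circle
```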
Remark 1.16:
(i) A circle in z-plane (a ≠ 0) passing through origin (d = 0) is transformed to a straight line in
w-plane not passing through origin.
(ii) A circle in z-plane (a ≠ 0) not passing through origin (d ≠ 0) is transformed to a circle in
w-plane not passing through origin.
(iii) A straight line in z-plane (a = 0) passing through origin (d = 0) is transformed to a straight
line in w-plane passing through origin.
 (iv) A straight line in z-plane (a = 0) not passing through origin (d ≠ 0) is transformed to a circle
in w-plane passing through origin.
4
Example 1.97: Show that the mapping w = transforms the straight line x = c, c ≠ 0 in the
z
z-plane into a circle in the w-plane.
Solution: w = 4/z ⇒ z = 4/w
⇒ x + iy = 4/(u + iv) = 4(u − iv)/(u² + v²)
∴ x = 4u/(u² + v²)

Thus, the straight line x = c in z-plane is transformed to


4u
2 =c
u + v2 
or c(u2 + v2) - 4u = 0

which is a circle in w-plane (c ≠ 0 ) .

Example 1.98: Find the image of the infinite strip
(i) 1/4 < y < 1/2   (ii) 0 < y < 1/2   (iii) 1 < x < 2
under the mapping w = 1/z. Also, show the region graphically.
Solution: w = 1/z ⇒ z = 1/w
∴ x + iy = 1/(u + iv) = (u − iv)/(u² + v²)
∴ x = u/(u² + v²),  y = −v/(u² + v²)
(i) 1/4 < y ⇒ 1/4 < −v/(u² + v²)  or  u² + v² + 4v < 0
or u² + (v + 2)² < 2²
or |w + 2i| < 2
and y < 1/2 ⇒ −v/(u² + v²) < 1/2  or  u² + v² + 2v > 0
or u² + (v + 1)² > 1
or |w + i| > 1
∴ Image of 1/4 < y < 1/2 is the interior of the circle |w + 2i| = 2 and exterior of the circle |w + i| = 1. The region is shown graphically.

Figure 1.23: The region inside |w + 2i| = 2 and outside |w + i| = 1 in the w-plane.

(ii) 0 < y ⇒ 0 < −v/(u² + v²) ⇒ v < 0, and from part (i), y < 1/2 ⇒ |w + i| > 1
∴ Image of 0 < y < 1/2 is the region outside the circle |w + i| = 1 and below the real axis.
The region is shown graphically.

Figure 1.24: The region below the real axis and outside the circle |w + i| = 1.

(iii) 1 < x ⇒ 1 < u/(u² + v²) ⇒ u² + v² − u < 0
or (u − 1/2)² + v² < (1/2)²
or |w − 1/2| < 1/2
x < 2 ⇒ u/(u² + v²) < 2 ⇒ u² + v² − u/2 > 0
or (u − 1/4)² + v² > (1/4)²
or |w − 1/4| > 1/4

Figure 1.25: The region inside |w − 1/2| = 1/2 and outside |w − 1/4| = 1/4.

∴ Image is the interior of the circle |w − 1/2| = 1/2 and exterior of the circle |w − 1/4| = 1/4. The region is shown graphically in the above figure.

Example 1.99: Show that under the transformation w = 1/z, the image of the hyperbola x² − y² = 1 is the lemniscate R² = cos 2φ, where w = Re^{iφ}.
Solution: w = 1/z; w = Re^{iφ}, z = re^{iθ}
∴ z = 1/w
⇒ re^{iθ} = (1/R) e^{−iφ}
∴ r = 1/R, θ = −φ
x² − y² = 1
⇒ (r cos θ)² − (r sin θ)² = 1
⇒ r²(cos²θ − sin²θ) = 1
⇒ r² cos 2θ = 1
⇒ (1/R²) cos(−2φ) = 1   (∵ r = 1/R, θ = −φ)
or R² = cos 2φ
∴ Image of the hyperbola x² − y² = 1 is the lemniscate R² = cos 2φ.
Example 1.100: Find the image of the half-plane y > α under the mapping w = 1/z, when
(i) α > 0   (ii) α < 0   (iii) α = 0
1
Solution: w =
z
1
\ z=
w 
1 u − iv
⇒ x + iy = =
u + iv u 2 + v 2
u −v
\ x= 2 , y= 2
u + v2 u + v2 
−v
Image of y > α is 2 >α
u + v2
or ( )
α u 2 + v 2 + v < 0 (1)
(i)  α > 0
v
By (1), u 2 + v 2 + <0
α

2 2
\  1   1 
u2 +  v +  <   
 2α  2α
1 1
or w+ i < 
2α 2α

1 1
\ Image is interior of the circle w + i =
2α 2α
v
(ii)  α < 0, from (1) u 2 + v 2 + >0
α
2 2
 1   −1 
\ u2 +  v +  > 
 2α   2α 

1 −1
or w+ i >
2α 2α 
1 −1
\ Image is exterior of circle w + i =
2α 2α
(iii)  α = 0, from (1) v < 0
\ Image is open half plane below the real axis.

1.17.5 Square Transformation
The transformation w = z² is called the square transformation.
We have u + iv = (x + iy)² = x² − y² + 2ixy
∴ u = x² − y²,  v = 2xy
Thus, any line parallel to the x-axis, say y = c, maps into u = x² − c², v = 2cx, where x is a parameter. Eliminating x,
u = v²/(4c²) − c²  or  v² = 4c²(u + c²)
which is a right-handed parabola in the w-plane with vertex (−c², 0).
Any line parallel to the y-axis, say x = b, maps into u = b² − y², v = 2by, where y is a parameter. Eliminating y,
u = b² − v²/(4b²)  or  v² = −4b²(u − b²)
which is a left-handed parabola in the w-plane with vertex (b², 0).
In polar coordinates, z = re^{iθ}, w = Re^{iφ}
∴ w = z² ⇒ Re^{iφ} = r²e^{2iθ}
∴ R = r², φ = 2θ
Thus, any circle of radius r in the z-plane is transformed to a circle of radius r² in the w-plane.
If θ = 0 then φ = 0. Thus, the positive real axis in the z-plane maps onto the positive real axis in the w-plane.
If θ = π/2 then φ = π. Thus, the positive imaginary axis in the z-plane maps onto the negative real axis in the w-plane.
The first quadrant in the z-plane, i.e., 0 < θ < π/2, maps onto 0 < φ < π, i.e., the upper half of the w-plane.
The angle in the z-plane at the origin maps into double the angle in the w-plane. Hence, the mapping is not conformal at the origin.
Also, dw/dz = 2z ≠ 0 if z ≠ 0, and w is analytic.
Thus, the mapping is conformal at all z except z = 0, and z = 0 is the only critical point.
Invariant points are given by w = z² = z,
i.e., z = 0, 1 are the invariant points.
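
The parabola images derived above can be confirmed pointwise; the sketch below (NumPy assumed, with the arbitrary choice c = 1.5) maps the line y = c under w = z² and checks that the image satisfies v² = 4c²(u + c²).

```python
import numpy as np

c = 1.5                                   # arbitrary horizontal line y = c
x = np.linspace(-5.0, 5.0, 201)
w = (x + 1j * c)**2                       # image under w = z**2
u, v = w.real, w.imag
print(np.max(np.abs(v**2 - 4 * c**2 * (u + c**2))))   # ~1e-12: points lie on v^2 = 4c^2(u + c^2)
```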

Example 1.101: Let the given mapping be w = z²; show that
(i) The mapping is conformal everywhere except at z = 0.
(ii) The coefficient of magnification at z = 1 + i is 2√2.
(iii) The angle of rotation at z = 1 + i is π/4.
(iv) The circle |z − 1| = 1 maps to the cardioid ρ = 2(1 + cos φ), where w = ρe^{iφ} in the w-plane.
Solution: Mapping is w = f (z) = z2
\ f ′ (z) = 2z

\ f (z) is analytic everywhere and f ′(z) = 0 for z = 0.
(i)  Thus, w = f (z) = z2 is conformal everywhere except at z = 0.
(ii)  { f ′ ( z )} = ( 2 z )z =1+ i = 2 (1 + i )
z =1+ i

\ f ′ (1 + i ) = 2 2

\ Coefficient of magnification at z = 1 + i is 2 2.
(iii)  Angle of rotation at (z = 1 + i)
= Arg { f ′ (1 + i )}

π
= Arg {2 (1 + i )} = tan −1 1 =
4
(iv)  The circle z − 1 = 1, i.e., z − 1 = e iθ ; 0 ≤ θ < 2π maps to

( ) = {e (e )}
2 2
w = z 2 = 1 + e iθ iθ / 2 − iθ / 2
+ e iθ / 2

2
 θ θ
\ ρe iφ = e iθ  2 cos  = 4 cos 2 eiθ
 2 2 

\ ρ = 4 cos = 2 (1 + cos θ ) ; φ = θ
2 
\ ρ = 2 (1 + cos φ )

\ Circle z − 1 = 1 maps to cardioid ρ = 2 (1 + cos φ ) . 

Example 1.102: Determine the image of the region 1/2 ≤ x ≤ 1 and 1/2 ≤ y ≤ 1 in the w-plane under the mapping w = z². Also, show both the regions graphically.
Solution:
w = z2
u + iv = ( x + iy ) = x 2 − y 2 + 2ixy
2
\

\ u = x 2 − y 2, v = 2 xy (1)
1 1
The line x = is mapped to u = − y 2 , v = y
2 4
(from (1))
1  1
i.e., u = − v 2 or v 2 = −  u −  (2)
4  4
The line x = 1is mapped to u = 1 − y 2 , v = 2 y (from (1))
2
v
i.e., or v 2 = −4 ( u − 1) (3)
u = 1−
4
1 1
The line y = is mapped to u = x 2 − , v = x
2 4
(from (1))
1 1
i.e., u = v 2 − or v 2 = u + (4)
4 4
and the line y = 1 is mapped to u = x 2 − 1, v = 2 x
v2
i.e., u = − 1 or v 2 = 4 ( u + 1) (5)
4
1
\ The region ≤ x ≤ 1 is mapped to the region between the parabolas (2) and (3) including
2 1
the parabolas and the region ≤ y ≤ 1 is mapped to the region between the parabolas (4) and
2
(5) including the parabolas. Also from (1), v = 2 xy and hence v > 0 for the region considered in
1 1
­z-plane. Therefore the rectangular area ≤ x ≤ 1 and ≤ y ≤ 1 is mapped to the shaded region
2 2
in w-plane as shown graphically hereunder.
Figure 1.26: The square 1/2 ≤ x ≤ 1, 1/2 ≤ y ≤ 1 in the z-plane.  Figure 1.27: Its image in the w-plane, bounded by the parabolas (2) to (5).



Example 1.103: Discuss the transformation w = z + 1/z and show that it maps the circle |z| = a (a ≠ 1) into an ellipse. Discuss the case when a = 1. Also, show that the radius vector arg(z) = α (α > π/4) is mapped to a branch of a hyperbola whose eccentricity is sec α.
1
Solution: For the transformation w = f ( z ) = z +
z
1
f ′ ( z ) = 1 − 2 = 0 for z = ±1
z 
Transformation is not conformal at z = ±1.
1
w = z+
z
1
\ u + iv = r (cos θ + sin θ ) + (cos θ − i sin θ )
r 
 1  1
\ u =  r +  cos θ , v =  r −  sin θ 
 r  r
u2 v2 u2 v2 2
\ 2
+ 2
= 12
+ ∵ ( (
cos 2θ=+1sin 2 θ∵=cos )
1 2 θ + sin 2 θ = 1 )
 1  1  1   1
 r +   r −  r +   r − 
r r r   r
For circle z = a ( a ≠ 1) , we have r = a
u2 v2
\ 2
+ 2
=1
 1  1
 a +   a − 
a a 
which is an ellipse.
When a = 1, we have r = 1
\ u = 2cos q, v = 0

Thus, the image is line segment on real axis from (−2, 0) to (2, 0) (∵ −1 ≤ cos q ≤ 1)
and length of line segment = 4.
 π
For radius vector, arg z = α  α > 
 4
z = r (cos α + i sin α )


\  1  1
u =  r +  cos α , v =  r −  sin α
 r  r 
2 2
u 1 v 1
\ = r + 2 + 2 and
2
= r + 2 −2
2

cos 2 α r sin 2 α r 
u2 v2
⇒ − =4
cos 2 α sin 2 α 

which is a branch of hyperbola whose eccentricity e is given by


sin 2 α
e2 − 1 = = tan 2 α
cos 2 α 
 π
\ e = 1 + tan α = sec α ⇒ e = sec α > 1
2 2 2
∵ α > 4 
 
Example 1.104: Show that the image of the circle |z| = 2 under the transformation w + 2i = z + 1/z is an ellipse.
Solution: w = u + iv, z = 2 ⇒ z = 2 (cos θ + i sin θ )
1 1
\ w + 2i = z + ⇒ u + iv + 2i = 2 ( cos θ + i sin θ ) + ( cos θ − i sin θ )
z 2 
Equate real and imaginary parts
5 3
u= cos θ , v + 2 = sin θ
2 2 
∵ cos θ + sin θ = 1 
2 2

( v + 2)
2
u2
\ 2
+ 2
=1
5 3
2 2
    
which is an ellipse in w-plane.
Example 1.105: If w = z + a²/z, prove that when z describes the circle x² + y² = a², w describes a line segment, and find its length. Also prove that if z describes the circle x² + y² = b², where b > a, w describes an ellipse.
Solution: Circle x 2 + y 2 = b 2 is z = b
\ z = b ( cos θ + i sin θ )

a2 a2
\ w = z+ ⇒ u + iv = b ( cos θ + i sin θ ) + ( cos θ − i sin θ ) 
z b
 a2   a2 
⇒ u =  b +  cos θ , v =  b −  sin θ (1)
 b   b 
For b = a, u = 2a cos q, v = 0
As −1 ≤ cos θ ≤ 1 , therefore w describes line segment on u-axis (real axis) from (–2a, 0) to (2a, 0)
and length of line segment = 4a (a > 0).
∵ cos 2 θ + sin 2 θ = 1 
2
u v2 u2 v2 1
\ From (1), + = 1 or + = (b > a) (2)
( ) (b )
2 2 2 2 2
 a2   a2  b2 + a2 2
−a b2
 b +   b − 
b  b
\ If z describes the circle x 2 + y 2 = b 2, b > a, then w describes ellipse (2).

1.17.6 Bilinear Transformation (Mobius Transformation or Fractional Transformation)
The transformation
w = (az + b)/(cz + d);  ad − bc ≠ 0
is called a bilinear transformation or Mobius transformation or fractional transformation.
Now, w = (az + b)/(cz + d) ⇒ cwz + dw = az + b
⇒ z = (−dw + b)/(cw − a)
which is also bilinear.
Now, w = (az + b)/(cz + d) ⇒ cwz + dw − az − b = 0
which is linear in both w and z, and hence the name bilinear transformation.
Now, dw/dz = [a(cz + d) − c(az + b)]/(cz + d)² = (ad − bc)/(cz + d)²
∴ dw/dz ≠ 0 (as ad − bc ≠ 0) and w is analytic. Thus, the transformation is conformal for all z.
A bilinear transformation is clearly a one-one transformation.
For invariant points, w = (az + b)/(cz + d) = z
∴ cz² + (d − a)z − b = 0
which is quadratic in z. Thus its roots, at most two points, will be invariant points. If the roots coincide, i.e., when (d − a)² + 4bc = 0, then there is only one invariant point; in this case the bilinear transformation is called parabolic.
To find a bilinear transformation, we require the cross ratio of four points, which we define hereunder.

1.17.7 Cross Ratio of Four Points
If t₁, t₂, t₃ and t₄ are any four numbers or complex points, then [(t₁ − t₂)(t₃ − t₄)]/[(t₁ − t₄)(t₃ − t₂)] is said to be their cross ratio and is denoted by (t₁, t₂, t₃, t₄).
Theorem 1.25  A bilinear transformation preserves the cross ratio of four points.
Proof: Let the points z₁, z₂, z₃ and z₄ be four points which map to w₁, w₂, w₃, w₄ of the w-plane respectively under the bilinear transformation w = (az + b)/(cz + d), ad − bc ≠ 0.

If these points are finite, then
w_j − w_k = (az_j + b)/(cz_j + d) − (az_k + b)/(cz_k + d) = (ad − bc)(z_j − z_k)/[(cz_j + d)(cz_k + d)];  j, k = 1, 2, 3, 4, j ≠ k
∴ (w₁, w₂, w₃, w₄) = [(w₁ − w₂)(w₃ − w₄)]/[(w₁ − w₄)(w₃ − w₂)]
= {[(ad − bc)(z₁ − z₂)]/[(cz₁ + d)(cz₂ + d)] · [(ad − bc)(z₃ − z₄)]/[(cz₃ + d)(cz₄ + d)]} ÷ {[(ad − bc)(z₁ − z₄)]/[(cz₁ + d)(cz₄ + d)] · [(ad − bc)(z₃ − z₂)]/[(cz₃ + d)(cz₂ + d)]}
= [(z₁ − z₂)(z₃ − z₄)]/[(z₁ − z₄)(z₃ − z₂)] = (z₁, z₂, z₃, z₄)
Thus, the cross ratio of four points is invariant under a bilinear transformation.
Remark 1.17:
(i) To find the transformation which maps z₁, z₂, z₃ to w₁, w₂, w₃ respectively, we have
(w, w₁, w₂, w₃) = (z, z₁, z₂, z₃)
∴ [(w − w₁)(w₂ − w₃)]/[(w − w₃)(w₂ − w₁)] = [(z − z₁)(z₂ − z₃)]/[(z − z₃)(z₂ − z₁)]
(ii) If one of the z's is infinite, say z₁ = ∞, then
[(w − w₁)(w₂ − w₃)]/[(w − w₃)(w₂ − w₁)] = (z₂ − z₃)/(z − z₃)   (∵ lim_{z₁→∞} (z − z₁)/(z₂ − z₁) = 1)
(iii) If one of the w's is infinite, say w₁ = ∞, then
(w₂ − w₃)/(w − w₃) = [(z − z₁)(z₂ − z₃)]/[(z − z₃)(z₂ − z₁)]
(iv) If one of the z's and one of the w's is infinite, say z₁ = ∞, w₃ = ∞, then
(w − w₁)/(w₂ − w₁) = (z₂ − z₃)/(z − z₃)
(v) To find the bilinear transformation which maps a given region in z-plane to given region in
w-plane, following points should be noted:
(a) If both the regions are not circles, interior of circles or exterior of circles, then take
three points on boundary of region in z-plane and three points on boundary of region in
w-plane and find the bilinear transformation and then check whether it suits the regions
or not, ­otherwise multiply by −1.
(b) If both the regions are circles or interior of circles or exterior of circles then inverse
points w.r.t. circles will transform to inverse points. Using this, find the transform.
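
Remark 1.17(i) translates directly into a small symbolic computation: write the cross-ratio equation, solve it for w, and check that the three prescribed points are matched. The sketch below (SymPy assumed; an editorial illustration) uses the data of Example 1.106, which follows.

```python
import sympy as sp

z, w = sp.symbols('z w')
z1, z2, z3 = 0, 1, sp.I
w1, w2, w3 = 1 + sp.I, -sp.I, 2 - sp.I

# (w, w1, w2, w3) = (z, z1, z2, z3) with denominators cleared; the equation is linear in w.
eq = sp.Eq((w - w1) * (w2 - w3) * (z - z3) * (z2 - z1),
           (z - z1) * (z2 - z3) * (w - w3) * (w2 - w1))
T = sp.simplify(sp.solve(eq, w)[0])
print(T)
for zk, wk in [(z1, w1), (z2, w2), (z3, w3)]:
    print(sp.simplify(T.subs(z, zk) - wk))   # 0, 0, 0: each z_k is sent to the required w_k
```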

Example 1.106: Find the bilinear transformation which maps the points z = 0, 1, i in the z-plane
onto the points 1 + i, –i, 2 – i in the w-plane respectively.
Solution: Let z1 = 0, z2 = 1, z3 = i and w1 = 1 + i, w2 = −i, w3 = 2 − i
The bilinear transformation which maps z1 , z2 , z3 to w1 , w2 , w3 respectively is
( w − w1 ) ( w2 − w3 ) ( z − z1 ) ( z2 − z3 )
= 
( w − w3 ) ( w2 − w1 ) ( z − z3 ) ( z2 − z1 )

[w − (1 + i )] [−i − 2 + i ] ( z − 0) (1 − i )
\ = 
[w − ( 2 − i )] [−i − 1 − i ] ( z − i ) (1 − 0)
w − (1 + i ) z (1 − i ) (1 + 2i ) z 3 + i (3 + i ) z
⇒ = ⋅ = ⋅ = 
w − (2 − i) z − i 2 z −i 2 2 z − 2i

Apply componendo and dividendo


2w − 3 (5 + i ) z − 2i
= 
1 − 2i (1 + i ) z + 2i
(7 − 9i ) z − 2i − 4
\ 2w − 3 = 
(1 + i ) z + 2i
(10 − 6i ) z + 4i − 4
\ 2w = 
(1 + i ) z + 2i
(5 − 3i ) z + 2i − 2
⇒ w= 
(1 + i ) z + 2i
which is the required bilinear transformation.

Example 1.107: Obtain the bilinear transformation which maps the points z =1, i,–1 onto the
points w = i, 0, –i respectively and hence find
(i) the image of |z| < 1
(ii) the invariant points of this transformation
Solution: Let z1 = 1, z2 = i, z3 = –1 and w1 = i, w2 = 0, w3 = –i.
The transformation which maps z1, z2, z3 to w1, w2, w3 respectively is
(w − w1 ) ( w2 − w3 ) ( z − z1 ) ( z2 − z3 )
=
(w − w3 ) ( w 2 − w1 ) ( z − z3 ) ( z2 − z1 )

( w − i ) ( 0 + i ) ( z − 1) ( i + 1)
\ =
( w + i ) ( 0 − i ) ( z + 1) ( i − 1) 
w− i i ( z − 1)
⇒ =
w+ i z +1 

Apply componendo and dividendo


2w (1 + i ) z + (1 − i ) (i − 1) z + (i + 1)
= ⇒w=
2i (1 − i ) z + (1 + i ) (1 − i ) z + (1 + i )
−z + i
\ w=
z+i 
which is the required transformation.
w −z + i
(i)          =
1 z+i
Apply componendo and dividendo
1 − w 2z 1− w
= ⇒ z = i
1 + w 2i  1 + w 

 1− w 
(∵ i = 1)
2 2
\ | z |< 1 ⇒  i < 1 ⇒ 1− w < 1+ w
 1 + w  
⇒ distance of w from (1,0) < distance of w from (−1,0) 

But right bisector of (1,0) and (−1,0) is u = 0.


\ Image of |z| <1 is u > 0 which is open half plane to the right of imaginary axis.
−z + i
(ii)  Transformation is w =
z+i
for invariant points w = z
−z + i
\ = z ⇒ z 2 + (1 + i ) z − i = 0
z+i 
− (1 + i ) ± (1 + i )2 + 4i − (1 + i ) ± 3 (1 + i )
\ z= =
2 2 
 ± 3 − 1
\ Invariant points are   (1 + i )
 2  
Example 1.108: Find the image of the half-plane x + y > 0 under the bilinear transformation

w=
( z − 1)
( z + i)
Solution: The given transformation is
w z −1
=
1 z+i 
\ wz + iw = z −1 
iw + 1
\ z=
1− w 

2
1 + iw 1 − w 1 − w + iw − i w
\ x + iy = ⋅ = 
1− w 1− w 1− w
2

=
1 − (u − iv ) + i (u + iv ) − i u 2 + v 2 ( )
2
1− w

\ x=
1− u − v
,y=
v + u − u2 + v2 ( )
2 2
1− w 1− w


\ x+ y =
1 − u − v + v + u − u2 + v2 ( )= (
1 − u2 + v2 )
2 2
1− w 1− w

\ (
x + y > 0 ⇒ 1 − u2 + v2 > 0 ) 
or u2 + v2 < 1 ⇒ w < 1

\ Image of half-plane x + y > 0 is interior of circle |w| = 1.
z
Example 1.109: Find the image of the annulus 1 < | z | < 2 under the mapping w = .
z z −1
Solution: w = ⇒ zw − w = z
z −1
w
⇒ z=
w −1 
w
\ 1 < z < 2 is mapped to 1 < <2
w −1

or w − 1 < w < 2 w − 1 (1)

Now, w −1 < w

( )
2 2
⇒ u − 1 + iv < u + iv 

⇒ (u − 1)2 + v 2 < u 2 + v 2

1
⇒ −2u + 1 < 0 ⇒ u > (2)
2
and w < 2 w −1 
u + iv < 4. (u − 1) + iv
2 2

⇒ ( u + v ) < 4 ( u − 1) + v
2 2 2 2 

⇒ 3 ( u + v ) − 8u + 4 > 0
2 2


8 4
⇒ u2 + v2 − u + > 0 
3 3
2 2
 4 2
⇒ u − 3  + v >  3 
2

    
4 2
⇒ w− > (3)
3 3
4 2
\ From (1), (2) and (3), the image of the annulus 1 < z < 2 is part of exterior of circle w − =
1 3 3
in the region u > ⋅
2
Example 1.110: Find the fixed points of the bilinear transformation
3iz + 1 3z − 4
(i)  w =   (ii) w =
z +1 z −1
Solution: For fixed points w = z
3iz + 1
(i)  For fixed points, =z
z +1
⇒ z 2 + (1 − 3i ) z − 1 = 0 

(1 − 3i )
2
−1 + 3i ± +4
⇒ z=
2 
−1 + 3i ± −4 − 6i
⇒ z=
2 
\ Fixed points are
1
2
(−1 + 3i ± −4 − 6i )
3z − 4
(ii)  For fixed points w = =z
z −1
\ z 2 − z = 3z − 4 
or z2 − 4z + 4 = 0

⇒ ( z − 2 )2 = 0
⇒ z=2
\ The fixed point is 2.
Example 1.111: Obtain the condition under which the mapping w = (az + b)/(cz + d) maps a straight line
of the z-plane into a unit circle of the w-plane.

Solution: w = (az + b)/(cz + d)

For unit circle |w| = 1 in the w-plane, |(az + b)/(cz + d)| = 1

⇒ |(az + b)/(cz + d)|² = 1 ⇒ [(az + b)/(cz + d)]·[(āz̄ + b̄)/(c̄z̄ + d̄)] = 1

∴ |a|²|z|² + ab̄z + āb z̄ + |b|² = |c|²|z|² + cd̄z + c̄d z̄ + |d|²

or (|a|² − |c|²)|z|² + (ab̄ − cd̄)z + (āb − c̄d)z̄ + |b|² − |d|² = 0

For this to be a straight line, |a|² − |c|² = 0

⇒ |a|² = |c|²

∴ |a| = |c|

which is the required condition.
Note: If a and c are real, then condition is a = ± c.

Example 1.112: Find the transformation which maps the points z = 1, −i, −1 to points w = i, 0, −i
respectively. Show that this transformation maps the region outside the circle | z | =1 into the half-
space R ( w ) > 0.
Solution: Let z1 = 1, z2 = −i, z3 = −1 and w1 = i, w2 = 0, w3 = −i.
The transformation which maps z1 , z2 , z3 to w1 , w2 , w3 respectively is
(w − w1)(w2 − w3)/[(w − w3)(w2 − w1)] = (z − z1)(z2 − z3)/[(z − z3)(z2 − z1)]

∴ (w − i)(0 + i)/[(w + i)(0 − i)] = (z − 1)(−i + 1)/[(z + 1)(−i − 1)]

or −(w − i)/(w + i) = −[(z − 1)/(z + 1)]·[(1 − i)/(1 + i)]

or (w − i)/(w + i) = −i(z − 1)/(z + 1)

Apply componendo and dividendo

2w/2i = [(1 − i)z + (1 + i)]/[(1 + i)z + (1 − i)]

⇒ w = [(1 + i)z − (1 − i)]/[(1 + i)z + (1 − i)] = (z + i)/(z − i)

which is the required transformation.
Region outside circle |z| = 1 is |z| > 1.

Now, w/1 = (z + i)/(z − i)

Apply componendo and dividendo

2z/2i = (w + 1)/(w − 1)  ⇒  z = i(w + 1)/(w − 1)

∴ |z| > 1 is mapped to |i(w + 1)/(w − 1)| > 1

or |w + 1| > |w − 1|

or |(u + 1) + iv| > |(u − 1) + iv|

or (u + 1)² + v² > (u − 1)² + v²

or 4u > 0 ⇒ u > 0

∴ The transformation maps the region outside the circle |z| = 1 into the open half plane to the right
of the imaginary axis.
Example 1.113: Show that the mapping w = (i − z)/(i + z) maps the real axis of the z-plane onto the circle
|w| = 1, and the half-plane y > 0 onto the interior of the unit circle |w| < 1 in the w-plane.
Solution:

w/1 = (i − z)/(i + z)

Apply componendo and dividendo

(w + 1)/(w − 1) = 2i/(−2z)

∴ z = −i(w − 1)/(w + 1) = −i(w − 1)(w̄ + 1)/|w + 1|²

∴ x + iy = −i(|w|² + w − w̄ − 1)/|w + 1|² = −i(u² + v² + 2iv − 1)/|w + 1|²

Equate imaginary parts

y = [1 − (u² + v²)]/|w + 1|² (1)

Real axis in the z-plane is y = 0

It is mapped to −(u² + v²) + 1 = 0

or u² + v² = 1, i.e., |w| = 1

∴ The real axis of the z-plane is mapped to the circle |w| = 1.

and from (1), y > 0 is mapped to

[1 − (u² + v²)]/|w + 1|² > 0

i.e., u² + v² < 1 or |w| < 1

which is the interior of the unit circle |w| = 1.
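The same kind of numerical sanity check applies here — an illustrative sketch, not part of the text, assuming NumPy is available:

# Points on the real axis map onto |w| = 1 and points with y > 0 map into |w| < 1
# under w = (i - z)/(i + z).
import numpy as np

x = np.linspace(-50.0, 50.0, 1001)                 # a stretch of the real axis
w_real_axis = (1j - x)/(1j + x)
print(np.allclose(np.abs(w_real_axis), 1.0))       # True

rng = np.random.default_rng(1)
z_upper = rng.uniform(-5, 5, 2000) + 1j*rng.uniform(0.01, 5, 2000)   # points with y > 0
w_upper = (1j - z_upper)/(1j + z_upper)
print(np.all(np.abs(w_upper) < 1.0))               # True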

Example 1.114: Given the bilinear transformation
w = (2z + 3)/(z − 4), find the image of the circle x² + y² − 4x = 0 in the w-plane.

Solution: Circle x² + y² − 4x = 0 is (x − 2)² + y² = 2², i.e., |z − 2| = 2.

Transformation is w = (2z + 3)/(z − 4)

∴ wz − 4w = 2z + 3

⇒ (w − 2)z = 4w + 3

⇒ z = (4w + 3)/(w − 2)

∴ |z − 2| = 2 is mapped to |(4w + 3)/(w − 2) − 2| = 2

i.e., |(2w + 7)/(w − 2)| = 2

or |w + 7/2| = |w − 2|

⇒ (u + 7/2)² + v² = (u − 2)² + v²

⇒ 11u + 33/4 = 0 or 4u + 3 = 0

∴ Image of circle x² + y² − 4x = 0 in the w-plane is the line 4u + 3 = 0.

Example 1.115: Find the bilinear transformation which maps the points i, –i, 1 of z-plane into 0,
1, ∞ of the w-plane respectively.
Solution: Let z1 = i, z2 = −i, z3 = 1and w1 = 0, w2 = 1, w3 = ∞.
The transformation which maps z1, z2, z3 to w1, w2, w3 respectively is
(w − w1)(w2 − w3)/[(w − w3)(w2 − w1)] = (z − z1)(z2 − z3)/[(z − z3)(z2 − z1)]

Take limit as w3 → ∞

(w − w1)/(w2 − w1) = (z − z1)(z2 − z3)/[(z − z3)(z2 − z1)]

∴ w/1 = (z − i)(−i − 1)/[(z − 1)(−i − i)]

∴ w = ½ (1 − i)(z − i)/(z − 1)

Example 1.116: Determine the fractional transformation that maps
z1 = 0, z2 = 1, z3 = ∞ onto w1 = 1, w2 = −i, w3 = −1 respectively.

Solution: The transformation which maps z1, z2, z3 to w1, w2, w3 respectively is

(w − w1)(w2 − w3)/[(w − w3)(w2 − w1)] = (z − z1)(z2 − z3)/[(z − z3)(z2 − z1)]

Take limit as z3 → ∞

(w − w1)(w2 − w3)/[(w − w3)(w2 − w1)] = (z − z1)/(z2 − z1)

∴ (w − 1)(−i + 1)/[(w + 1)(−i − 1)] = (z − 0)/(1 − 0)

or (w − 1)/(w + 1) = z/i

⇒ wi − i = zw + z

⇒ w(i − z) = z + i  ⇒  w = (i + z)/(i − z)
Example 1.117: Determine the Möbius transformation having 1 and i as fixed (invariant) points
and which maps 0 to −1.

Solution: Let the transformation be w = (az + b)/(cz + d) where ad − bc ≠ 0 (1)

1 and i are fixed points

∴ (a + b)/(c + d) = 1,  (ai + b)/(ci + d) = i

∴ a + b = c + d (2)

ai + b = −c + di (3)

Multiply (2) by i and subtract (3)

(i − 1)b = (i + 1)c

∴ c = [(i − 1)/(i + 1)]b = ib (4)

(1) maps z = 0 to w = −1

∴ b/d = −1 ⇒ b = −d

∴ c = ib, d = −b

∴ from (2), a + b = ib − b ⇒ a = (i − 2)b

∴ Transformation is

w = [(i − 2)bz + b]/(ibz − b) = [(i − 2)z + 1]/(iz − 1) = [(1 + 2i)z − i]/(z + i)

Example 1.118: Find a bilinear map which maps the upper half of the z-plane onto the right half
of the w-plane.
Solution: Boundary of upper half of z-plane is Im (z) = 0, i.e., y = 0.
Boundary of right half of w-plane is Re (w) = 0, i.e., u = 0.
\ Im (z) = 0 is mapped to Re (w) = 0
Suppose z1 = 0, z2 = 1, z3 = ∞ are mapped to w1 = i, w2 = −i, and w3 = 2i respectively.

∴ Mapping is

(w − w1)(w2 − w3)/[(w − w3)(w2 − w1)] = (z − z1)(z2 − z3)/[(z − z3)(z2 − z1)]

i.e., (w − i)(−i − 2i)/[(w − 2i)(−i − i)] = (z − 0)/(1 − 0)   (Take limit as z3 → ∞)

or (w − i)/(w − 2i) = (2/3)z

or 3w − 3i = 2wz − 4iz

∴ w = (4iz − 3i)/(2z − 3) = (4iz − 3i)(2z̄ − 3)/|2z − 3|²

∴ u + iv = [8i|z|² − 12i(x + iy) − 6i(x − iy) + 9i]/|2z − 3|²

For u > 0, the real part of (u + iv) > 0

∴ 12y − 6y > 0, i.e., y > 0

∴ w = (4iz − 3i)/(2z − 3) is the required transformation.
This is not unique.

Exercise 1.6

1. Find the images of the following regions  (iv)  The triangle with vertices at
in the z-plane onto the w-plane under the   (0,1) , (1, −1) , (1,1) ; w = z + (1 − 2i )
given mappings
  (v)  The region z ≤ 1; w = (1 − i ) z − 2i
  (i)  The semicircular region
2. Show that the mapping w = iz represents a
z < 1, Im z > 0; w = z + ( 2 + i ) rotation through an angle π / 2.
 (ii)  The half-plane 3. Find and plot the image of the triangular
  Re z > 0; w = (1 − i ) z + 2 region with vertices at (0,0), (1,0), (0,1)
under the transformation w = (1 – i) z+3.
(iii)  The unit disk z < 1; w = (1 + i ) z + 2i

4. Under the mapping w = ze iπ 4 find the re- 12. Under the mapping w = f (z) = z2, find the
gion in w-plane corresponding to the tri- image of the region bounded by the lines
angular region on the z-plane bounded by x = 1, y = 1 and x + y = 1.
the lines x = 0, y = 0 and x + y =1. 13. Find the image of the curve x2 – y2 = 4 un-
5. Let a rectangular region OABC with verti- der the mapping w = z2.
ces at O(0,0), A(1,0), B(1,2), C(0,2) be de- 14. Determine the region of the w-plane into
fined in the z-plane. Find the image of the which the first quadrant of z-plane is
region in the w-plane, under the mapping mapped by the transformation w = z2.
(i)  w = z + 2 + i  (ii) w = 2z 15. Determine the region onto which the
(iii) w = e iπ 4 z   (iv) w = (1 – i) z – 2i π
sector r < a, 0 ≤ θ ≤ is mapped by
6. Determine the region in the w-plane into 4
i
which the rectangular region bounded by (i) w = z2 (ii) w = i z2 (iii) w = 2 , where
the lines x = 0, y = 0, x = 1, y = 2 in the z = re iθ and w = Re iφ z
z-plane is mapped under the transforma- 16. Find bilinear transformation which maps
tion w = (1 + i)z + 2 – i. the points 2, i, – 2 in the z-plane onto the
1 points 1, i, – 1 in the w-plane.
7. Under the mapping w = , find
z 17. Find the image of the closed half-disk
   (i)  The image of the circle z − 2 = 3 z ≤ 1, Im ( z ) ≥ 0 under the bilinear trans-
z
 (ii)  The image of z − 3i = 3 formation w = .
( z + 1)
(iii)  The image of the disk z − 1 ≤ 1
18. Find the fixed points of the bilinear trans-
 (iv)  The image of the region x + y > 1 z −1 z
The image of the region Re z > 1 and formation w =  (ii) w =
z +1 z−2
Im z > 1
19. Show that both the transformations
8. Find the image of the region bounded by
z −i i−z
the lines x − y < 2 and x + y > 2 under the w= and w = transform the
1 z+i i+z
mapping w = . upper half-plane Im ( z ) ≥ 0 into w ≤ 1.
z
9. Find the critical points of the transforma- 20. Find the bilinear transformation that maps
1 the points z1 = – i, z2 = 0, z3 = i into the
tion w = . points w1 = –1, w2 = i, w3 = 1 respectively.
z
10. Under the mapping w =
(1 + i ) , find the Into what curves the y-axis is transformed
to this transformation?
( z + i) 3− z
21. Show that the transformation w =
image of the region z − i < 2. z−2
5 
i transforms the circle with centre  , 0
11. Under the mapping w = , find the 2 
( z − i) 1
and radius in the z-plane into the imagi-
images of the following regions (i) Im z < 0 2
nary axis in the w-plane and the interior of
(ii) z > 1. the circle into the right half of the plane.

22. Find the bilinear transformation whose 27.  (i) Determine the linear fractional trans-
fixed points are –1 and 1. formation that sends the points z = 0,
23. Determine the bilinear transformation −i
– i, 2i into the points w = 5i, ∞,
which maps z1 = 0, z2 = 1, z3 = ∞ into w1 = i, 3
­respectively.
w2 = –1, w3 = –i respectively.
24. Find the bilinear transformation which  (ii) Find the invariant points of this trans-
transforms the unit circle z = 1 into the formation.
real axis in such a way that the points (iii) Find the image of z < 1 under this
z = 1, i and –1 are mapped into the points transformation.
w = 0, 1 and ∞, respectively. Find the 28. Find a transformation w = f (z) which maps
­region into which the interior and ­exterior
 (i) the real axis in the z-plane onto the
of the circle are mapped.
real axis in the w-plane.
25. Determine the bilinear transformation
(ii) the unit disk z ≤ 1 in the z-plane
which maps 0, 1, ∞ into i, –1, –i, respec-
tively. Show that this transformation maps onto the half-plane Re ( w ) ≥ 0 in the
the interior of the unit circle in the z-plane w-plane.
into the half-plane Im (w) > 0. (iii) the unit disk z ≤ 1 in the z-plane onto
26. Find the bilinear transformation which the unit disk w ≤ 1 in the w-plane.
maps the points z = 1, i, 2+i in the
z-plane onto the points w = i, 1, ∞, in the 29. Find a transformation w = f(z) which maps
w-plane. the upper half-plane Im ( z ) ≥ 0 onto the
unit disk w ≤ 1.

Answers 1.6
1. (i)  Image is interior of semicircle with centre (2, 1) and radius unity above the line v =1
(ii)  Image is open half-plane u – v > 2.
(iii)  Image is interior of circle with centre (0, 2) and radius 2.
(iv)  Image is triangle with vertices (1, –1), (2, –3) and (2, –1).
(v)  Image is inside and boundary of circle with centre (0, –2) and radius 2 .
2. Image is triangular region with vertices A(3, 0), B(4, – 1), C(4, 1).
1
4. Image is triangular region bounded by the lines v = −u, v = u, v =
2
5.  (i) Image is rectangle with vertices (2, 1), (3, 1), (3,3), (2, 3).
 (ii) Image is rectangle with vertices (0,0), (2, 0), (2,4), (0, 4).
 1 1   −1 3   −2 2 
(iii) Image is rectangle with vertices (0, 0 ) ,  , , , , ,
 2 2   2 2   2 2 
 (iv) Image is rectangle with vertices (0, –2), (1, –3), (3, –1), (2, 0).
6. Image is rectangular region bounded by the lines, u + v = 1, u – v = 3, u + v = 3, u – v = –1.

2 3
7.  (i) Image is circle w + = .
5 5
 (ii) Image is straight line 6 v + 1 = 0.
1
(iii) Image is closed half-plane u ≥ ⋅
2
1 1
 (iv) w − (1 − i ) <
2 2
1 1 1 1
 (v) Image is the intersection of interiors of the circles w −
= and w + i =
2 2 2 2
1 1
8. Image is intersection of exterior of circle w − (1 + i ) = and interior of circle
4 2 2
1 1
w − (1 − i ) = ⋅
4 2 2
9. z = 0
10. Image is open half -plane 2u − 2v − 1 > 0
1 1 1
11. (i) w + <  (ii) u > −
2 2 2
 1
12. The region is bounded by three parabolas v 2 = 4 (u + 1) , v 2 = −4 (u − 1) u 2 = −2  v − 
 2
13. u = 4
14. Upper half of the w-plane
π π
15. (i)  sector R < a 2 ; 0 ≤ φ ≤ (ii) sector R < a 2 ; ≤ φ ≤ π
2 2
1
(iii) Region is first quadrant excluding inside and boundary of the circle w = 2 in first
quadrant. a

−3iz + 2
16. w =
z − 6i
1
17. u ≤ , v ≥ 0
2
18. (i) z = ±i (ii) z = 0, 3
−i ( z − 1)
20. w = , w =1
( z + 1)
az + b
22. w = where a, b are any complex numbers such that a2 ≠ b2.
bz + a
1 ( z − i)
23. w =
i ( z + i)
i (1 − z )
24. w = which maps interior of unit circle to v > 0 and exterior to v < 0.
(1 + z )

25. w = −i
( z − i)
( z + i)
26. w=
(2 + i ) z − (1 + 2i )
z − (2 + i )
−3 z + 5i
27. (i) w = (ii) i, −5i (iii) v > 1
−iz + 1
( z − x1 ) ( x2 − x3 ) for any three different reals x , x , x
28. (i) w =
( z − x3 ) ( x2 − x1 ) 1 2 3

i−z
(ii)  w = ( not unique )    (iii)  w = k z − α ; k = 1, α <1
i+z α z −1
1+ i w −1
29. z = −
2 w −i
Laplace Transform 2
2.1 Introduction
The study of Laplace transform is an essential part of mathematical background for engineers
and scientists. Laplace transform provides easy and effective method to solve linear ordinary or
partial differential equations under suitable initial and boundary conditions. Laplace transform
changes the given differential equation under given conditions to an algebraic equation in terms
of Laplace transform of dependent variable, then, solving this algebraic equation and finding the
inverse Laplace transform, we get the required solution. The quality of this method is that the
solution is found without finding the general solution. The Laplace transform method is most
suitable to the physical systems in which the driving force (which is right-hand side of differential
equation) is discontinuous or periodic or a large force acting for a short duration.

2.2 Definition of Laplace Transform and Inverse Laplace Transform

Let f(t) be a given function defined for all t > 0. If ∫ 0
e − st f(t) dt exists, then

F(s) = ∫
0
e − st f(t) dt

is called the one-sided Laplace transform of the function f(t) and is also denoted by £{f(t)} or
f ( s) for those values of s for which the integral exists. Thus


£{f(t)} = F(s) = f (s) = ∫ e − st f(t) dt
0

If F(s) is the Laplace transform of f(t), i.e., £{f(t)} = F(s), then f(t) is called inverse Laplace
transform of F(s) and we write
f(t) = £ -1{F(s)}

The two-sided Laplace transform of f(t) is defined by



£{f(t)} = ∫ −∞
e − st f (t )dt

for all values of s real or complex for which the integral exists.
We shall be considering only one–sided Laplace transforms.
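As a concrete illustration of this definition — a minimal sketch, not part of the text, assuming SymPy is available — the defining integral can be evaluated directly and compared with SymPy's built-in transform:

# Laplace transform of f(t) = e^{-2t} straight from the definition, compared with
# sympy.laplace_transform.  Both give 1/(s + 2).
from sympy import symbols, exp, integrate, laplace_transform, oo, simplify

t, s = symbols('t s', positive=True)
f = exp(-2*t)

F_definition = integrate(exp(-s*t)*f, (t, 0, oo))     # integral from 0 to oo of e^{-st} f(t) dt
F_builtin = laplace_transform(f, t, s, noconds=True)

print(F_definition, F_builtin)
print(simplify(F_definition - F_builtin) == 0)        # True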

2.2.1 Piecewise Continuous Function


A function f(t) is said to be piecewise continuous on [0, ∞) if, on every interval 0 ≤ a ≤ t ≤ b, there are
at most finitely many points t₁, t₂, …, tₘ in [a, b] at which the function has finite jumps (and is therefore
discontinuous), and f(t) is continuous on each subinterval (t_(k−1), t_k).

2.2.2 Function of Exponential Order


A function f(t) is called of exponential order k if there exist constants k and M > 0 such that

| f(t)| ≤ Mekt ; t ≥ 0
For example, for t > 0,

cosh t = (e^t + e^(−t))/2 < e^t,   t^n < n! e^t ;  n = 1, 2, 3, …  and  |sin t| ≤ e^t   (∵ e^t ≥ 1)

Therefore, these functions are all of exponential order. Thus, most of the functions are of exponential order.

However, for the function f(t) = e^(t²), we have e^(t²) > Me^(kt) for every large M and k for all t > t₀, where
t₀ is a sufficiently large number depending on M and k, and thus this function is not of exponential order.

2.3 Sufficient conditions for existence of Laplace Transform


Theorem 2.1  Let f(t) be a function which is piecewise continuous on every finite interval in
t ≥ 0 and is of exponential order k for t ≥ 0. Then, £{f(t)} exists for s > k.
Proof: Since f(t) is piecewise continuous, e^(−st) f(t) is integrable over every finite interval on the t-axis
for t ≥ 0.

We have |£{f(t)}| = |∫₀^∞ e^(−st) f(t) dt| ≤ ∫₀^∞ e^(−st) |f(t)| dt (2.1)

Since f(t) is of exponential order k, there exists M > 0 such that |f(t)| ≤ Me^(kt); t ≥ 0.

∴ from (2.1)

|£{f(t)}| ≤ ∫₀^∞ M e^(−(s−k)t) dt = [−M e^(−(s−k)t)/(s − k)]₀^∞

= M/(s − k) for s > k   (∵ lim_(t→∞) e^(−(s−k)t) = 0 for s > k)

Thus, £{f(t)} exists for s > k.
Remark 2.1: It is important to note that the above theorem gives only the sufficient conditions
for existence of Laplace transform of a function. The conditions are not necessary. For example,
the function f(t) = 1/√t is not piecewise continuous in [0, ∞) as lim_(t→0⁺) f(t) = ∞; however,

£(t^(−1/2)) = ∫₀^∞ e^(−st) t^(−1/2) dt = ∫₀^∞ e^(−x) (x/s)^(−1/2) (1/s) dx   (taking x = st, s > 0)

= (1/√s) ∫₀^∞ e^(−x) x^(1/2 − 1) dx = Γ(1/2)/√s = √(π/s) ; s > 0

∴ £(t^(−1/2)) exists.

2.4 Properties of Laplace Transforms


(i) If Laplace transform of a function exists, then it is unique.
(ii) Linearity Property of Laplace transforms

Theorem 2.2:  (Laplace transform operator is a linear operator) Let f(t) and g(t) be any two
functions whose Laplace transforms exist for s > k and a, b be any constants, then
£{a f(t) + bg(t)} = a£{f(t)} + b£{g(t)}
Proof: We have

£ {af (t ) + bg (t )} = ∫ e − st {af (t ) + bg (t )} dt
0

∞ ∞
= a∫ e − st f (t )dt + b∫ e − st g (t )dt
0 0

= a £{ f (t )} + b £{g (t )}  ; s > k

(iii) First Shifting Property or First Translation Property

Theorem 2.3  If £{f(t)} = F(s); s > k, then


£{eatf(t)} = F(s–a); s – a > k.
Proof: Since F(s) = £{f(t)}, therefore we have

F ( s) = ∫ e − st f (t )dt ; s > k
0  
∞ ∞
Therefore, F ( s − a) = ∫ e − ( s − a ) t f (t ) dt = ∫ e − st {e at f (t )} dt ; s − a > k
0 0 
= £{e f(t)} ; s – a > k
at

Thus, £{eatf(t)} = F(s – a) ; s – a > k.


(iv) Change of Scale Property
1  s
Theorem 2.4 If £{f(t)} = F(s), then £{f(at)} = F  ;a>0
a  a

Proof: We have F(s) = £{f(t)} = ∫ 0
e − st f (t )dt

Therefore, £{f(at)} = ∫ e − st f ( at )dt ; a > 0
0  

∞ 1

sx
 x
=∫ e f ( x ) dx a
 taking t = 
0 a     a
s
1 ∞ − t 1  s
= ∫ e a f (t )dt = F  
a 0 a  a

2.5  Laplace Transform of Elementary Functions


1
(i) £(1) = ;s>0
s

∞  −e − st  1
We have £(1) = ∫ e − st dt =   = ;s>0
0
 s 0 s
1
(ii)  £(eat) =;s>a
s−a
1
Since £(1) = ; s > 0, therefore by using first shifting property
s
1
  £(eat) =
; s – a > 0, i.e., s > a
s−a
a
(iii)  £(sinh at) = 2 ; s > |a|
s − a2
 e at − e − at 
We have £(sinh at)   = £  
 2 
1 1
= £(eat) – £(e − at )     (by linearity property)
2 2
1 1 1 1
= − ⋅ ; s > a and s > –a
2 ( s − a) 2 ( s + a)
1  1 1 
= − ; s> a
2  ( s − a) ( s + a) 

a
= 2 ; s> a
( s − a2 )

s
(iv)  £(cosh at) = ; s > |a|
s − a2
2

1  1 1   e at + e − at 
As above, we have £(cosh at) = +  ∵ cosh at = 
2  ( s − a) ( s + a) 
 2
s
= ; s > |a|
(s 2
− a2 )

a s
(v) £(sin at) = , £(cos at) = 2 ;s>0
s +a 22
s + a2
1
We have £(eiat) = ;s>0
s − ia
s + ia
= 2

(
s + a2

)
But e = cos at + isin at. Therefore
iat

s + ia
£(cos at) + i £(sin at) = 2 ; s > 0
s + a2 ( )
Equate real and imaginary parts
s a
£(cos at) = 2 and £(sin at) = 2 ; s > 0
s +a 2
( ) s + a2 ( )
(vi)  For negative a, f(t) = eat cos wt and f(t) = eat sin wt are called damped vibrations
s−a
£(eat cos wt) = ; s > a
( s − a )2 + w 2
w
£(eat sin wt) = ; s > a
( s − a )2 + w 2
s w
We have £(cos wt) = ; and £(sin wt) = 2 ;s>0
s +w 2 2
s + w2 ( )
Therefore, by first shifting property
s−a w
£(eat cos wt) = and £(eat sin wt) = ; s > a
(s − a) + w
2 2
( s − a )2 + w 2
Γ ( n + 1)
(vii) £(tn) = ; s > 0; n > –1
s n +1
n!
= n +1 ; s > 0; if n is a non-negative integer
s
n
∞ ∞  x 1
We have £(tn) = ∫ e − st t n dt = ∫ e − x   dx ; (taking st = x; s > 0)
0 0  s s
Γ ( n + 1)

1
∫e
−x
= n +1
x n +1−1dx = ; s > 0; n + 1 > 0
s o s n +1
For n to be non-negative integer Γ ( n + 1) = n!
n!
Therefore, £(t n) = n +1 ; s > 0; n is a non-negative integer
s
 1
Γ 
 2 π   1 
As a particular case, £(t–½) = =    ∵ Γ   = π 
s s   2  

Laplace transforms of some elementary functions obtained in 2.5 are given here in a table.
     f(t)                              £{f(t)}
 1   1                                 1/s,  s > 0
 2   e^(at)                            1/(s − a),  s > a
 3   sinh at                           a/(s² − a²),  s > |a|
 4   cosh at                           s/(s² − a²),  s > |a|
 5   sin at                            a/(s² + a²),  s > 0
 6   cos at                            s/(s² + a²),  s > 0
 7   e^(at) cos wt                     (s − a)/((s − a)² + w²),  s > a
 8   e^(at) sin wt                     w/((s − a)² + w²),  s > a
 9   t^n (n > −1)                      Γ(n + 1)/s^(n+1),  s > 0
10   t^n (n a non-negative integer)    n!/s^(n+1),  s > 0
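The rows of this table can be reproduced mechanically. The following hedged sketch (not from the book, assuming SymPy is available) spot-checks a few of them; t, s and a are taken positive so that conditions such as s > a hold:

# Spot-check some table entries with sympy.laplace_transform.
from sympy import symbols, sin, cos, sinh, laplace_transform, simplify, gamma, Rational

t, s, a = symbols('t s a', positive=True)

rows = [
    (sin(a*t),          a/(s**2 + a**2)),                             # row 5
    (cos(a*t),          s/(s**2 + a**2)),                             # row 6
    (sinh(a*t),         a/(s**2 - a**2)),                             # row 3
    (t**3,              6/s**4),                                      # row 10 with n = 3
    (t**Rational(1, 2), gamma(Rational(3, 2))/s**Rational(3, 2)),     # row 9 with n = 1/2
]
for f, expected in rows:
    F = laplace_transform(f, t, s, noconds=True)
    print(f, '->', F, '| matches table:', simplify(F - expected) == 0)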

2.6  Laplace Transforms of Derivatives and Integrals


Theorem 2.5  (Laplace transform of derivative)
Suppose f (t) is continuous for all t ≥ 0, f(t) is of exponential order k and f(t) has piecewise con-
tinuous derivative in every finite interval in t ≥ 0. Then, £{ f  ′(t)} exists when s > k and
£{ f  ′(t)} = s£{ f (t)} – f (0).
Proof: We first take the case when f   ′(t) is continuous for all t ≥ 0. Then
∞ ∞
£ ( f ′ ( t ) ) = ∫ e − st f ′(t ) dt = e − st f (t )

− ∫ − se − st f (t ) dt (2.2)
0
0 0

Now, f (t) is of exponential order k, so there exists M > 0 such that |  f (t)| ≤ Mekt

\ e − st f (t ) < Me − ( s − k ) t

\ lim e–st f(t) = 0  if s > k


t →∞

Therefore, from (2.2)

£{ f  ′(t)} = – f(0) + s£{ f(t)}

Hence, £{f ′(t)} exists and £{ f ′(t)} = s£{ f (t)} – f (0).


If f ′(t) is piecewise continuous, then the proof is quite similar by breaking up the range of inte-

∫e
− st
gration in f ′(t) dt into parts such that f ′(t) is continuous in each part.
o

Theorem 2.6  (Laplace transform of nth order derivative) (n∈N)


Suppose f (t), f ′(t),…………..........…, f (n-1) (t) are continuous for t ≥ 0, each of exponential order
k and f (n) (t) is piecewise continuous in every finite interval in t ≥ 0. Then, £{f (n) (t)} exists when
s > k and £{f (n) (t)} = sn £{f (t)} - sn-1 f(0) – sn-2 f ′(0) – sn-3 f ′′(0) – … – f (n-1) (0).
Proof: The theorem will be proved by the principle of mathematical induction on n. For n = 1, the
result is proved in Theorem 2.5. Let the theorem be true for n = m. Then

£{f (m) (t)} = sm £{f (t)}–sm-1f (0) – sm-2 f ′(0) – …. – sf (m-2) (0) – f (m-1) (0)(2.3)

The application of Theorem 2.5 gives

£{f (m+1) (t)} = s£{f (m) (t)} – f (m) (0)

= sm+1 £{f (t)} – sm f (0) – sm-1f ′(0) – … – sf (m-1) (0) – f (m) (0) (from (2.3))
Hence, the theorem is true for n = m + 1.
\ By the principle of mathematical induction, the theorem is true for all n∈N.
Theorem 2.7  (Laplace transform of integral function)
Let £{f(t)} = F(s). If f(t) is of exponential order k and piecewise continuous at every finite interval
in t ≥ 0, then
t  1
£  ∫ f ( x )dx  = F(s) (s > 0, s > k)
0  s
t

Proof: Let   g(t) = ∫ f ( x)dx (2.4)


0

Since f(t) is of exponential order k, there exists M > 0 such that |  f(t)| ≤ Mekt(2.5)
We can assume k > 0 because if (2.5) is satisfied for some negative k, it is also satisfied for
positive k.
­

t t t

Now for t > 0, |g(t)| = ∫ f ( x )dx ≤ ∫ f ( x ) dx ≤ M ∫ e dx 


kx
(from (2.5))
0 0 0

M kt M kt
= (e − 1) < e , (k > 0)
k k
\ g(t) is also of exponential order k.
Since f (t) is piecewise continuous at every finite interval in t ≥ 0, therefore g(t) is continuous for
all t ≥ 0 and g′(t) = f (t) except for points at which f (t) is discontinuous. Hence, g′(t) is piecewise
continuous in every finite interval in t ≥ 0.

Now, £{f(t)} = £{g′ (t)} = s£{g(t)} - g(0) ; s > k  (by Theorem (2.5))

But g(0) = 0 from (2.4).



1
\ £{f(t)} = sL{g(t)} or £{g(t)} = £{f(t)}; s > 0, s > k
s
t  F ( s)
\ £  ∫ f ( x )dx = ; s > 0, s > k(∵ £{f(t)} = F(s))
0  s

t  ∞ t 
Other Method £  ∫ f ( x )dx = ∫ e − st  ∫ f ( x )dx dt by definition of Laplace transform
0  0 0 

Figure 2.1  Region of integration 0 ≤ x ≤ t, 0 ≤ t < ∞ (order of integration interchanged)

∞ ∞

=∫ ∫e
− st
f(x) dt dx   (by changing order of integration)
0 x

∞ ∞ ∞
 −e − st  1 − sx
= ∫
 f(x) dx = ∫ e f(x) dx, s > 0
0
s x s0
1
 = F (s); s > 0, s > k

s

2.7 Differentiation and Integration of Laplace Transform

Theorem 2.8  (Differentiation of Laplace Transform)


Let f(t) be piecewise continuous on every finite interval in t ≥ 0 and be of exponential order. If
£{f(t)} = F(s), then
d
F ( s) = –£{tf(t)}(2.6)
ds
dn
F ( s) = (–1)n £{tn f(t)}(2.7)
ds n

Proof:  We have F(s) = ∫ e − st f (t) dt


0

\ By Leibnitz’s rule for differentiating under the integral sign



dF ∂ 
= ∫  e − st f (t )  dt
ds 0  ∂s  
∞ ∞
= ∫ −te − st f(t) dt = – ∫ e − st{t f (t)} dt
0 0

= –£{t f (t)}
Now we prove (2.7) for n > 1 by the principle of mathematical induction.
From (2.6), the result (2.7) is true for n = 1.
Let (2.7) be true for n = m. Then
dm
F ( s) = (–1)m £{tm f(t)}
ds m

= (–1)m ∫e
− st m
t f(t) dt
0

d m+1 ∂ 
\ F ( s) = (–1)m ∫  ∂s e
− st m
t f (t ) dt
ds m+1 0  

= (–1)m+1 ∫e f (t ) dt = (–1)m+1 £{tm+1 f(t)}
− st m +1
t
0
\ (2.7) is true for n = m + 1.
Hence, the result follows by mathematical induction.
Remark 2.2: From the above theorem, we observe that if £{ f (t)} = F(s), then
dn
£{tn f (t)} = (–1)n F ( s).
ds n
We can use it as a formula to calculate Laplace transform of a function f(t) when it is multiplied
by some positive integral power of t.
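A short illustrative sketch of this formula, assuming SymPy is available; the choices f(t) = sin 3t and n = 2 are ours:

# L{t^2 sin 3t} computed two ways: directly, and as (-1)^2 d^2 F/ds^2 with F(s) = L{sin 3t}.
from sympy import symbols, sin, diff, laplace_transform, simplify

t, s = symbols('t s', positive=True)
F = laplace_transform(sin(3*t), t, s, noconds=True)          # 3/(s**2 + 9)

direct = laplace_transform(t**2*sin(3*t), t, s, noconds=True)
via_rule = (-1)**2 * diff(F, s, 2)

print(simplify(direct - via_rule) == 0)                      # True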
Theorem 2.9  (Integration of Laplace Transform)
 f (t ) 
Let f (t) be piecewise continuous on [0, ∞) and be of exponential order k and lim+  exists, then
then t →0  t 
 f (t )  ∞
£  = ∫s F ( x ) dx ; s > k  where £{ f (t)} = F(s)

 t 

Proof: We have     F ( s) = ∫ e − st f (t ) dt ; s > k
0
Integrate from s to ∞
\ ∞
F ( x ) dx = ∫
∞ ∞
e − xt f (t ) dt dx

∫s s ∫
0 

∞ ∞

=∫ ∫ e − xt f (t ) dx dt  (by changing order of integration)
0 s

f (t )

∞ e 
− xt

=∫   f (t ) dt = ∫0 e (2.8)
− st
dt 
0
 −t  s t

f (t )
Now, under the given conditions satisfies conditions of existence of its Laplace transform.
t
 f ( t ) 
Thus, right-hand side of (2.8) is £  
 t 
 f (t )  ∞
\ £   = ∫s F ( x ) dx
 t  
Now, we solve some examples.

Example 2.1: Find the Laplace transform of the function f(t) = t n ; n ≥ 1 and n is odd integer.
Γ ( n + 1)
{ }
Solution: We have £ t n =
s n +1
if n > -1

n 
Γ  + 1
2 
{ }
Therefore, £ t n / 2 = n
+1

(1)
s 2

 n  n  n   n   1  1
But Γ  + 1 =  − 1  − 2 ....   Γ  
 2  2  2   2   2  2

n ( n − 2) ( n − 4 ) ......1   1 
= π ∵ Γ  2  = π 
2(n +1) / 2 

= (n +1) / 2
(n + 1)! π =
(n + 1)! π
2  2.4.6.... ( n + 1)  n + 1
2 n +1  !
 2 


\ From (1)
( n + 1)! π
{ }
£ t n/ 2 =
 n +1
2n +1   ! sn+ 2
 2  
Example 2.2: Find the Laplace transform of the functions
t 3
(i)  7e2t + 5 cosht +7t3 + 5 sin3t + 2   (ii) sinh sin t    (iii) 
sinat sinbt
3 2 2
 1
(iv) sinh32t   (v)  t2e–2t   (vi)  t − 
 t

Solution:
(i) Let f (t) = 7e2t + 5 cosht +7t3 + 5 sin3t + 2

\ £{f (t)} = 7£ (e2t) + 5£ (cosht) + 7£ (t3) + 5£ (sin 3t) + 2£ (1)

7 5s 7 (3!) 5 (3) 2
= + 2 + 4 + 2 +

s − 2 s −1 s s +3 2
s
7 5s 42 15 2
= + 2 + 4 + 2 +

s − 2 s −1 s s +9 s 
 2t −t

 t 3   e −e 2 3 
(ii)  £  sinh sin t = £ sin t
 2 2  2 2 
 
 
1  2t 3  1  −2t 3 
= £  e sin t  − £  e sin t
2  
2  2   2 

3 3
1 2 1 2
= 2
− 2
(by first shifting property)
2 1  3 
2 2 1  3 
2

 s −  +    s +  +  
2  2  2  2 

 1 1 
= 3 − 
 ( 2 s − 1) + 3 ( 2 s + 1) + 3 
2 2


1
(iii) Let f (t ) = sinat sinbt = cos ( a − b)t − cos ( a + b ) t 
2
1 1
\ £ { f (t )} = £ {cos ( a − b)t} − £ {cos ( a + b)t}
2 2 
1 s 1 s
= ⋅ − ⋅
2 s 2 + ( a − b) 2 2 s 2 + ( a + b) 2


=
s 
. 2
(a + b) − (a − b) 2 2


{ }{
2  s + ( a − b) 2 s 2 + ( a + b) 2
 } 
2abs 
=
{s 2
+ ( a − b) 2
} {s 2
+ ( a + b) 2
}
(iv)  We have sinh 6t = 3 sinh 2t + 4 sinh3 2t
1 3
\ sinh 3 2t = sinh 6t − sinh 2t 
4 4

1 3
\ £(sinh3 2t) = £(sinh 6t) − £(sinh 2t)
4 4
1 6 2 
=  2 − 3⋅ 2
4 s −6 2
s − 22 
6 32 48
= ⋅ 2 = 2

(
4 s − 4 s − 362
)( ) (
s − 4 s 2 − 36 )( )
2!
(v)  We have £ (t2) =
s3
2! 2
\ By first shifting property, £(t 2e–2t) = =
( s + 2) 3
( s + 2)3
3
 1
(vi)  We have  t −  = t 3/ 2 − 3t 1/ 2 + 3t −1/ 2 − t −3/ 2
 t
Since Laplace transform of tn exists only when n > –1 therefore £ (t -3/2) does not exist. Hence,
3
 1
£  t −  does not exist. However, if we use
 t
Γ ( n + 1)
£(tn) = , s > 0 as a formula with the recurrence relation Γ ( n + 1) = nΓ ( n) for negative
s n +1
non-integral values n, then
3
 1
£  t −  = £ (t3/2) –3 £ (t1/2) + 3 £ (t–1/2) –£ (t–3/2)
 t
 5  3  1  1
Γ   3Γ   3Γ   Γ  − 
 2  2  2  2
= 5 / 2 − 3/ 2 + 1/ 2 − −1/ 2
s s s s 
3 1 1 1 1 1 2 π  ∵ Γ 12 = −21 Γ  −21 
 = . π . 5 / 2 − 3. π . 3/ 2 + 3 π . 1/ 2 + −1/ 2  ∴ Γ  −1 = − 2 π 
2 2 s 2 s s s   2  
π
= 8s3 + 12s 2 − 6 s + 3
4 s5 / 2  

Example 2.3: Find the Laplace transforms of


 (i)  te–t sin 3t
(ii)  tn et sin 4t (n ∈ N) and deduce the Laplace transforms of t2 et sin 4t and t3 et sin 4t.

Solution:
1
(i)  We have £ (t) =
s2
1
\ £{te–(1–3i )t} = ; s > –1   (by first shifting property)
( s + 1 − 3i ) 2

( s + 1 + 3i ) 2
\ £{te–t (cos3t + isin 3t)} = 
{ }
2
( s + 1) + 9 2

\ £ (te–t cos3t) + i£ (te–t sin 3t) =


( s + 1)2 − 9 + 6 ( s + 1) i
{(s + 1)2 + 9}
2

Equate imaginary parts 


6 ( s + 1)
£ (te–t sin 3t) = ; s > –1
{(s + 1) + 9}2 2

3
Other Method We have £ (sin 3t) = ;s>0
s 2 + 32
−d  3  6s
\ £ (tsin 3t) =  = 2 ; s > 0
ds  s 2 + 32  ( )
2
s +9
\ By first shifting property
6 ( s + 1)
£ (e–t t sin3t) = ; s > –1
{(s + 1) + 9}2 2

(ii) For n∈N, we have


n!
( )
£ tn =
s n +1
; s > 0

n!
\ £ {tn e (1+4i) t} = ; s > 1
( s − 1 − 4i )n+1
n !( s − 1 + 4i )
n +1

\ £ {(tn et (cos 4t + isin 4t)} = n +1


; s > 1
( s − 1)2 + 16 
 
\ £ (tn et cos 4t) + i£ (tn et sin 4t)

n! ( s − 1)n +1 + n +1C1 ( s − 1)n ( 4i ) + n +1C2 ( s − 1)n −1 ( 4i )2 +  + n +1Cn +1 ( 4i )n +1 


=
( s − 1)2 + 16  n +1  
  
Equate imaginary parts
n!
 n +1C1 ( s − 1) ( 4 ) − n +1C3 ( s − 1) 43 + n +1C5 ( s − 1) 45 
n n−2 n−4
£ (tn et sin 4t) =
( s − 1) + 16 
2 n +1  
 
n!  4. n +1C1 ( s − 1)n − 43. n +1C3 ( s − 1)n − 2 + 45. n +1C5 ( s − 1)n − 4 
Hence £(tn et sin 4t) =
( s − 1)2 + 16 
n +1  
 
The last term within the brackets will be 4 n +1 i n n +1Cn +1 if n is even and 4 n i n −1 n +1Cn if n is
odd; s > 1.

Take n = 2 and 3 respectively


8 3 ( s − 1) − 16 
2
2
 4 ⋅ 3C1 ( s − 1)2 − 43 ⋅ 3C3  =  
£ (t e sin 4t) =
2 t
3 
( s − 1) + 16 
2  ( s − 1) + 16 
2 3

    
6 96 ( s − 1) ( s − 1) 2 − 16 
 4 ⋅ C1 ( s − 1) − 4 ⋅ C3 ( s − 1) =
3
and  £ (t e sin 4t) =
3 t 4 3 4

( s − 1)2 + 16   
4 4
( s − 1) 2 + 16 
 
4
Other Method We have £ (sin 4t) = ;s>0
(s 2
+ 42 )
dn  4 
\ £ (tn sin 4t) = ( −1)
n
 
ds n  s 2 + 16  
dn  4 
= ( −1)
n
n  
ds  ( s + 4i ) ( s − 4i ) 

1 dn  1 1 
= ( −1) ⋅
n
n 
−    (by suppression method)
2i ds  ( s − 4i ) ( s + 4i ) 

dn ( −1) n! an n

But we know that ( ax + b ) −1


=
dx n (ax + b)n+1
1  ( −1) n ! ( −1)n n! 
n

£ (tn sin 4t) = ( −1) ⋅


n
\  −
2i  ( s − 4i )n +1 ( s + 4i )n +1 
 


n ! ( s + 4i )
= 
n +1


( s − 4i )n+1 
(
2i  s 2 + 16 n +1 ) 
( )
n +1

 s 2 + 16 

=
n!
(
2i s + 16
2 n +1 
)
{
 s n +1 + n +1C s n ( 4i ) +
1 C2 s n −1 ( 4i ) +  +
n +1 2
Cn +1 ( 4i )
n +1 n +1
}

− {s − C1 s n ( 4i ) + C2 s n −1 ( 4i )  + Cn +1 ( −1) (4i )n+1 }
n +1 n +1 n +1 2 n +1 n +1


n!
=  4. C1 s − 4 . C3 s
n +1 n 3 n +1 n−2 5 n +1
+ 4 . C5 s n−4
 ; s > 0
(s )
n +1
2
+ 16

\ By first shifting property
n!
 4. n +1C1 ( s − 1)3 − 43. n +1C3 ( s − 1)n − 2 + 45. n +1C5 ( s − 1)n − 4  ;
£ (et tn sin 4t) =
( s − 1) + 16   
2 n +1

s > 1.  
Last term is as above in the first method.
Now, the remaining results can be deduced as above in the first method.

 sin at   cos at  exist?


Example 2.4: Evaluate £   , a > 0. Does £ 
 t   t 
a
Solution: £ (sin at) = ;s>0
s + a2
2

∞ ∞
 sin at  a 1 x
\ £  =
 t  s x +a
∫ 2
(
2
dx = a ⋅  tan −1 
a ) a s

π s s
= − tan −1 = cot −1 ; s > 0
2 a a
s
and £ (cos at) = 2 ; s > 0
s + a2

∞ ∞
 cos at   cos at  1 
If L 
 t   exists, then L 
 t 
= ∫
x
x +a
2 2
dx =  log x 2 + a 2 
2 s
( )
s

1
But  lim log (x2 + a2) = ∞
x→∞ 2

 cos at 
\  £   does not exist.
 t 

Example 2.5: Find the Laplace transform of the following functions


e − t sin t
(i) 
t
e at − cos bt
(ii) 
t
1
Solution: (i) We have £ (sint) = 2 ;s>0
s +1
\ By first shifting property
1
( )
£ e − t sin t = ; s > –1
( s + 1)2 + 1
 e − t sin t  ∞ 1 ∞
Therefore, £   =∫ dx =  tan −1 ( x + 1) 
 s ( x + 1) + 1
2
 t
s

π
= − tan −1 ( s + 1) = cot −1 ( s + 1) ; s > –1
2
1 s
(ii)  £ (eat – cos bt) = £ (eat) – £ (cos bt) = − ; s > a, s > 0
(s − a) (s 2
+ b2 )
 e at − cos bt  ∞  1 x 
\ £  = ∫ − 2  dx
 t  s  x − a x + b2 



 


1
2
(
= log ( x – a ) – log x 2 + b 2 
s
)
 
  a 
 ( x − a ) 

1 −  s−a
 x 
= log  = lim log   – log 2
 x + b  s x →∞
2 2

2
 b  s + b2 
 1+   
  x 

s−a s2 + b2
= 0 – log = log ; s > a, s > 0
s2 + b2 s−a
Example 2.6: Show that
 t cos at − cos bt  1  s2 + b2 
(i)  £  ∫ dt  = log  2
0 t  2s  s + a 2 

 t e t sin t  1
(ii)  £  ∫ dt  = cot –1 (s – 1)
0 t  s
s s
Solution: (i) £ (cos at – cos bt) = −
( s2 + a2 ) ( s2 + b2 )

 cos at − cos bt   x x 
\ £  = ∫ 2 − 2  dx
 s x +a x + b2 
2
 t

1 
2
1
=  log x 2 + a 2 – log x 2 + b 2 
2 s
( ) ( )

1  x 2 + a2   1  s2 + b2 
=  log  2 2 
= log  2
2  x + b  s 2  s + a 2 

 t cos at − cos bt  1  cos at − cos bt  1  s2 + b2 


\ £ ∫ dt  = £   = log  2
0 t  s  t  2s  s + a 2 
 (by Laplace transform of integral function)
1
(ii) £(sint) = 2
s +1

 sin t  1 π
\ £  =∫ 2 dx = {tan −1 x}∞s = − tan −1 s = cot −1 s
 t  s x +1 2 
 sin t 
\ £  et = cot –1 (s – 1)   (by first shifting property)
 t 
 t e t sin t  1  e t sin t  1 −1
\ £  ∫ dt  = £   = cot ( s − 1)
0 t  s  t  s 

Example 2.7: Find Laplace transform of


t
cos 2t − cos 3t sin u
(i) 
t ∫0 u du
(ii) 

Solution: (i) Take a = 2 and b = 3 in Example 2.6 (i)


 cos 2t − cos 3t  1  s2 + 9 
\ £  = log  2 
 t  2 s +4

(ii)  By Example 2.6 (ii)


 sin t 
£  = cot  s
–1

 t 
\ By Laplace transform of integral function
 t sin u  1  sin t  1
£  ∫ du  = £   = cot  s
–1

0 u  s  t  s

Example 2.8: Find the Laplace transforms of the following functions

1 − e −2 x
t /2 t
(i) ∫0
x
dx   (ii) t ∫ e − u sin 2u du
0

Solution: (i) Put 2x = u so that 2dx = du

1 − e−u
t
\ The given integral becomes ∫0 u du
1 1
Now, £ (1–e –t ) = £ (1) – £(e – t ) = − ; s > 0
s s +1
 1 − e −t  ∞ 1 1 
 dx = [ log x − log ( x + 1) ]s

\ £  = ∫ −
 t  s x x + 1  
  x  ∞   s + 1
= log    = log  ; s>0
 s 

  x + 1 s 
 t  1 − e −u   1  1 − e −t  1  s + 1
\ £ ∫  du  = £  = log 

0  u   s  t  s
  s 

 t / 2 1 − e −2 x  1  s +1
Hence £∫ dx  = log  ; s > 0
0 x  s  s 
2
(ii) £ (sin 2t) = 2 ;s>0
s + 22
2
\ £ (e–t sin 2t) = ; s > –1   (by first shifting property)
( s + 1)2 + 22

t  1
\ £  ∫ e − u sin 2u du  = £(e–t sin 2t); s > 0
0  s
1 2 2
= ⋅ = ; s > 0
(
s ( s + 1)2 + 22  s s 2 + 2 s + 5
 
)
 t  −d  2 (
 2 3s + 4 s + 5
2
)
\ £ t ∫ e − u sin 2u du  =   = ; s > 0
ds  s3 + 2 s 2 + 5s  ( )
2
 0  s 3 + 2 s 2 + 5s

1 − cos t
Example 2.9: Find the Laplace transforms of
t2
1 − cos t 1   t 2 t 4 t 6 t 8  
Solution: We have = 2 1 − 1 − + − +  
t2 t   2 ! 4 ! 6 ! 8!  
1 t2 t4 t6
= − + − +
2 ! 4 ! 6 ! 8! 
 1 − cos t  1 2! 4! 6!
\ £ = − + − +
 t 2  2 ! s 4 ! s3 6 ! s5 8! s 7

1 1 1 1 1 1 1 1
= . − . 3+ . 5− . +
1.2 s 3.4 s 5.6 s 7.8 s 7
 1  1  1 1  1  1 1  1  1 1 1
=  1 −  . −  −  . 3 +  −  5 −  −  7 +
 2 s  3 4  s  5 6  s  7 8 s

 1 1  1  3 1  1  5 1  1  7 
=  −   +   −   + 
 s 3  s  5  s 7  s 
s  1 1 1  
2 3 4
1 1  1 1
−  2 −  2  +  2  −  2  + 
2  s 2s  3 s  4s  
1 s  1 s  1
= tan −1 − log 1 + 2  = cot −1 s − log 1 + 2  ; s > 0
s 2  s  2  s 
Second Method
1 s
£ (1 − cos t ) = − 2 ; s > 0
s s +1
∞ ∞
 1 − cos t  1 x   1 
\  = ∫ x − 2  dx = log x − 2 log( x + 1) 
2
£
 t  s x + 1   s

 x   s2 + 1 
= log  = log  
 x 2 + 1  s  s 


 1 − cos t 
∞  x2 +1  1

 x2 +1 
\ £
 t
2  =
 s
∫  x 
log   dx =
2 ∫s
log  2  dx 
 x 
 

=
1
2 ∫s
{ (
log x 2 + 1 − 2 log x dx ) }


1
{ ( ) 1  2x 2
}

= log x 2 + 1 − 2 log x .x  − ∫  2 −  x dx  (integrating by parts)
2   s 2 s x +1 x


x  x 2 + 1  ∞
1
=  log  2   + ∫ 2 dx
2  x  s s x + 1

 1 2 y / (1 + y 2 )
lim
x →∞
x
2
log 1 + 2  = lim
 x  y→0+ 2 y
1
log 1 + y 2 = lim
y→0+
( 2
)
= 0   ( L’ Hospital rule)

 1 − cos t  s  s 2 + 1 −1 ∞
\ £ 2  = − log  s 2  + (tan x ) s
 t  2

−s  
s +1 π
2
= log  2  + − tan −1 s
2  s  2

s  1 
= cot −1 s − log 1 + 2  ; s > 0
2  s 

 cos t 
( )
Example 2.10: Find £ sin t and hence show that £ 
 
t 
=
π − 41s
s
e .

3 5 7
t2 t2 t2
Solution: We have sin t = t− + − +
3! 5! 7 !
1
n+
( −1) t 2
∞ n

=∑
n = 0 ( 2n + 1)!
  3
Γn+ 

( −1)
 n + 12  ∞ n
( −1)
n
 2
\ (
£ sin t = ∑ ) Lt  = ∑
n = 0 ( 2n + 1)!   n = 0 ( 2n + 1)! n+
3
s 2
n  1  1  1  1
∞ (
−1)  n +   n −  ...   Γ  
 2  2  2  2
= ∑ 3
n+
n= 0
(2n + 1)! s 2

( −1)n (2n + 1) (2n − 1) ...1 π
= ∑
n= 0 2n +1 ( 2n + 1)! sn+
3
2




( −1)n π
= ∑ 2 (2.4.6....2n) s
n= 0
n +1 n+ 32

1 π ∞
( −1)n 
1
n
π − 14 s
=
2s s
∑   = 3 e
n! 4 s
n=0 2s 2

\
d 
£  sin t  = s £ sin t − sin t
 dt 
( ) ( ) t =0
  (by Laplace transform of derivative)

 1 cos t  π
\ £  = 1 2 e −1 4 s − 0
2 t  2s

 cos t  π −1 4 s
\ £  = e
 t  s
 
Example 2.11: Find the Laplace transforms of t5et sin 4t and t5et cos 4t.
5! 120
Solution: We have £ (t5) = 6 = 6 ; s > 0
s s
120
\ {
£ t 5 e (1 + 4 i ) t = }
( s − 1 − 4i ) 6
; s > 1   (by first shifting property)

120( s − 1 + 4i )6
\ {
£ t 5 e t ( cos 4t + i sin 4t ) = } 6
( s − 1) 2 + 16 

{( ) (
\  £ t 5 e t cos 4t + i t 5 e t sin 4t )}
120
= 6
( s − 1)6 + 6C1 ( s − 1)5 ( 4i ) + 6C2 ( s − 1) 4 ( 4i ) 2 + 6C3 ( s − 1)3 ( 4i )3
( s − 1) + 16 
2

+ 6C4 ( s − 1) 2 ( 4i ) 4 + 6C5 ( s − 1)( 4i )5 + ( 4i )6 



Equate real and imaginary parts
120
£ (t5 et cos 4t) = ( s − 1)6 − 240( s − 1) 4 + 3840( s − 1) 2 − 4096 
6 
( s − 1) 2 + 16 

960( s − 1)
and £(t5 et sin 4t) = 6
3( s − 1) 4 − 160( s − 1) 2 + 768 ; s > 1
( s − 1) + 16 
2

Example 2.12: Find the Laplace transforms of erf (z) and erf ( z ).


2 z

2
Solution: erf (z) is defined as e − u du and is called error function and
π 0

erfc(z) is defined as 1– erf (z) and is called complementary error function.



∞ z
2
Now,  £{erf (z)} = ∫ e − sz ∫e
− u2
du dz
0 π 0

Figure 2.2  Region of integration 0 ≤ u ≤ z, 0 ≤ z < ∞ (order of integration interchanged)

∞ ∞
2
\ £{erf (z)} = ∫ ∫e
− sz − u 2
e dz du   (by changing order of integration)
π 0 u

∞ ∞
− u2  −e 
− sz
2
= ∫
π 0
e 
 s u
 du, s > 0

1
∞ s2  1 2
2 2e 4 ∞ −  u + s
= ∫e ∫
− u2 − su  2 
⋅e du = e du
s π 0 s π 0

1 2
s
2e 4 ∞ 1

2
= e − t dt  Take u + s=t
s π s/2 2 
1
s2
2e 4  − t 2 
∞ s

 ∫ e dt − ∫0 e dt 
2
2 −t
=
s π 0 
1 1
s2 s2
2e 4  π s
 e4  2 s

∫ ∫
2 2
2 −t
=  − e dt  = 1 −
2
e − t dt 
s π  2 0
 s  π 0

1
s2
e4  s
= erf c     (by definition of complementary error function)
s  2
1
s2
e4  s
Hence, £{erf (z)} = erf c   ; s > 0
s  2

( z ) = 2π ∫ e
z
− u2
Now,    erf du
0

£ {erf ( z )} =
2 ∞ z
∫ e − sz ∫ e − u du dz
2
\ 
π 0 0

Figure 2.3  Region of integration 0 ≤ u ≤ √z, i.e., z ≥ u² (order of integration interchanged)

∞ ∞
2
∫∫e
− sz − u 2
= e dz du   (by changing order of integration)
π 0 u2

∞ ∞
2  e − sz  − u2
= ∫ −  e du, s > 0
π 0  s  u2

(taking )
∞ ∞
2 2 1+ s u = t
∫ ∫
2 2
=
e − (1+ s ) u du = e − t dt 
s π 0 s 1+ s π 0

2 π 1
= ⋅ =
s 1+ s π 2 s 1+ s

Hence, { ( z )} = s 11+ s ; s > 0.


£ erf

Example 2.13: Find the Laplace transforms of Bessel functions J0(x) and J1(x). Also, deduce the
Laplace transform of J0(ax).
( −1)  x 
r 2r∞
Solution: We have    J0 (x) = ∑ 2  
   (from definition of J n ( x ) for n = 0)
r = 0 ( r !)  2 

( −1)
r

\ £ { J 0 ( x )} = ∑ £ ( x 2r )
r = 0 ( r !) 2
2 2r



=∑

( −1)r (2r )!
r = 0 ( r !) 2 s
2 2 r 2 r +1


1 
= 1 + ∑

( −1) (1⋅ 3 ⋅ 5 ⋅⋅⋅⋅⋅ ( 2r − 1)) (2 ⋅ 4 ⋅ 6 ⋅⋅⋅⋅⋅ 2r )  1  r  
r

 2 
s  r =1 ( r !)2 22 r s 

  1   3   5   2r − 1 
− − − ... −
1 ∞ 
 2   2   2   
 1  
r
2 
= 1+ ∑
  2 
s  r =1 r! s 
 
 
−1
1 1 2
1
\ £{J0(x)} = 1 + 2  =  (1)
s s  s +1
2

Now, we know that J 0′ (x) = – J1 (x)

\ J1 (x) = – J 0′ (x)

\ £{J1(x)} = – £{ J 0′ (x)}

= –[s £{J0 (x)} – J0 (0)]  (by Laplace transform of derivative)

1
= – s⋅ + 1 [from equation (1) and J0 (0) = 1]
s +1
2

s
\ £{J1 (x)} = 1 −
s2 + 1 
Further, by change of scale property
1  s
£ {J0 (ax)} = F   where F(s) = £ {J0 (x)}
a  a
1 1
\ £{J0(ax)} = .   (from (1))
a  s2
  + 1
a
1
Hence, £ {J0 (ax)} =
s + a2 
2

2.8 Evaluation of real integrals using Laplace transform


Some real integrals can be obtained by finding Laplace transform of appropriate functions and
giving suitable values to s as explained in the following examples.
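The device is easy to imitate symbolically. The sketch below (not from the book, assuming SymPy is available) recovers the classical value ∫₀^∞ (sin t)/t dt = π/2, which is also deduced in Example 2.14(v):

# L{sin t} = 1/(s^2 + 1); integrating the transform from s to infinity gives
# L{(sin t)/t} = cot^{-1} s, and letting s -> 0+ yields the improper integral.
from sympy import symbols, sin, exp, integrate, oo, limit, pi

t, s, x = symbols('t s x', positive=True)

F = integrate(exp(-s*t)*sin(t), (t, 0, oo))          # 1/(s**2 + 1)
G = integrate(F.subs(s, x), (x, s, oo))              # pi/2 - atan(s), i.e. cot^{-1} s
value = limit(G, s, 0, '+')

print(F, G, value)                                   # ..., ..., pi/2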

Example 2.14: Evaluate


∞ ∞ ∞
sin 2 t
(i)  ∫ te −2t cos t dt (ii) 
∫ t e sin t dt   (iii) ∫
3 −t
dt
0 0 0
t2
∞ ∞ ∞
cos at − cos bt sin mt sin t
(iv)  ∫0 t
dt (v)  ∫0 t dt , m > 0 and deduce the value of ∫
0
t
dt

1
Solution:  (i) we have £(t) =
;s>0
s2
1
\ £ te it = ( )
( s − i)2
; s > 0   (by first shifting property)

( s + i)2
⇒ £{t (cost + isin t)} = 2
( s + 1) 2 
s 2 − 1 + 2 is
\ £(t cos t) + i £(t sin t) =
( s 2 + 1) 2 
Equate real parts
s2 − 1
£(t cost) = ; s > 0
( s 2 + 1) 2
∞ s2 − 1
\ ∫0 e t cos t dt = ( s2 + 1)2 ; s > 0
− st

Take s = 2
∞ 3

∫0
te −2t cos t dt =
25 
3! 6
(ii)  £(t3) =
= ;s>0
s4 s4
6
\ £(t3eit) =
( s − i)4 
6( s + i ) 4
⇒ £{t3(cost + i sin t)} = 2
( s + 1) 4 
6  s 4 + 4 s3i − 6 s 2 − 4 si + 1
\ £(t3 cost) + i £(t3sin t) = 
( )
4
s2 + 1

Equate imaginary parts

£(t3sint) =
(
24 s3 − s ) = 24s (s − 1) ; s > 0
2

(s + 1) (s + 1)
2 4 2 4


24 s ( s − 1)
2

\ ∫e
− st 3
t sin t dt = ; s > 0
0
( s 2 + 1) 4

Take s = 1

∫0
t 3e − t sin t = 0

1   ( 2t ) ( 2t ) ( 2t )6  
2 4
sin 2 t 1 − cos 2t
(iii) 2
= 2
= 2
1 − 1 − + − +  
t 2t 2t   2! 4! 6!  

3 5 7
2 2 2
= 1 − t 2 + t 4 − t 6 + 
4! 6! 8! 
 sin 2 t  1 23 2 ! 25 4 ! 27 6 !
\ £  2  = − . 3 + . 5 − . 7 +
 t  s 4! s 6! s 8! s

3 5 7
1 2 1  2 1  2 1  2
= . −   +   −   +
2 s 3.4 s 5.6 s 7.8  s  
3 5 7
 1  2  1 1   2  1 1   2  1 1  2 
 = 1 −  −  −    +  −    −  −    + 
 2 s  3 4   s   5 6   s   7 8  s 
  2 1  2 3 1  2 5 1  2 7 
=  −   +   −   + 
 s 3  s  5  s 7  s 
1 1  2 3 1  2 5 1  2 7 
−  −   +   −   + 
 s 4  s  6  s 8  s 

2 s  4 1  4 
2
1  4 
3
1  4
4

= tan −1 −  2 −  2  +  2  −  2  + 
s 4  s 2 s  3 s  4s   
 sin t 
2
2 s  4
\ £  2  = tan −1 − log  1 + 2  ; s > 0
 t  s 4  s 
∞ sin 2 t 2 s  4
\ ∫0 e − st
t 2
dt = tan −1 − log 1 + 2  ; s > 0
s 4  s 
∞ sin t2
 2 s s2 + 4 
\ lim+ ∫ e − st 2 dt = lim+  tan −1 − log 2 
s→0 0 t s→0  s 4 s 

∞ sin 2
t π 1 (
log s )
2
+ 4 − log s 2

∫0 slim
− st
\ e dt = − lim+
→ 0+ t2 2 4 s→0 1
s 
2s 2s
− 2
π 1
= − lim+ s + 4 s  (L’Hospital rule)
2

2 4 s→0 −1
s2
π 1 −8s π π
= + lim+ 2 = +0=
2 4 s → 0 s +4 2 2
∞ sin t
2
π
\ ∫0 t 2 dt = 2


(iv)  By Example 2.6 (i)


 cos at − cos bt  1  s2 + b2 
£  = log  2 2 
 t  2 s +a  

− st  cos at − cos bt  1  s2 + b2 
\ ∫0 
e
t
 dt =
2
log  s 2 + a 2  ; s > 0

∞ cos at − cos bt 1  s2 + b2 
\ lim+ ∫ e − st dt = lim+ log  2
s→0 0 t s→0 2  s + a 2 


cos at − cos bt 1 b2 b
\ ∫0 t
dt =
2
log
a 2
= log
a


m
(v)  £(sin mt) = ; s > 0, m > 0
s + m2
2

∞ ∞
 sin mt  m  x
\ £  =∫ 2 dx = tan −1  ; m > 0
 t  s x +m
2
 m s
π s s
= − tan −1 = cot −1
2 m m
− st sin mt −1 s

\ ∫0 e t dt = cot m ; s > 0, m > 0
∞ sin mt s
\ lim+ ∫ e − st dt = lim+ cot −1 ; m > 0
s→0 0 t s→0 m
∞ sin mt π
\ ∫0 t dt = 2 ; m > 0
Take m = 1
∞ sin t π

∫ 0 t
dt = 
2

 t  1  1  1
Example 2.15: Given £  2 = =
 π  s3/2
, show that £  .
   πt  s
t
Solution: Let f (t) = 2 . Then
π
2 1 1
  f  ′ (t) = . =
π 2 t πt 
\By Laplace transform of derivative, we have
£ {f  ′ (t)} = s £ { f (t)} – f (0)
 1   t  1 1
\ £  = s £  2  − 0 = s. 3/ 2 − 0 =
 π t   π  s s


 sin t   sin at 
Example 2.16: Given that £   = cot −1 s, find £  , a > 0.
 t   t 
Solution: By change of scale property
1  s
£{ f(at)} = F   , a > 0 where £{f(t)} = F(s)
a  a

\  sin at  1  s
£  = cot −1  
 at  a  a 
 sin at   s
⇒ £ = cot −1   , a > 0
 t   a

Heaviside Function or Unit Step Function


Heaviside function H(t), also called unit step function, U(t) or u0(t) is defined by
0, if t < 0
H (t) = U(t) = u0(t) = 
1, if t ≥ 0 
If the jump discontinuity is at t = a, then we define H (t–a) = U(t – a) also denoted by ua(t) as
0, if t < a
H (t–a) = U(t – a) = ua(t) =  ; a ≥ 0
1, if t ≥ a  

Figure 2.4  Unit step function ua(t): zero for t < a and unity for t ≥ a


We now explain the writing of function with jumps in terms of unit step function and vice versa.
Let a < b. Then, by the definition of unit step function
ua (t ) − ub (t ) = 0, if t < a

1, if a ≤ t < b
0, if t ≥ b


Therefore, if the given function is of the form


 f1 (t ) , if 0 ≤ t < a1

 f 2 (t ) , if a1 ≤ t < a2
f t ,
 () if a2 ≤ t < a3
f (t ) =  3 (2.9)
 
 f (t ) , if an − 2 ≤ t < an −1
 n −1
 f n (t ) , if t ≥ an −1

We can express it in terms of unit step functions as


f ( t ) = f1 ( t ) u0 ( t ) − ua1 ( t )  + f 2 ( t ) ua1 ( t ) − ua2 ( t )  + f 3 (t ) ua2 (t ) − ua3 (t )

+  + f n −1 (t ) uan−2 (t ) − uan−1 (t ) + f n (t ) ua (t )
n −1

i.e., f ( t ) = f1 ( t ) u0 ( t ) +  f 2 ( t ) − f1 ( t )  ua1 ( t ) +  f 3 ( t ) − f 2 ( t )  ua2 ( t )



+  +  f n (t ) − f n −1 (t ) uan−1 (t )  (2.10)

Conversely, if f (t) is given in the form (2.10), then we can write it as
 f1 (t ) , if 0 ≤ t < a1

 f1 (t ) +  f 2 (t ) − f1 (t ) = f 2 (t ) , if a1 ≤ t < a2

f (t ) =  f1 (t ) +  f 2 (t ) − f1 (t ) +  f 3 (t ) − f 2 (t ) = f 3 (t ) , if a2 ≤ t < a3

 
 f (t ) +  f (t ) − f (t ) +  +  f (t ) − f (t ) = f (t ) , if a ≤ t
 1  2 1   n n −1  n n −1

which is (2.9).
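The rewriting (2.9) ⇄ (2.10) is exactly what makes such functions convenient to transform, anticipating the second shifting theorem of the next section. A minimal sketch, not from the text, assuming SymPy is available (the unit step merely starts the defining integral at t = 3); compare Example 2.17(i) below:

# L{t^2 u_3(t)} computed from the definition and from the shifted expansion
# t^2 = (t - 3)^2 + 6(t - 3) + 9.
from sympy import symbols, exp, integrate, oo, simplify

t, s = symbols('t s', positive=True)

from_definition = integrate(exp(-s*t)*t**2, (t, 3, oo))     # step function cuts the integral at t = 3
from_shift_rule = exp(-3*s)*(2/s**3 + 6/s**2 + 9/s)         # e^{-3s} L{(t)^2 + 6t + 9}

print(simplify(from_definition - from_shift_rule) == 0)     # True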

2.9 Laplace Transform of unit step function


e − as
Theorem 2.10  £{ua(t)} = ; a ≥ 0, s > 0
s
0; t < a
Proof: We have ua(t) =  ; a≥0
1; t ≥ a
∞ ∞
e − st ua (t ) dt = ∫ e − st ⋅ 0 dt + ∫ e − st ⋅1 dt
a
\ £{ua(t)} = ∫0 0 a 

∞ e  e− st − as
= ∫ e − st dt =   = ; a ≥ 0, s > 0
a
 − s a s

Theorem 2.11  (second shifting theorem)


If £{f (t)} = F (s); s > k and a ≥ 0 be any real number, then £{f (t – a) ua(t)} = e–as F(s).
 0 ; t<a
Proof: We have by definition of ua(t),  f (t–a) ua(t) = 
 f ( t − a ) ; t ≥ a

\ £{f (t–a) ua(t)} = ∫ e − st f (t − a ) dt
a 

f ( x ) dx  (taking t = a + x)
a + x)
= ∫ e − s(
0

= e − as ∫ e − sx f ( x ) dx = e − as F ( s )
0 

Example 2.17: Find the Laplace transform of the following functions


(i)  t2u3(t)  (ii) sin2t U(t – p)   (iii) e–3t U(t – 2)  (iv) e– t [1–u2(t)]  (v) t 4u2(t)
Solution: 
(i)  f (t) = t2u3(t) = (t – 3 + 3)2 u3(t)

= (t − 3) 2 + 6 (t − 3) + 9 u3 (t )

 2 6 9
\ £{f (t)} = e –3s
 s3 + s 2 + s  ; s > 0
 
(ii) f (t) = sin2t U(t – p) = –sin(2p – 2t) U (t – p) = sin{2(t–p)} U (t – p)
2 2e − π s
\  £{f (t)} = e–ps = 2 ; s > 0
s +4 s +4
2

(iii)  f (t) = e–3t U(t - 2) = e–3(t - 2) – 6 ⋅ U(t – 2) = e – 6e – 3 (t – 2) U(t – 2)
1 e −2( s + 3)
\  £{f (t)} = e – 6 £ {e– 3 (t – 2) U(t – 2)} = e– 6 e– 2s = ; s > −3
s+3 s+3 
(iv)  f (t) = e –t 1 − u2 (t ) = e – t – e – 2 e – (t – 2) u2(t)
1 1 1 − e −2( s +1)
\  £{f (t)} = £ {e– t} –e – 2 £{e – (t – 2) u2(t)} = − e −2 .e −2 s . = ; s > – 1
s +1 s +1 s +1
(v)  f (t) = t 4 u2(t) = (t – 2 + 2)4 u2 (t)

= [(t–2)4 + 8 (t – 2)3 + 24 (t – 2)2 + 32 (t – 2) + 16] u2(t)

 4 ! 8 (3!) 24 ( 2 !) 32 16 
\  £{f (t)} = e −2 s  5 + 4 + + 2 + 
s s s3 s s

8e −2 s  4 6 6 3
  =  2 + + 2 + 3 + 4  ; s > 0
s s s s s

Example 2.18: Find the Laplace transform of functions


0; 0 ≤ t < 2
(i) f (t) =  where K is a constant
K ; t ≥ 2
  2π  2π
cos  t − 3 ; t≥ 3
(ii) f (t) =   
0 2π
; t<
 3
t ; 0 ≤ t ≤ 3
(iii) f (t) = 
0 ; t >3
 π
sin at ; 0<t <
a
(iv) f (t) = 
0 π
; <t
 a
2 T
T t ; 0<t<
2

(v) f (t) = 2 −
2t T
; <t <T
 T 2
0 ; t >T

Solution:

(i)  f (t) = K u2(t)

Ke −2 s  e − as 
\ £{f (t)} = K £{u2(t)} =
s
; s>0 ∵ L {ua (t )} = s 

 2π 
(ii)  f (t) = cos  t −  u 2π (t )
 3 3

−2π
−2π s
s e 3
⋅s
\ £{f (t)} = e 3
£ ( cos t ) = ; s > 0   (by second shifting theorem)
(s 2
+1 )
(iii)  f (t) = t [u0(t) – u3(t)] = t u0(t) – [(t – 3) + 3] u3(t)

\ By second shifting theorem

1  1 3  −3s 1
£{f (t)} =
s 2
s s  s
3
−  2 +  e = 2 1 − e −3s − e −3s
 s
( )


 
(iv)  f (t) = sin at u0 (t ) − uπ (t )  
 a 
 π 
= sin at u0(t) – sin a  − t   uπ (t )
  a  a 
  π
= sin at u0(t) + sin a  t −   uπ (t )
  a a

\ By second shifting theorem
 −π
s

π
a  1 + e a

a − s a 
£{f (t)} = 2 +e a 2 =
s +a 2
s +a 2
s +a
2 2

2t    2t   
(v)  f (t) = u0 (t ) − uT (t )  +  2 −  uT (t ) − uT (t ) 
T  2   T  2 
2t 4 T 2
\ f (t) = u0 (t ) −  t −  uT (t ) + (t − T ) uT (t )
T T  
2 2 T
\ By second shifting theorem
2
2 1 4 − T2 s 1 2 −Ts 1 2  − Ts

 £{f (t)} = . − e . 2+ e . 2 = 2 1 − e 
2
T s2 T s T s Ts
2
− Ts
 Ts4 − Ts
 − Ts
 Ts 
2e 2
 e − e 4
 8e 2
sinh 2  
 4
or £{f(t)} = =
Ts 2 Ts 2

Remark 2.3: All the questions of above type can be solved directly by definition of Laplace
transform. But it is always better to write the given function f (t) in terms of unit step function to
find £{f (t)}.
We solve Example 2.18 (iv) and (v) below directly by definition

(iv)  We have £{f (t)} = ∫e


− st
f (t ) dt
0
π /a ∞
\ £{f (t)} = ∫ 0
e − st sin at dt + ∫
π /a
e − st .0 dt
π
 e − st a
= 2 ( − s sin at − a cos at )
(
 s + a
2
)  0 
 −π
s
a 1 + e a 
ae − sπ / a + a  
= 2 =
(
s +a 2
) s +a
2 2


( )


T
2 T ∞
2 − st  2  − st
∫0 T te dt + T∫  2 − T t  e dt + T∫ 0 ⋅ e dt 
− st
(v)  £{f (t)} =
2
T
T
2   e − st   e − st   2  2   e − st   −2   e − st  
= t   − 1⋅  2   +  2 − t   −  
T   − s   s  0  T   − s   T   s 2  T
2 
− sT
− sT − sT
−2 T 1 2 2 − ST e 2 2
= e 2
 + 2  + 2 + 2 e + − 2e 2

T 2s s Ts Ts s Ts 
2
2 4 − sT
2 − sT 2  − Ts

= − e 2
+ e = 2 1 − e 2 
Ts 2 Ts 2 Ts 2 Ts  

2
− Ts
 Ts4 − Ts

2e  e − e 
2 4 − Ts
8e 2  Ts 
\ £{f (t)} = = sinh 2  
Ts 2 Ts 2
 4


Example 2.19: Find the Laplace transforms of the following functions


2; ; 0<t <π

(i)  f(t) = 0 ; π < t < 2π
sin t ; t > 2π

2 + t 2 ; 0 < t < 2

(ii)  f(t) = 6 ; 2<t <3
2t − 5 ; 3 < t < ∞

(iii)  f(t) = |t – 1|+|t + 1|; t ≥ 0
 0, 0 ≤ t ≤ 3
(iv)  f(t) = 
(t − 3) , t > 3
2

t 2 for 0 < t <1


(v)  f(t) = 
4t for t >1
t 2 ; 0<t<2

(vi)  f(t) = t − 1 ; 2 < t < 3
7 ; t>3

t − 1 ; 1 < t < 2

(vii)  f(t) = 3 − t ; 2 < t < 3
0 ; t > 3, 0 < t < 1


Solution:
   (i)  f (t) = 2[u0(t) – up(t)] + sin t ⋅ u2p(t)
= 2 u0(t) – 2up(t) + sin (t – 2p) u2p(t)

\ £{f (t)} = 2£{u0(t)} – 2 £{up(t)} + £{sin (t – 2p) u2p(t)}


2 2e − π s e −2π s
   =
s

s
+ e −2π s ⋅ 2
1
s +1 s
2
( )
= 1 − e −π s + 2
s +1
; s > 0

 (ii)  f (t) = (2 + t2)[u0(t) – u2(t)] + 6 [u2(t) – u3(t)] + [2(t – 5) u3(t)]


= (2 + t2) u0(t) + (4 – t2) u2(t) + (2t – 11) u3(t)
= (2 + t2) u0(t) + [4 – (t – 2 + 2)2] u2(t) + [2(t – 3 + 3) – 11] u3(t)
= (2 + t2) u0(t) – [(t – 2)2 + 4(t – 2)] u2(t) + [2(t – 3) – 5] u3(t)
2 2  2 4  2 5
\ £{f (t)} =  + 3  − e −2 s  3 + 2  + e −3s  2 −  ; s > 0
s s  s s  s s
(iii)  f(t) = 1 – t + t + 1 = 2; 0 ≤ t ≤ 1
= t – 1 + t + 1 = 2t; t ≥ 1
\ f (t) = 2[u0(t) – u1(t)] + 2 t u1(t)
= 2u0(t) + 2(t – 1) u1(t)
2 2e − s 2  e − s 
\ £{f (t)} = + 2 = 1 + ;s>0
s s s s 
 (iv)  f(t) = (t – 3)2 u3 (t)
2 2e −3s
\ £{f (t)} = e −3s ⋅ 3 = 3 ; s > 0
s s
  (v)  f (t) = t2 [u0(t) – u1(t)] + 4t u1(t)
= t2 u0(t) + (4t – t2) u1(t)
= t2 u0(t) + [4(t – 1) + 4 – (t – 1 + 1)2] u1(t)
= t2 u0(t) + [3 + 2(t – 1) – (t – 1)2] u1(t)
3 2 2  1  2 2
\ £{f (t)} =
2
s3  s s s  s
(
+ e−s  + 2 − 3  = e−s  3 +  + 3 1 − e−s ; s > 0
 s s
)
 (vi)  f(t) = t2[u0(t) – u2(t)] + (t – 1) [u2(t) – u3(t)] + 7u3(t)
= t2 u0(t) + (t – 1 – t2) u2(t) – (t – 8) u3(t)
= t2 u0(t) + [(t – 2) + 1 – (t – 2 + 2)2] u2(t) – [(t – 3) – 5] u3(t)
= t2 u0(t) – [(t – 2)2 + 3 (t – 2) + 3] u2(t) – [(t – 3) – 5] u3(t)
2  2 3 3  1 5
\ £{f (t)} = 3 − e −2 s  3 + 2 +  − e −3s  2 −  ; s > 0
s s s s s s
(vii)  f(t) = (t – 1)[u1(t) – u2(t)] + (3 – t) [u2(t) – u3(t)]
= (t – 1) u1(t) + (4 – 2t) u2(t) – (3 – t) u3(t)

= (t – 1) u1 (t) – 2(t – 2) u2 (t) + (t – 3) u3 (t)


1 1 1
\ £ { f (t )} = e − s ⋅ 2 − 2e −2 s ⋅ 2 + e −3s ⋅ 2
s s s
1 1
= 2 e − s (1 − 2e − s + e −2 s ) = 2 e − s (1 − e − s ) 2
s s
s
4 sinh 2
1 −2 s s / 2 −s/2 2 2
= 2 e (e − e ) =
s s2e2s

Example 2.20: Express the functions defined by the following graphs (i), (ii) and (iii) in terms
of unit step functions and find their Laplace transforms.
Figure 2.5 (i), (ii), (iii)  Graphs of the three functions f(t) of Example 2.20


Solution: (i) From graph (i)
0 ; 0 ≤ t ≤ 1
 
f (t ) = t − 1 ; 1 ≤ t ≤ 2
1 ; t≥2


\ f(t) = (t – 1) [u1(t) – u2(t)] + u2(t) = (t – 1) u1(t) – (t – 2) u2(t)

\ By second shifting theorem


1 1 e−s
£{f (t)] = 2 e − s − 2 e −2 s = 2 (1 − e − s ) ; s > 0
s s s
(ii)  From graph (ii)
0 ; 0 ≤ t ≤1
t − 1 ; 1≤ t ≤ 2

f(t) = 
3 − t ; 2≤t ≤3
0 ; t≥3

\ f(t) = (t – 1) [u1(t) – u2(t)] + (3 – t) [u2(t) – u3(t)]
s
4 sinh 2
\ £{f(t)} = 2 from Example (2.19 (vii))
s2e2s
(iii)  From graph (iii)

f(t) = 0 ; 0 ≤ t < 1 

= 1; 1 < t < 2 

= 0; 2 < t < 3 

= 1; 3 < t < 4 

and so on.

\ f (t) = [u1(t) – u2(t)] + [u3(t) – u4(t)] +….

\ By Laplace transform of unit function


e − s e −2 s e −3s e −4 s
£{ f (t )} = − + + +     (infinite G. P.)
s s s s
e−s
1 1
= s −s = s 2 s 2 = ; s > 0
1+ e se (e + e − s 2 ) s 2 s
2 se cosh
2

2.10 Laplace Transform of unit impulse function


(Dirac–delta function)
It frequently occurs in many problems in electrical engineering, physics and mechanical engi-
neering that a large force acts for a very short duration. To deal with such problems, we define
unit impulse function or Dirac-delta function.

Define
0 ; t < 0
1

δ ε (t ) =  ; 0 ≤ t < ε
ε
0 ; t ≥ ε

Figure 2.6  The pulse δε(t): height 1/ε on 0 ≤ t < ε and zero elsewhere

In terms of unit step function, we have


1
δ ε (t ) = {u0 (t ) − uε (t )}
ε
1
The pulse has the height and is of duration e. As e → 0+ the amplitude of pulse → ∞. Thus,
as e → 0+ we define ε

d(t) = lim+ de(t)


ε →0

This delta function is called unit impulse function.


ε 1
Its impulse = ∫ dt = 1
0 ε

It is clear from Figure 2.6 that as e → 0+, the height of strip increases indefinitely and width
­decreases in such a way that its area is always unity.
The delta function can be made to act at any other point. The delta function d (t – a) is defined by

d (t – a) = lim+ de(t – a); a ≥ 0


ε →0

0 ; t < a
1

where δ ε (t − a ) =  ; a ≤ t < a + ε
ε
0 ; t ≥ a + ε

It acts at t = a.

ua (t ) − ua + ε (t )
Now, de(t – a) = 
ε
ua (t ) − ua + ε (t )
\ d (t – a) = lim+ de(t – a) = lim+
ε →0 ε ε →0

u (t − a ) − u (t − a − ε )
= lim+ = u ′ (t − a )
ε →0 ε 
Actually d (t – a) is not a function in the ordinary sense but is the so-called ‘generalized function’
because we have

d (t – a) = ∞, if t = a

= 0, otherwise

and ∫0
δ (t − a) dt = 1

But an ordinary function which is everywhere zero except at a single point must have integral
zero. Even then, in impulse problems it is convenient to operate on d (t – a) as though it is an
ordinary function.
Theorem 2.12  Filtering property of Dirac-delta function and its Laplace transform

Let f (t) be continuous and integrable in [0, ∞). Then, ∫ f (t ) δ (t − a)dt = f ( a) and £{d (t – a)} = e– as
0

Proof: We have by definition of Dirac-delta function


0 ; t < a
1

de(t – a) =  ; a ≤ t < a + ε
ε
0 ; t > a + ε

and d (t – a) = lim+ δ ε (t – a)
ε →0

∞ a+ε 1
Now, ∫0
f (t )δ ε (t − a)dt = ∫
a ε
f (t )dt 

Using the mean value theorem of integral calculus we obtain


a+ε

1 f (t 0 ) a + ε
 ∫ f (t ) δ ε (t − a) dt =
0

a
ε
f (t ) dt =
ε ∫a
dt = f (t0 ) for some t0, a ≤ t0 ≤ a + e

\ lim ∫ f (t ) δ ε (t − a) dt = lim+ f (t0 ) = f ( a)
ε → 0+ ε →0
0 

\ ∫ f (t ) δ (t − a) dt = f ( a)
0 

Taking f (t) = e – st we have


∫e
− st
δ (t − a)dt = e − as
0 
\ £ {δ (t − a)} = e − as



Example 2.21: Evaluate (i) £{(1/t) δ(t − a)}   (ii) ∫_0^∞ sin 2t δ(t − π/4) dt
Solution: (i) We have £{δ(t − a)} = e^{−as}
∴ £{(1/t) δ(t − a)} = ∫_s^∞ e^{−ax} dx   (by integration of Laplace transform)
= [−(1/a) e^{−ax}]_s^∞ = e^{−as}/a ; a > 0, s > 0
(ii) Let f(t) = sin 2t
∴ By filtering property, ∫_0^∞ sin 2t δ(t − a) dt = sin 2a
Take a = π/4: ∫_0^∞ sin 2t δ(t − π/4) dt = sin(π/2) = 1


2.11  Laplace transform of periodic functions

Theorem 2.13  Let f(t) be piecewise continuous for t ≥ 0, of exponential order k and periodic with period T. Then
£{f(t)} = (1/(1 − e^{−sT})) ∫_0^T e^{−st} f(t) dt ; s > 0

Proof: We have £{f(t)} = ∫_0^∞ e^{−st} f(t) dt = lim_{m→∞} Σ_{k=0}^{m} ∫_{kT}^{(k+1)T} e^{−st} f(t) dt
Put t = kT + x
∴ dt = dx and f(t) = f(kT + x) = f(x)   [∵ f is periodic with period T]
∴ £{f(t)} = lim_{m→∞} Σ_{k=0}^{m} ∫_0^T e^{−s(kT + x)} f(x) dx
= lim_{m→∞} Σ_{k=0}^{m} e^{−skT} ∫_0^T e^{−sx} f(x) dx
= ∫_0^T e^{−st} f(t) dt · Σ_{k=0}^{∞} e^{−ksT}    (2.11)
But Σ_{k=0}^{∞} e^{−ksT} is an infinite G.P. with first term 1 and common ratio e^{−sT}, so
Σ_{k=0}^{∞} e^{−ksT} = 1/(1 − e^{−sT}) ; s > 0   (∵ e^{−sT} < 1 for s > 0)
∴ From (2.11)
£{f(t)} = (1/(1 − e^{−sT})) ∫_0^T e^{−st} f(t) dt ; s > 0.

Example 2.22: If f(t) = t²; 0 < t < 2 and f(t + 2) = f(t) for t > 2, find £{f(t)}.
Solution: Here, f(t) = t²; 0 < t < 2 is a periodic function with period 2.
∴ £{f(t)} = (1/(1 − e^{−2s})) ∫_0^2 e^{−st} f(t) dt
= (1/(1 − e^{−2s})) ∫_0^2 t² e^{−st} dt
= (1/(1 − e^{−2s})) [t²(−e^{−st}/s) − 2t(e^{−st}/s²) − 2(e^{−st}/s³)]_0^2
= (1/(1 − e^{−2s})) [−4e^{−2s}/s − 4e^{−2s}/s² − 2e^{−2s}/s³ + 2/s³]
= (−2e^{−2s}/(s³(1 − e^{−2s}))) (2s² + 2s + 1 − e^{2s})
= 2(2s² + 2s + 1 − e^{2s})/(s³(1 − e^{2s})) ; s > 0
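As a quick cross-check of Theorem 2.13 applied here, the one-period integral can be evaluated with a computer algebra system; the sketch below assumes Python with sympy and is not part of the text.

```python
# Sketch assuming sympy: the one-period formula for f(t) = t^2 on (0, 2), period T = 2.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
F = sp.integrate(t**2 * sp.exp(-s*t), (t, 0, 2)) / (1 - sp.exp(-2*s))
closed = 2*(2*s**2 + 2*s + 1 - sp.exp(2*s)) / (s**3 * (1 - sp.exp(2*s)))

# numeric spot-check at s = 1.3: both values agree
s0 = sp.Rational(13, 10)
print(sp.N(F.subs(s, s0)), sp.N(closed.subs(s, s0)))
```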
Example 2.23: Find the Laplace transform of the rectified semiwave function defined by
f(t) = sin wt , 0 < t ≤ π/w
     = 0 , π/w < t < 2π/w
where f(t) is periodic with period 2π/w.
Solution: f(t) is periodic with period 2π/w.
∴ £{f(t)} = (1/(1 − e^{−2πs/w})) ∫_0^{2π/w} e^{−st} f(t) dt
= (1/(1 − e^{−2πs/w})) ∫_0^{π/w} e^{−st} sin wt dt
= (1/(1 − e^{−2πs/w})) [e^{−st}(−s sin wt − w cos wt)/(s² + w²)]_0^{π/w}
= (1/(1 − e^{−2πs/w})) [w e^{−πs/w}/(s² + w²) + w/(s² + w²)]
= w(1 + e^{−πs/w})/((1 − e^{−2πs/w})(s² + w²)) = w/((s² + w²)(1 − e^{−πs/w})) ; s > 0

Example 2.24: Plot the 2π-periodic function f(t) given by
f(t) = t ; 0 < t < π
     = π − t ; π < t < 2π
and find its Laplace transform.
Solution: Graph of the given 2π-periodic function is shown hereunder.
Figure 2.7: graph of the 2π-periodic function (an open circle indicates a point not in the graph)


1 2π
£ { f (t )} = −2π s ∫0
e − st f (t ) dt

(
1− e ) 
1  te − st dt + (π − t )e − st dt 
π 2π
=
( ∫
1 − e −2π s  0 ) ∫π 

1  π te − st dt − π xe − s ( π + x ) dx   (taking t = p + x in the second integral)
 ∫0 ∫0
=

1− e −2π s 
( ) 

1  π te − st dt − e − π s π te − st dt 
 ∫0 ∫0
=
(1 − e ) −2π s 


(1 − e ) −π s
π 1   e − st   e − st  
π

(1 − e ) ∫
− st
= te dt = t ⋅  − 1 .  s 2  

−2π s 0
(1 + e ) −π s
  −s 

0 
 π −π s 1
=
1
 − s e + s2 1 − e (
−π s
) ; s > 0
(1 + e ) −π s


Example 2.25: Find the Laplace transform of the triangular wave function of period 2c given by
f(t) = t ; 0 < t < c
     = 2c − t ; c < t < 2c
Solution: f (t) is periodic function of period 2c.
1
\ £ { f (t )} =
2c

−2 cs ∫0
e − st f (t )dt
(
1− e ) 
1  te − st dt + ( 2c − t )e − st dt 
c 2c
=
(
1 − e −2 cs  0
∫ ) ∫c 

  e  − st
 e   
− st
c
 e − st   e − st   
2c

 t ⋅ 
1
= − 1 ⋅  s 2    + ( 2 c − t ) − ( − 1)  s 2   
(  )
1 − e −2 cs    − s  
 0 

 − s 
 c 

 −c − cs 1 − cs 
=
1
 e + s2 1 − e
− cs
( )
c − cs 1 −2 cs
+ e + 2 e −e  ( )
(
1 − e −2 cs  s ) s s 

( )  − 
cs cs
− cs 2
1− e 1  1 − e − cs  1  e 2 − e 2  1  cs 
= 2 = = 2 cs = 2 tanh   ; s > 0
s 1− e (
−2 cs
) 2 
s 1+ e  s − cs 

e2 +e 2 

cs
 s  2


Exercise 2.1

1. Find the Laplace transform of the follow- (iii) (1 + te–t)3


ing functions (iv) e – 2t (3 cos 4t – 2 sin 5t)
(i) 1 + 2 e – 4t + 9 cosh 4t + 7t3 + 5 sin 2t (v) 2et sin 4t cos 2t
(ii) sin2t sin 3t 2as
3. Show that £{t sin at} = and
(iii) sin3 2t
(s )
2
2
+ a2
(iv) cos2 3t
s2 − a2
(v) cos (at + b) £{t cos at} =
(s )
2
(vi) cos t 2
+ a2
2. Find the Laplace transforms of
4. Find the Laplace transforms of
 1 (i) te– t sin 2t
(i) e  t+  –t
 t (ii) t 2et sin 4t
(ii)
e cosh 4t sin 3t
– 3t (iii) t sin 2t cosh t

(iv) t3e – 3t 11. Find the Laplace transform of the follow-


(v) t cos3 t ing functions
(vi) te 2t (cost – sint) e t for 0 < t < 1
2as f (t ) = 
(i)
5. If £{t sin at} = , evaluate 0 for t > 1
(s )
2
2
+ a2
t
 for 0 < t < T
(i) £{at cos at + sin at} f (t ) =  T
(ii)
£{2 cos at – at sin at}
(ii) 1 for t > T
6. Find the Laplace transform of the follow- (t − 1)3 , t > 1
ing functions (iii) f (t ) = 
0 , t <1
e − at − e − bt
(i)
sin(t − α ) , t > α
t (iv) f (t ) = 
sin t3
0 , t <α
(ii)
t 12. Write the given function in terms of
e −2t sin 2t cosh t Heaviside’s unit step function and hence
(iii)
t find £{f (t)}.
2 e − t , 0 < t < 3
(iv) 2
sin t (i) f (t ) = 
t 0 , t > 3
7. Find the Laplace transform of the follow- 1 , 0 ≤ t < 1
ing functions 
t f ( t ) = t , 1 ≤ t < 2
(ii)
∫ e t dt
−t 4
(i) 2
0
t , 2 ≤ t < ∞
t
∫ t cosh t dt
(ii)
0 13. Find the Laplace transforms of
t
δ (t − π )
∫ cos x dx
(iii) 2
0
(i)
t
t
e −4 t ∫ t sin 3t dt
(iv) e − π t δ (t − a )
(ii)
0

 π
 sin 2t + sin 3t 
∞ sin 2t δ  t −  − t 2 δ (t − 2)
(iii)
8. Evaluate ∫   dt using  4
0  te t
t 2 U (t − 2) − cosh t δ (t − 2)
(iv)
Laplace transform.
9. Using Laplace transform, evaluate e − t sin t uπ (t )
(v)
−t 2
∞ e sin t t U (t − 4) − t 3 δ (t − 2)
(vi)
(i) ∫0 t dt ∞

∫ e
(ii)

−4 t 3
cosh t dt
14. Evaluate ∫e
0
−4 t
δ (t − 3) dt
0
15. For the periodic function f(t) of period 4,
10. Using Laplace transform show that

defined by
e − t sin u π
t

∫t = 0 u∫= 0 u du dt = 4 3t , 0 < t < 2


f (t ) =  , find £{f (t)}
6, 2 < t < 4

16. Find the Laplace transform of the square sin t ; 0 < t ≤ π


wave function of period ‘a’ defined as f (t ) = 
 − sin t ; π < t < 2π
1 for 0 < t < a 2
f (t ) = 
 −1 for a 2 < t < a f (t ) = f (t + π ). Also, draw its graph.
17. Find the Laplace transform of the 19. Find the Laplace transform of the periodic
­periodic function function defined by the triangular wave
kt
f (t ) = for 0 < t < T , f (t + T ) = f (t ) t a , 0≤t ≤a
T f (t ) = 
( 2a − t ) a , a ≤ t ≤ 2a
18. Find the Laplace transform of full recti-
fied sine wave defined by the expression and f (t + 2a) = f (t ).

Answers 2.1

1 2 9s 42 10 12 s
1. (i) + + + + ; s > 4  (ii)  ;s>0
s s + 4 s 2 − 16 s 4 s 2 + 4 ( s 2 + 1)( s 2 + 25)
48 ( s 2 + 18)
(iii) 2 ; s > 0        (iv)  ;s>0
( s + 4)( s 2 + 36) s( s 2 + 36)
s cos b − a sin b ∞
( −1) n n !
(v) 2
s + a2
; s > 0         
(vi) ∑
n= 0 ( 2n)! s n +1
;s>0

π π 3( s 2 + 6 s + 34)
2. (i) + ; s > – 1        (ii)  ;s>1
2( s + 1)3 2 s +1 ( s − 2 s + 10)( s 2 + 14 s + 58)
2

1 3 6 6 3( s + 2) 10
(iii) + + + ; s > 0   (iv)  2 − 2 ;s>–2
s ( s + 1) 2
( s + 2) ( s + 3)
3 4
s + 4 s + 20 (
s + 4 s + 29 ) ( )
6 2
(v) + 2 ;s>1
s − 2 s + 37 s − 2 s + 5
2

4( s + 1) 8(3s 2 − 6 s − 13)
4. (i) 2 ; s > – 1         
(ii)  ;s>1
( s + 2 s + 5) 2 ( )
3
s 2 − 2 s + 17
2( s − 1) 2( s + 1) 6
(iii) + ; s > 1   (iv)  ;s>–3
( ) (s ) ( s + 3)4
2 2
s − 2s + 5
2 2
+ 2s + 5

 
1  − s2 + 9 s2 + 3  s2 − 6s + 7
(v)  + 2
; s > 0     
(vi)  ;s>2
4  s2 + 9 2 ( ) ( ) ( )
2

 s 2
+ 1 
 s 2
− 4 s + 5

2as 2 2 s3
5. (i) (ii) 
(s 2
+a )
2 2
s2 + a2
2
( )
s+b 1 −1 −1 s 
6. (i) log ; s > –a, s > –b (ii)  3 cot s − cot  ; s > 0
s+a 4 3
1  s + 1  s + 3 
(iii) cot −1   + cot −1   ; s > –1
2  2   2  

1 s2 + 4 s
(iv)  − s log 2 + 4 cot −1  ; s > 0
4 s 2

24 s2 + 1
7. (i) ; s > 0 (ii)  ;s>1
s( s + 1)5 s( s 2 − 1) 2
s2 + 2 6
(iii) 2 2 ; s > 0 (iv)  ;s>0
s ( s + 4) ( s 2 + 8s + 25) 2
8. 3p/4

1 12
9. (i) log 5    (ii) 
4 35
e1− s − 1 1 − e − ST 6e − S e −α s
11. (i)   (ii)  2   (iii)  4   (iv)  2
1− s Ts s s +1
1 − e −3( s +1) 1 2 e−s 3 2
12. (i) ; s > –1    (ii)  + e −2 s + 2 + 2 e −2 s + 3 e −2 s ; s > 0
s +1 s s s s s
−π s
e − sπ
13. (i)      (ii) e − a ( s + π )      (iii) e 4 − 4e −2 s
π
2 4 4  −e − π ( s +1)
e −2 s  3 + 2 + − cosh 2 ; s > 0    
(iv) (v)  2 ; s > –1
s s s  ( s + 2 s + 2)

 1 4
e −4 s  2 +  − 8e −2 s ; s > 0
(vi)
s s
14. e–12
3 6e −4 s
15. −
s 2 (1 + e −2 s ) s(1 − e −4 s )

1  as 
16. tanh   , s > 0
s  4

k ke − sT
17. − ;s>0
s 2T s(1 − e − sT )
1 + e −π s
18. ;s>0
(1 − e − π s )( s 2 + 1)
Figure 2.8
1  as 
19. tanh   , s > 0
as 2
 2

2.12 Inverse Laplace Transform


On the basis of the Laplace transforms of elementary functions and the first shifting property, the following results for inverse Laplace transforms hold:
1
(i) £ −1   = 1
s
 1 t n −1
(ii) £-1  s n  = ( n − 1)! ; n ∈ N
n −1
1 t
(iii) £ −1  n  = ;n > 0
 s  Γ( n)
−1  1  1
(iv) £  2  = sin at
 s + a2  a
 1  1
(v) £ −1  2 2 
= sinh at
 s −a  a
 s 
(vi) £ −1  2 2 
= cos at
s +a 
−1  s 
(vii) £  2  = cosh at
 s − a2 
 1 
(viii) £ −1  =e
at

 s−a
210 | Chapter 2

 1  e at t n −1
(ix) £-1  n
= ; n ∈N
 ( s − a)  ( n − 1)!
 1  at t
n −1
(x) £ −1   = e ; n>0
 ( s − a) 
n
Γ( n)

 1  1 at
(xi) £ −1  2 
= e sin bt
 ( s − a) + b  b
2

 1  1 at
(xii) £ −1  2 
= e sinh bt
− 2

 ( s a) b  b
 s−a 
(xiii) £ −1  2 
= e at cos bt
 ( s − a ) 2
+ b 
 s−a 
(xiv) £ −1  2 
= e at cosh bt
 ( s − a ) 2
− b 
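A few of these entries can be reproduced directly with a computer algebra system; the sketch below assumes Python with sympy (its output may carry a Heaviside(t) factor, which equals 1 for t > 0) and is not part of the text.

```python
# Sketch assuming sympy; outputs may include a Heaviside(t) factor (equal to 1 for t > 0).
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

print(sp.inverse_laplace_transform(1/(s**2 + a**2), s, t))   # sin(a*t)/a, entry (iv)
print(sp.inverse_laplace_transform(s/(s**2 - a**2), s, t))   # cosh(a*t), entry (vii)
print(sp.inverse_laplace_transform(1/(s - a)**3, s, t))      # t**2*exp(a*t)/2, entry (ix) with n = 3
```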

2.13 Use of partial fractions to find inverse Laplace transform
After resolving the given function F(s) into partial fractions and using the above formulae, we can find the inverse Laplace transform of F(s); the suppression (cover-up) method may be used to write the partial fractions.
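The partial-fraction step itself can also be delegated to a computer algebra system. The sketch below assumes Python with sympy and uses the function from Example 2.26 (iii) purely as an illustration of sympy's apart in place of the suppression method.

```python
# Sketch assuming sympy: partial fractions for (s + 4)/(s(s - 1)(s^2 + 4)), cf. Example 2.26 (iii).
import sympy as sp

s = sp.symbols('s')
F = (s + 4) / (s * (s - 1) * (s**2 + 4))
print(sp.apart(F, s))   # 1/(s - 1) - 1/(s**2 + 4) - 1/s, the decomposition used below
```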

Example 2.26: Find the inverse Laplace transform of


3s + 5 2 6 3 + 4s 8 − 6s
(i) (ii)  − +
s2 + 8 2 s − 3 9 s 2 − 16 16 s 2 + 9

s+4 s 2 + 2s − 4 s
(iii) (iv)  (v) 4
( )
s ( s − 1) s 2 + 4 ( )(
s 2 + 2s + 5 s 2 + 2s + 2 s + 4a4)

Solution:
   
 3s + 5 2 
−1 −1  s  5 −1  2 2 
(i)  £  2  = 3£  2 
+ £  2 
 s + 8   s2 + 2 2
 ( )  2
 (
 s2 + 2 2
 ) 

5
= 3 cos 2 2t + sin 2 2t
2

 6 3 + 4s 8 − 6s 
(ii)  £ −1  − 2 + 
 2 s − 3 9 s − 16 16 s + 9 
2

 3  1 −1  4 3  4 −1  s  2  3 4 
= £ −1   − £   − £   + £ −1  2 
 s − 32  4  s − ( 4 3 )  9  s − ( 4 3 )  3  s + ( 3 4 )  
2 2 2 2 2

3  s 
− £ −1  2 
 s + ( 4 )  
8 2 3

3
t 1 4t 4 4t 2 3t 3 3t
= 3e 2 − sinh − cosh + sin − cos
4 3 9 3 3 4 8 4 
(iii)  By suppression method

Let
s+4

(0 + 4 ) + 1+ 4 As + B
+ 2
s( s − 1) s + 4
2
( )
s (0 − 1) (0 + 4 ) ( s − 1)1. (1 + 4 ) s + 4

−1 1 As + B
≡ + +
s s − 1 s2 + 4 
\ s + 4 ≡ – (s–1) (s2+4) + s(s2+4) + (As + B) s (s – 1)
Equate coefficients of s3 and s2
0 = − 1 + 1 + A
 ⇒ A = 0, B = A − 1 = −1
0 = 1− A+ B 

s+4 1 1 1 1 1 1 2
\ =− + − =− + −
(
s( s − 1) s 2 + 4 )
s s − 1 s2 + 4 s s − 1 2 s 2 + 22

 s+4  1
\ £ −1   = −1 + e − sin 2t
t

 s ( s − 1) s(2
+ 4  ) 2

s 2 + 2s − 4 s 2 + 2s + 5 − 9
(iv)  =
(
( s 2 + 2 s + 5) s 2 + 2 s + 2 ) (
( s 2 + 2 s + 5) s 2 + 2s + 2 )
1  1 1 
= − 3 2 − 2
s + 2s + 2 2
 s + 2 s + 2 s + 2 s + 5 

3 2
= 2 −
s + 2s + 5 s 2 + 2s + 2 
3 2 1
= − 2⋅
2 ( s + 1) 2
+2 2
( s + 1)2 + 12 
 s 2 + 2s − 4  1
 = e (3 sin 2t − 4 sin t )
−t
\ £-1  2
( 2
)(
 s + 2 s + 5 s + 2 s + 2 )  2


s s s
(v)  = =
(s 4
+ 4a 4
) (s 2
+ 2a ) − (2as) (s
2 2 2 2
− 2as + 2a 2
) (s 2
+ 2as + 2a 2 )
1  1 1 
=  − 2
4 a  s − 2as + 2a
2 2
s + 2as + 2a 2  

1  a a 
=  − 
4a2  ( s − a ) + a 2 ( s + a ) + a 2 
2 2


 
\ £-1  4
s 1 at
{
− at
 = 2 e sin at − e sin at }
(
 s + 4a
4
)  4 a


1  e at − e − at  1
= 2  sin at = 2 sinh at sin at
2a  2  2a

Example 2.27: Find the inverse Laplace transforms of

(i) 
1
  (ii) 
(2s − 3)      (iii) 
1
     (iv)  2
1
(s 2
− 5s + 6 ) (s 2
+ 4 s + 13 ) s ( s + 3)
2
s s +9 ( )
1 1 1 4
(v)      (vi)        (vii)     (viii) 
2
(
s s +4 2
) (s 3
+ 4s 2
) (s 2
− 4s + 8 ) (s 2
−s+2 )
3s − 1 6+s s2
(ix)       
(x)     (xi) 
( s − 2) 2
(s 2
+ 6 s + 13 ) (s 4
− a4 )
Solution:
1 1 1 1
(i)   = = −
(s 2
− 5s + 6 ) ( )( )
s − 2 s − 3 s − 3 s − 2

 1  −1  1  −1  1 
\ £ −1  2  = £  s − 3 − £  s − 2  = e − e
3t 2t

 s − 5 s + 6      

−1  (
 2s − 3   2 s + 2 ) − 7   ( s + 2 )  7  3 
(ii)  £ −1  2 =£   = 2£ −1   − £ −1  
 s + 4 s + 13   ( s + 2 ) + 3   ( s + 2 ) + 3  3  ( s + 2 ) + 3 
2 2 2 2 2 2

7
= 2e −2t cos 3t − e −2t sin 3t
3 
e −2t
= (6 cos 3t − 7 sin 3t ) 
3

(iii)  By suppression method


1 1 1 A
Let ≡ − +
s ( s + 3) 9 s 3 ( s + 3)2 s + 3
2

1 1
1 ≡ ( s + 3) − s + As( s + 3)
2
\
9 3 
Equate coefficient of s2
1 1
0= + A⇒ A= −
9 9
1 1 1 1
\ = − −
s ( s + 3)
2
9 s 9 ( s + 3) 3 ( s + 3)2

 1  1 1 1 1
\ £ −1  2 
= − e −3t − te −3t = 1 − e −3t − 3 te −3t ( )
 s ( s + 3) 
9 9 3 9

1 s s1 1  1 1 s 
(iv) = = − = −
(
s s +9 2
) s s +9
2
( 2
) 9  s 2 s 2 + 9  9  s s 2 + 32 

 1  1
\ £ −1  2  = (1 − cos 3t )
(
 s s + 9 )  9

1 1 1 1  1 1 2 
(v) =  2 − 2  = 2 −  2
s s +4
2
( 2
) 4 s s +4 4 s 8  s + 22 

 1  1 1 1
\ £ −1  2 2  = t − sin 2t = ( 2t − sin 2t )
(
 s s + 4 )  4 8 8

(vi) By suppression method
1 1 1 1 A
Let = ≡ + +
s3 + 4 s 2 s 2 ( s + 4 ) 4 s 2 16 ( s + 4 ) s

1 1
\ 1 ≡ ( s + 4 ) + s 2 + As ( s + 4)
4 16 
Equate coefficient of s2
1 1
0=
+ A⇒ A= −
16 16 
1 1 1 1
\ = + −
s3 + 4 s 2 4 s 2 16 ( s + 4 ) 16 s

 1  1 1 −4 t 1 1
\

−1
£  3
s + 4 s 2 

= t+ e − =
4 16 16 16
4t + e −4 t − 1 ( )


1 1 1  2 
(vii)   = =  
s 2 − 4 s + 8 ( s − 2)2 + 22 2  ( s − 2)2 + 22 

 1  1 2t
\ £ −1  2  = e sin 2t
− +
 s 4s 8  2 
7
4 8 2
(viii) =
(s 2
−s+2 ) 7 1  7 
2 2

 s −  +  
2  2 
−1  4  8 t /2  7 
\ = e sin 
 2 
£  2 t
 s −s+2 7  

3s − 1 3 ( s − 2) + 5 3 5
(ix)   = = +
( s − 2) 2
( s − 2) 2
s − 2 ( s − 2 )2

 3s − 1 
\ £ −1   = ( 3 + 5t ) e 2t
 ( s − 2 ) 
2


(x)  
6+s
=
( s + 3) + 3 = s + 3 + 3 . 2
(s 2
+ 6 s + 13 ) ( s + 3) + 2 ( s + 3) + 2 2 ( s + 3)2 + 22
2 2 2 2

 6+s  −3t  3  1 −3t


\ £ −1  2  = e  cos 2t + sin 2t  = e ( 2 cos 2t + 3 sin 2t )
 s + 6 s + 13   2  2 
s2 s2 1  a a 
(xi) = = + 2
(s 4
−a 4
) (s 2
+a 2
) (s 2
−a 2
) 
2a  s − a
2 2
s + a 2 

 s2  1
\ £ −1  4 = (sinh at + sin at )
 s −a
4
 2a 

1
Example 2.28: Find the inverse Laplace transform of 2 and hence of the function
s s + 1 s 2
+ 9 ( )( )
.
( )(
s2 + 1 s2 + 9 )
1 1 1 1 3 
Solution: =  2 − 2    
(by suppression method)
( s +1 s + 9
2
8  s)(
+ 1 3 s + 32 
2
)

 1  1  1  1
\ £ −1  2  =  sin t − sin 3t  = ( 3 sin t − siin 3t ) 
(
 s + 1 s + 9
2
)( )  8  3  24


1 1 3
= =sin 3sin
6 6
t t ∵ sin t =3t3=sin3 tsin
∵ 3sin ( (
− 4t −
sin43sin
t 3t ) )
   
We know that
If £ {f (t)} = F(s) then £ {f ′(t)} = sF(s)–f (0)
1 3
Taking sin t f (t) =
6 
1 2
f ′(t) = sin t cos t and f (0 ) = 0
2 
1
Also, F(s) = 2
s + 1 s2 + 9

( )( )
\ s 1 
sF ( s ) = 2 = £  sin 2 t cos t 
s +1 s + 92
2  ( )( )
 s  1 2 1
⇒ £ −1  2  = sin t cos t = sin t sin 2t
(
 s + 1 s + 9  2
2
)( 4 )
Example 2.29: Evaluate
 s2    (ii) £ −1  s   1 
(i)  £ −1  2     (iii) £ −1 
(
 s − a
2
)(s 2
−b 2
)( s − c 
2 2
) 
 s + a
2 2
( )
2 

 2
(
 s + a
2
)
2 


 1   s 
(iv)  £ −1      (v) £ −1 
 2 2   2 2 
(
 s − 9  )  s − 9  ( )
Solution: (i) By suppression method
s2 a2 b2
 = +
(
s2 − a2 s2 − b2 s2 − c2)( )(
a2 − b2 a2 − c2 s2 − a2 ) (
b2 − a2 b2 − c2 s2 − b2 )( )( ) ( )( )( )
2
c
+
(c 2
−a 2
) (c 2
)(
− b2 s2 − c2 )
 s2  a sinh at b sinh bt c sinh ct
\ £ −1  2 = + 2 + 2
(
 s − a
2
)( s − b2
2
)( )
s − c  a 2 − b 2 a 2 − c 2
2 2
( )(
b − a2 b2 − c2 c − a2 c2 − b2 ) ( )( ) ( )( )
s s 1  1 1 
(ii) = =  − 
(s ) (s − ia) (s + ia) 4ia  ( s − ia )2 ( s + ia )2 
2 2 2
2
+ a2  

 
 s  1 t  eiat − e − iat  t
\ £ −1  2 
= te iat − te − iat  =  = sin at

(
 s + a 2
2
)  4ia 2a  2i  2a

2 2
1  1   1  1 1 
(iii) =  =  − 
(s 2
+a 2 2
)  ( s − ia ) ( s + ia )   2ia s − ia s + ia  

1  1 1 2 
=−  + − 2 
4a2  ( s − ia ) ( s + ia ) s + a 2 
2 2


 
 1  1  2 
\ £ −1  2 
= − 2 te iat + te − iat − sin at 
(
 s 2 + a 2 )  4 a  a 

1   e iat + e − iat  
= sin at − at   
2a 3   2  
1
= 3 {sin at − at cos at }
2a 
2 2
1  1  1  1 1 
(iv) =  =  − 
(s 2
−9 )
2
 ( s − 3) ( )  
s + 3 6  s − 3 s + 3 

1  1 1 2 
=  + − 
36  ( s − 3)2 ( s + 3)2 s 2 − 9 
 
 
 1  1  3t 2 
\ £ −1  2 
= −3t
te + te − sinh 3t 
(
 s 2 − 9 )  36  3 

1   e 3t + e −3t  2 
= 2t   − sinh 3t 
36   2  3 

1
= (3t cosh 3t − sinh 3t )
54 

s s 1  1 1 
(v)  = =  − 
(s ) (s − 3) (s + 3) 12  ( s − 3) ( s + 3)2 
2 2 2 2
2
−9  
 
 s  1 t
\ £ −1  2 
= {
te 3t − te −3t = sinh 3t }
(
 s − 9
2
)  12 6


Example 2.30: Show that


1 1 t3 t5 t7
£ −1  sin  = t − + − + 
( 3!) ( 5!) ( 7!)
2 2 2
s s

1 1 11 1 1 1 
Solution: We have sin =  − + − + ......
s s s s 3! s 5! s 7! s 7
3 5

1 1 1 1
= − + − + .....
s 2 3! s 4 5 ! s 6 7 ! s 8 
t3 t5 t7  t n −1 
\ £ −1  sin  = t − −1  1 
1 1
+ − . + .... ∵ £   = 
  s  ( n − 1)! 
( 3!) ( 5!) ( 7!)
2 2 2 n
s s  
Example 2.31: Find the inverse Laplace transform of the following functions
s+a s+3 s2 + 9
(i)  log        (ii) cot −1      
(iii) log
s+b 2 s ( s + 2)
1 −1 a 1 1
(iv)  tan ; s > 0     (v)      (vi)  3 2
s s s s+4 s s +1 ( )
Solution:
 s + a
(i)  Let £ −1 log  = f (t )
 s+b
\ £ { f ( t )} = log ( s + a ) − log ( s + b )

d d   d d  
\ £ {t f£({tt)}f=( t−)} = {−log ({slog
+ a()s−+log + b()s}+ b )}
a ) −( slog ∵ £ {∵
t f£({tt)}f=( t−)} = £−{ f (£t ){}f( t )} 
ds ds
   ds ds  
1 1
=− +
s+a s+b 
 1 1 
\ t f ( t ) = £ −1  − − bt
 = e −e
− at

 s+b s+a 
e − bt − e − at
\ f (t ) =
t 
 s+3
(ii) Let £ −1  cot −1 = f (t )
 2 
s+3
\ £ { f ( t )} = cot −1
2 
−d s+3 1 1 2
\ £ {t f ( t )} = cot −1 = ⋅ =
ds 2  s+3
2
2 ( s + 3)2 + 22
1+  
 2  

−1  2 
\ t f ( t ) = £  = e −3t sin 2t
2 
( )
2
 s + 3 + 2 

1
\ f (t ) = e −3t sin 2t
t 
 s 2 + 9 
(iii) Let £ −1 log  = f (t )
 s ( s + 2 ) 

\ ( )
£ { f ( t )} = log s 2 + 9 − log s − log ( s + 2 )

−d 2s 1 1
\ £ {t f ( t )} = £ { f ( t )} = − 2 + +
ds (
s +9 s s +)2

1 1 s 
\ t f ( t ) = £ −1  +
 s s + 2
− 2⋅ 2 2
s + 3 
( −2 t
 = 1 + e − 2 cos 3t )


⇒ f (t ) =
1
t
(1 + e −2t − 2 cos 3t )


 a
(iv) Let £ −1  tan −1  = f ( t )
 s
a
\ £ { f ( t )} = tan −1
s
−d a d s
\ £ {t f ( t )} = tan −1 = − cot −1
ds s ds a 
1 1 a
= . = 2
s a s + a2
2
1+ 2
a 
−1  a 
\ t f (t ) = £  2  = sin at
+
s a
2
 
sin at
\ f (t ) =
t 
−1  a 1
\ £  tan −1  = sin at
 s t

 −1
1  1 a  −1 a  1 1   t
( ( ) )1 1  
duau du ∵ £∵∫ £f (∫u0) fdu( u )=du £={ f (£t {)}f( t )} 
t t t
\
 s  s s  s u 0 ∫
£ −1  £ tan −1 tan = ∫  =sin0
ausin
u   0 s s  


 1  −4 t −1  1  −4 t t
−1/ 2
e −4 t  −1  1  t n −1 
(v) £ −1  =e £  =e = ∵ £  n  = .
 s+4   s 1 π t   s  Γn 
Γ 
2
−4 t
 1  t e
\ £ −1   ∫
= dt
 s s + 4  0 πt 
Put 4t = x 2 so that 2 t = x 

1
\ dt = dx
 t
− x2
 
1 1 2 1
( )
2 t e 2 t
\ £ −1   ∫0 ∫
2
= dx = . e − x dx = erf 2 t
s s+4  π 2 π 0 2


(vi) 1/(s³(s² + 1)) = (1/s) · 1/(s²(s² + 1)) = (1/s)(1/s² − 1/(s² + 1))
Now, £^{-1}{1/s² − 1/(s² + 1)} = t − sin t
∴ £^{-1}{(1/s)(1/s² − 1/(s² + 1))} = ∫_0^t (x − sin x) dx   (by Laplace transform of integrals)
= [x²/2 + cos x]_0^t = t²/2 + cos t − 1

Example 2.32: Find the inverse Laplace transform of
s 2 + 2s − 3 s −1
(i)  (ii) 
log
s ( s − 3) ( s + 2) s
Solution:
(i)  By suppression method
s 2 + 2s − 3 0+0−3 9+6−3 4−4−3
= + +
s ( s − 3) ( s + 2) s (0 − 3) (0 + 2) 3 ( s − 3) (3 + 2) ( −2) ( −2 − 3) ( s + 2)


1 4 3
= + −
2 s 5 ( s − 3) 10 ( s + 2)


 s 2 + 2 s − 3  1 4 3t 3 −2t
\ £ −1  = + e − e
 s ( s − 3) ( s + 2 )  2 5 10


(ii) Taking a = -1 and b = 0 in Example 2.31 (i), we get

 s − 1 e 0 − et 1 − et
£ −1 log = =
 s  t t

Example 2.33: Find the inverse Laplace transform of the following functions
−π
(3s + 1) e −3s    (iii)  e −3s
s
4e 2
(i)        
(ii) 
( s 2 + 16 ) s2 s2 + 4 ( s + 2)( )
e 4 − 3s e −2 s
(iv)     (v)  2
( s + 4)5/ 2 s

 4 
Solution:  (i)  We have L−1  2 = sin 4t
 s + 16 
 −π s 4    π 
£ −1 e 2 2  = sin 4  t −   uπ / 2 ( t )   (by second shifting theorem)
 s + 16    2 

= sin 4t uπ / 2 (t )

 −π
s 4  0 ; 0≤t <π /2
\ £ −1 e 2
=
 s + 16  sin 4t ; t ≥ π / 2
2


3s + 1
=
(3s + 1)  1 1  13 1 s 1 2 
(ii)      2 − 2  =  + − 3. 2 − . 
(
s s +4
2 2
) 4 s s + 4  4  s s2 s + 22 2 s 2 + 22 

 3s + 1  1  1 
\ £ −1  2 2  =  3 + t − 3 cos 2t − sin 2t 
(
 s s + 4 ) 
4 2 

 3s + 1  1  1 
\ £ −1  2 2 e −3 s  = 3 + (t − 3) − 3 cos {2 (t − 3)} − sin {2 (t − 3)} u3 (t )
(
 s s + 4  )
4  2 

 (by second shifting theorem)

1 1 
=  t − 3 cos ( 2t − 6 ) − sin ( 2t − 6 ) .u3 (t )
4 2  

 3s + 1  0 ; 0 ≤ t < 3
Hence, £ −1  2 2 e −3 s  =  1  1 
 s s + 4 (  
)
  4 t − 3 cos ( 2t − 6 ) − 2 sin ( 2t − 6 )  ; t ≥ 3

  

 1 
(iii)  We have £ −1  =e
−2 t

s+2
−1  1  0 if 0 ≤ t < 3
e −3s  = e ( ) ⋅ u3 ( t ) =  −2(t −3)
−2 t − 3
Hence,    £ 
s+2  e if t ≥ 3

  3/ 2
e −4 t .t 3/ 2
(iv)  We have  £ −1  −4 t −1  1 
1 −4 t t 4 −4 t 3/ 2
 = e £   = e = = e ⋅t
 ( s + 4 ) 
5/ 2 5/ 2
Γ5 / 2 3 1
s  ⋅ π 3 π
2 2
 4 −3s   e −3s 
\ £ −1  e = e 4 £ −1  = e4 ⋅
4 −4(t −3)
e . ( t − 3) u3 ( t )
3/ 2
5/ 2  5/ 2 
 ( s + 4 )   ( s + 4 )  3 π

0 if 0 ≤ t < 3
4 
e 4(4 − t ) (t − 3) ⋅ u3 (t ) =  4 4 (4 − t )
3/ 2
=
3 π 3 π e (t − 3) if
32
t≥3

−1  1 
(v)  £  2  = t
s 
 e −2 s  0 ; 0≤t<2
\ £-1  2  = (t − 2) u2 (t ) = 
 s  t − 2 ; t ≥ 2 

2.14 Convolution Theorem


We use the convolution theorem to find the inverse Laplace transform of a product of two functions. Firstly, we shall define the convolution of two functions.
Convolution
Let f(t) and g(t) be two functions defined in [0, ∞). Then the convolution of f(t) and g(t), denoted by f(t)*g(t) or (f*g)(t), is defined by (f*g)(t) = ∫_0^t f(u) g(t − u) du ; t ≥ 0.
It can be easily proved that the convolution satisfies commutative, associative and distributive
laws, i.e.,
(i)  f  *g = g *f      (ii) f  *(g *h) = (  f  *g)*h
(iii)  f  *(g + h) = f  *g + f  *h
Theorem 2.14  (Convolution theorem)
Let f(t) and g(t) be piecewise continuous functions on [0, ∞) and be of exponential order, F(s) = £{f(t)} and G(s) = £{g(t)}. Then £^{-1}{F(s)·G(s)} = (f*g)(t) = ∫_0^t f(u)·g(t − u) du.
t  ∞ t
Proof: We have £  ∫ f (u ) ⋅ g (t − u ) du  = ∫ e − st ∫ f (u ) ⋅ g (t − u ) du dt
0  0 0
222 | Chapter 2

By changing the order of integration

ut

t=∞
t= t
u=

t
u=

Figure 2.9

∞∞

£ ( f * g )( t ) = ∫∫e
− st
f (u ) g (t − u ) dt du
0 u

∞ ∞

= ∫  ∫ e − st g (t − u ) dt  f (u ) du
0 u 


∞ ∞

= ∫  ∫ e − s ( x + u ) g ( x ) dx  f (u )du  (taking t – u = x)
0 0 
∞ ∞

= ∫ e − su f (u ) du.∫ e − sx g ( x ) dx
0 0

∞ ∞

= ∫ e − st f (t ) dt .∫ e − st g (t ) dt
0
 0

= £{f (t)} ⋅ £{g(t)} = F(s).G(s)


t

\ £–1 {F(s) ⋅ G(s)} = (  f *g) (t) = ∫ f (u) g (t − u) du


0 

Example 2.34: Find the convolution sin at * cos at.
Solution: We have sin at * cos at = ∫_0^t sin au cos a(t − u) du
= (1/2) ∫_0^t 2 sin au cos(at − au) du
= (1/2) ∫_0^t [sin at + sin(2au − at)] du
= (1/2) [u sin at − (1/(2a)) cos(2au − at)]_0^t
= (1/2) [t sin at − (1/(2a)) cos at + (1/(2a)) cos at]
= (1/2) t sin at
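This closed form is easy to confirm by evaluating the convolution integral numerically; the sketch below assumes Python with scipy and the illustrative values a = 2, t = 1.3.

```python
# Numerical sketch assuming scipy; a = 2 and t = 1.3 are illustrative values.
import numpy as np
from scipy.integrate import quad

a, t = 2.0, 1.3
conv, _ = quad(lambda u: np.sin(a * u) * np.cos(a * (t - u)), 0, t)
print(conv, t * np.sin(a * t) / 2)   # the two values agree
```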

Example 2.35: Use convolution theorem to find

 1  −1  s   1 
 (i)  £ −1          (ii) £  2 
   (iii) £ −1 
 2 
( ) ( )
2 2
s s+4   s + a  s + a
2 2
 

 s2   1         1 
 (iv)  £ −1  2 −1 
   (v) £  3 2  (vi) £ −1  
( )(
 s + a s + b
2 2 2
)   s s +1 ( )   ( s − 2 ) ( s + 3 ) 

 s  −1  1   s2 
(vii)  £ −1  2    (viii) £  s ( s + 1) ( s + 2 )  (ix)  £ −1 
( )(
 s + 1 s + 4
2
)     (
 s + w
2 2
)
2 


1  1  t −1 2 e −4 t
Solution: (i) We have £ −1   = 1, £ −1  −4 t
=e ⋅ =
s  s+4  1 πt
Γ 
2
\ By convolution theorem
 1  e −4 t t
e −4 u
£ −1  = *1 = ∫ du
s s+4  πt 0 πu 
1
Put 4u = x2 so that 2 u = x ⇒ du = dx. 
u

 1  1
2 t
1 2
2 t

∫ ∫e
2
− x2
\ £ −1  = e − x dx = ⋅ dx
s s + 4  π 0
2 π 0 

=
1
2
erf 2 t( )


 s   1  1
(ii) We have £ −1  2 2 
= cos at, £ −1  2 2 
= sin at
s +a  s +a  a
\ By convolution theorem
 s  1  1
£ −1  2 2 2 
=  sin at  * cos at = (sin at * cos at)
+
 (s a )   a  a
1 1
= ⋅ t sin at  (from Example 2.34)
a 2
1
= t sin at
2a 
 1  1
(iii)  We have £ −1  2 2 
= sin at
s +a  a
\ By convolution theorem
 1  1  1  1
£ −1  2 2 2 
=  sin at  *  sin at  = 2 (sin at * sin at)
(s + a )   a  a  a
t
1
a 2 ∫0
= sin au sin a(t − u ) du

t
1
2a 2 ∫0
= 2 sin au sin( at − au ) du

t
1
=
2a 2 ∫ [cos (2au − at ) − cos at ] du
0 
t
1 1 
= sin ( 2au − at ) − u cos at 
2a 2  2a 0 

1 1 1 
=  2a sin at + 2a sin at − t cos at 
2a 2  
1
= 3
2a
[sin at − at cos at ]


 s   s 
(iv)  We have £ −1  2 2 
= cos at , £ −1  2 2 
= cos bt
s +a   s +b 
\ By convolution theorem
t
 s2 
£ −1  2  = cos at * cos bt = ∫0 cos au cos(bt − bu) du
 ( s + a )( s + b ) 
2 2 2

t
1
2 ∫0
= 2 cos au cos(bt − bu ) du


t
1
[cos{( a − b)u + bt} + cos{( a + b)u − bt}]du 
2 ∫0
=

t
1  1 1 
=  sin{( a − b)u + bt} + sin{( a + b)u − bt}
2  ( a − b) ( a + b) 0 
1  1 1 
=  ( a − b) (sin at − sin bt ) + ( a + b) (sin at + sin bt ) 
2  
1  2a 2b 
=  a 2 − b 2 sin at − a 2 − b 2 sin bt 
2  
a sin at − b sin bt
=
a2 − b2 

1 1  1 
(v)  We have £ −1  3  = t 2 , £ −1  2  = sin t
s  2  s +1
\ By convolution theorem
 1  1 2 t
1 2 1
t
£ −1  3 2  = t ∗ sin t = ∫ u sin(t − u) du = − ∫ u 2 sin(u − t ) du
 s ( s + 1)  2 0
2 20

1
= − u ⋅ {− cos(u − t )} − 2u ⋅ {− sin ( u − t )} + 2 {cos ( u − t )}
t
2

2 0

2 2
1 t t 2 t
= −  −t + 2 − 2 cos t  = − (1 − cos t ) = − 2 sin
2

2 2 2 2

 1   1 
(vi)  We have £ −1   = e 2t , £ −1  =e
−3t

 s−2  s+3
\ By convolution theorem
 
t
1
£ −1   = e *e = ∫e e
2t −3t 2 u −3( t − u )
du
 ( s − 2 ) ( s + 3)  0 
5u t
t
 e  1 −3t 5t
= e −3t ∫ e 5u du = e −3t 
 5  0 5
( )
= e e −1
0


1
(
= e 2t − e −3t
5
)

 1  −1  s 
(vii) We have £ −1  2  = sin t , £  2  = cos 2t
 s + 1  s +4
\ By convolution theorem
 s  t

£ −1  2
 ( s + 1)( s 2
+ 4 )
 = cos 2t * sin t =

∫ cos 2u sin(t − u) du
0


t
1
=
2 ∫0
[sin(u + t ) − sin(3u − t )] du 
t
1 1 
=  − cos( u + t ) + cos(3u − t ) 
2 3 0

1 1 1 
=  − cos 2t + cos t + cos 2t − cos t 
2 3 3 
1
= (cos t − cos 2t )
3 

 1   1   1 
(viii)  We have £ −1   = e − t , £ −1   = £ −1  
 s +1  s ( s + 2)   ( s + 1)2 − 1 
   

= e − t sinh t 

\ By convolution theorem
 1  t
£ −1   = e * e sinh t = ∫e
−u
–t –t
⋅ e − ( t − u ) sinh(t − u ) du
 s( s + 1)( s + 2)  0 
t
t
= e − t ∫ sinh(t − u ) du =  −e − t cosh(t − u ) 
0
0 
= e −t
( cosh t − 1) 
 s 
(ix)  We have £ −1  2 2 
= cos wt
s +w 
\ By convolution theorem
 s2  t
£ -1 
2 2  = cos wt * cos wt =
(s + w ) 
2 ∫ cos wu cos w(t − u) du
0

t
1
[cos wt + cos ( 2wu − wt )] du
2 ∫0
=

t
1  1 
=  u cos wt + sin( 2wu − wt ) 
2  2w 0 
1  1 1 
= t cos wt + sin wt + sin wt 
2  2w 2w 
1
=
2w
[tw cos wt + sin wt ]


2.15 Applications of Laplace transform to solve linear differential equations, simultaneous linear differential equations and integral equations
The Laplace transform helps us in solving initial value problems with constant and variable coefficients without finding the complementary function and particular integral. It is also helpful in solving simultaneous differential equations and integral equations. Sometimes the following two theorems are also used to find the initial and final values of the function f(t).
Theorem 2.15  Initial Value Theorem
Let f(t) be continuous with a piecewise continuous derivative f′(t) in every finite interval in t ≥ 0, and let both f(t) and f′(t) be of exponential order. Then lim_{t→0⁺} f(t) = lim_{s→∞} sF(s), provided the limits exist.
Proof: Let f′(t) be of exponential order k > 0; then there exists M > 0 such that
|f′(t)| ≤ M e^{kt}
By Theorem (2.5), the Laplace transform of f′(t) exists and £{f′(t)} = sF(s) − f(0) ; s > k, where F(s) is the Laplace transform of f(t).
We have
|∫_0^∞ e^{−st} f′(t) dt| ≤ ∫_0^∞ e^{−st} |f′(t)| dt
≤ M ∫_0^∞ e^{−(s−k)t} dt
= −M [e^{−(s−k)t}/(s − k)]_0^∞
= M/(s − k)   since e^{−(s−k)t} → 0 as t → ∞ when s > k
∴ £{f′(t)} = [sF(s) − f(0)] → 0 as s → ∞
∴ lim_{s→∞} sF(s) = lim_{t→0⁺} f(t)

Theorem 2.16  Final Value Theorem
Let f(t) be continuous with a piecewise continuous derivative f′(t) in every finite interval in t ≥ 0, and let both f(t) and f′(t) be of exponential order. Then
lim_{t→∞} f(t) = lim_{s→0⁺} sF(s), provided both the limits exist.

Proof: By Theorem (2.5), the Laplace transform of f′(t) exists and £{f′(t)} = sF(s) − f(0) ; s > k.
Then
lim_{s→0⁺} [sF(s) − f(0)] = lim_{s→0⁺} £{f′(t)}
= lim_{s→0⁺} lim_{T→∞} ∫_0^T e^{−st} f′(t) dt
= lim_{s→0⁺} lim_{T→∞} { [e^{−st} f(t)]_0^T + ∫_0^T s e^{−st} f(t) dt }
= lim_{s→0⁺} lim_{T→∞} { e^{−sT} f(T) − f(0) + s ∫_0^T e^{−st} f(t) dt }
= lim_{T→∞} { f(T) − f(0) + 0 }   (interchanging the order of the limits and letting s → 0⁺)
= lim_{t→∞} f(t) − f(0)
∴ lim_{t→∞} f(t) = lim_{s→0⁺} sF(s)

Remark 2.4: Theorems (2.15) and (2.16) can be applied only when both the limits in the theorems exist and are finite.
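For a concrete illustration (not from the text), take f(t) = 3 + e^{−2t}, so that F(s) = 3/s + 1/(s + 2); the limits in Theorems 2.15 and 2.16 can then be checked with sympy as in the sketch below.

```python
# Sketch assuming sympy: f(t) = 3 + exp(-2t), F(s) = 3/s + 1/(s + 2) (illustrative choice).
import sympy as sp

s = sp.symbols('s', positive=True)
F = 3/s + 1/(s + 2)

print(sp.limit(s * F, s, sp.oo))   # 4 = f(0+), initial value theorem
print(sp.limit(s * F, s, 0))       # 3 = lim of f(t) as t -> oo, final value theorem
```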

Example 2.36: Using Laplace transform techniques, solve the following initial value problems
   (i)  y ′ + 4 y = t , y(0) = 1
   (ii)  y ′′ + ay ′ − 2a 2 y = 0, y(0) = 6, y ′(0) = 0
d2 y dy
 (iii)  2
− 3 + 2 y = 4 x + e 3 x where y(0) = 1, y ′(0) = −1
dx dx
  (iv)  y ′′ + 4 y = 0, y(0) = 1, y ′(0) = 6
d2 y dy
   (v)  2
+ 2 + 5 y = e − t sin t , y(0) = 1, y ′(0) = −1
dt dt
   (vi)  y ′′ + 4 y ′ + 4 y = 12t 2 e −2t , y(0) = 2, y ′(0) = 1
(vii)  y ′′ − 3 y ′ + 2 y = 4e 2t , y(0) = −3, y ′(0) = 5
d2 y dy 17 dy
(viii)  + 2 + 5 y = sin 2t ; y = 2 and = −4 when t = 0
dt 2 dt 2 dt
   (ix)  y ′′ + 9 y = 6 cos 3t , y(0) = 2, y ′(0) = 0
   (x)  y ′′ + 9 y = sin 3t , y(0) = 0, y ′(0) = 0

Solution: (i) Initial value problem is

y ′ + 4 y = t ; y ( 0) = 1 

Take Laplace transform of both sides


1
sY ( s) − y(0) + 4Y ( s) = where Y ( s) = £{ y(t )}
s2 
1 1
\ ( s + 4) Y ( s) = +1 s +y(40)) Y=(1s)) =
((∵ +1 (∵ y(0) = 1)
s2  s2
1 1 s 2 +1
\ Y ( s) = + =
s 2 ( s + 4) ( s + 4) s 2 ( s + 4)

By suppression method
s2 + 1 17 1 A
Let ≡ + +
s ( s + 4) 16( s + 4) 4 s 2 s
2

17 1
\ s 2 + 1 ≡ s 2 + ( s + 4) + As( s + 4)
16 4 
Equate coefficient of s2
17 1
1= + A ⇒ A= −
16 16
17 1 1
\ Y ( s) = + 2−
16( s + 4) 4 s 16 s 

 17 1 1  17 −4 t t 1 1
\ y(t ) = £ −1  + 2− −4 t
 = e + − = (17e + 4t − 1)
16( s + 4) 4 s 16 s  16 4 16 16

(ii)  Initial value problem is

y ′′ + ay ′ − 2a 2 y = 0, y(0) = 6, y ′(0) = 0 

Take Laplace transform of both sides

s 2Y ( s) − sy(0) − y ′(0) + a{sY ( s) − y(0)} − 2a 2Y ( s) = 0 where Y ( s) = £{ y(t )}

\  ( s 2 + as − 2a 2 ) Y ( s) = 6( s + a) (∵ y(0) = 6 and y ′(0) = 0) 


6( s + a) 4 2
Y ( s) = = +   (by suppression method)
( s − a)( s + 2a) ( s − a) ( s + 2a)

 4 2 
\ y(t ) = £ −1  +  = 4e + 2e
at −2 at

 ( s − a ) ( s + 2 a ) 


(iii)  Initial value problem is


d2 y dy
− 3 + 2 y = 4 x + e 3 x , y(0) = 1, y ′(0) = −1
dx 2 dx
Take Laplace transform of both sides
4 1
s 2Y ( s) − sy(0) − y ′(0) − 3{sY ( s) − y(0)} + 2Y ( s) = + where Y ( s) = £{ y( x )}
s 2
s−3 
4 1
\ ( s 2 − 3s + 2) Y ( s) = + +s−4 (∵ y(0) = 1, y ′(0) = −1)
s 2
( s − 3) 
( s 2
− 3s + 1) 4( s 2
− 1) ( s 2
− 3s + 2 ) − 1 4( s − 1)( s + 1)
\ ( s 2 − 3s + 2) Y ( s) = − = −
( s − 3) s 2
( s − 3) s2
1 1 4( s + 1)
or Y ( s) = − − 2  (∵ s 2 − 3s + 2 = ( s − 1)( s − 2))
s − 3 ( s − 1)( s − 2)( s − 3) s ( s − 2)
By suppression method
1 1 1 1
= + +
( s − 1)( s − 2)( s − 3) ( s − 1)(1 − 2)(1 − 3) ( s − 2)( 2 − 1)( 2 − 3) ( s − 3)(3 − 1)(3 − 2) 
1 1 1
= − +
2( s − 1) ( s − 2) 2( s − 3) 
4( s + 1) 4 4 ⋅ ( 2 + 1) A −2 3 A
and Let ≡ 2 + + = 2 + +
s ( s − 2) s (0 − 2) 4( s − 2) s s
2
s−2 s

\ 4 ( s + 1) ≡ −2( s − 2) + 3s + As( s − 2)
2

Equate coefficient of s2
0 = 3+ A
⇒ A = −3
4( s + 1) −2 3 3
\ = + −
s 2 ( s − 2) s 2 s − 2 s

1 1 1 1 2 3 3
\ Y ( s) = − + − + − +
s − 3 2( s − 1) ( s − 2) 2( s − 3) s 2 ( s − 2) s 
3 2 1 2 1
or Y ( s) = + 2− − +
s s 2( s − 1) ( s − 2) 2( s − 3) 
3 2 1 2 1 
\ y( x ) = £ −1  + 2 − − + 
 s s 2( s − 1) ( s − 2) 2( s − 3) 

1 1
= 3 + 2 x − e x − 2e 2 x + e 3 x
2 2 

1
or      y( x ) = (6 + 4 x − e x − 4 e 2 x + e 3 x ) 
2
(iv) y ′′ + 4 y = 0, y(0) = 1, y ′(0) = 6
Take Laplace transform of both sides

s 2Y ( s) − sy(0) − y ′(0) + 4Y ( s) = 0 where Y(s) = £{y(t)}

\ ( s 2 + 4) Y ( s) = s + 6 (∵ y(0) = 1, and y ′(0) = 6) 

s+6 s 2
\ Y ( s) = = + 3⋅ 2
s 2 + 4 s 2 + 22 s + 22 
 s 2 
\ y(t ) = £ −1  2 + 3⋅ 2  = cos 2t + 3 sin 2t
 s + 2 2
s + 22  
(v) y ′′ + 2 y ′ + 5 y = e − t sin t, y(0) = 1, y ′(0) = −1
Take Laplace transform of both sides
1
s 2Y ( s) − sy(0) − y ′(0) + 2{s.Y ( s) − y(0)} + 5Y ( s) = where Y(s) = £{y(t)}
( s + 1) 2 + 1
1 1
( s 2 + 2 s + 5) Y ( s) = + s −1+ 2 = + ( s + 1)  (∵ y(0) = 1, y ′(0) = −1)
( s + 1) 2 + 1 ( s + 1) 2 + 1

1 ( s + 1)
\ Y ( s) = +
( s 2 + 2 s + 2)( s 2 + 2 s + 5) ( s + 1) 2 + 22

( s 2 + 2 s + 5) − ( s 2 + 2 s + 2) ( s + 1)
= +
3( s 2 + 2 s + 2)( s 2 + 2 s + 5) ( s + 1) 2 + 22

1  1 1 2  ( s + 1)
=  − ⋅ +
3  ( s + 1) + 1 2 ( s + 1) + 2  ( s + 1) 2 + 22
2 2 2 2
  
1 1
\ y(t ) = £ −1{Y ( s)} = (e − t sin t − e − t sin 2t ) + e − t cos 2t
3 2 
1 −t
= e ( 2 sin t − sin 2t + 6 cos 2t )
6 
(vi) y ′′ + 4 y ′ + 4 y = 12t 2 e −2t ; y(0) = 2, y ′(0) = 1
Take Laplace transform of both sides
24
s 2Y ( s) − sy(0) − y ′(0) + 4{sY ( s) − y(0)} + 4Y ( s) = where £{ y(t )} = Y ( s)
( s + 2)3 
24 24
\ ( s 2 + 4 s + 4) Y ( s) = + 2s + 9 = + 2( s + 2) + 5  (∵ y(0) = 2, y ′(0) = 1)
( s + 2) 3
( s + 2)3

24 2 5
\ Y ( s) = + +
( s + 2) ( s + 2) ( s + 2) 2 
5

\ y(t ) = £ −1{Y ( s)} = e −2t (t 4 + 2 + 5t ) = ( 2 + 5t + t 4 )e −2t 

(vii) y ′′ − 3 y ′ + 2 y = 4e 2t , y(0) = −3, y ′(0) = 5


Take Laplace transform of both sides
4
s 2Y ( s) − sy(0) − y ′(0) − 3{sY ( s) − y(0)} + 2Y ( s) = where Y ( s) = £{{ y(t )}
( s − 2)

4 4
\ ( s 2 − 3s + 2) Y ( s) = − 3s + 5 + 9 = − (3s − 14) (∵ y(0) = −3, y ′(0) = 5) 
( s − 2) ( s − 2)

\ Y ( s) =
4

(3s − 14)
( s − 1) ( s − 2) 2
( s − 1) ( s − 2) 
 ( s − 1) − ( s − 2)  (3s − 14)
= 4 2 

 ( s − 1)( s − 2)  ( s − 1)( s − 2) 
4 4 (3s − 14)
= − −
( s − 2) 2 ( s − 1)( s − 2) ( s − 1)( s − 2)

4 (3s − 10)
= −
( s − 2) 2 ( s − 1)( s − 2)

4 7 4
= − +   (by suppression method)
( s − 2) 2 ( s − 1) ( s − 2)

\ y(t ) = £ −1{Y ( s)} 

= 4te 2t − 7e t + 4e 2t 

= e t ( 4te t + 4e t − 7) 

d2 y dy 17 dy
(viii) + 2 + 5 y = sin 2t ; y = 2 and = −4 when t = 0
dt 2 dt 2 dt
Take Laplace transform of both sides
17
s 2Y ( s) − sy(0) − y ′(0) + 2{sY ( s) − y(0)} + 5Y ( s) = 2 where Y (s)) = £{ y(t )}
( s + 4)
17
\ ( s 2 + 2 s + 5) Y ( s) = + 2s − 4 + 4  (∵ y(0) = 2, y ′(0) = −4)
( s + 4)
2

17 2s
or Y ( s) = 2 + 2
( s + 4)( s + 2 s + 5) ( s + 2 s + 5)
2


17 As + B Cs + D
Let ≡ 2 + 2 
( s + 2 s + 5)( s + 4) ( s + 2s + 5) ( s + 4)
2 2

\ 17 ≡ As( s 2 + 4) + B( s 2 + 4) + Cs( s 2 + 2 s + 5) + D( s 2 + 2 s + 5)
Equate coefficients of same powers of s
A + C = 0 (1)

B + 2C + D = 0 (2)

4A + 5C + 2D = 0 (3)

4B + 5D = 17 (4)

From equations (1) and (3)


A = –C, C + 2D = 0 (5)
From equations (2) and (4)
17 − 5 D 17 − 5 D
B = , + 2C + D = 0, i.e., 8C – D = –17 (6)
4 4
Solving equations (5) and (6)
A = 2, B = 3, C = –2, D = 1
2s + 3 ( −2 s + 1) 2s 4s + 3 ( −2 s + 1)
\ Y ( s) = + + = +
( s 2 + 2 s + 5) ( s 2 + 4) ( s 2 + 2 s + 5) ( s 2 + 2 s + 5) ( s 2 + 4)

4( s + 1) − 1 2s 1
or Y ( s) = − 2 + 2
{( s + 1) + 2 } ( s + 4) ( s + 4)
2 2

4 ( s + 1)
1 2 2s 1 2
Y ( s) = − . − 2 + . 2

( s + 1) + 2 2 ( s + 1) + 2 s + 4 2 s + 4
2 2 2 2
( ) ( )
1 1
\ y(t ) = £ −1 {Y ( s )} = 4e − t cos 2t − e − t sin 2t − 2 cos 2t + sin 2t
2 2 
( ) 1
(
= 2 2e − t − 1 cos 2t + 1 − e − t sin 2t 
2
)
(ix)  y ′′ + 9 y = 6 cos 3t , y (0 ) = 2, y ′ (0 ) = 0
Take Laplace transform of both sides
6s
s 2Y ( s ) − sy ( 0 ) − y′ ( 0 ) + 9Y ( s ) = 2 where Y ( s ) = £ { y ( t )}

s +9 ( ) 

or (s 2
)
+ 9 Y ( s) =
6s
+ 2 s (∵ y(0) = 2, y ′ (0 ) = 0 )
(s 2
+9 )

6s 2s 6s 2s
\ Y ( s) = + = + 
(s ) (s ) (s − 3i ) (s + 3i ) ( s )
2 2 2
2
+9
2
+9 2
+9

1  1 1  s
or Y ( s) =  − +2 2 
2i  ( s − 3i ) ( s + 3i )2 

2
 s + 32
( )
1 3it
\ y(t ) = £ −1 {Y ( s )} =
2i
{
te − te −3it + 2 cos 3t }
 e 3it − e −3it 
=t  + 2 cos 3t = t sin 3t + 2 cos 3t
 2i 

(x)  y ′′ + 9 y = sin 3t ; y (0 ) = 0, y ′ (0 ) = 0
Take Laplace transform of both sides
3
s 2Y ( s ) − sy ( 0 ) − y ′ ( 0 ) + 9Y ( s ) = where Y ( s ) = £ { y ( t )}
s 2 + 32 

\ (s 2
) 3
( 3
)
+ 9 Y ( s) = s 2 2+ 9 Y ( s) =(∵ 2y(0) = y ′ (0(∵
) =y0(0) ) = y ′ (0) = 0)
s +9

( s +9 ) ( )
3 3
\ Y ( s) = =
(s ) (s + 3i ) (s − 3i )
2 2 2
2
+9

2
1  1 1 
= 3  − 
 6i  s − 3i s + 3i   

−1  1 1 2 
=  + − 2 
12  ( s − 3i ) ( s + 3i )
2 2
s + 32 
 
−1  3it 2 
\ y ( t ) = £ −1 {Y ( s )} =  te + te −3it − sin 3t 
12  3  
−1   e + e 3it −3it
 2 
=  2t  − 3 sin 3t 
12   2  
1 1
=− (3t cos 3t − sin 3t ) = (sin 3t − 3t cos 3t )
18 18
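Any of these initial value problems can be reproduced with a computer algebra system. As an illustration, the sketch below (assuming Python with sympy, not part of the text) mirrors part (i), y′ + 4y = t, y(0) = 1; sympy may attach a Heaviside(t) factor to the inverse transform.

```python
# Sketch assuming sympy: Example 2.36 (i), y' + 4y = t, y(0) = 1.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
Y = sp.symbols('Y')

# transformed equation: s*Y - y(0) + 4*Y = 1/s^2
Ys = sp.solve(sp.Eq(s*Y - 1 + 4*Y, 1/s**2), Y)[0]
y = sp.inverse_laplace_transform(Ys, s, t)
print(sp.simplify(y))   # equivalent to (17*exp(-4*t) + 4*t - 1)/16, as found in part (i)
```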

Example 2.37: Find the general solution of the given problem y′′(t) + 9y(t) = cos2t using the
Laplace transform.
Solution: Given equation is    y′′ (t) + 9y(t) = cos2t

Take Laplace transform of both sides


s
s2Y(s) – sy(0)–y′(0) + 9Y(s) = where Y(s) = £{y (t)}
s +4
2

s
\ (s2+9)Y(s) = +c s + c2 where we assume c1 = y(0) and c2 = y′(0)
s +4 1
2

s cs c
or Y ( s) = 2 + 21 + 22

( )(
s +9 s +4 2
) (s +9 s +9) ( 
)
s 1 1  c1 s c2
=  2 − + + 2
5  s + 4 s2 + 9  s2 + 9 ( ) (
s +9

)
 1 s 1 s s c 3 
\ y ( t ) = £ −1  ⋅ 2 − ⋅ 2 2 + c1 2 2 + 2 ⋅ 2 2 
5 s + 2 5 s + 3 s +3 3 s +3 
2

1 1 c
= cos 2t − cos 3t + c1 cos 3t + 2 sin 3t
5 5 3 
 1 c 1
=  c1 −  cos 3t + 2 sin 3t + cos 2t
 5 3 5 
1
\ y (t ) = C1 cos 3t + C2 sin 3t + cos 2t is the general solution of given equation where C1 and C2
5
are arbitrary constants.

Example 2.38: Solve the initial value problems


3 sin t − cos t ; 0 < t < 2π
(i)  y ′′ + y ′ − 2 y = 
3 sin 2t − cos 2t ; t > 2π
with initial conditions y(0) = 1, y′(0) = 0
0 ; 0 ≤ t < π /2

(ii)  y ′ + y = f ( t ) ; y ( 0 ) = 2 where f ( t ) =  π
cos t ; t ≥ 2

(iii)  y ′′ + 2 y ′ + 5 y = δ (t − 2) , y (0 ) = 0, y ′ (0 ) = 0

(iv)  y ′′ + 8 y ′ + 17 y = f ( t ) , y ( 0 ) = 0, y ′ ( 0 ) = 0

where f (t) is periodic function with period 2p given by


1 ; 0 < t < π
f (t ) = 
0 ; π < t < 2π 
d2 y
(v)  + 4 y = E ( x − 2) where E is the unit step function and
dx 2
y ( 0 ) = 0 and y ′ ( 0 ) = 1

Solution:
(i)  In terms of unit step function

y ′′ + y ′ − 2 y = (3 sin t − cos t ) {u0 (t ) − u2π (t )} + (3 sin 2t − cos 2t ) u2π (t )



= (3 sin t − cos t ) u0 (t ) + ( −3 sin t + cos t + 3 sin 2t − cos 2t ) u2π (t )

= (3 sin t − cos t ) u0 (t ) +

{3 sin (2π − t ) + cos (2π − t ) − 3 sin (4π − 2t ) − cos (4π − 2t )} u (t ) 2π

= (3 sin t − cos t ) u0 (t ) +

{−3 sin (t − 2π ) + cos (t − 2π ) + 3 sin 2 (t − 2π ) − cos 2 (t − 2π )} u (t ) 2π

Take Laplace transform of both sides

s 2Y ( s ) − sy (0 ) − y ′ (0 ) + sY ( s ) − y (0 ) − 2Y ( s )

3 s  −3 s 6 s 
= − + e −2π s  2 + 2 + 2 − 2  where Y(s) = £ (y(t))
s2 + 1 s2 + 1  s + 1 s + 1 s + 4 s + 4
 −3 s 
( )
\ s 2 + s − 2 Y ( s) = s + 1 +
3
− 2
s
s +1 s +1
2
+ e −2π s  2 + 2
s
+ 2
6
− 2
 s + 1 s + 1 s + 4 s + 4 



(
∵ y (0 ) = 1, y ′ (0 ) = 0 )
\ Y (s) =
s +1
+
3− s

(3 − s) e −2π s + (6 − s) e −2π s
( )(
s2 + s − 2 s2 + 1 s2 + s − 2 ) (
s2 + 1 s2 + s − 2 )( ) (
s2 + 4 s2 + s − 2 )( )
1 1
s+
2 2  1 1 
= + − 2 − 2 

2
1  3
2

2
1  3 
2
s + 1 s + s − 2 
 s +  −    s + −
  
2 2 2 2 
 1 1 1 1  −2π s
+ 2 − 2 + 2 − 2 e
s +1 s + s − 2 s + s − 2 s + 4 
1 3
s+
2 2 1  1 1 2  −2π s
= + − + 2 − . 2 e

2
1  3
2

2
1  3
2
s + 1  s + 1 2 s + 22 
2

 s +  −    s +  −  
2 2 2 2
−t
 3t 3t   1 
\ y(t) = £ −1 {Y ( s )} = e 2  cosh + sinh  − sin t +  sin (t − 2π ) − sin 2 (t − 2π ) u2π (t )
 2 2   2 

− t 3t
 1 
\ y (t ) = e 2 e 2 − sin t +  − sin ( 2π − t ) + sin ( 4π − 2t ) u2π (t )
 2  

  = e t − sin t +  sin t − sin 2t  u2π (t ) 


1
 2 
e t − sin t ; 0 ≤ t < 2π

\    y (t ) =  t 1
e − sin t + sin t − sin 2t ; t > 2π
 2 
e t − sin t ; 0 ≤ t < 2π

i.e.,   y (t ) =  t 1
e − sin 2t ; t > 2π
 2 
π   π
(ii)  In terms of unit step function y ′ + y = cos t .uπ (t ) = sin  − t  uπ (t ) = − sin  t −  uπ (t )
2
 2  2
 2 2
 π
\ y ′ + y = − sin  t −  uπ (t ) ; y (0 ) = 2
 2 2

Take Laplace transform of both sides
π
− s 1
sY ( s ) − y (0 ) + Y ( s ) = −e 2
  where Y(s) = £ (y(t))
  
( s2 + 1 )
π
1
(∵ y ( 0 ) = 2 )
− s
\ ( s + 1)Y ( s) = 2 − e 2

s2 + 1
π
2 1 − s
\ Y (s) = − e 2
s + 1 ( s + 1) s 2 + 1 ( ) 
1 1 As + B
Let ≡ +
( s + 1) ( s 2
+1 ) 2 ( s + 1) s 2 + 1 ( )
\ 1≡
2
(
1 2
)
s + 1 + ( As + B ) ( s + 1)

Equate coefficients of s2 and s
1
0= +A
2 
1
\ A= −
2
0 = A+ B 
1
\ B = −A =
2
2  1 1 ( s − 1)  − π2 s
\ Y (s) = + − + e
( s + 1)  2 ( s + 1) 2 s 2 + 1  ( )


2  1 1 s 1  − π s
= + − + − e 2
( s + 1)  2 ( s + 1) 2 s 2 + 1 2 s 2 + 1  ( )

 1 − t − π  1 π π 
y(t) = £ −1 {Y ( s )} = 2e − t + − e  2  + cos  t −  − sin  t −   uπ ( t )
1
\
 2 2  2 2  2   2

 1 π −t 1 1 
= 2e − t +  − e 2 + sin t + cos t  uπ (t )
 2 2 2  2 
2e − t ; 0≤t <π /2

\ y (t ) =  − t 1  − t
π
 π
2e − 2  e − sin t − cos t  ; t≥
2

   2


(iii)  y′′ + 2y′ + 5y = δ (t–2) ; y(0) = 0, y′(0) = 0


Take Laplace transform of both sides

s 2Y ( s) − sy(0) − y ′(0) + 2 {sY ( s) − y(0)} + 5Y ( s) = e −2 s   where Y(s) = £{y(t)}



\ (s2 + 2s +5) Y (s) = e–2s    (Q y(0) = y′(0) = 0)
e −2 s 1 2
\ Y ( s) = = . e −2 s
s + 2 s + 5 2 ( s + 1)2 + 22
2

\
1
2
{
y (t) = £ −1 {Y ( s )} = e ( ) sin 2 ( t − 2 ) u 2 ( t )
− t −2


}
0 ; 0≤t<2

\ y (t ) =  1 − (t − 2)
 2 e sin {2 (t − 2)} ; t≥2


1 ; 0 < t < π
(iv) f (t) = 
0 ; π < t < 2π
and f (t) is periodic with period 2p

1 1
£ ( f (t )) =
π
f ( t ) dt =
)∫ )∫
− st
\ e e − st dt 
(1 − e −2π s
0 (1 − e −2π s 0

π
1  e − st  1 − e −π s 1
= = =
(1 − e ) −2π s  − s 
0 s 1− e( −2π s
)
s 1 + e −π s ( )
Now, the given differential equation is
y′′ + 8y′ + 17y = f (t) ; y(0) = 0, y′(0) = 0

Take Laplace transform of both sides


1
s 2Y ( s) − sy(0) − y ′(0) + 8 {sY ( s) − y(0)} + 17Y ( s) =   where £{y(t)} = Y(s)

(
s 1 + e −π s )
1
\ (s2 + 8s +17) Y(s) =    (Q y(0) = y′(0) = 0)
(
s 1 + e −π s )
\ Y (s) =
(1 + e ) − π s −1

=∑

( −1)n e − nπ s (1)
{
s (s + 4) + 1
2
} n = 0 s {( s + 4 ) + 1}
2

 1 
Now, £ −1  −4 t
 = e sin t
( )
2
 s + 4 + 1  
  t
  1  e −4 x 
( )
t
\  ∫0 £ −1 
−4 x
= e sin x dx =  −4 sin x − cos x 
 {
 s ( s + 4 ) + 1
2
 17 } 0

=
1
17
1 − e −4 t ( 4 sin t + cos t ) { }

\ From equation (1)

1 
y ( t ) = £ −1 {Y ( s )} = ∑ ( −1) 1 − e ( ) {4 sin ( t − nπ ) + cos ( t − nπ )} ⋅ unπ ( t )
n −4 t − nπ

17  
n=0 

{ }

1 
= ∑ ( −1) 1 − e 4 nπ e −4 t 4 ( −1) sin t + ( −1) cos t  .unπ (t )
n n n

17  
n= 0 

1 
=∑
 ( −1)n − e 4 nπ e −4t {4 sin t + cos t } .unπ (t )
n= 0 17 

1 
=∑
 ( −1)n − e 4 nπ g (t ) .unπ (t ) where g(t) = e–4t(4 sint + cost)
n = 0 17

k
1 
\ y (t ) = ∑ ( −1)n − e 4 nπ g (t ) ; kp < t < (k + 1) p ; k = 0,1, 2, ……
n= 0 17 



1 1
=  1 − ( −1) {
k +1
− g (t ) .
e 4π
k +1

}
−1 {( ) } ; kp < t < (k + 1) p  (sum of G. P)

17 2
 e 4π − 1 ( ) 
 

1  e (k ) − 1 
4 π +1
y (t ) = ( ) g (t ) ; kp < t < (k + 1) p; k = 0, 1, 2, 3….
k
\  −1 + 1 − 2 4π
34  e −1 
where g(t) = e–4t (4sin t + cos t)

d2 y
(v)  + 4 y = E ( x − 2) where E is the unit step function and y (0 ) = 0 and y ′ (0 ) = 1
dx 2
Take Laplace transform of both sides
1
s 2Y ( s ) − sy (0 ) − y ′ (0 ) + 4Y ( s ) = e −2 s where Y ( s ) = £ { y ( x )}
s
\ ( ) 1
s 2 + 4 Y ( s ) = e −2 s + 1 
s
(∵ y (0) = 0 and y′ (0) = 1)
1 1
or Y (s) = e −2 s + 2
s s2 + 4 ( s +4

) ( )
=
(s 2
+ 4 − s2) e −2 s +
1
4s s + 4 ( 2
) ( s2 + 4 )
1 1 s  −2 s 1 2
= − e +
4  s s 2 + 22  2 s 2 + 22 ( )
\ y( x ) = £ −1 {Y ( s )}

1 1
= 1 − cos {2 ( x − 2 )} E ( x − 2 ) + sin 2 x
4 2
1 2 1
= sin ( x − 2 ) E ( x − 2 ) + sin 2 x
2 2
Example 2.39: Solve the following boundary value problem using the Laplace transform
π
(D2 + 9) y =18t  ; y(0) = 0, y   = 1
 2
Solution: Take Laplace transform of both sides of the given differential equation

{ 18
}
s 2Y ( s ) − sy (0 ) − y ′ (0 ) + 9Y ( s ) = 2 where Y ( s ) = £ { y ( t )}
s
Let y′(0) = k (constant). Then

(s 2
)
+ 9 Y (s) =
18
s2
+ k (∵ x (0 ) = 0 ) (s 2
)
+ 9 Y (s) =
18
s2
+ k (∵ x (0 ) = 0 )

18 k 1 1  k
\ Y (s) = + = 2 2 − 2 + 2
s s +9
2
( 2
) s +92
s s + 9 s + 9

 2 k −2 3 
or y ( t ) = £ −1 {Y ( s )} = £ −1  2 + ⋅ 2 2
 s 3 s + 3 

\ y (t ) = 2t +
(k − 2) sin 3t
3 

π
Given y   = 1
 2
π ( k − 2) 3π ( k − 2)
\ 1 = 2. + sin =π−
2 3 2 3 
\ 3 = 3π − ( k − 2 )
k −2 
⇒ = π −1
3
\ y (t ) = 2t + (π − 1) sin 3t
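The boundary-value solution can be verified directly; the short sympy sketch below (an assumption, not part of the text) substitutes y(t) = 2t + (π − 1) sin 3t back into the equation and the boundary conditions.

```python
# Sketch assuming sympy: verify y'' + 9y = 18t, y(0) = 0, y(pi/2) = 1.
import sympy as sp

t = sp.symbols('t')
y = 2*t + (sp.pi - 1) * sp.sin(3*t)

print(sp.simplify(sp.diff(y, t, 2) + 9*y))            # 18*t
print(y.subs(t, 0), sp.simplify(y.subs(t, sp.pi/2)))  # 0 and 1
```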


Example 2.40: Using convolution theorem, solve the initial value problem y′′ +16y = cos 4t,
y(0) = 0, y′(0) = 0.
Solution: Given differential equation is y′′ + 16y = cos 4t
Take Laplace transform of both sides
s
s 2Y ( s ) − sy ( 0 ) − y′ ( 0 ) + 16Y ( s ) = 2 where Y ( s ) = £ { y ( t )}
s + 42 
s s
\ ( s + 16
2
( s )Y+(16
2
s))=
Y ( s)2 = 2 (∵ y((∵0) =y(y0′)(0=) y=′(00)) = 0 )
( s + 16()s + 16

)
s
\ Y ( s) =
( )
2
s 2 + 16


 s 
\ y(t ) = £ −1 {Y ( s)} = £ −1  
( )
2
 s + 16
2


1 −1  4 s 
= £  2 ⋅ 2
4  s +(4 2
s + 42 )( ) 


4 s
Now, £ (sin 4t ) = and £ (cos 4t ) = 2
s +422
s + 42 
\ By convolution theorem
y(t ) = sin 4t ∗ cos 4t 

1 t
4 ∫0
= sin 4u ⋅ cos 4(t − u ) du

1 t
{sin 4t + sin(8u − 4t )} du
8 ∫0
=


t
1 1 
= u sin 4t − cos(8u − 4t ) 
8  8 0 
1 1 1  t
=  t sin 4t − cos 4t + cos 4t  = sin 4t
8 8 8  8 
t
Hence, the required solution is y(t ) = sin 4t.
8
Now, we shall be taking examples of initial value problems with variable coefficients.

Example 2.41: Solve the differential equations


d2x dx
(i) t 2 − (t + 2) + 3 x = t − 1 ; x(0) = 0, x( 2) = 9
dt dt
(ii) ty′′ + 2y′ + ty = cost  ; y (0) = 1
(iii) y′′ + ty′ - 2y = 6 - t, y (0) = 0, y′ (0) = 1
(iv) ty′′ + 2ty′ + 2y = 2, y (0) = 1

d2x dx
Solution: (i)  t 2
− ( t + 2) + 3 x = t − 1 ; x ( 0 ) = 0, x ( 2) = 9
dt dt
Take Laplace transform of both sides
−d 2
ds
{ d
} 1 1
s X ( s) − sx(0) − x ′(0) + {sX ( s) − x(0)} − 2 {sX ( s) − x(0)} + 3 X (ss) = 2 −
ds s s

                           where X (s) = £ (x (t))
dX dX 1 1
\ − s2 − 2 sX ( s) + x(0) + s + X ( s) − 2sX ( s) + 2 x(0) + 3 X ( s) = 2 −
ds ds s s
 d d 
∵ x ′(0) and x(0) are constants, therefore x ′( 0) = x ( 0 ) = 0
ds ds 

1− s
or (−s 2
+s ) dX
ds
− 4( s − 1) X =
s 2 (s 2
)
+ 9 (∵
18
Y (xs()0=) = 20+) k (∵ x (0 ) = 0 )
s

dX 4 1
\ + X = 3
ds s s 
It is a linear differential equation.
4
I .F . = ∫ ds = e 4 log s = e log s = s 4
4

s 
1 4
\ Solution is s 4 X ( s) = ∫ ⋅ s ds + c where c is an arbitrary constant.
s3
1  s2  1 c
\ X ( s) = 4 
+ c = 2 + 4
s 2  2s s


t c 3
\ x(t ) = £ −1 { X ( s)} = + t 
2 6
Given x (2) = 9
c
\ 9 = 1+ ×8
6 
⇒ c=6
t 3
Hence,   x (t ) = +t
2 

(ii) ty′′ + 2y′ + ty = cost  ; y (0) = 1


Take Laplace transform of both sides
−d 2
ds
{ } d
s Y ( s) − sy(0) − y ′(0) + 2 {sY ( s) − y(0)} − {Y ( s)} = 2
ds
s
s +1
where Y (s) = £ (y (t))

dY dY s
\ − s2 − 2 sY + y(0) + 2 sY − 2 y(0) − = 2
ds ds s + 1 

\ (
− s2 + 1 ) dY
ds
= 1+
s
s +1
 2 (∵ y (0) = 1)
dY 1 s 1 1  d 1 
\ − = 2 + = 2 + − 
ds (
s +1 s +1
2
) (
2
s + 1 2 )
 ds s +1
2
( ) 

Take inverse Laplace transform of both sides
1
ty(t ) = sin t + t sin t
2 
 1 1
\ y(t ) =  +  sin t
 t 2

(iii) y′′ + ty′ – 2y = 6 – t  ; y(0) = 0, y′(0) = 1
Take Laplace transform of both sides
d 6 1
s2Y(s) – sy(0) – y′(0) − {sY ( s) − y(0)} − 2Y ( s) = − 2 ; where Y(s) = £{y(t)}
ds s s
dY dY 6 16 1
\ 0) y=(0), =y ′0(0, )y=′(10)) = 1)
s 2Y ( s)2Y−(1s−) −s 1 − s− Y ( s−) Y−(2sY) (−s2) Y=( s)−= 2 − 2 (∵ y((∵
ds ds s ss  s
dY 1 6
\ s + (3 − s 2 )Y ( s) = 2 − − 1
ds s s 
dY  3 
or
1
(
+  − s Y ( s ) = 3 − s 2 − 6 s + 1
ds  s  s
)


It is a linear differential equation


3   s2 
∫  s − s ds  3 log s − 
 2 2

I .F . = e =e = s3 ⋅ e − s 2

\ Solution is

( )
= ∫ − s 2 − 6 s + 1 e − s 2 ds + c where c is an arbitrary constant.
2 2
Y ( s) ⋅ s 3 e − s 2


(
= ∫ s − se − s
2
2
) ds + 6∫ (− se ) ds + ∫ e − s2 2 − s2 2
ds + c


− ∫ e − s 2 ds + 6e − s + ∫ e − s 2 ds + c (integrating by parts)
2 2 2 2
= s e−s 2 2

= (s + 6) e − s
2
2
+c

1 6 c s2 2
\ Y ( s) = + + e (1)
s 2 s3 s3

But 0 = lim y (t ) = lim sY ( s ) (initial value problem )


t →0+ s →∞ 
\ From (1), c = 0
1 6
Hence, Y ( s) = +
s 2 s3 

\ y(t) = £–1 {Y(s)} = t + 3t2

(iv) ty′′ + 2ty′ + 2y = 2, y(0) = 1


Take Laplace transform of both sides


d 2
ds
{ d
} 2
s Y ( s) − sy(0) − y ′(0) − 2 {sY ( s) − y(0)} + 2Y ( s) = where Y(s) = £{y(t)}
ds s
dY dY 2
\ − s2 − 2 sY ( s) + y(0) − 2 s − 2Y ( s) + 2Y ( s) =
ds ds s
2 s−2
or s 2 + 2s
dY
ds
(
+ 2 sY ( s) = 1 − =
s s
 ) (∵ y (0) = 1)
\
dY
+
2
Y = 2
( s − 2)
ds s + 2 s ( s + 2)

It is a linear differential equation.
2
∫ s + 2 ds
I .F . = e = e 2 log( s + 2 ) = ( s + 2) 2
\ Solution is
( s − 2) . s + 2 2 ds + c
( s + 2) 2 Y ( s ) = ∫ ( ) where c is an arbitrary constant
s ( s + 2)
2

s2 − 4  4
=∫ ds + c = ∫ 1 − 2  ds + c 
s 2
 s 
4
\ ( s + 2) 2 Y ( s ) = s + +c
s 

Y ( s) =
s 2 + 4 + cs
=
( )
s 2 + 4 s + 4 − 4 s + cs
\
s ( s + 2) 2 s ( s + 2) 2

1 4 c
or Y ( s) = − +
s ( s + 2) 2
( s + 2) 2

\ y(t) = £ {Y(s)} = 1 – 4t e
–1 –2t
+ ct e –2t (1)

Now, y(0) = 1 is satisfied by (1)


\ (1) is the required solution where c is an arbitrary constant.
Now, we solve some simultaneous linear differential equations.

Example 2.42: Solve the simultaneous equations


y1′ + 2 y2 = 2 sin 2t 
2 y1 + y2′ = 2(1 + cos 2t ) 
where y1(0) = -2, y2(0) = 1

Solution: Let Y1(s) = £{y1 (t)}, Y2(s) = £{y2(t)}


We take Laplace transform of both sides of given differential equations
4
sY1 ( s) − y1 (0) + 2Y2 ( s) = 2
s +4 
1 s 
2Y1 ( s) + sY2 ( s) − y2 (0) = 2  + 2
 s s + 4 

4
Hence, sY1 ( s) + 2Y2 ( s) = 2 − 2 (1)
s +4
2 2s
2Y1 ( s) + sY2 ( s) = 1 + + 2 (2)
s s +4
Solving these equations by Crammer rule
4
−2 2
s +4
2

2 2s 4s 4 4s
1+ + 2 s − 2s − 2 − − 2
s s +4
= s +4 s s +4
2
Y1 ( s) =
s 2 s2 − 4
2 s


1  4 2s 2 4
\ Y1 ( s) =  −2 s − 2 −  = − 2 − 2 − 2 
s −42
s s − 4 s − 4 s( s − 4 )
2s 2  s 1
=− − − −
s 2 − 4 s 2 − 4  s 2 − 4 s  
1 2 3s
= − −
s s2 − 4 s2 − 4 
4
s −2
s +4
2

2 2s
2 1+ + 2
s s +4 1  2s 2 8 
and Y2 ( s) = = 2 s + 2 + − 2 + 4
s 2 
s −4 
2
(
s +4 s +4 )
2 s

1  2( s 2 + 4 − 4) 8 
\ Y2 ( s) = 2 6 + s + − 2 
(
s − 4  )
s +4
2
(
s +4 ) ( )  

1 16  8 s  1 1 
= 6 + s + 2 − 2 = 2 + 2 − 2 2 − 2 

( s − 4 
2
) s +4 ( )  s − 4 s − 4  s − 4 s + 4

6 2 s
\ Y2 ( s) = + 2 + 2
s −4 s +4 s −4 
2

1 2 s 
\ y1 (t ) = £ −1 {Y1 ( s)} = £ −1  − 2 − 3⋅ 2  = 1 − sinh 2t − 3 cosh 2t
s s − 2 s − 22 
2

 2 2 s 
y2 (t ) = £ −1 {Y2 ( s)} = £ −1 3 ⋅ 2 + 2 +  = 3 sinh 2t + sin 2t + cosh 2t
 s −2 s + 22 s 2 − 22 
2

Example 2.43: Solve by Laplace transform


(D – 2)x – (D + 1)y = 6 e3t

(2D – 3)x + (D – 3)y = 6 e3t

Given: x = 3, y = 0 when t = 0.

Solution: Take Laplace transform of both sides of given differential equations


6
sX ( s) − x(0) − 2 X ( s) − {sY ( s) − y(0)} − Y ( s) =
s−3 
6
and 2 {sX ( s) − x(0)} − 3 X ( s) + {sY ( s) − y(0)} − 3Y ( s) =
s−3 
where X ( s) = £ { x(t )} and Y ( s) = £ { y(t )}


6
\ ( s − 2) X ( s) − ( s + 1)Y ( s) = +3 (1)
( s − 3)
6
and ( 2 s − 3) X ( s) + ( s − 3)Y ( s) = + 6 (2)
( s − 3)
 (∵ x (0) = 3 and y (0) = 0)
Now, multiplying equation (1) by (2s – 3) and (2) by (s – 2) and then subtracting we get
6
− [( 2 s − 3)( s + 1) + ( s − 3)( s − 2)]Y ( s) = [( 2s − 3) − ( s − 2)] + 3( 2s − 3) − 6( s − 2)
( s − 3)
6( s − 1) 9 s − 15 3(3s − 5)
⇒ −(3s 2 − 6 s + 3) Y ( s) = +3= =
( s − 3) s−3 ( s − 3) 
3s − 5
or −( s − 1) 2 Y ( s) =
( s − 3) 
− (3s − 5)
\ Y ( s) =  (3)
( s − 1) 2 ( s − 3)
By suppression method
−(3s − 5) −4 2 A
≡ + +
( s − 1) 2 ( s − 3) 4( s − 3) ( −2)( s − 1) 2 ( s − 1) 
1 1 A
≡− − +
( s − 3) ( s − 1) 2 ( s − 1) 

\ -(3s - 5) = -(s - 1)2 - (s - 3) + A(s - 1)(s - 3)


Equating coefficient of s 2

0 = –1+ A 
⇒ A = 1
1 1 1
\ Y ( s) = − − +
( s − 3) ( s − 1) ( s − 1)
2

\ y(t) = £ {Y(s)} = -e - te + e
– 1 3t t t

or y(t) = et(1 - t - e2t)


Substituting the value of Y(s) from equation (3) in equation (2), we get

( 2 s − 3) X ( s) =
6
+6+
( 3s − 5 )

( s − 3) ( s − 1)2 

\ X ( s) =
6
+
6
+
(3s − 5)
( s − 3) (2s − 3) (2s − 3) (2s − 3) ( s − 1)2 

2 4 6 2 ( 2 s − 3) − ( s − 1)
= − + + 
( s − 3) (2s − 3) (2s − 3) (2s − 3) ( s − 1)2
2 2 2 1
= + + −
( ) (
s − 3 2 s − 3) ( s − 1) (
2
2 s − 3) ( s − 1)

2 2 2 1 2
= + + + −
( s − 3) (2s − 3) ( s − 1)2 ( s − 1) (2s − 3) 
2 2 1
= + +
( s − 3) ( s − 1) ( s − 1)
2

\ x(t) = £ –1 {X(s)} = 2e3t + 2tet + et
= et (2e2t + 2t + 1)

Hence, the solution is


x(t) = et(2e2t + 2t + 1)
y(t) = et(1 - t - e2t)

Example 2.44: Solve the simultaneous equations

(D2 - 3)x – 4y = 0

x + (D2 + 1)y = 0
d dy dx
where D ≡ and t > 0, given that x = y = = 0 and = 2 at t = 0
dt dt dt
Solution: Take Laplace transform of both sides of given equations
s2X(s) – sx(0) – x′(0) – 3X(s) – 4Y(s) = 0

and X(s) + s2Y(s) – sy(0) – y′(0) + Y(s) = 0

where X(s) = £{x(t)}, Y(s) = £{y(t)}

\ (s2 – 3) X(s) – 4Y(s) = 2 (1)

and X(s) + (s2 + 1) Y(s) = 0 (2)

 (∵ x(0) = y(0) = y ′(0) = 0 and x ′(0) = 2)

Multiply equation (2) by (s2 – 3) and then subtract from equation (1)

-{4 + (s2 + 1)(s2 – 3)} Y(s) = 2



i.e., -(4 + s4 – 2s2 – 3)Y(s) = 2


−2 2
or Y ( s) = =− (3)
(s 4
− 2s + 1
2
) (s 2
−1 )
2

2 2
 1   1 1 
\ Y ( s) = −2   = −2  − 
 ( s + 1)( s − 1)   2( s − 1) 2( s + 1)  
1 1 1
\ Y ( s) = − − + 2
2( s − 1) 2 2( s + 1) 2 s −1

( )
1  1 1 2 
=−  + − 2 
2  ( s − 1)

2
( s + 1) 2
s − 1 

( )
1
\ y ( t ) = £ −1 {Y ( s)} = − (te t + te − t − 2 sinh t )
2 
1
= − ( 2t cosh t − 2 sinh t )
2 
= sinh t – t cosh t

From equations (2) and (3)

X ( s) =
(
2 s2 + 1 )= (
2 s2 + 1 ) =
( s − 1) 2 + ( s + 1) 2
(s ) ( s − 1)2 ( s + 1)2 ( s − 1) 2 ( s + 1) 2
2
2
−1

1 1
\ X (s) = +
( s + 1) 2 ( s − 1) 2

 e −t + et 
\ x(t ) = £ −1{ X ( s)} = te − t + te t = 2t  
 2 
\ x(t) = 2t cosh t

Hence, the solution is x(t) = 2t cosh t and y(t) = sinh t – t cosh t.
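The algebraic part of this example is convenient to reproduce with a computer algebra system; the sketch below assumes Python with sympy (the inverse transforms may carry a Heaviside(t) factor) and is not part of the text.

```python
# Sketch assuming sympy: transformed equations of Example 2.44 with the given initial values.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
X, Y = sp.symbols('X Y')

eq1 = sp.Eq((s**2 - 3)*X - 4*Y, 2)      # from (D^2 - 3)x - 4y = 0 with x'(0) = 2
eq2 = sp.Eq(X + (s**2 + 1)*Y, 0)        # from x + (D^2 + 1)y = 0
sol = sp.solve([eq1, eq2], [X, Y])

x = sp.inverse_laplace_transform(sol[X], s, t)
y = sp.inverse_laplace_transform(sol[Y], s, t)
print(sp.simplify(x))   # equivalent to 2*t*cosh(t)
print(sp.simplify(y))   # equivalent to sinh(t) - t*cosh(t)
```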

Example 2.45: Find the solution of the following problems


t
  (i)  y(t ) = t 2 + ∫ y(u ) sin(t − u ) du
0
t
 (ii)  F (t ) + 2 ⋅ ∫ F (t ) dt = cosh 2t
0

t
(iii)  y ′ + 5 y + 4∫ y(u ) du = f (t ) ; y(0) = 2
0

where graph of f (t) is


Figure 2.10: a rectangular pulse f(t) of height k on a < t < b (and zero elsewhere)
t
(iv)  f (t ) = t + e −2t + ∫ f (u )e 2( t − u ) du
0
t
Solution: (i) y(t ) = t 2 + ∫ y(u ) sin(t − u ) du
0

= t2 + y (t) * sint
Take Laplace transform of both sides
2 1
Y ( s) = + Y ( s) 2 where Y(s) = £{y(t)}
s 3
s +1 ( )
\  1  2
1 − 2  Y ( s) = 3
s +1 s

s2 2
⇒ Y ( s) = 3
( s +1
2
) s

\ Y ( s) =
(
2 s2 + 1 2 1 4!
+
)=
5
s s3 12 s5 
1  1 
\ y(t ) = £ −1 {Y ( s)} = t 2 + t 4 = t 2 1 + t 2 
12  12 
t
(ii)  F (t ) + 2 ⋅ ∫ F (t ) dt = cosh 2t
0

Take Laplace transform of both sides


F ( s) s
F ( s) + 2
= 2 where £ {F (t )} = F ( s)
s s −4 
s+2 s
\ F ( s) = 2
s s −4 
s2 s2
\ F ( s) = =
( s + 2) s 2 − 4 (
( s + 2) 2 ( s − 2)

)
s2 4 4 A
Let ≡ + +    (by suppression method)
( s + 2) 2 ( s − 2) 16( s − 2) ( s + 2) 2 ( −4) s + 2

1
\ s2 ≡ ( s + 2) 2 − ( s − 2) + A( s + 2)( s − 2) 
4
Equate coefficient of s2
1
1= +A
4 
3
⇒ A= 
4
1 1 3
\ F ( s) = − +
4( s − 2) ( s + 2) 2
4( s + 2)

\ F (t ) = £ −1
{ } 1 2t −2 t
F ( s) = e − te + e
4
3 −2t
4 
1 2t
\ F (t ) = e + (3 − 4t )e −2t 
4 

(iii) From the graph,

f (t) = k(u_a(t) − u_b(t))

∴ The equation is y′ + 5y + 4∫₀ᵗ y(u) du = k(u_a(t) − u_b(t))

Take Laplace transform of both sides

sY(s) − y(0) + 5Y(s) + (4/s)Y(s) = (k/s)(e^{−as} − e^{−bs}),  where Y(s) = £{y(t)}

∴ [(s² + 5s + 4)/s] Y(s) = (k/s)(e^{−as} − e^{−bs}) + 2    (∵ y(0) = 2)

∴ Y(s) = 2s/(s² + 5s + 4) + k(e^{−as} − e^{−bs})/(s² + 5s + 4)

        = 2s/[(s + 1)(s + 4)] + k(e^{−as} − e^{−bs})/[(s + 1)(s + 4)]

        = −2/[3(s + 1)] + 8/[3(s + 4)] + k[1/(3(s + 1)) − 1/(3(s + 4))](e^{−as} − e^{−bs})
                                                                      (by suppression method)

∴ y(t) = £⁻¹{Y(s)} = −(2/3)e^{−t} + (8/3)e^{−4t} + k[(1/3)e^{−(t−a)} − (1/3)e^{−4(t−a)}]u_a(t)
                                                 + k[−(1/3)e^{−(t−b)} + (1/3)e^{−4(t−b)}]u_b(t)

∴ y(t) = (2/3)(4e^{−4t} − e^{−t}) ;                                                      0 ≤ t < a

       = (2/3)(4e^{−4t} − e^{−t}) + (k/3){e^{−(t−a)} − e^{−4(t−a)}} ;                     a ≤ t < b

       = (2/3)(4e^{−4t} − e^{−t}) + (k/3){e^{−(t−a)} − e^{−4(t−a)} − e^{−(t−b)} + e^{−4(t−b)}} ;  t ≥ b

(iv) f(t) = t + e^{−2t} + ∫₀ᵗ f(u)e^{2(t−u)} du = t + e^{−2t} + f(t) * e^{2t}

Take Laplace transform of both sides

F(s) = 1/s² + 1/(s + 2) + F(s)·1/(s − 2),  where F(s) = £{f(t)}

∴ [1 − 1/(s − 2)] F(s) = 1/s² + 1/(s + 2)

∴ F(s) = (s − 2)/[(s − 3)s²] + (s − 2)/[(s + 2)(s − 3)]
       = (1/s)[2/(3s) + 1/(3(s − 3))] + 4/[5(s + 2)] + 1/[5(s − 3)]    (by suppression method)

∴ F(s) = 2/(3s²) + (1/3)[−1/(3s) + 1/(3(s − 3))] + 4/[5(s + 2)] + 1/[5(s − 3)]

       = 2/(3s²) − 1/(9s) + 14/[45(s − 3)] + 4/[5(s + 2)]

∴ f(t) = £⁻¹{F(s)} = (2/3)t − 1/9 + (14/45)e^{3t} + (4/5)e^{−2t}

       = (1/45)(30t − 5 + 14e^{3t} + 36e^{−2t})
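Part (iv) can also be spot-checked numerically by substituting the closed form back into the integral equation; the short sketch below assumes NumPy and SciPy are available.

```python
# Residual of f(t) = t + e^{-2t} + integral_0^t f(u) e^{2(t-u)} du for the solution above.
import numpy as np
from scipy.integrate import quad

def f(t):
    return (30*t - 5 + 14*np.exp(3*t) + 36*np.exp(-2*t)) / 45.0

def residual(t):
    integral, _ = quad(lambda u: f(u)*np.exp(2*(t - u)), 0.0, t)
    return f(t) - (t + np.exp(-2*t) + integral)

print([residual(t) for t in (0.5, 1.0, 2.0)])   # all of the order of the quadrature error
```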


Exercise 2.2

1. Find the inverse Laplace transform of the following functions


3( s 2 − 1) 2 12 4 s3 2s 2 − 1
 (i)  + + (ii)  (iii) 
2 s5 4 − 3s s    ( s 4 − a 4 )      ( s 2 + 1)( s 2 + 4)

2 s3 − 6 s + 5 s 5s + 3
(iv)  (v)  (vi) 
( ) ( s − 1)( s 2 + 2 s + 5)
2
s − 6 s + 11s − 6         s − a 2
3 2 2

  
Laplace Transform  | 253

1 + 2s 1
(vii)    (viii)  s + 2   (ix) 
( )
2
( s + 2) ( s − 1)
2 2
( s + 3)( s + 1) 3
s − a2
2

s 6 s3 − 21s 2 + 20 s − 7
 (x)  (xi) 
s 4 + s 2 + 1       ( s + 1)( s − 2)3

2. Find the inverse Laplace transforms of
 s2 + a2  e −1 s
   (iv) log 1 + 2 
−1 1
(i)  log  2 2
  (ii) tan ( s − 1)    (iii) 
s +b  s  s 
1  s 
(v)  (vi)   s log 2 + cot −1 s
( s + 1)3    
2
 s +1 
3. Find the inverse Laplace transforms of
e − s − 3e −3s e − cs e−s
 (i)  (ii)  ; c > 0   (iii) 
s2      s 2 ( s + a) s +1
−s 2 −s
e−3 s
se + π e  se − as 
(iv)    (v)  2    (vi)  2 ;a>0

( s + 8s + 25)
2
(s + π 2 )  s − w 2 
 s  1  32 s 
4. If £ −1  2 2 
= t sin t , find £ −1  2 
.
 ( s + 1)  2  (16 s + 1) 
2

5. State convolution theorem and hence evaluate


 1 
(i) £ −1  
 ( s − 2)( s − 3) 
 1 
(ii) £ −1  2 2 
 s ( s + 1) 
8s
6. Use convolution theorem to evaluate the Laplace transform of .
( s 2 + 16)( s 2 + 1) 2
t
7. Using the convolution theorem, evaluate ∫
0
sin u.cos(t − u ) du.
8. Use convolution theorem to solve the following integral equations
t
(i)  f (t ) = t + 6∫ f (u ).e ( t − u ) du
0
t
(ii)  f (t ) = 1 + t + 2∫ sin u. f (t − u ) du
0
t
(iii)  f (t ) = t + e −2t + ∫ f (u ) e 2( t − u ) du
0

9. Solve the following boundary value problem using the Laplace transform
y ′′(t ) + 9 y(t ) = cos 2t
y(0) = 1, y(π 2) = −1
254 | Chapter 2

10. Using Laplace transform techniques, solve the following differential equations
π
(i)  y ′′′ − 2 y ′′ + 5 y ′ = 0; y = 0, y ′ = 1 at t = 0 and y = 1 at t =
8
(ii)  ( D 2 + n2 ) x = a sin( nt + α ); x = Dx = 0 at t = 0.
d2 y dy dy
(iii)  + 4 + 8 y = 1 given that y = 1 and = 1 at x = 0
dx 2 dx dx
d4x
(iv)  − a 4 x = 0, where a is a contant and x = 1, x ′ = x ′′ = x ′′′ = 0 at t = 0.
dt 2
d2x
 (v)  2 + x = t cos 2t given that x(0) = x ′(0) = 0
dt
(vi)  y ′′′ − 3 y ′′ + 3 y ′ − y = t 2 e t , given y(0) = 1, y ′(0) = 0, y ′′(0) = −2
(vii)  y ′′ + 4 y ′ + 3 y = e − t , y(0) = y ′(0) = 1
(viii)  y ′′ + 2 y ′ − 3 y = 3, y(0) = 4, y ′(0) = −7
19 8
(ix)  y ′′ − 5 y ′ + 4 y = e 2t , y(0) = , y ′( 0) =
12 3
−t
 (x)  y ′′ + 4 y ′ + 13 y = e , y(0) = 0, y ′(0) = 2
(xi)  y ′′ + 3 y ′ + 2 y = tδ (t − 1), y(0) = 0, y ′(0) = 0
11. A particle moves in a line so that its displacement x from a fixed point 0 at any time t, is
d2x dx
given by + 4 + 5 x + 80 sin 5t
dt 2 dt
If initially particle is at rest at x = 0, find its displacement at any time t.
d2x
12. Solve + 4 x = φ (t ) with x(0) = x ′(0) = 0,
dt 2
where φ (t ) = 0 when 0 < t < π 

= sin t when π < t < 2π 
= 0 when t > 2π 
13. Using convolution theorem, solve the initial value problem y ′′ + 9 y = sin 3t , y(0) = 0, y ′(0) = 0
n 3t , y(0) = 0, y ′(0) = 0
t
14. Solve the initial value problem y ′ − 4 y + 3∫ y(u ) du = t , y(0) = 1
0

d2 y dy  dy 
15. Solve +t − y(t ) = 0 if y(0) = 0,   = 1.
dt 2 dt  dt  t = 0

d 2 y dy dy
16. Solve the problem t 2
+ + 4ty = 0 given that y = 3 and = 0 when t = 0.
dt dt dt
Laplace Transform  | 255

17. Using Laplace transform solve the following differential equation y″ + 2ty′ - y = t when
y(0) = 0 and y′(0) = 1
18. Solve the following simultaneous equations by Laplace transform
dx dy
(i)  − y = e t , + x = sin t ; given x(0) = 1, y(0) = 0.
dt dt
dx dy
(ii)  + y = sin t , + x = cos t ; given x = 2 and y = 0 when t = 0.
dt dt
dx dy dx dy
(iii)  + + x = e − t , + 2 + 2 x + 2 y = 0; given that x(0) = −1, y(0) = 1.
dt dt dt dt
19. Solve the following simultaneous equations
dx t dx dy
+ x + 3∫ y dt = cos t + 3 sin t , 2 + 3 + 6 y = 0
dt 0 dt dt
subject to the conditions x = –3, y = 2 at t = 0.
dx dy d2x dy
20. Solve the simultaneous equations − − 2 x + 2 y = 1 − 2t , 2
+ 2 + x = 0 given
dx dt dt dt dt
x = 0, y = 0, = 0 when t = 0
dt
d2x dy dx d 2 y
21. Solve the simultaneous equations + 5 − x = t , 2 − + 4 y = 2 given that when
dt 2 dt dt dt 2
dx dy
t = 0, x = 0, y = 0 = 0, =0
dt dt

Answers 2.2
4
3 3 2 1 4 t 4 1
1. − t + t − 4e 3 +
 (i)  (ii)  (cosh at + cos at )
2 2 16 πt 2
3 1 t 5 t
(iii)  − sin t + sin 2t (iv)  e − e 2t + e 3t    (v)  sinh at
2 2 2 2a
3 t t
(vi)  e t − e − t cos 2t + e − t sin 2t (vii)  (e − e −2t )
2 3
at cosh at − sinh at
(viii) 
8
{
1 −3t
}
e + ( 2t 2 + 2t − 1)e − t (ix) 
2a 3
2  3  t  t2 
  (x) sin  t  sinh (xi)  2e − t +  4 + 3t −  e 2t
3  2  2  2

2. (i) 
2
t
(cos bt − cos at ) (ii) 
1
− e t sin t (iii) 
t
J0 2 t ( )
2(1 − cos t ) 1 − cos t
(iv) 
t

1
(v) 
8  ( )
 3 − t 2 sin t − 3t cos t  (vi) 2
 t
256 | Chapter 2

0 ; 0 ≤ t <1

3. (i)  t − 1 ; 1≤ t < 3
 −2t + 8 ; t≥3

e − ( t −1)
(iii) 
1
a2
{ }
a (t − c ) − 1 + e − a ( t − c ) U (t − c) (iv) 
π (t − 1)
⋅ U (t − 1)

1 −4 ( t − 3)   1 
(v)  e sin π t U  t −  − U (t − 1) 
sin(3t − 9)U (t − 3) (vi) 
3   2  
(vii)  cosh {w (t − a )} ⋅ U (t − a)

t
4. t sin 5. (i)  e 3t − e 2t  (ii) t + te − t − 2 + 2e − t
4
1 t
6. (60t sin t + 8 cos 4t − 8 cos t ) 7. 
2
sin t
225

8. (i) 
1
49
( )
6e 7t + 7t − 6 (ii) 2e − t − 1
t
(iii)
1
45
(
14e 3t − 5 + 30t + 36e −2t )
1
9. y(t ) = (cos 2t + 4 cos 3t + 4 sin 3t )
5
a
10. (i)  y(t ) = 1 + e t (sin 2t − cos 2t )      (ii) x(t ) = {cos α sin nt − nt cos(α + nt )}
2n 2
1 e −2 x 1
(iii)  y( x ) = − (cos 2 x − 3 sin 2 x )    (iv) x(t ) = (cosh at + cos at )
8 8 2
1  t2 t5 
(v)  x(t ) = ( 4 sin 2t − 5 sin t − 3t cos 2t )   (vi)  y(t ) = e t 1 − t − − 
9  2 60 
7 − t 1 − t −3 −3t
(vii)  y(t ) = e + te (viii) y(t ) = −1 + 2e t + 3e −3t
e       
4 2 4
−1 2t 14 t 19 4 t 1 −2t t
(ix)  y(t ) = e + e + e            (x)  y(t ) = e (3e − 3 cos 3t + 19 sin 3t )
2 9 36 30
(xi)  y(t ) = e − ( t −1)U (t − 1) − e −2( t −1)U (t − 1)

11. x(t ) = −2(cos 5t + sin 5t ) + 2e −2t (cos t + 7 sin t )

0 for 0 < t < π


1 1
 sin 2t + sin t for π < t < 2π
12. x(t ) =  6 3
1
 sin 2t for t < 2π .
3
Laplace Transform  | 257

1
13. y(t ) = (sin 3t − 3t cos 3t )
18
1 5
14. y(t ) = − e t + e 3t   15. y = t   16. y = 3 J 0 ( 2t )    17. y = t
3 3
1 t 1
18. (i)  x(t ) = (e + cos t + 2 sin t − t cos t ), y(t ) = (t sin t − et + cos t − sin t )
2 2
−t −t
  (ii)  x(t ) = e + e , y(t ) = sin t + e − e
t t

(iii)  x(t ) = −e − t (cos t + sin t ), y(t ) = e − t (1 + sin t )


2
19. x(t ) = e −3t + sin t − 2 cos t , y(t ) = 2e −3t − sin t
3
20. x(t ) = 2(1 − e − t − te − t ), y(t ) = 2 − t − 2(t + 1)e − t
21. x(t ) = −t + 5 sin t − 2 sin 2t , y(t ) = 1 − 2 cos t + cos 2t

2.16 Applications of Laplace Transform to Engineering Problems
The Laplace transform is utilized as a tool for solving some engineering problems. We now discuss some of these applications.

2.16.1 Problems Related to Electrical Circuits
Consider the LCR circuit shown in Figure 2.11. It contains a resistance R measured in ohms (Ω), an inductance L measured in henrys and a capacitor C measured in farads. The total voltage drop is equal to the voltage of the battery; if there is no battery in the system, the total voltage drop will be zero. If i(t) is the current and q(t) the charge at any time t, then i(t) = dq/dt. The voltage drop across the resistance R is Ri = R dq/dt, the voltage drop across the inductance L is L di/dt = L d²q/dt², and the voltage drop across the capacitor of capacity C is q/C = (1/C)∫₀ᵗ i(t) dt.
According to Kirchhoff’s laws, we have
(i) the total incoming current at a point is always equal to the total outgoing current;
(ii) the total voltage drop in a closed circuit or in a mesh is equal to the total voltage of the electric source, i.e., battery etc.

Figure 2.11 (series LCR circuit with e.m.f. source E)
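For readers who wish to experiment, the mesh equation L q″ + R q′ + q/C = E(t) can be set up and solved symbolically. The sketch below assumes SymPy is available; the component values are illustrative only and are not taken from the text.

```python
# Sample discharge problem: E = 0, q(0) = C*E0, q'(0) = 0 (underdamped values chosen).
import sympy as sp

t = sp.symbols('t', positive=True)
q = sp.Function('q')
L, R, C, E0 = sp.Rational(1), sp.Rational(2), sp.Rational(1, 10), 5   # illustrative values

ode = sp.Eq(L*q(t).diff(t, 2) + R*q(t).diff(t) + q(t)/C, 0)
sol = sp.dsolve(ode, q(t), ics={q(0): C*E0, q(t).diff(t).subs(t, 0): 0})
print(sp.simplify(sol.rhs))     # decaying oscillation exp(-t)*(3*cos(3t) + sin(3t))/6
```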
258 | Chapter 2

Example 2.46: A condenser of capacity C is charged to potential E and discharged at t = 0 through an inductance L and resistance R. Show that the charge q at any time t is given by
q = (CE/η) e^{−μt}[μ sin ηt + η cos ηt], where μ = R/2L and η² = 1/LC − R²/4L²;  L > CR²/4
Solution:

Figure 2.12 (the condenser C discharging through L and R in series)

If q(t) is the charge at any time t, then by Kirchhoff’s law, q(t) satisfies the differential equation
d 2q dq q
L +R + =0
dt dt C
Take Laplace transform of both sides

( ) 1
L s 2 Q ( s ) − sq (0 ) − q ′ (0 ) + R ( sQ ( s ) − q (0 )) +
C
Q (s) = 0

where Q (s) = £(q(t))


But it is given that q(0) = EC and q′(0) = i(0) = 0
 2 1
\  Ls + Rs +  Q ( s ) = CE ( Ls + R )
C 
 2 R 1   R
\  s + s +  Q ( s ) = CE  s + 
L CL  L 
R 1 R2
Let = 2µ, = µ 2 + η2 = 2 + η2
L CL 4L
 CR 2 1 R2 2
 as L > 4 , so CL − 4 L2 > 0 and hence can be assumed η 

( )
\ s 2 + 2 µ s + µ 2 + η 2 Q ( s ) = CE ( s + 2 µ )

 s+µ µ η 
\ Q ( s ) = CE  + 
 ( s + µ ) + η η ( s + µ ) + η 
2 2 2 2

Take inverse Laplace transform of both sides
 µ  CE − µt
q (t ) = CE e − µt cos η (t ) + e − µt sin ηt  = e [η cos ηt + µ sin ηt ]
 η  η
Laplace Transform  | 259

CE − µt
\ q (t ) = e  µ sin (ηt ) + η cos (ηt ) 
η
R 2 1 1 R2
where µ= ,η = − µ2 = − 2⋅
2L CL CL 4 L 

Example 2.47: An impulsive voltage Eδ(t) is applied to a circuit consisting of L, C, R in series with zero initial conditions. If i is the current at any subsequent time t, find the limit of i as t → 0.
Solution:

Figure 2.13 (series LCR circuit driven by the impulsive voltage Eδ(t))

By Kirchhoff’s law, i(t) satisfies the differential equation
L di/dt + Ri + (1/C)∫₀ᵗ i dt = Eδ(t)
Take Laplace transform of both sides
L(sI(s) − i(0)) + RI(s) + I(s)/(Cs) = E,  where I(s) = £(i(t))
Initially i(0) = 0
∴ [Ls + R + 1/(Cs)] I(s) = E
∴ I(s) = E/[Ls + R + 1/(Cs)]
By the initial value theorem
lim_{t→0⁺} i(t) = lim_{s→∞} sI(s) = lim_{s→∞} Es/[Ls + R + 1/(Cs)] = lim_{s→∞} E/[L + R/s + 1/(Cs²)] = E/L.
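The final limit can be reproduced mechanically with a computer algebra system; a one-line check, assuming SymPy is available:

```python
# Initial value theorem: lim_{t->0+} i(t) = lim_{s->oo} s*I(s).
import sympy as sp

s, E, L, R, C = sp.symbols('s E L R C', positive=True)
I = E / (L*s + R + 1/(C*s))          # transform of the current found above
print(sp.limit(s*I, s, sp.oo))       # E/L
```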
Example 2.48: Find the current i(t) in the circuit in figure given below, if a single square wave
with voltage V0 is applied. The circuit is assumed to be quiescent before the square wave is
­applied.
260 | Chapter 2

Figure 2.14 (a) the single square-wave voltage v(t) of height V₀ on a ≤ t ≤ b; (b) the RC circuit to which it is applied

Solution: V(t) = V0 ; a ≤ t ≤ b

= 0; otherwise

\ V (t ) = V0 U (t − a ) − U (t − b )

If i(t) is the current in the system at any time t, then by Kirchhoff’s law
t
1
idt = V (t ) = V0 U (t − a ) − U (t − b )
C ∫0
Ri +

Take Laplace transform of both sides
1 V
RI ( s) + I ( s) = 0 e − as − e − bs    where I(s) = £(i(t))
   Cs s
 1  V0 − as
\  s +
RC
 I ( s) =
R
e − e − bs ( )

V  e − as
e − bs 
I ( s) = 0  −
\ R 1 1 
 s + s+ 
RC RC 

Take inverse Laplace transform of both sides
V0  − RC
1
(t − a) −
1
(t − b) 
i (t ) =  e U ( t − a ) − e RC
U (t − b )
R 
a
V0 RC V b
Let C1 = e , C2 = 0 e RC
R R 
t t
− −
\ i(t ) = C1e RC
U (t − a ) − C 2 e RC
U (t − b )

0 ; 0<t<a
 t
 −
\ i(t ) = C1e RC ; a≤t<b
 t
(C − C )e − RC ; t ≥ b
 1 2

Laplace Transform  | 261

a
V0 RC V b
where C1 = e , C2 = 0 e RC 
R R

Example 2.49: Given that i = q = 0 at t = 0, find charge q and current i in the below circuit for t > 0.

Figure 2.15

Solution: If an inductance L henry, a resistance R Ω and a capacitor of capacity C farad are in the circuit having e.m.f. source E, then by Kirchhoff’s law
L d²q/dt² + R dq/dt + q/C = E
It is given that L = 1, R = 6, C = 1/9, E = sin t
∴ d²q/dt² + 6 dq/dt + 9q = sin t
Take Laplace transform of both sides
s²Q(s) − sq(0) − q′(0) + 6(sQ(s) − q(0)) + 9Q(s) = 1/(s² + 1),  where Q(s) = £(q(t))
It is given that q(0) = 0, q′(0) = i(0) = 0

∴ (s² + 6s + 9) Q(s) = 1/(s² + 1)

∴ Q(s) = 1/[(s² + 1)(s + 3)²]

∴ Q(s) = 1/[10(s + 3)] · [1/(s + 3) − (s − 3)/(s² + 1)] = 1/[10(s + 3)²] − (s + 3 − 6)/[10(s + 3)(s² + 1)]

        = 1/[10(s + 3)²] − 1/[10(s² + 1)] + (3/50)[1/(s + 3) − (s − 3)/(s² + 1)]

        = 1/[10(s + 3)²] − 1/[10(s² + 1)] + 3/[50(s + 3)] − (3/50)·s/(s² + 1) + 9/[50(s² + 1)]

        = 3/[50(s + 3)] + 1/[10(s + 3)²] + 2/[25(s² + 1)] − 3s/[50(s² + 1)]

Take inverse Laplace transform of both sides

q(t) = (3/50)e^{−3t} + (t/10)e^{−3t} + (2/25) sin t − (3/50) cos t
     = (1/50)e^{−3t}(5t + 3) + (1/50)(4 sin t − 3 cos t)

i(t) = q′(t) = (1/50)(5 − 9 − 15t)e^{−3t} + (1/50)(3 sin t + 4 cos t)
     = −(1/50)e^{−3t}(15t + 4) + (1/50)(3 sin t + 4 cos t)
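A numerical integration of the same initial value problem gives an independent check of q(t) and i(t); a minimal sketch, assuming NumPy and SciPy are available:

```python
# q'' + 6q' + 9q = sin t, q(0) = q'(0) = 0, compared with the closed forms above.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z):
    q, i = z
    return [i, np.sin(t) - 6*i - 9*q]

T = np.linspace(0.0, 5.0, 6)
num = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.0], t_eval=T, rtol=1e-10, atol=1e-12)

q_exact = (np.exp(-3*T)*(5*T + 3) + 4*np.sin(T) - 3*np.cos(T)) / 50
i_exact = (-np.exp(-3*T)*(15*T + 4) + 3*np.sin(T) + 4*np.cos(T)) / 50
print(np.max(np.abs(num.y[0] - q_exact)),       # both maxima are tiny
      np.max(np.abs(num.y[1] - i_exact)))
```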
Example 2.50: An inductance L, a resistance R and a capacitor of capacity C are connected in series in a circuit having e.m.f. δ(t). Find the charge at any time t given that initially the current and charge are zero.
Solution:

Figure 2.16 (L, R and C in series with e.m.f. source δ(t))

If q(t) is the charge at time t, then by Kirchhoff’s law


d 2q dq q
L 2 + R + = δ (t )
dt dt C
Take Laplace transform of both sides

( )
L s 2 Q( s) − sq(0) − q ′(0) + R ( sQ( s) − q(0) ) +
1
C
Q( s) = 1   where Q(s) = £(q(t))

It is given that  q(0) = 0, q′(0) = i(0) = 0

\  2 1
 Ls + Rs +  Q( s) = 1
C 
 2 R 1  1
or  s + s +  Q( s) =
L CL L
R 1 1 R2 4 L − CR 2
Let = µ > 0 and λ 2 = − µ2 = − 2 =
2L LC LC 4 L 4 L2C 
Laplace Transform  | 263

Here l be zero or positive or purely imaginary according as 4 L = CR 2 , 4 L > CR 2 , 4 L < CR 2


respectively
\ ( )
s 2 + 2 µ s + µ 2 + λ 2 Q ( s) =
1
L
1 1
\ Q ( s) = ⋅
L ( s + µ )2 + λ 2

If l = 0, i.e., 4L = CR2 then
1 1
Q ( s) =
L ( s + µ )2

Take inverse Laplace transform
t
q(t ) = e − µt
L
t − µt R
\ q(t ) = e , when 4L = CR2 where µ =
L 2L 
If l > 0,
i.e., 4L > CR2
1 λ
then Q( s) = ⋅
L λ ( s + µ )2 + λ 2

Take inverse Laplace transform
1 − µt
q(t ) = e sin λ t
L λ 
1 − µt R 4 L − CR 2
\ q(t ) = e sin λ t ,   when 4L > CR2 where µ = , λ=
Lλ 2L 2L C 
If l < 0, i.e., 4L < CR 2

then replacing l by il
1 − µt R CR 2 − 4 L
q(t ) = e sin iλ t when 4L < CR2 where µ = ; λ=
Liλ 2L 2L C 
1 − µt
= e sinh λ t  (∵ sin ix = i sinh x )

R
\ If µ =
2L 
t − µt
q(t ) = e if 4L = CR2
L
1 − µt 4 L − CR 2
= e sin λ t if 4L > CR2 where λ =
Lλ 2L C 
1 − µt CR 2 − 4 L
= e sinh λ t ; if 4L < CR2 where λ =
Lλ 2L C 
264 | Chapter 2

Example 2.51: If i(0) = 0 then find the current i(t) at any time t in RL-network shown in the
below figure.
Figure 2.17 (RL-network: the source E(t) = 1 in series with R carries current i, which divides at node A into i₁ through L and i₂ through the second R)

Solution: Since incoming current at A = outgoing current


\ i = i1 + i2(1)
In the mesh containing both resistances and the battery, by Kirchhoff’s law
Ri + Ri₂ = 1
Put the value of i₂ from (1)
Ri + R(i − i₁) = 1
∴ 2Ri − Ri₁ = 1    (2)
In the mesh containing R, L and the battery, by Kirchhoff’s law
L di₁/dt + Ri = 1    (3)
Take Laplace transform of both sides of (2) and (3)
1
2 RI ( s) − RI1 ( s) =
s
1
L ( sI1 ( s) − i1 (0) ) + RI ( s) =   where I(s) = £(i(t)), I1(s) = £(i1(t))
s
Since i(0) = 0
\ i1(0) = 0
1
\ 2 RI ( s) − RI1 ( s) = (4)
s
1
RI ( s) + LsI1 ( s) = (5)
s
By Cramer’s rule
R I(s) = [1/(2Ls + R)] [(1/s)(Ls) − (−R)(1/s)] = (Ls + R)/[s(2Ls + R)]

∴ I(s) = (Ls + R)/[sR(2Ls + R)]
 R 
− +R
1 1 2 
=  +     (by suppression method)
R s R
− ( 2 Ls + R) 
 2L 
1 L
= −
Rs R ( 2 Ls + R )

1 1 1
= −
Rs 2 R R
s+
2 L 
Take inverse Laplace transform of both sides
1 1 − 2RtL 1  −
Rt

i (t ) = − e =  2 − e 2L

R 2R 2R 

2.16.2 Problems Related to Deflection of a Loaded Beam
Let y be the deflection of a loaded beam at distance x from one end of the beam. The radius of curvature is given by
R = [1 + (dy/dx)²]^{3/2} / (d²y/dx²)
Since the deflection is small, dy/dx will be small, and hence (dy/dx)² will be negligible. Thus, we can take
d²y/dx² = 1/R
If M is the moment of the forces acting on the beam about the point at distance x from one end, then
E/R = M/I,
where I is the moment of inertia about the neutral axis, i.e., the axis passing through the centre of gravity of the beam, and E is Young’s modulus of the material of which the beam is formed, which is assumed to be the same throughout the beam.
∴ EI d²y/dx² = M
dM/dx = EI d³y/dx³ is the shearing stress at the point and d²M/dx² = EI d⁴y/dx⁴ is the load per unit length on the beam at distance x from one end.
266 | Chapter 2

When a load w₀ acts at the point x = a of the beam, this can be considered as the limiting case of a uniform loading w₀/ε per unit length over the portion of the beam between x = a and x = a + ε, where ε is very small. Thus
w(x) = w₀/ε ;  a < x < a + ε
     = 0 ;  otherwise
∴ w(x) = w₀ δ(x − a)
It should be noted that the boundary conditions will be as follows:
(i) If an end is a clamped, built-in or fixed end, then there cannot be deflection at this end, and at this end the tangent to the deflection curve will be the neutral axis. Thus, at this end y = dy/dx = 0.
(ii) If an end is a hinged or simply supported end, then there cannot be deflection at this end, and also the sum of moments M about this end will be zero. Thus, at this end y = 0, d²y/dx² = 0.
(iii) If an end is a free end, then the sum of moments M and the shearing stress dM/dx will be zero at this end, and thus, at this end d²y/dx² = 0, d³y/dx³ = 0.
Example 2.52: A weightless beam of length l is freely supported at its both ends and a concen-
trated load W acts at a point x = a on it measured from one end. Find the resultant deflection and
also the deflection under the load.
Solution: If y is deflection at distance x from A, then differential equation of deflection (load
equation) is given by


Figure 2.18 (beam AB freely supported at both ends; concentrated load W at distance a from end A)

d4 y
    EI = W δ ( x − a ) (1)
dx 4
where E is Young’s modulus of the material of the beam assumed to be same throughout the beam
and I is moment of inertia of beam about neutral axis.
Take Laplace transform of both sides of (1)
( )
EI s 4Y ( s) − s3 y(0) − s 2 y ′(0) − sy ′′(0) − y ′′′(0) = We − as   where Y(s) = £(y (x))
Now, end A is freely supported

\ y(0) = y″(0) = 0
Laplace Transform  | 267

Assuming y′(0) = c1, y″′(0) = c2

we have (
EI s 4Y ( s) − c1 s 2 − c2 = We − as ) 
− as
We
\ s 4Y ( s) = + c1 s 2 + c2
EI 
We − as c1 c2
\ + + Y ( s) =
EIs 4 s 2 s 4 
Take inverse Laplace transform of both sides
W ( x − a)
3
x3
U ( x − a ) + c1 x + c2
y( x ) =
EI 3! 3! 
 x 3

c1 x + c2 ; 0≤ x<a
 6
\ y( x ) =   (2)
W ( x − a )
3
x3
 6 EI + c1 x + c2 ; a≤ x≤l
6
Now, end B is freely supported
\ y(l) = y″(l) = 0
Now,
W (l − a )
3
l3
y(l ) = + c1l + c2 = 0 (3)
6 EI 6
W ( x − a)
y ′′( x ) = + c2 x
EI 
W (l − a )
\ y ′′(l ) = + c2 l = 0
EI 
W (l − a )
\ c2 = − (4)
EIl
\ from (3)
W (l − a ) W (l − a ) l 2
3

+ c1l − =0
6 EI 6 EI 
W (l − a )
3
Wl (l − a ) (l − a) (2al − a 2 )
\ c1 = − + =W (5)
6 EIl 6 EI 6 EIl
Substituting from (4) and (5) in (2), deflection at distance x from A is

y( x ) =
(
−W (l − a ) a 2 − 2al x ) −
W (l − a ) x 3
=
W (l − a ) x
 2al − a 2 − x 2  ;  0 ≤ x < a
6 EIl 6 EIl 6 EIl
W ( x − a) W (l − a ) x
3

=
6 EI
+
6 EIl
(2al − a 2
)
− x 2 ;   a ≤ x ≤ l

268 | Chapter 2

For deflection under the load, we have x = a.


\ Deflection under the load is
W (l − a ) a Wa 2 (l − a )
2

y ( a) =
6 EIl
(2al − a 2
− a2 = ) 3EIl

Example 2.53: A weightless beam of length L has its ends clamped at x = 0 and x = L. A concentrated load W acts vertically downward at the point x = L/3. Find the resulting deflection.
Solution: If y is deflection at distance x from A, then the differential equation of deflection (load equation) is given by

Figure 2.19 (beam AB of length L clamped at both ends; concentrated load W at x = L/3)

d4 y  L
EI = Wδ  x −   (1)
dx 4
 3
where E is Young’s modulus of the material of the beam assumed to be same throughout the beam
and I is moment of inertia of beam about neutral axis.
Take Laplace transform of both sides of  (1)
L

( )
− s
EI s 4Y ( s) − s3 y(0) − s 2 y ′(0) − sy ′′(0) − y ′′′(0) = We 3
  where Y(s) = £ (y (x))

End A is clamped
\   y(0) = y′(0) = 0
Let y″(0) = c1, y″′(0) = c2
L

( )
− s
\ EI s 4Y ( s) − c1 s − c2 = We 3

L
− s
W e c c 3
\ Y ( s) =
⋅ 4 + 13 + 24
EI s s s 
Take inverse Laplace transform of both sides
3
W  L  L  c1 x 2 c2 x 3
y( x ) =  x −  U  x −  + +
6 EI 3 3 2 6 

 x2 x3 L
 c1 + c2 ; 0≤ x<
 2 6 3
\ y( x ) =  3 2 3
(2)
 W  x − L + c x + c x L
; ≤x≤L
 6 EI  3
 1
2
2
6 3
Laplace Transform  | 269

End B is clamped
\ y(L) = y′(L) = 0
4WL3 c1 L2 L3
y( L) = + + c2 = 0 (3)
81EI 2 6
2
W  L c2 2 L
y ′( x ) =  x −  + c1 x + x ; < x ≤ L 
2 EI 3 2 3
2WL2 c
\ y ′( L) = + c1 L + 2 L2 = 0 (4)
9 EI 2
(3) and (4) can be written as
8 WL
3c1 + c2 L + =0
27 EI 
4 WL
2c1 + c2 L + =0
9 EI 
c1 c2 L WL / EI
\ = =
4 8 16 4 3− 2
− −
9 27 27 3 
4 WL 20 W
\ c1 = , c2 = −
27 EI 27 EI 
\ from (2)
 2 WL 2 10 W 3 2 Wx 2 L
 x − x = (3 L − 5 x ) ; 0 ≤ x <
 27 EI 81 EI 81 EI 3
y( x ) =  3 2
 W  x − L  + 2 Wx (3L − 5 x ) L
; ≤x≤L
 6 EI  
3  81 EI 3

Example 2.54: A cantilever beam clamped at x = 0 and free at x = l carries a uniform load w₀ per unit length. Show that the deflection at any point is y(x) = (w₀x²/24EI)(x² − 4lx + 6l²).
Solution: If y is deflection at distance x from A, then the differential equation of deflection (load equation) is given by

Figure 2.20 (cantilever beam clamped at A (x = 0) and free at B (x = l))

d4 y
EI = w0
dx 4
d4 y w
\                4 = 0
dx EI 
270 | Chapter 2

Take Laplace transform of both sides


w0
s 4Y ( s) − s3 y(0) − s 2 y ′(0) − sy ′′(0) − y ′′′(0) =
EIs
where Y(s) = £ (y (x))
Now, end A is clamped
\   y(0) = y′(0) = 0

Let y″(0) = c1,  y″′(0) = c2


w0
\ s 4Y ( s) =
+ c1 s + c2
EIs 
w0 c1 c2
\ Y ( s) = + +
EIs5 s3 s 4 
Take inverse Laplace transform of both sides
w0 x 4 c1 x 2 c2 x 3
y( x ) = + + (1)
4 ! EI 2! 3!
End B is free end

\ y″(l) = y″′(l) = 0.


w0 x 3 c x2
y ′( x ) = + c1 x + 2
6 EI 2 
2
w x
y ′′( x ) = 0 + c1 + c2 x
2 EI 
w0 x
y ′′′( x ) = + c2
EI 
w0 l 2
\ y ′′(l ) = + c1 + c2 l = 0 (2)
2 EI
wl
y ′′′(l ) = 0 + c2 = 0
EI 
w0 l
\ c2 = −
EI 
\ from (2),
w0 l 2 w0 l 2 w0 l 2
c1 = − + =
2 EI EI 2 EI 
∴ from (1)
y(x) = w₀x⁴/(24EI) + w₀l²x²/(4EI) − w₀lx³/(6EI)
     = (w₀x²/24EI)(x² − 4lx + 6l²)
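The same deflection curve can be re-derived by integrating the load equation directly; a short symbolic sketch, assuming SymPy is available (EI is treated as a single positive symbol):

```python
# EI*y'''' = w0 with y(0) = y'(0) = 0 (clamped) and y''(l) = y'''(l) = 0 (free).
import sympy as sp

x, l, w0, EI = sp.symbols('x l w0 EI', positive=True)
y = sp.Function('y')

sol = sp.dsolve(sp.Eq(EI*y(x).diff(x, 4), w0), y(x)).rhs      # four arbitrary constants
cs = sorted(sol.free_symbols - {x, l, w0, EI}, key=lambda s: s.name)
consts = sp.solve([sol.subs(x, 0),
                   sol.diff(x).subs(x, 0),
                   sol.diff(x, 2).subs(x, l),
                   sol.diff(x, 3).subs(x, l)], cs, dict=True)[0]
print(sp.factor(sol.subs(consts)))   # equals w0*x**2*(x**2 - 4*l*x + 6*l**2)/(24*EI)
```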

Laplace Transform  | 271

2.16.3 Problems Related to Mechanical Systems
If a particle of mass m moves along the x-axis, then the damping force due to the medium (air, etc.) is proportional to its instantaneous velocity dx/dt and acts in the direction opposite to the direction of motion. Thus, the damping force is −a dx/dt, where a > 0 is the damping constant. If f(t) represents all external forces acting on the mass, then by Newton’s second law of motion m d²x/dt² = −a dx/dt + f(t).

Mass-Spring System

Figure 2.21 (mass m suspended on a spring; f(t) is the driving force)

Let m be the mass suspended on a spring which is rigidly supported at one end. The rest position is denoted by y = 0. Downward displacement is taken as positive. Let k > 0 be the spring constant, i.e., the stiffness, and a > 0 the damping constant. Then a dy/dt is the damping force acting upward and ky is the force due to the stiffness of the spring which opposes the motion. If f (t) is the driving force, then the equation of motion of the mass by Newton’s second law of motion is
m d²y/dt² = −a dy/dt − ky + f(t)

Example 2.55: A mass m moves along x-axis under the influence of a force which is proportional
to its instantaneous speed and in a direction opposite to the direction of motion. Assuming that
at t = 0, the particle is located at x = a and moving to the right with speed V0, find the position
where the mass comes to rest.
dx
Solution: Damping force is − µ where m is damping constant.
dt
\ By Newton’s second law of motion, equation of motion of m is
d2x dx
m 2
= −µ
dt dt
272 | Chapter 2

Take Laplace transform of both sides


( )
m s 2 X ( s) − sx(0) − x ′(0) = − µ ( sX ( s) − x(0) )   where X(s) = £(x(t))
Initially x(0) = a, x′(0) = V0
\ (ms 2
)
+ µ s X ( s) = ( ms + µ ) a + mV0

a mV0 a mV0  1 1 
\ X ( s) = + = + −    (by suppression method)
s s ( ms + µ ) s µ  s s + ( µ / m) 
Take inverse Laplace transform of both sides
mV0  −µ
t
x (t ) = a +  1 − e m  (1)
µ  

which gives position of mass at any time t where m is damping constant.
When the mass comes to rest then
dx mV0 µ − mµ t
= ⋅ e =0
dt µ m
thus t → ∞

and when t → ∞, from (1)


mV0
x = a+
µ
mV0 mV0
Thus, the mass comes to rest at distance a + to the right from origin or at a distance
µ µ
to the right from starting position where m is damping constant.

Example 2.56: An electron of mass m is projected with velocity c into a uniform magnetic field
of intensity k which is perpendicular to the direction of its motion. Find the position of electron
at time t.
Solution: Let the electron of mass m is projected with velocity c into uniform field along x­ -axis
from origin. If P(x, y) is position of electron at time t, then components of velocity along ­x-axis
dx dy
and y-axis are and respectively. Now –k (velocity) force is acting perpendicular to
dt dt
­motion of electron due to magnetic field. Thus its components along x-axis and y-axis will be
 dy  dx
−k  −  , − k respectively.
 dt  dt
\ By Newton’s second law of motion, equations of motion of electron are
d2x dy
m 2 =k
dt dt
2
d y dx
m 2 = −k
dt dt 
Laplace Transform  | 273

Take Laplace transform of both sides


( )
m s 2 X ( s) − sx(0) − x ′(0) = k ( sY ( s) − y(0) )

m ( s Y ( s) − sy(0) − y ′(0) ) = − k ( sX ( s) − x(0) )
2

where Y(s) = £ (y(t)), X(s) = £ (x(t))


Initially x(0) = 0, y(0) = 0 x′(0) = c, y′(0) = 0

\ ms 2 X ( s) − ksY ( s) = mc

ksX ( s) + ms Y ( s) = 0 
2

2 2
cm s cm 2 c
\ X ( s) = = =
m2 s4 + k 2 s2 m2 s2 + k 2 k
2

s2 +  
 m

−ckms −ckm −ckm  1 m2 s 
Y ( s) = = = −    (by suppression method)

(
m2 s4 + k 2 s2 s m2 s2 + k 2 )
k 2  s m 2 s 2 + k 2 

cm  s 1
=  2 − 
k  s + (k / m) 2
s

Taking inverse Laplace transform
cm  kt 
x (t ) = sin  
k  m

cm  kt 
y (t ) =  cos − 1
k  m
which gives the position of electron at any time t.

Example 2.57: A pellet of mass m is fired into a viscous gas from a gun at time t = 0 with muzzle
velocity V0. Its initial position is origin and velocity is zero. Find its position at any time t.
Solution: If a > 0 is damping constant and pellet moves along x-axis, then by Newton’s second
law of motion, equation of motion of pellet is
d2x dx
m2
= − a + mV0δ (t )
  dt dt
where d(t) is Dirac-delta function.
Take Laplace transform of both sides

( )
m s 2 X ( s) − sx(0) − x ′(0) = − a ( sX ( s) − x(0) ) + mV0
  
where X(s) = £(x(t))

Initially x(0) = x′(0) = 0

\ (ms 2
)
+ as X ( s) = mV0

274 | Chapter 2

mV0 mV0  1 1 
\ X ( s) = =  −    (by suppression method)
s ( ms + a ) a s a
s+ 
 m
Take inverse Laplace transform
mV0  −a
t
x (t ) =  1 − e m

a 

where a is damping constant.

Example 2.58: Determine the response of the damped mass-spring system when there is a unit mass on the spring and a driving force r(t) = 10 sin 2t; 0 < t < π acts on it. Assume that the spring constant and the damping constant are 2 each, and that initially, at t = 0, the mass on the spring has displacement unity from the equilibrium position and velocity 5 units in the direction opposite to the displacement. Find the position at any time t.
Solution: Driving force r(t) is

r(t) = 10 sin2t ; 0 < t < p

= 10 sin 2t (U (t ) − U (t − π ))

= 10 sin 2t U (t ) + 10 sin ( 2π − 2t )U (t − π )

= 10 sin 2t U (t ) − 10 sin 2 (t − π )U (t − π )

\ By Newton’s second Law of motion, equation of motion of unit mass is
d2 y dy
2
= −2 − 2 y + 10 sin 2tU (t ) − 10 sin ( 2 (t − π ))U (t − π )
dt dx 
where y is downward displacement.
Take Laplace transform of both sides
20 20
s 2Y ( s) − sy(0) − y ′(0) = −2 ( sY ( s) − y(0) ) − 2Y ( s) +
− e −π s
s 2 + 22 s 2 + 22
where Y(s) = £ (y(t))

Now, y(0) = 1, y′(0) = – 5


20e − π ( s )
\ (s 2
)
+ 2 s + 2 Y ( s) = s + 2 − 5 +
20
s2 + 4

s2 + 4 
s +1 4 20
\ Y ( s) = − + 2 1 − e − π s  (1)
( )(
( s + 1) + 1 ( s + 1) + 1 s + 4 s 2 + 2s + 2 
2 2
)
20 As + B Cs + D
Let ≡ +
( )(
s 2 + 4 s 2 + 2s + 2 ) s 2 + 4 s 2 + 2s + 2

\ ( ) (
20 ≡ ( As + B ) s + 2 s + 2 + (Cs + D ) s 2 + 4
2
)
Laplace Transform  | 275

Equate coefficients of s3, s2, s and constant


A + C = 0 (2)
2A + B + D = 0 (3)
2A + 2B + 4C = 0 (4)
2B + 4D = 20 (5)
from (2),
C = –A
\ from (4),
B = A
\ from (5),
20 − 2 A 1
D= = 5− A
4 2 
\ from (3),
10 − A
2A + A + =0
2 
\ 4A + 2A + 10 – A = 0
\ A = –2
\ A = –2, B = –2, C = 2, D = 6

20 −2 ( s + 1) 2s + 6
\ = +
(s 2
)(
+ 4 s + 2s + 2
2
) s2 + 4 s 2 + 2s + 2

\ from (1)
s +1 4  2( s + 3) 2( s + 1)  −π s
Y ( s) = − + −  (1− e )
( s + 1) 2 + 1 ( s + 1) 2 + 1  ( s + 1) 2 + 1 s 2 + 4 

s +1 4  2( s + 1) 4 2s 2  −π s
= − + + − 2 − 2  (1 − e )
( s + 1) + 1 ( s + 1) + 1  ( s + 1) + 1 ( s + 1) + 1 s + 4 s + 4 
2 2 2 2

3 ( s + 1) 2s 2  2( s + 1) 4 2s 2  −π s
= − − 2 − + − 2 − 2 e
( s + 1) + 1 2
s + 4 s + 4  ( s + 1) + 1 ( s + 1) + 1 s + 4 s + 4 
2 2 2

Take inverse Laplace transform of both sides

y(t ) = 3e − t cos t − 2 cos 2t − sin 2t −  2e (t π ) cos (t − π ) + 4e − (t −π ) sin (t − π )


− −

−2 cos 2 (t − π ) − sin 2 (t − π ) U (t − π )

\ y(t ) = 3e − t cos t − 2 cos 2t − sin 2t ; 0 ≤ t < π



cos (π − t ) + 4e − (t −π ) sin (π − t )
−t − (t − π )
= 3e cos t − 2 cos 2t − sin 2t − 2e

276 | Chapter 2

+2 cos ( 2π − 2t ) − sin ( 2π − 2t ) ; t ≥ π

∴ y(t) = 3e^{−t} cos t − 2 cos 2t − sin 2t ;  0 ≤ t < π

       = 3e^{−t} cos t + 2e^{π−t}(cos t + 2 sin t) ;  t ≥ π

which gives the position at any time t.
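Because the forcing is only piecewise smooth, it is reassuring to compare the closed form with a direct numerical solution; a small sketch, assuming NumPy and SciPy are available:

```python
# y'' + 2y' + 2y = r(t), r = 10 sin 2t on (0, pi) and 0 afterwards; y(0) = 1, y'(0) = -5.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z):
    y, v = z
    r = 10*np.sin(2*t) if t < np.pi else 0.0
    return [v, r - 2*v - 2*y]

T = np.array([1.0, 2.0, 4.0, 6.0])
num = solve_ivp(rhs, (0.0, 6.0), [1.0, -5.0], t_eval=T,
                rtol=1e-10, atol=1e-12, max_step=0.01)

def y_exact(t):
    base = 3*np.exp(-t)*np.cos(t)
    return np.where(t < np.pi,
                    base - 2*np.cos(2*t) - np.sin(2*t),
                    base + 2*np.exp(np.pi - t)*(np.cos(t) + 2*np.sin(t)))

print(np.max(np.abs(num.y[0] - y_exact(T))))   # small; accuracy limited by the jump in r
```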

Example 2.59: The mechanical system in the given figure consists of two bodies, each of mass unity, on three springs, each having spring constant k. If y₁, y₂ are the displacements of the bodies from their positions of static equilibrium, find y₁, y₂ at any time t assuming that y₁(0) = y₂(0) = 1, y₁′(0) = √(3k), y₂′(0) = −√(3k); neglect the masses of the springs and damping.

Figure 2.22 (two unit masses on three springs of spring constant k; y₁ and y₂ measure their displacements)

Solution: On the upper unit mass –ky1 is the force of upper spring and k (y2 – y1) is the force of
middle spring.
\ By Newton’s second law of motion, its equation of motion is
d 2 y1
= − ky1 + k ( y2 − y1 ) (1)
dt 2
On the lower unit mass –k(y2 – y1) is the force of middle spring and –ky2 is the force of lower
spring.
\ By Newton’s second law of motion, its equation of motion is
d 2 y2
= −k ( y2 − y1 ) − ky2 (2)
dt 2
Take Laplace transform of both sides of (1) and (2)

s 2Y1 ( s) − sy1 (0) − y1′(0) = kY2 ( s) − 2kY1 ( s) 


s 2Y2 ( s) − sy2 (0) − y2′ (0) = −2kY2 ( s) + kY1 ( s)
where Y1(s) = £(y1 (t)), Y2(s) = £(y2 (t))
Initially
y1 (0) = y2 (0) = 1, y1′(0) = 3k , y2′ (0) = − 3k 
Laplace Transform  | 277

∴ (s² + 2k)Y₁(s) − kY₂(s) = s + √(3k)
   kY₁(s) − (s² + 2k)Y₂(s) = −s + √(3k)

By Cramer’s rule
Y₁(s) = [(s + √(3k))(−(s² + 2k)) − (−k)(−s + √(3k))] / [−(s² + 2k)² + k²]

      = [(s + √(3k))(s² + 2k) + k(s − √(3k))] / [(s² + 2k)² − k²]

      = [s³ + 3ks + √(3k)(s² + k)] / [(s² + 2k)² − k²]

      = [s(s² + 3k) + √(3k)(s² + k)] / [(s² + 3k)(s² + k)]

      = s/(s² + k) + √(3k)/(s² + 3k)

By symmetry
Y₂(s) = s/(s² + k) − √(3k)/(s² + 3k)

Take inverse Laplace transform

y₁(t) = cos(√k t) + sin(√(3k) t)
y₂(t) = cos(√k t) − sin(√(3k) t)
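Substituting the pair (y₁, y₂) back into the coupled equations confirms the result for every k > 0; a minimal symbolic check, assuming SymPy is available:

```python
# y1'' = -2k*y1 + k*y2 and y2'' = k*y1 - 2k*y2 with the stated initial conditions.
import sympy as sp

t, k = sp.symbols('t k', positive=True)
y1 = sp.cos(sp.sqrt(k)*t) + sp.sin(sp.sqrt(3*k)*t)
y2 = sp.cos(sp.sqrt(k)*t) - sp.sin(sp.sqrt(3*k)*t)

print(sp.simplify(y1.diff(t, 2) + 2*k*y1 - k*y2),    # 0
      sp.simplify(y2.diff(t, 2) - k*y1 + 2*k*y2))    # 0
print(y1.subs(t, 0), y2.subs(t, 0),
      y1.diff(t).subs(t, 0), y2.diff(t).subs(t, 0))  # 1, 1, sqrt(3k), -sqrt(3k)
```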


Exercise 2.3

1. A resistance R in series with inductance L is connected with e.m.f. E(t). If the switch is
connected at t = 0 and disconnected at t = a, find the current i at any time t.
2. In the given circuit L = 1 henry, C = 1 farad, V(t) = t; 0 < t < 1. Assuming that the current
and charge on the capacitor vanish initially, find the current i(t) at any time t.
i

C L

V t

Figure 2.23

3. A voltage Ee–at is applied at t = 0 to a circuit of inductance L and resistance R in series.


E  − at − Rt

Show that the current at time t is  e −e L  .
R − aL  
278 | Chapter 2

4. Constant voltage E is applied at t = 0 to a circuit containing inductance L, capacitance C and


resistance R in series. Find the current i at time t if the initial current and charge are zero.
5. Given that the current i and charge q are zero at t = 0. Find current i at any time t in the
following circuit
R

L E wt

Figure 2.24

6. Currents i1 and i2 in two meshes of a given circuit are given bydifferential equations
di1 di
− wi2 = a cos pt , 2 + wi1 = a sin pt . Find the currents i1 and i2 at any time t if i1 = i2 = 0
dt dt
at t = 0.
7. Determine the position of a unit mass moving under the influence of driving force dt, damp-
ing force with damping constant 2b and a force λ 2 times the displacement in the opposite
direction when it is given that 0 < b < λ and initially the displacement and velocity are zero.
8. A particle of mass 2 g moves on x-axis and is attracted towards origin O with a force nu-
merically equal to 8x. If it is initially at rest at x = 10, find its position at any subsequent
time assuming (a) no other force acts and (b) a damping force numerically equal to 8 times
the instantaneous velocity acts.
9. Determine the response to the damped mass spring when there is a unit mass on the spring
and a driving force r(t) acts on it. Spring constant is 2 and damping constant is 3. Assum-
ing that at t = 0 the mass on the spring has no displacement from equilibrium position and
velocity is zero, find the displacement at any time t in the cases
(a) r(t) is a square wave, r(t) = 1; 1 ≤ t ≤ 2
(b) r(t) is unit impulse at t = 1
10. A weightless beam is simply supported at its one end x = 0 and is clamped at the other
l
end x = l and is carrying a load W at x = . Find the deflection of the beam at any point.
4
11. A weightless beam of length L is freely supported at its ends. A concentrated load W
acts on it at distances a and b from ends A and B of the beam. Show that the deflection at
Wbx
­distance x from A is given by y ( x ) =  a ( L + b ) − x 2  ; 0 ≤ x ≤ a
6 EIL 
W  ab ( L + b ) b 3
=  x − x3 + ( x − b)  ; a ≤ x ≤ L
6 EI  L L 
  
12. A beam which is hinged at ends x = 0 and x = l carries a uniform load w0 per unit length.
Find the deflection at any point.
Laplace Transform  | 279

13. A beam which is clamped at its ends x = 0 and x = l carries a uniform load w0 per unit
w x 2 (l − x )
2

length. Show that the deflection at any point is y ( x ) = 0 .


24 EI

Answers 2.3

E −R
t E −LR t  RaL 
1. i (t ) =  1 − e L
 ; 0 ≤ t ≤ a ; i ( t ) = e  e − 1 ; t > a
R  R  

2. i (t ) = 1 − cos t ; 0 ≤ t ≤ 1 ; i (t ) = cos (t − 1) − cos t ; t ≥ 1

E − µt CR 2 E − µt 1 R2 CR 2
4. i ( t ) = t e if L = ; i (t ) = e sin λ t where λ = − 2 if L > ;
L 4 λL CL 4 L 4
E − µt R2 1 CR 2
i (t ) = e sinh λ t where λ = − if L <
λL 4C 2 CL 4
R
where µ in each case is .
2L

E0   − RT  
5.  wL  e L − cos wt  + R sin wt 
L w + R2
2 2
   

a a
6. i1 = (sin wt + sin pt ) , i2 = (cos wt − cos pt )
p+w p+w
 1 
7. x (t ) = e − bt  sin λ 2 − b 2 t 
 λ 2 − b2 

8. (a)  x (t ) = 10 cos 2t   (b) x (t ) = 10 (1 + 2t ) e −2t

1 −(t −1) 1 −2(t −1)


9. (a)  y ( t ) = 0; 0 ≤ t ≤ 1; y ( t ) = −e + e ;1 ≤ t ≤ 2;
2 2
1 1
y ( t ) = −e − ( t −1) + e − ( t − 2 ) + e −2( t −1) − e −2( t − 2 ) ; t ≥ 2
2 2
(b)  y ( t ) = 0; 0 ≤ t ≤ 1; y ( t ) = e −(t −1) − e −2(t −1) ; t ≥ 1
3
W  l
10. y ( x ) =
9Wx 2
256 EI
( ) l
l − 3x 2 ; 0 ≤ x ≤ ; y ( x ) =
4
 x −  +
6 EI  4
9Wx 2
256 EI
l
l − 3x 2 ; ≤ x ≤ l
4
( )
w0 x 3 3
12. y ( x ) =
24 EI
(
x + l − 2x2l )
This page is intentionally left blank
Fourier Series, Fourier Integrals
and Fourier Transforms 3
3.1 Introduction
In Chapter 9 of Volume 1, we derived the Fourier-Legendre series, which expands a function f(x) in (−1, 1) in terms of the Legendre polynomials Pₙ(x), and the Fourier–Bessel series, which expands a function f(x) in (0, R) in terms of Bessel functions Jₙ(x) of a given order. In
these representations, we have used the orthogonality of Legendre polynomials and orthogonal-
ity of Bessel functions. Periodic phenomena occur quite frequently in motors, rotating machines,
electrical and sound waves, motion of earth, heart beats, etc. Thus, it is an important practical
problem to represent periodic functions in some simple series. Sine and cosine functions are pe-
riodic and form orthogonal system of functions and thus, Euler and Daniel Bernoulli worked and
Fourier introduced the representation of periodic functions in sine and cosine term series which
are called Fourier series. In this chapter, we discuss these series. Many functions including some
discontinuous periodic functions can be expanded in a Fourier series and hence are, in certain
sense, more universal than Taylor series expansions which cannot be represented for discontinu-
ous functions. Fourier series solution method is a powerful tool in solving some ordinary and
partial differential equations which we shall explain in the next chapter.
Fourier integrals and Fourier transforms extend the ideas and techniques of Fourier series to
non-periodic functions defined for all x. They are helpful in solving initial and boundary value problems for ordinary and partial differential equations.

3.1.1  Periodic Functions


A function f(x) is called periodic if it is defined for all real x (except perhaps for some isolated x such as ±π/2, ±3π/2, … for tan x) and there exists a positive λ such that
f ( x + λ ) = f ( x )   for all x

This number λ is called a period of f ( x ). If f ( x ) is periodic and λ is its period, then
f ( x + 2λ ) = f ( x + λ + λ ) = f ( x + λ ) = f ( x )

and similarly   f ( x + 3λ ) = f ( x )
and f ( x + nλ ) = f ( x ) ; n ∈ N

Hence, if λ is a period of f ( x ), then nλ for any natural number n is also a period of f ( x ). The
smallest positive λ , if exists, is called the fundamental period of f ( x ). Fundamental period may
or may not exist, for example f ( x ) = 1 for all x is a periodic function and every positive real
number is its period but fundamental period does not exist.
282 | Chapter 3

If fundamental period of a periodic function exists, then it is unique. If f ( x ) and g ( x ) have


fundamental period λ , then h ( x ) = af ( x ) + bg ( x ) has fundamental period λ . Further, if f ( x )
and g ( x ) have fundamental periods λ1 and λ2 respectively then h ( x ) = af ( x ) + bg ( x ) have
fundamental period least common multiple of λ1 and λ2 .
The graph of periodic functions is obtained by periodic extension of its graph in interval of
any length.

For any natural number n, sin nx and cos nx have fundamental period 2π/n and tan nx has fundamental period π/n. But a sin nx + b tan nx has fundamental period 2π/n.

3.1.2 Trigonometric Series
The simple functions 1, sin x, cos x, sin 2x, cos 2x, …, sin nx, cos nx, … are periodic functions with period 2π.
The series a₀/2 + ∑_{n=1}^∞ (aₙ cos nx + bₙ sin nx) is called a trigonometric series, where a₀, a₁, a₂, …, b₁, b₂, … are real constants called the coefficients of the series. The fundamental period of this series is 2π and hence, if the series converges, its sum will be a function of fundamental period 2π. The set of functions 1, sin x, sin 2x, …, cos x, cos 2x, … forming the series is called the trigonometric system.
Similarly, the functions 1, sin(πx/l), sin(2πx/l), sin(3πx/l), …, cos(πx/l), cos(2πx/l), cos(3πx/l), … are periodic functions with period 2l. The series
a₀/2 + ∑_{n=1}^∞ (aₙ cos(nπx/l) + bₙ sin(nπx/l))
is a trigonometric series. The fundamental period of this series is 2l and hence, if the series converges, its sum will be a function of fundamental period 2l.
Thus, trigonometric series can be used for representing any practically important periodic function f, simple or complicated, of any period 2l. This series will then be called the Fourier series of f.

3.1.3 Orthogonality of Trigonometric System


We have for m, n ∈ N

∫_c^{c+2l} cos(nπx/l) dx = (l/nπ)[sin(nπx/l)]_c^{c+2l} = (l/nπ)[sin(nπ(c + 2l)/l) − sin(nπc/l)] = 0    (3.1)

∫_c^{c+2l} sin(nπx/l) dx = −(l/nπ)[cos(nπx/l)]_c^{c+2l} = −(l/nπ)[cos(nπ(c + 2l)/l) − cos(nπc/l)] = 0    (3.2)

∫_c^{c+2l} cos(mπx/l) cos(nπx/l) dx = (1/2)∫_c^{c+2l} 2 cos(mπx/l) cos(nπx/l) dx
    = (1/2)[∫_c^{c+2l} cos((m + n)πx/l) dx + ∫_c^{c+2l} cos((m − n)πx/l) dx]
    = (1/2)(0 + 0) = 0 ;  m ≠ n    [from (3.1)]

∫_c^{c+2l} sin(mπx/l) sin(nπx/l) dx = (1/2)[∫_c^{c+2l} cos((m − n)πx/l) dx − ∫_c^{c+2l} cos((m + n)πx/l) dx]
    = (1/2)(0 − 0) = 0 ;  m ≠ n    [from (3.1)]

∫_c^{c+2l} sin(mπx/l) cos(nπx/l) dx = (1/2)[∫_c^{c+2l} sin((m + n)πx/l) dx + ∫_c^{c+2l} sin((m − n)πx/l) dx]
    = (1/2)(0 + 0) = 0  for all m, n ∈ N    [from (3.2)]

Hence, the trigonometric system
1, {sin(nπx/l) : n ∈ N}, {cos(nπx/l) : n ∈ N}
is orthogonal with weight unity in any interval of length 2l and hence in (−l, l) and (0, 2l).
Taking l = π, the trigonometric system
1, {sin nx : n ∈ N}, {cos nx : n ∈ N}
is orthogonal with weight unity in any interval of length 2π and hence in (−π, π) and (0, 2π).
Also

∫_c^{c+2l} cos²(nπx/l) dx = ∫_c^{c+2l} (1/2)[1 + cos(2nπx/l)] dx = (1/2)[x + (l/2nπ) sin(2nπx/l)]_c^{c+2l} = l    (3.3)

∫_c^{c+2l} sin²(nπx/l) dx = ∫_c^{c+2l} (1/2)[1 − cos(2nπx/l)] dx = (1/2)[x − (l/2nπ) sin(2nπx/l)]_c^{c+2l} = l    (3.4)

Thus, the integrals of cos²(nπx/l) and sin²(nπx/l) over any interval of length 2l are equal to l.
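These orthogonality relations are easy to confirm numerically; the sketch below assumes NumPy and SciPy are available, and the choices l = 2, c = −0.7 are arbitrary.

```python
# Inner products of the trigonometric system over an interval [c, c + 2l] of length 2l.
import numpy as np
from scipy.integrate import quad

l, c = 2.0, -0.7
def ip(f, g):
    return quad(lambda x: f(x)*g(x), c, c + 2*l)[0]

cosn = lambda n: (lambda x: np.cos(n*np.pi*x/l))
sinn = lambda n: (lambda x: np.sin(n*np.pi*x/l))

print(round(ip(cosn(2), cosn(5)), 8),   # 0   (orthogonality)
      round(ip(sinn(3), cosn(3)), 8),   # 0
      round(ip(cosn(4), cosn(4)), 8),   # 2.0 = l, as in (3.3)
      round(ip(sinn(1), sinn(1)), 8))   # 2.0 = l, as in (3.4)
```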

3.1.4  Fourier Series


Fourier series arise from the practical task of representing a given periodic function f ( x ) of
period 2l in terms of cosine and sine functions. These series are trigonometric series whose
coefficients are determined by Euler formulae discussed hereunder.

3.1.5  Euler Formulae for Fourier Coefficients


Suppose f ( x ) is a periodic function with period 2l defined on [ c, c + 2l ] which can be ­expanded
in uniformly convergent Fourier series
a0 ∞  nπ x nπ x 
f ( x) = + ∑  an cos + bn sin
2 n =1  l l 
284 | Chapter 3

1 c + 2l
f ( x ) dx 
l ∫c
then a0 =

1 c + 2l nπ x
an = ∫ f ( x ) cos dx
l c l 
1 c + 2l nπ x
bn = ∫ f ( x ) sin dx
l c l 
where a0 , an and bn are called Fourier coefficients or Euler’s coefficients.
a0 ∞  mπ x mπ x 
Proof: f ( x) = + ∑ am cos + bm sin  (3.5)
2 m =1  l l 
Integrate term by term within the limits c to c + 2l and use orthogonal property of trigonometric
system:
c + 2l a0 c + 2 l ∞
 c + 2l mπ x c + 2l mπ x 
∫c f ( x ) dx =
2 ∫c
dx + ∑ 
m =1 
am ∫ cos
c l
dx + bm ∫ sin
c l
dx 

a0 a
( x )c + 0 + 0 = 0 ( c + 2l − c ) = la0
c + 2l
=
2 2 
1 c + 2l
f ( x ) dx
l ∫c
∴ a0 =

nπ x
Multiply (3.5) by cos and integrate term by term within the limits c to c + 2l and use
l
­orthogonality property of trigonometric system and (3.3):
c + 2l nπ x a0 c + 2 l nπ x ∞ c + 2l mπ x nπ x
∫c f ( x ) cos dx = ∫ cos dx + ∑ am ∫ cos cos dx
l 2 c l m =1
c l l
m≠ n

c + 2l nπ x ∞ c + 2l mπ x nπ x
+ an ∫ cos 2 dx + ∑ ∫ sin cos dx
c l m =1
c l l
= 0 + 0 + an l + 0 = lan 

1 c + 2l nπ x
∴ ∫ f ( x ) cos
an = dx
l c l 
nπ x
Multiply (3.5) by sin and integrate term by term within the limits c to c + 2l and use
l
­orthogonality property of trigonometric system and (3.4):
nπ x a0 nπ x ∞
mπ x nπ x
f ( x ) sin dx + ∑ am ∫
c + 2l c + 2l c + 2l


c
l
dx =
2
∫ c
sin
l m =1
c
cos
l
sin
l
dx


mπ x nπ x nπ x
+ ∑ bm ∫
c + 2l c + 2l
sin sin dx + bn ∫ sin 2 dx
m =1
c
l l c
l
m≠n

= 0 + 0 + 0 + bn l 

1 c + 2l nπ x
∴ bn = ∫ f ( x ) sin dx
l c l 
Fourier Series, Fourier Integrals and Fourier Transforms  | 285

The expressions for Fourier coefficients


1 c + 2l
f ( x ) dx
l ∫c
a0 =

1 c + 2l nπ x
an = ∫ f ( x ) cos dx
l c l 
1 c + 2l nπ x
bn = ∫ f ( x ) sin dx
l c l 
are called Euler formulae.
We can find an and bn simultaneously by writing
nπ x
1 c + 2l
f ( x)e
i
an + ibn =
l ∫c
l
dx and equating real and imaginary parts.

Remark 3.1: (i) If f ( x ) is periodic with period 2π then Fourier series of f ( x ) in ( 0, 2π ) is


a0 ∞
f ( x) = + ∑ ( an cos nx + bn sin nx )
2 n =1 
and Euler formulae for coefficients are
1 2π 1 2π 1 2π
a0 = ∫ f ( x ) dx, an = ∫ f ( x ) cos nx dx, bn = ∫ f ( x ) sin nx dx
π 0 π 0 π 0

and Fourier series of f ( x ) in ( −π , π ) is


a0 ∞
f ( x) = + ∑ ( an cos nx + bn sin nx )
2 n =1
and Euler formulae for coefficients are
1 π 1 π 1 π
a0 = ∫ f ( x ) dx, an = ∫ f ( x ) cos nx dx, bn = ∫ f ( x ) sin nx dx
π −π π −π π −π

(ii) If f ( x ) is periodic with period 2l then Fourier series of f ( x ) in ( 0, 2l ) is

a0 ∞  nπ x nπ x 
f ( x) = + ∑ an cos + bn ,
2 n =1  l l 

and Euler formulae for coefficients are


1 2l 1 2l nπ x 1 2l nπ x
a0 = ∫ f ( x ) dx, an = ∫ f ( x ) cos dx, bn = ∫ f ( x ) sin dx,
l 0 l 0 l l 0 l

and Fourier series of f ( x ) in ( −l , l ) is

a0 ∞  nπ x nπ x 
f ( x) = + ∑  an cos + bn sin
2 n =1  l l 
286 | Chapter 3

and Euler formulae for coefficients are


1 l 1 l nπ x 1 l nπ x
a0 = ∫ f ( x ) dx, an = ∫ f ( x ) cos dx, bn = ∫ f ( x ) sin dx
l −l l −l l l −l l 
We now state the Dirichlet’s conditions of convergence of Fourier series; however, the proof is
beyond the scope of this book.

3.1.6 Dirichlet’s Conditions for Convergence of Fourier Series of


f(x) in [c, c + 2l ]
Following are the Dirichlet’s conditions:
(i) f ( x ) is periodic with period 2l
(ii) f ( x ) is piecewise continuous in [ c, c + 2l ]
(iii) f ( x ) has finite number of maxima or minima in [ c, c + 2l ]

If a function f ( x ) satisfies above mentioned Dirichlet’s conditions, then the Fourier series of
f ( x ) is
a0 ∞  nπ x nπ x 
+ ∑ an cos + bn sin
2 n =1  l l  
with Euler coefficients
1 c + 2l 1 c + 2l nπ x 1 c + 2l nπ x
a0 = ∫ f ( x ) dx, an = ∫ f ( x ) cos dx, bn = ∫ f ( x ) sin dx
l c l c l l c l 
is convergent. Its sum is f ( x ), except at a point x0 at which f ( x ) is not continuous and sum of
1
the series at discontinuity x = x0 is lim f ( x ) + lim+ f ( x )  which we write as
2  x → x0− x → x0 
1
 f ( x0 − 0 ) + f ( x0 + 0 ) .
2
a ∞
 nπ x nπ x 
Thus, f ( x ) ∼ 0 + ∑  an cos + bn sin at the points where the series may or may not
2 n =1  l l 
converge but ∼ can be replaced by equality sign = at the points where the series converges.
1
For example, the function f ( x ) = , 0 < x < 2π does not satisfy Dirichlet’s conditions as
3− x
lim− f ( x ) = ∞ and lim+ f ( x ) = −∞. Both these limits are infinite, so f ( x ) is not piecewise con-
x →3 x →3
tinuous in ( 0, 2π ) as 3 ∈ ( 0, 2π ). Thus, Fourier series expansion of f ( x ) in ( 0, 2π ) does not exist.

3.1.7  Fourier Series of Even and Odd Functions


f ( x ) is said to be an even function if f ( − x ) = f ( x ) for all x in domain of f
and f ( x ) is said to be an odd function if f ( − x ) = − f ( x ) for all x in domain of  f.
Fourier Series, Fourier Integrals and Fourier Transforms  | 287

Now, Fourier series of f ( x ) in [ −l , l ] is

a0 ∞  nπ x nπ x 
f ( x) = + ∑ an cos + bn sin
2 n =1  l l  
1 l 1 l nπ x 1 l nπ x
where a0 = ∫ f ( x ) dx, an = ∫ f ( x ) cos dx, bn = ∫ f ( x ) sin dx
l − l l − l l l − l l 
nπ x nπ x
If f ( x ) is an even function in [ −l , l ] then f ( x ) cos is even function and f ( x ) sin is
odd function in [ −l , l ]. λ l

1 l 2 l
f ( x ) dx = ∫ f ( x ) dx
l ∫− l
∴ a0 =
l 0 
1 l nπ x 2 l nπ x
an = ∫ f ( x ) cos dx = ∫ f ( x ) cos dx
l − l l l 0 l 
1 l nπ x
bn = ∫ f ( x ) sin dx = 0.
l − l l
Thus, if f ( x ) is an even function in [ −l , l ] then its Fourier series is
a0 ∞ nπ x
f ( x) = + ∑ an cos ,
2 n =1 l
2 l 2 l nπ x
where a0 = ∫ f ( x ) dx, an = ∫ f ( x ) cos dx .
l 0 l 0 l
nπ x nπ x
If f ( x ) is an odd function in [ −l , l ], then f ( x ) cos is odd function and f ( x ) sin
l l
1 l 1 l nπ x
( ) ( )
l ∫− l l ∫− l
is even function of x and hence f x dx = 0 , f x cos dx = 0 and
1 l nπ x 2 l nπ x l
( ) ( )
l ∫− l l ∫0
f x sin dx = f x sin dx .
l l
Thus, Fourier series of f ( x ) is

nπ x nπ x
∞ l
2
f ( x ) = ∑ bn sin where bn = ∫ f ( x ) sin dx
n =1 l l 0 l

Example 3.1: Find the Fourier series of f ( x ) = x, 0 < x < 2π and sketch the graph from
x = −4π to x = 4π .
Solution: Taking f ( x ) to be periodic function with period 2π , Fourier series expansion of
f ( x ) = x is

f(x) ∼ a₀/2 + ∑_{n=1}^∞ (aₙ cos nx + bₙ sin nx)    (1)

where a₀ = (1/π)∫₀^{2π} x dx = (1/π)[x²/2]₀^{2π} = 2π

aₙ + ibₙ = (1/π)∫₀^{2π} x e^{inx} dx = (1/π)[−(i/n)x e^{inx} + (1/n²)e^{inx}]₀^{2π}
         = (1/π)[−(2πi/n)e^{i2πn} + (1/n²)(e^{i2πn} − 1)]
         = (1/π)(−2πi/n) = −2i/n

Equate real and imaginary parts
aₙ = 0,  bₙ = −2/n
∴ Fourier-series expansion of f(x) is
f(x) ∼ π − 2∑_{n=1}^∞ (1/n) sin nx.

Graph of f(x) = x, −4π < x < 4π is

Figure 3.1 (periodic extension of f(x) = x, 0 < x < 2π, plotted from x = −4π to 4π; the hollow dots indicate points not in the graph)

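The convergence of this series can be seen by evaluating its partial sums at an interior point; a small numerical sketch, assuming NumPy is available:

```python
# Partial sums of pi - 2*sum(sin(n x)/n) approach f(x) = x for 0 < x < 2*pi.
import numpy as np

def partial_sum(x, N):
    n = np.arange(1, N + 1)
    return np.pi - 2*np.sum(np.sin(n*x)/n)

x0 = 2.0
print([round(partial_sum(x0, N), 4) for N in (10, 100, 1000, 10000)])   # tends to x0 = 2.0
```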
Example 3.2: Find a Fourier series to represent x − x 2 from −π to π . Hence show that
1 1 1 1 π2
− + − +  =
12 22 32 4 2 12 
Solution: Let f ( x ) = x − x 2, − π < x < π
Considering f ( x ) to be periodic with period 2π , Fourier-series expansion of f ( x ) is
a0 ∞
f ( x) ∼ + ∑ ( an cos nx + bn sin nx )
2 n =1 
π
1 π 2 π 2  x3  2π 2

where a0 =
π −π
(π 0
)
x − x 2 dx = − ∫ x 2 dx = −   = −
π  3 0 3
(∵ x is odd and x 2 is even function)
Fourier Series, Fourier Integrals and Fourier Transforms  | 289

1 2
∫ ( x − x ) cos nx dx = − π ∫
π π
an = 2
x 2 cos nx dx  (
∵ x cos nx is odd and
π −π 0
x 2 cos nx is even ­function)
π
2  21  2x 2 
=−  x  sin nx  + 2 cos nx − 3 sin nx 
π  n  n n 0 
4 ( −1)
n +1
4
=− cos nπ =
n2 n2 
1 2
∫ ( x − x ) sin nx dx = π ∫
π π
bn = 2
x sin nx dx   (∵ x sin nx is even and x 2 sin nx is odd function)
π −π 0

π
2  1   1  2
=  x  − cos nx  −  − 2 sin nx   = − ( −1)
n

π  n   n 0 n

2 ( −1)
n +1

=
n 
∴ Fourier-series expansion of f ( x ) is
( −1)
n +1
π2 ∞
f ( x) ∼ − + 2∑ ( 2 cos nx + n sin nx )
3 n =1 n2 
Taking x = 0
( −1)
n +1
π2 ∞
0=− + 4∑
3 n =1 n2 
1 1 1 1 π2
∴ − + − +  =
12 22 32 4 2 12

Example 3.3: Obtain the Fourier series to represent f(x) = (1/4)(π − x)², 0 < x < 2π.
Hence obtain the following relations
  (i) 1/1² + 1/2² + 1/3² + ⋯ = π²/6        (ii) 1/1² − 1/2² + 1/3² − 1/4² + ⋯ = π²/12
(iii) 1/1² + 1/3² + 1/5² + ⋯ = π²/8
Solution: Considering f(x) to be a periodic function with period 2π, the Fourier-series expansion of f(x) is
f(x) ∼ a₀/2 + ∑_{n=1}^∞ (aₙ cos nx + bₙ sin nx)

where a₀ = (1/π)∫₀^{2π} (1/4)(π − x)² dx = −(1/12π)[(π − x)³]₀^{2π} = (1/12π)[π³ + π³] = π²/6

aₙ + ibₙ = (1/π)∫₀^{2π} (1/4)(π − x)² e^{inx} dx
         = (1/4π)[(π − x)²(−(i/n)e^{inx}) + 2(π − x)(−(1/n²)e^{inx}) + 2((i/n³)e^{inx})]₀^{2π}
         = (1/4π)[−iπ²/n + iπ²/n + 2π/n² + 2π/n² + 2i/n³ − 2i/n³] = 1/n²

Equate real and imaginary parts
aₙ = 1/n²,  bₙ = 0
∴ Fourier-series expansion of f(x) is
f(x) ∼ π²/12 + ∑_{n=1}^∞ (1/n²) cos nx    (1)

∴ f(0) = [f(0 + 0) + f(0 − 0)]/2 = [f(0 + 0) + f(2π − 0)]/2 = (1/2)[π²/4 + π²/4] = π²/12 + ∑_{n=1}^∞ 1/n²

∴ ∑_{n=1}^∞ 1/n² = π²/4 − π²/12 = π²/6

∴ 1/1² + 1/2² + 1/3² + ⋯ = π²/6    (2)

Taking x = π in (1)
π²/12 + ∑_{n=1}^∞ (−1)ⁿ/n² = 0
∴ 1/1² − 1/2² + 1/3² − 1/4² + ⋯ = π²/12    (3)

Add (2) and (3)
2(1/1² + 1/3² + 1/5² + ⋯) = π²/6 + π²/12 = 3π²/12
∴ 1/1² + 1/3² + 1/5² + ⋯ = π²/8
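The three sums deduced above are easy to confirm numerically; a quick sketch, assuming NumPy is available:

```python
# Truncated sums against pi^2/6, pi^2/12 and pi^2/8.
import numpy as np

n = np.arange(1, 200001)
print(round(np.sum(1.0/n**2), 4),             round(np.pi**2/6, 4))
print(round(np.sum((-1.0)**(n + 1)/n**2), 8), round(np.pi**2/12, 8))
print(round(np.sum(1.0/(2*n - 1)**2), 5),     round(np.pi**2/8, 5))
```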

Example 3.4: Obtain the Fourier series for f ( x ) = e − x ; 0 < x < 2π and deduce that
( −1)
n
π ∞
=∑ .
2 sinh π n = 2 n2 + 1
Solution: Considering f ( x ) to be periodic function with period 2π , Fourier-series expansion
of f ( x ) is ∞
a
f ( x ) ∼ 0 + ∑ ( an cos nx + bn sin nx )
2 n =1 
Fourier Series, Fourier Integrals and Fourier Transforms  | 291

1 1 −x 1 e −π eπ − e −π (
2e −π sinhh π )
( ) ( )
2π 2π
where a0 =
π ∫0
e − x dx = −
π
e
0
=
π
1 − e −2π =
π
=
π


1 1 1
( )
2π 2π
an + ibn =
π ∫0
e − x e inx dx =
π ( −1 + in )
e − x e inx
0


=−
1 1 + in −2π
e − 1 =
(1 + in ) 1 − e ( −2π
)
π 1+ n 2 
π 1 + n2 ( ) 

=
(
(1 + in ) e −π eπ − e −π ) = 2 (1 + in ) e sinh π −π

π (1 + n 2
) π (1 + n ) 2

Equate real and imaginary parts
2e −π sinh π 2ne −π sinh π
an = , bn =
π ( n2 + 1) π ( n2 + 1)

∴ Fourier-series expansion of f ( x ) is
1 −π  ∞
1 
f ( x) ∼ e sinh π 1 + 2∑ 2 ( cos nx + n sin nx )
π  n =1 n + 1 
Take x = π
 ( −1) 
n

1 −π
e −π = e sinh π 1 + 2∑ 2 
π  n =1 n + 1 


1 ∞ ( −1) ( −1)
n n
π ∞
∴ = +∑ 2 =∑ 2
2 sinh π 2 n =1 n + 1 n = 2 n + 1 

1 3
Example 3.5: Expand f ( x ) = x sin x, 0 < x < 2π as a Fourier series and deduce that ∑ = .
n=2 n − 1
2
4
Solution: Considering f ( x ) to be a periodic function with period 2π , Fourier-series expansion
of f ( x ) is

a
f ( x ) ∼ 0 + ∑ ( an cos nx + bn sin nx ) (1)
2 n =1
1 2π 1
[ − x cos x + sin x ]0 = −2

where a0 =
π ∫
0
x sin x dx =
π 
1 2π 1 2π  eix − e − ix  inx
an + ibn =
π ∫
0
x sin x e inx dx =
π ∫ 0
x
 2i
 e dx
 


=
1 2π
2π i ∫0
i n +1 x
(
x e ( ) − e ( ) dx (2)
i n −1 x
)
292 | Chapter 3

1  x  − i e i (n +1) x  −  − 1 e i (n +1) x 
=     
2π i   n + 1  ( n + 1)
2


  
i i (n −1) x   1  
− x  − e  − − e i (n −1) x   ; n ≠ 1
 n −1   ( n − 1) 2  
  
 0

1  2π i 1 1 2π i 1 1 
= − + − + − + ; n ≠ 1
2π i  n + 1 ( n + 1) ( n + 1)
2 2
n − 1 ( n − 1) ( n − 1)2 
2
 
 1 1  2
= −  = 2 ; n ≠1
 n − 1 n + 1  n −1 
Equate real and imaginary parts
2
an = , bn = 0; n ≠ 1
n2 − 1 
From (2)
1 2π
a1 + ib1 =
2π i ∫0
(
x e 2ix − 1 dx )


1   i 2ix  1 2ix x 2 
= x − e  + e − 
2π i   2  4 2 0

1  2π i 1 1 
= − + − − 2π 2 
2π i  2 4 4 
1
= − +πi
2 
Equate real and imaginary parts
1
a1 = − , b1 = π
2 
∴ Fourier-series expansion of f ( x ) is

1 1
f ( x ) ∼ −1 − cos x + π sin x + 2∑ 2 cos nx (3)
2 n=2 n − 1

Other method to find an and bn


1 2π


an =
π ∫0
x sin x cos nx dx

1 2π
= ∫ x sin ( n + 1) x − sin ( n − 1) x  dx
2π 0

Fourier Series, Fourier Integrals and Fourier Transforms  | 293

1   1 1 
=
2π  x  − n + 1 cos ( n + 1) x + n − 1 cos ( n − 1) x 
  

 1 1   
− − sin ( n + 1) x + sin ( n − 1) x   ; n ≠ 1
 ( n + 1) (n − 1)
2 2
  0

1   1 1  1 1 2
=
2π  2π − n + 1 + n − 1  = n − 1 − n + 1 = 2 ; n ≠ 1
   n −1 
1 2π 1 2π
a1 =
π ∫ 0
x sin x cos x dx =
2π ∫ 0
x sin 2 x dx

1   1   1  1
=  x  − cos 2 x  −  − sin 2 x   = −
2π   2   4 0 2

1 2π
π ∫0
bn = x sin x sin nx dx

1 2π
= ∫ x cos ( n − 1) x − cos ( n + 1) x  dx
2π 0

1   1 1 
=
2π  x  n − 1 sin ( n − 1) x − n + 1 sin ( n + 1) x 
  

 −1 1 
− cos ( n − 1) x + cos ( n + 1) x  ; n ≠ 1 
 ( n − 1) ( n + 1)  0
2 2

∴ bn = 0; n ≠ 1 

1 2π 1 2π
b1 = ∫ x sin 2 x dx = ∫ x (1 − cos 2 x ) dx
π 0 2π 0


1  x2   1   1   1
=  −  x  sin 2 x  −  − cos 2 x   =  2π 2  = π
2π  2   2   4   0 2π 

From (3)
f (0 + 0) + f (0 − 0) f ( 0 + 0 ) + f ( 2π − 0 )
f (0) = =
2 2

0+0 1 1 ∞
= = 0 = −1 − + 2∑ 2 
2 2 n=2 n −1

1 3
∴ ∑n
n=2
2
=
−1 4 
294 | Chapter 3

Example 3.6: The function f ( x ) is given by


 −π ; − π < x < 0
f (x) = 
x ; 0 < x < π 
f ( x + 2π ) = f ( x )

1 1 1 π2
Draw its graph and find its Fourier series and hence show that 2 + 2 + 2 +  = .
Solution: Fourier-series expansion of f ( x ) is 1 3 5 8
a0 ∞
f ( x) ∼ + ∑ ( an cos nx + bn sin nx )
2 n =1 
1 0 1   x2  
π
π

−π dx + ∫ x dx = ( −π x )−π +    
π  ∫−π
0
where a0 =
0  π   2 0 

1 π2  π
=  −π 2 +  = −
π 2  2
1 0
−π e inx dx + ∫ xe inx dx 
π
an + ibn =
π  ∫− π 0 

1  π i inx 0   i inx   1 inx   
π

= 
π  n
e ( )
+  x  − e  −  − 2 e   
−π
  n   n  0 


1 π i
= 
πn
(n πi
n
)
1 − ( −1) − ( −1) + 2 ( −1) − 1 
n

n
1 n
( 

)
i 1
= 1 − 2 ( −1)  − 2 1 − ( −1) 
n n

n  πn  

Equate real and imaginary parts
1 − 2 ( −1)
n
1 
an = − 2 1 − ( −1) , bn = 
n

πn   n 
2
∴ a2 n = 0, a2 n −1 = − ; n = 1, 2, 3,…
π ( 2n − 1)
2


1 3
b2 n = − , b2 n −1 = ; n = 1, 2, 3,…
2n 2n − 1 
∴ Fourier-series expansion of f ( x ) is
π 2 ∞ cos ( 2n − 1) x ∞  3 sin ( 2n − 1) x sin 2nx 
f (x) ∼ − − ∑ +∑ − 
4 π n =1 ( 2n − 1)2 n =1  2n − 1 2n 
f (0 + 0 ) + f (0 − 0 ) 0 + ( −π ) π 2 ∞ 1
f (0 ) = = =− − ∑ 
2 2 4 π n =1 ( 2n − 1)2

1 π 
2
∴ ∑ =
( 2n − 1)
2
n =1 8
Fourier Series, Fourier Integrals and Fourier Transforms  | 295

1 1 1 π2
∴ + + +  = 
12 32 52 8
Graph of f ( x ) is shown below
fx

fx =π

x
− π − π − π −π o π π π π π

f x = −π

Figure 3.2

 denotes that the point is not in the graph.

Example 3.7: Find the Fourier series to represent the function  f ( x ) given by
            f(x) = x,  0 ≤ x ≤ π;   2π − x,  π ≤ x ≤ 2π
and deduce that 1/1² + 1/3² + 1/5² + ⋯ = π²/8.
Solution: Fourier-series expansion (assuming f(x) periodic of period 2π) of f(x) is
            f(x) = a₀/2 + ∑_{n=1}^∞ (aₙ cos nx + bₙ sin nx)
where       a₀ = (1/π)[∫_0^π x dx + ∫_π^{2π} (2π − x) dx] = (1/π)[(x²/2)|_0^π + (2πx − x²/2)|_π^{2π}]
               = (1/π)[π²/2 + 4π² − 2π² − 2π² + π²/2] = π

            aₙ + ibₙ = (1/π)[∫_0^π x e^{inx} dx + ∫_π^{2π} (2π − x) e^{inx} dx]
                     = (1/π){[x(−(i/n) e^{inx}) − (−(1/n²) e^{inx})]_0^π
                             + [(2π − x)(−(i/n) e^{inx}) − (−1)(−(1/n²) e^{inx})]_π^{2π}}



                     = (1/π)[−(iπ/n)(−1)ⁿ + (1/n²)((−1)ⁿ − 1) + (iπ/n)(−1)ⁿ − (1/n²)(1 − (−1)ⁿ)]
                     = −(2/(πn²))[1 − (−1)ⁿ]

Equate real and imaginary parts
            aₙ = −(2/(πn²))[1 − (−1)ⁿ],  bₙ = 0;  n = 1, 2, 3, …
∴           a₂ₙ = 0,  a₂ₙ₋₁ = −4/(π(2n − 1)²);  n = 1, 2, 3, …

∴ Fourier-series expansion of f(x) is
            f(x) = π/2 − (4/π) ∑_{n=1}^∞ cos(2n − 1)x/(2n − 1)²

∴           f(0) = 0 = π/2 − (4/π) ∑_{n=1}^∞ 1/(2n − 1)²

∴           ∑_{n=1}^∞ 1/(2n − 1)² = π²/8

i.e.        1/1² + 1/3² + 1/5² + ⋯ = π²/8.
0 , −π ≤ x ≤ 0
Example 3.8: If f ( x ) = 
sin x , 0 ≤ x ≤ π
Prove that
1 1 2 ∞ cos 2n x
f ( x ) = + sin x − ∑
π 2 π n =1 4 n2 − 1 
Hence, show that
1 1 1 1 1 1 1 1 π −2
(i)  + + +  = (ii)  − + − + =
1.3 3.5 5.7 2 1.3 3.5 5.7 7.9 4
Solution: Considering f ( x ) to be periodic with period 2π , Fourier-series expansion of f ( x ) is

            f(x) = a₀/2 + ∑_{n=1}^∞ (aₙ cos nx + bₙ sin nx)
where       a₀ = (1/π) ∫_0^π sin x dx = (1/π)(−cos x)|_0^π = (1/π)(1 + 1) = 2/π

            aₙ + ibₙ = (1/π) ∫_0^π sin x e^{inx} dx = (1/π)[(e^{inx}/(1 − n²))(in sin x − cos x)]_0^π;  n ≠ 1
                     = (1/(π(1 − n²)))[1 + (−1)ⁿ];  n ≠ 1
                     = −(1/(π(n² − 1)))[1 + (−1)ⁿ];  n ≠ 1


Equate real and imaginary parts
            aₙ = −(1/(π(n² − 1)))[1 + (−1)ⁿ],  bₙ = 0;  n = 2, 3, …
∴           a₂ₙ = −2/(π(4n² − 1));  n = 1, 2, 3, …
            a₂ₙ₋₁ = 0,  bₙ = 0;  n = 2, 3, …

            a₁ = (1/π) ∫_0^π sin x cos x dx = (1/(2π)) ∫_0^π sin 2x dx = −(1/(4π))(cos 2x)|_0^π = 0
            b₁ = (1/π) ∫_0^π sin² x dx = (1/(2π)) ∫_0^π (1 − cos 2x) dx = (1/(2π))[x − (1/2) sin 2x]_0^π = 1/2

∴ Fourier-series expansion of f(x) is
            f(x) = 1/π + (1/2) sin x − (2/π) ∑_{n=1}^∞ cos 2nx/(4n² − 1)        (1)

Take x = 0
            0 = 1/π − (2/π) ∑_{n=1}^∞ 1/((2n − 1)(2n + 1))
∴           1/(1·3) + 1/(3·5) + 1/(5·7) + ⋯ = 1/2

In (1), take x = π/2
            1 = 1/π + 1/2 + (2/π) ∑_{n=1}^∞ (−1)ⁿ⁺¹/((2n − 1)(2n + 1))
⇒           ∑_{n=1}^∞ (−1)ⁿ⁺¹/((2n − 1)(2n + 1)) = (π − 2)/4

∴           1/(1·3) − 1/(3·5) + 1/(5·7) − 1/(7·9) + ⋯ = (π − 2)/4
Example 3.9: Find the Fourier series of f ( x ) = x in −π < x < π , f ( x + 2π ) = f ( x ).
Solution: f(x) is an odd function of x and hence the Fourier series for f(x) is
            f(x) ∼ ∑_{n=1}^∞ bₙ sin nx
where       bₙ = (2/π) ∫_0^π x sin nx dx = (2/π)[x(−(1/n) cos nx) − (−(1/n²) sin nx)]_0^π
               = (2/n)(−1)ⁿ⁺¹

∴ Fourier series for f(x) is
            f(x) ∼ 2 ∑_{n=1}^∞ ((−1)ⁿ⁺¹/n) sin nx
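For intuition, the partial sums of this sawtooth series can be evaluated directly; the sketch below (NumPy assumed, truncation orders chosen arbitrarily) shows the slow 1/n-type convergence inside (−π, π).

```python
import numpy as np

def sawtooth_series(x, N):
    # Partial sum of 2 * sum_{n=1}^{N} (-1)^(n+1) sin(n x) / n
    n = np.arange(1, N + 1)
    return 2 * np.sum((-1.0)**(n + 1) * np.sin(n * x) / n)

for N in (10, 100, 1000, 10000):
    print(N, sawtooth_series(1.0, N), sawtooth_series(3.0, N))
# Near the jump at x = pi the truncated sums overshoot (Gibbs phenomenon); at x = pi
# itself every partial sum is 0, the mean value of the jump.
```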

Example 3.10: Find the Fourier-series representations for


 (i)  f ( x ) = sin x , −π < x < π
(ii)  f ( x ) = cos x , −π < x < π
Also draw the graphs.
Solution: (i) Graph of  f ( x ) considering it periodic with period 2π is
[Graph: y = |sin x| over several periods.]

Figure 3.3
f ( x ) is even function of x. Its Fourier-series representation is

a
f ( x ) ∼ 0 + ∑ an cos nx
2 n =1  π
2 π  2  4
where a0 = ∫ sin x dx =  − cos x  =
π 0
 π 0 π 
2 π 1 π
an = ∫ sin x cos nx dx = ∫ sin ( n + 1) x − sin ( n − 1) x  dx
π 0 π 0 
π
1  1 1 
=
π  − n + 1 cos ( n + 1) x + n − 1 cos ( n − 1) x  ; n ≠ 1
 0 

1 1
=
π  n + 1
{
( −1) + 1 −
n
}1
n −1
{
( −1) + 1 
n


}
2 ( −1) + 1
n
=−
π ( n2 − 1)  

∴ a2 n −1 = 0 ; n = 2, 3, 4,… 
4
a2 n = − ; n = 1, 2, 3,…
π ( 4 n2 − 1)

2 π 1 π 1
( − cos 2 x )0 = 0
π
a1 = ∫ sin x cos x dx = ∫ sin 2 x dx =
π 0 π 0 2π 
∴ Fourier-series representation of f ( x ) is
2 4 ∞ cos 2nx
f ( x) ∼ − ∑
π π n =1 ( 2n − 1) ( 2n + 1)


(ii) Consider f(x) to be periodic with period 2π.

[Graph: y = |cos x| over several periods.]

Figure 3.4
Here,  f ( x ) is an even function of x.
Fourier-series representation of f ( x ) is
a0 ∞
f ( x) ∼ + ∑ an cos nx
2 n =1 
2 π 2 π π  2 π
π  4
where a0 = ∫ cos x dx =  ∫ 2 cos x dx − ∫π cos x dx  = ( sin x )02 − ( siin x )π  =
π 0 π 0  π  2  π
2

2 π 1 π π 
an = ∫ cos x cos nx dx =  ∫ 2 2 cos nx cos x dx − ∫π 2 cos nx cos x dx 
n 0 π 0 
2

1  π2 π 
=  ∫0 cos ( n + 1) x + cos ( n − 1) x  dx − ∫π cos ( n + 1) x + cos ( n − 1) x  dx 
π 2 
 π
1  1 1 2
=  sin ( n + 1) x + sin ( n − 1) x 
π  n + 1 n −1 0

π 
 1 1 
− sin ( n + 1) x + sin ( n − 1) x   ; n ≠ 1 
n +1 n −1 π 
2

2 1 π 1 π
=  sin ( n + 1) + sin ( n − 1)  ; n ≠ 1
π  n +1 2 n −1 2 
∴ a2 n −1 = 0 ; n = 2, 3, 4,… 
4 ( −1)
n +1
2 1
( −1)  =
1
( −1) −
n n
a2 n =  ; n = 1, 2,…
π  2n + 1 2n − 1  π ( 2n − 1) ( 2n + 1) 

2 π 1  π2 π 
a1 = ∫ =  ∫0 2 cos x dx − ∫π 2 cos x dx 
2 2
cos x cos x dx
π 0 π 
2

1  π2 π 
=  ∫ (1 + cos 2 x ) dx − ∫π (1 + cos 2 x ) dx 
π 0 2 
 π
π 
1  1 2  1 
=  x + sin 2 x  −  x + sin 2 x  
π  2 0  2 π 
 2 

1 π  π 
=  −  π −  = 0
π 2  2 

∴ Fourier-series representation of f ( x ) is
2 4 ∞ ( −1) cos 2nx
n +1

f ( x) ∼ + ∑ .
π π n =1 ( 2n − 1) ( 2n + 1)

Example 3.11: Expand f ( x ) = cos ax as a Fourier series in ( −π , π ) where a is fraction. What


will happen to Fourier series if a is integer?
Solution: f ( x ) is even function of x. Considering f ( x ) to be periodic function of period 2p,
Fourier-series expansion of f ( x ) is

a
f ( x ) ∼ 0 + ∑ an cos nx
2 n =1 
2 π 2 2
( sin ax )0 = sin aπ
π
where a0 = ∫ cos ax dx =
π 0 πa πa 
2 π 1 π
an = ∫ cos ax cos nx dx = ∫ cos ( n + a ) x + cos ( n − a ) x  dx
π 0 π 0 
π
1 1 1 
=  sin ( n + a ) x + sin ( n − a ) x 
π n + a n−a 0 
1  ( −1) ( −1)  2a ( −1)n +1
n n

= sin aπ − sin aπ  = sinn aπ


π  n+a

n−a (
 π n2 − a 2 ) 
∴ Fourier-series representation of f ( x ) is
( −1)
n +1

1 2a
f ( x) ∼ sin ( aπ ) + sin ( aπ ) ∑ 2 cos nx
πa π n =1 n − a
2

 ( ) 
n +1
2a 1 ∞ − 1
= sin ( aπ )  2 + ∑ 2 cos nx 
π  2a n =1 n − a
2


When a is an integer, then Fourier-series expansion in ( −π , π ) is
f ( x) = 1 if a = 0
       
= cos ax    if a is positive integer
= cos ( −ax ) if a is negative integer.

Example 3.12: Obtain Fourier expansion for √(1 − cos x) in the interval −π < x < π.
Solution: Here, f(x) = √(1 − cos x) = √2 sin(x/2) for 0 ≤ x ≤ π, and f(x) is an even function of x. Consider f(x) to be
a periodic function with period 2π. Its Fourier-series expansion is
            f(x) ∼ a₀/2 + ∑_{n=1}^∞ aₙ cos nx
where       a₀ = (2/π) ∫_0^π √2 sin(x/2) dx = (2√2/π)[−2 cos(x/2)]_0^π = 4√2/π

            aₙ = (2/π) ∫_0^π √2 sin(x/2) cos nx dx
               = (√2/π) ∫_0^π [sin((n + 1/2)x) − sin((n − 1/2)x)] dx
               = (√2/π)[−(2/(2n + 1)) cos((2n + 1)x/2) + (2/(2n − 1)) cos((2n − 1)x/2)]_0^π
               = (2√2/π)[1/(2n + 1) − 1/(2n − 1)] = −(4√2/π)·1/(4n² − 1)

∴ Fourier-series expansion is
            f(x) ∼ 2√2/π − (4√2/π) ∑_{n=1}^∞ cos nx/(4n² − 1).

Example 3.13: Express f ( x ) = x , − π < x < π as Fourier series.


1 1 1 π2
Hence show that 2 + 2 + 2 +  = .
1 3 5 8
Solution: f ( x ) is an even function of x. Considering f ( x ) to be periodic function with period

2π , Fourier-series expansion of f ( x ) is


a
f ( x ) ∼ 0 + ∑ an cos nx
2 n =1  π
2 π 2  x2 
where a0 = ∫ x dx =   = π
π 0 π  2 0

π
2 π 2  1   1 
π ∫0
an = x cos nx dx =  x  sin nx  −  − 2 cos nx  
π  n   n 0

−2
= 2 1 − ( −1) 
n

πn  

4
∴ a2 n = 0, a2 n −1 = − ; n = 1, 2, 3, …
π ( 2n − 1)
2



∴ Fourier-series expansion of f (x) is


π 4 ∞ cos ( 2n − 1) x
f ( x) ∼ − ∑
2 π n =1 ( 2n − 1)2

Take x = 0
π 4 ∞ 1
f (0) = 0 = − ∑
2 π n =1 ( 2n − 1)2

1 1 1 π2
\ 2
+ 2 + 2 + = .
1 3 5 8 

Example 3.14: Obtain the Fourier series for the function f ( x ) = x 2 , − π ≤ x ≤ π . Sketch the
graph of f ( x ). Hence show that
1 1 1 1 ∞
1 π2
  (i)  2 + 2 + 2 + 2 +  = ∑ 2 =
1 2 3 4 n =1 n 6
1 1 1 1 π2

 (ii)  + − +  =
12 22 32 4 2 12
1 1 1 ∞
1 π2
(iii)  2 + 2 + 2 +  = ∑ =
n =1 ( 2n − 1)
2
1 3 5 8

Solution:  f ( x ) is an even function. Considering f ( x ) to be periodic function of period 2π ,


   
Fourier series of f ( x ) is
  ∞
a
f ( x ) = x 2 = 0 + ∑ an cos nx
2 n =1
π 
2 π 2 2  x3  2π 2
where a0 = ∫ x dx =   =
π 0 π  3 0 3

2 π 2
an = ∫ x cos nx dx
π 0
π
2  21   1   1  4
sin nx  − 2 x  − 2 cos nx  + 2  − 3 sin nx   = 2 ( −1)
n
= x
π   n   n   n 0 n 
∴ Fourier series of f ( x ) is

            f(x) = π²/3 + 4 ∑_{n=1}^∞ ((−1)ⁿ/n²) cos nx        (1)

Take x = π
            f(π) = π² = π²/3 + 4 ∑_{n=1}^∞ 1/n²

∴           1/1² + 1/2² + 1/3² + ⋯ = ∑_{n=1}^∞ 1/n² = π²/6        (2)

Take x = 0 in (1)
            f(0) = 0 = π²/3 + 4 ∑_{n=1}^∞ (−1)ⁿ/n²

∴           1/1² − 1/2² + 1/3² − 1/4² + ⋯ = π²/12        (3)

Add (2) and (3)
            2(1/1² + 1/3² + 1/5² + ⋯) = π²/6 + π²/12 = 3π²/12 = π²/4

⇒           1/1² + 1/3² + 1/5² + ⋯ = π²/8
Graph of f ( x ) is
[Graph: the 2π-periodic extension of f(x) = x², a continuous even function made of parabolic arches.]

Figure 3.5
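The series just obtained can be summed numerically; a short sketch (NumPy assumed, truncation order arbitrary) comparing partial sums with x² and checking the Basel sum π²/6:

```python
import numpy as np

def x2_series(x, N=200):
    # Partial sum of pi^2/3 + 4 * sum_{n=1}^{N} (-1)^n cos(n x) / n^2
    n = np.arange(1, N + 1)
    return np.pi**2 / 3 + 4 * np.sum((-1.0)**n * np.cos(n * x) / n**2)

for x in (0.0, 1.0, np.pi / 2, np.pi):
    print(x, x2_series(x), x**2)

# Deduction (2): sum 1/n^2 = pi^2/6
print(sum(1.0 / n**2 for n in range(1, 200000)), np.pi**2 / 6)
```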

Example 3.15: Obtain Fourier series for f ( x ) given by


 2x
1 + π , − π ≤ x ≤ 0
f ( x) = 
1 − 2 x , 0 ≤ x ≤ π
 π 
1 1 1 π2
Hence, deduce that 2 + 2 + 2 +  =
1 3 5 8
 2x
1− π , − π ≤ − x ≤ 0 i.e., 0 ≤ x ≤ π
Solution:  f ( − x ) = 
1 + 2 x , 0 ≤ − x ≤ π i.e., − π ≤ x ≤ 0
 π
\  f ( x ) is even function of x.
Considering f ( x ) to be periodic function of period 2π , Fourier series for f ( x ) is

a
f ( x ) = 0 + ∑ an cos nx
2 n =1 
π
2 π  2x  2 x2 
where a0 = ∫
π 0 1 −  dx =  x −  = 0
π  π π 0


2 π  2x 
π ∫0 
an = 1 −  cos nx dx
π  
π
2  2 x   1   2  1 
= 1 −   sin nx  −  −   − 2 cos nx  
π  π  n   π  n 0 
4
= 2 2 1 − ( −1) 
n

π n  

8
∴ a2 n = 0, a2 n −1 = ; n = 1, 2, 3, …
π 2 ( 2n − 1)
2


∴ Fourier series for f ( x ) is
8 cos( 2n − 1) x

f ( x) =
π2

( 2n − 1) 2 
n =1

8 ∞ 1
\ f (0) = 1 = 2 ∑
π n =1 ( 2n − 1)2

1 1 1 π2
\ + + + =
12 32 52 8 
−k ; − π < x < 0
Example 3.16: If f ( x ) = 
k ; 0 < x < π
and f ( x + 2π ) = f ( x ) for all x, obtain the Fourier series for f ( x ). Deduce that
1 1 1 π
1 − + − +  = .
3 5 7 4
−k ; − π < − x < 0 i.e., 0 < x < π
Solution: f ( − x ) = 
 k ; 0 < −x < π i.e., − π < x < 0
\  f ( x ) is odd function.
Fourier series for f ( x ) is

f ( x ) ∼ ∑ bn sin nx
n =1  π

where
2 π
bn = ∫ k sin nx dx =
π 0
2k  1

π  n
− cos nx  =

0 nπ
2k
(
1 − ( −1)
n


)
4k
\ b2 n = 0; b2 n −1 = ; n = 1, 2, 3,...
π ( 2n − 1)

\ Fourier series for f ( x ) is
            f(x) ∼ (4k/π) ∑_{n=1}^∞ sin(2n − 1)x/(2n − 1)

            f(π/2) = k = (4k/π) ∑_{n=1}^∞ (−1)ⁿ⁺¹/(2n − 1)

∴           1 − 1/3 + 1/5 − 1/7 + ⋯ = π/4.
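The square-wave series of Example 3.16 is easy to explore numerically; a sketch with NumPy in which k = 1 and the sample points are arbitrary illustrative choices:

```python
import numpy as np

def square_series(x, k=1.0, N=500):
    # Partial sum of (4k/pi) * sum_{n=1}^{N} sin((2n-1)x)/(2n-1)
    n = np.arange(1, N + 1)
    return (4 * k / np.pi) * np.sum(np.sin((2 * n - 1) * x) / (2 * n - 1))

# Inside (0, pi) the sum approaches +k, inside (-pi, 0) it approaches -k, and at x = 0 it is 0.
for x in (-1.0, -0.3, 0.0, 0.3, 1.0):
    print(x, square_series(x))

# The deduction at x = pi/2: 1 - 1/3 + 1/5 - 1/7 + ... = pi/4
print(sum((-1.0)**n / (2 * n + 1) for n in range(200000)), np.pi / 4)
```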

Example 3.17: Sketch the curve for the function


π + x ; −π ≤ x ≤ −π / 2

f ( x ) = π / 2 ; −π / 2 ≤ x ≤ π / 2
π − x ; π /2≤ x ≤π

and find its Fourier expansion.
Solution: Considering f ( x ) to be periodic function of period 2π , graph of f ( x ) is

[Graph: the 2π-periodic extension of f(x), a continuous trapezoidal wave with maximum value π/2.]

Figure 3.6

The graph is symmetrical about line x = 0.


\ f ( x ) is even function.

Fourier-series expansion of f ( x ) is

a0 ∞
f ( x ) = + ∑ an cos nx
2 n =1 
 π π 
2  π /2 π π  2  π x  2  x2  
where a0 =  ∫ dx + ∫π (π − x ) dx  =   + π x − 
π 0 2  π  2 0  2 π 
2 
2
 
2 π 2
π 2
π 2
π  3π
2
=  + − + =
π 4 2 2 8  4

2  π2 π π 
an =  ∫0 cos nx dx + ∫π (π − x ) cos nx dx 
π 2 
2

 π
π 
2  π 
2 π − x 1 
=  sin nx  +  sin nx − 2 cos nx  
π   2n 0  n n π 
 2 


2π nπ π nπ 1  nπ 
− 2 ( −1) − cos 
n
=  sin − sin
π  2n 2 2n 2 n  2  
2  nπ 
2 (
−1) − cos 
n
=−
πn  2 
2
\ a2 n −1 = ; n = 1, 2,...
π ( 2n − 1)
2


2 
1 − ( −1) ; n = 1, 2,...

n
a2 n = −
4π n2  

1
\ a4 n = 0, a2( 2 n −1) = − ; n = 1, 2, 3,...
π ( 2n − 1)
2


\ Fourier-series expansion of f ( x ) is
3π 1 ∞ 1
f ( x) = + ∑  2 cos ( 2n − 1) x − cos 2 ( 2n − 1) x 
8 π n =1 ( 2n − 1)2 

Example 3.18: Find the Fourier series for the function f ( x ) = 2 x − x 2 , 0 < x < 3 and deduce that

1 π2
∑n=1 n
2
= .
6
3
Solution: Length of period = 2l = 3   ∴ l =
2
Fourier series for f ( x ) is
a0 ∞  2nπ x 2nπ x 
f ( x) ∼ + ∑  an cos + bn sin
2 n =1  3 3  
3
2 3 2 x3 
where a0 = ∫
3 0
( )
2 x − x 2 dx =  x 2 −  = 0
3 3 0

2 nπ x
2 3
( )
i
an + i bn = ∫ 2 x − x e 3 dx
2

3 0  3
2      27i i 2 n3π x  
2 nπ x 2 nπ x
3i 9
( )
i
 − (2 − 2x )  − 2 2 e
i
=  2x − x  −  + ( −2 )  3 3 e
2
e 3 3


3   2nπ   4n π   8n π   0 
2  9i 9 9  3i 9
= − 2 2 − 2 2= − 2 2
3  2nπ n π 2n π  nπ n π 
Equate real and imaginary parts
9 3
an = − 2 2 , bn = ; n = 1, 2, 3,...
nπ nπ 
\ Fourier series for f ( x ) is
3 ∞ 1 2nπ x 3 2nπ x 
f ( x) ∼ ∑ sin 3 − π n2 cos 3 
π n =1  n

Fourier Series, Fourier Integrals and Fourier Transforms  | 307

For x = 3
f (3 − 0 ) + f (3 + 0 ) 9 ∞
1
2
=−
π2
∑n 2
n =1 
f (3 − 0 ) + f ( 0 + 0 )
−9 ∞ 1
⇒ = 2∑ 2
2 π n =1 n 

1 π  −3 + 0  π 2
2
⇒ ∑
n=1 n
2
= − =
9  2  6 

Example 3.19: (i) Express f ( x ) = x 2 in the Fourier series for 0 < x < 2.
π2 1 1 1
(ii)  Find the Fourier series for f ( x ) = x 2 in ( 0, 4 ) and deduce that = 1 + 2 + 2 + 2 +
6 2 3 4
Solution: (i) Fourier series for f ( x ) = x 2 in ( 0, 2 ) is
a0 ∞
f ( x) ∼ + ∑  an cos ( nπ x ) + bn sin ( nπ x ) 
2 n =1  
1 3 8
( )
2 2
where a0 = ∫ x 2 dx = x =
0 3 0 3
2
an + i bn = ∫ x 2 e inπ x dx
0 
2
  i inπ x   1   i 
=  x2  − e  − 2 x  − 2 2 e inπ x  + 2  3 3 e inπ x  
  nπ   n π   n π 0 

4i 4
=−
+ 2 2
nπ n π 
Equate real and imaginary parts
4 4
an = , bn = − ; n = 1, 2, 3,... 
n 2π 2 nπ
\ Fourier series for f ( x ) is
1 1 ∞ 1  1 
f ( x) ∼ 4  + ∑  cos ( nπ x ) − sin ( nπ x )  
 3 π n =1 n  nπ 
(ii)  Fourier series for f ( x ) = x 2 in ( 0, 4 ) is
a0 ∞  nπ x nπ x 
f ( x) ∼ + ∑ an cos + bn sin
2 n =1  2 2  

1 4 2 1 32
( )
4
where a0 =
2 ∫0
x dx = x 3
6 0
=
3 
1 4 2 i nπ2 x
2 ∫0
an + i bn = x e dx


4
1   −2i i nπ2 x   4 i nπ x   8i i nπ x  
=  x2  e  − 2 x  − 2 2 e 2  + 2  3 3 e 2 
2   nπ   nπ  n π   0 
1  32i 32  16i 16
=
 − + 2 2=− + 2 2
2  nπ n π  nπ n π 
Equate real and imaginary parts
16 16
an = 2 2 , bn = − ; n = 1, 2, 3,...
n π nπ 
\ Fourier series for  f ( x ) is

1 1 ∞ 1  1 nπ x nπ x  
f ( x ) ∼16  + ∑  cos − sin
 3 π n =1 n  nπ 2 2  
For x = 4
f (4 − 0) + f (4 + 0) 16 16 ∞
1
2
= +
3 π2
∑n 2
n =1 
f (4 − 0) + f (0 + 0) 16 16 ∞
1

2
= +
3 π2
∑n
n =1
2


16 + 0 16 16 ∞
1

2
= + 2
3 π
∑n 2
n =1 

1 2 1 1  π 2
⇒ ∑
n=1 n
2
= π −
 2 3 6
 
=

Example 3.20: Find the Fourier series of
π x ; 0 ≤ x <1

f ( x ) = 0 ; x =1
π x − 2 ; 1 < x ≤ 2
 ( ) 
π 1 1 1
Hence deduce that = 1 − + − +
4 3 5 7
Solution: Fourier series of f ( x ) is

a
f ( x ) ∼ 0 + ∑  an cos ( nπ x ) + bn sin ( nπ x ) 
2 n =1  2
π 2 1  x2  π  1 
a0 = ∫ π x dx + ∫ π ( x − 2 ) dx = ( )
1 2
where x + π  − 2 x  = + π  −2 − + 2  = 0 
0 1 2 0
 2 1 2  2 
an + ibn = ∫ π x e inπ x dx + ∫ π ( x − 2 ) e inπ x dx
1 2

0 1 
1 2
  i inπ x   1 inπ x     i inπ x   1 inπ x  
= π x  − e  −  − 2 2 e   + π ( x − 2 )  − e  −  − 2 2 e 
  nπ   nπ 0   nπ   nπ  1 

 i ( −1)n i ( −1)
n


= π −
 n π n
1
− 2 2 1 − ( −1) −
π
n

n π
(
n
1
+ 2 2 1 − ( −1)
π
n
) ( ) 

2i 2i
( −1) = ( −1)
n n +1
=−
n n 
Equate real and imaginary parts
2
( −1) ; n = 1, 2,...
n +1
an = 0,    bn =
n 
\ Fourier series of f ( x ) is
( −1)
n +1

f ( x ) ∼ 2∑ sin ( nπ x )
n =1 n 
( −1)
n +1
1 ∞

f   = 2∑ sin
2 n =1 n 2 

( −1)
n +1

Let kn = sin
n 2 
( −1)
n +1

∴ k2 n = 0, k2 n −1 = ; n = 1, 2, 3,…
2n − 1 
( −1)
n +1
π ∞
1 1 1 π
∴ = 2∑ ⇒ 1− + − + =
2 n =1 2 n − 1 3 5 7 4
Other method to find Fourier series
Let the substitution x = y +1 changes f ( x ) to g ( y )

π ( y + 1) ; − 1 ≤ y < 0

g ( y) =  0 ; y=0
π y − 1 ; 0 < y ≤ 1
 ( ) 
g ( y ) is odd function of y.
Its Fourier series is

g ( y ) ∼ ∑ bn sin ( nπ y )
n =1 
bn = 2 ∫ π ( y − 1) sin nπ y dy
1
where
0 
1
  1   1 
= 2π ( y − 1)  − cos nπ y  −  − 2 2 sin nπ y  
  nπ   n π 0 

 1  2
= 2π  −  = −
 nπ  n

\ Fourier series of g(y) is



1
g ( y ) ∼ −2∑ sin ( nπ y )
n =1 n 
But y = x −1changes g ( y ) to f ( x )


1
∴ f ( x ) ∼ −2∑ sin  nπ ( x − 1) 
n =1 n 

1
or f ( x ) ∼ 2∑ sin ( nπ − nπ x )
n =1 n 
( −1)
n +1

or f ( x ) ∼ 2∑ sin ( nπ x )
n =1 n 
Example 3.21: A sinusoidal voltage E sin ωt , where t is time, is passed through a half-wave
r­ectifier that clips the negative portion of the wave. Find the Fourier series of the resulting
­periodic function
0 ; −L<t <0
u (t ) = 
 E sin ω t ; 0<t <L

2π π
Take p = 2 L = , L=
ω ω
Solution: Fourier series of u ( t ) is

a0 ∞  nπ t nπ t 
u ( t ) ∼ + ∑  an cos + bn sin
2 n =1  L L  
a0 ∞  π 
∼ + ∑ [ an cos nωt + bn sin nωt ]    ∵ = ω 
2 n =1  L 
L
1 L E 1  E
E sin ωt dt =  − cos ωt  = (1 − cos ω L )
L ∫0
where a0 =
L ω 0 π 
E 2E
= (1 − cos π ) =    (∵ Lω = π )
π π 

π
1 L Eω ω
an + ibn = ∫ E sin ωt e inωt dt = ∫ sin ωt e inωt dt
L 0 π 0
π 

Eω  e inω t  ω

=  ( inω sin ωt − ω cos ωt ) ; n ≠ 1



π  ω 1 − n2

2
( ) 
0 
E 1 + ( −1)  ; n ≠ 1
n
=

(
π 1 − n2  ) 

Equate real and imaginary parts
−E 
( −1) + 1 , bn = 0 ; n ≠ 1
n
an =
π n −12
(  )

2E
∴ a2 n = − ; n = 1, 2, 3,... 
π ( 4 n2 − 1)
a2 n −1 = 0; n = 2, 3,... 

1 L
L ∫0
a1 = E sin ωt cos ωt dt

E π ω −E
( cos 2ωt )0 = 0
/ π /ω

2 L ∫0
= sin 2ωt dt =
4ω L 
1 L
b1 = ∫ E sin 2 ωt dt
L 0 
π /ω

(1 − cos 2ωt ) dt =  t − sin 2ωt 


E π /ω E 1
2 L ∫0
=
2 L  2ω 0 
Eπ E
= =
2ω L 2

\ Fourier series of u ( t ) is
            u(t) ∼ E/π + (E/2) sin ωt − (2E/π) ∑_{n=1}^∞ cos(2nωt)/(4n² − 1)
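To see that this series really reproduces the rectified wave, it can be resummed numerically; a sketch with NumPy in which E, ω and the sample times are arbitrary illustrative choices:

```python
import numpy as np

E, omega = 1.0, 2 * np.pi        # assumed amplitude and angular frequency
t = np.linspace(0.0, 2.0, 9)     # sample times over two periods (period = 2*pi/omega = 1)

def u_series(t, N=300):
    n = np.arange(1, N + 1)
    return (E / np.pi + (E / 2) * np.sin(omega * t)
            - (2 * E / np.pi) * np.sum(np.cos(2 * n * omega * t) / (4 * n**2 - 1)))

for ti in t:
    exact = max(0.0, E * np.sin(omega * ti))     # the half-wave rectified sine
    print(round(ti, 3), round(u_series(ti), 6), round(exact, 6))
```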

Example 3.22: Find the Fourier series for f ( x ) = x 2 in −l < x < l.


Solution: f ( x ) is even function of x.

Fourier series for f ( x ) is

a ∞
nπ x
f ( x ) ∼ 0 + ∑ an cos
2 n =1 l 

2 l 2 2 3 2l 2
( )
l

l ∫0
where a0 = x dx = x =
3l 0 3 
2 l 2 nπ x
an =
l ∫0
x cos
l
dx

2  l nπ x   l2 nπ x  
=  x 2  sin  − 2 x  − 2 2 cos
l   nπ l   nπ l 
l
 l3 nπ x   4l 2
2 2 (
−1)
n
+2  − 3 3 sin   =
 nπ l  0 n π

\ Fourier series for f ( x ) is


( −1)
n +1
l 2 4l 2 ∞
 nπ x 
f ( x) ∼ −
3 π2
∑n =1 n2
cos 
 l 


Example 3.23: Sketch the graph for the function


 0 ; −2 ≤ x ≤ −1
1 + x ; −1 ≤ x ≤ 0

f ( x) = 
1 − x ; 0 ≤ x ≤1
 0 ; 1≤ x ≤ 2

and obtain its Fourier series.
Solution: f ( x ) is even function with period 4.
Its graph is shown below
[Graph: the triangular pulse f(x) (height 1 on [−1, 1], zero on 1 ≤ |x| ≤ 2), repeated with period 4.]

Figure 3.7

Fourier series of f ( x ) is
a0 ∞ nπ x
f ( x) = + ∑ an cos
2 n =1 2
1
 x2  1
a0 = ∫ f ( x ) dx = ∫ (1 − x ) dx =  x −  =
2 1
where
0 0
 2 0 2
2 nπ x
an = ∫ f ( x ) cos dx
0 2
nπ x
= ∫ (1 − x ) cos
1
dx
0 2
1
  2 nπ x   4 nπ x  
= (1 − x )  sin  +  − 2 2 cos 
  nπ 2  n π 2   0
4  nπ 
= 2 2 1 − cos
nπ  2  
1  4
1 − ( −1)  , a2 n −1 =
n
∴ a2 n = 2 
; n = 1, 2, 3,…

2  π ( 2n − 1)
2 2

2 4
∴ a4 n = 0, a2( 2 n −1) = , a2 n −1 = ; n = 1, 2, 3, …
( 2n − 1) ( 2n − 1)
2 2
π 2
π 2

∴ Fourier series for f ( x ) is

1 2 ∞
1  (2n − 1) π x + cos 2n − 1 π x .
f (x) = + ∑  2 cos {( ) }
4 π2 n =1 ( 2n − 1) 
2
2  

Example 3.24: Find the Fourier-series expansion of the periodic function


2 + x ; − 2 ≤ x ≤ 0
f ( x) = 
2 − x ; 0<x≤2

and f ( x + 4 ) = f ( x ) . Sketch the graph and deduce that
1 1 1 π2
1+ + + +  =
32 52 72 8 
Solution: f (x) is an even function with period 4.
Fourier-series expansion of f ( x ) is
a0 ∞ nπ x
f ( x) = + ∑ an cos
2 n =1 2 
2
 x2 
a0 = ∫ ( 2 − x ) dx =  2 x −  = 2
2
where
0
 2 0

nπ x
an = ∫ ( 2 − x ) cos
2
dx
0 2
2
  2 nπ x   4 nπ x  
= ( 2 − x )  sin  +  − 2 2 cos  
  nπ 2 nπ 2   0

4 
1 − ( −1) 
n
= 2 

2 

8
∴ a2 n = 0, a2 n −1 = ; n = 1, 2, 3, …
( 2n − 1)
2
π 2

∴ Fourier-series expansion of f ( x ) is
8 ∞
1 ( 2n − 1) π x
f ( x) = 1+ ∑ cos
π2 ( 2n − 1)
2
n =1 2


8 1
f (0) = 2 = 1 + ∑
π2 ( 2n − 1)
2
n =1

1 1 1 π 2
∴ 1+ 2
+ 2 + 2 + =
3 5 7 8 

The graph is shown below


[Graph: the triangular wave f(x) of period 4, with peaks of height 2 at x = 0, ±4, ±8, … and zeros at x = ±2, ±6, …]

Figure 3.8

Example 3.25: Determine the Fourier series for the periodic triangle function f ( x ) with period
T  T T
T defined for 0 < a ≤ on  − ,  by
2  2 2
 x
1 − ; x ≤a
f ( x) =  a
0 T
; a< x ≤
 2

 x
1 − ; −a ≤ x ≤ a
Solution: f ( x) =  a
0 T T
; − ≤ x < −a, a < x ≤
 2 2
f ( x ) is an even function.
Fourier series for f ( x ) is
a0 ∞ 2nπ x
f ( x) = + ∑ an cos
2 n =1 T 
a
4 a x  4 x2  2a
where a0 = ∫ 1 −  dx =  x −  =
T  a
0 T 2a  0 T

4 a x  2nπ x
an = ∫ 1 −  cos dx
T  a
0 T 
a
4  x   T 2nπ x  1  −T 2 2nπ x  
=  1 −   sin +  cos 
T  a   2nπ T  a  4 n2π 2 T 0


4  T2  2nπ a   T  2nπ a 
=  1 − cos T   =  1 − cos T  
T  4 an2 π 2   aπ 2 2
n  
2T nπ a
= sin 2
aπ n 2 2
T 
∴ Fourier series for f ( x ) is
a 2T ∞
nπ a
1 2nπ x
f ( x) = +
T aπ 2
n =1
∑n T2
sin 2
cos
T 
aω 4 ∞
1 nω a
= + ∑ sin 2 2 cos ( nω x ), 
2π aπω n =1 n2

where ω = .
T

Example 3.26: Obtain the Fourier series of the function f given by the following graph
[Graph: the upper semicircle x² + y² = a², −a ≤ x ≤ a, with f(x) = 0 for a ≤ |x| ≤ 3a/2, repeated with period 3a.]

Figure 3.9

Solution: From the graph


0 ; − 3a / 2 ≤ x ≤ − a
 2
f ( x ) =  a − x2 ; − a ≤ x ≤ a
0 ; a ≤ x ≤ 3a / 2


and f ( x ) is periodic with period 3a.
Here, f ( x ) is an even function.
Fourier series of f ( x ) is
a0 ∞ 2nπ x
f ( x) = + ∑ an cos
2 n =1 3a 
4 3a / 2 4 a 2
where a0 =
3a 0∫ f ( x ) dx =
3a ∫0
a − x 2 dx

a

4 x a −x 2 2
a 2
x  aπ
=  + sin −1  =
3a  2 2 a  3
0 
4 3a / 2 2nπ x 4 a 2 2nπ x
an =
3a ∫ 0
f ( x ) cos
3a
dx =
3a ∫0
a − x 2 cos
3a
dx


∴ Fourier series of f ( x ) is
aπ ∞ 2nπ x
f ( x) = + ∑ an cos
6 n =1 3a 
4 a 2 2nπ x
where an =
3a ∫0
a − x 2 cos
3a
dx.


Example 3.27: If f ( x ) is a periodic function in [−l, l]. Prove that at both the end points the

a
Fourier series has the same value 0 + ∑ an ( −1) .
n

2 n =1
Solution: Fourier series of f ( x ) in [−l, l] is
a0 ∞ nπ x ∞ nπ x
f ( x) = + ∑ an cos + ∑ bn sin
2 n =1 l n =1 l 
1 l
f ( x ) dx
l ∫− l
where a0 =

1 l nπ x
an = ∫ f ( x ) cos dx
l − l l 
1 l nπ x
bn = ∫ f ( x ) sin dx
l − l l 
a0 ∞ nπ ( −l ) ∞ nπ ( −l )
f ( −l ) = + ∑ an cos +∑ bn sin
2 n =1 l n =1 l 
∞ ∞
a0
= + ∑ an cos nπ −∑ bn sin nπ
2 n =1 n =1 
a0 ∞
+ ∑ an ( −1)
n
=
2 n =1 
a0 ∞ nπ l ∞ nπ l
f (l ) = + ∑ an cos +∑ bn sin
2 n =1 l n =1 l 
a0 ∞
+ ∑ an ( −1)
n
=
2 n =1 
∴ At both ends Fourier series has the same value
a0 ∞
+ ∑ an ( −1)
n

2 n =1 

Exercise 3.1

1. Find the Fourier series of the func- 9. Show that in the range 0 to 2p  the Fourier
tion f ( x ) = x + π if − π < x < π and series expansion for e x is
f ( x + 2π ) = f ( x ) . 2eπ sinh π  1 ∞  cos nx  ∞  n sin nx  
π 2 + ∑ 2  −∑  2 
2. Obtain the Fourier series for the func-  n =1  n + 1  n =1  n + 1  
tion f ( x ) = 2 x + 1, − π < x < π . Hence
deduce Fourier series for x and the line 10. Expand in Fourier series the function f
y = mx + c.  0, − π < x < 0
defined by f ( x ) = 
π−x 1, 0 ≤ x < π
3. Obtain the Fourier series of f ( x ) =
2 Deduce that sum of the Gregory series
in the interval (0, 2p ). Deduce 1 1 1 1 π
π 1 1 1 1 − + − + − is .
= 1 − + − + 3 5 7 9 4
4 3 5 7
11. Find the Fourier series of
4. Expand f ( x ) = x 2 , 0 < x < 2π in a
−1, 0 < x < π
Fourier series assuming that the func- f ( x) = 
tion is of period 2π . Hence deduce that  2, π < x < 2π
π2 1 1 1 1 12. Find the Fourier series for the periodic
= − + − +
12 12 22 32 4 2 function
5. Find the Fourier series for the function  0, − π < x < 0
f ( x ) = x + x2 , − π < x < π . f ( x) = 
 x, 0 < x < π
Hence show that
f ( x + 2π ) = f ( x )
π2 1 1 1
(i)  = 1 + 2 + 2 + 2 + 13. Find the Fourier series for f ( x ) if
6 2 3 4  
π2 1 1 1 1  −π , −π < x < 0
(ii)  = − + − + 
12 12 22 32 4 2 f ( x) =  x , 0< x <π
6. Find the Fourier series for −π / 2 , x=0
x2 
f (x) = x + , − π < x < π
4 1 1 1 π2
Deduce that + + +  =
7. Find the Fourier series of 12 32 52 8
3 x 2 − 6 xπ + 2π 2
f (x) = in the  −π π
12 <x<
 x, 2 2
interval (0, 2p ). Hence deduce that
­ 14. If f ( x ) = 
π2 1 1  0, π 3π
= 1 + 2 + 2 + <x<
6 2 3  2 2
8. Find the Fourier series to represent e ax in find the Fourier series of f ( x ) . Deduce
the interval −π < x < π and hence derive π2 ∞
1
series for
π
. that =∑ .
n =1 ( 2n − 1)
2
sinh π 8

15. Find the Fourier-series expansion of the 25. Prove that in the interval −π < x < π ,
periodic function of period 2p defined by ( )
n
∞ n −1
1
−π π x cos x = − sin x + 2∑ 2 sin nx.

 x , <x< 2 (
n=2 n − 1 )
f ( x) =  2 2
π 3π 26. Find the Fourier series for the function
π − x , <x<
 2 2 −1, −π < x < −π / 2

16. Find the Fourier series of f ( x) =  0 , −π / 2 < x < π / 2
1, π /2< x <π
0 , − π < x ≤ 0 
f ( x) =  2
x , 0 ≤ x < π 27. Find the Fourier series of the function
π + 2 x, − π < x < 0
which is assumed to be periodic with f ( x) =  .
­period 2π . π − 2 x, 0 ≤ x < π
17. Find the Fourier series of the function 1 1 1 π2
Hence deduce that + + +  = .
­defined as 12 33 52 8
 x +π, 0≤ x <π 28. Obtain Fourier series for
f ( x) =  and
− x − π , −π < x < 0 x , − π < x < 0
f ( x) =  and hence
f ( x + 2π ) = f ( x ) . − x , 0 < x < π
1 1 1 π2
18. Find the Fourier series of the function show that + 2 + 2 + = .
f ( x) = 1− x , − π < x < π.
2
1 3 5 8
−1 + x , − π < x < 0
19. Find Fourier series of f (x) = x3 in (–p, p ). 29. If f ( x ) =  with pe-
1 + x , 0 < x < π
20. Expand cosh x in Fourier series in riod 2π , find the Fourier series for f ( x ).
−π < x < π .
30. Find the Fourier series of
21. Find the Fourier series of f ( x ) = e in
−x

the interval ( −π , π ) . cos x , − π < x < 0


f ( x) = 
− cos x , 0 < x < π
22. Find the Fourier series for the function
f ( x ) = sinh ax, − π < x < π . 31. Determine the Fourier expansion for the
function f ( x ) given by
π 2 x2  
23. Expand the function f ( x ) = − in  π
12 4  x + 2 , − π < x < 0
Fourier series in the interval ( −π , π ) and f ( x) = 
π2 1 1 1  π − x, 0 < x < π
deduce that = − + −  2
12 12 22 32
32. Obtain the Fourier expansion for the
24. Show that for –p < x < p, function f ( x ) with period 2π given by
2 sin aπ  sin x 2 sin 2 x
sin ax =  − − x 2 , −π < x < 0
π  12 − a 2 22 − a 2 f ( x) =  2
3 sin 3 x   x , 0< x <π
+  ,
32 − a 2  33. Find Fourier series for the function
a is fraction. f ( x ) = x − x 2 , − 1 < x < 1.

l l ∞
1 sin 2nπ x 46. Find the Fourier series for the function
34. Prove that
2
−x=
π
∑n l
, 0< x<l
x , 0 < x <1
1 sin 2nπ x
n =1
f ( x) =  .
∑n l
, 0 < x < l. 1 − x , 1 < x < 2
1
π −x 1 1 1 π2
35. Obtain the Fourier series of f ( x ) = Deduce that + 2 + 2 + = .
in 0 < x < 2. 2 2
1 3 5 8
36. Expand f ( x ) = e as a Fourier series in
−x 47. Find the Fourier series for function
 π x , 0 ≤ x ≤1
the interval (−l, l). f ( x) = 
37. Find the Fourier series of π ( 2 − x ) , 1 ≤ x ≤ 2
f ( x ) = 4 − x 2 in (0, 2) and deduce that 48. Obtain Fourier-series expansion of
π2 1 1 1 πx 
= + + + f ( x ) = x cos   in the interval
6 12 22 32  l 
38. Find the Fourier series of −l < x < l.
f ( x ) = π x, 0 < x ≤ 2.
49. Find Fourier-series expansion of
39. Find the Fourier series of f ( t ) = 1 − t 2 , − 1 ≤ t ≤ 1.
π x , 0 < x <1
f ( x) =  50. Find the Fourier series of f ( x ) = x x in
0 , 1< x < 2
the interval (-1, 1).
40. Find the Fourier series corresponding to
51. Obtain the Fourier-series expansion of
the function f ( x ) defined in (−2, 2) as
f ( x ) = x , − 2 ≤ x ≤ 2, f ( x ) = f ( x + 4 ) .
follows
2, − 2 ≤ x ≤ 0 52. Obtain the Fourier-series expansion of
f ( x) = 
 x, 0 < x < 2 f ( x ) = x 2 − 2, − 2 ≤ x ≤ 2.
41. Find the Fourier series for f ( x ) defined
53. If f ( x ) = x is defined in −l < x < l with
in ( −1, 1) by
period 2l, find the Fourier expansion of
c , − 1 < x < 0 f ( x ).
f ( x) =  1
c2 , 0 < x < 1 54. Find the Fourier series for
42. Obtain the Fourier series of k ( x − c ) , − c < x < 0
f ( x) = 
0, − 5 < x < 0
f ( x) =  k ( c + x ) , 0 < x < c
3, 0 < x < 5
55. Find the Fourier-series expansion of the
43. Find the Fourier-series expansion of period function of period 1 given by
1, 0 < x < 1 1 1
f ( x) = 
2, 1 < x < 2  2 + x, − 2 < x ≤ 0
f ( x) = 
44. Find the Fourier series of  1 − x, 0 < x < 1
x , −1 < x < 0  2 2
f ( x) = 
x + 2 , 0 < x <1 56. Find the Fourier series for peri-
45. Find the Fourier series for the function odic block function f with period
0, − 8 < x < 0 T  > 0 and 0 ≤ a ≤ T and defined by

f ( x ) = 4, 0 < x < 4 1 , x ≤ a / 2 ≤ T / 2
 0, 4 < x < 8 f ( x) = 
 0 , a / 2 < x ≤ T / 2

Answers 3.1

( −1)
n +1

 1.  f ( x ) ∼ π + 2 ∑ sin nx
n =1 n
( −1)
n +1

 2.  f ( x ) ∼1 + 4∑ sin nx
n =1 n
( −1)
n +1

  x ∼ 2∑ sin nx
n =1 n
( −1)
n +1

  mx + c ∼ c + 2m∑ sin nx
n =1 n

1
 3.  f ( x ) ∼ ∑ sin nx
n =1 n

4π 2 ∞
1 ∞
1
 4.  f ( x ) ∼ + 4∑ 2 cos nx − 4π ∑ sin nx
3 n =1 n n =1 n

( −1) cos nx ∞ ( −1) sin nx


n n +1
π2 ∞
 5.  f ( x ) ∼ + 4∑ + 2∑
3 n =1 n2 n =1 n
π 2 ∞ ( −1) ( −1)
n n +1

 6.  f ( x ) ∼ + ∑ 2 cos nx + 2∑ sin nx
12 n =1 n n =1 n

1
 7.  f ( x ) ∼ ∑ 2
cos nx
n =1 n
1 ( − 1) 
n
sinh aπ ∞ ∞
n
 8.  f ( x ) ∼  + 2a ∑ 2 cos nx + 2 ∑ ( − 1)
n +1
sin nx
π  a n =1 a + n
2
n =1 a +n
2 2


( − 1)
n
π ∞
 1 1 1 
  = 2∑ 2 = 2 2 − 2 + 2 − 
sinh π n=2 n + 1  2 +1 3 +1 4 +1 
1 2 ∞ sin ( 2n − 1) x
10.  f ( x ) ∼ +
2 π
∑ ( 2n − 1)
n =1

1 6 ∞ sin ( 2n − 1) x
11.  f ( x ) ∼ −
2 π
∑ ( 2n − 1)
n =1

cos ( 2n − 1) x ( −1)
n
π 2 ∞ ∞
12.  f ( x ) ∼ − ∑ −∑ sin nx
4 π ( 2n − 1)
2
n =1 n =1 n
π 2 (
∞ cos 2n − 1 x
) ∞
1
f ( x) ∼ − − ∑ + ∑ 1 − 2 ( −1)  sin nx
n
13. 
4 π n =1 ( 2n − 1) 2
n =1 n
 

( −1) sin ( 2n − 1) x ( −1)


n +1 n +1
2 ∞ ∞ sin 2nx
14.  f ( x ) ∼ ∑ +∑
π ( 2n − 1)
2
n =1 n =1 2n

4 ∞ n +1 sin ( 2n − 1) x
15.  f ( x ) ∼ ∑ ( −1)
π n =1 ( 2n − 1)
2

( −1) π ∞ sin 2nx ∞  π 


n
π2 ∞
4
16.  f ( x ) ∼ + 2 ∑ 2 cos nx − ∑ +∑ −  sin ( 2n − 1) x
n =1  2n − 1 ( )
3
6 n =1 n 2 n =1 n
 π 2 n − 1 

π 4 ∞ cos ( 2n − 1) x (
∞ sin 2n − 1 x
)
17.  f ( x ) ∼ − ∑ + 4∑
2 π n =1 ( 2n − 1) 2
n =1 ( 2 n − 1)
2 −π 4 ∞ cos ( 2n − 1) x
18.  f ( x ) ∼ + ∑
π ( 2n − 1)
2
2 n =1


19.  f ( x ) ∼ 2 ∑ ( −1)
n (6 − n π ) sin nx
2 2

n =1 n3

sinh π  ( −1) 
n

20.  f ( x ) ∼ 1 + 2 ∑ cos nx
π 
 n =1 n + 1
2
 ( )
 π π 
1 sinh cos 2nx ∞ cosh cos ( 2n − 1) x 
4e −π / 2 π ∞
21.  f ( x ) ∼  sinh + ∑ 2 +∑ 2

π 4n2 + 1 ( 2n − 1) + 1 
2
 2 2 n =1 n =1
 
( −1)
n +1
2 sinh aπ ∞ n
22.  f ( x ) ∼ ∑ sin nx
π n =1 (n 2
+a 2
)
( −1)
n +1

23.  f ( x ) ∼ ∑ cos nx
n =1 n2
2 ∞ 1
26.  f ( x ) ∼ ∑ sin ( 2n − 1) x − sin 2 ( 2n − 1) x 
π n =1 ( 2n − 1) 
8 ∞ cos ( 2n − 1) x
27.  f ( x ) ∼ ∑
π n =1 ( 2n − 1)2

−π 4 ∞ cos ( 2n − 1) x
28.  f ( x ) ∼ + ∑
2 π n =1 ( 2n − 1)2

 4  ∞ sin ( 2n − 1) x ∞ sin 2nx


29.  f ( x ) ∼  2 +  ∑ −∑
 π  n =1 ( 2n − 1) n =1 n

−8 ∞ n sin 2nx
30.  f ( x ) ∼ ∑
π n =1 4 n2 − 1
4 ∞ cos ( 2n − 1) x
31.  f ( x ) ∼ ∑
π n =1 ( 2n − 1)2

2 ∞  π2 4  ∞
sin 2nx
32.  f ( x ) ∼ ∑  −
π n =1  2n − 1 ( 2n − 1) 
3
 sin ( 2 n − 1) x − π ∑ n
  n =1

( −1) 2 ∞ ( −1)
n +1 n +1

1 4
33.  f ( x ) ∼ − + 2
3 π
∑ n =1 n2
cos nπ x + ∑
π n =1 n
sin nπ x

π − 1 1 ∞ sin nπ x
35.  f ( x ) ∼ + ∑
2 π n =1 nπ
1 ( −1) n ( −1) nπ x 
n n

nπ x ∞
36.  f ( x ) ∼ sinh l  + 2l ∑ 2 cos + 2π ∑ sin 
n =1 l + n π n =1 l + n π
2 2 2 2 2
 l l l 

8 4 ∞
cos nπ x 4 ∞ sin nπ x
37.  f ( x ) ∼ − 2
3 π

n =1 n2
+ ∑
π n =1 n

sin nπ x
38.  f ( x ) ∼ π − 2∑
n =1 n

( −1)
n +1
π 2 ∞ 1 ∞
39.  f ( x ) ∼ − ∑ cos ( 2n − 1) π x + ∑ sinnπ x
4 π n =1 ( 2n − 1) 2
n =1 n

3 4 ∞
1 πx 2 ∞ 1 nπ x
40.  f ( x ) ∼ − 2 ∑ cos ( 2n − 1) − ∑ sin
2 π ( 2n − 1) 2 π n =1 n
2
n =1 2

c1 + c2 2 ( c2 − c1 ) ∞ sin ( 2n − 1) π x
41.  f ( x ) ∼
2
+
π
∑ ( 2n − 1)
n =1

3 6 ∞ 1 ( 2n − 1) π x
42.  f ( x ) ∼ + ∑ sin
2 π n =1 ( 2n − 1) 5
3 2 ∞ sin ( 2n − 1) π x
43.  f ( x ) ∼ − ∑
2 π n =1 ( 2n − 1)
2 ∞ 1 − 2 ( −1) 
n

44.  f ( x ) ∼1 + ∑ 
π n =1  n
 sin nπ x


4 ∞ ( −1) ( 2n − 1) π x 4 ∞ 1  ( 2n − 1) π x ( 2n − 1) π x 
n +1

45.  f ( x ) ∼1 + ∑
π n =1 ( 2n − 1)
cos
8
+ ∑ sin
π n =1 ( 2n − 1)  8
+ sin
4



−4 ∞ 1 2 ∞ sin ( 2n − 1) π x
2 ∑
46.  f ( x ) ∼ cos ( 2n − 1) π x + ∑
π n =1 ( 2n − 1) 2
π n =1 ( 2n − 1)
π 4 ∞ 1
47.  f ( x ) = − ∑ cos ( 2n − 1) π x
2 π n =1 ( 2n − 1)2

l  1 ( ) nπ x 
n
∞ n −1
πx
48.  f ( x ) ∼  − sin + 4∑ 2 sin 
π 2

l n=2 n − 1 (
l 
 )
( −1)
n

2 4
49.  f ( t ) = −
3 π2

n =1 n2
cos nπ t

1 ∞ 1  1  
 sin ( 2nπ x ) + 2 
4
50.  f ( x ) ∼ − ∑ − 
( 2n − 1) 
sin ( 2n − 1) π x 
π n =1  n  ( 2n − 1) π 
3 2
 
8 ∞ 1 ( 2n − 1) π x
2 ∑
51.  f ( x ) = 1 − cos
π n =1 ( 2n − 1) 2
2

2 16 ∞ ( −1)
n
nπ x
52.  f ( x ) = − + 2 ∑ 2 cos
3 π n =1 n 2

2l ∞ ( −1)
n +1
nπ x
53.  f ( x ) ∼ ∑
π n =1 n
sin
l
2kc ∞ 1  nπ x
54.  f ( x ) ∼ ∑ 1 − 2 ( −1)  sin c
n

π n =1 n 
1 2 ∞ 1
55.  f ( x ) ∼ + 2∑ cos ( 2 ( 2n − 1) π x )
4 π n =1 ( 2n − 1)2

a 2 ∞ 1 π
56.  f ( x ) ∼ + ∑
T T ω n =1 n
sin ( naω ) cos ( 2nω x ) where ω =
T

3.2  Fourier Half-Range Series


Suppose that a function f ( x ) is defined on some finite interval [0, l]. It is possible to extend
the definition of f ( x ) to the interval [-l, 0] such that f ( x ) is periodic function of period 2l and
f ( x ) is even function in [-l, l]. This can be done by defining f ( x ) = f ( − x ) ; − l ≤ x ≤ 0. Then,
we can have Fourier-series expansion of this even extension of f ( x ), i.e., extension of f ( x ) to
even function. This Fourier series will be cosine series which is called half-range cosine series
of f ( x ).

Thus, half-range cosine series of f(x) in [0, l] is
            f(x) = a₀/2 + ∑_{n=1}^∞ aₙ cos(nπx/l)
where       a₀ = (2/l) ∫_0^l f(x) dx,   aₙ = (2/l) ∫_0^l f(x) cos(nπx/l) dx
Similarly, we can extend the definition of f(x) to the interval [−l, 0] so that f(x) is a periodic function of period 2l
and f(x) is an odd function in [−l, l]. This can be done by defining f(x) = −f(−x); −l ≤ x ≤ 0. Then, we can have the
Fourier-series expansion of this odd extension of f(x), i.e., the extension of f(x) to an odd function. This Fourier
series will be a sine series, which is the half-range sine series of f(x).
Thus, half-range sine series of f(x) in [0, l] is
            f(x) = ∑_{n=1}^∞ bₙ sin(nπx/l)
where       bₙ = (2/l) ∫_0^l f(x) sin(nπx/l) dx
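The two half-range coefficient formulas above translate directly into numerical quadrature; a generic sketch assuming SciPy is available (tried here on f(x) = x with l = π, which reappears in Example 3.28 below):

```python
import numpy as np
from scipy.integrate import quad

def half_range_cosine(f, l, n):
    return quad(lambda x: f(x) * np.cos(n * np.pi * x / l), 0, l)[0] * 2 / l

def half_range_sine(f, l, n):
    return quad(lambda x: f(x) * np.sin(n * np.pi * x / l), 0, l)[0] * 2 / l

f, l = (lambda x: x), np.pi
print(half_range_cosine(f, l, 0) / 2)          # a0/2 = pi/2
for n in range(1, 5):
    print(n, half_range_cosine(f, l, n), half_range_sine(f, l, n))
# Expected: a_n = -4/(pi n^2) for odd n and 0 for even n; b_n = 2(-1)^(n+1)/n.
```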

3.2.1 Convergence of Half-range Cosine Series


If f ( x ) satisfies Dirichlet’s conditions in [0, l ] and f ( x ) is continuous at x0 ∈ (0, l ) ,
then half-range Fourier cosine series for x = x0 converges to f ( x0 ) . If f ( x ) is discon-
tinuous at x = x0 ∈ ( 0, l ), then half-range Fourier cosine series for x = x0 converges to
1 1
lim f ( x ) + lim+ f ( x )  =  f ( x0 − 0 ) + f ( x0 + 0 ) .
2  x → x0− x → x0  2
Now, f ( x ) is even function with period 2l
∴ f (0 − 0 ) = f (0 + 0 )

and f ( l + 0 ) = f ( −l − 0 ) = f ( 2l − l − 0 ) = f ( l − 0 )

∴ for x = 0, Fourier cosine series converges to f (0 + 0) and for x = l, Fourier cosine series
­converges to f (l - 0).

3.2.2 Convergence of Half-range Sine Series


If f ( x ) satisfies Dirichlet’s conditions on [0, l ] and f ( x ) is continuous at x0 ∈ ( 0, l ), then

half-range Fourier sine series for x = x0 converges to f ( x0 ). If f ( x ) is discontinuous at x = x0 ∈ (0, l ) ,


1
then half-range Fourier sine series for x = x0 converges to  f ( x0 − 0 ) + f ( x0 + 0 ) .
2
Now, f ( x ) is odd function with period 2l
∴ f (0 − 0 ) = − f (0 + 0 )


1
\  f (0 − 0 ) + f (0 + 0 ) = 0 
2
\ Half-range sine series for x = 0 has value 0
and f (l + 0 ) = f ( −2l + l + 0 )

= f ( −l + 0 )
= − f (l − 0 )
\ f (l + 0 ) + f (l − 0 ) = 0 

\ Half-range sine series for x = l has value 0.

Example 3.28: Prove that the function f ( x ) = x can be expanded


(i)  In a series of cosines in 0 < x < π as
π 4  cos x cos 3 x cos 5 x 
x= − + + + 
2 π  12 32 52 
1 1 1 π2
hence deduce that 2
+ 2 + 2 + =
1 3 5 8
(ii)  in a series of sines in 0 < x < π as

 sin x sin 2 x sin 3 x sin 4 x 


x = 2 − + − + 
 1 2 3 4 
1 1 1 π
hence deduce that 1 − + − +  =
3 5 7 4
Solution:
(i)  Fourier cosine series of f ( x ) = x in 0 < x < π is

a0 ∞
f ( x) = + ∑ an cos nx
2 n =1 
2 π 1 2
( )
π
where a0 =
π ∫0
x dx =
π
x
0


π
2 π 2  1   1 
π ∫0
an = x cos nx dx =  x  sin nx  −  − 2 cos nx  
π  n   n 0 
2
= − 2 1 − ( −1) 
n

πn  

4
\ a2 n = 0, a2 n −1 = − ; n = 1, 2, 3, …
π ( 2n − 1)
2


\ Fourier cosine series of f ( x ) is

π 4 ∞ 1
f ( x) = − ∑ cos ( 2n − 1) x  ; 0 < x < π
2 π n =1 ( 2n − 1)2

for x = 0
π 4 1 ∞
f (0 + 0) = − ∑
2 π n =1 ( 2n − 1)2

π 4 1 ∞
\ 0= − ∑
2 π n =1 ( 2n − 1)2

1 1 1 π 2
\ + + + =
12 32 52 8 
(ii)  Fourier sine series of f ( x ) in 0 < x < π is

f ( x ) = ∑ bn sin nx
n =1 
π
2 π 2  1   1 
π ∫0
bn = x sin nx dx =  x  − cos nx  −  − 2 sin nx  
π  n   n 0

2
( −1)
n +1
=
n 

\ Fourier sine series of f (x) is
            f(x) = 2 ∑_{n=1}^∞ ((−1)ⁿ⁺¹/n) sin nx;  0 < x < π

            f(π/2) = π/2 = 2 ∑_{n=1}^∞ ((−1)ⁿ⁺¹/n) sin(nπ/2)

Let         kₙ = ((−1)ⁿ⁺¹/n) sin(nπ/2)
∴           k₂ₙ = 0,  k₂ₙ₋₁ = (1/(2n − 1)) sin(nπ − π/2) = (−1)ⁿ⁺¹/(2n − 1);  n = 1, 2, 3, …

∴           f(π/2) = π/2 = 2 ∑_{n=1}^∞ (−1)ⁿ⁺¹/(2n − 1)

∴           1 − 1/3 + 1/5 − 1/7 + ⋯ = π/4
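The contrast between the two expansions of f(x) = x shows up clearly at the endpoints when the series are truncated; a sketch with NumPy (the truncation order N is an arbitrary choice):

```python
import numpy as np

def cosine_series(x, N=2000):
    n = np.arange(1, N + 1, 2)                 # only odd n contribute
    return np.pi / 2 - (4 / np.pi) * np.sum(np.cos(n * x) / n**2)

def sine_series(x, N=2000):
    n = np.arange(1, N + 1)
    return 2 * np.sum((-1.0)**(n + 1) * np.sin(n * x) / n)

for x in (0.0, 0.5, np.pi / 2, 3.0, np.pi):
    print(round(x, 3), round(cosine_series(x), 4), round(sine_series(x), 4))
# The cosine series converges to x on all of [0, pi]; the sine series gives 0 at x = 0
# and x = pi, the mean of the jump of the odd 2*pi-periodic extension.
```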

Example 3.29: Find a series of cosines of multiples of x which will represent x sin x in the inter-
1 1 1 1 π −2
val ( 0, π ) and show that − + − + = .
1⋅ 3 3 ⋅ 5 5 ⋅ 7 7 ⋅ 9 4
Solution: Fourier cosine series of x sin x in ( 0, π ) is
a0 ∞
x sin x = + ∑ an cos nx
2 n =1 
2 π 2
x sin x dx = [ − x cos x + sin x ]0 = 2
π
where a0 =
π ∫0 π 
2 π 1 π
an = ∫ x sin x cos nx dx = ∫ x sin ( n + 1) x − sin ( n − 1) x  dx
π 0 π 0 
1  1 1 
= x − cos ( n + 1) x + cos ( n − 1) x 
π   n + 1 n −1 
  
π 
1 1
− − sin ( )
n + 1 x + sin ( ) 
n − 1 x  ; n ≠ 1
 ( n + 1) (n − 1)2
2
  0
1 1 2
( −1) − ( −1) = 2 ( −1) ; n ≠ 1
n n n +1
=
n +1 n −1 n −1 
2 π 1 π
π ∫0
a1 = x sin x cos x dx = ∫ x sin 2 x dx
π 0 
π
  11  1  1
=
 x  − 2 cos 2 x  + 4 sin 2 x  = − 2
  π  0 
\ Fourier cosine series of x sin x in ( 0, π ) is
( −1)
n

1
f ( x ) = x sin x = 1 − cos x − 2∑ 2 cos nx
2 n=2 n − 1 

\
π π ∞
f   = = 1 + 2∑ 2
( −1) cos nπ n +1

 2 2 n= 2 n − 1 2 

( −1)
n +1

Let kn = ; n≥2
cos
n −1
2
2 
\ k2 n −1 = 0; n = 2, 3, …

( −1) 2 n +1 ( −1) n ( −1) n +1


k2 n = = 2 ; n = 1, 2, 3, … 
4n2 − 1 4n − 1
( −1)
n +1

π −2
\ ∑
n =1 ( 2 n − 1) ( 2 n + 1)
=
4

1 1 1 1 π −2
\ − + − + =
1⋅ 3 3 ⋅ 5 5 ⋅ 7 7 ⋅ 9 4 

Example 3.30: Prove that for 0 ≤ x ≤ π


π 2  cos 2 x cos 4 x cos 6 x 
x (π − x ) = − 2 + 2
+ 2
+ 
6  1 2 3 
Solution: Fourier cosine series of x (π − x ) in 0 ≤ x ≤ π is
a0 ∞
x (π − x ) = + ∑ an cos nx
2 n =1 
π
2 π 2 π x 2 x3  π2
where a0 =
π ∫0
π x − x 2
(dx = 
π 2
− ) 
3 0
=
3

2 π
(
an = ∫ π x − x 2 cos nx dx
π 0
)

π
2 1   1   1 
= 
π
( n
)
π x − x 2  sin nx  − (π − 2 x )  − 2 cos nx  + ( −2 )  − 3 sin nx  
  n   n 0

2 
( −1) − 1
n +1
=
n2  
1
\ a2 n = − , a2 n −1 = 0; n = 1, 2, 3, …
n2 
\ Fourier cosine series of x (π − x ) in 0 ≤ x ≤ π  is

π2 ∞ 1
x (π − x ) = − ∑ cos 2nx
6 n =1 n2 
π 2  cos 2 x cos 4 x cos 6 x 
= − + + +  ; 0 ≤ x ≤ π
6  12 22 32  
Example 3.31: Determine half-range sine series for the function f defined by f (t ) = t 2 + t , 0 ≤ t < π .

Solution: Half-range sine series for f ( t ) = t 2 + t in 0 ≤ t < π is



f ( t ) = t 2 + t = ∑ bn sin nt
n =1 
π
2
where (
bn = ∫ t 2 + t sin nt dt
π 0
)

π
2   1   1   1 
( )
=  t 2 + t  − cos nt  − ( 2t + 1)  − 2 sin nt  + 2  3 cos nt  
π  n   n  n 0

=
2  π 2 +π
π
−
n
n

n
2 n 
( −1) − 3 1 − ( −1) 

( )


π +1 2 (π + 1) 8
\ b2 n = − , b2 n −1 = − ; n = 1, 2, 3,... 
n 2n − 1 π ( 2n − 1)3

\Half-range sine series for f ( t ) = t 2 + t in 0 ≤ t < π is

∞  
π +1 2  4 
f (t ) = t 2 + t = ∑ − sin 2nt + π + 1 −  sin ( 2 n − 1) t 
n =1  2n − 1  ( ) 
2
n π 2 n − 1
   

 π
 x ; 0 < x < 2
Example 3.32: If f ( x ) = 
π − x ; π < x < π
 2
show that
4 sin 3 x sin 5 x sin 7 x 
(i)  f ( x ) = sin x − 2 + 2 − 2 + 
π  3 5 7 
π 2  cos 2 x cos 6 x cos 10 x 
(ii)  f ( x ) = − + + + 
4 π  12 32 52 

Solution: (i) Half-range sine series of f ( x ) in the given interval is



f ( x ) = ∑ bn sin nx
n =1 
2  π
π 
where bn =  ∫ 2 x sin nx dx + ∫π (π − x ) sin nx dx 
π 0 2 
 π
π 
2   1  1 2   1  1  
=  x − cos nx + sin nx  + ( π − x ) − cos nx − sin nx  
π    n 
 n
2
0   n
 2
 n π 
 2 

2 π nπ 1 nπ π nπ 1 nπ 
=  − cos + 2 sin + cos + 2 sin 
π  2n 2 n 2 2n 2 n 2 
4 nπ
= sin
π n2 2 
4 ( −1)
n +1

\ b2 n = 0, b2 n −1 = ; n = 1, 2,...
π ( 2n − 1)
2

\ Half-range sine series of f ( x ) in the given interval is

4 ∞ ( −1)
n +1

f ( x) = ∑ sin ( 2n − 1) x
π n =1 ( 2n − 1)2


4 sin 3 x sin 5 x sin 7 x 


=  sin x − 2 + 2 − 2 + 
π 3 5 7  

(ii)  Half-range cosine series of f ( x ) in the given interval is

a0 ∞
f ( x) = + ∑ an cos nx
2 n =1 
 π
π 
2  π2  2  x 2  2  x2 
+ π x −  
π
where a0 =  ∫ x dx + ∫π (π − x ) dx  =  
π 0  2 π 
 π  2  0 
2 
2
 

2 π 2 π2 π2 π2  π
=  +π − − + =
2

π8 2 2 8  2

2  π2 π 
an =  ∫0 x cos nx dx + ∫π (π − x ) cos nx dx 
π 
2

 π
π 
2   1  1 2  1  1  
=  x sin nx + cos nx  + ( π − x ) sin nx − cos nx  
π    n 
 n
2
0  n
 2
 n π 
 2 

2π nπ 1 nπ 1 π nπ 1 1 nπ 
− 2 ( −1) + 2 cos 
n
=  sin + 2 cos − 2 − sin
π  2n 2 n 2 n 2n 2 n n 2 


=
22

π n 2
cos
nπ 1
2 n
( n 
− 2 1 + ( −1) 

)
( −1)
n
1
\ a2 n = − , a2 n −1 = 0; n = 1, 2, 3,...
πn 2
π n2 
2
\ a4 n = 0, a2(2 n −1) = − , a2 n −1 = 0; n = 1, 2, 3,...
π ( 2n − 1)
2

\ Half-range cosine series of f ( x ) in the given interval is

π 2 ∞ 1
f ( x) = − ∑ cos ( 2 ( 2n − 1) x )
4 π n =1 ( 2n − 1)2


π 2  cos 2 x cos 6 x cos 10 x cos 14 x 


= − + + + + 
4 π  12 32 52 72  

Example 3.33: Express f ( x ) = x 2 as a half-range cosine series for 0 < x < 2.


Solution: Half-range cosine series for f ( x ) = x 2 in 0 < x < 2 is
a0 ∞ nπ x
f ( x ) = x2 = + ∑ an cos
2 n =1 2 
1 3 2 8
( )
2
where a0 = ∫ x 2 dx = x =
0 3 0 3
2 nπ x
an = ∫ x 2 cos dx
0 2 
2
  2 nπ x   4 nπ x   8 sin nπ x  
=  x2  sin  − 2 x  − 2 2 cos  + 2 − 3 3
  nπ 2   nπ 2   nπ 2   0

16 16
= 2 2 cos nπ = 2 2 ( −1)
n

nπ nπ 
\ Half-range cosine series for f ( x ) = x 2 in 0 < x < 2 is

( −1)
n +1
4 16 ∞
nπ x
f ( x ) = x2 = −
3 π2
∑ n2
cos
2 
n =1

Example 3.34: Obtain the half-range sine series for f ( x ) = 2 − x for 0 < x < 2 and hence­
1 1 1 π
deduce that 1 − + − +  = .
3 5 7 4
Solution: Half-range sine series for f ( x ) = 2 − x in 0 < x < 2 is

nπ x
f ( x ) = 2 − x = ∑ bn sin
n =1 2 
nπ x
bn = ∫ ( 2 − x ) sin
2
where dx
0 2 
2
  2 nπ x   4 nπ x  
= ( 2 − x )  − cos  +  − 2 2 sin
  nπ 2   nπ 2   0

4
=
nπ 
\ Half-range sine series for f ( x ) = 2 − x in 0 < x < 2 is
4 ∞ 1 nπ x

f ( x) = 2 − x =
π n =1 n
sin
2 

4 ∞
1 nπ
Take x = 1, 1 = ∑ sin
π n =1 n 2 
1 1 1 π
⇒ 1− + − + =
3 5 7 4

x
Example 3.35: If f ( x ) = 1 − , 0 < x < l find (i) Fourier cosine series and (ii) Fourier sine series
l
of f ( x ). Graph the corresponding periodic continuations of f ( x ).

Solution: Fourier cosine series of f ( x ) in 0 < x < l is


x a0 ∞ nπ x
f ( x) = 1− = + ∑ an cos
l 2 n =1 l 
l
2 x2 
l
2  x
where a0 = ∫ 1 −  dx =  x −  = 1
l 0 l l 2l  0

nπ x
l
2  x
an = ∫
l 0 1 −  cos
l l
dx

l
2  x   l nπ x  1  l 2 nπ x  
=  1 −   sin  +  − 2 2 cos 
l  l   nπ l  l n π l 0

2 2
= − 2 2 ( −1) − 1 = 2 2 1 − ( −1) 
n n

nπ   nπ  

4
\ a2 n = 0, a2 n −1 = ; n = 1, 2, 3,...
π 2 ( 2n − 1)
2

\Fourier cosine series of f ( x ) in 0 < x < l is

x 1 4 ∞
1 ( 2n − 1) π x
f ( x) = 1− = + ∑ cos
l 2 π2 ( 2n − 1)
2
n =1 l

Graph of periodic continuation of f ( x ) is
[Graph: the even periodic continuation (period 2l) of f(x) = 1 − x/l, a continuous triangular wave with peaks 1 at x = 0, ±2l, … and zeros at x = ±l, ±3l, …]

Figure 3.10



(ii)  Fourier sine series of f ( x ) in ( 0, l ) is


x ∞ nπ x
f ( x) = 1− = ∑ bn sin
l n =1 l 
nπ x
l
2  x
where bn = ∫ 1 −  sin dx
l 0 l l

l
2  x   l nπ x  1  l 2 nπ x   2
=  1 −   − cos  +  − 2 2 sin  =
l  l   nπ l  l n π l   0 nπ

\ Fourier sine series of f ( x ) is
x 2 ∞ 1 nπ x
f ( x) = 1− = ∑ sin ; 0<x<l
l π n =1 n l 
Graph of periodic continuation of f ( x ) is
[Graph: the odd periodic continuation (period 2l) of f(x) = 1 − x/l, a sawtooth-like wave with jumps at x = 0, ±2l, …; open circles mark the jump points, which are not in the graph.]

Figure 3.11

Example 3.36: Find the half period sine series for f ( x ) given in the range [ 0, l ] by the curve
OPQ in the following diagram
[Graph: the broken line OPQ from O(0, 0) up to P(a, d) and down to Q(l, 0).]

Figure 3.12

Solution: From the graph


 d
 a x ; 0≤x≤a
f ( x) = 
 d (l − x ) ; a ≤ x ≤ l
 l − a 
Half period sine series for f ( x ) in range [ 0, l ] is

nπ x
f ( x ) = ∑ bn sin
n =1 l 
2 d nπ x nπ x 
a l
d
where bn =  ∫ x sin dx + ∫ ( l − x ) sin dx 
l 0 a l l − a l 
a

a
2d   l nπ x  l nπ x 
2
=  x  − cos  + 2 2 sin 
la  nπ l nπ l 0
l
2d   l nπ x  l2 nπ x 
+ (l − x )  − cos  − 2 2 sin 
l (l − a )  nπ l nπ l a 

2d  la nπ a l2 nπ a 
= − cos + 2 2 sin 
la  nπ l nπ l 
2d  (l − a ) l nπ a l2 nπ a 
+  cos + 2 2 sin 
l (l − a )  nπ l nπ l 

 l l  nπ a
= 2d  2 2 +  sin
 an π ( l − a ) n 2π 2 l
  
2dl 2 nπ a
= sin
a (l − a) n π
2 2
l

\Fourier sine series of f ( x ) is
2dl 2 ∞
1 nπ a nπ x
f (x) = ∑n sin sin ; 0≤ x≤l
a (l − a ) π 2 n =1
2
l l


Example 3.37: Sketch f ( x ) and its two periodic extensions and find the Fourier cosine series
for the function f ( x ) where
 2kx l
 l ; 0<x<
2
f ( x) = 
 2k ( l − x ) ; l < x < l
 l 2 
Also find the Fourier sine series for f ( x ).

Solution: Below is the graph of f ( x ) and its two periodic extensions


(i) Graph of f ( x )
[Graph: f(x) on (0, l), a triangular pulse rising linearly from (0, 0) to (l/2, k) and falling back to (l, 0).]

Figure 3.13
(ii)  Graph of even extension
[Graph: the even periodic extension (period 2l) of f(x), a continuous triangular wave with peaks k at x = ±l/2, ±3l/2, …]

Figure 3.14
(iii)  Graph of odd extension
[Graph: the odd periodic extension (period 2l) of f(x), a triangular wave taking values between −k and k.]

Figure 3.15

Fourier cosine series is the Fourier series for even extension and it is
a ∞
nπ x
f ( x ) ∼ 0 + ∑ an cos
2 n =1 l 
2  2l 2kx l 2k 
where a0 =  ∫0 dx + ∫ l ( l − x ) dx 
l l 2 l  
 l
l 
2  2k  x 2  2k 
2 x2  
=   +  lx −  
l  l  2 0 l  2 l
 2

l2 2 l2 l2 l2 
4k
=  +l − − +  = k
l2
8 2 2 8

2 2k  l
nπ x nπ x 
dx + ∫ l ( l − x ) cos
l
an = .  ∫ 2 x cos dx 
l l  0 l l 
2

 l
4k   x  l sin nπ x  −  − l cos nπ x  
2 2
= 2
l    nπ 
l   n2π 2

l   0

nπ x   
l
  l nπ x   l 2
+ (l − x )  sin + − cos  
  nπ l   n2π 2 l   l  
2

4k  l 2 nπ l2 nπ l2 l2 nπ
= sin + cos − − sin
l 2  2nπ 2 n2π 2 2 n2π 2 2nπ 2
l2 l2 nπ  
− ( −1) n
+ cos 

2 2

2 2
2 

=
 2l 2
4k
 2 2
n π
l2
cos


2 nπ
l2
2 2 {
n 
1 + ( −1) 

}

4k  l 2
l 
2
2k
a2 n = 2  2 2 ( −1) − 2 2  = 2 2 ( −1) − 1 ; n = 1, 2, ...
n n
\
l  2n π 2n π  n π  

4k
\ a4 n = 0, a2(2 n −1) = − , a2 n −1 = 0; n = 1, 2, 3,...
π ( 2n − 1)
2 2

\Fourier cosine series of f ( x ) is

1 4 ∞
1 2 ( 2n − 1) π x 
f (x) ∼ k  − 2 ∑ cos 

 2 π n =1 (2n − 1) 2
l 


Fourier sine series is the Fourier series for odd extension and it is

nπ x
f ( x ) ∼ ∑ bn sin 
n =1 l

2 2k  2l nπ x nπ x 
dx + ∫ l ( l − x ) sin
l
where bn = ⋅  ∫ x sin dx 
l l  0 l 2 l 

4k  l

  x  − l cos nπ x  + l sin nπ x 
2 2
=
l2    nπ 
l  n2π 2 l 0


nπ x   
l
  l nπ x   l2
+ (l − x )  − cos  + − sin  

  nπ l   n2π 2 l   l  
2

4k  l2 nπ l2 nπ l2 nπ l2 nπ 
=  − cos + sin + cos + sin 
l2  2nπ 2 nπ 2 2nπ 2 nπ
2 2 2 2
2 

8k nπ
= sin

2 2
2 
8k
\ b2 n = 0, b2 n −1 = ( −1)n+1 ; n = 1, 2, 3,...
π ( 2n − 1) 2 2

\ Fourier sine series of f ( x ) is

( −1) ( 2n − 1) π x
n +1

8k
f ( x) ∼
π2

n =1 ( 2n − 1)
2
sin
l


1
 4 − x ; 0 < x < 1/ 2
Example 3.38: Express f ( x ) = 
 x − 3 ; 1/ 2 < x < 1
 4
as a Fourier series of sine terms.
Solution: Half-range sine series for f ( x ) is

f ( x ) = ∑ bn sin nπ x, 0 < x < 1 / 2; 1 / 2 < x < 1

n =1

bn = 2 ∫ f ( x ) sin nπ x dx
1
where
0 
 1 1
 1 3 
= 2  ∫  − x  sin nπ x dx + ∫1  x −  sin nπ x dx 
2

 4  2 4
0


 1
  1
= 2  − x −  1   1   2
   cos nπ x  +  − 2 2 sin nπ x  
  4   nπ   nπ  0

 
1
 3  1  1
+  x −   − cos nπ x  + 2 2 sin nπ x   
 4   nπ  nπ 1 
2

1  nπ  2 nπ 1 1 nπ 2 nπ
( −1) −
n
= cos + 1 − 2 2 sin − cos − 2 2 sin
2nπ  2  nπ 2 2 nπ 2 nπ 2 nπ 2


4
= − 2 2 sin


+
2 2nπ
1
{
1 − ( −1)
n


}
4 ( −1)
n
1
\ b2 n = 0, b2 n −1 = + ; n = 1, 2, 3,...
π ( 2n − 1)
2 2
π ( 2n − 1)

\Fourier sine series of f ( x ) is

1  4 ( −1) 
n
1 ∞
f (x) = ∑ 1 +  sin ( 2n − 1) π x, 0 < x < 1 / 2; 1 / 2 < x < 1 
π n =1 ( 2n − 1)  π ( 2n − 1) 

Exercise 3.2

1. Find a Fourier sine series for f ( x ) = k in 6. Expand x sin x as sine series in 0 < x < π .
0 < x < π. 7. Find a cosine series for f ( x ) = e x in
2. Find the half-range cosine series for 0 < x < π.
f ( x ) = sin x in 0 ≤ x ≤ π and hence show

8. Expand f ( x ) = cos x; 0 < x < π in half-
1 1
that ∑ 2 = . range sine series.
n =1 4 n − 1 2
9. Represent the following func-
3. Find the Fourier (i) cosine series and tion by a Fourier sine series:
(ii) sine series given by the function
f ( x ) = π − x in ( 0, π ).  t, 0 < t ≤ π / 2
f (t ) = 
π / 2, π / 2 < t ≤ π
4. Expand π x − x 2 in a half-range sine
series in the interval ( 0, π ) (upto 10. If a function f ( x ) is defined in ( 0, π )
the first three terms). Deduce that
1 1 1 π3 π / 3 , 0 < x < π / 3
− 3 + 3 − = . 
3
1 3 5 32 as f ( x ) = 0 , π / 3 < x < 2π / 3
5. Find the half-range cosine series for −π / 3 , 2π / 3 < x < π

the function f ( x ) = x 2 in 0 ≤ x ≤ π then show that its cosine series is
and hence find the sum of the series 2  1 1 
1 1 1 f ( x) = cos x − 5 cos 5 x + 7 cos 7 x − 
1 − 2 + 2 − 2 + 3 
2 3 4

and its sine series is l ∞


1 2nπ x
1 1 21. Show that the series
π
∑ n sin
f ( x ) = sin 2 x + sin 4 x + sin 8 x n =1 l
2 4 l
represents − x, when 0 < x < l.
1 2
+ sin 10 x + 
5 22. Find the half-range cosine series of
11. Obtain the half-range sine series for  1, 0 ≤ x ≤ 1
f ( x) = 
the function f ( x ) = x 2 in the interval  x, 1 ≤ x ≤ 2
0 < x < 3.
23. Find the Fourier cosine series of the func-
12. Obtain the half-range sine series for e x
in 0 < x < 1. x2 , 0 ≤ x ≤ 2
tion f ( x ) = 
13. Find a Fourier sine series for  4, 2 ≤ x ≤ 4
f ( x ) = ax + b in 0 < x < l. 24. Obtain the half-range cosine series for
14. Express f ( x ) = x as a half-range cosine k , 0 < x < a
series in 0 < x < 2. f ( x ) defined by f ( x ) = 
0, a < x < l
15. Show that in the interval ( 0,1),
8 ∞ n  x , 0 < x <1
cos π x = ∑ 2 sin 2n π x 25. Expand f ( x) =  as
π n =1 4 n − 1 2 − x , 1 < x < 2
πx  a Fourier sine series. Hence deduce that
16. Develop sin   in half-range cosine
 l  π2 1 1 1
= + + +
series in the range 0 < x < l. 8 12 32 52
17. Find the half-range sine series for the 26. Obtain a half-range cosine series for
function f ( t ) = t − t 2 in the interval kx , 0 ≤ x ≤ l /2
0 < t < 1. f ( x) =  .
18. Find the half-range sine series for k ( l − x ) , l / 2 ≤ x ≤ l
f ( x ) = x in 0 < x < 2. Deduce the sum of the series
19. Find the Fourier cosine series and sine 1 1 1
+ + +
series of the function f ( x ) = 1, 0 ≤ x ≤ 2. 12 32 52
20. Obtain the half-range sine series for Also, prove that
f ( x ) = lx − x 2 in ( 0, l ) and hence show
4 kl ∞ ( −1) ( 2n + 1) π x
n

2 ∑
1 1 1 1 π3 f ( x) = sin .
that 3 − 3 + 3 − 3 +  = . π n = 0 ( 2n + 1) 2
l
1 3 5 7 32

Answers 3.2

4k ∞ 1
1.  f ( x ) = ∑
π n =1 ( 2n − 1)
sin ( 2n − 1) x

2 4 ∞ cos 2n x
2.  f ( x ) = − ∑
(
π π n =1 4 n2 − 1 )

π 4 ∞ cos ( 2n − 1) x ∞
sin nx
  3. (i)  π − x = + ∑
2 π n =1 ( 2n − 1) 2
π
(ii)  − x = 2 ∑
n =1 n

8  sin x sin 3 x sin 5 x 


 4.  f ( x ) =  13 + 33 + 53 + 
π  

( −1)
n +1
π2 ∞
 5.  x 2 = − 4∑ cos nx
3 n =1 n2
π 16 ∞ n sin 2 x
 6.  x sin x = sin x − ∑
2 π n =1 ( 4 n2 − 1)

 7.  e =
x eπ − 1 2 ∞ e ( −1) − 1
+ ∑
π
(
cos nx
n
)
π π n =1 n2 + 1
8 ∞ n
 8.  f ( x ) = ∑
π n =1 4 n2 − 1
sin 2nx

∞  
− sin 2nt  2 ( −1) 1 
n +1

 9.  f ( t ) = ∑  + +  sin ( 2n − 1) t 
n =1   π ( 2n − 1) ( 2n − 1)  
2
2n
 
∞ 
( 2n − 1) π x 9 2nπ x 
11.  f ( x ) = ∑ 
18
n =1  π ( 2n − 1)
3 3
π 2
( 2 n − 1){2
− 4 sin
3
} −

sin
3 
 

12.  e = 2π ∑
x
∞ {
n 1 + e ( −1)
n +1
} sin nπ x
n =1 (1 + n π )
2 2

1 ∞  2 ( al + 2b ) ( 2n − 1) π x al 2nπ x 
13.  ax + b = ∑ 
π n =1  ( 2n − 1)
sin
l
− sin
n l 

8 ∞
1 ( 2n − 1) π x
14.  x = 1 − ∑ cos
π2 ( 2n − 1)
2
n =1 2

πx  2 4 ∞ 1  2nπ x 
16.  sin  = π −π ∑ cos  
 l  n =1 4 n − 1
2
 l  ( )

8 1
17.  f ( t ) = ∑ sin ( 2n − 1) π t
π3 ( 2n − 1)
3
n =1

4 ∞ ( −1)
n +1
 nπ x 
18.  f ( x ) = ∑
π n =1 n
sin 
 2 


4 ∞ 1 ( 2n − 1) π x
19.  f ( x ) = 1; f ( x ) = ∑
π n =1 ( 2n − 1)
sin
2
8l 3 ∞
1 ( 2n − 1) π x
20.  lx − x 2 = ∑ sin
π3 ( 2n − 1)
3
n =1 l

5 2 ∞
1 ( 2n − 1) π x 
22.  f ( x ) = +
4 π2
∑ cos ( 2n − 1) π x − 2 cos
n =1 ( 2n − 1) 
2
2

8 8 ∞  8 ( 2n − 1) π x π nπ x 
23.  f ( x ) = ∑ ( −1)
n
+  cos + 2 cos 
3 π3  ( 2n − 1)
3
4 n 2 
n =1

ka 2k ∞ 1 nπ a nπ x
24.  f ( x ) =
l
+ ∑ sin l cos l
π n =1 n

( −1) ( 2n − 1) π x
n +1

8
25.  f ( x ) = 2
π

n =1 ( 2n − 1)
2
sin
2

kl 2kl ∞ 1  2 ( 2n − 1) π x 
26.  f ( x) = − 2 ∑ cos  
4 π n =1 ( 2n − 1) 2
 l 

3.3  Other Formulae
3.3.1  Parseval’s Formulae
(a) If f(x) is continuous and differentiable in (−l, l), then
            ∫_{−l}^{l} [f(x)]² dx = l [a₀²/2 + ∑_{n=1}^∞ (aₙ² + bₙ²)]
where       a₀ = (1/l) ∫_{−l}^{l} f(x) dx,  aₙ = (1/l) ∫_{−l}^{l} f(x) cos(nπx/l) dx,  bₙ = (1/l) ∫_{−l}^{l} f(x) sin(nπx/l) dx.

(b) If f(x) is continuous and differentiable in (0, l), then
     (i)  ∫_0^l [f(x)]² dx = (l/2) [a₀²/2 + ∑_{n=1}^∞ aₙ²]
     (ii) ∫_0^l [f(x)]² dx = (l/2) ∑_{n=1}^∞ bₙ²
where       a₀ = (2/l) ∫_0^l f(x) dx,  aₙ = (2/l) ∫_0^l f(x) cos(nπx/l) dx,  bₙ = (2/l) ∫_0^l f(x) sin(nπx/l) dx

Proof: (a) Fourier series of f ( x ) in ( −l , l ) is

a0 ∞  nπ x nπ x 
f ( x) = + ∑  an cos + bn sin (3.6)
2 n =1  l l 

1 l
f ( x ) dx (3.7)
l ∫− l
where a0 =

1 l nπ x
an = ∫ f ( x ) cos dx (3.8)
l − l l
1 l nπ x
bn = ∫ f ( x ) sin dx (3.9)
l − l l
Multiply (3.6) by f ( x ) and integrate term by term within the limits –l to l [term by term integra-
tion is possible as series on R.H.S. of (3.6) is uniformly convergent because of continuity and
differentiability of f ( x ) in ( −l , l ) 

∞ 
nπ x nπ x 
l l l l
a0
\ ∫ [ f ( x )] dx =
2
∫ f ( x ) dx + ∑  an ∫ f ( x ) cos dx + bn ∫ f ( x ) sin dx 
−l
2 −l n =1  −l
l −l
l 

Using (3.7) to (3.9)


 a02 ∞ 2 2 
( ) (
 + ∑ an + bn  )
l 2
∫−l  
 f x  dx = l
 2 n =1 

(b)  (i)  Half-range Fourier cosine series of f ( x ) in ( 0, l ) is

a0 ∞ nπ x
f ( x) = + ∑ an cos (3.10)
2 n =1 l
2 l
f ( x ) dx (3.11)
l ∫0
where a0 =

2 l nπ x
an = ∫ f ( x ) cos dx (3.12)
l 0 l
Multiply ( 3.10 ) by f ( x ) and integrate within the limits 0 to l

a ∞
nπ x
 f ( x )  dx = 0 ∫ f ( x ) dx + ∑ a ∫ f ( x ) cos l dx
l 2 l l
∫ 0 2 0 n 0
n =1 
Using (3.11) and (3.12)
l  a2 ∞ 
 f ( x )  dx =  0 + ∑ an2 
l 2
∫0 2  2 n =1 


(ii)  Half-range Fourier sine series of f ( x ) in ( 0, l ) is



            f(x) = ∑_{n=1}^∞ bₙ sin(nπx/l)        (3.13)
where       bₙ = (2/l) ∫_0^l f(x) sin(nπx/l) dx        (3.14)
Multiply (3.13) by f(x) and integrate within the limits 0 to l
            ∫_0^l [f(x)]² dx = ∑_{n=1}^∞ bₙ ∫_0^l f(x) sin(nπx/l) dx = (l/2) ∑_{n=1}^∞ bₙ²        (from (3.14))

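Parseval's formula (a) can be checked numerically for a concrete function; the sketch below (SciPy assumed, truncation order and quadrature limit arbitrary) uses f(x) = x² on (−π, π), which is also the function used in Example 3.39.

```python
import numpy as np
from scipy.integrate import quad

l = np.pi
f = lambda x: x**2

lhs = quad(lambda x: f(x)**2, -l, l)[0]        # integral of f^2 over (-l, l), exactly 2*pi^5/5
a0 = quad(f, -l, l)[0] / l

rhs = a0**2 / 2
for n in range(1, 51):
    an = quad(lambda x: f(x) * np.cos(n * np.pi * x / l), -l, l, limit=200)[0] / l
    bn = quad(lambda x: f(x) * np.sin(n * np.pi * x / l), -l, l, limit=200)[0] / l
    rhs += an**2 + bn**2

print(lhs, l * rhs)      # the two values closely agree once the series is truncated far enough
```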
3.3.2 Root Mean Square (R.M.S.) Value


The root mean square value of a periodic function is frequently used in mechanical vibrations and in electric circuit theory. The root mean square value, also called the effective value of the function, is the square root of the mean value of the square of the function:
$$\text{Root mean square value of } f(x) \text{ in } (a,b)=\sqrt{\frac{1}{b-a}\int_{a}^{b}\left[f(x)\right]^{2}dx}$$

3.3.3 Complex Form of Fourier Series


The Fourier series of $f(x)$ in $(-l, l)$ is
$$f(x)=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}\left(a_{n}\cos\frac{n\pi x}{l}+b_{n}\sin\frac{n\pi x}{l}\right)\tag{3.15}$$
where
$$a_{0}=\frac{1}{l}\int_{-l}^{l}f(x)\,dx\tag{3.16}$$
$$a_{n}=\frac{1}{l}\int_{-l}^{l}f(x)\cos\frac{n\pi x}{l}\,dx\tag{3.17}$$
$$b_{n}=\frac{1}{l}\int_{-l}^{l}f(x)\sin\frac{n\pi x}{l}\,dx\tag{3.18}$$
Now, (3.15) can be written as
$$f(x)=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}\left[a_{n}\,\frac{e^{in\pi x/l}+e^{-in\pi x/l}}{2}+b_{n}\,\frac{e^{in\pi x/l}-e^{-in\pi x/l}}{2i}\right]$$
$$=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}\left[\frac{1}{2}\left(a_{n}-ib_{n}\right)e^{in\pi x/l}+\frac{1}{2}\left(a_{n}+ib_{n}\right)e^{-in\pi x/l}\right]$$
Defining $c_{0}=\dfrac{a_{0}}{2}$, $c_{n}=\dfrac{1}{2}\left(a_{n}-ib_{n}\right)$, $c_{-n}=\dfrac{1}{2}\left(a_{n}+ib_{n}\right)$, we write $f(x)$ as
$$f(x)=c_{0}+\sum_{n=1}^{\infty}\left(c_{n}e^{in\pi x/l}+c_{-n}e^{-in\pi x/l}\right)=\sum_{n=-\infty}^{\infty}c_{n}e^{in\pi x/l}\tag{3.19}$$
where
$$c_{0}=\frac{a_{0}}{2}=\frac{1}{2l}\int_{-l}^{l}f(x)\,dx$$
$$c_{n}=\frac{1}{2}\left(a_{n}-ib_{n}\right)=\frac{1}{2}\left[\frac{1}{l}\int_{-l}^{l}f(x)\cos\frac{n\pi x}{l}\,dx-\frac{i}{l}\int_{-l}^{l}f(x)\sin\frac{n\pi x}{l}\,dx\right]=\frac{1}{2l}\int_{-l}^{l}f(x)\left(\cos\frac{n\pi x}{l}-i\sin\frac{n\pi x}{l}\right)dx=\frac{1}{2l}\int_{-l}^{l}f(x)\,e^{-in\pi x/l}\,dx$$
$$c_{-n}=\frac{1}{2}\left(a_{n}+ib_{n}\right)=\frac{1}{2}\left[\frac{1}{l}\int_{-l}^{l}f(x)\cos\frac{n\pi x}{l}\,dx+\frac{i}{l}\int_{-l}^{l}f(x)\sin\frac{n\pi x}{l}\,dx\right]=\frac{1}{2l}\int_{-l}^{l}f(x)\left(\cos\frac{n\pi x}{l}+i\sin\frac{n\pi x}{l}\right)dx=\frac{1}{2l}\int_{-l}^{l}f(x)\,e^{in\pi x/l}\,dx$$
Hence, all the coefficients can be expressed as
$$c_{n}=\frac{1}{2l}\int_{-l}^{l}f(x)\,e^{-in\pi x/l}\,dx;\quad n=0,\pm1,\pm2,\ldots$$
Thus, the complex form of the Fourier series of $f(x)$ in $(-l, l)$ is
$$f(x)=\sum_{n=-\infty}^{\infty}c_{n}e^{in\pi x/l}$$
where $c_{n}=\dfrac{1}{2l}\displaystyle\int_{-l}^{l}f(x)\,e^{-in\pi x/l}\,dx;\ n=0,\pm1,\pm2,\pm3,\ldots$

Remark 3.2: If $f(x)$ satisfies Dirichlet's conditions in $(-l, l)$, then the complex form of the Fourier series converges to $f(x)$ at a point of continuity and to $\frac{1}{2}\left[f(x+0)+f(x-0)\right]$ at a point of discontinuity.
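The coefficient formula is easy to evaluate numerically. The sketch below (Python; added for illustration) computes $c_{n}$ for a sample function by splitting the defining integral into its real and imaginary parts and then sums the truncated complex series at one interior point; the sample $f(x)=e^{-x}$ on $(-1,1)$ anticipates Example 3.41.

```python
# c_n = (1/2l) * integral_{-l}^{l} f(x) exp(-i n pi x / l) dx, evaluated by quadrature.
import numpy as np
from scipy.integrate import quad

l = 1.0
f = lambda x: np.exp(-x)

def c(n):
    re = quad(lambda x: f(x) * np.cos(n * np.pi * x / l), -l, l)[0]
    im = quad(lambda x: -f(x) * np.sin(n * np.pi * x / l), -l, l)[0]
    return (re + 1j * im) / (2 * l)

x = 0.3
partial = sum(c(n) * np.exp(1j * n * np.pi * x / l) for n in range(-200, 201))
print(partial.real, np.exp(-x))   # the truncated series approaches f(0.3) = e^{-0.3}
```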

Example 3.39: Obtain the Fourier series for $y=x^{2}$ in $-\pi<x<\pi$. Using the two values of $\bar{y}$, show that
$$\frac{1}{1^{4}}+\frac{1}{2^{4}}+\frac{1}{3^{4}}+\frac{1}{4^{4}}+\cdots=\frac{\pi^{4}}{90}.$$
Solution: $y=x^{2},\ -\pi<x<\pi$ is an even function.
∴ The Fourier series for $y$ is
$$y=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}a_{n}\cos nx;\quad-\pi<x<\pi$$
where
$$a_{0}=\frac{2}{\pi}\int_{0}^{\pi}x^{2}\,dx=\frac{2}{\pi}\left[\frac{x^{3}}{3}\right]_{0}^{\pi}=\frac{2\pi^{2}}{3}$$
and
$$a_{n}=\frac{2}{\pi}\int_{0}^{\pi}x^{2}\cos nx\,dx=\frac{2}{\pi}\left[x^{2}\left(\frac{1}{n}\sin nx\right)-2x\left(-\frac{1}{n^{2}}\cos nx\right)+2\left(-\frac{1}{n^{3}}\sin nx\right)\right]_{0}^{\pi}=\frac{2}{\pi}\left[\frac{2\pi}{n^{2}}(-1)^{n}\right]=\frac{4(-1)^{n}}{n^{2}}$$
∴ The Fourier series for $y$ is
$$y=\frac{\pi^{2}}{3}+4\sum_{n=1}^{\infty}\frac{(-1)^{n}}{n^{2}}\cos nx;\quad-\pi<x<\pi$$
Let $\bar{y}$ be the root mean square value of $y$. Then
$$\bar{y}^{2}=\frac{1}{2\pi}\int_{-\pi}^{\pi}y^{2}\,dx=\frac{1}{\pi}\int_{0}^{\pi}y^{2}\,dx=\frac{1}{2}\left[\frac{a_{0}^{2}}{2}+\sum_{n=1}^{\infty}a_{n}^{2}\right]\quad\text{(by Parseval's formula)}$$
$$=\frac{1}{2}\left[\frac{2\pi^{4}}{9}+\sum_{n=1}^{\infty}\frac{16}{n^{4}}\right]\tag{1}$$
Also, $y=x^{2},\ -\pi<x<\pi$; then
$$\bar{y}^{2}=\frac{1}{2\pi}\int_{-\pi}^{\pi}x^{4}\,dx=\frac{1}{\pi}\int_{0}^{\pi}x^{4}\,dx=\frac{1}{\pi}\left[\frac{x^{5}}{5}\right]_{0}^{\pi}=\frac{\pi^{4}}{5}\tag{2}$$
Equating the two values of $\bar{y}^{2}$ from (1) and (2),
$$\frac{\pi^{4}}{5}=\frac{1}{2}\left[\frac{2\pi^{4}}{9}+\sum_{n=1}^{\infty}\frac{16}{n^{4}}\right]$$
$$\therefore\quad16\sum_{n=1}^{\infty}\frac{1}{n^{4}}=\frac{2\pi^{4}}{5}-\frac{2\pi^{4}}{9}=\frac{8\pi^{4}}{45}$$
$$\therefore\quad\frac{1}{1^{4}}+\frac{1}{2^{4}}+\frac{1}{3^{4}}+\frac{1}{4^{4}}+\cdots=\frac{\pi^{4}}{90}$$
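The deduction can be confirmed directly (a short added check in Python): partial sums of $\sum1/n^{4}$ approach $\pi^{4}/90\approx1.0823$.

```python
# Partial sum of 1/n^4 against the closed form pi^4/90 obtained above.
import math

s = sum(1 / n**4 for n in range(1, 100001))
print(s, math.pi**4 / 90)   # both print 1.08232323...
```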

Example 3.40: Prove that for 0 < x < π


π 2  cos 2 x cos 4 x cos 6 x 
(a)  x (π − x ) = − + + + 
6  12 22 32 
8  sin x sin 3 x sin 5 x 
(b)  x (π − x ) =  + 3 + 3 + 
π 1 3
3 5 

Deduce form (a) and (b) respectively that



1 π4 ∞
1 π6

n=1 n
4
=
90
and ∑
n=1 n
6
=
945

Solution: (a) Let f ( x ) = x (π − x ) , 0 < x < π

By half-range cosine series of f ( x ) , we have


a0 ∞
f ( x ) = x (π − x ) = + ∑ an cos nx, 0 < x < π 
2 n =1
π
2 π 2  π x 2 x3  2 π3 π3  π2
where
π 0
(
a0 = ∫ π x − x 2 dx = 
π 2
)
−  = 
3 0 π  2
− =
3  3

2 π
and an =
π ∫0
(
π x − x 2 cos nx dx )

π
2 1   1   1 
= 
π
( n
)
π x − x 2  sin nx  − (π − 2 x )  − 2 cos nx  + ( −2 )  − 3 sin nx  
  n   n 0

2 π π  2
− ( −1) − 2  = 2 ( −1) − 1
n n +1
=
π  n2 n  n  

4 1
∴ a2 n = − =− , a2 n −1 = 0; n = 1, 2, 3,...
( 2n ) 2
n2

\ Half-range cosine series of f ( x ) in ( 0, π ) is

π2 ∞  1 
x (π − x ) = +∑ −  cos 2nx
6 n =1  n2  
π 2  cos 2 x cos 4 x cos 6 x 
= − + + + 
6  12 22 32 
By Parseval’s formula
2 π a02 ∞ 2 a02 ∞ 2
( ) + ∑ an = + ∑ a2 n
2

π ∫0 
 f x 
 dx =
2 n =1 2 n =1 
π 4 ∞
2 π
( ) +∑
1
2

π ∫0
∴ π x − x 2 dx =
18 n =1 n4 
π4 ∞ 1

2 π 2 2
π ∫0
π (
x − 2π x 3
+ x 4
dx = +∑ )
18 n =1 n4 
π

1 π 4 2  π 2 x3 π x 4 x5 
∴ ∑
n=1 n
4
=− +
18 π  3

2
+ 
5 0


π4 2 π5 π5 π5  π 4 2π 4
=− +  − + =− + (10 − 15 + 6 ) 
18 π  3 2 5  18 30

 −5 + 6  π
4
=π4  =
 90  90 

(b)  f ( x ) = x (π − x ) ; 0 < x < π


By half-range sine series of f ( x ), we have

f ( x ) = x (π − x ) = ∑ bn sin nx, 0 < x < p
n =1 
2 π
where bn =
π ∫0
(π x − x 2 ) sin nx dx

π
2  −1   1   1 
=
π 
( )
π x − x 2  cos nx  − (π − 2 x )  − 2 sin nx  + ( −2 )  3 cos nx  
 n   n   n 0

2 2 
⋅ 3 1 − ( −1) 
n
=
π n  

8
∴ b2 n = 0, b2 n −1 = ; n = 1, 2, 3,...
π ( 2n − 1)
3

8 ∞
sin ( 2n − 1) x
∴ x (π − x ) = ∑
π n =1 ( 2n − 1)3

8  sin x sin 3 x sin 5 x 
=  3 + 3 + 3 + 
π 1 3 5 

By Parseval’s formula
π ∞ ∞
2 2
∫ x (π − x ) dx =∑ bn2 = ∑ b22n −1
2

π 0
n =1 n =1


64 1 2
( )
π
∴ ∑
π 2 n =1 ( 2n − 1)6 π ∫0
= x 4 − 2π x 3 + π 2 x 2 dx

π
2  x5 x 4 π 2 x3 
=  − 2π + 
π5 4 3 0

2 π π π 
5 5 5
=  − + 
π5 2 3


 6 − 15 + 10  π
4

= 2π 4   = 15 
 30 
1 1 1 π6
∴ 6
+ 6 + 6 + =
1 3 5 960 
1 1 1
Add 6
+ 6 + 6 + to both sides
2 4 6 ∞
1 π6  1 1 1 
∑ n 6
=
960
+  6 + 6 + 6 + 
2 4 6 
n=1

π6 1 1 1 1 
= + + + + 
960 64  16 26 36 

π6 1 ∞ 1
= + ∑ 6
960 64 n =1 n 

63 6 1 π6
⇒ ∑
64 n=1 n 6
=
960 
6
1 64π 6 π6
∴ ∑
n=1 n
6
= =
63 (960 ) 945


Example 3.41: Find the complex form of Fourier series of f ( x ) = e − x in −1 ≤ x < 1, f ( x + 2) = f ( x ) .


Solution: Complex form of Fourier series of f ( x ) is

f (x) = ∑ce n
inπ x
; −1 ≤ x < 1
n = −∞ 
1 1 − x − inπ x 1
( )
1

2 ∫−1
where cn = e e dx = − e − x e − inπ x
2 (1 + i nπ ) −1


  e ( −1) − e ( −1) 
n −1 n
 1 − i nπ
=   
1+ n π
2 2
  2 

( −1) (1 − i nπ )
n

= sinh 1
1 + n 2π 2 
\ Complex form of Fourier series of f ( x ) is

f ( x ) = sinh 1 ∑

( −1) (1 − i nπ ) ei nπ x ; − 1 ≤ x < 1
n

n = −∞ 1 + n2π 2 

Example 3.42: Obtain the complex form of the Fourier series of the functions
0 , − π < x < 0 1 , −2 ≤ x < 1
(i) f ( x ) =     (ii) f ( x ) = 
1 , 0 ≤ x ≤ π 0 , 1 ≤ x < 2
Solution: (i) Complex form of Fourier series of f ( x ) is

f ( x) ∼ ∑ce n
inx

n =−∞ 
1 π 1 π
where cn = ∫ f ( x ) e − inx dx = ∫ e − inx dx
2π −π 2π 0

1
( )
π
= e − inx ; n≠0
−2iπ n 0


1  1 − ( −1) 
n

=  
iπ n  2 
 

0 ; n is even

∴ cn =  i
 − π n ; n is odd

1 π 1
2π ∫0
c0 = dx =
2

\ Complex form of Fourier series of f ( x ) is



1 i  e ix e − ix   e 3ix e −3ix   e 5ix e −5ix  
f ( x ) ∼ −  + + + + +  + 
2 π  1 −1   3 −3   5 −5  

(ii) Complex form of Fourier series of f ( x ) is


∞ inπ x
f ( x) ∼ ∑c n e 2

n =−∞ 
inπ x
1 2 −
( )
4 ∫−2
where cn = f x e 2
dx

1
1 1 − inπ2 x i  − i nπ2 x 
4 ∫−2
= e dx = e  ; n≠0
2nπ   −2

i  − in2π 
= e − e inπ 
2nπ  


 i 3n4π −i
3 nπ

i

 e − e 4

e 4
 
=
nπ 2i

i
e 4 3nπ
= sin ; n ≠ 0
nπ 4
1 2 1 1 1 1 3
f ( x ) dx = ∫ dx = ( x )−2 =
4 ∫−2
c0 =
4 −2 4 4
\Complex form of Fourier series of f ( x ) is

i
3 ∞
e 4 3nπ i nπ2 x
f ( x) ∼ + ∑ sin e
4 n =−∞ nπ 4
n≠0


Example 3.43: Find the complex form of the Fourier series of $f(x)=e^{ax}$ in the interval $(-\pi,\pi)$, where $a$ is a non-zero real constant.
Hence deduce that $\dfrac{\pi}{a\sinh a\pi}=\displaystyle\sum_{n=-\infty}^{\infty}\dfrac{(-1)^{n}}{a^{2}+n^{2}}$.
Solution: The complex form of the Fourier series of $f(x)$ is
$$f(x)\sim\sum_{n=-\infty}^{\infty}c_{n}e^{inx}$$
where
$$c_{n}=\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{ax}e^{-inx}\,dx=\frac{1}{2\pi(a-in)}\left[e^{ax}e^{-inx}\right]_{-\pi}^{\pi}=\frac{a+in}{2\pi\left(a^{2}+n^{2}\right)}\left[e^{a\pi}(-1)^{n}-e^{-a\pi}(-1)^{n}\right];\quad a\neq0$$
$$=\frac{(-1)^{n}(a+in)}{\pi\left(a^{2}+n^{2}\right)}\sinh(a\pi)\qquad\left(\because a\neq0,\ \therefore a^{2}+n^{2}\neq0\right)$$
∴ The complex form of the Fourier series of $f(x)$ is
$$f(x)\sim\frac{1}{\pi}\sinh(a\pi)\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}(a+in)}{a^{2}+n^{2}}e^{inx}$$
Taking $x=0$,
$$f(0)=\frac{1}{\pi}\sinh(a\pi)\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}(a+in)}{a^{2}+n^{2}}$$
But $f(0)=e^{0}=1$.
$$\therefore\quad1=\frac{1}{\pi}\sinh(a\pi)\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}(a+in)}{a^{2}+n^{2}}\quad\text{or}\quad\frac{\pi}{\sinh a\pi}=\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}(a+in)}{a^{2}+n^{2}}$$
Equate real parts:
$$\frac{\pi}{\sinh a\pi}=\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}\,a}{a^{2}+n^{2}}\quad\Rightarrow\quad\frac{\pi}{a\sinh a\pi}=\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}}{a^{2}+n^{2}}$$
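The deduced identity is easy to test numerically (an added sketch in Python, with a sample value a = 1.5): a symmetric partial sum of $\sum(-1)^{n}/(a^{2}+n^{2})$ is compared with $\pi/(a\sinh a\pi)$.

```python
# Check of pi/(a*sinh(a*pi)) = sum over all integers n of (-1)^n / (a^2 + n^2).
import math

a, N = 1.5, 100000
s = 1 / a**2 + 2 * sum((-1)**n / (a**2 + n**2) for n in range(1, N + 1))
print(s, math.pi / (a * math.sinh(a * math.pi)))   # the two values agree closely
```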

Exercise 3.3

1. By using the sine series for $f(x)=1$ in $0<x<\pi$ show that $\dfrac{\pi^{2}}{8}=1+\dfrac{1}{3^{2}}+\dfrac{1}{5^{2}}+\dfrac{1}{7^{2}}+\cdots$
2. From the Fourier sine series for $f(x)=x(\pi-x)$ in $0<x<\pi$, prove that $1+\dfrac{1}{3^{6}}+\dfrac{1}{5^{6}}+\dfrac{1}{7^{6}}+\cdots=\dfrac{\pi^{6}}{960}$.
3. If $f(x)=\begin{cases}\pi x, & 0<x<1\\ \pi(2-x), & 1<x<2\end{cases}$ using the half-range cosine series, show that $\dfrac{1}{1^{4}}+\dfrac{1}{3^{4}}+\dfrac{1}{5^{4}}+\cdots=\dfrac{\pi^{4}}{96}$.
4. Given the series $x=2\displaystyle\sum_{n=1}^{\infty}\dfrac{(-1)^{n+1}}{n}\sin nx$, show that $\displaystyle\sum_{n=1}^{\infty}\dfrac{1}{n^{2}}=\dfrac{\pi^{2}}{6}$.
5. Find the complex form of Fourier series of $f(x)=2x$ in $0<x<2\pi$.
6. Find the complex form of Fourier series of $f(x)=\sin ax$ in the interval $(-\pi,\pi)$ where $a$ is not an integer.
7. Find the complex form of Fourier series of $f(x)=e^{2x}$, $0<x<2$.
8. Find the complex form of Fourier series of $f(x)=\cosh ax$ in $-\pi<x<\pi$ where $a$ is not an integer.
9. Find the complex form of Fourier series of $f(x)=\begin{cases}-1, & -1<x<0\\ \ \ 1, & 0<x<1\end{cases}$
10. Obtain the complex form of Fourier series of $f(x)=e^{x}$ for $-\pi<x<\pi$.
11. Find the complex form of Fourier series of $f(x)=\begin{cases}x^{2}, & 0\le x<1\\ 1, & 1<x<2\end{cases}$
12. Find the complex form of Fourier series of $f(x)=\sinh ax$ in the interval $(-l,l)$.
13. Find the complex form of the Fourier series of $f(x)=\cos ax$ in $-\pi<x<\pi$ where $a$ is not an integer.
14. Find the complex form of Fourier series of $f(x)=\cosh2x+\sinh2x$ in the interval $(-5,5)$.
15. Find the complex form of Fourier series of $f(x)=\begin{cases}0, & -\pi/2<x<0\\ \cos x, & 0<x<\pi/2\end{cases}$

Answers 3.3

5. $f(x)=2\pi+2i\displaystyle\sum_{\substack{n=-\infty\\ n\neq0}}^{\infty}\dfrac{1}{n}e^{inx}$

6. $f(x)=\dfrac{i\sin a\pi}{\pi}\displaystyle\sum_{n=-\infty}^{\infty}\dfrac{(-1)^{n}\,n}{n^{2}-a^{2}}e^{inx}$

7. $f(x)=\dfrac{e^{4}-1}{2}\displaystyle\sum_{n=-\infty}^{\infty}\dfrac{2+in\pi}{4+n^{2}\pi^{2}}e^{in\pi x}$

8. $f(x)=\dfrac{a\sinh a\pi}{\pi}\displaystyle\sum_{n=-\infty}^{\infty}\dfrac{(-1)^{n}}{a^{2}+n^{2}}e^{inx}$

9. $f(x)\sim\dfrac{2}{i\pi}\left[\left(\dfrac{e^{i\pi x}}{1}+\dfrac{e^{-i\pi x}}{-1}\right)+\left(\dfrac{e^{3i\pi x}}{3}+\dfrac{e^{-3i\pi x}}{-3}\right)+\left(\dfrac{e^{5i\pi x}}{5}+\dfrac{e^{-5i\pi x}}{-5}\right)+\cdots\right]$

10. $f(x)=\dfrac{\sinh\pi}{\pi}\displaystyle\sum_{n=-\infty}^{\infty}\dfrac{(-1)^{n}(1+in)}{1+n^{2}}e^{inx}$

11. $f(x)\sim\dfrac{2}{3}+\dfrac{1}{2}\displaystyle\sum_{\substack{n=-\infty\\ n\neq0}}^{\infty}\left[\dfrac{i}{n\pi}+\dfrac{2(-1)^{n}}{n^{2}\pi^{2}}-\dfrac{2i}{n^{3}\pi^{3}}\left\{(-1)^{n}-1\right\}\right]e^{in\pi x}$

12. $f(x)=i\pi\sinh al\displaystyle\sum_{n=-\infty}^{\infty}\dfrac{(-1)^{n}\,n}{a^{2}l^{2}+n^{2}\pi^{2}}e^{in\pi x/l}$

13. $f(x)=\dfrac{a\sin a\pi}{\pi}\displaystyle\sum_{n=-\infty}^{\infty}\dfrac{(-1)^{n}}{a^{2}-n^{2}}e^{inx}$

14. $f(x)=\sinh10\displaystyle\sum_{n=-\infty}^{\infty}\dfrac{(-1)^{n}(10+in\pi)}{100+n^{2}\pi^{2}}e^{in\pi x/5}$

15. $f(x)\sim\dfrac{1}{\pi}\displaystyle\sum_{n=-\infty}^{\infty}\dfrac{(-1)^{n}+2in}{1-4n^{2}}e^{i2nx}$

3.4  Harmonic Analysis


If $f(x)$ is a periodic function with period $2l$ defined in $(0, 2l)$, then it can be expanded in the Fourier series
$$f(x)=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}a_{n}\cos\frac{n\pi x}{l}+\sum_{n=1}^{\infty}b_{n}\sin\frac{n\pi x}{l}\tag{3.20}$$
where
$$a_{0}=\frac{1}{l}\int_{0}^{2l}f(x)\,dx\tag{3.21}$$
$$a_{n}=\frac{1}{l}\int_{0}^{2l}f(x)\cos\frac{n\pi x}{l}\,dx\tag{3.22}$$
$$b_{n}=\frac{1}{l}\int_{0}^{2l}f(x)\sin\frac{n\pi x}{l}\,dx\tag{3.23}$$
Further, if $f(x)$ is given in $(0, l)$, we can form its even and odd extensions, both of which are periodic functions of period $2l$. The even extension can be expanded in the half-range Fourier cosine series
$$f(x)=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}a_{n}\cos\frac{n\pi x}{l}\tag{3.24}$$
where
$$a_{0}=\frac{2}{l}\int_{0}^{l}f(x)\,dx\tag{3.25}$$
$$a_{n}=\frac{2}{l}\int_{0}^{l}f(x)\cos\frac{n\pi x}{l}\,dx\tag{3.26}$$
and the odd extension can be expanded in the half-range Fourier sine series
$$f(x)=\sum_{n=1}^{\infty}b_{n}\sin\frac{n\pi x}{l}\tag{3.27}$$
where
$$b_{n}=\frac{2}{l}\int_{0}^{l}f(x)\sin\frac{n\pi x}{l}\,dx\tag{3.28}$$
If $f(x)$ is given as an explicit function of $x$, then the Fourier coefficients in (3.21) to (3.23), (3.25) to (3.26) or (3.28) can be evaluated by the integrals mentioned in these equations. In practice, however, the function is often given not by a formula but by a graph or by a table of corresponding values. In such cases, the integrals cannot be evaluated and, instead, the following alternate forms of these integrals are employed. Since, for any function $F(x)$, $\dfrac{1}{b-a}\displaystyle\int_{a}^{b}F(x)\,dx$ is the mean value of $y=F(x)$ over the range $(a, b)$, equations (3.21) to (3.23) give
$$a_{0}=2\cdot\frac{1}{2l}\int_{0}^{2l}f(x)\,dx=2\,\Big[\text{mean value of }f(x)\text{ in }(0,2l)\Big]$$
$$a_{n}=2\cdot\frac{1}{2l}\int_{0}^{2l}f(x)\cos\frac{n\pi x}{l}\,dx=2\,\Big[\text{mean value of }f(x)\cos\tfrac{n\pi x}{l}\text{ in }(0,2l)\Big]$$
$$b_{n}=2\cdot\frac{1}{2l}\int_{0}^{2l}f(x)\sin\frac{n\pi x}{l}\,dx=2\,\Big[\text{mean value of }f(x)\sin\tfrac{n\pi x}{l}\text{ in }(0,2l)\Big]$$
and equations (3.25) to (3.26) give
$$a_{0}=2\cdot\frac{1}{l}\int_{0}^{l}f(x)\,dx=2\,\Big[\text{mean value of }f(x)\text{ in }(0,l)\Big]$$
$$a_{n}=2\cdot\frac{1}{l}\int_{0}^{l}f(x)\cos\frac{n\pi x}{l}\,dx=2\,\Big[\text{mean value of }f(x)\cos\tfrac{n\pi x}{l}\text{ in }(0,l)\Big]$$
and equation (3.28) gives
$$b_{n}=2\cdot\frac{1}{l}\int_{0}^{l}f(x)\sin\frac{n\pi x}{l}\,dx=2\,\Big[\text{mean value of }f(x)\sin\tfrac{n\pi x}{l}\text{ in }(0,l)\Big]$$
Thus, in each case, the Fourier coefficient is twice the mean value of the function to be integrated over that interval. The mean value in discrete data is obtained by dividing the sum by the number of values.

In (3.20), the term $a_{1}\cos\dfrac{\pi x}{l}+b_{1}\sin\dfrac{\pi x}{l}$ is called the fundamental or first harmonic. The term $a_{2}\cos\dfrac{2\pi x}{l}+b_{2}\sin\dfrac{2\pi x}{l}$ is called the second harmonic, the term $a_{3}\cos\dfrac{3\pi x}{l}+b_{3}\sin\dfrac{3\pi x}{l}$ is called the third harmonic and so on.
Amplitude of first harmonic $=\sqrt{a_{1}^{2}+b_{1}^{2}}$
Amplitude of second harmonic $=\sqrt{a_{2}^{2}+b_{2}^{2}}$
Amplitude of third harmonic $=\sqrt{a_{3}^{2}+b_{3}^{2}}$
and so on.
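Because each coefficient is just twice a mean of tabulated products, harmonic analysis reduces to a few array operations. The sketch below (Python/NumPy; added for illustration) uses the six ordinates of Example 3.46 over one period and reproduces, up to rounding, its constant term $a_{0}/2\approx1.37$, first harmonic $a_{1}\approx0.77$, $b_{1}\approx1.10$ and the amplitude of the first harmonic.

```python
# Harmonic analysis of equally spaced ordinates y_0,...,y_{k-1} covering one full period (0, 2l):
# a_n = 2*mean(y*cos(n*theta)), b_n = 2*mean(y*sin(n*theta)), where theta = pi*x/l = 2*pi*j/k.
import numpy as np

y = np.array([1.98, 2.15, 2.77, 0.22, -0.31, 1.43])   # data of Example 3.46 (repeated last ordinate omitted)
k = len(y)
theta = 2 * np.pi * np.arange(k) / k

a0 = 2 * y.mean()
def harmonic(n):
    return 2 * np.mean(y * np.cos(n * theta)), 2 * np.mean(y * np.sin(n * theta))

a1, b1 = harmonic(1)
print(a0 / 2, a1, b1, np.hypot(a1, b1))   # constant term, first harmonic and its amplitude
```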
Remark 3.3:
  (i)  If data for function f ( x ) at equidistant points is
x x0 x1 x2 … xk −1 xk

f ( x) y0 y1 y2 … yk −1 yk

such that yk = y0 then xk = x0 + 2l where 2l is period of f ( x ). For finding mean of f ( x ),
nπ x nπ x
f ( x ) cos , f ( x ) sin in the period 2l the entry f (xk) = yk should not be taken as we take
l l
f ( x0 ) = y0 . The result will be same if we obtain the integrals by trapezoidal rule.

(ii)  If data for function f ( x ) at equidistant points is


x 0 x1 x2 … xk −1 xk

f ( x) y0 y1 y2 … yk −1 yk

and we are to find half-range sine-series, then for odd extension of f ( x ), we must
have y0 = 0, otherwise odd extension cannot be possible. Further, if yk = 0 then
xk = l (∵ f ( −l ) = f (l ) and f ( −l ) = − f (l ) ⇒ f (l ) = 0) and the entry yk should not be taken
but if yk ≠ 0 then xk ≠ l and yk +1 = 0 at xk +1 = l and hence entry yk should be taken.

(iii)  If data for function f ( x ) at equidistant points is


x 0 x1 x2 … xk −1 xk

f ( x) y0 y1 y2 … yk −1 yk

and we are to find half-range cosine series. Then for even extension of f ( x ), y0 may or may not
be zero. Further, for even extension f ( −l ) = f (l ) and hence f (l ) may or may not be zero. Hence
yk may or may not be zero, therefore, we have xk = l. Thus, for even extension by trapezoidal rule
2 l 2 x1
f ( x ) dx =  ∫ f ( x ) dx + ∫ f ( x ) dx +  + ∫ f ( x ) dx 
x2 xk


a0 =
l ∫0 
l 0 x1 xk −1 

2 y +y y +y y +y 
=  ( x1 − 0 ) 0 1 + ( x2 − x1 ) 1 2 +  + ( xk − xk −1 ) k −1 k 
l 2 2 2 


2 l  y + y1 y1 + y2 y + yk  
= .  0 + +  + k −1 
l k 2 2 2 
2  y0 + yk 
= + y1 + y2 +  yk −1 
k  2 

Similarly,
 nπ xk 
y + yk cos
2 0 l + y cos nπ x nπ x nπ x k −1

an =  1
1
+ y2 cos 2
+  yk −1 cos  (∵ x0 = 0)
k 2 l l l 
 

Example 3.44: Given that x is discrete function of q, tabulated as follows:


q 0° 30° 60° 90° 120° 150° 180° 210° 240° 270° 300° 330°
X 70 886 1293 1400 1307 814 –70 –886 -1293 -1400 -1307 –814
Express x as a Fourier series in θ up to third harmonics.
Solution:
q ∞ x cos q sin q cos 2q sin 2q cos 3q sin 3q
0 70 1 0 1 0 1 0
30 886 3 2 0.5 0.5 3 2 0 1

60 1293 0.5 3 2 -0.5 3 2 -1 0

90 1400 0 1 -1 0 0 -1
120 1307 -0.5 3 2 -0.5 − 3 2 1 0

150 814 − 3 2 0.5 0.5 − 3 2 0 1

180 –70 -1 0 1 0 -1 0
210 –886 − 3 2 -0.5 0.5 3 2 0 -1

240 -1293 -0.5 − 3 2 -0.5 3 2 1 0

270 -1400 0 -1 -1 0 0 1

300 -1307 0.5 − 3 2 -0.5 − 3 2 -1 0

330 –814 3 2 -0.5 0.5 − 3 2 0 -1


0

1
a0 =
6
∑x= 0 
1   
(886 − 814 ) + 0.5 (1293 − 1307 )
1 3
a1 = ∑ x cos θ =  2 70 +
6 6   2 

1
= 70 + 3 ( 36 ) − 7  = 42
3 
1   
(1293 + 1307 )
1 3
b1 = ∑ x sin θ =  2 1400 + 0.5 ( 886 + 814 ) +
6 6   2 
1
= 1400 + 850 + 3 (1300 )  = 1501
3 
1
a2 = ∑ x cos 2θ = 0
6 
1
b2 = ∑ x sin 2θ = 0
6 
1 1
a3 = ∑ x cos 3θ =  2 ( 70 − 1293 + 1307 )  = 28
6 6 
1 1
b3 = ∑ x sin 3θ =  2 ( 886 − 1400 + 814 )  = 100
6 6 
a0
x = + a1 cos θ + b1 sin θ + a2 cos 2θ + b2 sin 2θ + a3 cos 3θ + b3 sin 3θ + 
2 
∴ up to three harmonics
x ≅ 42 cos θ + 1501sin θ + 28 cos 3θ + 100 sin 3θ 

Example 3.45: Find first three harmonics for the function f (θ ) given by the following table

q ° 0 60 120 180 240 300 360


f (q ) 0.8 0.6 0.4 0.7 0.9 1.1 0.8
Solution:
q ∞ f (q ) cos q sin q cos 2q sin 2q cos 3q sin 3q
0 0.8 1 0 1 0 1 0
60 0.6 0.5 3 2 -0.5 3 2 -1 0

120 0.4 -0.5 3 2 -0.5 − 3 2 1 0


180 0.7 -1 0 1 0 -1 0
240 0.9 -0.5 − 3 2 -0.5 3 2 1 0

300 1.1 0.5 − 3 2 -0.5 − 3 2 -1 0



2 1
a1 =
6
∑ f (θ ) cos θ = 0.8 − 0.7 + 0.5 ( 0.6 − 0.4 − 0.9 + 1.1)  = 0.1 
3

2 1 3 
b1 =
6
∑ f (θ ) sin θ =  ( 0.6 + 0.4 − 0.9 − 1.1)  = −0.3
3  2 


2 1
a2 =
6
∑ f (θ ) cos 2θ = 0.8 + 0.7 − 0.5 ( 0.6 + 0.4 + 0.9 + 1.1)  = 0
3 

2 1 3
b2 =
6
∑ f (θ ) sin 2θ = ⋅
3 2
( 0.6 − 0.4 + 0.9 − 1.1) = 0


2 1
a3 = ∑ f (θ ) cos 3θ = [ 0.8 − 0.6 + 0.4 − 0.7 + 0.9 − 1.1] = −0.1
6 3 

2
b3 =
6
∑ f (θ ) sin 3θ = 0

∴ first three harmonics are
first harmonic = a1 cosq + b1 sinq = 0.1 cosq – 0.3 sinq
second harmonic = a2 cos2q + b2 sin2q = 0
third harmonic = a3 cos3q + b3 sin3q = – 0.1 cos3q

Example 3.46: Following values of y give the displacement of a certain machine part for the
rotation x of the fly wheel

π 2π 4π 5π
x 0 p 2p
3 3 3 3
y 1.98 2.15 2.77 0.22 -0.31 1.43 1.98

Express y in Fourier series up to the third harmonics.



Solution:
x y cos x sin x cos 2x sin 2x cos 3x sin 3x
0 1.98 1 0 1 0 1 0
π
2.15 0.5 3 2 -0.5 3 2 -1 0
3


2.77 -0.5 3 2 -0.5 − 3 2 1 0
3
p 0.22 -1 0 1 0 -1 0

-0.31 -0.5 − 3 2 -0.5 3 2 1 0
3

5π − 3 2 -0.5 − 3 2
1.43 0.5 -1 0
3
8.24

1 1 1
a0 = ∑ y = ( 8.24 ) = 1.37
2 6 6 
2 1
a1 =
6
∑ y cos x = 3
1.98 − 0.22 + 0.5 ( 2.15 − 2.77 + 0.31 + 1.43)  = 0.77

2 1 3 3
b1 =
6
∑ y sin x = 3 ⋅ 2 [ 2.15 + 2.77 + 0.31 − 1.43] = 6 ( 3.8) = 1.10

2 1
a2 =
6
∑ y cos 2 x = 3 1.98 + 0.22 − 0.5 ( 2.15 + 2.77 − 0.31 + 1.43) = −0.27

2 1 3 3
b2 =
6
∑ y sin 2 x = 3 2 [ 2.15 − 2.77 − 0.31 − 1.43] = 6 ( −2.36 ) = −0.68

2 1
a3 = ∑ y cos 3 x = [1.98 − 2.15 + 2.77 − 0.22 − 0.31 − 1.43] = 0.21
6 3 
2
b3 =
6
∑ y sin 3x = 0

y in Fourier series up to the third harmonics is
a
y = 0 + a1 cos x + b1 sin x + a2 cos 2 x + b2 sin 2 x + a3 cos 3 x + b3 sin 3 x
2 
= 1.37 + ( 0.77 ) cos x + (1.10 ) sin x − ( 0.27 ) cos 2 x − ( 0.68 ) sin 2 x + ( 0.21) cos 3x


Example 3.47: The turning moment T units of the crank shaft of a steam engine is given for a
series of values of the crank angle θ in degrees

q 0° 30° 60° 90° 120° 150°


T 0 5224 8097 7850 5499 2626

Find the first four terms in a series of sines to represent T. Also calculate T when θ = 75°.
Solution:
q  ∞ T sin q sin 2q sin 3q sin 4q
0 0 0 0 0 0

30 5224 0.5 3 2 1 3 2

60 8097 3 2 3 2 0 − 3 2

90 7850 1 0 -1 0

120 5499 3 2 − 3 2 0 3 2

150 2626 0.5 − 3 2 1 − 3 2

2 1 3 
b1 =
6
∑ T sin θ =  ( 8097 + 5499 ) + 7850 + 0.5 ( 5224 + 2626 ) 
3  2 

1
=  3 ( 6798 ) + 7850 + 3925 = 7850
3 
2 1 3 3
b2 = ∑ T sin 2θ = [5224 + 8097 − 5499 − 2626] = ( 5196 ) = 1500
6 3 2 6 
2 1
b3 = ∑ T sin 3θ = [5224 − 7850 + 2626 ] = 0
6 3 
2 1 3
b4 = ∑ T sin 4θ = [5224 − 8097 + 5499 − 2626] = 0
6 3 2 
∴ up to four terms
T = 7850 sin θ + 1500 sin 2θ + 0 sin 3θ + 0 sin 4θ +  

T ( 75° ) = 7850 sin 75° + 1500 sin 150° =


7850 ( 3 +1 ) + 1500 = 8332
2 2 2

Example 3.48: Obtain the constant term and the coefficients of the first two sine and first two
cosine terms in the Fourier expansion of y(x) tabulated below
x 0 1 2 3 4 5
y 9 18 24 28 26 20
Find also the amplitude of the first harmonic.
Solution:
πx
x y =θ cos q sin q cos 2q sin 2q
3
0 9 0 1 0 1 0
π
1 18 0.5 3 2 -0.5 3 2
3

2π - 3 2
2 24 -0.5 3 2 -0.5
3
3 28 p -1 0 1 0
4π − 3 2
4 26 -0.5 -0.5 3 2
3

5π − 3 2 − 3 2
5 20 0.5 -0.5
3
125

1 2 1
a0 = ∑ y = (125 ) = 20.83
2 12 6 
2 πx 1
a1 =
6
∑ y cos = 9 − 28 + 0.5 (18 − 24 − 26 + 20 )  = −8.33
3 3 
2 πx 1 3
b1 =
6
∑ y sin 3 = 3 2 [18 + 24 − 26 − 20] = −1.15

2 2π x 1
a2 =
6
∑ y cos
3
= 9 + 28 − 0.5 (18 + 24 + 26 + 20 )  = −2.33
3 
2 2π x 1 3
b2 =
6
∑ y sin 3 = 3 2 [18 − 24 + 26 − 20] = 0

a0
∴ Constant terms = = 20.83
2

πx 2π x
Coeff. of sin = b1 = −1.15, Coeff. of sin = b2 = 0
3 3
πx 2π x
Coeff. of cos = a1 = −8.33, Coeff. of cos = −2.33
3 3

( −8.33) + ( −1.15)
2 2
Amplitude of first harmonic = a12 + b12 = = 8.41

Example 3.49: A function f ( x ) is defined as


x ; 0 ≤ x ≤1
f ( x) = 
2 − x ; 1 ≤ x ≤ 2 
Using 12 ordinates, show that an approximate Fourier series for f ( x ) is given by
f ( x ) = 0.5 − 0.415 cos π x − 0.056 cos 3π x − 0.030 cos 5π x

Solution: We have f ( x + 2) = 2 − ( x + 2) ; 1 ≤ x + 2 ≤ 2
But f ( x + 2) = f ( x )

∴ f ( x ) = − x; − 1 ≤ x ≤ 0

∴ f ( x ) is an even function of x.

x f (x) cos px cos 2px cos 3px cos 4px cos 5px
0 0 1 1 1 1 1
1/6 1/6 3 2 0.5 0 -0.5 − 3 2

1/3 1/3 0.5 -0.5 -1 -0.5 0.5


1/2 1/2 0 -1 0 1 0
2/3 2/3 -0.5 -0.5 1 -0.5 -0.5
5/6 5/6 − 3 2 0.5 0 -0.5 3 2

1 1 -1 1 -1 1 -1
7/6 5/6 − 3 2 0.5 0 -0.5 3 2

4/3 2/3 - 0.5 -0.5 1 -0.5 -0.5


3/2 1/2 0 -1 0 1 0
5/3 1/3 0.5 -0.5 -1 -0.5 0.5
11/6 1/6 3 2 0.5 0 -0.5 − 3 2

1 1 2
a0 = ⋅ ∑ f ( x ) 
2 2 12
1  1 2 3 4 5 6 5 4 3 2 1
=  + + + + + + + + + + 
12  6 6 6 6 6 6 6 6 6 6 6 
1  36  1
=   = = 0.5
12  6  2
2
a1 = ∑ f ( x ) cos π x
12

1  3  1 5 5 1  1 2 2 1 
=   − − +  + 0 .5  − − +  − 1
6  2  6 6 6 6 3 3 3 3 

1  −2 3 1 
=  − − 1 = −0.4147
6  3 3 

2
a2 = ∑ f ( x ) cos 2π x
12
1 1 1  1 1 2 5 5 2 1 1 
=  − + 1 − + 0.5  − − + + − − +   
6 2 2  6 3 3 6 6 3 3 6 
=0
2 1 1 2 2 1 1
a3 =
12
∑ f ( x ) cos 3π x = 6 − 3 + 3 − 1 + 3 − 3  = − 18 = −0.0556
2
a4 =
12
∑ f ( x ) cos 4π x
1 1 1  1 1 2 5 5 2 1 1 
=  + 1 + − 0.5  + + + + + + +  
6 2 2  6 3 3 6 6 3 3 6 
=0 
2
∑ f ( x ) cos 5π x
a5 =
12
1  3  1 5 5 1  1 2 2 1 
=   − + + −  + 0.5  − − +  − 1
6 2 6 6 6 6 3 3 3 3 
1 2 3 1  
=  − − 1
6 3 3 
= −0.0298
\ Approximate Fourier series is
f ( x ) = 0.5 − 0.415 cos π x − 0.056 cos 3π x − 0.030 cos 5π x


Example 3.50: Obtain the first three coefficients in the Fourier cosine series for y, where y is
given in the following table:
x 0 1 2 3 4 5
y 4 8 15 7 6 2
Solution: Here, l = 5
∴ The Fourier cosine series for y is given by
a0 πx 2π x
y= + a1 cos + a2 cos +
2 5 5 
πx πx 2π x
x y cos cos
5 5 5
0 4 0 1 1
π
1 8 0.8090 0.3090
5


2 15 0.3090 -0.8090
5


3 7 -0.3090 -0.8090
5


4 6 -0.8090 0.3090
5
5 2 p -1 1

a0 1 2  4 + 2  1
\ = . + ( 8 + 15 + 7 + 6 )  = ( 39 ) = 7.8
2 2 5  2  5 
2  4 + ( −2)  2
a1 =  + {(8 − 6)0.8090 + (15 − 7)0.3090} = ( 5.0900 ) = 2.036
5 2  5 
2 4 + 2  2
a2 =
5  2 + {(8 + 6)0.3090 − (15 + 7)0.8090} = 5 ( −10.4720 ) = −4.1888
  
∴ The first three coefficients in the Fourier cosine series are
a0
= 7.8, a1 = 2.036 and a2 = −4.1888. 
2

Example 3.51: Obtain the Fourier sine series for f ( x ) containing three non-zero terms where
f ( x ) is given in the following table:
x 0 1 2 3 4 5
f (x) 0 10 15 8 5 3

Solution: Here, l = 6

∴ The Fourier sine series for f ( x ) is given by

πx 2π x 3π x
y = b1 sin + b2 sin + b3 sin +
6 6 6 

πx
x f (x) =θ sin q sin 2q sin 3q
6
0 0 0 0 0 0

1 10 π 1/ 2 3 2 1
6

2 15 π 3 2 3 2 0
3

π
3 8 1 0 -1
2

4 5 2π 3 2 − 3 2 0
3

5 3 5π 1/ 2 − 3 2 1
6

2 1  3 3
∴ b1 = ∑ f ( x ) sin θ =  5 + 8 +  + (15 + 5) 
6 3  2 2 

1
= 14.5 + 10 3  = 10.607
3
2 1 3 
b2 =
6
∑ f ( x ) sin 2θ =  (10 + 15 − 5 − 3)  = 4.907
3  2 

2 1 5
b3 =
6
∑ f ( x ) sin 3θ = 3 [10 − 8 + 3] = 3 = 1.667

∴ Fourier sine series for f ( x ) is

πx 2π x 3π x
f ( x ) = 10.607 sin + 4.907 sin + 1.667 sin +
6 6 6 

Exercise 3.4
1. Find the Fourier series as far as the second harmonic to represent the function given by
the following table:

x 0 30° 60° 90° 120° 150° 180° 210° 240° 270° 300° 330°
f (x) 2.34 3.01 3.69 4.15 3.69 2.20 0.83 0.51 0.88 1.09 1.19 1.64

2. Given that y is discrete function of x, tabulated as follows:


x° 0 30 60 90 120 150 180 210 240 270 300 330 360
y 298 356 373 337 254 155 80 51 60 93 147 221 298
Express y as a Fourier series in x up to third harmonics.

3. The displacement y of a part of a mechanism is tabulated with corresponding angular


movement x  of the crank. Express y as a Fourier series neglecting the harmonics above
the third:
x° 0 30 60 90 120 150 180 210 240 270 300 330
y 1.80 1.10 0.30 0.16 0.50 1.30 2.16 1.25 1.30 1.52 1.76 2.00

4. Analyse the current i (amp) given by the table below into its constituent harmonics up to
third.

q ° 0 30 60 90 120 150 180 210 240 270 300 330


i 0 24 33.5 27.5 18.2 13 0 -24 -33.5 -27.5 -18.2 -13

5. A machine computes its cycle of operations every time as a certain pulley completes a
revolution. The displacement f ( x ) of a point on a certain portion of the machine is given
in the table given below for 12 positions of the pulley, x being the angle in degree turned
through by the pulley. Find a Fourier series to represent f ( x ) for all values of x.

x 30° 60° 90° 120° 150° 180° 210° 240° 270° 300° 330° 360°
f (x) 7.976 8.026 7.204 5.676 3.674 1.764 0.552 0.262 0.904 2.492 4.736 6.824

6. In a machine the displacement y of a given point for a certain angle θ is observed as


­follows:

q ° 0° 30° 60° 90° 120° 150° 180° 210° 240° 270° 300° 330°
y 7.9 8 7.2 5.6 3.6 1.7 0.5 0.2 0.9 2.5 4.7 6.8
Compute the coefficient of sin 2θ in the Fourier series representing the above variation.

7. The following values of y give the displacement in inches of a certain machine part for the
rotation x of the flywheel. Expand y in the form of a Fourier series:

π 2π 3π 4π 5π
x 0
6 6 6 6 6
y 0 9.2 14.4 17.8 17.3 11.7

8. The following table gives the variation of a periodic current over a period:

T T T 2T 5T
t (secs) 0 T
6 3 2 3 6
A(amp) 1.98 1.30 1.05 1.30 -0.88 -0.25 1.98

Show by harmonic analysis that there is a direct current part of 0.75 amp. in the variable
current, and obtain the amplitude of the first harmonic.

9. Obtain the first three coefficients in the Fourier cosine series for y, where y is given by the
following table:

x 0 1 2 3 4 5
y 4 10 15 8 5 3

Answers 3.4

1.  f ( x ) = 2.10 + 0.56 cos x + 1.53 sin x − 0.52 cos 2 x − 0.09 sin 2 x …

2.  y = 202 + 107 cos x + 121sin x − 39 cos 2 x + 9 sin 2 x + 2 cos 3 x − sin 3 x

3.  y = 1.26 + 0.04 cos x − 0.62 sin x + 0.53 cos 2 x − 0.23 sin 2 x − 0.1cos 3x + 0.08 sin 3x

4.  i = 5.7 cos θ + 30.3 sin θ − 5.1 cos 3θ + 3.2 sin 3θ

5.  f ( x ) = 4.174 + 2.450 cos x + 3.160 sin x + 0.120 cos 2 x


+ 0.034 sin 2 x + 0.080 cos 3 x + 0.010 sin x +
6.  −0.072
7.  y = 11.73 − 7.73 cos 2 x − 1.56 sin 2 x − 2.83 cos 4 x + 0.115 sin 4 x

8.  1.07 amp.

9.  8.3, 2.6832 and −4.1888.



3.5  Fourier Integrals and Fourier Transforms


Fourier series are powerful tools for treating various problems involving periodic functions. Since many practical problems involve non-periodic functions, we extend the method of Fourier series to Fourier integrals for such functions.

3.5.1  Fourier Series to Fourier Integrals


Theorem 3.1  Fourier Integral Theorem
If $f(x)$ is piecewise continuous in every finite interval, has a left-hand derivative and a right-hand derivative at every point, and $\displaystyle\int_{-\infty}^{\infty}\left|f(x)\right|dx$ exists, then $f(x)$ can be represented by the Fourier integral
$$f(x)=\int_{0}^{\infty}\left[A(\omega)\cos(\omega x)+B(\omega)\sin(\omega x)\right]d\omega,$$
where
$$A(\omega)=\frac{1}{\pi}\int_{-\infty}^{\infty}f(t)\cos(\omega t)\,dt,\qquad B(\omega)=\frac{1}{\pi}\int_{-\infty}^{\infty}f(t)\sin(\omega t)\,dt.$$
The Fourier integral converges to $f(x_{0})$ if $f(x)$ is continuous at $x=x_{0}$ and converges to $\frac{1}{2}\left[f(x_{0}-0)+f(x_{0}+0)\right]$ if $f(x)$ is discontinuous at $x=x_{0}$.

Proof: Let $f_{l}(x)$ be a periodic function of period $2l$ defined in $(-l, l)$. Its Fourier series is
$$f_{l}(x)=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}\left[a_{n}\cos(\omega_{n}x)+b_{n}\sin(\omega_{n}x)\right],\quad\text{where }\omega_{n}=\frac{n\pi}{l}\tag{3.29}$$
where $a_{0}=\dfrac{1}{l}\displaystyle\int_{-l}^{l}f_{l}(t)\,dt$, $a_{n}=\dfrac{1}{l}\displaystyle\int_{-l}^{l}f_{l}(t)\cos(\omega_{n}t)\,dt$, $b_{n}=\dfrac{1}{l}\displaystyle\int_{-l}^{l}f_{l}(t)\sin(\omega_{n}t)\,dt$.
Substituting the values of $a_{0}$, $a_{n}$, $b_{n}$ in (3.29),
$$f_{l}(x)=\frac{1}{2l}\int_{-l}^{l}f_{l}(t)\,dt+\frac{1}{l}\sum_{n=1}^{\infty}\left[\cos(\omega_{n}x)\int_{-l}^{l}f_{l}(t)\cos(\omega_{n}t)\,dt+\sin(\omega_{n}x)\int_{-l}^{l}f_{l}(t)\sin(\omega_{n}t)\,dt\right]$$
Let $\Delta\omega=\omega_{n+1}-\omega_{n}=\dfrac{(n+1)\pi}{l}-\dfrac{n\pi}{l}=\dfrac{\pi}{l}$, so that $\dfrac{1}{l}=\dfrac{\Delta\omega}{\pi}$; then the Fourier series of $f_{l}(x)$ is
$$f_{l}(x)=\frac{\Delta\omega}{2\pi}\int_{-l}^{l}f_{l}(t)\,dt+\frac{1}{\pi}\sum_{n=1}^{\infty}\left[\cos(\omega_{n}x)\int_{-l}^{l}f_{l}(t)\cos(\omega_{n}t)\,dt+\sin(\omega_{n}x)\int_{-l}^{l}f_{l}(t)\sin(\omega_{n}t)\,dt\right]\Delta\omega\tag{3.30}$$
Take the limit as $l\to\infty$, and hence $\Delta\omega\to0$, in such a way that $\omega_{n}\to\omega$; the infinite series becomes an integral from 0 to $\infty$, the first term on the R.H.S. of (3.30) approaches zero and $f_{l}(x)\to f(x)$:
$$\therefore\quad f(x)=\frac{1}{\pi}\int_{0}^{\infty}\left[\cos(\omega x)\int_{-\infty}^{\infty}f(t)\cos(\omega t)\,dt+\sin(\omega x)\int_{-\infty}^{\infty}f(t)\sin(\omega t)\,dt\right]d\omega$$
Let
$$A(\omega)=\frac{1}{\pi}\int_{-\infty}^{\infty}f(t)\cos(\omega t)\,dt,\qquad B(\omega)=\frac{1}{\pi}\int_{-\infty}^{\infty}f(t)\sin(\omega t)\,dt$$
$$\therefore\quad f(x)=\int_{0}^{\infty}\left[A(\omega)\cos(\omega x)+B(\omega)\sin(\omega x)\right]d\omega\tag{3.31}$$
Thus, the Fourier integral of $f(x)$ is
$$f(x)=\int_{0}^{\infty}\left[A(\omega)\cos(\omega x)+B(\omega)\sin(\omega x)\right]d\omega$$
where $A(\omega)=\dfrac{1}{\pi}\displaystyle\int_{-\infty}^{\infty}f(t)\cos(\omega t)\,dt$, $B(\omega)=\dfrac{1}{\pi}\displaystyle\int_{-\infty}^{\infty}f(t)\sin(\omega t)\,dt$.
The convergence of the Fourier integral follows from the convergence of the Fourier series of $f_{l}(x)$. Putting the values of $A(\omega)$ and $B(\omega)$ in (3.31), we can write
$$f(x)=\frac{1}{\pi}\int_{0}^{\infty}\int_{-\infty}^{\infty}f(t)\cos(\omega x-\omega t)\,dt\,d\omega.$$
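The theorem can be visualised numerically. The sketch below (Python/SciPy; added for illustration) truncates the Fourier integral of the rectangular pulse $f(x)=1$ for $|x|<1$ and $0$ otherwise, for which $A(\omega)=2\sin\omega/(\pi\omega)$ and $B(\omega)=0$, and evaluates it at a point of continuity and at the jump $x=1$.

```python
# Truncated Fourier integral of the pulse f(x) = 1 (|x| < 1), 0 otherwise:
# f(x) ~ integral_0^W A(w) cos(w x) dw with A(w) = 2 sin(w)/(pi w), B(w) = 0.
import numpy as np
from scipy.integrate import quad

A = lambda w: (2 / np.pi) * np.sinc(w / np.pi)   # np.sinc(t) = sin(pi t)/(pi t), so this is 2 sin(w)/(pi w)

def f_approx(x, W=200.0):
    return quad(lambda w: A(w) * np.cos(w * x), 0, W, limit=2000)[0]

print(f_approx(0.5))   # close to 1   (point of continuity inside the pulse)
print(f_approx(2.0))   # close to 0   (outside the pulse)
print(f_approx(1.0))   # close to 0.5 (mean of the one-sided limits at the jump)
```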

3.5.2  Fourier Cosine and Fourier Sine Integrals


The Fourier integral of $f(x)$ is
$$f(x)=\int_{0}^{\infty}\left[A(\omega)\cos(\omega x)+B(\omega)\sin(\omega x)\right]d\omega$$
where $A(\omega)=\dfrac{1}{\pi}\displaystyle\int_{-\infty}^{\infty}f(t)\cos(\omega t)\,dt$, $B(\omega)=\dfrac{1}{\pi}\displaystyle\int_{-\infty}^{\infty}f(t)\sin(\omega t)\,dt$.

Case I: If $f(x)$ is an even function, then $f(t)\cos(\omega t)$ is an even function and $f(t)\sin(\omega t)$ is an odd function.
$$\therefore\quad A(\omega)=\frac{1}{\pi}\int_{-\infty}^{\infty}f(t)\cos(\omega t)\,dt=\frac{2}{\pi}\int_{0}^{\infty}f(t)\cos(\omega t)\,dt\quad\text{and}\quad B(\omega)=0.$$
∴ The Fourier integral of $f(x)$ is
$$f(x)=\frac{2}{\pi}\int_{0}^{\infty}\int_{0}^{\infty}f(t)\cos(\omega t)\cos(\omega x)\,dt\,d\omega;\quad f\text{ is even.}$$
This Fourier integral of $f(x)$ is called the Fourier cosine integral of $f(x)$.

Case II: If $f(x)$ is an odd function of $x$, then $f(t)\cos(\omega t)$ is an odd function of $t$ and $f(t)\sin(\omega t)$ is an even function of $t$.
$$\therefore\quad A(\omega)=0\quad\text{and}\quad B(\omega)=\frac{1}{\pi}\int_{-\infty}^{\infty}f(t)\sin(\omega t)\,dt=\frac{2}{\pi}\int_{0}^{\infty}f(t)\sin(\omega t)\,dt.$$
∴ The Fourier integral of $f(x)$ is
$$f(x)=\frac{2}{\pi}\int_{0}^{\infty}\int_{0}^{\infty}f(t)\sin(\omega t)\sin(\omega x)\,dt\,d\omega;\quad f\text{ is odd.}$$
This Fourier integral of $f(x)$ is called the Fourier sine integral of $f(x)$.

3.5.3  Fourier Cosine and Sine Transforms


Case I: If $f(x)$ is an even function, the Fourier integral is the Fourier cosine integral
$$f(x)=\frac{2}{\pi}\int_{0}^{\infty}\int_{0}^{\infty}f(t)\cos(\omega t)\cos(\omega x)\,dt\,d\omega$$
We define
$$F_{c}\left(f(x)\right)=\int_{0}^{\infty}f(x)\cos(\omega x)\,dx=F_{c}(\omega)\ \text{(say)}\qquad(c\text{ suggests cosine})\tag{3.32}$$
then
$$f(x)=\frac{2}{\pi}\int_{0}^{\infty}F_{c}(\omega)\cos(\omega x)\,d\omega\tag{3.33}$$
$F_{c}(\omega)$ is called the Fourier cosine transform of $f(x)$.
Formula (3.33) gives us back $f(x)$ from $F_{c}(\omega)$ and therefore we write
$$f(x)=F_{c}^{-1}\left(F_{c}(\omega)\right)=\frac{2}{\pi}\int_{0}^{\infty}F_{c}(\omega)\cos(\omega x)\,d\omega$$
and $f(x)$ is called the inverse Fourier cosine transform of $F_{c}(\omega)$.

Case II: If $f(x)$ is an odd function, the Fourier integral is the Fourier sine integral
$$f(x)=\frac{2}{\pi}\int_{0}^{\infty}\int_{0}^{\infty}f(t)\sin(\omega t)\sin(\omega x)\,dt\,d\omega$$
We define
$$F_{s}\left(f(x)\right)=\int_{0}^{\infty}f(x)\sin(\omega x)\,dx=F_{s}(\omega)\ \text{(say)}$$
then
$$f(x)=\frac{2}{\pi}\int_{0}^{\infty}F_{s}(\omega)\sin(\omega x)\,d\omega.$$
$F_{s}(\omega)$ is called the Fourier sine transform of $f(x)$ and $f(x)$ is the inverse Fourier sine transform of $F_{s}(\omega)$:
$$f(x)=F_{s}^{-1}\left(F_{s}(\omega)\right)$$
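For $f(x)=e^{-ax}$ (a > 0) these definitions give $F_{c}(\omega)=a/(a^{2}+\omega^{2})$ and $F_{s}(\omega)=\omega/(a^{2}+\omega^{2})$; the sketch below (Python/SciPy; an added check, with sample values a = 2, ω = 3) evaluates the defining integrals numerically.

```python
# Fourier cosine and sine transforms of f(x) = exp(-a x) with the convention of this section.
import numpy as np
from scipy.integrate import quad

a, w = 2.0, 3.0
f = lambda x: np.exp(-a * x)

Fc = quad(lambda x: f(x) * np.cos(w * x), 0, np.inf)[0]
Fs = quad(lambda x: f(x) * np.sin(w * x), 0, np.inf)[0]
print(Fc, a / (a**2 + w**2))   # both ~ 2/13
print(Fs, w / (a**2 + w**2))   # both ~ 3/13
```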


Remark 3.4:
(i) We can also take $F_{c}(\omega)=\sqrt{\dfrac{2}{\pi}}\displaystyle\int_{0}^{\infty}f(x)\cos(\omega x)\,dx$; then $f(x)=F_{c}^{-1}\left(F_{c}(\omega)\right)=\sqrt{\dfrac{2}{\pi}}\displaystyle\int_{0}^{\infty}F_{c}(\omega)\cos(\omega x)\,d\omega$.
(ii) Similarly, we can take $F_{s}(\omega)=\sqrt{\dfrac{2}{\pi}}\displaystyle\int_{0}^{\infty}f(x)\sin(\omega x)\,dx$; then $f(x)=F_{s}^{-1}\left(F_{s}(\omega)\right)=\sqrt{\dfrac{2}{\pi}}\displaystyle\int_{0}^{\infty}F_{s}(\omega)\sin(\omega x)\,d\omega$.

3.5.4 Complex Form of the Fourier Integral


The real Fourier integral of $f(x)$ is
$$f(x)=\frac{1}{\pi}\int_{0}^{\infty}\int_{-\infty}^{\infty}f(t)\cos(\omega x-\omega t)\,dt\,d\omega=\frac{1}{\pi}\int_{-\infty}^{\infty}f(t)\left[\int_{0}^{\infty}\cos(\omega x-\omega t)\,d\omega\right]dt\tag{3.34}$$
Now, $\cos(\omega x-\omega t)$ is an even function of $\omega$ and $\sin(\omega x-\omega t)$ is an odd function of $\omega$.
$$\therefore\quad\int_{0}^{\infty}\cos(\omega x-\omega t)\,d\omega=\frac{1}{2}\int_{-\infty}^{\infty}\cos(\omega x-\omega t)\,d\omega\tag{3.35}$$
$$\text{and}\quad\int_{-\infty}^{\infty}\sin(\omega x-\omega t)\,d\omega=0\tag{3.36}$$
Multiply (3.36) by $\dfrac{i}{2}$ and add to (3.35):
$$\int_{0}^{\infty}\cos(\omega x-\omega t)\,d\omega=\frac{1}{2}\int_{-\infty}^{\infty}\left[\cos(\omega x-\omega t)+i\sin(\omega x-\omega t)\right]d\omega=\frac{1}{2}\int_{-\infty}^{\infty}e^{i(\omega x-\omega t)}\,d\omega$$
∴ From (3.34),
$$f(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty}f(t)\left[\int_{-\infty}^{\infty}e^{i(\omega x-\omega t)}\,d\omega\right]dt=\frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(t)\,e^{i(\omega x-\omega t)}\,dt\,d\omega$$
which is called the complex form of the Fourier integral.

3.5.5  Fourier Transform and Its Inverse


The complex form of the Fourier integral of $f(x)$ is
$$f(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(t)\,e^{i(\omega x-\omega t)}\,dt\,d\omega=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{i\omega x}\int_{-\infty}^{\infty}f(t)\,e^{-i\omega t}\,dt\,d\omega.$$
We define
$$F\left(f(x)\right)=\int_{-\infty}^{\infty}f(x)\,e^{-i\omega x}\,dx=F(\omega)\ \text{(say)}$$
then
$$f(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{i\omega x}F\left(f(x)\right)d\omega$$
$F\left(f(x)\right)$ is called the Fourier transform of $f(x)$, and
$$f(x)=F^{-1}\left(F(\omega)\right)=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{i\omega x}F(\omega)\,d\omega$$
is called the inverse Fourier transform of $F(\omega)$.

Remark 3.5: We can take any of the following pairs of Fourier transform and inverse Fourier transform of $f(x)$:
(i) $F(\omega)=\displaystyle\int_{-\infty}^{\infty}f(x)\,e^{-i\omega x}\,dx$,  $f(x)=\dfrac{1}{2\pi}\displaystyle\int_{-\infty}^{\infty}F(\omega)\,e^{i\omega x}\,d\omega$
(ii) $F(\omega)=\dfrac{1}{\sqrt{2\pi}}\displaystyle\int_{-\infty}^{\infty}f(x)\,e^{-i\omega x}\,dx$,  $f(x)=\dfrac{1}{\sqrt{2\pi}}\displaystyle\int_{-\infty}^{\infty}F(\omega)\,e^{i\omega x}\,d\omega$
(iii) $F(\omega)=\displaystyle\int_{-\infty}^{\infty}f(x)\,e^{i\omega x}\,dx$,  $f(x)=\dfrac{1}{2\pi}\displaystyle\int_{-\infty}^{\infty}F(\omega)\,e^{-i\omega x}\,d\omega$
(iv) $F(\omega)=\dfrac{1}{\sqrt{2\pi}}\displaystyle\int_{-\infty}^{\infty}f(x)\,e^{i\omega x}\,dx$,  $f(x)=\dfrac{1}{\sqrt{2\pi}}\displaystyle\int_{-\infty}^{\infty}F(\omega)\,e^{-i\omega x}\,d\omega$
Out of these pairs, we shall be taking pair (i).

Remark 3.6: A function $f(x)$ is called self-reciprocal if $F\left(f(x)\right)=\sqrt{2\pi}\,f(\omega)$; if pair (ii) is taken, then $f(x)$ is self-reciprocal if $F\left(f(x)\right)=f(\omega)$.
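A quick numerical check of pair (i) (Python/SciPy; added for illustration): for $f(x)=e^{-|x|}$ the transform is $F(\omega)=2/(1+\omega^{2})$ (see Example 3.56 with a = 1), and the inversion integral returns $f$. Since $f$ is even, only the cosine parts of the integrals survive.

```python
# Pair (i): F(w) = int f(x) e^{-iwx} dx and f(x) = (1/2pi) int F(w) e^{iwx} dw, for f(x) = exp(-|x|).
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-abs(x))
Fw = lambda w: 2.0 / (1.0 + w**2)     # known transform of f

F_num = quad(lambda x: f(x) * np.cos(1.5 * x), -np.inf, np.inf)[0]
print(F_num, Fw(1.5))                  # forward transform at w = 1.5

x0 = 0.7
f_num = quad(lambda w: Fw(w) * np.cos(w * x0), -np.inf, np.inf)[0] / (2 * np.pi)
print(f_num, f(x0))                    # inversion integral at x = 0.7
```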


3.5.6 Spectrum
Let $F(\omega)$ be the Fourier transform of $f(x)$; then the representation of $f(x)$ as
$$f(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty}F(\omega)\,e^{i\omega x}\,d\omega$$
is called the spectral representation of $f(x)$, and $F(\omega)$ is called the spectral density of $f(x)$. Here, $\omega$ is called the frequency of the transform.
The graph of $\left|F(\omega)\right|$ for values of $\omega$ is called the amplitude spectrum of $f(x)$, and $\left|F(\omega)\right|^{2}$ is called the energy of the spectrum.

3.6  Properties of Fourier Transforms


(i)  Linearity property
For any functions f ( x ) and g ( x ) and any constants a and b
(a)  F ( af + bg ) = a F f ( x ) + b F g ( x )
(b)  Fc ( af + bg ) = a Fc ( x ) + b Fc g ( x ), f and g are even functions
(c)  Fs ( af + bg ) = a Fs f ( x ) + b Fs g ( x ) ; f and g are odd functions.

Proof:

(a)  F ( af + bg ) = ∫ af ( x ) + bg ( x ) e
− iω x
dx
−∞

∞ ∞
= a ∫ f ( x ) e − iω x dx + b ∫ g ( x ) e − iω x dx
−∞ −∞ 
= aF f ( x ) + bF g ( x )


(b)  Fc ( af + bg ) = ∫  af ( x ) + bg ( x ) cos (ω x ) dx
0
∞ ∞
= a ∫ f ( x ) cos (ω x ) dx + b ∫ g ( x ) cos (ω x ) dx
0 0 
= a Fc f ( x ) + b Fc g ( x ) 

(c)  Fs ( af + bg ) = ∫  af ( x ) + bg ( x )  sin (ω x ) dx
0
∞ ∞
= a ∫ f ( x ) sin (ω x ) dx + b ∫ g ( x ) sin (ω x ) dx
0 0 
= aFs ( f ( x ) ) + bFs ( g ( x ) )


(ii)  Shifting property


(a)  Shifting property on x axis
If F ( f ( x ) ) = F (ω ) and a is any real number, then


(
F ( f ( x − a ) ) = e − iω a F (ω ) and hence F −1 e − iω a F (ω ) = f ( x − a ). )
(b)  Frequency shifting
If F ( f ( x ) ) = F (ω ) and a is any real number, then


( )
F e iax f ( x ) = F (ω − a ) and hence F −1 ( F (ω − a ) ) = e iax f ( x ).

Proof:
∞ ∞
(a)  F ( f ( x − a ) ) = f ( x − a ) e − iω x dx = e − iω a ∫ f ( x − a) e
− iω ( x − a )

−∞ −∞
dx

= e − iω a ∫ f (t ) e
− iω t
dt (taking x − a = t)
−∞

= e − iω a F (ω )


(
(b) F e iax f ( x ) = ) ∫e iax
f ( x ) e − iω x dx
−∞

∫ f ( x)e dx = F (ω − a )
− i (ω − a ) x
=
−∞ 
(iii)  Change of scale property for any a > 0
1 ω 
(a) If F f ( x ) = F (ω ), then F f ( ax ) = F   .
a a
1 ω 
(b) If Fc f ( x ) = Fc (ω ), then Fc f ( ax ) = Fc   .
a a
1 ω 
(c) If Fs f ( x ) = Fs (ω ), then Fs f ( ax ) = Fs .
a  a 
Proof:
∞ ∞ i ωt
− 1
(a)  F f ( ax ) = ∫ f ( ax ) e − iω x dx = ∫ f (t ) e a
dt (taking ax = t)
−∞ −∞
a
1 ω 
= F
a  a  
∞ ∞
 ωt  1
(b)  Fc f ( ax ) = ∫ f ( ax ) cos (ω x ) dx = ∫ f ( t ) cos   dt (taking ax = t )
0 0  a a
1 ω 
= Fc
a  a  

∞ ∞
 ωt  1
(c)  Fs f ( ax ) = ∫ f ( ax ) sin(ω x ) dx = ∫ f ( t ) sin   dt (taking ax = t )
0 0  a a
1 ω 
= Fs
a  a  
(iv)  Modulus property (or modulation theorem)
If F f ( x ) = F (ω ) , and a is any real number then
1
(a)  F ( f ( x ) cos ( ax ) ) =  F (ω + a ) + F (ω − a ) 
2
i
(b)  F ( f ( x ) sin ( ax ) ) =  F (ω + a ) − F (ω − a ) 
2
1
(c)  F ( F (ω ) cos ( aω ) ) =  f ( x + a ) + f ( x − a ) 
−1

2
−i
(d)  F −1 ( F (ω ) sin ( aω ) ) =  f ( x + a ) − f ( x − a ) 
2

Proof:

(a) F ( f ( x ) cos ( ax ) ) = ∫ f ( x ) cos (ax) e
− iω x
dx
−∞

e iax + e − iax − iω x
= ∫ f ( x)⋅ e dx
2
−∞ 
∞ ∞
1 1
= ∫ f ( x ) e ( ) dx + ∫ f ( x ) e ( ) dx
−i ω − a x −i ω + a x

2 −∞ 2 −∞

1
=  F (ω − a ) + F (ω + a ) 
2 
1
=  F (ω + a ) + F (ω − a )  .
2 

(b) F ( f ( x ) sin ( ax ) ) = ∫ f ( x ) sin (ax) e
− iω x
dx
−∞


 e iax − e − iax  − iω x
= ∫ f ( x) e dx
 2i 
−∞


i
∫ f ( x ) e − e  e dx
− iax − iω x
= iax

2 −∞

i 
∞ ∞
=  ∫ f ( x ) e ( ) dx − ∫ f ( x ) e ( ) dx 
−i ω + a x −i ω − a x

2  −∞ 
−∞

i
=  F (ω + a ) − F (ω − a )  .
2 

∞ ∞
1 1 eiω a + e − iω a iω x
(c)  F ( F (ω ) cos (ω a ) ) = ∫ F (ω ) cos (ω a) eiω x dω = ∫ F (ω )
−1
e dω
2π −∞
2π −∞
2

1 1 
∞ ∞
1
F (ω ) e ( ∫ F (ω ) e
i x + a )ω i ( x − a )ω
= 
2  2π ∫
−∞
dω +
2π −∞
dω 

1
=  f ( x + a ) + f ( x − a )  .
2
∞ ∞
1 1 eiω a − e − iω a iω x
(d)  F −1 ( F (ω ) sin (ω a ) ) = ∫ F (ω ) sin (ω a) e
iω x
dω = ∫ F (ω ) e dω
2π −∞
2π −∞
2i

1 1 
∞ ∞
1
∫ F (ω ) e ∫ F (ω ) e
i ( x + a )ω i ( x − a )ω
=  dω − dω 
2i  2π −∞
2π −∞ 
−i
=  f ( x + a ) − f ( x − a ) .
2 

(v)  Modulation property of sine and cosine transforms


If Fc f ( x ) = Fc (ω ), Fs f ( x ) = Fs (ω ) and a is any real number then

1
(a)  Fs ( f ( x ) cos ( ax ) ) =  Fs (ω + a ) + Fs (ω − a ) 
2
1
(b)  Fs ( f ( x ) sin ( ax ) ) =  Fc (ω − a ) − Fc (ω + a ) 
2
1
(c)  Fc ( f ( x ) cos ( ax ) ) =  Fc (ω + a ) + Fc (ω − a ) 
2
1
(d)  Fc ( f ( x ) sin ( ax ) ) =  Fs (ω + a ) − Fs (ω − a ) 
2
1
(e)  Fs−1 ( Fs (ω ) cos ( aω ) ) =  f ( x + a ) + f ( x − a ) 
2
1
(f)  Fc−1 ( Fc (ω ) cos ( aω ) ) =  f ( x + a ) + f ( x − a ) 
2

Proof:
∞ ∞
1
(a)  Fs ( f ( x ) cos ( ax ) ) = ∫ f ( x ) cos ( ax ) sin (ω x ) dx = f ( x ) sin ( (ω + a) x ) + sin ( (ω − a) x )  dx
0
2 ∫0
∞ ∞
1 1
= ∫ f ( x ) sin ( (ω + a) x ) dx + ∫ f ( x ) sin ( (ω − a) x ) dx
20 20

1
=  Fs (ω + a ) + Fs (ω − a ) .
2

∞ ∞
1
(b)  Fs ( f ( x ) sin ( ax ) ) = ∫ f ( x ) sin ( ax ) sin (ω x ) dx = f ( x ) cos ((ω − a) x ) − cos ((ω + a) x ) dx
0
2 ∫0
1 
∞ ∞
=  ∫ f ( x ) cos ( (ω − a) x ) dx − ∫ f ( x ) cos ( (ω + a) x ) dx 
2 0 
0

1
=  Fc (ω − a ) − Fc (ω + a )  .
2 
∞ ∞
1
(c)  Fc ( f ( x ) cos ( ax ) ) = ∫ f ( x ) cos ( ax ) cos (ω x ) dx = f ( x ) cos ((ω + a) x ) + cos ((ω − a) x ) dx
0
2 ∫0
1 
∞ ∞
=  ∫ f ( x ) cos ( (ω + a) x ) dx + ∫ f ( x ) cos ( (ω − a) x ) dx 
2 0 
0

1
=  Fc (ω + a ) + Fc (ω − a )  .
2 
∞ ∞
1
(d) Fc ( f ( x ) sin ( ax ) ) = ∫ f ( x ) sin ( ax ) cos (ω x ) dx = f ( x ) sin ( (ω + a) x ) − sin ( (ω − a) x )  dx
0
2 ∫0
1 
∞ ∞
=  ∫ f ( x ) sin ( (ω + a) x ) dx − ∫ f ( x ) sin ( (ω − a) x ) dx 
2 0 
0

1
=  Fs (ω + a ) − Fs (ω − a )  .
2 

2
(e) Fs−1 ( Fs (ω ) cos ( aω ) ) = Fs (ω ) cos ( aω ) sin (ω x ) dω
π ∫0
12 

=  ∫ Fs (ω ) sin ( ( x + a)ω ) dω + sin ( ( x − a)ω ) dω  
2 π 0 

12 
∞ ∞
2
=  ∫ Fs (ω ) sin ( ( x + a)ω ) dω + ∫ Fs (ω ) sin ( ( x − a)ω ) dω 
2 π 0 π 0 
1
=  f ( x + a ) + f ( x − a )  .
2 

2
(f)  Fc−1 ( Fc (ω ) cos ( aω ) ) = F (ω ) cos ( aω ) cos (ω x ) dω
π ∫0 c
1 2 

=  ∫ Fc (ω ) cos (( x + a)ω ) + cos (( x − a)ω ) dω 


2 π 0 
12 
∞ ∞
2
=  ∫ Fc (ω ) cos ( ( x + a)ω ) dω + ∫ Fc (ω ) cos ( ( x − a)ω ) dω  
2 π 0 π 0 
1
= [ f ( x + a) + f ( x − a) ]
2 

(vi)  Conjugate property


If F f ( x ) = F(ω ), then

(a) F f ( − x ) = F(ω ),
(b) F f ( − x ) = F( −ω ) and

(c) F ( f ( x )) = F( −ω ).

Proof:

(a) F f ( − x ) = ∫ f (−x) e
− iω x
dx
−∞
−∞

∫ f ( t ) e ( −dt ) (taking x = −t)


iω t
=

∫ f ( t ) e ( dt )
iω t
=
−∞ 

∫ f (t ) e
− iω t
= dt
−∞ 
 ∞

=  ∫ f ( t ) e − iωt dt  = F (ω )
 −∞  

(b) F f ( − x ) = ∫ f (−x) e
− iω x
dx
−∞
−∞

∫ f ( t ) e ( −dt ) (taking x = −t)


iω t
=

∫ f (t ) e
− i ( −ω ) t
= dt
−∞ 
= F ( −ω )


( )

(c) F f ( x ) = ∫ f ( x)e
− iω x
dx
−∞

∫ f (x) e
iω x
= dx
−∞

∫ f (x) e
− i (− ω ) x
= dx
−∞ 
= F ( −ω )


(vii)  Differentiations of transforms w.r.t. frequency


If F ( f ( x ) ) = F (ω ) , Fs ( f ( x ) ) = Fs (ω ) and Fc ( f ( x ) ) = Fc (ω ) exist then
dn
(
(a) F x n f ( x ) = i n) dω n
F (ω ) ; n ∈ N

dn
( )
n
(b) Fs x n f ( x ) = ( −1) 2 Fs (ω ) if n is even
dω n
n +1
dn
= ( −1) 2 Fc (ω ) if n is odd
dω n

dn
( )
n
(c) Fc x n f ( x ) = ( −1) 2 Fc (ω ) if n is even
dω n
n −1
dn
= ( −1) 2 Fs (ω ) if n is odd
dω n
Proof:

(a)   F (ω ) = F f ( x ) = ∫ f ( x)e
− iω x
dx
−∞

dn
F (ω ) = ∫ f ( x ) ( −ix ) e − iω x dx
n

dω n
−∞ 

1
= ( −i )
n
∫ f ( x) x e
n − iω x
dx =
i n (
F xn f ( x ) )
−∞
n

(
∴  F x n f ( x ) = i n) d
dω n
F (ω ) ; n ∈ N


(b)   Fs (ω ) = Fs f ( x ) = ∫ f ( x ) sin (ω x )dx
0

Fc (ω ) = Fc ( f ( x ) ) = ∫ f ( x ) cos (ω x )dx

0 
∴ if n is even natural number
dn ∞ n
then Fs ( ω ) = ∫0 f ( x ) ( −1) 2 x sin (ω x ) dx
n

dω n 
n ∞
= ( −1) 2 ∫ f ( x) x
n
sin (ω x ) dx
0 
( )
n
= ( −1) Fs x f ( x )
2
n

n

( ) d
n
∴ Fs x n f ( x ) = ( −1) 2 Fs (ω ) ; n = 2, 4, 6,…
dω n 
and if n is odd natural number
dn ∞ n +1
then Fc ( ω ) = ∫0 f ( x ) ( −1) 2 x sin (ω x ) dx
n

dω n 

n +1
= ( −1) 2
(
Fs x n f ( x )  )
n +1 n
∴ ( )
Fs x n f ( x ) = ( −1) 2
d
dω n
Fc (ω ) ; n = 1, 3, 5,…

(c)  It can be proved as proof of part (b).
(viii)  Transform of derivatives
(a) Let f ( x ) be continuous and f ( ) ( x ) be piecewise continuous on every finite interval
k


(−l, l) and ∫−∞
f(
k −1)
( x ) dx converges for k = 1, 2,… , n and f (
k −1)
( x) → 0 as x → ±∞ for
k = 1, 2,… n

then F f(( n)
( x ) ) = ( iω ) F ( f ( x ) ).
n

(b) Let f ( x ) be continuous on [0, ∞), f ( x ) → 0 as x → ∞ and f ′ ( x ) is piecewise continuous


on every finite interval [0, l] then Fc f ′ ( x ) = ω Fs ( f ( x ) ) − f ( 0 ) and Fs f ′ ( x ) = −ω Fc ( f ( x ) ).

(c) If f ( x ) and f ′ ( x ) are continuous in ( 0, ∞ ), f ( x ) → 0, f ′ ( x ) → 0 as x → ∞ and f ′′ ( x ) is


piecewise continuous on every subinterval [0, l] then
Fc ( f ′′ ( x ) ) = −ω 2 Fc ( f ( x ) ) − f ′ ( 0 ) 

and Fs ( f ′′ ( x ) ) = −ω 2 Fs ( f ( x ) ) + ω f ( 0 )

Proof: (a) We shall prove the result by mathematical induction
F ( f ′ ( x )) = ∫

f ′ ( x ) e − iω x dx
−∞ 
Integrating by parts
(
F ( f ′ ( x )) = e − iω x f ( x ) )

− ∫ ( −iω ) e − iω x f ( x ) dx


−∞ −∞ 
dx = ( iω ) F ( f ( x ) )

= iω ∫ f ( x)e − iω x
−∞ 
∴ Result holds for n = 1
Let F f(( k)
( x ) ) = ( iω ) F ( f ( x ) )
k


( ( x ) ) = ∫−∞ e

then F f ( k +1) − iω x
f ( k +1)
( x ) dx

Integrate by parts
( ) ( ( x ) e −iω x )−∞ − ∫−∞ ( −iω ) e −iω x f ( k ) ( x ) dx
∞ ∞
F f k +1 ( x ) = f (
k)


( )

= ( iω ) ∫ e − iω x f ( ( x ) dx = ( iω ) F f ( ( x)
k) k)

−∞

= ( iω )( iω ) F f ( x ) = ( iω ) F f ( x)
k k +1



Hence the result holds by the principle of mathematical induction.


∴ ( )
F f (n) ( x ) = (iω ) F f ( x ) ;
n
n ∈N


(b)  Fc ( f ′ ( x ) ) = ∫ f ′ ( x ) cos (ω x ) dx

0
Integrate by parts
Fc ( f ′ ( x ) ) = ( f ( x ) cos (ω x ) )0 − ∫ ( −ω ) sin (ω x ) f ( x ) dx
∞ ∞

0 

= − f ( 0 ) + ω ∫ f ( x ) sin (ω x ) dx
0 
= ω Fs ( f ( x ) ) − f ( 0 )


and Fs f ′ ( x ) = ∫ f ′ ( x ) sin (ω x ) dx
0 
Integrate by parts
Fs ( f ′ ( x ) ) = ( f ( x ) sin (ω x ) )0 − ∫ ω cos (ω x ) f ( x ) dx
∞ ∞

0 
= −ω Fc ( f ( x ) )


(c)  Fc ( f ′′ ( x ) ) = ∫ f ′′ ( x ) cos (ω x ) dx

0 
Integrate by parts
Fc ( f ′′ ( x )) = ( f ′ ( x ) cos (ω x ) )0 − ∫ ( −ω ) sin (ω x ) f ′ ( x ) dx
∞ ∞

0 
= − f ′ ( 0 ) + ω Fs ( f ′ ( x ) )

= − f ′ ( 0 ) + ω  −ω Fc ( f ( x ) )   (from part (b))

= −ω 2 Fc ( f ( x ) ) − f ′ ( 0 )

Fs ( f ′′ ( x ) ) = ∫ f ′′ ( x ) sin (ω x ) dx

and
0 
= ( f ′ ( x ) sin (ω x ) )0 − ∫ ω cos (ω x ) f ′ ( x ) dx 
∞ ∞
(integration by parts)
0

= −ω Fc ( f ′ ( x ) ) = −ω ω Fs ( f ( x ) ) − f ( 0 )   (from part (b))

= −ω 2 Fs ( f ( x ) ) + ω f ( 0 )
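Part (b) of this property can be verified numerically. The sketch below (Python/SciPy; added for illustration) takes $f(x)=e^{-x}$, so that $f'(x)=-e^{-x}$ and $f(0)=1$, and checks $F_{c}(f')=\omega F_{s}(f)-f(0)$ and $F_{s}(f')=-\omega F_{c}(f)$ at a sample frequency.

```python
# Check of Fc(f') = w*Fs(f) - f(0) and Fs(f') = -w*Fc(f) for f(x) = exp(-x), f(0) = 1.
import numpy as np
from scipy.integrate import quad

w = 2.5
f = lambda x: np.exp(-x)
fp = lambda x: -np.exp(-x)

Fc = lambda g: quad(lambda x: g(x) * np.cos(w * x), 0, np.inf)[0]
Fs = lambda g: quad(lambda x: g(x) * np.sin(w * x), 0, np.inf)[0]

print(Fc(fp), w * Fs(f) - 1.0)   # both ~ -1/(1 + w^2)
print(Fs(fp), -w * Fc(f))        # both ~ -w/(1 + w^2)
```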


(ix)  Fourier Transform of Integral Function



Let f ( x ) be piecewise continuous on every interval (−l, l) and ∫ f ( x ) dx converges, then
−∞
1
F  ∫ f ( t ) dt  = F ( f ( x ) ) , provided F ( f ( x ) ) = F (ω ) = 0 for ω = 0 
x

 −∞  iω

Proof:
φ ( x) = ∫ f ( t ) dt 
x
Let
−∞

(
f (t ) dt = F ( f ( x )) )

∴ lim φ ( x ) = ∫ = 0 and φ ′ ( x ) = f ( x )
x →∞ −∞ ω =0 
∴ F ( f ( x )) = F (φ ′ ( x )) = (iω ) F (φ ( x ))

1
∴ F (φ ( x )) = F ( f ( x ))
iω 
1

F ∫ f (t ) dt =  F ( f ( x )) , provided F ( f ( x ) ) = F (ω ) = 0 for ω = 0 
x

 −∞  iω

3.7 Convolution Theorem and Parseval’s Identities


3.7.1 Convolution
Let $f(x)$ and $g(x)$ be piecewise continuous on every interval $(-l, l)$ and let $\displaystyle\int_{-\infty}^{\infty}\left|f(x)\right|dx$ and $\displaystyle\int_{-\infty}^{\infty}\left|g(x)\right|dx$ converge; then the convolution of $f$ and $g$, denoted by $(f\ast g)(x)$, is defined as
$$(f\ast g)(x)=\int_{-\infty}^{\infty}f(\tau)\,g(x-\tau)\,d\tau$$
We have $(f\ast g)(x)=(g\ast f)(x)$.

3.7.2 Convolution Theorem (or Faltung Theorem) for Fourier Transforms


Theorem 3.2  Let $f(x)$ and $g(x)$ be piecewise continuous on every interval $(-l, l)$ and let $\displaystyle\int_{-\infty}^{\infty}\left|f(x)\right|dx$ and $\displaystyle\int_{-\infty}^{\infty}\left|g(x)\right|dx$ converge. Let $F\left(f(x)\right)=F(\omega)$ and $F\left(g(x)\right)=G(\omega)$. Then
(a) $F\left[(f\ast g)(x)\right]=F(\omega)\,G(\omega)$  (convolution w.r.t. $x$)
(b) $F\left(f(x)\,g(x)\right)=\dfrac{1}{2\pi}(F\ast G)(\omega)$  (convolution w.r.t. frequency)

Proof: (a) F ( f ∗ g )( x )  = ∫ ( f ∗ g )( x ) e − iω x dx
−∞ 
∞ ∞
=∫ ∫ f (τ ) g ( x − τ ) dτ e − iω x dx
−∞ −∞ 
∞ ∞
f (τ ) e g ( x −τ ) e
− iω ( x −τ )
=∫ − iωτ
∫ dx dτ
−∞ −∞ 
∞ ∞
=∫ f (τ ) e − iωτ
∫ g (u ) e − iω u
du dτ (taking x − τ = u)
−∞ −∞
∞ ∞
=∫ f (τ ) e − iωτ dτ ∫ g ( u ) e − iω u du
−∞ −∞ 
= F (ω ) G (ω )


1 ∞
F −1 ( F ∗ G )(ω ) = ( F ∗ G )(ω ) eiω x dω
2π ∫−∞
(b)

1 ∞ ∞
F (τ ) G (ω − τ ) dτ e iω x dω
2π ∫−∞ ∫−∞
=

1 ∞ ∞
F (τ ) G (ω − τ ) e dω dτ
2π ∫−∞ ∫−∞
iω x
=

1 ∞ ∞
F (τ ) e iτ x ∫ G (ω − τ ) e ( ) dω dτ
2π ∫−∞
i ω −τ x
=
−∞

1 ∞ ∞
F (τ ) e iτ x ∫ G ( u ) e iux du dτ (taking ω − τ = u)
2π ∫−∞
=
−∞

1 ∞ ∞
= ∫ F (τ ) e iτ x dτ ∫ G ( u ) e iux du
2π −∞ −∞

1 ∞ ∞
F ( ω ) e dω ∫ G ( ω ) e dω
2π ∫−∞
iω x iω x
=
−∞

1
=  2π f ( x ) .2π g ( x )  = 2π f ( x ) g ( x )
2π  
1
∴ F −1 ( F ∗ G )(ω ) = f ( x ) g ( x )
2π 
1
∴ F ( f ( x ) g ( x )) = ( F ∗ G )(ω )
2π 
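Part (a) can be illustrated numerically (an added sketch in Python/SciPy, using $f=g=e^{-|x|}$, whose common transform is $2/(1+\omega^{2})$): the convolution is formed by quadrature, transformed over a truncated range where it is negligible, and compared with $F(\omega)G(\omega)$.

```python
# Illustration of F[(f*g)(x)] = F(w) G(w) for f = g = exp(-|x|), F = G = 2/(1+w^2).
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-abs(x))
conv = lambda x: quad(lambda t: f(t) * f(x - t), -np.inf, np.inf)[0]   # (f*g)(x)

w = 1.2
# (f*g)(x) = (1 + |x|) e^{-|x|} decays fast, so the transform integral is truncated at |x| = 30;
# the integrand is even, hence only the cosine part contributes.
lhs = quad(lambda x: conv(x) * np.cos(w * x), -30, 30, limit=400)[0]
rhs = (2 / (1 + w**2)) ** 2
print(lhs, rhs)   # both ~ 0.672
```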

3.7.3  Parseval’s Identities (Energy Theorem)


Theorem 3.3  If $f(x)$ and $g(x)$ are piecewise continuous in every finite interval $(-l, l)$, $\displaystyle\int_{-\infty}^{\infty}\left|f(x)\right|dx$ and $\displaystyle\int_{-\infty}^{\infty}\left|g(x)\right|dx$ converge, and $F\,f(x)=F(\omega)$, $F\,g(x)=G(\omega)$, $F_{s}\,f(x)=F_{s}(\omega)$, $F_{s}\,g(x)=G_{s}(\omega)$, $F_{c}\,f(x)=F_{c}(\omega)$ and $F_{c}\,g(x)=G_{c}(\omega)$, then
$$\text{(i)}\quad\frac{1}{2\pi}\int_{-\infty}^{\infty}F(\omega)\,\overline{G(\omega)}\,d\omega=\int_{-\infty}^{\infty}f(x)\,\overline{g(x)}\,dx$$
$$\text{(ii)}\quad\frac{1}{2\pi}\int_{-\infty}^{\infty}\left|F(\omega)\right|^{2}d\omega=\int_{-\infty}^{\infty}\left|f(x)\right|^{2}dx$$
$$\text{(iii)}\quad\frac{2}{\pi}\int_{0}^{\infty}F_{s}(\omega)\,\overline{G_{s}(\omega)}\,d\omega=\int_{0}^{\infty}f(x)\,\overline{g(x)}\,dx$$
$$\text{(iv)}\quad\frac{2}{\pi}\int_{0}^{\infty}F_{c}(\omega)\,\overline{G_{c}(\omega)}\,d\omega=\int_{0}^{\infty}f(x)\,\overline{g(x)}\,dx$$
$$\text{(v)}\quad\frac{2}{\pi}\int_{0}^{\infty}\left|F_{s}(\omega)\right|^{2}d\omega=\frac{2}{\pi}\int_{0}^{\infty}\left|F_{c}(\omega)\right|^{2}d\omega=\int_{0}^{\infty}\left|f(x)\right|^{2}dx$$
Identities (ii) and (v) are called Parseval's identities or energy theorems.
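Before turning to the proof, identity (ii) can be checked on a concrete function (an added numerical sketch in Python/SciPy): for $f(x)=e^{-|x|}$, $F(\omega)=2/(1+\omega^{2})$, and both sides of (ii) equal 1.

```python
# Parseval's identity (ii): (1/2pi) * int |F(w)|^2 dw = int |f(x)|^2 dx for f(x) = exp(-|x|).
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-abs(x))
F = lambda w: 2.0 / (1.0 + w**2)

energy_x = quad(lambda x: f(x)**2, -np.inf, np.inf)[0]
energy_w = quad(lambda w: F(w)**2, -np.inf, np.inf)[0] / (2 * np.pi)
print(energy_x, energy_w)   # both equal 1
```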

Proof:  (i)  F ( g ( x ) ) = G (ω ) 
1 ∞
∴ g ( x ) = F −1 G (ω ) = ∫ G (ω ) e iω x dω
2π −∞

Take conjugate of both sides
1 ∞
g (x) = ∫ G (ω ) e − iω x dω
2π −∞

∞ 1 ∞ ∞
∴ ∫ f ( x ) g ( x ) dx = ∫ ∫ f ( x ) G (ω ) e − iω x dω dx
−∞ 2π −∞ −∞

1 ∞ ∞
= ∫ ∫ f ( x )G (ω ) e − iω x dx dω
2π −∞ −∞

1
G (ω )  ∫ f ( x ) e − iω x dx  dω
∞∞
=
2π ∫
 −∞
−∞ 

1 ∞ 1 ∞
G ( ω ) F ( ω ) dω = F ( ω ) G ( ω ) dω
2π ∫−∞ 2π ∫−∞
=


(ii)  In part (i), taking g ( x ) = f ( x )



∞ 1 ∞
∫ f ( x ) f ( x ) dx = ∫ F ( ω ) F ( ω ) dω
−∞ 2π −∞

∞ 1 ∞
f ( x ) dx = F (ω ) dω
2 2
∴ ∫ −∞ 2π ∫ −∞


(iii) Fs g ( x ) = Gs (ω )

2 ∞
g ( x ) = Fs−1Gs (ω ) = G (ω ) sin (ω x ) dω
π ∫0 s


∞ 2 ∞ ∞
∫ f ( x ) g ( x ) dx = f ( x ) Gs (ω ) sin (ω x ) dω dx
π ∫0 ∫0

0

2 ∞ ∞
f ( x ) Gs (ω ) sin (ω x ) dx dω
π ∫0 ∫0
=

2 ∞
Gs (ω )  ∫ f ( x ) sin (ω x ) dx  dω

= ∫
π 0  0  
2 ∞
Gs (ω ) Fs (ω ) dω
π ∫0
=

∞ ∞
2
F (ω ) Gs (ω ) dω = ∫ f ( x ) g ( x ) dx
π ∫0 s

0 

(iv) Fc g ( x ) = Gc (ω ) 
2 ∞
g ( x ) = Fc−1Gc (ω ) = G (ω ) cos (ω x ) dω
π ∫0 c


∞ 2 ∞ ∞
∫ f ( x ) g ( x ) dx = f ( x ) Gc (ω ) cos (ω x ) dω dx
π ∫0 ∫0

0

2 ∞ ∞
= ∫ ∫ f ( x ) Gc (ω ) cos (ω x ) dx dω
π 0 0 
2 ∞
= ∫ Gc (ω )  ∫ f ( x ) cos (ω x )dx  dω


π 0  0  
2 ∞
= ∫ Gc (ω ) Fc (ω ) dω
π 0 
2 ∞ ∞
∴ ∫ Fc (ω ) Gc (ω ) dω = ∫ f ( x ) g ( x ) dx
π 0 0

(v)  In parts (iii) and (iv), take g ( x ) = f ( x ) and use

 Fs  f ( x )  = Fs (ω ) and Fc  f ( x )  = Fc (ω ) (clear from definitions)
2 ∞ 2 ∞ ∞
Fs (ω ) dω = ∫ Fc (ω ) dω = ∫ f ( x ) dx
2 2 2

π ∫ 0 π 0 0


3.7.4 Relation between Fourier and Laplace Transforms


If $\pounds\,g(x)$ exists and $\pounds\,g(x)=G(s)$, then
$$F\left(e^{-ax}g(x)H(x)\right)=G(a+i\omega),\ a>0;\ \text{where } H(x) \text{ is the unit step function.}$$
Proof:
$$F\left(e^{-ax}g(x)H(x)\right)=\int_{0}^{\infty}e^{-ax}g(x)\,e^{-i\omega x}\,dx;\quad a>0$$
$$=\int_{0}^{\infty}g(x)\,e^{-(a+i\omega)x}\,dx=\int_{0}^{\infty}g(x)\,e^{-sx}\,dx\quad\text{where } a+i\omega=s$$
$$=\pounds\left(g(x)\right)=G(s)=G(a+i\omega)$$
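As a numerical illustration of this relation (an added sketch in Python/SciPy): take $g(x)=1$, whose Laplace transform is $G(s)=1/s$; the relation then says $F\left(e^{-ax}H(x)\right)=1/(a+i\omega)$.

```python
# F(e^{-ax} g(x) H(x)) = G(a + i w) checked for g(x) = 1, G(s) = 1/s.
import numpy as np
from scipy.integrate import quad

a, w = 1.5, 2.0
re = quad(lambda x: np.exp(-a * x) * np.cos(w * x), 0, np.inf)[0]
im = quad(lambda x: -np.exp(-a * x) * np.sin(w * x), 0, np.inf)[0]
print(complex(re, im), 1 / (a + 1j * w))   # both ~ 0.24 - 0.32j
```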


Example 3.52: Using Fourier integral, show that


π
∞ 1 − cos (πλ )  ; 0< x<π
∫0 λ
sin ( λ x ) d λ = 2
0 ; x>π

Solution: Consider the function

π
 ; 0< x<π
f (x) =  2
0 ; x>π 
Its Fourier sine integral is
2 ∞ ∞
f ( x) = f ( t ) sin ( λ x ) sin (λ t ) dt d λ
π ∫0 ∫0


2 ∞ π π
= ∫ sin (λ x ) ∫ sin (λ t ) dt d λ
π 0 0 2

π
∞  1 
= ∫ sin (λ x )  − cos (λ t )  d λ
0
 λ 0 
∞ 1 − cos (πλ )
=∫ sin (λ x ) d λ
0 λ 
π
∞ 1 − cos (πλ )  ; 0< x<π
\ ∫ sin ( λ x ) d λ =  2
0 λ 0 ; x>π

Example 3.53: Use Fourier integral to prove that
π
∞ sin ( λ x )sin (πλ )  sin x; 0 < x < π

0 1− λ2
dλ =  2
 0 ; x > π

Solution: Consider the function
π
 sin x; 0 < x < π
f ( x) =  2
 0 ; x > π

Its Fourier sine integral is
2 ∞ ∞
f ( x) = f ( t ) sin ( λ x ) sin (λ t ) dt d λ
π ∫0 ∫0


2 ∞ π π
= ∫ sin ( λ x ) ∫ sin t sin ( λt ) dt d λ
π 0 0 2

∞ π 1
= ∫ sin ( λ x ) ∫ ( cos ( λ − 1) t ) − ( cos ( λ + 1) t )  dt d λ
0 2
0

π
∞ 1  1 1 
=∫ sin ( λ x )  ( sin ( λ − 1) t ) − ( sin ( λ + 1) t ) d λ
0 2  ( λ − 1) λ +1  0

∞ 1  1 1 
=∫ sin ( λ x )  sin ( λπ − π ) − sin (π + λπ )  d λ
0 2  ( λ − 1) λ +1 


∞ 1  1 1 
=∫ sin (λ x )  sin (π − λπ ) + sin ( λπ )  d λ

0 2  (1 − λ) 1+ λ  
∞ 1  1 1 
=∫ sin (λ x )  sin ( λπ ) + sin ( λπ )  d λ

0 2  (1 − λ) 1+ λ  

1  1 1  
= ∫ sin ( λ x )  +  sin (πλ )  d λ
0
2  1 − λ 1 + λ   
∞ sin (πλ ) sin ( λ x )
=∫ dλ
0 1 − λ2 
π
∞ sin ( λ x ) sin (πλ )  sin x; 0 < x < π
∴ ∫ dλ =  2
0 1− λ 2
 0 ; x > π

Example 3.54: For a > 0, t > 0, show that
∞ cos (ωt ) π − at
(i)  ∫
0 ω +a
2 2
dω =
2 a
e

∞ ω sin (ωt ) π
(ii)  ∫
0 ω +a
2 2
dω = e − at
2
Solution:
(i) Consider the function
π − at
f (t ) = e ; t>0
2a 
Its Fourier cosine integral is
2 ∞ ∞
f (t ) = f ( x ) cos (ωt ) cos (ω x ) dx dω
π ∫0 ∫0


2 ∞ ∞ π
= ∫ cos (ωt ) ∫ e − ax cos (ω x ) dx dω
π 0 0 2a


1 ∞  e − ax 
2 (
= ∫ cos (ω t )  2 −a cos (ω x ) + ω sin (ω x ) )  dω
a ω + a
0
0 
1 ∞ a
= ∫ cos (ωt ) 2 dω  (∵ a > 0 )
a 0 ω + a2
∞ cos (ω t ) π − at
\ ∫ d ω = f (t ) = e ; t>0
0 ω 2 + a2 2a 
(ii) Fourier sine integral of
π − at
f (t ) = e , t > 0 is
2a

2 ∞ ∞
f (t ) = f ( x ) sin (ωt ) sin (ω x ) dx dω
π ∫0 ∫0 
2 ∞ ∞ π
= ∫ sin (ωt ) ∫ − ax
e sin (ω x ) dx dω
π 0 0 2a 

1 ∞  e − ax

2 (
= ∫ sin (ωt )  2 −a sin (ω x ) − ω cos(ω x ) )  dω
a 0 ω + a 0 
1 ∞ ω sin (ωt )
= ∫ dω  (∵ a > 0 )
a 0 ω 2 + a2
∞ ω sin (ω t ) π − at
∴ ∫0 ω 2 + a2 dω = a f (t ) = 2 e ; t > 0

sin x ; 0 < x < π
Example 3.55: If f ( x ) =  then prove that
 0 ; x < 0 and x > π
1 ∞ cos (λ x ) + cos ( λ (π − x ) )
f ( x) = ∫ dλ 
π 0 1− λ2
πs

cos
and hence find the value of the integral ∫ 2 ds .
0 1 − s2
Solution: Fourier integral of
sin x ; 0 < x < π
f ( x) =  is
 0 ; x < 0 and x > π
1 ∞ ∞
f ( x ) = ∫ ∫ f ( t ) cos ( λ t − λ x ) dt d λ
π 0 −∞ 
1 ∞ π
= ∫ ∫ sin t cos ( λ t − λ x ) dt d λ
π 0 0 
1 ∞ π i (λt − λ x )
= Re ∫ ∫ e sin t dt d λ  (Re means ‘real part of ’)
π 0 0
1 ∞ π
= Re ∫ e − iλ x ∫ e iλt sin t dt d λ
π 0 0

π
− iλ x  e 
iλ t
1 ∞
= Re ∫0 e 1 − λ 2 (i λ sin t − cos t )  d λ
π 0 
− iλ x
1 ∞ e
= Re ∫ e + 1 d λ
iλπ

π 0 1− λ2  
i λ (π − x )
1 ∞ + e − iλ x
e
= Re ∫0 1 − λ 2 d λ 
π
1 ∞ cos ( λ (π − x ) ) + cos (λ x )
= ∫ dλ
π 0 1− λ2 

∞ cos ( λ x ) + cos ( λ (π − x ) )  π sin x; 0 < x < π


∴ ∫ dλ = π f (x) =  
1− λ  0 ; x < 0 and x > π
0 2

π
Take x = and replace λ by s
2 πs

2 cos
∫0 1 − s22 ds = π 
πs
cos
∞ π
⇒ ∫0 1 − s22 ds = 2
Example 3.56: Find the Fourier transforms of the functions
 (i)  f ( t ) = e ; a > 0    (ii) f ( t ) = te
−a t −a t
; a>0
(iii)  e − tU (t ) where U (t ) is unit step function

  f (t ) = e
−a t
Solution:  (i)  ; a>0

e ; t ≥ 0
− at
∴ f (t ) =  at
e ; t < 0 
F ( f ( t )) = ∫
∞ ∞
f (t ) e − iω t dt = ∫ e at e − iωt dt + ∫ e − at e − iωt dt
0

−∞ −∞ 0 
1 1
( ) ( )
0 ∞
= e at e − iωt − e − at e − iωt
a − iω −∞ a + iω 0

1 1 2a
= + = 2  (∵ a > 0)
a − iω a + iω ω + a 2

f ( t ) = te
−a t
(ii) ; a>0

t e at ; t < 0
∴ f (t ) =  − at
t e ; t ≥ 0
F ( f ( t )) = ∫
∞ ∞
f (t ) e − iω t dt = ∫ te at e − iω t dt + ∫ te − at e − iω t dt
0

−∞ −∞ 0 
0
  1  
a − iω t  1
= t  e( )  −  e ( a − iω )t  
  ( a − iω )   ( a − iω ) 
2
  −∞ 

 t 1 
e( ) −
− ( a + iω ) t
+  
− a + iω t
e
 − ( a + iω )
 ( a + iω )
2

0 

t 1
Now, lim t e at = lim = lim (L’ Hospital rule)
t →−∞ t →−∞ e − at t →−∞ −a e − at
=0 (∵ a > 0) 

t 1
and lim t e − at = lim at
= lim (L’ Hospital rule)
t →∞ t →∞ e t →∞ a e at
=0 (∵ a > 0) 

− ( a + iω ) + ( a − iω )
2 2
1 1
∴ F ( f ( t )) = − + =
( a − iω ) ( a + iω ) (ω )
2 2 2
2
+ a2

4 aiω
=−
(ω )
2
2
+ a2

Other Method
from part (i)


F e ( ) = ω 2+aa
−a t
2

2

∴ (
F te −a t
) =i
d 2a
=−
4 aiω
dω ω + a ( )
2 2 2
ω 2 + a2


0 ; t < 0
(iii) f (t ) = e − t U (t ) =  − t
e ; t ≥ 0 

F ( f ( t )) = ∫
∞ ∞
∴ f (t ) e − iω t dt = ∫ e − t e − iω t dt ;
−∞ 0 
1 e −(t + iωt )  = 1 = 1 − iω

=−
(1 + iω )   0 1 + iω ω 2 + 1

Example 3.57: Find the 2Fourier transform of the function f ( x ) = e − ax 2
; a > 0 and hence find the
Fourier transform of e − x / 2.
f ( x ) = e − ax ; a > 0
2
Solution:

∴ F ( f ( x )) = ∫ e

− ax 2
e − iω x
dx = ∫ e
∞ (
− ax 2 + iω x ) dx
−∞ −∞ 
2
 iω 
∞ − a x + 
=∫ e
2
 2 a
⋅ e −ω / 4a
dx
−∞ 
1 ∞ π −ω 2 / 4 a  iω 

2 2
= e −ω / 4a
e− y dy = e   taking ax + = y
a −∞ a  2 a 
1
Take a =
2
(
F e− x
2
/2
)= 2π e −ω
2
/2



Example 3.58: Obtain the Fourier transform of the function f ( x )  given by


1 − x ; x ≤ 1
2

f ( x) = 
 0 ; otherwise

∞ x cos x − sin x x
and hence evaluate ∫0 x 3
cos dx .
2

F ( f ( x )) = ∫ ( )

f ( x ) e − iω x dx = ∫ 1 − x 2 e − iω x dx
1
Solution:
−∞ −1 
( ) ( cos (ω x) − i sin (ω x) ) dx 
1
= ∫ 1− x 2
−1

Now, (1 − x ) cos (ω x) is an even function of x


2

and (1 − x ) sin (ω x) is an odd function of x


2

F ( f ( x )) = 2∫ (1 − x ) cos (ω x ) dx
1
∴ 2
0 
1
 1   1   1 
 ( ω 
)
= 2  1 − x 2  sin (ω x )  + 2 x  − 2 cos (ω x )  − 2  − 3 sin (ω x )  
 ω   ω 0

 1 1 
= 4  − 2 cos ω + 3 sin ω 
 ω ω 
4
= 3 (sin ω − ω cos ω ) = F (ω )
ω 
By inversion formula
1 ∞4
f ( x) = ∫ ( sin ω − ω cos ω ) eiω x dω
2π −∞ ω3 
1 ∞ 4
=
2π ∫−∞ ω 3 ( sin ω − ω cos ω ) ( cos (ω x) + i sin (ω x) ) dx 
1
Now, ( sin ω − ω cos ω ) cos (ω x) is an even function of ω 
ω3
1
and ( sin ω − ω cos ω ) sin (ω x ) is an odd function of ω 
ω3
4 ∞ cos (ω x )
∴ f (x) = ∫ (sin ω − ω cos ω ) dω
π 0 ω3 
4 cos (ω x )
∞ 
 1 − x 2
; x ≤1
∴ ∫ ( sin ω − ω cos ω ) dω = 
π 0 ω 3
 0 ; otherwise 
1
Take x =
2
∞ 1 ω π  1  3π
∫0 ω 3 ( sin ω − ω cos ω ) cos 2 dω = 4 1 − 4  = 16

∞ x cos x − sin x x 3π
∴ ∫0 x3
cos dx = −
2 16 

Example 3.59: Find the Fourier transform of

1 − x ; x < 1
f (x) = 
0 ; x >1

∞ sin 2 x
and hence evaluate ∫
0 x2
dx.

F ( f ( x )) = ∫

Solution: f ( x ) e − iω x dx
−∞ 
∫ (1 − x ) e dx = ∫ (1 − x ) ( cos (ω x ) − i sin (ω x ) ) dx
1 1
− iω x
=
−1 −1

Now, (1− x ) cos (ω x) is an even function of x


and (1− x ) sin (ω x) is an odd function of x
1 1
 1  1 
∴ F ( f ( x )) = 2∫ (1 − x ) cos (ω x ) dx = 2 (1 − x )  sin (ω x ) − 2 cos (ω x ) 
0   ω  ω 0 
2
= (1 − cos ω ) = F (ω )
ω2 
By inversion formula
1 ∞ 2 (1 − cos ω ) 1 − x ; x < 1
∫ e iω x dω = f ( x ) = 
2π −∞ ω 2
0 ; x >1

1 − cos ω
∞ π (1 − x ) ; x < 1
or ∫−∞ ω 2 ( cos ω x + i sin ω x ) d ω = 
0 ; x >1

1 − cos ω
Now, cos (ω x ) is an even function of ω and
ω2
1 − cos ω
sin (ω x ) is an odd function of ω 
ω2
∞ 1 − cos ω π (1 − x ) ; x < 1
∴ 2∫ cos (ω x ) d ω = 
0 ω2 0 ; x >1

Take x = 0
1 − cos ω
∞ π
∫ 0 ω 2
dω =
2
ω
2 sin 2
∞ π
∴ ∫0 ω 2 2 dω = 2


Take ω = 2x 
∞ sin 2 x π
∫0 x 2
dx =
2

Example 3.60: Find the Fourier transform of $f(x)=\begin{cases}1\,; & |x|<a\\ 0\,; & |x|>a\end{cases}$ where $a>0$, and evaluate $\displaystyle\int_{-\infty}^{\infty}\frac{\sin(as)\cos(sx)}{s}\,ds$ and $\displaystyle\int_{0}^{\infty}\frac{\sin x}{x}\,dx$.

Solution:
$$F\left(f(x)\right)=\int_{-\infty}^{\infty}f(x)\,e^{-isx}\,dx=\int_{-a}^{a}e^{-isx}\,dx=\int_{-a}^{a}\left(\cos(sx)-i\sin(sx)\right)dx$$
$$=2\int_{0}^{a}\cos(sx)\,dx=\frac{2}{s}\left(\sin(sx)\right)_{0}^{a}=\frac{2\sin(sa)}{s}=F(s)$$

By inversion formula
$$\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{2\sin(sa)}{s}\,e^{isx}\,ds=f(x)=\begin{cases}1\,; & |x|<a\\ 0\,; & |x|>a\end{cases}$$
$$\therefore\ \int_{-\infty}^{\infty}\frac{\sin(sa)}{s}\left(\cos(sx)+i\sin(sx)\right)ds=\begin{cases}\pi\,; & |x|<a\\ 0\,; & |x|>a\end{cases}$$

Now $\dfrac{\sin(sa)}{s}\sin(sx)$ is an odd function of $s$

$$\therefore\ \int_{-\infty}^{\infty}\frac{\sin(sa)\cos(sx)}{s}\,ds=\begin{cases}\pi\,; & |x|<a\\ 0\,; & |x|>a\end{cases}$$

Take $x=0$ and $a=1$,
$$\int_{-\infty}^{\infty}\frac{\sin s}{s}\,ds=\pi\quad\Rightarrow\quad 2\int_{0}^{\infty}\frac{\sin s}{s}\,ds=\pi\quad\Rightarrow\quad\int_{0}^{\infty}\frac{\sin x}{x}\,dx=\frac{\pi}{2}$$
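Both results of this example can be spot-checked numerically (a minimal sketch, not part of the text, assuming NumPy and SciPy; $a=1.5$, $s=2$ are sample values):

```python
import numpy as np
from scipy import integrate
from scipy.special import sici

# Check F(s) = 2 sin(sa)/s for the rectangular pulse (imaginary part vanishes).
a, s = 1.5, 2.0
val, _ = integrate.quad(lambda x: np.cos(s * x), -a, a)
print(val, 2 * np.sin(s * a) / s)        # both ≈ 0.1411

# The Dirichlet integral ∫_0^∞ sin(x)/x dx = π/2, via the sine integral Si.
print(sici(1e4)[0], np.pi / 2)           # Si(10^4) ≈ 1.5708
```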
Example 3.61: Find the complex Fourier transform of dirac delta function δ ( t − a ). Deduce the
Fourier transform of unity in terms of generalized function dirac delta.
Solution: By definition
δ ( t − a ) = lim δ ε ( t − a )
ε →0 + 

1
 ; a < t < a+ε
where δ ε (t − a ) =  ε 
0 ; otherwise

∴ F δ (t − a ) = lim ∫ e − iω t δ ε (t − a ) dt
ε → 0 + −∞ 
a +ε 1 − iωt
= lim ∫
ε →0 + a ε
e dt

 1  e − iω a e − iωε − 1
( )
a +ε
= lim  − e − iωt =− lim
ε →0 +
 iωε a
 iω ε →0 + ε 
e − iω a 1 − e − iωε
= lim
iω ε →0 + ε 
e − iω a − iωε
= lim iω e   (L’ Hospital rule)
iω ε →0 +
= e − iω a 

Note that if a = 0, then F (δ ( t ) ) = 1.


By inversion formula
1 ∞
F −1 (1) = ∫ e iωt dω = δ ( t )
2π −∞


⇒ ∫ e dω = 2πδ (t )
iω t

−∞ 
Interchange t and ω

∫ e iωt dt = 2πδ (ω ) (1)
−∞

Change ω to – ω

∫ e − iωt dt = 2πδ ( −ω ) (2)
−∞

1 ∞
From (1), δ (ω ) = ∫ e iωt dt
2π −∞

1 ∞ 1 −∞
∴ δ ( −ω ) = ∫ e − iω t dt = − ∫ e iω x dx (taking t = − x)
2π −∞ 2π ∞

1 ∞
= ∫ e iωt dt = δ (ω )
2π −∞

\ δ (ω ) is an even function of ω 

\ from (2), ∫ e − iωt dt = 2πδ (ω )
−∞ 
∴ F (1) = 2πδ (ω )


Example 3.62: Find the Fourier transform of the signum function sign(t) and the Heaviside function $H(t)$.

Solution: Consider the function
$$f(t)=\begin{cases}-e^{at}\,; & t<0\\ e^{-at}\,; & t>0\end{cases}\qquad\text{where }a>0$$
$$F\left(f(t)\right)=\int_{-\infty}^{\infty}f(t)\,e^{-i\omega t}\,dt=\int_{-\infty}^{0}-e^{at}e^{-i\omega t}\,dt+\int_{0}^{\infty}e^{-at}e^{-i\omega t}\,dt$$
$$=-\frac{1}{a-i\omega}\left(e^{at}e^{-i\omega t}\right)_{-\infty}^{0}-\frac{1}{a+i\omega}\left(e^{-at}e^{-i\omega t}\right)_{0}^{\infty}$$
$$=-\frac{1}{a-i\omega}+\frac{1}{a+i\omega}=-\frac{2i\omega}{\omega^{2}+a^{2}}\qquad(\because a>0)$$

When $a\to 0+$, $f(t)\to\operatorname{sign}(t)$

$$\therefore\ F\left(\operatorname{sign}(t)\right)=\lim_{a\to 0+}\frac{-2i\omega}{\omega^{2}+a^{2}}=-\frac{2i}{\omega}$$

Now, the Heaviside function $H(t)$ is defined as
$$H(t)=\begin{cases}0\,; & t<0\\ 1\,; & t\ge 0\end{cases}$$
$$\therefore\ H(t)=\begin{cases}\dfrac{1}{2}\left[1+\operatorname{sign}(t)\right], & t\ne 0\\ 1, & t=0\end{cases}$$
$$\therefore\ F\left(H(t)\right)=\frac{1}{2}\left[F(1)+F\left(\operatorname{sign}(t)\right)\right]=\frac{1}{2}\left[2\pi\,\delta(\omega)-\frac{2i}{\omega}\right]=\pi\,\delta(\omega)-\frac{i}{\omega}$$

It should be noted that the Fourier transforms of unity and of the Heaviside function exist only in terms of the generalized function (Dirac delta), which in actual practice is not a function. Thus, strictly in terms of the definitions of functions, the Fourier transforms of unity and of the Heaviside function do not exist. The reason is that $\lim_{t\to-\infty}\cos\omega t$, $\lim_{t\to\infty}\cos\omega t$, $\lim_{t\to-\infty}\sin\omega t$ and $\lim_{t\to\infty}\sin\omega t$ do not exist.
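The limiting construction used for sign(t) can be illustrated numerically (a minimal sketch, not part of the text, assuming NumPy and SciPy; $\omega=1$ is a sample value): the transform of the regularized function approaches $-2i/\omega$ as $a\to 0+$.

```python
import numpy as np
from scipy import integrate

omega = 1.0

def F_reg(a):
    # f_a is an odd function, so F(f_a)(w) = -2i ∫_0^∞ e^{-at} sin(wt) dt
    val, _ = integrate.quad(lambda t: np.exp(-a * t) * np.sin(omega * t), 0, np.inf)
    return -2j * val

for a in (1.0, 0.5, 0.1):
    print(a, F_reg(a), -2j * omega / (omega**2 + a**2))   # numeric vs closed form

print(-2j / omega)   # the a -> 0+ limit, i.e. F(sign(t)) = -2i/w
```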

2
Example 3.63: Find the Fourier transform of xe − ax ; a > 0.

Solution: ( )=∫
F e − ax
2 ∞

−∞
2
e − ax e − iω x dx
2
 iω  − ω2
∞ −  ax + 
=∫ e  2 a
e 4a
dx
−∞ 

∞ 1  iω 
= ∫ e− y e−ω  y = ax +
2 2
/ 4a
dy 
    2 a
−∞
a
π − ω2 / 4 a
= e
a 
f ( x ) = e − ax
2
Now, if

f ′ ( x ) = −2ax e − ax
2
then

∴ (
F ( f ′ ( x )) = F −2ax e − ax
2

) = ( i ω ) F ( f ( x ) ) = (i ω ) π − ω2 / 4 a
a
e


∴ (
F x e − ax
2

) = −2iaω π − ω2 / 4 a
a
e


1; x < a
Example 3.64: Find the Fourier transform of f ( x ) = 
0; x > a
  π x 
and hence find F  f ( x ) 1 + cos .
  a  
 F ( f ( x ) ) = ∫

f ( x ) e − i ωx dx = ∫ e − i ωx dx
a
Solution:
−∞ −a 
( cos (ω x ) − i sin (ω x ) ) dx
a
=∫
−a 
cos (ω x ) is an even function of x
and sin (ω x ) is an odd function of x
2 2
F f ( x ) = 2∫ cos (ω x ) dx = (sin (ω x))0a = sin (ωa)
a

0 ω ω 
∴ By linearity and modulation properties
  π x   πx
F  f ( x ) 1 + cos   = F f ( x ) + F  f ( x ) cos a 
  a   
 π  π
sin  ω +  a sin  ω −  a
2sin ( ωa )  a  a
= + +
ω π π
ω+ ω−
a a 
2sin ( ωa ) a sin ( ωa ) a sin ( ωa )
= − +
ω π + ωa π − aω 

= sin (ωa) 
(
 2 π 2 − ω 2 a 2 − aω (π − aω ) + aω (π + ωa ) 

)
 ω ( π + ωa ) ( π − ωa ) 

2π 2 sin ( ωa )
=
ω (π 2 − ω 2 a 2 )


Example 3.65: Find the Fourier transform of f ( x ) = e − axU ( x ) , a > 0 where U ( x ) is unit step
function. Hence, prove that F x n e − axU ( x ) = ( n!
. )
(a + is)n+1
F ( f ( x )) =

Solution: ∫ e − axU ( x ) e − isx dx
−∞


= ∫ e − ax e − isx dx
0

1
( )

=− e − ax e − isx
a + is 0

1
=
a + is 
Differentiate n times w.r.t. ‘s’
( −1) n! i n
n

∫ e − ax ( −ix ) e − isx dx =
n

( a + is ) 
0 n +1


∞ n!
∴ ∫0
e − ax x n e − isx dx =
(a + is)n+1 
⇒ (
F e − ax x n ∪ ( x ) = ) n!
(a + is)n+1 
Example 3.66: Show that the Fourier transform of $e^{-x^{2}/2}$ is self-reciprocal.

Solution: Here we take
$$F\left(f(t)\right)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(t)\,e^{-i\omega t}\,dt=F(\omega)$$
and then its reciprocal (inverse) is $\dfrac{1}{\sqrt{2\pi}}\displaystyle\int_{-\infty}^{\infty}F(\omega)\,e^{i\omega t}\,d\omega$.

Now, $f(x)=e^{-x^{2}/2}$

$$\therefore\ F\left(f(x)\right)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-x^{2}/2}\,e^{-i\omega x}\,dx=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-\left(\frac{x}{\sqrt 2}+\frac{i\omega}{\sqrt 2}\right)^{2}}e^{-\omega^{2}/2}\,dx$$

Put $\dfrac{x}{\sqrt 2}+\dfrac{i\omega}{\sqrt 2}=y$

$$\therefore\ F\left(f(x)\right)=\frac{1}{\sqrt{2\pi}}\,e^{-\omega^{2}/2}\int_{-\infty}^{\infty}e^{-y^{2}}\,\sqrt{2}\,dy=\frac{1}{\sqrt{\pi}}\,e^{-\omega^{2}/2}\cdot\sqrt{\pi}=e^{-\omega^{2}/2}=f(\omega)$$

Thus, the Fourier transform of $e^{-x^{2}/2}$ is self-reciprocal, i.e., only the variable name is changed.
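A quick numerical illustration of this self-reciprocity under the unitary convention used above (a minimal sketch, not part of the text, assuming NumPy and SciPy):

```python
import numpy as np
from scipy import integrate

def unitary_ft(f, omega):
    # F(w) = (1/sqrt(2*pi)) ∫ f(t) e^{-iwt} dt; real because f is even.
    val, _ = integrate.quad(lambda t: f(t) * np.cos(omega * t), -np.inf, np.inf)
    return val / np.sqrt(2 * np.pi)

f = lambda x: np.exp(-x**2 / 2)
for w in (0.0, 1.0, 2.5):
    print(w, unitary_ft(f, w), f(w))   # the two columns agree
```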

Example 3.67: Find the Fourier transform of $f(t)=\dfrac{1}{5+it}$.

Solution: We have
$$F\left(H(t)\,e^{-5t}\right)=\frac{1}{5+i\omega}$$
∵ by inversion formula
$$\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{e^{i\omega t}}{5+i\omega}\,d\omega=H(t)\,e^{-5t}$$

Interchange $t$ and $\omega$
$$\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{e^{i\omega t}}{5+it}\,dt=H(\omega)\,e^{-5\omega}$$

Change $\omega$ to $-\omega$
$$\int_{-\infty}^{\infty}\frac{e^{-i\omega t}}{5+it}\,dt=2\pi\,H(-\omega)\,e^{5\omega}$$

$$\therefore\ F\left(\frac{1}{5+it}\right)=\begin{cases}2\pi\,e^{5\omega}\,; & -\omega\ge 0\\ 0\,; & -\omega<0\end{cases}$$
i.e.,
$$F\left(\frac{1}{5+it}\right)=\begin{cases}2\pi\,e^{5\omega}\,; & \omega\le 0\\ 0\,; & \omega>0\end{cases}$$

Example 3.68: Find the inverse Fourier transforms of functions
2
e 4 iω π ω e −ω /8
 (i)      (ii) 
3 + iω 4 2i
1 1
(iii)    (iv) 
30 + 11iω − ω 2 4 + ω 9 + ω2
2
( )( )
 1 
Solution:  (i)  We have F −1   = H ( t ) e −3t
 3 + iω 
By shifting property
− ( −4 i ) ω
 e 4 iω  −1 e
= H (t − 4 ) e ( )
−3 t − 4
F −1   = F
 3 + iω  3 + i ω

 e 4 iω  e −3(t − 4) ; t ≥ 4
∴ F −1  =
 3 + iω   0 ; t<4

(ii) for a > 0
2
 iω 

( )=∫
∞ ∞ − a t + 
− at 2 − at 2
dt = ∫ e
2
− iω t
F e e e  2 a
e −ω / 4a
dt
−∞ −∞


1 ∞  iω 
∫  taking a t + = y
2 2
= e −ω / 4a
e − y dy
a −∞ 2 a 

π −ω 2
= e / 4a

a 
Take a = 2


F e −2t ( )= 2 π −ω 2 / 8
2
e 

∴ (
F t e −2t
2

) = i ddω F ( e ) = i −2 t 2 π  ω  −ω 2 /8
−  e
2  4


( )
2
π ω e −ω /8
2
∴ = F t e −2t
4 2i

 π ωe 
( )
−ω 2 /8
−2 t 2
⇒ F −1   = te
 4 2i 


1 1
(iii)  F −1 = F −1
+
30 11iω − ω 2
( 5 + iω ) ( 6 + iω )

 1
−1 1 
=F  −
 5 + iω 6 + iω 
 1   1 
= F −1   − F −1 
 5 + iω   6 + iω 

= e H (t ) − e
−5t −6 t
H (t )

e − e ; t ≥ 0
−5t −6 t
=
 0 ; t<0

1 1  1 1 
(iv)  F −1 = F −1   −     (by suppression method)
(4 + ω ) (9 + ω )
2 2
5  4 + ω
2
9 + ω2 

1 −1  1 1 
= F  − 
5  ( 2 + iω ) ( 2 − iω ) (3 + iω ) (3 − iω ) 

1 1  1 1  1 1 1 
= F −1   + −  + 
5  
4 2 + iω 2 − iω  6  3 + iω 3 − iω  

 (by suppression method)
1
Now, F e − at H ( t ) = ; a>0
a + iω 

1
∴ F e at H ( −t ) = ;a>0
a − iω
 (
∵ if F { f (t )} = F (ω ) then F { f ( −t )} = F ( −ω ) )
 1 
F −1   = e H (t ) ; a > 0
− at

 a + iω  
 1 
F −1   = e H ( −t ) ; a > 0
at
and
 a − iω  
1  1 −2t
∴ F −1
1
{ 1
} {
=  e H (t ) + e 2t H ( −t ) − e −3t H (t ) + e3t H ( −t ) } 
( 4+ω 9+ω
2
)( 2
)
5 4 6

 1  1 −2t 1 −3t 
 5  4 e − 6 e  ; if t > 0

1 1  1 1 
∴ F −1 =   e 2 t − e 3t  ; if t < 0 
( 4 + ω ) (9 + ω )
2 2
5 4 6 
1  1 1 
  cosh 2t − cosh 3t  ; if t = 0
 5  2 3 

1
( −2 t
 60 3e − 2e
−3t
)
; t>0

1
or F −1
1
= (
3e 2t − 2e 3t ) ; t<0
( )(
4 + ω2 9 + ω2 )  60
1
 30 (3 cosh 2t − 2 cosh 3t ) ; t = 0
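The partial-fraction splittings used in parts (iii) and (iv), obtained above by the suppression method, can be cross-checked with SymPy (a minimal sketch, not part of the text, assuming SymPy is available):

```python
import sympy as sp

w = sp.symbols('omega', real=True)
I = sp.I

# Part (iii): 1/(30 + 11*i*w - w^2) = 1/(5 + i*w) - 1/(6 + i*w)
expr3 = 1 / (30 + 11*I*w - w**2)
print(sp.simplify(expr3 - (1/(5 + I*w) - 1/(6 + I*w))))                    # 0

# Part (iv): 1/((4 + w^2)(9 + w^2)) = (1/5)[1/(4 + w^2) - 1/(9 + w^2)]
expr4 = 1 / ((4 + w**2) * (9 + w**2))
print(sp.simplify(expr4 - sp.Rational(1, 5)*(1/(4 + w**2) - 1/(9 + w**2))))  # 0
```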
 

Example 3.69: Do the Fourier sine and cosine transforms of exp(x) exist? Explain.

Solution: If the Fourier sine transform of $e^{x}$ exists, then
$$F_{s}\left(e^{x}\right)=\int_{0}^{\infty}e^{x}\sin(\omega x)\,dx=\left[\frac{e^{x}}{1+\omega^{2}}\left(\sin(\omega x)-\omega\cos(\omega x)\right)\right]_{0}^{\infty}$$
But $\displaystyle\lim_{x\to\infty}e^{x}\left(\sin(\omega x)-\omega\cos(\omega x)\right)$ does not exist.
$\therefore\ F_{s}\left(e^{x}\right)$ does not exist.

If the Fourier cosine transform of $e^{x}$ exists, then
$$F_{c}\left(e^{x}\right)=\int_{0}^{\infty}e^{x}\cos(\omega x)\,dx=\left[\frac{e^{x}}{1+\omega^{2}}\left(\cos(\omega x)+\omega\sin(\omega x)\right)\right]_{0}^{\infty}$$
But $\displaystyle\lim_{x\to\infty}e^{x}\left(\cos(\omega x)+\omega\sin(\omega x)\right)$ does not exist.
$\therefore$ the Fourier cosine transform of $e^{x}$ also does not exist.

Example 3.70: Find the Fourier sine transforms of


1 1
(i)     (ii)  e − ax ; a > 0
x x

Solution: (i)  Fs   = ∫ sin (ω x ) dx


1 ∞1

x 0 x
Put ω x = y 
 1 ∞ sin y π
∴ Fs   = ∫ dy =
 x 0 y 2 
1  ∞ 1
(ii) Fs  e − ax  = ∫ e − ax sin (ω x ) dx (1)
x  0 x
d 1  ∞
∴ Fs  e − ax  = ∫ e − ax cos (ω x ) dx
dω  x  0 

 e − ax 
2 (
= 2 −a cos (ω x ) + ω sin(ω x ) ) 
ω + a 0 
a
= 2    (∵ a > 0 )
ω + a2
1  a ω
∴ Fs  e − ax  = ∫ 2 dω + c = tan −1 + c
x  ω + a2 a 

For ω = 0, Fs  e − ax  = 0 
1
(from (1))
x 
∴ 0 = 0+c
⇒ c=0
1  ω
∴ Fs  e − ax  = tan −1 .
x  a 
2
Example 3.71: Find the Fourier cosine transform of e − x .

( )=∫ 2 ∞ 2
Solution: Fc e − x e − x cos (ω x ) dx
0

F (e ) = ∫ ( )
d ∞ 1 ∞
∴ ∫ sin (ω x ) −2 x e − x dx
2 2 2
−x
− x e − x sin (ω x ) dx =

c 0 2 0

1
( ) 1 ∞

sin (ω x ) e − x ∫
2 2
 = − ω cos (ω x ) e − x dx   (integrating by parts)
2 0 2 0


ω
=−
2
Fc e − x
2


( )
d 2 ω 2
or Fc (e − x ) + Fc (e − x ) = 0
dω 2 
It is Leibnitz’s linear differential equation.
ω ω2
∫ 2 dω
I .F = e =e 4

ω2
∴ Solution is e Fc e − x
4
( )=K 2

( )= Ke
ω2

∴ Fc e − x2 4


( )
For ω = 0, Fc e − x
2

0

= ∫ e − x dx =
2

2
π
=K


( )
ω2
2 π −
∴ Fc e − x = e 4
2 
2 2
Example 3.72: Find the Fourier cosine transform of e − a x and hence evaluate Fourier sine trans-
− a2 x 2
form of xe .

( )=∫
2 2 ∞ 2 2
Solution: Fc e − a x e − a x cos (ω x ) dx (1)
0

= Re ∫ e − a x e iω x dx
2 2

0
2

 iω  ω2
∞ − a x −  −

= Re ∫ e  2a  4 a2
e dx
0 
ω2
− ∞ 1  iω 

2
=e 4 a2
e− y dy   taking a x − 2 a = y 
0 a
ω2
π − 4 a2
= e
2a

from (1),
d

Fc e (
− a2 x 2
0

)
= ∫ − xe − a2 x 2 2 2
sin (ω x ) dx = − Fs xe − a x

( )
 ω2 

( d
) π 4 a2
2 2 −
Fs xe − a x = −  e 
∴ dω  2 a 
ω 2 
π ω − 4 a2
= ⋅ e
2 a 2a 2
ω2
π − .
= 3
ωe 4 a2

4a


1
Example 3.73: Find the Fourier cosine transform of f ( x ) = 2 . Hence derive Fourier sine
x a + x2
transform of φ ( x ) = 2 .
a + x2
 1  ∞ cos (ω x )
Solution: Fc  2 2 
=∫ dx  (1)
a +x  0 a2 + x 2
∞ − x sin (ω x ) ∞ x sin (ω x )
2
d  1 
∴ Fc  2 2
=∫ dx = − ∫ dx
dω  a + x  0 a +x
2 2 0
x a2 + x 2 ( ) 

= −∫
∞ (a 2 2 2
)
+ x − a sin (ω x )
dx
0
(
x a + x2
2
) 
∞ sin (ω x ) ∞ sin (ω x )
= −∫ dx + a 2 ∫ dx
0 x

0
(
x a2 + x 2 )
π ∞ sin (ω x )
= − + a2 ∫ dx (2)
2 0
x ( a2 + x 2 )
d2  1  ∞ cos (ω x )  1 
∴ F
2 c  2 2
= a2 ∫ dx = a 2 Fc  2
dω a +x  0 a2 + x 2  a + x 2 

It is an ordinary differential equation.
Its solution is
 1 
Fc  2 2 
= c1e − aω + c2 e aω (3)
a +x 
from (1),

 1  ∞ dx 1 x π
 a + x 2  ∫0 a 2 + x 2  a
when w = 0, Fc  2 = = tan −1  =

a 0 2a

π
\ from (3), c1 + c2 = (4)
2a
 1 
 from (1), lim Fc  2 =0
a +x
2
a →∞
 
∴ from (3), c2 = 0 

π
∴ from (4), c1 =
2a 
 1  π − aω
Hence from (3), Fc  2 2 
= e
 a + x  2a 
∞ cos (ω x ) π − aω
∴ ∫0 a2 + x 2
dx =
2a
e


Differentiate w.r.t. ‘w ’


− x sin (ω x )
∞ π
∫0 a +x
2 2
dx = − e − aω
2 
∞ x π − aω
⇒ ∫0 a2 + x 2 sin (ω x) dx = 2 e

 x  π − aω
⇒ Fs (φ ( x ) ) = Fs  2 = e
 a + x 2  2
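The cosine-transform result $F_c\!\left(\frac{1}{a^{2}+x^{2}}\right)=\frac{\pi}{2a}e^{-a\omega}$ obtained above is easy to confirm numerically (a minimal sketch, not part of the text, assuming NumPy and SciPy; $a=2$, $\omega=1.3$ are sample values):

```python
import numpy as np
from scipy import integrate

a, omega = 2.0, 1.3
val, _ = integrate.quad(lambda x: np.cos(omega * x) / (a**2 + x**2),
                        0, np.inf, limit=200)
print(val)                                   # ≈ 0.0583
print(np.pi / (2 * a) * np.exp(-a * omega))  # closed form (pi/2a) e^{-a w}
```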

Example 3.74: Find the Fourier sine and Fourier cosine transforms of
t ; 0 ≤ t ≤ l
f (t ) = 
0; t > l 

Solution: We have
Fc ( f ( t ) ) + iFs f ( t ) = ∫ f ( t ) ( cos (ωt ) + i sin (ωt ) ) dt


0 
l
= ∫ te dtiω t
0 
l
 −it iωt 1 iωt 
= e + 2e 
 ω ω 0 
i l iω l 1 iω l
= − e + 2 e −1
ω ω
( )

il 1
= − [ cos (ω l ) + i sin (ω l ) ] + 2 [ cos (ω l ) + i sin (ω l ) − 1]
ω ω 
Equate real and imaginary parts
l 1 1 1
 Fc ( f ( t ) ) = sin (ω l ) + 2 cos (ω l ) − 2 = 2 lω sin (ω l ) + cos (ω l ) − 1
ω ω ω ω
l 1 1
Fs ( f ( t ) ) = − cos (ω l ) + 2 sin (ω l ) = 2 sin (ω l ) − lω cos (ω l ) 
ω ω ω

1 1
Example 3.75: Find the Fourier sine transform of f (t ) = e − at ; a > 0. Deduce that F   = −i π
t t 
Solution: We have

 e − at 
( )

Fs e − at = ∫ e − at sin (ωt ) dt =  2 2 (
−a sin (ωt ) − ω cos (ωt ) ) 
0
 ω + a 0 
ω
= 2 ; a>0
ω + a2 

Integrate both sides w.r.t. ‘a’ within the limits a to ∞


∞ ∞ ω ∞
∫ ∫ e − at sin (ωt ) dt da = ∫ da
a 0 ω 2 + a2
a


∞ ∞  a 
∫0 ∫a e sin (ωt ) da dt =  tan ω  a   (changing the order of integration)
− at −1
⇒


∞  1 − at  π −1 a
∴ ∫  − e  sin (ω t ) dt = − tan
0 t 2 ω
a
π 
−1 ω ω
∞1
∴ ∫0 t e − at
sin (ω t ) dt = − cot = tan −1 
2 a a
1  ω
∴ Fs  e − at  = tan −1
t  a
− at
−1 ω
∞e
∴ ∫0 t sin (ωt ) dt = tan a ; a > 0
∞ e − at ω
∴ lim ∫ sin (ω t ) dt = lim tan −1
a→ 0 + 0 t a → 0 + a 
∞1 π
∴ ∫0 t sin (ωt ) dt = 2 (1)
 1 ∞ 1
F   = ∫ e − iω t dt
t −∞ t

∞ 1
=∫ cos (ω t ) − i sin (ω t ) dt
−∞ t 

1
cos (ωt ) is an odd function of t and
t
1
sin (ωt ) is an even function of t
t
 1 ∞ −i π
∴ F   = 2∫ sin (ω t ) dt = −2i  from (1)
t 0 t 2

= −iπ .

Example 3.76: Find the inverse Fourier sine transforms of the functions
ω 1
(i)  2    (ii)  e − aω ; a > 0
ω +1 ω
ω
Solution: (i) Let Fs −1 = f ( x)
ω +1
2

ω
∴ Fs ( f ( x )) = 2
ω +1 
⇒ Fs ( f ( x )) + ω 2 Fs ( f ( x )) = ω (1)

Now, Fs ( f ′′ ( x ) ) = −ω 2 Fs f ( x ) + ω f ( 0 ) ; 

when lim f ( x ) = lim f ′ ( x ) = 0


x →∞ x →∞ 
∴ Fs ( f ′′ ( x )) + ω Fs ( f ( x )) = ω f (0 ) (2)
2

From (1) and (2), we are to find f ( x ) such that

f ′′( x ) = f ( x ); f (0) = 1, lim f ( x ) = lim f ′( x ) = 0


x →∞ x →∞ 
Now, solution of f ′′( x ) = f ( x )
is f ( x ) = c1e − x + c2 e x (3)

f ( 0 ) = c1 + c2 = 1 (4)

From (3), lim f ( x ) = 0 is satisfied only when c2 = 0 


x →∞

∴ from (4), c1 = 1 

∴ from (3), f ( x ) = e − x which satisfies lim f ( x ) = lim f ′( x ) = 0 


x →∞ x →∞

1  2 ∞ 1 − aω
(ii) Fs −1  e − aω  = ∫ e sin(ω x ) dω (1)
ω  π 0 ω
d −1  1 − aω  2 ∞ − aω
∴ Fs  e  = ∫ e cos (ω x ) dω
dx ω  π 0


2  e − aω 
2 (
=  2 −a cos (ω x ) + ω sin (ω x ) ) 
π x +a 0 
2a
=  (∵ a > 0 )
π ( x 2 + a2 )
Integrate both sides w.r.t. x
1  2 x
Fs −1  e − aω  = tan −1 + c
ω  π a 
1 
From (1), for x = 0, Fs −1  e − aω  = 0 = 0 + c
 ω  
∴ c=0
1  2 x
∴ Fs −1  e − aω  = tan −1 .
ω  π a

Example 3.77: Solve the integral equation


∞ 1 − s; 0 ≤ s ≤ 1
f ( x ) cos ( sx ) dx = 


0
0 ; s > 1 
2
∞ sin t π
and hence show that ∫ dt = .
0 t2 2
∞ 1 − s; 0 ≤ s ≤ 1
Solution: ∫ f ( x ) cos sx dx = 
0
0 ; s > 1
1 − s; 0 ≤ s ≤ 1
⇒ Fc f ( x ) = 
0 ; s > 1 
∴ by inversion formula
2 ∞
f ( x) =
π ∫0
( Fc f ( x ) ) cos( sx) ds

1
2 
(1 − s ) cos( sx ) ds = (1 − s )  sin( sx )  − 2 cos( sx) 
2 1 1 1
π ∫0
=
π  x  x 0

2
= (1 − cos x ) ; x > 0
π x2 
∞ 2 (1 − cos x ) 1 − s ; 0 ≤ s ≤ 1
∴ Fc f ( x ) = ∫ cos( sx ) dx = 
0 πx 2
0 ; s >1

2 (1 − cos x )
lim Fc ( f ( x )) = ∫

∴ dx = 1
s→0 + 0 π x2 
1 − cos x
∞ π
∴ ∫ 0 x 2
dx =
2
x
2 sin 2
∞ π
∴ ∫0 x 2 2 dx = 2

x
Put =y
2 
∞ 2 sin 2 y π
∫0 4y 2
⋅ 2 dy =
2

∞ sin 2 y π
or ∫0 y 2
dy =
2

∞ sin 2 t π
or ∫ 0 t2
dt =
2 

Example 3.78: Using convolution theorem, find f ( t ) when


1
(i)  F ( f ( t ) ) = , f ( t ) = 0; t < 0.
(1 + iω )
2

1
(ii)  F ( f ( t ) ) = ; f ( t ) = 0; t < 0.
(ω ω 2 −1 )
1  1 1 
Solution: (i)  f ( t ) = F
−1
= F −1  ⋅
(1 + iω )
2
 1 + iω 1 + iω 
1
and F −1 = e −t H ( t )
1 + iω 
∴ by convolution theorem
1
F −1 = e − t H (t ) ∗ e − t H (t )
(1 + iω ) 2 

= ∫ e −τ H (τ ) e H ( t − τ ) dτ
− ( t −τ )

−∞ 

=e −t
∫ H (τ )H ( t − τ ) dτ
−∞ 
 1 ; τ ≥ 0, t − τ ≥ 0, i.e., t ≥ τ ≥ 0
Now, H (τ ) H (t − τ ) = 
0 ; t < τ <0 
1 t
∴ F −1
= e ∫ dτ = te ; t ≥ 0
−t −t

(1 + iω )
2 0

t e − t ; t ≥ 0
∴ f (t ) = 
0 ; t < 0 
−1
(ii)  We have F −1   = iH (t ) , F −1 2
1 1
= F −1 = − sin (t ) H (t )
ω ω −1 1− ω2
∴ by convolution theorem
1
F −1 = − sin ( t ) H ( t ) ∗ iH ( t )
(
ω ω 2 −1 ) 

= ∫ − sin τ H (τ ) i H ( t − τ ) dτ
−∞ 

= −i ∫ sin τ H (τ ) H ( t − τ ) dτ
−∞ 
1 , t ≥ τ ≥ 0
Now, H (τ ) H ( t − τ ) = 
0 , t < τ < 0 
1
= − ∫ i sin τ dτ = i (cos τ )0 = −i (1 − cos t ) , t ≥ 0
t t
∴ F −1
ω ω −1(
2
)
0

 −i (1 − cos t ) ; t ≥ 0
∴ f (t ) = 
 0 ; t<0
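Part (i)'s answer $f(t)=t\,e^{-t}H(t)$ can also be cross-checked by convolving $e^{-t}H(t)$ with itself on a discrete grid (a rough sketch, not part of the text, assuming NumPy; the discrete convolution times the step $\Delta t$ approximates the continuous one):

```python
import numpy as np

dt = 0.005
t = np.arange(0, 15, dt)
g = np.exp(-t)                           # e^{-t} H(t) sampled for t >= 0

conv = np.convolve(g, g)[:t.size] * dt   # ≈ ∫_0^t e^{-τ} e^{-(t-τ)} dτ
exact = t * np.exp(-t)                   # result from the convolution theorem

print(np.max(np.abs(conv - exact)))      # small, of the order of dt
```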


1 − x ; x < 1
Example 3.79: Find the Fourier transform of f ( x ) =  and hence find the value
∞ sin t
4 
 0 ; x > 1
of ∫ 4
dt .
0 t
F ( f ( x )) = ∫ f ( x ) e − iω x dx = ∫ (1 − x ) e − iω x dx
∞ 1
Solution:
−∞ −1 
= ∫ (1 − x ) ( cos(ω x ) − i sin(ω x ) ) dx
1

−1 
Now, (1− x ) cos(ω x) is even function of x and

(1− x ) sin (ω x) is odd function of x

F ( f ( x )) = 2∫ (1 − x ) cos (ω x ) dx
1

0 
1
 1  1  2
= 2 (1 − x )  sin (ω x )  − 2 cos (ω x )  = 2 (1 − cos ω )
  ω  ω 0 ω
By Parseval’s identity
1 ∞
F ( f ( x ) ) dω = ∫ f ( x ) dx
2 ∞ 2

2π ∫ −∞ −∞


4 (1 − cos ω )
2
1
dω = ∫ (1 − x ) dx
∞ 1 2

2π ∫ −∞ ω 4 −1

2
 2 ω
 2 sin 
4 ∞ 2
dω = 2∫ (1 − x ) dx
1

2
or
π 0 ω 4 0

 (1 − cos ω ) 
2

∵ (1 − x ) and
2
are even functions of x and ω resp.
 ω4 

ω
sin 4
16 ∞ 2 dω = −2 (1 − x )3 
1
or ∫
π 0 ω 4
3  0

ω
Put =t
2 
16 ∞ sin 4 t 2
π ∫0 16 t 4
.2 dt =
3


4
sin t π
⇒ ∫ 0 t 4
dt =
3


Example 3.80: Using Parseval’s identity, prove that


π  1 − e−a 
2 2
∞  sin t  π ∞ sin ( at )
(i)  ∫
0  t 
 
dt =
2
   (ii) ∫0 t a2 + t 2 dt =
(

2  a 2 )
 .

Solution:
1; − 1 < x < 1
(i)  Let f (x) = 
0; otherwise
F ( f ( x ) ) = ∫ e − iω x dx = ∫ ( cos(ω x ) − i sin(ω x ) ) dx
1 1

−1 −1 
2 2
= 2 ∫ cos (ω x ) dx = ( sin (ω x ) )0 = sin ω
1 1

0 ω ω 
By Parseval’s identity
1 ∞
F ( f ( x ) ) dω = ∫ f ( x ) dx
2 ∞ 2

2π −∞ −∞

1 ∞ 4 sin ω 2
1

2π ∫−∞ ω 2
∴ dω = ∫ dx = 2
−1

2
∞ sin t
⇒ ∫−∞ t 2 dt = π

2
∞  sin t 
or 2∫   dt = π
0
 t  
2
∞  sin t  π
∴ ∫0  t  dt = 2

1 ; 0 < x < a
(ii) Let f ( x ) = e − ax ; x > 0, g ( x ) =  where a > 0
0 ; x > a

 e − ax 
Fc ( f ( x ) ) = ∫ e − ax cos (ω x ) dx =  2

2 (
−a cos (ω x ) + ω sin (ω x ) ) 
0
 ω + a 0 
a
= 2 = Fc (ω )
ω + a2 
1 1
Fc ( g ( x ) ) = ∫ cos (ω x ) dx = ( sin (ω x ) )0 = sin (ω a) = Gc (ω )
a a

0 ω ω 
We have the identity
2 ∞ ∞
∫ Fc (ω ) Gc (ω ) dω = ∫ f ( x ) g ( x ) dx
π 0 0

2 ∞ a sin (ω a) 1
( )
a a
∴ ∫
π ω a +ω
0 2
( 2
dω = ∫ e − ax dx = − e − ax
0
) a 0

 − e−a 
( )
2
∞ sin ( at ) π π 1
∫ (
2
−a
∴ dt = − e − 1 =   .
0
t a2 + t 2 ) 2a 2 2  a2 

2
∞ x2 ∞  x 
Example 3.81: Evaluate ∫ (a0 2
+ x2 ) (b 2
+ x2 )
dx and hence find ∫ 0  2  dx.
 x +1

Solution: Let f ( x ) = e − a x ; x > 0, g ( x ) = e − b x ; x > 0



 e− a x 
Fs ( f ( x ) ) = ∫ e 2 (
− a sin (ω x ) − ω cos (ω x ) ) 
∞ −ax
sin (ω x ) dx =  2
0
 ω + a  0

ω
= = Fs (ω )
ω + a2
2

Replacing a by b
ω
Fs ( g ( x ) ) = = Gs ( ω )
ω + b2
2

By Parseval’s identity
2 ∞ ∞
∫ Fs (ω ) Gs (ω ) dω = ∫ f ( x ) g ( x ) dx
π 0 0

2 ∞ ω2
dω = ∫ e ( ) dx
∞ − a+b x
∴ ∫ (
π 0 ω 2 + a2 ω 2 + b2 0
)( ) 
∞ ω 2
−π 1 e − ( a + b ) x 

∴ ∫ (ω
0 2
+a 2
) (ω 2
+b 2
)
dω =
2 (a + b) 0

∞ x 2
π
∴ ∫ (x0 2
)(
+ a2 x 2 + b2 )
dx =
2( a + b )

Taking a = b = 1 
2
∞  x  π
∫ 0  2  dx = 4 .
 x +1
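The final value obtained from Parseval's identity can be verified directly by quadrature (a minimal sketch, not part of the text, assuming NumPy and SciPy):

```python
import numpy as np
from scipy import integrate

# ∫_0^∞ (x/(1+x^2))^2 dx should equal pi/4
val, _ = integrate.quad(lambda x: (x / (1 + x**2))**2, 0, np.inf)
print(val, np.pi / 4)   # both ≈ 0.7853981...
```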

Exercise 3.5

1. Find the Fourier integral representation 2. Find the Fourier integral representation of
1 , x ≤ 1 1 − x , x ≤ 1
2

of the function f ( x ) =  and the function f ( x ) =  .


0 , x > 1 0 , x >1
∞ sin λ cos ( λ x ) 1 , 0 ≤ x ≤ π
hence evaluate ∫ λ
d λ. 3. Express f ( x) =  as a
0 , x > π
0

∞ sin λ Fourier sine integral and hence evaluate


Also deduce the value of ∫ dλ. ∞ 1 − cos (πλ )
0 λ
∫0 sin ( xλ ) d λ .
λ

4. Determine the Fourier sine integral of the 1, 0 < x < a


x , 0 < x < 1  (i)  f ( x ) =  .
 0, otherwise
function f ( x ) =  2 − x, 1 < x < 2.
0 , x>2 1, x ≤a
 (ii)  f ( x ) =  .
0, x >a
5. Using Fourier integral representation
show that  x, 0 < x < a
 0, x < 0 (iii)  f ( x ) =  .
∞ cos ( xα ) + α sin( xα )
 π 0, otherwise

∫0 1+ α 2
dα =  , x = 0. .
 −2x  x, x ≤ a
 (iv)  f ( x ) =  .
π e , x > 0 0, x > a
6. Find (a) Fourier cosine integral  a, − l < x < l
 (v)  f ( x ) =  .
and (b) Fourier sine integral of 0, otherwise
sin x, 0 ≤ x ≤ π
f (x) =  and show that  0, x < 0
0 , x>π  (vi)  f ( x ) =  − α x .
∞ sin (ω x ) sin (πω )
e , x ≥ 0
π
∫0 1− ω 2
dω = sin x; 0 ≤ x ≤ π .
2
where α > 0

 e − ax , x > 0
7. Find the Fourier integral representation (vii)  f ( x ) =  ax .
of the function  −e , x < 0
 e ax , x ≤ 0 where a > 0
f ( x ) =  − ax for a > 0.
e , x≥0  x e− x , x > 0
(viii)  f ( x ) =  .
Hence show that  0, x < 0
∞ cos (ω x ) π − ax
∫0 ω 2 + a2 dω = 2a e , x ≥ 0. 1
 , x ≤a
 (ix)  f ( x ) =  2a .
8. Find the Fourier cosine integral of  0, x >a

cos x, x < π / 2
f (x) =  .  a, − l < x < 0
0 , x > π / 2  (x)  f ( x ) =  .
0, otherwise
9. Find the Fourier cosine integral of where a > 0
f ( x ) = e − x cos x; x ≥ 0
a, −l < x < 0

10. Find the Fourier integral representation   (xi)  f ( x ) = b, 0 < x < l .
 0, x < 0 0, otherwise
 
of the function f ( x ) = 1 / 2, x = 0.
 e− x , x > 0 where a > 0, b > 0
 12. Find the Fourier transform of
11. Find the Fourier transform of the follow- sin x, 0 < x < π
f (x) =  .
ing functions defined on ( −∞, ∞ )  0, otherwise

∞ cos (πω / 2) π −x
Hence deduce that ∫ dω = . 21. Find the Fourier sine transform of e .
0
(1 − ω )
2
2
Hence evaluate ∫
∞ x sin mx
dx ( m > 0 ).
0 1+ x2
 t
1 − a , 0 < t < a 22. Find the function f ( x ) if

 t 1, 0 ≤ s < 1
13. Let f (t ) = 1 + , − a < t < 0. ∞ 
 a ∫0 f ( x ) sin ( sx) dx = 2, 1 ≤ s < 2 .
 0, otherwise  0, s≥2


 23. Find the Fourier cosine transform of
Find F { f ( t )}. cos x, 0 < x < a
f ( x) =  .
 0, x > a
14. Find the Fourier transform of e −9tU 0 (t )
where U 0 (t ) is the unit step function. 24. Find the Fourier cosine transform of
f ( x ) = e −2 x + 4e −3 x.
15. Find the amplitude spectrum of the
­function 25. Find f ( x ) whose Fourier cosine trans-
5, −2 ≤ t ≤ 2 sin as
f (t ) =  . form is
s
.
0, otherwise
26. Find f ( x ) if its Fourier cosine transform
16. Find the Fourier transform of 1
is .
a − x , x ≤ a
f (x) =  . (
1 + s2 )
 0, x > a 27. Solve the integral equation

π
∫ f ( x ) cos ( sx) dx = e ( s ≥ 0 ).
2
∞ sin x −s
Hence show that ∫ dx = . 0
0 x2 2
28. Find the Fourier sine and cosine trans-
is forms of the following functions
17. Prove that F cos ( ax ) U ( x )  =
− s2 (a 2
) 1 , 0 ≤ x ≤ l
 (i)  f ( x ) = 
where U (x) is the unit step function. 0 , x > l
18. Using modulation theorem, find the Fou- x , 0 < x <1

rier transform of f ( t ) cos bt, where f is (ii)  f ( x ) = 2 − x , 1 < x < 2
defined  0 , x>2

1, t < a
by f (t ) =  . (iii)  f ( x ) = 2e −5 x + 5e −2 x
0, t > a
  (iv) f ( x ) = xe − ax , a > 0
19. Find the Fourier sine transform of f (x)
sin x, 0 < x < a 29. Find (a) Fourier cosine and (b) sine
defined by f ( x ) =  . transform of f ( x ) = e − ax for x > 0; 
 0, x>a
a > 0. Deduce that Laplace inte-
∞ cos α x π − ax
20. Find the Fourier sine transform of
e − ax ; ( a > 0, x > 0 ) and show that
grals ∫0 a2 + α 2 dα = 2a e and
∞ α sin α x π − ax
∞ x sin mx π ∫0 a2 + α 2 dα = − 2 e .
∫ dx = e − m ; ( m > 0 ).
0 1+ x2 2

30. Find Fourier sine and cosine transforms 34. Using Parseval’s identity, prove that
1 ∞ dt π
of x n−1 . Deduce that is self-recipro-
x ∫0 a2 + t 2 b2 + t 2 = 2ab ( a + b ).
( )( )
cal with respect to both the transforms.
31. Using convolution theorem, find the in- 35. Find the Fourier transform of
1  1, x < a
verse Fourier transform of . f (x) = 
12 + 7is − s 2 ( ) 0, x > a > 0
. Hence prove that

32. Using convolution theorem, find the in- 2


∞  sin t  π
verse Fourier transform of
1
. ∫ 
t 
 dt = .
( ) 2
0
6 + 5is − s 2
33. Using Parseval’s identity for Fourier co- 36. Find the Fourier transform of
sine and sine transforms of e − ax evaluate 1 − x , x < 1
f (x) =  . Hence find the
dx  0 , x >1

(i)  ∫ and
( )
0 2
a + x2
2 4
∞ sin x
value of ∫ dx.
∞ x 2 dx
0 x4
(ii)  ∫ where a > 0.
(a )
0 2
2
+ x2

Answers 3.5

π / 2 , x < 1
2 ∞ cos (λ x ) sin λ  π
 1.  f ( x ) = ∫ d λ ;  0 , x > 1;
π 0 λ π / 4 , x = 1 2

4 ∞ sin λ − λ cos λ
 2.  f ( x ) =
π ∫0
cos λ d λ
λ3
 π / 2, 0 ≤ x < π
2 ∞ 1 − cos (πλ ) 
 3.  f ( x ) = ∫ sin ( x λ ) d λ ;  0, x > π
π 0 λ  π / 4, x = π

4 ∞ 1
 4.  f ( x ) = (1 − cos λ ) sin λ sin (λ x) d λ
π ∫0 λ 2
2 ∞ 1 + cos (λπ )
 6. (a)  f ( x ) = ∫ cos (λ x ) d λ
π 0 1− λ2
2 ∞ sin (λπ )
   (b)  f ( x ) = ∫ sin (λ x ) d λ
π 0 1−α 2
2a ∞ cos (λ x )
 7.  f ( x ) =
π ∫0 λ 2 + a 2


 πλ 
cos ( λ x ) cos  
2 ∞  2
 8.  f ( x ) = ∫0 dλ
π 1− λ2( )
 9.  f ( x ) =
(
2 ∞ λ +2
2
)
π ∫0 λ 4 + 4
cos (λ x ) d λ
( )
1 ∞ cos (λ x ) + λ sin (λ x )
10.  f ( x ) =
π ∫0

1+ λ2
2 − iω a / 2 ωa 2 i  −
ωa
ωa 
11. (i) e sin (ii) sin (ω a) (iii)  2  aω e − iω a − 2e 2 sin 
ω 2 ω ω  2 
2i 2a 1
(iv)  2 ( aω cos (ω a) − sin (ω a) )       (v)  sin (ω l )    (vi) 
ω ω α + iω
−2iω 1 sin ( aω ) 2a iω2 l ωl
(vii)     (viii)     (ix)       (x)  e sin
a2 + ω 2 (1 + iω )
2
a ω ω 2

2 ω l  iωl −
iω l

(xi)  sin  ae 2 + be 2 
ω 2  
1 + e − iπω
12. 
1− ω2
2
13.  (1 − cos (ω a) )
aω 2
1
14. 
( 9 + iω )
15. [Figure 3.16: graph of the amplitude spectrum $|F(\omega)|$ plotted against $\omega$.]

2
16. (1 − cos (ω a) )
ω2
sin (ω + b ) a sin (ω − b ) a
18.  +
(ω − b ) (ω − b )
1  sin (ω − 1) a sin (ω + 1) a 
19.   − 
2  ω −1 ω +1 
ω
20. 
(a + ω2 )
2

ω π −m
21.  ; e
(1+ ω2 ) 2
2
22.  (1 + cos x − 2 cos 2 x )
πx
1  sin ( (1 + ω ) a ) sin ( (1 − ω ) a ) 
23.   + 
2  (1 + ω ) (1 − ω ) 
 1 6 
24.  2  2 + 2 
ω +4 ω +9
 1, x < a
25.  f ( x ) = 
0, x > a
26.  e − x , x ≠ 0
2
27. 
(
π 1+ x2 )
1 − cos (ω l ) sin (ω l )
28.      (i)  Fs ( f ( x ) ) = , Fc ( f ( x ) ) =
ω ω
2 sin ω 2 cos ω
 (ii)  Fs ( f ( x ) ) = (1 − cos ω ) , Fc ( f ( x ) ) = 2 (1 − cos ω )
ω2 ω
 2 5   1 1 
(iii)  Fs ( f ( x ) ) = ω  2 + 2  , Fc ( f ( x ) ) = 10  2 + 2 
 ω + 25 ω + 4  ω + 25 ω +4
2aω a2 − ω 2
 (iv)  Fs ( f ( x ) ) = , Fc ( f ( x ) ) =
(a ) (a )
2 2
2
+ ω2 2
+ ω2

a ω
29. (a)  Fc ( f ( x ) ) = Fs ( f ( x ) ) =
, (b) 
(a 2
+ω 2
) (a + ω2 )
2

2 Γ ( n) nπ 2 Γ ( n) nπ
(
30.  Fs x n −1 = ) π ω n
sin
2
, Fc x n −1 =
π ω n (
cos
2
)

( )
 e −3t 1 − e − t ; t ≥ 0
31.  
0 ; t<0

( )
 e −2t 1 − e − t , t ≥ 0
32.  
0 , t =0
π π
33. (i) (ii)
4 a3 4a
2sin aω
35. 
ω
2 (1 − cos ω ) π
36.  ;
ω2 3

3.8  Applications of Fourier Transforms


Ordinary differential equations posed on the whole range $-\infty<t<\infty$ can be solved by the Fourier transform technique. If boundary conditions are not given, the homogeneous solution is found by writing the auxiliary equation, and a particular integral is found by assuming suitable boundary conditions (decay at infinity) and using the Fourier transform.

Fourier transforms are also applied to solve boundary value problems. In one-dimensional boundary value problems, the partial differential equation can easily be transformed into an ordinary differential equation by applying a suitable transform. The required solution is then obtained by solving this differential equation and taking the inverse transform. In a problem of finding $u(x,t)$ from a partial differential equation, if $x\ge 0$ then a Fourier sine or Fourier cosine transform is to be used. If $\left.\dfrac{\partial u}{\partial x}\right|_{x=0}$ is given, then the Fourier cosine transform is used, because the Fourier cosine transform of $\dfrac{\partial^{2}u}{\partial x^{2}}$ contains $\left.\dfrac{\partial u}{\partial x}\right|_{x=0}$; and if $u(x,t)\big|_{x=0}$ is given, then we use the Fourier sine transform.
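As an illustration of the transform approach in a discrete setting (a minimal sketch, not from the text, assuming NumPy is available), the heat equation $u_t=c^{2}u_{xx}$ on a large periodic interval can be advanced exactly in Fourier space: each mode evolves as $\hat u(k,t)=\hat u(k,0)\,e^{-c^{2}k^{2}t}$, mirroring the continuous derivation in Example 3.84 below.

```python
import numpy as np

# Solve u_t = c^2 u_xx on a periodic box [-L, L) by diagonalizing in Fourier space.
c, L, N, t = 1.0, 20.0, 512, 0.5
x = np.linspace(-L, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=2 * L / N)   # angular wavenumbers

u0 = np.exp(-x**2)                               # initial temperature profile
u_hat = np.fft.fft(u0) * np.exp(-c**2 * k**2 * t)
u = np.real(np.fft.ifft(u_hat))

# For a Gaussian initial condition the exact solution is again a Gaussian.
exact = np.exp(-x**2 / (1 + 4 * c**2 * t)) / np.sqrt(1 + 4 * c**2 * t)
print(np.max(np.abs(u - exact)))                 # very small (periodic images negligible)
```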

Example 3.82: Find the solution of the following differential equation by using Fourier transform
dy
− 5 y = u0 ( t ) e −5t ; − ∞ < t < ∞ where u0 ( t ) is unit step function.
dt

Solution: A.E . is m − 5 = 0
∴ m = 5
∴ C .F . = c1e 5t 
To find particular integral (P.I.), we take boundary conditions lim y ( t ) = 0.
t →±∞

Take Fourier transform of both sides of the given differential equation


1
( iω − 5) Y (ω ) = where Y (ω ) = F y ( t )

5 + iω
1
∴ Y (ω ) = −
(5 + iω ) (5 − iω ) 
1 1 1 
=− +   (by suppression method)
10  5 − iω 5 + iω 
1  −1 1 1 
∴ y (t ) = − F + F −1 
10  5 − iω 5 + iω  
This y ( t ) is particular integral

1  −1 1 1 
∴ P .I . = − F + F −1  (1)
10  5 − iω 5 + iω 
1
Now, F e −5t u0 ( t ) = 
5 + iω
1
∴ F e 5t u0 ( −t ) =
5 − iω 
 (∵ if F ( f (t )) = F (ω ) , then F f ( −t ) = F ( −ω ))
1 1
∴ F −1 = e 5t u0 ( −t ) and F
−1
= e −5t u0 (t )
5 − iω 5 + iω 

∴from (1)  1 −5t


 − 10 e ; t > 0

 1
P .I . = −
10
(
1 5t
e u0 ( −t ) + e −5t u0 (t ) ) =  − e 5t ; t < 0
 10
 1
− 5 ; t=0
 
∴ general solution is
 5t 1 −5t
c1e − 10 e ; t>0

 1
y (t ) = c1e 5t − e 5t = c2 e 5t ; t < 0
 10
 1
c1 − 5 = c ; t=0
 

\ general solution is
 5t 1 −5t
 Ae − e ; t > 0
y (t ) =  10
 Ae 5t ; t≤0
 
where A is an arbitrary constant.

Example 3.83: Solve the differential equation


d2 y dy − t
+ 3 + 2y = e
dt 2 dt
Solution: A.E . is
m 2 + 3m + 2 = 0 
∴ m = −1, −2 
∴ C .F . = c1e − t + c2 e −2t 
For finding P.I., we assume boundary conditions
lim y ( t ) = lim y ′ ( t ) = 0
t →±∞ t →±∞ 
Take Fourier transform of both sides of the given differential equation
(Y (ω ) = F y (t ))

(iω )2 + 3iω + 2 Y (ω ) =
∫−∞ e e dt ;
− t − iω t

  
0 ∞
=∫ ee t − iω t
dt + ∫ e e −t − iω t
dt
−∞ 0 
0 ∞
 e (1−iω )t   e −(1+ iω )t 
=  + 
 1 − iω  −∞  − (1 + iω )  0

1 1
= +
1 − iω 1 + iω 
1  1 1 
∴ Y (ω ) =  +
(1 + iω )( 2 + iω ) 1 − iω 1 + iω 

1 1  1 1 
= + −

(1 + iω ) ( 2 + iω ) (1 − iω ) 1 + iω 1 + iω 2 + iω 

1 1 1 1 1
= + − + −
2 (1 + iω ) 6 (1 − iω ) 3 ( 2 + iω ) (1 + iω )2 (1 + iω ) ( 2 + iω )

 (by suppression method)
1 1 1 1 1 1
= + − + − +
2 (1 + iω ) 6 (1 − iω ) 3 ( 2 + iω ) (1 + iω )2 1 + iω 2 + iω

1 1 1 2
= − + +
(1 + iω )
2
2 (1 + iω ) 6 (1 − iω ) 3 ( 2 + iω )


1 1 1 1 1 2 1
∴ y (t ) = F −1 − F −1 + F −1 + F −1 (1)
(1 + iω ) 2
2 1 + iω 6 1 − iω 3 2 + iω

1
Now, F e − at u0 (t ) = ; a>0
a + iω 
∴ F e at u0 ( −t ) =
1
a − iω
; a>0  (∵ if F f (t ) = F (ω ) , then F f ( −t ) = F ( −ω ))
1 1 1
∴ F −1 = e − t u0 (t ) , F −1 = e t u0 ( −t ) , F −1 = e −2t u0 (t )
1 + iω 1 − iω 2 + iω 
1
Also, F e u0 (t ) =
−t

1 + iω 
d 1 1
⇒ F t e − t u0 (t ) = i =
dω 1 + iω (1 + iω )2

1
∴ F −1 = t e − t u0 (t )
(1 + iω ) 2

From (1), y ( t ) is P.I.
1 1 2
∴ P.I . = t e − t u0 (t ) − e − t u0 (t ) + e t u0 ( −t ) + e −2t u0 (t )
2 6 3 
 1 t
 6e ; t<0

 1  2
∴ P.I . =  t −  e − t + e −2t ; t > 0
  2  3
 1
 ; t=0
 3 
\ general solution is
 −t −2 t 1 t
C1e + C2 e + e ; t < 0
y (t ) = C . F . + P . I . =  6
(C + t ) e − t + C e −2t ; t ≥ 0
 1 2

where C1 and C2 are arbitrary constants.

Example 3.84: The temperature distribution u ( x, t ) in a thin, homogeneous infinite bar can be
modelled by the initial boundary value problem
∂u ∂2u
= c 2 2 , − ∞ < x < ∞, t > 0, u ( x, 0 ) = f ( x ) , u ( x, t ) is finite as x → ±∞. Find u(x,t), t > 0.
∂t ∂x

∂u ∂2u
Solution: = c2 2
∂t ∂x
Take Fourier transforms of both sides w.r.t. x
d
U (ω , t ) = c 2 (iω ) U (ω , t ) where F u ( x, t ) = U (ω , t )
2

dt 

= −c 2ω 2U (ω , t )

Solution of this first order ordinary differential equation is
U (ω , t ) = Ke − c ω t (1)
2 2


Now, u ( x, 0 ) = f ( x )

Take Fourier transform
U (ω , 0 ) = F (ω )  where F f ( x ) = F (ω )

∴ from (1),
U (ω , 0 ) = K = F (ω )

Hence, U (ω , t ) = F (ω ) e − c2ω 2 t

Take inverse Fourier transform
1 ∞
u ( x, t ) = ∫ F (ω ) e − c ω t e iω x dω (2)
2 2

2π −∞


But F (ω ) = F f ( x ) = ∫ f (ξ ) e − iωξ dξ
−∞ 
1 ∞ ∞
u ( x, t ) = ∫ e − c ω t e iω x ∫ f (ξ ) e − iωξ dξ dω
2 2

2π −∞ −∞

1 ∞ ∞
f (ξ ) e − c ω t e
− iω (ξ − x )
∫ ∫
2 2
= dξ dω
2π −∞ −∞

1 ∞ ∞
f (ξ ) e − c ω t e
− iω (ξ − x )
∫ ∫
2 2
= dω dξ    (by changing order
2π −∞ −∞
of integration)
1 ∞ ∞
∫ ∫ f (ξ ) e − c ω t cos ω (ξ − x ) − i sin ω (ξ − x )  dω dξ
2 2

=
2π −∞ −∞

But e − c 2ω 2 t
cos ω (ξ − x ) is even function of ω and e − c 2ω 2 t
sin ω (ξ − x ) is odd function of ω
1 ∞ ∞
∴ u ( x, t ) = ∫ 2 ∫ f (ξ ) e
− c 2ω 2 t
cos ω (ξ − x ) dω dξ
2π −∞ 0

1 ∞ ∞
f (ξ ) e − c ω t cos ω (ξ − x ) dξ dω    (by changing order of integration)
π ∫0 ∫
2 2
=
−∞


V , − l < x < l
Example 3.85: Find the solution in the above example if f ( x ) =  where V is a
constant. 0, otherwise
F (ω ) = ∫ Ve − iω x dx = ∫ V ( cos (ω x ) − i sin (ω x ) ) dx
l l
Solution: Here, 
−l −l

cos (ω x ) is an even function of x and sin (ω x ) is an odd function of x


2V 2V
F (ω ) = 2V ∫ cos (ω x ) dx = ( sin (ω x) )0 = sin (ω l )
l l

0 ω ω 
\ from equation (2) in the above example
1 ∞ 2V
u ( x, t ) = ∫ sin (ω l ) e − c ω t eiω x dω
2 2

2π −∞ ω

1 ∞ 2V
∫ sin (ω l ) e − c ω t ( cos(ω x ) + i sin(ω x ) ) dω
2 2
=
2π −∞ ω 
1
Now, sin (ω l ) e − c ω t cos(ω x ) is an even function of ω 
2 2

ω
1
sin (ω l ) e − c ω t sin (ω x ) is an odd function of ω 
2 2
and
ω
2V ∞ 1
u ( x, t ) = ∫ sin (ω l ) e − c ω t cos (ω x ) dω
2 2

π 0 ω 
V ∞ 1 − c 2ω 2 t
= ∫ e sin ω ( l + x ) + sin ω ( l − x )  dω
π 0 ω 
1
Put ω = y
c t 
V ∞ 2 1 (l + x ) y 1 (l − x ) y 
u ( x, t ) = ∫ e − y  sin + sin  dy
π 0
y c t y c t 


V π (l + x ) π (l − x ) 
=  erf + erf 
π  2 2c t 2 2c t  

V (l + x ) (l − x ) 
= erf + erf 
2 2c t 2c t 


∂2u ∂2u
Example 3.86: Solve = α 2 2 , − ∞ < x < ∞, t ≥ 0 with conditions u(x,0)   =    f (x),
∂t 2
∂x  
∂u ∂u
( x, 0 ) = g (x) and assuming u, → 0 as x → ±∞.
∂t ∂x

∂2u 2 ∂ u
2
Solution: = α
∂t 2 ∂x 2
Take Fourier transform of both sides w.r.t. x
d2
U (ω , t ) = −α 2ω 2U (ω , t )  where F u ( x, t ) = U (ω , t )
dt 2 
Its solution is
U (ω , t ) = c1 cos (αωt ) + c2 sin (αωt ) (1)

Now, u ( x, 0 ) = f ( x )

Take Fourier transform
U (ω , 0 ) = F (ω )  where F ( f ( x ) ) = F (ω )

∴from (1),
U (ω , 0 ) = c1 = F (ω )

∴ U (ω , t ) = F (ω ) cos (αωt ) + c2 sin (αωt ) (2)


Now, u ( x, 0 ) = g ( x )
∂t 
Take Fourier transform
d
U (ω , 0 ) = G (ω )  where F g ( x ) = G (ω ) (3)
dt
From (2)
d
U (ω , t ) = −αω F (ω ) sin (αωt ) + c2αω cos (αωt )
dt 
d
∴ U (ω , 0 ) = c2αω = G (ω )  [from (3)]
dt
1
∴ c2 = G (ω )
αω 
\ from (2)
1
U (ω , t ) = F (ω ) cos (αωt ) + G (ω ) sin (αωt )
αω 
i −1 G (ω )
∴ u ( x, t ) = F −1  F (ω ) cos (αω t ) + F sin (αω t ) (4)
α iω
G (ω )
= ∫ g ( x ) dx 
x
Now, F −1
iω −∞

By modulation theorem
1
F −1  F (ω ) cos (αω t ) =
2
[ f ( x + α t ) + f ( x − α t )](5)

 G (ω )  i x +αt
sin(αω t )  = −  ∫ g ( x ) dx 
x −αt
F −1  g ( x ) dx − ∫
 iω  2  −∞ −∞ 

i x −α t
= − ∫ g ( x ) dx 
x +α t x −α t
g ( x ) dx + ∫ g ( x ) dx − ∫
2  −∞ x −α t −∞  
i x +α t
g ( x ) dx (6)
2 ∫ x −α t
=−

Substituting from (5) and (6) in (4)
1 1 x +α t
u ( x, t ) =  f ( x + α t ) + f ( x − α t )  + ∫ g ( y ) dy
2 2α x −α t

∂v ∂2 v
Example 3.87: Use Fourier sine transform to solve the equation = K 2 ; x > 0, t > 0
∂t ∂x
subject to the conditions v = v0 when x = 0, t > 0 and v = 0 when t = 0, x > 0.
­
∂v ∂2v
Solution:   =K 2
∂t ∂x
Take Fourier sine transform w.r.t. x of both sides
d
Vs (ω , t ) = K  −ω 2Vs (ω , t ) + ω v ( 0, t ) , where Fs v ( x, t ) = Vs (ω , t )
dt 

But v ( 0, t ) = v0

d
\ Vs (ω , t ) + K ω 2Vs (ω , t ) = K ω v0
dt 
It is Leibnitz linear differential equation
2
I .F . = e K ω t

\ solution is
e K ω tVs (ω , t ) = ∫ K ω v0 e K ω t dt + c
2 2


v0 K ω 2t
= e +c
ω 
v0
\ Vs (ω , t ) = + c e − Kω 2t
(1)
ω
Now, v ( x, 0 ) = 0 
\ Vs (ω , 0 ) = 0


\ from (1),
v0
Vs (ω , 0 ) = +c = 0
ω 
v
⇒ c=− 0 
ω
v
Vs (ω , t ) = 0 1 − e − K ω t 
2
Hence,
ω 

Take inverse Fourier sine transform


v ( x, t ) =
2 ∞ v0

π 0 ω
( 2
1 − e − K ω t sin(ω x ) dω

)
2v0 ∞ sin(ω x ) 2v0 ∞ − K ω 2t sin(ω x )
π ∫0 π ∫0
= dω − e dω
ω ω 
2v0 π 2v0 ∞ 1 xy  y 

2
= − e− y sin dy   ω = 
π 2 π 0 y Kt  Kt  
  
2v0 π  x 
= v0 − ⋅ erf      (we prove this result in
π 2  2 Kt 
remark after this example)
 x 
= v0 1 − erf 
 2 Kt  
x
= v0 erfC
2 Kt 
Remark: From Q. No. 17 (i) Exercise 1.5, we have
∞ π − a2

2
e − x cos 2ax dx = e
0 2 
Changing x by y and a by x
∞ π − x2

2
e − y cos 2 xy dy = e
0 2 
c
Integrate both sides w.r.t. x from 0 to
2 t
c
∞ π c

∫ ∫ ∫
2 2
2 t e − y cos 2 xy dy dx = 2 t e − x dx
0 0 2 0


c
π π  c 
∫ ∫
2
⇒ 2 t e − y cos 2 xy dx dy = erf      (by changing order
0 0 2 2 2 t 
of integration)
2 c
∞ e− y π  c 
\ ∫ ( sin 2 xy )02 t dy = erf  
0 2y 4 2 t 
2
∞ e− y  cy  π  c 
\ ∫ 0 y 
 sin
t
 dy = erf 
 2 2 t
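The closed form $v(x,t)=v_{0}\,\operatorname{erfc}\!\left(\dfrac{x}{2\sqrt{Kt}}\right)$ obtained in Example 3.87 can be spot-checked by confirming numerically that it satisfies $v_t=Kv_{xx}$ and the boundary condition (a minimal sketch, not part of the text, assuming NumPy and SciPy; $K=2$, $v_0=3$ are sample values):

```python
import numpy as np
from scipy.special import erfc

K, v0 = 2.0, 3.0
v = lambda x, t: v0 * erfc(x / (2 * np.sqrt(K * t)))

x, t, h = 1.2, 0.7, 1e-4
v_t  = (v(x, t + h) - v(x, t - h)) / (2 * h)                 # central difference in t
v_xx = (v(x + h, t) - 2 * v(x, t) + v(x - h, t)) / h**2      # central difference in x

print(v_t, K * v_xx)          # the two agree to several decimals
print(v(0.0, t), v0)          # boundary condition v(0, t) = v0
```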



∂u ∂ 2 u
Example 3.88: Solve the equation = ; 0 < x < ∞, t > 0 subject to the conditions
∂t ∂x 2
1 ; 0 < x < 1
  (i)  u ( 0, t ) = 0 , t > 0    (ii) u ( x, 0 ) = 
0 ; x > 1
(iii)  u ( x, t ) is bounded
∂u ∂ 2 u
Solution: = ; 0 < x < ∞, t > 0
∂t ∂x 2
Take Fourier sine transform of both sides w.r.t. x
d
U s (ω , t ) = −ω 2U s (ω , t ) + ω u ( 0, t )  where Fs u ( x, t ) = U s (ω , t )
dt 

But u ( 0, t ) = 0 for t > 0 

d
\ U s (ω , t ) = −ω 2U s (ω , t )
dt 
Solution of this first order ordinary differential equation is
U s (ω , t ) = K e −ω t (1)
2


1 ; 0 < x < 1
Now, u ( x, 0 ) = 
0 ; x > 1 
∞ 1 1 − cos ω
U s (ω , 0 ) = ∫ u( x, 0) sin (ω x ) dx = ∫ sin (ω x ) dx = − ( cos(ω x) )0 =
1 1
\
0 0 ω ω 
∴ from (1),
1 − cos ω
U s (ω , 0 ) = K =
ω 
1 − cos ω −ω 2t
\ U s (ω , t ) = e
ω 
Take inverse Fourier sine transform
2 ∞ 1 − cos ω −ω 2t
u ( x, t ) =
π ∫0
e sin(ω x ) dω
ω 
2
1 ∞ e −ω t
= ∫ [ 2 sin(ω x) − 2 sin(ω x) cos ω ] dω
π 0 ω 
2
1 ∞ e −ω t
= ∫  2 sin(ω x ) − sin ω ( x + 1) − sin ω ( x − 1)  dω 
π 0 ω 
y
Put ω =
t 


 xy ( x + 1) y sin ( x − 1) y  
 sin sin 
1 ∞ − y2 t t t 
u ( x, t ) = 2
π ∫0
e − − dy
 y y y 
1π x x +1 x − 1
=  2 erf − erf − erf     (from remark of
π 2 2 t 2 t 2 t
the previous example)
x 1 x +1 1 x −1
= erf − erf − erf
2 t 2 2 t 2 2 t 

∂u ∂2u
Example 3.89: Solve the equation = 2 2 ; 0 < x < ∞, t > 0 subject to the conditions
∂t ∂x
(i)  u (0, t ) = 0, t > 0  (ii) u ( x, 0 ) = e − x, x > 0
  
∂u
(iii)  u and both tend to zero as x → ∞
∂x
∂u ∂2u
Solution: = 2 2 ; 0 < x < ∞, t > 0
∂t ∂x
Take Fourier sine transform of both sides w.r.t. x
d
U s (ω , t ) = 2  −ω 2U s (ω , t ) + ω u ( 0, t )  where Fs u ( x, t ) = U s (ω , t )
dt 

But u ( 0, t ) = 0 for  t > 0 

d
\ U s (ω , t ) = −2ω 2U s (ω , t )
dt 
Solution of this first order ordinary differential equation is
U s (ω , t ) = Ke −2ω t (1)
2


Now, u ( x, 0 ) = e − x , x > 0

∞ ∞
\ U s (ω , 0 ) = ∫ u( x, 0) sin(ω x ) dx = ∫ e − x sin(ω x ) dx
0 0 

 e− x  ω
= 2 ( − sin(ω x ) − ω cos(ω x ) ) = 2

(
 ω + 1 ) 
0
ω +1

from (1),
ω
U s (ω , 0 ) = K =
ω2 +1 
ω
U s (ω , t ) = 2
2
\ e −2ω t

(ω + 1) 
Take inverse Fourier sine transform
2 ∞ ω
u ( x, t ) =
π ∫0 1 + ω 2
2
e −2ω t sin(ω x ) dω


Example 3.90: Solve the Laplace’s equation in the semi-infinite strip shown in the figure ­provided:
[Figure 3.17: semi-infinite strip in the $xy$-plane with $\nabla^{2}u=0$ in the interior and the boundary value $f(x)=e^{-ax},\ a>0$, prescribed along the edge $y=0$.]

Solution: Laplace’s equation in the semi-infinite strip is


∂2u ∂2u
+ = 0 ; 0 < x < ∞, 0 < y < b (1)
∂x 2 ∂y 2
subject to the conditions
  (i)  u ( 0, y ) = 0, 0 < y < b   (ii) u ( x, b ) = 0; 0 < x < ∞
(iii)  u ( x, 0 ) = f ( x ) = e − ax ; 0 < x < ∞
Take Fourier sine transform w.r.t. x of both sides of (1)
d2
−ω 2U s (ω , y ) + ω u ( 0, y ) + U s (ω , y ) = 0  where Fs u ( x, y ) = U s (ω , y )
dy 2 

But u ( 0, y ) = 0, 0 < y < b

2
d
\ 2
U s (ω , y ) = ω 2U s (ω , y )
dy 
Its solution is
U s (ω , y ) = c1 e −ω y + c2 eω y (2)

Now, u ( x, 0 ) = f ( x ) = e − ax ; 0 < x < ∞ 

\ U s (ω , 0 ) = ∫ u( x, 0) sin(ω x ) dx
0


∞  e − ax 
2 (
= ∫ e − ax sin(ω x ) dx =  2 −a sin(ω x ) − ω cos(ω x ) ) 
0
 ω + a 0 

ω
= 2 (∵ a > 0)
ω + a2 

\ From (2)
ω
U s (ω , 0 ) = c1 + c2 = (3)
ω + a2
2

Also, u ( x, b ) = 0

⇒ U s (ω , b ) = 0

\ from (2),
U s (ω , b ) = c1e −ω b + c2 eω b = 0 (4)

Solving equations (3) and (4)


ω eω b ω eω b
c1 = =

( a2 + ω 2 ) ( eωb − e −ωb ) 2 ( a2 + ω 2 ) sinh (ωb ) 
−ω b −ω b
−ω e −ω e
c2 = =

(a 2
+ω 2
) (e ωb
−e −ω b
) ( 2
)
2 a + ω 2 sinh (ω b )

\ from (2)
ω e ( ) − e ( ) 
 = ω sinh (ω ( b − y ) )
ω b− y −ω b − y

U s (ω , y ) = 
2 ( a + ω ) sinh (ω b ) ( a 2 + ω 2 ) sinh (ω b )
2 2

ω sinh (ω b ) cosh (ω y ) − cosh (ω b ) sinh (ω y ) 
=

(a 2
)
+ ω 2 sinh (ω b )

ω
= 2 cosh (ω y ) − coth (ω b ) sinh (ω y ) 
(a + ω 2 )  
Take inverse Fourier sine transform
2 ∞ ω
u ( x, y ) = ∫ cosh (ω y ) − coth (ω b ) sinh (ω y )  sin(ω x ) dω
π (a + ω 2 ) 
0 2

Example 3.91: The steady state temperature distribution u ( x, y ) in a thin homogenous semi-
infinite plate is governed by the boundary value problem
∂2u ∂2u
+ = 0 ; 0 < x < l, 0 < y < ∞
∂x 2 ∂y 2
subject to the conditions
∂u
(i)  u ( 0, y ) = e −5 y , y > 0    (ii) u ( l , y ) = 0, y > 0    (iii) ( x, 0 ) = 0 ; 0 < x < l
∂y
Find the temperature distribution u ( x, y ) , 0 < x < l , y > 0

∂2u ∂2u
Solution: + = 0 ; 0 < x < l, y > 0
∂x 2 ∂y 2
Take Fourier cosine transform w.r.t. y of both sides
d2  ∂ 
U c ( x, ω ) +  −ω 2U c ( x, ω ) − u ( x, 0 )  = 0
dx 2
 ∂y 

where Fc u ( x, y ) = U c ( x, ω )


But u ( x, 0 ) = 0 ; 0 < x < l
∂y

d2
\ U c ( x, ω ) = ω U c ( x, ω )
2

dx 2 
Its solution is
U c ( x, ω ) = c1e −ω x + c2 eω x (1)

But u (l, y ) = 0

\ U c (l,ω ) = 0

\ from (1),
U c ( l , ω ) = c1e −ω l + c2 eω l = 0 (2)

Also, u ( 0, y ) = e −5 y ; y > 0


\ U c ( 0, ω ) = ∫ u(0, y ) cos (ω y ) dy
0
 ∞
∞  e −5 y 
=∫ e −5 y
cos (ω y ) dy =  2 ( −5 cos (ω y ) + ω sin (ω y ) )

0
 ω + 25 0 
5
=
ω + 25 
2

\ from (1),
5
U c ( 0, ω ) = c1 + c2 = (3)
ω + 25
2

Solving (2) and (3)


−5eω l 5eω l
c1 = =

(ω 2
)(
+ 25 e −ω l − eω l ) ( )
2 ω 2 + 25 sinh (ω l )

−ω l −ω l
5e 5e
c2 = =−

(ω 2
+ 25 e )( −ω l
−e ωl
) (
2 ω + 25 sinh (ω l )
2
) 

Substituting in (1)
5 e ( ) − e ( ) 
ω l−x −ω l − x

U c ( x, ω ) =   = 5 sinh (ω l − ω x )


( )
2 ω + 25 sinh (ω l )
2
(
ω 2 + 25 sinh (ω l ) ) 
5 sinh (ω l ) cosh (ω x ) − cosh (ω l ) sinh (ω x ) 
=
(ω 2
)
+ 25 sinh (ω l )

5
= cosh (ω x ) − coth (ω l ) sinh (ω x )  
(ω + 25) 
2

Take inverse Fourier cosine transform


10 ∞ 1
u ( x, y ) = [cosh (ω x ) − coth(ω l ) sinh(ω x)] cos(ω y) dω
π ∫0 (ω 2 + 25)


Exercise 3.6

1. If the initial temperature of an infinite 6. Find the solution of the Laplace equation
θ for x < a ∂2u ∂2u
bar is given by θ ( x ) =  0 + = 0 inside the semi-infinite
∂x 2 ∂y 2
 0 for x > a strip x > 0 , 0 < y < b such that
determine the temperature at any point x
and at any instant t.  f ( x); y = 0 , 0 < x < ∞
2. Solve two dimensional Laplace equation 
u = 0 ; y=b,0< x<∞
∂2u ∂2u
+ = 0 subject to the conditions 0 ; x=0,0< y<b
∂x 2 ∂y 2 
∂u
u ( x, 0 ) = f ( x ) , = 0 at y = 0.
∂y ∂u ∂2u
7. Solve = k 2 , if u(0,  t)   =   0,
3. An infinite string is initially at rest ∂t ∂x
and that the initial displacement is u(x, 0) = e , ( x > 0 ) , u ( x, t ) is bounded
-x
f ( x ) , ( −∞ < x < ∞ ). Determine the dis-
placement y ( x, t ) of the string. where x > 0, t > 0.

∂u ∂ 2 u 8. Use Fourier transform to solve the


4. Solve = , t > 0 subject to
∂t 2∂x 2 ∂u ∂ 2 u
−x ­equation = , 0 < x < ∞, t > 0
u(x,0) = e . ∂t ∂x 2
5. The temperature distribution u ( x, t ) where u ( x, t ) satisfies the conditions
in a thin, homogeneous semi-­
infinite bar can be modelled by   (i)   ∂u  = 0 , t > 0
the initial boundary value problem  ∂x  x = 0
∂u 2 ∂ u 2
=c , 0 < x < ∞, t > 0;  x; 0 < x < 1
∂t ∂x 2  (ii)  u ( x, 0 ) = 
0; x > 1
u ( x, 0 ) = f ( x ), x > 0, u (0, t ) = 0, t > 0.
Find the temperature distribution u ( x, t ) . (iii)  u ( x, t ) < M , i.e., bounded

∂u ∂2u  ∂u 
9. Solve = k 2 for 0 ≤ x < ∞, t > 0   ( x, 0 ) = 0, 0 < x < l . Find the tem-
∂t ∂x  ∂y 
given the conditions perature distribution u(x, y), 0 < x < l,
  (i)  u ( x, 0 ) = 0 for x ≥ 0 y > 0.
∂u dy
 (ii) ( 0, t ) = −a ( constant ) 11. Solve + 3 y = cos 3t , y ( 0 ) = 0 ; t ≥ 0
∂x dt
(iii)  u ( x, t ) is bounded. d2 y dy
12. Solve + 5 + 6 y = 2 sin t, t ≥ 0 sub-
10. The steady state temperature distribution dt 2 dt
u ( x, y ) in a thin homogeneous semi-in- ject to the conditions y ′ ( 0 ) = 0, y ( 0 ) = 0.
finite plate is governed by the boundary d2 y dy
13. Solve + 3 + 2y    =    H(t) sin w t
∂2u ∂2u dt dt
value problem + = 0, 0 < x < l, for t > 0 satisfying lim y ( t ) = 0 and
∂x 2 ∂y 2 t →0 +
lim y ′ ( t ) = 0.
0 < y < ∞; u(0, y) = e-2y, u(l, y) = 0, y > 0; t →0 +

Answers 3.6

θ 0  ( a + x ) ( a − x ) 
1. θ ( x, t ) = erf + erf 2
 where c is thermal diffusivity.
2  2c t 2c t 
1
2. u ( x, y ) =  f ( x − iy ) + f ( x + iy ) 
2
1
3. y ( x, t ) =  f ( x − ct ) + f ( x + ct )  where c 2 is diffusivity of string.
2
1
4. u ( x, t ) = e ( )
− x 2 / 1+ 4 t

(1 + 4t )
1/ 2

2 ∞ ∞
5. u ( x, t ) = ∫ ∫ f (ξ ) sin (ωξ ) dξ  e − c ω t sin (ω x ) dω
2 2

π 0  0 

2 ∞ ∞
6. u ( x, y ) = ∫ ∫ f (ξ ) sin (ωξ ) cosh (ω y ) − coth (ω b ) sinh (ω y )  sin (ω x ) dξ dω
π 0 0
2
2 ∞ ω e − kω t
7. u ( x, t ) = ∫ sin(ω x ) dω
π 0 1+ ω2
2 ∞1 1 
8. u ( x, t ) = ∫ sin ω − 2 (1 − cos ω )  cos(ω x ) e −ω t dω
2

π ω ω
0

2
2a ∞ 1 − e − kω t
9. u ( x, t ) =
π ∫0
cos(ω x ) dω
ω2
4 ∞ 1
10. u ( x, y ) = ∫ cosh (ω x ) − coth (ω l ) sinh (ω x )  cos (ω y ) dω
π 0 4 + ω2 
( )

1
11. y ( t ) = ( cos 3t + sin 3t ) + c1 e −3t
6
1
( )
12. y ( t ) = 2e −2t − e −3t + sin t − cos t
5
ω e −2t ωe −t 1
13. y ( t ) = − 2 + 2 + ( )
 2 − ω 2 sin(ωt ) − 3ω cos(ωt) 
( )(
ω + 4 ω +1 1+ ω2 4 + ω2  ) 
4  Partial Differential Equations
4.1 Introduction
When the functions involve only one variable then the system can be modelled by ordinary
­differential equations which we have already studied. Many problems in fluid mechanics, elec-
tricity, heat transfer, electromagnetic theory, quantum mechanics and other fields involve func-
tions depending upon more than one variable usually one variable time and other one or more
variables and hence these systems can be modelled by partial differential equations. The range of
applications of partial differential equations is enormous compared to that of ordinary differential
­equations.
We discuss the formation of partial differential equations when either an equation containing
the arbitrary constants or arbitrary functions is given or when a physical or geometrical system
with certain conditions is provided.
We define the linear and non-linear, homogenous and non-homogenous partial differential
equations. We also define partial differential equations linear in partial derivatives. Solutions of
first order partial differential equations linear in partial derivatives are discussed by Lagrange’s
method and non-linear in partial derivatives is discussed by Charpit’s method. Solutions of some
particular types of first order partial differential equations are also discussed. Solutions of s­ econd
order partial differential equations linear in second order partial derivatives are discussed by
Monge’s method. General solutions of homogenous and non-homogenous partial differential
equations of higher orders with constant coefficients which are either linear homogenous in their
order or linear of any type are discussed.
As applications of partial differential equations one dimensional heat equation, one dimen-
sional wave equation governing the motion of a vibrating string, two dimensional wave equations
in steady state governing vibrating membranes which are Laplace equations and transmission
lines are considered.

4.2  Formation of partial differential equations


Partial differential equations (p.d.e.) are formed either by eliminating arbitrary constants or
­arbitrary functions from family of surfaces.

4.2.1 Elimination of Arbitrary Constants


Let F ( x, y, z , a, b ) = 0 (4.1)

represents two parameter family of surfaces. Differentiate (4.1) partially w.r.t. x and w.r.t. y
∂F ∂F
+p = 0 (4.2)
∂x ∂z

∂F ∂F
+q = 0 (4.3)
∂y ∂z
Eliminating a and b from equations (4.1) to (4.3), we get a relation between x, y, z, p and q, i.e.
φ ( x, y, z , p, q ) = 0.

This will be first order partial differential equation of family of surfaces.
Remark 4.1: It may not be possible to eliminate a and b from equations (4.1) to (4.3). Then we
find the second order partial derivatives and eliminate the arbitrary constants a and b. However,
the higher order partial differential equation is not unique.
Remark 4.2: If we are to eliminate arbitrary constants c1 , c2 ,..., cn from the equation

F ( x, y, z , c1 , c2 ,..., cn ) = 0

then it may be possible to obtain partial differential equations of order n −1 or less or of order
more than n −1.
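The elimination procedure of this section can be mechanized symbolically; a small sketch (not from the text, assuming SymPy is available) for the two-parameter family $z=ax+by+ab$ of Example 4.1(a), which leads to the equation $z=px+qy+pq$:

```python
import sympy as sp

x, y, z, p, q, a, b = sp.symbols('x y z p q a b')

F = sp.Eq(z, a*x + b*y + a*b)                  # the two-parameter family of surfaces
Fx = sp.Eq(p, sp.diff(a*x + b*y + a*b, x))     # partial derivative w.r.t. x gives p = a
Fy = sp.Eq(q, sp.diff(a*x + b*y + a*b, y))     # partial derivative w.r.t. y gives q = b

sol = sp.solve([Fx, Fy], [a, b])               # a = p, b = q
pde = F.subs(sol)                              # eliminate the constants
print(pde)                                     # Eq(z, p*x + q*y + p*q), i.e. z = px + qy + pq
```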

4.2.2 Elimination of Arbitrary Functions


Suppose we have family of surfaces containing arbitrary function f of x, y, z. We differentiate
partially w.r.t. x and w.r.t. y Eliminate f ′ from these two equations formed from partial differen-
tiation. We shall hence get the partial differential equation. If given equation contains two arbi-
trary functions then the equation obtained after differentiation partially w.r.t. x(or y) will have to
be differentiated again partially w.r.t. x and y and then eliminate two arbitrary functions to obtain
partial differential equation.

4.3 Definitions
4.3.1 Linear and Non-linear Partial Differential Equations
A partial differential equation is linear if it is of the first degree in the dependent variable and its
partial derivatives, otherwise the equation is non-linear.

4.3.2 Homogenous and Non-homogenous Partial Differential Equations


In a partial differential equation, if each term contains either the dependent variable or one of its
derivatives then the equation is called homogeneous, otherwise non-homogeneous.

4.3.3 Partial Differential Equations Linear in Partial Derivatives


A partial differential equation is linear in its partial derivatives if it is of the first degree in its
­partial derivatives. If nth partial derivatives are linear (others may or may not be linear) then
partial differential equation is called linear in nth order partial derivatives.

4.3.4 Linear Homogenous in their Order Partial Differential Equations


Partial differential equations of nth order which have linear nth order partial derivatives and have
no other partial derivative or dependent variable are called linear homogenous in nth order partial
differential equations.

4.3.5 Solution of Partial Differential Equations


A solution of a partial differential equation in some region R is a function that has all the partial
derivatives appearing in the equation in some domain containing R and satisfies the equation ev-
erywhere in R. In general, the totality of solutions of a partial differential equation is very large.
( )
For example, the functions u = x 2 − y 2 , u = e x cos y, u = ln x 2 + y 2 which are entirely differ-
∂2u ∂2u
ent from each other are solutions of partial differential equation + = 0. These solutions
∂x 2 ∂y 2
may be derived from complete solutions or general solutions. Solution of differential equation
containing arbitrary constants is called complete solution and that which contains arbitrary func-
tions is called general solution. Unique solution of a partial differential equation corresponding
to a given physical problem is obtained by use of additional conditions arising from the problem.
Such a solution is called a particular solution under these conditions. If additional conditions
that the solution assume are given on the boundary of region R then these conditions are called
boundary conditions and the problem is called boundary value problem. If time t is one of the in-
dependent variable and conditions are prescribed at t = 0 then these are called initial conditions
and the problem is called initial value problem. There are solutions which cannot be derived
from complete solution or general solution, and such solutions are called singular solutions.

Example 4.1: Form partial differential equations from the following equations by eliminating
the arbitrary constants:
 (a)  z = ax + by + ab
(b)  z = ax + a 2 y 2 + b
x2 y2 z2
 (c)  + + =1
a2 b2 c2
(d)  x 2 + y 2 = ( z − c ) tan 2 α
2

 (e)  z = ax 2 + 2bxy + cy 2
( x − h) + ( y − k )
2 2
  (f)  + z 2 = c2
(g)  z = ceωt cos (ω x )
Solution: (a) z = ax + by + ab (1)
Differentiating (1) partially w.r.t. x and y respectively, we have
p = a (2)
q = b (3)

Substituting for a and b from (2) and (3) in (1), we have


z = px + qy + pq 

which is a required p.d.e.


(b) z = ax + a 2 y 2 + b (1)
Differentiating (1) partially w.r.t. x and y respectively, we have
p = a (2)
q = 2a 2 y (3)

Eliminating a from (2) and (3)


q = 2 p2 y

which is a required p.d.e.


x2 y2 z2
  (c) + + = 1 (1)
a2 b2 c2
Differentiating (1) partially w.r.t. x and y respectively, we have
2x 2z
+ p = 0 (2)
a2 c2
2 y 2z
+ q = 0 (3)
b2 c2
Differentiating (2) partially w.r.t. x again, we get
2 2 2
+
a2 c2
(
p + zr = 0 (4))
2 2
Eliminating and 2 from (2) and (4), we get
a2 c
x zp
=0
1 ( p + zr ) 
2

i.e., ( p + zr ) x − zp = 0 (5)
2

which is a second-order p.d.e.


Note: The p.d.e. (5) obtained is not unique. One can also obtain the p.d.e.


(q 2
)
+ zt y − zq = 0

or  pq + zs = 0.

x 2 + y 2 = ( z − c ) tan 2 α (1)
2
(d)
Differentiating (1) partially w.r.t. x and y respectively, we have
2 x = 2 ( z − c ) tan 2 α . p (2)
2 y = 2 ( z − c ) tan 2 α . q (3)

Dividing (2) by (3), we have


x p
=
y q

or qx − py = 0 
which is a required p.d.e.
  (e) z = ax 2 + 2bxy + cy 2 (1)
Differentiating (1) partially w.r.t. x and y respectively, we have
p = 2ax + 2by (2)
q = 2bx + 2cy (3)

Differentiating (2) partially w.r.t. x and y respectively, we have


r = 2a (4)
s = 2b (5)

Differentiating (3) partially w.r.t. y, we have


t = 2c (6)

Substituting for a, b and c from (4), (5) and (6) in (1), we have

2 z = rx 2 + 2 sxy + ty 2 

which is a required p.d.e.


( x − h) + ( y − k )
2 2
  (f) + z 2 = c 2 (1)
Differentiating (1) partially w.r.t. x and y respectively, we have
2 ( x − h ) + 2 zp = 0 (2)

2 ( y − k ) + 2 zq = 0 (3)

Differentiating (2) again partially w.r.t. x, we have


(
2 + 2 p 2 + zr = 0 ) 
i.e., zr + p + 1 = 0
2

which is a second-order p.d.e.
(g) z = ceωt cos (ω x ) (1)
Differentiating (1) partially w.r.t. x and t respectively, we have

z x = −cω eωt sin (ω x ) (2)


zt = cω eωt cos (ω x ) (3)

Differentiate (2) partially w.r.t. x


z xx = −cω 2 eωt cos (ω x ) (4)
Differentiate (3) partially w.r.t. t
ztt = cω 2 eωt cos (ω x ) (5)
Add (4) and (5),
z xx + ztt = 0
which is a p.d.e.

Example 4.2: Find the differential equation of all spheres whose centres lie on the z-axis.
Solution: Equation of any sphere having its centre on z-axis say at ( 0, 0, c ) and radius r is
x 2 + y 2 + ( z − c ) = r 2 (1)
2

Differentiating (1) partially w.r.t. x and y respectively, we have


2 x + 2 ( z − c ) p = 0 (2)
2 y + 2 ( z − c ) q = 0 (3)
Eliminating z - c from (2) and (3), we have
qx − py = 0 
which is a required p.d.e.

Example 4.3: Find the differential equation of all planes which are at a constant distance a from
the origin.
Solution: The equation of family of planes in normal form is
lx + my + nz = a (1)

where l, m and n are parameters of family which are the d.c’s of the normal from the origin to
the plane
∴ l 2 + m 2 + n2 = 1 (2)

∴ Using (2), equation (1) becomes
  lx + my ± √(1 − l² − m²) z = a (3)

Differentiating (3) partially w.r.t. x and y respectively, we have
  l ± √(1 − l² − m²) p = 0
⇒ l = ∓ √(1 − l² − m²) p

  m ± √(1 − l² − m²) q = 0
⇒ m = ∓ √(1 − l² − m²) q

∴  l² + m² = (1 − l² − m²)(p² + q²)
⇒ (1 − l² − m²)(p² + q²) + (1 − l² − m²) = 1
⇒ 1 − l² − m² = 1/(1 + p² + q²)

∴  l = ∓p/√(p² + q² + 1),  m = ∓q/√(p² + q² + 1)

∴ from equation (3)
  (−px − qy + z)/√(p² + q² + 1) = ±a
or  z = px + qy ± a √(p² + q² + 1)
is a required p.d.e.
1 
Example 4.4: Form a p.d.e. by eliminating the function f from the relation z = y 2 + 2 f  + log y  .
x 
Solution:
1 
z = y 2 + 2 f  + log y  (1)
x 
Differentiating (1) partially w.r.t. x and y respectively, we have
  p = (−2/x²) f′(1/x + log y) (2)
  q = 2y + (2/y) f′(1/x + log y) (3)
Multiply (2) by x², (3) by y and add, we have
  px² + qy = 2y²

which is a required p.d.e.


Example 4.5: Form a p.d.e. by eliminating the arbitrary function f from the relation z = f(xy/z).
Solution:
  z = f(xy/z) (1)

Differentiating (1) partially w.r.t. x and y respectively, we have


  p = y((z − xp)/z²) f′(xy/z) (2)
  q = x((z − yq)/z²) f′(xy/z) (3)
Divide (2) by (3)
  p/q = y(z − xp) / [x(z − yq)]

or px ( z − yq ) = qy ( z − xp )

or px = qy 

which is a required p.d.e.

Example 4.6: Form a partial differential equation by eliminating the arbitrary functions f and φ
from the equation z = f ( y / x ) + φ ( xy ) .
Solution:
z = f ( y / x ) + φ ( xy ) (1)

Differentiating (1) partially w.r.t. x and y respectively, we have


  p = (−y/x²) f′(y/x) + yφ′(xy) (2)
  q = (1/x) f′(y/x) + xφ′(xy) (3)

Multiply equation (2) by x, equation (3) by y and add


∴ px + qy = 2 xyφ ′ ( xy ) (4)
Differentiating (4) partially w.r.t. x and y respectively, we have
p + xr + ys = 2 y φ ′ ( xy ) + xyφ ′′ ( xy )  (5)

xs + q + yt = 2 x φ ′ ( xy ) + xyφ ′′ ( xy )  (6)

Divide (5) by (6)


  (p + xr + ys)/(xs + q + yt) = y/x

∴ xp + x 2 r + xys = xys + yq + y 2 t

or  px − qy = y²t − x²r

which is a required p.d.e.



Example 4.7: Form a partial differential equation by eliminating the arbitrary functions from
z = yf ( x ) + xg ( y )

Solution:
z = yf ( x ) + xg ( y ) (1)

Differentiating (1) partially w.r.t. x and y respectively, we have


p = yf ′ ( x ) + g ( y ) (2)

q = f ( x ) + xg ′ ( y ) (3)

Differentiating (2) partially w.r.t.y, we have


s = f ′( x ) + g′( y )

∴ xys = x ( yf ′ ( x ) ) + y  xg ′ ( y )  (4)

Substituting the values of y f ′ ( x ) and x g ′ ( y ) from equations (2) and (3), we have

xys = x  p − g ( y )  + y  q − f ( x ) 

= px + qy −  yf ( x ) + xg ( y ) 

or xys = px + qy − z  (from equation (1))

which is a required p.d.e.
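
The same identity can be confirmed with a computer algebra system. The sketch below (assuming SymPy; it is not part of the original text) keeps f and g as arbitrary functions and checks that xys = px + qy − z holds identically.

    # Check of Example 4.7, assuming SymPy, with arbitrary functions f and g.
    import sympy as sp

    x, y = sp.symbols('x y')
    f, g = sp.Function('f'), sp.Function('g')
    z = y*f(x) + x*g(y)
    p, q, s = sp.diff(z, x), sp.diff(z, y), sp.diff(z, x, y)
    print(sp.simplify(x*y*s - (p*x + q*y - z)))   # prints 0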

Example 4.8: Form a partial differential equation by eliminating the arbitrary function φ from
( )
lx + my + nz = φ x 2 + y 2 + z 2 .
Solution:
(
lx + my + nz = φ x 2 + y 2 + z 2 (1) )
Differentiating (1) partially w.r.t . x and y respectively, we have
(
l + np = ( 2 x + 2 zp ) φ ′ x 2 + y 2 + z 2 (2) )
m + nq = ( 2 y + 2 zq ) φ ′ ( x 2
+ y2 + z2 ) (3)
Divide (2) by (3)
l + np x + zp
=
m + nq y + zq

or y ( l + np ) + zq ( l + np ) = x ( m + nq ) + zp ( m + nq )

or y ( l + np ) − x ( m + nq ) = z ( mp − ql )

which is a required p.d.e.

Example 4.9: Form partial differential equation from the following equation by eliminating the
( ) (
arbitrary function: z = f x 2 − y + g x 2 + y

)
Solution:
(
z = f x 2 − y + g x 2 + y (1) ) ( )
Differentiating (1) partially w.r.t. x and y respectively, we have

( ) (
p = 2 xf ′ x 2 − y + 2 xg ′ x 2 + y (2) )
q = − f ′( x 2
− y ) + g ′ ( x + y ) (3) 2

Multiply equation (3) by 2x and add to equation (2)


∴ p + 2 xq = 4 xg ′ x 2 + y (4)( )
Differentiating (4) partially w.r.t. x and y respectively, we have

( )
r + 2q + 2 xs = 4 g ′ x 2 + y + 8 x 2 g ′′ x 2 + y (5) ( )
(
s + 2 xt = 4 xg ′′ x 2 + y (6) )
Multiply (6) by 2x and subtract from (5)
(
r + 2q + 2 xs − 2 x ( s + 2 xt ) = 4 g ′ x 2 + y )
∴ r + 2q − 4 x t = 4 g ′ ( x + y )
2 2

∴ x ( r + 2q − 4 x t ) = 4 xg ′ ( x + y ) = p + 2 xq 
2 2
(from equation (4))

or xr − 4 x 3t = p 

which is a required p.d.e.

Example 4.10: Form a partial differential equation by eliminating the arbitrary function f from
( )
f xy + z 2 , x + y + z = 0.

Solution: Let
u = xy + z 2, v = x + y + z (1)
Then, f ( u, v ) = 0 (2)

Differentiating equation (2) partially w.r.t. x


∂f  ∂u ∂u  ∂f  ∂v ∂v 
+ p +  + p = 0
∂u  ∂x ∂z  ∂v  ∂x ∂z  
∂f ∂f
∴ ( y + 2 zp ) + (1 + p ) = 0   (from (1)) (3)
∂u ∂v

Similarly differentiating (2) partially w.r.t. y, we have


∂f  ∂u ∂u  ∂f  ∂v ∂v 
 + q +  + q = 0
∂u  ∂y ∂z  ∂v  ∂y ∂z 

∂f ∂f
∴ ( x + 2 zq ) + (1 + q ) = 0   (from (1)) (4)
∂u ∂v
∂f ∂f
Eliminating and from (3) and (4), we get
∂u ∂v
y + 2 zp 1 + p
=0
x + 2 zq 1 + q

or ( y + 2 zp ) (1 + q ) − ( x + 2 zq ) (1 + p ) = 0 
or ( y − x ) + 2 z ( p − q ) + qy − px = 0 
or p ( x − 2z ) + q (2z − y ) = y − x

which is a required p.d.e.

Example 4.11: Form a partial differential equation by eliminating the arbitrary function f from
( )
f x 2 + y 2 + z 2 , z 2 − 2 xy = 0.

Solution: Let
u = x 2 + y 2 + z 2 , v = z 2 − 2 xy (1)
Then, f ( u, v ) = 0 (2)
Differentiate (2) partially w.r.t. x
∂f  ∂u ∂u  ∂f  ∂v ∂v 
+ p + + p =0
∂u  ∂x ∂z  ∂v  ∂x ∂z  
∂f ∂f
∴ ( 2 x + 2 zp ) + ( −2 y + 2 zp ) = 0   (from (1)) (3)
∂u ∂v
Similarly differentiate (2) partially w.r.t. y
∂f  ∂u ∂u  ∂f  ∂v ∂v 
 + q +  + q = 0
∂u  ∂y ∂z  ∂v  ∂y ∂z 

∂f ∂f
∴ ( 2 y + 2 zq ) + ( −2 x + 2 zq ) = 0   (from (1)) (4)
∂u ∂v
∂f ∂f
Eliminating 2 and 2 from (3) and (4), we get
∂u ∂v
x + zp − y + zp
=0
y + zq − x + zq


or ( x + zp ) ( − x + zq ) − ( y + zq ) ( − y + zp ) = 0 
or − x + xzq − xzp + z 2 pq + y 2 − yzp + yzq − z 2 pq = 0 
2

or ( x + y ) zp − ( x + y ) zq = y 2 − x 2
or zp − zq = y − x 
is a required p.d.e.

4.4 Direct integration method for solutions


There are partial differential equations which can be solved by directly integrating partially w.r.t.
one variable keeping other constant.
Example 4.12: Solve ∂³z/∂x²∂y − 18x²y + sin(x − 2y) = 0.
Solution: The given equation is
  ∂³z/∂x²∂y = 18x²y − sin(x − 2y) (1)
Integrate (1) w.r.t. x keeping y constant
  ∂²z/∂x∂y = 6x³y + cos(x − 2y) + f(y)
Again, integrate w.r.t. x keeping y constant
  ∂z/∂y = (3/2)x⁴y + sin(x − 2y) + x f(y) + g(y)
Now, integrate w.r.t. y keeping x constant
  z = (3/4)x⁴y² + (1/2)cos(x − 2y) + x F(y) + G(y) + h(x)
where F(y) = ∫ f(y) dy, G(y) = ∫ g(y) dy
∴  z = (3/4)x⁴y² + (1/2)cos(x − 2y) + x F(y) + G(y) + h(x)
is the solution, where h ( x ) is arbitrary function of x and F ( y ), G ( y ) are arbitrary functions of y.
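
Since the arbitrary functions survive the differentiation, the solution can be verified directly. The following sketch (assuming SymPy is available; not part of the original solution) substitutes the answer back into the given equation.

    # Check of Example 4.12, assuming SymPy: the solution satisfies
    # ∂³z/∂x²∂y = 18x²y − sin(x − 2y) for arbitrary F, G and h.
    import sympy as sp

    x, y = sp.symbols('x y')
    F, G, h = sp.Function('F'), sp.Function('G'), sp.Function('h')
    z = sp.Rational(3, 4)*x**4*y**2 + sp.cos(x - 2*y)/2 + x*F(y) + G(y) + h(x)
    lhs = sp.diff(z, x, 2, y)                 # differentiate twice in x, once in y
    print(sp.simplify(lhs - (18*x**2*y - sp.sin(x - 2*y))))   # prints 0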
Example 4.13: Solve ∂²z/∂x² + z = 0, given that when x = 0, z = e^y and ∂z/∂x = 1.
Solution: The given equation is
  ∂²z/∂x² + z = 0 (1)
If z were a function of x only, the equation would be d²z/dx² + z = 0 and its solution would have been z = c1 cos x + c2 sin x.

But here z is a function of x and y, therefore c1 and c2 are functions of y.


∴ the solution of (1) is
z = f ( y ) cos x + g ( y ) sin x

∂z
∴ = − f ( y ) sin x + g ( y ) cos x
∂x 
∂z
As z = e y and = 1 when x = 0
∂x
∴ ey = f ( y)

and 1 = g ( y)

Hence, the solution is z = e y cos x + sin x.
Example 4.14: Solve ∂²z/∂x² = a²z given that when x = 0, ∂z/∂x = a sin y and ∂z/∂y = 0.
Solution: The given equation is
  ∂²z/∂x² = a²z (1)

If z were function of x only, the solution of (1) would have been z = c1e ax + c2 e − ax. But here z is a
function of x and y, therefore c1 and c2 are functions of y.
∴ the solution of (1) is
z = f ( y ) e ax + g ( y ) e − ax (2)
∂z
= af ( y ) e ax − ag ( y ) e − ax
∂x 
∂z
Given that = a sin y when x = 0
∂x
∴ a sin y = a ( f ( y ) − g ( y ) )

∴ f ( y ) = sin y + g ( y )

∴ from (2)
(
z = sin y e ax + g ( y ) e ax + e − ax (3))
or z = sin y e + 2 g ( y ) cosh ax
ax

∂z
∴ = cos y e + 2 g ′ ( y ) cosh ax
ax

∂y
∂z 
Given that = 0 when x = 0
∂y
∴ 0 = cos y + 2 g ′ ( y )

1
∴ g ′ ( y ) = − cos y
2 
1
Integrate g ( y ) = − sin y + c1
2 

∴ from (3), the solution is
  z = sin y [e^(ax) − (1/2)e^(ax) − (1/2)e^(−ax)] + c1(e^(ax) + e^(−ax))
    = sin y (e^(ax) − e^(−ax))/2 + 2c1 (e^(ax) + e^(−ax))/2
Hence, z = sin y sinh ax + c cosh ax 

where c is an arbitrary constant.

Example 4.15: Solve z x = 6 x + 3 y and z y = 3 x − 4 y.


Solution: The given equations are
z x = 6 x + 3 y (1)
z y = 3 x − 4 y (2)

Integrate (1) w.r.t. x keeping y constant


z = 3 x 2 + 3 xy + f ( y ) (3)
∴ z y = 3 x + f ′ ( y ) (4)

From (2) and (4)


f ′ ( y ) = −4 y

∴ f ( y ) = −2 y 2 + c

∴ from (3), the solution is
z = 3 x 2 + 3 xy − 2 y 2 + c 
where c is an arbitrary constant.
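
The two integrations can also be reproduced mechanically; the sketch below (assuming SymPy, and not part of the original text) builds z from z_x and confirms both given equations.

    # Check of Example 4.15, assuming SymPy.
    import sympy as sp

    x, y, c = sp.symbols('x y c')
    z = sp.integrate(6*x + 3*y, x) - 2*y**2 + c   # 3x² + 3xy from (1), −2y² + c from (2)
    print(sp.diff(z, x) - (6*x + 3*y))            # prints 0
    print(sp.diff(z, y) - (3*x - 4*y))            # prints 0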

Exercise 4.1

1. Form partial differential equations by  (f  )  x 2 + y 2 + ( z − c ) = a 2


2

eliminating the arbitrary constants:


(g)  ( x − a ) + ( y − b ) = z 2 cot 2 α where
2 2

(a)  z = ax + by2 2
α is a parameter.
(b)  z = ( x − a ) + ( y − b )
2 2
2. Find the differential equation of all spheres
x2 y2 of fixed radius r having their centres in the
 (c)  2 z = + x–y plane.
a2 b2
3. Form a partial differential equation
(d)  ax + by + cz = 1
by eliminating the arbitrary constants
( x − a) + ( y − b)
2 2
 (e)  + z2 = 1 a, b and c from z = ax + by + cxy.

4. Form partial differential equations by 6. Solve the following partial differential


eliminating the arbitrary functions from equations by direct integration
 (a)  z = xy + f x 2 + y 2 ( )  (a) 
∂2 z y
= +2
(b)  xyz = φ ( x + y + z ) ∂x∂y x
∂2 z
 (c)  z = ( x + y ) φ x 2 − y 2 ( ) (b) 
∂x∂y
= x2 + y2
(
(d)  z = f x 2 − y 2 ) ∂2 z
 (e) z = f ( x + it ) + g ( x − it ) where  (c)  = 4 x sin 3 xy
∂x∂y
i = −1 ∂3 z
(f )  z = f ( y + 2 x ) + g ( y − 3 x ) (d)  + 18 xy 2 + sin ( 2 x − y ) = 0
∂x 2 ∂y
(g)  u = f ( x + at ) + g ( x − at ) ∂3 z
 (e)  = cos ( 2 x + 3 y )
(h)  z = xf ( x + t ) + g ( x + t ) ∂x 2 ∂y
(i)  z = f ( x ) g ( y ) ∂2 z
7. Solve = z, given that when
  ( j)  z = xf1 ( ax + by ) + f 2 ( ax + by ) ∂y 2
∂z
5. Form partial differential equations by y = 0, z = e x and = e− x.
∂y
eliminating the arbitrary function from
∂2 z
( )
 (a)  f x 2 + y 2 , x 2 − z 2 = 0 8. Solve
∂x∂y
= x 2 y subject to the condi-

(b)  f ( x + y + z , x + y + z ) = 0 2 2 2
tions z ( x, 0 ) = x 2 and z (1, y ) = cos y.

(c)  f ( x + y , z − xy ) = 0
2 2
9. Solve
∂2 z
= sin x sin y, given that
∂x∂y
(d)  φ ( xyz , x + y + z ) = 0 ∂z
= −2sin y when x = 0 and z = 0 when
 z y ∂y
 (e)  f  3 ,  = 0 π
x x y is an odd multiple of .
2

Answers 4.1
1. (a)  2z = px + qy p 2 + q 2 = 4 z
(b)  (c)  px + qy = 2 z
(d)  r = 0  or  s = 0  or  t = 0 (e) 
p2 + q2 + 1 z 2 = 1 ( )
(f)  qx − py = 0 (g) 
p + q = tan 2 a 2 2

( )
2. p 2 + q 2 + 1 z 2 = r 2
3. r = 0  or t = 0  or  z = px + qy − sxy
4. (a)  py − qx = y 2 − x 2 (b)  x ( y − z ) p + y ( z − x ) q = z ( x − y )
(c)  yp + xq = z (d)  py + qx = 0
(e)  z xx + ztt = 0 (f)  r + s = 6t
(g)  utt = a 2 uxx (h) 
z xx − 2 z xt + ztt = 0
(i)  pq = zs (  b 2 r − 2abs + a 2 t = 0
j) 

5. (a)  yzp − xzq = xy (b) ( y − z) p + ( z − x)q = x − y


(c)  py − qx = y − x
2 2
(d)  x ( y − z ) p + y ( z − x ) q = z ( x − y )
(e)  px + qy = 3 z
y2
6. (a)  z = log x + 2 xy + f ( x ) + g ( y )
2
1 1
(b)  z = x 3 y + y 3 x + f ( x ) + g ( y )
3 3
−4
(c)  z = sin ( 3 xy ) + f ( x ) + g ( y )
9y
1
(d)  z = cos ( 2 x − y ) − x 3 y 3 + x f ( y ) + g ( y ) + h ( x )
4
−1
(e)  z = sin ( 2 x + 3 y ) + f ( x ) + x g ( y ) + h ( y )
12
where f, g and h are arbitrary functions.
7. z = cosh x e y + sinh x e − y
1 1
8. z = x 3 y 2 + x 2 − y 2 − 1 + cos y
6 6
9. z = (1 + cos x ) cos y

4.5 Partial differential Equations of the first order


Partial differential equation of the form F ( x, y, z , p, q ) = 0 is first order partial differential
∂z ∂z
­equation where p = , q = .
∂x ∂y
We shall be dealing with the methods to solve some particular types of partial differential
equations of first order.

4.5.1 Lagrange’s Method
This method is used to solve partial differential equations linear in first order partial derivatives,
i.e., equation of the form Pp + Qq = R where P, Q and R are functions of x, y and z, respectively.
This equation is called Lagrange equation.
Theorem 4.1:  The general solution of the equation Pp + Qq = R is given by φ ( u, v ) = 0
where φ is an arbitrary function of u and v and u ( x, y, z ) = c1 , v ( x, y, z ) = c2 are two linearly
dx dy dz
­independent solutions of equations = = .
P Q R
dx dy dz
Proof: u ( x, y, z ) = c1 is a solution of equation = = (4.4)
P Q R

∂u ∂u ∂u
We have du = dx + dy + dz = 0 (4.5)
∂x ∂y ∂z
Put each term of (4.4) equal to k and put values of dx, dy and dz from (4.4) in (4.5). Hence we
now have
∂u ∂u ∂u
P +Q +R = 0 (4.6)
∂x ∂y ∂z
Similarly, v ( x, y, z ) = c2 is solution of (4.4)
∂v ∂v ∂v
∴ P + Q + R = 0 (4.7)
∂x ∂y ∂z
Now, consider
  φ(u, v) = 0 (4.8)
Differentiating (4.8) partially w.r.t. x and y, we have
∂φ  ∂u ∂u  ∂φ  ∂v ∂v 
+ p +  + p  = 0 (4.9)
∂u  ∂x ∂z  ∂v  ∂x ∂z 
∂φ  ∂u ∂u  ∂φ  ∂v ∂v 
+ q + + q = 0 (4.10)
∂u  ∂y ∂z  ∂v  ∂y ∂z 
∂φ ∂φ
Since both and cannot be zero
∂u ∂v
∴ a non-trivial solution exists
∂u ∂u ∂v ∂v
+ p + p
∂x ∂z ∂x ∂z
∴ =0
∂u ∂u ∂v ∂v
+ q + q
∂y ∂z ∂y ∂z

 ∂u ∂u   ∂v ∂v   ∂u ∂u   ∂v ∂v 
∴  ∂x + ∂z p  + q −  + q  + p = 0
   ∂y ∂z   ∂y ∂z   ∂x ∂z  
 ∂u ∂v ∂v ∂u   ∂v ∂u ∂u ∂v  ∂u ∂v ∂u ∂v
or ∂ ∂ − ∂ ∂  p+  −  q = ∂y ∂x − ∂x ∂y (4.11)
 z y z y   ∂z ∂x ∂z ∂x 
Solving (4.6) and (4.7)
P Q R
= = = c (say ) (4.12)
∂u ∂v ∂u ∂v ∂u ∂v ∂u ∂v ∂u ∂v ∂u ∂v
− − −
∂y ∂z ∂z ∂y ∂z ∂x ∂x ∂z ∂x ∂y ∂y ∂x
Putting values from (4.12) in (4.11)
P Q R
− p− q = −
c c c 
or Pp + Qq = R (4.13)

Thus, φ (u, v ) = 0, where u ( x, y, z ) = c1 and v ( x, y, z ) = c2 are two independent solutions of


dx dy dz
= = , which is general solution of Pp + Qq = R.
P Q R
dx dy dz
Remark 4.3: Equations = = are called Lagrange’s auxiliary equations (A.E.) or
P Q R
­Lagrange’s subsidiary equations.

4.5.2 Geometrical Interpretation of Lagrange’s Method


dx dy dz
From = =
P Q R

we observe that Piˆ + Qjˆ + Rkˆ is tangent to the integral curves represented by it and let z = f ( x, y )
be integral surface represented by (4.13) then ∇f = p iˆ + q ˆj − kˆ is normal to this integral surface.
Now,
( P iˆ + Q ˆj + R kˆ ) . ( p iˆ + q ˆj − kˆ ) = Pp + Qq − R = 0 when Pp + Qq = R. 
Thus, the integral surfaces represented by solution of (4.13) contain integral curves represented
by solution of (4.4).

Example 4.16: Solve the differential equation y 2 p − xyq = x ( z − 2 y )


Solution: Lagrange’s auxiliary equations are
dx dy dz
= = (1)
y 2
− xy x ( z − 2 y )
From first and second members, xdx + ydy = 0
Taking integral         x 2 + y 2 = c1 (2)
from second and third members of (1)
zdy − 2 ydy = − ydz 
or ydz + zdy − 2 ydy = 0 
or ( )
d yz − y 2 = 0

Taking integral y ( z − y ) = c2 (3)

( )
Hence, general solution is f x 2 + y 2 , y ( z − y ) = 0
where f is an arbitrary function.
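
One branch of this general solution can be tested explicitly: writing y(z − y) = g(x² + y²), i.e. z = y + g(x² + y²)/y for an arbitrary g, the sketch below (assuming SymPy; an illustration, not part of the original solution) confirms that the given equation is satisfied.

    # Check of Example 4.16 on the branch z = y + g(x² + y²)/y, assuming SymPy.
    import sympy as sp

    x, y = sp.symbols('x y')
    g = sp.Function('g')
    z = y + g(x**2 + y**2)/y
    p, q = sp.diff(z, x), sp.diff(z, y)
    print(sp.simplify(y**2*p - x*y*q - x*(z - 2*y)))   # prints 0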

( )
Example 4.17: Find the general solution of the partial differential equation x + y 2 p + yq = z + x 2
Solution: Lagrange’s auxiliary equations are
dx dy dz
= = (1)
x+ y 2
y z + x2

from first and second members


y dx − x dy − y 2 dy = 0 
y dx − x dy
or − dy = 0
y2 
x 
or d − y = 0
y  
Taking integral
x
− y = c1
y

or x = y ( c1 + y ) (2)
from second and third members of (1)
zdy + x 2 dy = ydz

zdy + y ( c1 + y ) dy = ydz 
2 2
or (from (2))
ydz − zdy
− ( c1 + y ) dy = 0
2
or 2
y 
 z 1 3
or d   − ( c1 + y )  = 0
 y 3  
Take integral
z 1
− ( c1 + y ) = c2
3

y 3

3 y 2 z − y 3 ( c1 + y ) = 3c2 y 3
3
or

or 3 y 2 z − x 3 = 3c2 y 3  (from (2))
3
z 1x
or − = c2
y 3 y3

∴ general solution is
x z x3 
f  − y, − 3  = 0
y y 3y 

where f is an arbitrary function.

Example 4.18: Solve the following differential equation


( mz − ny ) p + ( nx − lz ) q = ly − mx 
Solution: Lagrange’s auxiliary equations are
dx dy dz
= = (1)
mz − ny nx − lz ly − mx

From (1)
ldx + mdy + ndz = 0 and xdx + ydy + zdz = 0 
Taking integrals
lx + my + nz = c1

x 2 + y 2 + z 2 = c2 

∴ general solution is
(
f lx + my + nz , x 2 + y 2 + z 2 = 0) 
where f is an arbitrary function.

Example 4.19: Solve p − x 2 = q + y 2


Solution: Differential equation can be written as
p − q = x2 + y2

Lagrange’s auxiliary equations are
dx dy dz
= = (1)
1 −1 x 2 + y 2
From first two members, dx + dy = 0
Take integrals x + y = c1 (2)
from (1)
3 x 2 dx − 3 y 2 dy − 3dz = 0 
Take integrals
x 3 − y 3 − 3 z = c2 (3)
from (2) and (3)
  x³ − y³ − 3z = f(x + y)

where f is an arbitrary function.
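
Solving the last relation for z gives z = (x³ − y³ − f(x + y))/3, and the sketch below (assuming SymPy; not part of the original text) confirms that p − q = x² + y² for an arbitrary f.

    # Check of Example 4.19, assuming SymPy, with f an arbitrary function.
    import sympy as sp

    x, y = sp.symbols('x y')
    f = sp.Function('f')
    z = (x**3 - y**3 - f(x + y))/3
    p, q = sp.diff(z, x), sp.diff(z, y)
    print(sp.simplify(p - q - (x**2 + y**2)))   # prints 0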

Example 4.20: Solve the following differential equation


(x 2
) ( )
− yz p + y 2 − zx q = z 2 − xy

Solution: Lagrange’s auxiliary equations are
dx dy dz
= 2 = 2 (1)
x − yz y − zx z − xy
2

From (1)
dx − dy dy − dz dz − dx
= =
( x − y)( x + y + z) ( y − z)( x + y + z) ( z − x)( x + y + z) 

dx − dy dy − dz dz − dx
∴ = = (2)
x− y y−z z−x
From first two members after integrating
ln x − y − ln y − z = ln c

x− y
∴ ln = ln c
y−z

x− y
∴ = ±c = c1
y−z

Similarly, from last two members of (2)
y−z
= c2
z−x 
x− y y−z
∴ f , =0
 y−z z−x 
is the general solution where f is an arbitrary function.

( ) ( )
Example 4.21: Solve x 2 − y 2 − yz p + x 2 − y 2 − zx q = z ( x − y )
Solution: Lagrange’s auxiliary equations are
dx dy dz
= 2 = (1)
x − y − yz x − y − zx
2 2 2
z ( − y)
x
From (1),
dx dy dz dx − dy xdx − ydy
= 2 = = = 2 (2)
x − y − yz x − y − zx z ( x − y ) z ( x − y )
2 2 2
(
x − y2 ( x − y ) )
From third and fourth members,
dz = dx − dy 

or dx − dy − dz = 0 
Take integrals
x − y − z = c1 (3)
From third and fifth members of (2),
2dz 2 ( xdx − ydy )
=
z x2 − y2
Take integrals
ln z 2 = ln x 2 − y 2 + ln c

2
z
∴ = c2 (4)
x2 − y2

Hence, general solution is


 z2 
f  x − y − z, 2 =0
 x − y2 

where f is an arbitrary function.

Example 4.22: Solve the following differential equation


x2 ( y − z ) p + y2 ( z − x ) q = z2 ( x − y )

Solution: Lagrange’s auxiliary equations are
dx dy dz
= = (1)
x2 ( y − z) y2 ( z − x ) z2 ( x − y )
From (1)
1 1 1
dx + dy + dz = 0
x y z

1 1 1
and dx + 2 dy + 2 dz = 0
x2 y z 
Taking their integrals
ln x + ln y + ln z = ln c

∴ xyz = c1

1 1 1
and −  + +  = −c2
x y z 
1 1 1
∴ + + = c2
x y z

∴ general solution is
1 1 1
xyz = f  + + 
x y z
where f is an arbitrary function.
y−z z−x x− y
Example 4.23: Solve p+ q=
yz zx xy
Solution: Differential equation is
1 1 1 1 1 1
 −  p + − q =  − 
 z y   x z   y x
Lagrange’s auxiliary equations are
dx dy dz
= = (1)
1 1 1 1 1 1
 −  x− z  − 
z y    y x

From (1)
dx + dy + dz = 0 
1 1 1
and dx + dy + dz = 0
x y z
Taking their integrals
x + y + z = c1 
log x + log y + log z = log c

∴ xyz = c2 
∴ General solution is
f ( x + y + z , xyz ) = 0

where f is an arbitrary function.

( )
Example 4.24: Find the general solution of the p.d.e. 2 y 2 + z p + ( y + 2 x ) q = 4 xy − z. Hence
obtain the particular solution which passes through
2 2
 1  1
(a)  the curve z = 1,  x +  −  y +  = 1
 2  2
(b)  the straight line z = 1, y = x
Solution: Lagrange’s auxiliary equations are
dx dy dz
= = (1)
2 y + z y + 2 x 4 xy − z
2

From (1)
dx dy dz dx + dz zdy + ydz
= = = = (2)
2 y + z y + 2 x 4 xy − z 2 y ( y + 2 x ) 2 x z + 2 y 2
2
( )
From first and last members,
dx zdy + ydz
=
1 2x 
∴ 2 xdx − ( ydz + zdy ) = 0

Take integral
x 2 − yz = c1 (3)
From second and fourth members of (2)
dx − 2 ydy + dz = 0 
Take integral
x − y 2 + z = c2 (4)

∴ general solution is
( )
f x 2 − yz , x − y 2 + z = 0

where f   is an arbitrary function.
(a)  When surface passes through curve
2 2
 1  1
z = 1,  x +  −  y +  = 1 (5)
 2  2
From (3) and (4)
x 2 − y = c1 
x − y 2 = c2 − 1 
Add
( )
x 2 + x − y 2 + y = c1 + c2 − 1

2 2
 1  1
or  x + 2  −  y + 2  = c1 + c2 − 1 (6)
   
\ from (5)
c1 + c2 = 2 
∴ from (3) and (4), particular solution is
x 2 − yz + x − y 2 + z = c1 + c2 = 2 
or x 2 − y 2 − yz + x + z = 2 
When surface passes through line z = 1, y = x then from (3) and (4)
x 2 − x = c1 
x − x 2 = c2 − 1 
∴ c1 + c2 = 1 

Add (3) and (4)


x 2 − yz + x − y 2 + z = c1 + c2 = 1 
∴ particular solution is
x 2 − y 2 − yz + x + z = 1 

Example 4.25: Find the equation of the surface which cuts orthogonally the family of spheres
x 2 + y 2 + z 2 = cy, c ≠ 0 and passes through the circle z = 1, x 2 + y 2 = 4.
Solution: Equation of given family of surfaces is
x2 z2
φ ( x, y, z ) = + y+ =c
y y


∂φ ˆ ∂φ ˆ ∂φ ˆ
∇φ = i+ j+ k
∂x ∂y ∂z 
2x ˆ  x2 z2   2z 
= i +  − 2 + 1 − 2  ˆj +   kˆ
y  y y   y  
is normal to surface.
∂z ˆ ∂z ˆ ˆ
Let surface z = f ( x, y ) is normal to given family of spheres, then i+ j − k is normal to
this surface. ∂x ∂y
But it is orthogonal to given system of spheres.
∴ (
∇φ ⋅ piˆ + qjˆ − kˆ = 0

)
∴ differential equation of surface orthogonal to given system of spheres is
2x  x2 z2  2z
p +  − 2 +1− 2  q − =0
y  y y  y

or (
2 xyp + − x 2 + y 2 − z 2 q = 2 yz ) 
It is Lagrange’s equation.
Lagrange’s auxiliary equations are
dx dy dz
= 2 = (1)
2 xy − x + y − z
2 2
2 yz
from first and third members
dx dz
=
x z 
zdx − xdz
or =0
z2 
x
∴ d  = 0
z 
x
∴ = c1 (2)
z
from second and third member of (1) taking x = c1 z
we have
(
2 yzdy = −c12 z 2 + y 2 − z 2 dz

)
2 yzdy − y 2 dz
or
z2
(
+ 1 + c1 dz = 0
2
)

 y2 
or ((
d   + d 1 + c12 z = 0
 z 
) )

∴ its solution is
y2

z
(
+ 1 + c12 z = c2 )


Put value of c1
y2  x2 
+ 1 + 2  z = c2
z  z 

x2 + y2 + z2
or = c2 (3)
z
∴ general solution is
 x x2 + y2 + z2 
F ,
z z  = 0

where F is an arbitrary function.
When the surface passes through circle z = 1, x 2 + y 2 = 4
then from (2)
c1 = x 

and from (3)
x 2 + y 2 + 1 = c2 

⇒ (
4 + 1 = c2   ∵ x 2 + y 2 = 4 )
∴ c2 = 5 

Put in (3) equation of surface passing through circle is x 2 + y 2 + z 2 = 5 z.


∂t ∂t ∂t
Example 4.26: Solve ( t + y + z ) + ( t + z + x ) + ( t + x + y ) = x + y + z.
∂x ∂y ∂z
Solution: Lagrange’s auxiliary equations are
dx dy dz dt
= = = (1)
t+ y+z t+z+x t+x+ y x+ y+z
From (1)
dx + dy + dz + dt dx − dy dy − dz dz − dx
= = = (2)
3( x + y + z + t ) − ( x − y ) − ( y − z ) − ( z − x )
From first and second members
( x − y ) ( dx + dy + dz + dt ) + 3 ( x + y + z + t ) ( dx − dy ) = 0 
( x − y ) ( dx + dy + dz + dt ) + 3 ( x − y ) ( x + y + z + t ) ( dx − dy ) = 0 
3 2
or

or d (( x − y ) ( x + y + z + t )) = 0 
3

∴ its solution is
( x − y ) ( x + y + z + t ) = c1 
3

Similarly, from first and third members of (2) solution is
( y − z ) ( x + y + z + t ) = c2 
3


and from first and fourth member of (2) solution is

( z − x ) ( x + y + z + t ) = c3 
3

Hence, general solution is

f (( x − y ) ( x + y + z + t ) , ( y − z ) ( x + y + z + t ) , ( z − x ) ( x + y + z + t )) = 0 
3 3 3

where f is an arbitrary function.

Exercise 4.2

Solve the following partial differential ∂u ∂u ∂u


­equations: 20. x +y +z = xyz
∂x ∂y ∂z
1. yq − xp = z
2. yzp − xzq = xy
( ) ( )
21. x y 2 − z 2 p + y z 2 − x 2 q = z x 2 − y 2 ( )
y2 z
22. ( z − y) p + ( x − z)q = y − x
3. p + xzq = y 2 23. ( y + z) p − (x + z)q = x − y
x
4. px + qy = az , a ≠ 0 24. ( y + z) p + ( z + x)q = x + y
5. p tan x + q tan y = tan z 25. ( y + zx ) p − ( x + yz ) q = x 2 − y 2
6. y 2 zp − x 2 zq = x 2 y
26. Find the general solution of the partial dif-
7. pz − qz = z 2 + ( x + y )
2
∂z
ferential equation y ( x − z ) + (z2 - xz -
8. p − q = ln ( x + y ) ∂z ∂x
9. zp + yq = x x2) − y ( 2 x − z ) = 0. Hence obtain the
∂y
(
10. z z 2 + xy ) ( px − qy ) = x 4 particular solution which passes through
the ellipse z = 0, 2 x 2 + 4 y 2 = 1.
11. px ( z − 2 y ) = ( z − qy ) ( z − y
2 2
− 2 x3 ) 27. Find the equation of the surface which
12. xzp + yzq = xy cuts orthogonally the system of surfaces
( )
13. x 2 − y 2 − z 2 p + 2 xyq = 2 xz 2 xz + 3 yz = c ( z + 2 ), where c is an ar-
14. z ( xp − yq ) = y 2 − x 2 bitrary constant and passes through the
circle z = 0, x 2 + y 2 = 9.
15. x 2 p + y 2 q = ( x + y ) z
28. Find the equation of the system of sur-
16. 2 xzp + 2 yzq = z 2 − x 2 − y 2
faces which cut orthogonally the family
∂z ∂z
(
17. y 2 + z 2 )
∂x
− xy + zx = 0
∂y
( )
of cones z 2 = c x 2 + y 2 . Obtain the par-
ticular surface which passes through the
18. x ( y − z ) p + y ( z − x ) q = z ( x − y )
circle z = 3, x 2 + y 2 = 9.
19. px ( x + y ) − qy ( x + y )
+ (2x + 2 y + z ) ( x − y ) = 0

Answers 4.2

 y x
1. f  xy,  = 0
 z
16. (x 2
)
+ y2 + z2 = x f  
 y

(
2. f x 2 + y 2 , y 2 + z 2 = 0 )  y
17. x 2 + y 2 + z 2 = f  
z
3. f ( x 3
− y3 , x 2 − z 2 )=0 18. x + y + z = f ( xyz )
x y 
4. f  ,  = 0
a
19. ( x + y ) ( x + y + z ) = f ( xy )
y z  x x 
20. f  , , xyz − 3u  = 0
 sin x sin x  y z 
5. f  , =0
 sin y sin z  (
21. f x 2 + y 2 + z 2 , xyz = 0 )
(
6. f x + y , y + z
3 3 2 2
)=0
22. f ( x + y + z, x 2
+ y + z2 = 0
2
)
(
7. f x + y, 2 x − log x + y + z + 2 xy = 0
2 2 2
) 23. f ( x + y + z, x 2
+y −z
2 2
)=0
8. f ( x + y, x ln ( x + y ) − z ) = 0 x− y
 = ( x − y) ( x + y + z)
2
24. f 
z −
9. ln y ± sinh −1 = f x2 − z2 ( )  y z 
x −z ( )
2 2
25. f x 2 + y 2 − z 2 , xy + z = 0
(
10. f xy, x − z − 2 xyz
4 4 2
)=0  z2 
z − y2  y 26. f  x 2 + y 2 + z 2 , x 2 − xz +  = 0;
11. + x2 = f    2
x z
2 x 2 + 4 y 2 + 3 z 2 + 2 xz = 1
x
12. f   = xy − z 2
 y ( )
27. 3 x 2 + y 2 − z 2 − z 3 = 27

 y x2 + y2 + z2  x 
28. f  , x 2 + y 2 + z 2  = 0;
13. f  ,
z z  = 0  y 

(
14. f xy , x 2 + y 2 + z 2 = 0 ) x 2 + y 2 + z 2 = 18

1 1 x− y
15. f  − , =0
x y z 

We now explain a general method for finding the complete solution of general first order partial
differential equations.

4.5.3 Charpit’s Method
Consider the equation
f ( x, y, z , p, q ) = 0 (4.14)

Since z depends on x and y


∂z ∂z
dz = dx + dy = pdx + qdy (4.15)
∂x ∂y
Now, if we can find
φ ( x, y, z , p, q ) = 0 (4.16)
such that its solution is also solution of (4.14) then we can solve (4.14) and (4.16) for p and q and
substitute in (4.15) and then solving it we can find complete solution.
To obtain f , we differentiate (4.14) and (4.16) partially w.r.t. x and y
f x + pf z + f p px + f q qx = 0 (4.17)
φx + pφz + φ p px + φq qx = 0 (4.18)
f y + qf z + f p p y + f q q y = 0 (4.19)
φ y + qφz + φ p p y + φq q y = 0 (4.20)
∂  ∂z  ∂  ∂z 
Use qx =  = = p y and eliminate px , p y and q y from (4.17) to (4.20)
∂x  ∂y  ∂y  ∂x 
f p f q 0 f x + pf z
φp φq 0 φx + pφz
=0
0 fp fq f y + qf z
0 φp φq φ y + qφz

Expand in terms of elements of last column
− [ f x + pf z ]φ p  f pφq − f qφ p  + [φx + pφz ] f p  f pφq − f qφ p  −  f y + qf z  φq  f pφq − f qφ p 

+ φ y + qφz  f q  f pφq − f qφ p  = 0

Divide by  f pφq − f qφ p ≠ 0   (∵ solution exists)

φ p ( f x + pf z ) − f p (φx + pφz ) + φq ( f y + qf z ) − f q (φ y + qφz ) = 0



or − f pφx − f qφ y + ( − pf p − qf q ) φz + ( f x + pf z ) φ p + ( f y + qf z ) φq = 0

It is Lagrange’s equation for function φ with x, y, z , p and q as independent variables.
Lagrange’s auxiliary equations are
dx dy dz dp dq
= = = =
− f p − f q − pf p − qf q f x + pf z f y + qf z

These equations are called Charpit’s auxiliary equations.
An integral of these equations involving p or q or both can be taken as the required relation
(4.16). Then, solve (4.14) and (4.16) for p and q and put values of p and q in (4.15). After taking
its integral, we get the complete solution.
We now discuss four standard forms of f ( x, y, z , p, q ) = 0 so that finding the solution be easier.

4.5.4 Standard Form f ( p,q) = 0


In this case, given partial differential equation does not contain x, y, z explicitly.
∴ f x + pf z = 0, f y + qf z = 0

∴ Charpit’s auxiliary equations give solutions p = a, q = b

Now, f ( p, q ) = 0

⇒ f ( a, b ) = 0 
which gives b = f (a)

Now, dz = pdx + qdy = adx + bdy 

Take integrals
z = ax + by + c 

∴ complete solution of f ( p, q ) = 0 is
z = ax + by + c 
where f ( a, b ) = 0

i.e., z = ax + φ ( a) y + c 

where a and c are arbitrary constants.

4.5.5 Standard Form f ( z, p, q) = 0


In this case, given partial differential equation does not contain x and y explicitly.
We assume
z = f ( X ),

where X = x + ay.
Then,
∂z dz ∂X dz
p= = =
∂x dX ∂x dX 
∂z dz ∂X dz
q= = =a
∂y dX ∂y dX

∴ partial differential equation becomes
 dz dz 
f  z,
 dX
,a  =0
dX  
It is ordinary differential equation of first order. After solving it, put X = x + ay; hence, we shall
get the complete solution.

4.5.6 Standard Form f ( x,p) = f(y,q)


In this form, partial differential equation does not contain z explicitly and terms of x and p can be
separated from terms of y and q.
Let
f ( x, p ) = φ ( y, q ) = a.

Solving f ( x, p ) = a for p, we have p = F ( x, a )
Solving φ ( y, q ) = a for q, we have q = ψ ( y, a )
∴ dz = pdx + qdy = F ( x, a ) dx + ψ ( y, a ) dy

Take integrals
z = ∫ F ( x, a ) dx + ∫ψ ( y, a ) dy + b

which is the complete solution where a and b are arbitrary constants.

4.5.7 Clairaut's Equation
Any first order partial differential equation of the form z = px + qy + f(p, q) is called Clairaut's
equation.
This can be written as
f = z − px − qy 
∴ f x = − p, f y = − q, f z = 1

∴ f x + pf z = 0, f y + qf z = 0

∴ two solutions from Charpit’s equations will be p = a and q = b.
∴ complete solution is
z = ax + by + f ( a, b )

where a and b are arbitrary constants.

Example 4.27: Solve by Charpit’s method


  (i)  2 zx − px 2 − 2qxy + pq = 0
( )
 (ii)  p 2 + q 2 y = qz
(iii)  z 2 = pqxy
  (iv)  q + xp = p 2
Solution:
(i) Let f ≡ 2 zx − px 2 − 2qxy + pq = 0 (1)
Charpit’s auxiliary equations are
dp dq dz dx dy
= = = =
f x + pf z f y + qf z − pf p − qf q − f p − f q

Now,
f y + qf z = −2qx + 2qx = 0

∴ dq = 0 
∴ q=a
∴ from (1),
2 zx − px 2 − 2axy + ap = 0

2 zx − 2axy
∴ p=
x2 − a 
2 x ( z − ay )
dz = pdx + qdy = dx + ady
x2 − a 
dz − ady 2x
or = 2 dx
z − ay x −a 
Take integrals
log z − ay = log x 2 − a + log c

z − ay
∴ = ±c = b
x2 − a 
∴ (
z = ay + b x 2 − a )
where a and b are arbitrary constants.
(ii) ( )
f ≡ p 2 + q 2 y − qz = 0 (1)
f x = 0, f y = p 2 + q 2 , f z = −q, f p = 2 py, f q = 2qy − z

∴ Charpit’s auxiliary equations
dx dy dz dp dq
= = = =
− f p − f q − pf p − qf q f x + pf z f y + qf z

are
dx dy dz dp dq
= = = = 2
−2 py z − 2qy −2 p y + qz − 2q y − pq p
2 2

from last two members
pdp + qdq = 0 

Take integrals
p 2 + q 2 = a 2 (2)
∴ from (1),
a2 y
q=
z 

∴ from (2),
a2 y 2 a 2
p = ±a 1 − 2
=± z − a2 y 2
z z
Now,
a z 2 − a2 y 2 a 2 ydy
dz = pdx + qdy = ± dx +
z z 
zdz − a 2 ydy
or ± = adx
z 2 − a2 y 2

Take integrals
± z 2 − a 2 y 2 = ax + b 
z 2 − a 2 y 2 = ( ax + b )
2
or

z 2 = a 2 y 2 + ( ax + b )
2
or

where a and b are arbitrary constants.
(iii) f ≡ z 2 − pqxy = 0 (1)
f x = − pqy, f y = − pqx, f z = 2 z, f p = −qxy, f q = − pxy

∴ Charpit’s auxiliary equations
dx dy dz dp dq
= = = =
− f p − f q − pf p − qf q f x + pf z f y + qf z

are
dx dy dz dp dq
= = = =
qxy pxy 2 pqxy − pqy + 2 pz − pqx + 2qz

From these equations, we have
1 1 1 1
dx − dy dq − dp
x y q p
=
qy − px qy − px

dx dy dq dp
∴ − = −
x y q p

Take integrals
log x − log y = log q − log p + log c1

px
∴ = ±c1 = a
qy

∴ px = aqy (2)
∴ from (1),
z 2 = aq 2 y 2 


∴ z = cqy,

where c=± a
from (2),
px = c 2 qy 
z cz
∴ q= , p=
cy x

cz z
∴ dz = pdx + qdy = dx + dy
x cy

dz dx 1 dy
∴ =c +
z x c y

Take integrals
1
log z = c log x + log y + log c1
c 
= log x + log y + log c1
c 1/ c

∴ z = c1 x y
c 1/ c

∴ z = ±c1 x y = b x c y1/ c
c 1/ c
( b = ±c1 ) 
where b and c are arbitrary constants, c ≠ 0.
(iv) q + xp = p 2 (1)
or f ≡ q + xp − p 2 = 0 
f x = p, f y = 0, f z = 0, f p = x − 2 p, f q = 1.

∴ Charpit’s auxiliary equations


dx dy dz dp dq
= = = = ,
− f p − f q − pf p − qf q f x + pf z f y + qf z
become
dx dy dz dp dq
= = = = ,
2 p − x −1 2 p − px − q
2
p 0
dy dp
from =
−1 p

Taking integrals
log p = − y + c1

−y
∴ p = a1e where a1 = e c1 
  
∴ p = ± a1e − y = ae − y 

\ from (1),
q = a 2 e −2 y − axe − y 
∴ dz = pdx + qdy = ae − y dx + a 2 e −2 y dy − axe − y dy 
 a2 
= d  axe − y − e −2 y 
 2 
Take integrals
a 2 −2 y
z = axe − y − e +b
2 
where a and b are arbitrary constants.
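
Complete integrals obtained by Charpit's method are easy to test by substitution. The sketch below (assuming SymPy; an illustration, not part of the original solution) checks part (i): z = ay + b(x² − a) satisfies 2zx − px² − 2qxy + pq = 0.

    # Check of Example 4.27(i), assuming SymPy.
    import sympy as sp

    x, y, a, b = sp.symbols('x y a b')
    z = a*y + b*(x**2 - a)
    p, q = sp.diff(z, x), sp.diff(z, y)
    print(sp.expand(2*z*x - p*x**2 - 2*q*x*y + p*q))   # prints 0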

Example 4.28: Find a complete integral of the p.d.e. ypq + xp 2 = 1. Hence, find a particular
­solution passing through the curve x = 0, y − z = 0.
Solution: p.d.e. is
f ≡ ypq + xp 2 − 1 = 0 (1)
f x = p 2 , f y = pq, f z = 0, f p = yq + 2 xp, f q = yp

∴ Charpit’s auxiliary equations are
dx dy dz dp dq
= = = =
− f p − f q − pf p − qf q f x + pf z f y + qf z

are
dx dy dz dp dq
= = = =
− yq − 2 xp − py p ( − yq − 2 xp ) − pqy p 2 pq

from last two members
pdq − qdp = 0 
pdq − qdp
or =0
p2 
q
or d  = 0
 p 
q
∴ =a
p

⇒ q = ap

∴ from (1)
ayp 2 + xp 2 = 1 
1
∴ p=±
ay + x 

1 a
∴ p=± , q=± 
ay + x x + ay

∴ dz = pdx + qdy = ±
dx + ady
x + ay
(
= d ±2 x + ay )

Integrate
z + b = ±2 x + ay 
( z + b) = 4 ( x + ay ) 
2
or (2)
where a and b are arbitrary constants.
(2) is a complete integral.
Any point on curve x = 0, y − z = 0 is ( 0, t , t ).
It lies on (2) when ( t + b ) = 4 at
2

i.e., t 2 + 2 ( b − 2a ) t + b 2 = 0

It has equal roots
4 ( b − 2a ) − 4b 2 = 0
2


∴ b − 2a = −b 
⇒ a = b

Put value of b in (2)


( z + a) = 4 ( x + ay ) (3)
2

Differentiating partially w.r.t. ‘a’
2 ( z + a) = 4 y

∴ a = 2y − z 
Put this value in (3), particular solution passing through given curve is

(z + 2y − z) = 4  x + ( 2 y − z ) y 
2


or (
4 y 2 = 4 x + 2 y 2 − yz )
or x + y − yz = 0. 
2
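
The particular solution can be confirmed by substitution: on x + y² − yz = 0 we have z = x/y + y, and the sketch below (assuming SymPy; not part of the original text) shows that ypq + xp² = 1 there.

    # Check of the particular solution of Example 4.28, assuming SymPy.
    import sympy as sp

    x, y = sp.symbols('x y')
    z = x/y + y                          # solve x + y² − yz = 0 for z
    p, q = sp.diff(z, x), sp.diff(z, y)
    print(sp.simplify(y*p*q + x*p**2))   # prints 1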

Example 4.29: Solve the following partial differential equations


  (i)  p + q =1
 (ii)  x 2 p 2 + y 2 q 2 = z 2

(iii)  ( y − x ) ( qy − px ) = ( p − q )
2

( x + y ) ( p + q) + ( x − y ) ( p − q)
2 2
  (iv)  =1

Solution: (i) Equation p + q = 1  is f ( p, q ) = p + q − 1 = 0.


Its solutions is z = ax + by + c where

f ( a, b ) = a + b − 1 = 0

( )
2
or b = 1− a

∴ complete solution is

( )
2
z = ax + 1 − a y+c

where a and c are arbitrary constants.
(ii) x 2 p2 + y 2 q2 = z 2

Let Z = log z , Y = log y , X = log x .

∂z dz ∂Z dX dz dX
∴ p= = = P
∂x dZ ∂X dx dZ dx 
1 z ∂Z
= zP = P where P =
x x    ∂X 

or x 2 p2 = z 2 P 2 
∂Z
Similarly, y 2 q 2 = z 2 Q 2     where Q =
∂Y 
∴ differential equation becomes

z 2 P 2 + z 2Q 2 = z 2

or f ( P, Q ) = P + Q 2 − 1 = 0
2

Its solution is
Z = aX + bY + c 

where f ( a, b ) = a 2 + b 2 − 1 = 0

∴ b = ± 1− a  2

\ complete solution is
Z = aX ± 1 − a 2 Y + c

or log z = a log x ± 1 − a 2 log y + c

where a and c are arbitrary constants.

(iii)  ( y − x ) ( qy − px ) = ( p − q )
2

Put X = x + y, Y = xy
∂z ∂z ∂X ∂z ∂Y
∴ p= = + = P + yQ
∂x ∂X ∂x ∂Y ∂x 
∂z ∂z ∂X ∂z ∂Y
q= = + = P + xQ
∂y ∂X ∂y ∂Y ∂y

∂z ∂z
where P= ,Q=
∂X ∂Y 
( y − x ) ( qy − px ) = ( y − x ) [ yP + xyQ − xP − xyQ ] = ( y − x ) P
2


( p − q) = ( y − x) Q
2 2 2
and

∴ differential equation becomes
P = Q2 
or f ( P, Q ) = P − Q 2 = 0

Its solution is
z = aX + bY + c = a ( x + y ) + bxy + c

where f ( a, b ) = a − b 2 = 0

⇒ a = b2
\ complete solution is
z = b 2 ( x + y ) + bxy + c

where b and c are arbitrary constants.
(iv)  ( x + y ) ( p + q ) + ( x − y ) ( p − q ) = 1
2 2

Put X 2 = x + y, Y 2 = x − y
∂z ∂z ∂X ∂z ∂Y 1 1
then, p= = + = P+ Q
∂x ∂X ∂x ∂Y ∂x 2 X 2Y 
∂z ∂z ∂X ∂z ∂Y 1 1
q= = + = P− Q
∂y ∂X ∂y ∂Y ∂y 2 X 2Y

∂z ∂z
where = P, =Q
∂X ∂Y 
∴ equation becomes
2 2
1  1 
X 2  P  +Y 2  Q =1
X  Y  
or P + Q −1 = 0 
2 2

or f ( P, Q ) = P 2 + Q 2 − 1 = 0


Its solution is
z = aX + bY + c
= ±a x + y ± b x − y + c 
f ( a, b ) = a 2 + b 2 − 1 = 0
where 
\ b = ± 1− a  2

∴ solution is
z = ±a x + y ± 1 − a2 x − y + c

where a and c are arbitrary constants.

Example 4.30: Find the complete solutions of the following partial differential equations
(
  (i)  z 2 p 2 + q 2 + 1 = a 2 )
 (ii)  z 2
(p x 2 2
+q 2
) =1
(iii)  p x = z ( z − qy )
2 2

(
  (iv)  z 2 p 2 + q 2 + 1 = 0 )
Solution: (i) Let
z = f (X)

where X = x + by 
∂z dz ∂X dz
then, p= = =
∂x dX ∂x dX 
∂z dz ∂X dz
q= = =b
∂y dX ∂y dX

∴ differential equation reduces to
 dz  2 2  dz 
2

z 2   + b   + 1 = a 2
 dX   dX   
2
 dz  a2 − z 2
or
 
(
 dX  1 + b =
2

z2 
)
dz a2 − z 2
or ± 1 + b2 =
dX z 
1 + b2 z
∴ dX = ± dz
a2 − z 2 
Take integrals
c + X = ∓ a2 − z 2 ⋅ 1 + b2 

or ( x + by + c )
2
(
= 1 + b2 )(a 2
− z2 )
or (1 + b ) ( a
2 2
−z 2
) = ( x + by + c ) 2



where b and c are arbitrary constants.


(ii) (
z 2 p2 x 2 + q2 = 1 ) 
Let z = f (X)

where X = ln x + ay

∂z dz ∂X 1 dz dz
then, p= = = ⇒  px =
∂x dX ∂x x dX    dX 

∂z dz ∂X dz
q= = =a
∂y dX ∂y dX

∴ differential equation reduces to

 dz  2 2  dz 
2

z 2   + a    =1
 dX   dX  

2
 dz 
or (
z 2 1 + a2   =1
 dX 
)

2
 dX 
or  dz  = z 1 + a
 
2 2
( )

dX
∴ = ± 1 + a2 z
dz 
∴ 2 dX = ±2 1 + a 2 z dz 

Take integrals
b + 2 X = ± 1 + a2 z 2 

( b + 2 ln x + 2ay ) = (1 + a ) z
2 2 2
or

where a and b are arbitrary constants.
(iii) p 2 x 2 = z ( z − qy )

Let z = f (X)

where X = ln x + a ln y .

∂z dz ∂X 1 dz dz
∴ p= = = ⇒  px =
∂x dX ∂x x dX    dX 

∂z dz ∂X a dz dz
q= = = ⇒  qy = a
∂y dX ∂y y dX    dX 
∴ differential equation reduces to
2
 dz   dz 
 dX  = z  z − a dX 
   
2
 dz  dz
∴  dX  + az dX − z = 0
2

  
 
dz − az ± a 2 z 2 + 4 z 2  − a ± a + 4  z
2
−a ± a 2 + 4
∴ = = = kz where k = 
dX 2 2   2

dX 1
∴ =
dz kz 
1
⇒ dX = dz
kz 
Take integrals
1
b+ X = ln z
k 
or k ( b + ln x + a ln y ) = ln z

−a ± a + 4 2
or b + ln x + a ln y  = ln z
2 
where a and b are arbitrary constants.
(iv) (
z 2 p2 + q2 + 1 = 0 ) 
Let z = f (X)

where X = x + ay 
∂z dz ∂X dz
∴ p= = =
∂x dX ∂x dX 
∂z dz ∂X dz
q= = =a
∂y dX ∂y dX

∴ differential equation reduces to
  dz 
2

(
z 2  1 + a2  
 dX 
) + 1 = 0
  
z = 0 is the singular solution.

for complete solution


2

(1 + a )  dX
2dz 


+1 = 0

dz 1
⇒ =± −
dX 1 + a2 

1
⇒ dz = ± − dX 
1 + a2
Take integrals

( )
± − 1 + a 2 z = X + b = x + ay + b

or ( ) z = ( x + ay + b )
− 1+ a 2 2 2


( x + ay + b ) + (1 + a ) z = 0 
2 2 2
or

where a and b are arbitrary constants.

Example 4.31: Find the complete solution of the following partial differential equations
  (i)  p 2 − q 2 = x − y
( )
 (ii)  z 2 p 2 + q 2 = x 2 + y 2
(iii)  p − q = x + y 2
2

( )
  (iv)  p 2 q 2 = 9 p 2 y 2 x 2 + y 2 − 9 x 2 y 2
Solution: (i) Given differential equation can be written as
p 2 − x = q 2 − y = a    (say)

∴ p = ± a + x, q = ± a + y 

Now, dz = pdx + qdy = ±  a + xdx ± a + ydy 



Take integrals
2
z = ± ( a + x ) ± ( a + y )  + b
3/ 2 3/ 2

3 

where a and b are arbitrary constants.
( )
(ii)  z 2 p 2 + q 2 = x 2 + y 2

z2
Put Z=
2 
∂z dz ∂Z 1 ∂Z P ∂Z
∴ p= = = = where P =
∂x dZ ∂x z ∂x z    ∂x 

∂z dz ∂Z 1 ∂Z Q ∂Z
q= = = =    where Q = 
∂y dZ ∂y z ∂y z ∂y

∴ p2 z 2 + q2 z 2 = P 2 + Q 2 

∴ differential equation reduces to

P 2 − x2 = y2 − Q2 = a
   (say)

∴ P = ± x2 + a, Q = ± y2 − a 

∴ dZ = Pdx + Qdy = ± ( x 2 + a dx ± y 2 − a dy )
Take integrals

  x x 2 + a a   y y 2 − a a 
Z = ± 
 
+ log x + x 2 + a( ) ± (
− log y + y 2 − a )  + b1

2 2   2 2
 

{ (
z 2 = ±  x x 2 + a + a log x + x 2 + a
 )} ± { y ( )}
y 2 − a − a log y + y 2 − a  + b   ( 2b1 = b) 

where a and b are arbitrary constants.
(iii)   p − q = x 2 + y 2
⇒ p − x 2 = q + y 2 = a   (say)
∴ p = x 2 + a, q = a − y 2 
( )
dz = pdx + qdy = x 2 + a dx + a − y 2 dy ( ) 
Take integrals
x3 y3
z= + ax + ay − + b1
3 3 
or 3 z = x 3 − y 3 + 3a ( x + y ) + b (b = 3b1 ) 
where a and b are arbitrary constants.
( )
(iv)  p 2 q 2 = 9 p 2 y 2 x 2 + y 2 − 9 x 2 y 2
Divide by p 2 y 2

q2 x2

y2
= 9 x 2
(
+ y 2
− 9 )
p2 

 1  q2
⇒ 9 x 2  −1 + 2  = 9 y 2 − 2 = a   (say)
 p  y

1 a

p 2 (
= 1 + 2 , q2 = y 2 9 y 2 − a 
9x
)
3x
∴ p=±
9x2 + a 

q = ± y 9 y2 − a 
 3x 
dz = pdx + qdy = ±  dx ± y 9 y 2 − a dy 
 9x + a
2

Take integrals

1 1 3

z = ±
3
9x2 + a ±
27
(
9 y2 − a ) 2
+b
  
where a and b are arbitrary constants.

Example 4.32: Solve the partial differential equations


  (i)  z = px + qy + √(1 + p² + q²)
 (ii)  4 xyz = pq + 2 px 2 y + 2qxy 2
(iii)  pqz = p²(xq + p²) + q²(yp + q²)
Solution:
(i)     z = px + qy + √(1 + p² + q²)
This equation is in Clairaut's form.
∴ solution is
  z = ax + by + √(1 + a² + b²)

where a and b are arbitrary constants.
(ii)     4 xyz = 2 px 2 y + 2qxy 2 + pq
Divide by 4xy
x y pq
z = p+ q+
2 2 4 xy

Put x2 = X , y2 = Y 
∂z ∂z dX ∂z
∴ p= = = 2 xP where =P
∂x ∂X dx    ∂X 
∂z ∂z dY ∂z
q= = = 2 yQ where =Q
∂y ∂Y dy ∂Y 
  

\ equation reduces to
z = x 2 P + y 2 Q + PQ = PX + QY + PQ 
which is in Clairaut's form
\ solution is
z = aX + bY + ab = ax 2 + by 2 + ab 
where a and b are arbitrary constants.
(iii)  pqz = p²(xq + p²) + q²(yp + q²)
∴  z = px + qy + (p⁴ + q⁴)/(pq)

which is in Clairaut's form
\ solution is
  z = ax + by + (a⁴ + b⁴)/(ab)
where a and b are arbitrary constants.
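
For Clairaut-type equations the check is immediate, because the complete integral has p = a and q = b. The sketch below (assuming SymPy; an illustration only, not part of the original text) verifies part (i).

    # Check of Example 4.32(i), assuming SymPy.
    import sympy as sp

    x, y, a, b = sp.symbols('x y a b')
    z = a*x + b*y + sp.sqrt(1 + a**2 + b**2)
    p, q = sp.diff(z, x), sp.diff(z, y)
    print(sp.simplify(z - (p*x + q*y + sp.sqrt(1 + p**2 + q**2))))   # prints 0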

Exercise 4.3

1. Solve the following partial differential (c)  p 2 − q 2 = 1


equations by Charpit’s method to obtain
(d)  p 2 − 3q 2 = 5
complete integrals
(a)  px + qy = pq (e)  p 2 + q 2 = npq
(b)  2 z + p 2 + qy + 2 y 2 = 0 4. Find the complete solutions of the follow-
ing partial differential equations
( )
(c)  p 2 + q 2 x = pz
(a)  p (1+ q ) = qz
(d)  2 ( z + xp + yq ) = yp 2
(b)  p ( 3 + q ) = 2qz
(e)  pxy + pq + qy = yz
(c)  zpq = p + q
(f)  p = ( z + qy )
2

(
(g)  xp + 3yq = 2 z − x 2 q 2 ) (d)  p 2 z 2 + q 2 = p 2 q
(e)  z = p 2 + q 2
(h)  z = p 2 x + q 2 y
2. Find the complete integral of the partial ( )
(f)  p 1 + q 2 = q ( z − c )
differential equation px + q 2 y = z. Hence (g)  q = z p 2 1 − p 2
2 2
( )
find a particular solution which passes
through the curve x = 1, y + z = 0. (
(h)  p 1 − q 2
) = q (1 − z )
3. Find the complete integrals of the follow- 5. Solve the following partial differential
ing partial differential equations equations to obtain complete integrals
(a)  pq = p + q (a)  p 2 + q 2 = x + y
(b)  2 p + 3 q = 6 x + 2 y
(b)  pq + p + q = 0

(c)  yp + xq + pq = 0 6. Find the complete integrals of the follow-


(d)  yp = 2 yx + log q ing partial differential equations
(
(e)  z p 2 − q 2 = x − y ) (a)  z = px + qy + 2 pq
1 1
(f)  p − q + 3 x = 0 (b)  px + qy = z − 3 p 3 q 3
(g)  z = a ( x − y ) − ( cos x + cos y ) + b (c)  z = px + qy + ln pq
(d)  ( pq − p − q ) ( z − px − qy ) = pq
(h)  p 2 q 2 + x 2 y 2 = x 2 q 2 x 2 + y 2 ( )
(e)  p 2 q 2 ( px + qy − z ) = 2
(i)  p 2 + q 2 = z 2 ( x + y )
(f)  pq ( px + qy − z ) = 1
3

(
(j)  zpy 2 = x y 2 + z 2 q 2 ) (g)  ( p − q ) ( z − px − qy ) = 1
(h)  ( px + qy − z ) = 1 + p 2 + q 2
2

Answers 4.3

Here, a and b are arbitrary constants. (d)  z = a tan ( x + ay + b )


1. (a)  2az = ( ax + y ) + b ( )
2
(e)  4 1 + a 2 z = ( x + ay + b) 2
(b)  y 2 ( x − a ) + y 2 + 2 z  = b
2
(f)  4 ( az − ac − 1) = ( x + ay + b )
2
 
(c)  z 2 = a 2 x 2 + ( ay + b ) (g)  z 2 − a 2 = ( x + ay + b )
2 2

(d)  4 y 3 z = 4 y ( ax + by ) − a 2 (h)  4 (1 − a + az ) = ( x + ay + b )
2

(e)  log z − ax = y − a log a + y + b 2 3 3



(f)  ( x − b ) ( a + yz ) + y = 0 5. (a)  z = ± ( x + a ) 2 ± ( y − a ) 2  + b
3 
(g)  zx = a ( ax + y ) + bx 3 1 1
(b)  z = ( 6 x + a ) + ( 2 y − a ) + b
3 3

(h)  (1+ a ) z = ± ( ax ± y + b ) 72 54
ax 2 a y2
( ) (c)  z = − +b
2
2. y +b = z − ax ; xy = z ( x − 2 ) 2 ( a + 1) 2
a
3. (a)  z = ax + y+b 1
( a − 1) (d)  z = x 2 + ax + e ay + b
a
a
(b)  z = ax − y+b 3
 3 3

( + 1)
a (e)  z 2 = ± ( x + a ) 2 ± ( y + a ) 2  + b
 
(c)  z = ax ± a 2 − 1 y + b 1
(f)  z = − ( a − 3 x ) + a 2 y + b
3

(d)  z = ± ( 3a 2 + 5) x + ay + b 9
(g)  z = a ( x − y ) − ( cos x + cos y ) + b
a
2
(
(e)  z = ax + n ± n2 − 4 y + b ) 1 3 1

4. (a)  log az − 1 = x + ay + b ( ) (
(h)  z = ±  x 2 + a 2 2 ± y 2 − a 2 2  + b )
3 
(b)  2az − 3 = be ( )
2 x + ay
2 3 3

(i)  log z = ± ( x + a ) 2 ± ( y − a ) 2  + b
2 3 
(c)  z 2 = ( a + 1) x + 2 ( a + 1) y + b
a (j)  z 2 = ax 2 ± ( a − 1) y 2 + b

6. (a)  z = ax + by + 2 ab 1
(f)  z = ax + by − 1

(b)  z = ax + by + 3a b
1
3
1
3 ( ab ) 3
1
(c)  z = ax + by + ln ab (g)  z = ax + by +
ab (a − b)
(d)  z = ax + by +
ab − a − b (h)  z = ax + by ± 1 + a 2 + b 2 .
2
(e)  z = ax + by − 2 2
ab

4.6 Linear in second order partial derivatives differential


equations: Monge’s method
Monge’s method gives the solution of partial differential equation
Rr + Ss + Tt = V (4.21)
where R, S , T and V are functions of x, y, z , p, q.
This method reduces the equation ( 4.21) into an equivalent system of two equations from
which we determine p or q or both p and q.  If p or q is determined then solution is obtained
following the procedure of solving Lagrange equation and if p and q both are determined then
solution is obtained by integrating dz = pdx + qdy.
We have
∂p ∂p
dp = dx + dy = rdx + sdy
∂x ∂y

∂q ∂q
dq = dx + dy = sdx + tdy
∂x ∂y

From these equations,
dp − sdy dq − sdx
r= ,t=
dx dy

Substitute these values in (4.21)
dp − sdy dq − sdx
R + Ss + T =V
dx dy

or ( Rdpdy + Tdqdx − Vdxdy ) − s ( R( dy )2 − Sdydx + T ( dx )2 ) = 0 
This equation holds for arbitrary values of s and hence
R( dy ) 2 − Sdydx + T ( dx ) 2 = 0 (4.22)
and Rdpdy + Tdqdx − Vdxdy = 0 (4.23)

Equations (4.22) and (4.23) are called Monge’s auxiliary equations or Monge’s subsidiary
­equations.

From equation (4.22)


  dy = (1/2R) [S ± √(S² − 4RT)] dx (4.24)
Now, dy has two non-zero values when S 2 − 4 RT ≠ 0, R ≠ 0, T ≠ 0.
Using these two values of dy and equation (4.23), we try to find two relations between
x, y, z , p, q, each relation containing an arbitrary function. These relations are called intermediate
integrals. We solve these relations for p and q and then we integrate
dz = pdx + qdy 
to obtain the solution of given partial differential equation. We may also use only one of the
values of dy from two values obtained and then use the method of solving the first order partial
differential equations like Lagrange’s equation to obtain the solution. For this purpose, we may
use dz = pdx + qdy.
In equation (4.22)
if S 2 − 4 RT = 0   or  R = 0   or  T = 0 
then dy will have only one non-zero value.
In this case, we use the method of solving the first order partial differential equation like
­Lagrange’s equation to obtain the solution. We may use dz = pdx + qdy.
We illustrate the above procedure through the following examples.

Example 4.33: Obtain the general solutions of the following partial differential equations by
Monge’s method
  (i)  r − t cos 2 x + p tan x = 0
 (ii)  y 2 r − 2 ys + t = p + 6 y
( )
(iii)  q q 2 + s = pt
  (iv)  rq 2 − 2 psq + tp 2 = qr − ps
Solution: (i) Compare the given differential equations with Rr + Ss + Tt = V .
We have
R = 1, S = 0, T = − cos 2 x, V = − p tan x
\ Monge’s auxiliary equations
R( dy ) 2 − Sdydx + T ( dx ) 2 = 0 
and Rdpdy + Tdqdx − Vdxdy = 0 
are
( dy ) 2 − cos 2 x( dx ) 2 = 0 (1)
dpdy − cos 2 x dqdx + p tan x dxdy = 0 (2)
From (1),
( dy + cos x dx ) ( dy − cos x dx ) = 0 

∴ dy = cos x dx (3)
or dy = − cos x dx (4)

From (2) and (3)


cos x dp dx − cos 2 x dqdx + p sin x( dx ) 2 = 0 
or cos x dp − cos 2 x dq + p sin x dx = 0 
Divide by cos 2 x
p sec x tan x dx + sec x dp − dq = 0 
or d ( p sec x − q ) = 0

∴ p sec x − q = a (5)

Take integrals of (3)


y − sin x = b (6)

From (5) and (6)


p sec x − q = φ1 ( y − sin x ) (7)

Similarly, from (2) and (4)


p sec x + q = φ2 ( y + sin x )  (8)

From (7) and (8)


1
cos x φ1 ( y − sin x ) + φ2 ( y + sin x ) 
p=
2 
1
q = φ2 ( y + sin x ) − φ1 ( y − sin x ) 
2 
Put these values in dz = pdx + qdy, hence we have
1 1
dz = cos x φ1 ( y − sin x ) + φ2 ( y + sin x )  dx + φ2 ( y + sin x ) − φ1 ( y − sin x )  dy
2 2 
1 1
= − φ1 ( y − sin x ) ( dy − cos x dx ) + φ2 ( y + sin x ) ( dy + cos x dx )
2 2 
1 1 
= d  ψ 2 ( y + sin x ) − ψ 1 ( y − sin x )  
2 2 
where ∫ φ1 ( t ) dt = ψ 1 ( t ) and ∫ φ2 ( t ) dt = ψ 2 ( t )

1 1
∴ z = ψ 2 ( y + sin x ) − ψ 1 ( y − sin x ) + c
2 2 
= f1 ( y + sin x ) + f 2 ( y − sin x )

where f1 and f 2 are arbitrary functions.
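
The general solution just obtained can be verified for arbitrary f1 and f2. The sketch below (assuming SymPy; not part of the original text) substitutes z = f1(y + sin x) + f2(y − sin x) into r − t cos²x + p tan x = 0.

    # Check of Example 4.33(i), assuming SymPy, with f1 and f2 arbitrary.
    import sympy as sp

    x, y = sp.symbols('x y')
    f1, f2 = sp.Function('f1'), sp.Function('f2')
    z = f1(y + sp.sin(x)) + f2(y - sp.sin(x))
    p = sp.diff(z, x)
    r, t = sp.diff(z, x, 2), sp.diff(z, y, 2)
    print(sp.simplify(r - t*sp.cos(x)**2 + p*sp.tan(x)))   # prints 0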

(ii) Compare the given differential equation y 2 r − 2 ys + t = p + 6 y with Rr + Ss + Tt = V , hence


we have
R = y 2 , S = −2 y, T = 1, V = p + 6 y 

\ Monge’s auxiliary equations


R( dy ) 2 − Sdydx + T ( dx ) 2 = 0 

and Rdpdy + Tdqdx − Vdxdy = 0 

are y 2 ( dy ) 2 + 2 ydydx + ( dx ) 2 = 0 (1)

y 2 dpdy + dqdx − ( p + 6 y ) dxdy = 0 (2)


From (1),
( ydy + dx )
2
=0

∴ ydy + dx = 0 
∴ dx = − ydy 

Put in (2)
y 2 dpdy − ydydq + y ( p + 6 y ) ( dy ) 2 = 0

⇒ ydp − dq + ( p + 6 y ) dy = 0

or ydp + pdy − dq + 6 ydy = 0 

or (
d py − q + 3 y 2 = 0 ) 
∴ py − q + 3 y 2 = a 
or py − q = −3 y 2 + a 

It is Lagrange’s equation.
Lagrange’s auxiliary equations are
dx dy dz
= = (3)
y −1 −3 y 2 + a
From second and third members
dz = (3 y 2 − a)dy 
Take integrals
z = y 3 − ay + b (4)
From first and second members of (3)
2dx + 2 ydy = 0 

Take integrals
2 x + y 2 = c (5)
From (4) and (5)
( ) (
z = y3 − y φ y 2 + 2 x + f y 2 + 2 x )
where f  and φ are arbitrary functions.
(iii)  Compare the given differential equation qs − pt = −q3 with Rr + Ss + Tt = V , hence we have
R = 0, S = q, T = − p , V = − q 3
Therefore, Monge’s auxiliary equations
R( dy ) 2 − Sdydx + T ( dx ) 2 = 0 
and Rdpdy + Tdqdx − Vdxdy = 0 
are −q dydx − p ( dx ) 2 = 0 (1)

− p dqdx + q3 dxdy = 0 (2)


from (1),
pdx + qdy = 0 
∴ dz = pdx + qdy = 0 
∴ z = a (3)

Using pdx = −qdy in (2)


qdydq + q3 dxdy = 0 
or dq + q 2 dx = 0 
1
or dq + dx = 0
q2 
 1 
⇒ d− + x = 0
 q  
1
∴ − +x=b
q
1
⇒ q=
x−b

∂z 1
\ =
∂y x − b

Integrating w.r.t.y keeping x constant
y
z= + f ( x)
x−b 

∴ y = ( x − b ) ( z − f ( x )) = ( x − φ ( z )) ( z − f ( x ))  (from (3))

∴ general solution is
y = xz − z φ ( z ) − x f ( x ) + f ( x ) φ ( z )

where f and φ are arbitrary functions.
(iv)  Compare the given differential equation rq 2 − 2 pqs + p 2 t = qr − ps

i.e., (q 2
)
− q r + ( p − 2 pq ) s + p 2 t = 0

with Rr + Ss + Tt = V , 

hence we have
R = q 2 − q, S = p − 2 pq, T = p 2 , V = 0 

∴ Monge’s auxiliary equations


R( dy ) 2 − S dydx + T ( dx ) 2 = 0 

and Rdpdy + Tdqdx − Vdxdy = 0 

are (q 2
)
− q ( dy ) 2 − ( p − 2 pq ) dydx + p 2 ( dx ) 2 = 0 (1)

(q 2
)
− q dp dy + p 2 dq dx = 0 (2)
from (1)
(q 2
)
− q ( dy ) 2 + p ( q + q − 1) dydx + p 2 ( dx ) 2 = 0

or ( q − 1) dy + pdx  [ qdy + pdx ] = 0

∴ pdx + qdy = 0 (3)

or pdx + ( q − 1) dy = 0 (4)

from (3),
dz = pdx + qdy = 0 
∴ z = a (5)

Use (3) in (2)


(q 2
)
− q dpdy − pq dydq = 0

or ( q − 1) dp − pdq = 0 
pdq − qdp 1
or + 2 dp = 0
p2 p 

q 1
or d −  = 0 
 p p
q 1
∴ − = b = f (z)  from (5)
p p

or p f ( z ) − q = −1

which is Lagrange’s equation.
Lagrange’s auxiliary equations are
dx dy dz
= = (5)
f ( z ) −1 −1
From first and third members
dx + f ( z ) dz = 0

Integrate
x + g ( z ) = b, where g ( z ) = ∫ f ( z ) dz (6)
  
From second and third members of (5)
dy − dz = 0 
Take integrals
y − z = c (7)
From (6) and (7), general solution is
x + g (z) = φ ( y − z)

where g and f are arbitrary functions.

Exercise 4.4

Obtain the general solutions of the follow- 7. y 2 r + 2 sxy + x 2 t + px + qy = 0


ing partial differential equations by Monge’s 8. x 2 r − 2 xs + t + q = 0
­method.
9. x 2 r + 2 xys + y 2 t = 0
1. r = a 2 t
2. r − 3s − 10t = −3
10. ( )
e x − 1 ( qr − ps ) = pqe x
11. qr − ps = p3
3. ( r − s ) y + ( s − t ) x + q − p = 0
12. (1 − q ) r − 2 ( 2 − p − 2q + pq ) s
2

4. r + ( a + b ) s + abt = xy
5. ( x − y ) ( xr − xs − ys + yt ) = ( x + y ) ( p − q ) + (2 − p) t = 0
2

6. q 2 r − 2 pqs + p 2 t = pt − qs 13. r − t sin 2 x − p cot x = 0.



Answers 4.4

Here, f1 and f 2  are arbitrary functions.


1.  z = f1 ( y + ax ) + f 2 ( y − ax )
3 2
2.  z = f1 ( y + 5 x ) + f 2 ( y − 2 x ) − x
2
(
3.  z = f1 ( y + x ) + f 2 y 2 − x 2 )
1 3 1
4.  z = f1 ( y − ax ) + f 2 ( y − bx ) + x y − (a + b) x4
6 24
5.  z = f1 ( x + y ) − f 2 ( xy )
6.  y + f1 ( z ) = f 2 ( x − z )

( ) (
7.  z = log ( y + x ) f1 y 2 − x 2 + f 2 y 2 − x 2 )
8.  z + f1 ( ln x + y ) = f 2 ( ln x + y )
x x
9.  z = f1   + y f 2  
 y  y
10.  x = f1 ( y ) − f 2 ( z ) + e x
11.  x = yz − f1 ( z ) + f 2 ( y )
12.  x f1 ( z − y − 2 x ) + y = f 2 ( z − y − 2 x )
13.  z = f1 ( y + cos x ) + f 2 ( y − cos x ) .

4.7 Partial differential equations linear and homogeneous


in partial derivatives with constant coefficients
An equation of the form ∂ⁿz/∂xⁿ + k1 ∂ⁿz/(∂xⁿ⁻¹∂y) + k2 ∂ⁿz/(∂xⁿ⁻²∂y²) + … + kn ∂ⁿz/∂yⁿ = F(x, y) is linear and homogeneous in partial derivatives of order n and has constant coefficients.
On writing ∂ʳ/∂xʳ ≡ Dʳ and ∂ʳ/∂yʳ ≡ D′ʳ, the equation can be written as
  f(D, D′) z = F(x, y)
where f(D, D′) = Dⁿ + k1 Dⁿ⁻¹D′ + k2 Dⁿ⁻²D′² + … + kn D′ⁿ.

As in the case of ordinary linear equations with constant coefficients, the complete solution consists of two parts, namely the complementary function (C.F.) and the particular integral (P.I.).
The complementary function is the general solution or complete solution of f ( D, D ′ ) z = 0
which must contain n arbitrary functions or n arbitrary constants.
The particular integral is the particular solution of f ( D, D ′ ) z = F ( x, y ) .

4.7.1 Superposition or Linearity Principle


If u1 , u2 are any solutions of f ( D , D ′ ) z = 0 (4.25)
then u = c1u1 + c2 u2 , where c1 and c2 are arbitrary constants is solution of (4.25). This principle
can be proved on the same line as the proof of Theorem (8.7) in ordinary differential equations
in volume 1.
From this theorem, we observe that if φi(x, y), i = 1, 2, …, n are solutions of (4.25), where φ1, φ2, … are arbitrary functions, then ∑ᵢ₌₁ⁿ φi(x, y) is the C.F. of equation

f ( D, D ′ ) z = F ( x, y ) (4.26)

4.7.2 Rules for Finding the Complementary Function


Partial differential equation is
  ∂ⁿz/∂xⁿ + k1 ∂ⁿz/(∂xⁿ⁻¹∂y) + k2 ∂ⁿz/(∂xⁿ⁻²∂y²) + … + kn ∂ⁿz/∂yⁿ = F(x, y).
We are to find its C.F., which is the general solution or complete solution of
  ∂ⁿz/∂xⁿ + k1 ∂ⁿz/(∂xⁿ⁻¹∂y) + k2 ∂ⁿz/(∂xⁿ⁻²∂y²) + … + kn ∂ⁿz/∂yⁿ = 0. (4.27)
This equation in symbolic form can be written as
 D n + k1 D n −1 D ′ + k2 D n − 2 D ′ 2 +  + kn D ′ n  z = 0. (4.28)

Replacing D by m and D ′ by 1, auxiliary equation (A.E.) is


m n + k1m n −1 + k2 m n − 2 +  + kn = 0.(4.29)
Let roots of this equation are m1 , m2 , … , mn .
Case I:  When all roots are different
In this case, equation (4.28) can be written as
( D − m1 D ′ ) ( D − m2 D ′)…( D − mn D ′) z = 0. (4.30)
Equation (4.30) will be satisfied by solution of ( D − mn D ′ ) z = 0
or p − mn q = 0 (4.31)

It is Lagrange’s equation.
Lagrange’s auxiliary equations are
dx dy dz
= =
1 −mn 0

Hence solutions are y + mn x = a, z = b where a and b are arbitrary constants.
∴ z = φn ( y + mn x )

is solution of (4.31) where φn is an arbitrary function.

Now, the factors of equation (4.30) can be written in any order and hence it will be satisfied
by solutions of ( D − m1 D ′ ) z = 0, ( D − m2 D ′ ) z = 0, … , ( D − mn D ′ ) z = 0
which are z = φ1 ( y + m1 x ) , z = φ2 ( y + m2 x ) ,… , z = φn ( y + mn x )
where φ1 , φ2 , … , φn are arbitrary functions.
Thus, by principle of superposition
  C.F. = ∑ᵢ₌₁ⁿ φi(y + mi x).
Case II:  If two roots of equation (4.29) are equal say m1 = m2 and all others are different
In this case φ1 ( y + m1 x ) , φ2 ( y + m2 x ) will give only one arbitrary function of y + m1 x = y + m2 x.
The part of C.F. corresponding to repeated root is solution of
( D − m1 D ′) ( D − m1 D ′ ) z = 0 (4.32)
Let ( D − m1 D ′) z = u (4.33)
Now, we have ( D − m1 D ′) u = 0 
Its solution is
u = φ1 ( y + m1 x )

∴ from equation (4.33)
( D − m1 D ′) z = φ1 ( y + m1 x ) 
or p − m1q = φ1 ( y + m1 x )

It is Lagrange’s equation.
Lagrange's auxiliary equations are

dx/1 = dy/(−m1) = dz/φ1(y + m1 x)  (4.34)

From the first two members
y + m1 x = a
∴ from the first and third members of (4.34)

dx = dz/φ1(a)

∴ z = x φ1(a) + b

∴ Complete solution of (4.32) is
z = xφ1 ( y + m1 x ) + φ2 ( y + m1 x )

where φ1 and φ2 are arbitrary functions.
Hence
C.F. = xφ1 ( y + m1 x ) + φ2 ( y + m1 x ) + φ3 ( y + m3 x ) +  + φn ( y + mn x ) .


Similarly, if m1 = m2 = m3 and all other roots of (4.29) are different then


C.F. = φ1 ( y + m1 x ) + xφ2 ( y + m1 x ) + x 2φ3 ( y + m1 x ) + φ4 ( y + m4 x ) +  + φn ( y + mn x )

and so on.
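The procedure above can be checked symbolically: find the roots of the auxiliary equation, form the C.F. with one arbitrary function per root (with an extra factor of x for each repetition), and substitute back into the equation. The short Python sketch below is a minimal illustration of this check, assuming SymPy is available; the sample equation (D^2 − 6DD′ + 9D′^2)z = 0 and the function names f, g are our own choices, not taken from the text.

    # Check of the C.F. rule for a repeated root (SymPy assumed; the sample
    # equation (D^2 - 6 D D' + 9 D'^2) z = 0 is ours, not the book's).
    import sympy as sp

    x, y = sp.symbols('x y')
    f, g = sp.Function('f'), sp.Function('g')

    # A.E. of D^2 - 6 D D' + 9 D'^2 is m^2 - 6 m + 9 = 0  =>  m = 3, 3 (repeated)
    m = sp.symbols('m')
    print(sp.roots(m**2 - 6*m + 9, m))                 # {3: 2}

    # Rule: C.F. = f(y + 3x) + x*g(y + 3x) for the double root m = 3
    z = f(y + 3*x) + x*g(y + 3*x)
    lhs = sp.diff(z, x, 2) - 6*sp.diff(z, x, y) + 9*sp.diff(z, y, 2)
    print(sp.simplify(lhs))                            # 0, so the C.F. satisfies the equation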

4.7.3  Inverse Operator


Definition: If f(D, D′) z = F(x, y), then [1/f(D, D′)] F(x, y) is that function of x and y, not containing arbitrary constants, which when operated on by f(D, D′) gives F(x, y), i.e.,

f(D, D′) [1/f(D, D′)] F(x, y) = F(x, y).

Thus, [1/f(D, D′)] F(x, y) satisfies the differential equation f(D, D′) z = F(x, y) and hence is its particular integral. Clearly, f(D, D′) and 1/f(D, D′) are inverse operators.

Theorem 4.2:  [1/(D − mD′)] F(x, y) = ∫ F(x, a − mx) dx, where we put a = y + mx after integration.

Proof: Let [1/(D − mD′)] F(x, y) = G(x, y)
⇒ (D − mD′) [1/(D − mD′)] F(x, y) = (D − mD′) G(x, y)
∴ ∂G/∂x − m ∂G/∂y = F(x, y)
It is Lagrange's equation.
Lagrange's auxiliary equations are

dx/1 = dy/(−m) = dG/F(x, y)  (4.35)

From the first and second members
y + mx = a  (4.36)
From the first and third members
dG = F(x, a − mx) dx  (from (4.36))
∴ G(x, y) = ∫ F(x, a − mx) dx
∴ [1/(D − mD′)] F(x, y) = ∫ F(x, a − mx) dx  (4.37)
where we put a = y + mx after integration.
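As a quick sanity check of formula (4.37), the sketch below (Python with SymPy assumed; the choices m = 2 and F = x + y are ours) applies the recipe "integrate F(x, a − mx) with respect to x, then put a = y + mx" and confirms that (D − mD′) applied to the result returns F.

    import sympy as sp

    x, y, a = sp.symbols('x y a')
    m = 2
    F = x + y                                   # sample right-hand side (our choice)

    G = sp.integrate(F.subs(y, a - m*x), x)     # integrate F(x, a - m x) w.r.t. x
    G = G.subs(a, y + m*x)                      # then put a = y + m x
    residual = sp.diff(G, x) - m*sp.diff(G, y) - F
    print(sp.simplify(residual))                # 0, so G = [1/(D - m D')] F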

4.7.4 Operator Methods for Finding Particular Integrals


Consider the equation
f(D, D′) z = F(x, y)
then, P.I. = [1/f(D, D′)] F(x, y)

Case I:  F(x, y) = e^(ax+by)

We have
D^r e^(ax+by) = a^r e^(ax+by)  and  D′^r e^(ax+by) = b^r e^(ax+by)
∴ f(D, D′) e^(ax+by) = f(a, b) e^(ax+by)
∴ [1/f(D, D′)] f(a, b) e^(ax+by) = [1/f(D, D′)] f(D, D′) e^(ax+by) = e^(ax+by)
∴ [1/f(D, D′)] e^(ax+by) = [1/f(a, b)] e^(ax+by)  if f(a, b) ≠ 0
∴ P.I. = [1/f(D, D′)] e^(ax+by) = [1/f(a, b)] e^(ax+by)  if f(a, b) ≠ 0

If f(a, b) = 0, i.e., D = a, D′ = b satisfies f(D, D′) = 0, then (D − (a/b) D′)^k is a factor of f(D, D′) for some k ∈ N such that

f(D, D′) = (D − (a/b) D′)^k φ(D, D′),  where φ(a, b) ≠ 0.

∴ P.I. = [1/f(D, D′)] e^(ax+by) = 1/[(D − (a/b) D′)^k φ(D, D′)] e^(ax+by)
= [1/φ(a, b)] · 1/(D − (a/b) D′)^(k−1) · [1/(D − (a/b) D′)] e^(ax+by)
= [1/φ(a, b)] · 1/(D − (a/b) D′)^(k−1) ∫ e^(ax + b(c − (a/b)x)) dx,  where c = y + (a/b)x  (from (4.37))
= [1/φ(a, b)] · 1/(D − (a/b) D′)^(k−1) · x e^(bc)
= [1/φ(a, b)] · 1/(D − (a/b) D′)^(k−1) · x e^(ax+by)    (∵ c = y + (a/b)x)
= [1/φ(a, b)] · 1/(D − (a/b) D′)^(k−2) · [1/(D − (a/b) D′)] x e^(ax+by)
= [1/φ(a, b)] · 1/(D − (a/b) D′)^(k−2) ∫ x e^(ax + b(c − (a/b)x)) dx;  c = y + (a/b)x
= [1/φ(a, b)] · 1/(D − (a/b) D′)^(k−2) · (x^2/2) e^(ax+by)
Proceeding in this way,

P.I. = [1/f(D, D′)] e^(ax+by) = x^k e^(ax+by)/(k! φ(a, b))

But f(D, D′) = (D − (a/b) D′)^k φ(D, D′)
∴ [d^k f(D, D′)/dD^k]_(D = a, D′ = b) = k! φ(a, b)

∴ P.I. = x^k e^(ax+by) / [d^k f(D, D′)/dD^k]_(D = a, D′ = b)
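Both parts of Case I can be verified on a small example. In the sketch below (SymPy assumed; the operator f(D, D′) = D^2 − DD′ − 2D′^2 = (D − 2D′)(D + D′) is our own sample), the first check uses the rule for f(a, b) ≠ 0, and the second checks the rule for f(a, b) = 0 with a simple factor.

    import sympy as sp

    x, y = sp.symbols('x y')
    f = lambda u: sp.diff(u, x, 2) - sp.diff(u, x, y) - 2*sp.diff(u, y, 2)   # (D - 2D')(D + D')

    # f(a, b) != 0 : P.I. of e^{x+y} is e^{x+y}/f(1, 1) = -e^{x+y}/2
    pi1 = -sp.exp(x + y)/2
    print(sp.simplify(f(pi1) - sp.exp(x + y)))     # 0

    # f(a, b) = 0 (here f(2, 1) = 0, simple factor D - 2D'):
    # P.I. of e^{2x+y} is x e^{2x+y} / [df/dD]_{D=2, D'=1} = x e^{2x+y}/3
    pi2 = x*sp.exp(2*x + y)/3
    print(sp.simplify(f(pi2) - sp.exp(2*x + y)))   # 0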
Case II:  F(x, y) = sin(ax + by + c)  or  cos(ax + by + c)

We have
D^2 F(x, y) = −a^2 F(x, y),  D′^2 F(x, y) = −b^2 F(x, y),  DD′ F(x, y) = −ab F(x, y)
Using these, we have for some l, m, n

f(D, D′) sin(ax + by + c) = (lD + mD′ + n) sin(ax + by + c)  (4.38)
= (la + mb) cos(ax + by + c) + n sin(ax + by + c)
f(D, D′) cos(ax + by + c) = (lD + mD′ + n) cos(ax + by + c)
= −(la + mb) sin(ax + by + c) + n cos(ax + by + c)

Thus, if la + mb ≠ 0, we have

[1/f(D, D′)] [(la + mb) cos(ax + by + c) + n sin(ax + by + c)] = sin(ax + by + c)
[1/f(D, D′)] [n cos(ax + by + c) − (la + mb) sin(ax + by + c)] = cos(ax + by + c)

Solving these equations, we have

[1/f(D, D′)] cos(ax + by + c) = [(la + mb) sin(ax + by + c) + n cos(ax + by + c)] / [(la + mb)^2 + n^2]
[1/f(D, D′)] sin(ax + by + c) = [−(la + mb) cos(ax + by + c) + n sin(ax + by + c)] / [(la + mb)^2 + n^2]

Combining these, we can write

[1/f(D, D′)] F(x, y) = {[−(lD + mD′) + n] / [−(l^2 D^2 + m^2 D′^2 + 2lm DD′) + n^2]}_(D^2 = −a^2, D′^2 = −b^2, DD′ = −ab) F(x, y)
= {[n − (lD + mD′)] / [n^2 − (lD + mD′)^2]}_(D^2 = −a^2, D′^2 = −b^2, DD′ = −ab) F(x, y)
= {1/[lD + mD′ + n]}_(D^2 = −a^2, D′^2 = −b^2, DD′ = −ab) F(x, y)

∴ [1/f(D, D′)] F(x, y) = {1/f(D, D′)}_(D^2 = −a^2, D′^2 = −b^2, DD′ = −ab) F(x, y)  (from (4.38))

If la + mb = 0, n ≠ 0, then f(D, D′) F(x, y) = n F(x, y)
∴ P.I. = [1/f(D, D′)] F(x, y) = (1/n) F(x, y)

If la + mb = 0, n = 0, i.e., f(D, D′) = 0 for D^2 = −a^2, D′^2 = −b^2, DD′ = −ab,
then P.I. = [1/f(D, D′)] sin(ax + by + c)
= Im. part of [1/f(D, D′)] e^(i(ax + by + c))
= Im. part of x · {1/[(d/dD) f(D, D′)]} e^(i(ax + by + c))    (from Case I, ∵ f(ia, ib) = 0)
= x · {1/[(d/dD) f(D, D′)]} sin(ax + by + c)

After this, proceed as above.
The case F(x, y) = cos(ax + by + c) is treated similarly.
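The substitution rule of Case II is easy to confirm by direct differentiation. A minimal check, assuming SymPy and using our own sample operator D^2 + D′^2 with F = sin(2x + y):

    import sympy as sp

    x, y = sp.symbols('x y')
    f = lambda u: sp.diff(u, x, 2) + sp.diff(u, y, 2)       # f(D, D') = D^2 + D'^2 (our sample)

    # Rule: put D^2 = -a^2, D'^2 = -b^2, DD' = -ab.  For sin(2x + y): a = 2, b = 1,
    # so f becomes -4 - 1 = -5 and P.I. = -sin(2x + y)/5.
    pi = -sp.sin(2*x + y)/5
    print(sp.simplify(f(pi) - sp.sin(2*x + y)))             # 0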
Case III:  F(x, y) = sinh(ax + by + c) or cosh(ax + by + c)

Here, D^2 F(x, y) = a^2 F(x, y),  D′^2 F(x, y) = b^2 F(x, y),  DD′ F(x, y) = ab F(x, y).
Hence proceed as in Case II, replacing D^2 by a^2, D′^2 by b^2 and DD′ by ab when f(D, D′) ≠ 0 for D^2 = a^2, D′^2 = b^2, DD′ = ab.
If f(D, D′) = 0 for D^2 = a^2, D′^2 = b^2, DD′ = ab, then convert F(x, y) into exponential form and solve.
Case IV:  F(x, y) = a polynomial in x and y

P.I. = [f(D, D′)]^(−1) F(x, y)

Expand [f(D, D′)]^(−1) in ascending powers of D or D′ by the binomial theorem and then operate on F(x, y).
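A small check of the binomial-expansion recipe, assuming SymPy; the equation (D^2 − 2DD′)z = x + y and the expansion carried out in the comments are our own illustration, not from the text:

    import sympy as sp

    x, y = sp.symbols('x y')
    # Sample (ours): (D^2 - 2 D D') z = x + y.
    # 1/(D^2 - 2DD') = D^{-2} (1 - 2D'/D)^{-1} = D^{-2} (1 + 2D'/D + ...), so
    # P.I. = D^{-2}(x + y) + 2 D^{-3} D'(x + y) = x^3/6 + x^2 y/2 + x^3/3
    pi = sp.Rational(1, 6)*x**3 + x**2*y/2 + sp.Rational(1, 3)*x**3
    print(sp.simplify(sp.diff(pi, x, 2) - 2*sp.diff(pi, x, y) - (x + y)))   # 0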
Case V:  F(x, y) = e^(ax+by) V(x, y)

For any g(x, y), by Leibnitz's theorem,

D^n (e^(ax+by) g(x, y)) = e^(ax+by) D^n g(x, y) + nC1 a e^(ax+by) D^(n−1) g(x, y) + nC2 a^2 e^(ax+by) D^(n−2) g(x, y) + ⋯ + a^n e^(ax+by) g(x, y)
= e^(ax+by) [D^n + nC1 a D^(n−1) + nC2 a^2 D^(n−2) + ⋯ + a^n] g(x, y)
= e^(ax+by) (D + a)^n g(x, y)

Similarly,  D′^n (e^(ax+by) g(x, y)) = e^(ax+by) (D′ + b)^n g(x, y)

∴ f(D, D′) (e^(ax+by) g(x, y)) = e^(ax+by) f(D + a, D′ + b) g(x, y)
∴ [1/f(D, D′)] e^(ax+by) f(D + a, D′ + b) g(x, y) = e^(ax+by) g(x, y)

Taking g(x, y) = [1/f(D + a, D′ + b)] V(x, y), we have

[1/f(D, D′)] e^(ax+by) V(x, y) = e^(ax+by) [1/f(D + a, D′ + b)] V(x, y)
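The exponential-shift rule of Case V can be checked the same way. In the sketch below (SymPy assumed) the sample equation (D + D′)z = x e^(x+y) is ours; the shifted operator 1/(D + D′ + 2) is expanded on x as in Case IV and the resulting particular integral is substituted back into the equation:

    import sympy as sp

    x, y = sp.symbols('x y')
    # Sample (ours): (D + D') z = x e^{x+y}.  Shifting out the exponential:
    # P.I. = e^{x+y} * 1/((D+1)+(D'+1)) x = e^{x+y} * 1/(D+D'+2) x = e^{x+y}(x/2 - 1/4)
    pi = sp.exp(x + y)*(x/2 - sp.Rational(1, 4))
    print(sp.simplify(sp.diff(pi, x) + sp.diff(pi, y) - x*sp.exp(x + y)))   # 0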

Case VI:  F(x, y) = φ(ax + by)

Now, for any G(ax + by),

D^r G(ax + by) = a^r G^(r)(ax + by)
D′^r G(ax + by) = b^r G^(r)(ax + by)

∴ f(D, D′) G(ax + by) = f(a, b) G^(n)(ax + by)    (∵ f(D, D′) is homogeneous of degree n)

If f(a, b) ≠ 0, then

[1/f(D, D′)] G^(n)(ax + by) = [1/f(a, b)] G(ax + by)

Taking G^(n)(ax + by) = φ(ax + by),

[1/f(D, D′)] φ(ax + by) = [1/f(a, b)] G(ax + by)

where G(ax + by) is obtained by integrating φ(z) w.r.t. z, n times, and then taking z = ax + by.
If f(a, b) = 0, then this case can be dealt with as in Case I, replacing e^(ax+by) by φ(ax + by), and then

P.I. = x^k φ(ax + by) / [d^k f(D, D′)/dD^k]_(D = a, D′ = b)

when f(D, D′) = (D − (a/b) D′)^k h(D, D′);  [h(D, D′)]_(D = a, D′ = b) ≠ 0.
In particular, if f(D, D′) = (D − (a/b) D′)^n, then

P.I. = (x^n/n!) φ(ax + by)

Hence, [1/(bD − aD′)^n] φ(ax + by) = (1/b^n) [1/(D − (a/b) D′)^n] φ(ax + by) = x^n φ(ax + by)/(n! b^n)
Note: We can also use the symbols Dx and Dy in place of D and D′, respectively.
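A quick check of Case VI with f(a, b) ≠ 0, assuming SymPy; the sample equation (D^2 − D′^2)z = (x + 2y)^3 is our own choice, not from the text:

    import sympy as sp

    x, y = sp.symbols('x y')
    # Sample (ours): (D^2 - D'^2) z = (x + 2y)^3, i.e. phi(v) = v^3 with a = 1, b = 2.
    # f(1, 2) = 1 - 4 = -3 != 0; integrating phi twice gives G(v) = v^5/20, so
    # P.I. = G(x + 2y)/f(1, 2) = -(x + 2y)^5/60
    pi = -(x + 2*y)**5/60
    print(sp.simplify(sp.diff(pi, x, 2) - sp.diff(pi, y, 2) - (x + 2*y)**3))   # 0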

Example 4.34: Solve the following partial differential equations


∂4 z ∂4 z
(i)  −
∂x 4 ∂y 4
(
= 0    (ii)  Dx 3 − 3Dx 2 D y + 2 D y 2 Dx z = 0 )
Solution: (i)  Differential equation written symbolically is
(D x
4
)
− Dy 4 z = 0

A.E. is m −1 = 0
4

⇒ (m 2
)(
− 1 m2 + 1 = 0  )
∴ m = ±1, ± i 

∴ general solution is
z = φ1 ( y + x ) + φ2 ( y − x ) + φ3 ( y − ix ) + φ4 ( y + ix )

where φ1 , φ2 , φ3 and φ4 are arbitrary functions.
(ii)  Differential equation is
(D x
3
− 3 Dx 2 D y + 2 D y 2 Dx z = 0 ) 
A.E. is m − 3m + 2m = 0 
3 2

⇒ m ( m − 1) ( m − 2) = 0
∴ m = 0, 1, 2 
∴ general solution is
z = φ1 ( y ) + φ2 ( y + x ) + φ3 ( y + 2 x )

where φ1 , φ2 and φ3 are arbitrary functions.

Example 4.35: Solve the following partial differential equations


  (i)  ( D − 7DD′ − 6 D′ ) z = sin ( x + 2 y ) + e
3 2 3
2x+ y


 (ii)  ( 2 D − 5 DD ′ + 2 D ′ ) z = 5 sin ( 2 x + y )
2
2


(iii)  ( D + DD ′ − 6 D ′ ) z = y cos x
2
2


  (iv)  ( D − 6 DD ′ + 9 D ′ ) z = 12 x + 36 xy
2
2 2



Solution: (i)  A.E. is


m3 − 7m − 6 = 0 
m = −1 satisfies it, by synthetic division



∴ ( )
m3 − 7m − 6 = ( m + 1) m 2 − m − 6 = ( m + 1) ( m − 3) ( m + 2 ) = 0

∴ m = −2, − 1, 3 
∴ C.F. = φ1 ( y − 2 x ) + φ2 ( y − x ) + φ3 ( y + 3 x )

1
P.I. = 3 3 
 sin ( x + 2 y ) + e 2 x + y 

D − 7 DD − 6 D2
′ 
1 1
= 3 2 3
sin ( x + 2 y ) + 3 2
e2x+ y
D − 7 DD ′ − 6 D ′ D − 7 DD ′ − 6 D ′3 
1
To find 3 2
sin ( x + 2 y )
D − 7 DD ′ − 6 D ′3 
integrate sin z w.r.t. z three times and put z = x + 2 y, D = 1, D ′ = 2
1 1
∴ P.I. = cos ( x + 2 y ) + e2x+ y
1 − 7 (1)( 2 ) − 6 ( 2 ) 2 − 7 ( 2 )(1) − 6 (1)
3 2 3 3 3


1 1 2x+ y
= − cos ( x + 2 y ) − e
75 12 
∴ general solution is
1 1
z = φ1 ( y − 2 x ) + φ2 ( y − x ) + φ3 ( y + 3 x ) − cos ( x + 2 y ) − e 2 x + y
75 12 
where φ1 , φ2 and φ3 are arbitrary functions.
(ii)  A.E. is 2m 2 − 5m + 2 = 0
5 ± 25 − 16 1
∴ m= = ,2
4 2 
∴ C.F. = φ1 ( 2 y + x ) + φ2 ( y + 2 x )

1
P.I. = 2
5 sin ( 2 x + y )
2 D 2 − 5 DD ′ + 2 D ′ 
= 5x
1
4 D − 5D ′
( ( ) 2 2
( )
sin ( 2 x + y )  ∵ 2 −2 + 5 ( 2)(1) + 2 −1 = −8 + 10 − 2 = 0 )
1
= 5x  − cos ( 2 x + y ) 
4 ( 2 ) − 5 (1) 

5x
= − cos ( 2 x + y )
3 

∴ general solution is
5x
z = φ1 ( 2 y + x ) + φ2 ( y + 2 x ) − cos ( 2 x + y )
3 
where φ1 and φ2 are arbitrary functions.

(
(iii)  D 2 + DD ′ − 6 D ′
2

) z = y cos x
A.E. is
m2 + m − 6 = 0 
⇒ (m + 3) (m − 2) = 0 
∴ m = −3, 2 

∴ C.F. = φ1 ( y − 3 x ) + φ2 ( y + 2 x )

1 1
P.I. = y cos x = y cos x
D + DD ′ − 6 D ′
2 2
( D + 3 D ′ ) ( D − 2D′) 
1
( c − 2 x ) cos x dx ; c = 2 x + y 
D + 3D ′ ∫
=

1
= ( c − 2 x ) sin x − ( −2 ) ( − cos x ) 
D + 3D ′  
1
= [ y sin x − 2 cos x ]
D + 3D ′ 
= ∫ ( a + 3 x ) sin x − 2 cos x  dx ; a = y − 3 x 

= ( a + 3 x ) ( − cos x ) − ( 3) ( − sin x ) − 2 sin x 



= − y cos x + 3 sin x − 2 sin x = − y cos x + sin x  (∵ a = y − 3x )
∴ general solution is
z = φ1 ( y − 3 x ) + φ2 ( y + 2 x ) + sin x − y cos x

where φ1 and φ2 are arbitrary functions.
(iv)   D 2 − 6 DD ′ + 9 D ′  z = 12 x 2 + 36 xy
2

 
A.E. is m 2 − 6 m + 9 = 0

⇒ (m − 3)2 = 0
∴ m = 3, 3 

∴ C.F. = φ1 ( y + 3 x ) + xφ2 ( y + 3 x )


1
P.I. =
D − 6 DD ′ + 9 D ′
2 2 (12 x 2
+ 36 xy  )
−1
1  6 DD ′ − 9 D ′2 
=
D2
1 −
D2
 (12 x 2
+ 36 xy )
  
  6 D′ 9D′2 
1 
= 2
1 +  − 2  +  +  12 x 2 + 36 xy ( )
  D
D D  

1  6 D′ 
= 2 1 +
D  D
+  +  12 x + 36 xy

2
( )

1  6 
= 2 12 x + 36 xy + ( 36 x ) 
2

D  D 
1 1
= 2 12 x 2 + 36 xy + 108 x 2  = 2 120 x 2 + 36 xy
D D
( )

x4 x3
= 120 + 36 y = 10 x + 6 x y 4 3

12 6 
∴ general solution is
z = φ1 ( y + 3 x ) + xφ2 ( y + 3 x ) + 10 x 4 + 6 x 3 y

where φ1 and φ2 are arbitrary functions.

Example 4.36: Find the general solutions of partial differential equations

(
 (i)  D 2 + 2 DD ′ − 8 D ′
2

)z = 2x + 3y

 (ii)  ( D )
2 3
3
+ D 2 D ′ − DD ′ − D ′ z = e x cos 2 y

(iii)  ( D ) z = ( y − 1) e x
2
2
− DD ′ − 2 D ′

 (iv)  ( D + DD ′ − 6 D ′ ) z = x sin ( x + y )
2
2 2


Solution: A.E. is
(i) m 2 + 2m − 8 = 0

⇒  ( m + 4 ) ( m − 2) = 0
∴ m = −4, 2 
∴ C.F. = φ1 ( y − 4 x ) + φ2 ( y + 2 x )

1
P.I. = 2 2 x + 3y
D + 2 DD ′ − 8 D ′2 
Integrate  z  two times and then put z = 2 x + 3 y, D = 2, D ′ = 3
(2x + 3y )
5/ 2
1 1
(2x + 3y )
5/ 2
∴ P.I. = =−
( 2 + 2 ( 2)(3) − 8 (3) ) (3 / 2)(5 / 2) 210
2 2



∴ general solution is
1
z = φ1 ( y − 4 x ) + φ2 ( y + 2 x ) − (2x + 3y )
5/ 2

210 
where φ1 and φ2 are arbitrary functions.

(
(ii) D 3 + D 2 D ′ − DD ′ − D ′
2 3

)z = e x
cos 2 y

A.E. is m3 + m 2 − m − 1 = 0 

m = 1 satisfies it.
By synthetic division


( )
∴ m3 + m 2 − m − 1 = ( m − 1) m 2 + 2m + 1 = ( m − 1) ( m + 1) = 0
2


∴ m = −1, −1,1 
∴ C.F. = φ1 ( y − x ) + xφ2 ( y − x ) + φ3 ( y + x )

1
P.I. = 2 3
e cos 2 y x

D 3 + D 2 D ′ − DD ′ − D ′ 
1
=e x
cos 2 y
( D + 1) + ( D + 1) D ′ − ( D + 1) D ′ − D ′
3 2 2 3


1
=e x
2 3
cos 2 y
1 + D′ − D′ − D′ 
1
=e x
cos 2 y
( ) (
1 + D ′ − −22 − −22 D ′ ) 
ex 1 e x D′ − 1
= cos 2 y = cos 2 y
5 D′ + 1 5 D ′2 − 1 
ex 1 ex
= ( D ′ − 1) cos 2 y = − ( −2 sin 2 y − cos 2 y )
5 ( −4 − 1) 25

ex
= ( 2 sin 2 y + cos 2 y )
25 
∴ general solution is
ex
z = φ1 ( y − x ) + xφ2 ( y − x ) + φ3 ( y + x ) + ( 2 sin 2 y + cos 2 y )
25 
where φ1 , φ2 and φ3 are arbitrary functions.

(
(iii)  D 2 − DD ′ − 2 D ′
2

) z = ( y − 1) e x

A.E. is m2 − m − 2 = 0 
⇒ ( m − 2 ) ( m + 1) = 0 
∴ m = −1, 2 
∴ C.F. = φ1 ( y − x ) + φ2 ( y + 2 x )

1
P.I. = 2 ( y − 1) e x
D − DD ′ − 2 D ′
2

1
2 (
=e x
y − 1)
( D + 1) − ( D + 1) D ′ − 2 D ′
2


1
=e x
2 ( y − 1)
1 − D′ − 2D′ 

( )
−1
( y − 1)
2
= e 1 − D′ − 2D′
x


= e x (1 + D ′ + ) ( y − 1) = e x ( y − 1 + 1) = ye x

∴ general solution is
z = φ1 ( y − x ) + φ2 ( y + 2 x ) + ye x

where φ1 and φ2 are arbitrary functions.

(
(iv)  D 2 + DD ′ − 6 D ′
2

)z = x 2
sin ( x + y )

A.E. is m2 + m − 6 = 0 
⇒ (m + 3) (m − 2) = 0
∴ m = −3, 2 
∴ C.F. = φ1 ( y − 3 x ) + φ2 ( y + 2 x )

1
P.I. = x sin ( x + y )
2

( D + 3D ′ ) ( D − 2 D ′ ) 
1
x sin ( x + c − 2 x ) dx where c = y + 2 x 
( D + 3D ′ ) ∫
= 2

1
=  x 2 cos ( c − x ) − ( 2 x ) ( − sin ( c − x ) ) + ( 2 ) ( − cos ( c − x ) ) 
( D + 3 D ′ ) 

1
=  x cos ( x + y ) + 2 x sin ( x + y ) − 2 cos ( x + y )   (∵ c = y + 2 x )
2

( D + 3D ′ ) 
1
=
( D + 3D ′ ) 
( )
 x 2 − 2 cos ( x + y ) + 2 x sin ( x + y ) 



( )
= ∫  x 2 − 2 cos ( x + c + 3 x ) + 2 x sin ( x + c + 3 x )  dx   where c = y − 3 x 

= ∫ ( x 2
− 2 ) cos ( 4 x + c ) + 2 x sin ( 4 x + c )  dx

 1   1   1 
( )
=  x 2 − 2  sin ( 4 x + c ) − ( 2 x )  − cos ( 4 x + c ) + 2  − sin ( 4 x + c )
4   16   64 

 1   1 
+ ( 2 x )  − cos ( 4 x + c )  − 2  − sin ( 4 x + c )  
 4   16  
 x 2 − 2 1 1  x x 
=  − +  sin ( x + y ) +  −  cos ( x + y )   (∵ c = y − 3x )
 4 32 8  8 2 
1 3
=
32
( )
8 x 2 − 13 sin ( x + y ) − x cos ( x + y )
8 
∴ general solution is
1 3x
z = φ1 ( y − 3 x ) + φ2 ( y + 2 x ) +
32
( )
8 x 2 − 13 sin ( x + y ) − cos ( x + y )
8 
where φ1 and φ2 are arbitrary functions.
Another method to find P.I.
1
x2e (
i x+ y)
P.I. = Im. part of 2
D + DD ′ − 6 D ′
2

i( x + y) 1
= Im. part of e x2
( D + i ) + ( D + i ) ( D′ + i ) − 6 ( D′ + i )
2 2


i( x + y) 1
= Im. part of e x2
D + 2iD − 1 + iD − 1 + 6 
2

1
= Im. part of e (
i x+ y)
x2
D + 3iD + 4 
2

−1
1 1 
= Im. part of e (
i x+ y)
(
1 + 3iD + D 2  x 2
4  4 
)

e(
i x+ y)
 1 9 2  2
= Im. part of
4 
(
1 − 4 3iD + D − 16 D +  x
2


)

e(
i x+ y)
 3 13 2  2
= Im. part of 1 − 4 iD − 16 D +  x
4   
1  3 13 
= Im. part of cos ( x + y ) + i sin ( x + y )   x 2 − ix − 
4  2 8 
1 3
=
32
( )
8 x 2 − 13 sin ( x + y ) − x cos ( x + y )
8 

Example 4.37: Find the general solution of partial differential equations


(i)  4 r + 12 s + 9t = e 3 x − 2 y 
(ii)  r − 2 s + t = 2 x cos y 
(iii)  r + s − 2t = 8 ln ( x + 5 y )

Solution: (i)  Differential equation in symbolic form is

(4D 2
+ 12 DD ′ + 9 D ′
2

)z = e 3x −2 y


A.E. is
4 m 2 + 12m + 9 = 0

⇒ (2m + 3)2 = 0
3 −3
∴ m=− ,
2 2 
∴ C.F. = φ1 ( 2 y − 3 x ) + xφ2 ( 2 y − 3 x )

1 3x −2 y
P.I. = e
( 2 D + 3D ′ )
2


1
=x 2

d2
e3 x − 2 y  (∵ 2 ( 3) + 3 ( −2 ) = 0 )
( 2 D + 3D ′ )
2

dD 2
1
= x 2 e3 x − 2 y
8 
x 2 3x −2 y
= e
8 
∴ general solution is
x 2 3x −2 y
z = φ1 ( 2 y − 3 x ) + xφ2 ( 2 y − 3 x ) + e
8 
where φ1 and φ2 are arbitrary functions.
(ii) Differential equation in symbolic form is

(D 2
x − 2 Dx D y + D y2 z = 2 x cos y ) 
A.E. is
m 2 − 2m + 1 = 0  or  ( m − 1) = 0
2
⇒  m = 1, 1 
  
∴ C.F. = φ1 ( y + x ) + xφ2 ( y + x )

1
P.I. = 2 x cos y
( x y)
2
D − D


1
= Re part of 2 xe iy 
(D − Dy )
2
x

1
= 2Re part of e iy x
(D − Dy − i )
2
x

( )  ( x y ) x
−2
= 2Re part of e iy
−1 1 + i D − D

= 2Re part of ( −e ) [1 − 2iDx + ] x
iy

= 2Re part of ( −1) ( cos y + i sin y ) ( x − 2i )

= −2 [ x cos y + 2 sin y ]

∴ general solution is
z = φ1 ( y + x ) + xφ2 ( y + x ) − 2 ( x cos y + 2 sin y )

where φ1 and φ2 are arbitrary functions.
(iii)  Differential equation in symbolic form is

(D 2
x )
+ Dx D y − 2 D y2 z = 8 ln ( x + 5 y )

A.E. is m +m−2= 0
2

⇒ (m + 2) (m − 1) = 0
∴ m = −2, 1 
∴ C.F. = φ1 ( y − 2 x ) + φ2 ( y + x )

1
P.I. = 8 ln ( x + 5 y )
(D 2
x + Dx D y − 2 D y2 ) 
1
= 8 ln ( x + 5 y )
(D x + 2 D y ) ( Dx − D y )

1
=8
(D + 2 Dy )
∫ ln ( x + 5 ( c − x ) ) dx,    where c = x + y 
x

1  4x 
=8  x ln ( 5c − 4 x ) + ∫ dx   (integrating by parts)
( Dx + 2 D y )  5c − 4 x 

1   5c  
=8  x ln ( 5c − 4 x ) − ∫ 1 − 5c − 4 x  dx 
( Dx + 2 D y )    

1  5c 
=8  x ln ( 5c − 4 x ) − x − 4 ln ( 5c − 4 x ) 
( x y)
D + 2 D  


1  −x − 5y 
=8  ln ( x + 5 y ) − x   (∵ c = x + y )
( Dx + 2 D y )  4 
1
= −2 ( x + 5 y ) ln ( x + 5 y ) + 4 x 
Dx + 2 D y 

= −2 ∫ ( 5c + 11x ) ln ( 5c + 11x ) + 4 x  dx,   where c = y − 2 x 

 ( 5c + 11x )2 11( 5c + 11x )


2

= −2  ln ( 5c + 11x ) − ∫ dx + 2 x 2   (integrating by parts)
 22 22 ( 5c + 11x ) 

1 ( 5c + 11x ) 
2

= −  ( 5c + 11x ) ln ( 5c + 11x ) − 11
2
+ 44 x 2 
11  22 
 
1 
2 ( x + 5 y ) ln ( x + 5 y ) − ( x + 5 y ) + 88 x 2   (∵ c = y − 2 x )
2 2
=−
22  
1 
( x + 5 y ) ( 2 ln ( x + 5 y ) − 1) + 88 x 2 
2
=−
22  
∴ general solution is
1
z = φ1 ( y − 2 x ) + φ2 ( y + x ) − ( x + 5 y ) ( 2 ln ( x + 5 y ) − 1) − 4 x 2
2

22 
where φ1 and φ2 are arbitrary functions.

Exercise 4.5

1. Solve the following partial differential (h)  r + 6 s + 9t = 0


equations: (i)  ( D + 2 D ′ ) ( D − 3 D ′) z = 0
2

(
(a)  D 3 − 7 DD ′2 + 6 D ′3 z = 0 ) ∂2 z ∂2 z ∂2 z
(b)  25r − 40 s + 16t = 0 (j)  − 4 + 4 =0
∂x 2 ∂x∂y ∂y 2
(c)  2
∂2 z
+ 5
∂2 z
+ 2
∂2 z
=0 (
(k)  D 4 − 2 D 2 D ′2 + D ′4 z = 0 )
∂x 2 ∂x∂y ∂y 2
(l)  ( D 4
)
− 2 D D ′ + 2 DD ′ 3 − D ′ 4 z = 0
3

( )
(d)  D 2 − DD ′ − 6 D ′ 2 z = 0
2. Find the general solutions of the following
(e)  ( D − 4 D D ′ + 3DD ′ ) z = 0
3 2 2
partial differential equations
(f)  ( D − 6 D D ′ + 11DD ′ − 6 D ′ ) z = 0
3 2 2 3
( )
(a)  D 2 + 5 DD ′ + 6 D ′2 z = e x − y
(g)  ( D − 2 D D ′ + DD ′ ) z = 0
3 2 2
(b)  ( D 3 2 3
)
− 3D D ′ + 4 D ′ z = e x + 2 y

(c)  r − 4 s + 4t = e 2 x + y ∂2 z ∂2 z
(a)  − = sin x cos 2 y
(d)  r + 2 s + t = e 2 x + 3 y ∂x 2 ∂x∂ y

(
(e)  D 3 − 4 D 2 D ′ + 5 DD ′2 − 2 D ′3 z ) (
(b)  D 3 − 4 D 2 D ′ + 4 DD ′2 z )
= e y −2 x + e y +2 x + e y+ x = 2 sin ( 3 x + 2 y )
3. Obtain the general solutions of the follow- ( )
(c)  2 D 2 − 3DD ′ + D ′2 z = sin ( x − 2 y )
(d)  ( D )
ing partial differential equations 2
− 2 DD ′ + D ′ z = sin ( 2 x + 3 y )
2

∂ z 2
∂ z ∂ z 2 2
(a)  2 + 3 +2 2 = x+ y (e)  ( D 2
+ DD ′ − 6 D ′ ) z = cos ( 2 x + y )
2

∂x ∂x∂y ∂y
(f)  ( D 2
− DD ′) z = cos x cos 2 y
(
(b)  D − DD ′ − 6 D ′2 z = x + y
2
)
∂2 z ∂2 z
(c)  ( D 2
+ DD ′ − 6 D ′ ) z = x + y
2
(g)  + = cos mx cos ny + 30( 2 x + y )
∂x 2 ∂ y 2
∂2 z ∂2 z (h)  p − 2q = sin ( x + 2 y )
(d)  − = x2 + y2
∂x 2 ∂y 2
∂3 z ∂3 z ∂3 z
( )
(e)  D 2 + D ′2 z = x 2 y 2 (i) 
∂x 3
−7
∂ x∂ y 2
−6 3
∂y
(f)  ( D 2
)
+ 2 DD ′ + D ′2 z = 3 x + 2 y = sin ( x + 2 y ) + x 2 y

(g) 
∂3 z ∂3 z
− = x3 y3
(
(j)  D 2 − 2 DD ′ + D ′2 z = sin x )
∂x 3 ∂y 3
5. Find the general solutions of the following
(
(h)  4 D 3 − 3DD ′2 + D ′3 z = 6 x 2 y 2 ) differential equations

(i) 
∂3 z
− 2
∂3 z
= 2e 2 x + 3 x 2 y
(
(a)  2 D 2 + 5 DD ′ + 3D ′2 z = y e x )
∂x 3 ∂x 2∂ y (b)  4 r − 4 s + t = 16 log ( x + 2 y )

4. Find the general solutions of the following


partial differential equations

Answers 4.5

Here, φ1 , φ2 , φ3 and φ4 are arbitrary functions.


1. (a) z = φ1 ( y + x ) + φ2 ( y + 2 x ) + φ3 ( y − 3 x )
(b) z = φ1 ( 5 y + 4 x ) + xφ2 ( 5 y + 4 x )
(c) z = φ1 ( y − 2 x ) + φ2 ( 2 y − x )
(d) z = φ1 ( y + 3 x ) + φ2 ( y − 2 x )
(e) z = φ1 ( y ) + φ2 ( y + x ) + φ3 ( y + 3 x )
(f)  z = φ1 ( y + x ) + φ2 ( y + 2 x ) + φ3 ( y + 3 x )
(g) z = φ1 ( y ) + φ2 ( y + x ) + xφ3 ( y + x )

(h) z = φ1 ( y − 3 x ) + xφ2 ( y − 3 x )
(i)  z = φ1 ( y − 2 x ) + φ2 ( y + 3 x ) + x φ3 ( y + 3 x )
(j)  z = φ1 ( y + 2 x ) + xφ2 ( y + 2 x )
(k) z = φ1 ( y + x ) + xφ2 ( y + x ) + φ3 ( y − x ) + xφ4 ( y − x )
(l)  z = φ1 ( y + x ) + xφ2 ( y + x ) + x 2φ3 ( y + x ) + φ4 ( y − x )
1
2. (a) z = φ1 ( y − 2 x ) + φ2 ( y − 3 x ) + e x − y
2
1
(b) z = φ1 ( y − x ) + φ2 ( y + 2 x ) + xφ3 ( y + 2 x ) + e x + 2 y
27
1
(c) z = φ1 ( y + 2 x ) + xφ2 ( y + 2 x ) + x 2 e 2 x + y
2
1 2 x +3 y
(d) z = φ1 ( y − x ) + xφ2 ( y − x ) + e
25
1 x2
(e) z = φ1 ( y + x ) + xφ2 ( y + x ) + φ3 ( y + 2 x ) − e y − 2 x + xe y + 2 x − e y + x
36 2
x 2 y x3
3. (a) z = φ1 ( y − x ) + φ2 ( y − 2 x ) + −
2 3
x3 x 2
(b) z = φ1 ( y − 2 x ) + φ2 ( y + 3 x ) + + y
3 2
x2 y
(c) z = φ1 ( y + 2 x ) + φ2 ( y − 3 x ) +
2
x2 2
(d) z = φ1 ( y + x ) + φ2 ( y − x ) +
6
( x + 3y2 )
1
(e) z = φ1 ( y + ix ) + φ2 ( y − ix ) +
180
(15 x 4 y 2 − x 6)
x3
(f)  z = φ1 ( y − x ) + xφ2 ( y − x ) + x 2 y −
6
x6 y3 x9
(
(g) z = φ1 ( y + x ) + φ2 ( y + ω x ) + φ 3 y + ω 2 x + ) +
120 10080
; ω is cube root of unity.

1 5 2 x7
(h) z = φ1 ( y − x ) + φ2 ( 2 y + x ) + xφ3 ( 2 y + x ) + x y +
40 1120
1
(i)  z = φ1 ( y ) + xφ2 ( y ) + φ3 ( y + 2 x ) +
60
( )
15e 2 x + 3 x 5 y + x 6
1 1
4. (a) z = φ1 ( y ) + φ2 ( y + x ) + sin ( x + 2 y ) − sin ( x − 2 y )
2 6
2
(b) z = φ1 ( y ) + φ2 ( y + 2 x ) + xφ3 ( y + 2 x ) + cos ( 3 x + 2 y )
3
1
(c) z = φ1 ( 2 y + x ) + φ2 ( y + x ) − sin ( x − 2 y )
12
(d) z = φ1 ( y + x ) + xφ2 ( y + x ) − sin ( 2 x + 3 y )
x
(e) z = φ1 ( y + 2 x ) + φ2 ( y − 3 x ) + sin ( 2 x + y )
5

1
(f)  z = φ1 ( y ) + φ2 ( y + x ) + 3 cos ( x + 2 y ) − cos ( x − 2 y ) 
6
cos mx cos ny
(g) z = φ1 ( y + ix ) + φ2 ( y − ix ) − + (2x + y )
3

1
( m +n
2
)
2

(h) z = φ1 ( y + x ) + cos ( x + 2 y )
3
1 x5 y
(i)  z = φ1 ( y − x ) + φ2 ( y − 2 x ) + φ3 ( y + 3 x ) − cos ( x + 2 y ) +
75 60
(j)  z = φ1 ( y + x ) + xφ2 ( y + x ) − sin x
1
5. (a) z = φ1 ( 2 y − 3 x ) + φ2 ( y − x ) + ( 2 y − 5 ) e x
4
(b) z = φ1 ( 2 y + x ) + xφ2 ( 2 y + x ) + 2 x 2 log ( x + 2 y )

4.8 Linear partial differential equations with constant coefficients, non-homogeneous in partial derivatives

The differential equation is
f(D, D′) z = F(x, y)

where f(D, D′) is not homogeneous, i.e., the sums of the powers of D and D′ in the various terms need not all be equal.
Here also the general solution is the sum of the complementary function (C.F.) and the particular integral (P.I.).

4.8.1 Rules for Finding Complementary Function


Case I: When f(D, D′) can be factorized into linear factors of the type D + mD′ + c or D′ + c, where m, c are any constants (possibly zero), and the factors are not repeated.
To find the solution of

(D + mD′ + c) z = 0  (4.39)
or  p + mq + cz = 0

It is Lagrange's equation.
Lagrange's auxiliary equations are

dx/1 = dy/m = dz/(−cz)

The solution from the first two members is y − mx = k1,
and the solution from the first and third members is z = k2 e^(−cx) = φ(k1) e^(−cx).
Thus, the solution of (4.39) is
z = e^(−cx) φ(y − mx).
Now, we find the solution of

(D′ + c) z = 0  or  q = −cz  (4.40)

Lagrange's auxiliary equations are

dx/0 = dy/1 = dz/(−cz)

Its two solutions are
x = k1,  z = k2 e^(−cy).
Hence, in this case the solution of (4.40) is
z = e^(−cy) φ(x).
In this way a solution corresponding to each factor is found, provided the factors are not repeated.
Then, by the superposition principle, the complementary function is the sum of all these solutions.
Case II:  When factors are repeated
Suppose D + mD′ + c is repeated twice.
Now, we find the solution of

(D + mD′ + c)^2 z = 0  (4.41)

Let (D + mD′ + c) z = u  (4.42)
∴ from (4.41)
(D + mD′ + c) u = 0
Its solution, as in Case I, is
u = e^(−cx) φ(y − mx)
∴ from equation (4.42)
p + mq + cz = e^(−cx) φ(y − mx)
or  p + mq = −cz + e^(−cx) φ(y − mx)
It is Lagrange's equation.
Lagrange's auxiliary equations are

dx/1 = dy/m = dz/[−cz + e^(−cx) φ(y − mx)]  (4.43)

From the first two members
y − mx = k1
From the first and third members
dz/dx = −cz + e^(−cx) φ(k1)
∴ dz/dx + cz = e^(−cx) φ(k1)
It is a Leibnitz linear equation.
I.F. = e^(cx)
∴ the solution is
z e^(cx) = ∫ e^(cx) e^(−cx) φ(k1) dx + k2 = x φ(k1) + k2
∴ z = [x φ(k1) + k2] e^(−cx)
∴ z = e^(−cx) [x φ(y − mx) + ψ(y − mx)]
will be the solution corresponding to (D + mD′ + c)^2 z = 0.
Similarly, z = e^(−cx) [x^2 φ1(y − mx) + x φ2(y − mx) + φ3(y − mx)] will be the solution corresponding to (D + mD′ + c)^3 z = 0.

Case III: If f(D, D′) cannot be factorized into linear factors. For example, D^2 + D′ cannot be factorized into linear factors.
As  D^r e^(hx+ky) = h^r e^(hx+ky)  and  D′^r e^(hx+ky) = k^r e^(hx+ky)
∴ f(D, D′) e^(hx+ky) = f(h, k) e^(hx+ky)
Thus, z = e^(hx+ky) will be a solution of f(D, D′) z = 0 when f(h, k) = 0.
Hence, by the principle of superposition,

C.F. = Σ_{i=1}^{∞} ci e^(hi x + ki y),  where f(hi, ki) = 0


4.8.2 Operator Methods for Finding Particular Integral


Consider the equation
f(D, D′) z = F(x, y).
Then, P.I. = [1/f(D, D′)] F(x, y).

Cases I to V in (4.7.4) are proved for a general f(D, D′), and hence the particular integral can be found using them. For the general method, we prove the following theorem.

Theorem 4.3:  [1/(D + mD′ + c)] F(x, y) = e^(−cx) ∫ e^(cx) F(x, a + mx) dx, where we put a = y − mx after integration.

Proof: Let [1/(D + mD′ + c)] F(x, y) = G(x, y)
∴ (D + mD′ + c) G(x, y) = F(x, y)
∴ ∂G/∂x + m ∂G/∂y = −cG + F
It is Lagrange's equation.
Lagrange's auxiliary equations are

dx/1 = dy/m = dG/(−cG + F)

From the first and second members
y − mx = a
From the first and third members
dG/dx + cG = F(x, y) = F(x, a + mx)
It is a Leibnitz linear equation.
I.F. = e^(cx)
∴ the solution is G e^(cx) = ∫ e^(cx) F(x, a + mx) dx
∴ G = e^(−cx) ∫ e^(cx) F(x, a + mx) dx
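Theorem 4.3 can be checked on a concrete case exactly as Theorem 4.2 was. The sketch below (SymPy assumed; m = 1, c = 2 and F = x + y are our own choices) builds G from the formula and verifies that (D + mD′ + c)G = F:

    import sympy as sp

    x, y, a = sp.symbols('x y a')
    m, c = 1, 2
    F = x + y                                     # sample right-hand side (our choice)

    G = sp.exp(-c*x)*sp.integrate(sp.exp(c*x)*F.subs(y, a + m*x), x)
    G = sp.simplify(G.subs(a, y - m*x))           # put a = y - m x after integrating
    residual = sp.diff(G, x) + m*sp.diff(G, y) + c*G - F
    print(sp.simplify(residual))                  # 0, so G is a particular integral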

4.9 Partial differential equations with variable coefficients reducible to partial differential equations with constant coefficients

Consider the partial differential equation
f(xD, yD′) z = F(x, y)
Put x = e^X, y = e^Y, i.e., X = log x, Y = log y.
Then
∂z/∂x = (∂z/∂X)(dX/dx) = (1/x) ∂z/∂X  ⇒  xD ≡ D_X
∂²z/∂x² = −(1/x²) ∂z/∂X + (1/x)(∂²z/∂X²)(dX/dx) = −(1/x²) ∂z/∂X + (1/x²) ∂²z/∂X²
⇒  x²D² ≡ D_X(D_X − 1)
Similarly,
x³D³ ≡ D_X(D_X − 1)(D_X − 2)
yD′ ≡ D_Y,  y²D′² ≡ D_Y(D_Y − 1),  y³D′³ ≡ D_Y(D_Y − 1)(D_Y − 2)
and so on.

Hence the differential equation is reduced to a partial differential equation with constant coefficients and can be solved by the methods already discussed.
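The operator identities above are easy to confirm on a typical term x^n, for which D_X acting on e^(nX) = x^n just multiplies by n. A minimal check, assuming SymPy:

    import sympy as sp

    x, n = sp.symbols('x n')
    z = x**n                                   # test on a typical power of x
    lhs = sp.expand(x**2*sp.diff(z, x, 2))     # x^2 D^2 acting on x^n
    rhs = n*(n - 1)*x**n                       # D_X(D_X - 1) acting on e^{nX} = x^n
    print(sp.simplify(lhs - rhs))              # 0, confirming x^2 D^2 = D_X(D_X - 1)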

Example 4.38: Find the solutions of the following partial differential equations
(i)  3r + 7 s + 2t + 7 p + 4 q + 2 z = 0
(ii)  2r − s − t − p + q = 0

Solution:  (i)  Partial differential equation in symbolic form is


( 3D 2
+ 7 DD ′ + 2 D ′2 + 7 D + 4 D ′ + 2 z = 0 ) 
or ( D + 2 D ′ + 2 ) ( 3D + D ′ + 1) z = 0 
( D + 2 D ′ + 2 )  D +
1 1
or D′ +  z = 0
 3 3 
∴ General solution is
z = e −2 xφ1 ( y − 2 x ) + e − x / 3φ2 ( 3 y − x )

where φ1 and φ2 are arbitrary functions.

(ii)  Partial differential equation in symbolic form is


( 2D 2
− DD ′ − D ′2 − D + D ′ z = 0 ) 
or ( D − D ′) ( 2 D + D ′ − 1) z = 0 

( D − D ′)  D +
1 1
or D′ −  z = 0
 2 2 
∴ general solution is
z = φ1 ( y + x ) + e x / 2φ2 ( 2 y − x )

where φ1 and φ2 are arbitrary functions.

Example 4.39: Find the solutions of the following partial differential equations
( )
(i)  D 2 − D ′2 + 3D ′ − 3D z = e x + 2 y + xy
1
( x+2 y)
(ii)  ( 4 D + 3DD ′ − D ′ − D − D ′ ) z = 3e
2 2 2

(iii)  ( D − D ′ ) z = xe
2 ax + a2 y

  (iv)  ( D − DD ′ − 2 D ′ + 2 D + 2 D ′ ) z = e
2 2 2 x +3 y
+ sin ( 2 x + y ) + xy

 (v)  ( D + D ′ + 1) ( D ′ + 2 ) z = e x −3 y
1
(vi)  ( D + D ′ + 2 ) z =
(1 + e ) x+ y

Solution:
( )
(i)  D 2 − D ′2 + 3D ′ − 3D z = e x + 2 y + xy

or ( D − D ′ ) ( D + D ′ − 3) z = e x + 2 y + xy

∴ C.F. = φ1 ( y + x ) + e φ2 ( y − x )
3x

1
P.I. = 2 e x + 2 y + xy
D − D ′2 − 3D + 3D ′ 
1 1
=x ex+2 y + xy  (∵1 − 4 − 3 + 6 = 0 )
d
(
D 2 − D ′2 − 3D + 3D ′ ( D − D ′ ) ( D + D ′ − 3)
)
dD
−1 −1
1 1  D′    D D′ 
=x ex+2 y − 1 −  1 −  +   ( xy )
2D − 3 3D  D    3 3 

1  D′   D D′ 2 
= − xe x + 2 y − 1 + +  1 + + + DD ′ +  ( xy )
3D D  3 3 9 

1  D 2D ′ D ′ 2 
= − xe x + 2 y − 1 + + + + DD ′ +  ( xy )
3D 3 3 D 9 

1  y 2x 1 2
= − xe x + 2 y − xy + + + x+ 
3D  3 3 D 9

1  x2 xy x 2 x 3 2 
= − xe x + 2 y −  y + + + + x 
3 2 3 3 6 9 

∴ general solution is
1
z = φ1 ( y + x ) + e 3 xφ2 ( y − x ) − xe x + 2 y −
54
( )
9 x 2 y + 6 xy + 6 x 2 + 3 x 3 + 4 x

where φ1 and φ2 are arbitrary functions.
1
( x+2 y)
(
(ii)  4 D 2 + 3DD ′ − D ′2 − D − D ′ z = 3e 2 )
1
( x+2 y)
or ( 4 D − D ′ − 1) ( D + D ′) z = 3e 2 
1
 1 1 (x + 2 y)
or  D − D ′ −  ( D + D ′ ) z = 3e
2
4 4 
∴ C.F. = φ1 ( y − x ) + e φ2 ( 4 y + x )
x/4

1
3e ( )
x+2 y /2
P.I. =
4 D 2
+ 3 DD ′ − D ′ 2
− D − D ′ 

1  1 1 1 
e( )
x+2 y /2
= 3x ∵ 4   + 3   (1) − 1 − − 1 = 0 

d
dD
(
4 D 2 + 3DD ′ − D ′2 − D − D ′   
    
4 2
)2 

1
e( )
x+2 y /2
= 3x
8 D + 3D ′ − 1

1 x x+2 y /2
e( ) = e( ) 
x+2 y /2
= 3x
4 + 3 −1 2
∴ general solution is
x x+2 y /2
z = φ1 ( y − x ) + e x / 4φ2 ( 4 y + x ) + e ( )
2 
where φ1 and φ2 are arbitrary functions.

( )
2
(iii)  D 2 − D ′ z = xe ax + a y

Here, D 2 − D ′ cannot be factorized into linear factors in D and D ′.



∴ C.F. = ∑ cn e hn x + kn y where hn2 − kn = 0    ∴ kn = hn2 
n =1   

= ∑ cn e
hn ( x + hn y )

n =1 
1 ax + a2 y 2 1
P.I. = 2 xe = e ax + a y x
D − D′ ( D + a ) − D′ + a2 
2
( )
2 1
= e ax + a y 2 x
D + 2aD 
−1
ax + a2 y 1  D 2 1  D 
=e 1 +  x = e ax + a y
1 − +  x
2aD  2a  2aD  2a 

2 1  1  2 1  x2 x 
= e ax + a y
 x −  = e ax + a y  − 
2aD  2a  2a  2 2a 

x
2 (
ax − 1) e ax + a y
2
=
4a 
∴ complete solution is

x
z = ∑ cn e n ( 2 (
ax − 1) e ax + a y
h x + hn y ) 2
+
n =1 4a 
where cn , hn , n = 1, 2, 3, … are arbitrary constants.

( )
(iv)  D 2 − DD ′ − 2 D ′2 + 2 D + 2 D ′ z = e 2 x + 3 y + sin ( 2 x + y ) + xy
or ( D − 2 D ′ + 2 ) ( D + D ′) z = e 2 x +3 y + sin ( 2 x + y ) + xy 
C.F. = e −2 xφ1 ( y + 2 x ) + φ2 ( y − x )


1 1
P.I. = e2 x +3 y + 2 sin ( 2 x + y )
( D + D′) ( D − 2D′ + 2 ) D − DD ′ − 2 D ′2 + 2 D + 2 D ′ 
1
+ xy
( D + D ) ( D − 2D′ + 2)


1 1
= e 2 x +3 y
+ sin ( 2 x + y )
( 2 + 3) ( 2 − 6 + 2 ) −4 + 2 + 2 + 2 D + 2 D ′

−1 −1
1  D′   D 
+ 1 +  1 + − D ′  xy
2 D  D  2  
1 1 1 1  D′   D 
= − e2 x +3 y + sin ( 2 x + y ) + 1− +  1 − 2 + D ′ − DD ′ +   xy
10 2 D + D′ 2 D  D    
1 2 x +3 y 1 1 1  D D′ D′ 
=− e − cos ( 2 x + y ) + 1 − + D ′ − DD ′ − + +   xy
10 2 2 +1 2 D  2 D 2  
 1 1 
 ∵ sin ( 2 x + y ) = ∫ sin z dz , z = 2 x + y 
D + D′ 2 +1 
1 2 x +3 y 1 1  y 3 1 
=− e − cos ( 2 x + y ) +  xy − + x − 1 − x 
10 6 2D  2 2 D 

1 2 x +3 y 1 1  x2 xy 3 x 2 x3 
=− e − cos ( 2 x + y ) +  y − + −x− 
10 6 2 2 2 4 6 

1 2 x +3 y 1 1
=−
10
e − cos ( 2 x + y ) +
6 24
(
6 x 2 y − 6 xy + 9 x 2 − 12 x − 2 x 3 )

∴ general solution is
1 1
z = e −2 x φ1 ( y + 2 x ) + φ2 ( y − x ) − e 2 x + 3 y − cos ( 2 x + y )
10 6 
1
24
− (
2 x 3 − 6 x 2 y − 9 x 2 + 6 xy + 12 x )

where φ1 and φ2 are arbitrary functions.
(v)  ( D + D ′ + 1) ( D ′ + 2 ) z = e x −3 y

C.F. = e φ1 ( y − x ) + e
−x −2 y
φ2 ( x )

1
P.I. = e x −3 y
( D + D ′ + 1)( D ′ + 2 ) 
1
= e x −3 y = e x −3 y
(1 − 3 + 1) ( −3 + 2 ) 

∴ general solution is
z = e − xφ1 ( y − x ) + e −2 yφ2 ( x ) + e x −3 y

where f1 and f2 are arbitrary functions.
1
(vi)  ( D + D ′ + 2 ) z =
(1 + e )  x+ y

∴ C.F. = e −2 xφ1 ( y − x )

1 1
P.I. = ⋅
( D + D′ + 2) 1 + e x + y ( )
1
= e −2 x ∫ e 2 x ⋅ dx where a = y − x
(1 + e x
  
⋅ ea+ x ) 
(using formula given in (Theorem 4.3))
e2x
= e −2 x ∫ dx
(1 + e a

⋅ e2x )
1
−2 x
= e ⋅ a ln 1 + e
2e
a+2 x
( )

1
= e −2 x ⋅ y − x ln 1 + e x + y
2e
( ) (∵ a = y − x )
1
(
= x + y ln 1 + e x + y
2e
)

∴ general solution is
1
z = e −2 xφ1 ( y − x ) + x + y ln 1 + e x + y
2e
( )

where φ1 is an arbitrary function.

Example 4.40: Find the general solutions of the following partial differential equations
(i)  x 2 r − 3 xys + 2 y 2 t + px + 2qy = x + 2 y

(
(ii)  x 2 D 2 − y 2 D ′2 + xD − yD ′ z = log x )
1 ∂ z 1 ∂z
2
1 ∂ z 1 ∂z 2
(iii)  − 3 = 2 2− 3
x ∂x
2 2
x ∂x y ∂y y ∂y
Solution:  (i)  Differential equation in symbolic form is

(x D 2 2
x − 3 xyDx D y + 2 y 2 D y2 + xDx + 2 yD y z = x + 2 y ) 
Put x = e X , y = eY , i.e., X = log x , Y = log y


We have
xDx ≡ D X , yD y ≡ DY , x 2 Dx2 ≡ D X ( D X − 1) ,

xyDx D y ≡ D X DY , y 2 D y2 ≡ DY ( DY − 1)

∴ differential equation becomes
 D X ( D X − 1) − 3D X DY + 2 DY ( DY − 1) + D X + 2 DY  z

= e X + 2eY , if x > 0, y > 0 

= −e X + 2eY , if x < 0, y > 0 

= −e X − 2eY , if x < 0, y < 0 

= e X − 2eY , if x > 0, y < 0 

\  D X2 − 3D X DY + 2 DY2  z = e X + 2eY , if x > 0, y > 0



or  ( D X − DY ) ( D X − 2 DY ) z = e X + 2eY

A.E. is
( m − 1) ( m − 2 ) = 0 
\ m = 1, 2
∴ C.F. = φ1 (Y + 2 X ) + φ2 (Y + X )

= φ1 ( log y + 2 log x ) + φ2 ( log y + log x )

( )
= φ1 log x 2 y + φ2 ( log xy )

( )
= f1 x y + f 2 ( xy ) where f1 ≡ φ1 o ln, f 2 ≡ φ2 o ln 
2

1
P.I. =
D − 3D X DY + 2 DY2
2 (
e X + 2eY )
X 
1 1
= eX + 2⋅ eY = e X + eY
1− 0 + 0 0−0+2 
∴ P.I. = e X + eY = x + y  if x > 0, y > 0

= −e X + eY = x + y if x < 0, y > 0

= e X − eY = x + y  if x > 0, y < 0

= −e X − eY = x + y if x < 0, y < 0

Hence in each case, P.I. = x + y

∴ general solution is
( )
z = f1 x 2 y + f 2 ( xy ) + x + y

where f1 and f 2 are arbitrary functions.

(
(ii)  x 2 D 2 − y 2 D ′2 + xD − yD ′ z = log x )
Put  x = e , y = e , i.e., X = log x, Y = log y , as x > 0(∵ log x is defined)
X Y

We have xD ≡ D X , yD ′ ≡ DY , x 2 D 2 ≡ D X ( D X − 1) ,

xyDD ′ ≡ D X DY , y D ′ ≡ DY ( DY − 1)
2 2

∴ differential equation becomes
 D X ( D X − 1) − DY ( DY − 1) + D X − DY  z = log x = X

or  D X2 − DY2  z = X

\ A.E. is m − 1 = 0 
2

⇒ m = ±1
\ C.F. = φ1 (Y + X ) + φ2 (Y − X ) = φ1 ( log y + log x ) + φ2 ( log y − log x )

 y  y
= φ1 ( log xy ) + φ2  log  = f1 ( xy ) + f 2   where f1 ≡ φ1 o ln , f 2 ≡ φ2 o ln 
 x  x
1 X3 X3 1
= ( log x )
3
P.I. = X = =
D − DY
2
X
2
2.3 6 6

∴ general solution is
 y 1
z = f1 ( xy ) + f 2   + ( log x )
3

x 6 
where f1 and f 2 are arbitrary functions.
1 ∂ 2 z 1 ∂z 1 ∂ 2 z 1 ∂z
(iii)  − = −
x 2 ∂x 2 x 3 ∂x y 2 ∂y 2 y 3 ∂y 
x2 y2
Put = X, =Y
2 2 
∂z ∂z dX ∂z
∴ = =x
∂x ∂X dx ∂X 
∂2 z ∂ 2 z dX ∂z
= x +
∂x 2 ∂X 2 dx ∂X 
∂2 z ∂z
= x2 +
∂X 2
∂X

1 ∂ 2 z 1 ∂z ∂ 2 z
∴ − = = D X2 z 
x 2 ∂x 2 x 3 ∂x ∂X 2
Similarly,
1 ∂ 2 z 1 ∂z
− = DY2 z
y 2 ∂y 2 y 3 ∂y 
∴ differential equation reduces to
(D 2
X )
− DY2 z = 0

A.E. is
m2 − 1 = 0

∴ m = ±1

∴ C.F. = φ1 (Y + X ) + φ2 (Y − X )

 x2 + y2   y2 − x2 
= φ1   + φ2  ( ) (
 = f1 x + y + f 2 x − y
2 2 2 2
)
 2   2  
∴ general solution is
( )
z = f1 x 2 + y 2 + f 2 x 2 − y 2 ( )
where f1 and f 2 are arbitrary functions.

Exercise 4.6

1. Solve the following partial differential equations


(a)  r − t + p − q = 0
(b)  ( D + 2 D ′ − 3) ( D + D ′ − 1) z = 0
(c)  ( Dx + 3D y + 4 ) z = 0
2

(d)  ( D + 2 D ′ ) ( D + 3D ′ + 1) ( D + 2 D ′ + 2 ) z = 0
2

(
(e)  D 2 + DD ′ − D ′2 + D − D ′ z = 0 )
2. Solve ( D − D ′ − 2 ) ( D − D ′ − 3) z = e 3 x − 2 y to find general solution.
3. Find the general solutions of the following partial differential equations
( )
(a)  2 D 2 − DD ′ − D ′2 + D − D ′ z = e 2 x + 3 y
(b)  ( D − DD ′ + D ′ − 1) z = cos ( x + 2 y ) + e
2 y

(c)  ( 2 D + 3DD ′ + D ′ + D + D ′) z = x − y
2 2

(d)  ( D − D ′ + D + 3 D ′ − 2 ) z = x y
2 2 2

(e)  ( Dx − D y − 1) ( Dx − D y − 2 ) z = e 2 x − y + x

(
(f)  D 3 − 3DD ′ + D ′ + 4 z = e 2 x + y )
(g)  ( D − 3 D ′ − 2 ) z = 2e 2 x sin ( y + 3 x )
2

( )
(h)  D 2 + 2 DD ′ + D ′2 − 2 D − 2 D ′ z = sin ( x + 2 y )

(i)  ( D + 1) ( D + D ′ − 1) z = sin ( x + 2 y )

(j)  ( D + D ′ + 1) z = e − x tan ( x + 2 y )

4. Obtain the general solutions of the following partial differential equations


(
(a)  x 2 D 2 + 2 xyDD ′ + y 2 D ′2 z = x m y n )
(b)  ( x D 2 2
x )
− y 2 D y2 z = x 2 y
(c)  ( x D 2 2
)
− 4 xyDD ′ + 4 y 2 D ′2 + 6 yD ′ z = x 3 y 4
∂ z
2
∂ z ∂ z2
∂z ∂z 2
(d)  x 2 + 2 xy + y 2 2 − nx − ny + nz = x 2 + y 2
∂x 2 ∂x∂y ∂y ∂x ∂y
(
(e)  xD 3 D ′2 − yD 2 D ′3 z = 0 )
(f)  ( x D 2 2
)
− 2 xyDD ′ − 3 y 2 D ′2 + xD − 3 yD ′ z = x 2 y

Answers 4.6

Here, φ1 , φ2 , φ3 and φ4 are arbitrary functions.


1. (a) z = φ1 ( y + x ) + e − xφ2 ( y − x )
(b) z = e xφ1 ( y − x ) + e 3 xφ2 ( y − 2 x )
(c) z = e −4 xφ1 ( y − 3 x ) + xe −4 xφ2 ( y − 3 x )
(d) z = φ1 ( y − 2 x ) + e − xφ2 ( y − 3 x ) + e −2 x φ3 ( y − 2 x ) + xφ4 ( y − 2 x ) 

(e) z = ∑ cn e an x + bn y where an2 + an bn − bn2 + an − bn = 0 ; an , bn and cn are arbitrary ­constants.
n =1
1
2. z = e φ1 ( y + x ) + e 3 xφ2 ( y + x ) + e 3 x − 2 y
2x

6
1
3. (a)  z = φ1 ( y + x ) + e − x / 2φ2 ( 2 y − x ) − e 2 x + 3 y
8
1
(b) z = e φ1 ( y ) + e φ2 ( y + x ) + sin ( x + 2 y ) − x e y
x −x

2
(c) z = φ1 ( y − x ) + e − x / 2φ2 ( 2 y − x ) + x 2 − x ( y + 1)
1 3 3 21 
(d) z = e −2 x φ1 ( y + x ) + e xφ2 ( y − x ) −  x 2 y + x 2 + xy + y + 3 x + 
2 2 2 4

1 x 3
 (e)  z = e x φ1 ( y + x ) + e 2 xφ2 ( y + x ) + e 2 x − y + +
2 2 4

1 2x+ y
  (f) z = ∑ cn e an x + bn y
+ e where an − 3an bn + bn + 4 = 0;an , bn and cn are arbitrary ­constants.
3

n =1 7
(g) z = e 2 x φ1 ( y + 3 x ) + xe 2 xφ2 ( y + 3 x ) + x 2 e 2 x sin ( y + 3 x )
1
(h) z = φ1 ( y − x ) + e 2 xφ2 ( y − x ) +  2 cos ( x + 2 y ) − 3 sin ( x + 2 y ) 
39
1
(i)  z = e − x φ1 ( y ) + e xφ2 ( y − x ) − cos ( x + 2 y ) + 2 sin ( x + 2 y ) 
10
1 −x
 ( j)  z = e φ ( y − x ) + e ln sec ( x + 2 y )
−x

3
 y  y xm yn
4.  (a)  z = φ1   + xφ2   +
x  x  ( m + n)( m + n − 1)
 y 1
(b) z = φ1 ( xy ) + xφ2   + x 2 y
x 2
1 3 4
( ) (
 (c)  z = φ1 x 2 y + xφ2 x 2 y +
30
)
x y

 y x + y
2 2
 y
(d) z = xφ1   + x nφ2   +
x  x  2−n
(e) z = φ1 ( x ) + φ2 ( y ) + xφ3 ( y ) + yφ4 ( x ) + φ5 ( xy )
 y 1
( )
  (f)  z = φ1 x 3 y + φ2   − x 2 y
x 3

4.10 Applications of Partial Differential Equations


In most engineering applications, second order partial differential equations arise. First of all, we shall classify second order partial differential equations.
A general second order linear partial differential equation is

A ∂²u/∂x² + B ∂²u/∂x∂y + C ∂²u/∂y² + D ∂u/∂x + E ∂u/∂y + Fu = f(x, y)

where A, B, C, D, E and F are functions of x and y.
These differential equations are classified into three types.
The differential equation is called parabolic in a region if B² − 4AC = 0 in that region. For example, the one dimensional heat equation ∂u/∂t = c² ∂²u/∂x² is parabolic.
The differential equation is called hyperbolic in a region if B² − 4AC > 0 in that region. For example, the one dimensional wave equation ∂²u/∂t² = c² ∂²u/∂x² is hyperbolic.
The differential equation is called elliptic in a region if B² − 4AC < 0 in that region. For example, the two dimensional Laplace equation ∂²u/∂x² + ∂²u/∂y² = 0 is elliptic. The nature of the p.d.e. depends only on the coefficients of the second-order derivatives.

Example 4.41: Classify the following p.d.e.


u_xx + 4u_xy + (x² + 4y²) u_yy = sin(x + y)

Solution: Coeff. of u_xx = A = 1,  Coeff. of u_xy = B = 4,  Coeff. of u_yy = C = x² + 4y²
∴ B² − 4AC = 16 − 4(x² + 4y²) = 16[1 − (x²/4 + y²)]
If the region R is the ellipse x²/4 + y² = 1,
then outside R, x²/4 + y² − 1 > 0  ⇒  B² − 4AC < 0
inside R, x²/4 + y² − 1 < 0  ⇒  B² − 4AC > 0
and on the ellipse R, x²/4 + y² − 1 = 0  ⇒  B² − 4AC = 0
Hence, the partial differential equation is elliptic outside this ellipse, hyperbolic inside this ellipse and parabolic on this ellipse.
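For equations with variable coefficients, the sign of B² − 4AC can also be tested point by point. The sketch below (SymPy assumed; the helper function classify is our own, not from the text) reproduces the classification of Example 4.41 at three sample points:

    import sympy as sp

    x, y = sp.symbols('x y')
    A, B, C = 1, 4, x**2 + 4*y**2          # coefficients from Example 4.41
    disc = sp.simplify(B**2 - 4*A*C)       # 16 - 4*(x**2 + 4*y**2)

    def classify(px, py):
        d = disc.subs({x: px, y: py})
        return 'hyperbolic' if d > 0 else ('elliptic' if d < 0 else 'parabolic')

    print(classify(0, 0))                  # inside the ellipse x^2/4 + y^2 = 1 -> hyperbolic
    print(classify(2, 0))                  # on the ellipse                     -> parabolic
    print(classify(3, 3))                  # outside the ellipse                -> elliptic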
Method of Separation of Variables
Suppose we are given a partial differential equation in u(x, y) and its partial derivatives.
We assume
u(x, y) = X(x) Y(y)
where X is a function of x only and Y is a function of y only.
Substitute this in the given differential equation. Take the terms in X and its derivatives to one side and the terms in Y and its derivatives to the other side. Since x and y are independent, both sides must equal the same constant.
Thus, two ordinary differential equations are formed. Solve them for X and Y; then u = XY will be a solution of the given p.d.e.
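A separated solution obtained this way is easy to verify by direct substitution. For instance, the solution u(x, t) = 6e^(−(3x+2t)) of the equation u_x = 2u_t + u found in Example 4.42 below can be checked as follows (SymPy assumed):

    import sympy as sp

    x, t = sp.symbols('x t')
    u = 6*sp.exp(-(3*x + 2*t))             # separated solution of u_x = 2 u_t + u (Example 4.42)
    print(sp.simplify(sp.diff(u, x) - 2*sp.diff(u, t) - u))   # 0: the PDE is satisfied
    print(u.subs(t, 0))                    # 6*exp(-3*x): the initial condition is met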

Example 4.42: Using the method of separation of variables, solve


∂u ∂u
= 2 +u
∂x ∂t 
where u ( x, 0 ) = 6e −3 x ; x > 0, t > 0

Solution: Let u ( x, t ) = X ( x ) T ( t )

∂u ∂u
∴ = X ′ T, = XT ′
∂x ∂t 
where dashes denote derivatives w.r.t. their variables.

∴ differential equation becomes


X ′ T = 2 XT ′ + XT 
X′ T′
⇒ = 2 +1
X T 
Now L.H.S. is function of x and R.H.S. is function of t and hence each must be constant
X′ T′
∴ = 2 +1 = λ
X T 
λ −1
∴ X = λX, T′ =
1
T
2 
λ −1
t
∴ X = Ae λ x , T = Be 2

λ −1 λ −1
t t
∴ u = XT = AB e λ x e 2
= c eλ x e 2

u ( x, 0 ) = c e λx
= 6e −3 x

⇒ c = 6, λ = −3

∴ solution is
u ( x , t ) = 6e
−(3 x + 2t )


Example 4.43: Use the method of separation of variables to solve the p.d.e.
∂u ∂u
3 +2 = 0, u ( x , 0 ) = 4 e − x
∂x ∂y

Solution: Let u ( x, y ) = X ( x ) Y ( y )
∂u ∂u
∴ = X ′Y , = XY ′
∂x ∂y

where dashes denote derivatives w.r.t their variables.

∴ differential equation becomes


3 X ′ Y + 2 XY ′ = 0
X′ Y′ 
⇒ 3 = −2
X Y
L.H.S. is function of x and R.H.S. is function of y and hence each is constant
X′ Y′
∴ 3 = −2 = λ
X Y 
1 λ
∴ X ′ = λX, Y′ = − Y
3 2 
λ λ
x − y
∴ X = Ae 3 , Y = Be 2

λ λ λ λ
x − y x − y
∴ u ( x, y ) = AB e e 3 2
= ce e 3 2

λ
u ( x, 0 ) = c e
x
−x
= 4e 
3

⇒ c = 4, λ = −3

∴ solution is
1
(3 y − 2 x )
u ( x , y ) = 4e 2

∂u ∂u
Example 4.44: Solve the equation 4 + = 3u given u = 3e − y − e −5 y when x = 0.
∂x ∂y
Solution: Let u ( x, y ) = X ( x ) Y ( y )

∂u ∂u
∴ = X ′Y , = XY ′
∂x ∂y

where dashes denote derivatives w.r.t. their variables
∴ differential equation becomes
4 X ′ Y + XY ′ = 3 XY 
4 X ′ −Y ′
⇒ = +3
X Y 
L.H.S. is function of x and R.H.S. is function of y and hence each is constant.
X ′ −Y ′
∴ 4 = +3= λ
X Y 
λ
∴ X ′ = X , Y ′ = (3 − λ ) Y
4 
λ
3− λ ) y
X = Ae 4 , Y = Be (
x


λ λ
u ( x, y ) = AB e e (
3− λ ) y 3− λ ) y
= c e e(
x x
∴ 4 4


Now, u ( 0, y ) = 3e − y − e −5 y 
λ1 λ2
∴ u ( x, y ) is sum of two solutions c1e 4 e (
x 3 − λ1 ) y x
and c2 e 4 e (
3 − λ2 ) y

λ1 λ2
u ( x, y ) = c1 e 4 e (
3 − λ1 ) y
+ c2 e 4 e (
x x 3− λ2 ) y


u ( 0, y ) = c1e ( 3− λ1 ) y
+ c2 e ( 3− λ2 ) y
= 3e −y
− e −5 y

∴ either c1 = 3, λ1 = 4, c2 = −1, λ 2 = 8 or c1 = −1, λ1 = 8, c2 = 3, λ 2 = 4 
In both cases, solution is
u ( x , y ) = 3e x − y − e 2 x − 5 y

∂ 2V ∂V
Example 4.45: Use the method of separation of variables to solve the equation = given
that V = 0 when t → ∞ as well as V = 0 at x = 0 and x = l. ∂x 2 ∂t

Solution: Let V = X ( x )T (t )

∂ 2V ∂V
∴ = X ′′ T , = XT ′
∂x 2 ∂t 
where dashes denote derivatives with respect to their variables.
∴ differential equation becomes

X ′′ T = XT ′ 

X ′′ T ′
⇒ =
X T 
Now L.H.S. is function of x and R.H.S. is function of t and hence each must be constant.
X ′′ T ′
∴ = =λ
X T 
∴ X ′′ = λ X , T ′ = λT 

Now, T ′ = λT

⇒ T = Ae λt
As V = XT = 0 when t → ∞ , soλ must be negative
Take λ = − p2 , p > 0 
2
∴ T = Ae − p t 
Now X ′′ = λ X , 

A.E. is
m2 = λ = − p2 
∴ m = ± ip
Its solution is
X = B cos px + C sin px 

V = Ae − p t [ B cos px + C sin px ]
2


=e − p2 t
[c1 cos px + c2 sin px ] 
V (0, t ) = c1e − p t = 0
2



⇒ c1 = 0
V ( x, t ) = c2 e − p t sin px
2


V ( l , t ) = c2 e − p2 t
sin pl = 0 
⇒ pl = nπ , n = 1, 2, …    (∵ p > 0 )

∴ solutions are
n2 π 2
nπ x
− t
V ( x, t ) = bn e ; n = 1, 2,…
l2
sin    ( bn = c2 )
l 
∴ By principle of superposition, complete solution is
n2π 2

nπ x− t
V ( x, t ) = ∑ bn e l2
sin
n =1 l 
Example 4.46: Using the method of separation of variables, solve the parabolic partial differen-
∂2u ∂u
tial equation 2 = 16
∂x ∂y
Solution: Let u ( x, y ) = X ( x ) Y ( y )

∂2u ∂u
∴ = X ′′, = XY ′
∂x 2 ∂y 
where dashes denote derivatives w.r.t. their variables.
∴ differential equation becomes
X ′′ Y = 16 XY ′

X ′′ 16Y ′
or   =
X Y 
L.H.S. is function of x and R.H.S. is function of y and hence each is constant.
X ′′ 16Y ′
∴ = =λ
X Y 
1
X ′′ = λ X , Y ′ = λY
16 

Now, three cases arise as discussed hereunder.

Case I:  λ = 0 then


X ′′ = 0, Y ′ = 0 
∴ X = Ax + B, Y = C 
∴ u ( x, y ) = C ( Ax + B ) = c1 x + c2

Case II:  λ > 0, let λ = p 2 , p > 0
p2
∴ X ′′ = p 2 X , Y ′ = Y
16 
A.E. of X ′′ = p 2 X is m 2 − p 2 = 0
   ∴ m = ± p 
p2
y
∴ X = Ae − px + Be px , Y = C e 16

p2

( Ae )
y
− px
∴ u =Ce 16
+ Be px

(c e )     ( p = 4k ) 
k2y
−4 kx
=e 3 + c4 e 4 kx

Case III:  λ < 0, let λ = −16 k12 , k1 > 0 

X ′′ = −16 k12 X , Y ′ = − k12Y 

A.E. of X ′′ = −16 k12 X is m 2 + 16 k12 = 0


   ⇒ m = ±4i k1 
∴ X = A cos 4 k1 x + B sin 4 k1 x 
and solution of Y ′ = −k Y is1
2

2
Y = C e − k1 y

∴ solution is
u = XY = e − k1 y ( c5 cos 4 k1 x + c6 sin 4 k1 x )
2


∴ solutions are
u ( x, y ) = c1 x + c2 , u ( x, y ) = e k (c e )
2
−4 kx
y
+ c4 e 4 kx ,
3

u ( x, y ) = e − k12 y
( c5 cos 4k1 x + c6 sin 4k1 x ) 

∴ general solution is

u ( x, y ) = c1 x + c2 + e k (c e )
+ c4 e 4 kx + e − k1 y ( c5 cos 4 k1 x + c6 sin 4 k1 x )
2 2
y −4 kx
3

where c1 , c2 , c3 , c4 , c5 , c6 , k > 0, k1 > 0 are arbitrary constants.

4.11 VIBRATIONS OF A STRETCHED STRING (ONE DIMENSIONAL WAVE EQUATION)

Figure 4.1: A string stretched along OX from O to A, showing neighbouring points P(x, y) and Q(x + δx, y + δy) with the tension T acting along the string at P and Q.

Consider a uniform elastic string of length l stretched tightly between two points O and A. Suppose that the string is displaced slightly from its equilibrium position OA. Taking O as origin, the x-axis along OA and the y-axis perpendicular to OA at O, the displacement y of any point will depend upon x and t, where x is the x-coordinate of the point and t is time. We shall consider the vibration of the string under the following assumptions.
 (i) Each point of the string moves perpendicular to the equilibrium position OA, in the x–y plane.
 (ii) The string is flexible and does not offer resistance to bending.
 (iii) The displacement y and the slope ∂y/∂x are small, so that their higher powers can be neglected.
 (iv) The tension in the string is large and constant throughout the string, and the weight of the string is negligible in comparison to the tension.
Let m be the mass per unit length of the string. Let P(x, y) and Q(x + δx, y + δy) be the positions of two points on the string at time t. The tension T acts at P and Q as shown in the figure.
Since there is no horizontal motion,

T cos(ψ + δψ) = T cos ψ = T    (∵ ψ and ψ + δψ are small)  (4.44)

Let arc PQ = δs; then m δs is the mass of the portion PQ of the string. By Newton's second law of motion, the equation of vertical motion is

m δs ∂²y/∂t² = T sin(ψ + δψ) − T sin ψ

∴ (m δs/T) ∂²y/∂t² = T sin(ψ + δψ)/[T cos(ψ + δψ)] − T sin ψ/(T cos ψ)    (from (4.44))
or  (m δs/T) ∂²y/∂t² = tan(ψ + δψ) − tan ψ

But tan ψ and tan(ψ + δψ) are the slopes of the tangents at P and Q, respectively.
Hence,
tan ψ = (∂y/∂x)_P = (∂y/∂x)_(x, y)
tan(ψ + δψ) = (∂y/∂x)_Q = (∂y/∂x)_(x + δx, y + δy)

∴ ∂²y/∂t² = (T/m) [(∂y/∂x)_(x + δx) − (∂y/∂x)_x]/δx    (∵ δs ≅ δx)

Taking the limit as δx → 0,

∂²y/∂t² = (T/m) ∂²y/∂x²
or  ∂²y/∂t² = c² ∂²y/∂x²

where c² = T/m is called the diffusivity of the string.
It is the partial differential equation giving the vertical displacement of the points of the string. The differential equation of vibrations of a stretched string is also called the one dimensional wave equation.

4.11.1  Solution of the Wave Equation


The wave equation is

∂²y/∂t² = c² ∂²y/∂x²

Let y(x, t) = X(x) T(t)
where X(x) is a function of x only and T(t) is a function of t only.
∴ ∂²y/∂t² = X(x) T″(t),  ∂²y/∂x² = X″(x) T(t)
∴ the differential equation becomes
X(x) T″(t) = c² X″(x) T(t)
or  X″(x)/X(x) = (1/c²) T″(t)/T(t)  (4.45)

The L.H.S. is a function of x and the R.H.S. is a function of t, and hence each must be a constant. This constant λ may be zero, positive or negative.

Case I:  λ = 0
In this case, we have
X″(x) = 0,  T″(t) = 0
∴ X(x) = Ax + B,  T(t) = Ct + D
∴ y(x, t) = (Ax + B)(Ct + D)  (4.46)

Case II:  λ is positive, i.e., λ = p²; p > 0
We have
X″(x) − p² X(x) = 0
T″(t) − p²c² T(t) = 0
Their solutions are
X = A e^(px) + B e^(−px),  T = C e^(pct) + D e^(−pct)
∴ y(x, t) = (A e^(px) + B e^(−px))(C e^(pct) + D e^(−pct))  (4.47)

Case III:  λ is negative, i.e., λ = −p²; p > 0
We have
X″(x) + p² X(x) = 0
T″(t) + p²c² T(t) = 0
Their solutions are
X = A cos(px) + B sin(px),  T = C cos(pct) + D sin(pct)
∴ y(x, t) = [A cos(px) + B sin(px)][C cos(pct) + D sin(pct)]  (4.48)

The solution may be one of these, or a combination of them.


But when the ends O and A are fixed, then y(0, t) = y(l, t) = 0 for all t, and hence solutions (4.46) and (4.47) are invalid.
∴ y(x, t) = [A cos(px) + B sin(px)][C cos(pct) + D sin(pct)]
y(0, t) = 0  ⇒  A = 0
and y(l, t) = 0  ⇒  sin pl = 0  ⇒  p = nπ/l;  n ∈ N
∴ the solutions are

y(x, t) = [bn cos(nπct/l) + en sin(nπct/l)] sin(nπx/l);  n = 1, 2, 3, …

By the principle of superposition, the solution is

y(x, t) = Σ_{n=1}^{∞} [bn cos(nπct/l) + en sin(nπct/l)] sin(nπx/l)

The constants bn and en will be found from the initial conditions.
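Each term of this series can be verified directly: it satisfies the wave equation and vanishes at both ends. A minimal symbolic check, assuming SymPy (the symbols b_n and e_n stand for the coefficients of one mode):

    import sympy as sp

    x, t, c, l = sp.symbols('x t c l', positive=True)
    n = sp.symbols('n', positive=True, integer=True)
    b, e = sp.symbols('b_n e_n')

    # One term of the series solution with fixed ends
    yn = (b*sp.cos(n*sp.pi*c*t/l) + e*sp.sin(n*sp.pi*c*t/l))*sp.sin(n*sp.pi*x/l)

    print(sp.simplify(sp.diff(yn, t, 2) - c**2*sp.diff(yn, x, 2)))   # 0: wave equation holds
    print(sp.simplify(yn.subs(x, 0)), sp.simplify(yn.subs(x, l)))    # 0 0: ends stay fixed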

4.11.2  D’Alembert’s Method of Solving Wave Equation


Wave equation is
∂2 y 2 ∂ y
2
= c
∂t 2 ∂x 2 
Let us introduce new variables
ξ = x + ct , η = x − ct 

so that y becomes function of ξ and η


∂y ∂y ∂ξ ∂y ∂η  ∂y ∂y 
= + = c − 
∂t ∂ξ ∂t ∂η ∂t  ∂ξ ∂η  
∂2 y  ∂  ∂y ∂y  ∂ξ ∂  ∂y ∂y  ∂η 
= c  −  +  −  
∂t  ∂ξ  ∂ξ ∂η  ∂t ∂η  ∂ξ ∂η  ∂t  
2

 ∂ 2 y ∂ 2 y ∂2 y ∂ 2 y 
= c 2  2 − − + 2 
 ∂ξ ∂ξ∂η ∂η∂ξ ∂η  

 ∂ 2 y ∂ 2 y ∂ 2 y 
= c 2  2 − 2 + 
 ∂ξ ∂ξ∂η ∂η 2  

∂y ∂y ∂ξ ∂y ∂η ∂y ∂y
= + = +
∂x ∂ξ ∂x ∂η ∂x ∂ξ ∂η 
∂2 y ∂  ∂y ∂y  ∂ξ ∂  ∂y ∂y  ∂η
=  +  +  + 
∂x 2 ∂ξ  ∂ξ ∂η  ∂x ∂η  ∂ξ ∂η  ∂x 
∂ y ∂ y
2 2
∂ y ∂ y2 2
= + + +
∂ξ 2 ∂ξ∂η ∂η∂ξ ∂η 2 

∂2 y ∂2 y ∂2 y
= + 2 +
∂ξ 2 ∂ξ∂η ∂η 2


∴wave equation becomes


 ∂2 y ∂2 y ∂2 y   ∂2 y ∂2 y ∂2 y 
c2  2 − 2 + 2  = c2  2 + 2 + 
 ∂ξ ∂ξ∂η ∂η   ∂ξ ∂ξ∂η ∂η 2 

∂2 y
or =0
∂ξ∂η 
Integrate w.r.t. ξ , keeping η constant
∂y
= f (η )
∂η 
Integrate w.r.t. η , keeping ξ constant
y ( x, t ) = ∫ f (η ) dη + φ (ξ ) = φ (ξ ) + ψ (η ) where ψ (η ) = ∫ f (η ) dη

∴ solution is
y ( x, t ) = φ ( x + ct ) + ψ ( x − ct )

where φ and ψ are arbitrary functions which will be determined from initial conditions.
This solution can be obtained directly also as wave equation:
(D t
2
)
− c 2 Dx2 y = 0

or ( Dt + cDx ) ( Dt − cDx ) y = 0 
Hence, solution is
y ( x, t ) = φ ( x + ct ) + ψ ( x − ct )

Suppose initial conditions are
 ∂y 
y ( x, 0 ) = f ( x ) and   = g (x)
 ∂t  ( x , 0 )

then, y ( x, 0 ) = φ ( x ) + ψ ( x ) = f ( x ) (4.49)

∂y
= c φ ′ ( x + ct ) −ψ ′ ( x − ct ) 
∂t 
 ∂y 
∴   = c φ ′ ( x ) − ψ ′ ( x ) = g ( x )
∂t ( x,0)

1
⇒ φ ′ (x) − ψ ′ (x) = g (x)
c 
Integrate w.r.t. x
x
1
φ ( x ) −ψ ( x )  = ∫ g ( x ) dx + k (4.50)
c x0

where x0 and k are arbitrary constants.



Solving (4.49) and (4.50),

φ(x) = (1/2)[f(x) + (1/c) ∫_{x0}^{x} g(x) dx + k]
ψ(x) = (1/2)[f(x) − (1/c) ∫_{x0}^{x} g(x) dx − k]

∴ y(x, t) = φ(x + ct) + ψ(x − ct)
= (1/2)[f(x + ct) + (1/c) ∫_{x0}^{x+ct} g(x) dx + k] + (1/2)[f(x − ct) − (1/c) ∫_{x0}^{x−ct} g(x) dx − k]
= (1/2)[f(x + ct) + f(x − ct) + (1/c){∫_{x0}^{x+ct} g(x) dx + ∫_{x−ct}^{x0} g(x) dx}]
= (1/2)[f(x + ct) + f(x − ct) + (1/c) ∫_{x−ct}^{x+ct} g(x) dx]
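D'Alembert's formula can be verified by direct substitution for any smooth f and g. The sketch below (SymPy assumed; f = sin x and g = cos x are our own sample data) checks the wave equation and both initial conditions:

    import sympy as sp

    x, t, c = sp.symbols('x t c', positive=True)
    s = sp.symbols('s')
    f = sp.sin(x)            # sample initial deflection (our choice)
    g = sp.cos(x)            # sample initial velocity (our choice)

    y = (f.subs(x, x + c*t) + f.subs(x, x - c*t))/2 \
        + sp.integrate(g.subs(x, s), (s, x - c*t, x + c*t))/(2*c)

    print(sp.simplify(sp.diff(y, t, 2) - c**2*sp.diff(y, x, 2)))    # 0: wave equation
    print(sp.simplify(y.subs(t, 0) - f))                            # 0: y(x, 0) = f(x)
    print(sp.simplify(sp.diff(y, t).subs(t, 0) - g))                # 0: y_t(x, 0) = g(x)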

Remark 4.4:
(i) If we write half range Fourier sine series of f ( x ) and g ( x ) in (0, l ) we shall get the
­solution obtained in article (4.11.1).
(ii) If stretched string is infinite then method of solution in article (4.11.1) cannot be applied and
solution will be obtained by D ′ Alembert’s method.
∂2 y ∂2 y
Example 4.47: Solve the p.d.e. = a 2 2 representing the vibrations of a string of length l,
∂t 2
∂x
fixed at both ends, subject to the boundary conditions y ( 0, t ) = y ( l , t ) = 0 and initial conditions
π x ∂y
, = 0 at t = 0 
y = y0 sin
l ∂t
Solution: Here, the boundary conditions are
y ( 0, t ) = y ( l , t ) = 0

∴ solution of wave equation is

  π nat   π nat   nπ x
y ( x, t ) = ∑ bn cos   + en sin  l   sin l
n =1   l    
πx ∞
nπ x
y ( x, 0 ) = y0 sin = ∑ bn sin
l n =1 l 
∴ b1 = y0 , bn = 0; n = 2, 3,… 

 π at  πx ∞ nπ at nπ x
∴ y ( x, t ) = y0 cos   sin + ∑ en sin sin
 l  l n =1 l l 

∂y π a  π at  π x ∞ nπ a nπ at nπ x

∂t
= − y0 sin   sin l + ∑ en l cos l sin l 
l  l  n =1

 ∂y  ∞
π na nπ x
∴   = 0 = ∑ en sin
∂t t = 0 l l
n =1

∴ en = 0; n = 1, 2, 3,… 

∴ solution is
 π at   π x 
y ( x, t ) = y0 cos   sin  
 l   l 

Example 4.48: A thin uniform tightly stretched vibrating string fixed at the points x = 0 and
∂2 y 2 ∂ y
2
πx
x = l satisfy the equation = c ; y ( x, 0 ) = y0 sin 3 and released from rest from this
∂t 2
∂x 2
l
position. Find the displacement y ( x, t ) at any x and any time t.
Solution: Since the ends are fixed
∴ y (0, t ) = y (l , t ) = 0

∂ y
2
∂ y 2
∴ solution of wave equation = c 2 2 is
∂t 2
∂x

  nπ ct   nπ ct   nπ x
y ( x, t ) = ∑ bn cos   + en sin    sin
n =1   l   l  l

πx 1  πx 3π x  
y ( x, 0 ) = y0 sin 3 = y0   3 sin − sin
l 4  l l  

y0  πx 3π x  ∞ nπ x

4 3 sin l − sin l  = ∑ bn sin l
  n =1 
3 y0 − y0
∴ b1 = , b3 = ; bn = 0; n = 2, 4, 5,…
4 4 
3 y0  π ct  π x y0  3π ct  3π x
\ y ( x, t ) = cos  sin − cos  sin
4  l  l 4  l  l 

 nπ ct  nπ x
+ ∑ en sin   sin
n =1  l  l

∂y 3y πc  π ct  π x 3 y0π c  3π ct  3π x
\ = − 0 sin   sin + sin   sin
∂t 4l  l l 4l  l l 

nπ c  nπ ct  nπ x
+∑ en cos   sin
n =1 l  l  l

 ∂y  ∞
nπ c nπ x
\  ∂t  = 0 = ∑ l en sin l 
 t = 0 n =1

∴ en = 0; n = 1, 2, 3,… 
∴ solution is
y0   π ct  π x  3π ct  3π x 
y ( x, t ) = 3 cos  l  sin l − cos  l  sin l 
4      
∂2u ∂2u
Example 4.49: The vibrations of an elastic string is governed by the p.d.e. = .
∂t 2 ∂x 2
The length of the string is π and the ends are fixed. The initial velocity is zero and the initial de-
flection is u ( x, 0 ) = 2 ( sin x + sin 3 x ) . Find the deflection u ( x, t ) of the vibrating string for t > 0.
Solution: Since the ends are fixed
∴ u (0, t ) = u (π , t ) = 0

∂2 y ∂2u
∴ solution of wave equation 2 = 2 with diffusivity unity is
∂t ∂x

  nπ t   nπ t   nπ x
u ( x, t ) = ∑ bn cos   + en sin    sin
n =1   π   π  π


= ∑ b n cos ( nt ) + en sin ( nt )  sin ( nx )
n =1
∞ 
u ( x, 0 ) = 2 ( sin x + sin 3 x ) = ∑ bn sin ( nx )
n =1 
∴ b1 = 2 , b3 = 2 ; bn = 0 , n = 2, 4, 5, 6 … 

∴ u ( x, t ) = 2 cos t sin x + 2 cos 3t sin 3 x + ∑ en sin ( nt ) sin ( nx )
n =1 
∂u ∞
= −2 sin t sin x − 6 sin 3t sin 3 x + ∑ nen cos ( nt ) sin ( nx )
∂t n =1 
 ∂u  ∞
 ∂t  = 0 = ∑ nen sin ( nx )
 t = 0 n =1

∴ en = 0; n = 1, 2, 3… 
∴ solution is
u ( x, t ) = 2 ( cos t sin x + cos 3t sin 3 x ) .

∂ y 2
∂2 y
Example 4.50: Solve the boundary value problem = 4 , given that
∂t 2 ∂x 2
y ( 0, t ) = y ( 5, t ) = 0,

 ∂y 
  = 5 sin π x, y ( x, 0 ) = 0
∂t t = 0


Solution: Since y(0, t) = y(5, t) = 0,
∴ solution of wave equation ∂²y/∂t² = 4 ∂²y/∂x² with diffusivity c² = 4 is

 2nπ t 2nπ t  nπ x
y ( x, t ) = ∑ bn cos + en sin  sin
n =1  5 5  5 

nπ x
y ( x, 0 ) = 0 = ∑ bn sin
n =1 5 
∴ bn = 0; n = 1, 2, 3… 

2nπ t nπ x
∴ y ( x, t ) = ∑ en sin sin
n =1 5 5 
∂y ∞ 2nπ 2nπ t nπ x
=∑ en cos sin
∂t n =1 5 5 5 
 ∂y  ∞
2nπ nπ x
∴   = 5 sin π x = ∑ en sin
∂t t = 0 n =1 5 5

5
∴ e5 = ; en = 0; n = 1, 2, 3… ; n ≠ 5
2π 
∴ solution is
5
y ( x, t ) = sin ( 2π t ) sin (π x )
2π 

Example 4.51: Solve the wave equation ∂²u/∂t² = c² ∂²u/∂x²; 0 < x < l, 0 < t < 4 with the boundary conditions u(0, t) = u(l, t) = 0 and initial conditions u(x, 0) = f(x) and (∂u/∂t)_{t=0} = g(x); 0 < x < l.
Solution: As
u ( 0, t ) = u ( l , t ) = 0

∂ u
2
∂2u
∴ solution of wave equation = c 2 2 ; 0 < x < l , 0 < t < 4 is
∂t 2
∂x

 nπ ct nπ ct  nπ x
u ( x, t ) = ∑ bn cos + en sin  sin l ; 0 < x < l , 0 < t < 4
n =1  l l  

nπ x
u ( x, 0 ) = f ( x ) = ∑ bn sin ;0<x<l
n =1 l 
It is Fourier half range sine series of f ( x ) in ( 0, l )
2 l nπ x
∴ bn = ∫ f ( x ) sin dx
l 0 l 

∂u ∞ nπ c  nπ ct nπ ct  nπ x
=∑ −bn sin + en cos sin 
∂t n =1 l  l l  l
 ∂u  ∞
nπ c nπ x
∴   = g ( x ) = ∑ en sin ; 0< x<l
∂t t = 0 n =1 l l

It is Fourier half range sine series of g ( x ) in ( 0, l )

nπ c 2 l nπ x
∴ en = ∫ g ( x ) sin dx
l l 0 l 
2 l nπ x
∴ en = ∫ g ( x ) sin dx
nπ c 0 l 
∴ solution is

 nπ ct nπ ct  nπ x
u ( x, t ) = ∑ bn cos + en sin  sin l , 0 ≤ x ≤ l ; 0 ≤ t < 4 
n =1  l l 
2 l nπ x 2 l nπ x
∫ f ( x ) sin g ( x ) sin
nπ c ∫0
where bn = dx, en = dx
l 0 l l 
Example 4.52: The points of trisection of a string of length l are pulled aside through the same
distance d on opposite sides of the position of equilibrium and the string is released from rest.
Derive an expression for the displacement of the string at subsequent time and show that the
midpoint of the string always remains at rest.
Solution: Let OA be string of length l. B and C are points of trisection and P is midpoint of string.
Then initially at t = 0, the string is in position OB ′PC ′A.
Figure 4.2: Initial position OB′PC′A of the string OA; B and C are the points of trisection, B′ lies a distance d above B, C′ a distance d below C, and P is the midpoint.

We have
l  l   2l 
O ( 0, 0 ) , B ′  , d  , P  , 0  , C ′  , −d  , A ( l , 0 )
 3   2   3 

Let y(x,t) be displacement of string at distance x from O at time t.


Equation of portion OB ′ is
d −0
y ( x, 0 ) − 0 = ( x − 0)
l
−0
3 
3d
⇒ y ( x, 0 ) = x
l 
Equation of portion B ′C ′ is
−d − d  l
y ( x, 0 ) − d = x− 
2l l  3

3 3 
6d
⇒ y ( x, 0 ) = − x + 3d
l 
Equation of portion C ′A is
0+d
y ( x, 0 ) − 0 = (x − l)
2l
l−
3 
3d
⇒ y ( x, 0 ) = x − 3d
l 
3d l
∴ y ( x, 0 ) = x; 0 ≤ x ≤
l 3
6d l 2l
=− x + 3d ; ≤ x ≤
l 3 3 
3d 2l
= x − 3d ; ≤x≤l
l 3 
As ends O and A are fixed
∴ y (0, t ) = y (l , t ) = 0

Now, deflection y ( x, t ) follow wave equation
∂2 y 2 ∂ y
2
= c
∂t 2 ∂x 2 
where c 2 is diffusivity of string.
Its solution is

 nπ ct nπ ct  nπ x
y ( x, t ) = ∑  bn cos + en sin  sin l
n =1  l l  
∂y ∞ nπ c  nπ ct nπ ct  nπ x
=∑ −bn sin + en cos sin
∂t n =1 l  l l  l 

 ∂y  ∞
nπ c nπ x
∴   = 0 = ∑ en sin 
∂t t = 0 n =1 l l
∴ en = 0; n = 1, 2, 3,… 

nπ ct nπ x
∴ y ( x, t ) = ∑ bn cos sin
n =1 l l 

nπ x
y ( x, 0 ) = ∑ bn sin
n =1 l 
It is half range Fourier sine series of y ( x, 0 ) in 0 ≤ x ≤ l
nπ x
l
2
∴ bn = ∫ y ( x, 0 ) sin dx
l 0 l

 l 2l

nπ x nπ x nπ x 
3 3 l
2  3d  6d   3d 
= ∫ x sin dx + ∫  3d − x sin dx + ∫  x − 3d  sin dx 
l 0 l l l  l  l 2l  l  l
 
 3 3 

6 d    l
l /3
nπ x   l 2 nπ x  
= 2  x  − cos −− sin 
l    nπ l   n2π 2 l  0
 
2l / 3
  l nπ x   l2 nπ x  
+ ( l − 2 x )  − cos  − ( −2 )  − 2 2 sin 
  nπ l   n π l  l / 3

nπ x   
l
  l nπ x   l 2
+ ( x − l )  − cos  −  − 2 2 sin  
  nπ l   nπ l  2 l / 3 

6d  l 2 nπ l2 nπ l2 2nπ l2 nπ 2l 2 2nπ
=  − cos + sin + cos + cos − sin
l  3nπ
2
3 nπ2 2
3 3nπ 3 3nπ 3 nπ2 2
3

2l nπ l
2
2nπ2
l 2nπ  2
+ sin − cos − 2 2 sin 
n 2π 2 3 3nπ 3 nπ 3 


6 d 3l 2
nπ 3l 2
 nπ  
= 2  2 2 sin − sin  nπ − 
l n π 3 n 2π 2  3  

18d  nπ nπ 
− ( −1) sin 
n +1
= 2 2 sin
nπ  3 3 
18d  nπ
1 + ( −1)  sin
n
= 2 2 
nπ  3 
36 d 2nπ 9d 2nπ
∴ b2 n = sin = 2 2 sin ; b2 n −1 = 0; n = 1, 2, 3,…
( 2n ) π π
2 2 3 n 3


∴ y(x, t) = (9d/π²) Σ_{n=1}^{∞} (1/n²) sin(2nπ/3) cos(2nπct/l) sin(2nπx/l)
which gives the displacement of the string at time t.
∂y/∂t = (9d/π²) Σ_{n=1}^{∞} (1/n²) sin(2nπ/3) · (−2nπc/l) sin(2nπct/l) sin(2nπx/l)
∴ (∂y/∂t)_{x = l/2} = 0  (∵ sin nπ = 0; n = 1, 2, …)
∴ the midpoint of the string always remains at rest.
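A short numerical check (added here; l, c and d below are illustrative sample values, not from the text): a partial sum of the series reproduces the initial displacements ±d at the points of trisection and stays at zero at the midpoint for all times sampled.

```python
# Numerical partial-sum check (illustrative parameters; not part of the text).
import numpy as np

l, c, d, N = 3.0, 1.0, 0.1, 2000                  # assumed sample values
n = np.arange(1, N + 1)

def y(x, t):
    coeff = (9 * d / np.pi**2) * np.sin(2 * n * np.pi / 3) / n**2
    return np.sum(coeff * np.cos(2 * n * np.pi * c * t / l) * np.sin(2 * n * np.pi * x / l))

print(round(y(l / 3, 0.0), 3), round(y(2 * l / 3, 0.0), 3))   # ~ 0.1 and -0.1: initial shape
print(max(abs(y(l / 2, t)) for t in np.linspace(0, 10, 50)))  # ~ 0: midpoint never moves
```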


Example 4.53: Solve the wave equation ∂²u/∂t² = a² ∂²u/∂x² under the conditions u = 0 when x = 0 and x = π, ∂u/∂t = 0 when t = 0, and u(x, 0) = x; 0 < x < π.
Solution: As u ( 0, t ) = u (π , t ) = 0

∂2u ∂2u
∴ solution of wave equation 2 = a 2 2 is
∂t ∂x

u ( x, t ) = ∑ bn cos ( nat ) + en sin ( nat )  sin ( nx )
n =1 
∂u ∞
= ∑ na ( −bn sin ( nat ) + en cos ( nat ) ) sin ( nx )
∂t n =1 
 ∂u  ∞
  = 0 = ∑ naen sin ( nx )
∂t t = 0 n =1

⇒ en = 0; n = 1, 2,… 

∴ u ( x, t ) = ∑ bn cos ( nat ) sin ( nx )
n =1


u ( x, 0 ) = x = ∑ bn sin ( nx )
n =1 
It is half range Fourier sine series of x in [ 0, π ]
π
2
x sin ( nx ) dx
π ∫0
∴ bn =

2  1  
π
  1
=  x  − cos nx  −  − 2 sin nx  
π   n   n 0 

2 2
= − cos nπ = ( −1)
n +1

n n 
∴ u ( x , t ) = 2∑

( −1) n +1

cos ( nat ) sin ( nx )


n =1 n 

Example 4.54: An elastic string of length l which is fastened at its ends x = 0 and x = l is ­released
from its horizontal position (zero initial displacement) with initial velocity g(x) given as
l
g ( x ) = x; 0 ≤ x ≤
3
l
= 0; < x < l
3 
Find the displacement of the string at any instant of time.
Solution: Let y ( x, t ) be displacement at distant x from end x = 0 at time t and c 2 is diffusivity
∂2 y 2 ∂ y
2
of string then = c .
∂t 2 ∂x 2
As ends x = 0, x = l are fastened
∴ y (0, t ) = y (l , t ) = 0

∴ solution of wave equation is

 nπ ct nπ ct  nπ x
y ( x, t ) = ∑ bn cos + en sin  sin
n =1  l l  l 

nπ x
y ( x, 0 ) = 0 = ∑ bn sin
n =1 l 
⇒ bn = 0; n = 1, 2, 3,… 

nπ ct nπ x
∴ y ( x, t ) = ∑ en sin sin
n =1 l l 
∂y ∞
nπ c nπ ct nπ x
=∑ en cos sin
∂t n =1 l l l 
 ∂y  ∞
nπ c nπ x
∴   = g ( x ) = ∑ en sin
∂t t = 0 n =1 l l 
It is half range Fourier sine series of g(x) in [0, l].
nπ c nπ x
l
2
∴ en = ∫ g ( x ) sin dx
l l 0 l

l

2 3
nπ x
l ∫0
= x sin dx    (by definition of g (x))
l
l

2  l nπ x   l 2 nπ x   3
= x  − cos  −  − 2 2 sin 
l   nπ l   nπ l 0

2 l 2
nπ l nπ  2
= − cos + 2 2 sin 
l  3nπ 3 nπ 3


2l 2  1 nπ 1 nπ 
∴ en = sin − cos  
π 2 c  n3π 3 3n2 3

2l 2 ∞  1 nπ 1 nπ  nπ ct nπ x
∴ y ( x, t ) = ∑  sin 3 − 3n2 cos 3  sin l sin l
π 2 c n =1  n 3 π 
Example 4.55: Solve the vibrating string problem with
 (i)  y ( 0, t ) = 0, y ( L, t ) = 0
 L
 x ; 0 < x < 2
 (ii)  y ( x, 0 ) = 
 L − x; L < x < L
 2
(iii)  yt ( x, 0 ) = x ( x − L ) ; 0 < x < L
Solution: As
y ( 0, t ) = y ( L, t ) = 0

∂ y
2
∂ y
2
∴ wave equation 2 = c 2 2 where c 2 is diffusivity of string has solution
∂t ∂x

 nπ ct nπ ct  nπ x
y ( x, t ) = ∑ bn cos + en sin  sin L
n =1  L L  

nπ c  nπ ct nπ ct  nπ x
yt ( x, t ) = ∑  −bn sin L + en cos L  sin L
n =1 L   

nπ c nπ x
∴ yt ( x, 0 ) = x ( x − L ) = ∑ en sin
n =1 L L 

It is half range Fourier sine series of x (x – L) in 0 < x < L


L
nπ c nπ x

L
2
L0
(
en = ∫ x 2 − Lx sin
L
)
dx

2  2  L nπ x 
∴ en = 
nπ c 
(
x − Lx  −
 nπ
cos )
L 
 − (2 x − L )
L 
 L2 nπ x   L3 nπ x  

 n π sin  + 2  cos 
2 2
L  nπ
3 3
L   0

=
2  2 L3

nπ c  n3π 3
(( −1) − 1)
n


3
8L
∴ e2 n = 0, e2 n −1 = − ; n = 1, 2, 3,.......
(2n − 1)4 cπ 4 


nπ x L L
y ( x, 0 ) = ∑ bn sin ;0< x< , < x<L
n =1 L 2 2
It is half range Fourier sine series of y (x, 0) in 0 < x < L
nπ x
L
2
y ( x, 0 ) sin
L ∫0
∴ bn = dx
L 

2 nπ x nπ x 
L/2 L
=  ∫ x sin dx + ∫ ( L − x ) sin dx   (by definition of y(x, 0))
L 0
L L/2
L 

2    L
L/2
nπ x   L2 nπ x  
=  x  − cos  −  − 2 2 sin 
L    nπ L   nπ L  0 

nπ x   
L
  L nπ x   L2
+ ( L − x )  − cos +− sin  
  nπ L   n2 π 2 L  L / 2 

2  L2 nπ L2 nπ L2 nπ L2 nπ 
=  − cos + sin + cos + sin 
L  2nπ 2 nπ2 2
2 2nπ 2 nπ2 2
2

4L nπ
= sin
n 2π 2 2 
4 L ( −1)
n +1

∴ b2 n = 0, b2 n −1 = ; n = 1, 2, 3,…
π 2 ( 2n − 1)
2


4 L ∞  ( −1) ( 2n − 1) π ct ( 2n − 1) π ct  ( 2n − 1) π x
n +1
2 L2
∴ y ( x, t ) = ∑ 
π 2 n =1  ( 2n − 1)2
cos
L

cπ 2 ( 2n − 1)
4
sin
L 
sin
L
 
Example 4.56: Solve the differential equation ∂²u/∂t² = 4 ∂²u/∂x² subject to the conditions u = sin t at x = 0 and ∂u/∂x = sin t at x = 0.
Solution: Since boundary conditions at x = 0 are trigonometric functions, so solution of d­ ifferential
∂2u ∂2u
equation 2 = 4 2 is
∂t ∂x
u ( x, t ) = ( A cos px + B sin px ) ( E cos ( 2 pt ) + F sin ( 2 pt ) )

u ( 0, t ) = sin t = A  E cos ( 2 pt ) + F sin ( 2 pt ) 


1
∴ AE = 0, AF = 1, p = 
2
1
⇒ E = 0, AF = 1, p = 
2
x x
∴ u ( x, t ) = cos sin t + ( BF ) sin sin t
2 2 
∂u 1 1  1 x
= − sin  x  sin t + ( BF ) cos sin t
∂x 2 2  2 2 
 ∂u  1
∴   = sin t = ( BF ) sin t
∂x x = 0 2

∴ BF = 2 
x x
∴ u ( x, t ) = cos sin t + 2 sin sin t
2 2 
 x x
= sin t  cos + 2 sin 
 2 2

Example 4.57: Find the D’Alembert solution of a one dimensional wave equation. Use it to find
the deflection of a vibrating string of unit length having fixed ends with initial velocity zero and
initial deflection f ( x ) = k ( sin x − sin 2 x ).
Solution: One dimensional wave equation is ∂²y/∂t² = c² ∂²y/∂x².
In the theory, we have already found its general solution
y ( x, t ) = φ ( x + ct ) + ψ ( x − ct )

where φ and ψ are arbitrary functions
Initial deflection = y ( x, 0 ) = φ ( x ) + ψ ( x ) = f ( x ) = k ( sin x − sin 2 x ) (1)
∂y
= cφ ′ ( x + ct ) − cψ ′ ( x − ct ) 
∂t
 ∂y 
∴ initial velocity =   = c (φ ′ ( x ) −ψ ′ ( x ) ) = 0
 ∂t t = 0 
which on integration gives
φ ( x ) −ψ ( x ) = constant = k1 (2)

From (1) and (2)


k 1
φ ( x) = ( sin x − sin 2 x ) + k1
2 2 
k 1
ψ ( x ) = ( sin x − sin 2 x ) − k1
2 2 

∴ y(x, t) = φ(x + ct) + ψ(x − ct)
= (k/2)[sin(x + ct) − sin 2(x + ct)] + k₁/2 + (k/2)[sin(x − ct) − sin 2(x − ct)] − k₁/2
= (k/2)[{sin(x + ct) + sin(x − ct)} − {sin 2(x + ct) + sin 2(x − ct)}]
= k[sin x cos ct − sin 2x cos 2ct]
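The final expression is easy to verify with sympy (an added check, not part of the original example): it satisfies the wave equation, matches the prescribed initial deflection, and has zero initial velocity.

```python
# Added symbolic check of the final expression (assumes sympy is available).
import sympy as sp

x, t, c, k = sp.symbols('x t c k', positive=True)
y = k * (sp.sin(x) * sp.cos(c * t) - sp.sin(2 * x) * sp.cos(2 * c * t))

print(sp.simplify(sp.diff(y, t, 2) - c**2 * sp.diff(y, x, 2)))      # 0: wave equation holds
print(sp.simplify(y.subs(t, 0) - k * (sp.sin(x) - sp.sin(2 * x))))  # 0: initial deflection
print(sp.diff(y, t).subs(t, 0))                                     # 0: zero initial velocity
```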

Example 4.58: Use D'Alembert's solution to find the solution of the initial value problem defining the vibrations of an infinitely long elastic string
∂²u/∂t² = c² ∂²u/∂x², −∞ < x < ∞, t > 0
u(x, 0) = f(x), (∂u/∂t)(x, 0) = g(x)
where
(i)  f ( x ) = sin 2 x, g ( x ) = cos 2 x

(ii)  f ( x ) = a sin 2 π x, g ( x ) = 0

Solution:
∂ u
2
2 ∂ u
2
= c
∂t 2 ∂x 2 
By D’Alembert method of solution, its general solution is
u ( x, t ) = φ ( x + ct ) + ψ ( x − ct ) (1)
u ( x, 0 ) = φ ( x ) + ψ ( x ) = f ( x ) (2)
∂u
= cφ ′ ( x + ct ) − cψ ′ ( x − ct )
∂t 
∂u
∴ ( ) ( ( ) ( )) ( ) (3)
x , 0 = c φ ′ x − ψ ′ x = g x
∂t
(i)  f ( x ) = sin 2 x, g ( x ) = cos 2 x

∴ from (2) and (3)
φ ( x ) + ψ ( x ) = sin 2 x (4)
1
φ ′ ( x ) −ψ ′ ( x ) = cos 2 x
c 
which on integration gives
1
φ ( x ) −ψ ( x ) = sin 2 x + k (5)
2c
Solving (4) and (5) we have
1 1 1
φ ( x ) = 1 +  sin 2 x + k
2  2c  2 
1 1 1
ψ (x) = 1 −  sin 2 x − k
2  2c  2 

\ from (1)
1  1  1  1  1  1 
u ( x, t ) =  1 +  sin 2 ( x + ct ) + k  +  1 −  sin 2 ( x − ct ) − k 
 2  2c  2   2  2c  2 

1 1 
=  sin 2 ( x + ct ) + sin 2 ( x − ct ) + {sin 2 ( x + ct ) − sin 2 ( x − ct )}
2 2c 
1 1 
= 2 sin 2 x cos 2ct + cos 2 x sin 2ct 
2  c 
1
= sin 2 x cos 2ct + cos 2 x sin 2ct
2c 
(ii)  f ( x ) = a sin 2 π x, g ( x ) = 0

\ from (2) and (3)
φ ( x ) + ψ ( x ) = a sin 2 π x (6)

φ ′ ( x ) −ψ ′ ( x ) = 0

which on integration gives
φ ( x ) −ψ ( x ) = k (7)
Solving (6) and (7), we have
1 1
φ ( x) = a sin 2 π x + k
2 2 
1 1
ψ ( x) = a sin 2 π x − k
2 2 
∴from (1)
u ( x, t ) = φ ( x + ct ) + ψ ( x − ct )

1 1  1 1 
=  a sin 2 π ( x + ct ) + k  +  a sin 2 π ( x − ct ) − k 
2 2  2 2 
a
= sin 2 π ( x + ct ) + sin 2 π ( x − ct ) 
2 
a 1 − cos 2π ( x + ct ) 1 − cos 2π ( x − ct ) 
=  + 
2 2 2  
a 1 
= 1 − {cos 2π ( x + ct ) + cos 2π ( x − ct )}
2 2 
a
= [1 − cos 2π x cos 2π ct ]
2 

4.12  One Dimensional Heat Flow


Consider the flow of heat by conduction in a uniform bar. We assume that the sides of the bar are
insulated and the loss of heat from the sides by conduction or radiation is negligible.
Take one end of the bar as origin and x-axis along the direction of flow of heat. Let u ( x, t )
be temperature at a point distance x from origin at time t and the temperature at all points of a
cross section is same.

Figure 4.3: The bar OA with heat Q flowing in at the cross-section at x and out at the cross-section at x + δx.

We know that in a body heat flows in a direction of decreasing temperature. Physical experiments
show that the rate of flow is proportional to ∂u/∂x and to the area of cross section A.
Thus, the quantity of heat Q₁ flowing into the section at distance x is −KA (∂u/∂x)_x per second; the negative sign indicates that u decreases as x increases, and K is a constant called the thermal conductivity of the body.
The quantity of heat Q₂ flowing out at the section at distance x + δx is −KA (∂u/∂x)_{x+δx} per second.
Hence, the amount of heat retained by the bar of thickness δx at distance x is
Q₁ − Q₂ = KA [(∂u/∂x)_{x+δx} − (∂u/∂x)_x] per second
But the rate of increase of heat in this bar of thickness δx = σρAδx (∂u/∂t), where σ is specific heat and ρ is density of material.
∴ σρAδx (∂u/∂t) = KA [(∂u/∂x)_{x+δx} − (∂u/∂x)_x]
∴ ∂u/∂t = (K/σρ) [(∂u/∂x)_{x+δx} − (∂u/∂x)_x]/δx
Taking limit as δx → 0,
∂u/∂t = c² ∂²u/∂x²  (4.51)

where c² = K/(σρ) is called thermal diffusivity.
Equation (4.51) is called heat equation in one dimension.
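Equation (4.51) also lends itself to a direct numerical treatment. The sketch below is an added illustration (the explicit FTCS scheme and all parameter values are assumptions, not from the text): it marches u_i ← u_i + r(u_{i+1} − 2u_i + u_{i−1}) with r = c²Δt/Δx², which is stable for r ≤ 1/2, and compares the result with the one-term separation-of-variables solution derived in the next article.

```python
# Illustrative explicit finite-difference march for u_t = c^2 u_xx (not from the text).
import numpy as np

c2, L, nx = 1.0, 1.0, 51            # assumed diffusivity, bar length and grid size
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / c2               # keeps r = c2*dt/dx**2 = 0.4 < 0.5 (stability limit)
r = c2 * dt / dx**2

u = np.sin(np.pi * x / L)           # sample initial temperature; ends held at 0
for _ in range(500):
    u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])

t_final = 500 * dt
exact = np.sin(np.pi * x / L) * np.exp(-np.pi**2 * c2 * t_final / L**2)  # one-term series solution
print(np.max(np.abs(u - exact)))    # small discretization error
```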

4.12.1  Solution of the Heat Equation


Heat equation is
∂u ∂2u
= c2 2
∂t ∂x 
Let u ( x, t ) = X ( x ) T ( t )

∂u ∂2u
∴ = XT ′ and = X ′′ T
∂t ∂x 2 
∴ differential equation becomes
XT ′ = c 2 X ′′ T 
X ′′ 1 T ′
∴ = 2
X c T 
L.H.S. is function of x and R.H.S. is function of t and hence both are constant say λ.
If λ = 0, then X″ = 0, T′ = 0
∴ X = Ax + B, T = C
∴ u(x, t) = C(Ax + B) = A₁x + B₁
If λ > 0, i.e., λ = p², p > 0, then X″ − p²X = 0, T′ = c²p²T
∴ X = Ae^{px} + Be^{−px}, T = C e^{c²p²t}
∴ u(x, t) = (Ae^{px} + Be^{−px}) C e^{p²c²t} = (A₁e^{px} + B₁e^{−px}) e^{p²c²t}
Similarly, if λ < 0, i.e., λ = −p², p > 0, then the solution is u(x, t) = e^{−p²c²t}(A₁ cos px + B₁ sin px).

Of these solutions, we are to choose solutions satisfying initial and boundary conditions.
∂2u ∂u
The solution Ax + B is the solution of 2 = 0 and hence = 0 in heat equation.
∂x ∂t
Thus, solution Ax + B is a steady-state solution.
Case I:  If u ( 0, t ) = u ( l , t ) = 0 then there is no steady-state solution and u ( x, t ) decreases as time
increases and hence in this case solution will be
u ( x, t ) = e − p c t ( A1 cos px + B1 sin px )
2 2


u (0, t ) = e − p c t A1 = 0
2 2


⇒ A1 = 0 
u ( x, t ) = e − p c t B1 sin px
2 2



u (l , t ) = e − p c t B1 sin pl = 0 
2 2


⇒ p= ; n ∈N 
l
− n2π 2 c 2 t
nπ x
∴ u ( x, t ) = bn sin e l ; n∈ N
2

l 
By principle of superposition
− n2π 2 c 2 t

nπ x
u ( x, t ) = ∑ bn sin
2
e l
n =1 l 
Case II:  If either u ( 0, t ) or u ( l , t ) or both are non-zero and given, then u ( x, t ) will consist of
steady-state solution us ( x ) = Ax + B and transient-state solution ut ( x, t ).

∴ u ( x, t ) = us ( x ) + ut ( x, t )
 − n2π 2 c 2 t

nπ x
= Ax + B + ∑ bn sin e l2

n =1 l
Case III:  If at least one of y ( 0, t ) and y ( l , t ) is not given, then
u ( x, t ) = Ax + B + ( A1 cos px + B1 sin px ) e − p c t
2 2


Example 4.59: A rod of length l with insulated sides is initially at a uniform temperature u0.
Its ends are suddenly cooled to 0 o C and are kept at that temperature. Prove that the tempera-
c 2π 2 n2
nπ x − l 2 t

ture function u ( x, t ) is given by u ( x, t ) = ∑ bn sin e where bn is determined from
n =1 l
u ( x, 0 ) = u0 . Find the value of bn.
Solution:
∂ 2 u ∂u
The temperature function u ( x, t ) satisfies heat equation c 2 2 = under the conditions
∂x ∂t
u ( x, 0 ) = u0 , u (0, t ) = 0, u (l , t ) = 0.
Let u ( x, t ) = X ( x ) T ( t )

∂2u ∂u
∴ = X ′′ T , = XT ′;
∂x 2 ∂t 
where dashes denote derivatives with respect to their variables.
∴ differential equation becomes
c 2 X ′′ T = XT ′, X ( x ) T ( 0 ) = u0 , X ( 0 ) = 0, X ( l ) = 0

c 2 X ′′ T ′
∴ =
X T 
L.H.S. is function of x and R.H.S. is function of t and hence both are constant.
c 2 X ′′ T ′
∴ = =λ
X T 
∴ c 2 X ′′ − λ X = 0, T ′ = λT 

Case I:  λ = 0, then c 2 X ′′ = 0 

∴ X ′′ = 0 
⇒ X = Ax + B 
X (0) = B = 0

then X (l ) = Al = 0 ⇒ A= 0
  
∴ X = 0 for all t which is not valid

Case II:  λ > 0. Let λ = p 2 , p > 0,


then c 2 X ′′ − λ X = 0 
c 2 X ′′ − p 2 X = 0 

−p p
X ( x ) = Ae
x x
Its solution is c
+ Be c

X (0 ) = A + B = 0 ∴ B = −A 
  
 −px p
x
∴ X (x) = A e c − e c 
 

 −cp l p
l
X (l ) = A  e − e c  = 0 ⇒ A = 0
 
∴ A = B = 0 which is invalid   

Case III:  λ < 0. Let λ = − p 2 ; p > 0, 


then solutions of c 2 X ′′ + p 2 X = 0 and T ′ = − p 2T are
px px 2
X = A cos + B sin , T = C e− p t
c c 
 px px 
u ( x, t ) = e − p t
2
∴ c1 cos c + c2 sin c    where AC = c1 , BC = c2
  
u (0, t ) = e − p t c1 = 0    ⇒ c1 = 0
2

px
∴ u ( x, t ) = c2 e − p t sin
2

c 
pl
u ( l , t ) = c2 e − p t sin
2
=0
c 
pl
∴ = nπ ; n = 1, 2, 3…
c 
cnπ
∴ p= ; n = 1, 2, 3…
l 

c 2 n2 π 2
− t nπ x
∴ u(x, t) = bn e l2
sin ; n = 1, 2, 3…
l

\ By principle of superposition, solution is


c 2 n2π 2

nπ x − t
u ( x, t ) = ∑ bn sin e l2

n =1 l 

nπ x
where bn are determined from u ( x, 0 ) = u0 = ∑ bn sin ; 0≤x≤l
n =1 l
This is Fourier half range sine series of u0 in [ 0, l ], hence bn are Fourier coefficients given by

nπ x nπ x
l
2 2u0 l
bn = ∫ u ( x , 0 ) sin dx = ∫ sin dx
l 0 l l 0 l


( ) (∵ cos nπ = ( −1) )
l
2u0 l  nπ x  2u0
 − cos l  = nπ 1 − ( −1) 
n n
= ⋅
l nπ  0
4u0
∴ b2 n = 0, b2 n −1 = ; n = 1, 2, 3, … 
( 2n − 1) π
∴ solution is
u(x, t) = (4u₀/π) Σ_{n=1}^{∞} [1/(2n−1)] sin((2n−1)πx/l) e^{−c²(2n−1)²π²t/l²}
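As an added numerical illustration (not in the original), the coefficients bₙ = 4u₀/((2n−1)π) can be checked at t = 0: away from the two ends the partial sums of the series approach the uniform initial temperature u₀. The values of u₀ and l below are illustrative.

```python
# Added partial-sum check of b_n = 4*u0/((2n-1)*pi) at t = 0 (illustrative values).
import numpy as np

u0, l, N = 100.0, 1.0, 2000
k = 2 * np.arange(1, N + 1) - 1                      # odd harmonics only
x = np.array([0.25 * l, 0.5 * l, 0.75 * l])
partial = (4 * u0 / np.pi) * np.sum(np.sin(k[:, None] * np.pi * x / l) / k[:, None], axis=0)
print(partial)                                       # each value close to u0 = 100
```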


Example 4.60: The equation for heat conduction along a bar of length l, neglecting radiation, is ∂u/∂t = a² ∂²u/∂x². Find an expression for u(x, t) if the ends of the bar are maintained at zero temperature and if initially the temperature is T at the centre of the bar and falls uniformly to zero at its two ends.
Solution: Ends of the bar are maintained at zero temperature and hence there is no steady-state
solution
n2 π 2 a 2 t

nπ x − l 2
∴ u ( x, t ) = ∑ bn sin e
n =1 l 
Initially, u ( x, 0 ) = Ax + B

u ( 0, 0 ) = B = 0

l  l l
∴ u  , 0 = A + B = A = T
2  2 2 
2T
∴ A=
l 

2T l
∴ u ( x, 0 ) = x; 0 ≤ x ≤ 
l 2
l
for ≤ x ≤ l,
2 
l 
u ,0 = T
2 
u ( l, 0 ) = 0

l
∴ A + B = T 
2
Al + B = 0 
−2T
∴ A= , B = 2T
l 
2T l
∴ u ( x, 0 ) = x; 0 ≤ x ≤
l 2
−2T l
= x + 2T ; ≤ x ≤ l
l 2 

nπ x
u ( x, 0 ) = ∑ bn sin
n =1 l 
It is half range Fourier sine series of u ( x, 0 ) in [ 0, l ] .
nπ x
l
2
u ( x, 0 ) sin
l ∫0
∴ bn = dx
l

2  2T nπ x nπ x 
l/2 l
2T
= ∫ x sin dx + ∫ ( l − x ) sin dx 
l 0 l l l l 
l/2

   l l/2
4T nπ x   l 2 nπ x  
=  x  − cos  −  − 2 2 sin  
l2    nπ l   nπ l  0

nπ x   
l
  l nπ x   l 2
+ ( l − x )  − cos +  − sin  
  nπ l   n2π 2 l  l / 2 

4T  l 2 nπ l2 nπ l2 nπ l2 nπ 
=  − cos + sin + cos + sin  
l  2nπ
2
2 nπ2 2
2 2nπ 2 nπ2 2
2 
8T nπ
= sin
nπ2 2
2 
8T ( −1)
n +1

∴ b2 n = 0, b2 n −1 = ; n = 1, 2, 3, …
π 2 ( 2n − 1)
2



∴ solution is
( −1) ( 2n − 1) π x − ( 2 n−1l) π a t
2
n +1 2 2

8T
u ( x, t ) = 2 ∑
2
sin e
π n =1 ( 2n − 1)
2
l

Example 4.61: (a) An insulated rod of length l has its ends A and B maintained at 0 o and 100 o C
respectively until steady state conditions prevail. If B is suddenly reduced to 0 o C and maintained
at 0 o C, find the temperature at a distance x from A at time t.
(b) Find also the temperature if the change consists of raising the temperature of A to 20 o C and
reducing that of B to 80 o C.
Solution: (a) Let u(x, t) be the temperature at distance x from A at time t; then ∂u/∂t = c² ∂²u/∂x², where c² is the diffusivity of the rod. As

u ( 0, t ) = u ( l , t ) = 0

∴ solution is
n2π 2 c 2 t

nπ x −
u ( x, t ) = ∑ bn sin e l2

n =1 l 
u ( x, 0 ) = Ax + B; u ( 0, 0 ) = 0, u ( l , 0 ) = 100

100
∴ B = 0, A =
l 
100 x
∴ u ( x, 0 ) =
l 
100 x ∞ nπ x
u ( x, 0 ) = = ∑ bn sin
l n =1 l 
100 x
It is half range Fourier sine series of in [ 0, l ] .
l
nπ x
l
2 100 x
l ∫0 l
∴ bn = sin dx
l

l
200   l nπ x   l 2 nπ x  
= 2 x  − cos −− sin 
l   nπ l   n2π 2 l 0

200  l 2 n 200
( −1)  = ( −1)
n +1
= 2 

l  nπ  nπ 
∴ solution is
200 ∞ ( −1)
n +1 n2π 2 c 2 t
nπ x −
u ( x, t ) = ∑
π n =1 n
sin
l
e l2


(b)  Here, temperatures at end points are not zero.
∴ u ( x, t ) = us ( x ) + ut ( x, t )


where us ( x ) = Ax + B 

us ( 0 ) = B = 20

us ( l ) = Al + B = 80

60
∴ A= , B = 20
l 
60 x
∴ us ( x ) = + 20
l 
n2 π 2 c 2 t
60 x ∞
nπ x −
∴ u ( x, t ) = + 20 + ∑ bn sin e l2
l n =1 l 
100 x
u ( x, 0 ) =    (from part (a))
l
100 x 60 x ∞
nπ x
∴ = + 20 + ∑ bn sin
l l n =1 l 
40 x ∞
nπ x
∴ − 20 = ∑ bn sin
l n =1 l 
40 x
It is half range Fourier sine series of − 20 in [ 0, l ] .
l
nπ x
l
2  40 x 
∴ bn = ∫ 
l 0 l
− 20 sin
 l
dx

l
40   l nπ x   l 2 nπ x  
= 2  ( 2 x − l ) −
 nπ cos  − 2  − 2 2 sin 
l   l   nπ l 0

40  l 2 l2 
( )
n
=  − −1 − 
l 2  nπ nπ 


=−
40

(
1 + ( −1)
n


)
40
∴ b2 n = − , b2 n −1 = 0; n = 1, 2, 3,…
nπ 
∴ solution is
4 n2π 2 c 2 t
60 x 40 ∞ 1 2nπ x −
u ( x, t ) = + 20 − ∑ sin e l2
l π n =1 n l 

Example 4.62: A bar of length l with insulated sides is initially at 0°C temperature throughout. The end x = 0 is kept at 0°C for all time and heat is suddenly applied such that ∂u/∂x = 10 at x = l for all time. Find the temperature function u(x, t).
Solution: Let u ( x, t ) be temperature at distance x from end at 0 o C at time t
∂u ∂2u
then, = c2 2
∂t ∂x 
where c 2 is diffusivity of bar.
As both ends are not at 0 o C
∴ u ( x, t ) = us ( x ) + ut ( x, t )

us ( x ) = Ax + B

us ( 0 ) = B = 0

 ∂us 
 ∂x  = A = 10
  x =l 
∴ us ( x ) = 10 x

u ( x, t ) = 10 x + (c1 cos px + c2 sin px ) e − p c t
2 2


(∵ temperature at end x = l is not given)
u ( 0, t ) = c1e − p c t = 0
2 2


⇒ c1 = 0

u ( x, t ) = 10 x + c2 sin px e − p c t
2 2


∂u 2 2
= 10 + pc2 cos px e − p c t
∂x 
 ∂u 
 ∂x  = 10 = 10 + pc2 cos ( pl ) e
− p2 c2 t

 ( l , t )

cos pl = 0
∴ 
p=
(2n − 1) π ; n ∈ N
⇒ 2l 

∴ By principle of superposition
( 2 n −1)2 π 2 c2 t

( 2n − 1) π x −
u ( x, t ) = 10 x + ∑ cn sin e 4l2

n =1 2l 
u ( x , 0 ) = 0, for all x



0 = 10 x + ∑ cn sin
(2n − 1) π x 
n =1 2l

( 2n − 1) π x
or −10 x = ∑ cn sin
n =1  2l
It is half range Fourier sine series of −10x in [0, l].
2
l
(2n − 1) π x dx
∴ cn = ∫
l 0
( −10 x ) sin
2l

l
−20   2l ( 2n − 1) π x   4l 2 ( 2n − 1) π x  
 = x  − cos  − − sin
l   ( 2n − 1) π 2l   2n − 1 π 2
  ( )
2
2l 
 0

20  4l 2  π 
=−  sin  nπ −  
l  ( 2n − 1) π
2 2
 2 
 
80l
( −1)
n
=
( 2n − 1) π
2 2

∴ solution is
80l ∞ ( −1) ( 2n − 1) π x − ( 2 n −1) π 2 c 2 t
2
n

u ( x, t ) = 10 x + 2 ∑ sin e 4l2
π n =1 ( 2n − 1)2 2l


Example 4.63: A bar 100 cm long with insulated sides, has its ends kept at 0 o C and 100 o C until
steady-state conditions prevail. The two ends are then suddenly insulated and kept so. Find the
temperature distribution.
Solution: Let AB be the rod of length 100 cm and u ( x, t ) be temperature at distance x from A at
time t.
Heat equation is c² ∂²u/∂x² = ∂u/∂t, where c² is diffusivity of bar.
Since two ends are insulated
 ∂u   ∂u 
∴   =  =0
∂x x = 0  ∂x  x =100

a0
∴ steady-state temp. is a constant say
2
∴ solution is
a
u ( x, t ) = 0 + ( A1 cos px + B1 sin px ) e − p c t   (
2 2
∵ temperatures at end
2 points are not given)
∂u
= ( − pA1 sin px + pB1 cos px ) e − p c t
2 2
Now,
∂x 

 ∂u  2 2
  = pB1e − p c t = 0 
∂x x = 0

⇒ B1 = 0
 ∂u 
and  ∂x  = − pA1 sin pl e
− p2 c2 t
= 0     (∵ B1 = 0 )
  x =l 

sin pl = 0 

⇒ p= ; n = 1, 2, 3, … 
l
n2π 2 c 2 t
a nπ x − l 2
∴ u ( x, t ) = 0 + an cos e ; n = 1, 2, 3, …
2 l 
are solutions.
∴ By principle of superposition, solution is
n2π 2 c 2 t
a ∞
nπ x −
u ( x, t ) = 0 + ∑ an cos e l2
2 n =1 l 
Now, u (0, 0 ) = 0, u (l , 0 ) = 100

100
∴ u ( x, 0 ) = x
l 
100 x a0 ∞ nπ x
∴ = + ∑ an cos
l 2 n =1 l 
100 x
It is half range Fourier cosine series of in [ 0, l ] .
l
l
200  x 2 
l
2 100 x
l ∫0 l
∴ a0 = dx = = 100
l 2  2  0

nπ x
l
2 100 x
an = ∫ cos dx
l 0 l l

l
200   l nπ x   l 2 nπ x  
= x
2  
sin  −  − 2 2 cos 
l   nπ l   nπ l 0

200 l 2  200
( −1) − 1 = 2 2 ( −1) − 1
n n
= ⋅
l 2 n 2π 2  nπ 
400
∴ a2 n = 0, a2 n −1 = − ; n = 1, 2, 3,...
π ( 2n − 1)
2 2

( 2n − 1) π x − ( 2 n−1l)2 π c t
2 2 2

400 ∞ 1
∴ u ( x, t ) = 50 − 2 ∑ cos e
π n =1 ( 2n − 1)2 l


But l = 100 cm
∴ u(x, t) = 50 − (400/π²) Σ_{n=1}^{∞} [1/(2n−1)²] cos((2n−1)πx/100) e^{−(2n−1)²π²c²t/10000}
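As a numerical aside (not in the text; c² = 1 is an assumed illustrative value), the answer can be checked at its two time extremes: at t = 0 a partial sum reproduces the initial linear profile u(x, 0) = x, and as t → ∞ only the constant 50 survives, the mean of the initial distribution, as expected for a bar whose ends are insulated.

```python
# Added partial-sum check of the answer (c^2 = 1 taken for illustration).
import numpy as np

c2, N = 1.0, 4000
k = 2 * np.arange(1, N + 1) - 1

def u(x, t):
    series = np.sum(np.cos(k * np.pi * x / 100.0)
                    * np.exp(-k**2 * np.pi**2 * c2 * t / 10000.0) / k**2)
    return 50.0 - (400.0 / np.pi**2) * series

print(u(25.0, 0.0), u(75.0, 0.0))   # ~ 25 and ~ 75: the initial profile u(x, 0) = x
print(u(25.0, 1e6))                 # 50: the mean temperature, since both ends are insulated
```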

Example 4.64: Solve ut = c 2 uxx when
  (i)  u ≠ ∞ as t → ∞
 (ii)  ux = 0 when x = 0 for all t
(iii)  u = 0 when x = l for all t
  (iv)  u = u0 = constant when t = 0 for all 0 < x < l.
Solution: Here temperature at both ends is not zero for all t and is also not given at end x = 0.
∴ solution is
u ( x, t ) = ax + b + ( c1 cos px + c2 sin px ) e − p c t
2 2


For steady-state solution
us ( x ) = ax + b

∂us
= a = 0  for x = 0
∂x
∴ a = 0
∴ us(x) = b
Also, us ( l ) = 0

⇒ b = 0
∴ ax + b = 0 
u ( x, t ) = (c1 cos px + c2 sin px ) e − p c t
2 2


∂u
= ( − pc1 sin px + pc2 cos px ) e − p c t
2 2

∂x 
 ∂u  − p2 c2 t
 ∂x  = pc2 e = 0 for all t
  x =0 
⇒ c2 = 0
u ( x, t ) = c1 cos px e − p c t
2 2


∴ u (l , t ) = c1 cos pl e − p2 c2 t
= 0 for all t

( 2n − 1) π
∴ p= ; n = 1, 2, 3,…
2l 
( 2 n −1)2 π 2 c2 t
( 2n − 1) π x −
∴ u ( x, t ) = cn cos e 4l2
; n = 1, 2, … 
2l

By principle of superposition
( 2 n −1)2 π 2 c2 t

( 2n − 1) π x −
u ( x, t ) = ∑ cn cos e 4l2

n =1 2l 

( 2n − 1) π x
u ( x, 0 ) = u0 = ∑ cn cos
n =1 2l 
It is half range Fourier cosine series of u0 in [0, l ].

2
l
(2n − 1) π x dx
∴ cn = ∫
l 0
u0 cos
2l

 ( 2n − 1) π x 
l
2u0 2l
= ⋅ sin 
l ( 2n − 1) π  2l 0 
4u0
( −1)
n +1
=
( 2n − 1) π 
∴ solution is
( 2 n −1)2 π 2 c2 t
( −1) ( 2n − 1) π x
n +1

4u −
u ( x, t ) = 0
π
∑ 2n − 1
cos
2l
e 4l2

n =1 

4.13  Transmission Line Equations


Figure 4.4: Element PQ of the cable at distance x, of length δx, with series resistance R δx and inductance L δx, and leakance G δx and capacitance C δx to the ground.

Consider the flow of electricity in an insulated cable. Let P, Q be points on the cable at distance
x and x + δ x from starting point O. Let R and L respectively be resistance and inductance estab-
lished per unit length of the cable. Let C and G be capacitance and leakance respectively to the
ground per unit length. We assume that R, L, C, G are constants.
Let I, I + δ I be currents and V, V + δ V be potentials at P and Q respectively at time t. Here,
δ I and δ V will be negative.

Potential drop across segment PQ = potential drop due to resistance + potential drop due to
inductance
∂I
∴ −δV = ( Rδ x ) I + ( Lδ x ) (4.52)
∂t
Decrease in current in crossing segment PQ = d ecrease in current due to leakance
+ decrease in current due to capacitance
∂V
∴ −δ I = (Gδ x )V + (C δ x ) (4.53)
∂t
Divide (4.52) and (4.53) by δ x and take limit as δ x → 0
∂V ∂I
− = RI + L (4.54)
∂x ∂t
∂I ∂V
− = GV + C (4.55)
∂x ∂t
We shall find the partial differential equations eliminating V and I.
Differentiate both sides of (4.54) partially w.r.t. x
∂ 2V ∂I ∂2 I ∂I ∂2 I
− = R + L = R + L
∂x 2 ∂x ∂x∂t ∂x ∂t ∂x 
 ∂  ∂I
= R+ L 
 ∂t  ∂x 

 ∂  ∂V 
=  R + L   −GV − C  (from (4.55))
 ∂t   ∂t 
∂ 2V ∂V ∂V ∂ 2V
∴ = RGV + RC + LG + LC 2
∂x 2
∂t ∂t ∂t 
∂ 2V ∂ 2V ∂V
or = LC 2 + ( RC + LG ) + RGV (4.56)
∂x 2
∂t ∂t
Similarly differentiating both side of (4.55) partially w.r.t. x and using (4.54) will give
∂2 I ∂2 I ∂I
= LC + ( RC + LG ) + RGI (4.57)
∂x 2
∂t 2
∂t
Equations (4.56) and (4.57) are called telephone equations.
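The elimination leading to (4.56) can also be reproduced mechanically with sympy (an added check, not part of the text): substitute ∂I/∂x from (4.55) into the x-derivative of (4.54).

```python
# Added sympy check: eliminating I from (4.54)-(4.55) reproduces (4.56).
import sympy as sp

x, t = sp.symbols('x t')
R, L, C, G = sp.symbols('R L C G', positive=True)
V = sp.Function('V')(x, t)

# From (4.55): I_x = -(G V + C V_t).  Differentiating (4.54) w.r.t. x gives
# -V_xx = R I_x + L (I_x)_t, since mixed partial derivatives commute.
I_x = -(G * V + C * sp.diff(V, t))
V_xx = -(R * I_x + L * sp.diff(I_x, t))

rhs = L * C * sp.diff(V, t, 2) + (R * C + L * G) * sp.diff(V, t) + R * G * V
print(sp.simplify(V_xx - rhs))      # 0, i.e., V_xx = LC V_tt + (RC + LG) V_t + RG V
```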

Remark 4.5:
(i) If L and G are negligible as in telegraph lines then (4.56) and (4.57) become
∂²V/∂x² = RC ∂V/∂t and ∂²I/∂x² = RC ∂I/∂t
These are telegraph equations. These are similar to one dimensional heat flow.

(ii) If R and G are negligible as in radio then (4.56) and (4.57) become
∂²V/∂x² = LC ∂²V/∂t² and ∂²I/∂x² = LC ∂²I/∂t²
These are called radio equations. These are similar to one dimensional wave equations.

Example 4.65: A transmission line 1000 km long is initially under steady-state conditions with
potential 1300 volts at the sending end ( x = 0 ) and 1200 volts at the receiving end ( x = 1000 ) .
The terminal end of the line is suddenly grounded, but the potential at the source is kept at
1300 volts. Assuming that the inductance and leakance to be negligible, find the potential V ( x, t ).
Solution: As the inductance and leakance are negligible, the line is a telegraph line.
The equation of telegraph line is
∂ 2V ∂V
= RC (1)
∂x 2
∂t
In steady state,
∂V
=0
∂t

∂ 2V
\ =0
∂x 2
\ steady-state voltage is V = Ax + B
But V = 1300   when x = 0 
V = 1200   when x = 1000 
x
\ V = 1300 −
10 
x
\ V ( x, 0 ) = 1300 −
10 
After grounding the terminal end x = 1000 steady-state voltage at x = 0 is 1300 volts and
steady-state voltage at x = 1000 is zero.
\ steady-state voltage after grounding
Vs ( x ) = 1300 − 1.3 x

Now, V ( x, t ) is sum of steady-state voltage and transient-state voltage.
\ solution of (1) considering as one dimensional heat equation is
n2π 2 t
∞ − nπ x
V ( x, t ) = 1300 − 1.3 x + ∑ En e 106 RC
sin  (∵ l = 1000 km )
n =1 1000
x ∞
nπ x
\ 1300 − = V ( x, 0 ) = 1300 − 1.3 x + ∑ En sin 
10 n =1 1000


nπ x
\ 1.2 x = ∑ En sin ; 0 ≤ x ≤ 1000 
n =1 1000

It is Fourier half range sine series in [ 0,1000 ] .


2 1000 nπ x
1000 ∫0
\ En = 1.2 x sin dx
1000 
1000
24   1000 nπ x   −106 nπ x  
= 4 x  − cos  − (1)  2 2 sin 
10   nπ 1000  n π 1000   0

24 10 6
n +1  2400
( −1)  = ( −1)
n +1
= 
10 4  nπ  nπ

∴ solution is
V(x, t) = 1300 − (13/10)x + (2400/π) Σ_{n=1}^{∞} [(−1)^{n+1}/n] e^{−n²π²t/(10⁶ RC)} sin(nπx/1000)
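A quick numerical check of the coefficients (added here, not in the book): at t = 0 the series part must reproduce 1.2x, so V(x, 0) returns 1300 − x/10. RC only scales the time decay and is set to an illustrative value below.

```python
# Added check of the coefficients: at t = 0 the series must give back 1300 - x/10.
import numpy as np

RC, N = 1.0, 20000                  # RC only scales the decay; value here is illustrative
n = np.arange(1, N + 1)

def V(x, t):
    series = np.sum((-1.0) ** (n + 1) / n
                    * np.exp(-n**2 * np.pi**2 * t / (1e6 * RC))
                    * np.sin(n * np.pi * x / 1000.0))
    return 1300.0 - 1.3 * x + (2400.0 / np.pi) * series

for x in (250.0, 500.0, 750.0):
    print(x, round(V(x, 0.0), 2), 1300.0 - x / 10.0)   # the last two values agree closely
```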

Example 4.66: Neglecting R and G, find the e.m.f. V(x, t) in a line of length l, t seconds after the ends were suddenly grounded, given that i(x, 0) = i₀ and V(x, 0) = e₁ sin(πx/l) + e₅ sin(5πx/l).
Solution: As R and G are neglected, we use the radio equation ∂²V/∂x² = LC ∂²V/∂t².
Let V = X ( x )T (t ) .

\ differential equation becomes
X ′′ T = LCXT ′′ 
X ′′ LCT ′′
\
X
=
T
= − p 2 (say )  (∵ V ( 0, t ) = V ( l , t ) = 0 )
\ X ′′ + p 2 X = 0, LCT ′′ + p 2T = 0 

Solution of X ′′ + p 2 X = 0 is
X = c1 cos px + c2 sin px 
X ( 0 ) = c1 = 0

\ X = c2 sin px 
X ( l ) = c2 sin pl = 0


⇒ p= ; n = 1, 2,... 
l
nπ x
\ X = cn sin ; n = 1, 2, 3... 
l

Solution of LCT ′′ + p 2T = 0 is
pt pt
T = b cos + e sin
LC LC 
nπ t nπ t
= bn cos + en sin ; n = 1, 2,...
l LC l LC 
By principle of superposition, general solution is

 nπ t nπ t  nπ x
V ( x, t ) = ∑  Bn cos + En sin  sin (1)
n =1  l LC l LC  l 
Now, i ( x, 0 ) = i0

 ∂i 
\   = 0
∂x t = 0

∂i ∂V
But = −C  (from (4.55))
∂x ∂t
 ∂V 
\   =0
∂t  t = 0

\ from (1)
 ∂V  ∞
nπ nπ x
0=  = ∑ En sin
 ∂t t = 0 n =1 l LC l

\ En = 0; n = 1, 2, 3,… 

nπ t nπ x
\ V ( x, t ) = ∑ Bn cos sin (2)
n =1 l LC l
πx 5π x ∞
nπ x
\ e1 sin + e5 sin = V ( x, 0 ) = ∑ Bn sin
l l n =1 l 
\ B1 = e1, B5 = e5 ; Bn = 0; n ∈ N , n ≠ 1, 5 

∴ from (2),
V(x, t) = e₁ cos(πt/(l√(LC))) sin(πx/l) + e₅ cos(5πt/(l√(LC))) sin(5πx/l)
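This answer too can be verified symbolically (an added check, assuming sympy is available): it satisfies the radio equation, vanishes at both grounded ends, and reproduces the given V(x, 0).

```python
# Added sympy check: the answer satisfies the radio equation and the grounded-end conditions.
import sympy as sp

x, t, l, L, C, e1, e5 = sp.symbols('x t l L C e1 e5', positive=True)
w = sp.pi / (l * sp.sqrt(L * C))
V = e1 * sp.cos(w * t) * sp.sin(sp.pi * x / l) + e5 * sp.cos(5 * w * t) * sp.sin(5 * sp.pi * x / l)

print(sp.simplify(sp.diff(V, x, 2) - L * C * sp.diff(V, t, 2)))   # 0: V_xx = LC V_tt
print(V.subs(x, 0), sp.simplify(V.subs(x, l)))                    # 0 0: both ends grounded
print(sp.simplify(V.subs(t, 0) - (e1 * sp.sin(sp.pi * x / l)
                                  + e5 * sp.sin(5 * sp.pi * x / l))))  # 0: matches V(x, 0)
```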

Exercise 4.7

1. Classify the following partial differential ∂2u ∂2u


equations (ii)  + = f ( x, y )
∂x 2 ∂y 2
∂2u ∂2u ∂ 2 u ∂u
 (i)  + =0 (iii)  c 2 2 =
∂x 2 ∂y 2 ∂x ∂t

∂2u ∂2u of the ends are changed to 40°C and


 (iv)  c 2 = 60°C, respectively. Find the temperature
∂x 2 ∂t 2
distribution in the rod at time t.
 (v)  4 ( x + 1) z xx − 4 ( x + 2) z xy
10. Find the temperature in a thin metal rod
+ ( x + 3) z yy = 0 of length l, with both the ends insulated
and with initial temperature in the rod
 (vi)  ( x − 2) zxx + 4 ( x + 1) zxy sin (π x / l ).
− 4 x z yy + z x = cos x ∂u ∂2u
11. Solve = k 2 , 0 < x < l , t > 0 with
∂u ∂u ∂t ∂x
2. Solve the equation −2 = u, u(x, 0) the boundary conditions ux(0, t) = 0,
∂x ∂y
ux(l, t) = 0, t ≥ 0 and the initial condition
= 3e −5 x + 2e −3 x by the method of separa-
u(x, 0) = f (x), 0 ≤ x ≤ l.
tion of variables.
12. Find the temperature u ( x, t ) in a homog-
3. Solve the following equation by the enous bar of heat conducting material of
method of separation of variables: length l cm with its ends kept at zero tem-
∂u ∂u perature and initial temperature given by
=4 where u ( 0, y ) = 8e −3 y
∂x ∂y ax ( l − x ) / l 2.
∂2 z ∂z ∂z 13. A bar 10 cm long with insulated sides has
4. Solve the equation −2 + =0
∂x 2
∂x ∂y its ends A and B maintained at tempera-
by the method of separation of variables. ture 50°C and 100°C respectively, until
5. Solve one dimensional heat equation steady-state condition prevails. The tem-
∂u ∂2u perature at A is suddenly raised to 90°C
= k 2 , x ∈ [0, l ] with initial condi-
∂t ∂x and at the same time lowered to 60°C at
tion u ( x, 0 ) = f ( x ) and the boundary B. Find the temperature distribution in
the bar at time t.
conditions u (0, t ) = 0, u (l , t ) = 0, t ≥ 0.
14. A bar of length l, laterally insulated, has
∂2u ∂u its ends A and B kept at 0° and u0 ° re-
6. Find the solution of = c2 for which
∂x 2
∂t spectively, until steady-state conditions
πx
u ( 0, t ) = u ( l , t ) = 0, u(x, 0) = sin by the prevail. If the temperature at B is then
l suddenly reduced to 0° and kept so while
method of separation of variables. that of A is maintained at 0° find the
7. Find the temperature in a bar of length 2 temperature in the bar at any subsequent
whose ends are kept at zero and lateral time.
surface insulated if the initial tempera- 15. Find the temperature u ( x, t ) in a bar
πx 5π x
ture is sin + 3 sin . which is perfectly insulated laterally,
2 2 whose ends are kept at temperature
∂u ∂2u 0°C and whose initial temperature is
8. Solve = k 2 , 0 < x < 2π with the con-
∂t ∂x f ( x ) = x (10 − x ) given that its length
ditions u ( x, 0 ) = x 2 , u ( 0, t ) = u(2p, t)
u ( 2π , t )==0.0. is 10 cm, constant cross section of area
9. The ends A and B of a rod 20 cm long have 1  cm², density 10.6 g/cm³, thermal con-
the temperatures at 30°C and at 80°C un- ductivity 1.04 cal/cm deg sec and specific
til steady state prevails. The ­temperatures heat 0.056 call/gm deg.

16. A bar AB of length 10 cm has its ends A ∂2u ∂2u


and B kept at 30° and 100° temperature 22. Solve the wave equation = a2 2 , 
∂t 2
∂x
respectively until steady-state condition 0 < x < l, t > 0, where a is a constant re-
is reached. Then the temperature at A is lated to tension in the vibrating string of
lowered to 20° and that at B to 40° and length l having fixed ends. The boundary
these temperatures are maintained. Find conditions and initial conditions are
the subsequent temperature distribution u ( 0, t ) = u ( l , t ) = 0, t ≥ 0
in the bar.
u ( x, 0 ) = f ( x ) , 0 ≤ x ≤ l
17. Solve the differential equation
∂u 2 ∂ u ut ( x, 0 ) = 0, 0 ≤ x ≤ l
2
=c for the conduction of heat
∂t ∂x 2 23. A string of length l is fastened at both
along a rod of length l without radiation, ends A and C. At a distance ‘b’ from the
subject to the following conditions: end A, the string is transversely displaced
 (i)  u is not infinite for t → ∞ to distance ‘d’ and is released from rest
∂u when it is in this position. Find the equa-
 (ii)  = 0 for x = 0 and x = l
∂x tion of the subsequent motion.
(iii) u = lx − x 2 for t = 0, between x = 0 24. A tightly stretched string of length l is at-
and x = l tached at x = 0 and at x = l and released
18. A string is stretched and fastened to from rest at t = 0. Find the expression for
two points l apart. Motion is started y the displacement of the string at a dis-
by displacing the string in the form 2π x
tance x, given that y = A sin at t = 0.
πx l
y = a sin from which it is released at
l 25. A tightly stretched flexible string has
time t = 0. Show that the displacement of its ends fixed at x = 0 and x = l. At time
any point at a distance x from one end at t = 0, the string is given a shape defined by
time t is given by f ( x ) = µ x ( l − x ) , where µ is a constant
πx π ct and then released. Find the displacement of
y ( x, t ) = a sin cos , where c 2 is dif-
l l any point x of the string at any time t > 0.
fusivity of string. 26. A string is stretched between the fixed
19. Find the deflection of vibrating string of points ( 0, 0 ) and ( l, 0 ) and released at
unit length having fixed ends with ini- rest from the initial deflection given by
tial velocity zero and initial deflection  2kx l
f ( x ) = k ( sin x − sin 2 x ) using the D ′  l when 0 < x < 2
Alembert solution. f (x) =  . Find
20. Find the displacement of a string  2k (l − x ) when l < x < l
 l 2
stretched between the fixed points ( 0, 0 )
the deflection of the string at any time t.
and (1, 0 ) and released from rest from the
position A sin π x + B sin 2π x. 27. An elastic string of length l which is fas-
21. A tightly stretched string of length l with tened at its ends x = 0 and x = l is picked
fixed ends is initially in equilibrium po- l l
up at its centre point x = to a height of
sition. It is set vibrating by giving each 2 2
point a velocity v0 sin 3 π x / l . Find the and released from rest. Find the displace-
displacement y ( x, t ). ment of the string at any instant of time.

28. Find the displacement of a string stretched when


between two fixed points at a distance 2c (i)  f ( x ) = sin x, g ( x ) = k where k is a
apart when the string is initially at rest constant
in equilibrium position and points of the (ii)  f ( x ) = 0, g ( x ) = sin 3 x
string are given initial velocities v where
( )
(iii)  f ( x ) = k x − x 2 , g ( x ) = 0
 x c , 0 < x < c 31. Find the current i and voltage v in a trans-
v= ; x being the
( 2c − x ) c , c < x < 2c
mission line of length l, t seconds after
the ends are suddenly grounded given
distance measured from one end.
πx
that i ( x, 0 ) = i0, v ( x, 0 ) = v0 sin . Also,
29. Show that the solution of the wave l
­equation R and G are negligible.
32. A telephone line 3000 km long has a re-
∂2 y 2 ∂ y
2
 = c can be expressed in the sistance of 4 ohms/km and a capacitance
∂t 2 ∂x 2 of 5 × 10 −7 farad/km. Initially, both the
form y ( x, t ) = φ ( x + ct ) + ψ ( x − ct ) ends are grounded so that the line is un-
∂y charged. At time t = 0, a constant e.m.f.
If y ( x, 0 ) = f ( x ) and ( x, 0 ) = 0, show E is applied to one end, while the other
∂t
1 end is left grounded. Assuming the induc-
that y ( x, t ) =  f ( x + ct ) + f ( x − ct ) 
2 tance and leakance to be negligible, show
that steady-state current of the grounded
30. Use D ′ Alembert solution to find the so- end at the end of 1 sec. is 5.3 per cent.
lution of the initial value problem defining 33. In a telephone wire of length l, a steady
the vibrations of an infinitely long elastic voltage distribution of 20 volts at the
∂2 y ∂2 y source end and 12 volts at the terminal
string 2 = c 2 2 , − ∞ < x < ∞, t > 0
∂t ∂x end is maintained. At time t = 0, the ter-
minal end is grounded. Determine the
∂y
y ( x, 0 ) = f ( x ) , ( x, 0 ) = g ( x ) voltage and current. Assume that L = 0
∂t and G = 0.

Answers 4.7

 1. (i) elliptic     (ii) elliptic    (iii) parabolic    (iv) hyperbolic
(v)  hyperbolic    (vi)  hyperbolic
 2.  u ( x, y ) = 3e
−(5 x + 3 y ) −(3 x + 2 y )
+ 2e
 3.  u ( x, y ) = 8e
−3( 4 x + y )

( ) c e − kx + c e − kx . + e x + (1+ k ) y c cos k x + c sin k x


(3
x + 1− k 2 y
)
2

 4.  z ( x, y ) = ( c1 x + c2 ) e x + y + e 4
1
(5 1 6 1 )

where c1 , c2 , c3 , c4 , c5 , c6 , k > 0, k1 > 0 are arbitrary constants.


− n2π 2 k
∞ t  nπ x  2 l  nπ x 
 5.  u ( x, t ) = ∑ bn e l2
sin   , where bn = ∫ f ( x ) sin   dx, n = 1, 2, 3, …
n =1  l  l 0
 l 
−π 2
π x c2 l 2 t
 6.  u ( x, t ) = sin e .
l

2 2 2 2
π x − c 4π t 5π x − 25c4 π t
 7.  u ( x, t ) = sin
2
e + 3 sin e , where c is the thermal diffusivity.
2 2
2

8 n 2  2  nx − n k t
 8.  u ( x, t ) = ∑ ( −1)  2 − π  − 2  sin e 4
n =1 n  πn  8n  2

20 ∞ 1 − 2 ( −1)
nπ n 2 2 2
nπ x − c400
 9.  u ( x, t ) = x + 40 − ∑
t
sin e , where c 2 is the thermal diffusivity.
π n =1 n 20
4 n2π 2 c 2
2 4 ∞ 1  2nπ x  − t
10  u ( x, t ) = − ∑ cos  e
l2
, where c 2 is the thermal diffusivity.
(
π π n =1 4 n − 1
2
 l  )
− n2π 2 k
a ∞
nπ x t 2 l nπ x
11.  u ( x, t ) = 0 + ∑ an cos f ( x ) cos
l ∫0
e l2
, where an = dx; n = 0,1, 2,…
2 n =1 l l
c 2 ( 2 n −1) π 2
( 2n − 1) π x −
2

8a ∞ 1 t
12.  u ( x, t ) = 3 ∑ sin e l2
, where c 2 is the thermal diffusivity.
π n =1 ( 2n − 1) 3
l
c 2π 2 n2
80 ∞ 1 nπ x −
13.  u ( x, t ) = 90 − 3 x − ∑
t
sin e 25
, where c 2 is the thermal diffusivity.
π n =1 n 5

( −1)
n +1 − n2π 2 c 2
2u ∞
nπ x t
14.  u ( x, t ) = 0
π
∑n =1 n
sin
l
e l2
, where c 2 is the thermal diffusivity.

800 ∞ 1 ( 2n − 1) π x
3 ∑
15.  u ( x, t ) =
−0.01752π 2 ( 2 n −1) t
2

e sin
π n =1 ( 2n − 1) 3
10
20 ∞ 1 − 6 ( −1)  nπ x − 100 t
n c 2 n2π 2
16.  u ( x, t ) = 2 x + 20 + ∑ 
π n =1  n
 sin
10
e , where c 2 is the thermal diffusivity.
 
−4 n2π 2 c 2
l2 l2 ∞
1  2nπ x  t
17.  u ( x, t ) = − 2
6 π

n =1 n
2
cos 
 l 
e
l2

19.  y ( x, t ) = k [sin x cos ct − sin 2 x cos 2ct ], where c 2 is diffusivity of string.

20.  y ( x, t ) = A sin π x cos π ct + B sin 2π x cos ( 2π ct ), where c 2 is diffusivity of string.


lv0  πx π ct 3π x 3π ct 
21.  y ( x, t ) =  9 sin sin − sin sin , where c 2 is diffusivity of string.
12π c  l l l l 

nπ at nπ x 2 l nπ x
22.  u ( x, t ) = ∑ bn cos sin , where bn = ∫ f ( x ) sin dx
n =1 l l l 0 l
2dl 2 ∞
1 nπ b nπ x nπ ct
23.  y ( x, t ) =
b (l − b)π 2
∑n
n =1
2
sin
l
sin
l
cos
l
, where c 2 is diffusivity of string.

2π x 2cπ t
24.  y ( x, t ) = A sin cos , where c2 is diffusivity of string.
l l

8µ l 2 ∞
1 ( 2n − 1) cπ t ( 2n − 1) π x
25.  y ( x, t ) = ∑
2
cos sin , where c is diffusivity of string.
π3 ( 2n − 1)
3
n =1 l l

8k ∞ ( −1) ( 2n − 1) cπ t ( 2n − 1) π x
n +1

26.  y ( x, t ) = 2 ∑ cos sin , where c 2 is diffusivity of string.


π n =1 ( 2n − 1) 2
l l

4l ∞ ( −1) ( 2n − 1) cπ t ( 2n − 1) π x
n +1

27.  y ( x, t ) = 2 ∑ cos sin , where c 2 is diffusivity of string.


π n =1 ( 2n − 1) 2
l l

16c ∞ ( −1) ( 2n − 1) π x ( 2n − 1) π at
n +1

28.  y ( x, t ) = 3 ∑ sin sin , where a 2 is diffusivity of string.


aπ n =1 ( 2n − 1) 3
2c 2c

30.   (i)  y ( x, t ) = sin x cos ( ct ) + kt


1
    (ii)  y ( x, t ) = sin ( 3 x ) sin ( 3ct )
3c
  (
(iii)  y ( x, t ) = k x − x 2 − c 2 t 2 )
 πt   π x 
31.  v ( x, t ) = v0 cos   sin  
 l LC   l 

C  πt  πx 
  i ( x, t ) = i0 − v0 sin   cos  
L  l LC   l 

20 ( l − x ) 24 ∞ ( −1)
n +1 − n2π 2
 nπ x  l 2 RC t
33.  v ( x, t ) =
l
+ ∑
π n =1 n
sin 
 l 
e

− n2π 2
20 24 ∞  nπ x  l 2 RC t
i ( x, t ) = + ∑ ( −1) cos 
n
  e
Rl Rl n =1  l 

4.14  Two dimensional Heat Flow


Consider the flow of heat in a metal plate in the x–y plane. We assume that the sides of plate are
insulated. Let u ( x, y, t ) be temperature at any point ( x, y ) of plate at time t.
Figure 4.5: Rectangular element of the plate with corners A(x, y), B(x + δx, y), C(x + δx, y + δy) and D(x, y + δy).

Consider a portion of plate of thickness α as shown in the figure.


As discussed in one dimensional heat flow
Amount of heat entering AD per second = −Kαδy (∂u/∂x)_x
Amount of heat entering AB per second = −Kαδx (∂u/∂y)_y
Amount of heat leaving BC per second = −Kαδy (∂u/∂x)_{x+δx}
Amount of heat leaving DC per second = −Kαδx (∂u/∂y)_{y+δy}
Hence, total amount of heat retained by this element of plate per second
= Kα [δy {(∂u/∂x)_{x+δx} − (∂u/∂x)_x} + δx {(∂u/∂y)_{y+δy} − (∂u/∂y)_y}]
But rate of increase of heat in this element = σραδxδy (∂u/∂t)
where σ is specific heat and ρ is density of material of plate.
∴ σραδxδy (∂u/∂t) = Kα [δy {(∂u/∂x)_{x+δx} − (∂u/∂x)_x} + δx {(∂u/∂y)_{y+δy} − (∂u/∂y)_y}]
∴ ∂u/∂t = (K/σρ) [{(∂u/∂x)_{x+δx} − (∂u/∂x)_x}/δx + {(∂u/∂y)_{y+δy} − (∂u/∂y)_y}/δy]
Take limit as δx → 0, δy → 0:
∂u/∂t = c² (∂²u/∂x² + ∂²u/∂y²)
where c² = K/(σρ) is diffusivity of material of plate.
It is the two dimensional heat equation.
Remark 4.6:
∂u
 (i)  In steady state, u is independent of t and hence = 0.
∂t
∴ In steady state, two dimensional heat equation is
∂2u ∂2u
+ =0
∂x 2 ∂y 2 
It is Laplace equation in two dimensions.

(ii)  In three dimensions, heat equation is


∂u  ∂2u ∂2u ∂2u 
= c2  2 + 2 + 2 
∂t  ∂x ∂y ∂z 

In steady state, it becomes
∂2u ∂2u ∂2u
+ + =0
∂x 2 ∂y 2 ∂z 2 
It is Laplace equation in three dimensions.

4.15  Solution of Two dimensional Laplace equation


Two dimensional Laplace equation is
∂2u ∂2u
+ =0
∂x 2 ∂y 2 
Let u ( x, y ) = X ( x ) Y ( y )

∂ u 2
∂ u 2
∴ = X ′′ Y , = XY ′′
∂x 2 ∂y 2 
∴ differential equation becomes
X ′′ Y + XY ′′ = 0 
X ′′ Y ′′
or =−
X Y 
L.H.S. is a function of x and R.H.S. is a function of y and hence each is constant say λ .

Case I:  λ = 0, then X ′′ = 0, Y ′′ = 0


∴ solutions are X = Ax + B, Y = Cy + D
∴ u ( x, y ) = ( Ax + B ) (Cy + D )

Case II:  λ is positive, i.e., λ = p 2 , p > 0 then 
X ′′ − p 2 X = 0, Y ′′ + p 2Y = 0. 

Solutions are X = Ae px + Be − px , Y = C cos py + D sin py


∴ ( )
u ( x, y ) = Ae px + Be − px (C cos py + D sin py )


Case III:  l is negative, i.e., λ = − p 2 , p > 0


∴ X ′′ + p 2 X = 0, Y ′′ − p 2Y = 0 
∴ solutions are X = A cos px + B sin px, Y = Ce py + De − py 

∴ (
u ( x, y ) = Ce py + De − py ) ( A cos px + B sin px ) 

Among these solutions, we are to find those solutions which satisfy initial and boundary condi-
tions consistent with the physical nature.
In particular, if u → 0 as y → ∞ for all x, then the solution must be
u(x, y) = e^{−py}(A₁ cos px + B₁ sin px); p > 0
and if u → 0 as x → ∞ for all y, then the solution must be
u(x, y) = e^{−px}(A₁ cos py + B₁ sin py), p > 0
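As a numerical aside added here (not part of the text; the square domain and boundary data are illustrative), the Laplace equation can also be solved by simple Jacobi relaxation, replacing each interior value by the average of its four neighbours until the update is small. The centre value can then be compared with the series solutions worked out in the examples that follow (e.g., Example 4.68 with a = b = 1).

```python
# Added numerical aside: Jacobi relaxation for u_xx + u_yy = 0 on a unit square (illustrative).
import numpy as np

n = 41
grid = np.zeros((n, n))
s = np.linspace(0.0, 1.0, n)
grid[0, :] = s * (1.0 - s)          # one edge held at x(1 - x); the other three edges at 0

for _ in range(20000):              # Jacobi converges slowly, so allow many sweeps
    new = grid.copy()
    new[1:-1, 1:-1] = 0.25 * (grid[2:, 1:-1] + grid[:-2, 1:-1]
                              + grid[1:-1, 2:] + grid[1:-1, :-2])
    if np.max(np.abs(new - grid)) < 1e-8:
        break
    grid = new

print(grid[n // 2, n // 2])         # interior value for comparison with the series solution
```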

Example 4.67: Find the solution of the Laplace equation ∂²u/∂x² + ∂²u/∂y² = 0
which satisfies the conditions
  (i)  u → 0 as y → ∞ for all x    (ii)  u = 0 at x = 0 for all y
(iii)  u = 0 at x = l for all y        (iv) u = lx − x 2 if y = 0 for all x ∈ ( 0, l )
Solution: Since u → 0 as y → ∞ for all x
Solution is of the form
u ( x, y ) = e − py ( C1 cos px + C2 sin px ) ; p > 0

u ( 0, y ) = C1e − py = 0 for all y
⇒ C1 = 0
∴ u ( x, y ) = C2 e − py sin px

u ( l , y ) = C2 e − py sin pl = 0


⇒ p= ; n = 1, 2, 3,…
l 
nπ y
− nπ x
∴ u ( x, y ) = Cn e l sin ; n = 1, 2, 3, … 
l
∴ By principle of superposition, solution is
nπ y
∞ − nπ x
u ( x, y ) = ∑ Cn e l sin (1)
n =1 l

nπ x
lx − x 2 = u ( x, 0 ) = ∑ Cn sin
n =1 l 
It is Fourier half range sine-series in [ 0, l ]
nπ x
∴ Cn =
2 l
l ∫ ( )
lx − x 2 sin
l
dx
0

l
2  l nπ x   l2 nπ x   l3 nπ x  
 ( )
=  lx − x 2  −
 nπ
cos  − ( l − 2 x )  − 2 2 sin  + ( −2 )  3 3 cos 
l l   nπ l  n π l 0

=−
4l 2
n3π 3
(( −1) − 1) n

8l 2
∴ C2 n = 0, C2 n −1 = ; n = 1, 2, 3,…
( 2n − 1)
3
π3

∴ from (1), solution is

u ( x, y ) =
8l 2 ∞
1 −
(2 n −1)π y
(2n − 1) π x
∑ e l
sin
π3 n =1 (2n − 1) 3
l

Example 4.68: Solve ∂²u/∂x² + ∂²u/∂y² = 0; 0 ≤ x ≤ a, 0 ≤ y ≤ b subject to the conditions u(0, y) = u(a, y) = u(x, b) = 0; u(x, 0) = x(a − x).

Solution: Solutions can be of type


 (i) ( C1 x + C2 ) ( C3 y + C4 )
( )
 (ii) C1e px + C2 e − px ( C3 cos py + C4 sin py )
(iii) ( C e1
py
+ C2 e − py
) (C 3 cos px + C4 sin px )
Since u ( 0, y ) = u ( a, y ) = 0

∴ solution must be of the type
(
u ( x, y ) = C1e py + C2 e − py sin px ) 
u ( x, b ) = ( C e pb
+ C2 e − pb
) sin px =0
1

∴ C2 = −C1e 2 pb 
∴ (
u ( x, y ) = C1 e py − e ( − ) sin px
p 2b y
) 
= C1e pb
(e p( y −b)
−e
p(b − y )
) sin px 
 e p(b − y ) − e − p(b − y ) 
= −2C1e pb   sin px
 2
  
= Ce pb sinh p ( b − y ) sin px (C = −2C1 )

u ( a, y ) = Ce pb sinh p (b − y ) sin pa = 0 

⇒ p= ; n = 1, 2,… 
a
nπ b
nπ nπ x
∴ u ( x, y ) = C e a sinh (b − y ) sin ; n = 1, 2,…
a a 
nπ (b − y ) nπ x nπ b
= Cn sinh sin ; n = 1, 2,… where Cn = C e a . 
a a

∴ By principle of superposition, solution is


∞ nπ ( b − y ) nπ x
u ( x, y ) = ∑ Cn sinh sin (1)
n =1 a a

nπ b nπ x
∴ x ( a − x ) = u ( x, 0 ) = ∑ Cn sinh sin ; 0≤ x≤a
n =1 a a 
It is Fourier half range sine series of x ( a − x ) in [ 0, a ] .

nπ b 2 a nπ x
∴ Cn sinh
a
= ∫ ax − x 2 sin
a 0
(a
dx )

  a nπ x   a2 nπ x 
∴ Cn =
2

nπ b 
ax − x 2  −
 nπ
(
cos
a 
 )
− ( a − 2 x )  − 2 2 sin
 nπ a 
a sinh
a
a
 a3 nπ x  
+ ( −2 )  3 3 cos 
n π a 0

−4 a 2 ( −1) − 1
n

=  
nπ b
n π sinh
3 3

a 
8a 2
C2 n = 0,  C2 n −1 = ; n = 1, 2, 3,... 
( 2n − 1) π b
( 2n − 1) π sinh
3 3

a
∴ from (1), solution is

sinh
( 2n − 1) π ( b − y ) sin ( 2n − 1) π x
2 ∞
8a
u ( x, y ) =
π3
∑ a
( 2n − 1) π b
a
( 2n − 1)
n=1 3
sinh
a 
Example 4.69: Solve the Laplace equation ∂²u/∂x² + ∂²u/∂y² = 0 in a rectangle with u(0, y) = u(a, y) = u(x, b) = 0, u(x, 0) = f(x). Also find the solution in a square of side π with f(x) = sin²x; 0 < x < π.
Solution: As u ( 0, y ) = u ( a, y ) = 0 
∴ solution is of the form
(
u ( x, y ) = C1e py + C2 e − py sin px; p > 0 ) 
u ( x, b ) = ( C e 1
pb
+ C2 e − pb ) sin px = 0 
∴ C2 = −C1e 2 pb 

∴ (
u ( x, y ) = C1 e py − e 2 pb e − py sin px ) 
= −C1e pb
(e p(b − y )
−e
− p(b − y )
) sin px 
= Ce pb sinh p ( b − y ) sin px      ( −2C1 = C )

u ( a, y ) = Ce pb sinh p (b − y ) sin pa = 0


⇒ p= ; n = 1, 2, 3,…
a 
nπ (b − y ) nπ x  nπ b 
∴ u ( x, y ) = Cn sinh sin ; n = 1, 2, 3,....     Ce a = Cn 
a a  
∴ By principle of superposition, solution is
∞ nπ ( b − y )
nπ x
u ( x, y ) = ∑ Cn sinh sin
n =1 a a 

nπ b nπ x
∴ f ( x ) = u ( x, 0 ) = ∑ Cn sinh sin
n =1 a a 
It is Fourier half range sin-series of f (x) in [0,a].
nπ b 2 nπ x
a

∴ Cn sinh = ∫ f ( x ) sin dx
a a0 a

nπ x
a
2
f ( x ) sin
nπ b ∫
∴ Cn = dx
a
a sinh 0
a 
\ solution is
∞ nπ ( b − y ) nπ x
u ( x, y ) = ∑ Cn sinh sin
n =1 a a 
nπ x
a
2
f ( x ) sin
nπ b ∫
where Cn = dx
a
a sinh 0
a 
In square of side π , a = b = π and when f ( x ) = sin 2 x, 0 < x < π
we have ∞
u ( x, y ) = ∑ Cn sinh n (π − y ) sin nx
n =1 
π
2
π sinh nπ ∫0
where Cn = sin 2 x sin nx dx 

π
1
=
π sinh nπ ∫ (1 − cos 2 x ) sin nx dx
0 

1  1 π
1
π 
=  − cos n x − ∫ sin ( n + 2 ) x + sin ( n − 2 ) x  dx  
π sinh nπ  n 0 20 
 1  
π

=
1
π sinh nπ  n
(n 1 1
 − ( −1) − 1 − − )
2 n+2
cos ( n + 2 ) x −
1
n−2
cos ( n − 2 ) x  
0 

  1 1  ( −1) − 1 ( −1) − 1 
n n

=
1
π sinh nπ  n
(
 − ( −1)n − 1 − −
2
)n+2

n − 2 

   
∴ C2 n = 0; n = 1, 2,… 
1  2 1 2 2 
C2 n −1 = −  + 
π sinh ( 2n − 1) π  2n − 1 2  2n + 1 2n − 3 


1  2 1 1 
= − −
π sinh ( 2n − 1) π  2n − 1 2n + 1 2n − 3 


1  4 n + 2 − 2n + 1 1 
=  −
π sinh ( 2n − 1) π  4n − 1
2
2n − 3 

1  2n + 3 1  1  4 n2 − 9 − 4 n2 + 1 
 =  2 − =  
π sinh ( 2n − 1) π  4n − 1 ( 2n − 3)  π sinh ( 2n − 1) π ( )
 4 n2 − 1 ( 2n − 3) 
−8
= ; n = 1, 2, 3,…
π ( 4 n − 1) ( 2n − 3) sinh ( 2n − 1) π
2

\ solution is
−8 ∞ sinh ( 2n − 1) (π − y )  sin ( 2n − 1) x
u ( x, y ) = ∑
( )
π n =1 4 n2 − 1 ( 2n − 3) sinh ( 2n − 1) π

Example 4.70: A rectangular plate with insulated surface is 10 cm wide and so long compared
to its width that it may be considered infinite in length without introducing an appreciable error.
If the temperature of the short edge y = 0 is given by
20 x ; 0≤ x≤5
u=
20 (10 − x ) ; 5 ≤ x ≤ 10 
and the two long edges x = 0, x = 10 as well as the other short edge are kept at 0°C. Prove that the temperature u at any point (x, y) is given by
u = (800/π²) Σ_{n=1}^{∞} [(−1)^{n+1}/(2n−1)²] sin((2n−1)πx/10) · e^{−(2n−1)πy/10}.
Solution: Temperature function u(x, y) at any point (x, y) satisfies the Laplace equation ∂²u/∂x² + ∂²u/∂y² = 0 subject to the conditions u(0, y) = u(10, y) = 0, u(x, y) → 0 as y → ∞ and

20 x ; 0≤ x≤5
u( x , 0) =  
20 (10 − x ) ; 5 ≤ x ≤ 10

As u ( x, y ) → 0  when y → ∞ 

∴ solution is of the form


u ( x, y ) = e − py ( c1 cos px + c2 sin px ) ; p > 0

u (0, y ) = c1e − py = 0

⇒ c1 = 0
∴ u ( x, y ) = c e − py sin px     ( c = c2 )

u (10, y ) = ce − py sin 10 p = 0


⇒ p= ; n = 1, 2,… 
10
nπ y
nπ x
; n = 1, 2, 3,…     ( cn = c ) 

∴ u ( x, y ) = cn e 10 sin
10
∴ By principle of superposition, solution is
nπ y
∞ − nπ x
u ( x, y ) = ∑ cn e 10
sin (1)
n =1 10

nπ x
∴ u ( x, 0 ) = ∑ cn sin ; 0 ≤ x ≤ 10
n =1 10 
It is Fourier half range sine series of u ( x, 0 ) in [0,10].
nπ x
10
2
u ( x, 0 ) sin
10 ∫0
∴ cn = dx
10

1 nπ x nπ x 
5 10
=  ∫ 20 x sin dx + ∫ ( 200 − 20 x ) sin dx   ( by definition of u ( x, 0 ) )
50 10 5
10 
   10 nπ x   10 2 nπ x  
5

= 4  x  − cos  − (1)  − 2 2 sin 


   nπ 10   nπ 10  0
 
  10 nπ x   10 2 nπ x  
10

+ (10 − x )  − cos  − ( −1)  − 2 2 sin  
  nπ 10   nπ 10  5 


 50 nπ 100 nπ 50 nπ 100 nπ 
= 4 − cos + 2 2 sin + cos + 2 2 sin  
 nπ 2 n π 2 nπ 2 n π 2 
800 nπ
= 2 2 sin
nπ 2 

( 2n − 1) π 800 ( −1)
n +1
800
c2 n = 0; c2 n −1 = sin = ; n = 1, 2,.... 

( 2n − 1) ( 2n − 1)
2 2
π2 2 π2

∴ from (1), solution is
u(x, y) = (800/π²) Σ_{n=1}^{∞} [(−1)^{n+1}/(2n−1)²] e^{−(2n−1)πy/10} sin((2n−1)πx/10)

4.16  Two Dimensional wave equation
Figure 4.6: Element ABCD of the stretched membrane, with tensions T δx and T δy acting on its edges; A(x, y), B(x, y + δy), C(x + δx, y + δy), D(x + δx, y).
We shall obtain the partial differential equation for the vibrations of a tightly stretched membrane
(such as the membrane of a drum). We shall assume
 (i)  Membrane is homogeneous and hence the mass per unit area m is constant.
(ii)  The membrane is perfectly flexible and offers no resistance to bending.
(iii) The membrane is stretched and then fixed along its entire boundary in the x–y plane. The
tension per unit length T caused by stretching the membrane is same at all points and in all
directions and does not change during motion and it is so large that weight of membrane is
negligible in its comparison.
  (iv) The deflection u ( x, y, t ) of the membrane during the motion is small compared to size of
membrane and all angles of inclinations are small.
Consider a small portion ABCD of the membrane. Forces acting on sides are Td x, Td y.
Since motion is vertical, so horizontal components of tensions cancel.
Vertical components of tensions on deflected portion of AD and BC are −T δ x sin α and
T δ x sin β.

Since angles are small, we can replace their sines by their tangents. Hence, resultant of these
two components is T δ x [ tan β − tan α ] which is equal to T δ x u y ( x1, y + δ y ) − u y ( x2 , y )
where x1 , x2 lie in [ x, x + δ x ] .

Similarly, the resultant vertical component of tensions on deflected portions of AB and DC is
T δ y ux ( x + δ x, y1 ) − ux ( x, y2 ) 

where y1 , y2 lie in [ y, y + δ y ] .

If ρ is mass per unit area of membrane, then by Newton’s second law of motion (vertically
­upward)
∂2u
ρδ xδ y 2 = T δ x u y ( x1 , y + δ y ) − u y ( x2 , y )  +T δ y ux ( x + δ x, y1 ) − ux ( x, y2 ) 
∂t 
∂ 2 u T   u y ( x1 , y + δ y ) − u y ( x2 , y )   ux ( x + δ x, y1 ) − ux ( x, y2 )  
∴ =  + 
∂t 2 ρ   δy   δx  

Take limit as δ x → 0, δ y → 0

∂2u  ∂2u ∂2u 


= c2  2 + 2  
∂t 2
 ∂y ∂x 

T
where c 2 = is diffusivity of membrane.
ρ
which is the partial differential equation of two dimensional wave motion.

4.16.1  Solution of Two Dimensional Wave Equation


Two dimensional wave equation is
∂²u/∂t² = c² (∂²u/∂x² + ∂²u/∂y²)
Let u(x, y, t) = F(x, y) T(t)
∴ ∂²u/∂t² = F T″,  ∂²u/∂x² = (∂²F/∂x²) T,  ∂²u/∂y² = (∂²F/∂y²) T
∴ differential equation becomes
F T″ = c² (∂²F/∂x² + ∂²F/∂y²) T
or  (1/F)(∂²F/∂x² + ∂²F/∂y²) = (1/c²)(T″/T)
L.H.S. is function of x and y and R.H.S. is function of t and hence each is constant, say λ .

As the boundaries are fixed, u(x, y, t) = 0 for all t at all points of the boundary; hence λ must be negative, say −p².
∴ ∂²F/∂x² + ∂²F/∂y² + p²F = 0    (4.58)
and T″ = −p²c²T
Its solution is
T = k₁ cos cpt + k₂ sin cpt    (4.59)

Now, let F ( x, y ) = X ( x ) Y ( y )

∴ ∂²F/∂x² = X″Y,  ∂²F/∂y² = XY″
∴ from equation (4.58)
X″Y + XY″ + p²XY = 0
∴ X″/X = −(Y″ + p²Y)/Y
L.H.S. is a function of x and R.H.S. is a function of y and hence each is constant. As the boundaries are fixed, the constant must be negative, say −q².
∴ X″ + q²X = 0,  Y″ + s²Y = 0  where p² − q² = s²    (4.60)

Solutions are
X ( x ) = A cos qx + B sin qx (4.61)
Y ( y ) = C cos sy + E sin sy (4.62)
Now, u ( x, y, t ) = X ( x ) Y ( y ) T ( t )

and on the boundary of the rectangle 0 ≤ x ≤ a, 0 ≤ y ≤ b (i.e., on x = 0, x = a, y = 0, y = b),
u(x, y, t) = 0 for all t ≥ 0

∴ X (0 ) = 0, X ( a ) = 0, Y (0 ) = 0, Y (b ) = 0

From (4.61),
X (0 ) = 0 ⇒ A = 0
  
From (4.62),
Y (0 ) = 0 ⇒ C = 0
  
∴ X ( x ) = B sin qx , Y ( y ) = E sin sy


X(a) = B sin qa = 0
⇒ q = mπ/a ;  m ∈ N    (4.63)
and
Y(b) = E sin sb = 0
⇒ s = nπ/b ;  n ∈ N    (4.64)
∴ X(x) = Bₘ sin(mπx/a),  Y(y) = Eₙ sin(nπy/b)
∴ F(x, y) = B_{m,n} sin(mπx/a) sin(nπy/b) ;  B_{m,n} = Bₘ Eₙ ;  m = 1, 2, …,  n = 1, 2, …
From (4.60),
p² = q² + s² = m²π²/a² + n²π²/b²     (from (4.63) and (4.64))
   = π²(m²/a² + n²/b²) = p²_{m,n}  (say)
∴ from (4.59)
T = k_{m,n} cos(c p_{m,n} t) + l_{m,n} sin(c p_{m,n} t)

∴ u ( x , y , t ) = X ( x ) Y ( y ) T (t )

= F ( x, y ) T ( t )

= sin(mπx/a) sin(nπy/b) [K_{m,n} cos(c p_{m,n} t) + L_{m,n} sin(c p_{m,n} t)]
where K m , n = Bm , n km , n , Lm , n = Bm , n lm , n

By principle of superposition
u(x, y, t) = ∑_{m=1}^{∞} ∑_{n=1}^{∞} [K_{m,n} cos(c p_{m,n} t) + L_{m,n} sin(c p_{m,n} t)] sin(mπx/a) sin(nπy/b)    (4.65)
where p_{m,n} = π √(m²/a² + n²/b²)
which gives us the solution of the two dimensional wave equation. Constants K_{m,n} and L_{m,n} will be obtained from the initial conditions.
Suppose the initial conditions are
u(x, y, 0) = f(x, y),   ∂u/∂t (x, y, 0) = g(x, y)

then u(x, y, 0) = f(x, y) = ∑_{m=1}^{∞} ∑_{n=1}^{∞} K_{m,n} sin(mπx/a) sin(nπy/b)
     = ∑_{m=1}^{∞} [ ∑_{n=1}^{∞} K_{m,n} sin(nπy/b) ] sin(mπx/a)
Let l_m(y) = ∑_{n=1}^{∞} K_{m,n} sin(nπy/b)    (4.66)
∴ f(x, y) = ∑_{m=1}^{∞} l_m(y) sin(mπx/a)
It is half range Fourier sine series of f (x, y) in x ∈ [0, a] where y is taken as constant
∴ l_m(y) = (2/a) ∫₀^a f(x, y) sin(mπx/a) dx    (taking y constant)    (4.67)
Now, (4.66) is the Fourier sine series of l_m(y) in y ∈ [0, b].
∴ K_{m,n} = (2/b) ∫₀^b l_m(y) sin(nπy/b) dy
Put the value of l_m(y) from (4.67):
K_{m,n} = (4/ab) ∫₀^b ∫₀^a f(x, y) sin(mπx/a) sin(nπy/b) dx dy    (4.68)
From (4.65)
∂u/∂t = ∑_{m=1}^{∞} ∑_{n=1}^{∞} c p_{m,n} [−K_{m,n} sin(c p_{m,n} t) + L_{m,n} cos(c p_{m,n} t)] sin(mπx/a) sin(nπy/b)
∴ ∂u/∂t (x, y, 0) = ∑_{m=1}^{∞} ∑_{n=1}^{∞} c p_{m,n} L_{m,n} sin(mπx/a) sin(nπy/b) = g(x, y)
Proceeding as above
L_{m,n} = [4/(c p_{m,n} ab)] ∫₀^b ∫₀^a g(x, y) sin(mπx/a) sin(nπy/b) dx dy    (4.69)

Substituting the value of K m , n and Lm , n in (4.65), we obtain the solution.
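A small computational sketch (not from the text) of formulas (4.65), (4.68) and (4.69): the coefficients are obtained by numerical double integration with scipy's dblquad and the double series is truncated after a few terms. The values of a, b, c and the functions f, g below are illustrative assumptions only.

import numpy as np
from scipy.integrate import dblquad

a, b, c = 2.0, 1.0, 1.0                      # assumed membrane dimensions and wave speed
f = lambda x, y: x*y*(a - x)*(b - y)          # assumed initial deflection
g = lambda x, y: 0.0                          # assumed initial velocity (starts from rest)

def p(m, n):
    return np.pi*np.sqrt((m/a)**2 + (n/b)**2)

def K(m, n):                                  # formula (4.68)
    val, _ = dblquad(lambda y, x: f(x, y)*np.sin(m*np.pi*x/a)*np.sin(n*np.pi*y/b), 0, a, 0, b)
    return 4.0/(a*b)*val

def L(m, n):                                  # formula (4.69)
    val, _ = dblquad(lambda y, x: g(x, y)*np.sin(m*np.pi*x/a)*np.sin(n*np.pi*y/b), 0, a, 0, b)
    return 4.0/(c*p(m, n)*a*b)*val

def u(x, y, t, M=5, N=5):                     # partial sum of (4.65)
    s = 0.0
    for m in range(1, M + 1):
        for n in range(1, N + 1):
            s += (K(m, n)*np.cos(c*p(m, n)*t) + L(m, n)*np.sin(c*p(m, n)*t)) \
                 * np.sin(m*np.pi*x/a)*np.sin(n*np.pi*y/b)
    return s

print(u(a/2, b/2, 0.0))    # should be approximately f(a/2, b/2) = 0.25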

Example 4.71: Find the deflection u ( x, y, t ) of a rectangular membrane ( 0 ≤ x ≤ a, 0 ≤ y ≤ b )


whose boundary is fixed, given that it starts from rest and u ( x, y, 0 ) = xy ( a − x ) ( b − y ).
Solution: Let u ( x, y, t ) be deflection of point ( x, y ) at time t.
Then, wave equation is
∂2u 2∂ u ∂2u 
2
= c  + 
∂t 2  ∂x
2
∂y 2 


where c² = T/ρ, c being the wave speed of the membrane.


When boundaries are fixed then on the boundaries u ( x, y, t ) = 0 for all t.
Under this boundary condition solution is
u(x, y, t) = ∑_{m=1}^{∞} ∑_{n=1}^{∞} [k_{m,n} cos(p_{m,n} ct) + l_{m,n} sin(p_{m,n} ct)] sin(mπx/a) sin(nπy/b),    (1)
where p²_{m,n} = π²(m²/a² + n²/b²)
xy(a − x)(b − y) = u(x, y, 0) = ∑_{m=1}^{∞} ∑_{n=1}^{∞} k_{m,n} sin(mπx/a) sin(nπy/b)
     = ∑_{m=1}^{∞} [ ∑_{n=1}^{∞} k_{m,n} sin(nπy/b) ] sin(mπx/a)

Considering y constant, it is Fourier half range sine series of xy ( a − x ) ( b − y ) in [ 0, a ] .

∴ ∑_{n=1}^{∞} k_{m,n} sin(nπy/b) = (2/a) ∫₀^a xy(a − x)(b − y) sin(mπx/a) dx
 = [2y(b − y)/a] [ (ax − x²)(−(a/mπ) cos(mπx/a)) − (a − 2x)(−(a²/m²π²) sin(mπx/a)) + (−2)((a³/m³π³) cos(mπx/a)) ]₀^a
 = [2y(b − y)/a] ( −(2a³/m³π³) ((−1)^m − 1) )
 = −[4a²y(b − y)/m³π³] ((−1)^m − 1)

∴ k_{2m,n} = 0 ;  m = 1, 2, 3, …
∑_{n=1}^{∞} k_{2m−1,n} sin(nπy/b) = 8a²y(b − y)/((2m − 1)³π³) ;  m = 1, 2, 3, …
It is, again, the Fourier half range sine series of 8a²y(b − y)/((2m − 1)³π³) in [0, b].

∴ k_{2m−1,n} = (2/b) ∫₀^b [8a²y(b − y)/((2m − 1)³π³)] sin(nπy/b) dy
 = [16a²/((2m − 1)³bπ³)] [ (by − y²)(−(b/nπ) cos(nπy/b)) − (b − 2y)(−(b²/n²π²) sin(nπy/b)) + (−2)((b³/n³π³) cos(nπy/b)) ]₀^b

 = −[32a²b²/((2m − 1)³n³π⁶)] ((−1)^n − 1)

∴ k_{2m−1,2n−1} = 64a²b²/((2m − 1)³(2n − 1)³π⁶) and all other k_{m,n} = 0
From (1)
∂u/∂t = ∑_{m=1}^{∞} ∑_{n=1}^{∞} c p_{m,n} [−k_{m,n} sin(p_{m,n} ct) + l_{m,n} cos(p_{m,n} ct)] sin(mπx/a) sin(nπy/b)
∴ 0 = ∂u/∂t (x, y, 0) = ∑_{m=1}^{∞} ∑_{n=1}^{∞} l_{m,n} p_{m,n} c sin(mπx/a) sin(nπy/b)
∴ lm , n = 0 for all m, n.

Put values of km , n and lm , n in (1)


u(x, y, t) = (64a²b²/π⁶) ∑_{m=1}^{∞} ∑_{n=1}^{∞} [1/((2m − 1)³(2n − 1)³)] cos(p_{2m−1,2n−1} ct) sin((2m − 1)πx/a) sin((2n − 1)πy/b)
where p²_{2m−1,2n−1} = π²[(2m − 1)²/a² + (2n − 1)²/b²]

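As a rough check (not part of the text) of the closed form just obtained, one may compare it against the general coefficient formula (4.68) evaluated numerically; the values a = 2 and b = 1 below are illustrative assumptions, since the example keeps a and b symbolic.

import numpy as np
from scipy.integrate import dblquad

a, b = 2.0, 1.0

def k_closed(m, n):      # closed form found above (non-zero only for odd m, n)
    if m % 2 == 0 or n % 2 == 0:
        return 0.0
    return 64*a**2*b**2/(m**3*n**3*np.pi**6)

def k_quad(m, n):        # coefficient from the general formula (4.68)
    val, _ = dblquad(lambda y, x: x*y*(a - x)*(b - y)
                     * np.sin(m*np.pi*x/a)*np.sin(n*np.pi*y/b), 0, a, 0, b)
    return 4.0/(a*b)*val

for m, n in [(1, 1), (1, 3), (2, 1), (3, 3)]:
    print(m, n, k_closed(m, n), k_quad(m, n))   # the two values should agree for each pair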

Example 4.72: Find the deflection u ( x, y, t ) of a square membrane of one unit side whose
boundaries are fixed, if the initial velocity is zero and the initial deflection is given by
f ( x, y ) = sin ( 3π x ) sin ( 4π y ). Assume c = 1 in the differential equation.
Solution: Let u ( x, y, t ) be deflection of point ( x, y ) at time t.
Then, wave equation is
∂²u/∂t² = ∂²u/∂x² + ∂²u/∂y² ;  0 ≤ x ≤ 1, 0 ≤ y ≤ 1.
When boundaries are fixed then on the boundaries u ( x, y, t ) = 0 and solution then will be
u(x, y, t) = ∑_{m=1}^{∞} ∑_{n=1}^{∞} [k_{m,n} cos(p_{m,n} t) + l_{m,n} sin(p_{m,n} t)] sin(mπx) sin(nπy),

where p²_{m,n} = π²(m² + n²)
∂u/∂t = ∑_{m=1}^{∞} ∑_{n=1}^{∞} p_{m,n} [−k_{m,n} sin(p_{m,n} t) + l_{m,n} cos(p_{m,n} t)] sin(mπx) sin(nπy)
∴ 0 = (∂u/∂t)_{t=0} = ∑_{m=1}^{∞} ∑_{n=1}^{∞} l_{m,n} p_{m,n} sin(mπx) sin(nπy)

⇒ lm , n = 0 for all m, n.



∴ u(x, y, t) = ∑_{m=1}^{∞} ∑_{n=1}^{∞} k_{m,n} cos(p_{m,n} t) sin(mπx) sin(nπy)
∴ sin(3πx) sin(4πy) = u(x, y, 0) = ∑_{m=1}^{∞} ∑_{n=1}^{∞} k_{m,n} sin(mπx) sin(nπy)
\ k3, 4 = 1 and all other km , n = 0

\ solution is
u ( x, y, t ) = cos ( p3, 4 t ) sin ( 3π x ) sin ( 4π y )

where p_{3,4} = π√(3² + 4²) = 5π

\ u ( x, y, t ) = cos (5π t ) sin (3π x ) sin ( 4π y ).

Exercise 4.8

 1. Solve ∂²u/∂x² + ∂²u/∂y² = 0 subject to the boundary conditions u(0, y) = sin y and u → 0 as x → ∞.
 2. Solve ∂²u/∂x² + ∂²u/∂y² = 0 in a rectangle in the x–y plane, 0 < x < a and 0 < y < b, satisfying the boundary conditions u(x, 0) = u(x, b) = u(0, y) = 0 and u(a, y) = f(y). Also, find the solution when f(y) = ky(b − y), 0 < y < b.
 3. Solve ∂²u/∂x² + ∂²u/∂y² = 0 in the interval 0 ≤ x ≤ π, subject to the boundary conditions: u(0, y) = 0, u(π, y) = 0, u(x, 0) = 1 and u(x, y) → 0 as y → ∞ for all x.
 4. Solve ∂²u/∂x² + ∂²u/∂y² = 0, subject to the conditions u(x, 0) = 0, u(x, a) = 0, u(x, y) → 0 as x → ∞ when x ≥ 0 and 0 ≤ y ≤ a.
 5. An infinitely long metal plate of width 1 with insulated surfaces has its temperature zero along both the long edges y = 0 and y = 1 and the short edge at infinity. If the edge x = 0 is kept at fixed temperature T₀, find the temperature T at any point (x, y) of the plate in steady state.
 6. Solve ∂²u/∂x² + ∂²u/∂y² = 0 which satisfies the conditions: u(0, y) = u(l, y) = u(x, 0) = 0 and u(x, a) = sin(nπx/l).
 7. An infinitely long plane uniform plate is bounded by two parallel edges and an end at right angles to them. The breadth is π. This end is maintained at temperature u₀ at all points and the other edges are at zero temperature. Determine the temperature at any point of the plate in the steady state.
 8. A tightly stretched unit square membrane starts vibrating from rest and its initial displacement is k sin 2πx sin πy. Show that the deflection at any instant is k sin 2πx sin πy cos(√5 πct).
 9. Find the deflection u(x, y, t) of the square membrane of side unity and boundaries fixed, with c = 1, if the initial velocity is zero and the initial deflection is f(x, y) = A sin πx sin 2πy.
10. Determine the displacement function u(x, y, t) of a rectangular membrane 0 < x < l₁, 0 < y < l₂ with the entire boundary fixed and with initial conditions u(x, y, 0) = 0 and u_t(x, y, 0) = 1.

Answers 4.8

 1. u(x, y) = e^(−x) sin y.
 2. u(x, y) = ∑_{n=1}^{∞} bₙ sinh(nπx/b) sin(nπy/b), where bₙ = [2/(b sinh(nπa/b))] ∫₀^b f(y) sin(nπy/b) dy,
    and if f(y) = ky(b − y), u(x, y) = (8kb²/π³) ∑_{n=1}^{∞} sinh((2n − 1)πx/b) sin((2n − 1)πy/b) / [(2n − 1)³ sinh((2n − 1)πa/b)].
 3. u(x, y) = (4/π) ∑_{n=1}^{∞} [e^(−(2n−1)y)/(2n − 1)] sin(2n − 1)x.
 4. u(x, y) = ∑_{n=1}^{∞} cₙ e^(−nπx/a) sin(nπy/a), where cₙ, n = 1, 2, 3, … are arbitrary constants.
 5. T(x, y) = (4T₀/π) ∑_{n=1}^{∞} [1/(2n − 1)] e^(−(2n−1)πx) sin(2n − 1)πy.
 6. u(x, y) = sin(nπx/l) sinh(nπy/l)/sinh(nπa/l).
 7. u(x, y) = (4u₀/π) ∑_{n=1}^{∞} [1/(2n − 1)] e^(−(2n−1)y) sin(2n − 1)x.
 9. u(x, y, t) = A cos(√5 πt) sin πx sin 2πy.
10. u(x, y, t) = [16/(π²c)] ∑_{m=1}^{∞} ∑_{n=1}^{∞} [sin(c p_{2m−1,2n−1} t)/((2m − 1)(2n − 1) p_{2m−1,2n−1})] sin((2m − 1)πx/l₁) sin((2n − 1)πy/l₂),
    where p²_{2m−1,2n−1} = π²[(2m − 1)²/l₁² + (2n − 1)²/l₂²].
5  Numerical Methods in General and Linear Algebra
5.1 Introduction
Numerical methods are methods for solving problems numerically on a computer or a calculator or, in earlier times, by hand. Computers have changed the field as a whole as well as many individual methods. Numerical methods are necessary because for many problems there is no solution formula, as happens for many algebraic and transcendental equations, or because a solution formula is practically useless. Ideas of round-off errors and the various types of errors occurring in numerical methods are discussed. Methods to find roots of algebraic and transcendental equations and various methods to solve linear systems of equations and eigenvalue problems are also discussed. Interpolation means to find approximate values of a function f (x) for x between given x values, and extrapolation means to find f (x) outside the range of the given x values. Various interpolation formulae are studied in detail.

5.2  Errors in Numerical Computations


Usually, the following three different types of errors occur in numerical computations.
(i) Gross errors
(ii) Round-off errors
(iii) Truncation errors

We shall deal them one by one.


Gross Errors: These errors are due to human mistakes or due to malfunctioning of a calcula-
tor or a computer as the case may be. Although these errors occur quite frequently in numerical
computations, yet we shall not treat them here.
Round-off Errors: A number is rounded to position n by making all digits to the right of this po-
sition zero. The digit in position n is unchanged, if the truncated part is less than half a unit of the
position value of n th place. The digit in position n is increased by one, if the truncated part is more
than half a unit of the position value of the n th place. If the truncated part is exactly half a unit of the
position value of n th place, then n th place is unchanged if n th place has even digit (0 is considered
as even), and n th place is increased by one unit, if n th place has odd digit. It should also be noted
that the significant digits in a number start with the first non-zero digit and end with the non-zero
digit if it does not contain decimal part, and if it contains decimal part, then zeros to the right in
decimal part are also significant.

For example, 895472603 rounded to 4 significant digits is 895500000.


5.2435904 rounded to 5 decimal places is 5.24359
8.73500 rounded to two decimal places is 8.74
7.24500 rounded to two decimal places is 7.24
11.34576523 rounded to five decimal places is 11.34577.
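These rounding rules, including the round-half-to-even behaviour at the boundary case, can be reproduced with Python's decimal module; the small sketch below is only an illustration and is not part of the text.

from decimal import Decimal, ROUND_HALF_EVEN

def round_to(value, places):
    # Round a decimal string to the given number of decimal places
    # using the half-to-even rule described above.
    q = Decimal(1).scaleb(-places)          # e.g. places=2 gives Decimal('0.01')
    return Decimal(value).quantize(q, rounding=ROUND_HALF_EVEN)

print(round_to("8.73500", 2))        # 8.74  (digit 3 before the dropped half is odd)
print(round_to("7.24500", 2))        # 7.24  (digit 4 before the dropped half is even)
print(round_to("11.34576523", 5))    # 11.34577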
Truncation Errors: Truncation error arises when an infinite process is approximated by a finite
one. Suppose, we approximate ex by some polynomial, then truncation error arises. Truncation
error also arises when some differential equation is solved by difference method.
In practical computations, we start with some initial data. Even if the intermediate calcula-
tions are exact, the final result may give large deviations. Such type of problem is called ill-
conditioned.
In numerical computations, each step introduces a round-off error. The error in each step will influence the error in subsequent steps. We must take care that error propagation still gives stable results.

5.3 Algebraic and Transcendental Equations


An equation f (x) = 0 is called an algebraic equation of degree n, if f (x) is a polynomial of
d­ egree n. If f (x) contains some other functions such as trigonometric, logarithmic, exponential,
etc., then f (x) = 0 is called a transcendental equation.
We shall now consider the concept of a root of an equation f (x) = 0. By definition, a is a root
of f (x) = 0 if f (a) = 0. However, in numerical applications, the equation usually cannot be satis-
fied exactly. Therefore, we modify the mathematical definition of root here.
a will be called a root of f (x) = 0 if | f (a)| < e, where e is a given tolerance. According to this definition, f (x) = 0 and M f (x) = 0 for some constant M do not have the same roots. Before dealing with the methods to find a root of an equation, we state a property.

Intermediate Value Property


If f (x) is continuous in [a, b] and f (a) ⋅ f (b) < 0, i.e., f (a) and f (b) have opposite signs, then the
equation f (x) = 0 has at least one real root in (a, b). Further, if | f (a)| < | f (b)|, then, in general,
root is near a, as compared to b.

Iterative Process
Suppose one or more approximations of the root of an equation are known and we find the next
approximation using the formula containing known approximations, then this process is known
as iterative process. Suppose x1 is an approximation of root of an equation and we find x2 using
formula containing x1, then approximation x3 from x2 and so on, then this process is called itera-
tive process.

Rate of Convergence
The fastness of convergence in any method is represented by its rate of convergence.
Let x₁, x₂, x₃, …, xₙ, xₙ₊₁ be successive approximations of the root of an equation having errors ε₁, ε₂, ε₃, …, εₙ, εₙ₊₁. If εₙ₊₁ = K εₙᵐ then convergence is said to be of order m. If m = 1, then convergence is called linear and if m = 2, then convergence is called quadratic. When the convergence is quadratic, the number of correct decimals is approximately doubled at every iteration, at least if the factor K is not too large. In case of linear convergence εₙ₊₁ = K εₙ and hence εₙ₊₁ = K εₙ = K²εₙ₋₁ = K³εₙ₋₂ = … = Kⁿε₁.
Thus, the error multiplies by K at each step and hence this convergence is also called as geo-
metric convergence.
Now, we shall be dealing some methods to find roots of a given equation.

5.3.1 Bisection Method or Bolzano Method or Halving Method


This method is based on the repeated application of intermediate value property.
Suppose we are to find a real root of the equation f (x) = 0, where f (x) is a continuous function. Let a and b be real numbers such that f (a) and f (b) have opposite signs; then the first approximation to the root is x₁ = (a + b)/2. If f (x₁) = 0, then x₁ is the root. If f (x₁) ≠ 0, then either f (a) and f (x₁) have opposite signs, in which case the second approximation will be x₂ = (a + x₁)/2, or f (x₁) and f (b) have opposite signs, in which case the second approximation will be x₂ = (x₁ + b)/2.
Now, replace a or b by x₁ as the case may be; then the next approximation will be x₃ = (x₁ + x₂)/2, and so on.
Number of Iterations Required to Reach Accuracy e
Suppose M is the length of the interval (a, b). After the first approximation x₁ the root will lie in (a, (a + b)/2) or in ((a + b)/2, b), or x₁ = (a + b)/2 is the root; thus the root will lie in an interval of length M/2. Thus, at every step, the new interval containing the root is exactly half the length of the previous one. At the end of n steps, when we obtain xₙ, the root will lie in an interval of length (b − a)/2ⁿ. Thus, the number of iterations n required to reach accuracy ε must satisfy
(b − a)/2ⁿ ≤ ε
or log(b − a) − n log 2 ≤ log ε
or n ≥ [log(b − a) − log ε]/log 2

Smallest natural number n satisfying this inequality gives the number of iterations required
to reach accuracy e.

As the length of the interval at each step is one half the length of the interval in the previous step in which the root lies, if εₙ₊₁ is the error in xₙ₊₁ and εₙ is the error in xₙ, then
εₙ₊₁ = (1/2) εₙ
Hence, convergence is linear. Also, the convergence is geometric with common ratio 1/2 < 1 and thus the process must converge to the root. Hence, the process is slow but must converge.
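A minimal sketch of the bisection method in Python (not part of the text); the stopping test follows the error bound (b − a)/2ⁿ derived above, and the function, names and tolerance are illustrative choices.

def bisection(f, a, b, eps=1e-6, max_iter=100):
    # Assumes f is continuous and f(a), f(b) have opposite signs.
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        x = 0.5 * (a + b)
        if f(x) == 0 or (b - a) / 2 < eps:
            return x
        if f(a) * f(x) < 0:
            b = x                      # root lies in (a, x)
        else:
            a = x                      # root lies in (x, b)
    return 0.5 * (a + b)

# The equation of Example 5.1 below: x^3 - 4x - 9 = 0 on (2, 3)
print(bisection(lambda x: x**3 - 4*x - 9, 2, 3))   # about 2.7065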

5.3.2 Direct Iteration Method


To find the real root of the equation f (x) = 0 by successive approximations, we write this equation as x = φ(x). Starting with a suitable x₀ we find x₁ = φ(x₀), x₂ = φ(x₁), …, xₙ₊₁ = φ(xₙ). If the sequence {x₁, x₂, …, xₙ, xₙ₊₁} has a limit ξ, then ξ is a root of f (x) = 0. The root is the same as the point of intersection of the straight line y = x and the curve y = φ(x). Figures (5.1) to (5.4) illustrate the working of the iterative method.
Figures 5.1–5.4  (Graphs of y = x and y = φ(x) with the successive points (x₀, x₁), (x₁, x₂), (x₂, x₃), …; in Figures 5.1 and 5.3 the iterates approach the intersection point, i.e. the root, while in Figures 5.2 and 5.4 they move away from it.)

From the figures, it is clear that the convergence depends upon the form of φ(x). In Figures (5.1) and (5.3) the iterations converge to the root and in Figures (5.2) and (5.4) the iterations do not converge to the root. The equation f (x) = 0 can be written in the form x = φ(x) in an infinite number of ways. For example, the equation x³ − 5 = 0 can be written as x = x³ + x − 5 or x = (5 + 4x − x³)/4 or in other ways. Thus, we now derive the condition of convergence under which the equation f (x) = 0 can be changed to the form x = φ(x) so that the process of iteration converges to the root.
Let ξ be the exact root of f (x) = 0, which is changed to x = φ(x);
then ξ = φ(ξ).
Let εₙ₊₁ be the error in xₙ₊₁
∴ ξ + εₙ₊₁ = xₙ₊₁
Now, by Lagrange's mean value theorem,
ε₁ = x₁ − ξ = φ(x₀) − φ(ξ) = (x₀ − ξ) φ′(ξ₀);  where ξ₀ lies in the interval formed by x₀ and ξ.
Similarly x₂ − ξ = (x₁ − ξ) φ′(ξ₁);  where ξ₁ lies in the interval formed by x₁ and ξ,
and so on.
xₙ₊₁ − ξ = (xₙ − ξ) φ′(ξₙ);  where ξₙ lies in the interval formed by xₙ and ξ
Multiplying these, we have
xₙ₊₁ − ξ = (x₀ − ξ) φ′(ξ₀) φ′(ξ₁) … φ′(ξₙ);  ξ₀, ξ₁, …, ξₙ lie in the interval formed by x₀, x₁, x₂, …, xₙ and ξ.
∴ |εₙ₊₁| = |ε₀| |φ′(ξ₀)| |φ′(ξ₁)| … |φ′(ξₙ)|
Let |φ′(ξᵢ)| ≤ m for i = 0, 1, 2, …, n;
then |εₙ₊₁| ≤ m^(n+1) |ε₀|
Thus, convergence is linear, and the iteration converges if m < 1.
∴ The iteration converges if |φ′(x)| < 1, where x lies in the interval formed by the iteration values.
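A minimal fixed-point iteration sketch (not part of the text): it simply repeats xₙ₊₁ = φ(xₙ) and stops when successive iterates agree to a tolerance, presuming |φ′(x)| < 1 near the root as required above; the names and tolerance are illustrative.

import math

def fixed_point(phi, x0, eps=1e-6, max_iter=100):
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < eps:       # successive iterates agree
            return x_new
        x = x_new
    return x

# The equation of Example 5.3 below: 2x = cos x + 3, written as x = (cos x + 3)/2
print(fixed_point(lambda x: (math.cos(x) + 3) / 2, math.pi / 2))   # about 1.5236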
Now, we shall be taking some examples.

Example 5.1: Find a root of the equation x3 – 4x – 9 = 0 using the bisection method in four stages.
Solution: f  (x) = x3 – 4x – 9 = 0
f (2) = −9, f (3) = 6
∵ | f (2)| > | f (3)|, the root is nearer to 3.
f (2.7) = 2.7³ − 4(2.7) − 9 = −0.117
f (2.8) = 2.8³ − 4(2.8) − 9 = 1.752

∴ root lies between 2.7 and 2.8
∴ x₁ = (2.7 + 2.8)/2 = 2.75

Approximate root x f (x) Root between Next approximation

2.7 + 2.75
x1 = 2.75 +ive 2.7 and x1 = 2.725
2
2.7 + 2.725
x2 = 2.725 +ive 2.7 and x2 = 2.7125
2
2.7 + 2.7125
x3 = 2.7125 +ive 2.7 and x3 = 2.70625
2
2.7125 + 2.70625
x4 = 2.70625 -ive x3 and x4 = 2.709375
2

\  approximate root = 2.71

Example 5.2: (a) Find a root of the equation x3 – x – 11 = 0 and correct to four decimal places
using bisection method.
(b)  Using bisection method, find a negative root of x3 – x + 11 = 0.
Solution: (a)  f ( x ) = x 3 − x − 11 = 0

f (0 ) = −11, f (1) = −11, f ( 2) = −5, f (3) = 13



\  Root lies between 2 and 3

Q f ( 2 ) < f ( 3)

\  Root is near 2

f ( 2.3) = ( 2.3) − ( 2.3) − 11 = −1.133 < 0


3

f ( 2.4 ) = ( 2.4 ) − ( 2.4 ) − 11 = 0.424 > 0


3


f ( 2.4 ) < f ( 2.3)

\  Root lies near 2.4

f ( 2.37 ) = ( 2.37 ) − ( 2.37 ) − 11 = −0.058 < 0


3

f ( 2.38 ) = ( 2.38 ) − ( 2.38 ) − 11 = 0.101 > 0


3

  
\  Root lies between 2.37 and 2.38

2.37 + 2.38
x1 = = 2.375
2

Approximate root x f (x) Root between Next approximation

2.37 + 2.375
x1 = 2.375 +ive 2.37 and x1 = 2.3725
2
2.375 + 2.3725
x2 = 2.3725 -ive x1 and x2 = 2.37375
2
2.3725 + 2.37375
x3 = 2.37375 +ive x2 and x3  2.37313
2
2.37375 + 2.37313
x4 = 2.37313 -ive x3 and x4 = 2.37344
2
2.37375 + 2.37344
x5 = 2.37344 -ive x3 and x5  2.37360
2
2.37375 + 2.37360
x6 = 2.37360 -ive x3 and x6  2.37368
2
2.37360 + 2.37368
x7 = 2.37368 +ive x6 and x7  2.37364
2
2.37368 + 2.37364
x8 = 2.37364 -ive x7 and x8  2.37366
2
2.37364 + 2.37366
x9 = 2.37366 +ive x8 and x9 = 2.37365
2
2.37364 + 2.37365
x10 = 2.37365 +ive x8 and x10  2.373645
2

\  Approximate root corrected upto four decimal places = 2.3736


(b) If a is negative root of x3 – x + 11 = 0
then –a is positive root of (–x)3 –(–x) + 11 = 0, i.e. x3 – x – 11 = 0
from part (a), 2.3736 is positive root of x3 – x –11 = 0
\ –a  = 2.3736
\ a  = –2.3736

\  a  = – 2.3736 is a negative root of x3 – x + 11 = 0

Example 5.3: Find a root of the equation 2x = cos x + 3 and correct to three decimal places using
direct iteration method.
Solution: Equation is
f ( x ) = 2 x − cos x − 3 = 0

π π
f (0 ) = −4, f   = π − 3  0.1416 and f   < f (0 )
 2  2

π π
\  Root lies between 0 and and is near .
2 2
Given equation can be written as
1
x = ( cos x + 3) = φ ( x )
2
1
\ φ ′ ( x ) = − sin x
2 
\ φ ′ ( x ) < 1 for all x.
π
\  Iteration will converge. Take x0 =
2
x f (x)
x0 = p /2 1.5
x1 = 1.5 1.5354
x2 = 1.5354 1.5177
x3 = 1.5177 1.5265
x4 = 1.5265 1.5221
x5 = 1.5221 1.5243
x6 = 1.5243 1.5232
x7 = 1.5232 1.5238
x8 = 1.5238 1.5235
x9 = 1.5235 1.5236
x10 = 1.5236 1.5236

\  upto three decimal places root = 1.524


Example 5.4: Solve by iteration 2x – log10 x = 7
Solution: Equation is
f  (x) = 2x – log10 x – 7 = 0
f (1) = −5, f ( 2)  −3.30, f (3)  −1.477, f ( 4 )  .398
(3) < 0, f (4) > 0
f 
\  Root lies between 3 and 4 and is near 4
Given equation can be written as
1
x = ( 7 + log10 x ) = φ ( x )
2 
1
φ′( x) = log10 e
  2x 
\ φ ′ ( x ) < 1 for x ∈ ( 3, 4 )

\  Iteration will converge

Take x0 = 3.7
x f (x)
3.7 3.78410
3.78410 3.78898
3.78898 3.78926
3.78926 3.78928
3.78928 3.78928

\  Root upto four decimal places = 3.7893


x2 x3 x4
Example 5.5: Find the smallest root of the equation f ( x ) = 1 − x + − + −  = 0.
( 2!) ( 3!) ( 4 !)
2 2 2
2 3 4
x x x
Solution: f ( x ) = 1 − x + − + −  = 0 (1)
( 2!) ( 3!) ( 4 !)
2 2 2

Here   f (1) > 0, f (2) < 0


\  Smallest root lies between 1 and 2
we take x0 = 1.5
(1) can be written as
x2 x3 x4
x = 1+ − + −  = φ ( x)
( 2!) ( 3!) ( 4 !)
2 2 2


(1.5) (1.5) (1.5)
2 3 4

φ (1.5 ) = 1 + − + −
( 2!) ( 3!) ( 4 !)
2 2 2


= 1 + 0.5625 – 0.0938 + .0088 – .0005 + …  1.4770
f (1.4770) = 1 + 0.5454 – 0.0895 + 0.0083 – 0.0005 + …  1.4637
f (1.4637)  1 + 0.5356 – 0.0871 + 0.0080 – 0.0005 = 1.4560
f (1.4560)  1 + 0.5300 – 0.0857 + 0.0078 – 0.0005 = 1.4516
f (1.4516)  1 + 0.5268 – 0.0850 + 0.0077 – 0.0004 = 1.4491
f (1.4491)  1 + 0.5250 – 0.0845 + 0.0077 – 0.0004 = 1.4478
\ x f (x)
1.5 1.4770
1.4770 1.4637
1.4637 1.4560
1.4560 1.4516
1.4516 1.4491
1.4491 1.4478
\  upto two decimal places x = 1.45

5.3.3 Secant and Regula-falsi Methods


Secant Method
Let x0, x1 be two approximations of root of y = f (x) = 0. Then P(x0, y0) and Q(x1, y1) are two points
on the curve y = f (x) where y0 = f (x0), y1 = f (x1). Join PQ. We approximate the curve by secant
(chord) PQ and take the point of intersection of PQ with x-axis as the next approximation x2 of
the root. Then, we take secant joining Q(x1, y1) and R(x2, y2) as approximation of the curve and
point of intersection of QR with x-axis as the next approximation x3 of the root. Proceeding in this
way, curve is approximated by secant joining (xn-1, yn-1) and (xn, yn) and its point of intersection
with x-axis as the approximation xn+1 of the root. Equation of secant joining (xn-1, yn-1) and (xn, yn)
is y − yₙ = [(yₙ − yₙ₋₁)/(xₙ − xₙ₋₁)] (x − xₙ). For the point where it meets the x-axis, we have y = 0 and x = xₙ₊₁
∴ −yₙ = [(yₙ − yₙ₋₁)/(xₙ − xₙ₋₁)] (xₙ₊₁ − xₙ)
∴ xₙ₊₁ = xₙ − [(xₙ − xₙ₋₁)/(yₙ − yₙ₋₁)] yₙ = xₙ − [(xₙ − xₙ₋₁)/(f(xₙ) − f(xₙ₋₁))] f(xₙ)
Equation of the secant joining (xₙ₋₁, yₙ₋₁) and (xₙ, yₙ) can also be written as
y − yₙ₋₁ = [(yₙ − yₙ₋₁)/(xₙ − xₙ₋₁)] (x − xₙ₋₁)
∴ Interchanging xₙ and xₙ₋₁ above, the iteration can also be written as
xₙ₊₁ = xₙ₋₁ − [(xₙ − xₙ₋₁)/(f(xₙ) − f(xₙ₋₁))] f(xₙ₋₁)
which is the iterative formula to find the approximations. Secant method is shown in Figures
(5.5) and (5.6).
Figures 5.5 and 5.6  (The curve y = f(x) with the secant through (xₙ₋₁, f(xₙ₋₁)) and (xₙ, f(xₙ)) meeting the x-axis at xₙ₊₁; in Figure 5.5 the next iterate cannot be used and the process diverges, in Figure 5.6 it converges to the root.)
In Figure (5.5), f (xn+1) cannot be found and hence iteration process diverges but in Figure (5.6),
iteration process converges to root.
In this method, at any stage of iterations, we do not test whether the root lies in (xi , xi+1) or not.
We use the last two approximations to obtain the next approximation. This is a draw-back of this
method that the iteration process may not converge.

Let x be exact root and en + 1 be error in xn + 1


\ from the iterative formula
xn − xn −1
xn +1 = xn − f ( xn )
f ( xn ) − f ( xn −1 )
  
(ξ + ε n ) − (ξ + ε n−1 )
ξ + ε n +1 = ξ + ε n − f (ξ + ε n )
f (ξ + ε n ) − f (ξ + ε n −1 )

ε f (ξ + ε n ) − ε n f (ξ + ε n −1 )
or ε n +1 = n −1
f (ξ + ε n ) − f (ξ + ε n −1 )
 ε2   ε2 
ε n −1  f (ξ ) + ε n f ′ (ξ ) + n f ′′ (ξ ) +  − ε n  f (ξ ) + ε n −1 f ′ (ξ ) + n −1 f ′′ (ξ ) + 
 2   2 
=
 f (ξ ) + ε n f ′ (ξ ) +  −  f (ξ ) + ε n −1 f ′ (ξ ) + 
    
But  f (x ) = 0
ε nε n −1
(ε n − ε n−1 ) f ′′ (ξ ) + 
\ ε n +1 = 2
(ε n − ε n−1 ) f ′ (ξ ) +  
f ′′ (ξ )
\ ε n +1 = ε n ε n −1 +
2 f ′ (ξ )
 f ′′ (ξ )
If the remaining terms are neglected, then ε n +1 = A ε n ε n −1 where A =
2 f ′ (ξ )
Let m be order of convergence then ε n +1 = k ε n , ε n = k ε n −1 for some k
m m

1
 1 m
\ ε n −1 =  ε n 
k  
\ from en + 1 = Aen en – 1
1
1 A
we have ε n +1 = A ε n . εm =
1m n 1m
ε n1+1/ m
k k 
But order of convergence is m
\ From this, we get
1
m = 1+
m

or m2 – m – 1 = 0
1± 5
\ m=
2 
But m>0
1+ 5
\ m=  1.62
2 
\ order of convergence is 1.62
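A minimal secant-method sketch (not part of the text) implementing the iteration xₙ₊₁ = xₙ − f(xₙ)(xₙ − xₙ₋₁)/(f(xₙ) − f(xₙ₋₁)) derived above; as noted, it can break down when f(xₙ) ≈ f(xₙ₋₁). Names and tolerances are illustrative.

def secant(f, x0, x1, eps=1e-8, max_iter=50):
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                       # secant is horizontal; method breaks down
            raise ZeroDivisionError("f(x_n) == f(x_{n-1}); iteration fails")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < eps:
            return x2
        x0, x1 = x1, x2
    return x1

print(secant(lambda x: x**3 - 2*x - 5, 2.0, 3.0))   # about 2.0945515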

Regula-falsi Method or Method of False Position


In this method, x0 and x1 are two approximations of the root of y = f (x) = 0 with the condition f (x0)
and f (x1) have opposite signs so that the root lies between x0 and x1. Then P(x0, y0) and Q (x1, y1)
are two points on the curve y = f (x) where y0 = f (x0) and y1 = f (x1). Here, also we approximate the
curve by secant PQ and take the point of intersection of PQ with x-axis as the next approximation
x2 of the root. We find f (x2). If f (x0) and f (x2) have opposite signs, then root lies between x0 and x2
and if f (x1) and f (x2) have opposite signs, then root lies between x1 and x2. If root lies between x1
and x2, then we interchange x0 and x1 so that P(x0, y0) is fixed point for every secant. Figure (5.7)
shows the case when P(x0, y0) is fixed point for secants. Figure (5.8) shows the case when root
lies between x1 and x2, and hence P(x0, y0) and Q(x1, y1) will be named as P(x1, y1) and Q(x2, y2)
for finding the next approximations. Now the root lies between x0 and x2. Again, we draw secant
through P(x0, y0) and R (x2, y2). Let x3 be point of intersection of this secant with x – axis, then x3
will be next approximation. When secant will be drawn joining (x0, y0) and (xn, yn), then its point
of intersection with x-axis will be xn+1. Equation of secant joining (x0, y0) and (xn, yn) is
y − yₙ = [(yₙ − y₀)/(xₙ − x₀)] (x − xₙ)
For the point where it meets the x-axis, we have y = 0 and x = xₙ₊₁
∴ −yₙ = [(yₙ − y₀)/(xₙ − x₀)] (xₙ₊₁ − xₙ)
or xₙ₊₁ − xₙ = −[(xₙ − x₀)/(yₙ − y₀)] yₙ
∴ xₙ₊₁ = xₙ − [(xₙ − x₀)/(yₙ − y₀)] yₙ = (x₀yₙ − xₙy₀)/(yₙ − y₀)
       = [x₀ f(xₙ) − xₙ f(x₀)]/[f(xₙ) − f(x₀)] = (x₀ fₙ − xₙ f₀)/(fₙ − f₀)
where fₙ = f (xₙ),  f₀ = f (x₀).
It is the iterative formula to find the approximations.
Figures 5.7 and 5.8  (The curve y = f(x) with secants drawn through the fixed point P(x₀, y₀) and Q(x₁, y₁); successive intersections with the x-axis give x₂, x₃, …. In Figure 5.8 the root lies between x₁ and x₂, so P and Q are renamed before the next step.)


This method always converges to the root as f (x0) and f (xn) for all n are of opposite signs.

Order of Convergence
Iterative formula is
x0 f n − xn f 0 x0 f ( xn ) − xn f ( x0 )
xn +1 = =
fn − f0 f ( xn ) − f ( x0 )
x n − x0
= xn − f ( xn )
f ( xn ) − f ( x0 )

Let x  be exact root of f (x) = 0 and e0, en, en+1 be error in x0, xn, xn+1, respectively.
\ From iterative formula
(ξ + ε n ) − (ξ + ε 0 )
ξ + ε n +1 = ξ + ε n − f (ξ + ε n )
f (ξ + ε n ) − f (ξ + ε 0 )

ε 0 f (ξ + ε n ) − ε n f (ξ + ε 0 )
\ ε n +1 =
f (ξ + ε n ) − f (ξ + ε 0 )

 ε 2
  ε2 
ε 0  f (ξ ) + ε n f ′ (ξ ) + n f ′′ (ξ ) +  − ε n  f (ξ ) + ε 0 f ′ (ξ ) + 0 f ′′ (ξ ) + 
2 2
=    
 f (ξ ) + ε n f ′ (ξ ) +  −  f (ξ ) + ε 0 f ′ (ξ ) + 

But f (x  ) = 0
ε0 εn
(ε n − ε 0 ) f ′′ (ξ ) + 
\ ε n +1 = 2
( ε n − ε 0 ) f ′ (ξ ) +  
ε f ′′ (ξ )
= 0 εn +
2 f ′ (ξ )

If the remaining terms are neglected, then εₙ₊₁ = K εₙ, where K = ε₀ f″(ξ)/(2 f′(ξ)), and hence the convergence is linear.

Though the method always converges to the root, yet the convergence is very slow. In the
secant method, the convergence is faster but we are not sure whether the process will converge.
Thus, we shall be dealing in the examples only the regula-falsi method.
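A minimal regula-falsi sketch (not part of the text). It is written as the standard bracketing variant, which re-tests the sign at every step (the text instead fixes the end point x₀ after the first step); the iteration itself is xₙ₊₁ = (x₀fₙ − xₙf₀)/(fₙ − f₀) as derived above, and all names and tolerances are illustrative.

def regula_falsi(f, x0, x1, eps=1e-8, max_iter=100):
    # Assumes f(x0) and f(x1) have opposite signs (root bracketed).
    f0, f1 = f(x0), f(x1)
    if f0 * f1 > 0:
        raise ValueError("root is not bracketed")
    x_new = x1
    for _ in range(max_iter):
        x_new = (x0 * f1 - x1 * f0) / (f1 - f0)
        fx = f(x_new)
        if abs(fx) < eps:
            break
        if f0 * fx < 0:
            x1, f1 = x_new, fx          # root lies between x0 and x_new
        else:
            x0, f0 = x_new, fx          # root lies between x_new and x1
    return x_new

# The equation of Example 5.8 below: x^3 - 5x + 1 = 0 on (0, 1)
print(regula_falsi(lambda x: x**3 - 5*x + 1, 0.0, 1.0))   # about 0.2016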

Example 5.6: Using regula-falsi method, find the root of x log10 x = 1.2
(a)  correct to three decimal places
(b)  correct to five decimal places
Solution: f ( x ) = x log10 x − 1.2 = 0
(2) = – 0.5979,  f (3) = 0.2314
f 

Since f (2) and f (3) have opposite signs, root lies between 2 and 3
Taking x0 = 2 and x1 = 3

we have
x0 f1 − x1 f 0 2 ( 0.2314 ) − 3 ( −0.5979 )
x2 = = = 2.721
f1 − f 0 0.2314 + 0.5979

(x2) = –0.0171
f 

Since f (x1) and f (x2) have opposite signs, so root lies between x1 and x2. To keep x0 fixed, we
interchange x0 and x1
i.e.  x0 = 3, x1 = 2

f0 = 0.2314
Iterative formula becomes
x0 f n − xn f 0 3 f n − ( 0.2314 ) xn
xn +1 = =
fn − f0 f n − 0.2314

3fn − ( 0.2314 ) x n
n xn fn x n +1 =
fn − 0.2314

0.6809
2 2.721 –0.0171 = 2.74
0.2485
6357.2638
3 2.74 –5.6346 × 10–4 = 2.7406
2319.6346
6342.9544
4 2.7406 –0.4020 × 10–4 = 2.7406451
2314.402
634187.8819
5 2.7406451 –0.8686 × 10–6 = 2.7406461
231400.8686
6 2.7406461 3.51 × 10–9

\  Root lies between 2.7406451 and 2.7406461


\  Root upto three decimals = 2.741
Root upto five decimals = 2.74065

Example 5.7: Determine the root of xex – 2 = 0 by method of false position correct to four
decimal places.
Solution: f ( x ) = xe x − 2 = 0
f (0.8) = −0.2196, f (0.9) = 0.2136

Since f (0.8) and f (0.9) have opposite signs, root lies between 0.8 and 0.9.

Taking x0 = 0.8 and x1 = 0.9, we have


x0 f1 − x1 f 0 0.8 ( 0.2136 ) − 0.9 ( −0.2196 )
x2 = = = 0.8851
f1 − f 0 0.2136 + 0.2196
f ( x2 ) = −6.9685 × 10 −3 

Since f (x1) and f (x2) have opposite signs, so root lies between x1 and x2. To keep x0 fixed, we
interchange x0 and x1
i.e. x0 = 0.9, x1 = 0.8
f0 = 0.2136
Iterative formula becomes
x0 f n − xn f 0 0.9 f n − ( 0.2136 ) xn
xn +1 = =
fn − f0 f n − 0.2136

0.9fn − ( 0.2136 ) x n
n xn fn x n +1 =
fn − 0.2136

188.04525
2 0.851 –6.9685 × 10–3 = 0.852548
220.5685
1823.2914
3 0.852548 –2.4988 × 10–4 = 0.8526034
2138.4988
182124.3076
4 0.8526034 –9.1348 × 10–6 = 0.8526054
213609.1348
5 0.8526054 –4.4333 × 10–7

As f (0.8526054) is very small in magnitude, root is near it.


\ Root corrected to four decimal places = 0.8526

Example 5.8: Use regula-falsi method to find the real root of the equation x3 – 5x + 1 = 0 and
correct to four decimals.
Solution: f (x) = x3 – 5x + 1 = 0
(0) =1, f (1) = –3
f 
Since f (0) and f (1) are of opposite signs, the root lies between x0 = 0 and x1 = 1.
x0 f1 − x1 f 0 0 − 1
x2 = = = 0.25
f1 − f 0 −4

(x2) = – 0.2344
f 
f (x0) and f (x2) are of opposite signs, so root lies between x0 and x2 in which x0 is fixed.

Iterative formula becomes


x0 f n − xn f 0 − xn x
xn +1 = = = n
fn − f0 fn − 1 1 − fn

xn
n xn fn xn+1 =
1 − fn
2 0.25 –0.2344 0.2025275
3 0.2025275 –4.3303 × 10–3 0.2016543
4 0.2016543 –7.1337 × 10–5 0.2016399
5 0.2016399 –1.0940 × 10 –6 0.2016397
6 0.2016397 –1.1842 × 10 –7

As f (0.2016397) is very small in magnitude, root is near it.


\ Root corrected to four decimal places = 0.2016

5.3.4 Newton–Raphson Method (or Newton’s Iteration Method or


Method of Tangents)
In secant method, we replace the curve by variable secants and in regula-falsi method we replace
the curve by secants in which one end point of secants is fixed. In Newton–Raphson method, the
curve is replaced by variable tangents. Let x0 is an approximation to the root of f (x) = 0. We find
the equation of tangent at (x0, y0) to the graph of curve y = f (x) where y0 = f (x0). Let this tangent
meets x-axis at x1 then x1 will be next approximation and we find (x1, y1) on the graph and draw
tangent at (x1, y1) to the curve y = f (x). Its intersection with x-axis will be x2. Proceeding in this
way, when approximation xn is found then intersection of tangent at (xn, yn) to y = f (x) with x-axis
will give next approximation xn+1.
Now, equation of tangent at (xn, yn) to y = f (x) is

y − yn = f ′ ( xn ) . ( x − xn )

For its intersection with x-axis, we have y = 0 and x = xn+1

\ − yn = f ′ ( xn ) . ( xn +1 − xn )

or xₙ₊₁ − xₙ = −yₙ/f′(xₙ) = −f(xₙ)/f′(xₙ)
or xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ)

It is Newton’s iterative formula to obtain the approximations. Geometrically, the process is shown
in Figure (5.9)

Figure 5.9  (The curve y = f(x) with tangents drawn at (x₀, y₀), (x₁, y₁), (x₂, y₂); each tangent meets the x-axis at the next approximation, so x₀, x₁, x₂, … move toward the root.)

Newton–Raphson formula can also be obtained analytically. Let εₙ be the error in the approximation xₙ to the root ξ of f (x) = 0,
then xₙ = ξ + εₙ
∴ ξ = xₙ − εₙ
∴ f (ξ) = f (xₙ − εₙ) = 0
By Taylor-series expansion
f (xₙ) − εₙ f′(xₙ) + (εₙ²/2!) f″(xₙ) − … = 0
As εₙ is small
∴ f (xₙ) − εₙ f′(xₙ) ≈ 0
∴ εₙ ≈ f (xₙ)/f′(xₙ)
∴ ξ ≈ xₙ − f (xₙ)/f′(xₙ)
∴ the next approximation to the root will be
xₙ₊₁ = xₙ − f (xₙ)/f′(xₙ)

which gives the same iterative formula we have derived geometrically.
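A minimal Newton–Raphson sketch (not part of the text), with the derivative supplied explicitly; names, tolerances and the test equation are illustrative.

import math

def newton(f, df, x0, eps=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x) vanished; choose another starting point")
        x_new = x - fx / dfx
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

# The equation of Example 5.11 below: 3x - cos x - 1 = 0
print(newton(lambda x: 3*x - math.cos(x) - 1,
             lambda x: 3 + math.sin(x), 0.6))        # about 0.6071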

Condition of Convergence
Newton’s iterative formula to find simple root of f (x) = 0 is
xₙ₊₁ = xₙ − f (xₙ)/f′(xₙ)
If we take φ(x) = x − f (x)/f′(x)
then the equation f (x) = 0 can be written as
x = φ(x)
and the iterative formula is
xₙ₊₁ = φ(xₙ) = xₙ − f (xₙ)/f′(xₙ)
But the iteration
xₙ₊₁ = φ(xₙ)
converges if |φ′(x)| < 1 when x is near the root of the equation.
Here φ′(x) = 1 − [f′(x)·f′(x) − f (x) f″(x)]/(f′(x))²
∴ Newton–Raphson iterative formula converges if
|1 − {(f′(x))² − f (x) f″(x)}/(f′(x))²| < 1
or | f (x) f″(x)| < (f′(x))²
It is the sufficient condition for convergence of Newton–Raphson formula.

Order of Convergence
Let x be exact root of f (x) = 0 and approximations xn and xn+ 1 have errors en and en+1, respectively.
Thus, from Newton–Raphson iterative formula
f ( xn )
xn +1 = xn −
f ′ ( xn )

we have
f (ξ + ε n )
ξ + ε n +1 = ξ + ε n −
f ′ (ξ + ε n )


ε n f ′ (ξ + ε n ) − f (ξ + ε n )
\ ε n +1 = 
f ′ (ξ + ε n )
 ε 2 
ε n  f ′ (ξ ) + ε n f ′′ (ξ ) +  −  f (ξ ) + ε n f ′ (ξ ) + n f ′′ (ξ ) + 
 2 
=
f ′ (ξ ) + ε n f ′′ (ξ ) + 

But f (x  ) = 0
ε n2
f ′′ (ξ ) + 
\ ε n +1 = 2
f ′ (ξ ) + ε n f ′′ (ξ )

1 f ′′ (ξ )
= ε n2 +
2 f ′ (ξ )

If the remaining terms are neglected, then
1 f ′′ (ξ )
ε n +1 = k ε n2 where k =
2 f ′ (ξ )

Hence, the convergence is of order 2, i.e. convergence is quadratic.
This fact also means that the number of correct decimals is approximately doubled at every iteration, at least if the factor (1/2) f″(ξ)/f′(ξ) is not too large.
Remark 5.1: Newton–Raphson formula also works when coefficients or roots are complex. It
should be noted that if the algebraic equation has real coefficients, then a complex root cannot
be reached if we take initial approximation real. Thus, initial approximation will have to be taken
complex with imaginary part non-zero.

Modification of Newton’s Iteration Formula for Multiple Roots


If the equation f (x) = 0 has root x of multiplicity m > 1 then Newton–Raphson iteration formula
f ( xn )
xn +1 = xn −
f ′ ( xn )

possesses linear convergence which we prove below.


Let en and en+1 are errors in xn and xn+1, respectively,
f (ξ + ε n )
then ξ + ε n +1 = ξ + ε n −
f ′ (ξ + ε n )

ε n f ′ (ξ + ε n ) − f (ξ + ε n )
\ ε n +1 =
f ′ (ξ + ε n )


 ε m −1   ε m m 
ε n  f ′ (ξ ) + ε n f ′′ (ξ ) +  + n f ( ) (ξ ) +  −  f (ξ ) + ε n f ′ (ξ ) +  + n f ( ) (ξ ) + 
m

 ( m − 1)!   m! 
=  m −1
εn
f ′ (ξ ) + ε n f ′′ (ξ ) +  + f (ξ ) + 
( m)
( m − 1)!
But x is root of f (x) = 0 of multiplicity m

f (ξ ) = f ′ (ξ ) =  = f ( (ξ ) = 0 
m −1)
\
 1 1  m (m)
 ( m − 1)! − m ! ε n f (ξ ) + 
 
\ ε n +1 =
ε nm −1
f ( ) (ξ ) + 
m

(m − 1)! 
 1
= 1 −  ε n + 
 m  
If the remaining terms are neglected
 1
ε n +1 = K ε n where K = 1 −  ≠ 0 (∵ m > 1)
 m
\ Convergence is linear.
Thus, Newton's iterative formula is slow for multiple roots and thus this formula requires modification.
If the equation f (x) = 0 has a root ξ of multiplicity m, then the equation [f (x)]^(1/m) = 0 has ξ as a simple root,
i.e.  F(x) = (f (x))^(1/m) = 0 has the simple root ξ
F′(x) = (1/m)(f (x))^(1/m − 1) f′(x)
∴ Iterative formula
xₙ₊₁ = xₙ − F(xₙ)/F′(xₙ) = xₙ − (f (xₙ))^(1/m) / [(1/m)(f (xₙ))^(1/m − 1) f′(xₙ)]
     = xₙ − m f (xₙ)/f′(xₙ)

will have quadratic convergence. This iterative formula is called modified (or sometime
­generalised) iterative formula for root of multiplicity m.
We, explicitly show that this iterative formula has quadratic convergence.
Let en and en+1 be errors in xn and xn+1, respectively,
\ from iterative formula
m f (ξ + ε n )
ξ + ε n +1 = ξ + ε n −
f ′ (ξ + ε n )


ε n f ′ (ξ + ε n ) − m f (ξ + ε n )
\  ε n +1 = 
f ′ (ξ + ε n )
 ε m −1   εm 
ε n  f ′ (ξ ) + ε n f ′′ (ξ ) + + n f (m) (ξ ) +  − m  f (ξ ) + ε n f ′ (ξ ) + + n f (m) (ξ ) +
( )
m − 1 !  m ! 
\ ε n +1 =  
ε nm −1
f ′ (ξ ) + ε n f ′′ (ξ ) +  + f ( ) (ξ ) + 
m

(m − 1)!
But x is root of multiplicity m of f (x) = 0
\   f (ξ ) = f ′ (ξ ) =  = f ( (ξ ) = 0 
m −1)

 1 m  m +1 (m +1)
 m ! − ( m + 1)! ε n f (ξ ) + 
 
\ ε n +1 =
ε nm −1
f ( ) (ξ ) + 
m

( )
m − 1 !

1 1  2 f
( )
(ξ ) m +1

= −  ε +
f (ξ )
(
n m)
 m m +1

If the remaining terms are neglected, then
f m +1 (ξ )
ε n +1 = K ε n2 where K =
m ( m + 1) f ( ) (ξ )
m

Hence, the convergence is quadratic.
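A small sketch (not part of the text) of the modified iteration xₙ₊₁ = xₙ − m f(xₙ)/f′(xₙ); the test polynomial (x − 2)²(x + 1), which has a double root at x = 2, is an illustrative choice of my own.

def modified_newton(f, df, x0, m, eps=1e-10, max_iter=100):
    # Newton iteration for a root of known multiplicity m.
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if fx == 0 or dfx == 0:          # already at the root (both vanish there)
            return x
        x_new = x - m * fx / dfx
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

f  = lambda x: x**3 - 3*x**2 + 4         # = (x - 2)^2 (x + 1), double root at x = 2
df = lambda x: 3*x**2 - 6*x
print(modified_newton(f, df, 1.5, m=2))  # converges quickly to 2.0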
Remark 5.2: Newton–Raphson technique is widely used on automatic computers for calculation
of various simple functions as inverse, square root, cube root, etc.
The quantity 1/a ; a ≠ 0 can be interpreted as a root of the equation (1/x) − a = 0. Thus, the iterative formula will be
xₙ₊₁ = xₙ + (1/xₙ − a)/(1/xₙ²) = xₙ(2 − a xₙ)
This can also be written as
1 − a xₙ₊₁ = 1 − 2a xₙ + a² xₙ² = (1 − a xₙ)²
It clearly shows the quadratic convergence.


From the above, we conclude that the iterative formula for finding 1/a can also be written as
1 − a xₙ₊₁ = (1 − a xₙ)³
i.e.  xₙ₊₁ = xₙ(3 − 3a xₙ + a² xₙ²)
which is still faster as its order of convergence is 3, but in this formula more computational work
will be required at each iteration and hence there is no real advantage.

If we want to compute √a ; a > 0, then we start with the equation x² − a = 0 and thus the iterative formula will be
xₙ₊₁ = xₙ − (xₙ² − a)/(2xₙ) = (1/2)(xₙ + a/xₙ)
A corresponding formula for the N-th root of 'a' from the starting equation f (x) = x^N − a = 0 is
xₙ₊₁ = xₙ − (xₙ^N − a)/(N xₙ^(N−1)) = [(N − 1)xₙ^N + a]/(N xₙ^(N−1))
and thus, for the cube root of 'a' we have
xₙ₊₁ = (1/(3xₙ²))(2xₙ³ + a) = (1/3)(2xₙ + a/xₙ²)
and for 1/√a, the iterative formula is
xₙ₊₁ = xₙ − (xₙ^(−2) − a)/(−2xₙ^(−3)) = (xₙ/2)(3 − a xₙ²)
This formula does not make use of division.
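A small sketch (not part of the text) of two of these division-free iterations, xₙ₊₁ = xₙ(2 − a xₙ) for 1/a and xₙ₊₁ = (xₙ/2)(3 − a xₙ²) for 1/√a; the starting values are illustrative and must lie in the basin of convergence.

def reciprocal(a, x0, n_iter=6):
    # x_{n+1} = x_n (2 - a x_n): converges quadratically to 1/a for 0 < x0 < 2/a
    x = x0
    for _ in range(n_iter):
        x = x * (2 - a * x)
    return x

def inv_sqrt(a, x0, n_iter=6):
    # x_{n+1} = (x_n / 2)(3 - a x_n^2): converges to 1/sqrt(a)
    x = x0
    for _ in range(n_iter):
        x = 0.5 * x * (3 - a * x * x)
    return x

print(reciprocal(7.0, 0.1))    # about 0.142857 = 1/7
print(inv_sqrt(5.0, 0.4))      # about 0.447214 = 1/sqrt(5)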

Method of Finding Initial Approximations if Two Roots are Given Close


to a Number
Let two roots of the equation f (x) = 0 be close to ξ; then for small ε, ξ + ε and ξ − ε are initial approximations to roots of f (x) = 0.
∴ f (ξ + ε) = f (ξ) + ε f′(ξ) + (ε²/2!) f″(ξ) + … = 0
  f (ξ − ε) = f (ξ) − ε f′(ξ) + (ε²/2!) f″(ξ) − … = 0
Since ε is small, the terms in ε³ and higher powers of ε can be neglected; adding, we have
2 f (ξ) + ε² f″(ξ) = 0
∴ ε² = −2 f (ξ)/f″(ξ)
∴ ε = ± √(−2 f (ξ)/f″(ξ))
Thus, the first approximations for the two roots can be taken near to ξ + √(−2 f (ξ)/f″(ξ)) and ξ − √(−2 f (ξ)/f″(ξ)).

Example 5.9: Using Newton–Raphson method, derive a formula to find ∛N where N is a real number. Hence evaluate ∛41 correct to four places of decimals.
Solution: ∛N is a root of the equation
f (x) = x³ − N = 0
∴ f′(x) = 3x²
∴ Newton–Raphson iterative formula
xₙ₊₁ = xₙ − f (xₙ)/f′(xₙ)
to evaluate ∛N is
xₙ₊₁ = xₙ − (xₙ³ − N)/(3xₙ²) = (1/3)(2xₙ + N/xₙ²)

Take N = 41;
Since 3³ = 27 and 4³ = 64,
∴ ∛27 = 3, ∛64 = 4

Take x0 = 3.4

1 41 
n xn x n +1 =  2x n + 2 
3 xn 

0 3.4 3.4489
1 3.4489 3.44822
2 3.44822 3.44822

∴ ∛41 correct to four places of decimals = 3.4482

Example 5.10: Apply Newton–Raphson method to compute


(i) √5      (ii) (30)^(−1/5)      (iii) ∛24
Solution: N^(1/k) is a root of the equation
f (x) = xᵏ − N = 0
f′(x) = k xᵏ⁻¹


\ Newton–Raphson iterative formula


xₙ₊₁ = xₙ − f (xₙ)/f′(xₙ)
to evaluate N^(1/k) is
xₙ₊₁ = xₙ − (xₙᵏ − N)/(k xₙᵏ⁻¹) = (1/k)[(k − 1)xₙ + N/xₙᵏ⁻¹]    (1)

(i) Take N = 5, k = 2 in (1)

1 5
xn +1 =  xn + 
2 xn 

Take x0 = 2.2

1 5
n xn x n +1 =  x n + 
2 xn 

0 2.2 2.236
1 2.236 2.236068
2 2.236068 2.236068

∴ √5 correct up to five decimal places = 2.23607


(ii) Take N = 30, k = –5 in (1)
1 30 
xn +1 = −  −6 xn + −6 
5 xn 

= 1.2 xn − 6 xn
6


We have 25 = 32
1
1
\ ( 32 ) 5 = = 0.5

2 
\ we take x0 = 0.5
n xn x n + 1 = 1.2 x n − 6 x n6
0 0.5 0.50625
1 0.50625 0.50650
2 0.50650 0.506496
1
\ ( 30 ) 5 correct up to four decimal places = 0.5065

(iii) Take N = 24, k = 3 in (1)


Iterative formula is
1 24 
xn +1 =  2 xn + 2 
3 xn 


We take x0 = 3

1 24 
n xn x n +1 =  2x n + 2 
3 xn 

0 3 2.889
1 2.889 2.88451
2 2.88451 2.88450

∴ ∛24 correct up to four decimal places = 2.8845
Example 5.11: Use Newton–Raphson method to solve the equation 3x - cos x - 1 = 0
Solution: f ( x ) = 3 x − cos x − 1 = 0
\ f ′ ( x ) = 3 + sin x

\ Newton–Raphson iteration formula is
f ( xn ) 3 x − cos xn − 1
xn +1 = xn − = xn − n
f ′ ( xn ) 3 + sin xn
xn sin xn + cos xn + 1
=
3 + sin xn

f ( 0 ) = −2, f (1) = 1.4597 ∴ we take x0 = 0.6

n xn xn sin xn + cos xn + 1 3 + sin xn xn + 1

0 0.6 2.1641 3.5646 0.6071


1 0.6071 2.1676 3.5705 0.607099
\ Root to four decimal places = 0.6071

Example 5.12: Compute to four decimal places the non-zero real root of x 2 + 4 sin x = 0
Solution: f ( x ) = x 2 + 4 sin x = 0,
f ′ ( x ) = 2 x + 4 cos x
We have −4 ≤ 4 sin x ≤ 4 

Thus, positive root must satisfy 0 < x ≤ 2


for 0 < x ≤ 2, f ( x ) > 0

\ There is no positive real root.
f ( −1) = −2.3659
  
f ( −2 ) = 0.3628
\ we take x0 = –1.9

Newton–Raphson iterative formula is


f ( xn ) x 2 + 4 sin xn
xn +1 = x n − = xn − n
f ′ ( xn ) 2 xn + 4 cos xn

x + 4 xn cos xn − 4 sin xn
2
= n

2 xn + 4 cos xn

n xn x n2 + 4x n cos x n − 4 sin x n 2xn + 4 cos xn xn + 1


0 -1.9 9.8522 -5.0932 -1.934
1 -1.934 10.2278 -5.2891 -1.93375
2 -1.93375 10.2250 -5.2876 -1.93377

\ Non-zero real root to four decimal places = –1.9338

Example 5.13: Determine the root of the equation cos x − xe x = 0 using Newton–Raphson method.
Solution: f ( x ) = cos x − xe x = 0
\ f ′ ( x ) = − sin x − ( x + 1) e x

Newton–Raphson iterative formula is
f ( xn )
xn +1 = xn −
f ′ ( xn )

cos xn − xn e xn
= xn −
− sin xn − ( xn + 1) e xn
cos xn − xn e xn
= xn +
sin xn + ( xn + 1) e xn

x e + xn sin xn + cos xn
2 xn
= n

sin xn + ( xn + 1) e xn

f ( 0 ) = 1, f (1) = −2.178

We take x0 = 0.5

n xn x n2e x n + x n sin x n + cos x n sin x n + ( x n + 1) e x n xn + 1

0 0.5 1.52948 2.9525 0.5180


1 0.5180 1.57572 3.0434 0.51775
2 0.51775 1.5750625 3.0421 0.5177550
3 0.5177550 1.5750757 3.0421033 0.5177588

\ Positive real root up to five decimal places = 0.51776



Example 5.14: Find the double root of the equation x 3 − x 2 − x + 1 = 0 near 0.9
Solution: f ( x ) = x3 − x 2 − x + 1 = 0
\ f ′ ( x ) = 3x 2 − 2 x − 1

Newton’s iterative formula for double root is
f ( xn )
xn +1 = xn − 2
f ′ ( xn )

xn3 − xn2 − xn + 1 xn3 + xn − 2
= xn − 2. =
3 xn2 − 2 xn − 1 3 xn2 − 2 xn − 1
x0 = 0.9
n xn x n3 + x n − 2 3 x n2 − 2 x n − 1 xn + 1

0 0.9 -0.371 -0.37 1.003


1 1.003 0.012027027 0.012027 1.0000022

\ Double root = 1.

Example 5.15: The equation f ( x ) = x 3 − 7 x 2 + 16 x − 12 = 0 has a double root. Starting with


initial approximation x0 = 1, find the double root correct to three decimal places using
 (i)  Newton–Raphson method
(ii)  Modified Newton–Raphson method
Solution: f ( x ) = x 3 − 7 x 2 + 16 x − 12 = 0
f ′ ( x ) = 3 x 2 − 14 x + 16

(i)  By Newton–Raphson method
Newton’s iterative formula is
f ( xn ) xn3 − 7 xn2 + 16 xn − 12 2 x 3 − 7 xn2 + 12
x n +1 = x n − = xn − = 2n
f ′ ( xn ) 3 xn − 14 xn + 16
2
3 xn − 14 xn + 16
x0 = 1

n xn 2 x n3 − 7 x n2 + 12 3 x n2 − 14 x n + 16 xn + 1
0 1 7 5 1.4
1 1.4 3.768 2.28 1.65
2 1.65 1.92675 1.0675 1.80
3 1.80 0.984 0.52 1.89
4 1.89 0.497838 0.2563 1.94
5 1.94 0.257568 0.1308 1.97
6 1.97 0.124446 0.0627 1.98

n xn 2 x n3 − 7 x n2 + 12 3 x n2 − 14 x n + 16 xn + 1
7 1.98 0.081984 0.0412 1.990
8 1.990 0.040498 0.0203 1.9950
9 1.9950 0.0201247 0.010075 1.9975
10 1.9975 0.0100312 5.018749 × 10–3 1.9987
11 1.9987 5.208446 × 10–3 2.605069 × 10–3 1.9994
12 1.9994 2.401801 × 10–3 1.201079 × 10–3 1.9997
13 1.9997 1.200451 × 10–3 6.00269 × 10–4 1.99986

\ Double root corrected to three decimal places = 2.000


(ii)  By Modified Newton–Raphson method
Modified Newton–Raphson iterative formula is
f ( xn ) x 3 − 7 x 2 + 16 xn − 12
xn +1 = xn − 2 = xn − 2 n 2 n
f ′ ( xn ) 3 xn − 14 xn + 16
xn3 − 16 xn + 24
=
3 xn2 − 14 xn + 16 
x0 = 1

n xn x n3 − 16 x n + 24 3 x n2 − 14 x n + 16 xn + 1

0 1 9 5 1.8
1 1.8 1.032 0.52 1.985
2 1.985 0.0613466 0.030675 1.99989
3 1.99989 4.40073 × 10–4 2.20036 × 10–4 2.0000045

\ Double root corrected to three decimal places = 2.000

Example 5.16: The equation x 4 − 5 x 3 − 12 x 2 + 76 x − 79 = 0 has two roots close to x = 2. Find


these roots to four decimals.
Solution:    f ( x ) = x 4 − 5 x 3 − 12 x 2 + 76 x − 79 = 0
Let approximations of two roots close to 2 are 2 + e  and 2 - e
ε2
\ f ( 2 + ε ) = f ( 2) + ε f ′ ( 2) + f ′′ ( 2 ) +  = 0 (1)
2!
ε2
f ( 2 − ε ) = f ( 2) − ε f ′ ( 2) + f ′′ ( 2 ) −  = 0 (2)
2!
Since e is small so neglect terms of e 3 and higher powers of e
Add (1) and (2)
2 f ( 2 ) + ε 2 f ′′ ( 2 ) = 0

2 f ( 2)
\ ε2 = − 
f ′′ ( 2 )
Now, f ( x ) = x 4 − 5 x 3 − 12 x 2 + 76 x − 79

f ′ ( x ) = 4 x 3 − 15 x 2 − 24 x + 76

f ′′ ( x ) = 12 x 2 − 30 x − 24

\ f ( 2) = 1, f ′′ ( 2) = −36

2 (1)
1
\ ε =− 2
=
−36 18 
1
\ ε =±  ±0.24
18 
Initial approximations to two roots close to 2 are 2.24 and 1.76
Newton’s iterative formula is
f ( xn ) x 4 − 5 x 3 − 12 xn2 + 76 xn − 79
xn +1 = xn − = xn − n 3 n
f ′ ( xn ) 4 xn − 15 xn2 − 24xxn + 76
3 xn4 − 10 xn3 − 12 xn2 + 79
=
4 xn3 − 15 xn2 − 24 xn + 76 
Taking x0 = 2.24

n xn 3 x n4 − 10 x n3 − 12 x n2 + 79 4 x n4 − 15 x n2 − 24 x n + 76 xn + 1

0 2.24 -18.076511 -8.066304 2.2410


1 2.2410 -18.145914 -8.0972809 2.2410

Taking x0 = 1.76

n xn 3 x n4 − 10 x n3 − 12 x n2 + 79 4 x n4 − 15 x n2 − 24 x n + 76 xn + 1

0 1.76 16.096417 9.103104 1.7682


1 1.7682 15.523893 8.7785616 1.7684
2 1.7684 15.509914 8.7706563 1.7684

\ Two roots close to 2 up to 4 decimals are 2.2410 and 1.7684

Exercise 5.1

 1. Perform five iterations of the bisection method to obtain the smallest positive root of the equation f (x) = x³ − 5x + 1 = 0.
 2. Find a real root of the equation f (x) = x³ − x − 1 = 0 using bisection method.
 3. Perform five iterations of the bisection method to obtain a root of the equation f (x) = cos x − x eˣ = 0.
 4. Isolate the roots of the equation x³ − 4x + 1 = 0. Find all the roots using bisection method.
 5. Evaluate √5 by direct iteration method.
 6. Find a real root of the equation x³ − x − 1 = 0 correct to two decimal places by iterative method.
 7. Find a real root of the equation x³ + x² − 1 = 0 by iteration method.
 8. Solve the equation eˣ − 3x = 0 by direct iteration.
 9. Starting with x = 0.12, solve x = 0.21 sin(0.5 + x) by iteration method.
10. Use the method of iteration to solve the equation x = exp(−x), starting with x = 1.00. Perform four iterations, taking the readings up to four decimal places.
11. Find a real root of the equation x³ − 2x − 5 = 0 by the method of false position correct to three decimal places.
12. Find an approximate value of the root of the equation x³ + x − 1 = 0 near x = 1, using the method of false position.
13. Using regula-falsi method, find the real root of the equation f (x) = x³ − 5x + 1 = 0 which lies in the interval (0, 1). Perform iterations to obtain this root.
14. The equation x⁶ − x⁴ − x³ − 1 = 0 has one real root between 1.4 and 1.5. Find this root to four decimal places by false position method.
15. The negative root of the equation 3x³ + 8x² + 8x + 5 = 0 is to be determined. Find the root by regula-falsi method. Stop iteration when | f (x₂)| < 0.02.
16. Find the root of x eˣ = 3 by regula-falsi method correct to three decimal places.
17. Using false position method, solve the equation 3x − cos x − 1 = 0.
18. Using regula-falsi method, find the root of the equation x³ + x² − 3x − 3 = 0 lying between 1 and 2.
19. Show that the initial approximation x₀ for finding 1/N, where N is a positive integer, by the Newton–Raphson method must satisfy 0 < x₀ < 2/N for convergence.
20. Use Newton's method to find a root of the equation x³ − 3x − 5 = 0.
21. Perform four iterations of the Newton–Raphson method to find the smallest positive root of the equation f (x) = x³ − 5x + 1 = 0.
22. Find a positive root of x⁴ − x = 10 using Newton–Raphson method.
23. Find the largest root of x² − 5x + 2 = 0 correct to five decimal places by Newton–Raphson method.
24. Find the positive root of the equation x⁴ − 3x³ + 2x² + 2x − 7 = 0 by Newton–Raphson method.
25. Find the root of the equation x log₁₀ x = 1.2 by Newton–Raphson method correct to six decimal places.
26. Find the smallest root of the equation e⁻ˣ − sin x = 0 correct to four decimal places.
27. Use Newton–Raphson method to solve the transcendental equation eˣ = 5x.
28. Find an interval of length 1 in which the root of f (x) = 3x³ − 4x² − 4x − 7 = 0 lies. Take the middle point of this interval as the starting approximation and iterate two times, using the Newton–Raphson method.
29. Find a double root of the equation x³ − 5x² + 8x − 4 = 0 near 1.8.
30. Find the double root of x³ − x² − x + 1 = 0 close to 0.8.

Answers 5.1
 1. 0.20 2. 1.323 3. 0.5 4. – 2.115, 0.254, 1.86 5. 2.23607
 6. 1.32 7. 0.7549 8. 0.6191 9. 0.1224 10. 0.6062
11. 2.095 12. 0.6823 13. 0.2016 14. 1.4036 15. –1.66
16. 1.050 17. 0.607 18. 1.732 20. 2.2790 21. 0.201640
22. 1.8556 23. 4.56155 24. 2.32672 25. 2.740646 26. 0.5885
27. 0.25917 28. 2.3 29. 2 30. 1

5.4 System of Linear Equations


In this unit, we shall be solving n linear equations in n unknowns having unique solution. We shall
distinguish mainly between direct and iterative methods. Firstly, we consider the direct methods.

5.4.1 Gauss Elimination Method


Let n linear equations in n unknowns be
a11 x1 + a12 x2 + … + a1n xn = y1
a21 x1 + a22 x2 + … + a2 n xn = y2

an1 x1 + an 2 x2 + … + ann xn = yn
If a11 = 0, then equations are permuted in a suitable way so that coefficient a11 of x1 in first equa-
tion is not zero. Then a11 is called pivot element. Divide first equation by a11 and then subtract
this equation multiplied by a21 , a31 , …, an1 from the second, third, …, nth equation. Thus, x1 will
remain in the first equation only. Next the coefficient a22 ′ of the variable x2 if not zero will be
pivot element (if it is zero, then permute second equation with any of third, fourth, …, nth equa-
tion in which coefficient of x2 is non zero). We divide the second equation by this pivot element
and then x2 is eliminated in a similar way from the third, fourth, …, nth equation. The procedure
is continued until x1 , x2 , …, xn −1 are eliminated from the last equation. The coefficient of xn in
last equation is pivot element and divide this equation by this pivot element. When elimination is
completed the system has the following form
x₁ + a′₁₂x₂ + a′₁₃x₃ + … + a′₁ₙxₙ = z₁
        x₂ + a′₂₃x₃ + … + a′₂ₙxₙ = z₂
        ⋮
                              xₙ = zₙ

The new coefficient matrix is upper triangular matrix with unit elements in the diagonal. Then
xn , xn −1 , …, x2 , x1 are obtained from the nth, (n -1)th, …, second and first equations by back
­substitutions.

The above process can also be performed by writing the given equations in matrix form AX = Y,
where the augmented matrix is
[A : Y] = [ a₁₁ a₁₂ … a₁ₙ | y₁ ;  a₂₁ a₂₂ … a₂ₙ | y₂ ;  … ;  aₙ₁ aₙ₂ … aₙₙ | yₙ ],   X = (x₁, x₂, …, xₙ)ᵀ,
and performing the row transformations as explained above to reduce the augmented matrix [A : Y] to the form
[A′ : Z] = [ 1 a′₁₂ a′₁₃ … a′₁ₙ | z₁ ;  0 1 a′₂₃ … a′₂ₙ | z₂ ;  … ;  0 0 … 1 | zₙ ]
Remark 5.3:
    (i) It may occur that the pivot element be different from zero but very small in magnitude.
It will give too large errors. The reason is that the small coefficient is usually formed as
the difference between two almost equal numbers. This difficulty is overcome by suitable
permutation of this equation with an equation below it which has the largest coefficient in
magnitude. This process is called partial pivoting.
 (ii) In practical work, it is recommended that an extra column be carried which contains the sum
of the coefficients in one row. Elements in this column are treated like the other elements in
the row in which they lie and same transformations be applied to these elements. In this way,
we get an easy check throughout the computation in each step by comparing the row sums
with the numbers in this extra column.
(iii) In the reduction of AY to A′Z if A′ contains zero row, then there will be no solution or
infinite number of solutions.

5.4.2 Gauss–Jordan Method
Jordan’s modification to Gauss elimination method is that the elimination is performed not only
in the equations below but also in the equations above. In this way, we shall finally obtain unit
coefficient matrix, i.e. identity matrix and we have the solution without further computations.

Remark 5.4: In the Gauss elimination method, the number of operations of addition, subtraction, multiplication and division comes out to be approximately n³/3 (this includes the operations during back substitution), whereas in the Gauss–Jordan method the number of operations is approximately n³/2. Hence the Gauss elimination method requires fewer operations than the Gauss–Jordan method, and so Gauss elimination is preferred over Gauss–Jordan.
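
For comparison, here is a minimal Python sketch of the Gauss–Jordan variant, in which each pivot row is used to eliminate the unknown from all other rows so that no back substitution is needed; gauss_jordan is our own name and, for brevity, the sketch assumes non-zero pivots (no pivoting).

import numpy as np

def gauss_jordan(A, y):
    """Gauss-Jordan elimination: reduce [A : y] to [I : x] (illustrative sketch, no pivoting)."""
    M = np.column_stack([np.array(A, dtype=float), np.array(y, dtype=float)])
    n = M.shape[0]
    for k in range(n):
        M[k] = M[k] / M[k, k]              # make the pivot 1
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]     # eliminate x_k from every other row
    return M[:, n]                         # last column now holds the solution

# Example 5.19: expected solution (-12.75, 14.375, 8.75)
print(gauss_jordan([[2, 2, 1], [3, 2, 2], [5, 10, -8]], [12, 8, 10]))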

Example 5.17: Solve the following equations by Gauss elimination method with partial pivoting.
2 x2 + x4 = 0, 2 x1 + 2 x2 + 3 x3 + 2 x4 = −2, 4 x1 − 3 x2 + x4 = −7 and 6 x1 + x2 − 6 x3 − 5 x4 = 6

Solution: With initial partial pivoting, the augmented matrix (with the extra check column of row sums) is

                              check
[ 6    1   −6   −5     6 ]      2
[ 0    2    0    1     0 ]      3
[ 2    2    3    2    −2 ]      7
[ 4   −3    0    1    −7 ]     −5

Operate (1/6) R1

[ 1   .16667   −1   −.83333    1 ]    .33334
[ 0   2         0    1         0 ]    3
[ 2   2         3    2        −2 ]    7
[ 4  −3         0    1        −7 ]   −5

Operate R3 − 2 R1 , R4 − 4 R1

1 .16667 −1 −.83333 1  .33334
0 2 0 1 0  3

0 1.66666 5 3.66666 −4  6.33332
 
0 −3.66668 4 4.33332 −111 −6.33336

Operate R24

1 .16667 −1 −.83333 1  .33334


0 −3.66668 4 4.33332 −11 −6.33336
 
0 1.66666 5 3.66666 −4  6.33332
 
0 2 0 1 0  3

 1 
 − 3.66668  R2
  

1 .16667 −1 −.83333 1  .33334


0 1 −1.09091 −1.18181 2.99999  1.72727

0 1.66666 5 3.66666 −4  6.33332
 
0 2 0 1 0  3

R3 − 1.66666 R2 , R4 − 2 R2 

1 .16667 −1 −.83333 1  .33334
0 1 −1 . 09091 −1 . 18181 2. 99999  1.72727
 
0 0 6.81818 5.63634 −8.99996  3.45456
 
0 0 2.18182 3.36362 −5.99998  −0.45454


(1/6.81818) R3

1 .16667 −1 −.83333 1  .33334


0 1 −1.09091 −1.18181 2.99999  1.72727

0 0 1 .82666 −1.31999 .50667
 
0 0 2.18182 3.36362 −5.99998 −00.45454

R4 − 2.18182 R3 

1 .16667 −1 −.83333 1  .33334


0 1 −1.09091 −1.18181 2.99999  1.72727

0 0 1 .82666 −1.31999  .50667
 
0 0 0 1.56000 −3.12000  −1.56000

(1/1.56000) R4
1 .16667 −1 −.83333 1 
0 1 −1.09091 −1.18181 2.99999 

0 0 1 .82666 −1.31999 
 
0 0 0 1 −2 

∴ x4 = – 2

x3 = −1.31999 − .82666 ( −2 ) = 0.3333

x2 = 3 + 1.09091( .33333) + 1.18181( −2 ) = 1



x1 = 1 − .16667(1) + 1( .33333) + .83333 ( −2 ) = −0.5

∴ x1 = −0.5, x2 = 1, x3 = 0.3333, x4 = −2 

Example 5.18: Solve the following equations using Gauss elimination method. Find the solution
correct up to three decimal places.
5 x1 − x2 + x3 = 10, 2 x1 + 4 x2 = 12, x1 + x2 + 5 x3 = −1
Solution: Given equations are
x1 + x2 + 5 x3 = −1 

2 x1 + 4 x2 = 12
5 x1 − x2 + x3 = 10 


Augmented matrix is
check
1 1 5 −1 6
 2 4 0 12  18
 
 5 −1 1 10  15
R2 − 2 R1 , R3 − 5 R1
1 1 5 −1 6
0 2 −10 14  6
 
0 −6 −24 15  −15
(1/2) R2
1 1 5 −1 6
0 1 −5 7  3

0 −6 −24 15  −15
R3 + 6 R2
1 1 5 −1 6
0 1 −5 7  3
 
0 0 −54 57 3
 1 
 − 54  R3
 
1 1 5 −1  6
0 1 −5 7  3

0 0 1 −1.0556  −.0556
∴ x3 = – 1.056
x2 = 7 + 5 ( −1.0556 ) = 1.722

x1 = −1 − 1(1.722 ) − 5 ( −1.0556 ) = 2.556

∴ x1 = 2.556, x2 = 1.722, x3 = −1.056 

Example 5.19: Solve the following equations using Gauss–Jordan method up to three decimal
places
2 x1 + 2 x2 + x3 = 12, 3 x1 + 2 x2 + 2 x3 = 8, 5 x1 + 10 x2 − 8 x3 = 10
Solution: Given equations are
5 x1 + 10 x2 − 8 x3 = 10
3 x1 + 2 x2 + 2 x3 = 8 

2 x1 + 2 x2 + x3 = 12 


Augmented matrix is
check
 5 10 −8 10  17
3 2 2 8  15
 
 2 2 1 12 17
(1/5) R1
1 2 −1.6 2  3.4
3 2 2 8  15

 2 2 1 12 17
R2 − 3R1 , R3 − 2 R1
1 2 −1.6 2 3.4
0 −4 6.8 2 4.8
 
0 −2 4.2 8  10.2

 1
 − 4  R2
 
1 2 −1.6 2  3.4
0 1 −1.7 −.5 −1.2
 
0 −2 4.2 8  10.2
R1 − 2 R2 , R3 + 2 R2
1 0 1.8 3  5.8
0 1 −1.7 −.5 −1.2
 
0 0 0.8 7  7.8
(1/0.8) R3
1 0 1.8 3  5.8
0 1 −1.7 −.5  −1.2
 
0 0 1 8.75 9.75

R1 − 1.8 R3 , R2 + 1.7 R3

1 0 0 −12.75 −11.75
0 1 0 14.375  15.375
 
0 0 1 8.75  9.75
∴ x1 = −12.75, x2 = 14.375, x3 = 8.75 

Example 5.20: Solve the following system of equations by


(i) Gauss elimination method
(ii) Gauss–Jordan method
10 x − 7 y + 3 z + 5u = 6, − 6 x + 8 y − z − 4u = 5
3 x + y + 4 z + 11u = 2, 5 x − 9 y − 2 z + 4u = 7
Solution: (i) By Gauss elimination method
Augmented matrix is
check
10 −7 3 5 6 17
 −6 8 −1 −4 5 2

3 1 4 11 2 21
 
 5 −9 −2 4 7 5
(1/10) R1
 1 −0.7 0.3 0.5 0.6  1.7
 −6 8 −1 −4 5  2

3 1 4 11 2  21
 
5 −9 −2 4 7  5
R2 + 6 R1 , R3 − 3R1 , R4 − 5R1

1 −0.7 0.3 0.5 0.6  1.7


0 3.8 0.8 −1 8.6  12.2
 
0 3.1 3.1 9.5 0.2 15.9
 
0 −5.5 −3.5 1.5 4  −3.5
(1/3.8) R2
1 −0.7 0.3 0.5 0.6  1.7
0 1 .2105 −.2632 2.2632 3.2105
 (1)
0 3.1 3.1 9.5 0.2  15.9
 
0 −5.5 −3.5 1.5 4  −3.5
R3 − 3.1R2 , R4 + 5.5 R2
1 −0.7 0.3 0.5 0.6  1.7
0 1 .2105 −.2632 2.2632  3.2105

0 0 2.4474 10.3159 −6.8159 5.9474
 
0 0 −2.3422 0.0524 16.4476  14.1578

(1/2.4474) R3
1 −0.7 0.3 0.5 0.6  1.7
0 1 .2105 −.2632 2.2632  3.2105

0 0 1 4.2150 −2.7850  2.4300
 
 0 0 −2 . 3422 0 .0524 16.4476  14.1578
R4 + 2.3422 R3

1 −0.7 0.3 0.5 0.6  1.7


0 1 .2105 −.2632 2.2632  3.2105

0 0 1 4.2150 −2.7850  2.4300
 
0 0 0 9.9248 9.9246  19.8494
(1/9.9248) R4
1 −0.7 0.3 0.5 0.6  1.7
0 1 .2105 −.2632 2.2632  3.2105

0 0 1 4.2150 −2.7850  2.4300
 
0 0 0 1 1  2
∴ u = 1

z = −2.7850 − 4.2150(1) = −7 
y = 2.2632 − .2105( −7) + .2632(1) = 4 

x = .6 + .7( 4) − .3( −7) − .5(1) = 5 

∴ x = 5, y = 4, z = −7, u = 1 

(ii)  By Gauss–Jordan Method


Solution up to (1) is same
1 −0.7 0.3 0.5 0.6  1.7
0 1 0. 2105 −. 2632 2 . 2632 3.2105

0 3.1 3.1 9.5 0.2  15.9
 
0 −5.5 −3.5 1.55 4  −3.5
R1 + 0.7 R2 , R3 − 3.1R2 , R4 + 5.5 R2

1 0 .4474 .3158 2.1842  3.9474


0
 1 .2105 −.2632 2.2632  3.2105
0 0 2.4474 10.3159 −6.8159 5.9474
 
0 0 −2.3422 .0524 16.4476  14.1578

(1/2.4474) R3

1 0 .4474 .3158 2.1842  3.9474


0 1 .2105 −.2632 2.2632  3.2105

0 0 1 4.2150 −2.7850  2.4300
 
0 0 −2.3422 0.0524 16.4476  14.1578
R1 − .4474 R3, R2 − .2105 R3, R4 + 2.3422 R3

1 0 0 −1.5700 3.4302  2.8602


0 1 0 −1.1505 2.8494  2.6989

0 0 1 4.2150 −2.7850  2.43
 
0 0 0 9.9248 9.9246  19.8494
(1/9.9248) R4
1 0 0 −1.5700 3.4302  2.8602
0 1 0 −1.1505 2.8494  2.6989

0 0 1 4.2150 −2.7850  2.43
 
0 0 0 1 1  2
R1 + 1.57 R4 , R2 + 1.1505 R4 , R3 − 4.2150 R4

1 0 0 0 5.0002 
0 1 0 0 3.9999 

0 0 1 0 −7 
 
0 0 0 1 1 
∴ x = 5, y = 4, z = −7, u = 1 

5.4.3 Triangularisation Method
This method is also called LU decomposition method or factorisation method.
In Gauss elimination method, we have seen that the coefficient matrix A is reduced to upper
triangular matrix with diagonal elements unity. In fact, the elimination can be interpreted as the
premultiplication of A by a lower triangular matrix. Hence, in three dimensions we have

 l11 0 0   a11 a12 a13  1 u12 u13 


l l22 0   a21 a22 a23  = 0 1 u23 
 21
 l31 l32 l33   a31 a32 a33  0 0 1 

If the lower and upper triangular matrices are denoted by L and U,



then LA = U
or A = L–1 U
Now L–1 is also a lower triangular matrix, we can find a factorisation of A as a product of one
lower triangular matrix and one upper triangular matrix. Replacing L–1 by L, we have LU = A.
Thus, any n × n matrix can be factorised as A = LU where
     [ l11   0    0   …   0  ]
     [ l21  l22   0   …   0  ]
L =  [  ⋮                 ⋮  ]
     [ ln1  ln2  ln3  …  lnn ]

and

     [ u11  u12  u13  …  u1n ]
     [ 0    u22  u23  …  u2n ]
U =  [ ⋮                  ⋮  ]
     [ 0    0    0    …  unn ]
Now, each of L and U has n(n + 1)/2 unknowns, so the total number of unknowns is n(n + 1), while the number of equations is n², obtained by multiplying each row of L with each column of U. Thus, we have to fix n of the unknowns ourselves; we take either all lii = 1 or all uii = 1.
Thus, we have
LUX = Y
Suppose, UX = Z
∴ LZ = Y
From LZ = Y, we can find Z by forward substitution, and then from UX = Z, we can find X by back substitution, and the equations will be solved.
Now, we explain the method of factorisation A to LU. In three dimensions, if L has diagonal
elements unity we have
1 0 0  u11 u12 u13   a11 a12 a13 
l 1 0   0 u22 u23  =  a21 a22 a23 
 21
 l31 l32 1   0 0 u33   a31 a32 a33 

The following equations in order will be obtained


u11 = a11
u12 = a12
u13 = a13

l21u11 = a21 
a
⇒ l21 = 21 
u11
l21u12 + u22 = a22 

⇒ u22 = a22 − l21u12 
l21u13 + u23 = a23 

⇒ u23 = a23 − l21u13 

l31u11 = a31 

a
⇒ l31 = 31 
u11

l31u12 + l32 u22 = a32 


a32 − l31u12
⇒ l32 = 
u22
l31u13 + l32 u23 + u33 = a33 

⇒ u33 = a33 − l31u13 − l32 u23 

Thus, all the unknowns will be obtained even without writing these equations if we find them in
this order.
Similarly if unity diagonal elements are taken in U, we can write L and U in LU = A without
writing the equations.
Remark 5.5: In determining the elements of L and U in the above equations, the quantities
u11 = a11   and   u22 = a22 − l21 u12 = a22 − a21 a12/a11 = (a11 a22 − a21 a12)/a11
appear in the denominators, and hence the unknowns can be found only if
a11 ≠ 0   and   a11 a22 − a21 a12 ≠ 0 (the first and second leading minors of A).
Thus, LU decomposition is possible only when the leading minors of A are non-zero.

5.4.4 Doolittle Method
In this method, augmented matrix [A : Y] is factorised into L[U : Z] where diagonal elements of L
are unity. The elements of L and [U : Z] are obtained in the same way as in the triangularisation.
In [U : Z], Z will be obtained in the same way as elements of U as A is augmented with Y. As in
three dimensions
 1 0 0   u11 u12 u13 z1   a11 a12 a13 y1 
l    
 21 1 0   0 u22 u23 z2  =  a21 a22 a23 y2 
 l31 l32 1   0 0 u33 z3   a31 a32 a33 y3 

z1, z2, z3 will be obtained from


z1 = y1, l21 z1 + z2 = y2 , l31 z1 + l32 z2 + z3 = y3
Then, from the equations
u11 x1 + u12 x2 + u13 x3 = z1
u22 x2 + u23 x3 = z2 

u33 x3 = z3 

By back substitution, we find x1 , x2 , x3.
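
As a companion to the description above, here is a small Python sketch of a Doolittle-style factorisation (unit diagonal in L) applied to the augmented matrix, followed by back substitution; the helper name doolittle_solve is ours and the code assumes all leading minors of A are non-zero (Remark 5.5).

import numpy as np

def doolittle_solve(A, y):
    """Factorise [A : y] as L [U : z] with unit diagonal in L, then back-substitute (sketch)."""
    A = np.array(A, dtype=float)
    y = np.array(y, dtype=float)
    n = len(y)
    Az = np.column_stack([A, y])          # augmented matrix [A : y]
    L = np.eye(n)
    Uz = np.zeros((n, n + 1))             # will hold [U : z]
    for i in range(n):
        # Row i of [U : z]:  u_ij = a_ij - sum_{k<i} l_ik u_kj   (j >= i, including the z column)
        for j in range(i, n + 1):
            Uz[i, j] = Az[i, j] - L[i, :i] @ Uz[:i, j]
        # Column i of L:     l_ji = (a_ji - sum_{k<i} l_jk u_ki) / u_ii   (j > i)
        for j in range(i + 1, n):
            L[j, i] = (Az[j, i] - L[j, :i] @ Uz[:i, i]) / Uz[i, i]
    # Back substitution on U x = z
    U, z = Uz[:, :n], Uz[:, n]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (z[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Example 5.23(ii): expected solution (0.2, 0.4, 0.8)
print(doolittle_solve([[5, 4, 1], [10, 9, 4], [10, 13, 15]], [3.4, 8.8, 19.2]))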

5.4.5 Crout’s Method
Crout suggested a technique to determine systematically the entries of the lower and upper triangular matrices, working directly with the augmented matrix, in which the diagonal elements of the upper triangular matrix are unity.
Augmented matrix from given equations is
[ a11  a12  …  a1n   y1 ]
[ a21  a22  …  a2n   y2 ]
[  ⋮    ⋮         ⋮   ⋮ ]
[ an1  an2  …  ann   yn ]

The unknown elements of the lower triangular matrix L and of the upper triangular matrix U (with unit diagonal elements), augmented with z, are written in one matrix

             [ l11  u12  u13  …  u1n   z1 ]
             [ l21  l22  u23  …  u2n   z2 ]
[A′ : Z]  =  [  ⋮                   ⋮   ⋮ ]
             [ ln1  ln2  ln3  …  lnn   zn ]
This matrix is called the derived matrix or auxiliary matrix. Without writing any equations the
elements of auxiliary matrix are found in the order elements of first column, remaining elements
of first row, remaining elements of second column, remaining elements of second row, remaining
elements of third column, remaining elements of third row, ……. These elements are
l11 = a11,   u12 = a12/l11,   u13 = a13/l11,   …,   u1n = a1n/l11,   z1 = y1/l11
l21 = a21,   l22 = a22 − l21 u12,   u23 = (a23 − l21 u13)/l22,   …,   u2n = (a2n − l21 u1n)/l22,   z2 = (y2 − l21 z1)/l22
l31 = a31,   l32 = a32 − l31 u12,   l33 = a33 − l31 u13 − l32 u23,   …,   u3n = (a3n − l31 u1n − l32 u2n)/l33,   z3 = (y3 − l31 z1 − l32 z2)/l33
⋮
ln1 = an1,   ln2 = an2 − ln1 u12,   ln3 = an3 − ln1 u13 − ln2 u23,   …,   lnn = ann − Σ_{j=1}^{n−1} lnj ujn,   zn = (yn − Σ_{j=1}^{n−1} lnj zj)/lnn

After finding [ A′ : Z ] , we have

[ 1  u12  u13  …  u1n ] [ x1 ]   [ z1 ]
[ 0   1   u23  …  u2n ] [ x2 ] = [ z2 ]
[ ⋮                 ⋮ ] [  ⋮ ]   [  ⋮ ]
[ 0   0    0   …   1  ] [ xn ]   [ zn ]

By back substitution, we can find x1 , x2 ,..., xn.
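
A small Python sketch of Crout's scheme that fills the auxiliary (derived) matrix in the order described above, a column of L, then the remaining entries of the corresponding row of [U : Z], and so on, finishing with back substitution; crout_solve is our own illustrative name.

import numpy as np

def crout_solve(A, y):
    """Crout's method: L has a general diagonal, U has unit diagonal (illustrative sketch)."""
    A = np.array(A, dtype=float)
    y = np.array(y, dtype=float)
    n = len(y)
    L = np.zeros((n, n))
    U = np.eye(n)
    z = np.zeros(n)
    for j in range(n):
        # Column j of L (elements on and below the diagonal)
        for i in range(j, n):
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        # Remaining elements of row j of U and of z
        for k in range(j + 1, n):
            U[j, k] = (A[j, k] - L[j, :j] @ U[:j, k]) / L[j, j]
        z[j] = (y[j] - L[j, :j] @ z[:j]) / L[j, j]
    # Back substitution on U x = z (unit diagonal)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = z[i] - U[i, i + 1:] @ x[i + 1:]
    return x

# Example 5.24(i): expected solution approximately (1.9444, 1.6111, 0.2778)
print(crout_solve([[2, 3, 1], [1, 2, 3], [3, 1, 2]], [9, 6, 8]))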

Example 5.21: Show that the LU decomposition method fails to solve the system of equations
x1 + x2 − x3 = 2 

2 x1 + 2 x2 + 5 x3 = −3 

3 x1 + 2 x2 − 3 x3 = 6 

in this order. After writing the equations in other order, solve the equations.
Solution: Given equations in matrix form can be written as

1 1 −1  x   2 
 2 2 5   y  =  −3
    
1 1 
 3 2 −3   z   6 
The leading minor = 2−2 = 0
2 2
Hence, LU decomposition method fails to solve the equations written in this order.
If we write the given equations in the order
x1 + x2 − x3 = 2

3 x1 + 2 x2 − 3 x3 = 6 

2 x1 + 2 x2 + 5 x3 = −3 

then AX = Y

1 1 −1  x1  2
where A =  3 2 −3 , X =  x2  , Y =  6 
 2 2 5   x3   −3

 l11 0 0  1 u12 u13  1 1 −1
Let l l22 0  0 1 u23  =  3 2 −3
 21
 l31 l32 l33  0 0 1   2 2 5 


∴  l11 = 1,   u12 = 1/1 = 1,   u13 = −1/1 = −1
   l21 = 3,   l22 = 2 − (3)(1) = −1,   u23 = (−3 + 3)/(−1) = 0
   l31 = 2,   l32 = 2 − (2)(1) = 0,    l33 = 5 − 2(−1) − 0(0) = 7

1 0 0  1 1 −1 1 1 −1
\  3 −1 0  0 1 0 =  3 2 −3
    
 2 0 7 0 0 1  2 2 5

1 0 0  1 1 −1  x1   2 
\  3 −1 0  0 1 0  x  =  6 
    2  
 2 0 7 0 0 1  x3   −3

1 1 −1  x1   z1 
Let 0 1 0  x  =  z  (1)
  2  2
0 0 1  x3   z3 

1 0 0   z1   2 
\  3 −1 0   z  =  6 
  2  
 2 0 7  z3   −3

\ z1 = 2
3 z1 − z2 = 6 

\ z2 = 3( 2) − 6 = 0 
2 z1 + 7 z3 = −3 

∴ z3 = (−3 − 2(2))/7 = −1
\ from (1)
1 1 −1  x1   2
 0 1 0  x  =  0
  2  
0 0 1  x3   −1
\ x3 = –1
x2 = 0
x1 + x2 − x3 = 2 

⇒ x1 = 2 − (0) + ( −1) = 1 
\ x1 = 1, x2 = 0, x3 = −1 

Example 5.22: Solve the system of equations AX = B by triangularisation method where


2 1 2   x1   5.6 
A = 8 5 13 , X =  x2  , B =  20.9 
    
6 3 12   x3  11.4 

Solution: Let
[ l11   0    0  ] [ 1  u12  u13 ]   [ 2  1   2 ]
[ l21  l22   0  ] [ 0   1   u23 ] = [ 8  5  13 ]
[ l31  l32  l33 ] [ 0   0    1  ]   [ 6  3  12 ]

∴  l11 = 2,   u12 = 1/2 = 0.5,   u13 = 2/2 = 1
   l21 = 8,   l22 = 5 − 4 = 1,   u23 = (13 − 8)/1 = 5
   l31 = 6,   l32 = 3 − 3 = 0,   l33 = 12 − 6 − 0 = 6

 2 0 0  1 0.5 1  2 1 2 
\ 8 1 0  0 1 5 = 8 5 13
    
6 0 6  0 0 1 6 3 12 

 2 0 0  1 0.5 1  x1   5.6 
\ 8 1 0  0 1 5  x  =  20.9
   2  
6 0 6  0 0 1  x3  11.4 

1 0.5 1   x1   z1 
Let 0 1 5  x  =  z  (1)
  2  2
0 0 1   x3   z3 

 2 0 0   z1   5.6 
\ 8 1 0   z  =  20.9
  2  
6 0 6   z3  11.4 

\ 2 z1 = 5.6 
⇒ z1 = 2.8 
8 z1 + z2 = 20.9 
⇒ z2 = 20.9 − 8 ( 2.8 ) = −1.5 
6 z1 + 6 z3 = 11.4
⇒ z3 = (11.4 − 6(2.8))/6 = −0.9

\ from (1)
1 0.5 1   x1   2.8 
0 1 5  x  =  −1.5
  2  
0 0 1   x3   −0.9 
\ x3 = −0.9 
x2 + 5 x3 = −1.5 
⇒ x2 = −1.5 − 5 ( −0.9 ) = 3 
x1 + 0.5 x2 + x3 = 2.8 
⇒ x1 = 2.8 − 0.5 ( 3) − ( −0.9 ) = 2.2 
\ x1 = 2.2, x2 = 3, x3 = −0.9 

Example 5.23: Solve the following system of equations by Doolittle method


 (i) 2 x + y + z − 2u = −10
4x + 2z + u = 8
3x + 2 y + 2 z = 7
x + 3 y + 2 z − u = −5 

(ii) 5 x1 + 4 x2 + x3 = 3.4
10 x1 + 9 x2 + 4 x3 = 8.8 

10 x1 + 13 x2 + 15 x3 = 19.2 

Solution: (i) Augmented matrix is
2 1 1 −2 −10 
4 0 2 1 8 
[ A : Y ] =  3 2 2 0 7
 
1 3 2 −1 −5 
Let

1 0 0 0  u11 u12 u13 u14 z1   2 1 1 −2 −10 


l 0   0 z2   4
 21 1 0 u22 u23 u24 0 2 1 8 
=
 l31 l32 1 0  0 0 u33 u34 z3   3 2 2 0 7
    
l41 l42 l43 1  0 0 0 u44 z4   1 3 2 −1 −5 

u11 = 2,   u12 = 1,   u13 = 1,   u14 = −2,   z1 = −10
l21 = 4/2 = 2,     u22 = 0 − 2 = −2,               u23 = 2 − 2 = 0,               u24 = 1 + 4 = 5,                     z2 = 8 + 20 = 28
l31 = 3/2 = 1.5,   l32 = (2 − 1.5)/(−2) = −0.25,   u33 = 2 − 1.5 − 0 = 0.5,       u34 = 0 + 3 + 1.25 = 4.25,           z3 = 7 + 15 + 7 = 29
l41 = 1/2 = 0.5,   l42 = (3 − 0.5)/(−2) = −1.25,   l43 = (2 − 0.5 − 0)/0.5 = 3,   u44 = −1 + 1 + 6.25 − 12.75 = −6.5,  z4 = −5 + 5 + 35 − 87 = −52
1 0 0 0 2 1 1 −2 −10   2 1 1 −2 −10 
 2 1 0 0  0 −2 0 5 28  4 0 2 1 8 
\   =
1.5 −0.25 1 0 0 0 0.5 4.25 29   3 2 2 0 7 
     
0.5 −1.25 3 1 0 0 0 −6.5 −52   1 3 2 −1 −5 

∴  −6.5 u = −52             ⇒  u = 8
   0.5 z + 4.25 u = 29      ⇒  z = (29 − 4.25(8))/0.5 = −10
   −2 y + 5 u = 28          ⇒  y = (28 − 5(8))/(−2) = 6
   2 x + y + z − 2u = −10   ⇒  x = (−10 − 6 + 10 + 2(8))/2 = 5
\ x = 5, y = 6, z = −10, u = 8 

(ii)  Augmented matrix is  5 4 1 3.4 


[ A : Y ] = 10 9 4 8.8 
10 13 15 19.2 

1 0 0   u11 u12 u13 z1   5 4 1 3.4 


Let l 1 0   0 u22 u23 z2  = 10 9 4 8.8 
 21
 l31 l32 1   0 0 u33 z3  10 13 15 19.2 

∴  u11 = 5,        u12 = 4,              u13 = 1,              z1 = 3.4
   l21 = 10/5 = 2, u22 = 9 − 8 = 1,      u23 = 4 − 2 = 2,      z2 = 8.8 − 6.8 = 2.0
   l31 = 10/5 = 2, l32 = (13 − 8)/1 = 5, u33 = 15 − 2 − 10 = 3, z3 = 19.2 − 6.8 − 10 = 2.4

1 0 0   5 4 1 3.4   5 4 1 3.4 
\  2 1 0  0 1 2 2.0  = 10 9 4 8.8 
    
 2 5 1  0 0 3 2.4  10 13 15 19.2 

\ 3 x3 = 2.4 
⇒ x3 = 0.8 
x2 + 2 x3 = 2 
⇒ x2 = 2 − 2 ( 0.8 ) = 0.4 
5 x1 + 4 x2 + x3 = 3.4
⇒ x1 = (3.4 − 4(0.4) − 0.8)/5 = 0.2
\ x1 = 0.2, x2 = 0.4, x3 = 0.8 

Example 5.24: Solve the following system of equations by Crout’s method


 (i) 2x + 3y + z = 9
x + 2 y + 3z = 6 

3x + y + 2 z = 8 

(ii) 10 x1 − 7 x2 + 3 x3 + 5 x4 = 6
−6 x1 + 8 x2 − x3 − 4 x4 = 5 

3 x1 + x2 + 4 x3 + 11x4 = 2 

5 x1 − 9 x2 − 2 x3 + 4 x4 = 7 

Solution: (i) Augmented matrix of the given system is
2 3 1 9
[ A : Y ] = 1 2 3 6 
 3 1 2 8 
Let derived matrix be
 l11 u12 u13 z1 
[ A′ : Z ] = l21 l22 u23 z2 
 l31 l32 l33 z3 
∴  l11 = 2,   u12 = 3/2 = 1.5,        u13 = 1/2 = 0.5,                  z1 = 9/2 = 4.5
   l21 = 1,   l22 = 2 − 1.5 = 0.5,    u23 = (3 − 0.5)/0.5 = 5,          z2 = (6 − 4.5)/0.5 = 3
   l31 = 3,   l32 = 1 − 4.5 = −3.5,   l33 = 2 − 1.5 + 17.5 = 18,        z3 = (8 − 13.5 + 10.5)/18 = 0.277778

\ UX = Z gives
1 1.5 0.5  x   4.5 
0 1 5   y  =  3 
 
0 0 1   z  0.277778

\ z = 0.277778 
y + 5z = 3 
⇒ y = 3 − 5 ( 0.277778 ) = 1.611110 
x + 1.5 y + 0.5 z = 4.5 
⇒ x = 4.5 − 1.5 (1.611110 ) − 0.5 ( 0.277778 ) = 1.944446 

To four decimals
\ x = 1.9444, y = 1.6111, z = 0.2778 

(ii)  Augmented matrix of the given system is


10 −7 3 5 6
 −6 8 −1 −4 5
[ A : Y ] =  3 1 4 11 2
 
 5 −9 −2 4 7
Let derived matrix be
 l11 u12 u13 u14 z1 
l l22 u23 u24 z2 
[ A′ : Z ] l21
=
l32 l33 u34 z3 
31
 
l
 41 l42 l43 l44 z4 

∴  l11 = 10,   u12 = −7/10 = −0.7,   u13 = 3/10 = 0.3,   u14 = 5/10 = 0.5,   z1 = 6/10 = 0.6
   l21 = −6,   l22 = 8 − 4.2 = 3.8,
   u23 = (−1 + 1.8)/3.8 = 0.21053,   u24 = (−4 + 3)/3.8 = −0.26316,   z2 = (5 + 3.6)/3.8 = 2.26316
   l31 = 3,    l32 = 1 + 2.1 = 3.1,   l33 = 4 − 0.9 − 0.65264 = 2.44736,
   u34 = (11 − 1.5 + 0.81580)/2.44736 = 4.21507,   z3 = (2 − 1.8 − 7.01580)/2.44736 = −2.78496
   l41 = 5,    l42 = −9 + 3.5 = −5.5,   l43 = −2 − 1.5 + 1.15792 = −2.34208,
   l44 = 4 − 2.5 − 1.44378 + 9.87203 = 9.92465,   z4 = (7 − 3 + 12.44378 − 6.52260)/9.92465 = 1.00001

\ UX = Z gives
1 −0.7 0.3 0.5   x1   0.6 
 
0
 1 0 .21053 − 0 .26316   x2   2.26316 
=
0 0 1 4.21507   x3   −2.78496 
    
0 0 0 1   x4   1.00001 
\ x4 = 1.00001  1 
x3 + 4.21507 x4 = −2.78496 

\ x3 = −2.78496 − 4.21507(1) = −7.00003  −7 
x2 + 0.21053 x3 − 0.26316 x4 = 2.26316 

\ x2 = 2.26316 − 0.21053 ( −7 ) + 0.26316 (1) = 4.00003  4

x1 − 0.7 x2 + 0.3 x3 + 0.5 x4 = 0.6 

\ x1 = 0.6 + 0.7 ( 4 ) − 0.3 ( −7 ) − 0.5 (1) = 5

\ x1 = 5, x2 = 4, x3 = −7, x4 = 1 

5.5 Iterative Methods for Solving System of Linear Equations
The direct methods for the solution of the system of linear equations provide the solution after
a certain amount of fixed computation. In iterative methods (also called indirect methods), we
start from an approximation to the true solution and, if convergent, we form a sequence of closer
approximations till the required accuracy is achieved. Thus, the difference between direct and
iterative methods is that in direct methods the amount of computation is fixed while in iterative
methods, the amount of computation is not fixed and depends upon the accuracy required.
In general, we prefer a direct method for solving system of linear equations but for large systems,
iterative methods may be faster than the direct methods. Round-off errors in iterative methods are
smaller. If any error is committed at any stage of computation, it will get corrected at subsequent
stages. An iterative process may or may not converge. If the coefficient matrix A of the given system is diagonally dominant, i.e. |aii| ≥ Σ_{j=1, j≠i}^{n} |aij| for all i, then the iterative process is sure to converge.

We shall consider two iterative processes.

5.5.1  Jacobi’s Iterative Method


Let the system of linear equations be
a11 x1 + a12 x2 +  + a1n xn = y1
a21 x1 + a22 x2 +  + a2 n xn = y2

an1 x1 + an2 x2 + … + ann xn = yn

Permute the equations such that diagonal elements of coefficient matrix have large magnitude
in comparison to other elements of the row.
Let (x1^(0), x2^(0), …, xn^(0))^T be the initial approximation.
In the absence of a better estimate for the initial approximation, we may take
(x1^(0), x2^(0), …, xn^(0))^T = (0, 0, 0, …, 0)^T

Now the equations can be written as


x1 = (1/a11) (y1 − a12 x2 − a13 x3 − … − a1n xn)
x2 = (1/a22) (y2 − a21 x1 − a23 x3 − … − a2n xn)
x3 = (1/a33) (y3 − a31 x1 − a32 x2 − a34 x4 − … − a3n xn)        (5.1)
⋮
xn = (1/ann) (yn − an1 x1 − an2 x2 − … − an,n−1 x_{n−1})


Substituting x1^(0), x2^(0), …, xn^(0) on the R.H.S. of equations (5.1), we get the first approximations

x1^(1) = (1/a11) (y1 − a12 x2^(0) − a13 x3^(0) − … − a1n xn^(0))
x2^(1) = (1/a22) (y2 − a21 x1^(0) − a23 x3^(0) − … − a2n xn^(0))
⋮
xn^(1) = (1/ann) (yn − an1 x1^(0) − an2 x2^(0) − … − an,n−1 x_{n−1}^(0))

Again, substituting x1^(1), x2^(1), …, xn^(1) on the R.H.S. of system (5.1), we obtain the next approximations x1^(2), x2^(2), …, xn^(2).

The process is repeated till the last two approximations are equal up to desired accuracy.
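
The iteration above translates directly into code. Below is a minimal Python sketch of Jacobi's method; the tolerance test and the name jacobi are our own choices.

import numpy as np

def jacobi(A, y, x0=None, tol=1e-6, max_iter=100):
    """Jacobi iteration for A x = y (illustrative sketch; assumes non-zero diagonal)."""
    A = np.array(A, dtype=float)
    y = np.array(y, dtype=float)
    n = len(y)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    D = np.diag(A)                 # diagonal entries a_ii
    R = A - np.diagflat(D)         # off-diagonal part
    for _ in range(max_iter):
        x_new = (y - R @ x) / D    # every component uses only the previous iterate
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# Example 5.25: iterates converge towards (2, 1, 1)
print(jacobi([[5, -1, 1], [2, 8, -1], [-1, 1, 4]], [10, 11, 3]))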

5.5.2 Gauss–Seidel Iteration Method


This method is modification of Jacobi’s method. As in Jacobi’s method, the system of linear
equations
a11 x1 + a12 x2 +  + a1n xn = y1
a21 x1 + a22 x2 +  + a2 n xn = y2

an1 x1 + an2 x2 + … + ann xn = yn

is written as

x1 = (1/a11) (y1 − a12 x2 − a13 x3 − … − a1n xn)
x2 = (1/a22) (y2 − a21 x1 − a23 x3 − … − a2n xn)        (5.2)
⋮
xn = (1/ann) (yn − an1 x1 − an2 x2 − … − an,n−1 x_{n−1})

Let (x1^(0), x2^(0), …, xn^(0))^T be the initial approximation.

Substituting x2^(0), x3^(0), …, xn^(0) for x2, x3, …, xn in the first equation of system (5.2), we get

x1^(1) = (1/a11) (y1 − a12 x2^(0) − a13 x3^(0) − … − a1n xn^(0))

Now, substituting x1^(1), x3^(0), x4^(0), …, xn^(0) for x1, x3, x4, …, xn in the R.H.S. of the second equation of system (5.2), we get

x2^(1) = (1/a22) (y2 − a21 x1^(1) − a23 x3^(0) − … − a2n xn^(0))

Next, substituting x1^(1), x2^(1), x4^(0), …, xn^(0) for x1, x2, x4, …, xn in the R.H.S. of the third equation of system (5.2), we get

x3^(1) = (1/a33) (y3 − a31 x1^(1) − a32 x2^(1) − a34 x4^(0) − … − a3n xn^(0))

and so on. Finally, we shall have

xn^(1) = (1/ann) (yn − an1 x1^(1) − an2 x2^(1) − … − an,n−1 x_{n−1}^(1))

In the substitutions, as soon as a new approximation for an unknown is found, it is immediately used in the next step.
The process of iteration is repeated till (x1^(n+1), x2^(n+1), …, xn^(n+1))^T = (x1^(n), x2^(n), …, xn^(n))^T up to the desired accuracy.
Remark 5.6: Since the most recent approximations of the unknowns are used in the next step, the
convergence in the Gauss–Seidel method is twice as fast as in Jacobi’s method.
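
A matching Python sketch of the Gauss–Seidel iteration, differing from the Jacobi sketch only in that each newly computed component is used immediately; gauss_seidel is our illustrative name.

import numpy as np

def gauss_seidel(A, y, x0=None, tol=1e-6, max_iter=100):
    """Gauss-Seidel iteration for A x = y (illustrative sketch; assumes non-zero diagonal)."""
    A = np.array(A, dtype=float)
    y = np.array(y, dtype=float)
    n = len(y)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Components x[0..i-1] are already the new values, x[i+1..] are still the old ones
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (y[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# Example 5.26: iterates converge towards (1, 2, 3)
print(gauss_seidel([[5, 2, 1], [1, 4, 2], [1, 2, 5]], [12, 15, 20]))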

Example 5.25: Starting with ( x0 , y0 , z0 ) = ( 0, 0, 0 ) and using Jacobi’s method, find the next five
iterations for the system
5 x − y + z = 10, 2 x + 8 y − z = 11, − x + y + 4 z = 3

Solution: From the given equations

x = (1/5)(10 + y − z)
y = (1/8)(11 − 2x + z)
z = (1/4)(3 + x − y)
We have (x0, y0, z0) = (0, 0, 0)
∴ x1 = (1/5)(10) = 2,   y1 = 11/8 = 1.375,   z1 = 3/4 = 0.75
x2 = (1/5)(10 + 1.375 − 0.75) = 2.125
y2 = (1/8)(11 − 4 + 0.75) = 0.9688
z2 = (1/4)(3 + 2 − 1.375) = 0.9063
x3 = (1/5)(10 + 0.9688 − 0.9063) = 2.0125
y3 = (1/8)(11 − 2(2.125) + 0.9063) = 0.9570
z3 = (1/4)(3 + 2.125 − 0.9688) = 1.0390
x4 = (1/5)(10 + 0.9570 − 1.0390) = 1.9836
y4 = (1/8)(11 − 2(2.0125) + 1.0390) = 1.0018
z4 = (1/4)(3 + 2.0125 − 0.9570) = 1.0139
x5 = (1/5)(10 + 1.0018 − 1.0139) = 1.9976
y5 = (1/8)(11 − 2(1.9836) + 1.0139) = 1.0058
z5 = (1/4)(3 + 1.9836 − 1.0018) = 0.9954
\

( x5 , y5 , z5 ) = (1.9976 ,1.0058 , 0.9954 ) 
The iteration converges to (2, 1, 1)

Example 5.26: Use Gauss–Seidel procedure to solve


5 x + 2 y + z = 12, x + 4 y + 2 z = 15, x + 2 y + 5 z = 20

Solution: From the given equations

x = (1/5)(12 − 2y − z)
y = (1/4)(15 − x − 2z)
z = (1/5)(20 − x − 2y)

Taking (x0, y0, z0) = (0, 0, 0), we have

x1 = 12/5 = 2.4
y1 = (1/4)(15 − 2.4) = 3.15
z1 = (1/5)(20 − 2.4 − 2(3.15)) = 2.26
x2 = (1/5)(12 − 2(3.15) − 2.26) = 0.688
y2 = (1/4)(15 − 0.688 − 2(2.26)) = 2.448
z2 = (1/5)(20 − 0.688 − 2(2.448)) = 2.8832
x3 = (1/5)(12 − 2(2.448) − 2.8832) = 0.8442
y3 = (1/4)(15 − 0.8442 − 2(2.8832)) = 2.0974
z3 = (1/5)(20 − 0.8442 − 2(2.0974)) = 2.9922
x4 = (1/5)(12 − 2(2.0974) − 2.9922) = 0.9626
y4 = (1/4)(15 − 0.9626 − 2(2.9922)) = 2.0132
z4 = (1/5)(20 − 0.9626 − 2(2.0132)) = 3.0022
x5 = (1/5)(12 − 2(2.0132) − 3.0022) = 0.9943
y5 = (1/4)(15 − 0.9943 − 2(3.0022)) = 2.0003
z5 = (1/5)(20 − 0.9943 − 2(2.0003)) = 3.0010
x6 = (1/5)(12 − 2(2.0003) − 3.001) = 0.9997
y6 = (1/4)(15 − 0.9997 − 2(3.001)) = 1.9996
z6 = (1/5)(20 − 0.9997 − 2(1.9996)) = 3.0002
\ ( x5 , y5 , z5 ) = ( 0.9943 , 2.0003 , 3.0010 ) 

( x6 , y6 , z6 ) = ( 0.9997, 1.9996 , 3.0002 ) 
\ Iteration converges to


( x, y, z ) = (1, 2, 3) 
Example 5.27: Using Gauss–Seidel method, solve the following system of equations
10 x + y − z = 17, 2 x + 20 y + z = 28, 3 x − y + 25 z = 105 starting with initial approximation
(1, 1, 1)T and perform three iterations.
Solution: From given equations
x = 1.7 − 0.1 y + 0.1z

y = 1.4 − 0.1x − 0.05 z 

z = 4.2 − 0.12 x + 0.04 y 

Initial approximation is (x0, y0, z0)^T = (1, 1, 1)^T


x1 = 1.7

y1 = 1.4 − 0.1(1.7 ) − 0.05 = 1.18

z1 = 4.2 − 0.12 (1.7 ) + 0.04 (1.18 ) = 4.043

x2 = 1.7 − 0.1(1.18 ) + 0.1( 4.043) = 1.986

y2 = 1.4 − 0.1(1.986 ) − 0.05 ( 4.043) = 0.999

z2 = 4.2 − 0.12 (1.986 ) + 0.04 ( 0.999 ) = 4.002



x3 = 1.7 − 0.1( 0.999 ) + 0.1( 4.002 ) = 2.000

y3 = 1.4 − 0.1( 2) − 0.05 ( 4.002 ) = 1.000

z3 = 4.2 − 0.12( 2) + 0.04(1) = 4.000 

After three iterations, we observe that the iteration converges to the solution (x, y, z)^T = (2, 1, 4)^T.

Example 5.28: Solve the equations


10 x1 − 2 x2 − x3 − x4 = 3
−2 x1 + 10 x2 − x3 − x4 = 15
− x1 − x2 + 10 x3 − 2 x4 = 27
− x1 − x2 − 2 x3 + 10 x4 = −9

by (i) Gauss–Jacobi method   (ii) Gauss–Seidel method.


Solution: From given equations
x1 = 0.3 + 0.2 x2 + 0.1x3 + 0.1x4

x2 = 1.5 + 0.2 x1 + 0.1x3 + 0.1x4

x3 = 2.7 + 0.1x1 + 0.1x2 + 0.2 x4

x4 = −0.9 + 0.1x1 + 0.1x2 + 0.2 x3

Take the initial solution (x1^(0), x2^(0), x3^(0), x4^(0))^T = (0, 0, 0, 0)^T.

x1^(1) = 0.3
x2^(1) = 1.5
x3^(1) = 2.7
x4^(1) = −0.9
x1^(2) = 0.3 + 0.2(1.5) + 0.1(2.7) + 0.1(−0.9) = 0.78
x2^(2) = 1.5 + 0.2(0.3) + 0.1(2.7) + 0.1(−0.9) = 1.74
x3^(2) = 2.7 + 0.1(0.3) + 0.1(1.5) + 0.2(−0.9) = 2.7
x4^(2) = −0.9 + 0.1(0.3) + 0.1(1.5) + 0.2(2.7) = −0.18
x1^(3) = 0.3 + 0.2(1.74) + 0.1(2.7) + 0.1(−0.18) = 0.90
x2^(3) = 1.5 + 0.2(0.78) + 0.1(2.7) + 0.1(−0.18) = 1.908
x3^(3) = 2.7 + 0.1(0.78) + 0.1(1.74) + 0.2(−0.18) = 2.916
x4^(3) = −0.9 + 0.1(0.78) + 0.1(1.74) + 0.2(2.7) = −0.108
x1^(4) = 0.3 + 0.2(1.908) + 0.1(2.916) + 0.1(−0.108) = 0.9624
x2^(4) = 1.5 + 0.2(0.90) + 0.1(2.916) + 0.1(−0.108) = 1.9608
x3^(4) = 2.7 + 0.1(0.90) + 0.1(1.908) + 0.2(−0.108) = 2.9592
x4^(4) = −0.9 + 0.1(0.90) + 0.1(1.908) + 0.2(2.916) = −0.036
x1^(5) = 0.3 + 0.2(1.9608) + 0.1(2.9592) + 0.1(−0.036) = 0.9845
x2^(5) = 1.5 + 0.2(0.9624) + 0.1(2.9592) + 0.1(−0.036) = 1.9848
x3^(5) = 2.7 + 0.1(0.9624) + 0.1(1.9608) + 0.2(−0.036) = 2.9851
x4^(5) = −0.9 + 0.1(0.9624) + 0.1(1.9608) + 0.2(2.9592) = −0.0158

∴ Up to one decimal, the iteration converges to (x1, x2, x3, x4)^T = (1.0, 2.0, 3.0, 0.0)^T


(ii)  By Gauss–Seidel Method

x1^(1) = 0.3
x2^(1) = 1.5 + 0.2(0.3) = 1.56
x3^(1) = 2.7 + 0.1(0.3) + 0.1(1.56) = 2.886
x4^(1) = −0.9 + 0.1(0.3) + 0.1(1.56) + 0.2(2.886) = −0.137
x1^(2) = 0.3 + 0.2(1.56) + 0.1(2.886) + 0.1(−0.137) = 0.887
x2^(2) = 1.5 + 0.2(0.887) + 0.1(2.886) + 0.1(−0.137) = 1.952
x3^(2) = 2.7 + 0.1(0.887) + 0.1(1.952) + 0.2(−0.137) = 2.956
x4^(2) = −0.9 + 0.1(0.887) + 0.1(1.952) + 0.2(2.956) = −0.025
x1^(3) = 0.3 + 0.2(1.952) + 0.1(2.956) + 0.1(−0.025) = 0.984
x2^(3) = 1.5 + 0.2(0.984) + 0.1(2.956) + 0.1(−0.025) = 1.990
x3^(3) = 2.7 + 0.1(0.984) + 0.1(1.990) + 0.2(−0.025) = 2.992
x4^(3) = −0.9 + 0.1(0.984) + 0.1(1.990) + 0.2(2.992) = −0.004
x1^(4) = 0.3 + 0.2(1.990) + 0.1(2.992) + 0.1(−0.004) = 0.997
x2^(4) = 1.5 + 0.2(0.997) + 0.1(2.992) + 0.1(−0.004) = 1.998
x3^(4) = 2.7 + 0.1(0.997) + 0.1(1.998) + 0.2(−0.004) = 2.999
x4^(4) = −0.9 + 0.1(0.997) + 0.1(1.998) + 0.2(2.999) = −0.001

∴ Up to one decimal place, the iteration converges to the solution (x1, x2, x3, x4)^T = (1.0, 2.0, 3.0, 0.0)^T

5.6 Algebraic eigenvalue problems


Suppose we are given a matrix
 a11 a12  a1n 
 
a a22  a2 n 
A =  21
 
 
 an1 an 2  ann 
The n eigenvalues of A are the solutions of the nth degree polynomial equation in λ

| a11 − λ   a12      …   a1n     |
| a21       a22 − λ  …   a2n     |
|   ⋮                      ⋮     |  = 0
| an1       an2      …   ann − λ |

and, corresponding to an eigenvalue λ, an eigenvector is a non-zero solution of the homogeneous system of n linear equations

[ a11 − λ   a12      …   a1n     ] [ x1 ]
[ a21       a22 − λ  …   a2n     ] [ x2 ]
[   ⋮                      ⋮     ] [  ⋮ ]  =  0
[ an1       an2      …   ann − λ ] [ xn ]
Hence, theoretically the problem of finding eigenvalues and eigenvectors is reduced to finding
the roots of an algebraic equation and to solving n linear homogeneous system of equations. In
practical computation, this method is unsuitable and a better method must be applied. If only the numerically largest eigenvalue and its corresponding eigenvector are required, then the power
method is quite suitable.

5.6.1 Power Method
Suppose λ1 , λ2 , … , λn are eigenvalues and X 1 , X 2 , … , X n are corresponding eigenvectors of
n × n matrix A.
Let X = c1 X1 + c2 X2 + … + cn Xn be a linear combination of the eigenvectors.
Then  AX = c1 AX1 + c2 AX2 + … + cn AXn
         = c1 λ1 X1 + c2 λ2 X2 + … + cn λn Xn
         = λ1 [c1 X1 + c2 (λ2/λ1) X2 + … + cn (λn/λ1) Xn],   provided λ1 ≠ 0


A²X = λ1 [c1 AX1 + c2 (λ2/λ1) AX2 + … + cn (λn/λ1) AXn]
    = λ1 [c1 λ1 X1 + c2 (λ2/λ1) λ2 X2 + … + cn (λn/λ1) λn Xn]
    = λ1² [c1 X1 + c2 (λ2/λ1)² X2 + … + cn (λn/λ1)² Xn]

Proceeding in this way, we have

A^p X = λ1^p [c1 X1 + c2 (λ2/λ1)^p X2 + … + cn (λn/λ1)^p Xn]

Now, for large values of p, the vector

c1 X1 + c2 (λ2/λ1)^p X2 + … + cn (λn/λ1)^p Xn

will converge to c1 X1 if |λ1| > |λ2| ≥ |λ3| ≥ … ≥ |λn|, and c1 X1 is the eigenvector corresponding to the eigenvalue λ1.
∴ The eigenvalue λ1 of largest magnitude is obtained as

λ1 = lim_{p→∞} (A^{p+1} X)_r / (A^p X)_r ;   r = 1, 2, …, n

where (A^{p+1} X)_r and (A^p X)_r are the r th components of A^{p+1}X and A^p X, respectively.
The rate of convergence is determined by the quotient |λ2/λ1|; convergence is faster if |λ2/λ1| is smaller.
For numerical purposes, the algorithm just described can be formulated in the following way:

Y^(k+1) = A X^(k)
λ^(k+1) = max_r |(Y^(k+1))_r|    (the component of Y^(k+1) of largest magnitude)
X^(k+1) = Y^(k+1) / λ^(k+1),   i.e.,  λ^(k+1) X^(k+1) = Y^(k+1)

The initial value X^(0) can be chosen in a convenient way: we may take (1, 0, 0)^T or (1, 1, 1)^T (in three dimensions), or take the r th component as 1 and the other components zero if the r th row of A contains the element of largest magnitude. The iteration is stopped when AX^(n) ≈ λ^(n) X^(n) or AX^(n) ≈ −λ^(n) X^(n).
In the first case λ^(n) will be the eigenvalue and X^(n) the corresponding eigenvector; in the second case −λ^(n) will be the eigenvalue and X^(n) the eigenvector.
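
The normalised iteration above can be coded in a few lines. The Python sketch below returns the dominant eigenvalue (with its sign resolved as described above) and the corresponding eigenvector; power_method is our own name and the tolerance is an arbitrary choice.

import numpy as np

def power_method(A, x0, tol=1e-6, max_iter=200):
    """Power method for the eigenvalue of largest magnitude (illustrative sketch)."""
    A = np.array(A, dtype=float)
    x = np.array(x0, dtype=float)
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = np.max(np.abs(y))        # lambda^(k+1) = max_r |(Y^(k+1))_r|
        x_new = y / lam_new
        if np.max(np.abs(x_new - x)) < tol or np.max(np.abs(x_new + x)) < tol:
            # A x ~ lambda x gives +lam_new;  A x ~ -lambda x gives -lam_new
            sign = 1.0 if np.max(np.abs(x_new - x)) < tol else -1.0
            return sign * lam_new, x_new
        lam, x = lam_new, x_new
    return lam, x

# Example 5.29(i): dominant eigenvalue 3.414 with eigenvector about (0.707, -1, 0.707)
lam, v = power_method([[2, -1, 0], [-1, 2, -1], [0, -1, 2]], [1, 0, 1])
print(lam, v)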



5.6.2 Modification of Power Method


The power method can be modified in such a way that other eigenvalues are obtained.
We know that if A has eigenvalue l, then A−qI has eigenvalue l−q. Using this principle, we
can find the next dominant eigenvalue. We will now discuss how the next absolutely largest ei-
genvalue can be calculated if we know the absolutely largest eigenvalue l1 and the corresponding
eigenvector X1.
Let a1T be the first row vector of A and X1 is normalised in such a way that its first compo-
nent is 1.
Form A1 = A − X 1a1T . Here, first row of A1 will be zero.
Let l2 and X2 be other eigenvalue and corresponding eigenvector of A, where first component
of X2 is unity. As first components of X1 and X2 are unity
∴ a1T X 1 = λ1 (∵ AX 1 = λ1 X 1 ) 

a1T X 2 = λ 2 ( ∵ AX 2 = λ2 X 2 ) 
∴  A1 (X1 − X2) = (A − X1 a1^T)(X1 − X2)
               = AX1 − AX2 − X1 a1^T X1 + X1 a1^T X2
               = λ1 X1 − λ2 X2 − λ1 X1 + λ2 X1
               = λ2 (X1 − X2)

Thus, l2 is an eigenvalue of A1 with corresponding eigenvector X1 − X2.
As first component of X1 − X2 is zero, so the first column of A1 is irrelevant in
A1 (X1 − X2) = λ2 (X1 − X2). Thus, we remove the first row and first column of A1; let the remaining matrix be B1.
Determine eigenvalue l2 of B1 and corresponding eigenvector.
By adding 0 as first component in this eigenvector, we get a vector Z which is an eigenvector
of A1 and we shall have
X2 − X1 = cZ
⇒  a1^T (X2 − X1) = c a1^T Z
⇒  λ2 − λ1 = c a1^T Z
∴  c = (λ2 − λ1)/(a1^T Z)

Hence,  X2 − X1 = ((λ2 − λ1)/(a1^T Z)) Z
∴  X2 = X1 + ((λ2 − λ1)/(a1^T Z)) Z

Thus, eigenvalue l2 and corresponding eigenvector X2 will be determined.

Remark 5.7: (i) If l1 is dominant eigenvalue of A (i.e magnitude of l1 is maximum among the
magnitudes of eigenvalues of the matrix A) and l2 is dominant eigenvalue of A1, then l2 will be
the next dominant eigenvalue of A.
(ii) If λ is the dominant eigenvalue of A⁻¹, then 1/λ will be the smallest-magnitude eigenvalue of A.
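
Remark 5.7(ii) is what is used in Example 5.30 below: applying the power method to A⁻¹ gives the dominant eigenvalue of A⁻¹, whose reciprocal is the smallest-magnitude eigenvalue of A. A short Python sketch, reusing the power_method function from the previous sketch (an assumption of ours, not something defined in the text):

import numpy as np

# Smallest-magnitude eigenvalue of A via the power method applied to A^(-1)
A = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
lam_inv, v = power_method(np.linalg.inv(A), [0, 1, 0])
print(1.0 / lam_inv)   # approximately 0.5858, as in Example 5.30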
Example 5.29: Determine the largest eigenvalue in magnitude and the corresponding ­eigenvector
of the following matrices.
 2 −1 0   −15 4 3

(i)  A =  −1 2 −1  (ii)  A =  10 −12 6 
 0 −1 2   20 −4 2

1 
Solution: (i) Let X (0) = 0 
1 
 2 −1 0  1   2  1
∴ AX (0) =  −1 2 −1 0  =  −2 = 2  −1 = λ (1) X (1)
 0 −1 2  1   2   1 

 2 −1 0   1   3   0.75
AX (1)      
=  −1 2 −1  −1 =  −4  = 4  −1  = λ (2) X (2)
 0 −1 2   1   3   0.75

 2 −1 0  0.75  2.5   0.714 
AX ( 2)      
=  −1 2 −1  −1  =  −3.5 = 3.5  −1  = λ (3) X (3)
 0 −1 2  0.75  2.5   0.714 

 2 −1 0  0.714   2.428  0.708
AX (3)    
=  −1 2 −1  −1  =  −3.428 = 3.428  −1  = λ (4) X (4)
 
 0 −1 2  0.714   2.428  0.708

 2 −1 0  0.708  2.416  0.707
AX (4)    
=  −1 2 −1  −1  =  −3.416  = 3.416  −1  = λ (5) X (5)
 
 0 −1 2  0.708  2.416  0.707

 2 −1 0  0.707  2.414  0.707
AX =  −1 2 −1
(5)  −1  =  −3.414  = 3.414  −1 
     
 0 −1 2  0.707  2.414  0.707

∴ eigenvalue of largest magnitude = 3.414
0.707
corresponding eigenvector =  −1 
0.707

1 
(ii)  Let X = 0 
(0)

0 

 −15 4 3 1   −15  −0.75


AX (0 )
=  10 −12 6  0  =  10  = 20  0.50  = λ (1) X (1)
     
 20 −4 2 0   20   1 

 −15 4 3  −0.75  16.25   1 
AX (1) =  10 −12 6   0.50  =  −7.50  = 16.25  −0.461 = λ (2) X (2)
 20 −4 2  1   −15.0   −0.923

 −15 4 3  1   −19.613  −0.981
AX ( 2)    
=  10 −12 6   −0.461 =  9.994  = 19.998  0.500  = λ (3) X (3)
 
 20 −4 2  −0.923  19.998   1 

 −15 4 3  −0.981  15.7715   0.801 
AX (3)
=  10 −12 6   0.500  =  −9.810  = 19.620  −0.500  = λ (4) X (4)
     
 20 −4 2  1   −19.620   −1 

 −15 4 3  0.801   −17.015  −1 
AX (4)
=  10 −12 6   −0.500  =  8.010  = 17.015  0.471 = λ (5) X (5)
   
 20 −4 2  −1   16.020  0.942

 −15 4 3  −1   19.771   0.986 
AX (5) =  10 −12 6   0.471 =  −10.00  = 20  −0.500  = λ (6) X (6)
 20 −4 2 0.942  −20.00   −1 

 −15 4 3  0.986   −19.79  −1 
AX (6)  
=  10 −12 6   −0.500  =  9.86  = 19.79  0.498 = λ (7) X (7)
   
 20 −4 2  −1   19.72  0.996 

 −15 4 3  −1   19.9980   0.999  −1 
AX (7)        
=  10 −12 6   0.498 =  −10.000  = 20  −0.5   −20 0.5
 20 −4 2 0.996   −20.000   −1   1 

 −15 4 3  −1   20   −1 
AX (8)
=  10 −12 6  0.5 =  −10  = −20 0.5
     
 20 −4 2  1   −20   1 


∴ eigenvalue of largest magnitude = -20


 −1 
corresponding eigenvector = 0.5
 1 
 2 −1 0 
Example 5.30: Find the absolutely smallest eigenvalue of the matrix A =  −1 2 −1
 2 −1 0   0 −1 2 
Solution: 
A =  −1 2 −1 
 0 −1 2 

A = 2 ( 4 − 1) + 1( −2 − 0 ) = 4

3 2 1
1  1 
∴ A =  2 4 2
−1 −1
 from A = A adj A 
4  
1 2 3

3 2 1
Let B =  2 4 2
1 2 3

0 
Let X (0)
= 1 
0 

 3 2 1  0  2 0.5
BX (0 )
=  2 4 2 1  =  4  = 4  1  = λ (1) X (1)
     
1 2 3 0   2  0.5

 3 2 1  0.5  4  0.667
BX (1)
=  2 4 2  1  = 6  = 6  1  = λ (2) X (2)
     
1 2 3 0.5  4  0.667

 3 2 1  0.667  4.668  0.7
BX ( 2)      
=  2 4 2  1  = 6.668 = 6.668  1  = λ (3) X (3)
1 2 3 0.667  4.668  0.7

 3 2 1  0.7  4.8  0.706 
BX (3) =  2 4 2  1  = 6.8 = 6.8  1  = λ (4) X (4)
1 2 3 0.7  4.8  0.706 


 3 2 1  0.706   4.824  0.707


BX (4)      
=  2 4 2  1  = 6.824  = 6.824  1  = λ (5) X (5)
1 2 3 0.706   4.824  0.707

 3 2 1  0.707  4.828 0.707


BX (5)      
=  2 4 2  1  = 6.828 = 6.828  1 
1 2 3 0.707  4.828 0.707

∴ eigenvalue of B of maximum magnitude = 6.828
∴ eigenvalue of A⁻¹ of maximum magnitude = 6.828/4 = 1.707
∴ eigenvalue of A of minimum magnitude = 1/1.707 ≈ 0.5858

 −306 −198 426


Example 5.31: The matrix A =  104 67 −147
 −176 −114 244 

 2
has an eigenvalue 6 and corresponding eigenvector  −1. Find its other eigenvalues and corre-
sponding eigenvectors.  1
Solution: Eigenvector corresponding to eigenvalue l1 = 6 with first component unity is
 1 
X 1 =  −0.5
 0.5 
 −306 −198 426   1 
A1 = A − X 1a1T =  104 67 −147 −  −0.5 [ −306 − 198 426]
 −176 −114 244   0.5 

 0 0 0
=  −49 −32 66 

 −23 −15 31

After leaving the first row and first column,
 −32 66 
B1 =  
 −15 31
Sum of eigenvalues = Trace of B1 = –32 + 31 = –1
Product of eigenvalues = B1 = −992 + 990 = −2

\ eigenvalues are roots of


t2 + t − 2 = 0

\ eigenvalues are –2 and 1


\ l2 = -2, l3 = 1 are other eigenvalues of A
for λ 2 = −2, two-dimensional eigenvector is the solution of
 −32 + 2 66   x1  0 
=
 −15
 31 + 2  x2  0 

11
\ −30 x1 + 66 x2 = 0, we can take eigenvector  
5
for λ3 = 1, two-dimensional eigenvector is the solution of
 −32 − 1 66   x1  0 
=
 −15
 31 − 1  x2  0 

 2
\ −33 x1 + 66 x2 = 0, we can take eigenvector  
1 
0
\ for λ 2 = −2, eigenvector of A1 is Z 2 = 11
 5 
0
Now, a1 Z 2 = [ −306 − 198 426] 11 = −48
T

 5 

 1  0
−2 − 6    λ −λ 
\ eigenvector of A is  −0.5 +  11 ∵ X 2 = X 1 + cZ & c = 2 T 1 
−48  a1 Z 
 0.5   5 

  6  0  6 
1  −3 + 11  1 
=       = 6 8 
6
  3   5   8 

 3
\ for λ 2 = −2, we can take eigenvector  4 
 4 
0 
for λ3 = 1, eigenvector of A1 is Z3 =  2
1 

0 
Now, a Z3 = [ −306 − 198 426 ]  2  = 30 
T
1
1 
\ eigenvector of A is

 1  0   6  0   6
 −0.5 + 1 − 6  2 = 1   −3 −  2  1 
  30   6       = 6  −5
 0.5  1    3  1    2 

 6
\ we can take eigenvector  −5
 2 
\ other eigenvalues of A are –2 and 1 and their corresponding eigenvectors are, respectively,
 3 6
 4  and  −5
   
 4   2 

Example 5.32: Using power method, find the dominant eigenvalue and corresponding eigenvec-
tor of the matrix
 3 −1 0 
A =  −1 2 −1
 0 −1 3 
and then using deflation method, find the other eigenvalues and corresponding eigenvectors.
1 
Solution: Let X (0 )
= 0 
1 

 3 −1 0  1   3   1 
\ AX (0 )    
=  −1 2 −1 0  =  −2 = 3  −0.6667 = λ (1) X (1)
 
 0 −1 3  1   3   1 

 3 −1 0   1   3.6667   1 
AX (1)      
=  −1 2 −1  −0.6667 =  −3.33334  = 3.6667  −0.9091 = λ (2) X ( 2 )
 0 −1 3   1   3.6667   1 

 3 −1 0   1   3.9091   1 
AX ( 2)      
=  −1 2 −1  −0.9091 =  −3.88182 = 3.9091  −0.9767 = λ (3) X (3)
 0 −1 3   1   3.9091   1 


 3 −1 0   1   3.9767   1 
AX (3)      
=  −1 2 −1  −0.9767 =  −3.99534  = 3.9767  −0.9941 = λ (4) X (4)
 0 −1 3   1   3.9767   1 

 3 −1 0   1   3.9941   1 
AX (4)      
=  −1 2 −1  −0.9941 =  −3.99882 = 3.9941  −0.9985 = λ (5) X (5)
 0 −1 3   1   3.9941   1 

 3 −1 0   1   3.9985   1 
AX (5)      
=  −1 2 −1  −0.9985 =  −3.99970  = 3.9985  −0.9996  = λ (6) X (6)
 0 −1 3   1   3.9985   1 

 3 −1 0   1   3.9996   1 
AX (6)  
=  −1 2 −1  −0.9996  =  −3.99992 = 3.9996  −0.9999
   
 0 −1 3   1   3.9996   1 

1
\ Iteration converges to l = 4 and corresponding eigenvector  −1
 1 
\ Dominant eigenvalue = l1 = 4
1
and corresponding eigenvector with first component unity = X 1 =  −1
 1 

 3 −1 0   1  0 0 0
A1 = A − X a =  −1 2 −1 −  −1 [3 − 1 0] =  2
T
1 1
    1 −1
 0 −1 3   1   −3 0 3 
Leaving its first row and first column
1 −1
B1 =  
0 3 
Its eigenvalues are l2 = 3, l3 = 1
1 − 3 −1   x1  0 
Two-dimensional eigenvector corresponding to l2 = 3 is given by =
 0
 3 − 3  x2  0 
1
which can be taken as  
 −2
0
\ Eigenvector of A1 is Z 2 =  1 
 −2
0
Now, a1 Z 2 = [3 −1 0]  1  = −1 
T

 −2

\ Eigenvector of A is

1 0 1


λ 2 − λ1   3− 4    
X1 + T Z 2 =  −1 + 1 = 0
a1 Z 2 −1    
 1   −2  −1

1 − 1 −1   x1   0 
Two-dimensional eigenvector corresponding to λ3 = 1 is given by    =  
 0 3 − 1  x2   0 
0 −1  x1  0 
i.e., 0 2   x  = 0 
  2   
0 
1 
we can take as   ⇒ eigenvectors of A1 is Z3 = 1 
0  0 

0
Now a Z3 = [3 −1 0] 1  = −1
T
1
0 

Eigenvector of A is
1 0 1
λ3 − λ1  −1 + 1 − 4 1  =  2
X1 + Z 3 =   −1    
a1T Z3
 1  0  1 

1
Dominant eigenvalue is 4 with eigenvector  −1 and other eigenvalues are 3 and 1 with corre-
1 1   1 
   
sponding eigenvectors  0  and  2 , respectively.
 −1 1 

Exercise 5.2

1. Solve the system of equations
      [ 2  1  1  −2 ] [ x1 ]   [ −10 ]
      [ 4  0  2   1 ] [ x2 ] = [   8 ]
      [ 3  2  2   0 ] [ x3 ]   [   7 ]
      [ 1  3  2  −1 ] [ x4 ]   [  −5 ]
   using the Gauss elimination method with partial pivoting.
2. Solve the following system of equations
      2x1 + 3x2 + x3 = 9,  x1 + 2x2 + 3x3 = 6,  3x1 + x2 + 2x3 = 8
   using Gauss elimination method.
3. Solve the following system of equations by Gauss elimination method
   (a) 2x + 3y + z = 13,  x − y − 2z = −1,  3x + y + 4z = 15
   (b) x + y + z = 9,  2x + 5y + 7z = 52,  2x + y − z = 0
   (c) 3x + y − z = 3,  2x − 8y + z = −5,  x − 2y + 9z = 8
   (d) 8y + 2z = −7,  3x + 5y + 2z = 8,  6x + 2y + 8z = 26
4. Solve the following linear system by Gauss elimination, with partial pivoting if necessary (but without scaling). Check the result by substitution. If no solution or more than one solution exists, give a reason.
   (a) 1.5x1 + 2.3x2 = 16,  −4.5x1 − 6.9x2 = 48
   (b) 2x1 + 5x2 + 7x3 = 25,  −5x1 + 7x2 + 2x3 = −4,  x1 + 22x2 + 23x3 = 71
   (c) 3.2x1 + 1.6x2 = −0.8,  1.6x1 − 0.8x2 + 2.4x3 = 16.0,  2.4x2 − 4.8x3 + 3.6x4 = −39.0,  3.6x3 + 2.4x4 = 10.2
5. Apply Gauss–Jordan method to solve the following equations
      x + y + z = 9,  2x − 3y + 4z = 13,  3x + 4y + 5z = 40
6. Solve the following system of equations by Gauss–Jordan method
   (a) x + 2y + z = 8,  2x + 3y + 4z = 20,  4x + 3y + 2z = 16
   (b) 2x1 − 7x2 + 4x3 = 9,  x1 + 9x2 − 6x3 = 1,  −3x1 + 8x2 + 5x3 = 6
   (c) 10x + y + z = 12,  x + 10y + z = 12,  x + y + 10z = 12
7. Solve the following equations by Gauss–Jordan method
      x + 2y + z − w = −2,  2x + 3y − z + 2w = 7,  x + y + 3z − 2w = −6,  x + y + z + w = 2
8. Solve the following system of equations by Gauss–Jordan method
      [ 1  1   1 ] [ x ]   [ 1 ]
      [ 4  3  −1 ] [ y ] = [ 6 ]
      [ 3  5   3 ] [ z ]   [ 4 ]
9. Solve the following system of equations by (a) Gauss elimination method (b) Gauss–Jordan method
      2x + y + z = 10,  3x + 2y + 3z = 18,  x + 4y + 9z = 16
10. Solve the following system of equations by method of factorisation
      2x − 3y + 10z = 3,  −x + 4y + 2z = 20,  5x + 2y + z = −12
11. Solve the equations
      2x + 3y + z = 9,  x + 2y + 3z = 6,  3x + y + 2z = 8
    by the factorisation method.
12. Solve the following system of equations by Doolittle's method
   (a) 4x1 + 3x2 − x3 = 6,  3x1 + 5x2 + 3x3 = 4,  x1 + x2 + x3 = 1
   (b) x1 + 2x2 + 3x3 = 14,  2x1 + 5x2 + 2x3 = 18,  3x1 + x2 + 5x3 = 20
   (c) 3x + 2y + 7z = 4,  2x + 3y + z = 5,  3x + 4y + z = 7
13. Solve the following system of equations by Crout's method
   (a) 2x + y + 4z = 12,  8x − 3y + 2z = 20,  4x + 11y − z = 33
   (b) 2x1 + 3x2 − 4x3 + 2x4 = −4,  x1 + 2x2 + 3x3 − 4x4 = 7,  4x1 − x2 + 2x3 − 2x4 = 7,  3x1 + 5x2 − x3 + 6x4 = 5
   (c) x1 + x2 + x3 = 1,  3x1 + x2 − 3x3 = 5,  x1 − 2x2 − 5x3 = 10
   (d) 4x + 3y + z − w = 14,  2x + 5y + 2z + w = 17,  x + 4y + 4z + 6w = 20,  3x + y − z + 5w = 12
   (e) 2x − 6y + 8z = 24,  5x + 4y − 3z = 2,  3x + y + 2z = 16
   (f) 4x + y − z = 13,  3x + 5y + 2z = 21,  2x + y + 6z = 14
14. Solve the system of equations
      4x1 + x2 + x3 = 2,  x1 + 5x2 + 2x3 = −6,  x1 + 2x2 + 3x3 = −4
    using Gauss-Jacobi method with initial approximation x^(0) = [0.5, −0.5, −0.5]^T. Perform three iterations. The exact solution is x1 = 1, x2 = −1, x3 = −1.
15. Solve by Jacobi's method
      4x + y + 3z = 17,  x + 5y + z = 14,  2x − y + 8z = 12
16. Solve the system of equations
      54x + y + z = 110,  2x + 15y + 6z = 72,  −x + 6y + 27z = 85
    by Gauss–Seidel method.
17. Using Gauss–Seidel method, solve the following system of equations
      2x1 − x2 = 7,  −x1 + 2x2 − x3 = 1,  −x2 + 2x3 = 1
    Starting with initial approximation (0, 0, 0)^T, perform three iterations.
18. Solve the linear system of equations by Gauss–Seidel method
      13x1 + 5x2 − 3x3 + x4 = 18,  2x1 + 12x2 + x3 − 4x4 = 13,  3x1 − 4x2 + 10x3 + x4 = 29,  x2 + 3x3 + 5x4 = 31
19. Solve the following system of equations by (a) Gauss–Jacobi method (b) Gauss–Seidel method
      20x + y − 2z = 17,  3x + 20y − z = −18,  2x − 3y + 20z = 25
20. Determine the largest eigenvalue in magnitude and the corresponding eigenvector of each of the given matrices by power method.
   (a) A = [ 1  6  1 ]
           [ 1  2  0 ]
           [ 0  0  3 ]
   (b) A = [ −1  1   2 ]
           [  0  1  −1 ]
           [  4 −2   9 ]
   (c) A = [  1  3  −1 ]
           [  3  2   4 ]
           [ −1  4  10 ]
   (d) A = [  5   2   1  −2 ]
           [  2   6   3  −4 ]
           [  1   3  19   2 ]
           [ −2  −4   2   1 ]
   (e) A = [ 10  −2   1 ]
           [ −2  10  −2 ]
           [  1  −2  10 ]
21. Find the absolutely smallest eigenvalue of the given matrix A by power method.
      A = [ −6  18   2 ]
          [  3  −3  −1 ]
          [  0   0   4 ]

Answers 5.2

1. x1 = 5, x2 = 6, x3 = −10, x4 = 8
2. x1 = 1.9444, x2 = 1.6111, x3 = 0.2778
3. (a)  x = 3, y = 2, z = 1 (b)  x = 1, y = 3, z = 5
(c)  x = y = z = 1 x = 4, y = −1, z = 0.5
(d) 
4. (a)  No Solution
(b)  Infinite number of solutions; x1 = 2 + x2 , x2 arbitrary, x3 = 3 − x2
(c)  x1 = 1.5, x2 = −3.5, x3 = 4.5, x4 = −2.5
5. x = 1, y = 3, z = 5
6. (a) x = 1, y = 2, z = 3   (b) x1 = 4, x2 = 1, x3 = 2    (c)  x = y = z = 1
7. x = 1, y = 0, z = -1, w = 2  
8. x = 1, y = 0.5, z = −0.5
9. x = 7, y = −9, z = 5
10. x = −4, y = 3, z = 2
11. x = 1.9444, y = 1.6111, z = 0.2778
12. (a)  x1 = 1, x2 = 0.5, x3 = −0.5 (b)  x1 = 1, x2 = 2, x3 = 3
(c)  x = 0.875, y = 1.125, z = −0.125

13. (a)  x = 3, y = 2, z = 1 (b)  x1 = 1.0, x2 = 0.5, x3 = 2.0, x4 = 0.25


(c)  x1 = 6, x2 = −7, x3 = 2 (d) x = 2, y = 2, z = 1, w = 1
(e)  x = 1, y = 3, z = 5 (f)  x = 3, y = 2, z = 1
14. x^(3) = [0.9333, −1.0733, −1.1000]^T
15. (x6, y6, z6) = (2.9773, 1.9863, 0.9917) and the iteration converges to (x, y, z) = (3, 2, 1).
16. (x, y, z)^T = (1.926, 3.573, 2.425)^T
17. x^(3) = (5.3125, 4.3125, 2.6563)^T
18. Iteration converges to (x1, x2, x3, x4)^T = (1, 2, 3, 4)^T
19. By both the methods, the iteration converges to (x, y, z)^T = (1, −1, 1)^T.
20. (a) 4; (2, 1, 0)^T   (b) 9.916; (0.173, −0.112, 1)^T   (c) 11.66; (0.025, 0.422, 1)^T
    (d) 19.837; (0.090, 0.215, 1, 0.051)^T   (e) 13.376; (0.844, −1, 0.844)^T
21. 3

5.7 Linear Operators
Let a function y = f(x) have values f(x0), f(x0 + h), f(x0 + 2h), … at the points x0, x0 + h, x0 + 2h, …, respectively. Then the arguments are equispaced and h is called the interval of differencing.
We ­define the following operators when h is interval of differencing.
(i) E is shifting operator defined by
E f ( x ) = f ( x + h)
(ii) D is forward difference operator defined by
∆f ( x ) = f ( x + h) − f ( x )
(iii) ∇ is backward difference operator defined by
∇f ( x ) = f ( x ) − f ( x − h )
(iv) δ is the central difference operator defined by
δ f(x) = f(x + h/2) − f(x − h/2)
(v) μ is the mean value operator defined by
μ f(x) = (1/2) [f(x + h/2) + f(x − h/2)]
If f(x) is an analytic function, then the differentiation operator D is defined as
D f(x) = (d/dx) f(x) = f′(x)

Relation between Operators E and D


We have E f(x) = f(x + h), and by Taylor series expansion

E f(x) = f(x + h) = f(x) + h f′(x) + (h²/2!) f″(x) + (h³/3!) f‴(x) + …
       = [1 + hD + (hD)²/2! + (hD)³/3! + …] f(x) = e^{hD} f(x)

∴ E = e^{hD}

We define the operator U by U = hD.
∴ E = e^U        (5.3)
Now,
δ f(x) = f(x + h/2) − f(x − h/2) = (E^{1/2} − E^{−1/2}) f(x)

∴ δ = E^{1/2} − E^{−1/2} = e^{hD/2} − e^{−hD/2} = e^{U/2} − e^{−U/2} = 2 sinh(U/2)

and
μ f(x) = (1/2)[f(x + h/2) + f(x − h/2)] = (1/2)(E^{1/2} + E^{−1/2}) f(x)

∴ μ = (1/2)(E^{1/2} + E^{−1/2}) = (1/2)(e^{U/2} + e^{−U/2})        (from (5.3))
    = cosh(U/2) ≥ 1

Now, cosh²(U/2) = 1 + sinh²(U/2)

∴ μ² = 1 + (1/4)δ²
∴ μ = √(1 + δ²/4)

The formulae δ = 2 sinh(U/2) and μ = √(1 + δ²/4) are frequently used.
Now, we find the relations between the operators.

(i) E in terms of other operators

Δf(x) = f(x + h) − f(x) = (E − 1) f(x)
∴ Δ = E − 1
∴ E = 1 + Δ        (5.4)

∇f(x) = f(x) − f(x − h) = (1 − E^{−1}) f(x)
∴ ∇ = 1 − E^{−1}
∴ E = (1 − ∇)^{−1}        (5.5)

δ f(x) = f(x + h/2) − f(x − h/2) = (E^{1/2} − E^{−1/2}) f(x)
∴ δ = E^{1/2} − E^{−1/2}
∴ E^{1/2} δ = E − 1
∴ E − δ E^{1/2} − 1 = 0
∴ E^{1/2} = (δ + √(δ² + 4))/2
∴ E = (1/4)(δ² + δ² + 4 + 2δ √(δ² + 4))
    = 1 + δ²/2 + δ √(1 + δ²/4)        (5.6)
(ii) Δ in terms of other operators

Δf(x) = f(x + h) − f(x) = (E − 1) f(x)
∴ Δ = E − 1 = (1 − ∇)^{−1} − 1        from (5.5)
    = δ²/2 + δ √(1 + δ²/4)        from (5.6)
    = e^U − 1        from (5.3)

(iii) ∇ in terms of other operators

∇f(x) = f(x) − f(x − h) = (1 − E^{−1}) f(x)
∴ ∇ = 1 − E^{−1}
    = 1 − (1 + Δ)^{−1}        from (5.4)

Now, ∇ = 1 − (E + E^{−1} − 2) + E − 2
       = E − 1 − (E^{1/2} − E^{−1/2})²
       = E − 1 − δ²
       = [1 + δ²/2 + δ √(1 + δ²/4)] − 1 − δ²        from (5.6)
       = −δ²/2 + δ √(1 + δ²/4)

and ∇ = 1 − E^{−1} = 1 − e^{−U}        from (5.3)

(iv) δ in terms of other operators

δ f(x) = f(x + h/2) − f(x − h/2) = (E^{1/2} − E^{−1/2}) f(x)
∴ δ = E^{1/2} − E^{−1/2}
    = E^{−1/2} (E − 1) = Δ (1 + Δ)^{−1/2}        from (5.4)

Also, δ = (1 − E^{−1}) E^{1/2} = ∇ (1 − ∇)^{−1/2}        from (5.5)

and we have already proved δ = 2 sinh(U/2).
(v) μ in terms of other operators

μ f(x) = (1/2)[f(x + h/2) + f(x − h/2)] = (1/2)(E^{1/2} + E^{−1/2}) f(x)
∴ μ = (1/2)(E^{1/2} + E^{−1/2})
    = (1/2)(E + 1) E^{−1/2}
    = (1 + Δ/2)(1 + Δ)^{−1/2}        from (5.4)

Also, μ = (1/2)(1 + E^{−1}) E^{1/2}
        = (1/2)(2 − ∇)(1 − ∇)^{−1/2}        from (5.5)
        = (1 − ∇/2)(1 − ∇)^{−1/2}

and μ = (1/2)(E^{1/2} + E^{−1/2}) = (1/2)(e^{hD/2} + e^{−hD/2}) = (1/2)(e^{U/2} + e^{−U/2}) = cosh(U/2)
(vi) U in terms of other operators

E = e^U
∴ U = log E
    = log(1 + Δ)        from (5.4)
    = log(1 − ∇)^{−1} = −log(1 − ∇)        from (5.5)

As δ = 2 sinh(U/2),   U = 2 sinh^{−1}(δ/2)
We give the above relations in the table below.

Relations between shift operator, difference operators, mean value operator and differentiation operator

      in terms of E            in terms of Δ              in terms of ∇               in terms of δ              in terms of U
E     E                        Δ + 1                      (1 − ∇)^{−1}                1 + δ²/2 + δ√(1 + δ²/4)    e^U
Δ     E − 1                    Δ                          (1 − ∇)^{−1} − 1            δ²/2 + δ√(1 + δ²/4)        e^U − 1
∇     1 − E^{−1}               1 − (1 + Δ)^{−1}           ∇                           −δ²/2 + δ√(1 + δ²/4)       1 − e^{−U}
δ     E^{1/2} − E^{−1/2}       Δ(1 + Δ)^{−1/2}            ∇(1 − ∇)^{−1/2}             δ                          2 sinh(U/2)
μ     (E^{1/2} + E^{−1/2})/2   (1 + Δ/2)(1 + Δ)^{−1/2}    (1 − ∇/2)(1 − ∇)^{−1/2}     √(1 + δ²/4)                cosh(U/2)
U     log E                    log(1 + Δ)                 −log(1 − ∇)                 2 sinh^{−1}(δ/2)           U

If P is any of the above operators, then


P ( f ( x ) ± g ( x )) = P ( f ( x )) ± P ( g ( x ))

and P (c1 f ( x )) = c1 P ( f ( x ))

and hence all the above operators are linear operators.
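
These operator relations are easy to sanity-check numerically on a sample function. The short Python sketch below verifies E = e^{hD} and μ² = 1 + δ²/4 on f(x) = x³; the function f, the point x and the step h are arbitrary choices of ours.

# Numerical check of two operator relations on f(x) = x**3 with step h (our own example)
h = 0.1
x = 2.0
f = lambda t: t**3

delta = lambda g, t: g(t + h/2) - g(t - h/2)            # central difference operator
mu    = lambda g, t: 0.5 * (g(t + h/2) + g(t - h/2))    # mean value operator

# E = e^{hD}: for a cubic the Taylor series terminates, so e^{hD} reproduces the shift exactly
print(f(x + h), f(x) + h*(3*x**2) + (h**2/2)*(6*x) + (h**3/6)*6)

# mu^2 = 1 + delta^2/4, applied to f at x
print(mu(lambda t: mu(f, t), x), f(x) + 0.25 * delta(lambda t: delta(f, t), x))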

5.7.1  Forward Differences


The differences y1 − y0 , y2 − y1 , …, yn − yn −1 are denoted by ∆y0 , ∆y1 , …, ∆yn −1, respectively, and
are called first-order forward differences.
Similarly, the second-order forward differences are defined by ∆ 2 yr = ∆yr +1 − ∆yr
In general, pth-order forward differences are defined as
∆ p yr = ∆ p −1 yr +1 − ∆ p −1 yr
These differences are shown below in forward difference table.
Forward difference table
x y Dy D2y D3y D4y D5y
x0 y0
∆y0
x0 + h y1 ∆2y0
∆y1 ∆3y0
x0 + 2h y2 ∆2y1 D4y0
∆y2 ∆3y1 D5y0
x0 + 3h y3 ∆2y2 D4y1
∆y3 ∆3y2
x0 + 4h y4 ∆2y3
∆y4
x0 + 5h y5

Remark 5.8: Any higher order forward difference can be expressed in terms of entries.
We have
∆ 2 y0 = ∆y1 − ∆y0 = ( y2 − y1 ) − ( y1 − y0 ) = y2 − 2 y1 + y 0

∆ 3 y0 = ∆ 2 y1 − ∆ 2 y0 = ( y3 − 2 y2 + y1 ) − ( y2 − 2 y1 + y0 ) = y3 − 3 y2 + 3 y1 − y0

∆ 4 y0 = ∆ 3 y1 − ∆ 3 y0 = ( y4 − 3 y3 + 3 y2 − y1 ) − ( y3 − 3 y2 + 3 y1 − y0 ) = y4 − 4 y3 + 6 y 2 − 4 y1 + y0

In general,
Δⁿ y0 = yn − ⁿC1 y_{n−1} + ⁿC2 y_{n−2} − ⁿC3 y_{n−3} + … + (−1)ⁿ y0

This result can be proved as follows:
Δⁿ y0 = (E − 1)ⁿ y0 = [Eⁿ − ⁿC1 E^{n−1} + ⁿC2 E^{n−2} − ⁿC3 E^{n−3} + … + (−1)ⁿ] y0
      = yn − ⁿC1 y_{n−1} + ⁿC2 y_{n−2} − ⁿC3 y_{n−3} + … + (−1)ⁿ y0
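
A forward difference table like the one above is conveniently generated by repeatedly differencing the y-column. The Python sketch below prints each column of differences for tabulated values of an arbitrarily chosen cubic; it also illustrates the fact, proved later in Remark 5.11, that the third differences of a cubic are constant and the higher differences vanish.

import numpy as np

def forward_difference_table(y):
    """Return the columns y, Dy, D^2 y, ... of the forward difference table (sketch)."""
    cols = [np.array(y, dtype=float)]
    while len(cols[-1]) > 1:
        cols.append(np.diff(cols[-1]))   # Delta y_r = y_{r+1} - y_r
    return cols

# Tabulate f(x) = 2x^3 + 1 at x = 0, 1, ..., 5 (h = 1); the third differences are 2*3!*1^3 = 12
x = np.arange(6)
for col in forward_difference_table(2 * x**3 + 1):
    print(col)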

5.7.2 Backward Differences
The differences y1 − y0 , y2 − y1, y3 − y2 , …, yn − yn −1 are denoted by ∇y1, ∇y2 , …, ∇yn, respec­tively,
and are called first-order backward differences.
Similarly, the second-order backward differences are defined by ∇ 2 yr = ∇yr − ∇yr −1
In general, pth-order backward differences are defined as
∇ p yr = ∇ p −1 yr − ∇ p −1 yr −1
These differences are shown below in backward difference table.
Backward difference table

x y —y —2y —3y —4y —5y


x0 y0
∇y1
x0 + h y1 ∇2y2
∇y2 ∇3y3
x0 + 2h y2 ∇2y3 ∇4y4
∇y3 ∇3y4 ∇5y5
x0 + 3h y3 ∇2y4 ∇4y5
∇y4 ∇3y5
x0 + 4h y4 ∇ y5
2

∇y5
x0 + 5h y5

Remark 5.9:
∇ⁿ yn = (1 − E^{−1})ⁿ yn = (1 − ⁿC1 E^{−1} + ⁿC2 E^{−2} − ⁿC3 E^{−3} + … + (−1)ⁿ E^{−n}) yn
      = yn − ⁿC1 y_{n−1} + ⁿC2 y_{n−2} − ⁿC3 y_{n−3} + … + (−1)ⁿ y0

5.7.3 Central Differences
The differences y1 − y0, y2 − y1, y3 − y2, …, yn − y_{n−1} are denoted by δy_{1/2}, δy_{3/2}, δy_{5/2}, …, δy_{n−1/2}, respectively, and are called first-order central differences.
Similarly, the second-order central differences are defined by δ²y_r = δy_{r+1/2} − δy_{r−1/2}.

In general, pth-order central differences are defined by

δ^p y_r = δ^(p-1) y_{r+1/2} - δ^(p-1) y_{r-1/2}

These differences are shown in the central difference table below.


Central difference table

x          y      δy       δ²y    δ³y      δ⁴y    δ⁵y
x0         y0
                  δy1/2
x0 + h     y1              δ²y1
                  δy3/2            δ³y3/2
x0 + 2h    y2              δ²y2             δ⁴y2
                  δy5/2            δ³y5/2           δ⁵y5/2
x0 + 3h    y3              δ²y3             δ⁴y3
                  δy7/2            δ³y7/2
x0 + 4h    y4              δ²y4
                  δy9/2
x0 + 5h    y5

Remark 5.10: (i) In the central difference table, the central differences on the same horizontal
line have the same suffix. The differences of odd order are known only for half values of suffixes
and the differences of even order for integral values of suffixes.
(ii) The differences in the three tables are the same; only the notation used for them differs.

5.7.4  Factorial Polynomials


The factorial polynomial is of special importance in the theory of finite differences. It helps us in finding the successive forward differences of a polynomial directly by a simple rule of differentiation, and also in recovering a polynomial by a simple rule of integration when its differences are given in factorial notation. The factorial polynomial of degree n (n a positive integer) is defined by
x^(n) = x(x - 1)(x - 2)…(x - n + 1),  with x^(0) = 1,
if the interval of differencing is 1; if the interval of differencing is h, then
x^(n) = x(x - h)(x - 2h)…(x - (n - 1)h);   x^(0) = 1

We have
∆x^(n) = (x + h)^(n) - x^(n)
       = (x + h)·x(x - h)…(x - (n - 2)h) - x(x - h)…(x - (n - 1)h)
       = x(x - h)…(x - (n - 2)h)[(x + h) - (x + h - nh)]
       = nh·x(x - h)…(x - (n - 2)h) = nh·x^(n-1)
Theorem 5.1  ∆^r x^(n) = [n!/(n - r)!] h^r x^(n-r),   r ≤ n
                       = 0,                           r > n
Proof: We prove the result by the principle of mathematical induction.
For r = 1,
∆x^(n) = ∆[x(x - h)(x - 2h)…(x - (n - 1)h)]
       = (x + h)·x(x - h)…(x - (n - 2)h) - x(x - h)(x - 2h)…(x - (n - 1)h)
       = x(x - h)…(x - (n - 2)h)[x + h - x + nh - h]
       = nh·x(x - h)…(x - (n - 2)h)
       = nh·x^(n-1) = [n!/(n - 1)!] h x^(n-1)
∴ the result is true for r = 1.
Let the result be true for r = p, p ≤ n - 1, so that
∆^p (x + h)^(n) = [n!/(n - p)!] h^p (x + h)^(n-p)
∆^p x^(n) = [n!/(n - p)!] h^p x^(n-p)
∴ ∆^(p+1) x^(n) = ∆^p (x + h)^(n) - ∆^p x^(n)
       = [n!/(n - p)!] h^p [(x + h)^(n-p) - x^(n-p)]
       = [n!/(n - p)!] h^p [(x + h)·x(x - h)…(x - (n - p - 2)h) - x(x - h)…(x - (n - p - 1)h)]
       = [n!/(n - p)!] h^p x(x - h)…(x - (n - p - 2)h)[x + h - x + (n - p - 1)h]
       = [n!/(n - p)!] h^p x(x - h)…(x - (n - p - 2)h)·(n - p)h
       = [n!/(n - p)!] (n - p) h^(p+1) x^(n-p-1) = [n!/(n - p - 1)!] h^(p+1) x^(n-p-1)
∴ the result holds for r = p + 1.
∴ By the principle of mathematical induction the result holds for all r = 1, 2, …, n.
For r = n we have
∆^r x^(n) = ∆^n x^(n) = (n!/0!) h^n x^(0) = n! h^n

which is constant.
∴ ∆ r x ( n ) = 0   for all r > n

Remark 5.11: (i) The nth-order forward differences of a polynomial of degree n,
a0 x^n + a1 x^(n-1) + … + a_{n-1} x + a_n, are constant with value a0 n! h^n, and all forward differences of order greater than n are zero.
Proof: a0 x^n + a1 x^(n-1) + … + a_{n-1} x + a_n = a0 x^(n) + a factorial polynomial of degree n - 1
∴ By the above theorem,
∆^n (a0 x^n + a1 x^(n-1) + … + a_{n-1} x + a_n) = a0 ∆^n x^(n) + 0 = a0 n! h^n
and ∆^(n+p) (a0 x^n + a1 x^(n-1) + … + a_n) = 0;   p = 1, 2, 3, …

(ii) A polynomial can be changed to a factorial polynomial by synthetic division, as will be clear in the examples.
We have defined the factorial x^(n) for non-negative values of n. When n ≥ 1, we have x^(n) = (x - n + 1) x^(n-1) (with h = 1), and requiring that this formula also hold for n = 0, we get
x^(-1) = 1/(x + 1). Using the formula repeatedly for n = -1, -2, … we obtain
x^(-n) = 1/[(x + 1)(x + 2)…(x + n)] = 1/(x + n)^(n);   n = 1, 2, 3, …
With this definition, the formulas x^(n) = (x - n + 1) x^(n-1) and ∆x^(n) = n x^(n-1) hold for negative values of n also.
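The synthetic-division procedure of Remark 5.11(ii) can be sketched in Python as follows (our own illustrative code with h = 1; the function name is ours). It reproduces the conversion carried out by hand in Example 5.44 below.

def to_factorial_polynomial(coeffs):
    """coeffs = [a0, a1, ..., an] for a0*x^n + ... + an (h = 1).
    Returns [b_n, ..., b_1, b_0] with f(x) = b_n x^(n) + ... + b_1 x^(1) + b_0."""
    work = list(coeffs)
    factorial = []                 # collects b_0, b_1, ... in that order
    root = 0
    while len(work) > 1:
        # synthetic division of the current quotient by (x - root)
        for i in range(1, len(work)):
            work[i] += root * work[i - 1]
        factorial.append(work.pop())   # remainder = next factorial coefficient
        root += 1
    factorial.append(work[0])          # leading coefficient
    return factorial[::-1]

# Example 5.44: 2x^3 + 3x^2 - 5x + 4  ->  2x^(3) + 9x^(2) + 0·x^(1) + 4
print(to_factorial_polynomial([2, 3, -5, 4]))   # [2, 9, 0, 4]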

5.7.5  Error Propagation


First, we demonstrate the propagation of round-off errors. Assume that the maximum round-off error in the given data is e. We consider the worst case, which occurs when every entry carries a round-off error of maximum magnitude e and consecutive entries have errors of opposite signs.

 e
         -2e
-e                4e
         2e              -8e
 e                -4e            16e
         -2e             8e              -32e
-e                4e             -16e
         2e              -8e
 e                -4e
         -2e
-e
666 | Chapter 5

Thus, we observe that in the worst possible case the error doubles with every new difference column introduced.
It may also happen that a single entry of the data has been written incorrectly. Then that entry carries some error e while every other entry is error-free. The following table shows the propagation of this error.
y-error   ∆y       ∆²y      ∆³y      ∆⁴y      ∆⁵y      ∆⁶y
          0                 0                 0
0                  0                 0                 e
          0                 0                 e
0                  0                 e                 -6e
          0                 e                 -5e
0                  e                 -4e               15e
          e                 -3e               10e
e                  -2e               6e                -20e
          -e                3e                -10e
0                  e                 -4e               15e
          0                 -e                5e
0                  0                 e                 -6e
          0                 0                 -e
0                  0                 0                 e
          0                 0                 0

We observe that the error propagates in a triangular pattern and grows quickly. Apart from signs, the nth difference column contains the binomial coefficients nC0, nC1, nC2, …, nCn multiplied by e. The erroneous entry lies in the row that contains the numerically largest of these binomial coefficients.
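This binomial-coefficient pattern is easy to verify numerically. The following Python sketch (our own illustration) injects a single error e into otherwise exact (zero) data and prints the successive forward-difference columns.

# Inject a single error e = 1 into otherwise exact (zero) data and
# watch it propagate through successive forward differences.
e = 1
y = [0]*6 + [e] + [0]*6            # thirteen entries, error in the middle one
col = y
for order in range(1, 7):
    col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
    print(order, col)
# order 6 prints [1, -6, 15, -20, 15, -6, 1] (times e): the binomial
# coefficients 6C0, 6C1, ..., 6C6 with alternating signs.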

5.7.6 Missing Entries in the Data


Suppose n entries of the data are given and some other entries are missing. Treating the missing entries as unknowns, complete the difference table up to the nth-order differences. Since the data is then fitted by a polynomial of degree n - 1, all nth-order differences are zero, and we obtain as many equations as there are unknowns. Solving these equations gives the values of the missing entries.
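For a single missing entry this procedure amounts to one linear equation, as the following Python sketch (our own illustration, using the data of Example 5.38 below) shows.

# Example 5.38: y(0..4) = 1, 3, 9, ?, 81, with the fourth differences vanishing,
# i.e. y4 - 4*y3 + 6*y2 - 4*y1 + y0 = 0.  Solve for the missing y3.
from math import comb

y = [1, 3, 9, None, 81]
n = len(y) - 1                                    # order of the vanishing difference
known = sum((-1)**k * comb(n, k) * y[n - k]       # contribution of the known entries
            for k in range(n + 1) if y[n - k] is not None)
missing_index = y.index(None)
coeff = (-1)**(n - missing_index) * comb(n, n - missing_index)
y[missing_index] = -known / coeff
print(y[missing_index])                           # 31.0, the value found in Example 5.38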

Example 5.33: Show that
   (i) ∇ + ∆ = ∆/∇ - ∇/∆
  (ii) ∆∇ = ∇∆ = ∆ - ∇ = δ²
 (iii) µδ = (1/2)(∆ + ∇) = (1/2)(E - E^(-1))
  (iv) µ = (1 + δ²/4)^(1/2)
   (v) ∆f_k² = (f_k + f_{k+1}) ∆f_k
  (vi) 1 + µ²δ² = (1 + δ²/2)²
 (vii) 1 + ∆ = e^(hD)
(viii) ∇² = h²D² - h³D³ + (7/12) h⁴D⁴ - …
Solution:
(i) ∇ + ∆ = (1 - E^(-1)) + (E - 1) = E - E^(-1)                                            (1)
and
∆/∇ - ∇/∆ = (E - 1)/(1 - E^(-1)) - (1 - E^(-1))/(E - 1)
          = E(1 - E^(-1))/(1 - E^(-1)) - E^(-1)(E - 1)/(E - 1)
          = E - E^(-1)                                                                     (2)
From (1) and (2),
∇ + ∆ = ∆/∇ - ∇/∆

(ii) ∆∇ = (E - 1)(1 - E^(-1)) = E^(1/2)(E^(1/2) - E^(-1/2)) · E^(-1/2)(E^(1/2) - E^(-1/2))
        = (E^(1/2) - E^(-1/2))(E^(1/2) - E^(-1/2)) = δ·δ = δ²                              (1)
∇∆ = (1 - E^(-1))(E - 1) = E^(-1/2)(E^(1/2) - E^(-1/2)) · E^(1/2)(E^(1/2) - E^(-1/2)) = δ² (2)
∆ - ∇ = (E - 1) - (1 - E^(-1)) = E - 2 + E^(-1) = (E^(1/2) - E^(-1/2))² = δ²               (3)
From (1), (2) and (3),
∆∇ = ∇∆ = ∆ - ∇ = δ²

(iii) µδ = (1/2)(E^(1/2) + E^(-1/2))(E^(1/2) - E^(-1/2)) = (1/2)(E - E^(-1))               (1)
(1/2)(∆ + ∇) = (1/2)[(E - 1) + (1 - E^(-1))] = (1/2)(E - E^(-1))                           (2)
From (1) and (2),
µδ = (1/2)(∆ + ∇) = (1/2)(E - E^(-1))

(iv) (1 + δ²/4)^(1/2) = (1/2)(4 + δ²)^(1/2) = (1/2)[4 + (E^(1/2) - E^(-1/2))²]^(1/2)
   = (1/2)[4 + E - 2 + E^(-1)]^(1/2) = (1/2)[E + 2 + E^(-1)]^(1/2)
   = (1/2)[(E^(1/2) + E^(-1/2))²]^(1/2) = (1/2)(E^(1/2) + E^(-1/2)) = µ

(v) ∆f_k² = f_{k+1}² - f_k² = (f_k + f_{k+1})(f_{k+1} - f_k) = (f_k + f_{k+1}) ∆f_k

(vi) 1 + µ²δ² = 1 + (1/4)(E^(1/2) + E^(-1/2))² δ²
   = 1 + (1/4)[(E^(1/2) - E^(-1/2))² + 4 E^(1/2)·E^(-1/2)] δ²
   = 1 + (1/4)(δ² + 4) δ² = 1 + δ² + δ⁴/4 = (1 + δ²/2)²

(vii) (1 + ∆) f(x) = E f(x) = f(x + h)
   = f(x) + h f′(x) + (h²/2!) f″(x) + (h³/3!) f‴(x) + …
   = [1 + hD + (hD)²/2! + (hD)³/3! + …] f(x)
   = e^(hD) f(x)
∴ 1 + ∆ = e^(hD)

(viii) ∇² = (1 - E^(-1))² = (1 - e^(-hD))²
   = [1 - (1 - hD + h²D²/2! - h³D³/3! + …)]²
   = [hD - h²D²/2! + h³D³/3! - …]²
   = h²D² - 2(h³D³/2!) + (1/4 + 2/3!) h⁴D⁴ - …
   = h²D² - h³D³ + (7/12) h⁴D⁴ - …

Example 5.34: Prove that
(i) hD = log(1 + ∆) = -log(1 - ∇) = sinh^(-1)(µδ)
(ii) (∆²/E) e^x · (E e^x)/(∆² e^x) = e^x

Solution: (i) e^(hD) f(x) = [1 + hD + h²D²/2! + h³D³/3! + …] f(x)
   = f(x) + h f′(x) + (h²/2!) f″(x) + (h³/3!) f‴(x) + … = f(x + h) = E f(x)
∴ e^(hD) = E = 1 + ∆
∴ hD = log(1 + ∆)                                                                          (1)
Also, e^(hD) = E
∴ e^(-hD) = E^(-1) = 1 - ∇
∴ -hD = log(1 - ∇)
∴ hD = -log(1 - ∇)                                                                         (2)
and sinh hD = (e^(hD) - e^(-hD))/2 = (1/2)(E - E^(-1))
   = (1/2)(E^(1/2) + E^(-1/2))(E^(1/2) - E^(-1/2)) = µδ
∴ hD = sinh^(-1)(µδ)                                                                       (3)
From (1), (2) and (3),
hD = log(1 + ∆) = -log(1 - ∇) = sinh^(-1)(µδ)

(ii) (∆²/E) e^x · (E e^x)/(∆² e^x) = ∆²(E^(-1) e^x) · (E e^x)/(∆² e^x) = ∆²(e^(x-h)) · (E e^x)/(∆² e^x)
   = e^(-h) ∆² e^x · (E e^x)/(∆² e^x) = e^(-h) E e^x = e^(-h) e^(x+h) = e^x

Example 5.35: Using the method of separation of symbols, show that
∆^n u_{x-n} = u_x - n u_{x-1} + [n(n - 1)/2!] u_{x-2} - [n(n - 1)(n - 2)/3!] u_{x-3} + … + (-1)^(n-1) n u_{x-n+1} + (-1)^n u_{x-n}
Solution:
∆^n u_{x-n} = (E - 1)^n u_{x-n}
   = [E^n - nC1 E^(n-1) + nC2 E^(n-2) - nC3 E^(n-3) + … + nC_{n-1} (-1)^(n-1) E + nCn (-1)^n] u_{x-n}
   = u_x - n u_{x-1} + [n(n - 1)/2!] u_{x-2} - [n(n - 1)(n - 2)/3!] u_{x-3} + … + (-1)^(n-1) n u_{x-n+1} + (-1)^n u_{x-n}

Example 5.36: Prove the following identities
(i) u0 + u1 x + u2 x² + … to ∞ = u0/(1 - x) + x∆u0/(1 - x)² + x²∆²u0/(1 - x)³ + … to ∞
(ii) ∆^n y0 = yn - nC1 y_{n-1} + nC2 y_{n-2} - … + (-1)^n y0

Solution:
(i) R.H.S. = [1/(1 - x) + x(E - 1)/(1 - x)² + x²(E - 1)²/(1 - x)³ + …] u0
   = [1/(1 - x)] [1 + x(E - 1)/(1 - x) + (x(E - 1)/(1 - x))² + …] u0
   = [1/(1 - x)] · [1/(1 - x(E - 1)/(1 - x))] u0
   = [1/(1 - x - x(E - 1))] u0 = [1/(1 - xE)] u0 = (1 - xE)^(-1) u0
   = (1 + xE + x²E² + x³E³ + …) u0
   = u0 + x u1 + x² u2 + x³ u3 + …
   = u0 + u1 x + u2 x² + u3 x³ + … = L.H.S.

(ii) ∆^n y0 = (E - 1)^n y0 = (1 - E^(-1))^n E^n y0 = (1 - E^(-1))^n yn
   = [1 - nC1 E^(-1) + nC2 E^(-2) - nC3 E^(-3) + … + (-1)^n E^(-n)] yn
   = yn - nC1 y_{n-1} + nC2 y_{n-2} - nC3 y_{n-3} + … + (-1)^n y0


Example 5.37: Given that y5 = 4, y6 = 3, y7 = 4, y8 = 10 and y9 = 24. Find the value of ∆ 4 y5
by using difference table.
Solution: Difference table for given data is

x y Dy D2y D3y D4y


5 4
−1
6 3 2
1 3
7 4 5 0
6 3
8 10 8
14
9 24

From the table


∆ 4 y5 = 0

Example 5.38: Find the missing entry in the following table

x 0 1 2 3 4
y(x) 1 3 9 − 81

Solution: Four entries are given


∴ 4th-order differences are zero
∴ ∆ 4 y0 = 0 
∴ ( E − 1)4 y0 = ( E 4 − 4 E 3 + 6 E 2 − 4 E + 1) y0 = 0 
∴ y4 − 4 y3 + 6 y2 − 4 y1 + y0 = 0 
∴ y3 = (1/4)(y4 + 6y2 - 4y1 + y0) = (1/4)(81 + 6(9) - 4(3) + 1) = 31

Other method
Let y3 = a
Then, difference table is

x y Dy D2y D3y D4y


0 1
2
1 3 4
6 a − 19
2 9 a − 15 124 − 4a
a-9 105 − 3a
3 a 90 − 2a
81 − a
4 81

Four entries are given


∴ 4th-order differences are zero
∴ 124 - 4a = 0
∴ a = 31
∴ y(3) = 31

Example 5.39: Find the missing values in the following table:

x 0 5 10 15 20 25
y 6 10 − 17 − 31
Solution: Let y (10) = a, y (20) = b
Then difference table is
x y Dy D2y D3y D4y
0 6
4
5 10 a − 14
a − 10 41 − 3a
10 a 27 − 2a 6a + b − 102
17 − a 3a + b - 61
15 17 a + b − 34 143 - 4a - 4b
b - 17 82 − a − 3b
20 b 48 − 2b
31 − b
25 31

As four entries are given, so 4th-order differences are zero


∴ 6a + b − 102 = 0
143 - 4a - 4b = 0
∴ 6a + b = 102
4a + 4b = 143
∴ By Cramer's rule
a = (102·4 - 143·1)/(6·4 - 4·1) = (408 - 143)/(24 - 4) = 265/20 = 13.25
b = (6·143 - 4·102)/(6·4 - 4·1) = (858 - 408)/20 = 450/20 = 22.50
Other method
Four entries are given
∴ ∆⁴y0 = 0 and ∆⁴y5 = 0
Now, ∆⁴y0 = 0
⇒ (E - 1)⁴ y0 = 0
⇒ (E⁴ - 4E³ + 6E² - 4E + 1) y0 = 0
∴ y20 - 4y15 + 6y10 - 4y5 + y0 = 0
∴ 6y10 + y20 = 4y15 + 4y5 - y0 = 4(17) + 4(10) - 6 = 102                                   (1)

Similarly, ∆ 4 y5 = 0 gives

  y25 − 4 y20 + 6 y15 − 4 y10 + y5 = 0 


∴ 4 y10 + 4 y20 = y25 + 6 y15 + y5 = 31 + 6 (17) + 10 = 143 (2)

Solving (1) and (2) by Cramer’s rule as above, we have


y10 = 13.25, y20 = 22.50

Example 5.40: One entry in the following table of a polynomial of degree 4 is incorrect. Correct
the entry by locating it.

x 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0
y 1.0000 1.5191 2.0736 2.6611 3.2816 3.9375 4.6363 5.3771 6.1776 7.0471 8.0

Solution: Difference table for given data is


x y Dy D2y D3y D4y
1.0 1.0000
0.5191
1.1 1.5191 0.0354
0.5545 −0.0024
1.2 2.0736 0.0330 0.0024
0.5875  0.0000
1.3 2.6611 0.0330 0.0024
0.6205  0.0024
1.4 3.2816 0.0354 0.0051 + e
0.6559 0.0075 + e
1.5 3.9375 0.0429 + e −0.0084 - 4e
0.6988 + e −0.0009 − 3e
1.6 4.6363 + e 0.0420 − 2e 0.0186 + 6e
0.7408 - e 0.0177 + 3e
1.7 5.3771 0.0597 + e −0.0084 - 4e
0.8005 0.0093 - e
1.8 6.1776 0.0690 0.0051 + e
0.8695 0.0144
1.9 7.0471 0.0834
0.9529
2.0 8.0000

Since the degree of polynomial is 4, the fourth differences must be constant. Numerically largest
fourth difference is 0.0186 which is in the row of x = 1.6. Thus y1.6 has error. Let error is e then
error propagation is shown in the table.
As 4th-order differences are constant

∴ 0.0186 + 6e  = 0.0024
∴ 6e = –0.0162
∴ e = –0.0027
∴ y1.6 = 4.6363 – 0.0027
= 4.6336

 1 
Example 5.41: Evaluate ∆ 2  2
 x + 5 x + 6 

 1 
Solution: ∆ 2  
 ( x + 2) ( x + 3) 
(−2)
= ∆ 2 ( x + 1)



(
= ∆ −2 ( x + 1)
(−3)
)
(−4)
= ( −2) ( −3) ( x + 1)

6 6
= =
( x + 2) ( x + 3) ( x + 4) ( x + 5) x 4 + 14 x 3 + 71x 2 + 154 x + 120

 5 x + 12 
Example 5.42: Evaluate ∆ 2  2
 x + 5 x + 6 
Solution:

 5 x + 12   5 x + 12  2  2 3 
∆2  2 = ∆2  =∆  +    (By suppression method)
 x + 5 x + 6   ( x + 2) ( x + 3)   x + 2 x + 3

(−1) (−1)  (−2) (−2) 


= ∆ 2  2 ( x + 1) + 3 ( x + 2) = ∆  −2 ( x + 1) − 3 ( x + 2)
   
(−3) (−3) 4 6
= 4 ( x + 1) + 6 ( x + 2) = +
( x + 2) ( x + 3) ( x + 4) ( x + 3) ( x + 4) ( x + 5) 
10 x + 32 2 (5 x + 16 )
= = 4
( )( )( )( )
x + 2 x + 3 x + 4 x + 5 x + 14 x 3
+ 71x 2 + 154 x + 120


 1 
Example 5.43: Evaluate ∆  
 x ( x + 4 ) ( x + 8) 

 1  (−3) (−4)
Solution: ∆   = ∆ ( x − 4 ) = ( −3)( 4 ) ( x − 4 )
 x ( x + 4 ) ( x + 8) 
12
=−    (∵ h = 4 )
x ( x + 4 ) ( x + 8) ( x + 12) 

Example 5.44: Express the polynomial f ( x ) = 2 x 3 + 3 x 2 − 5 x + 4 in factorial notation and find


its successive differences. Also obtain a function whose first-order finite difference is f (x)

Solution: By synthetic division


0 ) 2 3 −5 4
0 0 0
1) 2 3 −5 4
2 5
2) 2 5 0
4
2 9
∴ f ( x ) = 2 x 3 + 3 x 2 − 5 x + 4 = 2 x (3) + 9 x (2) + 4

∴ ∆f ( x ) = 6 x (2) + 18 x (1) = 6 x ( x − 1) + 18 x = 6 x 2 + 12 x

∆ 2 f ( x ) = 12 x ( ) + 18 = 12 x + 18
1

∆ 3 f ( x ) = 12

∆ n f ( x ) = 0; n ∈ N , n > 3

Let D F ( x ) = f ( x ) = 2 x (3) + 9 x (2) + 4

2 (4) 9 (3)
∴ F (x) = x + x + 4x + C  (1)
4 3
1
= x ( x − 1) ( x − 2) ( x − 3) + 3 x ( x − 1) ( x − 2) + 4 x + C
2 
1 4
=  x − 6 x + 11x − 6 x  + 3  x − 3 x + 2 x  + 4 x + C
3 2 3 2

2 
1 4
=  x − 7 x + 14 x  + C
2

2 
∴ function whose first-order finite difference is f (x) is
1 4
2
( )
x − 7 x 2 + 14 x + C where C is
­arbitrary constant.

Example 5.45: Express f (x) = x3 – 2x2 + x – 1 in factorial notation and show that ∆ 4 f ( x ) = 0
Solution By synthetic division

0) 1 −2 1 −1
0 0 0
1) 1 − 2 1 −1
1 −1
2) 1 − 1 0
2
1 1
∴ f ( x ) = x 3 − 2 x 2 + x − 1 = x (3) + x (2) − 1

∴ ∆f ( x ) = 3 x (2) + 2 x (1)


∆ 2 f ( x ) = 6 x( ) + 2
1

∆3 f ( x) = 6

∆4 f (x) = 0


Example 5.46: Evaluate ∆ 3 ((1 − x ) (1 − 2 x ) (1 − 3 x ))


Solution: We have
f ( x ) = (1 − x ) (1 − 2 x ) (1 − 3 x ) = −6 x 3 + 11x 2 − 6 x + 1
By synthetic division
0) −611 − 6 1
0 0 0
1) −6 11 − 6 1
−6 5
2) −6 5 −1 
− 12
−6 −7

f ( x ) = −6 x 3 + 11x 2 − 6 x + 1 = −6 x ( ) − 7 x ( ) − x ( ) + 1 
3 2 1
\
\ ∆ 3 f ( x ) = −6 (3)( 2)(1) = −36

Example 5.47: Find the second-order difference of the polynomial x 4 − 12 x 3 + 42 x 2 − 30 x + 9
with interval of differencing h = 2
Solution: By synthetic division
0 ) 1 −12 42 −30 9
0 0 0 0
2) 1 −12 42 −30 9
2 −20 44
4 ) 1 −10 22 14
4 −24
6) 1 −6 −2
6
1 0
f ( x ) = x 4 − 12 x 3 + 42 x 2 − 30 x + 9 = x ( ) − 2 x ( ) + 14 x ( ) + 9
4 2 1
\

\ ∆ f ( x ) = 4h x (3)
− 2 ( 2) h x + 14 h = 4 ( 2) x
(1) (3)
− 4 ( 2) x + 14 ( 2)    (∵ h = 2) 
(1)

= 8 x (3) − 8 x (1) + 28 

∆ 2 f ( x ) = 8 (3) h x ( ) − 8 h = 24 ( 2) x ( ) − 8 ( 2) = 48 x ( ) − 16
2 2 2

= 48 x ( x − 2) − 16 = 48 x 2 − 96 x − 16


Exercise 5.3

1. Evaluate
(a)  ∆ cos x (b)  ∆ log f ( x ) (c) ∆ tan −1 x
(d)  ∆ 2 cos 2 x (e)  ∆ 2 sin ( ax + b ) (f) ∆ n e ax + b
2. Show that
 (i)  ∆  f ( x ) g ( x ) = f ( x + h) ∆ g ( x ) + g ( x ) ∆f ( x )

 f (x) g (x) ∆ f (x) − f (x) ∆ g (x)


(ii)  ∆  =
 g (x)  g ( x + h) g ( x )
3. Show that
(a)  ( ∆ + 1) (1 − ∇) = 1 (b)  δ = ∇ (1 − ∇) 2
−1
∆ = δ E 1/ 2 (c) 

2r
 ∆ δ
1 1
( )
(d)  µ = E 2 + E 2 = 1 +  (1 + ∆ ) 2 (e) 

−1 −1
Er =  µ + 
2  2  2
1 2  δ2  1
(f)  ∆ = δ + δ 1 +  (g)  D = log E
2  4 h
1 2 3 4 5 6  hD 
(h)  µ −1 = 1− δ + δ − δ +  (i) 
δ = 2 sinh 
8 128 1024  2 
4. Prove the following:
 ∆2  ∆ 2 ux
(a)    ux ≠ (b)  ∇ m f n = ∆ m f n − m
 E Eux
∆2 3
5. Show that (a)
E
x = 6 x h2 (b) ∆ ( )
( + ∇)2 x 2 + x = 8 with h = 1
6. If f ( x ) = x 3 − 3 x 2 + 5 x + 7, find ∆ f ( x ) , ∆ 2 f ( x ) , ∆ 3 f ( x ) , ∆ 4 f ( x ) when h = 1.
 5 x + 12 
7. Evaluate ∆ 2  2 interval of differencing being unity.
 x + 5 x + 6 
 
8. Evaluate (a)  ∆ 2 
1
( )(
∆10 (1 − ax ) 1 − bx 2 1 − cx 3 1 − dx 4
 (b)  )( )
 x ( x + 3) ( x + 6 ) 
9. If y = (3 x + 1) (3 x + 4 ) (3 x + 22), prove that
D4y = 136080(3x + 13)(3x + 16)(3x + 19)(3x + 22)
10. Using the method of separation of symbols, prove that
2 3
 x   x   x  2
 (i)  u1 x + u2 x 2 + u3 x 3 +  =  u + ∆u1 +  ∆ u1 +
 1 − x  1  1 − x   1 − x 
n +1 n +1 n +1
(ii)  u0 + u1 + u2 +  + un = C1 u0 + C2 ∆u0 + C3 ∆ 2 u0 +  + ∆ n u0

11. Prove the following identities:


1 1 1
 (i)  u0 − u1 + u2 − u3 +  = u0 − ∆u0 + ∆ 2 u0 + 
2 4 8
u1 u2 2  ∆ u ∆ 2 u0 
(ii)  u0 + x + x +  = e x u0 + x 0 + x 2 + 
1! 2!  1! 2! 
12. If ux = ax + bx + c, show that
2

u − x C1 .2. u2 x −1 + x C2 .22 u2 x − 2 +  + ( −2) ux = ( −1) (c − 2ax )


x x

2 x
13. Form the forward difference table for y = x3 – 2x2 + 7 for x = 0, 1, 2, ……6.
14. Construct a forward difference table from the following:
x 0 1 2 3 4
yx 1 1.5 2.2 3.1 4.6

Evaluate ∆3y1 and y5.


15. If f ( x ) = x 3 + 5 x − 7, then form the backward difference table for x = –1, 0, 1, 2, 3, 4, 5.
16. Form the backward difference table for the function f ( x ) = x 3 − 3 x 2 + 5 x − 7 for x = –1,
0, 1, 2, 3, 4 and 5.
17. Express y4 in terms of successive forward finite differences of y0.
18. A cubic polynomial takes the following values: y(0) = 1, y(1) = 0, y(2) = 1 and y(3) = 10.
Obtain y(4).
19. A cubic polynomial f (x) takes the following values: f0 = –5, f1 = 1, f2 = 9, f3 = 25, f4 = 55,
f5 = 105. Find f −1 and f 6.
20. Given y0 = –8, y1 = –6, y2 = 22, y3 = 148, y4 = 492 and y5 = 1222; find the value of y6.
21. Find the missing value in the following table:
x 16 18 20 22 24 26
y 43 89 – 155 268 388
22. Following are the population data from the census of same district (in thousands). Estimate
the missing population of year 1911.

Year 1881 1891 1901 1911 1921 1931


Population 363 391 421 – 461 501
23. Find the missing value in the following table:

x 45 50 55 60 65
y 3.0 – 2.0 – –2.4
24. Assuming that the following values of yx belong to a polynomial of degree four, compute
the next two values.
x 2 4 6 8 10
y 2 3 5 8 9

25. Interpolate the missing entries in the following table:


x 0 1 2 3 4 5
y(x) 0 – 8 15 – 35
26. Find the missing values in the following table of values of x and y:
x 0 1 2 3 4 5 6
y –4 –2 – – 220 546 1148
27. Assuming that the following values of y belong to a polynomial of degree 4, compute the
next three values:
x 0 1 2 3 4
y 1 –1 1 –1 1
28. Find and correct the error by means of differences, in the given data:

x 0 1 2 3 4 5 6 7 8 9 10
y 2 5 8 17 38 75 140 233 362 533 752

29. In the following table, one value of y is incorrect and that y is a cubic polynomial in x

x 0 1 2 3 4 5 6 7
y 25 21 18 18 27 45 76 123

Construct a difference table for y and use it to locate and correct the wrong value.
30. Express f ( x ) = 3 x 3 − 2 x 2 + 7 x − 6 in factorial polynomial.
31. Express f (u ) = u 4 − 3u 2 + 2u + 6 in terms of factorial polynomial. Hence show that
∆ 4 f (u ) = 24.
32. Express x 4 + 3 x 3 − 5 x 2 + 6 x − 7 in factorial polynomial and find their successive forward
differences.
33. Express y = 2 x 3 − 3 x 2 + 3 x − 10 in factorial notation and hence show that ∆ 3 y = 12.
34. Represent the function f (x) = x4 – 12x3 + 42x3 – 30x + 9 and its successive differences in
factorial notation in which the interval of differencing is one.
35. Obtain the function whose first difference is 9x2 + 11x + 5.

Answers 5.3

 h h  f ( x + h)  h
1. (a)  −2 sin  x +  sin (b) 
log   tan −1
(c) 
 2  2  f (x)  1 + x ( x + h)
ah
sin ( ax + b + ah) ( )
n
(b)  –4 sin2h cos (2x + 2h) (e) 
−4 sin 2 (f)  e ah − 1 e ax + b
2
( )
6. 3 x 2 − x + 1 , 6 x , 6 , 0

2 (5 x + 16 )
7.
( x + 2) ( x + 3) ( x + 4) ( x + 5)
108
8. (a)  (b) 
a b c d (10!)
x ( x + 3) ( x + 6 ) ( x + 9) ( x + 12)
13. x y ∆y ∆2y ∆3y ∆4y ∆5y ∆6y
0 7
-1
1 6 2
1 6
2 7 8 0
9 6 0
3 16 14 0 0
23 6 0
4 39 20 0
43 6
5 82 26
69
6 151

14. x yx ∆yx ∆2yx ∆3yx ∆4yx


0 1
0.5
1 1.5 0.2
0.7 0
2 2.2 0.2 0.4
0.9 0.4
3 3.1 0.6
1.5
4 4.6

∆ 3 y = 0.4 , y5 = 7.5
1
15. x y ∇y ∇ 2y ∇3y ∇4y ∇5y ∇6y
-1 -13
6
0 -7 0
6 6
1 -1 6 0
12 6 0
2 11 12 0 0
24 6 0
3 35 18 0
42 6
4 77 24
66
5 143

16. x y ∇y ∇2y ∇3y ∇4y ∇5 y ∇6y


-1 -16
9
0 -7 -6
3 6
1 -4 0 0
3 6 0
2 -1 6 0 0
9 6 0
3 8 12 0
21 6
4 29 18
39
5 68

17. y4 = y0 + 4 ∆y0 + 6 ∆ 2 y0 + 4 ∆ 3 y0 + ∆ 4 y0
18. 33   19.  f −1 = −15, f 6 = 181   20. 2554  21. 100
22. 442.2 thousands   23. 2.925, 0.225    24. 2 and –22
25. 3.24   26. 12, 68   27. 31, 129, 351
28. The correct entry corresponding to x = 5 is 77
29. The correct entry corresponding to x = 3 is 19
30. f ( x ) = 3 x (3) + 7 x (2) + 8 x (1) − 6
31. f (u ) = u (4) + 6u (3) + 4u (2) + 6
32. f ( x ) = x (4) + 9 x (3) + 11x (2) + 5 x (1) − 7
∆ f ( x ) = 4 x (3) + 27 x (2) + 22 x (1) + 5
∆ 2 f ( x ) = 12 x (2) + 54 x (1) + 22
∆ 3 f ( x ) = 24 x ( ) + 54
1

∆ 4 f ( x ) = 24
and ∆ n f ( x ) = 0; n > 4 , n ∈ N
33. y = 2 x (3) + 3 x (2) + 2 x (1) − 10
34. f ( x ) = x (4) − 6 x (3) + 13 x (2) + x (1) + 9
∆f ( x ) = 4 x (3) − 18 x (2) + 26 x (1) + 1

∆ 2 f ( x ) = 12 x (2) − 36 x (1) + 26

∆ 3 f ( x ) = 24 x ( ) − 36
1

∆ 4 f ( x ) = 24

and ∆ n f ( x ) = 0; n > 4, n ∈ N
35. f ( x ) = 3 x 3 + x 2 + x + c where c is arbitrary constant.

5.8  Interpolation
Suppose the values of a function y = f (x) are given for some values of x called arguments. Let I be
interval formed by these values of x. If the value of y is required for some x ∈ I, then it is called
problem of interpolation and if value of y is required for some x outside the interval I, then it is
called problem of extrapolation. We shall be treating the problem of extrapolation in the same
way as the problem of interpolation. The following two methods will be used for interpolation
when arguments may be equispaced or not equispaced. Other methods can be used only when
the arguments are equispaced.
To find the value of f(x) at some x we have to approximate f(x) by some function φ(x) which can be evaluated. If φ(x) is taken as a polynomial taking the given values at the given arguments, then this polynomial is called the interpolating polynomial.

5.8.1 Lagrange’s Interpolation Formula


Let y = f(x) be the function whose values at x0, x1, x2, …, xn are given to be y0, y1, y2, …, yn, respectively. We approximate f(x) by the nth degree polynomial passing through (x0, y0), (x1, y1), …, (xn, yn), which is called the Lagrange polynomial P(x). Let
P(x) = a0(x - x1)(x - x2)…(x - xn) + a1(x - x0)(x - x2)(x - x3)…(x - xn)
     + a2(x - x0)(x - x1)(x - x3)…(x - xn) + … + an(x - x0)(x - x1)(x - x2)…(x - x_{n-1})      (5.7)
Now P(x_k) = y_k;   k = 0, 1, 2, …, n
∴ y_k = a_k (x_k - x0)(x_k - x1)…(x_k - x_{k-1})(x_k - x_{k+1})…(x_k - xn)
∴ a_k = y_k / [(x_k - x0)(x_k - x1)…(x_k - x_{k-1})(x_k - x_{k+1})…(x_k - xn)];   k = 0, 1, 2, …, n
Substituting in (5.7),
f(x) ≈ P(x) = Σ_{k=0}^{n} l_k(x) y_k                                                           (5.8)
where
l_k(x) = [(x - x0)(x - x1)…(x - x_{k-1})(x - x_{k+1})…(x - xn)] / [(x_k - x0)(x_k - x1)…(x_k - x_{k-1})(x_k - x_{k+1})…(x_k - xn)]      (5.9)
(5.8) is Lagrange's interpolation formula to approximate f(x) at any x, where l_k(x) is given by (5.9).

Error in Lagrange-Interpolation Polynomial Approximation


Let F(x) = Π_{r=0}^{n} (x - x_r).
Let I be the interval bounded by x0, x1, x2, …, xn and let η be any point of I such that η ≠ x_i; i = 0, 1, 2, …, n.
We want to find the error in P(η) when the exact value is f(η).
The function f(x) - P(x) vanishes at x0, x1, …, xn.
Let f(x) = P(x) + R F(x) at x = η.
Here, R can be determined as follows.
Define G(x) = f(x) - P(x) - R F(x).
We have G(x) = 0 for x = x0, x1, x2, …, xn, η.
Arranging the (n + 2) points x0, x1, x2, …, xn, η in ascending order, we can form (n + 1) subintervals of I. Applying Rolle's theorem to the function G(x) in each of these subintervals, we can find ξ_i ∈ I such that G′(ξ_i) = 0, i = 1, 2, …, (n + 1). Again, n subintervals of I can be formed with the help of the points ξ_1, ξ_2, …, ξ_{n+1}, and we can apply Rolle's theorem to the function G′(x) in each of these subintervals. Proceeding in this way, applying Rolle's theorem repeatedly, we have
G^(n+1)(ξ) = 0 for some ξ ∈ I
∴ G^(n+1)(ξ) = f^(n+1)(ξ) - R·(n + 1)! = 0
(∵ P(x) is a polynomial of degree n and F^(n+1)(x) = (n + 1)! for all x)
∴ R = f^(n+1)(ξ)/(n + 1)!
Hence, error = R F(η) = [f^(n+1)(ξ)/(n + 1)!] (η - x0)(η - x1)…(η - xn)
for some ξ lying in the interval bounded by x0, x1, …, xn.
If the problem is one of extrapolation, then ξ will lie in the interval bounded by η, x0, x1, x2, …, xn.
Remark 5.12: (i) If we take F(x) = Π_{r=0}^{n} (x - x_r), then dividing (5.8) by F(x) we have
f(x)/F(x) = Σ_{k=0}^{n} y_k / [(x - x_k)(x_k - x0)(x_k - x1)…(x_k - x_{k-1})(x_k - x_{k+1})…(x_k - xn)]
This is the partial-fraction form of f(x)/F(x), where f(x) is taken as P(x).
(ii) From (5.9),
l_k(x)/F(x) = 1 / [(x - x_k)(x_k - x0)(x_k - x1)…(x_k - x_{k-1})(x_k - x_{k+1})…(x_k - xn)]
∴ (1/F(x)) Σ_{k=0}^{n} l_k(x) = Σ_{k=0}^{n} 1 / [(x - x_k)(x_k - x0)(x_k - x1)…(x_k - x_{k-1})(x_k - x_{k+1})…(x_k - xn)]
But the R.H.S. is the partial-fraction form of 1/F(x)
∴ (1/F(x)) Σ_{k=0}^{n} l_k(x) = 1/F(x)
∴ Σ_{k=0}^{n} l_k(x) = 1 for all x
This can be used as a check in the calculations.
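Formula (5.8) translates directly into code. The Python sketch below (our own illustration; the function name is hypothetical) evaluates the Lagrange polynomial and reproduces the polynomial of Example 5.48 at x = 3.

def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange polynomial through (xs[k], ys[k]) at x, as in (5.8)-(5.9)."""
    total = 0.0
    for k, (xk, yk) in enumerate(zip(xs, ys)):
        lk = 1.0
        for j, xj in enumerate(xs):
            if j != k:
                lk *= (x - xj) / (xk - xj)      # build l_k(x)
        total += lk * yk
    return total

# Example 5.48 data: (0, 2), (1, 3), (2, 12), (5, 147); interpolating polynomial x³ + x² − x + 2
print(lagrange_interpolate([0, 1, 2, 5], [2, 3, 12, 147], 3.0))   # ≈ 35.0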



5.8.2 Divided Differences
If the functional values are given for non-equispaced arguments, then Lagrange's interpolation formula requires much labour. In this respect, divided differences offer better possibilities.
Let x0, x1, x2, …, xn be the arguments. Then the first divided differences between x0 and x1, x1 and x2, x2 and x3, …, x_{n-1} and xn are defined by
f(x0, x1) = [f(x1) - f(x0)]/(x1 - x0) = [f(x0) - f(x1)]/(x0 - x1) = f(x1, x0)
f(x1, x2) = [f(x2) - f(x1)]/(x2 - x1)
…
f(x_{n-1}, xn) = [f(xn) - f(x_{n-1})]/(xn - x_{n-1})
The second divided differences between x0, x1, x2; x1, x2, x3; …; x_{n-2}, x_{n-1}, xn are
f(x0, x1, x2) = [f(x1, x2) - f(x0, x1)]/(x2 - x0)
f(x1, x2, x3) = [f(x2, x3) - f(x1, x2)]/(x3 - x1)
…
f(x_{n-2}, x_{n-1}, xn) = [f(x_{n-1}, xn) - f(x_{n-2}, x_{n-1})]/(xn - x_{n-2})
Similarly, we define the nth divided difference
f(x0, x1, …, xn) = [f(x1, x2, …, xn) - f(x0, x1, …, x_{n-1})]/(xn - x0)
The divided difference of order n of f(x) will be denoted by D|^n f(x).
Theorem 5.2  f(x0, x1, …, xn) = Σ_{k=0}^{n} f(x_k) / Π_{j=0, j≠k}^{n} (x_k - x_j)

Proof: We prove this theorem by the principle of mathematical induction.
For n = 1,
f(x0, x1) = [f(x1) - f(x0)]/(x1 - x0) = f(x0)/(x0 - x1) + f(x1)/(x1 - x0)
∴ the result is true for n = 1.
Let the result be true for n = m, so that
f(x0, x1, …, x_m) = Σ_{k=0}^{m} f(x_k) / Π_{j=0, j≠k}^{m} (x_k - x_j)
and
f(x1, x2, …, x_{m+1}) = Σ_{k=1}^{m+1} f(x_k) / Π_{j=1, j≠k}^{m+1} (x_k - x_j)
∴ f(x0, x1, …, x_{m+1}) = [f(x1, x2, …, x_{m+1}) - f(x0, x1, …, x_m)] / (x_{m+1} - x0)
= [1/(x_{m+1} - x0)] [ Σ_{k=1}^{m+1} f(x_k)/Π_{j=1, j≠k}^{m+1}(x_k - x_j) - Σ_{k=0}^{m} f(x_k)/Π_{j=0, j≠k}^{m}(x_k - x_j) ]
= [1/(x_{m+1} - x0)] [ -f(x0)/Π_{j=1}^{m}(x0 - x_j) + f(x_{m+1})/Π_{j=1}^{m}(x_{m+1} - x_j)
    + Σ_{k=1}^{m} f(x_k) { 1/Π_{j=1, j≠k}^{m+1}(x_k - x_j) - 1/Π_{j=0, j≠k}^{m}(x_k - x_j) } ]
= f(x0)/Π_{j=1}^{m+1}(x0 - x_j) + f(x_{m+1})/Π_{j=0}^{m}(x_{m+1} - x_j)
    + Σ_{k=1}^{m} f(x_k) [(x_k - x0) - (x_k - x_{m+1})] / [(x_{m+1} - x0) Π_{j=0, j≠k}^{m+1}(x_k - x_j)]
= f(x0)/Π_{j=1}^{m+1}(x0 - x_j) + f(x_{m+1})/Π_{j=0}^{m}(x_{m+1} - x_j) + Σ_{k=1}^{m} f(x_k)/Π_{j=0, j≠k}^{m+1}(x_k - x_j)
= Σ_{k=0}^{m+1} f(x_k)/Π_{j=0, j≠k}^{m+1}(x_k - x_j)
Hence, by the principle of mathematical induction, the result holds for all n ∈ N.
Theorem 5.3  For equidistant arguments with interval of differencing h,
f(x0, x1, …, xn) = [1/(h^n n!)] ∆^n f0
Proof: We shall prove this result by the principle of mathematical induction.
For n = 1,
f(x0, x1) = [f(x1) - f(x0)]/(x1 - x0) = ∆f(x0)/h = (1/h) ∆f0
∴ the result holds for n = 1.
Let the result be true for n = m, so that
f(x0, x1, x2, …, x_m) = [1/(h^m m!)] ∆^m f0
f(x1, x2, …, x_{m+1}) = [1/(h^m m!)] ∆^m f1
∴ f(x0, x1, …, x_{m+1}) = [f(x1, x2, …, x_{m+1}) - f(x0, x1, …, x_m)]/(x_{m+1} - x0)
= (∆^m f1 - ∆^m f0) / [h^m m! (m + 1)h]
= ∆^(m+1) f0 / [h^(m+1) (m + 1)!]
∴ the result is true for n = m + 1.
Hence, by the principle of mathematical induction, the result is true for all n ∈ N.
Remark 5.13: (i) Even if some arguments are equal, the divided differences still have a meaning:
f(x0, x0) = lim_{ε→0} f(x0, x0 + ε) = lim_{ε→0} [f(x0 + ε) - f(x0)]/ε
          = f′(x0), provided f(x) is differentiable at x0                                  (1)
f(x0, x0, x0) = lim_{ε→0} f(x0, x0 + ε, x0 + ε)
= lim_{ε→0} [f(x0 + ε, x0 + ε) - f(x0, x0 + ε)]/ε
= lim_{ε→0} (1/ε) [f′(x0 + ε) - (f(x0 + ε) - f(x0))/ε]                                     (from (1))
= lim_{ε→0} (1/ε) [f′(x0) + ε f″(x0) + … - (1/ε)(ε f′(x0) + (ε²/2!) f″(x0) + …)]
= f″(x0)/2
Similarly, f(x0, x0, …, x0) with (r + 1) arguments = f^(r)(x0)/r!
(ii) f(x, x, x0, x1, …, xn) = (d/dx) f(x, x0, x1, …, xn)
5.8.3 Newton’s Divided Difference Interpolation Formula


We have
f(x, x0) = [f(x) - f(x0)]/(x - x0)
Thus, from the defining equations,
f(x) = f(x0) + (x - x0) f(x, x0)
f(x, x0) = f(x0, x1) + (x - x1) f(x, x0, x1)
f(x, x0, x1) = f(x0, x1, x2) + (x - x2) f(x, x0, x1, x2)
…
f(x, x0, x1, …, x_{n-1}) = f(x0, x1, …, xn) + (x - xn) f(x, x0, x1, …, xn)
Multiply the second equation by (x - x0), the third by (x - x0)(x - x1), …, and the last equation by (x - x0)(x - x1)…(x - x_{n-1}), and add:
f(x) = f(x0) + (x - x0) f(x0, x1) + (x - x0)(x - x1) f(x0, x1, x2)
     + … + (x - x0)(x - x1)…(x - x_{n-1}) f(x0, x1, …, xn) + R
where
R = (x - x0)(x - x1)…(x - xn) f(x, x0, x1, …, xn)
This is Newton's divided difference interpolation formula, where R is the error.
We can write
f(x) = P(x) + R
where P(x) is an nth degree polynomial and R = 0 for x = x0, x1, …, xn.
As f(x_k) = P(x_k); k = 0, 1, 2, …, n,
P(x) must be identical with the Lagrange interpolation polynomial. Hence
R = [f^(n+1)(ξ)/(n + 1)!] (x - x0)(x - x1)…(x - xn)
Thus, f(x, x0, x1, …, xn) = f^(n+1)(ξ)/(n + 1)!,   ξ ∈ (x0, xn)
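Newton's divided difference formula is convenient computationally because the coefficients f(x0), f(x0, x1), …, f(x0, …, xn) can be built once and then reused for any x. The Python sketch below is our own illustration of this scheme (the function names are ours), checked against the data of Example 5.56.

def newton_divided_difference(xs, ys):
    """Return the coefficients f(x0), f(x0,x1), ..., f(x0,...,xn)."""
    coef = list(ys)
    n = len(xs)
    for order in range(1, n):
        for i in range(n - 1, order - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - order])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate f(x0) + (x-x0) f(x0,x1) + (x-x0)(x-x1) f(x0,x1,x2) + ..."""
    result, product = 0.0, 1.0
    for k, c in enumerate(coef):
        result += c * product
        product *= (x - xs[k])
    return result

# Example 5.56 data: the interpolating polynomial is x³ + x + 1, so f(7) ≈ 351
xs = [0.5, 1.5, 3.0, 5.0, 6.5, 8.0]
ys = [1.625, 5.875, 31.0, 131.0, 282.125, 521.0]
print(newton_eval(xs, newton_divided_difference(xs, ys), 7.0))   # ≈ 351.0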
Example 5.48: Find the interpolating polynomial for (0, 2) , (1, 3) , ( 2,12) and (5,147) using
­Lagrange’s interpolation formula
Solution:
x 0 1 2 5
y 2 3 12 147

If P(x) is Lagrange interpolating polynomial for y(x) then

P( x) =
( x − 1) ( x − 2) ( x − 5) ( 2) + ( x − 0) ( x − 2) ( x − 5) (3)
(0 − 1) (0 − 2) (0 − 5) (1 − 0) (1 − 2) (1 − 5) 
( x − 0) ( x − 1) ( x − 5) 12 + ( x − 0) ( x − 1) ( x − 2) 147
+ ( ) ( )
(2 − 0) (2 − 1) (2 − 5) (5 − 0) (5 − 1) (5 − 2) 

=−
1 3
5
( 3
) (
x − 8 x 2 + 17 x − 10 + x 3 − 7 x 2 + 10 x
4
)

(
−2 x − 6 x + 5 x +
3 2 49 3
20
) (
x − 3x + 2 x
2
)

1
= ( −4 + 15 − 40 + 49) x + (32 − 105 + 240 − 147) x 2
3

20  
+ ( −68 + 150 − 200 + 98) x + 40 

= x + x2 − x + 2 
3

Note:
( x − x1 ) ( x − x2 ) ( x − xn ) 
= x n − ( x1 + x2 +  + xn ) x n −1 + (∑ xi x j ) x n − 2 − (∑ xi x j xk ) x n − 3 +  + ( −1) x1 x2  xn
n

Example 5.49: Obtain a unique polynomial f (x) of degree 2 or less such that
f (1) = 1, f (3) = 27, f (4) = 64 using Lagrange’s interpolation concept.
Solution:
x 1 3 4
f (x) 1 27 64
Here, f (x) itself is Lagrange interpolation polynomial
( x − 3) ( x − 4) (1) + ( x − 1) ( x − 4) 27 + ( x − 1) ( x − 3) 64
f ( x) = ( ) ( )
(1 − 3) (1 − 4) (3 − 1) (3 − 4) (4 − 1) (4 − 3)
=
1 2
6
(
x − 7 x + 12 −
27 2
2
) (
x − 5x + 4 +
64 2
3
)
x − 4x + 3 ( )
1
= (1 − 81 + 128) x 2 + ( −7 + 405 − 512) x + 12 − 324 + 384 
6 
= 8 x 2 − 19 x + 12 

Example 5.50: Derive Lagrange’s interpolation formula. Apply it to find interpolating polyno-
mial to fit the following data:

x 0 1 2 3
y=e −1 x 0 1.72 6.39 19.09
Solution: Lagrange’s interpolation formula to approximate f (x) to Lagrange polynomial P(x) is
already derived. In the given data

( x − 1) ( x − 2) ( x − 3) 0 + ( x − 0) ( x − 2) ( x − 3) 1.72
P( x) = ( ) ( )
(0 − 1) (0 − 2) (0 − 3) (1 − 0) (1 − 2) (1 − 3) 

( x − 0) ( x − 1) ( x − 3) 6.39 + ( x − 0) ( x − 1) ( x − 2) 19.09
+ ( ) ( )
(2 − 0) (2 − 1) (2 − 3) (3 − 0) (3 − 1) (3 − 2) 
6.39 3 19.09 3
∴ (
P ( x ) = 0.86 x 3 − 5 x 2 + 6 x − ) 2
(x − 4 x 2 + 3x +
6
) (
x − 3x 2 + 2 x )

1
= (6 (0.86 ) − 3 (6.39) + 19.09) x 3 + ( −30 (0.86 ) + 12 (6.39) − 3 (19.09)) x 2
6 
+ (36 (0.86 ) − 9 (6.39) + 2 (19.09)) x 

1
(
= 5.08 x 3 − 6.39 x 2 + 11.63 x
6
)


Example 5.51: Write the Lagrange’s polynomial passing through the points ( x0 , f 0 ) , ( x1 , f1 ) and
3x 2 + x + 1
( x2 , f 2 ) and hence resolve x − 1 x − 2 x − 3 into partial fractions.
( )( )( )
Solution: Lagrange’s polynomial passing through ( x0 , f 0 ) , ( x1 , f1 ) and ( x2 , f 2 ) is

P( x) =
( x − x1 ) ( x − x2 ) f + ( x − x0 ) ( x − x2 ) f + ( x − x0 ) ( x − x1 ) f
( x0 − x1 ) ( x0 − x2 ) 0 ( x1 − x0 ) ( x1 − x2 ) 1 ( x2 − x0 ) ( x2 − x1 ) 2
Now  P ( x ) = 3 x 2 + x + 1 is the polynomial passing through (1, 5) , ( 2,15) , (3, 31)

( x − 2 ) ( x − 3) ( x − 1) ( x − 3) ( x − 1) ( x − 2 )
∴ 3x 2 + x + 1 = (5) + (15) + ( 31) 
(1 − 2 ) (1 − 3) ( 2 − 1) ( 2 − 3) ( 3 − 1) ( 3 − 2 )
5 31
= ( x − 2) ( x − 3) − 15 ( x − 1) ( x − 3) + ( x − 1) ( x − 2)
2 2 
Divide by ( x − 1) ( x − 2) ( x − 3)

3x 2 + x + 1 5 15 31
= − +
( x − 1) ( x − 2) ( x − 3) 2 ( x − 1) x − 2 2 ( x − 3)
3x + x + 1
2
which is the partial fraction of
( x − 1) ( x − 2) ( x − 3)
Example 5.52: Using the data sin (0.1) = 0.09983 and sin (0.2) = 0.19867, find an approximate
value of sin (0.15) by Lagrange’s interpolation. Obtain a bound on the truncation error.
Solution:

x 0.1 0.2
y(x) = sin x .09983 0.19867

Let P(x) be Lagrange’s polynomial, then


x − 0.2 x − 0.1
y( x )  P ( x ) = (.09983) + (0.19867)
0 .1 − 0 .2 0 .2 − 0.1 
0.15 − 0.2 0.15 − 0.1
∴ sin ( 0.15 ) = y ( 0.15 )  (.09983) + ( 0.19867 )
0.1 − 0.2 0.2 − 0.1 
= 0.5 (.09983) + 0.5 (0.19867)

= 0.5 (.09983 + 0.19867) = 0.14925

f ′′ (ξ )
Truncation error = (0.15 − 0.10 ) (0.15 − 0.20 )
2!
where f ( x ) = sin x, ξ ∈ ( 0.10, 0.20 )

∴ f ′′( x ) = − sin x 
∴ Max. f ′′ (ξ ) = Max. sin ξ < sin (0.20 ) = 0.19867
ξ ∈( 0.10 , 0.20 ) 
( 0.05)( 0.05)
∴ Truncation error < ( 0.19867 )
2 
∴ Truncation error < 0.0002484

∴ sin (0.15)  0.14925

and Truncation error < 0.0002484

Example 5.53: Find the nth divided difference of 1 based on the points x0 , x1 , x2 , , xn
x
1
Solution: Let f ( x ) = . We shall prove by principle of mathematical induction that
x

f [ x0 , x1 , x2 , …, xn ] =
( −1)n
x0 x1 x2 … xn

for n = 1
1 1

f ( x1 ) − f ( x0 ) x1 x0
f [ x0 , x1 ] = =
x1 − x0 x1 − x0

x0 − x1 1
= =−
x0 x1 ( x1 − x0 ) x0 x1

∴ the result is true for n = 1
Let the result is true for n = m

f [ x0 , x1 , x2 , , xm ] =
( −1)m
x0 x1 x2  xm


f [ x1 , x2 , x3 , , xm +1 ] =
( −1)m
x1 x2 x3  xm +1

f [ x1 , x2 , , xm +1 ] − f [ x0 , x1 , , xm ]
f [ x0 , x1 , x2 , , xm +1 ] =
xm +1 − x0

( −1) − ( −1)
m m

x1 x2  xm +1 x0 x1  xm
=
xm +1 − x0

( −1) ( x0 − xm +1 )
m

=
x0 x1 x2  xm ⋅ xm +1 ( xm +1 − x0 )


=
( −1) m +1

x0 x1 x2  xm +1

\ Result is true for n = m ⇒ Result is true for n = m + 1

\ By principle of mathematical induction, f [ x0 , x1 , x2 , …, xn ] =


( −1)n ; for all n ∈N
x0 x1 x2 … xn
1
Example 5.54: If f ( x ) = then find f [ a , b , c ].
x2
1 1
f (b ) − f ( a ) b 2 − a 2 a2 − b2
Solution:  f [ a , b] = = = 2 2
b−a b−a a b ( b − a)
( a + b)( a − b) −( a + b)
= =
a 2 b 2 ( b − a) a2b2

By Symmetry
−( b + c )
f [b , c ] =
b2c2  ( b + c ) ( a + b)
− 2 2 + 2 2
f [b , c ] − f [ a , b]
\ f [a , b , c] = = bc ab
c−a c−a 
− a 2 b − a 2 c + ac 2 + bc 2 bc 2 − a 2 b + ac 2 − a 2 c
= =
( c − a) a 2 b 2 c 2 ( c − a) a 2 b 2 c 2

b(c − a)(c + a) + ac(c − a) bc + ab + ac 1  1 1 1
= = =  + + 
( c − a) a 2 b 2 c 2 a2b2c2 abc  a b c 

Example 5.55: Employing Newton’s divided difference interpolation, estimate f (x) from the
­following data:
x 0 1 2 4 5 6
f (x) 1 14 15 5 6 19

Solution: Divided difference table for given data is


x f (x) Df (x)
| D| 2f (x) D| 3f (x) D| 4f (x) D| 5f (x)
0 1
13
1 14 −6
1 1
2 15 −2 0
−5 1 0
4 5 2 0
1 1
5 6 6
13
6 19

By Newton’s divided difference interpolation formula


f ( x )  f ( x0 ) + ( x − x0 ) f ( x0 , x1 ) + ( x − x0 ) ( x − x1 ) f ( x0 , x1 , x2 )
+ ( x − x0 ) ( x − x1 ) ( x − x2 ) f ( x0 , x1 , x2 , x3 ) + 
= 1 + 13( x − 0) + ( −6) x( x − 1) + (1) x( x − 1)( x − 2) = x 3 − 9 x 2 + 21x + 1 

Example 5.56: Form the divided difference table for the following data
x 0.5 1.5 3.0 5.0 6.5 8.0
f (x) 1.625 5.875 31.0 131.0 282.125 521.0
and find the interpolating polynomial and estimate the value of f (7)
Solution: Divided difference table for given data is
x f (x) Df (x)
| D| 2f (x) D| 3f (x) D| 4f (x) D| 5f (x)
0.5 1.625
4.25
1.5 5.875 5.0
16.75 1
3.0 31.000 9.5 0
50.00 1 0
5.0 131.000 14.5 0
100.75 1
6.5 282.125 19.5
159.25
8.0 521.000

If P(x) is interpolating polynomial then


P ( x ) = 1.625 + 4.25 ( x − 0.5) + 5.0 ( x − 0.5) ( x − 1.5) + 1( x − 0.5) ( x − 1.5) ( x − 3.0 )



( )
= 1.625 + 4.25 ( x − 0.5) + 5.0 x − 2 x + 0.75 + x − 5 x + 6.75 x − 2.25 = x + x + 1 
2 3 2 3

f (7)  P ( 7 ) = 73 + 7 + 1 = 351

Example 5.57: Using the following table find f (x) as a polynomial in powers of (x – 6)

x –1 0 2 3 7 10
f (x) –11 1 1 1 141 561

Solution Divided difference table for given data is

x f (x) Df (x)
| D| 2f (x) D| 3f (x) D| 4f (x) D| 5f (x)
–1 –11
12
0 1 –4
0 1
2 1 0 0
0 1 0
3 1 7 0
35 1
7 141 15
140
10 561

f (x) is 5th degree polynomial. By Newton’s divided difference interpolation formula


(x) = –11 + (x + 1)(12) + (x + 1) x(–4) + (x + 1) x (x – 2) (1)
f 
= –11 + 12 (x + 1) – 4x (x + 1) + (x – 2) x (x +1)
For writing the polynomial in powers of x – 6
we take x – 6 = y
i.e., x = y + 6
\ f (x) = –11 + 12 (y + 7) – 4(y + 6) (y + 7) + (y + 4) (y + 6) (y + 7)

= y3 + (17 – 4)y2 + (94 – 52 + 12)y + 168 – 168 + 84 – 11

= (x – 6)3 + 13(x – 6)2 + 54(x – 6 ) + 73



Example 5.58: Given that log10 654 = 2.8156, log10 658 = 2.8182, log10 659 = 2.8189,
log10 661 = 2.8202 find log10 656.
Solution: Divided difference table for given data is

x f (x) = log10x Df (x)


| D| 2f (x) D| 3f (x)

654 2.8156
.00065
658 2.8182  .00001
.00070 –.000004
659 2.8189 –.00002
.00065
661 2.8202

By Newton’s divided difference formula


log10 656 = f (656)  2.8156 + (656 – 654) (.00065) + (656 – 654).(656 – 658) (.00001)
+ (656 – 654) (656 – 658).(656 – 659) (–.000004)
∴ log10 656  2.8156 + 2 (.00065) – 4(.00001) – 12(.000004)
∴ log10 656  2.8168

Now, we shall discuss interpolation formulae which can be applied only when arguments are
equally spaced.

5.8.4 Newton’s Forward Difference Interpolation Formula


Let y0, y1, y2, …, yn be the values of f (x) at arguments x0, x0 + h, …, x0 + nh where h is the interval
of differencing and we want to interpolate f (x) at x = x0 + ph
f(x0 + ph) = E^p f(x0) = (1 + ∆)^p f(x0) = (1 + ∆)^p y0
= [1 + p∆ + (p(p - 1)/2!) ∆² + (p(p - 1)(p - 2)/3!) ∆³ + … + (p(p - 1)(p - 2)…(p - n + 1)/n!) ∆^n + …] y0
≈ y0 + p∆y0 + (p(p - 1)/2!) ∆²y0 + (p(p - 1)(p - 2)/3!) ∆³y0 + … + (p(p - 1)(p - 2)…(p - n + 1)/n!) ∆^n y0
which is Newton's forward difference interpolation formula, with error obtained below.
If we define
φ(p) = y0 + p∆y0 + (p(p - 1)/2!) ∆²y0 + … + (p(p - 1)…(p - n + 1)/n!) ∆^n y0
then, for k = 0, 1, 2, …, n,
φ(k) = y0 + k∆y0 + (k(k - 1)/2!) ∆²y0 + … + ∆^k y0 = (1 + ∆)^k y0 = E^k y0 = y_k
Thus φ(p) is identical with the Lagrangian interpolation polynomial, and hence the error is given by
[f^(n+1)(ξ)/(n + 1)!] h^(n+1) p(p - 1)…(p - n),   ξ ∈ (x0, x0 + nh)
Remark 5.14: From the forward difference table (already described), we observe that the quantities ∆^k y0 lie on a straight line sloping down to the right from y0. The error is smaller when more forward differences are used. Hence it is preferable to use the forward difference interpolation formula when we have to interpolate the value of the function near the beginning of the set of tabulated values and 0 < p < 1.
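A minimal Python sketch of Newton's forward difference interpolation (our own illustration; the function name is hypothetical) is given below, applied to the data of Example 5.61.

def newton_forward(x0, h, ys, x):
    """Newton's forward difference interpolation at x, from equispaced values ys."""
    p = (x - x0) / h
    diffs = list(ys)
    result, term = diffs[0], 1.0
    for k in range(1, len(ys)):
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]   # ∆^k y column
        term *= (p - (k - 1)) / k                                          # p(p-1)...(p-k+1)/k!
        result += term * diffs[0]
    return result

# Example 5.61: sin 52° from sin 45°, 50°, 55°, 60°
print(newton_forward(45, 5, [0.7071, 0.7660, 0.8192, 0.8660], 52))   # ≈ 0.7880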

5.8.5 Newton’s Backward Difference Interpolation Formula


Let y0, y1, y2, …, yn be the values of f(x) at arguments x0, x1, x2, …, xn, where x_k = x0 + kh; k = 1, 2, …, n, and suppose we want to interpolate f(x) at x = xn + ph.
f(xn + ph) = E^p f(xn) = (1 - ∇)^(-p) f(xn) = (1 - ∇)^(-p) yn
= [1 + p∇ + (p(p + 1)/2!) ∇² + (p(p + 1)(p + 2)/3!) ∇³ + … + (p(p + 1)…(p + n - 1)/n!) ∇^n + …] yn
≈ yn + p∇yn + (p(p + 1)/2!) ∇²yn + (p(p + 1)(p + 2)/3!) ∇³yn + … + (p(p + 1)…(p + n - 1)/n!) ∇^n yn
which is Newton's backward difference interpolation formula, with error
[f^(n+1)(ξ)/(n + 1)!] h^(n+1) p(p + 1)(p + 2)…(p + n),   ξ ∈ (x0, x0 + nh)
The error is found in a similar way as in Newton's forward interpolation formula.

Remark 5.15: From the backward difference table (already described), we observe that the quantities ∇^k yn lie on a straight line sloping up to the right from yn. For smaller error, we use Newton's backward interpolation formula when we have to interpolate f(x) near the end of the set of tabulated values and -1 < p < 0.
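The backward formula can be sketched in the same way (again our own illustrative code), here applied to the census data of Example 5.59 below.

def newton_backward(xn, h, ys, x):
    """Newton's backward difference interpolation at x, from equispaced values ys."""
    p = (x - xn) / h
    diffs = list(ys)
    result, term = diffs[-1], 1.0
    for k in range(1, len(ys)):
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]   # last entry = ∇^k yn
        term *= (p + (k - 1)) / k                                          # p(p+1)...(p+k-1)/k!
        result += term * diffs[-1]
    return result

# Example 5.59: population in 1925 from the census data 1891-1931 (in thousands)
print(newton_backward(1931, 10, [46, 66, 81, 93, 101], 1925))   # ≈ 96.84, i.e. about 97 thousand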

Example 5.59: The population of a city in the decennial census was as given below. Estimate the
population for the year 1895 and 1925.
years x 1891 1901 1911 1921 1931
Population y (in thousands) 46 66 81 93 101
Solution: Difference table for the given data is

x         y
1891      46
               20
1901      66        -5
               15           2
1911      81        -3          -3
               12          -1
1921      93        -4
                8
1931     101

For 1895, we shall use Newton’s forward difference formula,


x0 = 1891, h = 10, x = 1895
x − x0 1895 − 1891
p= = = 0.4
h 10 
p( p − 1) 2
y(1895)  y(1891) + p ∆y(1891) + ∆ y(1891)
2!
p( p − 1)( p − 2) 3 p( p − 1)( p − 2)( p − 3) 4
+ ∆ y(1891) + ∆ y(1891)
3! 4! 
(0.4)( −0.6) (0.4)( −0.6)( −1.6) ( 0.4 ) ( −0.6 ) ( −1.6 ) ( −2.6 )
= 46 + (0.4)( 20) + ( −5) + ( 2) + ( −3)
    2 6 24
= 54.8528  55 
∴ y (1895)  55 thousands
For x = 1925, we shall use Newton’s backward difference formula.
x = 1925, xn = 1931, h = 10
x − xn 1925 − 1931
p= = = −0.6
h 10 
( p + 1) p 2
y(1925)  y(1931) + p∇y(1931) + ∇ y(1931)
2! 
( p + 2)( p + 1) p 3 ( p + 3)( p + 2)( p + 1) p 4
+ ∇ y(1931) + ∇ y(1931)
3! 4! 
( 0.4 ) ( −0.6 ) (1.4 )(0.4 ) ( −0.6 ) ( 2.4 ) (1.4 )(0.4 ) ( −0.6 )
= 101 + ( −0.6 ) (8) + ( −4) + ( −1) + ( −3)
2 6 24
= 96.8368  97 
∴ y(1925)  97 thousands. 

Example 5.60: For the following data obtain the forward and backward differences interpolation
polynomials and estimate f (x) at x = 0.25 and x = 0.35

x 0.1 0.2 0.3 0.4 0.5


f (x) 1.40 1.56 1.76 2.00 2.28

Solution: Difference table for given data is


x f (x)
0.1 1.40
0.16
0.2 1.56 0.04
0.20
0.3 1.76 0.04
0.24
0.4 2.00 0.04
0.28
0.5 2.28

Second order differences are constant and hence both forward and backward interpolation
­polynomials are second degree polynomials.
Here,  h = 0.1, x0 = 0.1
For forward difference interpolation polynomial P(x)
x − x0 x − 0.1
p= = = 10 x − 1
h 0.1 
p( p − 1) 2
∴ P ( x ) = f (0.1) + p∆f (0.1) + ∆ f (0.1)
2! 
(10 x − 1)(10 x − 2)
= 1.40 + (10 x − 1)(0.16) + (0.04)
2 


(
= 1.40 + 0.16 (10 x − 1) + 0.02 100 x 2 − 30 x + 2 )
= 2 x 2 + x + 1.28 
f (0.25)  P (0.25) = 2(0.25) 2 + 0.25 + 1.28 = 1.655 

For backward difference interpolation polynomial
xn = 0.5, h = 0.1
x − xn x − 0.5
p= = = 10 x − 5
h 0.1

\ Backward difference interpolation polynomial F(x) is


( p + 1) p 2
F ( x ) = f (0.5) + p∇f (0.5) + ∇ f (0.5)
2! 
(10 x − 4)(10 x − 5)
= 2.28 + (10 x − 5)(0.28) + (0.04)
2
= 2.28 + 0.28(10 x − 5) + 0.02(100 x 2 − 90 x + 20) 

= 2 x 2 + x + 1.28 
f (0.35)  F (0.35) = 2(0.35) 2 + 0.35 + 1.28 = 1.875 

Here P(x) and F(x) are same. These must be same as f (x) is unique second degree polynomial.

Example 5.61: Given sin 45° = 0.7071, sin 50° = 0.7660, sin 55° = 0.8192 and sin 60° = 0.8660,
find sin 52° using Newton’s interpolation formula. Estimate the error.
Solution: Forward difference table is

x y = sin x Dy D2y D3y


45° 0.7071
0.0589
50° 0.7660 –0.0057
0.0532 –0.0007
55° 0.8192 –0.0064
0.0468
60° 0.8660

We shall use Newton’s forward interpolation formula to find approximate value of sin 52°
x = 52°, x0 = 50°, h = 5°
x − x0 52 − 50
\ p= = = 0.4
h 5 
p( p − 1) 2
\ sin 52°  y(50°) + p ∆y(50°) + ∆ y(50°)
2! 
(0.4)( −0.6)
= 0.7660 + (0.4)(0.0532) + ( −0.0064)
2 
\ sin 52°  0.7880 

Exact value of sin 52° to 4 decimal places = 0.7880. Thus, upto 4 decimal places, there is no
error.

Example 5.62: Find the number of men getting wages between ` 10 and 15 from the following
data:

Wages in ` 0−10 10−20 20−30 30−40


Frequency 9 30 35 42

Solution: First of all we find cumulative frequency

Wages in ` Frequency f Cumulative frequency


0−10 9 9
10−20 30 39
20−30 35 74
30−40 42 116

Let y denote number of men getting wages below ` x. Then difference table is

x y Dy D2y D3y
10 9
30
20 39 5
35 2
30 74 7
42
40 116

We estimate y (15) using Newton’s forward interpolation formula


x = 15, x0 = 10, h = 10

x − x0 15 − 10
\ p= = = 0.5
h 10 
p( p − 1) 2 p( p − 1)( p − 2) 3
y(15)  y(10) + p∆y(10) + ∆ y(10) + ∆ y(10)
2! 3! 
(0.5)( −0.5) (0.5)( −0.5)( −1.5)( 2)
= 9 + (0.5)(30) + (5) +
2 6 
\ y (15)  24

\ Number of men getting wages between ` 10 and 15


= y (15) – y (10) = 24 – 9 = 15

5.8.6 Gauss Forward Interpolation Formula


This formula will be derived in terms of following central differences in the central difference table.
y0 δ 2 y0 δ 4 y0 δ 6 y0
     
δ y1 2 δ y1 2
3
δ 5 y1 2
Let the values of function f (x) are given at x0-3h, x0-2h, x0-h, x0, x0 + h, x0 + 2h, x0 + 3h and val-
ues are y-3, y-2, y-1, y0, y1, y2, y3, respectively. We want to interpolate f (x) at x = x0 + ph; -1 < p < 1.
Let value is yp.
By Newton’s forward interpolation formula
( )∆ y + ( )∆ y + ( )∆ y + ( )∆ y + ( )∆ y + ( )∆ y
y p = f ( x ) = f ( x0 + ph) = y0 + p
1 0
p
2
2
0
p
3
3
0
p
4
4
0
p
5
5
0
p
6
6
0 +

= y + ( ) ∆y + ( ) ∆ E y + ( ) ∆ E y + ( ) ∆ E y + ( ) ∆ E y + ( ) ∆ E y + 
0
p
1 0
p
2
2
−1
p
3
3
−1
p
4
4
−1
p
5
5
−1
p
6
6
−1

= y + ( ) ∆y + ( ) ∆ (1 + ∆ ) y + ( ) ∆ (1 + ∆ ) y + ( ) ∆ (1 + ∆ ) y
0
p
1 0
p
2
2
−1
p
3
3
−1
p
4
4
−1

+ ( ) ∆ (1 + ∆ ) y + ( ) ∆ (1 + ∆ ) y +  
p
5
5
−1
p
6
6
−1

= y + ( ) ∆y + ( ) ∆ y + ( ) + ( ) ∆ y + ( ) + ( ) ∆ y
0
p
1 0
p
2
2
−1
p
2
p
3
3
−1
p
3
p
4
4
−1

+ ( ) + ( ) ∆ y + ( ) + ( ) ∆ y + 
p
4
p
5
5
−1
p
5
p
6
6
−1

= y + ( ) ∆y + ( ) ∆ y + ( ) ∆ y + ( ) ∆ (1 + ∆ ) y
0
p
1 0
p
2
2
−1
p +1
3
3
−1
p +1
4
4
−2

+ ( ) ∆ (1 + ∆ ) y + ( ) ∆ (1 + ∆ ) y + 
p +1
5
5
−2
p +1
6
6
−2

= y + ( ) ∆y + ( ) ∆ y + ( ) ∆ y + ( ) ∆ y
0
p
1 0
p
2
2
−1
p +1
3
3
−1
p +1
4
4
−2

+ ( ) + ( ) ∆ y + ( ) + ( ) ∆ y + …
p +1
4
p +1
5
5
−2
p +1
5
p +1
6
6
−2

= y + ( ) ∆y + ( ) ∆ y + ( ) ∆ y + ( ) ∆ y
0
p
1 0
p
2
2
−1
p +1
3
3
−1
p +1
4
4
−2

+ ( ) ∆ y + ( ) ∆ (1 + ∆ ) y + ( ) ∆ (1 + ∆ ) y + …
p+2
5
5
−2
p+2
6
6
−3
p+2
7
7
−3

= y + ( ) ∆y + ( ) ∆ y + ( ) ∆ y + ( ) ∆ y
0
p
1 0
p
2
2
−1
p +1
3
3
−1
p +1
4
4
−2

+ ( )∆ y + ( )∆ y +
p+2
5
5
−2
p+2
6
6
−3

But ∆ k y k −1 = (δ E ) y 1/ 2 k
k −1 = δ k y 1 ; k = 1, 3, 5,……
− −
2 2 2 
and ∆ yk
k = (δ E ) y 1/ 2 k
k = δ y0 ; k = 2, 4, 6, …
k
− −
2 2 
\ y p = y0 + ( )δ y + ( )δ
p
1 1
p
2
2
y0 + ( )δp +1
3
3
y1 + ( )δp +1
4
4
y0 + ( )δp+2
5
5
y1 + ( )δ p+2
6
6
y0 + 
2 2 2 
This is Gauss forward interpolation formula.

5.8.7 Gauss Backward Interpolation Formula


This formula will be derived in terms of following central differences in the central difference
table.
δy 1 δ3y 1 δ5y 1
− − −
 2  2  2
 2  δ 6 y
y0 δ y0 δ y0
4
0

Let the values of the function f (x) are given at x0 - 3h, x0 - 2h, x0 - h, x0, x0 + h, x0 + 2h, x0 + 3h
and values are y-3, y-2, y-1, y0, y1, y2, y3, respectively. We want to interpolate f (x) at x = x0 + ph;
-1< p < 1. Let value is yp .
∴ By Newton’s backward difference interpolation formula
y p = f ( x0 + ph)

= y0 + ( ) ∇y + ( ) ∇ y + ( ) ∇ y
p
1 0

p +1
2
2
0
p+ 2
3
3
0

+ ( ) ∇ y + ( ) ∇ y + ( ) ∇ y + ….
p+3
4
4
0
p+ 4
5
5
0
p+5
6
6
0

= y + ( ) ∇y + ( ) ∇ E y + ( ) ∇ E y
0
p
1 0
p +1
2
2 −1
1
p+2
3
3 −1
1

+ ( )∇ E y + ( )∇ E y + ( )∇ E y + 
p+3
4
4 −1
1
p+4
5
5 −1
1
p+5
6
6 −1
1

= y + ( ) ∇y + ( ) ∇ (1 − ∇) y + ( ) ∇ (1 − ∇) y + ( ) ∇ (1 − ∇) y
0
p
1 0
p +1
2
2
1
p+ 2
3
3
1
p+3
4
4
1

+ ( ) ∇ (1 − ∇) y + ( ) ∇ (1 − ∇) y + 
p+ 4
5
5
1
p+5
6
6
1

= y + ( ) ∇y + ( ) ∇ y + ( ) − ( ) ∇ y + ( ) − ( ) ∇ y
0
p
1
 0
 p +1
2
2
1
p+ 2
3
p +1
2
3
1
p+3
4
p+ 2
3
4
1

+  ( ) − ( )  ∇ y +  ( ) − ( ) ∇ y + 
 p+ 4
5
  p+3
4
 5
1
p+5
6
p+ 4
5
6
1

= y + ( ) ∇y + ( ) ∇ y + ( ) ∇ y + ( ) ∇ (1 − ∇ ) y
0
p
1 0
p +1
2
2
1
p +1
3
3
1
p+2
4
4
2

+ ( ) ∇ (1 − ∇ ) y + ( ) ∇ (1 − ∇ ) y + 
p+3
5
5
2
 p+4
6
6
2

= y + ( ) ∇y + ( ) ∇ y + ( ) ∇ y + ( ) ∇ y
0
p
1 0
p +1
2
2
1
p +1
3
3
1
p+2
4
4
2

+ ( ) − ( )  ∇ y + ( ) − ( )  ∇ y +  
p+3
5
p+2
4
5
2
p+4
6
p+3
5
6
2

= y + ( ) ∇y + ( ) ∇ y + ( ) ∇ y + ( ) ∇ y + ( ) ∇ y + ( ) ∇ (1 − ∇) y
0
p
1 0
p +1
2
2
1
p +1
3
3
1
p+ 2
4
4
2
p+ 2
5
5
2
p+3
6
6
3 +

= y + ( ) ∇y + ( ) ∇ y + ( ) ∇ y + ( ) ∇ y + ( ) ∇ y + ( ) ∇ y + 
p p +1 2 p +1 3 p+ 2 4 p+ 2 5 p+3 6
0 1 0 2 1
 3 1 4 2 5 2 6 3

1

But ∇ k y k −1 = (δ E ) y k −1 = δ k y 1 ; k = 1, 3, 5
2 k

2 2 2 
1

and ∇ k y k = (δ E 2 k
) y k = δ k y0; k = 2, 4, 6…
2 2

\ y p = y0 + ( )δ y
p
1

1 + ( )δp +1
2
2
y0 + ( )δp +1
3
3
y

1 + ( )δ
p+2
4
4
y0 + ( )δ
p+2
5
5
y

1 + ( )δp+3
6
6
y0 + ...
2 2 2 
This is Gauss backward difference formula.

5.8.8 Stirling’s Formula
This formula will be derived in terms of following central differences in the central difference
table
δy 1 δ3y 1 δ5y 1

 2   −
2
 4 

2
 6
y0 δ y0
2
δ y0 δ y0
    
δ y1 δ y1
3
δ y1
5

2 2 2

Stirling formula uses the entries in the table of both Gauss forward and Gauss backward formula.
This suggests us that we may take the mean of these values.
From Gauss forward formula
y p = y0 + ( )δ y + ( )δ
p
1 1
p
2
2
y0 + ( )δ
p +1
3
3
y1 + ( )δp +1
4
4
y0 + ( )δp+ 2
5
5
y1 + ( )δ p+ 2
6
6
y0 + 
2 2 2
From Gauss backward formula
y p = y0 + ( )δ y
p
1

1 + ( )δp +1
2
2
y0 + ( )δ p +1
3
3
y

1 + ( )δp+ 2
4
4
y0 + ( )δp+ 2
5
5
y

1 + ( )δp+3
6
6
y0 + 
2 2 2

Add these and divide by 2


1 1 1 1
− −
E2 + E 1 p E2 + E
( ) ( )
2 2
y p = y0 + p
1 δ y0 + ( p − 1 + p + 1) δ 2 y0 + p +1
3 δ 3
y0
2 2 2 2 
1 1

1 1 E +E 1 1
( )δ ( )δ ( )δ
2 2
+ ⋅ ( p − 2 + p + 2) p +1
3
4
y0 + p+2
5
5
y0 + ⋅ ( p − 3 + p + 3 ) p+2
5
6
y0 + 
2 4 2 2 6
p2 2 p 2 ( p 2 − 1) 4
= y0 +() p
1 µδ y0 +
2!
δ y0 + ( ) p +1
3 µδ 3 y0 +
4!
δ y0 + ( ) µδ
p+ 2
5
5
y0

p ( p − 1) ( p
2 2 2
−4 )δ 
+ 6
y0 + 
6!
which is Stirling’s interpolation formula.

5.8.9 Bessel’s Interpolation Formula


This formula will be derived in terms of following central differences in the central difference
formula
y0 δ2y0 δ4y0 δ6y0
δy1/2 δ3y1 δ5y1/2
y1 δ2y1 δ4y1 δ6y1

If x = x0 + ph
then f (x) = f (x0 + ph) = Ep f (x0) = yp

Also, f (x0 + ph) = f (x0 + h + (p - 1)h) = E p-1 f (x0 + h)

Thus, Bessel interpolation will be mean of Gauss forward interpolation formula and Gauss back-
ward interpolation in which y0 is replaced by y1 and p is replaced by p − 1.
By Gauss forward interpolation formula

y p = y0 + ( )δ y + ( )δ
p
1 1
p
2
2
y0 + ( )δ p +1
3
3
y1 + ( )δ
p +1
4
4
y0 + ( )δp+ 2
5
5
y1 + ( )δ
p+ 2
6
6
y0 + 
2 2 2

By Gauss backward interpolation formula replacing p by p – 1 and y0 by y1

y p = y1 + ( )δ y + ( )δ
p −1
1 1
p
2
2
y1 + ( )δ
p
3
3
y1 + ( )δ
p +1
4
4
y1 + ( )δp +1
5
5
y1 + ( )δ
p+ 2
6
6
y1 + 
2 2 2

Add these and divide by 2


y1 + y0 1  p + 1 + p − 2 
( y1 + y0 ) +  p − 2  δ y 1 + ( )δ ( )δ
1 1
yp = p 2
+  
p 3
y1
2
2 2
2 2 3
2 2 
y + y0 1  p + 2 + p − 3  y + y0
+ ( )
p +1
4 δ4 1
2
+ 
2 5
 ( )δ
p +1
4
5
y1 + ( ) p+ 2
6 δ6 1
2
+
2 
1
p−
 1
= µ y1 +  p −  δ y1 +
 2 2
( ) µδ
p
2
2
y1 + ( )
p
2
3
2 δ3y
1
2 2 2

1 
p−
+ ( ) µδ
p +1
4
4
y1 + ( ) p +1
4
5
2 δ5y +
1 ( ) µδ
p+2
6
6
y1 + 
2 2 2
which is Bessel’s formula.

5.8.10  Laplace−Everett’s Interpolation Formula


This formula will be derived in terms of following central differences in the central difference
table.
⋅ y0  ⋅δ 2 y0  ⋅δ 4 y0  ⋅δ 6 y0
⋅ ⋅ ⋅
⋅ y1  ⋅δ y1  ⋅δ y1  ⋅δ 6 y1
2 4

Here, only even order central difference of y0 and y1 are required. Thus, we are to eliminate odd-
order central differences in the Bessel formula.
Bessel interpolation formula is 1
p−
 1
y p = µ y 1 +  p −  δ y 1 + 2 µδ y 1 + 2
 2 
p 2 p
() 3
2 δ 3 y + p +1 µδ 4 y
1 4 ()
1 ( )
2 2 2 2 2

1 1
p− p−
+ ( ) p +1
4
5
2 δ5y +
1 ( ) µδ
p+ 2
6
6
y1 + ( )
p+ 2
6
7
2 δ7 y +…
1
2 2 2 
1  1 1 2
= ( y1 + y 0 ) +  p −  ( y1 − y 0 ) +
2  2
( ) (p
2
2
δ y1 + δ 2 y0 )
1
p− 
+ ( ) p
2
3
(
2 δ 2 y −δ 2 y +
1 0 ) ( ) 12 (δ
p +1
4
4
y1 + δ 4 y0 )
1 1
p− p−


+ ( ) p +1
4
5
(
2 δ4y −δ4y +
1 0 ) ( ) ( p+ 2
6
1 6
2
δ y1 + δ 6 y0 + ) ( ) p+ 2
6
7
(
2 δ6 y − δ6 y +
1 0 ) 
 1  1  1
= (1 − p) y0 + ( )  1 − p − 2  δ
p
2
2
y0 + ( )  1 − p − 2  δ
p +1
4
4
y0 + ( )  1 − p − 2  δ
p+ 2
6
6
y0 + 
2 3  2 5  2 7 

 1  1  1
+ py1 + () p
2 1
 +
p−
2  δ y1 +

2
( ) p +1
4 1
 +
p−
2  δ y1 +

4
( )
p+ 2
6 1
 +
p−
2  δ y1 + 

6

2 3 2 5 2 7

= (1 − p) y0 + ( ) 2 −3 p δ
p
2
2
y0 + ( ) 3 −5 p δ
p +1
4
4
y0 + ( ) 4 −7 p δ
p+2
6
6
y0 + 

p +1 2 p+2 4 p+3 6
+ py1 + ( ) p
2
3
δ y1 + ( )
p +1
4
5
δ y1 + ( )
p+2
6
7
δ y1 + 

q +1 2 q+2 4 q+3 6
= qy0 + ( )
1− q
2
3
δ y0 + ( )2−q
4
5
δ y0 + ( )
3− q
6
7
δ y0 + ....



+ py1 + ( )δp +1
3
2
y1 + ( )δ
p+2
5
4
y1 + ( )δ p+3
7
6
y1 + 

q +1 2 q+2 4 q+3 6
= qy0 + ( ) q
2
3
δ y0 + ( )
q +1
4
5
δ y0 + ( ) q+2
6
7
δ y0 + 



+ py1 + ( ) p +1
3 δ 2 y1 + ( )
p+ 2
5 δ 4 y1 + ( )
p+3
7 δ 6 y1 + ...

where q = 1 - p

∴ y p = qy0 + ( )δq +1
3
2
y0 + ( )δ
q+ 2
5
4
y0 + ( )δ
q+3
7
6
y0 + 


+ py1 + ( )δp +1
3
2
y1 + ( )δ
p+ 2
5
4
y1 + ( )δ p+3
7
6
y1 + 

where  q = 1 - p

which is Laplace–Everett’s formula.



Comparison of Central Difference Formulae


Gaussian interpolation formulae are of theoretical interest. Stirling's formula is suitable for small values of p, for example -1/4 ≤ p ≤ 1/4. Bessel's formula is suitable for values of p not too far from 1/2, for example 1/4 ≤ p ≤ 3/4. Everett's formula is the most generally useful, and in this formula the even-order differences are placed together in horizontal lines.
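As an illustration of how these central difference formulae are used in practice, the Python sketch below (our own code, truncated at fourth differences and assuming an odd number of equispaced points centred at x0) applies Stirling's formula to the data of Example 5.65.

def stirling(xs, ys, x):
    """Stirling's central-difference interpolation, truncated at 4th differences.
    xs must be equispaced with an odd number (at least 5) of points; the middle one is x0."""
    h = xs[1] - xs[0]
    m = len(xs) // 2                        # index of the central argument x0
    p = (x - xs[m]) / h
    cols = [list(ys)]                       # forward difference columns
    for _ in range(4):
        prev = cols[-1]
        cols.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    d1, d2, d3, d4 = cols[1], cols[2], cols[3], cols[4]
    return (ys[m]
            + p * (d1[m] + d1[m - 1]) / 2                        # p·µδy0
            + p**2 / 2 * d2[m - 1]                               # (p²/2!)·δ²y0
            + p * (p**2 - 1) / 6 * (d3[m - 1] + d3[m - 2]) / 2   # µδ³y0 term
            + p**2 * (p**2 - 1) / 24 * d4[m - 2])                # δ⁴y0 term

# Example 5.65: u(12.2), with 10^5·u tabulated at x = 10, 11, 12, 13, 14
xs = [10, 11, 12, 13, 14]
ys = [23967, 28060, 31788, 35209, 38368]
print(stirling(xs, ys, 12.2) / 1e5)        # ≈ 0.32495, as found in Example 5.65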

Example 5.63: Use Gauss forward interpolation formula to find the value of log 337.5 from the
following table:
x 310 320 330 340 350 360
yx = log x 2.4914 2.5051 2.5185 2.5315 2.5441 2.5563

Solution: Central difference table for given data is

x y δy δ2y δ3y δ4y δ5y

310 2.4914
0.0137
320 2.5051 −0.0003
0.0134 −0.0001
330 2.5185 −0.0004 0.0001
0.0130 0 −0.0001
340 2.5315 −0.0004 0
0.0126 0
350 2.5441 −0.0004
0.0122
360 2.5563

x = 337.5, x0 = 330, h = 10

x − x0 337.5 − 330
∴  p = = = 0.75
h 10 
By the Gauss forward interpolation formula

\[
\log 337.5 = y_p = y_0 + p\,\delta y_{1/2} + \frac{p(p-1)}{2!}\delta^2 y_0 + \frac{(p+1)p(p-1)}{3!}\delta^3 y_{1/2} + \frac{(p+1)p(p-1)(p-2)}{4!}\delta^4 y_0 + \frac{(p+2)(p+1)p(p-1)(p-2)}{5!}\delta^5 y_{1/2}
\]
\[
= 2.5185 + (0.75)(0.0130) + \frac{(0.75)(-0.25)}{2}(-0.0004) + \frac{(1.75)(0.75)(-0.25)}{6}(0) + \frac{(1.75)(0.75)(-0.25)(-1.25)}{24}(0.0001) + \frac{(2.75)(1.75)(0.75)(-0.25)(-1.25)}{120}(-0.0001)
\]

∴ log 337.5 ≈ 2.5283

Example 5.64: Apply Gauss’s backward interpolation formula to find the sales for the year 1974
from the table

(year) x 1939 1949 1959 1969 1979 1989


Sales (y) (in lakhs) 12 15 20 27 39 52

Solution: Central difference table for given data is

x y δy δ2y δ3y δ4y δ5y

1939 12
3
1949 15 2
5 0
1959 20 2 3
7 3 −10
1969 27 5 −7
12 −4
1979 39 1
13
1989 52

x = 1974, x0 = 1969, h = 10

x − x0 1974 − 1969
∴ p= = = 0.5
h 10 
By the Gauss backward interpolation formula

\[
y_p \approx y_0 + p\,\delta y_{-1/2} + \frac{(p+1)p}{2!}\delta^2 y_0 + \frac{(p+1)p(p-1)}{3!}\delta^3 y_{-1/2} + \frac{(p+2)(p+1)p(p-1)}{4!}\delta^4 y_0 + \frac{(p+2)(p+1)p(p-1)(p-2)}{5!}\delta^5 y_{-1/2}
\]
\[
\therefore\ y(1974) \approx 27 + (0.5)(7) + \frac{(1.5)(0.5)}{2}(5) + \frac{(1.5)(0.5)(-0.5)}{6}(3) + \frac{(2.5)(1.5)(0.5)(-0.5)}{24}(-7) + \frac{(2.5)(1.5)(0.5)(-0.5)(-1.5)}{120}(-10)
\]

∴ y(1974) ≈ 32 lakhs

Example 5.65: Compute u12.2 from the following table using Stirling’s formula

x o 10 11 12 13 14
105ux 23967 28060 31788 35209 38368

where ux = 1 + log10 sin x 


Solution: Central difference table is
x (in degrees ) 105 ux
10 23967
4093
11 28060 −365
3728 58

12 31788 −307 −13

3421 45
13 35209 −262
3159
14 38368

x = 12.2, x0 = 12, h = 1

x − x0 12.2 − 12
∴ p= = = 0.2
h 1 
By Stirling's formula

\[
10^5 u_p = y_0 + p\,\mu\delta y_0 + \frac{p^2}{2!}\delta^2 y_0 + \frac{(p+1)p(p-1)}{3!}\mu\delta^3 y_0 + \frac{(p+1)p^2(p-1)}{4!}\delta^4 y_0 + \cdots
\]
\[
\therefore\ 10^5 u_{12.2} \approx 31788 + (0.2)\Bigl(\frac{3728+3421}{2}\Bigr) + \frac{(0.2)^2}{2}(-307) + \frac{(1.2)(0.2)(-0.8)}{6}\Bigl(\frac{58+45}{2}\Bigr) + \frac{(1.2)(0.2)^2(-0.8)}{24}(-13)
\]
\[
= 31788 + (0.1)(7149) - \frac{(0.2)^2}{2}(307) - \frac{(1.2)(0.2)(0.8)}{12}(103) + \frac{(1.2)(0.2)^2(0.8)}{24}(13) \approx 32495
\]

∴ u12.2 ≈ 0.32495

Example 5.66: Apply Bessel’s formula to obtain y25 given that y20 = 2854, y24 = 3162, y28 = 3544,
y32 = 3992.
Solution: Central difference table for given data is

x y δy δ2y δ3y
20 2854
308
24 3162 74

382 −8

28 3544 66
448
32 3992
To obtain y25
x = 25, x0 = 24, h = 4

x − x0 25 − 24
∴ p= = = 0.25
h 4 
By Bessel's interpolation formula

\[
y_p = \mu y_{1/2} + \Bigl(p-\tfrac12\Bigr)\delta y_{1/2} + \frac{p(p-1)}{2!}\mu\delta^2 y_{1/2} + \frac{p(p-1)}{2!}\cdot\frac{p-\frac12}{3}\delta^3 y_{1/2} + \cdots
\]
\[
\therefore\ y_{25} \approx \frac{3544+3162}{2} + (-0.25)(382) + \frac{(0.25)(-0.75)}{2}\Bigl(\frac{66+74}{2}\Bigr) + \frac{(0.25)(-0.75)}{2}\cdot\frac{(-0.25)}{3}(-8)
\]
\[
= \frac{6706}{2} - (0.25)(382) - \frac{(0.25)(0.75)(140)}{4} - \frac{(0.25)(0.75)(0.25)(8)}{6}
\]

∴ y25 ≈ 3251

Example 5.67: Given the table


x 310 320 330 340 350 360
y = log x 2.49136 2.50515 2.51851 2.53148 2.54407 2.55630

find the value of log 337.5 by Everett’s formula.


Solution: Central difference table for given data is
x y dy d 2y d 3y d 4y d 5y
310 2.49136
0.01379
320 2.50515 –0.00043
0.01336 0.00004
330 2.51851 –0.00039 –0.00003

0.01297 0.00001 0.00004

340 2.53148 –0.00038 0.00001


0.01259 0.00002
350 2.54407 –0.00036
0.01223
360 2.55630

To obtain y = log 337.5


x = 337.5, x0 = 330, x1 = 340, h = 10 

x − x0 337.5 − 330
p= = = 0.75, q = 1 − p = 0.25
h 10 
\[
y_p = q\,y_0 + \frac{(q+1)q(q-1)}{3!}\delta^2 y_0 + \frac{(q+2)(q+1)q(q-1)(q-2)}{5!}\delta^4 y_0 + \cdots + p\,y_1 + \frac{(p+1)p(p-1)}{3!}\delta^2 y_1 + \frac{(p+2)(p+1)p(p-1)(p-2)}{5!}\delta^4 y_1 + \cdots
\]
\[
\therefore\ \log 337.5 \approx (0.25)(2.51851) + \frac{(1.25)(0.25)(-0.75)}{6}(-0.00039) + \frac{(2.25)(1.25)(0.25)(-0.75)(-1.75)}{120}(-0.00003) + (0.75)(2.53148) + \frac{(1.75)(0.75)(-0.25)}{6}(-0.00038) + \frac{(2.75)(1.75)(0.75)(-0.25)(-1.25)}{120}(0.00001)
\]

∴ log 337.5 ≈ 2.52827

Example 5.68: The following table gives the value of ex for certain equidistant values of x. Find
the value of ex when x = 0.644 using
(i) Stirling’s formula  (ii) Bessel’s formula  (iii) Everett’s formula

x 0.61 0.62 0.63 0.64 0.65 0.66 0.67


y=e x 1.840431 1.858928 1.877610 1.896481 1.915541 1.934792 1.954237

Solution: Central difference table for given data is


x y δy δ2y δ3y δ4y δ5y δ6 y
0.61 1.840431
0.018497
0.62 1.858928 0.000185
0.018682 0.000004
0.63 1.877610 0.000189 −0.000004
0.018871 0 0.000006
0.64 1.896481 0.000189 0.000002 −0.000007
0.019060 0.000002 −0.000001
0.65 1.915541 0.000191 0.000001
0.019251 0.000003
0.66 1.934792 0.000194
0.019445
0.67 1.954237

To obtain y (0.644)
x = 0.644, x0 = 0.64, x1 = 0.65, h = 0.01 

x − x0 0.644 − 0.64
p= = = 0.4, q = 1 − p = 0.6
h 0.01
(i) By Stirling's formula

\[
y_p = y_0 + p\,\mu\delta y_0 + \frac{p^2}{2!}\delta^2 y_0 + \frac{(p+1)p(p-1)}{3!}\mu\delta^3 y_0 + \frac{(p+1)p^2(p-1)}{4!}\delta^4 y_0 + \cdots
\]
\[
\therefore\ y(0.644) \approx 1.896481 + (0.4)\Bigl(\frac{0.019060+0.018871}{2}\Bigr) + \frac{(0.4)^2}{2}(0.000189) = 1.896481 + \frac{(0.4)(0.037931)}{2} + \frac{(0.4)^2}{2}(0.000189)
\]

∴ y(0.644) ≈ 1.904082


(ii) By Bessel's formula

\[
y_p = \mu y_{1/2} + \Bigl(p-\tfrac12\Bigr)\delta y_{1/2} + \frac{p(p-1)}{2!}\mu\delta^2 y_{1/2} + \frac{p(p-1)}{2!}\cdot\frac{p-\frac12}{3}\delta^3 y_{1/2} + \cdots
\]
\[
\therefore\ y(0.644) \approx \frac{1.915541+1.896481}{2} + (-0.1)(0.019060) + \frac{(0.4)(-0.6)}{2}\Bigl(\frac{0.000191+0.000189}{2}\Bigr) + \frac{(0.4)(-0.6)}{2}\cdot\frac{(-0.1)}{3}(0.000002)
\]
\[
= \frac{3.812022}{2} - (0.1)(0.019060) - \frac{(0.4)(0.6)}{2}(0.000190) + \frac{(0.4)(0.6)(0.1)}{6}(0.000002)
\]

∴ y(0.644) ≈ 1.904082

(iii) By Everett's formula

\[
y_p = q\,y_0 + \frac{(q+1)q(q-1)}{3!}\delta^2 y_0 + \frac{(q+2)(q+1)q(q-1)(q-2)}{5!}\delta^4 y_0 + \cdots + p\,y_1 + \frac{(p+1)p(p-1)}{3!}\delta^2 y_1 + \frac{(p+2)(p+1)p(p-1)(p-2)}{5!}\delta^4 y_1 + \cdots
\]
\[
\therefore\ y(0.644) \approx (0.6)(1.896481) + \frac{(1.6)(0.6)(-0.4)}{6}(0.000189) + (0.4)(1.915541) + \frac{(1.4)(0.4)(-0.6)}{6}(0.000191)
\]

∴ y(0.644) ≈ 1.904082


5.9  Inverse interpolation


Inverse interpolation is the process of finding the value of x for some given value of f (x) when the
given value of f (x) is between two tabulated values of f (x).

5.9.1 Lagrange’s Method for Inverse Interpolation


Lagrange’s polynomial is merely a relation between two variables where we take x as indepen-
dent variable and y = f (x) as dependent variable.
We can write the Lagrange inverse interpolation formula by taking y = f (x) as independent
variable and x as dependent variable.
Suppose, we are given that y = f (x) has values y0, y1, y2, …, yn at values of arguments x0, x1, x2,
…, xn and we are to find x for given value of y. We have
\[
x = \sum_{k=0}^{n} l_k(y)\,x_k
\]
where
\[
l_k(y) = \frac{(y-y_0)(y-y_1)\cdots(y-y_{k-1})(y-y_{k+1})\cdots(y-y_n)}{(y_k-y_0)(y_k-y_1)\cdots(y_k-y_{k-1})(y_k-y_{k+1})\cdots(y_k-y_n)}
\]
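
A minimal Python sketch of this inverse use of Lagrange's formula is given below; the function name lagrange_inverse is an illustrative assumption, not part of the text. It reproduces Example 5.69 later in this section.

def lagrange_inverse(y_target, ys, xs):
    # Lagrange interpolation with the roles of x and y interchanged:
    # x is expressed as a polynomial in y and evaluated at y_target.
    total = 0.0
    n = len(ys)
    for k in range(n):
        lk = 1.0
        for j in range(n):
            if j != k:
                lk *= (y_target - ys[j]) / (ys[k] - ys[j])
        total += lk * xs[k]
    return total

# Example 5.69: y = 4, 12, 19 at x = 1, 3, 4; find x for y = 7
print(lagrange_inverse(7, [4, 12, 19], [1, 3, 4]))   # about 1.86, i.e. x is roughly 2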
Iterative methods
Lagrange's method is used when the arguments are not equispaced. When the arguments are equispaced, any of the finite difference interpolation formulae (other than Lagrange's or the divided difference formula) can be used, and the value of x for a given y is found by an iterative process. We shall explain this using Newton's forward difference interpolation formula and using Everett's formula.

5.9.2 Inverse Interpolation using Newton’s Forward Interpolation Formula


From the given data, complete the forward difference table. Let y is given value for which x is
required. We name this value of y as yp then
y p = f ( x0 + ph )

\ x = x0 + ph 
After finding p, putting value of p in this equation, we get x.
Now Newton's forward difference interpolation formula is

\[
y_p = y_0 + p\Delta y_0 + \binom{p}{2}\Delta^2 y_0 + \binom{p}{3}\Delta^3 y_0 + \binom{p}{4}\Delta^4 y_0 + \cdots
\]
\[
\therefore\quad p = \frac{1}{\Delta y_0}\Bigl[y_p - y_0 - \binom{p}{2}\Delta^2 y_0 - \binom{p}{3}\Delta^3 y_0 - \binom{p}{4}\Delta^4 y_0 - \cdots\Bigr]
\]

The first approximation of p is taken as
\[
p_1 = \frac{1}{\Delta y_0}\bigl(y_p - y_0\bigr)
\]
The second approximation p2 of p is
\[
p_2 = \frac{1}{\Delta y_0}\Bigl[y_p - y_0 - \binom{p_1}{2}\Delta^2 y_0\Bigr]
\]
The third approximation p3 of p is
\[
p_3 = \frac{1}{\Delta y_0}\Bigl[y_p - y_0 - \binom{p_2}{2}\Delta^2 y_0 - \binom{p_2}{3}\Delta^3 y_0\Bigr]
\]

The process is continued till two successive approximations of p are equal up to desired accuracy.
After finding p, x is found from x = x0 + ph.
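
The iteration just described can be sketched in Python as follows. This is only an illustrative sketch (the names binom_p and newton_forward_inverse are assumptions); for simplicity it applies, at every pass, the correction from all available differences rather than adding one more difference per pass, which converges to the same p.

from math import factorial

def binom_p(p, k):
    # Generalized binomial coefficient C(p, k) = p(p-1)...(p-k+1)/k!
    num = 1.0
    for r in range(k):
        num *= (p - r)
    return num / factorial(k)

def newton_forward_inverse(y_target, xs, ys, tol=1e-4, max_iter=25):
    # Iterative inverse interpolation from Newton's forward formula:
    # p_(n+1) = [y - y0 - sum_(k>=2) C(p_n, k) Delta^k y0] / Delta y0
    h = xs[1] - xs[0]
    diffs = [list(ys)]
    while len(diffs[-1]) > 1:
        row = diffs[-1]
        diffs.append([row[i + 1] - row[i] for i in range(len(row) - 1)])
    d = [diffs[k][0] for k in range(len(diffs))]   # Delta^k y0
    p = (y_target - d[0]) / d[1]                   # first approximation p1
    for _ in range(max_iter):
        corr = sum(binom_p(p, k) * d[k] for k in range(2, len(d)))
        p_new = (y_target - d[0] - corr) / d[1]
        if abs(p_new - p) < tol:
            p = p_new
            break
        p = p_new
    return xs[0] + p * h

# Example 5.71: root of x^3 + x - 3 = 0 near 1.2, using the table at x = 1, 2, 3, 4
xs = [1, 2, 3, 4]
ys = [x**3 + x - 3 for x in xs]
print(newton_forward_inverse(0, xs, ys))   # about 1.213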

5.9.3  Inverse Interpolation using Everett’s Formula


Let f be tabulated along with δ²f and δ⁴f from the given data, and suppose we want to find x for y = f(x) = yp. We have x = x0 + ph and, by Everett's formula,


\[
y_p = p\,y_1 + \binom{p+1}{3}\delta^2 y_1 + \binom{p+2}{5}\delta^4 y_1 + q\,y_0 + \binom{q+1}{3}\delta^2 y_0 + \binom{q+2}{5}\delta^4 y_0, \qquad\text{where } q = 1-p
\]
\[
= p\,y_1 + \binom{p+1}{3}\delta^2 y_1 + \binom{p+2}{5}\delta^4 y_1 + (1-p)y_0 + \binom{2-p}{3}\delta^2 y_0 + \binom{3-p}{5}\delta^4 y_0
\]

The first approximation p1 of p is given by
\[
p_1 y_1 + (1-p_1)y_0 = y_p
\]
The second approximation p2 of p is given by
\[
p_2 y_1 + (1-p_2)y_0 = y_p - \binom{p_1+1}{3}\delta^2 y_1 - \binom{2-p_1}{3}\delta^2 y_0
\]
The third approximation p3 of p is given by
\[
p_3 y_1 + (1-p_3)y_0 = y_p - \binom{p_2+1}{3}\delta^2 y_1 - \binom{2-p_2}{3}\delta^2 y_0 - \binom{p_2+2}{5}\delta^4 y_1 - \binom{3-p_2}{5}\delta^4 y_0
\]

The process is continued till two successive approximations of p are equal up to the desired accuracy.
After finding p, x is found from x = x0 + ph.

Example 5.69: If y1 = 4, y3 = 12, y4 = 19 and y x = 7, find x using inverse interpolation.


Solution:

y 4 12 19
x 1 3 4
We are to find x for y = 7
By Lagrange's inverse interpolation formula

\[
x \approx \frac{(y-y_1)(y-y_2)}{(y_0-y_1)(y_0-y_2)}\,x_0 + \frac{(y-y_0)(y-y_2)}{(y_1-y_0)(y_1-y_2)}\,x_1 + \frac{(y-y_0)(y-y_1)}{(y_2-y_0)(y_2-y_1)}\,x_2
\]
\[
= \frac{(7-12)(7-19)}{(4-12)(4-19)}(1) + \frac{(7-4)(7-19)}{(12-4)(12-19)}(3) + \frac{(7-4)(7-12)}{(19-4)(19-12)}(4)
= \frac{(5)(12)}{(8)(15)} + \frac{(3)(12)}{(8)(7)}(3) - \frac{(3)(5)(4)}{(15)(7)}
\]

∴ x ≈ 2

Example 5.70: Following table gives the annuity value y for various ages x.

Age (x) 30 35 40 45 50
Annuity (y) 15.9 14.9 14.1 13.3 12.5

Find the age corresponding to annuity value 13.6



Solution:
Central difference table for given data is

x y δy δ²y δ³y δ⁴y

30 15.9
          −1.0
35 14.9          0.2
          −0.8          −0.2
40 14.1          0.0          0.2
          −0.8          0.0
45 13.3          0.0
          −0.8
50 12.5

Take x0 = 40, h = 5
Let x be age for annuity 13.6
x − x0 x − 40
∴ p= = ; y p = 13.6
h 5 
By Bessel's interpolation formula

\[
y_p = \mu y_{1/2} + \Bigl(p-\tfrac12\Bigr)\delta y_{1/2} + \frac{p(p-1)}{2}\mu\delta^2 y_{1/2} + \frac{p(p-1)}{2}\cdot\frac{p-\frac12}{3}\delta^3 y_{1/2} + \frac{(p+1)p(p-1)(p-2)}{4!}\mu\delta^4 y_{1/2} + \cdots
\]
\[
\therefore\quad p_1 - \frac12 = \frac{y_p - \mu y_{1/2}}{\delta y_{1/2}} = \frac{13.6 - \frac12(13.3+14.1)}{13.3-14.1} = \frac{13.6-13.7}{-0.8} = 0.125
\qquad\therefore\quad p_1 \approx \frac12 + 0.125 = 0.625
\]
\[
p_2 - \frac12 \approx \frac{1}{\delta y_{1/2}}\Bigl[y_p - \mu y_{1/2} - \frac{p_1(p_1-1)}{2}\mu\delta^2 y_{1/2}\Bigr]
= \frac{1}{13.3-14.1}\Bigl[13.6 - \frac12(13.3+14.1) - \frac{(0.625)(-0.375)}{2}\cdot\frac{0+0}{2}\Bigr] = 0.125
\qquad p_2 \approx \frac12 + 0.125 = 0.625
\]

∴ p ≈ 0.625

\[
\frac{x-40}{5} = p \approx 0.625 \quad\Rightarrow\quad x \approx 40 + 5(0.625) = 43.125 \approx 43
\]

∴ Age corresponding to annuity 13.6 ≈ 43

Example 5.71: Using inverse interpolation, find the real root of the equation x3 + x – 3 = 0 which
is close to 1.2
Solution: Let y = x3 + x – 3
We have
x 1 2 3 4
y –1 7 27 65

Forward difference table for this data is

x y Dy D2y D3y
1 –1
8
2 7 12
20 6
3 27 18
38
4 65

We are to find x close to 1.2 for y = 0


h = 1, x0 = 1
x − x0
p= = x −1
1
By Newton’s forward interpolation formula
p ( p − 1) p ( p − 1) ( p − 2)
y p  y0 + p∆y0 + ∆ 2 y0 + ∆ 3 y0
2! 3!
for  yp = 0
y p − y0 0 − ( −1)
p1  = = 0.125
∆y0 8

1  p1 ( p1 − 1) 2 
p2   y p − y0 − ∆ y0 
∆y0  2 


1 (0.125) ( −0.875) 12 
= 0 + 1 − ( )
8 2 

1  (0.125)(0.875)(12) 
= 1 +   0.207
8 2 

1  p2 ( p2 − 1) 2 p ( p − 1) ( p2 − 2) 3 
p3   y p − y0 − ∆ y0 − 2 2 ∆ y0 
∆y0  2 6 

1 (0.207) ( −0.793) 12 − (0.207) ( −0.793) ( −1.793) 6 
= 0 + 1 − ( ) ( )
8 2 6 

1
= 1 + (0.207)(0.793)(6 ) − (0.207)(0.793)(1.793)
8 
 0.211 
1  p3 ( p3 − 1) 2 p3 ( p3 − 1) ( p3 − 2) 3 
p4   y p − y0 − ∆ y0 − ∆ y0 
∆y0  2 6 

1 (0.211) ( −0.789) 12 − (0.211) ( −0.789) ( −1.789) 6 
= 0 + 1 − ( ) ( )
8 2 6 

1
= 1 + (0.211)(0.789)(6 ) − (0.211)(0.789)(1.789)
8 
 0.2126 

1  p4 ( p4 − 1) 2 p ( p − 1) ( p4 − 2) 3 
p5   y p − y0 − ∆ y0 − 4 4 ∆ y0 
∆y0  2 6 

1 ( 0.2126 ) ( −0.7874 ) 
= 0 + 1 − (12 ) − ( 0.2126 ) ( −0.7874 ) ( −1.7874 )
8 2 

1
= 1 + (0.2126 )(0.7874 )(6 ) − (0.2126 )(0.7874 )(1.7874 )
8 
 0.2131 
∴ p  0.213 

∴ x − 1 = p  0.213 

∴ Root upto three decimal places = 1.213



Exercise 5.4

1. The following data give I, the indicated HP and V, the speed in knots developed by a ship

V 8 10 12 14 16
I 1000 1900 3250 5400 8950
Find I when V = 9, using Newton’s forward interpolation formula.
2. Compute (a)  y(9) (b)  y(7) (c)  y(17) and (d)  y(19) from the following data:

x 8 10 12 14 16 18
y 10 19 32.5 54 89.5 154
3. In the table below, the values of y are consecutive terms of a series of which 23.6 is the
sixth term. Find the first and tenth term of the series:

x 3 4 5 6 7 8 9
y 4.8 8.4 14.5 23.6 36.2 52.8 73.9
4. The following are data from the steam table:

Temp C °(t) 140 150 160 170 180


Pressure 3.685 4.854 6.302 8.076 10.225
kg f /cm2 (p)
Using Newton’s formula, find the pressure of the steam for temperature 142° and 175°.
5. The table below gives the values of tan x for 0.10 ≤ x ≤ 0.30

x 0.10 0.15 0.20 0.25 0.30


y = tan x 0.1003 0.1511 0.2027 0.2553 0.3093

Find (i) tan 0.12 (ii) tan 0.26 (iii) tan 0.40 (iv) tan 0.50
6. The amount A of a substance remaining in a reacting system after an interval of time t in a
certain chemical experiment is tabulated below:
t (min.) 2 5 8 11
A (gm.) 94.8 87.9 81.3 75.1
Obtain the value of A when t = 9 using Newton’s backward interpolation formula.
7. By using Newton’s forward difference formula, fit a polynomial of degree three which
takes the following values:
x 3 4 5 6
y 6 24 60 120
8. Find the cubic polynomial f (x) which takes the values f (0) = −4, f (1) = −1, f (2) = 2,
f (3) = 11, f (4) = 32, f (5) = 71. Find f (6) and f (2.5).

9. In the table below the values of y are consecutive terms of a series of which the number
21.6 is the sixth term. Find the first and tenth terms of the series:

x 3 4 5 6 7 8 9
y 2.7 6.4 12.5 21.6 34.3 51.3 72.9

10. The following table gives corresponding values of x and y. Prepare a forward difference
table and using it express y as a function of x. Also obtain y when x = 2.5.

x 0 1 2 3 4
y 7 10 13 22 43
11. Using Newton’s formula, find a cubic polynomial f (x) which takes the following set of
values (0,1), (1, 2), (2, 1) and (3, 10). Hence or otherwise evaluate f (4).
12. Calculate the approximate value of sin x for x = 0.54 and x = 1.36 using the following table:

x 0.5 0.7 0.9 1.1 1.3 1.5


sin x 0.47943 0.64422 0.78333 0.89121 0.96356 0.99749

13. The table gives the distance in nautical miles of the visible horizon for the given heights
in feet above the earth’s surface:

x = height 100 150 200 250 300 350 400


y = distance 10.63 13.03 15.04 16.81 18.42 19.90 21.27
Find the value of y when x = 218 ft. and 410 ft. by using Newton’s formulae.
14. The following table gives the values of density of saturated water for various temperatures
of saturated steam:

Temp°c (= T) 100 150 200 250 300


Density hg/m (= d)3 958 917 865 799 712
Find by interpolation, the densities when the temperatures are 130°c and 275°c, ­respectively.
15. By using Newton’s forward difference formula, find a polynomial which takes the ­following
values

x 1 3 5 7 9 11
yx 3 14 19 21 23 28

and hence compute yx at x = 2, 12.


16. Approximate cos 23° by interpolation from the following table correct to four places of
decimal

x 10° 20° 30° 40° 50° 60° 70° 80°


cos x 0.9848 0.9397 0.8660 0.7660 0.6428 0.5000 0.3420 0.1737

17. From the following data, estimate the number of persons having income in rupess in
­between
(i)  1000–1700 and (ii)  3500–4000

Income Below 500 500−1000 1000−2000 2000−3000 3000−4000


No. of persons 6000 4250 3600 1500 650

18. Following table gives the grouped data for no. of students lying in various weight groups:

Wt. in lbs. 0−40 40−60 60−80 80−100 100−120


No. of students 250 120 100 70 50

Find the number of students having weight between 60 and 70 lbs.


19. Use Gauss’s forward interpolation formula to find the value of y9 if y0 = 14, y4 = 24,
y8 = 32, y12 = 35 and y16 = 40
20. Apply Gauss’s forward formula to evaluate y30, given that y21 = 18.4708, y25 = 17.8144,
y29 = 17.1070, y33 = 16.3432 and y37 = 15.5154.
21. Apply Gauss’s backward interpolation formula to find the population for the year 1976,
given that

Year x 1940 1950 1960 1970 1980 1990


Population (y) in thousands 17 20 27 32 36 38

22. For the following table use Stirling’s formula to compute f (x) at x =11

x 2 6 10 14 18
y = f (x) 21.857 21.025 20.132 19.145 18.057
23. Using Stirling’s formula, compute f (1.22) from the following data:

x 1.0 1.1 1.2 1.3 1.4


f (x) 0.841 0.891 0.932 0.963 0.985
24. Use Stirling’s formula to find y35 given that y10 = 600, y20 = 512, y30 = 439, y40 = 346,
y50 = 243
25. Use Bessel’s formula to compute f (1.95) from the following data:

x 1.7 1.8 1.9 2.0 2.1 2.2 2.3


f (x) 2.979 3.144 3.283 3.391 3.463 3.997 4.491

26. Use Laplace – Everett’s formula to obtain f (1.15) given that f (1) = 1.000, f (1.10) = 1.049,
f (1.20) = 1.096 and f (1.30) = 1.140.
27. The following table gives the values of the probability integral \(f(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-x^2}\,dx\) for some values of x. Find the value of this integral when x = 0.5437 using
(i)  Stirling's formula (ii)  Bessel's formula

x 0.51 0.52 0.53 0.54 0.55 0.56 0.57


f (x) 0.5292437 0.5378987 0.5464641 0.5549392 0.5633233 0.5716157 0.5798158

28. Using Lagrange’s interpolation formula, find the value of y corresponding to x = 10 from
the following table:

x 5 6 9 11
y 12 13 14 16

29. Use Lagrange’s interpolation formula to fit a polynomial to the following data. Hence find
y(−2), y(1) and y(4).

x −1 0 2 3
y -8 3 1 2
30. Use Lagrange's interpolation formula to express the function \(\dfrac{x^2+x-3}{x^3-2x^2-x+2}\) as a sum of partial fractions.
31. Use Lagrange’s formula to find the form of f (x), given

x 0 2 3 6
f (x) 648 704 729 792
32. Express the function \(\dfrac{x^2+6x-1}{(x^2-1)(x-4)(x-6)}\) as a sum of partial fractions using Lagrange's formula.
33. Using Lagrange’s formula, prove that   32 f (1) = −3 f (–4) + 10 f (−2) + 30 f (2) – 5 f (4)
34. By means of Lagrange’s formula, prove that

y1 = y3 – 0.3 (y5 − y– 3) + 0.2 (y– 3 – y– 5)  (approximately)


35. The function y = f (x) is given in the points (7, 3), (8, 1), (9, 1) and (10, 9). Find y (9.5)
using Lagrange’s interpolation formula.
36. If a, b, c, d are the arguments of f(x) = 1/x, show that f(a, b, c, d) = −1/(abcd).
37. Find a polynomial satisfied by the following table:

x -4 −1 0 2 5
f (x) 1245 33 5 9 1335

38. Find Newton’s divided difference polynomial for the data given in the table below, Also
find f (2.5)

x −3 −1 0 3 5
f (x) −30 −22 −12 330 3458

39. By means of Newton’s divided difference formula, find the value of f (8) and f (15) from
the following table:

x 4 5 7 10 11 13
f (x) 48 100 294 900 1210 2028

40. Using divided difference, find the value of f (8), given that f (6) = 1.556,  f (7) = 1.690,
f (9) = 1.908, f (12) = 2.158
41. Given that
x 5 7 11 13 17
f (x) 150 392 1452 2366 5202

Evaluate f (9), using Lagrange’s and Newton’s divided difference formulae.


42. Given that

x 300 304 305 307


log10 x 2.4771 2.4829 2.4843 2.4871

Calculate the approximate value of log10 301.


43. Solve ex = 3.14 by inverse interpolation using x = f (y) with f (3.0) = 1.0986,
 f (3.2) = 1.1632,  f (3.4) = 1.2238
44. Apply Lagrange’s formula inversely to find, to two decimal places, the value of x when
y = f (x) = 19, given the following table:

x 0 1 2
f (x) 0 1 20

45. Compute the value of x, when y = 8 by inverse interpolation using Lagrange’s formula

x −2 −1 1 2
y -7 2 0 11
46. The following values of y = f (x) are given

x 10 15 20
y 1754 2648 3564
Find the value of x for y = 3000 by iterative method.
47. From the following data

x 1.8 2.0 2.2 2.4 2.6


y 2.9 3.6 4.4 5.5 6.7
find x when y = 5, using iterative method.
48. Find the root of the equation 10x3 – 15x + 3 = 0 which lies between 1 and 2.

Answers 5.4
1. 1406   2. (a) 14.2  (b) 5.2  (c) 116.6  (d) 205.9
3. y(1) = 3.1, y(10) = 100    4. 3.899, 9.1005
5. (i) 0.1205  (ii) 0.2660  (iii) 0.4241  (iv) 0.5543
6. 79.2 gm.  7. y(x) = x3 – 3x2 + 2x
8. f (x) = x3−3x2 + 5x – 4,  f (6) = 134,  f (2.5) = 5.375
9. y (1) = 0.1, y (10) = 100   10. y(x) = x3−3x2 + 5x + 7, y(2.5) = 16.375
11. f (x) = 2x3 – 7x2 + 6x + 1,  f (4) = 41   12. 0.51414, 0.97786
13. 15.70 nautical miles, 21.54 nautical miles    14.  935 hg/m3, 759 hg/m3
15. yx = (1/16)(x³ − 21x² + 159x − 91), y2 = 9.4375, y12 = 32.5625   16. 0.9205
17. (i) 2797  (ii) 402   18. 54   19. 33   20. 16.9216
21. 34 thousands   22. 19.895   23. 0.939   24. 395
25. 3.347   26. 1.073   27. (i) 0.5580520  (ii) 0.5580520   28. 14.6667
29. y = (1/6)(7x³ − 31x² + 28x + 18); y(−2) = −36.3333, y(1) = 3.6667, y(4) = 13.6667
30. −1/(2(x + 1)) + 1/(2(x − 1)) + 1/(x − 2)    31. f(x) = −x² + 30x + 648
32. 3/(35(x + 1)) + 1/(5(x − 1)) − 13/(10(x − 4)) + 71/(70(x − 6))    35. 3.625
37. f (x) = 3x4 - 5x3 + 6x2 − 14x + 5
38. f (x) = 5x4 + 9x3 − 27x2 – 21x − 12, f (2.5) = 102.6875
39. f (8) = 448, f (15) = 3150   40. 1.806   41. 810   42. 2.4786
43. 1.1442   44. 2.80   45. −3   46. 17   47. 2.3   48. 1.109

Numerical Methods for
Differentiation, Integration and
Ordinary Differential Equations 6
6.1 introduction
Numerical differentiation is the computation of values of the derivative of a function from its
given values. Derivative is the limit of the difference quotient in which we divide by small quan-
tity which can create too much error. This is the reason that numerical differentiation is the weak-
est point of numerical methods and should be avoided wherever possible. However, the formulae
obtained in numerical differentiation are basic in numerical solution of differential equations.
Numerical integration means the numerical evaluation of the integral \(\int_a^b f(x)\,dx\), where a and b are given and f is a function given analytically by a formula or empirically by a table of values. Numerical methods are applied when f is too complicated to be integrated analytically or when only numerical values of f are given.
Numerical methods for differential equations are of great importance to the engineer and physi-
cist because practical problems often lead to differential equations which cannot be solved by known
methods or solution is very much complicated and thus numerical methods will be preferable.

6.2 Numerical Differentiation
In interpolation, f (x) is approximated by a polynomial P (x). Now if we plot f (x) and P (x), we
may observe that even though the difference between P (x) and f (x) is small throughout the inter-
val but the slopes of tangents to two curves may differ quite appreciably and hence the derivative
will differ quite appreciably. It can also be possible that slopes may differ in sign and then even
the round off error will affect the calculations. This is the reason that numerical differentiation is
considered the weakest concept in numerical methods.
In the given data, if the interval of differencing is not constant then first find the interpolation
polynomial P (x) approximating f (x) by Lagrange interpolation polynomial or by divided dif-
ference interpolation polynomial. Then differentiate P (x) and after that put value of x at which
derivative of f (x) is required. Similarly, we can find second or higher order derivatives.
If f (x) is given at equidistant points, our problem is then to compute the derivative either at a
grid point or in an interior point.

6.2.1 Derivatives at Interior Points


If the interior point is near the beginning point x0, write x = x0 + ph, so that dp/dx = 1/h.
By Newton's forward interpolation formula

\[
y_p = y_0 + \binom{p}{1}\Delta y_0 + \binom{p}{2}\Delta^2 y_0 + \binom{p}{3}\Delta^3 y_0 + \binom{p}{4}\Delta^4 y_0 + \cdots
= y_0 + p\Delta y_0 + \frac{p^2-p}{2}\Delta^2 y_0 + \frac{p^3-3p^2+2p}{6}\Delta^3 y_0 + \frac{p^4-6p^3+11p^2-6p}{24}\Delta^4 y_0 + \cdots
\]
\[
\therefore\ \frac{dy}{dx} = \frac{dy}{dp}\frac{dp}{dx} = \frac{1}{h}\Bigl[\Delta y_0 + \frac{2p-1}{2}\Delta^2 y_0 + \frac{3p^2-6p+2}{6}\Delta^3 y_0 + \frac{4p^3-18p^2+22p-6}{24}\Delta^4 y_0 + \cdots\Bigr]
\]
From this, y ′( x ) will be obtained.
Again differentiating it, we can find y″(x).
Similarly if x is near end point then use Newton’s backward interpolation formula.
If x is near the midpoint, a central difference formula is used and, proceeding as above, we can find the derivative. For example, when we use the Everett formula

\[
y_p = \Bigl[q y_0 + \binom{q+1}{3}\delta^2 y_0 + \binom{q+2}{5}\delta^4 y_0 + \cdots\Bigr] + \Bigl[p y_1 + \binom{p+1}{3}\delta^2 y_1 + \binom{p+2}{5}\delta^4 y_1 + \cdots\Bigr]
= \Bigl[q y_0 + \frac{q^3-q}{6}\delta^2 y_0 + \frac{q^5-5q^3+4q}{120}\delta^4 y_0 + \cdots\Bigr] + \Bigl[p y_1 + \frac{p^3-p}{6}\delta^2 y_1 + \frac{p^5-5p^3+4p}{120}\delta^4 y_1 + \cdots\Bigr]
\]

Now, using
\[
\frac{dy}{dx} = \frac{dy}{dp}\frac{dp}{dx} = \frac{1}{h}\frac{dy}{dp}
\qquad\text{and}\qquad
\frac{dy}{dx} = \frac{dy}{dq}\frac{dq}{dp}\frac{dp}{dx} = -\frac{1}{h}\frac{dy}{dq} \qquad(\because\ q = 1-p)
\]
we have
\[
y_p' = \frac{1}{h}\Bigl[-y_0 - \frac{3q^2-1}{6}\delta^2 y_0 - \frac{5q^4-15q^2+4}{120}\delta^4 y_0 + \cdots + y_1 + \frac{3p^2-1}{6}\delta^2 y_1 + \frac{5p^4-15p^2+4}{120}\delta^4 y_1 + \cdots\Bigr]
\]

6.2.2 Derivative at Grid Points


The derivative at a grid point can be obtained as for interior points by putting p = 0, but there are simpler formulae for derivatives at grid points, which we prove below. Derivatives found from these formulae require less computational work. The formulae are

 (i)  \(y_0' = \dfrac{1}{h}\Bigl[\mu\delta y_0 - \dfrac16\mu\delta^3 y_0 + \dfrac1{30}\mu\delta^5 y_0 - \cdots\Bigr]\) (6.1)

(ii)  \(y_0'' = \dfrac{1}{h^2}\Bigl[\delta^2 y_0 - \dfrac1{12}\delta^4 y_0 + \dfrac1{90}\delta^6 y_0 - \cdots\Bigr]\) (6.2)

Proof: (i) With U = hD,
\[
2\sinh\frac{U}{2} = \delta
\]
Differentiating with respect to δ,
\[
\cosh\frac{U}{2}\,\frac{dU}{d\delta} = 1
\qquad\therefore\ \frac{dU}{d\delta} = \frac{1}{\cosh\frac{U}{2}} = \frac{1}{\sqrt{1+\sinh^2\frac{U}{2}}} = \Bigl(1+\frac{\delta^2}{4}\Bigr)^{-1/2}
= 1 - \frac{\delta^2}{8} + \frac{3}{128}\delta^4 - \frac{5}{1024}\delta^6 + \cdots
\]
∴ integrating, we have
\[
U = \delta - \frac{1}{24}\delta^3 + \frac{3}{640}\delta^5 - \frac{5}{7168}\delta^7 + \cdots \tag{6.3}
\]
\[
= \frac{\mu}{\sqrt{1+\frac{\delta^2}{4}}}\Bigl(\delta - \frac{1}{24}\delta^3 + \frac{3}{640}\delta^5 - \frac{5}{7168}\delta^7 + \cdots\Bigr) \qquad\Bigl(\because\ \mu = \sqrt{1+\tfrac{\delta^2}{4}}\Bigr)
\]
\[
= \mu\delta\Bigl(1 - \frac{1}{24}\delta^2 + \frac{3}{640}\delta^4 - \cdots\Bigr)\Bigl(1 - \frac{1}{8}\delta^2 + \frac{3}{128}\delta^4 - \cdots\Bigr)
= \mu\delta\Bigl(1 - \frac16\delta^2 + \frac1{30}\delta^4 - \frac1{140}\delta^6 + \cdots\Bigr)
\]
\[
\therefore\ D = \frac1h\Bigl[\mu\delta - \frac16\mu\delta^3 + \frac1{30}\mu\delta^5 - \frac1{140}\mu\delta^7 + \cdots\Bigr]
\qquad\therefore\ y_0' = \frac1h\Bigl[\mu\delta y_0 - \frac16\mu\delta^3 y_0 + \frac1{30}\mu\delta^5 y_0 - \frac1{140}\mu\delta^7 y_0 + \cdots\Bigr]
\]

(ii) From (6.3),
\[
U = \delta - \frac1{24}\delta^3 + \frac3{640}\delta^5 - \cdots
\qquad\therefore\ U^2 = \delta^2 - \frac1{12}\delta^4 + \Bigl(\frac3{320}+\frac1{576}\Bigr)\delta^6 - \cdots = \delta^2 - \frac1{12}\delta^4 + \frac1{90}\delta^6 - \cdots
\]
\[
\therefore\ y_0'' = \frac1{h^2}\Bigl[\delta^2 y_0 - \frac1{12}\delta^4 y_0 + \frac1{90}\delta^6 y_0 - \cdots\Bigr]
\]

These formulae are to be used when the grid point is in the middle of the table. When the grid point is near the beginning we proceed as follows:
\[
E = e^{hD} = 1+\Delta \quad\therefore\ hD = \log(1+\Delta) = \Delta - \frac{\Delta^2}{2} + \frac{\Delta^3}{3} - \frac{\Delta^4}{4} + \frac{\Delta^5}{5} - \frac{\Delta^6}{6} + \cdots
\]
\[
\therefore\ y_0' = \frac1h\Bigl[\Delta y_0 - \frac12\Delta^2 y_0 + \frac13\Delta^3 y_0 - \frac14\Delta^4 y_0 + \frac15\Delta^5 y_0 - \frac16\Delta^6 y_0 + \cdots\Bigr]
\]
and
\[
h^2D^2 = \Bigl(\Delta - \frac{\Delta^2}{2} + \frac{\Delta^3}{3} - \frac{\Delta^4}{4} + \frac{\Delta^5}{5} - \cdots\Bigr)^2
= \Delta^2 - \Delta^3 + \Bigl(\frac23+\frac14\Bigr)\Delta^4 - \Bigl(\frac24+\frac26\Bigr)\Delta^5 + \Bigl(\frac25+\frac28+\frac19\Bigr)\Delta^6 - \cdots
\]
\[
\therefore\ y_0'' = \frac1{h^2}\Bigl[\Delta^2 y_0 - \Delta^3 y_0 + \frac{11}{12}\Delta^4 y_0 - \frac56\Delta^5 y_0 + \frac{137}{180}\Delta^6 y_0 - \cdots\Bigr]
\]

When the grid point is near the end, we proceed as follows:
\[
E = e^{hD} = (1-\nabla)^{-1} \quad\therefore\ hD = -\log(1-\nabla) = \nabla + \frac{\nabla^2}{2} + \frac{\nabla^3}{3} + \frac{\nabla^4}{4} + \frac{\nabla^5}{5} + \frac{\nabla^6}{6} + \cdots
\]
\[
\therefore\ y_n' = \frac1h\Bigl[\nabla y_n + \frac12\nabla^2 y_n + \frac13\nabla^3 y_n + \frac14\nabla^4 y_n + \frac15\nabla^5 y_n + \frac16\nabla^6 y_n + \cdots\Bigr]
\]
and
\[
h^2D^2 = \Bigl(\nabla + \frac{\nabla^2}{2} + \frac{\nabla^3}{3} + \frac{\nabla^4}{4} + \frac{\nabla^5}{5} + \cdots\Bigr)^2
= \nabla^2 + \nabla^3 + \frac{11}{12}\nabla^4 + \frac56\nabla^5 + \frac{137}{180}\nabla^6 + \cdots
\]
\[
\therefore\ y_n'' = \frac1{h^2}\Bigl[\nabla^2 y_n + \nabla^3 y_n + \frac{11}{12}\nabla^4 y_n + \frac56\nabla^5 y_n + \frac{137}{180}\nabla^6 y_n + \cdots\Bigr]
\]
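
The following Python sketch applies formulae (6.1) and (6.2) at a central grid point; the helper names are illustrative assumptions, and the indexing simply translates central differences into entries of the forward-difference table. With the data of Example 6.2 it reproduces y′(1.6) ≈ 4.9528 and y″(1.6) ≈ 4.9534.

def forward_diff_rows(ys):
    # Rows of forward differences: rows[k][i] = Delta^k y_i.
    rows = [list(ys)]
    while len(rows[-1]) > 1:
        r = rows[-1]
        rows.append([r[j + 1] - r[j] for j in range(len(r) - 1)])
    return rows

def central_derivatives(i, h, ys, terms=3):
    # y' and y'' at grid point x_i from formulae (6.1) and (6.2), using
    # delta^(2k) y_i = Delta^(2k) y_(i-k) and
    # mu*delta^(2k+1) y_i = (Delta^(2k+1) y_(i-k-1) + Delta^(2k+1) y_(i-k)) / 2.
    rows = forward_diff_rows(ys)
    c1 = [1.0, -1.0 / 6.0, 1.0 / 30.0]      # coefficients of mu*delta, mu*delta^3, mu*delta^5
    c2 = [1.0, -1.0 / 12.0, 1.0 / 90.0]     # coefficients of delta^2, delta^4, delta^6
    d1 = d2 = 0.0
    for k in range(min(terms, 3)):
        odd, even = 2 * k + 1, 2 * k + 2
        if odd < len(rows) and i - k - 1 >= 0 and i - k < len(rows[odd]):
            d1 += c1[k] * 0.5 * (rows[odd][i - k - 1] + rows[odd][i - k])
        if even < len(rows) and 0 <= i - k - 1 < len(rows[even]):
            d2 += c2[k] * rows[even][i - k - 1]
    return d1 / h, d2 / (h * h)

# Data of Example 6.2; i = 3 corresponds to x = 1.6, h = 0.2
ys = [2.7183, 3.3201, 4.0552, 4.9530, 6.0496, 7.3891, 9.0250]
print(central_derivatives(3, 0.2, ys))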
Now, we shall consider some Examples.

Example 6.1: If u0 = 3, u1 = 12, u2 = 81, u3 = 2000, u4 = 100, calculate Du0.


Solution: Difference table for given data is

x ux Dux D2ux D3ux D4ux


0 3
9
1 12 60
69 1790
2 81 1850 −7459
1919 −5669
3 2000 −3819
−1900
4 100

We have
\[
D = \frac1h\log(1+\Delta) = \frac1h\Bigl[\Delta - \frac{\Delta^2}{2} + \frac{\Delta^3}{3} - \frac{\Delta^4}{4} + \frac{\Delta^5}{5} - \cdots\Bigr]
\]
Here h = 1, so
\[
Du_0 \approx \Delta u_0 - \frac{\Delta^2 u_0}{2} + \frac{\Delta^3 u_0}{3} - \frac{\Delta^4 u_0}{4} = 9 - \frac{60}{2} + \frac{1790}{3} + \frac{7459}{4} \approx 9 - 30 + 596.67 + 1864.75 = 2440.42
\]

Example 6.2: Given the following tabulated values of a function

x 1.0 1.2 1.4 1.6 1.8 2.0 2.2


y 2.7183 3.3201 4.0552 4.9530 6.0496 7.3891 9.0250
Prepare difference table and evaluate y′ and y″ at
(i)  x = 1.2  (ii) x = 1.6  (iii)  x = 2.0  (iv) x = 2.2
Solution: Difference table for given data is
x y Δy Δ²y Δ³y Δ⁴y Δ⁵y Δ⁶y
1.0 2.7183
0.6018
1.2 3.3201 0.1333
0.7351 0.0294
1.4 4.0552 0.1627 0.0067
0.8978 0.0361 0.0013
1.6 4.9530 0.1988 0.0080 0.0001
1.0966 0.0441 0.0014
1.8 6.0496 0.2429 0.0094
1.3395 0.0535
2.0 7.3891 0.2964
1.6359
2.2 9.0250
We have h = 0.2.

For x = 1.2 (near the beginning of the table):
\[
D = \frac1h\log(1+\Delta) = \frac1h\Bigl[\Delta - \frac{\Delta^2}{2} + \frac{\Delta^3}{3} - \frac{\Delta^4}{4} + \frac{\Delta^5}{5} - \cdots\Bigr]
\]
\[
\therefore\ y'(1.2) \approx \frac1{0.2}\Bigl[0.7351 - \frac{0.1627}{2} + \frac{0.0361}{3} - \frac{0.0080}{4} + \frac{0.0014}{5}\Bigr] \approx 3.3203
\]
\[
D^2 = \frac1{h^2}\Bigl[\Delta^2 - \Delta^3 + \frac{11}{12}\Delta^4 - \frac56\Delta^5 + \cdots\Bigr]
\qquad\therefore\ y''(1.2) \approx 25\Bigl[0.1627 - 0.0361 + \frac{11}{12}(0.0080) - \frac56(0.0014)\Bigr] \approx 3.3192
\]

For x = 1.6 (a central grid point), use formulae (6.1) and (6.2):
\[
y'(1.6) \approx \frac1{0.2}\Bigl[\frac{0.8978+1.0966}{2} - \frac16\cdot\frac{0.0361+0.0441}{2} + \frac1{30}\cdot\frac{0.0013+0.0014}{2}\Bigr] \approx 4.9528
\]
\[
y''(1.6) \approx \frac1{(0.2)^2}\Bigl[0.1988 - \frac{0.0080}{12} + \frac{0.0001}{90}\Bigr] \approx 4.9534
\]

For x = 2.0 (near the end of the table):
\[
D = -\frac1h\log(1-\nabla) = \frac1h\Bigl[\nabla + \frac{\nabla^2}{2} + \frac{\nabla^3}{3} + \frac{\nabla^4}{4} + \frac{\nabla^5}{5} + \cdots\Bigr]
\]
\[
\therefore\ y'(2.0) \approx 5\Bigl[1.3395 + \frac{0.2429}{2} + \frac{0.0441}{3} + \frac{0.0080}{4} + \frac{0.0013}{5}\Bigr] \approx 7.3896
\]
\[
y''(2.0) \approx 25\Bigl[0.2429 + 0.0441 + \frac{11}{12}(0.0080) + \frac56(0.0013)\Bigr] \approx 7.3854
\]

For x = 2.2:
\[
y'(2.2) \approx 5\Bigl[1.6359 + \frac{0.2964}{2} + \frac{0.0535}{3} + \frac{0.0094}{4} + \frac{0.0014}{5} + \frac{0.0001}{6}\Bigr] \approx 9.0229
\]
\[
y''(2.2) \approx 25\Bigl[0.2964 + 0.0535 + \frac{11}{12}(0.0094) + \frac56(0.0014) + \frac{137}{180}(0.0001)\Bigr] \approx 8.9940
\]


Example 6.3: A rod is rotating in a plane about one of its ends. The following table gives the
angle q in radians through which the rod has turned for different values of time t seconds. Find
the angular velocity at t = 0.7 sec.

t (seconds) 0.0 0.2 0.4 0.6 0.8 1.0


q (radians) 0.00 0.12 0.48 1.10 2.00 3.20

Also find the angular acceleration at t = 0.7 sec.


Solution: Backward difference table for given data is

t θ ∇θ ∇²θ ∇³θ ∇⁴θ ∇⁵θ


0.0 0.00
0.12
0.2 0.12 0.24
0.36 0.02
0.4 0.48 0.26 0
0.62 0.02 0
0.6 1.10 0.28 0
0.90 0.02
0.8 2.00 0.30
1.20
1.0 3.20


We shall use the backward difference interpolation formula to evaluate (dθ/dt) at t = 0.7 sec.

t = 0.7, t0 = 0.8, h = 0.2
∴ p = (t − t0)/h = (0.7 − 0.8)/0.2 = −0.5

\[
\theta_p = \theta_0 + p\nabla\theta_0 + \frac{(p+1)p}{2!}\nabla^2\theta_0 + \frac{(p+2)(p+1)p}{3!}\nabla^3\theta_0 + \cdots
\]
\[
\theta'(t) = \theta_p'\,\frac{dp}{dt} = \frac1h\Bigl[\nabla\theta_0 + \frac{2p+1}{2}\nabla^2\theta_0 + \frac{3p^2+6p+2}{6}\nabla^3\theta_0 + \cdots\Bigr]
\]
\[
\therefore\ \theta'(0.7) \approx \frac1{0.2}\Bigl[0.90 + \frac{2(-0.5)+1}{2}(0.28) + \frac{3(0.25)+6(-0.5)+2}{6}(0.02)\Bigr] = 5\Bigl[0.90 - \frac{(0.25)(0.02)}{6}\Bigr] \approx 4.5\ \text{radians/sec}
\]
\[
\theta''(t) = \theta_p''\Bigl(\frac{dp}{dt}\Bigr)^2 \approx \frac1{h^2}\bigl[\nabla^2\theta_0 + (p+1)\nabla^3\theta_0\bigr]
\]
\[
\therefore\ \theta''(0.7) \approx \frac1{(0.2)^2}\bigl[0.28 + (0.5)(0.02)\bigr] = \frac{100}{4}(0.29) = 7.25\ \text{radians/sec}^2
\]

∴ angular velocity at t = 0.7 sec ≈ 4.5 radians/sec
   angular acceleration at t = 0.7 sec ≈ 7.25 radians/sec²

Example 6.4: Compute f ″′(15) given

x 2 4 9 13 16 21 29
f (x) 57 1345 66340 402052 1118209 4287844 21242820

Solution: Divided difference table for given data is

x f (x) Df (x)
| D| 2f (x) D| 3f (x) D| 4f (x) D| 5f (x) D| 6f (x)
2 57
644
4 1345 1765
12999 556
9 66340 7881 45
83928 1186 1
13 402052 22113 64 0
238719 2274 1
16 1118209 49401 89
633927 4054
21 4287844 114265
2119372
29 21242820

By Newton's divided difference interpolation formula

f(x) ≈ f(2) + (x − 2) f[2, 4] + (x − 2)(x − 4) f[2, 4, 9] + (x − 2)(x − 4)(x − 9) f[2, 4, 9, 13]
    + (x − 2)(x − 4)(x − 9)(x − 13) f[2, 4, 9, 13, 16] + (x − 2)(x − 4)(x − 9)(x − 13)(x − 16) f[2, 4, 9, 13, 16, 21]
    + (x − 2)(x − 4)(x − 9)(x − 13)(x − 16)(x − 21) f[2, 4, 9, 13, 16, 21, 29]

The last divided difference is zero, and only the products of degree three or more contribute to f ″′. Hence

\[
f'''(x) \approx \frac{d^3}{dx^3}\Bigl[556\,(x-2)(x-4)(x-9) + 45\,(x-2)(x-4)(x-9)(x-13) + (x-2)(x-4)(x-9)(x-13)(x-16)\Bigr]
= 556(6) + 45(24x - 168) + \bigl(60x^2 - 1056x + 4230\bigr)
\]
\[
\therefore\ f'''(15) \approx 3336 + 45(360 - 168) + (13500 - 15840 + 4230) = 3336 + 45(192) + 1890 = 13866
\]

Example 6.5: From the following data, find the maximum or minimum value of y

x 0.60 0.65 0.70 0.75


y 0.6221 0.6155 0.6138 0.6170
Solution: Forward difference table for given data is

x y Δy Δ²y Δ³y

0.60 0.6221
          −0.0066
0.65 0.6155          0.0049
          −0.0017          0
0.70 0.6138          0.0049
          0.0032
0.75 0.6170

x0 = 0.60, h = 0.05, p = (x − x0)/h = (x − 0.60)/0.05 = 20x − 12

By Newton's forward difference interpolation formula
\[
y_p = y_0 + p\Delta y_0 + \frac{p(p-1)}{2}\Delta^2 y_0 + \cdots \approx 0.6221 - 0.0066\,p + \frac{0.0049}{2}\bigl(p^2-p\bigr)
\]
\[
y_p' = \frac{dy}{dp} = -0.0066 + \frac{0.0049}{2}(2p-1)
\]
For a maximum or minimum,
\[
\frac{dy}{dx} = \frac{dy}{dp}\frac{dp}{dx} = 20\,\frac{dy}{dp} = 0
\quad\therefore\ -0.0066 + \frac{0.0049}{2}(2p-1) = 0
\quad\therefore\ 2p-1 = \frac{2(0.0066)}{0.0049}
\quad\therefore\ p = \frac12\Bigl[1+\frac{2(0.0066)}{0.0049}\Bigr] \approx 1.8469 \approx 1.85
\]
\[
\frac{d^2y}{dx^2} = 400\,\frac{d^2y}{dp^2} = 400\,(0.0049) > 0
\]
∴ y is minimum when p ≈ 1.85

Minimum y ≈ 0.6221 − 0.0066(1.85) + (0.0049/2)(1.85)(0.85)
∴ minimum y ≈ 0.6137

Exercise 6.1

1. Find the first and second derivatives of the function tabulated below at the point x = 3.0

x 3.0 3.2 3.4 3.6 3.8 4.0


f (x) -14.000 -10.032 -5.296 0.256 6.672 14.000

2. Find the first and second derivatives of the function tabulated below at the point x = 1.05

x 1.00 1.05 1.10 1.15 1.20 1.25 1.30


f (x) 1.00000 1.02470 1.04881 1.07238 1.09544 1.11803 1.14017

3. Find f ′(1) using the following data:

x 1.0 1.1 1.2 1.3 1.4


f (x) .2500 .2268 .2066 .1890 .1736

4. Find the first, second and third derivatives of the function tabulated below at the point x = 1.5

x 1.5 2.0 2.5 3.0 3.5 4.0


f (x) 3.375 7.000 13.625 24.000 38.875 59.000

5. Find the first and second derivatives of the function tabulated below at the point x = 1.1

x 1 1.2 1.4 1.6 1.8 2.0


f (x) 0.0000 0.1280 0.5440 1.2960 2.4320 4.0000

6. Find the first derivative of the function tabulated below at the point x = 0.4

x 0.1 0.2 0.3 0.4


f (x) 1.10517 1.22140 1.34986 1.49182

7. Find the first three derivatives of the function tabulated below at the point x = 2.5

x 1.5 1.9 2.5 3.2 4.3 5.9


f (x) 3.375 6.059 13.625 29.368 73.907 196.579

8. Given the following values of f (x) for certain values of x. Find f ′(4).

x 1 2 4 8 10
f (x) 0 1 5 21 27

9. Find f ′ (0.04) from the following table

x .01 .02 .03 .04 .05 .06


f (x) .1023 .1047 .1071 .1096 .1122 .1148
10. Find the first derivative of the function tabulated below at the point x = 7.50

x 7.47 7.48 7.49 7.50 7.51 7.52 7.53


f (x) 0.193 0.195 0.198 0.201 0.203 0.206 0.208

 dy 
11. For the following data, approximate   at x = 2
 dx 
x 0 2 3
y 2 -2 -1
dy
12. Find at x = 5 from the following table
dx
x 0 2 3 4 7 9
y 4 26 58 112 466 922
13. Given the following pair of values of x and f (x)

x 60 75 90 105 120
f (x) 28.2 38.2 43.2 40.9 37.7
Find f ′(93).

14. From the following table find x correct to two decimal places, for which y is maximum or
minimum and find this value of y.

x 1.2 1.3 1.4 1.5 1.6


y 0.9320 0.9636 0.9855 0.9975 0.9996
15. A slide in a machine moves along a straight rod. Its distance x meter along the rod is given
below for various values of time t seconds. Find the velocity and acceleration of the slide at
t = 0.3 sec.

t 0 0.1 0.2 0.3 0.4 0.5 0.6


x 30.13 31.62 32.87 33.64 33.95 33.81 33.24
16. A rod is rotating in a plane. The table below gives various values of q in radians through
which the rod has turned for different values of time t seconds. Find approximately the
­angular velocity and angular acceleration of the rod at t = 0.6 sec.

t 0 0.2 0.4 0.6 0.8 1.0


q 0 0.12 0.49 1.12 2.02 3.20

Answers 6.1

1. f ′ ( 3) = f ′′ ( 3) = 18
2. f ′ (1.05 ) = 0.48763, f ′′ (1.05 ) = −0.21433
3. f ′ (1) = −0.2483
4. f ′ (1.5 ) = 4.750, f ′′ (1.5 ) = 9.000, f ′′′ (1.5 ) = 6.000
5. f ′ (1.1) = 0.6300, f ′′ (1.1) = 6.6000
6. f ′ ( 0.4 ) = 1.49133
7. f ′ ( 2.5 ) = 16.750, f ′′ ( 2.5 ) = 15.000, f ′′′ ( 2.5 ) = 6.000
8. f ′ ( 4 ) = 2.8306
9. f ′ ( .04 ) = 0.2558
10. f ′ ( 7.50 ) = 0.235
 dy 
11.   = 0
 dx  x =2
 dy 
12.   = 98
 dx  x =5

13. f ′ ( 93) = −0.03627



14. The maximum value occurs at x = 1.58 and the maximum value is 1.
15. Velocity = 5.33 m/sec., acceleration = −45.59 m/sec2.
16. Angular velocity = 3.82 radians/sec.
Angular acceleration = 6.75 radians/sec2.

6.3 Numerical Quadrature
Numerical quadrature is the process of computing the approximate value of a definite integral
using a set of numerical values of the integrand. We shall assume that the integrand is regular
and the interval of integration is finite. Numerical quadrature is performed by approximating the
integrand by an interpolating polynomial and then integrating it within the given limits.

6.3.1  General Quadrature Formula


Here the integrand is approximated by Newton’s forward interpolation formula up to nth order
differences and it is integrated within the limits x0 to x0 + nh.
Newton’s forward interpolation formula is
\[
f(x) \approx f_0 + p\Delta f_0 + \binom{p}{2}\Delta^2 f_0 + \cdots + \binom{p}{n}\Delta^n f_0
\]
where x = x0 + ph, so that dx = h dp.
\[
\therefore\ \int_{x_0}^{x_0+nh} f(x)\,dx \approx h\int_0^n\Bigl[f_0 + p\Delta f_0 + \frac{p^2-p}{2}\Delta^2 f_0 + \frac{p^3-3p^2+2p}{3!}\Delta^3 f_0 + \cdots\Bigr]dp
\]
\[
= h\Bigl[p\,f_0 + \frac{p^2}{2}\Delta f_0 + \frac12\Bigl(\frac{p^3}{3}-\frac{p^2}{2}\Bigr)\Delta^2 f_0 + \frac1{3!}\Bigl(\frac{p^4}{4}-p^3+p^2\Bigr)\Delta^3 f_0 + \cdots\Bigr]_0^n
\]
\[
= h\Bigl[n\,f_0 + \frac{n^2}{2}\Delta f_0 + \frac12\Bigl(\frac{n^3}{3}-\frac{n^2}{2}\Bigr)\Delta^2 f_0 + \frac1{3!}\Bigl(\frac{n^4}{4}-n^3+n^2\Bigr)\Delta^3 f_0 + \cdots\Bigr]
\]
From this general formula, we obtain different quadrature formulae by taking n = 1, 2, 3….

6.3.2 Trapezoidal Rule
Here the integrand is approximated by a linear polynomial and integrated over the interval x0 to
x0 + h.
∴ f(x) ≈ f0 + pΔf0, where x = x0 + ph, so that dx = h dp.
\[
\therefore\ \int_{x_0}^{x_0+h} f(x)\,dx \approx h\int_0^1\bigl(f_0 + p\Delta f_0\bigr)dp = h\Bigl[p f_0 + \frac{p^2}{2}\Delta f_0\Bigr]_0^1 = h\Bigl[f_0 + \frac12(f_1-f_0)\Bigr] = \frac{h}{2}\,[f_0+f_1]
\]
If we are to find \(\int_a^b f(x)\,dx\) by the trapezoidal rule, we divide the interval (a, b) into n equal parts at a = x0, x0 + h, x0 + 2h, …, x0 + nh = b, so that h = (b − a)/n, apply the trapezoidal rule in the intervals (x0, x0 + h), (x0 + h, x0 + 2h), …, (x0 + (n − 1)h, x0 + nh) and add. We shall have
\[
\int_a^b f(x)\,dx \approx \frac{h}{2}\bigl[(f_0+f_1)+(f_1+f_2)+\cdots+(f_{n-1}+f_n)\bigr] = \frac{h}{2}\bigl[f_0 + 2(f_1+f_2+\cdots+f_{n-1}) + f_n\bigr]
\]
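
A minimal Python sketch of the composite trapezoidal rule (the function name trapezoidal is an illustrative assumption, not part of the text):

def trapezoidal(f, a, b, n):
    # Composite trapezoidal rule with n equal subintervals:
    # (h/2) [f0 + 2(f1 + ... + f_(n-1)) + fn]
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

# Check against Example 6.7(i): integral of 1/(1+x^2) on [0, 1] with h = 1/4
print(trapezoidal(lambda x: 1.0 / (1.0 + x * x), 0.0, 1.0, 4))   # about 0.7828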

6.3.3 Simpson’s One-third Rule


Here the integrand is approximated by a quadratic polynomial and integrated over the interval
x0 to x0 + 2h.
\[
\therefore\ f(x) \approx f_0 + p\Delta f_0 + \frac{p(p-1)}{2}\Delta^2 f_0,
\]
where x = x0 + ph, so that dx = h dp.
\[
\therefore\ \int_{x_0}^{x_0+2h} f(x)\,dx \approx h\int_0^2\Bigl(f_0 + p\Delta f_0 + \frac{p^2-p}{2}\Delta^2 f_0\Bigr)dp
= h\Bigl[p f_0 + \frac{p^2}{2}\Delta f_0 + \frac12\Bigl(\frac{p^3}{3}-\frac{p^2}{2}\Bigr)\Delta^2 f_0\Bigr]_0^2
= h\Bigl[2f_0 + 2\Delta f_0 + \frac13\Delta^2 f_0\Bigr]
\]
\[
= h\Bigl[2f_0 + 2(f_1-f_0) + \frac13(f_2-2f_1+f_0)\Bigr] = \frac{h}{3}\,(f_0+4f_1+f_2)
\]
If we are to find \(\int_a^b f(x)\,dx\) by Simpson's one-third rule, we divide the interval (a, b) into n equal parts at a = x0, x0 + h, x0 + 2h, …, x0 + nh = b, where n is even and h = (b − a)/n, apply Simpson's one-third rule in the intervals (x0, x0 + 2h), (x0 + 2h, x0 + 4h), …, (x0 + (n − 2)h, x0 + nh) and add. We shall have
\[
\int_a^b f(x)\,dx \approx \frac{h}{3}\bigl[(f_0+4f_1+f_2)+(f_2+4f_3+f_4)+\cdots+(f_{n-2}+4f_{n-1}+f_n)\bigr]
= \frac{h}{3}\bigl[f_0+f_n+4(f_1+f_3+f_5+\cdots+f_{n-1})+2(f_2+f_4+\cdots+f_{n-2})\bigr]
\]
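
A minimal Python sketch of the composite Simpson's one-third rule (the function name is an illustrative assumption):

def simpson_one_third(f, a, b, n):
    # Composite Simpson's one-third rule; n must be even.
    if n % 2 != 0:
        raise ValueError("n must be even for Simpson's one-third rule")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return h * total / 3.0

# Check against Example 6.6: integral of 1/(1+x) on [0, 1] with seven ordinates (n = 6)
print(simpson_one_third(lambda x: 1.0 / (1.0 + x), 0.0, 1.0, 6))   # about 0.69317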

6.3.4 Simpson’s Three-eight Rule


Here the integrand is approximated by a cubic polynomial and integrated over the interval x0 to
x0 + 3h.
\[
\therefore\ f(x) \approx f_0 + p\Delta f_0 + \frac{p^2-p}{2}\Delta^2 f_0 + \frac{p^3-3p^2+2p}{6}\Delta^3 f_0,
\]
where x = x0 + ph, so that dx = h dp.
\[
\therefore\ \int_{x_0}^{x_0+3h} f(x)\,dx \approx h\Bigl[p f_0 + \frac{p^2}{2}\Delta f_0 + \frac12\Bigl(\frac{p^3}{3}-\frac{p^2}{2}\Bigr)\Delta^2 f_0 + \frac16\Bigl(\frac{p^4}{4}-p^3+p^2\Bigr)\Delta^3 f_0\Bigr]_0^3
= h\Bigl[3f_0 + \frac92\Delta f_0 + \frac94\Delta^2 f_0 + \frac38\Delta^3 f_0\Bigr]
\]
\[
= \frac{3h}{8}\bigl[8f_0 + 12(f_1-f_0) + 6(f_2-2f_1+f_0) + (f_3-3f_2+3f_1-f_0)\bigr] = \frac{3h}{8}\,[f_0+3f_1+3f_2+f_3]
\]
If we are to find \(\int_a^b f(x)\,dx\) by Simpson's three-eight rule, we divide (a, b) into n equal parts at a = x0, x0 + h, …, x0 + nh = b, where n is a multiple of 3 and h = (b − a)/n, apply Simpson's three-eight rule in the intervals (x0, x0 + 3h), (x0 + 3h, x0 + 6h), …, (x0 + (n − 3)h, x0 + nh) and add. We shall have
\[
\int_a^b f(x)\,dx \approx \frac{3h}{8}\bigl[(f_0+3f_1+3f_2+f_3)+(f_3+3f_4+3f_5+f_6)+\cdots+(f_{n-3}+3f_{n-2}+3f_{n-1}+f_n)\bigr]
= \frac{3h}{8}\bigl[f_0+f_n+3(f_1+f_2+f_4+f_5+\cdots+f_{n-2}+f_{n-1})+2(f_3+f_6+\cdots+f_{n-3})\bigr]
\]
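
A minimal Python sketch of the composite Simpson's three-eight rule (again, the function name is an illustrative assumption):

def simpson_three_eighth(f, a, b, n):
    # Composite Simpson's three-eighth rule; n must be a multiple of 3.
    if n % 3 != 0:
        raise ValueError("n must be a multiple of 3")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (2 if i % 3 == 0 else 3) * f(a + i * h)
    return 3.0 * h * total / 8.0

# Check against Example 6.7(iii): integral of 1/(1+x^2) on [0, 1] with h = 1/6
print(simpson_three_eighth(lambda x: 1.0 / (1.0 + x * x), 0.0, 1.0, 6))   # about 0.7854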

6.3.5 Weddle’s Rule
Here the integrand is approximated by a 6th degree polynomial and integrated over the interval
x0 to x0 + 6h.
\[
\therefore\ f(x) \approx f_0 + p\Delta f_0 + \frac{p^2-p}{2}\Delta^2 f_0 + \frac{p^3-3p^2+2p}{6}\Delta^3 f_0 + \frac1{24}\bigl(p^4-6p^3+11p^2-6p\bigr)\Delta^4 f_0 + \frac1{120}\bigl(p^5-10p^4+35p^3-50p^2+24p\bigr)\Delta^5 f_0 + \frac1{720}\bigl(p^6-15p^5+85p^4-225p^3+274p^2-120p\bigr)\Delta^6 f_0,
\]
where x = x0 + ph, so that dx = h dp. Integrating term by term from p = 0 to p = 6,
\[
\int_{x_0}^{x_0+6h} f(x)\,dx \approx h\Bigl[6f_0 + 18\Delta f_0 + 27\Delta^2 f_0 + 24\Delta^3 f_0 + \frac{123}{10}\Delta^4 f_0 + \frac{33}{10}\Delta^5 f_0 + \frac{41}{140}\Delta^6 f_0\Bigr]
\]
If we replace \(\frac{41}{140}\Delta^6 f_0\) by \(\frac{3}{10}\Delta^6 f_0\), the error is negligible. Expressing the differences in terms of the ordinates,
\[
\int_{x_0}^{x_0+6h} f(x)\,dx \approx h\Bigl[6f_0 + 18(f_1-f_0) + 27(f_2-2f_1+f_0) + 24(f_3-3f_2+3f_1-f_0) + \frac{123}{10}(f_4-4f_3+6f_2-4f_1+f_0) + \frac{33}{10}(f_5-5f_4+10f_3-10f_2+5f_1-f_0) + \frac3{10}(f_6-6f_5+15f_4-20f_3+15f_2-6f_1+f_0)\Bigr]
\]
\[
= \frac{3h}{10}\,[f_0 + 5f_1 + f_2 + 6f_3 + f_4 + 5f_5 + f_6]
\]
If we are to find \(\int_a^b f(x)\,dx\) by Weddle's rule, we divide the interval (a, b) into n equal parts at a = x0, x0 + h, x0 + 2h, …, x0 + nh = b, where n is a multiple of 6 and h = (b − a)/n, apply Weddle's rule in the intervals (x0, x0 + 6h), (x0 + 6h, x0 + 12h), …, (x0 + (n − 6)h, x0 + nh) and add. We shall have
\[
\int_a^b f(x)\,dx \approx \frac{3h}{10}\bigl[f_0+f_n+5(f_1+f_7+f_{13}+\cdots+f_{n-5})+5(f_5+f_{11}+f_{17}+\cdots+f_{n-1})+6(f_3+f_9+f_{15}+\cdots+f_{n-3})+(f_2+f_4+f_8+f_{10}+\cdots+f_{n-4}+f_{n-2})+2(f_6+f_{12}+f_{18}+\cdots+f_{n-6})\bigr]
\]
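
A minimal Python sketch of the composite Weddle's rule (the function name and the weight-pattern bookkeeping are illustrative assumptions):

def weddle(f, a, b, n):
    # Composite Weddle's rule; n must be a multiple of 6.
    if n % 6 != 0:
        raise ValueError("n must be a multiple of 6")
    h = (b - a) / n
    weights = [1, 5, 1, 6, 1, 5]           # pattern over each block of six steps
    total = 0.0
    for i in range(n + 1):
        w = weights[i % 6]
        if i % 6 == 0 and 0 < i < n:
            w = 2                          # interior block boundaries get 1 + 1
        total += w * f(a + i * h)
    return 3.0 * h * total / 10.0

# Check against Example 6.7(iv): integral of 1/(1+x^2) on [0, 1] with h = 1/6
print(weddle(lambda x: 1.0 / (1.0 + x * x), 0.0, 1.0, 6))   # about 0.7854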


6.3.6 Cote’s Formulas
Here the integrand is approximated by Lagrange's interpolation polynomial passing through the points (x0, y0), (x1, y1), (x2, y2), …, (xn, yn), where xk = x0 + kh; k = 0, 1, 2, …, n. Thus, if f(x) is the integrand, then
\[
f(x) \approx P(x) = \sum_{k=0}^n l_k(x)\,y_k, \qquad
l_k(x) = \frac{(x-x_0)(x-x_1)\cdots(x-x_{k-1})(x-x_{k+1})\cdots(x-x_n)}{(x_k-x_0)(x_k-x_1)\cdots(x_k-x_{k-1})(x_k-x_{k+1})\cdots(x_k-x_n)}
\]
If we put x = x0 + ph, and hence dx = h dp, we have
\[
l_k = \frac{p(p-1)\cdots(p-k+1)(p-k-1)\cdots(p-n)}{k(k-1)\cdots(1)(-1)\cdots(k-n)}
\]
and we get
\[
\int_{x_0}^{x_n} f(x)\,dx \approx \int_{x_0}^{x_n} P(x)\,dx = h\int_0^n\sum_{k=0}^n l_k\,y_k\,dp = nh\sum_{k=0}^n y_k\cdot\frac1n\int_0^n l_k\,dp
\]
Put \(\frac1n\int_0^n l_k\,dp = {}^nC_k\). We can write the integration formula
\[
\int_{x_0}^{x_n} P(x)\,dx = nh\sum_{k=0}^n {}^nC_k\,y_k = (x_n-x_0)\sum_{k=0}^n {}^nC_k\,y_k
\]
The numbers \({}^nC_k\); 0 ≤ k ≤ n, are called Cote's numbers. Since \(l_k(n-p) = l_{n-k}(p)\) and we have already proved in Remark (5.12)(ii) that \(\sum l_k(x) = 1\), Cote's numbers follow the properties
\[
{}^nC_k = {}^nC_{n-k} \qquad\text{and}\qquad \sum_{k=0}^n {}^nC_k = 1
\]
In Cote's formula, taking n = 1, 2, 3 we shall get the trapezoidal rule, Simpson's one-third rule and Simpson's three-eight rule.

For n = 1:
\[
{}^1C_0 = {}^1C_1 = \int_0^1 l_1\,dp = \int_0^1 p\,dp = \frac12
\]
∴ Cote's formula of order one is
\[
\int_{x_0}^{x_0+h} f(x)\,dx \approx h\sum_{k=0}^1 {}^1C_k\,y_k = \frac{h}{2}(y_0+y_1),
\]
which is the trapezoidal rule.

For n = 2:
\[
{}^2C_0 = {}^2C_2 = \frac12\int_0^2 l_2\,dp = \frac12\int_0^2\frac{p(p-1)}{2}\,dp = \frac14\Bigl[\frac{p^3}{3}-\frac{p^2}{2}\Bigr]_0^2 = \frac14\Bigl(\frac83-2\Bigr) = \frac16
\]
\[
\because\ \sum_{k=0}^2 {}^2C_k = 1 \quad\therefore\ {}^2C_1 = 1-\bigl({}^2C_0+{}^2C_2\bigr) = 1-\frac26 = \frac23
\]
∴ Cote's formula of order 2 is
\[
\int_{x_0}^{x_0+2h} f(x)\,dx \approx 2h\Bigl[\frac16 y_0 + \frac23 y_1 + \frac16 y_2\Bigr] = \frac{h}{3}\,[y_0+4y_1+y_2],
\]
which is Simpson's one-third rule.

For n = 3:
\[
{}^3C_0 = {}^3C_3 = \frac13\int_0^3 l_3\,dp = \frac13\int_0^3\frac{p(p-1)(p-2)}{(3)(2)(1)}\,dp = \frac1{18}\int_0^3\bigl(p^3-3p^2+2p\bigr)dp = \frac1{18}\Bigl[\frac{p^4}{4}-p^3+p^2\Bigr]_0^3 = \frac1{18}\Bigl(\frac{81}{4}-27+9\Bigr) = \frac18
\]
Now \({}^3C_1 = {}^3C_2\) and \(\sum_{k=0}^3 {}^3C_k = 1\), so
\[
{}^3C_1 = {}^3C_2 = \frac12\bigl(1-{}^3C_0-{}^3C_3\bigr) = \frac12\Bigl(1-\frac28\Bigr) = \frac38
\]
∴ Cote's formula of order 3 is
\[
\int_{x_0}^{x_0+3h} f(x)\,dx \approx 3h\Bigl[\frac18 y_0+\frac38 y_1+\frac38 y_2+\frac18 y_3\Bigr] = \frac{3h}{8}\,[y_0+3y_1+3y_2+y_3],
\]
which is Simpson's three-eight rule.
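
The Cote's numbers of any order can also be generated mechanically. The Python sketch below is only illustrative (the function name cotes_numbers is an assumption); it uses exact rational arithmetic from the standard fractions module to integrate each Lagrange basis polynomial and recovers the three special cases just derived.

from fractions import Fraction

def cotes_numbers(n):
    # nC_k = (1/n) * integral_0^n l_k(p) dp, computed exactly by expanding
    # the Lagrange basis polynomial l_k(p) as a coefficient list.
    numbers = []
    for k in range(n + 1):
        coeffs = [Fraction(1)]               # polynomial prod_(j != k) (p - j), low degree first
        denom = Fraction(1)                  # prod_(j != k) (k - j)
        for j in range(n + 1):
            if j == k:
                continue
            denom *= (k - j)
            new = [Fraction(0)] * (len(coeffs) + 1)
            for i, c in enumerate(coeffs):
                new[i + 1] += c              # multiply by p
                new[i] -= c * j              # ... and by (-j)
            coeffs = new
        integral = sum(c * Fraction(n) ** (i + 1) / (i + 1) for i, c in enumerate(coeffs))
        numbers.append(integral / denom / n)
    return numbers

for n in (1, 2, 3):
    print(n, [str(c) for c in cotes_numbers(n)])
# 1 ['1/2', '1/2']                -> trapezoidal rule
# 2 ['1/6', '2/3', '1/6']         -> Simpson's one-third rule
# 3 ['1/8', '3/8', '3/8', '1/8']  -> Simpson's three-eight rule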

6.3.7 Error Term in Quadrature Formulae


We shall be using Taylor’s series method to find error in quadrature formulae.

Error Term in Trapezoidal Rule


In the trapezoidal rule, we approximate \(\int_{x_0}^{x_0+h} f(x)\,dx\) by \(\frac{h}{2}\bigl[f(x_0)+f(x_0+h)\bigr]\).
\[
\therefore\ \text{error} = \int_{x_0}^{x_0+h} f(x)\,dx - \frac{h}{2}\bigl[f(x_0)+f(x_0+h)\bigr]
\]
Let f(x) = F′(x). Then
\[
\text{error} = F(x_0+h) - F(x_0) - \frac{h}{2}\bigl[f(x_0)+f(x_0+h)\bigr]
\]
\[
= \Bigl[F(x_0) + hF'(x_0) + \frac{h^2}{2!}F''(x_0) + \frac{h^3}{3!}F'''(x_0) + \cdots\Bigr] - F(x_0) - \frac{h}{2}\Bigl[f(x_0) + f(x_0) + hf'(x_0) + \frac{h^2}{2!}f''(x_0) + \frac{h^3}{3!}f'''(x_0) + \cdots\Bigr]
\]
\[
= hf(x_0) + \frac{h^2}{2}f'(x_0) + \frac{h^3}{6}f''(x_0) + \cdots - hf(x_0) - \frac{h^2}{2}f'(x_0) - \frac{h^3}{4}f''(x_0) - \frac{h^4}{12}f'''(x_0) - \cdots \qquad(\because\ F'(x) = f(x))
\]
\[
= -\frac{h^3}{12}f''(x_0) + \cdots = -\frac{h^3}{12}f''(x_0+\theta h);\quad 0<\theta<1
\]
If we are to find \(\int_a^b f(x)\,dx\) and (a, b) is divided into n subintervals at a = x0, x0 + h, …, x0 + nh = b, then
\[
\text{Error} = -\frac{h^3}{12}\bigl[f''(x_0+\theta_1 h) + f''(x_1+\theta_2 h) + \cdots + f''(x_{n-1}+\theta_n h)\bigr]
\]
where xi = x0 + ih and 0 < θ1, θ2, …, θn < 1.
If \(M = \max_{x_0<x<x_n}\lvert f''(x)\rvert\), then
\[
\lvert\text{max error}\rvert \le \frac{nh^3}{12}M = \frac{(b-a)h^2}{12}M
\]
This shows that the error is of order two.
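
This order can be observed numerically. The sketch below is illustrative only; it restates the composite trapezoidal rule so as to be self-contained, halves h repeatedly for the integral of 1/(1+x) on [0, 1], and prints errors that shrink by roughly a factor of 4 at each halving.

import math

def trapezoidal(f, a, b, n):
    # Composite trapezoidal rule with n equal subintervals.
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

exact = math.log(2.0)                       # integral of 1/(1+x) on [0, 1]
for n in (4, 8, 16, 32):
    err = abs(trapezoidal(lambda x: 1.0 / (1.0 + x), 0.0, 1.0, n) - exact)
    print(n, err)                           # error drops by about 4x as n doubles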

Error Term in Simpson’s One-third Rule


In Simpson's one-third rule, we approximate \(\int_{x_0}^{x_0+2h} f(x)\,dx\) by \(\frac{h}{3}\bigl[f(x_0)+4f(x_0+h)+f(x_0+2h)\bigr]\).
Let f(x) = F′(x). Then
\[
\text{error} = F(x_0+2h) - F(x_0) - \frac{h}{3}\bigl[f(x_0)+4f(x_0+h)+f(x_0+2h)\bigr]
\]
\[
= \Bigl[2hf(x_0) + 2h^2f'(x_0) + \frac43 h^3 f''(x_0) + \frac23 h^4 f'''(x_0) + \frac4{15}h^5 f^{iv}(x_0) + \cdots\Bigr]
- \frac{h}{3}\Bigl[6f(x_0) + 6hf'(x_0) + 4h^2f''(x_0) + 2h^3f'''(x_0) + \frac56 h^4 f^{iv}(x_0) + \cdots\Bigr] \qquad(\because\ F'(x)=f(x))
\]
\[
= -\frac{h^5}{90}f^{iv}(x_0) + \cdots = -\frac{h^5}{90}f^{iv}(x_0+\theta_1 h);\quad 0<\theta_1<2
\]
If we are to find \(\int_a^b f(x)\,dx\) and (a, b) is divided into n subintervals (n even) at a = x0, x0 + h, …, x0 + nh = b, then
\[
\text{Error} = -\frac{h^5}{90}\bigl[f^{iv}(x_0+\theta_1 h)+f^{iv}(x_2+\theta_2 h)+f^{iv}(x_4+\theta_3 h)+\cdots+f^{iv}(x_{n-2}+\theta_{n/2} h)\bigr]
\]
where xi = x0 + ih and 0 < θ1, θ2, …, θ_{n/2} < 2.
If \(M = \max_{x_0<x<x_n}\lvert f^{iv}(x)\rvert\), then
\[
\lvert\text{max error}\rvert \le \frac{nh^5}{180}M = \frac{(b-a)h^4}{180}M
\]
Hence, error in Simpson’s one-third rule is of order 4.

Error Term in Simpson’s Three-Eight Rule


In Simpson’s there-eight rule, we approximate
3h
 f ( x0 ) + 3 f ( x0 + h ) + 3 f ( x0 + 2h ) + f ( x0 + 3h ) 
x0 + 3 h
∫ f ( x ) dx by
x0 8  
x0 + 3 h 3h
\ error  =∫ f ( x ) dx −  f ( x0 ) + 3 f ( x0 + h ) + 3 f ( x0 + 2h ) + f ( x0 + 3h ) 
x0 8 
Let f (x) = F′(x)
3h
\ error = F ( x0 + 3h ) − F ( x0 ) −  f ( x0 ) + 3 f ( x0 + h ) + 3 f ( x0 + 2h ) + f ( x0 + 3h ) 
8  
748 | Chapter 6

 9h 2 27h3
= 3hF ′ ( x0 ) + F ′′ ( x0 ) + F ′′′ ( x0 )
 2! 3!

81h4 iv 243h5 v 729h6 vi 


+ F ( x0 ) + F ( x0 ) + F ( x0 ) +  
4! 5! 6! 
3h   h2 h3 h4 iv h5 v 
−  f ( x0 ) + 3  f ( x0 ) + h f ′ ( x0 ) + f ′′ ( x0 ) + f ′′′ ( x0 ) + f ( x0 ) + f ( x0 ) + 
8   2! 3! 4! 5! 
 4 2 4 
+ 3  f ( x0 ) + 2h f ′ ( x0 ) + 2h2 f ′′ ( x0 ) + h3 f ′′′ ( x0 ) + h4 f iv ( x0 ) + h5 f v ( x0 ) + 
 3 3 15 
 9h 2 9 h3 27 81 
+  f ( x0 ) + 3h f ′ ( x0 ) + f ′′ ( x0 ) + f ′′′ ( x0 ) + h4 f iv ( x0 ) + h5 f v ( x0 ) + 
 2 2 8 40 
 9 9 27 81h5 iv 81h6 v 
= 3h f ( x0 ) + h2 f ′ ( x0 ) + h3 f ′′ ( x0 ) + h4 f ′′′ ( x0 ) + f ( x0 ) + f ( x0 ) + 
 2 2 8 40 80 
3h   h2 h3 h4 iv h5 v 
−  f ( x0 ) + 3  f ( x0 ) + h f ′ ( x0 ) + f ′′ ( x0 ) + f ′′′ ( x0 ) + f ( x0 ) + f ( x0 ) + 
8   2 6 24 120 
 4 h3 2h4 iv 4 
+3  f ( x0 ) + 2h f ′ ( x0 ) + 2h2 f ′′ ( x0 ) + f ′′′ ( x0 ) + f ( x0 ) + h5 f v ( x0 ) + 
 3 3 15 
 9 9 h3 27 81 
+  f ( x0 ) + 3h f ′ ( x0 ) + h2 f ′′ ( x0 ) + f ′′′ ( x0 ) + h4 f iv ( x0 ) + h5 f v ( x0 ) + 
 2 2 8 40 
3 5 iv
=−h f ( x0 ) + 
80 
3 5 iv
= − h f ( x0 + θ1h ) ; 0 < θ1 < 3
80 

If we are to find ∫ f ( x ) dx and (a, b) is divided into n subintervals where n is multiple of 3 at


b

a
a = x0 , x0 + h,… , x0 + nh = b then
3 5 iv
Error = − h  f ( x0 + θ1h ) + f iv ( x3 + θ 2 h ) + f iv ( x6 + θ3 h ) +  + f iv ( xn −3 + θ n / 3 h ) 
80 
where xi = x0 + ih and 0 < θ1 , θ 2 ,… , θ n / 3 < 3

If M = max. f iv ( x )
x0 < x < xn

nh5 (b − a) 4
then | max error | ≤ M= h M
80 80 
Hence, error in Simpson’s three-eight rule is of order 4.

We observe that the error in Simpson's one-third rule is smaller than the error in Simpson's three-eight rule, and it also requires fewer calculations. Hence Simpson's one-third rule is preferred to Simpson's three-eight rule; this is why Simpson's one-third rule is sometimes simply called Simpson's rule.
Example 6.6: Evaluate \(\int_0^1\frac{dx}{1+x}\) using Simpson's one-third rule by taking seven ordinates and compare it with the actual value.

Solution: h = 1/6

x:            0    1/6   1/3   1/2   2/3   5/6   1
y = 1/(1+x):  1    6/7   3/4   2/3   3/5   6/11  1/2

By Simpson's one-third rule
\[
\int_0^1\frac{dx}{1+x} \approx \frac{h}{3}\bigl[y_0+y_6+4(y_1+y_3+y_5)+2(y_2+y_4)\bigr]
= \frac1{18}\Bigl[1+\frac12+4\Bigl(\frac67+\frac23+\frac6{11}\Bigr)+2\Bigl(\frac34+\frac35\Bigr)\Bigr]
= \frac1{18}\Bigl[1+0.5+4\Bigl(\frac{198+154+126}{231}\Bigr)+2\Bigl(\frac{27}{20}\Bigr)\Bigr]
= \frac1{18}\bigl[1+0.5+8.2770563+2.7\bigr] \approx 0.693169793
\]
The exact value is \(\int_0^1\frac{dx}{1+x} = \bigl[\log(1+x)\bigr]_0^1 = \log 2 \approx 0.693147181\).
∴ up to three decimals the values are the same.


Example 6.7: Evaluate \(\int_0^1\frac{dx}{1+x^2}\) using
  (i)  Trapezoidal rule taking h = 1/4
 (ii)  Simpson's 1/3 rule taking h = 1/4
(iii)  Simpson's 3/8 rule taking h = 1/6
(iv)  Weddle's rule taking h = 1/6

Solution:

x:              0    1/4    1/2    3/4    1
y = 1/(1+x²):   1    16/17  4/5    16/25  1/2

(i) By the trapezoidal rule taking h = 1/4
\[
\int_0^1\frac{dx}{1+x^2} \approx \frac{h}{2}\bigl[y_0+y_4+2(y_1+y_2+y_3)\bigr] = \frac18\Bigl[1+\frac12+2\Bigl(\frac{16}{17}+\frac45+\frac{16}{25}\Bigr)\Bigr] \approx 0.7828
\]

(ii) By Simpson's 1/3 rule taking h = 1/4
\[
\int_0^1\frac{dx}{1+x^2} \approx \frac{h}{3}\bigl[y_0+y_4+4(y_1+y_3)+2y_2\bigr] = \frac1{12}\Bigl[1+\frac12+4\Bigl(\frac{16}{17}+\frac{16}{25}\Bigr)+2\Bigl(\frac45\Bigr)\Bigr] \approx 0.7854
\]

For (iii) and (iv), the table is

x:              0    1/6    1/3    1/2   2/3    5/6    1
y = 1/(1+x²):   1    36/37  9/10   4/5   9/13   36/61  1/2

(iii) By Simpson's 3/8 rule taking h = 1/6
\[
\int_0^1\frac{dx}{1+x^2} \approx \frac{3}{8}\cdot\frac16\bigl[y_0+y_6+3(y_1+y_2+y_4+y_5)+2y_3\bigr] = \frac1{16}\Bigl[1+\frac12+3\Bigl(\frac{36}{37}+\frac9{10}+\frac9{13}+\frac{36}{61}\Bigr)+2\Bigl(\frac45\Bigr)\Bigr] \approx 0.7854
\]

(iv) By Weddle's rule taking h = 1/6
\[
\int_0^1\frac{dx}{1+x^2} \approx \frac{3h}{10}\bigl[y_0+5y_1+y_2+6y_3+y_4+5y_5+y_6\bigr] = \frac1{20}\Bigl[1+5\Bigl(\frac{36}{37}\Bigr)+\frac9{10}+6\Bigl(\frac45\Bigr)+\frac9{13}+5\Bigl(\frac{36}{61}\Bigr)+\frac12\Bigr] \approx 0.7854
\]
Example 6.8: Find an approximate value of \(\int_0^{\pi/2}\sqrt{\cos\theta}\;d\theta\) using Simpson's 1/3 rule by dividing the interval into six subintervals.

Solution:

θ:              0     π/12    π/6     π/4     π/3     5π/12   π/2
y = √(cos θ):   1     0.9828  0.9306  0.8409  0.7071  0.5087  0

Using Simpson's 1/3 rule with h = π/12,
\[
\int_0^{\pi/2}\sqrt{\cos\theta}\;d\theta \approx \frac{\pi}{36}\bigl[1+0+4(0.9828+0.8409+0.5087)+2(0.9306+0.7071)\bigr] \approx 1.187
\]

Example 6.9: The speeds of an electric train at various times after leaving one station until it stops at the next station are given in the following table. Find the distance between two stations.

Speed in mph:   0    13    33    39½    40    40    36    15    0
Time in min:    0    ½     1     1½     2     2½    3     3¼    3½

Solution:
Let t be time in hrs. and v(t) be speed in mph.

t:      0    1/120   1/60   1/40   1/30   1/24   1/20   13/240   7/120
v(t):   0    13      33     39½    40     40     36     15       0

Let S be the distance between the two stations. As the length of the subintervals for t = 0 to t = 1/20 hrs. is 1/120 hrs. and the length of the subintervals for t = 1/20 hrs. to t = 7/120 hrs. is 1/240 hrs., the distances travelled for t = 0 to t = 1/20 hrs. and for t = 1/20 hrs. to t = 7/120 hrs. are to be found separately.

∴ S = ∫0^(7/120) v(t) dt = ∫0^(1/20) v(t) dt + ∫_(1/20)^(7/120) v(t) dt

By Simpson's 1/3 rule with h = 1/120 hrs. for the time 0 to 1/20 hrs. and h = 1/240 hrs. for the time 1/20 hrs. to 7/120 hrs., we have

S = (1/(3 × 120)) [0 + 36 + 4(13 + 79/2 + 40) + 2(33 + 40)] + (1/(3 × 240)) [36 + 4(15) + 0]

= 23/15 + 2/15 = 25/15 = 5/3 ≈ 1.667 miles.
 1
Example 6.10: Estimate the length of the arc of the curve 3y = x^3 from (0, 0) to (1, 1/3) using Simpson's 1/3 rule by taking h = 0.125.
Solution: Equation of curve is y = (1/3)x^3

∴ dy/dx = x^2

x:                            0    0.125    0.250    0.375    0.500    0.625    0.750    0.875    1.000
√(1 + (dy/dx)^2) = √(1+x^4):  1    1.00012  1.00195  1.00984  1.03078  1.07359  1.14735  1.25944  1.41421

If S is the length of the arc of the curve y = (1/3)x^3 from (0, 0) to (1, 1/3), then

S = ∫0^1 √(1 + (dy/dx)^2) dx = ∫0^1 √(1 + x^4) dx

By Simpson's 1/3 rule with h = 0.125

S = ∫0^1 √(1 + x^4) dx ≈ (0.125/3) [1 + 1.41421 + 4(1.00012 + 1.00984 + 1.07359 + 1.25944) + 2(1.00195 + 1.03078 + 1.14735)]

= (0.125/3) [1 + 1.41421 + 4(4.34299) + 2(3.18008)]

= 1.08943 units

∴ required length of arc ≈ 1.0894 units.



Example 6.11: A curve passes through seven points (1, 2), (1.5, 2.4), (2.0, 2.7), (2.5, 2.8), (3, 3),
(3.5, 2.6) and (4, 2.1). Obtain the area bounded by the curve, the x-axis and the ordinates at x = 1
and x = 4. Also find the volume of the solid of revolution obtained by revolving this area about
the axis of x.
Solution:
x 1.0 1.5 2.0 2.5 3.0 3.5 4.0
y 2.0 2.4 2.7 2.8 3.0 2.6 2.1
y2 4.00 5.76 7.29 7.84 9.00 6.76 4.41
By Simpson's 1/3 rule with h = 0.5

Required area = ∫1^4 y dx ≈ (0.5/3) [2.0 + 2.1 + 4(2.4 + 2.8 + 2.6) + 2(2.7 + 3.0)]

≈ 7.783 sq. units

Required volume of solid of revolution

= π ∫1^4 y^2 dx ≈ π (0.5/3) [4.00 + 4.41 + 4(5.76 + 7.84 + 6.76) + 2(7.29 + 9.00)]

≈ 64.10 cubic units

Exercise 6.2

1. Given that y = log x and

x 4.0 4.2 4.4 4.6 4.8 5.0 5.2


y 1.3863 1.4351 1.4816 1.5261 1.5686 1.6094 1.6487
Evaluate ∫4^5.2 log x dx by
(a) Trapezoidal rule      (b) Simpson's 1/3 rule
(c) Simpson's 3/8 rule    (d) Weddle's rule
Also compare it with exact value.
2. Calculate ∫2^10 dx/(1 + x) by dividing the range into 16 equal parts.

3. Evaluate ∫0^6 dx/(1 + x^2) using

(a) Trapezoidal rule      (b) Simpson's 1/3 rule
(c) Simpson's 3/8 rule    (d) Weddle's rule
and compare the results with the exact value of the integral.
4. Evaluate ∫0^1 e^(−x^2) dx by taking 11 ordinates using
(a) Trapezoidal rule      (b) Simpson's rule
5. Calculate by Simpson's rule an approximate value of ∫_(−3)^3 x^4 dx by taking seven equidistant ordinates. Compare it with the exact value and the value obtained by using the trapezoidal rule.
6. Evaluate ∫_0.5^0.7 x^(1/7) e^(−x) dx approximately by using a suitable formula.
7. Calculate ∫0^(π/2) e^(sin x) dx by taking seven equidistant ordinates.
8. Find the value of log 2 from ∫0^1 x^2/(1 + x^3) dx using Simpson's 1/3 rule by dividing the range into four equal parts. Also find the error.
9. Evaluate ∫0^4 e^x dx by Simpson's rule, using the data e = 2.72, e^2 = 7.39, e^3 = 20.09, e^4 = 54.60, and compare it with the actual value.
10. Compute the value of the integral ∫_0.2^1.4 (sin x − log_e x + e^x) dx by
(a) Trapezoidal rule      (b) Simpson's one-third rule
(c) Simpson's three-eighths rule      (d) Weddle's rule
using 7 ordinates. After finding the true value of the integral, compare the errors in the four cases.
11. Calculate an approximate value of ∫0^(π/2) sin x dx by
(a) Trapezoidal rule      (b) Simpson's rule
using 11 ordinates.
12. The table below gives the velocity v of speeding car at time t seconds. Approximate the
­distance travelled by the car in 12 s.

t (in seconds) 0 2 4 6 8 10 12
v (in m/s) 4 6 16 34 60 94 136

13. Find from the following table the area bounded by the curve y = f (x) and the x-axis from
x = 7.47 to 7.52

x 7.47 7.48 7.49 7.50 7.51 7.52


f (x) 1.93 1.95 1.98 2.01 2.03 2.06

14. A river is 80 m wide. The depth ‘d’ in metres at a distance x metres from one bank is given
by the following table. Calculate the area of the cross section of the river using
(a) Trapezoidal rule      (b) Simpson's 1/3 rule
x 0 10 20 30 40 50 60 70 80
d 0 4 7 9 12 15 14 8 3

15. Find the volume of solid of revolution formed by rotating about the x-axis the area between the
x-axis, the lines x = 0 and x = 1, and a curve through the points with the following coordinates:

x 0.00 0.25 0.50 0.75 1.00


y 1.0000 0.9896 0.9589 0.9089 0.8415

Answers 6.2

1. (a) 1.82766 (b) 1.8278533
(c) 1.8278475 (d) 1.827858   Exact value 1.8278474
2. 1.299
3. By taking h = 1,
(a) 1.4108 (b) 1.3662 (c) 1.3571 (d) 1.3734   Exact value 1.4056
4. (a) 0.7462 (b) 0.7468
5. By Simpson’s rule 98; Exact value 97.2; By trapezoidal rule 115
6. 0.10207 (taking h = 0.05 in Simpson's 1/3 rule)
7. 3.1044 (by Simpson's 1/3 rule)
8. By Simpson's 1/3 rule 0.23108; Exact value 0.23105; Error 0.00003
9. 53.873; Actual value 53.598
10. (a) 4.0715 (b) 4.0522 (c) 4.0530 (d) 4.0514
True value 4.0509; Errors are in
(a) 0.0206 (b) 0.0013 (c) 0.0021 (d) 0.0005
11. (a) 0.9979 (b) 1.0000045
12. 552 m
13. 0.100 sq. units
14. (a)  705 m2 (b)  710 m2
15. 2.8192 cubic units

6.4 Numerical Solutions of Ordinary Differential Equations
The analytical solutions of differential equations can be obtained in a limited way as there is
no general method of solving the equations. Quite often the differential equations appearing in
physical problems do not belong to the known types of analytic solutions. Such equations are
solved by numerical methods. We shall consider the methods to solve first order differential
dy
­equation = f ( x, y ), y ( x0 ) = y0 and then explain the method to solve second order differen-
dx
tial e­ quations and simultaneous first order differential equations.

6.4.1 Taylor-series Method
Let f (x, y) be a function which is differentiable for sufficient number of times.
Consider the initial value problem
dy
= y ′ = f ( x, y ) , y ( x0 ) = y0
dx 
From y′ = f(x, y) we can find
y″ = f_x + f_y f
y‴ = f_xx + f_xy f + (f_yx + f_yy f) f + f_y (f_x + f_y f), and y^(iv), y^(v), ….
We find their values at x = x0. Then using the Taylor series expansion about x0 we can find

y(x0 + h) = y0 + h y0′ + (h^2/2!) y0″ + (h^3/3!) y0‴ + … + (h^n/n!) y0^(n)

The error term will be (h^(n+1)/(n + 1)!) y^(n+1)(ξ); x0 < ξ < x0 + h
The number of terms depends upon the permissible error.
If we are to solve the simultaneous differential equations
dy
= y ′ = f ( x, y, z )
dx 
dz
= z ′ = φ ( x, y, z )
dx 
subject to the conditions y(x0) = y0, z(x0) = z0 then differentiating these equations successively we
find
y ′′, z ′′, y ′′′, z ′′′,… in this order and then using
h2
y ( x0 + h ) = y0 + h y0′ + y0′′ + 
2!
h2
z ( x0 + h ) = z0 + h z0′ + z0′′ + 
2!

we find y (x0 + h) and z (x0 + h).


If we are to solve second order differential equation
d2 y  dy 
= f  x, y,  ; y ( x0 ) = y0 , y ′ ( x0 ) = y0′
dx 2  dx 
dy
Then, we suppose that = z and solve the simultaneous equations
dx
dy
= z = φ ( x, y, z )
dx 
d 2 y dz
= = f ( x, y, z ) ; z ( x0 ) = y0′ , y ( x0 ) = y0 
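As a concrete illustration of the procedure (this sketch is ours, not part of the text), the Python fragment below advances y′ = x^2 − y, y(0) = 1 (the equation of Example 6.13 below) step by step by summing Taylor terms; the recurrences y″ = 2x − y′, y‴ = 2 − y″ and y^(n) = −y^(n−1) for n ≥ 4 follow directly from differentiating the equation.

```python
import math

def taylor_step(x0, y0, h, terms=6):
    # derivatives of y at x0 for y' = x**2 - y, obtained by successive differentiation
    d = [y0, x0**2 - y0]            # y, y'
    d.append(2 * x0 - d[1])         # y''  = 2x - y'
    d.append(2 - d[2])              # y''' = 2 - y''
    for n in range(4, terms + 1):
        d.append(-d[-1])            # y^(n) = -y^(n-1) for n >= 4
    return sum(d[k] * h**k / math.factorial(k) for k in range(terms + 1))

y = 1.0
for i in range(4):                  # advance from x = 0 to x = 0.4 in steps of 0.1
    y = taylor_step(0.1 * i, y, 0.1)
    x = 0.1 * (i + 1)
    exact = x**2 - 2*x + 2 - math.exp(-x)
    print(round(y, 4), round(exact, 4))
```

Each printed pair agrees to four decimals with the hand computation of Example 6.13 and with the exact solution y = x^2 − 2x + 2 − e^(−x).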
dx 2 dx

Example 6.12: Using Taylor's series find the solution of the differential equation xy′ = x − y; y(2) = 2 at x = 2.1 correct to five places of decimals.
Solution: Differential equation is

xy′ = x − y; y(2) = 2,   ∴ y′(2) = 1 − y(2)/2 = 1 − 2/2 = 0

On differentiating, we have

xy″ + 2y′ = 1   ∴ y″(2) = (1 − 0)/2 = 1/2

xy‴ + 3y″ = 0   ∴ y‴(2) = −3/4

xy^(iv) + 4y‴ = 0   ∴ y^(iv)(2) = 3/2,

((0.1)^5/5!) y^(v)(ξ) = (0.00001/120) y^(v)(ξ) < 10^(−6)

∴ By Taylor series expansion

y(x) = y(2) + (x − 2) y′(2) + ((x − 2)^2/2!) y″(2) + ((x − 2)^3/3!) y‴(2) + ((x − 2)^4/4!) y^(iv)(2) + …

∴ y(2.1) ≈ 2 + (0.1)^2/4 − (0.1)^3/8 + (0.1)^4/16

∴ y(2.1) = 2.00238 correct to 5 decimal places.



Example 6.13: Using Taylor-series method solve


dy
= x 2 − y, y ( 0 ) = 1 at x = 0.1, 0.2, 0.3 and 0.4. Compare the values with exact solution.
dx
Solution: y ′ = x 2 − y ; y ( 0 ) = 1 \  y ′ ( 0 ) = 0 − 1 = −1
   
Differentiating both sides, we have
y ′′ + y ′ = 2 x    \  y ′′ ( 0 ) = − y ′ ( 0 ) = 1

y ′′′ + y ′′ = 2     \  y ′′′ ( 0 ) = 2 − y ′′ ( 0 ) = 1

y iv = − y ′′′   \  y iv ( 0 ) = −1


y v = − y iv    \  y v ( 0 ) = 1


y vi = − y v    \  y vi ( 0 ) = −1

Hence, y (0) = (–1) ; n ≥ 3
(n) n+1

\ By Taylor series expansion


x2 x3 x 4 iv x5 v x 6 vi
y ( x ) = y(0) + xy ′ ( 0 ) + y ′′ + y ′′′(0) + y (0) + y (0) + y ( 0 ) +
2! 3! 4! 5! 6! 
x 2 x3 x 4 x5
\ y ( x ) 1 − x + + − +
2 6 24 120 

( 0.1) ( 0.1) ( 0.1)


2 3 4

\ y ( 0.1)  1 − ( 0.1) + + − = 0.9052


2 6 24 
( 0.2 ) ( 0.2 ) ( 0.2 )
2 3 4

y ( 0.2 )  1 − ( 0.2 ) + + − = 0.8213


2 6 24 
( 0.3) ( 0.3) ( 0.3) ( 0.3)
2 3 4 5

y ( 0.3)  1 − ( 0.3) + + − + = 0.7492


2 6 24 120 

( 0.4 ) ( 0.4 ) ( 0.4 ) ( 0.4 )


2 3 4 5

y ( 0.4 )  1 − ( 0.4 ) + + − + = 0.6897


2 6 24 120 
All these results are up to 4 decimal places.
Now,
dy
+ y = x2
dx 

is Leibnitz’s linear differential equation

I .F . = e ∫
dx
= ex 

\ solution is
(
ye x = ∫ x 2 e x dx = x 2 − 2 x + 2 e x + c ) 
−x
⇒ y = x − 2 x + 2 + ce 
2

\ y ( 0 ) = 2 + c = 1   (∵ y(0) =1)


\ c = –1

\ exact solution is
y = x 2 − 2x + 2 − e− x 

From the exact solution


y ( 0.1) = ( 0.1) − 2 ( 0.1) + 2 − e −0.1 = 0.9052
2


y ( 0.2 ) = ( 0.2 ) − 2 ( 0.2 ) + 2 − e
2 −0.2
= 0.8213

y ( 0.3) = ( 0.3) − 2 ( 0.3) + 2 − e
2 −0.3
= 0.7492

y ( 0.4 ) = ( 0.4 ) − 2 ( 0.4 ) + 2 − e
2 −0.4
= 0.6897

These results are taken up to 4 decimal places.
We obtain that the result obtained are same in both cases.

Example 6.14: Given the initial value problem u ′ = t 2 + u 2 , u (0 ) = 0. Determine the first three
non-zero terms in the Taylor series for u(t) and hence obtain the value for u(1). Also determine
t when the error in u(t) obtained from the first two non-zero terms is to be less than 10–6 after
rounding.
Solution:
u ′ = t 2 + u 2 ; u ( 0) = 0 \  u ′ ( 0 ) = 0 + u 2 ( 0 ) = 0

\ u ′′ = 2t + 2uu ′ \  u ′′ ( 0 ) = 0

u ′′′ = 2 + 2u ′ + 2uu ′′
2
\  u ′′′ ( 0 ) = 2

u = 6u ′u ′′ + 2uu ′′′
iv
\  u iv
(0) = 0 
u v = 6u ′′2 + 8u ′u ′′′ + 2uu iv \  uv (0) = 0

u vi = 20u ′′u ′′′ + 10u ′u iv + 2uu v \  u (0) = 0
vi

u ( 0 ) = 80
2
u vii = 20u ′′′ + 30u ′′u iv + 12u ′u v + 2uu vi \  vii


u viii = 70u ′′′u iv + 42u ′′u v + 14u ′u vi + 2uu vii \  u ( 0 ) = 0 


viii

\  u ix (0 ) = 0
2

u ix = 70u iv + 112u ′′′u v + 56u ′′u vi + 16u ′u vii + 2uu viii



u x
= 252 u iv v
u + 168 u ′′′ u vi
+ 72 u ′′ u vii
+ 18u ′ u viii
+ 2 uu ix
\  u x
( 0 ) = 0

v2
u = 252u + 420u u + 240u ′′′u + 90u ′′u + 20u ′u + 2uu 
xi iv vi vii viii ix x

\  u xi (0 ) = 240 ( 2)(80 ) = 38400



\ By Taylor series expansion up to first three non-zero terms
t3 t7 t 11 t 3 t 7 t 11
u (t ) = 2 + 80 + 38400 = + +2
3! 7! 11! 3 63 2079
1 1 2
\ u (1) = + +  0.350168
3 63 2079 
Error in u (t) obtained from the first two non-zero terms
2 t 11
 < 0.0000005
2079 
( 5)( 2079 )
\ t 11 < .10 −7
2 
1
 ( 5 )( 2079 ) −7  11
\ t< .10 
 2  
\ t  0.50 

dy
Example 6.15: Using Taylor series, find the numerical solution of = 2 y + 3e x at x = 0.2,
given that y (0) = 1. Compare the result with analytical solution. dx
Solution:
dy
y′ = = 2 y + 3e x ; y ( 0 ) = 1 ,   \  y ′ ( 0 ) = 2 + 3 = 5 
dx
Differentiate n times
y (n +1) = 2 y (n) + 3e x ; n = 1, 2, 3… 

\ y ′′ ( 0 ) = 2 y ′ ( 0 ) + 3 = 2 ( 5 ) + 3 = 13

y ′′′ ( 0 ) = 2 y ′′ ( 0 ) + 3 = 2 (13) + 3 = 29



y iv
( 0 ) = 2 y ′′′ ( 0 ) + 3 = 2 ( 29 ) + 3 = 61 
y v ( 0 ) = 2 y iv ( 0 ) + 3 = 2 ( 61) + 3 = 125


\ By Taylor series expansion


x2 x3 x 4 iv x5 v
y ( x )  y ( 0 ) + xy ′ ( 0 ) + y ′′ ( 0 ) + y ′′′ ( 0 ) + y (0) + y (0)
2! 3! 4! 5! 
13 2 29 3 61 4 25 5
= 1 + 5x + x + x + x + x
2 6 24 24 
13 29 61 25
y ( 0.2 )  1 + 5 ( 0.2 ) + ( 0.2 ) + ( 0.2 ) + ( 0.2 ) + ( 0.2 )
2 3 4 5
\
2 6 24 24 
\ y(0.2) = 2.3031 to 4 decimals.

Analytical solution
dy
− 2 y = 3e x
dx 
is Leibnitz’s linear differential equation.
I.F. = e–2x
Solution is
ye −2 x = ∫ 3e x .e −2 x dx = −3e − x + c

\ y = −3e + ce 
x 2x

\ y ( 0 ) = −3 + c = 1    (Q  y(0) = 1)

\ c = 4

\ solution is
y ( x ) = 4e 2 x − 3e x

\ y ( 0.2 ) = 4e 0.4
− 3e 0.2 = 2.3031

It is up to 4 decimals.
Thus, up to 4 decimals results are same.

Example 6.16: Evaluate by means of Taylor’s series expansion, the value of y at x = 0.1, 0.2 to
four significant figures for the problem

y ′′ − x ( y ′ ) + y 2 = 0 ; y ( 0 ) = 1, y ′ ( 0 ) = 0.
2

Solution: Given differential equation is

y ′′ − x ( y ′ ) + y 2 = 0 ; y ( 0 ) = 1, y ′ ( 0 ) = 0
2



Let y ′ = z 

y ( 0 ) = 1, z ( 0 ) = y ′ ( 0 ) = 0, \  y ′′ ( 0 ) = z ′ ( 0 ) = 0 − (1) = −1
2
\ y ′′ = z ′ = xz 2 − y 2 ;
   
y ′′′ = z ′′ = z 2 + 2 xzz ′ − 2 yy ′ \  y ′′′ ( 0 ) = 0

y iv = z ′′′ = 4 zz ′ + 2 xz ′ 2 + 2 xzz ′′ − 2 y ′ 2 − 2 yy ′′ \  y iv ( 0 ) = 2

\ By Taylor’s series expansion

t2 t3 t4
y (t )  y (0 ) + ty ′ (0 ) + y ′′ (0 ) + y ′′′ (0 ) + y iv (0 )
2! 3! 4! 
t2 t4
= 1− +
2 12 

( 0.1) ( 0.1)
2 4

\ y ( 0.1)  1 − +  0.9950
2 12 

( 0.2 ) ( 0.2 )
2 4

y ( 0.2 )  1 − +  0.9801
2 12 

Exercise 6.3

dy 6. Using Taylor’s series method, obtain the


1. Solve = x + y, given y = 0 when x = 1, dy
dx solution of = 3 x + y 2 and y = 1, when
up to x = 1.2 with h = 0.1. Compare the dx
final result with the value of the explicit x = 0. Find the value of y for x = 0.1,
solution. correct to four places of decimals.
dy 7. Solve by Taylor’s series method:
2. Solve = x − y 2 , y (0) =1 to find y (0.1)
dx y ′ = y − 2 x , y (0 ) = 1 for x = 0.1 and − 0.1.
correct to four places of decimal by using
y
Taylor series method.
dy 8. Find by Taylor’s series method, the value
3. Solve = y 2 + 1, y (0) = 0 in the range of y at x = 0.1 and x = 0.2 to five places of
dx
0 ≤ x ≤ 1 writing y as series in power of dy
decimals from = x 2 y − 1, y (0 ) = 1
x and check your result with the exact dx
­values. 9. Solve the following differential equation
4. Use Taylor’s series method to solve the by Taylor’s series method y′ = 2t + 3y,
dy y ( 0 ) = 1 and find y ( 0.1) .
equation = − xy, y (0 ) = 1. 10. Using Taylor series expansion evaluate the
dx
5. Solve the differential equation y ′′ = xy integral of y ′ − 2 y = 3e x , y (0 ) = 0 at
for x = 0.5 and x = 1 by Taylor series (a) x = 0.1(0.1) 0.3   (b)  x = 1.0; 1.1
­method where y (0) = 0, y′ (0) = 1.

11. Find the solution u (0.1) and u (0.2) of 14. Find solution of the simultaneous equa-
the initial value problem u′ = x(1 - 2u2); dx dy
u(0)  = 1 using the first three non-zero
tions = xy + 2t , = 2ty + x subject
dt dt
terms of the Taylor series method. to the initial conditions x(0) = 1, y(0) = –1
12. Solve numerically the system of simul- by Taylor series method.
dx dy dz
taneous equations + 2 x + 3 y = 0, 15. Solve = x + z, = x − y2 with
dt dx dx
dy y (0)  = 2, z (0) = 1 to get y(0.1), y(0.2),
+ 3 x + 2 y = 2e 2t with initial conditions
dt z(0.1) and z(0.2) approximately by ­Taylor’s
x = 1, y = 2 at t = 0, for t = 0.1 by Taylor algorithm.
series method. 16. Solve the differential equation
13. Use Taylor’s series method to ob- d2 y dy
tain the power series in t for x and y + x + y = 0, y (0 ) = 1, y ′ (0 ) = 0
dx 2 dx
satisfying the differential equations to approximate y(0.1) by Taylor’s series
dx d2 y method.
= x + y + t , 2 = x − t under the
dt dt
initial conditions x(0) = 0, y(0) = 1,
 dy 
 dt  = −1
 t =0

Answers 6.3

1. y(1.1) = 0.1103, y(1.2) = 0.2428 and results are same by explicit solution.
2. y(0.1) = 0.9138
3.

x 0.2 0.4 0.6 0.8 1


Calculated y (x) 0.203 0.423 0.684 1.026 1.521
Exact y (x) 0.203 0.423 0.684 1.030 1.557
x 2 x 4 x6
4. y ( x ) = 1 − + − + 
2 8 48
5. y ( 0.5 )  0.505, y (1)  1.085
5 9 177 5
6. y ( x ) = 1 + x + x 2 + 2 x 3 + x 4 + x + ; y ( 0.1) = 1.12725
2 4 60
7. y (0.1)  1.0954, y ( −0.1)  0.8944
8. y (0.1) = 0.90031, y (0.2) = 0.80227
9. y ( 0.1) = 1.36094
10. (a)  y (0.1)  0.34870, y (0.2)  0.81125, y ( 0.3)  1.41657
(b)  y (1.0 )  13.65000, y (1.1)  17.39683

11. u ( 0.1)  0.99505, u ( 0.2 )  0.9808


12. x (0.1)  0.3291, y (0.1)  1.6668

t 2 t 3 t 4 2t 5 3t 6
13. x ( t ) = t + + + + + + 
2 ! 3! 4 ! 5 ! 6 !
t4 t5 t6
y ( t ) = 1 − t + + + + 
4 ! 5! 6 !
3 19 3 4 9
14. x ( t ) = 1 − t + 2t 2 − t 3 + t 4 +  and y ( t ) = −1 + t − t 2 + t 3 − t 4 + 
2 12 2 3 8
15. y ( 0.1)  2.0845, y ( 0.2 )  2.1367, z ( 0.1)  0.5867, z ( 0.2 )  0.1535

16. y ( 0.1)  0.9950

6.4.2  Picard’s Method


Consider the differential equation
dy
= y ′ = f ( x, y ) ; y ( x0 ) = y0
dx 
We have
y ′ ( x ) dx = ∫ f ( x, y ) dx
x x


∫ x0 x0

y ( x ) − y ( x0 ) = ∫ f ( x, y ) dx 
x
or
x0

y ( x ) = y ( x0 ) + ∫ f ( x, y ) dx
x
or
x0

It is an integral equation of the given differential equation. First approximation to y(x) denoted by
y1 is obtained by taking y = y0 in the integral

y1 = y ( x0 ) + ∫ f ( x, y0 ) dx
x
\
x0

Then, second approximation is obtained by taking y = y1 in the integral

y2 = y ( x0 ) + ∫ f ( x, y1 ) dx
x

x0

Proceeding in this way
yn = y ( x0 ) + ∫ f ( x, yn −1 ) dx
x

x0

Thus, iterative formula for y(x) is

yn ( x ) = y ( x0 ) + ∫ f ( x, yn −1 ) dx
x

x0


If we are to solve the simultaneous differential equations


dy
= y ′ = f ( x, y, z )
dx 
dz
= z ′ = φ ( x, y, z ); y ( x0 ) = y0 , z ( x0 ) = z0
dx 

Then, from the formulae
yn = y ( x0 ) + ∫ f ( x, yn −1 , zn −1 ) dx
x

x0

zn = z ( x0 ) + ∫ φ ( x, yn −1 , zn −1 ) dx
x

x0

we obtain y1 , z1 , y2 , z2 ,… ,… , yn , zn in this order and when yn  yn −1, zn  zn −1 up to desired
accuracy then iteration is stopped.
If we are to solve the second order differential equation
d2 y  dy 
= f  x, y,  ; y ( x0 ) = y0 , y ′ ( x0 ) = y0′
dx 2  dx  

dy
then we suppose = z and solve the simultaneous differential equations
dx
dy
= φ ( x, y, z ) = z
dx 
dz
= f ( x, y, z ); y ( x0 ) = y0 , z ( x0 ) = y0′
dx

Remark 6.1:
 (i) Picard’s method fails if the function f (x, y) is not successively integrable.
(ii) An extra condition, called the Lipschitz condition,
|f(x, y) − f(x, z)| ≤ L |y − z|, where L is a constant,
is required for convergence.
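When f(x, y) can be integrated in closed form, the successive approximations are convenient to generate with a computer algebra system. The sketch below uses sympy (our choice of tool, not something the text prescribes) to produce the iterates for dy/dx = 2x − y, y(0) = 0.9, the problem of Example 6.17 which follows.

```python
import sympy as sp

x, t = sp.symbols('x t')
f = lambda xx, yy: 2 * xx - yy              # right-hand side f(x, y)
y0 = sp.Rational(9, 10)                     # initial condition y(0) = 0.9

y_n = sp.sympify(y0)                        # zeroth approximation: y_0(x) = y0
for n in range(1, 7):
    # y_n(x) = y0 + integral from 0 to x of f(t, y_{n-1}(t)) dt
    y_n = sp.expand(y0 + sp.integrate(f(t, y_n.subs(x, t)), (t, 0, x)))
    print(n, y_n)

print(float(y_n.subs(x, sp.Rational(1, 5))))   # y(0.2) ≈ 0.7743
```

The printed polynomials agree with the table of approximations worked out by hand in Example 6.17.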
dy
Example 6.17: Employing Picard’s method, solve the differential equation = 2 x − y, given
dx
that y = 0.9 at x = 0, for x = 0.2, 0.4 and x = 0.6. Check the result with exact values.
Solution:
dy
= 2 x − y ; y(0) = 0.9
dx
By Picard’s method, iterative formula for finding approximations to y(x) is
yn ( x ) = y(0) + ∫ ( 2 x − yn −1 ( x ) ) dx = 0.9 + x 2 − ∫ yn −1 ( x ) dx
x x

0 0

where y0 = y(0) = 0.9 



Following table gives approximations

n yn(x)
0 0.9
1 0.9 − 0.9 x + x 2
x3
2 0.9 − 0.9 x + 1.45 x 2 −
3
 x3  x 4
3 0.9 − 0.9 x + 1.45  x 2 −  +
 3  12

 x3 x 4  x5
4 0.9 − 0.9 x + 1.45  x 2 − +  −
 3 12  60

 x3 x 4 x5  x6
5 0.9 − 0.9 x + 1.45  x 2 − + −  +
 3 12 60  360

 x3 x 4 x5 x6  x7
6 0.9 − 0.9 x + 1.45  x 2 − + − + −
 3 12 60 360  2520

From last two approximations


 x3 x 4 x5 
y( x )  0.9 − 0.9 x + 1.45  x 2 − + − 
 3 12 60 

 1 1 1 5
y ( 0.2 )  0.9 − 0.9 ( 0.2 ) + 1.45 ( 0.2 ) − ( 0.2 ) + ( 0.2 ) − ( 0.2 )  = 0.7743
2 3 4

 3 12 60 

 ( 0.4 ) ( 0.4 ) ( 0.4 ) 


3 4 5

y ( 0.4 ) = 0.9 − 0.9 ( 0.4 ) + 1.45 ( 0.4 ) −


2
+ −  = 0.7439
 3 12 60 

 ( 0.6 ) ( 0.6 ) ( 0.6 ) 


3 4 5

y ( 0.6 ) = 0.9 − 0.9 ( 0.6 ) + 1.45 ( 0.6 ) −


2
+ − = 0.7914
 3 12 60 


Now, given differential equation is
dy
+ y = 2x
dx 
It is Leibnitz’s linear differential equation.

I.F. = ex

Solution is
y e x = ∫ 2 xe x dx = 2 ( x − 1) e x + c

∴ y = 2 x − 2 + ce − x 
∴ y(0) = −2 + c = 0.9 
∴ c = 2.9

∴ particular solution is
y = 2 x − 2 + 2.9 e − x 

∴ y ( 0.2 ) = −1.6 + 2.9e −0.2 = 0.7743

y ( 0.4 ) = −1.2 + 2.9e −0.4 = 0.7439

y ( 0.6 ) = −0.8 + 2.9e −0.6 = 0.7916

y(0.6) is different at 4th decimal place and y(0.2) and y(0.4) are correct up to four decimal places.
dy
Example 6.18: Solve = 1 + xy, given that y = 1 when x = 0, in the interval [0, 0.5] for h = 0.1
dx
correct to three decimal places.
Solution:
dy
= 1 + xy ; y(0) = 1
dx
By Picard’s method, iterative formula for finding approximations to y(x) is
yn ( x ) = y(0) + ∫ [1 + xyn −1 ( x ) ] dx = 1 + x + ∫ x yn −1 ( x ) dx
x x

0 0

where y0 ( x ) = y(0) = 1 
Following table gives approximations
n y n (x )
0 1
x2
1 1+ x +
2
x 2 x3 x 4
2 1+ x + + +
2 3 8
x 2 x3 x 4 x5 x6
3 1+ x + + + + +
2 3 8 15 48
x 2 x3 x 4 x5 x6 x7 x8
4 1+ x + + + + + + +
2 3 8 15 48 105 384
x 2 x3 x 4 x5 x6 x7 x8 x9 x10
5 1+ x + + + + + + + + +
2 3 8 15 48 105 384 945 3840

x7
Now, for maximum in [0, 0.5]
105
( 0.5)
7

we have maximum value =  0.00007


105
∴ for values correct to three decimal places, from last two approximations we have
x 2 x3 x 4 x5 x6
y( x ) = 1 + x + + + + +
2 3 8 15 48 

( 0.1) ( 0.1) ( 0.1) ( 0.1) ( 0.1)


2 3 4 5 6

y ( 0.1) = 1 + ( 0.1) + + + + + = 1.105


2 3 8 15 48 
( 0.2 ) ( 0.2 ) ( 0.2 ) ( 0.2 ) ( 0.2 )
2 3 4 5 6

y ( 0.2 ) = 1 + 0.2 + + + + + = 1.223


2 3 8 15 48 
( 0.3) ( 0.3) ( 0.3) ( 0.3) ( 0.3)
2 3 4 5 6

y ( 0.3) = 1 + 0.3 + + + + + = 1.355


2 3 8 15 48 
( 0.4 ) ( 0.4 ) ( 0.4 ) ( 0.4 ) ( 0.4 )
2 3 4 5 6

y ( 0.4 ) = 1 + 0.4 + + + + + = 1.505


2 3 8 15 48 
( 0.5) ( 0.5) ( 0.5) ( 0.5) ( 0.5)
2 3 4 5 6

y ( 0.5 ) = 1 + 0.5 + + + + + = 1.677


2 3 8 15 48 
∴ up to 3 decimals
y ( 0.1) = 1.105, y ( 0.2 ) = 1.223, y ( 0.3) = 1.355, y ( 0.4 ) = 1.505, y ( 0.5 ) = 1.677


Example 6.19: Approximate y and z by using Picard’s method for the particular solution of
dy dz
= x + z, = x − y 2 for x = 0.1 given that y = 2, z = 1 where x = 0
dx dx
Solution:
dy dz
= x + z, = x − y 2 ; y(0) = 2, z (0) = 1
dx dx 
By Picard’s method iterative formulae for finding approximations to y(x) and z(x) are
x2
yn ( x ) = y(0) + ∫ [ x + zn −1 ( x ) ] dx = 2 +
x x
+ ∫ zn −1 ( x ) dx
0 2 0

2
x x x
zn ( x ) = z (0) + ∫  x − y 2 n −1 ( x )  dx = 1 + − ∫ y 2 n −1 ( x ) dx
0 2 0

where y0 ( x ) = y(0) = 2, z0 ( x ) = z (0) = 1 



Following table gives approximations to y(x) and z(x) up to x4.

x2 x x2 x

2 ∫0
n y n (x ) = 2 + + ∫ z n −1 ( x ) dx z n (x ) = 1+ − y 2 n −1 ( x ) dx
2 0

0 2 1
2
x2 x x
1 2+ x+ 1+ − ∫ 4 dx
2 2 0

x2
= 1− 4x +
2
3 2 x3 x2
( )
x
2 2+ x− x + 1+ − ∫ 4 + 4 x + 3 x 2 + x 3 dx
2 6 2 0

3 x4
= 1 − 4 x − x 2 − x3 −
2 4
3 2 x3 x 4 x2 x 7 
3 2+ x− x − − 1+ − ∫  4 + 4 x − 5 x 2 − x 3  dx
2 2 4 2 0
 3 
3 5 7
= 1 − 4 x − x 2 + x3 + x 4
2 3 12
3 2 x3 5 4 x2
( )
x
4 2+ x− x − + x 1+ − ∫ 4 + 4 x − 5 x 2 − 5 x 3 dx
2 2 12 2 0

3 5 5
= 1 − 4 x − x 2 + x3 + x 4
2 3 4
3 2 x3 5 4 x2
( )
x
5 2+ x− x − + x 1+ − ∫ 4 + 4 x − 5 x 2 − 5 x 3 dx
2 2 12 2 0

3 5 5
= 1 − 4 x − x 2 + x3 + x 4
2 3 4

From last two approximations


3 2 1 3 5 4
y( x )  2 + x − x − x + x
2 2 12 
3 2 5 3 5 4
z ( x ) 1 − 4 x − x + x + x
2 3 4 
3 1 5
y(0.1)  2 + ( 0.1) − ( 0.1) − ( 0.1) + ( 0.1)  2.0845
2 3 4

2 2 12 
3 5 5
z(0.1)  1 − 4 ( 0.1) − ( 0.1) + ( 0.1) + ( 0.1)  0.5868
2 3 4

2 3 4 

d2 y dy
Example 6.20: Use Picard’s method to approximate y when x = 0.1 given that 2 − x 2 − x4 y = 0
dy dx dx
and y = 5 and = 1 when x = 0.
dx
Solution:
d2 y dy
2
− x2 − x 4 y = 0 ; y(0) = 5, y ′(0) = 1
dx dx 
dy
Let =z
dx 
dz
∴ = x z + x 4 y ; y(0) = 5, z (0) = 1
2

dx 
By Picard’s method iterative formulae for finding approximations to y(x) and z(x) are
x x
yn ( x ) = y(0) + ∫ zn −1 ( x ) dx = 5 + ∫ zn −1 ( x ) dx
0 0 
x x
zn ( x ) = z (0) + ∫  x 2 zn −1 ( x ) + x 4 yn −1 ( x )  dx = 1 + ∫  x 2 zn −1 ( x ) + x 4 yn −1 ( x )  dx
0 0 
where y0 = y(0) = 5, z0 = z (0) = 1 

Following table gives approximations to y(x) and z(x)


x x
n y n ( x ) = 5 + ∫ z n −1 ( x )dx z n ( x ) = 1 + ∫  x 2 z n −1 ( x ) + x 4y n −1 ( x ) dx
0 0

0 5 1
x3
1 5+x 1+ + x5
3
x 4 x6 x3 2 1
2 5+ x + + 1+ + x5 + x6 + x8
12 6 3 9 8
x 4 x6 2 7 1 9
3 5+ x + + + x + x
12 6 63 72

From last two approximations


x 4 x6
y( x )  5 + x + +
12 6 

(0.1) 4
∴ y(0.1)  5 + (0.1) +  5.10001
12 

Exercise 6.4

1. Using Picard’s method of successive ap- y′ = x 2 + y 2


proximations, obtain a solution up to the satisfying the initial condition y(0) = 0
fifth approximation of the equation
dy 7. Given the differential equation
= y + x such that y = 1 when x = 0. dy x2
dx = with the initial condition
dx 1 + y 2
Check your answer by finding the exact
y = 0 when x = 0, use Picard’s method to
particular solution.
obtain y for x = 0.25, 0.5 and 1.0 correct to
2. Use Picard’s method to solve three places of decimals.
dy
= − xy, y(0) = 1.
dx 8. Approximate y and z at x = 0.1, using
Picard’s method for the solution of the
3. Use Picard’s method to approximate y dy dz
when x = 0.1 given that y = 1 when x = 0 equations = z, = x 3 ( y + z ) given
dy y − x dx dx
and = . 1
dx y + x that y(0) = 1 and z(0) = .
2
4. Perform two iterations of Picard’s method 9. Use Picard’s method to solve the equa-
to find an approximate solution of the ini- tions
tial value problem dx dy
= − y, = x given that x = 1, y = 0
dy dt dt
= x + y 2 , y ( 0) = 1
dx when t = 0.
5. Use Picard’s method to approximate the
value of y when x = 0.1 given that y = 1 10. Use Picard’s method to approximate y when
when x = 0 d2 y dy
x = 0.1 given that 2
+ 2 x + y = 0 and
dy dx dx
= 3x + y 2 dy
dx y = 0.5, = 0.1 when x = 0.
6. Using three successive approximations of dx
Picard’s method, obtain approximate solu-
tion of the different equation

Answers 6.4

x3 x 4 x5 x6
  1.  y5 = 1 + x + x 2 + + + +
3 12 60 720
exact particular solution is
x3 x 4 x5 x6
y = 2e x − x − 1 = 1 + x + x 2 + + + + + 
3 12 60 360
x 2 x 4 x6 x8 x 2 x 4 x6 x8
 2. y  1 − + − + y 1 − + − +
2 8 48 384 2 8 48 384

 3. 1.0906
3 2 2 3 1 4 1 5
 4. y2 = 1 + x + x + x + x + x
2 3 4 20
 5. y(0.1) = 1.127
x 3 x 7 2 x11 x15
 6. y3 = + + +
3 63 2079 59535
 7. y ( 0.25 ) = 0.0052, y ( 0.5 ) = 0.0416, y (1.0 ) = 0.3218

 8. y ( 0.1) = 1.05000, z ( 0.1) = 0.50004

t3 t5 t2 t4 t6
 9. y  t − + , x 1 − + −
6 120 2 24 720
10.  y (0.1) = 0.5075

6.4.3 Euler’s Method
Consider the differential equation
dy
= f ( x, y ) , y ( x0 ) = y0
dx 
We want to find y(xn) where xn = x0 + nh.
This method is based on the assumption that in a small interval, the curve is a straight line pass-
ing through the initial point along the tangent at initial point. Thus, if [x0, x1] is small interval
where x1 = x0 + h then we approximate the curve by tangent at (x0, y0).
Equation of tangent at (x0, y0) is
 dy 
y − y0 =   ( x − x0 ) = ( x − x0 ) f ( x0 , y0 )
 dx ( x0 , y0 )
Thus, the value of y corresponding to x = x1, is y1 = y0 + ( x1 − x0 ) f ( x0 , y0 ) = y0 + h f ( x0 , y0 ) .

Now, in the interval [x1, x2], we approximate the curve by straight line passing through (x1, y1)
along the tangent at (x1, y1).
Equation of tangent at (x1, y1) is
 dy 
y − y1 =   ( x − x1 ) = ( x − x1 ) f ( x1 , y1 )
 dx ( x1 , y1 )

Thus, the value of y corresponding to x = x2 is y2 = y1 + ( x2 − x1 ) f ( x1 , y1 ) = y1 + h f ( x1 , y1 ) .

Repeating this process n times, we shall have yn = yn −1 + h f ( xn −1 , yn −1 ) .



Geometrically, the actual curve of solution through P ( x0 , y0 ) is shown dotted in Figure (6.1). In
the interval ( x0 , x1 ), we approximate the curve by tangent at P which meets ordinate at x = x1 in P1.
Now, let P1 Q1 be curve of solution through P1. In interval ( x1 , x2 ), we approximate this curve by
tangent at P1 to curve P1 Q1 which meets ordinate at x = x2 in P2. Repeating this process n times,
we reach point Pn. Thus, point Q on curve will be approximated by Pn by Euler method.
Y

Qx y
Q
Q P

Px y P P

X
O x x x xn

Figure 6.1

If h is not small then error is quite significant. Further, this sequence of lines may also deviate con-
siderably from the curve of solution. Hence, the method is very slow and requires improvement.
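A direct transcription of the recurrence y_(n+1) = y_n + h f(x_n, y_n) is given below (a minimal sketch; the helper name is ours). Run on y′ = x + y, y(0) = 1 with h = 0.2 it reproduces the table of Example 6.21(i) later in this section.

```python
def euler(f, x0, y0, h, steps):
    x, y = x0, y0
    values = [(x, y)]
    for _ in range(steps):
        y = y + h * f(x, y)          # follow the tangent at (x_n, y_n)
        x = x + h
        values.append((x, y))
    return values

for x, y in euler(lambda x, y: x + y, 0.0, 1.0, 0.2, 4):
    print(round(x, 1), round(y, 4))   # ends with y(0.8) ≈ 2.3472
```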

6.4.4 Improved Euler’s Method


In this method the approximation of y at x = x1 is obtained as in Euler’s method. Let it be y1( p )
∴ y1( p ) = y0 + h f ( x0 , y0 )

Now, this approximation is improved by approximating the curve of solution in the interval
(x0, x1) by a straight line passing through (x0, y0) but taking the slope of line the average of the
( )
slopes at (x0, y0) and x1 , y1( p ) . Thus, the equation of line becomes
1
 2
{ (
y − y0 = ( x − x0 )  f ( x0 , y0 ) + f x1 , y1( p ) 


)}


h
(
y1( c ) = y0 +  f ( x0 , y0 ) + f x1 , y1( p )  
2
)
Proceeding in this way, we get


(
y2( p ) = y1( c ) + h f x1 , y1( c ) )
h
( ) (
y2( c ) = y1( c ) +  f x1 , y1( c ) + f x2 , y2( p ) 
2
)
 
774 | Chapter 6

(
yn( +p1) = yn( c ) + h f xn , yn( c ) ) 
h
( ) ( )
yn( c+)1 = yn( c ) +  f xn , yn( c ) + xn +1 , yn( +p1)  ; y0( c ) = y0 = y( x0 )
2

6.4.5 Modified Euler’s Method


In Euler’s improvement method, the slope of the line is taken to be average of slope at initial point
of sub-interval and approximated slope at end point of interval. It is again modified by taking the
slope of the line to be the slope at the midpoint of the subinterval.
The value of y at the midpoint of the subinterval is approximated by Euler method:

y(x0 + h/2) = y_(1/2) = y0 + (h/2) f(x0, y0)

and then by Euler modification method

y1 = y0 + h f(x0 + h/2, y(x0 + h/2))

Thus, the formula becomes

y_(n+1) = y_n + h f(x_(n+1/2), y_(n+1/2))

where y_(n+1/2) = y_n + (h/2) f(x_n, y_n)
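The two refinements differ only in which slope multiplies h. The sketch below (helper names are ours) performs one step of each; iterated with h = 0.1 on dy/dx = y − 2x/y, y(0) = 1, it reproduces the values obtained later in Example 6.25 (ii) and (iii).

```python
def improved_euler_step(f, x, y, h):
    y_pred = y + h * f(x, y)                              # predictor (plain Euler)
    return y + (h / 2) * (f(x, y) + f(x + h, y_pred))     # average the two slopes

def modified_euler_step(f, x, y, h):
    y_mid = y + (h / 2) * f(x, y)                         # Euler half-step to the midpoint
    return y + h * f(x + h / 2, y_mid)                    # slope taken at the midpoint

f = lambda x, y: y - 2 * x / y
for step in (improved_euler_step, modified_euler_step):
    x, y = 0.0, 1.0
    for _ in range(2):                                    # advance to x = 0.2 with h = 0.1
        y = step(f, x, y, 0.1)
        x += 0.1
    print(step.__name__, round(y, 3))                     # ≈ 1.184 and ≈ 1.183
```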

dy
Example 6.21: Solve for y the differential equation = x + y, y(0) = 1 at x = 0.0 (0.2) 0.8 using
dx
(i) Euler’s method   (ii) Picard’s method
Also compare with exact values.
Solution:
dy
= f ( x, y ) = x + y ; y(0) = 1, h = 0.2
dx
(i) By Euler’s method

n xn yn f ( xn ,yn ) y n +1 = y n + h f ( x n , y n )

0 0.0 1.0000 1.0000 1.2000


1 0.2 1.2000 1.4000 1.4800
2 0.4 1.4800 1.8800 1.8560
3 0.6 1.8560 2.4560 2.3472
4 0.8 2.3472

∴ y ( 0.2 ) = 1.2000, y ( 0.4 ) = 1.4800, y ( 0.6 ) = 1.8560, y ( 0.8 ) = 2.3472




(ii) By Picard’s Method


Iterative formula for approximations of y(x) is
x2
yn ( x ) = y(0) + ∫ ( x + yn −1 ( x ) ) dx = 1 +
x x
+ ∫ yn −1 ( x ) dx
0 2 0

where y0 = y(0) = 1
Following table gives approximations

n yn(x)
0 1
x2
1 1+ x +
2
x3
2 1+ x + x2 +
6
x3 x 4
3 1+ x + x2 + +
3 24
x3 x 4 x5
4 1+ x + x2 + + +
3 12 120
x3 x 4 x5 x6
5 1+ x + x2 + + + +
3 12 60 720
x3 x 4 x5 x6 x7
6 1+ x + x2 + + + + +
3 12 60 360 5040

from last two approximations, y (x) is approximated to


x3 x 4 x5
y( x ) 1 + x + x 2 + + +
3 12 60 
( 0.2 ) ( 0.2 ) ( 0.2 )
3 4 5

y ( 0.2 ) = 1 + ( 0.2 ) + ( 0.2 ) +


2
∴ + +  1.2428
3 12 60 
( 0.4 ) ( 0.4 ) ( 0.4 )
3 4 5

y ( 0.4 ) = 1 + ( 0.4 ) + ( 0.4 ) +


2
+ +  1.5836
3 12 60
( 0.6 ) ( 0.6 ) ( 0.6 )
3 4 5

y ( 0.6 ) = 1 + ( 0.6 ) + ( 0.6 ) +


2
+ +  2.0441
3 12 60
( 0.8) ( 0.8) ( 0.8)
3 4 5

y ( 0.8 ) = 1 + ( 0.8 ) + ( 0.8 ) +


2
+ +  2.6503
3 12 60
Analytically
dy
− y = x ; y ( 0) = 1
dx 

It is Leibnitz’s linear differential equation.


I.F. = e–x
\ solution is

ye − x = ∫ xe − x dx = ( − x − 1) e − x + c

∴ y = −1 − x + c e  x

∴ y(0) = −1 + c = 1    (∵ y(0) = 1)

∴ c = 2

∴ particular solution is
y = 2ex – 1 – x
From this, we have
y ( 0.2 ) = 2e 0.2 − 1 − 0.2  1.2428

y ( 0.4 ) = 2e 0.4 − 1 − 0.4  1.5836

y ( 0.6 ) = 2e 0.6 − 1 − 0.6  2.0442

y ( 0.8 ) = 2e 0.8 − 1 − 0.8  2.6511

By Euler method y ( 0.2 ) is correct only up to one decimal and y ( 0.4 ), y ( 0.6 ), y ( 0.8 ) are not
correct even up to one decimal. But by Picard’s method y ( 0.2 ) and y ( 0.4 ) are correct up to four
decimals, y ( 0.6 ) is correct up to three decimals and y ( 0.8 ) is correct up to two decimals.

dy y − x
Example 6.22: Solve the differential equation by Euler’s method = with the initial
condition y (0) = 1 for x = 0.1 taking h = 0.02. dx y + x
Solution:
dy y−x 2x
= f ( x, y ) = = 1− ; y(0) = 1, h = 0.02
dx y+x y+x

n xn yn f ( xn ,yn ) y n +1 = y n + h f ( x n , y n )

0 0.00 1.0000 1.0000 1.0200


1 0.02 1.0200 0.9615 1.0392
2 0.04 1.0392 0.9259 1.0577
3 0.06 1.0577 0.8926 1.0756
4 0.08 1.0756 0.8615 1.0928
5 0.10 1.0928

∴ y ( 0.1)  1.0928


dy
Example 6.23: Solve the differential equation = x + y, y(0) = 1 for y (0.1), y (0.2), y (0.3)
using modified Euler’s method. dx

Solution:
dy
f ( x, y ) = = x + y, y(0) = 1, h = 0.1
dx
h
n xn yn f ( xn ,yn ) y n + 1/ 2 = y n + f ( x n , y n ) f ( x n + 1/ 2 , y n + 1/ 2 ) y n + 1 = y n + h f ( x n + 1/ 2 , y n + 1/ 2 )
2
0 0 1 1 1.05 1.1 1.11
1 0.1 1.11 1.21 1.1705 1.3205 1.24205
2 0.2 1.24205 1.44205 1.31415 1.56415 1.39846
3 0.3 1.39846

∴ y ( 0.1)  1.11

y ( 0.2 )  1.2420

y ( 0.3)  1.3985 

Example 6.24: Use modified Euler’s formula to obtain y (0.2), y (0.4), and y (0.6) correct to three
dy
decimal places given that = y − x 2 with initial condition y(0) = 1.
dx
Solution:
dy
= f ( x , y ) = y − x 2 ; y ( 0) = 1
dx 
We take h = 0.1

h
n xn yn f (xn, yn) y n + 1/ 2 = y n + f ( x n , y n ) f ( x n + 1/ 2 , y n + 1/ 2 ) y n + 1 = y n + h f ( x n + 1/ 2 , y n + 1/ 2 )
2
0 0 1 1 1.05 1.0475 1.1048
1 0.1 1.1048 1.0948 1.1595 1.1370 1.2185
2 0.2 1.2185 1.1785 1.2774 1.2149 1.3400
3 0.3 1.3400 1.2500 1.4025 1.2800 1.4680
4 0.4 1.4680 1.3080 1.5334 1.3309 1.6011
5 0.5 1.6011 1.3511 1.6687 1.3662 1.7377
6 0.6 1.7377

\ y ( 0.2 )  1.219, y ( 0.4 )  1.468, y ( 0.6 )  1.738




dy 2x
Example 6.25: Solve = y − , y ( 0 ) = 1 in the range 0 ≤ x ≤ 0.2 using
dx y
(i)  Euler’s method    (ii)  Improved Euler’s method    (iii)  Modified Euler’s method.

Solution:
dy 2x
= f ( x, y ) = y − ; y (0 ) = 1, h = 0.1
dx y 
(i)  By Euler’s Method

xn yn f (xn, yn) y n +1 = y n + h f ( x n , y n )

0 1 1 1.1
0.1 1.1 0.9182 1.1918
0.2 1.1918

\ y (0.1)  1.1, y (0.2)  1.192




(ii) By Improved Euler’s Method

y n( p+)1 = y n ( c ) h
) ( ) ( ) ( )
xn yn (c) f (xn, yn(c)) f x n + 1 , y n( p+)1 y n( c+)1 = y n ( c ) + f x n , y n ( c ) + x n +1 , y n( p+)1 
(
+ h f x n , y n (c ) 2

0 1 1 1.1 0.9182 1.0959


0.1 1.0959 0.9139 1.1872 0.8503 1.1841
0.2 1.1841

\ y ( 0.1)  1.096, y ( 0.2 )  1.184




(iii)  By Modified Euler’s Method

h
xn yn f (xn, yn) y n + 1/ 2 = y n + f ( x n , y n ) f ( x n + 1/ 2 , y n + 1/ 2 ) y n + 1 = y n + h f ( x n + 1/ 2 , y n + 1/ 2 )
2
0 1 1 1.05 0.9548 1.0955
0.1 1.0955 0.9129 1.1411 0.8782 1.1833
0.2 1.1833

\ y ( 0.1)  1.096, y ( 0.2 )  1.183




Exercise 6.5

dy dy
1. Let = x + y and y = 1 when x = 0. Find 7. Solve the equation = x + y2 ; y (0) = 1
dx dx
y when x = 0.05, 0.10, 0.15 and 0.20 by for y at x = 0.1 in two steps using modified
Euler’s method. Euler’s method.
dy x − y 8. Using modified Euler’s formula, deter-
2. Solve by Euler’s method = ,
dx 2 mine the value of y when x = 0.1 given that
1 y(0) = 1 and y ′ = x 2 + y 2 by taking h = 0.1.
y ( 0 ) = 1 over [0, 3] using step size .
2 9. Using Euler’s modified ­ method, find a
3. Using Euler’s method, find an approxi- dy
solution of the equation = x+ y
mate value of y corresponding to x = 2, dx
dy = f ( x, y ) with initial condition y = 1 at
given that = x + 2 y and y = 1 when
x = 1. dx
x  = 0 for the range 0 ≤ x ≤ 0.6 in steps
4. Using Euler’s method, solve for y at of 0.2.
dy dy
x = 0.1 from = x + y + xy, y ( 0 ) = 1 10. Solve = 1 − y, y (0 ) = 0 in the range
dx dx
taking step size h = 0.025. 0 ≤ x ≤ 0.3 using
5. Use Euler’s method to solve nu-  (i)  Euler’s method
merically the initial value problem  (ii)  improved Euler’s method
u ′ = −2t u 2 , u (0 ) = 1 with h = 0.2, 0.1 on (iii)  modified Euler’s method
the interval [0, 1]. by choosing h = 0.1. Compare the answers
6. Apply Euler’s modified method to solve with exact solution.
dy dy
= x + 3 y subject to y(0) = 1 to find an 11. Given that = x + y 2 and y = 1 at x = 0,
dx dx
approximate value of y when x = 1. find an approximate value of y at 0.5 by
improved Euler’s method.

Answers 6.5

 1.  y ( 0.05 ) = 1.05, y ( 0.10 ) = 1.105, y ( 0.15 ) = 1.16525, y ( 0.20 ) = 1.23101

 2.  y (0.5)  0.7500, y (1.0 )  0.6875, y (1.5)  0.7656,


y ( 2)  0.9492, y ( 2.5)  1.2119, y (3)  1.5339
 3.  y ( 2 )  8.1619 (taking h = 0.2)

 4.  y ( 0.1)  1.1117



 5. With h = 0.2, u ( 0.2 )  1, u ( 0.4 )  0.92, u ( 0.6 )  0.7846, u ( 0.8 )  0.6369, u (1)  0.5071

With h = 0.1, u ( 0.1)  1, u ( 0.2 )  0.98, u ( 0.3)  0.9416,

u ( 0.4 )  0.8884, u ( 0.5 )  0.8253, u ( 0.6 )  0.7572,

u ( 0.7 )  0.6884, u ( 0.8 )  0.6221, u ( 0.9 )  0.5602, u (1.0 )  0.5037

 6. y (1)  21.082

 7. y ( 0.1)  1.1162

 8. y ( 0.1)  1.1105

 9. y (0.2)  1.2298, y (0.4 )  1.5230, y (0.6 )  1.8828

10.  (i)  y (0.1)  0.1, y (0.2)  0.19, y (0.3)  0.271

 (ii)  y (0.1)  0.095, y (0.2)  0.1810, y (0.3)  0.2588

(iii)  y (0.1)  0.095, y (0.2)  0.1810, y (0.3)  0.2588

and exact values are y (0.1) = 0.0952, y (0.2) = 0.1813, y (0.3) = 0.2592

11.  y (0.5) = 2.207 (taking h = 0.1)

6.4.6 Runge’s Method
dy
Consider the differential equation = f ( x, y ) ; y ( x0 ) = y0
dx
\ value of f at x = x0 is f (x0, y0).
By modified Euler’s method
h  h h 
Value of f at x = x0 + is f  x0 + , y0 + f ( x0 , y0 )
2  2 2 

Now by Euler method at x = x0 + h, y is estimated by

y ( p) = y0 + h f ( x0 , y0 ) .

With this value of y
(
slope of solution curve at x0 + h, y (
p)
) is f ( x
0 + h, y (
p)
). 
Using this, we improve the value of y at x = x0 + h by y0 + hf x0 + h, y ( ) .
p
( )
( (
Then, value of f at x = x0 + h is f x0 + h, y0 + hf x0 + h, y ( p) . ))

 h
Now, by Simpson’s one-third rule  with h replaced by 
 2
x0 + h h  h h 
∫ f ( x, y ) dx =  f ( x0 , y0 ) + 4 f  x0 + , y0 + f ( x0 , y0 ) 
x0 6  2 2 

( (
+ f x0 + h, y0 + hf x0 + h, y (
p)
))
dy
Thus, from differential equation = f ( x, y ) ; y ( x0 ) = y0
dx
x0 + h dy x0 + h
we have ∫ dx = ∫ f ( x, y ) dx
x0 dx x0

h  h h 
⇒ y ( x0 + h ) − y0 = f ( x0 , y0 ) + 4 f  x0 + 2 , y 0 + 2 f ( x 0 , y 0 ) 
6   

( (
+ f x0 + h, y0 + h f x0 + h, y (
p)
))
\ if we define k1 = hf ( x0 , y0 )

 h k 
k2 = h f  x0 + , y0 + 1 
 2 2

k ′ = h f ( x0 + h, y0 + k1 )

k3 = h f ( x0 + h , y0 + k ′ )

1
then y ( x0 + h ) = y0 + ( k1 + 4k2 + k3 )
6 
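The formula above translates directly into code. The following sketch (helper name ours) performs one Runge step; iterated three times with h = 0.2 for dy/dx = x − y, y(1) = 0.4, it reproduces y(1.6) ≈ 0.8195 found later in Example 6.30.

```python
def runge_step(f, x, y, h):
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k_dash = h * f(x + h, y + k1)
    k3 = h * f(x + h, y + k_dash)
    return y + (k1 + 4 * k2 + k3) / 6

f = lambda x, y: x - y
x, y = 1.0, 0.4
for _ in range(3):
    y = runge_step(f, x, y, 0.2)
    x += 0.2
print(round(y, 4))    # ≈ 0.8195
```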

6.4.7 Runge–Kutta Method
Runge–Kutta method requires the functional values at some points. If solution by Runge–Kutta
method agrees with the solution by Taylor’s series method up to hn then it will be called Runge–
Kutta method of order n.
First order Runge–Kutta Method
dy
Solution by Euler’s method of differential equation = f ( x, y ) ; y ( x0 ) = y0 is
dx
y ( x0 + h) = y0 + hf ( x0 , y0 )

Also by Taylor series method
y ( x0 + h) = y0 + h y0′ +  = y0 + hf ( x0 , y0 ) + 

Thus, solution by Euler’s method agrees with solution by Taylor’s series solution up to term in h.
Hence, Euler method is Runge–Kutta method of the first order.

Second order Runge–Kutta Method


dy
Solution of the differential equation = f ( x, y ) ; y ( x0 ) = y0 by improved Euler’s method is
dx


y(x0 + h) = y0 + (h/2) [f(x0, y0) + f(x0 + h, y1^(p))]

where y1^(p) = y0 + h f(x0, y0)

∴ y(x0 + h) = y0 + (h/2) [f(x0, y0) + f(x0 + h, y0 + h f(x0, y0))]

= y0 + (h/2) [f(x0, y0) + f(x0, y0) + h (∂f/∂x)(x0, y0) + h f(x0, y0) (∂f/∂y)(x0, y0) + O(h^2)]

= y0 + h f(x0, y0) + (h^2/2) [∂f/∂x + f ∂f/∂y]_(x0, y0) + O(h^3)

= y0 + h f(x0, y0) + (h^2/2) f′(x0, y0) + O(h^3)

which is same as the solution given by Taylor’s series method up to terms in h2.
Thus, solution by improved Euler’s method agrees with solution by Taylor’s series solution up to
term in h2. Hence, improved Euler method is Runge–Kutta method of order 2.
Thus, second order Runge–Kutta method can be written as
k1 = hf ( x0 , y0 )

k2 = hf ( x0 + h, y0 + k1 )

1
k = ( k1 + k2 )
2 
y ( x0 + h ) = y0 + k

This is also called as Runge’s formula of order 2.
Third Order Runge–Kutta Method
Without giving the proof, we mention that the solution of differential equation
dy
dy
== ff ((xx,, yy));; yy((xx00 )) == yy00 by Runge’s method is given by
dx
dx
1
y ( x0 + h ) = y0 + ( k1 + 4 k2 + k3 )
6

where k1 = hf ( x0 , y0 )


 h k 
k2 = hf  x0 + , y0 + 1  
 2 2

k ′ = hf ( x0 + h , y0 + k1 )

k3 = hf ( x0 + h, y0 + k ′ )

is of third order.
Also, the solution
1
y ( x0 + h ) = y0 + ( k1 + 4k2 + k3 )
6 
where k1 = hf ( x0 , y0 )

 h k 
k2 = hf  x0 + , y0 + 1 
 2 2

k3 = hf ( x0 + h, y0 + k2 )

is of third order. This solution is called third order Runge–Kutta method. It is also known as
Kutta’s third order rule.
Runge–Kutta Method of 4th Order
The proof of Runge–Kutta method of third or fourth order are beyond the scope of this book.
However, we state the fourth order Runge–Kutta method.
dy
If we are to solve the differential equation = f ( x, y ) ; y ( x0 ) = y0 then solution by Runge–
Kutta method of fourth order is dx
1
y ( x0 + h ) = y0 + ( k1 + 2k2 + 2k3 + k4 )
6 
where k1 = hf (x0, y0)
 h k 
k2 = hf  x0 + , y0 + 1 
 2 2

 h k 
k3 = hf  x0 + , y0 + 2 
 2 2

k4 = hf ( x0 + h, y0 + k3 )
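In code, one step of the fourth order method is only a few lines. The sketch below (helper name ours) applies it to dy/dx = x^2 + y^2, y(0) = 1 with h = 0.1 and reproduces y(0.1) ≈ 1.1115 and y(0.2) ≈ 1.2530 obtained in Example 6.26.

```python
def rk4_step(f, x, y, h):
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda x, y: x * x + y * y
x, y = 0.0, 1.0
for _ in range(2):
    y = rk4_step(f, x, y, 0.1)
    x += 0.1
    print(round(x, 1), round(y, 4))    # 1.1115, then 1.2530
```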

Remark 6.2:  When only Runge–Kutta method is mentioned that means Runge–Kutta method
of order 4.
Runge–Kutta method can be applied directly to differential equation of second order. Suppose
we are to find the solution of
d2 y
= f ( x, y, y ′ ) ; y ( x0 ) = y0 , y ′ ( x0 ) = y0′
dx 2 

we put y ′ = z = φ ( x, y, z )

then z ′ = f ( x, y, z ) ; y ( x0 ) = y0 , z ( x0 ) = z0 = y0′

then we find k1 = h φ ( x0 , y0 , z0 ) = h z0 = h y0′

l1 = h f ( x0 , y0 , z0 )

 h k l   l
k 2 = h φ  x 0 + , y0 + 1 , z 0 + 1  = h  z 0 + 
 2 2 2   2 

 h k l 
l2 = h f  x0 + , y0 + 1 , z0 + 1 
 2 2 2

 h k l   l 
k3 = h φ  x0 + , y0 + 2 , z0 + 2  = h  z0 + 2 
 2 2 2  2

 h k l 
l3 = h f  x0 + , y0 + 2 , z0 + 2 
 2 2 2

k4 = h φ ( x0 + h, y0 + k3 , z0 + l3 ) = h ( z0 + l3 )

l4 = h f ( x0 + h, y0 + k3 , z0 + l3 )

1
Then, k = ( k1 + 2k2 + 2k3 + k4 )
6 
1
l = ( l1 + 2l2 + 2l3 + l4 )
6 
and y ( x0 + h ) = y0 + k

y ′ ( x0 + h ) = z0 + l
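Written for the pair (y, z) with y′ = z, z′ = f(x, y, z), the same scheme looks as follows (a sketch with our own function name). Applied to y″ = x y′^2 − y^2, y(0) = 1, y′(0) = 0 with h = 0.2, it gives y(0.2) ≈ 0.98015, the value quoted in the answer to problem 10 of Exercise 6.6.

```python
def rk4_second_order(f, x, y, z, h):
    # y' = z, z' = f(x, y, z); one Runge-Kutta step of length h
    k1 = h * z
    l1 = h * f(x, y, z)
    k2 = h * (z + l1 / 2)
    l2 = h * f(x + h / 2, y + k1 / 2, z + l1 / 2)
    k3 = h * (z + l2 / 2)
    l3 = h * f(x + h / 2, y + k2 / 2, z + l2 / 2)
    k4 = h * (z + l3)
    l4 = h * f(x + h, y + k3, z + l3)
    y_new = y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
    z_new = z + (l1 + 2 * l2 + 2 * l3 + l4) / 6
    return y_new, z_new

f = lambda x, y, z: x * z * z - y * y        # y'' = x y'^2 - y^2
print(rk4_second_order(f, 0.0, 1.0, 0.0, 0.2))   # y(0.2) ≈ 0.98015
```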

dy
Example 6.26: Apply the fourth order Runge–Kutta method to solve = x 2 + y 2 , y ( 0 ) = 1.
dx
Take step size h = 0.1 and determine approximation to y (0.1) and y (0.2) correct to four decimal
places.
Solution:
dy
= f ( x, y ) = x 2 + y 2 , y (0 ) = 1, h = 0.1
dx 
If y (xn) = yn

then (
k1 = h f ( xn , yn ) = 0.1 xn2 + yn2 )

 h k   2
0.1   k1   
2
k2 = h f  xn + , yn + 1  = ( 0.1)  xn + + y + 
2   2  
n
 2 2 

 h k   2
0.1   k2  
2

k3 = h f  x n + , y n + 2  = ( 0 .1)  + +
 n 2   n 2 
x y +
 2 2      

k4 = h f ( xn + h, yn + k3 ) = ( 0.1) ( xn + 0.1) + ( yn + k3 ) 
2 2

 
1
k = ( k1 + 2k2 + 2k3 + k4 )
6 
yn +1 = y ( xn ) + k

n xn yn k1 k2 k3 k4 k yn + 1 = yn + k
0 0 1 0.1 0.1105 0.11161 0.12457 0.11146 1.11146
1 0.1 1.11146 0.12453 0.14001 0.14184 0.16108 0.14155 1.25301
2 0.2 1.25301
\ up to 4 decimals
y ( 0.1) = 1.1115, y ( 0.2 ) = 1.2530


dx dy
Example 6.27: Solve the system of differential equations = y − t, = x + t with x = 1, y = 1
when t = 0, taking h = 0.1. dt dt
Solution:
dx dy
= f ( t , x, y ) = y − t , = φ ( t , x, y ) = x + t
dt dt 
with x(0) =1, y(0) = 1; h = 0.1

k1 = h f ( t0 , x0 , y0 ) = 0.1(1 − 0 ) = 0.1

l1 = h φ ( t0 , x0 , y0 ) = 0.1(1 + 0 ) = 0.1

 h k l 
k2 = h f  t0 + , x0 + 1 , y0 + 1  = 0.1(1.05 − 0.05 ) = 0.1
 2 2 2

 h k l 
l2 = h φ  t0 + , x0 + 1 , y0 + 1  = 0.1(1.05 + 0.05 ) = 0.11 
 2 2 2

 h k l 
k3 = h f  t0 + , x0 + 2 , y0 + 2  = 0.1(1.055 − 0.05 ) = 0.1005
 2 2 2


 h k l 
l3 = h φ  t0 + , x0 + 2 , y0 + 2  = 0.1(1.05 + 0.05 ) = 0.11 
 2 2 2

k4 = h f ( t0 + h, x0 + k3 , y0 + l3 ) = 0.1(1.11 − 0.1) = 0.101



l4 = h φ ( t0 + h, x0 + k3 , y0 + l3 ) = 0.1(1.1005 + 0.1) = 0.12005

1
k= ( k1 + 2k2 + 2k3 + k4 ) = 0.10033
6 
1
l= ( l1 + 2l2 + 2l3 + l4 ) = 0.11001
6 
\ x ( 0.1) = x ( 0 ) + k = 1.10033

y ( 0.1) = y ( 0 ) + l = 1.11001

\ up to four decimals
x ( 0.1)  1.1003, y ( 0.1)  1.1100


Example 6.28: In an L-R-C circuit, voltage v(t) across the capacitor is given by the equation
d 2v dv
2
+ RC
LC +v =0
dt dt
dv
subject to the conditions t = 0, v = v0, = 0.
dt
dv
Taking h = 0.02 sec., use Runge–Kutta method to calculate v and when t = 0.02 for the data
v0 = 10 volts, C = 0.1 farad, L = 0.5 henry and R = 10 ohms. dt

Solution:
Using the given data, differential equation is
d 2v R dv v dv  dv 
2
=− − = −20 − 20 v ; v(0) = v0 = 10,   = 0
dt L dt LC dt  dt t = 0 
dv
Let = x = f ( t , v, x )
dt 
dv
\ = x = f ( t , v, x ) 
dt

dx d 2 v
and = = φ ( t , v, x ) = −20 ( v + x ) ; v ( 0 ) = 10, x ( 0 ) = 0, h = 0.02
dt dt 2 
k1 = h f ( t0 , v0 , x0 ) = ( 0.02 )( 0 ) = 0


l1 = h φ ( t0 , v0 , x0 ) = ( 0.02 ) ( −200 ) = −4.0 

 h k l 
k2 = h f  t0 + , v0 + 1 , x0 + 1  = ( 0.02 ) ( −2.0 ) = −0.04
 2 2 2

 h k l 
l2 = h φ  t0 + , v0 + 1 , x0 + 1  = ( 0.02 )  −20 (10 − 2 )  = −3.2
 2 2 2

 h k l 
k3 = h f  t0 + , v0 + 2 , x0 + 2  = ( 0.02 ) [ −1.6 ] = −0.032
 2 2 2

 h k l 
l3 = h φ  t0 + , v0 + 2 , x0 + 2  = ( 0.02 )  −20 (10 − 0.02 − 1.6 )  = −3.352
 2 2 2

k4 = h f ( t0 + h, v0 + k3 , x0 + l3 ) = ( 0.02 ) ( −3.352 ) = −0.06704

l4 = h φ ( t0 + h, v0 + k3 , x0 + l3 ) = ( 0.02 )  −20 (10 − 0.032 − 3.352 )  = −2.6464

1 1
k= ( k1 + 2k2 + 2k3 + k4 ) = 0 + 2 ( −0.04 ) + 2 ( −0.032 ) − 0.06704  = −0.003517
 6 6
1 1
l= ( l1 + 2l2 + 2l3 + l4 ) =  −4.0 + 2 ( −3.2 ) + 2 ( −3.352 ) − 2.6464  = −3.29173
6 6 
\ to four decimals
v ( 0.02 ) = v ( 0 ) + k = 10 − 0.0352 = 9.9648 volts

 dv 
 dt  = x ( 0 ) + l = 0 − 3.2917 = −3.2917 volts/sec.
 t =0.02 
Example 6.29: Apply Runge’s formula of order 2 to approximate value of y when x = 1.1 given
dy
= 3 x + y 2 and y = 1.2 when x = 1.
dx
Solution:
dy
= 3 x + y 2 = f ( x, y ) , y (1) = 1.2, h = 0.1
dx 
k1 = hf ( x0 , y0 ) = 0.444 

k2 = hf ( x0 + h, y0 + k1 ) = 0.60027

1
k = ( k1 + k2 ) = 0.52214
2 
\ y (1.1) = y (1) + k = 1.72214


\ up to four decimal places


y(1.1) = 1.7221.

Note: Runge’s (or Runge–Kutta’s) second order method is same as improved Euler’s method.

Example 6.30: Use Runge’s method to approximate y when x = 1.6 given that y = 0.4 at x = 1 and
dy
= x − y.
dx 
Solution:
dy
= x − y = f ( x, y ) , y (1) = 0.4,
dx 
Taking h = 0.2
k1 = hf ( xn , yn ) = 0.2 ( xn − yn )

 h k   h k 
k2 = hf  xn + , yn + 1  = 0.2  xn − yn + − 1 
 2 2  2 2

k ′ = hf ( xn + h, yn + k1 ) = 0.2 ( xn − yn + h − k1 )

k3 = hf ( xn + h, yn + k ′ ) = 0.2 ( xn − yn + h − k ′ )

1
k= ( k1 + 4k2 + k3 )
6 
yn +1 = yn + k 

n xn yn k1 k2 k ′ k3 k yn + 1
0 1 0.4 0.12 0.128 0.136 0.1328 0.12747 0.52747
1 1.2 0.52747 0.13451 0.14106 0.14760 0.14499 0.14062 0.66809
2 1.4 0.66809 0.14638 0.15174 0.15711 0.15496 0.15138 0.81947
3 1.6 0.81947

\ up to four decimals
y (1.6 ) = 0.8195


Example 6.31: Apply Runge–Kutta method of third and fourth orders to find an approximate
value of y when x = 0.2 taking h = 0.1 given that
dy
= x + y , y = 1 when x = 0.
dx 

Solution:
dy
= x + y = f ( x, y ) ; y ( 0 ) = 1, h = 0.1
dx 
By Runge–Kutta method of third order

k1 = hf ( xn , yn ) = 0.1( xn + yn )

 h k   k 
k2 = hf  xn + , yn + 1  = 0.1  xn + yn + 0.05 + 1 
 2 2  2

k3 = hf ( xn + h, yn + k2 ) = 0.1( xn + yn + 0.1 + k2 )

1
k= ( k1 + 4k2 + k3 )
6 
yn + 1 = yn + k

n xn yn k1 k2 k3 k yn + 1
0 0 1 0.1 0.11 0.121 0.11017 1.11017
1 0.1 1.11017 0.12102 0.13207 0.14422 0.13225 1.24242
2 0.2 1.24242

\ up to four decimals
y(0.2) = 1.2424

(ii)  By Runge–Kutta method of fourth order

k1 = hf ( xn , yn ) = 0.1( xn + yn )


 h k   k 
k2 = hf  xn + , yn + 1  = 0.1  xn + yn + 0.05 + 1 
 2 2  2

 h k   k 
k3 = hf  xn + , yn + 2  = 0.1  xn + yn + 0.05 + 2 
 2 2  2

k4 = hf ( xn + h, yn + k3 ) = 0.1( xn + yn + h + k3 )

1
k= ( k1 + 2k2 + 2k3 + k4 )
6 
yn+1 = yn + k

n xn yn k1 k2 k3 k4 k yn + 1
0 0 1 0.1 0.11 0.1105 0.12105 0.11034 1.11034
1 0.1 1.11034 0.12103 0.13209 0.13264 0.14430 0.13246 1.24280
2 0.2 1.24280
\ up to four decimals
(0.2) = 1.2428
y 

Exercise 6.6

dy
1. Solve the equation = − y, for values of 6. Use the Runge–Kutta fourth order method
dx to find y (0.2) with h = 0.1 for the initial
y at x = 0.1 and x = 0.2 using Runge–Kutta
method of value problem
dy
(i)  order two = x + y , y ( 0 ) = 1.
(ii)  order three dx
(iii)  order four. 7. Using Runge–Kutta method of fourth or-
dy y 2 − x 2
2. Use Runge’s formula (third order) to der, solve = with y (0 ) = 1 at x = 0.2
dy y 2
− x 2 dy dx y 2 + x 2
solve the differential =equation with = xy−(0y)for
= 1yatatx x= =01.2.1and 0.4.
dy dx y 2 + x 2 dx
= x − y for y at x = 1.1 subject to y = 1 when
dx 8. Solve numerically the system of simul-
x = 1. dx dy
taneous equations + 2 x + 3 y = 0, + 3x +
3. Use Runge’s method to approximate dx y dy dt dt
when x = 0.1 given that y = 1 at x =+02 xand + 3 y = 0, + 3 x + 2 y = 2e 2t with initial conditions
dy dt dt
= 3x + y 2 . x = 1, y = 2 at t = 0, for t = 0.1 by Runge–
dx
Kutta method.
4. Apply Runge–Kutta method to find an ap- 9. Using Runge–Kutta method solve the
proximate value of y when x = 0.2 in steps
dy dy dz
of 0.1 if = x + y 2 given that y = 1 when equations = yz + x; = xz + y given
dx dx dx
x = 0. that y(0) = 1; z(0) = –1 for y(0.2) and
5. Use the fourth order Runge–Kutta meth- z(0.2).
od to solve the initial value problems 10. Using Runge–Kutta method, solve the
u′=  –2tu2, u(0) = 1 with h = 0.2 on the equation y ′′ = xy ′2 − y 2 for x = 0.2 given
­interval [0,0.4]. that y = 1, y′ = 0 when x = 0.

Answers 6.6

  1.    (i)  y ( 0.1)  0.905, y ( 0.2 )  0.8190


(ii)  y ( 0.1)  0.9049, y ( 0.2 )  0.8189
(iii)  y ( 0.1)  0.9048, y ( 0.2 )  0.8187

 2.  y (1.1)  1.0048

 3.  y ( 0.1)  1.12725

 4.  y ( 0.2 )  1.2736

 5.  u (0.2)  0.96153, u (0.4 )  0.86205

 6.  y ( 0.2 )  1.2194

 7.  y ( 0.2 )  1.196, y ( 0.4 )  1.3753

 8.  x ( 0.1)  0.3292, y ( 0.1)  1.6668

 9.  y ( 0.2 )  0.8522, z ( 0.2 )  −0.8341

10.  y ( 0.2 )  0.98015

6.4.8 Milne’s Method
First, we prove quadrature formula
∫_(x0)^(x0+4h) f(x) dx = (4h/3) [2f1 − f2 + 2f3]

Let x = x0 + ph
∴ dx = h dp

∴ ∫_(x0)^(x0+4h) f(x) dx = h ∫0^4 f(x0 + ph) dp = h ∫0^4 E^p f(x0) dp

= h ∫0^4 (1 + Δ)^p f(x0) dp

≈ h ∫0^4 [f0 + p Δf0 + (p(p − 1)/2) Δ^2 f0 + (p(p − 1)(p − 2)/6) Δ^3 f0] dp

= h [p f0 + (p^2/2) Δf0 + (1/2)(p^3/3 − p^2/2) Δ^2 f0 + (1/6)(p^4/4 − p^3 + p^2) Δ^3 f0]_0^4


 20 8 
 = h  4 f 0 + 8 ( f1 − f 0 ) + ( f 2 − 2 f1 + f 0 ) + ( f 3 − 3 f 2 + 3 f1 − f 0 ) 
 3 3 
4h
= [ 2 f1 − f 2 + 2 f3 ]
3 
Now, suppose we are to find the solution of dy/dx = f(x, y), y(x0) = y0.
In Milne's method we first find y(x0 + h), y(x0 + 2h), y(x0 + 3h) by the Taylor series method or by Picard's method. Let these be y1, y2, y3 respectively.
From these values, we find f(x0 + h, y1) = f1, f(x0 + 2h, y2) = f2, f(x0 + 3h, y3) = f3.

Now, ∫_(x0)^(x0+4h) (dy/dx) dx = ∫_(x0)^(x0+4h) f dx

∴ y(x0 + 4h) = y0 + ∫_(x0)^(x0+4h) f dx

y4 ≈ y0 + (4h/3) [2f1 − f2 + 2f3]

This is called the predictor and we write

y4^(p) = y0 + (4h/3) [2f1 − f2 + 2f3].

Then we use Simpson's one-third rule to improve this value. This is called the corrector:

y4^(c) = y2 + (h/3) [f2 + 4f3 + f4^(p)]

where f4^(p) = f(x0 + 4h, y4^(p))

If necessary, the process can be repeated by taking this y4^(c) as the new y4^(p), recomputing f4^(p) and applying the corrector again, until the predicted and corrected values agree to the desired accuracy.
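The predictor–corrector pair is easy to iterate in code. The sketch below (our own helper, which assumes the four starting values are already known) repeats the corrector until two successive values agree; with the data of Example 6.32 (dy/dx = x + y) it returns y(0.4) ≈ 1.5836.

```python
def milne_step(f, xs, ys, h, tol=1e-5):
    # xs, ys hold the four known points x0..x3, y0..y3
    f1, f2, f3 = (f(xs[i], ys[i]) for i in (1, 2, 3))
    x4 = xs[3] + h
    y4 = ys[0] + (4 * h / 3) * (2 * f1 - f2 + 2 * f3)          # predictor
    while True:
        y4_corr = ys[2] + (h / 3) * (f2 + 4 * f3 + f(x4, y4))  # corrector
        if abs(y4_corr - y4) < tol:
            return y4_corr
        y4 = y4_corr

f = lambda x, y: x + y
xs = [0.0, 0.1, 0.2, 0.3]
ys = [1.0, 1.1103, 1.2428, 1.3997]            # starting values given in Example 6.32
print(round(milne_step(f, xs, ys, 0.1), 4))   # ≈ 1.5836
```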
Now, suppose we are to find y (x0 + 4h) when
d2 y  dy 
= f  x, y,  ; y ( x0 ) = y0 , y ′ ( x0 ) = y0′
dx 2  dx  
dy
Let = z = φ ( x, y, z )
dx 
dz
∴ = f ( x, y, z ) ; y ( x0 ) = y0 , z ( x0 ) = y0′ = z0
dx 
find y ( x0 + h) , y ( x0 + 2h) , y ( x0 + 3h)

and z ( x0 + h) , z ( x0 + 2h) , z ( x0 + 3h)

by Taylor-series method

and then find z ′ ( x0 + h ) , z ′ ( x0 + 2h ) , z ′ ( x0 + 3h )


i.e., f at x = x0 + h, x0 + 2h, x0 + 3h

Now by predictor formula find z ( ( x0 + 4h ) p)


and y (
p)
( x0 + 4h ). Then using corrector formula
find z ( x0 + 4 h ) and then y ( x0 + 4 h ).
(c) (c)

Example 6.32: Apply Milne’s method to find the solution of the differential equation
dy
= x + y, y ( 0 ) = 1 in the interval [0, 0. 4] in steps of h = 0.1. It is given that y (0.1) = 1.1103,
dx
y (0.2) = 1.2428, y (0.3) = 1.3997
dy
Solution: = x + y = f ( x, y )
dx 
y (0) = 1, y (0.1) = 1.1103, y (0.2) = 1.2428, y (0.3) = 1.3997; h = 0.1

n x y f=x+y
0 0 1 1
1 0.1 1.1103 1.2103
2 0.2 1.2428 1.4428
3 0.3 1.3997 1.6997

4h
y ( p) (0.4 ) = y (0 ) +
3
(2 f1 − f 2 + 2 f3 )

0.4
= 1+  2 (1.2103) − 1.4428 + 2 (1.6997) = 1.5836
3 


f 4(
p)
= f(
p)
( 0.4, y( ) ( 0.4 )) = 0.4 + 1.5836 = 1.9836 
p


y(
c)
( 0.4 ) = y ( 0.2 ) +
h
3
(
f 2 + 4 f 3 + f 4( )
p


)
0.1
= 1.2428 +
3
(1.4428 + 4 (1.6997 ) + 1.9836 )

= 1.5836 

which is same as y (p) (0.4)


∴ y (0.4) = 1.5836

dy
Example 6.33: Solve the initial value problem = 1 + xy 2 , y ( 0 ) = 1 for x = 0.4 by Milne’s
dx
method, it is given that y (0.1) = 1.105, y (0.2) = 1.223, y (0.3) = 1.355.

Solution:
dy
= f ( x, y ) = 1 + xy 2
dx
n x y f = 1 + xy 2
0 0 1 1
1 0.1 1.105 1.1221
2 0.2 1.223 1.2991
3 0.3 1.355 1.5508

4h
y(
p)
( 0.4 ) = y ( 0 ) + [ 2 f1 − f 2 + 2 f3 ]
3 
0.4
= 1+  2 (1.1221) − 1.2991 + 2 (1.5508 )  = 1.5396 
3 


f 4(
p)
= f(
p)
( 0.4, y( ) ( 0.4 )) = 1 + ( 0.4 )(1.5396 )
p 2
= 1.9481

h
y(
c)
( 0.4 ) = y ( 0.2 ) + f 2 + 4 f 3 + f 4( ) 
p

3 

0.1
= 1.223 + 1.2991 + 4 (1.5508 ) + 1.9481 = 1.5380 
3 
y(
p)
( 0.4 ) and y (
c)
( 0.4 ) are not equal up to three decimals.

Again, taking y (
p)
( 0.4 ) as y (
c)
( 0.4 ) obtained, i.e., 1.5380, we have


f 4(
p)
= f(
p)
( 0.4, y( ) ( 0.4 )) = 1 + ( 0.4 )(1.5380 )
p 2
= 1.9462

h
y ( ) ( 0.4 ) = y ( 0.2 ) +  f 2 + 4 f 3 + f 4( ) 
c p

3 

0.1
= 1.223 + 1.2991 + 4 (1.5508 ) + 1.9462  = 1.5380
3  
∴ up to three decimals, y (
p)
( 0.4 ) and y (
c)
( 0.4 ) are equal.

∴ y ( 0.4 ) = 1.538

dy 1
Example 6.34: Given
dx 2
( )
= 1 + x 2 y 2 and y ( 0 ) = 1, y ( 0.1) = 1.06 , y ( 0.2 ) = 1.12 and

y ( 0.3) = 1.21. Evaluate y ( 0.4 ) by Milne’s predictor–corrector method.



Solution:
dy 1
dx
(
= f ( x, y ) = 1 + x 2 y 2
2
)
1
n x y f=
2
(1+ x 2 y 2 )
0 0 1 0.5
1 0.1 1.06 0.5674
2 0.2 1.12 0.6523
3 0.3 1.21 0.7979

4h
y(
p)
( 0.4 ) = y ( 0 ) + [ 2 f1 − f 2 + 2 f3 ]
3 
0.4
= 1+  2 ( 0.5674 ) − 0.6523 + 2 ( 0.7979 )  = 1.2771
3  


f 4(
p)
= f(
p)
( 0.4, y( ) ( 0.4 )) = 12 1 + ( 0.4 )  (1.2771)
p 2 2
= 0.9460

h
y(
c)
( 0.4 ) = y ( 0.2 ) + 
f 2 + 4 f 3 + f 4( ) 
p

3 
0.1
= 1.12 + 0.6523 + 4 ( 0.7979 ) + 0.9460  = 1.2797
3 

∴ up to two decimals, y (
p)
( 0.4 ) and y (
c)
( 0.4 ) are equal.
∴ y (0.4) = 1.28

dy
Example 6.35: Solve the differential equation = 2e x − y at x = 0.5 using Milne’s predictor–
corrector method, given that y (0.1) = 2. dx

Solution:
dy
= 2e x − y, y ( 0.1) = 2
dx 
We shall find y ( 0.2 ) , y ( 0.3) , y ( 0.4 ) by Taylor series method.

y ′ = 2e x − y ∴  y ′ (0.1) = 2e 0.1 − 2 = 0.21034 



y ′′ = 2e x − y ′ ∴  y ′′ (0.1) = 2e 0.1 − 0.21034 = 2.00000 

y (n) = 2e x − y (n −1) ; n = 2, 3, 4, 5,… 

∴ y ′′′ (0.1) = 2e 0.1 − 2 = 0.21034 

y iv ( 0.1) = 2e 0.1 − 0.21034 = 2.00000



y v ( 0.1) = 2e 0.1 − 2 = 0.21034

y vi ( 0.1) = 2e 0.1 − 0.21034 = 2.00000

By Taylor’s series expansion
x2 x3 x 4 iv
y ( x + 0.1) = y ( 0.1) + x y ′ ( 0.1) + y ′′ ( 0.1) + y ′′′ ( 0.1) + y ( 0.1) +
2! 3! 4! 
 x2 x4 x6   x3 x5 
\ y ( x + 0.1)  2 1 + + +  + 0.21034  x + + 
 2 24 720   6 120 

 ( 0.1)2 ( 0.1)4 ( 0.1)6   ( 0.1) ( 0.1) 
3 5

\ y ( 0.2 ) = 2 1 + + +  + 0.21034 0.1 + + 


 2 24 720 
  6 120 

= 2 (1.00500 ) + 0.21034 ( 0.10017 )  2.03107

 ( 0.2 ) ( 0.2 ) ( 0.2 ) 
2 4
6
( 0.2 ) ( 0.2 ) 
3 5

y ( 0.3) = 2 1 + + +  + 0.21034 0.2 + +


 2 24 720 
  6 120 


= 2 (1.02007 ) + 0.21034 ( 0.20134 ) = 2.08249

 ( 0.3)2 ( 0.3)4 ( 0.3)6   ( 0.3) ( 0.3) 
3 5

y ( 0.4 ) = 2 1 +
 + +  
+ 0.21034 0.3 + +
 2 24 720 
  6 120 


= 2 (1.04534 ) + 0.21034 ( 0.30452 ) = 2.15473

n x y f = 2e^x − y
0 0.1 2 0.21034
1 0.2 2.03107 0.41174
2 0.3 2.08249 0.61723
3 0.4 2.15473 0.82892

y^(p)(0.5) = y(0.1) + (4h/3)[2f1 − f2 + 2f3]
           = 2 + (0.4/3)[2(0.41174) − 0.61723 + 2(0.82892)] = 2.24855

f4^(p) = f(0.5, y^(p)(0.5)) = 2e^0.5 − 2.24855 = 1.04889

y^(c)(0.5) = y(0.3) + (h/3)[f2 + 4f3 + f4^(p)]
           = 2.08249 + (0.1/3)[0.61723 + 4(0.82892) + 1.04889] = 2.24855

∴ up to four decimals, y^(p)(0.5) and y^(c)(0.5) are equal.

∴ y(0.5) = 2.2486
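The starting values above come from a Taylor expansion about x = 0.1 whose derivatives satisfy y^(n) = 2e^x − y^(n−1). A small Python sketch of my own (the helper name taylor_start is an assumption) that reproduces them:

import math

def taylor_start(x, x0=0.1, y0=2.0, terms=7):
    # successive derivatives at x0: y^(n)(x0) = 2*e**x0 - y^(n-1)(x0)
    d = [y0]
    for _ in range(terms):
        d.append(2*math.exp(x0) - d[-1])
    # Taylor polynomial about x0
    return sum(dk*(x - x0)**k/math.factorial(k) for k, dk in enumerate(d))

for x in (0.2, 0.3, 0.4):
    print(x, round(taylor_start(x), 5))   # close to the tabulated 2.03107, 2.08249, 2.15473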

Example 6.36: Given y″ + xy′ + y = 0, y(0) = 1, y′(0) = 0, obtain y for x = 0(0.1)0.3 by any method. Further continue the solution by Milne’s method to calculate y(0.4).
Solution:
Let y ′ = z = f ( x, y, z )

∴ z ′ = y ′′ = − xz − y = φ ( x, y, z )

where y (0 ) = 1, y ′ (0 ) = z (0 ) = 0

∴ z ′ (0 ) = −(0)(0) − 1 = −1 

Now, by Picard’s method

y(x) = y(0) + ∫_0^x z(x) dx = 1 + ∫_0^x z(x) dx                               (1)

z(x) = z(0) + ∫_0^x (−x z(x) − y(x)) dx

     = −x ∫_0^x z(x) dx + ∫_0^x [∫_0^x z(x) dx] dx − ∫_0^x y(x) dx      (integration by parts)

     = −x[y(x) − 1] + ∫_0^x [y(x) − 1] dx − ∫_0^x y(x) dx               (from (1))

     = −xy(x) + x + ∫_0^x y(x) dx − x − ∫_0^x y(x) dx

     = −xy(x)                                                           (2)

y′ = −xy = f(x, y), y(0) = 1

y(x) = 1 + ∫_0^x (−x y(x)) dx = 1 − ∫_0^x x y dx

∴ y_(n+1) = 1 − ∫_0^x x y_n dx

n    y_n(x)
0    1
1    1 − x^2/2
2    1 − x^2/2 + x^4/8
3    1 − x^2/2 + x^4/8 − x^6/48
4    1 − x^2/2 + x^4/8 − x^6/48 + x^8/384

From the last two approximations,

y(x) ≈ 1 − x^2/2 + x^4/8 − x^6/48

From this, we find y ( 0.1) , y ( 0.2 ) , y ( 0.3) .

n x y f = –xy
0 0 1 0
1 0.1 0.99501 – 0.09950
2 0.2 0.98020 – 0.19604
3 0.3 0.95600 – 0.28680

y^(p)(0.4) = y(0) + (4h/3)[2f1 − f2 + 2f3]
           = 1 + (0.4/3)[2(−0.09950) + 0.19604 + 2(−0.28680)]
           = 0.92313

f4^(p) = f(0.4, y^(p)(0.4)) = −(0.4)(0.92313) = −0.36925

∴ y^(c)(0.4) = y(0.2) + (h/3)[f2 + 4f3 + f4^(p)]
             = 0.98020 + (0.1/3)[−0.19604 + 4(−0.2868) − 0.36925] = 0.92312

∴ up to 4 decimals, y^(p)(0.4) = y^(c)(0.4) = 0.9231

∴ y(0.4) = 0.9231
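A compact Python sketch of this last example (my own, not from the text): it uses the truncated Picard series found above as the source of starting values, then performs one Milne predictor–corrector step for the reduced equation y′ = −xy.

series = lambda x: 1 - x**2/2 + x**4/8 - x**6/48     # truncated Picard iterate
f = lambda x, y: -x*y                                # reduced first-order equation
h = 0.1
x = [0.0, 0.1, 0.2, 0.3]
y = [series(a) for a in x]                           # 1, 0.99501, 0.98020, 0.95600
fv = [f(a, b) for a, b in zip(x, y)]
yp = y[0] + 4*h/3*(2*fv[1] - fv[2] + 2*fv[3])        # predictor
yc = y[2] + h/3*(fv[2] + 4*fv[3] + f(0.4, yp))       # corrector
print(round(yp, 5), round(yc, 5))                    # about 0.92313 and 0.92312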


Exercise 6.7

1. Find y(2) if y(x) is the solution of dy/dx = (1/2)(x + y), assuming y(0) = 2, y(0.5) = 2.636, y(1.0) = 3.595 and y(1.5) = 4.968, by Milne’s method.

2. Given dy/dx = 1/(x + y), y(0) = 2, y(0.2) = 2.0933, y(0.4) = 2.1755, y(0.6) = 2.2493, find y(0.8) using Milne’s method.

3. Using Milne’s predictor–corrector method evaluate the integral of y′ − 4y = 0 at x = 0.4, 0.5 given that y(0) = y0 = 1, y1 = y(0.1) = 1.492, y2 = y(0.2) = 2.226, y3 = y(0.3) = 3.320. Also compare with exact values.

4. Let dy/dx = 1 + y^2 with y(0) = 0. Compute y(0.8) and y(1.0) by using Milne’s method.

5. Using Milne’s predictor–corrector method, solve the equation y′ = x − y^2 in the range 0 ≤ x ≤ 1 given that y = 0 at x = 0.

6. Use Milne’s method to find y(0.3) from y′ = x^2 + y^2, y(0) = 1. Find the initial values y(−0.1), y(0.1), y(0.2) from the Taylor’s series method.

7. Given dy/dx = x^2(1 + y) and y(1) = 1, y(1.1) = 1.233, y(1.2) = 1.548, y(1.3) = 1.979, evaluate y(1.4) by Milne’s predictor–corrector method.

8. Solve dy/dx = (x + y)y, y(0) = 1 using Milne’s predictor–corrector method for y(0.4). The values for x = 0.1, 0.2 and 0.3 should be obtained by the Runge–Kutta method of fourth order.

Answers 6.7

 1.  6.873

 2.  2.3164

 3.  x                 0.4    0.5
     Calculated value  4.953  7.388
     Exact value       4.953  7.389

 4.  y(0.8) ≈ 1.0293, y(1.0) ≈ 1.5554

 5.  y(0.2) ≈ 0.0200, y(0.4) ≈ 0.0795, y(0.6) ≈ 0.1762 (by Picard’s method),
     y(0.8) ≈ 0.3046, y(1) ≈ 0.4556

 6.  y(0.3) ≈ 1.4397

 7.  y(1.4) ≈ 2.575

 8.  y(0.4) ≈ 1.83874
Index

A Charpit’s method, 460


Circle of convergence, 68
Algebraic and transcendental equations, 586 Clairut’s equation, 463
Algebraic equation, 586 Closed curve, 39
Amplitude spectrum, 372 Codomain, 1
Analytic function, 5 Coefficient of magnification, 134
Angle of rotation, 134 Complementary error function, 184
Application of Cauchy residue theorem, 106 Complementary function, 486 – 487, 507–509
Applications of Fourier transforms, 416 Complete solution, 435, 486 – 488
Applications of Laplace transform, 257 Complex form of Fourier integral, 370
Applications of partial differential equations, 520 Complex form of Fourier series, 343
Arc, 39 Complex Fourier transform, 392
Complex variable, 1
B Conformal mapping, 132
Conjugate harmonic function, 22– 23
Backward difference interpolation polynomial, 696 Conjugate harmonic function in polar form, 25
Backward difference operator, 656 Conjugate property, 377
Backward difference table, 662 Connected domain, 44
Backward differences, 662 Continuity of a function, 4
Bessel functions, 186 Continuous curve, 39
Bessel’s interpolation formula, 703 Contour, 39
Bilinear transformations, 149 Contour Integrals, 97
Bisection method, 587 Contraction, 136–137
Bolzano method, 587 Convergence of Fourier series, 286
Boundary value problem, 435 Convergence of half range cosine series, 324
Convergence of half range sine series, 324
C Convergence of order m, 587
Converse of Cauchy integral theorem, 53
Cauchy inequality, 53 Convolution, 221
Cauchy integral formula, 50 Convolution theorem, 221, 381
Cauchy integral formula for derivatives, 51 Cote’s formulas, 743–745
Cauchy integral theorem, 45 C-R equations, 5–8
Cauchy principal value, 114 Critical point, 133
Cauchy residue theorem, 98 Cross ratio, 149–150
Cauchy-Goursat theorem, 46 Crout’s method, 626
Cauchy-Riemann equations, 5–8
Central difference operator, 656
Central difference table, 663 D
Central differences, 662 D’Alembert’s method of solving wave
Centre of the power series, 67 equation, 530
Change of scale property, 167, 373 Damped vibrations, 169
Charpit’s auxiliary equations, 461 Daniel Bernoulli, 281

Definition of line integral, 39 First–order backward differences, 662


Delta function, 200–201 First shifting property, 167
Differentiability, 4 First translation property, 167
Differentiation of Laplace transform, 172 Fixed point, 133
Differentiation operator, 656 Forward difference operator, 656
Differentiations of transforms w.r.t. frequency, 378 Forward difference table, 661
Diffusivity of string, 528 Forward differences, 661
Dirac-delta function, 199–201 Fourier coefficients, 284–286, 353
Direct integration method, 444 Fourier cosine integral, 368
Direct iteration method, 588 Fourier cosine series, 323–324
Dirichlet’s conditions, 286 Fourier cosine transform, 369
Divided differences, 685 Fourier integrals, 367–370
Domain, 1 Fourier series, 283
Doolittle method, 625 Fourier sine integral, 369
Double pole, 85 Fourier sine series, 324
Fourier sine transform, 369
Fourier transforms, 371
E Fourier transform of integral function, 380
Effective value of the function, 343 Fourth order Runge-Kutta method, 783
Electric circuits, 257 Fractional transformation, 149
Energy of spectrum, 372 Frequency shifting, 373
Energy theorem, 382 Function of exponential order, 166
Entire function, 5 Fundamental period, 281–282
Equispaced arguments, 656 Fundamental theorem of integral calculus, 47
Error in quadrature formulae, 745
Error function, 184
G
Error propagation, 665–666
Error term in Simpson’s one-third rule, 746 Gauss backward difference formula, 703
Error term in Simpson’s three-eight rule, 747 Gauss elimination method, 615–616
Error term in Trapezoidal rule, 745 Gauss forward interpolation formula, 701
Errors in numerical computations, 585 Gauss backward interpolation formula, 702
Essential singularity, 85 Gauss-Jordan method, 616
Euler formulae, 283, 285–286 Gauss-Seidel iteration method, 635
Euler’s method, 772 General quadrature formula, 739
Euler’s coefficients, 284 General solution, 435
Even function, 286 Generalized function, 201
Expressions for Fourier coefficients, 285 Geometric convergence, 587
Extrapolation, 585, 683 Green’s theorem, 46
Grid point, 725–726, 728
F
H
Factorial polynomial, 663
Factorisation method, 623 Half range cosine series, 323–324
Faltung theorem, 381 Half range sine series, 324
Filtering property of Dirac-delta function, 201 Halving method, 587
Final value theorem, 227 Harmonic analysis, 352
First order partial differential equation, 434, 448 Harmonic function, 21

Heat equation, 520, 547, 568–569 Lagrange’s method, 448, 712


Heaviside function, 191, 394 Lagrange’s polynomial, 683
Holomorphic, 5 Lagrange’s subsidiary equations, 450
Homogenous and non-homogenous partial Laplace equation, 21, 521, 568–570
differential equations, 434 Laplace transform, 165
Laplace transform of derivative, 170
Laplace transform of integral, 171
I Laplace transform of periodic functions, 202
Improved Euler’s method, 773 Laplace transform of unit step function, 192
Independence of path, 46 Laplace-Everett’s interpolation formula, 704
Indirect methods, 634 Laurent’s series, 69
Infinite series of complex terms, 66 Level curves, 22
Initial value problem, 435 Limit of a function, 1
Initial value theorem, 227 Line integral, 39
Integral function, 47, 171, 380 Linear and non-linear partial differential
Integration of Laplace transform, 172–173 equations, 434
Interpolating polynomial, 683 Linear convergence, 587
Interpolation, 585, 683 Linear operators, 656
Interval of differencing, 656, 725 Linear transformation, 137
Invariant point, 133, 149 Linearity property, 167, 372
Inverse Fourier transform, 371 Liouville’s theorem, 54
Inverse interpolation, 712–714 Lipschitz condition, 765
Inverse Laplace transform, 165, 209 Lower triangular matrix, 623–624
Inverse operator, 489 LU decomposition method, 623
Inverse transformation, 139
Inversion, 139
M
Isogonal, 132
Isolated singularities, 85 Magnification, 134, 136–137
Iterative methods, 615, 634, 713 Mass-spring system, 271
Iterative process, 586, 634 Mean value operator, 656
Meromorphic function, 85
Method of factorisation, 623–624
J Method of false position, 596
Jacobi’s iterative method, 634 Method of separation of variables, 521
Jordan lemma, 114 Method of tangents, 600
Milne Thomson method, 23–26
Milne’s method, 791–793
K Milne’s predictor-corrector method, 794–795
Kirchoff’s laws, 257 Mobius transformation, 149
Kutta’s third order rule, 783 Modification of Newton’s iterative formula, 603
Modification of power method, 644
Modified Euler’s method, 774
L Modulation property of sine and cosine
Lagrange equation, 448 transforms, 375
Lagrange inverse interpolation formula, 712 Modulation theorem, 374
Lagrange’s auxiliary equations, 450 Modulus property, 374
Lagrange’s interpolation formula, 683 Monge’s method, 479–480

Morera’s theorem, 53 Piecewise continuous function, 166


Multiple curve, 39 Piecewise smooth curve, 39
Multiple-valued function, 1 Pivot element, 615–616
Multiply connected domain, 44–45 Poisson’s integral formula, 54
Polar form of Cauchy-Riemann equations, 7
Pole of order n, 85
N Power method, 642
Newton’s backward difference interpolation Power series, 67–68
formula, 696 Principle of superposition, 487
Newton’s divided difference interpolation Problem related to deflection of a loaded
formula, 687 beam, 265
Newton’s forward difference interpolation Problem related to mechanical systems, 271
formula, 695 Properties of Fourier transforms, 372
Newton’s iteration method, 600 Properties of Laplace transforms, 167
Newton-Raphson method, 600
Non-isolated singularity, 84
Q
Non-linear partial differential equations, 434
nth order derivative, 53 Quadratic convergence, 587
Numerical differentiation, 725
Numerical quadrature, 739
R
Radio equations, 560
O
Radius of convergence, 67
Odd function, 286 Radius of curvature, 265
One dimensional wave equation, 520, 527 Range of function, 1
One-sided Laplace transform, 165 Rate of convergence, 587
Operator methods, 490, 509 Rectified semiwave function, 203
Order of convergence, 595, 597, 602, 605 Reflection, 139
Ordinary point, 133 Regula-falsi method, 596
Orthogonal system, 22 Regular function, 5
Orthogonal trajectories, 35–36 Removable isolated singularity, 85
Orthogonality of trigonometric system, 282 Residue, 90
Residue theorem, 98
Root mean square value, 343
P Rotation mapping, 136
Parabolic, 149, 520 Round-off errors, 585
Parametric equation, 39 Runge’s method, 780
Parseval’s formulae, 341 Runge-Kutta method, 781–783
Parseval’s identities, 382
Partial differential equations, 433
S
Partial pivoting, 616
Particular integral, 486, 489–490, 507, 509 Secant method, 594
Particular solution, 435, 486 Second order Runge-Kutta method, 782
Periodic function, 281 Second shifting theorem, 193
Picard’s method, 764–765 Second-order backward differences, 662
Piecewise continuous curve, 39 Self-reciprocal, 371

Shearing stress, 265 Transform of derivatives, 379


Shifting property, 167, 373 Translation mapping, 136
Shifting operator, 656 Transmission line, 558
Signum function, 394 Trapezoidal rule, 739–740
Simple closed curve, 39 Triangularisation method, 623
Simple curve, 39 Trigonometric series, 282–283
Simple pole, 85 Triple pole, 85
Simple zero, 84 Triply connected domain, 44 – 45
Simply connected domain, 44 Truncation error, 586
Simpson’s one-third rule, 740 Two dimensional wave equation, 576
Simpson’s three-eight rule, 741 Two-sided Laplace transform, 165
Single-valued function, 1 Type of isolated singularity, 85
Singular point, 84 Types of errors, 585
Singular solutions, 435 Gross errors, 585
Singularities of function, 84 Round-off errors, 585
Types of, 84 Truncation errors, 586
Non-isolated singularity, 84–85
Isolated singularity, 85 U
Smooth arc, 39
Smooth curve, 39 Uniformly convergent series, 67
Spectral density, 372 Unit impulse function, 199–200
Spectral representation, 372 Unit step function, 191–192
Spectrum, 372
Square transformation, 144 V
Square wave function, 207
Standard transformations, 136 Variable coefficients, 227, 242
Stirling’s formula, 703 Vibrations of a stretched string, 527
System of level curves, 22–23
System of linear equations, 615 W
Wave equation, 520, 527, 576
T Weddle’s rule, 742

Taylor series, 68
Taylor’s series method, 756 Y
Telegraph equations, 559 Young’s modulus, 265
Telephone equations, 559
Thermal diffusivity, 547
Third order Runge-Kutta method, 782–783
Z
Transcendental equation, 586 Zero of an analytic function, 84