
Atlanta University Center

DigitalCommons@Robert W. Woodruff Library, Atlanta University Center
ETD Collection for AUC Robert W. Woodruff Library

9-1-1979

Laplace transformation techniques in operational calculus
Lewis Wooten
Atlanta University

Follow this and additional works at: http://digitalcommons.auctr.edu/dissertations


Part of the Mathematics Commons

Recommended Citation
Wooten, Lewis, "Laplace transformation techniques in operational calculus" (1979). ETD Collection for AUC Robert W. Woodruff
Library. Paper 518.

This Thesis is brought to you for free and open access by DigitalCommons@Robert W. Woodruff Library, Atlanta University Center. It has been
accepted for inclusion in ETD Collection for AUC Robert W. Woodruff Library by an authorized administrator of DigitalCommons@Robert W.
Woodruff Library, Atlanta University Center. For more information, please contact cwiseman@auctr.edu.
LAPLACE TRANSFORMATION TECHNIQUES IN

OPERATIONAL CALCULUS

A THESIS

SUBMITTED TO THE FACULTY OF ATLANTA UNIVERSITY

IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR

THE DEGREE OF MASTER OF SCIENCE

BY

LEWIS WOOTEN

DEPARTMENT OF MATHEMATICAL SCIENCES

ATLANTA, GEORGIA

DECEMBER 1979
ABSTRACT

MATHEMATICS

WOOTEN, LEWIS, Atlanta University, 1979

Laplace Transformation Techniques in Operational Calculus

Advisor: Dr. Benjamin Martin

Master of Science degree conferred December 20, 1979

Thesis Dated: September 1979

This paper is concerned with the application of the theory of Operational Calculus based on the Laplace transformation to problems frequently encountered in Applied Mathematics. Beginning with a minimum of the theory of the Laplace transform, this basic theory is applied to ordinary differential equations with both constant and variable coefficients. The process for finding the solution to partial differential equations with x and t as independent variables is illustrated. Additional topics extend the treatment to the evaluation of definite integrals; nonlinear differential equations and Volterra's integral equations with different kernels are also discussed.
CONTENTS

INTRODUCTION ......................................................... iv

CHAPTER                                                             Page

I. DEFINITION AND FUNDAMENTAL THEOREMS ............................... 1
     Definition of the Laplace Transformation ........................ 1
     Fundamental Theorems ............................................ 3
     Inverse Transform ............................................... 9

II. APPLICATION TO ORDINARY LINEAR DIFFERENTIAL EQUATIONS
     WITH CONSTANT COEFFICIENTS ..................................... 12
     Single Differential Equation ................................... 12
     Simultaneous Differential Equations ............................ 19

III. APPLICATION TO ORDINARY DIFFERENTIAL EQUATIONS WITH
     VARIABLE COEFFICIENTS .......................................... 24

IV. APPLICATION TO PARTIAL DIFFERENTIAL EQUATIONS ................... 28

V. EVALUATION OF DEFINITE INTEGRALS ................................. 40

VI. APPLICATION TO ORDINARY NONLINEAR DIFFERENTIAL EQUATIONS ........ 47
     Solution by Nonlinear Integral Equations ....................... 48
     Solution by Power Series

VII. APPLICATION TO INTEGRAL EQUATIONS OF THE CONVOLUTION TYPE ...... 61
     Integral Equations of the Convolution Type ..................... 62
     Abel's Integral Equations

APPENDICES .......................................................... 71
BIBLIOGRAPHY ........................................................ 77

List of Tables

Table                                                               Page

Tables of Properties ................................................ 71
Tables of Transforms
INTRODUCTION

The Laplace transformation was first proposed by Pierre Simon Laplace, a French mathematician and one of the most prolific mathematicians of the 18th century. Very little is known of his early years, for when he became distinguished, he had the pettiness to hold himself aloof from both his relatives and those who had assisted him. Laplace was more of an applied mathematician, having done the majority of his work in mathematical physics, mechanics, and astronomy. During the years 1784-1787, he produced numerous works of exceptional power. Prominent among these is one, read in 1784 and reprinted in the third volume of the Mechanique Celeste, in which he introduced the Laplace integral.

The term "operational method" implies a procedure of solving differential equations whereby the boundary or initial conditions are automatically satisfied in the course of the solution. Much of the interest in the operational method was stimulated by Oliver Heaviside (1850-1925), who developed its earlier concepts and applied them successfully to problems dealing with almost every phase of physics and applied mathematics. In spite of his notable contributions, Heaviside's development of the operational calculus was largely empirical and lacking in mathematical rigor.

The operational method was placed on a sound mathematical foundation through the efforts of many men. Bromwich and Wagner (1916) were among the first to justify Heaviside's work on the basis of contour integration. Carson followed by formulating the operational calculus on the basis of the infinite integral of the Laplace type. The methods of Carson and Bromwich were linked together by Levy and March as two phases of a more general approach. Van der Pol, Doetsch, and others contributed to summarizing the earlier works into a procedure of solution presently known as the operational method of Laplace transformation.

Problems involving ordinary differential equations can be solved operationally with an elementary knowledge of the Laplace transformation, whereas other problems leading to partial differential equations require some knowledge of complex variable theory for a thorough understanding.

The operational method of Laplace transformation offers a very powerful technique for the fields of applied mathematics. In contrast to the classical method, which requires the general solution to be fitted to the initial or boundary conditions, these conditions are automatically incorporated in the operational solution for any arbitrary or prescribed excitation. Solutions for impulsive types of excitation and excitation of arbitrary nature can be concisely written operationally. In some cases, it is possible to determine the behavior of the system merely by examining the operational equation without actually carrying out the solution.

The activity stimulated by Heaviside's method has been aptly summarized by the eminent British mathematician E. T. Whittaker* in the following statement:

Looking back on the controversy after thirty years, we should now place the operational calculus with Poincare's discovery of automorphic functions and Ricci's discovery of tensor calculus as the three most important mathematical advances of the last quarter of the nineteenth century. Applications, extensions, and justification of it constitute a considerable part of the mathematical activity of today.

*Whittaker, E. T.: Oliver Heaviside, Bulletin of the Calcutta Mathematical Society, Volume 20, p. 199.
CHAPTER I

DEFINITION AND FUNDAMENTAL THEOREMS

1.1 Definition of the Laplace Transform

Definition 1.1:* Let f(t) be a function of t specified for t > 0. Then the Laplace transform of f(t), denoted by F(s), is defined as

F(s) = L f(t) = ∫₀^∞ e^(-st) f(t) dt    (1.1.1)

where the parameter s is complex.†


The Laplace transform of f(t) is said to exist if the integral (1.1.1) converges for some value of s; otherwise it does not exist. Sufficient conditions under which the Laplace transform exists are that the function f(t) in (1.1.1) is sectionally continuous in every finite interval 0 < t < N, and that the function is of exponential order for t > N.
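Although this chapter works with exact integrals, Definition 1.1 is easy to probe numerically. The sketch below is my own illustration (the helper name `laplace`, the truncation point T, and the step count are arbitrary choices, not part of the text): it approximates (1.1.1) by the trapezoidal rule on a truncated interval and checks the result against the known transform L e^(2t) = 1/(s - 2), valid for s > 2.

```python
import math

def laplace(f, s, T=60.0, n=200_000):
    """Trapezoidal approximation of F(s) = integral_0^infinity e^(-st) f(t) dt.
    The factor e^(-st) makes the tail beyond T negligible when s lies
    comfortably inside the region of convergence."""
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for i in range(1, n):
        total += math.exp(-s * i * h) * f(i * h)
    return total * h

# f(t) = e^(2t) is of exponential order 2, so F(s) = 1/(s - 2) for s > 2.
s = 5.0
print(abs(laplace(lambda t: math.exp(2.0 * t), s) - 1.0 / (s - 2.0)))
```

For s ≤ 2 the truncated sum grows with T instead of settling down, which mirrors the statement that the transform exists only where the integral converges.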

The new function F(s) is called the Laplace transform, or the image, of the original function f(t). Whenever it is convenient to do so, we shall denote the original function f(t) in lower-case letters and its transform by the same letter in upper-case. But other notations that distinguish between functions and their transforms are sometimes preferable; for example,

φ(s) = L f(t)   or   y(s) = L f(t)

*ℒ is the symbol normally used to denote the Laplace transform of a function, but for printing purposes we shall use the letter L.

†We assume for the present that the parameter is real.

A function is called sectionally continuous or piecewise continuous in an interval a < t < b if it is such that the interval can be divided into a finite number of subintervals in which the function is continuous and has finite right- and left-hand limits.

The unit step function U(t - k), defined as

U(t - k) = 0  when 0 < t < k
         = 1  when t > k    (1.1.2)

is an example of a function that is sectionally continuous on the interval 0 < t < N for every positive number N.

A function f(t) is of exponential order α as t tends to infinity provided there is some constant M > 0 such that

|e^(-αt) f(t)| < M   or   |f(t)| < M e^(αt)    (1.1.3)

for all t greater than some fixed value; we then say that f(t) is a function of exponential order as t → ∞ or, briefly, is of exponential order.

The function U(t - k) above, as well as the function t^n, is of exponential order α as t → ∞ for any positive α; in fact, for the first function and, when n = 0, for the second, we may write α = 0. The function e^(2t) is of exponential order (with α = 2), but the function e^(t²) is not of exponential order.

Theorem 1.1: Sufficient Conditions for the Existence of the Laplace Transform. If f(t) is sectionally continuous in every finite interval 0 < t < N and of exponential order α for t > N, then its Laplace transform F(s) exists for all s > α.

Proof: We have, for any positive number N,

∫₀^∞ e^(-st) f(t) dt = ∫₀^N e^(-st) f(t) dt + ∫_N^∞ e^(-st) f(t) dt    (1.1.4)

Since f(t) is sectionally continuous in every finite interval 0 < t < N, the first integral on the right exists. Also, the second integral on the right exists, since f(t) is of exponential order α for t > N. To see this we have only to observe that in such case

|∫_N^∞ e^(-st) f(t) dt| ≤ ∫₀^∞ e^(-st) |f(t)| dt ≤ ∫₀^∞ e^(-st) M e^(αt) dt = M/(s - α)    (1.1.6)

Thus the Laplace transform exists for s > α.

1.2 Fundamental Theorems of the Laplace Transform

We will now consider some of the very powerful and useful general

theorems concerning operations of the Laplace transform. These theorems

are of great utility in the solution of differential equations, evaluation

of integrals, and other procedures of applied mathematics.

In the following list of theorems we assume, unless otherwise stated, that all functions satisfy the conditions of Theorem 1.1, so that their Laplace transforms exist.

Theorem 2.1: The Laplace transform of a constant k is that constant divided by s; that is,

L k = k/s    (1.2.1)

Proof: To prove this, we have from the definition of the Laplace transform

L k = ∫₀^∞ e^(-st) k dt = [-(k/s) e^(-st)] evaluated from 0 to ∞ = k/s    (1.2.2)

The integral vanishes at the upper limit since by hypothesis Re s > 0.
Theorem 2.2: L k H(t) = k L H(t), where k is a constant.    (1.2.3)

Proof: We can prove this in the following manner:

L k H(t) = ∫₀^∞ e^(-st) k H(t) dt = k ∫₀^∞ e^(-st) H(t) dt = k L H(t)    (1.2.4)

Theorem 2.3: Linearity Property

If c₁ and c₂ are any constants while f₁(t) and f₂(t) are functions with the Laplace transforms F₁(s) and F₂(s) respectively, then

L [c₁ f₁(t) + c₂ f₂(t)] = c₁ F₁(s) + c₂ F₂(s)    (1.2.5)

The result is easily extended to more than two functions. The proof follows directly from Theorem 2.2 and is easy enough that it will not be given here. Because of the property of L expressed in this theorem, we say L is a linear operator, or that it has the linearity property.

Theorem 2.4: Laplace Transform of Derivatives

If f(t) is continuous and has a derivative f'(t) which is Laplace transformable, and if L f(t) = F(s), then

L f'(t) = sF(s) - f(0)    (1.2.6)

Proof: To prove it, we have

L f'(t) = ∫₀^∞ e^(-st) f'(t) dt    (1.2.7)

Integrating by parts, we have

L f'(t) = [f(t) e^(-st)] evaluated from 0 to ∞ + s ∫₀^∞ e^(-st) f(t) dt
        = lim (t→∞) f(t) e^(-st) - f(0) + s L f(t)
        = sF(s) - f(0)    (1.2.8)
This theorem is very useful in solving differential equations with constant coefficients. It allows the Laplace transform of a derivative to be found without integrating, whenever the transform of the function itself is known.
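The rule (1.2.6) can be spot-checked with the same kind of truncated quadrature used for (1.1.1); the code below is my own sketch (the helper `laplace`, the test function sin 3t, and the tolerance are illustrative assumptions, not part of the text). With f(t) = sin 3t we have f(0) = 0 and F(s) = 3/(s² + 9), so L f'(t) should equal sF(s) - f(0) = 6/13 at s = 2.

```python
import math

def laplace(f, s, T=60.0, n=200_000):
    # trapezoidal approximation of integral_0^infinity e^(-st) f(t) dt
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for i in range(1, n):
        total += math.exp(-s * i * h) * f(i * h)
    return total * h

s = 2.0
f = lambda t: math.sin(3.0 * t)            # F(s) = 3/(s^2 + 9)
df = lambda t: 3.0 * math.cos(3.0 * t)     # f'(t)

lhs = laplace(df, s)                       # L f'(t)
rhs = s * laplace(f, s) - f(0.0)           # sF(s) - f(0)
print(abs(lhs - rhs))
```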

Theorem 2.5: If f(t) is continuous and has derivatives of order 1, 2, 3, ..., n which are Laplace transformable, then

L f^(n)(t) = s^n F(s) - Σ (k = 0 to n-1) s^(n-1-k) f^(k)(0)    (1.2.9)

where F(s) is the Laplace transform of f(t) and f^(k)(0) = d^k f/dt^k evaluated at t = 0.

This theorem is an extension of Theorem 2.4. A proof will not

be given here.

By repeated application of Theorem 2.4 the required results can

be obtained. Therefore it follows that

L f'(t) = sF(s) - f(0)    (1.2.10a)

L f''(t) = s²F(s) - sf(0) - f'(0)    (1.2.10b)

L f^(3)(t) = s³F(s) - s²f(0) - sf'(0) - f''(0)    (1.2.10c)

and so on. These expressions are useful in transforming differential equations.

Theorem 2.6: If L f(t) = F(s), then

L ∫₀^t f(u) du = F(s)/s    (1.2.11)

Proof: Since

L ∫₀^t f(u) du = ∫₀^∞ e^(-st) [∫₀^t f(u) du] dt    (1.2.11b)

integrating by parts yields

L ∫₀^t f(u) du = [-(1/s) e^(-st) ∫₀^t f(u) du] evaluated from 0 to ∞ + (1/s) F(s)    (1.2.11c)

Now at the upper limit the first term vanishes because of the exponential function. At the lower limit the first term vanishes because the definite integral is zero there. Hence only the second term is left, and it is the result stated in Theorem 2.6.
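Theorem 2.6 can also be exercised numerically; the sketch below is mine (the quadrature helper and the choice f(t) = cos t are illustrative assumptions). Since ∫₀^t cos u du = sin t, the theorem predicts L sin t = (L cos t)/s, and both sides should also agree with the tabulated value 1/(s² + 1).

```python
import math

def laplace(f, s, T=60.0, n=200_000):
    # trapezoidal approximation of integral_0^infinity e^(-st) f(t) dt
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for i in range(1, n):
        total += math.exp(-s * i * h) * f(i * h)
    return total * h

s = 2.0
lhs = laplace(math.sin, s)        # L of the integral of cos
rhs = laplace(math.cos, s) / s    # F(s)/s with f(t) = cos t
print(abs(lhs - rhs), abs(lhs - 1.0 / (s * s + 1.0)))
```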

Theorem 2.7: Multiplication by t^n.

If L f(t) = F(s), then

L t^n f(t) = (-1)^n d^n F(s)/ds^n = (-1)^n F^(n)(s)    (1.2.12)

where n = 1, 2, 3, ...

Proof: We have

F(s) = ∫₀^∞ e^(-st) f(t) dt    (1.2.13)

Then by Leibnitz's rule for differentiation under the integral sign,*

F'(s) = dF/ds = d/ds ∫₀^∞ e^(-st) f(t) dt = ∫₀^∞ ∂/∂s [e^(-st)] f(t) dt

      = ∫₀^∞ -t e^(-st) f(t) dt = -∫₀^∞ e^(-st) t f(t) dt = -L t f(t)    (1.2.14)

Thus L t f(t) = -dF/ds = -F'(s), which proves the theorem for n = 1.

*We assume here that Leibnitz's rule can be applied.

To establish the theorem in general, we use mathematical induction. Assume the theorem true for n = k; that is, assume

∫₀^∞ e^(-st) t^k f(t) dt = (-1)^k F^(k)(s)    (1.2.15)

Then

d/ds ∫₀^∞ e^(-st) t^k f(t) dt = (-1)^k F^(k+1)(s)    (1.2.16)

or, by Leibnitz's rule,

-∫₀^∞ e^(-st) t^(k+1) f(t) dt = (-1)^k F^(k+1)(s)    (1.2.17)

that is,

∫₀^∞ e^(-st) t^(k+1) f(t) dt = (-1)^(k+1) F^(k+1)(s)    (1.2.18)

It follows that if (1.2.15) is true, i.e., if the theorem holds for n = k, then (1.2.18) is true, i.e., the theorem holds for n = k + 1. But by (1.2.14) the theorem is true for n = 1. Hence it is true for n = 1 + 1 = 2 and n = 2 + 1 = 3, etc., and thus for all positive integer values of n.
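A quick numerical check of (1.2.12) for n = 1, using my own quadrature helper (the function names and tolerances are illustrative, not from the text): with f(t) = e^(-t) we have F(s) = 1/(s + 1), so L t e^(-t) should equal -F'(s) = 1/(s + 1)².

```python
import math

def laplace(f, s, T=60.0, n=200_000):
    # trapezoidal approximation of integral_0^infinity e^(-st) f(t) dt
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for i in range(1, n):
        total += math.exp(-s * i * h) * f(i * h)
    return total * h

s = 1.0
approx = laplace(lambda t: t * math.exp(-t), s)   # L t f(t)
exact = 1.0 / (s + 1.0) ** 2                      # -d/ds [1/(s+1)]
print(abs(approx - exact))
```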

Theorem 2.8: Differentiation with respect to a second independent variable.

Consider a function of two independent variables, f = f(x,t). If

L f(x,t) = F(x,s)    (1.2.19)

then

L ∂f(x,t)/∂x = ∂F(x,s)/∂x    (1.2.20)

Proof: The proof of this theorem follows directly from the definition of the Laplace transform:

L ∂f(x,t)/∂x = ∫₀^∞ e^(-st) ∂f(x,t)/∂x dt    (1.2.21)

Since the variable x is not the variable of integration, the order of differentiation and integration may be interchanged to give

L ∂f(x,t)/∂x = ∂/∂x ∫₀^∞ e^(-st) f(x,t) dt = ∂F(x,s)/∂x    (1.2.22)

which completes the proof.

Theorem 2.9: Convolution Theorem.*

If

L f₁(t) = F₁(s)    (1.2.23)

L f₂(t) = F₂(s)    (1.2.24)

then the theorem states that

L ∫₀^t f₁(u) f₂(t - u) du = L ∫₀^t f₂(u) f₁(t - u) du = F₁(s) F₂(s)    (1.2.25)

The proof of Theorem 2.9 will not be given here, and we will list the following theorems without their proofs.
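Theorem 2.9 can be exercised numerically. In the sketch below (my own illustration; the helper `convolve`, the choice f₁(t) = t, f₂(t) = e^(-t), and the tolerances are assumptions), the convolution of t and e^(-t) is computed by quadrature and compared with its closed form t - 1 + e^(-t), whose transform 1/(s²(s + 1)) is exactly F₁(s)F₂(s), as (1.2.25) requires.

```python
import math

def convolve(f1, f2, t, m=4000):
    # trapezoidal approximation of integral_0^t f1(u) f2(t - u) du
    if t == 0.0:
        return 0.0
    h = t / m
    total = 0.5 * (f1(0.0) * f2(t) + f1(t) * f2(0.0))
    for i in range(1, m):
        total += f1(i * h) * f2(t - i * h)
    return total * h

f1 = lambda u: u                  # F1(s) = 1/s^2
f2 = lambda u: math.exp(-u)       # F2(s) = 1/(s + 1)

# The convolution works out to t - 1 + e^(-t).
for t in (0.5, 1.0, 3.0):
    print(abs(convolve(f1, f2, t) - (t - 1.0 + math.exp(-t))))
```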

Theorem 2.10: First Shifting Theorem

If L f(t) = F(s),    (1.2.26)

then L e^(at) f(t) = F(s - a)    (1.2.27)

Theorem 2.11: Second Shifting Theorem

If L f(t) = F(s) and

g(t) = f(t - a)  for t > a
     = 0         for t < a

then L g(t) = e^(-as) F(s)

*This theorem is known in the literature as the Faltung or convolution theorem. In the older literature of operational calculus it is called the superposition theorem.
1.3 Inverse Transform

Definition 3.1: Let the symbol L⁻¹ F(s) denote a function whose Laplace transform is F(s). Thus if

L f(t) = F(s)    (1.3.1)

then

f(t) = L⁻¹ F(s)    (1.3.2)

This correspondence between the functions F(s) and f(t) is called the inverse Laplace transformation, f(t) being the inverse transform of F(s). The operator L⁻¹ is a linear operator; that is, it has the linearity property.

Since the Laplace transform of a null function η(t) is zero, where a null function is one for which

∫₀^t η(u) du = 0  for every t > 0    (1.3.3)

it is clear that if L f(t) = F(s), then also L [f(t) + η(t)] = F(s). From this it follows that we can have two different functions with the same Laplace transform.

Example: The two functions

f₁(t) = e^(-3t)    (1.3.4)

and

f₂(t) = e^(-3t) except at isolated points, where f₂ takes other finite values,    (1.3.5)

have the same Laplace transform, namely 1/(s + 3).

If we allow null functions, we see that the inverse Laplace transform is not unique. It is unique, however, if we disallow null functions (which do not generally arise in cases of physical problems).



Theorem 3.1: Lerch's Theorem.

If we restrict ourselves to functions f(t) which are sectionally continuous in every finite interval 0 ≤ t ≤ N and of exponential order for t > N, then the inverse Laplace transform L⁻¹ F(s) = f(t) is unique.

We shall always assume such uniqueness unless otherwise stated.

The most obvious way of finding the inverse transform of a given function of s consists of reading the result from a table of transforms. But we shall take up methods of obtaining inverse transforms of certain combinations and modifications of functions of s, as well as methods of resolving such functions into those listed in the tables. With the aid of such procedures, we shall be able to make much use of the Laplace transformation. In addition, there are explicit formulas for L⁻¹ F(s). The most useful of these formulas involves an integral in the complex plane. To use this integral, we must let s be a complex variable, and we must be prepared to employ some theorems in the theory of functions of a complex variable.

Definition 3.2: If F(s) = L f(t), then L⁻¹ F(s) is given by

f(t) = (1/2πi) ∫ (from a - i∞ to a + i∞) e^(st) F(s) ds,   t > 0    (1.3.6)

and f(t) = 0 for t < 0.

This result is called the complex inversion integral or formula.

It is also known as Bromwich's integral formula. The result provides a

direct means for obtaining the inverse Laplace transformation of a given

function F(s).

The integration in (1.3.6) is performed along the line x = a in the complex plane, where s = x + iy. The real number a is chosen so that the line x = a lies to the right of all singularities (poles, branch points, or essential singularities) but is otherwise arbitrary.
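When the contour integral in (1.3.6) cannot be evaluated in closed form, numerical inversion is an option. The sketch below is my own illustration and uses the Gaver-Stehfest algorithm rather than the Bromwich integral itself: it samples F(s) only at real values of s, which suits smooth originals f(t). The weight formula is standard, and the test transform 1/(s + 3), whose inverse is e^(-3t), is an arbitrary choice not taken from this thesis.

```python
import math

def stehfest(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(s).
    N must be even; N = 12 is a common choice in double precision."""
    ln2 = math.log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        v = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            v += (j ** (N // 2) * math.factorial(2 * j) /
                  (math.factorial(N // 2 - j) * math.factorial(j) *
                   math.factorial(j - 1) * math.factorial(k - j) *
                   math.factorial(2 * j - k)))
        total += (-1) ** (N // 2 + k) * v * F(k * ln2 / t)
    return total * ln2 / t

# F(s) = 1/(s + 3) should invert to f(t) = e^(-3t)
print(abs(stehfest(lambda s: 1.0 / (s + 3.0), 0.5) - math.exp(-1.5)))
```

Because the weights alternate in sign and grow rapidly, the method is sensitive to rounding and to non-smooth f(t); the complex inversion formula remains the general tool.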


CHAPTER II

APPLICATION TO ORDINARY LINEAR DIFFERENTIAL

EQUATIONS WITH CONSTANT COEFFICIENTS

The application of the Laplace transformation to solutions of linear ordinary differential equations with constant coefficients, or systems of such equations, is well known. Such problems can, of course, be solved by methods studied in a first course in differential equations. We shall, later on, solve more difficult problems, especially those in partial differential equations.

2.1 Single Differential Equations

In this section we consider, from a practical point of view, a physical system which is characterized by a single time function y(t) satisfying a single differential equation. When more than one time function exists in a physical system, the functions satisfy more than one differential equation in which all or some of those functions occur (simultaneous differential equations); that case is taken up in the next section.

We can exhibit the method of solution used here in a clearly arranged scheme, to which we shall return whenever we employ the Laplace transformation on functional equations.


SCHEME

Original space:  differential equation with initial conditions -----> solution
                       |                                                ^
                       | Laplace transformation                         | inverse Laplace
                       v                                                | transformation
Image space:     algebraic equation ----------------------------------> solution

Explanation: Instead of solving the given differential equation with given initial conditions directly, we make a detour across into the image space. We go from the original equation to the image equation (an algebraic equation) by the Laplace transformation; we solve this, and then translate the solution back to the original space with the help of the inversion formula or the tables of transforms.

Hence the image function Y(s) of the desired time function y(t) can be found by applying the Laplace transformation, and we require only to determine its corresponding original function. For this purpose we could use the complex inversion formula. However, we wish to avoid this as much as possible, and so we proceed in the same way as when we meet an integral in the process of calculation. We do not evaluate the integral from its definition as the limit of a sum, but we consult a table of integrals; when we do not find the required result immediately, we decompose and reshape the integrand so that we reach a known integral. Analogously, in our case we consult the attached Tables of Transforms and see whether the original function of the image function under consideration is written down, or whether, perhaps by using our 'grammatical rules,' the function can be built up from the functions in the tables.

There are advantages in using the operational method (Laplace transform) as opposed to the classical methods. In the classical method we establish first a 'general' solution whose constants must be made to fit the initial values. This necessitates the solution of an additional system of linear equations in n unknowns (for n > 3 this is fairly tedious). By contrast, the Laplace transform considers these initial values from the beginning and introduces them automatically into the solution; because of this, their influence is clear from the start. Therefore, the method is particularly suitable for initial value problems. The frequent case of vanishing initial values, which is not any simpler in the classical method (it still requires the solution of the previously mentioned system of linear equations), is solved very simply by the Laplace transformation.

While in the classical method we first solve the homogeneous equation and then the inhomogeneous equation by variation of parameters, in using the Laplace transformation we can immediately solve the inhomogeneous equation, which is in practice the more important.

We will illustrate the procedure by considering the general equation with constant coefficients:

x^(n)(t) + a₁x^(n-1)(t) + ... + aₙx(t) = f(t)    (2.1)

We introduce the following transforms:

y(s) = L x(t) = ∫₀^∞ e^(-st) x(t) dt,    f(s) = L f(t)    (2.2)

Applying the Laplace transform to the differential equation, we have

[s^n y(s) - s^(n-1)x(0) - s^(n-2)x'(0) - ... - sx^(n-2)(0) - x^(n-1)(0)]
+ a₁[s^(n-1)y(s) - s^(n-2)x(0) - ... - x^(n-2)(0)]
+ ... + aₙy(s) = f(s)    (2.3)

If we use the notation

g(s) = [x^(n-1)(0) + sx^(n-2)(0) + ... + s^(n-1)x(0)]
+ a₁[x^(n-2)(0) + sx^(n-3)(0) + ... + s^(n-2)x(0)]
+ ... + aₙ₋₁x(0)    (2.4)

we can write equation (2.3) in the form

(s^n + a₁s^(n-1) + ... + aₙ) y(s) - g(s) = f(s)    (2.5)

If we let

Lₙ(s) = s^n + a₁s^(n-1) + ... + aₙ    (2.6)

then the image equation can be written as

y(s) = g(s)/Lₙ(s) + f(s)/Lₙ(s)    (2.7)

To obtain the solution of the differential equation (2.1) we must obtain in some manner the inverse transform of y(s), and we would then have the desired solution. The procedure in such a case is to decompose the expressions g(s)/Lₙ(s) and f(s)/Lₙ(s) into partial fractions, examine the table of transforms, and obtain the appropriate inverse transforms.

We will consider some examples to illustrate the procedure.



Example 1.1: Find the general solution of the differential equation

y''(t) + k²y(t) = 0    (2.1.1)

subject to the conditions

y(0) = A  and  y'(0) = B    (2.1.2)

If we let

Y(s) = L y(t) = ∫₀^∞ e^(-st) y(t) dt    (2.1.3)

then applying the Laplace transform to both members of Eq. (2.1.1), and making use of the differentiation theorem, we obtain the following equation:

s²Y(s) - sy(0) - y'(0) + k²Y(s) = 0    (2.1.4)

If the function y(t) satisfies the initial conditions, then

s²Y(s) - sA - B + k²Y(s) = 0    (2.1.5)

which is a simple algebraic equation. Its solution is clearly

Y(s) = A s/(s² + k²) + B/(s² + k²)    (2.1.6)

Now inverting Eq. (2.1.6), we find

y(t) = A cos kt + (B/k) sin kt = A cos kt + B' sin kt    (2.1.7)

where B' = B/k, and A and B' are arbitrary constants.

To verify our formal result given by Eq. (2.1.7), we need only find y''(t) from that equation and substitute in Eq. (2.1.1) to see that the differential equation is satisfied regardless of the values of A and B'.
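As an independent check on (2.1.7), not taken from the text, Eq. (2.1.1) can be integrated numerically and compared with the closed form; the Runge-Kutta helper below and the sample values of k, A, B are my own illustrative choices.

```python
import math

def rk4_second_order(accel, y0, v0, t_end, n=2000):
    # classical Runge-Kutta for y'' = accel(t, y, y'), from t = 0 to t_end
    h = t_end / n
    t, y, v = 0.0, y0, v0
    for _ in range(n):
        k1y, k1v = v, accel(t, y, v)
        k2y, k2v = v + h/2*k1v, accel(t + h/2, y + h/2*k1y, v + h/2*k1v)
        k3y, k3v = v + h/2*k2v, accel(t + h/2, y + h/2*k2y, v + h/2*k2v)
        k4y, k4v = v + h*k3v, accel(t + h, y + h*k3y, v + h*k3v)
        y += h/6*(k1y + 2*k2y + 2*k3y + k4y)
        v += h/6*(k1v + 2*k2v + 2*k3v + k4v)
        t += h
    return y

# y'' + k^2 y = 0 with y(0) = A, y'(0) = B
k, A, B = 2.0, 1.5, 3.0
numeric = rk4_second_order(lambda t, y, v: -k*k*y, A, B, 1.0)
closed = A * math.cos(k * 1.0) + (B / k) * math.sin(k * 1.0)
print(abs(numeric - closed))
```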



Example 1.2: Find the solution of the differential equation

y''(t) + 2y'(t) + 5y(t) = e^(-t) sin t    (2.1.8)

subject to the following initial conditions:

y(0) = 0  and  y'(0) = 1    (2.1.9)

Applying the Laplace transform to both members of the differential equation, and letting Y(s) denote the transform of y(t), we obtain

s²Y(s) - sy(0) - y'(0) + 2[sY(s) - y(0)] + 5Y(s) = 1/((s + 1)² + 1)    (2.1.10)

Considering the initial conditions, we have

s²Y(s) - 1 + 2sY(s) + 5Y(s) = 1/((s + 1)² + 1)

Hence

(s² + 2s + 5)Y(s) = 1 + 1/(s² + 2s + 2)    (2.1.11)

Therefore, the solution of the image equation is

Y(s) = (s² + 2s + 3)/[(s² + 2s + 2)(s² + 2s + 5)]    (2.1.12)

The rational function (2.1.12) can be expanded into partial fractions in which the denominators contain only quadratic factors:

(s² + 2s + 3)/[(s² + 2s + 2)(s² + 2s + 5)] = (As + B)/(s² + 2s + 2) + (Cs + D)/(s² + 2s + 5)    (2.1.13)

Solving for A, B, C and D from the partial fraction expansion (2.1.13), we find that

A = 0,  B = 1/3,  C = 0,  and  D = 2/3

Now we have

Y(s) = 1/[3(s² + 2s + 2)] + 2/[3(s² + 2s + 5)]

or

Y(s) = 1/[3((s + 1)² + 1)] + 2/[3((s + 1)² + 4)]    (2.1.14)

Hence

y(t) = L⁻¹ {1/[3((s + 1)² + 1)] + 2/[3((s + 1)² + 4)]}    (2.1.15)

Therefore, we have

y(t) = (1/3)e^(-t) sin t + (1/3)e^(-t) sin 2t

or

y(t) = (1/3)e^(-t)(sin t + sin 2t)    (2.1.16)
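The solution (2.1.16) can be verified independently of the transform calculus by integrating (2.1.8) numerically from the initial conditions (2.1.9); the Runge-Kutta helper below is my own sketch, not part of the thesis.

```python
import math

def rk4_second_order(accel, y0, v0, t_end, n=4000):
    # classical Runge-Kutta for y'' = accel(t, y, y'), from t = 0 to t_end
    h = t_end / n
    t, y, v = 0.0, y0, v0
    for _ in range(n):
        k1y, k1v = v, accel(t, y, v)
        k2y, k2v = v + h/2*k1v, accel(t + h/2, y + h/2*k1y, v + h/2*k1v)
        k3y, k3v = v + h/2*k2v, accel(t + h/2, y + h/2*k2y, v + h/2*k2v)
        k4y, k4v = v + h*k3v, accel(t + h, y + h*k3y, v + h*k3v)
        y += h/6*(k1y + 2*k2y + 2*k3y + k4y)
        v += h/6*(k1v + 2*k2v + 2*k3v + k4v)
        t += h
    return y

# y'' + 2y' + 5y = e^(-t) sin t,  y(0) = 0, y'(0) = 1
accel = lambda t, y, v: math.exp(-t) * math.sin(t) - 2.0*v - 5.0*y
num = rk4_second_order(accel, 0.0, 1.0, 1.0)
ref = (math.exp(-1.0) / 3.0) * (math.sin(1.0) + math.sin(2.0))
print(abs(num - ref))
```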

Example 1.3: Find the general solution of the following differential equation

y''(t) - a²y(t) = f(t)    (2.1.17)

subject to the conditions

y(0) = C₁  and  y'(0) = C₂    (2.1.18)

If we let

Y(s) = L y(t) = ∫₀^∞ e^(-st) y(t) dt    (2.1.19)

then taking the Laplace transform of Eq. (2.1.17), we have

s²Y(s) - sy(0) - y'(0) - a²Y(s) = F(s)    (2.1.20)

where

F(s) = L f(t)

Substituting the initial conditions (2.1.18) in Eq. (2.1.20) and solving, we have

Y(s) = C₁ s/(s² - a²) + C₂/(s² - a²) + F(s)/(s² - a²)    (2.1.21)

Inverting Eq. (2.1.21), with the convolution theorem applied to the last term, we have

y(t) = C₁ cosh at + (C₂/a) sinh at + (1/a) ∫₀^t sinh a(t - u) f(u) du

or

y(t) = C₁ cosh at + C₂' sinh at + (1/a) ∫₀^t sinh a(t - u) f(u) du    (2.1.22)

where C₂' = C₂/a.

2.2 Simultaneous Differential Equations

As has already been shown, the calculations involved in the solution of a single differential equation of order greater than three are much simpler by the Laplace transform method than by the classical method. However, the method shows its full power in the solution of systems of several differential equations, where it leads to greater insight and fewer calculations than the classical method, which in reality is often not practicable at all.

For the sake of simplicity, consider a system of three differential equations of the first order. In these, we write down all terms that theoretically can appear, although usually a number of these terms are absent, so that their coefficients have then to be set equal to 0. The system to be considered is

(a₁₁y₁' + b₁₁y₁) + (a₁₂y₂' + b₁₂y₂) + (a₁₃y₃' + b₁₃y₃) = f₁(t)

(a₂₁y₁' + b₂₁y₁) + (a₂₂y₂' + b₂₂y₂) + (a₂₃y₃' + b₂₃y₃) = f₂(t)    (2.2.1)

(a₃₁y₁' + b₃₁y₁) + (a₃₂y₂' + b₃₂y₂) + (a₃₃y₃' + b₃₃y₃) = f₃(t)

Taking the Laplace transform of the system yields

a₁₁(sY₁ - y₁(0)) + b₁₁Y₁ + a₁₂(sY₂ - y₂(0)) + b₁₂Y₂ + a₁₃(sY₃ - y₃(0)) + b₁₃Y₃ = F₁(s)

a₂₁(sY₁ - y₁(0)) + b₂₁Y₁ + a₂₂(sY₂ - y₂(0)) + b₂₂Y₂ + a₂₃(sY₃ - y₃(0)) + b₂₃Y₃ = F₂(s)    (2.2.2)

a₃₁(sY₁ - y₁(0)) + b₃₁Y₁ + a₃₂(sY₂ - y₂(0)) + b₃₂Y₂ + a₃₃(sY₃ - y₃(0)) + b₃₃Y₃ = F₃(s)

With the abbreviation

a_ik s + b_ik = p_ik(s)    (2.2.3)

we can write these equations in the form

p₁₁Y₁ + p₁₂Y₂ + p₁₃Y₃ = F₁ + a₁₁y₁(0) + a₁₂y₂(0) + a₁₃y₃(0)

p₂₁Y₁ + p₂₂Y₂ + p₂₃Y₃ = F₂ + a₂₁y₁(0) + a₂₂y₂(0) + a₂₃y₃(0)    (2.2.4)

p₃₁Y₁ + p₃₂Y₂ + p₃₃Y₃ = F₃ + a₃₁y₁(0) + a₃₂y₂(0) + a₃₃y₃(0)
If the differential equations were not of first but of second order, then the p_ik(s) would be polynomials not of the first but of the second degree, and the values y'(0), ... would appear on the right-hand side in addition to the values y(0), .... In practice, however, the equations would be similar; they form a system of linear algebraic equations for the unknowns Y₁, Y₂, Y₃. Theoretically such a system is solved most elegantly by Cramer's rule with determinants, but in practice it is sometimes better to use successive elimination of the unknowns or (with a great number of equations) one of the several different methods developed for solving such a system.

On the right-hand side of the equations there are the image functions Fᵢ(s) of the input functions fᵢ(t), and numerical constants which depend on the initial values; these we combine as follows:

a_i1 y₁(0) + a_i2 y₂(0) + a_i3 y₃(0) = rᵢ    (2.2.5)

Let D(s) be the determinant of the system, which is built from the p_ik(s) and which is in general a polynomial of the third degree in s. We obtain by Cramer's rule

Y₁ = (1/D) | F₁+r₁  p₁₂  p₁₃ |
           | F₂+r₂  p₂₂  p₂₃ |
           | F₃+r₃  p₃₂  p₃₃ |

Y₂ = (1/D) | p₁₁  F₁+r₁  p₁₃ |
           | p₂₁  F₂+r₂  p₂₃ |    (2.2.6)
           | p₃₁  F₃+r₃  p₃₃ |

Y₃ = (1/D) | p₁₁  p₁₂  F₁+r₁ |
           | p₂₁  p₂₂  F₂+r₂ |
           | p₃₁  p₃₂  F₃+r₃ |

This solution satisfies the system of differential equations and the initial conditions, provided that the determinant of the coefficients of the highest powers of the system does not vanish; that is,

det [a_ik] ≠ 0    (2.2.7)

This is called the normal case. In this section we restrict ourselves to the normal case.

The procedure is illustrated by the following example:

Example 2.1: Solve the following system of differential equations:

x'(t) = 2x(t) - 3y(t)
y'(t) = y(t) - 2x(t)    (2.2.8)

subject to the initial conditions

x = 8,  y = 3  at  t = 0    (2.2.9)

Taking the Laplace transform, where

X(s) = L x(t) = ∫₀^∞ e^(-st) x(t) dt    (2.2.10)

and

Y(s) = L y(t) = ∫₀^∞ e^(-st) y(t) dt    (2.2.11)

we have

sX(s) - x(0) = 2X(s) - 3Y(s)
sY(s) - y(0) = Y(s) - 2X(s)    (2.2.12)

Substituting the initial conditions and simplifying, we have

(s - 2)X(s) + 3Y(s) = 8
2X(s) + (s - 1)Y(s) = 3    (2.2.13)

Solving Eqs. (2.2.13) simultaneously by Cramer's rule, we find that

X(s) = | 8  3   | / | s-2  3   | = (8s - 17)/(s² - 3s - 4) = (8s - 17)/[(s + 1)(s - 4)]
       | 3  s-1 |   | 2    s-1 |

     = 5/(s + 1) + 3/(s - 4)    (2.2.14)

Y(s) = | s-2  8 | / | s-2  3   | = (3s - 22)/(s² - 3s - 4) = (3s - 22)/[(s + 1)(s - 4)]
       | 2    3 |   | 2    s-1 |

     = 5/(s + 1) - 2/(s - 4)    (2.2.15)

Then inverting the above equations, we have

x(t) = L⁻¹ X(s) = 5e^(-t) + 3e^(4t)    (2.2.16)
y(t) = L⁻¹ Y(s) = 5e^(-t) - 2e^(4t)
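The pair x(t), y(t) in (2.2.16) can be confirmed by integrating the system (2.2.8) directly; the fourth-order Runge-Kutta stepper below is my own illustrative sketch (the step count and the comparison time t = 0.5 are arbitrary choices).

```python
import math

def rk4_system(f, state, t_end, n=4000):
    # classical Runge-Kutta for the first-order system s' = f(t, s)
    h = t_end / n
    t, s = 0.0, list(state)
    for _ in range(n):
        k1 = f(t, s)
        k2 = f(t + h/2, [s[i] + h/2 * k1[i] for i in range(len(s))])
        k3 = f(t + h/2, [s[i] + h/2 * k2[i] for i in range(len(s))])
        k4 = f(t + h, [s[i] + h * k3[i] for i in range(len(s))])
        s = [s[i] + h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i])
             for i in range(len(s))]
        t += h
    return s

# x' = 2x - 3y, y' = y - 2x, with x(0) = 8, y(0) = 3
deriv = lambda t, s: [2*s[0] - 3*s[1], s[1] - 2*s[0]]
x_num, y_num = rk4_system(deriv, [8.0, 3.0], 0.5)
x_cl = 5*math.exp(-0.5) + 3*math.exp(2.0)
y_cl = 5*math.exp(-0.5) - 2*math.exp(2.0)
print(abs(x_num - x_cl), abs(y_num - y_cl))
```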


CHAPTER III

APPLICATION TO ORDINARY DIFFERENTIAL EQUATIONS

WITH VARIABLE COEFFICIENTS

It is sometimes possible, by the introduction of the Laplace transformation, to transform certain linear differential equations with variable coefficients into other equations that may be integrated readily. A linear differential equation in y(t) whose coefficients are polynomials in t transforms into a linear differential equation in Y(s) whose coefficients are polynomials in s. In case the transformed equation is simpler than the original, the transformation may enable us to find the solution of the original equation.

If the coefficients are polynomials of the first degree, the transformed equation is a linear equation of the first order, whose solution can be written in terms of an integral. To find the solution of the original equation, however, the inverse transform of the solution must be obtained.

The procedure will be illustrated by the following examples:

The procedure will be illustrated by the following examples:

Example 1: Find the solution of the differential equation

y''(t) + ty'(t) - y(t) = 0    (3.1)

subject to the following initial conditions:

y = 0,  y' = 1  at  t = 0    (3.2)

We have seen that

L t^n y(t) = (-1)^n d^n/ds^n [L y(t)] = (-1)^n Y^(n)(s)    (3.3)

and therefore we can write the transform of the product of t and any derivative of y(t) in terms of Y(s).

To transform Eq. (3.1), we let

Y(s) = ∫₀^∞ e^(-st) y(t) dt    (3.4)

Then the transformed equation is

s²Y(s) - sy(0) - y'(0) - d/ds [sY(s) - y(0)] - Y(s) = 0    (3.5)

Considering the initial conditions, and simplifying, we have

Y'(s) + (2/s - s) Y(s) = -1/s    (3.6)

An integrating factor is

exp[∫ (2/s - s) ds] = exp(2 ln s - s²/2) = s² e^(-s²/2)    (3.7)

so the equation can be written as

d/ds [s² e^(-s²/2) Y(s)] = -s e^(-s²/2)    (3.8)

or

d [s² e^(-s²/2) Y(s)] = -s e^(-s²/2) ds    (3.9)

Integrating, we have

s² e^(-s²/2) Y(s) = e^(-s²/2) + C    (3.10)

where C is the constant of integration, so that

Y(s) = 1/s² + C e^(s²/2)/s²    (3.11)

To determine C, note that by series expansion

Y(s) = 1/s² + (C/s²)(1 + s²/2 + s⁴/8 + ...)    (3.12)

then

Y(s) = 1/s² + C/s² + C(1/2 + s²/8 + ...)    (3.13)

Then since L⁻¹ s^k = 0, where k = 0, 1, 2, ..., we obtain after inverting

y(t) = (1 + C)t    (3.14)

But from Eq. (3.2) we have y'(0) = 1, so C = 0, and the required solution is

y(t) = t    (3.15)
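The result (3.15) is easy to confirm: substituting y = t into (3.1) gives y'' = 0 and ty' - y = t - t = 0. As an extra numerical check (my own sketch; the helper name and the endpoint t = 2 are arbitrary), integrating (3.1) from the conditions (3.2) reproduces y(t) = t.

```python
def rk4_second_order(accel, y0, v0, t_end, n=2000):
    # classical Runge-Kutta for y'' = accel(t, y, y'), from t = 0 to t_end
    h = t_end / n
    t, y, v = 0.0, y0, v0
    for _ in range(n):
        k1y, k1v = v, accel(t, y, v)
        k2y, k2v = v + h/2*k1v, accel(t + h/2, y + h/2*k1y, v + h/2*k1v)
        k3y, k3v = v + h/2*k2v, accel(t + h/2, y + h/2*k2y, v + h/2*k2v)
        k4y, k4v = v + h*k3v, accel(t + h, y + h*k3y, v + h*k3v)
        y += h/6*(k1y + 2*k2y + 2*k3y + k4y)
        v += h/6*(k1v + 2*k2v + 2*k3v + k4v)
        t += h
    return y

# y'' = y - t y' with y(0) = 0, y'(0) = 1; the exact solution is y = t
err = abs(rk4_second_order(lambda t, y, v: y - t * v, 0.0, 1.0, 2.0) - 2.0)
print(err)
```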

Example 2; Solve Bessel's equation of order zero.

ty"(t) + y'(t) + ty(t) = 0 (3.16)

under the conditions that y(0) = 1 and y(t) and its derivatives have trans

forms .

The point t = 0 is a singular point of this differential equation;

one of its solutions behaves like the natural logarithm of t near the

singular point, and the Laplace transforms of that solution's derivatives

do not exist.

Applying the Laplace transform to the equation we have

-d/ds[s²Y(s) - sy(0) - y'(0)] + sY(s) - y(0) - Y'(s) = 0    (3.17)

and substituting the initial conditions we have

(s² + 1)Y'(s) + sY(s) = 0    (3.18)


This is a first order differential equation and can be solved by separa

tion of variables. Separating the variables, we have

dY/Y = -s ds/(s² + 1)    (3.19)

and integrating gives

ln Y = -½ ln(s² + 1) + c    (3.20)

where c is the constant of integration. Solving we find that

Y(s) = c/(s² + 1)^½    (3.21)


Consulting the tables of transforms, we find the solution of the original

equation

y(t) = cJ₀(t)    (3.22)

If the function is to satisfy the condition y(0) = 1, then we

have c = 1, and the solution can be written as

y(t) = J₀(t)    (3.23)

where J₀(t) is Bessel's function of the first kind of order zero:

J₀(t) = Σ_{n=0}^∞ (-1)ⁿ (t/2)^(2n) / (n!)²    (3.24)
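
The result can be checked numerically from the series (3.24). The sketch below (truncation points are arbitrary choices, not from the text) sums the series for J₀(t) and verifies that its Laplace transform agrees with 1/√(s² + 1):

```python
import math

def j0(t, terms=200):
    # Partial sum of the series (3.24): J0(t) = Σ (-1)^n (t/2)^(2n) / (n!)^2
    total, term = 0.0, 1.0
    x = (t / 2.0) ** 2
    for n in range(terms):
        total += term
        term *= -x / ((n + 1) ** 2)   # ratio of successive series terms
        if abs(term) < 1e-20:
            break
    return total

def laplace_j0(s, T=20.0, n=20000):
    # Trapezoidal approximation of the Laplace integral of J0 over [0, T]
    h = T / n
    total = 0.5 * (j0(0.0) + math.exp(-s * T) * j0(T))
    for i in range(1, n):
        t = i * h
        total += math.exp(-s * t) * j0(t)
    return total * h

for s in (1.0, 2.0):
    assert abs(laplace_j0(s) - 1.0 / math.sqrt(s * s + 1.0)) < 1e-3
print("L{J0}(s) matches 1/sqrt(s^2 + 1)")
```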
CHAPTER IV

APPLICATION TO PARTIAL DIFFERENTIAL EQUATIONS

In Chapter 2 we found that problems leading to ordinary differential

equations could be solved easily by the use of the Laplace transformation.

In the study of partial differential equations it will appear that it is

still easy to find the transformed solution by the Laplace transform; but,

since this usually involves more complicated functions of s than those

considered in Chapter 2, it is rather more difficult to derive the solution

from it.

In partial differential equations the unknowns are functions of

several variables; we consider here the case of two variables, which we

denote by x and t. The unknown function is denoted by u(x,t). In a partial

differential equation a certain domain of the xt-plane is given at the out-

set, and it is within this domain that the unknown is to be determined.

For the equations considered here, we assume that t varies in the one-sided

infinite interval 0 < t < ∞, and x varies in a finite or infinite interval,

so that the basic region of the xt-plane is a semi-infinite strip, a quadrant,

or a half-plane according as x varies in a finite, one-sided, or two-sided

infinite interval.

If we wish to solve partial differential equations by using the

Laplace transformation, we must apply the Laplace transformation to the


function u(x,t) and to the derivatives which occur. Because the transformation

represents an integration with respect to a single variable, we


must undertake it relative to one of the variables in u, while the other

is not involved. The variable to which the transformation is applied is t,

which we have assumed from the first to vary between 0 and ∞, because

this is the interval over which the Laplace transform extends; the variable

x is to be thought of as a constant. For each fixed value of x we

obtain a different transform, which therefore depends not only (as previously)

on s, but also on x; it is therefore a function U(x,s):

L u(x,t) = ∫₀^∞ e^(-st) u(x,t) dt = U(x,s)    (4.1)

If we wish to transform derivatives with respect to t, we can employ

Theorem (1.2.7), and here the variable x is kept constant. Then we have,

for example,

L u_t(x,t) = sU(x,s) - u(x,0),*    (4.2)

L u_tt(x,t) = s²U(x,s) - su(x,0) - u_t(x,0)    (4.3)

To use our method for derivatives with respect to x it must be

assumed that the order of differentiation and integration in the Laplace

integral can be interchanged; for example

L u_x(x,t) = d/dx L u(x,t) = d/dx U(x,s)    (4.4)

L u_xt(x,t) = d/dx L u_t(x,t) = d/dx [sU(x,s) - u(x,0)]    (4.5)
Solution to one-dimensional boundary-value problems can be found

by the use of the Laplace transformation, by transforming the partial dif

ferential equation into an ordinary differential equation. The solution

is then obtained by solving the ordinary differential equation, and

*Where we have used the notation u_t = ∂u/∂t.



inverting by using the inversion formula or any of the other methods already

considered.

By applying the Laplace transform twice we can find solutions to

two-dimensional problems, that is, we first transform the equation with

respect to one variable and then with respect to the remaining variable

and arrive at an ordinary differential equation. In such a case the required

solution is obtained by double inversion. A similar technique can be

applied to three (or higher) dimensional problems. The process is some

times referred to as iterated Laplace transformation. Boundary-value prob

lems can sometimes also be solved by both Fourier and Laplace transforms.

In the one-dimensional case, the method can be summed up in the

following scheme:

SCHEME

Original    partial differential equation
space:      + initial conditions  ------------------------>  solution
            + boundary conditions                                ^
                     |                                           |
            Laplace transformation               inverse Laplace transformation
                     |                                           |
                     v                                           |
            ordinary differential equation  ------------------>  solution
            + boundary conditions

The most difficult part of the transformation method is normally

the determination of the original function from the image function found

by the Laplace transform. If the present tables are not sufficient, then

the method of the complex inversion formula must be employed, or the

image solution must be expanded into a series of decreasing powers of s.

But, for the problems considered here, the present tables are sufficient.



The use of the Laplace transform in the solution of partial dif

ferential equations will now be illustrated by the following examples:

Example 1. Solve the partial differential equation

u_x(x,t) + xu_t(x,t) = 0    (4.6)

subject to the conditions

u(x,0) = 0    (4.7)

u(0,t) = t    (4.8)
Let

U(x,s) = ∫₀^∞ e^(-st) u(x,t) dt    (4.9)

From Eq. (4.6) it follows that


U'(x,s) + x[sU(x,s) - u(x,0)] = 0    (4.10)

Using the initial condition (4.7), Eq. (4.10) becomes

U'(x,s) + xsU(x,s) = 0    (4.11)


subject to the condition

U(0,s) = 1/s²    (4.12)


where U(0,s) is the transform of the boundary condition (4.8).
The partial differential equation has been reduced to the above

ordinary differential equation. We could solve Eq. (4.11) by the Laplace

transformation method, but, for the simple problems arising here, it is

just as easy to write the complementary function and particular integral

in the usual way.

Eq. (4.11) is a first-order linear differential equation which can

be solved by separation of variables. Therefore, Eq. (4.11) can be written

in the form

dU/U = -sx dx    (4.13)

Solving, we have

ln U = -½sx² + C    (4.14)


where C is the constant of integration. Therefore, we can write

U = Ce^(-½sx²)    (4.15)
To determine C, we consider the condition (4.12), and find that

1/s² = C    (4.16)
Now we have the solution of the image equation

U(x,s) = (1/s²) e^(-½sx²)    (4.17)

If we let

a = ½x²    (4.18)

and

f(s) = 1/s²    (4.19)

Then Eq. (4.17) can be written in the following form

U(x,s) = f(s) e^(-as)    (4.20)

Inverting Eq. (4.20), by consulting the table of transforms, we find that

u(x,t) = F(t - a)H(t - a)    (4.21)


where F(t) = L⁻¹ f(s) = t. Therefore the solution can be written as

u(x,t) = (t - a)H(t - a)

or

u(x,t) = (t - ½x²)H(t - ½x²)    (4.22)


where H(t) is the unit step function defined as in Chapter 1.
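
The solution (4.22) can be verified directly. The following numerical sketch (a check, with an arbitrarily chosen test point) confirms the side conditions and tests Eq. (4.6) by central differences at a point behind the wave front:

```python
def u(x, t):
    # Solution (4.22): u = (t - x**2/2) * H(t - x**2/2)
    a = 0.5 * x * x
    return (t - a) if t > a else 0.0

# Boundary and initial conditions.
assert u(0.0, 3.7) == 3.7          # u(0,t) = t
assert u(1.3, 0.0) == 0.0          # u(x,0) = 0

# PDE u_x + x*u_t = 0 by central differences at a smooth point.
x, t, h = 1.0, 2.0, 1e-5           # here t > x**2/2, away from the front
u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
assert abs(u_x + x * u_t) < 1e-6
print("u satisfies the PDE and side conditions")
```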

Example 2. Solve the boundary value problem

u_xx(x,t) - 2u_xt(x,t) + u_tt(x,t) = 0    (4.23)

subject to the following conditions

u(x,0) = 0    (4.24)

u_t(x,0) = 0    (4.25)

u(O,t) = 0 (4.26)

u(1,t) = f(t)    (4.27)

Let

U(x,s) = L u(x,t) (4.28)

Transforming Eq. (4.23) we have

U''(x,s) - 2 d/dx[sU(x,s) - u(x,0)] + s²U(x,s) - su(x,0) - u_t(x,0) = 0
                                                              (4.29)

Considering the initial conditions, the above equation may be written as

U''(x,s) - 2sU'(x,s) + s²U(x,s) = 0    (4.30)


which is a second-order linear ordinary differential equation that has

the following solution

U(x,s) = (Ax + B)e^(sx)    (4.31)


where A and B are constants. To determine A and B we consider the boundary

conditions

U(0,s) = 0 (4.32)

U(1,s) = F(s)    (4.33)


where conditions (4.32) and (4.33) are the transforms of (4.26) and (4.27)

respectively. Considering conditions (4.32), we have

B = 0

therefore

U(x,s) = Axe^(sx)

Substituting conditions (4.33), we have

F(s) = Ae^s

or

A = F(s)e^(-s)    (4.35)

Therefore, the solution of the image equation is

U(x,s) = xF(s)e^(s(x-1))    (4.36)
If we let

a = x - 1    (4.37)

then we can write Eq. (4.36) as

U(x,s) = xF(s)e^(as)    (4.38)

Inverting, we find

u(x,t) = xf(t + a)H(t + a)    (4.39)

or

u(x,t) = xf(x + t - 1)H(x + t - 1)    (4.40)

where

u(x,t) = xf(x + t - 1)  for  t > 1 - x,   u(x,t) = 0  for  t < 1 - x
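
The solution depends on the prescribed end condition f(t). Taking the hypothetical choice f(t) = sin t (not specified in the text), the sketch below confirms (4.40) against the boundary-value problem by central differences:

```python
import math

def u(x, t):
    # Solution (4.40) with the assumed choice f(t) = sin t:
    # u = x * f(x + t - 1) * H(x + t - 1)
    w = x + t - 1.0
    return x * math.sin(w) if w > 0 else 0.0

# Boundary conditions: u(0,t) = 0 and u(1,t) = f(t).
assert u(0.0, 2.0) == 0.0
assert abs(u(1.0, 2.0) - math.sin(2.0)) < 1e-12

# PDE u_xx - 2*u_xt + u_tt = 0 by central differences at a smooth point.
x, t, h = 0.5, 1.5, 1e-4      # here x + t - 1 > 0
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
u_xt = (u(x + h, t + h) - u(x + h, t - h)
        - u(x - h, t + h) + u(x - h, t - h)) / (4 * h**2)
assert abs(u_xx - 2 * u_xt + u_tt) < 1e-5
print("u satisfies the boundary-value problem")
```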

Example 3. Let the temperature of the face of a semi-infinite solid x > 0

be prescribed by the function F(t) of time. If the initial temperature is

zero, the temperature function u(x,t) is the solution to the boundary value

problem.

u_t(x,t) = ku_xx(x,t),    x > 0, t > 0    (4.41)

u(x,0) = 0,    x > 0    (4.42)


u(0,t) = f(t),   lim_{x→∞} u(x,t) = 0,    t > 0    (4.43)

Applying the Laplace transform to Eq. (4.41), we have

sU(x,s) - u(x,0) = kU''(x,s)    (4.44)

where

U(x,s) = L u(x,t) = ∫₀^∞ e^(-st) u(x,t) dt    (4.45)

Considering the initial conditions, and simplifying, we have

U''(x,s) - (s/k)U(x,s) = 0    (4.46)

Subject to the conditions

U(0,s) = F(s),  and  lim_{x→∞} U(x,s) = 0    (4.47)

where

F(s) = L f(t) (4.48)

The general solution of Eq. (4.46) is

U(x,s) = C₁e^(x√(s/k)) + C₂e^(-x√(s/k))    (4.49)

We find C₁ = 0 from (4.47), so

U(x,s) = C₂e^(-x√(s/k))    (4.50)
and from Eq. (4.47) we find C₂ = F(s), so that

U(x,s) = F(s)e^(-x√(s/k))    (4.51)

From the tables of transforms we find that

L⁻¹ e^(-x√(s/k)) = [x/(2√(πk) t^(3/2))] exp(-x²/(4kt))    (4.52)

and with the aid of the convolution theorem, we find

u(x,t) = L⁻¹ F(s)g(s) = ∫₀^t f(t - τ)g(τ) dτ    (4.53)
where we have used the notation

g(s) = e^(-x√(s/k))    (4.54)
Now we have

u(x,t) = [x/(2√(πk))] ∫₀^t f(t - τ) τ^(-3/2) exp(-x²/(4kτ)) dτ    (4.55)

We now make the following substitution

λ = x/(2√(kτ))    (4.56)

and we calculate

τ = x²/(4kλ²)    (4.57)

dτ = -[x²/(2kλ³)] dλ    (4.58)

and

τ^(-3/2) = 8k^(3/2)λ³/x³    (4.59)

We can write Eq. (4.55) as

u(x,t) = (2/√π) ∫_{x/(2√(kt))}^∞ f(t - x²/(4kλ²)) e^(-λ²) dλ    (4.60)

When the temperature of the surface is constant,

f(t) = u₀    (4.61)

the temperature within the solid is therefore

u(x,t) = (2u₀/√π) ∫_{x/(2√(kt))}^∞ e^(-λ²) dλ    (4.62)

Since

erfc(z) = (2/√π) ∫_z^∞ e^(-λ²) dλ    (4.63)

therefore, Eq. (4.62) becomes

u(x,t) = u₀ erfc(x/(2√(kt)))    (4.64)
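
Since erfc is available in standard libraries, (4.64) is easy to test. The sketch below (with arbitrarily chosen values k = 1, u₀ = 3, not from the text) checks the surface condition, the decay deep in the solid, and the heat equation (4.41) by central differences:

```python
import math

k, u0 = 1.0, 3.0    # assumed diffusivity and surface temperature

def u(x, t):
    # Solution (4.64): u = u0 * erfc(x / (2*sqrt(k*t)))
    return u0 * math.erfc(x / (2.0 * math.sqrt(k * t)))

# Surface and far-field behaviour.
assert abs(u(0.0, 1.0) - u0) < 1e-12          # u(0,t) = u0
assert u(10.0, 0.01) < 1e-12                  # u -> 0 deep in the solid

# Heat equation u_t = k*u_xx by central differences.
x, t, h = 0.7, 0.5, 1e-4
u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
assert abs(u_t - k * u_xx) < 1e-4
print("erfc profile satisfies the heat equation")
```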

Example 4: An infinitely long string having one end at x = 0 is initially

at rest on the x-axis. The end x = 0 undergoes a periodic transverse dis-

placement given by A₀ sin ωt, t > 0. Find the displacement of any point on

the string at any time.


37

Fig. 4-1

If y(x,t) is the transverse displacement of the string at any point x at

any time t, then the boundary-value problem is

y_tt(x,t) = a²y_xx(x,t),    x > 0, t > 0    (4.65)

y(x,0) = y_t(x,0) = 0    (4.66)

y(0,t) = A₀ sin ωt,  and  |y(x,t)| < M    (4.67)


where the last condition specifies that the displacement is bounded.

Taking the Laplace transform of Eq. (4.65), we find

s²Y(x,s) - sy(x,0) - y_t(x,0) = a²Y''(x,s)    (4.68)

where

Y(x,s) = L y(x,t) = ∫₀^∞ e^(-st) y(x,t) dt    (4.69)

Considering the initial condition, and simplifying, Eq. (4.68) becomes

Y''(x,s) - (s²/a²)Y(x,s) = 0    (4.70)

subject to the conditions

Y(0,s) = A₀ω/(s² + ω²)    (4.71)

Y(x,s) is bounded    (4.72)

The general solution of Eq. (4.70) is

Y(x,s) = C₁e^(sx/a) + C₂e^(-sx/a)    (4.73)



From the condition of boundedness, we must have C₁ = 0. Then we have

Y(x,s) = C₂e^(-sx/a)    (4.74)

From condition (4.71), we find that

C₂ = A₀ω/(s² + ω²)    (4.75)

Therefore,

Y(x,s) = [A₀ω/(s² + ω²)] e^(-sx/a)    (4.76)

If we let

α = x/a    (4.77)

and

f(s) = A₀ω/(s² + ω²)    (4.78)

With this notation Eq. (4.76) can be written as

Y(x,s) = f(s)e^(-αs)    (4.79)

Upon consulting the tables of transforms, we find

y(x,t) = L⁻¹ f(s)e^(-αs) = F(t - α)u(t - α)    (4.80)

Since

F(t) = L⁻¹ f(s) = A₀ sin ωt    (4.81)

and u(t - α) is the unit step function defined as

u(t - α) = 1 for t > α,   u(t - α) = 0 for t < α    (4.82)

Therefore, the desired solution is

y(x,t) = A₀ sin ω(t - α),  t > α;   y(x,t) = 0,  t < α    (4.83)

or

y(x,t) = A₀ sin ω(t - x/a),  t > x/a;   y(x,t) = 0,  t < x/a    (4.84)


This means physically that a point x of the string stays at

rest until the time t = x/a. Thereafter it undergoes motion identical

with that of the end x = 0 but lags behind it in time by the amount x/a.

The constant a is the speed with which the wave travels.
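
This lag interpretation can be confirmed numerically. The sketch below (with arbitrary constants a = 2, A₀ = 1, ω = 3, not from the text) checks the rest condition ahead of the front, the delayed end motion, and the wave equation (4.65) by central differences:

```python
import math

a, A0, w = 2.0, 1.0, 3.0   # assumed wave speed, amplitude, frequency

def y(x, t):
    # Solution (4.84): the end motion A0*sin(w*t) arriving with lag x/a.
    return A0 * math.sin(w * (t - x / a)) if t > x / a else 0.0

# The point x is at rest before the front arrives ...
assert y(4.0, 1.0) == 0.0                          # t < x/a = 2
# ... and afterwards repeats the end motion with lag x/a.
assert abs(y(4.0, 5.0) - A0 * math.sin(w * 3.0)) < 1e-12

# Wave equation y_tt = a**2 * y_xx by central differences behind the front.
x, t, h = 1.0, 3.0, 1e-4
y_tt = (y(x, t + h) - 2 * y(x, t) + y(x, t - h)) / h**2
y_xx = (y(x + h, t) - 2 * y(x, t) + y(x - h, t)) / h**2
assert abs(y_tt - a * a * y_xx) < 1e-4
print("delayed wave satisfies the string problem")
```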


CHAPTER V

EVALUATION OF DEFINITE INTEGRALS

Certain types of definite integrals may be evaluated very easily

by the use of integral transforms. The Laplace transform is the most

extensively used integral transform, at the present time, to evaluate

definite integrals. We will present the general procedure by which defi

nite integrals are evaluated by means of examples. Many integrals may be

evaluated by the following theorem.

Theorem 5.1: If L f(t) = F(s), then

∫₀^∞ F(s) ds = ∫₀^∞ [f(t)/t] dt    (5.1)

The proof of this theorem may be done in the following manner:

Proof: By hypothesis we have

F(s) = ∫₀^∞ e^(-st) f(t) dt    (5.2)

Therefore

∫₀^∞ F(s) ds = ∫₀^∞ [∫₀^∞ e^(-st) f(t) dt] ds    (5.3)

             = ∫₀^∞ [∫₀^∞ e^(-st) f(t) ds] dt
provided that it is permissible to reverse the order of integration.


But we have

∫₀^∞ e^(-st) ds = 1/t    (5.4)

so that

∫₀^∞ F(s) ds = ∫₀^∞ [f(t)/t] dt    (5.5)

A critical examination of the validity of the above procedure

shows that it is permissible here to change the order of integration

involving infinite limits.*

To illustrate the use of the above theorem, let's consider some

examples.

Example 1: Evaluate the integral

∫₀^∞ [(sin at)/t] dt,   where a > 0    (5.6)

Since

L sin at = a/(s² + a²)    (5.7)

then by theorem 5.1

∫₀^∞ [(sin at)/t] dt = ∫₀^∞ [a/(s² + a²)] ds

                     = tan⁻¹(s/a) |₀^∞ = π/2    (5.8)

*A discussion of this will be found in H. S. Carslaw, Fourier's Series

and Integrals, Dover Publications, Inc., New York, 1951.
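
Theorem 5.1 can also be tested with a different choice of f. The sketch below uses f(t) = te^(-t) (an illustrative choice, not in the text), for which F(s) = 1/(s + 1)² and both sides of (5.1) equal 1:

```python
import math

def trap(g, lo, hi, n):
    # Trapezoidal rule for the integral of g over [lo, hi]
    h = (hi - lo) / n
    total = 0.5 * (g(lo) + g(hi))
    for i in range(1, n):
        total += g(lo + i * h)
    return total * h

# f(t) = t*e^(-t), so F(s) = 1/(s+1)**2 and f(t)/t = e^(-t).
lhs = trap(lambda s: 1.0 / (s + 1.0) ** 2, 0.0, 2000.0, 200000)  # ∫ F(s) ds
rhs = trap(lambda t: math.exp(-t), 0.0, 50.0, 50000)             # ∫ f(t)/t dt
assert abs(lhs - 1.0) < 1e-2
assert abs(rhs - 1.0) < 1e-4
assert abs(lhs - rhs) < 1e-2
print("both sides of (5.1) agree")
```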

Example 2: Evaluate the integral

∫₀^∞ [(e^(-at) - e^(-bt))/t] dt    (5.9)

Since

L e^(-at) = 1/(s + a)    (5.10)

and

L e^(-bt) = 1/(s + b)    (5.11)

then

∫₀^∞ [(e^(-at) - e^(-bt))/t] dt = ∫₀^∞ [1/(s + a) - 1/(s + b)] ds

                                = ln[(s + a)/(s + b)] |₀^∞

                                = ln(b/a)    (5.12)
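
The result ln(b/a) is easy to confirm numerically. The sketch below (truncation point T = 80 is an arbitrary choice) uses the fact that the integrand tends to b - a as t → 0:

```python
import math

def frullani(a, b, T=80.0, n=80000):
    # Trapezoidal estimate of the integral in (5.9) over [0, T].
    h = T / n
    def g(t):
        # integrand, with its limiting value b - a at t = 0
        return (b - a) if t == 0.0 else (math.exp(-a * t) - math.exp(-b * t)) / t
    total = 0.5 * (g(0.0) + g(T))
    for i in range(1, n):
        total += g(i * h)
    return total * h

assert abs(frullani(1.0, 2.0) - math.log(2.0)) < 1e-4
assert abs(frullani(0.5, 3.0) - math.log(6.0)) < 1e-4
print("integral equals ln(b/a)")
```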

Another method of evaluating integrals operationally depends

on the introduction of a parameter in the integrand. As an example, con

sider the following

Example 3: Evaluate

∫_{-∞}^∞ [x sin xt/(a² + x²)] dx    (5.13)

Since

L ∫₀^∞ [x sin xt/(a² + x²)] dx = ∫₀^∞ ∫₀^∞ e^(-st) [x sin xt/(a² + x²)] dx dt    (5.14)

Changing the order of integration we have

∫₀^∞ [x/(a² + x²)] ∫₀^∞ e^(-st) sin xt dt dx = ∫₀^∞ x² dx/[(a² + x²)(s² + x²)]    (5.15)

Resolving the above into partial fractions we have

[1/(s² - a²)] ∫₀^∞ [s²/(s² + x²) - a²/(a² + x²)] dx    (5.16)

    = [1/(s² - a²)] [s tan⁻¹(x/s) - a tan⁻¹(x/a)] |₀^∞

    = [1/(s² - a²)] (πs/2 - πa/2) = π/[2(s + a)]

Since

∫_{-∞}^∞ [x sin xt/(a² + x²)] dx = 2 ∫₀^∞ [x sin xt/(a² + x²)] dx    (5.17)
then

L ∫_{-∞}^∞ [x sin xt/(a² + x²)] dx = π/(s + a)    (5.18)

Taking the inverse transform, we have

L⁻¹ [π/(s + a)] = πe^(-at)    (5.19)

Therefore

∫_{-∞}^∞ [x sin xt/(a² + x²)] dx = πe^(-at)    (5.20)

Theorem 5.2: If the integral

∫₀^∞ f(t) dt

converges, then the integral ∫₀^∞ e^(-st) f(t) dt converges uniformly with

respect to s in the closed interval 0 ≤ s ≤ s₁ for any real value of s₁ > 0.

That is,

∫₀^∞ f(t) dt = lim_{s→0} ∫₀^∞ e^(-st) f(t) dt = lim_{s→0} F(s) = a constant.

We will not give a proof of the theorem; but the theorem can be

illustrated by some examples.

Example 4: Consider the integral

I = ∫₀^∞ J₀(t) dt    (5.21)

Since the integral converges, then by theorem 5.2, we can write

∫₀^∞ J₀(t) dt = lim_{s→0} ∫₀^∞ e^(-st) J₀(t) dt    (5.22)

From the tables of transforms we have

lim_{s→0} ∫₀^∞ e^(-st) J₀(t) dt = lim_{s→0} 1/√(s² + 1) = 1    (5.23)

Example 5: Evaluate the integral

I = ∫₀^∞ [(sin xt)/√x] dx    (5.24)

We know that

sin x = x - x³/3! + x⁵/5! - ...    (5.25)

However

(sin x)/√x = √x (1 - x²/3! + x⁴/5! - ...)    (5.26)

Hence from Eq. (5.26), we have

J_{1/2}(x) = √(2/(πx)) sin x    (5.27)

therefore

sin x = √(πx/2) J_{1/2}(x)    (5.28)

Since I converges, then by Theorem 5.2 we can write

I = ∫₀^∞ [(sin xt)/√x] dx = √(πt/2) lim_{s→0} ∫₀^∞ e^(-sx) J_{1/2}(xt) dx    (5.29)

From the tables of transforms, we have

∫₀^∞ e^(-sx) J_{1/2}(xt) dx = [(s² + t²)^½ - s]^½ / [t^½ (s² + t²)^½]    (5.30)

Now Eq. (5.29) can be written as

I = √(πt/2) lim_{s→0} [(s² + t²)^½ - s]^½ / [t^½ (s² + t²)^½]    (5.32)

Applying Theorem 5.2 to Eq. (5.32), we have

I = ∫₀^∞ [(sin xt)/√x] dx = √(πt/2) · (1/t) = √(π/(2t))    (5.33)
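
The damped integral in (5.29) has the closed form √(π/2)·√(r - s)/r with r = √(s² + t²), which reduces to (5.33) as s → 0. The sketch below (an added numerical check; the substitution x = u² is introduced here to remove the integrable singularity at x = 0) verifies this for several values of s:

```python
import math

def damped_integral(s, t, U=30.0, n=60000):
    # With x = u**2 the integral of e^(-s x) sin(t x)/sqrt(x) over x >= 0
    # becomes 2 * integral of e^(-s u^2) sin(t u^2) du, a smooth integrand.
    h = U / n
    total = 0.5 * (0.0 + math.exp(-s * U * U) * math.sin(t * U * U))
    for i in range(1, n):
        u = i * h
        total += math.exp(-s * u * u) * math.sin(t * u * u)
    return 2.0 * total * h

def closed_form(s, t):
    # Equivalent of (5.30)/(5.32): sqrt(pi/2)*sqrt(r - s)/r, r = sqrt(s²+t²)
    r = math.hypot(s, t)
    return math.sqrt(math.pi / 2.0) * math.sqrt(r - s) / r

t = 2.0
for s in (1.0, 0.5, 0.1):
    assert abs(damped_integral(s, t) - closed_form(s, t)) < 1e-3
# As s -> 0 the damped integral approaches (5.33): sqrt(pi/(2 t)).
assert abs(closed_form(0.0, t) - math.sqrt(math.pi / (2.0 * t))) < 1e-12
print("Example 5 checks out")
```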

Remarks concerning Theorem 5.2

Uniform convergence on the interval 0 ≤ s < ∞ also means that

F(s), the Laplace transform of a function f(t), is continuous in that

interval, that is, for all values of s therein. It must not be inferred,

however, that the continuity of F(s) implies uniform convergence of the

integral in 0 ≤ s < ∞. For instance, as s → 0, F(s) may be continuous

while the integral diverges. Consider the following example

lim_{s→0} ∫₀^∞ e^(-st) cos t dt = lim_{s→0} s/(s² + 1) = 0

But

∫₀^∞ cos t dt

diverges.

The integral diverges because cos t and sin t are alternating

functions with constant amplitude as t → ∞. The function does not tend

to a limit as t → ∞, so

∫₀^T cos t dt

oscillates finitely; that is, it is bounded and indefinite.


CHAPTER VI

APPLICATION TO ORDINARY NONLINEAR DIFFERENTIAL

EQUATIONS

For nonlinear differential equations there are no general methods

which are applicable to all cases, and in general no solutions exist in

closed form; the solution usually cannot be written in terms of the

classical transcendentals. We must be satisfied with approximate methods.

Although the Laplace transform is not immediately applicable in the

previous way, since it is a linear transformation and is particularly

suited for solving linear problems, it can, nevertheless, be useful in

approximate methods.

Many of the analytical methods of practical importance are based

on attempting to find a solution as a combination of well-known tabulated

mathematical functions. It is recognized that an exact solution probably


cannot be found, but an approximate solution of sufficient accuracy may be
possible. While details of various methods differ, most of them are rather
similar and follow basically the same format. One part of the differential
equation is a linear equation that is simple enough to allow an exact solu
tion. The other part contains any terms that are difficult to handle and
will usually involve the nonlinear terms of the equation, and perhaps other
terms as well. The linear equation is solved so as to give the zero-order
or generating solution. This generating solution is then employed in some

way with the nonlinear terms of the original equation to produce first-

order correction terms. These correction terms are then combined with


the generating term to yield a first-order corrected solution, which is

an approximate solution of the original equation. The exact form of the

correction terms depends upon the particular details of the method being

employed.

If the degree of nonlinearity is sufficiently small, a single

application of this process, yielding a first-order corrected solution,

may give sufficient accuracy. If the degree of nonlinearity is greater,

it is sometimes possible to obtain better accuracy by applying the method

a second time, so as to yield a second-order correction. Further repeated

applications of the method are possible theoretically, but practically the

mathematics is usually too complicated relative to the small increase in

accuracy obtained. Actually, an important uncertainty inherent in methods

of this sort is the error in the solution which they yield. It is not

always a simple matter to determine the error.

Since the determination of the error of the approximate solutions

to nonlinear differential equations is not readily done, we will not con

sider it here. Our concern is to demonstrate that the Laplace transforma

tion can be applied to nonlinear differential equations, and that the

approximate solutions obtained are as good as those obtained by the

traditional methods.

6.1 Solution by Nonlinear Integral Equations

A method for solving nonlinear differential equations based on the

theory of nonlinear integral equations will now be described. This is an

iterative operational method, devised by Pipes, which is intimately related

to Lalesco's nonlinear integral equation.*

We illustrate the procedure by using the second order nonlinear

*This is a Volterra nonlinear integral equation.



equation in operator form

Z(D)x(t) + f[x(t), x'(t), ...] = e(t)    (6.1.1)

where D = d/dt. The term Z(D) is the linear part of the differential

equation, while f[x(t)] is the nonlinear part. Let us suppose that

x(0) = x'(0) = 0    (6.1.2)

although the problem can be analyzed in cases where the initial conditions

are non-zero.

In order to formulate Eq. (6.1.1) as an integral equation, let the

following Laplace transforms be introduced:

L x(t) = ∫₀^∞ e^(-st) x(t) dt = X(s)    (6.1.3)

L e(t) = E(s)    (6.1.4)

L f[x(t)] = G(s)    (6.1.5)

With this notation and initial conditions (6.1.2), Eq. (6.1.1) can be

written as

Z(s)X(s) + G(s) = E(s)    (6.1.6)

or

X(s) = E(s)/Z(s) - G(s)/Z(s)    (6.1.7)
For simplicity, we introduce the notation

H(s) = 1/Z(s)

With this notation Eq. (6.1.7) may be written in the following form:

X(s) = E(s)H(s) - G(s)H(s)    (6.1.8)

If we let

L⁻¹ H(s) = L⁻¹ [1/Z(s)] = h(t)    (6.1.9)


then x(t) is obtained by taking the inverse transform of Eq. (6.1.8).

Thus

x(t) = L⁻¹ E(s)H(s) - L⁻¹ G(s)H(s)    (6.1.10)



If we make use of the convolution theorem and apply it to both members of

(6.1.10), the following result is obtained:

x(t) = ∫₀^t h(t - u)e(u) du - ∫₀^t h(t - u)f[x(u)] du    (6.1.11)

which is a nonlinear integral equation for x(t) of Volterra type. With

f(x) = 0, the nonlinear component is absent so that

x₀(t) = ∫₀^t h(t - u)e(u) du    (6.1.12)

and in this notation Eq. (6.1.11) may be written in the form

x(t) = x₀(t) - ∫₀^t h(t - u)f[x(u)] du    (6.1.13)

Lalesco has shown that the limit of the infinite sequence of functions

x₀(t) = ∫₀^t h(t - u)e(u) du

x₁(t) = x₀(t) - ∫₀^t h(t - u)f[x₀(u)] du
                                                        (6.1.14)
    .  .  .

x_{n+1}(t) = x₀(t) - ∫₀^t h(t - u)f[x_n(u)] du
is the solution of the integral equation (6.1.13). That is,

x(t) = lim_{n→∞} x_n(t)    (6.1.15)

is the desired solution.

The investigation of the convergence of the sequence (6.1.14)

is given in the treatise of Lalesco. The results of this investigation



indicate that the sequence converges rapidly in applications of this

method to special cases arising from physical problems.

A convenient operational procedure that has considerable utility

in the calculation of the sequence x_n(t) of Eq. (6.1.14) is based on

finding the inverse Laplace transform of Eq. (6.1.10) rather than Eq.

(6.1.14). From Eq. (6.1.10) we have

x(t) = L⁻¹ X(s) = L⁻¹ [E(s)/Z(s)] - L⁻¹ [L f[x(t)] / Z(s)]    (6.1.16)

whereupon the sequence x_n(t) is defined by

x₀(t) = L⁻¹ [E(s)/Z(s)]

x_{n+1}(t) = x₀(t) - L⁻¹ [L f[x_n(t)] / Z(s)]    (6.1.17)
Formally, of course, the sequence x_n(t) obtained from Eq. (6.1.17) is

the same as that obtained from Eq. (6.1.14). Practically, the sequence is

more easily computed from (6.1.17) because the tables of Laplace trans-

forms greatly assist in the analysis.

As an illustration of the procedure consider the following exam

ples:

Example 1.1: Consider the following nonlinear differential equation

(a + 3cx²) dx/dt + bx = e    (6.1.18)

with the condition x(0) = 0

Eq. (6.1.18) can be written in the form of (6.1.1) as follows

a dx/dt + bx + c d/dt(x³) = e    (6.1.19)

In this case we let

Z(D) = aD + b,   f(x) = cD(x³),   and D = d/dt    (6.1.20)

As a result

H(s) = 1/Z(s) = 1/(as + b)    (6.1.21)

hence

h(t) = L⁻¹ H(s) = (1/a)e^(-kt)    (6.1.22)

where k = b/a.

The integral equation satisfied by x(t) is

x(t) = x₀(t) - (c/a) ∫₀^t e^(-k(t-u)) Dx³(u) du    (6.1.23)

and the operational form is

x_{n+1}(t) = x₀(t) - cL⁻¹ [L Dx_n³(t) / (as + b)]    (6.1.24)

where

x₀(t) = L⁻¹ [e / (s(as + b))] = (e/b)(1 - e^(-kt))    (6.1.25)

The next approximation for x(t) is

x₁(t) = (e/b)(1 - e^(-kt)) - cL⁻¹ [L Dx₀³(t) / (as + b)]    (6.1.26)

Now with

x₀³(t) = (e/b)³ (1 - e^(-kt))³    (6.1.27)

we must calculate

Dx₀³(t) = 3k(e/b)³ (e^(-kt) - 2e^(-2kt) + e^(-3kt))

Carrying out the indicated operations with the aid of the tables

of Laplace transforms, the following result is obtained:

x₁(t) = (e/b)(1 - e^(-kt)) - (ce³/ab³)[3kte^(-kt) - (9/2)e^(-kt) + 6e^(-2kt) - (3/2)e^(-3kt)]
                                                              (6.1.28)

The higher approximations may be obtained by computing more func-

tions of the sequence (6.1.24). If the coefficients of the nonlinear

terms are small, the sequence converges rapidly and the second or third

approximations usually give the accuracy required.
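
The iteration (6.1.23) can also be carried out numerically. The sketch below (with assumed constants a = b = e = 1 and c = 0.05, not taken from the text) computes the first-order corrected solution by a discretized convolution and compares it with a direct Runge-Kutta solution of (6.1.18):

```python
import math

a, b, e, c = 1.0, 1.0, 1.0, 0.05   # assumed constants; k = b/a
k = b / a
T, N = 2.0, 400
h = T / N
ts = [i * h for i in range(N + 1)]

def x0(t):
    # Generating solution (6.1.25)
    return (e / b) * (1.0 - math.exp(-k * t))

def dx0cubed(u):
    # D x0³ from (6.1.27): 3k(e/b)³ (e^(-ku) - 2e^(-2ku) + e^(-3ku))
    return 3 * k * (e / b) ** 3 * (math.exp(-k * u)
                                   - 2 * math.exp(-2 * k * u)
                                   + math.exp(-3 * k * u))

def x1(i):
    # One pass of (6.1.23) via a trapezoidal convolution up to ts[i]
    t = ts[i]
    vals = [math.exp(-k * (t - u)) * dx0cubed(u) for u in ts[: i + 1]]
    integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1])) if i > 0 else 0.0
    return x0(t) - (c / a) * integral

def f(x):
    # (6.1.18) solved for x': x' = (e - b x)/(a + 3c x²)
    return (e - b * x) / (a + 3 * c * x * x)

xr = 0.0                            # reference solution by classical RK4
for i in range(N):
    k1 = f(xr); k2 = f(xr + 0.5 * h * k1)
    k3 = f(xr + 0.5 * h * k2); k4 = f(xr + h * k3)
    xr += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

assert abs(x1(N) - xr) < 5e-3       # first correction is close for small c
assert abs(x1(N) - x0(T)) > 1e-4    # and genuinely differs from zero order
print("first-order corrected solution tracks the exact solution")
```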

Example 1.2; Consider the following nonlinear differential equation

y'' + (2 + ay²)y' + (1 + by²)y = 0    (6.1.29)


subject to the initial conditions

y(0) = 0, and y'(0) = c    (6.1.30)

We can write Eq. (6.1.29) in the following form:

(y'' + 2y' + y) + (ay'y² + by³) = 0    (6.1.31)


where in this case we let

Z(D) = D² + 2D + 1    (6.1.32)

f(y) = ay'y² + by³,    (6.1.33)

and D = d/dt

As a result

H(s) = 1/Z(s) = 1/(s² + 2s + 1) = 1/(s + 1)²    (6.1.34)


and hence

h(t) = L⁻¹ H(s) = te^(-t)    (6.1.35)


The nonlinear integral equation satisfied by y(t) is

y(t) = y₀(t) - ∫₀^t (t - u)e^(-(t-u)) [ay'(u)y²(u) + by³(u)] du    (6.1.36)

and the operational form is

y(t) = y₀(t) - L⁻¹ [L(ay'y² + by³) / (s + 1)²]    (6.1.37)

where in this case

y₀(t) = c h(t) = cte^(-t)    (6.1.38)

The next approximation for y(t) is

y₁(t) = cte^(-t) - L⁻¹ [L(ay₀'y₀² + by₀³) / (s + 1)²]    (6.1.39)
Now with

y₀²(t) = c²t²e^(-2t),   y₀³(t) = c³t³e^(-3t),  and    (6.1.40)

y₀'(t) = c(1 - t)e^(-t)    (6.1.41)

we can calculate

ay₀'y₀² + by₀³ = c³[at² + (b - a)t³]e^(-3t)    (6.1.42)

and then to form its Laplace transform:

L(ay₀'y₀² + by₀³) = c³[2a/(s + 3)³ + 6(b - a)/(s + 3)⁴]    (6.1.43)
Hence we obtain the second approximation of the image function

Y₁(s) = c/(s + 1)² - 2ac³/[(s + 1)²(s + 3)³] - 6(b - a)c³/[(s + 1)²(s + 3)⁴]
                                                              (6.1.44)

where

Y(s) = L y(t) = ∫₀^∞ e^(-st) y(t) dt    (6.1.45)

To find the corresponding original function y₁(t), we make use

of the convolution theorem. We can write

2ac³/[(s + 1)²(s + 3)³] = [2ac³/(s + 1)²] · [1/(s + 3)³]    (6.1.46)


and

6(b - a)c³/[(s + 1)²(s + 3)⁴] = [6(b - a)c³/(s + 1)²] · [1/(s + 3)⁴]    (6.1.47)

Since

L⁻¹ [1/(s + 1)²] = te^(-t)

L⁻¹ [1/(s + 3)³] = ½t²e^(-3t)

L⁻¹ [1/(s + 3)⁴] = (1/6)t³e^(-3t)

we have by the convolution theorem

L⁻¹ {2ac³/[(s + 1)²(s + 3)³]} = 2ac³ ∫₀^t (t - u)e^(-(t-u)) · ½u²e^(-3u) du

                              = ac³ ∫₀^t (u²t - u³)e^(-2u-t) du    (6.1.48)

and

L⁻¹ {6(b - a)c³/[(s + 1)²(s + 3)⁴]} = 6(b - a)c³ ∫₀^t (t - u)e^(-(t-u)) · (1/6)u³e^(-3u) du

                                    = c³(b - a) ∫₀^t (u³t - u⁴)e^(-2u-t) du    (6.1.49)

Combining Eqs. (6.1.48) and (6.1.49), we find the original function to be

y₁(t) = cte^(-t) - c³ ∫₀^t [a(u²t - u³) + (b - a)(u³t - u⁴)]e^(-2u-t) du
                                                              (6.1.50)

Integrating Eq. (6.1.50) by parts and simplifying, we find the

first-order corrected solution to be

(6.1.52)

The continued application of this process obviously produces a sequence

of odd exponential functions multiplied by polynomials.

6.2 Solution by Power Series

Another powerful method of determining the solutions of certain

nonlinear differential equations will be given here. The method presented

is an operational adaptation of the one developed by Lindstedt and

Liapounoff. The solution of the nonlinear differential equation is obtained,

as in perturbation theory, by introducing a power series in the parameter

and then applying the Laplace transform to find each term. This operational

method, based on the Laplace transform, for solving certain problems has

been studied extensively by Pipes.¹ The introduction of this operational


process brings to bear extensive tables of transforms thereby reducing

the algebraic labor involved in obtaining practical solutions.

This method can be illustrated by the following equation:

lL. A. Pipes, Operational Methods in Non-linear Mechanics (New


York: Dover Publications, 1965).

Example 2.1: Consider the following nonlinear differential equation

x''(t) + ω²x(t) + λx²(t) = A(t)    (6.2.1)


This equation is used in the theory of seismic waves by Nagaoka. The

effect of the joint action of two simple harmonic forces will be con-

sidered by letting

A(t) = a₁cos ω₁t + a₂cos ω₂t    (6.2.2)

where a, a₁, a₂, ω, ω₁, ω₂, and λ are constants, subject to the initial

conditions

x = a,  x' = 0    at t = 0    (6.2.3)

If we let

y(s) = ∫₀^∞ e^(-st) x(t) dt    (6.2.4)

and to solve (6.2.1) we let

x = Σ_{n=0}^∞ x_n λⁿ    (6.2.5)

where the x_n's, whose transforms we denote by y_n(s), are to be determined.

If Eq. (6.2.5) is squared, we have

(Σ_{n=0}^∞ x_n λⁿ)² = x₀² + 2x₀x₁λ + (x₁² + 2x₀x₂)λ² + ...    (6.2.6)

Taking the Laplace transform of Eq. (6.2.1) we find

s²y(s) - sx(0) - x'(0) + ω²y(s) + λ L x² = a₁s/(s² + ω₁²) + a₂s/(s² + ω₂²)
                                                              (6.2.7)

Upon substituting the initial conditions (6.2.3), and simplifying, we have

y(s) = as/(s² + ω²) + a₁s/[(s² + ω²)(s² + ω₁²)] + a₂s/[(s² + ω²)(s² + ω₂²)] - λ L x²/(s² + ω²)

The transform of Eq. (6.2.5) is

y(s) = Σ_{n=0}^∞ y_n(s) λⁿ    (6.2.8)
Upon substituting Eqs. (6.2.5) and (6.2.6) into Eq. (6.2.7), we find that

Σ_{n=0}^∞ y_n(s)λⁿ = as/(s² + ω²) + a₁s/[(s² + ω²)(s² + ω₁²)]

    + a₂s/[(s² + ω²)(s² + ω₂²)] - λ L(x₀ + x₁λ + x₂λ² + ...)²/(s² + ω²)    (6.2.9)

If we only consider powers of λ no higher than the third, then when λ is

raised to the zero, first, second, and third powers we have

y₀(s) = as/(s² + ω²) + a₁s/[(s² + ω²)(s² + ω₁²)] + a₂s/[(s² + ω²)(s² + ω₂²)]    (6.2.10)

y₁(s) = -L x₀²/(s² + ω²)    (6.2.11)

y₂(s) = -L 2x₀x₁/(s² + ω²)    (6.2.12)

y₃(s) = -L (x₁² + 2x₀x₂)/(s² + ω²)    (6.2.13)
With the y_n's thus identified, the solution is

x = Σ_{n=0}^∞ x_n λⁿ = Σ_{n=0}^∞ L⁻¹ y_n(s) λⁿ    (6.2.14)
Now we shall use the notation

T(α) = 1/(s² + α²)    (6.2.15)

then Eq. (6.2.10) may be written in the form

y₀(s) = asT(ω) + a₁sT(ω)T(ω₁) + a₂sT(ω)T(ω₂)    (6.2.16)

Inverting we have

x₀(t) = a cos ωt + a₁(cos ωt - cos ω₁t)/(ω₁² - ω²) + a₂(cos ωt - cos ω₂t)/(ω₂² - ω²)    (6.2.17)

If we let

A₁ = a₁/(ω² - ω₁²),   A₂ = a₂/(ω² - ω₂²),   A = a - (A₁ + A₂)

then we can rewrite Eq. (6.2.17) as

x₀(t) = A cos ωt + A₁ cos ω₁t + A₂ cos ω₂t    (6.2.18)

This represents the first approximation of the solution of Eq. (6.2.1)

subjected to the conditions (6.2.3). The transformed second approximation

is given by Eq. (6.2.11), and may be written in the form

y₁(s) = -T(ω) L x₀²    (6.2.19)

Now we must calculate x₀²:

x₀² = A²cos² ωt + 2AA₁ cos ωt cos ω₁t + 2AA₂ cos ωt cos ω₂t

    + A₁²cos² ω₁t + 2A₁A₂ cos ω₁t cos ω₂t + A₂²cos² ω₂t    (6.2.20)



Then we find the Laplace transform to be

L x₀² = ½A²[sT(2ω) + 1/s] + ½A₁²[sT(2ω₁) + 1/s] + ½A₂²[sT(2ω₂) + 1/s]

    + sAA₁[T(ω + ω₁) + T(ω - ω₁)] + sAA₂[T(ω + ω₂) + T(ω - ω₂)]

    + sA₁A₂[T(ω₁ + ω₂) + T(ω₁ - ω₂)]    (6.2.21)
Now upon substituting Eq. (6.2.21) into Eq. (6.2.19), computing the

inverse transform by means of the tables with

L⁻¹ [T(ω)/s] = (1 - cos ωt)/ω²,   L⁻¹ [sT(ω)T(β)] = (cos ωt - cos βt)/(β² - ω²),  β ≠ ω,

and simplifying the results, we find that

x₁(t) = -½A²[(cos ωt - cos 2ωt)/(3ω²) + (1 - cos ωt)/ω²]

    - ½A₁²[(cos ωt - cos 2ω₁t)/(4ω₁² - ω²) + (1 - cos ωt)/ω²]

    - ½A₂²[(cos ωt - cos 2ω₂t)/(4ω₂² - ω²) + (1 - cos ωt)/ω²]

    - AA₁[(cos ωt - cos(ω + ω₁)t)/((ω + ω₁)² - ω²) + (cos ωt - cos(ω - ω₁)t)/((ω - ω₁)² - ω²)]

    - AA₂[(cos ωt - cos(ω + ω₂)t)/((ω + ω₂)² - ω²) + (cos ωt - cos(ω - ω₂)t)/((ω - ω₂)² - ω²)]

    - A₁A₂[(cos ωt - cos(ω₁ + ω₂)t)/((ω₁ + ω₂)² - ω²) + (cos ωt - cos(ω₁ - ω₂)t)/((ω₁ - ω₂)² - ω²)]
                                                              (6.2.22)
A second correction x₂(t) is obtained in a like manner.


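
The perturbation idea can be exercised numerically on a simplified instance of (6.2.1). The sketch below (with constants chosen here, not taken from the text) uses x'' + x + λx² = cos 2t with zero initial conditions, for which the zero-order solution is x₀(t) = (cos t - cos 2t)/3; the first correction solves x₁'' + x₁ = -x₀², and x₀ + λx₁ is compared against a direct Runge-Kutta solution:

```python
import math

lam = 0.02   # assumed small parameter

def x0(t):
    # zero-order solution of x0'' + x0 = cos 2t, x0(0) = x0'(0) = 0
    return (math.cos(t) - math.cos(2 * t)) / 3.0

def rk4(deriv, state, t, h):
    # one classical Runge-Kutta step for a first-order system
    k1 = deriv(t, state)
    k2 = deriv(t + h/2, [s + h/2 * k for s, k in zip(state, k1)])
    k3 = deriv(t + h/2, [s + h/2 * k for s, k in zip(state, k2)])
    k4 = deriv(t + h, [s + h * k for s, k in zip(state, k3)])
    return [s + h/6 * (p + 2*q + 2*r + w)
            for s, p, q, r, w in zip(state, k1, k2, k3, k4)]

def full(t, st):       # the full nonlinear equation
    x, v = st
    return [v, math.cos(2 * t) - x - lam * x * x]

def first(t, st):      # the correction equation x1'' + x1 = -x0²
    x, v = st
    return [v, -x0(t) ** 2 - x]

T, N = 5.0, 5000
h = T / N
exact, corr = [0.0, 0.0], [0.0, 0.0]
for i in range(N):
    t = i * h
    exact = rk4(full, exact, t, h)
    corr = rk4(first, corr, t, h)

approx = x0(T) + lam * corr[0]
assert abs(approx - exact[0]) < 2e-3                    # O(lambda²) error
assert abs(x0(T) - exact[0]) > abs(approx - exact[0])   # improves on x0
print("perturbation series matches to O(lambda^2)")
```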
CHAPTER VII

APPLICATION TO INTEGRAL EQUATIONS

OF THE CONVOLUTION TYPE

An integral equation is an equation in which the function to be

determined appears under an integral sign. The importance of integral

equations in mathematical applications to physical problems lies in the

fact that it is usually possible to reformulate a differential equation

together with its boundary conditions as a single integral equation.

Integral equations of most frequent occurrence in practice are conven-

tionally divided into two classifications. First, an integral equation of

the form

φ(x)y(x) = F(x) + λ ∫ₐᵇ K(x,t)y(t) dt    (7.1)

where φ, F, and K are given functions and λ, a, and b are constants, is

known as a Fredholm equation. The function y(x) is to be determined.

The given function K(x,t), which depends upon the current variable as well

as the auxiliary variable t, is known as the kernel of the integral equa

tion. If the upper limit of the integral is not a constant, but is

identified instead with the current variable, the equation takes the form

φ(x)y(x) = F(x) + λ ∫ₐˣ K(x,t)y(t) dt    (7.2)

and is known as a Volterra equation.


When φ ≠ 0, the above equation involves the unknown function

y(t), both inside and outside the integral. In the special case when

φ = 0, the unknown function appears only under the integral sign, and the

equation is known as a Fredholm integral equation of the first kind, while

in the case when φ = 1 the equation is said to be of the second kind.

Equations (7.1) and (7.2) have one thing in common; they are both

linear integral equations. That is, the function y enters the equation

in a linear manner so that

∫ₐᵇ K(x,t)[c₁y₁(t) + c₂y₂(t)] dt

    = c₁ ∫ₐᵇ K(x,t)y₁(t) dt + c₂ ∫ₐᵇ K(x,t)y₂(t) dt    (7.3)

If the integral were replaced by the more general

∫ₐᵇ K(x,t,y(t)) dt    (7.4)

one would call the equation nonlinear. Equation (6.1.11) or

∫ₐᵇ K(x,t)y²(t) dt    (7.5)

are typical examples of such operators.

We will now discuss certain types of integral equations that can

be solved by the use of the Laplace transformation.

7.1 Integral Equations of the Convolution Type

A special integral equation of importance in applications is Vol-

terra's integral equation of the second kind with a difference kernel.

The equation is of the form


    Y(t) = F(t) + ∫_0^t K(t − u)Y(u) du          (7.1.1)

where K(t - u) and F(t) are known functions and the function Y(t) is to

be determined. This equation is known in practice as the convolution

type, which can be written as

    Y(t) = F(t) + K(t)*Y(t)          (7.1.2)

where * indicates the operation of convolution. The unknown function

Y(t), and hence the solution of (7.1.1) may be readily found by the use

of the Laplace transformation. To solve (7.1.1) we shall introduce the

following transforms:

    L{F(t)} = f(s)          (7.1.3)

    L{Y(t)} = y(s)

    L{K(t)} = k(s)

Then, making use of the convolution theorem, we have

    L{∫_0^t K(t − u)Y(u) du} = k(s)y(s)          (7.1.4)

Now upon taking the Laplace transformation of Eq. (7.1.1), assuming that

both f(s) and k(s) exist, we find

y(s) = f(s) + k(s)y(s) (7.1.5)


This equation may be solved for y(s) in the form

    y(s) = f(s)/(1 − k(s))          (7.1.6)

The required solution may be found by inversion; therefore

    Y(t) = L⁻¹{f(s)/(1 − k(s))}          (7.1.7)

In the form (7.1.6), y(s) cannot always be immediately transformed back into the

original function Y(t). We can write y(s) in the form

    y(s) = f(s) + [k(s)/(1 − k(s))] f(s)          (7.1.8)

and it can be shown that an original function Q(t) always corresponds

to the function

    q(s) = k(s)/(1 − k(s))          (7.1.9)

and thus (7.1.8) can be transformed into the original function, giving

Y(t) = F(t) + Q(t)*F(t) (7.1.10)

This solution, when it is written in the form

    F(t) = Y(t) + (−Q(t))*F(t)          (7.1.11)

has the same form as the original integral equation, except that the

roles of Y(t) and F(t) are now interchanged and in place of the kernel

K(t) we have the 'reciprocal kernel' −Q(t).
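The reciprocal-kernel relation can be checked numerically. The sketch below is a minimal illustration (standard-library Python only; the kernel K(t) = e^{−t}, the forcing term F(t) = t, and the helper name simpson are choices made for this demonstration, not part of the text). For this kernel k(s) = 1/(s + 1), so q(s) = k/(1 − k) = 1/s and the reciprocal kernel is Q(t) = 1; the code builds Y = F + Q*F from (7.1.10) and confirms that the residual of the original equation (7.1.1) vanishes:

```python
import math

# Composite Simpson rule on [0, b]; n must be even.
def simpson(f, b, n=200):
    if b == 0.0:
        return 0.0
    h = b / n
    acc = f(0.0) + f(b)
    for i in range(1, n):
        acc += (4 if i % 2 else 2) * f(i * h)
    return acc * h / 3.0

# Illustrative kernel K(t) = exp(-t): k(s) = 1/(s + 1), so by (7.1.9)
# q(s) = k/(1 - k) = 1/s, i.e. the reciprocal kernel is Q(t) = 1.
F = lambda t: t                 # illustrative forcing term
K = lambda t: math.exp(-t)
Q = lambda t: 1.0

# Eq. (7.1.10): Y = F + Q*F
def Y(t):
    return F(t) + simpson(lambda u: Q(t - u) * F(u), t)

# Residual of the original equation (7.1.1): Y - F - K*Y
def residual(t):
    return Y(t) - F(t) - simpson(lambda u: K(t - u) * Y(u), t)

for t in (0.5, 1.0, 2.0):
    print(t, residual(t))       # residuals are essentially zero
```

Here Y(t) works out to t + t²/2, in agreement with inverting y(s) = f/(1 − k) = (s + 1)/s³ directly.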

We will now consider some examples to illustrate this procedure.

Example 1.1: Solve the integral equation

    Y(t) = at + ∫_0^t sin(t − u)Y(u) du          (7.1.12)

The integral equation can be written in the form

Y(t) = at + Y(t)*sin t (7.1.13)

Taking the Laplace transformation and using the convolution theorem, we

find that

    y(s) = a/s² + y(s)/(s² + 1)          (7.1.14)


Solving, we obtain

    y(s) = a(1/s² + 1/s⁴)          (7.1.15)

Upon consulting the table of transforms, we find

    Y(t) = a(t + t³/6)          (7.1.16)

which can be verified directly as the solution of the integral equation

(7.1.12).
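That verification can also be carried out numerically; a minimal sketch (standard-library Python only; the value a = 2 and the helper name simpson are arbitrary choices for the demonstration):

```python
import math

# Composite Simpson rule on [0, b]; n must be even.
def simpson(f, b, n=200):
    if b == 0.0:
        return 0.0
    h = b / n
    acc = f(0.0) + f(b)
    for i in range(1, n):
        acc += (4 if i % 2 else 2) * f(i * h)
    return acc * h / 3.0

a = 2.0                               # arbitrary constant in (7.1.12)
Y = lambda t: a * (t + t**3 / 6)      # candidate solution (7.1.16)

# Residual of Y(t) = a*t + ∫0..t sin(t - u) Y(u) du
for t in (0.5, 1.0, 3.0):
    rhs = a * t + simpson(lambda u: math.sin(t - u) * Y(u), t)
    print(t, Y(t) - rhs)              # essentially zero
```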

Example 1.2: Solve the following nonlinear equation:

    Y(t) = ½ sin 2t + ∫_0^t Y(t − u)Y(u) du          (7.1.17)
We can write the integral equation in the form

    Y(t) = ½ sin 2t + Y(t)*Y(t)          (7.1.18)

Then taking the Laplace transform and using the convolution theorem, we find

that

    y(s) = 1/(s² + 4) + (y(s))²          (7.1.19)
Solving, we obtain

    (y(s))² − y(s) + 1/(s² + 4) = 0          (7.1.20)
We obtain the image solutions

    y(s) = ½ [1 ± s/√(s² + 4)]          (7.1.21)

Thus

    y(s) = ½ (√(s² + 4) − s)/√(s² + 4)          (7.1.22)

and

    y(s) = ½ (√(s² + 4) + s)/√(s² + 4)          (7.1.23)

From Eq. (7.1.22) we find the solution

    Y(t) = L⁻¹{y(s)} = J₁(2t)          (7.1.24)

Then Eq. (7.1.23) can be written as

    y(s) = 1 − ½ [1 − s/√(s² + 4)]          (7.1.25)

        = 1 − ½ (√(s² + 4) − s)/√(s² + 4)          (7.1.26)

Hence a second solution is

    Y(t) = δ(t) − J₁(2t)          (7.1.27)

where δ(t) is the Dirac delta function, which vanishes when t ≠ 0 but

is infinitely great at t = 0. Therefore the solution (7.1.24) is the one that is con-

tinuous and bounded for t ≥ 0.
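The continuous solution (7.1.24) can be checked numerically. The sketch below (standard-library Python; the 30-term series truncation and the helper names are ad hoc choices adequate for small arguments) evaluates J₁ from its power series and confirms that Y(t) = J₁(2t) satisfies Y = ½ sin 2t + Y*Y:

```python
import math

# Composite Simpson rule on [0, b]; n must be even.
def simpson(f, b, n=200):
    if b == 0.0:
        return 0.0
    h = b / n
    acc = f(0.0) + f(b)
    for i in range(1, n):
        acc += (4 if i % 2 else 2) * f(i * h)
    return acc * h / 3.0

def j1(x, terms=30):
    # Bessel function J1 from its power series; adequate for small |x|
    return sum((-1)**m / (math.factorial(m) * math.factorial(m + 1))
               * (x / 2)**(2 * m + 1) for m in range(terms))

Y = lambda t: j1(2 * t)               # candidate solution (7.1.24)

# Residual of Y(t) = (1/2) sin 2t + ∫0..t Y(t - u) Y(u) du
for t in (0.5, 1.0):
    rhs = 0.5 * math.sin(2 * t) + simpson(lambda u: Y(t - u) * Y(u), t)
    print(t, Y(t) - rhs)              # essentially zero
```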

Example 1.3: Convert the following differential equation to an integral

equation, and then solve the integral equation:

    Y''(t) − Y'(t) − Y(t) = 0          (7.1.28)

subject to the initial conditions

    Y(0) = 0 and Y'(0) = 1          (7.1.29)


Integrating both sides of Eq. (7.1.28) from 0 to t, we have

    ∫_0^t [Y''(u) − Y'(u) − Y(u)] du = 0          (7.1.30)

or

    Y'(t) − Y'(0) − Y(t) + Y(0) − ∫_0^t Y(u) du = 0          (7.1.31)

Considering the initial condition Y(0) = 0, we have

    Y'(t) − Y'(0) = Y(t) + ∫_0^t Y(u) du          (7.1.32)

Integrating a second time from 0 to t, we have

    Y(t) − Y(0) − Y'(0)t = ∫_0^t Y(u) du + ∫_0^t (t − u)Y(u) du          (7.1.33)

Considering the initial conditions again, we find

    Y(t) − t = ∫_0^t Y(u) du + ∫_0^t (t − u)Y(u) du          (7.1.34)

Therefore we obtain the following integral equation:

    Y(t) = t + ∫_0^t Y(u) du + ∫_0^t (t − u)Y(u) du          (7.1.35)

We can write the above equation in the form

    Y(t) = t + 1*Y(t) + t*Y(t)          (7.1.36)

Taking the Laplace transform, we find

    y(s) = 1/s² + y(s)/s + y(s)/s²          (7.1.37)


Solving, we obtain

    y(s) = 1/(s² − s − 1)          (7.1.38)

We can write Eq. (7.1.38) as

    y(s) = 1/[(s − ½)² − 5/4]          (7.1.39)

Hence

    y(s) = (2/√5) (√5/2)/[(s − ½)² − (√5/2)²]          (7.1.40)

If we let

    a = √5/2 and b = ½

then Eq. (7.1.40) becomes

    y(s) = (1/a) a/[(s − b)² − a²]          (7.1.41)

Then consulting the tables of transforms, we obtain the following solution:

    Y(t) = (1/a) e^{bt} sinh at

therefore

    Y(t) = (2/√5) e^{t/2} sinh(√5 t/2)          (7.1.42)
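A quick numerical check that (7.1.42) satisfies the integral equation (7.1.35) (standard-library Python only; the helper name simpson is an ad hoc choice):

```python
import math

# Composite Simpson rule on [0, b]; n must be even.
def simpson(f, b, n=200):
    if b == 0.0:
        return 0.0
    h = b / n
    acc = f(0.0) + f(b)
    for i in range(1, n):
        acc += (4 if i % 2 else 2) * f(i * h)
    return acc * h / 3.0

# Candidate solution (7.1.42)
Y = lambda t: (2 / math.sqrt(5)) * math.exp(t / 2) * math.sinh(math.sqrt(5) * t / 2)

# Residual of Y(t) = t + ∫0..t Y(u) du + ∫0..t (t - u) Y(u) du   (7.1.35)
for t in (0.5, 1.0, 2.0):
    rhs = t + simpson(Y, t) + simpson(lambda u: (t - u) * Y(u), t)
    print(t, Y(t) - rhs)              # essentially zero
```

One can also confirm directly that this Y satisfies Y'' − Y' − Y = 0 with Y(0) = 0 and Y'(0) = 1.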

7.2 Abel's Integral Equations

An important integral equation of the convolution type is Abel's

integral equation, which takes the following form:

    F(t) = ∫_0^t (t − u)^{−n} Y(u) du          (7.2.1)

where n is a constant such that 0 < n < 1.

Equation (7.2.1) is classified as a Volterra integral equation of

the first kind with a difference kernel, a special case of Eq.

(7.2), and can be written in the form

    F(t) = ∫_0^t K(t − u)Y(u) du          (7.2.2)

where the kernel K(t − u) and the function F(t) are known, and it is

required to determine the unknown function Y(t).

The transformed equation of Eq. (7.2.2) is

    f(s) = k(s)y(s)          (7.2.3)

Its solution is

    y(s) = f(s)/k(s)          (7.2.4)

In order to obtain a solution to Abel's equation from (7.2.4) we

use

    L{K(t)} = L{t^{−n}} = s^{n−1} Γ(1 − n) = k(s)          (7.2.5)

and obtain the image equation for y(s)

    s^{n−1} Γ(1 − n) y(s) = f(s)          (7.2.6)

with the solution

    y(s) = s [s^{−n} f(s)] / Γ(1 − n)          (7.2.7)

Since

    L⁻¹{s^{−n}} = t^{n−1}/Γ(n)          (7.2.8)

the corresponding original function is

    Y(t) = [1/(Γ(n)Γ(1 − n))] d/dt [t^{n−1}*F(t)]          (7.2.9)

From the well-known formula

    1/(Γ(n)Γ(1 − n)) = (sin nπ)/π          (7.2.10)

and application of the convolution theorem, we obtain

    Y(t) = ((sin nπ)/π) d/dt ∫_0^t F(u)/(t − u)^{1−n} du          (7.2.11)

which is the solution of Abel's integral equation where 0 < n < 1.
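Formula (7.2.11) can be tested on a concrete case. For F(t) = t and n = ½ it gives Y(t) = (1/π) d/dt [(4/3)t^{3/2}] = (2/π)√t, and the sketch below (standard-library Python; the substitution u = t − v², worked out by hand, removes the endpoint singularity before quadrature, and the helper names are ad hoc) confirms that this Y reproduces F when substituted back into (7.2.1):

```python
import math

# Composite Simpson rule on [0, b]; n must be even.
def simpson(f, b, n=4000):
    if b == 0.0:
        return 0.0
    h = b / n
    acc = f(0.0) + f(b)
    for i in range(1, n):
        acc += (4 if i % 2 else 2) * f(i * h)
    return acc * h / 3.0

F = lambda t: t                                         # given F in (7.2.1), n = 1/2
Y = lambda t: (2 / math.pi) * math.sqrt(max(t, 0.0))    # from (7.2.11); max() guards round-off

def abel(Y, t):
    # ∫0..t Y(u) (t - u)^(-1/2) du; with u = t - v**2 this becomes
    # 2 ∫0..sqrt(t) Y(t - v**2) dv, which has no singular integrand.
    return 2 * simpson(lambda v: Y(t - v * v), math.sqrt(t))

for t in (0.5, 1.0, 2.0):
    print(t, abel(Y, t) - F(t))       # small residuals (quadrature error only)
```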

We will now consider a special case of Abel's equation.

Example 2.1: Solve the integral equation

    ∫_0^t Y(u)(t − u)^{−1/2} du = 1 + t + t²          (7.2.12)

The equation can be written in the following form:

    Y(t)*t^{−1/2} = 1 + t + t²          (7.2.13)

Then taking the Laplace transform, we find

    y(s)√(π/s) = 1/s + 1/s² + 2/s³          (7.2.14)

Solving, we have

    y(s) = (1/√π)(1/s^{1/2} + 1/s^{3/2} + 2/s^{5/2})          (7.2.15)

Inverting, we find that

    Y(t) = (1/π)(t^{−1/2} + 2t^{1/2} + (8/3)t^{3/2})          (7.2.16)

Simplifying,

    Y(t) = (3 + 6t + 8t²)/(3π t^{1/2})          (7.2.17)
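The solution (7.2.17) can be confirmed numerically. With the substitution u = t sin²θ (worked out by hand), the Abel integral in (7.2.12) becomes the smooth integral ∫_0^{π/2} (2/(3π))(3 + 6u + 8u²) dθ with u = t sin²θ, which ordinary quadrature handles easily (standard-library Python; helper names are ad hoc):

```python
import math

# Composite Simpson rule on [0, b]; n must be even.
def simpson(f, b, n=200):
    if b == 0.0:
        return 0.0
    h = b / n
    acc = f(0.0) + f(b)
    for i in range(1, n):
        acc += (4 if i % 2 else 2) * f(i * h)
    return acc * h / 3.0

# Left side of (7.2.12) after u = t*sin(theta)**2; the substitution
# cancels both square-root singularities, leaving a smooth integrand.
def lhs(t):
    def g(th):
        u = t * math.sin(th)**2
        return (2 / (3 * math.pi)) * (3 + 6 * u + 8 * u * u)
    return simpson(g, math.pi / 2)

for t in (0.5, 1.0, 2.0):
    print(t, lhs(t) - (1 + t + t * t))     # essentially zero
```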
APPENDIX A

TABLES OF LAPLACE TRANSFORMS

This Appendix contains common Laplace transforms which were implicitly

encountered in this research. A table of transforms sufficiently exten-

sive for most practical purposes is given by Pipes and Harvill.*

*L. A. Pipes and L. R. Harvill, Applied Mathematics for Engineers and

Physicists, Appendix A, pp. 764-782.

Table 1 Theorems

F(s)                                              f(t)

F(s) = ∫_0^∞ f(t)e^{−st} dt                       f(t) = (1/2πi) ∫_{c−i∞}^{c+i∞} F(s)e^{st} ds

k/s                                               k,  k real

kF(s)                                             kf(t)

sF(s) − f(0)                                      df/dt

s²F(s) − sf(0) − f'(0)                            d²f/dt²

F₁(s)F₂(s)                                        ∫_0^t f₂(v)f₁(t − v) dv

e^{−as}F(s)                                       f(t − a)u(t − a),  a real > 0;  u(t) = Heaviside unit step

lim_{s→0} F(s)                                    ∫_0^∞ f(t) dt

(−1)ⁿ dⁿF(s)/dsⁿ                                  tⁿf(t),  n = integer

∂F(x,s)/∂x                                        ∂f(x,t)/∂x

∫_s^∞ F(s) ds                                     f(t)/t
APPENDIX B

TABLES OF TRANSFORMS
Table 2 Table of Transforms

F(s)                                              f(t)

1/s                                               u(t) = Heaviside step function:  0 for t < 0,  ½ at t = 0,  1 for t > 0

1                                                 δ(t) = Dirac delta function:  δ(t) = 0 for t ≠ 0,  ∫_{−∞}^∞ δ(t) dt = 1

(1/s) e^{−as}                                     u(t − a):  0 for t < a,  ½ at t = a,  1 for t > a;  a real

n!/s^{n+1}                                        tⁿ,  n = 1, 2, 3, …

Γ(n + 1)/s^{n+1}                                  tⁿ,  n = all values except negative integers;  Γ(x) = gamma function

1/(s + a)                                         e^{−at},  a real or complex

s/(s + a)                                         δ(t) − a e^{−at}

1/[s(s + a)]                                      (1/a)(1 − e^{−at})

1/(s² + a²)                                       (1/a) sin at

s/(s² + a²)                                       cos at

1/(s² − a²)                                       (1/a) sinh at

s/(s² − a²)                                       cosh at

1/(s + a)²                                        t e^{−at}

a/[(s + b)² + a²]                                 e^{−bt} sin at

(s + b)/[(s + b)² + a²]                           e^{−bt} cos at

ωT(ω),  where T(ω) ≡ 1/(s² + ω²)                  sin ωt

(3ω/4)[T(ω) − T(3ω)]                              sin³ ωt

(1/2s)[1 − s²T(2ω)]                               sin² ωt

sT(ω)                                             cos ωt

(1/2s)[1 + s²T(2ω)]                               cos² ωt

(s/4)[3T(ω) + T(3ω)]                              cos³ ωt

½[(A + B)T(A + B) + (A − B)T(A − B)]              sin At cos Bt

(s/2)[T(A + B) + T(A − B)]                        cos At cos Bt

(s/2)[T(A − B) − T(A + B)]                        sin At sin Bt

(b² − a²) s T(a)T(b)                              cos at − cos bt

e^{−a√s}                                          (a/(2√(πt³))) e^{−a²/4t}

1/√(s + a)                                        e^{−at}/√(πt)

1/√(s² + a²)                                      J₀(at)

(√(s² + a²) − s)ⁿ/√(s² + a²)                      aⁿ Jₙ(at)

(1/s) e^{−a√s}                                    1 − erf(a/(2√t))
BIBLIOGRAPHY

Cooper, J. L. B. "Heaviside and the Operational Calculus." Math. Gazette,
    vol. 36 (1952).

Churchill, R. V. Fourier Series and Boundary Value Problems. New York:
    McGraw-Hill Book Company, 1969.

Churchill, R. V. Operational Mathematics. New York: McGraw-Hill Book
    Company, 1972.

Cunningham, W. J. Nonlinear Analysis. New York: McGraw-Hill Book
    Company, 1970.

Doetsch, G. A Guide to the Application of the Laplace and Z-Transformations.
    Springer, 1968.

Drake, Reuben C. Application of the Laplace Transformation to the
    Riccati Equation. Atlanta: Atlanta University, 1964.

Ford, Lester R. Differential Equations. New York: McGraw-Hill Book
    Company, 1933.

Harvill, L. R., and Pipes, L. A. Applied Mathematics for Engineers and
    Physicists. New York: McGraw-Hill Book Company, 1970.

Hildebrand, Francis. Methods of Applied Mathematics. New Jersey: Pren-
    tice-Hall, 1965.

Hochstadt, Harry. Integral Equations. New York: John Wiley and Sons,
    1973.

Jaeger, J. C., and Newstead, G. H. An Introduction to the Laplace Trans-
    formation with Engineering Applications. London: Associated Book
    Publishers, Ltd., 1969.

Berg, Lothar. Introduction to the Operational Calculus. New York: John
    Wiley and Sons, Inc., 1967.

McLachlan, N. W. Modern Operational Calculus with Applications to Tech-
    nical Mathematics. New York: Dover Publications, Inc., 1962.

Selby, Samuel. Standard Mathematical Tables. Ohio: The Chemical Rubber
    Company, 1965.

Stakgold, Ivar. Boundary Value Problems of Mathematical Physics. New
    York: Macmillan Company, 1970.