
CHAPTER 6

THE LAPLACE TRANSFORM AND MODELLING

Definition. Let f be a function defined for all $t \ge 0$. The Laplace transform of f is the function $F(s)$ defined by

$$F(s) = \mathcal{L}(f) = \int_0^{\infty} e^{-st} f(t)\,dt, \qquad (1)$$

provided the improper integral on the right exists.

The original function $f(t)$ in (1) is called the inverse transform or inverse of $F(s)$ and is denoted by $\mathcal{L}^{-1}(F)$; i.e.,

$$f(t) = \mathcal{L}^{-1}(F(s)).$$

Notation: Original functions are denoted by lower case letters and their Laplace transforms by the same

letters in capitals. Thus F (s) = L(f (t)), Y (s) = L(y (t)) etc.

Recall that, by definition, for any function h defined on $[0, \infty)$,

$$\int_0^{\infty} h(t)\,dt = \lim_{b \to \infty} \int_0^{b} h(t)\,dt,$$

and the integral is said to converge if this limit exists. Because $e^{-st}$ decreases so rapidly with t, the Laplace transform usually does exist [there are exceptions, however], and then we say that the function f HAS A WELL-DEFINED LAPLACE TRANSFORM.

Example 1. Let $f(t) = e^{at}$ for $t \ge 0$. Find $F(s)$.

Solution.
$$F(s) = \mathcal{L}(e^{at}) = \int_0^{\infty} e^{-st} e^{at}\,dt = \lim_{b \to \infty} \int_0^{b} e^{-st} e^{at}\,dt.$$

Now
$$\int_0^{b} e^{(a-s)t}\,dt = \begin{cases} b & \text{if } s = a \\[4pt] \dfrac{e^{(a-s)b} - 1}{a - s} & \text{if } s \ne a. \end{cases}$$

If $s < a$, then $a - s > 0$ and $e^{(a-s)b} \to \infty$ as $b \to \infty$. Thus when $s \le a$, $\int_0^{\infty} e^{(a-s)t}\,dt$ diverges. When $s > a$, $a - s < 0$, and $e^{(a-s)b} \to 0$ as $b \to \infty$, and then

$$F(s) = \mathcal{L}(e^{at}) = \frac{1}{s-a}, \quad s > a. \qquad (2)$$
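If you want to check results like (2) for yourself, a computer algebra system will do it in a couple of lines. Here is a minimal sketch using SymPy (SymPy is not part of these notes, just one convenient tool; any CAS will do):

```python
from sympy import symbols, exp, laplace_transform

t, s = symbols('t s', positive=True)
a = symbols('a', real=True)

# Laplace transform of e^{a t}; noconds=True suppresses the convergence condition s > a
F = laplace_transform(exp(a*t), t, s, noconds=True)
print(F)   # 1/(s - a), possibly printed as 1/(-a + s)
```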

Example 2. Let $f(t) = 1$, $t \ge 0$. Find $F(s)$.

Solution. This function is the same as the one in Example 1 with $a = 0$; thus

$$\mathcal{L}(1) = \frac{1}{s}, \quad s > 0. \qquad (3)$$

The Laplace transform is a linear operation; i.e., the Laplace transform of a linear combination of functions

equals the same linear combination of their Laplace transforms. Thus

Theorem.

$$\mathcal{L}(af(t) + bg(t)) = a\mathcal{L}(f) + b\mathcal{L}(g), \qquad (4)$$

where a and b are constants.

As a corollary, the inverse Laplace transform also satisfies the linearity property:

$$\mathcal{L}^{-1}(aF(s) + bG(s)) = a\mathcal{L}^{-1}(F) + b\mathcal{L}^{-1}(G). \qquad (5)$$

Verification of (4) and (5) is easy.

Example 3. Using (4), we obtain
$$\mathcal{L}(\cosh at) = \mathcal{L}\!\left(\tfrac{1}{2}(e^{at} + e^{-at})\right) = \frac{1}{2}\left(\frac{1}{s-a} + \frac{1}{s+a}\right) = \frac{s}{s^2 - a^2}, \quad s > a \ge 0.$$

Example 4. If $F(s) = \dfrac{3}{s} + \dfrac{5}{s-7}$, find $f(t) = \mathcal{L}^{-1}(F)$.

Solution. Using (5), we have
$$\mathcal{L}^{-1}(F) = 3\,\mathcal{L}^{-1}\!\left(\frac{1}{s}\right) + 5\,\mathcal{L}^{-1}\!\left(\frac{1}{s-7}\right) = 3 \cdot 1 + 5 \cdot e^{7t} = 3 + 5e^{7t}.$$
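The linearity-based inverse transform in Example 4 can be checked the same way; a short SymPy sketch (the Heaviside(t) factors it prints are just 1 for t > 0):

```python
from sympy import symbols, inverse_laplace_transform

t, s = symbols('t s', positive=True)

f = inverse_laplace_transform(3/s + 5/(s - 7), s, t)
print(f)   # 3*Heaviside(t) + 5*exp(7*t)*Heaviside(t)
```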

Example 5. Set $a = iw$ in formula (2):
$$\mathcal{L}(e^{iwt}) = \mathcal{L}(\cos wt + i\sin wt) = \mathcal{L}(\cos wt) + i\,\mathcal{L}(\sin wt) = \frac{1}{s - iw} = \frac{s + iw}{s^2 + w^2}.$$

Equating real and imaginary parts, we get
$$\mathcal{L}(\cos wt) = \frac{s}{s^2 + w^2}, \qquad \mathcal{L}(\sin wt) = \frac{w}{s^2 + w^2}. \qquad (6,\,7)$$
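Formulas (6) and (7) can likewise be confirmed symbolically; a small SymPy check, assuming a positive frequency w:

```python
from sympy import symbols, sin, cos, laplace_transform

t, s, w = symbols('t s w', positive=True)

print(laplace_transform(cos(w*t), t, s, noconds=True))   # s/(s**2 + w**2)
print(laplace_transform(sin(w*t), t, s, noconds=True))   # w/(s**2 + w**2)
```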

Example 6. To show that $\mathcal{L}(t^n) = \dfrac{n!}{s^{n+1}}$, $n = 0, 1, 2, \ldots$.

Solution.
$$\mathcal{L}(t^n) = \int_0^{\infty} e^{-st} t^n\,dt = -\frac{1}{s} e^{-st} t^n \Big|_0^{\infty} + \frac{n}{s}\int_0^{\infty} e^{-st} t^{n-1}\,dt.$$

The first term is zero at $t = 0$ and as $t \to \infty$. Thus, using induction,
$$\mathcal{L}(t^n) = \frac{n}{s}\mathcal{L}(t^{n-1}) = \frac{n(n-1)\cdots 1}{s^n}\mathcal{L}(1) = \frac{n!}{s^{n+1}}. \qquad (8)$$
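Formula (8) is easy to spot-check for particular values of n (a concrete loop is enough for our purposes); a minimal SymPy sketch:

```python
from sympy import symbols, factorial, laplace_transform, simplify

t, s = symbols('t s', positive=True)

for n in range(5):
    F = laplace_transform(t**n, t, s, noconds=True)
    assert simplify(F - factorial(n)/s**(n + 1)) == 0   # agrees with n!/s^(n+1)
print("formula (8) checked for n = 0,...,4")
```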

Example 7. Given $F(s) = \dfrac{2s+5}{s^2+9}$, find $\mathcal{L}^{-1}(F(s))$.

Solution.
$$\mathcal{L}^{-1}\!\left(\frac{2s+5}{s^2+9}\right) = \mathcal{L}^{-1}\!\left(\frac{2s}{s^2+9} + \frac{5}{s^2+9}\right) = 2\,\mathcal{L}^{-1}\!\left(\frac{s}{s^2+9}\right) + \frac{5}{3}\,\mathcal{L}^{-1}\!\left(\frac{3}{s^2+9}\right) = 2\cos 3t + \frac{5}{3}\sin 3t.$$

PIECEWISE CONTINUOUS FUNCTIONS

One of the nice things about Laplace transforms is that they are defined by integration, and, unlike differentiation, integration doesn't care whether the function is continuous or not. In fact it is easy to define the Laplace transform of a function whose graph is broken up into pieces. More formally:

A function $f(t)$ defined for $t \ge 0$ has a jump discontinuity at $a \in [0, \infty)$ if the one-sided limits
$$\lim_{t \to a^-} f(t) = \ell^- \quad \text{and} \quad \lim_{t \to a^+} f(t) = \ell^+$$
exist but f is not continuous at $t = a$.

By definition, a function $f(t)$ is PIECEWISE CONTINUOUS on a finite interval $a \le t \le b$ if jump discontinuities are its only discontinuities. Such a function always has a nice Laplace transform (unless it grows extremely quickly, faster than exponentially).

Transform of derivatives and integrals

Theorem. Suppose that $f(t)$ is continuous and has a well-defined Laplace transform on $[0, \infty)$ and $f'(t)$ is piecewise continuous on $[0, \infty)$. Then $\mathcal{L}(f'(t))$ exists and

$$\mathcal{L}(f') = s\mathcal{L}(f) - f(0), \quad s > a. \qquad (10)$$

Proof. First consider the case when $f'(t)$ is continuous on $[0, \infty)$. Then
$$\mathcal{L}(f') = \int_0^{\infty} e^{-st} f'(t)\,dt = \Big[e^{-st} f(t)\Big]_0^{\infty} + s\int_0^{\infty} e^{-st} f(t)\,dt = \lim_{b \to \infty} e^{-sb} f(b) - f(0) + s\mathcal{L}(f).$$

Since f has a well-defined Laplace transform, the first term on the right is 0 when $s > a$, showing that $\mathcal{L}(f') = s\mathcal{L}(f) - f(0)$ for $s > a$.

If $f'$ is merely piecewise continuous, the proof is quite similar; in this case, the range of integration in the original integral must be split into parts such that $f'$ is continuous in each such part.

Apply (10) to $f''$ to obtain
$$\mathcal{L}(f'') = s\mathcal{L}(f') - f'(0) = s(s\mathcal{L}(f) - f(0)) - f'(0) = s^2\mathcal{L}(f) - sf(0) - f'(0). \qquad (11)$$

Similarly $\mathcal{L}(f''') = s^3\mathcal{L}(f) - s^2 f(0) - sf'(0) - f''(0)$, etc. By induction we obtain the following

Theorem. Suppose that $f(t), f'(t), f''(t), \ldots, f^{(n-1)}(t)$ are continuous and have well-defined Laplace transforms on $[0, \infty)$ and $f^{(n)}(t)$ is piecewise continuous on $[0, \infty)$. Then

$$\mathcal{L}(f^{(n)}) = s^n\mathcal{L}(f) - s^{n-1}f(0) - s^{n-2}f'(0) - \cdots - f^{(n-1)}(0). \qquad (12)$$
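Rule (11) (and hence (12)) can be sanity-checked on any concrete function; here is a minimal SymPy sketch using $f(t) = \sin 2t$:

```python
from sympy import symbols, sin, laplace_transform, simplify

t, s = symbols('t s', positive=True)
f = sin(2*t)

F   = laplace_transform(f, t, s, noconds=True)                # L(f)
lhs = laplace_transform(f.diff(t, 2), t, s, noconds=True)     # L(f'')
rhs = s**2*F - s*f.subs(t, 0) - f.diff(t).subs(t, 0)          # s^2 L(f) - s f(0) - f'(0)
print(simplify(lhs - rhs))   # 0
```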

Example 8. Find $\mathcal{L}(\sin^2 t)$.

Solution. Here $f(t) = \sin^2 t$, $f'(t) = 2\sin t\cos t = \sin 2t$, $f(0) = 0$. In view of (7) we obtain
$$\mathcal{L}(f') = \mathcal{L}(\sin 2t) = \frac{2}{s^2+4} = s\mathcal{L}(f) - f(0) = s\mathcal{L}(f).$$
$$\therefore\ \mathcal{L}(\sin^2 t) = \frac{2}{s(s^2+4)}.$$

Example 9. Find $\mathcal{L}(t\sin t)$.

Solution. Here $f(t) = t\sin t$ and $f(0) = 0$. Also
$$f'(t) = \sin t + t\cos t, \quad f'(0) = 0,$$
$$f''(t) = 2\cos t - t\sin t = 2\cos t - f(t).$$
$$\therefore \text{ by (11)}, \quad \mathcal{L}(f'') = 2\mathcal{L}(\cos t) - \mathcal{L}(f) = s^2\mathcal{L}(f).$$
$$\therefore\ (s^2+1)\mathcal{L}(f) = 2\mathcal{L}(\cos t) = \frac{2s}{s^2+1}.$$

Hence $\mathcal{L}(t\sin t) = \dfrac{2s}{(s^2+1)^2}$.
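The answer to Example 9 can also be obtained directly (SymPy uses its own internal methods rather than the trick above, but the result agrees):

```python
from sympy import symbols, sin, laplace_transform

t, s = symbols('t s', positive=True)

print(laplace_transform(t*sin(t), t, s, noconds=True))   # 2*s/(s**2 + 1)**2, or an equivalent form
```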

Solution of Initial value problems

Consider the initial value problem
$$y'' + ay' + by = r(t), \qquad y(0) = k_0, \quad y'(0) = k_1, \qquad (13)$$
with $a$, $b$, $k_0$ and $k_1$ constants. The function $r(t)$ has the Laplace transform $R(s)$.

Step 1. Take the Laplace transform of both sides of the d.e. using the linearity property of the Laplace transform:
$$s^2\mathcal{L}(y) - sy(0) - y'(0) + a(s\mathcal{L}(y) - y(0)) + b\mathcal{L}(y) = \mathcal{L}(r).$$

Step 2. Use the given initial conditions to arrive at the subsidiary equation
$$s^2\mathcal{L}(y) - sk_0 - k_1 + a(s\mathcal{L}(y) - k_0) + b\mathcal{L}(y) = \mathcal{L}(r).$$

Step 3. Solve this for $\mathcal{L}(y)$:
$$\mathcal{L}(y) = \frac{(s+a)k_0 + k_1 + R(s)}{s^2 + as + b}.$$

Step 4. Reduce the above to a sum of terms whose inverses can be found, so that the solution y(t) of (13)

is obtained.

Example 10. Solve $y'' + y = e^{2t}$, $y(0) = 0$, $y'(0) = 1$.

Solution. Taking the Laplace transform of both sides of the d.e. and using the initial conditions we arrive at
$$s^2\mathcal{L}(y) - sy(0) - y'(0) + \mathcal{L}(y) = \frac{1}{s-2}.$$
$$\therefore\ \mathcal{L}(y) = \frac{1}{s^2+1}\left(1 + \frac{1}{s-2}\right) = \frac{s-1}{(s-2)(s^2+1)}.$$

Now
$$\frac{s-1}{(s-2)(s^2+1)} = \frac{A}{s-2} + \frac{Bs+C}{s^2+1}$$
for appropriate constants A, B and C. Multiply both sides of this equation by $(s-2)(s^2+1)$ and equate the coefficients of $s^0$, $s$, $s^2$ to obtain $A - 2C = -1$, $-2B + C = 1$, $A + B = 0$. Thus $A = \tfrac{1}{5}$, $B = -\tfrac{1}{5}$, and $C = \tfrac{3}{5}$.
$$\therefore\ \mathcal{L}(y) = \frac{1}{5(s-2)} + \frac{-s+3}{5(s^2+1)} = \frac{1}{5(s-2)} - \frac{s}{5(s^2+1)} + \frac{3}{5(s^2+1)}.$$

Taking the inverse transform of both sides we get (using (2), (6), and (7))
$$y(t) = \frac{1}{5}e^{2t} - \frac{1}{5}\cos t + \frac{3}{5}\sin t.$$
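As a cross-check on Example 10, an ODE solver should reproduce the same answer; a minimal SymPy sketch:

```python
from sympy import symbols, Function, Eq, exp, dsolve

t = symbols('t')
y = Function('y')

ode = Eq(y(t).diff(t, 2) + y(t), exp(2*t))
sol = dsolve(ode, y(t), ics={y(0): 0, y(t).diff(t).subs(t, 0): 1})
print(sol)   # expect y(t) = exp(2*t)/5 - cos(t)/5 + 3*sin(t)/5
```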

Transform of the integral of a function

Theorem. If $f(t)$ is piecewise continuous and has a well-defined Laplace transform on $[0, \infty)$, then

$$\mathcal{L}\left(\int_0^t f(\tau)\,d\tau\right) = \frac{1}{s}\mathcal{L}(f) \qquad (s > 0,\ s > a). \qquad (14)$$

Example 11. Find $f(t)$ if $\mathcal{L}(f) = \dfrac{1}{s^2(s^2+w^2)}$.

Solution. Since
$$\mathcal{L}\left(\frac{1}{w}\sin wt\right) = \frac{1}{s^2+w^2},$$
we use (14) to get
$$\mathcal{L}\left(\frac{1}{w}\int_0^t \sin w\tau\,d\tau\right) = \mathcal{L}\left(\frac{1-\cos wt}{w^2}\right) = \frac{1}{s(s^2+w^2)}$$
and
$$\mathcal{L}\left(\frac{1}{w^2}\int_0^t (1-\cos w\tau)\,d\tau\right) = \mathcal{L}\left(\frac{1}{w^2}\Big(t - \frac{\sin wt}{w}\Big)\right) = \frac{1}{s^2(s^2+w^2)}.$$
$$\therefore\ f(t) = \frac{1}{w^2}\left(t - \frac{\sin wt}{w}\right).$$

Theorem. (s-Shifting)

If $f(t)$ has the transform $F(s)$, $s > a$, then

$$\mathcal{L}(e^{ct}f(t)) = F(s-c), \quad s - c > a. \qquad (15)$$

Thus using the formulae (8), (6), and (7) we obtain
$$\mathcal{L}(e^{ct}t^n) = \frac{n!}{(s-c)^{n+1}},$$
$$\mathcal{L}(e^{ct}\cos wt) = \frac{s-c}{(s-c)^2+w^2},$$
$$\mathcal{L}(e^{ct}\sin wt) = \frac{w}{(s-c)^2+w^2}.$$
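The s-shift rule (15) can be verified case by case as well; for instance the third formula above (here c is a real shift and w a positive frequency):

```python
from sympy import symbols, exp, sin, laplace_transform

t, s, w = symbols('t s w', positive=True)
c = symbols('c', real=True)

F = laplace_transform(exp(c*t)*sin(w*t), t, s, noconds=True)
print(F)   # w/((s - c)**2 + w**2), possibly written with the terms in a different order
```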

Example 12. Solve $y'' + 2y' + 5y = 0$, $y(0) = 2$, $y'(0) = -4$.

Solution. Following the procedure outlined earlier we obtain
$$\mathcal{L}(y) = \frac{2s}{(s+1)^2 + 2^2} = \frac{2(s+1)}{(s+1)^2 + 2^2} - \frac{2}{(s+1)^2 + 2^2}.$$
$$\therefore\ y(t) = e^{-t}(2\cos 2t - \sin 2t).$$

Example 13. Solve $y'' - 2y' + y = e^t + t$, $y(0) = 1$, $y'(0) = 0$.

Solution. Taking the Laplace transform of the d.e.,
$$s^2\mathcal{L}(y) - sy(0) - y'(0) - 2(s\mathcal{L}(y) - y(0)) + \mathcal{L}(y) = \frac{1}{s-1} + \frac{1}{s^2},$$
or
$$(s^2 - 2s + 1)\mathcal{L}(y) = s - 2 + \frac{1}{s-1} + \frac{1}{s^2},$$
or
$$\mathcal{L}(y) = \frac{s-2}{(s-1)^2} + \frac{1}{(s-1)^3} + \frac{1}{s^2(s-1)^2}$$
$$= \frac{1}{s-1} - \frac{1}{(s-1)^2} + \frac{1}{(s-1)^3} + \frac{1}{(s-1)^2} - \frac{2}{s-1} + \frac{1}{s^2} + \frac{2}{s}.$$
$$\therefore\ y(t) = \frac{t^2}{2}e^t - e^t + t + 2 = \left(\frac{t^2}{2} - 1\right)e^t + t + 2.$$
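Again, Example 13 can be cross-checked with an ODE solver; a minimal SymPy sketch:

```python
from sympy import symbols, Function, Eq, exp, dsolve

t = symbols('t')
y = Function('y')

ode = Eq(y(t).diff(t, 2) - 2*y(t).diff(t) + y(t), exp(t) + t)
sol = dsolve(ode, y(t), ics={y(0): 1, y(t).diff(t).subs(t, 0): 0})
print(sol)   # expect y(t) = (t**2/2 - 1)*exp(t) + t + 2
```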

Unit Step (Heaviside) function

Definition.
$$u(t-a) = \begin{cases} 0, & t < a \\ 1, & t > a \end{cases} \qquad (16)$$

Example 14. Graph $f(t) = u(t-1) - u(t-3)$.

Clearly $f(t) = 1$ when $1 < t < 3$ and 0 otherwise.

Observe that if $0 < a < b$,
$$u(t-a) - u(t-b) = \begin{cases} 0, & t < a \\ 1, & a < t < b \\ 0, & t > b. \end{cases}$$

Let $g(t)$ be some function of t. Then, if $0 < a < b$,
$$g(t)\bigl(u(t-a) - u(t-b)\bigr) = \begin{cases} 0 & \text{if } t < a \\ g(t) & \text{if } a < t < b \\ 0 & \text{if } t > b. \end{cases}$$

You can think of this in the following way: this function is "OFF" until $t = a$. Then it suddenly turns "ON" the function $g(t)$. It remains "ON" until $t = b$, where it suddenly switches "OFF" again. You can easily imagine lots of engineering situations where this might be useful, for example in circuit theory. Notice that this function is usually DISCONTINUOUS, but this won't be a problem for the Laplace transform, because the transform is defined by an integral, and discontinuous functions can often be integrated. In fact, the main advantage of the Laplace transform is that it allows us to solve ODEs with discontinuous right-hand sides. Such ODEs often come up in engineering applications.

Example 15. Express
$$f(t) = \begin{cases} t, & 0 < t < 1 \\ 2 - t, & 1 < t < 2 \\ 0, & 2 < t < 3 \\ 1, & t > 3 \end{cases}$$
in terms of unit step functions.

Solution.
$$f(t) = t\bigl(u(t) - u(t-1)\bigr) + (2-t)\bigl(u(t-1) - u(t-2)\bigr) + u(t-3).$$
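A step-function formula like this is easy to check numerically by evaluating it at one point inside each interval; a small SymPy sketch (Heaviside plays the role of u):

```python
from sympy import symbols, Heaviside

t = symbols('t', real=True)
u = Heaviside

f = t*(u(t) - u(t - 1)) + (2 - t)*(u(t - 1) - u(t - 2)) + u(t - 3)
# one sample point from each interval; expect 0.5, 0.5, 0, 1
print([f.subs(t, v) for v in (0.5, 1.5, 2.5, 4.0)])
```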

Example 16. Sketch
$$g(t) = 2u(t) + tu(t-1) + (3-t)u(t-2) - 3u(t-4), \quad t > 0.$$

Solution.

When
$$0 < t < 1:\quad g(t) = 2\cdot 1 + t\cdot 0 + (3-t)\cdot 0 - 3\cdot 0 = 2$$
$$1 < t < 2:\quad g(t) = 2\cdot 1 + t\cdot 1 + (3-t)\cdot 0 - 3\cdot 0 = 2 + t$$
$$2 < t < 4:\quad g(t) = 2\cdot 1 + t\cdot 1 + (3-t)\cdot 1 - 3\cdot 0 = 5$$
$$t > 4:\quad g(t) = 2\cdot 1 + t\cdot 1 + (3-t)\cdot 1 - 3\cdot 1 = 2$$
Theorem. (t-Shifting)

If $\mathcal{L}(f(t)) = F(s)$ then

$$\mathcal{L}(f(t-a)u(t-a)) = e^{-as}F(s). \qquad (17)$$

Example 17. Setting $f(t-a) = 1$ in (17) we get

$$\mathcal{L}(u(t-a)) = \frac{e^{-as}}{s}. \qquad (18)$$

Example 18. Compute $\mathcal{L}(t^2 u(t-1))$.

Solution.
$$\mathcal{L}(t^2 u(t-1)) = \mathcal{L}\bigl((t-1+1)^2 u(t-1)\bigr) = \mathcal{L}\bigl(((t-1)^2 + 2(t-1) + 1)u(t-1)\bigr) = e^{-s}\left(\frac{2}{s^3} + \frac{2}{s^2} + \frac{1}{s}\right).$$
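Recent versions of SymPy know the t-shifting rule, so Example 18 can be checked directly (this is an optional check, not part of the original notes):

```python
from sympy import symbols, Heaviside, laplace_transform, expand

t, s = symbols('t s', positive=True)

F = laplace_transform(t**2 * Heaviside(t - 1), t, s, noconds=True)
print(expand(F))   # expect exp(-s)*(2/s**3 + 2/s**2 + 1/s), possibly grouped differently
```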

Example 19. Compute $\mathcal{L}((e^t + 1)u(t-2))$.

Solution.
$$\mathcal{L}((e^t + 1)u(t-2)) = \mathcal{L}\bigl((e^{t-2}e^2 + 1)u(t-2)\bigr) = e^{-2s}\left(\frac{e^2}{s-1} + \frac{1}{s}\right).$$

The next and final problem in this section is rather complicated, but it really just involves assembling

a lot of small bits and pieces. Notice that NONE of the methods we learned in earlier chapters would have

allowed us to solve this problem, so it should convince you that the Laplace Transform is really useful!

Example 20. Solve the initial value problem
$$y'' + 3y' + 2y = g(t), \qquad y(0) = 0, \quad y'(0) = 1,$$
with
$$g(t) = \begin{cases} 1, & 0 < t < 1 \\ 0, & t > 1. \end{cases}$$

Solution.
$$g(t) = u(t) - u(t-1).$$
Taking the Laplace transform of both sides of the d.e. we obtain, using (18),
$$s^2\mathcal{L}(y) - sy(0) - y'(0) + 3(s\mathcal{L}(y) - y(0)) + 2\mathcal{L}(y) = \frac{1}{s} - \frac{e^{-s}}{s},$$
so that
$$\mathcal{L}(y) = \frac{s+1}{s(s^2+3s+2)} - e^{-s}\,\frac{1}{s(s^2+3s+2)}.$$

Now
$$\frac{s+1}{s(s^2+3s+2)} = \frac{s+1}{s(s+1)(s+2)} = \frac{1}{s(s+2)} = \frac{1}{2}\left(\frac{1}{s} - \frac{1}{s+2}\right).$$
$$\therefore\ \mathcal{L}^{-1}\!\left(\frac{1}{s(s+2)}\right) = \frac{1}{2}\bigl(1 - e^{-2t}\bigr).$$

Also
$$\frac{1}{s(s^2+3s+2)} = \frac{1}{s(s+1)(s+2)} = \frac{1}{2}\left(\frac{1}{s} + \frac{1}{s+2}\right) - \frac{1}{s+1}.$$
$$\therefore\ \mathcal{L}^{-1}\!\left(\frac{1}{s(s+1)(s+2)}\right) = \frac{1}{2}\bigl(1 + e^{-2t}\bigr) - e^{-t}.$$
$$\therefore\ \mathcal{L}^{-1}\!\left(\frac{e^{-s}}{s(s+1)(s+2)}\right) = \left(\frac{1}{2}\bigl(1 + e^{-2(t-1)}\bigr) - e^{-(t-1)}\right)u(t-1)$$
[using (17)].

Finally
$$y(t) = \frac{1}{2}\bigl(1 - e^{-2t}\bigr) - \left(\frac{1}{2}\bigl(1 + e^{-2(t-1)}\bigr) - e^{-(t-1)}\right)u(t-1).$$
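In Step 4-type manipulations the partial fractions are where slips usually happen, and they are easy to check mechanically; a short SymPy sketch for the two decompositions used above:

```python
from sympy import symbols, apart

s = symbols('s')

print(apart((s + 1)/(s*(s + 1)*(s + 2)), s))   # 1/(2*s) - 1/(2*(s + 2)), up to ordering/printing
print(apart(1/(s*(s + 1)*(s + 2)), s))         # 1/(2*s) - 1/(s + 1) + 1/(2*(s + 2)), up to ordering/printing
```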
2

THE DIRAC DELTA FUNCTION

The Laplace Transform, as a mapping from functions of t to functions of s, is not surjective, that is, given $F(s)$ there need not exist any $f(t)$ such that $\mathcal{L}(f) = F$. But sometimes it is very useful to bend the definition of the word "function" so that certain functions of s do have a sort of inverse Laplace transform. There is one extremely important function of s whose inverse Laplace transform we would like to have, namely the exponential function $e^{-as}$. Strangely, this function does not have an inverse Laplace transform in the ordinary sense. But it does have one in an extraordinary sense, as follows.

Consider the set of functions $f_{a,h}(t)$ defined by
$$f_{a,h}(t) \equiv \frac{u(t-a) - u(t-[a+h])}{h}.$$

Clearly,
$$\mathcal{L}(f_{a,h}) = \frac{1}{h}\left(\frac{e^{-as}}{s} - \frac{e^{-(a+h)s}}{s}\right) = -\frac{1}{s}\left(\frac{e^{-(a+h)s} - e^{-as}}{h}\right).$$

Now if h is very small, the expression in brackets on the right is approximately the derivative of $e^{-xs}$ with respect to x, evaluated at $x = a$. So we have
$$\mathcal{L}(f_{a,h}) \approx -\frac{1}{s}\cdot(-s)\cdot e^{-as}$$
or
$$\mathcal{L}(f_{a,h}) \approx e^{-as}.$$

The approximation can be made as good as desired by choosing h sufficiently small. So it seems that we have the answer to our question. But let's think more closely about what happens to $f_{a,h}(t)$ when h is extremely small.

Evidently $f_{a,h}(t)$ is non-zero only on the very small region between $a$ and $a + h$. But its value in there is extremely large, $1/h$. So in fact the area under its graph is always the same, namely 1, no matter how small h may be. In short, the graph of $f_{a,h}(t)$ is like a spike sitting at $t = a$: very tall, very narrow, with area 1. When h is extremely small, let's give $f_{a,h}(t)$ a name: we call it $\delta(t-a)$, the Dirac delta function. So we have, again when h is very tiny,

$$\mathcal{L}(\delta(t-a)) \approx e^{-as},$$

and clearly

$$\int_0^{\infty} \delta(t-a)\,dt = 1.$$

Now since the approximation can be arbitrarily good, we get a bit sloppy here and replace the $\approx$ by $=$. That is, we picture the spike as being "really" infinitely narrow above $t = a$, and also infinitely tall, still having "area" 1. That is, we take it that this strange thing is the inverse Laplace transform of $e^{-as}$.

These peculiar objects can be put on a mathematically rigorous foundation: google "Distributions (mathematics)". But for our purposes it is enough to think of the delta function as a very tall and narrow function around $t = a$, where "very narrow" means "negligible relative to the typical time scales of our problem". For example, the time a cricket ball or baseball is in contact with the bat is negligible compared with the time it takes for the players to react to the change in the motion of the ball.

In summary: $\mathcal{L}^{-1}[e^{-as}] = \delta(t-a)$, where $\delta(t-a)$ denotes a "function" which represents an arbitrarily narrow and tall "spike" at $t = a$, which nevertheless has a finite area (equal to 1) under its "graph".

Note that by setting $a = 0$ we have

$$\mathcal{L}^{-1}[1] = \delta(t),$$

and consequently

$$\mathcal{L}^{-1}[c] = c\,\delta(t)$$

for any constant c. So now we have finally discovered which "functions" have constants as their Laplace transforms.

EXAMPLE: INJECTIONS!

Suppose that a doctor injects, almost instantly, 100 mg of morphine into a patient at time t = 1 day. He

does it again one day later, at t = 2. Suppose that the HALF-LIFE of morphine in the patient's body is 18

hours. Find the amount of morphine in the patient at any time.

Solution: Half-life refers to the exponential function $e^{-kt}$. "Half-life 18 hours = 0.75 days" means $\tfrac{1}{2} = e^{-k \cdot 0.75}$, that is, $k = \dfrac{\ln 2}{0.75} = 0.924$. So without the injections,
$$\frac{dy}{dt} = -ky, \quad k = 0.924.$$

The injections are at a rate of 100 mg per day, but concentrated in delta-function spikes at $t = 1$ and $t = 2$ [time unit is DAYS]. So we have
$$\frac{dy}{dt} = -ky + 100\,\delta(t-1) + 100\,\delta(t-2),$$
$$s\mathcal{L}[y] - y(0) = -0.924\,\mathcal{L}[y] + 100e^{-s} + 100e^{-2s}.$$

By the way, notice that we have to think of the delta function as something which itself has UNITS. In this case you should think of the delta function as something that has units of [1/time], so that this equation has consistent units. In many problems, it is actually very helpful to work out what units the delta function has; it can have different units in different problems! This is particularly useful in physics problems where something gets hit suddenly and gains some momentum instantly; you can use this in some of the tutorial problems.

Since $y(0) = 0$,
$$\mathcal{L}(y) = \frac{100e^{-s}}{s + 0.924} + \frac{100e^{-2s}}{s + 0.924}.$$

So, using the t-shifting theorem, we get
$$y = 100e^{-0.924(t-1)}u(t-1) + 100e^{-0.924(t-2)}u(t-2),$$
which you can easily graph using Graphmatica (type in y = exp(-0.924(x-1))step(x-1) + exp(-0.924(x-2))step(x-2)).
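If you prefer Python to Graphmatica, the same graph can be produced with a few lines (a sketch assuming NumPy and Matplotlib are installed; note the 100 mg amplitude is included here):

```python
import numpy as np
import matplotlib.pyplot as plt

k = np.log(2) / 0.75                  # decay constant, in 1/days
u = lambda x: np.heaviside(x, 1.0)    # unit step function

t = np.linspace(0, 5, 500)
y = 100*np.exp(-k*(t - 1))*u(t - 1) + 100*np.exp(-k*(t - 2))*u(t - 2)

plt.plot(t, y)
plt.xlabel("t (days)")
plt.ylabel("morphine (mg)")
plt.show()
```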

EXAMPLE: PARAMETER RECONSTRUCTION!

Sometimes it happens, in Engineering applications, that you have a system whose nature you understand [for example, you may know that it is a damped harmonic oscillator] but you don't know the values of the parameters [the spring constant, the mass, the friction coefficient]. In such a case, what you can do is to "poke" the system with a sudden, sharp force, and watch how it behaves. Then you can reconstruct the parameters of the system as follows.


Suppose for example that you have a damped harmonic oscillator (say, a spring with a mass attached) which is initially at rest, that is, $x(0) = \dot{x}(0) = 0$, and you poke it with a unit impulse [change of momentum = 1 in MKS units] at time $t = 1$; in other words, the applied force is just $\delta(t-1)$. You observe that the displacement of the mass is
$$x(t) = u(t-1)e^{-(t-1)}\sin(t-1).$$
[You can use Graphmatica to look at the graph by putting in y = exp(-(x-1))*sin(x-1)*step(x-1); note that this oscillator is underdamped, despite the shape of the graph!] Question: what are the values of the mass, the spring constant, the frictional coefficient?

Solution: Newton's Second Law in this case says:
$$M\ddot{x} = -kx - b\dot{x} + \delta(t-1).$$

With the given initial data, taking the Laplace transform gives
$$Ms^2 X(s) = -kX(s) - bsX(s) + e^{-s},$$
and so
$$X(s) = \frac{e^{-s}}{Ms^2 + bs + k}.$$

But the Laplace transform of the given response function is [using both t-shifting and s-shifting!]
$$X(s) = \frac{e^{-s}}{(s+1)^2 + 1} = \frac{e^{-s}}{s^2 + 2s + 2},$$

so by inspection we see that M = 1, b = 2, k = 2 in MKS units, and we have successfully reconstructed the

parameters of this system, just by hitting it with a delta function impulse! Similar ideas work for electrical

circuits etc etc etc. Useful idea.
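The key step, recognizing $e^{-s}/((s+1)^2+1)$ as the transform of the observed response, can itself be checked symbolically; a minimal SymPy sketch:

```python
from sympy import symbols, exp, inverse_laplace_transform

t, s = symbols('t s', positive=True)

x = inverse_laplace_transform(exp(-s)/(s**2 + 2*s + 2), s, t)
print(x)   # expect exp(-(t - 1))*sin(t - 1)*Heaviside(t - 1), possibly written slightly differently
```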

If you look at the graph of the solution you will see that the derivative jumps suddenly at t = 1; that is,

there is a sharp corner there. This is typical in problems involving delta-function impulses, basically because

such impulses cause the momentum, and therefore the velocity, to change suddenly.

HOW DO I HANDLE NON-STANDARD DATA?

So far we have always assumed that one is given $y(0)$, $y'(0)$, $y''(0)$ and so on. But what if you are given the data at other points... for example, how do we solve
$$y'' + y = 0,$$
given $y(\pi/2) = 1$, $y'(\pi/2) = -1$, instead of $y(0)$ and $y'(0)$? Answer: we pretend that we know $y(0)$ and $y'(0)$, and work them out later. Let's call $y(0) = \alpha$ and $y'(0) = \beta$. Then taking the Laplace transform as usual, we have
we have

$$s^2 Y - \alpha s - \beta + Y = 0,$$

or

$$Y = (\alpha s + \beta)/(1 + s^2),$$

so that

$$y = \alpha\cos(x) + \beta\sin(x).$$

Now we can use $y(\pi/2) = 1$, $y'(\pi/2) = -1$ to get two equations for $\alpha$ and $\beta$. You will find that $\alpha = 1$ and $\beta = 1$, so finally $y = \cos(x) + \sin(x)$.
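SymPy's dsolve accepts initial data at any point, so this "pretend and solve later" procedure can be checked in a few lines (a sketch, not part of the original notes):

```python
from sympy import symbols, Function, Eq, dsolve, pi

x = symbols('x')
y = Function('y')

sol = dsolve(Eq(y(x).diff(x, 2) + y(x), 0), y(x),
             ics={y(pi/2): 1, y(x).diff(x).subs(x, pi/2): -1})
print(sol)   # expect y(x) = sin(x) + cos(x)
```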
