
Chapter 4: Integral transforms

In mathematics, an integral transform is any transform $T$ of a given function $f$ of the following form:

$$Tf(s) = \int_{x_1}^{x_2} K(x, s)\, f(x)\, dx. \tag{4.1}$$

The input is a function $f(x)$ and the output is another function $Tf(s)$. There are different integral transforms, depending on the kernel function $K(x, s)$. The transforms we consider in this chapter are the Laplace transform and the Fourier transform.

4.1 Laplace transform

4.1.1 Basic definition and properties

To obtain the Laplace transform of a given function $f(x)$ we use the kernel $K(x, s) = e^{-sx}$, namely:

$$\mathcal{L}\{f\} = F(s) = \int_0^\infty f(x)\, e^{-sx}\, dx. \tag{4.2}$$

Here $s$ can also be a complex variable, in which case the Laplace transform maps a real function to a complex one. For our purposes it is enough, for the moment, to consider $s$ real. We can easily verify that $\mathcal{L}$ is a linear operator. In fact:

$$\mathcal{L}\{af + bg\} = \int_0^\infty [af(x) + bg(x)]\, e^{-sx}\, dx = a\int_0^\infty f(x)\, e^{-sx}\, dx + b\int_0^\infty g(x)\, e^{-sx}\, dx$$

$$\Rightarrow \mathcal{L}\{af + bg\} = a\,\mathcal{L}\{f\} + b\,\mathcal{L}\{g\}. \tag{4.3}$$


Example 4.1.1 Find the Laplace transform of the function f(x) = 1.

It is

$$\mathcal{L}\{1\} = \int_0^\infty e^{-sx}\, dx = -\frac{1}{s}\left[e^{-sx}\right]_0^\infty = \frac{1}{s},$$

provided that $s > 0$ (this ensures that $\lim_{x\to\infty} e^{-sx} = 0$, namely that the integral $\int_0^\infty e^{-sx}\, dx$ converges).

Example 4.1.2 Find the Laplace transform of $f(x) = x^n$, with $n$ a positive integer.

We integrate by parts and obtain:

$$\mathcal{L}\{x^n\} = \int_0^\infty x^n e^{-sx}\, dx = -\frac{1}{s}\left[x^n e^{-sx}\right]_0^\infty + \frac{n}{s}\int_0^\infty x^{n-1} e^{-sx}\, dx = \frac{n}{s}\,\mathcal{L}\{x^{n-1}\}.$$

We have assumed also in this case that $s > 0$ (otherwise the integral $\int_0^\infty x^n e^{-sx}\, dx$ does not converge). To obtain $\mathcal{L}\{x^{n-1}\}$ we proceed in the same way and obtain $\mathcal{L}\{x^{n-1}\} = \frac{n-1}{s}\,\mathcal{L}\{x^{n-2}\}$. We iterate $n$ times and obtain:

$$\mathcal{L}\{x^n\} = \frac{n(n-1)(n-2)\cdots}{s^n}\,\mathcal{L}\{1\} = \frac{n!}{s^{n+1}}.$$
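The recursion and the closed form above lend themselves to a quick numerical spot-check. The following is a minimal Python sketch (not part of the original notes; the helper name `laplace_numeric` is ours) that truncates the improper integral where the integrand has decayed and compares against $n!/s^{n+1}$:

```python
import math

def laplace_numeric(f, s, upper=40.0, steps=100_000):
    """Trapezoidal approximation of the Laplace integral of f,
    truncated at `upper` where e^{-s x} f(x) is negligible."""
    h = upper / steps
    total = 0.5 * (f(0.0) + math.exp(-s * upper) * f(upper))
    for k in range(1, steps):
        x = k * h
        total += math.exp(-s * x) * f(x)
    return total * h

# Check L{x^n} = n!/s^(n+1) for a few n at s = 2.
s = 2.0
for n in range(5):
    numeric = laplace_numeric(lambda x, n=n: x**n, s)
    exact = math.factorial(n) / s**(n + 1)
    assert abs(numeric - exact) < 1e-4, (n, numeric, exact)
```

The truncation point and step count are crude choices that happen to suffice at this tolerance.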

Example 4.1.3 Find the Laplace transform of $f(x) = \sin(mx)$.

It is $\mathcal{L}\{f(x)\} = \int_0^\infty e^{-sx}\sin(mx)\, dx$. By using the relation $\sin(mx) = \frac{e^{imx} - e^{-imx}}{2i}$ we obtain:

$$\mathcal{L}\{f(x)\} = \frac{1}{2i}\left[\int_0^\infty e^{(im-s)x}\, dx - \int_0^\infty e^{-(im+s)x}\, dx\right] = \frac{1}{2i}\left[\left[\frac{e^{(im-s)x}}{im-s}\right]_0^\infty - \left[\frac{e^{-(im+s)x}}{-im-s}\right]_0^\infty\right] = \frac{1}{2i}\left[\frac{1}{s-im} - \frac{1}{s+im}\right] = \frac{m}{s^2 + m^2},$$

for $s > 0$. In fact, the terms $e^{(im-s)x}$ and $e^{-(im+s)x}$ can be written as $e^{-sx}\left[\cos(mx) \pm i\sin(mx)\right]$. In the limit $x \to \infty$, only the term $e^{-sx}$ matters (the term $\left[\cos(mx) \pm i\sin(mx)\right]$ oscillates) and it tends to zero for any $s > 0$.

In these three simple cases we have seen that the integral (4.2) was convergent for any possible value of $s > 0$. This is not always the case, as the two following examples show.
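The transform of the sine can be checked in the same numerical spirit. The sketch below (an illustrative Python check, not part of the original notes; `laplace_sin` is our own name) approximates the improper integral and compares it with $m/(s^2+m^2)$ for a few parameter pairs:

```python
import math

def laplace_sin(m, s, upper=50.0, steps=100_000):
    """Trapezoidal approximation of the integral of e^{-s x} sin(m x) on [0, upper]."""
    h = upper / steps
    total = 0.5 * (math.sin(0.0) + math.exp(-s * upper) * math.sin(m * upper))
    for k in range(1, steps):
        x = k * h
        total += math.exp(-s * x) * math.sin(m * x)
    return total * h

for m, s in [(1.0, 0.5), (3.0, 1.0), (2.0, 2.0)]:
    assert abs(laplace_sin(m, s) - m / (s**2 + m**2)) < 1e-4
```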


Example 4.1.4 Find the Laplace transform of $f(x) = e^{ax}$.

$$\mathcal{L}\{e^{ax}\} = \int_0^\infty e^{ax} e^{-sx}\, dx = \lim_{A\to\infty}\int_0^A e^{(a-s)x}\, dx = \lim_{A\to\infty}\left[\frac{e^{(a-s)x}}{a-s}\right]_0^A.$$

It is clear that this limit exists and is finite only if $a < s$ ($a < \operatorname{Re}(s)$ if $s \in \mathbb{C}$), namely we can define the Laplace transform of the function $f(x) = e^{ax}$ only if $\operatorname{Re}(s) > a$. In this case it is:

$$\mathcal{L}\{e^{ax}\} = \frac{1}{s-a}.$$

Example 4.1.5 Find the Laplace transform of the function $f(x) = \cosh(mx)$.

It is $\mathcal{L}\{f(x)\} = \int_0^\infty e^{-sx}\cosh(mx)\, dx$. By using the relation $\cosh(mx) = \frac{e^{mx} + e^{-mx}}{2}$ we obtain:

$$\mathcal{L}\{f(x)\} = \frac{1}{2}\left[\int_0^\infty e^{(m-s)x}\, dx + \int_0^\infty e^{-(m+s)x}\, dx\right] = \frac{1}{2}\left[\left[\frac{e^{(m-s)x}}{m-s}\right]_0^\infty - \left[\frac{e^{-(m+s)x}}{m+s}\right]_0^\infty\right] = \frac{1}{2}\left[\frac{1}{s-m} + \frac{1}{s+m}\right] = \frac{s}{s^2 - m^2}.$$

This result holds as long as $e^{(m-s)x}$ and $e^{-(m+s)x}$ tend to zero for $x \to \infty$, namely it must be $s > |m|$.
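The convergence condition $s > |m|$ shows up directly if one evaluates the integral numerically: the approximation only settles when the exponential decay wins. A minimal Python sketch (ours, not from the original notes; `laplace_cosh` is an illustrative name):

```python
import math

def laplace_cosh(m, s, upper=60.0, steps=120_000):
    """Trapezoidal approximation of the integral of e^{-s x} cosh(m x) on [0, upper].
    Only meaningful when s > |m|, so that the integrand decays."""
    h = upper / steps
    total = 0.5 * (math.cosh(0.0) + math.exp(-s * upper) * math.cosh(m * upper))
    for k in range(1, steps):
        x = k * h
        total += math.exp(-s * x) * math.cosh(m * x)
    return total * h

for m, s in [(1.0, 2.0), (0.5, 1.0), (2.0, 3.0)]:
    assert abs(laplace_cosh(m, s) - s / (s**2 - m**2)) < 1e-3
```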

There are a few properties of the Laplace transform that help us find the transform of more complex functions. If we know that $F(s)$ is the Laplace transform of $f(x)$, namely that $\mathcal{L}\{f(x)\} = F(s)$, then:

• $$\mathcal{L}\left\{e^{cx} f(x)\right\} = F(s - c) \tag{4.4}$$

  This property comes directly from the definition of the Laplace transform; in fact:

  $$\mathcal{L}\left\{e^{cx} f(x)\right\} = \int_0^\infty f(x)\, e^{cx}\, e^{-sx}\, dx = \int_0^\infty f(x)\, e^{-(s-c)x}\, dx = F(s-c).$$

• $$\mathcal{L}\{f(cx)\} = \frac{1}{c}\, F\!\left(\frac{s}{c}\right), \quad (c > 0) \tag{4.5}$$

  To show this, it is enough to substitute $cx$ with $t$. In this way $x = \frac{t}{c}$, $dx = \frac{dt}{c}$, and therefore:

  $$\mathcal{L}\{f(cx)\} = \int_0^\infty e^{-sx} f(cx)\, dx = \frac{1}{c}\int_0^\infty e^{-\frac{s}{c}t} f(t)\, dt = \frac{1}{c}\, F\!\left(\frac{s}{c}\right).$$

• $$\mathcal{L}\{u_c(x) f(x-c)\} = e^{-sc} F(s) \tag{4.6}$$

  Here $u_c(x)$ is the Heaviside or step function, namely:

  $$u_c(x) = \begin{cases} 0 & x < c \\ 1 & x \ge c \end{cases} \tag{4.7}$$

  The function $u_c(x) f(x-c)$ is thus given by:

  $$u_c(x) f(x-c) = \begin{cases} 0 & x < c \\ f(x-c) & x \ge c \end{cases}$$

  We have thus:

  $$\mathcal{L}\{u_c(x) f(x-c)\} = \int_c^\infty e^{-sx} f(x-c)\, dx.$$

  With the substitution $t = x - c$ we obtain:

  $$\mathcal{L}\{u_c(x) f(x-c)\} = \int_0^\infty e^{-s(c+t)} f(t)\, dt = e^{-sc} F(s).$$

• $$\mathcal{L}\{x^n f(x)\} = (-1)^n F^{(n)}(s) \tag{4.8}$$

  It is enough to differentiate $F(s)$ with respect to $s$ to obtain:

  $$F'(s) = \frac{d}{ds}\int_0^\infty e^{-sx} f(x)\, dx = -\int_0^\infty x e^{-sx} f(x)\, dx = -\mathcal{L}\{x f(x)\}.$$

  If we now differentiate $F(s)$ $n$ times with respect to $s$ we obtain:

  $$F^{(n)}(s) = (-1)^n\, \mathcal{L}\{x^n f(x)\}.$$

  From this, Eq. (4.8) is readily obtained.

• $$\mathcal{L}\{f'(x)\} = -f(0) + sF(s) \tag{4.9}$$

  This property can be obtained by integrating $e^{-sx} f'(x)$ by parts, namely:

  $$\mathcal{L}\{f'(x)\} = \int_0^\infty e^{-sx} f'(x)\, dx = \left[f(x)\, e^{-sx}\right]_0^\infty + s\int_0^\infty e^{-sx} f(x)\, dx = -f(0) + sF(s),$$

  provided that $\lim_{x\to\infty} f(x)\, e^{-sx} = 0$.
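The derivative rule can be verified on a concrete function. The sketch below (our illustrative Python check, not from the original notes) takes $f(x) = \cos(2x)$, whose transform $s/(s^2+4)$ is among the standard results, and compares a numerical transform of $f'(x) = -2\sin(2x)$ against $-f(0) + sF(s)$:

```python
import math

def laplace(f, s, upper=50.0, steps=100_000):
    """Trapezoidal approximation of the Laplace integral of f."""
    h = upper / steps
    total = 0.5 * (f(0.0) + math.exp(-s * upper) * f(upper))
    for k in range(1, steps):
        x = k * h
        total += math.exp(-s * x) * f(x)
    return total * h

s = 1.5
f = lambda x: math.cos(2.0 * x)              # f(0) = 1
fprime = lambda x: -2.0 * math.sin(2.0 * x)  # derivative of f
F = s / (s**2 + 4.0)                         # known transform of cos(2x)
# L{f'} should equal -f(0) + s F(s) = -4/(s^2 + 4)
assert abs(laplace(fprime, s) - (-f(0.0) + s * F)) < 1e-4
```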


Example 4.1.6 Find the Laplace transform of $\cos(mx)$.

We could calculate this transform directly, but it is easier to use the Laplace transform of $\sin(mx)$ that we calculated in Example 4.1.3 ($\mathcal{L}\{\sin(mx)\} = \frac{m}{s^2+m^2}$). From Eq. (4.9) (and remembering that $\mathcal{L}$ is a linear operator) we have:

$$\mathcal{L}\left\{\frac{d}{dx}\sin(mx)\right\} = m\,\mathcal{L}\{\cos(mx)\} = -\sin(0) + s\cdot\frac{m}{s^2+m^2}$$

$$\Rightarrow \mathcal{L}\{\cos(mx)\} = \frac{s}{s^2+m^2}.$$

Example 4.1.7 Find the Laplace transform of $x\cosh(mx)$.

We recall from Example 4.1.5 that $\mathcal{L}\{\cosh(mx)\} = F(s) = \frac{s}{s^2-m^2}$ ($s > |m|$). Eq. (4.8) tells us that $F'(s)$ is the Laplace transform of $-x\cosh(mx)$. We have therefore:

$$\mathcal{L}\{x\cosh(mx)\} = -F'(s) = -\frac{s^2 - m^2 - 2s^2}{(s^2 - m^2)^2} = \frac{s^2 + m^2}{(s^2 - m^2)^2}.$$

Example 4.1.8 Find the Laplace transform of the function $f(x)$ defined in this way:

$$f(x) = \begin{cases} x & x < \pi \\ x - \cos(x-\pi) & x \ge \pi \end{cases}$$

By means of the step function (Eq. 4.7) we can rewrite $f(x)$ as $f(x) = x - u_\pi(x)\cos(x-\pi)$. The Laplace transform of this function can be found by means of Eq. (4.6) and of the known results $\mathcal{L}\{x^n\} = \frac{n!}{s^{n+1}}$ (Example 4.1.2) and $\mathcal{L}\{\cos(mx)\} = \frac{s}{s^2+m^2}$ (Example 4.1.6):

$$\mathcal{L}\{f(x)\} = \mathcal{L}\{x\} - \mathcal{L}\{u_\pi(x)\cos(x-\pi)\} = \frac{1}{s^2} - e^{-\pi s}\,\mathcal{L}\{\cos x\} = \frac{1}{s^2} - \frac{s\, e^{-\pi s}}{s^2 + 1}.$$
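Piecewise results like this one are easy to get wrong by a sign or a shift, so a direct numerical check is reassuring. A minimal Python sketch (ours, not part of the original notes) integrates the piecewise $f$ directly and compares with $1/s^2 - s e^{-\pi s}/(s^2+1)$:

```python
import math

def laplace(f, s, upper=40.0, steps=100_000):
    """Trapezoidal approximation of the Laplace integral of f."""
    h = upper / steps
    total = 0.5 * (f(0.0) + math.exp(-s * upper) * f(upper))
    for k in range(1, steps):
        x = k * h
        total += math.exp(-s * x) * f(x)
    return total * h

def f(x):
    # piecewise function of Example 4.1.8
    return x if x < math.pi else x - math.cos(x - math.pi)

s = 1.0
exact = 1.0 / s**2 - s * math.exp(-math.pi * s) / (s**2 + 1.0)
assert abs(laplace(f, s) - exact) < 1e-3
```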


4.1.2 Solution of initial value problems by means of Laplace transforms

We have seen (Eq. 4.9) that the Laplace transform of the derivative of a function is given by $\mathcal{L}\{f'(x)\} = -f(0) + sF(s)$, where $F(s) = \mathcal{L}\{f(x)\}$. If we consider the Laplace transform of higher-order derivatives we obtain (always integrating by parts):

$$\mathcal{L}\{f''(x)\} = \int_0^\infty e^{-sx} f''(x)\, dx = \left[e^{-sx} f'(x)\right]_0^\infty + s\int_0^\infty e^{-sx} f'(x)\, dx = -f'(0) - sf(0) + s^2 F(s)$$

$$\mathcal{L}\{f'''(x)\} = \int_0^\infty e^{-sx} f'''(x)\, dx = \left[e^{-sx} f''(x)\right]_0^\infty + s\int_0^\infty e^{-sx} f''(x)\, dx = -f''(0) - sf'(0) - s^2 f(0) + s^3 F(s)$$

$$\vdots$$

$$\mathcal{L}\{f^{(n)}(x)\} = s^n F(s) - s^{n-1} f(0) - s^{n-2} f'(0) - \cdots - s f^{(n-2)}(0) - f^{(n-1)}(0), \tag{4.10}$$

provided that $\lim_{x\to\infty} f^{(m)}(x)\, e^{-sx} = 0$ for $m = 0, \dots, n-1$. This result allows us to simplify linear ODEs considerably. Let us take for instance an initial value problem consisting of a second-order inhomogeneous ODE with constant coefficients (but the method can be applied also to more complex ODEs):

$$\begin{cases} a_2 y''(x) + a_1 y'(x) + a_0 y(x) = f(x) \\ y(0) = y_0 \\ y'(0) = y_0' \end{cases}$$

If we now take the Laplace transform of both members of this equation (calling $Y(s)$ the Laplace transform of $y(x)$ and $F(s)$ the Laplace transform of $f(x)$), we obtain:

$$a_2\left[s^2 Y(s) - sy_0 - y_0'\right] + a_1\left[sY(s) - y_0\right] + a_0 Y(s) = F(s)$$

$$\Rightarrow Y(s)\left(a_2 s^2 + a_1 s + a_0\right) = F(s) + a_1 y_0 + a_2\left(s y_0 + y_0'\right)$$

$$\Rightarrow Y(s) = \frac{F(s) + a_1 y_0 + a_2\left(s y_0 + y_0'\right)}{a_2 s^2 + a_1 s + a_0}. \tag{4.11}$$

Namely, we have transformed an ODE into an algebraic equation, which is of course easier to solve. Moreover, the particular solution (satisfying the given initial conditions) is automatically found, without the need to find first the general solution and then look for the coefficients that satisfy the initial conditions. Further, homogeneous and inhomogeneous ODEs are handled in exactly the same way; it is not necessary to solve the corresponding homogeneous ODE first. The price to pay for these advantages is that Eq. (4.11) is not yet the solution of the given ODE; we must invert this relation and find the function $y(x)$ whose Laplace transform is given by $Y(s)$. This function is called the inverse Laplace transform of $Y(s)$ and is indicated with $\mathcal{L}^{-1}\{Y(s)\}$.

Since the operator $\mathcal{L}$ is linear, it is easy to show that the inverse operator $\mathcal{L}^{-1}$ is also linear. In fact, given two functions $f_1(x)$ and $f_2(x)$ whose Laplace transforms are $F_1(s)$ and $F_2(s)$, respectively, the linearity of the operator $\mathcal{L}$ ensures that:

$$\mathcal{L}\{c_1 f_1(x) + c_2 f_2(x)\} = c_1 F_1(s) + c_2 F_2(s).$$

If we now apply the operator $\mathcal{L}^{-1}$ to both members of this equation we obtain:

$$\mathcal{L}^{-1}\mathcal{L}\{c_1 f_1(x) + c_2 f_2(x)\} = \mathcal{L}^{-1}\{c_1 F_1(s) + c_2 F_2(s)\} = c_1\,\mathcal{L}^{-1}\{F_1(s)\} + c_2\,\mathcal{L}^{-1}\{F_2(s)\}.$$

To invert the function $F(s)$ it is therefore enough to split it into many (possibly simple) addends and find for each of them the inverse Laplace transform. Based on the examples in Sect. 4.1.1 (and others that we do not have time to calculate, but that can be found in the mathematical literature) it is possible to construct a "dictionary" of basic functions/expressions and corresponding Laplace transforms, as in Table 4.1. Any time we face a particular $F(s)$, we can look at the dictionary and check whether it is possible to recover the function $f(x)$ whose Laplace transform is $F(s)$.

Since the Laplace transform of the solution $y(x)$ is always in the form of a fraction (see Eq. 4.11), the method we will always use to split a function $F(s)$ into simple factors is the method of partial fractions. It is worth recalling it briefly. We assume that $F(s) = \frac{P_n(s)}{Q_m(s)}$ is the quotient of two polynomials $P_n(s)$ and $Q_m(s)$, of degrees $n$ and $m$ respectively. We will also assume $m > n$. It is always possible to factorize the polynomial $Q_m(s)$ at the denominator into factors of the type $as + b$ or of the type $cs^2 + ds + e$. Sometimes, when we factorize $Q_m(s)$, we obtain factors of the type $(as+b)^k$ (which means that $s = -\frac{b}{a}$ is a root of multiplicity $k$ of the polynomial $Q_m(s)$) or of the type $(cs^2+ds+e)^k$. The method of partial fractions consists in writing the fraction $P_n(s)/Q_m(s)$ as a sum of simpler fractions of the type $\frac{A}{(as+b)^k}$ or $\frac{As+B}{(cs^2+ds+e)^k}$. The partial fractions we seek depend on the factor at the denominator, namely:

• If a factor $(as+b)$ is present at the denominator, then we seek a partial fraction of the type:

$$\frac{A}{as+b}$$

• If a factor $(as+b)^k$ is present at the denominator, then we seek $k$ partial fractions of the type:

$$\frac{A_i}{(as+b)^i}, \quad i = 1, \dots, k$$

• If a factor $(cs^2+ds+e)$ is present at the denominator, then we seek a partial fraction of the type:

$$\frac{As+B}{cs^2+ds+e}$$

• If a factor $(cs^2+ds+e)^k$ is present at the denominator, then we seek $k$ partial fractions of the type:

$$\frac{A_i s + B_i}{(cs^2+ds+e)^i}, \quad i = 1, \dots, k$$

Table 4.1: Summary of elementary Laplace transforms

| $f(x) = \mathcal{L}^{-1}\{F(s)\}$ | $F(s) = \mathcal{L}\{f(x)\}$ | Convergence |
|---|---|---|
| $1$ | $\frac{1}{s}$ | $s > 0$ |
| $e^{mx}$ | $\frac{1}{s-m}$ | $s > m$ |
| $x^n$ | $\frac{n!}{s^{n+1}}$ | $s > 0$ |
| $\sin(mx)$ | $\frac{m}{s^2+m^2}$ | $s > 0$ |
| $\cos(mx)$ | $\frac{s}{s^2+m^2}$ | $s > 0$ |
| $\sinh(mx)$ | $\frac{m}{s^2-m^2}$ | $s > m$ |
| $\cosh(mx)$ | $\frac{s}{s^2-m^2}$ | $s > m$ |
| $e^{mx}\sin(px)$ | $\frac{p}{(s-m)^2+p^2}$ | $s > m$ |
| $e^{mx}\cos(px)$ | $\frac{s-m}{(s-m)^2+p^2}$ | $s > m$ |
| $x^n e^{mx}$ | $\frac{n!}{(s-m)^{n+1}}$ | $s > m$ |
| $x^{-1/2}$ | $\sqrt{\frac{\pi}{s}}$ | $s > 0$ |
| $\sqrt{x}$ | $\frac{1}{2}\sqrt{\frac{\pi}{s^3}}$ | $s > 0$ |
| $\delta(x-c)$ | $e^{-cs}$ | $c > 0$ |
| $u_c(x)$ | $\frac{e^{-cs}}{s}$ | $s > 0$ |
| $u_c(x)f(x-c)$ | $e^{-cs}F(s)$ | |
| $e^{cx}f(x)$ | $F(s-c)$ | |
| $f(cx)$ | $\frac{1}{c}F\left(\frac{s}{c}\right)$ | $c > 0$ |
| $\int_0^x f(\tilde{x})\, d\tilde{x}$ | $\frac{F(s)}{s}$ | |
| $\int_0^x f(x-\xi)\, g(\xi)\, d\xi$ | $F(s)G(s)$ | |
| $(-1)^n x^n f(x)$ | $F^{(n)}(s)$ | |
| $f^{(n)}(x)$ | $s^n F(s) - s^{n-1}f(0) - \cdots - f^{(n-1)}(0)$ | |

Example 4.1.9 Use the method of partial fractions to split the function

$$F(s) = \frac{s^3 + s^2 + 1}{s^2(s^2 + s + 1)}.$$

We must determine the coefficients $A$, $B$, $C$, $D$ so that:

$$\frac{A}{s} + \frac{B}{s^2} + \frac{Cs + D}{s^2 + s + 1} = \frac{s^3 + s^2 + 1}{s^2(s^2 + s + 1)}.$$

If we multiply this equation by $s^2(s^2 + s + 1)$, we obtain:

$$As(s^2 + s + 1) + B(s^2 + s + 1) + Cs^3 + Ds^2 = s^3 + s^2 + 1.$$

We must now compare the coefficients of like powers of $s$, obtaining the system of equations:

$$\begin{cases} A + C = 1 \\ A + B + D = 1 \\ A + B = 0 \\ B = 1 \end{cases} \quad\Rightarrow\quad \begin{cases} A = -1 \\ B = 1 \\ C = 2 \\ D = 1 \end{cases}$$

The given fraction can thus be decomposed in this way:

$$\frac{s^3 + s^2 + 1}{s^2(s^2 + s + 1)} = -\frac{1}{s} + \frac{1}{s^2} + \frac{2s + 1}{s^2 + s + 1}.$$
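A partial-fraction decomposition is an identity in $s$, so it can be verified exactly at a handful of rational points. The sketch below (our illustrative check, not part of the original notes) uses Python's exact `Fraction` arithmetic so no floating-point tolerance is needed:

```python
from fractions import Fraction

def lhs(s):
    # the original fraction of Example 4.1.9
    return Fraction(s**3 + s**2 + 1, s**2 * (s**2 + s + 1))

def rhs(s):
    # the claimed decomposition
    s = Fraction(s)
    return -1 / s + 1 / s**2 + (2 * s + 1) / (s**2 + s + 1)

# both sides agree exactly at several integer points
for s in (1, 2, 3, 5, -4):
    assert lhs(s) == rhs(s)
```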


Example 4.1.10 Find the inverse Laplace transform of the function

$$F(s) = \frac{s^2 + 5}{s^3 - 9s}$$

We can write the given function as:

$$F(s) = \frac{s^2 + 5}{s(s^2 - 9)} = \frac{s^2 + 5}{s(s-3)(s+3)}.$$

To invert this function we have to apply the method of partial fractions, namely:

$$\frac{s^2 + 5}{s(s-3)(s+3)} = \frac{A}{s} + \frac{B}{s-3} + \frac{C}{s+3} = \frac{As^2 - 9A + Bs^2 + 3Bs + Cs^2 - 3Cs}{s(s-3)(s+3)}.$$

Now we can compare terms of like powers of $s$, obtaining the following system of equations:

$$\begin{cases} A + B + C = 1 \\ 3B - 3C = 0 \\ -9A = 5 \end{cases}$$

From the second we obtain $B = C$, from the last $A = -\frac{5}{9}$. From the first equation: $2B = \frac{14}{9} \Rightarrow B = C = \frac{7}{9}$.

Now we can invert all the terms of the given function and obtain:

$$f(x) = \mathcal{L}^{-1}\left\{\frac{s^2 + 5}{s^3 - 9s}\right\} = \mathcal{L}^{-1}\left\{-\frac{5}{9}\,\frac{1}{s} + \frac{7}{9}\left(\frac{1}{s-3} + \frac{1}{s+3}\right)\right\} = -\frac{5}{9} + \frac{14}{9}\,\mathcal{L}^{-1}\left\{\frac{s}{s^2 - 9}\right\} = -\frac{5}{9} + \frac{14}{9}\cosh(3x).$$

Example 4.1.11 Solve the initial value problem

$$\begin{cases} y''(x) + 4y(x) = e^x \\ y(0) = 0 \\ y'(0) = -1 \end{cases}$$

We have to apply the operator $\mathcal{L}$ to both members of the given ODE. Since this is a second-order ODE with constant coefficients, we can apply directly Eq. (4.11) to obtain:

$$Y(s) = \frac{F(s) - 1}{s^2 + 4} = \frac{\frac{1}{s-1} - 1}{s^2 + 4} = \frac{2 - s}{(s-1)(s^2 + 4)}.$$

We apply now the method of partial fractions to decompose this function:

$$\frac{A}{s-1} + \frac{Bs + C}{s^2 + 4} = \frac{2 - s}{(s-1)(s^2 + 4)} \Rightarrow As^2 + 4A + Bs^2 + Cs - Bs - C = 2 - s.$$

By equating the terms of like powers of $s$ we obtain the system of equations:

$$\begin{cases} A + B = 0 \\ C - B = -1 \\ 4A - C = 2 \end{cases} \Rightarrow \begin{cases} A = -B \\ C = B - 1 \\ -4B - B = 1 \end{cases} \Rightarrow \begin{cases} B = -\frac{1}{5} \\ A = \frac{1}{5} \\ C = -\frac{6}{5} \end{cases}$$

The decomposed $Y(s)$ is thus given by:

$$Y(s) = \frac{1}{5}\,\frac{1}{s-1} - \frac{1}{5}\,\frac{s}{s^2 + 4} - \frac{6}{5}\,\frac{1}{s^2 + 4}.$$

With the help of Table 4.1 we can easily identify the inverse Laplace transforms of these addends, obtaining therefore:

$$y(x) = \frac{e^x}{5} - \frac{\cos(2x)}{5} - \frac{3\sin(2x)}{5}.$$
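A solution obtained via the transform can always be checked against the original problem. The sketch below (our illustrative Python check, not part of the original notes) verifies the initial conditions with a central difference and the ODE $y'' + 4y = e^x$ with a second-order difference at a few sample points:

```python
import math

def y(x):
    # candidate solution of Example 4.1.11
    return math.exp(x) / 5.0 - math.cos(2.0 * x) / 5.0 - 3.0 * math.sin(2.0 * x) / 5.0

# initial conditions: y(0) = 0, y'(0) = -1
h = 1e-6
assert abs(y(0.0)) < 1e-12
assert abs((y(h) - y(-h)) / (2 * h) + 1.0) < 1e-6

# the ODE itself, via a central second difference
h = 1e-4
for x in (0.3, 1.0, 2.5):
    ypp = (y(x + h) - 2.0 * y(x) + y(x - h)) / h**2
    assert abs(ypp + 4.0 * y(x) - math.exp(x)) < 1e-5
```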

The method of the Laplace transform is sometimes more convenient, sometimes less convenient than traditional methods of solving ODEs. It proves, however, to be always more convenient in the case in which the inhomogeneous term is a step function. In fact, in this case the only available traditional method is the laborious variation of constants, whereas the Laplace transform of the step function can be readily found.

Example 4.1.12 Find the solution of the initial value problem:

$$\begin{cases} y''(x) + y(x) = g(x) \\ y(0) = 0 \\ y'(0) = 0 \end{cases}$$

where $g(x)$ is given by:

$$g(x) = \begin{cases} 0 & 0 \le x < 1 \\ x - 1 & 1 \le x < 2 \\ 1 & x \ge 2 \end{cases}$$

(also known as ramp loading).

The function $g(x)$ can be written as:

$$g(x) = u_1(x)(x-1) - u_2(x)(x-2),$$

where $u_c(x)$ is the Heaviside function (Eq. 4.7). In fact, for $x < 1$ both $u_1$ and $u_2$ are zero. For $x$ between 1 and 2, $u_1 = 1$ but $u_2$ is still zero. For $x \ge 2$ both functions are 1 and therefore $u_1(x)(x-1) - u_2(x)(x-2) = x - 1 - x + 2 = 1$. If we take the Laplace transform of both members of the given ODE we obtain:

$$s^2 Y(s) + Y(s) = \frac{e^{-s}}{s^2} - \frac{e^{-2s}}{s^2}$$

$$\Rightarrow Y(s) = \left(e^{-s} - e^{-2s}\right)\frac{1}{s^2(s^2+1)} = \left(e^{-s} - e^{-2s}\right)\frac{1 + s^2 - s^2}{s^2(s^2+1)} = \frac{e^{-s} - e^{-2s}}{s^2} - \frac{e^{-s} - e^{-2s}}{s^2+1}.$$

To invert this function $Y(s)$ we use again the relation $\mathcal{L}\{u_c(x)f(x-c)\} = e^{-cs}F(s)$ (and therefore $\mathcal{L}^{-1}\{e^{-cs}F(s)\} = u_c(x)f(x-c)$) to obtain:

$$y(x) = u_1(x)(x-1) - u_2(x)(x-2) - u_1(x)\sin(x-1) + u_2(x)\sin(x-2).$$
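The ramp-loading solution can also be sanity-checked directly against the ODE, away from the break points where the forcing changes expression. A minimal Python sketch (ours, not part of the original notes):

```python
import math

def u(c, x):
    # Heaviside step u_c(x)
    return 1.0 if x >= c else 0.0

def y(x):
    # candidate solution of Example 4.1.12
    return (u(1, x) * (x - 1) - u(2, x) * (x - 2)
            - u(1, x) * math.sin(x - 1) + u(2, x) * math.sin(x - 2))

def g(x):
    # ramp loading
    if x < 1:
        return 0.0
    if x < 2:
        return x - 1.0
    return 1.0

# check y'' + y = g at sample points away from x = 1 and x = 2
h = 1e-4
for x in (0.5, 1.5, 3.0, 6.0):
    ypp = (y(x + h) - 2.0 * y(x) + y(x - h)) / h**2
    assert abs(ypp + y(x) - g(x)) < 1e-5
assert abs(y(0.0)) < 1e-12
```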

Among the results presented in Table 4.1, a very significant one concerns the Dirac delta function $\delta(x-c)$. We briefly recall here what the Dirac delta function is and what its properties are. Given a function $g(x)$ defined in the following way:

$$g(x) = d_\xi(x) = \begin{cases} \frac{1}{2\xi} & -\xi < x < \xi \\ 0 & x \le -\xi \text{ or } x \ge \xi \end{cases} \tag{4.12}$$

it is clear that the integral of this function is 1 for any possible choice of $\xi$; in fact:

$$\int_{-\infty}^{\infty} g(x)\, dx = \int_{-\xi}^{\xi}\frac{1}{2\xi}\, dx = 1.$$

It is also clear that as $\xi$ tends to zero, the interval of values of $x$ in which $g(x)$ is different from zero becomes narrower and narrower until it disappears. Analogously, the function $g(x-c) = d_\xi(x-c)$ is non-null only in a narrow interval of $x$ centered on $c$ that disappears for $\xi$ tending to zero. The limit of the function $g(x) = d_\xi(x)$ for $\xi \to 0$ is called the Dirac delta function and is indicated with $\delta(x)$. It is therefore characterized by the properties:

$$\delta(x-c) = 0 \quad \forall x \ne c \tag{4.13}$$

$$\int_{-\infty}^{\infty}\delta(x)\, dx = 1. \tag{4.14}$$

Given a generic function $f(x)$, if we integrate $f(x)\delta(x-c)$ between $-\infty$ and $\infty$ we obtain:

$$\int_{-\infty}^{\infty} f(x)\delta(x-c)\, dx = \lim_{\xi\to 0}\frac{1}{2\xi}\int_{c-\xi}^{c+\xi} f(x)\, dx = \lim_{\xi\to 0}\frac{1}{2\xi}\left[2\xi f(\tilde{x})\right], \quad \tilde{x} \in [c-\xi, c+\xi].$$

The last step is justified by the mean value theorem for integrals. But the interval of values in which $\tilde{x}$ must be taken collapses to the point $c$ for $\xi \to 0$; therefore we obtain the important property of the Dirac delta function:

$$\int_{-\infty}^{\infty} f(x)\delta(x-c)\, dx = f(c). \tag{4.15}$$

To calculate the Laplace transform of $\delta(x-c)$ (with $c \ge 0$) it is convenient to calculate first the Laplace transform of the function $d_\xi(x-c)$ and then take the limit $\xi \to 0$, namely:

$$\mathcal{L}\{\delta(x-c)\} = \lim_{\xi\to 0}\int_0^\infty e^{-sx}\, d_\xi(x-c)\, dx = \lim_{\xi\to 0}\int_{c-\xi}^{c+\xi}\frac{e^{-sx}}{2\xi}\, dx = \lim_{\xi\to 0}\frac{e^{-s(c-\xi)} - e^{-s(c+\xi)}}{2s\xi} = e^{-sc}\lim_{\xi\to 0}\frac{e^{s\xi} - e^{-s\xi}}{2s\xi} = e^{-sc}\lim_{\xi\to 0}\frac{s\left(e^{s\xi} + e^{-s\xi}\right)}{2s} = e^{-sc}.$$

The last step is justified by de l'Hôpital's rule for limits. In this way we have found the result reported in Table 4.1 about the Laplace transform of $\delta(x-c)$. In the case $c = 0$ we have $\mathcal{L}\{\delta(x)\} = 1$.
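The limiting process can be watched numerically: the transform of the box $d_\xi(x-c)$ should approach $e^{-sc}$ as $\xi$ shrinks. A minimal Python sketch (ours, not part of the original notes; `laplace_d` is an illustrative name):

```python
import math

def laplace_d(xi, c, s, steps=20_000):
    """Trapezoidal approximation of the Laplace integral of d_xi(x - c),
    which equals 1/(2 xi) on (c - xi, c + xi) and 0 elsewhere (c - xi >= 0)."""
    a, b = c - xi, c + xi
    h = (b - a) / steps
    total = 0.5 * (math.exp(-s * a) + math.exp(-s * b))
    for k in range(1, steps):
        total += math.exp(-s * (a + k * h))
    return total * h / (2.0 * xi)

s, c = 1.5, 2.0
prev = None
for xi in (0.5, 0.1, 0.01):
    err = abs(laplace_d(xi, c, s) - math.exp(-s * c))
    if prev is not None:
        assert err < prev   # the error shrinks as xi -> 0
    prev = err
assert prev < 1e-4
```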

4.1.3 The Bromwich integral

Although for most practical purposes the inverse Laplace transform of a given function $F(s)$ can be found by means of the "dictionary" provided by Tab. 4.1 (or of more extended tables that can be found in the literature), a general formula for the inversion of $F(s)$ can be found by treating $F(s)$ as a complex function; it is given by the so-called Bromwich integral:

$$f(x) = \mathcal{L}^{-1}\{F(s)\} = \frac{1}{2\pi i}\int_{\lambda - i\infty}^{\lambda + i\infty} e^{sx} F(s)\, ds, \tag{4.16}$$

where $\lambda$ is a real positive number, larger than the real parts of all the singularities of $e^{sx}F(s)$. Since $F(s)$ has been defined as the integral of $e^{-sx}f(x)$ between $x = 0$ and $x = \infty$, we will consider in this formula only positive values of $x$ as well. In practice, the integral must be performed along the infinite line $L$, parallel to the imaginary axis, indicated in Fig. 4.1.

[Figure 4.1: The infinite line $L$ along which the Bromwich integral must be performed.]

At this point, a curve must be chosen in order to close the contour $C$. Possible completion paths are for instance the curves $\Gamma_1$ or $\Gamma_2$ indicated in Fig. 4.2, namely the half-circles with radius $R$ on the left and on the right of $L$, respectively. For $R \to \infty$ these curves make with $L$ a closed contour. The Bromwich integral can be evaluated by means of the residue theorem, provided that the integral of the function $e^{sx}F(s)$ tends to zero for $R$ (the radius of the chosen half-circle) tending to infinity. If we choose the completion path $\Gamma_1$, then the residue theorem ensures that:

$$f(x) = \frac{1}{2\pi i}\cdot 2\pi i \sum_j R_j = \sum_j R_j, \tag{4.17}$$

where the sum is extended to all the residues $R_j$ of the function $e^{sx}F(s)$ in the complex plane. In fact, by construction $L$ lies on the right of each singularity of $e^{sx}F(s)$, and in the limit $R \to \infty$ the closed curve $C = L + \Gamma_1$ will enclose them all (including for instance the singularity $z_1$ that in Fig. 4.2 is not yet enclosed in $C$). If we instead choose the completion path $\Gamma_2$, then the closed curve $L + \Gamma_2$ will enclose no singularities and therefore $f(x)$ will be zero.

[Figure 4.2: Possible contour completions for the integration path $L$ to use in the Bromwich integral.]

Example 4.1.13 Find the inverse Laplace transform of the function

$$F(s) = \frac{2e^{-2s}}{s^2 + 4}.$$

From the relation $\mathcal{L}\{u_c(x)f(x-c)\} = F(s)e^{-cs}$ we can already derive the inverse Laplace transform of the given function, namely $u_2(x)\sin[2(x-2)]$. We check whether we can obtain the same result by means of the Bromwich integral. We have to evaluate the integral

$$\frac{1}{2\pi i}\int_{\lambda - i\infty}^{\lambda + i\infty}\frac{2e^{s(x-2)}}{s^2 + 4}\, ds.$$

We notice first that the given function has two simple poles at $s = 2i$ and $s = -2i$ (in fact $s^2 + 4 = (s+2i)(s-2i)$), both of which have $\operatorname{Re}(s) = 0$. We can


therefore take an arbitrarily small (but positive) value of $\lambda$. We can distinguish two cases: i) $x < 2$ and ii) $x > 2$. For $x < 2$ the exponent $s(x-2)$ has negative real part if $\operatorname{Re}(s) > 0$. We notice here that $e^{s(x-2)} = e^{(x-2)\operatorname{Re}(s)}\, e^{i(x-2)\operatorname{Im}(s)}$; therefore what determines the behavior of this function at infinity is $e^{(x-2)\operatorname{Re}(s)}$ (the factor $e^{i(x-2)\operatorname{Im}(s)}$ has modulus 1 and does not create problems). That means that, for $\operatorname{Re}(s) \to +\infty$, the function $e^{s(x-2)}$ tends to zero. At the same time the denominator $s^2 + 4$ diverges as $\operatorname{Re}(s) \to +\infty$, and this means that the term $\frac{1}{s^2+4}$ that multiplies $e^{s(x-2)}$ also tends to zero along the curve $\Gamma_2$ for $R \to \infty$. Therefore the integral of the function $F(s)e^{sx}$ tends to zero along the curve $\Gamma_2$ of Fig. 4.2 (for $R \to \infty$) and we can calculate the Bromwich integral by means of the contour $C = L + \Gamma_2$. From what we have learned, since the given closed contour does not enclose the poles, the function $f(x)$ is zero.

For $x > 2$, the function $e^{s(x-2)}$ tends to zero for $\operatorname{Re}(s) \to -\infty$. That means that the integral of the function $\frac{e^{s(x-2)}}{s^2+4}$ tends to zero (for $R \to \infty$) along the curve $\Gamma_1$ of Fig. 4.2, and we therefore take $\Gamma_1$ as a completion of $L$ to calculate the Bromwich integral. By the residue theorem, this integral is given by the sum of the residues of the function $e^{sx}F(s)$ at all the poles, namely:

$$f(x) = \operatorname{Res}(2i) + \operatorname{Res}(-2i).$$

We have:

$$\operatorname{Res}(2i) = \lim_{s\to 2i}\,(s-2i)\,\frac{2e^{s(x-2)}}{s^2+4} = \lim_{s\to 2i}\frac{2e^{s(x-2)}}{s+2i} = \frac{e^{2i(x-2)}}{2i}$$

$$\operatorname{Res}(-2i) = \lim_{s\to -2i}\,(s+2i)\,\frac{2e^{s(x-2)}}{s^2+4} = \lim_{s\to -2i}\frac{2e^{s(x-2)}}{s-2i} = \frac{e^{-2i(x-2)}}{-2i}$$

By summing these two residues we obtain:

$$f(x) = \frac{1}{2i}\left[e^{2i(x-2)} - e^{-2i(x-2)}\right] = \sin[2(x-2)].$$

This is what we obtain if $x > 2$; as we have seen, if $x$ is smaller than 2 the function is zero. Recalling the definition of the Heaviside function $u_c(x)$ we can conclude that the inverse Laplace transform of the given function is:

$$f(x) = \mathcal{L}^{-1}\left\{\frac{2e^{-2s}}{s^2 + 4}\right\} = u_2(x)\sin[2(x-2)].$$
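The residue computation can be cross-checked by going the other way: numerically transforming the recovered $f(x) = u_2(x)\sin[2(x-2)]$ should reproduce $2e^{-2s}/(s^2+4)$. A minimal Python sketch (ours, not part of the original notes):

```python
import math

def laplace(f, s, upper=40.0, steps=100_000):
    """Trapezoidal approximation of the Laplace integral of f."""
    h = upper / steps
    total = 0.5 * (f(0.0) + math.exp(-s * upper) * f(upper))
    for k in range(1, steps):
        x = k * h
        total += math.exp(-s * x) * f(x)
    return total * h

def f(x):
    # the recovered inverse transform u_2(x) sin[2(x - 2)]
    return math.sin(2.0 * (x - 2.0)) if x >= 2.0 else 0.0

for s in (0.5, 1.0, 2.0):
    exact = 2.0 * math.exp(-2.0 * s) / (s**2 + 4.0)
    assert abs(laplace(f, s) - exact) < 1e-3
```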

Example 4.1.14 Find the inverse Laplace transform of the function:

$$F(s) = \sqrt{s - a},$$

with $a \in \mathbb{R}$.


The function $e^{sx}\sqrt{s-a}$ has no poles, but the function $\sqrt{z}$ is multiple-valued in the complex plane; therefore, as we have seen, a branch point is present at the point $z = 0$, namely at $s = a$. This is the only singularity of our $F(s)e^{sx}$ and therefore, in order to evaluate the Bromwich integral, we have to take $\lambda$ larger than $a$. The integral to calculate will be:

$$\mathcal{L}^{-1}\{\sqrt{s-a}\} = \frac{1}{2\pi i}\int_{\lambda - i\infty}^{\lambda + i\infty}\sqrt{s-a}\, e^{sx}\, ds.$$

By means of the substitution $z = s - a$ we obtain:

$$\mathcal{L}^{-1}\{\sqrt{s-a}\} = \frac{1}{2\pi i}\int_{\lambda - i\infty}^{\lambda + i\infty}\sqrt{z}\, e^{(z+a)x}\, dz = \frac{e^{ax}}{2\pi i}\int_{\lambda - i\infty}^{\lambda + i\infty}\sqrt{z}\, e^{zx}\, dz.$$

In this case, the branch point is at zero, therefore $\lambda$ can be arbitrarily small (but always larger than zero). Since $z = 0$ is a branch point of the function to integrate, we have to introduce a branch cut to evaluate the integral. Although we have so far taken the positive real axis as a branch cut, we have also said that this choice is arbitrary; to make the function $\sqrt{z}$ single-valued it is enough that closed curves are not allowed to enclose the origin. We can therefore take the negative real axis as branch cut. In Fig. 4.3 we indicate the contour we must use to integrate the given function. Since the closed contour $C = L + \Gamma_1 + r_1 + \gamma + r_2 + \Gamma_2$ does not enclose singularities, its integral is zero. To evaluate the Bromwich integral (namely the integral along $L$) we have to calculate the integral along the arcs $\Gamma_1$ and $\Gamma_2$, along the straight lines $r_1$ and $r_2$, and along the circumference $\gamma$.

Since the function $\sqrt{z}\, e^{zx}$ tends to zero for $\operatorname{Re}(z) \to -\infty$ (the term $\sqrt{z}$ cannot counteract the exponential decay of $e^{zx}$; remember that $x$ must be positive), the integral along the arcs $\Gamma_1$ and $\Gamma_2$ vanishes.

To evaluate the integral along $\gamma$ we take as usual $z = \varepsilon e^{i\theta}$ and take the limit for $\varepsilon \to 0$. The interval of values of $\theta$ is $[\pi, -\pi]$; in fact, as we arrive at $\gamma$ the first argument will be $\pi$. Then we rotate clockwise around the origin, and after a whole circuit the argument will be $-\pi$. Since $dz = i\varepsilon e^{i\theta}\, d\theta$ we have:

$$\int_\gamma \sqrt{z}\, e^{zx}\, dz = \int_\pi^{-\pi}\sqrt{\varepsilon}\, e^{i\frac{\theta}{2}}\, e^{x\varepsilon e^{i\theta}}\cdot i\varepsilon e^{i\theta}\, d\theta.$$

The integrand clearly tends to zero for $\varepsilon \to 0$; therefore there is no contribution from the integral over $\gamma$.

Along the straight lines $r_1$ and $r_2$ we can assume that the arguments of the complex numbers lying on them are $\pi$ (along $r_1$) and $-\pi$ (along $r_2$) and that their imaginary parts tend to zero; therefore we have $z = re^{i\pi}$ ($r_1$) and $z = re^{-i\pi}$ ($r_2$). Consequently, $dz = e^{i\pi}\, dr$ ($r_1$) and $dz = e^{-i\pi}\, dr$ ($r_2$). Notice here that, although we are on the negative real axis, $r$ is positive. In fact, $e^{i\pi} = e^{-i\pi} = -1$. The parameter $r$ runs between $+\infty$ and 0 (along $r_1$) and between 0 and $+\infty$ (along $r_2$).

[Figure 4.3: Contour to use in Example 4.1.14.]

The integral of the given function along $r_1$ turns out to be:

$$\int_{r_1}\sqrt{z}\, e^{zx}\, dz = \int_\infty^0 \sqrt{r}\, e^{i\frac{\pi}{2}}\, e^{xre^{i\pi}}\cdot e^{i\pi}\, dr = \int_\infty^0 \sqrt{r}\cdot i\cdot e^{-xr}\cdot(-1)\, dr = i\int_0^\infty \sqrt{r}\, e^{-xr}\, dr.$$

Along $r_2$ we have:

$$\int_{r_2}\sqrt{z}\, e^{zx}\, dz = \int_0^\infty \sqrt{r}\, e^{-i\frac{\pi}{2}}\, e^{xre^{-i\pi}}\cdot e^{-i\pi}\, dr = \int_0^\infty \sqrt{r}\cdot(-i)\cdot e^{-xr}\cdot(-1)\, dr = i\int_0^\infty \sqrt{r}\, e^{-xr}\, dr.$$

In the end we have:

$$f(x) = \mathcal{L}^{-1}\{\sqrt{s-a}\} = -\frac{e^{ax}}{2\pi i}\int_{r_1 + r_2}\sqrt{z}\, e^{zx}\, dz = -\frac{e^{ax}}{\pi}\int_0^\infty \sqrt{r}\, e^{-xr}\, dr.$$

The minus sign is due to the fact that, as we have said, the integral along the whole closed curve $C$ is zero, therefore $\int_L F(s)e^{sx}\, ds = -\int_{r_1 + r_2} F(s)e^{sx}\, ds$. To evaluate the integral $\int_0^\infty \sqrt{r}\, e^{-xr}\, dr$ we make the substitution $xr = t^2$, so that $r = \frac{t^2}{x}$ and $dr = \frac{2t\, dt}{x}$. We obtain:

$$\int_0^\infty \sqrt{r}\, e^{-xr}\, dr = \frac{1}{x^{3/2}}\int_0^\infty t e^{-t^2}\cdot 2t\, dt.$$

Since $-2te^{-t^2}$ is the differential of $e^{-t^2}$, we can integrate by parts and obtain:

$$\int_0^\infty \sqrt{r}\, e^{-xr}\, dr = -\frac{1}{x^{3/2}}\left[\left[te^{-t^2}\right]_0^\infty - \int_0^\infty e^{-t^2}\, dt\right].$$


The term $\left[te^{-t^2}\right]_0^\infty$ is zero. By using the known result $\int_0^\infty e^{-t^2}\, dt = \frac{\sqrt{\pi}}{2}$ we obtain:

$$\int_0^\infty \sqrt{r}\, e^{-xr}\, dr = \frac{\sqrt{\pi}}{2x^{3/2}}.$$

This result completes our inversion of the function $F(s) = \sqrt{s-a}$; namely, we have:

$$f(x) = \mathcal{L}^{-1}\{\sqrt{s-a}\} = -\frac{e^{ax}}{2\sqrt{\pi x^3}}.$$
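The key real integral in this inversion, $\int_0^\infty \sqrt{r}\, e^{-xr}\, dr = \frac{\sqrt{\pi}}{2x^{3/2}}$, is easy to confirm numerically. A minimal Python sketch (ours, not part of the original notes; `integral` is an illustrative name):

```python
import math

def integral(x, upper=80.0, steps=200_000):
    """Trapezoidal approximation of the integral of sqrt(r) e^{-x r} on [0, upper]."""
    h = upper / steps
    total = 0.5 * (0.0 + math.sqrt(upper) * math.exp(-x * upper))
    for k in range(1, steps):
        r = k * h
        total += math.sqrt(r) * math.exp(-x * r)
    return total * h

for x in (0.5, 1.0, 2.0):
    exact = math.sqrt(math.pi) / (2.0 * x**1.5)
    # scale the truncation point with 1/x so the tail is negligible
    assert abs(integral(x, upper=80.0 / x) - exact) < 1e-3
```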

4.2 Fourier transforms

Fourier transforms are widely used in physics and astronomy because they allow us to express a function (not necessarily periodic) as a superposition of sinusoidal functions; therefore we devote this section to them. Since Fourier transforms are used mostly to represent time-varying functions, we shall use $t$ as the independent variable instead of $x$. On the other hand, the transformed variable represents in most applications a frequency, and will be indicated with $\omega$ instead of $s$.

4.2.1 Fourier series

For some physical applications, we might need to expand in series functions that are not continuous or not differentiable and that therefore do not admit a Taylor series. Fourier series allow us to represent periodic functions, for which a Taylor expansion does not exist, as superpositions of sine and cosine functions. Given a periodic function $f(t)$ with period $T$ such that the integral of $|f(t)|$ over one period converges, $f(t)$ can be expressed in this way:

$$f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n\cos\left(\frac{2\pi nt}{T}\right) + b_n\sin\left(\frac{2\pi nt}{T}\right)\right],$$

where the constant coefficients $a_n$, $b_n$ are called Fourier coefficients. Defining the angular frequency $\omega = \frac{2\pi}{T}$, we simplify this expression into:

$$f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n\cos(\omega n t) + b_n\sin(\omega n t)\right], \tag{4.18}$$

namely the function $f(t)$ can be expressed as a superposition of an infinite number of sinusoidal functions having periods $T_n = \frac{2\pi}{\omega n}$.

It can be shown that these coeﬃcients are given by:


\[
a_n = \frac{2}{T} \int_{-T/2}^{T/2} f(t) \cos(\omega n t)\, dt, \tag{4.19}
\]
\[
b_n = \frac{2}{T} \int_{-T/2}^{T/2} f(t) \sin(\omega n t)\, dt. \tag{4.20}
\]

Example 4.2.1 Find the Fourier series expansion of the function
\[
f(t) =
\begin{cases}
-1 & -\frac{T}{2} + kT \le t < kT \\
\phantom{-}1 & kT \le t < \frac{T}{2} + kT
\end{cases}
\]

This is a square wave: a series of positive impulses followed periodically by negative impulses of the same intensity. We can notice immediately that the function $f(t)$ is odd ($f(t) = -f(-t)$). Since the function $\cos(\omega n t)$ is even, the whole function $f(t)\cos(\omega n t)$ is odd and its integral between $-T/2$ and $T/2$ is zero. That means that the coefficients $a_n$ are zero.

To find the coefficients $b_n$ we apply Eq. 4.20, obtaining:

\[
b_n = \frac{2}{T} \int_{-T/2}^{T/2} f(t) \sin(\omega n t)\, dt
= \frac{2}{T} \left[ -\int_{-T/2}^{0} \sin(\omega n t)\, dt + \int_{0}^{T/2} \sin(\omega n t)\, dt \right]
\]
\[
= \frac{4}{T} \int_{0}^{T/2} \sin(\omega n t)\, dt
= -\frac{2}{n\pi} \left[ \cos(\omega n t) \right]_0^{T/2}
= \frac{2}{n\pi} \left[ 1 - \cos(n\pi) \right].
\]

Here we have used the relation $\omega T = 2\pi$. We can notice here that $\cos(n\pi)$ is $1$ if $n$ is even and $-1$ if $n$ is odd, namely $\cos(n\pi) = (-1)^n$. We could find the same result by means of de Moivre's theorem applied to the complex number $z = e^{i\pi}$. The coefficients $b_n$ are therefore equal to zero if $n$ is even and to $\frac{4}{n\pi}$ if $n$ is odd. The Fourier expansion we were looking for is therefore:

\[
f(t) = \frac{4}{\pi} \left[ \sin(\omega t) + \frac{\sin(3\omega t)}{3} + \frac{\sin(5\omega t)}{5} + \ldots \right].
\]
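The coefficients of Example 4.2.1 can be checked by evaluating Eq. 4.20 numerically. In the Python sketch below (the function names and the midpoint-rule quadrature are illustrative, not from the text), $b_n$ computed over one period should come out as $4/(n\pi)$ for odd $n$ and zero for even $n$:

```python
import math

T = 2 * math.pi  # any period works; this makes omega = 1

def square_wave(t):
    """Square wave of Example 4.2.1: -1 on [-T/2, 0), +1 on [0, T/2)."""
    t = (t + T / 2) % T - T / 2  # reduce t to the base period [-T/2, T/2)
    return 1.0 if t >= 0 else -1.0

def b_n(n, samples=10_000):
    """Midpoint-rule approximation of Eq. 4.20."""
    omega = 2 * math.pi / T
    h = T / samples
    total = 0.0
    for i in range(samples):
        t = -T / 2 + (i + 0.5) * h
        total += square_wave(t) * math.sin(omega * n * t)
    return (2.0 / T) * total * h

for n in range(1, 6):
    expected = 4.0 / (n * math.pi) if n % 2 == 1 else 0.0
    print(n, b_n(n), expected)
```

The printed pairs agree, confirming that only the odd harmonics survive.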

By using the identities $\cos z = (e^{iz} + e^{-iz})/2$ and $\sin z = (e^{iz} - e^{-iz})/2i$, the Fourier expansion of a function $f(t)$ can also be written as:


\[
f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \frac{e^{i\omega n t} + e^{-i\omega n t}}{2} + b_n \frac{e^{i\omega n t} - e^{-i\omega n t}}{2i} \right]
\]
\[
= \frac{a_0\, e^{i\omega \cdot 0 \cdot t}}{2} + \frac{1}{2} \sum_{n=1}^{\infty} \left[ (a_n - i b_n) e^{i\omega n t} + (a_n + i b_n) e^{-i\omega n t} \right].
\]

In this way we can see that the function $f(t)$ can be expressed as a sum, extending from $-\infty$ to $+\infty$, of terms of the form $e^{i\omega_n t}$, where $\omega_n = \omega \cdot n$; namely, we have:
\[
f(t) = \sum_{n=-\infty}^{\infty} c_n e^{i\omega_n t}; \qquad
c_n =
\begin{cases}
\frac{1}{2}(a_n - i b_n) & n \ge 0 \\
\frac{1}{2}(a_{-n} + i b_{-n}) & n < 0
\end{cases}
\tag{4.21}
\]

This compact representation of the periodic function $f(t)$ is called the complex Fourier series. If we combine the coefficients $a_n$ and $b_n$ as indicated in Eq. 4.21 we find that, irrespective of the sign of $n$, we have:

\[
c_n = \frac{1}{T} \int_{-T/2}^{T/2} f(t) e^{-i\omega_n t}\, dt. \tag{4.22}
\]
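Eqs. 4.21 and 4.22 can be cross-checked on the square wave of Example 4.2.1: since there $a_1 = 0$ and $b_1 = 4/\pi$, Eq. 4.21 predicts $c_1 = -2i/\pi$, and a direct quadrature of Eq. 4.22 should agree. A minimal Python sketch (names and quadrature scheme are illustrative):

```python
import cmath
import math

def c_n(f, n, T, samples=20_000):
    """Midpoint-rule approximation of Eq. 4.22: c_n = (1/T) ∫ f(t) e^{-i n ω t} dt."""
    omega = 2 * math.pi / T
    h = T / samples
    total = 0.0 + 0.0j
    for i in range(samples):
        t = -T / 2 + (i + 0.5) * h
        total += f(t) * cmath.exp(-1j * omega * n * t)
    return total * h / T

T = 2 * math.pi
square = lambda t: 1.0 if (t % T) < T / 2 else -1.0  # +1 on [0, T/2), -1 on [T/2, T)
# Eq. 4.21 with a_1 = 0, b_1 = 4/π predicts c_1 = (a_1 - i b_1)/2 = -2i/π
print(c_n(square, 1, T))
```

The imaginary part comes out close to $-2/\pi \approx -0.6366$ and the real part close to zero, as predicted.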

4.2.2 From Fourier series to Fourier transform

We have seen that Fourier series allow us to describe periodic functions as superpositions of sinusoidal functions characterized by angular frequencies $\omega_n$. To represent non-periodic functions, we can extend the period $T$ to infinity (every function can be considered periodic if the period is large enough). That corresponds to considering a vanishingly small “frequency quantum” $\Delta\omega = \frac{\omega_n}{n} = \frac{2\pi}{T}$ and therefore a continuous spectrum of angular frequencies. Given a function
\[
f(t) = \sum_{n=-\infty}^{\infty} c_n e^{i\omega_n t}, \qquad \text{with } c_n = \frac{1}{T} \int_{-T/2}^{T/2} f(u) e^{-i\omega_n u}\, du,
\]
we want to see what happens in the limit $T \to \infty$ (or, analogously, $\Delta\omega = \frac{2\pi}{T} \to 0$). We have:

\[
f(t) = \sum_{n=-\infty}^{\infty} \frac{1}{T} \int_{-T/2}^{T/2} f(u) e^{-i\omega_n u}\, du \cdot e^{i\omega_n t}
= \sum_{n=-\infty}^{\infty} \frac{\Delta\omega}{2\pi} \int_{-T/2}^{T/2} f(u) e^{-i\omega_n u}\, du \cdot e^{i\omega_n t}.
\]

In the limit $T \to \infty$ and $\Delta\omega \to 0$ the limits of integration extend to infinity, the sum becomes an integral and the discrete values $\omega_n$ become a continuous variable $\omega$ (with $\Delta\omega \to d\omega$). We thus have:

\[
f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} d\omega\, e^{i\omega t} \int_{-\infty}^{\infty} du\, f(u)\, e^{-i\omega u}. \tag{4.23}
\]

From this relation we can deﬁne the Fourier transform of a function f(t) as:


\[
\tilde{f}(\omega) = \mathcal{F}\{f(t)\} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt. \tag{4.24}
\]

Here we require, in order for this integration to be possible, that $\int_{-\infty}^{\infty} |f(t)|\, dt$ is finite. Unlike the Laplace transform, the Fourier transform is very easy to invert. In fact, we can see directly from Eq. 4.23 that:

\[
f(t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \tilde{f}(\omega)\, e^{i\omega t}\, d\omega. \tag{4.25}
\]

Example 4.2.2 Find the Fourier transform of the normalized Gaussian distribution
\[
f(t) = \frac{1}{\tau\sqrt{2\pi}}\, e^{-\frac{t^2}{2\tau^2}}.
\]

By definition of the Fourier transform we have:
\[
\tilde{f}(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt
= \frac{1}{2\pi\tau} \int_{-\infty}^{\infty} e^{-i\omega t - \frac{t^2}{2\tau^2}}\, dt.
\]

We can modify the exponent of $e$ in the integral as follows:
\[
-i\omega t - \frac{t^2}{2\tau^2} = -\frac{1}{2\tau^2} \left[ t^2 + 2i\omega t \tau^2 + (i\omega\tau^2)^2 - (i\omega\tau^2)^2 \right].
\]

The first three terms inside the square brackets are the square of $t + i\omega\tau^2$; namely, we obtain:
\[
-i\omega t - \frac{t^2}{2\tau^2} = -\frac{(t + i\omega\tau^2)^2}{2\tau^2} + \frac{(i\omega\tau^2)^2}{2\tau^2}
= -\left( \frac{t + i\omega\tau^2}{\sqrt{2}\,\tau} \right)^2 - \frac{1}{2}\omega^2\tau^2.
\]

Since the term $e^{-\frac{1}{2}\omega^2\tau^2}$ does not depend on $t$, we obtain:
\[
\tilde{f}(\omega) = \frac{1}{2\pi\tau}\, e^{-\frac{1}{2}\omega^2\tau^2} \int_{-\infty}^{\infty} e^{-\left( \frac{t + i\omega\tau^2}{\sqrt{2}\,\tau} \right)^2}\, dt.
\]

This is the integral of a complex function, therefore we should use the methods of complex integration we have learned so far. However, we can see that the integration simplifies significantly by means of the substitution:
\[
\frac{t + i\omega\tau^2}{\sqrt{2}\,\tau} = s, \qquad dt = \sqrt{2}\,\tau\, ds.
\]

In this way we obtain:
\[
\tilde{f}(\omega) = \frac{1}{\sqrt{2}\,\pi}\, e^{-\frac{1}{2}\omega^2\tau^2} \int_{-\infty}^{\infty} e^{-s^2}\, ds
= \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\omega^2\tau^2},
\]


where we have made use of the known result $\int_{-\infty}^{\infty} e^{-s^2}\, ds = \sqrt{\pi}$. It is important to note that the Fourier transform of a Gaussian function is another Gaussian function.
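The result of Example 4.2.2 can also be verified numerically: a direct quadrature of Eq. 4.24 for the normalized Gaussian should reproduce $e^{-\omega^2\tau^2/2}/\sqrt{2\pi}$. In the Python sketch below the truncation range $\pm 10\tau$ and the function names are illustrative choices, not part of the text:

```python
import cmath
import math

def fourier_transform(f, omega, t_max, samples=20_000):
    """Midpoint-rule approximation of Eq. 4.24, truncated to [-t_max, t_max]."""
    h = 2 * t_max / samples
    total = 0.0 + 0.0j
    for i in range(samples):
        t = -t_max + (i + 0.5) * h
        total += f(t) * cmath.exp(-1j * omega * t)
    return total * h / math.sqrt(2 * math.pi)

tau = 1.5
gauss = lambda t: math.exp(-t ** 2 / (2 * tau ** 2)) / (tau * math.sqrt(2 * math.pi))

omega = 0.7
predicted = math.exp(-omega ** 2 * tau ** 2 / 2) / math.sqrt(2 * math.pi)
print(abs(fourier_transform(gauss, omega, 10 * tau) - predicted))
```

The discrepancy is tiny: the transform of a Gaussian is indeed the predicted Gaussian in $\omega$, with width $1/\tau$ instead of $\tau$.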

The Fourier transform allows us to express the Dirac delta function in an elegant and useful way. We recall Eq. 4.23:
\[
f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} d\omega\, e^{i\omega t} \int_{-\infty}^{\infty} du\, f(u)\, e^{-i\omega u}.
\]

By combining the exponentials and exchanging the order of integration we obtain:
\[
f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} d\omega \int_{-\infty}^{\infty} du\, f(u)\, e^{i\omega(t-u)}
= \frac{1}{2\pi} \int_{-\infty}^{\infty} du \int_{-\infty}^{\infty} d\omega\, f(u)\, e^{i\omega(t-u)}
\]
\[
= \int_{-\infty}^{\infty} du\, f(u) \left[ \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i\omega(t-u)}\, d\omega \right],
\]

where the exchange of the order of integration has been made possible by Fubini's theorem. Recalling Eq. 4.15 we can immediately recognize that:

\[
\delta(t - u) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i\omega(t-u)}\, d\omega. \tag{4.26}
\]

Analogously to the Laplace transform, it is easy to calculate the Fourier transform of the derivative of a function. It is:

\[
\mathcal{F}\{f'(t)\} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f'(t)\, e^{-i\omega t}\, dt
= \frac{1}{\sqrt{2\pi}} \left[ f(t)\, e^{-i\omega t} \right]_{-\infty}^{\infty} - \frac{(-i\omega)}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt
= i\omega\, \mathcal{F}\{f(t)\}. \tag{4.27}
\]

Here we have assumed that the function $f(t)$ tends to zero for $t \to \pm\infty$ (as it should, since $\int_{-\infty}^{\infty} |f(t)|\, dt$ is finite). It is easy to iterate this procedure and show that:

\[
\mathcal{F}\{f^{(n)}(t)\} = (i\omega)^n\, \mathcal{F}\{f(t)\}. \tag{4.28}
\]

This relation can be used in some cases to solve ODEs, analogously to what is done by means of Laplace transforms: we transform both sides of an ODE, solve the resulting algebraic equation for $\mathcal{F}\{y(x)\}$ (the Fourier transform of the solution $y(x)$ we seek) and then invert the function we have obtained. However, for most practical cases it is more convenient to use Laplace-transform methods to solve ODEs. Fourier-transform methods can instead be extremely useful for solving partial differential equations (see Sect. 6.3.1).
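The derivative rule of Eq. 4.27 can be checked numerically as well. The sketch below (function names and the choice of a test Gaussian, whose derivative is known in closed form, are my own) compares a direct quadrature of $\mathcal{F}\{f'\}$ with $i\omega\,\mathcal{F}\{f\}$:

```python
import cmath
import math

def fourier_transform(f, omega, t_max=15.0, samples=40_000):
    """Midpoint-rule approximation of Eq. 4.24 on [-t_max, t_max]."""
    h = 2 * t_max / samples
    total = 0.0 + 0.0j
    for i in range(samples):
        t = -t_max + (i + 0.5) * h
        total += f(t) * cmath.exp(-1j * omega * t)
    return total * h / math.sqrt(2 * math.pi)

f = lambda t: math.exp(-t ** 2 / 2)        # Gaussian: vanishes at ±∞ as required
df = lambda t: -t * math.exp(-t ** 2 / 2)  # its exact derivative

omega = 1.3
lhs = fourier_transform(df, omega)                # F{f'}
rhs = 1j * omega * fourier_transform(f, omega)    # iω F{f}, Eq. 4.27
print(abs(lhs - rhs))
```

The two sides agree to quadrature accuracy, and iterating the argument gives Eq. 4.28 for higher derivatives.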
