Lectures on Partial Differential Equations

Math 316
Anmar Khadra
Transcribed by Kelvie Wong
kelvie@ieee.org
September 4, 2008
ii
Contents
Introduction v
1 Power Series Solutions 1
1.1 Legendre’s Equation . . . . . . . . . . . . . . . . . . . . . . . . . 2
2 Series Solutions 5
2.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 Frobenius’s Theorem . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.3 Another Example . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.3.1 r =
3
2
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3.2 r = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4 Frobenius’s Method . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4.1 Example of case 2 . . . . . . . . . . . . . . . . . . . . . . 10
2.5 Bessel’s Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3 Fourier Series 19
3.1 Periodic Functions . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.1.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.1.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.3 Uniform Convergence . . . . . . . . . . . . . . . . . . . . . . . . 24
3.3.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.4 Fourier’s Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.4.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.5 Extending Fourier series to 2L-periodic functions . . . . . . . . . 26
3.5.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.6 Half Expansions . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.6.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.7 Error Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.7.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4 Partial Differential Equations! 33
4.1 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.1.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
iii
iv CONTENTS
4.2 Initial Condtions (IC) and Boundary Conditions (BC) . . . . . . 34
4.3 Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.3.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.4 One Dimensional Wave Equations . . . . . . . . . . . . . . . . . 36
4.4.1 Solving the Wave Equation . . . . . . . . . . . . . . . . . 38
4.4.2 d’Alembert Equation . . . . . . . . . . . . . . . . . . . . . 41
4.4.3 Method of Characteristic Lines . . . . . . . . . . . . . . . 44
4.5 One Dimensional Heat Equation . . . . . . . . . . . . . . . . . . 47
4.5.1 Observations . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.5.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.5.3 Other Types of Boundary Conditions . . . . . . . . . . . 51
4.6 Two Dimensional Wave Equation . . . . . . . . . . . . . . . . . . 57
4.6.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.7 Two Dimensional Heat Equation . . . . . . . . . . . . . . . . . . 61
4.8 Dirichlet Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.8.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.9 Poisson’s Equation . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.9.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.10 Sturm Liouville Problems . . . . . . . . . . . . . . . . . . . . . . 71
4.10.1 Example: Bessel’s Equation . . . . . . . . . . . . . . . . . 71
4.10.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.10.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.11 The Parametrized Bessel’s Equation . . . . . . . . . . . . . . . . 75
4.12 Radially Symmetric Conditions . . . . . . . . . . . . . . . . . . . 77
4.12.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
4.13 Laplace’s Equation in Polar Coordinates . . . . . . . . . . . . . . 82
4.13.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.13.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.14 Non-Homogenous PDEs . . . . . . . . . . . . . . . . . . . . . . . 88
4.14.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Introduction
This document was typeset completely in L
A
T
E
X. The source is available upon
request. You are free to distribute this, provided the Terms and Conditions are
met.
Terms and Conditions
1. You may not laugh at the poorly drawn diagrams.
2. Try to report any errata.
v
vi INTRODUCTION
Chapter 1
Power Series Solutions
Find a power series solution up to the 6th power of the equation y
′′
+(cos x)y =
x
2
centered at x
0
= 0.
The solution takes a form of the following:
y =

n=0
c
n
x
n
where
y(x
0
) = c
0
, y

(x
0
) = c
1
, . . .
y
[n]
(x
0
) = n!c
n
Solve for y
′′
(x) from the DE:
y
′′
(x) = x
2
−(cos x)y
Now we substitute in x
0
= 0:
y
′′
(0) = 0
2
−y(0) = −c
0
= 2c
2
=⇒ c
2
= −
1
2
c
0
This gives us c
2
in terms of c
0
. We can repeat for higher orders.
_
y
[3]
(x) = 2x + (sin x)y −(cos x)y

_
x0=0
y
[3]
(0) = −c
1
= 6c
3
=⇒ c
3
= −
1
6
c
1
Continue until you generate all the coefficients from c
1
to c
6
, which corresponds
to the 6th power.
1
2 CHAPTER 1. POWER SERIES SOLUTIONS
c
4
=
1
12

1
12
c
0
, c
5
=
c
1
30
, c
6
= −
1
360

1
80
c
0
Substitute into y
y = c
0
+c
1
x +c
2
x
2
+· · · +c
6
x
6
+ higher order terms
= c
0
+c
1
x +
1
2
c
0
x
2

1
6
c
1
x
3
+
_
1
12
+
1
12
c
0
_
x
4
+
1
30
c
1
x
5

_
1
360
+
1
80
c
0
_
x
6
+. . .
= c
0
_
1 −
1
2
x
2
+
1
12
x
4

1
80
x
6
+. . .
_
+c
1
_
x −
1
6
x
3
+
1
30
x
5
+. . .
_
+
_
1
12
x
4
+
1
360
x
6
+. . .
_
The last term is the particular solution, and the former two are the solutions to
the homogenous equation.
1.1 Legendre’s Equation
Legendre’s equation is given by
(1 +x
2
)y
′′
−2xy

+m(m+ 1)y = 0, m ∈ Z
+
Notes
x
0
= 0
p(x) =
2x
1 −x
2
q(x) =
m(m+ 1)
1 −x
2
f(x) = y
′′
+p(x)y

+q(x)y = 0
This implies that x
0
is an ordinary point.
There are two singular points, x
1
= 1, x
2
= −1. We know that a power
series exists at this point (look this up).
The radius of convergence, centered at x
0
= 0 is the closest singular point
from x
0
, i.e. R = 1.
e.g. Let m = 2 and solve the DE. The de is given by
(1 −x
2
)y
′′
−2xy

+ 6y = 0
Seek a solution of the form
y =

n=0
c
n
x
n
.
1.1. LEGENDRE’S EQUATION 3
Thus,
(1 −x
2
)

n=0
n(n −1)c
n
x
n−2
−2x

n=1
nc
n
x
n−1
+ 6

n=0
c
n
x
n
= 0

n=2
n(n −1)c
n
x
n−1
+

n=2
−n(n −1)c
n
x
n
−2x

n=1
−2nc
n
x
n
+

n=0
6c
n
x
n
= 0
The first step is to match the powers (converting the above):

n=0
(n + 2)(n + 1)c
n
x
n
+

n=2
−n(n −1)c
n
x
n
+

n=1
−2nc
n
x
n
+

n=0
6c
n
x
n
= 0
The second step is to match the indices:
2c
2
+ 6c
3
x −2c
1
x + 6c
0
+ 6c
1
x+

n=2
[(n + 2)(n + 1)c
n+2
+ (6 −n(n + 1))c
n
] x
n
= 0
(2c
2
+ 6c
0
) + (6c
3
−2c
1
+ 6c
1
)x +· · · = 0
=⇒ 2c
2
+ 6c
0
= 0
=⇒ c
2
= −3c
0
6c
3
+ 4c
1
= 0 =⇒ c
3
= −
2
3
c
1
Now we try to find a recursive formula.
(n + 2)(n + 1)c
n+2
+ (6 = n(n + 1))c
n
= 0 n ≥ 2
c
n+2
=
−n(n + 1)
(n + 2)(n + 1)
c
n
n ≥ 2
c
0
and c
1
are arbitrary as usual, therefore we need to find c
4
→c
6
. c
4
= 0 =⇒
c
6
= 0, and so forth for all even subscripts beginning from 4. Substitute for the
other constants. c
4
=
3
10
c
3
= −
12
30
c
1
== −
1
5
c
1
and c
7
=
24
2
7!
c
1
Substituting:
y = c
0
+c
1
x. . .
= c
0
+c
1
x −3c
0
x
2

2
3
c
1
x
3

1
5
c
1
x
5
+. . .
= c
0
[1 −3x
2
] +c
1
[x −
2
3
x
3

1
5
x
5
+. . . ]
These are the two linearly independent solutions in terms of power series.
The c
0
-like terms are the Legendre polynomials. They come up every time
you solve Legendre equations.
4 CHAPTER 1. POWER SERIES SOLUTIONS
Chapter 2
Series Solutions
Power series look like :
DE : a
2
(x)y
′′
+a
1
(x)y

+a
0
(x)y = g(x)
Gives two linearly independent solutions:
y =

n=0
c
n
(x −x
0
)
n
For series solutions, consider the 2nd order homogenous linear differential
equation:
a
2
(x)y
′′
+a
1
(x)y

+a
0
(x)y = 0
Divide the entire equation by a
2
(x).
y
′′
+p(x)y

+q(x)y = 0
Suppose that x
0
is a singular point. We have two cases to consider:
1. A point x
0
is said to be a regular singlar point (RSP) of the above equation
if it is a singular point and the functions P(x) ≡ p(x)(x−x
0
) and Q(x) ≡
q(x)(x −x
0
)
2
are both analytic at x
0
;
2. If x
0
is not an RSP, then it is called an irregular singular point (ISP).
2.1 Example
Consider the DE given by
(x
2
−9)
2
y
′′
+ 2(x −3)y

+ 4y = 0
p(x) =
2(x −3)
(x
2
−9)
2
=
2
(x −3)(x + 3)
2
5
6 CHAPTER 2. SERIES SOLUTIONS
It is important to reduce the fractions to lowest terms.
q(x) =
4
(x
2
−9)
2
This says that x
0
= ±3 are singular points.
For x
0
= 3:
P(x) = p(x)(x −x
0
) =
2
(x −3)(x + 3)
2
(x −3) =
2
(x + 3)
2
which is analytical, even at x = −3
Q(x) = q(x)(x −x
0
)
2
=
4
(x + 3)
2
$
$
$
$
(x −3)
2
$
$
$
$
(x −3)
2
Therefore, x
0
= 3 is a regular singular point (RSP).
Now, for x
0
= −3:
P(x) = p(x)(x −x
0
) =
2
(x −3)(x + 3)
2
(x + 3) =
2
(x −3)(x + 3)
Which is not analytical; this implies that x
0
= 3 is an ISP.
The idea is to find series solutions centered at a RSP.
2.2 Frobenius’s Theorem
Suppose that x = x
0
is an RSP of the above standard DE, then there exists at
least one solution of the form:
y = (x −x
0
)
r

n=0
c
n
(x −x
0
)
n
=

n=0
c
n
(x −x
0
)
n+r
(2.1)
where r is to be determined. The series will converge at least over the interval
0 < x −x
0
< R.
Remark If Frobenius’s method generates 2 solutions in the form of a series,
then they are linearly independent. The general solution can then be generated,
though this is not always the case, as it sometimes generates only one solution.
In that case, we have to resort to reduction of order to find the other linearly
independent solution.
2.3 Another Example
Find a series solution for the DE
2.3. ANOTHER EXAMPLE 7
2xy
′′
−y

+ 2y = 0
p(x) = −
1
2x
q(x) =
1
x
Then x
0
= 0 is a SP.
P(x) = p(x)x =
1
2
Q(x) = q(x)x
2
= x
Both of these are analytic, therefore x
0
= 0 is an RSP, and we may apply
Frobenius’s Theorem.
Now we seek a solution of the form given by (2.1).
y = x
r

n=0
c
n
x
n
=

n=0
c
n
x
n+r
y

=

n=0
(n +r)c
n
x
n+r−1
y
′′
=

n=0
(n +r)(n +r −1)c
n
x
n+r−2
Remark The index here stays at n = 0.
Now we substitute into the DE:
2

n=0
(n +r)(n +r −1)c
n
x
n+r−2

n=0
(n +r)c
n
x
n+r−1
+ 2

n=0
c
n
x
n+r
= 0
Rearrange to

n=0
2(n +r)(n +r −1)c
n
x
n+r−2
+

n=0
−(n +r)c
n
x
n+r−1
+

n=0
2c
n
x
n+r
= 0
And now we take out the x
r
’s as a common factor
x
r
_

n=0
2(n +r)(n +r −1)c
n
x
n−1
+

n=0
−(n +r)c
n
x
n−1
+

n=0
2c
n
x
n
_
= 0
x
r
_

n=0
(n +r)(2n + 2r −3)
n
x
n−1
+

n=0
2c
n
x
n
_
= 0
8 CHAPTER 2. SERIES SOLUTIONS
Now we match the powers.
x
r
_

n=−1
(n +r + 1)(2n + 2r −1)c
n+1
x
n
+

n=0
2c
n
x
n
_
= 0
Next, we match the indices:
x
r
_
r(2r −3)c
0
x
−1
+

n=0
[(n +r + 1)(2n + 2r −1)c
n+1
+ 2c
n
] x
n
_
= 0
This implies that
r(2r −3)c
0
= 0
and
(n +r + 1)(2n + 2r −1)c
n+1
+ 2c
n
= 0, n ≥ 0
From the former equation, it is apparent that one of the terms has to be
zero. Plugging zero into c
0
gives us the trivial solution.
The resulting equation r(2r −3) = 0 is known as the indicial equation, and
the roots of this equation are called the indicial roots, where r = 0,
3
2
.
Plug the roots into the recursive formula (the latter of the above two).
2.3.1 r =
3
2
This gives us, for r =
3
2
:
c
n+1
=
−2
(n +
5
2
)(2n + 2)
=
−2
(2n + 5)(n + 1)
We continue finding the constants; leaving c
0
arbitrary (we know this from
the recursive formula):
c
1
= −
2
5
c
0
c
2
=
2
35
c
0
c
3
= −
4
945
c
0
c
4
=
2
10395
c
0
2.4. FROBENIUS’S METHOD 9
Substituting in r:
y = x
3
2

n=0
c
n
x
n
= x
3
2
_
c
0

2
5
c
0
x +
2
35
c
0
x
2
+. . .
_
= c
0
x
3
2

2
5
c
0
x
5
2
+
2
35
c
0
x
7
2
+. . .
= c
0
_
x
3
2

2
5
x
5
2
+
2
35
x
7
2
+. . .
_
The bracketed expression is one of the linearly independent solutions
y
1
= c
0
_
x
3
2

2
5
x
5
2
+
2
35
x
7
2
+. . .
_
2.3.2 r = 0
Second root r
2
= 0, the recursive formula looks like:
c
n+1
=
−2
(n + 1)(2n −1)
c
n
∀{n ≥ 0}
=⇒ y
2
= 1 + 2x −2
2
+
4
9
x
3
+
2
45
x
4
+
4
1575
x
5
+. . .
Both solutions are linearly independent, and linear combinations of the two
form the general solution.
Remark This is known as a power series due to the fact that they are not
whole number roots, and may be fractions.
Remark Notice that (r
1
−r
2
) =
3
2
/ ∈ Z
2.4 Frobenius’s Method
Let us see what we can draw from the previous example.
10 CHAPTER 2. SERIES SOLUTIONS
Result 2.1 Let r
1
> r
2
.
1. If (r
1
− r
2
) / ∈ Z, then the Frobenius method will generate two linearly
independent solutions y
1
, y
2
, and the task is done (finding the general
solution, that is).
2. If (r
1
− r
2
) ∈ Z, you either have two linearly independent solutions y
1
,
y
2
, and all is well, or you have only one generated solution via Frobe-
nius’ method, y
1
. By reduction of order, the second linearly independent
solution is given by:
y
2
= cy
1
ln x +

n=0
b
n
x
n+r2
where r
2
is the smaller indicial root.
3. If r
1
= r
2
= r, then one solution, y
1
is generated. In this case, we apply
reduction of order to generate the second linearly independent solution y
2
,
given by:
y
2
= y
1
ln x +

n=1
b
n
x
n+r
2.4.1 Example of case 2
Find a series solution to the equation
xy
′′
−xy

+y = 0
x
0
= 0 is a regular singular point (check). Seek a solution of the form
y = x
r

n=0
c
n
x
n
=

n=0
c
n
x
n+r
y

=

n=0
(n +r)c
n
x
n+r−1
y
′′
=

n=0
(n +r −1)(n +r)c
n
x
n+r−2
Substituting back into the DE:

n=0
(n +r)(n +r −1)c
n
x
n+r−1
+

n=0
−(n +r)c
n
x
n+r
+

n=0
c
n
x
n+r
= 0
2.4. FROBENIUS’S METHOD 11
Factoring out x
r
, and adding the right two terms together:
x
r
_

n=0
(n +r −1)(n +r)c
n
x
n−1
+

n=0
(1 −n −r)c
n
x
n
_
= 0
=⇒ x
r
_

n=−1
(n +r + 1)(n +r)c
n+1
x
n
+

n=0
(1 −n −r)c
n
x
n
_
= 0
Now we have (matching the indices):
x
r
_
r(r −1)c
0
x
−1
+

n=0
[(n +r + 1)(n +r)c
n+1
+ (1 −n −r)c
n
] x
n
_
= 0
This gives us the equations:
r(r −1)c
0
= 0
(n +r + 1)(n +r)c
n+1
+ (1 −n −r)c
n
= 0
n ≥ 0
Which are the indicial equation and recursive formula, respectively.
The indicial equation gives us the roots:
r
1
= 1
r
2
= 0
the difference of the two is an integer; this gives us case 2 (above).
r = 1 :
c
n+1
=
n
(n + 2)(n + 1)
c
n
∀ {n ≥ 0}
c
0
= 0 as it gives us the trivial solution.
y = x
r
_

n=0
c
n
x
n
_
= c
0
x
=⇒ y
1
= x
r = 0 :
c
n+1
=
n −1
(n + 1)n
n ≥ 0
c
1
is undefined, therefore c
0
= 0 and is not arbitrary, and c
1
is the arbitrary
parameter.
12 CHAPTER 2. SERIES SOLUTIONS
Remark Sometimes you have to go to c
2
or higher.
Now, n must start at 1.
c
n+1
=
n −1
(n + 1)n
n ≥ 1
Evaluating the coefficients, we find
c
2
= c
3
= c
4
= · · · = 0 =⇒ y = x
r
_

n=0
c
n
x
r
_
= c
1
x = y
2
= x = y
1
Frobenius’s method only generated one solution, and we apply reduction of
order.
We were seeking a solution of the form:
y = Cy
1
ln x +

n=0
b
n
x
n+r2
= Cy
1
ln x +

n=0
b
n
x
n
Now we differentiate
y

= Cy

1
ln x +C
y
1
x
+

n=1
nb
n
x
n−1
y
′′
= Cy
′′
1
ln x + 2C
y
1
x
−C
y
1
x
2
+

n=2
n(n −1)b
n
x
n−2
Now we substitute it back into the DE xy
′′
−xy

+y = 0:
Cxy
′′
1
ln x + 2Cy

1
−C
y
1
x
+

n=2
n(n −1)b
n
x
n−1
−Cxy

1
ln x −Cy
1
+

n=1
−nb
n
x
n
+Cy
1
ln x +

n=0
b
n
x
n
= 0
We can factor out C ln x:
C ln x[xy
′′
1
−xy

1
+y
1
] +

n=2
+

n=1
+

n=0
where the Σs represent the three series above. Note that the expression inside
the bracket is equal to zero (from the DE), leaving us with:
= C
y
1
x
−2Cy

1
+Cy
1
=⇒

n=2
n(n −1)b
n
x
n−1
+

n=1
nb
n
x
n
+

n=0
b
n
x
n
= −C +Cx
y
1
= x
2.5. BESSEL’S EQUATION 13
Now we match the powers and indices (exercise).
b
0
+

n=1
{n(n + 1)b
n+1
+ (1 −n)b
n
} x
n
= −C +Cx
Equating both sides, we get:
b
0
= −C
2b
2
= C
n(n + 1)b
n+1
+ (1 −n)b
n
= 0∀{n ≥ 2}
which gives us our recursive formula:
b
n+1
=
n −1
n(n + 1)
b
n
n ≥ 2
This leaves b
1
arbitrary.
We can now pump out all of the other coefficients.
b
3
=
1
2 · 3!
C, b
4
=
1
3 · 4!
C, . . .
y = Cy
1
ln x +b
0
+b
1
x +b
2
x
2
+. . .
= Cxln x −C +b
1
x +
1
1 · 2!
Cx
2
+
1
2 · 3!
x
3
+. . .
= b
1
x +C
_
xln x −1 +

n=2
x
n
(n −1)n!
_
Which gives us us our two linearly independent solutions, and thus our
general solution.
2.5 Bessel’s Equation
Bessel’s Equation is given by
x
2
y
′′
+xy

+ (x
2
−ν
2
)y = 0
Where ν ≥ 0
They are called Bessel’s equations of order ν. We will find a series solution to
this equation centered at x
0
= 0.
x
0
= 0 is a regular singular point (exercise).
We now apply Frobenius’ method; seek a solution of the form:
y =

n=0
c
n
x
n+r
= x
r

n=0
c
n
x
n
14 CHAPTER 2. SERIES SOLUTIONS
which gives us:
y

=

n=0
(n +r)c
n
x
n+r−1
y
′′
=

n=0
(n +r)(n +r −1)c
n
x
n+r−2
substituting it into the DE:
0 =

n=0
(n +r)(n +r −1)c
n
x
n+r
+

n=0
(n +r)c
n
x
n+r
+

n=0
c
n
x
n+r+2
+

n=0
−ν
2
c
n
x
n+r
We now factor out x
r
:
x
r
_

n=0
_
(n +r)
2
−ν
2
¸
c
n
x
n
+

n=0
c
n
x
n+2
_
= 0
x
r
_

n=−2
_
(n +r + 2)
2
−ν
2
¸
c
n+2
x
n+2
+

n=0
c
n
x
n+2
_
= 0
Matching indices:
x
r
_
(r
2
−ν
2
)c
0
+
_
(r + 1)
2
−ν
2
¸
c
1
x +

n=0
__
(n +r + 2)
2
−ν
2
¸
c
n+2
+c
n
¸
x
n+2
_
= 0
We choose the lowest order polynomial as the indicial equation, by conven-
tion:
r
2
−ν
2
= 0
r = ±ν
This implies:
_
(r + 1)
2
−ν
2
¸
c
1
= 0
=⇒ c
1
= 0(∀r)
c
n+2
=
−1
(n +r + 2)
2
−ν
2
c
n
, n ≥ 0
For r
1
= ν
c
n+2
=
−1
(n +ν + 2)
2
−ν
2
c
n
This implies that every odd subscript is zero.
2.5. BESSEL’S EQUATION 15
Let 2m = n + 2, m ≥ 1:
c
2m
=
−1
(2m+ν)
2
−ν
2
c
2m−2
m ≥ 1
Where m and n are dummy variables, so we can interchange it (for convenience)
c
2n
=
−1
(2n +ν)
2
−ν
2
c
2n−2
, n ≥ 1
=
−1
2
2
n
2
+ 2
2

c
2n−2
, n ≥ 1
We left off last time eliminating all of the odd subscripts. We are now looking
for c
2
and the other even subscripts:
c
2
=
−1
2
2
(1 +ν)
c
0
c
4
=
−1
2
2
· 2(2 +ν)
c
2
=
1
2
4
· 1 · 2(1 +ν)(2 +ν)
c
0
c
6
=
−1
2
6
· 1 · 2 · 3(1 +ν)(2 +ν)(3 +ν)
c
0
=⇒ c
2n
=
(−1)
n
2
2n
· n!

n
m=1
(m+ν)
To simplify, set:
c
0
=
1
2
ν
Γ(1 +ν)
(since it is arbitrary), where
Γ(α) =
_

0
t
α−1
e
−t
dt
which has a property:
Γ(1 +α) = αΓ(α) (2.2)
When we let α = n, it gives us:
Γ(1 +n) = nΓ(n) = n(n −1)Γ(n −1) = n!
which generates the n! in the expression also.
Thus:
c
2n
=
(−1)
n
2
2n
n!(1 +ν) . . . (n +ν)
·
1
2
ν
Γ(1 +ν)
=
(−1)
n
2
2n+ν
n!Γ(n +ν + 1)
, n ≥ 1
16 CHAPTER 2. SERIES SOLUTIONS
Plugging in n = 0, we still get c
0
as defined above. This implies that n starts
from 0.
Thus:
y
1
= x
ν
_

n=0
c
n
x
n
_
= x
r
(c
0
+c
1
x +c
2
x
2
+. . . )
= x
ν
_

n=0
c
2n
x
2n
_
= x
ν
_

n=0
(−1)
n
2
2n+ν
n!Γ(n +ν + 1)
x
2n
_
=

n=0
(−1)
n
n!Γ(n +ν + 1)
_
x
2
_
2n+ν
≡ J
ν
(x)
r
2
= −ν (exercise):
J
−ν
(x) =

n=0
(−1)
n
n!Γ(n −ν + 1)
_
x
2
_
2n−ν
Remark 1 These functions J
ν
(x) and J
−ν
(x) are called Bessel functions of
the first kind.
Remark 2 If ν is not an integer, then J
ν
and J
−ν
are linearly independent.
The general solution of the DE is then
y = AJ
n
(x) +BJ
−ν
(x)
However, if ν is an integer (i.e. ν = m), then
J
−m
(x) = (−1)
m
J
m
(x)
Which implies that J
−m
and J
m
are linearly dependent (one is a multiple of
the other). We therefore need a second linearly independent solution for the
general solution.
You have to go back to cases 2 and 3 to find this second solution (i.e. re-
duction of order).
Remark 3 Let ν be a non-integer. Define the function:
Y
ν
(x) =
J
ν
(x) cos(νπ) −J
−ν
(x)
sin(νπ)
This function is another Bessel function (solution to Bessel’s equation), lin-
early independent of J
ν
. This implies that it is linearly dependent to J
−ν
because the solution space of a second order differential equation is two dimen-
sional.
2.5. BESSEL’S EQUATION 17
With ν as an integer, i.e. ν = m,
Y
m
(x) = lim
ν→m
Y
ν
(x)
which is linearly independent of J
m
.
Yielding the general solution:
y = AJ
m
(x) +BY
m
(x)
or AJ
−m
(x) +BY
m
(x)
The latter equation holds because J
m
and J
−m
are linearly dependent. Y
ν
is
known as a Bessel function of the second kind.
18 CHAPTER 2. SERIES SOLUTIONS
Chapter 3
Fourier Series
Fourier series are expressions involving sin and cos of x, rather than powers of
x. The goal is to approximate functions using both sin and cos x.
3.1 Periodic Functions
A function f is called T-periodic (or periodic of period T) if
f(x +T) = f(x), x ∈ R (3.1)
T is called the fundamental period if T is the smallest positive number satisfying
this equation.
For example, h(x) = sin(2x) is a periodic function that is π periodic, as well
as for every multiple of π, and π is therefore the fundamental period.
Now, consider the sawtooth function:
f(x) = x ∧ f(x + 3)
f(x) is 3-periodic. The above representation is not unique.
f(x) =
_
x + 3, if −1 ≤ x < 0
x, if 0 ≤ x < 2
_
∧ f(x + 3) = f(x)
Notice that at points c = 0, ±3, ±6 . . . , we have finite jumps (discontinuities).
Observe that
f(c

) = lim
x→c

f(x) = 3
and
f(c
+
) = lim
x→c
+
f(x) = 0
Definition 3.1 A function f is said to be piecewise continuous on an interval
[a, b], if there are at most a finite number of points x
k
, k = 1, 2, . . . , n, (x
k−1
, x
k
)
at which f has finite discontinuities, (i.e. f has limits at x
k
) and is continuous
on each open interval (t
k
, t
k+1
), k = 1, 2, . . . , n −1.
19
20 CHAPTER 3. FOURIER SERIES
Result 3.1 A T-periodic function f is piecewise continuous in R if f is piece-
wise continuous on every interval [a, b] ∈ R
3.1.1 Example
Define the function [x] as the greatest integer less than or equal to x:
Examples:
[1.2] = 1
[−0.3] = −1
[0.3] = 0
[2] = 2
This is the step function. Now consider g(x) = x −[x]
g(x + 1) = x + 1 −[x + 1] = x + 1 −([x] + 1)
= x −[x]
Which makes g(x) a 1-periodic function, and as well, this function is piece-
wise continuous.
Definition 3.2 A function f is said to be piecewise differentiable if f and f

are piecewise continuous, though the statement is redundant in that if f is
continuous, f

is automatically piecewise continuous.
For example,
f(x) = x ∧ f(x + 3) = r
f

(x) = 1∀x = 3n, n ∈ Z
the latter term makes it piecewise continuous, and f is piecewise differentiable.
3.1.2 Properties
Let f be T-periodic.
1.
_
a+T
a
f(x)dx =
_
b+T
b
f(x)dx ∀a, b
2. Definition 3.3 The two functions f ∧ g are orthogonal on [a, b] if
_
b
a
f(x)g(x)dx = 0
3.1. PERIODIC FUNCTIONS 21
Applications Let m, n be nonnegative integers
(a)
∀m, n =⇒
_
R
−R
cos(mx) sin(mx)dx
cos is an even function, sin is an odd function, and therefore the whole
integral is odd, and over a symmetric interval, the integral therefore
evaluates to 0.
∀m, n =⇒
_
R
−R
cos(mx) sin(nx)dx = 0
(b)
∀m = n =⇒
_
π
−π
cos(mx) cos(nx)dx
= 2
_
π
0
cos(mx) cos(nx)dx
Using the identity
cos a cos b =
1
2
[cos(a +b) −cos(a −b)]
We find that cos mx ∧ cos nx are orthogonal.
(c)
∀m = n =⇒
_
π
−π
sin(mx) sin(nx)dx = 0
=⇒ sin(mx) ⊥ sin(nx)
(d)
_
π
−π
cos
2
(mx)dx = π =
_
π
−π
sin
2
(mx)dx
Apply the double angle formula to generate the answer π.
Recall:
1. If f(−x) = f(x), ∀x =⇒ f is an even function ≡ that f is symmemtric
around the y-axis.
2. If f(−x) = −f(x), ∀x =⇒ f is an odd function ≡ that f is symmetric
about the origin.
22 CHAPTER 3. FOURIER SERIES
3. Taylor series approximate functions via:
f(x) =

n=0
f
[n]
(x
0
)
n!
(x −x
0
)
n
Now how about using other types of series, such as Fourier series:
f(x) = a
0
+

n=1
[a
n
cos(nx) +b
n
sin(nx)] (3.2)
where f(x) is taken to be 2π-periodic. a
0
, a
n
, b
n
, n ≥ 1 are to be determined.
Notice that :
_
π
−π
f(x)dx =
_
π
−π
a
0
dx +

n=1
__
π
−π
a
n
cos(nx)dx +
$
$
$
$
$
$
$
$
_
π
−π
b
n
sin(nx)dx
_
=
_
π
−π
a
0
dx +

n=1
_
$
$
$
$
$
$
$
$$
2
_
π
0
a
n
cos(nx)dx
_
= 2πa
0
=⇒ a
0
=
1

_
π
−π
f(x)dx
a
0
=
1

_

0
f(x)dx (3.3)
Multiply (3.2) by cos mx, ∀m and integrate:
_
π
−π
f(x) cos(mx)dx
=
$
$
$
$
$
$
$
$
_
π
−π
a
0
cos(nx)dx +

n=1
_
_
π
−π
a
n
cos(nx) cos(mx)dx +
$
$
$
$
$
$
$
$
$
$
$$ _
π
−π
b
n
sin(nx) cos(mx)dx
_
=

n=1
__
π
−π
a
n
cos(nx) cos(mx)dx
_
= πa
m
=⇒ a
m
=
1
π
_
π
−π
f(x) cos(mx)dx
=⇒ a
n
=
1
π
_
π
−π
f(x) cos(nx)dx =
1
π
_

0
f(x) cos(nx)dx (3.4)
To find the b
n
s we can multiply (3.2) by sin mx similarly (excercise).
b
n
=
1
π
_
π
−π
f(x) sin(nx)dx =
1
π
_

0
f(x) sin(nx)dx (3.5)
Memorize these formulae.
3.2. EXAMPLE 23
3.2 Example
Consider the function f(x) = |x| if −π ≤ x ≤ π and f(x +2π) = f(x), and find
its Fourier series.
f is piecewise continuous, and as well, piecewise differentiable.
a
0
=
1

_
π
−π
f(x)dx
From above. f is even:
a
0
=
1
π
_
π
0
f(x)dx =
π
2
Continuing:
1
π
_
π
−π
f(x) cos(nx)dx =
2
π
_
π
0
f(x) cos(nx)dx =
2
π
_
π
0
xcos(nx)dx
=
2
π
_
x
n
sin(nx) +
1
n
2
cos(nx)
_
π
0
Integration by parts
=
2
π
_
cos nπ
n
2

1
n
2
_
=
2
π
_
(−1)
n
n
2

1
n
2
_
=
_
0, if n is even
−4
n
2
π
, if n is odd
_
=⇒ a
2n+1
=
−4
(2n + 1)
2
π
, n ≥ 0
From the above equations:
b
n
=
1
π
_
π
−π
f(x) sin(nx)dx = 0, ∀n
Observation
1. If f is even =⇒ b
n
= 0.∀n
2. If f is odd, =⇒ a
n
= a
0
= 0, ∀n
Thus:
f(x) =
π
2
+

n=0
−4
(2n + 1)
2
π
cos [(2n + 1)x]
which is the Fourier series for f(x) = |x|
24 CHAPTER 3. FOURIER SERIES
3.3 Uniform Convergence
If we let
S
N
=
π
2

4
π
N

n=0
1
(2n + 1)
2
cos [(2n + 1)x] , ∀{x ∈ R}
lim
N→∞
S
N
= S

= f(x)
then it is known as uniform convergence.
3.3.1 Example
Find the Fourier series for the function:
g(x) =
_
−c if −π ≤ x ≤ 0
c if 0 ≤ x ≤ π
_
; g(x + 2π) = g(x); c > 0
which produces the square wave function, which is an odd, piecewise differen-
tiable function.
If p = kπ then k ∈ Z, g(p) = ±C. Because g is an odd function, a
0
= a
1
=
· · · = a
n
= 0∀{n ≥ 1}
Recall:
b
n
=
1
π
_
π
−π
g(x) sin(nx)dx
=
2
π
_
π
0
g(x) sin(nx)dx
=
2c
π
_
π
0
sin(nx)dx
=
2c
π
_
−1
n
cos(nx)
_
π
0
=
2c
π
_
(−1)
n+1
n
+
1
n
_
this implies that all even subscripts are zero, thus we only deal with the odd
subscripts, i.e. b
2n−1
to generate the odd subscripts.
b
2n−1
=
4c
π(2n −1)
n ≥ 1
For the Fourier series :
S

= a
0

n=1
(a
n
cos(nx) +b
n
sin(nx))
=

n=1
4c
π(2n −1)
sin [(2n −1)x]
3.4. FOURIER’S THEOREM 25
because there are points of discontinuity, S

= g(x), and S

(0) = 0. Notice at
p, we have
S

(p) = 0 =
g(p
+
) +g(p

)
2
i.e., 0 represetnts the midpoint between c and −c.
Suppose that we define g(p) = 0∀{p = kπ}, then we have uniform conver-
gence for g(x), since S

= g(x)∀x.
Observation If p is a point of discontinuity, then the Fourier series will con-
verge to the mean of the jump, in other words,
S

(p) =
f(p
+
) +f(p

)
2
where p is a point of discontinuity. This phenomenon is called the Gibbs phe-
nomenon.
3.4 Fourier’s Theorem
Suppose that
1. f is a 2π-periodic function;
2. f is piecewise smooth on each interval of length 2π,
then at all points of continuity x, the Fourier series evaluated at x converges
to f(x) uniformly. At the finitely many points of discontinuity P
k
, the Fourier
series evaluated at P
k
converges to
f(P
+
k
)+f(P

k
)
2
.
Thus if f is continuous for all x, then the Fourier series converges uniformly
for all x.
3.4.1 Example
Let:
h(x) =
_
−1
1
2
x −c if −π ≤ x ≤ 0
1
2
x +c if 0 ≤ x ≤ π
_
, h(x + 2π) = h(x), c > 0.
Observe:
h(x) =
1
2
f(x) +g(x)
which implies that the Fourier series of h(x) is going to be the linearl combina-
tion generated by the Fourier series of f and g.
S
N
=
1
2
_
π
2

4
π
N

n=0
1
(2n + 1)
2
cos [(2n + 1)x]
_
+

n=1
4c
π(2n −1)
sin [(2n + 1)x]
=
π
4
+

n=0
_
−2
π(2n + 1)
2
cos [(2n + 1)x] +
4c
π(2n + 1)
sin [(2n + 1)x]
_
26 CHAPTER 3. FOURIER SERIES
3.5 Extending Fourier series to 2L-periodic func-
tions
Let a function f(x) be a piecewise smooth 2L-periodic function, i.e.
f(x + 2L) = f(x)
Define g(x) as follows:
g(x) = f
_
L
π
x
_
=⇒ g(x + 2π) = f
_
L
π
(x + 2π)
_
= f
_
L
π
x + 2L
_
= f
_
L
π
x
_
= g(x)
This implies that g(x) is a 2π-periodic piecewise smooth function, which implies
that the Fourier theorem applies.
This further implies that:
g(x) = a
0

n=1
(n
n
cos(nx) +b
n
sin(nx))
= f(
L
π
x) =⇒ Let ¯ x =
L
π
x
=⇒ x =
π
L
¯ x
=⇒ f(¯ x) = a
0
+

n=1
_
a
n
cos
_

L
¯ x
_
+b
n
sin
_

L
¯ x
__
This implies that the fourier series of f is given by:
f(x) = a
0
+

n=1
_
a
n
cos
_

L
x
_
+b
n
sin
_

L
x
__
The Fourier coefficients are given by the standard equations in Sections 3.1,
and 3.2:
a
0
=
1
2L
_
L
−L
f(x)dx =
1
2L
_
2L
0
f(x)dx (3.6)
a
n
=
1
L
_
L
−L
f(x) cos
_

L
x
_
dx =
1
L
_
2L
0
f(x) cos
_

L
x
_
dx (3.7)
b
n
=
1
L
_
L
−L
f(x) sin
_

L
x
_
dx =
1
L
_
2L
0
f(x) sin
_

L
x
_
dx (3.8)
n ≥ 1 (3.9)
Memorize these formulae, they are more general.
3.6. HALF EXPANSIONS 27
3.5.1 Example
Consider the function
f(x) = x
2
, −1 ≤ x < 1
and f(x + 2) = f(x). Find its Fourier series.
f is 2-periodic, =⇒ L = 1, and f is continuous for all x; this implies
that the Fourier series converges uniformly for all x. f is symmetric around the
y-axis, therefore f is an even function, therfore:
b
n
= 0, ∀ {n ≥ 1}
Find a
0
, a
n
, n ≥ 1. By Equation (3.6):
a
0
=
1
2
_
1
−1
x
2
dx =
1
3
By (3.7):
a
n
=
_
1
−1
x
2
cos(nπx)dx
a
n
= 2
_
1
0
x
2
cos(nπx)dx
integrating by parts:
a
n
= 2
_
x
2

sin(nπx) +
2x
n
2
π
2
cos(nπx) −
2
n
3
π
3
sin(nπx)
_
1
0
= 2
_
(−1)
n
2
n
2
π
2
_
=
(−1)
n
4
n
2
π
2
f(x) =
1
3
+
4
π
2

n=1
_
(−1)
n
n
2
cos(nπx)
_
3.6 Half Expansions
Suppose that f is defined over a finite interval (0, L). We need to find the
Fourier series of f on (0, L). One of the chief requirements for Fourier series
is the requirement of the continuity of the function. To remedy this, let f be
a piecewise smooth function, effectively extending the domain to (−∞, ∞), i.e.
x ∈ R.
We can do this in two different ways:
1. Half range cosine series expansion: Expanding the function to make it
even. This can be done by mirroring the function along y = nL, n ∈ Z,
thereby making it symmetric around the y-axis, and 2L-periodic. Call
this new function f
1
(x).
28 CHAPTER 3. FOURIER SERIES
f
1
(x) = f(x), x ∈ (0, L)
f
1
(x) = f(−x), x ∈ (−L, 0)
It is an even function.
f
1
(x + 2L) = f
1
(x)
And therefore the Fourier theorem is applicable.
f
1
(x) = a
0
+

n=1
a
n
cos
_

L
x
_
(3.10)
a
0
=
1
2L
_
L
−L
f
1
(x)dx =
1
L
_
L
0
f(x)dx (3.11)
a
n
=
1
L
_
L
−L
f
1
(x) cos
_

L
x
_
dx =
2
L
_
L
0
f(x) cos
_

L
x
_
dx (3.12)
This implies that the Fourier series of f is
f(x) = a
0
+

n=1
a
n
cos
_

L
x
_
, x ∈ (0, L)
except at points of discontinuity.
2. Half range sine series expansion: We shall construct an odd 2L-perodic
piecewise smooth function f
2
(x) which is an extension for f. It is an odd
function, so the graph is reflected along the origin.
f
2
(x) = f(x), x ∈ (0, L)
f
2
(x) = −f(−x), x ∈ (−L, 0)
f
2
(x + 2L) = f
2
(x)
To minimize the effect of the Gibbs Phemenon at the boundaries of dis-
continuity, I may set a point to the average of the two values at the
boundaries.
Now we are ready to find the Fourier series of f
2
.
f
2
(x) =

n=0
b
n
sin
_

L
X
_
(3.13)
b
n
=
2
L
_
L
0
f(x) sin
_

L
x
_
dx (3.14)
because we know that in the interval (0, L), f
2
(x) = f(x) This in turn
implies:
f(x) =

n=1
b
n
sin
_

L
x
_
∀x ∈ (0, L)
except for all points of discontinuity; this is implied.
3.6. HALF EXPANSIONS 29
3.6.1 Example
Consider the function:
f(x) = π, x ∈ (0, 1)
Find the two half range expansions.
1. Find f
1
(x) (even extension).
f
1
(x) = π, ∀x
Now we can find a
0
, a
n
, n ≥ 1.
a
0
=
_
1
0
f(x)dx = π
_
1
0
dx = π
a
n
= 2
_
1
0
f(x) cos(nπx)dx = 2π
_
1
0
cos(nπx)dx = 0, ∀n ≥ 1
This implies that the Fourier series for π is, indeed, π. Amazing.
2. Find f
2
(x) (odd extension).
f
2
(x) becomes the square wave function that is 2-periodic. To minimize
Gibbs phenomenon, we set the discontinuities of this function to be zero.
f
2
(x) =
_
_
_
π x ∈ (0, 1)
−π x ∈ (−1, 0)
0 x = k, k ∈ Z
f
2
(x + 2) = f
2
(x)
Now we find the Fourier series:
b
n
= 2
_
1
0
f(x) sin(nπx)dx = 2π
_
1
0
sin(nπx)dx =
2(−1)
n+1
n
+
2
n
this implies that the even ns are zero for all n ≥ 1, and:
b
2n+1
=
4
2n + 1
, ∀n ≥ 0
and
f(x) = 4

n=0
1
2n + 1
sin [(2n + 1)πx] , ∀x ∈ (0, 1)
30 CHAPTER 3. FOURIER SERIES
3.7 Error Analysis
Suppose that f is a 2L-periodic piecewise smooth (differentiable) function. The
Fourier series of f is given by:
f(x) = a
0
+

n=1
_
a
n
cos
_

L
x
_
+b
n
sin
_

L
x
__
Consider the partial sum given by:
S
N
= a
0
+
N

n=1
_
a
n
cos
_

L
x
_
+b
n
sin
_

L
x
__
Consider the error given by:
|f(x) −S
N
|
will not help much becaues of Gibbs phenomenon. At the points of discontinuity,
this error would be huge.
We shall now consider a different type of error, given by:
E
N
=
1
2L
_
L
−L
(f(x) −S
N
)
2
dx (3.15)
This is called the mean-square error.
Observation E
N
≥ 0 and will overcome the Gibbs phenomenon.
We want E
N
→ 0 as N → ∞; if this occurs, then S
N
is said to converge,
and therefore approximate, f in the mean.
The question is: when does this convergence occur?
Definition 3.4 A function f is said to be square integrable on [a, b] if:
_
b
a
f
2
(x)dx < ∞ (3.16)
i.e., the integral is finite. The set of all functions satisfying (3.16) is called the
class of integrable functions over [a, b]. The class of piecewise smooth functions
is a subset of the set of square integrable functions.
Theorem 3.1 If f is a square integrable function on [−L, L], then the Nth
partial sum S
N
approximates f in the mean, i.e.:
lim
N→∞
E
N
= 0
3.7. ERROR ANALYSIS 31
Observation Suppose that f is continuous, then the previous error
|f(x) −S
N
|
will generate “good” results because Gibbs phenomenon will not occur.
Theorem 3.2 If F is square integrable on [−L, L], then
E
N
=
1
2L
_
L
−L
f
2
(x)dx −a
0
2

1
2
N

n=1
_
a
n
2
+b
n
2
_
(3.17)
Corallary 1 Since E
N
≥ 0:
1
2L
_
L
−L
f
2
(x)dx ≥ a
2
0
+
1
2
N

n=1
_
a
2
n
+b
2
n
_
, ∀N ≥ 1 (3.18)
Corallary 2 Since lim
N→∞
E
N
= 0, then:
1
2L
_
L
−L
f
2
(x)dx = a
2
0
+
1
2

n=1
_
a
2
n
+b
2
n
_
(3.19)
Inequality (3.18) is called Bessel’s inequality and equality (3.19) is called Par-
seval’s identity. Bessel’s inequality is very important, but not very useful in this
course.
3.7.1 Example
Consider the function:
f(x) =
_
1 0 < x < L
−1 −L < x < 0
f(x) = f(x + 2L)
Another square wave function. f is odd:
=⇒ a
0
= a
n
= 0∀n ≥ 1
b
n
=
2

_
(−1)
n+1
+ 1
¸
, n ≥ 1
The Fourier series of f is given by:
f(x) =
2
π

n=1
(−1)
n+1
+ 1
n
sin
_

L
x
_
For even ns, the coefficients are zero.
32 CHAPTER 3. FOURIER SERIES
Let’s find E
N
:
E
N
=
1
2L
_
L
−L
f
2
(x)dx −a
0
2

1
2
N

n=1
_
a
2
n
+b
2
n
_
=
1
2L
_
L
−L
dx −
1
2
N

n=1
b
2
n
= 1 −
1
2
N

n=1
2

__
(−1)
n+1
+ 1
¸_
2
= 1 −
2
π
2
N

n=1
1
n
2
_
(−1)
n+1
+ 1
¸
2
E
1
= 1 −
8
π
2
≈ 0.189
E
2
= E
1
E
3
≈ 0.099 = E
4
Observe ethat E
N
is decreasing and getting closer and closer to zero as N →∞.
Now we use Parseval’s idenity:
1
2L
_
L
−L
f
2
(x)dx = a
2
0
+
1
2

n=1
_
a
n
2
+b
n
2
_
=⇒ 1 =
1
2

n=1
4
n
2
π
2
_
(−1)
n+1
+ 1
¸
2
π
2
2
=

n=1
1
n
2
_
(−1)
n+1
+ 1
¸
2
=

n=0
4
(2n + 1)
2
=⇒
π
2
8
=

n=0
1
(2n + 1)
2
Chapter 4
Partial Differential
Equations!
4.1 Classification
Recall ODEs take this form:
F(x, y, y

, . . . , y
[n]
) = 0
x is the only independent variable. Suppose you have two or more independent
variables, e.g. u = u(x, y). In this case, we have to deal with partial differential
equations (PDEs).
4.1.1 Example

2
u
∂t
2
= c
2
_

2
u
∂x
2
+

2
u
∂y
2
_
This is known as the two-dimensional wave equation. The independent variables
are x, y, and t, e.g.
∂u
∂t
+u
∂u
∂x
= f(x)
1. Order: is the number of the highest derivative in the equation, e.g.
_
∂u
∂t
_
4
+

2
u
∂x
2
=

3
u
∂y
3
is a 3rd order PDE. The 4 is a power, not an order.
2. Linearity: You want the unknown function u and its derivatives to appear
in a linear fashion in order for the PDE to be linear.
33
34 CHAPTER 4. PARTIAL DIFFERENTIAL EQUATIONS!
Examples
∂u
∂t
= c
2
_

2
u
∂x
2
+

2
u
∂y
2
_
is a linear 2nd order PDE; this specific equation is known as the two
dimensional heat equation.
ih
∂ψ
∂t
= −
h
2
2m

2
ψ
∂x
2
+V (x)ψ
is also a linear 2nd order PDE; this is the Shr¨ odinger Equation.
uu
x
+u
y
= 0
is a non-linear 1st order PDE.
3. Homogeneity: Search for one non-zero term that does not include the
unknown function to say that the PDE is non-homogeneous. Otherwise,
it is homogenous.
Example

2
u
∂x
2
+

2
u
∂y
2
= f(x, y)
This is a second order linear non-homogeneous PDE; this specific equation
is the Poisson equation

2
u
∂x
2
+

2
u
∂y
2
= 0
This is a second order linear homogenous PDE; this is Laplace’s Equation.
4.2 Initial Condtions (IC) and Boundary Con-
ditions (BC)
Time t will deal the initial conditions whereas the spatial variables will deal
with boundary conditions.
Suppose that the unknown function is
u = u(x, t)
4.3. RESULT 35
For example, we might have:
u(x, t
0
) = f(x)
This is a zeroth order non-homogenous linear initial condition.
u(x, t
0
) −u
x
(x, t
0
) = 0
is also an initial condition (note the time dependence). This one is a first order
linear homogenous IC.
0 ≤ x ≤ L
_
u(0, t) = g
1
(t)
u(L, t) = g
2
(t)
_
These are boundary conditions, specifically zeroth order linear non-homogenous
boundary conditions.
u(0, t) −u(L, t)u
x
(L, t) = g(t)
is another example of a BC. This one is non-linear, non-homogenous and of
order 1.
4.3 Result
Theorem 4.1 If u
1
∧ u
2
are two solutions to a linear homogenous PDE, then
u = c
1
u
1
+ c
2
u
2
is also a solution to that PDE. Moreover, if u
1
∧ u
2
satisfy a
linear homogenous boundary condition BC, then so does u = c
1
u
1
+c
2
u
2
.
4.3.1 Example
Consider the PDE:
∂u
∂x
+
∂u
∂t
= 0
It is a linear homogenous first order PDE. This is easily solvable; apply the
substitution α = ax +bt and β = cx +dt where a, b, c, d are constants.
Using the chain rule:
∂u
∂x
=
∂u
∂α
∂α
∂x
+
∂u
∂β
∂β
∂x
= au
α
+cu
β
∂u
∂t
= bu
α
+du
β
exercise
36 CHAPTER 4. PARTIAL DIFFERENTIAL EQUATIONS!
Substituting back into the PDE:
au
α
+cu
β
+ +bu
α
+du
β
= 0
=⇒ (a +b)u
α
+ (c +d)u
β
= 0
Since our constants are arbitrarily chosen, we may choose values such that it
makes the PDE easier to solve, e.g. a = −b, k = c +d, giving:
ku
β
= 0
=⇒ u
B
= 0 =⇒ u = u(α)
= u(a(x −t))
this is the general solution to the problem; any funciton satisfying the initial
PDE must satisfy this new condition e.g.
e
a(x−t)
; ln(a(x −t)); sin(a(x −t))
are all solutions. There are infinitely many linearly independent solutions. Lin-
ear combinations of solutions are also solutions, since this is a linear homogenous
PDE. This property only holds for linear homogenous PDEs; if any of the two
conditions fail, the superposition property also fails.
4.4 One Dimensional Wave Equations
Consider an elastic (very flexible) string with fixed end points of length L (be-
tween the fixed points), with vertical motion u(x), x ∈ [0, L] where x is the
position along the string. Horizontal motion is very small, and therefore will be
neglected. This motion is called transverse.
u(x, t) will be the position of a point x at a given time t. We need to find
u. Apply Newton’s second law of motion.

i
F
i
= ma (4.1)
The acceleration is defined as:
a ≡

2
u
∂t
2
0 < x < L
u(x, 0) = f(x) I.C.
u(0, t) = u(L, t) = 0 B.C.
Let ρ be the mass density
M
L
.
Now we find the forces on the string:
1. Tensile force (tension) τ
4.4. ONE DIMENSIONAL WAVE EQUATIONS 37
(a) This force τ is considered to be constant along the string.
(b) τ is constant for all time.
(c) The tensile force is tangental to the string.
2. External forces: such as damping, electromagnetic, gravitational, etc. We
shall consider these forces per unit mass, i.e.:
F
E
= mF = ρLF
Consider a very small portion of the string between A = x and B = x+∆x.
Let θ
A
and θ
B
be the angle between the tangental vector and the horizontal
at A and B respectively. Solving for the vertical component of the tensile
force:
T
A
= −τ sin θ
A
T
B
= τ sin θ
B

F = ma
−τ sin θ
A
+τ sin θ
B
+ρ∆xF = ρ∆x

2
u
∂t
2
Now we make a few assumptions to simplify them.
(a) θ
A
and θ
B
are both very small, implying:
cos θ
A
≈ 1 ≈ cos θ
B
sin θ
A
≈ tan θ
A
The latter is just the slope of the line at the point A.
sin θ
A
= tan θ
A
=
∂u
∂x
(x, t)
sin θ
B
=
∂u
∂x
(x + ∆x, t)
Now we substitute it back:
−τ
∂u
∂x
(x, t) +τ
∂u
∂x
(x + ∆x, t) +ρ∆xF = ρ∆x

2
u
∂t
2

2
u
∂t
2
=
τ
ρ
_
∂u
∂x
(x + ∆x, t) −
∂u
∂x
(x, t)
∆x
_
+F
Let ∆x →0 and c
2
=
τ
ρ
; this can be done because both τ and ρ are
positive. The units of c
2
are velocity squared.

2
u
∂t
2
= c
2

2
u
∂x
2
+F
38 CHAPTER 4. PARTIAL DIFFERENTIAL EQUATIONS!
which gives us the one dimensional wave equation with an exter-
nal force (per unit mass F) a.k.a. the forced one-dimensional wave
equation. If F = 0, then we have the unforced wave equation:

2
u
∂t
2
= c
2

2
u
∂x
2
which is a linear second order homogenous PDE.
Remark
i. If F is produced by gravity, then:
F = −g
ii. F is produced by damping, which is proportional to velocity:
F ∝
∂u
∂t
=⇒ F = −2k
∂u
∂t
We shall solve the wave equation.
4.4.1 Solving the Wave Equation

2
u
∂t
2
= c
2

2
u
∂x
2
u(0, t) = u(L, t) = 0 B.C.
u(x, 0) = f(x),
∂u
∂t
(x, 0) = g(x) I.C.
0 < x < L, t ≥ 0
_
¸
¸
¸
¸
¸
¸
¸
_
¸
¸
¸
¸
¸
¸
¸
_
(4.2)
there is not one method to solve all PDEs; we shall try one method here, known
as the method of separation of variables.
We seek a solution of the form
u(x, t) = X(x)T(t) (4.3)
which we can plug back into (3.16)
∂u
∂t
= X(x)T

(t) =⇒

2
u
∂t
2
= X(x)T
′′
(t)

2
u
∂x
2
= X
′′
(x)T(t)
XT
′′
= c
2
X
′′
T.
=⇒
X
′′
X
=
1
c
2
T
′′
T
= k
4.4. ONE DIMENSIONAL WAVE EQUATIONS 39
it has to be a constant because both sides are functions of different variables.
We now generate two equations out of this.
X
′′
−kX = 0 (4.4)
T
′′
−c
2
kT = 0 (4.5)
Now we have two homogenous second order ODEs to solve. k is known as the
separation constant.
u(x, t) = 0 (because u(x, t) = 0 is the trivial solution and is uninteresting).
Now we try to solve (4.4), it is a second order linear ODE. The characteristic
equations are:
m
2
−k = 0
m
2
= k
There are three cases to consider
1. k = µ
2
> 0
m = ±µ
X(x) = ¯ c
1
e
µx
+ ¯ c
2
e
−µx
= c
1
cosh µx +c
2
sinh µ
u(0, t) = u(L, t) = 0
By (4.3):
X(0)T(t) = X(L)T(t) = 0
=⇒ X(0) = X(L) = 0
because T(t) = 0 yields the trivial solution. These are the boundary
equations associated with (4.4).
X(0) = 0 =⇒ c
1
+ 0 = 0
=⇒ c
1
= 0
X(L) = 0 =⇒ c
2
sinh µL = 0
=⇒ c
2
= 0 =⇒ X(x) = 0
which gives us our trivial solution u(x, t) = 0 again.
2. k = µ
2
= 0
m
1
= m
2
= 0 =⇒ X(x) = c
1
x +c
2
X(0) = 0 =⇒ c
2
= 0
X(L) = 0 =⇒ c
1
0
=⇒ u = 0
again, uninteresting.
40 CHAPTER 4. PARTIAL DIFFERENTIAL EQUATIONS!
3. k = −µ
2
< 0
m
1,2
= ±iµ
X(x) = c
1
cos µx +c
2
sin µx
X(0) = 0 =⇒ X(x) = c
2
sin µx
X(L) = 0 =⇒ c
2
sin µL = 0
Oh noes, another trivial solution. Let’s try sinµL = 0:
µL = mπ, m = 0, m ∈ Z
µ =

L
µ = µ
n
=

L
, n ∈ Z > 0
Because the arbitrary constant can absorb the negative signs, we can ig-
nore the negative numbers in m. Then, our solution can be written as
X(x) = X
n
(x) = c
2
sin

L
x
We set c
2
= 1 as we only need one solution out of this family.
X
n
(x) = sin

L
x
is the solution to (4.4).
Now let’s solve (4.5):
T
′′
n
+
_
cnπ
L
_
2
T
n
= 0; n ≥ 1
Let
λ
n
=
cnπ
L
; n ≥ 1 (4.6)
Continuing:
T
′′
n

2
n
T
n
= 0
m = ±iλ
n
CE
T
n
(t) = α
n
cos(λ
n
t) +β
n
sin(λ
n
t)
=⇒ u
n
(x, t) = X
n
T
n
= sin
_

L
x
_

n
cos λ
n
t +β
n
sin λ
n
t]
This is known as the nth normal mode. The wave equation in (4.2) is linear.
This implies
u(x, t) =

n=1
sin
_

L
x
_

n
cos λ
n
t +β
n
sin λ
n
t] (4.7)
4.4. ONE DIMENSIONAL WAVE EQUATIONS 41
is the solution to (4.2). If u(x, t) depends only on its nth normal mode, then
u is said to follow its own nth normal mode. The first mode is known as the
fundamental mode, whereas every other mode is called an overtone.
Now let’s apply the initial conditions.
u(x, 0) = f(x)
Now we sub this into (4.7):
u(x, 0) = f(x) =

n=1
α
n
sin
_

L
x
_
This is the half range sine series expansion of f, we can apply the Fourier series
coefficient equation:
α
n
=
2
L
_
L
0
f(x) sin
_

L
x
_
dx (4.8)
u
t
(x, 0) = g(x)
u
t
(x, t) =

n=1
sin
_

L
x
_
[−λ
n
α
n
sin (λ
n
t) +λ
n
β
n
cos (λ
n
t)]
u
t
(x, 0) = g(x) =

n=1
λ
n
β
n
sin
_

L
x
_
Which gives us the half range sine series expansion for g.
β
n
=
2
λ
n
L
_
L
0
g(x) sin
_

L
x
_
dx (4.9)
4.4.2 d’Alembert Equation
Suppose you have a string with initial shape
f(x) = sin

L
x 0 < x < L m ∈ Z
+
starting from rest. Find an expression for the subsequent motion.
Because it is starting from rest:
g(x) = 0 =⇒ β
n
= 0 ∀n ≥ 1
by (4.9). Let’s find α
n
α
n
=
2
L
_
L
0
sin

L
sin
_

L
x
_
dx
= 0 if n = m
= 1 otherwise
42 CHAPTER 4. PARTIAL DIFFERENTIAL EQUATIONS!
By Equation (4.7):
u(x, t) =

n=1
sin
_

L
x
_
_
α
n
cos(λ
n
t) +
$
$
$
$
$
β
n
sin(λ
n
t)
¸
= sin
_

L
x
_
cos(λ
m
t)
=
1
2
_
sin
_

L
x +λ
m
t
_
+ sin
_

L
x −λ
m
t
__
Subbing in λ
m
=
cmπ
L
:
=
1
2
_
sin
_

L
(x +ct)
_
+ sin
_

L
(x −ct)
__
=
1
2
[f(x +ct) +f(x −ct)]
Two things we know about f:
1. f is periodic;
2. f is defined for all x.
In general, for a string of length L and initial shape f(x) starting from rest, i.e.
g(x) = 0; we have
u(x, t) =
1
2
[f

(x +ct) +f

(x −ct] (4.10)
The new function f

is the 2L-periodic odd extension of f.
Result 4.1 For a given string of length L whose initial shape is f(x) and initial
velocity is g(x), the solution to system (4.2) is given by
u(x, t) =
1
2
[f

(x +ct) +f

(x −ct)] +
1
2c
_
x+ct
x−ct
g

(s)ds (4.11)
where f

and g

are 2L-periodic odd extensions of f and g. (4.11) is called
d’Alembert equation.
Consider the integral in (4.11).
G(x) =
_
x
a
g

(s)ds
G(x + 2L) −G(x) =
_
x+2L
a
g

(s)ds −
_
x
a
g

(s)ds
=
_
x+2L
a
g

(s)ds +
_
a
x
g

(s)ds
=
_
x+2L
x
g

(s)ds =
_
L
−L
g

(s)ds
= 0
4.4. ONE DIMENSIONAL WAVE EQUATIONS 43
because it is an odd function. Thus, G(x) is a 2L-periodic function.
_
x+ct
x−ct
g

(s)ds =
_
a
x−ct
g

(s)ds +
_
x+ct
a
g

(s)ds
= −G(x −ct) +G(x +ct)
By (4.11):
u(x, t) =
1
2
[f

(x +ct) +f

(x −ct)] +
1
2c
[G(x +ct) −G(x −ct)] (4.12)
Example
Consider a vibrating string with initial shape:
f(x) =
_
x 0 < x <
1
2
1 −x
1
2
≤ x < 1
_
g(x) = π
for x ∈ (0, 1). Find the equation of the subsequent motion, where c =
1
π
We start by finding the 2L-periodic odd extension of f and g. Then we
apply d’Alembert’s method.
Let’s find f

.
f

(x) = f(x) x ∈ (0, 1)
f

(x) = −f(−x) x ∈ (−1, 0)
f

(x + 2) = f

(x)
f

(x) =
_
_
_
−x −1 −1 ≤ x < −
1
2
x −
1
2
≤ x <
1
2
1 −x
1
2
≤ x < 1
Let’s find g

. Recall:
G(x) =
_
x
a
g

(s)ds
Let’s find G over the interval (−1, 1).
G(x) =
_
x
−1
g

(s)ds =
_
_
x
−1
−πds = −πx −π x < 0
_
0
−1
g

(s)ds +
_
x
0
g

(s)ds = π(x −1) x > 0
_
G(x + 2) = G(x)
By (4.12), we have
u(x, t) =
1
2
_
f

_
x +
t
π
_
+f

_
x −
t
π
__
+
π
2
_
G
_
x +
t
π
_
−G
_
x −
t
π
__
44 CHAPTER 4. PARTIAL DIFFERENTIAL EQUATIONS!
How about finding u at t =
π
2
? We just substitute it in. We need f

_
x +
1
2
_
and f

_
x −
1
2
_
. Repeat this for G(x) (exercise):
u
_
x,
π
2
_
=
_
2πx
−2π(x −1)
_
4.4.3 Method of Characteristic Lines
We had before in system (4.2) in t ≥ 0, 0 < x < L. These inequalities generate
what is known as the strip zone.
Recall t − t
0
= m(x − x
0
) is the equation of any line. m is the slope of the
line.
Let m = ±
1
2
; thus we have
x −ct = x
0
−ct
0

1
c
= m
x +ct = x
0
+ct
0
→ −
1
c
= m
generating two sets of lines that change according to x
0
and t
0
; these will be
referred to as L
1
and L
2
respectively.
L
1
and L
2
intersect at the point (x
0
, c
0
). The x-intercept of L
1
is (x
0
−ct
0
, 0)
and for L
2
is (x
0
+ct
0
, 0).
From this, and from solution given by (4.11) we see that u(x
0
, t
0
) depends
on the interval [x
0
−ct
0
, x
0
+ct
0
].
This interval is called the interval of dependence and the lines L
1
, L
2
are
called the characteristic lines.
Let’s now consider the whole interval [0, L].
We begin by drawing the characteristic lines through x = 0 and x = L.
x −ct = 0
x +ct = L
are our new lines.
Note Take a region in under both lines and above t = 0, I. Taking any point
(x, t) in I, the interval of dependence will be contained inside x ∈ (0, L)
We shall use region I to determine u(x, t) in other regions (II, III, IV). In
order to do this, we need the following theorem.
4.4. ONE DIMENSIONAL WAVE EQUATIONS 45
x
t
x = L
x −ct = 0
x +ct = L
I
II III
IV
Figure 4.1: Region Diagram
P
Q Q’
P’
t
x
46 CHAPTER 4. PARTIAL DIFFERENTIAL EQUATIONS!
Theorem 4.2 Let PQP

Q

be the parallelogram generated by the characteris-
tic lines. Then:
u(P) +u(P

) = u(Q) +u(Q

)
Example
Consider the wave equation with c = 3; f(x) = −
_
x −
1
2
_
2
+
1
4
and g(x) = 2x
with 0 < x1. Find u in region I and II.
In region I: The solution is given by (4.11) where x+ct and x−ct will never
be outside the interval [0, L], i.e.
u(x, t) =
1
2
[f(x + 3t) +f(x −3t)] +
1
6
_
x+3t
x−3t
g(s)ds
Substitute in f and we get (exercise):
u(x, t) = −x
2
−9t
2
+x + 2xt
for region I.
In region II: We use the parallelogram theorem above:
t
x
x=L
I
P
Q’
Q
P’
4.5. ONE DIMENSIONAL HEAT EQUATION 47
Q

must be on the t-axis.
Recall:
u(P) +u(P

) = u(Q) +u(Q

)
u(P) = u(Q) +
¨
¨
¨
u(Q

) −u(P

)
= u(Q) −u(P

)
Coordinates of Q:
_
x −3t = 0
x + 3t = x
0
+ 3t
0
_
=⇒
_
x =
x0+3t0
2
t =
x0+3t0
6
Coordinates of Q

:
_
x = 0
x −3t = x
0
−3t
0
_
=⇒
_
x = 0
t =
−x0+3t0
3
Coordinates of P

_
x −3t = 0
x + 3t = −x
0
+ 3t
0
_
=⇒
_
x =
−x0+3t0
2
t =
−x0+3t0
6
Substituting in u(P):
u(P) =
1
4
x
2

41
2
xt +
9
4
t
2
+x
4.5 One Dimensional Heat Equation
Consider a thin rod with negligible thickness, and of length L; making it effec-
tively one dimensional. We are interested in finding out the temperature of a
given point x on the rod at a given time t.
The temperature u at the edges of the rod are held fixed at 0; these are
boundary conditions, i.e.
u(0, t) = u(L, t) = 0 B.C
Say that the temperature distribution is initially given by f(x) (initial condi-
tion).
u(x, 0) = f(x) I.C.
The initial boundary value problem is given by
_
_
_
∂u
∂t
= c
2 ∂
2
u
∂x
2
u(0, t) = u(L, t) = 0
u(x, 0) = f(x), 0 < x < L, t ≥ 0
(4.13)
48 CHAPTER 4. PARTIAL DIFFERENTIAL EQUATIONS!
The PDE is linear 2nd order homogenous. c
2
is known as the thermal diffusivity.
We then apply the method of separation of variables;
u(x, t) = X(x)T(t)
We can do this because the heat equation is a linear homogenous PDE with 0
boundary conditions.
∂u
∂t
= XT

,

2
u
∂x
2
X
′′
T
T

X = c
2
X
′′
T
=⇒
X
′′
X
=
1
c
2
T

T
This implies that the ratios are a constant, since both sides are independent of
their own respective variables.
=⇒
X
′′
X
=
1
c
2
T

T
= k
where k is the separation constant.
X
′′
−kX = 0
T

−kc
2
T = 0
We have already solved these equations in the Wave Equations section.
u(0, t) = u(L, t) = 0
≡ X(0)T(t) = X(L)T(t) = 0
X(0) = X(L) = 0 B.C
We need to discuss the three cases:
1. k = µ
2
> 0
2. k = µ
2
= 0 both generate the trivial solution.
3. k = −µ
2
< 0 generates X
n
= sin
_

L
x
_
; n ≥ 1.
Now we need to find T(t). Substitute
T

+
_
cnπ
L
_
2
T = 0
Let
λ
n
=
nπc
L
=⇒ T

n

2
n
T
n
= 0 (4.14)
4.5. ONE DIMENSIONAL HEAT EQUATION 49
_
dT
n
T
n
=
_
−λ
2
n
dt
=⇒ T
n
= α
n
e
−λnt
; n ≥ 1
Thus u
n
(x, t) = X
n
T
n
= α
n
e
−λ
2
n
t
sin
_

L
x
_
; n ≥ 1
Reminder u
n
is called the nth normal mode.
u(x, t) =

n=1
α
n
e
−λ
2
n
t
sin
_

L
x
_
(4.15)
Now we can apply the initial condition:
f(x) = u(x, 0) =

n=1
α
n
sin
_

L
x
_
This represents the half range sine series expansion of f over (0, L), and thus
we can get the constants α
n
from:
α
n
=
2
L
_
L
0
f(x) sin
_

L
x
_
dx (4.16)
4.5.1 Observations
1.
λ
n
→∞ =⇒ e
−λnt
→0
as n →∞.
This means that the fundamental mode is the dominant term whereas the
overtones are negligible.
2. As t → ∞, similar ideas are applicable. We are interested in finding the
long term behaviour of the temperature, i.e. the steady state solutions, or
time independent solutions. This means that
∂u
∂t
= 0, and thus indepen-
dent of time. For system (4.13), the steady state solution is u = 0.
3. Suppose we need to find out what happens when we have non-zero bound-
ary conditions.
The PDE becomes:
∂u
∂t
= c
2 ∂
2
u
∂x
2
u(0, t) = T
1
u(L, t) = T
2
u(x, 0) = f(x), 0 < x < L, t ≥ 0
_
¸
¸
_
¸
¸
_
(4.17)
50 CHAPTER 4. PARTIAL DIFFERENTIAL EQUATIONS!
The method of separation of variables will not work; the boundary condi-
tions are non-zero. To solve system (4.17), we need to discuss the steady
state behaviour.
∂u
1
∂t
= 0
=⇒

2
u
1
∂x
2
= 0
by the PDE.
=⇒ u
1
(x) = Ax +B
Applying the boundary condition:
u
1
(0) = B = T
1
u
1
(L) = AL +T
1
=⇒ A =
T
2
−T
1
L
Thus giving us the steady state solution:
u
1
(x) =
T
2
−T
1
L
x +T
1
(4.18)
The general idea is to solve system (4.17) using the steady solution (4.18).
Let
u
2
(x, t) = u(x, t) −u
1
(x) (4.19)
where u(x, t) is the solution to (4.17). Notice
∂u
2
∂t
=
∂u
∂t
−0 =
∂u
∂t

2
u
2
∂x
2
=

2
u
∂x
2
−0 =

2
u
∂x
2
=⇒
∂u
2
∂x
= c
2

2
u
2
∂x
2
and we have generated a one dimensional heat function for u
2
u
2
(0, t) = T
1
−T
1
= 0
u
2
(L, t) = T
2
−T
2
= 0
_
B.C
Oh mama mia! We generated two zero boundary conditions!
u
2
(x, 0) = f(x) −u
1
(x) I.C.
will comprise our initial condition. We have now generated a new initial bound-
ary value problem
∂u
2
∂t
= c
2

2
u
2
∂x
2
u
2
(0, t) = u
2
(L, t) = 0
u
2
(x, 0) = f(x) −u)1(x)
(4.20)
4.5. ONE DIMENSIONAL HEAT EQUATION 51
We can now solve (4.20) by the method of separation of variables. By (4.15),
we have
u
2
(x, t) =

n=1
α
n
e
−λ
2
n
t
sin
_

L
x
_
; λ
n
=
cnπ
L
α
n
=
2
L
_
L
0
[f(x) −u
1
(x)] sin
_

L
x
_
dx (4.21)
=⇒ u(x, t) = u
2
(x, t) +u
1
(x) (4.22)
4.5.2 Example
Solve the initial boundary value problem (4.17) given the data:
u(0, t) = 20; u(1, t) = 80; f(x) = 60x; L = c = 1
Find the steady state solution by Equation (4.18)
u
1
(x) =
80 −20
1
x + 20 = 60x + 20
λ
n
= nπ; n ≥ 1
a
n
= 2
_
1
0
[60x −60x −20] sin
_

L
x
_
dx
=
_
0 if n is even

80

if n is odd
α
2n+1
=
−80
(2n + 1)π
; n ≥ 0
By (4.15) we have:
u
2
(x, t) =

n=0
−80
(2n + 1)π
e
−(2n+1)
2
π
2
t
sin ((2n + 1)πx)
u(x, t) = u
2
(x, t) +u
1
(x)
Beccause the initial and boundary conditions are inconsistent, our solution u
may be discontinuous and thus will exhibit the Gibbs phenomenon at t = 0.
4.5.3 Other Types of Boundary Conditions
Let’s discuss other types of boundary conditions.
52 CHAPTER 4. PARTIAL DIFFERENTIAL EQUATIONS!
Variation 1
∂u
∂t
= c
2

2
u
∂x
2
∂u
∂x
(0, t) =
∂u
∂x
(L, t) = 0
u(x, 0) = f(x)
(4.23)
This means that there is no heat flux, i.e. there is no propagation in the x
direction.
We now try to employ the method of separation of variables. Seek a solution
of the form
u(x, t) = X(x)T(t)
generates
X
′′
−kX = 0
T

−kT = 0
where k is the separation constant. Now we check the boundary conditions:
X

(0)T(t) = 0
X

(L)T(t) = 0
=⇒ X

(0) = X

(L) = 0
Now we solve
X
′′
−KX = 0
X

(0 = X

(L) = 0
which gives us the characteristic equation:
m
2
−k = 0 =⇒ m
2
= k
and we deal with our three cases again:
1. k = µ
2
> 0 (trivial solution)
2. k = µ
2
= 0 yields X(x) = c
1
+c
2
x; which is a line.
X

(x) = c
2
=⇒
_
X

(0) = c
2
= 0
X

(L) = c
2
= 0
Therefore the boundary conditions are satisfied. We have inifinitely many
solutions, and we only need one solution, so we set X
0
(x) = 1; µ = µ
0
= 0.
This X
0
is a non-trivial solution.
4.5. ONE DIMENSIONAL HEAT EQUATION 53
3. k = −µ
2
< 0 yields
X(x) = c
1
cos(µx) +c
2
sin(µx)
X

(x) = −c
1
µsin µx +c
2
µcos µx
=⇒
_
X

(0) = c
2
µ = 0 =⇒ c
2
= 0
X

(L) = −c
1
µsin µL = 0 =⇒ sin µL = 0
=⇒ µL = nπ
=⇒ µ = µ
n
=

L
, n ≥ 1
X(x) = X
n
(x) = c
1
cos µ
n
x
X
n
(x) = cos
_

L
x
_
Since at n = 0 we have X
0
= 1 and µ
0
= 0 for cases 2 and 3, we therefore merge
them together as follows:
X
n
(x) = cos
_

L
x
_
; n ≥ 0
where u
n
=

L
Substituting k = −µ
2
into the DE for T, we will end up with the following:
T

n
−(cµ
n
)
2
T
n
= 0; n ≥ 0
To simplify (we always do this), we let
λ
n
= cµ
n
=
cnπ
L
(4.24)
Giving us:
T


2
n
T
n
= 0
which is a separable equation:
_
dT
n
T
n
=
_
−λ
2
n
dt
=⇒ T
n
= α
n
e
−λ
2
n
t
Therefore the nth normal mode is:
u
n
(x, t) = α
n
e
−λ
2
n
t
cos
_

L
x
_
Because this is a linear PDE, this implies that the soultion is an infinite sum of
all the modes:
u(x, t) =

n=0
α
n
e
−λ
2
n
t
cos
_

L
x
_
54 CHAPTER 4. PARTIAL DIFFERENTIAL EQUATIONS!
Taking out the first term:
u(x, t) = α
0
+

n=0
α
n
e
−λ
2
n
t
cos
_

L
x
_
(4.25)
We now use the initial conditions to find α
0
, α
n
, n ≥ 1.
f(x) = u(x, 0) = α
0
+

n=0
α
n
cos
_

L
x
_
Which gives us the half range cosine series expansion of f(x).
f(x) =⇒ a
0
=
1
L
_
L
0
f(x)dx
a
n
=
2
L
_
L
0
f(x) cos
_

L
x
_
dx
(4.26)
Variation II

Consider the initial boundary value problem

    ∂u/∂t = c² ∂²u/∂x²
    u(0, t) = 0,   ∂u/∂x (L, t) = −τ u(L, t)     (B.C.),  τ ≥ 0
    u(x, 0) = f(x)                               (I.C.)

Let u(x, t) = X(x)T(t):

    X'' − kX = 0
    T' − c²kT = 0

where k once again is the separation constant. Now we apply the boundary conditions:

    X(0)T(t) = 0
    X'(L)T(t) = −τ X(L)T(t)
    ⟹   X(0) = 0,   X'(L) + τX(L) = 0.

Now we solve X'' − kX = 0:

    m² − k = 0   ⟹   m² = k

and again we consider the three cases:

1. k = μ² > 0 and
2. k = μ² = 0 both generate the trivial solution.
3. k = −μ² < 0 implies that the characteristic roots are ±iμ and

       X(x) = c₁ cos(μx) + c₂ sin(μx)
       ⟹   X(0) = 0 = c₁   ⟹   X(x) = c₂ sin(μx)
       X'(x) = c₂μ cos(μx)
       c₂μ cos(μL) + τ c₂ sin(μL) = 0

   where the last step comes from applying the second boundary condition. We assume c₂ ≠ 0, as c₂ = 0 would yield the trivial solution:

       c₂ ≠ 0   ⟹   μ cos(μL) + τ sin(μL) = 0.

Mama mia! Notice:

    tan(μL) = −μ/τ

We can solve this numerically, but how do we know that μ exists? Let

    y₁(μ) = tan(μL)
    y₂(μ) = −μ/τ

and look for the points of intersection of these two functions, graphically.

[Figure: the branches of tan(μL) plotted together with the line −μ/τ; each branch crosses the line once.]

There are clearly intersections; in fact we have infinitely many points of intersection

    μ = μₙ,  n ≥ 1.

One can determine them numerically using software.

    X(x) = Xₙ(x) = sin(μₙx),  n ≥ 1     (taking c₂ = 1)

Recall T' − c²kT = 0, so

    Tₙ' + (cμₙ)² Tₙ = 0

which is like the DE we solved before:

    Tₙ(t) = αₙ e^(−λₙ²t),   λₙ = cμₙ,  n ≥ 1.

The nth normal mode is then

    uₙ(x, t) = αₙ e^(−λₙ²t) sin(μₙx)

and the general solution is

    u(x, t) = Σ_{n=1}^∞ αₙ e^(−λₙ²t) sin(μₙx).

Now we check the initial condition:

    u(x, 0) = f(x) = Σ_{n=1}^∞ αₙ sin(μₙx).

This is not a half range sine series expansion; we don't yet know that the functions are orthogonal (due to the μₙ inside the sine function).

Question   Are sin(μₙx) and sin(μₘx) orthogonal?

The answer is yes. The proof will be revisited later on (with Sturm–Liouville theory). Now we can multiply both sides by sin(μₘx):

    f(x) sin(μₘx) = Σ_{n=1}^∞ αₙ sin(μₙx) sin(μₘx)

and integrate with respect to x; this destroys all the terms except for n = m:

    αₘ = [ ∫₀^L f(x) sin(μₘx) dx ] / [ ∫₀^L sin²(μₘx) dx ].
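The text above says the μₙ can be found numerically with software; here is a minimal sketch of how one might do that (not from the notes). scipy's brentq root finder is applied on each branch of tan; the values L = 1 and τ = 2 are assumptions for illustration only.

    import numpy as np
    from scipy.optimize import brentq

    L, tau = 1.0, 2.0                                     # assumed length and Robin constant
    h = lambda mu: mu*np.cos(mu*L) + tau*np.sin(mu*L)     # zeros of h give tan(mu L) = -mu/tau

    def mu_roots(count, eps=1e-6):
        """First `count` positive roots, one per branch ((n+1/2)pi/L, (n+3/2)pi/L)."""
        roots, n = [], 0
        while len(roots) < count:
            a = (n + 0.5)*np.pi/L + eps                   # just past the asymptote of tan
            b = (n + 1.5)*np.pi/L - eps
            roots.append(brentq(h, a, b))
            n += 1
        return np.array(roots)

    print(mu_roots(4))    # the first few eigenvalues mu_n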
4.6 Two Dimensional Wave Equation

Can we find the Fourier series of a function given by z = f(x, y)? The answer is yes. In this case, we will have a double Fourier series.

Theorem 4.3  If f(x, y) is a continuous function with continuous first and second order partial derivatives on [0, a] × [0, b], then the double Fourier half range sine series expansion of f(x, y) is given by

    f(x, y) = Σ_{n=1}^∞ Σ_{m=1}^∞ B_mn sin(mπx/a) sin(nπy/b)            (4.27)

where

    B_mn = (4/ab) ∫₀^b ∫₀^a f(x, y) sin(mπx/a) sin(nπy/b) dx dy         (4.28)

To simplify the notation, we merge the two summation signs:

    f(x, y) = Σ_{m,n=1}^∞ B_mn sin(mπx/a) sin(nπy/b)

Observation   Notice that the functions sin(mπx/a) sin(nπy/b) and sin(kπx/a) sin(lπy/b)
are orthogonal on [0, a] × [0, b] unless m = k and n = l.
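Here is a minimal numerical sketch of (4.28) (not part of the notes): the coefficients B_mn computed by double quadrature. scipy.integrate.dblquad is used; the test function f(x, y) = xy and a = b = 1 are assumptions chosen only so the output is easy to check by hand.

    import numpy as np
    from scipy.integrate import dblquad

    a, b = 1.0, 1.0
    f = lambda x, y: x*y            # assumed test function

    def B(m, n):
        """Double sine coefficient, eq. (4.28)."""
        integrand = lambda y, x: f(x, y)*np.sin(m*np.pi*x/a)*np.sin(n*np.pi*y/b)
        val, _ = dblquad(integrand, 0, a, lambda x: 0, lambda x: b)
        return 4/(a*b)*val

    # for f = xy the exact value is 4*(-1)**(m+n)/(m*n*pi**2)
    print(B(1, 1), 4/np.pi**2)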
Now let's use this material to solve the 2-d wave equation. The initial boundary value problem is given by

    ∂²u/∂t² = c² ( ∂²u/∂x² + ∂²u/∂y² )                 2D Wave Equation
    u(0, y, t) = u(a, y, t) = 0                        BC 1
    u(x, 0, t) = u(x, b, t) = 0                        BC 2
    u(x, y, 0) = f(x, y);  ∂u/∂t (x, y, 0) = g(x, y)   ICs               (4.29)

Now we apply separation of variables. We seek a solution of the form

    u(x, y, t) = X(x)Y(y)T(t).

Substituting in:

    XYT'' = c² (X''YT + XY''T);   divide by c²XYT:

    (1/c²) T''/T = X''/X + Y''/Y = −k²

A separation constant ≥ 0 would generate the trivial solution (check), which is why we write it as −k². Thus

    X''/X + Y''/Y = −k²   ⟹   X''/X = −Y''/Y − k² = −μ²

Again, a non-negative constant here generates the trivial solution, so we call it −μ².

    ⟹   Y''/Y = −(k² − μ²) ≡ −ν²

Now we can generate a system of ODEs that we can solve:

    X'' + μ²X = 0
    Y'' + ν²Y = 0
    T'' + (ck)²T = 0
    k² = μ² + ν²

Now we have to apply the boundary conditions:

    X(0) = X(a) = 0
    Y(0) = Y(b) = 0

We need to solve

    X'' + μ²X = 0,   X(0) = X(a) = 0

as our first boundary value problem; the other one is

    Y'' + ν²Y = 0,   Y(0) = Y(b) = 0.

For μ = μₘ = mπ/a we have X = Xₘ(x) = sin(mπx/a).
For ν = νₙ = nπ/b we have Y = Yₙ(y) = sin(nπy/b).
But

    k² = μ² + ν²   ⟹   k²_mn = π² ( m²/a² + n²/b² ).

Let

    λ²_mn = c² k²_mn = c² π² ( m²/a² + n²/b² )                           (4.30)

This term is called the characteristic frequency. Thus

    T''_mn + λ²_mn T_mn = 0
    ⟹   the characteristic roots are ±iλ_mn
    T_mn = α_mn cos(λ_mn t) + β_mn sin(λ_mn t).

The normal mode:

    u_mn(x, y, t) = sin(mπx/a) sin(nπy/b) [ α_mn cos(λ_mn t) + β_mn sin(λ_mn t) ]   (4.31)

Thus, the solution is:

    u(x, y, t) = Σ_{m,n=1}^∞ sin(mπx/a) sin(nπy/b) [ α_mn cos(λ_mn t) + β_mn sin(λ_mn t) ]   (4.32)

Now we can apply the initial conditions to get the final solution.

    f(x, y) = u(x, y, 0) = Σ_{m,n=1}^∞ α_mn sin(mπx/a) sin(nπy/b)

which is the double sine Fourier series expansion. By (4.27) and (4.28),

    α_mn = (4/ab) ∫₀^b ∫₀^a f(x, y) sin(mπx/a) sin(nπy/b) dx dy          (4.33)

Skipping a few steps (differentiate (4.32) with respect to t and set t = 0), the other initial condition gives

    g(x, y) = ∂u/∂t (x, y, 0) = Σ_{m,n=1}^∞ λ_mn β_mn sin(mπx/a) sin(nπy/b)

which is, again, a double sine Fourier series expansion, by the same equations:

    β_mn = 4/(λ_mn ab) ∫₀^b ∫₀^a g(x, y) sin(mπx/a) sin(nπy/b) dx dy     (4.34)

Theorem 4.4  The set of all points of the membrane that stay still (i.e. do not vibrate) are called nodal lines. They satisfy the equation

    u_mn(x, y, t) = 0,  ∀t.
4.6.1 Example

    a = b = 1,   c = 1/π,   f(x, y) = sin(3πx) sin(πy),   g(x, y) = 0

    α_mn = 4 ∫₀¹ ∫₀¹ sin(3πx) sin(πy) sin(mπx) sin(nπy) dx dy
         = [ 2 ∫₀¹ sin(3πx) sin(mπx) dx ] [ 2 ∫₀¹ sin(πy) sin(nπy) dy ]
         = [ ∫₋₁¹ sin(3πx) sin(mπx) dx ] [ ∫₋₁¹ sin(πy) sin(nπy) dy ]

We know from the orthogonality principle that this equals

    0   if m ≠ 3 or n ≠ 1
    1   if m = 3 and n = 1

so

    α₃₁ = 1;   α_mn = 0 for m ≠ 3 or n ≠ 1
    β_mn = 0   ∀m, n
    λ_mn = √(m² + n²);   λ₃₁ = √10

Thus yielding our final solution:

    u(x, y, t) = sin(3πx) sin(πy) cos(√10 t)

So we find our nodal lines by setting the above expression to zero for all t:

    sin(3πx) sin(πy) cos(√10 t) = 0

This implies that

    3πx = kπ,  k ∈ Z⁺   ⟹   x = k/3

or, to get a zero from the other factor,

    πy = kπ   ⟹   y = k,  k ∈ Z⁺

which falls on the boundary only. Inside our boundaries, the nodal lines are

    x = 1/3, 2/3.
4.7 Two Dimensional Heat Equation

Definition 4.1

    ∂u/∂t = c² ( ∂²u/∂x² + ∂²u/∂y² )                                     (4.35)

is the heat equation in two dimensions.

If we have zero boundary conditions

    u(0, y, t) = u(a, y, t) = 0
    u(x, 0, t) = u(x, b, t) = 0

and an initial condition

    u(x, y, 0) = f(x, y)

then we can use the method of separation of variables:

    u(x, y, t) = Σ_{m,n=1}^∞ α_mn sin(mπx/a) sin(nπy/b) e^(−λ²_mn t)     (4.36)

    λ_mn = cπ √( m²/a² + n²/b² )                                         (4.37)

    α_mn = (4/ab) ∫₀^a ∫₀^b f(x, y) sin(mπx/a) sin(nπy/b) dy dx,   m, n = 1, 2, 3, ...   (4.38)
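For a quick sanity check of (4.36)–(4.38), consider the special initial condition f(x, y) = sin(πx/a) sin(πy/b) (this particular choice is mine, not from the notes). Then only the (1, 1) coefficient survives:

    α₁₁ = (4/ab) ∫₀^a ∫₀^b sin²(πx/a) sin²(πy/b) dy dx = (4/ab)(a/2)(b/2) = 1,
    α_mn = 0 otherwise,

so (4.36) collapses to the single decaying mode

    u(x, y, t) = e^(−λ²₁₁ t) sin(πx/a) sin(πy/b),   λ₁₁ = cπ √(1/a² + 1/b²),

which indeed satisfies the PDE, the zero boundary conditions and the initial condition.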
To get a steady state solution, we set ∂u/∂t = 0:

    ∂²u/∂x² + ∂²u/∂y² = 0,   i.e.  ∇²u = 0
    u(0, y) = u(a, y) = 0
    u(x, 0) = 0,   u(x, b) = f₂(x)                                       (4.39)

This means that there is a heat source along one side of the boundary. Again, we use separation of variables:

    u(x, y) = X(x)Y(y)

Substituting into the equation we want to solve:

    X''Y + XY'' = 0
    X''/X + Y''/Y = 0
    Y''/Y = −X''/X = k

This yields two equations:

    Y'' − kY = 0
    X'' + kX = 0

The boundary conditions give

    X(0)Y(y) = X(a)Y(y) = 0   ⟹   X(0) = 0,  X(a) = 0
    X(x)Y(0) = 0   ⟹   Y(0) = 0.

So we take

    k = μ² > 0

again to avoid the trivial solutions:

    X(x) = c₁ cos(μx) + c₂ sin(μx)
    X(0) = c₁ = 0
    X(x) = c₂ sin(μx)
    X(a) = c₂ sin(μa) = 0
    ⟹   μa = nπ,   n = 1, 2, 3, ...

Now we can solve for μ, and our final solution for x is

    Xₙ(x) = sin(nπx/a),   n = 1, 2, ...

We pick c₂ = 1, which doesn't matter because it gets absorbed later by another constant. Next,

    Yₙ'' − μₙ² Yₙ = 0.

This gives us an exponential answer:

    Yₙ(y) = αₙ cosh(μₙ y) + βₙ sinh(μₙ y)

Now we can apply the boundary conditions:

    Yₙ(0) = 0 = αₙ
    Yₙ(y) = βₙ sinh(μₙ y)
    uₙ(x, y) = βₙ sinh(nπy/a) sin(nπx/a)

    ⟹   u(x, y) = Σ_{n=1}^∞ uₙ(x, y) = Σ_{n=1}^∞ βₙ sinh(nπy/a) sin(nπx/a)   (4.40)

Applying u(x, b) = f₂(x) turns this into a half range sine series, so

    βₙ = 2 / [ a sinh(nπb/a) ] ∫₀^a f₂(x) sin(nπx/a) dx                  (4.41)
4.8 Dirichlet Problem

Consider a box with the following boundaries:

[Figure: a rectangle 0 ≤ x ≤ a, 0 ≤ y ≤ b with ∇²u = 0 inside; u = f₁(x) on the bottom edge, u = f₂(x) on the top edge, u = g₁(y) on the left edge, u = g₂(y) on the right edge.]

    ∇²u = 0
    u(0, y) = g₁(y)
    u(a, y) = g₂(y)
    u(x, 0) = f₁(x)
    u(x, b) = f₂(x)                                                      (4.42)

Notice that these boundary values are non-zero. We split the PDE into the following four sub-problems:

[Figure: four copies of the rectangle, each with ∇²uᵢ = 0 inside and only one non-zero edge: u₁ keeps f₁(x) on the bottom, u₂ keeps f₂(x) on the top, u₃ keeps g₁(y) on the left, u₄ keeps g₂(y) on the right; the remaining three edges of each sub-problem are set to 0.]

These new boundary value problems have the following properties:

    u = u₁ + u₂ + u₃ + u₄

so that, for example, u(x, b) = 0 + f₂(x) + 0 + 0 = f₂(x), and

    ∇²u = ∇²(u₁ + u₂ + u₃ + u₄) = ∇²u₁ + ∇²u₂ + ∇²u₃ + ∇²u₄ = 0.

Each sub-problem has zero boundary conditions on three sides, and thus we may now employ the method of separation of variables. For u₁:

    ∇²u₁ = 0
    u₁(0, y) = u₁(a, y) = 0
    u₁(x, 0) = f₁(x)
    u₁(x, b) = 0
    u₁(x, y) = X(x)Y(y)

    Y''/Y = −X''/X = k   ⟹   X'' + kX = 0,   Y'' − kY = 0,   k ≡ μ²
    X(0) = X(a) = 0
    X(x) = Xₙ(x) = sin(nπx/a),  n ≥ 1,   where μₙ = nπ/a.

Now we do Y:

    Yₙ(y) = Aₙ cosh(μₙ y) + Bₙ sinh(μₙ y)
    Yₙ(b) = Aₙ cosh(μₙ b) + Bₙ sinh(μₙ b) = 0
    ⟹   Aₙ/Bₙ = −sinh(μₙ b)/cosh(μₙ b)

    Yₙ(y) = Bₙ [ (Aₙ/Bₙ) cosh(μₙ y) + sinh(μₙ y) ]
          = Bₙ [ −(sinh(μₙ b)/cosh(μₙ b)) cosh(μₙ y) + sinh(μₙ y) ]
          = [ −Bₙ/cosh(μₙ b) ] [ sinh(μₙ b) cosh(μₙ y) − cosh(μₙ b) sinh(μₙ y) ]
          = [ −Bₙ/cosh(μₙ b) ] sinh(μₙ(b − y))
          = αₙ sinh(μₙ(b − y))

This gives us enough information to solve each of the four cases. The result is given below:

Result 4.2

    u₁ₙ(x, y) = αₙ sin(nπx/a) sinh(μₙ(b − y))

    u₁(x, y) = Σ_{n=1}^∞ αₙ sin(nπx/a) sinh(μₙ(b − y))                   (4.43)

    where αₙ = 2 / [ a sinh(nπb/a) ] ∫₀^a f₁(x) sin(nπx/a) dx            (4.44)

    u₃(x, y) = Σ_{n=1}^∞ γₙ sinh(nπ(a − x)/b) sin(nπy/b)                 (4.45)

    where γₙ = 2 / [ b sinh(nπa/b) ] ∫₀^b g₁(y) sin(nπy/b) dy            (4.46)

    u₄(x, y) = Σ_{n=1}^∞ δₙ sinh(nπx/b) sin(nπy/b)                       (4.47)

    where δₙ = 2 / [ b sinh(nπa/b) ] ∫₀^b g₂(y) sin(nπy/b) dy            (4.48)
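Result 4.2 lists u₁, u₃ and u₄; the remaining piece u₂ (non-zero data f₂ on the top edge) is not written out in the notes, but it presumably follows from the same computation as u₁ with the roles of the edges y = 0 and y = b exchanged. The coefficient name εₙ below is introduced here only for illustration:

    u₂(x, y) = Σ_{n=1}^∞ εₙ sin(nπx/a) sinh(nπy/a),

    where εₙ = 2 / [ a sinh(nπb/a) ] ∫₀^a f₂(x) sin(nπx/a) dx,

so that u₂ vanishes on the bottom and side edges and reproduces the sine series of f₂(x) at y = b.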
4.8.1 Example

Consider the boundary value problem given by:

    a = b = 1
    u(0, y) = u(x, 1) = 0
    u(1, y) = sin(2πy)
    u(x, 0) = x

We can see immediately that we only need u₁ and u₄ from above. From (4.48):

    δₙ = 2/sinh(nπ) ∫₀¹ sin(2πy) sin(nπy) dy
       = 2/sinh(nπ) · (1/2) ∫₋₁¹ sin(2πy) sin(nπy) dy
       = 0             if n ≠ 2
       = 1/sinh(2π)    if n = 2

And consequently, from (4.47),

    u₄(x, y) = [ 1/sinh(2π) ] sinh(2πx) sin(2πy).

From (4.44),

    αₙ = 2/sinh(nπ) ∫₀¹ x sin(nπx) dx
       = 2/sinh(nπ) [ −(1/(nπ)) x cos(nπx) + (1/(n²π²)) sin(nπx) ]₀¹
       = 2/sinh(nπ) [ −(1/(nπ)) cos(nπ) − 0 ]
       = −2(−1)ⁿ / [ nπ sinh(nπ) ]

Plugging into (4.43),

    u₁(x, y) = Σ_{n=1}^∞ [ −2(−1)ⁿ / (nπ sinh(nπ)) ] sin(nπx) sinh(nπ(1 − y)).

And thus our solution is

    u(x, y) = u₁(x, y) + u₄(x, y)
            = [ 1/sinh(2π) ] sinh(2πx) sin(2πy) − Σ_{n=1}^∞ [ 2(−1)ⁿ / (nπ sinh(nπ)) ] sin(nπx) sinh(nπ(1 − y)).
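As a quick check of this answer (my own sketch, not part of the notes), one can evaluate the truncated series on the edge x = 1 and compare with the prescribed data sin(2πy); the truncation at 60 terms is an arbitrary choice.

    import numpy as np

    N = 60
    y = np.linspace(0, 1, 201)

    u4 = np.sinh(2*np.pi*1.0)/np.sinh(2*np.pi)*np.sin(2*np.pi*y)          # u4 on x = 1
    n = np.arange(1, N+1)[:, None]
    u1 = np.sum(-2*(-1.0)**n/(n*np.pi*np.sinh(n*np.pi))
                *np.sin(n*np.pi*1.0)*np.sinh(n*np.pi*(1 - y)), axis=0)    # u1 on x = 1 (≈ 0)

    print(np.max(np.abs(u4 + u1 - np.sin(2*np.pi*y))))    # tiny, up to roundoff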
4.9 Poisson's Equation

We shall solve the boundary value problem

    ∇²u = f(x, y)
    u(0, y) = g₁(y),   u(a, y) = g₂(y)
    u(x, 0) = f₁(x),   u(x, b) = f₂(x)                                   (4.49)

Consider a box bounded by u = f₁(x), u = f₂(x), u = g₁(y), u = g₂(y).

[Figure: a rectangle 0 ≤ x ≤ a, 0 ≤ y ≤ b with ∇²u = f(x, y) inside, f₁ on the bottom, f₂ on the top, g₁ on the left, g₂ on the right.]

We can now decompose (split) the problem into two.

[Figure: the first rectangle carries ∇²u₁ = 0 with the original boundary data f₁, f₂, g₁, g₂; the second carries ∇²u₂ = f(x, y) with four zero boundary conditions.]

We can add both rectangles together to get the original problem. The former problem will be known as problem 1, and the latter will be referred to as problem 2.

Problem 1 is the Dirichlet problem, which was solved previously. We are now left with a problem with zero boundary conditions. The idea is to guess a solution and plug it in (an educated guess, of course). We try a function given by

    φ_mn(x, y) = sin(mπx/a) sin(nπy/b)

Note that the right hand side of this comes from the solution to the Dirichlet problem. We now calculate the Laplacian:

    ∇²φ_mn = ∂²φ_mn/∂x² + ∂²φ_mn/∂y²
           = −(m²π²/a²) sin(mπx/a) sin(nπy/b) − (n²π²/b²) sin(mπx/a) sin(nπy/b)
           = −( m²π²/a² + n²π²/b² ) sin(mπx/a) sin(nπy/b)
           = −Λ_mn sin(mπx/a) sin(nπy/b)

Thus:

    ∇²φ_mn = −Λ_mn φ_mn                                                  (4.50)

This is basically an eigenvalue problem where Λ_mn is the eigenvalue associated with the eigenfunction φ_mn. Also notice that the zero boundary conditions of problem 2 are satisfied by φ_mn.

Thus, the educated guess is

    u₂(x, y) = Σ_{m,n=1}^∞ E_mn sin(mπx/a) sin(nπy/b)                    (4.51)

This is known as the eigenseries (eigenfunction series) expansion. E_mn is to be determined. This representation is not unique.

We need to solve ∇²u₂ = f(x, y), and now we can substitute (4.51):

    Σ_{m,n=1}^∞ −Λ_mn E_mn sin(mπx/a) sin(nπy/b) = f(x, y)

This is the double Fourier series expansion of f(x, y), so

    E_mn = −4/(Λ_mn ab) ∫₀^b ∫₀^a f(x, y) sin(mπx/a) sin(nπy/b) dx dy    (4.52)
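A minimal numerical sketch of (4.52) (mine, not from the notes): the eigenseries coefficients are just the double sine coefficients of f scaled by −1/Λ_mn. The forcing f(x, y) = xy and a = b = 1 are assumed only to match the example that follows.

    import numpy as np
    from scipy.integrate import dblquad

    a, b = 1.0, 1.0
    f = lambda x, y: x*y                     # assumed forcing term

    def E(m, n):
        """Eigenseries coefficient of problem 2, eq. (4.52)."""
        Lam = (m*np.pi/a)**2 + (n*np.pi/b)**2
        integrand = lambda y, x: f(x, y)*np.sin(m*np.pi*x/a)*np.sin(n*np.pi*y/b)
        val, _ = dblquad(integrand, 0, a, lambda x: 0, lambda x: b)
        return -4.0/(Lam*a*b)*val

    print(E(1, 1))    # about -2/pi**4 ≈ -0.0205 for f = xy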
4.9.1 Example

Solve Poisson's equation ∇²u = xy on the unit square (a = b = 1), subject to the boundary conditions

    u(0, y) = u(x, 1) = 0
    u(1, y) = sin(2πy)
    u(x, 0) = x

[Figure: the unit square with ∇²u = xy inside; u = x on the bottom edge, sin(2πy) on the right edge, 0 on the top and left edges.]

We begin the decomposition into two separate problems.

Problem 1 – Dirichlet problem: ∇²u₁ = 0 with the boundary data above.

Problem 2 – Zero boundary conditions: ∇²u₂ = xy with u₂ = 0 on all four edges.

From the previous example:

    u₁ = [ sinh(2πx)/sinh(2π) ] sin(2πy) + (2/π) Σ_{n=1}^∞ [ (−1)ⁿ⁺¹/(n sinh(nπ)) ] sin(nπx) sinh(nπ(1 − y))

Now let's look for u₂; we seek a solution of the form (4.51).

    Λ_mn = m²π²/a² + n²π²/b² = π²(m² + n²)

By (4.52):

    E_mn = −4/[ π²(m² + n²) ] ∫₀¹ ∫₀¹ xy sin(mπx) sin(nπy) dx dy
         = −4/[ π²(m² + n²) ] [ ∫₀¹ x sin(mπx) dx ] [ ∫₀¹ y sin(nπy) dy ]
         = −4/[ π²(m² + n²) ] · [ (−1)^(m+1)/(mπ) ] · [ (−1)^(n+1)/(nπ) ]
         = 4(−1)^(m+n+1) / [ π⁴ mn (m² + n²) ]

This is the Fourier coefficient. It follows that

    u₂(x, y) = (4/π⁴) Σ_{m,n=1}^∞ [ (−1)^(m+n+1) / (mn(m² + n²)) ] sin(mπx) sin(nπy)

Thus

    u = u₁ + u₂

is our final answer.
4.10 Sturm Liouville Problems

Definition 4.2

1. The Regular Sturm–Liouville problem (RSL) over a finite interval [a, b] is given by the second order boundary value problem

       [ p(x)y' ]' + [ q(x) + λr(x) ] y = 0
       c₁ y(a) + c₂ y'(a) = 0
       d₁ y(b) + d₂ y'(b) = 0                                            (4.53)

   where

   (a) c₁, c₂ are constants, at least one of which is non-zero;
   (b) d₁, d₂ are constants, at least one of which is non-zero;
   (c) p(x), p'(x), q(x), r(x) are continuous on [a, b];
   (d) p(x) > 0 and r(x) > 0, ∀x ∈ [a, b].

   These are known as regularity conditions.

2. A Singular Sturm–Liouville Problem (SSL) is as above, except that one of the regularity conditions fails.

4.10.1 Example: Bessel's Equation

    x²y'' + xy' + (x² − ν²)y = 0

As written, this is not a Sturm–Liouville problem. Therefore, we shall apply the following substitution:

    u = x/λ,  i.e.  x = λu
    y' = dy/dx = (1/λ) dy/du
    y'' = (1/λ²) d²y/du²

    (x²/λ²) d²y/du² + (x/λ) dy/du + (x² − ν²)y = 0
    u² d²y/du² + u dy/du + (λ²u² − ν²) y = 0

Divide by u:

    u d²y/du² + dy/du + ( λ²u − ν²/u ) y = 0
    i.e.   d/du [ u dy/du ] + [ −ν²/u + λ²u ] y = 0

And this is now a Sturm–Liouville problem. This is known as the parametrized Bessel's equation. We will discuss it in detail later, as we will need it to solve PDEs. Continuing:

    p(u) = u,    q(u) = −ν²/u,    r(u) = u

Because q(u) is discontinuous at u = 0, this is a singular Sturm–Liouville problem.

Definition 4.3  Each non-zero (non-trivial) solution of (4.53) is called an eigenfunction of the Sturm–Liouville problem corresponding to the eigenvalue λ.
4.10.2 Example

Find the eigenfunctions and eigenvalues of the Sturm–Liouville problem

    y'' + λy = 0,   y(0) = y(2π) = 0.

Here p(x) = r(x) = 1 and q(x) = 0, so this is a regular Sturm–Liouville problem. Our characteristic equation is

    m² + λ = 0   ⟹   m² = −λ.

As usual, we have three cases to consider:

1. λ = −μ² < 0  ⟹  m = ±μ and

       y(x) = c₁ cosh(μx) + c₂ sinh(μx).

   This generates the trivial solution.

2. λ = 0 also gives only the trivial solution.

3. λ = μ² > 0  ⟹  m = ±iμ and

       y(x) = c₁ cos(μx) + c₂ sin(μx)
       y(0) = 0   ⟹   c₁ = 0
       y(2π) = 0   ⟹   c₂ sin(2πμ) = 0
       ⟹   sin(2πμ) = 0   ⟹   μₙ = n/2,  n ≥ 1.

These are the eigenvalues:

    λₙ = μₙ² = n²/4

and the eigenfunctions corresponding to the eigenvalues are

    yₙ = sin(nx/2).
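To connect this with Result 4.3 below, one can check the orthogonality of these eigenfunctions directly (a verification added here; it is not in the notes). With weight r(x) = 1 and n ≠ m,

    ∫₀^{2π} sin(nx/2) sin(mx/2) dx
        = (1/2) ∫₀^{2π} [ cos((n − m)x/2) − cos((n + m)x/2) ] dx
        = (1/2) [ 2 sin((n − m)π)/(n − m) − 2 sin((n + m)π)/(n + m) ] = 0,

since n − m and n + m are non-zero integers.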
Result 4.3  There are four things we can draw from this:

1. If λₙ are the eigenvalues of a regular Sturm–Liouville problem, then each λₙ is real, λ₁ < λ₂ < ··· < λₙ < ···, and lim_{n→∞} λₙ = ∞.

2. There exists only one linearly independent eigenfunction corresponding to each eigenvalue of a regular Sturm–Liouville problem. (This does not hold for periodic boundary conditions.)

3. The eigenfunctions of regular Sturm–Liouville problems, and of singular Sturm–Liouville problems subject to the boundary condition

       lim_{x→b⁻} { p(x) [ yₙ(x)yₘ'(x) − yₘ(x)yₙ'(x) ] }
       − lim_{x→a⁺} { p(x) [ yₙ(x)yₘ'(x) − yₘ(x)yₙ'(x) ] } = 0           (4.54)

   are orthogonal w.r.t. the weight function r(x); i.e., if yₙ and yₘ are eigenfunctions corresponding to two different eigenvalues λₙ ≠ λₘ, then

       ∫_a^b r(x) yₙ(x) yₘ(x) dx = 0.

4. If f(x) is a piecewise smooth function on [a, b], then

       f(x) = Σ_{n=1}^∞ Aₙ yₙ(x)                                         (4.55)

   at every point of continuity, where yₙ(x) are the eigenfunctions of a regular Sturm–Liouville problem and

       Aₙ = [ ∫_a^b f(x) yₙ(x) r(x) dx ] / [ ∫_a^b yₙ²(x) r(x) dx ]      (4.56)
4.10.3 Example

Solve

    u_t = u_xx
    u_x(0, t) = 0
    u_x(1, t) = −u(1, t)
    u(x, 0) = x

We shall use the only method of solving this we have learned so far, the method of separation of variables, where we seek a solution of the form u(x, t) = X(x)T(t):

    XT' = X''T   ⟹   T'/T = X''/X = k = −μ² < 0

    X'' + μ²X = 0,    T' + μ²T = 0                                       (∗)
    X'(0) = 0,    X'(1) + X(1) = 0

Let's find X. The characteristic equation is given by

    m² + μ² = 0   ⟹   m = ±iμ
    X(x) = c₁ cos(μx) + c₂ sin(μx)
    X'(x) = −c₁μ sin(μx) + c₂μ cos(μx)
    X'(0) = 0  ⟹  c₂ = 0
    ⟹   X(x) = c₁ cos(μx)       (first B.C.)

    X'(1) + X(1) = 0
    ⟹   −c₁μ sin μ + c₁ cos μ = 0
    ⟹   cot μ = μ

Let y₁(μ) = cot μ and y₂(μ) = μ, and call their points of intersection μ₁, μ₂, μ₃, .... We have infinitely many points of intersection μₙ, n ≥ 1. Thus

    X(x) = Xₙ(x) = cos(μₙ x),  n ≥ 1.

Now find T:

    Tₙ' + μₙ² Tₙ = 0   ⟹   Tₙ(t) = αₙ e^(−μₙ²t)

    uₙ(x, t) = αₙ e^(−μₙ²t) cos(μₙ x)

    u(x, t) = Σ_{n=1}^∞ αₙ e^(−μₙ²t) cos(μₙ x)

    x = Σ_{n=1}^∞ αₙ cos(μₙ x)        (I.C.)

Notice the following about (∗): it can be written as

    [ 1·X' ]' + [ 0 + μ²·1 ] X = 0,    X'(0) = 0,   X'(1) + X(1) = 0

with

    p(x) = 1 > 0,   q(x) = 0,   r(x) = 1 > 0,   all continuous on [0, 1].

We write it in this form so that it is consistent with the Sturm–Liouville problem, and we can now confirm that it is an RSL. Thus the eigenfunctions Xₙ(x) = cos(μₙ x), associated with the eigenvalues

    k = −μ² = −μₙ²,

satisfy the orthogonality property given by result 3. This implies

    ∫₀¹ cos(μₙ x) cos(μₘ x) dx = 0   when n ≠ m.

By equation (4.56):

    αₙ = [ ∫₀¹ x cos(μₙ x) dx ] / [ ∫₀¹ cos²(μₙ x) dx ]
       = 2 ( μₙ sin μₙ + cos μₙ − 1 ) / ( μₙ² + μₙ sin μₙ cos μₙ )        (exercise)
4.11 The Parametrized Bessel's Equation

Consider the parametrized Bessel's equation given by

    u d²y/du² + dy/du + ( λ²u − m²/u ) y = 0                             (4.57)

This equation was generated from Bessel's equation using the substitution x = λu. (4.57) is a singular Sturm–Liouville problem (SSL).

The solution of Bessel's equation is

    y(x) = c₁ J_m(x) + c₂ Y_m(x),   m ∈ Z.

Thus, the solution of (4.57) is

    y(u) = c₁ J_m(λu) + c₂ Y_m(λu).

Consider the boundary conditions

    y'(a) = −l y(a)                                                      (4.58)
    y(a) = 0                                                             (4.59)

where each of these denotes a different case, case 1 and case 2, respectively.

By property 4 of the handout for Bessel functions,

    lim_{u→0⁺} Y_m(u) = −∞     (not good)

so we set c₂ = 0 to eliminate that possibility. We are then left with

    y(u) = c₁ J_m(λu).

Let us consider first (4.59):

    y(a) = 0 = c₁ J_m(λa).

Assume c₁ ≠ 0; then

    J_m(λa) = 0.

We need infinitely many λ's that satisfy this equation. This is true (it won't be proven here).

Next we solve (4.57) subject to (4.58):

    y'(u) = c₁ λ J_m'(λu)
    c₁ λ J_m'(λa) = −l c₁ J_m(λa)        (c₁ ≠ 0)
    ⟹   λ J_m'(λa) = −l J_m(λa),   i.e.  h₁(λ) = h₂(λ)

where h₁(λ) ≡ λ J_m'(λa) and h₂(λ) ≡ −l J_m(λa).

Result 4.4  There are infinitely many λ = λ_mn, n ≥ 1, at which h₁ and h₂ intersect. Thus the eigenfunctions are yₙ = J_m(λ_mn u), corresponding to the eigenvalues λ_mn.

Result 4.5  More generally:

1. There are eigenfunctions J_m(λ_mn u) corresponding to the eigenvalues λ_mn for both cases.

2. Two different eigenfunctions satisfy the orthogonality property

       ∫₀^a u J_m(λ_mn u) J_m(λ_mk u) du = 0     (k ≠ n)                 (4.60)

3. We have

       ∫₀^a u J_m²(λ_mn u) du = (a²/2) J²_{m+1}(λ_mn a)                  (4.61)

   This is derived in the last problem of assignment 11.

4. (The most important result.)  Equations (4.54) and (4.55) are satisfied by the eigenfunctions yₙ = J_m(λ_mn u), even though (4.57) is an SSL. In other words, for a piecewise smooth function f(x) we have

       f(x) = Σ_{n=1}^∞ αₙ J_m(λ_mn x)                                   (4.62)

       αₙ = [ ∫₀^a u f(u) J_m(λ_mn u) du ] / [ ∫₀^a u J_m²(λ_mn u) du ],   n ≥ 1   (4.63)

   This is known as the Bessel–Fourier expansion. Note that the denominator of (4.63) is given by (4.61). The expansion holds except at points of jump discontinuity, where the Gibbs phenomenon is observed.
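A minimal numerical sketch of (4.61)–(4.63) (mine, not from the notes), for case 2 where λ_mn a are the zeros of J_m. scipy.special provides both the Bessel functions and their zeros; m = 0, a = 1 and f(u) = 1 − u² are assumptions chosen to match the drum example of the next section.

    import numpy as np
    from scipy.special import jv, jn_zeros
    from scipy.integrate import quad

    m, a, N = 0, 1.0, 5
    f = lambda u: 1 - u**2                    # assumed function to expand
    lam = jn_zeros(m, N)/a                    # lambda_mn from J_m(lambda*a) = 0

    def alpha(n):
        """Fourier-Bessel coefficient, eq. (4.63), with denominator from (4.61)."""
        num, _ = quad(lambda u: u*f(u)*jv(m, lam[n]*u), 0, a)
        den = a**2/2*jv(m + 1, lam[n]*a)**2
        return num/den

    coeffs = [alpha(n) for n in range(N)]
    u = 0.3
    print(sum(c*jv(m, l*u) for c, l in zip(coeffs, lam)), f(u))   # partial sum vs f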
4.12 Radially Symmetric Conditions

Review  We used rectangular coordinates before. We shall now use polar coordinates to deal with membranes of circular shape. Remember that

    r² = x² + y²,    θ = arctan(y/x).

The Laplacian in polar coordinates is given by (after some tedious algebra)

    ∇²u = u_xx + u_yy  →  u_rr + (1/r) u_r + (1/r²) u_θθ

Consequently, the 2-d wave and heat equations are given by, respectively,

    u_tt = c² [ u_rr + (1/r) u_r + (1/r²) u_θθ ]
    u_t  = c² [ u_rr + (1/r) u_r + (1/r²) u_θθ ]

[Figure: a circular membrane of radius a, with polar coordinates (r, θ).]

Definition 4.4  If f(r, θ) and g(r, θ) (the initial shape and velocity of the membrane) are independent of θ, we say that the membrane is radially symmetric, implying u_θ = u_θθ = 0.

We need to solve

    u_tt = c² [ u_rr + (1/r) u_r ]
    u(a, t) = 0
    u(r, 0) = f(r)  ∧  u_t(r, 0) = g(r)
    r ≥ 0                                                                (4.64)

Again we use the powerful method of separation of variables, i.e., we seek a solution of the form

    u(r, t) = R(r)T(t).

Substituting:

    RT'' = c² [ R''T + (1/r) R'T ]
Dividing by c²RT:

    (1/c²) T''/T = [ R'' + (1/r)R' ] / R = −λ² < 0

Here the negative constant is chosen not to avoid the trivial solutions but to avoid unbounded solutions. This gives us the following ODEs:

    R'' + (1/r)R' + λ²R = 0   ⟹   rR'' + R' + λ²rR = 0,    R(a) = 0
    T'' + c²λ²T = 0

Notice that the R equation is the parametrized Bessel's equation of order zero, whose general solution is

    y(x) = c₁ J₀(λx) + c₂ Y₀(λx).

We remove the second solution due to the singularity of Y₀ at 0, so c₂ = 0. Using this result for R:

    R(r) = c₁ J₀(λr).

Now we can apply the boundary condition:

    R(a) = c₁ J₀(λa) = 0   ⟹   J₀(λa) = 0       (c₁ ≠ 0)

Now, we let α_0n, n ≥ 1, be the infinitely many positive zeros of J₀:

    λa = α_0n   ⟹   λ = λ_0n = α_0n / a.

These are the eigenvalues associated with the 0th order Bessel function, and

    Rₙ(r) = J₀(λ_0n r)

are, consequently, our eigenfunctions.

Now we solve the T equation:

    Tₙ'' + c²λ_0n² Tₙ = 0.

The characteristic equation of this ODE is

    m² + c²λ_0n² = 0   ⟹   m = ±icλ_0n
    ⟹   Tₙ(t) = Aₙ cos(cλ_0n t) + Bₙ sin(cλ_0n t).

This gives us our nth normal mode

    uₙ(r, t) = Rₙ(r)Tₙ(t)

and our general solution

    u(r, t) = Σ_{n=1}^∞ J₀(λ_0n r) [ Aₙ cos(cλ_0n t) + Bₙ sin(cλ_0n t) ]   (4.65)

Now we can apply the initial conditions:

    u(r, 0) = f(r) = Σ_{n=1}^∞ Aₙ J₀(λ_0n r)

Oh Mamma Mia! We have the eigenfunction expansion for f in terms of J₀!

Reminder  This is the Fourier–Bessel expansion with Fourier–Bessel coefficients. From (4.61) and (4.63), Aₙ may be determined:

    Aₙ = 2 / [ a² J₁²(α_0n) ] ∫₀^a r f(r) J₀(λ_0n r) dr                  (4.66)

Now we determine the Bₙ's:

    u_t(r, 0) = Σ_{n=1}^∞ Bₙ cλ_0n J₀(λ_0n r) = g(r)

Instead of Bₙ itself, it is Bₙ cλ_0n that plays the role of the Fourier–Bessel coefficient; employing (4.61) and (4.63), and since λ_0n = α_0n / a,

    Bₙ = 2 / [ c α_0n a J₁²(α_0n) ] ∫₀^a r g(r) J₀(λ_0n r) dr            (4.67)

Now let's see an example.
4.12.1 Example

Solve the wave equation (4.64) subject to

    a = 1,   c = 10,   f(r) = 1 − r²,   g(r) = 1.

Because f and g are independent of θ, this is a radially symmetric case, and thus the solution is given by (4.65). Now we just have to find Aₙ and Bₙ. By (4.66),

    Aₙ = 2 / J₁²(α_0n) ∫₀¹ r(1 − r²) J₀(α_0n r) dr.

In order to evaluate this integral, we make the Bessel function a function of one variable only; this is accomplished by the substitution s = λ_0n r = α_0n r, ds = α_0n dr:

    ∫₀¹ r(1 − r²) J₀(α_0n r) dr
        = ∫₀^{α_0n} (s/α_0n)(1 − s²/α_0n²) J₀(s) (ds/α_0n)
        = (1/α_0n⁴) ∫₀^{α_0n} (α_0n² − s²) s J₀(s) ds

which we now integrate by parts with

    u = α_0n² − s²,     du = −2s ds
    dv = s J₀(s) ds,    v = s J₁(s)

Substituting back in (the boundary term vanishes):

    (1/α_0n⁴) { [ (α_0n² − s²) s J₁(s) ]₀^{α_0n} + 2 ∫₀^{α_0n} s² J₁(s) ds }
        = (2/α_0n⁴) [ s² J₂(s) ]₀^{α_0n}
        = (2/α_0n²) J₂(α_0n)

So Aₙ is given by

    Aₙ = 4 J₂(α_0n) / [ α_0n² J₁²(α_0n) ].

This can be simplified further using property 8 of the Bessel functions (on the handout):

    J₂(α_0n) = (2/α_0n) J₁(α_0n) − J₀(α_0n) = (2/α_0n) J₁(α_0n),

since the α_0n are the zeros of J₀, justifying the last step. This gives us the final result:

    Aₙ = 8 / [ α_0n³ J₁(α_0n) ].

Similarly, we can solve for Bₙ, which is left as an exercise:

    Bₙ = 1 / [ 5 α_0n² J₁(α_0n) ].
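Here is a minimal numerical sketch (mine, not part of the notes) that assembles this drum solution from (4.65) with the coefficients just computed; the truncation at 20 modes is an arbitrary choice.

    import numpy as np
    from scipy.special import j0, j1, jn_zeros

    a, c, N = 1.0, 10.0, 20
    alpha = jn_zeros(0, N)                 # zeros of J0; lambda_0n = alpha/a = alpha here

    A = 8/(alpha**3*j1(alpha))             # coefficients from the example
    B = 1/(5*alpha**2*j1(alpha))

    def u(r, t):
        """Truncated radially symmetric solution (4.65)."""
        return np.sum(j0(alpha*r)*(A*np.cos(c*alpha*t) + B*np.sin(c*alpha*t)))

    print(u(0.5, 0.0), 1 - 0.5**2)   # should be close to f(0.5) = 0.75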
4.13 Laplace's Equation in Polar Coordinates

Laplace's equation in polar coordinates is given by

    ∇²u = u_rr + (1/r) u_r + (1/r²) u_θθ = 0
    u(r, 0) = u(r, 2π),    u_θ(r, 0) = u_θ(r, 2π)
    u(a, θ) = f(θ),    0 ≤ r < a,  0 ≤ θ < 2π
    f(θ + 2π) = f(θ)                                                     (4.68)

To solve this, we use the method of separation of variables; i.e., we seek a solution of the form

    u(r, θ) = R(r)Θ(θ).

Substituting into (4.68):

    R''Θ + (1/r) R'Θ + (1/r²) RΘ'' = 0.

Dividing by RΘ and multiplying by r²:

    (r²/R) [ R'' + R'/r ] = −Θ''/Θ = k
    ⟹   Θ'' + kΘ = 0
         r²R'' + rR' − kR = 0

Applying the boundary conditions:

    u(r, 0) = u(r, 2π)   ⟹   R(r)Θ(0) = R(r)Θ(2π)   ⟹   Θ(0) = Θ(2π)
    u_θ(r, 0) = u_θ(r, 2π)   ⟹   Θ'(0) = Θ'(2π)

Thus

    Θ'' + kΘ = 0,    Θ(0) = Θ(2π),   Θ'(0) = Θ'(2π).

These are called periodic boundary conditions, and they have something special about them. Continuing, the characteristic equation is

    m² + k = 0   ⟹   m² = −k.

We have three cases again:

1. k = −μ² < 0  ⟹  m = ±μ gives us the trivial solution. Note that, contrary to the previous three-case scenarios, this would usually play the role of case 3. Make sure to check every case.

2. k = 0  ⟹  m = 0, which gives us

       Θ(θ) = aθ + b.

   Applying the boundary conditions, we have

       Θ(0) = Θ(2π)   ⟹   b = 2πa + b   ⟹   a = 0   ⟹   Θ(θ) = b,

   and Θ'(θ) = 0, so the second boundary condition is always satisfied and b is arbitrary. We have infinitely many solutions for Θ(θ); we only need one, so we choose b = 1, yielding

       Θ(θ) = Θ₀(θ) = 1.

   The eigenvalue for this case is k₀ = μ₀² = 0, with corresponding eigenfunction 1.

3. k = μ² > 0  ⟹  m = ±iμ, which yields the solution

       Θ(θ) = c₁ cos(μθ) + c₂ sin(μθ).

   From Θ(0) = Θ(2π):

       c₁ = c₁ cos(2πμ) + c₂ sin(2πμ)
       ⟹   c₂ = c₁ (1 − cos 2πμ) / sin 2πμ                               (∗)

   From Θ'(0) = Θ'(2π):

       μc₂ = −c₁μ sin 2πμ + μc₂ cos 2πμ
       ⟹   c₂ = −c₁ μ sin 2πμ / (μ − μ cos 2πμ)                          (∗∗)

   By (∗) and (∗∗), remembering that c₁ ≠ 0:

       (1 − cos 2πμ) / sin 2πμ = −sin 2πμ / (1 − cos 2πμ)
       ⟹   (1 − cos 2πμ)² = −sin² 2πμ
       ⟹   1 − 2 cos 2πμ + cos² 2πμ = −sin² 2πμ
       ⟹   2 − 2 cos 2πμ = 0
       ⟹   cos 2πμ = 1
       ⟹   2πμ = 2πm,   m ∈ Z,  m ≥ 1.

   The eigenvalues are k = m². Now we look for the eigenfunctions. Note that the constants in the original Θ equation change as the eigenvalues change, so we subscript them:

       Θ(θ) = Θₘ(θ) = αₘ cos(mθ) + βₘ sin(mθ),   m ≥ 1.

   Note that we have two eigenfunctions for every eigenvalue. This is due to the fact that our boundary conditions are periodic. Cases 2 and 3 can be merged into a single equation by extending m:

       Θₘ(θ) = αₘ cos(mθ) + βₘ sin(mθ),   m ≥ 0.

Now let's look for R(r):

    r²Rₘ'' + rRₘ' − m²Rₘ = 0        (k = m²)

This DE is Euler's equation. The characteristic equation is given by

    λ(λ − 1) + λ − m² = 0   ⟹   λ² = m²   ⟹   λ = ±m.

This is the first case for Euler's equation:

    Rₘ(r) = Cₘ rᵐ + Dₘ r⁻ᵐ.

If m = 0, then

    R₀(r) = C₀ + D₀ ln r.

At r = 0 the Dₘ terms blow up, which is unrealistic, and therefore we set Dₘ = 0 for bounded solutions. Therefore, the solution for R is

    Rₘ(r) = Cₘ rᵐ,   m ≥ 0
          = γₘ (r/a)ᵐ,   where Cₘ = γₘ / aᵐ.

The mth normal mode is then

    uₘ(r, θ) = (r/a)ᵐ ( aₘ cos mθ + bₘ sin mθ ),   aₘ = γₘαₘ,  bₘ = γₘβₘ,

yielding our general solution

    u(r, θ) = Σ_{m=0}^∞ (r/a)ᵐ ( aₘ cos mθ + bₘ sin mθ )                 (4.69)

Applying the boundary condition at r = a:

    u(a, θ) = Σ_{m=0}^∞ ( aₘ cos mθ + bₘ sin mθ )
            = a₀ + Σ_{m=1}^∞ ( aₘ cos mθ + bₘ sin mθ )

Note that this form is exactly the same as for the Fourier series (i.e., (3.2)). Now we apply (3.3), (3.4), (3.5) to find the constants.
4.13.1 Example

Solve the boundary value problem (4.68) given that

    f(θ) = π − θ   if 0 ≤ θ < π
         = 0       if π ≤ θ < 2π

The solution u is given by (4.69), and the constants are:

    a₀ = (1/2π) ∫₀^{2π} f(θ) dθ = (1/2π) ∫₀^π (π − θ) dθ = π/4

    aₘ = (1/π) ∫₀^{2π} f(θ) cos(mθ) dθ
       = (1/π) ∫₀^π (π − θ) cos(mθ) dθ
       = −[ (−1)ᵐ − 1 ] / (πm²)

    bₘ = (1/π) ∫₀^{2π} f(θ) sin(mθ) dθ = 1/m
4.13.2 Example

[Figure: a wedge of the disk bounded by the two rays θ = α and θ = β.]

We shall now consider θ bounded between two angles α and β. The two edges generated at θ = α and θ = β will be held at zero.

Solve Laplace's equation over the wedge 0 < r < 1, 0 < θ < π/4, subject to

    u(r, 0) = u(r, π/4) = 0
    ∂u/∂r (1, θ) = sin θ
    a = 1

We seek a solution of the form

    u(r, θ) = R(r)Θ(θ).

Substituting in:

    R''Θ + (1/r) R'Θ + (1/r²) RΘ'' = 0.

Rearranging:

    (r²/R) [ R'' + (1/r) R' ] = −Θ''/Θ = k = μ² > 0

Cases 1 and 2 generate the trivial solution here, unlike before (check). So

    Θ'' + μ²Θ = 0,    Θ(0) = Θ(π/4) = 0.

As an exercise, check that this is an RSL (it is).

    Θ(θ) = c₁ cos(μθ) + c₂ sin(μθ);   Θ(0) = 0 ⟹ c₁ = 0,  so  Θ(θ) = c₂ sin(μθ)
    Θ(π/4) = 0 = c₂ sin(μπ/4)   ⟹   sin(μπ/4) = 0
    ⟹   μπ/4 = mπ,  m ≥ 1   ⟹   μ = μₘ = 4m,  m ≥ 1.

Remember that our eigenvalues are k = μ², and

    Θ(θ) = Θₘ(θ) = sin(4mθ).

For R:

    (r²/R) [ R'' + (1/r) R' ] = 16m²
    ⟹   r²Rₘ'' + rRₘ' − 16m²Rₘ = 0       (an SL problem; check)
    λ(λ − 1) + λ − 16m² = 0   ⟹   λ = ±4m,  m ≥ 1.

This is Euler's equation, so the solution is

    Rₘ(r) = Cₘ r^{4m} + Dₘ r^{−4m}.

We cancel out Dₘ for a bounded solution at r = 0, so our solution is given by

    u(r, θ) = Σ_{m=1}^∞ Cₘ r^{4m} sin(4mθ).

Applying the Neumann data at r = 1:

    ∂u/∂r (1, θ) = Σ_{m=1}^∞ 4mCₘ sin(4mθ) = sin θ.

This is the half range sine series expansion of sin θ on (0, π/4), with coefficients βₘ = 4mCₘ:

    Cₘ = 1/(4m) · (8/π) ∫₀^{π/4} sin θ sin(4mθ) dθ
       = (2/(πm)) ∫₀^{π/4} sin θ sin(4mθ) dθ
       = (4√2/π) (−1)^{m+1} / (16m² − 1)
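A quick numerical cross-check of this last coefficient formula (my own verification, not from the notes):

    import numpy as np
    from scipy.integrate import quad

    def C(m):
        val, _ = quad(lambda t: np.sin(t)*np.sin(4*m*t), 0, np.pi/4)
        return 2/(np.pi*m)*val

    for m in (1, 2, 3):
        closed = 4*np.sqrt(2)/np.pi*(-1)**(m + 1)/(16*m**2 - 1)
        print(m, C(m), closed)      # the two columns agree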
4.14 Non-Homogenous PDEs

Suppose that we want to solve

    u_tt = c²u_xx + F(x, t)    ← non-homogeneous wave equation
    u_t  = c²u_xx + F(x, t)    ← non-homogeneous heat equation

Steps for solving these equations:

1. Find the eigenvalues and eigenfunctions of the Sturm–Liouville problem associated with the homogeneous PDE.

2. Use an eigenfunction expansion built from step one to solve the non-homogeneous PDE.

4.14.1 Example

Solve the one dimensional heat equation

    u_t = u_xx + x e^{−t}
    u(0, t) = 0,   u_x(1, t) + u(1, t) = 0
    u(x, 0) = 0

1. Find the Sturm–Liouville problem associated with the homogeneous PDE

       u_t = u_xx
       u(0, t) = 0;   u_x(1, t) + u(1, t) = 0

   To find u, we seek a solution of the form u = X(x)T(t). Substituting into the DE:

       XT' = X''T   ⟹   X''/X = T'/T = −μ² < 0      (to avoid trivial solutions)

   This is a Sturm–Liouville problem of order 2 (look at the X derivative), written as

       X'' + μ²X = 0
       X(0) = 0;   X'(1) + X(1) = 0

   The characteristic equation is given by

       m² + μ² = 0   ⟹   m = ±iμ
       X(x) = c₁ cos(μx) + c₂ sin(μx);    X(0) = 0  ⟹  c₁ = 0
       X(x) = c₂ sin(μx)
   Applying the second boundary condition:

       X'(1) + X(1) = 0   ⟹   c₂μ cos μ + c₂ sin μ = 0   ⟹   tan μ = −μ

   Notice we again have infinitely many solutions μₙ, n ≥ 1, with eigenfunctions Xₙ(x) = sin(μₙ x).

2. Now we solve the non-homogeneous PDE by seeking a solution built from these eigenfunctions. Since the coefficients must also depend on t, we write

       u(x, t) = Σ_{n=1}^∞ αₙ(t) Xₙ(x) = Σ_{n=1}^∞ αₙ(t) sin(μₙ x)

   Now we find αₙ(t) for all n ≥ 1:

       u_t  = Σ_{n=1}^∞ αₙ'(t) sin(μₙ x)
       u_xx = −Σ_{n=1}^∞ αₙ(t) μₙ² sin(μₙ x)

   and substitute into the non-homogeneous PDE.

   Reminder  The boundary conditions are automatically satisfied by this eigenfunction expansion (each Xₙ satisfies them); the initial condition still has to be imposed.

       Σ_{n=1}^∞ αₙ'(t) sin(μₙ x) = −Σ_{n=1}^∞ αₙ(t) μₙ² sin(μₙ x) + x e^{−t}     (∗∗)

   Call the forcing term x e^{−t} on the right (∗). Let's find the eigenfunction expansion of (∗), as it is not yet an infinite sum — we need it to be a summation so that we can manipulate it more easily:

       x e^{−t} = Σ_{n=1}^∞ γₙ(t) sin(μₙ x)

   This is the generalized Fourier series for x e^{−t}. Continuing,

       γₙ(t) = e^{−t} ∫₀¹ x sin(μₙ x) dx
             = e^{−t} [ (−μₙ cos μₙ + sin μₙ) / μₙ² ]  ≡  cₙ e^{−t}

   Now we substitute into (∗∗):

       Σ_{n=1}^∞ αₙ'(t) sin(μₙ x) = −Σ_{n=1}^∞ αₙ(t) μₙ² sin(μₙ x) + Σ_{n=1}^∞ cₙ e^{−t} sin(μₙ x)

       ⟹   Σ_{n=1}^∞ [ αₙ'(t) + αₙ(t)μₙ² − cₙ e^{−t} ] sin(μₙ x) = 0

   Since this is true for all values of x, the bracketed expression must be equal to zero, i.e.

       αₙ' + μₙ²αₙ − cₙ e^{−t} = 0

   which is a first order linear ODE; let's find the integrating factor and solve it:

       I(t) = e^{∫ μₙ² dt} = e^{μₙ²t}

   Multiply the DE by I:

       ( e^{μₙ²t} αₙ )' = cₙ e^{(μₙ²−1)t}
       e^{μₙ²t} αₙ = ∫ cₙ e^{(μₙ²−1)t} dt = [ cₙ/(μₙ² − 1) ] e^{(μₙ²−1)t} + kₙ
       αₙ(t) = [ cₙ/(μₙ² − 1) ] e^{−t} + kₙ e^{−μₙ²t}

   We are still left with kₙ, so we use the initial condition: u(x, 0) = 0 means Σ αₙ(0) sin(μₙ x) = 0 for all x, hence, by orthogonality,

       αₙ(0) = 0   ⟹   kₙ = −cₙ/(μₙ² − 1),

   so that, finally,

       αₙ(t) = [ cₙ/(μₙ² − 1) ] ( e^{−t} − e^{−μₙ²t} )

   and u(x, t) = Σ_{n=1}^∞ αₙ(t) sin(μₙ x).
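As a small consistency check (mine, not from the notes), sympy confirms that this αₙ(t) solves the coefficient ODE with the right initial value; μ and cₙ are left symbolic.

    import sympy as sp

    t = sp.symbols('t', positive=True)
    mu, cn = sp.symbols('mu c_n', positive=True)

    alpha = cn/(mu**2 - 1)*(sp.exp(-t) - sp.exp(-mu**2*t))

    ode_residual = sp.simplify(sp.diff(alpha, t) + mu**2*alpha - cn*sp.exp(-t))
    print(ode_residual)        # 0
    print(alpha.subs(t, 0))    # 0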


. 2. consider the sawtooth function: f (x) = x ∧ f (x + 3) f (x) is 3-periodic. n − 1.Chapter 3 Fourier Series Fourier series are expressions involving sin and cos of x. k = 1. if − 1 ≤ x < 0 if 0 ≤ x < 2 ∧ f (x + 3) = f (x) Notice that at points c = 0. b]. as well as for every multiple of π. The above representation is not unique. (i. we have finite jumps (discontinuities). and π is therefore the fundamental period.1 A function f is said to be piecewise continuous on an interval [a. if there are at most a finite number of points xk . Observe that f (c− ) = lim− f (x) = 3 x→c and f (c+ ) = lim+ f (x) = 0 x→c Definition 3. . x. x ∈ R (3. . Now. f has limits at xk ) and is continuous on each open interval (tk . f (x) = x + 3. tk+1 ). 2. rather than powers of x. For example. 3. . . . . 19 . ±3.1 Periodic Functions f (x + T ) = f (x). n. ±6 .e. . . h(x) = sin(2x) is a periodic function that is π periodic. k = 1. xk ) at which f has finite discontinuities.1) A function f is called T-periodic (or periodic of period T ) if T is called the fundamental period if T is the smallest positive number satisfying this equation. (xk−1 . . . The goal is to approximate functions using both sin and cos x.

2 1. b 2. though the statement is redundant in that if f is continuous.1 Example Define the function [x] as the greatest integer less than or equal to x: Examples: [1.2 A function f is said to be piecewise differentiable if f and f ′ are piecewise continuous. and as well.3] = 0 [2] = 2 This is the step function.3] = −1 [0.1. and f is piecewise differentiable. f (x) = x ∧ f (x + 3) = r f ′ (x) = 1∀x = 3n. Now consider g(x) = x − [x] g(x + 1) = x + 1 − [x + 1] = x + 1 − ([x] + 1) = x − [x] Which makes g(x) a 1-periodic function. n ∈ Z the latter term makes it piecewise continuous.2] = 1 [−0. 3.3 The two functions f ∧ g are orthogonal on [a. this function is piecewise continuous.1 A T-periodic function f is piecewise continuous in R if f is piecewise continuous on every interval [a.20 CHAPTER 3. Definition 3.1. f ′ is automatically piecewise continuous. b] if b f (x)g(x)dx = 0 a . For example. b] ∈ R 3. FOURIER SERIES Result 3. Properties Let f be T -periodic. Definition 3. a+T b+T f (x)dx = a b f (x)dx ∀a.

PERIODIC FUNCTIONS Applications (a) R 21 Let m. If f (−x) = −f (x). and therefore the whole integral is odd. n be nonnegative integers ∀m. If f (−x) = f (x). and over a symmetric interval. ∀x =⇒ f is an odd function ≡ that f is symmetric about the origin. n =⇒ cos(mx) sin(mx)dx −R cos is an even function. .1. π ∀m = n =⇒ sin(mx) sin(nx)dx = 0 −π =⇒ sin(mx) ⊥ sin(nx) (d) π π cos2 (mx)dx = π = −π −π sin2 (mx)dx Apply the double angle formula to generate the answer π.3. sin is an odd function. the integral therefore evaluates to 0. ∀x =⇒ f is an even function ≡ that f is symmemtric around the y-axis. 2. Recall: 1. n =⇒ (b) cos(mx) sin(nx)dx = 0 −R π ∀m = n =⇒ =2 cos(mx) cos(nx)dx −π π cos(mx) cos(nx)dx 0 Using the identity cos a cos b = 1 [cos(a + b) − cos(a − b)] 2 (c) We find that cos mx ∧ cos nx are orthogonal. R ∀m.

Taylor series approximate functions via: f (x) = f [n] (x0 ) (x − x0 )n n! n=0 ∞ Now how about using other types of series. bn = 1 π π f (x) sin(nx)dx = −π 1 π 2π f (x) sin(nx)dx 0 (3. ∀m and integrate: π f (x) cos(mx)dx −π = = $ −π ∞ $ $$$ $ cos(nx)dx + $a0 $ π π ∞ π an cos(nx) cos(mx)dx + n=1 −π $ $ $$$ b n $$$ $ sin(nx) cos(mx)dx −π $ $$ π an cos(nx) cos(mx)dx n=1 −π =⇒ am = πam 1 π = f (x) cos(mx)dx π −π 1 π π =⇒ an = f (x) cos(nx)dx = −π 1 π 2π f (x) cos(nx)dx 0 (3. Notice that : π π ∞ π f (x)dx = −π −π π a0 dx + n=1 ∞ −π an cos(nx)dx + π $ $$ a cos(nx)dx 2 $ $$$ $ n $0 $ $$$ $$ $bn sin(nx)dx $ −π π = −π a0 dx + n=1 = 2πa0 π 1 =⇒ a0 = f (x)dx 2π −π a0 = 1 2π 2π f (x)dx 0 (3.5) Memorize these formulae.2) by cos mx.22 CHAPTER 3. a0 .3) Multiply (3. FOURIER SERIES 3. . bn .2) where f (x) is taken to be 2π-periodic. an . such as Fourier series: ∞ f (x) = a0 + n=1 [an cos(nx) + bn sin(nx)] (3.2) by sin mx similarly (excercise).4) To find the bn s we can multiply (3. n ≥ 1 are to be determined.

−4 n2 π . and as well. =⇒ an = a0 = 0. f is piecewise continuous. ∀n Observation 1.3. and find its Fourier series. ∀n Thus: f (x) = π −4 + cos [(2n + 1)x] 2 n=0 (2n + 1)2 π ∞ which is the Fourier series for f (x) = |x| .∀n 2. Integration by parts 0 if n is even if n is odd =⇒ a2n+1 = −4 . piecewise differentiable. f is even: a0 = Continuing: 1 π π f (x)dx −π 1 π π f (x)dx = 0 π 2 f (x) cos(nx)dx = −π 2 π π f (x) cos(nx)dx = 0 2 π π x cos(nx)dx 0 π 2 π 2 = π 2 = π = = x 1 sin(nx) + 2 cos(nx) n n cos nπ 1 − 2 n2 n (−1)n 1 − 2 n2 n 0. If f is even =⇒ bn = 0. If f is odd.2. EXAMPLE 23 3. 1 2π π a0 = From above.2 Example Consider the function f (x) = |x| if −π ≤ x ≤ π and f (x + 2π) = f (x).n ≥ 0 (2n + 1)2 π From the above equations: bn = 1 π π −π f (x) sin(nx)dx = 0.

3.3 Uniform Convergence

If we let

    S_N = π/2 − 4/π Σ_{n=0}^{N} cos[(2n+1)x]/(2n+1)²

and

    lim_{N→∞} S_N = S_∞ = f(x)   for all x ∈ R,

then the convergence is known as uniform convergence.

3.3.1 Example

Find the Fourier series for the function

    g(x) = −c if −π ≤ x ≤ 0,   c if 0 ≤ x ≤ π,   c > 0,   g(x + 2π) = g(x),

which produces the square wave function; it is an odd, piecewise differentiable function, and at the points p = kπ, k ∈ Z, it jumps between ±c. Because g is an odd function, a_0 = a_1 = · · · = a_n = 0 for all n ≥ 1. Recall

    b_n = 1/π ∫_{−π}^{π} g(x) sin(nx) dx = 2/π ∫_0^{π} g(x) sin(nx) dx = 2c/π ∫_0^{π} sin(nx) dx
        = 2c/π [ −cos(nx)/n ]_0^{π} = 2c/π [ 1/n + (−1)^{n+1}/n ].

This implies that all even subscripts are zero; thus we only deal with the odd subscripts b_{2n−1}, which generate

    b_{2n−1} = 4c/(π(2n−1)),   n ≥ 1.

For the Fourier series:

    S_∞ = a_0 + Σ_{n=1}^{∞} ( a_n cos(nx) + b_n sin(nx) ) = Σ_{n=1}^{∞} 4c/(π(2n−1)) sin[(2n−1)x].

Because there are points of discontinuity, the convergence cannot be uniform there. Suppose that we define g(p) = 0 for every p = kπ. Notice that at such a point

    S_∞(p) = 0 = [ g(p⁺) + g(p⁻) ] / 2,

i.e. 0 represents the midpoint between c and −c, and in particular S_∞(0) = 0; with this choice S_∞ = g(x) for all x.

3.4 Fourier's Theorem

Suppose that
1. f is a 2π-periodic function, and
2. f is piecewise smooth on each interval of length 2π.
Then, at all points of continuity x, the Fourier series evaluated at x converges to f(x). At the finitely many points of discontinuity P_k, the Fourier series evaluated at P_k converges to

    [ f(P_k⁺) + f(P_k⁻) ] / 2.

Thus if f is continuous for all x, then the Fourier series converges uniformly for all x.

Observation
If p is a point of discontinuity, the Fourier series converges to the mean of the jump,

    [ f(p⁺) + f(p⁻) ] / 2,

while the partial sums overshoot near p; this overshoot is called the Gibbs phenomenon.

3.4.1 Example

Let

    h(x) = −½x − c if −π ≤ x ≤ 0,   ½x + c if 0 ≤ x ≤ π,   c > 0,   h(x + 2π) = h(x).

Observe that h(x) = ½ f(x) + g(x), where f(x) = |x| and g is the square wave above, which implies that the Fourier series of h(x) is the linear combination generated by the Fourier series of f and g:

    S_N = ½ [ π/2 − 4/π Σ_{n=0}^{N} cos[(2n+1)x]/(2n+1)² ] + Σ_{n=1}^{N} 4c/(π(2n−1)) sin[(2n−1)x],

so that

    S_∞ = π/4 + Σ_{n=0}^{∞} [ −2/(π(2n+1)²) cos[(2n+1)x] + 4c/(π(2n+1)) sin[(2n+1)x] ].

3.5 Extending Fourier series to 2L-periodic functions

Let f(x) be a piecewise smooth 2L-periodic function, f(x + 2L) = f(x). Define g(x) = f(Lx/π). Then

    g(x + 2π) = f( L(x + 2π)/π ) = f( Lx/π + 2L ) = f( Lx/π ) = g(x),

so g(x) is a 2π-periodic piecewise smooth function, which implies that the Fourier theorem applies:

    g(x) = a_0 + Σ_{n=1}^{∞} ( a_n cos(nx) + b_n sin(nx) ).

Let x̄ = Lx/π, i.e. x = πx̄/L. Then

    f(x̄) = a_0 + Σ_{n=1}^{∞} [ a_n cos(nπx̄/L) + b_n sin(nπx̄/L) ].

This implies that the Fourier series of f is given by

    f(x) = a_0 + Σ_{n=1}^{∞} [ a_n cos(nπx/L) + b_n sin(nπx/L) ]

with coefficients

    a_0 = 1/(2L) ∫_{−L}^{L} f(x) dx = 1/(2L) ∫_0^{2L} f(x) dx,                            (3.6)

    a_n = 1/L ∫_{−L}^{L} f(x) cos(nπx/L) dx = 1/L ∫_0^{2L} f(x) cos(nπx/L) dx,            (3.7)

    b_n = 1/L ∫_{−L}^{L} f(x) sin(nπx/L) dx = 1/L ∫_0^{2L} f(x) sin(nπx/L) dx,  n ≥ 1.    (3.8)

These are the same formulae as (3.3)–(3.5) with π replaced by L; they are more general. Memorize these formulae.

e.6. −1 ≤ x < 1 Consider the function and f (x + 2) = f (x). n ∈ Z. . i. x ∈ R. let f be a piecewise smooth function. and 2L-periodic. =⇒ L = 1.5.1 Example f (x) = x2 . ∀ {n ≥ 1} Find a0 . We need to find the Fourier series of f on (0. therfore: bn = 0.6): a0 = By (3. this implies that the Fourier series converges uniformly for all x. thereby making it symmetric around the y-axis. Find its Fourier series. an . L). HALF EXPANSIONS 27 3. and f is continuous for all x. n ≥ 1. We can do this in two different ways: 1.6 Half Expansions Suppose that f is defined over a finite interval (0. Half range cosine series expansion: Expanding the function to make it even. effectively extending the domain to (−∞. This can be done by mirroring the function along y = nL.7): 1 1 1 2 x2 dx = −1 1 3 an = −1 x2 cos(nπx)dx 1 an = 2 0 x2 cos(nπx)dx integrating by parts: an = 2 x2 2 2x sin(nπx) + 2 2 cos(nπx) − 3 3 sin(nπx) nπ n π n π n n (−1) 4 (−1) 2 =2 = 2 2 n2 π 2 n π 4 1 + 3 π2 ∞ 1 0 f (x) = n=1 (−1)n cos(nπx) n2 3. To remedy this. By Equation (3. therefore f is an even function.3. Call this new function f1 (x). ∞). f is 2-periodic. One of the chief requirements for Fourier series is the requirement of the continuity of the function. f is symmetric around the y-axis. L).

For the even extension we set

    f_1(x) = f(x), x ∈ (0, L);   f_1(x) = f(−x), x ∈ (−L, 0);   f_1(x + 2L) = f_1(x).

It is an even function, and therefore the Fourier theorem is applicable:

    f_1(x) = a_0 + Σ_{n=1}^{∞} a_n cos(nπx/L),                                        (3.10)

    a_0 = 1/(2L) ∫_{−L}^{L} f_1(x) dx = 1/L ∫_0^{L} f(x) dx,                          (3.11)

    a_n = 1/L ∫_{−L}^{L} f_1(x) cos(nπx/L) dx = 2/L ∫_0^{L} f(x) cos(nπx/L) dx.       (3.12)

Because f_1(x) = f(x) in the interval (0, L), this implies that the Fourier series of f is

    f(x) = a_0 + Σ_{n=1}^{∞} a_n cos(nπx/L),   x ∈ (0, L),

except at points of discontinuity.

2. Half range sine series expansion: we construct an odd 2L-periodic piecewise smooth function f_2(x) which is an extension of f:

    f_2(x) = f(x), x ∈ (0, L);   f_2(x) = −f(−x), x ∈ (−L, 0);   f_2(x + 2L) = f_2(x).

It is an odd function, so the graph is reflected through the origin. Now we are ready to find the Fourier series of f_2:

    f_2(x) = Σ_{n=1}^{∞} b_n sin(nπx/L),                                              (3.13)

    b_n = 2/L ∫_0^{L} f(x) sin(nπx/L) dx,                                             (3.14)

because we know that in the interval (0, L), f_2(x) = f(x). This in turn implies

    f(x) = Σ_{n=1}^{∞} b_n sin(nπx/L),   for all x ∈ (0, L),

except at points of discontinuity. To minimize the effect of the Gibbs phenomenon at a point of discontinuity, we may set the value there to the average of the two one-sided limits.

3.6.1 Example

Consider the function f(x) = π, x ∈ (0, 1), and find the two half range expansions.

1. Find f_1(x) (even extension). Here f_1(x) = π for all x, so

    a_0 = ∫_0^{1} f(x) dx = π,    a_n = 2 ∫_0^{1} f(x) cos(nπx) dx = 2π ∫_0^{1} cos(nπx) dx = 0,  n ≥ 1.

This implies that the half range cosine series for f is just π. Amazing, indeed.

2. Find f_2(x) (odd extension). Here

    f_2(x) = π for x ∈ (0, 1),  −π for x ∈ (−1, 0),  0 at x = k, k ∈ Z,   f_2(x + 2) = f_2(x),

so f_2(x) becomes the square wave function that is 2-periodic. (To minimize the Gibbs phenomenon, we set the value at the discontinuities of this function to zero.) Now we find the Fourier series:

    b_n = 2 ∫_0^{1} f(x) sin(nπx) dx = 2π ∫_0^{1} sin(nπx) dx = 2/n + 2(−1)^{n+1}/n.

This implies that the even n's are zero, and

    b_{2n+1} = 4/(2n+1),   for all n ≥ 0,

and therefore

    f(x) = Σ_{n=0}^{∞} 4/(2n+1) sin[(2n+1)πx],   for all x ∈ (0, 1).
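A small numerical check of this last expansion (not in the notes; assumes numpy, and the name pi_sine_series is mine): away from the jumps at 0 and 1, the partial sums hover around π.

    import numpy as np

    def pi_sine_series(x, n_terms=200):
        """Half range sine expansion of f(x) = pi on (0,1): sum 4/(2n+1) sin((2n+1) pi x)."""
        n = np.arange(n_terms)
        return np.sum(4.0 / (2 * n + 1)[:, None] * np.sin(np.pi * np.outer(2 * n + 1, x)),
                      axis=0)

    x = np.linspace(0.05, 0.95, 5)   # stay away from the jumps of the odd extension
    print(pi_sine_series(x))          # all entries close to pi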

3.7 Error Analysis

Suppose that f is a 2L-periodic piecewise smooth (differentiable) function. The Fourier series of f is given by

    f(x) = a_0 + Σ_{n=1}^{∞} [ a_n cos(nπx/L) + b_n sin(nπx/L) ].

Consider the partial sum given by

    S_N = a_0 + Σ_{n=1}^{N} [ a_n cos(nπx/L) + b_n sin(nπx/L) ].

The pointwise error |f(x) − S_N| will not help much because of the Gibbs phenomenon: at the points of discontinuity this error would be huge. We shall therefore consider a different type of error, given by

    E_N = 1/(2L) ∫_{−L}^{L} ( f(x) − S_N )² dx.                                        (3.15)

This is called the mean-square error.

Observation  E_N ≥ 0, and the mean-square error overcomes the Gibbs phenomenon. We want E_N → 0 as N → ∞; if this occurs, then S_N is said to converge to, and therefore approximate, f in the mean. The question is: when does this convergence occur?

Definition 3.4  A function f is said to be square integrable on [a, b] if

    ∫_a^{b} f²(x) dx < ∞,                                                               (3.16)

i.e. the integral is finite. The set of all functions satisfying (3.16) is called the class of square integrable functions over [a, b]. The class of piecewise smooth functions is a subset of the class of square integrable functions.

Theorem 3.1  If f is a square integrable function on [−L, L], then the Nth partial sum S_N approximates f in the mean, i.e.

    lim_{N→∞} E_N = 0.

Observation  Suppose that f is continuous. Then the previous error |f(x) − S_N| will also generate "good" results, because the Gibbs phenomenon does not occur.

Theorem 3.2  If f is square integrable on [−L, L], then

    E_N = 1/(2L) ∫_{−L}^{L} f²(x) dx − a_0² − ½ Σ_{n=1}^{N} ( a_n² + b_n² ).            (3.17)

Corollary 1  Since E_N ≥ 0,

    1/(2L) ∫_{−L}^{L} f²(x) dx ≥ a_0² + ½ Σ_{n=1}^{N} ( a_n² + b_n² ),  for all N ≥ 1.  (3.18)

Corollary 2  Since lim_{N→∞} E_N = 0,

    1/(2L) ∫_{−L}^{L} f²(x) dx = a_0² + ½ Σ_{n=1}^{∞} ( a_n² + b_n² ).                  (3.19)

Inequality (3.18) is called Bessel's inequality and equality (3.19) is called Parseval's identity. Bessel's inequality is very important, but not very useful in this course.

3.7.1 Example

Consider the function

    f(x) = 1 for 0 < x < L,   −1 for −L < x < 0,   f(x) = f(x + 2L)

— another square wave function. f is odd, so a_0 = a_n = 0 for all n ≥ 1, and

    b_n = 2 [ (−1)^{n+1} + 1 ] / (nπ),   n ≥ 1.

The Fourier series of f is given by

    f(x) = 2/π Σ_{n=1}^{∞} [ (−1)^{n+1} + 1 ]/n · sin(nπx/L).

For even n's the coefficients are zero.

Let us find E_N:

    E_N = 1/(2L) ∫_{−L}^{L} f²(x) dx − a_0² − ½ Σ_{n=1}^{N} ( a_n² + b_n² )
        = 1/(2L) ∫_{−L}^{L} dx − ½ Σ_{n=1}^{N} b_n²
        = 1 − ½ Σ_{n=1}^{N} [ 2((−1)^{n+1} + 1)/(nπ) ]²
        = 1 − 2/π² Σ_{n=1}^{N} [ (−1)^{n+1} + 1 ]² / n².

So

    E_1 = 1 − 8/π² ≈ 0.189,   E_2 = E_1,   E_3 ≈ 0.099 = E_4.

Observe that E_N is decreasing and getting closer and closer to zero as N → ∞. Now we use Parseval's identity:

    1/(2L) ∫_{−L}^{L} f²(x) dx = a_0² + ½ Σ_{n=1}^{∞} ( a_n² + b_n² )

    ⟹  1 = 2/π² Σ_{n=1}^{∞} [ (−1)^{n+1} + 1 ]² / n² = 8/π² Σ_{n=0}^{∞} 1/(2n+1)²

    ⟹  Σ_{n=0}^{∞} 1/(2n+1)² = π²/8.
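Both the decreasing mean-square errors and the π²/8 identity are easy to confirm numerically. A minimal sketch (not part of the notes; assumes numpy, with L = 1 and the helper name E chosen here for illustration):

    import numpy as np

    # Mean-square error E_N of (3.15)/(3.17) for the odd square wave on (-1, 1).
    def E(N):
        n = np.arange(1, N + 1)
        b = 2 * ((-1.0) ** (n + 1) + 1) / (n * np.pi)
        return 1 - 0.5 * np.sum(b ** 2)

    print(E(1), E(2), E(3), E(4))            # about 0.189, 0.189, 0.099, 0.099

    # Parseval's identity (3.19) gives sum 1/(2n+1)^2 = pi^2/8:
    n = np.arange(0, 100000)
    print(np.sum(1.0 / (2 * n + 1) ** 2), np.pi ** 2 / 8)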

Chapter 4

Partial Differential Equations!

4.1 Classification

Recall that ODEs take the form

    F(x, y, y′, . . . , y^(n)) = 0,

where x is the only independent variable. Suppose instead that there are two or more independent variables; in this case we have to deal with partial differential equations (PDEs).

4.1.1 Example

    ∂²u/∂t² = c² ( ∂²u/∂x² + ∂²u/∂y² )

This is known as the two-dimensional wave equation. The unknown function is u = u(x, y, t); the independent variables are x, y and t.

1. Order: the order is the number of the highest derivative in the equation. E.g.

    (∂u/∂t)⁴ + ∂²u/∂x² = ∂³u/∂y³

is a 3rd order PDE; the 4 is a power, not an order.

2. Linearity: we want the unknown function u and its derivatives to appear in a linear fashion in order for the PDE to be linear. E.g.

    ∂u/∂t + u ∂u/∂x = f(x)

is not linear, because of the product u ∂u/∂x.

Examples

    ∂u/∂t = c² ( ∂²u/∂x² + ∂²u/∂y² )

is a linear 2nd order PDE; this specific equation is known as the two dimensional heat equation.

    iħ ∂ψ/∂t = −ħ²/(2m) ∂²ψ/∂x² + V(x)ψ

is also a linear 2nd order PDE; this is the Schrödinger Equation.

    u u_x + u_y = 0

is a non-linear 1st order PDE.

3. Homogeneity: search for one non-zero term that does not include the unknown function; if there is one, the PDE is non-homogeneous, otherwise it is homogeneous.

Example

    ∂²u/∂x² + ∂²u/∂y² = f(x, y)

This is a second order linear non-homogeneous PDE; this specific equation is the Poisson equation.

    ∂²u/∂x² + ∂²u/∂y² = 0

This is a second order linear homogeneous PDE; this is Laplace's Equation.

4.2 Initial Conditions (IC) and Boundary Conditions (BC)

Time t will carry the initial conditions, whereas the spatial variables will carry the boundary conditions. Suppose that the unknown function is u = u(x, t).

For example,

    u(x, t_0) = f(x)

is a zeroth order non-homogeneous linear initial condition, and

    u(x, t_0) − u_x(x, t_0) = 0

is also an initial condition (note the dependence on a fixed time t_0); this one is a first order linear homogeneous IC. For boundary conditions we might have

    u(0, t) = g_1(t),   u(L, t) = g_2(t),   0 ≤ x ≤ L.

These are boundary conditions, specifically zeroth order linear non-homogeneous boundary conditions.

    u(0, t) − u(L, t) u_x(L, t) = g(t)

is another example of a BC; this one is non-linear, non-homogeneous and of order 1.

4.3 Result

Theorem 4.1  If u_1 and u_2 are two solutions to a linear homogeneous PDE, then u = c_1 u_1 + c_2 u_2 is also a solution to that PDE. Moreover, if u_1 and u_2 satisfy a linear homogeneous boundary condition BC, then so does u = c_1 u_1 + c_2 u_2.

4.3.1 Example

Consider the PDE

    ∂u/∂x + ∂u/∂t = 0.

It is a linear homogeneous first order PDE and is easily solvable: apply the substitution α = ax + bt and β = cx + dt, where a, b, c, d are constants. Using the chain rule,

    ∂u/∂x = ∂u/∂α · ∂α/∂x + ∂u/∂β · ∂β/∂x = a u_α + c u_β,
    ∂u/∂t = b u_α + d u_β   (exercise).

36

CHAPTER 4. PARTIAL DIFFERENTIAL EQUATIONS!

Substituting back into the PDE: auα + cuβ + +buα + duβ = 0 =⇒ (a + b)uα + (c + d)uβ = 0 Since our constants are arbitrarily chosen, we may choose values such that it makes the PDE easier to solve, e.g. a = −b, k = c + d, giving: kuβ = 0 =⇒ uB = 0 =⇒ u = u(α) = u (a(x − t)) this is the general solution to the problem; any funciton satisfying the initial PDE must satisfy this new condition e.g. ea(x−t) ; ln(a(x − t)); sin(a(x − t)) are all solutions. There are infinitely many linearly independent solutions. Linear combinations of solutions are also solutions, since this is a linear homogenous PDE. This property only holds for linear homogenous PDEs; if any of the two conditions fail, the superposition property also fails.

4.4

One Dimensional Wave Equations

Consider an elastic (very flexible) string with fixed end points of length L (between the fixed points), with vertical motion u(x), x ∈ [0, L] where x is the position along the string. Horizontal motion is very small, and therefore will be neglected. This motion is called transverse. u(x, t) will be the position of a point x at a given time t. We need to find u. Apply Newton’s second law of motion. Fi = ma
i

(4.1)

The acceleration is defined as: ∂2u ∂t2 u(x, 0) = f (x) u(0, t) = u(L, t) = 0 a≡ 0<x<L I.C. B.C.

Let ρ be the mass density M . L Now we find the forces on the string: 1. Tensile force (tension) τ

4.4. ONE DIMENSIONAL WAVE EQUATIONS (a) This force τ is considered to be constant along the string. (b) τ is constant for all time. (c) The tensile force is tangental to the string.

37

2. External forces: such as damping, electromagnetic, gravitational, etc. We shall consider these forces per unit mass, i.e.: FE = mF = ρLF Consider a very small portion of the string between A = x and B = x+∆x. Let θA and θB be the angle between the tangental vector and the horizontal at A and B respectively. Solving for the vertical component of the tensile force: TA = −τ sin θA TB = τ sin θB F = ma −τ sin θA + τ sin θB + ρ∆xF = ρ∆x Now we make a few assumptions to simplify them. (a) θA and θB are both very small, implying: cos θA ≈ 1 ≈ cos θB sin θA ≈ tan θA The latter is just the slope of the line at the point A. sin θA = tan θA = sin θB = Now we substitute it back: −τ ∂u ∂2u ∂u (x, t) + τ (x + ∆x, t) + ρ∆xF = ρ∆x 2 ∂x ∂x ∂t ∂2u τ ∂u (x + ∆x, t) − ∂u (x, t) ∂x ∂x +F = ∂t2 ρ ∆x ∂u (x, t) ∂x ∂2u ∂t2

∂u (x + ∆x, t) ∂x

Let ∆x → 0 and c2 = τ ; this can be done because both τ and ρ are ρ positive. The units of c2 are velocity squared. ∂2u ∂2u = c2 2 + F ∂t2 ∂x

38

CHAPTER 4. PARTIAL DIFFERENTIAL EQUATIONS! which gives us the one dimensional wave equation with an external force (per unit mass F ) a.k.a. the forced one-dimensional wave equation. If F = 0, then we have the unforced wave equation: ∂2u ∂2u = c2 2 ∂t2 ∂x which is a linear second order homogenous PDE. Remark i. If F is produced by gravity, then: F = −g ii. F is produced by damping, which is proportional to velocity: F ∝ ∂u ∂u =⇒ F = −2k ∂t ∂t

We shall solve the wave equation.

4.4.1

Solving the Wave Equation
 ∂2u ∂2u   = c2 2    ∂t2 ∂x   u(0, t) = u(L, t) = 0 B.C.  ∂u  (x, 0) = g(x) I.C. u(x, 0) = f (x),   ∂t    0 < x < L, t ≥ 0

(4.2)

there is not one method to solve all PDEs; we shall try one method here, known as the method of separation of variables. We seek a solution of the form u(x, t) = X(x)T (t) which we can plug back into (3.16) ∂2u ∂u = X(x)T ′′ (t) = X(x)T ′ (t) =⇒ ∂t ∂t2 ∂2u = X ′′ (x)T (t) ∂x2 XT ′′ = c2 X ′′ T. =⇒ X ′′ 1 T ′′ = 2 =k X c T (4.3)

k is known as the separation constant. X(0) = 0 =⇒ c1 + 0 = 0 =⇒ c1 = 0 X(L) = 0 =⇒ c2 sinh µL = 0 =⇒ c2 = 0 =⇒ X(x) = 0 which gives us our trivial solution u(x. We now generate two equations out of this. X ′′ − kX = 0 ′′ 2 (4.5) T − c kT = 0 Now we have two homogenous second order ODEs to solve.4). Now we try to solve (4. The characteristic equations are: m2 − k = 0 There are three cases to consider 1. These are the boundary equations associated with (4.4). m = ±µ m2 = k . ONE DIMENSIONAL WAVE EQUATIONS 39 it has to be a constant because both sides are functions of different variables. u(x. uninteresting. t) = 0 is the trivial solution and is uninteresting). t) = u(L. t) = 0 again.4) (4. t) = 0 (because u(x.4.3): X(0)T (t) = X(L)T (t) = 0 =⇒ X(0) = X(L) = 0 because T (t) = 0 yields the trivial solution. it is a second order linear ODE. k = µ2 > 0 X(x) = c1 eµx + c2 e−µx ¯ ¯ = c1 cosh µx + c2 sinh µ u(0. t) = 0 By (4.4. 2. k = µ2 = 0 m1 = m2 = 0 =⇒ X(x) = c1 x + c2 X(0) = 0 =⇒ c2 = 0 X(L) = 0 =⇒ c1 0 =⇒ u = 0 again.

2) is linear. Let’s try sin µL = 0: µL = mπ. m ∈ Z mπ µ= L nπ µ = µn = . Now let’s solve (4. Then. t) = n=1 sin nπ x [αn cos λn t + βn sin λn t] L (4. Xn (x) = sin is the solution to (4. another trivial solution. we can ignore the negative numbers in m.n ∈ Z > 0 L Because the arbitrary constant can absorb the negative signs.2 = ±iµ We set c2 = 1 as we only need one solution out of this family.4).6) CE Tn (t) = αn cos(λn t) + βn sin(λn t) =⇒ un (x. The wave equation in (4.n ≥ 1 L (4.7) . m = 0. k = −µ2 < 0 CHAPTER 4.5): ′′ Tn + nπ x L cnπ L 2 Tn = 0.40 3. This implies ∞ u(x. our solution can be written as X(x) = Xn (x) = c2 sin nπ x L m1. n ≥ 1 Let λn = Continuing: ′′ Tn + λ2 Tn = 0 n m = ±iλn cnπ . PARTIAL DIFFERENTIAL EQUATIONS! X(x) = c1 cos µx + c2 sin µx X(0) = 0 =⇒ X(x) = c2 sin µx X(L) = 0 =⇒ c2 sin µL = 0 Oh noes. t) = Xn Tn nπ = sin x [αn cos λn t + βn sin λn t] L This is known as the nth normal mode.

8) ut (x. Now let’s apply the initial conditions. Find an expression for the subsequent motion. βn = 2 λn L L g(x) sin 0 nπ x dx L (4. ONE DIMENSIONAL WAVE EQUATIONS 41 is the solution to (4. The first mode is known as the fundamental mode.9) 4.4.2 d’Alembert Equation mπ x L m ∈ Z+ Suppose you have a string with initial shape f (x) = sin 0<x<L starting from rest. t) = n=1 sin nπ x [−λn αn sin (λn t) + λn βn cos (λn t)] L ∞ ut (x. 0) = g(x) = n=1 λn βn sin nπ x L Which gives us the half range sine series expansion for g.4. Because it is starting from rest: g(x) = 0 =⇒ βn = 0 by (4. u(x.9).2). whereas every other mode is called an overtone. Let’s find αn αn = 2 L =0 =1 L 0 ∀n ≥ 1 nπ mπ sin x dx L L if n = m otherwise sin . t) depends only on its nth normal mode.7): ∞ u(x.4. 0) = f (x) Now we sub this into (4. then u is said to follow its own nth normal mode. If u(x. 0) = g(x) ∞ (4. 0) = f (x) = n=1 αn sin nπ x L This is the half range sine series expansion of f . we can apply the Fourier series coefficient equation: αn = nπ 2 L x dx f (x) sin L 0 L ut (x.

x G(x) = a g ∗ (s)ds x+2L x G(x + 2L) − G(x) = = a x+2L g ∗ (s)ds − g ∗ (s)ds + g ∗ (s)ds a a g ∗ (s)ds x L a x+2L = x g ∗ (s)ds = −L g ∗ (s)ds =0 . (4. PARTIAL DIFFERENTIAL EQUATIONS! ∞ u(x. Consider the integral in (4.7): CHAPTER 4. f is periodic.11) where f ∗ and g ∗ are 2L-periodic odd extensions of f and g.11) is called d’Alembert equation.11). t) = 1 1 ∗ [f (x + ct) + f ∗ (x − ct)] + 2 2c x+ct g ∗ (s)ds x−ct (4. f is defined for all x.1 For a given string of length L whose initial shape is f (x) and initial velocity is g(x). In general. u(x. t) = n=1 sin nπ x L $ αn cos(λn t) + $$$$ t) βn sin(λn mπ x cos(λm t) L mπ mπ 1 sin x + λm t + sin x − λm t = 2 L L = sin Subbing in λm = cmπ L : mπ mπ 1 sin (x + ct) + sin (x − ct) 2 L L 1 = [f (x + ct) + f (x − ct)] 2 Two things we know about f : = 1. g(x) = 0.e.2) is given by u(x. for a string of length L and initial shape f (x) starting from rest.10) Result 4. t) = (4.42 By Equation (4. i. the solution to system (4. 2. we have 1 ∗ [f (x + ct) + f ∗ (x − ct] 2 The new function f ∗ is the 2L-periodic odd extension of f .

4. f ∗ (x) = f (x) f (x) = −f (−x) f (x + 2) = f ∗ (x)  1  −x − 1 −1 ≤ x < − 2 ∗ 1 1 x −2 ≤ x < 2 f (x) =  1 1−x 2 ≤x<1 ∗ x ∗ x ∈ (−1.12) 1 for x ∈ (0. Thus. ONE DIMENSIONAL WAVE EQUATIONS because it is an odd function. Then we apply d’Alembert’s method.12).11): u(x. Find the equation of the subsequent motion. x G(x) = −1 g ∗ (s)ds = x −1 0 −1 −πds = −πx − π x g ∗ (s)ds + 0 g ∗ (s)ds = π(x − 1) x<0 x>0 G(x + 2) = G(x) By (4. we have u(x.4. Recall: G(x) = a g ∗ (s)ds Let’s find G over the interval (−1. t) = 1 ∗ t f x+ 2 π + f∗ x − t π + π t G x+ 2 π −G x− t π . 1). 0) x ∈ (0. G(x) is a 2L-periodic function. 1) Let’s find g ∗ . 1). x+ct a x+ct 43 g ∗ (s)ds = x−ct x−ct g ∗ (s)ds + a g ∗ (s)ds = −G(x − ct) + G(x + ct) By (4. Let’s find f ∗ . t) = Example Consider a vibrating string with initial shape: f (x) = g(x) = π 1 x 0<x< 2 1 1−x 2 ≤x<1 1 ∗ 1 [f (x + ct) + f ∗ (x − ct)] + [G(x + ct) − G(x − ct)] 2 2c (4. where c = π We start by finding the 2L-periodic odd extension of f and g.

Let’s now consider the whole interval [0. these will be referred to as L1 and L2 respectively. These inequalities generate what is known as the strip zone. L]. m is the slope of the line. L1 and L2 intersect at the point (x0 . Taking any point (x. Note Take a region in under both lines and above t = 0. thus we have x − ct = x0 − ct0 x + ct = x0 + ct0 → → 1 =m c 1 − =m c generating two sets of lines that change according to x0 and t0 . We begin by drawing the characteristic lines through x = 0 and x = L.44 CHAPTER 4. From this. The x-intercept of L1 is (x0 −ct0 .2) in t ≥ 0. 0) and for L2 is (x0 + ct0 . 0 < x < L. . c0 ). 1 Let m = ± 2 . I. In order to do this.4. This interval is called the interval of dependence and the lines L1 . t0 ) depends on the interval [x0 − ct0 . IV). x0 + ct0 ]. we need the following theorem. PARTIAL DIFFERENTIAL EQUATIONS! 1 2 How about finding u at t = π ? We just substitute it in. the interval of dependence will be contained inside x ∈ (0. and from solution given by (4. L) We shall use region I to determine u(x. Recall t − t0 = m(x − x0 ) is the equation of any line.3 Method of Characteristic Lines We had before in system (4. III. x − ct = 0 x + ct = L are our new lines. Repeat this for G(x) (exercise): 2 u x. 0). We need f ∗ x + 2 and f ∗ x − 1 . t) in I. t) in other regions (II. L2 are called the characteristic lines. π 2 = 2πx −2π(x − 1) 4.11) we see that u(x0 .

1: Region Diagram t P Q’ Q P’ x . ONE DIMENSIONAL WAVE EQUATIONS t x + ct = L IV x − ct = 0 45 II III I x x=L Figure 4.4.4.

PARTIAL DIFFERENTIAL EQUATIONS! Theorem 4.e. In region II: t We use the parallelogram theorem above: P Q’ Q P’ I x=L x . 1 2 2 + 1 4 and g(x) = 2x In region I: The solution is given by (4. t) = 1 1 [f (x + 3t) + f (x − 3t)] + 2 6 x+3t g(s)ds x−3t Substitute in f and we get (exercise): u(x. L]. i.46 CHAPTER 4. Then: u(P ) + u(P ′ ) = u(Q) + u(Q′ ) Example Consider the wave equation with c = 3. f (x) = − x − with 0 < x1. t) = −x2 − 9t2 + x + 2xt for region I.2 Let P QP ′ Q′ be the parallelogram generated by the characteristic lines. u(x. Find u in region I and II.11) where x + ct and x − ct will never be outside the interval [0.

4. these are boundary conditions. t) = 0 B. We are interested in finding out the temperature of a given point x on the rod at a given time t. 0 < x < L.5. Recall: u(P ) + u(P ′ ) = u(Q) + u(Q′ ) ¨ u(P ) = u(Q) + ¨¨′ ) − u(P ′ ) u(Q = u(Q) − u(P ′ ) Coordinates of Q: x − 3t = 0 x + 3t = x0 + 3t0 Coordinates of Q′ : x=0 x − 3t = x0 − 3t0 Coordinates of P ′ x − 3t = 0 x + 3t = −x0 + 3t0 Substituting in u(P ): u(P ) = 9 1 2 41 x − xt + t2 + x 4 2 4 =⇒ +3t x = −x02 0 −x0 +3t0 t= 6 47 =⇒ x = x0 +3t0 2 t = x0 +3t0 6 =⇒ x=0 +3t t = −x03 0 4.13) . u(0. The initial boundary value problem is given by  ∂u 2 ∂2u  ∂t = c ∂x2 u(0. t) = u(L. 0) = f (x) I. u(x. t ≥ 0 (4.C.e. t) = u(L. The temperature u at the edges of the rod are held fixed at 0. t) = 0  u(x. 0) = f (x). i.5 One Dimensional Heat Equation Consider a thin rod with negligible thickness. ONE DIMENSIONAL HEAT EQUATION Q′ must be on the t-axis. making it effectively one dimensional.C Say that the temperature distribution is initially given by f (x) (initial condition). and of length L.

14) cnπ L 2 nπ L x B. t) = 0 ≡ X(0)T (t) = X(L)T (t) = 0 X(0) = X(L) = 0 We need to discuss the three cases: 1. n ≥ 1. u(0. k = µ2 = 0 both generate the trivial solution. k = −µ2 < 0 generates Xn = sin Now we need to find T (t). T =0 . since both sides are independent of their own respective variables. t) = u(L.48 CHAPTER 4. Substitute T′ + Let λn = nπc ′ =⇒ Tn + λ2 Tn = 0 n L (4. X ′′ − kX = 0 T ′ − kc2 T = 0 We have already solved these equations in the Wave Equations section. =⇒ 1 T′ X ′′ = 2 =k X c T where k is the separation constant.C . c2 is known as the thermal diffusivity. ∂u ∂2u = XT ′ . u(x. PARTIAL DIFFERENTIAL EQUATIONS! The PDE is linear 2nd order homogenous. 3. We then apply the method of separation of variables. t) = X(x)T (t) We can do this because the heat equation is a linear homogenous PDE with 0 boundary conditions. k = µ2 > 0 2. 2 X ′′ T ∂t ∂x T ′ X = c2 X ′′ T =⇒ X ′′ 1 T′ = 2 X c T This implies that the ratios are a constant.

n ≥ 1 L un is called the nth normal mode. 0) = n=1 αn sin nπ x L This represents the half range sine series expansion of f over (0. similar ideas are applicable.e.17) .5.5. t ≥ 0 ∂u ∂t 2        (4. i. The PDE becomes: = c2 ∂ u ∂x2 u(0. L).16) 4. For system (4. Observations λn → ∞ =⇒ e−λn t → 0 as n → ∞. or time independent solutions. 0) = f (x). 3.13). This means that the fundamental mode is the dominant term whereas the overtones are negligible. 2. n ≥ 1 Thus un (x. We are interested in finding the long term behaviour of the temperature. 0 < x < L. This means that ∂u = 0. t) = n=1 αn e−λn t sin 2 nπ x L (4. the steady state solutions. t) = T2 u(x. Suppose we need to find out what happens when we have non-zero boundary conditions.4. the steady state solution is u = 0. and thus we can get the constants αn from: αn = 2 L L f (x) sin 0 nπ x dx L (4. t) = T1 u(L. ∞ u(x. As t → ∞. t) = Xn Tn = αn e−λn t sin Reminder 2 49 nπ x . and thus indepen∂t dent of time.1 1. ONE DIMENSIONAL HEAT EQUATION dTn = −λ2 dt n Tn =⇒ Tn = αn e−λn t .15) Now we can apply the initial condition: ∞ f (x) = u(x.

We have now generated a new initial boundary value problem ∂u2 ∂ 2 u2 = c2 ∂t ∂x2 u2 (0. t) = 0 u2 (x. Notice ∂u ∂u ∂u2 = −0= ∂t ∂t ∂t ∂ 2 u2 ∂2u ∂2u = −0= 2 2 ∂x ∂x ∂x2 2 ∂u2 ∂ u2 =⇒ = c2 ∂x ∂x2 and we have generated a one dimensional heat function for u2 u2 (0. will comprise our initial condition. ∂u1 =0 ∂t ∂ 2 u1 =0 =⇒ ∂x2 by the PDE.C. 0) = f (x) − u1 (x) I. =⇒ u1 (x) = Ax + B Applying the boundary condition: u1 (0) = B = T1 T2 − T1 u1 (L) = AL + T1 =⇒ A = L Thus giving us the steady state solution: u1 (x) = T2 − T1 x + T1 L (4. the boundary conditions are non-zero.19) Oh mama mia! We generated two zero boundary conditions! u2 (x. 0) = f (x) − u)1(x) (4. Let u2 (x.17) using the steady solution (4. t) = u2 (L. t) u2 (L. t) = T1 − T1 = 0 = T2 − T2 = 0 B.17). t) = u(x. To solve system (4.18) The general idea is to solve system (4.17). we need to discuss the steady state behaviour.50 CHAPTER 4. t) is the solution to (4.C (4. PARTIAL DIFFERENTIAL EQUATIONS! The method of separation of variables will not work. t) − u1 (x) where u(x.18).20) .

t) + u1 (x) Beccause the initial and boundary conditions are inconsistent.21) (4.5. . our solution u may be discontinuous and thus will exhibit the Gibbs phenomenon at t = 0.5. By (4. L = c = 1 Find the steady state solution by Equation (4. λn = L L αn = 2 L nπ x dx L 0 =⇒ u(x.20) by the method of separation of variables. we have ∞ u2 (x. u(1. n ≥ 1 1 an = 2 0 [60x − 60x − 20] sin nπ x dx L = α2n+1 By (4. t) = 2 2 −80 e−(2n+1) π t sin ((2n + 1)πx) (2n + 1)π n=0 ∞ u(x. t) = 20. 4. f (x) = 60x. t) = u2 (x. t) = u2 (x.15) we have: 0 if n is even 80 − nπ if n is odd −80 . ONE DIMENSIONAL HEAT EQUATION 51 We can now solve (4. t) = n=1 αn e−λn t sin 2 nπ cnπ x .3 Other Types of Boundary Conditions Let’s discuss other types of boundary conditions. t) + u1 (x) [f (x) − u1 (x)] sin L (4.2 Example Solve the initial boundary value problem (4. t) = 80.18) u1 (x) = 80 − 20 x + 20 = 60x + 20 1 λn = nπ.17) given the data: u(0.5.15).4.n ≥ 0 = (2n + 1)π u2 (x.22) 4.

i.52 Variation 1 CHAPTER 4. k = µ2 > 0 (trivial solution) 2. .e. We have inifinitely many solutions. We now try to employ the method of separation of variables. so we set X0 (x) = 1. This X0 is a non-trivial solution. k = µ2 = 0 yields X(x) = c1 + c2 x. µ = µ0 = 0. PARTIAL DIFFERENTIAL EQUATIONS! ∂u ∂2u = c2 2 ∂t ∂x ∂u ∂u (0. t) = (L. Now we check the boundary conditions: X ′ (0)T (t) = 0 X ′ (L)T (t) = 0 =⇒ X ′ (0) = X ′ (L) = 0 Now we solve X ′′ − KX = 0 X ′ (0 = X ′ (L) = 0 which gives us the characteristic equation: m2 − k = 0 =⇒ m2 = k and we deal with our three cases again: 1. Seek a solution of the form u(x. which is a line. there is no propagation in the x direction. 0) = f (x) (4. t) = 0 ∂x ∂x u(x. t) = X(x)T (t) generates X ′′ − kX = 0 T ′ − kT = 0 where k is the separation constant. and we only need one solution. X ′ (x) = c2 =⇒ X ′ (0) = c2 = 0 X ′ (L) = c2 = 0 Therefore the boundary conditions are satisfied.23) This means that there is no heat flux.

5. t) = n=0 αn e−λn t cos 2 nπ x L . we therefore merge them together as follows: nπ x . t) = αn e−λn t cos 2 cnπ L (4.n ≥ 1 L X(x) = Xn (x) = c1 cos µn x nπ Xn (x) = cos x L =⇒ µ = µn = Since at n = 0 we have X0 = 1 and µ0 = 0 for cases 2 and 3. we will end up with the following: ′ Tn − (cµn )2 Tn = 0.n ≥ 0 Xn (x) = cos L nπ where un = L Substituting k = −µ2 into the DE for T .24) −λ2 dt n 2 =⇒ Tn = αn e−λn t nπ x L Because this is a linear PDE. this implies that the soultion is an infinite sum of all the modes: ∞ u(x. ONE DIMENSIONAL HEAT EQUATION 3. k = −µ2 < 0 yields X(x) = c1 cos(µx) + c2 sin(µx) X ′ (x) = −c1 µ sin µx + c2 µ cos µx =⇒ 53 =⇒ µL = nπ X ′ (0) = c2 µ = 0 =⇒ c2 = 0 X ′ (L) = −c1 µ sin µL = 0 =⇒ sin µL = 0 nπ .4. we let λn = cµn = Giving us: T ′ + λ2 Tn = 0 n which is a separable equation: dTn = Tn Therefore the nth normal mode is: un (x. n ≥ 0 To simplify (we always do this).

t) u(0. t) = −τ u(L. ∞ f (x) = u(x.C. ∂x τ ≥0 L 0 1 L L f (x)dx 0 nπ f (x) cos x dx L (4.26) (B. 0) = α0 + n=0 αn cos nπ x L Which gives us the half range cosine series expansion of f (x). αn . t) = X(x)T (t): T ′ − c2 kT = 0 where k once again is the separation constant.) u(x. we consider the three cases X(0)T (t) = 0 X ′′ − kX = 0 X(0) = 0 X ′ (L) + τ X(L) = 0 . Now we apply the boundary conditions. X ′ (L)¨(t) = −τ X(L)¨(t) T¨ T¨ =⇒ now we solve X ′′ − kX = 0: m2 − k = 0 =⇒ m2 = k And again. t) = 0. f (x) =⇒ a0 = 2 an = L Variation II Consider the initial boundary value problem ∂2u ∂u = c2 2 ∂t ∂x ∂u (L. t) = α0 + n=0 αn e−λn t cos 2 nπ x L (4. n ≥ 1. 0) = f (x) Let u(x.54 CHAPTER 4. PARTIAL DIFFERENTIAL EQUATIONS! Taking out the first term: ∞ u(x.25) We now use the initial conditions to find α0 .C) (I.

ONE DIMENSIONAL HEAT EQUATION 1. 3.5. k = µ2 > 0 and 2.4. We assume c2 = 0 as it would yield the trivial solution. c2 = 0 =⇒ µ cos(µL) + τ sin(µL) = 0 Mama mia! Notice: tan(µL) = − µ τ We can solve this numerically. k = −µ2 < 0 implies that the characteristic roots are ±iµ and X(x) = c1 cos(µx) + c2 sin(µx) =⇒ X(0) = 0 = & c1 =⇒ X(x) = c2 sin(µx) X ′ (x) = c2 µ cos(µx) c2 µ cos(µL) + τ c2 sin(µL) = 0 55 where the last step comes from applying the second boundary condition. graphically: . k = µ2 = 0 both generate the trivial solution. but how do we know that µ exists? Let y1 (u) = tan µL 1 y2 (u) = − u τ and now look for the points of intersection of these two functions.

n ≥ 1 One can determine them numerically using software. 0) = f (x) sin(µm x) = n=1 αn e−λn t sin(µn x) sin(µm x) 2 and we integrate with respect to dx. 0) = f (x) = n=1 αn e−λn t sin(µn x) 2 This is not a half range sine series expansion. then this destroys all the terms except for m = n. Now we check the initial conditions: ∞ u(x. PARTIAL DIFFERENTIAL EQUATIONS! There are clearly intersections.56 CHAPTER 4. Thus. we have infinitely many points of intersection. X(x) = Xn (x) = sin (µn x) . t) = αn e−λn t sin(µn x) ∞ 2 2 u(x. Question Are sin (µn x) and sin(µm x) orthogonal? The answer is yes. αm = 1 L 0 L sin2 (µm x)dx f (x) sin (µm x) dx 0 . µ = µn . The proof will be revisited later on. Now we can mutiply both iides by sin (µm x) ∞ u(x. n ≥ 1 The nth normal mode is then: un (x. n ≥ 1 (c2 = 1) Recall T ′ − c2 kT = 0 ′ =⇒ Tn + (cµn )2 Tn = 0 Which is like the DE we solved before: Tn (t) = αn e−λn t λn = cµn . we don’t know that the functions are orthogonal (due to µn inside the sine function). t) = n=1 αn e−λn t sin(µn x) 2 Where the latter equation is the general solution.

6 Two Dimensional Wave Equation Can we find theh Fourier series of a function given by z = f (x.29) Now we apply the separations of variables. t) is a continuous function with constant first and second order partial derivatives on [0. t) = u(a. t) is given by ∞ ∞ f (x. The initial boundary value problem is given by ∂2u = c2 ∂t2 ∂2u ∂2u + 2 ∂x2 ∂y   2D Wave Equation       BC 1 BC 2       ICs u(0. a] × [0. we will have a double Fourier series. Now let’s use this material to solve 2-d wave equations. y)? The answer is yes. t) = 0 u(x. TWO DIMENSIONAL WAVE EQUATION 57 4.27) where Bmn = 4 ab b 0 0 a f (x.28) To simplify the notation. y) = n=1 m=1 Bmn sin nπ mπ x sin y a b (4. remember that m = k and l = n. b]. t) = X(x)Y (y)T (t) Substituting in: XY T ′′ = c2 (X ′′ Y T + XY ′′ T ) X ′′ Y ′′ 1 T ′′ = + = −k 2 c2 T X Y Divide by c2 XY T .n=1 Bmn sin nπ mπ x sin y a b Observation Notice that the functions sin nπ x sin mπ y and sin kπ x sin lπ y a b a b are orthogonal functions.4. y) = m. y. Theorem 4. we merge the two summation signs: ∞ f (x. We seek a solution of the form u(x. y. t) = 0 ∂u u(x. 0) = f (x. y) ∂t (4.3 If f (x. In this case. y. y) sin mπ nπ x sin y dxdy a b (4. 0) = g(x. 0. y. y. (x. t) = u(x. then the double Fourier half range sin series expansion of f (x.6. y). b.

L = µ2 ≥ 0 generates the trivial solution. our other one is Y ′′ + ν 2 Y = 0 Y (0) = Y (b) = 0 For µ = µm = mπ a k 2 = µ2 + ν 2 we have: X = Xm (x) = sin mπ x a nπ y b For ν = νn = nπ b we have: Y = Yn (y) = sin But k 2 = µ2 + ν 2 2 =⇒ kmn = π 2 m2 n2 + 2 a2 b . PARTIAL DIFFERENTIAL EQUATIONS! If L = k 2 ≥ 0 will generate the trivial solution. X(0) = X(a) = 0 Y (0) = Y (b) = 0 We need to solve X ′′ + µ2 X = 0 X(0) = X(a) = 0 as our first boundary value problem. where L is the separation constant.58 CHAPTER 4. =⇒ Y ′′ = −(k 2 − µ2 ) ≡ −ν 2 Y Now we can generate a system of ODEs that we can solve X ′′ + µ2 X = 0 Y ′′ + ν 2 Y = 0 T ′′ + (ck)2 T = 0 Now we have to apply the boundary conditions. Thus X ′′ Y ′′ + = −k 2 X Y X ′′ Y ′′ =⇒ =− − k 2 = −µ2 X Y Again.

αmn = 4 ab b 0 0 a f (x. y. again.32) Now we can apply the initial conditions to get the final solution f (x.e.4 The set of all points in the membrane that stays still (i. do not vibrate) are called nodal lines.30) This term is called the characteristic frequency. y. the solution is: ∞ u(x.33) Skipping a few steps. By (4. 0) = x sin y ∂t a b m. y) sin mπ nπ x sin y dxdy a b (4.n=1 ∞ which is.27) and (4. y) = ∂u mπ nπ λmn βmn sin (x. y. t) = sin =⇒ m2 + λ2 = 0 mn =⇒ m1.n=1 αmn sin nπ mπ x sin y a b which is the double sine Fourier series expansion.34) Theorem 4.6.4.n=1 sin mπ nπ x sin y [αmn cos (λmn t) + βmn sin (λmn t)] a b (4.28).2 = ±iλmn = αmn cos (λmn t) + βmn sin (λmn t) mπ nπ x sin y [αmn cos (λmn t) + βmn sin (λmn t)] a b (4. the solution is: g(x. y) sin mπ nπ x sin y dxdy a b (4. t) = 0. TWO DIMENSIONAL WAVE EQUATION Let 2 λ2 = c2 kmn = c2 π 2 mn 59 m2 n2 + 2 a2 b (4. by the same equations: βmn = 4 λmn ab 0 b 0 a g(x. y) = u(x. y.31) Thus. t) = m. 0) ∞ = m. y. They satisfy the equation umn (x. ∀t . the double sine Fourier series expansion. Thus ′′ Tmn + λ2 Tmn = 0 mn Tmn The normal mode: u(x.

y) = 0 1 1 αmn = 4 0 1 0 sin (3πx) sin (πy) sin (mπx) sin (nπy) dxdy 1 = 2 0 sin (3πx) sin (mπx) dx 1 2 0 1 sin (πy) sin (nπy) dy sin (πy) sin (nπy) dy = −1 sin (3πx) sin (mπx) dx −1 We know from the orthogonality principle that: = 0m = 3 or n = 1 1m = 3 and n = 1 βmn = 0∀n. y. our nodal lines are x= 1 2 .60 CHAPTER 4. k ∈ Z+ k x= 3 So again. k ∈ Z+ Inside our boundaries. PARTIAL DIFFERENTIAL EQUATIONS! 4. m = 3 or n = 1 √ 10 So we find our nodal lines by setting the above expression to zero: √ sin (3πx) sin (πy) cos 10t = 0 This implies that 3πx = kπ. t) = sin (3πx) sin (πy) cos √ 10t m2 + n2 . m λmn = Thus yielding our final solution: u(x. λ31 = α31 = 1. 3 3 .6. to get a zero: πy = kπ =⇒ y = k. y) = sin 3πx sin πy g(x. αmn = 0.1 Example a=b=1 1 c= π f (x.

4. TWO DIMENSIONAL HEAT EQUATION 61 4. y) = u(a. y) = 0 u(x. y. b) = f2 (x) This means that there is a heat source at one side of the boundary conditions. t) = 0 and an initial condition u(x.1 ∂u ∂2u ∂2u + 2 = c2 ∂t ∂x2 ∂y is the heat equation in two dimensions. y. t) = u(x. If we have zero boundary conditions: u(0. . . y) = X(x)Y (y) . 0.36) λmn = cπ αmn = (4.7. t) = m. n = 1.38) b a mπx nπy 4 f (x. t) = 0 u(x. y) sin sin dxdy ab 0 0 a b m.37) (4. y) then we can use the method of separation of variables: ∞ (4. 3. y. .39) u(0.n=1 αmn sin m2 n2 + 2 2 a b mπx nπy −λ2 t sin e mn a b (4. b. Again. 0) = f (x. 2. To get a steady state solution. t) = u(a. 0) = 0. u(x. we use separation of variables. we set: ∂u =0 ∂t ∂2u ∂2u + 2 =0 ∂x2 ∂y ∇2 u = 0 (4.7 Two Dimensional Heat Equation Definition 4. u(x. y.35) u(x.

PARTIAL DIFFERENTIAL EQUATIONS! Substitute it into the equation we want to solve u(x.62 CHAPTER 4. 3. n = 1. 2. which doesn’t matter because it gets absorbed later by another constant. a We pick c2 = 1. . . . 2. . . X(0) = 0. Now we can solve for µ. . ′′ Yn − µ2 Yn = 0 n This gives us an exponential answer Yn (y) = αn cosh (µn y) + βn sinh (µn y) . X(a) = 0 X(x)Y (0) = 0 =⇒ Y (0) = 0 So we take k = µ2 > 0 again to avoid the trivial solutions X(x) = c1 cos (µ) + c2 sin (µx) X(0) = c1 = 0 X(x) = c2 sin (µx) X(a) = c2 sin (µa) = 0 =⇒ µa = nπ. and our final solution for x is Xn (x) = sin nπx . n = 1. y) = X(x)Y (y) X ′′ Y + XY ′′ = 0 X ′′ Y ′′ + =0 X Y Y ′′ X ′′ =− =k Y X This yields two equations Y ′′ − kY = 0 X ′′ + kX = 0 X(0)Y (t) = X(a)Y (y) = 0 For this to be true.

y) = n=n un (x.4.42) Notice that these are non-zero.8. y) = g2 (y) u(x. y) = g1 (y) u(a. DIRICHLET PROBLEM Now we can apply the boundary conditions 63 Yn (0) = 0 = αn Yn (y) = βn sinh(µn y) nπy nπx un (x. We split the PDE into the following sections: . 0) = f1 (x) u(x. y) = 2 a sinh n=n a nπb a βn sinh f2 (x) sin nπx nπy sin a a nπ x dx a (4.40) (4.8 Dirichlet Problem Consider a box with the following boundaries: y b u = g1 (y) u = f2 (x) ∇2 u = 0 u = f1 (x) a u = g2 (y) x ∇2 u = 0 u(0. y) = βn sinh sin a a ∞ ∞ =⇒ u(x. b) = f2 (x) (4.41) Bn = 0 4.

y) = X(x). y) = 0 u1 (x. we may now employ the method of separation of variables u1 (0. y. PARTIAL DIFFERENTIAL EQUATIONS! y b 0 f2 (x) b ∇2 u1 = 0 0 a 0 x 0 y 0 ∇2 u2 = 0 f1 (x) y a 0 x y b g1 (y) 0 b 0 x 0 0 ∇2 u3 = 0 0 a ∇2 u4 = 0 0 a g2 (y) x These new initial boundary value problems have the following properties: u = u1 + u2 + u3 + u4 u(x.64 CHAPTER 4. y) = u1 (a. and thus. b) = 0 u1 (x. 0) = f1 (x) u1 (x. Y (y) ∇2 u1 = 0 . b) = f2 (x) ∇2 u = ∇2 (u1 + u2 + u3 + u4 ) = ∇2 u1 + ∇2 u2 + ∇2 u3 + ∇2 u4 This gives us zero boundary condition problems.

The result is given below: .4.8. Yn (y) = An cosh (µn y) + Bn sinh (µn y) Yn (b) = An cosh (µn b) + Bn sinh (µn b) = 0 =⇒ An sinh (µn b) =− Bn cosh (µn b) An Yn (y) = Bn cosh (µn y) + sinh (µn y) Bn sinh (µn b) = Bn − cosh (µn y) + sinh (µn y) cosh (µn b) −Bn [sinh(µn b) cosh(µn y) − sinh(µn y) + cosh(µn b)] = cosh(µn b) −Bn [sinh (µn (b − y))] = cosh(µn b) = αn sinh(µn (b − y)) This gives us enough information to solve each of the four cases. DIRICHLET PROBLEM Y ′′ −X ′′ = =k Y X =⇒ X ′′ + kX = 0 Y ′′ − kY = 0 65 k ≡ µ2 X(0) = X(a) = 0 nπ a X(x) = Xn (x) = sin where µn = nπx a n≥1 Now we do Y .

8.46) (4. from (4.47) (4. y) = 2 b sinh ∞ nπa b g1 (y) sin 0 nπy dy b δn sinh n=1 nπx nπy sin b b b where δn = 2 b sinh nπa b g2 (y) sin 0 nπy dy b 4.45) (4. PARTIAL DIFFERENTIAL EQUATIONS! u1n (x. y) = 1 2 sinh(πn) 2  0 = 1  sinh(π2) sin(2πy) sin(nπy)dy −1 n=2 n=2 1 sinh(2πx) sin(2πy) sinh(2π) .1 Example a=b=1 u(0. y) = u(x. 0) = x Consider the initial boundary value problem given by: We can see immediately that we need to only use u1 and u4 . y) = sin (2πy) u(x. y) = f1 (x) sin 0 nπx dx a γn sinh n=1 nπy nπ (a − x) sin b b b where γn = u4 (x. y) = sinh(µn (b − y)) a n=1 2 a sinh ∞ a nπb a (4. 1) = 0 u(1. from (4.48): δn = = 2 sinh (πh) 1 sin(2πy) sin(nπy)dy 0 1 And consequently.48) where αn = u3 (x. =⇒ u4 (x. from above.66 Result 4. y) = αn sin nπx sinh(µn (b − y)) a ∞ nπx αn sin u1 (x.47).43) (4.2 CHAPTER 4.44) (4.

u = f2 (x).9 Poisson’s Equation ∇2 = f (x. αn = = 2 sinh(nπ) 1 67 x sin(nπx)dx 0 1 0 2 1 1 − x cos(nπx) + 2 2 sin(nπx) sinh(nπ) nπ n π 1 2 − cos(nπ) − 0 = sinh(nπ) nπ 2(−1)n =− nπ sinh(nπ Plugging into (4. y) = −2(−1)n sin(nπx) sinh(nπ(1 − y)) nπ sinh(nπ n=1 ∞ And thus. y) We shall solve the boundary value problem u(0. our general solution is: u(x. y) = 2(−1)n 1 sinh(2πx) sin(2πy) − sin(nπx) sinh(nπ(1 − y)) sinh(2π) nπ sinh(nπ n=1 ∞ 4. b)f2 (x) (4. u(x. u = g1 (y). y) f1 (x) a g2 (y) x f2 (x) . y b g1 (y) ∇2 u = f (x. POISSON’S EQUATION From (4. 0) = f1 (x). u1 (x.9.49) Consider a box bounded by u = f1 (x).4.48). y) = g2 (y) u(x. y) = u1 (x. y) = g1 (y). u = g2 (y). y) + u2 (x.44). u(a.

68 CHAPTER 4. The former problem will be known as problem 1. We try a function given by ϕmn (x. y b g1 (y) ∇2 u1 = 0 f1 (x) a g2 (y) x f2 (x) The second rectangle has four zero boundary conditions. y b 0 0 ∇2 u2 = f (x. PARTIAL DIFFERENTIAL EQUATIONS! We can now decompose (split) the problem into two. Problem 1 is the Dirichlet problem.50) Thus: . We are now left with a problem with zero boundary conditions. and the latter will be referred to as problem 2. y) 0 a 0 x and we can add both rectangles together to get the original problem. y) = sin mπ nπ x sin y a b Note that the right hand side of this comes from the solution to the Dirichlet problem. of course). which was solved previously. The idea is to guess a solution and plug it in (an educated guess. We now calculate the Laplacian ∇2 ϕmn = ∂ 2 ϕmn ∂ 2 ϕmn + 2 ∂x ∂y 2 n2 π 2 mπ nπ mπ nπ m2 π 2 x sin y − 2 sin x sin y = − 2 sin a a b b a b 2 2 2 2 n π mπ m π nπ sin =− + 2 x sin y 2 a b a b mπ nπ = −Λmn sin x sin y a b ∇2 ϕmn = −Λmn ϕmn (4.

51) This is known as the eigenseries expansion.51). y) = u(x. y) sin mπ nπ x sin y dxdy a b (4.9. and now we can substitute (4. This representation is not unique. Also notice that the boundary conditions of (2) are satisfied by ϕmn . We need to solve ∇2 u2 = f (x.9.52) 4. y) a b This is the double Fourier series expansion of f (x. Emn is to be determined. y). POISSON’S EQUATION 69 This problem is basically an eigenvalue problem where Λmn is the eigenvalue associated with the eigenfunction ϕmn .4. y). ∞ m. the educated guess is ∞ u2 (x. y) = sin (2πy) u(x.n=1 Emn sin nπ mπ x sin y a b (4. y) = m. 0) = x We have a rectangle as follows: y b 0 0 ∇2 u = xy x a sin (2πy) x We begin the decomposition into two separate problems. Thus. .n=1 −Λmn Emn sin mπ nπ x sin y = f (x. 1) = 0 u(1.1 Example Solve Equation (4.49) subject to the boundary conditions u(0. Emn = − 4 Λmn ab 0 b 0 a f (x.

we seek a solution of the form (4.Dirichlet problem y b 0 0 ∇2 u1 = 0 x a sin(2πy) x Problem 2 .51). (−1)n+m+1 sin (mπx) sin (nπy) mn(m + n) m.n=1 . Λmn = By (4.Zero Boundary Conditions y 0 b 0 ∇2 u2 = xy 0 From the previous example: u1 = sinh 2πx 2 sin 2πy + sinh 2π π (−1)n+1 sin(nπx) sinh(nπ(1 − y)) n sinh nπ n=1 ∞ 0 x a Now let’s look for u2 . y) = Thus: u = u1 + u2 is our final answer. PARTIAL DIFFERENTIAL EQUATIONS! Problem 1 . It follows that u2 (x.52): Emn = − =− 4 π 2 (m2 + n2 ) 4 0 0 1 1 0 1 1 n2 π 2 m2 π 2 + 2 = π 2 (m2 + n2 ) 2 a b xy sin (mπx) sin (nπy) dxdy x sin (mπx) dx 0 π 2 (m2 + n2 ) 4(−1)m+n+1 = 2 π mn(m + n) 4 π2 ∞ y sin(nπy)dy This is the Fourier coefficient.70 CHAPTER 4.

Therefore. b]. d2 are constants. b] These are known as regularity conditions.10. p′ (x). (d) p(x) and r(x) > 0. q(x). STURM LIOUVILLE PROBLEMS 71 4. 2.10 Sturm Liouville Problems Definition 4. The Regular Sturm-Liouville problem (RSL) over a finite interval [a. b] is given by the second order boundary value problem [P (x)y ′ ] + [q(x) + λr(x)] y = 0 c1 y(a) + c2 y ′ (a) = 0 d1 y(b) + d2 y (b) = 0 where (a) c1 . is as above.4. ′ ′ (4.10. one of which is non-zero (b) d1 . r(x) are continuous on [a. except that one of the regularity conditions would fail. we shall apply the following substitution: x ≡ x = λu λ 1 dy dy = y′ = dx λ du 1 d2 y ′′ y = 2 2 λ du x2 d2 y x dy + + x2 − ν 2 )y = 0 λ2 du2 λ du dy d2 y u2 2 + u + λ2 u2 − ν 2 y = 0 du du u= Divide by u u d du d2 u dy + du2 du i h 2 − ν +λ2 u u + − ν2 + λ2 u y = 0 u dy [u du ]+ . ∀x ∈ [a.53) 4.2 1. A Singular Sturm-Liouville Problem (SSL). c2 are constants.1 Example: Bessel’s Equation x2 y ′′ + xy ′ + (x2 − ν 2 )y = 0 The latter is not a Sturm-Liouville problem. one of which is non-zero (c) p(x).

This is known as the parametrized Bessel’s equation.2 Example y ′′ + λy = 0 p(x) = r(x) = 1.53) is called an eigenfunction of the Sturm-Liouville problem corresponding to the eigenvalue λ. Continuing: p(u) = u q(u) = − ν2 u r(u) = u Because q(u) is discontinuous.3 Each non-zero or non-trivial solution to (4. q(x) = 0 Find the eigenfunctions and eigenvalues of the Sturm-Liouville problem This implies that this is a regular Sturm-Liouville problem.10. λ = −µ2 < 0 =⇒ m = ±µ y(x) = c1 cosh µx + c2 sinh µx This generates the trivial solution. Definition 4. y(0) = y(2π) = 0 are the boundary conditions. 2. λ = µ2 > 0 =⇒ m = ±iµ y(x) = c1 cos µx + c2 sin µx y(0) = 0 =⇒ c1 = 0 y(2π) = 0 =⇒ c2 sin 2πµ = 0 n =⇒ sin 2πµ =⇒ µn = 2 n≥1 .72 CHAPTER 4. We will discuss this in detail later as we will need this to solve PDEs. Our characteristic equation is m2 + λ = 0 =⇒ m2 = −λ As usual. we have three cases to consider: 1. PARTIAL DIFFERENTIAL EQUATIONS! And this is now a Sturm-Liouville problem. λ = −µ2 = 0 Also a trivial solution 3. 4. this is a singular Sturm-Liouville problem.

3.e. and An = b a f (x)yn (x)r(x)dx b a 2 yn (x)r(x)dx (4.3 There are four things we can draw from this: 1. then λn is real and λ1 < λ2 < · · · < λn < . This does not hold for periodic boundary conditions. The eigenfunctions of a regular Sturm-Liouville problem and singular Sturm-Liouville problems subject to the boundary condition x→b− ′ ′ lim {p(x) [yn (x)ym (x) − ym (x)yn (x)]} x→a ′ ′ − lim+ {p(x) [yn (x)ym (x) − ym (x)yn (x)]} = 0 (4.4.55) where yn (x) are the eigenfunctions of a regular Sturm-Liouville problem. .r.54) are orthogonal w. and limn→∞ = ∞.. If λn are the eigenvalues of a regular Sturm-Liouville problem.56) . then ∞ f (x) = n=1 An yn (x) is continuous on [a. b] (4. i. STURM LIOUVILLE PROBLEMS These are the eigenvalues: n2 4 73 λn = µ2 = n The eigenfunctions corresponding to the eigenvalues are: nx 2 yn = sin Result 4. 2.10. then b r(x)yn (x)ym (x)dx = 0 a 4. There exists only one linearly independent eigenfunction corresponding to each eigenvalue of a regular Sturm-Liouville problem.t the weight function r(x). . If yn and ym are eigenfunction corresponding to two different eigenvalues. λn ∧ λm . If f (x) is a piecewise smooth (differentiability implied) function. .

C. . . µ3 . t) = −u(1. n ≥ 1. Thus j (x) = Xn (x) = cos(µn x). n ≥ 1 Now find T : ′ Tn + µ2 Tn = 0 =⇒ Tn (t) = αn eµn t n 2 un (x. t) = αn e−µn t cos µn x ∞ 2 u(x. . We have infinitely many points of intersection µn . PARTIAL DIFFERENTIAL EQUATIONS! 4. 0) = x We shall use the only method of solving this we learned so far.10. . Let’s call these points of intersection µ1 .3 Solve Example   ut = uxx    u (0. where we seek a solution of the ofrm u(x. t) = n=1 ∞ αn e−µn t cos µn x αn cos µn x n=1 2 x= I.C. t) = X(x)T (t) XT ′ = X ′′ T =⇒ X ′ + µ2 X = 0 X ′ (0) = 0.74 CHAPTER 4. t) = 0 x   ux (1. µ2 . the method of separation of variables. The characteristic equation is given by m2 + µ2 = 0 =⇒ m = ±iµ X(x) = c1 cos µx + c2 sin µx X ′ (x) = −c1 µ sin µx + c2 µ cos µx X ′ (0) = 0 =⇒ c2 = 0 =⇒ X(x) = c1 cos µx X ′ (1) + X(1) = 0 =⇒ − c1 µ sin µ + c1 cos µ = 0 cot µ = µ Let y1 (µ) = cot µ and y2 (µ) = µ T′ X ′′ = = k = −µ2 < 0 T X T ′ + µ2 T = 0 (∗) First B. t)   u(x. X ′ (1) + X(1) = 0 Let’s find X. .

respectively. 1]   r(x) = 1 > 0 cos µn x cos µm xdx = 0 when n = m 0 By equation (4.11. Thus the eigenfunctions Xn (x) = cos µn k assiciated with the eigenvalues k = −µ2 = −µ2 n satisfy the orthogonality property given by result 3. THE PARAMETRIZED BESSEL’S EQUATION Notice the following about (∗): [1X ] + 0 + µ x X = 0 X ′ (1) + X(1) = 0 ′ ′ 2 75 X (0) = 0 ′  p(x) = 1 > 0  We write it in this form so that it is consistent with the Sturm-Liouville problem. The solution of Bessel’s equation: y(x) = c1 Jm (x) + c2 Ym (x).58) (4.4.57) is a singular Sturm-Liouville problem (SSL). We can now confirm that it is a RSL.57) is y(u) = c1 Jm (λu) + c2 Ym (λu) Consider the boundary conditions y ′ (a) = −ly(a) y(a) = 0 (4. m ∈ Z Thus.57) This equation was generated from Bessel’s equation using the substitution x = λu.59) Where each of these denote a different case.11 The Parametrized Bessel’s Equation d2 y dy m2 + + λ2 u − 2 du du u Consider the parameterized Bessel’s equation given by u y=0 (4. the solution of (4. This implies 1 q(x) = 0 continuous on [0. case 1 and case 2. (4.56): αn = =2 1 x cos µn xds 0 1 cos2 µn xdx 0 µn sin µn + cos µn − 1 µ2 + µn sin µn cos µn n exercise 4. .

n ≥ q at which h1 ∧ h2 will intersect.4 There are infinitely many λ = λmn . y(a) = 0 = c1 Jm (λa) Assume c1 = 0 Jm (λa) = 0 We need infinitely many λs that will satisfy this equation. 2. .59).57) is a SSL.61) This is derived in the last problem of assignment 11. ′ y ′ (u) = c1 λJm (λu) c1 ′ c1 &λJm (λa) = −l&Jm (λa) ′ λJm (λa) = −lJm (λa) h1 (λ) h2 (λ) (cq = 0) Result 4. (The most important result) Equations (4. 4. We are then left with y(u) = c1 Jm (λu) Let us consider first (4. the eigenfunctions are yn = Jm (λmn u) corresponding to the eigenvalues λmn .54) and (4.57) by subjecting it to (4.76 CHAPTER 4. PARTIAL DIFFERENTIAL EQUATIONS! By property 4 of the handout for Bessel functions: u→0+ lim Y (u) = −∞ not good So we set c2 = 0 to eliminate that possibility. This is true (won’t be proven) We solve (4. Although (4. There are eigenfunctions Jmn (λmn u) corresponding to the eigenvalues λmn for both cases.55) are satisfied by the eigenfunctions yn = Jm (λmn u).5 More generally: 1. Two different eigenfunctions will satisfy the orthogonality property a uJm (λmn u)Jm (λmk u)du = 0 0 (k = n) (4.60) 3. We have a 0 2 uJm (λmn u)du = a2 2 J (λmn a) 2 m+1 (4. Thus.58). Result 4.

we have 77 ∞ f (x) = αn = αn Jm (λmn x) n=1 a uf (u)Jm (λmn u)du 0 a 2 uJm (λmn u)du 0 (4. Note that (4. RADIALLY SYMMETRIC CONDITIONS In other words: for a piecewise smooth function f (x). respectively. We shall use now the polar coordinates to deal with membranes of a circular shape. Gibbs phenomenon is observed.4. 1 utt = c2 urr + ur + r 1 ut = c2 urr + ur + r 1 uθθ r2 1 uθθ r2 . the 2-d wave and heat equations are given by.12.12 Radially Symmetric Conditions Review We used rectangular coordinates before. Remember that r2 = x2 + y 2 y θ = arctan x The Laplacian in polar coodinates is given by (after some tedious algebra) 1 1 ∇2 u = uxx + uyy → urr + ur + 2 uθθ r r Consequently.63) this is known as the Bessel-Fourier expansion. 4. This function holds except at points of jump discontinuity.62) n≥1 (4.63) is given by (4.61).

64) Again we use the powerful method of separation of variables. i.e. t) = 0  u(r. 0) (the initial shape and velocity of the membrane) are independent of θ. implying uθ = uθθ = 0 We need to solve   utt = c2 urr + 1 ur r    u(a. 0) = g(r)    r≥0 (4.. we say that the membrane is radially symmetric. θ) and g(r. t) = R(r)T (t) Substituting 1 RT ′′ = c2 R′′ T + R′ T r . we seek a solution f the form u(r. 0) = f (r) ∧ ut (r.78 CHAPTER 4.4 If f (r. PARTIAL DIFFERENTIAL EQUATIONS! θ Definition 4.

12. RADIALLY SYMMETRIC CONDITIONS Dividing by c2 RT 1 T ′′ = c2 T R′′ + 1 R′ r R = −λ2 < 0 79 This is not to avoid the trivial solutions. we let αn . This gives us the following ODEs: 1 R′′ + R′ + λ2 R = 0 r ′′ rR + R′ + λ2 rR = 0 =⇒ R(a) = 0 T ′′ + c2 λ2 T = 0 Notice that the R equation is the parameterized Bessel’s equation of order zero.4. we do this to avoid unbounded solutions. Using this result for R: R(r) = C1 J0 (λr) Now we can apply the boundary conditions R(a) = &J0 (λa) = 0 =⇒ J0 (λa) = 0 c1 Now. n ≥ 1 be the infinitely many coefficients of J0 : λa = α0n =⇒ λ = λ0n α0n a These are the eigenvalues of the 0th order Bessel functions: Rn (r) = J0 (λ0n r) which are consequently. X $0 y(x) = c1 J0 (λx) + $$$$ c2 Y0 (λx) y(x) = c1 J0 (λx) We remove the second solution to this problem due to the singularity at 0. our eigenfunctions. Now we solve the T equation: ′′ Tn + c2 λ0n 2 Tn = 0 The characteristic equation of this ODE is m2 + c2 λ0n 2 = 0 =⇒ m = ±icλ0n =⇒ Tn (t) = αn cos (cλ0n t) + βn sin (cλ0n t) .

65).63). 0) = f (r) = n=1 An J0 (λ0n r) Oh Mamma Mia! We have the eigenfunction expansion for f in terms of J! Reminder This is the Fourier-Bessel expansion with Fourier-Bessel coefficients. α0n a 2 2 cα0n aJ1 (α0n ) 0 a rg(r)J0 (α0n r) dr (4.12.66) An = 2 2 (α ) J1 0n a 0 r(1 − r2 ) J0 (λ0n r) dr . Now we just have to find An and Bn . An = 2 2 J 2 (α ) a 1 0n a rf (r)J0 (λ0n r) dr 0 (4.67) 4. PARTIAL DIFFERENTIAL EQUATIONS! This gives us our nth normal mode un (r.61) and (4.80 CHAPTER 4.64) subject to: Because f and g are independent of θ. From (4. An may be determined. we use Bn cλ0n instead.61) and (4. the solution is given by (4. by (4. t) = n=i J0 (λ0n r) [αn cos (cλ0n t) + βn sin (cλ0n t)] (4. t) = Rn (r)Tn (t) and our general solution ∞ u(r.65) Now we can apply the initial conditions ∞ u(r. employing (4.63): Since λ0n = Bn = Now let’s see an example.66) Now we determine the Bn s: ∞ ut (r. we can conclude that this is a radially symmetric case. and thus. t) = n=1 Bn cλ0n J0 (λ0n r) = g(t) Instead of Bn being the Fourier-Bessel coefficient.1 Example a=1 c = 10 f (r) = 1 − r2 g(r) = 1 Solve the wave equation (4.

Let s = λ0n r and ds = λ0n dr: 1 0 r(1 − r2 )J0 (λ0n r) dr = λ0n 0 s λ0n 0 2 α0n 1− s2 J(s) λ0n 2 ds λ0n 1 = 4 α0n 2 α0n − s2 sJ0 (s)ds u dv Which we now integrate by parts as indicated: 2 u = α0n − s2 v = sJ1 (s) du = −2sds dv = sJ0 (s)ds Substituting it back in:  X0 α0n $$$ 1  2 2 $$ $sJ1 (s) α0n − s +2 4 α0n $$$$ 0 = 2 s2 J2 (s) 4 α0n α0n 0 α0n 0 = s2 J1 (s)ds 2 2 J2 α0n α0n  So then An is given by: An = J2 (α0n ) 2 2 α0n J1 (α0n ) 4 This can be simplified further from property 8 of the Bessel functions (on the handout): 0 2 U  J J1 (α0n ) − 0 (α0n ) α0n J2 (α0n ) = Remember that α0n are the zeroes of J0 . which is left as an excercise: Bn = 1 5α0n 2 J1 (α0n ) .4. RADIALLY SYMMETRIC CONDITIONS 81 In order to evalute this integral. we have to make sure that the Bessel functions are a function of one variable only – this is accomplished using substitution. This gives us the final result: An = 8 α0n 3 J1 (α0n ) Similarly. justifying the last step.12. we can solve for Bn .

82

CHAPTER 4. PARTIAL DIFFERENTIAL EQUATIONS!

4.13

Laplace’s Equation in Polar Coordinates
1 1 ∇2 u = urr + ur + 2 uθθ = 0 r r u(r, 0) = u(r, 2π), uθ (r, 0) = uθ (r, 0) = uθ (r, 2π) u(a, θ) = f (θ), 0 ≤ r < a, 0 < θ < 2π f (θ + 2π) = f (θ)

Laplace’s equation in polar coordinates is given by

(4.68)

to solve this, we use the method of separation of variables; i.e., we seek a solution of the form u(r, θ) = R(r)Θ(θ) Substituting into (4.68) 1 1 R′′ Θ + R′ Θ + 2 RΘ′′ = 0 r r
R Dividing by Θ r2

r2 R =⇒

R′′ +

R′ r

=−

Θ′′ =k Θ

Θ′′ + kθ = 0 r2 R′′ + rR′ − kR = 0

Applying the boundary conditions: u(r, 0) = u(r, 2π) R(r)Θ(0) = R(r)Θ(2π) =⇒ Θ(0) = Θ(2π) uθ (r, 0) = uθ (r, 2π) =⇒ Θ′ (0) = Θ′ (2π) Thus Θ′′ + kΘ = 0 Θ(0) = Θ(2π); Θ′ (0) = Θ′ (2π) These are called periodic boundary conditions, and have something special about them. Continuing, the characteristic equation is m2 + k = 0 =⇒ m2 = −k We have three cases again:

4.13. LAPLACE’S EQUATION IN POLAR COORDINATES

83

1. k = −μ² < 0 ⟹ m = ±μ, which gives us the trivial solution. Note that, contrary to the previous three-case scenarios, this would be case 3; make sure to check every case.

2. k = μ² = 0 ⟹ m = 0, which gives us Θ(θ) = aθ + b. Applying the boundary conditions, we have

   Θ(0) = Θ(2π) ⟹ a(0) + b = a(2π) + b ⟹ 0 = 2πa ⟹ a = 0 ⟹ Θ(θ) = b

   and Θ'(θ) = 0, so the second boundary condition is always satisfied and b is arbitrary. Now we have infinitely many solutions for Θ(θ); we only need one, so we choose b = 1, thus yielding Θ(θ) = Θ₀(θ) = 1. The eigenvalue for this problem is k = 0, with corresponding eigenfunction 1, from k₀ = μ₀² = 0 ⟹ μ₀ = 0.

3. k = μ² > 0 ⟹ m = ±iμ. This yields the solution Θ(θ) = c₁ cos(μθ) + c₂ sin(μθ). Applying the boundary conditions:

   Θ(0) = Θ(2π):  c₁ = c₁ cos(2πμ) + c₂ sin(2πμ) ⟹ c₂ = c₁ (1 − cos 2πμ)/sin 2πμ   (∗)

   Θ'(0) = Θ'(2π):  μc₂ = −c₁μ sin 2πμ + μc₂ cos 2πμ ⟹ c₂ = −c₁ sin 2πμ/(1 − cos 2πμ)   (∗∗)

   Remember that c₁ ≠ 0. Equating (∗) and (∗∗) and cancelling c₁:

   (1 − cos 2πμ)² = −sin² 2πμ
   ⟹ 1 − 2 cos 2πμ + cos² 2πμ = −sin² 2πμ
   ⟹ 2 − 2 cos 2πμ = 0
   ⟹ cos 2πμ = 1 ⟹ 2πμ = 2πm, m ∈ ℤ, m ≥ 1

The eigenvalues are the values of k, namely k = m². Now we look for the eigenfunctions. Note that the constants in the Θ equation change as the eigenvalues change, and thus we subscript them:

\[
\Theta(\theta) = \Theta_m(\theta) = \alpha_m \cos m\theta + \beta_m \sin m\theta, \qquad m \ge 1
\]

Note that we have two eigenfunctions for every eigenvalue. This is due to the fact that our boundary conditions are periodic. Cases 2 and 3 can be merged into a single expression by extending the range of m:

\[
\Theta_m(\theta) = \alpha_m \cos(m\theta) + \beta_m \sin(m\theta), \qquad m \ge 0
\]

Now let's look for R(r).
With k = m², the R equation reads

\[
r^2 R_m'' + r R_m' - m^2 R_m = 0
\]

This DE is Euler's equation. The characteristic equation is given by

\[
\lambda(\lambda - 1) + \lambda - m^2 = 0 \;\Longrightarrow\; \lambda^2 = m^2 \;\Longrightarrow\; \lambda = \pm m
\]

This is the first case for Euler's equation (two distinct real roots), so

\[
R_m(r) = C_m r^m + D_m r^{-m}
\]

If m = 0, then R₀(r) = C₀ + D₀ ln r. At r = 0 the Dm terms blow up, which is unrealistic, and therefore we set Dm = 0 for bounded solutions. Therefore, the solution for R is

\[
R_m(r) = C_m r^m = \gamma_m \left(\frac{r}{a}\right)^m, \qquad m \ge 0, \quad C_m = \frac{\gamma_m}{a^m}
\]

The mth normal mode is then

\[
u_m(r, \theta) = \left(\frac{r}{a}\right)^m (a_m \cos m\theta + b_m \sin m\theta), \qquad a_m = \gamma_m \alpha_m, \quad b_m = \gamma_m \beta_m
\]

yielding our general solution of u:

\[
u(r, \theta) = \sum_{m=0}^{\infty} \left(\frac{r}{a}\right)^m (a_m \cos m\theta + b_m \sin m\theta)
\tag{4.69}
\]

4.13.1 Example

Solve the boundary value problem (4.68) given that

\[
f(\theta) =
\begin{cases}
\pi - \theta & \text{if } 0 \le \theta < \pi \\
0 & \text{if } \pi \le \theta < 2\pi
\end{cases}
\]

The solution u is given by (4.69). Applying our boundary condition at r = a,

\[
u(a, \theta) = \sum_{m=0}^{\infty} (a_m \cos m\theta + b_m \sin m\theta)
= a_0 + \sum_{m=1}^{\infty} (a_m \cos m\theta + b_m \sin m\theta) = f(\theta)
\]

Note that this form is exactly the same as for the Fourier series (i.e., as in (3.2)). Now we apply (3.3)-(3.5) to find the constants:

\[
a_0 = \frac{1}{2\pi} \int_0^{2\pi} f(\theta)\, d\theta = \frac{1}{2\pi} \int_0^{\pi} (\pi - \theta)\, d\theta = \frac{\pi}{4}
\]

\[
a_m = \frac{1}{\pi} \int_0^{2\pi} f(\theta) \cos(m\theta)\, d\theta
= \frac{1}{\pi} \int_0^{\pi} (\pi - \theta) \cos(m\theta)\, d\theta
= -\frac{1}{\pi m^2}\,\{(-1)^m - 1\}
\]

\[
b_m = \frac{1}{\pi} \int_0^{2\pi} f(\theta) \sin(m\theta)\, d\theta
= \frac{1}{\pi} \int_0^{\pi} (\pi - \theta) \sin(m\theta)\, d\theta
= \frac{1}{m}
\]
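As a quick check, the closed-form coefficients can be compared with direct quadrature, and the truncated series (4.69) can then be evaluated inside the disk. This is a sketch of my own, not part of the notes, and it assumes NumPy and SciPy.

```python
# Hedged check of a_0, a_m, b_m for f(theta) = pi - theta on [0, pi), 0 on [pi, 2pi),
# followed by a truncated evaluation of (4.69).
import numpy as np
from scipy.integrate import quad

M = 8                                                                # number of modes kept
a0 = quad(lambda t: np.pi - t, 0, np.pi)[0] / (2 * np.pi)            # should be pi/4
am = [quad(lambda t: (np.pi - t) * np.cos(m * t), 0, np.pi)[0] / np.pi for m in range(1, M + 1)]
bm = [quad(lambda t: (np.pi - t) * np.sin(m * t), 0, np.pi)[0] / np.pi for m in range(1, M + 1)]

print(a0, np.pi / 4)
for m in range(1, M + 1):
    print(m, am[m - 1], -((-1)**m - 1) / (np.pi * m**2), bm[m - 1], 1.0 / m)

def u(r, theta, a=1.0):
    """Truncated series (4.69) for the Dirichlet problem on the disk of radius a."""
    s = a0
    for m in range(1, M + 1):
        s += (r / a)**m * (am[m - 1] * np.cos(m * theta) + bm[m - 1] * np.sin(m * theta))
    return s

print(u(0.5, 1.0))    # sample evaluation inside the disk
```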

4.13.2 Example

We shall now consider only θ bounded between two angles α and β; the two edges generated at θ = α and θ = β will be held at zero. Solve Laplace's equation over the wedge 0 < r < 1, 0 < θ < π/4 (so a = 1) subject to

\[
u(r, 0) = u\!\left(r, \tfrac{\pi}{4}\right) = 0, \qquad \frac{\partial u}{\partial r}(1, \theta) = \sin\theta
\]

We seek a solution of the form u(r, θ) = R(r)Θ(θ). Substituting in:

\[
R''\Theta + \frac{1}{r} R'\Theta + \frac{1}{r^2} R\Theta'' = 0
\]

Rearranging,

\[
\frac{r^2\left(R'' + \frac{1}{r}R'\right)}{R} = -\frac{\Theta''}{\Theta} = k = \mu^2 > 0
\]

since, unlike before, cases 1 and 2 generate only the trivial solution (check). The Θ problem is

\[
\Theta'' + \mu^2\Theta = 0, \qquad \Theta(0) = \Theta\!\left(\tfrac{\pi}{4}\right) = 0
\]

As an exercise, check that this is a regular Sturm-Liouville problem (it is). Then Θ(0) = 0 eliminates the cosine,

\[
\Theta(\theta) = c_1 \cos(\mu\theta) + c_2 \sin(\mu\theta) = c_2 \sin(\mu\theta)
\]

and

\[
\Theta\!\left(\tfrac{\pi}{4}\right) = 0 = c_2 \sin\!\left(\frac{\mu\pi}{4}\right)
\;\Longrightarrow\; \sin\frac{\mu\pi}{4} = 0
\;\Longrightarrow\; \frac{\mu\pi}{4} = m\pi, \; m \ge 1
\;\Longrightarrow\; \mu = \mu_m = 4m, \; m \ge 1
\]

Remember that our eigenvalues are k = μ² = 16m², with eigenfunctions Θ(θ) = Θm(θ) = sin(4mθ). For R,

\[
\frac{r^2\left(R'' + \frac{1}{r}R'\right)}{R} = 16m^2
\;\Longrightarrow\;
r^2 R_m'' + r R_m' - 16m^2 R_m = 0
\]

This is Euler's equation; check that its characteristic equation is

\[
\lambda(\lambda - 1) + \lambda - 16m^2 = 0 \;\Longrightarrow\; \lambda = \pm 4m, \quad m \ge 1
\]

so the solution is

\[
R_m(r) = C_m r^{4m} + D_m r^{-4m}
\]

We cancel out Dm for a bounded solution at r = 0. Our final solution is therefore given by

\[
u(r, \theta) = \sum_{m=1}^{\infty} C_m r^{4m} \sin(4m\theta)
\]

Applying the boundary condition at r = 1,

\[
\frac{\partial u}{\partial r}(1, \theta) = \sum_{m=1}^{\infty} 4m\, C_m \sin(4m\theta) = \sin\theta
\]

This is the half-range sine series expansion of sin θ on (0, π/4):

\[
C_m = \frac{1}{4m} \cdot \frac{8}{\pi} \int_0^{\pi/4} \sin\theta \sin(4m\theta)\, d\theta
= \frac{2}{\pi m} \int_0^{\pi/4} \sin\theta \sin(4m\theta)\, d\theta
= \frac{4\sqrt{2}}{\pi} \cdot \frac{(-1)^{m+1}}{16m^2 - 1}
\]
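A small numerical check of the wedge coefficients: 4mCm should be the half-range sine coefficient of sin θ on (0, π/4), so quadrature and the closed form should agree. This is again a sketch of mine assuming SciPy, not code from the notes.

```python
# Hedged check of the wedge coefficients C_m and a truncated solution u(r, theta).
import numpy as np
from scipy.integrate import quad

def Cm_quad(m):
    # C_m = (1 / (4m)) * (8 / pi) * integral_0^{pi/4} sin(theta) sin(4 m theta) dtheta
    integral = quad(lambda t: np.sin(t) * np.sin(4 * m * t), 0, np.pi / 4)[0]
    return (8 / np.pi) * integral / (4 * m)

def Cm_closed(m):
    return (4 * np.sqrt(2) / np.pi) * (-1)**(m + 1) / (16 * m**2 - 1)

for m in range(1, 6):
    print(m, Cm_quad(m), Cm_closed(m))

def u(r, theta, M=20):
    """Truncated series for the wedge solution, 0 < r < 1, 0 < theta < pi/4."""
    return sum(Cm_closed(m) * r**(4 * m) * np.sin(4 * m * theta) for m in range(1, M + 1))

print(u(0.5, np.pi / 8))    # sample evaluation inside the wedge
```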

4.14 Non-Homogeneous PDEs

Suppose that we want to solve

\[
u_{tt} = c^2 u_{xx} + F(x, t) \quad \leftarrow \text{non-homogeneous wave equation}
\]
\[
u_t = c^2 u_{xx} + F(x, t) \quad \leftarrow \text{non-homogeneous heat equation}
\]

Steps for solving these equations:

1. Find the eigenvalues and eigenfunctions of the Sturm-Liouville (SL) problem associated with the homogeneous PDE.
2. Generate an eigenfunction expansion from step one to solve the non-homogeneous PDE.

4.14.1 Example

Solve the one-dimensional heat equation

\[
\begin{cases}
u_t = u_{xx} + xe^{-t} \\
u(0, t) = 0, \quad u_x(1, t) + u(1, t) = 0 \\
u(x, 0) = 0
\end{cases}
\]

1. Find the SL problem associated with the homogeneous PDE:

\[
u_t = u_{xx}, \qquad u(0, t) = 0, \quad u_x(1, t) + u(1, t) = 0
\]

To find u, we seek a solution of the form u = X(x)T(t). Subbing into the DE,

\[
XT' = X''T \;\Longrightarrow\; \frac{T'}{T} = \frac{X''}{X} = -\mu^2 < 0 \quad \text{(to avoid trivial solutions)}
\]

This is an SL problem of order 2 (look at the X derivative), written as

\[
X'' + \mu^2 X = 0, \qquad X(0) = 0, \quad X'(1) + X(1) = 0
\]

The characteristic equation is given by

\[
m^2 + \mu^2 = 0 \;\Longrightarrow\; m = \pm i\mu
\]

so

\[
X(x) = c_1 \cos\mu x + c_2 \sin\mu x = c_2 \sin\mu x \qquad (\text{by } X(0) = 0)
\]

Applying the second boundary condition,

\[
X'(1) + X(1) = 0 \;\Longrightarrow\; c_2\mu\cos\mu + c_2\sin\mu = 0 \;\Longrightarrow\; \tan\mu = -\mu
\]

Notice that we now have infinitely many solutions μn, n ≥ 1.

2. Now we solve the non-homogeneous PDE by seeking a solution given by

\[
u(x, t) = \sum_{n=1}^{\infty} \alpha_n X_n(x)
\]

Notice that the coefficient must depend on t as well, so we rewrite the constant as αn(t):

\[
u(x, t) = \sum_{n=1}^{\infty} \alpha_n(t) X_n(x) = \sum_{n=1}^{\infty} \alpha_n(t) \sin(\mu_n x)
\]

Reminder: The boundary conditions of u are satisfied by the eigenfunction expansion.

Now we find αn(t) for all n ≥ 1. Substituting into the non-homogeneous PDE,

\[
u_t = \sum_{n=1}^{\infty} \alpha_n'(t) \sin(\mu_n x), \qquad
u_{xx} = -\sum_{n=1}^{\infty} \alpha_n(t)\mu_n^2 \sin(\mu_n x)
\]

\[
\sum_{n=1}^{\infty} \alpha_n'(t) \sin(\mu_n x)
= -\sum_{n=1}^{\infty} \alpha_n(t)\mu_n^2 \sin(\mu_n x) + xe^{-t}
\tag{$\ast\ast$}
\]

Now let's find the eigenfunction expansion of the forcing term xe^{-t} in (∗∗), as it is not yet an infinite sum; we need it to be a summation so that we can manipulate it more easily:

\[
xe^{-t} = \sum_{n=1}^{\infty} \gamma_n(t) \sin(\mu_n x)
\]

This is the generalized Fourier series for xe^{-t}, with

\[
\gamma_n(t) = \int_0^1 xe^{-t} \sin(\mu_n x)\, dx
= e^{-t} \int_0^1 x \sin(\mu_n x)\, dx
= e^{-t}\, \frac{\sin\mu_n - \mu_n\cos\mu_n}{\mu_n^2}
= c_n e^{-t}
\]
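The eigenvalues μn have no closed form, but they are easy to bracket numerically: tan μ + μ is strictly increasing on each branch of the tangent and changes sign exactly once on (kπ + π/2, (k+1)π). The sketch below is my own, assuming SciPy's brentq; it locates the first few μn and forms the corresponding cn.

```python
# Hedged sketch: locate the eigenvalues tan(mu) = -mu and form
# c_n = (sin(mu_n) - mu_n cos(mu_n)) / mu_n^2.
import numpy as np
from scipy.optimize import brentq

def mu_roots(N, eps=1e-9):
    # One root of tan(mu) + mu = 0 lies in each interval (k*pi + pi/2, (k+1)*pi), k = 0, 1, ...
    g = lambda mu: np.tan(mu) + mu
    return np.array([brentq(g, k * np.pi + np.pi / 2 + eps, (k + 1) * np.pi - eps)
                     for k in range(N)])

mu = mu_roots(5)
c = (np.sin(mu) - mu * np.cos(mu)) / mu**2

for n, (mu_n, c_n) in enumerate(zip(mu, c), start=1):
    print(n, mu_n, np.tan(mu_n) + mu_n, c_n)    # middle column should be ~0
```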

Now we sub this into (∗∗):

\[
\sum_{n=1}^{\infty} \alpha_n'(t)\sin(\mu_n x)
= -\sum_{n=1}^{\infty} \alpha_n(t)\mu_n^2 \sin(\mu_n x) + \sum_{n=1}^{\infty} c_n e^{-t} \sin(\mu_n x)
\]
\[
\Longrightarrow\; \sum_{n=1}^{\infty} \bigl[\alpha_n'(t) + \alpha_n(t)\mu_n^2 - c_n e^{-t}\bigr]\sin(\mu_n x) = 0
\]

Since this is true for all values of x, the bracketed expression must be equal to zero; i.e.,

\[
\alpha_n' + \mu_n^2\alpha_n - c_n e^{-t} = 0
\]

which is a first-order linear ODE. Let's find the integrating factor and solve it:

\[
I(t) = e^{\int \mu_n^2\, dt} = e^{\mu_n^2 t}
\]

Multiplying the DE by I,

\[
\bigl(e^{\mu_n^2 t}\alpha_n\bigr)' = c_n e^{-t + \mu_n^2 t} = c_n e^{(\mu_n^2 - 1)t}
\]
\[
e^{\mu_n^2 t}\alpha_n = c_n \int e^{(\mu_n^2 - 1)t}\, dt = \frac{c_n}{\mu_n^2 - 1}\, e^{(\mu_n^2 - 1)t} + k_n
\]
\[
\alpha_n(t) = \frac{c_n}{\mu_n^2 - 1}\, e^{-t} + k_n e^{-\mu_n^2 t}
\]

We are still left with kn, so we solve for it using the initial condition:

\[
u(x, 0) = 0 = \sum_{n=1}^{\infty} \alpha_n(0) \sin(\mu_n x)
\;\Longrightarrow\; \alpha_n(0) = 0
\;\Longrightarrow\; k_n = -\frac{c_n}{\mu_n^2 - 1}
\]

so that

\[
\alpha_n(t) = \frac{c_n}{\mu_n^2 - 1}\left(e^{-t} - e^{-\mu_n^2 t}\right)
\]
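Putting the pieces together, a truncated solution of the non-homogeneous problem can be assembled directly from μn, cn, and αn(t). This is a self-contained sketch of mine (assuming SciPy), not code from the notes; it repeats the root-finding from the previous snippet so it can run on its own.

```python
# Hedged sketch: truncated solution u(x, t) = sum_n alpha_n(t) sin(mu_n x) of the example above.
import numpy as np
from scipy.optimize import brentq

N = 20
mu = np.array([brentq(lambda m: np.tan(m) + m,
                      k * np.pi + np.pi / 2 + 1e-9, (k + 1) * np.pi - 1e-9)
               for k in range(N)])
c = (np.sin(mu) - mu * np.cos(mu)) / mu**2

def u(x, t):
    # alpha_n(t) = c_n (e^{-t} - e^{-mu_n^2 t}) / (mu_n^2 - 1), so that alpha_n(0) = 0
    alpha = c * (np.exp(-t) - np.exp(-mu**2 * t)) / (mu**2 - 1)
    return np.sum(alpha * np.sin(mu * x))

print(u(0.5, 0.1), u(0.5, 1.0))    # sample evaluations; u(x, 0) is ~0 by construction
```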
