
Introduction to Partial Differential Equations

A partial differential equation is one which involves one or more partial derivatives. The
order of the highest derivative is called the order of the equation. A partial differential equation
contains more than one independent variable. Here we shall consider only partial differential
equations in two independent variables x and y, so that z = f(x,y). We shall denote ∂z/∂x by p and ∂z/∂y by q.

A partial differential equation is linear if it is of the first degree in the dependent variable
and its partial derivatives. If each term of such an equation contains either the dependent variable
or one of its derivatives, the equation is said to be homogeneous, otherwise it is non
homogeneous.

A Partial Differential Equation (PDE) is a differential equation that contains unknown


multivariable functions and their partial derivatives. PDEs are used to formulate problems
involving functions of several variables. These equations are used to describe phenomena such
as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics.
A differential equation containing one or more partial derivatives is known as a partial
differential equation.

A differential equation with one independent variable is called an ordinary differential


equation. An example of such an equation would be where is the
dependent variable, and is the independent variable.

What if there is more than one independent variable? Then the differential equation is called a
partial differential equation. An example of such an equation would be
subject to certain conditions: where is the dependent variable, and
and are the independent variables.

FORMATION OF PARTIAL DIFFERENTIAL EQUATIONS

Partial differential equations can be obtained by the elimination of arbitrary constants or by the
elimination of arbitrary functions.

By the elimination of arbitrary constants


Let us consider the function
φ( x, y, z, a, b ) = 0 ------------- (1)
where a & b are arbitrary constants
Differentiating equation (1) partially w.r.t x & y, we get

∂φ/∂x + p ∂φ/∂z = 0 (2)
∂φ/∂y + q ∂φ/∂z = 0 (3)

Eliminating a and b from equations (1), (2) and (3), we get a partial differential equation of the
first order of the form f (x,y,z, p, q) = 0

Example 1
Eliminate the arbitrary constants a & b from z = ax + by + ab
Consider z = ax + by + ab (1)
Differentiating (1) partially w.r.t x & y, we get

p = a (2)

q = b (3)

Using (2) & (3) in (1), we get


z = px +qy+ pq
which is the required partial differential equation.
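As a quick check of Example 1 (not part of the original notes), the same elimination can be reproduced with a computer algebra system. The sympy sketch below assumes the surface z = ax + by + ab; the variable names are illustrative.

    import sympy as sp

    x, y, a, b = sp.symbols('x y a b')
    z = a*x + b*y + a*b              # surface containing the arbitrary constants a and b

    p = sp.diff(z, x)                # partial derivative w.r.t. x, equals a
    q = sp.diff(z, y)                # partial derivative w.r.t. y, equals b

    # The constants are eliminated because p*x + q*y + p*q reproduces z identically,
    # so every member of the family satisfies z = px + qy + pq.
    print(sp.simplify(p*x + q*y + p*q - z))   # prints 0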
Example 2
Form the partial differential equation by eliminating the arbitrary constants a and b from
z = ( x2 +a2 ) ( y2 + b 2)
Given z = ( x2 +a2 ) ( y2 + b2) ……..(1)
Differentiating (1) partially w.r.t x & y , we get
p = 2x (y² + b²)
q = 2y (x² + a²)

Substituting the values of p and q in (1), we get


4xyz = pq
which is the required partial differential equation.
Example 3

Find the partial differential equation of the family of spheres of radius one whose centre lie in the
xy - plane.

The equation of the sphere is given by

( x –a )2 + ( y- b) 2 + z2 = 1 (1)

Differentiating (1) partially w.r.t x & y , we get


2 (x-a ) + 2 zp = 0
2 ( y-b ) + 2 zq = 0

From these equations we obtain


x-a = -zp (2)
y -b = -zq (3)
Using (2) and (3) in (1), we get
z2p2 + z2q2 + z 2 = 1

or z2 ( p2 + q2 + 1) = 1

Example 4
Eliminate the arbitrary constants a, b & c from

and form the partial differential equation.


The given equation is

(1)
Differentiating (1) partially w.r.t. x & y, we get

Therefore we get

(2)

(3)
Again differentiating (2) partially w.r.t. ‘x’, we get

(4)
Multiplying (4) by x, we get

From (2), we have

or p2

By the elimination of arbitrary functions


Let u and v be any two functions of x, y and z, connected by an arbitrary relation. This relation can be expressed as
u = f(v) ______________ (1)
Differentiating (1) partially w.r.t x & y and eliminating the arbitrary functions from
these relations, we get a partial differential equation of the first order of the form
f(x, y, z, p, q ) = 0.

Example 5

Obtain the partial differential equation by eliminating 'f' from z = ( x+y ) f ( x2 - y2 )

Let us now consider the equation

z = (x+y ) f(x2- y2) (1)


Differentiating (1) partially w.r.t x & y , we get
p = ( x + y ) f ' ( x2 - y2 ) . 2x + f ( x2 - y2 )
q = ( x + y ) f ' ( x2 - y2 ) . (-2y) + f ( x2 - y2 )
p - f ( x2 - y2 ) = ( x + y ) f ' ( x2 - y2 ) . 2x (2)
q - f ( x2 - y2) = ( x + y ) f ' ( x2 - y2 ) . (-2y) (3)

Hence, dividing (2) by (3), we get

[p - f(x² - y²)] / [q - f(x² - y²)] = -x/y
i.e., py - y f(x² - y²) = -qx + x f(x² - y²)


i.e., py + qx = (x + y) f(x² - y²)
Therefore, by (1), we have py + qx = z.

Example 6

Form the partial differential equation by eliminating the arbitrary function f from z = e y f (x + y)

Consider z = ey f ( x +y ) ( 1)

Differentiating (1) partially w .r. t x & y, we get

p = ey f ' (x + y)

q = ey f '(x + y) + f(x + y). ey


Hence, we have

q=p+z

Example 7
Form the PDE by eliminating f and  from
Consider (1)
Differentiating (1) partially w.r.t. x and y, we get
(2)
(3)
Differentiating (2) and (3) again partially w.r.t. x and y, we get

i.e.,
or

Exercises:
1. Form the partial differential equation by eliminating the arbitrary constants ‘a’ & ‘b’ from the
following equations.
(i)

(ii)

(iii)
(iv)
(v)

2. Find the PDE of the family of spheres of radius 1 having their centres on the xy-plane. {Hint: (x - a)² + (y - b)² + z² = 1}

3. Find the PDE of all spheres whose centres lie on the (i) z-axis (ii) x-axis

4. Form the partial differential equations by eliminating the arbitrary functions in the following
cases.


From Ordinary to a Partial Differential Equation


Assume we put a spherical steel ball that is at room temperature in hot water. The temperature of
the ball is going to increase with time. What if we wish to find what this temperature vs. time
profile would look like for the ball? We would develop a mathematical model for this based on
the law of conservation of heat energy. From an energy balance,
Heat gained - Heat lost= Heat stored (1)

The energy stored in the mass is given by


Heat stored in the ball = m C θ (2)
where
m = mass of the ball,
C = specific heat of the ball,
θ = temperature of the ball at a given time t,

The rate of heat gained by the ball due to convection is


Rate of heat gained due to convection = h A (θa − θ), (3)
where
h = the convective cooling coefficient, W/(m²·K),
A = surface area of the ball,
θa = ambient temperature of the hot water,
As you can see we have the expression for the rate at which heat is gained (not the heat gained),
so we rewrite the heat energy balance as
Rate at which heat is gained - Rate at which heat is lost
=Rate at which heat is stored (4)
This gives us
h A (θa − θ) = m C dθ/dt (5)
Equation (5) is a first order ordinary differential equation that, when solved with the initial
condition θ(0) = θ0, would give us the temperature of the spherical ball as a function of time.
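As a numerical illustration of the lumped model (a sketch only: the balance hA(θa − θ) = mC dθ/dt used below is the reconstruction given above, and all parameter values are assumptions chosen for illustration, not data from the notes):

    import numpy as np

    # Assumed illustrative parameters (not taken from the notes)
    m, C = 0.1, 450.0                 # mass (kg) and specific heat (J/(kg*K)) of the ball
    h, A = 350.0, 0.005               # convection coefficient (W/(m^2*K)) and surface area (m^2)
    theta_a, theta_0 = 90.0, 25.0     # hot-water temperature and initial ball temperature (deg C)

    t = np.linspace(0.0, 300.0, 7)    # times in seconds
    # Closed-form solution of m*C*dtheta/dt = h*A*(theta_a - theta), theta(0) = theta_0
    theta = theta_a + (theta_0 - theta_a) * np.exp(-h * A * t / (m * C))
    for ti, Ti in zip(t, theta):
        print(f"t = {ti:6.1f} s   theta = {Ti:6.2f} C")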
However, we made a large assumption in deriving Equation (5) - we assumed that the
system is lumped. What does a lumped system mean? It implies that the internal conduction in
the sphere is large enough that the temperature throughout the ball is uniform. This allows us to
make the assumption that the temperature is only a function of time and not of the location in the
spherical ball. Whether the system can be considered lumped in this case depends on the material of the
ball, its geometry, and the heat exchange factor (convection coefficient) between the ball and its
surroundings.
What happens if the system cannot be treated as a lumped system? In that case, the
temperature of the ball will be a function not only of time, but also of location.

In spherical co-ordinates, the location is given by the (r, θ, φ) co-ordinates.

Figure 1 Spherical Coordinate System.

The differential equation would now be a partial differential equation and is given as

k (∂²θ/∂r² + (2/r) ∂θ/∂r) = ρ C ∂θ/∂t, with −k ∂θ/∂r = h (θ − θa) at the surface (6)

where
k = thermal conductivity of the material,
ρ = density of the material,
As an introduction to solving PDEs, most textbooks concentrate on linear second order PDEs with
two independent variables and one dependent variable. The general form of such an equation is
A ∂²u/∂x² + B ∂²u/∂x∂y + C ∂²u/∂y² + D = 0 (7)

where A, B and C are functions of x and y, and D is a function of x, y, u, ∂u/∂x and ∂u/∂y.


Depending on the value of B² − 4AC, a 2nd order linear PDE can be classified into three
categories.
1. if B² − 4AC < 0, it is called elliptic
2. if B² − 4AC = 0, it is called parabolic
3. if B² − 4AC > 0, it is called hyperbolic
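The classification test can be packaged as a small helper. This is a minimal sketch assuming the discriminant B² − 4AC of the general form (7); the function name is mine.

    def classify_second_order_pde(A: float, B: float, C: float) -> str:
        """Classify A*u_xx + B*u_xy + C*u_yy + D = 0 by the sign of B^2 - 4AC."""
        disc = B * B - 4.0 * A * C
        if disc < 0:
            return "elliptic"
        if disc == 0:
            return "parabolic"
        return "hyperbolic"

    print(classify_second_order_pde(1, 0, 1))     # Laplace equation -> elliptic
    print(classify_second_order_pde(0.5, 0, 0))   # heat equation (alpha = 0.5) -> parabolic
    print(classify_second_order_pde(4, 0, -1))    # wave equation (c^2 = 4) -> hyperbolic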
Elliptic Equation
The Laplace equation for steady state temperature in a plate is an example of an elliptic second
order linear partial differential equation. The Laplace equation for steady state temperature in a
plate is given by
∂²T/∂x² + ∂²T/∂y² = 0 (8)

Using the general form of second order linear PDEs with one dependent variable and two
independent variables,
A = 1, B = 0, C = 1,
gives
B² − 4AC = −4 < 0.
This classifies Equation (8) as elliptic.

Parabolic Equation
The heat conduction equation is an example of a parabolic second order linear partial differential
equation. The heat conduction equation is given by
∂T/∂t = α ∂²T/∂x² (9)

Using the general form of second order linear PDEs with one dependent variable and two
independent variables (with t in place of y),
A = α, B = 0, C = 0,
gives
B² − 4AC = 0.
This classifies Equation (9) as parabolic.


Hyperbolic Equation
The wave equation is an example of a hyperbolic second order linear partial differential
equation. The wave equation is given by
∂²y/∂t² = c² ∂²y/∂x² (10)
Using the general form of second order linear PDEs with one dependent variable and two
independent variables (here x and t),
A = c², B = 0, C = −1,
gives
B² − 4AC = 4c² > 0.
This classifies Equation (10) as hyperbolic.

Examples of some important PDEs:

(1) One-dimensional wave equation: ∂²u/∂t² = c² ∂²u/∂x²

(2) One-dimensional heat equation: ∂u/∂t = c² ∂²u/∂x²

(3) Two-dimensional Laplace equation: ∂²u/∂x² + ∂²u/∂y² = 0

(4) Two-dimensional Poisson equation: ∂²u/∂x² + ∂²u/∂y² = f(x,y)

Note that for PDEs one typically uses some other letter, such as u, for the unknown function instead of y,
since y now quite often shows up as one of the independent variables of the multivariable function.

In general we can use the same terminology to describe PDEs as in the case of ODEs. For
starters, we will call any equation involving one or more partial derivatives of a multivariable
function a partial differential equation. The order of such an equation is the highest order
partial derivative that shows up in the equation. In addition, the equation is called linear if it is
of the first degree in the unknown function u, and its partial derivatives, ux, uxx, uy, etc. (this
means that the highest power of the function, u, and its derivatives is just equal to one in each
term in the equation, and that only one of them appears in each term). If each term in the
equation involves either u or one of its partial derivatives, then the equation is classified as
homogeneous.
Take a look at the list of PDEs above. Try to classify each one using the terminology given
above. Note that the f(x,y) function in the Poisson equation is just a function of the variables x
and y, it has nothing to do with u(x,y).

Answers: all of these PDEs are second order, and are linear. All are also homogeneous except
for the fourth one, the Poisson equation, as the f(x,y) term on the right hand side doesn’t involve
u or any of its derivatives.

The reason for defining the classifications linear and homogeneous for PDEs is to bring up the
principle of superposition. This excellent principle (which also shows up in the study of linear
homogeneous ODEs) is useful exactly whenever one considers solutions to linear homogeneous
PDEs. The idea is that if one has two functions, u1 and u2, that satisfy a linear homogeneous
differential equation, then, since taking the derivative of a sum of functions is the same as taking
the sum of their derivatives, as long as the highest powers of the function and its derivatives in the
equation are one (i.e., it's linear), and each term contains the function or one of its derivatives (i.e., it's
homogeneous), it is a straightforward exercise to see that the sum of u1 and u2 will also be a
solution to the differential equation. In fact, so will any linear combination a·u1 + b·u2, where a
and b are constants.

For instance, the two functions and are both solutions for the first-order linear
homogeneous PDE:

(5)

It’s a simple exercise to check that and are also solutions


to the same PDE (as will be any linear combination of and )

This principle is extremely important, as it enables us to build up particular solutions out of


infinite families of solutions through the use of Fourier series.
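The specific functions and PDE referred to above were lost in extraction. To illustrate the same superposition idea, the sympy sketch below uses two harmonic functions of my own choosing and checks that each of them, and any linear combination, satisfies the two-dimensional Laplace equation:

    import sympy as sp

    x, y, a, b = sp.symbols('x y a b')
    laplacian = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2)

    u1 = x**2 - y**2               # an illustrative solution of Laplace's equation
    u2 = sp.exp(x) * sp.sin(y)     # another illustrative solution

    print(laplacian(u1))                         # 0
    print(sp.simplify(laplacian(u2)))            # 0
    print(sp.simplify(laplacian(a*u1 + b*u2)))   # 0 for any constants a and b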

SOLUTIONS OF A PARTIAL DIFFERENTIAL EQUATION

A solution or integral of a partial differential equation is a relation connecting the dependent and
the independent variables which satisfies the given differential equation. A partial differential
equation can result both from elimination of arbitrary constants and from elimination of arbitrary
functions. But, there is a basic difference in the two forms of solutions. A solution containing as
many arbitrary constants as there are independent variables is called a complete integral. Here,
the partial differential equations contain only two independent variables so that the complete
integral will include two constants. A solution obtained by giving particular values to the
arbitrary constants in a complete integral is called a particular integral.
Singular Integral
Let f (x,y,z,p,q) = 0 (1)
be the partial differential equation whose complete integral is
φ(x,y,z,a,b) = 0 (2)

where ‘a’ and ‘b’ are arbitrary constants.

Differentiating (2) partially w.r.t. a and b, we obtain

∂φ/∂a = 0 (3)

and ∂φ/∂b = 0 (4)

The eliminant of ‘a’ and ‘b’ from the equations (2), (3) and (4), when it exists, is called the
singular integral of (1).

General Integral
In the complete integral (2), put b = F(a), we get
φ(x, y, z, a, F(a)) = 0 (5)
Differentiating (5) partially w.r.t. a, we get

∂φ/∂a + F′(a) ∂φ/∂b = 0 (6)

The eliminant of ‘a’ between (5) and (6), if it exists, is called the general integral of (1).

SOLUTION OF STANDARD TYPES OF FIRST ORDER PARTIAL DIFFERENTIAL


EQUATIONS.

The first order partial differential equation can be written as

f(x,y,z, p,q) = 0,

where p = ∂z/∂x and q = ∂z/∂y. In this section, we shall solve some standard forms of


equations by special methods.

Standard I : f (p,q) = 0. i.e, equations containing p and q only.

Suppose that z = ax + by +c is a solution of the equation f(p,q) = 0, where f (a,b) = 0.

Solving this for b, we get b = F (a).

Hence the complete integral is z = ax + F(a)y +c (1)

Now, the singular integral is obtained by eliminating a & c between


z = ax + y F(a) + c
Differentiating it partially w.r.t ‘a’, we get
0 = x + y F'(a)
While differentiating partially w.r.t ‘c’, we get, 0 = 1,

The last equation being absurd, the singular integral does not exist in this case.

To obtain the general integral, let us take c = φ(a).

Then, z = ax + F(a) y + φ(a) (2)


Differentiating (2) partially w.r.t. a, we get
0 = x + F'(a) y + φ'(a) (3)
Eliminating ‘a’ between (2) and (3), we get the general solution.

Example 8

Solve pq = 2

The given equation is of the form f (p,q) = 0

The solution is z = ax + by +c, where ab = 2.


Solving, b = 2/a

The complete integral is

z = ax + (2/a) y + c (1)

Differentiating (1) partially w.r.t ‘c’, we

0 = 1,

which is absurd. Hence, there is no singular integral.

To find the general integral, put c = φ(a) in (1); we get

z = ax + (2/a) y + φ(a)

Differentiating partially w.r.t ‘a’, we get

0 = x − 2y/a² + φ'(a)

Eliminating ‘a’ between these equations gives the general integral.
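A symbolic check of the complete integral found in Example 8 (a sympy sketch):

    import sympy as sp

    x, y, a, c = sp.symbols('x y a c')
    z = a*x + (2/a)*y + c        # complete integral z = ax + (2/a)y + c

    p = sp.diff(z, x)            # equals a
    q = sp.diff(z, y)            # equals 2/a
    print(sp.simplify(p*q))      # prints 2, so pq = 2 holds for every a and c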


Example 9

Solve pq + p +q = 0

The given equation is of the form f (p,q) = 0.

The solution is z = ax + by +c, where ab + a + b = 0.

Solving, we get b = −a/(a + 1).

Hence the complete integral is z = ax − [a/(a + 1)] y + c (1)

Differentiating (1) partially w.r.t. ‘c’, we get

0 = 1.

The above equation being absurd, there is no singular integral for the given partial differential
equation.

To find the general integral, put c = φ(a) in (1), we have

(2)

Differentiating (2) partially w.r.t a, we get

(3)

Example 10

Solve p2 + q2 = npq

The solution of this equation is z = ax + by + c, where a2 + b2 = nab.

Solving, we get

Hence the complete integral is


(1)

Differentiating (1) partially w.r.t c, we get 0 = 1, which is absurd. Therefore, there is no singular
integral for the given equation.
To find the general Integral, put C =  (a), we get

Differentiating partially w.r.t. ‘a’, we have

The eliminant of ‘a’ between these equations gives the general integral

Standard II : Equations of the form f (x,p,q) = 0, f (y,p,q) = 0 and f (z,p,q) = 0. i.e, one of the
variables x,y,z occurs explicitly.

(i) Let us consider the equation f (x,p,q) = 0.


Since z is a function of x and y, we have
dz = (∂z/∂x) dx + (∂z/∂y) dy
or dz = pdx + qdy

Assume that q = a.

Then the given equation takes the form f (x, p,a ) = 0

Solving, we get p = (x,a).

Therefore, dz = (x,a) dx + a dy.

(ii) Let us consider the equation f(y,p,q) = 0. Assume that p = a.

Then the equation becomes f (y,a,q) = 0. Solving, we get q = φ(y,a).

Therefore, dz = adx + φ(y,a) dy.

Integrating, z = ax + ∫φ(y,a) dy + b, which is a complete integral.

(iii) Let us consider the equation f(z, p, q) = 0.

Assume that q = ap.

Then the equation becomes f (z, p, ap) = 0


Solving, we get p = (z,a). Hence dz = (z,a) dx + a (z, a) dy.

i.e.,

Integrating, , which is a complete integral.

Example 11
Solve q = xp + p2
Given q = xp + p2 …(1)
This is of the form f (x,p,q) = 0.
Put q = a in (1), we get
a = xp + p2
i.e, p2 + xp –a = 0.
Therefore,

Integrating,

Thus,

Example 12
Solve q = yp2
This is of the form f (y,p,q) = 0
Then, put p = a.
Therefore, the given equation becomes q = a²y.
Since dz = pdx + qdy, we have
dz = a dx + a²y dy
Integrating, we get
z = ax + a²y²/2 + b
Example 13
Solve 9 (p2z + q2) = 4
This is of the form f (z,p,q) = 0
Then, putting q = ap, the given equation becomes
2 2 2
9 (p z + a p ) = 4

Therefore, and

Since ,
Multiplying both sides by , we get

, which on further integration gives,

or

Standard III : f1(x,p) = f2 (y,q). ie, equations in which ‘z’ is absent and the variables are
separable.

Let us assume as a trivial solution that
f₁(x, p) = f₂(y, q) = a (say).
Hence p = F₁(x, a) and q = F₂(y, a), so that dz = F₁(x, a) dx + F₂(y, a) dy.

Therefore,

z = ∫F₁(x, a) dx + ∫F₂(y, a) dy + b, which is the complete integral of the given equation


containing two constants a and b. The singular and general integrals are found in the usual way.

Example 14

Solve pq = xy
The given equation can be written as p/x = y/q = a (say).

Therefore, p/x = a implies p = ax, and y/q = a implies q = y/a.

Since dz = pdx + qdy, we have dz = ax dx + (y/a) dy, which on integration gives z = ax²/2 + y²/2a + b.
Example 15

Solve
The given equation can be written as (say)
implies
and Implies
But dz = pdx + qdy
i.e., dx+ dy
Integrating, we get

Standard IV (Clairaut’s) form


Equations of the type z = px + qy + f (p,q) ------(1) are known as Clairaut's form.
Differentiating (1) partially w.r.t x and y, we get p = a and q = b.

Therefore, the complete integral is given by


z = ax + by + f (a,b).

Example 16

Solve z = px + qy +pq

The given equation is in Clairaut's form.

Putting p = a and q = b, we have

z = ax + by + ab (1)

which is the complete integral.

To find the singular integral, differentiating (1) partially w.r.t a and b, we get

0=x+b

0=y+a

Therefore we have, a = -y and b= -x.

Substituting the values of a & b in (1), we get


z = -xy –xy + xy

or z + xy = 0, which is the singular integral.

To get the general integral, put b = φ(a) in (1).

Then z = ax + φ(a) y + a φ(a) (2)

Differentiating (2) partially w.r.t a, we have

0 = x + φ′(a) y + a φ′(a) + φ(a) (3)

Eliminating ‘a’ between (2) and (3), we get the general integral.
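Both integrals of Example 16 can be verified symbolically (a sympy sketch):

    import sympy as sp

    x, y, a, b = sp.symbols('x y a b')

    # Complete integral z = ax + by + ab
    z = a*x + b*y + a*b
    p, q = sp.diff(z, x), sp.diff(z, y)
    print(sp.simplify(p*x + q*y + p*q - z))        # 0: satisfies z = px + qy + pq

    # Singular integral z = -xy (i.e. z + xy = 0)
    zs = -x*y
    ps, qs = sp.diff(zs, x), sp.diff(zs, y)
    print(sp.simplify(ps*x + qs*y + ps*qs - zs))   # 0: also satisfies the PDE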

Example 17
Find the complete and singular solutions of

The complete integral is given by


(1)
to obtain the singular integral, differentiating (1) partially w.r.t. a and b. Then

Therefore,

(2)
and
(3)

squaring (2) and (3) and adding, we get

Now,

i.e., =
Therefore,
= (4)

Using (4) in (2) and (3), we get

and

Hence, and

Substituting the values of a and b in (1), we get

which on simplification gives


or , which is the singular integral.

Exercises

Solve the following Equations


1. pq = k
2. p + q = pq
3. + =x
4. p = y2q2
5. z = p2 + q2
6. p+q=x+y
7. p2z2 + q2 = 1
8.
9. {z –(px + qy)}2 = c2 + p2 + q2
10. z = px + qy + p2q2

EQUATIONS REDUCIBLE TO THE STANDARD FORMS

Sometimes, it is possible to have non –linear partial differential equations of the first
order which do not belong to any of the four standard forms discussed earlier. By changing the
variables suitably, we will reduce them into any one of the four standard forms.

Type (i) : Equations of the form F(xm p, ynq) = 0 (or) F (z, xmp, ynq) = 0.

Case(i) : If m ≠1 and n≠ 1, then put x1-m = X and y1-n = Y.

Now,

Therefore,
Similarly,

Hence, the given equation takes the form F(P,Q) = 0 (or) F(z,P,Q) = 0.

Case(ii) : If m = 1 and n = 1, then put log x = X and log y = Y.

Now,

Therefore, xp=
Similarly, .
Example 18

Solve x4p2 + y2zq = 2z2

The given equation can be expressed as

(x2p)2 + (y2q)z = 2z2

Here m = 2, n = 2

Put X = x1-m = x -1 and Y = y 1-n = y -1.


We have xmp = (1-m) P and ynq = (1-n)Q
i.e, x2p = -P and y2q = -Q.
Hence the given equation becomes
P2 –Qz = 2z2 …(1)
This equation is of the form f (z,P,Q) = 0.

Let us take Q = aP.

Then equation (1) reduces to


P2 –aPz =2z2

Hence, and

Since , we have

i.e.,
Integrating, we get
Therefore, which is the complete solution.

Example 19

Solve x2p2 + y2q2 = z2

The given equation can be written as

(xp)2 + (yq)2 = z2

Here m = 1, n = 1.

Put X = log x and Y = log y.


Then xp = P and yq = Q.

Hence the given equation becomes


P2 + Q2 = z2 ..(1)
This equation is of the form F(z,P,Q) = 0.

Therefore, let us assume that Q = aP.

Now, equation (1) becomes,


P2 + a2 P2 = z2

Hence P = z/√(1 + a²) and Q = az/√(1 + a²).

Since dz = P dX + Q dY, we have dz = z (dX + a dY)/√(1 + a²),

i.e., √(1 + a²) dz/z = dX + a dY.

Integrating, we get

√(1 + a²) log z = X + aY + b.

Therefore, √(1 + a²) log z = log x + a log y + b, which is the complete solution.

Type (ii) : Equations of the form F(zkp, zkq) = 0 (or) F(x, zkp) = G(y,zkq).
Case (i) : If , put Z = zk+1,
Now

Therefore, and similarly,

Case (ii): If put

Now, , similarly,

Example 20

Solve z4q2 –z2p = 1

The given equation can also be written as

(z2q)2 –(z2p) =1
Here k = 2. Putting , we get

and

i.e., and
hence the given equation reduces to

, i.e.,

which is of the form F(P,Q) = 0.

Hence its solution is Z = ax + by + c, where b2 –3a –9 = 0.

Solving for b,

Hence the complete solution is

or

Lagrange’s Linear Equation


Equations of the form Pp + Qq = R ________ (1), where P, Q and R are functions of x, y, z.
Let us consider the equations u = a and v = b, where a, b are arbitrary constants and u, v are
functions of x, y, z.

…(2)
Comparing (1) and (2), we have

…(3)

Similarly, ...(4)
By cross-multiplication, we have

or

…(5)

Equations (5) represent a pair of simultaneous equations which are of the first order and of first
degree. Therefore, the two solutions of (5) are u = a and v = b. Thus, Φ(u, v) = 0 is the required
solution of (1).

Note :

To solve Lagrange's equation, we have to form the subsidiary or auxiliary equations

dx/P = dy/Q = dz/R,

which can be solved either by the method of grouping or by the method of multipliers.

Example 21

Find the general solution of px + qy = z.

Here, the subsidiary equations are

dx/x = dy/y = dz/z

Taking the first two ratios, dx/x = dy/y.


Integrating, log x = log y + log c1, or x = c1 y, i.e., c1 = x/y.
From the last two ratios, dy/y = dz/z.
Integrating, log y = log z + log c2, or y = c2 z, i.e., c2 = y/z.
Hence the required general solution is
Φ(x/y, y/z) = 0, where Φ is arbitrary.
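The implicit solution Φ(x/y, y/z) = 0 can equivalently be written in the explicit form z = y h(x/y) with an arbitrary function h; the sympy sketch below checks this form (the letter h is my choice):

    import sympy as sp

    x, y = sp.symbols('x y')
    h = sp.Function('h')                 # arbitrary function

    z = y * h(x / y)                     # explicit form of the general solution
    p = sp.diff(z, x)
    q = sp.diff(z, y)
    print(sp.simplify(p*x + q*y - z))    # 0, so px + qy = z holds for any h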

Example 22
Solve p tan x + q tan y = tan z
The subsidiary equations are

dx/tan x = dy/tan y = dz/tan z

Taking the first two ratios, dx/tan x = dy/tan y

i.e., cot x dx = cot y dy
Integrating, log sin x = log sin y + log c1
i.e., sin x = c1 sin y

Therefore, c1 = sin x / sin y
Similarly, from the last two ratios, we get

sin y = c2 sin z, i.e., c2 = sin y / sin z

Hence the required general solution is

Φ(sin x / sin y, sin y / sin z) = 0, where Φ is arbitrary

Example 23
Solve (y-z) p + (z-x) q = x-y
Here the subsidiary equations are dx/(y − z) = dy/(z − x) = dz/(x − y).
Using the multipliers 1, 1 and 1, each ratio = (dx + dy + dz)/0, so that dx + dy + dz = 0.
Integrating, x + y + z = c1 …(1)
Again using multipliers x, y and z,
each ratio = (x dx + y dy + z dz)/0

Therefore, x dx + y dy + z dz = 0

Integrating, x²/2 + y²/2 + z²/2 = constant

or, x² + y² + z² = c2 …(2)
Hence from (1) and (2), the general solution is Φ(x + y + z, x² + y² + z²) = 0.
Example 24
Find the general solution of (mz - ny) p + (nx- lz)q = ly - mx.
Exercises

Solve the following equations

1. px2 + qy2 = z2
2. pyz + qzx = xy
3. xp –yq = y2 –x2
4. y2zp + x2zq = y2x
5. z (x –y) = px2 –qy2
6. (a –x) p + (b –y) q = c –z
7. (y2z p) /x + xzq = y2
8. (y2 + z2) p –xyq + xz = 0
9. x2p + y2q = (x + y) z
10. p –q = log (x+y)
11. (xz + yz)p + (xz –yz)q = x2 + y2
12. (y –z)p –(2x + y)q = 2x + z
SOLUTIONS OF PARTIAL DIFFERENTIAL EQUATIONS OF FIRST-ORDER
Solution of Partial Differential Equations of first-order with constant coefficients.
The most general form of linear partial differential equations of first order with constant
coefficients is
Aux+Buy+Ku=f(x,y)
where A,B and K are constants
Let u(x,y) be a solution then
du=uxdx+uydy
From above we get the auxiliary system of equations (comparing coefficients of u x, uy
and remaining terms).
= =

The solution of the left pair is Bx-Ay=c or y= , where c is an arbitrary constant of


integration
or Bdx-Ady=0 or Bx-Ay=c by integrating both sides of the previous

equation .

The other pair

is reduced to an ordinary differential equation with u as the dependent variable and x as the
independent variable, namely

or

The integrating factor of this differential equation is e Kx/A. Making change of variable by
v=ueKx/A problem takes the form
Avx+Bvy = f(x,y)e =g(x,y)
The substitution v=ueKy/B in problem leads to Av x+Bvy=f(x,y) eKy/B. Thus, we need to
consider only the formal reduced form
Aux+Buy=f(x,y)
The auxiliary system of equations for the above equation is

The solution of is
Bx-Ay=c, which gives
x=
Substituting this value in
we get

or du=F(y,c) dy where F(y,c)=

Solution of this equation is


u=G(y,c)+c1, where Gy (y,c)= F(y,c).
Thus, the general solution is obtained by replacing c1 by φ(c) and c by Bx − Ay, thereby
yielding
u(x,y) = G(y, Bx − Ay) + φ(Bx − Ay), and the solution of the original problem is
u(x,y) = [G(y, Bx − Ay) + φ(Bx − Ay)] e^(-Kx/A)
Equations = = are called the equations of the Characteristics. These
equations contain two independent equations, with two solutions of the form F(x,y,u)=c 1 and
G(x,y,u)=c2. Each of these represents a family of surfaces. The curves of intersection of these
two families of surfaces are known as the characteristics of the partial differential equation. The
projections of these curves in the (x,y)-plane are called the base characteristics. The general
solution represents a family of surfaces, and these surfaces are called integral surfaces.
Thus, the equation Bx-Ay=c represents a family of planes. The intersection of any one of
these planes with an integral surface is a curve whose projection in the (x,y)-plane is again given
by Bx-Ay=c, but this time this equation represents a straight line and is the base characteristic.
Therefore, the solution u on a base characteristic Bx-Ay=c is given by u=G(y,c)+c 1, and the
general solution is the same as above.
Example 6 Find the general solution of the first-order linear partial differential equation with
the constant coefficients:
4ux+uy=x2y
Solution: The auxiliary system of equations is

dx/4 = dy/1 = du/(x²y)

From here we get dx/4 = dy/1

or dx − 4dy = 0. Integrating both sides

we get x − 4y = c. Also dx/4 = du/(x²y), or x²y dx = 4du,

or, using y = (x − c)/4, x²(x − c)/4 dx = 4du, or

(x³ − cx²) dx = 16 du
Integrating both sides we get
u = c1 + x⁴/64 − cx³/48
= f(c) + x⁴/64 − cx³/48
After replacing c by x − 4y, we get the general solution
u = f(x − 4y) + x⁴/64 − (x − 4y)x³/48

= f(x − 4y) − x⁴/192 + x³y/12
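Since the closed form above is a reconstruction of garbled lines, here is a symbolic confirmation that it does satisfy 4ux + uy = x²y (a sympy sketch, with the arbitrary function kept abstract):

    import sympy as sp

    x, y = sp.symbols('x y')
    f = sp.Function('f')                            # arbitrary function of x - 4y

    u = f(x - 4*y) + x**3*y/12 - x**4/192           # reconstructed general solution
    residual = 4*sp.diff(u, x) + sp.diff(u, y) - x**2*y
    print(sp.simplify(residual))                    # prints 0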
Lagrange's Method
The general form of first-order linear partial differential equations with variable
coefficients is
P(x,y)ux+Q(x,y)uy+f(x,y)u=R(x,y) (1)
We can eliminate the term in u from P(x,y)u x+Q(x,y)uy+f(x,y)u=R(x,y) by substituting
u=ve-(x,y), where (x,y) satisfies the equation
P(x,y) x(x,y)+ Q (x,y) y(x,y)=f(x,y)
Hence, Eq P(x,y)ux+Q(x,y)uy+f(x,y)u=R(x,y) is reduced to
P(x,y)ux+Q (x,y) uy =R(x,y) (2)
where P,Q,R in (2) are not the same as in (1). The following theorem provides a method for
solving (2) often called Lagrange's Method.
Theorem 1 The general solution of the linear partial differential equation of first order
Pp+Qq=R; (3)
where p = ∂u/∂x, q = ∂u/∂y, and P, Q and R are functions of x, y and u,
is F(φ, ψ) = 0 (4)
where F is an arbitrary function and φ(x,y,u) = c1 and ψ(x,y,u) = c2 form a solution of the
auxiliary system of equations
dx/P = dy/Q = du/R (5)
Proof: Let  (x,y,u)=c1 and  (x,y,u)=c2 satisfy (5), then equations
xdx+y dy +udu=0
and

must be compatible, that is, we must have P x+Qy+Ru=0


Similarly we must have
Px+Qy+Ru=0
Solving these equations for P,Q, and R, we have
(6)
where  (,)/(y,u)= yu- yu0 denotes the Jacobian.
Let F(,)=0. By differentiating this equation with respect to x and y, respectively, we
obtain the equations
and if we now eliminate and from these equations, we obtain the equation p

+q = (7)
Substituting from equations (6) into equation (7), we see that F(,)=0 is a general
solution of (3). The solution can also be written as
 =g() or =h(),
Example 7 Find the general solution of the partial differential equation y2up + x2uq = y2x
Solution: The auxiliary system of equations is
dx/(y²u) = dy/(x²u) = du/(y²x)
Taking the first two members we have x²dx = y²dy, which on integration gives x³ − y³ = c1.
Again taking the first and third members,
we have x dx = u du
which on integration gives x² − u² = c2.
Hence, the general solution is
F(x³ − y³, x² − u²) = 0
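A spot-check of this general solution: solving F(x³ − y³, x² − u²) = 0 for u as u = √(x² − g(x³ − y³)) with an arbitrary g and substituting into the PDE (a sympy sketch; the branch and the letter g are my choices):

    import sympy as sp

    x, y = sp.symbols('x y', positive=True)
    g = sp.Function('g')                          # arbitrary function

    u = sp.sqrt(x**2 - g(x**3 - y**3))            # one branch of F(x^3 - y^3, x^2 - u^2) = 0
    p = sp.diff(u, x)
    q = sp.diff(u, y)
    residual = y**2*u*p + x**2*u*q - y**2*x       # PDE: y^2 u u_x + x^2 u u_y = y^2 x
    print(sp.simplify(residual))                  # prints 0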
Charpit's Method for solving nonlinear Partial Differential Equation of First-Order
We present here a general method for solving non-linear partial differential equations.
This is known as Charpit's method.
Let
F(x,y,u, p.q)=0 (8)
be a general non linear partial differential equation of first-order. Since u depends on x
and y, we have
du=uxdx+uydy = pdx+qdy (9)
where p = ux = ∂u/∂x, q = uy = ∂u/∂y
If we can find another relation between x,y,u,p,q such that
f(x,y,u,p,q)=0 (10)
then we can solve (10) and (9) for p and q and substitute them in equation (8). This will give the
solution provided (9) is integrable.
To determine f, differentiate (8) and (10) w.r.t. x and y so that
(11)

(12)

(13)

(14)

Eliminating from, equations (10) and (11), and from equations (12) and (13) we
obtain
Adding these two equations and using

and rearranging the terms, we get

(15)

Following arguments in the proof of Theorem 1 we get the auxiliary system of equations

(16)

An integral of these equations, involving p or q or both, can be taken as the required


relation (10). p and q determined from (8) and (10) will make (9) integrable.
Example 8 Find the general solution of the partial differential equation p²x + q²y = u.

Solution: Let p = ∂u/∂x, q = ∂u/∂y.


The auxiliary system of equations is

which we obtain from (15) by putting values of

and multiplying by -1 throughout the auxiliary system. From first and 4 th expression in problem
equation, we get
dx = . From second and 5th expression

dy=
Using these values of dx and dy in problem equation,we get
=

or
Taking integral of all terms we get
ln|x| + 2ln|p| = ln|y| + 2ln|q| + ln c
or ln(|x| p²) = ln(c |y| q²)
or p²x = cq²y, where c is an arbitrary constant.
Solving (16) and auxiliary system of equations for p and q we get cq2y+q2y -u=0
(c+1)q2y=u
q=

p=
F(x,y,u, p.q)=0, takes the following form in this case
du=

or

By integrating this equation we obtain


This is a complete solution.
Solutions of special type of partial differential equations
(i) Equations containing p and q only
Let us consider a partial differential equation of the type
F(p,q)=0 (17)
The auxiliary system of equations of Charpit's method (Equation (15)) takes the form

It is clear that p=c is a solution of these equations. Putting value of p in (17) we have
F(c,q)=0 (18)
So that q=G(c) where c is a constant
Then observing that
du=cdx+G(c) dy
we get the solution u=cx +G(c) y+c1,
where c1 is another constant.
Example 9 Solve p2+q2=1
Solution: The auxiliary system of equation is
-

or
Using dp = 0, we get p = c and q = √(1 − c²), and these two combined with du = pdx + qdy yield
u = cx + √(1 − c²) y + c1, which is a complete solution.
Using = p , we get du = where p= c

Integrating the equation we get u = + c1

Also du = , where q =
or du = . Integrating this equation we get u = y +c2
This cu = x+cc1 and = y + c2
Replacing cc1 and c2 by −α and −β respectively, and eliminating c, we get
u² = (x − α)² + (y − β)²
This is another complete solution.
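Both complete solutions of Example 9 can be checked directly (a sympy sketch):

    import sympy as sp

    x, y, c, c1, alpha, beta = sp.symbols('x y c c1 alpha beta')

    # First complete solution: u = c x + sqrt(1 - c^2) y + c1
    u1 = c*x + sp.sqrt(1 - c**2)*y + c1
    print(sp.simplify(sp.diff(u1, x)**2 + sp.diff(u1, y)**2))   # prints 1

    # Second complete solution: u^2 = (x - alpha)^2 + (y - beta)^2
    u2 = sp.sqrt((x - alpha)**2 + (y - beta)**2)
    print(sp.simplify(sp.diff(u2, x)**2 + sp.diff(u2, y)**2))   # prints 1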
Working Rules of Charpit’s Method for Solving Non-Linear Partial Differential
Equations of Order One with Two Independent Variables
The following steps are required while using Charpit’s method for
solving non-linear partial differential equation of order one:
Step 1. Transfer all the terms of given PDE to L.H.S. and denote the entire
expression in L.H.S. by f(x, y, z, p, q).
Step 2. Write down the Charpit’s auxiliary equations.
Step 3. Find the values of ∂f/∂x, ∂f/∂y, ∂f/∂z, ∂f/∂p, ∂f/∂q occurring in Charpit's auxiliary
equations. Put them in Charpit's auxiliary equations and simplify.
Step 4. Choose two proper fractions from Charpit’s auxiliary equations such
that the resulting integral may come out as simplest relation involving
at least one of p or q or both.

Step 5. The simplest relation of step 4 is solved along with given partial
differential equation to find p and q. Put these values of p and q in dz
= pdx + qdy which on integration gives the complete integral of the
given partial differential equation.
The singular and general integrals may be obtained in the usual manner.

(ii) Clairaut equations


An equation of the form
u=px+qy+f(p,q)
or
F=px+qy+f(p,q)-u=0 (19)
is known as Clairaut equation.
The auxiliary system of equations for Clairaut equation takes the form
= = = =
From here we find that
dp=0, dq=0 implying
p=c1, q=c2
If we put these values of p and q in Eq. (11.42), we get
u = c1 x +c2y +f (c1, c2)
Therefore, F(x,y,u,c1,c2) = c1x + c2y + f (c1,c2) -u=0 is a complete solution of F(c,q)=0.
(iii) Equations not containing x and y
Consider a partial differential equation of the type
F(u,p,q) = 0 (20)
The auxiliary system of equations take the form
= = = =

The last two terms yield dp/p = dq/q


i.e. p = a²q, where a² is an arbitrary constant.
This equation together with F(u, p, q) = 0 can be solved for p
and q, and we proceed as in the previous cases.
Example 10 Solve u2+pq – 4 = 0
Solution. The auxiliary system of equations is
= = = =
The last two equations yield p = a2q.
Substituting in u² + pq − 4 = 0 gives
q = √(4 − u²)/a and p = a√(4 − u²)
Then du = pdx + qdy yields
du = a√(4 − u²) dx + (√(4 − u²)/a) dy

or du/√(4 − u²) = a dx + dy/a

Integrating we get sin⁻¹(u/2) = ax + y/a + b

or u = 2 sin(ax + y/a + b)
which is the required complete solution.
(iv) Equations of the type
f(x,p) = g(y,q)
Since the left-hand side depends only on x and p while the right-hand side depends only on y and q, each side must be a constant, that is
f(x, p) = g(y, q) = C
Solving for p and q, and using du=pdx+qdy we can obtain the solution
Example 11 Solve p2(1-x2)-q2(4-y2) = 0
Solution Let p2(1-x2) = q2 (4-y2) = a2
This gives p = a/√(1 − x²) and q = a/√(4 − y²)
(neglecting the negative sign).
Substituting in du = pdx + q dy we have
du = a dx/√(1 − x²) + a dy/√(4 − y²)

Integration gives u = a [sin⁻¹x + sin⁻¹(y/2)] + c,
which is the required complete solution.
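A check of the complete solution just obtained (a sympy sketch; the arcsine form is the reconstruction used above):

    import sympy as sp

    x, y, a, c = sp.symbols('x y a c')

    u = a*(sp.asin(x) + sp.asin(y/2)) + c
    p = sp.diff(u, x)                    # a / sqrt(1 - x^2)
    q = sp.diff(u, y)                    # a / sqrt(4 - y^2)
    residual = p**2*(1 - x**2) - q**2*(4 - y**2)
    print(sp.simplify(residual))         # prints 0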

Geometric concepts related to Partial Differential Equations of First order


We have discussed the geometrical interpretation of a first order ordinary differential
equation in an earlier chapter.
The situation for a partial differential equation is some what complicated. In this case the
values of p = ∂u/∂x and q = ∂u/∂y are not unique at a point (x,y,u). If an integral surface is g(x,y,u)=0, then
p and q represent the slopes of the curves of intersection of the surface with the planes
y=constant and x=constant, respectively. Moreover, p,q,-1 represent the direction ratios of the
normals to the surface at the point (x,y,u). The derivatives p and q are constrained by
F(x,y,u,p,q)=0. Obviously, at a fixed point, p and q can be represented by a single parameter.
Hence, there are infinitely many possible normals and consequently infinitely many integral
surfaces passing through any fixed point. So, unlike the case of ordinary differential equations,
we cannot determine a unique integral surface by making it pass through a point.
Cauchy established that a unique integral surface can be obtained by making it pass
through a continuous twisted space curve, also known as an initial curve, except when the curve
is a characteristic of the differential equation.
The infinity of normals passing through a fixed point generates a cone known as the
normal cone. The corresponding tangent planes to the integral surfaces envelope a cone known
as the Monge cone. In the case of a linear or a quasi linear equation, the normal cone
degenerates into a plane since each normal is perpendicular to a fixed line. Consider the equation
ap+bq=c, where a,b, and c are functions x,y, and u. Then the direction p,q,-1 is perpendicular to
the direction ratios a,b,c. This direction is fixed at a fixed point. The Monge cone then
degenerates into a coaxial set of planes known as the Monge pencil. The common axis of the
planes is the line through the fixed point with direction ratios a,b,c. This line is known as the
Monge axis.

LINEAR PARTIAL DIFFERENTIAL EQUATIONS OF SECOND ORDER WITH


CONSTANT COEFFICIENTS

Homogeneous Equations
Let Dx = ∂/∂x and Dy = ∂/∂y.
We are looking for solving equations of the type
∂²u/∂x² + k1 ∂²u/∂x∂y + k2 ∂²u/∂y² = 0 (21)
where k1 and k2 are constants.
(21) can be written as
(Dx² + k1 DxDy + k2 Dy²) u = 0
or F(Dx, Dy) u = 0 (22)


The auxiliary equation of (21), obtained by putting Dx = mDy, is

m² + k1 m + k2 = 0.


Let the roots of this equation be m1 and m2, that is, Dx = m1Dy, Dx = m2Dy; then equation (21) can be written as
(Dx − m1Dy)(Dx − m2Dy) u = 0 (23)
This implies
(Dx-m2Dy) u=0 or p-m2q=0
The auxiliary system of equations for p − m2q = 0 is of the type
dx/1 = dy/(−m2) = du/0
This gives us −m2 dx = dy


or y + m2x = c
and u = c1 = φ(c)
Thus, u = φ(y + m2x) is a solution of (21).
From (23) we also have (Dx − m1Dy) u = 0
or p − m1q = 0
Its auxiliary system of equations is
dx/1 = dy/(−m1) = du/0
This gives −m1 dx = dy or m1x + y = c1 and u = c2, and so u = ψ(y + m1x) is a solution of (21).
Therefore u = φ(y + m2x) + ψ(y + m1x) is the complete solution of (21).
If the roots are equal (m1 = m2) then Equation (21) is equivalent to
(Dx-m1Dy)2 u = 0
Putting (Dx − m1Dy) u = z, we get
(Dx − m1Dy) z = 0, which gives
z = φ(y + m1x)
Substituting z in (Dx − m1Dy) u = z gives
(Dx − m1Dy) u = φ(y + m1x)
or p − m1q = φ(y + m1x)
Its auxiliary system of equations is
dx/1 = dy/(−m1) = du/φ(y + m1x)
which gives y + m1x = a and u = φ(a) x + b


The complete solution in this case is
u = x φ(y + m1x) + ψ(y + m1x)
Example 12 Find the solution of the equation
∂²u/∂x² − ∂²u/∂y² = 0
Solution: In the terminology introduced above this equation can be written as
(Dx2-Dy2) u = 0.
or (Dx-Dy) (Dx+Dy)u=0
Its auxiliary equation is
(Dx-Dy)(Dx+Dy)=0,
that is, Dx - Dy =0
or Dx= -Dy. that is,
p=q or p = - q
p-q = 0 or p+q=0
Auxiliary system of equations for p − q = 0 is
dx/1 = dy/(−1) = du/0
This gives x + y = c.


The auxiliary system for p + q = 0 is dx/1 = dy/1 = du/0.
This gives x − y = c1.
The complete solution is
u = φ(x + y) + ψ(x − y), where φ and ψ are arbitrary functions.
Non-homogeneous Partial Differential Equations of the second-order
Equations of the type
=f(x,y) (24)
are called non-homogeneous partial differential equations of the second-order with
constant coefficients.
Let uc be the general solution of
=0 (25)
and let up be a particular solution of (24).
Then uc + up is the solution of (24).
We have discussed the method for finding the general solution (complementary function)
of (24). Let f(Dx,Dy) be a linear partial differential operator with constant coefficients, then the
corresponding inverse operator is defined
as
The following results hold

f(Dx,Dy) (26)

(27)

= (28)

(29)

(30)
f(Dx,Dy)  (x,y) eax+by=eax+by f(Dx+a, Dy+b) (x,y)
(31)

= (32)

f cos (ax+by) = f(-a2,-b2) cos (ax+by)


(33)

f sin (ax+by) = f(-a2,-b2) sin (ax+by)

(34)

When (x,y) is any function of x and y, we resolve into partial fractions


treating f(Dx, Dy) as a function of Dx alone and operate each partial fraction on (x,y),
remembering that
(x,y) =
where c is replaced by y+mx after integration.
Example 13
Find the particular solution of the following partial differential equations
(i)

(ii)
Solution: (i) The equation can be written as
u = ex-3y

up = ex-3y

= ex-3y by (29)

= ex-3y
(ii) The equation can be written as
(3D2x-Dy)u=ex sin (x+y)
up = ex sin (x+y)

= ex sin (x+y)

= ex sin(x+y)

= ex sin(x+y)

= ex sin (x+y) = ex sin(x+y)

= ex
=- ex cos(x+y).
Example 14 Solve the partial differential equation
= e-xsin t
Solution: The equation can be written as
(D -c2Dx2) u = e-xsin t
The particular solution is
up=

=-
By proceeding on the lines of the solution of Example 12 we get
uc =  (x-ct)+  (x+ct)
u(x,t)=  (x-ct)+  (x+ct) -
The solution uc is known as d'Alembert's solution of the wave equation
∂²u/∂t² − c² ∂²u/∂x² = 0.
Monge's Method for a special class of non linear Equations (quasi linear Equations) of the
Second order.
Let u(x,y) be a function of two variables x and y
Let p =
Monge's method provides a technique for solving a special class of partial differential
equation of second order of the type
F(x,y,u,p,q,r,s,t)=0 (35)
Monge's method comprises in establishing one or two first integrals of the form
= f() (36)
where  and  are known function of x,y,u, p and q and the function f is arbitrary; that is,
in finding relations of the type (35) such that equation (36) can be derived from equation (35).
The following equations are obtained from it by partial differentiation.
x+up+pr+qs=f'() {x+up+pr+qs} (37)
y+uq+ps+qt=f'() {y+uq+ps+qt} (38)
It may be noted that every equation of the type (34) does not have a first integral of the
type (35). By eliminating f'() from equations (36) and (37), we find that any second order partial
differential equation which possesses a first integral of the type (35) must be expressible in the
form
R1r+S1s+T1t+U1(rt-s2)=V1 (39)
where R1, S1,T1,U1 and V1 are functions of x,y,u, p and q defined by the relations
R1 = (40)

S1= (41)

U1= (42)
The equation (38) reduces to the form
R1r+S1s+T1t=V1 (43)
if and only if the Jacobian pp- qp=0 identically. Equation (42) is a non-linear equation
because the coefficients R1, S1, T1, V1 are functions of p and q as well as of x,y, and u. Infact it is
a quasi linear equation. We explain here the method of finding solution of the equation of the
type (42), namely
Rr+Ss+Tt = V (44)
for which a first integral of the form (35) exists. For any function u of x and y we have
the relations dp =rdx+sdy, dq=sdx+tdy (45)
Eliminating r and t from this pair of equations and equation (43), we see that any
solution of (43) must satisfy the relation
Rdpdy+Tdqdx - Vdxdy=0 (46)
R dy² + T dx² − S dxdy = 0 (47)
The method of finding solutions of (45) and (46) is explained through the following
example:
Example: 15
Solve the equation q²r − 2pqs + p²t = 0.
This equation is of the form (44) where
R = q², S = −2pq, T = p², V = 0.
Therefore (45) and (46) become respectively
q2dpdy + p2dq dx=0 (48)
(pdx+qdy)2 = 0 (49)
By the equation du=pdx+qdy and (48) we get du=0, which gives integral u=c 1. From (47)
and (48) we have qdp =pdq, which has solution
p=c2q. Thus, the first integral is
p=q f(u) (50)
where f(.) is arbitrary. We solve (49) by Lagrange's method. The auxiliary system of
equations (characteristic equations) are

with integral u=c1, y+x f(c1)=c2 leading to the general solution


y+x f(u)=g(u)
where the functions f and g are arbitrary.

PARTIAL DIFFERENTIAL EQUATIONS OF HIGHER ORDER WITH CONSTANT


COEFFICIENTS
Homogeneous Linear Equations with constant Coefficients.
A homogeneous linear partial differential equation of the nth order is of the form

(1)

where , ,…………, are constants and F is a function of ‘x’ and ‘y’. It is homogeneous
because all its terms contain derivatives of the same order.
Equation (1) can be expressed as

or (2),

where, and
As in the case of ordinary linear equations with constant coefficients the complete
solution of (1) consists of two parts, namely, the complementary function and the particular
integral.
The complementary function is the complete solution of f (D,D') z = 0 -------(3), which
must contain as many arbitrary functions as the degree n of the polynomial f(D,D'). The particular
integral is the particular solution of equation (2).

Finding the complementary function

Let us now consider the equation f(D,D') z = F (x,y)

The auxiliary equation of (3) is obtained by replacing D by m and D' by 1.

Solving equation (4) for 'm', we get 'n' roots. Depending upon the nature of the roots, the
complementary function is written as given below:

Finding the particular Integral


Consider the equation f(D,D’)z=F(x,y)
Now, the P.I. is given by

Now, the P.I. is given by

Case (i): When F(x,y)=

Replacing D by ‘a’ and D’ by “b”, we have

, where

Case (ii): When

Replacing , we get

, where

Case (iii): When

Expand [f (D,D')]-1 in ascending powers of D or D' and operate on xm yn term by term.

Case (iv) : When F(x,y) is any function of x and y.

Resolve into partial fractions considering f (D,D') as a function of D alone.

Then operate each partial fraction on F(x,y) in such a way that

where c is replaced by y+mx after integration

NON –HOMOGENEOUS LINEAR EQUATIONS

Let us consider the partial differential equation


f (D,D') z = F (x,y) (1)
If f (D,D') is not homogeneous, then (1) is a non–homogeneous linear partial differential
equation. Here also, the complete solution = C.F + P.I.
The methods for finding the Particular Integrals are the same as those for homogeneous linear
equations. But for finding the C.F, we have to factorize f (D,D') into factors of the form D –
mD' –c.

Consider now the equation (D –mD' –c) z = 0 (2).

This equation can be expressed as p –mq = cz (3),

which is in Lagrangian form.

The subsidiary equations are

(4)

The solutions of (4) are y + mx = a and z = becx.

Taking b = f (a), we get z = ecx f (y+mx) as the solution of (2).

Note:

1. If (D − m1D' − c1)(D − m2D' − c2) …… (D − mnD' − cn) z = 0 is the partial


differential equation, then its complete solution is

z = ec1x f1(y +m1x) + ec2x f2(y+m2x) + . . . . . + ecnx fn(y+mnx)

2. In the case of repeated factors, the equation (D-mD' –C)nz = 0 has a complete

solution z = ecx f1(y +mx) + x ecx f2(y+mx) + . . . . . +x n-1 ecx fn(y+mx).

SOLVING PDEs

Solving PDEs is considerably more difficult in general than solving ODEs, as the level of
complexity involved can be great. For instance the following seemingly completely unrelated
functions are all solutions to the two-dimensional Laplace equation:

(1) , and

You should check to see that these are all in fact solutions to the Laplace equation by doing the
same thing you would do for an ODE solution, namely, calculate uxx and uyy, substitute them
into the PDE and see if the two sides of the equation are identical.
Now, there are certain types of PDEs for which finding the solutions is not too hard. For
instance, consider the first-order PDE

(2)

where u is assumed to be a two-variable function depending on x and y. How could you solve
this PDE? Think about it, is there any reason that we couldn’t just undo the partial derivative of
u with respect to x by integrating with respect to x? No, so try it out! Here, note that we are
given information about just one of the partial derivatives, so when we find a solution, there will
be an unknown factor that’s not necessarily just an arbitrary constant, but in fact is a completely
arbitrary function depending on y.

To solve (2), then, integrate both sides of the equation with respect to x, as mentioned. Thus

(3)

so that . What is F? Note that it could be any function such that when
one takes its partial derivative with respect to x, the result is 0. This means that in the case of
PDEs, the arbitrary constants that we ran into during the course of solving ODEs are now taking
the form of whole functions. Here F, is in fact any function, F(y),of y alone. To check that this
is indeed a solution to the original PDE, it is easy enough to take the partial derivative of this
function and see that it indeed satisfies the PDE in (2).

Now consider a second-order PDE such as

(4)

where u is again a two-variable function depending on x and y. We can solve this PDE by
integrating first with respect to x, to get to an intermediate PDE,

(5)

where F(y) is a function of y alone. Now, integrating both sides with respect to y yields

(6)

where now G(x) is a function of x alone (Note that we could have integrated with respect to y
first, then x and we would have ended up with the same result). Thus, whereas in the ODE
world, general solutions typically end up with as many arbitrary constants as the order of the
original ODE, here in the PDE world, one typically ends up with as many arbitrary functions in
the general solutions.
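The specific equations used in this passage did not survive extraction. As an illustration of the same integrate-one-variable-at-a-time idea, the sympy sketch below uses the PDE u_xy = 0 (my choice, not necessarily the equation in the original) and a simple non-homogeneous variant:

    import sympy as sp

    x, y = sp.symbols('x y')
    F = sp.Function('F')     # arbitrary function of y
    G = sp.Function('G')     # arbitrary function of x

    u = F(y) + G(x)                          # candidate general solution of u_xy = 0
    print(sp.diff(u, x, y))                  # 0, so it works for any F and G

    # Integrating a prescribed mixed derivative one variable at a time:
    rhs = 6*x*y                              # illustrative right-hand side for u_xy = rhs
    u_part = sp.integrate(sp.integrate(rhs, x), y)   # equals 3*x**2*y**2/2
    print(sp.diff(u_part, x, y) - rhs)       # 0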

To end up with a specific solution, then, we will need to be given extra conditions that indicate
what these arbitrary functions are. Thus the initial conditions for PDEs will typically involve
knowing whole functions, not just constant values. We will also see that the initial conditions
that appeared in specific ODE situations have slightly more involved analogs in the PDE world,
namely there are often so-called boundary conditions as well as initial conditions to take into
consideration.

Partial Differential Equations of Real World Systems


1. Partial Differential Equations as Models of Real World Systems
2. Elements of Trigonometric Fourier Series for Solutions of Partial
Differential Equations
3. Method of Separation of Variables for Solving Partial Differential Equations
3.1 The Heat Equation
3.2 The Wave Equation
3.3 The Laplace Equation
4. Solutions of Partial Differential Equations with Boundary conditions
4.1 The Wave Equation with Initial and Boundary conditions
4.2 The Heat Equation with Initial and Boundary conditions
4.3 The Laplace Equation with Initial and Boundary conditions
In this chapter we shall see that a lot of other physical situations and real world systems
are represented by partial differential equations with or without appropriate conditions
(boundary or/initial conditions). In Section 1 we describe sixteen partial differential equations
which model real world systems. Section 2 deals with elements of trigonometric Fourier series
which are required for solution of some important partial differential equations with boundary
conditions. Method of separation of variables and its application in solving partial differential
equations are discussed in Section 3. Different types of boundary value problems are solved in
Section 4.
1 Partial Differential Equations as Models of Real World Systems
(a) Heat equation or Diffusion equation (one dimensional)

∂u/∂t = k ∂²u/∂x²

where u(x,t) denotes the temperature distribution and k the thermal diffusivity.
The equation, in its simplest form, goes back to the beginning of the 19 th century. Besides
modeling temperature distribution it has been used to model the following physical phenomena.
(i) Diffusion of one material within another, smoke particles in air.
(ii) Chemical reactions, such as the Belousov-Zhabotinsky reaction which exhibits
fascinating wave structure.
(iii) Electrical activity in the membranes of living organisms, the Hodgkin-Huxley
model.
(iv) Dispersion of populations; individuals move both randomly and to avoid
overcrowding.
(v) Pursuit and evasion in predator-prey systems
(vi) Pattern formation in animal coats, the formation of zebra stripes
(vii) Dispersion of pollutants in a running stream.
More recently it has been used in Financial Mathematics or Financial Engineering for
determining appropriate prices of an option. We discuss it in Section 4
(b) Wave equation in dimension 1 (R)

∂²u/∂t² = c² ∂²u/∂x²

u(x,t) represents the displacement, for example of a vibrating string from its equilibrium position,
and c the wave speed.
This type of equations have been applied to model vibrating type of membrane, acoustic
problems for the velocity potential for the fluid flow through which sound can be transmitted,
longitudinal vibrations of an elastic rod or beam, and both electric and magnetic fields in the
absence of charge and dielectric.
(c) Laplace equation in R² (two dimensional)
∇²u = uxx + uyy = 0
where ∇² = ∂²/∂x² + ∂²/∂y² denotes the Laplacian.
The equation is satisfied by the electrostatic potential in the absence of charges, by the
gravitational potential in the absence of mass, by the equilibrium displacement of a membrane
with a given displacement of its boundary, by the velocity potential for an inviscid,
incompressible, irrotational homogeneous fluid in the absence of sources and sinks, by the
temperature in steady-state heat flow in the
absence of sources and sinks, and in many other real world systems.
(d) Transport equation in R (one dimensional)

∂u/∂t + c ∂u/∂x = 0

c is a constant and u(x,t) denotes the location of a car at the time t.


(e) Traffic Flow

u(x,t) denotes the density of cars per unit kilometer of expressway at location x at time t and a(u)
is a function of u, say the local velocity of traffic at location x at time t.
(f) Burgers' equation in one dimension

∂u/∂t + u ∂u/∂x = 0

This equation arises in the study of a stream of particles or fluid flow with zero viscosity.
(g) Eikonal equation in R2

which models problems of geometric optics.


(h) Poisson equation (non-homogeneous Laplace equation)
∇²u = −f(x,y)
One encounters this equation while studying the electrostatic potential in the presence of
charge, the gravitational potential in the presence of distributed matter, the equilibrium
displacement of a membrane under distributed forces, the velocity potential for an inviscid,
incompressible, irrotational homogeneous fluid in the presence of distributed sources in sinks,
and the steady state temperature in the presence of thermal sources or sinks.
(i) Helmholtz equation
(∇² + k²) u = 0, which has been found quite useful in diffraction theory.
(j) Klein-Gordon equation
- c2 2u+m2u=0
This equation arises in quantum field theory, where m denotes the mass.
(k) Telegraph equation

where A and B are constants. This equation arises in the study of propagation of electrical
signals in a cable transmission line. Both the current I and the voltage V satisfy an equation of
this type. This equation also arises in the propagation of pressure waves in the study of pulsatile
blood flow in arteries, and in one-dimensional random motion of bugs along a hedge.

(l) Schrödinger equation (Time independent)

where m is the mass of the particle whose wave function is u(x,y), h is the universal Planck's
constant, V is the potential energy and E is a constant.
This equation arises in quantum mechanics.
If V=0 then it reduces to the Helmholtz equation (equation of (i)).
(m) Korteweg de Vries (KdV) equation in one-dimension

This equation arises in shallow water waves.


(n) Euler Equation in R³
∂u/∂t + (u·∇)u + (1/ρ)∇p = 0, where u denotes the velocity field, and p the pressure
(o) Navier-Stokes equation in R³
ut + (u·∇)u + (1/ρ)∇p = ν∇²u, where
ν denotes the kinematic viscosity and ρ the density of the fluid.
(p) Maxwell equations in R3

where E and H denote the electric and the magnetic field, respectively; they are system of six
equations in six unknowns.
There exists vast literature concerning Schrödinger, Korteweg de Vries, Euler, Navier-
Stokes and Maxwell equations. A large part of technological advancement is based on these
equations. It is not an exaggeration to say that a systematic study of any branch of science and
engineering is nothing but the study of one of these 16 equations, particularly, Heat, Wave,
Laplace, Burgers, Telegraph, Schrödinger, Korteweg de Vries, Euler, Navier-Stokes and Maxwell
equations.
2 Elements of Trigonometric Fourier Series for solution of Partial Differential Equations
In this section we discuss Fourier series expansion of arbitrary, even and odd functions.
2.1 Fourier Series
DEFINITION 1 Fourier Coefficients and Series
Let f be a Riemann integrable function on [-l,l].
1. The numbers
an = (1/l) ∫_{-l}^{l} f(x) cos(nπx/l) dx,  n = 0, 1, 2, …
and
bn = (1/l) ∫_{-l}^{l} f(x) sin(nπx/l) dx,  n = 1, 2, 3, …

are the Fourier coefficients of f on [-l,l].


2. The series
(1/2)a0 + Σ (n=1 to ∞) [an cos(nπx/l) + bn sin(nπx/l)] (3)
is the Fourier series of f on [-l,l] when the constants are chosen to be the Fourier coefficients of f
on [-l,l].
Example 1
Let f(x) = x for −π ≤ x ≤ π. We will write the Fourier series of f on [−π, π]. The Fourier
coefficients are

an = (1/π) ∫_{-π}^{π} x cos(nx) dx = 0 for n = 0, 1, 2, …
and
bn = (1/π) ∫_{-π}^{π} x sin(nx) dx = (2/n)(−1)^{n+1},

since cos(nπ) = (−1)ⁿ if n is an integer. The Fourier series of x on [−π, π] is

Σ (n=1 to ∞) (2/n)(−1)^{n+1} sin(nx).
In this example, the constant term and cosine coefficients are all zero, and the Fourier
series contains only sine terms.
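The coefficients of Example 1 can be reproduced with sympy (a sketch):

    import sympy as sp

    x = sp.symbols('x')
    n = sp.symbols('n', integer=True, positive=True)

    an = sp.integrate(x * sp.cos(n*x), (x, -sp.pi, sp.pi)) / sp.pi
    bn = sp.integrate(x * sp.sin(n*x), (x, -sp.pi, sp.pi)) / sp.pi

    print(sp.simplify(an))    # 0, since x*cos(nx) is odd
    print(sp.simplify(bn))    # equals 2*(-1)**(n+1)/n (the printed form may involve cos(pi*n))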
Example 2
Let
f (x) =
Here l = 3 and the Fourier coefficients are

= [(-1)n-1]
and

=
The Fourier series of f on [-3,3] is

Even and Odd Functions


Sometimes we can save some work in computing Fourier coefficients by observing
special properties of f(x), namely even and odd functions.
DEFINITION 2
Even Function
f is an even function on [-l,l] if f(-x) = f(x) for –l  x  l.
Odd Function
f is an odd function on [-l,l] if f(-x) = - f (x) for –l  x  l.
For example, x2, x4, cos (nx/l), and e-|x| are even functions on any interval [-l,l].
The functions x,x3, x5, and sin (nx/l) are odd functions on any interval [-l,l].
If f is odd, then f(0) =0, since
f(-0)=f(0)=-f(0).
Of course, "most" functions are neither even nor odd. For example, f(x)=ex is not even or odd on
any interval [−l, l].
Even and odd functions behave like even and odd integers under multiplication:
even.even=even,
odd.odd=even
and
even.odd=odd,
For example, x2 cos (nx/l) is an even function (product of two even functions); x2 sin (nx/l) is
odd (product of an even function with an odd function); and x3 sin (nx/l) is even (product of
two odd functions).
Let f be even on [-l,l], then its Fourier series on this interval is
(1/2)a0 + Σ (n=1 to ∞) an cos(nπx/l) (4)
where
an = (2/l) ∫_{0}^{l} f(x) cos(nπx/l) dx (5)
Let f be odd on [-l,l], then its Fourier series on this interval is
Σ (n=1 to ∞) bn sin(nπx/l) (6)
where
bn = (2/l) ∫_{0}^{l} f(x) sin(nπx/l) dx (7)
Theorem 1 Convergence Theorem
Let f be piecewise continuous on [-l,l]. Then,
1. If −l < x < l and f has a left and right derivative at x, then the Fourier series of f on [-l,l]
converges at x to
(1/2)(f(x+) + f(x−)).
2. If f'R(−l) and f'L(l) exist, then at both l and −l, the Fourier series of f on [-l,l] converges
to
(1/2)(f(−l+) + f(l−)).
2.2 Fourier Cosine and Sine Series
If f(x) is defined on [-l,l] we may be able to write its Fourier series. The coefficients of
this series are completely determined by the function and the interval.
We will now show that if f(x) is defined on the half-interval [0,l), then we have a choice
and can write a series containing just cosines or just sines in attempting to represent f(x) on this
half-interval.
The Fourier Cosine Series of a Function
Let f be integrable on [0,l]. We want to expand f(x) in a series of cosine functions. Define
f_e(x) = f(x) for 0 ≤ x ≤ l and f_e(x) = f(-x) for -l ≤ x < 0.
Then f_e is an even function,
f_e(-x) = f_e(x),
and agrees with f on [0,l],
f_e(x) = f(x) for 0 ≤ x ≤ l.
We call f_e the even extension of f to [-l,l]. For example, if
f(x) = e^x for 0 ≤ x ≤ 2, then f_e(x) = e^(|x|) on [-2,2].
Here we put f_e(x) = f(-x) = e^(-x) for -2 ≤ x < 0.


Because f_e is an even function on [-l,l], its Fourier series on [-l,l] is
(1/2)a_0 + Σ_{n=1}^∞ a_n cos(nπx/l),     (8)
in which
a_n = (2/l) ∫_0^l f_e(x) cos(nπx/l) dx = (2/l) ∫_0^l f(x) cos(nπx/l) dx,     (9)
since f_e(x) = f(x) for 0 ≤ x ≤ l. We call the series (8) the Fourier cosine series of f on [0,l]. The
coefficients (9) are the Fourier cosine coefficients of f on [0,l].
The even extension f_e was introduced only to be able to make use of earlier work to
derive a series containing just cosines. When we actually write a Fourier cosine series, we just
use equation (5) to calculate the coefficients, without defining f_e.
The other advantage of having f_e in the background, however, is that we can use the Fourier
convergence theorem to write a convergence theorem for cosine series.
Theorem 2 Convergence of Fourier Cosine Series
Let f be piecewise continuous on [0,l]. Then,
1. If 0 < x < l, and f has left and right derivatives at x, then the Fourier cosine
series for f(x) on [0,l] converges at x to
(1/2)(f(x+) + f(x-)).
2. If f has a right derivative at 0, then the Fourier cosine series for f (x) on [0,l]
converges at 0 to f(0+).
3. If f has a left derivative at l, then the Fourier cosine series for f(x) on [0,l]
converges at l to f(l-).
Example 3
Let f(x) = e^x for 0 ≤ x ≤ 2. We will write the Fourier cosine series of f on [0,2].
The Fourier coefficients are
a_0 = ∫_0^2 e^x dx = e² - 1
and, for n = 1, 2, ...,
a_n = ∫_0^2 e^x cos(nπx/2) dx = 4((-1)^n e² - 1)/(4 + n²π²).
The cosine series is
(1/2)(e² - 1) + Σ_{n=1}^∞ [4((-1)^n e² - 1)/(4 + n²π²)] cos(nπx/2).
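As a check, these coefficients can also be computed numerically. The sketch below (an illustration only, assuming NumPy and SciPy are available) evaluates a_n by quadrature, compares it with the closed form just written, and sums the cosine series at an interior point, where it should be close to e^x.

    import numpy as np
    from scipy.integrate import quad

    l = 2.0
    f = np.exp                                    # f(x) = e^x on [0, 2]

    def a(n):
        # Fourier cosine coefficient (2/l) * integral_0^l f(x) cos(n*pi*x/l) dx
        val, _ = quad(lambda x: f(x) * np.cos(n * np.pi * x / l), 0.0, l)
        return 2.0 * val / l

    def a_closed(n):
        # closed form from Example 3: 4((-1)^n e^2 - 1) / (4 + n^2 pi^2)
        return 4.0 * ((-1) ** n * np.e ** 2 - 1.0) / (4.0 + n ** 2 * np.pi ** 2)

    print(a(0), np.e ** 2 - 1.0)                  # a_0 should equal e^2 - 1
    for n in range(1, 5):
        print(n, a(n), a_closed(n))

    x = 1.0                                       # partial sum of the series at x = 1
    s = a(0) / 2 + sum(a_closed(n) * np.cos(n * np.pi * x / l) for n in range(1, 200))
    print(s, np.exp(x))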


2.3 The Fourier Sine Series of a Function
By duplicating the strategy just used for writing a cosine series, except now extending f
to an odd function f0 over [-l,l], we can write a Fourier sine series for f(x) on [0,l]. In particular,
if f(x) is defined on [0,l], let
f_0(x) = f(x) for 0 < x ≤ l, f_0(0) = 0, and f_0(x) = -f(-x) for -l ≤ x < 0.
Then f_0 is an odd function, and f_0(x) = f(x) for 0 < x ≤ l. This is the odd extension of f to [-l,l].
For example, if f(x) = e^(2x) for 0 ≤ x ≤ l, let
f_0(x) = e^(2x) for 0 < x ≤ l, f_0(0) = 0, and f_0(x) = -e^(-2x) for -l ≤ x < 0.
Now write the Fourier series for f_0(x) on [-l,l]. By equations (6) and (7), the Fourier
series of f_0 is
Σ_{n=1}^∞ b_n sin(nπx/l),     (10)
with coefficients
b_n = (2/l) ∫_0^l f(x) sin(nπx/l) dx.     (11)
We call the series (10) the Fourier sine series of f on [0,l]. The coefficients given by
equation (11) are the Fourier sine coefficients of f on [0,l]. As with cosine series, we do not need
to explicitly make the extension f_0 to write the Fourier sine series for f on [0,l].
Again, as with the cosine expansion, we can write a convergence theorem for sine series using
the convergence theorem for Fourier series.
Theorem 3 Convergence of Fourier Sine Series
Let f be piecewise continuous on [0,l]. Then
1. If 0< x < l, and f has left and right derivatives at x, then the Fourier sine
series for f(x) on [0,l] converges at x to
(1/2)(f(x+) + f(x-)).
2. At 0 and at l, the Fourier sine series for f(x) on [0,l] converges to 0.


Conclusion (2) is immediate because each term of the sine series (10) is zero for
x=0 and for x=l.
Example 4
Let f(x) = e^(2x) for 0 ≤ x ≤ 1. We will write the Fourier sine series of f on [0,1]. The
coefficients are
b_n = 2 ∫_0^1 e^(2x) sin(nπx) dx = 2nπ(1 - (-1)^n e²)/(4 + n²π²).
The sine series is
Σ_{n=1}^∞ [2nπ(1 - (-1)^n e²)/(4 + n²π²)] sin(nπx).
The series converges to e^(2x) for 0 < x < 1, and to zero for x = 0 and for x = 1.
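A quick numerical illustration of this convergence behaviour is sketched below (assuming NumPy is available; the coefficient formula is the one computed in Example 4).

    import numpy as np

    def b(n):
        # b_n = 2*n*pi*(1 - (-1)^n e^2) / (4 + n^2 pi^2), from Example 4
        return 2 * n * np.pi * (1 - (-1.0) ** n * np.e ** 2) / (4 + n ** 2 * np.pi ** 2)

    def partial_sum(x, N=500):
        n = np.arange(1, N + 1)
        return np.sum(b(n) * np.sin(np.pi * np.outer(x, n)), axis=1)

    x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
    print(partial_sum(x))      # interior values approach e^(2x); the endpoints approach 0
    print(np.exp(2 * x))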
3 Method of Separation of Variables for Solving Partial Differential Equations
Method of separation of variables is a powerful method for solving partial differential
equations of the type
(12)
under certain situations.
The basic idea of this method is to transform a partial differential equation into as many
differential equations as the number of independent variables in the partial differential
equation by representing the solution as a product of functions of each independent variable.
After these ordinary differential equations are solved, the method reduces to solving
eigenvalue problems and constructing the general solution as an eigenfunction expansion,
where the coefficients are evaluated by using the boundary and initial conditions; see Section 4
for further details.
Let u(x,y) = X(x) Y(y)     (13)
be a solution of (12); then (12) may be written in the form
(1/X) f(Dx) X = (1/Y) g(Dy) Y,     (14)
where f(Dx), g(Dy) are quadratic functions of Dx = ∂/∂x and Dy = ∂/∂y, respectively. In this
situation we say that (12) is separable in the variables x, y. The derivation of a solution of the
equation is straightforward: the left-hand side of (14) is a function of x alone, the right-hand
side is a function of y alone, and the two can be equal only if each is equal to a constant, say
λ. The problem of finding solutions of the form (13) of (12) therefore reduces to solving the pair
of second-order linear ordinary differential equations
f(Dx) X = λ X(x),   g(Dy) Y = λ Y(y).     (15)

3.1. Application to Heat Equation

Consider the heat equation
∂u/∂t = k ∂²u/∂x².
Let u(x,t) = X(x) T(t)
be a solution of the heat equation. Then
∂u/∂t = X(x) T'(t)   and   ∂²u/∂x² = X''(x) T(t).
Putting these values in the heat equation we get X T' = k X'' T, which can be written as
T'(t)/(k T(t)) = X''(x)/X(x) = λ.     (16)
The pair of ordinary differential equations corresponding to (15) is
X''(x) - λ X(x) = 0,   T'(t) - k λ T(t) = 0.     (17)
Let λ = -n²; then by the method discussed in 2.1 we find that T(t) = K e^(-k n² t) is a general
solution of the second equation of (17), where K is a constant of integration which can be
determined by the given initial and boundary conditions. The general solution of the first equation of
(17) is given in Section 6.7.
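The separated solutions can be verified symbolically. The following sketch (an illustration, using SymPy) checks that the product u(x,t) = e^(-k n² t) sin(nx), built from X'' = -n²X together with the T-equation above, satisfies the heat equation ∂u/∂t = k ∂²u/∂x².

    import sympy as sp

    x, t, k, n = sp.symbols('x t k n', positive=True)
    u = sp.exp(-k * n**2 * t) * sp.sin(n * x)       # separated product solution
    residual = sp.diff(u, t) - k * sp.diff(u, x, 2)
    print(sp.simplify(residual))                    # prints 0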
3.2 Application to Wave Equation

Consider the wave equation
∂²u/∂t² = c² ∂²u/∂x².
Let u(x,t) = X(x) T(t); then
∂²u/∂t² = X(x) T''(t)   and   ∂²u/∂x² = X''(x) T(t).
Putting these values in the equation we get
X(x) T''(t) = c² X''(x) T(t)
or
T''(t)/(c² T(t)) = X''(x)/X(x) = -λ,
or T''(t) + c² λ T = 0,   X''(x) + λ X = 0.
Solutions of these equations can be found with different boundary conditions as discussed
in section (6.7).
3.3. Application to Laplace Equation

Consider the Laplace equation
∂²u/∂x² + ∂²u/∂y² = 0.
Let u(x,y) = X(x) Y(y) be a solution of the equation. Then
∂²u/∂x² = X''(x) Y(y)   and   ∂²u/∂y² = X(x) Y''(y).
Putting these values in the equation we get
X''(x) Y(y) + X(x) Y''(y) = 0
or
X''(x)/X(x) = -Y''(y)/Y(y) = -n² (say),
or X''(x) + n² X = 0,   Y''(y) - n² Y = 0.
Solutions of these equations can be obtained along the lines of Section 6.7.
4 Solutions of Partial Differential Equations with Boundary Conditions
In this section we present solutions of the wave, heat and Laplace equations with
boundary and initial conditions. We briefly discuss how a physical situation can be written in the
form of the wave equation.
The Black-Scholes model, which earned the 1997 Nobel Prize in Economics, is described in
a later subsection of this section.
4.1 The Wave Equation with Initial and Boundary Conditions
Modeling of a Physical Situation
Vibrations in a membrane or drumhead, or oscillations induced in a guitar or violin string, are
governed by a partial differential equation called the wave equation. We will derive this
equation in a simple setting.
Consider an elastic string stretched between two pegs, as on a guitar. We want to describe
the motion of the string if it is given a small displacement and released to vibrate in a plane.
Place the string along the x axis from 0 to l and assume that it vibrates in the x, y plane.
We want a function u(x,t) such that at any time t>0, the graph of the function u=u(x,t) of x is the
shape of the string at that time. Thus, u (x,t) allows us to take a snapshot of the string at any time,
showing it as a curve in the plane. For this reason u(x,t) is called the position function for the
string. Figure 1 shows a typical configuration.
To begin with a simple case, neglect damping forces such as air resistance and the weight
of the string and assume that the tension T(x,t) in the string always acts tangentially to the string
and that individual particles of the string move only vertically. Also assume that the mass ρ per
unit length is constant.
Now consider a typical segment of string between x and x+Δx and apply Newton's
second law of motion to write
Net force on this segment due to the tension
= acceleration of the center of mass of the segment times its mass.
This is a vector equation. For Δx small, the vertical component of this equation (Figure 2)
gives us approximately
T(x+Δx,t) sin(θ+Δθ) - T(x,t) sin(θ) = ρ Δx ∂²u/∂t² (x̄,t),
where x̄ is the center of mass of the segment and T(x,t) = ||T(x,t)|| is the magnitude of the tension.

Figure 1: String profile at time t.          Figure 2


Then
[T(x+Δx,t) sin(θ+Δθ) - T(x,t) sin(θ)] / Δx = ρ ∂²u/∂t² (x̄,t).
Now v(x,t) = T(x,t) sin(θ) is the vertical component of the tension, so the last equation
becomes
[v(x+Δx,t) - v(x,t)] / Δx = ρ ∂²u/∂t² (x̄,t).
In the limit as Δx → 0, we also have x̄ → x, and the last equation becomes
∂v/∂x = ρ ∂²u/∂t².     (18)
The horizontal component of the tension is h(x,t) = T(x,t) cos(θ), so
v(x,t) = h(x,t) tan(θ) = h(x,t) ∂u/∂x,
since tan(θ) is the slope ∂u/∂x of the string. Substitute this into equation (18) to get
∂/∂x (h ∂u/∂x) = ρ ∂²u/∂t².     (19)
To compute the left side of this equation, use the fact that the net horizontal component of
the tension acting on the segment is zero, so
h(x+Δx,t) - h(x,t) = 0.
Thus h is independent of x and equation (19) can be written
h ∂²u/∂x² = ρ ∂²u/∂t².
Letting c² = h/ρ, this equation is often written
∂²u/∂t² = c² ∂²u/∂x².
This is the one-dimensional (1-space dimension) wave equation.
In order to model the string's motion, we need more than just the wave equation. We
must also incorporate information about constraints on the ends of the string and about the initial
velocity and position of the string, which will obviously influence the motion.
If the ends of the string are fixed, then
u(0,t) = u(l,t) = 0 for t ≥ 0.
These are the boundary conditions.
The initial conditions specify the initial (at time zero) position
u(x,0) = f(x) for 0 ≤ x ≤ l
and the initial velocity
∂u/∂t (x,0) = g(x) for 0 < x < l,
in which f and g are given functions satisfying certain compatibility conditions. For example, if
the string is fixed at its ends, then the initial position function must reflect this by satisfying
f(0)=f(l)=0.
If the initial velocity is zero (the string is released from rest), then g(x)=0.
The wave equation, together with the boundary and initial conditions, constitute a
boundary value problem for the position function u(x,t) of the string. These provide enough
information to uniquely determine the solution u(x,t).
If there is an external force of magnitude F units of force per unit length acting on the
string in the vertical direction, then this derivation can be modified to obtain
∂²u/∂t² = c² ∂²u/∂x² + (1/ρ) F.
Again, the boundary value problem consists of this wave equation and the boundary and
initial conditions.
In 2-space dimensions the wave equation is

∂²u/∂t² = c² (∂²u/∂x² + ∂²u/∂y²).     (20)

This equation governs vertical displacements u(x,y,t) of a membrane covering a specified


region of the plane (for example, vibrations of a drum surface).
Again, boundary and initial conditions must be given to determine a unique solution.
Typically, the frame is fixed on a boundary (the rim of the drum surface), so we would have no
displacement of points on the boundary:
u(x,y,t)= 0 for (x,y) on the boundary of the region and t>0.
Further, the initial displacement and initial velocity must be given. These initial
conditions have the form
u(x,y,0) = f(x,y),   ∂u/∂t (x,y,0) = g(x,y)
with f and g given.
Sometimes polar coordinates formulation is more convenient. We present below this
form. Let
x=r cos(), y=r sin().
Then
r= and  = tan -1 (y/x).
Let
u(x,y)= u (r cos(), r sin ()) = v (r, ).
Compute, by the chain rule,
∂u/∂x = cos(θ) ∂v/∂r - (sin(θ)/r) ∂v/∂θ.
By a similar calculation, we get ∂u/∂y, and then the second derivatives ∂²u/∂x² and ∂²u/∂y².
Adding them gives
∂²u/∂x² + ∂²u/∂y² = ∂²v/∂r² + (1/r) ∂v/∂r + (1/r²) ∂²v/∂θ².
Therefore, in polar coordinates, the two-dimensional wave equation (20) is

∂²v/∂t² = c² (∂²v/∂r² + (1/r) ∂v/∂r + (1/r²) ∂²v/∂θ²),     (21)

in which v(r,θ,t) is the vertical displacement of the membrane from the x, y plane at the point
(r,θ) and time t.
Separable Variable - Fourier Series Method for the Wave Equation
Consider an elastic string of length l, fastened at its ends on the x axis at x = 0 and x = l.
The string is displaced, then released from rest to vibrate in the x,y plane. We want to find the
displacement function u (x,t), whose graph is a curve in the x,y plane showing the shape of the
string at time t. If we took a snapshot of the string at time t, we would see this curve.
The boundary value problem for the displacement function is
∂²u/∂t² = c² ∂²u/∂x² for 0 < x < l, t > 0,
u(0,t) = u(l,t) = 0 for t ≥ 0,
u(x,0) = f(x) for 0 ≤ x ≤ l,
∂u/∂t (x,0) = 0 for 0 ≤ x ≤ l.
The graph of f(x) is the position of the string before release.
The Fourier method, or separation of variables, consists of attempting a solution of
the form u(x,t) =X(x) T(t). Substitute this into the wave equation to obtain
X T'' = c² X'' T,
where T' = dT/dt and X' = dX/dx. Then
X''/X = T''/(c² T).
The left side of this equation depends only on x, and the right only on t. Because x and t
are independent, we can choose any t0 we like and fix the right side of this equation at the
constant value T''(t0)/(c² T(t0)), while varying x on the left side. Therefore, X''/X must be constant
for all x in (0,l). But then T''/(c² T) must equal the same constant for all t > 0. Denote this constant
-λ. (The negative sign is customary and convenient, but we would arrive at the same final solution
if we used just λ.) λ is called the separation constant, and we now have
X''/X = T''/(c² T) = -λ.
Then
X'' + λ X = 0 and T'' + λ c² T = 0.
The wave equation has separated into two ordinary differential equations.
Now consider the boundary conditions. First,
u(0,t) = X(0) T(t)=0
for t ≥ 0. If T(t) = 0 for all t ≥ 0, then u(x,t) = 0 for 0 ≤ x ≤ l and t ≥ 0. This is indeed the solution
if f(x) = 0, since in the absence of initial velocity or a driving force, and with zero displacement,
the string remains stationary for all time. However, if T(t) ≠ 0 for some t, then this boundary
condition can be satisfied only if
X(0) = 0.
Similarly, u(l,t) = X(l)T(t) = 0 for t ≥ 0 requires that
X(l) = 0.
We now have a boundary value problem for X:
X'' + λ X = 0; X(0) = X(l) = 0.
The values of λ for which this problem has nontrivial solutions are the eigenvalues of this
problem, and the corresponding nontrivial solutions for X are the eigenfunctions. We can solve
this regular Sturm-Liouville problem, obtaining the eigenvalues
λ_n = n²π²/l² for n = 1, 2, ....
The eigenfunctions are nonzero constant multiples of

X_n(x) = sin(nπx/l) for n = 1, 2, ....
At this point we therefore have infinitely many possibilities for the separation
constant and for X(x).
Now turn to T(t). Since the string is released from rest,
∂u/∂t (x,0) = X(x) T'(0) = 0.
This requires that T'(0) = 0. The problem to be solved for T is therefore
T'' + λ c² T = 0; T'(0) = 0.
However, we now know that λ can take on only values of the form n²π²/l², so this
problem is really
T'' + (n²π²c²/l²) T = 0; T'(0) = 0.
The differential equation for T has general solution
T(t) = a cos(nπct/l) + b sin(nπct/l).
Now
T'(0) = (nπc/l) b = 0,
so b = 0. We therefore have solutions for T(t) of the form
T_n(t) = c_n cos(nπct/l)
for each positive integer n, with the constants c_n as yet undetermined.
We now have, for n = 1, 2, ...., functions
u_n(x,t) = c_n sin(nπx/l) cos(nπct/l).     (22)
Each of these functions satisfies the wave equation, both boundary conditions, and the
zero initial velocity condition ∂u/∂t (x,0) = 0. We still need to satisfy the condition u(x,0) = f(x).
It may be possible to choose some n so that u_n(x,t) is the solution for some choice of c_n.
For example, suppose the initial displacement is
f(x) = 14 sin(3πx/l).
Now choose n = 3 and c_3 = 14 to obtain the solution
u(x,t) = 14 sin(3πx/l) cos(3πct/l).
This function satisfies the wave equation, the conditions u(0,t) = u(l,t) = 0, the initial
condition u(x,0) = 14 sin(3πx/l), and the zero initial velocity condition
∂u/∂t (x,0) = 0.
However, depending on the initial displacement function, we may not be able to get by
simply by picking a particular n and c_n in equation (22). For example, if we initially pick the
string up in the middle and have initial displacement function
f(x) = x for 0 ≤ x ≤ l/2 and f(x) = l - x for l/2 < x ≤ l     (23)
(as in Figure 3), then we can never satisfy u(x,0) = f(x) with one of the u_n's. Even
if we try a finite linear combination
u(x,t) = Σ_{n=1}^N c_n sin(nπx/l) cos(nπct/l),
we cannot choose c_1, ..., c_N to satisfy u(x,0) = f(x) for this function, since f(x)
cannot be written as a finite sum of sine functions.
We are therefore led to attempt an infinite superposition
u(x,t) = Σ_{n=1}^∞ c_n sin(nπx/l) cos(nπct/l).
We must choose the c_n's to satisfy
u(x,0) = Σ_{n=1}^∞ c_n sin(nπx/l) = f(x).
We can do this! This series is the Fourier sine expansion of f(x) on [0,l]. Thus
choose the Fourier sine coefficients
c_n = (2/l) ∫_0^l f(ξ) sin(nπξ/l) dξ.
With this choice, we obtain the solution

u(x,t) = Σ_{n=1}^∞ [(2/l) ∫_0^l f(ξ) sin(nπξ/l) dξ] sin(nπx/l) cos(nπct/l).     (24)

This strategy will work for any initial displacement function f that is continuous
with a piecewise continuous derivative on [0,l] and satisfies f(0) = f(l) = 0. These
conditions ensure that the Fourier sine series of f(x) on [0,l] converges to f(x) for 0 ≤ x ≤
l.
In specific instances, where f(x) is given, we can of course explicitly compute the
coefficients in this solution. For the initial position function (23) compute the
coefficients:
c_n = (2/l) [∫_0^{l/2} ξ sin(nπξ/l) dξ + ∫_{l/2}^l (l - ξ) sin(nπξ/l) dξ]
= (4l/(n²π²)) sin(nπ/2).
The solution for this initial displacement function, and zero initial velocity, is
u(x,t) = Σ_{n=1}^∞ (4l/(n²π²)) sin(nπ/2) sin(nπx/l) cos(nπct/l).
Since sin(nπ/2) = 0 if n is even, we can sum over just the odd integers. Further, if
n = 2k-1, then
sin(nπ/2) = sin((2k-1)π/2) = (-1)^(k+1).
Therefore,
u(x,t) = (4l/π²) Σ_{k=1}^∞ [(-1)^(k+1)/(2k-1)²] sin((2k-1)πx/l) cos((2k-1)πct/l).     (25)
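The series (25) is easy to evaluate numerically. The sketch below (an illustration; the values l = 1 and c = 1 are arbitrary choices, not taken from the text) sums the first few hundred terms and checks that at t = 0 the series reproduces the triangular initial shape (23).

    import numpy as np

    l, c = 1.0, 1.0          # assumed values for illustration

    def u(x, t, K=200):
        # partial sum of equation (25)
        k = np.arange(1, K + 1)
        coef = (4 * l / np.pi ** 2) * (-1.0) ** (k + 1) / (2 * k - 1) ** 2
        return np.sum(coef * np.sin((2 * k - 1) * np.pi * x / l)
                           * np.cos((2 * k - 1) * np.pi * c * t / l))

    for x in [0.1, 0.25, 0.5, 0.75]:
        tent = x if x <= l / 2 else l - x          # the initial shape (23)
        print(x, u(x, 0.0), tent)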

Vibrating String with Given Initial Velocity and Zero Initial Displacement
Now consider the case that the string is released from its horizontal position (zero
initial displacement) but with an initial velocity given at x by g(x). The boundary value
problem for the displacement function is
∂²u/∂t² = c² ∂²u/∂x² for 0 < x < l, t > 0,
u(0,t) = u(l,t) = 0 for t ≥ 0,
u(x,0) = 0 for 0 ≤ x ≤ l,
∂u/∂t (x,0) = g(x) for 0 < x < l.
We begin as before with separation of variables. Put u(x,t) = X(x) T(t). Since the
partial differential equation and boundary conditions are the same as before, we again
obtain
X'' + λ X = 0; X(0) = X(l) = 0,
with eigenvalues λ_n = n²π²/l²
and eigenfunctions constant multiples of
X_n(x) = sin(nπx/l).
Now, however, the problem for T is different and we have
u(x,0) = 0 = X(x) T(0),
so T(0) = 0. The problem for T is
T'' + (n²π²c²/l²) T = 0; T(0) = 0.
(In the case of zero initial velocity we had T'(0) = 0.) The general solution of the
differential equation for T is
T(t) = a cos(nπct/l) + b sin(nπct/l).
Since T(0) = a = 0, solutions for T(t) are constant multiples of sin(nπct/l). Thus, for
n = 1, 2, ...., we have functions
u_n(x,t) = c_n sin(nπx/l) sin(nπct/l).
Each of these functions satisfies the wave equation, the boundary conditions, and
the zero initial displacement condition. To satisfy the initial velocity condition
∂u/∂t (x,0) = g(x), we generally must attempt a superposition
u(x,t) = Σ_{n=1}^∞ c_n sin(nπx/l) sin(nπct/l).
Assuming that we can differentiate this series term-by-term, then
∂u/∂t (x,0) = Σ_{n=1}^∞ c_n (nπc/l) sin(nπx/l) = g(x).
This is the Fourier sine expansion of g(x) on [0,l]. Choose the entire coefficient of
sin(nπx/l) to be the Fourier sine coefficient of g(x) on [0,l]:
c_n (nπc/l) = (2/l) ∫_0^l g(ξ) sin(nπξ/l) dξ,
or
c_n = (2/(nπc)) ∫_0^l g(ξ) sin(nπξ/l) dξ.
The solution is

u(x,t) = Σ_{n=1}^∞ [(2/(nπc)) ∫_0^l g(ξ) sin(nπξ/l) dξ] sin(nπx/l) sin(nπct/l).     (26)

For example, suppose the string is released from its horizontal position with an initial velocity
given by g(x) = x(1 + cos(πx/l)). Compute
c_n = (2/(nπc)) ∫_0^l ξ (1 + cos(πξ/l)) sin(nπξ/l) dξ.

The solution for this initial velocity function is


u(x,t)= (27)

If we let c = 1 and l = π, we obtain


u(x,t) = sin (x) sin (t) +
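Solution (26) can also be evaluated without the closed-form coefficients: the sketch below (illustrative only, assuming NumPy and SciPy are available) computes c_n for g(x) = x(1 + cos(πx/l)) by numerical quadrature and sums the series.

    import numpy as np
    from scipy.integrate import quad

    l, c = np.pi, 1.0
    g = lambda x: x * (1 + np.cos(np.pi * x / l))   # the given initial velocity

    def c_n(n):
        # c_n = (2/(n*pi*c)) * integral_0^l g(xi) sin(n*pi*xi/l) dxi, as in (26)
        val, _ = quad(lambda xi: g(xi) * np.sin(n * np.pi * xi / l), 0.0, l, limit=200)
        return 2.0 * val / (n * np.pi * c)

    def u(x, t, N=60):
        return sum(c_n(n) * np.sin(n * np.pi * x / l) * np.sin(n * np.pi * c * t / l)
                   for n in range(1, N + 1))

    print(u(l / 2, 0.3))             # displacement of the midpoint at t = 0.3
    print(u(0.0, 0.3), u(l, 0.3))    # the fixed ends remain at zero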
Transformation of Boundary Value Problems Involving the Wave Equation
There are boundary value problems involving the wave equation for which separation of
variables does not lead to the solution. This can occur because of the form of the wave equation
(for example, there may be an external forcing term), or because of the form of the boundary
conditions. Here is an example of such a problem and a strategy of overcoming the difficulty.
Consider the boundary value problem

∂²u/∂t² = ∂²u/∂x² + Ax for 0 < x < l, t > 0,
u(0,t) = u(l,t) = 0 for t ≥ 0,
u(x,0) = 0,   ∂u/∂t (x,0) = 1 for 0 < x < l.
A is a positive constant. The term Ax in the wave equation represents an external force
which at x has magnitude Ax. We have let c=1 in this problem.
If we put u(x,t)= X(x) T(t) into the partial differential equation, we get
X T'' = X'' T + Ax,
and there is no way to separate the t-dependent terms on one side of the equation and the x-
dependent terms on the other.
We will transform this problem into one for which separation of variables works. Let
u(x,t) = U(x,t) + ψ(x).
The idea is to choose ψ to reduce the given problem to one we have already solved.
Substitute u(x,t) into the partial differential equation to get
∂²U/∂t² = ∂²U/∂x² + ψ''(x) + Ax.
This will be simplified if we choose ψ so that
ψ''(x) + Ax = 0.
There are many such choices. By integrating twice, we get
ψ(x) = -(A/6)x³ + Cx + D,
with C and D constants we can still choose any way we like. Now look at the boundary
conditions.
First, u(0,t) = U(0,t) + ψ(0) = 0.
This will be just u(0,t) = U(0,t) if we choose
ψ(0) = D = 0.
Next,
u(l,t) = U(l,t) + ψ(l) = U(l,t) - (A/6)l³ + Cl = 0.
This will reduce to u(l,t) = U(l,t) if we choose C so that
ψ(l) = -(A/6)l³ + Cl = 0,
or C = Al²/6.
This means that
ψ(x) = -(A/6)x³ + (A/6)l²x = (A/6) x (l² - x²).
With this choice of ψ,
U(0,t) = U(l,t) = 0.
Now relate the initial conditions for u to initial conditions for U. First,
U(x,0) = u(x,0) - ψ(x) = -ψ(x) = (A/6) x (x² - l²)
and
∂U/∂t (x,0) = ∂u/∂t (x,0) = 1.
We now have a boundary value problem for U(x,t):
∂²U/∂t² = ∂²U/∂x² for 0 < x < l, t > 0,
U(0,t) = 0, U(l,t) = 0 for t > 0,
U(x,0) = (A/6) x (x² - l²),   ∂U/∂t (x,0) = 1 for 0 < x < l.
Using equations (24) and (26), we immediately write the solution

U(x,t) = Σ_{n=1}^∞ [(2/l) ∫_0^l (A/6) ξ (ξ² - l²) sin(nπξ/l) dξ] sin(nπx/l) cos(nπt/l)
+ Σ_{n=1}^∞ [(2/(nπ)) ∫_0^l sin(nπξ/l) dξ] sin(nπx/l) sin(nπt/l).
The solution of the original problem is
u(x,t) = U(x,t) + (A/6) x (l² - x²).
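The choice of ψ can be checked symbolically; the short SymPy sketch below verifies that ψ(x) = (A/6)x(l² - x²) satisfies ψ'' + Ax = 0 together with ψ(0) = ψ(l) = 0.

    import sympy as sp

    x, l, A = sp.symbols('x l A', positive=True)
    psi = A * x * (l ** 2 - x ** 2) / 6
    print(sp.simplify(sp.diff(psi, x, 2) + A * x))        # 0, so psi'' + A x = 0
    print(psi.subs(x, 0), sp.simplify(psi.subs(x, l)))    # both boundary values are 0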
4.2 The Heat Equation with Boundary and Initial Conditions
We discuss here solutions of the heat equation by the separation of variables-Fourier series
method under certain initial and boundary conditions.
Ends of the Bar Kept at Temperature Zero
Suppose we want the temperature distribution u(x,t) in a thin, homogeneous (constant
density) bar of length l, given that the initial temperature in the bar at time zero in the cross
section at x perpendicular to the x axis is f(x). The ends of the bar are maintained at temperature
zero for all time.
The boundary value problem modeling this temperature distribution is
∂u/∂t = k ∂²u/∂x² for 0 < x < l, t > 0,
u(0,t) = u(l,t) = 0 for t ≥ 0,
u(x,0) = f(x) for 0 ≤ x ≤ l.
We will use separation of variables. Substitute u(x,t)=X(x) T(t) into the heat equation to
get
X T' = k X'' T
or
T'/(kT) = X''/X.
The left side depends only on time, and the right side only on position, and these
variables are independent. Therefore, for some constant λ,
T'/(kT) = X''/X = -λ.
Now
u(0,t) =X(0) T(t)=0.
If T(t)=0 for all t, then the temperature function has the constant value zero, which occurs
if the initial temperature f(x) = 0 for 0 ≤ x ≤ l. Otherwise, T(t) cannot be identically zero, so we
must have X(0) = 0. Similarly, u(l,t) = X(l)T(t) = 0 implies that X(l) = 0. The problem for X is
therefore
X'' + λ X = 0; X(0) = X(l) = 0.
We seek values of λ (the eigenvalues) for which this problem for X has nontrivial
solutions (the eigenfunctions).
This problem for X is exactly the same one encountered for the space-dependent function
in separating variables in the wave equation. There we found that the eigenvalues are
λ_n = n²π²/l² for n = 1, 2, ....,
and corresponding eigenfunctions are nonzero constant multiples of
X_n(x) = sin(nπx/l).
The problem for T becomes
T' + (n²π²k/l²) T = 0,
which has general solution
T_n(t) = c_n e^(-n²π²kt/l²).
For n = 1, 2, ....., we now have functions
u_n(x,t) = c_n sin(nπx/l) e^(-n²π²kt/l²),
which satisfy the heat equation on [0,l] and the boundary conditions u(0,t) = u(l,t) = 0.
There remains to find a solution satisfying the initial condition. We can choose n and c_n so that
u_n(x,0) = c_n sin(nπx/l) = f(x)
only if the given initial temperature function is a multiple of this sine function. This need not be
the case. In general, we must attempt to construct a solution using the superposition
u(x,t) = Σ_{n=1}^∞ c_n sin(nπx/l) e^(-n²π²kt/l²).
Now we need
u(x,0) = Σ_{n=1}^∞ c_n sin(nπx/l) = f(x),
which we recognize as the Fourier sine expansion of f(x) on [0,l]. Thus choose
c_n = (2/l) ∫_0^l f(ξ) sin(nπξ/l) dξ.
With this choice of the coefficients, we have the solution for the temperature distribution
function:
u(x,t) = Σ_{n=1}^∞ [(2/l) ∫_0^l f(ξ) sin(nπξ/l) dξ] sin(nπx/l) e^(-n²π²kt/l²).     (28)
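As an illustration of (28), the sketch below (the initial temperature f(x) = x(l - x) and the values of l and k are assumptions made here, not taken from the text) computes the coefficients by quadrature and evaluates the series.

    import numpy as np
    from scipy.integrate import quad

    l, k = 1.0, 0.5                     # assumed bar length and diffusivity
    f = lambda x: x * (l - x)           # assumed initial temperature

    def c_n(n):
        val, _ = quad(lambda xi: f(xi) * np.sin(n * np.pi * xi / l), 0.0, l, limit=200)
        return 2.0 * val / l

    def u(x, t, N=50):
        # partial sum of equation (28)
        return sum(c_n(n) * np.sin(n * np.pi * x / l)
                   * np.exp(-n ** 2 * np.pi ** 2 * k * t / l ** 2)
                   for n in range(1, N + 1))

    print(u(0.5, 0.0), f(0.5))   # at t = 0 the series reproduces f
    print(u(0.5, 0.2))           # the temperature then decays toward zero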
Temperature in a bar with Insulated Ends
Consider heat conduction in a bar with insulated ends, hence no energy loss across the
ends. If the initial temperature is f(x), the temperature function is modeled by the boundary value
problem
∂u/∂t = k ∂²u/∂x² for 0 < x < l, t > 0,
∂u/∂x (0,t) = ∂u/∂x (l,t) = 0 for t > 0,
u(x,0) = f(x) for 0 ≤ x ≤ l.
Attempt a separation of variables by putting u(x,t) =X(x)T(t). We obtain, as in the
preceding subsection,
X'' + λ X = 0,   T' + λ k T = 0.
Now ∂u/∂x (0,t) = X'(0)T(t) = 0
implies (except in the trivial case of zero temperature) that X'(0) = 0. Similarly,
∂u/∂x (l,t) = X'(l)T(t) = 0
implies that X'(l) = 0. The problem for X(x) is therefore
X'' + λ X = 0,   X'(0) = X'(l) = 0.
We have encountered this problem before. The eigenvalues are
λ_n = n²π²/l² for n = 0, 1, 2, ....., with eigenfunctions nonzero constant multiples of

X_n(x) = cos(nπx/l).
The equation for T is now
T' + (n²π²k/l²) T = 0.
When n = 0, we get T_0(t) = constant.
For n = 1, 2, ....,
T_n(t) = c_n e^(-n²π²kt/l²).
We now have functions
u_n(x,t) = c_n cos(nπx/l) e^(-n²π²kt/l²)
for n = 0, 1, 2, ....., each of which satisfies the heat equation and the insulation boundary
conditions. To satisfy the initial conditions, we must generally use a superposition
u(x,t) = c_0/2 + Σ_{n=1}^∞ c_n cos(nπx/l) e^(-n²π²kt/l²).
Here we wrote the constant term (n = 0) as c_0/2 in anticipation of a Fourier cosine
expansion. Indeed, we need
u(x,0) = f(x) = c_0/2 + Σ_{n=1}^∞ c_n cos(nπx/l),     (29)
the Fourier cosine expansion of f(x) on [0,l]. (This is also the expansion of the initial temperature
function in the eigenfunctions of this problem.) We therefore choose
c_n = (2/l) ∫_0^l f(ξ) cos(nπξ/l) dξ.
With this choice of coefficients, the series written above for u(x,t) gives the solution of this
boundary value problem.
Left Half of the Bar at Constant Temperature and Right Half at Temperature Zero
Suppose the left half of the bar is initially at temperature A and the right half is initially at
temperature zero. Thus
f(x) = A for 0 ≤ x ≤ l/2 and f(x) = 0 for l/2 < x ≤ l.
Then
c_0 = (2/l) ∫_0^{l/2} A dξ = A
and, for n = 1, 2, .....,
c_n = (2/l) ∫_0^{l/2} A cos(nπξ/l) dξ = (2A/(nπ)) sin(nπ/2).
The solution for this temperature function is
u(x,t) = A/2 + Σ_{n=1}^∞ (2A/(nπ)) sin(nπ/2) cos(nπx/l) e^(-n²π²kt/l²).
Now sin(nπ/2) is zero if n is even. Further, if n = 2j-1 is odd, then sin(nπ/2) = (-1)^(j+1).
The solution may therefore be written
u(x,t) = A/2 + (2A/π) Σ_{j=1}^∞ [(-1)^(j+1)/(2j-1)] cos((2j-1)πx/l) e^(-(2j-1)²π²kt/l²).
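A numerical sketch of this solution is given below (A, l and k are arbitrary illustrative values). On the left half the temperature starts near A, on the right half near 0, and for large t every point approaches the average value A/2.

    import numpy as np

    A, l, k = 1.0, 1.0, 0.1     # assumed values for illustration

    def u(x, t, J=400):
        j = np.arange(1, J + 1)
        terms = ((2 * A / np.pi) * (-1.0) ** (j + 1) / (2 * j - 1)
                 * np.cos((2 * j - 1) * np.pi * x / l)
                 * np.exp(-(2 * j - 1) ** 2 * np.pi ** 2 * k * t / l ** 2))
        return A / 2 + np.sum(terms)

    print(u(0.25, 0.001))   # close to A shortly after t = 0 (left half)
    print(u(0.75, 0.001))   # close to 0 (right half)
    print(u(0.25, 10.0))    # approaches A/2 for large t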
4.3 The Laplace Equation with Boundary and Initial Conditions
We consider the steady-state heat conduction (or potential) problem for the rectangle
R = {0 < x < a, 0 < y < b}:
∂²u/∂x² + ∂²u/∂y² = 0, (x,y) ∈ R,     (30)
subject to the Dirichlet boundary conditions
u(0,y) = 0 = u(a,y), u(x,0) = 0, u(x,b) = f(x).     (31)
Physically, this problem arises if three edges of a thin isotropic rectangular plate are
insulated and maintained at zero temperature, while the fourth edge is subjected to a variable
temperature f(x) until the steady-state conditions are attained through R. Then the steady-state
value of u(x,y) represents distribution of temperature in the interior of the plate.
Let u(x,y) = X(x) Y(y) be a solution,
which, after substitution into Eq. (30), leads to the set of two ordinary differential equations:
X'' - cX = 0,     (32)
Y'' + cY = 0,     (33)
where c is a constant. Since the first three boundary conditions in (31) are homogeneous, they
become
X(0) = 0, X(a) = 0, Y(0) = 0,     (34)
but the fourth boundary condition, which is nonhomogeneous, must be used separately. Now,
taking c = -λ², as before, the solution of (32) subject to the first two boundary conditions in (34)
leads to the eigenvalues and the corresponding eigenfunctions as
λ_n = nπ/a,   X_n(x) = sin(nπx/a),   n = 1, 2, ....,
while for these eigenvalues the solutions of (33) satisfying the third boundary condition
in (34) are
Y_n(y) = sinh(nπy/a).     (35)
Hence, for arbitrary constants c_n, n = 1, 2, ....., we get

u(x,y) = Σ_{n=1}^∞ c_n sin(nπx/a) sinh(nπy/a).     (36)

The coefficients c_n are then determined by using the fourth boundary condition in (31).
Thus,
u(x,b) = f(x) = Σ_{n=1}^∞ c_n sinh(nπb/a) sin(nπx/a),
which, in view of (7), yields
c_n = [2/(a sinh(nπb/a))] ∫_0^a f(x) sin(nπx/a) dx.     (37)
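The sketch below illustrates (36)-(37) numerically (the edge temperature f(x) = x(a - x) and the dimensions a, b are assumptions made for the example, not data from the text): on the edge y = b the series reproduces f(x), and the other three edges stay at zero.

    import numpy as np
    from scipy.integrate import quad

    a, b = 1.0, 2.0                     # assumed dimensions of the rectangle
    f = lambda x: x * (a - x)           # assumed temperature on the edge y = b

    def c_n(n):
        # c_n from equation (37)
        val, _ = quad(lambda x: f(x) * np.sin(n * np.pi * x / a), 0.0, a, limit=200)
        return 2.0 * val / (a * np.sinh(n * np.pi * b / a))

    def u(x, y, N=40):
        # partial sum of equation (36)
        return sum(c_n(n) * np.sin(n * np.pi * x / a) * np.sinh(n * np.pi * y / a)
                   for n in range(1, N + 1))

    print(u(0.5, b), f(0.5))            # on y = b the series matches f
    print(u(0.5, 0.0), u(0.0, 1.0))     # the other edges remain at zero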

Final comments on PDEs

From what you have seen so far in this short introduction to PDEs, it should be clear that
knowledge of PDEs is an important part of the mathematical modeling done in many different
scientific fields. What you have seen so far is just a small sampling of the vast world of PDEs.
In each of the cases we solved, we worked with just the one-dimensional cases, but with a little
effort, it is possible to set up and solve similar PDEs for higher-dimensional situations as well.
For instance, the two-dimensional wave equation

∂²u/∂t² = c² (∂²u/∂x² + ∂²u/∂y²)     (1)

can be used to model waves on the surface of drumheads, or on the surface of liquids, and the
three-dimensional heat equation
∂u/∂t = k (∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z²)     (2)

can be used to study temperature diffusion in three-dimensional objects. To solve such


equations, one can still use the separation of variables technique that we saw used for the
solutions to the one-dimensional cases. However, with more variables involved, one typically
has to invoke the technique several times in a row, splitting a PDE involving a function of more
than two variables into a sequence of differential equations, each involving a function of just one variable. Solutions
are then found for each of the one-variable differential equations, and combined to yield
solutions to the original PDEs, in much the same way that we saw in the two separation of
variables solutions above.

PDE Problems

(1) Determine which of the following functions are solutions to the two-dimensional Laplace
equation

(a) (b)

(c) (d)

(e) (f)

(2) Determine which of the following functions are solutions to the one-dimensional wave
equation (for a suitable value of the constant c). Also determine what c must equal in each case.

(a) (b)

(c) (d)

(e) (f)

(3) Solve the following four PDEs where u is a function of two variables, x and y. Note that
your answer might have undetermined functions of either x or y, the same way an ODE might
have undetermined constants. Note you can solve these PDEs without having to use the
separation of variables technique.
(a) (b)

(c) (d)

(4) Solve the following systems of PDEs where u is a function of two variables, x and y.
Note that once again your answer might have undetermined functions of either x or y.

(a) and (b) and

(c) and (d) , and

(5) Determine specific solutions to the one-dimensional wave equation for each of the
following sets of initial conditions. Suppose that each one is modeling a vibrating string of
length with fixed ends, and with constants such that in the wave equation PDE.

(a) and

(b) and

(c) and

(d) and

(6) Find solutions to each of the following PDEs by using the separation of variables
technique.

(a) (b)

(c) (d)

(e) (f)
