A partial differential equation is one which involves one or more partial derivatives. The order of the highest derivative is called the order of the equation. A partial differential equation contains more than one independent variable. But here we shall consider only partial differential equations in two independent variables x and y, so that z = f(x, y). We shall denote p = ∂z/∂x and q = ∂z/∂y.
A partial differential equation is linear if it is of the first degree in the dependent variable
and its partial derivatives. If each term of such an equation contains either the dependent variable
or one of its derivatives, the equation is said to be homogeneous; otherwise it is non-homogeneous.
What if there is more than one independent variable? Then the differential equation is called a partial differential equation. An example of such an equation would be one relating a dependent variable u to its partial derivatives, subject to certain conditions, where u is the dependent variable and x and y are the independent variables.
Partial differential equations can be obtained by the elimination of arbitrary constants or by the elimination of arbitrary functions.
Suppose z = f(x, y, a, b) (1)
contains two arbitrary constants a and b. Differentiating (1) partially w.r.t. x and y, we get
p = ∂f/∂x (2)
q = ∂f/∂y (3)
Eliminating a and b from equations (1), (2) and (3), we get a partial differential equation of the first order of the form f(x, y, z, p, q) = 0.
Example 1
Eliminate the arbitrary constants a & b from z = ax + by + ab
Consider z = ax + by + ab (1)
Differentiating (1) partially w.r.t. x and y, we get
p = a (2)
q = b (3)
Substituting (2) and (3) in (1), we get z = px + qy + pq, which is the required partial differential equation.
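The elimination above can be checked symbolically. The following sketch (using the sympy library; not part of the original notes) verifies that substituting p and q back reproduces z = px + qy + pq:

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b')
z = a*x + b*y + a*b            # family with arbitrary constants a and b
p = sp.diff(z, x)              # p = a
q = sp.diff(z, y)              # q = b
# Eliminating a and b amounts to substituting a = p, b = q back into z
assert sp.simplify(z - (p*x + q*y + p*q)) == 0
```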
Find the partial differential equation of the family of spheres of radius one whose centres lie in the xy-plane.
The family is (x - a)² + (y - b)² + z² = 1 (1)
Differentiating (1) partially w.r.t. x and y, we get (x - a) + zp = 0 and (y - b) + zq = 0.
Substituting x - a = -zp and y - b = -zq in (1), we get z²p² + z²q² + z² = 1,
or z²(p² + q² + 1) = 1.
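The elimination of a and b can also be done mechanically by implicit differentiation; a sympy sketch (not part of the original notes):

```python
import sympy as sp

x, y, a, b, p, q = sp.symbols('x y a b p q')
z = sp.Function('z')(x, y)
F = (x - a)**2 + (y - b)**2 + z**2 - 1
# Implicit differentiation w.r.t. x and y, writing p and q for the partials of z
Fx = sp.diff(F, x).subs(sp.Derivative(z, x), p)
Fy = sp.diff(F, y).subs(sp.Derivative(z, y), q)
sol = sp.solve([Fx, Fy], [a, b])     # a = x + z p, b = y + z q
pde = sp.simplify(F.subs(sol))
assert sp.simplify(pde - (z**2*(p**2 + q**2 + 1) - 1)) == 0
```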
Example 4
Eliminate the arbitrary constants a, b & c from
(1)
Differentiating (1) partially w.r.t. x & y, we get
Therefore we get
(2)
(3)
Again differentiating (2) partially w.r.t. ‘x’, we get
(4)
Multiplying (4) by x, we get
or p2
Example 5
Hence, we get
Example 6
Form the partial differential equation by eliminating the arbitrary function f from z = e^y f(x + y).
Consider z = e^y f(x + y) (1)
Differentiating (1) partially w.r.t. x and y, we get
p = e^y f '(x + y)
q = e^y f '(x + y) + e^y f(x + y), i.e., q = p + z,
which is the required partial differential equation.
Example 7
Form the PDE by eliminating the arbitrary functions f and φ from
Consider (1)
Differentiating (1) partially w.r.t. x and y, we get
(2)
(3)
Differentiating (2) and (3) again partially w.r.t. x and y, we get
i.e.,
or
Exercises:
1. Form the partial differential equation by eliminating the arbitrary constants ‘a’ & ‘b’ from the
following equations.
(i)
(ii)
(iii)
(iv)
(v)
2. Find the PDE of the family of spheres of radius 1 having their centres on the xy-plane. {Hint: (x - a)² + (y - b)² + z² = 1}
3. Find the PDE of all spheres whose centres lie on the (i) z-axis (ii) x-axis.
4. Form the partial differential equations by eliminating the arbitrary functions in the following
cases.
The differential equation would now be a partial differential equation and is given as
k ∂²T/∂x² = ρC ∂T/∂t
where
k = thermal conductivity of material,
ρ = density of material,
C = specific heat of material.
As an introduction to solving PDEs, most textbooks concentrate on linear second order PDEs with two independent variables and one dependent variable. The general form of such an equation is
A ∂²u/∂x² + B ∂²u/∂x∂y + C ∂²u/∂y² + D = 0 (7)
where A, B and C are functions of x and y, and D is a function of x, y, u, ∂u/∂x and ∂u/∂y. Depending on the sign of B² - 4AC, equation (7) is classified as elliptic (B² - 4AC < 0), parabolic (B² - 4AC = 0), or hyperbolic (B² - 4AC > 0).
Parabolic Equation
The heat conduction equation is an example of a parabolic second order linear partial differential equation. The heat conduction equation is given by
∂T/∂t = α ∂²T/∂x² (9)
Using the general form of second order linear PDEs with one dependent variable and two independent variables, A = α, B = 0, C = 0
gives B² - 4AC = 0, which confirms that the heat conduction equation is parabolic.
Note that for PDEs one typically uses some other function letter such as u instead of y, which
now quite often shows up as one of the variables involved in the multivariable function.
In general we can use the same terminology to describe PDEs as in the case of ODEs. For
starters, we will call any equation involving one or more partial derivatives of a multivariable
function a partial differential equation. The order of such an equation is the highest order
partial derivative that shows up in the equation. In addition, the equation is called linear if it is
of the first degree in the unknown function u, and its partial derivatives, ux, uxx, uy, etc. (this
means that the highest power of the function, u, and its derivatives is just equal to one in each
term in the equation, and that only one of them appears in each term). If each term in the equation involves either u or one of its partial derivatives, then the equation is classified as homogeneous.
Take a look at the list of PDEs above. Try to classify each one using the terminology given
above. Note that the f(x,y) function in the Poisson equation is just a function of the variables x
and y, it has nothing to do with u(x,y).
Answers: all of these PDEs are second order, and are linear. All are also homogeneous except
for the fourth one, the Poisson equation, as the f(x,y) term on the right hand side doesn’t involve
u or any of its derivatives.
The reason for defining the classifications linear and homogeneous for PDEs is to bring up the
principle of superposition. This excellent principle (which also shows up in the study of linear
homogeneous ODEs) is useful exactly whenever one considers solutions to linear homogeneous
PDEs. The idea is that if one has two functions, u₁ and u₂, that satisfy a linear homogeneous differential equation, then since taking the derivative of a sum of functions is the same as taking the sum of their derivatives, as long as the highest powers of the derivatives involved in the equation are one (i.e., that it's linear), and each term contains the function or one of its derivatives (i.e., that it's homogeneous), it is a straightforward exercise to see that the sum u₁ + u₂ will also be a solution to the differential equation. In fact, so will any linear combination a u₁ + b u₂, where a and b are constants.
For instance, any two particular solutions of a linear homogeneous PDE can be combined in this way, and the result is again a solution.
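The superposition principle is easy to check symbolically. The sketch below (sympy; the two harmonic functions are illustrative choices of mine, not taken from the text) verifies that a linear combination of two solutions of the Laplace equation is again a solution:

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b')
lap = lambda u: sp.diff(u, x, 2) + sp.diff(u, y, 2)   # 2-D Laplacian
u1 = x**2 - y**2              # a harmonic polynomial
u2 = sp.exp(x)*sp.sin(y)      # another harmonic function
assert sp.simplify(lap(u1)) == 0
assert sp.simplify(lap(u2)) == 0
# Superposition: any linear combination a*u1 + b*u2 is also a solution
assert sp.simplify(lap(a*u1 + b*u2)) == 0
```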
A solution or integral of a partial differential equation is a relation connecting the dependent and
the independent variables which satisfies the given differential equation. A partial differential
equation can result both from elimination of arbitrary constants and from elimination of arbitrary
functions. But, there is a basic difference in the two forms of solutions. A solution containing as
many arbitrary constants as there are independent variables is called a complete integral. Here,
the partial differential equations contain only two independent variables so that the complete
integral will include two constants. A solution obtained by giving particular values to the
arbitrary constants in a complete integral is called a particular integral.
Singular Integral
Let f (x,y,z,p,q) = 0 (1)
be the partial differential equation whose complete integral is
φ(x, y, z, a, b) = 0 (2)
Differentiating (2) partially w.r.t. a and b, we get
∂φ/∂a = 0 (3)
and ∂φ/∂b = 0 (4)
The eliminant of ‘a’ and ‘b’ from the equations (2), (3) and (4), when it exists, is called the
singular integral of (1).
General Integral
In the complete integral (2), put b = F(a); we get
φ(x, y, z, a, F(a)) = 0 (5)
Differentiating (5) partially w.r.t. a, we get
∂φ/∂a + ∂φ/∂b F'(a) = 0 (6)
The eliminant of 'a' between (5) and (6), if it exists, is called the general integral of (1).
f(x,y,z, p,q) = 0,
The last equation being absurd, the singular integral does not exist in this case.
Example 8
Solve pq = 2
The complete integral is z = ax + (2/a)y + c (1)
Differentiating (1) partially w.r.t. c, we get 0 = 1, which is absurd; hence there is no singular integral.
To find the general integral, put c = φ(a), so that z = ax + (2/a)y + φ(a).
Differentiating partially w.r.t. a, 0 = x - (2/a²)y + φ'(a).
Eliminating 'a' between these two equations gives the general integral.
Example 9
Solve pq + p + q = 0
This is of the form f(p, q) = 0, so put z = ax + by + c with ab + a + b = 0.
Solving, we get b = -a/(1 + a), so the complete integral is
z = ax - [a/(1 + a)]y + c (1)
Differentiating (1) partially w.r.t. c, we get 0 = 1.
The above equation being absurd, there is no singular integral for the given partial differential equation.
To find the general integral, put c = φ(a) in (1), we get
z = ax - [a/(1 + a)]y + φ(a) (2)
Differentiating (2) partially w.r.t. a, 0 = x - y/(1 + a)² + φ'(a) (3)
Eliminating 'a' between (2) and (3) gives the general integral.
Example 10
Solve p² + q² = npq
Put z = ax + by + c, so that a² + b² = nab.
Solving, we get b = a(n ± √(n² - 4))/2, and the complete integral is
z = ax + a[(n ± √(n² - 4))/2]y + c (1)
Differentiating (1) partially w.r.t. c, we get 0 = 1, which is absurd. Therefore, there is no singular integral for the given equation.
To find the general integral, put c = φ(a); we get
z = ax + a[(n ± √(n² - 4))/2]y + φ(a)
Differentiating partially w.r.t. 'a', 0 = x + [(n ± √(n² - 4))/2]y + φ'(a).
The eliminant of 'a' between these equations gives the general integral.
Standard II : Equations of the form f (x,p,q) = 0, f (y,p,q) = 0 and f (z,p,q) = 0. i.e, one of the
variables x,y,z occurs explicitly.
Since z = f(x, y), we have dz = p dx + q dy.
For an equation of the form f(x, p, q) = 0, assume that q = a, solve the given equation for p,
i.e., p = φ(x, a), and integrate dz = φ(x, a) dx + a dy.
Example 11
Solve q = xp + p2
Given q = xp + p2 …(1)
This is of the form f (x,p,q) = 0.
Put q = a in (1), we get
a = xp + p2
i.e, p2 + xp –a = 0.
Therefore, p = [-x ± √(x² + 4a)]/2
Since dz = p dx + q dy, dz = ½[-x ± √(x² + 4a)] dx + a dy
Integrating, z = -x²/4 ± ¼[x√(x² + 4a) + 4a sinh⁻¹(x/(2√a))] + ay + b
Thus, this is the complete integral.
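The complete integral can be verified without carrying out the integration by hand; a sympy sketch (not part of the original notes) builds z from the positive root of p² + xp - a = 0 and checks the PDE:

```python
import sympy as sp

x, y = sp.symbols('x y')
a, b = sp.symbols('a b', positive=True)
p = (-x + sp.sqrt(x**2 + 4*a))/2      # the root of p^2 + x p - a = 0
z = sp.integrate(p, x) + a*y + b      # from dz = p dx + q dy with q = a
p_chk, q_chk = sp.diff(z, x), sp.diff(z, y)
# z satisfies the original equation q = x p + p^2
assert sp.simplify(q_chk - (x*p_chk + p_chk**2)) == 0
```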
Example 12
Solve q = yp2
This is of the form f (y,p,q) = 0
Then, put p = a.
Therefore, the given equation becomes q = a²y.
Since dz = p dx + q dy, we have
dz = a dx + a²y dy
Integrating, we get z = ax + a²y²/2 + b, which is the complete integral.
Example 13
Solve 9 (p2z + q2) = 4
This is of the form f (z,p,q) = 0
Then, putting q = ap, the given equation becomes
9(p²z + a²p²) = 4
Therefore, p = 2/[3√(z + a²)] and q = 2a/[3√(z + a²)]
Since dz = p dx + q dy, dz = 2/[3√(z + a²)] dx + 2a/[3√(z + a²)] dy
Multiplying both sides by (3/2)√(z + a²), we get
(3/2)√(z + a²) dz = dx + a dy
Integrating, (z + a²)^(3/2) = x + ay + b, which is the complete integral.
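Solving the complete integral for z, the result can be checked against the original equation symbolically (a sympy sketch, not part of the original notes):

```python
import sympy as sp

x, y = sp.symbols('x y')
a, b = sp.symbols('a b', positive=True)
# Complete integral (z + a^2)^(3/2) = x + a y + b, solved for z
z = (x + a*y + b)**sp.Rational(2, 3) - a**2
p, q = sp.diff(z, x), sp.diff(z, y)
assert sp.simplify(9*(p**2*z + q**2) - 4) == 0
```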
Standard III : f1(x,p) = f2 (y,q). ie, equations in which ‘z’ is absent and the variables are
separable.
Hence, writing the equation as f₁(x, p) = f₂(y, q) = a, solve for p = F₁(x, a) and q = F₂(y, a).
Therefore, dz = F₁(x, a) dx + F₂(y, a) dy, which on integration gives the complete integral.
Example 14
Solve pq = xy
The given equation can be written as p/x = y/q = a (say), so that p = ax and q = y/a.
Since dz = p dx + q dy, we have dz = ax dx + (y/a) dy.
Integrating, we get z = ax²/2 + y²/(2a) + b, which is the complete integral.
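A quick symbolic check of the complete integral (sympy sketch, not part of the original notes):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b')
z = a*x**2/2 + y**2/(2*a) + b     # complete integral with p = a x, q = y/a
p, q = sp.diff(z, x), sp.diff(z, y)
assert sp.simplify(p*q - x*y) == 0   # pq = xy
```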
Example 15
Solve
The given equation can be written as (say)
implies
and Implies
But dz = pdx + qdy
i.e., dx+ dy
Integrating, we get
Example 16
Solve z = px + qy + pq
This is of Clairaut's form, so the complete integral is
z = ax + by + ab (1)
To find the singular integral, differentiating (1) partially w.r.t. a and b, we get
0 = x + b (2)
0 = y + a (3)
Therefore b = -x and a = -y; substituting in (1), we get the singular integral z = -xy.
Putting b = φ(a) in (1) and eliminating 'a' gives the general integral.
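Both the complete integral and the singular integral z = -xy can be verified against the Clairaut equation (a sympy sketch, not part of the original notes):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b')
z_complete = a*x + b*y + a*b    # complete integral
z_singular = -x*y               # singular integral (a = -y, b = -x)
for z in (z_complete, z_singular):
    p, q = sp.diff(z, x), sp.diff(z, y)
    assert sp.simplify(z - (p*x + q*y + p*q)) == 0
```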
Example 17
Find the complete and singular solutions of
Therefore,
(2)
and
(3)
Now,
i.e., =
Therefore,
= (4)
and
Hence, and
Exercises
Sometimes, it is possible to have non –linear partial differential equations of the first
order which do not belong to any of the four standard forms discussed earlier. By changing the
variables suitably, we will reduce them into any one of the four standard forms.
Type (i) : Equations of the form F(xm p, ynq) = 0 (or) F (z, xmp, ynq) = 0.
Case (i): If m ≠ 1 and n ≠ 1, put X = x^(1-m) and Y = y^(1-n).
Now, with P = ∂z/∂X we get x^m p = (1 - m)P.
Similarly, y^n q = (1 - n)Q.
Hence, the given equation takes the form F(P, Q) = 0 (or) F(z, P, Q) = 0.
Case (ii): If m = 1 and n = 1, put X = log x and Y = log y.
Now, P = ∂z/∂X = x ∂z/∂x, therefore xp = P; similarly, yq = Q.
Example 18
Here m = 2, n = 2
Hence, and
Since , we have
i.e.,
Integrating, we get
Therefore, which is the complete solution.
Example 19
Solve (xp)² + (yq)² = z²
Here m = 1, n = 1. Hence put X = log x and Y = log y, so that xp = P and yq = Q.
The equation becomes P² + Q² = z², which is of the form f(z, P, Q) = 0.
Putting Q = aP, we get P²(1 + a²) = z², i.e., P = z/√(1 + a²).
Since dz = P dX + Q dY, √(1 + a²) dz/z = dX + a dY
Integrating, we get
√(1 + a²) log z = X + aY + b, i.e., √(1 + a²) log z = log x + a log y + b, which is the complete solution.
Type (ii) : Equations of the form F(zkp, zkq) = 0 (or) F(x, zkp) = G(y,zkq).
Case (i): If k ≠ -1, put Z = z^(k+1).
Now ∂Z/∂x = (k + 1)z^k p; similarly, ∂Z/∂y = (k + 1)z^k q.
Case (ii): If k = -1, put Z = log z. Now ∂Z/∂x = p/z; similarly, ∂Z/∂y = q/z.
Example 20
(z²q)² - (z²p) = 1
Here k = 2. Putting Z = z³, we get
∂Z/∂x = 3z²p and ∂Z/∂y = 3z²q
i.e., z²p = (1/3)∂Z/∂x and z²q = (1/3)∂Z/∂y
hence the given equation reduces to
(1/9)(∂Z/∂y)² - (1/3)(∂Z/∂x) = 1, i.e., (∂Z/∂y)² - 3(∂Z/∂x) = 9.
This is of the form F(P, Q) = 0, so put Z = ax + by + c with b² - 3a = 9.
Solving for a, a = (b² - 9)/3, and the complete solution is
z³ = [(b² - 9)/3]x + by + c
…(2)
Comparing (1) and (2), we have
…(3)
Similarly, ...(4)
By cross-multiplication, we have
or
…(5)
Equations (5) represent a pair of simultaneous equations of the first order and first degree. Therefore, the two solutions of (5) are u = a and v = b. Thus, Φ(u, v) = 0 is the required solution of (1).
Note :
To solve Lagrange's equation, we have to form the subsidiary or auxiliary equations
dx/P = dy/Q = dz/R
which can be solved either by the method of grouping or by the method of multipliers.
Example 21
Example 22
Solve p tan x + q tan y = tan z
The subsidiary equations are
dx/tan x = dy/tan y = dz/tan z
Taking the first two ratios, cot x dx = cot y dy
Integrating, log sin x = log sin y + log c₁
i.e., sin x/sin y = c₁
Similarly, from the last two ratios, we get
sin y/sin z = c₂
Therefore, the general solution is Φ(sin x/sin y, sin y/sin z) = 0, where Φ is arbitrary.
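Each first integral u(x, y, z) = c of the subsidiary equations must satisfy P ∂u/∂x + Q ∂u/∂y + R ∂u/∂z = 0; a sympy sketch (not part of the original notes) checks both integrals found above:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
P, Q, R = sp.tan(x), sp.tan(y), sp.tan(z)
u1 = sp.sin(x)/sp.sin(y)          # from the first two ratios
u2 = sp.sin(y)/sp.sin(z)          # from the last two ratios
for u in (u1, u2):
    assert sp.simplify(P*sp.diff(u, x) + Q*sp.diff(u, y) + R*sp.diff(u, z)) == 0
```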
Example 23
Solve (y-z) p + (z-x) q = x-y
Here the subsidiary equations are
dx/(y - z) = dy/(z - x) = dz/(x - y)
Using multipliers 1, 1, 1, each ratio = (dx + dy + dz)/0, so that dx + dy + dz = 0.
Integrating, x + y + z = c₁ …(1)
Again using multipliers x, y and z,
each ratio = (x dx + y dy + z dz)/0, so that x dx + y dy + z dz = 0.
Integrating, x² + y² + z² = c₂ …(2)
Hence from (1) and (2), the general solution is Φ(x + y + z, x² + y² + z²) = 0.
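Both first integrals can be checked symbolically (sympy sketch, not part of the original notes):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
P, Q, R = y - z, z - x, x - y
# u = c is a first integral iff P u_x + Q u_y + R u_z = 0
for u in (x + y + z, x**2 + y**2 + z**2):
    assert sp.simplify(P*sp.diff(u, x) + Q*sp.diff(u, y) + R*sp.diff(u, z)) == 0
```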
Example 24
Find the general solution of (mz - ny) p + (nx- lz)q = ly - mx.
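The working for this example is not reproduced above. Using the multipliers l, m, n and then x, y, z, the two standard first integrals are lx + my + nz = c₁ and x² + y² + z² = c₂, giving the general solution F(lx + my + nz, x² + y² + z²) = 0; a sympy sketch (not part of the original notes) confirms them:

```python
import sympy as sp

x, y, z, l, m, n = sp.symbols('x y z l m n')
P, Q, R = m*z - n*y, n*x - l*z, l*y - m*x
for u in (l*x + m*y + n*z, x**2 + y**2 + z**2):
    assert sp.simplify(P*sp.diff(u, x) + Q*sp.diff(u, y) + R*sp.diff(u, z)) == 0
```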
Exercises
1. px2 + qy2 = z2
2. pyz + qzx = xy
3. xp –yq = y2 –x2
4. y2zp + x2zq = y2x
5. z (x –y) = px2 –qy2
6. (a –x) p + (b –y) q = c –z
7. (y2z p) /x + xzq = y2
8. (y2 + z2) p –xyq + xz = 0
9. x2p + y2q = (x + y) z
10. p –q = log (x+y)
11. (xz + yz)p + (xz –yz)q = x2 + y2
12. (y –z)p –(2x + y)q = 2x + z
SOLUTIONS OF PARTIAL DIFFERENTIAL EQUATIONS OF FIRST-ORDER
Solution of Partial Differential Equations of first-order with constant coefficients.
The most general form of a linear partial differential equation of first order with constant coefficients is
A ux + B uy + K u = f(x, y)
where A, B and K are constants.
Let u(x, y) be a solution. Then
du = ux dx + uy dy
From the above we get the auxiliary system of equations (comparing coefficients of ux, uy and the remaining terms)
dx/A = dy/B = du/(f(x, y) - K u)
If B = 0, the equation is reduced to an ordinary differential equation with u as the dependent variable and x as the independent variable, namely
A ux + K u = f(x, y), or ux + (K/A) u = f(x, y)/A
The integrating factor of this differential equation is e^(Kx/A). Making a change of variable v = u e^(Kx/A), the problem takes the form
A vx + B vy = f(x, y) e^(Kx/A) = g(x, y)
Similarly, the substitution v = u e^(Ky/B) leads to A vx + B vy = f(x, y) e^(Ky/B). Thus, we need to consider only the formally reduced form
A ux + B uy = f(x, y)
As an example, consider the equation 4 ux + uy = x²y.
The auxiliary system of equations for this equation is
dx/4 = dy/1 = du/(x²y)
The solution of dx/4 = dy/1 is
x - 4y = c, which gives
y = (x - c)/4
Substituting this value in dx/4 = du/(x²y),
we get x²(x - c)/4 dx = 4 du,
or (x³ - cx²) dx = 16 du
Integrating both sides we get
u = c₁ + x⁴/64 - cx³/48
= f(c) + x⁴/64 - cx³/48
After replacing c by x - 4y, we get the general solution
u = f(x - 4y) + x⁴/64 - (x - 4y)x³/48
= f(x - 4y) - x⁴/192 + x³y/12
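Assuming the example equation is 4 ux + uy = x²y (reconstructed here from the characteristic x - 4y and the final answer), the general solution can be verified symbolically for an arbitrary function f (a sympy sketch, not part of the original notes):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')
# General solution u = f(x - 4y) + x^3 y/12 - x^4/192, f arbitrary
u = f(x - 4*y) + x**3*y/12 - x**4/192
assert sp.simplify(4*sp.diff(u, x) + sp.diff(u, y) - x**2*y) == 0
```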
Lagrange's Method
The general form of first-order linear partial differential equations with variable
coefficients is
P(x,y)ux+Q(x,y)uy+f(x,y)u=R(x,y) (1)
We can eliminate the term in u from (1) by substituting u = v e^(-φ(x,y)), where φ(x, y) satisfies the equation
P(x, y) φx(x, y) + Q(x, y) φy(x, y) = f(x, y)
Hence, equation (1) is reduced to
P(x, y) ux + Q(x, y) uy = R(x, y) (2)
where the dependent variable and R in (2) are not the same as in (1). The following theorem provides a method for solving (2), often called Lagrange's Method.
Theorem 1 The general solution of the linear partial differential equation of first order
Pp + Qq = R (3)
where p = ∂u/∂x, q = ∂u/∂y, and P, Q and R are functions of x, y and u,
is F(φ, ψ) = 0 (4)
where F is an arbitrary function and φ(x, y, u) = c₁ and ψ(x, y, u) = c₂ form a solution of the auxiliary system of equations
dx/P = dy/Q = du/R (5)
Proof: Let φ(x, y, u) = c₁ and ψ(x, y, u) = c₂ satisfy (5); then the equations
φx dx + φy dy + φu du = 0
and
ψx dx + ψy dy + ψu du = 0 (6)
are compatible with (5), so that
P φx + Q φy + R φu = 0 and P ψx + Q ψy + R ψu = 0 (7)
Differentiating F(φ, ψ) = 0 w.r.t. x and y and substituting from equations (6) and (7), we see that F(φ, ψ) = 0 is a general solution of (3). The solution can also be written as
φ = g(ψ) or ψ = h(φ),
where g and h are arbitrary functions.
Example 7 Find the general solution of the partial differential equation y2up + x2uq = y2x
Solution: The auxiliary system of equations is
dx/(y²u) = dy/(x²u) = du/(y²x)
Taking the first two members we have x² dx = y² dy, which on integration gives x³ - y³ = c₁.
Again taking the first and third members,
we have x dx = u du,
which on integration gives x² - u² = c₂.
Hence, the general solution is
F(x³ - y³, x² - u²) = 0
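Both first integrals satisfy P φx + Q φy + R φu = 0, which can be checked symbolically (sympy sketch, not part of the original notes):

```python
import sympy as sp

x, y, u = sp.symbols('x y u')
P, Q, R = y**2*u, x**2*u, y**2*x
for phi in (x**3 - y**3, x**2 - u**2):
    assert sp.simplify(P*sp.diff(phi, x) + Q*sp.diff(phi, y) + R*sp.diff(phi, u)) == 0
```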
Charpit's Method for solving nonlinear Partial Differential Equation of First-Order
We present here a general method for solving non-linear partial differential equations.
This is known as Charpit's method.
Let
F(x, y, u, p, q) = 0 (8)
be a general non linear partial differential equation of first-order. Since u depends on x
and y, we have
du=uxdx+uydy = pdx+qdy (9)
where p = ux = ∂u/∂x and q = uy = ∂u/∂y.
If we can find another relation between x,y,u,p,q such that
f(x,y,u,p,q)=0 (10)
then we can solve (10) and (9) for p and q and substitute them in equation (8). This will give the
solution provided (9) is integrable.
To determine f, differentiate (8) and (10) w.r.t. x and y, so that
Fx + p Fu + Fp px + Fq qx = 0 (11)
fx + p fu + fp px + fq qx = 0 (12)
Fy + q Fu + Fp py + Fq qy = 0 (13)
fy + q fu + fp py + fq qy = 0 (14)
Eliminating px from equations (11) and (12), and py from equations (13) and (14), we obtain two relations.
Adding these two equations and using qx = py, we get
Fp fx + Fq fy + (p Fp + q Fq) fu - (Fx + p Fu) fp - (Fy + q Fu) fq = 0 (15)
Following the arguments in the proof of Theorem 1, we get the auxiliary system of equations
dx/Fp = dy/Fq = du/(p Fp + q Fq) = dp/[-(Fx + p Fu)] = dq/[-(Fy + q Fu)] (16)
Any integral of (16) involving p or q can be taken as the required second relation (10).
Example 8 Solve p²x + q²y = u.
Solution: Here F = p²x + q²y - u = 0, so Fp = 2px, Fq = 2qy, Fx = p², Fy = q², Fu = -1. Writing the auxiliary system (16) for this F, from the first and 4th expressions, we get
dx = 2px dt, dp = (p - p²) dt,
and from the second and 5th expressions,
dy = 2qy dt, dq = (q - q²) dt.
Using these values of dx, dy, dp and dq, we get
(p² dx + 2px dp)/(2p²x) = dt = (q² dy + 2qy dq)/(2q²y)
or d(p²x)/(p²x) = d(q²y)/(q²y)
Taking integrals of all terms we get
ln|x| + 2 ln|p| = ln|y| + 2 ln|q| + ln c
or ln(xp²) = ln(cyq²)
or p²x = cq²y, where c is an arbitrary constant.
Solving p²x = cq²y and the given equation for p and q, we get cq²y + q²y - u = 0,
(c + 1)q²y = u,
q = √(u/[(c + 1)y]),
p = √(cu/[(c + 1)x]).
Then du = p dx + q dy takes the following form in this case:
du = √(cu/[(c + 1)x]) dx + √(u/[(c + 1)y]) dy
or √((c + 1)/u) du = √(c/x) dx + dy/√y
Integrating, 2√((c + 1)u) = 2√(cx) + 2√y + constant, i.e., √((1 + c)u) = √(cx) + √y + b, which is a complete integral.
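The complete integral obtained by Charpit's method can be checked by solving it for u and substituting into the original equation (a sympy sketch, assuming the equation p²x + q²y = u as read above; not part of the original notes):

```python
import sympy as sp

x, y = sp.symbols('x y')
b, c = sp.symbols('b c', positive=True)
# Complete integral sqrt((1+c) u) = sqrt(c x) + sqrt(y) + b, solved for u
u = (sp.sqrt(c*x) + sp.sqrt(y) + b)**2/(1 + c)
p, q = sp.diff(u, x), sp.diff(u, y)
assert sp.simplify(p**2*x + q**2*y - u) == 0
```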
(iii) Equations of the type F(p, q) = 0 (17)
For such an equation the Charpit auxiliary system gives dp = 0 and dq = 0. It is clear that p = c is a solution of these equations. Putting this value of p in (17), we have
F(c, q) = 0 (18)
so that q = G(c), where c is a constant.
Then observing that
du = c dx + G(c) dy,
we get the solution u = cx + G(c) y + c₁,
where c₁ is another constant.
Example 9 Solve p2+q2=1
Solution: The auxiliary system of equations is
dx/(2p) = dy/(2q) = du/(2p² + 2q²) = dp/0 = dq/0
Using dp = 0, we get p = c and q = √(1 - c²), and these two combined with du = p dx + q dy yield
u = cx + √(1 - c²) y + c₁, which is a complete solution.
Alternatively, using du/2 = dx/(2p), we get du = dx/p, where p = c.
Also du = dy/q, where q = √(1 - c²).
Integrating these equations we get cu = x + cc₁ and √(1 - c²) u = y + c₂.
Replacing cc₁ and c₂ by -α and -β respectively, and eliminating c, we get
u² = (x - α)² + (y - β)²
This is another complete solution.
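Both complete solutions of p² + q² = 1 can be verified symbolically (sympy sketch, not part of the original notes):

```python
import sympy as sp

x, y, c, c1, al, be = sp.symbols('x y c c1 alpha beta')
u1 = c*x + sp.sqrt(1 - c**2)*y + c1          # first complete solution
u2 = sp.sqrt((x - al)**2 + (y - be)**2)      # second complete solution
for u in (u1, u2):
    assert sp.simplify(sp.diff(u, x)**2 + sp.diff(u, y)**2 - 1) == 0
```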
Working Rules of Charpit’s Method for Solving Non-Linear Partial Differential
Equations of Order One with Two Independent Variables
The following steps are required while using Charpit’s method for
solving non-linear partial differential equation of order one:
Step 1. Transfer all the terms of given PDE to L.H.S. and denote the entire
expression in L.H.S. by f(x, y, z, p, q).
Step 2. Write down the Charpit’s auxiliary equations.
Step 3. Find the values of ∂f/∂x, ∂f/∂y, ∂f/∂z, ∂f/∂p and ∂f/∂q occurring in Charpit's auxiliary
equations. Put them in the Charpit auxiliary equations and simplify.
Step 4. Choose two proper fractions from Charpit’s auxiliary equations such
that the resulting integral may come out as simplest relation involving
at least one of p or q or both.
Step 5. The simplest relation of step 4 is solved along with given partial
differential equation to find p and q. Put these values of p and q in dz
= pdx + qdy which on integration gives the complete integral of the
given partial differential equation.
The singular and general integrals may be obtained in the usual manner.
(iv) Equations of the type
f(x, p) = g(y, q)
Since the left side depends only on x and p and the right side only on y and q, each of these functions must be a constant, that is,
f(x, p) = g(y, q) = C
Solving for p and q, and using du = p dx + q dy, we can obtain the solution.
Example 11 Solve p2(1-x2)-q2(4-y2) = 0
Solution: Let p²(1 - x²) = q²(4 - y²) = a²
This gives p = a/√(1 - x²) and q = a/√(4 - y²)
(neglecting the negative sign).
Substituting in du = p dx + q dy we have
du = [a/√(1 - x²)] dx + [a/√(4 - y²)] dy
Integration gives u = a[sin⁻¹x + sin⁻¹(y/2)] + c,
which is the required complete solution.
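A symbolic check of this complete solution (sympy sketch, not part of the original notes):

```python
import sympy as sp

x, y, a, c = sp.symbols('x y a c')
u = a*(sp.asin(x) + sp.asin(y/2)) + c
p, q = sp.diff(u, x), sp.diff(u, y)
assert sp.simplify(p**2*(1 - x**2) - q**2*(4 - y**2)) == 0
```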
Homogeneous Equations
Let Dx = ∂/∂x and Dy = ∂/∂y.
We are looking at solving equations of the type
(Dx² + k₁ DxDy + k₂ Dy²) u = 0 (20)
where k₁ and k₂ are constants.
If m₁ and m₂ are the roots of m² + k₁m + k₂ = 0, (20) can be written as
(Dx - m₁Dy)(Dx - m₂Dy) u = 0
For the factor (Dx - m₁Dy)u = 0, the auxiliary system is dx/1 = dy/(-m₁) = du/0.
This gives -m₁dx = dy, or m₁x + y = c₁, and u = c₂, and so u = φ(y + m₁x) is a solution of (20).
Therefore u = φ₁(y + m₂x) + φ₂(y + m₁x) is the complete solution of (20).
If the roots are equal (m₁ = m₂) then equation (20) is equivalent to
(Dx - m₁Dy)² u = 0
Putting (Dx - m₁Dy) u = z, we get
(Dx - m₁Dy) z = 0, which gives
z = φ(y + m₁x)
Substituting z in (Dx - m₁Dy) u = z gives
(Dx - m₁Dy) u = φ(y + m₁x)
or p - m₁q = φ(y + m₁x)
Its auxiliary system of equations is
f(Dx,Dy) (26)
(27)
= (28)
(29)
(30)
f(Dx, Dy) [φ(x, y) e^(ax+by)] = e^(ax+by) f(Dx + a, Dy + b) φ(x, y)
(31)
= (32)
(34)
(ii)
Solution: (i) The equation can be written as f(Dx, Dy) u = e^(x-3y), so
u_p = [1/f(Dx, Dy)] e^(x-3y)
= [1/f(1, -3)] e^(x-3y), by (29).
(ii) The equation can be written as
(3Dx² - Dy) u = e^x sin(x + y)
u_p = [1/(3Dx² - Dy)] e^x sin(x + y)
= e^x [1/(3(Dx + 1)² - Dy)] sin(x + y)
= e^x [1/(6Dx - Dy)] sin(x + y), replacing Dx² by -1
= e^x [(6Dx + Dy)/(36Dx² - Dy²)] sin(x + y)
= e^x [7 cos(x + y)]/(-36 + 1)
= -(1/5) e^x cos(x + y).
Example 14 Solve the partial differential equation
∂²u/∂t² - c² ∂²u/∂x² = e^(-x) sin t
Solution: The equation can be written as
(Dt² - c²Dx²) u = e^(-x) sin t
The particular solution is
u_p = [1/(Dt² - c²Dx²)] e^(-x) sin t
= -e^(-x) sin t/(1 + c²)
By proceeding on the lines of the solution of Example 12 we get
uc = φ(x - ct) + ψ(x + ct)
u(x, t) = φ(x - ct) + ψ(x + ct) - e^(-x) sin t/(1 + c²)
The solution uc is known as d'Alembert's solution of the wave equation utt - c²uxx = 0.
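Assuming the equation is utt - c²uxx = e^(-x) sin t as read above, the full solution (d'Alembert part plus particular solution) can be checked symbolically for arbitrary φ and ψ (a sympy sketch, not part of the original notes):

```python
import sympy as sp

x, t, c = sp.symbols('x t c')
f, g = sp.Function('f'), sp.Function('g')
# d'Alembert part plus the particular solution -e^(-x) sin t/(1 + c^2)
u = f(x - c*t) + g(x + c*t) - sp.exp(-x)*sp.sin(t)/(1 + c**2)
lhs = sp.diff(u, t, 2) - c**2*sp.diff(u, x, 2)
assert sp.simplify(lhs - sp.exp(-x)*sp.sin(t)) == 0
```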
Monge's Method for a special class of non linear Equations (quasi linear Equations) of the
Second order.
Let u(x, y) be a function of two variables x and y, and write
p = ∂u/∂x, q = ∂u/∂y, r = ∂²u/∂x², s = ∂²u/∂x∂y, t = ∂²u/∂y².
Monge's method provides a technique for solving a special class of partial differential
equation of second order of the type
F(x,y,u,p,q,r,s,t)=0 (35)
Monge's method consists in establishing one or two first integrals of the form
ξ = f(η) (36)
where ξ and η are known functions of x, y, u, p and q and the function f is arbitrary; that is, in finding relations of the type (36) from which equation (35) can be derived.
The following equations are obtained from (36) by partial differentiation:
ξx + ξu p + ξp r + ξq s = f'(η) {ηx + ηu p + ηp r + ηq s} (37)
ξy + ξu q + ξp s + ξq t = f'(η) {ηy + ηu q + ηp s + ηq t} (38)
It may be noted that not every equation of the type (35) has a first integral of the type (36). By eliminating f'(η) from equations (37) and (38), we find that any second order partial differential equation which possesses a first integral of the type (36) must be expressible in the form
R₁r + S₁s + T₁t + U₁(rt - s²) = V₁ (39)
where R₁, S₁, T₁, U₁ and V₁ are functions of x, y, u, p and q defined by the relations
R₁ = ξp(ηy + qηu) - ηp(ξy + qξu) (40)
S₁ = ηp(ξx + pξu) - ξp(ηx + pηu) + ξq(ηy + qηu) - ηq(ξy + qξu) (41)
T₁ = ηq(ξx + pξu) - ξq(ηx + pηu)
U₁ = ξpηq - ξqηp (42)
V₁ = (ηx + pηu)(ξy + qξu) - (ξx + pξu)(ηy + qηu)
The equation (39) reduces to the form
R₁r + S₁s + T₁t = V₁ (43)
if and only if the Jacobian U₁ = ξpηq - ξqηp = 0 identically. Equation (43) is a non-linear equation because the coefficients R₁, S₁, T₁, V₁ are functions of p and q as well as of x, y and u. In fact it is a quasi-linear equation. We explain here the method of finding solutions of equations of the type (43), namely
Rr + Ss + Tt = V (44)
for which a first integral of the form (36) exists. For any function u of x and y we have
the relations dp = r dx + s dy, dq = s dx + t dy (45)
Eliminating r and t from this pair of equations and equation (44), we see that any
solution of (44) must satisfy the relation
R dp dy + T dq dx - V dx dy = 0 (46)
whenever
R dy² + T dx² - S dx dy = 0 (47)
The method of finding solutions of (46) and (47) is explained through the following
example:
Example: 15
Solve the equation q²r - 2pqs + p²t = 0.
This equation is of the form (44) where
R = q², S = -2pq, T = p², V = 0.
Therefore (46) and (47) become respectively
q² dp dy + p² dq dx = 0 (48)
q² dy² + p² dx² + 2pq dx dy = (p dx + q dy)² = 0 (49)
By the equation du = p dx + q dy and (49) we get du = 0, which gives the integral u = c₁. From (48)
and (49) we have q dp = p dq, which has solution
p = c₂q. Thus, the first integral is
p = q f(u) (50)
where f(·) is arbitrary. We solve (50) by Lagrange's method. The auxiliary system of
equations (characteristic equations) is
dx/1 = dy/(-f(u)) = du/0
a₀ ∂ⁿz/∂xⁿ + a₁ ∂ⁿz/(∂xⁿ⁻¹∂y) + … + aₙ ∂ⁿz/∂yⁿ = F(x, y) (1)
where a₀, a₁, …, aₙ are constants and F is a function of 'x' and 'y'. It is homogeneous
because all its terms contain derivatives of the same order.
Equation (1) can be expressed as
(a₀Dⁿ + a₁Dⁿ⁻¹D′ + … + aₙD′ⁿ) z = F(x, y) (2),
where D = ∂/∂x and D′ = ∂/∂y.
As in the case of ordinary linear equations with constant coefficients the complete
solution of (1) consists of two parts, namely, the complementary function and the particular
integral.
The complementary function is the complete solution of f(D, D′)z = 0 ------- (3), which
must contain n arbitrary functions, where n is the degree of the polynomial f(D, D′). The particular
integral is the particular solution of equation (2).
Putting z = φ(y + mx) in (3) gives the auxiliary equation
a₀mⁿ + a₁mⁿ⁻¹ + … + aₙ = 0 (4)
Solving equation (4) for 'm', we get 'n' roots. Depending upon the nature of the roots, the
complementary function is written as given below:
If the roots m₁, m₂, …, mₙ are distinct, the complementary function is
z = f₁(y + m₁x) + f₂(y + m₂x) + … + fₙ(y + mₙx), where f₁, f₂, …, fₙ are arbitrary functions.
Note:
2. In the case of repeated factors, the equation (D - mD′ - c)ⁿz = 0 has the complete solution
z = e^(cx)[f₁(y + mx) + x f₂(y + mx) + … + xⁿ⁻¹fₙ(y + mx)].
SOLVING PDEs
Solving PDEs is considerably more difficult in general than solving ODEs, as the level of
complexity involved can be great. For instance the following seemingly completely unrelated
functions are all solutions to the two-dimensional Laplace equation:
(1) , and
You should check to see that these are all in fact solutions to the Laplace equation by doing the
same thing you would do for an ODE solution, namely, calculate uxx and uyy, substitute them
into the PDE and see if the equation is identically satisfied.
Now, there are certain types of PDEs for which finding the solutions is not too hard. For
instance, consider the first-order PDE
(2)
where u is assumed to be a two-variable function depending on x and y. How could you solve
this PDE? Think about it, is there any reason that we couldn’t just undo the partial derivative of
u with respect to x by integrating with respect to x? No, so try it out! Here, note that we are
given information about just one of the partial derivatives, so when we find a solution, there will
be an unknown factor that’s not necessarily just an arbitrary constant, but in fact is a completely
arbitrary function depending on y.
To solve (2), then, integrate both sides of the equation with respect to x, as mentioned. Thus
(3)
so that u is determined up to an additive term F. What is F? Note that it could be any function such that when one takes its partial derivative with respect to x, the result is 0. This means that in the case of PDEs, the arbitrary constants that we ran into during the course of solving ODEs now take the form of whole functions. Here F is in fact any function F(y) of y alone. To check that this is indeed a solution to the original PDE, it is easy enough to take the partial derivative of this function and see that it indeed satisfies the PDE in (2).
As another example, consider the second-order PDE
∂²u/∂x∂y = 0 (4)
where u is again a two-variable function depending on x and y. We can solve this PDE by integrating first with respect to x, to get to an intermediate PDE,
∂u/∂y = F(y) (5)
where F(y) is a function of y alone. Now, integrating both sides with respect to y yields
u(x, y) = H(y) + G(x) (6)
where H(y) = ∫F(y) dy and G(x) is a function of x alone (note that we could have integrated with respect to y first, then x, and we would have ended up with the same result). Thus, whereas in the ODE world general solutions typically end up with as many arbitrary constants as the order of the original ODE, here in the PDE world one typically ends up with as many arbitrary functions as the order of the PDE in the general solution.
To end up with a specific solution, then, we will need to be given extra conditions that indicate
what these arbitrary functions are. Thus the initial conditions for PDEs will typically involve
knowing whole functions, not just constant values. We will also see that the initial conditions
that appeared in specific ODE situations have slightly more involved analogs in the PDE world,
namely there are often so-called boundary conditions as well as initial conditions to take into
consideration.
(a) Heat equation in dimension 1: ∂u/∂t = k ∂²u/∂x², where u(x, t) denotes the temperature distribution and k the thermal diffusivity.
The equation, in its simplest form, goes back to the beginning of the 19th century. Besides
modeling temperature distribution it has been used to model the following physical phenomena.
(i) Diffusion of one material within another, smoke particles in air.
(ii) Chemical reactions, such as the Belousov-Zhabotinsky reaction which exhibits
fascinating wave structure.
(iii) Electrical activity in the membranes of living organisms, e.g. the Hodgkin-Huxley model.
(iv) Dispersion of populations; individuals move both randomly and to avoid
overcrowding.
(v) Pursuit and evasion in predator-prey systems
(vi) Pattern formation in animal coats, the formation of zebra stripes
(vii) Dispersion of pollutants in a running stream.
More recently it has been used in Financial Mathematics or Financial Engineering for determining appropriate prices of an option. We discuss this in Section 4.
(b) Wave equation in dimension 1 (R): ∂²u/∂t² = c² ∂²u/∂x²,
where u(x, t) represents the displacement, for example of a vibrating string from its equilibrium position, and c the wave speed.
This type of equation has been applied to model vibrating membranes, acoustic problems for the velocity potential for fluid flow through which sound can be transmitted, longitudinal vibrations of an elastic rod or beam, and both electric and magnetic fields in the absence of charge and dielectric.
(c) Laplace equation in R² (two dimensional): ∇²u = uxx + uyy = 0,
where ∇² = ∂²/∂x² + ∂²/∂y² denotes the Laplacian.
The equation is satisfied by the electrostatic potential in the absence of charges, by the
gravitational potential in the absence of mass, by the equilibrium displacement of a membrane
with a given displacement of its boundary, by the velocity potential for an inviscid,
incompressible, irrotational homogeneous fluid in the absence of sources and sinks, by the
temperature in steady-state heat flow in the
absence of sources and sinks, and in many other real world systems.
(d) Transport equation in R (one dimensional): ∂u/∂t + a(u) ∂u/∂x = 0,
where u(x, t) denotes the density of cars per unit kilometer of expressway at location x at time t and a(u) is a function of u, say the local velocity of traffic at location x at time t.
(f) Burgers' equation in one dimension: ∂u/∂t + u ∂u/∂x = 0.
This equation arises in the study of streams of particles or fluid flow with zero viscosity.
(g) Eikonal equation in R²: ux² + uy² = 1, which arises in geometrical optics.
Telegraph equation: utt + A ut + Bu = c² uxx,
where A and B are constants. This equation arises in the study of propagation of electrical signals in a cable transmission line. Both the current and the voltage V satisfy an equation of this type. This equation also arises in the propagation of pressure waves in the study of pulsatile blood flow in arteries, and in one-dimensional random motion of bugs along a hedge.
Schrödinger equation: (h²/2m)(uxx + uyy) + (E - V)u = 0,
where m is the mass of the particle whose wave function is u(x, y), h is the universal Planck's constant, V is the potential energy and E is a constant.
This equation arises in quantum mechanics.
If V = 0 then it reduces to the Helmholtz equation.
(m) Korteweg de Vries (KdV) equation in one dimension: ut + 6u ux + uxxx = 0, which arises in the study of shallow-water waves.
Maxwell's equations, where E and H denote the electric and the magnetic field, respectively: they form a system of six equations in six unknowns.
There exists vast literature concerning Schrödinger, Korteweg de Vries, Euler, Navier-
Stokes and Maxwell equations. A large part of technological advancement is based on these
equations. It is not an exaggeration to say that a systematic study of any branch of science and engineering is nothing but the study of one of these 16 equations, particularly the Heat, Wave, Laplace, Burgers, Telegraph, Schrödinger, Korteweg de Vries, Euler, Navier-Stokes and Maxwell equations.
2 Elements of Trigonometric Fourier Series for solution of Partial Differential Equations
In this section we discuss the Fourier series expansion of arbitrary, even and odd functions.
2.1 Fourier Series
DEFINITION 1 Fourier Coefficients and Series
Let f be a Riemann integrable function on [-l, l].
1. The numbers
a₀ = (1/2l) ∫ from -l to l of f(x) dx,
aₙ = (1/l) ∫ from -l to l of f(x) cos(nπx/l) dx
and
bₙ = (1/l) ∫ from -l to l of f(x) sin(nπx/l) dx
are the Fourier coefficients of f on [-l, l].
2. The series
a₀ + Σ from n=1 to ∞ of [aₙ cos(nπx/l) + bₙ sin(nπx/l)]
is the Fourier series of f on [-l, l].
In this example, the constant term and cosine coefficients are all zero, and the Fourier
series contains only sine terms.
Example 2
Let
f (x) =
Here l = 3 and the Fourier coefficients are
= [(-1)n-1]
and
=
The Fourier series of f on [-3,3] is
fe is an even function,
fe(-x) = fe(x),
and agrees with f on [0, l],
fe(x) = f(x) for 0 ≤ x ≤ l.
We call fe the even extension of f to [-l, l]. For example, if
f(x) = e^x for 0 ≤ x ≤ 2, then fe(x) = e^|x| for -2 ≤ x ≤ 2.
2. If f has a right derivative at 0, then the Fourier cosine series for f (x) on [0,l]
converges at 0 to f(0+).
3. If f has a left derivative at l, then the Fourier cosine series for f(x) on [0,l]
converges at l to f(l-).
Example 3
Let f(x) = e^x for 0 ≤ x ≤ 2. We will write the Fourier cosine series of f on [0, 2].
The Fourier coefficients are
an =
Then f₀ is an odd function, and f₀(x) = f(x) for 0 ≤ x ≤ l. This is the odd extension of f to [-l, l]. For example, if f(x) = e^(2x) for 0 ≤ x ≤ l, let f₀(x) = e^(2x) for 0 ≤ x ≤ l and f₀(x) = -e^(-2x) for -l ≤ x < 0.
Now write the Fourier series for f0(x) on [-l,l]. By equations (6) and (7), the Fourier
series of f0 is
(10)
with coefficients
(11)
We call the series (10) the Fourier sine series of f on [0,l]. The coefficients given by
equation (11) are the Fourier sine coefficients of f on [0,l]. As with cosine series, we do not need
to explicitly make the extension to f0 to write the Fourier sine series for f on [0,l].
Again, as with the cosine expansion, we can write a convergence theorem for sine series using
the convergence theorem for Fourier series.
Theorem 3 Convergence of Fourier Sine Series
Let f be piecewise continuous on [0, l]. Then
1. If 0 < x < l, and f has left and right derivatives at x, then the Fourier sine series for f(x) on [0, l] converges at x to ½[f(x+) + f(x-)].
2. At x = 0 and x = l, the Fourier sine series converges to zero.
For the example f(x) = e^(2x) with l = 1, the series converges to e^(2x) for 0 < x < 1, and to zero for x = 0 and for x = 1.
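For f(x) = e^(2x) on [0, 1], the sine coefficients are bn = 2∫₀¹ e^(2x) sin(nπx) dx. A sympy sketch (not part of the original notes; the closed form below is derived by integration by parts) computes the integral and checks it against that closed form:

```python
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', integer=True, positive=True)
# Fourier sine coefficient of e^(2x) on [0, 1]
bn = 2*sp.integrate(sp.exp(2*x)*sp.sin(n*sp.pi*x), (x, 0, 1))
# Closed form: 2 n pi (1 - (-1)^n e^2)/(4 + n^2 pi^2)
closed = 2*n*sp.pi*(1 - (-1)**n*sp.exp(2))/(4 + n**2*sp.pi**2)
assert sp.simplify(bn - closed) == 0
```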
3 Method of Separation of Variables for Solving partial Differential Equations
Method of separation of variables is a powerful method for solving partial differential
equations of the type
(12)
under certain situations.
The basic idea of this method is to transform a partial differential equation into as many
differential equations as the number of independent variables in the partial differential
equation by representing the solution as a product of functions of each independent variable.
After these ordinary differential equations are solved, the method reduces to solving
eigenvalue problems and constructing the general solution as an eigenfunction expansion,
where the coefficients are evaluated by using the boundary and initial conditions see Section 4
for further details.
Let u (x,y) = X(x) Y(y) (13) be a
solution of (12) then (12) may be written in the form
(14)
where f(Dx), g(Dy) are quadratic functions of Dx= and Dy = respectively. In this
situation we say that (12) is separable in the variables x, y. The derivation of a solution of the equation is straightforward, for the left-hand side of (14) is a function of x alone and the right-hand side is a function of y alone, and the two can be equal only if each is equal to a constant, say λ. The problem of finding solutions of the form (13) of (12) therefore reduces to solving the pair of second order linear ordinary differential equations
f(D)X = λX(x), g(D)Y = λY(y) (15)
and
3.1 Application to Heat Equation
Consider the one-dimensional heat equation ∂u/∂t = k ∂²u/∂x² and let u(x,t) = X(x)T(t).
Putting these values in the heat equation we get equation (16),
T'(t)/(k T(t)) = X"(x)/X(x) = λ. (16)
The pair of ordinary differential equations corresponding to (15) is
X" - λX = 0, T' - kλT = 0. (17)
Let λ = -n²; then by the method discussed in 2.1 we find that T(t) = K e^(-kn²t) is a general
solution of the second equation of (17), where K is a constant of integration which can be
determined by given initial and boundary conditions. The general solution of the first equation of
(17) is given in Section 6.7.
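As a quick sanity check of this separated solution, the sketch below verifies numerically that u(x,t) = e^(-kn²t) sin(nx), the product X(x)T(t) obtained from (17) with λ = -n², satisfies the heat equation. The values of k and n are illustrative choices, not from the text.

```python
import math

# Check that u(x,t) = e^(-k n^2 t) sin(n x) satisfies u_t = k u_xx
# by comparing finite-difference approximations of both sides.

k, n = 0.5, 3

def u(x, t):
    return math.exp(-k * n * n * t) * math.sin(n * x)

def u_t(x, t, h=1e-5):
    # central difference approximation of the time derivative
    return (u(x, t + h) - u(x, t - h)) / (2 * h)

def u_xx(x, t, h=1e-4):
    # second central difference approximation in x
    return (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / (h * h)

residual = abs(u_t(0.7, 0.3) - k * u_xx(0.7, 0.3))
```

The residual is zero up to finite-difference error, confirming that the product of the two one-variable solutions solves the original PDE.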
3.2 Application to Wave Equation
Now v(x,t) = T(x,t) sin θ(x,t) is the vertical component of the tension, so the last equation
becomes
[v(x + Δx, t) - v(x,t)]/Δx = ρ ∂²u/∂t² (x̄, t),
for some x̄ between x and x + Δx, where ρ is the mass per unit length.
In the limit as Δx → 0, we also have x̄ → x, and the last equation becomes
∂v/∂x = ρ ∂²u/∂t². (18)
The horizontal component of the tension is h(x,t) = T(x,t) cos θ(x,t), so
v(x,t) = h(x,t) tan θ(x,t) = h(x,t) ∂u/∂x.
Substitute this into equation (18) to get
∂/∂x (h ∂u/∂x) = ρ ∂²u/∂t². (19)
To compute the left side of this equation, use the fact that the net horizontal force on
the segment is zero, so
h(x + Δx, t) - h(x,t) = 0.
Thus h is independent of x, and equation (19) can be written
h ∂²u/∂x² = ρ ∂²u/∂t².
Letting c² = h/ρ, this equation is often written
∂²u/∂t² = c² ∂²u/∂x².
This is the one-dimensional (1-space dimension) wave equation.
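A quick numerical check, with arbitrary smooth profiles and an illustrative wave speed, that any function of the form u(x,t) = F(x - ct) + G(x + ct) satisfies this wave equation:

```python
import math

# Any superposition of a right-moving and a left-moving profile solves
# u_tt = c^2 u_xx. F, G, and c are arbitrary illustrative choices.

c = 2.0
F = lambda s: math.exp(-s * s)   # right-moving pulse
G = lambda s: math.sin(s)        # left-moving wave

def u(x, t):
    return F(x - c * t) + G(x + c * t)

def second_diff(g, s, h=1e-4):
    # second central difference approximation of g''(s)
    return (g(s + h) - 2 * g(s) + g(s - h)) / (h * h)

x0, t0 = 0.3, 0.5
u_tt = second_diff(lambda t: u(x0, t), t0)
u_xx = second_diff(lambda x: u(x, t0), x0)
residual = abs(u_tt - c * c * u_xx)
```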
In order to model the string's motion, we need more than just the wave equation. We
must also incorporate information about constraints on the ends of the string and about the initial
velocity and position of the string, which will obviously influence the motion.
If the ends of the string are fixed, then
u(0,t) = u(l,t) = 0 for t ≥ 0.
These are the boundary conditions.
The initial conditions specify the initial (at time zero) position
u(x,0) = f(x) for 0 ≤ x ≤ l
and the initial velocity
∂u/∂t(x,0) = g(x) for 0 < x < l,
in which f and g are given functions satisfying certain compatibility conditions. For example, if
the string is fixed at its ends, then the initial position function must reflect this by satisfying
f(0)=f(l)=0.
If the initial velocity is zero (the string is released from rest), then g(x)=0.
The wave equation, together with the boundary and initial conditions, constitute a
boundary value problem for the position function u(x,t) of the string. These provide enough
information to uniquely determine the solution u(x,t).
If there is an external force of magnitude F units of force per unit length acting on the
string in the vertical direction, then this derivation can be modified to obtain
∂²u/∂t² = c² ∂²u/∂x² + (1/ρ)F.
Again, the boundary value problem consists of this wave equation and the boundary and
initial conditions.
In 2-space dimensions the wave equation is
∂²u/∂t² = c² (∂²u/∂x² + ∂²u/∂y²). (20)
In polar coordinates, x = r cos θ and y = r sin θ. Computing ∂v/∂x and then ∂²v/∂x² by the
chain rule, and ∂²v/∂y² by a similar calculation, converts equation (20) to
∂²v/∂t² = c² (∂²v/∂r² + (1/r) ∂v/∂r + (1/r²) ∂²v/∂θ²), (21)
in which v(r,θ,t) is the vertical displacement of the membrane from the x, y plane at point
(r,θ) and time t.
Separation of Variables - Fourier Series Method for the Wave Equation
Consider an elastic string of length l, fastened at its ends on the x axis at x=0 and x=l.
The string is displaced, then released from rest to vibrate in the x,y plane. We want to find the
displacement function u (x,t), whose graph is a curve in the x,y plane showing the shape of the
string at time t. If we took a snapshot of the string at time t, we would see this curve.
The boundary value problem for the displacement function is
∂²u/∂t² = c² ∂²u/∂x² for 0 < x < l, t > 0,
u(0,t) = u(l,t) = 0 for t ≥ 0,
u(x,0) = f(x) for 0 ≤ x ≤ l,
∂u/∂t(x,0) = 0 for 0 ≤ x ≤ l.
The graph of f(x) is the position of the string before release.
The Fourier method, or separation of variables, consists of attempting a solution of
the form u(x,t) = X(x)T(t). Substitute this into the wave equation to obtain
XT" = c² X"T,
where T" = d²T/dt² and X" = d²X/dx². Then
X"/X = T"/(c²T).
The left side of this equation depends only on x, and the right only on t. Because x and t
are independent, we can choose any t0 we like and fix the right side of this equation at the
constant value T"(t0)/c²T(t0), while varying x on the left side. Therefore, X"/X must be constant
for all x in (0,l). But then T"/c²T must equal the same constant for all t > 0. Denote this constant
-λ. (The negative sign is customary and convenient, but we would arrive at the same final solution
if we used just λ.) λ is called the separation constant, and we now have
X"/X = T"/(c²T) = -λ.
Then
X" + λX = 0 and T" + λc²T = 0.
The wave equation has separated into two ordinary differential equations.
Now consider the boundary conditions. First,
u(0,t) = X(0)T(t) = 0
for t ≥ 0. If T(t) = 0 for all t ≥ 0, then u(x,t) = 0 for 0 ≤ x ≤ l and t ≥ 0. This is indeed the solution
if f(x) = 0, since in the absence of initial velocity or a driving force, and with zero displacement,
the string remains stationary for all time. However, if T(t) ≠ 0 for some time, then this boundary
condition can be satisfied only if
X(0) = 0.
Similarly, u(l,t) = X(l)T(t) = 0 for t ≥ 0 requires that
X(l) = 0.
We now have a boundary value problem for X:
X" + λX = 0; X(0) = X(l) = 0.
The values of λ for which this problem has nontrivial solutions are the eigenvalues of this
problem, and the corresponding nontrivial solutions for X are the eigenfunctions. We can solve
this regular Sturm-Liouville problem, obtaining the eigenvalues
λn = n²π²/l² for n = 1,2,...,
with eigenfunctions nonzero constant multiples of sin(nπx/l).
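The eigenvalue condition can be confirmed by a simple shooting computation: the solution of X" + λX = 0 with X(0) = 0 is proportional to sin(√λ x) for λ > 0, so the second boundary condition X(l) = 0 holds exactly when λ = n²π²/l². The length l below is an illustrative choice.

```python
import math

# Shooting check of the eigenvalue problem X'' + lambda X = 0,
# X(0) = X(l) = 0: evaluate X(l) for the solution sin(sqrt(lambda) x)
# and see where it vanishes.

l = 3.0

def endpoint_value(lam):
    # X(l) for the normalized solution with X(0) = 0 (lam > 0)
    return math.sin(math.sqrt(lam) * l)

eigenvalues = [(n * math.pi / l) ** 2 for n in (1, 2, 3)]
boundary_values = [endpoint_value(lam) for lam in eigenvalues]
```

At the eigenvalues the endpoint value vanishes (up to rounding), while a generic λ such as 1.7 leaves X(l) well away from zero.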
Each function un(x,t) = cn sin(nπx/l) cos(nπct/l) satisfies the wave equation and the
boundary conditions, but in general we cannot choose c1,...,cN to satisfy u(x,0) = f(x) with a
finite sum, since f(x) cannot usually be written as a finite sum of sine functions.
We are therefore led to attempt an infinite superposition
u(x,t) = Σ_{n=1}^∞ cn sin(nπx/l) cos(nπct/l).
We need u(x,0) = Σ_{n=1}^∞ cn sin(nπx/l) = f(x). We can do this! This series is the Fourier
sine expansion of f(x) on [0,l]. Thus choose the Fourier sine coefficients
cn = (2/l) ∫0^l f(ξ) sin(nπξ/l) dξ.
This strategy will work for any initial displacement function f that is continuous
with a piecewise continuous derivative on [0,l] and satisfies f(0) = f(l) = 0. These
conditions ensure that the Fourier sine series of f(x) on [0,l] converges to f(x) for 0 ≤ x ≤ l.
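The whole recipe can be sketched in code for an illustrative initial shape f(x) = x(l - x), which is continuous, has a continuous derivative, and satisfies f(0) = f(l) = 0. The values of l, c, and the truncation order N are assumptions for the demonstration, not values from the text.

```python
import math

# Zero initial velocity: compute the Fourier sine coefficients of f and sum
#   u(x,t) = sum_n c_n sin(n pi x / l) cos(n pi c t / l).

l, c, N = 2.0, 1.5, 40
f = lambda x: x * (l - x)

def coefficient(n, steps=2000):
    # c_n = (2/l) * integral_0^l f(s) sin(n pi s / l) ds (midpoint rule)
    h = l / steps
    return (2.0 / l) * sum(
        f((i + 0.5) * h) * math.sin(n * math.pi * (i + 0.5) * h / l) * h
        for i in range(steps)
    )

coeffs = [coefficient(n) for n in range(1, N + 1)]

def u(x, t):
    return sum(
        coeffs[n - 1] * math.sin(n * math.pi * x / l) * math.cos(n * math.pi * c * t / l)
        for n in range(1, N + 1)
    )

initial_error = abs(u(0.7, 0.0) - f(0.7))  # series reproduces f at t = 0
end_value = abs(u(l, 0.8))                 # fixed end stays at zero
```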
In specific instances, where f(x) is given, we can of course explicitly compute the
coefficients in this solution. For the initial position function (23) compute the
coefficients:
cn = (4l/(n²π²)) sin(nπ/2).
The solution for this initial displacement function, and zero initial velocity, is
u(x,t) = (4l/π²) Σ_{n=1}^∞ (1/n²) sin(nπ/2) sin(nπx/l) cos(nπct/l).
Since sin(nπ/2) = 0 if n is even, we can sum over just the odd integers. Further, if
n = 2k-1, then
sin(nπ/2) = sin((2k-1)π/2) = (-1)^(k+1).
Therefore,
u(x,t) = (4l/π²) Σ_{k=1}^∞ ((-1)^(k+1)/(2k-1)²) sin((2k-1)πx/l) cos((2k-1)πct/l). (25)
Vibrating String with Given Initial Velocity and Zero Initial Displacement
Now consider the case that the string is released from its horizontal position (zero
initial displacement) but with an initial velocity given at x by g(x). The boundary value
problem for the displacement function is
∂²u/∂t² = c² ∂²u/∂x² for 0 < x < l, t > 0,
u(0,t) = u(l,t) = 0 for t ≥ 0,
u(x,0) = 0 for 0 ≤ x ≤ l,
∂u/∂t(x,0) = g(x) for 0 < x < l.
We begin as before with separation of variables. Put u(x,t) = X(x) T(t). Since the
partial differential equation and boundary conditions are the same as before, we again
obtain
X" + λX = 0; X(0) = X(l) = 0,
with eigenvalues λn = n²π²/l²
and eigenfunctions constant multiples of
Xn(x) = sin(nπx/l).
Now, however, the problem for T is different and we have
u(x,0) = X(x)T(0) = 0,
so T(0)=0. The problem for T is
T" + λc²T = 0; T(0) = 0.
(In the case of zero initial velocity we had T'(0) = 0.) The general solution of the
differential equation for T is
T(t) = a cos(nπct/l) + b sin(nπct/l).
Since T(0) = a = 0, solutions for T(t) are constant multiples of sin(nπct/l). Thus, for
n = 1,2,..., we have functions
un(x,t) = cn sin(nπx/l) sin(nπct/l).
Each of these functions satisfies the wave equation, the boundary conditions, and
the zero initial displacement condition. To satisfy the initial velocity condition
∂u/∂t(x,0) = g(x), we generally must attempt a superposition
u(x,t) = Σ_{n=1}^∞ cn sin(nπx/l) sin(nπct/l).
Assuming that we can differentiate this series term-by-term, then
∂u/∂t(x,0) = Σ_{n=1}^∞ cn (nπc/l) sin(nπx/l) = g(x).
This is the Fourier sine expansion of g(x) on [0,l]. Choose the entire coefficient of
sin(nπx/l) to be the Fourier sine coefficient of g(x) on [0,l]:
cn (nπc/l) = (2/l) ∫0^l g(ξ) sin(nπξ/l) dξ,
or
cn = (2/(nπc)) ∫0^l g(ξ) sin(nπξ/l) dξ.
The solution is
u(x,t) = Σ_{n=1}^∞ (2/(nπc)) [∫0^l g(ξ) sin(nπξ/l) dξ] sin(nπx/l) sin(nπct/l). (26)
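The sketch below checks solution (26) numerically for an illustrative initial velocity g(x) = x(l - x): the truncated series should have zero displacement at t = 0, and its time derivative at t = 0 should reproduce g. The constants l, c, and the truncation order N are assumptions for the demo.

```python
import math

# Solution (26): c_n = (2/(n pi c)) * integral_0^l g(s) sin(n pi s / l) ds,
#   u(x,t) = sum_n c_n sin(n pi x / l) sin(n pi c t / l).

l, c, N = 1.0, 2.0, 40
g = lambda x: x * (l - x)

def coefficient(n, steps=2000):
    h = l / steps
    integral = sum(
        g((i + 0.5) * h) * math.sin(n * math.pi * (i + 0.5) * h / l) * h
        for i in range(steps)
    )
    return 2.0 * integral / (n * math.pi * c)

coeffs = [coefficient(n) for n in range(1, N + 1)]

def u(x, t):
    return sum(
        coeffs[n - 1] * math.sin(n * math.pi * x / l) * math.sin(n * math.pi * c * t / l)
        for n in range(1, N + 1)
    )

displacement0 = u(0.3, 0.0)                     # should be exactly 0
h = 1e-5
velocity0 = (u(0.3, h) - u(0.3, -h)) / (2 * h)  # should be near g(0.3)
```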
For example, suppose the string is released from its horizontal position with an initial velocity
given by g(x) = x(1 + cos(πx/l)). Compute
cn = (2/(nπc)) ∫0^l ξ(1 + cos(πξ/l)) sin(nπξ/l) dξ,
and insert these coefficients into (26) to obtain U(x,t).
The solution of the original problem is
u(x,t) = U(x,t) + Ax(l² – x²).
4.2 The Heat Equation with Boundary and Initial Conditions
We discuss here solutions of the heat equation by the separation of variables (Fourier
series) method under certain initial and boundary conditions.
Ends of the Bar Kept at Temperature Zero
Suppose we want the temperature distribution u(x,t) in a thin, homogeneous (constant
density) bar of length l, given that the initial temperature in the bar at time zero in the cross
section at x perpendicular to the x axis is f(x). The ends of the bar are maintained at temperature
zero for all time.
The boundary value problem modeling this temperature distribution is
∂u/∂t = k ∂²u/∂x² for 0 < x < l, t > 0,
u(0,t) = u(l,t) = 0 for t ≥ 0,
u(x,0) = f(x) for 0 ≤ x ≤ l.
We will use separation of variables. Substitute u(x,t)=X(x) T(t) into the heat equation to
get
XT' = kX"T
or
T'/(kT) = X"/X.
The left side depends only on time, and the right side only on position, and these
variables are independent. Therefore for some constant λ,
T'/(kT) = X"/X = -λ.
Now
u(0,t) =X(0) T(t)=0.
If T(t)=0 for all t, then the temperature function has the constant value zero, which occurs
if the initial temperature f(x) = 0 for 0 ≤ x ≤ l. Otherwise, T(t) cannot be identically zero, so we
must have X(0)=0. Similarly, u(l,t) =X(l)T(t)=0 implies that X(l)=0. The problem for X is
therefore
X" + λX = 0; X(0) = X(l) = 0.
We seek values of λ (the eigenvalues) for which this problem for X has nontrivial
solutions (the eigenfunctions).
This problem for X is exactly the same one encountered for the space-dependent function
in separating variables in the wave equation. There we found that the eigenvalues are
λn = n²π²/l² for n = 1,2,...,
and corresponding eigenfunctions are nonzero constant multiples of
Xn(x) = sin(nπx/l).
The problem for T becomes
T' + λnkT = 0,
which has general solution
Tn(t) = cn e^(-n²π²kt/l²).
For n=1,2,….., we now have functions
un(x,t) = cn sin(nπx/l) e^(-n²π²kt/l²),
which satisfy the heat equation on [0,l] and the boundary conditions u(0,t)=u(l,t)=0.
It remains to find a solution satisfying the initial condition. We can choose n and cn so that
un(x,0) = cn sin(nπx/l) = f(x)
only if the given initial temperature function is a multiple of this sine function. This need not be
the case. In general, we must attempt to construct a solution using the superposition
u(x,t) = Σ_{n=1}^∞ cn sin(nπx/l) e^(-n²π²kt/l²).
Now we need
u(x,0) = Σ_{n=1}^∞ cn sin(nπx/l) = f(x),
which we recognize as the Fourier sine expansion of f(x) on [0,l]. Thus choose
cn = (2/l) ∫0^l f(ξ) sin(nπξ/l) dξ.
With this choice of the coefficients, we have the solution for the temperature distribution
function:
u(x,t) = Σ_{n=1}^∞ [(2/l) ∫0^l f(ξ) sin(nπξ/l) dξ] sin(nπx/l) e^(-n²π²kt/l²). (28)
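Solution (28) is easy to test in code by choosing an initial temperature that is itself an eigenfunction, f(x) = sin(πx/l), so the series collapses to the single term sin(πx/l) e^(-π²kt/l²). The values of l and k are illustrative.

```python
import math

# Heat equation solution (28) for f(x) = sin(pi x / l): expect c_1 = 1
# and all other coefficients zero.

l, k, N = 1.0, 0.25, 10
f = lambda x: math.sin(math.pi * x / l)

def coefficient(n, steps=2000):
    # c_n = (2/l) * integral_0^l f(s) sin(n pi s / l) ds (midpoint rule)
    h = l / steps
    return (2.0 / l) * sum(
        f((i + 0.5) * h) * math.sin(n * math.pi * (i + 0.5) * h / l) * h
        for i in range(steps)
    )

coeffs = [coefficient(n) for n in range(1, N + 1)]

def u(x, t):
    return sum(
        coeffs[n - 1] * math.sin(n * math.pi * x / l)
        * math.exp(-n * n * math.pi * math.pi * k * t / (l * l))
        for n in range(1, N + 1)
    )
```

The orthogonality of the sine eigenfunctions makes every coefficient but the first vanish, so u(1/2, t) decays as e^(-π²kt).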
Temperature in a bar with Insulated Ends
Consider heat conduction in a bar with insulated ends, hence no energy loss across the
ends. If the initial temperature is f(x), the temperature function is modeled by the boundary value
problem
∂u/∂t = k ∂²u/∂x² for 0 < x < l, t > 0,
∂u/∂x(0,t) = ∂u/∂x(l,t) = 0 for t ≥ 0,
u(x,0) = f(x) for 0 ≤ x ≤ l.
Separation of variables proceeds as before, except that the insulation conditions now require
X'(0) = X'(l) = 0. This problem has eigenvalues λn = n²π²/l² for n = 0,1,2,..., with
eigenfunctions nonzero constant multiples of
Xn(x) = cos(nπx/l).
The equation for T is now
T' + λnkT = 0.
When n = 0, we get T0(t) = constant. For n = 1,2,...,
Tn(t) = cn e^(-n²π²kt/l²).
We now have functions
un(x,t) = cn cos(nπx/l) e^(-n²π²kt/l²)
for n = 0,1,2,..., each of which satisfies the heat equation and the insulation boundary
conditions. To satisfy the initial conditions, we must generally use a superposition
u(x,t) = c0/2 + Σ_{n=1}^∞ cn cos(nπx/l) e^(-n²π²kt/l²).
Here we wrote the constant term (n=0) as c0/2 in anticipation of a Fourier cosine
expansion. Indeed, we need
u(x,0) = f(x) = c0/2 + Σ_{n=1}^∞ cn cos(nπx/l), (29)
the Fourier cosine expansion of f(x) on [0,l]. (This is also the expansion of the initial temperature
function in the eigenfunctions of this problem). We therefore choose
cn = (2/l) ∫0^l f(ξ) cos(nπξ/l) dξ.
With this choice of coefficients, the superposition above gives the solution of this boundary value
problem.
Left Half of a Bar at Constant Temperature and Right Half at Zero Temperature
Suppose the left half of the bar is initially at temperature A and the right half is initially at
temperature zero. Thus
f(x) = A for 0 ≤ x < l/2, and f(x) = 0 for l/2 ≤ x ≤ l.
Then
c0 = (2/l) ∫0^(l/2) A dξ = A,
and, for n = 1,2,...,
cn = (2/l) ∫0^(l/2) A cos(nπξ/l) dξ = (2A/(nπ)) sin(nπ/2).
The solution for this temperature function is
u(x,t) = A/2 + (2A/π) Σ_{n=1}^∞ (sin(nπ/2)/n) cos(nπx/l) e^(-n²π²kt/l²).
Now sin(nπ/2) is zero if n is even. Further, if n = 2j-1 is odd, then sin(nπ/2) = (-1)^(j+1).
The solution may therefore be written
u(x,t) = A/2 + (2A/π) Σ_{j=1}^∞ ((-1)^(j+1)/(2j-1)) cos((2j-1)πx/l) e^(-(2j-1)²π²kt/l²).
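The coefficient computation for this step-function initial temperature can be verified numerically; the values of A and l below are illustrative.

```python
import math

# Cosine coefficients for f = A on the left half of the bar, 0 on the
# right. Expect c_0 = A and c_n = (2A/(n pi)) sin(n pi / 2).

A, l = 3.0, 2.0
f = lambda x: A if x < l / 2 else 0.0

def cosine_coefficient(n, steps=20000):
    # c_n = (2/l) * integral_0^l f(s) cos(n pi s / l) ds (midpoint rule)
    h = l / steps
    return (2.0 / l) * sum(
        f((i + 0.5) * h) * math.cos(n * math.pi * (i + 0.5) * h / l) * h
        for i in range(steps)
    )
```

The computed values match the closed forms above: c0 = A, c1 = 2A/π, and c2 = 0, consistent with the even-index terms dropping out of the series.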
4.3 The Laplace Equation with Boundary and Initial Conditions
We consider the steady-state heat conduction (or potential) problem for the rectangle
R = {(x,y) : 0 < x < a, 0 < y < b}:
∂²u/∂x² + ∂²u/∂y² = 0, (x,y) ∈ R, (30)
subject to the Dirichlet boundary conditions
u(0,y) =0 = u(a,y), u(x,0) = 0, u(x,b) = f(x). (31)
Physically, this problem arises if the faces of a thin isotropic rectangular plate are
insulated, three edges are maintained at zero temperature, and the fourth edge is subjected to a
variable temperature f(x) until steady-state conditions are attained throughout R. Then the
steady-state value of u(x,y) represents the distribution of temperature in the interior of the plate.
Let u(x,y) = X(x)Y(y) be a solution,
which, after substitution into Eq. (30), leads to the set of two ordinary differential equations:
X" – cX = 0, (32)
Y" + cY = 0. (33)
where c is a constant. Since the first three boundary conditions in (31) are homogeneous, they
become
X(0) = 0, X(a) =0, Y(0) = 0. (34)
but the fourth boundary condition, which is nonhomogeneous, must be used separately. Now,
taking c = -λ², as before, the solution of (32) subject to the first two boundary conditions in (34)
leads to the eigenvalues and the corresponding eigenfunctions
λn = nπ/a, Xn(x) = sin(nπx/a), n = 1,2,...,
while for these eigenvalues the solutions of (33) satisfying the third boundary condition
in (34) are
Yn(y) = sinh(nπy/a). (35)
Hence, for arbitrary constants cn, n = 1,2,..., we get
u(x,y) = Σ_{n=1}^∞ cn sin(nπx/a) sinh(nπy/a). (36)
The coefficients cn are then determined by using the fourth boundary condition in (31).
Thus,
cn = (2/(a sinh(nπb/a))) ∫0^a f(x) sin(nπx/a) dx. (37)
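As a check of (36)-(37), choosing f(x) = sin(πx/a) makes the series collapse to the single term u(x,y) = sin(πx/a) sinh(πy/a)/sinh(πb/a). The sketch below verifies the boundary values and that u is harmonic; a and b are illustrative.

```python
import math

# Single-mode Dirichlet solution on the rectangle 0 < x < a, 0 < y < b.

a, b = 2.0, 1.0

def u(x, y):
    return (math.sin(math.pi * x / a)
            * math.sinh(math.pi * y / a) / math.sinh(math.pi * b / a))

def laplacian(x, y, h=1e-4):
    # five-point finite-difference approximation of u_xx + u_yy
    return (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
            - 4 * u(x, y)) / (h * h)

top_value = u(0.7, b)                         # should equal f(0.7)
interior_residual = abs(laplacian(0.9, 0.4))  # should be ~ 0
```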
From what you have seen so far in this short introduction to PDEs, it should be clear that
knowledge of PDEs is an important part of the mathematical modeling done in many different
scientific fields. What you have seen so far is just a small sampling of the vast world of PDEs.
In each of the cases we solved, we worked with just the one-dimensional cases, but with a little
effort, it is possible to set up and solve similar PDEs for higher-dimensional situations as well.
For instance, the two-dimensional wave equation
∂²u/∂t² = c² (∂²u/∂x² + ∂²u/∂y²) (1)
can be used to model waves on the surface of drumheads, or on the surface of liquids, and the
three-dimensional heat equation
∂u/∂t = k (∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z²) (2)
can be used to model the flow of heat in a three-dimensional solid body.
PDE Problems
(1) Determine which of the following functions are solutions to the two-dimensional Laplace
equation
(a) (b)
(c) (d)
(e) (f)
(2) Determine which of the following functions are solutions to the one-dimensional wave
equation (for a suitable value of the constant c). Also determine what c must equal in each case.
(a) (b)
(c) (d)
(e) (f)
(3) Solve the following four PDEs where u is a function of two variables, x and y. Note that
your answer might have undetermined functions of either x or y, the same way an ODE might
have undetermined constants. Note you can solve these PDEs without having to use the
separation of variables technique.
(a) (b)
(c) (d)
(4) Solve the following systems of PDEs where u is a function of two variables, x and y.
Note that once again your answer might have undetermined functions of either x or y.
(5) Determine specific solutions to the one-dimensional wave equation for each of the
following sets of initial conditions. Suppose that each one is modeling a vibrating string of
length with fixed ends, and with constants such that in the wave equation PDE.
(a) and
(b) and
(c) and
(d) and
(6) Find solutions to each of the following PDEs by using the separation of variables
technique.
(a) (b)
(c) (d)
(e) (f)