
Partial Differential Equations

Exercises

1. Carry out the proof of existence of solutions (with appropriate assumptions) for a system of
differential equations of the form ∂t u(t) = F(u(t)), t ∈ I; u(t0 ) = u0 , where F : Rd → Rd ,
u : I → Rd and t0 ∈ I.
Consider the kth order differential equation

G(u(t), ∂t u(t), · · · , ∂t^k u(t), t) = 0,                                    (0.1)

where u : I → Rd, G : (Rd)^{k+1} × I → Rd, and I ⊂ R. Consider now the function
G̃ : (Rd × R)^{k+1} → Rd × R given by

G̃((u0, s0), (u1, s1), · · · , (uk, sk)) = (G(u0, u1, · · · , uk, s0), s1 − 1).

Show that a solution to (0.1) corresponds to a solution ũ(t) = (u(t), s(t)) of

G̃(ũ(t), ∂t ũ(t), · · · , ∂t^k ũ(t)) = 0.                                      (0.2)

We have thus removed the time variable t in (0.1) by adding a clock s(t) (we say equation
(0.2) is autonomous). Now, if the hypotheses of the implicit function theorem are satisfied,
we can write

∂t^k ũ(t) = F(ũ(t), ∂t ũ(t), · · · , ∂t^{k−1} ũ(t)),                          (0.3)

for some function F. Consider now v : I → (Rd)^k given by

v(t) := (ũ(t), ∂t ũ(t), · · · , ∂t^{k−1} ũ(t)).

Show that ∂t v(t) = H(v(t)), where H : (Rd)^k → (Rd)^k is given by

H(a0, a1, · · · , ak−1) = (a1, a2, · · · , ak−1, F(a0, a1, · · · , ak−1)).

Check that v is continuously differentiable if and only if ũ is k times continuously differentiable.
We have thus converted a kth order DE into a first order DE. The k initial conditions
ũ(t0) = ũ0, ∂t ũ(t0) = ũ1, · · · , ∂t^{k−1} ũ(t0) = ũk−1 become v(t0) = (ũ0, ũ1, · · · , ũk−1).
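This reduction is also how higher order equations are treated numerically: once the equation has the form ∂t v = H(v), any first order solver applies. A minimal sketch (Python; the right-hand side F below is a made-up example for k = 2, not part of the exercise):

```python
import numpy as np

def F(a0, a1):
    # hypothetical right-hand side: u'' = -u - 0.1 u' (a damped oscillator)
    return -a0 - 0.1 * a1

def H(v):
    # H(a0, a1) = (a1, F(a0, a1)), exactly the map defined in the exercise (k = 2)
    a0, a1 = v
    return np.array([a1, F(a0, a1)])

def euler(v0, t0, t1, n):
    # forward Euler for the first-order system dv/dt = H(v)
    dt = (t1 - t0) / n
    v = np.array(v0, dtype=float)
    for _ in range(n):
        v = v + dt * H(v)
    return v

# v(0) = (u(0), u'(0)) = (1, 0); returns an approximation of (u(10), u'(10))
print(euler([1.0, 0.0], 0.0, 10.0, 100000))
```

Smaller steps (or a higher order scheme) improve the accuracy; the only point here is the change of variables from a kth order scalar equation to a first order system.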
2. Say that F : Rm → Rn is locally Lipschitz if for each x ∈ Rm there is a ball B(x, ε) around
x in which the function F is Lipschitz. Show that this is equivalent to F being Lipschitz in
every ball B(0, R), R > 0.
3. Here is a full version of Picard’s existence theorem which only requires F to be locally
Lipschitz.
Let t0 ∈ R and Ω ⊂ Rd a nonempty subset. Let Nε(Ω) = {u ∈ Rd : |u − v| < ε for some v ∈
Ω} be the ε-neighbourhood of Ω for some ε > 0. Let F : Rd → Rd be a function which is
Lipschitz on Nε(Ω) with some Lipschitz constant M, and assume F is bounded by A > 0 in
this region. Let 0 < T < min(ε/A, 1/M) and let I = [t0 − T, t0 + T]. Show that for every
u0 ∈ Ω there exists a solution u : I → Nε(Ω) to the Cauchy problem ∂t u(t) = F(u(t)), t ∈ I,
with u(t0) = u0.
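For orientation, the existence proof is constructive: it is the Picard iteration u_{k+1}(t) = u0 + ∫_{t0}^{t} F(u_k(s)) ds. A rough numerical sketch of that iteration (the choice F(u) = u, the grid, and the iteration count are arbitrary illustrations):

```python
import numpy as np

# Picard iteration u_{k+1}(t) = u0 + \int_{t0}^{t} F(u_k(s)) ds on a grid,
# with the integral computed by the trapezoidal rule.
def picard(F, u0, t0, T, n_grid=200, n_iter=30):
    t = np.linspace(t0 - T, t0 + T, n_grid)
    u = np.full_like(t, u0)                     # u_0(t) = u0 identically
    for _ in range(n_iter):
        f = F(u)
        # cumulative integral of F(u_k) along the grid
        integral = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) / 2 * np.diff(t))))
        integral -= np.interp(t0, t, integral)  # shift so the integral vanishes at t0
        u = u0 + integral
    return t, u

t, u = picard(lambda u: u, u0=1.0, t0=0.0, T=0.5)   # dt u = u, exact solution e^t
print(np.max(np.abs(u - np.exp(t))))                # small after enough iterations
```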
4. Show that if F is k times continuously differentiable then the solution u to the Cauchy
problem above is k + 1 times continuously differentiable.
5. Consider the ODE

∂t^k u(t) = F(u(t), ∂t u(t), · · · , ∂t^{k−1} u(t)),                          (0.4)

where u is Rd valued.
Cauchy-Kowalevski theorem: Let k ≥ 1. Suppose F : (Rd )k → Rd is real analytic, let
t0 ∈ R, and let u0 , · · · uk−1 ∈ Rd be arbitrary. Then there exists an open time interval I
containing t0 , and a unique real analytic solution u : I → Rd of (0.4), which obeys the initial
value conditions u(t0) = u0, ∂t u(t0) = u1, · · · , ∂t^{k−1} u(t0) = uk−1.

Begin the proof of the above theorem by reducing to the case k = 1, t0 = 0 and u0 = 0.
Then use induction to show that if the higher derivatives ∂t^m u(0) are defined recursively by
differentiating (0.4), then we have a bound of the form

|∂t^m u(0)| ≤ K^{m+1} m!

for all m ≥ 0 and some large K > 0 depending on F. Then define u : I → Rd, for some
sufficiently small neighbourhood I of the origin, by the power series

u(t) = ∑_{m=0}^{∞} (∂t^m u(0) / m!) t^m,

and show that ∂t u(t) − F (u(t)) is real analytic on I and vanishes at infinite order at zero,
and thus is zero on all of I.
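The recursive definition of the derivatives in the hint can be carried out mechanically. The sketch below does it with a computer algebra system for the hypothetical scalar example ∂t u = u^2, u(0) = 1 (not part of the exercise); its exact solution 1/(1 − t) has ∂t^m u(0) = m!, which is consistent with a bound of the form K^{m+1} m!.

```python
import sympy as sp

t = sp.symbols('t')
u = sp.Function('u')

expr = u(t)**2            # right-hand side: expresses du/dt in terms of u
vals = [sp.Integer(1)]    # vals[m] = m-th derivative of u at 0, starting with u(0) = 1
for m in range(1, 8):
    # replace u(0), u'(0), ..., u^(m-1)(0) by the values already computed
    pairs = [(sp.Derivative(u(t), (t, j)), vals[j]) for j in range(m - 1, 0, -1)]
    pairs.append((u(t), vals[0]))
    vals.append(sp.simplify(expr.subs(pairs)))
    expr = sp.diff(expr, t)   # expr now expresses the next derivative in terms of lower ones
print(vals)               # expect [1, 1, 2, 6, 24, 120, 720, 5040] = m!
```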
6. (Comparison principle) Let I = [t0 , t1 ] be a compact interval, and let u : I → R, v : I → R
be two scalar differentiable functions. Let F : I × R → R be a locally Lipschitz function,
and suppose that u and v obey the differential inequalities
 
∂t u(t) ≤ F(t, u(t));      ∂t v(t) ≥ F(t, v(t))

for all t ∈ I. Show that if u(t0 ) ≤ v(t0 ), then u(t) ≤ v(t) for all t ∈ [t0 , t1 ], and similarly if
u(t0 ) < v(t0 ), then u(t) < v(t) for all t ∈ [t0 , t1 ].
(Hint: for the first claim, study the derivative of max(0, u(t) − v(t))^2 and use Gronwall's
inequality. For the second, perturb the first argument by an epsilon.)

7. (Sturm comparison principle) Let I be a time interval, let u, v : I → R be twice continuously
differentiable functions and let a, f, g : I → R be continuous functions such that

∂t^2 u(t) + a(t)∂t u(t) + f(t)u(t) = ∂t^2 v(t) + a(t)∂t v(t) + g(t)v(t) = 0

for all t ∈ I. Suppose also that v oscillates faster than u in the sense that g(t) ≥ f (t) for all
t ∈ I. Suppose also that u is not identically zero. Show that the zeroes of v intersperse the
zeroes of u, in the sense that whenever t1 < t2 are times in I such that u(t1 ) = u(t2 ) = 0,
then v has at least one zero in the interval [t1 , t2 ].
(Hint: reduce to the case when t1 and t2 are consecutive zeroes of u, and argue by con-
tradiction. By replacing u or v with −u or −v if necessary one may assume that u, v are
nonnegative on [t1, t2]. Obtain a first order equation for the Wronskian u∂t v − v∂t u.)

8. Let F : Rd → Rd be a locally Lipschitz function of at most linear growth, that is, |F(u)| ≤
C(1 + |u|) for some constant C. Show that for each u0 ∈ Rd and t0 ∈ R there exists a unique
global solution u : R → Rd to the Cauchy problem ∂t u(t) = F(u(t)), t ∈ R; u(t0) = u0.
9. [HW 1, due 19 Jan] The following is the Peano existence theorem.
Let F : R × Rd → Rd be a continuous function (we do not assume any Lipschitz condition
on F). Assume that |F(t, x)| ≤ M when |x − c| ≤ K and |t − a| ≤ T . Show that there
exists a solution u : I → Rd to the differential equation ∂t u = F(t, u) with u(a) = c, where
I = [a − T1, a + T1] and T1 = min(T, K/M).
(Hint: Without loss of generality assume that a = 0 and consider the sequence of functions
u^(n) given by u^(n)(t) ≡ c for 0 ≤ t ≤ T1/n and u^(n)(t) = c + ∫_0^{t−T1/n} F(s, u^(n)(s)) ds for
T1/n < t ≤ T1. Use the Arzelà–Ascoli theorem.)

10. [HW 1, due 19 Jan] The following is Osgood’s uniqueness theorem.


We consider solutions u : I → R of ∂t u = f (t, u), u(t0 ) = u0 . Consider some domain
D ⊂ R. Suppose for all (t, u1 ), (t, u2 ) ∈ I × D we have

|f (t, u1 ) − f (t, u2 )| ≤ φ(|u1 − u2 |)

for some continuous function φ such that φ(x) > 0 for x > 0 and

∫_0^c dx/φ(x) = ∞.
Then no more than one solution passes through (t0, u0) ∈ I × D.


(Hint: Suppose there are two such solutions u1(t), u2(t). Consider the difference v(t) =
u1(t) − u2(t). W.l.o.g. assume v1 := v(t1) > 0 for some t1 > t0. By assumption ∂t v <
2φ(|v(t)|) when v(t) ≠ 0. Consider first the DE ∂t z = 2φ(z), z(t1) = v1, for t ≤ t1. Let
Φ(z) := ∫_z^{v1} dx/φ(x), so that z(t) = Φ^{−1}(2(t1 − t)). Now check that ∂t v(t) < ∂t z(t) at any point
of intersection of the two curves. Now since v(t0) = 0, argue that z and v must intersect at
a point to the left of t1, and arrive at a contradiction.)
State and prove a generalization to higher dimensions.
11. Consider the differential equation u̇ = f(t, u) with initial condition u(t0) = u0. A differen-
tiable function u+(t) satisfying u̇+(t) > f(t, u+(t)), t ∈ [t0, T), is a super solution of our
equation. Similarly a sub solution u−(t) satisfies u̇−(t) < f(t, u−(t)), t ∈ [t0, T). Show that
a solution u to the DE satisfies u(t) < u+(t), t ∈ (t0, T), whenever u(t0) ≤ u+(t0). Similarly
show that u−(t) < u(t), t ∈ (t0, T), whenever u(t0) ≥ u−(t0).
If the definition of super solution had ≥ instead of >, would the first conclusion above hold
with < replaced by ≤ ?

12. Consider the linear equation

u^(n)(t) + an−1(t)u^(n−1)(t) + · · · + a1(t)u'(t) + a0(t)u(t) = 0,            (0.5)

and let u1, u2, · · · , un be n solutions. The Wronskian is defined as

W(t) := det( uj^(i)(t) )_{0≤i,j≤n−1}.

More generally, if u1, · · · , un are n solutions to ∂t u = A(t)u, where A(t) = (aij(t))_{1≤i,j≤n} is
an n×n matrix, we define the Wronskian as the determinant of the matrix (u1(t) u2(t) · · · un(t)).
Show that the Wronskian in this case satisfies ∂t W(t) = (∑_{k=1}^{n} akk(t)) · W(t), and hence

W(t) = W(t0) exp( ∫_{t0}^{t} ∑_{k=1}^{n} akk(x) dx ).

What is the Wronskian in the case of (0.5)? This gives another argument for the linear
independence of n solutions to a linear equation when the solutions are independent at one
specific time.
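The formula just derived (Liouville's formula) is easy to test numerically; a minimal sketch with a made-up coefficient matrix A(t), purely for illustration:

```python
import numpy as np

def A(t):
    return np.array([[0.0, 1.0], [-1.0 - t, -0.5]])   # hypothetical 2x2 coefficient matrix

def rhs(t, U):
    return A(t) @ U                                    # each column of U is a solution

def rk4_step(t, U, dt):
    k1 = rhs(t, U)
    k2 = rhs(t + dt / 2, U + dt / 2 * k1)
    k3 = rhs(t + dt / 2, U + dt / 2 * k2)
    k4 = rhs(t + dt, U + dt * k3)
    return U + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

U = np.eye(2)                      # columns are u1(0), u2(0), so W(0) = det U = 1
t, dt, trace_integral = 0.0, 1e-3, 0.0
while t < 1.0 - 1e-12:
    trace_integral += np.trace(A(t)) * dt   # Riemann sum for the integral of tr A
    U = rk4_step(t, U, dt)
    t += dt
# compare det(u1(1) | u2(1)) with W(0) exp(int_0^1 tr A)
print(np.linalg.det(U), np.exp(trace_integral))   # the two numbers should agree closely
```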

13. For a scalar linear homogeneous equation with constant coefficients

u^(n)(t) + an−1 u^(n−1)(t) + · · · + a1 u'(t) + a0 u(t) = 0,                  (0.6)

where ai ∈ R, we showed that when λ is a root of multiplicity k of the characteristic
polynomial, the functions t^r e^{λt}, r = 0, 1, · · · , k − 1, are solutions.

• If w(x) = u(x) + iv(x) satisfies (0.6) then so do u(x) and v(x). Since complex roots of
the characteristic polynomial occur in conjugate pairs, one can obtain n real solutions
to (0.6).
• We next show that the solutions of the form t^r e^{λj t} (where λj is a root of the characteristic
polynomial and r is less than its multiplicity) are linearly independent over C on any nonvoid interval.
Write frj(t) := t^r e^{λj t} and suppose

∑ crj frj(t) ≡ 0                                                              (0.7)

with crj ∈ C and some crj ≠ 0. For a given λj with some crj ≠ 0, choose R to be the largest
such r. Consider the operator

Q(D) = (D − λj)^R ∏_{i≠j} (D − λi)^{ki + 1},

where for i ≠ j, ki is the largest r such that t^r e^{λi t} is a member of the functions in (0.7).
Check Q(D)[fri] = 0 for i ≠ j and Q(D)[frj] = 0 for r < R. Apply Q(D) to (0.7) and
conclude that cRj = 0 (a contradiction). Thus (0.6) has n linearly independent (over
C) complex solutions of the form t^r e^{λt}.
• Conclude that (0.6) has n linearly independent (over C) real valued solutions.
• Conclude that (0.6) has n linearly independent (over R) real valued solutions.
14. Consider Euler’s homogeneous DE:

x^n u^(n) + b1 x^{n−1} u^(n−1) + b2 x^{n−2} u^(n−2) + · · · + bn u = 0

on the positive semi-axis x > 0, where b1 , b2 , · · · bn are constants. Convert this to a homoge-
neous constant coefficient linear differential equation by making the substitution t = ln x.
Find a basis of solutions of x^2 u'' + 5xu' + 3u = 0.
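If you want to double-check the basis you obtain by hand, a computer algebra system can solve the same Euler equation directly (a sketch; the expected exponents are the roots of the indicial equation m(m − 1) + 5m + 3 = 0):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
u = sp.Function('u')
# the Euler equation from the exercise: x^2 u'' + 5 x u' + 3 u = 0
eq = sp.Eq(x**2 * u(x).diff(x, 2) + 5 * x * u(x).diff(x) + 3 * u(x), 0)
print(sp.dsolve(eq, u(x)))   # expect powers of x whose exponents solve m(m-1) + 5m + 3 = 0
```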
15. Find a basis of solutions for (i) u^(4) − 3u'' + 2u = 0 and (ii) u''' + 6u'' + 12u' + 8u = 0.
Find the general solution to u^(4) + 5u'' + 4u = cos t.
16. If u1(x) and u2(x) are two linearly independent solutions of u'' + p(x)u' + q(x)u = 0, then
show that the zeroes of these functions are distinct and occur alternately, in the sense that
u1(x) vanishes exactly once between consecutive zeroes of u2, and vice versa. (Hint: consider
the Wronskian of u1 and u2.)

17. Show that any DE of the form u'' + p(x)u' + q(x)u = 0 (standard form) can be converted to
an equation of the form v'' + r(x)v = 0 (normal form). (Hint: put u(x) = v(x)f(x), plug in
to the DE in standard form and choose f appropriately.)
18. If q(x) < 0 for all x, and if u is a nontrivial solution of u'' + q(x)u = 0, then show that u has
at most one zero; that is, the solutions of the equation do not oscillate at all.
19. Let u(x) be any nontrivial solution of u'' + q(x)u = 0, where q(x) > 0 for all x > 0. If
∫_1^∞ q(x) dx = ∞, then show that u(x) has infinitely many zeros on the positive x axis.
(Hint: Suppose u > 0 at some point x0. Note that u'' = −qu is negative, so that the slope
u' is decreasing. If we show u' is negative somewhere, then u hits the x axis at
some point to the right of x0. Consider v = −u'/u and compute the derivative of v.)

20. Find the general solution of: (a) u'' + 2u' + u = e^{−x}. (b) u'' + 2u' + 2u = sin x. (c)
u'' + 2u' + 2u = e^{−x}. (d) u'' + 2u' + 2u = e^{−x} + sin x. (e) u'' + 2u' + 2u = e^{−x} sin x. (f)
u'' + 3u' + 2u = x^2 + 2x + 3.
Find solutions of the above DEs satisfying the initial values u(0) = u'(0) = 0.
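Hand computations by undetermined coefficients can be cross-checked with a CAS; a sketch for part (b) with the zero initial data:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')
# part (b): u'' + 2u' + 2u = sin x, with u(0) = u'(0) = 0
ode = sp.Eq(u(x).diff(x, 2) + 2 * u(x).diff(x) + 2 * u(x), sp.sin(x))
sol = sp.dsolve(ode, u(x), ics={u(0): 0, u(x).diff(x).subs(x, 0): 0})
print(sp.simplify(sol.rhs))
```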

21. (Method of variation of parameters) Consider the differential equation

u'' + p(t)u' + q(t)u = r(t).                                                  (0.8)

Assume that the general solution to the corresponding homogeneous equation is known. Let
u1 and u2 be two independent solutions of the homogeneous equation. We search for a
particular solution to (0.8) of the form u = w1 u1 + w2 u2. Plug this into (0.8). To avoid
second derivatives of w1 and w2, make the assumption w1' u1 + w2' u2 = 0. Solve for w1 and w2.
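For reference, the computation leads to the 2×2 linear system

w1' u1 + w2' u2 = 0,        w1' u1' + w2' u2' = r(t),

whose coefficient determinant is the Wronskian W[u1, u2](t), so it can be solved for w1' and w2' wherever u1 and u2 are independent; integrating then gives w1 and w2.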

22. (Reduction of order method) Consider the DE

an(t)u^(n) + an−1(t)u^(n−1) + · · · + a1(t)u' + a0(t)u = 0.                   (0.9)

Suppose we know one solution u1(t) to the above DE. We will look for other solutions of the
form u(t) = z(t)u1(t). Plug this into (0.9) to show that z(t) satisfies a DE of the form

bn(t)z^(n) + bn−1(t)z^(n−1) + · · · + b0(t)z(t) = 0.                          (0.10)

Show that b0(t) ≡ 0. Let y(t) = z'(t) and show that y satisfies

bn(t)y^(n−1) + bn−1(t)y^(n−2) + · · · + b1(t)y(t) = 0.                        (0.11)

Show that if y1, y2, · · · , yn−1 are linearly independent solutions of (0.11), then z0 ≡ 1, z1 =
∫ y1, · · · , zn−1 = ∫ yn−1 are independent solutions of (0.10).
Find the solution to t^2 u'' + tu' − u = 0 by the above method. Use that t is a solution.
23. Let us analyze solutions of the DE ẋ = x^2 − t^2. Because of the symmetry of the transformation
(t, x) → (−t, −x), it suffices to consider t ≥ 0. Divide the (t, x), t ≥ 0 plane into regions
A : x > t, B : −t < x < t, C : x < −t. Argue that
• For solutions starting in region A there are two cases: either the solution stays in A for
all time and hence must converge to +∞ (maybe in finite time), or it enters region B.
• A solution starting in region B (or entering region B) will stay there for all time and
hence must converge to −∞. Since it must stay above x = −t, this cannot happen in
finite time.
• A solution starting in region C will eventually hit x = −t and enter region B.
Show that x−(t) = −t, t ≥ 0, is a sub solution and that y+(t) = −√(t^2 − 2), t > 2√(2/3), is a
super solution. Show that any solution in region B will eventually end up between y+(t) and
x−(t), and will therefore converge to the line x = −t.

Show that x+(t) = t, t ≥ 0, is a super solution and that y−(t) = 2 + t^2, t > 0, is a sub
solution. Consider solutions with initial conditions (T, x+(T)) and (T, y−(T)). These hit
t = 0 at some points a(T) and b(T) respectively. Since different solutions can never cross,
the solutions which stay between x+(t) and y−(t) for t ∈ [0, T] are precisely those starting at
t = 0 in the interval [a(T), b(T)]. Moreover, this also implies a(T) is strictly increasing and
b(T) is strictly decreasing. Let T → ∞ to see that all solutions starting in [a(∞), b(∞)] at
t = 0 stay between x+(t) and y−(t) for all t > 0.
Next show that if there were 2 solutions in region A, the distance between them would increase
with time, and therefore there can be at most one solution x0(t) that stays between x+(t)
and y−(t) for all t > 0. All solutions below x0(t) will eventually enter region B and converge
to −∞ along x = −t. All solutions above x0(t) will eventually be above y−(t) and converge
to +∞.
Next show that any solution above y−(t) converges to +∞ in finite time as follows. Show
that for any solution x(t) above y−(t) we must have x(t) > t/√(1 − ε) for sufficiently large
t, and must therefore have ẋ(t) > ε x(t)^2 for large t. Conclude by comparing with the
differential equation ż(t) = ε z(t)^2.
Note: Arguments of a similar type can be used to understand the behaviour of solutions to DEs
of the form ẋ = f(t, x). The reason our method works is that (t, x) ∈ R^2. In R^2 any curve
splits the plane into two regions, and the only way to get from one region to the other is by
crossing the curve. Such arguments do not hold when we consider higher order DEs or systems
of DEs, where (t, x) ∈ R^n, n > 2.
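A quick numerical experiment that reproduces this picture (illustrative only; the step size and the blow-up threshold below are ad hoc choices):

```python
# Integrate dx/dt = x^2 - t^2 from t = 0 for a few initial values and report
# whether the solution blows up or settles onto the line x = -t.
def run(x0, T=20.0, dt=1e-4):
    t, x = 0.0, x0
    while t < T:
        x += dt * (x * x - t * t)   # forward Euler step
        t += dt
        if x > 1e6:                  # crude blow-up detection
            return "blows up near t = %.2f" % t
    return "reaches x = %.2f at t = %.0f (note -t = %.0f)" % (x, T, -T)

for x0 in (-2.0, 0.0, 0.5, 1.0, 2.0):
    print(x0, run(x0))
```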
24. (Generalized Gronwall inequality) Suppose ψ(t) is a continuous function with

ψ(t) ≤ α(t) + ∫_0^t β(s)ψ(s) ds,   t ∈ [0, T],

where α(t), β(t) are continuous functions with β(t) ≥ 0. Then show that

ψ(t) ≤ α(t) + ∫_0^t α(s)β(s) exp(∫_s^t β(r) dr) ds,   t ∈ [0, T].

Moreover, if in addition α is nondecreasing, then

ψ(t) ≤ α(t) exp(∫_0^t β(s) ds),   t ∈ [0, T].

HINT: Consider the derivative of φ(t) ∫_0^t β(s)ψ(s) ds, where φ(t) = exp(−∫_0^t β(s) ds).

25. Suppose f and g are continuous functions on R+ × R with

L = sup_{x≠y} |f(t, x) − f(t, y)| / |x − y| < ∞   and   M = sup_{R+ × R} |f(t, x) − g(t, x)| < ∞.

Let x(t) be a solution of the initial value problem

ẋ = f (t, x), x(t0 ) = x0

and let y(t) be a solution of the initial value problem

ẏ = g(t, y), y(t0 ) = y0 .

Show that

|x(t) − y(t)| ≤ |x0 − y0| e^{L|t−t0|} + (M/L)(e^{L|t−t0|} − 1).
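The estimate can be tested numerically; a sketch with the hypothetical pair f(t, x) = −x and g(t, x) = −x + 0.1 (so L = 1 and M = 0.1), which is not part of the exercise:

```python
import numpy as np

L, M = 1.0, 0.1
f = lambda t, x: -x          # Lipschitz constant L = 1
g = lambda t, x: -x + 0.1    # differs from f by at most M = 0.1
t0, T, dt = 0.0, 2.0, 1e-4
x, y, t = 1.0, 1.0, t0       # identical initial data, so |x0 - y0| = 0
while t < T - 1e-12:
    x += dt * f(t, x)        # forward Euler for both problems
    y += dt * g(t, y)
    t += dt
gap0 = 0.0                   # |x0 - y0|
bound = gap0 * np.exp(L * (T - t0)) + (M / L) * (np.exp(L * (T - t0)) - 1.0)
print(abs(x - y), bound)     # the observed gap should lie below the bound
```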
26. Consider the inhomogeneous linear second-order DE L[u] = r(x) with initial conditions
u(a) = u'(a) = 0. Here L[u] := u'' + p(x)u' + q(x)u. We will construct a function (called
Green's function) G such that ∫_a^x G(x, ξ)r(ξ) dξ is the solution of the above equation.
Define the function G(t, τ ) as follows.
• G(t, τ ) = 0, for a ≤ t ≤ τ .
• For each fixed τ ≥ a and all t > τ , G(t, τ ) is that particular solution of the DE
Gtt + p(t)Gt + q(t)G = 0 which satisfies the initial conditions G = 0 and Gt = 1 at
t = τ.
Show that G is the Green's function of the operator L for the initial value problem on t ≥ a.
Let f(t) and g(t) be two linearly independent solutions of the linear homogeneous equation
L[u] = 0. Then show that the solution of L[u] = r(t) with the initial conditions u(a) = u'(a) = 0
is the function

u(t) = ∫_a^t [ (f(τ)g(t) − g(τ)f(t)) / W[f(τ), g(τ)] ] r(τ) dτ.
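A quick numerical sanity check of the last formula, for the hypothetical special case p ≡ 0, q ≡ 1, a = 0 (so f(t) = cos t, g(t) = sin t, W ≡ 1, and the kernel is sin(t − τ)); with r ≡ 1 the formula should reproduce u(t) = 1 − cos t, which satisfies u'' + u = 1, u(0) = u'(0) = 0:

```python
import numpy as np

def u(t, n=4000):
    tau = np.linspace(0.0, t, n)
    vals = np.sin(t - tau) * 1.0           # kernel sin(t - tau) times r(tau) = 1
    return np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(tau))   # trapezoid rule

print(u(2.0), 1.0 - np.cos(2.0))           # the two values should agree closely
```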

27. Most DEs cannot be solved explicitly. Sometimes we can look for power series solutions.
Consider u'' + p(x)u' + q(x)u = 0 where both p and q have a power series in the neighborhood
|x − x0| < R. Show that there is a unique solution u with u(x0) = a0, u'(x0) = a1 and which
has a power series in this neighborhood.
(HINT: Start by finding what relation the coefficients in the power series of u must satisfy if
there was one.)
28. Find the general solution to the Legendre equation

d/dx( (1 − x^2) du/dx ) + λu = 0.

29. Consider the DE u' = 1 + u^2. It is easy to see that y = tan x is a particular solution for
which y(0) = 0. Show that

tan x = x + (1/3) x^3 + (2/15) x^5 + · · ·

by assuming a solution of the DE in the form of a power series ∑ an x^n and finding the an.
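The coefficient matching can also be done by a CAS, which is a convenient way to check the recursion you derive by hand (a sketch; the truncation order is arbitrary):

```python
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a1:8')                                  # a1, ..., a7 (a0 = 0 since u(0) = 0)
u = sum(a[i] * x**(i + 1) for i in range(7))            # truncated power series ansatz
residual = sp.expand(sp.diff(u, x) - 1 - u**2)          # substitute into u' = 1 + u^2
eqs = [residual.coeff(x, k) for k in range(7)]          # match the powers x^0, ..., x^6
print(sp.solve(eqs, a))                                 # expect a1 = 1, a3 = 1/3, a5 = 2/15, ...
```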
30. Hermite's equation is u'' − 2xu' + 2pu = 0, where p is a constant. It is a DE that arises in
the theory of the linear harmonic oscillator in quantum mechanics.

• Show that its general solution is u(x) = a0 u1(x) + a1 u2(x), where

u1(x) = 1 − (2p/2!) x^2 + (2^2 p(p − 2)/4!) x^4 − (2^3 p(p − 2)(p − 4)/6!) x^6 + · · ·

and

u2(x) = x − (2(p − 1)/3!) x^3 + (2^2 (p − 1)(p − 3)/5!) x^5 − (2^3 (p − 1)(p − 3)(p − 5)/7!) x^7 + · · ·

Show directly that both series converge for all x, as is expected since the coefficients of
u' and u have infinite radius of convergence.
• Note that if p is a nonnegative integer then one of the above series terminates
and is thus a polynomial: u1(x) if p is even and u2(x) if p is odd, while the other
remains an infinite series. Any polynomial solution of Hermite's equation is a constant
multiple of one of these polynomials. The Hermite polynomials Hn(x) are the polynomial solutions
of Hermite's DE whose highest-order term is 2^n x^n.
Show that the nth Hermite polynomial is given by

Hn(x) = ∑_{k=0}^{[n/2]} (−1)^k (n! / (k!(n − 2k)!)) (2x)^{n−2k}.

• Show that e^{2xt − t^2} = ∑_{n=0}^{∞} (Hn(x)/n!) t^n.
Use this to prove Rodrigues' formula for Hermite polynomials: Hn(x) = (−1)^n e^{x^2} (d^n/dx^n) e^{−x^2}.
• wn(x) = e^{−x^2/2} Hn(x) is the Hermite function of order n.
Show that it solves wn'' + (2n + 1 − x^2)wn = 0.
Show that the Hermite functions are orthogonal on R in the sense that ∫_{−∞}^{∞} wm wn dx = 0
if m ≠ n.
Show that ∫_{−∞}^{∞} e^{−x^2} [Hn(x)]^2 dx = 2^n n! √π. (Hint: Observe that the left hand side is
(−1)^n ∫_{−∞}^{∞} Hn(x) (d^n/dx^n) e^{−x^2} dx and use integration by parts several times.)
• One can show that for a "large" class of functions f, one can decompose it as f(x) =
∑_{n=0}^{∞} an Hn(x). Show that for such functions

an = (1 / (2^n n! √π)) ∫_{−∞}^{∞} e^{−x^2} Hn(x) f(x) dx.
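The orthogonality relations above can be spot-checked numerically with Gauss–Hermite quadrature; a sketch (numpy's hermite module uses the same physicists' normalization as this exercise, with leading term (2x)^n):

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval

# Gauss-Hermite quadrature: sum_i w_i f(x_i) approximates the integral of e^{-x^2} f(x),
# exactly for polynomials of moderate degree.
nodes, weights = hermgauss(30)

def inner(m, n):
    cm = [0.0] * m + [1.0]                 # coefficient vector selecting H_m
    cn = [0.0] * n + [1.0]                 # coefficient vector selecting H_n
    return np.sum(weights * hermval(nodes, cm) * hermval(nodes, cn))

print(inner(3, 5))                                                   # approximately 0
print(inner(4, 4), 2**4 * math.factorial(4) * math.sqrt(math.pi))    # the two should agree
```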

31. Solutions of Airy's equation y'' + xy = 0 are called Airy functions, and have applications
in the theory of diffraction. They have also recently appeared extensively in the theory of
interacting particle systems.
• Show that every nontrivial Airy function has infinitely many positive zeroes and at most
one negative zero.
• Find the Airy functions in the form of power series, and verify directly that these series
converge for all x.
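For a quick numerical illustration of the first claim: scipy's airy solves w'' = xw, so Ai(−x) solves the equation considered here, and one can simply count its sign changes (a sketch, not a proof):

```python
import numpy as np
from scipy.special import airy

x = np.linspace(-5.0, 30.0, 200001)
y = airy(-x)[0]                        # Ai(-x), a particular solution of y'' + x y = 0
crossings = x[:-1][y[:-1] * y[1:] < 0] # grid points where the sign changes
print(len(crossings), crossings[:3])   # many positive zeros, the smallest near x = 2.34
```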

32. For further reading on power series solutions consult the book by Simmons. Other top-
ics worth exploring and not covered in this course include Sturm-Liouville equations, two
dimensional systems and nonlinear equations.
33. [Homework 2, due Feb 14] Consider the first order PDE F(Du, u, x) = 0, where u : U → R
for a domain U ⊂ Rd, subject to the boundary condition u = g on Γ ⊂ ∂U. Show that if Γ is
not flat near x0, the condition that Γ is noncharacteristic at (p0, z0, x0) reads

Dp F(p0, z0, x0) · ν(x0) ≠ 0,

where ν(x0) is the outward unit normal to ∂U at x0.


34. [Homework 2, due Feb 14] Solve the equation (1 + x^2)ux + uy = 0. Sketch some of the
characteristic curves.

35. [Homework 2, due Feb 14] Solve ux + uy + u = e^{x+2y} with u(x, 0) = 0.


36. Suppose u^(i), i = 1, 2, satisfy ut − kuxx = f(x, t), u(x, 0) = φ^(i)(x), u(0, t) = g(t), u(l, t) =
h(t) for 0 < x < l and t > 0. Show that

max_{0≤x≤l} ( u^(1)(x, t) − u^(2)(x, t) ) ≤ max_{0≤x≤l} ( φ^(1)(x) − φ^(2)(x) )   for t > 0.

37. [HW 3, due March 21] Solve the heat equation vt = kvxx on an interval 0 < x < l, 0 < t,
with initial condition v(x, 0) = g(x), 0 < x < l, and Dirichlet boundary conditions v(0, t) =
v(l, t) = 0, by first extending the initial profile appropriately to a profile on R, and solving
the heat equation on R. Write your solution as an integral involving the heat kernel Φ and
the initial profile g. (This gives you an alternate expression for the solution from the one
obtained in class using Fourier series).
Carry out the above for the heat equation on an interval with Neumann and with periodic
boundary conditions.
38. Solve the following heat equation on the half line 0 < x < ∞ and t > 0: vt − kvxx = f (x, t)
with v(0, t) = h(t) and v(x, 0) = φ(x).

39. Solve the following heat equation on the half line 0 < x < ∞ and t > 0: vt − kvxx = f (x, t)
with vx (0, t) = h(t) and v(x, 0) = φ(x).
