
Differential Equations

Nick Schandler
NSchandler@gmail.com

Last Updated: January 2017

1 Terminology and Conventions


Definitions:
• An ordinary differential equation (ODE) involves derivatives of a function
of a single variable
• A partial differential equation (PDE) involves partial derivatives of a func-
tion of more than one variable
• The order of a differential equation is the highest n such that the nth
derivative of a function appears
• A linear ODE of order n is one which can be expressed in the form:

  a_0(x)\, y^{(n)} + a_1(x)\, y^{(n-1)} + \cdots + a_{n-1}(x)\, y' + a_n(x)\, y = b(x)

• A linear ODE is homogeneous if b(x) = 0. If b(x) is not identically 0, it is inhomogeneous
• A linear ODE is in "standard linear form" if it is written in the form:

  y^{(n)} + p_{n-1}(x)\, y^{(n-1)} + \cdots + p_1(x)\, y' + p_0(x)\, y = q(x)

2 Geometric Methods

3 Separation of Variables
Separation of variables is a technique that reduces the problem of solving certain
first-order ODEs to evaluating two integrals. It works on those first-order ODEs
that can be written in the separable form

  \frac{dy}{dt} = g(t)\, f(y)

We then separate the variables and integrate:

  \frac{1}{f(y)}\, dy = g(t)\, dt

  \int \frac{1}{f(y)}\, dy = \int g(t)\, dt

  H(y) = G(t) + C

This technique can be justified by showing that it works in reverse. Namely, if

  \frac{d}{dy} H(y) = \frac{1}{f(y)}, \qquad \frac{d}{dt} G(t) = g(t)

and y is implicitly defined by

H(y) = G(t) + C,

then we can implicitly differentiate to show we arrive at the differential equation
we started with:

  \frac{d}{dt} H(y) = \frac{d}{dt}\bigl( G(t) + C \bigr)

  \frac{dH}{dy} \frac{dy}{dt} = \frac{dG}{dt}

  \frac{1}{f(y)} \frac{dy}{dt} = g(t)

  \frac{dy}{dt} = f(y)\, g(t)

Notice that since we divide by f(y), separation of variables can "lose" constant
solutions y ≡ y_0 where f(y_0) = 0, so after following this method we must check
whether any such constant solutions satisfy the equation.
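As a quick check of the method, here is a minimal sympy sketch; the example equation dy/dt = ty is assumed here for illustration, and sympy's dsolve is used in place of the hand computation:

    import sympy as sp

    t = sp.symbols('t')
    y = sp.Function('y')

    # dy/dt = t*y, separable with g(t) = t and f(y) = y
    ode = sp.Eq(y(t).diff(t), t * y(t))

    # dsolve handles this class by separating variables and integrating
    sol = sp.dsolve(ode, y(t))
    print(sol)  # Eq(y(t), C1*exp(t**2/2))

Note that the "lost" constant solution y ≡ 0 reappears here as the case C1 = 0.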

4 First Order Linear ODEs


The general first order linear ODE in the unknown function y = y(x) has the
form:

  A(x) \frac{dy}{dx} + B(x)\, y(x) = C(x),

which we can divide by A(x) to put in standard form:

  \frac{dy}{dx} + p(x)\, y(x) = q(x)

If C(x) = 0, the equation is homogeneous and we can use separation of variables.
Otherwise, it is inhomogeneous and we can use the method of integrating factors
or variation of parameters.

4.1 Variation of Parameters

The idea behind the variation of parameters technique is that to solve a first-
order inhomogeneous equation, we look for a more general form of the solution c y_h
of the homogeneous equation. Namely, we replace the constant c by a function
u(x) chosen so that u(x) y_h satisfies the inhomogeneous equation. This technique
can be seen as a special case of Green's function method for solving linear
differential equations. The steps of the variation of parameters technique are
as follows (a worked sketch follows the list):
1. Find a nonzero solution y_h of the associated homogeneous ODE \frac{dy_h}{dx} + p(x)\, y_h = 0

2. Substitute y = u(x)\, y_h into the inhomogeneous equation, to find an equation for u(x):

  \frac{d}{dx}(u y_h) + p u y_h = q

  u' y_h + u y_h' + p u y_h = q

  u' y_h + u \underbrace{(y_h' + p y_h)}_{=0} = q

  u' y_h = q

3. Solve u' = \frac{q}{y_h} for u by integration

4. The solution to the inhomogeneous equation is:

  y = u(x)\, y_h(x)
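Here is a minimal sympy sketch of these steps; the example equation y' + y = x is assumed for illustration, not taken from the notes:

    import sympy as sp

    x, C = sp.symbols('x C')
    p = 1   # p(x) = 1
    q = x   # q(x) = x

    # Step 1: a nonzero homogeneous solution, y_h = exp(-integral of p)
    yh = sp.exp(-sp.integrate(p, x))     # exp(-x)

    # Step 3: solve u' = q/y_h by integrating (C is the constant of integration)
    u = sp.integrate(q / yh, x) + C      # (x - 1)*exp(x) + C

    # Step 4: the solution y = u(x)*y_h(x)
    y = sp.simplify(u * yh)
    print(y)                             # C*exp(-x) + x - 1

Since u carries a constant of integration, step 4 automatically produces the homogeneous piece C e^{-x} along with a particular solution.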

4.2 Integrating Factor

An alternative approach to solving linear first-order ODEs is through the use of
an integrating factor. The steps of this technique are (a worked example follows
the list):

1. Put the equation in standard form y' + p(x)y = q(x)

2. Multiply both sides by u(x) = e^{\int p\, dx}, which enables us to express the left-hand side as the derivative of a product:

  u y' + u p y = u q

  \frac{d}{dx}(u y) = u q

3. Integrate both sides:

  u(x)\, y(x) = \int u(x)\, q(x)\, dx + C

  y(x) = \frac{1}{u(x)} \left( \int u(x)\, q(x)\, dx + C \right)
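As a short illustration of these steps (the equation y' + 2y = e^x is assumed here, not taken from the notes):

  y' + 2y = e^x, \qquad u = e^{\int 2\, dx} = e^{2x}

  \frac{d}{dx}\left( e^{2x} y \right) = e^{3x}

  e^{2x} y = \tfrac{1}{3} e^{3x} + C \quad\Longrightarrow\quad y = \tfrac{1}{3} e^{x} + C e^{-2x}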

5 Fourier Series
Description:
• Suppose f(t) has period 2π; then

  f(t) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} \bigl( a_n \cos(nt) + b_n \sin(nt) \bigr), where

  a_0 = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t)\, dt

  a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \cos(nt)\, dt

  b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \sin(nt)\, dt

• If f is continuous at t_0, then f(t_0) equals the sum of its Fourier series at t_0. If f has a jump discontinuity at t_0, then the sum of its Fourier series at t_0 is the midpoint of the jump. (A numerical check of the coefficient formulas follows below.)
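As a numerical sanity check of the coefficient formulas, here is a sketch assuming numpy and scipy are available; the sawtooth f(t) = t on [−π, π) is chosen for illustration, with known coefficients a_n = 0 and b_n = 2(−1)^{n+1}/n:

    import numpy as np
    from scipy.integrate import quad

    f = lambda t: t   # one period of a sawtooth on [-pi, pi)

    for n in range(1, 4):
        an, _ = quad(lambda t: f(t) * np.cos(n * t) / np.pi, -np.pi, np.pi)
        bn, _ = quad(lambda t: f(t) * np.sin(n * t) / np.pi, -np.pi, np.pi)
        print(n, round(an, 6), round(bn, 6))   # a_n ~ 0; b_n ~ 2, -1, 2/3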
Justification:
• Functions u(t), v(t) on R are orthogonal on [−π, π] if \int_{-\pi}^{\pi} u(t)\, v(t)\, dt = 0
• Any two distinct functions among \sin(nt), n = 1, \ldots, \infty, and \cos(mt), m = 1, \ldots, \infty, are orthogonal on [−π, π]

• Given f(t) with period 2π, find a_n, b_n:

  f(t) = \cdots + a_k \cos(kt) + \cdots + a_n \cos(nt) + \cdots

  \int_{-\pi}^{\pi} f(t) \cos(nt)\, dt = \cdots + a_k \int_{-\pi}^{\pi} \cos(kt)\cos(nt)\, dt + \cdots + a_n \int_{-\pi}^{\pi} \cos^2(nt)\, dt + \cdots

  \int_{-\pi}^{\pi} f(t) \cos(nt)\, dt = \pi a_n \quad (all other terms vanish by the orthogonality relations)

  a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \cos(nt)\, dt

Simplifying Fourier Calculations
• If f(t) is even, then:
  – all b_n are zero
  – \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \cos(nt)\, dt = \frac{2}{\pi} \int_{0}^{\pi} f(t) \cos(nt)\, dt
• If f(t) is odd, then:
  – all a_n, including a_0, are zero
  – \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \sin(nt)\, dt = \frac{2}{\pi} \int_{0}^{\pi} f(t) \sin(nt)\, dt
• Scaling and Shifting
  – Let sq(t) be the standard odd, period-2π square wave:

    sq(t) = \begin{cases} -1 & \text{for } -\pi \le t < 0 \\ 1 & \text{for } 0 \le t < \pi \end{cases} = \frac{4}{\pi} \sum_{n \text{ odd}} \frac{\sin(nt)}{n}

  – f_1(t) is sq(t) shifted up by 1: f_1(t) = 1 + \frac{4}{\pi} \sum_{n \text{ odd}} \frac{\sin(nt)}{n}
  – f_2(t) = 2\,sq(t) = \frac{8}{\pi} \sum_{n \text{ odd}} \frac{\sin(nt)}{n}
  – f_3(t) is scaled in time so that f_3(t) = sq(\pi t), giving f_3(t) = \frac{4}{\pi} \sum_{n \text{ odd}} \frac{\sin(n\pi t)}{n}
  – f_4(t) = sq(t + \tfrac{\pi}{2}) = \frac{4}{\pi}\left( \sin(t + \tfrac{\pi}{2}) + \frac{\sin(3t + 3\pi/2)}{3} + \cdots \right) = \frac{4}{\pi}\left( \cos t - \frac{\cos 3t}{3} + \cdots \right)
Extensions:

• Suppose f(t) has period 2L; then

  f(t) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos\!\left(\frac{n\pi t}{L}\right) + b_n \sin\!\left(\frac{n\pi t}{L}\right) \right), where

  a_0 = \frac{1}{L} \int_{-L}^{L} f(t)\, dt

  a_n = \frac{1}{L} \int_{-L}^{L} f(t) \cos\!\left(\frac{n\pi t}{L}\right) dt

  b_n = \frac{1}{L} \int_{-L}^{L} f(t) \sin\!\left(\frac{n\pi t}{L}\right) dt
• f(t) defined only on [0, L]:
  – Extend f to [−L, L] (for example as an even or an odd function), treat the extension as 2L-periodic, and use only the portion of the resulting series on [0, L]
Finding resonances
• Express the input as its Fourier series and use superposition to find its various resonances (or near resonances); a numeric sketch follows the example below
• Example: \ddot{x} + 9.1x = sq(t) = \frac{4}{\pi}\left( \sin t + \frac{\sin 3t}{3} + \frac{\sin 5t}{5} + \cdots \right)
  – Solve for the individual components: \ddot{x} + 9.1x = \frac{\sin(nt)}{n}
  – The ERF (Exponential Response Formula) gives: x_{n,p}(t) = \frac{\sin(nt)}{n(9.1 - n^2)}
  – By superposition, x_p(t) = \frac{4}{\pi}\left( \frac{\sin t}{9.1 - 1} + \frac{\sin 3t}{3(9.1 - 9)} + \frac{\sin 5t}{5(9.1 - 25)} + \cdots \right)
  – n = 3 (the embedded third harmonic) is near resonance, since 9.1 ≈ 3², which the Fourier series picks up nicely. The input signal has base frequency 1, so this third harmonic is not immediately apparent from the input alone.
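A small numeric sketch (plain Python; the amplitudes are read off from the series for x_p above) makes the dominance of the n = 3 term obvious:

    import math

    # amplitude of each component of x_p: 4 / (pi * n * (9.1 - n^2))
    for n in (1, 3, 5, 7):
        amp = 4 / (math.pi * n * (9.1 - n**2))
        print(n, round(amp, 3))   # 1: 0.157, 3: 4.244, 5: -0.016, 7: -0.005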

6 Green's Formula
Step and Delta Functions, Convolution

• Unit step function (Heaviside function): u(t) = \begin{cases} 0 & t < 0 \\ 1 & t > 0 \end{cases}
• Delta function: \delta(t) = \lim_{h \to 0} q_h(t), where q_h(t) is a box of width h and area 1
  – \delta(t) = \begin{cases} 0 & t \neq 0 \\ \infty & t = 0 \end{cases}
  – \delta(t) is the generalized derivative of the step function

• Convolution
  – (f ∗ g)(t) = \int_{0^-}^{t^+} f(\tau)\, g(t - \tau)\, d\tau

Green's Formula
• Suppose we have a linear time-invariant system with rest initial conditions:

  p(D)y = f(t)

• Green's formula: y(t) = (f ∗ w)(t), where w(t) is the weight function (unit impulse response), i.e. p(D)w = \delta(t); a numerical sketch follows below
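Here is a minimal numerical sketch of Green's formula (assuming numpy; the first-order system x' + 2x = f(t) with rest IC is chosen for illustration, with weight function w(t) = e^{−2t}):

    import numpy as np

    dt = 0.001
    t = np.arange(0, 5, dt)
    f = np.sin(t)            # input signal
    w = np.exp(-2 * t)       # weight function: solves w' + 2w = 0 for t > 0, w(0+) = 1

    # Green's formula x = (f * w)(t), approximated by a Riemann sum
    x = np.convolve(f, w)[:len(t)] * dt

    # exact solution with rest IC: x(t) = (2 sin t - cos t + e^{-2t}) / 5
    x_exact = (2 * np.sin(t) - np.cos(t) + np.exp(-2 * t)) / 5
    print(np.max(np.abs(x - x_exact)))   # small discretization error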

7 Laplace Transform
• (\mathcal{L}f)(s) = F(s) = \int_{0^-}^{\infty} f(t)\, e^{-st}\, dt, for all values of s for which the integral converges
  – The integral converges (for Re(s) sufficiently large) if f(t) is of exponential order
  – The Laplace transform is linear
• Inverse Laplace transform
  – Denoted \mathcal{L}^{-1}
  – Can often be found using a table along with the properties of the Laplace transform
  – For more complicated inverse Laplace transforms you generally need partial fractions, e.g. via the Heaviside cover-up method
• Using Laplace transforms to solve an ODE
  – Suppose we have p(D)x = f(t)
  – Take the Laplace transform of both sides and solve algebraically for X(s) = \mathcal{L}(x(t))
  – Take the inverse Laplace transform of X(s) to recover the solution: x(t) = \mathcal{L}^{-1}(X(s))
  – When using Laplace transforms you must always specify initial conditions

  – Example (a sympy check follows below):

    x' + 3x = e^{-t} with rest IC

    sX(s) - x(0^-) + 3X(s) = \frac{1}{s+1}

    X(s) = \frac{1}{(s+1)(s+3)} = \frac{1/2}{s+1} - \frac{1/2}{s+3} \quad (using Heaviside cover-up)

    x(t) = \frac{1}{2}e^{-t} - \frac{1}{2}e^{-3t}
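A quick sympy check of this example (a sketch assuming sympy is available):

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    X = 1 / ((s + 1) * (s + 3))            # transform of the solution

    # invert and compare with the hand computation
    x = sp.inverse_laplace_transform(X, s, t)
    print(sp.simplify(x))                  # exp(-t)/2 - exp(-3*t)/2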

• Transfer Function (aka System Function)
  – For a system p(D)x = f(t), the transfer function is simply W(s) = \frac{1}{p(s)}
  – It is also the Laplace transform of the unit impulse response: W(s) = \mathcal{L}(w(t)), where w(t) is the unit impulse response
  – The best way to think about it is as a ratio of output to input:

    p(D)x = f(t) with rest IC

    p(s)\, X(s) = F(s)

    X(s) = \frac{1}{p(s)} F(s) = W(s) F(s)

    W(s) = \frac{X(s)}{F(s)}

• Laplace Transform of Convolution
  – Green's formula in time: x(t) = (w ∗ f)(t)
  – Green's formula in frequency: X(s) = W(s) F(s)
  – Viewed from the t side, the solution is the convolution of the weight function and the input. Viewed from the s side, the solution is the product of the transfer function and the input.
  – The Laplace transform turns convolution into multiplication: \mathcal{L}(f ∗ g) = F(s) G(s); a sympy check follows below
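A small sympy sketch verifying \mathcal{L}(f ∗ g) = F(s)G(s) on an assumed pair f = e^{−t}, g = e^{−2t}:

    import sympy as sp

    t, tau, s = sp.symbols('t tau s', positive=True)
    f = sp.exp(-t)
    g = sp.exp(-2 * t)

    # convolution (f * g)(t) = integral from 0 to t of f(tau) g(t - tau) dtau
    conv = sp.integrate(f.subs(t, tau) * g.subs(t, t - tau), (tau, 0, t))

    lhs = sp.laplace_transform(conv, t, s, noconds=True)
    rhs = (sp.laplace_transform(f, t, s, noconds=True)
           * sp.laplace_transform(g, t, s, noconds=True))
    print(sp.simplify(lhs - rhs))          # 0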

8 Linear Systems
• Suppose we have a 2×2 system of differential equations:

  \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \quad\Leftrightarrow\quad \dot{x} = Ax

  – Try the solution x = e^{\lambda t} a:

    \lambda e^{\lambda t} a = e^{\lambda t} A a
    \lambda a = A a
    \lambda I a = A a
    (A - \lambda I) a = 0

    ∗ This has a non-trivial solution only if |A - \lambda I| = 0
    ∗ The characteristic equation is \lambda^2 - \operatorname{tr}(A)\,\lambda + \det A = 0
    ∗ The roots \lambda_1 and \lambda_2 are the eigenvalues of the matrix A
  – 3 possibilities (a numerical sketch of the first case follows this list):
    ∗ Real distinct eigenvalues
      · Solution is x = c_1 e^{\lambda_1 t} v_1 + c_2 e^{\lambda_2 t} v_2
      · The solutions a = v_1 and a = v_2 are the eigenvectors corresponding to \lambda_1 and \lambda_2
    ∗ Repeated real eigenvalues
      · In the complete case, two linearly independent eigenvectors v_1 and v_2 can be found
      · In the defective case, the eigenvalue has only one independent eigenvector v_1, and:
      · x_1 = e^{\lambda_1 t} v_1
      · x_2 = e^{\lambda_1 t} (t v_1 + v_2), where v_2 is any vector satisfying (A - \lambda_1 I) v_2 = v_1
    ∗ Complex conjugate eigenvalues
      · Eigenvalue is \lambda = a + bi
      · Write the corresponding eigenvector as v = v_1 + i v_2
      · x = e^{at} (\cos(bt) + i \sin(bt)) (v_1 + i v_2)
      · Take real and imaginary parts:
        x_1 = e^{at} (v_1 \cos(bt) - v_2 \sin(bt))
        x_2 = e^{at} (v_1 \sin(bt) + v_2 \cos(bt))
      · General solution is c_1 x_1 + c_2 x_2
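A minimal numpy sketch of the distinct-real-eigenvalue case (the matrix is chosen here for illustration):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [2.0, 1.0]])   # eigenvalues 3 and -1

    lam, V = np.linalg.eig(A)    # columns of V are the eigenvectors
    print(lam)                   # [ 3. -1.]  (order may vary)

    # general solution x(t) = c1*e^{lam1 t} v1 + c2*e^{lam2 t} v2, evaluated at t = 1
    c1, c2, t = 1.0, 1.0, 1.0
    x = c1 * np.exp(lam[0] * t) * V[:, 0] + c2 * np.exp(lam[1] * t) * V[:, 1]
    print(x)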

Appendices
A Complex Numbers

A.1 Terminology and Properties of Complex Numbers

Complex numbers are expressions of the form z = a + bi, where a, b ∈ R. The
real part of z, Re(z), is a; and the imaginary part of z, Im(z), is b (notice the
imaginary part is the real number b, not bi). Arithmetic operations in C are
the same as in R², with the added feature that we can multiply two complex
numbers using the rule i² = −1. These operations satisfy the properties of a
field, namely:
• Closure under addition and multiplication
• Associativity of addition and multiplication
• Commutativity of addition and multiplication
• Existence of additive and multiplicative identity elements
• Existence of additive inverses and of multiplicative inverses for nonzero elements
• Distributivity of multiplication over addition

The modulus |z| of a complex number z = a + bi is its distance to the origin:
|z| = \sqrt{a^2 + b^2}. The complex conjugate of a complex number z = a + bi is
defined as \bar{z} = a - bi. Complex conjugation respects addition and multiplication:
• \overline{z + w} = \bar{z} + \bar{w}
• \overline{zw} = \bar{z}\,\bar{w} (proof)
Some useful identities of complex numbers are:

  \operatorname{Re}(z) = \frac{z + \bar{z}}{2}
  \operatorname{Im}(z) = \frac{z - \bar{z}}{2i}
  \bar{\bar{z}} = z
  z \bar{z} = |z|^2
  \operatorname{Re}(cz) = c \operatorname{Re}(z), \quad c ∈ R
  \operatorname{Im}(cz) = c \operatorname{Im}(z), \quad c ∈ R

A.2 Polar Form and Euler’s Formula

Given a nonzero complex number z = a + bi, we can express the point using
polar coordinates: a + bi = r(cos θ + i sin θ), where a = r cos(θ) and b = r sin(θ).

Convention dictates that r = |z|, so that r > 0. Notice that there are infinitely
many values of \theta = \arg(z), so we define the principal argument of z, \operatorname{Arg}(z), to be
the unique \theta in the range -\pi < \theta \le \pi.
It is typically easier to work with polar coordinates using Euler's formula:

  e^{i\theta} = \cos\theta + i\sin\theta

This equation is in fact the definition of e^{i\theta}, and we justify it based on its having
the same properties as the exponential of a real variable. Namely, it obeys the
following:
• Exponential Law: e^{it_1} e^{it_2} = e^{i(t_1 + t_2)} (proof)
• Initial Value Problem: \frac{d}{dt} e^{it} = i e^{it}, \quad e^{i0} = 1 (proof)
• Taylor's Formula: e^{it} = \sum_{n=0}^{\infty} \frac{(it)^n}{n!} (proof)

Using Euler's formula, we can then express a complex number z = a + bi as re^{i\theta},
where r = \sqrt{a^2 + b^2} and \theta = \arctan(b/a) (adjusted to the correct quadrant). This
provides an easy and insightful way to perform many arithmetic operations. For example:

  multiplication: (r_1 e^{i\theta_1})(r_2 e^{i\theta_2}) = r_1 r_2 e^{i(\theta_1 + \theta_2)}
  conjugation: \overline{r e^{i\theta}} = r e^{-i\theta}

We can also use Euler's formula to easily find nth roots of complex numbers.
For a complex number w = re^{i\theta}, z is an nth root of w if z^n = w, so

  w^{1/n} = r^{1/n} e^{i(\theta + 2k\pi)/n}, \quad k = 0, 1, \ldots, n - 1
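A short Python sketch of this root formula (using the standard cmath module; the value w = 8i is assumed for illustration), computing the three cube roots of 8i:

    import cmath

    w, n = 8j, 3
    r, theta = abs(w), cmath.phase(w)    # polar form w = r e^{i theta}

    roots = [r**(1 / n) * cmath.exp(1j * (theta + 2 * cmath.pi * k) / n)
             for k in range(n)]
    for z in roots:
        print(z, z**n)                   # each z**n is approximately 8j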

A.3 Complex Exponential

We can extend Euler's formula further by defining the complex function e^{zt} of
a real variable t to be the solution to the initial value problem

  \frac{d}{dt} e^{zt} = z e^{zt}, \quad e^{z \cdot 0} = 1
As before, this is justified based on its possessing properties similar to the
exponential of a real variable. Namely,
• e^{a+ib} = e^a (\cos b + i \sin b) = e^a e^{ib} for all real numbers a and b (proof; a numerical check follows this list)
• e^{z+w} = e^z e^w for complex numbers z and w (proof)
• (e^z)^n = e^{nz} for every complex number z and integer n (proof)
• The Taylor series of e^z is given by

  e^z = 1 + z + \frac{z^2}{2!} + \frac{z^3}{3!} + \cdots
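A one-line numerical check of the first property (plain Python; the values a = 1.3, b = 2.1 are arbitrary):

    import cmath, math

    a, b = 1.3, 2.1
    lhs = cmath.exp(a + b * 1j)
    rhs = math.exp(a) * (math.cos(b) + 1j * math.sin(b))
    print(abs(lhs - rhs))   # ~0, up to floating-point error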

B Laplace Table

  Function    Transform                                      Notes
  f(t)        F(s) = \int_{0^-}^{\infty} f(t) e^{-st}\, dt   (Definition)
