Differential Equations NS
Nick Schandler
NSchandler@gmail.com
2 Geometric Methods
3 Separation of Variables
Separation of variables is a technique that reduces the problem of solving certain first-order ODEs to evaluating two integrals. It works on separable first-order ODEs, those for which we can write the equation in the form

    dy/dt = g(t) f(y)
We then separate the variables and integrate:

    (1/f(y)) dy = g(t) dt
    ∫ 1/f(y) dy = ∫ g(t) dt
    H(y) = G(t) + C,

where H is an antiderivative of 1/f and G is an antiderivative of g.
Notice that since we divide by f(y), separation of variables can "lose" constant solutions where f(y) = 0, so after following this method we must check whether f(y) = 0 satisfies the equation.
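As a concrete check (an illustrative choice, not an example from the notes): for dy/dt = t·y we have g(t) = t and f(y) = y, so ∫ dy/y = ∫ t dt gives ln|y| = t²/2 + C, i.e. y = y₀ e^{t²/2}. A short script compares this separated solution against a direct numerical integration:

```python
import math

# Separable ODE dy/dt = t*y (g(t) = t, f(y) = y), with y(0) = 1.
# Separation of variables gives ln|y| = t**2/2 + C, so y(t) = exp(t**2/2).
def closed_form(t):
    return math.exp(t * t / 2)

# Independent check: integrate dy/dt = t*y with classical RK4.
def rk4(t_end, steps=10_000):
    h = t_end / steps
    t, y = 0.0, 1.0
    f = lambda t, y: t * y
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

print(abs(rk4(1.0) - closed_form(1.0)))  # tiny: the separated solution checks out
```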
A first-order linear ODE has the form A(x)y′ + B(x)y = C(x), which we can divide by A(x) to put in standard form:

    dy/dx + p(x)y = q(x)

If C(x) = 0 then the equation is homogeneous and we can use separation of variables. Otherwise, it is inhomogeneous and we can use the method of integrating factors or variation of parameters.
The idea behind the variation of parameters technique is that to solve a first-order inhomogeneous equation, we look for a more general form of the solution c·yh to the homogeneous equation. Namely, we replace the constant c by a function u(x) that satisfies the inhomogeneous equation. This technique can be seen as a special case of the Green's function method for solving linear differential equations. The steps of this variation of parameters technique are:
1. Find a nonzero solution yh of the associated homogeneous ODE dyh/dx + p(x)yh = 0.

2. Substitute y = u(x)yh into the inhomogeneous equation to find an equation for u(x):

    d/dx(u yh) + p u yh = q
    u′yh + u yh′ + p u yh = q
    u′yh + u(yh′ + p yh) = q    (the parenthesized term is 0)
    u′yh = q

3. Solve u′ = q/yh for u; the solution is then y = u(x)yh(x).
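A worked instance of the steps above (the equation y′ + y = x is my illustrative choice, not from the notes): here p(x) = 1 and q(x) = x, step 1 gives yh = e^{−x}, step 3 gives u′ = x e^{x}, so u = (x − 1)e^{x} + C and y = x − 1 + C e^{−x}. The script confirms the result satisfies the ODE:

```python
import math

# Variation of parameters applied to y' + y = x (p = 1, q = x), illustrative choice.
# yh = e**(-x); u' = q/yh = x*e**x; u = (x - 1)*e**x + C; y = u*yh = x - 1 + C*e**(-x).
def y(x, C=1.0):
    return x - 1 + C * math.exp(-x)

# Check that y' + y = x numerically via a central difference.
def residual(x, h=1e-6):
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return dydx + y(x) - x

print(max(abs(residual(x)) for x in [0.0, 0.5, 1.0, 2.0]))  # close to zero
```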
2. Multiply both sides by u = e^(∫p dx), which enables us to express the left-hand side as the derivative of something:

    uy′ + upy = uq
    d/dx(uy) = uq
5 Fourier Series
Description:
• Suppose f(t) has period 2π; then

    f(t) ∼ a0/2 + Σ_{n=1}^{∞} (an cos(nt) + bn sin(nt)),  where

    a0 = (1/π) ∫_{−π}^{π} f(t) dt
    an = (1/π) ∫_{−π}^{π} f(t) cos(nt) dt
    bn = (1/π) ∫_{−π}^{π} f(t) sin(nt) dt
• Given f(t) with period 2π, find an, bn:
  – f1(t) = sq(t) shifted up by 1 = 1 + (4/π) Σ_{n odd} sin(nt)/n
  – f2(t) = 2 sq(t) = (8/π) Σ_{n odd} sin(nt)/n
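The square-wave coefficients can be checked directly from the integral formulas (a sketch, assuming sq(t) = 1 on (0, π) and −1 on (−π, 0)): all an vanish and bn = 4/(πn) for odd n, 0 for even n.

```python
import math

# Numerical check of the square-wave Fourier coefficients.
def sq(t):
    # sq(t) = 1 on (0, pi), -1 on (-pi, 0), extended with period 2*pi
    return 1.0 if t % (2 * math.pi) < math.pi else -1.0

def bn(n, samples=100_000):
    # midpoint rule for (1/pi) * integral of sq(t)*sin(n*t) over [-pi, pi]
    h = 2 * math.pi / samples
    total = sum(sq(-math.pi + (k + 0.5) * h) * math.sin(n * (-math.pi + (k + 0.5) * h))
                for k in range(samples))
    return total * h / math.pi

print(bn(1), 4 / math.pi)  # should nearly match
print(bn(2))               # should be near 0
```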
• Suppose f(t) has period 2L; then

    f(t) ∼ a0/2 + Σ_{n=1}^{∞} (an cos(nπt/L) + bn sin(nπt/L)),  where

    a0 = (1/L) ∫_{−L}^{L} f(t) dt
    an = (1/L) ∫_{−L}^{L} f(t) cos(nπt/L) dt
    bn = (1/L) ∫_{−L}^{L} f(t) sin(nπt/L) dt
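A quick check of the period-2L formulas (an illustrative choice, not from the notes): for the odd function f(t) = t on [−L, L], all an vanish and integration by parts gives bn = (−1)^{n+1} · 2L/(nπ).

```python
import math

# Numerical check of the period-2L coefficient formula for f(t) = t, L = 2.
L = 2.0

def bn(n, samples=100_000):
    # midpoint rule for (1/L) * integral of t*sin(n*pi*t/L) over [-L, L]
    h = 2 * L / samples
    total = sum((-L + (k + 0.5) * h) * math.sin(n * math.pi * (-L + (k + 0.5) * h) / L)
                for k in range(samples))
    return total * h / L

print(bn(1), 2 * L / math.pi)        # b1 = 4/pi for L = 2
print(bn(2), -2 * L / (2 * math.pi)) # b2 = -2/pi
```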
6 Green’s Formula
Step and Delta Functions, Convolution
• Unit step function (Heaviside function): u(t) = 0 for t < 0, 1 for t > 0
• Delta function δ(t) = lim_{h→0} qh(t), where qh(t) is a box of width h and area 1
  – δ(t) = 0 for t ≠ 0, ∞ for t = 0
  – Generalized derivative of the step function
• Convolution
  – (f ∗ g)(t) = ∫_{0−}^{t+} f(τ) g(t − τ) dτ
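For ordinary (non-impulsive) signals the limits reduce to 0 and t. A small sketch with the illustrative choice f(t) = t and g(t) = 1 for t ≥ 0, where (f ∗ g)(t) = ∫₀ᵗ τ dτ = t²/2:

```python
# Numerical convolution at a single time t via the midpoint rule.
def convolve_at(f, g, t, steps=10_000):
    h = t / steps
    # midpoint rule on the integral of f(tau)*g(t - tau) over [0, t]
    return sum(f((k + 0.5) * h) * g(t - (k + 0.5) * h) for k in range(steps)) * h

# f(t) = t, g(t) = 1: (f*g)(t) = t**2/2
val = convolve_at(lambda tau: tau, lambda tau: 1.0, 3.0)
print(val, 3.0 ** 2 / 2)  # both approximately 4.5
```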
Green's Formula
• Suppose we have a linear time-invariant system with rest initial conditions:

    p(D)y = f(t)

• Green's formula: y(t) = (f ∗ w)(t), where w(t) is the weight function / unit impulse response, i.e. the solution of p(D)w = δ(t) with rest initial conditions
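A sketch of Green's formula on the first-order system (D + 2)y = f(t) with rest initial conditions (my illustrative choice): the unit impulse response is w(t) = e^{−2t}, and for f(t) = 1 the formula gives y(t) = ∫₀ᵗ w(t − τ) dτ = (1 − e^{−2t})/2.

```python
import math

# Unit impulse response of (D + 2)w = delta(t) with rest IC.
def w(t):
    return math.exp(-2 * t)

# Green's formula y = (f * w)(t) with f identically 1, via the midpoint rule.
def y_green(t, steps=10_000):
    h = t / steps
    return sum(w(t - (k + 0.5) * h) for k in range(steps)) * h

t = 1.5
print(y_green(t), (1 - math.exp(-2 * t)) / 2)  # should nearly match
```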
7 Laplace Transform
• (Lf)(s) = F(s) = ∫_{0−}^{∞} f(t) e^{−st} dt, for all values of s for which the integral converges
  – The integral converges if f(t) is of exponential order
  – The Laplace transform is linear
• Inverse Laplace transform
  – Denoted L^{−1}
  – Can often use a table along with properties of the Laplace transform to find it
  – For more complicated inverse Laplace transforms you generally need to use partial fractions and the Heaviside cover-up method
• Using Laplace transforms to solve ODEs
  – Suppose we have p(D)x = f(t)
  – Take the Laplace transform of both sides and solve algebraically for X(s) = L(x(t))
  – Take the inverse Laplace transform of X(s) to recover the solution: x(t) = L^{−1}(X(s))
  – When using Laplace transforms you must always specify initial conditions
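The transform definition itself can be verified numerically (an illustrative example, not from the notes): L{e^{−t}}(s) = 1/(s + 1), evaluated here at s = 2.

```python
import math

# Numerically approximate the Laplace transform integral, truncated at t = T
# (the tail e**(-(s+1)*T) is negligible for this integrand).
def laplace(f, s, T=40.0, steps=200_000):
    h = T / steps
    return sum(f((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h) for k in range(steps)) * h

val = laplace(lambda t: math.exp(-t), s=2.0)
print(val, 1 / 3)  # should nearly match
```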
8 Linear Systems
• Suppose we have a 2×2 system of differential equations:

    [x′]   [a  b] [x]
    [y′] = [c  d] [y]    ⇔    ẋ = Ax
– Try the solution x = e^{λt} a:

    λ e^{λt} a = e^{λt} A a
    λa = Aa
    λIa = Aa
    (A − λI)a = 0
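The derivation above says λ must be an eigenvalue of A and a an eigenvector. A concrete sketch for the illustrative matrix A = [[2, 1], [1, 2]] (not an example from the notes), whose eigenvalues come from det(A − λI) = (2 − λ)² − 1 = 0:

```python
import math

# Eigenvalues of a 2x2 matrix from the characteristic polynomial
# lambda**2 - tr*lambda + det = 0, for A = [[2, 1], [1, 2]].
a, b, c, d = 2.0, 1.0, 1.0, 2.0
tr, det = a + d, a * d - b * c
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr - disc) / 2, (tr + disc) / 2
print(lam1, lam2)  # 1.0 3.0

# For lambda = 3, (A - 3I)v = 0 gives v = (1, 1); verify A v = lambda v.
v = (1.0, 1.0)
Av = (a * v[0] + b * v[1], c * v[0] + d * v[1])
print(Av)  # (3.0, 3.0), i.e. 3 * v
```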
Appendices
A Complex Numbers
Given a nonzero complex number z = a + bi, we can express the point using polar coordinates: a + bi = r(cos θ + i sin θ), where a = r cos θ and b = r sin θ. Convention dictates that r = |z|, so that r > 0. Notice that there are infinitely many values of θ = arg(z), so we define the principal argument Arg(z) to be the unique θ in the range −π < θ ≤ π.
It is typically easier to work with polar coordinates using Euler's formula:

    e^{iθ} = cos θ + i sin θ

This equation is in fact the definition of e^{iθ}, and we justify it based on its having the same properties as the exponential of a real variable. Namely, it obeys the following:
• Exponential Law: e^{it1} e^{it2} = e^{i(t1 + t2)} (proof)
• Initial Value Problem: d/dt e^{it} = i e^{it}, e^{i·0} = 1 (proof)
• Taylor's Formula: e^{it} = Σ_{n=0}^{∞} (it)^n / n! (proof)
Using Euler's formula, we can then express a complex number z = a + bi as re^{iθ}, where r = √(a² + b²) and θ = arctan(b/a) (adjusted for the quadrant containing z). This provides an easy and insightful way to perform many arithmetic operations. For example:

    multiplication: (r1 e^{iθ1})(r2 e^{iθ2}) = r1 r2 e^{i(θ1 + θ2)}
    conjugation: the conjugate of re^{iθ} is re^{−iθ}
We can also use Euler's formula to easily find nth roots of complex numbers. For a complex number w = re^{iθ}, z is an nth root of w if z^n = w, so

    w^{1/n} = r^{1/n} e^{i(θ + 2kπ)/n},  k = 0, 1, ..., n − 1
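The root formula is easy to exercise with Python's built-in complex numbers (a sketch computing the three cube roots of w = 8, so r = 8, θ = 0, n = 3):

```python
import cmath
import math

# All n-th roots of w via w**(1/n) = r**(1/n) * e**(i*(theta + 2*k*pi)/n).
w = 8
n = 3
r, theta = cmath.polar(w)
roots = [r ** (1 / n) * cmath.exp(1j * (theta + 2 * k * math.pi) / n) for k in range(n)]
for z in roots:
    print(z, z ** n)  # each z**3 returns to 8, up to roundoff
```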
We can extend Euler's formula further by defining the complex function e^{zt} of a real variable t to be the solution of the initial value problem

    d/dt e^{zt} = z e^{zt},  e^{z·0} = 1
As before, this is justified based on its possessing properties similar to the exponential of a real variable. Namely,
• e^{a+ib} = e^a (cos b + i sin b) = e^a e^{ib} for all real numbers a and b (proof)
• e^{z+w} = e^z e^w for complex numbers z and w (proof)
• (e^z)^n = e^{nz} for every complex number z and integer n (proof)
• The Taylor series of e^z is given by

    e^z = 1 + z + z²/2 + z³/6 + ...
B Laplace Table
Function    Transform                               Notes
f(t)        F(s) = ∫_{0−}^{∞} f(t) e^{−st} dt       (Definition)