
# Hao Lian, programmer-poet.

This review of an introduction to differential equations course assumes in various, tiny ways that you've been exposed to a formal class. Otherwise, you probably should read a textbook first.
## 1 FIRST-ORDER LINEAR

When using the separable method, it is easy to lose zeroes. For example, the naïve solution to $y' = x^2(y - 13)$ misses $y(x) = 13$.

Given $a_1(x) y'(x) + a_0(x) y(x) = b(x)$, there are two trivial cases: $a_0(x) = 0$ and $a_0(x) = a_1'(x)$, the latter due to the product rule. We can reduce all such equations to the second case. Writing the equation in standard form $y' + P(x)y = Q(x)$, let $\mu(x)$ be such that $\mu' = \mu P$, implying $\mu = \exp(\int P \, dx)$. Then

$$\mu Q = \mu(y' + P y) = \frac{d(\mu y)}{dx} \implies \mu y = \int \mu Q \, dx + C.$$

**Definition 1.1** (Exact equation). An exact equation on a rectangle $R$ is of the form $M(x, y)\,dx + N(x, y)\,dy = 0$ such that there exists $F$ with $F_x = M$ and $F_y = N$.

**Theorem 1.1.** An equation is exact iff $M_y = N_x$. The solution follows from the properties of a total derivative:

$$M\,dx + N\,dy = 0 \implies F_x\,dx + F_y\,dy = 0 \implies F(x, y) = C.$$
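As a numerical sketch (the equation and numbers here are my own, not from the notes): for $y' + 2y = 4$ with $y(0) = 0$, we have $P = 2$, $Q = 4$, $\mu = e^{2x}$, and the exact solution $y = 2 - 2e^{-2x}$, so the integrating-factor quadrature should reproduce it:

```python
import math

# Hypothetical example (not from the notes): y' + 2y = 4, y(0) = 0.
# Standard form gives P(x) = 2, Q(x) = 4, so mu(x) = exp(2x) and
# mu*y = integral of mu*Q + C, with C fixed by the initial condition.
P, Q, y0 = 2.0, 4.0, 0.0

def mu(x):
    return math.exp(P * x)

def y_formula(x, n=10_000):
    # Trapezoid rule for integral_0^x mu(t)*Q dt.
    h = x / n
    total = 0.5 * (mu(0.0) * Q + mu(x) * Q)
    for i in range(1, n):
        total += mu(i * h) * Q
    integral = total * h
    C = mu(0.0) * y0          # constant chosen so y(0) = y0
    return (integral + C) / mu(x)

def y_exact(x):
    return 2.0 - 2.0 * math.exp(-2.0 * x)

err = max(abs(y_formula(x) - y_exact(x)) for x in (0.5, 1.0, 2.0))
```

The quadrature agrees with the closed form to well under `1e-4`, which is the whole content of the $\mu y = \int \mu Q\,dx + C$ identity.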

## 2 SECOND-ORDER LINEAR

Guessing $\exp(rt)$ for $a y'' + b y' + c y = 0$, the equation resolves to $\exp(rt)(a r^2 + b r + c) = 0$. With two distinct roots $r_1, r_2$ as solutions, by superposition $y(t) = c_1 \exp(r_1 t) + c_2 \exp(r_2 t)$.

**Theorem 2.1** (Existence and uniqueness). Given the universe of reals, there exists a unique solution to second-order linear equations with initial conditions $y(t_0) = Y_0$ and $y'(t_0) = Y_1$, valid for all $t \in \mathbb{R}$.

**Theorem 2.2** (Representation). For $y_1, y_2$ that are linearly independent on $I$ with $t_0 \in I$, there exist $c_1, c_2$ such that $c_1 y_1 + c_2 y_2$ satisfies the IVP on $I$ with the initial conditions at $t_0$.

The proof follows from knowing $W[y_1, y_2] = 0$ iff the two solutions are linearly *dependent*, and some non-obvious algebra.

### 2.1 Working from the characteristic polynomial roots

For two identical roots, use a $t\exp(rt)$ term. For characteristic equations with complex-conjugate roots $\alpha \pm \beta i$, there exist two real solutions: $\exp(\alpha t)\cos\beta t$ and $\exp(\alpha t)\sin\beta t$. Note that complex conjugates only occur if $a, b, c \in \mathbb{R}$. Otherwise, two non-conjugate complex roots may arise.

### 2.2 Non-homogeneous

**Theorem 2.3** (Existence and uniqueness). For $a, b, c, t_0, Y_0, Y_1 \in \mathbb{R}$, with a particular solution $y_p$ on $I$, $t_0 \in I$, and $y_1, y_2$ linearly independent homogeneous solutions, there exists a unique solution to the non-homogeneous IVP.

*Proof.* We use the existence and uniqueness theorem (EUT) for the homogeneous case and the superposition principle to construct the solution $y_g = y_p + c_1 y_1 + c_2 y_2$; therefore, it exists. To prove the solution is unique, assume there exists another solution $\psi$ such that $\psi \neq y_g$. Let $y = y_g - \psi$. Trivially, $y$ satisfies the IVP $a y'' + b y' + c y = 0$ with initial values $y(t_0) = 0$ and $y'(t_0) = 0$. Note that this is a homogeneous equation. However, $f(t) = 0$ is already a unique solution to this IVP. By the homogeneous EUT, it must be the same solution as $y$, implying $y = y_g - \psi = 0$, or $y_g = \psi$, a contradiction. ∎

### 2.3 Miscellaneous

Let

$$v_1 = \int \frac{-g\, y_2}{a\, W[y_1, y_2]}\, dt \qquad v_2 = \int \frac{g\, y_1}{a\, W[y_1, y_2]}\, dt$$

where $a y'' + b y' + c y = g$. Then $y_p = v_1 y_1 + v_2 y_2$. This is the method of variation of parameters.

Linear equations always obey the superposition principle, distinguishing them from non-linear equations. Non-linear equations also do not guarantee uniqueness of solutions.
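A minimal numerical sketch of the characteristic-root recipe (the equation and initial conditions are my own example): $y'' - 3y' + 2y = 0$ has roots $r = 1, 2$, and fitting $c_1, c_2$ to $y(0) = 1$, $y'(0) = 0$ gives $y = 2e^t - e^{2t}$, which we can check against the ODE with finite differences:

```python
import math

# Hypothetical example: y'' - 3y' + 2y = 0 with y(0) = 1, y'(0) = 0.
a, b, c = 1.0, -3.0, 2.0

# Roots of the characteristic polynomial a r^2 + b r + c = 0.
disc = math.sqrt(b * b - 4 * a * c)
r1, r2 = (-b - disc) / (2 * a), (-b + disc) / (2 * a)   # 1.0 and 2.0

# Solve c1 + c2 = Y0, c1*r1 + c2*r2 = Y1 by Cramer's rule.
Y0, Y1 = 1.0, 0.0
det = r2 - r1
c1 = (Y0 * r2 - Y1) / det
c2 = (Y1 - Y0 * r1) / det

def y(t):
    return c1 * math.exp(r1 * t) + c2 * math.exp(r2 * t)

# Central-difference residual of a y'' + b y' + c y at a sample point.
h, t = 1e-4, 0.7
ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
yp = (y(t + h) - y(t - h)) / (2 * h)
residual = a * ypp + b * yp + c * y(t)
```

The residual is numerically zero, confirming that the superposition of $\exp(r_i t)$ terms really does solve the equation.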
## 3 HIGHER-ORDER LINEAR

We can easily generalize the homogeneous linear equation to higher orders using the characteristic equation.

We can also generalize the method of undetermined coefficients by finding a linear differential operator $A$ (an annihilator) such that $A[f](x) = 0$. So suppose we have a linear equation in operator form $L[y](x) = f(x)$. Then $AL[y](x) = A[f](x) = 0$. We have reduced the non-homogeneous equation to homogeneous form. Boo-ya.

Constructing an annihilator is like constructing a guess using the method of undetermined coefficients: memorize a brief table. If $x^k \exp(rx)$, then $(D - r)^m$ where $m$ is any integer such that $m > k$. If $\sin \beta x$ or $\cos \beta x$, then $D^2 + \beta^2$. If $x^k \exp(rx) \sin \beta x$, then $[(D - r)^2 + \beta^2]^m$. For example,

$$0 = (D - 1)^3 (D^2 + 1)(D + 1)[y]$$
$$y = (C_1 e^x + C_2 x e^x + C_3 x^2 e^x) + (C_4 \sin x + C_5 \cos x) + C_6 e^{-x}.$$

## 4 MATRICES

The following statements are equivalent: (1) $A$ is singular (has no inverse); (2) $|A| = 0$; (3) $A\mathbf{x} = \mathbf{0}$ has non-trivial solutions, i.e. where $\mathbf{x} \neq \mathbf{0}$; and (4) the rows of $A$ are linearly dependent.

If $A$ is singular, $A\mathbf{x} = \mathbf{0}$ has infinitely many solutions, all a scalar multiple of some $\mathbf{x}_0 \neq \mathbf{0}$. Furthermore, $A\mathbf{x} = \mathbf{b}$ either has no solutions or infinitely many of the form $\mathbf{x} = \mathbf{x}_p + \mathbf{x}_h$.
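A quick numerical sanity check of the annihilator table (my own example, not from the notes): $D^2 + \beta^2$ should send $\sin \beta x$ to zero, tested here with central differences for $\beta = 2$:

```python
import math

# The table says (D^2 + beta^2) annihilates sin(beta*x).
# Check (D^2 + 4)[sin 2x] ~ 0 numerically with central differences.
beta = 2.0
f = lambda x: math.sin(beta * x)

def annihilated(x, h=1e-4):
    second = (f(x + h) - 2 * f(x) + f(x - h)) / h**2   # D^2 f
    return second + beta**2 * f(x)

worst = max(abs(annihilated(x)) for x in (0.3, 1.1, 2.5))
```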


Another property, which is significant in light of the fact that matrix multiplication is not commutative:

$$\frac{d}{dt}\, \mathbf{A}\mathbf{B} = \mathbf{A}\frac{d\mathbf{B}}{dt} + \frac{d\mathbf{A}}{dt}\mathbf{B}.$$

**Theorem 4.1** (Existence and uniqueness). If $\mathbf{A}, \mathbf{f}$ are continuous on an open interval $I$, $t_0 \in I$, and $\mathbf{x}_0$ is given, then there exists a unique $\mathbf{x}(t)$ on $I$ solving the IVP $\mathbf{x}'(t) = \mathbf{A}\mathbf{x} + \mathbf{f}$.
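A small numerical check of the product rule above, using $2\times 2$ matrices of my own choosing ($\mathbf{A}(t)$ and $\mathbf{B}(t)$ below are arbitrary, not from the notes):

```python
# Verify d/dt (AB) = A (dB/dt) + (dA/dt) B for hand-picked 2x2 matrices
# A(t) = [[t, 1], [0, t^2]] and B(t) = [[1, t], [t, 1]].
def A(t):
    return [[t, 1.0], [0.0, t * t]]

def B(t):
    return [[1.0, t], [t, 1.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def deriv(F, t, h=1e-6):
    # Entrywise central difference of a matrix-valued function.
    P, M = F(t + h), F(t - h)
    return [[(P[i][j] - M[i][j]) / (2 * h) for j in range(2)] for i in range(2)]

t = 1.3
lhs = deriv(lambda s: matmul(A(s), B(s)), t)
rhs = madd(matmul(A(t), deriv(B, t)), matmul(deriv(A, t), B(t)))
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
```

Note the order of the factors matters: swapping them would break the identity precisely because matrix multiplication is not commutative.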
**Theorem 4.2.** Let $\mathbf{x}_i$ be solutions to $\mathbf{x}' = \mathbf{A}\mathbf{x}$ on $I$. If they are linearly independent, then the Wronskian is never zero on $I$.

*Proof.* Suppose for a contradiction that $W(t_0) = 0$ for some $t_0$. Then $\mathbf{X}(t_0)\mathbf{c} = \mathbf{0}$ for some $\mathbf{c} \neq \mathbf{0}$, because the columns of a matrix become linearly dependent when its determinant is zero, per the theorem above. The function $\mathbf{X}(t)\mathbf{c}$ then solves the system with zero initial data at $t_0$; another solution with that data is $\mathbf{z}(t) = \mathbf{0}$. These solutions are identical on $I$ by the EUT, implying $\mathbf{X}(t)\mathbf{c} = \mathbf{0}$ for all $t$. This implies the $\mathbf{x}_i$ are linearly dependent, contradicting the initial assumption. ∎

Two implications: $W(t)$ is either always zero or never zero on $I$, and the set of solutions is linearly independent iff $W(t)$ is never zero, whence the representation theorem:

**Theorem 4.3** (Representation). Let $\mathbf{x}_i$ be linearly independent solutions to the homogeneous system $\mathbf{x}' = \mathbf{A}\mathbf{x}$ on $I$. Then every solution is of the form $\mathbf{x}(t) = \mathbf{X}(t)\mathbf{c}$. Denote $\mathbf{X}$ as the fundamental matrix for the fundamental solution set.
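To illustrate Theorem 4.2 concretely (system and solutions chosen by me, not from the notes): for $\mathbf{A} = \begin{pmatrix}0&1\\-2&-3\end{pmatrix}$, the solutions $\mathbf{x}_1 = e^{-t}(1,-1)$ and $\mathbf{x}_2 = e^{-2t}(1,-2)$ are linearly independent and their Wronskian works out to $-e^{-3t}$, never zero:

```python
import math

# A = [[0, 1], [-2, -3]] has eigenpairs (-1, (1, -1)) and (-2, (1, -2)).
def x1(t):
    return (math.exp(-t), -math.exp(-t))

def x2(t):
    return (math.exp(-2 * t), -2 * math.exp(-2 * t))

def wronskian(t):
    a, c = x1(t)
    b, d = x2(t)
    return a * d - b * c          # det of the 2x2 solution matrix

values = [wronskian(t) for t in (-1.0, 0.0, 2.0, 5.0)]
never_zero = all(abs(w) > 0 for w in values)
# W(t) = -exp(-3t): the same sign everywhere, never crossing zero.
```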
### 4.1 Non-homogeneous

**Theorem 4.4** (Superposition principle). Let $L[\mathbf{x}] := \mathbf{x}' - \mathbf{A}\mathbf{x}$. If $\mathbf{x}_1, \mathbf{x}_2$ are solutions to $L[\mathbf{x}] = \mathbf{g}_1$ and $L[\mathbf{x}] = \mathbf{g}_2$, then $c_1\mathbf{x}_1 + c_2\mathbf{x}_2$ is a solution to $L[\mathbf{x}] = c_1\mathbf{g}_1 + c_2\mathbf{g}_2$.

**Theorem 4.5** (Representation). Let $\mathbf{x}_p$ solve $\mathbf{x}' = \mathbf{A}\mathbf{x} + \mathbf{f}$ on $I$ and $\mathbf{X}$ be the fundamental matrix for the homogeneous system $\mathbf{x}' = \mathbf{A}\mathbf{x}$. Then every solution on $I$ is of the form $\mathbf{x} = \mathbf{x}_p + \mathbf{X}\mathbf{c}$ by the superposition principle. Denote this as the general solution.
### 4.2 Eigenapalooza

For the system $\mathbf{x}' = \mathbf{A}\mathbf{x}$, guess $\mathbf{x} = \exp(rt)\mathbf{u}$, implying $r\exp(rt)\mathbf{u} = \exp(rt)\mathbf{A}\mathbf{u}$, or $(\mathbf{A} - r\mathbf{I})\mathbf{u} = \mathbf{0}$. Eigenvalues are numbers $r$ such that that equation has a nontrivial ($\mathbf{u} \neq \mathbf{0}$) solution; the corresponding $\mathbf{u}$ are eigenvectors. Nontrivial solutions, from a previous theorem, occur iff $|\mathbf{A} - r\mathbf{I}| = 0$. The determinant is the characteristic polynomial, and this is the characteristic equation.

**Theorem 4.6.** $\{\exp(r_i t)\mathbf{u}_i\}$ is a fundamental set for linearly independent $\{\mathbf{u}_i\}$.

*Proof.* The Wronskian over the set of those solutions is $\exp(t\sum r_i)\,|\mathbf{U}|$ where $\mathbf{U} = [\mathbf{u}_1, \ldots, \mathbf{u}_n]$. The determinant is never zero because the eigenvectors are linearly independent; therefore, the Wronskian is never zero for all $t$. ∎

**Theorem 4.7.** If $r_1, r_2$ are distinct eigenvalues, then $\mathbf{u}_1, \mathbf{u}_2$ are linearly independent.

*Proof.* Suppose $\mathbf{u}_1 = c\mathbf{u}_2$. Then $(r_1 - r_2)\mathbf{u}_1 = \mathbf{0}$, implying for a contradiction that $r_1 = r_2$, because we know $\mathbf{u}_1 \neq \mathbf{0}$. ∎

As a corollary, $n$ distinct eigenvalues and eigenvectors imply a fundamental solution set for $\mathbf{x}' = \mathbf{A}\mathbf{x}$.

The conjugates for complex eigenvalues and eigenvectors create two linearly independent real vector solutions: for eigenvalue $\alpha + \beta i$ with eigenvector $\mathbf{a} + i\mathbf{b}$,

$$\exp(\alpha t)(\mathbf{a}\cos\beta t - \mathbf{b}\sin\beta t), \qquad \exp(\alpha t)(\mathbf{a}\sin\beta t + \mathbf{b}\cos\beta t).$$
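A numeric sketch of the eigenvalue guess (the matrix is mine, not the notes'): for $\mathbf{A} = \begin{pmatrix}1&2\\2&1\end{pmatrix}$ with eigenpair $r = 3$, $\mathbf{u} = (1, 1)$, the candidate $\mathbf{x}(t) = e^{3t}(1,1)$ should satisfy $\mathbf{x}' = \mathbf{A}\mathbf{x}$:

```python
import math

# A = [[1, 2], [2, 1]]: characteristic polynomial (1-r)^2 - 4 = 0,
# so r = 3 with eigenvector u = (1, 1), and r = -1 with u = (1, -1).
A = [[1.0, 2.0], [2.0, 1.0]]
r, u = 3.0, (1.0, 1.0)

# (A - rI)u should vanish.
res_eig = [A[i][0] * u[0] + A[i][1] * u[1] - r * u[i] for i in range(2)]

def x(t):
    return [math.exp(r * t) * u[0], math.exp(r * t) * u[1]]

# Compare x'(t) (central difference) against A x(t) at a sample time.
t, h = 0.4, 1e-6
xp = [(x(t + h)[i] - x(t - h)[i]) / (2 * h) for i in range(2)]
Ax = [A[i][0] * x(t)[0] + A[i][1] * x(t)[1] for i in range(2)]
err = max(abs(xp[i] - Ax[i]) for i in range(2))
```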
### 4.3 Non-homogeneous

The method of undetermined coefficients works similarly. However, variation of parameters yields the equation

$$\mathbf{x} = \mathbf{X}\mathbf{c} + \mathbf{X}\int \mathbf{X}^{-1}\mathbf{f}\,dt.$$

### 4.4 Matrix exponential

**Definition 4.1** (Matrix exponential).

$$\exp(\mathbf{A}t) = \mathbf{I} + \mathbf{A}t + \mathbf{A}^2\frac{t^2}{2} + \cdots.$$

The inverse, $\exp(\mathbf{A}t)^{-1}$, is $\exp(-\mathbf{A}t)$. The derivative is $\mathbf{A}\exp(\mathbf{A}t)$, by virtue of differentiating that pseudo-Taylor-series pseudo-polynomial. Therefore, the exponential is a solution to $\mathbf{X}' = \mathbf{A}\mathbf{X}$. Because $\exp(\mathbf{A}t)$ is invertible, its columns are linearly independent solutions to the system. The general solution is then $\mathbf{x}(t) = \exp(\mathbf{A}t)\mathbf{c}$, and $\exp(\mathbf{A}t)$ is the fundamental matrix for the system.

**Definition 4.2** (Nilpotent). A matrix $\mathbf{B}$ is nilpotent iff there exists $k > 0$ such that $\mathbf{B}^k = \mathbf{0}$. In such a case, the exponential has a finite number of terms.

If $\mathbf{A} - r\mathbf{I}$ is nilpotent, then there is a finite expansion:

$$e^{\mathbf{A}t} = e^{rt} e^{(\mathbf{A} - r\mathbf{I})t} = e^{rt}\left[\mathbf{I} + (\mathbf{A} - r\mathbf{I})t + \cdots + \frac{(\mathbf{A} - r\mathbf{I})^{n-1} t^{n-1}}{(n-1)!}\right]. \tag{1}$$

### 4.5 Generalized eigenvectors

With repeated eigenvalues, we need a strategy to find additional eigenvectors.

**Lemma.** If $\mathbf{X}, \mathbf{Y}$ are fundamental matrices for the system $\mathbf{x}' = \mathbf{A}\mathbf{x}$, then there exists a constant matrix $\mathbf{C}$ such that $\mathbf{X}(t) = \mathbf{Y}(t)\mathbf{C}$.

What is the relationship between $\exp(\mathbf{A}t)$ and a given fundamental matrix $\mathbf{X}$? By the lemma, $\exp(\mathbf{A}t) = \mathbf{X}(t)\mathbf{C}$ for some $\mathbf{C}$. Plugging in $t = 0$, we find $\mathbf{I} = \mathbf{X}(0)\mathbf{C}$. Therefore, $\exp(\mathbf{A}t) = \mathbf{X}(t)\mathbf{X}^{-1}(0)$.

So, to obtain the fundamental solution matrix $\mathbf{X}$, require that its columns be of the form $\exp(\mathbf{A}t)\mathbf{u}$, which can be decomposed by (1). Therefore, we need $n$ vectors $\mathbf{u}$ whose calculations are feasible. To do so, find $p(r) = |\mathbf{A} - r\mathbf{I}|$ and therefore the eigenvalues. For each $r_i$ with multiplicity $m_i$, find $m_i$ linearly independent generalized eigenvectors.
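The finite expansion (1) can be checked on a small defective matrix of my choosing: for $\mathbf{A} = \begin{pmatrix}2&1\\0&2\end{pmatrix}$, $\mathbf{A} - 2\mathbf{I}$ squares to zero, so $e^{\mathbf{A}t} = e^{2t}(\mathbf{I} + (\mathbf{A} - 2\mathbf{I})t)$, which we compare against a long truncation of the defining series:

```python
import math

# A = [[2, 1], [0, 2]]; N = A - 2I is nilpotent with N^2 = 0, so the
# expansion (1) stops after two terms: exp(At) = e^{2t} (I + N t).
t = 0.9
A = [[2.0, 1.0], [0.0, 2.0]]
closed = [[math.exp(2 * t), t * math.exp(2 * t)],
          [0.0, math.exp(2 * t)]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Truncated defining series I + At + A^2 t^2/2! + ... (30 terms).
series = [[1.0, 0.0], [0.0, 1.0]]
term = [[1.0, 0.0], [0.0, 1.0]]
for n in range(1, 30):
    term = matmul(term, [[A[i][j] * t / n for j in range(2)] for i in range(2)])
    series = [[series[i][j] + term[i][j] for j in range(2)] for i in range(2)]

err = max(abs(series[i][j] - closed[i][j]) for i in range(2) for j in range(2))
```

The two agree to machine precision, which is the practical payoff of nilpotency: the infinite series collapses to a polynomial in $t$.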


(A nonzero vector $\mathbf{u}$ that satisfies $(\mathbf{A} - r\mathbf{I})^k\mathbf{u} = \mathbf{0}$ for some positive $k$ is a generalized eigenvector.) Compute $\mathbf{x} = \exp(rt)\exp(t(\mathbf{A} - r\mathbf{I}))\mathbf{u}$, using the Taylor expansion for the last exponential. It becomes one of the $n$ linearly independent solutions to the system. The expansion will be finite because $(\mathbf{A} - r\mathbf{I})^i\mathbf{u}_i = \mathbf{0}$ for some $i \in [1, m_i]$. The hardest part is finding such a $\mathbf{u}_i$ each time.
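A sketch of the recipe with a hypothetical defective matrix: $\mathbf{A} = \begin{pmatrix}1&1\\0&1\end{pmatrix}$ has the single eigenvalue $r = 1$ with multiplicity 2; $\mathbf{u} = (0, 1)$ is a generalized eigenvector ($(\mathbf{A} - \mathbf{I})^2\mathbf{u} = \mathbf{0}$ but $(\mathbf{A} - \mathbf{I})\mathbf{u} \neq \mathbf{0}$), and the finite expansion gives the solution $\mathbf{x}(t) = e^{t}(\mathbf{u} + t(\mathbf{A} - \mathbf{I})\mathbf{u}) = e^t(t, 1)$:

```python
import math

# A = [[1, 1], [0, 1]], eigenvalue r = 1 (multiplicity 2).
# u = (0, 1): (A - I)u = (1, 0) != 0 but (A - I)^2 u = 0,
# so x(t) = e^t (u + t (A - I) u) = e^t (t, 1) solves x' = Ax.
A = [[1.0, 1.0], [0.0, 1.0]]

def x(t):
    return [t * math.exp(t), math.exp(t)]

# Central-difference check that x' = Ax at a sample time.
t, h = 0.6, 1e-6
xp = [(x(t + h)[i] - x(t - h)[i]) / (2 * h) for i in range(2)]
Ax = [A[i][0] * x(t)[0] + A[i][1] * x(t)[1] for i in range(2)]
err = max(abs(xp[i] - Ax[i]) for i in range(2))
```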

## 5 LAPLACE TRANSFORM

**Definition 5.1** (Laplace transform). If $f$ is defined on $[0, \infty)$, then

$$\mathcal{L}\{f\}(s) = \int_0^\infty e^{-st} f(t)\,dt.$$
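The definition can be probed numerically (my own sketch, not from the notes): the known transform $\mathcal{L}\{t\}(s) = 1/s^2$ means quadrature at $s = 2$ should return about $0.25$:

```python
import math

# Approximate L{f}(s) = integral_0^inf e^{-st} f(t) dt with a trapezoid
# rule on [0, T]; the e^{-st} factor makes the truncated tail negligible.
def laplace(f, s, T=40.0, n=200_000):
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for i in range(1, n):
        t = i * h
        total += math.exp(-s * t) * f(t)
    return total * h

approx = laplace(lambda t: t, s=2.0)   # exact value is 1/2^2 = 0.25
```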

Its power lies in its ability (usually) to replace differential equations with algebraic equations by recursively applying $\mathcal{L}\{x'\} = s\mathcal{L}\{x\} - x(0)$.

**Definition 5.2.** A function is piecewise continuous on the interval $[a, b]$ iff it is continuous on that interval except at a finite number of points at which a jump discontinuity occurs.

The jump in $f(x) = 1/x$ at $x = 0$ would, for example, count as an "infinite" jump and not a jump discontinuity.

**Theorem 5.1.** Roughly, $\mathcal{L}\{f\}$ exists for $s > \alpha$ if $f$ does not grow faster than an exponential function of some positive order $\alpha$ and $f$ is piecewise continuous on $[0, \infty)$.

The proof follows from expanding out the exponential-order test and then performing an integral comparison test.

For $s > 0$: $1 \mapsto 1/s$, $t^n \mapsto n!/s^{n+1}$, $\sin bt \mapsto b/(s^2 + b^2)$, and $\cos bt \mapsto s/(s^2 + b^2)$. For $s > a$: $e^{at} \mapsto 1/(s - a)$ and $e^{at} t^n \mapsto n!/(s - a)^{n+1}$. Finally, $f^{(n)} \mapsto s^n F(s) - s^{n-1} f(0) - \cdots - f^{(n-1)}(0)$.

N.B.

$$\mathcal{L}\{e^{at} f(t)\}(s) = F(s - a)$$
$$\mathcal{L}\{t^n f(t)\}(s) = (-1)^n F^{(n)}(s).$$
### 5.1 Inverse Laplace

$\mathcal{L}^{-1}$, like $\mathcal{L}$, is a linear operator, implying $\mathcal{L}^{-1}\{f_1 + f_2\} = \mathcal{L}^{-1}\{f_1\} + \mathcal{L}^{-1}\{f_2\}$ and $\mathcal{L}^{-1}\{cf\} = c\,\mathcal{L}^{-1}\{f\}$. To take the inverse, you basically consult the table. For rational fractions, partial fraction decomposition is needed with a small twist:

$$\frac{2s^2 + 10s}{s^2 - 2s + 5} \implies \frac{A(s - 1) + 2B}{(s - 1)^2 + 2^2}.$$

Completing the square is necessary to remove any $cs^n$ terms. Then those factors have to go into the numerator so the inverse Laplace rules for $\sin$ and $\cos$ work correctly.

For coefficients linear in $t$, the transform turns these non-constant-coefficient equations into first-order linear equations:

$$y'' + 2t y' - 4y = 1 \quad \text{with } y(0) = y'(0) = 0$$

$$Y' + \left(\frac{3}{s} - \frac{s}{2}\right) Y = -\frac{1}{2s^2} \implies \frac{d(\mu Y)}{ds} = -\frac{s}{2}\, e^{-s^2/4} \text{ with } \mu = s^3 e^{-s^2/4} \implies$$

$$Y(s) = \frac{1}{s^3} + C\,\frac{e^{s^2/4}}{s^3}. \tag{2}$$

**Theorem 5.2.** If $f(t)$ is piecewise continuous on $[0, \infty)$ and of exponential order, then $\lim_{s\to\infty} \mathcal{L}\{f\}(s) = 0$.

How to determine $C$? Using the above theorem, we take the limit of the solution (2) as $s \to \infty$ and set it equal to zero, finding easily that $C = 0$. Therefore, with $C$ in hand, we can now find $y$. From $Y(s) = 1/s^3$, it follows that $y(t) = t^2/2$. Magic.

### 5.2 Discontinuous and periodic functions

**Definition 5.3** (Heaviside step function).

$$u(t) = \begin{cases} 0 & t < 0 \\ 1 & t > 0. \end{cases}$$

We can then express any piecewise function in terms of the step function for easier transformation. For example,

$$f(t) = \begin{cases} 3 & t < 2 \\ 1 & 2 < t < 5 \\ t & 5 < t < 8 \\ t^2/10 & 8 < t \end{cases}$$

$$f(t) = 3 + (1 - 3)\,u(t - 2) + (t - 1)\,u(t - 5) + (t^2/10 - t)\,u(t - 8).$$

Properties:

$$\mathcal{L}\{u(t - a)\}(s) = e^{-as}/s.$$

Also, $\mathcal{L}^{-1}\{e^{-as} F(s)\}(t) = f(t - a)\,u(t - a)$.

The solution to a constant-coefficient linear second-order differential equation with a stepped non-homogeneous expression can be, magically, a continuous function. However, its second derivative is instead discontinuous.
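The step-function rewrite of the piecewise example above can be verified directly (the sample points below are mine):

```python
# Heaviside step and the rewritten f from the example above:
# f(t) = 3 + (1-3)u(t-2) + (t-1)u(t-5) + (t^2/10 - t)u(t-8).
def u(t):
    return 1.0 if t > 0 else 0.0

def f_steps(t):
    return (3.0 + (1.0 - 3.0) * u(t - 2.0)
            + (t - 1.0) * u(t - 5.0)
            + (t * t / 10.0 - t) * u(t - 8.0))

def f_piecewise(t):
    if t < 2.0:
        return 3.0
    if t < 5.0:
        return 1.0
    if t < 8.0:
        return t
    return t * t / 10.0

samples = [1.0, 3.0, 6.0, 9.0]
agree = all(abs(f_steps(t) - f_piecewise(t)) < 1e-12 for t in samples)
```

Each step term switches on exactly the *difference* between the new piece and the previous one, which is why the coefficients are $(1 - 3)$, $(t - 1)$, and $(t^2/10 - t)$.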