# CHAPTER 6

Linear Diﬀerential Systems
6.1 Higher-Order Linear Diﬀerential Equations
This section is a continuation of Chapter 3. As you will see, all of the “theory” that we
developed for second-order linear diﬀerential equations carries over, essentially verbatim, to
linear diﬀerential equations of order greater than two.
Recall that a ﬁrst order, linear diﬀerential equation is an equation which can be written
in the form
y

+ p(x)y = q(x)
where p and q are continuous functions on some interval I. A second order, linear
diﬀerential equation has an analogous form.
y

+ p(x)y

+ q(x)y = f(x)
where p, q, and f are continuous functions on some interval I.
In general, an n
th
-order linear diﬀerential equation is an equation that can be written
in the form
y
(n)
+ p
n−1
(x)y
(n−1)
+ p
n−2
(x)y
(n−2)
+· · · + p
1
(x)y

+ p
0
(x)y = f(x) (L)
where p
0
, p
1
, . . . , p
n−1
, and f are continuous functions on some interval I. As before,
the functions p
0
, p
1
, . . . , p
n−1
are called the coeﬃcients, and f is called the forcing
function or the nonhomogeneous term.
Equation (L) is homogeneous if the function f on the right side is 0 for all x ∈ I. In
this case, equation (L) becomes
y
(n)
+ p
n−1
(x)y
(n−1)
+ p
n−2
(x)y
(n−2)
+· · · + p
1
(x)y

+ p
0
(x)y = 0 (H)
Equation (L) is nonhomogeneous if f is not the zero function on I, i.e., (L) is nonhomoge-
neous if f(x) = 0 for some x ∈ I. As in the case of second order linear equations, almost
all of our attention will be focused on homogeneous equations.
Remarks on “Linear.” Intuitively, an n
th
-order diﬀerential equation is linear if y and
its derivatives appear in the equation with exponent 1 only, and there are no so-called
”cross-product” terms, y y

, y y

, y

y

, etc.
If we set L[y] = y
(n)
+p
n−1
y
(n−1)
+· · · +p
1
(x)y

+p
0
(x)y, then we can view L as an
“operator” that transforms an n-times diﬀerentiable function y = y(x) into the continuous
function
L[y(x)] = y
(n)
(x) + p
n−1
(x)y
(n−1)
(x) +· · · + p
1
(x)y

(x) + p
0
(x)y(x).
251
It is easy to check that, for any two n-times diﬀerentiable functions y
1
(x) and y
2
(x),
L[y
1
(x) + y
2
(x)] = L[y
1
(x)] + L[y
2
(x)]
and, for any n-times diﬀerentiable function y and any constant c,
L[cy(x)] = cL[y(x)].
Therefore, as introduced in Section 2.1, L is a linear diﬀerential operator. This is the real
reason that equation (L) is said to be a linear diﬀerential equation.
THEOREM 1. (Existence and Uniqueness Theorem) Given the n
th
- order linear
equation (L). Let a be any point on the interval I, and let α
0
, α
1
, . . . , α
n−1
be any n
real numbers. Then the initial-value problem
y
(n)
+ p
n−1
(x)y
(n−1)
+ p
n−2
(x)y
(n−2)
+ · · · + p
1
(x)y

+ p
0
(x)y = f(x);
y(a) = α
0
, y

(a) = α
1
, . . . , y
(n−1)
(a) = α
n−1
has a unique solution.
Remark: We can solve any ﬁrst order linear diﬀerential equation, see Section 2.1. In
contrast, there is no general method for solving second or higher order linear diﬀerential
equations. However, as we saw in our study of second order equations, there are methods
for solving certain special types of higher order linear equations and we shall look at these
later in this section.
Homogeneous Equations
y
(n)
+ p
n−1
(x)y
(n−1)
+ p
n−2
(x)y
(n−2)
+ · · · + p
1
(x)y

+ p
0
(x)y = 0. (H)
Note ﬁrst that the zero function, y(x) = 0 for all x ∈ I, (also denoted by y ≡ 0) is a
solution of (H). As before, this solution is called the trivial solution . Obviously, our main
interest is in ﬁnding nontrivial solutions.
We now establish some essential facts about homogeneous equations. The proofs are
identical to those given in Section 3.2
THEOREM 2. If y = y(x) is a solution of (H) and if c is any real number, then
u(x) = cy(x) is also a solution of (H).
Any constant multiple of a solution of (H) is also a solution of (H).
THEOREM 3. If y = y
1
(x) and y = y
2
(x) are any two solutions of (H), then
u(x) = y
1
(x) + y
2
(x) is also a solution of (H).
252
The sum of any two solutions of (H) is also a solution of (H).
The general theorem, which combines and extends Theorems 1 and 2, is:
THEOREM 4. If y = y
1
(x), y = y
2
(x), . . . , y = y
k
(x) are solutions of (H), and if
c
1
, c
2
, . . . , c
k
are any k real numbers, then
y(x) = c
1
y
1
(x) + c
2
y
2
(x) +· · · + c
k
y
k
(x)
is also a solution of (H).
Any linear combination of solutions of (H) is also a solution of (H).
Note that if k = n in the linear combination above, then the equation
y(x) = c
1
y
1
(x) + c
2
y
2
(x) +· · · + c
n
y
n
(x) (1)
has the form of a general solution of equation (H). So the question is: If y
1
, y
2
, . . . , y
n
are solutions of (H), is the expression (1) the general solution of (H)? That is, can every
solution of (H) be written as a linear combination of y
1
, y
2
, . . . , y
n
? It turns out that
(1) may or not be the general solution; it depends on the relation between the solutions
y
1
, y
2
, . . . , y
n
.
Suppose that y = y
1
(x), y = y
2
(x), . . . , y = y
n
(x) are solutions of (H). Under what
conditions is (1) the general solution of (H)?
Let u = u(x) be any solution of (H) and choose any point a ∈ I. Suppose that
α
0
= u(a), α
1
= u

(a), . . . , α
n−1
= u
(n−1)
(a).
Then u is a member of the n-parameter family (1) if and only if there are values for
c
1
c
2
, . . . , c
n
such that
c
1
y
1
(a) + c
2
y
2
(a) + · · · + c
n
y
n
(a) = α
0
c
1
y

1
(a) + c
2
y

2
(a) + · · · + c
n
y

n
(a) = α
1
c
1
y

1
(a) + c
2
y

2
(a) + · · · + c
n
y

n
(a) = α
1
.
.
.
.
.
.
c
1
y
(n−1)
1
(a) + c
2
y
(n−1)
2
(a) + · · · + c
n
y
(n−1)
n
(a) = α
n−1
According to Cramer’s rule, we are guaranteed that this pair of equations has a solution
c
1
, c
2
, . . . , c
n
if
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
y
1
(a) y
2
(a) . . . y
n
(a)
y

1
(a) y

2
(a) . . . y

n
(a)
y

1
(a) y

2
(a) . . . y

n
(a)
.
.
.
.
.
.
.
.
.
y
(n−1)
1
(a) y
(n−1)
2
(a) . . . y
(n−1)
n
(a)
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
= 0.
253
Since a was chosen to be any point on I, we conclude that (1) is the general solution of
(H) if and only if
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
y
1
(x) y
2
(x) . . . y
n
(x)
y

1
(x) y

2
(x) . . . y

n
(x)
y

1
(x) y

2
(x) . . . y

n
(x)
.
.
.
.
.
.
.
.
.
y
(n−1)
1
(x) y
(n−1)
2
(x) . . . y
(n−1)
n
(x)
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
= 0 for all x ∈ I. (2)
As you know, this determinant is called the Wronskian of the solutions y
1
, y
2
, . . . , y
n
.
THEOREM 5. Let y = y
1
(x), y = y
2
(x), . . . , y = y
n
(x) be solutions of equation (H),
and let W(x) be their Wronskian. Exactly one of the following holds:
(i) W(x) = 0 for all x ∈ I and y
1
, y
2
, . . . , y
n
are linearly dependent.
(ii) W(x) = 0 for all x ∈ I which implies that y
1
, y
2
, . . . , y
n
are linearly independent
and
y(x) = c
1
y
1
(x) + c
2
y
2
(x) + · · · + c
n
y
n
(x)
is the general solution of (H).
Example 1. (a) The functions y
1
(x) = x, y
2
(x) = x
2
and y
3
(x) = x
3
are each solutions
of
y

3
x
y

+
6
x
2
y

6
x
3
y = 0, x ∈ I = (0, ∞). (verify)
Their Wronskian is:
W(x) =
¸
¸
¸
¸
¸
¸
¸
x x
2
x
3
1 2x 3x
2
0 2 6x
¸
¸
¸
¸
¸
¸
¸
= 2x
3
= 0 on I.
The general solution of the diﬀerential equation is y = c
1
x + c
2
x
2
+ c
3
x
3
.
(b) The functions y
1
(x) = e
x
, y
2
(x) = e
2x
and y
3
(x) = e
3x
are each solutions of
y

− 6y

+ 11y

− 6y = 0, x ∈ I = (−∞, ∞). (verify)
Their Wronskian is:
W(x) =
¸
¸
¸
¸
¸
¸
¸
e
x
e
2x
e
3x
e
x
2e
2x
3e
3x
e
x
4e
2x
9e
3x
¸
¸
¸
¸
¸
¸
¸
= 2e
6x
= 0 on I.
The general solution of the diﬀerential equation is y = c
1
e
x
+ c
2
e
2x
+ c
3
e
3x
.
DEFINITION 1. (Fundamental Set) A set of n linearly independent solutions y =
y
1
(x), y = y
2
(x), . . . , y = y
n
(x) of (H) is called a fundamental set of solutions.
254
A set of solutions y
1
, y
2
, . . . , y
n
of (H) is a fundamental set if and only if
W[y
1
, y
2
, . . . , y
n
](x) = 0 for all x ∈ I.
Homogeneous Equations with Constant Coeﬃcients
We have emphasized that there are no general methods for solving second or higher or-
der linear diﬀerential equations. However, there are some special cases for which solution
methods do exist. Here we consider such a case, linear equations with constant coeﬃcients.
We’ll look ﬁrst at homogeneous equations.
An n
th
-order linear homogeneous diﬀerential equation with constant coeﬃcients is an
equation which can be written in the form
y
(n)
+ a
n−1
y
(n−1)
+ a
n−2
y
(n−2)
+ · · · + a
1
y

+ a
0
y = 0 (3)
where a
0
, a
1
, . . . , a
n−1
are real numbers.
We have seen that ﬁrst- and second-order equations with constant coeﬃcients have
solutions of the form y = e
rx
. Thus, we’ll look for solutions of (3) of this form
If y = e
rx
, then
y

= r e
rx
, y

= r
2
e
rx
, . . . , y
(n−1)
= r
n−1
r
rx
, y
(n)
= r
n
e
rx
.
Substituting y and its derivatives into (3) gives
r
n
e
rx
+ a
n−1
r
n−1
e
rx
+· · · + a
1
r e
rx
+ a
0
e
rx
= 0
or
e
rx
_
r
n
+ a
n−1
r
n−1
+ · · · + a
1
r + a
0
_
= 0.
Since e
rx
= 0 for all x, we conclude that y = e
rx
is a solution of (3) if and only if
r
n
+ a
n−1
r
n−1
+· · · + a
1
r + a
0
= 0. (4)
DEFINITION 2. Given the diﬀerential equation (3). The corresponding polynomial
equation
p(r) = r
n
+ a
n−1
r
n−1
+· · · + a
1
r + a
0
= 0.
is called the characteristic equation of (3); the n
th
-degree polynomial p(r) is called the
characteristic polynomial. The roots of the characteristic equation are called the character-
istic roots.
Thus, we can ﬁnd solutions of the equation if we can ﬁnd the roots of the corresponding
characteristic polynomial. Appendix 1 gives the basic facts about polynomials with real
coeﬃcients.
255
In Chapter 3 we proved that if r
1
= r
2
, then y
1
= e
r
1
x
and y
2
= e
r
2
x
are linearly
independent. We also showed that y
3
(x) = e
rx
and y
4
(x) = xe
rx
are linearly independent.
Here is the general result.
THEOREM 6
1. If r
1
, r
2
, . . . , r
k
are distinct numbers (real or complex), then the distinct exponential
functions y
1
= e
r
1
x
, y
2
= e
r
2
x
, . . . , y
k
= e
r
k
x
are linearly independent.
2. For any real number α the functions y
1
(x) = e
αx
, y
2
(x) = xe
αx
, . . . , y
k
(x) =
x
k−1
e
αx
are linearly independent.
Proof: In each case, the Wronskian W[y
1
, y
2
, . . . , y
k
](x) = 0.
Since all of the ground work for solving linear equations with constant coeﬃcients was
established in Chapter 3, we’ll simply give some examples here. Theorem 6 will be useful
in showing that our sets of solutions are linearly independent.
Example 2. Find the general solution of
y

+ 3y

−y

−3y = 0
given that r = 1 is a root of the characteristic polynomial.
SOLUTION The characteristic equation is
r
3
+ 3r
2
− r − 3 = 0
(r −1)(r
2
+ 4r + 3) = 0
(r −1)(r + 1)(r + 3) = 0
The characteristic roots are: r
1
= 1, r
2
= −1, r
3
= −3. The functions y
1
(x) = e
x
, y
2
(x) =
e
−x
, y
3
(x) = e
−3x
are solutions. Since these are distinct exponential functions, the solutions
form a fundamental set and
y = C
1
e
4x
+ C
2
e
−x
+ C
3
e
−3x
is the general solution of the equation.
Example 3. Find the general solution of
y
(4)
−4y

+ 3y

+ 4y

−4y = 0
given that r = 2 is a root of multiplicity 2 of the characteristic polynomial.
SOLUTION The characteristic equation is
r
4
− 4r
3
+ 3r
2
+ 4r −4 = 0
(r − 2)
2
(r
2
− 1) = 0
(r −2)
2
(r −1)(r + 1) = 0
256
The characteristic roots are: r
1
= 1, r
2
= −1, r
3
= r
4
= 2. The functions y
1
(x) =
e
x
, y
2
(x) = e
−x
, y
3
(x) = e
2x
are solutions. Based on our work in Chapter 3, we conjecture
that y
4
= xe
2x
is also a solution since r = 2 is a “double” root. You can verify that this
is the case. Since y
4
is distinct from y
1
, y
2
, and is independent of y
3
, these solutions
form a fundamental set and
y = C
1
e
x
+ C
2
e
−x
+ C
3
e
2x
+ C
4
xe
2x
is the general solution of the equation.
Example 4. Find the general solution of
y
(4)
−2y

+ y

+ 8y

− 20y = 0
given that r = 1 + 2i is a root of the characteristic polynomial.
SOLUTION The characteristic equation is
p(r) = r
4
− 2r
3
+ r
2
+ 8r − 20 = 0.
Since 1 + 2i is a root of p(r), 1 −2i is also a root, and r
2
−2r + 5 is a factor of p(r).
Therefore
r
4
−2r
3
+ r
2
+ 8r −20 = 0
(r
2
− 2r + 5)(r
2
−4) = 0
(r
2
−2r + 5)(r − 2)(r + 2) = 0
The characteristic roots are: r
1
= 1 + 2i, r
2
= 1 − 2i, r
3
= 2, r
4
= −2. Since these roots
are distinct, the corresponding exponential functions are linearly independent. Again based
on our work in Chapter 3, we convert the complex exponentials
u
1
= e
(1+2i)x
and u
2
(x) = e
(1−2i)x
into y
1
= e
x
cos 2x and y
2
= e
x
sin 2x.
Then, y
1
, y
2
, y
3
= e
2x
, y
4
= e
−2x
form a fundamental set and
y = C
1
e
x
cos 2x + C
2
e
x
sin 2x + C
3
e
2x
+ C
4
e
−2x
is the general solution of the equation.
Recovering a Homogeneous Diﬀerential Equation from Its Solutions
Once you understand the relationship between the homogeneous equation, the characteristic
equation, the roots of the characteristic equation and the solutions of the diﬀerential equa-
tion, it is easy to go from the diﬀerential equation to the solutions and from the solutions
to the diﬀerential equation. Here are some examples.
257
Example 5. Find a fourth order, linear, homogeneous diﬀerential equation with constant
coeﬃcients that has the functions y
1
(x) = e
2x
, y
2
(x) = e
−3x
and y
3
(x) = e
2x
cos x as
solutions.
SOLUTION Since e
2x
is a solution, 2 must be a root of the characteristic equation and
r − 2 must be a factor of the characteristic polynomial; similarly, e
−3x
a solution means
that −3 is a root and r − (−3) = r + 3 is a factor of the characteristic polynomial. The
solution e
2x
cos x indicates that 2 + i is a root of the characteristic equation. So 2 − i
must also be a root (and y
4
(x) = e
2x
sin x must also be a solution). Thus the characteristic
equation must be
(r−2)(r+3)(r−[2+i)](r−[2−i]) = (r
2
+r−6)(r
2
−4r+5) = r
4
−3r
3
−5r
2
+29r−30 = 0.
Therefore, the diﬀerential equation is
y
(4)
−3y

−5y

+ 29y

−30y = 0.
Example 6. Find a third order, linear, homogeneous diﬀerential equation with constant
coeﬃcients that has
y = C
1
e
−4x
+ C
2
x e
−4x
+ C
3
e
2x
as its general solution.
SOLUTION Since e
−4x
and xe
−4x
are solutions, −4 must be a double root of the
characteristic equation; since e
2x
is a solution, 2 is a root of the characteristic equation.
Therefore, the characteristic equation is
(r + 4)
2
(r −2) = 0 which expands to r
3
+ 6r
2
−32 = 0
and the diﬀerential equation is
y

+ 6y

−32y = 0.
Nonhomogeneous Equations
Now we’ll consider linear nonhomogeneous equations:
y
(n)
+ p
n−1
(x)y
(n−1)
+ p
n−2
(x)y
(n−2)
+· · · + p
1
(x)y

+ p
0
(x)y = f(x) (N)
where p
0
, p
1
, . . . , p
n−1
, f are continuous functions on an interval I.
Continuing the analogy with second order linear equations, the corresponding homoge-
neous equation
y
(n)
+ p
n−1
(x)y
(n−1)
+ p
n−2
(x)y
(n−2)
+ · · · + p
1
(x)y

+ p
0
(x)y = 0. (H)
is called the reduced equation of equation (N).
258
The following theorems are exactly the same as Theorems 1 and 2 in Section 3.4, and
exactly the same proofs can be used.
THEOREM 7 If z = z
1
(x) and z = z
2
(x) are solutions of (N), then
y(x) = z
1
(x) − z
2
(x)
is a solution of equation (H).
the diﬀerence of any two solutions of the nonhomogeneous equation (N) is a solution of
its reduced equation (H).
The next theorem gives the “structure” of the set of solutions of (N).
THEOREM 8 Let y = y
1
(x), y
2
(x), . . . , y
n
(x) be a fundamental set of solutions of the
reduced equation (H) and let z = z(x) be a particular solution of (N). If u = u(x) is any
solution of (N), then there exist constants c
1
, c
2
, . . . , c
n
such that
u(x) = c
1
y
1
(x) + c
2
y
2
(x) + · · · + c
n
y
n
(x) + z(x)
According to Theorem 8, if {y
1
(x), y
2
(x), . . . , y
n
(x)} is a fundamental set of solutions
of the reduced equation (H) and if z = z(x) is a particular solution of (N), then
y = C
1
y
1
(x) + C
2
y
2
(x) +· · · + C
n
y
n
(x) + z(x) (5)
represents the set of all solutions of (N). That is, (5) is the general solution of (N). Another
way to look at (5) is: The general solution of (N) consists of the general solution of the
reduced equation (H) plus a particular solution of (N):
y
.¸¸.
general solution of (N)
= C
1
y
1
(x) + C
2
y
2
(x) + · · · + C
n
y
n
(x)
. ¸¸ .
general solution of (H)
+ z(x).
. ¸¸ .
particular solution of (N)
The superposition principle also holds:
THEOREM 9 If z = z
f
(x) and z = z
g
(x) are particular solutions of
y
(n)
+ p
n−1
(x)y
(n−1)
+ p
n−2
(x)y
(n−2)
+ · · · + p
1
(x)y

+ p
0
(x)y = f(x),
and
y
(n)
+ p
n−1
(x)y
(n−1)
+ p
n−2
(x)y
(n−2)
+· · · + p
1
(x)y

+ p
0
(x)y = g(x)
respectively, then z(x) = z
f
(x) + z
g
(x) is a particular solution of
y
(n)
+ p
n−1
(x)y
(n−1)
+ p
n−2
(x)y
(n−2)
+ · · · + p
1
(x)y

+ p
0
(x)y = f(x) + g(x).
259
Finding a Particular Solution
The method of variation of parameters can be extended to higher-order linear nonhomo-
geneous equations but the calculations become quite involved. Instead we’ll look at the
special equations for which the method of undetermined coeﬃcients can be used.
As we saw in Chapter 3, the method of undetermined coeﬃcients can be applied only
to nonhomogeneous equations of the form
y
(n)
+ a
n−1
y
(n−1)
+ a
n−2
y
(n−2)
+ · · · + a
1
y

+ a
0
(x)y = f(x),
where a
0
, a
1
, . . . , a
n−1
are constants and the nonhomogeneous term f is a polynomial,
an exponential function, a sine, a cosine, or a combination of such functions.
Here is the basic table from Section 3.5, slightly modiﬁed to apply to equations of order
greater than 2:
Table 1
A particular solution of y
(n)
+ a
n−1
y
(n−1)
+ · · · + a
1
y

+ a
0
y = f(x)
If f(x) = try z(x) =*
ce
rx
Ae
rx
c cos βx + d sin βx z(x) = A cos βx + B sin βx
ce
αx
cos βx + de
αx
sin βx z(x) = Ae
αx
cos βx + Be
αx
sin βx
*Note: If z satisﬁes the reduced equation, then x
k
z, where k is the least integer such that
x
k
z does not satisfy the reduced equation, will give a particular solution
The method of undetermined coeﬃcients is applied in exactly the same manner as in
Section 3.5.
Example 7. Find the general solution of
y

− 2y

−5y

+ 6y = 4 −2e
2x
. (*)
SOLUTION First we solve the reduced equation
y

−2y

− 5y

+ 6y = 0.
The characteristic equation is
r
3
− 2r
2
− 5r + 6 = (r −1)(r + 2)(r − 3) = 0.
260
The roots are r
1
= 1, r
2
= −2, r
3
= 3 and the corresponding solutions of the reduced
equation are y
1
= e
x
, y
2
= e
−2x
, y
3
= e
3x
. Since these are distinct exponential functions,
they are linearly independent and
y = C
1
e
x
+ C
2
e
−2x
+ C
3
e
3x
is the general solution of the reduced equation.
Next we ﬁnd a particular solution of the nonhomogeneous equation. The table indicates
that we should look for a solution of the form
z = A + Be
2x
.
The derivatives of z are:
z = A + Be
2x
, z

= 2Be
2x
, z

= 4Be
2x
, z

= 8Be
2x
.
Substituting into the left side of (*), we get
z

− 2z

−5z

+ 6z = 8Be
2x
− 2
_
4Be
2x
_
−5
_
2Be
2x
_
+ 6
_
A + Be
2x
_
= 6A−4Be
2x
.
Setting z

+ 6z

+ 9z = 4 −2e
2x
gives
6A = 4 and − 4B = −2 which implies A =
2
3
and B =
1
2
.
Thus, z(x) =
2
3
+
1
2
e
2x
is a particular solution of (*).
The general solution of (*) is
y = C
1
e
x
+ C
2
e
−2x
+ C
3
e
3x
+
2
3
+
1
2
e
2x
.
Example 8. Find the general solution of
y
(4)
+ y

− 3y

−5y

− 2y = 6e
−x
(**)
SOLUTION First we solve the reduced equation
y
(4)
+ y

− 3y

− 5y

− 2y = 0.
The characteristic equation is
r
4
+ r
3
−3r
2
−5r −2 = (r + 1)
3
(r − 2) = 0.
The roots are r
1
= r
2
= r
3
= −1, r
4
= 2 and the corresponding solutions of the reduced
equation are y
1
= e
−x
, y
2
= xe
−x
, y
3
= x
2
e
−x
, y
4
= e
2x
. Since distinct powers of x are
linearly independent, it follows that y
1
, y
2
, y
3
are linearly independent; and since e
2x
261
and e
−x
are independent, we can conclude that y
1
, y
2
, y
3
, y
4
are linearly independent.
Thus, the general solution of the reduced equation is
y = C
1
e
−x
+ C
2
xe
−x
+ C
3
x
2
e
−x
+ C
4
e
2x
.
Next we ﬁnd a particular solution of the nonhomogeneous equation. The table indicates
that we should look for a solution of the form
z = Ax
3
e
−x
.
The derivatives of z are:
z = Ax
3
e
−x
z

= 3Ax
2
e
−x
− Ax
3
e
−x
z

= 6Axe
−x
−6Ax
2
e
−x
+ Ax
3
e
−x
z

= 6Ae
−x
−18Axe
−x
+ 9Ax
2
e
−x
−Ax
3
e
−x
z
(4)
= −24Ae
−x
+ 36Axe
−x
− 12Ax
2
e
−x
+ Ax
3
e
−x
Substituting z and its derivatives into the left side of (**), we get
z
(4)
+ z

− 3z

−5z

−2z = −18Ae
−x
.
Thus, we have −18Ae
−x
= 6e
−x
which implies A = −
1
3
and z = −
1
3
x
2
e
−x
is a particular
solution of (**).
The general solution of (**) is
y = C
1
e
−x
+ C
2
xe
−x
+ C
3
x
2
e
−x
+ C
4
e
2x

1
3
x
3
e
−x
.
Example 9. Give the form of a particular solution of
y

− 3y

+ 3y

−y = 4e
x
− 3 cos 2x.
SOLUTION To get the proper form for a particular solution of the equation we need to
ﬁnd the solutions of the reduced equation:
y

− 3y

+ 3y

− y = 0.
The characteristic equation is
r
3
−3r
3
+ 3r −1 = (r − 1)
3
= 0.
Thus, the roots are r
1
= r
2
= r
3
= 1, and the corresponding solutions are y
1
= e
x
, y
2
=
xe
x
, y
3
= x
2
e
x
. The table indicates that the form of a particular solution z of the
nonhomogeneous equation is
z = Ax
3
e
x
+ B cos 2x + C sin 2x.
262
Example 10. Give the form of a particular solution of
y
(4)
− 16y = 4e
2x
− 2e
3x
+ 5 sin 2x + 2 cos 2x.
SOLUTION To get the proper form for a particular solution of the equation we need to
ﬁnd the solutions of the reduced equation:
y
(4)
− 16y = 0.
The characteristic equation is
r
4
−16 = (r
2
− 4)(r
2
+ 4) = (r −2)(r + 2)(r
2
+ 4) = 0.
Thus, the roots are r
1
= 2, r
2
= −2, r
3
= 2i, r
4
= −2i, and the corresponding solutions
are y
1
= e
2x
, y
2
= e
−2
x, y
3
= cos 2x, y
4
= sin 2x. The table indicates that the form of a
particular solution z of the nonhomogeneous equation is
z = Axe
2x
+ Be
3x
+ Cx cos 2x + Dx sin 2x.
Exercises 6.1
Find the general solution of the homogeneous equation
1. y

− 6y

+ 11y

− 6y = 0, r
1
= 1 is a root of the characteristic equation.
2. y

+ y

+ 10y = 0, r
1
= −2 is a root of the characteristic equation.
3. y
(4)
−2y

+y

+8y

−20y = 0, r
1
= 1 +2i is a root of the characteristic equation.
4. y
(4)
−3y

− 4y = 0, r
1
= i is a root of the characteristic equation.
5. y
(4)
−4y

+ 14y

−4y

+ 13y = 0, r
1
= i is a root of the characteristic equation.
6. y

+ y

− 4y

−4y = 0, r
1
= −1 is a root of the characteristic equation.
7. y
(6)
−y

= 0.
8. y
(5)
−3y
(4)
+ 3y

− 3y

+ 2y

= 0.
Find the solution of the initial-value problem.
9. y
(4)
−4y

+ 4y

= 0; y(0) = −1, y

(0) = 2, y

(0) = 0, y

(0) = 0.
10. y

+ y

= 0; y(0) = 0, y

(0) = 1, y

(0) = 2.
11. y

− y

+ 9y

−9y = 0; y(0) = y

(0) = 0, y

(0) = 2.
263
12. 2y
(4)
−y

−9y

+ 4y

+ 4y = 0; y(0) = 0, y

(1) = 2, y

(0) = 2, y

(0) = 0.
Find the homogeneous equation with constant coeﬃcients that has the given general
solution.
13. y = C
1
e
−3x
+ C
2
xe
−3x
+ C
3
e
x
cos 3x + C
4
e
x
sin 3x.
14. y = C
1
e
4x
+ C
2
x + C
3
+ C
4
e
x
cos 2x + C
5
e
x
sin 2x.
15. y = C
1
e
3x
+ C
2
e
−x
+ C
3
cos x + C
4
sin x + C
5
.
16. y = C
1
e
2x
+ C
2
xe
2x
+ C
3
x
2
e
2x
+ C
4
.
Find the homogeneous equation with constant coeﬃcients of least order that has the
given function as a solution.
17. y = 2e
2x
+ 3 sin x −x.
18. y = 3xe
−x
+ e
−x
cos 2x + 1.
19. y = 2e
x
− 3e
−x
+ 2x.
20. y = 3e
3x
−2 cos 2x + 4 sin x − 3.
Find the general solution of the nonhomogeneous equation.
21. y

+ y

+ y

+ y = e
x
+ 4.
22. y
(4)
−y = 2e
x
+ cos x.
23. y
(4)
+ 2y

+ y = 6 + cos 2x.
24. y

− y

− y

+ y = 2e
−x
+ 4e
2x
.
Find the solution of the initial-value problem.
25. y

− 8y = e
2x
; y(0) = y

(0) = y

(0) = 0.
26. y

− 2y

−5y

+ 6y = 2e
x
; y(0) = 2, y

(0) = 0, y

(0) = −1.
264
6.2 Systems of Linear Diﬀerential Equations
Introduction
Up to this point the entries in a vector or matrix have been real numbers. In this section,
and in the following sections, we will be dealing with vectors and matrices whose entries are
functions. A vector whose components are functions is called a vector-valued function or
vector function. Similarly, a matrix whose entries are functions is called a matrix function.
The operations of vector and matrix addition, multiplication by a number and matrix
multiplication for vector and matrix functions are exactly as deﬁned in Chapter 5 so there is
nothing new in terms of arithmetic. However, there are operations on functions other than
arithmetic operations that we have to deﬁne for vector and matrix functions, namely the
operations from calculus (limits, diﬀerentiation, integration). The operations from calculus
are deﬁned in a natural way.
Let v(t) = (f
1
(t), f
2
(t), . . . , f
n
(t)) be a vector function whose components
are deﬁned on an interval I.
Limit: Let c ∈ I. If lim
x→c
f
i
(t) = α
i
exists for i = 1, 2, . . . n, then
lim
t→c
v(t) =
_
lim
t→c
f
1
(t), lim
t→c
f
2
(t), . . . , lim
t→c
f
n
(t)
_
= (α
1
, α
2
, . . . , α
n
) .
Limits of vector functions are calculated “component-wise.”
Derivative: If f
1
, f
2
, . . . , f
n
are diﬀerentiable on I, then v is diﬀerentiable
on I, and
v

(t) =
_
(f

1
(t), f

2
(t), . . . , f

n
(t)
_
.
Thus v

is the vector function whose components are the derivatives of the
components of v.
Integral: Since diﬀerentiation of vector functions is done component-wise,
integration must also be component-wise. That is
_
v(t) dt =
__
f
1
(t) dt,
_
f
2
(t) dt, . . . ,
_
f
n
(t) dt
_
.
Calculus of matrix functions: Limits, diﬀerentiation and integration of
matrix functions are done in exactly the same way — component-wise.
Systems of Linear diﬀerential Equations
Consider the third-order linear diﬀerential equation
y

+ p(t)y

+ q(t)y

+ r(t)y = f(t)
265
where p, q, r, f are continuous functions on some interval I. Solving the equation for
y

, we get
y

= −r(t)y −q(t)y

− p(t)y

+ f(t).
Introduce new dependent variables x
1
, x
2
, x
3
, as follows:
x
1
= y
x
2
= x

1
(= y

)
x
3
= x

2
(= y

)
Then
y

= x

3
= −r(t)x
1
−q(t)x
2
− p(t)x
3
+ f(t)
and the third-order equation can be written equivalently as the system of three ﬁrst-order
equations:
x

1
= x
2
x

2
= x
3
x

3
= −r(t)x
1
− q(t)x
2
− p(t)x
3
+ f(t).
Note that this system is just a very special case of the “general” system of three, ﬁrst-order
diﬀerential equations:
x

1
= a
11
(t)x
1
+ a
12
(t)x
2
+ a
13
(t)x
3
(t) + b
1
(t)
x

2
= a
21
(t)x
1
+ a
22
(t)x
2
+ a
23
(t)x
3
(t) + b
2
(t)
x

3
= a
31
(t)x
1
+ a
32
(t)x
2
+ a
33
(t)x
3
(t) + b
3
(t).
Example 1. (a) Consider the third-order nonhomgeneous equation
y

−y

− 8y

+ 12y = 2e
t
.
Solving the equation for y

, we have
y

= −12y + 8y

+ y

+ 2e
t
.
Let x
1
= y, x

1
= x
2
(= y

), x

2
= x
3
(= y

). Then
y

= x

3
= −12x
1
+ 8x
2
+ x
3
+ 2e
t
and the equation converts to the equivalent system:
x

1
= x
2
x

2
= x
3
x

3
= −12x
1
+ 8x
2
+ x
3
+ 2e
t
.
266
(b) Consider the second-order homogeneous equation
t
2
y

−ty

− 3y = 0.
Solving this equation for y

, we get
y

=
3
t
2
y +
1
t
y

.
To convert this equation to an equivalent system, we let x
1
= y, x

1
= x
2
(= y

). Then we
have
x

1
= x
2
x

2
=
3
t
2
x
1
+
1
t
x
2
which is just a special case of the general system of two ﬁrst-order diﬀerential equations:
x

1
= a
11
(t)x
1
+ a
12
(t)x
2
+ b
1
(t)
x

2
= a
21
(t)x
1
+ a
22
(t)x
2
+ b
2
(t).
General Theory
Let a
11
(t), a
12
(t), . . . , a
1n
(t), a
21
(t), . . . , a
nn
(t), b
1
(t), b
2
(t), . . . , b
n
(t) be continuous
functions on some interval I. The system of n ﬁrst-order diﬀerential equations
x

1
= a
11
(t)x
1
+ a
12
(t)x
2
+ · · · + a
1n
(t)x
n
(t) + b
1
(t)
x

2
= a
21
(t)x
1
+ a
22
(t)x
2
+ · · · + a
2n
(t)x
n
(t) + b
2
(t)
.
.
.
.
.
.
x

n
= a
n1
(t)x
1
+ a
n2
(t)x
2
+ · · · + a
nn
(t)x
n
(t) + b
n
(t)
(S)
is called a ﬁrst-order linear diﬀerential system.
The system (S) is homogeneous if
b
1
(t) ≡ b
2
(t) ≡ · · · ≡ b
n
(t) ≡ 0 on I.
(S) is nonhomogeneous if the functions b
i
(t) are not all identically zero on I; that is, (S)
is nonhomgeneous if there is at least one point a ∈ I and at least one function b
i
(t) such
that b
i
(a) = 0.
Let A(t) be the n ×n matrix
A(t) =
_
_
_
_
_
_
a
11
(t) a
12
(t) · · · a
1n
(t)
a
21
(t) a
22
(t) · · · a
2n
(t)
.
.
.
.
.
.
.
.
.
a
n1
(t) a
n2
(t) · · · a
nn
(t)
_
_
_
_
_
_
267
and let x and b(t) be the vectors
x =
_
_
_
_
_
_
x
1
x
2
.
.
.
x
n
_
_
_
_
_
_
, b(t) =
_
_
_
_
_
_
b
1
(t)
b
2
(t)
.
.
.
b
n
(t)
_
_
_
_
_
_
.
Then (S) can be written in the vector-matrix form
x

= A(t) x +b(t). (S)
The matrix A(t) is called the matrix of coeﬃcients or the coeﬃcient matrix of the system.
Example 2. The vector-matrix form of the system in Example 1 (a) is:
x

=
_
_
_
0 1 0
0 0 1
−12 8 1
_
_
_x +
_
_
_
0
0
2e
t
_
_
_, where x =
_
_
_
x
1
x
2
x
3
_
_
_,
a nonhomogeneous system.
The vector-matrix form of the system in Example 1(b) is:
x

=
_
0 1
3/t
2
1/t
_
x +
_
0
0
_
=
_
0 1
3/t
2
1/t
_
x, where x =
_
x
1
x
2
_
,
a homogeneous system.
The vector-matrix form of y

+ p(t)y

+ q(t)y

+ r(t)y = 0 is:
x

=
_
_
_
0 0 1
0 0 1
−r(t) −q(t) −p(t)
_
_
_x where x =
_
_
_
x
1
x
2
x
3
_
_
_.

A solution of the linear diﬀerential system (S) is a diﬀerentiable vector function
v(t) =
_
_
_
_
_
_
v
1
(t)
v
2
(t)
.
.
.
v
n
(t)
_
_
_
_
_
_
that satisﬁes (S) on the interval I.
268
Example 3. Verify that v(t) =
_
t
3
3t
2
_
is a solution of the homogeneous system
x

=
_
0 1
3/t
2
1/t
__
x
1
x
2
_
of Example 1 (b).
SOLUTION
v

=
_
t
3
3t
2
_

=
_
3t
2
6t
_
?
=
_
0 1
3/t
2
1/t
__
t
3
3t
2
_
=
_
3t
2
6t
_
;
v is a solution.
Example 4. Verify that v(t) =
_
_
_
e
2t
2e
2t
4e
2t
_
_
_+
_
_
_
1
2
e
t
1
2
e
t
1
2
e
t
_
_
_ is a solution of the nonhomogeneous
system
x

=
_
_
_
0 1 0
0 0 1
−12 8 1
_
_
_x +
_
_
_
0
0
2e
t
_
_
_
of Example 1 (a).
SOLUTION
v

=
_
¸
_
_
_
_
e
2t
2e
2t
4e
2t
_
_
_+
_
_
_
1
2
e
t
1
2
e
t
1
2
e
t
_
_
_
_
¸
_

=
_
_
_
2e
2t
4e
2t
8e
2t
_
_
_+
_
_
_
1
2
e
t
1
2
e
t
1
2
e
t
_
_
_
?
=
_
_
_
0 1 0
0 0 1
−12 8 1
_
_
_
_
¸
_
_
_
_
e
2t
2e
2t
4e
2t
_
_
_+
_
_
_
1
2
e
t
1
2
e
t
1
2
e
t
_
_
_
_
¸
_ +
_
_
_
0
0
2e
t
_
_
_
?
=
_
_
_
0 1 0
0 0 1
−12 8 1
_
_
_
_
_
_
e
2t
2e
2t
4e
2t
_
_
_
+
_
_
_
0 1 0
0 0 1
−12 8 1
_
_
_
_
_
_
1
2
e
t
1
2
e
t
1
2
e
t
_
_
_
+
_
_
_
0
0
2e
t
_
_
_
?
=
_
_
_
2e
2t
4e
2t
8e
2t
_
_
_+
_
_
_
1
2
e
t
1
2
e
t

3
2
e
t
_
_
_+
_
_
_
0
0
2e
t
_
_
_=
_
_
_
2e
2t
4e
2t
8e
2t
_
_
_+
_
_
_
1
2
e
t
1
2
e
t
1
2
e
t
_
_
_;
v is a solution.
269
THEOREM 1. (Existence and Uniqueness Theorem) Let a be any point on the interval
I, and let α
1
, α
2
, . . . , α
n
be any n real numbers. Then the initial-value problem
x

= A(t) x + b(t), x(a) =
_
_
_
_
_
_
α
1
α
2
.
.
.
α
n
_
_
_
_
_
_
has a unique solution.
Exercises 6.2
Convert the diﬀerential equation into a system of ﬁrst-order equations.
1. y

−ty

+ 3y = sin 2t.
2. y

+ y = 2e

2t.
3. y

− y

+ y = e
t
.
4. my

+ cy

+ ky = cos λt, m, c, k, λ are constants.
In Exercises 5 - 8 a matrix function A and a vector function b are given. Write
the system of equations corresponding to x

= A(t)x + b(t).
5. A(t) =
_
2 −1
3 0
_
, b(t) =
_
e
2t
2e
−t
_
.
6. A(t) =
_
t
3
t
cos t 2
_
, b(t) =
_
t − 1
2
_
.
7. A(t) =
_
_
_
2 3 −1
−2 0 1
2 3 0
_
_
_, b(t) =
_
_
_
e
t
2e
−t
e
2t
_
_
_.
8. A(t) =
_
_
_
t
2
3t t −1
−2 t −2 t
2t 3 t
_
_
_, b(t) =
_
_
_
0
0
1
_
_
_.
Write the system in vector-matrix form.
9.
x

1
= −2x
1
+ x
2
+ sin t
x

2
= x
1
−3x
2
−2 cos t
270
10.
x

1
= e
t
x
1
− e
2t
x
2
x

2
= e
−t
x
1
−3e
t
x
2
11.
x

1
= 2x
1
+ x
2
+ 3x
3
+ 3e
2t
x

2
= x
1
−3x
2
−2 cos t
x

3
= 2x
1
−x
2
+ 4x
3
+ t
12.
x

1
= t
2
x
1
+ x
2
− tx
3
+ 3
x

2
= −3e
t
x
2
+ 2x
3
− 2e
−2t
x

3
= 2x
1
+ t
2
x
2
+ 4x
3
13. Verify that u(t) =
_
t
−1
−t
−2
_
is a solution of the system in Example 1 (b).
14. Verify that u(t) =
_
_
_
e
−3t
−3e
−3t
9e
−3t
_
_
_+
_
_
_
1
2
e
t
1
2
e
t
1
2
e
t
_
_
_ is a solution of the system in Example
1 (a).
15. Verify that w(t) =
_
_
_
te
2t
e
2t
+ 2te
2t
4e
2t
+ 4te2t
_
_
_ is a solution of the homogeneous system
associated with the system in Example 1 (a).
16. Verify that v(t) =
_
−sin t
−cos t − 2 sin t
_
is a solution of the system
x

=
_
−2 1
−3 2
_
x +
_
0
2 sin t
_
.
17. Verify that v(t) =
_
_
_
−2e
−2t
0
3e
−2t
_
_
_
is a solution of the system
x

=
_
_
_
1 −3 2
0 −1 0
0 −1 −2
_
_
_x.
271
6.3 Homogeneous Systems
In this section we give the basic theory for linear homogeneous systems. This “theory” is
simply a repetition of the results given in Sections 3.2 and 6.1, phrased this time in terms
of the system
x

1
= a
11
(t)x
1
+ a
12
(t)x
2
+· · · + a
1n
(t)x
n
(t)
x

2
= a
21
(t)x
1
+ a
22
(t)x
2
+· · · + a
2n
(t)x
n
(t)
.
.
.
.
.
.
x

n
= a
n1
(t)x
1
+ a
n2
(t)x
2
+ · · · + a
nn
(t)x
n
(t)
(H)
or
x

= A(t)x. (H)
Note ﬁrst that the zero vector z(t) ≡ 0 =
_
_
_
_
_
_
0
0
.
.
.
0
_
_
_
_
_
_
is a solution of (H). As before,
this solution is called the trivial solution. Of course, we are interested in ﬁnding nontrivial
solutions.
THEOREM 1. If v = v(t) is a solution of (H) and α is any real number, then
u(t) = αv(t) is also a solution of (H); any constant multiple of a solution of (H) is a
solution of (H).
THEOREM 2. If v
1
= v
1
(t) and v
2
= v
2
(t) are solutions of (H), then
u(t) = v
1
(t) +v
2
(t)
is also a solution of (H); the sum of any two solutions of (H) is a solution of (H).
These two theorems can be combined and extended to:
THEOREM 3. If v
1
= v
1
(t), v
2
= v
2
(t), . . . , v
k
= v
k
(t) are solutions of (H), and if
c
1
, c
2
, . . . , c
k
are real numbers, then
v(t) = c
1
v
1
(t) + c
2
v
2
(t) +· · · + c
k
v
k
(t)
is a solution of (H); any linear combination of solutions of (H) is also a solution of (H).
Linear Dependence and Linear Independence of Vector Functions
This subsection is an extension of the discussion of linear dependence and linear indepen-
dence of functions in Section 5.7. This is a general treatment. We will return to linear
diﬀerential systems after we treat the general case of linear dependence/independence of
vector functions.
272
DEFINITION 1. Let
v
1
(t) =
_
_
_
_
_
_
v
11
(t)
v
21
(t)
.
.
.
v
n1
(t)
_
_
_
_
_
_
, v
2
(t) =
_
_
_
_
_
_
v
12
(t)
v
22
(t)
.
.
.
v
n2
(t)
_
_
_
_
_
_
, . . . , v
k
(t) =
_
_
_
_
_
_
v
1k
(t)
v
2k
(t)
.
.
.
v
nk
(t)
_
_
_
_
_
_
be n-component vector functions deﬁned on some interval I. The vectors are linearly
dependent on I if there exist k real numbers c
1
, c
2
, . . . , c
k
, not all zero, such that
c
1
v
1
(t) + c
2
v
2
(t) +· · · + c
k
v
k
(t) ≡ 0 on I.
Otherwise the vectors are linearly independent on I.
THEOREM 4. Let v
1
(t), v
2
(t), . . . , v
n
(t) be n, n-component vector functions deﬁned
on an interval I. If the vectors are linearly dependent, then
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
v
11
(t) v
12
(t) · · · v
1n
(t)
v
21
(t) v
22
(t) · · · v
2n
(t)
.
.
.
.
.
.
.
.
.
.
.
.
v
n1
(t) v
n2
(t) · · · v
nn
(t)
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
≡ 0 on I.
Proof: The method of proof of Theorem 1 in Section 5.7 applies here.
As before, the determinant in Theorem 4 is called the Wronskian of the vector functions
v
1
, v
2
, . . . , v
n
. We will let W(v
1
, v
2
, . . . , v
n
)(t), or simply W(t), denote the Wronskian.
COROLLARY Let v
1
(t), v
2
(t), . . . , v
n
(t) be n, n-component vector functions deﬁned
on an interval I, and let W(t)) be their Wronskian. If W(t) = 0 for at least one t ∈ I,
then the vector functions are linearly independent on I.
It is important to understand that in this general case, W(t) ≡ 0 does not imply that
the vector functions are linearly dependent. An example is given in Section 5.7
Example 1. (a) The Wronskian of the vector functions
u(t) =
_
t
3
3t
2
_
and v(t) =
_
t
−1
−t
−2
_
.
is:
W(x) =
¸
¸
¸
¸
¸
t
3
t
−1
3t
2
−t
−2
¸
¸
¸
¸
¸
= −4t.
(Note: u and v are solutions of the homogeneous system in Example 3, Section 6.2.)
(b) The Wronskian of the vector functions
v
1
(t) =
_
_
_
e
2t
2e
2t
4e
2t
_
_
_, v
2
(t) =
_
_
_
e
−3t
−3e
−3t
9e
−3t
_
_
_, v
3
(t) =
_
_
_
te
2t
e
2t
+ 2te
2t
4e
2t
+ 4te
2t
_
_
_
273
is:
W(x) =
¸
¸
¸
¸
¸
¸
¸
e
2t
e
−3t
te
2t
2e
2t
−3e−3t e
2t
+ 2te
2t
4e
2t
9e
−3t
4e
2t
+ 4te
2t
¸
¸
¸
¸
¸
¸
¸
= −25e
t
.
Back to Linear Diﬀerential Systems
When the vector functions v
1
, v
2
, . . . , v
n
are n solutions of the homogeneous system
(H) we get a much stronger version of Theorem 4.
THEOREM 5. Let v
1
(t), v
2
(t), . . . , v
n
(t) be n solutions of (H). Exactly one of the
following holds:
1. W(v
1
, v
2
, . . . , v
n
)(t) ≡ 0 on I and the solutions are linearly dependent.
2. W(v
1
, v
2
, . . . , v
n
)(t) = 0 for all t ∈ I and the solutions are linearly independent.
Compare this result with Theorem 4, Section 3.2, and Theorem 6, Section 6.1.
It is easy to construct sets of n linearly independent solutions of (H). Simply pick any
point a ∈ I and any nonsingular n ×n matrix A. Let a
1
be the ﬁrst column of A,
a
2
the second column of A, and so on. Then let v
1
(t) be the solution of (H) such that
v
1
(a) = a
1
, let v
2
(t) be the solution of (H) such that v
2
(a) = a
2
, . . . , and let v
n
(t) be
the solution of (H) such that v
n
= a
n
. The existence and uniqueness theorem guarantees
the existence of these solutions. Now
W(v
1
, v
2
, . . . , v
n
)(a) = det A = 0.
Therefore, W(t) = 0 for all t ∈ I and the solutions are linearly independent.
A particularly nice set of n linearly independent solutions is obtained by choosing
A = I
n
, the identity matrix.
THEOREM 6. Let v
1
(t), v
2
(t), . . . , v
n
(t) be n linearly independent solutions of (H).
Let u(t) be any solution of (H). Then there exists a unique set of constants C
1
, C
2
, . . . , C
n
such that
u(t) = C
1
v
1
(t) + C
2
v
2
(t) + · · · + C
n
v
n
(t).
That is, every solution of (H) can be written as a unique linear combination of v
1
, v
2
, . . . , v
n
.
DEFINITION 2. A set {v
1
, v
2
, . . . , v
n
} of n linearly independent solutions of (H) is
called a fundamental set of solutions. A fundamental set of solutions is also called a solution
basis for (H). If {v
1
, v
2
, . . . , v
n
} is a fundamental set of solutions of (H), then the n × n
274
matrix
V (t) =
_
_
_
_
_
_
v
11
(t) v
12
(t) · · · v
1n
(t)
v
21
(t) v
22
(t) · · · v
2n
(t)
.
.
.
.
.
.
.
.
.
v
n1
(t) v
n2
(t) · · · v
nn
(t)
_
_
_
_
_
_
(the vectors v
1
, v
2
, . . . , v
n
are the columns of V ) is called a fundamental matrix for (H).
DEFINITION 3. Let {v
1
(t), v
2
(t), . . . , v
n
(t)} be a fundamental set of solutions of
(H). Then
x(t) = C
1
v
1
(t) + C
2
v
2
(t) + · · · + C
n
v
n
(t),
where C
1
, C
2
, . . . , C
n
are arbitrary constants, is the general solution of (H).
Note that the general solution can also be written in terms of the fundamental matrix:
C
1
v
1
(t) + C
2
v
2
(t) +· · · + C
n
v
n
(t) =
_
_
_
_
_
_
v
11
(t) v
12
(t) · · · v
1n
(t)
v
21
(t) v
22
(t) · · · v
2n
(t)
.
.
.
.
.
.
.
.
.
v
n1
(t) v
n2
(t) · · · v
nn
(t)
_
_
_
_
_
_
_
_
_
_
_
_
C
1
C
2
.
.
.
C
n
_
_
_
_
_
_
= V (t)C.
Example 2. The vectors
u(t) =
_
t
3
3t
2
_
and v(t) =
_
t
−1
−t
−2
_
form a fundamental set of solutions of
x

=
_
0 1
3/t
2
1/t
__
x
1
x
2
_
.
The matrix
V (t) =
_
t
3
t
−1
3t
2
−t
−2
_
is a fundamental matrix for the system and
x(t) = C
1
_
t
3
3t
2
_
+ C
2
_
t
−1
−t
−2
_
=
_
t
3
t
−1
3t
2
−t
−2
__
C
1
C
2
_
is the general solution of the system.
Exercises 6.3
Determine whether or not the vector functions are linearly dependent.
1. u =
_
2t − 1
−t
_
, v =
_
−t + 1
2t
_
275
2. u =
_
cos t
sin t
_
, v =
_
sin t
cos t
_
3. u =
_
t −t
2
−t
_
, v =
_
−2t + 4t
2
2t
_
4. u =
_
te
t
t
_
, v =
_
e
t
1
_
5. u =
_
_
_
2 −t
t
−2
_
_
_, v =
_
_
_
t
−1
2
_
_
_, w =
_
_
_
2 + t
t −2
2
_
_
_.
6. u =
_
_
_
cos t
sin t
0
_
_
_, v =
_
_
_
cos t
0
sin t
_
_
_, w =
_
_
_
0
cos t
sin t
_
_
_.
7. u =
_
_
_
e
t
−e
t
e
t
_
_
_, v =
_
_
_
−e
t
2e
t
−e
t
_
_
_, w =
_
_
_
0
e
t
0
_
_
_.
8. u =
_
2 −t
t
_
, v =
_
t + 1
−2
_
, w =
_
t
t + 2
_
9. u =
_
e
t
0
_
, v =
_
0
0
_
, w =
_
0
e
t
_
10. u =
_
_
_
_
_
cos (t + π/4)
0
0
0
_
_
_
_
_
, v =
_
_
_
_
_
cos t)
0
0
e
t
_
_
_
_
_
, w =
_
_
_
_
_
sin t)
0
0
e
t
_
_
_
_
_
11. Given the linear diﬀerential system
x

=
_
5 −3
2 0
_
x.
Let
u =
_
e
2t
e
2t
_
and v =
_
3e
3t
2e
3t
_
.
(a) Show that u, v are a fundamental set of solutions of the system.
(b) Let V be the corresponding fundamental matrix. Show that
V

= AV.
(c) Give the general solution of the system.
276
(d) Find the solution of the system that satisﬁes x(0) =
_
1
0
_
.
12. Let V be the matrix function
V (t) =
_
cos 2t sin 2t
sin 2t −cos 2t
_
(a) Verify that V is a fundamental matrix for the system
x

=
_
0 −2
2 0
_
x.
(b) Find the solution of the system that satisﬁes x(0) =
_
2
3
_
.
13. Let V be the matrix function
V (t) =
_
_
_
0 4te
−t
e
−t
1 e
−t
0
1 0 0
_
_
_
(a) Verify that V is a fundamental matrix for the system
x

=
_
_
_
−1 4 −4
0 −1 1
0 0 0
_
_
_x.
(b) Find the solution of the system that satisﬁes x(0) =
_
_
_
0
1
2
_
_
_.
14. The linear diﬀerential system equivalent to the equation
y

+ p(t)y

+ q(t)y

+ r(t)y = 0
is:
_
_
_
x

1
x

2
x

3
_
_
_=
_
_
_
0 0 1
0 0 1
−r(t) −q(t) −p(t)
_
_
_
_
_
_
x
1
x
2
x
3
_
_
_.
(See Example 2, Section 6.2.) Show that if y = y(t) is a solution of the equation,
then x(t) =
_
_
_
y(t)
y

(t)
y

(t)
_
_
_ is a solution of the system.
Note: This result holds for linear equations of all orders. However, it is important
to understand that solutions of systems which are not converted from equations do
not have this special form.
277
15. Find three linearly independent solutions of
x

=
_
_
_
0 1 0
0 0 1
4 4 −1
_
_
_x.
16. Find three linearly independent solutions of
x

=
_
_
_
0 1 0
0 0 1
−18 3 4
_
_
_x.
17. Find two linearly independent solutions of
x

=
_
0 1
−6/t
2
−6/t
_
x.
18. Find two linearly independent solutions of
x

=
_
0 1
−4/t
2
3/t
_
x.
19. Let {v
1
(t), v
2
(t), . . . , v
n
(t)} be a fundamental set of solutions of (H), and let
V (t) be the corresponding fundamental matrix. Show that V satisﬁes the matrix
diﬀerential equation
X

= A(t)X.
278
6.4 Homogeneous Systems with Constant Coeﬃcients
A homogeneous system with constant coeﬃcients is a linear diﬀerential system having the
form
x

1
= a
11
x
1
+ a
12
x
2
+· · · + a
1n
x
n
x

2
= a
21
x
1
+ a
22
x
2
+· · · + a
2n
x
n
.
.
.
.
.
.
x

n
= a
n1
x
1
+ a
n2
x
2
+ · · · + a
nn
x
n
where a
11
, a
12
, . . . , a
nn
are constants. The system in vector-matrix form is
_
_
_
_
_
x

1
x

2

x

n
_
_
_
_
_
=
_
_
_
_
_
a
11
a
12
· · · a
1n
a
21
a
22
· · · a
2n
− − − −
a
n1
a
n2
· · · a
nn
_
_
_
_
_
_
_
_
_
_
x
1
x
2

x
n
_
_
_
_
_
or x

= Ax. (1)
Example 1. Consider the 3
rd
order linear homogeneous diﬀerential equation
y

+ 2y

− 5y

−6y = 0.
The characteristic equation is:
r
3
+ 2r
2
− 5r −6 = (r −2)(r + 1)(r + 3) = 0
and {e
2t
, e
−t
, e
−3t
} is a solution basis for the equation.
The corresponding linear homogeneous system is
x

=
_
_
_
0 1 0
0 0 1
6 5 −2
_
_
_x
and
v
1
(t) =
_
_
_
e
2t
2e
2t
4e
2t
_
_
_= e
2t
_
_
_
1
2
4
_
_
_
is a solution vector (see Problem 14, Exercises 6.3). Similarly,
v
2
(t) =
_
_
_
e
−t
−e
−t
e
−t
_
_
_= e
−t
_
_
_
1
−1
1
_
_
_ and v
3
(t) =
_
_
_
e
−3t
−3e
−3t
9e
−3t
_
_
_= e
−3t
_
_
_
1
−3
9
_
_
_
are solution vectors.
279
Solutions: Eigenvalues and Eigenvectors
Example 1 suggests that homogeneous systems with constant coeﬃcients might have
solution vectors of the form v(t) = e
λt
c, for some number λ and some constant vector
c.
Set v(t) = e
λt
c. Then v

(t) = λe
λt
c. Substituting into (1), we get:
λe
λt
c = Ae
λt
c which implies Ac = λc.
The latter equation is an eigenvalue-eigenvector equation for A. Thus, we look for
solutions of the form v(t) = e
λt
c where λ is an eigenvalue of A and c is a corresponding
eigenvector.
Example 2. Returning to Example 1, note that
_
_
_
0 1 0
0 0 1
6 5 −2
_
_
_
_
_
_
1
2
4
_
_
_= 2
_
_
_
1
2
4
_
_
_,
_
_
_
0 1 0
0 0 1
6 5 −2
_
_
_
_
_
_
1
−1
1
_
_
_= −1
_
_
_
1
−1
1
_
_
_,
and
_
_
_
0 1 0
0 0 1
6 5 −2
_
_
_
_
_
_
1
−3
9
_
_
_
= −3
_
_
_
1
−3
9
_
_
_
.
2 is an eigenvalue of A =
_
_
_
0 1 0
0 0 1
6 5 −2
_
_
_ with corresponding eigenvector
_
_
_
1
2
4
_
_
_, −1
is an eigenvalue of A with corresponding eigenvector
_
_
_
1
−1
1
_
_
_, and −3 is an eigenvalue
of A with corresponding eigenvector
_
_
_
1
−3
9
_
_
_.
Example 3. Find a fundamental set of solution vectors of
x

=
_
1 5
3 3
_
x
and give the general solution of the system.
SOLUTION First we ﬁnd the eigenvalues:
det(A− λI) =
¸
¸
¸
¸
¸
1 −λ 5
3 3 − λ
¸
¸
¸
¸
¸
= (λ −6)(λ + 2).
280
The eigenvalues are λ
1
= 6 and λ
2
= −2.
Next, we ﬁnd corresponding eigenvectors. For λ
1
= 6 we have:
(A− 6I)x =
_
−5 5
3 −3
__
x
1
x
2
_
=
_
0
0
_
which implies x
1
= x
2
, x
2
arbitrary.
Setting x
2
= 1, we get the eigenvector
_
1
1
_
.
Repeating the process for λ
2
= −2, we get the eigenvector
_
5
−3
_
.
Thus v
1
(t) = e
6t
_
1
1
_
and v
2
(t) = e
−2t
_
5
−3
_
are solution vectors of the system.
The Wronskian of v
1
and v
2
is:
W(t) =
¸
¸
¸
¸
¸
e
6t
5e
−2t
e
6t
−3e
−2t
¸
¸
¸
¸
¸
= −8e
4t
= 0.
Thus v
1
and v
2
are linearly independent; they form a fundamental set of solutions. The
general solution of the system is
x(t) = C
1
e
6t
_
1
1
_
+ C
2
e
−2t
_
5
−3
_
.
Example 4. Find a fundamental set of solution vectors of
x

=
_
_
_
3 −1 −1
−12 0 5
4 −2 −1
_
_
_
x
and ﬁnd the solution that satisﬁes the initial condition x(0) =
_
_
_
1
0
1
_
_
_.
SOLUTION
det(A−λI) =
¸
¸
¸
¸
¸
¸
¸
3 − λ −1 −1
−12 −λ 5
4 −2 −1 −λ
¸
¸
¸
¸
¸
¸
¸
= −λ
3
+ 2λ
2
+ λ − 2.
Now
det(A− λI) = 0 implies λ
3
− 2λ
2
−λ + 2 = (λ −2)(λ −1)(λ + 1) = 0.
The eigenvalues are λ
1
= 2, λ
2
= 1, λ
3
= −1.
281
As you can check, corresponding eigenvectors are:
c
1
=
_
_
_
1
−1
2
_
_
_, c
2
=
_
_
_
3
−1
7
_
_
_, c
3
=
_
_
_
1
2
2
_
_
_.
A fundamental set of solution vectors is:
v
1
(t) = e
2t
_
_
_
1
−1
2
_
_
_, v
2
(t) = e
t
_
_
_
3
−1
7
_
_
_, v
3
(t) = e
−t
_
_
_
1
2
2
_
_
_
since distinct exponential vector-functions are linearly independent (calculate the Wronskian
to verify) and
x(t) = C
1
e
2t
_
_
_
1
−1
2
_
_
_+ C
2
e
t
_
_
_
3
−1
7
_
_
_+ C
3
e
−t
_
_
_
1
2
2
_
_
_
is the general solution.
To ﬁnd the solution vector satisfying the initial condition, solve
C
1
v
1
(0) + C
2
v
2
(0) + C
3
v
3
(0) =
_
_
_
1
0
1
_
_
_
which is:
C
1
_
_
_
1
−1
2
_
_
_+ C
2
_
_
_
3
−1
7
_
_
_+ C
3
_
_
_
1
2
2
_
_
_ =
_
_
_
1
0
1
_
_
_
or
_
_
_
1 3 1
−1 −1 2
2 7 2
_
_
_
_
_
_
C
1
C
2
C
3
_
_
_=
_
_
_
1
0
1
_
_
_.
Note: The matrix of coeﬃcients here is the fundamental matrix evaluated at t = 0
Using the solution method of your choice (row reduction, inverse, Cramer’s rule), the
solution is: C
1
= 3, C
2
= −1, C
3
= 1. The solution of the initial-value problem is
x(t) = 3e
2t
_
_
_
1
−1
2
_
_
_− e
t
_
_
_
3
−1
7
_
_
_+ e
−t
_
_
_
1
2
2
_
_
_.
Two Diﬃculties
There are two diﬃculties that can arise:
282
1. A has complex eigenvalues.
If λ = a + bi is a complex eigenvalue of A with corresponding (complex) eigenvector
u + i v, then λ = a − bi (the complex conjugate of λ) is also an eigenvalue of A and
u − i v is a corresponding eigenvector. The corresponding linearly independent complex
solutions of x

= Ax are:
w
1
(t) = e
(a+bi)t
(u + i v) = e
at
(cos bt + i sin bt)(u + i v)
= e
at
[(cos bt u − sin bt v) + i(cos bt v + sin bt u)]
w
2
(t) = e
(a−bi)t
(u − i v) = e
at
(cos bt − i sin bt)(u − i v)
= e
at
[(cos bt u − sin bt v) − i(cos bt v + sin bt u)]
Now
x
1
(t) =
1
2
[w
1
(t) +w
2
(t)] = e
at
(cos bt u −sin bt v)
and
x
2
(t) =
1
2i
[w
1
(t) −w
2
(t)] = e
at
(cos bt v + sin bt u)
are linearly independent solutions of the system, and they are real-valued vector functions.
Note that x
1
and x
2
are simply the real and imaginary parts of w
1
(or of w
2
).
(Review Section 3.3 where you were shown how to convert complex exponential solutions
into real-valued solutions involving sine and cosine.)
Example 5. Determine the general solution of
x

=
_
2 −5
1 0
_
x.
SOLUTION
det(A− λI) =
¸
¸
¸
¸
¸
2 −λ −5
1 −λ
¸
¸
¸
¸
¸
= λ
2
− 2λ + 5.
The eigenvalues are: λ
1
= 1 + 2i, λ
2
= 1 −2i. The corresponding eigenvectors are:
c
1
=
_
1 + 2i
1
_
=
_
1
1
_
+ i
_
2
0
_
, c
2
=
_
1 −2i
1
_
=
_
1
1
_
+ i
_
2
0
_
.
Now
e
(1+2i)t
__
1
1
_
+ i
_
2
0
__
=
e
t
(cos 2t + i sin 2t)
__
1
1
_
+ i
_
2
0
__
=
e
t
_
cos 2t
_
1
1
_
−sin 2t
_
2
0
__
+ i e
t
_
cos 2t
_
2
0
_
+ sin 2t
_
1
1
__
.
283
A fundamental set of solution vectors for the system is:
v
1
(t) = e
t
_
cos 2t
_
1
1
_
− sin 2t
_
2
0
__
,
v
2
(t) = e
t
_
cos 2t
_
2
0
_
+ sin 2t
_
1
1
__
.
The general solution of the system is
x(t) = C
1
e
t
_
cos 2t
_
1
1
_
− sin 2t
_
2
0
__
+ C
2
e
t
_
cos 2t
_
2
0
_
+ sin 2t
_
1
1
__
.
Example 6. Determine a fundamental set of solution vectors of
x

=
_
_
_
1 −4 −1
3 2 3
1 1 3
_
_
_x.
SOLUTION
det(A− λI) =
¸
¸
¸
¸
¸
¸
¸
1 −λ −4 −1
3 2 − λ 3
1 1 3 − λ
¸
¸
¸
¸
¸
¸
¸
= −λ
3
+ 6λ
2
− 21λ + 26 = −(λ −2)(λ
2
−4λ + 13).
The eigenvalues are: λ
1
= 2, λ
2
= 2 + 3i, λ
3
= 2 − 3i. The corresponding eigenvectors
are:
c
1
=
_
_
_
1
0
−1
_
_
_, c
2
=
_
_
_
−5 + 3i
3 + 3i
2
_
_
_ =
_
_
_
−5
3
2
_
_
_+ i
_
_
_
3
3
0
_
_
_
c
3
=
_
_
_
−5 −3i
3 −3i
2
_
_
_ =
_
_
_
−5
3
2
_
_
_−i
_
_
_
3
3
0
_
_
_.
Now
e
(2+3i)t
_
¸
_
_
_
_
−5
3
2
_
_
_+ i
_
_
_
3
3
0
_
_
_
_
¸
_ =
e
2t
(cos 3t + i sin 3t)
_
¸
_
_
_
_
−5
3
2
_
_
_+ i
_
_
_
3
3
0
_
_
_
_
¸
_ =
e
2t
_
¸
_cos 3t
_
_
_
−5
3
2
_
_
_− sin 3t
_
_
_
3
3
0
_
_
_
_
¸
_ + i e
2t
_
¸
_cos 3t
_
_
_
3
3
0
_
_
_+ sin 3t
_
_
_
−5
3
2
_
_
_
_
¸
_.
284
A fundamental set of solution vectors for the system is:
v
1
(t) = e
2t
_
_
_
1
0
−1
_
_
_, v
2
(t) = e
2t
_
¸
_cos 3t
_
_
_
−5
3
2
_
_
_− sin 3t
_
_
_
3
3
0
_
_
_
_
¸
_,
v
3
(t) = e
2t
_
¸
_cos 3t
_
_
_
3
3
0
_
_
_+ sin 3t
_
_
_
−5
3
2
_
_
_
_
¸
_.
2. A has an eigenvalue of multiplicity greater than 1
We’ll treat the case where A has an eigenvalue of multiplicity 2.
Example 7. Determine a fundamental set of solution vectors of
x

=
_
_
_
1 −3 3
3 −5 3
6 −6 4
_
_
_x.
SOLUTION
det(A−λI) =
¸
¸
¸
¸
¸
¸
¸
1 − λ −3 3
3 −5 −λ 3
6 −6 4 − λ
¸
¸
¸
¸
¸
¸
¸
= −λ
3
+ 12λ −16 = −(λ −4)(λ + 2)
2
.
The eigenvalues are: λ
1
= 4, λ
2
= λ
3
= −2.
As you can check, an eigenvector corresponding to λ
1
= 4 is c
1
=
_
_
_
1
1
2
_
_
_.
We’ll carry out the details involved in ﬁnding an eigenvector corresponding to the “dou-
ble” eigenvalue −2.
[A−(−2)I]c =
_
_
_
3 −3 3
3 −3 3
6 −6 6
_
_
_
_
_
_
c
1
c
2
c
3
_
_
_=
_
_
_
0
0
0
_
_
_.
The augmented matrix for this system of equations is
_
_
_
3 −3 3 0
3 −3 3 0
6 −6 6 0
_
_
_ which row reduces to
_
_
_
1 −1 1 0
0 0 0 0
0 0 0 0
_
_
_
The solutions of this system are: c
1
= c
2
−c
3
, c
2
, c
3
arbitrary. We can assign values to
c
2
and c
3
independently and obtain two linearly independent eigenvectors. For example,
285
setting c
2
= 1, c
3
= 0, we get the eigenvector c
2
=
_
_
_
1
1
0
_
_
_. Reversing the roles, we set
c
2
= 0, c
3
= −1 to get the eigenvector c
3
=
_
_
_
1
0
−1
_
_
_. Clearly c
2
and c
3
are linearly
independent. You should understand that there is nothing magic about our two choices for
c
2
, c
3
; any choice which produces two independent vectors will do.
The important thing to note here is that this eigenvalue of multiplicity 2 produced two
independent eigenvectors.
Based on our work above, a fundamental set of solutions for the diﬀerential system
x

=
_
_
_
1 −3 3
3 −5 3
6 −6 4
_
_
_x
is
v
1
(t) = e
4t
_
_
_
1
1
2
_
_
_, v
2
(t) = e
−2t
_
_
_
1
1
0
_
_
_, v
3
(t) = e
−2t
_
_
_
1
0
−1
_
_
_.
Example 8. Let A =
_
_
_
0 1 0
0 0 1
12 8 −1
_
_
_
det(A−λI) =
¸
¸
¸
¸
¸
¸
¸
−λ 1 0
0 −λ 1
12 8 −1 −λ
¸
¸
¸
¸
¸
¸
¸
= −λ
3
−λ
2
+ 8λ − 12 = −(λ −3)(λ + 2)
2
.
The eigenvalues are: λ
1
= 3, λ
2
= λ
3
= −2.
As you can check, an eigenvector corresponding to λ
1
= 3 is c
1
=
_
_
_
1
3
9
_
_
_.
We’ll carry out the details involved in ﬁnding an eigenvector corresponding to the “dou-
ble” eigenvalue −2.
[A− (−2)I]c =
_
_
_
2 1 0
0 2 1
12 8 1
_
_
_
_
_
_
c
1
c
2
c
3
_
_
_ =
_
_
_
0
0
0
_
_
_.
The augmented matrix for this system of equations is
_
_
_
2 1 0 0
0 2 1 0
12 8 1 0
_
_
_ which row reduces to
_
_
_
2 1 0 0
0 2 1 0
0 0 0 0
_
_
_
286
The solutions of this system are c
1
=
1
4
c
3
, c
2
= −
1
2
c
3
, c
3
arbitrary. Here there is only
one parameter and so we’ll get only one eigenvector. Setting c
3
= 4 we get the eigenvector
c
2
=
_
_
_
1
−2
4
_
_
_.
In contrast to the preceding example, the “double” eigenvalue here has only one (inde-
pendent) eigenvector.
Suppose that we were asked to ﬁnd a fundamental set of solutions of the linear diﬀerential
system
x

=
_
_
_
0 1 0
0 0 1
12 8 −1
_
_
_x.
By our work above, we have two independent solutions
v
1
= e
3t
_
_
_
1
3
9
_
_
_ and v
2
= e
−2t
_
_
_
1
−2
4
_
_
_.
We need a third solution which is independent of these two.
Our system has a special form; it is equivalent to the third order equation
y

+ y

− 8y

− 12y = 0.
The characteristic equation is
r
3
+ r
2
−8r −12 = (r −3)(r + 2)
2
= 0
(compare with det(A − λI).) The roots are: r
1
= 3, r
2
= r
3
= −2 and a fundamental
set of solutions is {y
1
= e
3t
, y
2
= e
−2t
, y
3
= te
−2t
}. The correspondence between these
solutions and the solution vectors we found above should be clear:
e
3t
−→ e
3t
_
_
_
1
3
9
_
_
_, e
−2t
−→ e
−2t
_
_
_
1
−2
4
_
_
_.
As we saw in Section 6.2, the solution y
3
(t) = te
−2t
of the equation produces the
solution vector
v
3
(t) =
_
_
_
y
3
(t)
y

3
(t)
y

3
(t)
_
_
_ =
_
_
_
te
−2t
e
−2t
− 2te
−2t
−4e
−2t
−4te−2t
_
_
_= e
−2t
_
_
_
0
1
−4
_
_
_+ te
−2t
_
_
_
1
−2
4
_
_
_
of the corresponding system.
287
The appearance of the te
−2t
c
2
term should not be unexpected since we know that a
characteristic root r of multiplicity 2 produces a solution of the form te
rt
.
You can check that v
3
is independent of v
1
and v
2
. Therefore, the solution vectors
v
1
, v
2
, v
3
are a fundamental set of solutions of the system.
The question is: What is the signiﬁcance of the vector w =
_
_
_
0
1
−4
_
_
_? How is it related
to the eigenvalue −2 which generated it, and to the corresponding eigenvector?
Let’s look at [A−(−2)I]w = [A + 2I]w:
[A + 2I]w =
_
_
_
2 1 0
0 2 1
12 8 1
_
_
_
_
_
_
0
1
−4
_
_
_ =
_
_
_
1
−2
4
_
_
_= c
2
;
A−(−2)I “maps” w onto the eigenvector c
2
. The corresponding solution of the system
has the form
v
3
(t) = e
−2t
w+ te
−2t
c
2
where c
2
is the eigenvector corresponding to −2 and w satisﬁes
[A− (−2)I]w = c
2
.
General Result
Given the linear diﬀerential system x

= Ax. Suppose that A has an eigenvalue λ of
multiplicity 2. Then exactly one of the following holds:
1. λ has two linearly independent eigenvectors, c
1
and c
2
. Corresponding linearly
independent solution vectors of the diﬀerential system are v
1
(t) = e
λt
c
1
and v
2
(t) =
e
λt
c
2
.
2. λ has only one (independent) eigenvector c. Then a linearly independent pair of
solution vectors corresponding to λ are:
v
1
(t) = e
λt
c and v
2
(t) = e
λt
w+ te
λt
c
where w is a vector that satisﬁes (A − λI)w = c. The vector w is called a
generalized eigenvector corresponding to the eigenvalue λ.
Example 9. Find a fundamental set of solution vectors for x

=
_
1 −1
1 3
_
x.
SOLUTION
det(A−λI) =
¸
¸
¸
¸
¸
1 − λ −1
1 3 − λ
¸
¸
¸
¸
¸
= λ
2
− 4λ + 4 = (λ − 2)
2
.
288
Characteristic values: λ
1
= λ
2
= 2.
Characteristic vectors:
(A−2I)c =
_
−1 −1
1 1
__
c
1
c
2
_
=
_
0
0
_
;
_
−1 −1 0
1 1 0
_
−→
_
1 1 0
0 0 0
_
.
The solutions are: c
1
= −c
2
, c
2
arbitrary; there is only one eigenvector. Setting c
2
= −1,
we get c =
_
1
−1
_
.
The vector v
1
= e
2t
_
1
−1
_
is a solution of the system.
A second solution, independent of v
1
is v
2
= e
2t
w+ te
2t
c where w is a solution of
(A−2I)z = c:
(A−2I)z =
_
−1 −1
1 1
__
z
1
z
2
_
=
_
1
−1
_
;
_
−1 −1 1
1 1 1
_
−→
_
1 1 −1
0 0 0
_
.
The solutions of this system are z
1
= −1 − z
2
, z
2
arbitrary. If we choose z
2
= 0 (any
choice for z
2
will do), we get z
1
= −1 and w =
_
−1
0
_
. Thus
v
2
(t) = e
2t
_
−1
0
_
+ te
2t
_
1
−1
_
is a solution of the system independent of v
1
. The solutions
v
1
(t) = e
2t
_
1
−1
_
, v
2
(t) = e
2t
_
−1
0
_
+ te
2t
_
1
−1
_
are a fundamental set of solutions of the system.
Example 10. Let A =
_
_
_
3 1 −1
2 2 −1
2 2 0
_
_
_. Find a fundamental set of solutions of
x

= Ax
SOLUTION
289
det(A− λI) =
¸
¸
¸
¸
¸
¸
¸
3 −λ 1 −1
2 2 − λ −1
2 2 −λ
¸
¸
¸
¸
¸
¸
¸
= −λ
3
+ 5λ
2
− 8λ + 4 = −(λ − 1)(λ −2)
2
.
The eigenvalues are: λ
1
= 1, λ
2
= λ
3
= 2.
An eigenvector corresponding to λ
1
= 1 is c
1
=
_
_
_
1
0
2
_
_
_ (check this).
We’ll show the details involved in ﬁnding an eigenvector (or eigenvectors) corresponding
to the “double” eigenvalue 2.
[A− 2)I]c =
_
_
_
1 1 −1
2 0 −1
2 2 −2
_
_
_
_
_
_
c
1
c
2
c
3
_
_
_=
_
_
_
0
0
0
_
_
_.
The augmented matrix for this system of equations is
_
_
_
1 1 −1 0
2 0 −1 0
2 2 −2 0
_
_
_ which row reduces to
_
_
_
1 1 −1 0
0 −2 1 0
0 0 0 0
_
_
_
The solutions of this system are: c
1
= −c
2
+ c
3
, c
3
= 2c
2
, c
2
arbitrary. There is only
one eigenvector corresponding to the eigenvalue 2. Setting c
2
= 1, we get c
2
=
_
_
_
1
1
2
_
_
_.
Thus, two independent solutions of the given linear diﬀerential system are
v
1
= e
t
_
_
_
1
0
2
_
_
_, v
2
= e
2t
_
_
_
1
1
2
_
_
_.
We need another solution corresponding to the eigenvalue 2, one which is independent
of v
2
. We know that this solution has the form
v
3
(t) = e
2t
w+ te
2t
c
2
where w is a solution of (A−2I)z = c
2
. That is:
_
_
_
1 1 −1
2 0 −1
2 2 −2
_
_
_
_
_
_
z
1
z
2
z
3
_
_
_ =
_
_
_
1
1
2
_
_
_.
The augmented matrix is
_
_
_
1 1 −1 1
2 0 −1 1
2 2 −2 2
_
_
_ which row reduces to
_
_
_
1 1 −1 1
0 −2 1 −1
0 0 0 0
_
_
_
290
The solutions of this system are
z
3
= −1 + 2z
2
, z
1
= 1 −z
2
+ z
3
= 1 − z
2
+ (−1 + 2z
2
) = z
2
, z
2
arbitrary.
If we choose z
2
= 0 (any choice for z
2
will do), we get z
1
= 0, z
2
= 0, z
3
= −1 and
w =
_
_
_
0
0
−1
_
_
_. Thus
v
3
= e
2t
_
_
_
0
0
−1
_
_
_+ te
2t
_
_
_
1
1
2
_
_
_
is a solution of the system independent of v
2
(and of v
1
). The solutions
v
1
= e
t
_
_
_
1
0
2
_
_
_, v
2
= e
2t
_
_
_
1
1
2
_
_
_, v
3
= e
2t
_
_
_
0
0
−1
_
_
_+ te
2t
_
_
_
1
1
2
_
_
_
are a fundamental set of solutions of the system.
Exercises 6.4
Find the general solution of the system x

= Ax where A is the given matrix. If an
initial condition is given, also ﬁnd the solution that satisﬁes the condition.
1.
_
−2 4
1 1
_
.
2.
_
−3 2
1 −2
_
.
3.
_
2 4
−2 −2
_
, x(0) =
_
1
3
_
.
4.
_
−1 2
−1 −3
_
.
5.
_
−1 1
−4 3
_
.
6.
_
5 2
−2 1
_
.
7.
_
3 2
−8 −5
_
, x(0) =
_
3
−2
_
.
291
8.
_
−1 1
−4 −5
_
.
9.
_
_
_
3 0 −1
−2 2 1
8 0 −3
_
_
_, x(0) =
_
_
_
−1
2
−8
_
_
_. Hint: 2 is an eigenvalue.
10.
_
_
_
−2 2 1
0 −1 0
2 −2 −1
_
_
_. Hint: 0 is an eigenvalue.
11.
_
_
_
3 −4 4
4 −5 4
4 −4 3
_
_
_, x(0) =
_
_
_
2
1
−1
_
_
_. Hint: 3 is an eigenvalue.
12.
_
_
_
−3 0 −3
1 −2 3
1 0 1
_
_
_
. Hint: −2 is an eigenvalue.
13.
_
_
_
0 4 0
−1 0 0
1 4 −1
_
_
_. Hint: −1 is an eigenvalue.
14.
_
_
_
5 −5 −5
−1 4 2
3 −5 −3
_
_
_. Hint: 2 is an eigenvalue.
15.
_
_
_
8 2 1
1 7 3
1 1 6
_
_
_ Hint: 5 is an eigenvalue.
16.
_
_
_
1 1 2
0 1 0
0 1 3
_
_
_, x(0) =
_
_
_
−1
3
2
_
_
_. Hint: 3 is an eigenvalue.
17.
_
_
_
−3 1 −1
−7 5 −1
−6 6 −2
_
_
_, x(0) =
_
_
_
1
0
−1
_
_
_. Hint: 4 is an eigenvalue.
18.
_
_
_
0 1 1
1 1 −1
−2 1 3
_
_
_. Hint: 2 is an eigenvalue.
19.
_
_
_
0 0 −2
1 2 1
1 0 3
_
_
_. Hint: 2 is an eigenvalue.
292
20.
_
_
_
2 −1 1
0 3 −1
0 −1 3
_
_
_. Hint: 4 is an eigenvalue.
21.
_
_
_
2 −1 −1
2 1 −1
0 −1 1
_
_
_, x(0) =
_
_
_
1
−2
0
_
_
_. Hint: 2 is an eigenvalue.
22.
_
_
_
−2 1 −1
3 −3 4
3 −1 2
_
_
_. Hint: 1 is an eigenvalue.
23.
_
_
_
2 2 −6
2 −1 −3
−2 −1 1
_
_
_. Hint: 6 is an eigenvalue.
24.
_
_
_
8 −6 1
10 −9 2
10 −7 0
_
_
_. Hint: 3 is an eigenvalue.
Appendix: Eigenvalues of Multiplicity 3.
Given the diﬀerential system x

= Ax. Suppose that λ is an eigenvalue of A of
multiplicity 3. Then exactly one of the following holds:
1. λ has three linearly independent eigenvectors c
1
, c
2
, c
3
. Then three linearly inde-
pendent solution vectors of the system corresponding to λ are:
v
1
(t) = e
λt
c
1
, v
2
(t) = e
λt
c
2
, v
3
(t) = e
λt
c
3
.
2. λ has two linearly independent eigenvectors c
1
, c
2
. Then two linearly independent
solutions of the system corresponding to λ are:
v
1
(t) = e
λt
c
1
, v
2
(t) = e
λt
c
2
A third solution, independent of v
1
and v
2
has the form
v
3
(t) = e
λt
w+ te
λt
v
where v is an eigenvector corresponding to λ and (A−λI)w = v.
3. λ has only one (independent) eigenvector c. Then three linearly independent solu-
tions of the system have the form:
v
1
= e
λt
c, v
2
= e
λt
w+ tge
λt
c,
v
3
(t) = e
λt
z + te
λt
w+ t
2
e
λt
c
where (A− λI)w= c and (A− λI)z = w.
293
6.5 Nonhomogeneous Systems
The treatment in this section parallels exactly the treatments of linear nonhomogeneous
equations in Sections 3.4 and 6.1.
Recall from Section 6.2 that a linear nonhomgeneous diﬀerential system is a system of
the form
x

1
= a
11
(t)x
1
+ a
12
(t)x
2
+ · · · + a
1n
(t)x
n
(t) + b
1
(t)
x

2
= a
21
(t)x
1
+ a
22
(t)x
2
+ · · · + a
2n
(t)x
n
(t) + b
2
(t)
.
.
.
.
.
.
x

n
= a
n1
(t)x
1
+ a
n2
(t)x
2
+ · · · + a
nn
(t)x
n
(t) + b
n
(t)
(N)
where a
11
(t), a
12
(t), . . . , a
1n
(t), a
21
(t), . . . , a
nn
(t), b
1
(t), b
2
(t), . . . , b
n
(t) are continuous
functions on some interval I and the functions b
i
(t) are not all identically zero on I; that
is, there is at least one point a ∈ I and at least one function b
i
(t) such that b
i
(a) = 0.
Let A(t) be the n ×n matrix
A(t) =
_
_
_
_
_
_
a
11
(t) a
12
(t) · · · a
1n
(t)
a
21
(t) a
22
(t) · · · a
2n
(t)
.
.
.
.
.
.
.
.
.
a
n1
(t) a
n2
(t) · · · a
nn
(t)
_
_
_
_
_
_
and let x and b(t) be the vectors
x =
_
_
_
_
_
_
x
1
x
2
.
.
.
x
n
_
_
_
_
_
_
, b(t) =
_
_
_
_
_
_
b
1
(t)
b
2
(t)
.
.
.
b
n
(t)
_
_
_
_
_
_
.
Then (N) can be written in the vector-matrix form
x

= A(t) x +b(t). (N)
The corresponding linear homogeneous system
x

= A(t) x (H)
is called the reduced system of (N).
THEOREM 1. If z
1
(t) and z
2
(t) are solutions of (N), then
x(t) = z
1
(t) − z
2
(t)
is a solution of (H). (C.f. Theorem 1, Section 3.4, and Theorem ??, Section 6.1.)
Proof: Since z
1
and z
2
are solutions of (N),
z

1
(t) = A(t)z
1
(t) +b(t) and z

2
(t) = A(t)z
2
(t) +b(t)).
294
Let x(t) = z
1
(t) −z
2
(t). Then
x

(t) = z

1
(t) −z

2
(t) = [A(t)z
1
(t) +b(t)] −[A(t)z
2
(t) +b(t)]
= A(t) [z
1
(t) − z
2
(t)] = A(t)x(t).
Thus, x(t) = z
1
(t) −z
2
(t) is a solution of (H).
Our next theorem gives the “structure” of the set of solutions of (N).
THEOREM 2. Let x
1
(t), x
2
(t), . . . , x
n
(t) be a fundamental set of solutions the reduced
system (H) and let z = z(t) be a particular solution of (N). If u = u(t) is any solution
of (N), then there exist constants c
1
, c
2
, . . . , c
n
such that
u(t) = c
1
x
1
(t) + c
2
x
2
(t) +· · · + c
n
x
n
(t) +z(t)
(C.f. Theorem 2, Section 3.4, and Theorem ??, Section 6.1.)
Proof: Let u = u(t) be any solution of (N). By Theorem 1, u(t) −z(t) is a solution of
the reduced system (H). Since x
1
(t), x
2
(t), . . . , x
n
(t) are n linearly independent solutions
of (H), there exist constants c
1
, c
2
, . . . , c
n
such that
u(t) −z(t) = c
1
x
1
(t) + c
2
x
2
(t) + · · · + c
n
x
n
(t).
Therefore
u(t) = c
1
x
1
(t) + c
2
x
2
(t) + · · · + c
n
x
n
(t) + z(t).
According to Theorem 2, if x
1
(t), x
2
(t), . . . , x
n
(t) are linearly independent solutions
of the reduced system (H) and z = z(t) is a particular solution of (N), then
x(t) = C
1
x
1
(t) + C
2
x
2
(t) +· · · + C
n
x
n
(t) +z(t) (1)
represents the set of all solutions of (N). That is, (1) is the general solution of (N). Another
way to look at (1) is: The general solution of (N) consists of the general solution of the
reduced equation (H) plus a particular solution of (N):
x
.¸¸.
general solution of (N)
= C
1
x
1
(t) + C
2
x
2
(t) + · · · + C
n
x
n
(t)
. ¸¸ .
general solution of (H)
+ z(t).
.¸¸.
particular solution of (N)
295
Variation of Parameters
Let x
1
(t), x
2
(t), . . . , x
n
(t) be a fundamental set of solutions of (H) and let V (t) be the cor-
responding fundamental matrix (V is the n×n matrix whose columns are x
1
, x
2
, . . . , x
n
).
Then, as we saw in Section 6.3, the general solution of (H) can be written
V (t)C where C =
_
_
_
_
_
_
C
1
C
2
.
.
.
C
n
_
_
_
_
_
_
.
In Exercises 6.3, Problem 19, you were asked to show that V satisﬁes the matrix diﬀerential
system
X

= A(t)X.
That is, V

(t) = A(t)V (t).
We replace the constant vector C by a vector function u(t) which is to be determined
so that
z(t) = V (t)u(t)
is a solution of (N). Diﬀerentiating z, we get
z

(t) = [V (t)u(t)]

= V (t)u

(t) + V

(t)u(t) = V (t)u

(t) + A(t)V (t)u(t).
Since z is to satisfy (N), we have
z

(t) = A(t)z(t) +b(t) = A(t)V (t)u(t) +b(t).
Therefore
V (t)u

(t) + A(t)V (t)u(t) = A(t)V (t)u(t) + b(t),
from which it follows that
V (t)u

(t) = b(t).
Since V is a fundamental matrix, it is nonsingular, and so we can solve for u

:
u

(t) = V
−1
(t)b(t) which implies u(t) =
_
V
−1
(t)b(t) dt.
Finally, we have
z(t) = V (t)
_
V
−1
(t)b(t) dt
is a solution of (N).
By Theorem 2, the general solution of (N) is given by
x(t) = V (t)C+ V (t)
_
V
−1
(t)b(t) dt. (2)
Compare this result with the general solution of ﬁrst order linear diﬀerential equation given
by equation (2) in Section 2.1
296
Example 1. Find the general solution of the nonhomogeneous linear diﬀerential system
x

=
_
0 1
−t
−2
t
−1
_
x +
_
0
2t
−1
_
.
SOLUTION You can verify that v
1
(t) =
_
t
1
_
and v
2
(t) =
_
t ln t
1 + ln t
_
is a funda-
mental set of solutions of the reduced system
x

=
_
0 1
−t
−2
t
−1
_
x.
The corresponding fundamental matrix is
V (t) =
_
t t ln t
1 1 + ln t
_
.
The inverse of V is given by
V
−1
(t) =
_
t
−1
+ t
−1
ln t −ln t
−t
−1
1
_
.
We are now ready to calculate z using the result given above:
z =
_
t t ln t
1 1 + ln t
_
_
_
t
−1
+ t
−1
ln t −ln t
−t
−1
1
__
0
2t
−1
_
dt
=
_
t t ln t
1 1 + ln t
_
_
_
−2t
−1
ln t
2t
−1
_
dt
=
_
t t ln t
1 1 + ln t
__
−(ln t)
2
2 ln t
_
=
_
t(ln t
2
2 ln t + (ln t)
2
_
.
The general solution of the given nonhomogeneous system is
x(t) =
_
t t ln t
1 1 + ln t
__
C
1
C
2
_
+
_
t(ln t
2
2 ln t + (ln t)
2
_
.
By ﬁxing a point a on the interval I, the general solution of (1) given by (2) can be
written as
x(t) = V (t)C+ V (t)
_
t
a
V
−1
(s)b(s) ds, t ∈ I. (3)
This form is useful in solving system (1) subject to an initial condition x(a) = x
o
. Substi-
tuting t = a in (3) gives
x
o
= V (a)C which implies C = V
−1
(a)x
o
.
297
Therefore the solution of the initial-value problem
x

= A(t) x x(a) = x
o
is given by
x(t) = V (t)V
−1
(a)x
o
+ V (t)
_
t
a
V
−1
(s)b(s) ds. (4)
Exercises 6.5
Find the general solution of the system x

= A(t)x + b(t) where A and b are given.
1. A(t) =
_
−3 1
2 −4
_
, b(t) =
_
3t
e

t
_
2. A(t) =
_
2 −1
3 −2
_
, b(t) =
_
0
4t
_
3. A(t) =
_
2 2
−3 −3
_
, b(t) =
_
1
2t
_
4. A(t) =
_
3 2
−4 −3
_
, b(t) =
_
2 cos t
2 sin t
_
5. A(t) =
_
−3 1
2 −4
_
, b(t) =
_
3t
e

t
_
6. A(t) =
_
0 −1
1 0
_
, b(t) =
_
sec t
0
_
7. A(t) =
_
1 −1
1 1
_
, b(t) =
_
e
t
cos t
e
t
sin t
_
8. A(t) =
_
3t
2
t
0 t
−1
_
, b(t) =
_
4t
2
1
_
9. A(t) =
_
_
_
1 1 0
1 1 0
0 0 3
_
_
_, b(t) =
_
_
_
e
t
e
2t
te
3t
_
_
_
10. A(t) =
_
_
_
1 −1 1
0 0 1
0 −1 2
_
_
_
, b(t) =
_
_
_
0
e
t
e
t
_
_
_
298
Solve the initial-value problem.
11. x

=
_
3 −1
−1 3
_
x +
_
4e
2t
4e
4t
_
, x(0) =
_
1
1
_
12. x

=
_
3 −2
1 0
_
x +
_
−2e
−t
−2e
−t
_
, x(0) =
_
2
−1
_
299
6.6 Direction Fields and Phase Planes
There are many types of diﬀerential equations which do not have solutions which can
be easily written in terms of elementary functions such as exponentials, sines and cosine,
or even as integrals of such functions. Fortunately, when these equations are of ﬁrst or
second order one can still gain a good understanding of the behavior of their solutions
using geometric methods. In this section we discuss the basics of phase plane analysis, an
extension of the method of slope ﬁelds discussed in Section 2.4.
Let us again consider the diﬀerential equation
y

= f(y),
and think about it geometrically. The equality implies that the graph of a solution of this
equation in x −y plane, must have slope equal to f(y) at the point (x, y). For instance, for
the diﬀerential equation
y

= 2y(1 − y)
the slope of the solution at all points for which y = 1 equal to 0. Indeed, since the solution
satisfying the initial condition y(0) = 1 is the constant solution y(x) = 1, this is what
we expect. The following ﬁgure shows this solution, along with the solution satisfying
y(0) = 0.1.
Let us now turn to autonomous diﬀerential equations in two variables
x

1
= f(x
1
, x
2
) (1)
x

2
= g(x
1
, x
2
). (2)
Drawing slope ﬁelds for x
1
and x
2
separately will not work, since each slope ﬁeld depends
on both variables. However, note that any solution of this equation, (x
1
(t), x
2
(t)), paramet-
rically deﬁnes a curve in the x
1
−x
2
plane. Indeed, the vector (x

1
(t
0
), x

2
(t
0
)) is the tangent
vector to this parametric curve at the point (x
1
(t
0
), x
2
(t
0
)).
Therefore, we can sketch the solutions of the diﬀerential equation (1–2) by selecting a
number of points in the plane. At each point in the collection we draw a vector emanating
300
from the point, so that the vector (f(x
1
, x
2
), g(x
1
, x
2
)) emanates from the point (x
1
, x
2
).
The collection of these vectors is called a vector ﬁeld. In practice we may have to scale the
length of the vectors by a constant factor.
x

1
= x
1
+ 5x
2
x

2
= 3x
1
+ 3x
2
.
considered in Example 2 of section 6.4. The ﬁgure below shows vectors attached to points
spaced 0.1 units apart in the horizontal and vertical direction. For instance, to the point
with coordinates x
1
= 0.3, and x
2
= 0.5 we attach the vector with components x
1
+ 5x
2
=
0.3 +5 ×0.5 = 2.8 in the horizontal and 3x
1
+ 3x
2
= 3 ×0.5 + 3 ×0.3 = 2.4 in the vertical
direction. Similarly, the vector (−3.2, −3.6) emanates from the point (−0.7, −0.5). The
length of all vectors is scaled by an equal factor so that they all ﬁt in the ﬁgure.
1.0 0.5 0.5 1.0
1.0
0.5
0.5
1.0
Also shown are two solutions of the diﬀerential equation, one with initial condition
(x
1
(0), x
2
(0)) = (−0.5, 0.25), and the other with (x
1
(0), x
2
(0)) = (0.2, −0.1). The arrows
point in the direction in which the solutions are traversed. You can also see that the
solutions diverge from the origin in the direction of the eigenvector (1, 1) corresponding to
the positive eigenvalue.
Let us next consider the following equation
y

+ y

+ y = 0.
This equation is known as the linear damped pendulum. If we think of y as the angular
displacement from the resting position and y

as the angular velocity of the pendulum, then
the solutions of the equation will describe its oscillation around the equilibrium position at
y = 0. As we will see shortly, measures the amount of damping.
301
If we let x
1
= y, and x
2
= y

, then we obtain the following pair of equations
x

1
= x
2
(3)
x

2
= −x
2
−x
1
. (4)
This is a linear system, and we can solve it using the methods of Section 6.4. Instead, let
us look at the phase plane, and see what happens as we vary .
In the ﬁgure below you will see the vector ﬁeld and solutions with initial condition
x
1
= x
2
= 0.8, in both cases. In the left ﬁgure = 0.1, while on the right = 0.2. As you
could guess, a pendulum that is subject to more damping will oscillate fewer times before
reaching the equilibrium at the origin.
1.0 0.5 0.5 1.0
1.0
0.5
0.5
1.0
1.0 0.5 0.5 1.0
1.0
0.5
0.5
1.0
The linear system (3-4) only describes the behavior of the pendulum accurately when
the displacement from the rest position is small. For larger displacements it is necessary to
use the nonlinear equation
x

1
= x
2
(5)
x

2
= −x
2
− sin(x
1
). (6)
Although this equation may not look much more complicated than the previous one, it is
much more diﬃcult to solve. However, a phase plane analysis can be easily performed in this
case as well. In the ﬁgure below = 0.05, and the initial conditions are x
1
= 0, x
2
= 2π/3.
302
3 2 1 1 2 3
3
2
1
1
2
3
Exercises
1. Sketch the phase plane for the following systems
(a)
x

= 1
y

= y
(b)
x

= x
y

= y
(c)
x

= x
2
−1
y

= x − y
2. Solve the system for the damped pendulum (3-4) when = 0. Sketch the vector ﬁeld
and the solutions. What happens to the amplitude of the solutions as the pendulum
oscillates in this case?
Solution: In this case the solutions have the form x
1
(t) = C
1
cos t + C
2
sin t, x
2
=
C
2
cos −C
1
sint. They oscillate forever with constant amplitude.
1.0 0.5 0.5 1.0
1.0
0.5
0.5
1.0
3. Solve the equation for the damped linear pendulum (3-4). Show that when > 0
equations oscillate with diminishing amplitude.
303

It is easy to check that, for any two n-times diﬀerentiable functions y1 (x) and y2 (x), L[y1 (x) + y2 (x)] = L[y1 (x)] + L[y2 (x)] and, for any n-times diﬀerentiable function y and any constant c, L[cy(x)] = cL[y(x)]. Therefore, as introduced in Section 2.1, L is a linear diﬀerential operator. This is the real reason that equation (L) is said to be a linear diﬀerential equation. THEOREM 1. (Existence and Uniqueness Theorem) Given the nth - order linear equation (L). Let a be any point on the interval I, and let α0 , α1 , . . . , αn−1 be any n real numbers. Then the initial-value problem y (n) + pn−1 (x)y (n−1) + pn−2 (x)y (n−2) + · · · + p1 (x)y + p0 (x)y = f (x); y(a) = α0 , y (a) = α1 , . . . , y (n−1) (a) = αn−1 has a unique solution. Remark: We can solve any ﬁrst order linear diﬀerential equation, see Section 2.1. In contrast, there is no general method for solving second or higher order linear diﬀerential equations. However, as we saw in our study of second order equations, there are methods for solving certain special types of higher order linear equations and we shall look at these later in this section.

Homogeneous Equations
y (n) + pn−1 (x)y (n−1) + pn−2 (x)y (n−2) + · · · + p1 (x)y + p0 (x)y = 0. (H)

Note ﬁrst that the zero function, y(x) = 0 for all x ∈ I, (also denoted by y ≡ 0) is a solution of (H). As before, this solution is called the trivial solution . Obviously, our main interest is in ﬁnding nontrivial solutions. We now establish some essential facts about homogeneous equations. The proofs are identical to those given in Section 3.2 THEOREM 2. If y = y(x) is a solution of (H) and if c is any real number, then u(x) = cy(x) is also a solution of (H). Any constant multiple of a solution of (H) is also a solution of (H). THEOREM 3. If y = y1 (x) and y = y2 (x) are any two solutions of (H), then u(x) = y1 (x) + y2 (x) is also a solution of (H). 252

The sum of any two solutions of (H) is also a solution of (H). The general theorem, which combines and extends Theorems 1 and 2, is: THEOREM 4. If y = y1 (x), y = y2 (x), . . . , y = yk (x) are solutions of (H), and if c1, c2, . . . , ck are any k real numbers, then y(x) = c1y1 (x) + c2 y2 (x) + · · · + ck yk (x) is also a solution of (H). Any linear combination of solutions of (H) is also a solution of (H). Note that if k = n in the linear combination above, then the equation y(x) = c1y1 (x) + c2y2 (x) + · · · + cn yn (x) (1)

has the form of a general solution of equation (H). So the question is: If y1 , y2 , . . . , yn are solutions of (H), is the expression (1) the general solution of (H)? That is, can every solution of (H) be written as a linear combination of y1 , y2 , . . . , yn ? It turns out that (1) may or not be the general solution; it depends on the relation between the solutions y1 , y2 , . . . , yn . Suppose that y = y1 (x), y = y2 (x), . . . , y = yn (x) are solutions of (H). Under what conditions is (1) the general solution of (H)? Let u = u(x) be any solution of (H) and choose any point a ∈ I. Suppose that α0 = u(a), α1 = u (a), . . . , αn−1 = u(n−1) (a). Then u is a member of the n-parameter family (1) if and only if there are values for c1 c2, . . . , cn such that c1 y1 (a) + c2y2 (a) + · · · + cn yn (a) = α0 c1 y1 (a) + c2y2 (a) + · · · + cn yn (a) = α1 c1y1 (a) + c2 y2 (a) + · · · + cn yn (a) = α1 . . . c1 y 1
(n−1)

. . .

(a) + c2 y2

(n−1)

(n−1) (a) + · · · + cn yn (a) = αn−1

According to Cramer’s rule, we are guaranteed that this pair of equations has a solution c1, c2, . . . , cn if y1 (a) y2 (a) . . . yn (a) y1 (a) y2 (a) . . . yn (a) y1 (a) y2 (a) . . . yn (a) = 0. . . . . . . . . . (n−1) (n−1) (n−1) y1 (a) y2 (a) . . . yn (a) 253

Since a was chosen to be any point on I, we conclude that (1) is the general solution of (H) if and only if y1 (x) y1 (x) y1 (x) . . . y1
(n−1)

y2 (x) . . . y2 (x) . . . y2 (x) . . . . . .
(n−1)

yn (x) yn (x) yn (x) . . .
(n−1)

= 0 for all x ∈ I.

(2)

(x) y2

(x) . . . yn

(x)

As you know, this determinant is called the Wronskian of the solutions y1 , y2 , . . . , yn . THEOREM 5. Let y = y1 (x), y = y2 (x), . . . , y = yn (x) be solutions of equation (H), and let W (x) be their Wronskian. Exactly one of the following holds: (i) W (x) = 0 for all x ∈ I and y1 , y2 , . . . , yn are linearly dependent. (ii) W (x) = 0 for all x ∈ I which implies that y1 , y2 , . . . , yn are linearly independent and y(x) = c1y1 (x) + c2 y2 (x) + · · · + cn yn (x) is the general solution of (H). Example 1. (a) The functions y1 (x) = x, y2 (x) = x2 and y3 (x) = x3 are each solutions of 3 6 6 y − y + 2 y − 3 y = 0, x ∈ I = (0, ∞). (verify) x x x Their Wronskian is: x x 2 x3 1 2x 3x2 0 2 6x = 2x3 = 0 on I.

W (x) =

The general solution of the diﬀerential equation is y = c1x + c2 x2 + c3x3 . (b) The functions y1 (x) = ex , y2 (x) = e2x and y3 (x) = e3x are each solutions of y − 6y + 11y − 6y = 0, Their Wronskian is: ex e2x e3x ex 2e2x 3e3x ex 4e2x 9e3x = 2e6x = 0 on I. x ∈ I = (−∞, ∞). (verify)

W (x) =

The general solution of the diﬀerential equation is y = c1ex + c2e2x + c3 e3x. DEFINITION 1. (Fundamental Set) A set of n linearly independent solutions y = y1 (x), y = y2 (x), . . . , y = yn (x) of (H) is called a fundamental set of solutions. 254

A set of solutions y1 , y2 , . . . , yn of (H) is a fundamental set if and only if W [y1 , y2, . . . , yn ](x) = 0 for all x ∈ I.

Homogeneous Equations with Constant Coeﬃcients We have emphasized that there are no general methods for solving second or higher order linear diﬀerential equations. However, there are some special cases for which solution methods do exist. Here we consider such a case, linear equations with constant coeﬃcients. We’ll look ﬁrst at homogeneous equations. An nth -order linear homogeneous diﬀerential equation with constant coeﬃcients is an equation which can be written in the form y (n) + an−1 y (n−1) + an−2 y (n−2) + · · · + a1 y + a0 y = 0 where a0, a1, . . . , an−1 are real numbers. We have seen that ﬁrst- and second-order equations with constant coeﬃcients have solutions of the form y = erx . Thus, we’ll look for solutions of (3) of this form If y = erx , then y = r erx , y = r2erx , . . . , y (n−1) = rn−1 rrx , y (n) = rn erx . Substituting y and its derivatives into (3) gives rn erx + an−1 rn−1 erx + · · · + a1 r erx + a0 erx = 0 or erx rn + an−1 rn−1 + · · · + a1 r + a0 = 0. Since erx = 0 for all x, we conclude that y = erx is a solution of (3) if and only if rn + an−1 rn−1 + · · · + a1 r + a0 = 0. (4) (3)

DEFINITION 2. Given the diﬀerential equation (3). The corresponding polynomial equation p(r) = rn + an−1 rn−1 + · · · + a1 r + a0 = 0. is called the characteristic equation of (3); the nth -degree polynomial p(r) is called the characteristic polynomial. The roots of the characteristic equation are called the characteristic roots. Thus, we can ﬁnd solutions of the equation if we can ﬁnd the roots of the corresponding characteristic polynomial. Appendix 1 gives the basic facts about polynomials with real coeﬃcients. 255

. . y2 = er2 x . yk = erk x are linearly independent. SOLUTION The characteristic equation is r3 + 3r2 − r − 3 = 0 (r − 1)(r2 + 4r + 3) = 0 (r − 1)(r + 1)(r + 3) = 0 The characteristic roots are: r1 = 1. yk (x) = xk−1 eαx are linearly independent. . Here is the general result. We also showed that y3 (x) = erx and y4 (x) = xerx are linearly independent. y2 . . . . r3 = −3. the solutions form a fundamental set and y = C1 e4x + C2 e−x + C3 e−3x is the general solution of the equation. The functions y1 (x) = ex . Theorem 6 will be useful in showing that our sets of solutions are linearly independent. . Example 3. y2 (x) = e−x . If r1 . Since these are distinct exponential functions. . Since all of the ground work for solving linear equations with constant coeﬃcients was established in Chapter 3. r2. then the distinct exponential functions y1 = er1 x . y2 (x) = xeαx. y3 (x) = e−3x are solutions. .In Chapter 3 we proved that if r1 = r2 . Find the general solution of y + 3y − y − 3y = 0 given that r = 1 is a root of the characteristic polynomial.. Find the general solution of y (4) − 4y + 3y + 4y − 4y = 0 given that r = 2 is a root of multiplicity 2 of the characteristic polynomial. . SOLUTION The characteristic equation is r4 − 4r3 + 3r2 + 4r − 4 = 0 (r − 2)2(r2 − 1) = 0 (r − 2)2(r − 1)(r + 1) = 0 256 . yk ](x) = 0. . r2 = −1. 2. rk are distinct numbers (real or complex). THEOREM 6 1. the Wronskian W [y1. . Proof: In each case. . For any real number α the functions y1 (x) = eαx . . . we’ll simply give some examples here. Example 2. then y1 = er1 x and y2 = er2 x are linearly independent.

these solutions form a fundamental set and y = C1 ex + C2 e−x + C3 e2x + C4 xe2x is the general solution of the equation. Since y4 is distinct from y1 . r3 = 2. r4 = −2. and is independent of y3 . we conjecture that y4 = xe2x is also a solution since r = 2 is a “double” root. Then. Here are some examples. r2 = 1 − 2i. the characteristic equation. 1 − 2i is also a root. r2 = −1. You can verify that this is the case.The characteristic roots are: r1 = 1. Example 4. y2 . 257 . Again based on our work in Chapter 3. Since 1 + 2i is a root of p(r). we convert the complex exponentials u1 = e(1+2i)x and u2 (x) = e(1−2i)x into y1 = ex cos 2x and y2 = ex sin 2x. y1 . SOLUTION The characteristic equation is p(r) = r4 − 2r3 + r2 + 8r − 20 = 0. Find the general solution of y (4) − 2y + y + 8y − 20y = 0 given that r = 1 + 2i is a root of the characteristic polynomial. Since these roots are distinct. r3 = r4 = 2. Recovering a Homogeneous Diﬀerential Equation from Its Solutions Once you understand the relationship between the homogeneous equation. y3 (x) = e2x are solutions. the corresponding exponential functions are linearly independent. y4 = e−2x form a fundamental set and y = C1 ex cos 2x + C2 ex sin 2x + C3 e2x + C4 e−2x is the general solution of the equation. Therefore r4 − 2r3 + r2 + 8r − 20 = 0 (r2 − 2r + 5)(r2 − 4) = 0 (r2 − 2r + 5)(r − 2)(r + 2) = 0 The characteristic roots are: r1 = 1 + 2i. and r2 − 2r + 5 is a factor of p(r). it is easy to go from the diﬀerential equation to the solutions and from the solutions to the diﬀerential equation. The functions y1 (x) = ex . the roots of the characteristic equation and the solutions of the diﬀerential equation. y2 (x) = e−x . y2 . Based on our work in Chapter 3. y3 = e2x .

SOLUTION Since e−4x and xe−4x are solutions. pn−1 . −4 must be a double root of the characteristic equation.Example 5. Therefore. Example 6. similarly. e−3x a solution means that −3 is a root and r − (−3) = r + 3 is a factor of the characteristic polynomial. . is called the reduced equation of equation (N). . . Continuing the analogy with second order linear equations. f are continuous functions on an interval I. 258 (H) (N) . The solution e2x cos x indicates that 2 + i is a root of the characteristic equation. . the corresponding homogeneous equation y (n) + pn−1 (x)y (n−1) + pn−2 (x)y (n−2) + · · · + p1 (x)y + p0 (x)y = 0. since e2x is a solution. linear. Find a fourth order. So 2 − i must also be a root (and y4 (x) = e2x sin x must also be a solution). Therefore. linear. which expands to r3 + 6r2 − 32 = 0 Nonhomogeneous Equations Now we’ll consider linear nonhomogeneous equations: y (n) + pn−1 (x)y (n−1) + pn−2 (x)y (n−2) + · · · + p1(x)y + p0(x)y = f (x) where p0. homogeneous diﬀerential equation with constant coeﬃcients that has y = C1 e−4x + C2 x e−4x + C3 e2x as its general solution. SOLUTION Since e2x is a solution. homogeneous diﬀerential equation with constant coeﬃcients that has the functions y1 (x) = e2x . 2 must be a root of the characteristic equation and r − 2 must be a factor of the characteristic polynomial. 2 is a root of the characteristic equation. the characteristic equation is (r + 4)2(r − 2) = 0 and the diﬀerential equation is y + 6y − 32y = 0. p1. Find a third order. the diﬀerential equation is y (4) − 3y − 5y + 29y − 30y = 0. y2 (x) = e−3x and y3 (x) = e2x cos x as solutions. Thus the characteristic equation must be (r − 2)(r + 3)(r − [2 + i)](r − [2 − i]) = (r2 + r − 6)(r2 − 4r + 5) = r4 − 3r3 − 5r2 + 29r − 30 = 0.

The following theorems are exactly the same as Theorems 1 and 2 in Section 3. and y (n) + pn−1 (x)y (n−1) + pn−2 (x)y (n−2) + · · · + p1(x)y + p0(x)y = g(x) respectively. cn such that u(x) = c1 y1 (x) + c2y2 (x) + · · · + cn yn (x) + z(x) According to Theorem 8. then there exist constants c1. . .4. then y = C1 y1 (x) + C2 y2 (x) + · · · + Cn yn (x) + z(x) (5) represents the set of all solutions of (N). y2 (x). then y(x) = z1 (x) − z2 (x) is a solution of equation (H). . the diﬀerence of any two solutions of the nonhomogeneous equation (N) is a solution of its reduced equation (H). The next theorem gives the “structure” of the set of solutions of (N). c2. y2 (x). . yn (x)} is a fundamental set of solutions of the reduced equation (H) and if z = z(x) is a particular solution of (N). particular solution of (N) The superposition principle also holds: THEOREM 9 If z = zf (x) and z = zg (x) are particular solutions of y (n) + pn−1 (x)y (n−1) + pn−2 (x)y (n−2) + · · · + p1 (x)y + p0 (x)y = f (x). That is. . and exactly the same proofs can be used. THEOREM 7 If z = z1 (x) and z = z2 (x) are solutions of (N). THEOREM 8 Let y = y1 (x). . . yn (x) be a fundamental set of solutions of the reduced equation (H) and let z = z(x) be a particular solution of (N). . then z(x) = zf (x) + zg (x) is a particular solution of y (n) + pn−1 (x)y (n−1) + pn−2 (x)y (n−2) + · · · + p1 (x)y + p0 (x)y = f (x) + g(x). Another way to look at (5) is: The general solution of (N) consists of the general solution of the reduced equation (H) plus a particular solution of (N): y general solution of (N) = C1 y1 (x) + C2 y2 (x) + · · · + Cn yn (x) + general solution of (H) z(x). . if {y1 (x). . . 259 . If u = u(x) is any solution of (N). . (5) is the general solution of (N).

an−1 are constants and the nonhomogeneous term f is a polynomial. a cosine. a1 . . Find the general solution of y − 2y − 5y + 6y = 4 − 2e2x . .Finding a Particular Solution The method of variation of parameters can be extended to higher-order linear nonhomogeneous equations but the calculations become quite involved. The characteristic equation is r3 − 2r2 − 5r + 6 = (r − 1)(r + 2)(r − 3) = 0. Instead we’ll look at the special equations for which the method of undetermined coeﬃcients can be used.5. . slightly modiﬁed to apply to equations of order greater than 2: Table 1 A particular solution of y (n) + an−1 y (n−1) + · · · + a1 y + a0 y = f (x) If f (x) = cerx c cos βx + d sin βx ceαx cos βx + deαx sin βx try z(x) =* Aerx z(x) = A cos βx + B sin βx z(x) = Aeαx cos βx + Beαx sin βx *Note: If z satisﬁes the reduced equation. or a combination of such functions. will give a particular solution The method of undetermined coeﬃcients is applied in exactly the same manner as in Section 3. Here is the basic table from Section 3. an exponential function. then xk z. (*) 260 . the method of undetermined coeﬃcients can be applied only to nonhomogeneous equations of the form y (n) + an−1 y (n−1) + an−2 y (n−2) + · · · + a1 y + a0 (x)y = f (x). where a0 . Example 7. where k is the least integer such that xk z does not satisfy the reduced equation.5. a sine. SOLUTION First we solve the reduced equation y − 2y − 5y + 6y = 0. As we saw in Chapter 3. .

z = 2Be2x . The characteristic equation is r4 + r3 − 3r2 − 5r − 2 = (r + 1)3 (r − 2) = 0. they are linearly independent and y = C1 ex + C2 e−2x + C3 e3x is the general solution of the reduced equation. Setting z + 6z + 9z = 4 − 2e2x gives 6A = 4 Thus. The derivatives of z are: z = A + Be2x . The roots are r1 = r2 = r3 = −1. 2 . y3 are linearly independent. and since e2x 261 (**) 2 3 + 1 e2x . Substituting into the left side of (*). it follows that y1 . y2 . r3 = 3 and the corresponding solutions of the reduced equation are y1 = ex . r4 = 2 and the corresponding solutions of the reduced equation are y1 = e−x . y3 = e3x . z = 4Be2x . y2 = e−2x . we get z − 2z − 5z + 6z = 8Be2x − 2 4Be2x − 5 2Be2x + 6 A + Be2x = 6A − 4Be2x . z = 8Be2x . r2 = −2. y3 = x2e−x . y4 = e2x. z(x) = 2 3 and − 4B = −2 which implies A = 2 3 and B = 1 . 2 + 1 e2x is a particular solution of (*). y2 = xe−x . The table indicates that we should look for a solution of the form z = A + Be2x .The roots are r1 = 1. Since distinct powers of x are linearly independent. Since these are distinct exponential functions. Find the general solution of y (4) + y − 3y − 5y − 2y = 6e−x SOLUTION First we solve the reduced equation y (4) + y − 3y − 5y − 2y = 0. 2 The general solution of (*) is y = C1 ex + C2 e−2x + C3 e3x + Example 8. Next we ﬁnd a particular solution of the nonhomogeneous equation.

y3 . y4 are linearly independent. the general solution of the reduced equation is y = C1 e−x + C2 xe−x + C3 x2 e−x + C4 e2x . Thus. Thus. Next we ﬁnd a particular solution of the nonhomogeneous equation. . y2 . The table indicates that the form of a particular solution z of the nonhomogeneous equation is z = Ax3 ex + B cos 2x + C sin 2x.and e−x are independent. SOLUTION To get the proper form for a particular solution of the equation we need to ﬁnd the solutions of the reduced equation: y − 3y + 3y − y = 0. Thus. the roots are r1 = r2 = r3 = 1. The derivatives of z are: z = Ax3 e−x z z z = 3Ax2 e−x − Ax3 e−x = 6Axe−x − 6Ax2 e−x + Ax3e−x = 6Ae−x − 18Axe−x + 9Ax2e−x − Ax3 e−x z (4) = −24Ae−x + 36Axe−x − 12Ax2e−x + Ax3 e−x Substituting z and its derivatives into the left side of (**). 262 1 3 x3 e−x . y3 = x2ex . we get z (4) + z − 3z − 5z − 2z = −18Ae−x . and the corresponding solutions are y1 = ex . The general solution of (**) is y = C1 e−x + C2 xe−x + C3 x2 e−x + C4 e2x − Example 9. y2 = xex . we have −18Ae−x = 6e−x which implies A = − 1 and z = − 1 x2 e−x is a particular 3 3 solution of (**). we can conclude that y1 . Give the form of a particular solution of y − 3y + 3y − y = 4ex − 3 cos 2x. The characteristic equation is r3 − 3r3 + 3r − 1 = (r − 1)3 = 0. The table indicates that we should look for a solution of the form z = Ax3 e−x .

and the corresponding solutions are y1 = e2x. y (0) = 0. y(0) = y (0) = 0. 5. y − 6y + 11y − 6y = 0. r1 = 1 + 2i is a root of the characteristic equation. y (4) − 2y + y + 8y − 20y = 0.Example 10. Exercises 6. y + y − 4y − 4y = 0. y(0) = −1. y (0) = 2. 2. 3. r1 = −2 is a root of the characteristic equation. 10. y (0) = 0. y3 = cos 2x.1 Find the general solution of the homogeneous equation 1. r3 = 2i. y + y = 0. y (0) = 2. y (4) − 4y + 4y = 0. 6. y2 = e−2 x. y (4) − 4y + 14y − 4y + 13y = 0. r1 = 1 is a root of the characteristic equation. The table indicates that the form of a particular solution z of the nonhomogeneous equation is z = Axe2x + Be3x + Cx cos 2x + Dx sin 2x. y − y + 9y − 9y = 0. y + y + 10y = 0. r2 = −2. the roots are r1 = 2. Thus. r1 = −1 is a root of the characteristic equation. 263 . SOLUTION To get the proper form for a particular solution of the equation we need to ﬁnd the solutions of the reduced equation: y (4) − 16y = 0. 4. y(0) = 0. r4 = −2i. y (0) = 1. y (5) − 3y (4) + 3y − 3y + 2y = 0. 7. r1 = i is a root of the characteristic equation. 8. r1 = i is a root of the characteristic equation. The characteristic equation is r4 − 16 = (r2 − 4)(r2 + 4) = (r − 2)(r + 2)(r2 + 4) = 0. 9. Find the solution of the initial-value problem. y (6) − y = 0. y4 = sin 2x. 11. y (0) = 2. y (4) − 3y − 4y = 0. Give the form of a particular solution of y (4) − 16y = 4e2x − 2e3x + 5 sin 2x + 2 cos 2x.

y − 2y − 5y + 6y = 2ex . 2y (4) − y − 9y + 4y + 4y = 0. Find the homogeneous equation with constant coeﬃcients of least order that has the given function as a solution. 17. 20. 22. y = C1 e4x + C2 x + C3 + C4 ex cos 2x + C5 ex sin 2x. y = 2e2x + 3 sin x − x. y (0) = 2. Find the homogeneous equation with constant coeﬃcients that has the given general solution. 14. 25. y (1) = 2. 26. y(0) = y (0) = y (0) = 0. y (0) = 0. y (4) + 2y + y = 6 + cos 2x. 21. 264 . Find the solution of the initial-value problem. 19. y = C1 e2x + C2 xe2x + C3 x2 e2x + C4 . y + y + y + y = ex + 4. 16. y = C1 e3x + C2 e−x + C3 cos x + C4 sin x + C5 . y = 2ex − 3e−x + 2x. y(0) = 0. y − 8y = e2x . 23.12. y − y − y + y = 2e−x + 4e2x . y = 3xe−x + e−x cos 2x + 1. y (0) = −1. y (0) = 0. y (4) − y = 2ex + cos x. 18. y = 3e3x − 2 cos 2x + 4 sin x − 3. 13. Find the general solution of the nonhomogeneous equation. y = C1 e−3x + C2 xe−3x + C3 ex cos 3x + C4 ex sin 3x. 15. 24. y(0) = 2.

f2 (t). . . then x→c t→c lim v(t) = lim f1 (t). and v (t) = (f1 (t). .2 Systems of Linear Diﬀerential Equations Introduction Up to this point the entries in a vector or matrix have been real numbers.n. . Let v(t) = (f1 (t). Thus v is the vector function whose components are the derivatives of the components of v. That is v(t) dt = f1 (t) dt. fn (t)) be a vector function whose components are deﬁned on an interval I. there are operations on functions other than arithmetic operations that we have to deﬁne for vector and matrix functions. . Systems of Linear diﬀerential Equations Consider the third-order linear diﬀerential equation y + p(t)y + q(t)y + r(t)y = f (t) 265 . diﬀerentiation and integration of matrix functions are done in exactly the same way — component-wise. . . 2.6. a matrix whose entries are functions is called a matrix function. fn (t) dt . The operations of vector and matrix addition. . . f2 (t). In this section. α2 . diﬀerentiation. namely the operations from calculus (limits. . . . Similarly. . and in the following sections. However. lim fn (t) = (α1 . Integral: Since diﬀerentiation of vector functions is done component-wise. integration must also be component-wise. . A vector whose components are functions is called a vector-valued function or vector function. we will be dealing with vectors and matrices whose entries are functions. . t→c t→c t→c Limits of vector functions are calculated “component-wise. If lim fi (t) = αi exists for i = 1. . fn are diﬀerentiable on I. . αn ) . f2 (t) dt. lim f2 (t). . . .” Derivative: If f1 . fn (t) . . . Calculus of matrix functions: Limits. . multiplication by a number and matrix multiplication for vector and matrix functions are exactly as deﬁned in Chapter 5 so there is nothing new in terms of arithmetic. f2 . integration). . Limit: Let c ∈ I. . then v is diﬀerentiable on I. The operations from calculus are deﬁned in a natural way. . .

Let x1 = y. x3 . Solving the equation for y . we have y = −12y + 8y + y + 2et . ﬁrst-order diﬀerential equations: x1 = a11(t)x1 + a12 (t)x2 + a13(t)x3(t) + b1(t) x2 = a21(t)x1 + a22 (t)x2 + a23(t)x3(t) + b2(t) x3 = a31(t)x1 + a32 (t)x2 + a33(t)x3(t) + b3(t). as follows: x1 = y x2 = x1 (= y ) x3 = x2 (= y ) Then y = x3 = −r(t)x1 − q(t)x2 − p(t)x3 + f (t) and the third-order equation can be written equivalently as the system of three ﬁrst-order equations: x 1 = x2 x2 = x 3 x3 = −r(t)x1 − q(t)x2 − p(t)x3 + f (t). (a) Consider the third-order nonhomgeneous equation y − y − 8y + 12y = 2et. q. Note that this system is just a very special case of the “general” system of three. Introduce new dependent variables x1 . Then y = x3 = −12x1 + 8x2 + x3 + 2et and the equation converts to the equivalent system: x 1 = x2 x2 = x 3 x3 = −12x1 + 8x2 + x3 + 2et . x2 = x3 (= y ). Solving the equation for y . we get y = −r(t)y − q(t)y − p(t)y + f (t). x2 . f are continuous functions on some interval I. 266 .where p. r. x1 = x2 (= y ). Example 1.

b1(t). a12(t). . . we let x1 = y. b2(t). 2 t t To convert this equation to an equivalent system. . . . a1n (t). . (S) xn = an1 (t)x1 + an2 (t)x2 + · · · + ann (t)xn (t) + bn (t) is called a ﬁrst-order linear diﬀerential system. . . . . .  . . The system (S) is homogeneous if b1(t) ≡ b2(t) ≡ · · · ≡ bn (t) ≡ 0 on I. a11(t)x1 + a12 (t)x2 + · · · + a1n (t)xn (t) + b1(t) a21(t)x1 + a22 (t)x2 + · · · + a2n (t)xn (t) + b2(t) . x1 = x2 (= y ). that is. . bn (t) be continuous functions on some interval I. . ann (t). The system of n ﬁrst-order diﬀerential equations x1 = x2 = .(b) Consider the second-order homogeneous equation t2 y − ty − 3y = 0. . Then we have x1 = x2 x2 = 3 1 x1 + x 2 t2 t which is just a special case of the general system of two ﬁrst-order diﬀerential equations: x1 = a11(t)x1 + a12(t)x2 + b1(t) x2 = a21(t)x1 + a22(t)x2 + b2(t). we get y = 3 1 y+ y. (S) is nonhomogeneous if the functions bi (t) are not all identically zero on I. . . . General Theory Let a11(t).   an1 (t) an2 (t) · · · ann (t) 267 . . . a21(t). . . Let A(t) be the n × n matrix   a11 (t) a12(t) · · · a1n (t)    a21 (t) a22(t) · · · a2n (t)    A(t) =  . (S) is nonhomgeneous if there is at least one point a ∈ I and at least one function bi (t) such that bi(a) = 0. . Solving this equation for y .

The vector-matrix form of the system in Example 1(b) is: x = 0 1 2 3/t 1/t x+ 0 0 = 0 1 2 3/t 1/t x. 2et −12 8 1 x3 a nonhomogeneous system.  .     b(t) =     b1(t) b2(t) . a homogeneous system. The vector-matrix form of the system in Example 1 (a) is:       0 x1 0 1 0       x = 0 0 1 x +  0 .  . −r(t) −q(t) −p(t) x3 A solution of the linear diﬀerential system (S) is a diﬀerentiable vector function   v1 (t)    v2 (t)   v(t) =  . where x= x1 x2 . . Example 2. 268 . where x =  x2 . bn (t)    . The vector-matrix form of y + p(t)y + q(t)y + r(t)y = 0 is:     0 0 1 x1     x = 0 0 1 x where x =  x2  .   vn (t) that satisﬁes (S) on the interval I. (S) The matrix A(t) is called the matrix of coeﬃcients or the coeﬃcient matrix of the system. .  .and let x and b(t) be the vectors  x1   x2 x= .  .   Then (S) can be written in the vector-matrix form x = A(t) x + b(t). xn    .

Example 3. Verify that v(t) =  2e2t  +  1 et  is a solution of the nonhomogeneous 2 1 t 4e2t 2e system     0 0 1 0     x = 0 0 1 x +  0  2et −12 8 1 of Example 1 (a). SOLUTION v =    1 t e e2t 2  2t   1 t   2e  +  2 e  1 t 4e2t 2e     1 t 2e2t 2e  2t   1 t   4e  +  2 e  1 t 8e2t 2e         1 t 0 0 1 0 e2t 2e         0 0 1   2e2t  +  1 et  +  0   2 1 t 2et −12 8 1 4e2t 2e         1 t e2t 0 1 0 0 e 0 1 0  2        0 0 1  1 et  +  0  0 0 1  2e2t  +   2 1 t −12 8 1 2et −12 8 1 4e2t 2e           1 t 1 t 0 2e2t 2e2t 2e 2e       2t   1 t    4e  +  2 e  +  0  =  4e2t  +  1 et . t3 3t2 = v is a solution. 269 . SOLUTION v = t3 3t2 = 0 1 2 1/t 3/t 3t2 6t = ? 0 1 2 3/t 1/t 3t2 6t . 2 1 t 2et 8e2t − 3 et 8e2t 2 2e  = = ? = ? = ? v is a solution. Verify that v(t) = t3 3t2 is a solution of the homogeneous system x1 x2 x = of Example 1 (b).     1 t e2t e    2  Example 4.

and let α1. (Existence and Uniqueness Theorem) Let a be any point on the interval I. m.  0  0 . k.   et 2 3 −1   −t  7. b(t) = . 6. Then the initial-value problem   α1    α2  x = A(t) x + b(t). αn be any n real numbers. x(a) =  . b(t) =  2t 3 t Write the system in vector-matrix form. c.THEOREM 1. Write the system of equations corresponding to x = A(t)x + b(t).   .  αn has a unique solution. 9. 3.8 a matrix function A and a vector function b are given. Exercises 6.2 Convert the diﬀerential equation into a system of ﬁrst-order equations. my + cy + ky = cos λt. . . A(t) = . b(t) =  2e 2 3 0 e2t    3t t − 1 t2    8. .   .  . y − ty + 3y = sin 2t. y − y + y = et . In Exercises 5 . A(t) =  −2 0 1 . 1 x1 = −2x1 + x2 + sin t x2 = x1 − 3x2 − 2 cos t 270 . 4. λ are constants. α2 . 5. y + y = 2e− 2t. 2. A(t) = 2 −1 3 0 t3 t cos t 2  . 1. b(t) = e2t 2e−t t−1 2  . . A(t) =  −2 t − 2 t .

Verify that w(t) =  e2t + 2te2t  is a solution of the homogeneous system 4e2t + 4te2t associated with the system in Example 1 (a).  −2e−2t   17. Verify that v(t) = − sin t − cos t − 2 sin t x =  −2 1 −3 2 is a solution of the system 0 2 sin t x+ .10. Verify that v(t) =  0  is a solution of the system 3e−2t   1 −3 2   x =  0 −1 0 x. Verify that u(t) = is a solution of the system in Example 1 (b).    1 t e−3t 2e     14. x1 = et x1 − e2tx2 x2 = e−t x1 − 3et x2 11. x1 = t2 x1 + x2 − tx3 + 3 x2 = −3et x2 + 2x3 − 2e−2t x3 = 2x1 + t2 x2 + 4x3 t−1 −t−2  13. 0 −1 −2 271 . x1 = 2x1 + x2 + 3x3 + 3e2t x2 = x1 − 3x2 − 2 cos t x3 = 2x1 − x2 + 4x3 + t 12.   te2t   15. 16. Verify that u(t) =  −3e−3t  +  1 et  is a solution of the system in Example 2 1 t 9e−3t 2e 1 (a).

 (H) a21 (t)x1 + a22(t)x2 + · · · + a2n (t)xn (t) . .3 Homogeneous Systems In this section we give the basic theory for linear homogeneous systems. This is a general treatment. Linear Dependence and Linear Independence of Vector Functions This subsection is an extension of the discussion of linear dependence and linear independence of functions in Section 5. If v = v(t) is a solution of (H) and α is any real number.7. c2. and if c1. . As before. . This “theory” is simply a repetition of the results given in Sections 3. . Of course. . . .2 and 6. . We will return to linear diﬀerential systems after we treat the general case of linear dependence/independence of vector functions. then v(t) = c1 v1(t) + c2 v2(t) + · · · + ck vk (t) is a solution of (H). then u(t) = v1(t) + v2 (t) is also a solution of (H). ck are real numbers.6. (H) xn = an1 (t)x1 + an2 (t)x2 + · · · + ann (t)xn (t) 0 this solution is called the trivial solution.   These two theorems can be combined and extended to: THEOREM 3. If v1 = v1 (t) and v2 = v2(t) are solutions of (H). then u(t) = αv(t) is also a solution of (H). .   Note ﬁrst that the zero vector z(t) ≡ 0 =       is a solution of (H). 272 . we are interested in ﬁnding nontrivial solutions. . phrased this time in terms of the system x1 = a11 (t)x1 + a12(t)x2 + · · · + a1n (t)xn (t) x2 = .  0 0 . THEOREM 2. the sum of any two solutions of (H) is a solution of (H). . or x = A(t)x. v2 = v2(t). . vk = vk (t) are solutions of (H).1. any constant multiple of a solution of (H) is a solution of (H). . If v1 = v1(t). any linear combination of solutions of (H) is also a solution of (H). . THEOREM 1.

. v2(t). . not all zero. .DEFINITION 1.. .7 Example 1. Section 6. Let v1(t). then the vector functions are linearly independent on I. v2(t) =  −3e−3t . . . .  vn1 (t)    . . and let W (t)) be their Wronskian. If W (t) = 0 for at least one t ∈ I. . Let  v11 (t)   v21 (t) v1 (t) =  . vn2 (t)    . .  . then v11 (t) v12(t) · · · v1n (t) v21 (t) v22(t) · · · v2n (t) . . An example is given in Section 5. . . . . vn )(t). . vnk (t)       be n-component vector functions deﬁned on some interval I. W (t) ≡ 0 does not imply that the vector functions are linearly dependent. .) (b) The Wronskian of the vector functions     e2t e−3t     v1(t) =  2e2t . t3 3t2 and v(t) = t−1 −t−2 . . . .     vk (t) =     v1k (t) v2k (t) . . THEOREM 4. (Note: u and v are solutions of the homogeneous system in Example 3. vn (t) be n. ck . . . n-component vector functions deﬁned on an interval I. . COROLLARY Let v1(t). . . n-component vector functions deﬁned on an interval I. . . If the vectors are linearly dependent. (a) The Wronskian of the vector functions u(t) = is: W (x) = t3 t−1 3t2 −t−2 = −4t. vn (t) be n. It is important to understand that in this general case. Proof: The method of proof of Theorem 1 in Section 5. v2 (t). 4e2t 9e−3t 273  te2t   v3 (t) =  e2t + 2te2t  4e2t + 4te2t  . . denote the Wronskian. . . . such that c1 v1(t) + c2 v2(t) + · · · + ck vk (t) ≡ 0 on I. . c2.7 applies here.2. . v2. or simply W (t). . v2. We will let W (v1 . vn . The vectors are linearly dependent on I if there exist k real numbers c1.     v2(t) =     v12(t) v22(t) . As before. . . . . the determinant in Theorem 4 is called the Wronskian of the vector functions v1. Otherwise the vectors are linearly independent on I. vn1 (t) vn2 (t) · · · vnn (t) ≡ 0 on I.

is: W (x) = e2t e−3t te2t 2e2t −3e−3t e2t + 2te2t 4e2t 9e−3t 4e2t + 4te2t = −25et . Let a1 be the ﬁrst column of A.2. let v2(t) be the solution of (H) such that v2 (a) = a2 . . A set {v1. . . vn )(a) = det A = 0. . . It is easy to construct sets of n linearly independent solutions of (H). . vn (t) be n solutions of (H). DEFINITION 2. vn (t) be n linearly independent solutions of (H). . Back to Linear Diﬀerential Systems When the vector functions v1. A fundamental set of solutions is also called a solution basis for (H). v2. . and Theorem 6. vn are n solutions of the homogeneous system (H) we get a much stronger version of Theorem 4. . . v2(t). v2 (t). . . . . . Section 3. . . v2 . Exactly one of the following holds: 1. Therefore. Simply pick any point a ∈ I and any nonsingular n × n matrix A. Compare this result with Theorem 4. . 2. Let v1 (t). . . . then the n × n 274 . If {v1. W (t) = 0 for all t ∈ I and the solutions are linearly independent. vn . . . . . That is. The existence and uniqueness theorem guarantees the existence of these solutions. . . v2 . vn )(t) = 0 for all t ∈ I and the solutions are linearly independent. . C2. Section 6. . Cn such that u(t) = C1 v1(t) + C2 v2(t) + · · · + Cn vn (t). . Let u(t) be any solution of (H). . vn )(t) ≡ 0 on I and the solutions are linearly dependent. Then there exists a unique set of constants C1 . v2. . W (v1. . . v2.1. . THEOREM 6. every solution of (H) can be written as a unique linear combination of v1. THEOREM 5. Now W (v1. and let vn (t) be the solution of (H) such that vn = an . . vn } is a fundamental set of solutions of (H). W (v1. . . Then let v1(t) be the solution of (H) such that v1(a) = a1 . Let v1(t). a2 the second column of A. . . . the identity matrix.. A particularly nice set of n linearly independent solutions is obtained by choosing A = In . . vn } of n linearly independent solutions of (H) is called a fundamental set of solutions. . v2 . v2. and so on.

. where C1 . . .  . . . is the general solution of (H). v= . . . . . . . The vectors u(t) = t3 3t2 and v(t) = t−1 −t−2 form a fundamental set of solutions of x = The matrix V (t) = is a fundamental matrix for the system and x(t) = C1 t3 3t2 + C2 t−1 −t−2 = t3 t−1 3t2 −t−2 C1 C2 t3 t−1 2 3t −t−2 0 1 2 3/t 1/t x1 x2 . . Note that the general solution can also be written in terms of the fundamental matrix:    v11(t) v12(t) · · · v1n (t) C1     v21(t) v22(t) · · · v2n (t)  C2    . . 2t − 1 −t −t + 1 2t 275 1. . Let {v1(t). Exercises 6. . . C1 v1(t) + C2 v2(t) + · · · + Cn vn (t) =  . is the general solution of the system. v2 (t). . Then x(t) = C1 v1 (t) + C2 v2(t) + · · · + Cn vn (t). .matrix   V (t) =     v11 (t) v12(t) · · · v1n (t) v21 (t) v22(t) · · · v2n (t) .  vn1 (t) vn2 (t) · · · vnn (t) Cn Example 2. . Cn are arbitrary constants. C2. . .   . vn1 (t) vn2 (t) · · · vnn (t)       (the vectors v1. vn are the columns of V ) is called a fundamental matrix for (H). vn (t)} be a fundamental set of solutions of (H). .3 Determine whether or not the vector functions are linearly dependent. . . .  . u = . . . v2. DEFINITION 3.  = V (t)C.

−2 2 2       cos t cos t 0       6.  t+1 −2 . 276 . 0 sin t sin t       −et 0 et       7.2. v= 0 0    . v= sin t cos t −2t + 4t2 2t et 1  3. u =  t . 5 −3 2 0 x. u = 2−t t et 0  . v =  0 . Show that V = AV. u = . w =  t − 2 .    w=   sin t) 0 0 et      t t+2 9. u =  −et . . . u = cos t sin t t − t2 −t tet t  . . (b) Let V be the corresponding fundamental matrix. (a) Show that u. u = v=     2−t t 2+t       5. w= 0 et    . (c) Give the general solution of the system. v =  −1 . w =  et . et −et 0 8. w =  cos t . v= 4. u = v= w=    10. u =  sin t . v are a fundamental set of solutions of the system. u =   cos (t + π/4) 0 0 0   v=  cos t) 0 0 et 11. Given the linear diﬀerential system x = Let u= e2t e2t and v= 3e3t 2e3t . v =  2et .

y (t) Note: This result holds for linear equations of all orders.  0   (b) Find the solution of the system that satisﬁes x(0) =  1 .   y(t)   then x(t) =  y (t)  is a solution of the system. The linear diﬀerential system equivalent to the equation y + p(t)y + q(t)y + r(t)y = 0 is:      x1 0 0 1 x1      0 0 1   x2  . Let V be the matrix function V (t) = cos 2t sin 2t sin 2t − cos 2t 1 0 . it is important to understand that solutions of systems which are not converted from equations do not have this special form. (b) Find the solution of the system that satisﬁes x(0) = 13. 2 14.2. 277 .(d) Find the solution of the system that satisﬁes x(0) = 12. 0 0 0  2 3 .  x2  =  −r(t) −q(t) −p(t) x3 x3  (See Example 2. However. Let V be the matrix function  0 4te−t e−t   V (t) =  1 e−t 0  1 0 0 (a) Verify that V is a fundamental matrix for the system   −1 4 −4   x =  0 −1 1 x. Section 6. (a) Verify that V is a fundamental matrix for the system x = 0 −2 2 0 x.) Show that if y = y(t) is a solution of the equation.

Find two linearly independent solutions of x = 0 1 2 −6/t −6/t x. and let V (t) be the corresponding fundamental matrix. . −18 3 4 17. vn (t)} be a fundamental set of solutions of (H). Find three linearly independent solutions  0  x = 0 4 of  1 0  0 1 x. Show that V satisﬁes the matrix diﬀerential equation X = A(t)X.15. . Find three linearly independent solutions of   0 1 0   x = 0 0 1 x. 18. 278 . 4 −1 16. Let {v1(t). . v2(t). . 19. Find two linearly independent solutions of x = 0 1 −4/t2 3/t x.

e−t . − −  −  xn · · · ann (1) Example 1.6. . . . . . . . Consider the 3rd order linear homogeneous diﬀerential equation y + 2y − 5y − 6y = 0. 279 . .    x1 a11 a12  x   a  2   21 a22  =  −   − − xn an1 an2 The system in vector-matrix form is   x1 · · · a1n · · · a2n  x2      or x = Ax. The corresponding linear homogeneous system is   0 1 0   x = 0 0 1 x 6 5 −2 and    1 e2t   2t  2t  v1(t) =  2e  = e  2  4 4e2t  is a solution vector (see Problem 14. Exercises 6. The characteristic equation is: r3 + 2r2 − 5r − 6 = (r − 2)(r + 1)(r + 3) = 0 and {e2t. ann are constants.4 Homogeneous Systems with Constant Coeﬃcients A homogeneous system with constant coeﬃcients is a linear diﬀerential system having the form x1 = a11x1 + a12x2 + · · · + a1n xn x2 = a21x1 + a22x2 + · · · + a2n xn . xn = an1 x1 + an2 x2 + · · · + ann xn where a11. e−3t} is a solution basis for the equation. a12.         1 1 e−t e−3t         v2(t) =  −e−t  = e−t  −1  and v3(t) =  −3e−3t  = e−3t  −3  1 9 e−t 9e−3t are solution vectors. .3). Similarly.

−1 4 an eigenvalue 280 . Then v (t) = λeλtc. 1 5 3 3 x   1    2 . Substituting into (1). Set v(t) = eλt c. 0 1  2  = 2 2 . 9 Example 3. and −3 is 1   1   of A with corresponding eigenvector  −3 .  0 0 1 4 4 6 5 −2 1 5 −2      1 1 0 1 0      1   −3  = −3  −3  .  0 0 6 5 −2 9 9   0 1 0   2 is an eigenvalue of A =  0 0 1  with corresponding eigenvector 6 5 −2   1   is an eigenvalue of A with corresponding eigenvector  −1 . The latter equation is an eigenvalue-eigenvector equation for A. note that          0 1 0 1 1 1 0 1 1          1   −1  = −1  −1  .Solutions: Eigenvalues and Eigenvectors Example 1 suggests that homogeneous systems with constant coeﬃcients might have solution vectors of the form v(t) = eλt c. SOLUTION First we ﬁnd the eigenvalues: det(A − λI) = 1−λ 5 3 3−λ = (λ − 6)(λ + 2). Thus. we get: λeλtc = Aeλt c which implies Ac = λ c. Example  0   0 6 and 2. Find a fundamental set of solution vectors of x = and give the general solution of the system. we look for solutions of the form v(t) = eλt c where λ is an eigenvalue of A and c is a corresponding eigenvector. Returning to Example 1. for some number λ and some constant vector c.

Thus v1 and v2 are linearly independent. we get the eigenvector Repeating the process for λ2 = −2. λ2 = 1. The eigenvalues are λ1 = 2. 5 −3 which implies x1 = x2. For λ1 = 6 we have: (A − 6I)x = −5 5 3 −3 x1 x2 = 0 0 1 1 . The general solution of the system is x(t) = C1 e6t 1 1 + C2 e−2t 5 −3 . we get the eigenvector 1 1 5 −3 . Example 4.The eigenvalues are λ1 = 6 and λ2 = −2. Setting x2 = 1. Find a fundamental set of solution vectors of   3 −1 −1   x =  −12 0 5 x 4 −2 −1  1   and ﬁnd the solution that satisﬁes the initial condition x(0) =  0 . 1 SOLUTION 3 − λ −1 −1 −12 −λ 5 4 −2 −1 − λ = −λ3 + 2λ2 + λ − 2. they form a fundamental set of solutions. λ3 = −1. Thus v1 (t) = e6t and v2 (t) = e−2t are solution vectors of the system. 281 . we ﬁnd corresponding eigenvectors.  det(A − λI) = Now det(A − λI) = 0 implies λ3 − 2λ2 − λ + 2 = (λ − 2)(λ − 1)(λ + 1) = 0. The Wronskian of v1 and v2 is: W (t) = e6t 5e−2t e6t −3e−2t = −8e4t = 0. Next. x2 arbitrary.

inverse. 2  1   v3 (t) = e−t  2  2    3   v2(t) = et  −1  . 2 7 2 1 C3  or Note: The matrix of coeﬃcients here is the fundamental matrix evaluated at t = 0 Using the solution method of your choice (row reduction.As you can check. The solution of the initial-value problem is       1 3 1       x(t) = 3e2t  −1  − et  −1  + e−t  2  . C3 = 1. 7 since distinct exponential vector-functions are linearly independent (calculate the Wronskian to verify) and       1 3 1    2t  t −t  x(t) = C1 e  −1  + C2 e  −1  + C3 e  2  2 7 2 is the general solution. To ﬁnd the solution vector satisfying the initial condition. solve   1   C1 v1(0) + C2 v2 (0) + C3 v3 (0) =  0  1 which is:        1 3 1 1         C1  −1  + C2  −1  + C3  2  =  0  2 7 2 1      1 3 1 C1 1       −1 −1 2   C2  =  0  . 2 7 2 Two Diﬃculties There are two diﬃculties that can arise: 282 . corresponding eigenvectors are:     1 3     c1 =  −1  . Cramer’s rule). C2 = −1. c2 =  −1  . 2 7 A fundamental set of solution  1 2t  v1 (t) = e  −1 2 vectors is:   . the solution is: C1 = 3.   1   c3 =  2  .

and they are real-valued vector functions. Now e(1+2i)t 1 1 +i 2 0 1 1 = +i 2 0 2 0 = 2 0 + sin 2t 1 1 . c2 = 1 − 2i 1 = 1 1 +i 2 0 . then λ = a − bi (the complex conjugate of λ) is also an eigenvalue of A and u − i v is a corresponding eigenvector. If λ = a + bi is a complex eigenvalue of A with corresponding (complex) eigenvector u + i v. et (cos 2t + i sin 2t) et cos 2t 1 1 − sin 2t + i et cos 2t 283 . The corresponding linearly independent complex solutions of x = Ax are: w1 (t) = e(a+bi)t (u + i v) = eat(cos bt + i sin bt)(u + i v) = eat [(cos bt u − sin bt v) + i(cos bt v + sin bt u)] w2 (t) = e(a−bi)t (u − i v) = eat(cos bt − i sin bt)(u − i v) = eat [(cos bt u − sin bt v) − i(cos bt v + sin bt u)] Now x1 (t) = and x2 (t) = 1 2i 1 2 [w1 (t) + w2 (t)] = eat(cos bt u − sin bt v) [w1 (t) − w2 (t)] = eat(cos bt v + sin bt u) are linearly independent solutions of the system.1.) Example 5. Note that x1 and x2 are simply the real and imaginary parts of w1 (or of w2 ). (Review Section 3. Determine the general solution of x = 2 −5 1 0 x. The corresponding eigenvectors are: c1 = 1 + 2i 1 = 1 1 +i 2 0 .3 where you were shown how to convert complex exponential solutions into real-valued solutions involving sine and cosine. λ2 = 1 − 2i. The eigenvalues are: λ1 = 1 + 2i. A has complex eigenvalues. SOLUTION det(A − λI) = 2 − λ −5 1 −λ = λ2 − 2λ + 5.

The eigenvalues are: λ1 = 2. v2(t) = et cos 2t The general solution of the system is x(t) = C1 et cos 2t 1 1 − sin 2t 2 0 + sin 2t . 2 0 0 2 284  . + C2 et cos 2t 2 0 + sin 2t 1 1 . The corresponding eigenvectors are:         1 −5 + 3i −5 3         c1 =  0  . 2 2 0 Now    −5 3    (2+3i)t  e  3  + i  3  = 2 0     −5 3     2t e (cos 3t + i sin 3t)  3  + i  3  = 2 0           −5 3 3 −5           e2t cos 3t  3  − sin 3t  3  + i e2t cos 3t  3  + sin 3t  3  . Example 6. λ3 = 2 − 3i. c2 =  3 + 3i  =  3  + i  3  −1 2 2 0       −5 − 3i −5 3       c3 =  3 − 3i  =  3  − i  3  . Determine a fundamental set of solution vectors of   1 −4 −1   x = 3 2 3 x.A fundamental set of solution vectors for the system is: v1(t) = et cos 2t 1 1 2 0 − sin 2t 2 0 1 1 . 1 1 3 SOLUTION det(A − λI) = 1−λ −4 −1 3 2−λ 3 1 1 3−λ = −λ3 + 6λ2 − 21λ + 26 = −(λ − 2)(λ2 − 4λ + 13). λ2 = 2 + 3i.

det(A − λI) = The eigenvalues are: λ1 = 4. 6 −6 4 SOLUTION 1−λ −3 3 3 −5 − λ 3 6 −6 4 − λ = −λ3 + 12λ − 16 = −(λ − 4)(λ + 2)2. v2 (t) = e cos 3t  3  − sin 3t  3  . For example.  1   As you can check. A has an eigenvalue of multiplicity greater than 1 We’ll treat the case where A has an eigenvalue of multiplicity 2. Example 7. c3 arbitrary.  0 c3  The augmented matrix for this system of equations is     3 −3 3 0 1 −1 1 0     0 0 0   3 −3 3 0  which row reduces to  0 6 −6 6 0 0 0 0 0 The solutions of this system are: c1 = c2 − c3 . 2 We’ll carry out the details involved in ﬁnding ble” eigenvalue −2.A fundamental set of solution vectors for the system is:        1 −5 3      2t  2t  v1(t) = e  0  . c2. Determine a fundamental set of solution vectors of   1 −3 3   x =  3 −5 3 x. 0 2   2. λ2 = λ3 = −2. −1 2 0    3 −5      v3(t) = e2t cos 3t  3  + sin 3t  3  . an eigenvector corresponding to λ1 = 4 is c1 =  1 . 285 . We can assign values to c2 and c3 independently and obtain two linearly independent eigenvectors.  3 −3 3  [A − (−2)I]c =  3 −3 3 6 −6 6 an eigenvector corresponding to the “dou    c1 0     c2  =  0  .

2    set of solutions for the diﬀerential system  −3 3  −5 3 x −6 4  1   v3(t) = e−2t  0  . −1    1   v2 (t) = e−2t  1  . 9 We’ll carry out the details involved in ﬁnding an eigenvector corresponding to the “double” eigenvalue −2.  1   As you can check. an eigenvector corresponding to λ1 = 3 is c1 =  3 . 0  0 1 0   Example 8. Based on our work above. 1   setting c2 = 1. 12 8 1 0 c3 The augmented matrix for this  2 1 0 0   0 2 1 0 12 8 1 0 system of equations is     2 1 0 0   which row reduces to  0 2 1 0  0 0 0 0 286   .      2 1 0 c1 0      [A − (−2)I]c =  0 2 1   c2  =  0  . a fundamental  1  x = 3 6 is  1   v1 (t) = e4t  1  . The important thing to note here is that this eigenvalue of multiplicity 2 produced two independent eigenvectors. Let A =  0 0 1  12 8 −1 −λ 1 0 0 −λ 1 12 8 −1 − λ det(A − λI) = = −λ3 − λ2 + 8λ − 12 = −(λ − 3)(λ + 2)2. we get the eigenvector c2 =  1 . any choice which produces two independent vectors will do. c3 = 0. Reversing the roles. we set 0   1   c2 = 0. The eigenvalues are: λ1 = 3. Clearly c2 and c3 are linearly −1 independent. You should understand that there is nothing magic about our two choices for c2. c3. c3 = −1 to get the eigenvector c3 =  0 . λ2 = λ3 = −2.

c3 arbitrary. The characteristic equation is r3 + r2 − 8r − 12 = (r − 3)(r + 2)2 = 0 (compare with det(A − λI). Suppose that we were asked to ﬁnd a fundamental set of solutions of the linear diﬀerential system   0 1 0   x = 0 0 1 x. we have two independent solutions     1 1     v1 = e3t  3  and v2 = e−2t  −2  . The correspondence between these solutions and the solution vectors we found above should be clear:     1 1     e3t −→ e3t  3  . y3 = te−2t }. the “double” eigenvalue here has only one (independent) eigenvector. 4 In contrast to the preceding example. the solution y3 (t) = te−2t of the equation produces the solution vector         0 1 y3 (t) te−2t         v3(t) =  y3 (t)  =  e−2t − 2te−2t  = e−2t  1  + te−2t  −2  −4 4 y3 (t) −4e−2t − 4te−2t of the corresponding system. 287 . 12 8 −1 By our work above. it is equivalent to the third order equation y + y − 8y − 12y = 0. c2 = − 1 c3. y2 = e−2t . Here there is only 4 2 one parameter and so we’ll get only one eigenvector. Setting c3 = 4 we get the eigenvector   1   c2 =  −2 .) The roots are: r1 = 3. Our system has a special form. r2 = r3 = −2 and a fundamental set of solutions is {y1 = e3t .2. 9 4 We need a third solution which is independent of these two.The solutions of this system are c1 = 1 c3. e−2t −→ e−2t  −2  . 9 4 As we saw in Section 6.

The vector w is called a generalized eigenvector corresponding to the eigenvalue λ. . c1 and c2 . You can check that v3 is independent of v1 and v2. 1 −1 1 3 x. General Result Given the linear diﬀerential system x = Ax. λ has only one (independent) eigenvector c. The corresponding solution of the system has the form v3 (t) = e−2t w + te−2t c2 where c2 is the eigenvector corresponding to −2 and w satisﬁes [A − (−2)I]w = c2 . Then exactly one of the following holds: 1. Find a fundamental set of solution vectors for x = SOLUTION det(A − λI) = 1−λ −1 1 3−λ 288 = λ2 − 4λ + 4 = (λ − 2)2. Therefore. v2. 2. Then a linearly independent pair of solution vectors corresponding to λ are: v1(t) = eλt c and v2 (t) = eλt w + teλt c where w is a vector that satisﬁes (A − λI)w = c. v3 are a fundamental set of solutions of the system. Corresponding linearly independent solution vectors of the diﬀerential system are v1(t) = eλt c1 and v2(t) = eλtc2 . 12 8 1 −4 4 A − (−2)I “maps” w onto the eigenvector c2 . λ has two linearly independent eigenvectors. the solution vectors v1.   0   The question is: What is the signiﬁcance of the vector w =  1 ? How is it related −4 to the eigenvalue −2 which generated it. Example 9. and to the corresponding eigenvector? Let’s look at [A − (−2)I]w = [A + 2I]w:      2 1 0 0 1      [A + 2I]w =  0 2 1   1  =  −2  = c2 .The appearance of the te−2t c2 term should not be unexpected since we know that a characteristic root r of multiplicity 2 produces a solution of the form tert . Suppose that A has an eigenvalue λ of multiplicity 2.

A second solution. The solutions of this system are z1 = −1 − z2 . we get z1 = −1 and w = . independent of v1 is v2 = e2tw + te2t c where w is a solution of (A − 2I)z = c: −1 −1 z1 1 (A − 2I)z = = . Let A =  2 2 −1 . v2 (t) = e2t −1 0 + te2t 1 −1 are a fundamental set of solutions of the system. Find a fundamental set of solutions of 2 2 0 x = Ax SOLUTION 289 . c2 arbitrary. there is only one eigenvector. Characteristic vectors: (A − 2I)c = −1 −1 1 1 −→ c1 c2 = 0 0 . −1 −1 0 1 1 0 1 1 0 0 0 0 The solutions are: c1 = −c2 . 1 1 −1 z2 −1 −1 1 1 1 1 −→ 1 1 −1 0 0 0 . Setting c2 = −1. z2 arbitrary.   3 1 −1   Example 10. If we choose z2 = 0 (any −1 choice for z2 will do). . The solutions v1(t) = e2t 1 −1 . Thus 0 v2 (t) = e2t −1 0 + te2t 1 −1 is a solution of the system independent of v1. −1 The vector v1 = e2t 1 −1 is a solution of the system.Characteristic values: λ1 = λ2 = 2. 1 we get c = .

     1 1 −1 c1 0      [A − 2)I]c =  2 0 −1   c2  =  0  . We know that this solution has the form v3(t) = e2tw + te2t c2 where w is a solution of (A − 2I)z = c2 . That   1 1 −1 z1    2 0 −1   z2 2 2 −2 z3 The augmented matrix is   1 1 −1 1    2 0 −1 1  2 2 −2 2 is:   1     =  1 . 2 We’ll show the details involved in ﬁnding an eigenvector (or eigenvectors) corresponding to the “double” eigenvalue 2. c3 = 2c2. λ2 = λ3 = 2. v2 = e  1  . 2 2 We need another solution corresponding to the eigenvalue 2. two independent solutions of the given linear diﬀerential system are     1 1   t 2t  v1 = e  0  . we get c2 =  1 . The eigenvalues are: λ1 = 1. 2 Thus. one which is independent of v2.  1   An eigenvector corresponding to λ1 = 1 is c1 =  0  (check this).det(A − λI) = 3−λ 1 −1 2 2 − λ −1 2 2 −λ = −λ3 + 5λ2 − 8λ + 4 = −(λ − 1)(λ − 2)2. 2 2 −2 0 c3 The augmented matrix for this system of equations is   1 1 −1 0    2 0 −1 0  which row reduces to 2 2 −2 0   1 1 −1 0   1 0   0 −2 0 0 0 0  The solutions of this system are: c1 = −c2 + c3. There only is  1   one eigenvector corresponding to the eigenvalue 2. Setting c2 = 1. 2  which row reduces to  1 1 1 −1   1 −1   0 −2 0 0 0 0  290 . c2 arbitrary.

. Thus −1     0 1   2t  2t  v3 = e  0  + te  1  −1 2 is a solution of the system independent of v2 (and of v1 ). . If an initial condition is given. x(0) = .The solutions of this system are z3 = −1 + 2z2 . z2 arbitrary. . −2 4 1 1 −3 2 1 −2 2 4 −2 −2 −1 2 −1 −3 −1 1 −4 3 5 2 −2 1 3 2 −8 −5 . 2. . z2 = 0. Exercises 6.4 Find the general solution of the system x = Ax where A is the given matrix. 291 . v2 = e  1  . 1 3 3. 6. 5. The solutions         1 1 0 1     t 2t  2t  2t  v1 = e  0  . also ﬁnd the solution that satisﬁes the condition. 4. . z1 = 1 − z2 + z3 = 1 − z2 + (−1 + 2z2 ) = z2 . . z3 = −1 and 0   w =  0 . v3 = e  0  + te  1  2 2 −1 2 are a fundamental set of solutions of the system. If we choosez2 = 0 (any choice for z2 will do). 1. 3 −2 7. x(0) = . we get z1 = 0.

Hint: 2 is an eigenvalue. 3 −5 −3   8 2 1   15. Hint: 3 is an eigenvalue.  1 7 3  Hint: 5 is an eigenvalue. 1 0 1   0 4 0   13.  −1 1 −4 −5 . x(0) =  1 . 0 1 3 2     1 −3 1 −1     17. Hint: −1 is an eigenvalue.  4 −5 4 .  1 −2 3 .  −2 2 1 . x(0) =  2 . Hint: 2 is an eigenvalue. 4 −4 3 −1   −3 0 −3   12. −8 8 0 −3   −2 2 1   10. x(0) =  3 .  0 −1 0 .  −1 4 2 . 1 1 6     −1 1 1 2     16. Hint: −2 is an eigenvalue. Hint: 3 is an eigenvalue. 1 0 3 292 . −2 1 3   0 0 −2   19.8. x(0) =  0 .  −1 0 0 .    −1 3 0 −1     9. 2 −2 −1     3 −4 4 2     11.  1 2 1 .  −7 5 −1 . Hint: 2 is an eigenvalue. 1 4 −1   5 −5 −5   14.  0 1 0 . Hint: 4 is an eigenvalue.  1 1 −1 . Hint: 0 is an eigenvalue. −1 −6 6 −2   0 1 1   18. Hint: 2 is an eigenvalue.

v2(t) = eλtc2 . Given the diﬀerential system x = Ax.  10 −9 2 . Then three linearly independent solution vectors of the system corresponding to λ are: v1(t) = eλtc1 . Then three linearly independent solutions of the system have the form: v1 = eλtc. x(0) =  −2 .  0 3 −1 . −2 −1 1   8 −6 1   24. c3 . Hint: 3 is an eigenvalue. v2(t) = eλt c2 A third solution. Hint: 4 is an eigenvalue.  2 −1 −3 . 2 −1 1   20. Then two linearly independent solutions of the system corresponding to λ are: v1(t) = eλt c1 . Suppose that λ is an eigenvalue of A of multiplicity 3. 2.  3 −3 4 . Hint: 6 is an eigenvalue. Then exactly one of the following holds: 1. 0 −1 3     1 2 −1 −1     21. 0 0 −1 1   −2 1 −1   22. 3 −1 2   2 2 −6   23. Hint: 2 is an eigenvalue. 3. 10 −7 0  Appendix: Eigenvalues of Multiplicity 3. v3(t) = eλt c3 . c2 . c2 . Hint: 1 is an eigenvalue. v3(t) = eλtz + teλt w + t2 eλt c where (A − λI)w = c and (A − λI)z = w. independent of v1 and v2 has the form v3(t) = eλtw + teλt v where v is an eigenvector corresponding to λ and (A − λI)w = v.  2 1 −1 . λ has three linearly independent eigenvectors c1 . 293 . λ has two linearly independent eigenvectors c1 . λ has only one (independent) eigenvector c. v2 = eλt w + tgeλtc.

f.2 that a linear nonhomgeneous diﬀerential system is a system of the form x1 = a11(t)x1 + a12 (t)x2 + · · · + a1n (t)xn (t) + b1(t) x2 = . b2 (t). . .     b(t) =    b1(t) b2(t) .1. Theorem 1.  . (C. . . If z1 (t) and z2 (t) are solutions of (N). a1n (t). .   Then (N) can be written in the vector-matrix form x = A(t) x + b(t). . Let A(t) be the n × n matrix  a11 (t) a12(t) · · · a1n (t)   a21 (t) a22(t) · · · a2n (t) A(t) =  .5 Nonhomogeneous Systems The treatment in this section parallels exactly the treatments of linear nonhomogeneous equations in Sections 3. . . THEOREM 1.  .  . bn (t)   . The corresponding linear homogeneous system x = A(t) x is called the reduced system of (N). . . a21(t)x1 + a22 (t)x2 + · · · + a2n (t)xn (t) + b2(t) .1. (H) (N) . . . bn (t) are continuous functions on some interval I and the functions bi(t) are not all identically zero on I. . . there is at least one point a ∈ I and at least one function bi(t) such that bi (a) = 0. (N) xn = an1 (t)x1 + an2 (t)x2 + · · · + ann (t)xn (t) + bn (t) where a11 (t). .6. xn            .) Proof: Since z1 and z2 are solutions of (N). . a12 (t). . Section 3. that is. . a21(t). . . Section 6. . and Theorem ??. b1(t). . then x(t) = z1 (t) − z2 (t) is a solution of (H). ann (t). z1(t) = A(t)z1 (t) + b(t) and 294 z2 (t) = A(t)z2 (t) + b(t)).  an1 (t) an2 (t) · · · ann (t) and let x and b(t) be the vectors  x1   x2 x= . . .4 and 6. Recall from Section 6.4. .

x2(t). THEOREM 2. Another way to look at (1) is: The general solution of (N) consists of the general solution of the reduced equation (H) plus a particular solution of (N): x general solution of (N) = C1 x1(t) + C2 x2(t) + · · · + Cn xn (t) + general solution of (H) z(t). cn such that u(t) − z(t) = c1x1(t) + c2x2(t) + · · · + cn xn (t). Section 3. . Theorem 2. . . u(t) − z(t) is a solution of the reduced system (H). then x(t) = C1 x1(t) + C2 x2(t) + · · · + Cn xn (t) + z(t) (1) represents the set of all solutions of (N). . c2. cn such that u(t) = c1 x1(t) + c2 x2(t) + · · · + cn xn (t) + z(t) (C. . Since x1(t). . particular solution of (N) 295 . Therefore u(t) = c1x1 (t) + c2x2 (t) + · · · + cn xn (t) + z(t).f. . . x(t) = z1 (t) − z2 (t) is a solution of (H). x2 (t).4. x2(t).) Proof: Let u = u(t) be any solution of (N). (1) is the general solution of (N). . .1. That is. If u = u(t) is any solution of (N). there exist constants c1 . . By Theorem 1. Then x (t) = z1 (t) − z2 (t) = [A(t)z1 (t) + b(t)] − [A(t)z2 (t) + b(t)] = A(t) [z1 (t) − z2 (t)] = A(t)x(t). then there exist constants c1. c2.Let x(t) = z1 (t) − z2 (t). Section 6. According to Theorem 2. . . xn(t) are linearly independent solutions of the reduced system (H) and z = z(t) is a particular solution of (N). if x1(t). Thus. and Theorem ??. . xn(t) be a fundamental set of solutions the reduced system (H) and let z = z(t) be a particular solution of (N). Let x1(t). . . . . Our next theorem gives the “structure” of the set of solutions of (N). . . xn (t) are n linearly independent solutions of (H).

and so we can solve for u : u (t) = V −1 (t)b(t) which implies u(t) = Finally. .  . x2 (t). . . (2) V −1 (t)b(t) dt V −1 (t)b(t) dt. we have z (t) = A(t)z(t) + b(t) = A(t)V (t)u(t) + b(t).3. By Theorem 2. . you were asked to show that V satisﬁes the matrix diﬀerential system X = A(t)X. V (t)C where C =   . the general solution of (N) is given by x(t) = V (t)C + V (t) V −1 (t)b(t) dt. Since V is a fundamental matrix. Compare this result with the general solution of ﬁrst order linear diﬀerential equation given by equation (2) in Section 2. .  Cn In Exercises 6.3. x2.Variation of Parameters Let x1(t). Diﬀerentiating z. Then. we have z(t) = V (t) is a solution of (N). it is nonsingular. xn(t) be a fundamental set of solutions of (H) and let V (t) be the corresponding fundamental matrix (V is the n×n matrix whose columns are x1. from which it follows that V (t)u (t) = b(t). That is.1 296 . as we saw in Section 6. the general solution of (H) can be written   C1    C2   . xn ). V (t) = A(t)V (t). . Since z is to satisfy (N). Problem 19. Therefore V (t)u (t) + A(t)V (t)u(t) = A(t)V (t)u(t) + b(t). . . . We replace the constant vector C by a vector function u(t) which is to be determined so that z(t) = V (t)u(t) is a solution of (N). we get z (t) = [V (t)u(t)] = V (t)u (t) + V (t)u(t) = V (t)u (t) + A(t)V (t)u(t).

The general solution of the given nonhomogeneous system is x(t) = t t ln t 1 1 + ln t C1 C2 + t(ln t2 2 ln t + (ln t)2 . Substituting t = a in (3) gives xo = V (a)C which implies C = V −1 (a)xo. 297 . t t ln t 1 1 + ln t . SOLUTION You can verify that v1(t) = mental set of solutions of the reduced system x = The corresponding fundamental matrix is V (t) = The inverse of V is given by V −1 (t) = and v2(t) = t ln t 1 + ln t is a funda- 0 −t −2 1 t −1 x. t−1 + t−1 ln t − ln t −t−1 1 . the general solution of (1) given by (2) can be written as t x(t) = V (t)C + V (t) a V −1 (s)b(s) ds. By ﬁxing a point a on the interval I. We are now ready to calculate z using the result given above: z = t t ln t 1 1 + ln t t t ln t 1 1 + ln t t t ln t 1 1 + ln t t−1 + t−1 ln t − ln t −t−1 1 −2t−1 ln t 2t−1 −(ln t)2 2 ln t = dt t(ln t2 2 ln t + (ln t)2 0 2t −1 dt = = . t ∈ I.Example 1. (3) This form is useful in solving system (1) subject to an initial condition x(a) = xo . Find the general solution of the nonhomogeneous linear diﬀerential system x = 0 −t−2 1 t−1 t 1 x+ 0 2t−1 .

A(t) = .5 Find the general solution of the system x = A(t)x + b(t) where A and b are given. b(t) = 8. A(t) = . b(t) =   1 1 0 et    2t 9. b(t) =  e 0 −1 2 et 298 . A(t) = b(t) = sec t 0 et cos t et sin t 4t2 1       7. A(t) =  0 0 1 . 1. A(t) = b(t) = 3. b(t) = 3t e− t 0 4t 1 2t 2 cos t 2 sin t 3t e t − 2. (4) Exercises 6. A(t) = −3 1 2 −4 2 −1 3 −2 2 2 −3 −3 3 2 −4 −3 −3 1 2 −4 0 −1 1 0 1 −1 1 1 t 3t2 −1 0 t  .Therefore the solution of the initial-value problem x = A(t) x x(a) = xo is given by x(t) = V (t)V −1 (a)xo + V (t) a t V −1 (s)b(s) ds. A(t) =  1 1 0 . b(t) = 5. b(t) =  e te3t 0 0 3    1 −1 1 0    t 10. b(t) = 6. . A(t) = . A(t) = . b(t) = 4. A(t) = . .

x = 3 −1 −1 3 3 −2 1 0 x+ 4e2t 4e4t −2e−t −2e−t . x = x+ .Solve the initial-value problem. x(0) = 1 1 2 −1 12. x(0) = 299 . 11.

6 Direction Fields and Phase Planes There are many types of diﬀerential equations which do not have solutions which can be easily written in terms of elementary functions such as exponentials.4. sines and cosine. The equality implies that the graph of a solution of this equation in x − y plane. Indeed.1. x2) (1) (2) x2 = g(x1. The following ﬁgure shows this solution. or even as integrals of such functions. At each point in the collection we draw a vector emanating 300 . we can sketch the solutions of the diﬀerential equation (1–2) by selecting a number of points in the plane. x2(t0 )) is the tangent vector to this parametric curve at the point (x1(t0 ). In this section we discuss the basics of phase plane analysis. the vector (x1 (t0 ). Drawing slope ﬁelds for x1 and x2 separately will not work. and think about it geometrically. Indeed. since each slope ﬁeld depends on both variables. However. must have slope equal to f (y) at the point (x. x2). this is what we expect. Therefore. Let us again consider the diﬀerential equation y = f (y). when these equations are of ﬁrst or second order one can still gain a good understanding of the behavior of their solutions using geometric methods. parametrically deﬁnes a curve in the x1 − x2 plane. y). x2(t0 )). along with the solution satisfying y(0) = 0.6. an extension of the method of slope ﬁelds discussed in Section 2. for the diﬀerential equation y = 2y(1 − y) the slope of the solution at all points for which y = 1 equal to 0. Fortunately. note that any solution of this equation. (x1 (t). Let us now turn to autonomous diﬀerential equations in two variables x1 = f (x1. since the solution satisfying the initial condition y(0) = 1 is the constant solution y(x) = 1. For instance. x2(t)).

x2(0)) = (−0. For instance.5 1. x2)) emanates from the point (x1 .5 1. The collection of these vectors is called a vector ﬁeld. 0.0 Also shown are two solutions of the diﬀerential equation. As we will see shortly. 1) corresponding to the positive eigenvalue. then the solutions of the equation will describe its oscillation around the equilibrium position at y = 0.3.5. and the other with (x1(0).2. so that the vector (f (x1. x2). x2).8 in the horizontal and 3x1 + 3x2 = 3 × 0.1 units apart in the horizontal and vertical direction. and x2 = 0. Let us next consider the following equation y + y + y = 0. considered in Example 2 of section 6.from the point.25).3 = 2. −0. x2(0)) = (0. g(x1.5 we attach the vector with components x1 + 5x2 = 0. measures the amount of damping.0 0. This equation is known as the linear damped pendulum. the vector (−3.5). −0.6) emanates from the point (−0.0 0.5 1.3 + 5 × 0. one with initial condition (x1(0). Example 1: Let us start with the constant coeﬃcient system x1 = x1 + 5x2 x2 = 3x1 + 3x2. Similarly.7.1). 301 .0 0. The length of all vectors is scaled by an equal factor so that they all ﬁt in the ﬁgure.2. The ﬁgure below shows vectors attached to points spaced 0. to the point with coordinates x1 = 0.4.5 0.5 = 2. The arrows point in the direction in which the solutions are traversed. You can also see that the solutions diverge from the origin in the direction of the eigenvector (1.5 + 3 × 0. In practice we may have to scale the length of the vectors by a constant factor. If we think of y as the angular displacement from the resting position and y as the angular velocity of the pendulum.4 in the vertical direction. −3. 1.

let us look at the phase plane.2.0 0.5 1.1. In the ﬁgure below = 0. 1.5 0.0 0. x2 = 2π/3. and the initial conditions are x1 = 0.5 1. (3) (4) This is a linear system. it is much more diﬃcult to solve. and see what happens as we vary . a phase plane analysis can be easily performed in this case as well. As you could guess. then we obtain the following pair of equations x1 = x2 x2 = − x2 − x1 . 302 .0 0. In the ﬁgure below you will see the vector ﬁeld and solutions with initial condition x1 = x2 = 0. Instead.5 0.5 1.5 0.4. For larger displacements it is necessary to use the nonlinear equation x 1 = x2 x2 = − x2 − sin(x1).0 The linear system (3-4) only describes the behavior of the pendulum accurately when the displacement from the rest position is small.5 0. and we can solve it using the methods of Section 6. and x2 = y .If we let x1 = y.0 1. a pendulum that is subject to more damping will oscillate fewer times before reaching the equilibrium at the origin.0 1.5 1. while on the right = 0. in both cases.05. In the left ﬁgure = 0.8. (5) (6) Although this equation may not look much more complicated than the previous one.0 0. However.0 1.

> 0 303 .3 2 1 3 2 1 1 2 3 1 2 3 Exercises 1. x2 = C2 cos −C1 sin t. Solve the system for the damped pendulum (3-4) when = 0.0 3.5 1. Show that when equations oscillate with diminishing amplitude. Sketch the phase plane for the following systems (a) x y = 1 = y (b) x y = x = y (c) x y = x2 − 1 = x−y 2. What happens to the amplitude of the solutions as the pendulum oscillates in this case? Solution: In this case the solutions have the form x1(t) = C1 cos t + C2 sin t.0 0. Sketch the vector ﬁeld and the solutions. Solve the equation for the damped linear pendulum (3-4). They oscillate forever with constant amplitude. 1.5 1.0 0.5 1.0 0.5 0.